
Explainable/Interpretable Machine Learning for Biomedical Sensing, Sensor Data Fusion and Diagnostics

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: closed (31 October 2023) | Viewed by 17086

Special Issue Editors


Guest Editor
1. Department of Computer Science, University of Applied Sciences and Arts Dortmund (FH Dortmund), 44227 Dortmund, Germany
2. Institute for Medical Informatics, Biometry and Epidemiology (IMIBE), University Hospital Essen, 45147 Essen, Germany
Interests: machine learning; computational intelligence; biomedical applications; interpretable machine learning; natural language processing (NLP); computer vision; augmented reality; information extraction; information retrieval; image processing; biostatistics; bioinformatics; mathematics for computer science

Guest Editor
Faculty of Information Technology, Dortmund University of Applied Sciences and Arts, 44139 Dortmund, Germany
Interests: biomedical applications; biomedical instrumentation; measurement technology; sensing; signal processing; image processing; machine learning; biophotonics; modeling and simulation

Special Issue Information

Dear Colleagues,

Over recent years, many biomedical applications related to sensing, sensor data fusion and diagnostics have successfully employed machine learning (ML) models; some of these applications only became feasible thanks to powerful ML models. In the future, the relevance of ML for such biomedical applications will increase further.

Modern ML methods such as deep learning approaches are often black-box models. Despite their high performance, which in some domains exceeds that of established processing techniques and even of human experts, users are often reluctant to rely on them. This lack of trust is particularly problematic in healthcare applications and during the certification of medical devices.

As a solution to this problem, explainable or interpretable machine learning (IML) models, as well as methods for interpreting existing models, have been proposed. Some classical machine learning models, such as decision trees or logistic regression, are inherently interpretable, at least when used for problems with a small number of features. For models that do not inherently offer interpretability, specific methods can support interpretation, e.g., through visualization or rule-based expressions. Beyond increasing trust, IML can help to reveal previously unknown pathophysiological mechanisms. The knowledge gained can flow back into enhanced sensing and diagnostics, which also makes IML of high interest.
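To illustrate the kind of model-agnostic interpretation method referred to above, the following is a minimal sketch of permutation feature importance: the drop in a model's score when one feature is shuffled indicates how much the model relies on that feature. The model, data and function names here are invented toy examples for illustration, not taken from any work in this Special Issue.

```python
import numpy as np

def permutation_importance(predict, X, y, score, n_repeats=10, seed=0):
    """Model-agnostic importance: mean drop in score when one feature is shuffled."""
    rng = np.random.default_rng(seed)
    base = score(y, predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])          # destroy the feature/target relationship
            drops.append(base - score(y, predict(Xp)))
        importances[j] = np.mean(drops)
    return importances

# Toy setting: the label depends only on feature 0.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)
predict = lambda X: (X[:, 0] > 0).astype(int)   # toy "model" that uses feature 0 only
accuracy = lambda y, p: np.mean(y == p)

imp = permutation_importance(predict, X, y, accuracy)
# imp[0] is large; imp[1] and imp[2] are ~0, since the model ignores those features
```

Because the method only needs a `predict` function, it applies equally to black-box deep models and to classical ones, which is what makes it attractive in the biomedical settings described above.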

For this Special Issue, we seek original contributions in the fields of biomedical sensing, diagnostics and sensor data fusion related to explainable/interpretable machine learning. This includes work focusing on:

- The interpretation of applied/existing ML models

- Improvements that can be or even have been obtained by IML

- Methods to foster interpretability

- Theoretical analyses with respect to IML

Review articles with a focus on explainability/interpretability are welcome as well.

If you are planning a contribution but are unsure whether it falls within the scope of the Special Issue, feel free to contact the Guest Editors.

Prof. Dr. Christoph M. Friedrich
Prof. Dr. Sebastian Zaunseder
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Interpretable machine learning (IML)
  • Explainable machine learning
  • LIME
  • Shapley values
  • Feature importance
  • Knowledge integration
  • Visual interpretation support
  • Transparent ML models
  • Global/local explanations
  • IML in regulatory contexts
  • Deep learning
  • Classical machine learning methods
  • Model-agnostic methods
  • Rule-based models
  • Feature interactions
  • Ensemble methods
  • Trusted models
  • Robustness of models
  • Biomedical applications
  • Diagnostics
  • Risk assessment
  • Healthcare
  • Wearable sensors
  • Internet of Things (IoT)
  • Multisensor fusion
  • Data fusion
  • Anomaly detection
  • Audio processing
  • Computer vision
  • Image processing
  • Signal processing

Published Papers (8 papers)


Research

24 pages, 7518 KiB  
Article
A Robust Ensemble of Convolutional Neural Networks for the Detection of Monkeypox Disease from Skin Images
by Luis Muñoz-Saavedra, Elena Escobar-Linero, Javier Civit-Masot, Francisco Luna-Perejón, Antón Civit and Manuel Domínguez-Morales
Sensors 2023, 23(16), 7134; https://doi.org/10.3390/s23167134 - 12 Aug 2023
Cited by 4 | Viewed by 1106
Abstract
Monkeypox is a smallpox-like disease that was declared a global health emergency in July 2022. Because of this resemblance, it is not easy to distinguish a monkeypox rash from other similar diseases; however, due to the novelty of this disease, there are no widely used databases for this purpose with which to develop image-based classification algorithms. Therefore, three significant contributions are proposed in this work: first, the development of a publicly available dataset of monkeypox images; second, the development of a classification system based on convolutional neural networks in order to automatically distinguish monkeypox marks from those produced by other diseases; and, finally, the use of explainable AI tools for ensemble networks. For point 1, free images of monkeypox cases and other diseases have been searched in government databases and processed until we are left with only a section of the skin of the patients in each case. For point 2, various pre-trained models were used as classifiers and, in the second instance, combinations of these were used to form ensembles. And, for point 3, this is the first documented time that an explainable AI technique (like GradCAM) is applied to the results of ensemble networks. Among all the tests, the accuracy reaches 93% in the case of single pre-trained networks, and up to 98% using an ensemble of three networks (ResNet50, EfficientNetB0, and MobileNetV2). Comparing these results with previous work, a substantial improvement in classification accuracy is observed. Full article

21 pages, 9724 KiB  
Article
Investigation of Phase Shifts Using AUC Diagrams: Application to Differential Diagnosis of Parkinson’s Disease and Essential Tremor
by Olga S. Sushkova, Alexei A. Morozov, Ivan A. Kershner, Margarita N. Khokhlova, Alexandra V. Gabova, Alexei V. Karabanov, Larisa A. Chigaleichick and Sergei N. Illarioshkin
Sensors 2023, 23(3), 1531; https://doi.org/10.3390/s23031531 - 30 Jan 2023
Cited by 2 | Viewed by 2046
Abstract
This study was motivated by the well-known problem of the differential diagnosis of Parkinson’s disease and essential tremor using the phase shift between the tremor signals in the antagonist muscles of patients. Different phase shifts are typical for different diseases; however, it remains unclear how this parameter can be used for clinical diagnosis. Neurophysiological papers have reported different estimations of the accuracy of this parameter, which varies from insufficient to 100%. To address this issue, we developed special types of area under the ROC curve (AUC) diagrams and used them to analyze the phase shift. Different phase estimations, including the Hilbert instantaneous phase and the cross-wavelet spectrum mean phase, were applied. The results of the investigation of the clinical data revealed several regularities with opposite directions in the phase shift of the electromyographic signals in patients with Parkinson’s disease and essential tremor. The detected regularities provide insights into the contradictory results reported in the literature. Moreover, the developed AUC diagrams show the potential for the investigation of neurodegenerative diseases related to the hyperkinetic movements of the extremities and the creation of high-accuracy methods of clinical diagnosis. Full article

21 pages, 1217 KiB  
Article
Structure and Base Analysis of Receptive Field Neural Networks in a Character Recognition Task
by Jozef Goga, Radoslav Vargic, Jarmila Pavlovicova, Slavomir Kajan and Milos Oravec
Sensors 2022, 22(24), 9743; https://doi.org/10.3390/s22249743 - 12 Dec 2022
Viewed by 1728
Abstract
This paper explores extensions and restrictions of shallow convolutional neural networks with fixed kernels trained with a limited number of training samples. We extend the work recently done in research on Receptive Field Neural Networks (RFNN) and show their behaviour using different bases and step-by-step changes within the network architecture. To ensure the reproducibility of the results, we simplified the baseline RFNN architecture to a single-layer CNN network and introduced a deterministic methodology for RFNN training and evaluation. This methodology enabled us to evaluate the significance of changes using the (recently widely used in neural networks) Bayesian comparison. The results indicate that a change in the base may have less of an effect on the results than re-training using another seed. We show that the simplified network with tested bases has similar performance to the chosen baseline RFNN architecture. The data also show the positive impact of energy normalization of used filters, which improves the classification accuracy, even when using randomly initialized filters. Full article

17 pages, 3194 KiB  
Article
AI-Based Detection of Aspiration for Video-Endoscopy with Visual Aids in Meaningful Frames to Interpret the Model Outcome
by Jürgen Konradi, Milla Zajber, Ulrich Betz, Philipp Drees, Annika Gerken and Hans Meine
Sensors 2022, 22(23), 9468; https://doi.org/10.3390/s22239468 - 04 Dec 2022
Viewed by 1798
Abstract
Disorders of swallowing often lead to pneumonia when material enters the airways (aspiration). Flexible Endoscopic Evaluation of Swallowing (FEES) plays a key role in the diagnostics of aspiration but is prone to human errors. An AI-based tool could facilitate this process. Recent non-endoscopic/non-radiologic attempts to detect aspiration using machine-learning approaches have led to unsatisfying accuracy and show black-box characteristics. Hence, for clinical users it is difficult to trust in these model decisions. Our aim is to introduce an explainable artificial intelligence (XAI) approach to detect aspiration in FEES. Our approach is to teach the AI about the relevant anatomical structures, such as the vocal cords and the glottis, based on 92 annotated FEES videos. Simultaneously, it is trained to detect boluses that pass the glottis and become aspirated. During testing, the AI successfully recognized the glottis and the vocal cords but could not yet achieve satisfying aspiration detection quality. While detection performance must be optimized, our architecture results in a final model that explains its assessment by locating meaningful frames with relevant aspiration events and by highlighting suspected boluses. In contrast to comparable AI tools, our framework is verifiable and interpretable and, therefore, accountable for clinical users. Full article

15 pages, 1408 KiB  
Article
Interpretable Machine Learning for Inpatient COVID-19 Mortality Risk Assessments: Diabetes Mellitus Exclusive Interplay
by Heydar Khadem, Hoda Nemat, Jackie Elliott and Mohammed Benaissa
Sensors 2022, 22(22), 8757; https://doi.org/10.3390/s22228757 - 12 Nov 2022
Cited by 3 | Viewed by 1778
Abstract
People with diabetes mellitus (DM) are at elevated risk of in-hospital mortality from coronavirus disease-2019 (COVID-19). This vulnerability has spurred efforts to pinpoint distinctive characteristics of COVID-19 patients with DM. In this context, the present article develops ML models equipped with interpretation modules for inpatient mortality risk assessments of COVID-19 patients with DM. To this end, a cohort of 156 hospitalised COVID-19 patients with pre-existing DM is studied. For creating risk assessment platforms, this work explores a pool of historical, on-admission, and during-admission data that are DM-related or, according to preliminary investigations, are exclusively attributed to the COVID-19 susceptibility of DM patients. First, a set of careful pre-modelling steps are executed on the clinical data, including cleaning, pre-processing, subdivision, and feature elimination. Subsequently, standard machine learning (ML) modelling analysis is performed on the cured data. Initially, a classifier is tasked with forecasting COVID-19 fatality from selected features. The model undergoes thorough evaluation analysis. The results achieved substantiate the efficacy of the undertaken data curation and modelling steps. Afterwards, SHapley Additive exPlanations (SHAP) technique is assigned to interpret the generated mortality risk prediction model by rating the predictors’ global and local influence on the model’s outputs. These interpretations advance the comprehensibility of the analysis by explaining the formation of outcomes and, in this way, foster the adoption of the proposed methodologies. Next, a clustering algorithm demarcates patients into four separate groups based on their SHAP values, providing a practical risk stratification method. Finally, a re-evaluation analysis is performed to verify the robustness of the proposed framework. Full article

16 pages, 1402 KiB  
Article
Learning to Ascend Stairs and Ramps: Deep Reinforcement Learning for a Physics-Based Human Musculoskeletal Model
by Aurelien J. C. Adriaenssens, Vishal Raveendranathan and Raffaella Carloni
Sensors 2022, 22(21), 8479; https://doi.org/10.3390/s22218479 - 03 Nov 2022
Cited by 4 | Viewed by 2216
Abstract
This paper proposes to use deep reinforcement learning to teach a physics-based human musculoskeletal model to ascend stairs and ramps. The deep reinforcement learning architecture employs the proximal policy optimization algorithm combined with imitation learning and is trained with experimental data of a public dataset. The human model is developed in the open-source simulation software OpenSim, together with two objects (i.e., the stairs and ramp) and the elastic foundation contact dynamics. The model can learn to ascend stairs and ramps with muscle forces comparable to healthy subjects and with a forward dynamics comparable to the experimental training data, achieving an average correlation of 0.82 during stair ascent and of 0.58 during ramp ascent across both the knee and ankle joints. Full article

18 pages, 958 KiB  
Article
Beat-to-Beat Blood Pressure Estimation by Photoplethysmography and Its Interpretation
by Vincent Fleischhauer, Aarne Feldheiser and Sebastian Zaunseder
Sensors 2022, 22(18), 7037; https://doi.org/10.3390/s22187037 - 17 Sep 2022
Cited by 7 | Viewed by 2204
Abstract
Blood pressure (BP) is among the most important vital signals. Estimation of absolute BP solely using photoplethysmography (PPG) has gained immense attention over the last years. Available works differ in terms of used features as well as classifiers and bear large differences in their results. This work aims to provide a machine learning method for absolute BP estimation, its interpretation using computational methods and its critical appraisal in face of the current literature. We used data from three different sources including 273 subjects and 259,986 single beats. We extracted multiple features from PPG signals and its derivatives. BP was estimated by xgboost regression. For interpretation we used Shapley additive values (SHAP). Absolute systolic BP estimation using a strict separation of subjects yielded a mean absolute error of 9.456mmHg and correlation of 0.730. The results markedly improve if data separation is changed (MAE: 6.366mmHg, r: 0.874). Interpretation by means of SHAP revealed four features from PPG, its derivation and its decomposition to be most relevant. The presented approach depicts a general way to interpret multivariate prediction algorithms and reveals certain features to be valuable for absolute BP estimation. Our work underlines the considerable impact of data selection and of training/testing separation, which must be considered in detail when algorithms are to be compared. In order to make our work traceable, we have made all methods available to the public. Full article
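The abstract above describes interpreting a regression model with Shapley additive values. As background, the following is a minimal sketch of what a Shapley value is: each feature's contribution is its payoff-weighted marginal effect, averaged over all coalitions of the other features. This toy brute-force implementation and the additive model in it are invented for illustration and are not the authors' code (practical tools such as SHAP use fast approximations instead of full enumeration).

```python
from itertools import combinations
from math import factorial
import numpy as np

def shapley_values(value, n_features):
    """Exact Shapley values by enumerating all feature coalitions.
    `value(S)` returns the model payoff when only the features in set S are known."""
    phi = np.zeros(n_features)
    for j in range(n_features):
        others = [k for k in range(n_features) if k != j]
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                # Shapley weight |S|! (n-|S|-1)! / n!
                w = factorial(len(S)) * factorial(n_features - len(S) - 1) / factorial(n_features)
                phi[j] += w * (value(set(S) | {j}) - value(set(S)))
    return phi

# Toy additive model f(x) = 2*x0 + 1*x1 with baseline 0 for unknown features.
x = np.array([1.0, 1.0])
w = np.array([2.0, 1.0])
value = lambda S: sum(w[k] * x[k] for k in S)

phi = shapley_values(value, 2)
# For an additive model the Shapley values recover the per-feature terms,
# and they always sum to f(x) minus the baseline payoff.
```

The exact computation scales as O(2^n) in the number of features, which is why approximations such as SHAP are used for the feature counts typical of PPG-derived predictors.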

30 pages, 15978 KiB  
Article
Validating Automatic Concept-Based Explanations for AI-Based Digital Histopathology
by Daniel Sauter, Georg Lodde, Felix Nensa, Dirk Schadendorf, Elisabeth Livingstone and Markus Kukuk
Sensors 2022, 22(14), 5346; https://doi.org/10.3390/s22145346 - 18 Jul 2022
Cited by 6 | Viewed by 2500
Abstract
Digital histopathology poses several challenges such as label noise, class imbalance, limited availability of labelled data, and several latent biases to deep learning, negatively influencing transparency, reproducibility, and classification performance. In particular, biases are well known to cause poor generalization. Proposed tools from explainable artificial intelligence (XAI), bias detection, and bias discovery suffer from technical challenges, complexity, unintuitive usage, inherent biases, or a semantic gap. A promising XAI method, not studied in the context of digital histopathology is automated concept-based explanation (ACE). It automatically extracts visual concepts from image data. Our objective is to evaluate ACE’s technical validity following design science principals and to compare it to Guided Gradient-weighted Class Activation Mapping (Grad-CAM), a conventional pixel-wise explanation method. To that extent, we created and studied five convolutional neural networks (CNNs) in four different skin cancer settings. Our results demonstrate that ACE is a valid tool for gaining insights into the decision process of histopathological CNNs that can go beyond explanations from the control method. ACE validly visualized a class sampling ratio bias, measurement bias, sampling bias, and class-correlated bias. Furthermore, the complementary use with Guided Grad-CAM offers several benefits. Finally, we propose practical solutions for several technical challenges. In contradiction to results from the literature, we noticed lower intuitiveness in some dermatopathology scenarios as compared to concept-based explanations on real-world images. Full article
