Explainability, Reliability and Trust in Smart Internet of Things Healthcare Systems

A special issue of Future Internet (ISSN 1999-5903). This special issue belongs to the section "Internet of Things".

Deadline for manuscript submissions: 30 June 2024

Special Issue Editor


Dr. Daniele Riboni
Guest Editor
Dept. of Mathematics and Computer Science, University of Cagliari, 09124 Cagliari, Italy
Interests: sensor-based activity recognition; hybrid activity recognition methods; recognition of behavioral anomalies; pervasive computing and context awareness; context modeling techniques; privacy in location-based services; privacy in pervasive computing

Special Issue Information

Dear Colleagues,

The emerging integration of sensing, reasoning, and communication capabilities into smart homes and everyday objects provides unprecedented opportunities for innovation in several fields. In the healthcare domain, artificial intelligence (AI) algorithms are increasingly applied to Internet of Things (IoT) data to support applications such as remote monitoring of medical conditions, early detection of cognitive issues, rehabilitation, and personal well-being. Currently, however, most AI algorithms for healthcare act as black boxes: the reasons behind an algorithm's output are neither explainable nor interpretable, which undermines the reliability and trustworthiness of smart IoT healthcare systems.

The goal of this Special Issue is to provide an overview of the latest developments regarding methods to increase the explainability, reliability, and trust in smart IoT healthcare systems.

Topics of interest include but are not limited to the following:

  • Explainable AI methods for digital health;
  • Trust in IoT healthcare systems;
  • Reliability in IoT healthcare systems;
  • Privacy for smart healthcare and well-being;
  • Security in smart healthcare ecosystems;
  • Persuasiveness and explainability in behavior change apps;
  • Acceptability and user experience in smart healthcare;
  • Smart user interfaces for digital healthcare platforms.

Dr. Daniele Riboni
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and a short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Future Internet is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • explainable AI
  • reliability in smart healthcare
  • trust and usability in e-health systems
  • security and privacy for pervasive healthcare
  • interpretability of medical AI models

Published Papers (1 paper)


Research

18 pages, 2156 KiB  
Article
Context-Aware Behavioral Tips to Improve Sleep Quality via Machine Learning and Large Language Models
by Erica Corda, Silvia M. Massa and Daniele Riboni
Future Internet 2024, 16(2), 46; https://doi.org/10.3390/fi16020046 - 30 Jan 2024
Abstract
As several studies demonstrate, good sleep quality is essential for individuals’ well-being, as a lack of restorative sleep may disrupt different physical, mental, and social dimensions of health. For this reason, there is increasing interest in tools for the monitoring of sleep based on personal sensors. However, there are currently few context-aware methods to help individuals improve their sleep quality through behavior change tips. In order to tackle this challenge, in this paper, we propose a system that couples machine learning algorithms and large language models to forecast the next night’s sleep quality, and to provide context-aware behavior change tips to improve sleep. In order to encourage adherence and to increase trust, our system uses large language models to describe the conditions that the machine learning algorithm finds harmful to sleep health, and to explain why the behavior change tips are generated as a consequence. We develop a prototype of our system, including a smartphone application, and perform experiments with a set of users. Results show that our system’s forecast is correlated with the actual sleep quality. Moreover, a preliminary user study suggests that the use of large language models in our system is useful in increasing trust and engagement.
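
The abstract describes the coupling of a machine learning forecaster with a large language model only at a high level. As a rough illustration of what such a pipeline could look like, the sketch below trains a forecaster on synthetic context features, ranks the features the model weighs most heavily, and builds a prompt asking an LLM to explain them and suggest a tip. Every name, feature, and prompt wording here is an illustrative assumption, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of coupling an ML sleep-quality
# forecaster with an LLM-generated explanation. Features, model choice, and
# prompt wording are all illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

FEATURES = ["caffeine_mg", "screen_min_before_bed", "daily_steps", "stress_level"]

# Synthetic history standing in for a user's past context data and sleep scores.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, len(FEATURES)))
y = 8.0 - 3.0 * X[:, 0] - 2.0 * X[:, 1] + X[:, 2] + rng.normal(0.0, 0.3, 200)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

def forecast_and_prompt(today: np.ndarray) -> tuple[float, str]:
    """Forecast tonight's sleep quality and build a prompt asking an LLM to
    explain the most influential conditions and suggest one behavior change."""
    score = float(model.predict(today.reshape(1, -1))[0])
    # Global feature importance is a crude proxy for "conditions harmful to
    # sleep"; the paper may well use a different attribution method.
    ranked = sorted(zip(FEATURES, model.feature_importances_),
                    key=lambda p: p[1], reverse=True)
    top = ", ".join(name for name, _ in ranked[:2])
    return score, (
        f"Predicted sleep quality tonight: {score:.1f}/10. "
        f"Most influential factors: {top}. Explain why these factors matter "
        "and suggest one concrete behavior change tip."
    )

score, prompt = forecast_and_prompt(rng.uniform(0.0, 1.0, len(FEATURES)))
print(prompt)  # this prompt would then be sent to the LLM of choice
```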

Planned Papers

The list below contains only planned manuscripts; some of them have not yet been received by the Editorial Office. Papers submitted to MDPI journals are subject to peer review.

Title: Accurate Wearable Respiratory Rate Measurement for Wireless Healthcare Monitoring Systems
Authors: Mitsuhiro Fukuda(1,2); Ryosuke Omoto(2); Takunori Shimazaki(3); Daisuke Anzai(2)
Affiliation: (1) METS INC., Tokyo, Japan (2) Nagoya Institute of Technology, Nagoya, Japan (3) Jikei University of Health Care Sciences, Osaka, Japan
Abstract: In emergency situations, the respiratory rate must be obtainable with simple devices in order to detect sudden changes in a patient's condition. The respiratory rate is usually measured from a combination of electrode impedance changes and the ECG, but impedance measurements are strongly affected by body movement. In contrast, a capnometer used with a ventilator can measure the respiratory rate accurately while avoiding the influence of body movement. However, a capnometer requires bulky equipment and a connection to the ventilator circuit. This study therefore develops a simple respiratory rate measurement system using a wearable CO2 sensor, which can support reliable wireless healthcare monitoring systems. We then experimentally evaluate the accuracy of respiratory rate estimation based on the wearable CO2 sensor.
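
The abstract does not detail the estimation algorithm. A common baseline for capnogram-based respiratory rate estimation is to count exhalation peaks in the CO2 waveform; the sketch below implements that idea under assumed sampling and physiological parameters and should not be read as the authors' method.

```python
# Hedged baseline (not necessarily the authors' algorithm): estimate the
# respiratory rate by counting exhalation peaks in the CO2 waveform.
import numpy as np
from scipy.signal import find_peaks

def respiratory_rate_bpm(co2: np.ndarray, fs: float) -> float:
    """Breaths per minute from a capnogram sampled at fs Hz.

    Each exhalation produces a CO2 peak, so breaths are counted as prominent
    peaks separated by at least 1.5 s (an assumed bound of at most
    40 breaths/min), which also helps reject motion ripple.
    """
    peaks, _ = find_peaks(
        co2,
        distance=int(1.5 * fs),        # minimum breath-to-breath interval
        prominence=0.3 * np.ptp(co2),  # ignore small non-breath wiggles
    )
    duration_min = len(co2) / fs / 60.0
    return len(peaks) / duration_min

# Synthetic 60 s capnogram: 15 breaths/min (0.25 Hz) plus sensor noise.
fs = 20.0
t = np.arange(0.0, 60.0, 1.0 / fs)
co2 = (4.0 + 2.0 * np.clip(np.sin(2 * np.pi * 0.25 * t), 0.0, None)
       + 0.1 * np.random.default_rng(1).normal(size=t.size))
print(respiratory_rate_bpm(co2, fs))  # ~15.0
```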

Title: Is attention all we need? A review on new explainable systems for biomedical image processing with deep learning
Authors: Giovanna Maria Dimitri
Affiliation: Dipartimento di Ingegneria Dell’Informazione e Scienze Matematiche (DIISM), Università degli Studi di Siena, 53100 Siena, Italy
Abstract: In the realm of biomedical image processing, deep learning has emerged as a transformative tool, enabling unprecedented levels of accuracy and efficiency in various tasks. This review critically examines the role of attention mechanisms in enhancing the interpretability of deep learning models applied to biomedical image analysis. While attention mechanisms have gained considerable attention themselves for their ability to focus on relevant regions of an image, this study investigates whether they are the sole key to achieving explainable results in this domain. We explore recent developments and innovations in deep learning-based systems designed for biomedical image analysis, with a particular emphasis on explainability and transparency. Through a comprehensive analysis of the literature, this review discusses the various components, including attention mechanisms, and their interplay in creating interpretable models. Furthermore, we delve into practical use cases where explainable systems have proven to be indispensable for aiding medical professionals in diagnosis and decision-making. The review not only identifies the potential limitations of attention mechanisms but also sheds light on alternative techniques that can complement or surpass them in the pursuit of enhancing transparency in deep learning models.
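
For readers unfamiliar with the quantity that attention-based explanations visualize: the "attention heatmap" is a row of the softmax-normalized attention matrix computed inside the model. The minimal sketch below computes single-head self-attention over assumed patch embeddings and returns that matrix; the names and shapes are illustrative, and a real model (e.g., a Vision Transformer) computes the same quantity per head and layer.

```python
# Minimal NumPy sketch of the attention matrix behind attention heatmaps.
import numpy as np

def self_attention_with_map(x, Wq, Wk, Wv):
    """Single-head self-attention over n patch embeddings of dimension d.

    Returns the output and the (n, n) attention matrix A: row i says how
    strongly patch i attends to every patch, and overlaying one row on the
    image grid yields the usual attention heatmap.
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])  # scaled dot products
    a = np.exp(scores - scores.max(axis=-1, keepdims=True))
    a /= a.sum(axis=-1, keepdims=True)       # row-wise softmax
    return a @ v, a

rng = np.random.default_rng(0)
n, d = 16, 8                                 # 16 patches, 8-dim embeddings
x = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out, attn = self_attention_with_map(x, Wq, Wk, Wv)
print(attn.shape, attn[0].round(2))          # one patch's attention row
```

As the review itself argues, such raw attention maps are not guaranteed to be faithful explanations, which is why complementary attribution techniques are also discussed.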
