Explainable AI (XAI): Theory, Methods and Applications

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: closed (31 August 2023)

Special Issue Editors


Dr. Angela Lombardi
Guest Editor
Department of Electrical and Information Engineering, Politecnico di Bari, Via E. Orabona 4, I-70125 Bari, Italy
Interests: explainable artificial intelligence; machine learning; BCI; complex networks; brain connectivity; magnetic resonance imaging

Dr. Richard Jiang
Guest Editor
School of Computing and Communication, Lancaster University, Lancaster LA1 4YW, UK
Interests: artificial intelligence; AI ethics; privacy-preserving machine learning; quantum AI; neuronal computation and biomedical image analysis

Special Issue Information

Dear Colleagues,

In recent years, many efforts have been made to improve the interpretability of the decisions of machine learning algorithms. Explainable Artificial Intelligence (XAI) has been established as a new research area that aims to provide methodologies and algorithms which enhance the transparency and reliability of the decisions made by predictive algorithms and which clarify the contribution and importance of individual features to the outcome. An ever-increasing number of issues are being addressed, including the negative aspects of automated applications, such as biases and failures, which in turn have led to the development of new ethical guidelines and regulations. Moreover, a growing number of empirical studies cover emerging techniques such as post-hoc explanations, explanations by prototypes, counterfactual explanations, and graphical explanations.
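By way of illustration, the short sketch below applies one of the simplest post-hoc, model-agnostic techniques mentioned above, permutation feature importance, to a black-box classifier. The dataset, the random-forest model, and the scikit-learn helpers are illustrative choices only and are not prescribed by this Special Issue.

```python
# Minimal sketch of a post-hoc, model-agnostic explanation:
# permutation feature importance for a black-box classifier.
# Dataset and model are illustrative assumptions, not part of the Call.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure the drop in held-out accuracy;
# a larger drop indicates a larger contribution of that feature to the decision.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)

for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]:<25} {result.importances_mean[idx]:.4f}")
```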

This Special Issue will focus on research advances and developments in the XAI research field. Interdisciplinary contributions and applications of XAI in domains such as healthcare, sensor-based systems, and education are welcome.

The topics of interest for the Special Issue include, but are not limited to:

  • Post-hoc XAI techniques;
  • Theoretical foundations of XAI;
  • Multimodal XAI approaches;
  • XAI decision support systems in different scenarios (e.g., the medical domain, industrial applications, security);
  • Evaluation metrics for XAI algorithms;
  • XAI methods for Deep Learning.

Dr. Angela Lombardi
Dr. Richard Jiang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • XAI
  • human-centered AI
  • interpretability
  • trustworthy AI
  • explainable AI

Published Papers (2 papers)


Research

14 pages, 3875 KiB  
Article
Interpretable Multi-Channel Capsule Network for Human Motion Recognition
by Peizhang Li, Qing Fei, Zhen Chen and Xiangdong Liu
Electronics 2023, 12(20), 4313; https://doi.org/10.3390/electronics12204313 - 18 Oct 2023
Abstract
Recently, capsule networks have emerged as a novel neural network architecture for human motion recognition owing to their enhanced interpretability compared to traditional deep learning networks. However, the characteristic features of human motion are often distributed across distinct spatial dimensions and existing capsule networks struggle to independently extract and combine features across multiple spatial dimensions. In this paper, we propose a new multi-channel capsule network architecture that extracts feature capsules in different spatial dimensions, generates a multi-channel capsule chain with independent routing within each channel, and culminates in the aggregation of information from capsules in different channels to activate categories. The proposed structure endows the network with the capability to independently cluster interpretable features within different channels; aggregates features across channels during classification, thereby enhancing classification accuracy and robustness; and also presents the potential for mining interpretable primitives within individual channels. Experimental comparisons with several existing capsule network structures demonstrate the superior performance of the proposed architecture. Furthermore, in contrast to previous studies that vaguely discussed the interpretability of capsule networks, we include additional visual experiments that illustrate the interpretability of the proposed network structure in practical scenarios.
(This article belongs to the Special Issue Explainable AI (XAI): Theory, Methods and Applications)
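As a rough illustration of the architecture described in the abstract above (this is not the authors' code), the PyTorch sketch below gives each channel its own capsule layer with independent routing-by-agreement and then averages the per-channel class activations. All layer sizes, the squash non-linearity, and the averaging-based aggregation are assumptions made for the example.

```python
# Simplified sketch of a multi-channel capsule classifier: independent routing
# per channel, cross-channel aggregation for the final class scores.
import torch
import torch.nn as nn

def squash(s, dim=-1):
    # Standard capsule squashing non-linearity.
    norm2 = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm2 / (1.0 + norm2)) * s / torch.sqrt(norm2 + 1e-8)

class ChannelCapsules(nn.Module):
    """One channel: maps its input capsules to class capsules with its own routing."""
    def __init__(self, in_caps=32, in_dim=8, num_classes=10, out_dim=16, iters=3):
        super().__init__()
        self.W = nn.Parameter(0.01 * torch.randn(in_caps, num_classes, out_dim, in_dim))
        self.iters = iters

    def forward(self, u):                                    # u: (batch, in_caps, in_dim)
        u_hat = torch.einsum('ijdk,bik->bijd', self.W, u)    # predictions per (capsule, class)
        b = torch.zeros(u.size(0), u.size(1), self.W.size(1), device=u.device)
        for _ in range(self.iters):                          # routing within this channel only
            c = b.softmax(dim=-1)
            v = squash((c.unsqueeze(-1) * u_hat).sum(dim=1)) # (batch, classes, out_dim)
            b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)
        return v

class MultiChannelCapsNet(nn.Module):
    def __init__(self, num_channels=3, num_classes=10):
        super().__init__()
        self.channels = nn.ModuleList(ChannelCapsules(num_classes=num_classes)
                                      for _ in range(num_channels))

    def forward(self, caps_per_channel):                     # list of (batch, in_caps, in_dim)
        # Class activation = capsule length; aggregate by averaging across channels.
        logits = [ch(u).norm(dim=-1) for ch, u in zip(self.channels, caps_per_channel)]
        return torch.stack(logits, dim=0).mean(dim=0)

net = MultiChannelCapsNet()
inputs = [torch.randn(4, 32, 8) for _ in range(3)]           # dummy per-channel capsule features
print(net(inputs).shape)                                     # torch.Size([4, 10])
```

Because each channel keeps its own routing weights, the features clustered within a channel can be inspected separately, which is the property the paper exploits for interpretability.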

35 pages, 1236 KiB  
Article
A DEXiRE for Extracting Propositional Rules from Neural Networks via Binarization
by Victor Contreras, Niccolo Marini, Lora Fanda, Gaetano Manzo, Yazan Mualla, Jean-Paul Calbimonte, Michael Schumacher and Davide Calvaresi
Electronics 2022, 11(24), 4171; https://doi.org/10.3390/electronics11244171 - 13 Dec 2022
Cited by 3
Abstract
Background: Despite the advancement in eXplainable Artificial Intelligence, the explanations provided by model-agnostic predictors still call for improvements (i.e., lack of accurate descriptions of predictors’ behaviors). Contribution: We present a tool for Deep Explanations and Rule Extraction (DEXiRE) to approximate rules for Deep Learning models with any number of hidden layers. Methodology: DEXiRE proposes the binarization of neural networks to induce Boolean functions in the hidden layers, generating as many intermediate rule sets. A rule set is induced between the first hidden layer and the input layer. Finally, the complete rule set is obtained using inverse substitution on intermediate rule sets and first-layer rules. Statistical tests and satisfiability algorithms reduce the final rule set’s size and complexity (filtering redundant, inconsistent, and non-frequent rules). DEXiRE has been tested in binary and multiclass classifications with six datasets having different structures and models. Results: The performance is consistent (in terms of accuracy, fidelity, and rule length) with respect to the state-of-the-art rule extractors (i.e., ECLAIRE). Moreover, compared with ECLAIRE, DEXiRE has generated shorter rules (i.e., up to 74% fewer terms) and has shortened the execution time (improving up to 197% in the best-case scenario). Conclusions: DEXiRE can be applied for binary and multiclass classification of deep learning predictors with any number of hidden layers. Moreover, DEXiRE can identify the activation pattern per class and use it to reduce the search space for rule extractors (pruning irrelevant/redundant neurons), yielding shorter rules and execution times with respect to ECLAIRE.
(This article belongs to the Special Issue Explainable AI (XAI): Theory, Methods and Applications)
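The sketch below (not the DEXiRE implementation) illustrates only the core step named in the abstract: a hidden layer's activations are binarized, and an interpretable rule set is induced that predicts each binarized hidden unit from the input features. The median threshold, the toy network, and the tree-based rule induction are assumptions made for the example; the full pipeline additionally composes per-layer rule sets via inverse substitution and prunes them with statistical and satisfiability tests.

```python
# Sketch of hidden-layer binarization followed by rule induction for one layer.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(4,), max_iter=2000, random_state=0).fit(X, y)

# Forward pass to the (single) hidden layer, then binarize: a unit "fires"
# if its ReLU activation exceeds its median over the data set (an assumed rule).
hidden = np.maximum(0, X @ mlp.coefs_[0] + mlp.intercepts_[0])
binary_hidden = (hidden > np.median(hidden, axis=0)).astype(int)

# Induce a rule set (here: a shallow decision tree) for each binarized
# hidden unit in terms of the input features.
for j in range(binary_hidden.shape[1]):
    tree = DecisionTreeClassifier(max_depth=2, random_state=0)
    tree.fit(X, binary_hidden[:, j])
    print(f"Rules for hidden unit h{j}:")
    print(export_text(tree, feature_names=[f"x{i}" for i in range(X.shape[1])]))
```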
