Interpretable and Explainable AI Applications

A special issue of AI (ISSN 2673-2688).

Deadline for manuscript submissions: 30 September 2024 | Viewed by 9609

Special Issue Editors

Prof. Dr. Mobyen Uddin Ahmed
School of Innovation, Design and Engineering (IDT), Mälardalen University, Box 883, 721 23 Västerås, Sweden
Interests: deep learning; XAI; human-centric AI; case-based reasoning; data mining; fuzzy logic and other machine learning and machine intelligence approaches for analytics, especially in big data

Prof. Dr. Rosina O. Weber
College of Computing & Informatics, Drexel University, Philadelphia, PA 19104, USA
Interests: use-inspired textual agents; explainable agency; case-based reasoning

Special Issue Information

Dear Colleagues,

The sub-fields of interpretable machine learning (IML) and explainable artificial intelligence (XAI) overlap substantially. Both are concerned with the broad goal that systems incorporating AI methods should be interpretable to their users and designers. Because AI is a much broader field than ML, the two sub-fields have a great deal of synergy and must consider one another's contributions. This Special Issue targets contributions that take this broader view of the field, investigating how AI systems explain themselves, either through interpretability alone or through a combination of interpretability and explainability. IML/XAI will also play a vital role in sustainability, as the sustainable development of AI applications depends on humans and society being able to trust AI.

The aim of this Special Issue is to provide a leading forum for the timely, in-depth presentation of recent advances in the research and development of interpretability and explainability techniques for AI applications.

In this Special Issue, original research articles and reviews are welcome. Research areas may include (but are not limited to) the following:

  • How artificial intelligence methods and systems explain their decisions;
  • Interpretability of AI models and methods;
  • Validation of explainability or interpretability approaches for AI;
  • Robustness of methods for interpretability and explainability;
  • Applications adopting AI methods with explainability or interpretability methods;
  • Applications benefiting from different types of explanation contents, e.g., counterfactuals, feature attribution, instance attribution;
  • Social aspects of explainability and interpretability in AI;
  • Accountability of AI systems.

Please include in your submission a statement regarding whether your manuscript’s contribution is in computing and engineering or in social aspects. If your submission includes contributions in both aspects, please indicate which authors are contributing to each.

We look forward to receiving your contributions.

You may choose our Joint Special Issue in Sustainability.

Prof. Dr. Mobyen Uddin Ahmed
Prof. Dr. Rosina O. Weber
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. AI is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • explainability
  • interpretability
  • artificial intelligence applications
  • validation
  • accountability

Published Papers (2 papers)


Research


11 pages, 508 KiB  
Article
An Empirical Comparison of Interpretable Models to Post-Hoc Explanations
by Parisa Mahya and Johannes Fürnkranz
AI 2023, 4(2), 426-436; https://doi.org/10.3390/ai4020023 - 19 May 2023
Cited by 1 | Viewed by 2522
Abstract
Recently, some effort went into explaining intransparent and black-box models, such as deep neural networks or random forests. So-called model-agnostic methods typically approximate the prediction of the intransparent black-box model with an interpretable model, without considering any specifics of the black-box model itself. It is a valid question whether direct learning of interpretable white-box models should not be preferred over post-hoc approximations of intransparent and black-box models. In this paper, we report the results of an empirical study, which compares post-hoc explanations and interpretable models on several datasets for rule-based and feature-based interpretable models. The results seem to underline that often directly learned interpretable models approximate the black-box models at least as well as their post-hoc surrogates, even though the former do not have direct access to the black-box model.
(This article belongs to the Special Issue Interpretable and Explainable AI Applications)
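
To make the comparison described in this abstract concrete, the sketch below contrasts a post-hoc surrogate with a directly learned interpretable model. It is a minimal illustration assuming a scikit-learn random forest as the black box and a shallow decision tree as the interpretable model; the dataset, model choices, and tree depth are placeholders, not the authors' experimental setup.

```python
# Hedged sketch: post-hoc decision-tree surrogate of a random forest vs. a
# directly trained decision tree of the same complexity (illustrative only).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Black-box model.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Post-hoc surrogate: a shallow tree trained to mimic the black box's predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Directly learned interpretable model of the same complexity, fit to the true labels.
direct_tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

bb_pred = black_box.predict(X_test)
print("black-box accuracy:        ", accuracy_score(y_test, bb_pred))
print("surrogate fidelity to BB:  ", accuracy_score(bb_pred, surrogate.predict(X_test)))
print("direct tree fidelity to BB:", accuracy_score(bb_pred, direct_tree.predict(X_test)))
print("surrogate accuracy:        ", accuracy_score(y_test, surrogate.predict(X_test)))
print("direct tree accuracy:      ", accuracy_score(y_test, direct_tree.predict(X_test)))
```

Fidelity (agreement with the black box) is what post-hoc surrogates are trained for, whereas the directly learned tree never sees the black box; the paper's question is, roughly, whether the latter nonetheless matches or approximates the black box at least as well.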

Review


32 pages, 1570 KiB  
Review
Explainable Image Classification: The Journey So Far and the Road Ahead
by Vidhya Kamakshi and Narayanan C. Krishnan
AI 2023, 4(3), 620-651; https://doi.org/10.3390/ai4030033 - 01 Aug 2023
Cited by 2 | Viewed by 6110
Abstract
Explainable Artificial Intelligence (XAI) has emerged as a crucial research area to address the interpretability challenges posed by complex machine learning models. In this survey paper, we provide a comprehensive analysis of existing approaches in the field of XAI, focusing on the tradeoff between model accuracy and interpretability. Motivated by the need to address this tradeoff, we conduct an extensive review of the literature, presenting a multi-view taxonomy that offers a new perspective on XAI methodologies. We analyze various sub-categories of XAI methods, considering their strengths, weaknesses, and practical challenges. Moreover, we explore causal relationships in model explanations and discuss approaches dedicated to explaining cross-domain classifiers. The latter is particularly important in scenarios where training and test data are sampled from different distributions. Drawing insights from our analysis, we propose future research directions, including exploring explainable allied learning paradigms, developing evaluation metrics for both traditionally trained and allied learning-based classifiers, and applying neural architectural search techniques to minimize the accuracy–interpretability tradeoff. This survey paper provides a comprehensive overview of the state-of-the-art in XAI, serving as a valuable resource for researchers and practitioners interested in understanding and advancing the field.
(This article belongs to the Special Issue Interpretable and Explainable AI Applications)
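
One family of methods covered by surveys of explainable image classification is gradient-based feature attribution. As a loose, hedged illustration (not a method or model taken from this particular paper), the sketch below computes a vanilla gradient saliency map for a tiny untrained CNN; the architecture, input size, and class count are placeholders chosen only so the snippet runs without external data or downloads.

```python
# Hedged sketch of vanilla gradient saliency, a gradient-based feature-attribution
# method for image classifiers. The tiny CNN and random input are illustrative
# placeholders, not models or data from the surveyed work.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 10),  # 10 hypothetical classes
)
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in for an input image
logits = model(image)
target = logits[0].argmax()

# Gradient of the predicted-class score with respect to the input pixels.
logits[0, target].backward()

# Per-pixel importance: maximum absolute gradient across colour channels.
saliency = image.grad.abs().max(dim=1).values  # shape (1, 32, 32)
print(saliency.shape)
```

In practice the same recipe is applied to a trained classifier, and the resulting saliency map is overlaid on the input image to indicate which pixels most influence the predicted class.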
