Explainable AI (XAI) for Information Processing

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (30 September 2023) | Viewed by 8495

Special Issue Editors


Dr. Esteban García-Cuesta
Guest Editor
Artificial Intelligence Department, Universidad Politécnica de Madrid, E.T.S.I.I., Campus de Montegancedo s/n, Boadilla del Monte, 28660 Madrid, Spain
Interests: machine learning; explainable artificial intelligence (XAI); dimensionality reduction; data mining and knowledge discovery; medical diagnosis

Dr. Manuel Castillo-Cara
Guest Editor
Ontology Engineering Group, Universidad Politécnica de Madrid, Madrid, Spain
Interests: pattern recognition; computer vision; machine learning; internet of things; fog computing

Dr. Ricardo Aler Mur
Guest Editor
Computer Science Department, Universidad Carlos III de Madrid, Avenida de la Universidad 30, 28911 Leganés, Madrid, Spain
Interests: machine learning; evolutionary computation; renewable energy forecasting

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) is one of the most promising fields today and has applications in many different research areas, driven in large part by the vast amount of available data. However, while many current AI algorithms achieve high performance, their outputs are often impossible to explain. The term “black box” is used in machine learning for models whose final decisions cannot be explained. In several areas, the adoption of AI methods requires better explainability of the provided outputs (for example, clinical diagnosis). The paradigms that address this problem fall under the umbrella of so-called explainable artificial intelligence (XAI): algorithms and techniques that apply AI in such a way that the solution can be understood by human users. Despite a significant amount of research on this topic, many challenges remain, including the robustness of the methods, their transparency, the stability of their results, validation and quality measurement, and interpretability.

This Special Issue aims not only to review the latest research progress in the field of XAI but also to propose solutions to some of the above-mentioned problems and their application in different research areas. Therefore, we encourage submissions of conceptual, theoretical, empirical, and literature review scientific articles that focus on this field. All types of and approaches to XAI are welcome in this Special Issue.

Dr. Esteban García-Cuesta
Dr. Manuel Castillo-Cara
Dr. Ricardo Aler Mur
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • explainable artificial intelligence (XAI)
  • human-understandable machines
  • explainable information fusion
  • responsible artificial intelligence (RAI)

Published Papers (5 papers)

Research

19 pages, 1664 KiB  
Article
eDA3-X: Distributed Attentional Actor Architecture for Interpretability of Coordinated Behaviors in Multi-Agent Systems
by Yoshinari Motokawa and Toshiharu Sugawara
Appl. Sci. 2023, 13(14), 8454; https://doi.org/10.3390/app13148454 - 21 Jul 2023
Viewed by 1040
Abstract
In this paper, we propose an enhanced version of the distributed attentional actor architecture (eDA3-X) for model-free reinforcement learning. This architecture is designed to facilitate the interpretability of learned coordinated behaviors in multi-agent systems through the use of a saliency vector that captures partial observations of the environment. Our proposed method, in principle, can be integrated with any deep reinforcement learning method, as indicated by X, and can help us identify the information in input data that individual agents attend to during and after training. We then validated eDA3-X through experiments in the object collection game. We also analyzed the relationship between cooperative behaviors and three types of attention heatmaps (standard, positional, and class attentions), which provided insight into the information that the agents consider crucial when making decisions. In addition, we investigated how attention is developed by an agent through training experiences. Our experiments indicate that our approach offers a promising solution for understanding coordinated behaviors in multi-agent reinforcement learning.
(This article belongs to the Special Issue Explainable AI (XAI) for Information Processing)
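To make the idea of attention-derived saliency concrete, the following minimal sketch (not the authors' code) computes single-head scaled dot-product attention over the entities in an agent's partial observation; the resulting weight vector plays the role of a saliency vector indicating what the agent attends to. All names and shapes are illustrative assumptions.

```python
# Minimal sketch (illustrative, not eDA3-X itself): single-head scaled
# dot-product attention over an agent's partial observation, yielding a
# saliency vector over the observed entities.
import numpy as np

def saliency_from_attention(obs_embeddings: np.ndarray, query: np.ndarray) -> np.ndarray:
    """obs_embeddings: (n_entities, d) embeddings of the agent's partial view.
    query: (d,) embedding of the agent's own state (hypothetical).
    Returns a (n_entities,) saliency vector (attention weights summing to 1)."""
    d = obs_embeddings.shape[1]
    scores = obs_embeddings @ query / np.sqrt(d)   # (n_entities,) attention scores
    weights = np.exp(scores - scores.max())        # numerically stable softmax
    return weights / weights.sum()

# Toy usage: 5 observed entities with 8-dimensional embeddings.
rng = np.random.default_rng(0)
obs = rng.normal(size=(5, 8))
agent_state = rng.normal(size=8)
saliency = saliency_from_attention(obs, agent_state)
print("most-attended entity:", int(saliency.argmax()), saliency.round(3))
```

The paper's architecture obtains such weights from attention heads trained inside the policy network; this toy version only illustrates the step of turning dot-product scores into a normalized saliency distribution.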

26 pages, 3987 KiB  
Article
ATICVis: A Visual Analytics System for Asymmetric Transformer Models Interpretation and Comparison
by Jian-Lin Wu, Pei-Chen Chang, Chao Wang and Ko-Chih Wang
Appl. Sci. 2023, 13(3), 1595; https://doi.org/10.3390/app13031595 - 26 Jan 2023
Cited by 1 | Viewed by 1601
Abstract
In recent years, natural language processing (NLP) technology has made great progress. Models based on transformers have performed well in various natural language processing problems. However, a natural language task can be carried out by multiple different models with slightly different architectures, such as different numbers of layers and attention heads. Beyond quantitative indicators, many users also weigh a model's language-understanding ability against the computing resources it requires when selecting a model. However, comparing and deeply analyzing two transformer-based models with different numbers of layers and attention heads is not easy, because there is no inherent one-to-one correspondence between their components; comparing models with different architectures is therefore a crucial and challenging task when users train, select, or improve models for their NLP tasks. In this paper, we develop a visual analysis system to help machine learning experts deeply interpret and compare the pros and cons of asymmetric transformer-based models when the models are applied to a user’s target NLP task. We propose metrics that evaluate the similarity between layers or attention heads to help users identify valuable layer and attention-head combinations to compare. Our visual tool provides an interactive overview-to-detail framework for users to explore when and why models behave differently. In the use cases, users use our visual tool to find out and explain why a large model does not significantly outperform a small model and to understand the linguistic features captured by layers and attention heads. The use cases and user feedback show that our tool can help people gain insight and facilitate model comparison tasks.
(This article belongs to the Special Issue Explainable AI (XAI) for Information Processing)
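The abstract does not specify the similarity metrics the system uses; as a hedged, generic illustration of how layers (or heads) from two differently sized transformers can be compared without a one-to-one match, the sketch below implements linear centered kernel alignment (CKA), a widely used representation-similarity measure. The layer activations are randomly generated placeholders.

```python
# Generic layer-similarity illustration (not the paper's metric): linear CKA
# compares the activations two layers produce on the same set of tokens.
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """X: (n_tokens, d1) activations from model A; Y: (n_tokens, d2) from model B.
    Returns a similarity score in [0, 1]; higher means more similar layers."""
    X = X - X.mean(axis=0)                       # center each feature
    Y = Y - Y.mean(axis=0)
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return float(cross / (norm_x * norm_y))

# Toy usage: a layer is very similar to a noisy copy of itself,
# and dissimilar to an unrelated layer of different width.
rng = np.random.default_rng(1)
layer_a = rng.normal(size=(128, 64))
print(linear_cka(layer_a, layer_a + 0.1 * rng.normal(size=layer_a.shape)))  # close to 1
print(linear_cka(layer_a, rng.normal(size=(128, 48))))                      # close to 0
```

A matrix of such pairwise scores is the kind of summary a visual analytics tool can present as an overview before drilling down into individual heads.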

Review

20 pages, 423 KiB  
Review
Explainability of Automated Fact Verification Systems: A Comprehensive Review
by Manju Vallayil, Parma Nand, Wei Qi Yan and Héctor Allende-Cid
Appl. Sci. 2023, 13(23), 12608; https://doi.org/10.3390/app132312608 - 23 Nov 2023
Viewed by 961
Abstract
The rapid growth in Artificial Intelligence (AI) has led to considerable progress in Automated Fact Verification (AFV). This process involves collecting evidence for a statement, assessing its relevance, and predicting its accuracy. Recently, research has begun to explore automatic explanations as an integral part of the accuracy analysis process. However, explainability within AFV lags behind the wider field of explainable AI (XAI), which aims at making AI decisions more transparent. This study looks at the notion of explainability as a topic in the field of XAI, with a focus on how it applies to the specific task of Automated Fact Verification. It examines the explainability of AFV, taking into account architectural, methodological, and dataset-related elements, with the aim of making AI more comprehensible and acceptable to general society. Although there is a general consensus on the need for AI systems to be explainable, there is a dearth of systems and processes to achieve it. This research investigates the concept of explainable AI in general and demonstrates its various aspects through the particular task of Automated Fact Verification. It also explores the topic of faithfulness in the context of local and global explainability. The paper concludes by highlighting the gaps and limitations in current data science practices and recommending possible modifications to architectural and data curation processes, contributing to the broader goals of explainability in Automated Fact Verification.
(This article belongs to the Special Issue Explainable AI (XAI) for Information Processing)
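As one concrete, hedged example of the faithfulness notion the review discusses, the sketch below implements a comprehensiveness-style check: remove the tokens an explanation marks as the rationale and measure how much the verifier's confidence in its verdict drops. The `predict_proba` callable and the toy classifier are hypothetical stand-ins, not components of any system covered by the review.

```python
# Comprehensiveness-style faithfulness check (illustrative): a faithful
# rationale should carry the evidence, so deleting it should lower the
# predicted probability of the verdict.
from typing import Callable, List

def comprehensiveness(predict_proba: Callable[[List[str]], float],
                      tokens: List[str],
                      rationale_idx: List[int]) -> float:
    """Returns the drop in predicted probability after removing the rationale."""
    full_score = predict_proba(tokens)
    keep = set(range(len(tokens))) - set(rationale_idx)
    reduced = [t for i, t in enumerate(tokens) if i in keep]
    return full_score - predict_proba(reduced)

# Toy usage with a dummy verifier that just counts the word "confirmed".
dummy = lambda toks: min(1.0, toks.count("confirmed") / 2)
claim_evidence = "the report confirmed the figure and officials confirmed it".split()
print(comprehensiveness(dummy, claim_evidence, rationale_idx=[2, 7]))  # large drop => faithful
```

A complementary "sufficiency" check keeps only the rationale tokens and asks whether the prediction is preserved; together the two scores quantify how well a local explanation reflects what the model actually used.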

23 pages, 634 KiB  
Review
A Scoping Review on the Progress, Applicability, and Future of Explainable Artificial Intelligence in Medicine
by Raquel González-Alday, Esteban García-Cuesta, Casimir A. Kulikowski and Victor Maojo
Appl. Sci. 2023, 13(19), 10778; https://doi.org/10.3390/app131910778 - 28 Sep 2023
Cited by 1 | Viewed by 1371
Abstract
Due to the success of artificial intelligence (AI) applications in the medical field over the past decade, concerns about the explainability of these systems have increased. The reliability requirements of black-box algorithms for making decisions affecting patients pose a challenge even beyond their accuracy. Recent advances in AI increasingly emphasize the necessity of integrating explainability into these systems. While most traditional AI methods and expert systems are inherently interpretable, the recent literature has focused primarily on explainability techniques for more complex models such as deep learning. This scoping review critically analyzes the existing literature on the explainability and interpretability of AI methods within the clinical domain. It offers a comprehensive overview of past and current research trends with the objective of identifying the limitations that hinder the advancement of explainable artificial intelligence (XAI) in the field of medicine. Such constraints encompass the diverse requirements of key stakeholders, including clinicians, patients, and developers, as well as cognitive barriers to knowledge acquisition, the absence of standardised evaluation criteria, the potential for mistaking explanations for causal relationships, and the apparent trade-off between model accuracy and interpretability. Furthermore, this review discusses possible research directions aimed at surmounting these challenges. These include alternative approaches to leveraging medical expertise to enhance interpretability within clinical settings, such as data fusion techniques and interdisciplinary assessments throughout the development process, emphasizing the importance of taking into account the needs of end users in order to design trustworthy explainability methods.
(This article belongs to the Special Issue Explainable AI (XAI) for Information Processing)
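The accuracy-interpretability trade-off mentioned in the review can be illustrated with a toy experiment (not taken from the paper): compare a shallow decision tree, whose rules a clinician could inspect directly, with a gradient-boosted ensemble on a public clinical dataset using scikit-learn. The dataset choice and hyperparameters are illustrative assumptions.

```python
# Toy illustration of the accuracy-interpretability tension: an inspectable
# depth-3 decision tree vs. a stronger but opaque boosted ensemble.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
interpretable = DecisionTreeClassifier(max_depth=3, random_state=0)  # readable rules
black_box = GradientBoostingClassifier(random_state=0)               # harder to explain

for name, model in [("depth-3 tree", interpretable), ("gradient boosting", black_box)]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f} mean CV accuracy")
```

Whether the gap justifies the loss of transparency is exactly the kind of stakeholder-dependent question the review argues must be settled with clinicians and patients in the loop.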

28 pages, 871 KiB  
Review
Exploring Local Explanation of Practical Industrial AI Applications: A Systematic Literature Review
by Thi-Thu-Huong Le, Aji Teguh Prihatno, Yustus Eko Oktian, Hyoeun Kang and Howon Kim
Appl. Sci. 2023, 13(9), 5809; https://doi.org/10.3390/app13095809 - 8 May 2023
Cited by 6 | Viewed by 2511
Abstract
In recent years, numerous explainable artificial intelligence (XAI) use cases have been developed to solve real problems in industrial applications while keeping the underlying artificial intelligence (AI) models explainable enough to judge their quality and potentially hold them accountable if they become corrupted. Understanding the state-of-the-art methods, pointing out recent issues, and deriving future directions are therefore important to drive XAI research efficiently. This paper presents a systematic literature review of local explanation techniques and their practical applications in various industrial sectors. We first establish the need for XAI in response to opaque AI models and survey different local explanation methods for industrial AI applications. The surveyed studies are then examined along several factors, including industry sector, AI model, data type, and XAI usage and purpose. We also look at the advantages and disadvantages of local explanation methods and how well they work in practical settings. The difficulties of using local explanation techniques are also covered, including computational complexity and the trade-off between precision and interpretability. Our findings demonstrate that local explanation techniques can boost the transparency and interpretability of industrial AI models and provide insightful information about them. However, the efficiency of these methods must be improved, and ethical concerns about their application must be resolved. This paper contributes to the growing knowledge of local explanation strategies and offers guidance to academics and industry professionals who want to use these methods in practical settings.
(This article belongs to the Special Issue Explainable AI (XAI) for Information Processing)
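For readers unfamiliar with the local explanation methods the review surveys, here is a minimal, LIME-like sketch under simplifying assumptions: perturb a single instance, query a black-box model, and fit a distance-weighted linear surrogate whose coefficients act as local feature importances. The `black_box` function is a hypothetical stand-in for an industrial model, and the kernel and sampling choices are illustrative.

```python
# Minimal perturbation-based local explanation (LIME-like sketch).
import numpy as np
from sklearn.linear_model import Ridge

def local_explanation(black_box, x: np.ndarray, n_samples: int = 500,
                      scale: float = 0.1, seed: int = 0) -> np.ndarray:
    """Fit a weighted linear surrogate around instance x; return local importances."""
    rng = np.random.default_rng(seed)
    Z = x + scale * rng.normal(size=(n_samples, x.size))        # samples around x
    preds = black_box(Z)                                         # query the model
    weights = np.exp(-np.linalg.norm(Z - x, axis=1) ** 2 / (2 * scale ** 2))
    surrogate = Ridge(alpha=1.0).fit(Z, preds, sample_weight=weights)
    return surrogate.coef_

# Toy usage: a nonlinear "industrial" model where only features 0 and 2 matter.
black_box = lambda Z: np.sin(Z[:, 0]) + 2.0 * Z[:, 2] ** 2
x0 = np.array([0.5, 1.0, -0.3, 2.0])
print(local_explanation(black_box, x0).round(2))  # large weights on features 0 and 2
```

The computational cost of such sampling and the sensitivity of the result to the kernel width are instances of the complexity and precision-interpretability trade-offs the review highlights.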
