Explainability in AI and Machine Learning

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 15 November 2024

Special Issue Editors

Department of Computer Engineering and Informatics, University of Patras, 26504 Patras, Greece
Interests: artificial intelligence; knowledge representation; intelligent systems; intelligent e-learning; sentiment analysis
Department of Computer Engineering and Informatics, University of Patras, 26504 Rio, Greece
Interests: artificial intelligence; learning technologies; hybrid systems; natural language processing; virtual reality

Special Issue Information

Dear Colleagues,

Explainable Artificial Intelligence (XAI) broadly concerns how AI systems communicate explanations of their decisions to human users. This has been of natural interest for "traditional" AI systems and models (e.g., knowledge representation and reasoning systems, planning systems), where the "internal" decision-making process is largely transparent (white-box models); such models, although mostly interpretable, are not in themselves explainable. More recently, with the development of many successful models, explainability has become a particular concern for the machine learning (ML) community. This is because, although some models are interpretable, most ML models act as black boxes, and in many applications (e.g., medicine, healthcare, education, automated driving) practitioners need to understand a model's decision making in order to trust it when it is deployed in practice.

XAI has thus become an active subfield of machine learning that aims to increase the transparency of ML models. Apart from increasing trust and confidence, explainability can also provide further insight into the model itself and the problem at hand.

Deep Neural Networks (DNNs) are the ML models behind many recent major advances, yet a clear understanding of their internal decision making is still lacking. Interpreting the internal mechanisms of DNNs has therefore become a topic of considerable interest. Symbolic methods could be used for network interpretation, by making the inference patterns inside DNNs explicit and explaining the decisions they make. Alternatively, re-designing DNNs in an interpretable or explainable way could be a solution.
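By way of illustration only (not part of this call, and with the dataset and model sizes chosen arbitrarily), the sketch below approximates a trained neural network with a shallow decision tree fitted to the network's own predictions; such a surrogate is one simple way of exposing interpretable, rule-like inference patterns behind a black-box model.

```python
# Illustrative sketch only: approximate a trained neural network with an
# interpretable surrogate (a shallow decision tree fitted to the network's
# predictions). Dataset and model sizes are arbitrary choices.
from sklearn.datasets import load_breast_cancer
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# The black-box model whose behaviour we want to interpret.
net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
net.fit(X, y)

# Interpretable surrogate trained to mimic the network's predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, net.predict(X))

# How faithfully the tree reproduces the network, plus its readable rules.
print("Fidelity to the network:", (surrogate.predict(X) == net.predict(X)).mean())
print(export_text(surrogate, feature_names=list(X.columns)))
```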

Natural language (NL) techniques, such as NL Generation (NLG) and NL Processing (NLP), can help in providing comprehensible explanations of automated decisions to the human users of AI systems.
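The following sketch, again purely illustrative, combines a model-agnostic explanation with a simple verbalization step: it ranks the features of a black-box classifier using scikit-learn's permutation importance and then renders the result as a short sentence via a fixed template, a minimal stand-in for NLG. The dataset, model, and wording are arbitrary choices made for the example.

```python
# Illustrative sketch only: post-hoc, model-agnostic explanation of a
# black-box model, verbalized as a short natural-language sentence.
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_wine(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An opaque ("black-box") model from the user's point of view.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Global explanation: how much does shuffling each feature hurt test accuracy?
imp = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, imp.importances_mean), key=lambda t: t[1], reverse=True)

# Template-based verbalization of the top-ranked features.
top = ", ".join(f"'{name}'" for name, _ in ranked[:3])
print(f"The model's predictions rely most heavily on the features {top}.")
```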

Topics of interest include, but are not limited to, the following:

  • Applications of XAI systems;
  • Evaluation of XAI approaches;
  • Explainable Agents;
  • Explaining Black-box Models;
  • Explaining Logical Formulas;
  • Explainable Machine Learning;
  • Explainable Planning;
  • Interpretable Machine Learning;
  • Metrics for Explainability Evaluation;
  • Models for Explainable Recommendations;
  • Natural Language Generation for Explainable AI;
  • Self-explanatory Decision-Support Systems;
  • Verbalizing Knowledge Bases.

Prof. Dr. Ioannis Hatzilygeroudis
Prof. Dr. Vasile Palade
Dr. Isidoros Perikos
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (1 paper)


Research

30 pages, 4185 KiB  
Article
Intelligent Decision Support for Energy Management: A Methodology for Tailored Explainability of Artificial Intelligence Analytics
by Dimitrios P. Panagoulias, Elissaios Sarmas, Vangelis Marinakis, Maria Virvou, George A. Tsihrintzis and Haris Doukas
Electronics 2023, 12(21), 4430; https://doi.org/10.3390/electronics12214430 - 27 Oct 2023
Cited by 4
Abstract
This paper presents a novel development methodology for artificial intelligence (AI) analytics in energy management that focuses on tailored explainability to overcome the “black box” issue associated with AI analytics. Our approach addresses the fact that any given analytic service is to be used by different stakeholders, with different backgrounds, preferences, abilities, skills, and goals. Our methodology is aligned with the explainable artificial intelligence (XAI) paradigm and aims to enhance the interpretability of AI-empowered decision support systems (DSSs). Specifically, a clustering-based approach is adopted to customize the depth of explainability based on the specific needs of different user groups. This approach improves the accuracy and effectiveness of energy management analytics while promoting transparency and trust in the decision-making process. The methodology is structured around an iterative development lifecycle for an intelligent decision support system and includes several steps, such as stakeholder identification, an empirical study on usability and explainability, user clustering analysis, and the implementation of an XAI framework. The XAI framework comprises XAI clusters and local and global XAI, which facilitate higher adoption rates of the AI system and ensure responsible and safe deployment. The methodology is tested on a stacked neural network for an analytics service, which estimates energy savings from renovations, and aims to increase adoption rates and benefit the circular economy.
(This article belongs to the Special Issue Explainability in AI and Machine Learning)
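As a rough, hypothetical sketch of the user-clustering idea described in the abstract (not the authors' implementation; the survey features, cluster count, and depth-mapping rule below are assumptions made purely for illustration), one might group stakeholders by usability-survey scores and assign each cluster a depth of explanation.

```python
# Illustrative sketch (not the paper's implementation): cluster users by
# hypothetical usability-survey scores and tailor the explanation depth
# per cluster, in the spirit of the methodology described in the abstract.
import numpy as np
from sklearn.cluster import KMeans

# Assumed survey features per user: [domain expertise, familiarity with AI], each on a 1-5 scale.
survey = np.array([
    [5, 1], [4, 2], [5, 2],   # domain experts with little AI background
    [2, 5], [1, 4], [2, 4],   # AI/data specialists
    [3, 3], [2, 2], [3, 2],   # general stakeholders
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(survey)

# Assumed mapping rule: clusters with high AI familiarity receive detailed
# local attributions; the rest receive a concise global summary.
for cluster_id, centre in enumerate(kmeans.cluster_centers_):
    ai_familiarity = centre[1]
    depth = "detailed local attributions" if ai_familiarity >= 3 else "concise global summary"
    print(f"Cluster {cluster_id}: centre={centre.round(2)}, explanation depth -> {depth}")
```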