Explainability Methods in Artificial Intelligence

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Evolutionary Algorithms and Machine Learning".

Deadline for manuscript submissions: closed (15 August 2023)

Special Issue Editor


Prof. Dr. Robertas Damaševičius
Guest Editor
Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
Interests: disease recognition using artificial intelligence methods; digital health; multimodal interfaces; biomedical imaging

Special Issue Information

Dear Colleagues,

As artificial intelligence (AI) systems are being deployed in more and more critical applications, there is an increasing need to understand how they make decisions and to ensure that they are trustworthy. Explainable AI (XAI) is a rapidly growing area of research that aims to make AI systems more transparent and interpretable, so that their decisions can be understood and trusted by human users.

In recent years, the use of AI systems has grown significantly across domains such as healthcare, finance, and autonomous systems. However, these systems are often built on complex, opaque models, such as deep neural networks, that are difficult for humans to interpret. This lack of transparency makes it hard to understand why a particular decision was made and to verify that the system's decisions are fair and unbiased. To address these issues, researchers have begun to develop methods for making AI systems more interpretable and transparent.

The field of XAI is still in its early stages, and there is currently no consensus on what exactly constitutes an explainable AI system. Nevertheless, several approaches have been proposed to make AI systems more interpretable, including techniques for visualizing deep neural networks, methods for generating human-readable explanations of AI decisions, and approaches for evaluating the interpretability of AI models; a minimal sketch of the first of these appears below.
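To make the first of these approaches concrete, the short sketch below computes a gradient-based saliency map, one of the simplest visualization techniques for deep networks: the predicted class score is backpropagated to the input, and the magnitude of each pixel's gradient is read as its relevance. The toy linear model and random image are hypothetical placeholders chosen only for illustration; any differentiable PyTorch model and real input could be substituted.

    import torch
    import torch.nn as nn

    # Toy stand-in classifier (hypothetical); any differentiable model works.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    model.eval()

    # Hypothetical 28x28 grayscale input; track gradients w.r.t. its pixels.
    x = torch.rand(1, 1, 28, 28, requires_grad=True)
    logits = model(x)
    predicted = logits.argmax(dim=1).item()

    # Backpropagate the predicted class score down to the input pixels.
    logits[0, predicted].backward()

    # Pixel-wise absolute gradients give a crude relevance (saliency) map.
    saliency = x.grad.abs().squeeze()
    print(saliency.shape)  # torch.Size([28, 28])

More refined attribution methods, such as integrated gradients or SHAP, build on the same idea while reducing gradient noise.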

In addition to these technical approaches, there is also a growing body of research on the social and ethical implications of XAI, including studies on the trade-offs between model complexity and interpretability, and research on the integration of explainability methods with other AI tasks, such as fairness and robustness.

This Special Issue aims to provide a comprehensive overview of the latest developments and trends in XAI, covering a wide range of topics related to the transparency and interpretability of AI systems. We invite submissions from researchers in machine learning, computer vision, natural language processing, and other fields relevant to explainable AI.

The goal of this Special Issue is to provide a forum for researchers to share their latest findings and to promote further discussion and collaboration in this rapidly growing field.

Prof. Dr. Robertas Damaševičius
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • techniques for visualizing and interpreting deep neural networks
  • methods for generating human-readable explanations of AI decisions
  • approaches for evaluating the interpretability of AI models
  • research on the trade-offs between model complexity and interpretability
  • integration of explainability methods with other AI tasks, such as fairness and robustness
  • theoretical foundations and frameworks for explainable AI
  • case studies and real-world applications of explainable AI
  • human–AI interaction and explainability in human-in-the-loop systems
  • natural language generation models and chatbots
  • advances in explainable AI in various domains, such as healthcare, autonomous systems, education, and finance
  • surveys of explainable AI systems and applications
  • future trends and open challenges in explainable AI

Published Papers

There are no accepted submissions to this Special Issue at this time.