Recent Developments in Explainable Artificial Intelligence

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (20 October 2023) | Viewed by 368

Special Issue Editors


Guest Editor
School of Cybersecurity, Korea University, Seoul 02841, Republic of Korea
Interests: trustworthy AI; AI model stealing; XAI; applications in cybersecurity

Guest Editor
School of Computer Science & Engineering, Soongsil University, Seoul 06978, Republic of Korea
Interests: whole-genome sequencing data analysis; analysis of large-scale data using probabilistic graphical models

Special Issue Information

Dear Colleagues,

Artificial Intelligence (AI) has rapidly advanced in recent years, bringing significant breakthroughs in areas such as natural language processing, computer vision, and robotics, among many others. However, as AI systems become more complex, they become less transparent, and their decision-making processes become harder to interpret. This lack of interpretability poses a significant challenge for AI adoption in critical applications, where the ability to explain decisions and actions is essential. Explainable AI (XAI) aims to address this challenge by developing techniques that can provide clear, understandable, and trustworthy explanations for the behavior and outputs of AI systems.

This Special Issue aims to bring together the latest research on recent developments in XAI. We invite researchers and practitioners to submit original articles on topics including, but not limited to, the following:

  • Novel techniques for generating explanations from AI models;
  • New AI models with improved interpretability;
  • Evaluation methodologies for XAI;
  • Real-world applications of XAI;
  • Human factors and user-centered design for XAI;
  • Theoretical foundations of XAI.
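As a concrete illustration of the kind of post-hoc, model-agnostic explanation technique in scope, the sketch below implements permutation feature importance: each feature column is shuffled in turn, and the resulting drop in accuracy indicates how much the model relies on that feature. The model and data here are toy assumptions for illustration only, not part of the call itself.

```python
import random

def model(x):
    # Toy "trained" classifier: the decision depends only on the first
    # two features; feature 2 is ignored entirely.
    return 1 if 2.0 * x[0] - 1.0 * x[1] > 0 else 0

def accuracy(X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(X, y, n_repeats=20, seed=0):
    """Average accuracy drop when each feature column is shuffled."""
    rng = random.Random(seed)
    base = accuracy(X, y)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [x[j] for x in X]
            rng.shuffle(col)
            X_perm = [x[:j] + [v] + x[j + 1:] for x, v in zip(X, col)]
            drops.append(base - accuracy(X_perm, y))
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy dataset labeled by the model itself, so the baseline accuracy is 1.0
# and every permutation-induced drop is non-negative.
rng = random.Random(1)
X = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
y = [model(x) for x in X]
imp = permutation_importance(X, y)
# imp[2] is exactly 0: shuffling an ignored feature never changes a prediction.
```

Because the technique treats the model as a black box, the same loop applies unchanged to any classifier, which is what makes it a common baseline for evaluating and comparing explanation methods.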

Dr. Sangkyun Lee
Prof. Dr. Kyu-Baek Hwang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • interpretable machine learning
  • explanation generation
  • local/global/multimodal/counterfactual explanations
  • human-in-the-loop
  • explanation validation

Published Papers

There are no accepted submissions to this Special Issue at this moment.