Causal and Explainable Artificial Intelligence

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 30 April 2024

Special Issue Editors


Dr. Yanchang Zhao
Guest Editor
Data61, CSIRO, GPO Box 1700, Canberra, ACT 2601, Australia
Interests: interpretable and explainable AI; causal discovery and inference

Dr. Yun-Sing Koh
Guest Editor
School of Computer Science, The University of Auckland, Auckland 1010, New Zealand
Interests: machine learning; unsupervised learning and weak supervision; data stream mining and continual learning; anomaly detection; transfer learning

Special Issue Information

Dear Colleagues,

Over the last decade, machine learning (ML) and artificial intelligence (AI) have been increasingly adopted in various domains. However, the lack of transparency and interpretability of AI/ML models has led to a growing demand to make them more understandable to humans. This is crucial both for effective collaboration between humans and AI systems and for regulatory compliance.

Building on the Causal and Explainable AI (CXAI 2023) workshop, this Special Issue will present a collection of cutting-edge research and recent real-world applications in causal inference/discovery and interpretable/explainable AI, with the aim of making complex or black-box AI/ML models understandable and supporting reliable, trustworthy, and responsible decision-making.

Authors are encouraged to first submit their papers to the CXAI 2023 workshop and, if accepted, to then submit extended versions to this Special Issue. Alternatively, authors may submit their papers directly to the Special Issue, without submitting to the CXAI workshop.

Dr. Yanchang Zhao
Dr. Yun-Sing Koh
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • interpretable machine learning
  • explainable artificial intelligence
  • causal discovery
  • causal inference
  • counterfactual analysis

Published Papers (1 paper)

24 pages, 398 KiB  
Review
Making Sense of Machine Learning: A Review of Interpretation Techniques and Their Applications
by Ainura Tursunalieva, David L. J. Alexander, Rob Dunne, Jiaming Li, Luis Riera and Yanchang Zhao
Appl. Sci. 2024, 14(2), 496; https://doi.org/10.3390/app14020496 - 05 Jan 2024
Abstract
Transparency in AI models is essential for promoting human–AI collaboration and ensuring regulatory compliance. However, interpreting these models is a complex process influenced by various methods and datasets. This study presents a comprehensive overview of foundational interpretation techniques, meticulously referencing the original authors and emphasizing their pivotal contributions. Recognizing the seminal work of these pioneers is imperative for contextualizing the evolutionary trajectory of interpretation in the field of AI. This research also offers a retrospective analysis of interpretation techniques, critically evaluating their inherent strengths and limitations. We categorize these techniques into model-based, representation-based, post hoc, and hybrid methods, delving into their diverse applications. Furthermore, we analyze publication trends to see how the adoption of advanced computational methods within the various categories of interpretation techniques has shaped the development of AI interpretability over time. This analysis highlights a notable shift in preference towards data-driven approaches in the field. Moreover, we consider crucial factors such as the suitability of these techniques for generating local or global insights and their compatibility with different data types, including images, text, and tabular data. This structured categorization serves as a guide for practitioners navigating the landscape of interpretation techniques in AI. In summary, this review not only synthesizes various interpretation techniques but also acknowledges the contributions of their original authors. By emphasizing the origins of these techniques, we aim to enhance AI model explainability and underscore the importance of recognizing the biases, uncertainties, and limitations inherent in the methods and datasets. This approach promotes the ethical and practical use of interpretation insights, empowering AI practitioners, researchers, and professionals to make informed decisions when selecting techniques for responsible AI implementation in real-world scenarios.
(This article belongs to the Special Issue Causal and Explainable Artificial Intelligence)
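
To make the review's taxonomy concrete, the Python sketch below illustrates one technique from its post hoc category: permutation feature importance, a model-agnostic method that shuffles each feature in turn and measures the resulting drop in predictive accuracy. This is a minimal illustrative sketch only; the dataset, model, and function names are our own placeholder choices and are not taken from the paper.

    # Illustrative sketch of a post hoc, model-agnostic interpretation
    # technique (permutation feature importance). The dataset and model
    # are arbitrary placeholders, not those used in the reviewed paper.
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    def permutation_importance(model, X, y, n_repeats=10, seed=0):
        """Importance of feature j = mean accuracy drop after shuffling column j."""
        rng = np.random.default_rng(seed)
        baseline = model.score(X, y)
        importances = np.zeros(X.shape[1])
        for j in range(X.shape[1]):
            drops = []
            for _ in range(n_repeats):
                X_perm = X.copy()
                rng.shuffle(X_perm[:, j])  # break the link between feature j and the target
                drops.append(baseline - model.score(X_perm, y))
            importances[j] = np.mean(drops)
        return importances

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    print(permutation_importance(model, X_test, y_test))

Because the procedure only queries the fitted model's predictions, it applies unchanged to any classifier; this model-agnostic quality is what distinguishes post hoc methods from the model-based techniques the review contrasts them with.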
