
Causal Inference and Causal AI: Machine Learning Meets Information Theory

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: closed (31 December 2023) | Viewed by 1424

Special Issue Editors


Dr. Chee Wei Tan
Guest Editor
School of Computer Science and Engineering, Nanyang Technological University, Singapore 639798, Singapore
Interests: artificial intelligence; networks; network science; machine learning and optimization theory

Dr. Siu-Wai Ho
Guest Editor
Teletraffic Research Centre, The University of Adelaide, Adelaide, SA 5005, Australia
Interests: information theory; communication theory; network coding; causal inference; machine learning

Special Issue Information

Dear Colleagues,

In several fields, such as statistics, machine learning, and quantum information theory, inferring latent structure is a crucial task: uncovering hidden variables or underlying patterns that can explain the observed data. There are two key steps in this process: model specification and causal inference. Model specification involves selecting a probabilistic model that can capture the relationship between the observed and latent variables; causal inference requires computing the posterior distribution of the latent variables given the observed data. Successful applications in computer science include topic modeling, collaborative filtering in recommendation systems, and automated theorem proving.

In recent years, causality has also begun to play a critical role in decision-making problems, notably in Explainable Artificial Intelligence (AI) and causal AI, because it facilitates generalization and robustness. Causal AI refers to the use of machine learning algorithms and techniques to identify causal relationships and make predictions based on them. This, in turn, supports Explainable AI: the design of AI systems that can provide clear and understandable explanations for their decisions and actions. Causal AI has a wide range of applications in fields such as healthcare, finance, and marketing. In healthcare, for example, it can be used to identify the causes of diseases and develop effective treatments; in finance, it can be used to identify the causes of market fluctuations and predict future trends.

Information-theoretic approaches provide a unique set of tools that expand the range of traditional methods for causal inference and discovery in various applications. These include entropy regularization, directed information, the minimum information principle, minimum entropy couplings, and the information bottleneck principle for analyzing the causal structure of deep neural networks. Causal inference and causal AI thus rely on information theory to quantify how well machine learning techniques make explainable predictions or decisions.

For this Special Issue, we invite submissions presenting novel machine learning and information-theoretic approaches to causal learning in real-world applications, including, but not limited to:

  • causal inference
  • model interpretation
  • graphical models
  • belief propagation and message-passing algorithms
  • Explainable AI from an information-theoretic perspective
  • generative AI and causal AI
  • emerging machine learning applications based on Large Language Models
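As a lightweight illustration of the information-theoretic tools mentioned above, the sketch below estimates transfer entropy, a directed quantity closely related to directed information, between two binary time series using simple plug-in entropy estimates and compares both directions. It is a minimal sketch for orientation only; the toy data-generating process and all function names are assumptions of this example, not methods prescribed by this Special Issue.

```python
# Minimal illustrative sketch (assumed example, not a prescribed method): plug-in
# estimation of transfer entropy between two binary time series.
import numpy as np
from collections import Counter

def entropy_bits(counts):
    """Shannon entropy (in bits) of the empirical distribution given by positive counts."""
    p = np.asarray(list(counts), dtype=float)
    p = p / p.sum()
    return float(-np.sum(p * np.log2(p)))

def transfer_entropy(x, y):
    """Plug-in estimate of TE(X -> Y) = H(Y_t | Y_{t-1}) - H(Y_t | Y_{t-1}, X_{t-1})."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # joint counts of (y_t, y_{t-1}, x_{t-1})
    yy = Counter(zip(y[1:], y[:-1]))                # joint counts of (y_t, y_{t-1})
    y_past = Counter(y[:-1].tolist())               # counts of y_{t-1}
    yx = Counter(zip(y[:-1], x[:-1]))               # joint counts of (y_{t-1}, x_{t-1})
    # Conditional entropies computed as differences of joint entropies.
    h_y_given_ypast = entropy_bits(yy.values()) - entropy_bits(y_past.values())
    h_y_given_both = entropy_bits(triples.values()) - entropy_bits(yx.values())
    return h_y_given_ypast - h_y_given_both

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 5000
    x = rng.integers(0, 2, n)
    # y copies x with a one-step delay and 10% bit flips, so information flows X -> Y.
    y = (np.roll(x, 1) ^ (rng.random(n) < 0.1)).astype(int)
    print("TE(X -> Y):", round(transfer_entropy(x, y), 3))   # clearly positive
    print("TE(Y -> X):", round(transfer_entropy(y, x), 3))   # close to zero
```

In this toy setting, the X -> Y estimate is clearly positive (roughly 1 - h(0.1), about 0.53 bits) while the Y -> X estimate stays near zero, matching the direction in which information actually flows.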

Dr. Chee Wei Tan
Dr. Siu-Wai Ho
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • causal inference
  • causal learning
  • model interpretation
  • graphical models
  • belief propagation
  • message passing algorithms
  • information bottleneck
  • latent structures
  • large language models
  • explainable AI
  • causal AI

Published Papers (1 paper)


Research

31 pages, 38115 KiB  
Article
FedTKD: A Trustworthy Heterogeneous Federated Learning Based on Adaptive Knowledge Distillation
by Leiming Chen, Weishan Zhang, Cihao Dong, Dehai Zhao, Xingjie Zeng, Sibo Qiao, Yichang Zhu and Chee Wei Tan
Entropy 2024, 26(1), 96; https://doi.org/10.3390/e26010096 - 22 Jan 2024
Viewed by 959
Abstract
Federated learning allows multiple parties to train models while jointly protecting user privacy. However, traditional federated learning requires each client to have the same model structure to fuse the global model. In real-world scenarios, each client may need to develop personalized models based on its environment, making it difficult to perform federated learning in a heterogeneous model environment. Some knowledge distillation methods address the problem of heterogeneous model fusion to some extent. However, these methods assume that each client is trustworthy. Some clients may produce malicious or low-quality knowledge, making it difficult to aggregate trustworthy knowledge in a heterogeneous environment. To address these challenges, we propose a trustworthy heterogeneous federated learning framework (FedTKD) to achieve client identification and trustworthy knowledge fusion. Firstly, we propose a malicious client identification method based on client logit features, which can exclude malicious information when fusing the global logit. Then, we propose a selective knowledge fusion method to achieve high-quality global logit computation. Additionally, we propose an adaptive knowledge distillation method to improve the accuracy of knowledge transfer from the server side to the client side. Finally, we design different attack and data distribution scenarios to validate our method. The experiments show that our method outperforms the baseline methods, showing stable performance in all attack scenarios and achieving an accuracy improvement of 2% to 3% in different data distributions.
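The abstract describes logit-level knowledge fusion with malicious-client filtering followed by knowledge distillation. The sketch below is a generic, hypothetical illustration of that broad idea (a consensus-based similarity filter over client logits and a temperature-softened distillation loss); it is not the FedTKD algorithm, and all function names, shapes, and thresholds are assumptions made for this example.

```python
# Generic, hypothetical sketch of logit-based knowledge fusion with outlier filtering
# and soft-label distillation. It is NOT the FedTKD algorithm; names, shapes, and
# thresholds here are assumptions made purely for illustration.
import numpy as np

def fuse_client_logits(client_logits, sim_threshold=0.8):
    """Average per-sample logits from clients whose outputs agree with the consensus.

    client_logits: array of shape (num_clients, num_samples, num_classes),
    each client's logits on a shared reference dataset.
    """
    client_logits = np.asarray(client_logits, dtype=float)
    consensus = client_logits.mean(axis=0).ravel()          # initial consensus direction
    kept = []
    for k, logits in enumerate(client_logits):
        flat = logits.ravel()
        cos = flat @ consensus / (np.linalg.norm(flat) * np.linalg.norm(consensus) + 1e-12)
        if cos >= sim_threshold:                             # keep clients close to the consensus
            kept.append(k)
    if not kept:                                             # fall back to all clients if the filter is too strict
        kept = list(range(len(client_logits)))
    return client_logits[kept].mean(axis=0), kept

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, global_logits, temperature=2.0):
    """KL(teacher || student) between temperature-softened distributions."""
    p_t = softmax(np.asarray(global_logits) / temperature)
    p_s = softmax(np.asarray(student_logits) / temperature)
    return float(np.mean(np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.normal(size=(8, 5))                                 # shared "true" logits: 8 samples, 5 classes
    honest = base + 0.1 * rng.normal(size=(4, 8, 5))               # 4 honest clients near the consensus
    malicious = -base[None] + 0.1 * rng.normal(size=(1, 8, 5))     # 1 client sending inverted (malicious) logits
    logits = np.concatenate([honest, malicious])
    global_logits, kept = fuse_client_logits(logits)
    print("clients kept:", kept)                                    # the inverted client should be filtered out
    print("distillation loss of client 0:", round(distillation_loss(honest[0], global_logits), 4))
```

A cosine-similarity test against the mean logit is only one possible trust score; the paper's own identification method is based on client logit features, and its fusion and distillation steps may differ substantially.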