
Fairness in Machine Learning: Information Theoretic Perspectives

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: closed (30 June 2023) | Viewed by 5567

Special Issue Editors


Dr. Prasanna Sattigeri
Guest Editor
IBM Research, Cambridge, MA 02142, USA
Interests: machine learning; computer vision; data science; Bayesian inference; deep generative modeling; uncertainty quantification

Dr. Yuheng Bu
Guest Editor
Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Interests: machine learning; information theory; signal processing; fair/trustworthy machine learning; uncertainty quantification

Special Issue Information

Dear Colleagues,

Recent literature has found that machine learning algorithms can amplify biases present in data and even produce systemic prejudice. As the adoption of machine learning algorithms accelerates across a wide range of applications, including critical workflows such as healthcare management, employment screening, and automated loan processing, the legal and reputational risks of such algorithms increase. Numerous metrics and criteria have been proposed to enforce fairness in machine learning and mitigate these biases. For this Special Issue, we invite submissions presenting novel information-theoretic approaches to fair machine learning, including but not limited to: fairness criteria defined from an information-theoretic perspective, fairness loss functions involving information measures, and new applications in fair reinforcement learning or transfer learning.

Dr. Prasanna Sattigeri
Dr. Yuheng Bu
Guest Editors
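
To illustrate one theme of this call: a common information-theoretic fairness criterion (a relaxation of demographic parity) asks that the model's decisions Yhat be statistically independent of the sensitive attribute S, i.e., I(Yhat; S) = 0. The sketch below is our own minimal illustration, not drawn from any submission; it estimates this mutual information with a plug-in estimator over discrete decisions, which could serve as a fairness penalty added to a training loss. All variable names and the toy data are hypothetical.

```python
# Minimal sketch: demographic parity phrased information-theoretically as
# I(Yhat; S) = 0, i.e., decisions carry no information about S. We estimate
# I(Yhat; S) with a plug-in estimator for discrete arrays (illustrative only).
import numpy as np

def mutual_information(y_hat: np.ndarray, s: np.ndarray) -> float:
    """Plug-in estimate of I(Yhat; S) in nats for discrete arrays."""
    n = len(y_hat)
    joint = {}
    for a, b in zip(y_hat, s):
        joint[(a, b)] = joint.get((a, b), 0) + 1
    p_y = {a: np.mean(y_hat == a) for a in np.unique(y_hat)}
    p_s = {b: np.mean(s == b) for b in np.unique(s)}
    mi = 0.0
    for (a, b), count in joint.items():
        p_ab = count / n
        mi += p_ab * np.log(p_ab / (p_y[a] * p_s[b]))
    return mi

# Toy example: a fairness-regularized objective would add lambda * I(Yhat; S).
y_hat = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # model decisions
s     = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # sensitive attribute
print(f"I(Yhat; S) = {mutual_information(y_hat, s):.4f} nats")  # 0 iff independent
```

The penalty is zero exactly when decisions are independent of the sensitive attribute, and grows as the dependence strengthens; differentiable surrogates of such measures are one way fairness loss functions of the kind solicited above are built.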

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • fair machine learning
  • group fairness
  • individual fairness
  • information-theoretic approach
  • sensitive attribute

Published Papers (2 papers)


Research


44 pages, 3070 KiB  
Article
Differential Fairness: An Intersectional Framework for Fair AI
by Rashidul Islam, Kamrun Naher Keya, Shimei Pan, Anand D. Sarwate and James R. Foulds
Entropy 2023, 25(4), 660; https://doi.org/10.3390/e25040660 - 14 Apr 2023
Cited by 3 | Viewed by 2457
Abstract
We propose definitions of fairness in machine learning and artificial intelligence systems that are informed by the framework of intersectionality, a critical lens from the legal, social science, and humanities literature which analyzes how interlocking systems of power and oppression affect individuals along overlapping dimensions including gender, race, sexual orientation, class, and disability. We show that our criteria behave sensibly for any subset of the set of protected attributes, and we prove economic, privacy, and generalization guarantees. Our theoretical results show that our criteria meaningfully operationalize AI fairness in terms of real-world harms, making the measurements interpretable in a manner analogous to differential privacy. We provide a simple learning algorithm using deterministic gradient methods, which respects our intersectional fairness criteria. The measurement of fairness becomes statistically challenging in the minibatch setting due to data sparsity, which increases rapidly in the number of protected attributes and in the values per protected attribute. To address this, we further develop a practical learning algorithm using stochastic gradient methods which incorporates stochastic estimation of the intersectional fairness criteria on minibatches to scale up to big data. Case studies on census data, the COMPAS criminal recidivism dataset, the HHP hospitalization data, and a loan application dataset from HMDA demonstrate the utility of our methods.
(This article belongs to the Special Issue Fairness in Machine Learning: Information Theoretic Perspectives)
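
To make the criterion concrete: differential fairness, as defined in the paper, requires that for every outcome y and every pair of intersectional groups s_i, s_j, the ratio P(Yhat = y | s_i) / P(Yhat = y | s_j) lies within [exp(-eps), exp(eps)], in analogy with differential privacy. The sketch below is our own illustration of measuring the empirical eps from data, not the authors' code; the Laplace-style smoothing constant and the toy data are our assumptions.

```python
# Sketch: the smallest epsilon such that, for every outcome y and every pair
# of intersectional groups (s_i, s_j),
#     exp(-eps) <= P(Yhat = y | s_i) / P(Yhat = y | s_j) <= exp(eps).
# Laplace smoothing (our illustrative choice) keeps sparse groups finite.
from itertools import combinations
import numpy as np

def empirical_epsilon_df(y_hat, groups, alpha=1.0):
    """Smoothed plug-in estimate of the differential fairness epsilon.

    y_hat  : array of binary decisions
    groups : array of intersectional group labels (e.g., race x gender)
    alpha  : smoothing pseudo-count (hypothetical choice here)
    """
    y_hat, groups = np.asarray(y_hat), np.asarray(groups)
    probs = {}
    for g in np.unique(groups):
        mask = groups == g
        # Smoothed P(Yhat = 1 | group g)
        probs[g] = (y_hat[mask].sum() + alpha) / (mask.sum() + 2 * alpha)
    eps = 0.0
    for gi, gj in combinations(probs, 2):
        for p_i, p_j in [(probs[gi], probs[gj]),
                         (1 - probs[gi], 1 - probs[gj])]:  # both outcomes
            eps = max(eps, abs(np.log(p_i / p_j)))
    return eps

# Toy intersectional groups: (gender, race) flattened into single labels.
y_hat  = np.array([1, 1, 0, 1, 0, 0, 1, 0, 0, 0])
groups = np.array(["f-a", "f-a", "f-b", "f-b", "m-a",
                   "m-a", "m-b", "m-b", "f-a", "m-b"])
print(f"empirical epsilon-DF: {empirical_epsilon_df(y_hat, groups):.3f}")
```

Because the bound must hold over all pairs of intersectional groups, the number of constraints grows quickly with the number of protected attributes, which is precisely the minibatch sparsity challenge the paper's stochastic estimation addresses.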

Review


14 pages, 551 KiB  
Review
A Review of Partial Information Decomposition in Algorithmic Fairness and Explainability
by Sanghamitra Dutta and Faisal Hamman
Entropy 2023, 25(5), 795; https://doi.org/10.3390/e25050795 - 13 May 2023
Viewed by 2088
Abstract
Partial Information Decomposition (PID) is a body of work within information theory that allows one to quantify the information that several random variables provide about another random variable, either individually (unique information), redundantly (shared information), or only jointly (synergistic information). This review article aims to provide a survey of some recent and emerging applications of partial information decomposition in algorithmic fairness and explainability, which are of immense importance given the growing use of machine learning in high-stakes applications. For instance, PID, in conjunction with causality, has enabled the disentanglement of the non-exempt disparity which is the part of the overall disparity that is not due to critical job necessities. Similarly, in federated learning, PID has enabled the quantification of tradeoffs between local and global disparities. We introduce a taxonomy that highlights the role of PID in algorithmic fairness and explainability in three main avenues: (i) Quantifying the legally non-exempt disparity for auditing or training; (ii) Explaining contributions of various features or data points; and (iii) Formalizing tradeoffs among different disparities in federated learning. Lastly, we also review techniques for the estimation of PID measures, as well as discuss some challenges and future directions.
(This article belongs to the Special Issue Fairness in Machine Learning: Information Theoretic Perspectives)
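
As background for the review, the classic Williams-Beer I_min measure gives one concrete way to compute a PID for two sources A, B about a target Y, via the consistency equations I(Y; A) = Unique_A + Redundant, I(Y; B) = Unique_B + Redundant, and I(Y; A, B) = Unique_A + Unique_B + Redundant + Synergy. The self-contained sketch below is our own illustration; I_min is only one of several PID definitions the article surveys, and the noisy-XOR toy distribution (which is almost purely synergistic) is our example, not the authors'.

```python
# Sketch: Williams-Beer (I_min) partial information decomposition of
# I(Y; A, B) into redundant, unique, and synergistic parts, for discrete
# variables given as a joint probability table p[y, a, b]. Illustrative only.
import numpy as np

def mi(p_xy):
    """I(X; Y) in bits from a joint probability table p_xy[x, y]."""
    px, py = p_xy.sum(1, keepdims=True), p_xy.sum(0, keepdims=True)
    mask = p_xy > 0
    return float((p_xy[mask] * np.log2(p_xy[mask] / (px * py)[mask])).sum())

def specific_info(p_yx, y):
    """Specific information I(Y = y; X) = sum_x p(x|y) log2(p(y|x) / p(y))."""
    p_y, p_x = p_yx.sum(1), p_yx.sum(0)
    mask = p_yx[y] > 0
    p_x_given_y = p_yx[y][mask] / p_y[y]
    p_y_given_x = p_yx[y][mask] / p_x[mask]
    return float((p_x_given_y * np.log2(p_y_given_x / p_y[y])).sum())

def pid_imin(p):
    """Decompose p[y, a, b] into (redundant, unique_a, unique_b, synergy)."""
    p_ya, p_yb, p_y = p.sum(2), p.sum(1), p.sum((1, 2))
    red = sum(p_y[y] * min(specific_info(p_ya, y), specific_info(p_yb, y))
              for y in range(p.shape[0]))
    i_yab = mi(p.reshape(p.shape[0], -1))   # treat (A, B) as one variable
    uniq_a, uniq_b = mi(p_ya) - red, mi(p_yb) - red
    return red, uniq_a, uniq_b, i_yab - uniq_a - uniq_b - red

# Noisy XOR: Y = A xor B with probability 0.9 -- mostly synergistic.
p = np.zeros((2, 2, 2))
for a in (0, 1):
    for b in (0, 1):
        p[a ^ b, a, b] += 0.25 * 0.9
        p[1 - (a ^ b), a, b] += 0.25 * 0.1
red, ua, ub, syn = pid_imin(p)
print(f"redundant={red:.3f} unique_A={ua:.3f} unique_B={ub:.3f} synergy={syn:.3f}")
```

In fairness applications of the kind the review covers, Y might be a sensitive attribute and A, B features or model outputs, so the unique, redundant, and synergistic terms separate which parts of a disparity each source is responsible for.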
