Fairness and Explanation for Trustworthy AI

A special issue of Machine Learning and Knowledge Extraction (ISSN 2504-4990). This special issue belongs to the section "Privacy".

Deadline for manuscript submissions: closed (15 December 2023)

Special Issue Editors

Guest Editor: Dr. Jianlong Zhou
Data Science Institute, University of Technology Sydney, Ultimo, NSW 2007, Australia
Interests: AI ethics; AI fairness; AI explainability; behavior analytics; human–computer interaction

Guest Editor: Prof. Dr. Andreas Holzinger
1. Human-Centered AI Lab, Institute of Forest Engineering, Department of Forest and Soil Sciences, University of Natural Resources and Life Sciences, 1190 Vienna, Austria
2. xAI Lab, Alberta Machine Intelligence Institute, University of Alberta, Edmonton, AB T5J 3B1, Canada
Interests: human-centered AI; explainable AI; interactive machine learning; decision support; medical AI

Guest Editor: Prof. Dr. Fang Chen
Data Science Institute, University of Technology Sydney, Ultimo, NSW 2007, Australia
Interests: machine learning; pattern recognition; human–machine interaction; behavior analytics; cognitive modelling

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) and machine learning (ML) increasingly shape our daily lives by making, or at least influencing, decisions with ethical and legal implications in a wide variety of application areas, from agriculture to zoology. However, due to biased input data and/or flawed algorithms, unfair AI-informed decision-making systems can reinforce discrimination, such as racial or gender bias, and can cause harm in high-risk environments through incorrect decisions, e.g., in medical diagnoses. Furthermore, because algorithms such as deep learning models act as black boxes, their use requires verification and plausibility checks by experts, especially in high-risk areas such as health, not only for safety and ethical reasons but also for mandatory legal reasons. Meeting these requirements demands re-traceability, explainability, interpretability, and transparency in AI systems, which is technically challenging. AI explanations will become indispensable for interpreting black-box results and giving users insight into a system's decision-making process. At the same time, fairness and explanation are key components in fostering trust and confidence in AI systems. This Special Issue features cutting-edge research in which fairness and explanations underpin trustworthy decision-making in AI systems.

This Special Issue invites submissions featuring original research on designing, presenting, and evaluating approaches to fairness and explanation in AI systems, with the aim of improving human trust in those systems.

Dr. Jianlong Zhou
Prof. Dr. Andreas Holzinger
Prof. Dr. Fang Chen
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and are listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Machine Learning and Knowledge Extraction is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • role of fairness in trustworthy AI systems
  • role of explanation in trustworthy AI systems
  • role of both fairness and explanation in trustworthy AI systems
  • human judgement of fairness and explanations in AI systems
  • innovative methods and technologies in presenting fairness and explanations for boosting trustworthiness of AI systems
  • novel applications of user experience design and evaluation methods for trustworthy AI with fairness and explanations
  • social, ethical, and legal aspects of fairness in AI for fostering trustworthy AI

Published Papers (6 papers)


Research

25 pages, 1150 KiB  
Article
More Capable, Less Benevolent: Trust Perceptions of AI Systems across Societal Contexts
by Ekaterina Novozhilova, Kate Mays, Sejin Paik and James E. Katz
Mach. Learn. Knowl. Extr. 2024, 6(1), 342-366; https://doi.org/10.3390/make6010017 - 05 Feb 2024
Abstract
Modern AI applications have broad societal implications across key public domains. While previous research has focused primarily on individual user perspectives on AI systems, this study expands our understanding to encompass general public perceptions. Through a survey (N = 1506), we examined public trust across various tasks within the education, healthcare, and creative arts domains. The results show that participants' trust varies across domains. Notably, AI systems' abilities were rated higher than their benevolence in all domains. Demographic traits had less influence on trust in AI abilities and benevolence than technology-related factors. Specifically, participants with greater technological competence, AI familiarity, and AI knowledge viewed AI as more capable in all domains. These participants also perceived greater system benevolence in healthcare and the creative arts, but not in education. We discuss the importance of considering public trust and its determinants in AI adoption.
(This article belongs to the Special Issue Fairness and Explanation for Trustworthy AI)

13 pages, 1194 KiB  
Article
What Do the Regulators Mean? A Taxonomy of Regulatory Principles for the Use of AI in Financial Services
by Mustafa Pamuk, Matthias Schumann and Robert C. Nickerson
Mach. Learn. Knowl. Extr. 2024, 6(1), 143-155; https://doi.org/10.3390/make6010008 - 11 Jan 2024
Abstract
The push toward automation in the financial industry creates fertile ground for the use of artificial intelligence. However, complex and demanding regulatory standards and rapid technological developments pose significant challenges to developing and deploying AI-based services in finance. The regulatory principles defined by financial authorities in Europe need to be structured in a fine-granular way to promote understanding and to ensure customer safety and the quality of AI-based services in the financial industry. This will lead to a better understanding of regulators' priorities and will guide how AI-based services are built. This paper provides a classification pattern with a taxonomy that clarifies the existing European regulatory principles for researchers, regulatory authorities, and financial services companies. By bringing out the thematic focus of the regulatory principles, our study can pave the way for developing compliant AI-based services.
(This article belongs to the Special Issue Fairness and Explanation for Trustworthy AI)

13 pages, 632 KiB  
Article
Evaluating the Role of Machine Learning in Defense Applications and Industry
by Evaldo Jorge Alcántara Suárez and Victor Monzon Baeza
Mach. Learn. Knowl. Extr. 2023, 5(4), 1557-1569; https://doi.org/10.3390/make5040078 - 22 Oct 2023
Abstract
Machine learning (ML) has become a critical technology in the defense sector, enabling the development of advanced systems for threat detection, decision making, and autonomous operations. However, the increasing use of ML in defense systems has raised ethical concerns related to accountability, transparency, and bias. In this paper, we provide a comprehensive analysis of the impact of ML on the defense sector, including the benefits and drawbacks of using ML in applications such as surveillance, target identification, and autonomous weapons systems. We also discuss the ethical implications of using ML in defense, focusing on privacy, accountability, and bias. Finally, we present recommendations for mitigating these ethical concerns, including increased transparency, accountability, and stakeholder involvement in designing and deploying ML systems in the defense sector.
(This article belongs to the Special Issue Fairness and Explanation for Trustworthy AI)

20 pages, 1444 KiB  
Article
FairCaipi: A Combination of Explanatory Interactive and Fair Machine Learning for Human and Machine Bias Reduction
by Louisa Heidrich, Emanuel Slany, Stephan Scheele and Ute Schmid
Mach. Learn. Knowl. Extr. 2023, 5(4), 1519-1538; https://doi.org/10.3390/make5040076 - 18 Oct 2023
Abstract
The rise of machine-learning applications in domains with critical end-user impact has led to growing concern about the fairness of learned models, with the goal of avoiding biases that negatively affect specific demographic groups. Most existing bias-mitigation strategies adjust the importance of data instances during pre-processing. Since fairness is a contextual concept, we advocate an interactive machine-learning approach that enables users to provide iterative feedback for model adaptation. Specifically, we propose adapting the explanatory interactive machine-learning approach Caipi for fair machine learning. FairCaipi incorporates human feedback on predictions and explanations in the loop to improve the fairness of the model. Experimental results demonstrate that FairCaipi outperforms a state-of-the-art pre-processing bias-mitigation strategy in terms of both the fairness and the predictive performance of the resulting machine-learning model. We show that FairCaipi can both uncover and reduce bias in machine-learning models and can also detect human bias.
(This article belongs to the Special Issue Fairness and Explanation for Trustworthy AI)
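As a rough illustration of the interaction scheme described in the abstract, the following sketch shows one possible explanatory interactive learning cycle. It is a hypothetical reading, not the authors' code: the `explain` and `ask_user` callables stand in for the paper's explanation and human-feedback components, and the counterexample-augmentation step is an assumption modeled on the original Caipi scheme.

```python
# Hypothetical sketch of an explanatory interactive fairness loop in the
# spirit of FairCaipi. `explain` and `ask_user` are assumed callables; see
# the paper for the actual algorithm and fairness criteria.
import numpy as np
from sklearn.linear_model import LogisticRegression

def interactive_fairness_loop(X_train, y_train, X_pool, explain, ask_user, rounds=10):
    """Each round, a human reviews one prediction together with its
    explanation; when the model is 'right for the wrong reasons' (e.g., it
    relies on a protected attribute), the human supplies corrective
    counterexamples, which are added to the training set before retraining."""
    model = LogisticRegression()
    rng = np.random.default_rng(0)
    for _ in range(rounds):
        model.fit(X_train, y_train)
        x = X_pool[rng.integers(len(X_pool))]        # pick an instance to audit
        y_hat = model.predict(x.reshape(1, -1))[0]
        expl = explain(model, x)                     # e.g., feature attributions
        feedback = ask_user(x, y_hat, expl)          # None = prediction and
        if feedback is not None:                     # explanation both accepted
            X_extra, y_extra = feedback              # corrective counterexamples
            X_train = np.vstack([X_train, X_extra])
            y_train = np.concatenate([y_train, y_extra])
    return model
```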

24 pages, 1534 KiB  
Article
Fairness and Explanation in AI-Informed Decision Making
by Alessa Angerschmid, Jianlong Zhou, Kevin Theuermann, Fang Chen and Andreas Holzinger
Mach. Learn. Knowl. Extr. 2022, 4(2), 556-579; https://doi.org/10.3390/make4020026 - 16 Jun 2022
Cited by 50
Abstract
AI-assisted decision-making that impacts individuals raises critical questions about transparency and fairness in artificial intelligence (AI). Much research has highlighted the reciprocal relationship between transparency/explanation and fairness in AI-assisted decision-making. Considering their impact on user trust and perceived fairness simultaneously would therefore benefit the responsible use of socio-technical AI systems, but this currently receives little attention. In this paper, we investigate the effects of AI explanations and fairness on human-AI trust and perceived fairness in specific AI-based decision-making scenarios. A user study simulating AI-assisted decision-making in two scenarios, health insurance and medical treatment, provided important insights. Owing to the global pandemic and its restrictions, the user studies were conducted as online surveys. From the perspective of participants' trust, fairness was found to affect user trust only at a low fairness level, which reduced user trust. Adding explanations, however, helped users increase their trust in AI-assisted decision-making. From the perspective of perceived fairness, low levels of introduced fairness decreased users' perceptions of fairness, while high levels increased them. Adding explanations clearly increased the perception of fairness. Furthermore, we found that the application scenario influenced trust and perceptions of fairness. The results show that the use of AI explanations and fairness statements in AI applications is complex: we need to consider not only the type of explanations and the degree of fairness introduced, but also the scenarios in which AI-assisted decision-making is used.
(This article belongs to the Special Issue Fairness and Explanation for Trustworthy AI)

14 pages, 730 KiB  
Article
Developing a Novel Fair-Loan Classifier through a Multi-Sensitive Debiasing Pipeline: DualFair
by Arashdeep Singh, Jashandeep Singh, Ariba Khan and Amar Gupta
Mach. Learn. Knowl. Extr. 2022, 4(1), 240-253; https://doi.org/10.3390/make4010011 - 12 Mar 2022
Cited by 3
Abstract
Machine learning (ML) models are increasingly being used for high-stakes applications that can greatly impact people's lives. Sometimes, these models can be biased toward certain social groups on the basis of race, gender, or ethnicity. Many prior works have attempted to mitigate this "model discrimination" by updating the training data (pre-processing), altering the model learning process (in-processing), or manipulating the model output (post-processing). However, more work is needed to extend these approaches to intersectional fairness, where multiple sensitive parameters (e.g., race) and sensitive options (e.g., Black or white) are considered, allowing for greater real-world usability. Prior work in fairness has also suffered from an accuracy–fairness trade-off that prevents both from being high. Moreover, the previous literature has not presented holistic fairness metrics that work with intersectional fairness. In this paper, we address all three of these problems by (a) creating a bias-mitigation technique called DualFair and (b) developing a new fairness metric (AWI, a measure of the bias of an algorithm based on inconsistent counterfactual predictions) that can handle intersectional fairness. Lastly, we test our mitigation method on a comprehensive U.S. mortgage lending dataset and show that our classifier, or fair loan predictor, achieves relatively high fairness and accuracy.
(This article belongs to the Special Issue Fairness and Explanation for Trustworthy AI)
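As a rough illustration of the counterfactual idea behind AWI, the following sketch scores a classifier by how often its prediction flips when the sensitive attributes are set to other values. It is a hedged approximation, assuming numerically encoded sensitive columns; the actual AWI definition in the paper may weight or aggregate inconsistencies differently.

```python
# Hypothetical counterfactual-consistency score in the spirit of AWI: flip
# the sensitive attributes of each instance and count how often the model's
# decision changes. Column indices and encodings are assumptions.
import itertools
import numpy as np

def counterfactual_inconsistency(model, X, sensitive_cols, sensitive_options):
    """Fraction of instances whose prediction changes under at least one
    counterfactual assignment of the sensitive attributes (lower = fairer)."""
    base = model.predict(X)
    inconsistent = np.zeros(len(X), dtype=bool)
    # Enumerate every intersectional combination of sensitive options,
    # e.g., (race, gender) -> all pairs of (race option, gender option).
    for combo in itertools.product(*sensitive_options):
        X_cf = X.copy()
        for col, val in zip(sensitive_cols, combo):
            X_cf[:, col] = val                  # impose the counterfactual value
        inconsistent |= model.predict(X_cf) != base
    return inconsistent.mean()
```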
