Review

Artificial Intelligence Ethics and Challenges in Healthcare Applications: A Comprehensive Review in the Context of the European GDPR Mandate

by Mohammad Mohammad Amini 1,*, Marcia Jesus 1, Davood Fanaei Sheikholeslami 1, Paulo Alves 2,3, Aliakbar Hassanzadeh Benam 1 and Fatemeh Hariri 1

1 R&D Department, SENSOMATT Lda., 6000-767 Castelo Branco, Portugal
2 Centre for Interdisciplinary Research in Health, Universidade Católica Portuguesa, Rua de Diogo Botelho, 1327, 4169-005 Porto, Portugal
3 Institute of Health Sciences, School of Nursing, 4200-072 Porto, Portugal
* Author to whom correspondence should be addressed.
Mach. Learn. Knowl. Extr. 2023, 5(3), 1023-1035; https://doi.org/10.3390/make5030053
Submission received: 13 June 2023 / Revised: 31 July 2023 / Accepted: 3 August 2023 / Published: 7 August 2023

Abstract:
This study examines the ethical issues surrounding the use of Artificial Intelligence (AI) in healthcare, specifically nursing, under the European General Data Protection Regulation (GDPR). The analysis delves into how the GDPR applies to healthcare AI projects, encompassing data collection and decision-making stages, to reveal the ethical implications at each step. A comprehensive review of the literature organizes the reviewed studies into three main categories: Ethical Considerations in AI; Practical Challenges and Solutions in AI Integration; and Legal and Policy Implications in AI. The analysis uncovers a significant research deficit in this field, particularly regarding data owner rights and AI ethics within GDPR compliance. To address this gap, the study proposes new case studies that emphasize the importance of comprehending data owner rights and establishing ethical norms for AI use in medical applications, especially in nursing. This review makes a valuable contribution to the AI ethics debate and assists nursing and healthcare professionals in developing ethical AI practices. The insights provided help stakeholders navigate the intricate terrain of data protection, ethical considerations, and regulatory compliance in AI-driven healthcare. Lastly, the study introduces a case study of a real AI health-tech project named SENSOMATT, spotlighting GDPR and privacy issues.

1. Introduction

Artificial Intelligence (AI) has emerged as a transformative technology, reshaping diverse sectors, most prominently healthcare. AI plays a pivotal role in identifying rare genetic illnesses, improving patient flow in mental health facilities, aiding clinical decision making, and transforming pathological studies. However, the increasing ubiquity of AI in healthcare also invites complex ethical, practical, and legal considerations, particularly within the European General Data Protection Regulation (GDPR) framework.
Ethical aspects of AI, such as interpretability, accountability, and bias, are critical and need to be addressed carefully. Recent research has identified a need for better understandability of machine learning algorithms and predictions. This includes clarifying AI’s role in decision making, advocating for transparency, reducing algorithmic bias, and enhancing trust among stakeholders [1].
Furthermore, the design and use of AI in healthcare systems require a holistic view. Human-centered design principles can help integrate AI in a way that respects local community demands and socio-technical factors. Especially in underprivileged regions, such as the Global South, AI should be used to democratize, not monopolize, healthcare provision. The potential bias in AI, fueled by narrow measurements and unrepresentative data, calls for long-term structural changes, incorporating social, cultural, and ethical judgements into medical AI instruction.
On the practical side, the implementation of AI faces technical and pedagogical hurdles. The cross-disciplinary nature of AI necessitates joint consideration of technical and legal aspects, especially in sensitive areas such as data cleaning in medical AI. Instructors’ technological skills are instrumental but not sufficient for integrating AI in classrooms, further complicating the issue.
AI implementation extends beyond healthcare; it also affects domains such as cyber-physical production systems (CPPS). Insights from ethical AI research should inform the design of CPPS. However, the development of such systems also underscores the need for “effective contestability” in AI decision making [2].
Legal and policy implications of AI are manifold, with privacy and GDPR compliance at the forefront. Health data’s value to both public and private entities necessitates better mechanisms for collection and dissemination, with a focus on sustainable methods for curating open-source health data. In addition, GDPR-compliant personal data protection procedures need integration with FAIR data principles for environmental and health research. Amidst growing concerns over consumer privacy and data protection, national and international rules and regulations need to be enforced [3].
AI presents a multitude of opportunities for enhancing healthcare, but its adoption must be carefully managed, keeping ethical, practical, and legal considerations at the forefront. This comprehensive review provides insights into these aspects, aiming to contribute to the ongoing discourse on AI ethics and assist stakeholders in formulating responsible AI practices in healthcare.

2. Materials and Methods

The overarching aim of this comprehensive review is to shed light on the ethical implications of artificial intelligence (AI) applications within healthcare, with specific emphasis on the nursing sector and within the context of the European General Data Protection Regulation (GDPR). The methodology employed follows a systematic approach to gather, assess, and synthesize relevant literature, encompassing scholarly articles, conference papers, and research reports that delve into the intersection of AI ethics, GDPR, and healthcare applications.
The initial stage of the review process hinged on setting up the review protocol, which defined the research objectives, inclusion and exclusion criteria, and the search strategy. The fundamental research objective focused on identifying studies that discuss the ethical implications of AI in healthcare within the GDPR context, with specific interest in issues related to data privacy and security; informed consent; bias and fairness; transparency and explainability; and accountability and responsibility. Peer-reviewed articles published in English within the last decade were included to ensure relevance and contemporaneity.
A comprehensive literature search was conducted using a mix of pertinent keywords and Boolean operators across several databases, including PubMed, IEEE Xplore, ACM Digital Library, and Web of Science. Keywords included terms such as “AI in healthcare”, “GDPR”, “ethics”, “data privacy”, “security”, “informed consent”, “bias”, “fairness”, “transparency”, “explainability”, “accountability”, “responsibility”, and more. The references of the selected articles were also scrutinized for any additional relevant sources.
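For illustration only, the sketch below shows how a Boolean query combining the listed keyword themes might be assembled for such a search; the exact strings submitted to PubMed, IEEE Xplore, ACM Digital Library, and Web of Science are not reproduced here, so the grouping and terms shown are assumptions rather than the review's actual queries.

```python
# Hypothetical Boolean search string combining the review's keyword themes.
# This is an example of the query style, not the query used in the review.
query = (
    '("AI in healthcare" OR "artificial intelligence" OR "machine learning") '
    'AND ("GDPR" OR "data privacy" OR "security") '
    'AND ("ethics" OR "informed consent" OR "bias" OR "fairness" '
    'OR "transparency" OR "explainability" OR "accountability" OR "responsibility")'
)
print(query)
```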
Subsequent to the initial search, titles and abstracts were screened for relevance. This was followed by a full-text review of chosen articles, with any articles not meeting the criteria being excluded. The selection process was independently conducted by two reviewers, with any differences reconciled through discussion or consultation with a third reviewer.
A standard template was utilized in the data extraction process to maintain uniformity. Information such as authors, year of publication, research objectives, methodologies used, key findings, and implications was extracted. A quality assessment of the chosen articles was conducted using the Critical Appraisal Skills Programme (CASP) tool, with emphasis on clarity of research objectives, appropriateness of the methodology, validity of the results, and the overall relevance of the findings to this review.
The final step involved a thematic analysis of the collected data, facilitating the identification of common themes and trends within the literature. The reviewed studies were organized into three primary groups: Ethical Considerations in AI, Practical Challenges and Solutions in AI Integration, and Legal and Policy Implications in AI. Each group was further segmented into sub-groups for a more nuanced discussion.
The categorization of all three main groups and six sub-groups can be found in Table 1.
The rigorous and systematic approach employed throughout the review process ensures the reliability and validity of the findings, providing valuable insights into ethical considerations within AI healthcare applications under the GDPR mandate, thus contributing to the development of responsible AI practices in healthcare.
This review investigates and analyzes over 70 articles published between 2019 and June 2023. Among them, we selected 30 papers relevant to health-tech AI and data collection issues. These papers are used to present the challenges in AI data protection and privacy policy, especially in practical healthcare cases. One of the references includes a sample and case study of an R&D project dataset collection in which the authors were involved. While the details of this case study will be published in a separate paper, it is briefly mentioned in the Results section.
All 30 papers are divided into three sections. The first, Ethical Considerations and Mandates, delves into the intricate ethical aspects associated with the implementation and utilization of AI in healthcare. The main takeaway from this section is the critical need for a balanced approach that combines technical proficiency with ethical sensitivity, including data privacy, bias avoidance, and patient-centric perspectives (12 papers are cited in this section).
The following section, Practical Challenges in AI Integration, discusses and outlines potential solutions to the multifaceted technical and pedagogical challenges that arise during the implementation of AI in healthcare settings. To navigate these practical challenges successfully, the emphasis here is on the importance of interdisciplinary expertise and continuous refinement in AI application (8 papers are cited).
Finally, the authors examine the legal and ethical implications of AI, emphasizing the need for comprehensive, inclusive data management practices that strike a balance between the FAIR principles and GDPR compliance. The important outcome is the recognition of an urgent need for robust, nuanced regulatory frameworks capable of navigating the complex interplay between AI, ethics, and human rights (9 papers are cited).

3. Ethical Considerations in AI

As AI technology advances, particularly in the healthcare sector, the ethical considerations associated with its use become increasingly complex and important. This section reviews studies that delve into interpretability, accountability, and bias in AI, as well as ethical considerations surrounding the design and use of AI in healthcare.

3.1. Interpretability, Accountability, and Bias in AI

L. Farah et al.’s (2023) study emphasizes the criticality of AI performance, interpretability, and explainability in health technologies. Their research indicates that gaining a comprehensive understanding of how machine learning algorithms function and produce predictions is paramount for healthcare technology assessment agencies. The authors propose the development and implementation of tools to evaluate these aspects, thereby fostering a sense of accountability among stakeholders [4].
N. Hallowell et al. (2023) approach AI from the perspective of diagnostic tools, addressing the potential benefits and risks associated with AI’s interpretability and reliability. While AI has the potential to enhance diagnostic yield and accuracy, it also poses a risk of undermining the specialist clinical workforce’s skill sets. This study proposes that AI be used as an assistive technology rather than a replacement for human expertise [5].
In a similar vein, N. Norori et al. (2021) highlight potential biases in AI and big data applications within the healthcare sector. They caution against training AI algorithms on non-representative samples and propose a set of guidelines to make AI algorithms more equitable. Figure 1, based on this research, illustrates different sources of bias in machine learning. By incorporating open science principles into AI design and evaluation, they argue that AI can become more inclusive and effective in addressing global health issues [6].
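As a concrete illustration of the sampling bias Norori et al. warn about, the minimal sketch below (our own illustration, not taken from the cited study; the data and group labels are invented) compares a model's error rate across demographic subgroups, a simple check that can surface the effect of training on non-representative samples.

```python
# Minimal sketch: surfacing possible sampling bias by comparing a model's
# misclassification rate across demographic subgroups. Toy data only.
import numpy as np

def subgroup_error_rates(y_true, y_pred, groups):
    """Return the misclassification rate for each subgroup label."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {g: float(np.mean(y_true[groups == g] != y_pred[groups == g]))
            for g in np.unique(groups)}

# A model trained on a non-representative sample will typically show a
# noticeably higher error rate for the under-represented group.
rates = subgroup_error_rates(
    y_true=[1, 0, 1, 1, 0, 1, 0, 0],
    y_pred=[1, 0, 1, 0, 0, 0, 1, 0],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
print(rates)  # e.g. {'A': 0.25, 'B': 0.5}
```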
Building upon these findings, E. Fosch-Villaronga et al. (2022) examine the potentially reinforcing effects of gender bias in AI-based healthcare applications. They emphasize the importance of accounting for gender and sex differences in the development of medical algorithms, warning that the neglect of these factors could lead to misdiagnoses and potential discrimination. This research underscores the need to incorporate diversity and inclusion considerations into AI developments in healthcare, to prevent exacerbating existing biases or creating new ones [7].
A. Panagopoulos et al. (2022) contribute to the policy dialogue regarding data governance in the AI era, specifically focusing on incentivizing patients to share their health data. The authors suggest a GDPR-based policy that does not restrict data flows, arguing that lessons can be learned from the monetization of digitized copyright material like music. They believe this approach could offer a practical and tested solution that incentivizes data sharing without limiting its supply, which is critical for the development and effectiveness of healthcare AI technologies [8].
These studies highlight the ethical considerations surrounding AI in healthcare, particularly regarding interpretability, accountability, and bias. They underscore the importance of ensuring that AI technologies are equitable, transparent, and complementary to human expertise, rather than functioning as replacements. Furthermore, they draw attention to the critical role of data governance and incentivization strategies in the development and application of AI technologies in the healthcare sector.

3.2. Ethical Design and Use of AI in Healthcare

The use of AI in healthcare presents a unique blend of opportunities and ethical challenges. Central to the discussion is the concept of human-centered AI and the ethical implications associated with its design and deployment.
Chinasa T. Okolo [9] in 2022 discusses the urgent need for human-centered AI in healthcare, particularly in low-resource regions often referred to as the Global South. Okolo warns about the perils of biased AI systems, arguing they might inadvertently amplify existing inequities in these regions. The author underscores the importance of considering local needs and circumstances in AI design to create more effective and equitable healthcare solutions.
In the same vein, G. Rubeis [10], in 2022, addresses the ethical dimensions of integrating AI and big data into mental healthcare, a concept the author terms “iHealth”. Rubeis points to the double-edged nature of self-monitoring, ecological momentary assessment (EMA), and data mining, presenting a detailed ethical analysis of the potential of iHealth. The author identifies privacy concerns, user autonomy, and potential bias as critical challenges, and emphasizes the need for robust ethical guidelines at the policy, research and development, and practitioner levels.
A.M. Oprescu et al. [11], in 2022, also present a detailed study on the ethical considerations in AI applications in pregnancy healthcare. They highlight the potential benefits of such applications but stress the importance of responsible AI. They further argue that pregnant women, a key stakeholder group, should find the applications trustworthy, useful, and safe, and that explanations of system decisions should be transparent.
N. Hallowell and M. Parker [12] in 2019 dive into the ethical considerations of using big data in phenotyping rare diseases. While the technology promises accelerated and accurate diagnoses, the authors caution about data-induced discrimination, the management of incidental findings, and the commodification of datasets. These challenges necessitate a thoughtful approach to the design and use of AI in healthcare, including strict data security and privacy measures.
The papers of James Zou and Londa Schiebinger [13], in 2021, and J. Drogt et al. [14], in 2022, complement these discussions. They emphasize the importance of AI serving diverse populations and its careful integration into specific medical contexts such as pathology. They also identify bias and disparity in data collection and design as key issues, and propose solutions ranging from post-deployment monitoring to educational and funding reforms.
Lastly, C.W.-L. Ho and K. Caals [15], in 2021, discuss the state of AI in nephrology, urging a dedicated ethics and governance action plan. They underscore the fact that while AI holds the potential to alleviate challenges related to kidney disease, most applications are still in the early stages of development, necessitating robust validation to ensure positive patient outcomes.
The use of AI in healthcare demands careful ethical consideration. There is a growing consensus that AI can significantly contribute to healthcare improvements. However, its application should not only address technical aspects but also the perspectives of healthcare professionals and patients. Key considerations include data security and privacy, the avoidance of bias, and ensuring that AI is responsive to diverse and local needs. Lastly, it is important that these technologies are developed and deployed responsibly, and that they engender trust in those who use them.

4. Practical Challenges and Solutions in AI Integration

4.1. Technical and Pedagogical Aspects of AI Implementation

This section starts by discussing the technical and pedagogical aspects of AI implementation. F. M. Dawoodbhoy et al. (2021) illustrate the capabilities of AI in enhancing patient flow within mental health units of the NHS, focusing on the technical aspect of AI. The research underlines the potential of AI in streamlining administrative tasks, supporting decision making, and fostering a personalized model of mental healthcare, though highlighting challenges such as inconsistency in predictive variables that require further AI refinement [16].
In contrast, the pedagogical aspect is captured by Ismail Celik (2022), who explores the importance of teachers’ professional knowledge in integrating AI into education in an ethical and pedagogically sound manner. The study introduces the Intelligent-TPACK concept, emphasizing the integration of technological and pedagogical knowledge for effectively leveraging AI and underscoring the significant role of educators in meaningfully incorporating AI into education [17].
On a similar note, N. Truong et al. (2021) delve into the issue of data privacy and security in ML-based applications and services. The study presents federated learning (FL) as a promising yet challenging solution for meeting GDPR requirements, stressing the need for applicable privacy mechanisms to fortify such systems. This research highlights the interplay between data privacy, ML, and legal regulations, marking a crucial point of intersection between computer science and law [18].
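To make the FL idea concrete, the following minimal sketch (our own illustration under invented data, not the method of Truong et al.) implements federated averaging for a simple linear model: each site trains locally on its own records and only model weights are exchanged, so raw patient data never leaves the site.

```python
# Minimal federated averaging (FedAvg) sketch with three invented "hospital"
# sites: local gradient steps on-site, weighted averaging of weights centrally.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's local training: gradient descent for a linear model (squared loss)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(local_weights, sizes):
    """Aggregate site models, weighted by each site's sample count."""
    return np.average(np.stack(local_weights), axis=0, weights=np.asarray(sizes, float))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
sites = []
for n in (40, 60, 30):                       # three sites with different sizes
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    sites.append((X, y))

global_w = np.zeros(2)
for _ in range(20):                          # communication rounds
    locals_ = [local_update(global_w, X, y) for X, y in sites]
    global_w = federated_average(locals_, [len(y) for _, y in sites])

print(global_w)                              # approaches [2, -1] without pooling raw data
```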
K. Stöger et al. (2021) further contribute to the discussion by emphasizing the critical legal aspects of data cleansing in medical AI. Their study illustrates the importance of high-quality data and the potential risks associated with mishandling during the data cleansing process, making an urgent call for interdisciplinarity in the AI domain, particularly between computer science and law [19].

4.2. Implementing Ethical AI in Different Spheres

This section delves into the ethical considerations that arise when integrating AI across diverse domains. István Mezgár and József Váncza (2022) bring attention to the need for explicit ethical considerations in developing cyber-physical production systems (CPPS). Figure 2, based on this research, shows the AI problem recognition procedure for a typical data process, which leads to the development of AI standards. Given the ethical, legal, and standardization challenges posed by AI, their research suggests a pathway from ethical norms to standards, fostering enhanced trust in AI technologies [20].
Further, Thomas Ploug and Søren Holm (2020) approach ethics from a patient-centric perspective, advocating for ‘effective contestability’ in AI diagnostics. They propose a model where patients have the right to challenge AI diagnostic systems’ decisions, pushing for greater transparency and explainability [21].
Similarly, L. Caroprese et al. (2022) highlight the need for transparency and explainability in AI in medical informatics. By championing the use of logic approaches for XAI, their research offers a pathway for building ethical and justifiable intelligent systems, with argumentation theory playing a crucial role in medical decision making, explanations, and dialogues [22].
Lastly, M. Maher et al. (2023) propose an innovative method for monetizing digital health data while maintaining patient privacy and security. The study puts forth a marketplace model where patients can sell their health data to various stakeholders, acknowledging patients as active contributors in the AI ecosystem [23]. This spectrum of studies underscores the importance of ethical considerations across various contexts in AI integration.

5. Legal and Policy Implications in AI

This section examines the multifaceted legal and ethical implications of AI, underscoring the need for comprehensive approaches to data collection and distribution that account for inclusivity, representation, and particularly the interests of lower-income demographics. It also underlines the urgent necessity for robust data management practices that strike a balance between the FAIR principles and GDPR compliance. The need for refined policy and legislative interventions is recognized, even though the specific nature and focus of these interventions are still under debate. A clear necessity emerges for more comprehensive and sophisticated regulatory frameworks capable of navigating the complex intersection of AI, ethics, and human rights.

5.1. Privacy, GDPR, and Ethical Considerations in AI

This section begins by analyzing a paper by I. R. Alberto et al. (2023) investigating the importance and potential misuse of health data [24]. The authors identify an insufficiency in current data frameworks, especially the issue of ‘data poverty’ in lower-income countries, and advocate for more inclusive health data collection practices.
Next, E. Govarts et al. (2022) explore the complexity of balancing open science-driven FAIR data management with GDPR-compliant personal data protection [25]. The authors recommend technical advancements, such as federated data management and advanced search software, to support the ‘FAIRification’ of data.
Peter J. van de Waerdt (2020) then delves into the limitations of the GDPR in addressing information asymmetries between consumers and data-driven companies [26]. The paper suggests that the GDPR may not sufficiently address the transparency issue inherent in data-driven business models, potentially infringing upon consumers’ data protection rights.
Adding to the discourse, T. Kuru and I. de Miguel Beriain (2022) unpack the challenges posed by shared genetic data in the GDPR framework [27]. The authors propose that biological family members can also be considered data subjects. However, they highlight several practical and legal uncertainties arising from this interpretation, especially when family members exercise their data subject rights. The paper concludes by suggesting that the European Data Protection Board revisit the 2004 Working Document on Genetic Data to address these uncertainties.
Lastly, M. van Bekkum and F. Zuiderveen Borgesius (2023) question whether the GDPR needs an exception to prevent AI-driven discrimination [28]. Given that AI systems can have discriminatory effects and that GDPR bans the use of certain ‘special categories of data’ like ethnicity, the authors argue that the GDPR could hinder efforts to prevent AI discrimination. The paper presents a thorough examination of the arguments for and against creating an exception to this GDPR rule, contributing to the ongoing debate on balancing privacy and a non-discrimination policy.
These papers provide a robust examination of privacy, GDPR, and ethical considerations in AI, emphasizing the need for policy adaptations to address emerging challenges.

5.2. Policy and Legislative Recommendations for AI

Beginning this section, B.C. Stahl et al.’s 2023 Delphi study explores ethical and human rights issues related to AI, highlighting an apparent lack of emphasis on legislation and formal regulation among participating experts [29].
Subsequently, Sara Gerke and Delaram Rezaeikhonakdar, in 2022, spotlight privacy issues associated with direct-to-consumer AI/ML health apps [30]. They advocate for consumer education on data privacy and data usage, and propose the enactment of a US federal privacy law akin to the EU’s GDPR to ensure robust data protection.
In the third paper, Angeliki Kerasidou in 2021 scrutinizes the ethical issues arising from the use of AI in global health [31]. The paper emphasizes the importance of national and international regulations to manage ethical dilemmas surrounding AI.
In addition, D. Amram, in 2020, delves into the obligations and practical issues researchers face when processing health-related data for research purposes under articles 9 and 89 of the EU Reg. 2016/679 (GDPR) [32]. The paper conducts a comparative analysis of the national implementations of the GDPR on this topic, and builds a model to achieve an acceptable standard of accountability in health-related data research, termed the “accountable Ulysses” standard. This standard suggests that an interdisciplinary dialogue is required, involving ethical–legal experts, clear data protection plans, suitable technical and organizational measures, and secure data management strategies. The paper concludes by encouraging research institutes to provide coherent ethical–legal support to core research to promote ethical–legal compliance.
Together, these papers offer insightful policy and legislative recommendations for AI. They underscore the need for increased focus on formal legislation and regulation, consumer education, comprehensive privacy laws, national and international regulations, and responsible health data processing practices, all to better navigate the intricate nexus of AI, ethics, and human rights.

6. Review Results and Case Study

6.1. Review Results

Upon conducting a comprehensive review of the literature, several important outcomes were revealed, contributing to the understanding of AI application in healthcare within the GDPR context.
Ethical considerations were found to be paramount in the adoption and implementation of AI in healthcare. The need for interpretability and understandability of machine learning algorithms and predictions was emphasized, with studies suggesting transparency in decision making and a reduction in algorithmic bias as prerequisites for establishing trust among stakeholders. In addition, the literature suggested the importance of integrating social, cultural, and ethical judgements into the instruction of medical AI to counter potential biases stemming from narrow measurements and unrepresentative data.
The review also highlighted practical challenges in integrating AI into healthcare systems. Technical and pedagogical hurdles were identified, with the interdisciplinary nature of AI demanding a comprehensive approach that equally addresses technical and legal aspects. This includes a focus on data cleaning in medical AI and ensuring that instructors possess the necessary technological skills. The literature also suggested the use of insights from AI ethics in the design of cyber-physical production systems (CPPS), stressing the concept of “effective contestability” in AI decision making.
In terms of legal and policy implications, privacy and GDPR compliance emerged as focal points in the context of AI. With AI playing an increasing role in the collection and dissemination of health data, studies recommended the development of robust, sustainable methods for managing open-source health data. This includes the integration of GDPR-compliant personal data protection procedures with FAIR data principles for environmental and health research. In light of growing concerns about consumer privacy and data protection, the literature highlighted the need to enforce national and international rules and regulations.
This comprehensive review underscores the importance of a balanced approach in adopting AI in healthcare, considering ethical, practical, and legal considerations. The need for a deep understanding of data owners’ rights and the development of ethical norms for using AI in medical applications was also underlined. The results of this review present important insights for healthcare stakeholders in formulating responsible AI practices.

6.2. Case Study: Sensomatt Project

SENSOMATT is an innovative hybrid AI/hardware project devised to create an intelligent sensor sheet. This sheet is designed to be placed beneath a medical mattress, enabling the non-invasive measurement of pressure distribution to help reduce pressure ulcer risk in patients who have limited mobility or are unconscious. By assessing point pressures via a pre-trained AI algorithm, the system identifies the maximum pressure at 13 body joints, enabling risk analysis for potential pressure ulcers if the detected pressure exceeds a defined risk threshold or persists over an extended duration, both of which are considered significant risk indicators.
During the data collection phase of this project, there were considerable challenges in securing the approval of the Ethical Committee of the Polytechnic Institute of Castelo Branco to gather the most valuable data. The data collection process relied on volunteers from the university's student and staff body, prompting heightened sensitivity regarding ethical considerations. Despite the sensors operating from beneath the medical mattress without any physical contact with the subjects, and the data collected lacking any sensitive information, there was strong resistance. A proposal to install ceiling-mounted cameras to collect RGB images of volunteers in varying sleep poses aimed to enhance the accuracy of the algorithms, which currently rely solely on pressure heatmaps. This request, however, was emphatically denied by the Ethical Committee, even with the assurance that face masks would be used to conceal volunteers' identities and that the data would be deleted two months after storage. The objective of leveraging these data to improve AI training and decision-making accuracy was thus hindered by privacy regulations.
The results of the data collection process were published by L. Fonseca et al. [33] in the journal Data in July 2023. However, the datasets proved somewhat inadequate for accurately determining the location of maximum pressure within body regions. In response, it was decided to employ pose models, extracting the joint locations from the pressure heatmap data. Instead of localizing the maximum pressure point across whole body regions, the analysis focused on a limited neighborhood surrounding each of the 13 joints indicated by the pose model. This strategy yielded a practical AI tool capable of measuring maximum pressure at body joints without necessitating physical contact or interfering with medical procedures.
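The sketch below illustrates the joint-neighborhood idea described above in simplified form; the grid size, neighborhood radius, pressure threshold, duration parameter, and joint names are illustrative assumptions, not the values or code used in the SENSOMATT system.

```python
# Illustrative sketch: given a pressure heatmap and joint coordinates from a
# pose model, take the maximum pressure in a small neighborhood around each
# joint and flag sustained over-threshold readings as risk indicators.
import numpy as np

def joint_peak_pressures(heatmap, joints, radius=2):
    """Max pressure in a (2*radius+1)^2 window around each joint (row, col)."""
    peaks = {}
    h, w = heatmap.shape
    for name, (r, c) in joints.items():
        r0, r1 = max(r - radius, 0), min(r + radius + 1, h)
        c0, c1 = max(c - radius, 0), min(c + radius + 1, w)
        peaks[name] = float(heatmap[r0:r1, c0:c1].max())
    return peaks

def sustained_risk(peak_history, threshold=60.0, min_frames=30):
    """Flag joints whose peak pressure stayed above the threshold for at
    least `min_frames` consecutive frames."""
    flagged, streak = set(), {}
    for frame in peak_history:                       # one dict of peaks per frame
        for joint, value in frame.items():
            streak[joint] = streak.get(joint, 0) + 1 if value > threshold else 0
            if streak[joint] >= min_frames:
                flagged.add(joint)
    return flagged

# Toy usage with a random 32x64 pressure grid and two example joints.
rng = np.random.default_rng(1)
heatmap = rng.uniform(0, 50, size=(32, 64))
joints = {"sacrum": (16, 32), "left_heel": (30, 10)}
print(joint_peak_pressures(heatmap, joints))
```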

7. Discussion

This comprehensive review of the ethical challenges associated with the application of AI in healthcare, particularly in the context of nursing, within the European GDPR framework has provided valuable insights into the current state of affairs. However, it has also highlighted several potential directions for future research, with an aim to deepen our understanding of these topics and contribute to the development of responsible AI practices.

7.1. Deepening the Understanding of Data Owner Rights

While the GDPR provides a regulatory framework to ensure data protection, the intricacies surrounding data owner rights within AI healthcare applications remain an understudied area. Future research could delve deeper into these rights, examining their implications and practical applications in AI systems. This could include exploring questions such as how the rights of data owners may affect the development of AI algorithms, or how these rights might be effectively communicated to, and exercised by, the data owners themselves.

7.2. Ethical Guidelines for AI in Medical Applications

Our review has pointed out a significant need for more specific ethical guidelines for the use of AI in medical applications, particularly in nursing. Future research could seek to address this gap by developing practical, nuanced, and context-specific ethical guidelines for different types of AI applications. These guidelines could provide much-needed direction for developers, users, and regulators of AI systems in healthcare, ensuring that these systems are used in a way that respects the rights and dignity of all stakeholders.

7.3. Addressing ‘Data Poverty’ and Inclusivity

The concept of ‘data poverty’ was highlighted as an area that requires more attention, particularly in the context of lower-income countries. Future studies could focus on addressing this issue, exploring innovative strategies for collecting and analyzing health data in an inclusive and representative manner. Such research could contribute significantly to the creation of AI systems that are truly global and equitable.

7.4. Improving Transparency and Trust in AI Systems

Our review identified a need for greater transparency and trust in AI systems, particularly in terms of how they make decisions. Future research could aim to address this by investigating strategies for enhancing transparency, such as the development of explainability algorithms or user-friendly interfaces that clearly communicate how decisions are being made.
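As one concrete example of the kind of explainability technique referred to here, the short sketch below (an assumed illustration with invented data, not drawn from the reviewed studies) computes permutation importance: it reports how much a fitted model's accuracy drops when each input feature is shuffled, a model-agnostic way to communicate which inputs drive a decision.

```python
# Model-agnostic permutation importance: how much does a metric degrade when
# a single feature column is permuted? Illustrative sketch with toy data.
import numpy as np

def permutation_importance(predict, X, y, metric, n_repeats=10, seed=0):
    """Mean drop in `metric` when each feature column is permuted."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            perm = rng.permutation(X.shape[0])
            Xp[:, j] = Xp[perm, j]            # break the link between feature j and y
            drops.append(baseline - metric(y, predict(Xp)))
        importances[j] = float(np.mean(drops))
    return importances

# Toy usage: a "model" that only looks at feature 0 should show a large drop
# for feature 0 and near-zero importance for feature 1.
X = np.random.default_rng(2).normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)
predict = lambda A: (A[:, 0] > 0).astype(int)
accuracy = lambda y_true, y_pred: float(np.mean(y_true == y_pred))
print(permutation_importance(predict, X, y, accuracy))
```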

7.5. Advancements in Legislation and Policy for AI

Lastly, our review underscored the need for further advancements in legislation and policy related to AI. Future research could contribute to this by investigating the potential impacts of different regulatory frameworks, conducting comparative analyses of existing legislation, or proposing new models for regulation that can effectively navigate the complex and rapidly evolving landscape of AI.
While our review provides a comprehensive overview of the current state of AI ethics in healthcare and the GDPR, it has also illuminated a number of areas that require further investigation. By exploring these areas, future researchers can contribute to the ongoing conversation about AI ethics and assist in the development of responsible AI practices in healthcare.

8. Conclusions

In the era of rapid technological advancement, the integration of artificial intelligence (AI) into the healthcare industry presents both a compelling opportunity and a considerable challenge. This study undertook a comprehensive review of the ethical and practical challenges associated with AI applications in healthcare, specifically nursing, within the context of the European General Data Protection Regulation (GDPR) mandate.
By evaluating the existing literature, the study found a significant gap in research pertaining to the understanding of data owner rights in AI applications and compliance with GDPR mandates. This deficit underscores the need for an enriched comprehension of these rights, the development of specific ethical guidelines for AI applications in healthcare, and further exploration into the transparency of AI decision-making processes.
While AI presents a promising future for the healthcare industry, with potential benefits including improved patient care, efficiency, and overall healthcare outcomes, it is imperative that ethical considerations and data protection regulations remain at the forefront. The GDPR offers a critical framework in this regard, underpinning data protection rights, but understanding the application of these rights within the context of AI is an area requiring further study.
Furthermore, the study identified the urgent need for more inclusive data collection methods to avoid ‘data poverty’, particularly in lower-income countries, and highlighted the need for the development of robust and practical ethical guidelines for AI use in medical applications, notably nursing.
Finally, while AI holds vast potential to transform healthcare and nursing practices, it is essential that its development and application are guided by robust ethical considerations and regulations. This study hopes to have shed light on the current challenges and sparked further discussions on AI ethics and the formation of responsible AI practices in the nursing and healthcare fields.
As we venture into the future, the need for continued research in these areas is paramount. Researchers, policymakers, practitioners, and other stakeholders must work together to ensure that, as we move forward, we create AI systems that are not only effective and efficient but also ethical, transparent, inclusive, and respectful of data protection rights.

Author Contributions

Conceptualization, M.M.A., M.J. and D.F.S.; methodology, M.M.A. and M.J.; investigation, D.F.S. and A.H.B.; resources, D.F.S. and F.H.; data curation, P.A. and A.H.B.; writing—original draft preparation, M.J. and P.A.; writing—review and editing, F.H. and A.H.B.; supervision, M.J. and M.M.A.; project administration, M.M.A. and A.H.B. All authors have read and agreed to the published version of the manuscript.

Funding

This work was carried out under the SensoMatt project, grant agreement no. CENTRO-01-0247-FEDER-070107, co-financed by European Funds (FEDER) by CENTRO2020.

Data Availability Statement

Not applicable.

Acknowledgments

The authors express their sincere gratitude for the support provided by the Agência Nacional de Inovação, ANI (Portugal National Agency of Innovation), as well as the Polytechnic Institute of Castelo Branco (IPCB), Portugal.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Balasubramaniam, N.; Kauppinen, M.; Rannisto, A.; Hiekkanen, K.; Kujala, S. Transparency and explainability of AI systems: From ethical guidelines to requirements. Inf. Softw. Technol. 2023, 159, 107197.
2. Bangui, H.; Buhnova, B.; Ge, M. Social Internet of Things: Ethical AI Principles in Trust Management. Procedia Comput. Sci. 2023, 220, 553–560.
3. Mourby, M.; Ó Cathaoir, K.; Collin, C.B. Transparency of machine-learning in healthcare: The GDPR & European health law. Comput. Law Secur. Rev. 2021, 43, 105611.
4. Farah, L.; Murris, J.M.; Borget, I.; Guilloux, A.; Martelli, N.M.; Katsahian, S.I.M. Assessment of Performance, Interpretability, and Explainability in Artificial Intelligence–Based Health Technologies: What Healthcare Stakeholders Need to Know. Mayo Clin. Proc. Innov. Qual. Outcomes 2023, 1, 120–138.
5. Hallowell, N.; Badger, S.; McKay, F.; Kerasidou, A.; Nellåker, C. Democratising or disrupting diagnosis? Ethical issues raised by the use of AI tools for rare disease diagnosis. SSM Qual. Res. Health 2023, 3, 100240.
6. Norori, N.; Hu, Q.; Aellen, F.M.; Faraci, F.D.; Tzovara, A. Addressing bias in big data and AI for health care: A call for open science. Patterns 2021, 2, 100347.
7. Fosch-Villaronga, E.; Drukarch, H.; Khanna, P.; Verhoef, T.; Custers, B. Accounting for diversity in AI for medicine. Comput. Law Secur. Rev. 2022, 47, 105735.
8. Panagopoulos, A.; Minssen, T.; Sideri, K.; Yu, H.; Compagnucci, M.C. Incentivizing the sharing of healthcare data in the AI era. Comput. Law Secur. Rev. 2022, 45, 105670.
9. Okolo, C.T. Optimizing human-centered AI for healthcare in the Global South. Patterns 2022, 3, 100421.
10. Rubeis, G. iHealth: The ethics of artificial intelligence and big data in mental healthcare. Internet Interv. 2022, 28, 100518.
11. Oprescu, A.M.; Miró-Amarante, G.; García-Díaz, L.; Rey, V.E.; Chimenea-Toscano, A.; Martínez-Martínez, R.; Romero-Ternero, M.C. Towards a data collection methodology for Responsible Artificial Intelligence in health: A prospective and qualitative study in pregnancy. Inf. Fusion 2022, 83–84, 53–78.
12. Hallowell, N.; Parker, M.; Nellåker, C. Big data phenotyping in rare diseases: Some ethical issues. Genet. Med. 2019, 21, 272–274.
13. Zou, J.; Schiebinger, L. Ensuring that biomedical AI benefits diverse populations. EBioMedicine 2021, 67, 103358.
14. Drogt, J.; Milota, M.; Vos, S.; Bredenoord, A.; Jongsma, K. Integrating artificial intelligence in pathology: A qualitative interview study of users’ experiences and expectations. Mod. Pathol. 2022, 35, 1540–1550.
15. Ho, C.W.-L.; Caals, K. A Call for an Ethics and Governance Action Plan to Harness the Power of Artificial Intelligence and Digitalization in Nephrology. Semin. Nephrol. 2021, 41, 282–293.
16. Dawoodbhoy, F.M.; Delaney, J.; Cecula, P.; Yu, J.; Peacock, I.; Tan, J.; Cox, B. AI in patient flow: Applications of artificial intelligence to improve patient flow in NHS acute mental health inpatient units. Heliyon 2021, 7, e06993.
17. Celik, I. Towards Intelligent-TPACK: An empirical study on teachers’ professional knowledge to ethically integrate artificial intelligence (AI)-based tools into education. Comput. Hum. Behav. 2023, 138, 107468.
18. Truong, N.; Sun, K.; Wang, S.; Guitton, F.; Guo, Y. Privacy preservation in federated learning: An insightful survey from the GDPR perspective. Comput. Secur. 2021, 110, 102402.
19. Stöger, K.; Schneeberger, D.; Kieseberg, P.; Holzinger, A. Legal aspects of data cleansing in medical AI. Comput. Law Secur. Rev. 2021, 42, 105587.
20. Mezgár, I.; Váncza, J. From ethics to standards—A path via responsible AI to cyber-physical production systems. Annu. Rev. Control 2022, 53, 391–404.
21. Ploug, T.; Holm, S. The four dimensions of contestable AI diagnostics—A patient-centric approach to explainable AI. Artif. Intell. Med. 2020, 107, 101901.
22. Caroprese, L.; Vocaturo, E.; Zumpano, E. Argumentation approaches for explainable AI in medical informatics. Intell. Syst. Appl. 2022, 16, 200109.
23. Maher, M.; Khan, I.; Verma, P. Monetisation of digital health data through a GDPR-compliant and blockchain enabled digital health data marketplace: A proposal to enhance patient’s engagement with health data repositories. Int. J. Inf. Manag. Data Insights 2023, 3, 100159.
24. Alberto, I.R.; Alberto, N.R.; Ghosh, A.K.; Jain, B.; Jayakumar, S.; Martinez-Martin, N.; McCague, N.; Moukheiber, D.; Moukheiber, L.; Moukheiber, M.; et al. The impact of commercial health datasets on medical research and health-care algorithms. Lancet Digit. Health 2023, 5, e288–e294.
25. Govarts, E.; Gilles, L.; Bopp, S.; Holub, P.; Matalonga, L.; Vermeulen, R.; Vrijheid, M.; Beltran, S.; Hartlev, M.; Jones, S.; et al. Position paper on management of personal data in environment and health research in Europe. Environ. Int. 2022, 165, 107334.
26. Van de Waerdt, P.J. Information asymmetries: Recognizing the limits of the GDPR on the data-driven market. Comput. Law Secur. Rev. 2020, 38, 105436.
27. Kuru, T.; de Miguel Beriain, I. Your genetic data is my genetic data: Unveiling another enforcement issue of the GDPR. Comput. Law Secur. Rev. 2022, 47, 105752.
28. Van Bekkum, M.; Zuiderveen Borgesius, F. Using sensitive data to prevent discrimination by artificial intelligence: Does the GDPR need a new exception? Comput. Law Secur. Rev. 2023, 48, 105770.
29. Stahl, B.C.; Brooks, L.; Hatzakis, T.; Santiago, N.; Wright, D. Exploring ethics and human rights in artificial intelligence—A Delphi study. Technol. Forecast. Soc. Chang. 2023, 191, 122502.
30. Gerke, S.; Rezaeikhonakdar, D. Privacy aspects of direct-to-consumer artificial intelligence/machine learning health apps. Intell. Based Med. 2022, 6, 100061.
31. Kerasidou, A. Ethics of artificial intelligence in global health: Explainability, algorithmic bias and trust. J. Oral Biol. Craniofac. Res. 2021, 11, 612–614.
32. Amram, D. Building up the “Accountable Ulysses” model. The impact of GDPR and national implementations, ethics, and health-data research: Comparative remarks. Comput. Law Secur. Rev. 2020, 37, 105413.
33. Fonseca, L.; Ribeiro, F.; Metrôlho, J.; Santos, A.; Dionisio, R.; Amini, M.M.; Silva, A.F.; Heravi, A.R.; Sheikholeslami, D.F.; Fidalgo, F.; et al. PoPu-Data: A Multilayered, Simultaneously Collected Lying Position Dataset. Data 2023, 8, 120.
Figure 1. Different sources of bias in machine learning algorithms.
Figure 2. Flowchart of AI problem recognition in AI standard developments.
Table 1. Primary grouping of current scientific review.

Ethical Considerations in AI
- Interpretability, Accountability, and Bias in AI: addresses the essential elements of AI's explainability, its interpretability, and the potential biases inherent in its use.
- Ethical Design and Use of AI in Healthcare: focuses on the ethical aspects of AI design and implementation in healthcare, considering societal, cultural, and community-specific needs.

Practical Challenges and Solutions in AI Integration
- Technical and Pedagogical Aspects of AI Implementation: considers the technical, financial, and pedagogical challenges and solutions associated with implementing AI in various sectors, including healthcare and education.
- Implementing Ethical AI in Different Spheres: investigates the ways in which ethical AI can be integrated into different domains, emphasizing the importance of ethical considerations in system design and user interaction.

Legal and Policy Implications in AI
- Privacy, GDPR, and Ethical Considerations in AI: explores the issues surrounding privacy, data protection, and the ethical use of AI in the context of GDPR regulations.
- Policy and Legislative Recommendations for AI: delves into the policy and legislative aspects of AI, proposing recommendations for better privacy protections, ethical discussions, and regulatory compliance.