Review

A Comprehensive Review of AI Techniques for Addressing Algorithmic Bias in Job Hiring

by Elham Albaroudi *, Taha Mansouri and Ali Alameer
School of Science, Engineering and Environment, University of Salford, University Road, Manchester M5 4QJ, UK
* Author to whom correspondence should be addressed.
AI 2024, 5(1), 383-404; https://doi.org/10.3390/ai5010019
Submission received: 25 December 2023 / Revised: 29 January 2024 / Accepted: 1 February 2024 / Published: 7 February 2024
(This article belongs to the Section AI Systems: Theory and Applications)

Abstract

The study comprehensively reviews artificial intelligence (AI) techniques for addressing algorithmic bias in job hiring. More businesses are using AI in curriculum vitae (CV) screening. While the move improves efficiency in the recruitment process, it is vulnerable to biases, which have adverse effects on organizations and the broader society. This research aims to analyze case studies on AI hiring to demonstrate both successful implementations and instances of bias. It also seeks to evaluate the impact of algorithmic bias and the strategies to mitigate it. The basic design of the study entails a systematic review of existing literature and research studies that focus on artificial intelligence techniques employed to mitigate bias in hiring. The results demonstrate that correction of the vector space and data augmentation are effective natural language processing (NLP) and deep learning techniques for mitigating algorithmic bias in hiring. The findings underscore the potential of artificial intelligence techniques to promote fairness and diversity in the hiring process. The study contributes to human resource practice by enhancing the fairness of hiring algorithms. It recommends collaboration between machines and humans to enhance the fairness of the hiring process. The results can help AI developers make the algorithmic changes needed to enhance fairness in AI-driven tools. This will enable the development of ethical hiring tools, contributing to fairness in society.

1. Introduction

In the era of globalization, businesses navigate a landscape of intensified international competition [1]. Technological advances have fostered increased accessibility of foreign markets for businesses. Although this has expanded market reach, it has also exposed them to heightened levels of competition. Consequently, enterprises are compelled to maintain a competitive advantage to ensure their survival [2]. To maintain competitiveness, businesses must prioritize hiring the right workforce. Acknowledged as the most critical asset within any organizational framework [3], employees possess the knowledge, skills, and expertise essential for an entity’s functioning. Research establishes the correlation between employees’ productivity and organizational performance [4]. Given the pivotal role of employees, organizations are adopting more stringent hiring practices. The aim is to hire workers with the requisite skills and qualifications to perform effectively [5]. Organizations using strict hiring criteria increase the probability of acquiring high-performing employees. Research stipulates that employees suitably matched to their functions exhibit more productivity and motivation [6]. Hiring unsuitable candidates leads to heightened turnover rates [7], increasing costs and disruptions for businesses [8]. Legislative measures have been established to address workplace discrimination. Notably, Title VII of the Civil Rights Act of 1964 in the United States (US) prohibits intentional exclusion based on identity and the unintentional disadvantage of a protected class through a facially neutral procedure [9]. While identity-based forms of discrimination have diminished over time, unintentional biases persist [10]. In the United Kingdom (UK), the Equality Act 2010 safeguards individuals with “protected characteristics” like sex, race, disability, age, marital status, and belief from unfair treatment [11].
Despite the provisions of the Equality Act, questions persist about the enduring presence of hiring procedures that disproportionately affect minority groups in developed nations, although efforts were initiated during the civil rights era to eliminate such biases [12]. Since the hiring practices in the 1960s were tailored to white middle-class men, new instruments were required to create equal access to opportunities for women and people of color. Sadly, this objective remains unrealized despite the advances in scientific research on human aptitudes [12]. Sociologist Lincoln Quillian and his colleagues conducted a comprehensive examination of hiring practices across nine European and North American countries. Using a formal meta-analysis of 97 field experiments of discrimination incorporating more than 200,000 job applications, Quillian and collaborators established the existence of pervasive hiring discrimination against all non-whites [13]. The meta-analysis, inclusive of published and unpublished studies, covered field experiments conducted up to 2016 and measured the level of discrimination by calculating the percentage of additional interview callbacks native whites received compared to non-whites. A callback is a request for a candidate to return for an interview or provide more information, demonstrating interest from the potential employer. The results indicated that Germany had the lowest level of hiring discrimination, with white natives receiving 24 percent more callbacks than non-whites, while France exhibited the highest discrimination, with white natives receiving 83 percent more callbacks [13]. Figure 1 below illustrates the percentage of additional callbacks a native white individual received in comparison to a non-white counterpart across the nine countries.
In the face of persistent hiring biases, businesses should make changes to rectify the issue. This is particularly salient as organizations increasingly turn to technology to automate tasks like decision-making processes [14].
The contributions of this study are as follows:
(a) Employing a comprehensive literature review approach, this study outlines the factors contributing to algorithmic biases in the hiring process. The review unveils the fragmented understanding of algorithmic biases and their mitigation strategies. Our study endeavors to address the gaps and offer solutions to rectify algorithmic biases.
(b) Despite the growing reliance on algorithmic solutions to enhance the hiring process, vulnerabilities exist within hiring algorithms. This study seeks to provide AI-powered solutions designed to mitigate such biases, thereby assisting businesses in improving workforce diversity.
(c) We take an approach that advocates for collaboration between humans and AI to address algorithmic biases. The study contends that addressing algorithmic biases requires humans to work alongside AI-powered tools.

2. Research Background

The integration of AI in hiring is not a recent development. In the late 20th century, the initial adoption of AI in hiring focused on automating routine tasks like resume screening and application sorting. The early systems were designed to enhance efficiency by reducing the time and effort invested in such tasks. However, the sophistication inherent in contemporary AI hiring technologies was absent in the early iterations. For instance, Resumix, introduced in 1988 and later acquired by HotJobs in 2000, is one of the earliest examples of resume parsing tools [15]. The tool deployed AI to read resumes and extract specific keywords, work experiences, and educational qualifications. Advancements in the 1990s saw the integration of job posting sites like CareerBuilder with applicant tracking systems (ATS) to improve the sourcing of applications [16]. The early 2000s saw the emergence of talent assessment tools like eSkill and SkillSurvey, leveraging AI for automating pre-employment testing and reference checks. The 2010s marked the emergence of AI-powered video interviewing software. For instance, HireVue gained prominence in the mid-2010s for utilizing machine learning (ML) algorithms to assess candidates based on analysis of facial expressions, speech patterns, and body language. In a notable recent example, Amazon attempted to automate its recruitment process using AI in 2018. The algorithm, trained on decades of resumes, aimed to optimize the selection of the most appropriate candidates from a large pool of applicants. However, the tool faced significant challenges and was ultimately abandoned due to allegations of biases [17]. The algorithm, trained on resumes predominantly submitted by male applicants over a ten-year period, exhibited a preference for male-centric language patterns, discriminating against applications featuring female-oriented language [17]. Despite Amazon’s setback, commercially available online hiring tools like Talenture, Fetcher, TurboHire, and Findem are widely used.
The availability of data has facilitated the evolution of AI-powered tools, enabling the extraction of new insights through computational analysis [18]. However, the development has given rise to unintended consequences. Algorithmic screening tools that appear evidence-based have emerged as purported alternatives to subjective human evaluations [19]. Divergent opinions exist on the objectivity of algorithmic techniques in mitigating biases. Some scholars assert that algorithmic techniques, especially those utilizing deep learning, are inherently bias-proof, affording businesses an objective way of selecting candidates [20]. Conversely, opposing evidence demonstrates that such tools are susceptible to perpetuating the human biases present in the datasets upon which they are trained [21].
Algorithmic biases emerge in various forms. Firstly, measurement bias emerges from the identification and measurement of specific features [22]. This type of bias occurs when training data for AI algorithms inadequately represent the intended construct they seek to measure [23]. In hiring, measurement bias can manifest when training data do not accurately capture the skills and other traits relevant to the job. If an organization continues to hire more white than Black applicants, it may associate good performance with being white, given the availability of data about the performance of white employees. Measurement bias can be addressed by auditing and updating training data regularly to ensure they reflect the evolving requirements of a job. The second type, representation bias, results from the ways in which researchers sample populations during data collection, producing samples that fail to represent the entire population [23]. In hiring, this type of bias manifests in the under-representation and over-representation of particular demographic groups. For instance, Amazon’s AI hiring system favored male-centric language patterns, which discriminated against those associated with female applicants [17], since it was trained predominantly on data from male applicants. Over-sampling or under-sampling can rectify representation bias, as the sketch below illustrates.
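To make the over- and under-sampling remedy concrete, the following minimal sketch rebalances a toy applicant table with pandas; the column names and values are illustrative assumptions, not data from any cited study.

```python
import pandas as pd

# Hypothetical applicant records; "group" is a protected attribute and
# group B is under-represented relative to group A.
df = pd.DataFrame({
    "group": ["A"] * 8 + ["B"] * 2,
    "hired": [1, 1, 0, 1, 0, 1, 1, 0, 1, 0],
})

# Random over-sampling: duplicate minority-group rows until group sizes
# match. This is the simplest possible rebalancing scheme.
counts = df["group"].value_counts()
target = counts.max()

parts = []
for group, size in counts.items():
    part = df[df["group"] == group]
    if size < target:
        extra = part.sample(target - size, replace=True, random_state=0)
        part = pd.concat([part, extra])
    parts.append(part)

balanced = pd.concat(parts).reset_index(drop=True)
print(balanced["group"].value_counts())  # both groups now equally represented
```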
Moreover, omitted variable bias occurs when hiring algorithms have one or more important variables missing from a model [24], affecting how systems make predictions. For instance, an algorithm over-relying on technical skills without factoring in interpersonal skills may overlook candidates excelling in critical components like communication. Omitted variable bias can be addressed by undertaking a thorough analysis to include all the relevant variables in a model. Additionally, linking biases occur when biases and attributes from user connections misrepresent the behavior of users [25]. While analyzing users’ connections, hiring algorithms may draw conclusions that do not align with the actual behavior of individuals. The last type is aggregation bias, arising from false conclusions made regarding individuals based on an analysis of the entire population [25]. Aggregation biases cause hiring models to ignore individual differences, making them unsuitable where diversity is involved [20]. For instance, an algorithm assuming younger people are more technologically adept might unfairly favor them for information technology jobs, disregarding equally qualified older candidates. Aggregation bias can be addressed through inclusive data training to help a model learn from a more inclusive dataset.
This paper diverges from the prevalent emphasis on AI as the definitive future of hiring. Instead, our primary objective is to evaluate how AI can be effectively used to identify and mitigate algorithmic bias within the job hiring process. This allows us to move beyond the conventional discourse on the prospective dominance of AI in hiring toward a comprehensive evaluation of AI’s capability to discern and mitigate biases embedded in the algorithms employed for job hiring. Our analysis centers on deep learning as defined by Mishra, Reddy, and Pathak: “a computer-based modeling approach, comprised of multiple processing layers for understanding data representation with several levels of abstraction” [26]. Moreover, we conceptualize NLP based on the perspective of Sodhar, Jalbani, Buller, Mirani, and Sodhar, who consider it “a branch of artificial intelligence dealing with NLP and computer interpretation” [27].

3. Research Motivation, Aims and Significance

This section outlines the motivation, aim, and significance of the study.

3.1. Research Motivation

We recognize the prevalence of algorithmic deployment in hiring processes and the concerns arising from the potential biases inherent in these algorithms. The motivation emanates from a commitment to understanding and mitigating biases that may perpetuate existing inequalities and discriminatory practices. As more businesses embrace AI-based hiring, it is important to evaluate how these algorithmic biases impact underrepresented groups, accelerating the development of techniques to promote fair and inclusive practices.

3.2. Research Aim

The primary aims of the study are:
  • To analyze case studies on AI in hiring. This entails the exploration of specific case studies, both successful implementations and instances of bias. By examining the real-world impact of AI on hiring, the study provides insights into the challenges and opportunities associated with AI-based recruitment.
  • To evaluate the impact of algorithmic bias. The study assesses the impact of algorithmic bias in AI-driven hiring processes and the potential implications for organizations. This assists in uncovering the multifaceted impacts of biases on organizational dynamics.
  • To investigate bias mitigation strategies. The paper investigates AI techniques designed to mitigate algorithmic biases in hiring, empowering organizations to leverage AI-powered tools to foster diversity and inclusivity.

3.3. Research Significance

The study holds significance in the following aspects:
  • Academic implications. The study expands beyond the prevalent focus on the application of AI in hiring into the often-overlooked problem of algorithmic biases. By providing a comprehensive understanding of the biases through historical perspectives, case studies, and advanced techniques, the study seeks to fill a significant gap in existing research.
  • Social implications. Addressing algorithmic biases in hiring allows access to employment opportunities to all individuals irrespective of factors like complexion, sex, religion, race, or social status. This is imperative for promoting equality and fostering an inclusive society.
  • Economic implications. Unbiased hiring can lead to greater diversity within organizations using AI-based tools. This diversity positively influences organizational performance by fostering a culture of innovation and creativity.
  • Ethical implications. The study provides insights into the emergence of biases in hiring and the strategies to rectify them. The study acts as a catalyst for responsible AI practices.

4. Methodology

In this section, we detail the methodology employed to conduct a comprehensive review of AI techniques for addressing algorithmic biases in the hiring process. Our approach endeavors to thoroughly examine existing evidence to meet the research aims. The details presented here are meant to facilitate the reproducibility of our work. The section demonstrates the sources of information, inclusion and exclusion criteria, search strategy, and date restrictions.

4.1. Search Strategy

A systematic search was conducted across various academic databases, including Google Scholar, ScienceDirect, SpringerLink, IEEE Xplore, and Wiley Online Library.
We used the following terms: Machine learning, artificial intelligence, hiring, recruitment, fairness, algorithmic bias, hiring biases, Artificial Intelligence-powered tools, Artificial Intelligence systems, and bias mitigation.
The following Boolean operators were used to refine the search:
  • “Artificial Intelligence-powered tools” OR “Artificial intelligence systems”;
  • “algorithmic bias” OR “algorithmic unfairness”;
  • “transparency” OR “ethical”;
  • “recruitment” OR “hiring”;
  • (“Artificial Intelligence-powered tools” OR “Artificial Intelligence systems”) AND (“algorithmic bias” OR “algorithmic unfairness”);
  • (“algorithmic bias” OR “algorithmic unfairness”) AND (“recruitment” OR “hiring”).
The search was confined to articles published in English.

4.2. Inclusion and Exclusion Criteria

The following inclusion and exclusion criteria guided the identification of relevant articles (Table 1).
Data extraction followed a systematic process involving the identification of methodologies used and the extraction of findings. The systematic approach facilitated the analysis of common themes and patterns, ultimately providing insights into mitigating AI algorithmic bias in hiring.

4.3. Screening Process

The screening process was executed carefully to uphold the integrity of the review. The use of the specified search terms and Boolean operators facilitated the identification of the most suitable articles. Duplicates were identified and removed to ensure the data’s accuracy and reliability. The titles and abstracts of the remaining articles were thoroughly screened according to the criteria in Table 1 above. The figure below (Figure 2) provides a detailed description of the screening process.

5. Understanding Bias in Hiring

This section examines the biases within the CV ranking process. The first part outlines the CV ranking process and its inherent biases. The second part outlines non-technical solutions aimed at mitigating the biases in the CV ranking process.

5.1. HR Bias in Ranking CVs

The CV ranking process, as an initial stage in hiring, involves the assessment of applicants’ CVs by recruiters or HR professionals to identify candidates possessing the required qualifications and experiences for specific positions [28]. This crucial step establishes whether an applicant advances to subsequent stages like interviews and assessments. Recruiters use CV ranking to sift through a large pool of applicants and identify potentially suitable candidates for a job. Given the volume of applications HR professionals receive, specific criteria or qualifications are established to rank CVs [28]. CV ranking is crucial in streamlining the recruitment process, enabling organizations to focus on evaluating the most promising candidates. It optimizes time and resource efficiency in hiring, allowing recruiters to assess applicants’ qualifications and skills swiftly and make informed decisions about which candidates should proceed to the next stage. Through CV ranking, HR professionals eliminate unsuitable candidates, saving time. Simultaneously, the CV ranking process influences an applicant’s perception of the hiring organization. From the initial interactions with an organization, applicants form impressions regarding the fairness and transparency of the entire recruitment process. Perception of unfairness in the CV ranking process may dissuade potential candidates from applying, depriving a business of a qualified labor force.
The CV ranking process, while integral to hiring, introduces the risk of bias and discrimination. HR professionals may harbor unconscious biases that influence their CV evaluation. Implicit bias, manifested in unconscious attitudes and stereotypes that people may hold towards particular groups [29], is a major contributor to hiring discrimination. Researchers use the Implicit Association Test (IAT) to measure the unconscious association of specific traits with certain demographic groups [30]. The results confirm that people tend to associate negative traits with racial minorities [31]. When activated in the hiring process, such associations impede fair competition for job opportunities for minority groups. The pie chart in Figure 3 from Wenzel Fenton, a reputable law firm, shows the causes of discrimination in hiring [32].
Stereotypes influence perceptions and evaluations unconsciously, significantly impacting employment decisions. Studies indicate that when evaluating people from stereotyped groups, individuals tend to concentrate on information aligning with stereotypes and interpret data to affirm the stereotypes [33]. Countering stereotypes poses challenges in the CV ranking process, especially for stigmatized groups expected to conform to established stereotypes [34]. Deviation from stereotypes may result in a good performance being dismissed as mere luck. Consequently, the anticipation of biased treatment adversely impacts the performance of stereotyped groups. Additionally, the preference for individuals within one’s own group while dismissing those from other groups introduces another layer of bias in the CV ranking process. This is manifested where some organizations forego public job advertisements, relying on existing networks of friends, classmates, and relatives to identify potential candidates [35]. The limited access to diverse social networks propagates the exclusion of certain groups in the hiring process.
In summary, CV ranking is an important tool for filtering numerous applications; however, it introduces biases that compromise the fairness of the recruitment process. Unconscious attitudes and stereotypes among HR professionals contribute to decisions that favor or discriminate against certain groups. Reliance on existing networks for sourcing exacerbates biases, causing candidates in certain groups to rank higher compared to the rest of the applicants.

5.2. Non-Technical Solutions in Ranking CVs

The prevalence of HR bias in CV ranking is a critical organizational challenge. HR professionals and other recruiters should be aware of biases [36] and the following non-technical solutions available to mitigate them. Firstly, training programs are needed to equip HR professionals with the skills needed to minimize biases. Often, inadequate training limits recruiters’ understanding of hiring biases. In response, organizations can implement unconscious bias training programs to equip HR professionals with the skills required to reduce biases [37]. The training raises awareness among hiring professionals concerning implicit biases that may influence decision making [37]. It facilitates the recognition of unconscious biases, enhancing the decision-making capabilities of HR professionals. Further, such training enhances cultural shifts within organizations, promoting a culture of fairness in hiring processes. Organizations should supplement the training programs with clear and objective criteria in CV ranking. Research demonstrates that developing structured guidelines for HR recruiters can minimize bias and improve the matching of resumes to available jobs [28]. With established criteria, HR professionals can assess candidates more objectively based on their skills, qualifications, and experiences. An objective approach promotes fairness and impartiality in candidate assessments.
Additionally, recruiters can adopt blind hiring techniques in CV ranking [38]. HR professionals are supposed to consider applicants’ basic qualifications, background, and educational aspects to determine suitability for a position. However, personal details may introduce biases related to religious, cultural, and background factors. For instance, studies indicate hidden biases against men and women of color when applying for job positions [39]. The presence of personally identifiable information in CVs can influence how HR professionals rank them. To counteract biases emanating from personal details, blind hiring techniques enable employers to anonymize CVs by removing personal details like names, gender, age, and ethnicity. Blind hiring techniques allow recruiters to focus on the applicants’ qualifications and skills, significantly reducing bias in the assessment process.
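The sketch below illustrates the anonymization step of blind hiring. Production systems typically rely on trained named-entity recognition models; the regular expressions, placeholders, and sample CV here are illustrative assumptions only.

```python
import re

# Patterns for common personally identifiable details; illustrative only.
REDACTION_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),   # email addresses
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[PHONE]"),     # phone numbers
    (re.compile(r"\b(19|20)\d{2}\b"), "[YEAR]"),           # years (hides age)
]

def anonymize_cv(text: str, applicant_name: str) -> str:
    """Remove the applicant's title, name, and common identifying details."""
    text = re.sub(r"\b(Mr|Mrs|Ms|Miss|Dr)\.?\s+", "", text, flags=re.IGNORECASE)
    text = re.sub(re.escape(applicant_name), "[NAME]", text, flags=re.IGNORECASE)
    for pattern, placeholder in REDACTION_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

cv = "Mrs. Jane Doe, born 1985, jane.doe@mail.com, +44 161 295 5000."
print(anonymize_cv(cv, "Jane Doe"))
# -> "[NAME], born [YEAR], [EMAIL], [PHONE]."
```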
Moreover, diversifying CV ranking teams is instrumental in mitigating biases. The absence of diversity in hiring teams prompts biased hiring, as recruiters favor specific groups. Including recruiters from diverse backgrounds, experiences, and perspectives effectively promotes multiple viewpoints in the evaluation process, challenging biased assumptions [40]. Businesses must develop a culture that fosters diversity. This will encourage individuals from different backgrounds, religions, economic statuses, and genders to apply.
Subsequent to the implementation of the non-technical measures, entities should adopt continuous monitoring and evaluation. An analysis of data generated from CV ranking will assist recruiters in identifying areas that need improvement. Where discrepancies emerge, investigations should be conducted to address the problem. Constant scrutiny of the CV ranking process will foster a more inclusive and equitable CV ranking, providing an even playing ground to all candidates. Calibration of the CV ranking process should be performed often to align with organizational goals.
In summary, non-technical solutions exist to address biases in the CV ranking process. These include the implementation of unconscious bias training programs to equip HR professionals with the necessary skills to minimize biases, having clear and objective evaluation criteria, blind hiring techniques in CV ranking, and the diversification of their CV ranking teams. The implementation of these measures must be accompanied by continuous monitoring and evaluation of the hiring outcomes to establish areas requiring improvements.

6. Applications of AI in Hiring

AI techniques have revolutionized various industries and processes, including hiring. Recognizing the need to avoid biases in hiring, NLP techniques emerged as innovative solutions, poised to streamline the hiring process. NLP, a subset of AI, allows computers to understand human language, hence deriving meaning from a vast array of linguistic inputs [41]. Traditionally, businesses adopted manual processing of information. However, the advancement of NLP technology and the deployment of neural networks allow organizations to leverage data for the development of systems to address common issues. The adoption of NLP systems facilitates efficiency and cost reduction in organizations. Manual HR processes in large organizations are difficult and time consuming, and often frustrate candidates. With the global shift towards post-COVID-19 pandemic operations, businesses are optimizing their operations, intensifying the demand for a larger workforce. NLP techniques are increasingly adopted to automate the hiring process. Similarly, deep learning, a subset of ML, is a useful tool with potential applications in the measurement of human behavior [42]. Deep learning techniques have automated routine tasks, for example, in healthcare, where they have proven superior to medical professionals in the detection of cancer in mammograms [43]. Similarly, deep learning techniques in the legal field exhibit proficiency in identifying legal risks in contracts [44]. The table below summarizes the application of AI in hiring (Table 2).
The conventional approaches utilizing psychometric principles have proven less effective, particularly in recruiting candidates with the required skills and qualifications [45]. At the same time, traditional sourcing methods like printed job applications have become less popular, paving the way for more advanced internet-based sources and e-recruitment processes. The HR function enhances an organization’s growth, competitive advantage, and innovation. Businesses are engaged in fierce competition to attract and retain candidates with the required skill sets. Organizations have turned to technologically driven hiring processes, as demonstrated by the increased adoption of AI from 2018, when businesses started sourcing candidates using information derived from social media profiles [45]. Data from social media enabled recruiters to evaluate candidate values, beliefs, and attitudes, providing information that could not be obtained from traditional CVs. Since then, the adoption of AI techniques in the hiring process has witnessed widespread acceptance.
AI is instrumental in the CV screening process, especially in the identification of the most suitable candidates. As the first step in the hiring process, CV screening entails the identification of CVs for a particular position based on the job description. A manual approach proves laborious and time consuming, especially when dealing with large volumes of applications. AI techniques allow the automation of the CV screening process. They can autonomously extract important information from resumes, including education, work experience, and skills. Automation saves recruiters time and allows them to focus on establishing the suitability of the candidates. Bhakagat indicates that the recruitment industry will save significant time with AI-enabled tools [46]. Unlike manual CV screening, which is time consuming, AI-based tools analyze extensive data sets, providing comprehensive results promptly for HR professionals [46]. Research indicates that AI-based hiring processes are speedy, accurate, and cost effective [47]. Nawaz and Gomes demonstrate that for a single job post, organizations receive numerous applications, which is challenging for HR professionals [48]. In industries experiencing high turnover, efficiency in hiring is a strategic advantage [49]. Efficiency in hiring promotes a quicker decision-making process. James Wright believes that AI-powered tools expedite decision making, reducing errors inherent in human recruiters [50]. Businesses can use algorithm-based hiring tools to select the most suitable candidates from thousands of job applicants [47]. Sridevi and Suganthi analyze an AI-based CV-matching system, developed using Python 3.7 and packages like Pandas, NumPy, and Matplotlib, on 14,906 CVs [47]. The results indicate that the AI-based CV-matching system is easy to implement and integrate into the web server. The system saves time, making it appropriate for HR professionals dealing with numerous applications [47]. The automation of the hiring process facilitates skill assessment. Pre-trained hiring algorithms can identify the skill levels of applicants and establish a candidate’s suitability for a post. This allows for a more objective way of assessing candidate skills and qualifications, supporting a fair recruitment process.
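As a hedged illustration of automated CV screening, the minimal sketch below ranks candidate CVs against a job description using TF-IDF vectors and cosine similarity with scikit-learn. It is not a reconstruction of the system in [47]; the texts and the bag-of-words representation are simplifying assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_description = "Data analyst with Python, Pandas, and SQL experience"
cvs = [
    "Five years as a data analyst using Python, Pandas, and SQL.",
    "Marketing manager experienced in brand campaigns and events.",
    "Software engineer skilled in Java and cloud infrastructure.",
]

# Embed the posting and the CVs in a shared TF-IDF vector space.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([job_description] + cvs)

# Rank CVs by cosine similarity to the posting (row 0).
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
for rank, idx in enumerate(scores.argsort()[::-1], start=1):
    print(f"rank {rank}: CV {idx} (similarity {scores[idx]:.2f})")
```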
Beyond CV screening, AI techniques can assess applicants’ personalities and behaviors. Deep learning models can analyze data from applicants’ social media profiles and other online forums, since research shows that individuals use their social media accounts to share aspects about themselves [51]. Social media profiles serve as a repository of information that can offer more insights into a candidate’s persona. Recruiters can use the information to understand the personality traits that align with the job requirements and how individuals fit into the organizational culture. Cover letters can establish candidates’ enthusiasm and worldview, allowing businesses to filter out less motivated candidates and those whose worldview is contrary to the values of the business from a large pool of applicants. Notably, AI can identify candidates with racist social media posts and prevent them from proceeding further in the hiring process, ensuring that candidates adhere to organizational values.
Language barriers are a major challenge in hiring, especially for multinational corporations. Such organizations operate across diverse linguistic landscapes, exposing them to language barriers. For instance, a UK-based multinational corporation operating in Germany and China would be forced to hire workers who speak these languages. These global entities often encounter the necessity to recruit local talent to align with the unique needs of each geographical location. Interviewing candidates from different cultures forces a business to employ HR professionals who understand local languages. However, employing such a large number of HR professionals is expensive. AI facilitates the hiring of employees for businesses with operations overseas despite the language barriers [47]. AI techniques can recognize different languages, allowing HR professionals to interview candidates without necessarily understanding the local dialect. The evolution of NLP further amplifies this capability, with the promise of incorporating an even broader spectrum of languages into the AI-driven hiring base [52]. AI allows businesses, particularly multinational corporations, to recruit employees from different linguistic backgrounds without necessarily hiring translators.
Engaging candidates throughout the hiring process is critical in the contemporary hiring landscape, with AI-powered solutions emerging as enablers. Organizations are often unable to maintain constant communication with candidates due to time constraints. A solution to this challenge comes in the form of AI-driven chatbots that can offer applicants more information about the organization’s mission, vision, values, and any other information they need. Nawaz and Gomes consider AI chatbots the new recruiters [48]. These chatbots use advanced NLP capabilities to simulate real-life conversations with candidates, familiarizing the applicants with an entity and hence allowing them to decide on an organization’s suitability. AI-powered chatbots allow applicants to access information without necessarily engaging the recruiters directly. As a result, recruiters can allocate their time more efficiently, as the AI systems can coordinate the process by automatically scheduling calls, tests, and interviews [53]. AI-powered chatbots mitigate the frustration applicants experience while awaiting responses, especially in cases where organizations contend with a high volume of applications.
In summary, AI has widespread applications in the hiring process. Firstly, AI acts as a catalyst in the automatic CV screening process, saving time wasted in a manual laborious process. Secondly, AI facilitates the selection of quality candidates from a pool of applicants. The analytical capabilities of AI contribute significantly to the identification of candidates best suited for specific positions. Thirdly, AI tools assume a major role in assessing the personality and behavior of candidates. Their ability to extract data from social media profiles assists in establishing a candidate’s suitability for a given post. Lastly, AI systems assist in overcoming language barriers between applicants and hiring professionals, facilitating global recruitment efforts.

7. Applications of AI in Eliminating Bias

Algorithmic biases may perpetuate prejudices against certain groups of potential candidates, leading to inequitable access to job opportunities. For instance, while not directly related to hiring, the COMPAS system used in the US criminal justice system exhibits inaccuracies in analyzing individuals of different racial backgrounds [54]. In this system, disparities emerge, as it systematically allocates a lower recidivism risk to white people compared to their actual risk while attributing higher risk levels to Black individuals [55]. In the medical field, Optum, a medical system, allocates fewer resources for treating Black patients and disproportionately more resources to the white population [56]. The HR field has witnessed biased outcomes in automated recruitment systems, as in the case of HireVue, which disadvantages non-native English speakers due to difficulties in understanding their accents, excluding them [57]. As hiring algorithms become increasingly integral to the hiring process, their potential to amplify existing biases poses a substantial risk. Given the adverse implications of algorithmic bias, organizations should consider proactive measures to address the issue.

7.1. AI Approaches to Bias Mitigation

Hiring algorithms exhibit biases across various dimensions, even where designers work towards eliminating them. It is important to consider ways of enhancing hiring algorithms to detect and mitigate some of the biases. The application of AI techniques in hiring is a promising solution to algorithmic biases [58]. It commences with an examination of how hiring algorithms introduce biases. In ML evaluations, bias and discrimination can be examined by considering the confusion matrices for the various protected categories. Language models estimate the probability of a sequence of words, allowing them to predict the most probable next word or phrase. Since biases are prevalent in any human language, language models are vulnerable to the same biases. Unfairness emanates from skewed behavior that wrongly uses these biases to produce outcomes that discriminate against a certain group. When dealing with words describing gender, e.g., men and women, certain attributes can be ascribed to each category, significantly reinforcing stereotypes. For instance, words like tough, persistent, and strong can be associated with “man”, while others like tender, emotional, and weak can be connected to “woman”. In such cases, it is evident that the language model itself is not to blame for the bias, but rather the training [59].
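A minimal sketch of the per-group confusion-matrix check mentioned above is shown below, using synthetic labels; comparing false negative and false positive rates across a protected attribute is one simple way to surface disparate treatment.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Synthetic screening outcomes; all values are illustrative.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # truly qualified?
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 0, 0])   # shortlisted by the model?
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

for g in ("A", "B"):
    mask = group == g
    tn, fp, fn, tp = confusion_matrix(y_true[mask], y_pred[mask]).ravel()
    fnr = fn / (fn + tp)   # qualified candidates wrongly rejected
    fpr = fp / (fp + tn)   # unqualified candidates wrongly shortlisted
    print(f"group {g}: FNR={fnr:.2f}, FPR={fpr:.2f}")
# Large gaps in FNR or FPR between groups signal disparate treatment.
```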
There are two major AI techniques for addressing algorithmic biases: correction of the vector space and data augmentation. Regarding the correction of the vector space of the model, bias often emerges in the vector space where word embeddings are learned. Words associated with gender, ethnicity, or other sensitive attributes may acquire vectors that perpetuate biases [60]. Correcting the vector space follows a structured procedure. Developers identify the vector space dimensions that harbor the bias and try to equalize the distance between the protected attribute (such as Black and white people) and the biased concept (like qualifications and skills). Where the vector model leans towards associating white people with better skills and qualifications, developers can rectify the problem by associating the same skills and qualifications with Black people, neutralizing biased embeddings by moving them closer to a neutral point in the vector space [61]. This entails adjusting gender-related word pairs like “he” and “she” to have similar embeddings, which ensures equal representation in the vector space. Several researchers believe the approach can be used to eliminate biases in hiring algorithms [62,63]. This is because it contributes to a more neutral representation of words, which significantly mitigates the impact of stereotypes. Nonetheless, the approach is problematic since it may cause semantic drift: in the process of correcting the vector space, word semantics may be altered, significantly affecting the performance of the model [64]. Moreover, the approach may not address intersectionality challenges effectively; addressing biases that relate to multiple attributes like gender and ethnicity simultaneously remains complex for the approach.
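The following toy sketch illustrates the neutralization step of vector-space correction in the spirit of the embedding-debiasing work cited above [61]: the component of a job-relevant word vector lying along a bias direction, defined by a protected word pair, is projected out. The three-dimensional vectors are made-up values chosen for readability.

```python
import numpy as np

# Tiny made-up embeddings; real embeddings have hundreds of dimensions.
he    = np.array([0.8, 0.1, 0.3])
she   = np.array([0.2, 0.7, 0.3])
skill = np.array([0.7, 0.2, 0.6])   # a job-relevant word we want neutral

# Bias direction: the normalized difference between the paired words.
bias_dir = (he - she) / np.linalg.norm(he - she)

# Neutralize: subtract the word's projection onto the bias direction,
# making it equidistant from both ends of the pair along that axis.
skill_debiased = skill - np.dot(skill, bias_dir) * bias_dir

print(np.dot(skill, bias_dir))            # nonzero: leans toward one end
print(np.dot(skill_debiased, bias_dir))   # ~0: neutral along the pair
```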
Data augmentation is a major AI technique for mitigating bias [65]. It works by generating data using information derived from the training set. The deep learning technique operates by fine-tuning the model through changes to its source data [66]. The technique seeks to diversify the available examples, exposing the model to a wider range of scenarios without necessarily introducing new data. The process begins with identifying underrepresented groups in the existing training data. Next, modifications of the existing data are performed to create new instances through synonym replacement, paraphrasing, or introducing slight variations to numerical features. In this way, developers can balance the number of times protected attributes appear [65]. For instance, if the original data contain the following statement: “white people are qualified and have the right skills”, the technique will add the following: “Black people are qualified and have the right skills”. While undertaking data augmentation, the generated examples need to remain as realistic as possible. This simulates potential variations within the same group. The augmented data instances are then merged with the original data set. The outcome is an expanded and more diverse dataset that can be used to train the algorithm. Since the algorithm derives its results from the augmented data employed, this technique promises to eliminate biases in hiring. This is because it allows the model to learn from a more extensive range of examples. The creation of additional instances for the underrepresented groups is crucial in minimizing biases associated with specific demographics or attributes. Despite the effectiveness of data augmentation, the technique can suffer from overfitting [67]: the model can become too tailored to the augmented examples, reducing its ability to generalize. There is also the risk of the augmented instances deviating slightly from the original data, reducing the representativeness of the model in real-world scenarios.
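A minimal sketch of this augmentation step follows, reusing the example sentence above: a counterfactual copy of each training sentence is generated by swapping protected attribute terms, and the copies are merged back into the training set. The swap table and the second sentence are illustrative assumptions.

```python
# Counterfactual augmentation: swap protected attribute terms to balance
# how often each appears in the training data.
SWAPS = {"white": "Black", "Black": "white", "he": "she", "she": "he"}

def counterfactual(sentence: str) -> str:
    return " ".join(SWAPS.get(token, token) for token in sentence.split())

training_data = [
    "white people are qualified and have the right skills",
    "she has strong communication skills",
]

augmented = training_data + [counterfactual(s) for s in training_data]
for sentence in augmented:
    print(sentence)
# The augmented set now mentions each protected term equally often.
```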

7.2. Step-by-Step Approach to Algorithm Bias

Garrido-Muñoz et al. provide a systematic approach to algorithmic bias, offering a comprehensive guide that software engineers can follow to address biases in deep model generation and application [59]. While the authors do not specifically focus on algorithms in hiring, the steps are universal for anyone who wants to deal with bias, including in designing algorithms for hiring. The first step is defining stereotype knowledge by identifying the protected properties and the related stereotyped aspects. Algorithm designers are encouraged to develop an ontology for each protected category [59], enabling them to populate their stereotype knowledge and identify potential biases that may harm the system. The second step is for software engineers to evaluate the model to establish how it behaves with stereotyped and protected expressions. The third step is for developers to analyze the results of the evaluation, pinpointing the expressions or categories that result in higher bias [59]. Next, software engineers must reevaluate the model and loop through the last steps until they obtain an acceptable response. Lastly, the procedure results should be reported by attaching model cards to attain transparent model reporting [59]. The procedure can be adapted according to the requirements of the particular AI project, which makes it applicable to addressing biases in hiring algorithms.
Pagano et al. categorize bias mitigation techniques into three main categories: pre-processing, in-processing, and post-processing [68]. Pre-processing approaches work towards rebalancing the data, while in-processing mitigation concentrates on the model’s regularization with a bias correction term [69]. Software engineers can employ pre-processing mitigation techniques to alter a dataset to satisfy fairness metrics. In this regard, the approach mitigates bias by correcting distortions associated with protected groups and observing the resulting changes in the dataset. In essence, the methods quantify the discriminatory effects so that they can be removed or accounted for [70]. The ultimate aim is to form a fair training distribution that ensures hiring algorithms rely on fair data sets. Undertaking pre-processing permits developers to preserve users’ privacy by generating synthetic data from a representation of the initial data [69]. An advantage of pre-processing is that it is undertaken independently of the model. At the same time, since the approach alters the data before building the hiring model, it assists in addressing the root cause of the bias problem [70]. However, pre-processing bias mitigation measures may be ineffective, since the data are likely to remain biased, and the approach is tedious when dealing with a large data set. As a result, applying deep learning can assist in addressing the problem through in-processing mitigation approaches. These improve fairness by producing nondiscriminatory results, even when based on biased data. One of the in-processing approaches is adversarial de-biasing. It involves training an adversarial network to predict protected demographic information from biased data [71]. The adversarial network serves as the discriminator. The fair network learns to fool the discriminator, minimizing the probability that the discriminator predicts a protected attribute from the model’s output. As a result, adversarial learning reduces the impact a protected trait has on the output [70]. The advantage of the in-processing approach is its widespread generalizability across applications, and hence, it can be employed to mitigate biases in hiring algorithms. The last category is the post-processing method, which other researchers have also considered applicable to mitigating bias. It involves manipulating model predictions using a certain fairness constraint [70]. The approach works without accessing the model parameters. However, using a post-processing bias mitigation technique can result in a significant loss in performance. A minimal sketch of the in-processing idea follows, and Table 3 below then summarizes the core ideas and deficiencies of hiring algorithms using AI techniques and those without AI techniques.
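As a hedged illustration of in-processing mitigation with a bias correction term, the sketch below adds a statistical-parity penalty to a logistic regression loss and trains it by gradient descent on synthetic data. The penalty form and its weight are illustrative choices, not the method of any cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hiring data: X features, g a protected attribute, y labels
# that are (by construction) correlated with group membership.
n = 200
X = rng.normal(size=(n, 3))
g = (rng.random(n) < 0.5).astype(float)
y = ((X[:, 0] + 0.5 * g + rng.normal(scale=0.5, size=n)) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Loss = cross-entropy + lam * (mean score group 1 - mean score group 0)^2
w, lam, lr = np.zeros(3), 5.0, 0.1
for _ in range(500):
    p = sigmoid(X @ w)
    gap = p[g == 1].mean() - p[g == 0].mean()
    grad = X.T @ (p - y) / n                      # cross-entropy gradient
    s = p * (1 - p)                               # sigmoid derivative
    dgap = (X[g == 1] * s[g == 1][:, None]).mean(axis=0) \
         - (X[g == 0] * s[g == 0][:, None]).mean(axis=0)
    grad += 2 * lam * gap * dgap                  # parity-penalty gradient
    w -= lr * grad

p = sigmoid(X @ w)
print(f"score gap between groups: {p[g == 1].mean() - p[g == 0].mean():.3f}")
```

Raising `lam` trades predictive accuracy for a smaller score gap between groups, which is the central tension of in-processing methods.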

7.3. Case Studies of AI Eliminating Biases

AI techniques have been used to mitigate biases in different industries. For instance, IBM’s AI Fairness 360 Toolkit is open-source software that assists in detecting and removing bias in ML. It allows developers to utilize state-of-the-art algorithms to keep unwanted biases from appearing in their ML pipeline [79]. By adopting the toolkit, businesses can improve the fairness of their candidate selection process. Moreover, Textio’s augmented writing platform employs AI to mitigate biased language in job descriptions. The platform uses NLP to analyze job postings and suggest alternative language that is more inclusive and appealing to a diverse audience. Organizations using Textio, like Cisco, have indicated the platform has helped them create gender-neutral job adverts, allowing the business to resonate with a diverse pool of people [80]. Lastly, Accenture’s Fairness Tool for AI is designed to evaluate and address bias in AI models, including those in recruitment. The tool assesses models for fairness across demographic groups and offers recommendations to organizations on how to mitigate biases. The tool has been employed to enhance fairness in hiring algorithms, ensuring that AI systems do not discriminate against particular groups [81].
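A sketch of how the AI Fairness 360 toolkit can be applied to a toy hiring table is shown below, written against the toolkit’s standard Python API to the best of our knowledge; the column names and values are invented for illustration.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy hiring table: "sex" is the protected attribute (1 = privileged),
# "hired" the favorable label. All values are invented.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 0, 0, 0, 1, 0],
    "score": [0.9, 0.7, 0.8, 0.6, 0.9, 0.5, 0.4, 0.7],
    "hired": [1, 1, 1, 0, 1, 0, 0, 0],
})
dataset = BinaryLabelDataset(df=df, label_names=["hired"],
                             protected_attribute_names=["sex"])

priv, unpriv = [{"sex": 1}], [{"sex": 0}]
metric = BinaryLabelDatasetMetric(dataset, privileged_groups=priv,
                                  unprivileged_groups=unpriv)
print(metric.disparate_impact())  # values below 1 indicate bias against unpriv

# Pre-processing mitigation: reweigh instances so that the label and the
# protected attribute become statistically independent before training.
reweighed = Reweighing(unprivileged_groups=unpriv,
                       privileged_groups=priv).fit_transform(dataset)
print(reweighed.instance_weights)
```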
Conclusively, AI techniques provide valuable solutions for mitigating biases in hiring processes. Firstly, a correction of the vector space of an algorithm can be used to eliminate bias in hiring. In this case, developers identify the dimension that contains the protected attribute and the biased concepts. Secondly, organizations can undertake data augmentation by fine-tuning the data source to eliminate biased data. Bias mitigation strategies can be evaluated from three broad categories: pre-processing, in-processing, and post-processing.

8. Limitations of AI in Eliminating Bias

Deep learning techniques can analyze large sets of data and, in the process, extract complex patterns [82,83,84,85,86]. Such capability makes deep learning methods instrumental in addressing various biases in different domains, including hiring. Leveraging the ability of AI, organizations can become fairer and more unbiased in critical practices like candidate selection and evaluation. Indeed, there is no doubt that AI techniques have found widespread application in hiring and promise to make the process seamless and unbiased. However, addressing biases in hiring is not a straightforward issue. The complexity involved in detecting and addressing biases in hiring makes the task challenging even for AI techniques. Despite their promise to handle the issue of bias in hiring, it is crucial to understand the limitations and challenges associated with eliminating biases through AI.
One limitation of AI methods is the quality of the data and criterion definitions used. Deep learning and NLP approaches need predictor variables of a certain quality in the hiring process. For instance, psychological constructs need to be reliable and valid. However, given that ML hiring algorithms can handle large sets of predictor variables, job interviews and observation assessments do not need to be aggregated to obtain reliable measures. In such cases, the quality of a single input is less significant than in traditional selection procedures, where a single input has a major effect on the outcome. AI techniques can incorporate multiple sources like automated interviews and social media platforms, among others [87]. This offsets the weaknesses resulting from a single inaccurate data source. However, AI faces a serious issue when there is a defined target variable that the model is expected to predict [88]. For instance, an organization may use non-protected aspects like sales numbers to make hiring decisions. AI techniques can alter hiring algorithms to ignore protected characteristics in favor of a more objective variable like sales performance. While factors like sales numbers are not influenced by subjective assessments and can be utilized to prevent prejudices and biases, they may not be an appropriate measure for addressing biases. Aspects like sales volumes may be biased and dependent on other factors like organizational climate and not necessarily on the skills and qualifications of a candidate. For instance, where an employee worked in an efficient organization, their sales may be significantly higher than those of another employee in a less efficient business. When AI relies on such single variables, it is likely to mislead hiring managers on the suitability of candidates. Another limitation of AI techniques in mitigating bias is that they may fail to ensure algorithmic fairness [89]. In this case, fairness implies that the results of the hiring algorithm are independent of particular sensitive attributes like gender and religion or the proxy variables associated with sensitive data like a zip code. Such unfairness results from biases that deep learning models and NLP techniques introduce in processing text. Biases in text classification models result in biased outcomes even where AI is used to mitigate the negative effect.
Moreover, AI techniques may be prone to overgeneralization, since deep learning models depend largely on training data. This creates the risk of overgeneralizing patterns, further reinforcing the biases embedded in the algorithms [90]. For instance, if historically successful hires have been predominantly male, the model may favor male candidates in future predictions. Hence, hiring algorithms may continue to produce biased outcomes despite embedding AI techniques to address the issue. Overcoming this problem requires carefully curating the training data to ensure a balanced approach. At the same time, despite the advancement of AI techniques, it is challenging to use these methods to understand the contextual nuances and social complexities that contribute to biases. Research has shown that deep learning methods excel at pattern recognition [76]. While this is commendable, they fail to understand the underlying sociocultural factors that influence bias in hiring. For instance, a model can use deep learning to detect protected and sensitive categories and apply corrective measures. However, deep learning techniques lack a holistic understanding of social dynamics and the integration of contextual information. In hiring, bias is a complex social aspect that depends on various factors. The inability of AI approaches to understand contextual nuances makes them vulnerable to biases in different contexts.

Privacy and Ethical Implications of AI Systems

The adoption of AI-driven systems raises privacy concerns. AI-based hiring tools require algorithms to be trained on diverse data, which often requires access to sensitive personal information [91]. This raises concerns about privacy, especially where organizations do not handle data securely. The failure of organizations to prioritize data security when sensitive personal information is used violates privacy, which could expose them to lawsuits [92]. Additionally, AI-based hiring systems raise serious ethical concerns. A majority of the models lack transparency and explainability. Their “black-box” nature makes it difficult to understand how the systems reach hiring decisions. The lack of transparency raises ethical concerns, especially in hiring, where decisions have significant effects on individuals and organizations. Moreover, AI systems may struggle to assess soft skills accurately [93]. The potential oversight of soft skills makes it challenging to evaluate candidates beyond technical qualifications. Furthermore, as organizations embrace AI tools in hiring, human recruiters could face job displacement, leading to more job losses and increasing economic disparities.
In summary, despite the important role of AI approaches in eliminating biases in hiring, they are limited in various ways. In particular, AI methods rely largely on the quality of the data and criterion definitions utilized. This is especially problematic when the techniques have a defined target variable they are expected to predict. Additionally, AI may not achieve algorithmic fairness. Biased outcomes are possible even where the techniques are employed to mitigate the negative effects. Lastly, the techniques often fail to understand the contextual and social complexities that cause biases. The inability to understand such factors means that the techniques cannot fully address hiring biases. There is also the problem of overgeneralization, where AI techniques can over-rely on patterns, further reinforcing biases.

9. Comparing and Contrasting Studies

This section compares and contrasts studies focusing on mitigating biases using AI. The comparison identifies researchers who agree on the use of AI techniques to mitigate bias in hiring. The contrast shows how researchers differ and arrive at different conclusions regarding using AI to address hiring biases.

9.1. Comparing Research Articles

The existing body of research shows that AI techniques are instrumental in addressing algorithmic biases [42,45,47]. Although organizations have shifted to hiring algorithms to address the problem of hiring biases, the shift alone has not solved the problem. Hiring algorithms, like humans, are vulnerable to biases, which ultimately affect how they make decisions [48]. Hiring algorithms are biased depending on the data used to train them, and algorithm designers may introduce biases in making them, increasing the chances that the hiring algorithms will introduce biases into decision making. Despite the weaknesses of algorithm-based systems in hiring, research shows that more can be done to make them better [57]. In particular, investigators see AI techniques as antidotes to biases in algorithm-based hiring systems. Investigators share this belief due to the ability of these techniques to improve hiring algorithms and introduce corrective measures [25,59]. The ability of deep learning methods to identify patterns and make more accurate predictions makes them appropriate for addressing the issue of bias in employment, which has remained problematic for a long time. When used with NLP approaches, deep learning offers more viable solutions to the problem.
AI techniques are not without defects. They can fail to detect biases in hiring, especially when they are trained on already biased data and learn patterns consistent with it [94]. Further, some forms of bias may be challenging to detect because of the complexity of the issue: bias is a social phenomenon that varies from one context to another, and the inability of AI methods to understand every context means they can miss it [95]. The reviewed studies demonstrate that HR professionals should stay alert to possible biases even where AI methods are incorporated into hiring systems. Chen believes that humans must collaborate with autonomous systems to address biases [53]. The collaboration provides a double check, detecting and eliminating biases that either party alone would miss. Chen also notes that humans tend to come into conflict with machines, and that this competition for control adversely affects how humans use machines to address issues like bias [53]. Brishti and Javed agree that humans must collaborate with AI-based solutions to realize the full benefits and opportunities in any field [45].

9.2. Contrasting Research Articles

Applicants, however, have raised concerns about how AI-powered systems treat them. In research on collaboration between recruiters and AI, Zhisheng Chen explored participants' feelings when interacting with AI-powered hiring tools. While some participants responded positively, others reported that conversations with AI systems did not feel as natural as communicating with a human being [48]. The researcher recommends that AI systems strengthen their hiring algorithms and optimize their communication [48]. Humans must collaborate with AI-based tools, using AI techniques, to eliminate biases. Such collaboration is fundamental to addressing the bias that arises when human recruiters evaluate CVs alone or when AI-based systems operate without close human monitoring. Researchers nevertheless acknowledge a possible conflict of shared control between humans and autonomous systems, which can be addressed using the heuristic model proposed by Vanderhaegen [84]. The model follows four main stages: testing shared control, identifying detection parameters, pinpointing conflicting decisions, and testing them [84]. Averting the conflict requires humans and autonomous AI systems each to hold a defined amount of control over a given process; a schematic sketch of this loop follows below. In hiring, organizations must also consider ways of addressing HR practitioners' reservations about AI-based systems. Chen believes that humans and AI can coexist, but the latter needs to undertake technological learning and improve its warning systems [53].
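The following schematic sketch shows one way the four stages could be wired together in code. The stage internals, which use the human–AI disagreement rate as the detection parameter, are placeholder assumptions for illustration only and are not Vanderhaegen's implementation.

```python
# Schematic sketch of the four-stage conflict-discovery loop described
# above. Stage internals are placeholder assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Decision:
    candidate_id: str
    human_accepts: bool
    ai_accepts: bool

def detect_conflicts(decisions, min_disagreement=0.1):
    # Stages 1-2: test shared control and identify a detection parameter
    # (here, simply the disagreement rate between human and AI).
    conflicts = [d for d in decisions if d.human_accepts != d.ai_accepts]
    rate = len(conflicts) / max(len(decisions), 1)
    # Stage 3: pinpoint the conflicting decisions for review.
    # Stage 4: flag the batch for re-testing if disagreement is high.
    needs_retest = rate >= min_disagreement
    return conflicts, rate, needs_retest

decisions = [
    Decision("c1", True, True),
    Decision("c2", True, False),   # human and AI disagree
    Decision("c3", False, False),
]
conflicts, rate, retest = detect_conflicts(decisions)
print(f"{len(conflicts)} conflict(s), disagreement rate {rate:.0%}, retest: {retest}")
```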
Researchers disagree on the best approaches to using AI to eliminate algorithmic bias. For instance, Gonen and Goldberg [85] object to correcting the vector space of the model, arguing that the technique hides biases rather than eliminating them. They review previous work in which the technique was applied unsuccessfully to word embedding models and conclude that the approach is insufficient. By contrast, Garrido-Muñoz et al. [59] support the data augmentation technique for eliminating bias, demonstrating that the approach fine-tunes existing trained models and avoids designing new models from scratch, which would be expensive and time consuming [59]. Given that hiring algorithms take time to design, fine-tuning them allows programmers to make the adjustments needed to ensure a more balanced treatment of protected categories such as race and gender in hiring. Both techniques are sketched below.
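To make the two contrasted techniques concrete, the sketch below pairs (a) a neutralization step in the spirit of vector-space correction, which removes the component of a word vector lying along a bias direction (the operation Gonen and Goldberg argue merely hides bias), with (b) a counterfactual gender-swap of training text, a simple form of data augmentation. The word lists are illustrative assumptions, and neither function is the reviewed authors' implementation.

```python
# Minimal sketches of the two debiasing techniques contrasted above.
# Illustrative assumptions throughout; not the reviewed authors' code.
import re
import numpy as np

def neutralize(vec, bias_direction):
    """Vector-space correction: subtract the component of `vec` along a
    unit-norm bias direction, keeping only the orthogonal part."""
    return vec - (vec @ bias_direction) * bias_direction

# Counterfactual data augmentation: duplicate training text with
# gendered terms swapped so the model sees both variants equally often.
SWAPS = {"he": "she", "she": "he", "his": "her", "her": "his"}  # far from exhaustive

def gender_swap(text):
    # Case preservation is omitted for brevity.
    pattern = r"\b(" + "|".join(SWAPS) + r")\b"
    return re.sub(pattern, lambda m: SWAPS[m.group(1).lower()], text, flags=re.IGNORECASE)

rng = np.random.default_rng(0)
direction = rng.normal(size=50)
direction /= np.linalg.norm(direction)
vec = rng.normal(size=50)
print(abs(neutralize(vec, direction) @ direction) < 1e-9)  # True: bias component removed
print(gender_swap("He listed his awards."))                # "she listed her awards."
```

Gonen and Goldberg's point is that word vectors can still cluster by bias even after a step like `neutralize` runs, which is why augmentation-based fine-tuning is put forward as the alternative.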

9.3. AI Fairness and Accuracy in Hiring

Researchers demonstrate that the application of AI techniques in hiring increases accuracy and fairness. In particular, Thompson et al. demonstrate that the robustly optimized bidirectional encoder representations from transformers approach (RoBERTa) achieves an accuracy of avg r = 0.84, near the inter-rater reliability that multiple expert raters attained following consensus (avg r = 0.85) [42]. Moreover, Sridevi and Suganthi use classifiers such as linear regression, decision tree, AdaBoost, and XGBoost to establish the suitability of candidates from a pool of resumes, attaining an accuracy of 95.14% with the XGBoost classifier [47]. Additionally, Hunkenschroer and Lütge study how to improve fairness perceptions of AI in hiring, testing three major hypotheses. The first hypothesis predicted that participants would evaluate an AI interview as fairer when conducted at the initial screening stage than at the final phase; ANOVA results show a difference between AI interviews at the screening stage and at the final decision stage, F(7, 396) = 6.81, p < 0.01 [86], supporting the hypothesis. The second hypothesis proposed that participants would consider the selection process fairer if they had additional information on how AI can minimize bias; ANOVA results show a difference between the sensitized and non-sensitized groups, F(7, 396) = 4.85, p < 0.05 [86], again supporting the hypothesis. The third hypothesis predicted that participants would consider the selection procedure fairer if the AI selection were made under human supervision; however, the ANOVA results showed no significant difference between AI decision making with and without oversight [86].
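The reviewed papers do not publish their pipelines, so the following is a minimal sketch of the kind of XGBoost suitability classifier Sridevi and Suganthi describe. The feature names, weights, and data are synthetic stand-ins assumed for illustration, not the reviewed authors' setup.

```python
# Minimal sketch of an XGBoost suitability classifier of the kind
# described above. Features are synthetic stand-ins for resume/job
# match signals; this is not the reviewed authors' pipeline.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

rng = np.random.default_rng(42)
n = 2000
# Assumed features: skill overlap, years of experience, education match.
X = rng.random((n, 3))
# Synthetic label: "suitable" when a weighted score clears a threshold.
y = (0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * X[:, 2]
     + rng.normal(0, 0.05, n) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
clf.fit(X_tr, y_tr)
print(f"accuracy: {accuracy_score(y_te, clf.predict(X_te)):.4f}")
```

Reported figures such as 95.14% depend entirely on the dataset and label definition, so accuracies from a sketch like this are not comparable to the published results.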
In summary, researchers agree that hiring algorithms are vulnerable to biases that affect their hiring decisions. There is consensus that AI techniques are critical in addressing algorithmic biases in hiring, and researchers regard NLP and deep learning methods as antidotes to bias in algorithm-based hiring systems. Studies indicate that these techniques improve the hiring process by introducing the corrective measures needed to make it fair for all candidates. However, investigators also point out that AI techniques have limitations. As a result, studies call for collaboration between humans and autonomous systems to address the biases: rather than conflicting with machines, humans should embrace them to enhance the hiring process while addressing the algorithmic biases that emerge.

10. Broader Implications

The findings have broader implications for AI development and HR management. In AI development, they demonstrate the need for ethical considerations when building hiring tools: developers must prioritize fairness, transparency, and accountability so that AI systems contribute positively to recruitment without perpetuating biases. AI models in hiring also need continuous monitoring and updating; developers should run regular assessments to identify and rectify biases that emerge over time, and as organizations and societies evolve, algorithms must be updated to mitigate new biases (a minimal audit sketch follows below). For HR management, the findings underscore the need for informed decision making. HR professionals should collaborate with AI models in hiring rather than leaving AI as the sole determinant, so that unique aspects of candidates are not overlooked. Moreover, HR professionals should be equipped with the knowledge and skills to understand and oversee AI-driven hiring processes, supported by training programs that build oversight capacity and the ability to assess AI outcomes critically.
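One concrete form such a regular assessment could take is an adverse-impact audit. The sketch below computes per-group selection rates and their ratio against the four-fifths rule used in US employment practice; the data and group labels are illustrative assumptions.

```python
# Minimal bias-audit sketch: per-group selection rates and the
# adverse-impact ratio (four-fifths rule). Data are illustrative.
from collections import defaultdict

def adverse_impact_ratio(records):
    """records: iterable of (group, hired) pairs. Returns the ratio of the
    lowest group selection rate to the highest; values below 0.8 are the
    conventional red flag."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in records:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    rates = {g: h / t for g, (h, t) in counts.items() if t > 0}
    return min(rates.values()) / max(rates.values()), rates

records = [("A", True)] * 40 + [("A", False)] * 60 \
        + [("B", True)] * 25 + [("B", False)] * 75
ratio, rates = adverse_impact_ratio(records)
print(rates)  # {'A': 0.4, 'B': 0.25}
print(f"AIR = {ratio:.2f} -> {'flag for review' if ratio < 0.8 else 'ok'}")
```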

11. Conclusions

Hiring algorithms are prone to errors, and HR professionals must be aware of this limitation. Biases in hiring are a serious ethical concern that needs immediate attention: bias can cause discrimination against certain categories of individuals and should be minimized or avoided. Our analysis helps streamline the hiring process to make it fairer for all groups. It demonstrates that algorithmic biases in hiring can be addressed using correction of the vector space and data augmentation. Applying AI techniques cannot succeed, however, without collaboration between humans and machines to address biases. We suggest that recruiters actively build and test hiring applications to ensure they meet the required standards. Despite the progress in applying AI to hiring, our findings show that research on hiring algorithms is still in its formative stages. Consequently, we advocate for extensive research on applying AI techniques in HR to examine the sources of biases and develop solutions to address them.
The findings also suggest areas for future research. Firstly, researchers should examine the dynamics of human–AI collaboration: understanding how to enhance the synergy between humans and AI-driven hiring tools can lead to more effective and unbiased hiring processes. Secondly, more research should focus on advanced bias mitigation techniques, exploring innovative approaches that account for the evolving nature of bias in society. Additionally, studies should explore the potential impact of emerging technologies such as quantum computing on hiring practices; investigating how such technology could transform the efficiency and fairness of AI-driven recruitment systems is critical to unearthing better bias mitigation techniques.

Author Contributions

Writing—original draft, E.A.; writing—review and editing, E.A., T.M. and A.A.; supervision, T.M. and A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
ATS: Application Tracking System
CV: Curriculum Vitae
HR: Human Resource
IAT: Implicit Association Test
ML: Machine Learning
NLP: Natural Language Processing
RoBERTa: Robustly optimised Bidirectional Encoder Representations from Transformers approach
UK: United Kingdom
US: United States

References

1. Hameed, K.; Arshed, N.; Yazdani, N.; Munir, M. On globalization and business competitiveness: A panel data country classification. Stud. Appl. Econ. 2021, 39, 1–27.
2. Farida, I.; Setiawan, D. Business strategies and competitive advantage: The role of performance and innovation. J. Open Innov. Technol. Mark. Complex. 2022, 8, 163.
3. Dupret, K.; Pultz, S. People as our most important asset: A critical exploration of agility and employee commitment. Proj. Manag. J. 2022, 53, 219–235.
4. Charles, J.; Francis, F.; Zirra, C. Effect of employee involvement in decision making and organization productivity. Arch. Bus. Res. ABR 2021, 9, 28–34.
5. Hamadamin, H.H.; Atan, T. The impact of strategic human resource management practices on competitive advantage sustainability: The mediation of human capital development and employee commitment. Sustainability 2019, 11, 5782.
6. Sukmana, P.; Hakim, A. The Influence of Work Quality and Employee Competence on Human Resources Professionalism at the Ministry of Defense Planning and Finance Bureau. Int. J. Soc. Sci. Bus. 2023, 7, 233–242.
7. Li, Q.; Lourie, B.; Nekrasov, A.; Shevlin, T. Employee turnover and firm performance: Large-sample archival evidence. Manag. Sci. 2022, 68, 5667–5683.
8. Lyons, P.; Bandura, R. Employee turnover: Features and perspectives. Dev. Learn. Organ. Int. J. 2020, 34, 1–4.
9. Bishop, J.; D'arpino, E.; Garcia-Bou, G.; Henderson, K.; Rebeil, S.; Renda, E.; Urias, G.; Wind, N. Sex Discrimination Claims Under Title VII of the Civil Rights Act of 1964. Georget. J. Gender Law 2021, 22, 369–373.
10. Fry, R.; Kennedy, B.; Funk, C. STEM Jobs See Uneven Progress in Increasing Gender, Racial and Ethnic Diversity; Pew Research Center: Washington, DC, USA, 2021; pp. 1–28.
11. Bunbury, S. Unconscious bias and the medical model: How the social model may hold the key to transformative thinking about disability discrimination. Int. J. Discrim. Law 2019, 19, 26–47.
12. Kassir, S.; Baker, L.; Dolphin, J.; Polli, F. AI for hiring in context: A perspective on overcoming the unique challenges of employment research to mitigate disparate impact. AI Ethics 2023, 3, 845–868.
13. Quillian, L.; Heath, A.; Pager, D.; Midtbøen, A.H.; Fleischmann, F.; Hexel, O. Do some countries discriminate more than others? Evidence from 97 field experiments of racial discrimination in hiring. Sociol. Sci. 2019, 6, 467–496.
14. Benbya, H.; Davenport, T.H.; Pachidi, S. Artificial intelligence in organizations: Current state and future opportunities. MIS Q. Exec. 2020, 19, 4.
15. HireAbility. The Evolution of Resume Parsing: A Journey Through Time. 2023. Available online: https://www.linkedin.com/pulse/evolution-resume-parsing-journey-through-time-hireability-com-llc?trk=organization_guest_main-feed-card_feed-article-content (accessed on 5 January 2024).
16. Ajunwa, I. Automated video interviewing as the new phrenology. Berkeley Technol. Law J. 2021, 36, 1173.
17. Dastin, J. Insight—Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women. 11 October 2018. Available online: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G/ (accessed on 5 January 2024).
18. Wang, J.; Yang, Y.; Wang, T.; Sherratt, R.S.; Zhang, J. Big data service architecture: A survey. J. Internet Technol. 2020, 21, 393–405.
19. De Cremer, D.; De Schutter, L. How to use algorithmic decision-making to promote inclusiveness in organizations. AI Ethics 2021, 1, 563–567.
20. Kordzadeh, N.; Ghasemaghaei, M. Algorithmic bias: Review, synthesis, and future research directions. Eur. J. Inf. Syst. 2022, 31, 388–409.
21. Lee, N.T.; Resnick, P.; Barton, G. Algorithmic Bias Detection and Mitigation: Best Practices and Policies to Reduce Consumer Harms; Brookings Institute: Washington, DC, USA, 2019; Volume 2.
22. Mehrabi, N.; Morstatter, F.; Saxena, N.; Lerman, K.; Galstyan, A. A survey on bias and fairness in machine learning. ACM Comput. Surv. CSUR 2021, 54, 115.
23. Shahbazi, N.; Lin, Y.; Asudeh, A.; Jagadish, H. Representation Bias in Data: A Survey on Identification and Resolution Techniques. ACM Comput. Surv. 2023, 55, 293.
24. Wilms, R.; Mäthner, E.; Winnen, L.; Lanwehr, R. Omitted variable bias: A threat to estimating causal relationships. Methods Psychol. 2021, 5, 100075.
25. Sun, W.; Nasraoui, O.; Shafto, P. Evolution and impact of bias in human and machine learning algorithm interaction. PLoS ONE 2020, 15, e0235502.
26. Mishra, R.K.; Reddy, G.S.; Pathak, H. The understanding of deep learning: A comprehensive review. Math. Probl. Eng. 2021, 2021, 5548884.
27. Sodhar, I.N.; Jalbani, A.H.; Buller, A.H.; Mirani, A.A.; Sodhar, A.N. Chapter 1—Natural Language Processing: Applications, Techniques and Challenges. In Advances in Computer Science; AkiNik Publications: New Delhi, India, 2020; Volume 7.
28. Fisher, E.; Thomas, R.S.; Higgins, M.K.; Williams, C.J.; Choi, I.; McCauley, L.A. Finding the right candidate: Developing hiring guidelines for screening applicants for clinical research coordinator positions. J. Clin. Transl. Sci. 2022, 6, e20.
29. FitzGerald, C.; Martin, A.; Berner, D.; Hurst, S. Interventions designed to reduce implicit prejudices and implicit stereotypes in real world contexts: A systematic review. BMC Psychol. 2019, 7, 29.
30. Marvel, J.D.; Resh, W.D. An unconscious drive to help others? Using the implicit association test to measure prosocial motivation. Int. Public Manag. J. 2019, 22, 29–70.
31. Banaji, M.R.; Fiske, S.T.; Massey, D.S. Systemic racism: Individuals and interactions, institutions and society. Cogn. Res. Princ. Implic. 2021, 6, 82.
32. Fenton, W. 2023 Employment Discrimination Statistics Employees Need to Know. Available online: https://www.wenzelfenton.com/blog/2022/07/18/employment-discrimination-statistics-employees-need-to-know/ (accessed on 20 November 2023).
33. Tabassum, N.; Nayak, B.S. Gender stereotypes and their impact on women's career progressions from a managerial perspective. IIM Kozhikode Soc. Manag. Rev. 2021, 10, 192–208.
34. Zingora, T.; Vezzali, L.; Graf, S. Stereotypes in the face of reality: Intergroup contact inconsistent with group stereotypes changes attitudes more than stereotype-consistent contact. Group Process. Intergroup Relat. 2021, 24, 1284–1305.
35. Stopfer, J.M.; Gosling, S.D. Online social networks in the work context. In Current Issues in Work and Organizational Psychology; Routledge: London, UK, 2018; pp. 300–315.
36. Marcelin, J.R.; Siraj, D.S.; Victor, R.; Kotadia, S.; Maldonado, Y.A. The impact of unconscious bias in healthcare: How to recognize and mitigate it. J. Infect. Dis. 2019, 220, S62–S73.
37. Kim, J.Y.; Roberson, L. I'm biased and so are you. What should organizations do? A review of organizational implicit-bias training programs. Consult. Psychol. J. 2022, 74, 19.
38. Yarger, L.; Cobb Payton, F.; Neupane, B. Algorithmic equity in the hiring of underrepresented IT job candidates. Online Inf. Rev. 2020, 44, 383–395.
39. Cohen, J.R.; Dalton, D.W.; Holder-Webb, L.L.; McMillan, J.J. An analysis of glass ceiling perceptions in the accounting profession. J. Bus. Ethics 2020, 164, 17–38.
40. Ashikali, T.; Groeneveld, S.; Kuipers, B. The role of inclusive leadership in supporting an inclusive climate in diverse public sector teams. Rev. Public Pers. Adm. 2021, 41, 497–519.
41. de la Fuente Garcia, S.; Ritchie, C.W.; Luz, S. Artificial intelligence, speech, and language processing approaches to monitoring Alzheimer's disease: A systematic review. J. Alzheimer's Dis. 2020, 78, 1547–1574.
42. Thompson, I.; Koenig, N.; Mracek, D.L.; Tonidandel, S. Deep Learning in Employee Selection: Evaluation of Algorithms to Automate the Scoring of Open-Ended Assessments. J. Bus. Psychol. 2023, 38, 509–527.
43. McKinney, S.M.; Sieniek, M.; Godbole, V.; Godwin, J.; Antropova, N.; Ashrafian, H.; Back, T.; Chesus, M.; Corrado, G.S.; Darzi, A.; et al. International evaluation of an AI system for breast cancer screening. Nature 2020, 577, 89–94.
44. Goodman, C.C. AI/Esq.: Impacts of artificial intelligence in lawyer-client relationships. Okla. Law Rev. 2019, 72, 149.
45. Brishti, J.K.; Javed, A. The Viability of AI-Based Recruitment Process: A Systematic Literature Review. Master's Thesis, Umeå University, Umeå, Sweden, 2020.
46. Bhalgat, K.H. An Exploration of How Artificial Intelligence Is Impacting Recruitment and Selection Process. Ph.D. Thesis, Dublin Business School, Dublin, Ireland, 2019.
47. Sridevi, G.; Suganthi, S.K. AI based suitability measurement and prediction between job description and job seeker profiles. Int. J. Inf. Manag. Data Insights 2022, 2, 100109.
48. Nawaz, N.; Gomes, A.M. Artificial intelligence chatbots are new recruiters. IJACSA Int. J. Adv. Comput. Sci. Appl. 2019, 10, 1–5.
49. Black, J.S.; van Esch, P. AI-enabled recruiting: What is it and how should a manager use it? Bus. Horiz. 2020, 63, 215–226.
50. Wright, J.; Atkinson, D. The Impact of Artificial Intelligence within the Recruitment Industry: Defining a New Way of Recruiting; Carmichael Fisher: Los Angeles, CA, USA, 2019; pp. 1–39.
51. Adegboyega, L.O. Influence of Social Media on the Social Behavior of Students as Viewed by Primary School Teachers in Kwara State, Nigeria. Elem. Sch. Forum (Mimbar Sekol. Dasar) 2020, 7, 43–53.
52. Alzubaidi, L.; Zhang, J.; Humaidi, A.J.; Al-Dujaili, A.; Duan, Y.; Al-Shamma, O.; Santamaría, J.; Fadhel, M.A.; Al-Amidie, M.; Farhan, L. Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions. J. Big Data 2021, 8, 1–74.
53. Chen, Z. Collaboration among recruiters and artificial intelligence: Removing human prejudices in employment. Cogn. Technol. Work. 2023, 25, 135–149.
54. Washington, A.L. How to argue with an algorithm: Lessons from the COMPAS-ProPublica debate. Colo. Technol. Law J. 2018, 17, 131.
55. Jackson, E.; Mendoza, C. Setting the record straight: What the COMPAS core risk and need assessment is and is not. Harv. Data Sci. Rev. 2020, 2, 1–14.
56. Obermeyer, Z.; Mullainathan, S. Dissecting racial bias in an algorithm that guides health decisions for 70 million people. In Proceedings of the Conference on Fairness, Accountability, and Transparency, Atlanta, GA, USA, 29–31 January 2019; p. 89.
57. Harwell, D. A face-scanning algorithm increasingly decides whether you deserve the job. In Ethics of Data and Analytics; Auerbach Publications: Boca Raton, FL, USA, 2022; pp. 206–211.
58. Pandey, A.; Caliskan, A. Disparate impact of artificial intelligence bias in ridehailing economy's price discrimination algorithms. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, Virtual, 19–21 May 2021; pp. 822–833.
59. Garrido-Muñoz, I.; Montejo-Ráez, A.; Martínez-Santiago, F.; Ureña-López, L.A. A survey on bias in deep NLP. Appl. Sci. 2021, 11, 3184.
60. Nemani, P.; Joel, Y.D.; Vijay, P.; Liza, F.F. Gender bias in transformers: A comprehensive review of detection and mitigation strategies. Nat. Lang. Process. J. 2023, 6, 100047.
61. Shin, S.; Song, K.; Jang, J.; Kim, H.; Joo, W.; Moon, I.C. Neutralizing gender bias in word embedding with latent disentanglement and counterfactual generation. arXiv 2020, arXiv:2004.03133.
62. Manzini, T.; Lim, Y.C.; Tsvetkov, Y.; Black, A.W. Black is to criminal as caucasian is to police: Detecting and removing multiclass bias in word embeddings. arXiv 2019, arXiv:1904.04047.
63. Zhou, P.; Shi, W.; Zhao, J.; Huang, K.H.; Chen, M.; Chang, K.W. Analyzing and Mitigating Gender Bias in Languages with Grammatical Gender and Bilingual Word Embeddings; ACL: Montréal, QC, Canada, 2019.
64. Martínez-Huertas, J.Á.; Olmos, R.; Jorge-Botana, G.; León, J.A. Distilling vector space model scores for the assessment of constructed responses with bifactor Inbuilt Rubric method and latent variables. Behav. Res. Methods 2022, 54, 2579–2601.
65. Maudslay, R.H.; Gonen, H.; Cotterell, R.; Teufel, S. It's all in the name: Mitigating gender bias with name-based counterfactual data substitution. arXiv 2019, arXiv:1909.00871.
66. Sinha, R.S.; Lee, S.M.; Rim, M.; Hwang, S.H. Data augmentation schemes for deep learning in an indoor positioning application. Electronics 2019, 8, 554.
67. Pereira, S.; Correia, J.; Machado, P. Evolving Data Augmentation Strategies. In Proceedings of the International Conference on the Applications of Evolutionary Computation (Part of EvoStar), Madrid, Spain, 20–22 April 2022; Springer: Cham, Switzerland, 2022; pp. 337–351.
68. Pagano, T.P.; Loureiro, R.B.; Lisboa, F.V.N.; Cruz, G.O.R.; Peixoto, R.M.; Guimarães, G.A.d.S.; Santos, L.L.d.; Araujo, M.M.; Cruz, M.; de Oliveira, E.L.S.; et al. Bias and unfairness in machine learning models: A systematic literature review. arXiv 2022, arXiv:2202.08176.
69. Di Noia, T.; Tintarev, N.; Fatourou, P.; Schedl, M. Recommender systems under European AI regulations. Commun. ACM 2022, 65, 69–73.
70. Feldman, T.; Peake, A. End-to-end bias mitigation: Removing gender bias in deep learning. arXiv 2021, arXiv:2104.02532.
71. Sweeney, C.; Najafian, M. Reducing sentiment polarity for demographic attributes in word embeddings using adversarial learning. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, Barcelona, Spain, 27–30 January 2020; pp. 359–368.
72. Aouragh, S.L.; Yousfi, A.; Laaroussi, S.; Gueddah, H.; Nejja, M. A new estimate of the n-gram language model. Procedia Comput. Sci. 2021, 189, 211–215.
73. Aissani, N.; Beldjilali, B.; Trentesaux, D. Use of machine learning for continuous improvement of the real time heterarchical manufacturing control system performances. Int. J. Ind. Syst. Eng. 2008, 3, 474–497.
74. Fritts, M.; Cabrera, F. AI recruitment algorithms and the dehumanization problem. Ethics Inf. Technol. 2021, 23, 791–801.
75. Lavanchy, M.; Reichert, P.; Narayanan, J.; Savani, K. Applicants' fairness perceptions of algorithm-driven hiring procedures. J. Bus. Ethics 2023, 188, 125–150.
76. Serey, J.; Alfaro, M.; Fuertes, G.; Vargas, M.; Durán, C.; Ternero, R.; Rivera, R.; Sabattin, J. Pattern recognition and deep learning technologies, enablers of industry 4.0, and their role in engineering research. Symmetry 2023, 15, 535.
77. Hunkenschroer, A.L.; Kriebitz, A. Is AI recruiting (un)ethical? A human rights perspective on the use of AI for hiring. AI Ethics 2023, 3, 199–213.
78. Akter, S.; McCarthy, G.; Sajib, S.; Michael, K.; Dwivedi, Y.K.; D'Ambra, J.; Shen, K.N. Algorithmic bias in data-driven innovation in the age of AI. Int. J. Inf. Manag. 2021, 60, 102387.
79. IBM. AI Fairness 360. 14 February 2018. Available online: https://www.ibm.com/opensource/open/projects/ai-fairness-360/ (accessed on 24 October 2023).
80. Novet, J. Cisco Is Hiring More Women and Non-White Employees than Ever, and They Credit This Start-Up for Helping. 9 October 2019. Available online: https://www.cnbc.com/2018/10/09/textio-helping-cisco-atlassian-improve-workforce-diversity.html (accessed on 4 January 2024).
81. Alameer, A.; Degenaar, P.; Nazarpour, K. Processing occlusions using elastic-net hierarchical max model of the visual cortex. In Proceedings of the 2017 IEEE International Conference on INnovations in Intelligent SysTems and Applications (INISTA), Gdynia, Poland, 3–5 July 2017; IEEE: New York, NY, USA, 2017; pp. 163–167.
82. Rejikumar, G.; Gopikumar, V.; Dinesh, K.S.; Asokan-Ajitha, A.; Jose, A. Privacy breach perceptions and litigation intentions: Evidence from e-commerce customers. IIMB Manag. Rev. 2021, 33, 322–336.
83. Robinson, M.F. Artificial Intelligence in Hiring: Understanding Attitudes and Perspectives of HR Practitioners; Wilmington University (Delaware): New Castle, DE, USA, 2019.
84. Vanderhaegen, F. Heuristic-based method for conflict discovery of shared control between humans and autonomous systems: A driving automation case study. Robot. Auton. Syst. 2021, 146, 103867.
85. Gonen, H.; Goldberg, Y. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. arXiv 2019, arXiv:1903.03862.
86. Hunkenschroer, A.L.; Lütge, C. How to improve fairness perceptions of AI in hiring: The crucial role of positioning and sensitization. AI Ethics J. 2021, 2, 1–19.
87. Alameer, A.; Ghazaeil, G.; Degenaar, P.; Nazarpour, K. An elastic net-regularized HMAX model of visual processing. In Proceedings of the 2nd IET International Conference on Intelligent Signal Processing 2015 (ISP), London, UK, 1–2 December 2015.
88. Alameer, A.; Degenaar, P.; Nazarpour, K. Objects and scenes classification with selective use of central and peripheral image content. J. Vis. Commun. Image Represent. 2020, 66, 102698.
89. Alameer, A.; Degenaar, P.; Nazarpour, K. Context-based object recognition: Indoor versus outdoor environments. In Advances in Computer Vision: Proceedings of the 2019 Computer Vision Conference (CVC); Springer: Cham, Switzerland, 2020; Volume 2, pp. 473–490.
90. Alameer, A.; Degenaar, P.; Nazarpour, K. Biologically-inspired object recognition system for recognizing natural scene categories. In Proceedings of the 2016 International Conference for Students on Applied Engineering (ICSAE), Newcastle Upon Tyne, UK, 20–21 October 2016; IEEE: New York, NY, USA, 2016; pp. 129–132.
91. Accenture. The Art of AI Maturity: Advancing from Practice to Performance. 16 November 2023. Available online: https://www.accenture.com/us-en/services/applied-intelligence/ai-ethics-governance (accessed on 23 November 2023).
92. Woods, S.A.; Ahmed, S.; Nikolaou, I.; Costa, A.C.; Anderson, N.R. Personnel selection in the digital age: A review of validity and applicant reactions, and future research challenges. Eur. J. Work. Organ. Psychol. 2020, 29, 64–77.
93. Goretzko, D.; Israel, L.S.F. Pitfalls of machine learning-based personnel selection. J. Pers. Psychol. 2021, 21.
94. Köchling, A.; Wehner, M.C. Discriminated by an algorithm: A systematic review of discrimination and fairness by algorithmic decision-making in the context of HR recruitment and HR development. Bus. Res. 2020, 13, 795–848.
95. Schwartz, R.; Vassilev, A.; Greene, K.; Perine, L.; Burt, A.; Hall, P. Towards a Standard for Identifying and Managing Bias in Artificial Intelligence; NIST Special Publication 1270; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2022.
96. Gichoya, J.W.; Thomas, K.; Celi, L.A.; Safdar, N.; Banerjee, I.; Banja, J.D.; Seyyed-Kalantari, L.; Trivedi, H.; Purkayastha, S. AI pitfalls and what not to do: Mitigating bias in AI. Br. J. Radiol. 2023, 96, 20230023.
97. Lokanan, M. The determinants of investment fraud: A machine learning and artificial intelligence approach. Front. Big Data 2022, 5, 961039.
98. Chen, Z. Ethics and discrimination in artificial intelligence-enabled recruitment practices. Humanit. Soc. Sci. Commun. 2023, 10, 567.
Figure 1. Racial Discrimination in Labor Markets.
Figure 2. Description of the Literature Screening Process.
Figure 3. Causes of Discrimination in the Hiring Process.
Table 1. Inclusion and Exclusion Criteria.
Criteria | Inclusion | Exclusion
Publication type | Peer-reviewed journals | Sources that have not been peer-reviewed
Publication date | Sources published between 2017 and 2023 | Sources published earlier than 2017
Content | Studies related to hiring algorithms, artificial intelligence, and how they impact the hiring process | Studies not related to hiring algorithms, artificial intelligence, and hiring
Language | Studies published in English | Studies published in languages other than English and not having an English translation
Table 2. The application of AI in hiring.
No | Application | Description
1 | CV screening | Automating the CV screening process to identify the best candidate match
2 | Personality and behavior assessment | Analyzing data from social media profiles and other online forums
3 | Overcoming language barriers | Recognizing different languages, allowing hiring professionals to assess candidates from different parts of the world
Table 3. A summary of the core ideas and deficiencies of hiring algorithms without AI and those using AI techniques.
Name | Core Ideas | Deficiencies
1. Hiring algorithms without AI | (a) Checks how many words are included in the job description [72]; (b) human oversight; (c) continual improvement [73] | (a) Overemphasis on keywords; (b) lack of human connection in the hiring process [74]; (c) limited representation; (d) ignoring soft skills
2. Hiring algorithms using AI techniques | (a) Language proficiency assessment; (b) enhances fairness [75]; (c) diverse training data; (d) sentiment analysis of candidate responses | (a) Low quality of data introduces biases [76]; (b) black-box models lead to lack of transparency [77]; (c) failure to eliminate algorithmic unfairness [78]; (d) causes over-generalization [78]