Article

In the AI of the Beholder—A Qualitative Study of HR Professionals’ Beliefs about AI-Based Chatbots and Decision Support in Candidate Pre-Selection

1 Business Analytics and Data Science-Center, University of Graz, 8010 Graz, Austria
2 Institute of Psychology, Work and Organizational Psychology, University of Graz, 8010 Graz, Austria
* Author to whom correspondence should be addressed.
Adm. Sci. 2023, 13(11), 231; https://doi.org/10.3390/admsci13110231
Submission received: 16 September 2023 / Revised: 20 October 2023 / Accepted: 25 October 2023 / Published: 30 October 2023
(This article belongs to the Special Issue Human Resource Management Innovation and Practice in a Digital Age)

Abstract

Despite the high potential of artificial intelligence (AI), its actual adoption in recruiting is low. Explanations for this discrepancy are scarce. Hence, this paper presents an exploratory interview study investigating HR professionals’ beliefs about AI to examine their impact on use cases and barriers and to identify the reasons that lead to the non-adoption of AI in recruiting. Semi-structured interviews were conducted with 25 HR professionals from 21 companies. The results revealed that HR professionals’ beliefs about AI could be categorised along two dimensions: (1) the scope of AI and (2) the definition of instruction. “Scope of AI” describes the perceived technical capabilities of AI and determines the use cases that HR professionals imagine. In contrast, the “definition of instruction” describes the perceived effort to enable an AI to take on a task and determines how HR professionals perceive barriers to AI. Our findings suggest that HR professionals’ beliefs are based on vague knowledge about AI, leading to non-adoption. Drawing on our findings, we discuss theoretical implications for the existing literature on HR and algorithm aversion and practical implications for managers, employees, and policymakers.

1. Introduction

Recent advances in artificial intelligence (AI), that is, applications of machine learning (ML) that automatically learn from given data sets (Huang and Rust 2018) instead of applying hand-designed rules, enable new use cases in recruiting. Such AI systems can support or completely take over labour-intensive tasks such as interviews via chatbots, personality trait assessments, or the (pre-)selection of suitable candidates via automated matching (e.g., Albert 2019; Black and van Esch 2020; Michelotti et al. 2021). Hence, AI is considered a promising opportunity to support HR professionals (Guo et al. 2021).
Despite positive evaluations of AI’s potential in recruiting and a predicted increase in its usage (e.g., van den Broek et al. 2021), actual use is surprisingly low, with only a few companies, such as Unilever, applying AI in recruiting (Feloni 2017). Current survey data show that the implementation of AI in companies is mainly limited to prototypes, while long-term use of AI in organisations hardly exists (e.g., McKinsey 2021; McKinsey and Company 2018; O’Reilly 2020). To date, the reasons for the low adoption of AI in recruiting remain unclear, which is equally problematic for organisations and various user groups such as HR professionals. AI can support almost the entire recruiting process (Sekhri and Cheema 2019), generating competitive advantages for organisations, workload reduction for HR professionals, and a better candidate experience for applicants (e.g., Ore and Sposato 2021). Furthermore, organisations are increasingly pressured to adopt AI both by legal authorities, through instruments such as the business judgment rule (Lücke 2019), and by applicants, through their rising expectations of a good application experience. However, organisations can respond to these pressures only to a limited extent because the causes of the so far largely unsuccessful attempts to adopt AI in practice remain uncertain.
Therefore, it is important to identify reasons for the current non-adoption of AI in recruiting. As the previous literature on algorithm aversion highlights, users’ beliefs about AI are a central lens through which reasons for the rejection of algorithms can be investigated. Considering that HR professionals are direct users of AI, their beliefs about AI can be expected to have a major impact on its adoption. Some authors have even attributed a key role to HR professionals in the adoption of AI because they can “empower sceptical employees at all levels” (Suseno et al. 2021, p. 3). Previous attempts to explain the reluctance of HR professionals to adopt AI identified ethical concerns (Hunkenschroer and Luetge 2022), fear of replacement (Ore and Sposato 2021), and privacy- and cost-related concerns (Black and van Esch 2020).
However, studies that examine the perspective of HR professionals and their companies (e.g., Pillai and Sivathanu 2020; Suseno et al. 2021) are underrepresented but equally important when it comes to identifying potential barriers to the use of AI in recruiting. Such studies may help to better understand the factors leading to the identified concerns and the low adoption rate. Therefore, it is important to investigate the beliefs of HR professionals in the context of recruiting.
The aim of this study is to provide an explanation for the discrepancy between the high potential of AI and its low adoption in recruiting. To achieve this goal, we conducted qualitative interviews with HR professionals. We investigated both the use of AI in recruiting and HR professionals’ beliefs about AI, from which we identified their reasons for not adopting AI. Thus, our study answers the following question: How do HR professionals’ beliefs influence potential use cases and barriers of AI in candidate pre-selection?
Our results contribute to the existing literature as they provide explanations for the discrepancy between the high potential of AI and the low application in practice from the perspective of users, namely HR professionals. By reconstructing HR professionals’ beliefs about AI that motivate its use or non-use, we contribute to the literature on HR as well as algorithm aversion. We also discuss practical implications for managers, employees, and policymakers by identifying use cases as well as barriers that (currently) may hinder the use of AI. Furthermore, we discuss how the new knowledge gained in this study can be applied in practice in the form of concrete and targeted measures such as the certification of AI, training, and awareness campaigns to mitigate HR professionals’ concerns about AI and increase their trust in the technology.

2. Background

Due to AI’s technological potential, it is used in various industries, such as healthcare (Davenport and Kalakota 2019), sales and marketing (Siau and Yang 2017), education (George and Wooden 2023), and supply chain management (Riahi et al. 2021). AI also has the potential to transform traditional HR processes (Charlwood and Guenole 2022), which will likely affect HR functions and HR roles (Nankervis and Cameron 2023). It is expected that the relevance of AI will continue to increase (Cooke et al. 2021) and that it will shape the future of HR (Vrontis et al. 2021). In a business context, AI is often defined as a system or algorithm with learning functions and cognitive abilities that can perform tasks and business functions that traditionally require human cognition (e.g., van Esch et al. 2021; Pan et al. 2021; Kot et al. 2021). Especially in the recent debate around AI adoption in business use cases, the ability to learn from given input data is key and refers to the increasing importance of ML. In our study, we define AI as machines that are able to learn from a given data set automatically, make predictions on that data, and automatically improve or adapt these predictions through experiences (Huang and Rust 2018). Consequently, we focus on AI models that are trained with ML algorithms and not on traditional rule-based AI systems.
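To make this working definition concrete, the following minimal sketch (in Python, using the scikit-learn library and invented toy data) illustrates the distinction we draw: the screening rule is learned from labelled examples rather than written by hand, and retraining on newly recorded decisions adapts the predictions over time.

```python
# Minimal sketch of the ML-based notion of AI used in this study: a model
# learns a screening rule from labelled historical data instead of executing
# hand-designed rules. All data and features are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Toy historical data: [years_of_experience, degree_level (0=none, 1=BSc, 2=MSc)]
X_train = [[0, 0], [1, 1], [3, 1], [5, 2], [2, 0], [6, 1]]
y_train = [0, 0, 1, 1, 0, 1]  # past decisions: 1 = shortlisted, 0 = rejected

model = DecisionTreeClassifier(max_depth=2).fit(X_train, y_train)

# The learned model predicts on unseen candidates; retraining on newly
# recorded decisions ("experiences") adapts these predictions over time.
print(model.predict([[4, 2]]))  # e.g., [1] -> shortlist
```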
Central functionalities of AI include integrating cognition, ML, emotion recognition, human–computer interaction, data storage, and decision making (Zhang and Lu 2021; Lu 2019). Based on these functionalities, AI can support almost every step of the recruiting process (Sekhri and Cheema 2019): (1) outreach, i.e., AI creates recruiting strategies and posts job ads based on potential candidates’ click-stream data; (2) screening, i.e., AI scans resumes and matches them with job posts to score/rank candidates; (3) assessment, i.e., AI analyses (video) interviews by evaluating applicants’ answers (e.g., word choice) and creates candidates’ personality and competence profiles; and (4) facilitation, i.e., AI coordinates with applicants via chatbots and automatically fills in application forms by analysing unstructured application documents and extracting relevant information (Hunkenschroer and Luetge 2022; Laurim et al. 2021). The need to process every incoming application has created high demand for supporting technologies, especially in the phases of screening, assessment, and facilitation. Within these phases, AI can support four tasks: (1) providing information, (2) gathering data, (3) candidate exploration, and (4) matching and (pre-)selection. Therefore, we focus on these three phases and the four corresponding tasks, summarised under the term candidate pre-selection.
Despite the high potential of AI, actual adoption in recruiting is low, and explanations for this discrepancy are scarce (e.g., Suseno et al. 2021; Albert 2019). Existing attempts to explain this gap have identified a number of potential reasons why HR professionals may be cautious about adopting AI. Hunkenschroer and Luetge (2022) found that ethical considerations can hinder the use of AI, including algorithmic bias, power asymmetry, lack of transparency, obfuscation of accountability, and potential loss of human oversight. Also, HR professionals’ fear of losing their jobs to AI has been identified as a potential obstacle (Ore and Sposato 2021), leading to the risk that they may even sabotage AI adoption (Black and van Esch 2020). As AI requires large amounts of user data, privacy concerns are also mentioned (Black and van Esch 2020). Other factors include assumed high development costs (Black and van Esch 2020), AI bias (Tuffaha 2023), and concerns about the loss of the human component (Ore and Sposato 2021). The literature provides an overview of companies’ and HR professionals’ possible concerns about AI. However, there is currently little research investigating the (intention to) use AI from the perspective of HR professionals (Pillai and Sivathanu 2020). Since HR professionals are key users of AI in recruiting, such studies may help to better understand the factors leading to the identified concerns and the low adoption rate.
Theoretical models aiming to explain the adoption of technologies by users, such as the technology acceptance model (TAM) (Davis et al. 1989) and the Unified Theory of Acceptance and Use of Technology (UTAUT) (Venkatesh et al. 2003), place special emphasis on users’ beliefs as a factor influencing their intention to use the technology, as those beliefs determine both perceived barriers and use cases. Beliefs are all the information that a person has about a situation, which may be incomplete or wrong (Kim et al. 2012), and can be understood as what a person thinks about a technology (Suseno et al. 2021). Both use cases and barriers have been the focus of previous research on the use of AI in recruiting. The use cases described in the literature range from chatbots (Albert 2019) to the selection of employees (Black and van Esch 2020) and decision-making processes (Vassilopoulou et al. 2022). Barriers identified in the literature mostly relate to technological (e.g., data availability) and economic (e.g., costs) aspects (Cubric 2020). Since use cases and barriers are determined by HR professionals’ assumptions about AI, their beliefs are key to understanding their intention to use AI.
However, the concept of beliefs is only superficially discussed in the TAM and UTAUT. We therefore build on the Theory of Planned Behaviour (TPB; Ajzen 1991) to substantiate the concept.
According to the TPB (Ajzen 1991), beliefs influence the intention to perform a particular behaviour, and thus the intention to use AI, and can further be distinguished into different types, among others: (1) behavioural beliefs and (2) control beliefs. Control beliefs describe a person’s assumptions about factors that lead to an increase or decrease in the perceived difficulty in performing a behaviour. In contrast, behavioural beliefs cover the perceived positive and negative consequences of performing a behaviour and influence attitudes towards the behaviour. Consequently, the belief structure related to AI in recruiting consists of the following: (1) the attitude towards AI (i.e., to what extent HR professionals perceive AI as useful and necessary) and (2) the perceived behavioural control (i.e., to what extent HR professionals assume that AI is under their control). Our study empirically investigates these two beliefs of HR professionals about AI and their impacts on the intention to use AI.
Previous quantitative studies have focused on identifying factors that influence users’ intention to use AI in recruiting, with several indicating the high relevance of users’ beliefs. A recent survey study found that performance expectancy, i.e., the extent to which users believe that using AI will help them achieve a specific job performance, has a positive impact on HR professionals’ intention to use AI (Alam et al. 2020). A survey of HR professionals in China revealed that the perceived complexity of the technology hinders the adoption of AI (Pan et al. 2021). Change readiness for AI adoption by HR professionals has also been found to be positively influenced by positive beliefs about AI and high-performance work systems while at the same time being negatively influenced by AI anxiety (Suseno et al. 2021). These studies show that the users’ beliefs about AI can have a substantial influence on the actual use.
The relevance of understanding (HR) professionals’ beliefs about AI for its successful use is also addressed in research on algorithm aversion (Burton et al. 2019; Jussupow et al. 2020). The literature on algorithm aversion explores factors that influence the intention to use AI and explains why users are, on average and all else being equal, less willing to accept AI decisions than human decisions (e.g., Dietvorst et al. 2015). Burton et al. (2019, p. 220) defined algorithm aversion as the “reluctance of human decision makers to use superior but imperfect algorithms,” and users’ beliefs about AI are identified as critical in explaining algorithm aversion (e.g., Jussupow et al. 2020; Berger et al. 2021) and thus understanding actual use.
Summing up, the literature currently offers few explanations for the discrepancy between the high potential of AI and its low adoption in recruiting. Algorithm aversion is one potential explanation for this discrepancy. Previous research on algorithm aversion highlights that users’ beliefs about AI are the central lens through which reasons for rejecting algorithms can be examined. However, research on individuals’ beliefs about AI is scarce (Suseno et al. 2021). Therefore, we study the discrepancy from the perspective of HR professionals’ beliefs about AI.

3. Method

To investigate both the use of AI in recruiting and HR professionals’ beliefs, an exploratory inquiry is needed, and therefore, semi-structured interviews were conducted with HR professionals.
To recruit interview partners, a web-based search was performed to identify which of the hundred largest companies in Austria already use AI in recruiting or plan to use AI in the future. Furthermore, AI providers were contacted to find additional companies. Finally, the recommendations of the interview partners were taken into account. The identified companies were contacted by telephone or email to capture the different perspectives of HR professionals. When selecting the interview partners, attention was also paid to ensuring a wide range in terms of company size (from 10 to 18,000 employees) and sector (e.g., automotive industry and media).
In each interview, the interviewees were asked for their understanding of AI. Afterwards, two example applications, i.e., an AI-based chatbot and a dashboard, were presented to them to provide a common basis. The chatbot conducted an information and application interview with a person who applied for a job at the university. The recorded applicant data were aggregated and ranked in a dashboard intended for HR professionals.
All interviewees received an overview of the interview guideline in advance. The guideline was developed based on a review of the literature on AI in recruiting and covered the following themes: the company’s recruiting process, the interviewee’s personal experience with AI in recruiting, use cases, advantages and challenges, changes due to use, requirements, and future trends and developments of AI in recruiting. The interview guideline was tested in a pretest interview with an HR professional and adapted accordingly. The pretest identified an overrun of the planned interview duration as the only critical issue, so questions were shortened and consolidated while the themes were retained.
From June 2020 to March 2021, 25 HR professionals from 21 Austrian companies were interviewed. Theoretical saturation, i.e., the point at which the interviews cease to reveal new information, was chosen as the stopping criterion (Glaser and Strauss 1967); this point was reached after 16 interviews. As a robustness check, five additional interviews were conducted. The interviews lasted 60 minutes on average and were conducted as video conferences involving two researchers, who contributed complementary perspectives during the interviews, held debriefing sessions afterwards, and challenged each other’s interpretations during coding.
All interviewees were involved in their companies’ recruiting processes, including HR business partners, HR analysts, HR legal experts, and hiring managers. The majority of the interviewees had several years of professional experience in recruiting and were working in a management position at the time of the interview. Two interviewees stated that their companies already used AI in recruiting, and one stated that their company was in the process of implementing it. Table 1 gives an overview of the interviewees’ demographic details. All interviews were audio recorded and transcribed. All quotes used in this paper were translated into English by one of the authors, and these translations were checked by the other authors.
The transcripts were analysed using thematic analysis following the recommendations of Braun and Clarke (2006). First, patterns in the data were identified, and ideas for initial codes were developed, which served as the basis for further analysis. We built on the knowledge gained from the literature review by creating initial codes based on the categories of the interview guideline (e.g., area of use). Complementary to this step, interesting data extracts were classified into meaningful groups, leading to additional initial codes. For example, the data revealed that the perceived challenges of AI for applicants and HR professionals have a significant impact on the use of AI in recruiting. Accordingly, these two influencing factors led to corresponding codes. The initial codes were expanded inductively throughout the coding process, i.e., the corresponding quotations from the transcripts were assigned to previously defined codes, and new codes were added as necessary. For example, interviewees indicated that their previous experience and prior knowledge of AI in different business contexts had a direct impact on their perception of the technology, which in turn affected their intention to use it in the company. For this reason, “prior knowledge and previous experience” became a code. After all data were coded, the different codes and the coded data extracts were sorted into overarching themes. For instance, the codes “advantages of chatbots” and “advantages of dashboards” were assigned to the theme “advantages of AI”. After the themes were created, they were refined, and it was verified that the coded data extracts for each theme formed a pattern. The refinement of the themes revealed differences in the beliefs about AI, particularly along two dimensions: (1) the scope of AI and (2) the definition of instruction. Consequently, the data set was coded again using these two dimensions as main codes. In the final analysis step, all codes were refined once more, and the individual themes were clearly defined and named.

4. Results

Interviewees define AI as an algorithm or approach that can access, process, and analyse large amounts of data quickly and efficiently. Furthermore, they describe AI as decision support that helps them in a structured way by taking over tasks that they previously had to perform themselves. Overall, the interviewees’ descriptions of AI are rather broad or focus on specific work tasks and use cases that AI could take over. In addition to the multitude of potential use cases, the interviewees’ beliefs about AI can be grouped into two themes: (1) the scope of AI and (2) the definition of instruction.

4.1. Scope of AI and Associated Use Cases

The scope of AI describes the beliefs interviewees hold regarding the capabilities of AI and comprises narrow and broad beliefs. With a narrow belief, interviewees assume AI to be a rule-based system whose capabilities are limited to the spectrum of the defined rule range. Interviewees with a broad belief perceive AI as a system that uses natural language processing (NLP) and ML, whose capabilities are not limited to a predefined rule range but extend beyond this spectrum.
The use cases that interviewees associate with AI are related to the perceived scope of AI and align with four main recruiting tasks: (1) providing information, (2) gathering of data, (3) candidate exploration, and (4) matching and (pre-)selection. Figure 1 shows specific use cases in these areas of recruiting for a narrow and broad belief separately.

4.1.1. Providing Information: Use Cases

Providing information about the open position and the company to (potential) applicants was named as one of the most promising use cases of AI. When holding a narrow belief, interviewees imagine AI responding to a set of predefined, frequently asked questions. Interviewees reported that some applicants already have various questions about the job advertisement before they apply, which are currently answered by HR professionals. This task is described as time-consuming, repetitive, and annoying by almost every interviewee. Furthermore, the ability to answer questions or provide status updates at any time is seen as a service improvement for applicants. Interviewees holding a broad belief also mention these use cases but additionally describe AI as capable of delivering information specifically tailored to the applicant: specific information about open job positions (e.g., working hours) or general information about the company (e.g., company values). The interviewees also imagine that, even after being rejected for a certain job, applicants can be recommended to the company by AI for a better-fitting position. This use case is seen as having great potential to recruit promising candidates for the company.
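The narrow belief essentially corresponds to a predefined question–answer system. A minimal, purely hypothetical sketch of such a rule-based FAQ chatbot is shown below; the questions, answers, and keyword matching are invented for illustration and do not describe any system used by the interviewed companies.

```python
# Hypothetical rule-based FAQ chatbot under the narrow belief: answers are
# limited to a predefined question-answer table matched by keywords.
import re

FAQ = {
    ("working", "hours"): "The position is full-time, 38.5 hours per week.",
    ("application", "status"): "Your application is currently under review.",
    ("documents", "required"): "Please upload a CV and a cover letter.",
}

def answer(question: str) -> str:
    words = set(re.findall(r"\w+", question.lower()))
    for keywords, reply in FAQ.items():
        if all(k in words for k in keywords):
            return reply
    # Outside the defined rule range, the chatbot cannot respond substantively.
    return "I will forward your question to the HR team."

print(answer("What are the working hours for this job?"))
```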

4.1.2. Gathering of Data: Use Cases

Gathering data was mentioned by interviewees as another potential AI use case. Interviewees with a narrow belief assume that AI allows applicants to upload their application documents and ensures that all required documents are submitted. AI is seen to offer benefits for an efficient selection process and to improve the applicant experience.
I think he [the applicant] gets support because he doesn’t have to think too much. Because he is immediately told what else we need from him. So he feels in good hands because he can’t forget anything and because he knows immediately where he is standing. Because the chatbot says: “You’ve done everything you have to do. Thank you very much and someone will get in touch with you.” I think the applicant will leave the interaction happier.
[E19]
AI is also seen as an alternative to online forms: the predefined question fields of online forms are replaced by a question–answer dialogue with AI, which can also be used to assess personality types. Interviewees with a broad belief imagine that AI is able to assess the personality type based on speech patterns and text modules from application documents as well as voice tones, gestures, and facial expressions from applicant videos. The question of whether or not AI can take over job interviews is viewed critically and is discussed by interviewees, often in an emotional manner. Regardless of the interviewees’ beliefs about the scope of AI, the prevailing assumption is that AI should not perform activities requiring personal contact. Despite these concerns, interviewees can envision the use of AI for certain parts of job interviews. With regard to this use case, the narrow belief is dominant, and our interviewees imagine that AI is able to ask standardised questions to gather data about qualifications. Initial or pre-interviews are often limited to standardised questions and aim to gather hard facts rather than an impression of the candidate’s personality, which makes them acceptable as potential use cases. Interviewees holding a broad belief assume that AI has the potential to completely take over job interviews at some point. However, they pointed out that AI currently lacks the required technological maturity and that job interviews via AI are still a remote future scenario.

4.1.3. Candidate Exploration: Use Cases

HR professionals search internal and external databases to identify a sufficient number of candidates. This task is described as complex and time-consuming and is often outsourced to external, costly agencies since HR professionals have to examine the CV of each potential candidate individually. Here, the interviewees see a promising use case for AI in identifying suitable candidates. Interviewees holding a narrow belief expect that AI is able to identify potential candidates in internal and external (e.g., LinkedIn) databases based on predefined criteria, carry out matching, and generate short and long lists. This procedure is also conceivable for filling internal positions (e.g., temporary positions and teams).
Where I could well imagine it (AI), is when there are many potential candidates who did not apply but who are in some databases. To search these databases according to these very criteria and then to get a shortlist or longlist of candidates.
[E8]
HR professionals with a broad belief expect that AI is able to search internal company databases containing profiles of existing employees and identify promising individuals suitable for job rotations. Furthermore, AI is assumed to have the potential to create and post job advertisements automatically.

4.1.4. Matching and (Pre-)Selection: Use Cases

Matching and (pre-)selection were further areas of potential use cases discussed in the interviews. Interviewees holding a narrow belief expect AI to support matching by comparing the requirements of a vacant position with candidates’ profiles using clearly defined criteria. The interviewees could very well imagine using AI-generated ranking lists, which contain information on hard facts and test results, as decision support.
Calculating a score based on the basic requirements that I have, for example: Bachelor’s degree, at least one year of experience, things like that. You make a list of who fulfils these criteria and to what percentage. (...) I save time or have it presented more clearly who has the biggest match. And on the basis of that, I can either start making a shortlist or invite people directly.
[E4]
The quote shows that the interviewee sees the potential value of AI in data preparation and visualisation. Well-prepared and clear data make work easier, and HR professionals are happy to hand over this task to AI.
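The computation described in the quote amounts to a simple rule-based match score. The following sketch, with hypothetical criteria and candidates, illustrates the idea of ranking candidates by the percentage of clearly defined criteria they fulfil.

```python
# Hypothetical rule-based match score: each candidate is checked against
# clearly defined criteria and ranked by the percentage fulfilled.
criteria = [
    lambda c: c["degree"] in ("bachelor", "master"),  # at least a Bachelor's
    lambda c: c["experience_years"] >= 1,             # at least one year
    lambda c: c["english_fluent"],                    # language requirement
]

candidates = [
    {"name": "A", "degree": "master", "experience_years": 3, "english_fluent": True},
    {"name": "B", "degree": "none", "experience_years": 2, "english_fluent": True},
    {"name": "C", "degree": "bachelor", "experience_years": 0, "english_fluent": False},
]

def match_score(candidate: dict) -> float:
    return 100 * sum(rule(candidate) for rule in criteria) / len(criteria)

# Present the clearest matches first, as decision support for a shortlist.
for c in sorted(candidates, key=match_score, reverse=True):
    print(f"{c['name']}: {match_score(c):.0f}% match")  # A: 100%, B: 67%, C: 33%
```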
The quote also indicates another possible function of AI, namely the creation of proposals for rejections or invitations. AI-based proposals are seen as a helpful decision-making aid, especially with a large number of applications. If the selection decision is delayed, it is assumed that automated notifications will be sent to the applicants via AI. Furthermore, statistics and reports can be created based on the data collected by AI.
Interviewees holding a broad belief did not limit AI’s capabilities to matching people’s qualifications with predefined requirements. These interviewees envision matching candidates’ preferences and values with company values, or AI being able to make a final personnel decision and communicate it to the applicants in an automated way. However, this function was viewed critically by the interviewees. Automated rejection or acceptance is perceived as too impersonal, and HR professionals do not want to hand over decision-making power to AI.

4.2. Definition of Instruction: Manual versus Automatic

AI is often associated with a need for instruction. We observe two different beliefs about how HR professionals perceive the instructional nature of AI: (1) a high level of instruction caused by the need for ongoing manual input, and (2) a low level of instruction enabled by machine learning capabilities. We term the first belief manual and the second belief automatic.
Interviewees holding a manual belief perceive AI as similar to traditional software and expect it to be instructed through conventional rule-based programming. The manual creation of the decision model, as well as the specification of AI’s decision components by the HR professionals, was seen as a basic prerequisite for putting AI into operation.
I think that an AI has to be programmed. Assuming you would program the AI in a way that it eliminates males from the process. Or a matching below 50 percent. Then that has to be in the code, that has to be captured somewhere, programmed into the AI. (...) I believe otherwise the AI can’t throw them out.
[E5-3]
HR professionals holding an automatic belief assume that AI has the ability to train models from a given data set and is consequently able to recognise patterns based on historical personnel decisions. Representatives of both beliefs perceive three barriers that hinder the use of AI in recruiting: (1) a low benefit-effort-ratio, (2) fear of losing applicants, and (3) fear of replacement. However, HR professionals with manual and automatic beliefs differ in the assumptions about AI that give rise to these barriers. Thus, HR professionals’ beliefs determine the perceived scope and causes of the barriers. Figure 2 displays the relationship between assumptions and barriers for manual and automatic beliefs separately.
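The contrast between the two beliefs can be made concrete with a hypothetical sketch: under the manual belief, a cut-off such as the one in quote E5-3 below must be written into the code, whereas under the automatic belief, a comparable rule is inferred from historical decisions. The data and threshold here are invented for illustration.

```python
# Manual belief: every decision rule is hand-coded; changing the cut-off
# requires reprogramming.
def manual_preselect(match_percent: float) -> bool:
    return match_percent >= 50  # the rule is explicitly "in the code"

# Automatic belief: a comparable rule is learned from historical decisions.
from sklearn.linear_model import LogisticRegression

history_scores = [[20], [35], [45], [55], [70], [90]]  # past match scores
history_decisions = [0, 0, 0, 1, 1, 1]                 # past pre-selections

model = LogisticRegression().fit(history_scores, history_decisions)

# Both arrive at a similar decision here, but only the learned model adapts
# when new decisions are added to the training data.
print(manual_preselect(60), model.predict([[60]])[0])  # e.g., True 1
```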

4.2.1. Barrier: Low Benefit-Effort-Ratio

The assumed high effort of both implementing and programming AI was named as one of the main adoption barriers by interviewees holding a manual belief. These interviewees expected that the effort and costs of implementing and programming AI outweigh the expected benefits: given their company’s size and number of applicants, AI is simply not considered worthwhile.
We have few positions to none that have a large number of applications. (...) The mass of applications needed for an added value from automation or a decision support tool is not there.
[E6]
The statement also highlights that the primary assumed advantage of AI is “to help in situations with too many applications”. Interviewees with a manual belief reported that managing applications is not their major challenge; rather, their challenge lies in frequently changing specialist positions. Managing many applications is characteristic of positions that remain the same over a long time and attract many applicants, such as cashiers for fast-food restaurant chains. Most interviewees with a manual belief expressed the idea that AI is only relevant for certain job postings, namely internships, entry-level jobs, or jobs in production, since many applicants are to be expected here, and the requirement profiles remain relatively constant over time. In contrast, the use of AI is seen as unsuitable for leadership roles since they come with few applicants and specific requirements, which may even be further specified during the recruiting process. Additionally, the interviewees emphasised that personality traits and interpersonal skills are more important for leadership positions; they assume that AI is currently neither able to capture these adequately nor accepted by this applicant group as a substitute for human contact. The higher the perceived manual effort needed to set up and maintain an AI, the higher the requirements for stable use case conditions, and the lower the intention to adopt AI.
For HR professionals who hold a manual belief, the high effort stems from the requirement to constantly adapt and reprogram AI. For example, the interviewees assume that for job postings that are not standardised and stable over a long period, the AI has to be reprogrammed to account for each change. The assumption that AI requires high programming effort affects the perceived maturity of the technology. Representatives of the automatic belief also perceive the low benefit-effort-ratio as the main barrier to AI, naming the same main reasons (e.g., a small number of applicants) why the use of AI is not considered efficient. However, these interviewees do not see manual programming as the issue but rather the training and continuous maintenance of AI. They assume that AI learns autonomously based on historical data but, at the same time, emphasise the need to provide new data continuously as job requirements change. The training effort itself is also perceived as very time-consuming: it must be ensured that sufficient up-to-date data are continuously available so that AI is able to perform its tasks accordingly.
Maintenance costs are a disadvantage. Somebody has to continuously take care of this technological achievement and provide content. (...) I have the feeling that you have to do it right or not at all, because a chatbot with old info does not help anybody.
[E12]
This quote reflects the automatic belief’s emphasis on ongoing maintenance effort, alongside the estimated costs of purchasing and developing AI, which HR professionals consider high.
The added value of AI is seen in the elimination of work steps. To avoid duplicated steps and longer loops within the process when using AI, and to ensure a good applicant experience, representatives of the automatic belief see the need for process analysis and process adaptation. Implementing this is associated with a high investment of time and resources.

4.2.2. Barrier: Fear of Losing Applicants

The risk of losing candidates due to the use of AI was named as another barrier. HR professionals holding a manual belief assume that the evaluation of candidates by an AI is based on predefined selection criteria, which leads to the fear that promising applicants who do not meet the standard templates will be sorted out by AI. It is assumed that there is a lack of flexibility regarding the adaptation of the selection criteria. This problem is seen especially in the case of job profiles that are adapted several times in the course of the selection process. Applicants who have already been sorted out might meet the advertised position’s requirements later in the process but are no longer considered by AI. The perceived black-box nature of AI reinforces these fears, as AI-based decisions cannot be understood.
Interviewees with a manual belief also fear that applicants could be deterred by AI. This concern is based on the negative experiences that HR professionals have had with AI in other contexts or general concerns about the maturity of the technology. Interviewees reported several instances where AI was unable to adequately answer questions, raising concerns that AI currently does not have the maturity to be used in recruiting. In this case, the fear of a negative image of the company was raised. AI is not expected to have sufficient flexibility, nor is it expected to respond appropriately to personal statements.
Very personal topics can come up in job interviews. Sometimes I think to myself, I didn’t really want to know that, but obviously, you’ve got into a topic that moves the applicant personally. And as an interviewer, you have to react accordingly. And in such a situation, you have to show empathy. (...) And it’s hard for me to imagine how that would work with an avatar.
[E5-2]
The novelty of AI can cause scepticism among less tech-savvy and older applicant groups, which can lead them to abandon the application process. HR professionals holding a manual belief expect not only the loss of certain candidates but also the potential attraction of other applicant groups. It is assumed that tech-savvy applicant groups and young professionals, in particular, associate the use of AI with an innovative and progressive company and thus perceive the company as an attractive employer.
The concerns about losing promising applicants expressed by interviewees holding an automatic belief are consistent with those of interviewees holding a manual belief. They differ only in the additional assumption that AI may recognise patterns that do not correspond to the evaluation criteria desired by the HR professionals.

4.2.3. Barrier: Fear of Replacement

The idea of delegating administrative and repetitive tasks to AI is attractive to most interviewees. At the same time, there is also fear that AI could reduce their field of activity.
If I’m honest, you can clarify everything with the chatbot. You have to program it correctly. If you can manage that, then a lot is possible with the chatbot. Recruiting in particular, except for the interpersonal, is a part that can generally be taken over by chatbots, AI at some point. (...) This is a relief on the one hand, but on the other hand, jobs are eliminated.
[E11]
HR professionals holding a manual belief are aware of the benefits of AI but also the risk of being (partially) replaced by AI. However, the fear that AI is able to completely take over the job of an HR professional was not expressed by any interviewee. AI tends to be seen only as a supporting tool.
I think AI has a lot of potential that you can use. I stand by my statement that AI can and should only support and will in my view never be able to make decisions without humans who must be significantly involved in the decision-making process.
[E8]
This quote reflects the scepticism of HR professionals who hold a manual belief and assume that AI does not have the technical prerequisites to pose a threat to their jobs.
The ability of AI to take on recruiting tasks is perceived as both an attractive advantage and a threat by HR professionals who hold an automatic belief. They fear losing not only administrative and repetitive tasks but also tasks interesting to them that involve direct contact with applicants. With the ongoing development of AI, concerns about increasingly losing tasks to AI are rising: the higher the decision-making power of AI, the less it is accepted by HR professionals. Furthermore, HR professionals who hold an automatic belief perceive the tendency towards automation in the use of AI as a risk; they are concerned that HR professionals may rely too much on the outcome of AI.

5. Discussion

The aim of this study is to provide an explanation for the discrepancy between the high potential of AI and its low adoption in recruiting. Studies on algorithm aversion highlight the relevance of users’ beliefs about AI for AI adoption. Therefore, we investigated HR professionals’ beliefs about AI to identify reasons leading to non-adoption. Our findings show that beliefs about AI can be grouped into two dimensions: the perceived capabilities of AI and the need for instruction of AI. Regarding the perceived capabilities of AI, HR professionals can be classified as holding narrow or broad beliefs. Interviewees with a narrow belief assume AI to be a rule-based system whose capabilities are limited to the spectrum of the defined rule range, while interviewees with a broad belief perceive AI as a system that uses NLP and ML and whose capabilities operate outside this predefined spectrum. Furthermore, we found two beliefs about the need for instruction: (1) manual and (2) automatic. Respondents holding a manual belief assume that AI is similar to traditional software and that decision rules are hand-designed. HR professionals holding an automatic belief assume that AI is, under supervision, able to train models from a given data set, recognise patterns based on historical decisions, and extrapolate from them. We found that HR professionals’ different beliefs about AI influence which AI use cases they envision and which barriers to AI adoption they anticipate. Depending on whether they hold a narrow or broad belief about AI’s capabilities, HR professionals associate different use cases with it: HR professionals with a narrow belief see use cases of AI in providing information upfront, while those with a broad belief also see employer branding and consulting as further AI use cases. HR professionals with a narrow belief assume that the capabilities of AI are limited, which is why they perceive a smaller number of use cases than those with a broad belief. However, each perceived use case is associated with a benefit of using AI, which has a positive effect on the intention to use AI.
Representatives of both manual and automatic beliefs perceive three barriers that inhibit the current use of AI in recruiting: (1) a low benefit-effort-ratio, (2) fear of losing applicants, and (3) fear of replacement. The respective beliefs differ in the assumptions about AI that give rise to these barriers. The “low benefit-effort-ratio” barrier describes HR professionals’ concern that the effort and cost of implementing and programming AI outweigh its expected benefits. HR professionals with a manual belief explain this barrier with their assumption that AI requires ongoing manual input and lacks technological maturity, while those with an automatic belief attribute it to high training and maintenance effort, high costs, the need for process analysis and adaptation, and the immaturity of the technology. This barrier seems insurmountable to representatives of a manual belief, while those with an automatic belief describe it as a manageable obstacle. HR professionals link AI with a variety of possible use cases but, at the same time, also with the threat of being replaced (barrier: fear of replacement). A manual belief assumes a high level of instruction of the AI and thus limited use, whereby only the loss of administrative tasks to the AI is expected. An automatic belief assumes a low level of instruction and flexible use, whereby the loss of decision-making power, the takeover of interesting activities, and the risk of a tendency towards automation are also concerns. This barrier is perceived more strongly with an automatic belief than with a manual belief. Holders of both beliefs perceive the fear of losing applicants similarly. Regardless of how strongly a barrier is perceived, each is associated with a risk to successful AI adoption, which is why they have a negative impact on the intention to use AI. Figure 3 displays the relationship between beliefs about AI and the intention to use AI.
Building on our findings, our study provides several theoretical as well as practical implications. By reconstructing HR professionals’ beliefs about AI, we extend general theoretical models such as TAM and UTAUT with domain-specific insights, thus contributing to the existing literature on HR and algorithm aversion. Furthermore, our study offers practical implications for managers, employees, and policymakers consisting of concrete measures to influence HR professionals’ beliefs about AI.

5.1. Theoretical Implications

Being, to the best of our knowledge, the first study to investigate users’ beliefs about AI in the context of recruiting, we enrich the existing literature on AI adoption, mainly consisting of established theoretical models that have provided general statements about users’ intentions or adoption of technologies, with domain-specific insights. Consequently, our findings contribute to theoretical models such as TAM, UTAUT, and TPB, as well as to the literature on HR and algorithm aversion.
We contribute to the HR literature by identifying HR professionals’ beliefs as explanations for the discrepancy between the high potential of AI and its low adoption in recruiting. While beliefs have already been identified as a major factor contributing to algorithm aversion, our study concretely highlights the influence of HR professionals’ beliefs about the instructional needs of AI on perceived barriers and, thus, on their intention to adopt AI. For example, HR professionals with a manual belief perceive the benefit-effort-ratio of AI as not promising for most application cases. The literature on AI shows that these assumptions about AI’s capabilities and instructional needs do not correspond to AI’s actual conditions. Consequently, false beliefs held by HR professionals lead to unfounded perceptions of barriers that do not exist to this extent, which in turn negatively impact their intention to use AI.
Our findings extend previous findings in the HR literature on possible reasons for not adopting AI. So far, the literature has attempted to explain the low adoption of AI mainly through ethical concerns and the fear of losing one’s job to AI (Hunkenschroer and Luetge 2022), as well as high development costs and privacy and legal concerns (Black and van Esch 2020). However, the assumptions that lead to these concerns are unknown, making them difficult to comprehend. Our findings are consistent with the HR professionals’ concerns about AI adoption identified in the literature and extend them by revealing, through the identification of AI-specific beliefs, the assumptions that lead to these concerns. Consequently, the perceived barriers can be better understood, and targeted countermeasures can be taken to foster AI implementation.
From an HR perspective, identifying and demonstrating the benefits of AI are key factors for AI adoption. In the literature, reduction of bias, time savings, and the replacement of repetitive tasks (e.g., Gikopoulos 2019; Upadhyay and Khandelwal 2018) are reported as key benefits of AI; this is supported by our findings, in which these benefits were often mentioned in combination with specific use cases. Use cases, as well as benefits, are seen even by interviewees who do not use AI, indicating that HR professionals perceive the potential of AI.
We contribute to the literature on algorithm aversion by identifying specific beliefs about AI in recruiting that lead to non-adoption. Research on algorithm aversion is based on the comparatively lower acceptance of otherwise identical AI vs. human decisions (Dietvorst et al. 2015); this raises the question of the mechanisms responsible for aversion to algorithms. Previous research indicates that users’ beliefs are an important factor influencing the willingness to use AI. To the best of our knowledge, this is the first study to examine HR professionals’ beliefs about AI in the context of recruiting, resulting in two major dimensions of HR professionals’ beliefs: AI’s capabilities and its instructional needs. Beliefs influence users’ intention to adopt AI, making them highly relevant to action theory. By consciously and gradually changing beliefs, the intention to adopt AI changes, making beliefs an essential starting point for studying AI adoption.

5.2. Practical Implications

Our study offers several practical implications for three target groups: managers, employees, and policymakers.
Our findings can serve as a basis for employer-tailored training design that addresses the beliefs of HR professionals and thus increases their intention to use AI. Experts are convinced that AI offers benefits to businesses, is superior to humans for certain tasks (e.g., information retrieval), and can be considered “state of the art” (Lücke 2019). However, it is assumed that individual variables of HR professionals influence the interaction with AI systems and thus also the decision quality (Kupfer et al. 2023). Our results show that HR professionals’ respective beliefs determine perceived use cases and barriers and thus have an impact on the actual use of AI. The comparison of HR professionals’ beliefs with the actual technological capabilities of AI shows that their assumptions are not in line with the state of practice; consequently, some perceived barriers turn out to be unfounded. Companies intending to adopt AI should ensure that their HR professionals have positive beliefs about AI (Suseno et al. 2021), are convinced of the benefits of AI, build their trust in AI (Hengstler et al. 2016), and acknowledge and manage their concerns about the impact of AI on their field of work (Suseno et al. 2021). This approach is also relevant because there is currently a legal debate on whether the business judgment rule, which would oblige company management to adopt AI, should come into force regarding AI (Lücke 2019). In exceptional cases, i.e., when the entrepreneurial scope for action and decision making is exceeded by not using AI, there is already a legal obligation to use AI and to delegate tasks to algorithms (Lücke 2019). Consequently, companies that intend to use AI should ensure that HR professionals’ beliefs are aligned with the state of practice to counteract potential limitations and biases; this can be achieved through short-term change interventions (Mlekus et al. 2022); employee participation in the implementation process (Paruzel et al. 2019); education and awareness training; and adequate information, open communication, and participative decision making and technology implementation (Rafferty et al. 2013).
By reconstructing HR professionals’ beliefs about AI that motivate their non-use of AI, our study highlights areas where managers may need to take action to mitigate HR professionals’ concerns about AI. Our findings can serve as a roadmap for managers to select and implement appropriate countermeasures in a targeted way. For example, the identified beliefs indicate that, as a result of the black-box character of AI, several HR professionals have concerns about the traceability and transparency of AI-based decisions. HR professionals’ trust can be increased through explainable artificial intelligence (XAI). XAI can be used to transparently demonstrate how AI makes both specific decisions and decisions in general (Fleiß et al. 2023), thus supporting the identification of biases in the training data, fairness, and the expected functionality of the algorithm (Gilpin et al. 2018). Consequently, XAI promises to mitigate barriers to AI adoption, such as the fear of losing promising candidates due to bias or legal concerns.
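As a simple illustration of this point, the sketch below applies one basic XAI-style technique, permutation importance, to a hypothetical screening model; all features and data are invented, and real deployments would typically rely on dedicated XAI tooling such as SHAP or LIME.

```python
# Hypothetical illustration: surfacing which inputs drive a screening model's
# decisions, so HR professionals can check for unexpected or biased patterns.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

features = ["experience_years", "degree_level", "test_score"]
X = np.array([[1, 0, 55], [3, 1, 70], [5, 2, 80], [0, 1, 40], [7, 2, 90], [2, 0, 50]])
y = np.array([0, 1, 1, 0, 1, 0])  # invented historical shortlist decisions

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# An attribute that should be irrelevant dominating this list would be a
# warning sign of bias in the training data.
for name, importance in zip(features, result.importances_mean):
    print(f"{name}: {importance:.2f}")
```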
Similar results can be achieved by considering the design principles of an AI, which are guidelines for the development of systems. Design principles can influence aversion to algorithms (Burton et al. 2019) by changing HR professionals’ beliefs about AI, reducing their perception of barriers and increasing their intention to use AI. For example, the discrepancy between the need to produce knowledge using AI independently of domain experts and the need to remain relevant to recruiting can be resolved by developing a human–ML hybrid that combines ML and expertise (van den Broek et al. 2021). The desire for control is an important indicator of acceptance of algorithms, which can be met by adapting algorithmic decision-making processes (Burton et al. 2019). For example, AI systems can be designed to allow HR professionals to subsequently correct AI-based decisions, giving them a feeling of control over the AI. According to the TPB (Ajzen 1991), the extent of HR professionals’ perceived control over AI affects their intention to use it; this can reduce the perception of barriers such as fear of bias or fear of loss of decision-making power by HR professionals.
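A minimal sketch of such a human-in-the-loop design, with invented names and an invented cut-off, could look as follows: the AI proposes a decision, the HR professional may override it, and overrides are retained as feedback for later review or retraining.

```python
from typing import Optional

def ai_proposal(match_percent: float) -> str:
    """The AI's suggestion, here a simple hypothetical cut-off rule."""
    return "invite" if match_percent >= 50 else "reject"

corrections = []  # overrides kept as feedback and potential retraining data

def review(candidate_id: str, match_percent: float,
           override: Optional[str] = None) -> str:
    proposal = ai_proposal(match_percent)
    if override is not None and override != proposal:
        corrections.append((candidate_id, proposal, override))
        return override  # the HR professional keeps the final say
    return proposal

print(review("c-17", 42, override="invite"))  # HR corrects the AI's proposal
```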
Our findings can serve policymakers as a basis for designing and adapting AI certification measures tailored to HR professionals, thus directly addressing and optimally mitigating their concerns. Legal certainty is considered a key prerequisite for AI adoption by all interviewees. However, the identified beliefs indicate that some HR professionals perceive a lack of trust and legal certainty regarding AI, which significantly inhibits its adoption. One way to achieve trustworthy and legally secure AI is to certify AI systems. By auditing AI, HR professionals and companies are provided with a legal framework for its use, which can reduce their concerns. Although AI certification is currently the subject of widespread debate, including political debate in the EU and the USA, there is still a lack of established guidelines in this area (Fernández Martínez and Fernández 2019). As recruiting is a sensitive area, it is crucial to establish guidelines for developing and using AI. In our view, a successful certification of AI could mitigate the concerns and perceived barriers of HR professionals. By considering HR professionals’ beliefs about AI when designing AI certification measures, policymakers can ensure that these measures are aligned with the target group and directly address their concerns. As a result, HR professionals’ trust in AI will increase.

6. Limitations and Future Research Directions

This paper has two major limitations. First, only interviewees from Austria were selected because the research funding for the study focused on Austrian companies. To counteract this limitation, HR professionals from international companies with offices in Austria were recruited. Furthermore, a qualitative study investigating the usefulness and limitations of AI-based chatbots in the recruiting process in India indicates that HR professionals from other countries perceive similar benefits and limitations of chatbots as Austrian HR professionals (Tuffaha et al. 2022). Second, despite a targeted search, we identified only a few companies already using AI in recruiting. This is consistent with findings from the literature that the use of AI in practice is still limited. It can be assumed that surveying a larger number of HR professionals who already use AI would provide further insights into their specific beliefs about AI. With a more balanced ratio, the differences between the beliefs of HR professionals with and without application experience could be examined.
Our study can serve as a starting point for research on AI in recruiting. Future studies should build on our research by investigating how HR professionals’ beliefs about AI can be changed to align with AI’s actual conditions (e.g., its need for instruction). Previous studies indicate several factors that change beliefs. For example, Lewis et al. (2003) found that top management commitment has a positive impact on employees’ beliefs about the usefulness of information technology, while Rafferty et al. (2013) argue that communication, participation, and leadership positively influence beliefs about change and are thus relevant factors for change readiness. Building on these studies, we propose examining how HR professionals’ beliefs about AI can be changed, considering these factors. AI is expected to be increasingly used in recruiting in the coming years due to its ongoing development. Hence, we encourage researchers to observe the ongoing development of AI and investigate if and how HR professionals’ beliefs change over time to draw conclusions about the perceived use cases and barriers. Building on our study, we call for research on whether changed beliefs result in new perceived use cases and barriers.

7. Conclusions

Our study addresses the research gap concerning the discrepancy between the high potential of AI and its low adoption in recruiting. The present interview study sheds light on HR professionals' beliefs to identify reasons for the non-adoption of AI in practice. Specifically, we found that assumptions about the perceived capabilities of AI determine the associated use cases, while the assumed need for instruction determines which barriers to AI adoption are perceived. Our results support the view that HR professionals' beliefs are key to the successful adoption of AI in recruiting. Building on the identified beliefs, practitioners such as managers and policymakers can implement measures tailored to HR professionals to encourage widespread AI adoption in recruiting.

Author Contributions

Conceptualization, C.M., C.K., J.F., B.K. and S.T.; methodology, C.M., C.K., J.F., B.K. and S.T.; validation, C.M., C.K., J.F., B.K. and S.T.; formal analysis, C.M., C.K., J.F., B.K. and S.T.; investigation, C.M., C.K., J.F., B.K. and S.T.; resources, C.M., C.K., J.F., B.K. and S.T.; data curation, C.M., C.K., J.F., B.K. and S.T.; writing—original draft preparation, C.M., C.K., J.F., B.K. and S.T.; writing—review and editing, C.M., C.K., J.F., B.K. and S.T.; visualization, C.M., C.K., J.F., B.K. and S.T.; supervision, C.M., C.K., J.F., B.K. and S.T.; project administration, C.M., C.K., J.F., B.K. and S.T.; funding acquisition, C.M., C.K., J.F., B.K. and S.T. All authors have read and agreed to the published version of the manuscript.

Funding

Funding from Public Employment Service Austria (AMS) is gratefully acknowledged.

Institutional Review Board Statement

According to the statutes of the University of Graz, a research project is subject to IRB approval if the project may impair the physical or psychological integrity of the participants, their right to privacy, or other important rights and interests of the participants or their relatives (§ 2 Ethics Committee). None of these concerns apply to the current study, as it only uses commonly used standard measures of experimental economics and survey research. A confirmation from the IRB can be obtained on request. Source: University of Graz (2008): Statutes of the University of Graz—Ethics Committee. Retrieved from https://static.uni-graz.at/fileadmin/portal/forschen/Files/Ethikkommission_Satzungsteil.pdf (accessed on 24 October 2023).

Informed Consent Statement

Informed consent was obtained from all subjects involved in this study.

Data Availability Statement

The data supporting the results of this interview study are available from the first author upon request.

Acknowledgments

The authors acknowledge the financial support for open-access publication by the University of Graz. We thank Renate Ortlieb for comments on the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ajzen, Icek. 1991. The Theory of Planned Behavior. Organizational Behavior and Human Decision Processes 50: 179–211. [Google Scholar] [CrossRef]
  2. Alam, Mohammad Sarwar, Tohid-Uz-Zaman Khan, Sanjib Sutra Dhar, and Kazi Sirajum Munira. 2020. HR Professionals’ Intention to Adopt and Use of Artificial Intelligence in Recruiting Talents. Business Perspective Review 2: 15–30. [Google Scholar] [CrossRef]
  3. Albert, Edward Tristram. 2019. AI in talent acquisition: A review of AI-applications used in recruitment and selection. Strategic HR Review 18: 215–21. [Google Scholar] [CrossRef]
  4. Berger, Benedikt, Martin Adam, Alexander Rühr, and Alexander Benlian. 2021. Watch Me Improve—Algorithm Aversion and Demonstrating the Ability to Learn. Business and Information Systems Engineering 63: 55–68. [Google Scholar] [CrossRef]
  5. Black, J. Stewart, and Patrick van Esch. 2020. AI-enabled recruiting: What is it and how should a manager use it? Business Horizons 63: 215–26. [Google Scholar] [CrossRef]
  6. Braun, Virginia, and Victoria Clarke. 2006. Using thematic analysis in psychology. Qualitative Research in Psychology 3: 77–101. [Google Scholar] [CrossRef]
  7. Burton, Jason W., Mari-Klara Stein, and Tina Blegind Jensen. 2019. A systematic review of algorithm aversion in augmented decision making. Journal of Behavioral Decision Making 33: 220–39. [Google Scholar] [CrossRef]
  8. Charlwood, Andy, and Nigel Guenole. 2022. Can HR adapt to the paradoxes of artificial intelligence? Human Resource Management Journal 32: 729–42. [Google Scholar] [CrossRef]
  9. Cooke, Fang Lee, Michael Dickmann, and Emma Parry. 2021. IJHRM after 30 years: Taking stock in times of COVID-19 and looking towards the future of HR research. The International Journal of Human Resource Management 32: 1–23. [Google Scholar] [CrossRef]
  10. Cubric, Marija. 2020. Drivers, barriers and social considerations for AI adoption in business and management: A tertiary study. Technology in Society 62: 101257. [Google Scholar] [CrossRef]
  11. Davenport, Thomas, and Ravi Kalakota. 2019. The potential for artificial intelligence in healthcare. Future Healthcare Journal 6: 94–98. [Google Scholar] [CrossRef] [PubMed]
  12. Davis, Fred D., Richard P. Bagozzi, and Paul R. Warshaw. 1989. User Acceptance of Computer Technology: A Comparison of Two Theoretical Models. Management Science 35: 982–1003. [Google Scholar] [CrossRef]
  13. Dietvorst, Berkeley J., Joseph P. Simmons, and Cade Massey. 2015. Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err. Journal of Experimental Psychology: General 144: 114–26. [Google Scholar] [CrossRef] [PubMed]
  14. Feloni, Richard. 2017. Consumer Goods Giant Unilever Has Been Hiring Employees Using Brain Games and Artificial Intelligence and It’s a Huge Success. New York: Business Insider. [Google Scholar]
  15. Fernández Martínez, Carmen, and Alberto Fernández. 2019. Ontologies and AI in Recruiting. A Rule-Based Approach to Address Ethical and Legal Auditing. Paper presented at the International Semantic Web Conference (ISWC), Auckland, New Zealand, October 26–30. [Google Scholar]
  16. Fleiß, Jürgen, Elisabeth Bäck, and Stefan Thalmann. 2023. Mitigating algorithm aversion in recruiting: A study on explainable AI for conversational agents. The DATA BASE for Advances in Information Systems, in press. [Google Scholar]
  17. George, Babu, and Ontario Wooden. 2023. Managing the Strategic Transformation of Higher Education through Artificial Intelligence. Administrative Sciences 13: 196. [Google Scholar] [CrossRef]
  18. Gikopoulos, John. 2019. Alongside, not against: Balancing man with machine in the HR function. Strategic HR Review 18: 56–61. [Google Scholar] [CrossRef]
  19. Gilpin, Leilani H., David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael Specter, and Lalana Kagal. 2018. Explaining Explanations: An Overview of Interpretability of Machine Learning. Paper presented at the IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), Turin, Italy, October 1–3; pp. 80–89. [Google Scholar] [CrossRef]
  20. Glaser, Barney G., and Anselm L. Strauss. 1967. The Discovery of Grounded Theory: Strategies for Qualitative Research. Mill Valley: Sociology Press. [Google Scholar]
  21. Guo, Feng, Christopher M. Gallagher, Tianjun Sun, Saba Tavoosi, and Hanyi Min. 2021. Smarter people analytics with organizational text data: Demonstrations using classic and advanced NLP models. Human Resource Management Journal 2021: 1–16. [Google Scholar] [CrossRef]
  22. Hengstler, Monika, Ellen Enkel, and Selina Duelli. 2016. Applied artificial intelligence and trust—The case of autonomous vehicles and medical assistance devices. Technological Forecasting and Social Change 105: 105–20. [Google Scholar] [CrossRef]
  23. Huang, Ming-Hui, and Roland T. Rust. 2018. Artificial Intelligence in Service. Journal of Service Research 21: 155–72. [Google Scholar] [CrossRef]
  24. Hunkenschroer, Anna Lena, and Christoph Luetge. 2022. Ethics of AI-Enabled Recruiting and Selection: A Review and Research Agenda. Journal of Business Ethics 178: 977–1007. [Google Scholar] [CrossRef]
  25. Jussupow, Ekaterina, Izak Benbasat, and Armin Heinzl. 2020. Why are we averse towards algorithms? A comprehensive literature review on algorithm aversion. Paper presented at the 28th European Conference on Information Systems (ECIS), Marrakech, Morocco, June 11–16. [Google Scholar]
  26. Kim, Sojung, Hui Xi, Santosh Mungle, and Young-Jun Son. 2012. Modeling Human Interactions with Learning under the Extended Belief-Desire-Intention Framework using Agent-based Simulation. Paper presented at the 2012 Industrial and Systems Engineering Research Conference, San Juan, Puerto Rico, November 7. [Google Scholar]
  27. Kot, Sebastian, Hafezali Iqbal Hussain, Svitlana Bilan, Muhammad Haseeb, and Leonardus W. W. Mihardjo. 2021. The role of artificial intelligence recruitment and quality to explain the phenomenon of employer reputation. Journal of Business Economics and Management 22: 867–83. [Google Scholar] [CrossRef]
  28. Kupfer, Cordula, Rita Prassl, Jürgen Fleiß, Christine Malin, Stefan Thalmann, and Bettina Kubicek. 2023. Check the box! How to deal with automation bias in AI-based personnel selection. Frontiers in Psychology 14: 1118723. [Google Scholar] [CrossRef] [PubMed]
  29. Laurim, Vanessa, Selin Arpaci, Barbara Prommegger, and Helmut Krcmar. 2021. Computer, Whom Should I Hire?—Acceptance Criteria for Artificial Intelligence in the Recruitment Process. Paper presented at the 54th Hawaii International Conference on System Sciences, Kauai, HI, USA, January 5. [Google Scholar]
  30. Lewis, William, Ritu Agarwal, and Vallabh Sambamurthy. 2003. Sources of Influence on Beliefs about Information Technology Use: An Empirical Study of Knowledge Workers. MIS Quarterly 27: 657–78. [Google Scholar] [CrossRef]
  31. Lu, Yang. 2019. Artificial intelligence: A survey on evolution, models, applications and future trends. Journal of Management Analytics 6: 1–29. [Google Scholar] [CrossRef]
  32. Lücke, Oliver. 2019. Der Einsatz von KI in der und durch die Unternehmensleitung. “Lieutenant Commander Data” on bord oder natural intelligence still needed? BB 2019: 1986–94. [Google Scholar]
  33. McKinsey. 2021. The State of AI in 2021. Available online: https://www.mckinsey.com/~/media/McKinsey/Business%20Functions/McKinsey%20Analytics/Our%20Insights/Global%20survey%20The%20state%20of%20AI%20in%202021/Global-survey-The-state-of-AI-in-2021.pdf (accessed on 1 January 2023).
  34. McKinsey and Company. 2018. Notes from the AI Frontier: AI Adoption Advances, but Foundational Barriers Remain. Available online: https://www.mckinsey.com/midwest/~/media/McKinsey/Featured%20Insights/Artificial%20Intelligence/AI%20adoption%20advances%20but%20foundational%20barriers%20remain/Notes-from-the-AI-frontier-AI-adoption-advances-but-foundational-barriers-remain.ashx (accessed on 1 January 2023).
  35. Michelotti, Marco, Rod McColl, Petya Puncheva-Michelotti, Ronald Clarke, and Tom McNamara. 2021. The effects of medium and sequence on personality trait assessments in face-to-face and videoconference selection interviews: Implications for HR analytics. Human Resource Management Journal 2021: 1–19. [Google Scholar] [CrossRef]
  36. Mlekus, Lisa, Anna-Lena Kato-Beiderwieden, Katharina D. Schlicher, and Günther W. Maier. 2022. With a Little Help From Change Management. Effects of a Short-Term Change Intervention on Employee Attitudes and Behavior. German Journal of Work and Organizational Psychology 66: 40–51. [Google Scholar] [CrossRef]
  37. Nankervis, Alan R., and Roslyn Cameron. 2023. Capabilities and competencies for digitised human resource management: Perspectives from Australian HR professionals. Asia Pacific Journal of Human Resources 61: 232–51. [Google Scholar] [CrossRef]
  38. Ore, Olajide, and Martin Sposato. 2021. Opportunities and risks of artificial intelligence in recruitment and selection. International Journal of Organizational Analysis 30: 1771–82. [Google Scholar] [CrossRef]
  39. O’Reilly. 2020. AI Adoption in the Enterprise 2020. Available online: https://www.oreilly.com/radar/ai-adoption-in-the-enterprise-2020/ (accessed on 1 January 2023).
  40. Pan, Yuan, Fabian Froese, Ni Liu, Yunyang Hu, and Maolin Ye. 2021. The adoption of artificial intelligence in employee recruitment: The influence of contextual factors. The International Journal of Human Resource Management 33: 1–23. [Google Scholar] [CrossRef]
  41. Paruzel, Agnieszka, Dominik Bentler, Katharina D. Schlicher, Wolfgang Nettelstroth, and Günter W. Maier. 2019. Employees First, Technology Second. Implementation of Smart Glasses in a Manufacturing Company. German Journal of Work and Organizational Psychology 64: 46–57. [Google Scholar] [CrossRef]
  42. Pillai, Rajasshrie, and Brijesh Sivathanu. 2020. Adoption of artificial intelligence (AI) for talent acquisition in IT/ITeS organizations. Benchmarking: An International Journal 27: 2599–629. [Google Scholar] [CrossRef]
  43. Rafferty, Alannah E., Nerina L. Jimmieson, and Achilles A. Armenakis. 2013. Change Readiness: A Multilevel Review. Journal of Management 39: 110–35. [Google Scholar] [CrossRef]
  44. Riahi, Youssra, Tarik Saikouk, Angappa Gunasekaran, and Ismail Badraoui. 2021. Artificial intelligence applications in supply chain: A descriptive bibliometric analysis and future research directions. Expert Systems with Applications 173: 114702. [Google Scholar] [CrossRef]
  45. Sekhri, Alka, and Jagvinder Cheema. 2019. The new era of HRM: AI reinventing HRM functions. International Journal of Scientific Research and Review 7: 3073–77. [Google Scholar]
  46. Siau, Keng L., and Yin Yang. 2017. Impact of Artificial Intelligence, Robotics, and Machine Learning on Sales and Marketing. Paper presented at the Twelfth Midwest Association for Information Systems Conference (MWAIS), Springfield, IL, USA, May 18–19; p. 48. [Google Scholar]
  47. Suseno, Yuliani, Chiachi Chang, Marek Hudik, and Eddy S. Fang. 2021. Beliefs, anxiety and change readiness for artificial intelligence adoption among human resource managers: The moderating role of high-performance work systems. The International Journal of Human Resource Management 33: 1209–36. [Google Scholar] [CrossRef]
  48. Tuffaha, Mohand. 2023. The Impact of Artificial Intelligence Bias on Human Resource Management Functions: Systematic Literature Review and Future Research Directions. European Journal of Business and Innovation Research 11: 35–58. [Google Scholar] [CrossRef]
  49. Tuffaha, Mohand, Bharti Pandya, and M. Rosario Perello-Marin. 2022. AI-powered chatbots in recruitment from Indian HR professionals’ perspectives: Qualitative study. Journal of Contemporary Issues in Business and Government 28: 1971–89. [Google Scholar]
  50. Upadhyay, Ashwani Kumar, and Komal Khandelwal. 2018. Applying artificial intelligence: Implications for recruitment. Strategic HR Review 17: 255–58. [Google Scholar] [CrossRef]
  51. van den Broek, Elmira, Anastasia Sergeeva, and Marleen Huysman. 2021. When the Machine Meets the Expert: An Ethnography of Developing AI for Hiring. MIS Quarterly 45: 1557–80. [Google Scholar] [CrossRef]
  52. van Esch, Patrick, J. Stewart Black, and Denni Arli. 2021. Job candidates’ reactions to AI-enabled job application processes. AI and Ethics 1: 119–30. [Google Scholar] [CrossRef]
  53. Vassilopoulou, Joana, Olivia Kyriakidou, Mustafa F. Ozbilgin, and Dimitria Groutsis. 2022. Scientism as illusio in HR algorithms: Towards a framework for algorithmic hygiene for bias proofing. Human Resource Management Journal 2022: 1–15. [Google Scholar] [CrossRef]
  54. Venkatesh, Viswanath, Michael G. Morris, Gordon B. Davis, and Fred D. Davis. 2003. User acceptance of information technology: Toward a unified view. MIS Quarterly 27: 425–78. [Google Scholar] [CrossRef]
  55. Vrontis, Demetris, Michael Christofi, Vijay Pereira, Shlomo Tarba, Anna Makrides, and Eleni Trichina. 2021. Artificial intelligence, robotics, advanced technologies and human resource management: A systematic review. The International Journal of Human Resource Management 33: 1–30. [Google Scholar] [CrossRef]
  56. Zhang, Caiming, and Yang Lu. 2021. Study on artificial intelligence: The state of the art and future prospects. Journal of Industrial Information Integration 23: 100224. [Google Scholar] [CrossRef]
Figure 1. Use cases associated with a narrow and broad scope of AI grouped by recruiting tasks.
Figure 2. Assumptions and barriers associated with “manual” and “automatic” definitions of instruction.
Figure 3. Influence of beliefs on the intention to use AI.
Table 1. Demographic details of interviewees.
ID | Industry | Experience in Recruiting
E0 | HR consulting | 22 years
E1 | Research and development | 3 years
E2 | Media | 2 years
E3 | Construction, procurement, printing centre, facility management and cleaning, and IT | 5 years
E4 | Financial services | 1 year
E5_1 | Automotive industry | 3 years
E5_2 | Automotive industry | 12 years
E5_3 | Automotive industry | 7 years
E6 | Audit, consulting, financial advisory, risk advisory, and tax | 10 years
E7 | Electrical and electronics industry | 5 years
E8 | Intralogistics | 22 years
E9 | Paper industry, corrugated board industry, and packaging industry | 4 years
E10 | Automotive industry | 10 years
E11 | Metal industry, machine, and plant engineering | 12 years
E12 | Healthcare | 12 years
E13_1 | Public service and representation of interests | 10 years
E13_2 | Public service and representation of interests | 20 years
E13_3 | Public service and representation of interests | 5 years
E14 | Telecommunications, IT, and mobile communications | 10 years
E15 | Research | 30 years
E16 | Food production and trade | 12 years
E17 | Management and technology consulting | 1 year
E18 | Staffing service | 4 years
E19 | IT | 7 years
E20 | Insurance | n.a.