2.1. The Evolution of Artificial Intelligence and Its Professional and Ethical Impact
An intelligent agent is defined as a knowledge-based system that perceives its environment, reasons to interpret those perceptions, solves problems, and determines actions to accomplish the specific tasks for which it was designed [7]. It extracts the data and knowledge on which it acts and continuously adapts to its assignment [7]. Artificial intelligence is intelligence associated with machines, in contrast to the natural intelligence of humans and animals [8]. AI is developed mainly to provide speech recognition, machine-learning, planning, and problem-solving capabilities [9]. As a branch of computer science, AI involves constructing devices that execute operations that would require intelligence if performed by humans [10].
The technical and managerial scientific literature offers multiple definitions of AI. AI can be seen as a new way of programming computers to think in the same way that people do [11]. Artificial intelligence reflects the automation of human thinking, such as decision making, problem solving, or learning [12]. Alternatively, AI is characterized as the study of the computations that make perception, reasoning, and action possible [9]. Russell and Norvig [13] distinguish four approaches to AI, which aim to make a programmable machine simulate human thought, rational thinking, human actions, and rational actions. Finally, it is reasonable to predict that AI will eventually affect all human activities: individual, professional, and social [14].
Large companies use this technology to implement marketing, human resources, or production strategies. However, the rapid development of AI applications increasingly raises questions about its implications, such as the replacement of human workers and their activities, as well as ethical issues: AI may cause significant job losses and could change the very idea of employment [15], or it may escape human control and even gain the power to manage its own evolution [16].
Numerous researchers study data privacy and security, as individuals should have complete control over their personal data, and its use should not cause harm or discrimination [17]. Privacy refers to controlling information about oneself and the right to keep it secret [18]. Artificial intelligence makes it possible to organize and store large data sets, which entails the risk of personal information being accessed by other entities and used without the owners’ consent [19]. Personal information may also be traded for a fee between entities and used in marketing and advertising to identify target markets for products or services more quickly [20].
Because controlling personal data is much more difficult online than in physical form, most details of people’s lives are becoming increasingly accessible digitally, collected and stored on high-capacity servers or in the cloud [21]. Many AI-based technologies amplify these problems: using techniques such as fingerprint or facial recognition, they enable the identification of individuals and create a profile for each user [22]. Well-established legal protection of individual rights, including consumer rights and the protection of intellectual property, is often lacking for digital products or is challenging to enforce [18]. Leslie [23] summarizes (Table 1) the potential harms that a system based on artificial intelligence can produce.
Digitalization is expanding continuously, and technology is undergoing significant changes; regulations should therefore be adapted accordingly [24]. Artificial intelligence must comply with all applicable national and international legislation and regulations and meet a set of requirements, such as safe, reliable, and robust algorithms that correct mistakes and inconsistencies throughout all phases of the AI system lifecycle [25]. All AI systems should guarantee transparency, diversity, non-discrimination, and fairness while equally ensuring accessibility for all users [26]. The European Commission’s High-Level Expert Group on AI [27] states that intelligent systems must protect the well-being of society and the environment, and that AI should be used to promote positive social change and improve environmental sustainability.
2.2. Business Consulting and the Impact of Artificial Intelligence on Business Professionals
Business consultants operate in multiple industries and with a variety of clients. Through their activities, they gather experience and valuable information that they can reuse and adapt to each client’s requirements. Nowadays, most consultants are asked to provide not only advice but also solutions, such as changing a company’s strategy [28]. There are generally no universal, standardized criteria for selecting a consultant, because each client can define its own standards reflecting the company’s expectations and its experience with consulting services; however, price is often seen as an indicator of quality [29].
With digital access to data and the same technologies equally available, differentiating between consulting firms takes more effort. One general characteristic is the consultant’s focus on the client and their needs, the goal of consulting services being to solve the client’s problems as quickly and efficiently as possible [28]. Understanding and fully leveraging the data they work with is one of the most important skills a consultant must possess [30]. Companies collect data continuously and are, at the same time, concerned with how these data are processed and exploited by consultants. Consultants must therefore maintain professional and ethical standards when working with their clients and are obliged to keep the information they obtain confidential [31].
Digital business transformation refers to the use of technology to design unique business models, procedures, or techniques that provide greater efficiency, attract additional revenue, and increase competitive advantage [32]. Digital transformation also applies to companies in the field of business consulting, but it can have a positive and successful impact only with a solid strategy and management [33]. Machine-learning algorithms can build models and uncover complex correlations through pattern detection, a process that is challenging even for the most capable and effective consulting teams [34]. If managed properly, AI and automation can remarkably improve these firms’ operations and customer services [35]. According to Bayati [36], the benefits of using AI in business consulting are the following: AI can analyze large-scale data quickly and accurately, provides better knowledge of the market and users, performs administrative tasks with high efficiency, and can guide companies in allocating their financial resources properly.
All AI-based systems have a social and ethical impact on stakeholders and communities. The main goal of these systems is innovation that benefits society, but in some situations they have the opposite effect. The new field of AI ethics has emerged mainly as a reaction to the individual and societal harms that the mishandling, poor configuration, or unintended damaging results of AI technologies can generate [37]. Leslie [23] suggests that, to develop and use a system based on artificial intelligence responsibly, it must be equitable and ethical, considering its impact on the well-being of individuals and the community. The use of an AI-based system must be fair and non-discriminatory, trustworthy, and transparent [23].
2.3. Research Hypotheses and Conceptual Model
Technology and dedicated computer programs have made employees’ work more efficient and, as AI develops, the limits of these technologies continue to be pushed. A computer can be programmed to analyze and enter data much faster than a person, but acting on the results still requires interpretation and creativity, which intelligent systems cannot yet replace [38]. To identify how accounting professionals and educators perceive AI and its associated threats, Whitman [39] used survey data. In that study, younger and therefore less experienced participants believed that AI implementation could be helpful and improve their work by taking over repetitive and administrative tasks [39].
The results of Abdullah’s [40] study indicated a need for training among healthcare employees regarding AI technologies and revealed that 78% of respondents worried that AI could completely replace the human workforce. Alexandre and Blanckaert [41] indicated that, in the business consulting sector, small firms can hardly define and implement AI technologies, while bigger firms are able to develop them internally and use them in decision making. The implementation of AI is also strongly correlated with company size: opportunities for adopting AI programs are more often found in larger firms, while they are harder to implement in small companies [42]. Owing to the financial capacity that allows large and medium firms to adopt the latest technology infrastructure, Gaafar’s [43] study revealed considerable differences in responses concerning the application of AI tools according to the company’s characteristics.
Lestari and Djastuti [44] indicated different perspectives on AI in a specific sector (the banking industry) according to the respondents’ attributes and positions in the company. Their research revealed that most employees in frontline roles were concerned that AI technologies would be able to replace their jobs. In contrast, respondents in back-office positions believed that human action would still be required to conduct analysis procedures correctly and did not feel threatened by AI replacement [44]. These results show that professional characteristics (field of activity and position in the company) influence the perceived disadvantages of AI (the possibility of jobs being replaced). Considering the above literature, we assume that:
Hypothesis H1. Professional characteristics and the field of activity determine the perceived disadvantages of using AI.
Many companies have used AI as an opportunity to develop, increasing competition among consulting firms, including start-ups [41]. Assigning tasks to AI could let consultants focus less on repetitive tasks and allocate more time to assessing problems and providing solutions. Zande [45] examined, through eight interviews, the perspectives of employees whose professional tasks are automated by robotic process automation. The results showed that the interviewees perceived this technology positively because it lessened their workload.
Because many companies have already implemented AI systems, candidates capable of working with AI programs may be favored in hiring processes, and higher qualifications would be required [41]. Understanding these innovations, which fundamentally change a company’s internal processes, can help leaders, managers, and the rest of the employees evolve. Choi [46] has shown that clarity about the functions of the user and of the AI is positively associated with the user’s willingness to accept AI implementation. According to Jaiswal [15], employees must possess five critical skills: data analysis, digital, complex cognitive, decision-making, and continuous learning skills.
Focusing on the internal implementation of AI, Alexandre and Blanckaert’s study [41] concluded that results differ according to the firm’s size and its clients. Investments in AI technologies can have a notable effect on consulting firms: besides the financial implications, which require a concise strategy, employees must also adapt rapidly to these changes [41]. These arguments lead to the following hypothesis:
Hypothesis H2. Professional characteristics and the field of activity determine the future perspective of AI implementation.
Consultants spend considerable time analyzing data and information, an activity that an AI-based system could replace. According to Streib [47], the most common concern is that some professions and jobs will disappear due to the implementation of automated analytics systems. Another issue raised is the explainability of AI processes [47]. For an AI management system to interact fairly with employees and clients, it must be able to explain the procedures it carries out and the decisions or outcomes it produces [48].
According to Ardon [49], the successful implementation of AI mechanisms must address workforce concerns. In his study, most employees were worried about job loss, unreliable algorithms, or security issues. Closely related to job loss is data privacy, because AI systems will operate with ever more personal information. The capabilities of AI raise significant risks in business and the economy [47]. Streib [47] notes that insurance companies may use private data to evaluate insurance premiums or coverage, and financial institutions could grant credit and determine solutions based on such data.
Social interaction is a human need felt by every individual. Research has demonstrated that connections and interactions with colleagues in the workplace are essential to job satisfaction and to reducing turnover intentions [48]. A lack of communication could therefore negatively affect a company’s professional activity and customers’ perceptions of the organization. A lack of social interaction could also affect the services provided by consultants, as clients could feel the absence of a real conversation; AI systems cannot offer the same advice or guidance as a person [38]. Consequently, we propose the following hypothesis:
Hypothesis H3. The perceived disadvantages of using AI positively influence the perception of the ethical challenges of AI implementation.
AI, with its capacity to detect patterns and generate insights, may raise privacy threats even without direct access to confidential data; privacy and data security risks therefore challenge the reliability of AI systems. While there are doubts about the reliability of all technological devices, the opacity and unpredictability of machine-learning systems make their results difficult to test [50]. The willingness to accept AI technology has been found to correlate negatively with privacy concerns [46]. AI can affect ethical standards, safety, transparency, and public fear, and AI governance may be rigid and hard to control, discouraging new business ideas and constraining their execution [47].
From a cognitive viewpoint, if employees better understand AI technologies, specifically their limitations and coverage, they can develop realistic expectations of these systems. Employees are less likely to follow the suggestions of an AI management program if they are concerned that their actions will not have a beneficial effect or if they believe it is not easy to use [48]. When encountering potentially harmful outcomes of AI, employees with higher levels of knowledge will be better able to determine solutions that overcome or diminish such threats [51]. Based on these findings, we assume that:
Hypothesis H4. The perception of the ethical challenges of using AI negatively influences the future perspective of AI implementation.