Article

Need for UAI–Anatomy of the Paradigm of Usable Artificial Intelligence for Domain-Specific AI Applicability

1 Faculty of Mechanical Engineering, Institute of Mechatronic Engineering, Technische Universität Dresden, Helmholtzstr. 7a, 01069 Dresden, Germany
2 Fraunhofer-Institut für Werkzeugmaschinen und Umformtechnik IWU, Reichenhainer Str. 88, 09126 Chemnitz, Germany
* Author to whom correspondence should be addressed.
Multimodal Technol. Interact. 2023, 7(3), 27; https://doi.org/10.3390/mti7030027
Submission received: 18 January 2023 / Revised: 9 February 2023 / Accepted: 20 February 2023 / Published: 28 February 2023

Abstract
Data-driven methods based on artificial intelligence (AI) are powerful yet flexible tools for gathering knowledge and automating complex tasks in many areas of science and practice. Despite the rapid development of the field, the existing potential of AI methods to solve recent industrial, corporate and social challenges has not yet been fully exploited. Research shows the insufficient practicality of AI in domain-specific contexts as one of the main application hurdles. Focusing on industrial demands, this publication introduces a new paradigm in terms of applicability of AI methods, called Usable AI (UAI). Aspects of easily accessible, domain-specific AI methods are derived, which address essential user-oriented AI services within the UAI paradigm: usability, suitability, integrability and interoperability. The relevance of UAI is clarified by describing challenges, hurdles and peculiarities of AI applications in the production area, whereby the following user roles have been abstracted: developers of cyber–physical production systems (CPPS), developers of processes and operators of processes. The analysis shows that target artifacts, motivation, knowledge horizon and challenges differ for the user roles. Therefore, UAI shall enable domain- and user-role-specific adaptation of affordances accompanied by adaptive support of vertical and horizontal integration across the domains and user roles.

1. Introduction

Artificial intelligence (AI), as one of the most potent methods of data science, greatly enriches data processing and is expected to serve as a powerful yet flexible tool for gathering knowledge and automating complex tasks in many areas of science and practice. Unfortunately, the adoption rate of AI does not yet adequately reflect the extent of its application potential [1]. Although the academic field of AI is developing rapidly, AI has yet to become a regular part of standard education and general knowledge. Furthermore, the field appears unstructured, as it branches in different directions [2]. At the same time, an enormous number of methods, approaches, information sources and educational formats for AI applications are available as open source. However, the rise of AI utilization does not proceed without widespread resistance to the new opportunities associated with AI applications [3,4]. To potential users, a new technology appears like an offer that comes with hidden costs. Whether to accept that offer is most likely decided by the consideration, “Can it be used sufficiently?”. Trying to answer that question raises three further questions: “Does it even work?”, “Is it worth the implications?” (such as footprint, power and noise), and “Will the use of it be comfortable?”.
The aspect of user-friendliness, in the sense of an item being easy and ergonomic to use, addresses all three of these questions. Nevertheless, taking into account the wider sense of usability proves to be highly advantageous. Although rightly considered as interrelated, usability in the sense of user-friendliness only acts as a sub-condition of the broader sense of usability [5]. To meet this broader understanding of usability, an AI application must be both effective, providing users with the insights they demand, and efficient, providing a solution whose added value exceeds the effort it causes [6]. Moreover, the application needs to be operable in a satisfying manner. How to fulfil these criteria has been researched and described in the field of software engineering [7]. However, the area of AI-based applications still suffers from a lack of usability and operability, especially in domain-specific contexts [8].
A characteristic key aspect of any data science field is its highly interdisciplinary nature. Typically, AI draws on research from different domains such as computer science, psychology, neurology, mathematics and logic, communication studies, philosophy and linguistics. AI research, in turn, has also influenced other fields, such as engineering [9,10]. Therefore, AI application often involves collaboration between experts from multiple disciplines, data science specialists and practitioners in the application domain [11,12]. The usual application environment typically carries risks of staff turnover, shortage of experts and shortage of resources [13]. Therefore, managing method knowledge and meta-knowledge is invaluable in AI application [14]. For system developers, special requirements arise in view of the wide scope covered by AI. Developers must proactively cover the specific needs of the application context in order to contribute to wider AI utilization.
The hurdles in using AI-based methods include unfulfilled legal, economic or technical requirements [15]. In many cases, however, it is the black-box characteristics of artificial intelligence that cause application difficulties [16]. In contrast to numerical solutions, simulations or classic prediction models, it is difficult or even impossible to observe and understand the solution path of AI algorithms. The need for more transparency of AI is therefore a major matter in addressing user demands. The research field of explainable artificial intelligence (XAI) contributes intensively to reducing AI opacity [17]. XAI rigorously and extensively reveals the inner workings of AI algorithms. However, XAI does not fully meet the need to explain the principles upon which AI-based solutions work to practitioners outside data science, as formally correct explanations are often not easy to follow either. In order to ensure user-friendliness, users must be provided with comprehensible concepts whose implications are understandable in practice. As XAI does not consider the non-data-science perspective, it struggles to contribute to an understanding of AI methods in a broader, end-user-oriented manner.
This publication introduces a novel paradigm regarding the usability of AI-based methods, called UAI, focusing on tailoring AI methods for practitioners in domains outside data science. The new UAI paradigm summarizes methods, solutions and approaches that meet the requirements of usability of artificial intelligence in an end-user-centric sense, addressing the lack of practical applicability of AI methods for domain experts and aiming at the development of solutions tailored to the needs of different user roles. Thus, the publication addresses the research question (RQ): what constitutes usable AI?

2. Materials and Methods

2.1. Problem Description

A gap exists between data science and the application perspective with regard to the development of AI algorithms and the provision of AI-based methods. Data scientists tend to develop algorithms in a straightforward way to uncover hidden relationships in data sets, regardless of which physical, technical, sociological or other functional contexts actually underlie the data. In contrast, domain experts are inclined to take into account or anticipate familiar physical, technical, sociological or other technical contexts upon which the data are actually based, and thus exhibit a bias, even when actively trying to act in an open-ended manner. This gap between the algorithmic view and the application view can be problematic. Algorithms typically lack direct applicability to real problems. Computer scientists make solution details available without being able to provide domain-specific explanations or contexts, which makes direct application in practice difficult. As data analysts in turn lack the necessary domain knowledge, it is not trivial to apply the solution principles found to real tasks [18,19,20,21].
As shown in Figure 1, for data science applications, it is necessary to extend approaches by specific peripheral aspects originating from the corresponding domain. One aspect is to consider the form in which data can be accessed, i.e., retrieval from domain-specific sources and processing via hardware and middleware to ultimately obtain an assessable database [11]. In this context, a key issue is a lack of sufficient cross-domain knowledge among data scientists and practitioners in each of the domains involved.
It is essential to ensure a fit of quality and quantity of the data available, collected, processed and analyzed with the analysis methods applied [21]. In addition to data collection, data quality and data analysis, incorporating domain-specific knowledge and contextual information becomes indispensable in order to enable a holistic data-based collaborative response for all involved recipients. Context-related causal relationships or domain-specific boundary conditions afford efficient access to knowledge and enable better interpretability of results. The aspects of domain-specific data analysis are highly intertwined and must be considered throughout the data-driven system development and application.
The interdisciplinary nature of data processing science and its technical implementation leads to communication problems, as experts in a particular field are naturally biased. Usually, individuals and organizations are unaware of this fact, which makes the gap difficult to bridge [23]. Overly strict timelines, a lack of resources and a lack of skilled workers make the implementation of complex software such as AI applications difficult [24]. Closing this gap requires algorithm-based approaches that offer both a sufficient degree of flexibility and a satisfactory degree of user-friendliness. The provision of methods and concepts that make AI software more accessible, transparent and easier to communicate paves the way for efficient application of AI-based methods. Moreover, a high level of usability helps establish a common understanding of system properties, solution details and implications, providing a basis for communication and making the derived results more reliably applicable for the entire user community.
Data are heterogeneous and complex and, in addition, data collection often requires specialized hard- and middleware, so close cooperation between domain experts and data scientists proves to be crucial for success [25]. Hence, linking cross-domain knowledge across practitioners of all fields involved constitutes an overarching challenge.
Technical systems merging hard- and software are sociotechnical and socioeconomic systems, with people involved in their use and further development. However, user overload leads to inefficiency, while supporting the user directly contributes to a better outcome [3,4]. These dualistic system properties, a matter of complexity in their own right, can ultimately provide further means to solve problems in the management of tacit knowledge [26,27]. Users must be able to approach the system intuitively to benefit from insights, rather than drowning in a complexity of possibilities.

2.2. Foundations

2.2.1. Artificial Intelligence

Reflecting the origins of the field of artificial intelligence, which has evolved from and is associated with several research fields, multiple definitions of artificial intelligence exist, some of which relate to reasoning, while others relate to behavior. Some definitions measure success in terms of consistency with human performance, while others measure rationality in terms of an ideal performance measure. Disciplines that contribute ideas, viewpoints and techniques to AI include philosophy, mathematics, economics, neuroscience, psychology, computer engineering and cybernetics, amongst others [2]. The popularity of AI as a field of computer science has risen in recent years through the automation of intelligent behavior and machine learning. In general, defining the term AI remains difficult, as no precise definition of “intelligence” is yet commonly established. One among many circulating definitions of artificial intelligence originates from Winston, 1992: “The study of the computations that make it possible to perceive, reason, and act.” [28]. Nevertheless, the term AI is referred to more and more frequently in research and development. Strong AI systems can handle difficult creative tasks on a par with humans, whereas weak AI systems solve specific application problems. The ability to learn is a main characteristic and integral part of AI systems, accompanied by the ability to deal with uncertainty and probabilistic information [2].

2.2.2. Usability

In a common sense, “usability” draws its meaning directly from its stem “to use” and describes the possibility to properly use an item or artifact for a sensible purpose. The international standard ISO 9241-11 “Ergonomics of human-system interaction—Part 11: Usability: Definitions and concepts” states that “usability relates to the outcome of interacting with a system, product or service”, not as an attribute of a product, but as “a more comprehensive concept than is commonly understood by ‘ease-of-use’ or ‘user-friendliness’” [5]. Especially in computer science, the narrower sense of the term usability describes user-friendly, ergonomic design. Nielsen describes this specific understanding of the term based on the defining attributes learnability, efficiency, memorability, errors and satisfaction [29].

2.2.3. Explainable Artificial Intelligence

Assumptions and simplifications are vital for modeling complex systems, but complexity reduction comes at the cost of reduced validity. AI models are typically considered black boxes. AI algorithms take vast numbers of data points as inputs and correlate certain data features to produce an output. This process is largely self-directed and generally difficult for data scientists, programmers and users to interpret, which carries the risk of incorrect application of AI models. A user who does not understand how a model works cannot recognize where the model works incorrectly. The user hands over responsibility to the model and lacks supervision because of the poor transparency of the model. Furthermore, in using AI, there remains little possibility to actively influence the outcome quality. As AI is hard to inspect or understand, errors can remain unnoticed until they cause major problems. To tackle the black-box problem and to explore the reasoning behind AI models, the term XAI was introduced by van Lent et al. in 2004 [30]. However, efforts to explain intelligent systems already existed in the 1980s [31,32,33], and work on explaining neural networks followed in the 1990s [34], though without calling it XAI. More recently, the success of AI applications in various high-interest domains and the rising usage of complex and nontransparent models, such as deep learning, have elevated attention to the need for explainable AI [35]. However, no uniform definition of XAI exists. Barredo Arrieta et al. established a widely accepted definition of XAI: “Given an audience, an explainable Artificial Intelligence is one that produces details or reasons to make its functioning clear or easy to understand.” [17]. Understandability remains a two-sided topic: the understandability of the model versus the human ability to understand. In addition to the understandability of the model, the cognitive abilities and goals of users must also be considered. Overall, XAI supplies knowledge on a specific AI model. However, recognizing all given information on AI specifics is neither possible for an inexperienced human mind nor efficient for solving an average domain-specific task. Thus, XAI fails to satisfy the needs of domain-specific application contexts, as XAI neither assists model implementation nor supports tailoring performance towards particular domain constraints. To bridge the gap, users must be provided with approachable, low-threshold solution concepts, increasing the usability of AI within the application domain.
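To make the XAI idea tangible, the following minimal sketch (not part of this publication) uses a model-agnostic technique, permutation feature importance, to indicate which inputs a black-box model relies on; the data set and the sensor-style feature names are hypothetical placeholders.
```python
# Hedged sketch: how a model-agnostic XAI tool can expose which input
# features drive a black-box model's predictions. Data and feature
# names are hypothetical placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
feature_names = [f"sensor_{i}" for i in range(X.shape[1])]  # hypothetical names

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure the accuracy drop:
# large drops indicate features the black-box model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda item: -item[1]):
    print(f"{name}: {score:.3f}")
```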

2.2.4. Usability of Artificial Intelligence

Many publications address the usability of software. With regard to AI, the following description of the state of the art focuses explicitly on usability in relation to AI. Harris refers to the understanding of AI in the domain of natural language processing (NLP) with tasks such as translation, question answering or named entity recognition (NER) [36]. The work is also based on the ISO standard for software design, ISO 9241 Part 11. The approach is to design the system and then measure how efficient the software makes the user, how effective it makes the user in solving the task, how often users actually use it and how satisfying the solution is, such that the software is a delight and easy to use and represents a significant competitive advantage.
Jameson et al. state that the term usable is well known in the field of human–computer interaction (HCI), but much less so in AI [37]. They claim that systems intended for use by individuals should be usable. According to them, AI researchers do not have to address the requirement of usability themselves if they “are working on the technical optimization of algorithms, as they are already being used successfully in usable interactive systems” or if the aspect of user interaction can be left “to HCI people to design and test usable interfaces”. On the contrary, Xu claimed that HCI experts should take a leading role by providing explainable and understandable AI [38]. He argued that the third wave of AI is technology enhancement and application with a human-centric approach and advised HCI professionals to proactively engage in AI research and development.
Gajos and Weld referred to usable AI for the quick construction of personalized and personalizable interfaces [39]. Gao et al. stated that the barrier to mass adoption of AI in various business domains is too high because most domain experts have no background in AI [1]. Developing AI applications involves multiple phases, including data preparation, application modeling and product deployment, while the main focus of AI research is on new AI models; usability, efficiency and security of AI, on the other hand, are neglected. The authors explicated the development of an AI platform to face these challenges and support the fast development of domain-specific applications. Pfau et al. presented a use case of AI in videogames showing the transfer of academic algorithms to the videogame industry, increasing both explainability and usability [40]. Lau worked on programming by demonstration (PBD) systems for no-code programming; PBD systems typically use AI methods for predicting future behavior [41]. A set of guidelines to consider when designing usable AI-based systems, based on lessons learned from three PBD systems, was presented. Bunt et al. presented an approach based on interface personalization: an interface was provided for users, adapted specifically to their needs [42]. They proposed mixed-initiative customization assistance, a compromise between an adaptable approach, where personalization is fully user controlled, and an adaptive approach, where personalization is fully system controlled. Other work relating to the topic of usability in AI includes: Song and Jung conducting a study on the usability of a specific AI-based application [43], Bécue et al. providing a survey and discussion on opportunities and threats of AI in the manufacturing domain [44], Pekarciková et al. describing the impact of digitization on the usability of lean tools [45], and Ozkaya focusing on data preparation from the view of a data scientist [46].
In summary, publications explicitly addressing the usability of AI-based applications already exist, but they address specific use cases and originate from different domains. A systematic determination of the requirements regarding user-friendliness (e.g., based on existing standards) has not yet been carried out.

3. Results

3.1. Usable Artificial Intelligence (UAI) Paradigm

ISO 9241-11 defines usability as being usable in a specific context of use, emphasizing the dependency on specific circumstances, specified goals and specified users. In addition, the properties of usable software are described as efficient, effective and satisfying [5]. Therefore, data science must address the user’s need for analytical insights in a way that the user can understand. Addressing user, goal and context involves considering relevant user roles and their specific needs. When attempting to examine the pains and gains of users looking for specific insights into a domain, the value proposition concept is a viable approach [47]. Originally intended as a strategy for marketing activities, the value proposition concept describes the basic principle of the value creation partnership, which is applicable in any pairing of provider and consumer. While a company tries to offer its customers a benefit in the original value proposition concept, software development tries to satisfy users through requirements engineering. Starting from the value proposition concept, the success of user-centric requirements engineering consequently depends on the analysis of the user profile, i.e., the pains and gains of potential users. Fulfilling customer jobs, or “user jobs” for that matter, can either alleviate a specific pain and/or help achieve a specific gain [48]. Accordingly, the value proposition analyzes which products and services can act as gain creators or pain relievers. Transferred to the setting of usable AI, gain creators are all solutions that provide access to data-based insights, and pain relievers are all methods that reduce effort or the limitations of the user’s status quo. In order to examine user pains and gains and to create viable AI methods to accomplish relevant user tasks, it is essential to explicate specific user roles, goals and contexts, as the definition of usability suggests [5].
As shown in Figure 2, users within a domain fulfill different user roles; the described user roles do not have to be fulfilled by one person or organization, but describe certain types of domain-specific AI implementations [14]. Through data analysis, each user role seeks insights relating to its specific goals. Delivering UAI solutions requires that each user role is provided with useful insights corresponding to its specific context and goal. Despite being specific, goal and context are not independent. Consequently, methods for AI systems are supposed to follow a hierarchical structure, building on and relying on one another, which is suggested to be structured according to user roles, as depicted in Figure 2.
Table 1 provides an overview of the domain-specific user roles, their specific artifacts, and a description of how they operate in their domain. In order to derive the final domain-specific outcome, practitioners in a domain rely on processes creating a perceived deliverable. AI can be useful to practitioners by supporting process control or by pushing back limitations, while exhibiting a flexibility that is not achievable with conventional imperative software [49]. Preceding the operation of the process, planners can enhance process characteristics by analyzing the system executing the process and providing predictions [50,51]. During system development, researchers can include AI-based methods enhancing system functionality [52,53,54]. UAI method development is dedicated to the user groups of practitioners, planners and researchers, providing them with usable approaches suited to their needs. In addressing the tasks mentioned, UAI method developers can draw on any AI method and evolve it into propositions fulfilling usability requirements.
User jobs in domain-specific data analysis comprise the provision of insight. System and process information is supposed to be valuable in enhancing the outcome of the use case specific process. To provide added value, the gained insight enables users to exceed process limitations or to achieve process outcomes with less effort. To achieve these goals, it is necessary to inquire:
  • What does the user perceive as costing too much or taking too long?
  • What are the main difficulties and challenges the user faces?
  • Where do existing solutions fall short of user expectations, e.g., missing features?
  • What makes the user feel bad, concerning social and basic needs? What risks does the user fear?
In order to define user activities based on specific user roles and their respective application goals, it is necessary to elaborate the usability definition further. Usability implies that specified users can apply software for specified goals in a specified context with effectiveness, efficiency and satisfaction. Potential users will not even consider applying a new approach that appears to attach additional workload to their individual status quo. On the contrary, a concept seamlessly integrated into the users’ daily business and promising added value will be considered. In other words, any task needs to be regarded as worth its time and effort. The two ways to fulfill this requirement are either minimizing the time and/or effort necessary to apply a new concept or maximizing its perceived value. However, even if a user considers the trade-off between benefits and additional workload to be positive, there remains the question of whether the user is able to recognize the proposition of that positive potential. The provision of AI solution approaches represents a proposal to potential users for their specific problems. This proposition must also be recognized as such by a potential user. Motivation to actively embrace and respond to a proposition only occurs when the effort required to embrace it does not exceed the benefit. In terms of data analysis, this means presenting AI solutions to the user in an appealing and promising way, being easily accessible and understandable (matching the cognitive capacity) for each potential user addressed.
As described in Table 2, when applying a concept or method within a domain, to have it be of any use to the domain expert, questions arise: “Does it work?”, asking for its effectiveness, “Is it worth its effort?”, questioning the efficiency of the approach, and “Will the use be pleasant?”, desiring it to be satisfactory. To fulfill these criteria in the application of AI methods, usable AI methods can provide a supporting layer between the AI model itself and the users within the domain, as shown in Figure 3.
The UAI paradigm places an interactive communicative layer between the user-centric bottom-up view and the modeling-based top-down view. The concept aims for this communicative layer to proactively be approached by both sides. The UAI layer is supposed to provide structure, comprehensible methodological knowledge and guidance fitted to the respective users. To address the user roles described in Table 2, the provided methods need to relate to each of the user roles, providing the means to grasp complex concepts to the necessary extent for application. Most importantly, the communicative UAI layer not only supports the provision of AI solution concepts to potential users, but also forms a common basis for rendering domain-specific requirements visible. By providing domain context for method developers, supporting goal communication within the domain context and presenting data science insights to users in a comprehensible manner, UAI establishes a major leap towards applicability of AI methods.
Attributes of the UAI paradigm. The UAI concept is based on the broader understanding of the term usability as denoted in the standard ISO 9241-11, i.e., a concept more comprehensive than mere user-friendliness, and is guided by its literal meaning as described in Table 3 [5].
The characteristics defined in Table 3 constitute a synthesis of the characteristics of the target definition of ISO 9241-11 [5] with those derived by applying the value proposition concept. Similar to Nielsen’s definition of five attributes constituting the understanding of usability in a narrow sense [29], the UAI paradigm relies on four fundamental user expectations for the properties of AI solutions in order to make the provided methods suitable for application in specific domains: Usefulness, Suitability, Integrability and Interoperability. Based on the four key attributes, further sub-attributes can be derived to clarify the understanding of the term and to provide an anatomy of the UAI paradigm.
To make applying a concept or method effective, efficient and satisfactory, users within a specific domain will ask, as detailed further in Figure 4:
  • “Is it applicable right away?”, demanding usefulness;
  • “Does it fit my purpose?”, questioning suitability;
  • “Does it work with what we are working with around here?”, asking for integrability;
  • “Does it also work when we are dealing with third parties?”, requesting interoperability.
Concise descriptions of the sub-attributes provided in Table 4 exemplify the UAI concept, while Section 3.2 discusses examples from the field of production engineering. The elements of the UAI paradigm serve as a blueprint for addressing and accommodating all user needs. The sub-attributes must be considered to achieve the usability of an AI application. Consequently, it is crucial to consider all attributes in determining user requirements for a particular use case and to tailor solution approaches to user role, goal and domain context in each of these categories.
In order to achieve usability in the sense of the UAI paradigm, methods and proposed implementations are prepared and presented in such a manner that all attributes of the UAI paradigm are attainable during utilization. Therefore, to provide UAI methods, developers shall be encouraged to:
  • Proactively support the generation and provision of method knowledge.
    • Deliver concepts and functionality for connecting interdisciplinary background knowledge, methods and experiences.
    • Provide instruments and methods to not only produce the aimed-at insight for each use case, but also produce insight about used methods exceeding one use case, aiming at problem solving in this domain setting in general.
  • Collect, simplify, abstract, systematize and comprehensibly communicate methodological knowledge.
  • Provide generic, visual solution approaches, for example, reference models, procedure models, problem structuring and visual inquiry tools.
  • Deliver support for finding parameters and boundary conditions, provide “help for self-help”.
  • Make room to contextualize method and parameter selection.
    • Allow for the integration of domain-specific causal relationships and logical connections and thereby detach from pure combinatorics in data-based solution finding;
    • Include domain-specific knowledge and explanations regarding certain preferences to identify optimum constraints rather than conducting complete combinatorial exploration.
UAI-based methods are effective in solving user problems according to user goals and context by supplying knowledge and answering user questions in a manner that provides insights into the use case. The application of AI methods is intended to enhance the outcome of domain-specific processes. According to UAI principles, the application of AI methods shall not increase the overall user workload. By fulfilling domain-specific work tasks and reducing user overload, UAI-based solution approaches lead to better efficiency. Additionally, incorporating UAI principles into data-based decision-making increases the ability to transfer solution approaches to adjacent application domains, thus preventing unnecessary repetition. A more clearly and vividly communicated and comprehensibly described solution approach lowers excessive solution bias and therefore reduces the risk of failure and erroneous results.

3.2. Translating the UAI Paradigm to AI Application in Industry

3.2.1. Cyber–Physical Production Systems in Industry 4.0

Industrial production focuses on the manufacture of products under the specification of economic and quality-oriented performance indicators. Therefore, the expertise of the personnel is concentrated on economic and quality-oriented target criteria. This includes the operation and maintenance of the machines and tools used, as well as planning activities for the production process. While data-based methods can greatly support these tasks, details regarding their application are not a core competency of production engineers, neither in terms of work capacity nor typical qualification set. The problem of low data science expertise in production engineering especially holds true in small and medium-sized enterprises [55,56]. These conditions still render it difficult to introduce AI applications without the help of data scientists.
As an exemplary application, predictive maintenance is one of the most beneficial data-based approaches used in manufacturing engineering. Machine downtime can account for up to 20 percent of total production costs [50,51]. Artificial intelligence-based solutions can prevent downtime and thus maximize machine availability. In addition, AI can measure machine efficiency and determine optimum production cycles. In addition to predictive maintenance, many other applications are conceivable, for example, in the field of production quality, warehouse and servicing, or in the context of supply chains. Moreover, current industrial production is characterized by strong customization of products down to batch size 1, which is subject to highly flexible mass customization. Automation technology, required for intelligent and flexible mass production, must become smarter by introducing processes of self-optimization, self-configuration, self-diagnosis and cognition to better support and assist operators in their increasingly complex tasks. Scientific studies confirm high potential benefits for reducing production time, increasing automation, manufacturing customized products and integrating unexploited data [57].
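As an illustration of the predictive maintenance idea, the following minimal sketch (an assumption-laden example, not taken from the cited studies) flags anomalous machine cycles with an unsupervised detector; the vibration and temperature features are synthetic placeholders.
```python
# Hedged sketch: anomaly detection on machine sensor data as one possible
# building block of predictive maintenance. The vibration/temperature
# features are synthetic placeholders, not real production data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Hypothetical features per machine cycle: [vibration RMS, bearing temperature]
normal_cycles = rng.normal(loc=[0.8, 60.0], scale=[0.05, 2.0], size=(1000, 2))
new_cycles = np.array([[0.82, 61.0],    # looks healthy
                       [1.40, 78.0]])   # deviates strongly -> possible wear

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_cycles)

for cycle, label in zip(new_cycles, detector.predict(new_cycles)):
    status = "normal" if label == 1 else "anomalous - schedule inspection"
    print(cycle, status)
```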
Particular responsibility in the application of AI arises from contexts linking cyber and real worlds. Cyberworld errors can cause real-world damage, e.g., destruction of machines, injuries to humans, destruction of nature, resulting in justified reluctance to implement AI. However, data-driven methods provide opportunities to understand the complex relationships between materials, machines and tools in terms of quality and productivity. This applies, in particular, to process chains in manufacturing, where quality correlations exist across several manufacturing steps, sometimes even across company boundaries. Consequently, companies see a need to adopt digitization and AI to survive in a harsh and rapidly evolving market environment. Market developments include aspects such as increasing individualization of products, which leads to smaller batch sizes and thus to increased set-up efforts. The use of new materials further accelerates complexity in manufacturing. Existing empirical knowledge, accumulated over years, in some cases decades, often encounters limitations when faced with current complex economic challenges. Another aspect of increasing complexity is the inclusion of environmental and social sustainability criteria, requiring additional data collection and verification.
Cyber–physical systems form a networked complex of informatics and software with mechanical and electronic parts communicating via a data infrastructure. Cyber–physical systems arise from the networking of embedded systems through wired or, typically, wireless communication networks and are characterized by a high degree of complexity [58]. Next-generation industrial production shall be fully interlinked with information and communication technology, making autonomous intelligent systems a key technology for many management tasks in the cyber–physical factory environment. Self-organized production will be enabled, with people, machines, equipment, logistics and products communicating and cooperating with each other. Networking permits optimizing not just single production steps, but entire value chains. The network is intended to include all phases of a product’s life cycle, from the idea through development, production, use and maintenance to recycling. Efforts remain to be carried out across industries to further digitize production and to utilize data with suitable methods, as companies still see a need for development towards conditions that make AI applications shop-floor ready. Manufacturing zones deploy devices from a range of vendors. Embedded systems can include legacy and nonlegacy components, with some operators upgrading all devices simultaneously and others upgrading single devices when required. The challenge is to integrate many different data sources with their typically vast number of signals into different tools. The configuration of interfaces is time-consuming, as signals need to be examined and extensive documentation needs to be studied, requiring both IT and plant knowledge carried by domain-specifically trained personnel, which limits the digitally networked connection of machines and production systems. Ensuring data quality is another major challenge. In this regard, AI can be assigned the task of providing actively self-configuring machine data interfaces with integrated quality monitoring.
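The quality-monitoring part of such an interface can be pictured with the following minimal rule-based sketch; the signal names and plausible ranges are hypothetical assumptions, and a self-configuring variant would learn these limits from data rather than reading them from a fixed table.
```python
# Hedged sketch: a rule-based quality monitor for signals arriving over a
# machine data interface. Signal names and plausible ranges are hypothetical
# assumptions; in practice they would come from configuration or be learned.
import math

PLAUSIBLE_RANGES = {            # hypothetical per-signal limits
    "spindle_speed_rpm": (0.0, 24000.0),
    "coolant_temp_c": (5.0, 95.0),
}

def check_sample(sample: dict) -> list[str]:
    """Return a list of data quality issues for one interface sample."""
    issues = []
    for name, (lo, hi) in PLAUSIBLE_RANGES.items():
        value = sample.get(name)
        if value is None or (isinstance(value, float) and math.isnan(value)):
            issues.append(f"{name}: missing value")
        elif not lo <= value <= hi:
            issues.append(f"{name}: {value} outside plausible range [{lo}, {hi}]")
    return issues

print(check_sample({"spindle_speed_rpm": 18000.0, "coolant_temp_c": float("nan")}))
```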

3.2.2. Challenges in AI Application in Industry

The core element of the UAI paradigm is user role centricity. User perspectives in industrial production from studies on challenges in the application of AI form the basis for the deduction of typical AI user roles in industry. Thirteen surveys examining companies implementing AI were analyzed and summarized in Table 5. The surveys were carried out by various associations and institutes such as acatech (German Academy of Science and Engineering) [59], bitkom (industry association of the German information and telecommunications sector) [60,61], Stiftung Arbeit und Umwelt der IG BCE [62], BDI [63], Verband der TÜV e. V. [64] and the German Federal Ministry for Economic Affairs and Energy (BMWi) [65,66]. A total of 4696 industrial companies were interviewed about existing obstacles to digitization and use of AI. Some companies may have participated multiple times. However, this plays a subordinate role in the overall view, as a qualitative representation of the general need for action is intended.
The questions in these surveys have been condensed into key points and grouped into four categories: strategy and management; technology and R&D; functionality and applicability; and legislative. Within the key points listed, some points could be assigned to several categories. In order to avoid redundancies, each point is only assigned to the most suitable category. Table 5 shows the resulting overview of the surveys evaluated. Key points that apply to UAI are highlighted in grey.
In the category of strategy and management, the main challenges are the availability of skilled employees, the proportionality of investment costs and the IT affinity of the workforce. In the category of technology, production and R&D, the availability of suitable data sets, the acceptance of AI by the workforce and the effort involved in data preparation represent the main challenges. Critical for the category of functionality and applicability are the data infrastructure and the broadband network connection. On the legislative side, the main focus is on data security, privacy and establishing reliable standards. Only one company indicated the issues “Ethical and moral considerations”, “AI Regulation” and “Communication and cooperation with data protection authorities” as barriers to digitization and the use of AI. It is not known to which field the interviewees belonged. The initiative originated with associations from various industries, and it can be assumed that industry interests and perspectives were incorporated. The evaluation was carried out with the aim of compiling the main requirements expressed by industry with regard to usability.

3.2.3. User Roles in AI Application in Industry

Table 6 describes typical user roles in the context of digitized production and examples of typical use cases for them. Development engineers of different domains are involved in designing cyber–physical production systems; their target artifact is the CPPS itself. Manufacturing process automation, AI-supported digital twins and connected factories, as well as data-based correction of system behavior, constitute typical use cases. In contrast, development engineers in production technology are concerned with researching and optimizing the production processes. Their target artifact is the data-driven production process, which includes embedded systems as part of its data infrastructure. In this context, predictive maintenance, process and logistics optimization, supply chain management and edge analytics constitute common use cases. Process engineers, finally, are tasked with managing the production process and ensuring the target criteria of running production. AI-based product development, quality assurance, material delivery forecasting, robotics, design customization and shop-floor performance optimization comprise representative use cases.
Although the target artifacts and use cases of the user roles differ, the user roles are causally linked: development engineers and process engineers utilize the CPPS that the interdisciplinary designers of the CPPS have developed, and process engineers run the process that the development engineers have evolved. However, the target criteria of the user roles differ fundamentally. The design of the CPPS is very much influenced by strategic considerations. As a result, aspects such as the magnitude of investment costs, meaningful areas of application, certainty of benefits from the application, integration into the corporate strategy, as well as ethical and moral considerations draw attention. The optimization of processes, on the other hand, is characterized more by product quality criteria, although strategic considerations still play a role. In process optimization, the complexity of the topic, time constraints, the provision of suitable data, the data preparation effort and the suitability of the data infrastructure rank among the key considerations. The management of production processes is primarily concerned with the achievement of strict minimum requirements for various key figures of product quality, production efficiency and legal specifications. In this respect, the availability of capable employees, the IT affinity and IT competence of the employees, the acceptance of AI in the workforce, the availability of data in the required quality and a clear assignment of responsibility are decisive factors in production process management.
A key aspect is the linking of the different user roles, whose work results depend on one another, but whose pains and gains are fundamentally different. The flow of information between different levels or across different divisions is already well known to be a key property of organizations, so approaches from knowledge management are considered useful. However, no expert knowledge in data science can be expected from the respective user roles at any of the levels. For this reason, all user roles must also be provided with comprehensible access to relevant expert knowledge of the data science domain. AI already demonstrates very broad application potential via human-oriented signal channels, i.e., image and speech processing. Thus, it is obvious that AI will integrate the required add-on functionalities for user support into its functional scope and provide them by default in future applications.

3.2.4. Application Example Material Design in Industry

The UAI paradigm can also be applied in other domains, as long as the constraints and goals described in Section 3.1 hold true for the targeted domain as well. The field of material design fulfils these criteria and represents another example of the application of the UAI paradigm. AI is applied in material design for material property analysis, material discovery and quantum chemistry [98]. Typical user roles in this domain and their target artifacts are given in Table 7.
The challenges for the application of AI described in Section 3.2 for the production-engineering domain are equally valid for the domain of material design. Differences result from the fact that the field of material design is more laboratory-based. This leads to the same data integration issues as in production engineering but mitigates the data quality problem to some extent. Moreover, compared to other fields, materials data sets are typically much smaller and more diverse [99], while extreme values are often of the greatest interest. These challenges make it especially necessary to validate the results of AI models in this domain very carefully.
Table 7. Typical user roles in the material design domain.
User | Target Artifact | Use-Case Examples
Engineer discovering new materials | Produced material and its synthesis | Materials discovery of piezoelectrics with large electrostrains [100]; machine learning in solid-state materials science [101,102]; synthetic organic chemistry driven by artificial intelligence [103]
Engineer optimizing given materials | Process of material production/synthesis | Prediction of mechanical behavior of textile reinforced concrete [104,105]; steel production quality control via process data [106]; process optimization for a copper alloy considering hardness and electrical conductivity [107]
Engineer running materials production processes | Produced material | Control bead geometry in additive manufacturing of steel [108]; artificial neural networks applied to polymer composites [109]; application of intelligent technology in functional material quality control [110]

3.3. UAI Attributes

The evaluation of relevant characteristics of software is typically achieved with quality models, and with the emergence of AI, models for evaluating the quality of AI solutions have also been derived. This involves adapting the principles of developing quality models and their sequence to AI, and developing approaches for formulating definitions of AI features as well as methods for representing dependencies and hierarchies of features [8]. Recent research using systematic mapping studies shows that quality models for AI systems and software exist; however, work on AI software components is still lacking [7].
To address the challenges examined, the UAI paradigm is proposed. The sub-attributes of explainability and interpretability are already well covered in this regard by existing XAI concepts. As XAI, due to its still strong data science orientation, falls short of providing actual user-friendliness for end users, e.g., in user roles in production technology, further affordances are addressed by the following UAI attributes: accessibility, user-friendliness, transferability, aptitude, integrability and interoperability.
Recently, a variety of AI frameworks and libraries have emerged that improve accessibility (TensorFlow, Keras, Caffe, PyTorch and scikit-learn, to name a few). These frameworks are built to enable the application of advanced AI without requiring in-depth knowledge of AI algorithm implementation or software development. Additional tools are implemented within these ecosystems, such as tools for visualization, data validation, preprocessing, model analysis and deployment. Nevertheless, it can be assumed that accessibility will be further improved, for example by equipping more AI tools with graphical user interfaces, browser-based operation and the automatic provision of packages.
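The following minimal sketch illustrates the accessibility argument: with a high-level library API such as scikit-learn, a complete train-and-evaluate workflow fits in a few lines without exposing algorithm internals; the bundled sample data set merely stands in for domain data.
```python
# Hedged sketch: how a high-level library API lets a domain user train and
# evaluate a model in a few lines without touching algorithm internals.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)  # library sample data as a stand-in
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Preprocessing and model are bundled in one object; no manual optimization
# loop or feature scaling bookkeeping is exposed to the user.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```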
In addition to the already greatly simplified frameworks, libraries and APIs, the concept of automated machine learning (AutoML) further simplifies the application of AI by automating the effort of finding the best model architecture and hyperparameters [111]. This increases learnability, as it makes AI application easier to learn for beginners. In addition, AI solutions will be expectation-compliant by matching the expectations that users carry along from experience with established workflows. For this purpose, AI with natural language processing (NLP) can derive and leverage suitable analogies from existing dialog systems, user manuals or training materials [112].
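A minimal sketch of the automation idea behind AutoML, using scikit-learn's grid search as a stand-in: a full AutoML tool would additionally search over model families and preprocessing pipelines, but the principle of automatically exploring configurations is the same.
```python
# Hedged sketch: automated hyperparameter search as the core idea behind
# AutoML. A dedicated AutoML tool would additionally search over model
# families and preprocessing steps; here grid search stands in.
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)  # library sample data as a stand-in

param_grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.001, 0.0001]}
search = GridSearchCV(SVC(), param_grid, cv=3)  # tries all combinations
search.fit(X, y)

print("best parameters:", search.best_params_)
print(f"cross-validated accuracy: {search.best_score_:.2f}")
```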
Task efficiency of the AI application makes it imperative that the perceived benefit for the user role is greater than the additional effort caused by AI implementation. This can be ensured either by maximizing the perceived benefit (forecasting, predictive maintenance, quality control and risk reduction) or by minimizing the effort (learnability of new AI applications).
Memorability will have to be improved using findings from the psychology of perception. People only perceive what they want to perceive, based on their motivational disposition, their abilities and their experience. These criteria differ depending on the user role and the domain. Accordingly, the different motivational dispositions, skills and experiences of user roles and domains must be taken into account in the outputs of the AI. In process control, for example, it is recommended that the AI solution is oriented towards established forms of representation familiar to process engineers, such as Ishikawa diagrams. In material discovery, AI solutions should be aligned with established representations for materials, such as structure graphs.
In order to address error proneness, AI developers need to be aware of the fact that errors can be caused by users and also by environmental influences, such as a poor internet connection, small screens, bright sunlight or excessive noise. The developer needs to be aware of the circumstances under which the AI will be used. In particular, due to the black-box nature of most AI solutions, errors can remain undetected for long durations and lead to far-reaching consequences that are recognized too late. Therefore, efficient possibilities for regular plausibility checks and efficient measurement of the AI’s confidence must be provided at the earliest stage possible.
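One simple form of such a plausibility check is to act only on predictions whose confidence exceeds a threshold and to route the rest to a human; the classifier, data set and threshold in the following sketch are illustrative assumptions, and in practice the confidence scores would need calibration.
```python
# Hedged sketch: routing low-confidence predictions to a human instead of
# acting on them automatically, as one simple form of plausibility check.
# The threshold and the classifier are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # library sample data as a stand-in
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
CONFIDENCE_THRESHOLD = 0.8  # assumed; should be calibrated per use case

for proba in model.predict_proba(X_test[:5]):
    confidence = proba.max()
    if confidence < CONFIDENCE_THRESHOLD:
        print(f"confidence {confidence:.2f}: defer to human review")
    else:
        print(f"confidence {confidence:.2f}: accept prediction {proba.argmax()}")
```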
Pleasantness can be enhanced by accounting for natural human abilities to respond. AI can learn the right reward functions by querying human experts. Asking easy questions constitutes a goal for an information-gain formulation that optimally selects questions while naturally accounting for the human’s ability to answer. Questions that optimize the trade-off between robot and human uncertainty shall be identified, and questions that become redundant or costly shall be recognized as such. In robotics, for example, not only script form will be available for human–machine interaction in the future, but also speech and, at best, a combination with gesture recognition, in order to make human–machine interaction as pleasant as possible.
AI models have demonstrated great success in learning complex patterns and making predictions about unseen cases as long as the problem or use case is clearly defined and sufficiently describable with enough codifiable information. These one-trick ponies are the still common weak AI examples. In contrast, transferability, the ability to apply learned patterns to other domains, still appears to be weak. So far, the models have limited applicability to domains beyond the training domain. Advances in deep learning, e.g., artificial neural networks, transfer learning and combined approaches in feature selection, provide extended opportunities to improve transfer to adjacent domains. For example, in image-based process monitoring, pretrained artificial neural networks can be applied in which only the final layers are trained on a particular production engineering use case. In material design, for structure-based property prediction, pretrained artificial neural networks can be used as a frozen featurizer of the material structure; again, only the final layers are trained on the smaller dataset of the particular materials engineering use case. This transfer learning strategy not only reduces the training effort for the particular production application, but also improves the functionality of the AI model, as the pretrained layers lead to better generalizability of the AI solution.
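The described strategy of freezing a pretrained network and training only the final layers can be sketched as follows; the Keras backbone choice, the two-class "ok/defect" head and the random stand-in images are assumptions for illustration, not the setup used in the cited applications.
```python
# Hedged sketch of the frozen-featurizer strategy described above.
# Backbone choice, head size and random stand-in data are assumptions.
import numpy as np
import tensorflow as tf

# Pretrained backbone (ImageNet weights) used as a frozen feature extractor.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, pooling="avg",
    weights="imagenet")
base.trainable = False  # only the small head below is trained

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g., "ok" vs. "defect"
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random arrays stand in for the (small) labeled image set of the use case;
# real images would additionally be scaled with the backbone's preprocessing.
images = np.random.rand(32, 224, 224, 3).astype("float32")
labels = np.random.randint(0, 2, size=32)
model.fit(images, labels, epochs=1, batch_size=8)
```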
The aptitude of the AI must be tailored both to the domain and to the use case. Both domain specificity and use case specificity are already considered in the design of the CPPS. In particular, this requires intensive and effective communication, both horizontal and vertical, among the domain experts and across hierarchy levels. The user roles of process operation differ from the user roles of process development and the user roles of CPPS development. The pains and gains of the process operators have to be regarded, as well as their hidden knowledge about the process operation. Obtaining the latter is typically particularly difficult, especially in the context of usability with computer algorithms. Furthermore, provision for adaptation possibilities is recommended. For example, AI can improve production planning by assisting in revealing hidden knowledge of assembly workers from the shop floor. Because hidden knowledge is involved, unsupervised methods, such as clustering or outlier detection, or reinforcement learning will figure in this. Such unsupervised AI methods will empower software to an unprecedented degree to adapt autonomously to the needs of domains and use cases in an agile manner.
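As a minimal sketch of how unsupervised methods might surface such hidden patterns, the following example clusters hypothetical assembly cycle records; the two features and their value ranges are invented for illustration.
```python
# Hedged sketch: clustering assembly cycle records to surface recurring
# work patterns that are not explicitly documented. The two features and
# their values are hypothetical placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical features per assembly cycle: [cycle time in s, rework steps]
cycles = np.vstack([
    rng.normal([120.0, 0.2], [5.0, 0.4], size=(80, 2)),   # routine assemblies
    rng.normal([180.0, 2.5], [10.0, 0.8], size=(20, 2)),  # slower pattern with rework
])

X = StandardScaler().fit_transform(cycles)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for cluster in np.unique(labels):
    mean_time, mean_rework = cycles[labels == cluster].mean(axis=0)
    print(f"cluster {cluster}: mean cycle time {mean_time:.0f} s, "
          f"mean rework steps {mean_rework:.1f}")
```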
As AI evolves from pure research into production use, a significant increase in integrability is to be expected. The integration of AI into the production application involves hardware and software, as well as interfaces. In a variety of technical application fields, AI-based systems receive sensor data and return process-influencing information to actuators. The interactions between information processing, the process as data source and data sink, and the influences of the quality of the sensor data and the actuator operations are decisive for the overall function of such systems. In addition to the particular measuring principle for the respective process variable, smart sensors include signal preprocessing, monitoring algorithms to safeguard the sensor function, connectivity and, depending on the area of application, energy supply functions. Smart actuators also supplement the control intervention in the technical process with advanced signal processing and monitoring mechanisms as well as various communication methods. The resulting signal processing system incorporates additional smart features that further enhance system performance.
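The integration pattern of an AI component sitting between smart sensors and smart actuators can be outlined as follows; the sensor read, the model and the actuator command are placeholder stubs rather than a real machine interface.
```python
# Hedged sketch: an AI component placed between sensor input and actuator
# output. All three functions are hypothetical stubs, not a real interface.
import random

def read_smart_sensor() -> dict:
    """Stub for a preprocessed, self-monitored sensor reading (hypothetical)."""
    return {"force_kN": random.uniform(40.0, 60.0), "sensor_ok": True}

def predict_correction(reading: dict) -> float:
    """Stub for the AI model; a trivial rule stands in for real inference."""
    return 0.1 * (50.0 - reading["force_kN"])

def write_smart_actuator(correction: float) -> None:
    """Stub for the actuator command over the plant's communication layer."""
    print(f"adjust press position by {correction:+.2f} mm")

for _ in range(3):  # one control cycle per iteration
    reading = read_smart_sensor()
    if not reading["sensor_ok"]:
        continue  # sensor self-monitoring flagged the value as unreliable
    write_smart_actuator(predict_correction(reading))
```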
Consistent with the overall shared vision of digitized and networked production, AI solutions must also feature interoperability with applications beyond the boundaries of their operating environment. In this regard, the issue of interoperability exceeds the question of technical realization, which, in fact, has progressed to the point where technical interoperability can be taken for granted. Much more urgent are the aspects of socioeconomics and legal regulations of all kinds. Tech giants try to control AI with lock-in models, which has a negative impact on AI development. Smaller companies cannot compete, and developers are locked into services and providers such as Amazon’s AWS. Whenever this occurs, there is a risk that potentially extremely robust AI architectures developed by smaller companies will be neglected. For example, Google’s TensorFlow is one of the most popular AI frameworks due to its high computational power. However, because it does not have multiple pretrained AI models, it is not the best option when it comes to innovation and faster time to market. Similarly, Amazon’s AWS offers both comprehensive data analytics tools and a high level of security but falls short in terms of flexibility for specific AI algorithms. It is much easier for development teams to use best-of-breed frameworks and multiple related functions if they are all interoperable and flexible. So far, only the purely economic aspects at the software level have been considered. In the cyber–physical production context, system developers, production companies and customer groups, among others, must also be socioeconomically motivated and ultimately be enabled to achieve interoperability in compliance with legal regulations.

4. Conclusions

To further advance digitization through the use of AI methods in various application domains, the paradigm of usable AI (UAI) is introduced. The primary objective of UAI is user orientation. To this end, solutions shall rely heavily on user-specific considerations. This implies domain-specific adaptation, on the one hand, and adaptation to user roles, on the other.
Numerous surveys on working with AI-based systems in production practice have been examined to investigate the relevance of the concept within a specific application domain. The main difficulties perceived by practitioners and their causes have been analyzed, and possible research directions have been derived from the identified hurdles. As a result, to gain maximum leverage for increasing the willingness to introduce AI applications and their prospects of success, it is necessary to improve the usability of AI in practice in a wider sense, especially in terms of solution transparency and user competence. Previous approaches to increasing the transparency of black-box-like AI algorithms have not yet closed the gap between data science and its users within specific domains; the need persists to provide sufficient AI application support for engineers and practitioners within the domain.
It is inferred that AI will evolve towards improved usability, as other software solutions have over the past three decades. The UAI paradigm addresses usability, suitability, integrability and interoperability as essential user-centric AI attributes.
In the context of CPPS, the following user roles have been abstracted: developer of CPPS, developer of processes and operator of processes. The analysis shows that target artifacts, motivation, knowledge horizon and challenges differ for the user groups. Therefore, in the future, UAI shall enable a domain- and user-role-specific adaptation of attributes, accompanied by adaptive support for vertical and horizontal integration across the domain-user-role matrix.
UAI defines a transformational paradigm. It provides orientation and structure for developers, research and development departments and enterprises. Adopting the UAI paradigm in solution development will enable future users to apply AI more easily and competently. Hence, AI solutions gain appeal and relevance for new user groups and use cases in a broad variety of domains, resulting in an expanded scope of activity and, therefore, the ability to work more effectively and efficiently. By proposing the concept of UAI, we aim to inspire a new field of research and to initiate the much-needed further development of methods for easily accessible AI application.

Author Contributions

Conceptualization, H.W., D.S., V.L., F.C., M.M., E.B., K.F. and L.D.; methodology, D.S., F.C., M.M., E.B. and K.F.; validation, H.W. and D.S.; investigation, D.S., V.L. and K.F.; data curation, H.W.; writing—original draft preparation, D.S. and V.L.; writing—review and editing, D.S. and V.L.; visualization, H.W., D.S., V.L., F.C., M.M., E.B., K.F. and L.D.; supervision, S.I.; project administration, H.W. and D.S. All authors have read and agreed to the published version of the manuscript.

Funding

Funding for this work comes from two groups of projects: a group characterized by machine-related activities, and a group characterized by a focus on process flows. Contributions to the machine-related side were developed in the projects “Smart Data Services for Production Systems” (funded by the European Social Fund (ESF) and co-financed by tax funds based on the budget approved by the members of the Saxon State Parliament with funding code 100302264) and “PreMaMisch-Predictive-Maintenance-Systems for Mixing Plants” (funded by the German Federal Ministry of Economics and Technology through the AiF as part of the program “Central Innovation Program for SMEs” based on a resolution of the German Bundestag with funding code KK5023201LT0). Contributions related to process operations were developed in the projects “AMTwin-Data-driven process, material and structure analysis for additive manufacturing” (funded by the European Regional Development Fund (ERDF) under funding code 100373343), “PrognoseMES-Development of an AI-based forecasting module for Manufacturing Execution System (MES) for predictive production control” (funded by the European Regional Development Fund (ERDF) and co-financed by tax funds based on the budget approved by the members of the Saxon State Parliament with funding code 100390519), “PAL-PerspektiveArbeitLausitz” (funded by the Federal Ministry of Education and Research (BMBF) with funding code 02L19C301), Project C3 within Research Training Group GRK2250/2 (funded by the German Research Foundation (DFG) with funding code 287321140), “Ganzheitlich angewandte KI im Ingenieurbereich-KIIng” (funded by the German Federal Ministry of Education and Research as part of the program “Digitale Hochschulbildung” with the funding code 16DHBQP020), and “KIOptiPack” (funded by the Federal Ministry of Education and Research (BMBF) with funding code 033KI129).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data is contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gao, J.; Wang, W.; Zhang, M.; Chen, G.; Jagadish, H.; Li, G.; Ng, T.; Ooi, B.; Wang, S.; Zhou, J. PANDA: Facilitating Usable AI Development. arXiv 2018, arXiv:1804.09997v1. [Google Scholar]
  2. Russell, S.J.; Norvig, P.; Davis, E. Artificial Intelligence: A Modern Approach, 3rd ed.; Prentice Hall Series in Artificial Intelligence; Prentice Hall: Upper Saddle River, NJ, USA, 2010; ISBN 978-0-13-604259-4. [Google Scholar]
  3. Salanova, M.; Llorens, S.; Cifre, E. The Dark Side of Technologies: Technostress among Users of Information and Communication Technologies. Int. J. Psychol. 2013, 48, 422–436. [Google Scholar] [CrossRef] [Green Version]
  4. Tarafdar, M.; Tu, Q.; Ragu-Nathan, T.S. Impact of Technostress on End-User Satisfaction and Performance. J. Manag. Inf. Syst. 2010, 27, 303–334. [Google Scholar] [CrossRef]
  5. ISO 9241-11; Ergonomics of Human-System Interaction. ISO: Geneva, Switzerland, 2018.
  6. Dieber, J.; Kirrane, S. A Novel Model Usability Evaluation Framework (MUsE) for Explainable Artificial Intelligence. Inf. Fusion 2022, 81, 143–153. [Google Scholar] [CrossRef]
  7. Ali, M.A.; Yap, N.K.; Ghani, A.A.A.; Zulzalil, H.; Admodisastro, N.I.; Najafabadi, A.A. A Systematic Mapping of Quality Models for AI Systems, Software and Components. Appl. Sci. 2022, 12, 8700. [Google Scholar] [CrossRef]
  8. Kharchenko, V.; Fesenko, H.; Illiashenko, O. Quality Models for Artificial Intelligence Systems: Characteristic-Based Approach, Development and Application. Sensors 2022, 22, 4865. [Google Scholar] [CrossRef] [PubMed]
  9. Cioffi, R.; Travaglioni, M.; Piscitelli, G.; Petrillo, A.; De Felice, F. Artificial Intelligence and Machine Learning Applications in Smart Production: Progress, Trends, and Directions. Sustainability 2020, 12, 492. [Google Scholar] [CrossRef] [Green Version]
  10. Mohammadi, V.; Minaei, S. Artificial Intelligence in the Production Process. In Engineering Tools in the Beverage Industry; Elsevier: Amsterdam, The Netherlands, 2019; pp. 27–63. ISBN 978-0-12-815258-4. [Google Scholar]
  11. Huber, S.; Wiemer, H.; Schneider, D.; Ihlenfeldt, S. DMME: Data Mining Methodology for Engineering Applications—A Holistic Extension to the CRISP-DM Model. In Proceedings of the Procedia CIRP (2018), Gulf of Naples, Italy, 18 July 2018; pp. 403–408. [Google Scholar]
  12. Kross, S.; Guo, P.J. Practitioners Teaching Data Science in Industry and Academia: Expectations, Workflows, and Challenges. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 2 May 2019; ACM: Rochester, NY, USA, 2017; pp. 1–14. [Google Scholar]
  13. Bianchini, M.; Michalkova, V. Data Analytics in SMEs: Trends and Policies; OECD SME and Entrepreneurship Papers; OECD: Paris, France, 2019; Volume 15. [Google Scholar]
  14. Schneider, D.; Kusturica, W. Towards a Guideline Affording Overarching Knowledge Building in Data Analysis Projects. Bus. Inf. Syst. 2021, 1, 49–59. [Google Scholar] [CrossRef]
  15. Delipetrev, B.; Tsinaraki, C.; Kostic, U. Historical Evolution of Artificial Intelligence; Publications Office of the European Union: Luxembourg, 2020. [Google Scholar]
  16. Adadi, A.; Berrada, M. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access 2018, 6, 52138–52160. [Google Scholar] [CrossRef]
  17. Arrieta, A.B.; Díaz-Rodríguez, N.; Del Ser, J.; Bennetot, A.; Tabik, S.; Barbado, A.; Garcia, S.; Gil-Lopez, S.; Molina, D.; Benjamins, R.; et al. Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI. Inf. Fusion 2020, 58, 82–115. [Google Scholar] [CrossRef] [Green Version]
  18. Kropatschek, S.; Steuer, T.; Kiesling, E.; Meixner, K.; Fruhwirth, T.; Sommer, P.; Schachinger, D.; Biffl, S. Towards the Representation of Cross-Domain Quality Knowledge for Efficient Data Analytics. In Proceedings of the 2021 26th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Vasteras, Sweden, 7–10 September 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–4. [Google Scholar]
  19. Langer, T.; Meisen, T. System Design to Utilize Domain Expertise for Visual Exploratory Data Analysis. Information 2021, 12, 140. [Google Scholar] [CrossRef]
  20. Li, G.; Yuan, C.; Kamarthi, S.; Moghaddam, M.; Jin, X. Data Science Skills and Domain Knowledge Requirements in the Manufacturing Industry: A Gap Analysis. J. Manuf. Syst. 2021, 60, 692–706. [Google Scholar] [CrossRef]
  21. Wiemer, H.; Dementyev, A.; Ihlenfeldt, S. A Holistic Quality Assurance Approach for Machine Learning Applications in Cyber-Physical Production Systems. Appl. Sci. 2021, 11, 9590. [Google Scholar] [CrossRef]
  22. Fayyad, U.; Piatetsky-Shapiro, G.; Smyth, P. From Data Mining to Knowledge Discovery in Databases. AI Mag. 1996, 17, 37. [Google Scholar]
  23. Fügener, A.; Grahl, J.; Gupta, A.; Ketter, W. Cognitive Challenges in Human–Artificial Intelligence Collaboration: Investigating the Path Toward Productive Delegation. Inf. Syst. Res. 2022, 33, 678–696. [Google Scholar] [CrossRef]
  24. Ishikawa, F.; Yoshioka, N. How Do Engineers Perceive Difficulties in Engineering of Machine-Learning Systems?—Questionnaire Survey. In Proceedings of the 2019 IEEE/ACM Joint 7th International Workshop on Conducting Empirical Studies in Industry (CESI) and 6th International Workshop on Software Engineering Research and Industrial Practice (SER&IP), Montreal, QC, Canada, 28 May 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 2–9. [Google Scholar]
  25. Song, I.-Y.; Zhu, Y. Big Data and Data Science: What Should We Teach? Expert Syst. 2016, 33, 364–373. [Google Scholar] [CrossRef]
  26. Chennamaneni, A.; Teng, J.T.C. An Integrated Framework for Effective Tacit Knowledge Transfer. AMCIS Proc. 2011, 1, 277. [Google Scholar]
  27. Foos, T.; Schum, G.; Rothenberg, S. Tacit Knowledge Transfer and the Knowledge Disconnect. J. Knowl. Manag. 2006, 10, 6–18. [Google Scholar] [CrossRef] [Green Version]
  28. Winston, P.H. Artificial Intelligence; Addison-Wesley Longman Publishing Co., Inc.: North York, ON, Canada, 1992. [Google Scholar]
  29. Nielsen, J. Usability Engineering; Academic Press: Boston, MA, USA, 1993; ISBN 978-0-12-518406-9. [Google Scholar]
  30. van Lent, M.; Fisher, W.; Mancuso, M. An Explainable Artificial Intelligence System for Small-Unit Tactical Behavior. In Proceedings of the Nineteenth National Conference on Artificial Intelligence, Sixteenth Conference on Innovative Applications of Artificial Intelligence, San Jose, CA, USA, 25–29 July 2004. [Google Scholar]
  31. Moore, J.; Swartout, W. Explanation in Expert Systems: A Survey; University of Southern California: Los Angeles, CA, USA, 1989. [Google Scholar]
  32. Swartout, W.R. XPLAIN: A System for Creating and Explaining Expert Consulting Programs. Artif. Intell. 1983, 21, 285–325. [Google Scholar] [CrossRef]
  33. Van Melle, W.; Shortliffe, E.H.; Buchanan, B.G. EMYCIN: A Knowledge Engineer’s Tool for Constructing Rule-Based Expert Systems. In Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project; Addison-Wesley: North York, ON, Canada, 1984; pp. 302–313. [Google Scholar]
  34. Andrews, R.; Diederich, J.; Tickle, A.B. Survey and Critique of Techniques for Extracting Rules from Trained Artificial Neural Networks. Knowl.-Based Syst. 1995, 8, 373–389. [Google Scholar] [CrossRef]
  35. Abdul, A.; Vermeulen, J.; Wang, D.; Lim, B.Y.; Kankanhalli, M. Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21 April 2018; ACM: Rochester, NY, USA, 2018; pp. 1–18. [Google Scholar]
  36. Harris, J. Usable AI: User Experience Designers Have a Simple Answer to the AI “True Understanding” Problem. Medium 2019. Available online: https://medium.com/@julian.harris/usable-ai-user-experience-designers-have-a-simple-answer-to-the-ai-true-understanding-problem-82932616ee50 (accessed on 17 November 2019).
  37. Jameson, A.; Spaulding, A.; Yorke-Smith, N. Introduction to the Special Issue on “Usable AI”. AI Mag. 2009, 30, 11–15. [Google Scholar] [CrossRef] [Green Version]
  38. Xu, W. Toward Human-Centered AI: A Perspective from Human-Computer Interaction. Interactions 2019, 26, 42–46. [Google Scholar] [CrossRef] [Green Version]
  39. Gajos, K.; Weld, D. Usable AI: Experience and Reflections. In Proceedings of the CHI 2008 Workshops and Courses: Usable Artificial Intelligence, Florence, Italy, 5–10 April 2008. [Google Scholar]
  40. Pfau, J.; Smeddinck, J.; Malaka, R. The Case for Usable AI: What Industry Professionals Make of Academic AI in Video Games. In Proceedings of the CHI PLAY’20: Extended Abstracts of the 2020 Annual Symposium on Computer-Human Interaction in Play, Virtual, 2 November 2020. [Google Scholar]
  41. Lau, T. Why Programming-By-Demonstration Systems Fail: Lessons Learned for Usable AI. AI Mag. 2009, 30, 65–67. [Google Scholar] [CrossRef] [Green Version]
  42. Bunt, A.; Conati, C.; McGrenere, J. Mixed-Initiative Interface Personalization as a Case Study in Usable AI. AI Mag. 2010, 30, 58. [Google Scholar] [CrossRef] [Green Version]
  43. Song, J.; Jung, D. A Study on the Usability of AI-Based Naver App Search Service. J. Korean Soc. Des. Cult. 2021, 27, 197–207. [Google Scholar] [CrossRef]
  44. Bécue, A.; Praça, I.; Gama, J. Artificial Intelligence, Cyber-Threats and Industry 4.0: Challenges and Opportunities. Artif. Intell. Rev. 2021, 54, 3849–3886. [Google Scholar] [CrossRef]
  45. Pekarčíková, M.; Trebuňa, P.; Kliment, M. Digitalization Effects on the Usability of Lean Tools. Acta Logist. 2019, 6, 9–13. [Google Scholar] [CrossRef]
  46. Ozkaya, I. What Is Really Different in Engineering AI-Enabled Systems? IEEE Softw. 2020, 37, 3–6. [Google Scholar] [CrossRef]
  47. Payne, A.; Frow, P.; Eggert, A. The Customer Value Proposition: Evolution, Development, and Application in Marketing. J. Acad. Mark. Sci. 2017, 45, 467–489. [Google Scholar] [CrossRef]
  48. Osterwalder, A.; Pigneur, Y.; Bernardakēs, G.N.; Smith, A.; Papadakos, T. Value Proposition Design: Entwickeln Sie Produkte und Services, die Ihre Kunden Wirklich Wollen. Beginnen Sie Mit; Campus Verlag: New York, NY, USA, 2015; ISBN 978-3-593-50331-8. [Google Scholar]
  49. Uraikul, V.; Chan, C.W.; Tontiwachwuthikul, P. Artificial Intelligence for Monitoring and Supervisory Control of Process Systems. Eng. Appl. Artif. Intell. 2007, 20, 115–131. [Google Scholar] [CrossRef]
  50. Butte, S.; Prashanth, A.R.; Patil, S. Machine Learning Based Predictive Maintenance Strategy: A Super Learning Approach with Deep Neural Networks. In Proceedings of the 2018 IEEE Workshop on Microelectronics and Electron Devices (WMED), Boise, ID, USA, 20 April 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–5. [Google Scholar]
  51. Kaur, K.; Selway, M.; Grossmann, G.; Stumptner, M.; Johnston, A. Towards an Open-Standards Based Framework for Achieving Condition-Based Predictive Maintenance. In Proceedings of the 8th International Conference on the Internet of Things, Santa Barbara, CA, USA, 15 October 2018; ACM: Rochester, NY, USA, 2018; pp. 1–8. [Google Scholar]
  52. Vogel-Heuser, B.; Ribeiro, L. Bringing Automated Intelligence to Cyber-Physical Production Systems in Factory Automation. In Proceedings of the 2018 IEEE 14th International Conference on Automation Science and Engineering (CASE), Munich, Germany, 20–24 August 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 347–352. [Google Scholar]
  53. Liu, C.; Jiang, P.; Jiang, W. Web-Based Digital Twin Modeling and Remote Control of Cyber-Physical Production Systems. Robot. Comput.-Integr. Manuf. 2020, 64, 101956. [Google Scholar] [CrossRef]
  54. Venkatasubramanian, V. The Promise of Artificial Intelligence in Chemical Engineering: Is It Here, Finally? AIChE J. 2019, 65, 466–478. [Google Scholar] [CrossRef] [Green Version]
  55. Dittert, M.; Härting, R.-C.; Reichstein, C.; Bayer, C. A Data Analytics Framework for Business in Small and Medium-Sized Organizations. In Intelligent Decision Technologies 2017; Czarnowski, I., Howlett, R.J., Jain, L.C., Eds.; Smart Innovation, Systems and Technologies; Springer International Publishing: Cham, Switzerland, 2018; Volume 73, pp. 169–181. ISBN 978-3-319-59423-1. [Google Scholar]
  56. Rautenbach, S.; de Kock, I.; Grobler, J. Data Science for Small and Medium-Sized Enterprises: A Structured Literature Review. S. Afr. J. Ind. Eng. 2022, 32, 83–95. [Google Scholar] [CrossRef]
  57. Schmidt, R.; Möhring, M.; Härting, R.-C.; Reichstein, C.; Neumaier, P.; Jozinović, P. Industry 4.0-Potentials for Creating Smart Products: Empirical Research Results. In Proceedings of the International Conference on Business Information Systems, Poznań, Poland, 24–26 June 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 16–27. [Google Scholar]
  58. Gronau, N.; Grum, M.; Bender, B. Determining the Optimal Level of Autonomy in Cyber-Physical Production Systems. In Proceedings of the 2016 IEEE 14th International Conference on Industrial Informatics (INDIN), Poitiers, France, 19–21 July 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1293–1299. [Google Scholar]
  59. Müller-Markus, C.; Küstner, V.; Eßmann, A. Acatech Künstliche Intelligenz in Der Industrie; Acatech: München, Germany, 2020; ISBN 2625-9605. [Google Scholar]
  60. Berg, A. Industrie 4.0—Jetzt Mit KI; Bitkom: Hannover, Germany, 2019. [Google Scholar]
  61. Rohleder, B. Industrie 4.0—So Digital Sind Deutschlands Fabriken; Bitkom: Hannover, Germany, 2021. [Google Scholar]
  62. Hutapea, L.; Malanowski, N. Potenziale und Hindernisse bei der Einführung digitaler Technik in der kunststoffverarbeitenden Industrie; Stiftung Arbeit und Umwelt der IG BCE: Berlin, Germany, 2019. [Google Scholar]
  63. Röhl, K.-H.; Bolwin, L.; Hüttl, P. Datenwirtschaft in Deutschland; BDI Bundesverbands der Deutschen Industrie e.V.: Berlin, Germany, 2021. [Google Scholar]
  64. Bühler, J.; Fliehe, M.; Shahd, M. Künstliche Intelligenz in Unternehmen; Verband der TÜV e. V.: Berlin, Germany, 2021. [Google Scholar]
  65. Seifert, I.; Bürger, M.; Wangler, L.; Christmann-Budian, S.; Rohde, M.; Gabriel, P.; Zinke, G. Potenziale Der Künstlichen Intelligenz Im Produzierenden Gewerbe in Deutschland; iit-Institut für Innovation und Technik in der VDI/VDE Innovation + Technik GmbH: Berlin, Germany, 2018. [Google Scholar]
  66. Weber, K.; Bertschek, I.; Ohnemus, J.; Ebert, M. Monitoring-Report Wirtschaft DIGITAL 2018; Bundesministerium für Wirtschaft und Energie (BMWi): Berlin, Germany, 2018. [Google Scholar]
  67. Sames, G.; Diener, A. Stand der Digitalisierung von Geschäftsprozessen zu Industrie 4.0 im Mittelstand—Ergebnisse einer Umfrage bei Unternehmen. Master’s Thesis, Technische Hochschule Mittelhessen, Gießen, Germany, 2018. [Google Scholar]
  68. Lundborg, M.; Gull, I. Künstliche Intelligenz Im Mittelstand; Begleitforschung Mittelstand-Digital WIK-Consult GmbH: Berlin, Germany, 2021. [Google Scholar]
  69. Lundborg, M.; Märkel, C. Künstliche Intelligenz Im Mittelstand; Begleitforschung Mittelstand-Digital WIK GmbH: Berlin, Germany, 2019. [Google Scholar]
  70. Metternich, J.; Biegel, T.; Bretones Cassoli, B.; Hoffmann, F.; Jourdan, N.; Rosemeyer, J.; Stanula, P.; Ziegenbein, A. Künstliche Intelligenz zur Umsetzung von Industrie 4.0 im Mittelstand: Leitfaden zur Expertise des Forschungsbeirats der Plattform Industrie 4.0; Acatech-Deutsche Akademie der Technikwissenschaften: München, Germany, 2021. [Google Scholar]
  71. Perez, F.; Irisarri, E.; Orive, D.; Marcos, M.; Estevez, E. A CPPS Architecture Approach for Industry 4.0. In Proceedings of the 2015 IEEE 20th Conference on Emerging Technologies & Factory Automation (ETFA), Luxembourg, 8–11 September 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 1–4. [Google Scholar]
  72. Ding, K.; Chan, F.T.S.; Zhang, X.; Zhou, G.; Zhang, F. Defining a Digital Twin-Based Cyber-Physical Production System for Autonomous Manufacturing in Smart Shop Floors. Int. J. Prod. Res. 2019, 57, 6315–6334. [Google Scholar] [CrossRef] [Green Version]
  73. Ghobakhloo, M. Determinants of Information and Digital Technology Implementation for Smart Manufacturing. Int. J. Prod. Res. 2020, 58, 2384–2405. [Google Scholar] [CrossRef]
  74. Lee, J.; Singh, J.; Azamfar, M. Industrial Artificial Intelligence. arXiv 2019, arXiv:1908.02150. [Google Scholar]
  75. Brecher, C.; Spierling, R.; Fey, M.; Neus, S. Direct Measurement of Thermo-Elastic Errors of a Machine Tool. CIRP Ann. 2021, 70, 333–336. [Google Scholar] [CrossRef]
  76. Kannengiesser, U.; Krenn, F.; Stary, C. A Behaviour-Driven Development Approach for Cyber-Physical Production Systems. In Proceedings of the 2020 IEEE Conference on Industrial Cyberphysical Systems (ICPS), Tampere, Finland, 10 June 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 179–184. [Google Scholar]
  77. Iannino, V.; Colla, V.; Denker, J.; Göttsche, M. A CPS-Based Simulation Platform for Long Production Factories. Metals 2019, 9, 1025. [Google Scholar] [CrossRef] [Green Version]
  78. Park, H.S.; Nguyen, D.S.; Le-Hong, T.; Van Tran, X. Machine Learning-Based Optimization of Process Parameters in Selective Laser Melting for Biomedical Applications. J. Intell. Manuf. 2022, 33, 1843–1858. [Google Scholar] [CrossRef]
  79. Qiao, F.; Liu, J.; Ma, Y. Industrial Big-Data-Driven and CPS-Based Adaptive Production Scheduling for Smart Manufacturing. Int. J. Prod. Res. 2021, 59, 7139–7159. [Google Scholar] [CrossRef]
  80. Nagy, G.; Illés, B.; Bányai, Á. Impact of Industry 4.0 on Production Logistics. IOP Conf. Ser. Mater. Sci. Eng. 2018, 448, 012013. [Google Scholar] [CrossRef]
  81. Schuhmacher, J.; Hummel, V. Decentralized Control of Logistic Processes in Cyber-Physical Production Systems at the Example of ESB Logistics Learning Factory. Procedia CIRP 2016, 54, 19–24. [Google Scholar] [CrossRef] [Green Version]
  82. Klötzer, C.; Pflaum, A. Toward the Development of a Maturity Model for Digitalization within the Manufacturing Industry’s Supply Chain. In Proceedings of the 50th Hawaii International Conference on System Sciences, Hilton Waikoloa Village, HI, USA, 4–7 January 2017. [Google Scholar]
  83. Park, K.T.; Son, Y.H.; Noh, S.D. The Architectural Framework of a Cyber Physical Logistics System for Digital-Twin-Based Supply Chain Control. Int. J. Prod. Res. 2021, 59, 5721–5742. [Google Scholar] [CrossRef]
  84. Küfner, T.; Schönig, S.; Jasinski, R.; Ermer, A. Vertical Data Continuity with Lean Edge Analytics for Industry 4.0 Production. Comput. Ind. 2021, 125, 103389. [Google Scholar] [CrossRef]
  85. Yin, S.; Bao, J.; Zhang, J.; Li, J.; Wang, J.; Huang, X. Real-Time Task Processing for Spinning Cyber-Physical Production Systems Based on Edge Computing. J. Intell. Manuf. 2020, 31, 2069–2087. [Google Scholar] [CrossRef]
  86. Aranburu, A.; Justel, D.; Contero, M.; Camba, J.D. Geometric Variability in Parametric 3D Models: Implications for Engineering Design. Procedia CIRP 2022, 109, 383–388. [Google Scholar] [CrossRef]
  87. Bhad, P.U.; Buktar, R.B. Integrated Approach for an Artificial Intelligence-Based Generative Product Design. Int. J. Des. Eng. 2021, 10, 110–135. [Google Scholar]
  88. Boos, E.; Schwarzenberger, M.; Jaretzki, M.; Ihlenfeldt, S. Melt Pool Monitoring Using Fuzzy Based Anomaly Detection in Laser Beam Melting. In Proceedings of the Metal Additive Manufacturing Conference, Örebro, Sweden, 25–27 November 2019; pp. 1–11. [Google Scholar]
  89. Lee, J.; Noh, S.; Kim, H.-J.; Kang, Y.-S. Implementation of Cyber-Physical Production Systems for Quality Prediction and Operation Control in Metal Casting. Sensors 2018, 18, 1428. [Google Scholar] [CrossRef] [Green Version]
  90. Allen, M.K. The Development of an Artificial Intelligence System for Inventory Management Using Multiple Experts; The Ohio State University: Columbus, OH, USA, 1986. [Google Scholar]
  91. Preil, D.; Krapp, M. Artificial Intelligence-Based Inventory Management: A Monte Carlo Tree Search Approach. Ann. Oper. Res. 2022, 308, 415–439. [Google Scholar] [CrossRef]
  92. Amirkolaii, K.N.; Baboli, A.; Shahzad, M.K.; Tonadre, R. Demand Forecasting for Irregular Demands in Business Aircraft Spare Parts Supply Chains by Using Artificial Intelligence (AI). IFAC-Pap. 2017, 50, 15221–15226. [Google Scholar] [CrossRef]
  93. Praveen, U.; Farnaz, G.; Hatim, G. Inventory Management and Cost Reduction of Supply Chain Processes Using AI Based Time-Series Forecasting and ANN Modeling. Procedia Manuf. 2019, 38, 256–263. [Google Scholar] [CrossRef]
  94. Xiao, H.; Muthu, B.; Kadry, S.N. Artificial Intelligence with Robotics for Advanced Manufacturing Industry Using Robot-Assisted Mixed-Integer Programming Model. Intell. Serv. Robot. 2020. [Google Scholar] [CrossRef]
  95. Plappert, S.; Gembarski, P.C.; Lachmayer, R. The Use of Knowledge-Based Engineering Systems and Artificial Intelligence in Product Development: A Snapshot. In Proceedings of the International Conference on Information Systems Architecture and Technology, Wrocław, Poland, 15–17 September 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 62–73. [Google Scholar]
  96. Simpson, T.W. Product Platform Design and Customization: Status and Promise. Artif. Intell. Eng. Des. Anal. Manuf. 2004, 18, 3–20. [Google Scholar] [CrossRef] [Green Version]
  97. Tao, F.; Zhang, M. Digital Twin Shop-Floor: A New Shop-Floor Paradigm Towards Smart Manufacturing. IEEE Access 2017, 5, 20418–20427. [Google Scholar] [CrossRef]
  98. Wei, J.; Chu, X.; Sun, X.-Y.; Xu, K.; Deng, H.-X.; Chen, J.; Wei, Z.; Lei, M. Machine Learning in Materials Science. InfoMat 2019, 1, 338–358. [Google Scholar] [CrossRef] [Green Version]
  99. de Jong, M.; Chen, W.; Notestine, R.; Persson, K.; Ceder, G.; Jain, A.; Asta, M.; Gamst, A. A Statistical Learning Framework for Materials Science: Application to Elastic Moduli of k-Nary Inorganic Polycrystalline Compounds. Sci. Rep. 2016, 6, 34256. [Google Scholar] [CrossRef] [PubMed]
  100. Yuan, R.; Liu, Z.; Balachandran, P.V.; Xue, D.; Zhou, Y.; Ding, X.; Sun, J.; Xue, D.; Lookman, T. Accelerated Discovery of Large Electrostrains in BaTiO3-Based Piezoelectrics Using Active Learning. Adv. Mater. 2018, 30, 1702884. [Google Scholar] [CrossRef] [PubMed]
  101. Schmidt, J.; Marques, M.R.G.; Botti, S.; Marques, M.A.L. Recent Advances and Applications of Machine Learning in Solid-State Materials Science. Npj Comput. Mater. 2019, 5, 83. [Google Scholar] [CrossRef] [Green Version]
  102. Stein, H.S.; Gregoire, J.M. Progress and Prospects for Accelerating Materials Science with Automated and Autonomous Workflows. Chem. Sci. 2019, 10, 9640–9649. [Google Scholar] [CrossRef] [Green Version]
  103. de Almeida, A.F.; Moreira, R.; Rodrigues, T. Synthetic Organic Chemistry Driven by Artificial Intelligence. Nat. Rev. Chem. 2019, 3, 589–604. [Google Scholar] [CrossRef]
  104. Stöcker, J.; Fuchs, A.; Leichsenring, F.; Kaliske, M. A Novel Self-Adversarial Training Scheme for Enhanced Robustness of Inelastic Constitutive Descriptions by Neural Networks. Comput. Struct. 2022, 265, 106774. [Google Scholar] [CrossRef]
  105. Fuchs, A. On the Numerical Multiscale Analysis of Mineral Based Composites Using Machine Learning. Ph.D. Thesis, Institute for Structural Analysis, Technische Universität Dresden, Dresden, Germany, 2021. [Google Scholar]
  106. Guo, S.; Yu, J.; Liu, X.; Wang, C.; Jiang, Q. A Predicting Model for Properties of Steel Using the Industrial Big Data Based on Machine Learning. Comput. Mater. Sci. 2019, 160, 95–104. [Google Scholar] [CrossRef]
  107. Su, J.; Li, H.; Liu, P.; Dong, Q.; Li, A. Aging Process Optimization for a Copper Alloy Considering Hardness and Electrical Conductivity. Comput. Mater. Sci. 2007, 38, 697–701. [Google Scholar] [CrossRef]
  108. Xiong, J.; Zhang, G.; Hu, J.; Wu, L. Bead Geometry Prediction for Robotic GMAW-Based Rapid Manufacturing through a Neural Network and a Second-Order Regression Analysis. J. Intell. Manuf. 2014, 25, 157–163. [Google Scholar] [CrossRef]
  109. Zhang, Z.; Friedrich, K. Artificial Neural Networks Applied to Polymer Composites: A Review. Compos. Sci. Technol. 2003, 63, 2029–2044. [Google Scholar] [CrossRef]
  110. Stolbov, V.Y.; Gitman, M.B.; Sharybin, S.I. Application of Intelligent Technology in Functional Materials Quality Control. Mater. Sci. Forum 2016, 870, 717–724. [Google Scholar] [CrossRef]
  111. Conrad, F.; Mälzer, M.; Schwarzenberger, M.; Wiemer, H.; Ihlenfeldt, S. Benchmarking AutoML for Regression Tasks on Small Tabular Data in Materials Design. Sci. Rep. 2022, 12, 19350. [Google Scholar] [CrossRef]
  112. Zschech, P.; Horn, R.; Höschele, D.; Janiesch, C.; Heinrich, K. Intelligent User Assistance for Automated Data Mining Method Selection. Bus. Inf. Syst. Eng. 2020, 62, 227–247. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Components of domain-specific data-analysis [22].
Figure 2. Relation of relevant user roles and respective AI application goals.
Figure 3. Layers of AI model comprehensiveness.
Figure 4. Anatomy of the usable artificial intelligence (UAI) paradigm.
Table 1. Domain-specific user roles.

User | Artifact | Description
Researcher in Domain | Cyber–physical system | Uses AI-based methods to enhance system
Planner in Domain | Data-driven process within cyber-physical system | Uses enhanced system and AI-based methods to control process
Practitioner in Domain | Outcome of data-driven process | Generates process outcome
Table 2. Definition of usability characteristics.

“Why...?” Usability provides: | Effectiveness | Efficiency | Satisfaction
Intuition | “Does it work at all?” | “Is it worth its effort?” | “Does it provide enjoyable achievements?”
Description | Ability to achieve a particular outcome | Ratio of input vs. output | Good feeling when achieving something
ISO 9241-11 (2018) | “Accuracy and completeness with which users achieve specified goals” | “Resources used in relation to the results achieved” | “Extent to which the user’s physical, cognitive and emotional responses that result from the use of a system, product or service meet the user’s needs and expectations”
Table 3. Definition of the term usability (“What...?”).

Usability | “Extent to which a system, product or service can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use” [ISO 9241-11(2018), 3.1.1]
Usable | Allowing an intended use
Use | Do something with a machine, a method, an object, etc., for a particular purpose
Table 4. Description of attributes of the UAI paradigm.

Usability = ∑(…)

Usefulness | “Can it be applied right away?”
- Comprehensibility | Easy to understand
  - Explainability | Ability to explicate the working principles of solution approaches/algorithms
  - Interpretability | Ability to understand the results of applied methods within the specific application context
- User-friendliness | Perceived ease of use, attributes [29]
  - Learnability | How easy is it for users to accomplish basic tasks the first time they encounter the design?
  - Task efficiency | Once users have learned the design, how quickly can they perform tasks?
  - Memorability | When users return to the design after a period of not using it, how easily can they re-establish proficiency?
  - Error proneness | How many errors do users make, how severe are these errors, and how easily can they recover?
  - Pleasantness | How pleasant is it to use the design?
- Accessibility | Are tools, methods and instructions directly available for use?
Suitability | “Does it fit my purpose?”
- Aptitude
  - Domain-specificity | Does it work within my context of use?
  - Use-case specificity | Does it fit my field of application?
- Transferability | Can the solution set-up be re-used or adapted to other contexts of use?
Integrability | “Does it work with what we are working with around here?”
- Interfaces | Smooth interaction with other internal systems
- Software | Solution compatible with on-site software
- Hardware | Solution compatible with on-site hardware
Interoperability | “Does it work with what we need to work with during cooperation?”
- Socio-economy | Smooth interaction with external subjects and organizations
- Technical systems | Smooth interaction with embedded and external systems
- Regulations | Legal, organizational or contractual requirements
Table 5. Overview of surveys on barriers to digitization and the use of AI technologies in industrial production. Surveys considered: Bitkom [60,61], IG BCE [62], Acatech [59], BDI [63], THM [67], TÜV [64], BMWi [65], BMWi [66], Mittelstand-Digital [68,69] and Metternich et al. [70]. The number in parentheses indicates in how many of the ten surveys each barrier is named.

Strategy and Management
- Shortage of skilled employees (10)
- High investment costs (9)
- Lack of IT affinity and IT competence among employees (7)
- No meaningful areas of application (4)
- Uncertainty regarding the benefits of the application (4)
- No clear idea of the “appropriate” value of the data (2)
- Lack of trust in AI-driven decision making (1)
- Lack of acceptance by customers (1)
- Necessary time requirement to implement (1)
- Lack of cooperation partners for implementation (1)
- Lack of integration into the corporate strategy (2)
- Missing business models (2)
- Inadequate digital infrastructure (2)
- No clear responsibilities in the company (1)
- Adaptation difficulties in organization of operations and work (1)
- Lack of contribution options for employees (1)
- Missing marketplaces (1)
- Lack of digital maturity within the company (1)
- Ethical and moral considerations (1)

Technology, Production and R&D
- No suitable data sets available (4)
- Lack of acceptance of AI applications in the workforce (4)
- Data preparation costs time and resources (3)
- Complexity of the topic (1)
- Lack of data quality (1)
- Lack of knowledge of best-practice examples (1)

Functionality and Applicability
- Susceptibility of the systems to failures (1)
- Technical barriers (1)
- Lack of possibilities for the technical protection of the data (1)
- No high-performance broadband network (2)
- Inadequate digital infrastructure (2)

Legislative
- Data security requirements (8)
- Data privacy requirements (5)
- Lack of reliable standards (5)
- Lack of legal framework (3)
- Competition or antitrust hurdles (1)
- Fear of legal problems (1)
- Communication and cooperation with data protection authorities (1)
- AI regulation (1)
Table 6. Typical user roles in the production–engineering domain.

User: Engineer designing cyber–physical production systems (CPPS)
Target artifacts: Cyber–physical production system (CPPS)
Use-case examples: Manufacturing process automation [52,71]; AI-powered digital twin [53,72]; AI-based connected factory [73,74]; Data-based correction of system behavior [75,76]

User: Engineer optimizing production processes
Target artifacts: Data-driven production process; embedded system
Use-case examples: Predictive maintenance [50,51]; Process optimization [77,78,79]; Logistics optimization [80,81]; Supply chain management [82,83]; Edge analytics [84,85]

User: Engineer running production processes
Target artifacts: Produced product; raw materials, semi-finished products and components
Use-case examples: AI-based product development and generative design [86,87]; Quality assurance [88,89]; Inventory management [90,91]; Forecast supply of raw materials and parts [92,93]; Robotics [94]; Design customization [95,96]; Shop floor performance optimization [97]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
