Article

Artificial Intelligence Synergetic Opportunities in Services: Conversational Systems Perspective

Shai Rozenes and Yuval Cohen
1 Department of Industrial Engineering and Technology Management, Holon Institute of Technology, Holon 5810201, Israel
2 Department of Industrial Engineering, Afeka Tel-Aviv College of Engineering, Tel Aviv 6998812, Israel
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(16), 8363; https://doi.org/10.3390/app12168363
Submission received: 24 July 2022 / Revised: 9 August 2022 / Accepted: 18 August 2022 / Published: 21 August 2022
(This article belongs to the Special Issue Smart Services: Artificial Intelligence in Service Systems)

Abstract

The importance of this paper lies in its discovery of the unused synergetic potential of integrating several AI techniques into an orchestrated effort to improve service. Special emphasis is given to the conversational capabilities of AI systems. The paper shows that the literature on the use of AI in service is divided into independent knowledge domains (silos) that are related either to the technology under consideration or to a small group of technologies serving a certain application; it then discusses the reasons for the isolation of these silos and reveals the barriers and traps that hinder their integration. Two case studies of service systems are presented to illustrate the importance of synergy, with special focus on their conversational components: the first case presents an application with high potential for integrating new AI technologies into its AI portfolio, while the second illustrates the advantages of a mature application that has already integrated many technologies into its AI portfolio. Finally, the paper discusses the two case studies and presents inclusion relationships between AI capabilities to facilitate generating a roadmap for extending AI capabilities with synergetic opportunities.

1. Introduction

This paper uncovers synergetic opportunities for integrating bundles of several different AI specializations in various services, with a special focus on conversational systems. For this purpose, we use an extensive literature review and two case studies; in both case studies, special attention is given to the analysis of conversational capabilities. The case studies are discussed, and future research opportunities are extracted and listed in the conclusion section.
The idea of using Artificial Intelligence (AI) for service is not new [1]; however, there is considerable overlap and blurring between AI implementations in services and in industry. For a coherent analysis, we find it useful to organize AI techniques, tools, methods, and implementations into a hierarchy of three distinct levels: (1) general AI techniques, (2) domain-specialization AI techniques, and (3) application-tailored AI solutions. Each of these categories is briefly discussed next and illustrated in Figure 1.
  • The fundamental AI techniques can serve almost any purpose and any application specialty. Some examples of pure AI techniques are pattern recognition, data mining (DM), machine learning (ML), deep learning (DL), rule-based reasoning, fuzzy logic, and expert systems.
  • The domain-specialization AI techniques generate a specific domain specialization (for example, natural language processing (NLP)), mostly by adjusting and improving one or more of the pure AI techniques (e.g., NLP may use pattern recognition, machine learning, or deep learning); these domain specializations may also integrate non-AI techniques (for example, statistics and statistical inference).
  • The application-tailored bundles are solutions for specific needs based on use cases.
In the AI toolbox, fundamental AI techniques are the prevalent, familiar tools, and the decision to use any one of them independently is mostly unrelated to the synergies we shall discuss. The domain-specialization techniques, on the other hand, have broad potential for synergies, which stem both from integration among the domain-specialization techniques themselves and from adding new fundamental techniques to a domain specialization.
AI domain specialization has developed in many directions: natural language processing (NLP), case-based reasoning (CBR), speech and tone recognition, face and emotion recognition, gesture recognition, collaborative systems, and autonomous systems, to name a few [2]. Each direction brought with it a specialization and specialists who focus on their field of specialty without mastering other AI fields [3]. For example, it is extremely rare to find affective computing experts who also specialize in autonomous systems; however, some AI fields are closer than others to certain other AI fields. To visualize these relationships in the service context, we map them to clusters, as shown in Figure 2.
Figure 2 groups the AI domain specializations (level II in Figure 1) into clusters of major AI specializations and shows their relationships. In the context of service provision, the main identified AI clusters are as follows:
  • Speech-related cluster—AI techniques that focus on speech analysis and generation.
  • Text analysis cluster—AI techniques that focus on analyzing different types of text.
  • Emotional recognition cluster—AI techniques that focus on emotion recognition and analysis.
  • Computer vision cluster—AI techniques that focus on various methods of analyzing pictures, photos, and videos.
  • Collaborative cluster—AI techniques that focus on collaboration between human operators and digital machines.
  • Awareness cluster—AI techniques that focus on various awareness capabilities such as self-awareness and context awareness.
A remarkable phenomenon is the use of gesture recognition in three different clusters: the Emotion recognition cluster, the Computer vision cluster, and the Collaborative cluster. There are several such overlapping cases, where a field belongs to more than one cluster. The relationships in Figure 2 are reflected both by adjacency and by common AI fields. For example, tone recognition is part of both the Speech related cluster and the Emotion recognition cluster. Another example is the adjacency of the Text analysis cluster to the Speech related cluster, which points to their relative closeness. A minimal sketch of this cluster map as plain data follows.
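The following Python sketch is our illustration only: it encodes a cluster map in the spirit of Figure 2 as sets and computes which fields are shared across clusters. Cluster memberships beyond those named in the text (gesture recognition, tone recognition) are assumptions.

```python
# Minimal sketch: Figure 2's cluster map as plain data.
# Memberships beyond those named in the text are illustrative assumptions.
from itertools import combinations

clusters = {
    "speech": {"speech recognition", "text to speech", "tone recognition"},
    "text analysis": {"NLP", "text mining", "sentiment analysis"},
    "emotion recognition": {"tone recognition", "facial expression analysis",
                            "gesture recognition"},
    "computer vision": {"face recognition", "visual pattern recognition",
                        "gesture recognition"},
    "collaborative": {"gesture recognition", "cobot coordination"},
    "awareness": {"self-awareness", "context awareness"},
}

# Fields appearing in more than one cluster indicate closeness between clusters.
shared = {}
for (a, fields_a), (b, fields_b) in combinations(clusters.items(), 2):
    for field in fields_a & fields_b:
        shared.setdefault(field, set()).update({a, b})

for field, members in sorted(shared.items()):
    print(f"{field} is shared by: {sorted(members)}")
# e.g., "gesture recognition" appears in three clusters, as noted above.
```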
The use of AI in service is thus divided into narrow specializations. The AI field is characterized by the narrow specialization of AI experts [3,4]. Experts have a deep understanding of their narrow field of expertise but know very little about other AI fields; this phenomenon is also known as sectorial silos [5]. Gupta et al. [3] and Meunier and Bellais [5] supported the idea that these sectorial silos are limiting and that a wider approach should be adopted.
Several papers deal with information silos, their drawbacks, and the advantages of breaking them down, for example [6,7,8]. Gupta et al. [3] state that there is a common need to go beyond the development of AI in sectorial silos so as to understand the impacts AI might have across society. Miriyev and Kovač [4] state that "An education methodology is needed for researchers to develop a combination of skills in physical artificial intelligence."
Martínez-Plumed et al. [9], dealing with the futures of artificial intelligence technologies, developed a technology readiness levels (TRL) methodology for evaluating the maturity level of each AI technique.
The rest of this paper is structured as follows: the introduction is followed by a review of the related literature; the next section describes a case study on the Robo-chef, illustrating a case where a large potential remains to be realized; a second case study follows on the mature technology of chatbots, illustrating the advantage of integrating numerous AI fields across several clusters into an advantageous integrative system; a discussion section follows; and the last section presents a summary and conclusions, with some insights for future research.

2. Literature Review

This section reviews the literature related to AI deployment in services, the papers that advocate the integration of several AI techniques or technologies in relation to services, and the literature related to the case studies.

2.1. AI Deployment in Services

Artificial intelligence (AI) is increasingly reshaping services by performing various tasks, constituting a major source of innovation [10]. The deployment of AI in services is typically attributed to encounters with customers. For example, Li et al. [11] identified four modes of AI technology-based service encounters: AI-supplemented, AI-generated, AI-mediated, and AI-facilitated encounters. Smart technologies and connected objects are rapidly changing the organizational frontline, yet our understanding of how these technologies infuse service encounters remains limited [12]. Robinson et al. [13] categorized several types of service, including interspecific service (AI-to-human) and inter-AI service (AI-to-AI), and introduced the concept of counterfeit service; they also proposed a research agenda focused on the implementation of AI in dyadic service exchanges. AI is frequently used for integrating robotics into services [14,15]. Belk [16] examined ethical issues related to integrating AI and robotics in services.

2.2. Advocating Integration of Several AI Techniques or Technologies in Services

Numerous papers have advocated the integration of several AI techniques, and in all of them the integration yields improved synergistic results.
Some examples are as follows: Benbya et al. [17], in their editorial article, stated that "artificial intelligence (AI) technologies offer novel, distinctive opportunities and pose new significant challenges to organizations that set them apart from other forms of digital technologies". Chen and Decary [18] provide a guide to understanding the fundamentals of major AI technologies, such as machine learning, natural language processing, AI voice assistants, and medical robotics, as well as their proper use in healthcare; they also provide practical recommendations to help decision-makers develop an AI strategy that can support their digital healthcare transformation.
Androutsopoulou et al. [19] addressed the transformation of communication between citizens and government via chatbots guided by AI; they advocated an integration of natural language processing, machine learning, and data mining technologies that leverages existing data of various forms (with the possibility of using expert systems).
Elfatih et al. [20] advocated the integration of different AI technologies, such as machine learning and swarm intelligence algorithms, in automotive 5G communication; they also advocated using Google's open-source Kubernetes project, which enables sharing machines between applications. Sharing machines requires ensuring that two applications do not try to use the same ports [21].
Vocke et al. [22] advocated integrating the current technological solutions available in the field of artificial intelligence; as a next step, they discussed integrating AI technologies with current methods for optimizing the innovation process.

2.3. Literature as a Background to the Robo-Chef Case Study

A search for robotic chefs on Google Scholar found only very few relevant papers. For example, searching for "robo-chef" yields 19 results (June 2022), of which two are citations only, two are not in English, and five were published before 2017. Therefore, this review includes some background work on robots in somewhat similar environments.
Rosete et al. [23] review the literature on service robots in the hospitality industry. Afsheen et al. [24] present a self-sufficient robo-waiter with some navigation capability; however, communication is done only by the user on a keyboard attached to the robo-waiter and sent wirelessly to the kitchen; this is a clear case of limited use of technology: no NLP, no speech recognition, and no vocal communication.
Al-Sayed [25] mentions robo-chefs as part of the developing technologies in the future kitchen. Sener and Yao [26] discuss the theoretical learning process of a robo-chef. Garcia-Haro et al. [27] discuss service robots in catering applications and mention robo-chefs as one of the robot types. Zhu and Chang [28] dealt with the motions of the robotic chef and their effect on food quality. Bollini et al. [29] presented a robot that identifies ingredients placed on the table and operates according to a set of natural language instructions.

2.4. Literature for the Chatbot Case Study

The research on chatbots has proliferated in recent years, as the usage of chatbots has dramatically increased and almost become ubiquitous [30,31]. The large majority of chatbots are confined to written text messages, as text answers the vast range of service issues raised by customers [32,33]; this reliance on text has spawned research and implementations of natural language processing (NLP) and text-based sentiment analysis [34].
Okuda and Shoda [35] describe an AI-based chatbot service for the financial industry; they apply AI to generate a service thesaurus by applying ML to the text in FAQs and call-center response manuals.
Sidaoui et al. [36] evaluated customer experience during the conversation with a chatbot.
Chen et al. [37] classify and measure the service quality of AI chatbots in frontline service; they developed seven dimensions for service-quality measurement, described as follows (a sketch of a simple scoring scheme over these dimensions appears after the list):
  • Understanding the explicit and implicit meaning, and the emotional implication of the text;
  • Close human-AI collaboration;
  • Human-like behavior (including empathy, and social cues);
  • Continuous improvement (including learning and updates);
  • Personalization;
  • Cultural adaptation;
  • Responsiveness and simplicity.
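As an illustration only, and not part of the instrument of Chen et al. [37], the following minimal Python sketch treats the seven dimensions as a weighted checklist; the short dimension names, the 0-5 scale, and the equal default weights are our assumptions.

```python
# Minimal sketch: a weighted service-quality score over the seven dimensions
# of Chen et al. [37]. Names, scale, and equal weights are illustrative.
DIMENSIONS = [
    "understanding",        # explicit/implicit meaning, emotional implication
    "collaboration",        # close human-AI collaboration
    "human_likeness",       # empathy and social cues
    "improvement",          # continuous learning and updates
    "personalization",
    "cultural_adaptation",
    "responsiveness",       # responsiveness and simplicity
]

def service_quality(scores, weights=None):
    """Aggregate per-dimension scores (0-5 scale assumed) into one index."""
    weights = weights or {d: 1.0 for d in DIMENSIONS}
    total_weight = sum(weights[d] for d in DIMENSIONS)
    return sum(weights[d] * scores[d] for d in DIMENSIONS) / total_weight

print(service_quality({d: 4.0 for d in DIMENSIONS}))  # -> 4.0
```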
Bulla et al. [38] review a plethora of issues and applications of AI-based medical assistant chatbots; they present current technology applications and identify issues to be amended in future years. Varitimiadis et al. [39] presented a distributed and collaborative multi-chatbot system for guidance in museums.
Borsci et al. [40] developed a chatbot usability scale for interaction with AI-based chatbots; they developed two measuring tools: (i) a diagnostic tool in the form of a checklist, and (ii) a 15-item questionnaire with high reliability.
Erickson and Kim [41] checked the compatibility of chatbots with knowledge management systems and concluded that while the implementation of this combination is rare, the merits of this integration are highly attractive. Chao et al. [42] stated that the bottleneck of service provision has shifted from system development to the establishment of an in-depth domain knowledge base. Mydyti and Kadriu [43] studied how chatbots can be implemented in the domains of banking, e-commerce, tourism, and call centers, and discussed the benefits and challenges of chatbots in driving the digital transformation of businesses.
Chatbots are not only a low-cost solution for helping burnt-out human service providers; their scalability (the ability to increase answering capacity) and autonomous operation also increase service availability and performance [44].

3. Case Study 1: Robo-Chef

The robotic chef is an intelligent robotic arm that is able to cook food dishes in an ordinary home environment or in restaurants. The robotic arm recognizes the utensils and ingredients required for a particular recipe [45]. Robotic cooking is a difficult task, as the translation from raw recipe instructions to robotic actions involving precise motions and tool handling is challenging [46]. Robotic chefs are starting to replace human chefs in the restaurant industry [28].
The aim of this case study is to point to the potential of integrating various technologies into the Robo-chef functionality. In particular, we highlight the potential of conversational capabilities, which are currently missing from most Robo-chefs. Such integration would bring the Robo-chef closer to the look and feel of human beings, and its behavior and reactions could become even more effective, as advanced robotic and navigation abilities combined with access to vast knowledge, deep learning, and other artificial intelligence technologies could enhance its ability to converse with and guide the user.
The literature about technology integration in robo-chefs is very limited and concentrates on a small number of applications. For example, Zhu and Chang [28] examined the effect of robotic-chef anthropomorphism on food quality prediction. Bollini et al. [29] presented a robot that is initialized with a set of ingredients placed on the table and a set of natural language instructions describing how to use these ingredients in cooking dishes such as cookies, a salad, or meatloaf.
Park et al. [47] describe five different robot chef systems, none of which has speech recognition, voice recognition, CBR, gesture recognition, machine learning, or cobot capabilities; they mention implicit sensing capabilities, but it is not clear how these were achieved.
Li et al. [11] list the following AI characteristics for service robots: personification, autonomy, deep learning, complex interaction, people orientation, understanding of aesthetics, and emotional abilities; however, except for complex interaction, none of these abilities are mentioned in the literature about Robo-chefs.
We did not find evidence of wide adoption of robot chefs in the food industry; moreover, we did not find evidence of wide usage of the following technologies with Robo-chefs:
  • Voice/speech recognition;
  • Emotion recognition;
  • Face recognition;
  • Navigation;
  • Gesture recognition;
  • Cobot capabilities.

Synergy Opportunities for Robo-Chef

We divide the opportunities for Robo-chef improvements into two parts: (1) conversational capabilities, (2) fundamental AI capabilities.
While Robo-chefs currently do not possess the same conversational capabilities as service systems such as chatbots, their maturity trajectory will inevitably lead to the addition of these capabilities. Therefore, we start by discussing the improvement opportunities for the Robo-chef that come from conversational capabilities.
Synergy potential of Conversational AI capabilities:
  • Speech communication potential: the Robo-chef could take instructions from the human chef while the human chef is busy; this has the potential for activating the Robo-chef without physical proximity and while the chef’s hands are occupied with other activities. Speech instructions are easy and flowing, with the ability to pass much more information and exert tighter control over the Robo-chef; this would make the Robo-chef more flexible and easier to guide.
  • Gesture recognition: collaborative work with humans is the hallmark of the new digital age. Therefore, a Robo-chef should be able to collaborate with a human chef. Using gesture recognition, the human chef can efficiently signal several important instructions to the Robo-chef, such as: (1) stop the current activity, (2) increase or decrease the flames, (3) operate faster or slower, (4) wait, (5) start stirring, etc.
  • Face and sentiment recognition: these abilities are currently missing from Robo-chefs, but they greatly improve the conversational capabilities of many other AI service systems (e.g., chatbots). Thus, they hold the same potential for Robo-chefs.
Synergy potential of fundamental AI capabilities:
  • Computer vision potential: certain robo-chefs currently have the ability to verify the readiness of the food they cook and its location; however, there is no evidence of using vision capabilities for identifying silverware and cookware and bringing them to the cooking arena, or for removing them for washing and cleaning.
  • Navigation potential: if robo-chefs could navigate the cluttered kitchen, they could move from place to place and bring pots, pans, mixers, and other large cookware from different parts of the kitchen; this would dramatically increase the robo-chef’s independence and efficiency.
  • Machine learning (ML): ML is a very efficient tool for identifying patterns and their effects, such as identifying the combination of ingredient quantities that makes a recipe a great success. ML can identify repeating situations and can infer when and what to do to improve the process, for example: (1) when to bring hot or cold water to the chef, (2) when to bring salt or certain spices to the chef, (3) when to change the oven temperature and how to control it, etc. A minimal sketch of such a learned assistance rule appears below.
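The following minimal Python sketch illustrates only this last point: a decision tree trained on synthetic kitchen-context features predicts an assistance action. The features, values, and action labels are invented for illustration and are not drawn from any Robo-chef system.

```python
# Minimal sketch: learning when/what assistance to offer from kitchen context.
# All data below is synthetic and illustrative; a real system would log
# features from sensors and the recipe state.
from sklearn.tree import DecisionTreeClassifier

# Features: [minutes into step, pan temperature (C), recipe step index]
X = [
    [1,  25, 0], [3,  40, 0],   # early prep -> bring cold water
    [8, 170, 2], [9, 185, 2],   # hot pan mid-recipe -> reduce heat
    [5,  95, 1], [6, 100, 1],   # simmering -> bring salt/spices
]
y = [
    "bring_cold_water", "bring_cold_water",
    "reduce_heat", "reduce_heat",
    "bring_spices", "bring_spices",
]

model = DecisionTreeClassifier(random_state=0).fit(X, y)
print(model.predict([[9, 178, 2]]))  # -> ['reduce_heat']
```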

4. Case Study 2: Chatbot

The aim of this case study is very different from that of the Robo-chef case, since chatbots are much more developed and mature, especially in the capabilities of the conversational system. Conversational capabilities in chatbots are much more prevalent, with wide usage across all industries; this maturity has led many research projects to propose integration with additional technologies to increase the synergy and improve chatbot performance; this strengthens the claim of this paper regarding the contribution and synergy of integrating technologies in various services. Chatbots are a uniquely mature tool whose trajectory can light the path for the future progress of all other service tools. The very basic core of chatbots uses text-related tools and NLP [34].

4.1. Chatbot Adoption of AI Technologies

In the following paragraphs we describe several conversational AI technologies that were integrated into chatbots in research or implementations for the benefit of service.

4.1.1. Voice Recognition and Text to Speech Chatbots

The integration of voice recognition into chatbot functionality was mentioned in research on the use of chatbots for distance education as early as 2005 [48]. A computing architecture for full voice input and output is offered by du Preez et al. [49]. Voice recognition with text-to-speech technology is mentioned several times in a review by Satu et al. [50] and a survey by Abdul-Kader and Woods [51]. Shakhovska et al. [52] describe the development of a speech-to-text chatbot interface based on the Google API. Kumar et al. [53] proposed a voice-recognition-based educational chatbot for visually impaired people. Using voice recognition in medical-service chatbots is described in Athota et al. [54], Tjiptomongsoguno [55], and Kadam et al. (2021). A clear contribution of the voice-recognition addition was found or implied in all of the above cases.
In all these cases, the chatbot used NLP analysis together with voice and speech technologies.
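As a minimal sketch of such a voice front-end (our illustration, not the architecture of any cited system), the following Python snippet combines the SpeechRecognition library’s Google Web Speech backend for speech-to-text with pyttsx3 for text-to-speech; the reply logic is a placeholder for a real NLP pipeline.

```python
# Minimal sketch of a voice front-end for a chatbot: speech-to-text via the
# SpeechRecognition library (Google Web Speech backend) and text-to-speech
# via pyttsx3. The reply logic stands in for a real NLP pipeline.
import speech_recognition as sr
import pyttsx3

recognizer = sr.Recognizer()
tts = pyttsx3.init()

with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    audio = recognizer.listen(source)

try:
    text = recognizer.recognize_google(audio)  # speech -> text
except sr.UnknownValueError:
    text = ""

reply = f"You said: {text}" if text else "Sorry, I did not catch that."
tts.say(reply)      # text -> speech
tts.runAndWait()
```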

4.1.2. Tone Recognition and Chatbots

In early chatbot design, the focus was merely on producing coherent responses and using grammatically correct sentences [56,57,58,59]. Nowadays, what matters is not only what is said by a chatbot but also how it is expressed linguistically [60]. Users prefer chatbots that use language that is structurally correct and has a logical style [61], so the use of tone analysis has become more popular in recent research. Johnson et al. [62] use tone recognition in their proposed chatbot; they mention the ability of IBM’s Watson system for tone recognition and mood analysis and its integration with a chatbot (“Tone analyzer service documentation”, IBM.com, https://www.ibm.com/watson/developercloud/doc/toneanalyzer/, retrieved 24 May 2022).
Lee et al. [63] developed a system that pre-processes speech with a sound-data-enhancement method for speech emotion recognition and transforms the sound into a spectrogram that is used to recognize five emotions: peace, happiness, sadness, anger, and fear.
Sheth et al. [64] stated that an intelligent chatbot can leverage a microphone tone-analyzer service to analyze speech tonality and sentiment.
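A minimal sketch of the spectrogram step described by Lee et al. [63] is shown below; the file name and parameter values are our assumptions, and the emotion classifier itself (which [63] implements as a deep neural network) is only indicated.

```python
# Minimal sketch: turning an utterance into a log-mel spectrogram, the input
# representation used for emotion classification in systems like [63].
# File name and parameter values are illustrative assumptions.
import librosa
import numpy as np

y, sample_rate = librosa.load("utterance.wav", sr=16000)

mel = librosa.feature.melspectrogram(y=y, sr=sample_rate,
                                     n_fft=1024, hop_length=256, n_mels=64)
log_mel = librosa.power_to_db(mel, ref=np.max)  # (64, frames) "image"

# A trained classifier (e.g., a small CNN over log_mel, not shown) would map
# this spectrogram to one of: peace, happiness, sadness, anger, fear.
print(log_mel.shape)
```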

4.1.3. Gesture Recognition and Chatbots

Gesture-recognition technology integration into chatbots is another researched combination that is still not widely implemented but is mentioned in many research papers, most of which also include voice and speech recognition. The idea of using gesture recognition for human-computer interaction is not new (e.g., [65]), especially for chatbots [66]; however, the actual case studies and pioneering implementations of this technology came much later. Johnson et al. [62] report integrating gesture-recognition capabilities with a chatbot using the capabilities of two external systems: (1) IBM Watson and (2) Cleverbot. An advanced chatbot with face-recognition and hand-gesture-recognition abilities is described in Gopinath et al. [67].
Several recent papers have suggested integrating sign language and chatbots for serving people with hearing disabilities. Pardasani et al. [68] report their experience of integrating American Sign Language based on finger and hand gestures; the system they devised also has a full audio interface. Huang, Wu, and Kameda [69] propose a new design of a chatbot system for people with a hearing or speech disorder (PHSD) using Leap Motion gesture recognition. Fledgling use of gesture recognition in chatbots was also mentioned in medical chatbot applications [70] and in a human-resource (HR) management application [71].
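As a minimal sketch of hand-gesture input (our illustration using Google’s MediaPipe Hands, which the cited papers did not necessarily use; the open-palm heuristic is an assumption):

```python
# Minimal sketch: detecting an open-palm "stop" gesture from a webcam frame
# with MediaPipe Hands. The fingertip-above-knuckle heuristic is a crude
# illustrative assumption, not a production gesture classifier.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=1)
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()

if ok:
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        # (tip, pip) landmark pairs for index/middle/ring/pinky fingers;
        # a smaller y means higher in the image, i.e., finger extended.
        fingers_up = sum(lm[tip].y < lm[pip].y
                         for tip, pip in [(8, 6), (12, 10), (16, 14), (20, 18)])
        if fingers_up == 4:
            print("open palm detected -> interpret as STOP")
```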

4.1.4. Face and Sentiment Recognition and Chatbots

The integration of face recognition into chatbot functionality is another example of technology specialization (in addition to using core chatbot text and NLP technologies); however, while some cases added this technology separately (e.g., Angeline et al. [72]; Ahmed et al. [73]), many of the presented cases of chatbot use of face recognition also use voice recognition (e.g., [74,75]).
A prevalent use of face and facial-expression recognition in chatbots is sentiment analysis. Some examples are as follows: Lee et al. [76] use facial-expression analysis with a deep neural network in their counseling service, generating emotional responses. Devaram’s [77] empathic chatbot uses facial recognition to identify the feelings of its human customers. Silapasuphakornwong and Uehira [78] utilize facial recognition along with voice emotion recognition for elderly emotion monitoring. Huang and Chueh [79] go even further and analyze the facial expressions of pets for veterinary consultation.
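A minimal sketch of the first step of such a pipeline is shown below: face localization with OpenCV’s bundled Haar cascade. The downstream facial-expression classifier is only indicated, since each cited system uses its own model, and the image file name is an assumption.

```python
# Minimal sketch: locate faces with OpenCV's bundled Haar cascade; a trained
# facial-expression model (not shown) would then classify each face crop.
# The image file name is an illustrative assumption.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("customer.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    face_crop = img[y:y + h, x:x + w]
    # emotion = expression_model.predict(face_crop)  # assumed, model-specific
    print(f"face at ({x},{y}) size {w}x{h}")
```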

4.1.5. Case Based Reasoning

Case-based reasoning (CBR) is an artificial-intelligence technique that categorizes experience into “cases” and matches the current problem to a similar case. The integration of CBR in chatbots is now prevalent in many organizations [80], and the combination is very natural, as CBR is based on text processing. The integration of CBR in chatbots was mentioned as early as 2007 [81] for a chatbot used in an educational environment; an additional educational chatbot that integrates CBR is mentioned in Buyrukoğlu and Yilmaz [82]. In many cases, CBR is used for solving customer problems, for example, the handling of customer complaints [83].
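A minimal sketch of CBR’s “retrieve” step over textual cases is shown below; the case base and the query are invented for illustration.

```python
# Minimal sketch of CBR's "retrieve" step over textual cases: find the past
# case most similar to a new customer problem and reuse its resolution.
# The case base below is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

case_base = [
    ("I was charged twice for one order", "Refund duplicate charge"),
    ("My package arrived damaged",        "Send replacement, prepaid return"),
    ("I cannot reset my password",        "Send reset link, verify identity"),
]

problems = [problem for problem, _ in case_base]
vectorizer = TfidfVectorizer().fit(problems)
case_matrix = vectorizer.transform(problems)

query = "order was billed two times on my card"
scores = cosine_similarity(vectorizer.transform([query]), case_matrix)[0]
best = scores.argmax()
print(case_base[best][1])  # -> "Refund duplicate charge"
```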

4.1.6. Avatar Technology and Chatbots

Advances in computer technology have fostered the proliferation of virtual characters, often called avatars, that are representations of users, controlled by a human or by software, and able to interact [84]. Three-dimensional avatars can be added to make the chatbot more appealing; the avatar is capable of displaying emotions and moods through facial expressions matching the spoken conversation, making the subject more interesting [74]. Przegalinska et al. [85] conducted research on anthropomorphizing chatbots using avatars; their recommendation is to focus on three new trust-enhancing features of chatbots, namely transparency, integrity, and explainability, which allow for greater control.

4.1.7. Context and Client History Analysis and Chatbots

Many organizations store valuable data and information about their customers, in many cases related to Customer Relationship Management (CRM) [86]; this data is a treasure trove for chatbot communication with customers. Inferring the customer’s situation can be significantly improved by analyzing their history. For example, a client who has many recent complaints in their history file deserves special attention, and repeated claims should be analyzed and, in some cases, referred to a human manager. A minimal sketch of such an escalation rule follows.
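The sketch below is illustrative only; the record layout, the 30-day window, and the threshold of three complaints are our assumptions rather than values from any cited CRM system.

```python
# Minimal sketch: flag customers for human escalation based on CRM history.
# Window and threshold values are illustrative assumptions.
from datetime import date, timedelta

def needs_escalation(complaint_dates, window_days=30, threshold=3):
    """True if the client filed `threshold`+ complaints within the window."""
    cutoff = date.today() - timedelta(days=window_days)
    recent = [d for d in complaint_dates if d >= cutoff]
    return len(recent) >= threshold

history = [date.today() - timedelta(days=d) for d in (2, 9, 20, 95)]
print(needs_escalation(history))  # -> True: route to a human manager
```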
Continuing the above discussion, a few papers describe the advantages of a holistic system with all of the above technologies. For example, Angga et al. [74] describe a chatbot with a 3D avatar, a full voice interface, and facial-expression analysis. Trappey et al. [2] integrate virtual reality (VR) and case-based reasoning (CBR) with other technologies. Knickerbocker et al. [87] advocated the heterogeneous integration of IoT, AI, and advanced computing technologies for healthcare.

5. Capabilities Evolution of the Major AI Specializations in Conversational Services

Some papers advocate the integration of technologies for an overall better communication experience that leads to improved customer satisfaction. In this section, we propose an evolutionary perspective for describing and illustrating the relationships among AI capabilities. The proposed relationships convey inclusion of capabilities; for example, the “Text to speech” capability includes both the “Text recognition” and “Speech recognition” capabilities. This evolutionary description is theoretical and not necessarily related to materializing a specific application; in practice, several levels of the hierarchy could sometimes be bypassed (for example, by using deep learning instead of machine learning). Our approach is illustrated and summarized in Figure 3.
The main considerations of this approach are: (1) the realization that AI capabilities are built in an evolutionary manner, with hierarchical capabilities; (2) the realization that performing a service function is the main goal of AI deployment in service provision; (3) therefore, relating an AI technology to a service function is key to assessing its importance; and (4) however, an immature AI technology, or the unavailability of a certain AI technology, is a serious barrier that prevents its integration in service provision.
Based on Figure 2, we analyzed the capability evolution of the various AI technologies for decision-making in the conversational service context and developed the following capabilities evolution chart; this chart could be used for generating an evolutionary plan for an effective and efficient AI integration process in conversational service systems.
Figure 3 presents the capabilities hierarchy of the various AI technologies in the conversational service context. For example, the “Face recognition” capability includes the capabilities of “Visual pattern recognition”. Figure 3 shows the input-related capabilities at its top level and a downward hierarchy of progressively advanced processing capabilities leading to the outputs at the bottom level. A sketch of this inclusion structure as a dependency graph follows.
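The following Python sketch encodes such an inclusion structure as a dependency graph and orders capabilities so that prerequisites come first. Only the “text to speech” and “face recognition” inclusions are stated in the text; the remaining edges are our assumptions approximating Figure 3.

```python
# Minimal sketch: Figure 3's inclusion relationships as a dependency graph,
# with a helper that orders capabilities so prerequisites come first. Only
# the "text to speech" and "face recognition" edges are stated in the text;
# the remaining edges are illustrative assumptions.
DEPENDS_ON = {
    "text recognition": set(),
    "speech recognition": set(),
    "visual pattern recognition": set(),
    "text mining": {"text recognition"},
    "text to speech": {"text recognition", "speech recognition"},
    "face recognition": {"visual pattern recognition"},
    "tone recognition": {"speech recognition"},
    "emotion recognition": {"tone recognition", "face recognition"},
}

def build_order(targets):
    """Depth-first post-order: each capability after its prerequisites."""
    order, seen = [], set()
    def visit(cap):
        if cap in seen:
            return
        seen.add(cap)
        for prereq in DEPENDS_ON.get(cap, ()):
            visit(prereq)
        order.append(cap)
    for target in targets:
        visit(target)
    return order

print(build_order(["emotion recognition"]))
```

Running `build_order(["emotion recognition"])` yields an order in which every capability appears after its prerequisites, which is exactly the kind of evolutionary plan the chart is meant to support.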
While Figure 3 focuses on AI integration in conversational service systems, it is important to mention that performing non-conversational service functions may also be important in service provision, leading to a different dependency scheme. For example, for the Robo-chef, some tactile and navigation capabilities are more important than emotion-recognition capabilities; therefore, these capabilities are nowadays more developed in Robo-chefs than emotion-recognition capabilities. In the long range, we may see the integration of mature AI technologies even if their function is less significant. Conversely, even if a function is very important, an immature AI technology forms a serious obstacle that prevents its integration in service provision.

6. Discussion

So far, our discussion has been generic, and we have not dealt with specific implementations by a specific user or company. In this section, we portray an approach that can utilize the information presented thus far for the benefit of implementers. A specific implementation is always dependent upon the use case, product requirements, and market demand; requirement elicitation must therefore be done so that the needs of the system are clear prior to deciding on the AI framework to be developed. It is therefore assumed that at least a preliminary use-case analysis has already been done and the functional requirements of the system are already defined. From this point, we propose a sequential approach for generating an AI technology integration plan; this approach is illustrated and summarized in Figure 4.
The main considerations of this approach are: (1) the realization that different services may have vastly different characteristics and functional requirements; (2) the realization that performing a service function is the main goal of AI deployment in service provision; (3) therefore, relating an AI technology to a service function is key to assessing its importance; and (4) however, an immature AI technology, or the unavailability of a required AI technology, is a serious barrier that hampers its integration in service provision.
Thus, we now describe the stages of generating an AI integration plan, after obtaining the service’s main functions and transactions from a preliminary use-case analysis, along with some non-functional requirements (mainly related to the interface).
  • The first stage is to evaluate and prioritize the importance of the elements from the preliminary use-case analysis (both functional and non-functional) and generate an importance hierarchy (prioritization list) of these requirements.
  • The second stage is to derive AI requirements from the prioritization list (of functional and non-functional elements), that is, to define the major outcomes required from the AI technology capabilities.
  • The third stage is to use the AI capability evolution (Figure 3) to elicit the required AI technologies, or combinations of AI technologies, related to the relevant functional or non-functional elements. In some cases, there is more than one way to achieve the same capability; for example, given available training data, one can build a single deep learning architecture for emotion recognition based on raw text and speech data, without building the intermediate modules illustrated in Figure 3. At this stage, we list all the alternatives, to be chosen among in the next stage.
  • The fourth stage is to determine the content of AI technologies in the implementation, based on a contribution-cost tradeoff that includes the priority, maturity, and availability of each AI technology. At this stage, at most one AI alternative is chosen for each capability (e.g., deep learning vs. machine learning).
  • The fifth stage is to generate a conceptual rough-cut plan for implementing AI technology integration (possibly a conceptual Gantt chart).
Figure 4 describes and summarizes the proposed process for generating an AI technology integration plan; however, it would take several case studies to test this approach and gain insights into its strengths and weaknesses.

6.1. Simple Illustrative Example

We use a simple chatbot for illustration purposes only. Specifically, we use the administrative chatbot of an eye-care center, which gives information about: (1) the center’s services, (2) appointment availability, (3) various forms, and (4) procedures to be followed.
Stage 1: A quick analysis of the use case and requirements reveals a heavy dependency on text communication; however, it is clear that many of the clients have real eyesight problems, so the option of speech communication is crucial. The communication scope is limited in a way that keeps it structured; it is assumed that dealing with emotions is less important.
Stage 2: AI Requirements:
  • “Heavy dependency on text communication” → Human machine interaction through text leading to natural language understanding.
  • “Speech communication is crucial” → Speech interaction ability.
  • “Structured communication” → Easy text mining.
Stage 3: Required AI Technologies:
From stage 2 it is clear that NLP is needed for the implementation, so from Figure 3 we conclude that “text mining” is needed, as well as “text recognition”.
From stage 2 it is also clear that speech interaction ability is required; therefore, from Figure 3, “text to speech”, “speech recognition”, and “text recognition” are needed.
Stage 4: Final inclusion/exclusion of AI technologies
From previous stages it is clear that most of the AI technologies are fully required. Therefore, the included technologies are as follows:
“Text recognition”
“Text mining”
“Speech recognition”
“Text to speech”
Stage 5: Steps for AI technology integration
The outcome of stage 4 leads to the following AI implementation plan:
“Text recognition” → “Text mining”
“Speech recognition” → “Text to speech”
Note that the implementation proceeds along two parallel paths. A minimal sketch mechanizing stages 3-5 for this example follows.
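As an illustration only, the following Python sketch mechanizes stages 3-5 for this example, in the style of the dependency graph of Section 5; as before, the graph edges are assumptions approximating Figure 3.

```python
# Minimal sketch: stages 3-5 for the eye-care chatbot. From the two stage-2
# outcomes we expand prerequisites via the (assumed) Figure 3 graph and group
# the result into parallel implementation paths.
DEPENDS_ON = {
    "text recognition": set(),
    "speech recognition": set(),
    "text mining": {"text recognition"},
    "text to speech": {"text recognition", "speech recognition"},
}

def closure(targets):
    """All required capabilities, prerequisites included."""
    needed, stack = set(), list(targets)
    while stack:
        cap = stack.pop()
        if cap not in needed:
            needed.add(cap)
            stack.extend(DEPENDS_ON.get(cap, ()))
    return needed

required = closure({"text mining", "text to speech"})
print(sorted(required))
# -> ['speech recognition', 'text mining', 'text recognition', 'text to speech']
# Stage 5: two parallel paths, each built prerequisites-first:
#   text recognition   -> text mining
#   speech recognition -> text to speech
```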

6.2. Limitations and Potential Benefits of the Proposed Approach

6.2.1. Limitations

The limitations of this study are as follows:
  • The study focused on conversational service systems; therefore, it may not fully reflect all the situations and practices of other domains.
  • The study did not fully consider the point that deep learning may outperform classic machine learning methods, because deep learning automatically “combines” raw data features instead of requiring feature engineering.
  • The study contains only two case studies, so there may be points that the study overlooks which may appear in other case studies.
  • The study gives a snapshot in time that may be short-lived in our dynamically changing technological world.

6.2.2. Potential Benefits

The benefits of this study are as follows:
  • The main benefit of this study is its discovery of the unused synergetic potential of integrating several AI techniques into an orchestrated effort to improve service.
  • The study tackles the problem of AI knowledge silos in service provision; it discusses the reasons for the isolation of these silos and reveals the barriers and traps that hinder their integration.
  • The study describes a roadmap of AI clusters in the service domain.
  • The study illustrates the synergetic use of AI technologies in a mature case study, and the lack of major AI synergy in a less mature second case study.
  • The paper presents a novel evolutionary inclusion model of conversational AI capabilities.
  • The paper presents a novel sequential approach for generating an AI implementation plan.

7. Conclusions

This paper examined the potential of integrating several AI techniques or technologies for performing better service and increasing customer satisfaction in service encounters. The paper maps the major AI clusters and analyzes the dependencies between the major elements of AI capabilities. Two different case studies were presented: one shows the rich integration of AI technologies in a mature chatbot application, while the other reveals a very narrow use of AI integration in less mature applications such as the Robo-chef. It was shown that while the research literature often advocates such integration, the actual integration of various technologies is very slow and, in some cases, non-existent; we therefore propose an evolutionary approach in the discussion section.
Future research may include maturity model developments for specific environments; case studies related to integrating technologies in the service industry; and scorecard dashboard development based on quantitative data analysis.
A barrier to the integration of these technologies is the privacy of the customer and the user: users and customers must first agree to be seen or heard by the computerized system.

Author Contributions

Conceptualization, Y.C. and S.R.; methodology, S.R. and Y.C.; formal analysis, S.R.; investigation, Y.C.; writing—original draft preparation, S.R. and Y.C.; writing—review and editing, Y.C. and S.R.; visualization, S.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable; the study did not involve humans or animals.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gomi, T. New AI and Service Robots. Ind. Robot. 2003, 30, 123–138. [Google Scholar] [CrossRef]
  2. Trappey, A.J.; Trappey, C.V.; Chao, M.-H.; Wu, C.-T. VR-Enabled Engineering Consultation Chatbot for Integrated and Intelligent Manufacturing Services. J. Ind. Inf. Integr. 2022, 26, 100331. [Google Scholar] [CrossRef]
  3. Gupta, S.; Langhans, S.D.; Domisch, S.; Fuso-Nerini, F.; Felländer, A.; Battaglini, M.; Tegmark, M.; Vinuesa, R. Assessing Whether Artificial Intelligence Is an Enabler or an Inhibitor of Sustainability at Indicator Level. Transp. Eng. 2021, 4, 100064. [Google Scholar] [CrossRef]
  4. Miriyev, A.; Kovač, M. Skills for Physical Artificial Intelligence. Nat. Mach. Intell. 2020, 2, 658–660. [Google Scholar] [CrossRef]
  5. Meunier, F.-X.; Bellais, R. Technical Systems and Cross-Sector Knowledge Diffusion: An Illustration with Drones. Technol. Anal. Strateg. Manag. 2019, 31, 433–446. [Google Scholar] [CrossRef]
  6. Miller, L.C.; Jones, B.B.; Graves, R.S.; Sievert, M.C. Merging Silos: Collaborating for Information Literacy. J. Contin. Educ. Nurs. 2010, 41, 267–272. [Google Scholar] [CrossRef]
  7. De Gregorio, G.; Ranchordás, S. Breaking down Information Silos with Big Data: A Legal Analysis of Data Sharing. In Legal Challenges of Big Data; Edward Elgar Publishing: Cheltenham, UK, 2020. [Google Scholar]
  8. Charbonneau, D.H.; Priehs, M.; Hukill, G. Beyond Knowledge Silos: Preserving and Sharing Institutional Knowledge in Academic Libraries. Knowl. Silos Preserv. Shar. Inst. Knowl. Acad. Libr. 2016, 1–8. [Google Scholar]
  9. Martínez-Plumed, F.; Gómez, E.; Hernández-Orallo, J. Futures of Artificial Intelligence through Technology Readiness Levels. Telemat. Inform. 2021, 58, 101525. [Google Scholar] [CrossRef]
  10. Huang, M.-H.; Rust, R.T. Artificial Intelligence in Service. J. Serv. Res. 2018, 21, 155–172. [Google Scholar] [CrossRef]
  11. Li, M.; Yin, D.; Qiu, H.; Bai, B. A Systematic Review of AI Technology-Based Service Encounters: Implications for Hospitality and Tourism Operations. Int. J. Hosp. Manag. 2021, 95, 102930. [Google Scholar] [CrossRef]
  12. De Keyser, A.; Köcher, S.; Alkire, L.; Verbeeck, C.; Kandampully, J. Frontline Service Technology Infusion: Conceptual Archetypes and Future Research Directions. J. Serv. Manag. 2019, 30, 156–183. [Google Scholar] [CrossRef]
  13. Robinson, S.; Orsingher, C.; Alkire, L.; De Keyser, A.; Giebelhausen, M.; Papamichail, K.N.; Shams, P.; Temerak, M.S. Frontline Encounters of the AI Kind: An Evolved Service Encounter Framework. J. Bus. Res. 2020, 116, 366–376. [Google Scholar] [CrossRef]
  14. Wirtz, J.; Patterson, P.G.; Kunz, W.H.; Gruber, T.; Lu, V.N.; Paluch, S.; Martins, A. Brave New World: Service Robots in the Frontline. J. Serv. Manag. 2018, 29, 907–931. [Google Scholar] [CrossRef] [Green Version]
  15. Belanche, D.; Casaló, L.V.; Flavián, C.; Schepers, J. Robots or Frontline Employees? Exploring Customers’ Attributions of Responsibility and Stability after Service Failure or Success. J. Serv. Manag. 2020, 31, 267–289. [Google Scholar] [CrossRef]
  16. Belk, R. Ethical Issues in Service Robotics and Artificial Intelligence. Serv. Ind. J. 2021, 41, 860–876. [Google Scholar] [CrossRef] [Green Version]
  17. Benbya, H.; Pachidi, S.; Jarvenpaa, S. Special Issue Editorial: Artificial Intelligence in Organizations: Implications for Information Systems Research. J. Assoc. Inf. Syst. 2021, 22, 10. [Google Scholar] [CrossRef]
  18. Chen, M.; Decary, M. Artificial Intelligence in Healthcare: An Essential Guide for Health Leaders. Healthc. Manag. Forum 2020, 33, 10–18. [Google Scholar] [CrossRef] [PubMed]
  19. Androutsopoulou, A.; Karacapilidis, N.; Loukis, E.; Charalabidis, Y. Transforming the Communication between Citizens and Government through AI-Guided Chatbots. Gov. Inf. Q. 2019, 36, 358–367. [Google Scholar] [CrossRef]
  20. Elfatih, N.M.; Hasan, M.K.; Kamal, Z.; Gupta, D.; Saeed, R.A.; Ali, E.S.; Hosain, M.S. Internet of Vehicle’s Resource Management in 5G Networks Using AI Technologies: Current Status and Trends. IET Commun. 2022, 16, 400–420. [Google Scholar] [CrossRef]
  21. Pokhrel, S.R. Software Defined Internet of Vehicles for Automation and Orchestration. IEEE Trans. Intell. Transp. Syst. 2021, 22, 3890–3899. [Google Scholar] [CrossRef]
  22. Vocke, C.; Constantinescu, C.; Popescu, D. Application Potentials of Artificial Intelligence for the Design of Innovation Processes. Procedia CIRP 2019, 84, 810–813. [Google Scholar] [CrossRef]
  23. Rosete, A.; Soares, B.; Salvadorinho, J.; Reis, J.; Amorim, M. Service Robots in the Hospitality Industry: An Exploratory Literature Review. In International Conference on Exploring Services Science; Springer: Berlin/Heidelberg, Germany, 2020; pp. 174–186. [Google Scholar]
  24. Afsheen, S.; Fathima, L.; Khan, Z.; Elahi, M. A Self-Sufficient Waiter Robo for Serving in Restaurants. Int. J. Adv. Res. Dev. 2018, 3, 57–67. [Google Scholar]
  25. Al-Sayed, L. Technologies at the Crossroads of Food Security and Migration. In Food Tech Transitions; Springer: Berlin/Heidelberg, Germany, 2019; pp. 129–148. [Google Scholar]
  26. Sener, F.; Yao, A. Zero-Shot Anticipation for Instructional Activities. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 862–871. [Google Scholar]
  27. Garcia-Haro, J.M.; Oña, E.D.; Hernandez-Vicen, J.; Martinez, S.; Balaguer, C. Service Robots in Catering Applications: A Review and Future Challenges. Electronics 2020, 10, 47. [Google Scholar] [CrossRef]
  28. Zhu, D.H.; Chang, Y.P. Robot with Humanoid Hands Cooks Food Better? Effect of Robotic Chef Anthropomorphism on Food Quality Prediction. Int. J. Contemp. Hosp. Manag. 2020, 32, 1367–1383. [Google Scholar] [CrossRef]
  29. Bollini, M.; Tellex, S.; Thompson, T.; Roy, N.; Rus, D. Interpreting and Executing Recipes with a Cooking Robot. In Experimental Robotics: The 13th International Symposium on Experimental Robotics; Desai, J.P., Dudek, G., Khatib, O., Kumar, V., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2013; pp. 481–495. ISBN 978-3-319-00065-7. [Google Scholar]
  30. Adamopoulou, E.; Moussiades, L. An Overview of Chatbot Technology. In IFIP International Conference on Artificial Intelligence Applications and Innovations; Springer: Berlin/Heidelberg, Germany, 2020; pp. 373–383. [Google Scholar]
  31. Rapp, A.; Curti, L.; Boldi, A. The Human Side of Human-Chatbot Interaction: A Systematic Literature Review of Ten Years of Research on Text-Based Chatbots. Int. J. Hum.-Comput. Stud. 2021, 151, 102630. [Google Scholar] [CrossRef]
  32. Hien, H.T.; Cuong, P.-N.; Nam, L.N.H.; Nhung, H.L.T.K.; Thang, L.D. Intelligent Assistants in Higher-Education Environments: The FIT-EBot, a Chatbot for Administrative and Learning Support. In Proceedings of the 9th International Symposium on Information and Communication Technology, Danang City, Vietnam, 6–7 December 2018; pp. 69–76. [Google Scholar]
  33. Yarovyi, A.; Kudriavtsev, D. Method of Multi-Purpose Text Analysis Based on a Combination of Knowledge Bases for Intelligent Chatbot. In Proceedings of the CEUR Workshop Proceedings, Vulnius, Lithuania, 8 December 2021; Volume 2870, pp. 1238–1248. [Google Scholar]
  34. Locke, S.; Bashall, A.; Al-Adely, S.; Moore, J.; Wilson, A.; Kitchen, G.B. Natural Language Processing in Medicine: A Review. Trends Anaesth. Crit. Care 2021, 38, 4–9. [Google Scholar] [CrossRef]
  35. Okuda, T.; Shoda, S. AI-Based Chatbot Service for Financial Industry. Fujitsu Sci. Tech. J. 2018, 54, 4–8. [Google Scholar]
  36. Sidaoui, K.; Jaakkola, M.; Burton, J. AI Feel You: Customer Experience Assessment via Chatbot Interviews. J. Serv. Manag. 2020, 31, 745–766. [Google Scholar] [CrossRef]
  37. Chen, Q.; Gong, Y.; Lu, Y.; Tang, J. Classifying and Measuring the Service Quality of AI Chatbot in Frontline Service. J. Bus. Res. 2022, 145, 552–568. [Google Scholar] [CrossRef]
  38. Bulla, C.; Parushetti, C.; Teli, A.; Aski, S.; Koppad, S. A Review of AI Based Medical Assistant Chatbot. Res. Appl. Web Dev. Des. 2020, 3. [Google Scholar] [CrossRef]
  39. Varitimiadis, S.; Kotis, K.; Pittou, D.; Konstantakis, G. Graph-Based Conversational AI: Towards a Distributed and Collaborative Multi-Chatbot Approach for Museums. Appl. Sci. 2021, 11, 9160. [Google Scholar] [CrossRef]
  40. Borsci, S.; Malizia, A.; Schmettow, M.; Van Der Velde, F.; Tariverdiyeva, G.; Balaji, D.; Chamberlain, A. The Chatbot Usability Scale: The Design and Pilot of a Usability Scale for Interaction with AI-Based Conversational Agents. Pers. Ubiquitous Comput. 2022, 26, 95–119. [Google Scholar] [CrossRef]
  41. Erickson, M.; Kim, P. Can chatbots work well with knowledge management systems? Issues Inf. Syst. 2020, 21, 53–58. [Google Scholar]
  42. Chao, M.-H.; Trappey, A.J.C.; Wu, C.-T. Emerging Technologies of Natural Language-Enabled Chatbots: A Review and Trend Forecast Using Intelligent Ontology Extraction and Patent Analytics. Complexity 2021, 2021, 5511866. [Google Scholar] [CrossRef]
  43. Mydyti, H.; Kadriu, A. The Impact of Chatbots in Driving Digital Transformation. Int. J. E-Serv. Mob. Appl. IJESMA 2021, 13, 88–104. [Google Scholar] [CrossRef]
  44. Um, T.; Kim, T.; Chung, N. How Does an Intelligence Chatbot Affect Customers Compared with Self-Service Technology for Sustainable Services? Sustainability 2020, 12, 5119. [Google Scholar] [CrossRef]
  45. Ban, P.; Desale, S.; Barge, R.; Chavan, P. Intelligent Robotic Arm. In ITM Web of Conferences; EDP Sciences: Les Ulis, France, 2020; Chapter 32; p. 01005. [Google Scholar]
  46. Danno, D.; Hauser, S.; Iida, F. Robotic Cooking Through Pose Extraction from Human Natural Cooking Using OpenPose. In Intelligent Autonomous Systems 16; Ang, M.H., Jr., Asama, H., Lin, W., Foong, S., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2022; pp. 288–298. [Google Scholar]
  47. Park, S.Y.; Kim, S.; Leifer, L. “Human Chef” to “Computer Chef”: Culinary Interactions Framework for Understanding HCI in the Food Industry. In International Conference on Human-Computer Interaction; Springer: Berlin/Heidelberg, Germany, 2017; pp. 214–233. [Google Scholar]
  48. Heller, B.; Proctor, M.; Mah, D.; Jewell, L.; Cheung, B. Freudbot: An Investigation of Chatbot Technology in Distance Education. In EdMedia+ Innovate Learning; Association for the Advancement of Computing in Education (AACE): Waynesville, NC, USA, 2005; pp. 3913–3918. [Google Scholar]
  49. Du Preez, S.J.; Lall, M.; Sinha, S. An Intelligent Web-Based Voice Chat Bot. In Proceedings of the IEEE EUROCON 2009, St. Petersburg, Russia, 18–23 May 2009; pp. 386–391. [Google Scholar]
  50. Satu, M.S.; Parvez, M.H. Review of Integrated Applications with Aiml Based Chatbot. In 2015 International Conference on Computer and Information Engineering (ICCIE); IEEE: Piscataway, NJ, USA, 2015; pp. 87–90. [Google Scholar]
  51. Abdul-Kader, S.A.; Woods, J.C. Survey on Chatbot Design Techniques in Speech Conversation Systems. Int. J. Adv. Comput. Sci. Appl. 2015, 6, 72–80. [Google Scholar]
  52. Shakhovska, N.; Basystiuk, O.; Shakhovska, K. Development of the Speech-to-Text Chatbot Interface Based on Google API. In MoMLeT; CEUR-WS: Aachen, Germany, 2019; pp. 212–221. [Google Scholar]
  53. Kumar, M.N.; Chandar, P.L.; Prasad, A.V.; Sumangali, K. Android Based Educational Chatbot for Visually Impaired People. In Proceedings of the 2016 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC), Chennai, India, 15–17 December 2016; pp. 1–4. [Google Scholar]
  54. Athota, L.; Shukla, V.K.; Pandey, N.; Rana, A. Chatbot for Healthcare System Using Artificial Intelligence. In Proceedings of the 2020 8th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO), Noida, India, 4–5 June 2020; pp. 619–622. [Google Scholar]
  55. Tjiptomongsoguno, A.R.W.; Chen, A.; Sanyoto, H.M.; Irwansyah, E.; Kanigoro, B. Medical Chatbot Techniques: A Review. Proc. Comput. Methods Syst. Softw. 2020, 1294, 346–356. [Google Scholar] [CrossRef]
  56. Perrachione, T.K.; Del Tufo, S.N.; Gabrieli, J.D. Human Voice Recognition Depends on Language Ability. Science 2011, 333, 595. [Google Scholar] [CrossRef] [Green Version]
  57. Anand, A.; Shanmugam, R. Voice Speech and Recognition—An Overview. In Proceedings of the 3rd International Conference on Computing Informatics and Networks, Delhi, India, 29–30 July 2020; Springer: Berlin/Heidelberg, Germany, 2021; pp. 347–356. [Google Scholar]
  58. Todkar, S.P.; Babar, S.S.; Ambike, R.U.; Suryakar, P.B.; Prasad, J.R. Speaker Recognition Techniques: A Review. In Proceedings of the 2018 3rd International Conference for Convergence in Technology (I2CT), Pune, India, 6–8 April 2018; pp. 1–5. [Google Scholar]
  59. Kabir, M.M.; Mridha, M.F.; Shin, J.; Jahan, I.; Ohi, A.Q. A Survey of Speaker Recognition: Fundamental Theories, Recognition Methods and Opportunities. IEEE Access 2021, 9, 79236–79263. [Google Scholar] [CrossRef]
  60. Chaves, A.P.; Doerry, E.; Egbert, J.; Gerosa, M. It’s How You Say It: Identifying Appropriate Register for Chatbot Language Design. In Proceedings of the 7th International Conference on Human-Agent Interaction, Kyoto, Japan, 6–10 October 2019; pp. 102–109. [Google Scholar]
  61. Jain, M.; Kumar, P.; Kota, R.; Patel, S.N. Evaluating and Informing the Design of Chatbots. In Proceedings of the 2018 Designing Interactive Systems Conference, Hong Kong, China, 9–13 June 2018; pp. 895–906. [Google Scholar]
  62. Johnson, A.; Roush, T.; Fulton, M.; Reese, A. Implementing Physical Capabilities for an Existing Chatbot by Using a Repurposed Animatronic to Synchronize Motor Positioning with Speech. Int. J. Adv. Stud. Comput. Sci. Eng. 2017, 6, 20. [Google Scholar]
  63. Lee, M.-C.; Chiang, S.-Y.; Yeh, S.-C.; Wen, T.-F. Study on Emotion Recognition and Companion Chatbot Using Deep Neural Network. Multimed. Tools Appl. 2020, 79, 19629–19657. [Google Scholar] [CrossRef]
  64. Sheth, A.; Yip, H.Y.; Iyengar, A.; Tepper, P. Cognitive Services and Intelligent Chatbots: Current Perspectives and Special Issue Introduction. IEEE Internet Comput. 2019, 23, 6–12. [Google Scholar] [CrossRef] [PubMed]
  65. Iurgel, I. From Another Point of View: Art-E-Fact. In International Conference on Technologies for Interactive Digital Storytelling and Entertainment; Springer: Berlin/Heidelberg, Germany, 2004; pp. 26–35. [Google Scholar]
  66. Montero, C.A.; Araki, K. Enhancing Computer Chat: Toward a Smooth User-Computer Interaction. In International Conference on Knowledge-Based and Intelligent Information and Engineering Systems; Springer: Berlin/Heidelberg, Germany, 2005; pp. 918–924. [Google Scholar]
  67. Gopinath, D.; Vijayakumar, S.; Harish, J. Analyze of Facial Expression and Hand Gestures Using Deep Learning. In AIP Conference Proceedings; AIP Publishing LLC: Melville, NY, USA, 2022; Volume 2444, p. 030001. [Google Scholar]
  68. Pardasani, A.; Sharma, A.K.; Banerjee, S.; Garg, V.; Roy, D.S. Enhancing the Ability to Communicate by Synthesizing American Sign Language Using Image Recognition in a Chatbot for Differently Abled. In Proceedings of the 2018 7th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO), Noida, India, 29–31 August 2018; pp. 529–532. [Google Scholar]
  69. Huang, X.; Wu, B.; Kameda, H. Development of a Sign Language Dialogue System for a Healing Dialogue Robot. In Proceedings of the 2021 IEEE International Conference on Dependable, Autonomic and Secure Computing, International Conference on Pervasive Intelligence and Computing, International Conference on Cloud and Big Data Computing, International Conference on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech), Virtual Conference, 25–28 October 2021; pp. 867–872. [Google Scholar]
  70. Fadhlallah, G.M. A Deep Learning-Based Approach for Chatbot: Medical Assistance a Case Study. Master’s Thesis, University of Mohamed Khider—BISKRA, Khider, Algeria, 2021. [Google Scholar]
  71. AbdElminaam, D.S.; ElMasry, N.; Talaat, Y.; Adel, M.; Hisham, A.; Atef, K.; Mohamed, A.; Akram, M. HR-Chat Bot: Designing and Building Effective Interview Chat-Bots for Fake CV Detection. In Proceedings of the 2021 International Mobile, Intelligent, and Ubiquitous Computing Conference (MIUCC), Cairo, Egypt, 26–27 May 2021; pp. 403–408. [Google Scholar]
  72. Angeline, R.; Gaurav, T.; Rampuriya, P.; Dey, S. Supermarket Automation with Chatbot and Face Recognition Using IoT and AI. In Proceedings of the 2018 3rd International Conference on Communication and Electronics Systems (ICCES), Coimbatore, India, 15–16 October 2018; pp. 1183–1186. [Google Scholar]
  73. Ahmed, S.; Paul, D.; Masnun, R.; Shanto, M.U.A.; Farah, T. Smart Home Shield and Automation System Using Facebook Messenger Chatbot. In Proceedings of the 2020 IEEE Region 10 Symposium (TENSYMP), Dhaka, Bangladesh, 5–7 June 2020; pp. 1791–1794. [Google Scholar]
  74. Angga, P.A.; Fachri, W.E.; Elevanita, A.; Agushinta, R.D. Design of Chatbot with 3D Avatar, Voice Interface, and Facial Expression. In Proceedings of the 2015 international conference on science in information technology (ICSITech), Yogyakarta, Indonesia, 27–28 October 2015; pp. 326–330. [Google Scholar]
  75. Margreat, L.; Paul, J.J.; Mary, T.B. Chatbot-Attendance and Location Guidance System (ALGs). In Proceedings of the 2021 3rd International Conference on Signal Processing and Communication (ICPSC), Coimbatore, India, 13–14 May 2021; pp. 718–722. [Google Scholar]
  76. Lee, D.; Oh, K.-J.; Choi, H.-J. The Chatbot Feels You-a Counseling Service Using Emotional Response Generation. In Proceedings of the 2017 IEEE international conference on big data and smart computing (BigComp), Jeju, Korea, 13–16 February 2017; pp. 437–440. [Google Scholar]
  77. Devaram, S. Empathic Chatbot: Emotional Intelligence for Empathic Chatbot: Emotional Intelligence for Mental Health Well-Being. arXiv 2020, arXiv:201209130. [Google Scholar]
  78. Silapasuphakornwong, P.; Uehira, K. Smart Mirror for Elderly Emotion Monitoring. In Proceedings of the 2021 IEEE 3rd Global Conference on Life Sciences and Technologies (LifeTech), Nara, Japan, 9–11 March 2021; pp. 356–359. [Google Scholar]
  79. Huang, D.-H.; Chueh, H.-E. Chatbot Usage Intention Analysis: Veterinary Consultation. J. Innov. Knowl. 2021, 6, 135–144. [Google Scholar] [CrossRef]
  80. Alsarayreh, S. The Impact of Technology on Knowledge Retention: A Systematic Review. Int. J. Inf. Technol. Lang. Stud. 2021, 5, 38–46. [Google Scholar]
  81. Leonhardt, M.D.; Tarouco, L.; Vicari, R.M.; Santos, E.R.; da Silva, M.D.S. Using Chatbots for Network Management Training through Problem-Based Oriented Education. In Proceedings of the Seventh IEEE International Conference on Advanced Learning Technologies (ICALT 2007), Niigata, Japan, 18–20 July 2007; pp. 845–847.
  82. Buyrukoğlu, S.; Yilmaz, Y. A Novel Semi-Automated Chatbot Model: Providing Consistent Response of Students’ Email in Higher Education Based on Case-Based Reasoning and Latent Semantic Analysis. Int. J. Multidiscip. Stud. Innov. Technol. 2021, 5, 6–12. [Google Scholar]
  83. Lee, C.-H.; Wang, Y.-H.; Trappey, A.J. Ontology-Based Reasoning for the Intelligent Handling of Customer Complaints. Comput. Ind. Eng. 2015, 84, 144–155. [Google Scholar] [CrossRef]
  84. Miao, F.; Kozlenkova, I.V.; Wang, H.; Xie, T.; Palmatier, R.W. An Emerging Theory of Avatar Marketing. J. Mark. 2022, 86, 67–90. [Google Scholar] [CrossRef]
  85. Przegalinska, A.; Ciechanowski, L.; Stroz, A.; Gloor, P.; Mazurek, G. In Bot We Trust: A New Methodology of Chatbot Performance Measures. Digit. Transform. Disrupt. 2019, 62, 785–797. [Google Scholar] [CrossRef]
  86. Saravanan, D.; Kaur, P. Customer Relationship Management in Banking in the UK Industry: Case of Lloyds Bank. ECS Trans. 2022, 107, 14325. [Google Scholar]
  87. Knickerbocker, J.U.; Budd, R.; Dang, B.; Chen, Q.; Colgan, E.; Hung, L.W.; Kumar, S.; Lee, K.W.; Lu, M.; Nah, J.W.; et al. Heterogeneous Integration Technology Demonstrations for Future Healthcare, IoT, and AI Computing Solutions. In Proceedings of the 2018 IEEE 68th Electronic Components and Technology Conference (ECTC), San Diego, CA, USA, 29 June 2018; pp. 1519–1528. [Google Scholar]
Figure 1. Hierarchy of specialization in AI tools and techniques.
Figure 2. Clusters of major AI domain specializations in services and their relationships.
Figure 3. Capabilities evolution of the major AI specializations in conversational support service systems.
Figure 4. Proposed process for generating AI technology integration plan.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
