Systematic Review

User Experience in Social Robots

by Elaheh Shahmir Shourmasti, Ricardo Colomo-Palacios *, Harald Holone and Selina Demi
Department of Computer Science, Østfold University College, 1783 Halden, Norway
* Author to whom correspondence should be addressed.
Sensors 2021, 21(15), 5052; https://doi.org/10.3390/s21155052
Submission received: 28 May 2021 / Revised: 20 July 2021 / Accepted: 23 July 2021 / Published: 26 July 2021
(This article belongs to the Collection Human-Computer Interaction in Pervasive Computing Environments)

Abstract
Social robots are increasingly penetrating our daily lives. They are used in various domains, such as healthcare, education, business, industry, and culture. However, introducing this technology into everyday environments is not trivial. For users to accept social robots, a positive user experience is vital, and it should be treated as a critical part of the robots’ development process. Doing so may broaden the use of social robots and strengthen their diffusion in society. The goal of this study is to summarize the extant literature on user experience in social robots and to identify the challenges and benefits of UX evaluation in social robots. To achieve this goal, the authors carried out a systematic literature review that follows PRISMA guidelines. Our findings reveal that the most common methods to evaluate UX in social robots are questionnaires and interviews. UX evaluations were found to be beneficial in providing early feedback and, consequently, in handling errors at an early stage. However, despite the importance of UX in social robots, robot developers often neglect to set UX goals, due to lack of knowledge or lack of time. This study emphasizes the need for robot developers to acquire the required theoretical and practical knowledge on how to perform a successful UX evaluation.

1. Introduction

Robots can be defined as machines that can sense and react to the events around them [1]. Modern robots are built as electromechanical systems that require numerous energy conversions to turn available energy resources into the power these systems need [2]. Accordingly, it is necessary to develop new types of robots that perform their functions by combining attributes of living biological materials with electromechanical systems [2]. Unsurprisingly, recent technological advances provide robust solutions to a variety of technical problems that have limited robot development over the years. These technological solutions have led to an ever-increasing integration of robots into our physical and social settings [3]. One of the traditional strategies to achieve this integration is the humanoid form of robots. Although human-robot collaboration is an area of expansion in both research and industry [4], the ultimate quest of roboticists is to develop fully human-like robots [3]. In the future, robots are expected to possess a high level of skill, to fulfill humans’ expectations, and to assist them in any circumstances. In this regard, social robotics researchers aim to develop robots capable of natural social interaction [5].
Duffy et al. [6] defined social robots (SR) as “a physical entity embodied in a complex, dynamic, and social environment sufficiently empowered to behave in a manner conducive to its own goals and those of its community”. The increasing need for social robots for both entertainment and education has fostered the evolution of this technology. The first works devoted to the connection between human intelligence and machines appeared in the middle of the twentieth century. Humanoid robots can engage human mental resources by imitating human characteristics [7]. Socialization with such robots resembles socialization with other humans. These robots are socially intelligent in the sense that they can understand and respond to humans according to the situation and learn how to behave by experiencing real-life circumstances. In addition, the development of these robots has the potential to enhance our knowledge about ourselves [8]. Currently, social robots are becoming progressively common elements of our world [9,10]. Social robots are present in many areas, including education [11,12,13] and learning [14,15,16], health [17,18,19,20], care [21], tourism and hospitality [22,23,24,25,26], media [27,28], services in general [29,30], and public spaces [31,32,33,34], to cite just some of the most frequent, recent, and relevant uses. Focusing on service delivery, a recent publication underlines that the adoption of social robots in such environments is motivated by branding strategies rather than functional purposes [35]. Fashion, hype, or need, social robots are here to stay.
Communicating with social robots, understanding their behaviors, and developing user experience over time require longitudinal studies, and research on robots in domestic settings has been at the center of attention in this regard [36]. Mandal [37] defined communication as conveying information through signals, i.e., elements that are perceived by touch, sound, smell, and sight. The author stated that a signal connects the sender to the receiver and consists of three components: the signal itself, what it refers to, and the interpreter. Body language is the everyday embodiment of this concept: we use it to interact with people without even noticing it. Body posture and gesture, facial expressions, and hand and head movements are all part of nonverbal behavior and communication. Robots are no exception when it comes to human-robot interaction [37]. To communicate, it is necessary to make oneself understood; therefore, humans use verbal and nonverbal actions to convey their defining characteristics. Likewise, social robots need this coordination to perform human-like behaviors [38]. Previous studies have focused on the ways humans gain information or skills from each other [39]. This focus on social learning in robotics can be explained by the increasing interest in developing robots that can be customized by ordinary people for use at home, at work, and in public spaces [39]. The uses and capabilities of social robots are deeply analyzed in a recent literature study [40].
For interaction with social robots to have a positive impact on human life, a positive user experience (UX) is essential; therefore, the design and evaluation of this interaction needs to be performed carefully [41]. As indicated by Hartson and Pyla [42], user experience is “the totality of the effect or effects felt by a user as a result of interaction with, and the usage context of, a system, device, or product, including the influence of usability, usefulness, and emotional impact during interaction and savoring memory after interaction”. The international standard on ergonomics of human-system interaction provides a simpler definition of UX that entails the perceptions and responses produced as a result of the use or anticipated use of a specific system, product, or service [43]. In this study, the authors focus on the concept of UX evaluation, which constitutes a set of methods and techniques adopted to explore how users perceive an interactive service, system, or product [44]. It is worth noting that the focus is not on evaluating users, but their experience of interacting with social robots.
User experience is remarkably crucial for a product to be successful, yet extracting this information from users is not a trivial task. Some factors, such as psychophysiological behaviors, that are very significant for measuring user experience are not fully considered, and product usage is rarely studied continuously [45]. Moreover, researchers prefer techniques such as questionnaires and interviews over real-time procedures [45]. Maia and Furtado [45] carried out a systematic literature review about user experience evaluation and found that 84% of the studies used questionnaires and 16% used interviews to assess user experience.
Although the concept of UX is significant in the context of social robots, given the need to communicate with robots, to the best of our knowledge there is no previous systematic literature review on the intersection of the two topics: UX and social robots. Therefore, we carried out a systematic literature review (SLR) in order to summarize the extant literature on the topic in a comprehensive and concise manner. This overview may be of interest to UX researchers and robot developers willing to consider the UX perspective during the whole social robot development process.

2. Research Methodology

This section introduces the research methodology adopted to search, select, and analyze previously published research that covers the topic of user experience in social robots.

2.1. Systematic Literature Review Using PRISMA Guidelines

We carried out a systematic literature review that aims to synthesize the extant literature on user experience in social robots. This review follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [46,47,48]. This approach has been used by several high-quality papers in different fields of study, e.g., [49,50,51]. It is worth mentioning that, prior to conducting the systematic literature review, the authors assessed its novelty. The authors acknowledge the existence of several systematic literature reviews in the broad field of robotics [52,53,54,55,56], some of them in the field of human-robot interaction [57,58,59,60]. However, in the specific field of social robots, there are two works worth mentioning: one on the interaction with sexbots [61] and another on specific design guidelines for social robots for the elderly [62]. To the best of our knowledge, an SLR that focuses on user experience in social robots does not exist in the extant literature. Bearing in mind the goal of this study, the authors formulated three research questions (RQi), as follows:
  • RQ1: What has been studied about user experience in social robots operating in different domains?
  • RQ2: What are the reported techniques for assessing UX in social robots?
  • RQ3: What are the reported challenges and benefits in evaluating UX for social robots?

2.2. Eligibility Criteria

A study is eligible for review if it fits the following inclusion criteria: (i) the study is presented in the form of an article, report, white paper, thesis, or book; (ii) the study discusses the concepts of social robots and user experience; (iii) the study describes methods of UX assessment for social robots; (iv) the study identifies the challenges and benefits of UX evaluation for social robots; (v) the study has been published within the last 10 years. A study is excluded if: (i) the full-text version is not accessible; (ii) it is not written in English; or (iii) it focuses on UX in domains other than social robots.
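For illustration, the following minimal Python sketch shows one way such criteria could be applied when screening bibliographic records; the record fields and the is_eligible helper are hypothetical names introduced for this example, not part of the review protocol.

```python
from dataclasses import dataclass

# Illustrative sketch only: applying the inclusion/exclusion criteria above
# to a bibliographic record. Field names are assumptions for the example.
@dataclass
class Record:
    kind: str                           # "article", "report", "white paper", "thesis", "book", ...
    year: int                           # publication year
    language: str                       # publication language
    full_text_available: bool           # exclusion criterion (i)
    covers_social_robots_and_ux: bool   # inclusion criteria (ii)-(iv), collapsed for brevity

def is_eligible(r: Record, review_year: int = 2021) -> bool:
    included = (
        r.kind in {"article", "report", "white paper", "thesis", "book"}  # inclusion (i)
        and r.covers_social_robots_and_ux
        and review_year - r.year <= 10                                    # inclusion (v)
    )
    excluded = (not r.full_text_available) or r.language != "English"     # exclusion (i)-(ii)
    return included and not excluded

print(is_eligible(Record("article", 2016, "English", True, True)))  # True
print(is_eligible(Record("article", 2008, "English", True, True)))  # False: older than 10 years
```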

2.3. Information Sources and Search

Literature searches were conducted in five major databases:
  • IEEE Xplore
  • ACM Digital Library
  • Springer
  • ScienceDirect
  • Google Scholar
The selected databases have been used broadly in secondary studies in the broad field of computing [63,64,65]. The search process aims to retrieve as many studies related to the topic of interest as possible. Bearing this in mind, two key terms were included in the search string: “user experience” and “social robot”. The term “user experience” has also been referred to as “UX”; therefore, this keyword was included in the search string as well. The aforementioned search terms were combined using the Boolean operators “AND” for concatenation of terms and “OR” for alternative terms.
Consequently, we formulated the following search string:
(“user experience” OR “UX”) AND “social robot”
Regarding the term “social robots”, we observed that some studies used the term “humanoid robot”. Therefore, we carried out trial searches with this term; however, we did not identify additional relevant studies. The only filter applied was to limit the time range to the last 10 years, since the concepts of social robots and user experience evaluation techniques have progressed considerably in recent years.
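As a rough illustration, the following Python sketch approximates the search string when screening exported records (e.g., titles and abstracts) locally; it is not the query syntax of any particular database, and the matches_search_string helper is a name introduced for this example.

```python
import re

# Approximate the review's search string:
# ("user experience" OR "UX") AND "social robot"
def matches_search_string(text: str) -> bool:
    t = text.lower()
    has_ux = "user experience" in t or re.search(r"\bux\b", t) is not None
    has_social_robot = "social robot" in t   # also matches the plural "social robots"
    return has_ux and has_social_robot

records = [
    "Evaluating the UX of a social robot in elderly care",
    "Power and radio resource management in femtocell networks",
]
print([r for r in records if matches_search_string(r)])
# -> ['Evaluating the UX of a social robot in elderly care']
```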

2.4. Study Selection and Data Collection Process

The study selection process is represented by means of the PRISMA flow diagram (see Figure 1). A total of 2303 studies (IEEE Xplore (n = 140), ACM Digital Library (n = 164), Springer (n = 166), ScienceDirect (n = 113), Google Scholar (n = 1720)) were identified during the initial search process. The title and metadata of these studies were examined, and 2185 studies were deemed unsuitable for this systematic review. A further 10 studies were excluded because of duplication. In addition, 64 studies were excluded as a result of assessing their abstracts against the inclusion/exclusion criteria. Consequently, only 44 studies proceeded to the eligibility stage. The inclusion/exclusion criteria were applied to the full texts of these studies, and only 20 studies were finally included in this systematic review.
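The screening funnel reported above can be summarized with a small arithmetic sketch; the counts come directly from the text, while the stage labels are our own.

```python
# Reproduce the screening funnel reported above (counts from the text).
identified = {
    "IEEE Xplore": 140,
    "ACM Digital Library": 164,
    "Springer": 166,
    "ScienceDirect": 113,
    "Google Scholar": 1720,
}
total = sum(identified.values())                    # 2303 records identified
after_title_screen = total - 2185                   # 118 remain after title/metadata screening
after_deduplication = after_title_screen - 10       # 108 remain after removing duplicates
eligible_for_full_text = after_deduplication - 64   # 44 proceed to full-text eligibility
included = 20                                       # studies included in the review

print(total, after_title_screen, after_deduplication, eligible_for_full_text, included)
# -> 2303 118 108 44 20
```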

2.5. Data Extraction and Synthesis

This study used Zotero to automatically extract attributes from the selected studies, such as study title, author name(s), publication year, publication source, and keywords. In addition, two of the authors independently extracted the research topics and themes. Previous literature has introduced two data synthesis approaches: descriptive/narrative and quantitative data synthesis [66]. This study adopts the descriptive data synthesis approach, which encompasses a simple description of topics and techniques in order to answer RQ1 and RQ2, and the identification of themes to answer RQ3. The themes were discussed among the authors, and disagreements due to conflicting themes were resolved with the contribution of the third author, who is experienced in HRI research.

2.6. Risk of Biases

The studies selected for this review were assessed with regard to the risk of biases. The following types of biases were identified in these studies: (i) Sampling biases due to the selection of a homogeneous group of users. For instance, Gerłowska et al. [67] selected elderly participants with no significant differences in age, gender, education level, and attitude towards novel technology, whereas Reich-Stiebert and Eyssel [68] selected a homogeneous group of university students without considering other groups of interest, such as teachers and parents. Cesta et al. [69] adopted non-probability sampling methods, namely convenience sampling combined with chain sampling, a combination that has been referred to as mixed purposeful sampling [69]. (ii) Measurement biases. In order to investigate attitudes towards education robots, Reich-Stiebert and Eyssel [68] provided a written description of the characteristics and functions of education robots without using pictures of implemented education robots. Destephe et al. [70] aimed to investigate the relevance of attractiveness to the acceptance of a robot as a partner in the working environment. The authors used videos as stimuli, a useful means to explore indirect interactions; however, this does not enable the understanding of real interactions with users. De Graaf and Ben Allouch [71] investigated variables influencing the acceptance of social robots. Due to the short-term nature of their study, the authors measured a novelty effect and therefore recommended long-term studies to eliminate such an effect.

3. Results

In this section, we address the research questions that were defined in Section 2.1. The full list of primary studies is provided in Appendix A.

3.1. RQ1: What Has Been Studied About User Experience in Social Robots Operating in Different Domains?

Prior to answering RQ1, it is important to observe the trend of studies related to the topic of interest. Figure 2 depicts the frequency of studies published per period of time. We observed a slight increase in the number of studies published during the second period (2014, 2017] compared to the first.
For many years, robots were tools used primarily in factories [72]. Nowadays, robots have become embedded in people’s everyday lives and operate in both the industrial and service sectors. In particular, social robots are expected to play an important role in human society [41]. However, to achieve long-term benefits for people’s lives, an iterative and positive UX when interacting with social robots is needed. Social robots are being used in various application domains such as home use, manufacturing, healthcare, and education. One may argue that considering human-robot interaction in the design of robots can increase their usefulness and safety [41].
One of the social robots’ applications is in the care and assistance domain. Indeed, it is complicated to provide care and assistance services to people who require special care. Telepresence robots are used in different contexts, especially for elderly assistance [69]. Cesta et al. [69] described their experience of using telepresence robots in a real context over a long-term period. Their evaluation indicated that the telepresence robot was perceived positively by both males and females in terms of perceived usefulness, intention to use, and attitude. These findings rely on the MARTA (Multidimensional Assessment of telepresence RoboT for older Adults) methodology, which defines a set of variables of interest in the interaction between telepresence robots and users. Furthermore, the results revealed that users’ perceived usefulness, intention to use, and perceived adaptiveness towards telepresence robots increased over time. In addition, primary users stated that they would be willing to continue using the robot, showing a positive attitude towards the telepresence robot, namely Giraff [69].
Gerłowska et al. [67] assessed the impact of assistant robots on aging patients with memory impairments. The authors conducted experiments with participants (55–90 years old, of both genders) with and without cognitive impairments. The participants were asked to carry out daily tasks such as cooking, leisure activities, medication intake, and social interaction. To support the participants while performing the tasks, RAMCIP (Robotic Assistant for Mild Cognitive Impairments Patients at Home) was used. The authors conducted an in-depth analysis of the impact and attractiveness of the robot, and the results revealed that the assistant robot was highly rated and easy to become familiar with, whereas usability was rated as neutral; the authors reported that a longer interaction would be needed to assess it properly. Additionally, in terms of societal impact, the robot was recognized as highly advantageous for patients’ health and wellbeing [67].
Destephe et al. [70] aimed to enhance the understanding of the uncanny valley phenomenon, i.e., the uneasy feeling that humanoid robots may cause in human observers. To achieve this goal, the authors investigated factors that could influence our perception of humanoid robots. They carried out a cross-cultural study using videos in which the motions of a humanoid robot, namely WABIAN-2R, were varied, and distributed questionnaires to 69 subjects. Their findings suggested that the main factor influencing the uncanny valley feeling is the attitude towards humanoid robots: subjects with a positive view of robots rated the robot as less eerie and more attractive than subjects with a negative view. In turn, the perceived attractiveness of the robot significantly affected its occupational acceptability, regardless of the perceived eeriness.
Humanoid robots may also be applied in other domains, such as learning and teaching. In fact, in many countries they are being used as guides in science classes. According to a survey conducted among German university students, most of the participants preferred education robots as co-teachers and assistants in the classroom, and only a small fraction perceived humanoid robots as independent teachers. Overall, the results of the survey showed a relative reluctance to involve robots in learning processes [68].
Šabanović [73] carried out observational work in Japan with the goal of exploring how roboticists design social robots. Robots in Japan are readily accepted by society as social agents. In order to advance robotics, scientists proposed incorporating traditional themes and cultural values into robotics to promote cultural continuity. Using cultural frameworks for novel robotic technologies helps roboticists understand that robots need not only to fit into, but also to be supported by, suitable cultural structures.
Using social robots for commercial applications is another scope of operation. Accordingly, Tonkin et al. [74] applied a human-robot interaction (HRI) methodology to the trial deployment of a social robot at an airport. They integrated Lean UX with HRI research to design robots more efficiently. According to Gothelf et al. [75], Lean UX refers to the evolution of product design and team collaboration; essentially, it integrates the best parts of the designer’s toolkit with agile software development and lean startup thinking, and makes them available to the entire product team. Tonkin et al. [74] tested the robot in the airport check-in and gate environments for one week. The results revealed that more than 50% of the users interacted with the robot via touchscreen or voice to request a joke; however, voice interaction was rare. Besides, having the robot check carry-on baggage size was not accepted by any user [74].
Table 1 provides information about the studies used in this section to answer RQ1. We selected a subset of the primary studies (8 out of 20, 40%) which explicitly discussed the perception of users towards social robots operating in different areas.

3.2. RQ2: What Are the Reported Techniques for Assessing UX in Social Robots?

There are many factors that determine the proper UX evaluation method, including but not limited to timespan, purpose, field, financial resources, supply, and methodological knowledge [44]. The feedback received from users can be assessed and measured in two ways: formative and summative [44]. Formative evaluation aims at getting feedback on conceptual design ideas during the development process by using techniques such as interaction flows, rough sketches of the robot design, and physical mock-ups. Summative evaluation, on the other hand, aims at understanding the robot’s usage in its real context; the focus is therefore on evaluating the final robot. However, it is more complicated, time-consuming, and expensive to change the interaction design and the robot’s interface in the later development stages. Thus, it is recommended to perform formative and summative evaluations throughout the whole design process [44]. To better answer the questions of “what to do” and “when to use which method”, the following chart illustrates the procedure and method selection for UX evaluation. This is a general approach to evaluating user experience in a specific context. The vertical axis distinguishes attitudinal from behavioral dimensions, and the horizontal axis distinguishes qualitative from quantitative evaluation. Each quadrant determines the most suitable way of evaluating and the types of questions that can be asked for the intended purpose [44,76].
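Purely as an illustration, the two-dimensional method map described above could be encoded as a simple lookup structure; the assignment of individual methods to quadrants below is our assumption for the example and is not taken from [44,76].

```python
# Illustrative encoding of the attitudinal/behavioral x qualitative/quantitative
# method map; quadrant placements are example assumptions, not claims from the sources.
method_map: dict[tuple[str, str], list[str]] = {
    ("attitudinal", "qualitative"): ["interviews", "focus groups", "diaries"],
    ("attitudinal", "quantitative"): ["questionnaires", "surveys"],
    ("behavioral", "qualitative"): ["direct observation", "recorded observation"],
    ("behavioral", "quantitative"): ["robot interaction logs", "usage metrics"],
}

def suggest_methods(dimension: str, data_type: str) -> list[str]:
    """Return candidate UX evaluation methods for the chosen quadrant."""
    return method_map.get((dimension, data_type), [])

print(suggest_methods("behavioral", "quantitative"))
# -> ['robot interaction logs', 'usage metrics']
```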
To conduct a successful UX evaluation, the robot developer or UX researcher should focus on facets such as selecting the UX aspects and methods, choosing the data collection method, and targeting these to the specific area [77]. UX evaluation involves a wide range of techniques, including empirical methods such as lab-based evaluation, as well as analytical methods [44,77]. While it is true that using all methods in every HRI project is not feasible, it is also true that using several methods leads to a better UX evaluation, e.g., not relying solely on contrived experiments with questionnaires [78].
Van Greunen [41] underlines the importance of UX evaluation, although investigations are often carried out after the actual human-robot interaction. Consequently, the subjects reflect upon their interaction with the social robot afterwards, which might introduce biases that affect the reliability of the investigation.
As indicated by many researchers, robots in real environments demand long-term evaluation studies. Moreover, it is necessary to involve people in the design process by letting them have a real experience with robots in their homes during their daily lives. Data collection in participants’ homes can be carried out through questionnaires, interviews, sensors, and robot logs [79,80]. After a robot has been designed, evaluations are often performed in laboratories due to insufficient time, without considering prolonged real-world challenges [69].
Regarding long-term evaluation, it is necessary to gather data on human-robot interaction and user experience during a specific period. There are different methods and techniques to collect these data. For instance, direct observation and recorded observation are primary observational techniques. In direct observation, the researcher directly observes the interaction between the user and the robot, and monitors and documents the observations. In recorded observation, the researcher is not present in the session, and the observations are captured by means of video recording. Each technique is suitable for a certain purpose. For instance, if the interactions and behavioral aspects of interest are clear from the beginning, direct observation is suitable because that amount of data is easy to analyze. Recording sessions and going through the recordings afterwards to collect data is a time-consuming process; nonetheless, if features need to be measured in detail, recorded observation is the recommended approach [42,44].
Cesta et al. [69] present the MARTA methodology. This methodology specifies the variables that are central to evaluating the adoption and effective use of social robots over time, in order to investigate the consequences of habituation and potential reasons for rejection by users. In this assessment, the authors combined quantitative and qualitative instruments (questionnaires and interviews/diaries) to properly reveal users’ needs [69]. The interview is a method through which researchers can gain profound information about users’ feelings and ways of thinking, while the questionnaire is a quantitative data collection technique for gathering subjective data about how users view the design [42].
Gerłowska et al. [67] targeted the assessment of RAMCIP in a semi-controlled environment. The assessment centered on functionality, usability, perceived acceptability, and societal impact as judged by the end-users. The authors evaluated the functionality of RAMCIP by means of the User Experience Questionnaire (UEQ) and a survey to assess societal impact. Furthermore, in order to assess attractiveness and acceptance, the authors administered the UEQ, in which the measurements are clustered in six scales (a minimal scoring sketch is given after the list below) [67]. The UEQ scales are [81,82]:
  • Attractiveness: the impression of the product
  • Perspicuity: easy to use and follow the product
  • Efficiency: solving users’ task without additional work
  • Dependability: users’ feeling of being in control of the interaction
  • Stimulation: how engaging the product is
  • Novelty: the novelty of the product
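The sketch below illustrates UEQ-style aggregation, assuming the standard 7-point item format in which raw answers are mapped to the range −3 to +3 and each scale score is the mean of its items; the item-to-scale assignment is abbreviated and purely illustrative (the full UEQ distributes 26 items over the six scales, and item polarity handling is omitted here).

```python
from statistics import mean

# Minimal, illustrative UEQ-style aggregation: each 7-point answer (1..7) is
# mapped to -3..+3 and a scale score is the mean of its items. The item lists
# below are abbreviated placeholders, not the real UEQ item assignment.
ueq_scales = {
    "attractiveness": ["item_1", "item_2"],
    "perspicuity":    ["item_3", "item_4"],
    "efficiency":     ["item_5", "item_6"],
    "dependability":  ["item_7", "item_8"],
    "stimulation":    ["item_9", "item_10"],
    "novelty":        ["item_11", "item_12"],
}

def ueq_scale_scores(responses: dict[str, int]) -> dict[str, float]:
    """responses maps item id -> raw answer on the 1..7 scale."""
    return {
        scale: mean(responses[item] - 4 for item in items)  # 1..7 -> -3..+3
        for scale, items in ueq_scales.items()
    }

example = {f"item_{i}": 5 for i in range(1, 13)}  # mildly positive answers
print(ueq_scale_scores(example))                  # every scale scores 1
```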
Destephe et al. [70] conducted a study on the acceptance of a robot as a working partner and the factors that might affect perceptions of the robot. The authors conducted an experiment to determine in which occupations the participants would accept seeing the robot carry out a task. To accomplish this assessment, the authors distributed questionnaires to the participants. The questionnaires included inquiries about the participants’ general information, their robot-related experience, their attitude towards robots (based on MacDorman’s questionnaire), their personality, and their reactions and feelings about robots (based on Ho’s questionnaire). The questionnaire was intended to measure three categories [70]:
  • Perceived Humanness: the level of humanity and human-like characteristics of the robot
  • Eeriness: the feeling of strangeness, disgust, and familiarity
  • Attractiveness: the level of physical attraction
Accordingly, the results showed that attractiveness is the main factor in predicting occupation acceptability.
Sabanovic et al. [79] conducted an in situ evaluation to investigate two design alternatives of a break management robot: one functioning as a simple alarm, and the other with social behavior. Before the experiment, participants took part in semi-structured interviews and online questionnaires about their break-taking and work practices and their attitudes towards technology. The questionnaires were inspired by the “Technology Attitude Instrument”, the “Perceived Usefulness” scale, the full “Negative Attitudes Towards Robots Scale” (NARS), and the “Emotional Contagion Scale”. The user experience was evaluated over four consecutive weeks: half of the participants were given the socially interactive prototype and the others received the alarm version to use for two weeks. The participants described their first-week experience through a self-report. Throughout the second and third weeks, behavioral self-reports from users and logs recorded by the robots were collected. At the end of the experiment, the participants were asked to fill in the “Adoption of Information Technology in the Workplace” online survey, which covered voluntariness of future use, perceived relative advantage, compatibility, image, and ease of use [79].
Table 2 presents an example of such a questionnaire, the NARS. The NARS has subscales whose scores, based on participants’ responses, explain differences in their behavior and tensions regarding interaction with robots. The answers are graded as 1: Strongly disagree, 2: Disagree, 3: Neutral, 4: Agree, 5: Strongly agree. An individual’s score on a subscale is obtained by adding up the scores of all items in that subscale, with the scores of some items reversed (a scoring sketch follows the list of subscales below). Hence, the minimum and maximum scores are 6 and 30 in S1, 5 and 25 in S2, and 3 and 15 in S3, respectively [83].
The NARS items are classified into three subscales which can be described as follows:
  • Sub-scale 1: Negative Attitudes towards Situations and Interactions with Robots (six items)
  • Sub-scale 2: Negative Attitudes towards Social Influence of Robots (five items)
  • Sub-scale 3: Negative Attitudes towards Emotions in Interaction with Robots (three items)
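The following minimal sketch implements the scoring rule described above, assuming 5-point Likert answers summed per subscale with designated items reverse-scored as (6 − answer); which items are reverse-scored depends on the NARS version used, so the choice below is a placeholder assumption.

```python
# Minimal NARS-style scoring sketch: Likert answers (1 = strongly disagree ..
# 5 = strongly agree) are summed per subscale; reverse-scored items use (6 - answer).
# The reverse_scored set is a placeholder assumption, not taken from the source.
subscales = {
    "S1_interactions":     [f"s1_{i}" for i in range(1, 7)],  # 6 items -> range 6..30
    "S2_social_influence": [f"s2_{i}" for i in range(1, 6)],  # 5 items -> range 5..25
    "S3_emotions":         [f"s3_{i}" for i in range(1, 4)],  # 3 items -> range 3..15
}
reverse_scored = {"s3_1", "s3_2", "s3_3"}  # placeholder choice of reversed items

def nars_scores(answers: dict[str, int]) -> dict[str, int]:
    """answers maps item id -> Likert answer (1..5); returns one sum per subscale."""
    return {
        name: sum((6 - answers[i]) if i in reverse_scored else answers[i] for i in items)
        for name, items in subscales.items()
    }

neutral = {item: 3 for items in subscales.values() for item in items}
print(nars_scores(neutral))
# -> {'S1_interactions': 18, 'S2_social_influence': 15, 'S3_emotions': 9}
```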
The analysis of these questions allowed researchers to investigate users’ attitudes towards and experiences with robots. Moreover, empathy plays a crucial role in achieving a more natural interaction with social robots [84]. Ficocelli et al. [85] implemented emotion-based assistive behavior in socially assistive robots and, from the users’ point of view, robots with more appropriate emotional behavior were perceived as improved.
Table 3 summarizes the user experience evaluation methods discussed in this section. It is noteworthy that Table 3 includes only a subset of the primary studies, namely those that present the user experience evaluation method explicitly.
Overall, most UX assessments of SR are performed by means of interviews, surveys, and daily self-reporting. Other evaluation methods also exist, such as lab-based studies, direct observation, and recorded observation, which are becoming popular in HRI studies. These methods have been considered fair approaches because users are not able to consciously manipulate the activities and procedures; however, in some cases they have been refused due to the participants’ wishes [79,86].

3.3. RQ3: What Are the Reported Challenges and Benefits in Evaluating UX for Social Robots?

In this section, the authors identify and discuss benefits and challenges related to evaluating UX of human-robot interaction. The authors outline the value of receiving feedback from users in early stages of the development process, and the value of setting UX goals for robot developers. On the other hand, the UX evaluation of HRI faces challenges related to defining and incorporating UX goals, challenges related to the users’ first experience with social robots, and the limited number of studies that consider UX evaluation of HRI in its real-world context. In what follows, these benefits and challenges are discussed.
Benefits of early-stage feedback: In any interactive system, including social robots, a positive UX is indispensable for harvesting the expected benefits. To achieve a positive UX, Lindblom et al. [44] encourage the inclusion of formative evaluations throughout the entire design lifecycle. Formative evaluations entail receiving feedback on conceptual design ideas in the early stages of the UX design process. This initial user feedback provides valuable information on interaction quality, supports choosing among multiple alternative designs, and helps recognize UX obstacles and a negative UX. The earlier these obstacles are identified, the easier, faster, and cheaper it is to modify the robot’s design or interaction flow than in the later stages.
Benefits for robot developers: UX goals encourage robot developers to focus on the expected experience of interacting with robots. Therefore, the evaluation process has the potential to clarify what exactly should be done to improve specific aspects of the robot’s UX. During the design process, UX goals set quantitative and qualitative metrics that assist robot developers in understanding when the required quality of interaction has been achieved, when to stop iterating on the design, and when the design can be deemed successful [42,44].
Challenges of defining relevant UX goals: UX goals can be defined as high-level objectives or desired effects that identify the aspects that are important to the user when interacting with a specific system [87]. As recently pointed out by Lindblom and Andreasson [87], although defining UX goals is a fundamental activity, it has been overlooked in HRI, probably due to lack of knowledge or time. In fact, these UX goals may serve as evaluation metrics, which support the UX evaluation process and enable developers to reflect upon the evaluation outcomes. It is worth noting that UX goals focus on the quality of human-robot interaction rather than on evaluating the robot’s behavior and functionalities. Therefore, these goals support the whole development lifecycle by identifying when the desired interaction quality has been achieved.
Challenges of the first user experience: People may feel that their first experience with a social robot is unnatural, but they may find the robot useful and well-adapted after a more prolonged period of interaction; some people may find it boring, and some find it interesting. Hence, to design a robot with the expected user experience, it is crucial to recognize what kinds of feelings a robot should arouse and to what extent the robot is expected to evoke the intended user experience [87]. These aspects of HRI should be evaluated by means of long-term studies, given that human-robot interactions may vary from trial to trial due to the autonomous nature of social robots [87].
Challenges of limited assessment: It is important to highlight that human interaction with SR has been assessed only to a limited extent, because it is often carried out in laboratories for a short period of time. The assessment outcomes are solely based on the results of some pre-defined tasks given to a limited number of users. Thereby, these assessments overlook real-world challenges and consequently constrain the expansion of social robots beyond laboratory settings. A similar conclusion was reached by Dautenhahn [78], who outlined the need for HRI studies to explore long-term interaction between humans and complex social robots in real-world contexts and circumstances. Indeed, such studies are complex to design and execute, research-intensive, and time-consuming [78].
Table 4 presents the studies used to answer RQ3. These studies discussed the benefits and challenges of the UX evaluation for SR.

4. Discussion

4.1. UX in Social Robots: An Overview

The integration of social robots into human life is undeniable, and their importance in human society is increasing [84]. Social robots have been applied in several domains, including healthcare [88], education [89], business [90], and culture [91]. The user experience and acceptance of such robots vary across domains and settings (e.g., after a brief initial interaction, as underlined in [92]). For instance, in healthcare, robots were perceived as having positive effects on patient care and communication; interestingly, assistive robots were well accepted by elderly people but not by some care professionals. In the education domain, while some respondents in the primary studies believed that robots could participate as teachers or teaching assistants in subjects such as science and mathematics, others were reluctant to be taught by a robot.
A negative user experience may lead to unfortunate consequences, such as a bad reputation or reluctance to use that specific robot [41]. For social robots to bring long-term value to humans, a positive UX is a requirement; however, it does not occur automatically. Achieving a positive user experience requires intelligent and systematic design and iterative evaluation [77]. UX evaluation provides an overview of each step of the design and development process and assists in handling possible errors or imperfections at an early stage. There are various methods to evaluate UX in robotics, including questionnaires, surveys, interviews, self-reports, focus groups, direct observation, and recorded observation. According to our selected studies, questionnaires and interviews are the most common methods for evaluating UX in social robots. Interviews provide richer information about users’ expectations, judgments, and perceptions than questionnaires; questionnaires, on the other hand, make it possible to gather more quantifiable information, such as ratings of UX dimensions. It is worth mentioning that, in some cases, users refused the direct observation and recorded observation methods.
This is in line with findings on the applicability of questionnaires in UX evaluation, e.g., [93,94,95]. Therefore, the findings of this study (see Table 4) provide evidence that the techniques used for UX evaluation in social robots are similar to traditional UX evaluation techniques. While it is true that the adoption of existing practices, techniques, and methods from other fields such as UX and human-computer interaction has been encouraged in the extant literature [87], it is also true that there is a need to tailor these practices to the HRI field. As stated by Dautenhahn [78], the field of HRI differs from human-human interaction, human-computer interaction, traditional robotics, and engineering research. These differences should be considered when selecting and customizing appropriate UX evaluation techniques.
With regard to the challenges, our studies revealed that setting UX goals is often neglected by robot developers due to lack of knowledge or lack of time. In fact, the use of an inappropriate strategy leads to a restricted UX evaluation. Hence, robot developers need to obtain the required knowledge in both theory and practice and understand how to perform a successful UX evaluation. This is in line with relevant and recent contributions on the topic [96,97].

4.2. Limitations

We carried out this SLR with rigor, guided by sound research methods and guidelines. However, the study faces a set of limitations, which are referred to as threats to validity [98]. In what follows, the main threats to validity are explained, together with the actions taken to mitigate them, adapting the actions and definitions from [99].
Internal validity refers to the ability to make assumptions about the relationship between the study results and the conclusions drawn from causes and effects. The selection of five databases is a threat to internal validity because it may be influenced to some extent by the researchers’ biases. However, the databases were chosen based on their popularity; therefore, we believe that they cover the majority of relevant studies. Furthermore, the development of the field can be another threat to internal validity. We decided to conduct a systematic literature review because the topic of UX in social robots has been studied for a relatively long time.
Furthermore, external validity plays a crucial role in research. External validity concerns the extent to which the conclusions can be generalized to other contexts. It is worth noting that the selected studies are not restricted to a specific domain, and the benefits and challenges identified are of a general nature.
Construct validity refers to the degree to which a test measures what it claims to measure. In our study, the term “user experience” was also referred to as “UX” in some contexts; therefore, both terms were included in the search string. Moreover, we recognized that “social robots” was also referred to by other terms, such as humanoid robots. Trial searches were carried out with this term; however, additional relevant studies were not identified. Other terms may exist, and this is a potential threat to construct validity. Although this study included only a subset of the extant literature, we believe it covers the most relevant primary studies regarding user experience in social robots. Furthermore, this study may present a selection bias due to the academic search engines selected. However, these databases are commonly used in published SLRs, and their selection is also based on Kitchenham and Charters [66].
Conclusion validity refers to the reliability of the conclusions. We ensured reliability by holding discussion sessions and analyzing our findings. Furthermore, we carried out searches separately and retrieved data about the selected studies. The studies were assessed by two of the authors according to the formulated inclusion/exclusion criteria, and two other authors evaluated the whole process. After brainstorming sessions and negotiations, the final set of studies was selected. To avoid possible human errors in the data analysis process, the results were assessed thoroughly by the third author, who is experienced in SLRs on this topic.
Finally, to safeguard the replicability of the study, the data used in this SLR can be accessed online as indicated in the Data Availability Statement.

5. Conclusions and Future Work

This study focused on user experience in social robots, the benefits of evaluating user experience when interacting with social robots, and the challenges that robot developers or UX researchers may face during such assessments. The goal of the study was to summarize the extant literature in this domain by carrying out a systematic literature review following the PRISMA guidelines. The formulated search string was applied to the set of databases mentioned in the methodology section in order to retrieve relevant results. The initial results were assessed against the inclusion and exclusion criteria, and as a result, 20 papers were selected as relevant for this study. It is important to note that this is just a small portion of the existing studies on this topic. Data were extracted from these studies, and the results were analyzed and interpreted in order to answer the following research questions: (i) What has been studied about user experience in social robots operating in different domains? (ii) What are the reported techniques for assessing UX in social robots? (iii) What are the reported challenges and benefits in evaluating UX for social robots?
As discussed in this study, a positive user experience is of major importance for socially interactive robots. Assessing UX in the early stages of social robot development benefits the process by allowing immediate modifications or shifts to alternative designs, and consequently saves time and expense. There are several methods to evaluate UX in SR; the most suitable methods can be selected according to the purpose, and UX goals can steer the process towards the expected quality and direction.
Based on the findings of this study, the authors suggest further research efforts in four main directions, as follows: (i) the interdisciplinary perspective of the interaction between humans and social robots, which entails adopting concepts, methods, and practices from more mature fields, such as human factors, experimental psychology, human-computer interaction, anthropology, and ethology, and tailoring them to the field of social robotics; (ii) guidelines to support robot developers on how to define UX goals, in terms of which UX dimensions to consider for specific purposes, user needs, usage contexts, and domains; (iii) the use of a more diverse set of techniques to assess UX in social robots, beyond the conventional techniques of questionnaires and interviews, since these techniques provide after-the-fact insights about interaction quality, which may introduce biases that inevitably affect the reliability and conclusion validity of a study; and (iv) naturalistic field studies for the long-term evaluation of UX in social robots, since the transfer of outcomes from experimental settings to real-world environments is not trivial. While the authors of this study are aware of the complexities of such studies, they strongly believe in the importance of these evaluations for the further expansion of social robots beyond laboratory settings.

Author Contributions

Conceptualization, E.S.S., R.C.-P.; methodology, E.S.S.; validation, R.C.-P., H.H. and S.D.; formal analysis, E.S.S., R.C.-P., H.H. and S.D.; data curation, E.S.S. and R.C.-P.; writing—original draft preparation, E.S.S.; writing—review and editing, R.C.-P., H.H. and S.D.; supervision, R.C.-P. All authors have read and agreed to the published version of the manuscript.

Funding

This research and the APC were partially funded by the project “User-centered Security Framework for Social Robots in Public Space”, project code 321324, funded by the Norwegian Research Council.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data supporting reported results can be found at https://doi.org/10.6084/m9.figshare.14550870 (accessed on 26 July 2021).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Selected studies.
ID | Reference | Year | Title | Source
1 | [44] | 2020 | Evaluating the User Experience of Human–Robot Interaction | Book Chapter (HRI)
2 | [41] | 2019 | User Experience for Social Human-Robot Interactions | Conference (AICAI)
3 | [84] | 2019 | A Survey of Behavioral Models for Social Robots | Journal (Robotics)
4 | [67] | 2018 | Assessment of Perceived Attractiveness, Usability, and Societal Impact of a Multimodal Robotic Assistant for Aging Patients with Memory Impairments | Journal (Front. Neurol.)
5 | [74] | 2018 | Design Methodology for the UX of HRI: A Field Study of a Commercial Social Robot at an Airport | Conference (HRI)
6 | [78] | 2018 | Some Brief Thoughts on the Past and Future of Human-Robot Interaction | Journal (HRI)
7 | [42] | 2018 | Agile UX Design for a Quality User Experience | Book
8 | [81] | 2017 | Design and Evaluation of a Short Version of the User Experience Questionnaire | Journal (IJIMAI)
9 | [69] | 2016 | Long-Term Evaluation of a Telepresence Robot for the Elderly: Methodology and Ecological Case Study | Journal (Int. J. of Soc. Robotics)
10 | [87] | 2016 | Current Challenges for UX Evaluation of Human-Robot Interaction | Conference (AHFE)
11 | [85] | 2016 | Promoting Interactions Between Humans and Robots Using Robotic Emotional Behavior | Journal (IEEE Trans. Cybern.)
12 | [70] | 2015 | Walking in the uncanny valley: importance of the attractiveness on the acceptance of a robot as a working partner | Journal (Front. Psychol.)
13 | [68] | 2015 | Learning with Educational Companion Robots? Toward Attitudes on Education Robots, Predictors of Attitudes, and Application Potentials for Education Robots | Journal (Int. J. of Soc. Robotics)
14 | [86] | 2015 | Extensive assessment and evaluation methodologies on assistive social robots for modelling human–robot interaction | Journal (IS)
15 | [73] | 2014 | Inventing Japan’s ‘robotics culture’: The repeated assembly of science, technology, and culture in social robotics | Journal (Soc. Stud. Sci.)
16 | [79] | 2014 | Designing Robots in the Wild: In situ Prototype Evaluation for a Break Management Robot | Journal (HRI)
17 | [80] | 2014 | Review: Seven Matters of Concern of Social Robots and Older People | Journal (Int. J. of Soc. Robotics)
18 | [76] | 2014 | When to Use Which User-Experience Research Methods | Nielsen Norman Group
19 | [71] | 2013 | Exploring influencing variables for the acceptance of social robots | Journal (RAS)
20 | [77] | 2011 | User Experience and Experience Design | Book Chapter

References

  1. Sandry, E. (Ed.) Introduction. In Robots and Communication; Palgrave Macmillan: London, UK, 2015; pp. 1–10. [Google Scholar]
  2. Zhang, C.; Wang, W.; Xi, N.; Wang, Y.; Liu, L. Development and Future Challenges of Bio-Syncretic Robots. Engineering 2018, 4, 452–463. [Google Scholar] [CrossRef]
  3. Duffy, B.R. Anthropomorphism and the social robot. Robot. Auton. Syst. 2003, 42, 177–190. [Google Scholar] [CrossRef]
  4. Malik, A.A.; Brem, A. Digital twins for collaborative robots: A case study in human-robot interaction. Robot. Comput. Manuf. 2021, 68, 102092. [Google Scholar] [CrossRef]
  5. Graaf, D.M. Living with Robots: Investigating the User Acceptance of Social Robots in Domestic Environments. Ph.D. Thesis, University of Twente, Twente, The Netherlands, 2015. [Google Scholar]
  6. Duffy, B.R.; Rooney, C.; O’Hare, G.M.P.; O’Donoghue, R. What is a Social Robot? In Proceedings of the 10th Irish Conference on Artificial Intelligence & Cognitive Science, Cork, Ireland, 1–3 September 1999. [Google Scholar]
  7. Siciliano, B.; Khatib, O. Humanoid Robots: Historical Perspective, Overview, and Scope. In Humanoid Robotics: A Reference; Goswami, A., Vadakkepat, P., Eds.; Springer: Berlin/Heidelberg, Germany, 2019; pp. 3–8. [Google Scholar]
  8. Breazeal, C. Designing Sociable Robots. Des. Sociable Robot. 2004. [Google Scholar] [CrossRef]
  9. Malinowska, J.K. Can I Feel Your Pain? The Biological and Socio-Cognitive Factors Shaping People’s Empathy with Social Robots. Int. J. Soc. Robot. 2021, 1–15. [Google Scholar] [CrossRef]
  10. Henschel, A.; Laban, G.; Cross, E.S. What Makes a Robot Social? A Review of Social Robots from Science Fiction to a Home or Hospital Near You. Curr. Robot. Rep. 2021, 2, 9–19. [Google Scholar] [CrossRef]
  11. Kennedy, J.; Ramachandran, A.; Scassellati, B.; Tanaka, F. Social robots for education: A review. Sci. Robot. 2018, 3, eaat5954. [Google Scholar] [CrossRef] [Green Version]
  12. Lytridis, C.; Bazinas, C.; Sidiropoulos, G.; Papakostas, G.A.; Kaburlasos, V.G.; Nikopoulou, V.-A.; Holeva, V.; Evangeliou, A. Distance Special Education Delivery by Social Robots. Electronics 2020, 9, 1034. [Google Scholar] [CrossRef]
  13. Rosenberg-Kima, R.B.; Koren, Y.; Gordon, G. Robot-Supported Collaborative Learning (RSCL): Social Robots as Teaching Assistants for Higher Education Small Group Facilitation. Front. Robot. Ai 2020, 6, 148. [Google Scholar] [CrossRef] [Green Version]
  14. Berghe, R.V.D.; Verhagen, J.; Oudgenoeg-Paz, O.; van der Ven, S.; Leseman, P. Social Robots for Language Learning: A Review. Rev. Educ. Res. 2019, 89, 259–295. [Google Scholar] [CrossRef] [Green Version]
  15. Kanero, J.; Geçkin, V.; Oranç, C.; Mamus, E.; Küntay, A.; Göksun, T. Social Robots for Early Language Learning: Current Evidence and Future Directions. Child Dev. Perspect. 2018, 12, 146–151. [Google Scholar] [CrossRef] [Green Version]
  16. Belpaeme, T.; Vogt, P.; Berghe, R.V.D.; Bergmann, K.; Göksun, T.; De Haas, M.; Kanero, J.; Kennedy, J.; Küntay, A.; Oudgenoeg-Paz, O.; et al. Guidelines for Designing Social Robots as Second Language Tutors. Int. J. Soc. Robot. 2018, 10, 325–341. [Google Scholar] [CrossRef] [Green Version]
  17. Logan, D.E.; Breazeal, C.; Goodwin, M.S.; Jeong, S.; O’Connell, B.; Smith-Freedman, D.; Heathers, J.; Weinstock, P. Social Robots for Hospitalized Children. Pediatrics 2019, 144, e20181511. [Google Scholar] [CrossRef] [PubMed]
  18. Robinson, N.L.; Cottier, T.V.; Kavanagh, D.J. Psychosocial Health Interventions by Social Robots: Systematic Review of Randomized Controlled Trials. J. Med Internet Res. 2019, 21, e13203. [Google Scholar] [CrossRef] [PubMed]
  19. Chen, S.; Jones, C.; Moyle, W. Social Robots for Depression in Older Adults: A Systematic Review. J. Nurs. Sch. 2018, 50, 612–622. [Google Scholar] [CrossRef] [Green Version]
  20. Scoglio, A.A.; Reilly, E.D.; A Gorman, J.; E Drebing, C. Use of Social Robots in Mental Health and Well-Being Research: Systematic Review. J. Med. Internet Res. 2019, 21, e13322. [Google Scholar] [CrossRef]
  21. Pu, L.; Moyle, W.; Jones, C.; Todorovic, M. The Effectiveness of Social Robots for Older Adults: A Systematic Review and Meta-Analysis of Randomized Controlled Studies. Gerontologist 2019, 59, e37–e51. [Google Scholar] [CrossRef]
  22. De Kervenoael, R.; Hasan, R.; Schwob, A.; Goh, E. Leveraging human-robot interaction in hospitality services: Incorporating the role of perceived value, empathy, and information sharing into visitors’ intentions to use social robots. Tour. Manag. 2020, 78, 104042. [Google Scholar] [CrossRef]
  23. Nakanishi, J.; Kuramoto, I.; Baba, J.; Ogawa, K.; Yoshikawa, Y.; Ishiguro, H. Continuous Hospitality with Social Robots at a hotel. SN Appl. Sci. 2020, 2, 1–13. [Google Scholar] [CrossRef] [Green Version]
  24. Huang, H.-L.; Cheng, L.-K.; Sun, P.-C.; Chou, S.-J. The Effects of Perceived Identity Threat and Realistic Threat on the Negative Attitudes and Usage Intentions Toward Hotel Service Robots: The Moderating Effect of the Robot’s Anthropomorphism. Int. J. Soc. Robot. 2021, 1–13. [Google Scholar] [CrossRef]
  25. Fuentes-Moraleda, L.; Díaz-Pérez, P.; Orea-Giner, A.; Mazón, A.M.-; Villacé-Molinero, T. Interaction between hotel service robots and humans: A hotel-specific Service Robot Acceptance Model (sRAM). Tour. Manag. Perspect. 2020, 36, 100751. [Google Scholar] [CrossRef]
  26. Garcia-Haro, J.M.; Oña, E.D.; Hernandez-Vicen, J.; Martinez, S.; Balaguer, C. Service Robots in Catering Applications: A Review and Future Challenges. Electronics 2020, 10, 47. [Google Scholar] [CrossRef]
  27. De Boer, S.; Jansen, B.; Bustos, V.M.; Prinse, M.; Horwitz, Y.; Hoorn, J.F. Social Robotics in Eastern and Western Newspapers: China and (Even) Japan are Optimistic. Int. J. Innov. Technol. Manag. 2021, 18, 2040001. [Google Scholar] [CrossRef]
  28. Horstmann, A.C.; Krämer, N.C. Great Expectations? Relation of Previous Experiences with Social Robots in Real Life or in the Media and Expectancies Based on Qualitative and Quantitative Assessment. Front. Psychol. 2019, 10, 939. [Google Scholar] [CrossRef] [Green Version]
  29. Čaić, M.; Mahr, D.; Oderkerken-Schröder, G. Value of social robots in services: Social cognition perspective. J. Serv. Mark. 2019, 33, 463–478. [Google Scholar] [CrossRef]
  30. Chi, O.H.; Jia, S.; Li, Y.; Gursoy, D. Developing a formative scale to measure consumers’ trust toward interaction with artificially intelligent (AI) social robots in service delivery. Comput. Hum. Behav. 2021, 118, 106700. [Google Scholar] [CrossRef]
  31. Mubin, O.; Ahmad, M.I.; Kaur, S.; Shi, W.; Khan, A. Social Robots in Public Spaces: A Meta-review. In Social Robotics; Ge, S.S., Cabibihan, J.-J., Salichs, M.A., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2018; pp. 213–220. [Google Scholar]
  32. Pieterson, W.; Ebbers, W.; Madsen, C.Ø. New Channels, New Possibilities: A Typology and Classification of Social Robots and Their Role in Multi-channel Public Service Delivery. In Electronic Government; Janssen, M., Axelsson, K., Glassey, O., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2018; pp. 47–59. [Google Scholar]
  33. Thunberg, S.; Ziemke, T. Are People Ready for Social Robots in Public Spaces? In Proceedings of the Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, Cambridge, UK, 23–26 March 2020; pp. 482–484. [Google Scholar]
  34. Mintrom, M.; Sumartojo, S.; Kulić, D.; Tian, L.; Carreno-Medrano, P.; Allen, A. Robots in public spaces: Implications for policy design. Policy Des. Pract. 2021, 1–16. [Google Scholar] [CrossRef]
  35. Aymerich-Franch, L.; Ferrer, I. Social Robots as a Brand Strategy. In Innovation in Advertising and Branding Communication; Routledge: New York, NY, USA, 2020; pp. 86–102. [Google Scholar]
  36. Leite, I.; Martinho, C.; Paiva, A. Social Robots for Long-Term Interaction: A Survey. Int. J. Soc. Robot. 2013, 5, 291–308. [Google Scholar] [CrossRef]
  37. Mandal, F.B. Nonverbal Communication in Humans. J. Hum. Behav. Soc. Env. 2014, 24, 417–421. [Google Scholar] [CrossRef]
  38. Yamazaki, A.; Yamazaki, K.; Kuno, Y.; Burdelski, M.; Kawashima, M.; Kuzuoka, H. Precision Timing in Human-Robot Interaction: Coordination of Head Movement and Utterance. In Proceedings of the CHI '08: CHI Conference on Human Factors in Computing Systems, Florence, Italy, 5–10 April 2008; pp. 131–140. [Google Scholar]
  39. Breazeal, C.; Scassellati, B. Robots that imitate humans. Trends Cogn. Sci. 2002, 6, 481–487. [Google Scholar] [CrossRef]
  40. Lambert, A.; Norouzi, N.; Bruder, G.; Welch, G. A Systematic Review of Ten Years of Research on Human Interaction with Social Robots. Int. J. Hum. Comput. Interact. 2020, 36, 1804–1817. [Google Scholar] [CrossRef]
  41. van Greunen, D. User Experience for Social Human-Robot Interactions. In Proceedings of the Amity International Conference on Artificial Intelligence (AICAI), Dubai, United Arab Emirates, 4–6 February 2019; pp. 32–36. [Google Scholar]
  42. Hartson, R.; Pyla, P.S. The UX Book: Agile UX Design for a Quality User Experience; Morgan Kaufmann: Cambridge, MA, USA, 2018. [Google Scholar]
  43. ISO 8968-1:Milk and Milk Products—Determination of Nitrogen Content—Part 1: Kjeldahl Principle and Crude Protein Calculation. Available online: https://www.iso.org/obp/ui/#iso:std:iso:8968:-1:ed-2:v1:en (accessed on 12 January 2021).
  44. Lindblom, J.; Alenljung, B.; Billing, E. Evaluating the User Experience of Human–Robot Interaction. In Springer Series on Bio- and Neurosystems; Springer Science and Business Media LLC: Berlin/Heidelberg, Germany, 2020; pp. 231–256. [Google Scholar]
  45. Maia, C.L.B.; Furtado, E. A Systematic Review About User Experience Evaluation. In Transactions on Petri Nets and Other Models of Concurrency XV; Springer Science and Business Media LLC: Berlin/Heidelberg, Germany, 2016; pp. 445–455. [Google Scholar]
  46. Moher, D.; Shamseer, L.; Clarke, M.; Ghersi, D.; Liberati, A.; Petticrew, M.; Shekelle, P.; Stewart, L.A.; PRISMA-P Group. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst. Rev. 2015, 4, 1. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  47. Shamseer, L.; Moher, D.; Clarke, M.; Ghersi, D.; Liberati, A.; Petticrew, M.; Shekelle, P.; Stewart, L.A.; PRISMA-P Group. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: Elaboration and explanation. BMJ Br. Med. J. 2015, 349, g7647. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  48. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Moher, D. Updating guidance for reporting systematic reviews: Development of the PRISMA 2020 statement. J. Clin. Epidemiol. 2021, 134, 103–112. [Google Scholar] [CrossRef] [PubMed]
  49. Şalvarlı, Ş.İ.; Griffiths, M.D. Internet Gaming Disorder and Its Associated Personality Traits: A Systematic Review Using PRISMA Guidelines. Int. J. Ment. Health Addict. 2019, 1–23. [Google Scholar] [CrossRef] [Green Version]
  50. Gaikwad, M.; Ahirrao, S.; Phansalkar, S.; Kotecha, K. Online Extremism Detection: A Systematic Literature Review with Emphasis on Datasets, Classification Techniques, Validation Methods and Tools. IEEE Access 2021, 9, 48364–48404. [Google Scholar] [CrossRef]
  51. Frizzo-Barker, J.; Chow-White, P.A.; Adams, P.R.; Mentanko, J.; Ha, D.; Green, S. Blockchain as a disruptive technology for business: A systematic review. Int. J. Inf. Manag. 2020, 51, 102029. [Google Scholar] [CrossRef]
  52. Çetin, M.; Demircan, H.Ö. Empowering technology and engineering for STEM education through programming robots: A systematic literature review. Early Child Dev. Care 2020, 190, 1323–1335. [Google Scholar] [CrossRef]
  53. Savela, N.; Turja, T.; Oksanen, A. Social Acceptance of Robots in Different Occupational Fields: A Systematic Literature Review. Int. J. Soc. Robot. 2018, 10, 493–502. [Google Scholar] [CrossRef] [Green Version]
  54. Gualtieri, L.; Rauch, E.; Vidoni, R. Emerging research fields in safety and ergonomics in industrial collaborative robotics: A systematic literature review. Robot. Comput. Manuf. 2021, 67, 101998. [Google Scholar] [CrossRef]
  55. Buettner, R.; Renner, A.; Boos, A. A Systematic Literature Review of Research in the Surgical Field of Medical Robotics. In Proceedings of the 2020 IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC), Madrid, Spain, 13–17 July 2020; pp. 517–522. [Google Scholar]
  56. Tijani, B.; Feng, Y. Social Impacts of Adopting Robotics in the Construction Industry: A Systematic Literature Review. In Proceedings of the 23rd International Symposium on Advancement of Construction Management and Real Estate; Long, F., Zheng, S., Wu, Y., Eds.; Springer: Singapore, 2021; pp. 668–680. [Google Scholar]
  57. Schulz, T.; Torresen, J.; Herstad, J. Animation Techniques in Human-Robot Interaction User Studies. ACM Trans. Hum. Robot Interact. 2019, 8, 1–22. [Google Scholar] [CrossRef] [Green Version]
  58. Zafrani, O.; Nimrod, G. Towards a Holistic Approach to Studying Human–Robot Interaction in Later Life. Gerontologist 2018, 59, e26–e36. [Google Scholar] [CrossRef]
  59. Nelles, J.; Kwee-Meier, S.T.; Mertens, A. Evaluation Metrics Regarding Human Well-Being and System Performance in Human-Robot Interaction–A Literature Review. In Proceedings of the 20th Congress of the International Ergonomics Association (IEA 2018); Bagnara, S., Tartaglia, R., Albolino, S., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2019; pp. 124–135. [Google Scholar]
  60. Esterwood, C.; Robert, L.P. Personality in Healthcare Human Robot Interaction (H-HRI): A Literature Review and Brief Critique. In Proceedings of the 8th International Conference on Human-Agent Interaction; Association for Computing Machinery: New York, NY, USA, 2020; pp. 87–95. [Google Scholar]
  61. González-González, C.S.; Gil-Iranzo, R.M.; Paderewski-Rodríguez, P. Human–Robot Interaction and Sexbots: A Systematic Literature Review. Sensors 2021, 21, 216. [Google Scholar] [CrossRef] [PubMed]
  62. Lin, C.-C.; Liao, H.-Y.; Tung, F.-W. Design Guidelines of Social-Assisted Robots for the Elderly: A Mixed Method Systematic Literature Review. In Late Breaking Papers: Cognition, Learning and Games; Stephanidis, C., Harris, D., Li, W.-C., Eds.; HCI International 2020; Springer International Publishing: Berlin/Heidelberg, Germany, 2020; pp. 90–104. [Google Scholar]
  63. Kitchenham, B. Procedures for Performing Systematic Reviews; Keele University: Keele, UK, 2004; pp. 1–26. [Google Scholar]
  64. Petersen, K.; Feldt, R.; Mujtaba, S.; Mattsson, M. Systematic Mapping Studies in Software Engineering. In Proceedings of the International Conference on Evaluation and Assessment in Software Engineering (EASE), Bari, Italy, 26–27 June 2008; pp. 1–10. [Google Scholar]
  65. Garousi, V.; Felderer, M.; Mäntylä, M.V. Guidelines for including grey literature and conducting multivocal literature reviews in software engineering. Inf. Softw. Technol. 2019, 106, 101–121. [Google Scholar] [CrossRef] [Green Version]
  66. Kitchenham, B.; Charters, S. Guidelines for Performing Systematic Literature Reviews in Software Engineering; Keele University: Keele, UK, 2007. [Google Scholar]
  67. Gerłowska, J.; Skrobas, U.; Grabowska-Aleksandrowicz, K.; Korchut, A.; Szklener, S.; Szczęśniak-Stańczyk, D.; Tzovaras, D.; Rejdak, K. Assessment of Perceived Attractiveness, Usability, and Societal Impact of a Multimodal Robotic Assistant for Aging Patients with Memory Impairments. Front. Neurol. 2018, 9. [Google Scholar] [CrossRef]
  68. Reich-Stiebert, N.; Eyssel, F. Learning with Educational Companion Robots? Toward Attitudes on Education Robots, Predictors of Attitudes, and Application Potentials for Education Robots. Int. J. Soc. Robot. 2015, 7, 875–888. [Google Scholar] [CrossRef]
  69. Cesta, A.; Cortellessa, G.; Orlandini, A.; Tiberio, L. Long-Term Evaluation of a Telepresence Robot for the Elderly: Methodology and Ecological Case Study. Int. J. Soc. Robot. 2016, 8, 421–441. [Google Scholar] [CrossRef] [Green Version]
  70. Destephe, M.; Brandão, M.; Kishi, T.; Zecca, M.; Hashimoto, K.; Takanishi, A. Walking in the uncanny valley: Importance of the attractiveness on the acceptance of a robot as a working partner. Front. Psychol. 2015, 6, 204. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  71. De Graaf, M.M.; Ben Allouch, S. Exploring influencing variables for the acceptance of social robots. Robot. Auton. Syst. 2013, 61, 1476–1486. [Google Scholar] [CrossRef]
  72. Korn, O.; Akalin, N.; Gouveia, R. Understanding Cultural Preferences for Social Robots. ACM Trans. Hum. Robot Interact. 2021, 10, 1–19. [Google Scholar] [CrossRef]
  73. Šabanović, S. Inventing Japan’s ‘robotics culture’: The repeated assembly of science, technology, and culture in social robotics. Soc. Stud. Sci. 2014, 44, 342–367. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  74. Tonkin, M.; Vitale, J.; Herse, S.; Williams, M.A.; Judge, W.; Wang, X. Design Methodology for the UX of HRI: A Field Study of a Commercial Social Robot at an Airport. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction; ACM: Chicago, IL, USA, 2018; pp. 407–415. [Google Scholar]
  75. Gothelf, J.; Seiden, J. Lean UX: Designing Great Products with Agile Teams; O’Reilly Media, Inc.: Newton, MA, USA, 2016. [Google Scholar]
  76. Rohrer, C. When to Use Which User-Experience Research Methods; Nielsen Norman Group: Fremont, CA, USA, 2014. [Google Scholar]
  77. Hassenzahl, M. User Experience and Experience Design; Interaction Design Foundation: Aarhus, Denmark, 2011. [Google Scholar]
  78. Dautenhahn, K. Some Brief Thoughts on the Past and Future of Human-Robot Interaction. ACM Trans. Hum. Robot Interact. 2018, 7, 1–3. [Google Scholar] [CrossRef] [Green Version]
  79. Šabanović, S.; Reeder, S.M.; Kechavarzi, B. Designing Robots in the Wild: In situ Prototype Evaluation for a Break Management Robot. J. Hum. Robot Interact. 2014, 3, 70–88. [Google Scholar] [CrossRef] [Green Version]
  80. Frennert, S.; Östlund, B. Review: Seven Matters of Concern of Social Robots and Older People. Int. J. Soc. Robot. 2014, 6, 299–310. [Google Scholar] [CrossRef]
  81. Schrepp, M.; Hinderks, A.; Thomaschewski, J. Design and Evaluation of a Short Version of the User Experience Questionnaire (UEQ-S). Int. J. Interact. Multimed. Artif. Intell. 2017, 4, 103. [Google Scholar] [CrossRef] [Green Version]
  82. Schrepp, M. User Experience Questionnaire Handbook; Version 2; SAP Research: Walldorf, Germany, 2016. [Google Scholar]
  83. Nomura, T.; Kanda, T.; Suzuki, T. Experimental investigation into influence of negative attitudes toward robots on human–robot interaction. AI Soc. 2005, 20, 138–150. [Google Scholar] [CrossRef]
  84. Nocentini, O.; Fiorini, L.; Acerbi, G.; Sorrentino, A.; Mancioppi, G.; Cavallo, F. A Survey of Behavioral Models for Social Robots. Robotics 2019, 8, 54. [Google Scholar] [CrossRef] [Green Version]
  85. Ficocelli, M.; Terao, J.; Nejat, G. Promoting Interactions Between Humans and Robots Using Robotic Emotional Behavior. IEEE Trans. Cybern. 2016, 46, 2911–2923. [Google Scholar] [CrossRef]
  86. Sim, D.Y.Y.; Loo, C.K. Extensive assessment and evaluation methodologies on assistive social robots for modelling human–robot interaction—A review. Inf. Sci. 2015, 301, 305–344. [Google Scholar] [CrossRef]
  87. Lindblom, J.; Andreasson, R. Current Challenges for UX Evaluation of Human-Robot Interaction. In Advances in Ergonomics of Manufacturing: Managing the Enterprise of the Future; Schlick, C., Trzcieliński, S., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2016; pp. 267–277. [Google Scholar]
  88. Cao, H.-L.; Esteban, P.G.; De Beir, A.; Simut, R.; Van De Perre, G.; Lefeber, D.; VanderBorght, B. A Survey on Behavior Control Architectures for Social Robots in Healthcare Interventions. Int. J. Hum. Robot. 2017, 14, 1750021. [Google Scholar] [CrossRef]
  89. Amanatiadis, A.; Kaburlasos, V.; Dardani, C.; Chatzichristofis, S. Interactive social robots in special education. In Proceedings of the 2017 IEEE 7th International Conference on Consumer Electronics-Berlin (ICCE-Berlin), Berlin, Germany, 3–6 September 2017; pp. 126–129. [Google Scholar]
  90. Niemelä, M.; Heikkilä, P.; Lammi, H.; Oksman, V. A Social Robot in a Shopping Mall: Studies on Acceptance and Stakeholder Expectations. In Social Robots: Technological, Societal and Ethical Aspects of Human-Robot Interaction; Korn, O., Ed.; Springer International Publishing: Berlin/Heidelberg, Germany, 2019; pp. 119–144. [Google Scholar]
  91. De Carolis, B.; Palestra, G.; Della Penna, C.; Cianciotta, M.; Cervelione, A. Social robots supporting the inclusion of unaccompanied migrant children: Teaching the meaning of culture-related gestures. J. E-Learning Knowl. Soc. 2019, 15, 43–57. [Google Scholar]
  92. Edwards, A.; Edwards, C.; Westerman, D.; Spence, P.R. Initial expectations, interactions, and beyond with social robots. Comput. Hum. Behav. 2019, 90, 308–314. [Google Scholar] [CrossRef]
  93. Miler, J.; Menjega-Schmidt, M. Evaluation of Selected UX Techniques by Product Managers—A Preliminary Survey. In Integrating Research and Practice in Software Engineering; Jarzabek, S., Poniszewska-Marańda, A., Madeyski, L., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2020; pp. 159–169. [Google Scholar]
  94. Alomari, H.W.; Ramasamy, V.; Kiper, J.D.; Potvin, G. A User Interface (UI) and User eXperience (UX) evaluation framework for cyberlearning environments in computer science and software engineering education. Heliyon 2020, 6, e03917. [Google Scholar] [CrossRef] [PubMed]
  95. Biduski, D.; Bellei, E.A.; Rodriguez, J.P.M.; Zaina, L.A.M.; De Marchi, A.C.B. Assessing long-term user experience on a mobile health application through an in-app embedded conversation-based questionnaire. Comput. Hum. Behav. 2020, 104, 106169. [Google Scholar] [CrossRef]
  96. Lindblom, J.; Alenljung, B. The ANEMONE: Theoretical Foundations for UX Evaluation of Action and Intention Recognition in Human-Robot Interaction. Sensors 2020, 20, 4284. [Google Scholar] [CrossRef] [PubMed]
  97. Chavan, A.L.; Prabhu, G. Should We Measure UX Differently? In Design, User Experience, and Usability. Interaction Design; Marcus, A., Rosenzweig, E., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2020; pp. 176–187. [Google Scholar]
  98. Zhou, X.; Jin, Y.; Zhang, H.; Li, S.; Huang, X. A Map of Threats to Validity of Systematic Literature Reviews in Software Engineering. In Proceedings of the 2016 23rd Asia-Pacific Software Engineering Conference (APSEC), Hamilton, New Zealand, 6–9 December 2016; pp. 153–160. [Google Scholar]
  99. Wohlin, C.; Runeson, P.; Höst, M.; Ohlsson, M.; Regnell, B.; Wesslén, A. Experimentation in Software Engineering; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
Figure 1. PRISMA flow diagram of study selection process.
Figure 2. Frequency of studies per period of time.
Table 1. Studies used to answer RQ1.
Study | Publication Year | Author | Scope
[41] | 2019 | D. van Greunen | Social Robots Application Areas
[69] | 2016 | A. Cesta et al. | Evaluation of a Telepresence Social Robot Assistant for the Elderly
[67] | 2018 | J. Gerłowska et al. | Assessment of the Impact of a Robotic Assistant for Aging Patients with Memory Impairments
[70] | 2015 | M. Destephe et al. | Assessment of UX and acceptance of a robot as a working partner
[68] | 2015 | N. Reich-Stiebert et al. | Education Robots and teacher assistance
[73] | 2014 | S. Šabanović | Integration of Social Robots and cultural and traditional themes
[74] | 2018 | M. Tonkin et al. | Implementation of a Commercial Social Robot at an Airport
[75] | 2016 | J. Gothelf et al. | Integration of Lean UX with HRI research
Table 2. NARS Items with Subscales.
No. | Questionnaire Item | Sub-Scale
1 | I would feel uneasy if robots really had emotions. | S2
2 | Something bad might happen if robots developed into living beings. | S2
3 | I would feel relaxed talking with robots. * | S3
4 | I would feel uneasy if I was given a job where I had to use robots. | S1
5 | If robots had emotions, I would be able to make friends with them. * | S3
6 | I feel comforted being with robots that have emotions. * | S3
7 | The word “robot” means nothing to me. | S1
8 | I would feel nervous operating a robot in front of other people. | S1
9 | I would hate the idea that robots or artificial intelligences were making judgements about things. | S1
10 | I would feel very nervous just standing in front of a robot. | S1
11 | I feel that if I depend on robots too much, something bad might happen. | S2
12 | I would feel paranoid talking with a robot. | S1
13 | I am concerned that robots would be a bad influence on children. | S2
14 | I feel that in the future, society will be dominated by robots. | S2
* Reversed item.
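In practice, NARS responses are aggregated by summing the item ratings within each sub-scale, with the starred items reverse-coded. The following minimal Python sketch illustrates this, assuming 5-point Likert responses (1 = strongly disagree, 5 = strongly agree); the function name and response format are illustrative and not part of the original instrument, and the item-to-sub-scale mapping is taken directly from Table 2.

```python
# Minimal sketch of NARS sub-scale scoring, assuming 5-point Likert responses (1-5).
# Item-to-sub-scale mapping follows Table 2; starred items (3, 5, 6) are reverse-coded.

SUBSCALES = {
    "S1": [4, 7, 8, 9, 10, 12],
    "S2": [1, 2, 11, 13, 14],
    "S3": [3, 5, 6],
}
REVERSED_ITEMS = {3, 5, 6}

def nars_subscale_scores(responses):
    """responses: dict mapping item number (1-14) to a rating from 1 to 5."""
    def coded(item):
        raw = responses[item]
        return 6 - raw if item in REVERSED_ITEMS else raw  # reverse-code starred items
    return {name: sum(coded(i) for i in items) for name, items in SUBSCALES.items()}

# Example: a respondent answering the neutral midpoint (3) on every item.
print(nars_subscale_scores({i: 3 for i in range(1, 15)}))  # {'S1': 18, 'S2': 15, 'S3': 9}
```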
Table 3. UX evaluation method used in each study.
Study | Publication Year | Author(s) | Method
[69] | 2016 | Cesta et al. | Multidimensional Assessment of Telepresence Robot (MARTA)
[67] | 2018 | Gerłowska et al. | User Experience Questionnaire (UEQ) and survey
[70] | 2015 | Destephe et al. | MacDorman questionnaire, personality questionnaire, and Ho's questionnaire
[73] | 2014 | Šabanović et al. | In situ evaluation (pre- and post-interview, online questionnaire, self-report, final focus group)
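Since the UEQ appears among the methods above, a brief sketch of how its short form (UEQ-S [81]) is commonly aggregated may be useful. The sketch below assumes the 8-item short version with 7-point answers recoded to the −3…+3 range, the first four items contributing to pragmatic quality and the last four to hedonic quality, and all items oriented with the positive term at 7; item polarity and scale assignment should be verified against the handbook [82].

```python
# Sketch of UEQ-S scale aggregation under the assumptions stated above:
# eight items, raw answers 1-7 recoded to -3..+3, items 1-4 = pragmatic quality,
# items 5-8 = hedonic quality, overall = mean of all eight recoded items.

from statistics import mean

def ueq_s_scores(answers):
    """answers: eight raw item ratings (1-7), in questionnaire order."""
    assert len(answers) == 8 and all(1 <= a <= 7 for a in answers)
    recoded = [a - 4 for a in answers]  # map 1..7 onto -3..+3
    return {
        "pragmatic_quality": mean(recoded[:4]),  # items 1-4
        "hedonic_quality": mean(recoded[4:]),    # items 5-8
        "overall": mean(recoded),
    }

# Example: a moderately positive set of responses.
print(ueq_s_scores([6, 5, 6, 6, 5, 5, 6, 5]))
```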
Table 4. Studies used to answer RQ3.
Benefits and Challenges of UX in Social Robots | Studies
Benefits: Early-stage feedback | [42,44,77]
Benefits: For developers | [42,44]
Challenges: First UX | [87]
Challenges: Relevant UX goals | [87]
Challenges: Limited assessment | [69,78]