Article

A Literature Survey of How to Convey Transparency in Co-Located Human–Robot Interaction

LMU Munich, Frauenlobstr. 7a, 80337 Munich, Germany
* Author to whom correspondence should be addressed.
Multimodal Technol. Interact. 2023, 7(3), 25; https://doi.org/10.3390/mti7030025
Submission received: 10 February 2023 / Revised: 21 February 2023 / Accepted: 22 February 2023 / Published: 25 February 2023
(This article belongs to the Special Issue Challenges in Human-Centered Robotics)

Abstract

In human–robot interaction, transparency is essential to ensure that humans understand and trust robots. Understanding is vital from an ethical perspective and benefits interaction, e.g., through appropriate trust. While there is research on explanations and their content, the methods used to convey these explanations are underexplored. It remains unclear which approaches are used to foster understanding. To this end, we contribute a systematic literature review exploring how robot transparency is fostered in papers published in the ACM Digital Library and IEEE Xplore. We found that researchers predominantly rely on monomodal visual or verbal explanations to foster understanding. Commonly, these explanations are external, as opposed to being integrated into the robot design. This paper provides an overview of how transparency is communicated in human–robot interaction research and derives a classification with concrete recommendations for communicating transparency. Our results establish a solid base for consistent, transparent human–robot interaction designs.

1. Introduction

Robots that interact with humans should be understandable. Putting this into practice by making robot behavior and intent transparent is one of the key challenges in human–robot interaction (HRI) research today [1]. In this paper, we refer to transparency in the context of ensuring understanding between interaction partners—human and robot—in line with Wortham et al. [2].
It is important to focus on robots specifically when investigating the understanding of technologies, as they are complex systems that physically interact with humans. Transparency affects both the interaction with and the perception of robots. An accurate understanding of robot capabilities and goals improves interactions, especially when it comes to the decision of whether or not to trust the robot [3,4]. Further, system transparency follows user-centered design [3] and is ethical, as knowledge of robot capabilities, purpose, and internal decision making gives users the choice of whether and how they want to interact with the system [5]. Establishing the right amount of system transparency for successful interactions, however, is a challenging balancing problem: while high transparency is vital, simply providing an abundance of information can overwhelm users [6]. In a recent paper, Cila [7] defined eleven design considerations for human–agent collaboration, highlighting, among other aspects, the intelligibility of robot behavior and intentions and transparent collaboration qualities. Cila argues that the form of an explanation is vital for it to actually raise intelligibility and transparency. Based on this, we theorize that intelligibility is affected by how additional information on the robot is provided.
Until now, the primary focus of transparency research has been the explanation content, i.e., what to explain. However, as communication science shows, the perception and success of a conversation depend on more than just the content of an explanation. The techniques used for creating human–robot understanding are case specific, so every designer or researcher interested in the effects of transparency is required to implement transparency themselves [8]. This (a) makes it harder for designers of robotic systems, as they lack a reference for how to design understandable robots, and (b) makes it harder for researchers to study the effect of transparency on HRI due to the differing realizations of the same concept.
The first step in establishing such guidelines is to acquire an overview of prior work in transparency in HRI. Furthermore, as the topic is hard to grasp due to inconsistent terminology [3,9,10], identifying research gaps is not a trivial task. We thus conducted a structured literature review following PRISMA [11] to obtain an overview of methods currently used in the HRI community for making robots transparent, and to identify relevant research gaps.
The primary aim of this research is to find out how explanations can be presented to foster human–robot understanding in co-located interactions between a robot and a human. We thus answer the research question: Which approaches have been used to create transparency in HRI? As part of this, we explore which modalities are used to achieve transparent robot design.
Contribution: To capture how transparency is ensured to foster understanding in HRI, we conducted a systematic literature review of relevant papers on transparency in HRI. The main contributions of this paper are as follows:
  • We provide a structured overview of approaches commonly utilized in HRI research to make robots transparent to the human interaction partner. To the best of our knowledge, no such overview exists so far.
  • We identify open challenges for further advancement of understandable HRI.
  • We highlight the importance of explanation delivery as an extension of explanation content and as a promising research area.
  • We introduce a framework for HRI research that organizes the tools and approaches available to researchers and designers looking to build transparency into their robotic systems. Our framework is novel compared to previous work on transparent technology because it explores the modalities and functionalities specific to embodied robot technology.

2. Background

2.1. Definitions of Transparency

In this work, we are interested in the concept of robots that are made transparent to the user to ensure understanding between the interaction partners. Works that focus on transparency need to specify which concept they are investigating, as the word transparency can have multiple meanings in the context of human–robot interaction. For example, transparent materials can be used in robot design, and robots may need to adequately interpret translucency (e.g., [12]). The word transparency is also used to describe the absence of interaction force, making the robot more invisible to the user (e.g., [13,14]). In human–computer interaction (HCI), a system is considered transparent if it is designed in a way that ensures that the user's mental model of the system is correct (e.g., [15]). A mental model is the representation of a system or process in the mind of a user [16]. While transparency refers to the system itself, understanding is the intended resulting phenomenon in the user.
In the context of understandable HRI, there is no one true definition for transparency [9], but definitions overlap depending on the aspect of the robot that is supposed to be made transparent. Upon analyzing different existing definitions, Theodorou et al. [9] pointed out that transparency is a way for the robot to offer information revealing its decision making process in a way that is understandable for the user. According to Wallkötter and Tulli et al. [10], explainability describes “the ability of a robot to provide information about its inner workings, such that an observer (target user) can infer how/why a robot behaves the way it does”. Transparency thus facilitates “understanding of what the system is doing, why, and what it will do next” [3].

2.2. Terminology

There is no uniform terminology for the concept of transparent technology. Gross and Boehme [17], for example, aimed for understandable HRI without ever utilizing the word transparent. For general intelligent systems, Eiband et al. [18] compiled different terms used in literature and explained the nuanced differences between them, ranging from explainability to scrutability. For human–robot interaction specifically, the different terms that are in use to describe similar concepts have not been analyzed. We thus use the synonyms specified by Eiband et al. [18] for the literature search.

2.3. Explanation Content

The decision of what to explain to achieve transparent HRI has been thoroughly researched. Robots are (usually) designed for a specific task or purpose [6] and can make human users aware of the intentions that follow from this inherent purpose [6,19]. To execute these tasks, robots are designed with certain capabilities and execute distinct behaviors. Wagner et al. [20] argued that overtrust, which happens when a user overestimates a system's capabilities, could be reduced if people had a more complete understanding of technology behavior. They argue that anthropomorphic robots, in particular, should be avoided, as humans transfer their understanding of human capabilities to these human-like robots. The mental model users have of a robot is generally derived from its physical attributes [21]; for example, Powers and Kiesler [21] investigated how robot head shape and voice pitch determine the mental model, i.e., whether the robot is perceived as sociable and knowledgeable based on gender stereotypes. The inherent design of the robot can thus change the users' understanding of it. Beyond design, HRI researchers have used explainable artificial intelligence (XAI) methods, which aim to explain the black-box algorithms involved in machine learning [22]. In a systematic literature review, Anjomshoae et al. [23] investigated the explanations of AI decision making provided by XAI as a means to generate understandable explanations for robotic agents.

2.4. Previous Literature Reviews

Literature reviews are a well-established method to compile, analyze and uncover research gaps from existing knowledge. In HRI, literature reviews have been used to, for example, investigate trust and its influence factors [24] or robot personality [25]. To the best of our knowledge, there are no existing structured literature reviews on how to convey transparency in human–robot interaction.
Alonso and de la Puente provided a review on transparency in shared autonomy frameworks [3]. They focused on categorizing different perspectives of system transparency. In contrast, our main focus is on the actual realization of system transparency.
Further reviews adjacent to transparency in HRI focus on the control of robots [26] and teleoperation strategies [27]. In teleoperation, human operators need to receive feedback from the remote robot via UIs, since the inherent information of face-to-face interaction is not available [28]. The robot needs to communicate information on its perception of the environment to inform the human of possible changes or obstacles that could impair its otherwise given capabilities [6,19,29]. As teleoperation within transparent HRI has already been thoroughly covered, we exclude this area from our study.
Probably closest to our work is the comprehensive review by Wallkötter and Tulli et al. [10], who reviewed the social cues embodied agents use to make themselves explainable. They found that existing papers primarily use speech, text, movement, and imagery to make agents explainable, and they further analyzed the terminology used for transparent HRI. With the same questions in mind, we were curious to see whether their results could be reproduced with a systematic review following PRISMA. We validate their insights and present a number of additional dimensions.

3. Method

To work toward a comprehensive overview of how transparency is conveyed in human–robot interaction, we conducted a systematic literature review following PRISMA [30]. One researcher conducted the identification and abstract screening. This author and an additional second researcher both conducted the eligibility and analysis phase. The process is shown in Figure 1.

3.1. Phase I: Identification

We queried two libraries: the ACM Digital Library and IEEE Xplore. These libraries contain the majority of HRI publications and the most cited venues on human–computer interaction and robotics (https://scholar.google.com/citations?view_op=top_venues&hl=de&vq=eng_humancomputerinteraction (last accessed on 15 September 2022)), such as computer–human interaction, the International Conference on Human–Robot Interaction, or the International Conference on Robotics and Automation. The search query used was (“HRI” OR “human-robot interaction” OR robot*) AND (transparen* OR explaina* OR “understandable robot” OR “understandable interaction” OR intelligib* OR interpretab* OR scrutab*). This query contains synonyms for transparency like explainability and intelligibility, inspired by the overview of system qualities that foster understanding in intelligent systems by Eiband et al. [18]. The concept of understanding was not queried using wildcards to avoid the retrieval of papers that aim to understand a different concept in HRI. Similarly, we queried the abstract and title instead of the full text to avoid papers where transparent HRI is not central to the publication.
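To make the retrieval step concrete, the following Python sketch mirrors the boolean query as a filter over a paper's title and abstract. The function and variable names are our own illustration; the actual searches were executed via the query interfaces of the two libraries.

```python
import re

# A minimal sketch of the query logic, assuming a paper is represented by
# its title and abstract as plain strings. This filter only mirrors the
# boolean structure and wildcards of the query for illustration.
ROBOT_TERMS = re.compile(r"\bHRI\b|human-robot interaction|robot", re.IGNORECASE)
TRANSPARENCY_TERMS = re.compile(
    r"transparen|explaina|understandable robot|understandable interaction"
    r"|intelligib|interpretab|scrutab",
    re.IGNORECASE,
)

def matches_query(title: str, abstract: str) -> bool:
    """True if the title and abstract satisfy both halves of the query."""
    text = f"{title} {abstract}"
    return bool(ROBOT_TERMS.search(text) and TRANSPARENCY_TERMS.search(text))
```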
As shown in Figure 1, this query resulted in 1126 publications. We then removed duplicates via DOI matching, leaving 1106 unique entries. Twelve additional papers that did not result from the keyword search were later included in the list of eligible papers; these were cited as vital sources by extracted, relevant papers.
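As an illustration of the deduplication step, the sketch below removes duplicates by DOI matching; the record structure and field name are assumptions made for illustration.

```python
# A minimal deduplication sketch, assuming each record is a dict with a
# "doi" key (the record structure is our assumption, not the export format
# of the libraries). Records without a DOI are kept, as they cannot match.
def deduplicate_by_doi(records: list[dict]) -> list[dict]:
    seen: set[str] = set()
    unique: list[dict] = []
    for record in records:
        doi = (record.get("doi") or "").strip().lower()
        if doi and doi in seen:
            continue  # same paper retrieved from both libraries
        if doi:
            seen.add(doi)
        unique.append(record)
    return unique
```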

3.2. Phase II: Screening

We screened the paper abstracts manually using the following exclusion criteria: We excluded workshop publications and books as these are not strictly peer-reviewed. Moreover, we removed papers where HRI is not the focus, e.g., robots only appear as examples or an outlook. Similarly, we excluded works in which transparency is not relevant, e.g., only mentioned in passing as part of the motivation or discussion.
We only included papers that discuss the same concept of transparency, i.e., providing information so that a human interacting with a robot has an improved understanding of the robot. Thus, we excluded papers where the word transparency appears in the context of translucent materials (e.g., [31]), where intelligibility is used in the context of auditory understanding in noisy environments (e.g., [32]), or where transparency refers to teleoperated systems, in which it is defined as the human operator feeling the interactions of the remotely located robot [28]. As there are existing reviews adjacent to transparency in HRI that focus on the control of robots [26] and teleoperation strategies [27], we specifically excluded these areas from our study.
We further excluded papers that do not focus on interactions between humans and robots. This exclusion criterion matches papers that concern transparent robot–robot interaction, papers that focus on the process of developing robots, e.g., by making human behavior transparent to the robot, papers that focus on the development of transparent (XAI) algorithms, and papers in which transparency refers to architectural transparency for robot development.
Based on the keywords used, only papers written in English were included. The broad exclusion criteria were defined before abstract screening and iteratively refined during the screening process. After viewing all papers, we re-screened 128 edge cases, which were marked as such in the first pass, to obtain a uniform application of the eligibility criteria. During the screening process, only the title and abstract were visible; author names and publication details were hidden. The screening process resulted in 163 references.

3.3. Phase III: Eligibility

During the full-text screening, two of the authors independently reviewed the 163 eligible papers according to the same previously established exclusion criteria. Differences in exclusion and inclusion were reconciled in a dialogue before the analysis. The interrater reliability was 88.1%, which can be considered very solid [33]. At the end of the literature search, 71 core relevant papers were identified for inclusion in the analysis.
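For clarity on how such an agreement value can be computed, the sketch below shows simple percent agreement between two coders; the labels and toy data are illustrative assumptions.

```python
# A minimal sketch of the reliability measure reported above: the share of
# papers on which two coders made the same decision. The label values and
# the toy example are illustrative assumptions.
def percent_agreement(coder_a: list, coder_b: list) -> float:
    assert len(coder_a) == len(coder_b), "both coders must rate every paper"
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100 * matches / len(coder_a)

# Toy example: two coders screen five papers and disagree on one.
print(percent_agreement(["in", "out", "in", "in", "out"],
                        ["in", "out", "out", "in", "out"]))  # 80.0
```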

3.4. Phase IV: Analysis

After completing the screening process, we established a strategy for reviewing the papers in a standardized manner by defining different kinds of themes to be analyzed. In multiple iterations, we established sub-topics to be investigated, based on the aim of the paper. The code book used can be found in the Supplementary Material. The code book was first pilot tested on five papers and then refined. Two authors coded all eligible papers according to the refined data extraction sheet. They were encouraged to note down additional interesting information from the papers that the code book failed to capture. The interrater reliability across variables was 87.4%. One-dimensional codes had a higher interrater reliability (up to 97.1%), while multidimensional variables required more discussion (lowest: 75.0%). The coding conflicts were resolved in a dialogue after the coding process was finished. We divided the analysis into two main areas: first, we gathered a high-level overview to analyze the status quo of existing studies on the topic; second, we classified the methods that make robots transparent.

4. Results

We conducted a structured literature review following PRISMA to capture how transparency is ensured in human–robot interaction. Although we did not limit the search by year, 91.5% of eligible papers were published in the last decade. The analyzed papers on transparency in HRI were predominantly (73.2%) published in conference proceedings, as opposed to scientific journals.

4.1. Terminology and Definitions

Based on the literature analysis, we can confirm the finding from prior research (e.g., [9,10]) that there is no universal terminology for the concept of transparency. As shown in Table 1, the word transparency is the most commonly used, although only 36 (50.6%) of the analyzed papers use it. While coding the terminology, we documented additional terms used to describe the concept of transparency that were not included in our coding sheet. Authors used the following additional terms: legibility [34,35,36], explicability [37,38], comprehensibility [39], and awareness [40,41,42].
As there is no single terminology and definition for transparent HRI, we further assessed whether papers provided a definition for the studied concept, either by defining transparency themselves or by referencing related work. In total, 22 of the 71 papers provide a definition for the concept of transparency. These definitions cover a range of aspects; e.g., Kim and Hinds [57] “define transparency as the robot offering explanations of its actions”. Wortham et al. [2], on the other hand, focus on the consequence rather than the process by stating that “robots should be designed [so that humans] can understand them”.

4.2. Measuring Transparency

We documented the role that transparency plays in the eligible papers and found that transparency was an independent variable in 18 studies. These studies usually have a transparent condition and a control condition and investigate the effect of transparent robot design on, for example, trust. Another nine papers ensured transparency and discussed it but did not list it as an independent variable in their study design. Twenty eligible papers were either high-level surveys or interaction concepts that did not include a user study.
Transparency was a dependent variable in 21 studies, and understanding was measured in 24 studies (33.8%). Measuring understanding can be as simple as asking participants whether they understood the robot better in the transparent condition. A majority of the questions posed start with “I understand” or synonyms thereof. More details on the exact questions and measures can be found in Table 2. We calculated the Pearson correlation coefficient between the used modality and the measurement of understanding to determine whether a specific approach was validated as a tool for transparency more than others. We found that researchers using textual explanations are the least likely to measure the effect on understanding, though the correlation is not noteworthy (−0.35). As the few papers investigating haptics all measured the effect on understanding, haptics and measuring understanding are positively correlated (0.70).
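To make the analysis concrete, the sketch below computes such a coefficient from two binary per-paper indicators; the data shown are toy values, not our extracted data.

```python
import numpy as np

# A minimal sketch of the correlation analysis, assuming one binary
# indicator per paper for "uses modality X" and one for "measures
# understanding" (variable names are ours). For two binary vectors, the
# Pearson coefficient equals the phi coefficient. The same procedure
# applies to the modality/communicative-purpose correlations below.
def pearson(x: list[int], y: list[int]) -> float:
    return float(np.corrcoef(x, y)[0, 1])

# Toy data: papers using haptics tend to also measure understanding.
uses_haptics           = [1, 1, 1, 0, 0, 0, 0, 0]
measures_understanding = [1, 1, 0, 0, 0, 1, 0, 0]
print(round(pearson(uses_haptics, measures_understanding), 2))  # 0.47
```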
  • Co-Variables in Studies on Transparency
Transparency influences and is influenced by other variables. Table 3 lists variables whose relation to transparency in HRI was investigated in eligible papers. The list does not include outliers or topics that only one paper investigates. A total of 18 studies investigated the influence of transparency on trust in robots. The predominance of trust research in connection to transparency was also found in Wallkötter and Tulli et al. [10].
  • Communicative Purpose
Different aspects of the interaction can be made transparent to facilitate transparent human–robot interactions. As can be seen in Table 4, robot behavior is explained in 32 papers (45.1%). To find out whether some modalities are predominantly used to make specific aspects of the robot transparent, we calculated Pearson correlations between the used modalities and the communicative purposes. We did not find any noteworthy correlations (min −0.32, max 0.33).

4.3. Making Robots Transparent

A central aim of this paper is to investigate how transparency is ensured. We now provide a structured overview of approaches commonly utilized in HRI research to make robots transparent to the human interaction partner. In order to deduce this, we focus on the modalities used to create transparent human–robot interactions. Eleven of the eligible papers are not included in this analysis, as they do not present specific modality suggestions.
Table 5 depicts an overview of the used modalities. Unsurprisingly, we found that research focuses on visual, auditory, and haptic senses. The table does not include modalities that were not used by any paper, such as olfactory sensing. Forty-nine of the eligible papers use visual components to make their robot transparent to the user. Twenty papers use speech, i.e., verbal explanations. Six papers use haptics to communicate information on the robot. These findings are in line with Wallkötter and Tulli et al. [10], who derived the visual groups text, movement and imagery, and the auditory group speech from existing literature, and categorized papers that did not fit these as “other”.
  • Focus on Monomodal Explanations
Note that some papers used multimodal explanations, which qualifies them for more than one category. Fourteen papers (19.7%) used multimodal cues; thirteen of these combined visual methods with audio. The study by Yonezawa et al. [96] is the only outlier, as they used robot gaze as a visual stimulus while applying different haptic sensations in multiple conditions.
  • Integration of Transparency in Robot Design
We documented whether the manipulation to make the robot transparent is integrated into the design of the robot, added to the robot, or ensured through external explanations created specifically for the study. We can observe two main approaches to explaining a robot across related work: transparency through the robot and transparency on the robot. These extremes, as well as the increments between them, are visualized in Figure 2. Several papers use the robot itself to increase transparency in the interaction. In these papers, the robot's movement, gaze, and facial expressions are utilized to increase the user's understanding of the robot's next actions. The integration of explanations is intertwined with modalities; e.g., the robot's gaze cannot exist independently of the robot. We found that 31 studies (43.7%) added external explanations that are not part of the robot design. Seventeen studies (23.9%) used integrated robot functionalities for the transparent condition. Thirteen studies (18.3%) integrated transparent explanations into their robot, i.e., the robot was able to make itself intelligible through, for example, an interface added to the robot.

4.3.1. Themes across Visual Explanations

Visual means are commonly used in HCI to communicate information. The results indicate that HRI utilizes prior knowledge from HCI when it comes to making robots transparent by using interfaces or visual overlays. Due to their prevalence in HCI, screens can be a useful means of presenting information on the robot, even though they remain external to it [29,67,79,84].
Information can be presented on the integrated display of some robots, such as Pepper or Baxter [19,53,65]. Hirschmanner et al. [53], for example, displayed the state of the robot's learning progress on this tablet with images and labels. This condition was tested in contrast to verbal utterances, such as “I take the box”, or pointing at relevant objects. Some researchers who work with a robot that does not have an integrated screen opt to add one to the existing robot. For example, Diethelm et al. [50] added 2D eyes on a screen to investigate robot gaze (and speech) as a vessel for transparency. Olatunji et al. [29] designed a GUI, displayed on a monitor next to the robot, that communicates information on the robot in visual form, supplemented with reasons for its actions, e.g., “I'm bringing the plate as you asked”. Researchers interested in robot explainability, such as [52,69,80,83,89], tend to use interfaces containing images of the robot, sensor information, colorful highlights, and text, e.g., “using this gripper: left” [83]. Sanders et al. [67] used the opportunity of external screens to explore the effect of different levels of information on trust. The information displayed ranged from static images to video clips, some of which employed text and audio as well. Murphy et al. [62] and Wortham et al. [2] used videos as well; Zakershahrak et al. [92] even used 3D GIFs. In contrast, Liu et al. [94], similar to [44,66,70], made use of AR interfaces to display the information in place, using colorful dots overlaid on the robot to visualize sensory information and robot decision making. Yigitbas et al. [78] used a VR interface to plan the robot's actions before execution.
Moving away from screens, transparency can be created through the robot itself. The facial expressions of a robot can be utilized to represent internal states or simulate emotions, e.g., through eyebrows [6,45,71]. Moon et al. [34] used robot gaze to communicate shared attention by changing the robot's head movements to “gaze” at the human during a handover, adding information that is present in interpersonal human collaboration to the robot space. Facial expressions, as well as robot movements, can be used to signal uncertainty [59]. Similarly, Rossi et al. [19] used gestures and a face displayed on the tablet of a Pepper robot to express different emotions with the aim of making the robot's internal states transparent. Dragan et al. [35], and similarly [36,58], compared the legibility of different motion profiles. The approaches by Matsumaru et al. [81,95] and Wengefeld et al. [77] can be considered edge cases, since they enhanced their robots to increase transparency on certain behavioral aspects themselves: the robot uses laser projections on the ground to indicate its intended movement and speed, which makes it more understandable.

4.3.2. Transparency through Text and Speech

We found that the most common approach is to present information as intelligible sentences, either verbally or textually. Twenty-four papers (35.3%) used text to achieve transparent human–robot interactions; twenty (28.4%) used verbal explanations. Textual explanations are especially relevant for XAI researchers who want to communicate information on the robot but do not aim to investigate the conveyance itself [51,54,75,76]. Here, the internal decision making of the robot is presented in natural language as opposed to opaque algorithms, e.g., “I will stop if the nearest obstacle is in the critical zone.” [54]. Text can also be used in other scenarios, e.g., to record robot behavior in medical contexts [93], in interfaces that communicate information playfully to children [56], or to communicate what the robot will do next [55,80].
Virgolin et al. [73] used a conversational interface to allow the robot to explain its actions and respond to further inquiries. These explanations are supplemented with emotional communication via facial expressions. Aroyo et al. [43], similarly to [4,57], used a robot that presents short, first-person verbal explanations after unexpected behavior occurs, e.g., “I have some trouble moving”. Short, first-person speech has further been used to express internal states, e.g., “I'm ready to hear you” [71]. Mota and Sridharan [61] and Arnold et al. [87] presented robots that can respond to queries, for example, about actions: “What did you do?” is followed by the verbal response “I picked up the glass.” [87]. In a study by Tabrez et al. [91], the verbal explanations do not focus on robot actions and capabilities but on the interaction: the robot interrupts the user, who is performing a puzzle task, to establish a shared mental model of the task. Straten et al. [68] applied a completely external approach in which the experimenter verbally explained the robot (in the third person), e.g., “The story Nao just told you has been put into Nao's computer beforehand”.
Papers that primarily focus on the communicative purpose or only use verbal explanations as a fallback for visualizations tend to not specify the exact wording of their verbal or textual explanations [17,46,86,92].

4.3.3. Haptic Stimuli for Transparency

Only six papers use haptics to increase robot transparency, so no generalizable statements can be made. Che et al. [47] and Grushko et al. [41,42] communicated robot intent explicitly via vibration feedback. Similarly, Casalino et al. [40] used vibrotactile feedback to communicate the robot's beliefs about the interaction. In comparison, Valdivia et al. [72] utilized a soft haptic display wrapped around a robot arm that inflates to indicate how certain the robot is of its decision. Yonezawa et al. [96] tested various haptic stimuli to communicate behavior and found that they benefit understanding.

5. Classification of Approaches to Convey Transparency in HRI

We summarize the scope of current work on transparency in the HRI context in Figure 3, our model for HRI transparency conveyance. The results of our literature survey revealed different approaches to increasing robot transparency, which we classify by their degree of integration into the robot. We refer again to the classification of integrated transparency, which is visualized in Figure 2. This classification captures the observation that the transparency of robots can be realized through the robot itself or on top of the robot. We define the space between these extremes as transparency using the robot.
The Classification of HRI Transparency Conveyance focuses on the tools and kinds of robots researchers have at their disposal and the options the robot provides as a baseline. Based on these prerequisites, researchers use different approaches to make the robot transparent. We found that these methods to convey information are linked to certain communicative purposes, i.e., certain features can increase understanding of certain aspects of the human–robot interaction.
The transparency through the robot layer comprises approaches to make robots transparent that can be realized with the robot itself and no further tools. Thus, these approaches can only be used if we alter the general way the robot looks or behaves. Transparency through robots is primarily suitable to enrich the interaction with additional information, similar to nonverbal communication in interpersonal communication. The approaches for transparency conveyance through robots include movement, gaze, and facial expressions, all of which are perceived visually. Projected onto human–human interaction, these are nonverbal behaviors. Focusing on robot movement, a robot should use natural trajectories instead of the most direct trajectory so that users recognize the robot’s intended action [35,36,58]. If a robot is equipped with eyes, these can be used to foster shared awareness of a task by making it transparent where the robot is directing its attention [34]. Further, the facial expressions of a humanoid robot can be utilized to communicate nuances in the interaction, e.g., to inform the user of the robot’s internal state [6,45,71]. Accordingly, the transparency through the robot layer does not offer in-depth explanations yet also does not interfere with the interaction itself.
The transparency using the robot layer comprises approaches where additional technologies are applied to the robot to facilitate explanations. These tools are either inherent to the robot design, e.g., Pepper and Baxter have integrated screens on their bodies [19,53,65], or are added by the researcher to facilitate robot transparency. The aforementioned integrated screens can depict information that clarifies the robot's intent and capabilities. As these screens move with the robot, information that can be perceived easily, such as facial expressions or single words describing the robot's task, is more appropriate than long text. If a higher information density is required, researchers can use speakers that are integrated into the robot. Existing research uses audio to present natural language explanations in the first person by the robot, e.g., [43,87].
The transparency on the robot layer focuses on external explanations which are presented on top of the existing robot. This layer is not constrained by the robot and can make use of the entire HCI space. However, this freedom comes at the expense of seeming disjointed from the robot and adding external information to the interaction between human and robot. For example, screens can depict detailed information on the robot, including text and visuals, without the need to interact with the robot [29,67,79]. Similarly, if the robot does not have audio capabilities, the researcher can provide verbal explanations in its place, e.g., [68].
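To summarize the classification in compact form, the following toy sketch encodes the three layers with example approaches drawn from the works cited above; it is an illustration of the framework's structure, not an artifact of this review.

```python
from enum import Enum

# A toy encoding of the three-layer classification described above; the
# layer names follow the text, and the example approaches are drawn from
# the cited works. This is illustrative only.
class TransparencyLayer(Enum):
    THROUGH_THE_ROBOT = "realized with the robot itself"
    USING_THE_ROBOT = "tools inherent to or mounted on the robot"
    ON_THE_ROBOT = "external explanations on top of the robot"

EXAMPLE_APPROACHES = {
    TransparencyLayer.THROUGH_THE_ROBOT: ["legible motion", "gaze", "facial expressions"],
    TransparencyLayer.USING_THE_ROBOT: ["integrated screen", "first-person speech via speakers"],
    TransparencyLayer.ON_THE_ROBOT: ["external GUI", "AR/VR overlay", "experimenter explanations"],
}

for layer, approaches in EXAMPLE_APPROACHES.items():
    print(f"{layer.name}: {layer.value} -> {approaches}")
```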

6. Discussion and Future Work

In this paper, we systematically reviewed existing work on transparent HRI to assess how robots are made transparent. We presented an overview of approaches used to ensure transparent HRI in prior research. From this overview, we derived a classification of HRI transparency conveyance, a framework of the modalities and functionalities used.

6.1. Approaches for Conveying Transparency in Human-Robot Interaction

Below, we first discuss our findings regarding the question of which approaches have been used to create transparency in HRI, with the aim of establishing a basis of explored approaches and identifying gaps for future work.

6.1.1. Existing Research Focuses on Verbal and Textual Explanations

The first theme we identified is that existing research primarily conveys information on the robot via natural language explanations using speech or text (e.g., [43,73]). Presenting information on the robot in natural language offers the highest information density. Text and speech are prevalent in analog interpersonal communication and in digital communication via technologies, usually enhanced by additional nonverbal information [100]. Thus, natural language is an easy choice for conveying large amounts of information. Aside from this, existing research utilizes visual approaches to make robots transparent. HRI researchers use methods from HCI to present information on screens (e.g., [29]) and try to replicate nonverbal human–human communication, i.e., gestures and facial expressions (e.g., [45]).
The analyzed papers cannot tell us about the suitability of other modalities for transparent HRI. The potential of haptics for nonverbal communication with physically present robots that the user can touch is underexplored. In line with Wallkötter and Tulli et al. [10], we found no research focused on conveying information via sound to make the robot's behavior transparent. Future studies could explore different methods, such as sounds, in a comparative experiment to assess their effect on the understanding between human and robot.
We summarize our high-level observations in a classification of existing approaches in the HRI literature. This helps researchers to select a general approach for making their robots more transparent, depending on the available tools. Categorizing the approaches along another dimension, e.g., whether they were playful, could offer valuable insights. However, we recognized during the analysis that many papers do not describe their approach to conveying the relevant information in detail, as their main focus is the information content. Thus, a pool of data relevant for a meta-analysis does not exist. More studies are needed to explore different building blocks for transparent robot design.

6.1.2. Transparency Is an Afterthought when Designing the Interaction

Should a robot be made transparent for a study or should the robot be designed to be inherently transparent? According to our analysis, the perspective on transparency in HRI has been one-sided, focusing on generating explanations on existing robots. Papers predominantly focus on explaining the behavior of a robot in hindsight. We categorize this as transparency on the robot (e.g., [44,78,94]) and transparency using the robot (e.g., [29,57]). This approach makes sense for HRI researchers who do not design robots themselves and thus do not determine the robot design. We envision a future in which the transparency of the interaction is considered when designing a robot. Not only does this integration result in more transparent robots, but it also allows us to move away from external interfaces and explore colors, robot shapes, and materials to convey information on the robot. HRI researchers have the opportunity to shape what this integrated transparency looks like on the basis of user studies.
Not every robot is the same, and the goal is not to limit the design of robots for the sake of transparency. However, recommendations for transparency are needed to create external consistency. Due to the vast differences between robot designs for different purposes, a one-size-fits-all solution does not seem realistic. Instead, we envision guidelines for transparent HRI for different types of robots, e.g., humanoid social robots vs. industrial robotic limbs. Consistency in the interaction with robots of the same type is needed to guarantee that users recognize how to interact with a specific robot and transfer their understanding from one robot to the next. Thus, we believe that giving recommendations to designers on how to design transparent robots is needed for usable HRI.

6.2. Comparability between Studies on Transparent HRI

Previously, we highlighted the research gap in providing recommendations on how to design transparent robots. The literature review also revealed that transparency research in HRI is missing an overview of best practices for (a) how to report on transparency and (b) how to actually verify understanding. This leads to low comparability between different studies.

6.2.1. Towards a Uniform Terminology

One aspect we explored in this literature review is the terminology used to describe the concept of making a robot transparent to a human user. In line with [9,10], our analysis confirms that there is no uniform terminology across HRI. The lack of clear terminology is confusing for readers and impedes research on the subject, as it is harder to find papers on the same concept. While different terms exist because they describe different nuances of similar concepts, existing research does not uniformly use the terms according to their definitions. This paper provides insights into the terms that are most commonly used. Our results cannot tell us which terms are the most appropriate, but we can recommend terminology based on informed observations of the analyzed papers.
Due to the popularity of the word, we recommend that future work use the word transparency to describe understandable interactions between humans and robots. Explainability is predominantly used when describing the implementation of algorithms or the automatic generation of explanations and should be used in this context. Researchers should avoid the word understanding when discussing transparent interactions: despite its accuracy in capturing the concept and its importance for the interaction, it has low searchability.

6.2.2. Comparison of Transparency Approaches for Understanding

Kim and Hinds [57] checked whether their participants understood the robot better when it explained itself; in their study, the relationship was negative. Results such as these emphasize how important it is to measure understanding, which only 32.4% of the analyzed papers did. This matters because we cannot compare the actual effect that different methods have on making a robot transparent as long as there is no easy, standardized method to verify the understandability of the robot. In contrast to the review by Wallkötter and Tulli et al. [10], the main pattern we found was self-reported understanding.

6.3. Limitations

As discussed in the methodology section, we made the deliberate choice to search the ACM and IEEE digital libraries due to their relevance to the research topic. Furthermore, we only searched the abstract and title due to the ambiguous terminology. Limiting ourselves in the identification process was necessary, as our initial library-independent free-text search resulted in an excessive number of entries. Thus, we cannot guarantee that we did not miss a relevant publication in the identification process.
The presented review provides an overview of existing work on transparency conveyance in HRI. However, our observations are based on literature that predominantly did not aim to explore transparency conveyance. Thus, they often did not measure the effect of their approach on understanding. Consequently, the effectiveness of the discussed approaches for increasing understanding is not guaranteed. Making claims on the real effect goes beyond the contribution of this paper. Experiments are needed to test different approaches in a comparative study, using a uniform scale to measure understanding.

7. Conclusions

We conducted a systematic literature review following the PRISMA method and analyzed the resulting 71 eligible publications regarding their perspective on and approach to making the interaction with a robot transparent. We present a high-level overview of transparency research in HRI that can serve as a basis for anyone who aims to enter this field. Additionally, we analyzed how the published papers make their robots transparent. We found that most existing research focuses on generating explanation content or on the effect of transparency on other aspects of the interaction, not on understanding itself. The majority of publications on transparency in HRI present this information as either plain text or speech; beyond that, studies use other visual means to foster transparency. Alternative ways of presenting the information, such as haptic stimuli or sounds, are underrepresented in the analyzed literature. Future work is needed to investigate the effectiveness of multiple explanation modalities on understanding. Further, future research could test the impact of understandability on usability using the proposed transparency conveyance model. Overall, this paper highlights the importance of exploring how to make robots transparent, which to date has gone largely unexplored. Thinking about ways to convey information to make robots transparent is a natural next step from generating explanation content.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/mti7030025/s1, References [101,102,103,104,105] are cited in the supplementary materials.

Author Contributions

Conceptualization, S.Y.S.; methodology, S.Y.S.; formal analysis, S.Y.S.; investigation, S.Y.S. and R.M.A.; writing—original draft preparation, S.Y.S.; writing—review and editing, S.Y.S., R.M.A. and A.B.; visualization, S.Y.S. and R.M.A.; supervision, A.B.; project administration, S.Y.S.; funding acquisition, A.B. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Excellence Strategy ONE Munich Strategy Forum, funded by the Federal German Government and the Bavarian Government (StMWi), joint project “Next Generation Human-Centered Robotics: Human Embodiment and System Agency in Trustworthy AI for the Future of Health”.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing not applicable.

Acknowledgments

We thank our anonymous reviewers for their comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kiesler, S.; Goodrich, M.A. The Science of Human-Robot Interaction. ACM Trans. Hum. Robot. Interact. 2018, 7, 9.
  2. Wortham, R.H.; Theodorou, A.; Bryson, J.J. Improving robot transparency: Real-time visualisation of robot AI substantially improves understanding in naive observers. In Proceedings of the 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Lisbon, Portugal, 28–31 August 2017; pp. 1424–1431.
  3. Alonso, V.; de la Puente, P. System Transparency in Shared Autonomy: A Mini Review. Front. Neurorobotics 2018, 12, 83.
  4. Nesset, B.; Robb, D.A.; Lopes, J.; Hastie, H. Transparency in HRI: Trust and Decision Making in the Face of Robot Errors. In Proceedings of the Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, Boulder, CO, USA, 8–11 March 2021; ACM: Boulder, CO, USA, 2021; pp. 313–317.
  5. McBride, N. Robot Enhanced Therapy for Autistic Children: An Ethical Analysis. IEEE Technol. Soc. Mag. 2020, 39, 51–60.
  6. Lyons, J.B. Being transparent about transparency: A model for human-robot interaction. In Proceedings of the 2013 AAAI Spring Symposium Series, Palo Alto, CA, USA, 25–27 March 2013.
  7. Cila, N. Designing Human-Agent Collaborations: Commitment, Responsiveness, and Support. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI ’22), New Orleans, LA, USA, 23–28 April 2022; Association for Computing Machinery: New York, NY, USA, 2022.
  8. Felzmann, H.; Fosch-Villaronga, E.; Lutz, C.; Tamo-Larrieux, A. Robots and Transparency: The Multiple Dimensions of Transparency in the Context of Robot Technologies. IEEE Robot. Autom. Mag. 2019, 26, 71–78. [Google Scholar] [CrossRef]
  9. Theodorou, A.; Wortham, R.H.; Bryson, J.J. Designing and implementing transparency for real time inspection of autonomous robots. Connect. Sci. 2017, 29, 230–241. [Google Scholar] [CrossRef] [Green Version]
  10. Wallkötter, S.; Tulli, S.; Castellano, G.; Paiva, A.; Chetouani, M. Explainable Embodied Agents Through Social Cues: A Review. J. Hum. Robot Interact. 2021, 10, 3457188. [Google Scholar] [CrossRef]
  11. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Ann. Intern. Med. 2009, 151, 264–269. [Google Scholar] [CrossRef] [Green Version]
  12. Zhou, Z.; Sui, Z.; Jenkins, O.C. Plenoptic Monte Carlo Object Localization for Robot Grasping Under Layered Translucency. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 1–8. [Google Scholar] [CrossRef] [Green Version]
  13. Formaglio, A.; Prattichizzo, D.; Barbagli, F.; Giannitrapani, A. Dynamic Performance of Mobile Haptic Interfaces. IEEE Trans. Robot. 2008, 24, 559–575. [Google Scholar] [CrossRef] [Green Version]
  14. Zanotto, D.; Lenzi, T.; Stegall, P.; Agrawal, S.K. Improving transparency of powered exoskeletons using force/torque sensors on the supporting cuffs. In Proceedings of the 2013 IEEE 13th International Conference on Rehabilitation Robotics (ICORR), Seattle, WA, USA, 24–26 June 2013; pp. 1–6. [Google Scholar] [CrossRef]
  15. Eiband, M.; Schneider, H.; Bilandzic, M.; Fazekas-Con, J.; Haug, M.; Hussmann, H. Bringing Transparency Design into Practice. In Proceedings of the 23rd International Conference on Intelligent User Interfaces (IUI ’18), Tokyo, Japan, 7–11 March 2018; ACM: New York, NY, USA, 2018; pp. 211–223. [Google Scholar] [CrossRef]
  16. Norman, D.A. Some observations on mental models. In Mental Models; Psychology Press: London, UK, 2014; pp. 15–22. [Google Scholar]
  17. Gross, H.M.; Boehme, H.J. PERSES—A vision-based interactive mobile shopping assistant. In Proceedings of the 2000 IEEE International Conference on Systems, man and Cybernetics: ’Cybernetics Evolving to Systems, Humans, Organizations, and Their Complex Interactions’, Nashville, TN, USA, 8–11 October 2000; Volume 1, pp. 80–85. [Google Scholar] [CrossRef]
  18. Eiband, M.; Buschek, D.; Hussmann, H. How to support users in understanding intelligent systems? Structuring the discussion. In Proceedings of the 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, 14–17 April 2021; pp. 120–132. [Google Scholar]
  19. Rossi, A.; Scheunemann, M.M.; L’Arco, G.; Rossi, S. Evaluation of a Humanoid Robot’s Emotional Gestures for Transparent Interaction. Soc. Robot. 2021, 11, 34. [Google Scholar] [CrossRef]
  20. Wagner, A.R.; Borenstein, J.; Howard, A. Overtrust in the robotic age. Commun. ACM 2018, 61, 22–24. [Google Scholar] [CrossRef]
  21. Powers, A.; Kiesler, S. The advisor robot: Tracing people’s mental model from a robot’s physical attributes. In Proceedings of the 1st ACM SIGCHI/SIGART conference on Human-Robot Interaction - HRI ’06, Salt Lake City, UT, USA, 2–3 March 2006; ACM Press: Salt Lake City, UT, USA, 2006; p. 218. [Google Scholar] [CrossRef]
  22. Molnar, C. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. 2020. Available online: Lulu.com (accessed on 8 September 2022).
  23. Anjomshoae, S.; Najjar, A.; Calvaresi, D.; Främling, K. Explainable agents and robots: Results from a systematic literature review. In Proceedings of the 18th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2019), Montreal, QC, Canada, 13–17 May 2019; International Foundation for Autonomous Agents and Multiagent Systems: Richland, SC, USA, 2019; pp. 1078–1088. [Google Scholar] [CrossRef]
  24. Hancock, P.A.; Billings, D.R.; Schaefer, K.E.; Chen, J.Y.C.; de Visser, E.J.; Parasuraman, R. A Meta-Analysis of Factors Affecting Trust in Human-Robot Interaction. Hum. Factors J. Hum. Factors Ergon. Soc. 2011, 53, 517–527. [Google Scholar] [CrossRef] [PubMed]
  25. Esterwood, C.; Robert, L.P. Personality in Healthcare Human Robot Interaction H-HRI: A Literature Review and Brief Critique. In Proceedings of the 8th International Conference on Human-Agent Interaction, Online, 10–13 November 2020; ACM: Austin, TX, USA, 2020; pp. 87–95. [Google Scholar] [CrossRef]
  26. Zhang, T.; Du, Q.; Yang, G.; Chen, C.y.; Wang, C.; Fang, Z. A Review of Compliant Control for Collaborative Robots. In Proceedings of the 2021 IEEE 16th Conference on Industrial Electronics and Applications (ICIEA), Chengdu, China, 1–4 August 2021; pp. 1103–1108. [Google Scholar] [CrossRef]
  27. Deng, Y.; Tang, Y.; Yang, B.; Zheng, W.; Liu, S.; Liu, C. A Review of Bilateral Teleoperation Control Strategies with Soft Environment. In Proceedings of the 2021 6th IEEE International Conference on Advanced Robotics and Mechatronics (ICARM), Chongqing, China, 3–5 July 2021; pp. 459–464. [Google Scholar] [CrossRef]
  28. Lee, H.K.; Shin, M.H.; Chung, M.J. Adaptive controller of master-slave systems for transparent teleoperation. In Proceedings of the 1997 8th International Conference on Advanced Robotics. Proceedings. ICAR’97, Monterey, CA, USA, 7–9 July 1997; pp. 1021–1026. [Google Scholar] [CrossRef]
  29. Olatunji, S.A.; Oron-Gilad, T.; Markfeld, N.; Gutman, D.; Sarne-Fleischmann, V.; Edan, Y. Levels of Automation and Transparency: Interaction Design Considerations in Assistive Robots for Older Adults. IEEE Trans. Hum. Mach. Syst. 2021, 51, 673–683. [Google Scholar] [CrossRef]
  30. Liberati, A.; Altman, D.G.; Tetzlaff, J.; Mulrow, C.; Gøtzsche, P.C.; Ioannidis, J.P.; Clarke, M.; Devereaux, P.J.; Kleijnen, J.; Moher, D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: Explanation and elaboration. J. Clin. Epidemiol. 2009, 62, e1–e34. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  31. Kalra, A.; Taamazyan, V.; Rao, S.K.; Venkataraman, K.; Raskar, R.; Kadambi, A. Deep Polarization Cues for Transparent Object Segmentation. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 8599–8608. [Google Scholar] [CrossRef]
  32. Martinson, E.; Brock, D. Improving Human-Robot Interaction through Adaptation to the Auditory Scene. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI ’07), Washington, DC, USA, 9–11 March 2007; Association for Computing Machinery: New York, NY, USA, 2007; pp. 113–120. [Google Scholar] [CrossRef]
  33. Koo, T.K.; Li, M.Y. A guideline of selecting and reporting intraclass correlation coefficients for reliability research. J. Chiropr. Med. 2016, 15, 155–163. [Google Scholar] [CrossRef] [Green Version]
  34. Moon, A.; Troniak, D.M.; Gleeson, B.; Pan, M.K.; Zheng, M.; Blumer, B.A.; MacLean, K.; Croft, E.A. Meet Me Where I’m Gazing: How Shared Attention Gaze Affects Human-Robot Handover Timing. In Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’14), Bielefeld, Germany, 3–6 March 2014; Association for Computing Machinery: New York, NY, USA, 2014; pp. 334–341. [Google Scholar] [CrossRef]
  35. Dragan, A.D.; Lee, K.C.; Srinivasa, S.S. Legibility and predictability of robot motion. In Proceedings of the 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Tokyo, Japan, 3–6 March 2013; pp. 301–308. [Google Scholar] [CrossRef] [Green Version]
  36. Sheikholeslami, S.; Hart, J.W.; Chan, W.P.; Quintero, C.P.; Croft, E.A. Prediction and Production of Human Reaching Trajectories for Human-Robot Interaction. In Proceedings of the Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’18), Chicago, IL, USA, 5–8 March 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 321–322. [Google Scholar] [CrossRef]
  37. Gong, Z.; Zhang, Y. Behavior Explanation as Intention Signaling in Human-Robot Teaming. In Proceedings of the 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Nanjing, China, 27–31 August 2018; pp. 1005–1011. [Google Scholar] [CrossRef]
  38. Kulkarni, A.; Sreedharan, S.; Keren, S.; Chakraborti, T.; Smith, D.E.; Kambhampati, S. Designing Environments Conducive to Interpretable Robot Behavior. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; pp. 10982–10989. [Google Scholar] [CrossRef]
  39. Arntz, A.; Eimler, S.C.; Straßmann, C.; Hoppe, H.U. On the Influence of Autonomy and Transparency on Blame and Credit in Flawed Human-Robot Collaboration. In Proceedings of the Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’21), Boulder, CO, USA, 8–11 March 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 377–381. [Google Scholar] [CrossRef]
  40. Casalino, A.; Messeri, C.; Pozzi, M.; Zanchettin, A.M.; Rocco, P.; Prattichizzo, D. Operator Awareness in Human–Robot Collaboration Through Wearable Vibrotactile Feedback. IEEE Robot. Autom. Lett. 2018, 3, 4289–4296. [Google Scholar] [CrossRef] [Green Version]
  41. Grushko, S.; Vysocký, A.; Oščádal, P.; Vocetka, M.; Novák, P.; Bobovský, Z. Improved Mutual Understanding for Human-Robot Collaboration: Combining Human-Aware Motion Planning with Haptic Feedback Devices for Communicating Planned Trajectory. Sensors 2021, 21, 3673. [Google Scholar] [CrossRef]
  42. Grushko, S.; Vysocký, A.; Heczko, D.; Bobovský, Z. Intuitive Spatial Tactile Feedback for Better Awareness about Robot Trajectory during Human–Robot Collaboration. Sensors 2021, 21, 5748. [Google Scholar] [CrossRef]
43. Aroyo, A.M.; Pasquali, D.; Kothig, A.; Rea, F.; Sandini, G.; Sciutti, A. Expectations Vs. Reality: Unreliability and Transparency in a Treasure Hunt Game With iCub. IEEE Robot. Autom. Lett. 2021, 6, 5681–5688. [Google Scholar] [CrossRef]
  44. Attia, M.; Hossny, M.; Nahavandi, S.; Dalvand, M.; Asadi, H. Towards Trusted Autonomous Surgical Robots. In Proceedings of the 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Miyazaki, Japan, 7–10 October 2018; IEEE: New York, NY, USA, 2018; pp. 4083–4088. [Google Scholar] [CrossRef]
  45. Broekens, J.; Chetouani, M. Towards Transparent Robot Learning Through TDRL-Based Emotional Expressions. IEEE Trans. Affect. Comput. 2021, 12, 352–362. [Google Scholar] [CrossRef] [Green Version]
  46. Cantucci, F.; Falcone, R. Towards trustworthiness and transparency in social human-robot interaction. In Proceedings of the 2020 IEEE International Conference on Human-Machine Systems (ICHMS), Rome, Italy, 7–9 September 2020; pp. 1–6. [Google Scholar] [CrossRef]
  47. Che, Y.; Okamura, A.M.; Sadigh, D. Efficient and Trustworthy Social Navigation via Explicit and Implicit Robot-Human Communication. IEEE Trans. Robot. 2020, 36, 692–707. [Google Scholar] [CrossRef] [Green Version]
  48. Chen, S.; Boggess, K.; Feng, L. Towards Transparent Robotic Planning via Contrastive Explanations. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; pp. 6593–6598. [Google Scholar] [CrossRef]
49. Das, D.; Banerjee, S.; Chernova, S. Explainable AI for Robot Failures: Generating Explanations That Improve User Assistance in Fault Recovery. In Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’21), Boulder, CO, USA, 8–11 March 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 351–360. [Google Scholar] [CrossRef]
  50. Diethelm, I.G.; Hansen, S.S.; Leth, F.B.; Fischer, K.; Palinko, O. Effects of Gaze and Speech in Human-Robot Medical Interactions. In Proceedings of the Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’21), Boulder, CO, USA, 8–11 March 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 349–353. [Google Scholar] [CrossRef]
  51. Hayes, B.; Shah, J.A. Improving Robot Controller Transparency Through Autonomous Policy Explanation. In Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’17), Vienna, Austria, 6–9 March 2017; Association for Computing Machinery: New York, NY, USA, 2017; pp. 303–312. [Google Scholar] [CrossRef] [Green Version]
  52. Hindemith, L.; Vollmer, A.L.; Wiebel-Herboth, C.B.; Wrede, B. Improving HRI through robot architecture transparency. arXiv 2021, arXiv:2108.11608. [Google Scholar]
  53. Hirschmanner, M.; Gross, S.; Zafari, S.; Krenn, B.; Neubarth, F.; Vincze, M. Investigating Transparency Methods in a Robot Word-Learning System and Their Effects on Human Teaching Behaviors. In Proceedings of the 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), Vancouver, BC, Canada, 8–12 August 2021; pp. 175–182. [Google Scholar] [CrossRef]
  54. Iucci, A.; Hata, A.; Terra, A.; Inam, R.; Leite, I. Explainable Reinforcement Learning for Human-Robot Collaboration. In Proceedings of the 2021 20th International Conference on Advanced Robotics (ICAR), Ljubljana, Slovenia, 6–10 December 2021; pp. 927–934. [Google Scholar] [CrossRef]
  55. Kallinen, K. The Effects of Transparency and Task Type on Trust, Stress, Quality of Work, and Co-Worker Preference During Human-Autonomous System Collaborative Work. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’17), Vienna, Austria, 6–9 March 2017; Association for Computing Machinery: New York, NY, USA, 2017; pp. 153–154. [Google Scholar] [CrossRef]
  56. Kaptein, F.; Broekens, J.; Hindriks, K.; Neerincx, M. Evaluating Cognitive and Affective Intelligent Agent Explanations in a Long-Term Health-Support Application for Children with Type 1 Diabetes. In Proceedings of the 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII), Cambridge, UK, 3–6 September 2019; pp. 1–7. [Google Scholar] [CrossRef]
57. Kim, T.; Hinds, P. Who Should I Blame? Effects of Autonomy and Transparency on Attributions in Human-Robot Interaction. In Proceedings of the ROMAN 2006—The 15th IEEE International Symposium on Robot and Human Interactive Communication, Hatfield, UK, 6–8 September 2006; pp. 80–85. [Google Scholar] [CrossRef]
  58. Kwon, M.; Huang, S.H.; Dragan, A.D. Expressing Robot Incapability. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’18), Chicago, IL, USA, 5–8 March 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 87–95. [Google Scholar] [CrossRef] [Green Version]
59. Matarese, M.; Sciutti, A.; Rea, F.; Rossi, S. Toward Robots’ Behavioral Transparency of Temporal Difference Reinforcement Learning With a Human Teacher. IEEE Trans. Hum. Mach. Syst. 2021, 51, 578–589. [Google Scholar] [CrossRef]
60. Mobahi, H.; Ansari, S. Fuzzy perception, emotion and expression for interactive robots. In Proceedings of the 2003 IEEE International Conference on Systems, Man and Cybernetics (SMC’03), Washington, DC, USA, 5–8 October 2003; Volume 4, pp. 3918–3923. [Google Scholar] [CrossRef] [Green Version]
  61. Mota, T.; Sridharan, M. Answer me this: Constructing Disambiguation Queries for Explanation Generation in Robotics. In Proceedings of the 2021 IEEE International Conference on Development and Learning (ICDL), Beijing, China, 23–26 August 2021; pp. 1–8. [Google Scholar] [CrossRef]
  62. Murphy, R.R.; Srinivasan, V.; Henkel, Z.; Suarez, J.; Minson, M.; Straus, J.C.; Hempstead, S.; Valdez, T.; Egawa, S. Interacting with trapped victims using robots. In Proceedings of the 2013 IEEE International Conference on Technologies for Homeland Security (HST), Waltham, MA, USA, 12–14 November 2013; pp. 32–37. [Google Scholar] [CrossRef]
  63. Niu, S.; McCrickard, D.S.; Harrison, S. Exploring humanoid factors of robots through transparent and reflective interactions. In Proceedings of the 2015 International Conference on Collaboration Technologies and Systems (CTS), Atlanta, GA, USA, 1–5 June 2015; pp. 47–54. [Google Scholar] [CrossRef]
  64. Poulsen, A.; Burmeister, O.K.; Tien, D. Care Robot Transparency Isn’t Enough for Trust. In Proceedings of the 2018 IEEE Region Ten Symposium (Tensymp), Sydney, NSW, Australia, 4–6 July 2018; pp. 293–297. [Google Scholar] [CrossRef]
  65. Roncone, A.; Mangin, O.; Scassellati, B. Transparent role assignment and task allocation in human robot collaboration. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 1014–1021. [Google Scholar] [CrossRef]
  66. Rotsidis, A.; Theodorou, A.; Bryson, J.J.; Wortham, R.H. Improving Robot Transparency: An Investigation With Mobile Augmented Reality. In Proceedings of the 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), New Delhi, India, 14–18 October 2019; pp. 1–8. [Google Scholar] [CrossRef]
  67. Sanders, T.L.; Wixon, T.; Schafer, K.E.; Chen, J.Y.C.; Hancock, P.A. The influence of modality and transparency on trust in human-robot interaction. In Proceedings of the 2014 IEEE International Inter-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), San Antonio, TX, USA, 3–6 March 2014; pp. 156–159. [Google Scholar] [CrossRef]
  68. Straten, C.L.V.; Peter, J.; Kühne, R.; Barco, A. Transparency about a Robot’s Lack of Human Psychological Capacities: Effects on Child-Robot Perception and Relationship Formation. J. Hum. Robot Interact. 2020, 9, 3365668. [Google Scholar] [CrossRef] [Green Version]
  69. Struckmeier, O.; Racca, M.; Kyrki, V. Autonomous Generation of Robust and Focused Explanations for Robot Policies. In Proceedings of the 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), New Delhi, India, 14–18 October 2019; pp. 1–8. [Google Scholar] [CrossRef] [Green Version]
70. Tabrez, A.; Luebbers, M.B.; Hayes, B. Descriptive and Prescriptive Visual Guidance to Improve Shared Situational Awareness in Human-Robot Teaming. In Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems (AAMAS ’22), Online, 9–13 May 2022; International Foundation for Autonomous Agents and Multiagent Systems: Richland, SC, USA, 2022; pp. 1256–1264. [Google Scholar]
  71. Tojo, T.; Matsusaka, Y.; Ishii, T.; Kobayashi, T. A conversational robot utilizing facial and body expressions. In Proceedings of the 2000 IEEE International Conference on Systems, Man and Cybernetics: “Cybernetics Evolving to Systems, Humans, Organizations, and their Complex Interactions”, Nashville, TN, USA, 8–11 October 2000; Volume 2, pp. 858–863. [Google Scholar] [CrossRef]
  72. Valdivia, A.A.; Shailly, R.; Seth, N.; Fuentes, F.; Losey, D.P.; Blumenschein, L.H. Wrapped Haptic Display for Communicating Physical Robot Learning. In Proceedings of the 2022 IEEE 5th International Conference on Soft Robotics (RoboSoft), Edinburgh, UK, 4–8 April 2022; pp. 823–830. [Google Scholar] [CrossRef]
  73. Virgolin, M.; Bellone, M.; Wolff, K.; Wahde, M. A Mobile Interactive Robot for Social Distancing in Hospitals. In Proceedings of the 2021 Fifth IEEE International Conference on Robotic Computing (IRC), Taichung, Taiwan, 15–17 November 2021; pp. 87–91. [Google Scholar] [CrossRef]
  74. Vitale, J.; Tonkin, M.; Herse, S.; Ojha, S.; Clark, J.; Williams, M.A.; Wang, X.; Judge, W. Be More Transparent and Users Will Like You: A Robot Privacy and User Experience Design Experiment. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’18), Chicago, IL, USA, 5–8 March 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 379–387. [Google Scholar] [CrossRef]
  75. Wang, N.; Pynadath, D.V.; Hill, S.G. The Impact of POMDP-Generated Explanations on Trust and Performance in Human-Robot Teams. In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems (AAMAS ’16), Sao Paulo, Brazil, 8–12 May 2016; International Foundation for Autonomous Agents and Multiagent Systems: Richland, SC, USA, 2016; pp. 997–1005. [Google Scholar]
76. Wang, N.; Pynadath, D.V.; Hill, S.G. Trust Calibration within a Human-Robot Team: Comparing Automatically Generated Explanations. In Proceedings of the Eleventh ACM/IEEE International Conference on Human-Robot Interaction (HRI ’16), Christchurch, New Zealand, 7–10 March 2016; IEEE Press: New York, NY, USA, 2016; pp. 109–116. [Google Scholar]
  77. Wengefeld, T.; Höchemer, D.; Lewandowski, B.; Köhler, M.; Beer, M.; Gross, H.M. A Laser Projection System for Robot Intention Communication and Human Robot Interaction. In Proceedings of the 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Naples, Italy, 31 August–4 September 2020; pp. 259–265. [Google Scholar] [CrossRef]
  78. Yigitbas, E.; Karakaya, K.; Jovanovikj, I.; Engels, G. Enhancing Human-in-the-Loop Adaptive Systems through Digital Twins and VR Interfaces. In Proceedings of the 2021 International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS), Madrid, Spain, 18–21 May 2021; pp. 30–40. [Google Scholar] [CrossRef]
  79. Edmonds, M.; Gao, F.; Liu, H.; Xie, X.; Qi, S.; Rothrock, B.; Zhu, Y.; Wu, Y.N.; Lu, H.; Zhu, S.C. A tale of two explanations: Enhancing human trust by explaining robot behavior. Sci. Robot. 2019, 4, eaay4663. [Google Scholar] [CrossRef] [PubMed]
  80. Gao, X.; Gong, R.; Zhao, Y.; Wang, S.; Shu, T.; Zhu, S.C. Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks. In Proceedings of the 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Naples, Italy, 31 August–4 September 2020; pp. 1119–1126. [Google Scholar] [CrossRef]
  81. Matsumaru, T. Mobile Robot with Preliminary-announcement and Indication Function of Forthcoming Operation using Flat-panel Display. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Rome, Italy, 10–14 April 2007; pp. 1774–1781. [Google Scholar] [CrossRef]
82. Ososky, S.; Phillips, E.; Schuster, D.; Jentsch, F. A Picture is Worth a Thousand Mental Models: Evaluating Human Understanding of Robot Teammates. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2013, 57, 1298–1302. [Google Scholar] [CrossRef]
  83. Shah, N.; Verma, P.; Angle, T.; Srivastava, S. JEDAI: A System for Skill-Aligned Explainable Robot Planning. In Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems (AAMAS ’22), Online, 9–13 May 2022; International Foundation for Autonomous Agents and Multiagent Systems: Richland, SC, USA, 2022; pp. 1917–1919. [Google Scholar]
  84. van Deurzen, B.; Bruyninckx, H.; Luyten, K. Choreobot: A Reference Framework and Online Visual Dashboard for Supporting the Design of Intelligible Robotic Systems. Proc. ACM Hum. Comput. Interact. 2022, 6, 3532201. [Google Scholar] [CrossRef]
  85. Wlaszczyk, A.; Indurkhya, B. On the use of metaphors in designing educational interfaces. In Proceedings of the 2016 7th IEEE International Conference on Cognitive Infocommunications (CogInfoCom), Wroclaw, Poland, 16–18 October 2016; pp. 343–346. [Google Scholar] [CrossRef]
  86. Zakershahrak, M.; Marpally, S.R.; Sharma, A.; Gong, Z.; Zhang, Y. Order Matters: Generating Progressive Explanations for Planning Tasks in Human-Robot Teaming. In Proceedings of the 2021 International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 3751–3757. [Google Scholar] [CrossRef]
  87. Arnold, T.; Kasenberg, D.; Scheutz, M. Explaining in Time: Meeting Interactive Standards of Explanation for Robotic Systems. J. Hum. Robot Interact. 2021, 10, 3457183. [Google Scholar] [CrossRef]
88. Brandao, M.; Mansouri, M.; Mohammed, A.; Luff, P.; Coles, A. Explainability in Multi-Agent Path/Motion Planning: User-Study-Driven Taxonomy and Requirements. In Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems (AAMAS ’22), Online, 9–13 May 2022; International Foundation for Autonomous Agents and Multiagent Systems: Richland, SC, USA, 2022; pp. 172–180. [Google Scholar]
  89. Han, Z.; Giger, D.; Allspaw, J.; Lee, M.S.; Admoni, H.; Yanco, H.A. Building the Foundation of Robot Explanation Generation Using Behavior Trees. J. Hum. Robot Interact. 2021, 10, 3457185. [Google Scholar] [CrossRef]
  90. Sheh, R. Explainable Artificial Intelligence Requirements for Safe, Intelligent Robots. In Proceedings of the 2021 IEEE International Conference on Intelligence and Safety for Robotics (ISR), Tokoname, Japan, 4–6 March 2021; pp. 382–387. [Google Scholar] [CrossRef]
  91. Tabrez, A.; Agrawal, S.; Hayes, B. Explanation-Based Reward Coaching to Improve Human Performance via Reinforcement Learning. In Proceedings of the 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI ’19), Daegu, Republic of Korea, 11–14 March 2019; IEEE Press: New York, NY, USA, 2019; pp. 249–257. [Google Scholar]
92. Zakershahrak, M.; Gong, Z.; Sadassivam, N.; Zhang, Y. Online Explanation Generation for Planning Tasks in Human-Robot Teaming. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; pp. 6304–6310. [Google Scholar] [CrossRef]
  93. Fiazza, M.C.; Fiorini, P. Design for Interpretability: Meeting the Certification Challenge for Surgical Robots. In Proceedings of the 2021 IEEE International Conference on Intelligence and Safety for Robotics (ISR), Tokoname, Japan, 4–6 March 2021; pp. 264–267. [Google Scholar] [CrossRef]
  94. Liu, H.; Zhang, Y.; Si, W.; Xie, X.; Zhu, Y.; Zhu, S.C. Interactive Robot Knowledge Patching Using Augmented Reality. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–26 May 2018; pp. 1947–1954. [Google Scholar] [CrossRef]
  95. Matsumaru, T. Mobile Robot with Preliminary-announcement and Display Function of Forthcoming Motion using Projection Equipment. In Proceedings of the ROMAN 2006—The 15th IEEE International Symposium on Robot and Human Interactive Communication, Hatfield, UK, 6–8 September 2006; pp. 443–450. [Google Scholar] [CrossRef]
  96. Yonezawa, T.; Yamazoe, H.; Abe, S. Physical contact using haptic and gestural expressions for ubiquitous partner robot. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 5680–5685. [Google Scholar] [CrossRef]
  97. Mathieu, J.E.; Heffner, T.S.; Goodwin, G.F.; Salas, E.; Cannon-Bowers, J.A. The influence of shared mental models on team process and performance. J. Appl. Psychol. 2000, 85, 273. [Google Scholar] [CrossRef]
98. Laugwitz, B.; Held, T.; Schrepp, M. Construction and Evaluation of a User Experience Questionnaire. In HCI and Usability for Education and Work (USAB 2008); Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2008; Volume 5298, pp. 63–76. [Google Scholar] [CrossRef]
  99. Hoffman, R.R.; Mueller, S.T.; Klein, G.; Litman, J. Metrics for explainable AI: Challenges and prospects. arXiv 2018, arXiv:1812.04608. [Google Scholar]
  100. Knapp, M.L.; Hall, J.A.; Horgan, T.G. Nonverbal Communication in Human Interaction; Cengage Learning: Boston, MA, USA, 2013. [Google Scholar]
101. Schaefer, K.E.; Sanders, T.; Yordon, R.E.; Billings, D.R.; Hancock, P.A. Classification of robot form: Factors predicting perceived trustworthiness. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Boston, MA, USA, 22–26 October 2012; SAGE Publications: Los Angeles, CA, USA, 2012; Volume 56, pp. 1548–1552. [Google Scholar]
102. Natarajan, M.; Gombolay, M. Effects of anthropomorphism and accountability on trust in human robot interaction. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’20), Cambridge, UK, 23–26 March 2020; pp. 33–42. [Google Scholar]
  103. McNorgan, C. A meta-analytic review of multisensory imagery identifies the neural correlates of modality-specific and modality-general imagery. Front. Hum. Neurosci. 2012, 6, 285. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  104. Maeda, T.; Kurahashi, T. Thermodule: Wearable and modular thermal feedback system based on a wireless platform. In Proceedings of the 10th Augmented Human International Conference, Reims, France, 11–12 March 2019; pp. 1–8. [Google Scholar]
  105. Suhonen, K.; Väänänen-Vainio-Mattila, K.; Mäkelä, K. User experiences and expectations of vibrotactile, thermal and squeeze feedback in interpersonal communication. In Proceedings of the 26th BCS Conference on Human Computer Interaction, Birmingham, UK, 12–14 September 2012; pp. 205–214. [Google Scholar]
Figure 1. PRISMA flow diagram of the selection process.
Figure 2. Classification of robot transparency by integration. At the top, transparency is realized through the robot itself; at the bottom, transparency is added externally, on top of the robot.
Figure 3. The classification presents the observed best practices for presenting information to make a robot transparent. We map the approaches based on their integration into the robot design; the robot property made transparent is linked to technical prerequisites.
Table 1. Terminology used to describe the concept of transparent HRI.

Terminology | Nr. | Refs.
Transparency | 36 | [2,4,6,7,9,19,29,34,36,37,39,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78]
Understanding | 17 | [2,7,17,37,44,48,58,64,69,72,74,76,77,78,79,80,81,82,83,84,85,86]
Explainability | 13 | [37,49,52,54,56,61,75,79,80,83,86,87,88,89,90,91,92]
Interpretability | 12 | [38,49,70,72,73,79,84,86,88,90,92,93,94]
Intelligibility | 6 | [7,55,81,94,95,96]
Table 2. Measurements used to assess the transparency of a system and the understanding thereof in current research. The list does not include papers that do not provide concrete wording (e.g., [48,57]).

Subjective Measures | Type | Refs.
Shared Mental Model survey by [97] | 7-point Likert | [82]
User Experience Questionnaire by [98] | 7-point Likert | [74]
Explanation Goodness by [99] | 7-point Likert | [70]
Intelligible – unintelligible/unclear | Semantic differential | [81,95]
“I understand the behavior of the robot.” | 7-point Likert | [69]
“I understand the robot’s decision making process.” | 7-point Likert | [76]
“I understood the robot well.” / “I understood the information the robot presented to me.” | 3-point Likert | [29]
“The expression of the robot was perceivable.” / “The expression of the robot was easy to understand.” | 5-point Likert | [96]
“It was easy to tell [goal].” / “Confused about what the robot was trying to do.” / “Understood [goal] but did not understand [cause].” / “It was clear that the robot failed because of [cause].” | 5-point Likert | [58]
“Questionable” | Yes–No selection | [37]
“How much did the trajectory match the one you expected?” | 7-point Likert | [35]
“[...] it was clear to me which words the robot knew already and which ones it still had to learn.” | 5-point Likert | [53]
“The movements of the robot become more predictable through the projection of a projector.” | 5-point Likert | [77]

Objective Measures | Refs.
Reaction time | [29,74]
Number of times participants ask for clarifications | [29]
Observation of user behavior | [47]
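As an illustration of how the measures in Table 2 might be scored in practice, the following minimal Python sketch is our own illustration, not drawn from any surveyed paper; all data values and variable names are hypothetical. It aggregates responses to one 7-point Likert item and a set of reaction times:

```python
import statistics

# Hypothetical responses to the 7-point Likert item
# "I understand the behavior of the robot." [69]
# (1 = strongly disagree, 7 = strongly agree), one value per participant
likert_responses = [6, 5, 7, 4, 6, 5]

# Hypothetical reaction times in seconds (objective measure, cf. [29,74])
reaction_times = [1.8, 2.4, 1.5, 3.0, 2.1, 1.9]

# Central tendency and spread per measure
likert_mean = statistics.mean(likert_responses)
likert_sd = statistics.stdev(likert_responses)
rt_median = statistics.median(reaction_times)  # median is robust to outliers

print(f"Perceived understanding: M = {likert_mean:.2f}, SD = {likert_sd:.2f} (scale 1-7)")
print(f"Reaction time: Mdn = {rt_median:.2f} s")
```

In a real study, such subjective scores would typically be compared across transparency conditions with an appropriate statistical test rather than reported descriptively alone.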
Table 3. Summary of variables measured or manipulated in user studies in the application area of transparent HRI. The top part lists human-related variables, while the bottom part focuses on robot-related variables.

Related Variable | Nr. | Refs.
Trust | 18 | [4,6,29,43,44,46,47,48,50,55,64,66,67,68,70,75,76,77,79,90,91]
Performance | 17 | [19,34,40,41,42,43,47,52,65,69,70,72,78,79,80,86,92]
Perception of robot | 13 | [2,19,41,42,50,52,58,66,68,72,80,91]
Decision making | 6 | [4,49,70,76,83,87]
Compliance | 4 | [56,58,70,76]
Workload | 4 | [43,67,86,92]
Human emotions | 3 | [2,19,96]
Stress | 3 | [29,55,70]
Robot reliability | 9 | [6,49,53,61,66,68,73,75,76]
Safety | 9 | [47,50,66,73,75,77,90,93,95]
Robot motion/shape | 6 | [36,39,47,74,81,95]
Explanation content | 5 | [48,49,55,75,88]
Robot autonomy | 4 | [29,39,57,78]
Table 4. Overview of the communicative purpose, i.e., the item of explanation that is to be made transparent.

Explanation Object | Nr. | Refs.
Robot behavior | 32 | [6,7,9,19,29,37,38,39,46,48,50,51,52,57,59,66,69,73,77,78,79,81,84,86,87,88,89,90,91,92,94,95]
Robot intentions/purpose | 17 | [6,7,9,35,36,37,41,42,45,46,47,51,58,59,65,69,88]
Robot decision making | 16 | [4,6,44,54,59,61,64,65,75,76,79,83,84,86,87,90,92,93,94]
Robot tasks | 13 | [2,7,29,34,39,55,65,66,67,71,80,84,86,89,91]
Robot capabilities | 12 | [6,7,45,53,62,68,72,73,83,84,87,88,96]
Robot beliefs (e.g., of environment) | 8 | [29,40,46,67,70,76,78,80]
Cause of failure | 6 | [43,49,58,83,89,91]
Table 5. Overview of the modalities used by papers to convey information with the goal of transparency.

Modality | Nr. | Refs.
Visual – text | 24 | [2,6,29,39,48,51,52,54,55,56,61,65,66,67,69,73,74,75,76,78,79,80,83,87,89,93]
Visual – color | 15 | [19,44,52,69,70,77,78,79,81,83,86,89,92,94,95]
Visual – motion | 12 | [6,19,34,35,36,50,53,58,59,67,71,74,96]
Visual – form | 5 | [17,45,60,73,81]
Auditory – speech | 20 | [4,6,17,29,43,46,50,53,57,61,65,67,68,71,73,74,86,87,91,92]
Haptic – vibrotactile | 5 | [40,41,42,47,96]
Haptic – pressure | 2 | [72,96]
Haptic – thermal | 1 | [96]