Article

The Influence of Visible Cables and Story Content on Perceived Autonomy in Social Human–Robot Interaction

Eileen Roesler, Sophia C. Steinhaeusser, Birgit Lugrin and Linda Onnasch

1 Work, Engineering & Organizational Psychology, Technische Universität Berlin, 10623 Berlin, Germany
2 Human–Computer Interaction, University of Würzburg, 97070 Würzburg, Germany
3 Engineering Psychology, Humboldt-Universität zu Berlin, 10117 Berlin, Germany
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Robotics 2023, 12(1), 3; https://doi.org/10.3390/robotics12010003
Submission received: 30 August 2022 / Revised: 13 December 2022 / Accepted: 17 December 2022 / Published: 23 December 2022
(This article belongs to the Special Issue Communication with Social Robots)

Abstract

From teaching technical skills to telling bedtime stories, social robots support various edutainment tasks that require smooth communication. Previous studies have often emphasized the importance of the autonomy of social robots for those tasks. Due to technical restrictions, however, robots often have to be cabled to power sources and/or host computers, and it is currently unclear whether this cabling makes a difference in perceived autonomy. Therefore, this study examined the influence of visible cables in different tasks on the perception of a social robot. In an online survey, participants evaluated videos of a social robot that was either equipped with a cable or not and told a story with either technical educational content or socially entertaining content. No significant differences were revealed between the cabled and the non-cabled robot, neither for perceived autonomy nor for the associated concepts of the Godspeed questionnaire series. In addition, the story content did not influence perceived autonomy. However, the robot that told the technical content was perceived as significantly more intelligent and tended to be perceived as more likable than the robot that told the social content. Moreover, the interaction effect of cabling and story content on perceived safety just failed to reach the conventional level of significance: in the social content condition, the non-cabled robot tended to be perceived as less safe than the cabled robot, whereas this was not the case in the technical content condition. In conclusion, the results show the importance of considering story content. Due to methodological limitations of the current study, namely the lack of gestures accompanying the storytelling and the video-based approach, the missing effect of cabling on perceived autonomy should be investigated in future real-life interaction studies.

1. Introduction

Social robots are equipped with verbal and non-verbal communication channels to engage and support users in various tasks such as education or entertainment [1,2,3]. To make this interaction as smooth as possible, research in the field of human–robot interaction (HRI) investigates the variables that influence, support, and hinder social as well as task-oriented interaction [4,5]. HRI therefore extensively incorporates user studies to investigate how social robots and interactional tasks should be designed [5,6,7]. However, it is often challenging to derive generalizable design recommendations, as similar studies can yield different results due to minor methodological differences. For example, Onnasch and Roesler [8] investigated the influence of anthropomorphic framing of the social robot NAO in a task-oriented setting. The human-likeness of the robot was measured via the human-likeness scale of the revised Godspeed questionnaire on a 5-point semantic differential [9]. In both conditions, the robot’s human-likeness was above average, with 3.94 (SD = 0.57) for functional framing and 3.89 (SD = 0.62) for anthropomorphic framing. However, different results were revealed in a comparable study set-up that also investigated the effect of anthropomorphic framing of the robot NAO [10]. Using either personal humanlike or impersonal functional framing, Steinhaeusser et al. [10] found overall lower human-likeness values of 1.88 (SD = 0.80) for humanlike and 1.51 (SD = 0.80; unpublished data) for functional framing using the same robot and the original version of the questionnaire [11]. Three major methodological differences can be identified between the two studies. First, while Onnasch and Roesler [8] conducted a live laboratory study, Steinhaeusser et al. [10] ran an online survey due to COVID-19 restrictions. Second, the task after the framing was interactive and technical in the study by Onnasch and Roesler [8], whereas Steinhaeusser et al. [10] conducted a pure perception study with a social and emotional storytelling task. Lastly, while the NAO robot was equipped with a cable in the online perception study, it was controlled via a non-cabled connection in the live interaction study.
Current taxonomies structuring and analyzing HRI address influential variables such as actual embodied versus online depicted exposure to robots or the role of different tasks [12,13,14]. More minor practical issues, such as using a noticeable cable to connect the robot to a host computer or a power source, and their influences are less discussed. Due to restrictions common in university networks, the obligation to use cabled connections for robots’ internet access is presumably a widespread practical limitation. De-cabling a robot advances the perception of its being physically autonomous [15]. Robot autonomy—the capacity of a robot to self-regulate actions and refine or modify tasks and behavior according to its own rules [16,17]—depends on the implemented control system rather than the existence of a cable connection. However, perceived robot autonomy may differ from the actual autonomy of a robot due to a visible cable connection. Harbers et al. [18] found that people rate a robot as more autonomous based on its ability to disobey commands and the physical distance between a robot and its user or operator. These findings reflect Smithers’ [16] assumption of autonomy for mobile robots: “Here then the general idea seems to range from a mobile robot without a power supply cable to a mobile robot that has some independence of operation, in other words, some kind and degree of self-regulation or control.” [16] (p. 90). Even though this assumption was made back in 1997, the cabling of robots has hardly been addressed as an autonomy-related variable in HRI studies.
In many experiments, the robot is connected to a power source or a host computer via a clearly visible cable (e.g., [19,20,21,22,23,24,25,26]). Sometimes this is not even mentioned in the study description, which makes a comparison between cabled and non-cabled robots even more complicated. The current work aims to close this research gap by investigating the impact of visible cabling on robot perception and whether this methodological issue is decisive for the results and transferability of an experiment. Moreover, the task content, which was either technical [12] or social [10], might matter for the influence of cabling a robot. Therefore, the study incorporated different tasks to investigate possible interaction effects.

2. Related Work

2.1. Robot Autonomy

Following Onnasch and Roesler’s [12] taxonomy of HRI, a robot should be classified by its task, its morphology, and its degree of autonomy. Autonomy—from the Greek autos (“self”) and nomos (“law”) [27]—can be defined as self-regulating behavior according to rules generated by oneself and is thus closely related to self-determination [16,28,29,30]. Folk understandings of autonomy often refer to “doing it my way” or “thinking for myself” [28] (p. 3). This self-reflection [28] and the attribution of the locus of initiation to oneself are crucial for autonomous behavior [30]. Concerning robots, Beer et al. [27] defined autonomy as the “extent to which a robot can sense its environment, plan based on that environment, and act upon that environment with the intent of reaching some task-specific goal (either given to or created by the robot) without external control” [27] (p. 77). The degree of robot autonomy can technically be specified for (1) information acquisition, (2) information analysis, (3) decision-making, and (4) action implementation [12]. However, this specification of autonomy mainly focuses on the internal processes of a robotic system, which—except for action implementation—remain hidden from users in a black box. Sheridan and Verplank [31] suggested a continuum of levels of automation, interpreted as autonomy levels by Goodrich and Schultz [5]. The continuum is anchored by complete control by a human and complete autonomy, focusing on processes outside this black box, namely, the input required, approval by the human operator before action implementation, and feedback from the system afterward. From teleoperation to fully autonomously acting systems, autonomy influences the interaction between robots and humans [27]. Robots implementing low levels of autonomy on this continuum would be very time-consuming to operate. In addition, as the level of autonomy increases, the mental workload of the human user, i.e., “the relation between the function relating the mental resources demanded by a task and those resources available to be supplied by the human operator” [32] (pp. 145–146), decreases [27]. Schwarz et al. [33] therefore suggest that users want personal robots to be as autonomous as possible, although ambivalent attitudes towards more autonomous robots have also been found [34].
However, although robot autonomy is shaped by the technical and practical processes described by Onnasch and Roesler [12] and Sheridan and Verplank [31], human users’ perceived autonomy of robots can strongly differ from their actual autonomy. “What nearly all of the social robots have in common is the—most of the times—false message they transmit concerning two features […]; the freedom of their actions, and their degree of autonomy” [35] (p. 280), pretending to be more autonomous than they actually are. HRI researchers often utilize pre-programmed interactions or the Wizard-of-Oz (WoZ) technique [11,36]. Realizing faked autonomy with the WoZ technique, a human experimenter remotely controls a robot without informing the human interaction partner. This puppeteering simulates future HRI and allows for the iterative design of only partly implemented systems [37]. Mainly (1) natural language processing, such as giving appropriate answers in an interaction, is faked using the Wizard-of-Oz method, followed by (2) non-verbal behavior and (3) navigation and mobility [37], all three simulating autonomous behavior. Studies show that awareness of this remote control and, thus, the perceived robot autonomy influence HRI. For example, customers who realized that a robot was teleoperated in its language processing reported less enjoyment and less intention for future use compared to customers who thought that the robot acted autonomously [38]. In addition, perceived robot autonomy can be altered merely by manipulating the robot’s visual appearance.
Song et al. [39] simulated the autonomy of a robot delivering flyers in a mall by showing or not showing a picture of an operator’s face on the robot’s control box. Although the robot’s behavior was teleoperated in all conditions, when an operator photo was provided, people were more likely to stop by and talk with the robot and also picked up more flyers compared to when the photo was absent. This finding indicates a behavior change due to perceived teleoperation or autonomy, respectively. Following Smithers [16], who suggests that a visible cable connecting a robot limits its independence of operation, the absence of such a cable may also function as an aspect of visual appearance influencing perceived robot autonomy. Given the effects of perceived autonomy on HRI, this unnoticed circumstance may impact studies’ results. In order to address this research gap, we investigated the effects of recognizable cabling on perceived robot autonomy. In line with Smithers’ assumption, we expect a cabled robot to be perceived as less autonomous and a non-cabled robot to be perceived as more autonomous.
H1: 
A non-cabled robot is perceived as more autonomous than a cabled robot.
In a WoZ-based study, the remotely controlled robot is only a proxy mediating communication between participant and experimenter. Therefore, Riek [37] refers to WoZ-driven interactions as human–human interaction rather than HRI, following the robot-as-medium paradigm, in which a robot is perceived as a medium of communication between people. Direct interaction with an autonomous robot would then resemble the robot-as-source paradigm, in which the robot is perceived as the unmediated source of information [35,40]. We postulate that cabling influences the perceived locus of information, expecting a visible cable to represent a connection to a human operator or an operating system.
H2: 
A non-cabled robot is perceived more as the source of the story than a cabled robot.
“Living systems are the prototypes of autonomous systems” [41] (p. 1); thus, it is not surprising that, following the autonomy hypothesis, autonomous robot behavior is perceived as more humanlike [42] and also yields qualities of human–human interaction [43]. Attributing human attributes to an autonomous robot, i.e., anthropomorphizing it, might resolve uncertainty about its behavior [42]. In addition, other variables of robot perception are empirically related to robot autonomy. For example, studies indicated a positive effect of robots framed as fully autonomous on perceived intelligence compared to robots framed as teleoperated [44]. Moreover, perceived robot autonomy showed a positive trend on empathy [45], and participants were quicker to help a robot in autonomous mode than in teleoperated mode, with the modes depicted by LED color [46]. Regarding human–robot teams, the more autonomously a robot acted, the higher the chance that it was perceived as a peer or teammate [27]. In addition, users reported better collaboration, more trust, and an increased understanding of the task when collaborating with an autonomous robot compared to a less autonomous robot [47]. Similarly, comparing a robot that autonomously appraises art to a teleoperated robot transferring user appraisals, Lee et al. [48] showed that the autonomous robot was evaluated as more trustworthy. Following these findings on the positive effects of perceived autonomy, we also investigated the direct effects of a visible cable on general robot perception. Due to their dominance in related studies, we use the Godspeed questionnaires [11] to operationalize these perceptions implicitly associated with autonomy.
H3: 
A non-cabled robot leads to higher perceived anthropomorphism, animacy, intelligence, and likeability than a cabled robot.
However, adverse effects of (perceived) autonomy have also been reported. In particular, robot autonomy was found to negatively affect perceived safety compared to teleoperation [49,50]. Manipulating the framing of a WoZ-controlled search-and-rescue robot as either being teleoperated by an experimenter or acting autonomously, Dole et al. [49] reported that participants who supposedly interacted with the autonomous robot felt significantly less safe than participants who thought they were interacting with a human experimenter mediated by the robot. Similarly, in a focus group setting by Weiss et al. [50], participants discussed working together with a humanoid robot they saw in a video—either teleoperated or acting autonomously. While both groups indicated higher perceived safety when imagining working with a teleoperated robot compared to an autonomous one, the group watching the autonomous robot stated they would prefer collaborating with the teleoperated robot. In addition, a positive relationship between autonomy and risk perception was identified [51]. Based on these findings, we postulate a positive effect of visible cabling on perceived safety.
H4: 
A non-cabled robot leads to lower perceived safety than a cabled robot.

2.2. Robot-Task Fit

HRI is a highly interdisciplinary field investigating robot deployment for many different tasks. Social robots’ ability to implement social behavior and evoke social relationships makes them exciting actors in many fields with different communicative tasks, such as entertainment and education [52,53]. Regarding education, social robots communicate learning content, especially STEM-related content on the robot itself or other science-related topics [54], striving for cognitive stimulation [12]. Nevertheless, emotional stimulation can also be achieved by entertaining robots [12], e.g., by utilizing robotic storytelling [55,56]. However, these differing communicative tasks also entail different demands. For example, people chose a robot framed as capable of emotion recognition more often for social tasks, whereas an emotionless robot was chosen more often for an arithmetic task. These results are surprising given the identical mechanical design of both robots and the fact that the emotional robot was framed as being just as capable of the arithmetic task as the emotionless robot [57]. These findings can be subsumed under Goetz’s matching hypothesis [58], which states that the appearance and behavior of a robot should match the task and situation, since they influence people’s perceptions of the robot. In turn, perceived capabilities influencing the evaluation of agents are moderated by task and environment, e.g., whether an interaction scenario involves object manipulation in the real world [59].
A robot’s actual autonomy may change between tasks and environments due to their different requirements, e.g., the need for special sensors [27]. However, robots will also be required to adapt by switching between autonomy levels to match user expectations and to adhere to the social models human users expect [27]. Autonomy might trigger mind perception, i.e., the attribution of being capable of experiencing, expressing, and action planning [34]; being perceived as mindful is closely connected to autonomy [34], and perceived autonomy is in turn associated with mind perception. Thus, social tasks may require increased mind perception compared to technical tasks. This would lead to a better task fit of an autonomous robot perceived as a source for a social task, while a robot perceived as a medium, with less mind attributed, might be sufficient for technical tasks. On this theoretical basis, when comparing a social robot targeting emotional or cognitive stimulation [12] in a non-interactive storytelling scenario—operationalized by the content of the storytelling, a social story or a technical text—a visible cable may lead to different effects on robot perception due to the varying importance of mind perception for robot-task fit (see [57]).
H5: 
The negative effect of the cabling is more pronounced if the social robot tells social rather than technical content.
To the best of our knowledge, the relationship between visible cabling, perceived robot autonomy, and general robot perception has not been investigated yet.

3. Methods

The experiment was preregistered via the Open Science Framework (https://osf.io/qw724, accessed on 14 December 2022) and approved by the local ethics committee of the Institute for Human–Computer-Media at the University of Würzburg (vote #090222). Due to the COVID-19 pandemic, the survey was conducted online using video recordings, as suggested by Feil-Seifer et al. [7].

3.1. Participants

A sample size of 200 participants was targeted to obtain 0.80 power to detect a small to medium effect size of 0.20 at the standard 0.05 alpha error probability. Participants were recruited from the students enrolled at the University of Würzburg and the Technical University of Berlin using the respective online recruitment systems. They received credits mandatory for obtaining their final degree. The selection process resulted in a convenience sample that mainly incorporated students. To achieve the final sample size, data of 252 participants were collected. A manipulation check regarding the presence of the cable led to the exclusion of 30 participants. Moreover, 17 participants had to be excluded because they failed a story content attention check. In addition, we excluded one participant who stated that this was the second time they conducted the survey and four participants who indicated that they have neither normal nor corrected-to-normal vision. Therefore, the final sample consisted of 200 participants (mean age = 23.45, SD = 7.23; 79% female, 21% male, 0% non-binary) who were evenly distributed between the four conditions (n = 50 each).
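The targeted sample size is consistent with a standard a priori power computation for a between-subjects F-test with one numerator degree of freedom. The following is a minimal sketch of such a computation, not the authors' documented procedure, assuming Cohen's f = 0.20, alpha = 0.05, and the four cells of the 2 × 2 design:

```python
# Power of an F-test in a 2x2 between-subjects ANOVA (numerator df = 1).
# Sketch under assumed inputs: Cohen's f = 0.20, alpha = .05, four cells.
from scipy.stats import f as f_dist, ncf

def anova_power(n_total, effect_f=0.20, alpha=0.05, df_num=1, n_cells=4):
    """Power via the noncentral F distribution with lambda = f^2 * N."""
    df_den = n_total - n_cells            # error df, e.g., 200 - 4 = 196
    ncp = effect_f ** 2 * n_total         # noncentrality parameter
    f_crit = f_dist.ppf(1 - alpha, df_num, df_den)
    return 1 - ncf.cdf(f_crit, df_num, df_den, ncp)

print(round(anova_power(200), 3))  # about 0.80, matching the target above
```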

3.2. Task and Apparatus

We implemented two different stories using the robot Pepper [60] and the software Choregraphe [61], version 2.5.10.7. For the technical content, we chose How do robots work? (https://courses.reaktor.education/de/courses/emerging-technologies/robotics-and-automation/how-do-robots-work/, accessed on 31 August 2022), which was translated into German and simplified in terms of content and speech, whereas the story Die Maus, die sich fledermauste [62] (The mouse that became a bat) was used for the social content conditions. Both stories were implemented using a speech rate of 85 and autonomous robot behavior, such as random blinking and idle movement. Since gestures matching the spoken text would differ enormously between the two texts, neither of the stories was accompanied by gestures, to avoid side effects elicited by differing movements. Two videos were recorded per story, one with a visible cable and one without, resulting in four video stimuli of approximately four minutes each. At the beginning of each video, Pepper stood facing sideways and turned its head to a centered position after the start, as displayed in Figure 1. As soon as its gaze was directed at the camera, it began to speak.
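The storytelling behavior itself was authored in Choregraphe; the following NAOqi Python sketch merely approximates the described configuration (speech rate of 85, autonomous blinking and idle movement, no gestures). The robot address and story text are placeholders, and the service names assume NAOqi 2.5:

```python
# Approximate reconstruction of the storytelling setup; the original was
# built with Choregraphe boxes, not this script. IP and text are placeholders.
import qi

session = qi.Session()
session.connect("tcp://<pepper-ip>:9559")  # placeholder robot address

tts = session.service("ALTextToSpeech")
tts.setParameter("speed", 85)  # speech rate used for both stories

# Keep only subtle autonomous behaviors: blinking and idle movement.
session.service("ALAutonomousBlinking").setEnabled(True)
session.service("ALBackgroundMovement").setEnabled(True)
# Suppress speech-accompanying gestures to avoid movement side effects.
session.service("ALSpeakingMovement").setEnabled(False)

tts.say("<story text>")  # placeholder for the translated story
```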

3.3. Design

To investigate the effects of visible cabling on robot perception in dependence on the context, a 2 (cabled vs. non-cabled) × 2 (technical vs. social content) between-subjects design was applied.

3.4. Dependent Measures

Using single items and a six-point semantic differential, we operationalized perceived autonomy (“The robot was…”, 1—“completely remote controlled”, 6—“completely autonomous”) and source of information (“The robot was…”, 1—“a device (like a CD player)”, 6—“the narrator of the story”).
Robot perception was measured using the Godspeed questionnaire series [11], including the five dimensions of Anthropomorphism, Animacy, Likeability, Perceived Intelligence, and Perceived Safety, which are all related to autonomy due to implicit expectations. All items are answered on a five-point semantic differential anchored by two adjectives, querying both the choice of one anchor and the intensity of that choice. The Anthropomorphism scale comprises five items such as “machinelike” vs. “humanlike”, with an internal consistency of 0.88 to 0.93 as reported by Bartneck et al. [11]. Further, the Animacy scale includes six items such as “dead” vs. “alive”, with an internal consistency of 0.70. Likeability is measured using five items such as “awful” vs. “nice”; the internal consistency reported for this scale is 0.87 to 0.92. In addition, the Perceived Intelligence scale comprises five items such as “unintelligent” vs. “intelligent”, with an internal consistency of 0.75 to 0.77. Last, Perceived Safety is measured with three items such as “agitated” vs. “calm”; its reliability is not reported by Bartneck et al. [11].
Two questions regarding details of the respective story were asked, serving as an attention check. The manipulation was checked using a binary item (“Was the robot in the video connected to a cable?”, “yes” vs. “no”).

3.5. Procedure

Participants were randomly assigned to one of the four conditions when accessing the online survey hosted via LimeSurvey [63]. After providing informed consent, they watched the respective video described in Section 3.2, followed by an attention check. Subsequently, participants answered the single items on perceived role and autonomy of the robot and completed the Godspeed questionnaire. Lastly, they provided demographic data, answered the manipulation check, and were invited to give comments. After finishing the survey, participants were thanked and debriefed.

4. Results

All outcome variables were analyzed via two-way between-subjects ANOVAs to investigate the main effects of cabling (i.e., cabled vs. non-cabled) and story content (i.e., social vs. technical), as well as the interaction effect of both factors. Regarding assumption checks, Levene’s tests indicated equality of variances for all variables. Shapiro-Wilk tests confirmed normality of the data only for likeability; however, the F-statistic is considered robust against violations of this assumption [64].
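For readers who want to reproduce this pipeline from the shared OSF data, a minimal sketch follows; the file and column names (cabling, content, autonomy) are our assumptions, not documented identifiers:

```python
# Sketch of the reported analysis: assumption checks plus a two-way
# between-subjects ANOVA with generalized eta squared. Names are assumed.
import pandas as pd
from scipy.stats import levene, shapiro
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("study_data.csv")  # placeholder file name

# Assumption checks, shown here for the perceived autonomy item.
cells = [g["autonomy"].to_numpy() for _, g in df.groupby(["cabling", "content"])]
print(levene(*cells))            # homogeneity of variances across the 4 cells
print(shapiro(df["autonomy"]))   # normality; the F-test is fairly robust [64]

# 2 (cabling) x 2 (content) ANOVA with interaction.
model = ols("autonomy ~ C(cabling) * C(content)", data=df).fit()
table = anova_lm(model, typ=2)

# In a fully between-subjects design, generalized eta squared reduces to
# SS_effect / (SS_effect + SS_error).
ss_error = table.loc["Residual", "sum_sq"]
table["eta_G_sq"] = table["sum_sq"] / (table["sum_sq"] + ss_error)
print(table)
```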

4.1. Autonomy and Role Perception

The analysis of autonomy revealed neither a significant main effect of cabling (F(1, 196) = 0.01, p = 0.953, ηG² = 0.001) nor of story content (F(1, 196) = 0.03, p = 0.860, ηG² = 0.001). Furthermore, the analysis revealed no significant interaction effect, F(1, 196) = 0.03, p = 0.860, ηG² = 0.001. In all conditions (i.e., cabled × technical (M = 2.28, SD = 1.07), cabled × social (M = 2.34, SD = 1.27), non-cabled × technical (M = 2.32, SD = 1.08), and non-cabled × social (M = 2.32, SD = 1.36)), the mean values were quite close to each other. Comparable results were revealed by the analysis of role perception, which showed neither a significant main effect of cabling (F(1, 196) = 0.40, p = 0.530, ηG² = 0.002) nor of story content (F(1, 196) = 0.01, p = 0.955, ηG² = 0.001). Furthermore, no significant interaction effect was revealed (F(1, 196) = 0.01, p = 0.955, ηG² = 0.001). Again, the mean values were descriptively quite close to each other in all conditions (i.e., cabled × technical (M = 2.16, SD = 1.33), cabled × social (M = 2.18, SD = 1.31), non-cabled × technical (M = 2.06, SD = 1.04), and non-cabled × social (M = 2.06, SD = 1.25)).

4.2. General Perception

The Godspeed I–IV scales of anthropomorphism (α = 0.81), animacy (α = 0.86), likeability (α = 0.88), and intelligence (α = 0.86) showed good internal consistencies. However, the Godspeed V: Safety scale showed not only an unacceptable internal consistency (α = 0.16) but also that some items correlated positively and others negatively with the total scale. To ensure the same direction as the other scales (i.e., higher values correspond to higher perceived safety), the two semantic differentials calm vs. agitated and quiescent vs. surprised were reverse coded to match the anxious vs. relaxed semantic differential. This led to an improved but still questionable internal consistency (α = 0.59). To assess the general perception, mean values of the respective Godspeed scales were calculated (see Figure 2).
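The reliability handling described above amounts to reverse-coding two of the three safety items and recomputing Cronbach's alpha; a sketch with assumed item column names and the 5-point response range:

```python
# Reverse coding and Cronbach's alpha for Godspeed V: Safety.
# Item column names and the 1-5 response range are assumptions.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the sum)."""
    k = items.shape[1]
    item_var = items.var(ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

df = pd.read_csv("study_data.csv")  # placeholder file name
safety = df[["safety_calm", "safety_quiescent", "safety_relaxed"]].copy()

# Recode "calm vs. agitated" and "quiescent vs. surprised" so that higher
# values mean higher perceived safety (6 - x on a 1-5 differential).
for col in ["safety_calm", "safety_quiescent"]:
    safety[col] = 6 - safety[col]

print(cronbach_alpha(safety))  # reported to rise from .16 to .59
```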
The analysis of perceived anthropomorphism revealed neither a significant main effect of cabling (F(1, 196) = 0.97, p = 0.327, ηG² = 0.005) nor of story content (F(1, 196) = 1.24, p = 0.267, ηG² = 0.006). In addition, no significant interaction effect was found (F(1, 196) = 1.15, p = 0.286, ηG² = 0.006). The analysis of animacy likewise showed neither a significant main effect of cabling (F(1, 196) = 3.25, p = 0.073, ηG² = 0.016) nor of story content (F(1, 196) = 0.01, p = 0.933, ηG² = 0.001). Again, no significant interaction effect was revealed (F(1, 196) = 1.96, p = 0.163, ηG² = 0.010).
For likeability, the analysis showed no significant main effect of cabling (F(1, 196) = 0.32, p = 0.575, ηG² = 0.002) and no significant interaction effect (F(1, 196) = 0.02, p = 0.899, ηG² = 0.001). However, the main effect of story content just failed to reach the conventional level of significance (F(1, 196) = 3.62, p = 0.059, ηG² = 0.018). On a descriptive level, the robot was liked more if it told a technical story (M = 3.59, SD = 0.76) compared to a social one (M = 3.38, SD = 0.79). In line with this trend, the analysis of intelligence revealed that the robot was perceived as significantly more intelligent if it told a technical story (M = 3.47, SD = 0.80) compared to a social one (M = 3.05, SD = 0.81), F(1, 196) = 13.15, p < 0.001, ηG² = 0.063. For intelligence, neither the main effect of cabling (F(1, 196) = 0.01, p = 0.958, ηG² = 0.001) nor the interaction effect (F(1, 196) = 0.68, p = 0.411, ηG² = 0.003) was significant.
Lastly, the analysis of safety revealed no significant main effects of cabling (F(1, 196) = 0.83, p = 0.364, ηG² = 0.004) or story content (F(1, 196) = 1.25, p = 0.264, ηG² = 0.006). Remarkably, the interaction effect of cabling and story content (F(1, 196) = 3.84, p = 0.051, ηG² = 0.019) just failed to reach the conventional level of significance. Descriptively, no pronounced difference in perceived safety occurred between the technical (M = 3.84, SD = 0.63) and social (M = 3.92, SD = 0.72) story if the robot was equipped with a cable. However, for the non-cabled robot, a trend emerged that the robot was perceived as less safe when telling a social (M = 3.65, SD = 0.71) compared to a technical (M = 3.94, SD = 0.63) story.
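A standard way to probe such a borderline interaction, which the reported analysis stops short of, would be simple-effects contrasts of story content within each cabling condition. This is a hypothetical follow-up, not an analysis reported here, reusing the assumed column names from the sketches above:

```python
# Hypothetical simple-effects probe of the cabling x content interaction
# on perceived safety. Column names are the assumed ones used earlier.
import pandas as pd
from scipy.stats import ttest_ind

df = pd.read_csv("study_data.csv")  # placeholder file name

for cab, sub in df.groupby("cabling"):
    technical = sub.loc[sub["content"] == "technical", "safety"]
    social = sub.loc[sub["content"] == "social", "safety"]
    t, p = ttest_ind(technical, social)
    df_err = len(technical) + len(social) - 2
    print(f"{cab}: t({df_err}) = {t:.2f}, p = {p:.3f}")
```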

5. Discussion

The fact that the methodology itself can be decisive for the results and transferability of studies is well known in psychological research and related fields such as HRI [14,65]. However, due to the highly interdisciplinary nature of the field, it often remains unclear how exactly different methodological aspects affect the interaction of humans and robots. Therefore, the current study aimed to address the influence of one specific and frequently overlooked methodological aspect—the way a robot is connected to its host computer. Moreover, as social robots are applied to a plethora of tasks, we wanted to shed light on the role of visible cables in different social tasks [58,66].
Foremost, we assumed that a non-cabled robot is perceived as more autonomous than a cabled robot (H1). Even though it seems relatively intuitive that perceived autonomy differs between cabled and non-cabled robots [16], this hypothesis was not supported by the results of our experiment. We found no difference between the cabled and non-cabled robots regarding perceived autonomy. In line with this result, we did not find evidence for a difference in the perception of the robot as a source vs. medium of information (H2). This is somewhat surprising, as an unmediated (i.e., non-cabled) robot was expected to be perceived more as a source compared to a mediated (i.e., cabled) one. Both results could be related to the online method used in our experiment. As physical embodiment has an effect on the perception of social robots [14,67,68], the absence of evidence for differences in autonomy and role perception might be related to the depicted robot exposure. Moreover, participants knew that they would evaluate a video and not a live broadcast of a robot. Since there was no opportunity for real interaction, the ratings may refer to the video as a replay medium rather than to the robot itself. This possible explanation is further supported by the overall low values of autonomy and role perception independent of the experimental condition. All mean values were around two on a six-point scale ranging from one (i.e., “completely remote controlled” and “a device (like a CD player)”) to six (i.e., “completely autonomous” and “the narrator of the story”). In addition, our self-constructed items might not have been sensitive enough to measure possible minor differences in perceived autonomy. Future research should, therefore, validate the items by comparing them to more extensive scales such as the Perception of Autonomy Scale [51].
Furthermore, the cabling did not significantly influence autonomy-related perceptions of the robot (H3 and H4) as measured by the Godspeed scales [11]. This is not surprising for animacy, as autonomy is one aspect of the lifelikeness assessed via this Godspeed scale [11]. In addition, Godspeed II: animacy and Godspeed I: anthropomorphism are highly correlated and even share one item (i.e., artificial vs. lifelike) [11]. Again, overall relatively low values around two on a five-point scale were assigned for both animacy and anthropomorphism. Likeability, intelligence, and safety values were descriptively higher in all conditions compared to anthropomorphism and animacy. However, the cabling of the robot did not influence those scales either. Even though online HRI studies with depictions of robots have been particularly popular in times of the COVID-19 pandemic [7], a video-based online approach might not be appropriate for investigating a physical feature such as cabling. Related work indicates a positive effect of physical embodiment compared to videos in terms of trust, natural interaction, quality of interaction [69], and—especially important with regard to our study—anthropomorphism [70]. Therefore, future research should investigate the influence of cabling in real HRI.
Nevertheless, online studies seem to be suitable for investigating robot-task fit [57,58,71]. We assumed an interaction effect of cabling and story condition (H5). However, only perceived safety showed a trend towards an interaction effect. In the non-cabled condition, the robot telling the social story tended to be perceived as less safe than the robot telling the technical story. Perceived safety is operationalized via two opposing concepts: the level of danger and the level of comfort during HRI [11]. As no physical danger existed in our video-based scenario, people seem to have felt less comfortable with the non-cabled robot when it told social rather than technical content. This is surprising at first glance, especially as no main effects with regard to autonomy and role perception were revealed. A possible reason for less comfort in this condition might be the overall low level of perceived autonomy. In particular, low autonomy and, in turn, a low perception of social capabilities seem not to fit a task requiring a certain degree of sociability [57,58], especially if the robot is not connected to a host computer. Even though storytelling is generally communicative and social in nature [72], the sociability of a story might still determine whether the storyteller is perceived as appropriate for it. This assumption should be treated with caution, as the interaction effect failed to reach the conventional level of significance. The relationship between cabling and the required sociability of the task should therefore be investigated in future research. Moreover, it should be examined how autonomy and cabling, in particular, influence the perceived social capabilities of the robot.
With regard to robot-task fit, our study revealed some unexpected results. One might assume that Pepper, as an anthropomorphic robot in terms of its appearance [73], would fit better to social than to technical content. However, the opposite was revealed. The robot telling the social story was perceived as significantly less intelligent than the robot telling the technical story. Further, a trend that failed to reach the conventional level of significance indicated that the robot telling the social content was liked less than the robot telling the technical content. This might be associated with the story content itself. Even though the performance of the robot was the same in both story conditions, the authenticity of the two stories might have been judged differently [74]. That is, a technical narrative of the robot might have been perceived as more authentic than a social one. In addition, both stories were quickly understandable. However, one might argue that the tendency to like the social content less than the technical content might be related to the involved participants. First, the technical content might have been perceived as more sophisticated than the social content, which is already suggested by the results on perceived intelligence. As our participants were mainly students in their early twenties, the social story might have needed to be more advanced to be liked as much as the technical one. Second, the type of content differed considerably. The social content was a story to entertain children, whereas the technical content was written to inform the public about how robots work. Thus, the two types of stories might address different target groups. This finding further illustrates that convenience samples such as students might not generalize to more specific groups such as children [75] and that the interaction content always needs to be adapted to the target group. As stated by Weiss and Bartneck [36], the type of interaction scenario can cause different ratings of likability and intelligence.
Another reason might have been that the robot’s reduced social cues—no gestures and a mechanical voice—did not match the social nature of the task [58]. Even though Pepper’s appearance is anthropomorphic, the communication and movements, which were scaled down to idle movement and blinking, might not have matched the expectations set by the appearance [12], especially since expressive gestures are related to the attribution of human traits [76]. Likewise, the lack of emotive speech might have impaired participants’ interest [77]. Future studies should incorporate a match between appearance and non-verbal as well as verbal cues, as this could lead to a more believable and authentic HRI. Introducing movements with more degrees of freedom and actual mobility, however, might change the interaction scenario and the possibility of operating the robot via a cable due to its limited range. In those scenarios, the cable might not only influence the perceived autonomy with regard to information processing and decision making but also the actual action implementation. This again illustrates the importance of taking the interaction scenario into account. Other research has already illustrated that perceived intelligence in particular is highly affected by the interaction scenario [36]. Social robots for cognitive and emotional stimulation might therefore have different design requirements depending on the specific task in order to be perceived as intelligent. Whereas technical education topics might be suitable for a more technical robot, social topics such as storytelling [78] might need more social cues on the part of the robot [67]. Thus, the findings show that it is crucial not only to investigate general preferences for robot appearance and behavior [58,71] for different tasks and domains but also to consider the sublevel of task content.

6. Conclusions

The conducted online survey aimed to investigate the influence of cabling and story content on the perceived autonomy of a social robot. Neither autonomy nor associated perceptions were influenced by the cabling. However, our results indicated that the robot telling technical content was perceived as more intelligent and tended to be liked more than the robot telling a social story. Our study therefore highlights the importance of shedding light on the specific task performed by the robot. In particular, the multi-faceted human–robot communication comprises aspects of the sender and the receiver, but also the message itself and its fit to both interaction partners. A mismatch of sender and message, in terms of technical robot communication and social content, might have led to a more negative perception of the robot telling the social story. The same might hold true for a mismatch of receiver and message, as stories designed for children might have further contributed to a more negative perception of the robot. To account for the methodological drawbacks of this video-based online study, a laboratory study should be considered to investigate how cabling influences human–robot communication.

Author Contributions

Conceptualization, E.R., S.C.S. and L.O.; methodology, E.R. and S.C.S.; software, S.C.S.; validation, S.C.S. and E.R.; formal analysis, E.R.; investigation, S.C.S. and E.R.; resources, E.R. and S.C.S.; data curation, E.R. and S.C.S.; writing—original draft preparation, E.R. and S.C.S.; writing—review and editing, E.R., S.C.S., L.O. and B.L.; visualization, E.R. and S.C.S.; supervision, L.O.; project administration, E.R. and S.C.S. All authors have read and agreed to the published version of the manuscript.

Funding

This publication was supported by the Open Access Publication Fund of the University of Wuerzburg.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the local ethics committee of the Institute for Human–Computer-Media at the University of Würzburg (vote #090222).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data can be obtained via the Open Science Framework at https://osf.io/tasqx/ (accessed 31 August 2022).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Breazeal, C.; Dautenhahn, K.; Kanda, T. Social Robotics. In Springer Handbook of Robotics; Siciliano, B., Khatib, O., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 1935–1972. [Google Scholar] [CrossRef]
  2. Lugrin, B. Introduction to Socially Interactive Agents. In The Handbook on Socially Interactive Agents; Lugrin, B., Pelachaud, C., Traum, D., Eds.; ACM: New York, NY, USA, 2021; pp. 1–18. [Google Scholar]
  3. Sheridan, T.B. Human–Robot Interaction: Status and Challenges. Hum. Factors J. Hum. Factors Ergon. Soc. 2016, 58, 525–532. [Google Scholar] [CrossRef] [PubMed]
  4. Murphy, R.; Nomura, T.; Billard, A.; Burke, J. Human—Robot Interaction. IEEE Robot. Autom. Mag. 2010, 17, 85–89. [Google Scholar] [CrossRef]
  5. Goodrich, M.A.; Schultz, A.C. Human–Robot Interaction: A Survey. Found. Trends-Hum.-Comput. Interact. 2007, 1, 203–275. [Google Scholar] [CrossRef]
  6. Young, J.E.; Sung, J.; Voida, A.; Sharlin, E.; Igarashi, T.; Christensen, H.I.; Grinter, R.E. Evaluating Human–Robot Interaction. Int. J. Soc. Robot. 2011, 3, 53–67. [Google Scholar] [CrossRef]
  7. Feil-Seifer, D.; Haring, K.S.; Rossi, S.; Wagner, A.R.; Williams, T. Where to next? The impact of COVID-19 on human–robot interaction research. J. Hum.-Robot Interact. 2020, 10, 1–7. [Google Scholar] [CrossRef]
  8. Onnasch, L.; Roesler, E. Anthropomorphizing Robots: The Effect of Framing in Human–Robot Collaboration; SAGE Publications Inc.: Thousand Oaks, CA, USA, 2019; Volume 63, pp. 1311–1315. [Google Scholar] [CrossRef] [Green Version]
  9. Ho, C.C.; MacDorman, K.F. Revisiting the uncanny valley theory: Developing and validating an alternative to the Godspeed indices. Comput. Hum. Behav. 2010, 26, 1508–1518. [Google Scholar] [CrossRef]
  10. Steinhaeusser, S.C.; Gabel, J.J.; Lugrin, B. Your New Friend NAO vs. Robot No. 783—Effects of Personal or Impersonal Framing in a Robotic Storytelling Use Case. In Proceedings of the Companion of the 2021 ACM/IEEE International Conference on Human–Robot Interaction, Boulder, CO, USA, 8–11 March 2021; Bethel, C., Paiva, A., Broadbent, E., Feil-Seifer, D., Szafir, D., Eds.; ACM: New York, NY, USA, 2021; pp. 334–338. [Google Scholar] [CrossRef]
  11. Bartneck, C.; Kulić, D.; Croft, E.; Zoghbi, S. Measurement Instruments for the Anthropomorphism, Animacy, Likeability, Perceived Intelligence, and Perceived Safety of Robots. Int. J. Soc. Robot. 2009, 1, 71–81. [Google Scholar] [CrossRef]
  12. Onnasch, L.; Roesler, E. A Taxonomy to Structure and Analyze Human–Robot Interaction. Int. J. Soc. Robot. 2021, 13, 833–849. [Google Scholar] [CrossRef]
  13. Yanco, H.A.; Drury, J. Classifying human–robot interaction: An updated taxonomy. In Proceedings of the 2004 IEEE International Conference on Systems, Man and Cybernetics (IEEE Cat. No.04CH37583), The Hague, The Netherlands, 10–13 October 2004; pp. 2841–2846. [Google Scholar] [CrossRef] [Green Version]
  14. Roesler, E.; Manzey, D.; Onnasch, L. Embodiment Matters in Social HRI Research: Effectiveness of Anthropomorphism on Subjective and Objective Outcomes. J. Hum.-Robot Interact. 2022. Just Accepted. [Google Scholar] [CrossRef]
  15. Lund, H.H.; Miglino, O. From simulated to real robots. In Proceedings of the IEEE International Conference on Evolutionary Computation, Nagoya, Japan, 20–22 May 1996; pp. 362–365. [Google Scholar] [CrossRef]
  16. Smithers, T. Autonomy in robots and other agents. Brain Cogn. 1997, 34, 88–106. [Google Scholar] [CrossRef] [Green Version]
  17. de Santis, A.; Siciliano, B.; de Luca, A.; Bicchi, A. An atlas of physical human—robot interaction. Mech. Mach. Theory 2008, 43, 253–270. [Google Scholar] [CrossRef] [Green Version]
  18. Harbers, M.; Peeters, M.M.M.; Neerincx, M.A. Perceived Autonomy of Robots: Effects of Appearance and Context. In A World with Robots; Aldinhas Ferreira, M.I., Silva Sequeira, J., Tokhi, M.O., Kadar, E., Virk, G.S., Eds.; Springer International Publishing: Cham, Switzerland, 2017; Volume 84, pp. 19–33. [Google Scholar] [CrossRef]
  19. Donnermann, M.; Schaper, P.; Lugrin, B. Integrating a Social Robot in Higher Education—A Field Study. In Proceedings of the 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Naples, Italy, 31 August–4 September 2020; pp. 573–579. [Google Scholar] [CrossRef]
  20. Riedmann, A.; Schaper, P.; Lugrin, B. Integration of a social robot and gamification in adult learning and effects on motivation, engagement and performance. AI Soc. 2022. [Google Scholar] [CrossRef]
  21. Bono, A.; Augello, A.; Pilato, G.; Vella, F.; Gaglio, S. An ACT-R Based Humanoid Social Robot to Manage Storytelling Activities. Robotics 2020, 9, 25. [Google Scholar] [CrossRef]
  22. Häring, M.; Kuchenbrandt, D.; André, E. Would you like to play with me? In Proceedings of the 2014 ACM/IEEE International Conference on Human–Robot Interaction, Bielefeld, Germany, 3–6 March 2014; Sagerer, G., Imai, M., Belpaeme, T., Thomaz, A., Eds.; ACM: New York, NY, USA, 2014; pp. 9–16. [Google Scholar] [CrossRef]
  23. Berghe, R.; Haas, M.; Oudgenoeg-Paz, O.; Krahmer, E.; Verhagen, J.; Vogt, P.; Willemsen, B.; Wit, J.; Leseman, P. A toy or a friend? Children’s anthropomorphic beliefs about robots and how these relate to second–language word learning. J. Comput. Assist. Learn. 2021, 37, 396–410. [Google Scholar] [CrossRef]
  24. Gomez, R.; Szapiro, D.; Galindo, K.; Merino, L.; Brock, H.; Nakamura, K.; Fang, Y.; Nichols, E. Exploring Affective Storytelling with an Embodied Agent. In Proceedings of the 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), Vancouver, BC, Canada, 8–12 August 2021; pp. 1249–1255. [Google Scholar] [CrossRef]
  25. Mirnig, N.; Stadler, S.; Stollnberger, G.; Giuliani, M.; Tscheligi, M. Robot humor: How self-irony and Schadenfreude influence people’s rating of robot likability. In Proceedings of the 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), New York, NY, USA, 26–31 August 2016; pp. 166–171. [Google Scholar] [CrossRef] [Green Version]
  26. Mohammad, Y.; Nishida, T. Human-like motion of a humanoid in a shadowing task. In Proceedings of the 2014 International Conference on Collaboration Technologies and Systems (CTS), Minneapolis, MN, USA, 19–23 May 2014; pp. 123–130. [Google Scholar] [CrossRef]
  27. Beer, J.M.; Fisk, A.D.; Rogers, W.A. Toward a framework for levels of robot autonomy in human–robot interaction. J. Hum.-Robot. Interact. 2014, 3, 74–99. [Google Scholar] [CrossRef] [Green Version]
  28. Friedman, M. Autonomy, Gender, Politics; Oxford University Press: Oxford, England, 2003. [Google Scholar]
  29. Ryan, R.M.; Deci, E.L. Self-regulation and the problem of human autonomy: Does psychology need choice, self-determination, and will? J. Personal. 2006, 74, 1557–1585. [Google Scholar] [CrossRef]
  30. Deci, E.L.; Flaste, R. Why We Do What We Do: The Dynamics of Personal Autonomy; GP Putnam’s Sons: New York City, NY, USA, 1995. [Google Scholar]
  31. Sheridan, T.B.; Verplank, W.L. Human and Computer Control of Undersea Teleoperators; Massachusetts Inst of Tech Cambridge Man-Machine Systems Lab.: Cambridge, MA, USA, 1978. [Google Scholar]
  32. Parasuraman, R.; Sheridan, T.B.; Wickens, C.D. Situation Awareness, Mental Workload, and Trust in Automation: Viable, Empirically Supported Cognitive Engineering Constructs. J. Cogn. Eng. Decis. Mak. 2008, 2, 140–160. [Google Scholar] [CrossRef]
  33. Schwarz, M.; Stückler, J.; Behnke, S. Mobile teleoperation interfaces with adjustable autonomy for personal service robots. In Proceedings of the 2014 ACM/IEEE International Conference on Human–Robot Interaction, Bielefeld, Germany, 3–6 March 2014; Sagerer, G., Imai, M., Belpaeme, T., Thomaz, A., Eds.; ACM: New York, NY, USA, 2014; pp. 288–289. [Google Scholar] [CrossRef]
  34. Stapels, J.G.; Eyssel, F. Robocalypse? Yes, Please! The Role of Robot Autonomy in the Development of Ambivalent Attitudes Towards Robots. Int. J. Soc. Robot. 2022, 14, 683–697. [Google Scholar] [CrossRef]
  35. Vlachos, E.; Schärfe, H. Social Robots as Persuasive Agents. In Social Computing and Social Media; Hutchison, D., Kanade, T., Kittler, J., Kleinberg, J.M., Kobsa, A., Mattern, F., Mitchell, J.C., Naor, M., Nierstrasz, O., Pandu Rangan, C., et al., Eds.; Springer International Publishing: Cham, Switzerland, 2014; Volume 8531, pp. 277–284. [Google Scholar] [CrossRef] [Green Version]
  36. Weiss, A.; Bartneck, C. Meta analysis of the usage of the Godspeed Questionnaire Series. In Proceedings of the 2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Kobe, Japan, 31 August–1 September 2015; pp. 381–388. [Google Scholar] [CrossRef]
  37. Riek, L. Wizard of Oz Studies in HRI: A Systematic Review and New Reporting Guidelines. J. Hum.-Robot. Interact. 2012, 1, 119–136. [Google Scholar] [CrossRef] [Green Version]
  38. Baba, J.; Sichao, S.; Nakanishi, J.; Kuramoto, I.; Ogawa, K.; Yoshikawa, Y.; Ishiguro, H. Teleoperated Robot Acting Autonomous for Better Customer Satisfaction. In Proceedings of the Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; Bernhaupt, R., Mueller, F.F., Verweij, D., Andres, J., McGrenere, J., Cockburn, A., Avellino, I., Goguey, A., Bjørn, P., Zhao, S., et al., Eds.; ACM: New York, NY, USA, 2020; pp. 1–8. [Google Scholar] [CrossRef]
  39. Song, S.; Baba, J.; Nakanishi, J.; Yoshikawa, Y.; Ishiguro, H. Costume vs. Wizard of Oz vs. Telepresence: How Social Presence Forms of Tele-operated Robots Influence Customer Behavior. In Proceedings of the 2022 ACM/IEEE International Conference on Human–Robot Interaction, Sapporo, Japan, 7–10 March 2022; pp. 521–529. [Google Scholar]
  40. Sundar, S.S.; Nass, C. Source Orientation in Human–Computer Interaction. Commun. Res. 2000, 27, 683–703. [Google Scholar] [CrossRef]
  41. Bekey, G.A. Autonomous Robots: From Biological Inspiration to Implementation and Control; MIT Press: Cambridge, MA, USA, 2005. [Google Scholar]
  42. Millo, F.; Gesualdo, M.; Fraboni, F.; Giusino, D. Human Likeness in robots: Differences between industrial and non-industrial robots. In Proceedings of the European Conference on Cognitive Ergonomics, Siena, Italy, 26–29 April 2021; Marti, P., Parlangeli, O., Recupero, A., Eds.; ACM: New York, NY, USA, 2021; pp. 1–5. [Google Scholar] [CrossRef]
  43. Cramer, H.; Kemper, N.; Amin, A.; Wielinga, B.; Evers, V. ‘Give me a hug’: The effects of touch and autonomy on people’s responses to embodied social agents. Comput. Animat. Virtual Worlds 2009, 20, 437–445. [Google Scholar] [CrossRef] [Green Version]
  44. Choi, J.J.; Kim, Y.; Kwak, S.S. The autonomy levels and the human intervention levels of robots: The impact of robot types in human–robot interaction. In Proceedings of the 23rd IEEE International Symposium on Robot and Human Interactive Communication, Edinburgh, UK, 25–29 August 2014; pp. 1069–1074. [Google Scholar] [CrossRef]
  45. Kwak, S.S.; Kim, Y.; Kim, E.; Shin, C.; Cho, K. What makes people empathize with an emotional robot?: The impact of agency and physical embodiment on human empathy for a robot. In Proceedings of the 2013 IEEE RO-MAN, Gyeongju, Republic of Korea, 26–29 August 2013; pp. 180–185. [Google Scholar] [CrossRef]
  46. Srinivasan, V.; Takayama, L. Help Me Please: Robot Politeness Strategies for Soliciting Help From Humans. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, San Jose, CA, USA, 7–12 May 2016; Kaye, J., Druin, A., Lampe, C., Morris, D., Hourcade, J.P., Eds.; ACM: New York, NY, USA, 2016; pp. 4945–4955. [Google Scholar] [CrossRef]
  47. Azhar, M.Q.; Sklar, E.I. A study measuring the impact of shared decision making in a human–robot team. Int. J. Robot. Res. 2017, 36, 461–482. [Google Scholar] [CrossRef]
  48. Lee, H.; Choi, J.J.; Kwak, S.S. Will you follow the robot’s advice? In Proceedings of the Second International Conference on Human–Agent Interaction, Tsukuba, Japan, 29–31 October 2014; Kuzuoka, H., Ono, T., Imai, M., Young, J.E., Eds.; ACM: New York, NY, USA, 2014; pp. 137–140. [Google Scholar] [CrossRef]
  49. Dole, L.D.; Sirkin, D.M.; Currano, R.M.; Murphy, R.R.; Nass, C.I. Where to look and who to be: Designing attention and identity for search-and-rescue robots. In Proceedings of the 2013 8th ACM/IEEE International Conference on Human–Robot Interaction (HRI), Tokyo, Japan, 3–6 March 2013; pp. 119–120. [Google Scholar] [CrossRef]
  50. Weiss, A.; Wurhofer, D.; Lankes, M.; Tscheligi, M. Autonomous vs. tele-operated: How People Perceive Human–Robot Collaboration with HRP-2. In Proceedings of the 4th ACM/IEEE International Conference on Human Robot Interaction—HRI ’09, La Jolla, CA, USA, 9–13 March 2009; Scheutz, M., Michaud, F., Hinds, P., Scassellati, B., Eds.; ACM Press: New York, NY, USA, 2009; p. 257. [Google Scholar] [CrossRef]
  51. Dinet, J. “Would You be Friends with a Robot?”: The Impact of Perceived Autonomy and Perceived Risk. Hum. Factors Robot. Drones Unmanned Syst. 2022, 57, 25. [Google Scholar] [CrossRef]
  52. Ioannou, A.; Andreou, E.; Christofi, M. Pre-schoolers’ Interest and Caring Behaviour Around a Humanoid Robot. TechTrends 2015, 59, 23–26. [Google Scholar] [CrossRef]
  53. Striepe, H.; Lugrin, B. There Once Was a Robot Storyteller: Measuring the Effects of Emotion and Non-verbal Behaviour. Soc. Robot. 2017, 10652, 126–136. [Google Scholar] [CrossRef]
  54. Mubin, O.; Stevens, C.J.; Shahid, S.; Mahmud, A.A.; Dong, J.J. A Review of the Applicability of Robots in Education. Technol. Educ. Learn. 2013, 1. [Google Scholar] [CrossRef] [Green Version]
  55. Xu, J.; Broekens, J.; Hindriks, K.; Neerincx, M.A. Effects of a robotic storyteller’s moody gestures on storytelling perception. In Proceedings of the 2015 International Conference on Affective Computing and Intelligent Interaction (ACII), Xi’an, China, 21–24 September 2015; pp. 449–455. [Google Scholar]
  56. Spitale, M.; Okamoto, S.; Gupta, M.; Xi, H.; Matarić, M.J. Socially Assistive Robots as Storytellers that Elicit Empathy. ACM Trans. Hum.-Robot. Interact. 2022, 11, 1–29. [Google Scholar] [CrossRef]
  57. Wiese, E.; Weis, P.P.; Bigman, Y.; Kapsaskis, K.; Gray, K. It’s a Match: Task Assignment in Human—Robot Collaboration Depends on Mind Perception. Int. J. Soc. Robot. 2022, 14, 141–148. [Google Scholar] [CrossRef]
  58. Goetz, J.; Kiesler, S.; Powers, A. Matching robot appearance and behavior to tasks to improve human–robot cooperation. In Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human–Robot Interaction—HRI ’06, Salt Lake City, UT, USA, 2–3 March 2003; pp. 55–60. [Google Scholar] [CrossRef]
  59. Hoffmann, L.; Bock, N.; Rosenthal Pütten, A.M. The Peculiarities of Robot Embodiment (EmCorp-Scale): Development, Validation and Initial Test of the Embodiment and Corporeality of Artificial Agents Scale. In Proceedings of the 2018 ACM/IEEE International Conference on Human–Robot Interaction, Chicago, IL, USA, 5–8 March 2018; Kanda, T., Ŝabanović, S., Hoffman, G., Tapus, A., Eds.; ACM: New York, NY, USA, 2018; pp. 370–378. [Google Scholar] [CrossRef]
  60. SoftBank Robotics. Pepper [Apparatus]. 2021. Available online: https://www.aldebaran.com/pepper (accessed on 22 December 2022).
  61. Aldebaran Robotics. Choregraphe [Software]. 2016. Available online: https://www.aldebaran.com/en/support/pepper-naoqi-2-9/downloads-softwares (accessed on 22 December 2022).
  62. Brinkmeier, M. Die Maus, die sich fledermauste. In 5-Minuten-Märchen zum Erzählen und Vorlesen; Brinkmeier, M., Ed.; Königsfurt-Urania: Krummwisch, Germany, 2019; pp. 11–12. [Google Scholar]
  63. LimeSurvey GmbH. LimeSurvey. 2021. Available online: https://www.limesurvey.org/de/ (accessed on 22 December 2022).
  64. Blanca, M.J.; Alarcón, R.; Arnau, J.; Bono, R.; Bendayan, R. Non-normal data: Is ANOVA still a valid option? Psicothema 2017, 29, 552–557. [Google Scholar] [CrossRef]
  65. Belpaeme, T. Advice to new human–robot interaction researchers. In Human–Robot Interaction; Springer: Cham, Switzerland, 2020; pp. 355–369. [Google Scholar]
  66. Li, D.; Rau, P.L.P.; Li, Y. A Cross-cultural Study: Effect of Robot Appearance and Task. Int. J. Soc. Robot. 2010, 2, 175–186. [Google Scholar] [CrossRef]
  67. Wainer, J.; Feil-Seifer, D.J.; Shell, D.A.; Mataric, M.J. The role of physical embodiment in human–robot interaction. In Proceedings of the ROMAN 2006—The 15th IEEE International Symposium on Robot and Human Interactive Communication, Hatfield, UK, 6–8 September 2006; pp. 117–122. [Google Scholar]
  68. Deng, E.; Mutlu, B.; Mataric, M.J. Embodiment in socially interactive robots. Found. Trends Robot. 2019, 7, 251–356. [Google Scholar] [CrossRef]
  69. Bainbridge, W.A.; Hart, J.; Kim, E.S.; Scassellati, B. The effect of presence on human–robot interaction. In Proceedings of the RO-MAN 2008—The 17th IEEE International Symposium on Robot and Human Interactive Communication, Munich, Germany, 1–3 August 2008; pp. 701–706. [Google Scholar] [CrossRef]
  70. Kiesler, S.; Powers, A.; Fussell, S.; Torrey, C. Anthropomorphic Interactions with a Robot and Robot-Like Agent. Soc. Cogn. 2008, 26, 169–181. [Google Scholar] [CrossRef]
  71. Roesler, E.; Naendrup-Poell, L.; Manzey, D.; Onnasch, L. Why Context Matters: The Influence of Application Domain on Preferred Degree of Anthropomorphism and Gender Attribution in Human–Robot Interaction. Int. J. Soc. Robot. 2022. [Google Scholar] [CrossRef]
  72. Georges, R.A. Toward an understanding of storytelling events. J. Am. Folk. 1969, 82, 313–328. [Google Scholar] [CrossRef]
  73. Phillips, E.; Zhao, X.; Ullman, D.; Malle, B.F. What is Human-like?: Decomposing Robots’ Human-like Appearance Using the Anthropomorphic roBOT (ABOT) Database. In Proceedings of the 2018 13th ACM/IEEE International Conference on Human–Robot Interaction (HRI), Chicago, IL, USA, 5–8 March 2018; pp. 105–113. [Google Scholar]
  74. Button, G.; Coulter, J.; Lee, J.; Sharrock, W. Computers, Minds and Conduct; Polity: Cambridge, UK, 1995. [Google Scholar]
  75. Belpaeme, T.; Baxter, P.; Greeff, J.d.; Kennedy, J.; Read, R.; Looije, R.; Neerincx, M.; Baroni, I.; Zelati, M.C. Child-robot interaction: Perspectives and challenges. In International Conference on Social Robotics; Springer: Cham, Switzerland, 2013; pp. 452–459. [Google Scholar]
  76. Salem, M.; Eyssel, F.; Rohlfing, K.; Kopp, S.; Joublin, F. Effects of Gesture on the Perception of Psychological Anthropomorphism: A Case Study with a Humanoid Robot. Soc. Robot. 2011, 7072, 31–41. [Google Scholar] [CrossRef]
  77. Breazeal, C. Emotive qualities in robot speech. In Proceedings of the 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems, Expanding the Societal Role of Robotics in the the Next Millennium (Cat. No.01CH37180), Maui, HI, USA, 29 October–3 November 2001; pp. 1388–1394. [Google Scholar] [CrossRef]
  78. Steinhaeusser, S.C.; Schaper, P.; Lugrin, B. Comparing a Robotic Storyteller versus Audio Book with Integration of Sound Effects and Background Music. In Proceedings of the Companion of the 2021 ACM/IEEE International Conference on Human–Robot Interaction, Boulder, CO, USA, 8–11 March 2021; Bethel, C., Paiva, A., Broadbent, E., Feil-Seifer, D., Szafir, D., Eds.; ACM: New York, NY, USA, 2021; pp. 328–333. [Google Scholar] [CrossRef]
Figure 1. Freeze frames from the video stimuli. (a) Start of video in cabled conditions. (b) Start of speech in cabled conditions. (c) Start of video in non-cabled conditions. (d) Start of speech in non-cabled conditions.
Figure 2. Means and standard errors for perceived anthropomorphism (A), animacy (B), likeability (C), intelligence (D), and safety (E) as a function of cabling (i.e., non-cabled vs. cabled) and story content (i.e., technical vs. social).