Review

We Do Not Anthropomorphize a Robot Based Only on Its Cover: Context Matters too!

1 UFR de Psychologie, Université Paris 8 (Laboratoire Cognitions Humaine et Artificielle, RNSR 200515259U), 2 Rue de la Liberté, 93526 Saint-Denis, France
2 Association P-A-R-I-S, 25 Rue Henri Barbusse, 75005 Paris, France
3 UFR d’Éducation, CY Cergy Paris Université, 33 boulevard Port, 95000 Cergy-Pontoise, France
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(15), 8743; https://doi.org/10.3390/app13158743
Submission received: 7 July 2023 / Revised: 22 July 2023 / Accepted: 23 July 2023 / Published: 28 July 2023
(This article belongs to the Special Issue Advanced Human-Robot Interaction)

Abstract: The increasing presence of robots in our society raises questions about how these objects are perceived by users. Individuals seem inclined to attribute human capabilities to robots, a phenomenon called anthropomorphism. Contrary to what intuition might suggest, these attributions vary according to different factors: not only robotic factors (related to the robot itself), but also situational factors (related to the interaction setting) and human factors (related to the user). The present review aims to synthesize the results of the literature concerning the factors that influence anthropomorphism, in order to specify their impact on how individuals perceive robots. A total of 134 experimental studies, published from 2002 to 2023, were included. The mere appearance hypothesis and the SEEK (sociality, effectance, and elicited agent knowledge) theory are two theories attempting to explain anthropomorphism. According to the present review, which highlights the crucial role of contextual factors, the SEEK theory accounts for the available observations better than the mere appearance hypothesis, although it does not explicitly explain all the factors involved (e.g., the autonomy of the robot). Moreover, the large methodological variability in the study of anthropomorphism makes it difficult to generalize results. Recommendations are proposed for future studies.

1. Introduction

A previous study ([1], see also [2,3] for a review) showed that some psychological processes, typically reported in the literature as emerging only at a specific age, could in fact be observed much earlier when a robot was used as an experimenter. This is especially true when the robot is introduced to the child as being ignorant and slow. This paradigm, the mentor–child paradigm, only works because children attribute intentions (trying to learn) and states of mind (having or lacking a piece of information or a concept) to the robot. This is what we call anthropomorphism. This notion is not new and has been heavily discussed in the literature [4,5,6]. Much work has been conducted regarding robotic factors (the design of the robot itself). In this paper, we review this work and emphasize that other, contextual elements also contribute to anthropomorphism, elements that are not directly related to the robot itself. To show this, we will first discuss the definition of the concept of anthropomorphism and the psychological processes involved, before exploring the different factors that influence the emergence of this phenomenon. We will discuss three types of factors: robotic factors, of course, but also situational and human factors. Finally, we will present the boundaries of anthropomorphism, namely the uncanny valley, as well as the measurement and methodological limitations.

2. Different Conceptions of What Anthropomorphism Is

Human beings show as much of a tendency to interact with artificial media as they do with other humans. This phenomenon, called anthropomorphism, can be observed in everyday life toward objects such as telephones, computers, or cars [7]. Although anthropomorphism is shared by all, there are large inter-individual variations [8]. The word “anthropomorphism” is derived from the Greek anthropos (meaning man) and morphe (meaning form). Anthropomorphism, thus, implies going beyond a simple description of actions (observable or imagined) to represent the mental or physical state of the agent in human terms; for example, “The dog is affectionate” becomes “The dog loves me” [5]. The notion of anthropomorphism, therefore, refers to the tendency to attribute human characteristics (such as motivations, intentions, or emotions) to the behavior of non-human agents or non-living objects [5]. Individuals can attribute a wide range of mental capacities when engaging in anthropomorphism, such as intentions or conscious experiences [5,9]. The presence of a robot seems to activate different types of socio-cognitive processes, stimulating both low-level processes, such as tracking the direction of the robot’s gaze or representing the robot’s movements as goal-directed actions [10,11], and high-level processes, such as the attribution of human mental states [5,12].
However, it is questionable whether individuals actually attribute mental states to non-human agents. Two forms of anthropomorphism can be distinguished [5]: on the one hand, the strong form, which refers to individuals who are sincerely convinced that the agent possesses the human characteristics attributed to it; on the other hand, the weak form, which refers to individuals who act as if the agent really had these characteristics, while knowing that it does not. An individual can attribute mental states to agents to explain their behavior but deny the presence of a mind in them when explicitly asked [13,14]. Rather than a binary view of anthropomorphism opposing the weak and strong forms, some authors consider that it is instead a matter of degree on a continuum [5]. The attribution of a mental state to an agent, therefore, does not necessarily imply adherence to the reality of that mental state [15,16,17]. In this review, we will use the term anthropomorphism to refer to behaviors that imply taking the mental states of an agent into consideration, regardless of whether there is an explicit belief in the presence of these mental states.
How can we explain our tendency to attribute human characteristics to non-human agents, even when we know that they do not actually possess these attributes? Although several explanations have been proposed, this phenomenon is generally considered a mistake [18,19,20]. Anthropomorphism could be caused by the activation of a default schema that would also apply to non-social objects whose behavior cannot be explained otherwise, such as computers and robots [21]. Anthropomorphism would then result from the human desire to project the most complex organization possible onto stimuli [18]. Since living beings endowed with intentions, namely humans, represent the “greatest complexity” of organization [22], individuals would seek to attribute human properties to any agent they encounter, insofar as its characteristics do not directly rule this out. Other papers [19,20] argue that anthropomorphism is the result of heuristics (as defined in [23]), which lead people to explain the behavior of non-human animals by analogy with our own human mind. This is in fact similar to the conception of anthropomorphism as an automatic and invariant psychological process [18,24]. However, the invariance of the process is debated: Some non-human agents are more anthropomorphized than others, and some individuals show an increased tendency toward anthropomorphism [5]. Moreover, the same agent can be anthropomorphized or considered as an object depending on the situation: Anthropomorphism is, therefore, situational [25]. We will detail this discussion in Section 3.2. As robots are non-human agents that are becoming more common, this review will focus on models of anthropomorphism based on experimental studies with robots.
We will see that some of these models only take into account the appearance of the robot [26], while others also take into account the interaction situation the robot is involved in, either by exploring the psychological determinants of anthropomorphism [5] or by considering anthropomorphism to be a direct consequence of another cognitive process: the theory of mind [5,27].

3. The Importance of Contextual Effects in the Models of Anthropomorphism

3.1. A Contextless Model: The Mere Appearance Hypothesis

One model does not take into account the effects of the context in which the interaction takes place: the mere appearance hypothesis [26]. For the proponents of this approach, it is the appearance of the agent itself that activates processes usually used in human–human interactions. In human–human interactions, socio-cognitive processes are triggered automatically, given that humans are social agents. The misattribution of this social agent status, based only on the appearance of the agent, would trigger these processes similarly and, thus, be the cause of anthropomorphism [28,29]. Therefore, an individual confronted with a robot relies on the socio-cognitive capacities acquired through interactions with humans, due to the absence of a behavior explanation mechanism specifically tailored to robots [30]. According to the mere appearance hypothesis, humans would generalize these mechanisms to stimuli (agents or objects) depending on their superficial resemblance to humans (in appearance or behavior). This argument is founded on work showing that organisms can generalize their responses to new stimuli when they resemble the original [31]. This phenomenon explains the social reactions expressed by individuals in the presence of inanimate objects, such as eye-like geometric shapes. Robots would, therefore, prompt a similar stimulus generalization, which explains the occurrence of perspective-taking toward robots [26].
According to this theory, a human takes a robot’s perspective only if the robot’s appearance resembles a human (rather than perspective-taking being activated by the perception of a mind in the robot). If the robot does not have a human appearance, its perspective is taken into account less; when the robot strongly resembles a human, perspective-taking is substantial and persists even if the robot is perceived as strange or witless. The degree to which the robot’s perspective is taken is thus related to its level of anthropomorphism. Participants adopt the perspective of an iconic robot (NAO or BAXTER) more than that of an abstract robot (THYMIO) or a biological non-human agent, namely a cat. When the robot (BAXTER) has a face or a head and looks at the target, participants adopt its perspective more. Similarly, the perspective-taking rate for a humanoid (i.e., strongly human-like) robot (ERICA) is higher than that observed for the iconic robot but lower than that for the human agent [26].
Furthermore, in human interactions, observing another individual performing goal-directed actions (such as gaze orientation or grasping an object) increases a person’s tendency to adopt that individual’s physical perspective [32]. The authors speculate that, similarly, observing a robot’s actions may reinforce the tendency of individuals to adopt its perspective. Indeed, for a human-like robot, goal-directed actions (a directed gaze or gesture) seem to elicit more perspective-taking than a blank stare [26]. In the same study, if the robot (BAXTER) had a face but looked to the side, the participants adopted its perspective only as much as when it had no face or head. This result led the authors to argue that “the impact of human-like appearance on perspective taking lies more in the goal-directed behaviors that it enables—such as gaze and reaching—than in the mere possession of the physical features per se” ([26], p. 16).
Despite these observations, others have shown that a human-like appearance is neither sufficient nor necessary to trigger anthropomorphism, since some non-human-like agents are also anthropomorphized, such as zoomorphic [33] and abstract robots [34], geometric figures [35], computers, and cars [7]. Conversely, some agents with a strong human resemblance are less anthropomorphized than agents with a moderate human resemblance [36]. In another study, children aged 4–11 attributed as many mental states to a moderately human-like robot (NAO) as to a non-human-like vocal assistant (Alexa) [37]. Finally, the same agent can be anthropomorphized or not depending on the interaction situation (which includes the way the agent is presented to the user, but also the characteristics of the users themselves) [25]. The mere appearance hypothesis, therefore, seems insufficient to explain anthropomorphism toward robots. Thus, theoretical frameworks should also explain the process of anthropomorphism by taking into account factors related to the interaction situation.
Because an interaction context is always present, fully grasping the process of anthropomorphization requires understanding the psychological elements at play in the situation [5]. This would also explain the individual variability.

3.2. Why We Anthropomorphize: The Three Psychological Determinants of Anthropomorphism

According to the sociality, effectance, and elicited agent knowledge (SEEK) theory, the process of anthropomorphism, which essentially applies a default model of human interaction to artificial agents, is modulated by three psychological determinants: (1) the accessibility and applicability of anthropocentric knowledge; (2) the motivation to explain and understand the behavior of other agents; and (3) the desire for social contact and affiliation [5]. According to this approach, it is the need to interact with and explain one’s environment that prompts individuals to anthropomorphize an object.
The human observer uses their knowledge to explain the behavior of a robot: They automatically rely on the representation elaborated from their experiences with humans, as it is more accessible and more economical. Thus, it becomes the default model used when interacting with robots in the absence of a more specific model targeted to them. This results in the attribution of human characteristics to non-humans, in order to “complete a partial representation” [38]. A study supports this aspect of the SEEK theory: participants with more experience in interacting with robots show a decreased propensity toward anthropomorphism [39]. The more a person interacts with a robot, the more relevant a specific representation of an interaction with a robot becomes, and the less anthropomorphism is needed. Thus, children are more likely to anthropomorphize than adults [40].
The motivation to explain and understand artificial agents stems from epistemophilic behavior, which reduces the uncertainty inherent in the interaction situation, all the more so when that situation is new [38,41]. Anthropomorphization, thus, aims to answer individuals’ need to explain the robot’s behavior [42,43,44]. This phenomenon is all the more important when non-human entities are perceived as intentional agents with unpredictable behavior (for instance, when the robot ASIMO answers questions in a random fashion) [43]. The need of individuals to understand and predict their environment increases the tendency toward anthropomorphism, and in turn, anthropomorphism fills this need to explain the world [43,45]. This is particularly true for people who are anxious, as anthropomorphism increases their sense of control [46].
Finally, anthropomorphism satisfies the desire for social contact and affiliation by providing a framework to manage interactions with non-human agents at the lowest cognitive cost. Humans need to establish social links with other humans, and anthropomorphism could satisfy this need by providing a social connection with a non-human agent similar to the one that can be created with a human. The more a person feels a strong need for social contact, the greater their tendency to anthropomorphize. Hence, a high level of social isolation leads to an increased tendency to anthropomorphize robots (for example, see [47] with AIBOs and [48] with NAO), pets [5], and objects, such as alarm clocks [46] or smartphones [49].
The explanation and prediction of the behavior of non-human agents can, therefore, be based on steps similar to those implemented to understand the behavior of human agents [5]. We will see to what extent anthropomorphism relies on capacities that are usually used for human interactions.

3.3. How We Anthropomorphize: The Theory of Mind

For some authors [5,27], anthropomorphism is an extension of the Theory of Mind (ToM, the ability to attribute mental states in order to understand and predict behaviors [50]), whereas for others [51], ToM is not necessary for the anthropomorphization process and only offers a useful way of describing the agent or the situation [52]. So, is ToM a requirement for anthropomorphism or is it not?

3.3.1. Anthropomorphism as a Process Dependent on the Theory of Mind

If we define anthropomorphism as the attribution of human characteristics to non-human agents, such as objects, it is very similar to ToM in that it involves the attribution of mental states. Anthropomorphism is not limited to perceiving human physical features, such as hands or eyes (even very abstract representations of them, as for THYMIO, whose lights can be likened to two eyes), but also involves the attribution of human mental states (sensations, emotions, intentions); i.e., not only does the robot have eyes, it can see. In the same way that ToM is activated during human–human interactions to predict behaviors [53], anthropomorphism would be activated during interactions between human and non-human agents to predict, understand, and explain the behaviors of the latter [5,43], within a specific situation or context [54].
Neuroimaging studies highlight the activation of brain regions considered to be part of the ToM network when subjects engage in anthropomorphism [55]. This network corresponds to areas involved in tasks that require understanding and inferring the mental state of others. It is notably composed of the bilateral temporoparietal junction, the precuneus, and the medial prefrontal cortex [56,57].
The same circuit is used when interacting with non-human agents, whether simple geometric shapes and non-human animals, with the activation of parts of the temporoparietal junction [58,59], or biologically based animated characters, with the activation of the temporoparietal junction and the precuneus [60]. Furthermore, there is a correlation between a predisposition toward the anthropomorphization of non-human animals and a greater gray matter volume in the left temporoparietal junction [8]. This also applies to the observation of robot actions, with patterns of neural activation similar to those visible during the observation of human actions [61]. This result again emphasizes the connection between anthropomorphism and ToM highlighted in [27], which can perhaps be explained by common underlying processes.
While the previously cited studies show a link between anthropomorphism and ToM, other authors contest it.

3.3.2. Anthropomorphism as a Process Independent from the Theory of Mind

According to other studies [51,52], ToM would not be necessary for anthropomorphism. It would only serve as a means to describe the agent or the situation. The process of anthropomorphism would be decomposed into two steps: We first rely on low-level perceptual processes, which are, in a second step, completed and interpreted using a language derived from ToM. This does not necessarily mean that the participants actually believe in these mental states. Such a conception can be likened to the weak form of anthropomorphism that we previously discussed [5]. For example, even if participants stated “the triangle had a great idea”, they did not really consider it a thinking entity. Thus, the actual attribution of mental states to non-human agents would depend, at least in part, on different processes than those involved in the attribution of mental states to a human [52].
Two arguments seem to support this theory. First, one study shows a correlation between the tendency to anthropomorphize cars and the activation of the fusiform face area, but not the temporoparietal junction or the medial prefrontal cortex [62]. This result could indicate that anthropomorphism relies more on perceptual processes than on ToM processes. Second, another study indicates that there is a correlation between ToM abilities and situational anthropomorphism (when the measure of anthropomorphism takes into account the context of interaction, i.e., a specific character in an animated film) but not dispositional anthropomorphism (when the measure takes into account only general attitudes toward robots) [51]. This would indicate that anthropomorphism and ToM are not analogous: Anthropomorphism would not be an extension of ToM.
These two arguments can be criticized. Both the concept of ToM and the concept of anthropomorphism are extremely broad “multidimensional constructs”, so the question is worth pursuing. Researchers have recently highlighted disparities between the various tasks used to measure ToM, raising the possibility that these tasks measure different cognitive processes [63]. We will see later that the same problem arises for anthropomorphism (Section 5.2, on the measurement of anthropomorphism and its limits). Overall, anthropomorphism varies by object, situation, and agent (e.g., [25,64]).

4. Anthropomorphizing Factors: Robotic Factors Are Not Enough

According to the theories we have just seen, anthropomorphism is determined either by the interaction situation or by the appearance of the robot. In practice, these different factors jointly modulate the tendency to anthropomorphize a robot.
In light of the theories presented in the introduction, we set out to better understand the determinants of anthropomorphism. Our search for articles was first carried out on Google Scholar with the keywords “anthropomorphism+robot+experimental+psychology”, yielding 16,500 results. As this review is not intended to be systematic, we were particularly interested in experimental papers dealing with the acceptance of robots and the attribution of mental states to them, either directly or indirectly. We excluded experimental papers dealing with the industrial application of robots, focusing on the use of robots in an interactive setting. In all, we selected 134 experimental studies, with publication years ranging from 2002 to 2023.
Anthropomorphism is a particular process of inductive inference. It can be influenced in two different ways: top-down induction and bottom-up induction [40,65]. To act on bottom-up inferences, the design of the robot must be modified, i.e., its appearance and shape, voice, behavior, and the quality of its movements. To activate top-down inferences, anthropomorphic beliefs must be promoted, for example by attributing human socio-cognitive abilities to the robot (e.g., suggesting to the participants that the robot feels pain if it falls from the table). The latter is heavily context-dependent: the situation itself can promote beliefs, and the user’s own dispositions can also have an influence. We will explore all of these elements below.
Thus, three broad categories of factors impact anthropomorphism [66]: robot design (bottom-up induction [65]), the interaction situation, which includes situational factors (top-down induction [65]), and human factors.

4.1. Robotic Factors: The Design of the Robot

Four characteristics allow us to circumscribe the design of robots: their appearance, voice, the nature of their social behaviors (verbal or non-verbal), and the quality of their movements. Table 1 summarizes the studies cited in this section.

4.1.1. The Robot’s Appearance Has a Strong Impact on Anthropomorphism

An object will be perceived as more or less human-like depending on whether it has a human form [115] or human components [116]. Several studies have pointed out that the presence of human-like physical characteristics in a robot (NAO and ROBOVIE) could lead adults and children to anthropomorphize it [77,78]. Social robots generally have a human-like appearance, which is specifically intended to induce anthropomorphism in users [4], although individuals also anthropomorphize robots (PLEO and AIBO) that do not have a human appearance [33]. There is, nevertheless, a strong disparity in the design of the robots used.
A taxonomy of different social robots allows us to distinguish between several types of designs: abstract, iconic, and humanoid [38]. Abstract robots refer to robots whose appearance is strictly mechanical and does not include human morphological elements (e.g., LEGO MINDSTORMS). Iconic robots have human physical features—such as eyes, mouth, and arms—but their appearance is still strongly mechanical, which allows them to be immediately identified as robots (e.g., NAO). Humanoid robots have an appearance that strongly resembles humans (e.g., SOPHIA); the term “android” is added to designate a robot that strongly resembles humans, both in appearance and behavior (e.g., GEMINOID or ERICA).
The head plays an essential role in the perception of humanity, both in robots [71] and in embodied virtual agents (see [117] for a review). The three most important components in a robot’s design are the eyes, the nose, and the mouth. Thus, on a robot’s face, the number of human-like elements correlates with the level of anthropomorphism. As early as 17 months, infants recognize salient facial features and relevant behaviors in social interactions (the initial eye contact, as in a 6 s video) for humans as well as robots (ROBOVIE 2) [97]. Individuals look more at an industrial robot (SAWYER) when it has a face (displayed on a tablet) [79]. They also adopt the perspective of a robot (BAXTER) more when it has a face or a head [26] (with no difference between the presence of a face and the presence of a head). A human-like robot face is considered warmer and more competent than a machine-like robot face, which causes more discomfort to participants [70]. The inversion effect, the fact that human bodies and faces are recognized more quickly and accurately when presented in their usual orientation rather than upside down [118,119], applies to robot body images regardless of the degree of human resemblance, i.e., whether the robots have weak, moderate, or strong human-like physical features. Concerning robot faces, the inversion effect applies only to robots with a high level of human-likeness (versus a low level): Only robots with strongly human-like faces are cognitively anthropomorphized [82].
Thus, user reactions may differ depending on whether the robot looks like a human or a machine. Nevertheless, the impact of the robot’s appearance varies across studies.
The human-like appearance of a robot would facilitate interactions, notably by increasing the perceived familiarity of the robot and by giving the impression of understandable and predictable behavior [36]. Human-like robots are judged as more likable [22], fun [75], and intelligent [73]. Whatever the age of the participants (4–8 y.o., 9–13 y.o., or adults), they prefer to interact with the iconic robot NAO rather than with the abstract robot TITAN [69]. Expectations of social and moral norms are more evident toward anthropomorphic robots [76,84]. Adults seem to cooperate more with robots that have some human elements in their appearance [80] and show more empathy toward humanoid robots (GEMINOID) than toward iconic robots (KOJIRO) [65]. They also develop more concern for them [81]: when asked to choose which robot they would like to save during an earthquake, participants favored the human-like robots (ANDREW and ALICIA) over the non-human-like robots (ROOMBA and AUR). Individuals express a preference for a care robot (PEOPLEBOT) when it has a human face rather than an iron face, a sculpture-like face, or no face at all, and they attribute more mental abilities and positive personality traits to it [68]. An iconic robot (NAO) is rated as more believable, likable, and trustworthy than an abstract, less human-like robot (BAXTER) [22]. Nevertheless, BAXTER’s credibility and perceived anthropomorphism increased when individuals first interacted with NAO, suggesting a generalization of anthropomorphism. Similarly, individuals trust an abstract robot (SCITOS G5) more when they have first seen an iconic, more human-like robot (iCUB) [85].
The attribution of mental states could also depend on the quality of the resemblance: A human-like appearance would facilitate the application of ToM to the robot (i.e., an explanation of its behavior based on its mental abilities) [34]. A human-like robot may lead individuals to spontaneously consider the robot’s perspective. Moderately human-like robots elicit more adoption of their views than weakly human-like robots, but less than strongly human-like robots [26]. Children aged 7–14 attribute more human mental abilities to an iconic robot (NAO) with a human appearance than to an abstract robot (COZMO) [67], and similar results are observed among 5–9 y.o. children, who assign more mental states to NAO (iconic) than to ROBOVIE (abstract) [77]. In adults, a robot’s resemblance to humans increases the use of ToM toward it (OZOBOT, COZMO, NAO) [34] and the tendency to attribute mental states to it (NAO, PEPPER) [17,78].
Conversely, 3–5 y.o. children attribute as many biological properties to an iconic robot (NAO) as to an abstract robot (DASH) [72], and at 4–10 y.o., they consider humanoid and zoomorphic robots to have a similar moral status [83]. One study compared an iconic robot (NAO) with an abstract robot (the LEGO MINDSTORMS articulated arm) in a mini-dictator game [40]. The authors reported a lack of effect of the robot’s appearance on the children, in contrast to the results observed in adults [65]. In other words, 4–5 y.o. and 8–9 y.o. children do not share their stickers with the iconic robot NAO any more than with the abstract robot LEGO MINDSTORMS. In this study, manipulating the affective state of the robots (attribution of feelings versus non-attribution) and presenting them as successive images may have made the anthropomorphic appearance of the robot less salient.
Thus, a human-like appearance seems to improve the quality of the interaction with a robot, and the tendency to attribute mental states to it. However, we will see that a robot’s strong human-like appearance could also have negative effects on its perception, making it less likable (cf. Section 5.1). In addition, children may be less affected by appearance than adults, something we will discuss later in Section 4.3.1.
Although many studies focus on the appearance of the robot, other characteristics of the robot influence the perceptions of individuals. We will discuss these other characteristics in the next section.

4.1.2. A Human-like Voice Helps, but It Is Not Enough

Voice contributes to the anthropomorphism of the robot. Children aged 4–11 attribute as many mental states to a non-human-like agent with a human voice (ALEXA) as to a moderately human-like robot (NAO) [37]. Thus, adapting the voice, the length of the sentences, and the speech rate to the context of interaction and to the role occupied by the robot is important [120]. For example, it is relevant to make the voice higher pitched when the robot (NAO) is presented as a learner [111]. Indeed, the pitch of the voice influences the perception of the overall quality of the interaction [112]. A social receptionist robot (OLIVIA) with a higher voice pitch is evaluated more positively than the same receptionist with a lower-pitched voice. Similarly, individuals cooperate more with a robot (NAO) when it expresses emotions in its voice [113]. The levels of pleasure and arousal experienced in interacting with a robot increase when the robot has a voice similar to a human voice [95,109,110]. Individuals apply more social norms to a robot (NAO) with a natural voice intonation than to one with a synthetic voice [121]. However, the voice affects the perception of the robot differently depending on the robot’s behavior: we trust a robot (NAO) that behaves honestly when it has a synthetic voice, whereas if it acts dishonestly, we trust the same robot when it has a natural voice [114].

4.1.3. Behavior Is a Crucial Factor

The robot’s behavior may play a more important role than its form in assigning human status [122]. The robot can express different social behaviors, both verbal and nonverbal, which might have an effect on acceptance. Acceptance can be subdivided into intentional and behavioral acceptance [123]. Intentional acceptance is the user’s intention to act in a certain way with the technology (usually measured by a questionnaire), while behavioral acceptance refers to the user’s actions when using the technology (behavioral measurement). Some studies conclude that there is no effect of nonverbal or verbal behavior on robot acceptance, whether intentional [88] or behavioral [103]. Conversely, other studies have shown an effect of the robot’s verbal (e.g., encouragement) and nonverbal (e.g., behaving nicely toward the user, being user-oriented) social behavior on behavioral [87,88] and intentional acceptance [92,93].
The robot’s verbal behaviors are of key importance in the interaction (especially when the robot shows collaborative behavior in the conversation with the user). Individuals feel more satisfaction and trust toward a robot with polite behavior [94], and a friendly robot is more appreciated than a robot with unfriendly behavior [36]. The level of interactivity in the conversation is a relevant factor: A robot with highly interactive behavior (enabling sophisticated communication with the participant) is judged more sociable and competent than a robot with lesser communication skills [90]. Interaction is valued more highly and experienced as more positive when the robot is animated rather than apathetic [98]. For example, in children (3–5 y.o.), a robot (NAO) expressing interjections (e.g., “Ah”, “Uh”) is perceived as more human-like [103]. A highly interactive robot—one that says “hello” warmly and recognizes the first names of children—results in more child engagement than a weakly interactive robot [102]. At age 5, children consider a robot with interactive behavior more intelligent than a robot that does not move and consider it more likely to feel emotions [100]. Moreover, an unpredictable conversation triggers more anthropomorphism than one clearly following recognizable patterns of behavior [43]. Humans have pragmatic expectations regarding conversations, and details (including non-verbal details) such as timing and turn-taking can also have an impact on anthropomorphism [124].
The robot’s nonverbal social behavior (e.g., looking in the direction of the interlocutor it is addressing, looking toward a target object, or reaching out toward that object) also modulates individuals’ adoption of its perspective. The robot’s point of view is taken into account more when the robot (NAO and BAXTER) is looking at the object than when it is looking to the side [26,32]. Individuals take little account of the perspective of an iconic (yet moderately human-like) robot when it does not show social behavior. A robot with its gaze directed toward the user increases the pleasure and arousal felt during the interaction [95]. Individuals trust a robot showing a human-like social gaze pattern more than one showing a fixed gaze, if that robot is physically human-like (iCUB), but not if the robot is non-anthropomorphic (SCITOS G5) [85]. A robot (SIMON) with joint-attention behavior is rated as more competent by participants [91]. Even the posture of the robot can influence users, who approach a sitting robot (NAO) more than a standing one [99].
The question of adapting the robot’s behavior to the user’s emotional state has also been raised, but it remains unresolved. One study shows no effect on intentional acceptance [96], while others highlight the beneficial effect of the robot’s adaptive behavior on both intentional acceptance [86,89] and behavioral acceptance [96,104]. For example, a robot with personalized behavior allows children to have more fun and motivation in their interactions. Moreover, studies underline the importance of coherence between the appearance of a robot and its behavior, or between the intention it expresses and its behavior [101]. Customization of a robot by participants (choosing the form and the social skills of the robot) increases their trust toward this robot and leads to less discomfort [125]. They also attribute more agency to the robot. This personalization has no effect on other measures of anthropomorphism (experience, perceived warmth, and competence).
The importance of the adaptation of the robot’s behavior is strikingly similar to the natural adaptation of human behavior when communicating: this is the main interest of the entire field of conversational pragmatics [126,127,128] (note that conversational pragmatics has been shown to be a very important factor of human-likeness in conversations with artificial virtual agents (chatbots) [129,130,131,132,133]). In other words, even when it comes to the way the robot reacts, context is paramount.

4.1.4. The Quality of Movements Can Reinforce Anthropomorphism

A robot performing gestures is more appreciated than a stationary robot, and individuals attribute more mental states to it [107]. What defines the quality of a robot’s movement is its degrees of freedom (the number of independent axes along which a system can move or rotate). The impression of human resemblance is more striking when a robot can move its arm along multiple axes (multiple degrees of freedom at the shoulder) rather than along a single axis (a single degree of freedom), which only allows the arm to move up and down [134].
To promote anthropomorphism, the quality of the movement is one of the most important cues because it gives the impression that the robot is animated and lively [108]. Simple geometrical figures can be the objects of anthropomorphism if their movements resemble human movements [35]. Among virtual agents, a moving agent triggers a stronger sense of social presence than a static one, yet this behavioral effect is only observable for non-human-like agents, as opposed to human-like agents [135]. Regarding robots, results are slightly different: The closer the movement is to human (biological) movement, the more the partner will consider the interaction to be pleasant, regardless of the robot’s appearance. Thus, a robot (BAXTER) that moves naturally and smoothly (naturalistic movement) is perceived as more friendly than one with mechanical movement, whether its whole body is visible or only its arm [105]. Natural movement (following curves) gives it a greater sense of animation, but only when the robot’s body is fully visible. A study comparing an arm with robotic movement and an arm with human movement found a positive effect of motion on anthropomorphism: users better anticipated the trajectory of the arm in the human motion condition [106]. Nevertheless, although moving robots are considered more human-like, they are not necessarily more appreciated. As we will see later, the perception of this animation can be disturbing or unsettling [105] (cf. Section 5.1).
While these factors related to the robot are generally well linked to the concept of anthropomorphism (robotic factors), the context of the interaction also has a crucial part to play in the anthropomorphization process of the robot (situational factors and human factors). We will focus on these situational factors in the next section.

4.2. Situational Factors: The Situation Itself Can Change the Level of Anthropomorphism

By situational factors, we mean the characteristics of the interaction. They include the way the robot is presented to individuals (the “anthropomorphic framing”), the role of the robot, the frequency of the interaction, and the perceived degree of autonomy of the robot. Table 2 summarizes the studies cited in this section.

4.2.1. Anthropomorphic Framing Increases Robot Acceptance and Anthropomorphism

The way the robot is presented to individuals, also known as framing, affects the interaction and the tendency toward anthropomorphism [65,137,143]. To place the robot in an anthropomorphic frame—that is, one that promotes anthropomorphism—studies rely on a humanized description of the robot, assigning it a first name, a personal history, or mental abilities. For instance, an “anthropomorphic framing” condition could involve the robot being described with a name and a personal history that includes individual preferences, such as its favorite color and hobbies, whereas a “non-anthropomorphic framing” condition would describe the robot in the manner of a tool.
The impact of anthropomorphic framing is debated. Some authors report no effect of anthropomorphic framing on the perceived resemblance of a robot (NAO) to a human [142] or on its intentional acceptance [145]. Similarly, the anthropomorphic framing of the robot does not increase prosocial behavior toward it [141]. Nevertheless, other studies suggest an impact of this framing. Individuals are less likely to use a hammer to hit a robot (HEXBUG) presented with a first name and a story (e.g., “He’s friendly but easily distracted”) than a robot presented as an object [137]. A robot (TELENOID) presented as having a personal story is considered more attractive by the participants, who then report a higher degree of perceived human likeness and a lower feeling of eeriness [139]. When robots are presented as part of a narrative story (in a situational context), they are appreciated more than robots presented solely from a technical point of view and are judged to be more intelligent and more human-like [143].
The social abilities attributed to the robot impact individuals’ perceptions of the robot as well as their behavior toward it. ToM skills are associated with more positive reactions and an increased desire to interact with the robot [136]. A robot (NAO) presented as having ToM skills (participants watch a video in which the robot passes the Sally–Anne false-belief test) is perceived as more socially intelligent than a robot without these skills [147] (in the latter video, the robot fails the false-belief test). Similarly, participants trust a robot (PEPPER) presented as having advanced ToM capabilities more than a robot with weak capabilities [140,144]. The perception of ToM capabilities in a domestic robot (HIWONDER) leads to a more positive evaluation of service quality, in contrast to a robot lacking these capabilities [146] (the authors used the same script as in previous studies [140,147] to present the robot as having ToM capabilities). Individuals are more morally concerned about, and less likely to sacrifice, robots that are presented as having emotions [65], regardless of the robot’s appearance (GEMINOID, an android robot, and KOJIRO, a less human-like robot).
Anthropomorphic framing also has an impact on children’s interactions with robots. Indeed, at 3–7 y.o., when a robot (TEGA) is presented as a friend of the child, the child looks at it significantly longer than when the robot is presented as a machine [138]. In this study, in the anthropomorphic framing condition, the experimenter speaks directly to the robot: “You will explain to your new friend how to play, okay?”. In the non-anthropomorphic condition, the experimenter refers to the robot in the third person: “The robot will explain to you how to play.” Children also share more resources with a robot when it appears to have emotional states [40].
Finally, the impact of anthropomorphic framing can depend on the task to be performed: in a social task, individuals collaborate more with a robot perceived as having emotional abilities, but they prefer to collaborate with a non-emotional robot in an arithmetic task [163].

4.2.2. Giving a Robot the Role of a Companion Increases Acceptance

The role of the robot in the interaction is extremely variable. It can act as a peer, a helper, a pupil [3,164,165], a mentor, a teacher, or an experimenter [1,2,166,167]. Unfortunately, the above studies did not analyze the effect of the robot’s role on its acceptance by the participants, but we describe below other studies that have done so.
Children aged 6 to 9 show higher intentional acceptance for some robotic functions, such as when the robot supports learning or when the robot is placed as a companion [160]. Similarly, adults are satisfied with having a robot to clean their house, but not if it cooks for them [162] or prays for them [69]. Overall, participants judge an assistant robot as more sociable than a competitor robot [90]. In contrast, in another study, robots triggered the same intentional acceptance in different roles (friend versus machine) [138]. One paper stated that the impact of the robot’s role is yet to be determined [155]. The effect of the robot’s role also depends on the age of the child: Younger children (3–5 y.o.) are more interested in a story-reading activity led by the robot, while older children (5–8 y.o.) prefer to interact and converse with the robot [161].
Anthropomorphism is heavily implied in these studies. Indeed, the fact that participants accepted the robot in a given role indicates a basic level of anthropomorphism. Yet, to our knowledge, no study has directly investigated the influence of the role given to a robot on anthropomorphism itself.

4.2.3. The Frequency of the Interaction Decreases Anxiety and Anthropomorphism

The link between the frequency of interaction and the child’s acceptance of the robot remains unclear. Some studies show a positive impact of frequency [154,157], while other papers find none [86,155]. Repeated interactions may be preferable, as the expression of negative attitudes toward robots (especially anxiety) decreases over time, regardless of the robot type (GEMINOID HI-2 and ROBOVIE R2 in [36] and KAROTZ in [154]) and regardless of the age of participants [158] (with NAO, anxiety was reduced for all participants). The more often an individual interacts with a robot (AIBO), the more they express a positive attitude toward robots in general [153], which can be interpreted as a mere exposure effect [154].
Individuals may change their attitude toward a robot after conversing with it, although this depends on the robot’s appearance. In an ultimatum game, participants cooperate more with an android robot (GEMINOID HI-1) after talking to it. Conversely, talking to an iconic robot (ROBOVIE R2) does not increase the amount of cooperation with it [156]. The change in attitude toward the robot over time may also depend on its behavior. Over a period of 5 months, the quality of interaction between children aged 18 to 24 months and a robot (QRIO) decreases over time if the robot behaves in a predictable way, but it increases again if the robot performs a variety of behaviors [159]. In the long term, the attribution of mental abilities to the robot decreases (STARSHIP ROBOT [39]), which may correspond to the end of a two-month novelty period [154].

4.2.4. A Robot Perceived as Autonomous Is Anthropomorphized More

A robot (RA-I) that appears to act autonomously is perceived as more trustworthy than a teleoperated robot (i.e., a robot directed remotely by a human) but elicits a weaker sense of social presence [150]. Children aged 4–8 attribute fewer anthropomorphic qualities to an explicitly remote-controlled robot than to an autonomous robot [148]. When children aged 7–10 are informed about the remote operation of the robot (NAO), they perceive it as less autonomous and are less prone to anthropomorphism [152]. Other studies instead highlight a similar level of acceptability between the two types of robots [149,151]; however, when participants are explicitly informed of the robot’s teleoperation, the perceived intelligence of the robot decreases [151].
Thus, the perception of robots and the behaviors expressed toward them vary according to the interaction situation. The information explicitly given to the subject by the experimenter, therefore, impacts their tendency to anthropomorphize.
Overall, how the robot is presented, its role, and its perceived degree of autonomy will influence its perception, although some studies are contradictory (especially those on the frequency of interaction). These discrepancies can be explained by the inter-individual variability of anthropomorphism, which leads us to focus on the characteristics of the person interacting with the robot in the next section.

4.3. Human Factors: Anthropomorphism Also Depends on the Users Themselves, Not Just the Robots

In addition to robotic and situational factors, the characteristics of the user modulate the perception and acceptability of robots [168]. A review paper highlighted the role of age, gender, personality, education, and experience with technology in interactions with robots [155]. We can also cite the impact of the child’s developmental type on the perception of robots. We discuss all these aspects below. Table 3 summarizes the studies cited in this section.

4.3.1. The Older We Get, the Less We Anthropomorphize

Social robots are generally well accepted by children aged 5–9 [155], both intentionally (NAO [177]) and behaviorally (KEEPON [104]). Similar tendencies are observed with older children (aged 10–15), who show good intentional and behavioral acceptance of the robot. At the intentional level, they show a similar acceptance of robots as they do of a human [178] or a tablet [179], and at the behavioral level, they are more willing to switch devices when they perform an activity with a tablet than when they perform it with a robot, indicating a preference for the robot [179]. When directly comparing groups of different ages, two studies highlight a similar acceptance of robots between young children aged 4–6 y.o. and older children aged 7–10 y.o., both at the intentional [173] and behavioral [104] levels.
Yet, results regarding the effect of age on robot acceptance are conflicting. Two other studies report the opposite finding [160,171]. Indeed, children aged 6–9 are more accepting of robots than both preteens (10–12 y.o.) and teens (13–16 y.o.) at the intentional level [160]. Yet, at the behavioral level, children aged 3 trust a human more than a robot in a game, while at age 7 they trust the robot more (NAO [171]). In addition, when preschoolers interact with a robot for the first time, younger children (about 36 m.o.) are more easily distracted than older children (about 44 m.o.): They look at the robot less and show greater dependence on the experimenter [169]. A 10-month age difference may, thus, induce different levels of engagement with the robot. The question of the robot’s role may be relevant to explain these results: 36-month-olds would be interested in the robot for a story reading, while 44-month-olds would prefer more interaction and discussion [161].
Beyond mere acceptance, age also has an impact on the overall tendency to anthropomorphize. Neurotypical children anthropomorphize robots [148], as evidenced by the fact that they attribute goals to their movements [175], assign mental states to them [180], can help them [174], and feel morally concerned about them to some extent [83]. More specifically, younger children are more likely to anthropomorphize than older children [202]. At age 3, children are more likely to assign biological properties to a robot (DASH, NAO, KIROBO) than at age 5 or as adults [72,176]. At age 5, children assign more mental states to an iconic robot (NAO) than 7 and 9 y.o. children do [40,77]. At ages 4–8, children rate robots as significantly kinder than older children (9–13 y.o.) or adults do, and they would also appreciate the robot praying for them more [69]. At ages 5–11, children are more likely to attribute human characteristics to robots than adolescents (12–16 y.o.) [170]. At ages 9–12, children are more willing to consider robots as social beings, compared to 15 y.o. adolescents. They are also more concerned about the robots’ moral interests and attribute more mental states to them [122]. Thus, as children grow older, they attribute fewer human characteristics to non-human agents (e.g., NAO, ALEXA, ROOMBA) [37].
The tendency toward anthropomorphism would decrease during development due to the accumulation of experience [40]. According to the SEEK theory [5], anthropomorphism serves to fill in a partial representation of the robots. Thus, the more the child gains experience interacting with robots (with age), the more relevant their representation of robots becomes, and the less anthropomorphization is necessary. Several studies seem to confirm this theory: Exposure to technology increases with age [37], and as children grow older, they have a more sophisticated understanding of the mental capacities of robots, as well as of their moral and social status [122].
This tendency even extends to adults, who ascribe less free will to a robot (ROBOVIE) than to a human, whereas 5–7 y.o. children assign as much free will to the robot as to a human [172]. When comparing adults of different ages, we notice that they also perceive robots differently: Older adults (more than 60 y.o.) trust a robot more than younger adults do (when the robot is polite) [94] and judge robots as more useful than younger individuals do, but they also express more anxiety toward them [158]. These age-related differences could once again be due to experience with technology: Young adults, having more experience with robots in daily life, would have a more accurate representation of the robots’ real capabilities, which would explain why they find them less trustworthy or useful, but report less anxiety.

4.3.2. A Same-Gender Robot Promotes Acceptance in Children, but Not Anthropomorphism

On the question of the impact of the user’s gender on the acceptance of robots, the results are not clear-cut, which could be due to differences in measurements. In children, at the behavioral level, girls interact longer with an abstract robot than boys do [104], but boys show a higher level of interaction (which includes gaze time, emotional expression, and dependence on the experimenter) with an iconic robot than girls do [190]. At the intentional level, when children interact with an abstract robot, no effect of the child’s gender on acceptance is observed [173,197], whereas girls report more physical and social attraction to human-like robots than boys do [197]. This could suggest that children behave with the robot and perceive it differently according to their gender only when the robot has a human-like appearance. In adults, a feminine iconic robot (OLIVIA) in the role of a receptionist generally receives higher ratings of interaction quality (on a Likert scale) from male participants than from female participants [112]. Behavioral differences linked to gender also exist in human–human interactions. In children, girls engage longer in interaction than boys, and boys initiate more episodes of interaction than girls [203]. In adults, men are more active in interaction than women (they talk more and give their opinion more), while women show more positive social behavior (friendly behavior, approval) [204]. Thus, it is possible that humans simply reapply to robots the same model of social interactions that they already use with humans.
The user’s acceptance of the robot also varies depending on whether the robot’s gender matches their own. At the intentional level, boys are more likely than girls to prefer an iconic robot (NAO) of the same gender, but at the behavioral level, such a difference does not seem to be present: Children smile more with a female robot than with a male one, irrespective of their own gender [193]. Yet, another study shows no effect of gender congruity on intentional acceptance [191]. These contradictory results can perhaps be explained by the experimental designs of the two studies. The first study [193] varied the voice and the name of the robot to convey its gender and then asked children about their explicit preferences regarding the robot’s gender. The second study [191] only changed the name of the robot to indicate its gender and asked children to rate indirect statements about their preferences, such as “I would like to take Lucas/Laura (the possible names of the robot) home with me”.
It is possible that the impact of gender varies with age. At 5–8 y.o., children prefer a robot of the same gender as themselves, while at 9–12 y.o. they report no particular preference [193]. These results are observed both at the behavioral level (young children interacting with a robot of the same gender play significantly longer [192] and smile more [193]) and at the intentional level (young children say they prefer a robot of the same gender as themselves [193]). In adults, one study shows that a robot of the opposite gender is judged more trustworthy, credible, and engaging than a robot of the same gender as the user [195]. In contrast, a same-gender preference can be observed in cooperation tasks: male participants complete tasks faster with robots of the same gender, while female participants do not [189]. This pattern of same-gender preference is also observed in human–human interactions, both in children (see [205] for a review) and in adults (see [206] for a review), and is called gender segregation.
The user’s gender also seems to modulate anthropomorphism, in adults but not in children. Indeed, one study shows that boys and girls anthropomorphize a non-gendered robot in the same way [207]. In [191], no difference is observed between the anthropomorphization of robots of the same gender and robots of the opposite gender. Thus, the attribution of human characteristics to robots does not seem to depend on the children’s gender, unlike acceptance, which does. In adults, men tend to rate an abstract robot as more human-like, whereas women rate it as more mechanical [194]. Conversely, in another study, women judged robotic movement to be more human-like than men did [186]. However, individuals attributed more mental abilities to a robot (FLOBI) with a human voice of the same gender as their own than to one with a voice of the opposite gender [109]. Men also report being psychologically closer to the robot with a male voice than to the one with a female voice; this effect does not seem to exist for female participants [109].
Gender stereotypes may also apply to robots. Individuals perceive a female-faced robot as warmer and more competent than a male-faced robot, which evokes more discomfort [70]. When a robot (NAO) is implicitly presented as male by giving it stereotypically masculine characteristics, it is judged more trustworthy and competent than when it is implicitly presented as female, in which case it is considered more pleasant [188]. However, it is important to note that these authors did not take the participants’ gender into account in their analysis. In another study, giving the robot a gendered name and voice (male, female, or neutral) did not produce a difference in perceived competence between the genders [187]. When explicitly asked, adult participants chose a gender-neutral robot over a gendered one [196]. This discrepancy can potentially be explained by the methodology employed: some studies presented robots by video or by image [187,196], whereas in [188], the participants interacted with a physically present robot.

4.3.3. Personality Traits Impact Anthropomorphism

Few studies have examined the influence of individual differences on the tendency to attribute human characteristics to robots [44]. In adults, individuals with the highest need for cognition (individuals who are more likely to engage in cognitively demanding activities) attribute fewer human characteristics (agency, sociability, and animacy) to robots and show more positive attitudes, compared to individuals with a lower need for cognition. Conversely, individuals with the highest need for prediction (individuals who are uncomfortable with ambiguity and who prefer order) attribute more anthropomorphic characteristics (agency, sociability, and animacy) to robots and show more negative attitudes than individuals with a lower need for prediction [45]. In addition, people with strong empathic personality traits are more reluctant to hit a robot (HEXBUG NANO) presented with a first name and a personalized story [137], and people with attachment anxiety (preoccupied with proximity, fearful of abandonment, and hypervigilant to social cues) anthropomorphize more than others [46]. In children aged 8–12, there is a link between the personality trait of openness to new experiences and intentional acceptance of robots: children who are more open to new experiences are more likely to want to interact with the robot again (EMYS [198]).

4.3.4. Cultural Differences Regarding Anthropomorphism

Depending on the culture, robots are not perceived in the same way. In a study including seven nationalities (German, American, English, Chinese, Dutch, Japanese, and Mexican), attitudes toward robots (assessed with the Negative Attitude Toward Robots scale [208]) were the most positive in the USA and the most negative in Mexico [153]. An educational robot is perceived more positively in Korea—where parents perceive it as a “friend of the child”—than in Spain, where parents perceive it as a machine [181]. Both Chinese and Korean participants perceive a social robot (LEGO MINDSTORMS NXT) as more friendly, trustworthy, and satisfying than German participants do, and they also engage more in interactions with the robot [185]. Cultural differences are, therefore, likely to affect the agreeableness, satisfaction, and trust expressed toward the robot. Moreover, Japanese participants attribute more mental abilities to robots (ROBI, KEEPON) than Australian participants do [184]. For Chinese individuals, the lonelier an individual is, the less they anthropomorphize robots, whereas this is not the case for American individuals [182]. These variations could be explained by the difference between individualistic and collectivist cultures: individualism would lead to a less positive attitude toward robots [185].
Finally, a robot presented as having the same cultural background as the user is perceived more positively. Germans attribute more mental abilities to a robot presented with a German name than to a robot with a Turkish name, and report more psychological closeness and more positive intentions toward it [183]. This result can be compared with the in-group bias observed in human–human interactions: people report more positive affect toward in-group individuals than toward out-group individuals [209] and show more prosocial behavior toward them [210]. Thus, the same in-group bias could apply to interactions with robots.

4.3.5. But There Is More

Other user factors, such as previous experience with technology, education, expectations of robots, social isolation, and developmental type, can impact robot acceptance.
Users who are more experienced with new technologies (computer training and/or knowledge of voice recognition devices) rate the social skills of two robots (OLIVIA and CYNTHIA) significantly lower than inexperienced users do, suggesting that more experienced users are less inclined to perceive robots as social entities [112]. Similarly, 4–7 y.o. children with little or no experience with robots assign more psychological characteristics to them than experienced children do [199]. Finally, the more educated an individual is, the less likely they are to perceive the robot as a social entity [200].
Individuals judge a robot (KAROTZ) more positively before having met it than after the interaction, which shows, on the one hand, that they generally have high expectations of robots and, on the other hand, that these expectations were not met during the encounter [154]. Low initial expectations lead to less disappointment during the interaction [33]. In a long-term interaction, people who had the highest expectations of the robot before the interaction drop out of the procedure at a higher rate than individuals with lower initial expectations [154].
In addition, socially isolated individuals evaluate interactions with a robot (APRIL) more positively and rate the robot as more attractive than socially well-connected individuals do [47]. Moreover, loneliness would lead to more anthropomorphic attributions toward an animal or a technological gadget [46]. This increased tendency toward anthropomorphism can vary according to the robot’s appearance, since socially isolated people attribute more human characteristics to a robot that looks like a human than to one that looks like an animal [48]. As we have seen above, it could also depend on the user’s culture, as a reverse pattern was observed among Chinese participants, with lonely individuals anthropomorphizing less [182].
The developmental type can also modulate robot perception and interaction. For example, individuals with autism spectrum disorder (ASD) would not show the human-preference bias displayed by neurotypical individuals, who have more affinity with another human being than with an artificial object [201,211], and they judge a human voice and a robotic voice to be similar [110]. They would also have more difficulty interpreting the robot’s mental states than typically developing children [180].

5. Limits

We have seen the influence of a robot’s design (robotic factors) on how robots are perceived and interacted with, as well as the importance of the context in which the interaction takes place (situational factors and human factors).
Despite some variability in the results, it seems that the more human-like a robot is perceived to be—whether this is due to its design or to the interaction situation, which includes the user—the more it will be appreciated. Nevertheless, the benefit of a human-like appearance has a limit, namely the uncanny valley: a strong resemblance can instead negatively impact the interaction. Furthermore, the results of studies on anthropomorphism are sometimes contradictory, which may be attributable to the significant heterogeneity of the methodologies employed. Thus, three types of limits emerge from the literature: (1) an intrinsic limit of robotic factors, the uncanny valley; (2) the measurement of anthropomorphism; and (3) the methodological limits observed in the study of human–robot interactions.

5.1. Intrinsic Limit for Robotic Factors: The Uncanny Valley

The uncanny valley refers to the following phenomenon: when an object (here, a robot) reaches a very high degree of human resemblance, it triggers a feeling of uneasiness [212,213]. To investigate this phenomenon, two types of studies have been conducted: those that focus on the feeling of strangeness [214,215,216] and those that show a preference for machine-like [162,217] or moderately human-like robots [73]. In human–robot interactions, two elements generate the feeling of strangeness: a humanoid face [214] and/or a size and body mass similar to those of the interacting person [216]. As a consequence, humans can end up trusting and appreciating humanoid robots less than mechanical ones [73,143,218]. A meta-analysis (of 49 studies based on the Godspeed questionnaire) confirms the preference for robots with low to moderate human resemblance but does not allow conclusions about the negative effects induced by a strong resemblance to humans [219]. On the other hand, a study of 251 robots shows that a strong human appearance triggers a feeling of strangeness [215].
The explanation of the uncanny valley phenomenon is based on the expectations induced by the appearance of robots [220]: The discrepancies between human expectations and robot behavior would, thus, be at the origin of this phenomenon [212,213,221]. The resemblance would cause individuals to judge the robot according to human normative expectations [222] and, from that point, deviations from the human norm make the robot seem scary [223]. Thus, the extent to which individuals assign to the robot an ability to feel and perceive sensations significantly predicts the feeling of strangeness they report [214]. For this reason, the authors argue that the uncanny valley is a consequence of individuals’ attribution of feeling and sensing abilities to robots. Note that the uncanny valley also impacts the quality of moral judgment. Individuals evaluate moral choices made by a human-like robot (iCLOONEY and iROBOT) as less ethical than the same choices made by a human or by a non-human-like robot (ASIMO) [222].
Repeated interactions decrease feelings of strangeness regardless of the robot (e.g., GEMINOID or ROBOVIE), with both robots being perceived as less strange on the third interaction compared to the first. This indicates that the uncanny valley phenomenon is reduced by increased exposure to the robot [36]. This result seems consistent with the theory presented in [214]. Once individuals know the actual capabilities of the robot, they would rate it as less uncanny and then report more positive feelings. This could explain the beneficial effect of repeated interactions with robots, discussed in Section 4.2.3.
The uncanny valley phenomenon is present in children [84,224], but its age of onset is still debated. For some authors, it would emerge between 6 and 12 months [225]: 6 m.o. babies prefer to look at a strange avatar rather than at a picture of a human, whereas at 12 months the preference is reversed. Other authors place its onset between 4 and 8 y.o. [69], from 9 y.o. onward [226], or even between 8 and 14 y.o. [84]. The explanation for these differences lies in the methodology used, particularly the variables of interest/measures (fixation time for babies, image classification for children) and the choice of stimuli (robot video or robot image). The variability of the methodology used to measure anthropomorphism and the perception of robots thus limits the interpretation of the results.
Thus, although robot designers seek to maximize the resemblance of robots to humans in order to improve interactions, human-like robots can trigger a feeling of strangeness for the user [215], and can even lead to a reduction of the trust granted to the robot [218]. In conclusion, improving the robotic factors of anthropomorphism alone does not necessarily have beneficial effects on the perception of and attitudes toward robots.

5.2. The Measure of Anthropomorphism and Its Limits

The methodologies employed in studies on human–robot interactions are highly heterogeneous [227]. Studies are mainly based on measurements from the user’s point of view through questionnaires on the perception of robots [228,229]. In the next sections, after presenting the different types of questionnaires, we will examine their limitations.

5.2.1. Questionnaires and Implicit Measures

Different questionnaires (all based on Likert scales) are used in the literature to measure anthropomorphism from the user’s perspective. A review of the literature shows that some authors have used questionnaires addressing anthropomorphism in the broad sense (e.g., the Godspeed questionnaire [6] and the individual differences in anthropomorphism questionnaire [44], which correspond to weak anthropomorphism), whereas others focus specifically on cognitive anthropomorphism (e.g., the attribution of mental states questionnaire [97], which corresponds to strong anthropomorphism). The Godspeed questionnaire [6] is one of the most frequently used [79,140,147,230]; it comprises five subscales, each rated on 5-point scales, covering anthropomorphism, animacy (illusion of life), likeability, perceived intelligence, and perceived safety. Another frequently used questionnaire assesses the attribution of mental states to agents presented as pictures [77]; it consists of 25 questions divided into 5 dimensions: perceptual, emotional, intentional, imaginative, and epistemic. For example, for the perceptual dimension, participants answer the question, “Do you think he can feel heat or cold?”. Nevertheless, the instruments measuring anthropomorphism vary widely, since many other questionnaires have also been used. The individual differences in anthropomorphism questionnaire [44] consists of 30 items scored on a 10-point Likert scale, assessing the attribution of anthropomorphic traits (e.g., intentions, emotions, free will, mind) and non-anthropomorphic traits (e.g., “durable”, “active”). The robot interactive experiences questionnaire [231] includes 8 items scored on a 7-point Likert scale, assessing the individual’s attitude in situations of engagement and social interaction with a robot. Moreover, the validity of verbal (explicit) measures for assessing mental state attribution has been questioned [16].
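As an illustration of how such Likert-type instruments are typically scored, the following minimal sketch aggregates item ratings into subscale means. It is an assumption for illustration, not the published scoring procedure, and the item names and their grouping into subscales are hypothetical placeholders rather than actual questionnaire items.

```python
from statistics import mean

# Hypothetical grouping of 5-point Likert items into subscales; the item
# names below are illustrative placeholders, not published item wording.
SUBSCALES = {
    "anthropomorphism": ["fake_natural", "machinelike_humanlike", "artificial_lifelike"],
    "perceived_intelligence": ["incompetent_competent", "unintelligent_intelligent"],
}

def score_subscales(responses):
    """Return the mean rating per subscale for one participant."""
    return {name: round(mean(responses[item] for item in items), 2)
            for name, items in SUBSCALES.items()}

# One (fictitious) participant's ratings, each between 1 and 5.
participant = {
    "fake_natural": 4, "machinelike_humanlike": 3, "artificial_lifelike": 4,
    "incompetent_competent": 5, "unintelligent_intelligent": 4,
}
print(score_subscales(participant))
# -> {'anthropomorphism': 3.67, 'perceived_intelligence': 4.5}
```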
In the field of human–robot interactions, the importance of implicit measures deserves to be emphasized: the results obtained with these measures are not necessarily similar to those obtained with explicit measures, such as questionnaires [22,117,144]. Verbal and nonverbal measures of the attribution of mental states to a robot can lead to divergent results [16,217]. Indeed, children may show similar behavior when interacting with a robot and with a human, while attributing fewer mental states to the robot in their questionnaire responses [232]. For this reason, indirect (implicit) and more objective methods have been used to assess the tendency toward anthropomorphism, such as social dilemma-type paradigms (the mini-dictator game and a resource-sharing task [40]; a moral dilemma [65]; or the ultimatum game [228]) that quantify, for example, giving behaviors. Nonverbal paradigms have also been used to implicitly assess the attribution of mental states to the robot (e.g., the gaze anticipation paradigm [16]; or the implicit association task [217]). Further studies are needed to determine which type of measure—explicit or implicit—better reflects mental state attributions to robots [16]. Until this debate is resolved, studies should include both types of anthropomorphism measures.
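The following minimal sketch illustrates how a behavioral index could be extracted from a mini-dictator-style sharing task of the kind cited above; the endowment size, trial data, and scoring rule are assumptions for illustration, not a published protocol.

```python
from statistics import mean

ENDOWMENT = 10  # tokens to allocate on each trial (hypothetical)

def giving_rate(allocations_to_partner):
    """Mean proportion of the endowment given to the partner."""
    return mean(a / ENDOWMENT for a in allocations_to_partner)

# Fictitious data: tokens one child gave to a robot partner vs. a human partner.
robot_trials = [3, 4, 2, 3]
human_trials = [5, 4, 5, 6]
print(f"robot partner: {giving_rate(robot_trials):.2f}")   # 0.30
print(f"human partner: {giving_rate(human_trials):.2f}")   # 0.50
```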
Generally, the majority of studies of anthropomorphic robot designs (whether based on the robot’s appearance or on the way it is presented to the user) aim to assess their impact on subjective measures, such as the robot’s perceived intelligence, acceptance, or realism [73], and pay little attention to the impact on more objective measures (such as performance [79]). While the major advantage of questionnaires is their ease of administration [228], they have some limitations.

5.2.2. The Pragmatic Limits of Anthropomorphic Measures

The measures used are often non-standardized and subjective, making it difficult to compare results across studies. Self-reported measures may also be subject to social desirability bias [155,233,234]: participants tend to respond based on what they think is expected of them. Behavioral measures are less subject to this bias.
Research in language pragmatics provides new insights into some of the results obtained. According to the mere appearance hypothesis, it is the physical resemblance of the robot to a human that induces its perspective to be taken into account. One study reports little spontaneous perspective-taking toward a robot that does not have a human appearance (e.g., THYMIO), whereas the perspective of a human-like robot is taken into account, even when this robot triggers a feeling of strangeness (ERICA) or when it obviously lacks mental capabilities (e.g., in the case of a mannequin or a wax figure) [26]. In this study, participants see the image of a robot facing a number placed on a table. From the participant’s reading direction, the number is a 9; from the robot’s reading direction, it is a 6. The experimenter then asks the open-ended question, “What is the number on the table?” to measure the participants’ spontaneous perspective-taking of the robot. The authors nevertheless question the effect of the experimenter’s request. Indeed, according to the principle of cooperation [126], when the experimenter asks a question, participants try to infer the experimenter’s expectations in order to answer as well as possible. The use of a seemingly simple question may destabilize participants and encourage them to seek alternative interpretations in the environment. In this way, they may have inferred that they are expected to consider the agent’s perspective when it seems relevant, i.e., when a cue for possibly understanding the numbers is present [26]. This could explain why they adopt the perspective of a mannequin, but not that of an abstract robot (THYMIO looks like a box, which may not be a sufficient cue for interpreting the number). Thus, it is difficult to determine here whether the perspective-taking of human-like robots is truly spontaneous or induced by the experimenter’s request; the result could be due to pragmatic factors rather than to appearance per se. More generally, measures of anthropomorphism that rely on a question asked of the participant are susceptible to pragmatic bias.
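As an illustration, the following minimal sketch shows how open-ended answers to the ambiguous 6/9 display could be coded into a spontaneous perspective-taking rate; this is our assumption, not code or a coding scheme from [26], and the example responses are purely hypothetical.

```python
from collections import Counter

def code_answer(answer):
    """Code an open-ended answer from the ambiguous 6/9 display."""
    a = answer.strip()
    if a == "6":
        return "robot_perspective"   # reading the digit from the robot's side
    if a == "9":
        return "own_perspective"     # keeping one's own viewpoint
    return "ambiguous"               # e.g., "6 or 9"

answers = ["9", "6", "9", "6 or 9", "6"]  # hypothetical responses
counts = Counter(code_answer(a) for a in answers)
rate = counts["robot_perspective"] / len(answers)
print(counts, f"spontaneous perspective-taking rate = {rate:.2f}")
```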

5.3. General Methodological Limitations

Anthropomorphism is a complex notion, now widely studied. However, studies conducted with robots employ a highly heterogeneous methodology [227] in terms of the choice of the measure of anthropomorphism (see Section 5.2.2), the type of robot used, the way it is presented, and the type of interaction proposed (see Table 1 and Table 2). We present below some recommendations to address these potential issues.
We have seen that the means of evaluating anthropomorphism are themselves extremely varied and open to criticism. They rely, in particular, on questionnaires, that is, explicit and subjective measures whose validity has been questioned [16], notably because they are likely to be affected by social desirability bias or by the pragmatics of language. Participants may answer questions according to the expectations they infer from the experimenter, rather than according to their spontaneous attributions toward the robots (e.g., [26,88,92,93,96]). We recommend the use of implicit measurement methods (mainly behavioral, such as non-verbal paradigms or social dilemmas), which would provide a more accurate measure of this phenomenon (e.g., [16,40,65,217,228]).
A wide variety of robots have been used in the literature, and the diversity of their appearances makes it difficult to generalize results from one robot to another. Some studies do not specify the type of robot used (e.g., [76,80,170,196]) or use a robot created by the research team itself (e.g., [60,70,71,173,194,225]). A first step toward standardization is provided by the Anthropomorphic roBOT (ABOT) database, which lists a human-likeness score for 251 robots based on the ratings of 1000 participants. These scores seem to be consistent with the taxonomy presented earlier [38], since the human-likeness of the abstract robot (LEGO MINDSTORMS) is judged low (15.92/100), that of the iconic robot (NAO) moderate (45.92/100), and that of the humanoid robot (SOPHIA) high (78.88/100) [235]. The iconic robot NAO might be a good choice because it has human characteristics, and so can enjoy the benefits of anthropomorphism, without falling into the uncanny valley: it does not cause discomfort in children aged 3–18 y.o. [226] or in adults [219].
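As an illustration, the following minimal sketch maps ABOT-style human-likeness scores onto the coarse appearance taxonomy used in this review. The band boundaries are our assumptions for illustration and are not taken from the ABOT database; the example scores are those cited above.

```python
def appearance_category(score):
    """Classify a 0-100 human-likeness score into a coarse appearance band."""
    if score < 30:          # assumed boundary, for illustration only
        return "abstract"
    if score < 65:          # assumed boundary, for illustration only
        return "iconic"
    return "humanoid"

# Human-likeness scores cited in the text (ABOT database [235]).
cited_scores = {"LEGO MINDSTORMS": 15.92, "NAO": 45.92, "SOPHIA": 78.88}
for robot, score in cited_scores.items():
    print(f"{robot}: {score}/100 -> {appearance_category(score)}")
```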
Moreover, exposure to robots is sometimes achieved with images or videos rather than in vivo. Yet, the physical presence of a robot (NURSEBOT) increases the tendency toward anthropomorphism more than a projection of the robot on a screen [74], and a physically present robot is considered more trustworthy [94]. Another study reports that a physically embodied robot (AIBO) is evaluated more positively when participants can touch it; conversely, if they are prohibited from touching the robot (APRIL), individuals rate interactions with a non-embodied robot more positively than with a physically embodied one [47]. According to a meta-analysis, a physically present robot is perceived more positively than a robot presented on a screen (AIBOT [236]), and this mode of presentation allows better performance in a puzzle-solving task (KEEPON [237]). However, in another study, a robot presented in vivo was not better accepted than a robot presented via a screen [238]. The different measures (intentional vs. behavioral) could explain, at least in part, these discrepancies. Thus, it is difficult to decide on this issue at this time.
The human appearance of a physically present—embodied—robot positively influences both subjective and objective measures of anthropomorphism, whereas the human appearance of a non-embodied robot (represented as an image, for example) has positive effects only on subjective measures [239]. The impact of the robot’s appearance depends on how it is presented to the participants. Generalizing results obtained with a non-embodied robot to interactions with an embodied robot could lead to overestimating the impact of human likeness on subjective measures and underestimating its effect on objective measures [239].
Similarly, studies that do not involve the physical presence of a robot often present robots as photos rather than videos (e.g., [26,65,69,84,141,228]). However, children seem to appreciate robots more when they are presented in video form rather than in pictures [84], as observing the robot’s behavior helps them understand its intentions. Data collected through online studies (e.g., [65,69,141]) therefore do not allow conclusions about in vivo interactions conducted in the real world. Other studies do not use a robot at all to study anthropomorphism (for example, avatars and conversational agents, e.g., [9,12,37,44,46,110,179]). Overall, experimental conditions are rarely ecological, as most studies are carried out online or in the laboratory rather than in vivo (e.g., [9,39,49,65,69,70,94,136,141,153,228]). Studying anthropomorphism without actually placing the participant in a situation of interaction with a robot can distort the results obtained. Furthermore, the duration of interaction with the robots varies considerably across studies: the most common duration is 30 min per session (15–30 min on average, with few studies exceeding 60 min), while the longest lasts over 120 min [229]. Overall, interactions with the robot take place over a short period (often a single interaction, the duration of which is not always specified), and few studies are conducted over the long term [229].
We recommend referring to the ABOT database to systematically specify the human-like score of the robots used. The variability of the robot used should also be reduced, so that the results obtained can be generalized. As we have seen in this review, a moderately human-like appearance is preferable, due to the uncanny valley phenomenon. We also recommend privileging in vivo interactions with robots that are physically embodied—and over the long term—in order to approach, as much as possible, the real conditions of human–robot interactions.
In addition to the heterogeneity of the studies, there are methodological limitations that make the generalization of results questionable. A meta-analysis [219] noted that most studies use small sample sizes (the median across the 49 studies included was 21 participants) composed predominantly of young participants and students (the median age was 25 y.o.). As a consequence, it is not easy to draw conclusions about non-student adults, since age affects the way adults perceive robots [94,158]. The authors also point to a lack of methodological rigor in a substantial number of studies, which omit crucial information about participants (for example, their age or nationality, factors known to influence anthropomorphism). Future studies should therefore use a rigorous and standardized methodology. A recent paper advised selecting large sample sizes and reiterated the importance of greater transparency about the detailed characteristics of the sample [219].
This lack of methodological consensus leads to divergent results, which do not allow conclusions to be drawn on certain factors. For example, the effect of the robot’s voice on anthropomorphism and the effect of the robot’s autonomy on acceptance cannot be determined from the literature, since the results are contradictory. Concerning the robot’s voice, one study showed no differences in children [37] but a preference for a human voice over a robotic voice in adults [110]. This discrepancy may be due to the age of the participants: children, who anthropomorphize robots more than adults do, may be less sensitive to the difference between voices. Concerning the robot’s autonomy, the mixed results are more difficult to interpret: two studies showed no effect of robot autonomy on acceptance [149,151], whereas another showed that robot autonomy had a positive effect on the trust attributed to the robot but a negative effect on perceived social presence [150].

Is Anthropomorphizing a Robot Even a Good Thing?

Some ethical considerations can be raised regarding the anthropomorphization of robots (see [240] for a review).
Firstly, anthropomorphism does not have only positive consequences. A user who attributes human capabilities to a robot may experience negative outcomes (e.g., negative emotions may be triggered by the anthropomorphization of the agent, as in the uncanny valley). The discrepancy between human expectations and actual robot abilities (in non-transparent experimental situations) may provoke disappointment and frustration in users [241]. This limitation is particularly important for vulnerable individuals, such as people with schizophrenia [242], individuals with ASD [180], or older adults [243]. Even if avatars and robots are generally well accepted by these populations [242,244], the consequences of their interactions with robots could impact their social relationships. This is why some studies ensure that the robot is not perceived as a human [242].
Secondly, the deliberate induction of anthropomorphism may also be questioned. Although it is common in psychology studies to manipulate participants’ representations, this manipulation could have serious consequences. Indeed, we have seen that the robot’s design and the way it is presented convey social cues intended to facilitate social interactions. This encourages user anthropomorphism, fostering both the top-down and bottom-up inferences mentioned above [65]. However, these social cues mimic human behavior [245] without any actual associated mental states. This mismatch between the perceived and actual capacities of robots then amounts to deception, which could have detrimental effects on users [241].
It, therefore, seems important for researchers to ask themselves why they have chosen to anthropomorphize robots, and whether this is really necessary. Indeed, the main reason to encourage anthropomorphism is to trigger behaviors that allow an interaction similar to that with a human agent. Is the goal then to replace humans? If so, serious consideration should be given to whether this replacement brings more benefits than detrimental effects for the end user and for society as a whole.

6. Conclusions

As we have seen in this review, human individuals generally tend to attribute human characteristics to robots. Despite the wide methodological differences observed in the literature, we can argue that several factors influence anthropomorphism: robotic, situational, and human factors. The main effects are summarized in Table 4.
We have seen that many studies focus on the appearance of robots to explain the tendency of individuals to attribute human characteristics to them. However, other factors of the robot can influence their perception. Among the robotic factors, in addition to the robot’s appearance, its voice, its behavior, and the quality of its movements also modulate the way it is perceived. Moreover, the context of the interaction plays a crucial role. Taking into account situational factors related to the interaction setting (anthropomorphic framing, the role held by the robot, its autonomy, the frequency of interaction) and human factors related to the user (age, gender, personality, culture, and others) seems, therefore, essential in the study of anthropomorphism.
Robotic factors are mostly related to the design of the robot: anthropomorphism varies according to the appearance, voice, behavior, and movement quality of the robot. Anthropomorphizing robots (i.e., giving them a human-like form, which encourages individuals to anthropomorphize) generates three types of effects: (i) effects on the quality of interaction, (ii) effects on the perception of the robot, and (iii) effects on the attribution of mental states. The robot’s human appearance facilitates interactions [36,246], especially by increasing fun [75], engagement [55,79,247], and cooperation during interaction [4,22,80]. Anthropomorphization also impacts the perception of robots. They are more appreciated, deemed more believable [22], and more intelligent [73,75]. Finally, humans show more empathy toward these robots [65,81], adopt their point of view more [26], and attribute more mental abilities to them [17,34,68,78]. Children also attribute more cognitive skills to human-like robots [67,77]. Nevertheless, robots that strongly resemble humans can cause feelings of discomfort in users; this is a phenomenon called the uncanny valley [45,212]. Individuals value these robots less [73,215] and rate them as less trustworthy [218]. In children aged 9 and older, the sense of discomfort induced by highly human-like robots is similar to that of adults [69,226], but the age of emergence of the phenomenon remains unclear. Among robotic factors, the elements that also play a crucial role are the voice, the behavior, and the movement quality. Individuals cooperate more with a robot (NAO) when its voice expresses emotions [113] and report more enjoyment from a robot with a human voice [95,109,110]. The robot is also more appreciated when it shows friendly behavior [36], animated behavior [98], and interactive behavior [100,102], as well as natural movements [105]. Robotic factors are, therefore, very important in human–robot interactions since they affect acceptance and anthropomorphism. However, other factors also have an impact on anthropomorphism: situational and human factors.
Situational factors refer to the way the robot is presented in the interaction situation. Anthropomorphism varies according to the framing of the interaction, the role of the robot, the frequency and duration of the interaction, and the robot’s degree of autonomy. When a robot is presented in a human way (for example, by giving it a name or a personal story), individuals show more empathy and indulgence toward it [65,137] and find it more attractive [139,143]. A robot that is assigned mental abilities is rated as more socially intelligent [147] and more trustworthy [140,144], and individuals show an increased desire to interact with it [136]. Children also trust a robot (NAO) more when it appears to have human psychological capabilities [40,202]. The role filled by the robot also modulates its perception. Adults are satisfied to have a robot clean their house, but not to have it cook for them [162] or pray [69]. Children appreciate robots more as companions or when they support their learning [160]. The frequency of interaction also plays a role in anthropomorphism, as repeated interactions with a robot increase its likability [153,154] but reduce anthropomorphism toward it [39]. Furthermore, a robot perceived as autonomous is more likely to be anthropomorphized by children than a remotely operated robot [148,152]. Regarding the impact of perceived autonomy on acceptance, the results are more mixed, since an autonomous robot increases feelings of trust (compared to a teleoperated robot) but decreases feelings of social presence [150].
The human factors concern the user themselves. A wide inter-individual variability is observed in anthropomorphism [5,42], depending on the individual’s age, gender, personality, culture, previous experience with technology, education level, level of social isolation, and developmental type. Age-related differences are noted in the literature. Children are more likely to attribute human characteristics to robots than adolescents and adults [69,72,77,170,176]. In adults, a robot of the opposite gender to the user would be rated as more trustworthy, credible, and engaging than a robot of the same gender [195]. Children aged 5 to 8 prefer a robot of the same gender as themselves, while children aged 9 to 12 report no particular preference [170]. User personality also plays a role in anthropomorphism. Individuals with a high need for cognition attribute fewer human characteristics to robots and show more positive attitudes. Conversely, individuals with a high need for prediction attribute more anthropomorphic traits to robots and express more negative attitudes [45]. The perception of the robot also varies according to the culture of the user. A robot is perceived more positively in Korea than in Spain [181]; the attitude toward it is more positive in the USA than in Mexico [153]. Finally, other user factors impact anthropomorphism: The tendency to anthropomorphize seems to decrease with education [200] and experience with technology [112]. Individuals with the highest expectations toward robots tend to drop out of long-term interactions [154]. Social isolation would induce a more positive evaluation of robots [47], particularly human-like robots [48]. Developmental type is also an important factor. For example, children with ASD would have difficulty interpreting the mental states of a robot, unlike typically developing children [180].
Several theoretical frameworks have attempted to explain the nature of anthropomorphism, such as the mere appearance hypothesis [26] or the SEEK theory [5]. On the one hand, according to the mere appearance hypothesis, which is a context-free model, the robot’s appearance would activate processes similar to those involved in human–human interactions, through a stimulus generalization mechanism. This theory is based solely on the robot’s appearance, but we have seen that other robotic factors also have an impact on anthropomorphism, such as the robot’s voice, its behavior, and the quality of its movements.
The SEEK theory, on the other hand, takes into account the context of interaction. According to it, anthropomorphism would be a way for individuals to explain a robot’s behavior in the most accessible and economical way possible, in order to satisfy their need to predict the environment while satisfying their desire for social contact. Thus, the mere appearance hypothesis is mostly based on robotic factors related to the robot’s design, whereas the SEEK theory focuses on situational and human factors, including some factors we described in this paper (the frequency of interaction, the participant’s social isolation, and personality). Nevertheless, this review highlighted other situational factors (anthropomorphic framing, the robot’s role, and its autonomy) and human factors (age, gender, culture, education level, prior experience with technology, and developmental type) that impact the acceptance of robots and anthropomorphism toward them, and which are not directly mentioned in the SEEK theory. We will see below that the impact of these factors could still potentially be explained by this theory.
Concerning situational factors, the SEEK theory may explain the impact of an anthropomorphic framing, the robot’s role, and perceived autonomy by the accessibility of anthropocentric knowledge. Presenting the robot as a human (with a first name, a personal story) or as having a human social role makes human-related knowledge more accessible to the user, resulting in increased anthropomorphism. The same process could occur for robotic factors. Moreover, the behavior of a robot perceived as acting autonomously is more unpredictable for the participants, which increases their motivation to explain the agent’s behavior and, therefore, their tendency to anthropomorphize it.
Concerning human factors, age modulates anthropomorphism. This effect may be linked to the experience with technology gained with age: the more experience individuals have with robots, the more they acquire a model for explaining behavior specific to this ontological category. Similarly, culture has an impact on experience with technology, since robots are more widely used in certain countries. The gender effects described above may be explained by gender differences in human–human interactions that are reapplied to robots: since individuals use models based on interactions with humans to interact with robots, it is not surprising to find these differences in human–robot interactions. In the same way, the differences in interaction linked to developmental type observed in human–human interactions can be applied to human–robot interactions: individuals who have difficulty interpreting the mental states of a human will have the same difficulty interpreting the mental states of a robot.
In conclusion, there are several theories to explain anthropomorphism, but they should take greater account of the other factors highlighted in this review to enable the most exhaustive possible conception of anthropomorphism. The SEEK theory seems to be the most consistent with the results observed in the literature since it includes the majority of the factors cited. Indeed, the psychological determinants involved in this theory (i.e., the accessibility of anthropocentric knowledge, the motivation of individuals to understand the behavior of other agents, and to create social links) can be impacted by all the factors we have listed in this review.
Although we have observed that many contextual factors can be explained by the SEEK theory, certain questions remain unanswered and, thus, require further research. First, the same result can be interpreted differently by authors depending on their conception of anthropomorphism. For instance, some authors consider the perception of a robot as teleoperated as evidence of anthropomorphism (i.e., individuals would judge the robot’s behavior as resembling that of a human) [124], whereas in other studies, a robot perceived as not acting on its own reflects a lesser degree of anthropomorphism (since it is attributed less free will and agency) [9]. In this review, we have seen that when participants are informed that a robot is teleoperated, they attribute fewer mental states to it; inferring greater anthropomorphism from participants’ belief that the robot is teleoperated may, therefore, be an incorrect interpretation. Second, we observed similarities in people’s behavior toward humans and robots (particularly regarding differences in gender, personality, culture, and developmental type), but more studies are needed to determine whether interactions with non-human agents involve the same socio-cognitive mechanisms as those involved in interactions with humans (see Section 3.3). Some neuroimaging studies suggest that they do, for example, regarding the ToM [55,61]. However, others suggest that the ToM would not be necessary for anthropomorphism [51,52] but would only serve as a way to describe the situation. We could argue that ToM may be involved in human–robot interactions regardless of robot appearance, since humans attribute mental states to non-human-like robots, albeit to a lesser extent than to human-like robots [33,34]. This highlights the importance of context in mental state attributions. Nevertheless, given the difficulty of differentiating between strong and weak anthropomorphism on the basis of self-reported measures, it is complex to ensure that mental states are actually attributed to robots.
These shortcomings emphasize the need to revise the theories explaining anthropomorphism, and the means used to measure it, in order to better understand the phenomenon. It is also important to bear in mind the methodological limitations identified in this field of research because (1) they limit the interpretation of the results obtained, and (2) they prevent the results from being generalized. We recommend a precise description of the samples (age, gender, nationality) and of the robot used as these characteristics can have an impact on the interaction with the robots. The context in which the interaction takes place (the way the robot is presented, the role and autonomy assigned to it, and the duration and frequency of interaction) must also be taken into account when analyzing results. Researchers should be careful to rely on implicit measures of anthropomorphism, which are more objective, in order to circumvent the potential biases of explicit measures. In particular, implicit measures are less likely to be affected by pragmatic factors, and can, therefore, measure participants’ spontaneous attributions more accurately.

Author Contributions

Conceptualization, M.D.-S., B.J. and J.B.; investigation, M.D.-S.; writing—original draft preparation, M.D.-S.; writing—review and editing, B.J. and J.B.; supervision, B.J., F.J. and J.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SEEK: sociality, effectance, and elicited agent knowledge
ToM: theory of mind
ASD: autism spectrum disorders

References

  1. Jamet, F.; Masson, O.; Jacquet, B.; Stilgenbauer, J.L.; Baratgin, J. Learning by teaching with humanoid robot: A new powerful experimental tool to improve children’s learning ability. J. Robot. 2018, 2018, 4578762. [Google Scholar] [CrossRef]
  2. Dubois-Sage, M.; Jacquet, B.; Jamet, F.; Baratgin, J. The mentor-child paradigm for individuals with autism spectrum disorders. In Proceedings of the Workshop Social Robots Personalisation at the Crossroads between Engineering and Humanities (Concatenate) at the 18th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Stockholm, Sweden, 13–16 March 2023. [Google Scholar]
  3. Baratgin, J.; Dubois-Sage, M.; Jacquet, B.; Stilgenbauer, J.L.; Jamet, F. Pragmatics in the false-belief task: Let the robot ask the question! Front. Psychol. 2020, 11, 593807. [Google Scholar] [CrossRef] [PubMed]
  4. Duffy, B. Anthropomorphism and the social robot. Robot. Auton. Syst. 2003, 42, 177–190. [Google Scholar] [CrossRef]
  5. Epley, N.; Waytz, A.; Cacioppo, J.T. On seeing human: A three-factor theory of anthropomorphism. Psychol. Rev. 2007, 114, 864–886. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Bartneck, C.; Kulić, D.; Croft, E.; Zoghbi, S. Measurement Instruments for the Anthropomorphism, Animacy, Likeability, Perceived Intelligence, and Perceived Safety of Robots. Int. J. Soc. Robot. 2009, 1, 71–81. [Google Scholar] [CrossRef] [Green Version]
  7. Reeves, B.; Nass, C.I. The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places; Center for the Study of Language and Information, Ed.; Cambridge University Press: New York, NY, USA, 1996. [Google Scholar]
  8. Cullen, H.; Kanai, R.; Bahrami, B.; Rees, G. Individual differences in anthropomorphic attributions and human brain structure. Soc. Cogn. Affect. Neurosci. 2014, 9, 1276–1280. [Google Scholar] [CrossRef] [Green Version]
  9. Gray, H.M.; Gray, K.; Wegner, D.M. Dimensions of mind perception. Science 2007, 315, 619. [Google Scholar] [CrossRef] [Green Version]
  10. Meltzoff, A.N.; Brooks, R.; Shon, A.P.; Rao, R.P.N. “Social” robots are psychological agents for infants: A test of gaze following. Neural Netw. Off. J. Int. Neural Netw. Soc. 2010, 23, 966–972. [Google Scholar] [CrossRef]
  11. Urgen, B.A.; Plank, M.; Ishiguro, H.; Poizner, H.; Saygin, A.P. EEG theta and Mu oscillations during perception of human and robot actions. Front. Neurorobotics 2013, 7, 19. [Google Scholar] [CrossRef] [Green Version]
  12. De Graaf, M.M.; Malle, B.F. People’s explanations of robot behavior subtly reveal mental state inferences. In Proceedings of the 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Daegu, Republic of Korea, 11–14 March 2019; pp. 239–2148. [Google Scholar] [CrossRef]
  13. Fussell, S.R.; Kiesler, S.; Setlock, L.D.; Yew, V. How people anthropomorphize robots. In Proceedings of the 3rd ACM/IEEE International Conference on Human Robot Interaction, Amsterdam, The Netherlands, 12–15 March 2008; Association for Computing Machinery: New York, NY, USA, 2008. HRI ’08. pp. 145–152. [Google Scholar] [CrossRef]
  14. Ranjbartabar, H.; Richards, D. Should we use human-human factors for validating human-agent relationships? A look at rapport. In Proceedings of the Workshop on Methodology and the Evaluation of Intelligent Virtual Agents (ME-IVA) at the Intelligent Virtual Agent Conference (IVA2018), Sydney, NSW, Australia, 5–8 November 2018; pp. 1–4. [Google Scholar]
  15. Thellman, S.; Ziemke, T. The intentional stance toward robots: Conceptual and methodological considerations. In Proceedings of the 41st Annual Conference of the Cognitive Science Society, Montreal, QC, Canada, 24–27 July 2019; Proceedings of the CogSci’19. Goel, A.K., Seifert, C.M., Freksa, C., Eds.; Cognitive Science Society Inc.: Seattle, WA, USA, 2019; pp. 1097–1103. [Google Scholar]
  16. Thellman, S.; Giagtzidou, A.; Silvervarg, A.; Ziemke, T. An implicit, non-verbal measure of belief attribution to robots. In Proceedings of the Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, New York, NY, USA, 23–26 March 2020; ACM: Cambridge UK, 2020; pp. 473–475. [Google Scholar] [CrossRef]
  17. Thellman, S.; de Graaf, M.; Ziemke, T. Mental State Attribution to Robots: A Systematic Review of Conceptions, Methods, and Findings. ACM Trans.-Hum.-Robot. Interact. 2022, 11, 1–51. [Google Scholar] [CrossRef]
  18. Guthrie, S.E. Faces in the Clouds: A New Theory of Religion; Oxford University Press: New York, NY, USA, 1995. [Google Scholar]
  19. Dacey, M. Anthropomorphism as Cognitive Bias. Philos. Sci. 2017, 84, 1152–1164. [Google Scholar] [CrossRef]
  20. Dacey, M.; Coane, J.H. Implicit measures of anthropomorphism: Affective priming and recognition of apparent animal emotions. Front. Psychol. 2023, 14, 1149444. [Google Scholar] [CrossRef] [PubMed]
  21. Caporael, L.R. Anthropomorphism and mechanomorphism: Two faces of the human machine. Comput. Hum. Behav. 1986, 2, 215–234. [Google Scholar] [CrossRef]
  22. Zanatto, D.; Patacchiola, M.; Cangelosi, A.; Goslin, J. Generalisation of anthropomorphic stereotype. Int. J. Soc. Robot. 2020, 12, 163–172. [Google Scholar] [CrossRef]
  23. Tversky, A.; Kahneman, D. Judgment under uncertainty: Heuristic and biases. Science 1974, 185, 1124–1131. [Google Scholar] [CrossRef]
  24. Złotowski, J.; Sumioka, H.; Eyssel, F.; Nishio, S.; Bartneck, C.; Ishiguro, H. Model of Dual Anthropomorphism: The Relationship Between the Media Equation Effect and Implicit Anthropomorphism. Int. J. Soc. Robot. 2018, 10, 701–714. [Google Scholar] [CrossRef]
  25. Airenti, G. The Development of Anthropomorphism in Interaction: Intersubjectivity, Imagination, and Theory of Mind. Front. Psychol. 2018, 9, 2136. [Google Scholar] [CrossRef]
  26. Zhao, X.; Malle, B.F. Spontaneous perspective taking toward robots: The unique impact of humanlike appearance. Cognition 2022, 224, 105076. [Google Scholar] [CrossRef]
  27. Atherton, G.; Cross, L. Seeing more than human: Autism and anthropomorphic theory of mind. Front. Psychol. 2018, 9, 528. [Google Scholar] [CrossRef] [Green Version]
  28. Chaminade, T.; Franklin, D.; Oztop, E.; Cheng, G. Motor interference between Humans and Humanoid Robots: Effect of Biological and Artificial Motion. In Proceedings of the 4th International Conference on Development and Learning, Hong Kong, China, 31 July–3 August 2005; Volume 2005, pp. 96–101. [Google Scholar] [CrossRef]
  29. Chaminade, T.; Zecca, M.; Blakemore, S.J.; Takanishi, A.; Frith, C.; Micera, S.; Dario, P.; Rizzolatti, G.; Gallese, V.; Umiltà, M. Brain response to a humanoid robot in areas implicated in the perception of human emotional gestures. PLoS ONE 2010, 5, e11577. [Google Scholar] [CrossRef] [Green Version]
  30. Heyes, C.M.; Frith, C.D. The cultural evolution of mind reading. Science 2014, 344, 1243091. [Google Scholar] [CrossRef]
  31. Shepard, R.N. Toward a universal law of generalization for psychological science. Science 1987, 237, 1317–1323. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  32. Zhao, X.; Cusimano, C.; Malle, B. Do people spontaneously take a robot’s visual perspective? In Proceedings of the 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Christchurch, New Zealand, 7–10 March 2016; ACM: New York, NY, USA, 2016; pp. 335–342. [Google Scholar] [CrossRef]
  33. Paepcke, S.; Takayama, L. Judging a bot by its cover: An experiment on expectation setting for personal robots. In Proceedings of the 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Osaka, Japan, 2–5 March 2010; ACM: New York, NY, USA, 2010; pp. 45–2148. [Google Scholar] [CrossRef] [Green Version]
  34. Banks, J. Theory of mind in social robots: Replication of five established human tests. Int. J. Soc. Robot. 2020, 12, 403–414. [Google Scholar] [CrossRef]
  35. Heider, F.; Simmel, M. An Experimental Study of Apparent Behavior. Am. J. Psychol. 1944, 57, 243–259. [Google Scholar] [CrossRef]
  36. Zlotowski, J.; Sumioka, H.; Nishio, S.; Glas, D.; Bartneck, C.; Ishiguro, H. Persistence of the uncanny valley: The influence of repeated interactions and a robot’s attitude on its perception. Front. Psychol. 2015, 6, 883. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  37. Flanagan, T.; Wong, G.; Kushnir, T. The minds of machines: Children’s beliefs about the experiences, thoughts, and morals of familiar interactive technologies. Dev. Psychol. 2023, 59, 1017–1031. [Google Scholar] [CrossRef]
  38. Spatola, N. L’interaction homme-robot, de l’anthropomorphisme à l’humanisation. L’Année Psychol. 2019, 119, 515–563. [Google Scholar] [CrossRef]
  39. Kim, M.J.; Kohn, S.; Shaw, T. Does Long-Term Exposure to Robots Affect Mind Perception? An Exploratory Study. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2020, 64, 1820–1824. [Google Scholar] [CrossRef]
  40. Nijssen, S.R.R.; Müller, B.C.N.; Bosse, T.; Paulus, M. You, robot? The role of anthropomorphic emotion attributions in children’s sharing with a robot. Int. J.-Child-Comput. Interact. 2021, 30, 332–336. [Google Scholar] [CrossRef]
  41. Barsante, L.S.; Paixão, K.S.; Laass, K.H.; Cardoso, R.T.N.; Eiras, Ã.E.; Acebal, J.L. A model to predict the population size of the dengue fever vector based on rainfall data. arXiv 2014, arXiv:1409.7942. [Google Scholar]
  42. Waytz, A.; Gray, K.; Epley, N.; Wegner, D.M. Causes and consequences of mind perception. Trends Cogn. Sci. 2010, 14, 383–388. [Google Scholar] [CrossRef] [PubMed]
  43. Waytz, A.; Morewedge, C.K.; Epley, N.; Monteleone, G.; Gao, J.H.; Cacioppo, J.T. Making sense by making sentient: Effectance motivation increases anthropomorphism. J. Personal. Soc. Psychol. 2010, 99, 410–435. [Google Scholar] [CrossRef] [PubMed]
  44. Waytz, A.; Cacioppo, J.; Epley, N. Who sees human? The stability and importance of individual differences in anthropomorphism. Perspect. Psychol. Sci. 2010, 5, 219–232. [Google Scholar] [CrossRef] [PubMed]
  45. Spatola, N.; Wykowska, A. The personality of anthropomorphism: How the need for cognition and the need for closure define attitudes and anthropomorphic attributions toward robots. Comput. Hum. Behav. 2021, 122, 106841. [Google Scholar] [CrossRef]
  46. Bartz, J.A.; Tchalova, K.; Fenerci, C. Reminders of Social Connection Can Attenuate Anthropomorphism: A Replication and Extension of Epley, Akalis, Waytz, and Cacioppo (2008). Psychol. Sci. 2016, 27, 1644–1650. [Google Scholar] [CrossRef] [PubMed]
  47. Lee, K.M.; Jung, Y.; Kim, J.; Kim, S.R. Are physically embodied social agents better than disembodied social agents?: The effects of physical embodiment, tactile interaction, and people’s loneliness in human-robot interaction. Int. J.-Hum.-Comput. Stud. 2006, 64, 962–973. [Google Scholar] [CrossRef]
  48. Jung, Y.; Hahn, S. Social Robots as Companions for Lonely Hearts: The Role of Anthropomorphism and Robot Appearances. arXiv 2023, arXiv:2306.02694. [Google Scholar]
  49. Wang, W. Smartphones as social actors? Social dispositional factors in assessing anthropomorphism. Comput. Hum. Behav. 2017, 68, 334–344. [Google Scholar] [CrossRef]
  50. Premack, D.; Woodruff, G. Does the chimpanzee have a theory of mind? Behav. Brain Sci. 1978, 1, 515–526. [Google Scholar] [CrossRef] [Green Version]
  51. Hortensius, R.; Kent, M.; Darda, K.M.; Jastrzab, L.; Koldewyn, K.; Ramsey, R.; Cross, E.S. Exploring the relationship between anthropomorphism and theory-of-mind in brain and behavior. Hum. Brain Mapp. 2021, 42, 4224–4241. [Google Scholar] [CrossRef]
  52. Tahiroglu, D.; Taylor, M. Anthropomorphism, social understanding, and imaginary companions. Br. J. Dev. Psychol. 2019, 37, 284–299. [Google Scholar] [CrossRef] [PubMed]
  53. Marchetti, A.; Manzi, F.; Itakura, S.; Massaro, D. Theory of mind and humanoid robots from a lifespan perspective. Z. Psychol. 2018, 226, 98–109. [Google Scholar] [CrossRef]
  54. Woo, B.M.; Tan, E.; Hamlin, J.K. Theory of mind in context: Mental-state representations for social evaluation. Behav. Brain Sci. 2021, 44, e176. [Google Scholar] [CrossRef]
  55. Hortensius, R.; Cross, E.S. From automata to animate beings: The scope and limits of attributing socialness to artificial agents. Ann. N. Y. Acad. Sci. 2018, 1426, 93–110. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  56. Carrington, S.J.; Bailey, A.J. Are there theory of mind regions in the brain? A review of the neuroimaging literature. Hum. Brain Mapp. 2009, 30, 2313–2335. [Google Scholar] [CrossRef]
  57. Schurz, M.; Radua, J.; Aichhorn, M.; Richlan, F.; Perner, J. Fractionating theory of mind: A meta-analysis of functional brain imaging studies. Neurosci. Biobehav. Rev. 2014, 42, 9–34. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  58. Schurz, M.; Tholen, M.G.; Perner, J.; Mars, R.B.; Sallet, J. Specifying the brain anatomy underlying temporo-parietal junction activations for theory of mind: A review using probabilistic atlases from different imaging modalities. Hum. Brain Mapp. 2017, 38, 4788–4805. [Google Scholar] [CrossRef] [Green Version]
  59. Spunt, R.P.; Ellsworth, E.; Adolphs, R. The neural basis of understanding the expression of the emotions in man and animals. Soc. Cogn. Affect. Neurosci. 2017, 12, 95–105. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  60. Chaminade, T.; Hodgins, J.; Kawato, M. Anthropomorphism influences perception of computer-animated characters’ actions. Soc. Cogn. Affect. Neurosci. 2007, 2, 206–216. [Google Scholar] [CrossRef] [PubMed] [Green Version]
61. Wykowska, A.; Chellali, R.; Al-Amin, M.M.; Müller, H. Implications of robot actions for human perception. How do we represent actions of the observed robots? Int. J. Soc. Robot. 2014, 6, 357–366. [Google Scholar] [CrossRef]
  62. Kühn, S.; Brick, T.R.; Müller, B.C.N.; Gallinat, J. Is This Car Looking at You? How Anthropomorphism Predicts Fusiform Face Area Activation when Seeing Cars. PLoS ONE 2014, 9, e113885. [Google Scholar] [CrossRef] [PubMed]
  63. Quesque, F.; Rossetti, Y. What do theory-of-mind tasks actually measure? Theory and practice. Perspect. Psychol. Sci. 2020, 15, 384–396. [Google Scholar] [CrossRef] [PubMed]
  64. Ruijten, P.A.; Haans, A.; Ham, J.; Midden, C.J. Perceived human-likeness of social robots: Testing the Rasch model as a method for measuring anthropomorphism. Int. J. Soc. Robot. 2019, 11, 477–494. [Google Scholar] [CrossRef] [Green Version]
  65. Nijssen, S.R.R.; Müller, B.C.N.; Baaren, R.B.v.; Paulus, M. Saving the robot or the human? Robots who feel deserve moral care. Soc. Cogn. 2019, 37, 41–56. [Google Scholar] [CrossRef]
  66. Mubin, O.; Stevens, C.; Shahid, S.; Mahmud, A.; Dong, J.J. A review of the applicability of robots in education. Technol. Educ. Learn. 2013, 1, 13. [Google Scholar] [CrossRef] [Green Version]
  67. Barco, A.; de Jong, C.; Peter, J.; Kühne, R.; van Straten, C.L. Robot Morphology and Children’s Perception of Social Robots: An Exploratory Study. In Proceedings of the Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, Cambridge, UK, 23–26 March 2020; Association for Computing Machinery: New York, NY, USA, 2020. HRI ’20. pp. 125–127. [Google Scholar] [CrossRef] [Green Version]
  68. Broadbent, E.; Kumar, V.; Li, X.; Sollers, J.; Stafford, R.; Macdonald, B.; Wegner, D. Robots with display screens: A Robot with a more humanlike face display is perceived to have more mind and a better personality. PLoS ONE 2013, 8, e72589. [Google Scholar] [CrossRef] [PubMed]
  69. Burdett, E.R.R.; Ikari, S.; Nakawake, Y. British children’s and adults’ perceptions of robots. Hum. Behav. Emerg. Technol. 2022, 2022, 3813820. [Google Scholar] [CrossRef]
  70. Carpinella, C.M.; Wyman, A.B.; Perez, M.A.; Stroessner, S.J. The Robotic Social Attributes Scale (RoSAS): Development and Validation. In Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, Vienna, Austria, 6–9 March 2017; Association for Computing Machinery: New York, NY, USA, 2017. HRI ’17. pp. 254–262. [Google Scholar] [CrossRef]
71. Disalvo, C.; Gemperle, F.; Forlizzi, J.; Kiesler, S. All robots are not created equal: The design and perception of humanoid robot heads. In Proceedings of the 4th Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques, London, UK, 25–28 June 2002; Proceedings of the DIS’02. ACM: New York, NY, USA, 2002; pp. 321–326. [Google Scholar] [CrossRef]
  72. Goldman, E.J.; Baumann, A.E.; Poulin-Dubois, D. Preschoolers’ anthropomorphizing of robots: Do human-like properties matter? Front. Psychol. 2023, 13, 1102370. [Google Scholar] [CrossRef]
  73. Haring, K.S.; Silvera-Tawil, D.; Takahashi, T.; Watanabe, K.; Velonaki, M. How people perceive different robot types: A direct comparison of an android, humanoid, and non-biomimetic robot. In Proceedings of the 2016 8th International Conference on Knowledge and Smart Technology (KST), Chiangmai, Thailand, 3–6 February 2016; pp. 265–270. [Google Scholar] [CrossRef]
  74. Kiesler, S.; Powers, A.; Fussell, S.R.; Torrey, C. Anthropomorphic interactions with a robot and robot-like agent. Soc. Cogn. 2008, 26, 169–181. [Google Scholar] [CrossRef]
  75. Krach, S.; Hegel, F.; Wrede, B.; Sagerer, G.; Binkofski, F.; Kircher, T. Can machines think? Interaction and perspective taking with robots investigated via fMRI. PLoS ONE 2008, 3, e2597. [Google Scholar] [CrossRef]
  76. Malle, B.F.; Scheutz, M.; Forlizzi, J.; Voiklis, J. Which robot am I thinking about? The impact of action and appearance on people’s evaluations of a moral robot. In Proceedings of the 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Christchurch, New Zealand, 7–10 March 2016; pp. 125–132. [Google Scholar] [CrossRef]
  77. Manzi, F.; Peretti, G.; Di Dio, C.; Cangelosi, A.; Itakura, S.; Kanda, T.; Ishiguro, H.; Massaro, D.; Marchetti, A. A robot is not worth another: Exploring children’s mental state attribution to different humanoid robots. Front. Psychol. 2020, 11, 2011. [Google Scholar] [CrossRef]
  78. Manzi, F.; Massaro, D.; Di Lernia, D.; Maggioni, M.A.; Riva, G.; Marchetti, A. Robots Are Not All the Same: Young Adults’ Expectations, Attitudes, and Mental Attribution to Two Humanoid Social Robots. Cyberpsychol. Behav. Soc. Netw. 2021, 24, 307–314. [Google Scholar] [CrossRef] [PubMed]
79. Onnasch, L.; Hildebrandt, C.L. Impact of anthropomorphic robot design on trust and attention in industrial human-robot interaction. ACM Trans. Hum.-Robot Interact. 2021, 11, 1–24. [Google Scholar] [CrossRef]
80. Powers, A.; Kiesler, S. The advisor robot: Tracing people’s mental model from a robot’s physical attributes. In Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, Salt Lake City, UT, USA, 2–3 March 2006; Proceedings of the HRI ’06. ACM: New York, NY, USA, 2006; pp. 218–225. [Google Scholar] [CrossRef]
  81. Riek, L.D.; Rabinowitch, T.C.; Chakrabarti, B.; Robinson, P. How anthropomorphism affects empathy toward robots. In Proceedings of the 4th ACM/IEEE International Conference on Human Robot Interaction—HRI’09, La Jolla, CA, USA, 9–13 March 2009; ACM Press: La Jolla, CA, USA, 2009; pp. 245–246. [Google Scholar] [CrossRef] [Green Version]
  82. Sacino, A.; Cocchella, F.; De Vita, G.; Bracco, F.; Rea, F.; Sciutti, A.; Andrighetto, L. Human- or object-like? Cognitive anthropomorphism of humanoid robots. PLoS ONE 2022, 17, e0270787. [Google Scholar] [CrossRef]
  83. Sommer, K.; Nielsen, M.; Draheim, M.; Redshaw, J.; Vanman, E.; Wilks, M. Children’s perceptions of the moral worth of live agents, robots, and inanimate objects. J. Exp. Child Psychol. 2019, 187, 104656. [Google Scholar] [CrossRef] [PubMed]
84. Tung, F.W. Child perception of humanoid robot appearance and behavior. Int. J. Hum.-Comput. Interact. 2016, 32, 493–502. [Google Scholar] [CrossRef]
  85. Zanatto, D.; Patacchiola, M.; Goslin, J.; Cangelosi, A. Priming anthropomorphism: Can the credibility of humanlike robots be transferred to non-humanlike robots? In Proceedings of the 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Christchurch, New Zealand, 7–10 March 2016; pp. 543–544. [Google Scholar] [CrossRef]
  86. Baxter, P.; Ashurst, E.; Read, R.; Kennedy, J.; Belpaeme, T. Robot education peers in a situated primary school study: Personalisation promotes child learning. PLoS ONE 2017, 12, e0178126. [Google Scholar] [CrossRef] [Green Version]
  87. Boladeras, M.; Nuño, N.; Saez-Pons, J.; Pardo, D.; Angulo, C. Building up child-robot relationship for therapeutic purposes: From initial attraction towards long-term social engagement. In Proceedings of the 2011 IEEE International Conference on Automatic Face & Gesture Recognition (FG), Santa Barbara, CA, USA, 21–23 March 2011; pp. 927–932. [Google Scholar] [CrossRef]
  88. Breazeal, C.; Harris, P.L.; DeSteno, D.; Kory Westlund, J.M.; Dickens, L.; Jeong, S. Young children treat robots as informants. Top. Cogn. Sci. 2016, 8, 481–491. [Google Scholar] [CrossRef] [PubMed] [Green Version]
89. Henkemans, O.A.B.; Bierman, B.P.; Janssen, J.; Looije, R.; Neerincx, M.A.; van Dooren, M.M.; de Vries, J.L.; van der Burg, G.J.; Huisman, S.D. Design and evaluation of a personal robot playing a self-management education game with children with diabetes type 1. Int. J. Hum.-Comput. Stud. 2017, 106, 63–76. [Google Scholar] [CrossRef] [Green Version]
  90. Horstmann, A.C.; Krämer, N.C. Expectations vs. actual behavior of a social robot: An experimental investigation of the effects of a social robot’s interaction skill level and its expected future role on people’s evaluations. PLoS ONE 2020, 15, e0238133. [Google Scholar] [CrossRef]
91. Huang, C.M.; Thomaz, A.L. Joint attention in human-robot interaction. In Proceedings of the 2010 AAAI Fall Symposium Series, Arlington, VA, USA, 11–13 November 2010. [Google Scholar]
  92. Kanda, T.; Shimada, M.; Koizumi, S. Children learning with a social robot. In Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction, Boston, MA, USA, 5–8 March 2012; Proceedings of the HRI ’12. ACM: New York, NY, USA, 2012; pp. 351–358. [Google Scholar] [CrossRef]
  93. Kruijff-Korbayová, I.; Oleari, E.; Bagherzadhalimi, A.; Sacchitelli, F.; Kiefer, B.; Racioppa, S.; Pozzi, C.; Sanna, A. Young users’ perception of a social robot displaying familiarity and eliciting disclosure. In Proceedings of the Social Robotics; Tapus, A., André, E., Martin, J.C., Ferland, F., Ammi, M., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 380–389. [Google Scholar] [CrossRef]
  94. Kumar, S.; Itzhak, E.; Edan, Y.; Nimrod, G.; Sarne-Fleischmann, V.; Tractinsky, N. Politeness in Human-Robot Interaction: A Multi-Experiment Study with Non-Humanoid Robots. Int. J. Soc. Robot. 2022, 14, 1805–1820. [Google Scholar] [CrossRef]
  95. Li, M.; Guo, F.; Wang, X.; Chen, J.; Ham, J. Effects of robot gaze and voice human-likeness on users’ subjective perception, visual attention, and cerebral activity in voice conversations. Comput. Hum. Behav. 2023, 141, 107645. [Google Scholar] [CrossRef]
  96. Looije, R.; Neerincx, M.A.; Hindriks, K.V. Specifying and testing the design rationale of social robots for behavior change in children. Cogn. Syst. Res. 2017, 43, 250–265. [Google Scholar] [CrossRef] [Green Version]
  97. Manzi, F.; Ishikawa, M.; Di Dio, C.; Itakura, S.; Kanda, T.; Ishiguro, H.; Massaro, D.; Marchetti, A. The understanding of congruent and incongruent referential gaze in 17-month-old infants: An eye-tracking study comparing human and robot. Sci. Rep. 2020, 10, 11918. [Google Scholar] [CrossRef]
98. Nitsch, V.; Glassen, T. Investigating the effects of robot behavior and attitude towards technology on social human-robot interactions. In Proceedings of the 2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Kobe, Japan, 31 August–4 September 2015; pp. 535–540. [Google Scholar] [CrossRef]
  99. Obaid, M.; Sandoval, E.; Złotowski, J.; Moltchanova, E.; Basedow, C.; Bartneck, C. Stop! That is close enough. How body postures influence human-robot proximity. In Proceedings of the 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), New York, NY, USA, 26–31 August 2016; pp. 354–361. [Google Scholar] [CrossRef]
  100. Okumura, Y.; Hattori, T.; Fujita, S.; Kobayashi, T. A robot is watching me!: Five-year-old children care about their reputation after interaction with a social robot. Child Dev. 2023, 94, 865–873. [Google Scholar] [CrossRef]
  101. Rossignoli, D.; Manzi, F.; Gaggioli, A.; Marchetti, A.; Massaro, D.; Riva, G.; Maggioni, M. Attribution of mental state in strategic human-robot interactions. Res. Sq. 2022. [Google Scholar] [CrossRef]
  102. Tozadore, D.C.; Pinto, A.H.; Romero, R.A. Variation in a Humanoid Robot Behavior to Analyse Interaction Quality in Pedagogical Sessions with Children. In Proceedings of the 2016 XIII Latin American Robotics Symposium and IV Brazilian Robotics Symposium (LARS/SBR), Recife, Brazil, 8–12 October 2016; pp. 133–138. [Google Scholar] [CrossRef]
  103. Wigdor, N.; Greeff, J.; Looije, R.; Neerincx, M. How to improve human-robot interaction with Conversational Fillers. In Proceedings of the 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), New York, NY, USA, 26–31 August 2016; pp. 219–224. [Google Scholar] [CrossRef]
  104. Simmons, R.; Knight, H. Keep on dancing: Effects of expressive motion mimicry. In Proceedings of the 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Lisbon, Portugal, 28 August–1 September 2017; pp. 720–727. [Google Scholar] [CrossRef]
105. Castro-González, À.; Admoni, H.; Scassellati, B. Effects of form and motion on judgments of social robots’ animacy, likability, trustworthiness and unpleasantness. Int. J. Hum.-Comput. Stud. 2016, 90, 27–38. [Google Scholar] [CrossRef]
  106. Kuz, S.; Mayer, M.P.; Müller, S.; Schlick, C.M. Using Anthropomorphism to Improve the Human-Machine Interaction in Industrial Environments (Part I). In Proceedings of the Digital Human Modeling and Applications in Health, Safety, Ergonomics, and Risk Management. Human Body Modeling and Ergonomics; Duffy, V.G., Ed.; Springer: Berlin/Heidelberg, Germany, 2013; Lecture Notes in Computer Science; pp. 76–85. [Google Scholar] [CrossRef]
  107. Salem, M.; Eyssel, F.; Rohlfing, K.; Kopp, S.; Joublin, F. To Err is Human-like: Effects of Robot Gesture on Perceived Anthropomorphism and Likability. Int. J. Soc. Robot. 2013, 5, 313–323. [Google Scholar] [CrossRef]
  108. Tremoulet, P.D.; Feldman, J. Perception of animacy from the motion of a single object. Perception 2000, 29, 943–951. [Google Scholar] [CrossRef]
  109. Eyssel, F.; Kuchenbrandt, D.; Hegel, F.; de Ruiter, L. Activating elicited agent knowledge: How robot and user features shape the perception of social robots. In Proceedings of the 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, 9–13 September 2012; pp. 851–857. [Google Scholar] [CrossRef]
  110. Kuriki, S.; Tamura, Y.; Igarashi, M.; Kato, N.; Nakano, T. Similar impressions of humanness for human and artificial singing voices in autism spectrum disorders. Cognition 2016, 153, 1–5. [Google Scholar] [CrossRef]
  111. Masson, O.; Baratgin, J.; Jamet, F. NAO robot as experimenter: Social cues emitter and neutralizer to bring new results in experimental psychology. In Proceedings of the International Conference on Information and Digital Technologies, IDT 2017, Zilina, Slovakia, 5–7 July 2017; pp. 256–264. [Google Scholar] [CrossRef]
  112. Niculescu, A.; Dijk, B.; Nijholt, A.; Li, H.; See, S. Making social robots more attractive: The effects of voice pitch, humor and empathy. Int. J. Soc. Robot. 2013, 5, 171–191. [Google Scholar] [CrossRef] [Green Version]
  113. Tielman, M.; Neerincx, M.; Meyer, J.J.; Looije, R. Adaptive emotional expression in robot-child interaction. In Proceedings of the 2014 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Bielefeld, Germany, 3–6 March 2014; ACM: New York, NY, USA, 2014; pp. 407–414. [Google Scholar] [CrossRef] [Green Version]
114. Torre, I.; Goslin, J.; White, L.; Zanatto, D. Trust in artificial voices: A “congruency effect” of first impressions and behavioral experience. In Proceedings of the Technology, Mind, and Society, Washington, DC, USA, 5–7 April 2018; Proceedings of the TechMindSociety ’18. ACM: New York, NY, USA, 2018; Volume 40, pp. 1–6. [Google Scholar] [CrossRef] [Green Version]
  115. Arnheim, R. Visual Thinking; University of California Press: Berkeley, CA, USA, 1969. [Google Scholar]
  116. Carey, S.; Spelke, E. Domain-specific knowledge and conceptual change. In Mapping the Mind: Domain Specificity in Cognition and Culture; Cambridge University Press: New York, NY, USA, 1994; pp. 169–200. [Google Scholar] [CrossRef]
  117. Yee, N.; Bailenson, J.N.; Rickertsen, K. A meta-analysis of the impact of the inclusion and realism of human-like faces on user experiences in interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, San Jose, CA, USA, 28 April–3 May 2007; Association for Computing Machinery: New York, NY, USA, 2007. CHI ’07. pp. 1–10. [Google Scholar] [CrossRef] [Green Version]
  118. Yin, R.K. Looking at upside-down faces. J. Exp. Psychol. 1969, 81, 141–145. [Google Scholar] [CrossRef]
  119. Leder, H.; Bruce, V. When inverted faces are recognized: The Role of configural information in face recognition. Q. J. Exp. Psychol. Sect. 2000, 53, 513–536. [Google Scholar] [CrossRef] [PubMed]
  120. Huijnen, C.A.G.J.; Lexis, M.A.S.; Jansens, R.; de Witte, L.P. How to implement robots in interventions for children with autism? A co-creation study involving people with autism, parents and professionals. J. Autism Dev. Disord. 2017, 47, 3079–3096. [Google Scholar] [CrossRef] [Green Version]
  121. Masson, O.; Baratgin, J.; Jamet, F. NAO robot, transmitter of social cues: What impacts? In Proceedings of the Advances in Artificial Intelligence: From Theory to Practice; Benferhat, S., Tabia, K., Ali, M., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 559–568. [Google Scholar] [CrossRef]
  122. Kahn, P.H.; Kanda, T.; Ishiguro, H.; Freier, N.G.; Severson, R.L.; Gill, B.T.; Ruckert, J.H.; Shen, S. “ROBOVIE, you’ll have to go into the closet now”: Children’s social and moral relationships with a humanoid robot. Dev. Psychol. 2012, 48, 303–314. [Google Scholar] [CrossRef] [Green Version]
  123. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 13, 319. [Google Scholar] [CrossRef] [Green Version]
  124. Minato, T.; Sakai, K.; Uchida, T.; Ishiguro, H. A study of interactive robot architecture through the practical implementation of conversational android. Front. Robot. AI 2022, 9, 905030. [Google Scholar] [CrossRef]
  125. Lacroix, D.; Wullenkord, R.; Eyssel, F. I Designed It, So I Trust It: The Influence of Customization on Psychological Ownership and Trust Toward Robots. In Proceedings of the Social Robotics (ICSR 2022); Cavallo, F., Cabibihan, J.J., Fiorini, L., Sorrentino, A., He, H., Liu, X., Matsumoto, Y., Ge, S.S., Eds.; Springer Nature: Cham, Switzerland, 2023; Lecture Notes in Computer Science; Volume 13818, pp. 601–614. [Google Scholar] [CrossRef]
  126. Grice, H.P. Logic and conversation. In Speech Acts; Cole, P., Morgan, J.L., Eds.; Academic Press: New York, NY, USA, 1975; Syntax and Semantics; Volume 3, pp. 43–58. [Google Scholar]
127. Ducrot, O. Dire et ne pas dire: Principes de sémantique linguistique, 3rd ed.; Collection Savoir; Hermann: Paris, France, 2008. [Google Scholar]
  128. Sperber, D.; Wilson, D. Relevance: Communication and Cognition, 2nd ed.; Blackwell Publishers: Oxford, UK; Cambridge, MA, USA, 2001. [Google Scholar]
  129. Jacquet, B.; Baratgin, J.; Jamet, F. The Gricean Maxims of Quantity and of Relation in the Turing Test. In Proceedings of the 2018 11th International Conference on Human System Interaction (HSI), Gdansk, Poland, 4–6 July 2018; pp. 332–338. [Google Scholar] [CrossRef]
  130. Jacquet, B.; Masson, O.; Jamet, F.; Baratgin, J. On the Lack of Pragmatic Processing in Artificial Conversational Agents. In Proceedings of the Human Systems Engineering and Design; Ahram, T., Karwowski, W., Taiar, R., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 394–399. [Google Scholar] [CrossRef]
  131. Jacquet, B.; Baratgin, J.; Jamet, F. Cooperation in Online Conversations: The Response Times as a Window Into the Cognition of Language Processing. Front. Psychol. 2019, 10, 727. [Google Scholar] [CrossRef]
  132. Jacquet, B.; Hullin, A.; Baratgin, J.; Jamet, F. The Impact of the Gricean Maxims of Quality, Quantity and Manner in Chatbots. In Proceedings of the 2019 International Conference on Information and Digital Technologies (IDT), Zilina, Slovakia, 25–27 June 2019; pp. 180–189. [Google Scholar] [CrossRef]
  133. Jacquet, B.; Jaraud, C.; Jamet, F.; Guéraud, S.; Baratgin, J. Contextual Information Helps Understand Messages Written with Textisms. Appl. Sci. 2021, 11, 4853. [Google Scholar] [CrossRef]
  134. Kumazaki, H.; Muramatsu, T.; Yoshikawa, Y.; Matsumoto, Y.; Ishiguro, H.; Kikuchi, M.; Sumiyoshi, T.; Mimura, M. Optimal robot for intervention for individuals with autism spectrum disorders. Psychiatry Clin. Neurosci. 2020, 74, 581–586. [Google Scholar] [CrossRef]
  135. Bailenson, J.; Swinth, K.; Hoyt, C.; Persky, S.; Dimov, A.; Blascovich, J. The Independent and Interactive Effects of Embodied-Agent Appearance and Behavior on Self-Report, Cognitive, and Behavioral Markers of Copresence in Immersive Virtual Environments. Presence 2005, 14, 379–393. [Google Scholar] [CrossRef]
136. Barchard, K.A.; Lapping-Carr, L.; Westfall, R.S.; Fink-Armold, A.; Banisetty, S.B.; Feil-Seifer, D. Measuring the Perceived Social Intelligence of Robots. ACM Trans. Hum.-Robot Interact. 2020, 9, 1–29. [Google Scholar] [CrossRef]
  137. Darling, K.; Nandy, P.; Breazeal, C. Empathic concern and the effect of stories in human-robot interaction. In Proceedings of the 2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Kobe, Japan, 31 August–4 September 2015; pp. 770–775. [Google Scholar] [CrossRef] [Green Version]
  138. Kory Westlund, J.; Martinez, M.; Archie, M.; Das, M.; Breazeal, C. Effects of framing a robot as a social agent or as a machine on children’s social behavior. In Proceedings of the 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), New York, NY, USA, 26–31 August 2016; pp. 688–693. [Google Scholar] [CrossRef]
  139. Mara, M.; Appel, M. Science fiction reduces the eeriness of android robots: A field experiment. Comput. Hum. Behav. 2015, 48, 156–162. [Google Scholar] [CrossRef]
  140. Mou, W.; Ruocco, M.; Zanatto, D.; Cangelosi, A. When would you trust a robot? A study on trust and theory of mind in human-robot interactions. In Proceedings of the 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Naples, Italy, 31 August–4 September 2020; pp. 956–9437. [Google Scholar] [CrossRef]
  141. Nijssen, S.R.R.; Heyselaar, E.; Müller, B.C.N.; Bosse, T. Do we take a robot’s needs into account? The effect of humanization on prosocial considerations toward other human beings and robots. Cyberpsychology Behav. Soc. Netw. 2021, 24, 332–336. [Google Scholar] [CrossRef] [PubMed]
  142. Onnasch, L.; Roesler, E. Anthropomorphizing robots: The effect of framing in human-robot collaboration. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2019, 63, 1311–1315. [Google Scholar] [CrossRef] [Green Version]
  143. Rosenthal-von der Pütten, A.; Straßmann, C.; Mara, M. A long time ago in a galaxy far, far away…The effects of narration and appearance on the perception of robots. In Proceedings of the 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Lisbon, Portugal, 28 August–1 September 2017; pp. 1169–9437. [Google Scholar] [CrossRef]
  144. Ruocco, M.; Mou, W.; Cangelosi, A.; Jay, C.; Zanatto, D. Theory of Mind improves human’s trust in an iterative human-robot game. In Proceedings of the 9th International Conference on Human-Agent Interaction, Virtual Event, Japan, 9–11 November 2021; ACM: New York, NY, USA, 2021; pp. 227–234. [Google Scholar] [CrossRef]
  145. Schömbs, S.; Klein, J.; Roesler, E. Feeling with a robot—The role of anthropomorphism by design and the tendency to anthropomorphize in human-robot interaction. Front. Robot. AI 2023, 10, 1149601. [Google Scholar] [CrossRef] [PubMed]
  146. Söderlund, M. Service robots with (perceived) theory of mind: An examination of humans’ reactions. J. Retail. Consum. Serv. 2022, 67, 102999. [Google Scholar] [CrossRef]
  147. Sturgeon, S.; Palmer, A.; Blankenburg, J.; Feil-Seifer, D. Perception of social intelligence in robots performing false-belief tasks. In Proceedings of the 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), New Delhi, India, 14–18 October 2019; pp. 1–7. [Google Scholar] [CrossRef]
  148. Chernyak, N.; Gary, H.E. Children’s cognitive and behavioral reactions to an autonomous versus controlled social robot dog. Early Educ. Dev. 2016, 27, 1175–1189. [Google Scholar] [CrossRef]
  149. Haas, M.; Aroyo, A.M.; Barakova, E.; Haselager, W.; Smeekens, I. The effect of a semi-autonomous robot on children. In Proceedings of the 2016 IEEE 8th International Conference on Intelligent Systems (IS), Sofia, Bulgaria, 4–6 September 2016; pp. 376–381. [Google Scholar] [CrossRef]
  150. Lee, H.; Choi, J.J.; Kwak, S.S. Will you follow the robot’s advice?: The impact of robot types and task types on people’s perception of a robot. In Proceedings of the Second International Conference on Human-Agent Interaction, Tsukuba, Japan, 29–31 October 2014; ACM: New York, NY, USA, 2014; pp. 137–140. [Google Scholar] [CrossRef]
  151. Tozadore, D.; Pinto, A.; Romero, R.; Trovato, G. Wizard of Oz vs. autonomous: Children’s perception changes according to robot’s operation condition. In Proceedings of the 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Lisbon, Portugal, 28 August–1 September 2017; pp. 664–669. [Google Scholar] [CrossRef]
  152. Van Straten, C.L.; Peter, J.; Kühne, R.; Barco, A. The wizard and I: How transparent teleoperation and self-description (do not) affect children’s robot perceptions and child-robot relationship formation. AI Soc. 2022, 37, 383–399. [Google Scholar] [CrossRef]
  153. Bartneck, C.; Suzuki, T.; Kanda, T.; Nomura, T. The influence of people’s culture and prior experiences with Aibo on their attitude towards robots. AI Soc. 2007, 21, 217–230. [Google Scholar] [CrossRef]
  154. De Graaf, M.M.A.; Ben Allouch, S.; van Dijk, J.A.G.M. Long-term evaluation of a social robot in real homes. Interact. Stud. Soc. Behav. Commun. Biol. Artif. Syst. 2016, 17, 461–490. [Google Scholar] [CrossRef] [Green Version]
  155. De Jong, C.; Peter, J.; Kühne, R.; Barco, A. Children’s acceptance of social robots: A narrative review of the research 2000–2017. Interact. Stud. Soc. Behav. Commun. Biol. Artif. Syst. 2019, 20, 393–425. [Google Scholar] [CrossRef]
  156. Nishio, S.; Ogawa, K.; Kanakogi, Y.; Itakura, S.; Ishiguro, H. Do robot appearance and speech affect people’s attitude? Evaluation through the Ultimatum Game. In Proceedings of the 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, 9–13 September 2012; pp. 809–814. [Google Scholar] [CrossRef]
  157. Ribi, F.; Yokoyama, A.; Turner, D. Comparison of children’s behavior toward Sony’s robotic dog AIBO and a real Dog: A pilot study. Anthrozoos Multidiscip. J. Interact. People Anim. 2008, 21, 245–256. [Google Scholar] [CrossRef]
  158. Sinnema, L.; Alimardani, M. The attitude of elderly and young adults towards a humanoid robot as a facilitator for social interaction. In Social Robotics; Springer International Publishing: Cham, Switzerland, 2019; pp. 24–33. [Google Scholar] [CrossRef]
  159. Tanaka, F.; Cicourel, A.; Movellan, J.R. Socialization between toddlers and robots at an early childhood education center. Proc. Natl. Acad. Sci. USA 2007, 104, 17954–17958. [Google Scholar] [CrossRef]
  160. Al-Taee, M.A.; Kapoor, R.; Garrett, C.; Choudhary, P. Acceptability of Robot Assistant in Management of Type 1 Diabetes in Children. Diabetes Technol. Ther. 2016, 18, 551–554. [Google Scholar] [CrossRef]
  161. Banthia, V.; Maddahi, Y.; May, M.; Blakley, D.; Chang, Z.; Gbur, A.; Tu, C.; Sepehri, N. Development of a graphical user interface for a socially interactive robot: A case study evaluation. In Proceedings of the 2016 IEEE 7th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), Vancouver, BC, Canada, 13–15 October 2016; pp. 1–8. [Google Scholar] [CrossRef]
  162. Ray, C.; Mondada, F.; Siegwart, R. What do people expect from robots? In Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France, 22–26 September 2008; pp. 3816–3821. [Google Scholar] [CrossRef] [Green Version]
  163. Wiese, E.; Weis, P.P.; Bigman, Y.; Kapsaskis, K.; Gray, K. It’s a Match: Task Assignment in Human–Robot Collaboration Depends on Mind Perception. Int. J. Soc. Robot. 2022, 14, 141–148. [Google Scholar] [CrossRef]
164. Baratgin, J.; Jamet, F. Le paradigme de “l’enfant mentor d’un robot ignorant et naïf” comme révélateur de compétences cognitives et sociales précoces chez le jeune enfant. In Proceedings of the WACAI 2021; Centre National de la Recherche Scientifique [CNRS]: Saint Pierre d’Oleron, France, 2021. [Google Scholar]
  165. Baratgin, J.; Jacquet, B.; Dubois-Sage, M.; Jamet, F. “Mentor-child and naive-pupil-robot” paradigm to study children’s cognitive and social development. In Proceedings of the Workshop: Interdisciplinary Research Methods for Child-Robot Relationship Formation, HRI-2021, Boulder, CO, USA, 8–11 March 2021. [Google Scholar]
  166. Masson, O.; Baratgin, J.; Jamet, F. NAO robot and the “endowment effect”. In Proceedings of the 2015 IEEE International Workshop on Advanced Robotics and its Social Impacts (ARSO), Lyon, France, 30 June–2 July 2015; pp. 1–6. [Google Scholar] [CrossRef]
  167. Masson, O.; Baratgin, J.; Jamet, F.; Ruggieri, F.; Filatova, D. Use a robot to serve experimental psychology: Some examples of methods with children and adults. In Proceedings of the International Conference on Information and Digital Technologies (IDT-2016), Rzeszow, Poland, 5–7 July 2016; pp. 190–197. [Google Scholar] [CrossRef]
168. De Graaf, M.M.A.; Ben Allouch, S. Exploring influencing variables for the acceptance of social robots. Robot. Auton. Syst. 2013, 61, 1476–1486. [Google Scholar] [CrossRef]
  169. Baxter, P.; De Jong, C.; Aarts, R.; de Haas, M.; Vogt, P. The Effect of Age on Engagement in Preschoolers’ Child-Robot Interactions. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, Vienna, Austria, 6–9 March 2017; Association for Computing Machinery: New York, NY, USA, 2017. HRI ’17. pp. 81–82. [Google Scholar] [CrossRef] [Green Version]
  170. Beran, T.; Ramirez-Serrano, A.; Kuzyk, R.; Fior, M.; Nugent, S. Understanding how children understand robots: Perceived animism in child-robot interaction. Int. J. Hum.-Comput. Stud. 2011, 69, 539–550. [Google Scholar] [CrossRef]
  171. Di Dio, C.; Manzi, F.; Peretti, G.; Cangelosi, A.; Harris, P.L.; Massaro, D.; Marchetti, A. Shall I trust you? From child-robot interaction to trusting relationships. Front. Psychol. 2020, 11, 469. [Google Scholar] [CrossRef] [Green Version]
  172. Flanagan, T.; Rottman, J.; Howard, L.H. Constrained Choice: Children’s and Adults’ Attribution of Choice to a Humanoid Robot. Cogn. Sci. 2021, 45, e13043. [Google Scholar] [CrossRef]
173. Leite, I.; Lehman, J. The Robot Who Knew Too Much: Toward Understanding the Privacy/Personalization Trade-Off in Child-Robot Conversation. In Proceedings of the 15th International Conference on Interaction Design and Children, Manchester, UK, 21–24 June 2016; Association for Computing Machinery: New York, NY, USA, 2016. Proceedings of the IDC ’16. pp. 379–387. [Google Scholar] [CrossRef]
  174. Martin, D.U.; MacIntyre, M.I.; Perry, C.; Clift, G.; Pedell, S.; Kaufman, J. Young children’s indiscriminate helping behavior toward a humanoid robot. Front. Psychol. 2020, 11, 239. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  175. Martin, D.U.; Perry, C.; MacIntyre, M.I.; Varcoe, L.; Pedell, S.; Kaufman, J. Investigating the nature of children’s altruism using a social humanoid robot. Comput. Hum. Behav. 2020, 104, 106149. [Google Scholar] [CrossRef]
  176. Okanda, M.; Taniguchi, K.; Wang, Y.; Itakura, S. Preschoolers’ and adults’ animism tendencies toward a humanoid robot. Comput. Hum. Behav. 2021, 118, 106688. [Google Scholar] [CrossRef]
  177. Pulido, J.C.; González, J.C.; Suárez-Mejías, C.; Bandera, A.; Bustos, P.; Fernández, F. Evaluating the child–robot interaction of the NAOTherapist platform in pediatric rehabilitation. Int. J. Soc. Robot. 2017, 9, 343–358. [Google Scholar] [CrossRef] [Green Version]
178. Serholt, S.; Basedow, C.; Barendregt, W.; Obaid, M. Comparing a humanoid tutor to a human tutor delivering an instructional task to children. In Proceedings of the 2014 IEEE-RAS International Conference on Humanoid Robots, Madrid, Spain, 18–20 November 2014; pp. 1134–1141. [Google Scholar] [CrossRef] [Green Version]
  179. Tozadore, D.C.; Pinto, A.M.H.; Ranieri, C.; Batista, M.R.; Romero, R. Tablets and humanoid robots as engaging platforms for teaching languages. In Proceedings of the 2017 Latin American Robotics Symposium (LARS) and 2017 Brazilian Symposium on Robotics (SBR), Curitiba, Brazil, 8–11 November 2017; pp. 1–6. [Google Scholar] [CrossRef]
  180. Zhang, Y.; Song, W.; Tan, Z.; Zhu, H.; Wang, Y.; Lam, C.M.; Weng, Y.; Hoi, S.P.; Lu, H.; Man Chan, B.S.; et al. Could social robots facilitate children with autism spectrum disorders in learning distrust and deception? Comput. Hum. Behav. 2019, 98, 140–149. [Google Scholar] [CrossRef]
  181. Choi, J.; Jongyun, L.; Han, J. Comparison of cultural acceptability for educational robots between Europe and Korea. J. Inf. Process. Syst. 2008, 4, 97–102. [Google Scholar] [CrossRef] [Green Version]
  182. Dang, J.; Liu, L. Do lonely people seek robot companionship? A comparative examination of the Loneliness—Robot anthropomorphism link in the United States and China. Comput. Hum. Behav. 2023, 141, 107637. [Google Scholar] [CrossRef]
  183. Eyssel, F.; Kuchenbrandt, D. Social categorization of social robots: Anthropomorphism as a function of robot group membership: Social categorization and social robots. Br. J. Soc. Psychol. 2012, 51, 724–731. [Google Scholar] [CrossRef]
  184. Haring, K.; Silvera-Tawil, D.; Watanabe, K.; Velonaki, M. The influence of robot appearance and interactive ability in HRI: A cross-cultural study. Proc. Soc. Robot. 2016, 9979, 392–401. [Google Scholar] [CrossRef]
  185. Li, D.; Rau, P.L.; Li, Y. A cross-cultural study: Effect of robot appearance and task. Int. J. Soc. Robot. 2010, 2, 175–186. [Google Scholar] [CrossRef]
  186. Abel, M.; Kuz, S.; Patel, H.J.; Petruck, H.; Schlick, C.M.; Pellicano, A.; Binkofski, F.C. Gender Effects in Observation of Robotic and Humanoid Actions. Front. Psychol. 2020, 11, 797. [Google Scholar] [CrossRef] [PubMed]
  187. Bryant, D.; Borenstein, J.; Howard, A. Why Should We Gender?: The Effect of Robot Gendering and Occupational Stereotypes on Human Trust and Perceived Competency. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, Cambridge, UK, 23–26 March 2020; ACM: New York, NY, USA, 2020; pp. 13–21. [Google Scholar] [CrossRef] [Green Version]
  188. Kraus, M.; Kraus, J.; Baumann, M.; Minker, W. Effects of gender stereotypes on trust and likability in spoken human-robot interaction. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018); European Language Resources Association (ELRA): Miyazaki, Japan, 2018. [Google Scholar]
  189. Kuchenbrandt, D.; Häring, M.; Eichberg, J.; Eyssel, F. Keep an eye on the task! How gender typicality of tasks influence human–robot interactions. In Proceedings of the Social Robotics; Ge, S.S., Khatib, O., Cabibihan, J.J., Simmons, R., Williams, M.A., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; Lecture Notes in Computer Science; pp. 448–457. [Google Scholar] [CrossRef]
  190. Lücking, P.; Rohlfing, K.; Wrede, B.; Schilling, M. Preschoolers’ engagement in social interaction with an autonomous robotic system. In Proceedings of the 2016 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob), Cergy-Pontoise, France, 19–22 September 2016; pp. 210–216. [Google Scholar] [CrossRef]
191. Robben, D.; Fukuda, E.; De Haas, M. The effect of gender on perceived anthropomorphism and intentional acceptance of a storytelling robot. In Proceedings of the Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, Stockholm, Sweden, 13–16 March 2023; Association for Computing Machinery: New York, NY, USA, 2023. HRI’23. pp. 495–499. [Google Scholar] [CrossRef]
  192. Sandygulova, A.; O’Hare, G.M. Investigating the impact of gender segregation within observational pretend play interaction. In Proceedings of the 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Christchurch, New Zealand, 7–10 March 2016; pp. 399–406. [Google Scholar] [CrossRef]
  193. Sandygulova, A.; O’Hare, G.M.P. Age- and Gender-Based Differences in Children’s Interactions with a Gender-Matching Robot. Int. J. Soc. Robot. 2018, 10, 687–700. [Google Scholar] [CrossRef]
194. Schermerhorn, P.; Scheutz, M.; Crowell, C.R. Robot social presence and gender: Do females view robots differently than males? In Proceedings of the 3rd ACM/IEEE International Conference on Human Robot Interaction, Amsterdam, The Netherlands, 12–15 March 2008; Association for Computing Machinery: New York, NY, USA, 2008. HRI’08. pp. 263–270. [Google Scholar] [CrossRef]
  195. Siegel, M.; Breazeal, C.; Norton, M.I. Persuasive robotics: The influence of robot gender on human behavior. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 10–15 October 2009; pp. 2563–2568. [Google Scholar] [CrossRef]
  196. Suzuki, T.; Nomura, T. Gender preferences for robots and gender equality orientation in communication situations. AI Soc. 2022. [Google Scholar] [CrossRef]
  197. Tung, F.W. Influence of Gender and Age on the Attitudes of Children towards Humanoid Robots. In Proceedings of the Human-Computer Interaction. Users and Applications; Jacko, J.A., Ed.; Springer: Berlin/Heidelberg, Germany, 2011; Lecture Notes in Computer Science; pp. 637–646. [Google Scholar] [CrossRef]
  198. Kędzierski, J.; Muszyński, R.; Zoll, C.; Oleksy, A.; Frontkiewicz, M. EMYS—Emotive head of a social robot. Int. J. Soc. Robot. 2013, 5, 237–249. [Google Scholar] [CrossRef] [Green Version]
  199. Bernstein, D.; Crowley, K. Searching for signs of intelligent life: An investigation of young children’s beliefs about robot intelligence. J. Learn. Sci. 2008, 17, 225–247. [Google Scholar] [CrossRef]
  200. Heerink, M. Exploring the influence of age, gender, education and computer experience on robot acceptance by older adults. In Proceedings of the 2011 6th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Lausanne, Switzerland, 6–9 March 2011; ACM: New York, NY, USA, 2011; pp. 147–148. [Google Scholar] [CrossRef] [Green Version]
  201. Nakano, T.; Tanaka, K.; Endo, Y.; Yamane, Y.; Yamamoto, T.; Nakano, Y.; Ohta, H.; Kato, N.; Kitazawa, S. Atypical gaze patterns in children and adults with autism spectrum disorders dissociated from developmental changes in gaze behavior. Proc. Biol. Sci. 2010, 277, 2935–2943. [Google Scholar] [CrossRef] [PubMed]
  202. Van Straten, C.L.; Peter, J.; Kühne, R. Child-robot relationship formation: A narrative review of empirical research. Int. J. Soc. Robot. 2020, 12, 325–344. [Google Scholar] [CrossRef] [Green Version]
  203. Benenson, J.F.; Apostoleris, N.H.; Parnass, J. Age and sex differences in dyadic and group interaction. Dev. Psychol. 1997, 33, 538–543. [Google Scholar] [CrossRef]
  204. Wood, W.; Rhodes, N. Sex Differences in Interaction Style in Task Groups. In Gender, Interaction, and Inequality; Springer: Berlin/Heidelberg, Germany, 1992; pp. 97–121. [Google Scholar] [CrossRef]
  205. Martinez, M.A.; Osornio, A.; Halim, M.L.D.; Zosuls, K.M. Gender: Awareness, identity, and stereotyping. In Encyclopedia of Infant and Early Childhood Development, 2nd ed.; Benson, J.B., Ed.; Elsevier: Oxford, UK, 2020; pp. 1–12. [Google Scholar] [CrossRef]
  206. Mehta, C.M.; Strough, J. Sex segregation in friendships and normative contexts across the life span. Dev. Rev. 2009, 29, 201–220. [Google Scholar] [CrossRef]
  207. Berghe, R.; Haas, M.; Oudgenoeg-Paz, O.; Krahmer, E.; Verhagen, J.; Vogt, P.; Willemsen, B.; Wit, J.; Leseman, P. A toy or a friend? Children’s anthropomorphic beliefs about robots and how these relate to second-language word learning. J. Comput. Assist. Learn. 2021, 37, 396–410. [Google Scholar] [CrossRef]
  208. Nomura, T.; Suzuki, T.; Kanda, T.; Kato, K. Measurement of negative attitudes toward robots. Interact. Stud. 2006, 7, 437–454. [Google Scholar] [CrossRef]
  209. Gaertner, S.L.; Dovidio, J.F. Reducing Intergroup Bias: The Common Ingroup Identity Model; Psychology Press: New York, NY, USA, 2000; pp. 13–212. [Google Scholar]
  210. Levine, M.; Prosser, A.; Evans, D.; Reicher, S. Identity and emergency intervention: How social group membership and inclusiveness of group boundaries shape helping behavior. Personal. Soc. Psychol. Bull. 2005, 31, 443–453. [Google Scholar] [CrossRef] [PubMed]
  211. Annaz, D.; Campbell, R.; Coleman, M.; Milne, E.; Swettenham, J. Young children with autism spectrum disorder do not preferentially attend to biological motion. J. Autism Dev. Disord. 2012, 42, 401–408. [Google Scholar] [CrossRef] [PubMed]
  212. Mori, M. The uncanny valley. Energy 1970, 7, 33–35. [Google Scholar]
  213. Mori, M.; MacDorman, K.F.; Kageki, N. The uncanny valley [from the field]. IEEE Robot. Autom. Mag. 2012, 19, 98–100. [Google Scholar] [CrossRef]
  214. Gray, K.; Wegner, D.M. Feeling robots and human zombies: Mind perception and the uncanny valley. Cognition 2012, 125, 125–130. [Google Scholar] [CrossRef] [PubMed]
  215. Kim, B.; Bruce, M.; Brown, L.; de Visser, E.; Phillips, E. A comprehensive approach to validating the uncanny valley using the anthropomorphic RoBOT (ABOT) database. In Proceedings of the 2020 Systems and Information Engineering Design Symposium (SIEDS), Charlottesville, VA, USA, 24 April 2020; pp. 1–6. [Google Scholar] [CrossRef]
  216. Lee, M.K.; Forlizzi, J.; Rybski, P.; Crabbe, F.; Chung, W.; Finkle, J.; Glaser, E.; Kiesler, S. The snackbot: Documenting the design of a robot for long-term human-robot interaction. In Proceedings of the 4th ACM/IEEE International Conference on Human Robot Interaction, La Jolla, CA, USA, 9–13 March 2009; ACM: New York, NY, USA, 2009; pp. 7–14. [Google Scholar] [CrossRef]
  217. Spatola, N.; Wudarczyk, O. Ascribing emotions to robots: Explicit and implicit attribution of emotions and perceived robot anthropomorphism. Comput. Hum. Behav. 2021, 124, 106934. [Google Scholar] [CrossRef]
  218. Mathur, M.B.; Reichling, D.B. Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley. Cognition 2016, 146, 22–32. [Google Scholar] [CrossRef] [Green Version]
  219. Mara, M.; Appel, M.; Gnambs, T. Human-like robots and the uncanny valley: A meta-analysis of user responses based on the godspeed scales. Z. Psychol. 2022, 230, 33–46. [Google Scholar] [CrossRef]
  220. Fong, T.; Nourbakhsh, I.; Dautenhahn, K. A survey of socially interactive robots. Robot. Auton. Syst. 2003, 42, 143–166. [Google Scholar] [CrossRef] [Green Version]
  221. MacDorman, K.F.; Ishiguro, H. The uncanny advantage of using androids in cognitive and social science research. Interact. Stud. Soc. Behav. Commun. Biol. Artif. Syst. 2006, 7, 297–337. [Google Scholar] [CrossRef]
  222. Laakasuo, M.; Palomäki, J.; Köbis, N. Moral uncanny valley: A robot’s appearance moderates how its decisions are judged. Int. J. Soc. Robot. 2021, 13, 1679–1688. [Google Scholar] [CrossRef]
  223. MacDorman, K. Subjective ratings of robot video clips for human likeness, familiarity, and eeriness: An exploration of the uncanny valley. In ICCS/CogSci-2006 Long Symposium: Toward Social Mechanisms of Android Science; Indiana University: Bloomington, IN, USA, 2006. [Google Scholar]
  224. Woods, S. Exploring the design space of robots: Children’s perspectives. Interact. Comput. 2006, 18, 1390–1418. [Google Scholar] [CrossRef]
  225. Lewkowicz, D.J.; Ghazanfar, A.A. The development of the uncanny valley in infants. Dev. Psychobiol. 2012, 54, 124–132. [Google Scholar] [CrossRef] [Green Version]
  226. Brink, K.A.; Gray, K.; Wellman, H.M. Creepiness creeps in: Uncanny valley feelings are acquired in childhood. Child Dev. 2019, 90, 1202–1214. [Google Scholar] [CrossRef]
  227. Baxter, P.; Kennedy, J.; Senft, E.; Lemaignan, S.; Belpaeme, T. From characterising three years of HRI to methodology and reporting recommendations. In Proceedings of the 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Christchurch, New Zealand, 7–10 March 2016; pp. 391–398. [Google Scholar] [CrossRef] [Green Version]
  228. Torta, E.; van Dijk, E.; Ruijten, P.A.M.; Cuijpers, R.H. The Ultimatum Game as Measurement Tool for Anthropomorphism in Human–Robot Interaction. In Proceedings of the Social Robotics; Herrmann, G., Pearson, M.J., Lenz, A., Bremner, P., Spiers, A., Leonards, U., Eds.; Springer International Publishing: Cham, Switzerland, 2013; pp. 209–217. [Google Scholar] [CrossRef]
  229. Amirova, A.; Rakhymbayeva, N.; Yadollahi, E.; Sandygulova, A.; Johal, W. 10 years of human-NAO interaction research: A scoping review. Front. Robot. AI 2021, 8, 744526. [Google Scholar] [CrossRef]
  230. Sandoval, E.B.; Brandstatter, J.; Yalcin, U.; Bartneck, C. Robot likeability and reciprocity in human robot interaction: Using ultimatum game to determinate reciprocal likeable robot strategies. Int. J. Soc. Robot. 2021, 13, 851–862. [Google Scholar] [CrossRef]
231. Mubin, O.; Henderson, J.; Bartneck, C. You just do not understand me! Speech recognition in human robot interaction. In Proceedings of the 23rd IEEE International Symposium on Robot and Human Interactive Communication, Edinburgh, UK, 25–29 August 2014; pp. 637–642. [Google Scholar] [CrossRef] [Green Version]
  232. Di Dio, C.; Manzi, F.; Itakura, S.; Kanda, T.; Ishiguro, H.; Massaro, D.; Marchetti, A. It does not matter who you are: Fairness in pre-schoolers interacting with human and robotic partner. Int. J. Soc. Robot. 2020, 12, 1045–1059. [Google Scholar] [CrossRef]
  233. Belpaeme, T.; Baxter, P.; de Greeff, J.; Kennedy, J.; Read, R.; Looije, R.; Neerincx, M.; Baroni, I.; Zelati, M.C. Child-Robot Interaction: Perspectives and Challenges. In Proceedings of the Social Robotics; Herrmann, G., Pearson, M.J., Lenz, A., Bremner, P., Spiers, A., Leonards, U., Eds.; Springer International Publishing: Cham, Switzerland, 2013; pp. 452–459. [Google Scholar] [CrossRef]
  234. Fisher, R. Social desirability bias and the validity of indirect questioning. J. Consum. Res. 1993, 20, 303–315. [Google Scholar] [CrossRef]
235. Phillips, E.; Zhao, X.; Ullman, D.; Malle, B.F. What is human-like? Decomposing robots’ human-like appearance using the Anthropomorphic roBOT (ABOT) database. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 5–8 March 2018; Association for Computing Machinery: New York, NY, USA, 2018. HRI ’18. pp. 105–113. [Google Scholar] [CrossRef]
236. Li, J. The benefit of being physically present: A survey of experimental works comparing copresent robots, telepresent robots and virtual agents. Int. J. Hum.-Comput. Stud. 2015, 77, 23–37. [Google Scholar] [CrossRef]
  237. Leyzberg, D.; Spaulding, S.; Toneva, M.; Scassellati, B. The physical presence of a robot tutor increases cognitive learning gains. In Proceedings of the Annual Meeting of the Cognitive Science Society, Sapporo, Japan, 1–4 August 2012; Volume 34. [Google Scholar]
  238. Kose-Bagci, H.; Ferrari, E.; Dautenhahn, K.; Syrdal, D.S.; Nehaniv, C.L. Effects of embodiment and gestures on social interaction in drumming games with a humanoid robot. Adv. Robot. 2009, 23, 1951–1996. [Google Scholar] [CrossRef]
  239. Roesler, E.; Manzey, D.; Onnasch, L. Embodiment Matters in Social HRI Research: Effectiveness of Anthropomorphism on Subjective and Objective Outcomes. In ACM Transactions on Human-Robot Interaction; ACM: New York, NY, USA, 2022. [Google Scholar] [CrossRef]
240. Richards, D.; Vythilingam, R.; Formosa, P. A principlist-based study of the ethical design and acceptability of artificial social agents. Int. J. Hum.-Comput. Stud. 2023, 172, 102980. [Google Scholar] [CrossRef]
  241. Malle, B.; Fischer, K.; Young, J.; Moon, A.; Collins, E. Trust and the discrepancy between expectations and actual capabilities of social robots. In Human-Robot Interaction: Control, Analysis, and Design; Cambridge Scholars Publishing: Newcastle upon Tyne, UK, 2021; pp. 3–23. [Google Scholar]
  242. Bickmore, T.W.; Puskar, K.; Schlenk, E.A.; Pfeifer, L.M.; Sereika, S.M. Maintaining reality: Relational agents for antipsychotic medication adherence. Interact. Comput. 2010, 22, 276–288. [Google Scholar] [CrossRef]
  243. Tahan, K.; Cayrier, A.; Baratgin, J.; N’Kaoua, B. ZORA Robot to Assist a Caregiver in Prospective Memory Tasks. (Accepted under Minor Revision), Applied Neuropsychology: Adult. Available online: https://hidrive.ionos.com/lnk/1SLOFXwX (accessed on 6 July 2023).
  244. Scassellati, B.; Admoni, H.; Matarić, M. Robots for use in autism research. Annu. Rev. Biomed. Eng. 2012, 14, 275–294. [Google Scholar] [CrossRef] [Green Version]
  245. Scibilia, A.; Pedrocchi, N.; Fortuna, L. Modeling Nonlinear Dynamics in Human–Machine Interaction. IEEE Access 2023, 11, 58664–58678. [Google Scholar] [CrossRef]
  246. Roesler, E.; Manzey, D.; Onnasch, L. A meta-analysis on the effectiveness of anthropomorphism in human-robot interaction. Sci. Robot. 2021, 6, eabj5425. [Google Scholar] [CrossRef]
  247. Broadbent, E. Interactions with robots: The truths we reveal about ourselves. Annu. Rev. Psychol. 2017, 68, 627–652. [Google Scholar] [CrossRef] [Green Version]
Table 1. Robotic factors of anthropomorphism.
FactorArticleVariableEffectEffect p-ValueRobotSample SizeMean Age (Standard Deviation)Country
AppearanceBanks [34]Mentalizing explanationshuman/android > low/mid robots p < 0.001 OZOBOT, COZMO, NAO469 22.3 ( 7.06 ) USA
Barco et al. [67]Anthropomorphism scoreNAO > COZMO > PLEOp < 0.05NAO, PLEO, COZMO35 9.91 ( 1.70 ) The Netherlands
Broadbent et al. [68]Preference; mind attributionface > no facep < 0.01; p < 0.001PEOPLEBOT30 22.5 ( 4.58 ) USA
Burdett et al. [69]Will to play withiconic > humanoid > abstract p < 0.001 NAO, TITAN, MINDAR110 5.80 ( 1.41 ) ; 10.65 ( 1.41 ) ; 30.60 ( 3.01 ) UK
Carpinella et al. [70]Perceived warmth; Competence; Comforthuman-like > machine-like p < 0.001 Team-built robot face252not specifiednot specified
Disalvo et al. [71]Perception of humannessmany facial features > none p < 0.01 48 Different robots60not specifiednot specified
Goldman et al. [72]Biological properties attributionNAO = DASHnot significantNAO, DASH8942 months (3 y.o.); 65 months (5 y.o.)USA, Canada
Haring et al. [73]Intelligenceandroid > humanoid > abstract p < 0.001 GEMINOID-F, ROBI, KEEPON335 22.2 ( 4.03 ) (Japan); 25.2 ( 8.92 ) (Australia)Japan, Australia
Kiesler et al. [74]Personality attributionpresent > projected p < 0.05 NURSEBOT11326USA
Krach et al. [75]Fun; Perceived Intelligenceanthropomorphic robot > functional p < 0.01 BARTHOC JR, LEGO MINDSTORMS20 24.5 ( 2.97 ) Germany
Malle et al. [76]Blamehumanoid = human > mechanical p < 0.05 Mechanical or humanoid633 34.4 ( 11 ) not specified
Manzi et al. [77]Mental state attributionNAO > ROBOVIE p < 0.001 NAO, ROBOVIE1895, 7 and 9Italy
Manzi et al. [78]Mental state attributionPEPPER > NAO p < 0.01 NAO, PEPPER174 20.22 ( 1.8 ) (NAO); 21.76 ( 4.42 ) (PEPPER)Italy
Nijssen et al. [65]Sacrificehumanness + p = 0.001 GEMINOID, KOJIRO5419.43(2.69)The Netherlands
Nijssen et al. [40]Sharingiconic = abstract robotnot significantNAO, LEGO MINDSTORMS120 4.90 ( 0.42 ) ; 8.35 ( 0.50 ) The Netherlands
Onnasch and Hildebrandt [79]Number of fixationsanthropomorphic > non-anthropomorphic p < 0.001 SAWYER40 24.47 ( 4.34 ) Germany
Powers and Kiesler [80]Cooperationhuman-like > machine like p < 0.05 Animated Robot98not specifiednot specified
Riek et al. [81]Empathyhumanoid > mechanical p < 0.05 ROOMBA, AUR, ANDREW, ALICIA120 29.4 ( 9.9 ) UK
Sacino et al. [82]Inversion effect for robotshigh level of humanness > low-level p < 0.001 Multiple robots99, 94, 109 22.2 ( 2.26 ) ; 21.8 ( 2.82 ) ; 22.1 ( 2.92 ) Italy
Sommer et al. [83]Perceived moral worthNAO = PLEOnot significantNAO, PLEO126 7.61 ( 1.87 ) Australia
Tung [84]Social and physical attractionanthropomorphic > non-anthropomorphic p < 0.001 12 Robots (pictures), 9 Robots (videos)267 12.1 Taiwan
Zanatto et al. [85]Change rateprimed robot > nonprimed p < 0.01 SCITOS G5, iCUB15not specifiedUK
Zanatto et al. [22]Likability; Trust; AnthropomorphismNAO > BAXTER p < 0.001 NAO, BAXTER30 23.20 ( 2.10 ) UK
Zhao and Malle [26]Perspective takinghead/face > no head/no face; ERICA > NAO and BAXTER > THYMIO p < 0.01 NAO, BAXTER, ERICA, THYMIO1729, 1431 33.56 ( 12.18 ) ; 32.14 ( 12.40 ) not specified
Zlotowski et al. [36]Likability; Eerinessiconic > android; android > iconic p < 0.05 GEMINOID HI-2, ROBOVIE R258 21.47 Japan
Behavior | Baxter et al. [86] | Enjoyment | perceived competence + (personalized) | p = 0.001 | NAO | 59 | not specified | UK
Behavior | Boladeras et al. [87] | Preferences | slow > agitated | not specified | PLEO | 4 | not specified | Spain
Behavior | Breazeal et al. [88] | Preference; Gaze | attentive = non-attentive; attentive > non-attentive | not significant; p < 0.01 | DRAGONBOTS | 17 | 4.2 (0.79) | USA
Behavior | Henkemans et al. [89] | Perceived fun | personalized > neutral | p < 0.05 | NAO | 45 | 11.04 (1.71); 12.55 (1.04) | The Netherlands
Behavior | Horstmann and Krämer [90] | Perceived sociability; Competence | high level of interaction > low level | p < 0.01 | NAO | 162 | 22.85 (3.88) | not specified
Behavior | Huang and Thomaz [91] | Intelligence | joint attention > without joint attention | p < 0.001 | SIMON | 20 | not specified | USA
Behavior | Kanda et al. [92] | Appreciation | social behavior > non-social behavior | p < 0.01 | ROBOVIE, LEGO MINDSTORMS | 31 | not specified | Japan
Behavior | Kruijff-Korbayová et al. [93] | Perceived friendship | familiar > neutral | p < 0.05 | NAO | 19 | not specified | Italy
Behavior | Kumar et al. [94] | Satisfaction and trust | polite > rude | p < 0.001 | TurtleBot3 Burger, robotic arm | 203 | 26 (young adults) vs. 70 (seniors) | not specified
Behavior | Li et al. [95] | Fun; Likability | with eye gaze > without | p < 0.001 | Alpha2 | 27 | 21.7 (1.41) | China
Behavior | Looije et al. [96] | Smiling; Questionnaire | affective > non-affective; affective = non-affective | p < 0.05 | NAO | 18 | 9 | The Netherlands
Behavior | Manzi et al. [97] | Duration of fixation on the face | with eye contact > without | p < 0.01 | ROBOVIE | 32 | not specified | not specified
Behavior | Nitsch and Glassen [98] | Interaction score | animated robot > apathetic robot | p < 0.001 | NAO | 48 | not specified | Germany
Behavior | Obaid et al. [99] | Proximity | standing robot > sitting | p < 0.01 | NAO | 22 | 28.6 (10.6) | New Zealand
Behavior | Okumura et al. [100] | Perceived intelligence; Emotion attribution | interactive > still robot | p < 0.01 | SOTA | 36 | 62.08 months (6.42) | Japan
Behavior | Rossignoli et al. [101] | Attribution of mental states | earnest robot > misleading | p < 0.01 | NAO | 126 | not specified | Italy
Behavior | Tozadore et al. [102] | Correct answers | high interactivity > low | not specified | NAO | 30 | not specified | Brazil
Behavior | Tung [84] | Social and physical attraction | movement > static | p < 0.001 | 12 robots (pictures), 9 robots (videos) | 311 | 11.8 | Taiwan
Behavior | Wigdor et al. [103] | Free play selection; Perceived human-likeness | fillers = no fillers; fillers > no fillers | not significant; p < 0.001 | NAO | 26 | 9.32 | The Netherlands
Behavior | Simmons and Knight [104] | Diversity in motions | mimicry > control | p < 0.001 | KEEPON | 45 | not specified | Portugal
Behavior | Waytz et al. [43] | Anthropomorphism | unpredictable > predictable | p < 0.05 | ASIMO | 55 | 34.89 (12.32) | USA
Behavior | Zanatto et al. [85] | Change rate | social gaze > non-social | p < 0.001 | SCITOS G5, iCUB | 15 | not specified | UK
Behavior | Zhao and Malle [26] | Perspective taking | reach > gaze > side-look | p < 0.001 | NAO, BAXTER, ERICA, THYMIO | 1219 | 33.58 (11.36) | not specified
Behavior | Zlotowski et al. [36] | Likability | positive behavior > negative | p = 0.001 | GEMINOID HI-2, ROBOVIE R2 | 58 | 21.47 | Japan
Movement | Castro-González et al. [105] | Likability | soft movement > mechanical | p < 0.01 | BAXTER | 42 | not specified | USA
Movement | Kuz et al. [106] | Movement prediction | human > robotic | p < 0.05 | Robotic arm | 24 | 25.21 (3.80) | Germany
Movement | Salem et al. [107] | Mental state attribution; Likability | gesture > no gesture | p < 0.01 | HONDA | 62 | 30.90 (9.82) | Germany
Movement | Tremoulet and Feldman [108] | Animacy ratings | aligned > misaligned; fast > slow; large direction change > small | p < 0.01 | None | 34 | not specified | USA
Voice | Eyssel et al. [109] | Likability | human > robot voice | p = 0.01 | FLOBI | 58 | 22.98 (2.81) | Germany
Voice | Flanagan et al. [37] | Mental state attribution | NAO = Alexa | p < 0.05 | NAO, ROOMBA, Alexa | 127 | 7.50 (2.27) | USA
Voice | Kuriki et al. [110] | Perceived humanness; Positive feelings | human voice > artificial | p < 0.001 | not specified | 14 | 28.9 | Japan
Voice | Li et al. [95] | Fun; Likability | human-like voice > non-human-like | p < 0.001 | Alpha2 | 27 | 21.7 (1.41) | China
Voice | Masson et al. [111] | Endowment effect | vocal intonation > non-vocal | not specified | NAO | 30 | not specified | France
Voice | Niculescu et al. [112] | Likability | high pitch > low | p < 0.001 | OLIVIA, CYNTHIA | 28 | not specified | Singapore
Voice | Tielman et al. [113] | Expressions; Valence | affective > non-affective | p < 0.05 | NAO | 18 | 8.89 (0.81) | The Netherlands
Voice | Torre et al. [114] | Investment | synthetic voice > natural (generous condition); natural > synthetic (mean condition) | p < 0.05 | NAO | 120 | not specified | UK
Table 2. Situational factors of anthropomorphism.
Factor | Article | Variable | Effect | Effect p-Value | Robot | Sample Size | Mean Age (Standard Deviation) | Country
Anthropomorphic framing | Barchard et al. [136] | Positive feelings | social competence score + | p < 0.001 | ROBOVIE, NAO, PR2, DRAGONBOT | 296 | 37.39 (11.50) | USA
Anthropomorphic framing | Darling et al. [137] | Reluctance to hit | story > no story | p < 0.05 | HEXBUG NANO | 101 | 29 (9.7) | USA
Anthropomorphic framing | Kory Westlund et al. [138] | Eye gaze | friend > machine | p < 0.05 | TEGA | 22 | 5.04 (1.23) | USA
Anthropomorphic framing | Mara and Appel [139] | Perceived human-likeness and attractiveness; Perceived eeriness | narrative > non-narrative; narrative < non-narrative | p < 0.05; p = 0.001 | TELENOID | 72 | 31.24 (11.56) | Austria
Anthropomorphic framing | Mou et al. [140] | Trust | high-level ToM > low-level | p < 0.05 | PEPPER | 32 | not specified | UK
Anthropomorphic framing | Nijssen et al. [65] | Sacrifice | anthropomorphic framing < neutral | p < 0.01 | GEMINOID, KOJIRO | 54 | 19.43 (2.69) | The Netherlands
Anthropomorphic framing | Nijssen et al. [40] | Sharing | affective robot > non-affective | p < 0.05 | NAO, LEGO MINDSTORMS | 120 | 4.90 (0.42) (4–5 y.o.); 8.35 (0.50) (8–9 y.o.) | The Netherlands
Anthropomorphic framing | Nijssen et al. [141] | Socially mindful choices | anthropomorphic framing = neutral | not significant | KOJIRO | 128 | 26.54 (11.10) | The Netherlands
Anthropomorphic framing | Onnasch and Roesler [142] | Anthropomorphism | anthropomorphic framing = neutral | not significant | NAO | 40 | 26.5 (7.58) (I); 25.83 (6.67) (II) | Germany
Anthropomorphic framing | Rosenthal-von der Pütten et al. [143] | Likability; Anthropomorphism | story > no story | p < 0.001 | Papero, Icat, GEMINOID, HRP-4c, Justin, Mika | 249 | 29.64 (10.62) | not specified
Anthropomorphic framing | Ruocco et al. [144] | Investment | high-level ToM > low-level | p < 0.05 | PEPPER | 32 | 23.7 | not specified
Anthropomorphic framing | Schömbs et al. [145] | Likability; Perceived competence | anthropomorphic framing = technical | not significant | PEPPER, PANDA | 180 | 28.06 (5.19) | Germany
Anthropomorphic framing | Söderlund et al. [146] | Perceived quality | high ToM > low ToM | p < 0.05 | HIWONDER | 51 | 30.54 (I); 31.43 (II) | Sweden
Anthropomorphic framing | Sturgeon et al. [147] | Intelligence | ToM > no ToM | p < 0.01 | NAO | 53 | 20–79 y.o. | not specified
Degree of autonomy | Chernyak and Gary [148] | Emotional state attribution | autonomous > controlled | p < 0.05 | AIBO | 80 | 5.50 (0.30) (5 y.o.); 7.35 (0.36) (7 y.o.) | “mostly Euro-American”
Degree of autonomy | Haas et al. [149] | Likability | remotely controlled = autonomous | not significant | NAO | 20 | 7.75 (0.65) | The Netherlands
Degree of autonomy | Lee et al. [150] | Social presence; Trust | teleoperated > autonomous; autonomous > teleoperated | p < 0.05 | RA-I | 30 | not specified | South Korea
Degree of autonomy | Tozadore et al. [151] | Perceived intelligence; Preference | autonomous > teleoperated | p < 0.05 | NAO | 82 | 9.36 (1.24) | Brazil
Degree of autonomy | van Straten et al. [152] | Perceived autonomy; Anthropomorphism | covert teleoperation > overt | p < 0.001 | NAO | 168 | 9.02 (0.71) | The Netherlands
Frequency of interaction | Bartneck et al. [153] | Positive attitude | interaction + | p < 0.01 | AIBO | 467 | not specified | China, Germany, Japan, Mexico, The Netherlands, UK, USA
Frequency of interaction | Baxter et al. [86] | Enjoyment | Interaction 1 = Interaction 3 | not significant | NAO | 59 | not specified | UK
Frequency of interaction | de Graaf et al. [154] | Attitude toward robots | Interaction 6 > Interaction 1 | p < 0.01 | KAROTZ | 102 | 37.74 (16.87) | The Netherlands
Frequency of interaction | de Jong et al. [155] | Anxiety | pre- > post-interaction | p < 0.001 | NAO | 52 | 69 (7) (elderly); 22 (3) (students) | The Netherlands
Frequency of interaction | Kim et al. [39] | Perception of spirit | Interactions 1 and 2 > Interaction 3 | p < 0.001 | 251 different robots | 41 | 20 (2.97) | USA
Frequency of interaction | Nishio et al. [156] | Acceptance rate for android | after interaction > before | p < 0.05 | ROBOVIE R2 and GEMINOID HI-1 | 21 | 21.2 (2.56) | Japan
Frequency of interaction | Ribi et al. [157] | Frequency of interaction | time + | not specified | AIBO | 14 | not specified | Switzerland
Frequency of interaction | Sinnema and Alimardani [158] | Anxiety | pre- > post-interaction | p < 0.001 | NAO | 52 | 69 (7) (elderly); 22 (3) (students) | The Netherlands
Frequency of interaction | Tanaka et al. [159] | Quality of interaction | time − | p < 0.05 | QRIO | not specified | 18–24 months | USA
Frequency of interaction | Zlotowski et al. [36] | Eeriness | Interaction 1 > Interaction 3 | p = 0.05 | GEMINOID HI-2, ROBOVIE R2 | 58 | 21.47 | Japan
Robot role | Al-Taee et al. [160] | Acceptability level | companion, education teacher > calculator | not specified | NAO | 37 | 6–16 y.o. | UK
Robot role | Banthia et al. [161] | Enjoyment | storyteller > interaction partner (3–5 y.o.); storyteller < interaction partner (5–8 y.o.) | not specified | ZENO | not specified | 3–13 y.o. | Canada
Robot role | Burdett et al. [69] | Willingness to be prayed for by robots | young > older children, adults | p = 0.001 | NAO, TITAN, MINDAR | 110 | 5.80 (1.41) (I); 10.65 (1.41) (II); 30.60 (3.01) (III) | UK
Robot role | Horstmann and Krämer [90] | Perceived sociability | assistant > competitor | p < 0.05 | NAO | 162 | 22.85 (3.88) | not specified
Robot role | Kory Westlund et al. [138] | Gaze time | friend > machine | p < 0.05 | TEGA | 110 | 5.04 (1.23) | USA
Robot role | Ray et al. [162] | Acceptability for cooking; for cleaning | no > yes; yes > no | not specified | ROBOX, ALICES | 240 | not specified | Switzerland
Table 3. Human factors of anthropomorphism.
Factor | Article | Variable | Effect | Effect p-Value | Robot | Sample Size | Mean Age (Standard Deviation) | Country
Age | Al-Taee et al. [160] | Acceptability level | young > old children | p < 0.001 | NAO | 37 | 6–16 y.o. | UK
Age | Banthia et al. [161] | Enjoyment | storyteller > interaction partner (3–5 y.o.); storyteller < interaction partner (5–8 y.o.) | not specified | ZENO | not specified | 3–8 y.o. | Canada
Age | Baxter et al. [169] | Gaze time | young > old | p < 0.05 | NAO | 32 | 41.47 months (4.74) | The Netherlands
Age | Beran et al. [170] | Mental state attribution | young > old children | p < 0.01 | Robotic arm | 184 | 8.18 | Canada
Age | Burdett et al. [69] | Helpfulness; Kindness | young > older children, adults; children > adults | p < 0.001 | NAO, TITAN, MINDAR | 110 | 5.80 (1.41) (4–8 y.o.); 10.65 (1.41) (9–13 y.o.); 30.60 (3.01) (adults) | UK
Age | Di Dio et al. [171] | Trust | human > robot (3 y.o.); robot > human (7 y.o.) | p < 0.05 | NAO | 94 | not specified | Italy
Age | Flanagan et al. [172] | Free choice attribution | human > robot (adults); robot = human (children) | p < 0.05 (adults); not significant (children) | ROBOVIE | 32 (children), 60 (adults) | 5.72 (0.68) (children); 38.6 (11.39) (adults) | USA
Age | Flanagan et al. [37] | Mind attribution | young children > old | p < 0.001 | NAO, ROOMBA, Alexa | 127 | 7.50 (2.27) | USA
Age | Goldman et al. [72] | Biological properties attribution | 3 y.o. > 5 y.o. | p < 0.001 | NAO, DASH | 44 (3 y.o.), 45 (5 y.o.) | 42 months (3 y.o.); 65 months (5 y.o.) | USA and Canada
Age | Kahn et al. [122] | Mental state attribution | 9–12 y.o. > 15 y.o. | p < 0.05 | ROBOVIE | 90 | not specified | USA
Age | Kumar et al. [94] | Trust | seniors > young adults | p < 0.001 | TurtleBot3 Burger, robotic arm | 203 | 26.16 (4.07) (young); 71.61 (4.09) (old) | USA
Age | Leite and Lehman [173] | Affective response | young children = old | not significant | Abstract robot | 28 | 6.7 (1.82) | not specified
Age | Manzi et al. [77] | Mental state attribution | 5 y.o. > 7–9 y.o. | p < 0.001 | NAO, ROBOVIE | 189 | 69.52 months (3.31) (5 y.o.); 92.65 (3.52) (7 y.o.); 116.9 (4.17) (9 y.o.) | Italy
Age | Martin et al. [174] | Helping | high autonomy = low autonomy | p < 0.05 | NAO | 82 | 41.30 (3.27) | Australia
Age | Martin et al. [175] | Latency to help | looks at target < looks away | p < 0.001 | NAO | 40 | 43.14 (3.64) | Australia
Age | Nijssen et al. [40] | Anthropomorphism | young children > old | p < 0.001 | NAO, LEGO MINDSTORMS | 120 | 4.90 (0.42) (4–5 y.o.); 8.35 (0.50) (8–9 y.o.) | The Netherlands
Age | Okanda et al. [176] | Anthropomorphism | 3 y.o. > 5 y.o. and adults | p < 0.05 | KIROBO | 79 | 42.35 months (3.36) (3 y.o.); 63.42 (2.84) (5 y.o.); 25.36 (8.32) (adults) | Japan
Age | Pulido et al. [177] | Willingness to have this robot at home | 98% of children want it at home | not specified | NAO | 120 | 7.90 (1.4) | Spain
Age | Serholt et al. [178] | Success rate | robot = human | not significant | NAO | 27 | 13 (1.4) | Sweden
Age | Sinnema and Alimardani [158] | Anxiety post-interaction; Usefulness | old > young; old > young | all p < 0.05 | NAO | 52 | 22 (3) (young); 69 (7) (old) | The Netherlands
Age | Sommer et al. [83] | Moral concern for PLEO | age − | p < 0.05 | NAO, PLEO | 126 | 7.61 (1.87) | Australia
Age | Tozadore et al. [179] | Enjoyment; Interest in the other platform | tablet = robot; robot > tablet | p > 0.05; p < 0.01 | NAO | 22 | 10.90 (0.53) | Brazil
Age | Simmons and Knight [104] | Time of interaction | young = old | p > 0.05 | KEEPON | 45 | 6.8 (1.8) | Portugal
Age | Zhang et al. [180] | Mental state attribution | TD > ASD | p < 0.01 | NAO | 40 | 6.35 (0.56) (neurotypical); 6.79 (0.93) (autistic) | China
Culture | Bartneck et al. [153] | Positive attitude | USA > Mexico | p < 0.05 | AIBO | 463 | not specified (adults) | various
Culture | Choi et al. [181] | Robot as friend | Koreans > Spanish | p < 0.01 | not specified | 160 | not specified | Korea, Spain
Culture | Dang and Liu [182] | Attribution of mental abilities | Chinese: loneliness − | p < 0.001 | Description of a social robot | 397 | 31.12 (8.91) (Americans); 29.95 (7.62) (Chinese) | China, USA
Culture | Eyssel and Kuchenbrandt [183] | Attribution of mental abilities | same culture > different | p < 0.05 | FLOBI | 78 | 23.27 (3.29) | Germany
Culture | Haring et al. [184] | Mental state attribution | Japanese > Australian | p < 0.001 | ROBI, KEEPON | 126 | 21.5 (2.05) (Japan); 23.6 (6.8) (Australia) | Japan, Australia
Culture | Li et al. [185] | Likability, engagement, and satisfaction; Trust | Korean = Chinese > German; Chinese > Korean > German | p < 0.01; p < 0.05 | LEGO MINDSTORMS NXT | 108 | 24 (1.85) (China); 23.28 (3.05) (Korea); 27.75 (0.87) (Germany) | China, Korea, Germany
Gender | Abel et al. [186] | Anthropomorphic rating | men > women | all p < 0.05 | Gantry robot | 40 | 24.8 (5.9) (male); 23.5 (3.4) (female) | Germany
Gender | Bryant et al. [187] | Perceived competence | female = neutral = male | all p > 0.05 | PEPPER | 50 | 35.65 (9.34) | USA
Gender | Carpinella et al. [70] | Perceived warmth and competence; Discomfort | female > male; female < male | p < 0.05; p < 0.001 | Team-built robot face | 252 | not specified | not specified
Gender | Eyssel et al. [109] | Mind attribution among men; among women; Perceived psychological proximity among men | male voice > female; female voice > male (human voice only); male voice > female | p < 0.01; p < 0.05 | FLOBI | 58 | 22.98 (2.81) | Germany
Gender | Kraus et al. [188] | Perceived trustworthiness and competence; Likability | men > women; women > men | p < 0.05 | NAO | 38 | 26.34 (7.38) | Germany
Gender | Kuchenbrandt et al. [189] | Duration of task | female robot > male (male participants); male = female (female participants) | p < 0.01 | NAO | 73 | 25.04 (4.34) | Germany
Gender | Leite and Lehman [173] | Affective response | girls = boys | p > 0.05 | Abstract robot | 28 | 6.7 (1.82) | not specified
Gender | Lücking et al. [190] | Interaction level | boys > girls | p < 0.05 | NAO | 12 | 58 months (4.99) | Germany
Gender | Niculescu et al. [112] | Likability | men > women | p < 0.05 | OLIVIA, CYNTHIA | 28 | adults | Singapore
Gender | Robben et al. [191] | Anthropomorphism; Enjoyment | same gender = different | p > 0.05 | NAO | 62 | 8.3 (1.6) | The Netherlands
Gender | Sandygulova and O’Hare [192] | Playing time | same gender > different | p = 0.01 | NAO | 74 | 6.15 (1.9) (girls); 5.46 (1.74) (boys) | Ireland
Gender | Sandygulova and O’Hare [193] | Same-gender preference | boys > girls; young > old | p < 0.001 | NAO | 107 | 5–12 y.o. | Ireland
Gender | Schermerhorn et al. [194] | Response bias | alone > with robot (women); alone < with robot (men) | p < 0.05 | Abstract robot | 47 | not specified | not specified
Gender | Siegel et al. [195] | Credibility; Trust; Engagement | opposite sex > same | all p < 0.05 | MOBILE DEXTEROUS SOCIAL ROBOT | 134 | 35.6 (11.58) | USA
Gender | Simmons and Knight [104] | Time of interaction | girls > boys | p < 0.05 | KEEPON | 45 | 6.8 (1.8) | Portugal
Gender | Suzuki and Nomura [196] | Chosen gender | either > male, women | 80% of participants | not specified | 1000 | not specified | Japan
Gender | Tung [197] | Social and physical attraction to robots | girls > boys | p < 0.01 | 12 different robots | 267 | 10–15 y.o. | Taiwan
Personality | Bartz et al. [46] | Anthropomorphic attributions | attachment anxiety > no attachment anxiety | p < 0.05 | Gadgets and pets | 178 | 39.50 (13.06) | North America
Personality | Darling et al. [137] | Reluctance to hit | empathetic > non-empathetic | p < 0.01 | HEXBUG NANO | 101 | 29 (9.7) | USA
Personality | Kędzierski et al. [198] | Willingness to interact | openness to new experiences + | r = 0.38, p = 0.01 | EMYS | 45 | 9.9 (1.41) | Poland
Personality | Spatola and Wykowska [45] | Anthropomorphic attribution; Attitudes | need for cognition −, need for prediction +; cognition +, prediction − | p < 0.05 | HOSPI, PERSONAL ROBOT, ARMAR, NIMBRO, NADINE | 1141 | 25.55 (5.71) | France
Others | Bartz et al. [46] | Anthropomorphic attributions | loneliness > non-loneliness | p < 0.05 | Gadgets and pets | 178 | 39.50 (13.06) | North America
Others | Bernstein and Crowley [199] | Attribution of psychological characteristics | inexperienced > experienced | p < 0.01 | QRIO and Exploration Rover Personnel | 60 | 62.6 months (4–5 y.o.); 82.6 months (6–7 y.o.) | USA
Others | Dang and Liu [182] | Attribution of mental abilities | Chinese: loneliness − | p < 0.001 | Description of a social robot | 397 | 31.12 (8.91) (Americans); 29.95 (7.62) (Chinese) | China, USA
Others | de Graaf et al. [154] | Evaluation before interaction | rejecters > other groups | p < 0.01 | KAROTZ | 102 | 37.74 (16.87) | The Netherlands
Others | Heerink [200] | Perception as social entity | education − | p < 0.05 | ROBOCARE | 66 | not specified | Switzerland
Others | Kuriki et al. [110] | Perceived humanness and positive feelings | artificial voice = human (ASD group) | p = 0.07 | not specified | 28 | 27.6 (autistic); 28.9 (neurotypical) | Japan
Others | Lee et al. [47] | Social attractiveness; Positive evaluation | lonely > non-lonely | p < 0.05; p < 0.01 | AIBO, APRIL | 32 | not specified | USA
Others | Nakano et al. [201] | Fixation on eyes and mouth | TD > ASD | p < 0.001 | not specified | 104 | 3.11 (1.1), 29.5 (7.4) (autistic); 3.1 (1.11), 32.1 (11.8) (neurotypical) | Japan
Others | Niculescu et al. [112] | Positive feelings | no experience > experience | all p < 0.05 | OLIVIA, CYNTHIA | 28 | not specified | Singapore
Others | Paepcke and Takayama [33] | Perceived competence after interaction | low expectations > high | p < 0.05 | PLEO and AIBO | 24 | 30.46 (12.07) | USA
Others | Zhang et al. [180] | Mental state attribution | TD > ASD | p < 0.01 | NAO | 40 | 6.79 (0.93) (autistic); 6.35 (0.56) (neurotypical) | China
Table 4. Compacted summary of the influences of the different factors on acceptance and anthropomorphism according to the studies reviewed in this paper.
Factor Categories | Factors | Effect on Acceptance | Effect on Anthropomorphism
Robotic | Human-like appearance | 13+, 0=, 0− | 12+, 3=, 0−
Robotic | Human-like voice | 5+, 0=, 1− | 2+, 0=, 0−
Robotic | Human-like behavior | 19+, 4=, 1− | 5+, 0=, 0−
Robotic | Movements | 3+, 0=, 1− | 2+, 0=, 0−
Situational | Anthropomorphic framing | 8+, 1=, 0− | 5+, 2=, 0−
Situational | Human-like role | 2+, 1=, 2− | 0+, 0=, 0−
Situational | Interaction frequency | 6+, 3=, 1− | 0+, 0=, 1−
Situational | Perceived autonomy | 1+, 1=, 1− | 3+, 0=, 0−
Human | Age | 4+, 2=, 3− | 0+, 0=, 8−
Human | Gender ★: female | 2+, 2=, 2− | 1+, 3=, 0−
Human | Gender ★: male | 2+, 2=, 2− | 2+, 2=, 0−
Human | Personality | 2+, 0=, 1− | 3+, 0=, 1−
Human | Culture | 3+, 0=, 3− | 2+, 0=, 2−
Human | Others: experience with technology, education | 0+, 0=, 1− | 0+, 0=, 2−
Human | Others: expectations | 0+, 0=, 2− | 0+, 0=, 0−
Human | Others: social isolation | 1+, 0=, 0− | 1+, 0=, 0−
Human | Others: developmental type | 3+, 0=, 0− | 1+, 0=, 0−
+ indicates the number of studies with a positive effect; = indicates the number of studies with no effect; − indicates the number of studies with a negative effect. ★ Due to the complex interaction between the user’s gender and the robot’s gender, only the simple effect is shown in the table.
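Table 4 is, in essence, a vote count: for each factor, the studies listed in Tables 1–3 are tallied by the direction of the effect they report (positive, null, or negative). For readers who wish to reproduce or extend such a tally as new studies appear, a minimal sketch is given below, assuming each study has been coded as a (factor, effect direction) pair; the data structure, the example entries, and the function name are illustrative only and are not the coding procedure used in this review.

```python
from collections import Counter

# Each reviewed study is coded with the factor it manipulates and the direction
# of its reported effect: "+" (positive), "=" (no effect), "-" (negative).
# The entries below are illustrative examples drawn from Table 2, not a full re-coding.
studies = [
    {"factor": "Anthropomorphic framing", "effect": "+"},  # e.g., Darling et al. [137]
    {"factor": "Anthropomorphic framing", "effect": "="},  # e.g., Onnasch and Roesler [142]
    {"factor": "Interaction frequency", "effect": "+"},    # e.g., de Graaf et al. [154]
]

def tally(studies):
    """Count, per factor, how many studies report +, =, or - effects."""
    counts = {}
    for study in studies:
        counts.setdefault(study["factor"], Counter())[study["effect"]] += 1
    return counts

for factor, c in tally(studies).items():
    print(f"{factor}: {c['+']}+, {c['=']}=, {c['-']}-")
```

Such a count weights every study equally regardless of sample size or measure, which is one reason the table above should be read as an indicative summary rather than a meta-analytic estimate.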