Article

A Study on Social Exclusion in Human-Robot Interaction

Cognitive Science Department, Jagiellonian University, 31-007 Krakow, Poland
* Author to whom correspondence should be addressed.
Electronics 2023, 12(7), 1585; https://doi.org/10.3390/electronics12071585
Submission received: 2 February 2023 / Revised: 19 March 2023 / Accepted: 25 March 2023 / Published: 28 March 2023

Abstract

Recent research in human-robot interaction (HRI) points to possible unfair outcomes caused by artificial systems based on machine learning. The aim of this study was to investigate if people are susceptible to social exclusion shown by a robot and, if they are, how they signal the feeling of being rejected from the group. We review the research on social exclusion in the context of human–human interaction and explore its relevance for HRI. Then we present the results of our experiment to simulate social exclusion in the context of HRI: the participants (for whom it was their first encounter with a robot) and the Nao robot were asked to cooperate in solving the bomb defusal task, during which the robot favored one participant with whom it had a longer interaction before the task. The robot was controlled using the Wizard-of-Oz methodology throughout the experiment. Our results show that the discriminated participants reported a higher feeling of exclusion. Though some other hypotheses were not confirmed, we present several qualitative observations from our experiment. For example, it was noticed that the participants who behaved more openly and were more extraverted acted more comfortably when interacting with the robot.

1. Introduction

The presence of robots in everyday activities is becoming a reality to which we humans need to adapt. Nowadays, robots are not just present in factories but are playing an increasingly important role in social contexts. A major challenge is how to design and program these social robots so that they are trustworthy, safe, and useful for humans. Though humans vary widely in terms of their gender, age, culture, personality, and so on, we can still try to find some general design principles that would lead to trustworthy and smooth human–robot interaction.
One issue that needs to be investigated thoroughly is how a robot’s presence in a team with humans, who are engaged in a cooperative task, affects the feelings and behavior of the human team members. This is important because robots are increasingly being used in environments where they must interact with more than just a single person. We can think of such examples as hospitals, care homes, offices, or even households, where robots are supposed to interact with groups of different people [1,2,3,4,5,6]. This requires a shift in our attention from research on dyadic interactions to focusing on groups consisting of humans and robots.
Previous research has shown that people’s preconceptions and previous experiences with robots affect their attitudes towards robots and artificial agents. For example, it has been shown that people who do not have extensive experience with social robots generally have positive feelings towards them [7]. In another recent study, Esterwood and Robert [8] have shown that an individual’s attitude towards working with robots (AWOR) significantly affects their trust repair strategy in human-robot interaction, and this effect can change over the course of repeated trust violations. (See also [9,10].) In this context, it is important to study team cooperation in human–robot teams.
One such study [11] examined how the social structure between humans and machines is affected when the group size is increased from two (human and the machine) to three (two humans and a machine). It was found that the addition of one more human to a social situation resulted in higher doubt about the machine’s non-subjectivity, while simultaneously consolidating its position in a secondary relationship.
In another study, Rosenthal-von der Pütten and Abrams [12] pointed out that, besides the social dynamics in human–robot groups, the social consequences of their cooperation are also important. As robots adapt their behavior based on data gathered through interactions with people, it is very likely that we will have to deal with the unequal adaptation of robots to different members of a human group. The underlying reason is that a robot’s actions and behaviors are based on machine learning algorithms, so the robot will adapt better to, and more accurately predict and match the preferences of, those group members for whom it has gathered a larger data set. This might be perceived as unfairness or even discrimination by the other group members [13]. One possible way to avoid such a situation is to first investigate more carefully how humans behave when a robot is seen as a group member and how they might react when it shows biased behavior and seems to favor a certain group member. Only after better understanding the effects of unequal treatment of group members can strategies aimed at reducing such problems be developed.
In the rest of this paper, we first introduce the topic of humans as social animals with a brief discussion of social exclusion. Then we elaborate on human–robot interaction, followed by the topic of social exclusion, how it manifests in human interactions with artificial systems, and its possible consequences. Then we present our experiment investigating social exclusion in human–robot groups. In our experiment, the participants (for whom it was their first encounter with a robot) and the Nao robot were asked to cooperate in solving a bomb defusal task, during which the robot favored one participant, with whom it had had a longer interaction before the task. The robot was controlled using the Wizard-of-Oz methodology throughout the experiment [14]. Finally, a brief conclusion sums up the results provided by the study, and ideas for further research in this area are suggested.

2. Motivation and Background

“Man is by nature a social animal” is not without reason a famous quote of Aristotle [15]. Community is a biologically natural habitat for human beings, who are interdependent and need others to fulfill their needs. Humans are social creatures who, by spending much of their time with other human beings, mutually influence each other’s thoughts, behaviors, beliefs, and emotions, a phenomenon that social psychologists call social influence [16,17]. We are not only under the influence of others; we also feel the need to be like others and to be recognized as a part of the society or group within which we function. Those individuals who break established rules risk being laughed at and socially excluded [18,19]. Group interactions are present at every stage of our lives: starting in early childhood, when children play together in kindergarten; continuing with learning in groups at school and competing in sports teams; and later with starting a family in adulthood. Even for the elderly, social isolation is a major risk factor [20]. Every human belongs to some groups from the very beginning, and even extreme cases such as asocial individuals are members of some cultural or ethnic group, depending on their origin and upbringing [21]. As emphasized by Baumeister and Leary [22], the feeling of belonging and the creation of interpersonal relationships are basic human needs. Taking all these factors into account, it is safe to state that humans are social creatures who naturally engage and interact in groups.
Here a question arises: how would the presence of an artificial agent, such as a robot, affect social interaction in a group? This issue is becoming increasingly important as more and more social robots are being designed to play an integral role in our everyday lives. Nowadays, robots are designed to cooperate with people, to assist and help them in everyday activities [1], or even to take part in therapies [2,3,6], education [4,5], and other areas. More and more often, robots are being introduced as members of human groups and teams [23,24], so it is necessary to develop robotic systems that can communicate with and adapt to multiple users.
It has been suggested that the social rules people follow during interactions with artificial agents are in many ways similar to those present in human–human interactions [25]. Moreover, it has also been observed that humans categorize artificial systems such as robots or even computers [26] in a similar way as they do other humans; they rate robots that share some similarity with them, such as nationality, more positively and anthropomorphize them more [27]. There are cases when people develop a bond with their robotic team members and feel strongly attached to them: for example, the first funeral with a Buddhist ceremony for robot dogs took place in Japan in 2015 [28], and Jibo owners reported being sad on receiving the news of the robot’s termination [29]. The analysis conducted by Carter et al. [30] shows that people more often categorize social media posts about robots (such as Jibo) as referring to humans rather than to robots. Though this raises many ethical issues and questions, what is important for the purpose of this paper is that forming relationships with artificial agents has become a reality for many people. Whereas interaction with another human being is biologically natural for people, it is hard to predict how working with an artificial system would influence an individual’s behavior, decisions, or even mental state. Nonetheless, despite the many limitations of robots, such as their lack of basic social skills like understanding others’ points of view, emotions, and empathy, human–robot interactions are in many ways similar to human–human interactions.
For example, Gordon Allport [31] proposed the contact hypothesis, according to which increased contact between members of different groups helps to improve intergroup relations and reduce hostility. Building on this hypothesis and on the observation of Reeves and Nass [32] that people tend to anthropomorphize computers and new media, Sarda Gou et al. [33] found that participants’ implicit attitudes towards robots became more positive after watching a video of a friend describing their interaction with a robot. All this research suggests that some observations from studies on human–human relationships can apply to human–robot relationships as well.

3. Social Exclusion

The term social exclusion was coined in France in 1974 by René Lenoir, so it is quite a recent notion. There are many different definitions of this phenomenon in the literature [34,35], but for this research, we characterize it as a multidimensional process of depriving and preventing an individual from engaging in fair, equal, and unbiased interaction within a social group, as well as not perceiving such a person as a full-fledged member of the group. The result is that the rejected person has no opportunity to participate under the same conditions as others in a social group. The term was originally used in the context of the marginalization of certain social groups, such as the handicapped, unemployed, or single parents, addressing serious social issues facing governments all over the world.
Even though some individuals are more prone to being socially excluded because of, for example, their socio-economic status or poor health, it should be emphasized that social exclusion can affect practically anyone. Consider, for example, a hypothetical situation with a school trip where a child named John gets sick and has to stay home while all his friends join the trip. After his friends get back from this excursion, they constantly talk about all the exciting things that happened during the trip. As John cannot relate to his friends, he starts to feel like an outsider, and he might no longer be perceived as a full-fledged member of the group. These kinds of situations are quite common, and probably all of us have experienced the feeling of being excluded from a certain group at least once.
A series of studies on the psychological effects of social exclusion conducted by Jones et al. [36,37,38] revealed that participants in the out-of-the-loop condition, meaning those who, in contrast to their team members, did not receive information needed to solve the main task, reported decreased fulfillment of fundamental needs such as belonging, self-esteem, control, and meaningful existence. They also reported a worse mood and rated their team members lower in terms of liking and trustworthiness. These findings suggest that depriving a person of an opportunity to interact with others on the same level has many negative psychological outcomes for the person. Even if these detrimental effects are of a short duration, they still should not be disregarded, and we should encourage more in-depth research on different forms of social exclusion.

3.1. Social Exclusion in HRI

The question of how social exclusion might affect people while interacting with artificial agents such as robots needs a more thorough investigation, though the existing research suggests that it, too, has a negative impact on human wellbeing. In a study conducted by Ruijten, Ham, and Midden [39], the participants first played the Cyberball game, during which the ball was tossed fewer times to some participants (the exclusion group) than to others (the inclusion group). This was followed by the main task, in which participants made choices about a washing machine and received feedback on those choices from a virtual agent. The results showed that the participants who had been excluded, especially women, were more sensitive to the feedback provided by the agent.
In a study by Erel et al. [40], regarding ostracism in robot–robot–human interaction, the Cyberball paradigm in its physical form was used to investigate if interaction with two non-humanoid robots might lead to the experience of being excluded from the team and might have an impact on the fundamental psychological needs of the participants. As predicted, those in the exclusion condition (only about 10% of the ball tosses were directed at them) reported worse moods and more negative ratings regarding three needs: belonging, control, and meaningful existence, compared to the participants in the inclusion and the over-inclusion conditions.
As observed by Claure et al. [41], the problem of unfairness might also arise when a robot assigns more resources to the team member whose performance is the highest, which in turn might lead to decreased trust in the system by the other workers. To investigate this issue, a Tetris game was used in which an algorithm assigned blocks to two players. The algorithm had three levels of fairness, depending on the minimum allocation rate guaranteed to each participant. Even though the team’s performance did not change across the conditions, the weaker performers reported a decreased level of trust. Another study that investigated the perceived fairness of decisions made by an algorithm showed that the degree to which a system is perceived as fair might also depend on how the person was evaluated by this system [42]. In that study, an algorithm was perceived as fairer when participants received favorable feedback from it than when the outcome was unfavorable. It was also emphasized that perceived fairness might depend on individual differences such as gender or level of education.
Even though such results provide some clues to how humans behave in discriminating settings while interacting with artificial systems, it is still unclear if they are susceptible to unfair behavior when it is shown by such a system. (For effects of algorithms behaving unfairly, see [43].) This is the issue we are exploring in this research, where a robot is a source of possible exclusion for one of the group members. It might happen that in a place where more people interact with a robot, such as, for example, at work, in a hospital, or even at home, the robot would be better at predicting and understanding the requests and preferences of those who spend more time with it because it has been able to gather more data on these people’s habits and behavior. Then those who do not spend as much time with the robot, especially new members of a group that includes a robot, might feel not only irritated by not being understood appropriately by the robot but also excluded from the interactions with such a system.

3.2. Consequences of Social Exclusion

Research shows that social exclusion might have several negative consequences for humans. It is not surprising that those who experience rejection report a worse mood than others, but what might not seem so obvious is that, as shown by Van Beest and Williams [44], this effect remains even when being excluded by others is beneficial. In their experiment, the participants played a Euroball game modeled after the commonly used Cyberball paradigm. Even when the ostracized participants were receiving more money than the other players, they reported lower mood ratings and satisfaction levels.
It has also been shown that individuals are more aggressive after being socially excluded. In the studies conducted by Twenge et al. [45], experimenters manipulated the participants’ feelings of belonging by giving them predictions about their future social lives that were allegedly based on their personality tests. The participants were also asked to write an essay on abortion, after which they were given feedback on the quality of the essay, which was allegedly evaluated by another competent participant. To test the level of aggressiveness, the participants evaluated the person who rated their essay. As predicted, the ratings given by those who were told that their future social life would be lonely were more negative than those given by the people in the other conditions. Moreover, in the next experiment, participants who were made to feel rejected by their peers (meaning that they received information that no one chose them as the person with whom they would like to work) set a higher intensity and a longer duration for the noise blast that was allegedly given to the person with whom they played the computer game.
Receiving predictions about a lonely future life also seems to affect both pain tolerance and the pain threshold, as shown by DeWall and Baumeister [46]. In this research, the participants who received a pessimistic future-life scenario had a significantly higher physical pain threshold and tolerance compared to those in the other three control conditions. This supports the view that feelings of rejection and exclusion might result in emotional numbness.
A similar procedure was used by Baumeister et al. [47], where participants were asked to complete a general mental abilities test after receiving predictions about their future lives. Those who received an anticipation of a lonely future answered fewer of the test questions accurately, which suggests that a feeling of social rejection might impair intelligence. However, a significant decrease in cognitive abilities was not observed in the case of simple information processing. A possible explanation might lie in an impairment of executive function. This suggestion was supported by Baumeister et al. [48], who found that social exclusion might indeed have a detrimental effect on self-regulation. After receiving feedback about their anticipated future social lives, participants were encouraged to consume a bad-tasting vinegar drink. This task required a high level of self-regulation, as the participants had to force themselves to drink a beverage that was said to be beneficial for their health even though it was unpalatable. It was observed that the participants who received a prediction of a lonely future life drank fewer ounces of the beverage than those who were told they would be accepted by others in the future, and even fewer than those who were predicted to experience misfortunes in their future.
As shown by several studies, even short-term social exclusion might have a negative impact on different dimensions of human wellbeing. It might not only influence subjective feelings and mood but also an individual’s behavior and cognitive abilities. Whether similar consequences might also be caused by rejection from an artificial agent has not yet been studied. However, studies on interaction between humans might be a good starting point for establishing future directions of research in human–robot interaction. We conducted one such empirical study, described below.

4. An Empirical Study on Social Exclusion in Human-Robot Interaction

To find out if people are susceptible to exclusion by a robot, we conducted an experiment where two humans were paired together and asked to cooperate in solving the bomb defusal task (described later in this section) under two different conditions. In the experimental condition, the robot made small talk with one participant (called the favored participant) before starting the task. Then, during the task, the robot gave positive feedback to the favored participant’s suggestions and ignored the suggestions of the other participant (called the discriminated participant). In the control condition, neither participant talked to the robot before the task, and, during the task, the robot gave similar feedback to both participants. Before the participants met the robot, they answered a question about their current mood. After the interaction, they filled out a post-session questionnaire, which included questions regarding their perception of the robot, their feeling of exclusion from the team, and their level of identification with the team. We also measured how well the team performed on the task.

4.1. Hypotheses

As mentioned in the previous section, even short-term manipulations of social exclusion can have a pronounced impact, one that can outweigh other factors such as monetary benefits. So, we hypothesize that:
H1. 
The robot’s behavior and actions will be seen as favoring one participant and discriminating against another. This effect will be reflected in the post-session questionnaires, where the favored participant will rate the robot more positively than the discriminated participant.
H2. 
In the post-session questionnaire, we expect the following results regarding the moods of the participants: favored participants (experimental condition) > control condition participants > discriminated participants (experimental condition).
H3. 
The discriminated participants will report a higher feeling of exclusion, which is indirectly measured by the items from the Needs Threat Scale [49,50].
H4. 
The discriminated participants will feel less like a part of the group in comparison to the favored participants.
H5. 
The teams in the control condition will perform better than the teams in the experimental condition, as we hypothesize that the discriminated participant will not feel like cooperating, thereby degrading the overall performance of the team.

4.2. Participants

Sixty-two Polish adults (32 women and 30 men), ages 18–69 (M = 25.4, SD = 8.7), were paired into thirty-one teams. Each participant received forty zlotys (about ten euros) for her or his participation in the study. The participants were recruited through advertising on social media and departmental mailing lists. The participants were randomly assigned to the control condition or the experimental condition. In the experimental condition, one participant was randomly selected to be the favored one, and the other participant became the discriminated one. In some of the pairs, the participants knew each other, and in others they were strangers. This could influence the feeling of social inclusion. However, we did not control for this parameter in this experiment; this can be addressed in future research. For all the participants, it was their first encounter with a robot.

4.3. Materials

The robot used in this study was a NAO V6, created by SoftBank Robotics. During the experiment, in all the interactions with the participants, the robot’s speech was controlled using the Wizard-of-Oz methodology [14], in which a human speaks to the participants through the robot while the participants are led to believe that the robot is operating autonomously. The experimental setup is shown in Figure 1 below.
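For readers unfamiliar with this kind of setup, the sketch below illustrates one way a wizard console for the robot’s speech could be implemented. It is a minimal illustration under stated assumptions (the NAOqi Python SDK that ships with the NAO, a placeholder robot address, and a simple type-and-speak loop), not the control software actually used in the study.

    import qi

    ROBOT_URL = "tcp://192.168.1.10:9559"  # placeholder address of the NAO robot

    def wizard_console():
        # Connect to the NAOqi session running on the robot.
        session = qi.Session()
        session.connect(ROBOT_URL)

        # ALTextToSpeech converts the wizard's typed lines into robot speech.
        tts = session.service("ALTextToSpeech")
        tts.setLanguage("English")

        print("Wizard console: type a line and press Enter (empty line quits).")
        while True:
            line = input("> ").strip()
            if not line:
                break
            tts.say(line)  # the robot utters the wizard's line verbatim

    if __name__ == "__main__":
        wizard_console()

In the actual study, the operator additionally adapted the wording to the participant and the context, as described in Section 4.6.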
The bomb defusal task [51], modelled after the Mastermind game, was used as the main task during the interaction. Twenty-three cables in nine colors were available to the team. Seven cables had to be chosen and assigned, in the right order, to seven places in the code. The goal was to have the color and position of each cable match the target code held by the experimenter. During the ten minutes of the game, the team had five opportunities to ask the experimenter for feedback on their current guess about the code. The experimenter reported how many cables were of the correct color and in the correct place, and how many were of the correct color but not in the correct place. Only these two numbers were provided as feedback; no information was given as to which particular cables were of the correct color or in the correct place. The timer was not stopped during the feedback. As in a similar study conducted by Jung et al. [51], it was not essential for the team to guess the code correctly before the time expired. After the allotted time, we counted the number of cables that were placed correctly.
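To make the feedback rule concrete, the sketch below computes the two numbers reported by the experimenter for a given guess. The single-letter color codes and the function name are illustrative assumptions; only the counting rule itself follows the task description above.

    from collections import Counter

    def code_feedback(guess, target):
        # Feedback for one guess in the bomb defusal task: how many cables
        # have the correct color in the correct place, and how many have the
        # correct color but are in the wrong place.
        assert len(guess) == len(target)
        exact = sum(g == t for g, t in zip(guess, target))
        # Count the colors at the non-matching positions on both sides and
        # take the overlap: correct colors placed at the wrong position.
        guess_rest = Counter(g for g, t in zip(guess, target) if g != t)
        target_rest = Counter(t for g, t in zip(guess, target) if g != t)
        color_only = sum((guess_rest & target_rest).values())
        return exact, color_only

    # Hypothetical example with single-letter color codes for a 7-cable code:
    # code_feedback("RGYBOWK", "RGBYOPW") returns (3, 3).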

4.4. Experimental Manipulation

To create a situation in which one participant might feel socially excluded, in the experimental condition the robot consistently paid attention to one participant while ignoring the other. Before starting the task, the robot had a short conversation with one participant (the favored participant), while the other participant (the discriminated participant) was asked to do a sudoku task. During the bomb defusal task, the robot gave positive feedback to the suggestions and decisions regarding the placement of the wires made by the favored participant. For example, the robot would say, “Great work <name of the favored participant>!” after the favored participant made a change in the code, or “Let’s do what <name of the favored participant> says” after the favored participant gave some suggestions about the order of the wires. The discriminated participant did not receive any positive feedback, and his or her suggestions were not supported by the robot. The only times the robot spoke to the discriminated participant were when it addressed both participants, for example, while greeting them at the beginning or while saying neutral things such as “Let’s start the game”.
In the control condition, the robot addressed both participants equally and the same number of times during the interaction.
To make the robot seem autonomous and an active participant in the task, the robot made three suggestions during each game for changing the ordering of the wires. This was the case for both the control and the experimental conditions.

4.5. Dependent Measures

We used the following dependent measures, which were assessed based on the questionnaires filled out by the participants before and after the interaction. Mood was the only parameter measured both before and after the interaction; all other parameters were measured only after the interaction.

4.5.1. Mood

Participants’ ratings of their mood were measured before and after the interaction with the robot using the question “How are you feeling right now?”. The answers were indicated on a 5-point Likert scale ranging from “Never felt worse” to “Better than ever”.

4.5.2. Liking the Robot (Nao)

The participants were asked, “How much do you like Nao?” on a five-point Likert scale ranging from “Not at all” to “Very much”. This question was included to assess the effect of social exclusion on how much the participants liked or disliked the robot after the interaction.

4.5.3. Participants’ Perceptions of the Robot

To assess what impression the robot made on the participants, items from the Godspeed questionnaire series [52] were used regarding the robot’s likeability (unfriendly–friendly, unkind–kind), perceived intelligence (incompetent–competent, irresponsible–responsible, unintelligent–intelligent), and perceived safety (anxious–relaxed).

4.5.4. Feeling of Exclusion from the Team

To assess how excluded from the team the participants felt, items from the Needs Threat Scale (NTS) [49,50] regarding their basic psychological needs were used. In particular, the following items from the NTS were used in our questionnaire:
  • During the game, I felt good about myself.
  • I felt that the other participants failed to perceive me as a worthy and likeable person.
  • I felt somewhat inadequate during the game.
  • I felt poorly accepted by the other participants.
  • I felt as though I had made a ‘connection’ or ‘bonded’ with one or more of the participants during the game.
  • I felt like an outsider during the game.
  • I felt that I was able to change the order of wires in the code as often as I wanted during the game.
  • I felt somewhat frustrated during the game.
  • I felt in control during the game.
  • I felt that my performance had some effect on the direction of the game.
  • I felt non-existent during the game.
  • I felt as though my existence was meaningless during the game.
Though the validity of the NTS has been questioned [49], we decided to use it so that our results could be compared with those of other studies that use this scale.
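As an illustration of how such items can be aggregated, the following is a minimal sketch that computes a single exclusion score per participant. It assumes a 5-point response scale and treats the positively worded items as reverse-scored; this mapping is an illustrative assumption and not a documentation of the exact scoring pipeline.

    import numpy as np

    # Indices of the positively worded items in the order listed above
    # (felt good, bonded, free to change wires, in control, had an effect);
    # these are reverse-scored so that higher values always mean a stronger
    # feeling of exclusion. This assignment is an assumption for illustration.
    REVERSE_ITEMS = [0, 4, 6, 8, 9]
    SCALE_MAX = 5

    def exclusion_score(responses):
        # `responses` is a list of the 12 Likert ratings (1-5) in the order of
        # the items above; the result is the participant's mean exclusion score.
        r = np.asarray(responses, dtype=float)
        r[REVERSE_ITEMS] = (SCALE_MAX + 1) - r[REVERSE_ITEMS]
        return r.mean()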

4.5.5. Group Identification

To assess how integrated into the team the participants felt, we asked them to “Circle one of the five graphics that best illustrates your sense of belonging to the group you were a member of during the study.” The graphics are shown in Figure 2 below; ‘ja’ is ‘I’ and ‘zespół’ is ‘group’.

4.5.6. Team Performance

To check how the teams performed on the task, the number of correctly identified wires in the best configuration achieved by the team during the 10 min was counted and compared between the experimental and control conditions.

4.6. Procedure

The experimental procedure was approved by the Ethics Committee of the Institute of Philosophy of the Jagiellonian University in Kraków.
The experiment was conducted between 13 July and 6 August 2021. The participants were paired randomly and were invited to the laboratory as pairs, with each pair being allotted a different time slot. Upon arrival, the participants were informed about the anonymity of the collected data and were asked to sign their consent for voluntary participation. Then the purpose of the study was described to them as follows: The goal of the study you are about to participate in is to analyze and observe the behavior of a social robot while it interacts in a group. This study will take about 30 min and consists of four parts:
  • Pre-session questionnaire.
  • Introduction to the robot and your partner in this task (the second participant).
  • Interaction with the robot solving a game in a group for 10 min.
  • Post-session questionnaire.
Completing the pre-session questionnaire took about one minute, as there was only one question about their mood.
In the experimental condition, one participant (the favored participant) was led by the experimenter to the room where the robot was present. Nao introduced itself and asked for the participant’s name. Then Nao continued the conversation by asking a question such as “How are you doing?” or “How is your day so far?”. Sometimes Nao talked about the weather, saying something appropriate for that day, such as “Quite a hot day, isn’t it?” or “Do you enjoy rainy days like today?”. Sometimes Nao added something a bit funny, such as “I don’t like the summer because my system overheats.” Such humorous dialog was used quite often; for example, if the participant asked Nao how its day was going, Nao would say that it was tired from sitting in the lab all day or that all it does is work hard.
The Wizard-of-Oz methodology was employed for all the interactions with the robot, in which a collaborator of the experimenter talked to the participants through the robot. This interaction was spontaneous, and the collaborator adjusted Nao’s answers to the context and the participant’s reactions. Each conversation was a bit different, but the overall manner of the interactions was neutral with a hint of humor, so that the participant could loosen up and feel liked by Nao.
During the time the favored participant was talking with the robot, the other participant (the discriminated participant) was asked to solve a sudoku puzzle in a separate room. After about five minutes, the experimenter led the discriminated participant to the room with the robot and the favored participant. The robot was briefly introduced to the discriminated participant, and the experimenter informed both participants that they, together with the robot, now formed a team of three and were to cooperate in solving the bomb defusal task. The experimenter explained the rules of the game and answered any questions the participants had regarding it. The game lasted ten minutes, after which the experimenter informed the participants that the part of the experiment involving the robot was over. The robot said goodbye to the participants and thanked them for playing.
Then the participants were handed the post-session questionnaires, which took about ten minutes to complete. After this, the participants were debriefed about the real purpose of the study and about the manipulation that could have caused a feeling of exclusion from the group. Finally, the participants received their payment and were thanked for their participation.
In the control condition, the procedure was similar except that both participants were asked to do a sudoku puzzle for five minutes before meeting the robot. The robot was introduced to both the participants at the same time, and then the procedure was the same as in the experimental condition except, as mentioned before, that the robot gave equal attention to both the participants during the bomb defusal task.

4.7. Quantitative Analysis

The experimental data are available on request. Statistical analyses were performed to answer the research questions and test the hypotheses. IBM SPSS Statistics version 26 was used to compute basic descriptive statistics and to conduct one-way ANOVAs and Mann–Whitney U tests. The standard level of significance, α = 0.05, was applied.
First, the distributions of quantitative variables were tested. To do this, basic descriptive statistics as well as the Shapiro–Wilk test were calculated. The results are presented in Table 1.
The Shapiro–Wilk test is non-significant only for the game completion time variable, measured in minutes; its distribution is therefore close to the Gaussian curve. For the other variables, the results are statistically significant, which means that their distributions differ from the normal distribution. However, the skewness of every variable does not exceed an absolute value of 1, meaning that these distributions are only slightly skewed. Nonetheless, given the robustness of ANOVA, it was used for the analysis despite the non-normality.
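Although the analysis was carried out in SPSS, the same screening can be reproduced in a few lines. The sketch below, assuming the data are available as plain numeric arrays, shows the Shapiro–Wilk test and skewness computation using SciPy; the function name and return format are illustrative.

    import numpy as np
    from scipy import stats

    def screen_variable(values):
        # Descriptive statistics and normality screening for one quantitative
        # variable (e.g., the exclusion score or the completion time in minutes).
        x = np.asarray(values, dtype=float)
        w, p = stats.shapiro(x)               # H0: the sample comes from a normal distribution
        skewness = stats.skew(x, bias=False)  # |skewness| < 1 is read as only slight skew
        return {"M": x.mean(), "SD": x.std(ddof=1), "W": w, "p": p, "skew": skewness}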
To test whether the conditions to which the participants were assigned differentiated their moods before and after the team interaction (Hypothesis H2), a one-way ANOVA was conducted. The results were statistically non-significant: no differences in mood, measured before or after the team interaction, were observed among the favored participants, the discriminated participants, and the control group (Table 2, Figure 3).
Next, a one-way ANOVA was used again to test the differences between conditions in terms of how much participants liked the robot and their subjective assessments of it (Hypothesis H1). The research condition was treated as a between-subjects factor (control group vs. favored participants vs. discriminated participants). The dependent variables were the likeability of the robot and its assessment. Both effects proved to be statistically non-significant, which indicates no differences between the compared groups in terms of the robot’s likeability and its assessment (Table 3, Figure 4).
A one-way ANOVA was also used to inspect whether the research condition differentiated the participants’ feeling of belonging to the team (Hypothesis H4) and their feeling of exclusion (Hypothesis H3), the latter measured indirectly by the items from the Needs Threat Scale (Table 4, Figure 5).
The analysis showed that the research condition differentiated the level of the feeling of exclusion from the team. To identify the exact differences, post hoc tests with the Games–Howell correction were carried out. It turned out that the discriminated participants felt a stronger sense of being excluded from the team than those favored by the robot (p = 0.009) and the control group (p = 0.015). However, no difference was found between the favored participants and the control group (p = 0.844). Moreover, the compared groups did not differ in terms of the feeling of belonging to the team.
Lastly, differences in the game completion time between the control and experimental groups were tested (Hypothesis H5) using the non-parametric Mann–Whitney U test. The results are statistically non-significant, which means that no differences were found between the control and experimental groups in terms of the game completion time (Table 5).
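For readers who wish to reproduce these tests outside SPSS, the following sketch shows the corresponding one-way ANOVA and Mann–Whitney U test using SciPy. The function and variable names are illustrative assumptions, and the Games–Howell post hoc step would require an additional package.

    from scipy import stats

    def anova_by_condition(favored, discriminated, control):
        # One-way ANOVA with the research condition as the between-subjects
        # factor; each argument is an array of one dependent measure
        # (e.g., the exclusion score) for one group of participants.
        return stats.f_oneway(favored, discriminated, control)

    def completion_time_test(control_minutes, experimental_minutes):
        # Mann-Whitney U test on game completion time (Hypothesis H5).
        return stats.mannwhitneyu(control_minutes, experimental_minutes,
                                  alternative="two-sided")

    # The Games-Howell post hoc comparisons reported after the significant
    # ANOVA are not part of SciPy; the pingouin package offers
    # pingouin.pairwise_gameshowell() for that step.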

4.8. Qualitative Observations

A meaningful addition to the quantitative results can be provided by observations of the participants’ behavior and by their comments during and after the interaction. The first observation is that four teams (13, 18, 21, and 29) reported that the robot did not look at the right person while speaking. This was because the robot’s software was designed so that it turned its face towards the source of the loudest sound; so, if the discriminated participant spoke louder than the favored participant, the robot turned its face towards the discriminated participant. However, the robot always addressed the participant by her or his name (that is, it used the name of the favored participant); therefore, we did not exclude the data from these teams from the analysis. We emphasize that it was difficult to recruit participants for this study because of the pandemic. Nonetheless, this factor should be controlled for in future studies.
During the interaction of team 9, we observed that the discriminated person blushed and became very quiet after the first few minutes of the game, during which the robot did not address him. He also became reluctant to play the game. However, his responses in the questionnaire were exceptionally positive with regard to his perception of the robot and the feeling of being accepted by his teammates. This could be an effect of the presence of the experimenter and the social desirability bias; future studies need to control for this bias [53,54].
Interestingly, in team 29, the participants grew more connected to each other, evidently ignoring the robot, and were negatively disposed towards it from the very beginning. It could be observed that they did not follow the robot’s suggestions or even reply to it. Moreover, the participants were irritated only when the robot’s recommendation regarding the code turned out to be incorrect, but not when their own suggestion turned out to be wrong. It was as if the robot, and not one of the human participants, was the discriminated group member. (See the discussion in [55,56].) Exploring this phenomenon further would require assessing the participants’ attitudes towards the robot before starting the task. For example, the participants could be shown a picture or video of the robot used in the study and then asked to fill out the Godspeed questionnaire before starting the game.
Another observation worth mentioning is that those who behaved in a more extraverted fashion and were open towards the robot acted more comfortably than those who seemed to be shy and uneasy while interacting with the robot. Correlations between the participants’ behavior while interacting with artificial agents, their perceptions of robots, and their personalities have already been discussed in recent studies suggesting that more extraverted individuals interact with robots more willingly [57]. Taking these findings into account, investigating the role of personality in human–robot group interaction, where one of the participants is induced to feel excluded, seems to be a promising area of future research.

5. Discussion

The main goal of the study was to investigate if people are susceptible to exclusion by the robot in human–human–robot interactions. During the interaction, the participants and the robot had to construct a code from the colorful cables that matched the target code held by the experimenter. The experimental condition was designed in such a way that one of the participants would feel excluded. Before starting the task, the robot made small talk with the favored participant, while the discriminated participant was asked to do a sudoku puzzle. During the task, the robot addressed the favored participant by her or his name and gave positive feedback to her or his suggestions, while ignoring the discriminated participant. The discriminated participant was aware that the robot could talk with either participant—before starting the task, the robot introduced itself to both participants, and during the task, there were some neutral suggestions that were directed at both participants—but the robot never addressed the discriminated participant alone or called her or him by name. However, as the statistical analysis revealed, we found no difference in the mood ratings before and after the experiment among the participants from the control, favored, and discriminated groups. These groups also rated the robot similarly and felt that it was part of their team at a similar level. Nonetheless, statistically significant differences were found for the participants’ feeling of being excluded from the group as measured by the items from the Needs Threat Scale. This effect supports prior studies on exclusion in HRI [40].
Our study was conducted during the pandemic, and so it was very difficult to recruit many participants. Nonetheless, we make some observations based on this study that would be useful for future research. For instance, one confounding factor, which we did not control in our study, is the participants’ familiarity with each other. Ideally, we would like to have two groups: one in which the two participants are familiar with each other and another in which they are not. We hypothesize that when the participants are familiar with each other, their mutual relationship may play a bigger role than the fact that the robot is favoring one of them. It should be noted that this is not always the case in human–human relationships: strong bonds of friendship may be strained or even broken by a newcomer [58]. It is also possible that when the two participants are not familiar with each other, the excluded participant may not care that much about being excluded. This is something that needs to be addressed in future studies: how familiarity influences exclusion in human–human–robot teams.
Another factor related to the bomb defusal task, in which our study differed from previous ones [51,59], was that the duration of interaction with the robot was not constant across all the teams. This was because, unlike in the previous studies, most of the teams in our case finished the task within the allotted time of ten minutes. Due to this uneven interaction time, it was not possible to test the teams’ effectiveness depending on the experimental condition as planned. Future work should make the task harder, either by decreasing the amount of feedback groups receive during the game or by increasing the number of cables in the code. However, the statistically significant results regarding the feeling of being excluded suggest that this feeling develops in the very first few minutes of interaction in an unfair environment.
The fact that we found no difference in the participants’ ratings of the robot suggests that rejection by Nao did not affect the participants’ attitude towards it. This is similar to the results of Nash et al. [60], except that in our study, the Godspeed questionnaire (administered after the task) was aimed specifically at the robot Nao and not at robots in general. In the debriefing session, some participants asked questions about the capabilities of the robot and how it worked. Some discriminated participants also commented that they thought the robot addressed only the favored participant because it had met her or him a little earlier, and others commented that the robot was not addressing them because it did not remember their name. Such explanations of the robot’s favoring behavior might have made the discriminated participants more understanding towards the robot because they did not take its behavior personally. This interpretation is supported by two case studies shown in Isabella Willinger’s 2019 documentary, Hi, A.I. In one case, Grandma Sakurai and her elderly friends interact with the robot Pepper in Japan; in the second, Chuck in the US interacts with a companion robot named Harmony. In both these cases, even when the robot does not understand a remark directed at it and makes an irrelevant response, the human partner takes a benign attitude towards it, as one would with a child. To explicitly check for this phenomenon, future research should devise ways to study this effect through questionnaires and psychophysical measurements.
For all our participants, it was their first encounter with a robot, and they were not sure before the experiment what to expect. In the future, it would be interesting to study social exclusion in human–human–robot teams where the participants have some previous experience with robots (for example, conducting a similar study with participants in Japan) and to compare those results with ours. There is some existing research on how previous exposure to robots influences people’s attitudes towards them [7,8,33,61,62,63], and it would be useful to see how this extends to social exclusion. Moreover, such studies would allow us to design culture-specific solutions to creating effective human–robot teams [64].

6. Conclusions

This study contributes to the existing research on exclusion in human–robot groups. It has been shown that the favoring behavior of the robot towards one of its team members results in a stronger feeling of exclusion in the discriminated participants in comparison to those favored by the robot and those from the control group. Even though other hypotheses regarding participants’ mood, feelings towards the robot, feeling of belonging to the team, and groups’ effectiveness on the task were not confirmed, the study can be treated as inspiration and encouragement to further investigate the behavior of participants during human–robot group interactions. This paper provides suggestions for further research in this area based on what could be observed and learned from our study.
As the technology for social robots is maturing at a rapid rate, one expects that scenarios for human–robot teams, which so far have remained in the realm of fiction (for example, The Murderbot Diaries by Martha Wells), will become a reality soon. Philosophers are discussing ways to redefine the concept of friendship to include robots as well [65]. To prepare for this inevitable future, it is necessary that we study cognitive and affective aspects of how humans respond to robot team members [24]. The study presented here takes one small step in this direction.

Author Contributions

Conceptualization and design: S.E.S. and B.I.; implementation and data analysis: S.E.S.; first draft: S.E.S.; supervision and revised draft: B.I. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the Priority Research Area DigiWorld under the program Excellence Initiative–Research University at the Jagiellonian University in Kraków (IDUB/DigiWorld/2021/47, 1027.0641.363.2019).

Institutional Review Board Statement

The experimental procedure was approved by the Ethics Committee of the Philosophical Department of the Jagiellonian University in Kraków.

Data Availability Statement

The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.

Acknowledgments

We would like to thank Anna Kołbasa and Barbara Wziętek for their assistance in conducting this experiment.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jevtić, A.; Valle, A.F.; Alenya, G.; Chance, G.; Caleb-Solly, P.; Dogramadzi, S.; Torras, C. Personalized Robot Assistant for Support in Dressing. IEEE Trans. Cogn. Dev. Syst. 2019, 11, 363–374.
  2. Duret, C.; Grosmaire, G.; Krebs, H.I. Robot-Assisted Therapy in Upper Extremity Hemiparesis: Overview of an Evidence-Based Approach. Front. Neurol. 2019, 10, 412.
  3. Cao, H.-L.; Esteban, P.G.; Bartlett, M.; Baxter, P.; Belpaeme, T.; Billing, E.; Cai, H.; Coeckelbergh, M.; Costescu, C.; David, D.; et al. Robot-Enhanced Therapy: Development and Validation of Supervised Autonomous Robotic System for Autism Spectrum Disorders Therapy. IEEE Robot. Autom. Mag. 2019, 26, 49–58.
  4. Arvin, F.; Espinosa, J.; Bird, B.; West, A.; Watson, S.; Lennox, B. Mona: An Affordable Open-Source Mobile Robot for Education and Research. J. Intell. Robot. Syst. 2019, 94, 761–775.
  5. Mondada, F.; Bonani, M.; Riedo, F.; Briod, M.; Pereyre, L.; Re, P.; Magnenat, S. Bringing Robotics to Formal Education: The Thymio Open-Source Hardware Robot. IEEE Robot. Autom. Mag. 2017, 24, 77–85.
  6. Bouchard, K.; Liu, P.P.; Tulloch, H. The Social Robots Are Coming: Preparing for a New Wave of Virtual Care in Cardiovascular Medicine. Circulation 2022, 145, 1291–1293.
  7. Naneva, S.; Gou, M.S.; Webb, T.L.; Prescott, T.J. A Systematic Review of Attitudes, Anxiety, Acceptance, and Trust Towards Social Robots. Int. J. Soc. Robot. 2020, 12, 1179–1201.
  8. Esterwood, C.; Robert, L.P. Having The Right Attitude: How Attitude Impacts Trust Repair in Human-Robot Interaction. In Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2022), Sapporo, Japan, 7–10 March 2022.
  9. Staffa, M.; Rossi, S. Recommender Interfaces: The More Human-Like, the More Humans Like. In Social Robotics. ICSR 2016; Agah, A., Cabibihan, J.J., Howard, A., Salichs, M., He, H., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2016; Volume 9979.
  10. Abate, A.F.; Barra, P.; Bisogni, C.; Cascone, L.; Passero, I. Contextual Trust Model with a Humanoid Robot Defense for Attacks to Smart Eco-Systems. IEEE Access 2020, 8, 207404–207414.
  11. Etzrodt, K. The third party will make a difference—A study on the impact of dyadic and triadic social situations on the relationship with a voice-based personal agent. Int. J. Hum. Comput. Stud. 2022, 168, 102901.
  12. Rosenthal-von der Pütten, A.M.; Abrams, A. Social Dynamics in Human-Robot Groups—Possible Consequences of Unequal Adaptation to Group Members through Machine Learning in Human-Robot Groups; Springer: Cham, Switzerland, 2020; pp. 396–411.
  13. Misztal-Radecka, J.; Indurkhya, B. Bias-Aware Hierarchical Clustering for Detecting the Discriminated Groups of Users in Recommendation Systems. Inf. Process. Manag. 2021, 58, 102519.
  14. Riek, L.D. Wizard of Oz studies in HRI: A systematic review and new reporting guidelines. J. Hum. Robot Interact. 2012, 1, 119–136.
  15. Aristotle. Aristotle’s Politics; Clarendon Press: Oxford, UK, 1905.
  16. Aronson, E. The Social Animal; Worth Publishers: New York, NY, USA, 2011.
  17. Axelrod, R.; Hamilton, W.D. The evolution of cooperation. Science 1981, 211, 1390–1396.
  18. Wojciszke, B. Psychologia Społeczna; Wydawnictwo Naukowe Scholar: Warszawa, Poland, 2011.
  19. Aline, H.; Lynn, M.K.; Melanie, K. Social exclusion and culture: The role of group norms, group identity and fairness. An. Psicol. 2011, 27, 587–599.
  20. Boamah, S.A.; Weldrick, R.; Lee, T.-S.J.; Taylor, N. Social Isolation Among Older Adults in Long-Term Care: A Scoping Review. J. Aging Health 2021, 33, 618–632.
  21. Fante, C.; Palermo, S.; Auriemma, V.; Rosalba, M. Social Inclusion and Exclusion: How Evolution Changes Our Relational and Social Brain. In Evolutionary Psychology Meets Social Neuroscience; Morese, R., Auriemma, V., Palermo, S., Eds.; IntechOpen: Rijeka, Croatia, 2021.
  22. Baumeister, R.; Leary, M. The Need to Belong: Desire for Interpersonal Attachments as a Fundamental Human Motivation. Psychol. Bull. 1995, 117, 497–529.
  23. Jung, M.F.; Šabanović, S.; Eyssel, F.; Fraune, M. Robots in groups and teams. In Proceedings of the Companion of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, Portland, OR, USA, 25 February–1 March 2017; pp. 401–407.
  24. Edwards, J. Human-Robot Teams: The Next IT Management Challenge, Information Week, 6 January 2022. Available online: https://www.informationweek.com/big-data/human-robot-teams-the-next-it-management-challenge (accessed on 4 March 2023).
  25. Krämer, N.C.; von der Pütten, A.; Eimler, S. Human-Agent and Human-Robot Interaction Theory: Similarities to and Differences from Human-Human Interaction. Stud. Comput. Intell. 2012, 396, 215–240.
  26. Nass, C.; Moon, Y. Machines and Mindlessness: Social Responses to Computers. J. Soc. Issues 2000, 56, 81–103.
  27. Eyssel, F.; Kuchenbrandt, D. Social categorization of social robots: Anthropomorphism as a function of robot group membership. Br. J. Soc. Psychol. 2012, 51, 724–731.
  28. Burch, J. In Japan, a Buddhist Funeral Service for Robot Dogs. National Geographic. 25 May 2018. Available online: https://www.nationalgeographic.com/travel/article/in-japan--a-buddhist-funeral-service-for-robot-dogs (accessed on 4 April 2021).
  29. Carman, A. They Welcomed a Robot into Their Family, Now They’re Mourning Its Death. The Verge. 19 June 2019. Available online: https://www.theverge.com/2019/6/19/18682780/jibo-death-server-update-social-robot-mourning (accessed on 4 April 2021).
  30. Carter, E.J.; Reig, S.; Tian, X.Z.; Laput, G.; Rosenthal, S.; Steinfeld, A. Death of a Robot: Social Media Reactions and Language Usage when a Robot Stops Operating. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, Sapporo, Japan, 7–10 March 2022; pp. 589–597.
  31. Allport, G.W. The Nature of Prejudice; Addison-Wesley: Cambridge, MA, USA, 1954.
  32. Reeves, B.; Nass, C.I. The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places; Cambridge University Press: Cambridge, UK, 1996.
  33. Gou, M.S.; Webb, T.L.; Prescott, T.L. The effect of direct and extended contact on attitudes towards social robots. Heliyon 2021, 7, e06418.
  34. Millar, J. Social Exclusion and Social Policy Research: Defining Exclusion. In Multidisciplinary Handbook of Social Exclusion Research; Abrams, D., Christian, J., Gordon, D., Eds.; John Wiley & Sons: Hoboken, NJ, USA, 2007; pp. 1–15.
  35. Daly, M. Social Exclusion as Concept and Policy Template in the European Union. CES Working Paper, No. 135. 2006. Available online: http://aei.pitt.edu/9026/1/Daly135.pdf (accessed on 18 March 2023).
  36. Jones, E.E.; Carter-Sowell, A.R.; Kelly, J.R.; Williams, K.D. ‘I’m Out of the Loop’: Ostracism Through Information Exclusion. Group Process. Intergroup Relat. 2009, 12, 157–174.
  37. Jones, E.E.; Kelly, J.R. ‘Why Am I Out of the Loop?’ Attributions Influence Responses to Information Exclusion. Personal. Soc. Psychol. Bull. 2010, 36, 1186–1201.
  38. Jones, E.E.; Carter-Sowell, A.R.; Kelly, J.R. Participation Matters: Psychological and Behavioral Consequences of Information Exclusion in Groups. Group Dyn. Theory Res. Pract. 2011, 15, 311–325.
  39. Ruijten, P.A.M.; Ham, J.; Midden, C.J.H. Investigating the Influence of Social Exclusion on Persuasion by a Virtual Agent. In Persuasive Technology, Persuasive 2014; Spagnolli, A., Chittaro, L., Gamberini, L., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2014; Volume 8462, pp. 191–200.
  40. Erel, H.; Cohen, Y.; Shafrir, K.; Levy, S.D.; Vidra, I.D.; Tov, T.S.; Zuckerman, O. Excluded by Robots: Can Robot-Robot-Human Interaction Lead to Ostracism? In Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, Boulder, CO, USA, 9–11 March 2021; pp. 312–321.
  41. Claure, H.; Chen, Y.; Modi, J.; Jung, M.; Nikolaidis, S. Multi-Armed Bandits with Fairness Constraints for Distributing Resources to Human Teammates. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, Cambridge, UK, 23–26 March 2020; pp. 299–308.
  42. Wang, R.; Harper, F.M.; Zhu, H. Factors Influencing Perceived Fairness in Algorithmic Decision-Making: Algorithm Outcomes, Development Procedures, and Individual Differences. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–14.
  43. O’Neil, C. Weapons of Math Destruction; Crown Books: New York, NY, USA, 2016.
  44. van Beest, I.; Williams, K.D. When Inclusion Costs and Ostracism Pays, Ostracism Still Hurts. J. Personal. Soc. Psychol. 2006, 91, 918–928.
  45. Twenge, J.M.; Baumeister, R.F.; Tice, D.M.; Stucke, T.S. If You Can’t Join Them, Beat Them: Effects of Social Exclusion on Aggressive Behavior. J. Personal. Soc. Psychol. 2001, 81, 1058–1069.
  46. DeWall, C.N.; Baumeister, R.F. Alone but Feeling No Pain: Effects of Social Exclusion on Physical Pain Tolerance and Pain Threshold, Affective Forecasting, and Interpersonal Empathy. J. Personal. Soc. Psychol. 2006, 91, 1–15.
  47. Baumeister, R.F.; Twenge, J.M.; Nuss, C.K. Effects of Social Exclusion on Cognitive Processes: Anticipated Aloneness Reduces Intelligent Thought. J. Personal. Soc. Psychol. 2002, 83, 817–827.
  48. Baumeister, R.; Twenge, J.M.; Ciarocco, N.J. Social Exclusion Impairs Self-Regulation. J. Personal. Soc. Psychol. 2005, 88, 589–604.
  49. Gerber, J.P.; Chang, S.H.; Reimel, H. Construct validity of Williams’ ostracism needs threat scale. Personal. Individ. Differ. 2017, 115, 50–53.
  50. Zadro, L.; Williams, K.D.; Richardson, R. How low can you go? Ostracism by a computer is sufficient to lower self-reported levels of belonging, control, self-esteem, and meaningful existence. J. Exp. Soc. Psychol. 2004, 40, 560–567. [Google Scholar] [CrossRef]
  51. Jung, M.F.; Martelaro, N.; Hinds, P.J. Using robots to moderate team conflict: The case of repairing violations. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, Portland, OR, USA, 2–5 March 2015; pp. 229–236. [Google Scholar]
  52. Bartneck, C.; Kulic, D.; Croft, E.A.; Zoghbi, S. Measurement Instruments for the Anthropomorphism, Animacy, Likeability, Perceived Intelligence, and Perceived Safety of Robots. Int. J. Soc. Robot. 2008, 1, 71–81. [Google Scholar] [CrossRef] [Green Version]
  53. Nederhof, A.J. Methods of coping with social desirability bias: A review. Eur. J. Soc. Psychol. 1985, 15, 263–280. [Google Scholar] [CrossRef]
  54. Krumpal, I. Determinants of social desirability bias in sensitive surveys: A literature review. Qual. Quant. 2013, 47, 2025–2047. [Google Scholar] [CrossRef]
  55. Alač, M. Social robots: Things or agents? AI Soc. 2016, 31, 519–535. [Google Scholar] [CrossRef]
  56. Licoppe, C.; Rollet, N. «Je dois y aller». Analyses de séquences de clôtures entre humains et robot [“I have to go”: Analyses of closing sequences between humans and a robot]. Réseaux 2020, 220–221, 151–193. [Google Scholar] [CrossRef]
  57. Robert, L.P.; Alahmad, R.; Esterwood, C.; Kim, S.; You, S.; Zhang, Q. A Review of Personality in Human Robot Interactions. arXiv 2020, arXiv:2001.1177. [Google Scholar] [CrossRef] [Green Version]
  58. Ferrin, D.L.; Dirks, K.T.; Shah, P.P. Direct and indirect effects of third-party relationships on interpersonal trust. J. Appl. Psychol. 2006, 91, 870–883. [Google Scholar] [CrossRef]
  59. Bartneck, C.; van der Hoek, M.; Mubin, O.; Al Mahmud, A. ‘Daisy, daisy, give me your answer do!’ switching off a robot. In Proceedings of the 2nd ACM/IEEE International Conference on Human-Robot Interaction (HRI), Arlington, VA, USA, 9–11 March 2007; pp. 217–222. [Google Scholar]
  60. Nash, K.; Lea, J.M.; Davies, T.; Yogeeswaran, K. The bionic blues: Robot rejection lowers self-esteem. Comput. Hum. Behav. 2018, 78, 59–63. [Google Scholar] [CrossRef]
  61. Syrdal, D.S.; Dautenhahn, K.; Koay, K.L.; Walters, M.L. The Negative Attitudes towards Robots Scale and reactions to robot behaviour in a live Human-Robot Interaction study, Adaptive and Emergent Behaviour and Complex Systems. In Proceedings of the 23rd Convention of the Society for the Study of Artificial Intelligence and Simulation of Behaviour, Edinburgh, UK, 6–9 April 2009. [Google Scholar]
  62. Backonja, U.; Hall, A.K.; Painter, I.; Kneale, L.; Lazar, A.; Cakmak, M.; Thompson, H.J.; Demiris, G. Comfort and Attitudes Towards Robots Among Young, Middle-Aged, and Older Adults: A Cross-Sectional Study. J. Nurs. Scholarsh. 2018, 50, 623–633. [Google Scholar] [CrossRef]
  63. de Graaf, M.M.A.; Allouch, S.B. The relation between people’s attitude and anxiety towards robots in human-robot interaction. In Proceedings of the IEEE RO-MAN, Gyeongju, Korea, 26–29 August 2013; pp. 632–637. [Google Scholar]
  64. Rosner, D.K. Critical Fabulations: Reworking the Methods and Margins of Design; MIT Press: Cambridge, MA, USA, 2020. [Google Scholar]
  65. Ryland, H. It’s Friendship, Jim, but Not as We Know It: A Degrees-of-Friendship View of Human-Robot Friendships. Minds Mach. 2021, 31, 377–393. [Google Scholar] [CrossRef]
Figure 1. Experimental setup in the laboratory.
Figure 2. Graphics used to assess group identification.
Figure 3. Means with 95% confidence intervals for the mood variables in the control group, the favored participants, and the discriminated participants.
Figure 4. Means with 95% confidence intervals for the robot’s likeability variable and the indicator of its subjective assessment.
Figure 5. Means with 95% confidence intervals for the feeling of belonging to the team and the feeling of being excluded from the team.
Table 1. Basic descriptive statistics and the result of the Shapiro–Wilk test.

Research group (N = 62)
                                     M      Me     SD     Sk.     Kurt.    Min.    Max.    W      p
Mood before                          3.71   4.00   0.78   −0.74    1.59    1.00    5.00   0.83   <0.001
Mood after                           3.87   4.00   0.81   −0.93    1.87    1.00    5.00   0.82   <0.001
Belonging to the team                4.05   4.00   0.84   −0.61   −0.13    2.00    5.00   0.84   <0.001
Assessment of the robot              3.85   4.00   0.57   −0.73    0.30    2.33    4.83   0.95    0.010
Exclusion from the group             4.42   4.50   0.39   −0.89    0.65    3.25    5.00   0.94    0.003

Groups (N = 31)
Game completion time (in minutes)    7.27   7.50   1.89   −0.07   −1.11    4.00   10.00   0.94    0.061

Abbreviations: M: mean; Me: median; SD: standard deviation; Sk.: skewness; Kurt.: kurtosis; Min.: minimum; Max.: maximum; W: Shapiro–Wilk statistic; p: p value.
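For readers who wish to run this kind of descriptive screening themselves, the short Python sketch below shows how the statistics and the Shapiro–Wilk test reported in Table 1 could be computed with SciPy. This is not the authors’ analysis script; the function name and the example ratings are invented for illustration.

# Minimal sketch (not the authors' code) of the Table 1 statistics for one variable.
import numpy as np
from scipy import stats

def describe_with_shapiro(values):
    """Return descriptive statistics and the Shapiro-Wilk test for a single variable."""
    x = np.asarray(values, dtype=float)
    w, p = stats.shapiro(x)  # Shapiro-Wilk W statistic and p value
    return {
        "M": x.mean(),
        "Me": float(np.median(x)),
        "SD": x.std(ddof=1),                     # sample standard deviation
        "Sk.": stats.skew(x, bias=False),
        "Kurt.": stats.kurtosis(x, bias=False),  # excess kurtosis
        "Min.": x.min(),
        "Max.": x.max(),
        "W": w,
        "p": p,
    }

mood_before = [4, 3, 5, 4, 2, 4, 3, 5, 4, 4, 3, 4]  # hypothetical 1-5 ratings
print(describe_with_shapiro(mood_before))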
Table 2. Differences in the participants’ mood before and after the team interaction.

               Control (n = 20)    Favored (n = 21)    Discriminated (n = 21)
               M      SD           M      SD           M      SD                F      p        η2
Mood before    3.60   0.99         3.81   0.75         3.71   0.56              0.37   0.695    0.01
Mood after     3.90   1.07         3.89   0.74         3.81   0.60              0.08   0.925   <0.01
Table 3. Differences in the robot’s likeability and the participants’ assessment of the robot.

                                      Control (n = 20)    Favored (n = 21)    Discriminated (n = 21)
                                      M      SD           M      SD           M      SD                F      p       η2
How much did you like Nao             3.85   1.04         4.40   0.68         3.81   0.93              2.74   0.073   0.09
Subjective assessment of the robot    3.67   0.69         4.09   0.44         3.80   0.50              3.10   0.052   0.09
Table 4. Comparison of the groups on the feeling of belonging to the team and the feeling of being excluded from the team.

                                                        Control (n = 20)    Favored (n = 21)    Discriminated (n = 21)
                                                        M        SD         M        SD         M        SD              F      p       η2
Feeling of belonging to the team                        4.10     0.72       4.24     0.77       3.81     0.98            1.45   0.242   0.05
Feeling of being excluded (Needs Threat Scale items)    4.51 a   0.25       4.56 a   0.34       4.16 b   0.46            7.53   0.001   0.21

Annotation: Means with different letter indices differ from each other at p < 0.05.
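The F, p, and η2 columns in Tables 2–4 correspond to one-way ANOVAs across the three conditions with eta-squared as the effect size (the letter indices in Table 4 would additionally require a post-hoc pairwise test, which is omitted here). As a rough illustration only, and not the authors’ script, such a comparison could be run as follows; the group data generated below are random numbers, not the study data.

# Minimal sketch of a one-way ANOVA with eta-squared computed from sums of squares.
import numpy as np
from scipy import stats

def one_way_anova_with_eta_squared(*groups):
    """Return F, p, and eta-squared for a one-way ANOVA over independent groups."""
    f, p = stats.f_oneway(*groups)
    pooled = np.concatenate(groups)
    grand_mean = pooled.mean()
    ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
    ss_total = ((pooled - grand_mean) ** 2).sum()
    return f, p, ss_between / ss_total

rng = np.random.default_rng(0)
control = rng.normal(4.51, 0.25, 20)        # simulated scores, not the real data
favored = rng.normal(4.56, 0.34, 21)
discriminated = rng.normal(4.16, 0.46, 21)

f, p, eta_sq = one_way_anova_with_eta_squared(control, favored, discriminated)
print(f"F = {f:.2f}, p = {p:.3f}, eta-squared = {eta_sq:.2f}")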
Table 5. Comparison of the control group and the experimental group in terms of the game completion time.

                        Control (n = 10)                 Experimental (n = 21)
                        Mean Rank    M      SD           Mean Rank    M      SD            U       p        η2
Game completion time    15.35        7.15   1.99         16.31        7.33   1.89          98.50   0.787   <0.01
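The U statistic and mean ranks in Table 5 indicate a Mann–Whitney U test on the game completion times of the two groups. The sketch below shows how such a test and the mean ranks could be obtained with SciPy; it is not the authors’ code, and the completion times are invented.

# Minimal sketch of a Mann-Whitney U test with mean ranks from the pooled ranking.
import numpy as np
from scipy import stats

control_times = np.array([7.0, 6.5, 8.0, 7.5, 9.0, 5.5, 6.0, 8.5, 7.0, 6.5])           # n = 10
experimental_times = np.array([7.5, 8.0, 6.0, 9.5, 7.0, 5.0, 8.5, 7.5, 6.5, 9.0,
                               7.0, 8.0, 6.0, 7.5, 10.0, 4.5, 7.0, 8.5, 6.5, 7.5, 9.0])  # n = 21

u, p = stats.mannwhitneyu(control_times, experimental_times, alternative="two-sided")

ranks = stats.rankdata(np.concatenate([control_times, experimental_times]))
mean_rank_control = ranks[: len(control_times)].mean()
mean_rank_experimental = ranks[len(control_times):].mean()

print(f"U = {u:.2f}, p = {p:.3f}")
print(f"Mean rank (control) = {mean_rank_control:.2f}, "
      f"mean rank (experimental) = {mean_rank_experimental:.2f}")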