Advanced Human-Robot Interaction

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Robotics and Automation".

Deadline for manuscript submissions: 20 July 2024 | Viewed by 16270

Special Issue Editors


Prof. Dr. Vassilis Charissis
Guest Editor
School of Computing, Engineering and Built Environment, Glasgow Caledonian University, Glasgow G4 0BA, UK
Interests: human–computer interaction; virtual/augmented reality; artificial intelligence; simulation systems

Prof. Dr. Dimitris Drikakis
Guest Editor
Institute for Advanced Modeling and Simulation, University of Nicosia, 2417 Nicosia, Cyprus
Interests: computational fluid dynamics; turbulence; shock waves; multi-component mixing; micro- and nano-scale flows; machine learning; artificial intelligence

Special Issue Information

Dear Colleagues,

We are inviting submissions to a Special Issue on “Advanced Human–Robot Interaction”.

Human–robot interaction (HRI) is rapidly becoming a significant research field and one of the central issues in the development of smart homes.

Robots and robotic devices enable humans to perform numerous tasks with improved speed and safety. Beyond typical routine tasks, robots are gradually becoming an integral part of daily human activities and are expanding into different, previously unexplored domains. As such, human–robot social interaction, virtual/augmented reality interaction with robots, autonomous vehicles, medical robots, drones, and drone swarms are just a few of the advances attracting primary interest from the research community. Techniques to improve human–robot interaction, machine learning for robot activities, simulations, and user experience could further unlock the full potential of robots and support their seamless integration into current and future societies.

In this Special Issue, we invite submissions exploring cutting-edge research and recent advances in HRI and the contribution of emerging technologies. Both theoretical and experimental studies are welcome, as are comprehensive reviews and survey papers. The following keywords offer an indication of the invited topics and are by no means limiting.

Prof. Dr. Vassilis Charissis
Prof. Dr. Dimitris Drikakis
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • HRI and artificial intelligence/ML
  • physical and virtual HRI (VR/AR/MR)
  • HRI collaborative environments
  • autonomous vehicles and driver interaction
  • drones/drone swarms and human interaction
  • artificial consciousness
  • human–robot collaboration

Published Papers (8 papers)


Research

17 pages, 3067 KiB  
Article
Complexity-Driven Trust Dynamics in Human–Robot Interactions: Insights from AI-Enhanced Collaborative Engagements
by Yi Zhu, Taotao Wang, Chang Wang, Wei Quan and Mingwei Tang
Appl. Sci. 2023, 13(24), 12989; https://doi.org/10.3390/app132412989 - 05 Dec 2023
Viewed by 916
Abstract
This study explores the intricate dynamics of trust in human–robot interaction (HRI), particularly in the context of modern robotic systems enhanced by artificial intelligence (AI). By grounding our investigation in the principles of interpersonal trust, we identify and analyze both similarities and differences between trust in human–human interactions and human–robot scenarios. A key aspect of our research is the clear definition and characterization of trust in HRI, including the identification of factors influencing its development. Our empirical findings reveal that trust in HRI is not static but varies dynamically with the complexity of the tasks involved. Notably, we observe a stronger tendency to trust robots in tasks that are either very straightforward or highly complex. In contrast, for tasks of intermediate complexity, there is a noticeable decline in trust. This pattern of trust challenges conventional perceptions and emphasizes the need for nuanced understanding and design in HRI. Our study provides new insights into the nature of trust in HRI, highlighting its dynamic nature and the influence of task complexity, thereby offering a valuable reference for future research in the field.
(This article belongs to the Special Issue Advanced Human-Robot Interaction)

15 pages, 3184 KiB  
Article
Development of a Play-Tag Robot with Human–Robot Contact
by Yutaka Hiroi, Kenzaburo Miyawaki and Akinori Ito
Appl. Sci. 2023, 13(23), 12909; https://doi.org/10.3390/app132312909 - 01 Dec 2023
Viewed by 958
Abstract
Many robots that play with humans have been developed so far, but developing a robot that physically contacts humans while playing is challenging. We have developed robots that play tag with humans, which find players, approach them, and move away from them. However, the developed algorithm for approaching a player was insufficient because it did not consider how the arms are attached to the robot. Therefore, in this paper, we assume that the arms are fixed on both sides of the robot and develop a new algorithm to approach the player and touch them with an arm. Since the algorithm aims to move along a circular orbit around a player, we call this algorithm “the go-round mode”. To investigate the effectiveness of the proposed method, we conducted two experiments. The first is a simulation experiment, which showed that the proposed method outperformed the previous one. In the second experiment, we implemented the proposed method in a real robot and conducted an experiment to chase and touch the player. As a result, the robot could touch the player in all the trials without collision.
(This article belongs to the Special Issue Advanced Human-Robot Interaction)
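
The abstract describes the “go-round mode” only at a high level. Purely as an illustration of the idea (converging to a circular orbit around the player so that a side-mounted arm comes within reach), a minimal planar velocity sketch might look like the following; all names, gains, and the orbit radius are assumptions rather than the authors' implementation:

```python
import math

# Illustrative constants (assumptions; not taken from the paper).
ORBIT_RADIUS = 0.6  # desired robot-player distance in meters, roughly arm's reach
K_RADIAL = 1.0      # gain pulling the robot onto the orbit circle
V_TANGENT = 0.5     # tangential speed along the orbit in m/s

def go_round_velocity(robot_xy, player_xy):
    """Blend a radial correction with a tangential term so the robot
    converges to a circle of radius ORBIT_RADIUS around the player."""
    dx, dy = player_xy[0] - robot_xy[0], player_xy[1] - robot_xy[1]
    dist = math.hypot(dx, dy)
    if dist < 1e-6:
        return 0.0, 0.0  # degenerate case: robot is on top of the player
    rx, ry = dx / dist, dy / dist  # unit vector toward the player
    tx, ty = -ry, rx               # perpendicular: direction along the orbit
    radial = K_RADIAL * (dist - ORBIT_RADIUS)  # >0 outside the circle, <0 inside
    return radial * rx + V_TANGENT * tx, radial * ry + V_TANGENT * ty
```

On the circle, the radial term vanishes and only the tangential term remains, which is what produces the circular motion around the player that the abstract refers to.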

12 pages, 5849 KiB  
Article
Comparing Usability of Augmented Reality and Virtual Reality for Creating Virtual Bounding Boxes of Real Objects
by Nyan Kyaw, Morris Gu, Elizabeth Croft and Akansel Cosgun
Appl. Sci. 2023, 13(21), 11693; https://doi.org/10.3390/app132111693 - 26 Oct 2023
Viewed by 1015
Abstract
This study conducts a comparative analysis of user experiences of Augmented Reality (AR) and Virtual Reality (VR) headsets during an interactive semantic mapping task. This task entails the placement of virtual objects onto real-world counterparts. Our investigation focuses on discerning the distinctive features of each headset and their respective advantages within a semantic mapping context. The experiment employs a user interface enabling the creation, manipulation, and labeling of virtual 3D holograms. To ensure parity between the headsets, the VR headset mimics AR by relaying its camera feed to the user. A comprehensive user study, encompassing 12 participants tasked with mapping six tabletop objects, compares interface usability and performance between the headsets. The study participants’ evaluations highlight that the VR headset offers enhanced user-friendliness and responsiveness compared to the AR headset. Nonetheless, the AR headset excels in augmenting environmental perception and interpretation, surpassing VR in this aspect. Consequently, the study underscores that current handheld motion controllers for interacting with virtual environments outperform existing hand gesture interfaces. Furthermore, it suggests potential improvements for VR devices, including an upgraded camera feed integration. Significantly, this experiment unveils the feasibility of leveraging VR headsets for AR applications without compromising user experience. However, it also points to the necessity of future research addressing prolonged usage scenarios for both types of headsets in various interactive tasks.
(This article belongs to the Special Issue Advanced Human-Robot Interaction)
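
As background on the semantic mapping task itself, a small sketch of a labeled 3D bounding box record is given below; the field names and containment test are illustrative assumptions, since the study's interface code is not reproduced here:

```python
import math
from dataclasses import dataclass

@dataclass
class LabeledBoundingBox:
    """A virtual 3D box placed over a real object, as in the mapping task:
    a pose (center + yaw), an extent, and a semantic label."""
    label: str                          # e.g., "mug" or "keyboard"
    center: tuple[float, float, float]  # box center in world coordinates (m)
    size: tuple[float, float, float]    # width, depth, height (m)
    yaw: float                          # rotation about the vertical axis (rad)

    def contains(self, point):
        """Test whether a world-frame point lies inside the box by rotating
        it into the box frame and comparing against the half-extents."""
        px, py, pz = (p - c for p, c in zip(point, self.center))
        c, s = math.cos(-self.yaw), math.sin(-self.yaw)
        lx, ly = c * px - s * py, s * px + c * py
        return all(abs(v) <= e / 2 for v, e in zip((lx, ly, pz), self.size))
```

A participant's “place a box on the mug” action would then amount to constructing one such record and adjusting its pose until it aligns with the real object.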

19 pages, 1750 KiB  
Article
Patient–Robot Co-Navigation of Crowded Hospital Environments
by Krishna Kodur and Maria Kyrarini
Appl. Sci. 2023, 13(7), 4576; https://doi.org/10.3390/app13074576 - 04 Apr 2023
Viewed by 1772
Abstract
Intelligent multi-purpose robotic assistants have the potential to assist nurses with a variety of non-critical tasks, such as object fetching, disinfecting areas, or supporting patient care. This paper focuses on enabling a multi-purpose robot to guide patients while walking. The proposed robotic framework aims at enabling a robot to learn how to navigate a crowded hospital environment while maintaining contact with the patient. Two deep reinforcement learning models are developed; the first model considers only dynamic obstacles (e.g., humans), while the second model considers both static and dynamic obstacles in the environment. The models output the robot’s velocity based on the following inputs: the patient’s gait velocity, which is computed using a leg detection method, and spatial and temporal information from the environment, the humans in the scene, and the robot. The proposed models demonstrate promising results. Finally, the model that considers both static and dynamic obstacles is successfully deployed in the Gazebo simulation environment.
(This article belongs to the Special Issue Advanced Human-Robot Interaction)
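
Purely as an illustrative sketch of how the inputs listed in the abstract might be packed into a policy observation (the actual network architecture and feature layout are not given here, so every name below is an assumption):

```python
import numpy as np

def build_observation(patient_gait_vel, robot_state, human_tracks, local_obstacle_grid):
    """Flatten the inputs named in the abstract into one feature vector:
    the patient's gait velocity (estimated via leg detection), the robot's
    own state, nearby humans, and a local patch of static obstacles."""
    return np.concatenate([
        np.atleast_1d(np.float32(patient_gait_vel)),
        np.asarray(robot_state, dtype=np.float32).ravel(),
        np.asarray(human_tracks, dtype=np.float32).ravel(),
        np.asarray(local_obstacle_grid, dtype=np.float32).ravel(),
    ])

def policy_step(observation, W1, b1, W2, b2):
    """Stand-in for the trained deep RL policy: a tiny two-layer network
    mapping the observation to a normalized (linear, angular) velocity."""
    hidden = np.tanh(W1 @ observation + b1)
    linear_vel, angular_vel = np.tanh(W2 @ hidden + b2)
    return linear_vel, angular_vel
```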

18 pages, 1554 KiB  
Article
Integrating Virtual, Mixed, and Augmented Reality to Human–Robot Interaction Applications Using Game Engines: A Brief Review of Accessible Software Tools and Frameworks
by Enrique Coronado, Shunki Itadera and Ixchel G. Ramirez-Alpizar
Appl. Sci. 2023, 13(3), 1292; https://doi.org/10.3390/app13031292 - 18 Jan 2023
Cited by 8 | Viewed by 5456
Abstract
This article identifies and summarizes software tools and frameworks proposed in the Human–Robot Interaction (HRI) literature for developing extended reality (XR) experiences using game engines. This review includes primary studies proposing Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) solutions where humans can control or interact with real robotic platforms using devices that extend the user’s reality. The objective of this article is not to present an extensive list of applications and tools. Instead, we present recent, relevant, common, and accessible frameworks and software tools implemented in research articles published in high-impact robotics conferences and journals. For this, we searched for papers published during a seven-year period between 2015 and 2022 in relevant databases for robotics (Science Direct, IEEE Xplore, ACM Digital Library, Springer Link, and Web of Science). Additionally, we present and classify the application context of the reviewed articles in four groups: social robotics, programming of industrial robots, teleoperation of industrial robots, and human–robot collaboration (HRC).
(This article belongs to the Special Issue Advanced Human-Robot Interaction)

9 pages, 805 KiB  
Article
How Does Exposure to Changing Opinions or Reaffirmation Opinions Influence the Thoughts of Observers and Their Trust in Robot Discussions?
by Hiroki Itahara, Mitsuhiko Kimoto, Takamasa Iio, Katsunori Shimohara and Masahiro Shiomi
Appl. Sci. 2023, 13(1), 585; https://doi.org/10.3390/app13010585 - 31 Dec 2022
Cited by 2 | Viewed by 1492
Abstract
This study investigated how exposure to changing or reaffirming opinions in robot conversations influences the impressions of observers and their trust in media. Even though the provided conversational content includes the same amount of information, its order, positive/negative attitudes, and discussion styles change the perceived impressions. We conducted a web survey using video stimuli in which two robots discussed Japan’s first state-of-emergency response to the COVID-19 pandemic. We prepared two patterns of opinion change to a different side (positive–negative and negative–positive) and two patterns of opinion reaffirmation (positive–positive and negative–negative) with identical information content; we only modified the order. The experimental results showed that exposure to an opinion change from the positive side (i.e., negative–positive) or positive opinion reaffirmation (positive–positive) effectively provides positive and fair impressions. Exposure to an opinion that became negative (i.e., positive–negative) effectively provided negative and fair impressions, although negative opinion reaffirmation (negative–negative) led to significantly less trust in media.
(This article belongs to the Special Issue Advanced Human-Robot Interaction)

Review

38 pages, 599 KiB  
Review
We Do Not Anthropomorphize a Robot Based Only on Its Cover: Context Matters too!
by Marion Dubois-Sage, Baptiste Jacquet, Frank Jamet and Jean Baratgin
Appl. Sci. 2023, 13(15), 8743; https://doi.org/10.3390/app13158743 - 28 Jul 2023
Cited by 5 | Viewed by 1410
Abstract
The increasing presence of robots in our society raises questions about how these objects are perceived by users. Individuals seem inclined to attribute human capabilities to robots, a phenomenon called anthropomorphism. Contrary to what intuition might suggest, these attributions vary according to different factors, not only robotic factors (related to the robot itself), but also situational factors (related to the interaction setting), and human factors (related to the user). The present review aims at synthesizing the results of the literature concerning the factors that influence anthropomorphism, in order to specify their impact on the perception of robots by individuals. A total of 134 experimental studies were included from 2002 to 2023. The mere appearance hypothesis and the SEEK (sociality, effectance, and elicited agent knowledge) theory are two theories attempting to explain anthropomorphism. According to the present review, which highlights the crucial role of contextual factors, the SEEK theory better explains the observations on the subject compared to the mere appearance hypothesis, although it does not explicitly explain all the factors involved (e.g., the autonomy of the robot). Moreover, the large methodological variability in the study of anthropomorphism makes the generalization of results complex. Recommendations are proposed for future studies.
(This article belongs to the Special Issue Advanced Human-Robot Interaction)
17 pages, 742 KiB  
Review
Considerations for Developing Robot-Assisted Crisis De-Escalation Practices
by Kathryn Pierce, Debra J. Pepler, Stephanie G. Craig and Michael Jenkin
Appl. Sci. 2023, 13(7), 4337; https://doi.org/10.3390/app13074337 - 29 Mar 2023
Cited by 1 | Viewed by 1679
Abstract
Robots are increasingly entering the social sphere and taking on more sophisticated roles. One application for which robots are already being deployed is in civilian security tasks, in which robots augment security and police forces. In this domain, robots will encounter individuals in crisis who may pose a threat to themselves, others, or personal property. In such interactions with human police and security officers, a key goal is to de-escalate the situation to resolve the interaction. This paper considers the task of utilizing mobile robots in de-escalation tasks, using the mechanisms developed for de-escalation in human–human interactions. What strategies should a robot follow in order to leverage existing de-escalation approaches? Given these strategies, what sensing and interaction capabilities should a robot be capable of in order to engage in de-escalation tasks with humans? First, we discuss the current understanding of de-escalation with individuals in crisis and present a working model of the de-escalation process and strategies. Next, we review the capabilities that an autonomous agent should demonstrate to be able to apply such strategies in robot-mediated crisis de-escalation. Finally, we explore data-driven approaches to training robots in de-escalation and the next steps in moving the field forward.
(This article belongs to the Special Issue Advanced Human-Robot Interaction)

Planned Papers

The list below represents only planned manuscripts. Some of these manuscripts have not been received by the Editorial Office yet. Papers submitted to MDPI journals are subject to peer review.

Title: An Intelligent Real-time Teleoperation System Enabling Interaction with Social Robots
Author: Alhmiedat
Highlights: This paper introduces a cloud-based teleoperation system for robotics that enables real-time control of the robot platform, allowing remote users to view video streams and manipulate the robot system.
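
Since the manuscript itself has not been received yet, the following is only a minimal, hypothetical sketch of the kind of command message such a cloud teleoperation loop might exchange; every field name here is an assumption, not the authors' protocol:

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class TeleopCommand:
    """A remote user's control input, serialized as JSON for the cloud relay."""
    linear_vel: float   # forward velocity command (normalized -1..1)
    angular_vel: float  # turn-rate command (normalized -1..1)
    timestamp: float    # sender clock, lets the robot drop stale commands

def encode_command(linear_vel, angular_vel):
    """Package a command for transport; the robot side would decode it,
    check the timestamp against a staleness threshold, and actuate."""
    cmd = TeleopCommand(linear_vel, angular_vel, time.time())
    return json.dumps(asdict(cmd))
```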
