Special Issue "Advanced Human-Robot Interaction"

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Robotics and Automation".

Deadline for manuscript submissions: 15 December 2023 | Viewed by 7422

Special Issue Editors

School of Computing, Engineering and Built Environment, Glasgow Caledonian University, Glasgow G4 0BA, UK
Interests: human-computer interaction; virtual/augmented reality; artificial intelligence; simulation systems
Medical School, University of Nicosia, 46 Makedonitissas Avenue, Nicosia CY-2417, Cyprus
Interests: medical physics; fluid dynamics; heat transfer; machine learning; engineering science; emerging technologies

Special Issue Information

Dear Colleagues,

We are inviting submissions to a Special Issue on “Advanced Human–Robot Interaction”.

Human–robot interaction (HRI) is rapidly becoming a significant research field and one of the central issues in the development of smart homes.

Robots and robotic devices enable people to perform numerous tasks with improved speed and safety. Beyond routine tasks, robots are gradually becoming an integral part of daily human activities and are expanding into previously unexplored domains. Human–robot social interaction, virtual/augmented reality interaction with robots, autonomous vehicles, medical robots, drones, and drone swarms are just a few of the advances attracting primary interest from the research community. Techniques to improve human–robot interaction, machine learning for robot activities, simulations, and user experience could further unlock the full potential of robots and support their seamless integration into current and future societies.

In this Special Issue, we invite submissions exploring cutting-edge research and recent advances in HRI and the contribution of emerging technologies. Both theoretical and experimental studies are welcome, as are comprehensive reviews and survey papers. The keywords below offer an indication of the invited topics and are by no means exhaustive.

Prof. Dr. Vassilis Charissis
Prof. Dr. Dimitris Drikakis
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form to submit your manuscript. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts are available on the Instructions for Authors page. Applied Sciences is an international, peer-reviewed, open access, semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2300 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • HRI and artificial intelligence/machine learning (ML)
  • physical and virtual HRI (VR/AR/MR)
  • HRI collaborative environments
  • autonomous vehicles and driver interaction
  • drones/drone swarms and human interaction
  • artificial consciousness
  • human–robot collaboration

Published Papers (5 papers)


Research


Article
Patient–Robot Co-Navigation of Crowded Hospital Environments
Appl. Sci. 2023, 13(7), 4576; https://doi.org/10.3390/app13074576 - 04 Apr 2023
Viewed by 1020
Abstract
Intelligent multi-purpose robotic assistants have the potential to assist nurses with a variety of non-critical tasks, such as fetching objects, disinfecting areas, or supporting patient care. This paper focuses on enabling a multi-purpose robot to guide patients while walking. The proposed robotic framework aims to enable a robot to learn how to navigate a crowded hospital environment while maintaining contact with the patient. Two deep reinforcement learning models are developed: the first considers only dynamic obstacles (e.g., humans), while the second considers both static and dynamic obstacles in the environment. The models output the robot's velocity based on the following inputs: the patient's gait velocity (computed with a leg detection method) and spatial and temporal information about the environment, the humans in the scene, and the robot. The proposed models demonstrate promising results. Finally, the model that considers both static and dynamic obstacles is successfully deployed in the Gazebo simulation environment.
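The input–output structure described in the abstract (patient gait velocity plus environment sensing in, robot velocity out) can be illustrated with a toy sketch. This is a hand-written stand-in, not the authors' learned policy: the names, the velocity-matching rule, and the safety threshold are all assumptions for illustration only.

```python
# Illustrative sketch only: the paper's models are trained with deep
# reinforcement learning; this hand-coded rule merely shows the kind of
# observation-to-velocity mapping the abstract describes.
from dataclasses import dataclass


@dataclass
class Observation:
    patient_gait_velocity: float  # m/s, e.g., from a leg detection method
    nearest_obstacle_dist: float  # m, from environment sensing


def command_velocity(obs: Observation,
                     v_max: float = 1.2,
                     safety_dist: float = 0.5) -> float:
    """Toy stand-in for the learned policy: match the patient's gait
    speed, capped at v_max, and stop when an obstacle is too close."""
    if obs.nearest_obstacle_dist < safety_dist:
        return 0.0  # stop to avoid collision
    return min(obs.patient_gait_velocity, v_max)
```

For example, a slow-walking patient in open space yields a matching slow command, while a nearby obstacle forces a stop regardless of gait speed.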
(This article belongs to the Special Issue Advanced Human-Robot Interaction)

Article
Integrating Virtual, Mixed, and Augmented Reality to Human–Robot Interaction Applications Using Game Engines: A Brief Review of Accessible Software Tools and Frameworks
Appl. Sci. 2023, 13(3), 1292; https://doi.org/10.3390/app13031292 - 18 Jan 2023
Cited by 1 | Viewed by 2841
Abstract
This article identifies and summarizes software tools and frameworks proposed in the Human–Robot Interaction (HRI) literature for developing extended reality (XR) experiences using game engines. The review includes primary studies proposing Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) solutions in which humans can control or interact with real robotic platforms using devices that extend the user's reality. The objective is not to present an extensive list of applications and tools; instead, we present recent, relevant, common, and accessible frameworks and software tools implemented in research articles published in high-impact robotics conferences and journals. For this, we searched papers published during a seven-year period, between 2015 and 2022, in databases relevant to robotics (ScienceDirect, IEEE Xplore, ACM Digital Library, Springer Link, and Web of Science). Additionally, we classify the application context of the reviewed articles into four groups: social robotics, programming of industrial robots, teleoperation of industrial robots, and human–robot collaboration (HRC).
(This article belongs to the Special Issue Advanced Human-Robot Interaction)

Article
How Does Exposure to Changing Opinions or Reaffirmation Opinions Influence the Thoughts of Observers and Their Trust in Robot Discussions?
Appl. Sci. 2023, 13(1), 585; https://doi.org/10.3390/app13010585 - 31 Dec 2022
Viewed by 1038
Abstract
This study investigated how exposure to changing or reaffirming opinions in robot conversations influences observers' impressions and their trust in media. Even when the conversational contents provide the same amount of information, their order, positive/negative attitudes, and discussion styles change the impressions they create. We conducted a web survey using video stimuli in which two robots discussed Japan's first state-of-emergency response to the COVID-19 pandemic. We prepared two patterns of opinion change to a different side (positive–negative and negative–positive) and two patterns of opinion reaffirmation (positive–positive and negative–negative) with identical information contents; only the order was modified. The experimental results showed that exposure to an opinion change from the positive side (i.e., negative–positive) or positive opinion reaffirmation (positive–positive) effectively conveys positive and fair impressions. Exposure to an opinion that became negative (i.e., positive–negative) effectively conveyed negative yet fair impressions, although negative opinion reaffirmation (negative–negative) led to significantly less trust in media.
(This article belongs to the Special Issue Advanced Human-Robot Interaction)

Review


Review
We Do Not Anthropomorphize a Robot Based Only on Its Cover: Context Matters too!
Appl. Sci. 2023, 13(15), 8743; https://doi.org/10.3390/app13158743 - 28 Jul 2023
Cited by 1 | Viewed by 438
Abstract
The increasing presence of robots in our society raises questions about how these objects are perceived by users. Individuals seem inclined to attribute human capabilities to robots, a phenomenon called anthropomorphism. Contrary to what intuition might suggest, these attributions vary according to several factors: not only robotic factors (related to the robot itself) but also situational factors (related to the interaction setting) and human factors (related to the user). The present review synthesizes the literature on the factors that influence anthropomorphism in order to specify their impact on how individuals perceive robots. A total of 134 experimental studies, published from 2002 to 2023, were included. The mere appearance hypothesis and the SEEK (sociality, effectance, and elicited agent knowledge) theory are two theories attempting to explain anthropomorphism. According to the present review, which highlights the crucial role of contextual factors, the SEEK theory explains the observations better than the mere appearance hypothesis, although it does not explicitly account for all the factors involved (e.g., the autonomy of the robot). Moreover, the large methodological variability in the study of anthropomorphism makes generalizing results complex. Recommendations are proposed for future studies.
(This article belongs to the Special Issue Advanced Human-Robot Interaction)
Review
Considerations for Developing Robot-Assisted Crisis De-Escalation Practices
Appl. Sci. 2023, 13(7), 4337; https://doi.org/10.3390/app13074337 - 29 Mar 2023
Viewed by 959
Abstract
Robots are increasingly entering the social sphere and taking on more sophisticated roles. One application for which robots are already being deployed is civilian security, in which robots augment security and police forces. In this domain, robots will encounter individuals in crisis who may pose a threat to themselves, others, or property. In such interactions, a key goal for human police and security officers is to de-escalate the situation. This paper considers the task of using mobile robots for de-escalation, drawing on the mechanisms developed for de-escalation in human–human interactions. What strategies should a robot follow to leverage existing de-escalation approaches? Given these strategies, what sensing and interaction capabilities should a robot possess to engage in de-escalation tasks with humans? First, we discuss the current understanding of de-escalation with individuals in crisis and present a working model of the de-escalation process and strategies. Next, we review the capabilities that an autonomous agent should demonstrate to apply such strategies in robot-mediated crisis de-escalation. Finally, we explore data-driven approaches to training robots in de-escalation and the next steps for moving the field forward.
(This article belongs to the Special Issue Advanced Human-Robot Interaction)
