Article

Mixed-Reality-Enhanced Human–Robot Interaction with an Imitation-Based Mapping Approach for Intuitive Teleoperation of a Robotic Arm-Hand System

1 Mechanical Engineering Department, College of Engineering, University of Canterbury, Christchurch 8041, New Zealand
2 Manufacturing Futures Research Institute (MFRI), Swinburne University of Technology, Melbourne 3122, Australia
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(9), 4740; https://doi.org/10.3390/app12094740
Submission received: 1 April 2022 / Revised: 27 April 2022 / Accepted: 5 May 2022 / Published: 8 May 2022
(This article belongs to the Topic Virtual Reality, Digital Twins, the Metaverse)

Abstract

This paper presents an integrated mapping of motion and visualization scheme based on a Mixed Reality (MR) subspace approach for the intuitive and immersive telemanipulation of robotic arm-hand systems. The effectiveness of different control-feedback methods for the teleoperation system is validated and compared. The robotic arm-hand system consists of a 6 Degrees-of-Freedom (DOF) industrial manipulator and a low-cost 2-finger gripper, which can be manipulated in a natural manner by novice users physically distant from the working site. By incorporating MR technology, the user is fully immersed in a virtual operating space augmented by real-time 3D visual feedback from the robot working site. Imitation-based velocity-centric motion mapping is implemented via the MR subspace to accurately track operator hand movements for robot motion control and enables spatial velocity-based control of the robot Tool Center Point (TCP). The user control space and robot working space are overlaid through the MR subspace, and the local user and a digital twin of the remote robot share the same environment in the MR subspace. The MR-based motion and visualization mapping scheme for telerobotics is compared to conventional 2D Baseline and MR tele-control paradigms over two tabletop object manipulation experiments. A user survey of 24 participants was conducted to demonstrate the effectiveness and performance enhancements enabled by the proposed system. The MR-subspace-integrated 3D mapping of motion and visualization scheme reduced the aggregate task completion time by 48% compared to the 2D Baseline module and by 29% compared to the MR SpaceMouse module. The perceived workload decreased by 32% and 22% compared to the 2D Baseline and MR SpaceMouse approaches, respectively.

1. Introduction

With the rapid development of space exploration, deep-sea discovery, nuclear rescue, radiation detection, and robot-assisted medical equipment in recent years, humans urgently need interactive control of slave robots to complete remote operations [1,2]. More recently, medical robotic applications during the coronavirus pandemic have proven valuable [3,4]. Due to the highly contagious nature of the novel coronavirus, surgeons are at high risk of infection when examining and sampling patients face-to-face [5]. Meanwhile, oropharyngeal swabbing is a commonly used technique for COVID-19 sampling and diagnosis worldwide during the pandemic [6,7]. One application scenario of medical telerobotic systems is to teleoperate robots to conduct COVID-19 swab testing and provide other healthcare services, such as (1) robotics-assisted telesurgery, (2) tele-examination of patients before and after treatment, and (3) tele-training for surgical procedures [8,9]. On the user side of biomedical telerobotic systems, surgeons can operate a Human–Robot Interaction (HRI) system with an MR Head-Mounted Display (HMD) and control robots from a distance to perform surgery. Additionally, healthcare workers can telemanipulate robots for the care of infected patients or the collection of biological samples, which greatly reduces the risk of infection.
Human–robot interactive teleoperation technology remains the main means to realize remote operations in a complex and dynamic environment. Robotic teleoperation satisfies the demands of scenarios in which human access is dangerous but human intelligence is required [10]. Human-in-the-loop tele-control of robotic systems enables operators to remotely implement complex tasks in order to reduce risk without losing quality [11,12]. An interactive teleoperation system consists of five main components, including the human operator, master control loop, communication channel, slave control loop, and robotic agent. The operator passes commands to the slave loop via the communication channels [13], which also return information on the robot’s interaction with its environment [14], using visual and haptic feedback [15,16]. An effective teleoperation system not only enables intuitive HRI, but ensures that the robot can also be operated in a way allowing the operator to experience the “feel” of the robot working on the remote side [17], gaining a “sense of presence” [18,19].
Typically, in conventional systems, the user controls the remote robotic system with a joystick, gamepad, keyboard and mouse, or 3D mouse, and simultaneously receives visual feedback from 2D displays [20,21]. Robot control is not intuitive and natural to the user [22,23]. The mismatch between the range of user control space and the limits of the input device workspace can increase the difficulty of telemanipulation and lead to poor operation [24,25]. Another disadvantage of typical telerobotic systems is the lack of depth perception due to monocular, 2D visualization of the remote site [26,27,28], limiting operator performance [29,30] and any feelings of immersion and telepresence in the remote workspace [31].
MR technology has great potential in co-located and remote collaboration. Researchers have been exploring MR-enhanced collaboration to assist in interactive activities in co-located or remote scenarios [32]. MR-based remote collaboration holds great promise in situations where human/robotic contributors are physically separated and need to find a shared space to work on a common project. By leveraging the seamless integration of virtual and physical content, MR-based collaboration provides a common understanding or shared environment for collaborators. To date, the majority of research efforts have been focused on developing human–computer interaction (HCI), evaluating collaborative processes, and establishing conceptual models and taxonomies of collaborative MR [33]. Compared to traditional video-mediated collaboration, MR-based remote collaboration offers more natural and intuitive interactions, reduces task execution time, offers a more engaging user experience, and enables users to share important nonverbal cues, which have been found to increase empathy, reliability, efficiency, and collaboration [34]. MR-based collaboration for interactive tasks has recently been used in academia and industry. It has shown great potential in multiple areas, such as telemedicine, education, training, remote assistance, maintenance, intuitive HRI, and other remote collaboration tasks [35]. MR systems continue to make new advances, creating imaginative and interactive schemes that push the edges previously explored in Computer-Supported Cooperative Work (CSCW) [36]. Three complementary factors that can enhance collaboration in MR environments are annotation techniques, collaborative object manipulation, and perception and cognition [37].
MR has been applied to robotic teleoperation systems to enhance user perception of the remote side in order to enable immersive robotic teleoperation (IRT) [38,39,40]. A three-dimensional virtual world similar to the slave side can be simulated through MR and displayed to the user on the master side. By implementing MR as an HRI interface, the user experiences a physical presence in the remote environment, and co-existence with the robotic platform via the MR subspace, while guiding and monitoring the robotic platform in the local user space [41,42]. MR-enhanced teleoperation allows direct mapping of control commands and the actions between the user and the robot, and has the potential for performance enhancements by serving as an intermediary for the integration of imitation-based motion mapping and 3D visualization [43].
Most recent research on MR-enhanced teleoperation systems has focused on collecting demonstrations for robotic learning, solving long time delays, the development of immersive manipulation, and poor virtual transparency problems [44,45,46]. However, those telerobotic systems do not fully exploit the potential performance enhancements provided by using the MR subspace as an intermediary for the integration of imitation-based motion mapping and 3D visualization mapping.
In this paper, an MR-enhanced velocity-centric mapping scheme for tele-control is proposed to ensure natural and intuitive robot interaction with effective 3D visual feedback. Precise hand motion tracking and robot movement control are achieved. Acceptable remote manipulation results are accomplished for novice operators by reducing the reliance on operator skills. Spatial motion mapping involves linear velocities and angular velocities. The updated six-vector spatial twist, a representation of the linear and angular velocity of the user's hand motion, is transmitted from the user space to the robot space. The integration of spatial motion mapping and 3D visual feedback through the MR subspace is applied in robotic teleoperation as an interface to enable novice users to remotely control the robotic arm-hand system for performing manipulation tasks in harsh, unstructured situations. To analyze system performance, the isotonic-position tele-control paradigm, realized through the MR-based 3D motion and visualization retargeting scheme for telerobotics, is compared to the isometric-rate 3D interaction condition and a conventional 2D Baseline over two tabletop object manipulation experiments.
This paper is organized into four sections. The first section presented here describes the background, research objectives, and contributions of this research. Section 2 (Materials and Methods) highlights the system overview, experiments and analyses. Section 3 (Results and Discussion) presents the objective and subjective measures of the experiments. Section 4 (Conclusions and Future Work) summarizes the research outcomes and next steps in future work.

2. Materials and Methods

2.1. Teleoperation System Overview

The proposed robotic manipulation system and the related mapping are shown in Figure 1. On the MR side (local space), a binocular HMD is tracked by the HTC Vive VR platform (HTC Corporation, Taoyuan, Taiwan) and used to display the virtual manipulation scene generated in Unity (Unity Technologies, San Francisco, CA, USA) to the user. Motion tracking devices (HTC lighthouses) are linked to the Unity computer and track the pose of the user's head as the user inspects the workpiece from different perspective angles and performs teleoperation tasks. Figure 2 shows the data communication and information exchange for intuitive motion control of the imitation-based MR-HRI of the teleoperated robotic arm-hand system.

2.1.1. Robot and User Communication

A Kinect V2 depth sensor (Microsoft Corporation, Redmond, WA, USA) publishes compressed RGB-D images from the perspective of the robot's head to produce point cloud-based 3D visual feedback in the MR subspace. Compressed images enter the MR subspace through custom image and depth image subscribers in Unity 3D. In Unity, the RGB images and depth images are decompressed using OpenCV for C#. A custom "material" shader in Unity then combines the RGB image and the depth image into a 3D point cloud, with the computation done in parallel on the GPU.
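For illustration, the following minimal NumPy sketch shows the kind of back-projection the Unity shader performs when fusing a registered RGB image and depth image into a colored point cloud; the pinhole intrinsics (fx, fy, cx, cy), the function name, and the CPU-side formulation are assumptions for this example, not the system's actual GPU implementation.

```python
import numpy as np

def depth_to_point_cloud(depth_m, rgb, fx, fy, cx, cy):
    """Back-project a registered depth image (in meters) into an XYZ + RGB point cloud.

    depth_m : (H, W) float array of depth values in meters
    rgb     : (H, W, 3) uint8 color image registered to the depth frame
    fx, fy, cx, cy : pinhole intrinsics of the depth camera (assumed known)
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinate grid
    z = depth_m
    x = (u - cx) * z / fx                            # back-project along camera rays
    y = (v - cy) * z / fy
    valid = z > 0                                    # discard pixels with missing depth
    xyz = np.stack([x[valid], y[valid], z[valid]], axis=-1)
    colors = rgb[valid].astype(np.float32) / 255.0   # per-point color in [0, 1]
    return xyz, colors
```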
Onsite sensory information from the remote workspace is transferred back to the human operators. The operators act as part of the control loop to interact with the remote robotic arm and hand system via a computer-generated virtual environment and overlaid multisensory data. The 3D virtual scene is rendered according to the actual robot working environment and displayed to the user via the HMD. A force sensor attached to the robot finger is primarily dedicated to monitoring the grasping process. The user can directly observe the workpiece and end-effector status with the real-time overlaid sensory information in MR during operation. The depth sensor information is mapped and reconstructed in the point cloud form on the master side to be completely consistent with the environment of the slave side. The proposed teleoperation system allows information mapping between the user side and the robot side using an MR subspace [47].

2.1.2. MR Subspace

The MR subspace serves as an intermediary between the user command loop and the robot control loop. A digital twin system is generated in the virtual environment and features a synchronous representation of the physical UR5 robot (Universal Robots, Odense, Denmark) employed. The remote human operator can inspect the sensory information, robot control command input, and robot pose in its work environment through a 3D MR subspace interface. The virtual replica allows the user to inspect how the physical robot is situated in the remote environment without deploying an array of static cameras. The digital twin subscribes to the joint state data of the physical UR5 robot and updates its configuration accordingly in MR using the rosbridge package provided by the Robot Operating System (ROS). In the MR scene, the user and the remote robotic platform share the same space virtually. The digital twin-enhanced MR subspace interface is shown in Figure 3.
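As an illustrative sketch, a rosbridge client such as the one below could stream the robot's joint states to the visualization side; the host address and the use of the roslibpy client library are assumptions for this example, and the Unity-side digital twin would apply the received joint angles to its virtual joints in a comparable way.

```python
import time
import roslibpy  # Python client for the rosbridge websocket protocol

# Connect to the rosbridge server running alongside ROS (host and port assumed).
client = roslibpy.Ros(host='192.168.1.10', port=9090)
client.run()

def on_joint_state(message):
    # Map joint names to angles; the digital twin applies these values to the
    # corresponding joints of the virtual UR5 each frame.
    angles = dict(zip(message['name'], message['position']))
    print(angles)

listener = roslibpy.Topic(client, '/joint_states', 'sensor_msgs/JointState')
listener.subscribe(on_joint_state)

try:
    while client.is_connected:
        time.sleep(0.1)            # keep the client alive while messages stream in
except KeyboardInterrupt:
    client.terminate()
```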
In typical telerobotic systems with a 2D camera feed, the user is provided with a monocular RGB stream of the remote scene. In the virtual subspace, a human operator monitors the spatial manipulation process using the real-time 3D point clouds for remote environmental visualization. The digital twin can either be situated in front of the user or superimposed on the user, depending on their requirements.
The operator uses the handheld HTC Vive controller as a motion input device to grant human-level dynamic performance to the robotic arm-hand for remote telerobotic spatial manipulation tasks. The Vive tracking system tracks the controller position to sub-millimeter accuracy. User hand movements are captured by two base stations that track the handheld HTC Vive controllers at a refresh rate of 90 Hz. The HTC Vive base stations use alternating horizontal and vertical lasers to scan across the HTC Vive headset and handheld controllers, which are equipped with sensors to detect the lasers as they pass. The HTC Vive system integrates the position and rotation data of the components being tracked in the 3D MR subspace. The acquired position and orientation data are calculated and transmitted to the ROS-based controller of the UR5 robot, where the input velocity values are converted into robot joint velocities.
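The hypothetical Python sketch below illustrates how a six-vector twist can be estimated by finite differences from two consecutive tracked controller poses at the 90 Hz update rate; the function name and its arguments are illustrative assumptions rather than the system's actual code.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def estimate_twist(p_prev, q_prev, p_curr, q_curr, dt=1.0 / 90.0):
    """Finite-difference twist [v, w] from two tracked controller poses.

    p_* : (3,) positions in meters; q_* : (4,) quaternions (x, y, z, w);
    dt  : sample period (90 Hz tracking assumed).
    """
    v = (p_curr - p_prev) / dt                          # linear velocity
    dR = R.from_quat(q_curr) * R.from_quat(q_prev).inv()
    w = dR.as_rotvec() / dt                             # angular velocity (axis-angle rate)
    return np.concatenate([v, w])                       # 6-vector spatial twist
```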

2.2. Velocity-Centric Motion Mapping

For telemanipulation applications, the remote robot should maintain smooth, accurate tracking of user hand movements. However, raw hand movement data provided by the Vive tracking system contain intended hand motion, as well as tremors and noise. Thus, direct velocity mapping causes aggressive control maneuvers and jittering robot motion. Single Exponential Smoothing (SES) is applied to remove unintended short-term fluctuations and reduce hand tremor and noise. The SES for filtering out the noise from the hand motion series can be calculated recursively:
$$\hat{V}_{t+1|t} = \alpha V_t + \alpha(1-\alpha)V_{t-1} + \alpha(1-\alpha)^2 V_{t-2} + \cdots + \alpha(1-\alpha)^k V_{t-k} \qquad (1)$$
where $V_t$ are velocity measurements and $0 \le \alpha \le 1$ is the smoothing parameter. The decrease rate of the weighting terms is controlled by the smoothing parameter. If $\alpha$ is large and close to 1, more weight is given to the more recent hand motion observations. If $\alpha$ approaches 0, the output velocity signal tends to be the average of the historical input velocity data. A value of $\alpha = 0.9$ gives the optimal performance in motion smoothing applications, according to experiments.
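A minimal Python sketch of the SES filter, written in the equivalent recursive form $s_t = \alpha V_t + (1-\alpha)s_{t-1}$, is shown below; the function name and the seeding of the first sample are illustrative assumptions, and the filter is applied per axis of the six-vector twist.

```python
import numpy as np

def ses_filter(velocities, alpha=0.9):
    """Single Exponential Smoothing of a velocity series.

    Recursive form s_t = alpha * v_t + (1 - alpha) * s_{t-1}, which expands to
    the weighted sum in Equation (1). Accepts a (T,) series or a (T, 6) array
    of twists and smooths each axis independently.
    """
    velocities = np.asarray(velocities, dtype=float)
    smoothed = np.empty_like(velocities)
    smoothed[0] = velocities[0]                      # seed with the first sample
    for t in range(1, len(velocities)):
        smoothed[t] = alpha * velocities[t] + (1 - alpha) * smoothed[t - 1]
    return smoothed
```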
The approach for intuitively controlling the robot so that its end-effector smoothly follows a user's hand trajectory, $T_u(t)$, is to calculate and control the required n-vector of joint velocities $\dot{\theta}$ directly from the relationship $J(\theta)\dot{\theta} = V$, where the desired robot end-effector twist $V_r = [v_r \; \omega_r]^T$ and $J(\theta) \in \mathbb{R}^{6 \times n}$ are expressed in the same frame. At time k, the motion tracking system measures and determines the configuration of the user's hand, converts this calculated spatial velocity $V_u = [v_u \; \omega_u]^T$ to a Cartesian twist command $V_r = [v_r \; \omega_r]^T$ to the robot in the ROS coordinate system, employs the inverse-Jacobian solver to determine the appropriate joint rate vector, $\dot{\theta}$, according to the desired twist representation of end-effector motion, $V_r$, and derives the joint configuration sent to the UR5 robot controller, as shown by:
$$\dot{\theta} = J^{\dagger}(\theta)\, S\, V_r \qquad (2)$$
$$V_u = [v_u \; \omega_u]^T = k\,V_r = [v_r \; \omega_r]^T = J_r(\theta)\,\dot{\theta} = \begin{bmatrix} J_v(\theta) \\ J_\omega(\theta) \end{bmatrix}\dot{\theta} \qquad (3)$$
The use of the pseudo-inverse in Equation (2) implicitly weights the cost of each of the 6 joint velocities identically, returns the minimized two-norm of joint velocities, and reduces the energy consumption of the robot. $S$ is a positive scaling factor, and $S = I$ is chosen in the application for precisely mimicking the user's movement, where $I \in \mathbb{R}^{6 \times 6}$ is an identity matrix.
By applying inverse-Jacobian-based kinematic techniques, real-time control of the spatial velocity of the TCP is achieved, instead of creating discrete path plans. Thus, a smooth trajectory and quality manipulation behavior can be achieved. The proposed teleoperated robotic system provides the operator with a more natural and intuitive scheme for interacting with the remote robot in comparison to conventional robotic teleoperation systems in which the robot is manipulated in joint space or the end-effector is driven at a certain specified velocity [48].
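A simplified sketch of this velocity mapping is given below, assuming the geometric Jacobian at the current configuration is supplied by a kinematics library and adding an illustrative joint-velocity clamp that is an assumption of this example rather than part of the described controller.

```python
import numpy as np

def twist_to_joint_velocities(jacobian, twist_user, scale=None, max_joint_vel=1.0):
    """Map a desired end-effector twist to joint velocities, following Eqs. (2)-(3).

    jacobian   : (6, n) geometric Jacobian J(theta) at the current configuration
    twist_user : (6,) smoothed user-hand twist [v, w]
    scale      : (6, 6) scaling matrix S; the identity reproduces the motion 1:1
    """
    S = np.eye(6) if scale is None else scale
    twist_robot = S @ twist_user
    # Moore-Penrose pseudo-inverse gives the minimum two-norm joint-velocity solution.
    dq = np.linalg.pinv(jacobian) @ twist_robot
    # Clamp before sending to the robot controller (assumed limit, rad/s).
    return np.clip(dq, -max_joint_vel, max_joint_vel)
```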
Fingertip movements of the index finger, collected from the Hall-effect sensor embedded in the Vive controller, are projected onto the X-Z plane perpendicular to the palm and contain index finger movements without abduction in the Y direction. The projected tip positions are transformed and scaled to the MR subspace through a standard frame transformation. When the user extends their index finger away from or towards the palm, the press depth detected by the sensor is updated accordingly. The press depth is mapped to the workspace limits of the robotic hand as it moves between its open and closed poses for the manipulation of objects. A grasping depth index (GDI) is proposed as the criterion for grasping modulation of the robotic gripper, rather than reading binary values of 0 or 1 directly.
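As a hypothetical illustration of such continuous grasp modulation, the sketch below maps a normalized press depth to a gripper opening; the 85 mm stroke, the linear mapping, and the function name are assumptions for the example only.

```python
def press_depth_to_gripper(press_depth, gripper_min=0.0, gripper_max=0.085):
    """Map the normalized index-finger press depth (0 = extended, 1 = fully pressed)
    to a continuous gripper opening in meters, instead of a binary open/close command.

    gripper_max assumes an 85 mm stroke for a generic 2-finger gripper.
    """
    press_depth = min(max(press_depth, 0.0), 1.0)    # clamp sensor noise to [0, 1]
    # A larger press depth corresponds to a smaller opening (closing the hand).
    return gripper_max - press_depth * (gripper_max - gripper_min)
```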

2.3. Experiments

2.3.1. Hypothesis

To validate whether a teleoperation intermediary using MR-subspace-enhanced spatial mapping of human motion facilitates control of robotic arm-hand systems by unskilled operators, a set of spatial manipulation tasks, shown in Figure 4, including pick-and-place and assembly, were developed. A user study was designed and conducted to verify task performance and user experience with the proposed HRI system compared to two typical teleoperation modes.
The null hypothesis (H0) for a repeated-measures ANOVA is that the MR-subspace-enhanced imitation-based motion mapping approach for telemanipulation and the two alternatives (commonly used teleoperation methods) have identical effects on task efficiency, user performance, and system usability for unskilled users, in terms of effectiveness, intuitiveness, and learnability.

2.3.2. Experimental Setup

A within-subjects experiment was developed to verify the null hypothesis by presenting novice participants with three HRI conditions (Baseline/2D SpaceMouse; MR direct control/MR SpaceMouse; and MR subspace) in a random order to guide the robotic arm-hand system and complete two spatial manipulation tasks: (1) pick-and-place and (2) assembly.
A total of 24 participants took part in the user study, 13 males (54%) and 11 females (46%). Subjects were all from the University of Canterbury and ranged in age from 20 to 31. In general, participants were unfamiliar with robotic systems, HRI, or MR. In the pre-experiment questionnaire, subjects' experiences with VR and computer games were collected. Among the 24 participants, 13 had no previous experience with VR, 7 had used a VR headset once, and 4 were more frequent users. Most participants had considerable experience with computer games and typically played once a week. None of the participants had experience with controlling a robot, and they were regarded as novice users of the teleoperated robotic system. The tasks for the user study were to guide the robotic arm-hand system to conduct pick-and-place and assembly telemanipulation processes through three different teleoperation intermediaries. Each participant completed the user study within 60 min, which included pre-questionnaires, instructions, experiments, post-questionnaires, and comments. The participants were informed of the right to stop and/or withdraw from the experiment at any time.
The proposed MR-HRI mapping approach was compared to a SpaceMouse 6-DOF control input device (3Dconnexion, Munich, Germany) with 2D camera feeds, and to the SpaceMouse with the 3D point cloud interface of telemanipulation. The 3Dconnexion SpaceMouse Compact is a 6-DOF control input component widely used for guiding robots, precise 3D navigation in Computer-Aided Design (CAD), or 3D analysis and review. The 6-DOF SpaceMouse was chosen as a baseline [49], because prior work indicated that, despite the complex operations, the SpaceMouse interface was highly effective for Cartesian teleoperation of a robot end-effector [50,51,52]. The goal of the experiment is to test whether the isotonic-position control scheme with the MR subspace outperforms isometric-rate control using the SpaceMouse input with either 2D direct camera feeds or 3D visual feedback.
The Baseline HRI is a typical teleoperation mode, providing participants with 2D visual feedback displaying the working view through the RGB camera in the Kinect V2 depth sensor. The user is presented with a 2D ego-centric view of the robot's working space on the monitor and is unable to change perspective. The piloting metaphor was used for HRI with a stationary 6-DOF SpaceMouse. The user interaction included manipulating input from the SpaceMouse for TCP translation and rotation of the UR5 robot arm and the grasp and release of the robot hand.
The MR direct control/MR SpaceMouse module can be regarded as an MR subspace without Velocity-Centric Motion Mapping (VCMM), and is a condensed version of the proposed MR-HRI system that enables the user to inspect the robotic arm-hand situation in the remote environment via a 3D point cloud in the MR subspace. The user can interact with the robotic system through an isometric-rate control scheme by using the SpaceMouse input with 3D visual feedback, but without perceived correspondence between the input device motion and the robot end-effector movement.
The MR subspace experiment is a full MR-HRI system with VCMM using the tracked Vive motion controller in a grasping metaphor, and allows the user to use hand motion to directly guide the robotic arm-hand system through the isotonic-position control scheme, and view the robot’s space through the point cloud generated within MR subspace, conducting a set of spatial manipulation tasks. The full MR-HRI system features the perceived movement correspondence between the input device (Vive controller) and the robot end-effector. In the MR subspace, the motion of the robotic arm and hand system closely mimics the movement of the user’s arm and hand, so that the user can perceive the movement of the robot through his/her own movement.

2.3.3. Experimental Procedure

Each subject picked up two cubic and cylindrical items and placed them on target positions on the tabletop in the first pick-and-place manipulation task. By requiring both translation and rotation for placing the objects, the process assesses the translational and rotational functionality of the three HRI schemes. The second task was to grab one LEGO subassembly from a predefined spot and stack it onto a fixed LEGO base on the table. As an assembly task with increased complexity, this task required the subject to rearrange the in-hand object’s orientation by rotating the red LEGO cuboid subassembly and aligning the two components before mating, which restricted the orientation of the end-effector in comparison to placing the cubical and cylindrical items.
All participants were given a full information sheet and briefing and were informed of the right to stop and/or withdraw at any time, including withdrawing their data. All data and usability survey data were anonymized. Thus, the approach met all best-practice standards and covered all informed consent requirements. The trial began with each subject filling out a pre-experiment questionnaire to record age and gender, and assess prior knowledge and experience with MR, robotics, and HRI. Following the pre-experiment questionnaire, participants were given health and safety guidelines and provided with details on the experimental tasks and objectives. Prior to using each HRI method, each subject was presented with a tutorial video demonstrating the user interaction module. The video-based introduction was utilized to equalize the training that subjects were offered in each HRI module. The three HRI modules included the interaction scheme of the SpaceMouse input device with 2D camera feeds, the SpaceMouse with the 3D point cloud interface of telemanipulation, and the MR-subspace-integrated 3D mapping of motion and visualization scheme.
After indicating preparedness, participants ran through a training task of grasping a cylindrical object with the robotic hand-arm to get used to each control method of the teleoperation system [53]. The object grasping task was selected due to its simplicity and the preparation offered for the experimental tasks, which allows the users to get familiarized with the HRI methods. Each participant completed a training activity once to standardize the training process, and they were still regarded as unskilled operators. Thus, this activity mainly serves as part of the instructions, and its influence on the bias of the results is minimized.
When the user expressed their readiness to conduct the experimental tasks, the cumulative training time for each HRI module was recorded to assess the learnability and efficiency of each HRI module. The locations of the target objects and the initial robot pose were varied between trials. To reduce order effects during the execution of the experimental tasks, the sequence of the HRI modules and the target positions of the objects were randomly allocated, and the pick-and-place and assembly manipulation task orders were also randomized, which minimizes the learning effects between tasks. The teleoperation task was concluded when the subject placed the object in the goal position and mated the components together in an assembly.
To standardize the initial position, the experimenter returned the robot to an identical home configuration after each trial. Participants completed a post-trial questionnaire on each HRI module after completing the training and experimental tasks. Participants remained outside the robotic platform operating zone at all times to guarantee physical safety, and one experimenter closely observed all operations with an emergency stop. If a collision with the objects or tabletop occurred, the experimenter immediately activated the emergency stop button to halt all robot motion. The robot was then returned to its original pose, and the subject was required to restart the task. In the MR subspace, virtual safety grids were also established to surround the workspace and alert the user. When participants reached predetermined boundaries, warning grids showing the MR subspace edge were displayed. Users were asked to avoid these edges where possible.

2.4. Analyses

The goal of the study is to investigate the effect of the MR-subspace-enhanced imitation-based HRI module on the operator’s ability to interact with objects in the remote environment. To evaluate the efficacy of the robotic manipulation system presented, human operator performance was assessed under three HRI modes denoted: B, MRD and MRS. B denotes the Baseline, using only a 6-DOF SpaceMouse and a monocular RGB display. MRD (MRnoVCMM/MR SpaceMouse) represents the MR direct control module using SpaceMouse and an MR-enhanced 3D point cloud visual method without deploying VCMM. MRD is a limited version of the proposed MR-HRI system. MRS (MRwithVCMM) provides the user with the proposed MR subspace module using the VCMM approach and a Vive controller as well as an MR-assisted 3D point cloud display.
Task performance and work efficiency were evaluated by measuring the time for each task and the total time for both tasks. Participant perception of the different HRI modalities was measured using a questionnaire based on a previous study assessing user preferences and system usability, including the NASA Task Load Index (NASA-TLX) and the Technology Acceptance Model (TAM) for evaluating user acceptance [54,55,56]. User performance was measured by the time for each manipulation task and the aggregated total time for both tasks. User effort and workload during the teleoperation experiments were evaluated by the NASA-TLX score, for which participants rated their qualitative experiences of mental demands, physical demands, time demands, performance, effort, and frustration on a scale from 0 to 100 (most demanding) at the end of each experimental case. User acceptance and perception, including usefulness and ease-of-use, were assessed by questionnaires based on the TAM. This survey uses a 1 to 7 (best) scale to measure the acceptance and ease-of-use of the different HRI modules. A one-way within-subjects ANOVA with repeated measures was used to analyze data from all measures. A Greenhouse–Geisser correction was applied to assess the differences in the survey responses, with the B, MRD, and MRS HRI modules as within-subjects variables.
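As a sketch of this analysis pipeline, the repeated-measures ANOVA with a Greenhouse–Geisser correction and post-hoc pairwise comparisons could be reproduced with the pingouin package as below; the file name and column names are assumptions about how the measurements might be organized, and pingouin is only one possible tool for this analysis.

```python
import pandas as pd
import pingouin as pg  # statistics package providing repeated-measures ANOVA

# Long-format data assumed: one row per participant x HRI module, with columns
# 'participant', 'hri' (B / MRD / MRS), and 'time' (task completion time in seconds).
df = pd.read_csv('completion_times.csv')

# One-way repeated-measures ANOVA; correction=True reports the
# Greenhouse-Geisser corrected p-value used when sphericity is violated.
aov = pg.rm_anova(data=df, dv='time', within='hri', subject='participant',
                  correction=True, effsize='np2')  # np2 = partial eta squared
print(aov)

# Post-hoc pairwise comparisons between the three HRI modules.
posthoc = pg.pairwise_tests(data=df, dv='time', within='hri',
                            subject='participant', padjust='bonf')
print(posthoc)
```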

3. Results and Discussion

3.1. Objective Measures

The MR-subspace-based motion-mapping HRI technique was compared to Baseline and MR direct control schemes across two tasks. Results in Table 1 and Table 2 show the data and statistical findings. The analysis rejected the null hypothesis (H0) that the MR-subspace-enhanced spatial motion mapping approach for telemanipulation and the other typical teleoperation modules have identical effects on task performance (Table 1). In contrast, the results indicated that the MR subspace motion-centric HRI approach significantly outperformed both the 2D Baseline and the MR SpaceMouse HRI schemes on both tasks for all pairwise comparisons (Table 2). Guiding a robotic hand-arm system using the natural arm motion mapping through the MR subspace improved task performance for both tasks.
As shown in Figure 5, a one-way within-subjects ANOVA with repeated measures with a Greenhouse–Geisser correction indicated that the time taken to complete the pick-and-place tasks was statistically significantly different (F(1.382, 31.778) = 148.20; p < 0.001; partial η² = 0.87). The post-hoc test revealed that the completion time for the pick-and-place tasks was significantly reduced from the Baseline (M = 136.06) compared to the MR SpaceMouse module (M = 98.27) and the MR subspace module (M = 58.52). Statistical significance was also shown for the assembly task completion time between the three HRI modules (F(1.553, 35.73) = 74.08; p < 0.001; partial η² = 0.76). Pairwise comparisons indicated that the time to complete the assembly tasks was significantly reduced from the Baseline (M = 162.60) compared to the MR SpaceMouse module (M = 122.88) and the MR subspace module (M = 97.51). The F value for the HRI factor of overall task completion time and its related significance level and the magnitude of the effect (partial eta squared) in the Greenhouse–Geisser correction showed that the mean aggregate task completion time for each HRI module was statistically significantly different (F(1.875, 43.128) = 303.197; p < 0.001; partial η² = 0.93). The pairwise comparisons indicated that aggregate time significantly decreased from the Baseline (M = 298.66) compared to the MR SpaceMouse module (M = 221.15) and the MR subspace module (M = 156.03). The aggregate task completion time was reduced by 48% compared to the 2D Baseline module and 29% compared to the MR SpaceMouse module. With MR-subspace-enhanced motion and vision mapping, a comparable rate of improvement in completion time was attained for operators with minimal technical knowledge.
Mean training completion time was 153.7 s, 98.3 s, and 71.8 s for the Baseline, MR SpaceMouse (without imitation-based motion mapping), and MR subspace (with imitation-based motion mapping) modes, respectively. The learning time for the training tasks with the MR subspace approach decreased by 53% and 36%, compared to the Baseline and MR SpaceMouse approaches, respectively, indicating that additional learning was required for the two typical HRI modules to reach the same competency as for the proposed MR-subspace-enhanced imitation-based HRI module. It can also be observed that, even at the end of the training, subjects did not reach the proficiency and dexterity that they achieved immediately when using the MR-subspace-enhanced imitation-based module. The MR subspace module therefore substantially reduces training time, even for unskilled operators.

3.2. Subjective Measures

Overall, all subjects completed the telemanipulation experiments under the three conditions. In the post-experiment questionnaire, the Baseline case was rated as the most difficult task condition by the majority of participants. Most subjects preferred the MRS vision/motion mapping scheme condition and commented that the MRS scheme enabled robot control with natural movements and reduced fatigue and stress during the robotic manipulation process.
The NASA-TLX score shows how participants rated their qualitative experiences of mental demands, physical demands, temporal demands, performance, effort, and frustration. The TAM was used for evaluating acceptance and measuring participant perception of different HRI modules, and scales to measure usefulness and ease-of-use were presented to the subjects in each post-task questionnaire. Three items on a seven-point rating scale (1 = strongly disagree; 7 = strongly agree) were included for each of the two aspects. As shown in Figure 6, all average NASA-TLX scores were lower for the MR-subspace-enhanced motion-mapping tasks compared with the 2D Baseline and MR SpaceMouse cases. In particular, the MR subspace motion-mapping module significantly reduced the physical and mental demand as well as the frustration of participants.
In general, the Overall Workload (OW) decreased from the Baseline (M = 75.81) to the MR SpaceMouse module (M = 66.89) and the MR subspace module (M = 51.92), as shown in Figure 6f. As a result, the average NASA-TLX score decreased significantly, by 32% (F(1.663, 38.247) = 74.408; p < 0.001; partial η² = 0.87), when the MR subspace mapping was used.
The TAM results indicate a substantial disparity between the mean user ratings of the three different HRI modules. Participants found that the MR-subspace-imitation-based HRI module (M = 5.50) had better usability than the 2D Baseline (M = 2.67), and was easier to use than the MR SpaceMouse module (M = 3.25), as shown in Figure 7a. The MR subspace method (M = 4.75) was also reported to be more acceptable than the MR SpaceMouse module (M = 3.08) and 2D Baseline interfaces (M = 2.83) in terms of perceived usefulness, as shown in Figure 7b. The subjective measures analysis indicated that the MR-subspace-enhanced spatial motion mapping approach for telemanipulation outperformed the other typical teleoperation modules in task workload and user perception.
In this paper, the isotonic-position tele-control paradigm, realized through the MR-based 3D motion and visualization retargeting scheme for telerobotics, was compared to the isometric-rate 3D interaction condition and a conventional 2D Baseline. The system performance analysis indicates that the MRS scheme improved the tele-control performance of manipulation tasks and reduced the workload on operators. The results of this study will benefit the research community moving forward by providing experimental groundwork for other researchers to select appropriate command/control schemes to construct intuitive HRI systems. Furthermore, the results provide a fundamental platform for ongoing research to incorporate haptic feedback and machine learning strategies, aiming to further improve the immersive teleoperation platform developed.

4. Conclusions and Future Work

This work presents the design of an MRS-based intuitive telemanipulation paradigm and a user-study evaluation of three control and visual feedback HRI modes on a practical robotic arm-hand platform. The particular interest lies in the potential benefits of deploying an MRS-enhanced 3D vision/motion mapping approach to improve the work efficiency and situation awareness of unskilled operators in teleoperated pick-and-place and assembly tasks. The IRT system introduced in the paper enabled novice users to intuitively and naturally perform high-quality manipulation tasks at a distance. The proposed MRS HRI interface for robotic tele-control is designed and implemented by leveraging the 3D mapping of motion and vision through an MR subspace.
An intuitive and natural interaction scheme was achieved by mapping the user’s hand motions to the robot movements and applying spatial velocity-centric control techniques. A VCMM approach was implemented to accurately track the operator’s hand movements and minimize aggressive velocity commands, while generating smooth movement in the two typical manipulation tasks. A 3D point cloud rendering architecture was deployed in the MRS paradigm to form an incorporated 3D visualization of the remote site, provide the desired depth perception of the workspace, and maintain the inspection of the workpieces, remote site, and digital twin situated in the MR subspace as a whole. Telemanipulation experiments of novice operators were carried out to test the proposed intuitive teleoperation of the robotic platform.
Telemanipulation experiments show that the MRS-integrated scheme reduced aggregate task completion time by 48% compared to the 2D Baseline module, and by 29% compared to the MR SpaceMouse module. The MRS-enhanced 3D mapping of motion and vision paradigm improved completion time for operators with minimal technical knowledge. Further, the learning time of the training tasks with the MRS scheme decreased by 53% and 36% compared to the 2D Baseline and MR SpaceMouse approaches, respectively, indicating that extra learning was required in the two typical HRI modules to reach the same competency as in the proposed MRS imitation-based HRI module. Finally, the proposed MRS teleoperation scheme achieved the desired telemanipulation results for novice users. This scheme also significantly reduced the physical and mental demand and frustration of participants, while offering higher user acceptance.
Overall, the presented MRS scheme for robotic arm-hand teleoperation improved the remote pick-and-place and assembly performance of operators with minimal technical knowledge. The proposed teleoperation scheme, using integrated 3D mapping of vision and motion through an MR subspace with intuitive movement control, improves the tele-control performance of manipulation tasks and reduces the workload of operators.
The following recommendations are presented for the extension of the current work, covering two major aspects of improvement. The first concerns using haptic feedback to further improve the developed immersive teleoperation platform. In the proposed telemanipulation schemes, the effect of haptics on user performance, system usability, and collision avoidance was not investigated. Both haptic and tactile feedback have the potential to further improve task performance and users' situational awareness. The second concerns using machine learning strategies to learn the operator's behavior and predict their intentions for latency mitigation in long-distance scenarios, and employing cloud computing to achieve better management of the developed telerobotic infrastructure. Teleoperation enables the most productive utilization of scarce expertise. However, latency is a major issue in long-distance teleoperation. Hidden Markov Models (HMMs), as a general-case latency mitigation protocol, have the potential to deal with error-inducing time delays inherent in MR-based teleoperated robotic systems.

Author Contributions

Methodology, Y.-P.S. and G.C.; Software, Y.-P.S.; Supervision, X.-Q.C., C.P. and G.C.; Validation, Y.-P.S.; Writing—original draft, Y.-P.S.; Writing—review & editing, T.Z. and G.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Su, H.; Qi, W.; Yang, C.; Sandoval, J.; Ferrigno, G.; de Momi, E. Deep Neural Network Approach in Robot Tool Dynamics Identification for Bilateral Teleoperation. IEEE Robot. Autom. Lett. 2020, 5, 2943–2949. [Google Scholar] [CrossRef]
  2. Li, S.; Rameshwar, R.; Votta, A.M.; Onal, C.D. Intuitive Control of a Robotic Arm and Hand System with Pneumatic Haptic Feedback. IEEE Robot. Autom. Lett. 2019, 4, 4424–4430. [Google Scholar] [CrossRef]
  3. Conte, D.; Leamy, S.; Furukawa, T. Design and Map-Based Teleoperation of a Robot for Disinfection of COVID-19 in Complex Indoor Environments. In Proceedings of the 2020 IEEE International Symposium on Safety, Security, and Rescue Robotics, SSRR 2020, Abu Dhabi, United Arab Emirates, 4–6 November 2020; pp. 276–282. [Google Scholar]
  4. Yang, G.; Lv, H.; Zhang, Z.; Yang, L.; Deng, J.; You, S.; Du, J.; Yang, H. Keep Healthcare Workers Safe: Application of Teleoperated Robot in Isolation Ward for COVID-19 Prevention and Control. Chin. J. Mech. Eng. 2020, 33, 47. [Google Scholar] [CrossRef]
  5. Li, C.; Gu, X.; Xiao, X.; Lim, C.M.; Duan, X.; Ren, H. A Flexible Transoral Robot Towards COVID-19 Swab Sampling. Front. Robot. AI 2021, 8, 51. [Google Scholar] [CrossRef]
  6. Chen, Y.; Wang, Q.; Chi, C.; Wang, C.; Gao, Q.; Zhang, H.; Li, Z.; Mu, Z.; Xu, R.; Sun, Z.; et al. A Collaborative Robot for COVID-19 Oropharyngeal Swabbing. Robot. Auton. Syst. 2022, 148, 103917. [Google Scholar] [CrossRef] [PubMed]
  7. Zhou, J.; Chen, W.; Cheng, S.S.; Xue, L.; Tong, M.C.F.; Liu, Y. Bio-Inspired Soft (BIS) Hand for Tele-Operated COVID-19 Oropharyngeal (OP) Swab Sampling. In Proceedings of the 2021 IEEE International Conference on Robotics and Biomimetics (ROBIO), Sanya, China, 27–31 December 2021; pp. 80–86. [Google Scholar]
  8. Yiming, L.; Chunki, Y.; Zhen, S.; Ya, H.; Kuanming, Y.; Tszhung, W.; Jingkun, Z.; Ling, Z.; Xingcan, H.; Khazaee, N.S.; et al. Electronic Skin as Wireless Human-Machine Interfaces for Robotic VR. Sci. Adv. 2022, 8, eabl6700. [Google Scholar] [CrossRef]
  9. Feizi, N.; Tavakoli, M.; Patel, R.V.; Atashzar, S.F. Robotics and AI for Teleoperation, Tele-Assessment, and Tele-Training for Surgery in the Era of COVID-19: Existing Challenges, and Future Vision. Front. Robot. AI 2021, 8, 16. [Google Scholar] [CrossRef]
  10. Li, C.; Wang, T.; Hu, L.; Tang, P.; Wang, L.; Zhang, L.; Guo, N.; Tan, Y. A Novel Master-Slave Teleoperation Robot System for Diaphyseal Fracture Reduction: A Preliminary Study. Comput. Assist. Surg. 2016, 21, 163–168. [Google Scholar] [CrossRef] [Green Version]
  11. Wang, Z.; Chen, Z.; Liang, B. Fixed-Time Velocity Reconstruction Scheme for Space Teleoperation Systems: Exp Barrier Lyapunov Function Approach. Acta Astronaut. 2019, 157, 92–101. [Google Scholar] [CrossRef]
  12. Shen, Y.; Guo, D.; Long, F.; Mateos, L.A.; Ding, H.; Xiu, Z.; Hellman, R.B.; King, A.; Chen, S.; Zhang, C.; et al. Robots under COVID-19 Pandemic: A Comprehensive Survey. IEEE Access 2021, 9, 1590–1615. [Google Scholar] [CrossRef]
  13. Lee, K.H.; Pruks, V.; Ryu, J.H. Development of Shared Autonomy and Virtual Guidance Generation System for Human Interactive Teleoperation. In Proceedings of the 2017 14th International Conference on Ubiquitous Robots and Ambient Intelligence, URAI 2017, Jeju, Korea, 28 June–1 July 2017; pp. 457–461. [Google Scholar]
  14. Song, K.T.; Jiang, S.Y.; Lin, M.H. Interactive Teleoperation of a Mobile Manipulator Using a Shared-Control Approach. IEEE Trans. Hum.-Mach. Syst. 2016, 46, 834–845. [Google Scholar] [CrossRef]
  15. Gao, L.; Xu, Z.; Huang, W.; Song, A. Design and Application of Experimental Platform for Interactive Teleoperation Robot. Dongnan Daxue Xuebao (Ziran Kexue Ban)/J. Southeast Univ. (Nat. Sci. Ed.) 2004, 34, 775–779. [Google Scholar]
  16. Saeidi, H.; Wagner, J.R.; Wang, Y. A Mixed-Initiative Haptic Teleoperation Strategy for Mobile Robotic Systems Based on Bidirectional Computational Trust Analysis. IEEE Trans. Robot. 2017, 33, 1500–1507. [Google Scholar] [CrossRef]
  17. Solanes, J.E.; Muñoz, A.; Gracia, L.; Martí, A.; Girbés-Juan, V.; Tornero, J. Teleoperation of Industrial Robot Manipulators Based on Augmented Reality. Int. J. Adv. Manuf. Technol. 2020, 111, 1077–1097. [Google Scholar] [CrossRef]
  18. Navarro, F.; Fdez, J.; Garzón, M.; Roldán, J.J.; Barrientos, A. Integrating 3D Reconstruction and Virtual Reality: A New Approach for Immersive Teleoperation. In Advances in Intelligent Systems and Computing; Springer: Berlin/Heidelberg, Germany, 2018; Volume 694, pp. 606–616. [Google Scholar]
  19. Lipton, J.I.; Fay, A.J.; Rus, D. Baxter’s Homunculus: Virtual Reality Spaces for Teleoperation in Manufacturing. IEEE Robot. Autom. Lett. 2018, 3, 179–186. [Google Scholar] [CrossRef] [Green Version]
  20. Dinh, T.Q.; Yoon, J.I.; Marco, J.; Jennings, P.; Ahn, K.K.; Ha, C. Sensorless Force Feedback Joystick Control for Teleoperation of Construction Equipment. Int. J. Precis. Eng. Manuf. 2017, 18, 955–969. [Google Scholar] [CrossRef] [Green Version]
  21. Truong, D.Q.; Truong, B.N.M.; Trung, N.T.; Nahian, S.A.; Ahn, K.K. Force Reflecting Joystick Control for Applications to Bilateral Teleoperation in Construction Machinery. Int. J. Precis. Eng. Manuf. 2017, 18, 301–315. [Google Scholar] [CrossRef]
  22. Nakanishi, J.; Itadera, S.; Aoyama, T.; Hasegawa, Y. Towards the Development of an Intuitive Teleoperation System for Human Support Robot Using a VR Device. Adv. Robot. 2020, 34, 1239–1253. [Google Scholar] [CrossRef]
  23. Meeker, C.; Rasmussen, T.; Ciocarlie, M. Intuitive Hand Teleoperation by Novice Operators Using a Continuous Teleoperation Subspace. In Proceedings of the IEEE International Conference on Robotics and Automation, Brisbane, Australia, 21–25 May 2018; pp. 5821–5827. [Google Scholar]
  24. Ellis, S.R.; Adelstein, B.D.; Welch, R.B. Kinesthetic Compensation for Misalignment of Teleoperator Controls through Cross-Modal Transfer of Movement Coordinates. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2002, 46, 1551–1555. [Google Scholar] [CrossRef]
  25. Li, G.; Caponetto, F.; del Bianco, E.; Katsageorgiou, V.; Sarakoglou, I.; Tsagarakis, N.G. Incomplete Orientation Mapping for Teleoperation with One DoF Master-Slave Asymmetry. IEEE Robot. Autom. Lett. 2020, 5, 5167–5174. [Google Scholar] [CrossRef]
  26. Bejczy, B.; Bozyil, R.; Vaiekauskas, E.; Petersen, S.B.K.; Bogh, S.; Hjorth, S.S.; Hansen, E.B. Mixed Reality Interface for Improving Mobile Manipulator Teleoperation in Contamination Critical Applications. Procedia Manuf. 2020, 51, 620–626. [Google Scholar] [CrossRef]
  27. Triantafyllidis, E.; McGreavy, C.; Gu, J.; Li, Z. Study of Multimodal Interfaces and the Improvements on Teleoperation. IEEE Access 2020, 8, 78213–78227. [Google Scholar] [CrossRef]
  28. Yew, A.W.W.; Ong, S.K.; Nee, A.Y.C. Immersive Augmented Reality Environment for the Teleoperation of Maintenance Robots. In Procedia CIRP; Elsevier: Amsterdam, The Netherlands, 2017; Volume 61, pp. 305–310. [Google Scholar]
  29. Komatsu, R.; Fujii, H.; Tamura, Y.; Yamashita, A.; Asama, H. Free Viewpoint Image Generation System Using Fisheye Cameras and a Laser Rangefinder for Indoor Robot Teleoperation. ROBOMECH J. 2020, 7, 15. [Google Scholar] [CrossRef]
  30. Ribeiro, L.G.; Suominen, O.J.; Durmush, A.; Peltonen, S.; Morales, E.R.; Gotchev, A. Retro-Reflective-Marker-Aided Target Pose Estimation in a Safety-Critical Environment. Appl. Sci. 2021, 11, 3. [Google Scholar] [CrossRef]
  31. Illing, B.; Westhoven, M.; Gaspers, B.; Smets, N.; Bruggemann, B.; Mathew, T. Evaluation of Immersive Teleoperation Systems Using Standardized Tasks and Measurements. In Proceedings of the 29th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2020, Naples, Italy, 31 August–4 September 2020; pp. 278–285. [Google Scholar]
  32. Marques, B.; Teixeira, A.; Silva, S.; Alves, J.; Dias, P.; Santos, B.S. A Critical Analysis on Remote Collaboration Mediated by Augmented Reality: Making a Case for Improved Characterization and Evaluation of the Collaborative Process. Comput. Gr. 2022, 102, 619–633. [Google Scholar] [CrossRef]
  33. Marques, B.; Silva, S.S.; Alves, J.; Araujo, T.; Dias, P.M.; Santos, B.S. A Conceptual Model and Taxonomy for Collaborative Augmented Reality. IEEE Trans. Vis. Comput. Gr. 2021, 102, 1. [Google Scholar] [CrossRef]
  34. Wang, P.; Bai, X.; Billinghurst, M.; Zhang, S.; Zhang, X.; Wang, S.; He, W.; Yan, Y.; Ji, H. AR/MR Remote Collaboration on Physical Tasks: A Review. Robot. Comput.-Integr. Manuf. 2021, 72, 102071. [Google Scholar] [CrossRef]
  35. Sereno, M.; Wang, X.; Besancon, L.; Mcguffin, M.J.; Isenberg, T. Collaborative Work in Augmented Reality: A Survey. IEEE Trans. Vis. Comput. Gr. 2020, 72, 1. [Google Scholar] [CrossRef]
  36. Ens, B.; Lanir, J.; Tang, A.; Bateman, S.; Lee, G.; Piumsomboon, T.; Billinghurst, M. Revisiting Collaboration through Mixed Reality: The Evolution of Groupware. Int. J. Hum.-Comput. Stud. 2019, 131, 81–98. [Google Scholar] [CrossRef]
  37. De Belen, R.A.J.; Nguyen, H.; Filonik, D.; del Favero, D.; Bednarz, T. A Systematic Review of the Current State of Collaborative Mixed Reality Technologies: 2013–2018. AIMS Electron. Electr. Eng. 2019, 3, 181–223. [Google Scholar] [CrossRef]
  38. Nakamura, K.; Tohashi, K.; Funayama, Y.; Harasawa, H.; Ogawa, J. Dual-Arm Robot Teleoperation Support with the Virtual World. Artif. Life Robot. 2020, 25, 286–293. [Google Scholar] [CrossRef]
  39. Whitney, D.; Rosen, E.; Ullman, D.; Phillips, E.; Tellex, S. ROS Reality: A Virtual Reality Framework Using Consumer-Grade Hardware for ROS-Enabled Robots. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Madrid, Spain, 1–5 October 2018; pp. 5018–5025. [Google Scholar]
  40. Whitney, D.; Rosen, E.; Phillips, E.; Konidaris, G.; Tellex, S. Comparing Robot Grasping Teleoperation Across Desktop and Virtual Reality with ROS Reality. Springer Proc. Adv. Robot. 2020, 10, 335–350. [Google Scholar] [CrossRef]
  41. Delpreto, J.; Lipton, J.I.; Sanneman, L.; Fay, A.J.; Fourie, C.; Choi, C.; Rus, D. Helping Robots Learn: A Human-Robot Master-Apprentice Model Using Demonstrations via Virtual Reality Teleoperation. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 10226–10233. [Google Scholar] [CrossRef]
Figure 1. Schematic of the motion-mapping approach for the MR subspace-enhanced imitation-based HRI system of the teleoperated robotic arm-hand. (a) The robot working environment; (b) the MR scene displayed to the operator; (c) the user space. The MR subspace decouples the human from the robot, so the mappings for the sensors and the gripper need not be direct or identical. The user's eye frame {E} and hand frame {H}, the camera frame {C}, and the robot gripper frame {G} are linked to one another through the MR subspace {S}, with {T}, {W}, and {B} denoting the reference frames of the target objects, the robot wrist, and the robot arm base.
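To make the frame relationships in the Figure 1 caption concrete, the following is a minimal sketch, assuming standard 4 × 4 homogeneous transforms; the registration transform and the scalar gain k are illustrative placeholders rather than quantities taken from the system itself:

\[
{}^{B}\mathbf{T}_{H} = {}^{B}\mathbf{T}_{S}\,{}^{S}\mathbf{T}_{H},
\qquad
{}^{B}\dot{\mathbf{p}}_{\mathrm{TCP}} = k\,{}^{B}\mathbf{R}_{S}\,{}^{S}\dot{\mathbf{p}}_{H},
\]

where \({}^{S}\mathbf{T}_{H}\) is the tracked hand pose expressed in the MR subspace {S}, \({}^{B}\mathbf{T}_{S}\) registers {S} to the robot base {B}, and the second relation illustrates how a hand velocity observed in {S} could be turned into a commanded TCP velocity.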
Figure 2. The communication system for information exchange and motion control of the MR subspace-enhanced imitation-based HRI of the teleoperated robotic arm-hand system.
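The Figure 2 caption does not specify the transport layer; purely as an illustration of how velocity commands might be streamed from the user side to the robot side in such a system, the sketch below assumes a ROS-style publisher, with the topic name and publishing rate as hypothetical choices.

import rospy
from geometry_msgs.msg import Twist

def stream_tcp_velocity():
    """Publish TCP velocity commands at a fixed rate (illustrative only)."""
    rospy.init_node("mr_teleop_command_streamer")
    # Hypothetical topic name; a real deployment would use its own interface.
    pub = rospy.Publisher("/arm/tcp_velocity_cmd", Twist, queue_size=1)
    rate = rospy.Rate(50)  # 50 Hz control loop, an assumed value
    while not rospy.is_shutdown():
        cmd = Twist()
        # In a real system these fields would come from the tracked hand motion.
        cmd.linear.x, cmd.linear.y, cmd.linear.z = 0.0, 0.0, 0.0
        pub.publish(cmd)
        rate.sleep()

if __name__ == "__main__":
    stream_tcp_velocity()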
Figure 3. Overview of the MR subspace interface with the digital twin of the UR5 robot superimposed on the 3D point cloud of the physical UR5 robot in the MR environment. (a) The pick-and-place task, in which two objects (cubic and cylindrical) are moved to goal positions at different heights; (b) the assembly task, in which one LEGO subassembly is grabbed from a predefined spot and stacked onto a fixed LEGO base on the table.
Figure 4. Overview of the spatial manipulation tasks and command-input methods. (a) The pick-and-place task, in which two objects (cubic and cylindrical) are moved to goal positions at different heights; (b) the assembly task, in which one LEGO subassembly is grabbed from a predefined spot and stacked onto a fixed LEGO base on the table; (c) the isometric-rate HRI scheme using the 6-DOF input device; (d) the isotonic-position imitative HRI scheme using the Vive motion controller.
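To illustrate the difference between the isometric-rate scheme in Figure 4c and the isotonic-position imitative scheme in Figure 4d, the snippet below is a minimal, self-contained sketch; the gains, the speed limit, and the function names are hypothetical and not taken from the system described here.

import numpy as np

# Illustrative constants only; the real gains and limits are system-specific.
RATE_GAIN = 0.25       # m/s of TCP motion per unit of SpaceMouse deflection
IMITATION_GAIN = 1.0   # scaling between tracked hand velocity and TCP velocity
V_MAX = 0.25           # saturation limit on the commanded TCP speed, m/s

def clamp_speed(v: np.ndarray) -> np.ndarray:
    """Saturate the linear speed so the robot never receives an excessive command."""
    speed = np.linalg.norm(v)
    return v if speed <= V_MAX else v * (V_MAX / speed)

def rate_command(deflection: np.ndarray) -> np.ndarray:
    """Isometric-rate scheme (Figure 4c): the input-device deflection is mapped
    directly to a TCP velocity, so zero deflection means zero motion."""
    return clamp_speed(RATE_GAIN * deflection)

def imitative_command(hand_pos: np.ndarray, prev_hand_pos: np.ndarray, dt: float) -> np.ndarray:
    """Isotonic-position imitative scheme (Figure 4d): the tracked controller
    displacement over one control cycle is differentiated into a TCP velocity."""
    return clamp_speed(IMITATION_GAIN * (hand_pos - prev_hand_pos) / dt)

# Example: a small forward deflection vs. a 5 cm hand motion over 0.1 s.
print(rate_command(np.array([0.2, 0.0, 0.0])))
print(imitative_command(np.array([0.05, 0.0, 0.0]), np.zeros(3), 0.1))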
Figure 5. Boxplots of quantitative measures of user performance for each HRI scheme across the two manipulation tasks. (a) Pick-and-place task completion time; (b) assembly task completion time; (c) aggregate task completion time; (d) mean training completion time. MRD and MRS denote MR SpaceMouse and MR subspace, respectively.
Figure 6. Boxplots of workload measures for each HRI scheme across two tasks. MRD and MRS denote MR SpaceMouse and MR subspace, respectively.
Figure 7. Boxplots of subjective measures of ease of use and usefulness for each HRI scheme across the two tasks. (a) The ease-of-use factor of user acceptance and perception; (b) the usefulness factor of user acceptance and perception. MRD and MRS denote MR SpaceMouse and MR subspace, respectively.
Table 1. One-way ANOVA statistics for all measures. B, MRD, and MRS denote Baseline, MR SpaceMouse, and MR subspace, respectively.
Measure | Partial Eta Squared | F | p | Post-Hoc MRS-MRD | Post-Hoc MRS-B | Post-Hoc MRD-B
Pick-and-place (s) | 0.87 | F(1.382, 31.778) = 148.198 | <0.001 | <0.001 | <0.001 | <0.001
Assembly (s) | 0.76 | F(1.553, 35.725) = 74.080 | <0.001 | <0.001 | <0.001 | <0.001
Aggregate Time (s) | 0.93 | F(1.875, 43.128) = 303.197 | <0.001 | <0.001 | <0.001 | <0.001
Physical Demand | 0.45 | F(1.971, 45.339) = 18.478 | <0.001 | <0.001 | <0.001 | 0.387
Mental Demand | 0.59 | F(1.995, 45.874) = 32.638 | <0.001 | <0.001 | <0.001 | 0.002
NASA TLX | 0.76 | F(1.663, 38.247) = 74.408 | <0.001 | <0.001 | <0.001 | <0.001
Usefulness | 0.41 | F(1.846, 42.449) = 15.794 | <0.001 | <0.001 | <0.001 | 1.000
Ease of Use | 0.69 | F(1.832, 42.133) = 50.205 | <0.001 | <0.001 | <0.001 | 0.299
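For readers who want to reproduce this style of analysis, the sketch below shows one possible way to obtain comparable statistics with the pingouin package; the synthetic data, column names, and parameter choices are assumptions for illustration, not the authors' analysis script.

import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
n = 24  # participants, one observation per participant and interface

# Synthetic long-format data standing in for the recorded measure
# (e.g., aggregate completion time in seconds); replace with real data.
df = pd.DataFrame({
    "participant": np.tile(np.arange(n), 3),
    "interface": np.repeat(["Baseline", "MRD", "MRS"], n),
    "time_s": np.concatenate([rng.normal(300, 26, n),
                              rng.normal(220, 23, n),
                              rng.normal(156, 17, n)]),
})

# One-way repeated-measures ANOVA; the sphericity correction yields
# fractional degrees of freedom of the kind reported in Table 1.
aov = pg.rm_anova(data=df, dv="time_s", within="interface",
                  subject="participant", correction=True, detailed=True)

# Pairwise post hoc comparisons with Bonferroni adjustment
# (the function is named pairwise_ttests in older pingouin releases).
posthoc = pg.pairwise_tests(data=df, dv="time_s", within="interface",
                            subject="participant", padjust="bonf")
print(aov)
print(posthoc)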
Table 2. Means and standard deviations of the objective and subjective measures for each compared method. MRD and MRS denote MR SpaceMouse and MR subspace, respectively.
Measure | Baseline Mean (Std. Dev) | MRD Mean (Std. Dev) | MRS Mean (Std. Dev)
Pick-and-place (s) | 136.06 (20.72) | 98.27 (17.00) | 58.52 (9.75)
Assembly (s) | 162.60 (23.61) | 122.88 (16.84) | 97.51 (12.18)
Aggregate Time (s) | 298.66 (26.42) | 221.15 (23.39) | 156.03 (17.19)
Physical Demand | 76.04 (11.32) | 70.88 (11.51) | 55.63 (12.54)
Mental Demand | 81.38 (13.29) | 68.46 (11.12) | 54.54 (13.15)
NASA TLX | 75.81 (6.23) | 66.89 (5.39) | 51.92 (8.15)
Usefulness | 2.83 (1.17) | 3.08 (1.10) | 4.75 (1.33)
Ease of Use | 2.67 (0.87) | 3.25 (1.39) | 5.50 (0.89)
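As a worked check of the relative improvements implied by the Table 2 means, the reductions achieved by the MR subspace (MRS) scheme in aggregate completion time and NASA TLX workload follow directly from the tabulated values:

\[
\frac{298.66 - 156.03}{298.66} \approx 0.48, \qquad
\frac{221.15 - 156.03}{221.15} \approx 0.29,
\]
\[
\frac{75.81 - 51.92}{75.81} \approx 0.32, \qquad
\frac{66.89 - 51.92}{66.89} \approx 0.22,
\]

i.e., MRS reduces the aggregate completion time by roughly 48% relative to the Baseline and 29% relative to MR SpaceMouse, and the overall perceived workload by roughly 32% and 22%, respectively.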
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
