Review

Integrating Virtual, Mixed, and Augmented Reality into Remote Robotic Applications: A Brief Review of Extended Reality-Enhanced Robotic Systems for Intuitive Telemanipulation and Telemanufacturing Tasks in Hazardous Conditions

1 Mechanical Engineering Department, College of Engineering, University of Canterbury, Christchurch 8041, New Zealand
2 Shien-Ming Wu School of Intelligent Engineering, South China University of Technology, Guangzhou 511436, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(22), 12129; https://doi.org/10.3390/app132212129
Submission received: 2 October 2023 / Revised: 2 November 2023 / Accepted: 5 November 2023 / Published: 8 November 2023
(This article belongs to the Special Issue Extended Reality Applications in Industrial Systems)

Abstract:
There is an increasingly urgent need for humans to interactively control robotic systems to perform ever more precise remote operations, concomitant with the rapid development of space exploration, deep-sea discovery, nuclear rehabilitation and management, and robotic-assisted medical devices. The potential high value of medical telerobotic applications was also evident during the recent coronavirus pandemic and will grow in the future. Robotic teleoperation satisfies the demands of scenarios in which human access carries measurable risk but human intelligence is still required. An effective teleoperation system not only enables intuitive human–robot interaction (HRI) but also ensures the robot can be operated in a way that allows the operator to experience the “feel” of the robot working on the remote side, gaining a “sense of presence”. Extended reality (XR) technology integrates real-world information with computer-generated graphics and has the potential to enhance the effectiveness and performance of HRI by providing depth perception and enabling judgment and decision making while operating the robot in a dynamic environment. This review examines novel approaches to the development and evaluation of an XR-enhanced telerobotic platform for intuitive remote teleoperation applications in dangerous and difficult working conditions. A particular focus of the review is the use of integrated 2D/3D mixed reality with haptic interfaces to perform intuitive remote operations that remove humans from dangerous conditions. This review also covers primary studies proposing Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) solutions where humans can better control or interact with real robotic platforms using these devices and systems to extend the user’s reality and provide a more intuitive interface. The objective of this article is to present recent, relevant, common, and accessible frameworks implemented in research articles published on XR-enhanced telerobotics for industrial applications. Finally, we present and classify the application context of the reviewed articles in two groups: mixed reality–enhanced robotic telemanipulation and mixed reality–enhanced robotic tele-welding. The review thus addresses the main elements of the state of the art for these systems and ends with recommended research areas and targets. The application range of these systems and the resulting recommendations are readily extensible to other application areas, such as remote robotic surgery in telemedicine, where surgeons are scarce and need is high, and other potentially high-risk/high-need scenarios.

1. Introduction

Robotic teleoperation is a technological approach allowing a human operator to perform a task by controlling a device or machine at a remote location. The distance can vary from the micron level (micro-manipulation) to the kilometer level (space applications). Human operators can adapt to irregular environments and employ a variety of strategies to place telerobots in favorable conditions, despite the complexity involved at remote sites.
Teleoperation provides an enormous advantage over other robot control methods because it combines human and robotic capabilities. A typical teleoperation system consists of a master device and a slave device. A human operator manipulates the master device and issues commands to the slave device, which is manipulated to achieve a certain goal. A robot manipulator that follows the motion of the master arm in real time, rather than being pre-programmed, is an example. The kinematics of the slave device can be identical to the kinematics of the master device, or they can be different. The slave kinematics can also be specifically scaled or even completely customized according to the task and requirements.
Despite the emergence and growing application of artificial intelligence, autonomous robots cannot reach the same level of intuition and reasoning as humans. Practical control algorithms can be used to process complex interactions and kinematics, yet such tasks remain better suited to humans, who perform them effortlessly in their daily lives [1]. One of the advantages of teleoperation is that it empowers the robot to fulfill tasks in unstructured or hostile environments where situational perceptions, cognitive abilities, and professional experience have a predominant impact on task execution [2]. Therefore, for tasks requiring a variety of human judgments and professional skills, it is practical and feasible to utilize teleoperation to accomplish the goals.
In general, application scenarios of teleoperation could include deep-sea robotics applications [3], space exploration [4,5], de-mining operations, search and rescue in disasters, inspection in restricted spaces [6], medical robot-assisted surgery [7], hazardous material treatment, and micro-manipulation or minimally invasive surgery, among many possibilities. These applications fall into areas of either high risk to humans or high need, where human capability is not available or cannot be present for other reasons. Space or underwater applications are examples of high-risk scenarios, while telerobotic surgery in remote locales is an example of a high-need application, where a skilled surgeon may not be locally available [8].
More specifically, teleoperation is suitable for tasks meeting the following set of criteria:
  • Tasks to be performed in unstructured and dynamic environments, such as deep-sea exploration and space applications.
  • Tasks involving operations in hazardous situations where human health would be severely harmed. For example, in the Fukushima nuclear disaster, there was a high demand for emergency treatment and rescue. Mining scenarios are another increasingly common example.
  • Tasks requiring dexterity, especially the coordination between hands and eyes. Medical surgery performed with remote robots is a typical example.
  • Tasks requiring object recognition, obstacle detection, or situational awareness, for example, inspection in confined spaces.
This literature review, unlike previous ones, specifically focuses on XR applications applied to human-in-the-loop telemanipulation and telemanufacturing scenarios. The objective of this review is to categorize the recent literature on XR-enhanced telerobotic frameworks for telemanipulation and telemanufacturing tasks, spanning the period from early 2016 to late 2023. Our aim is to comprehensively examine the evolution of this research field, pinpoint the primary areas and sectors where it is currently applied, elucidate the adopted technological solutions, and emphasize the key benefits attainable through this technology. In this context, we address substantial challenges and opportunities, with a specific focus on hardware and software integration to ensure stable and efficient communication and system adaptation for various scenarios. Our review delves into telemanipulation and tele-welding scenarios within industrial applications, offering a unique perspective on the integration of XR technology. Notably, our work innovatively summarizes how XR technology can enhance remote welding tasks, providing fresh and valuable insights in this domain. Additionally, our review recognizes the specific challenges encountered in integrating teleoperation systems with diverse hardware platforms and emphasizes the necessity for detailed information on the design of software architectures for interoperability. Through our meticulous analysis, we aspire to contribute significant knowledge to the evolving landscape of XR-enhanced telerobotic systems.
The scope of our work is focused on telemanipulation and telemanufacturing tasks in hazardous conditions. To achieve this, we conducted a search for papers published between 2016 and 2023 in relevant resources for extended reality and robotics, including IEEE Xplore, MDPI, ACM Digital Library, and Science Direct. Additionally, we present and classify the application context of the reviewed articles into two groups: mixed reality–enhanced robotic telemanipulation and mixed reality–enhanced robotic tele-welding. Our searches covered scholarly databases, academic journals, conference proceedings, and reputable online repositories related to the field of extended reality and robotics; they included keywords such as human–robot interaction, teleoperation, extended reality, virtual reality, mixed reality, and augmented reality and used Boolean operators to refine the results. We ensured that the selected resources were recent and peer-reviewed, emphasizing studies and articles published within the last seven years to maintain relevancy and accuracy.
This section provides a comprehensive overview of the article’s focus on immersive and intuitive telemanipulation frameworks using XR technologies. The rest of the article is structured as follows. Section 2 categorizes robotic teleoperation schemes by the relative location of human and robot, the operator’s viewpoint, and the command input device. Section 3 discusses intuitive and natural teleoperation, focusing on natural motion retargeting and multimodal feedback. Section 4 delves into XR-enhanced telerobotic frameworks for industrial applications and classifies the application context into two groups: MR-enhanced robotic telemanipulation and MR-enhanced robotic tele-welding. Section 5 outlines the challenges and discusses future opportunities in the field of research. Finally, Section 6 summarizes the key findings from the reviewed works and emphasizes the integration of XR, particularly MR, in telemanipulation systems and remote robotic welding.

2. Categories of Robotic Teleoperation

2.1. Collocated and Separated Teleoperation

Based on the relative locations of the human and robot, robot teleoperation can be categorized into collocated and separated teleoperation. In collocated teleoperation [9,10], the operator can directly observe the robot and its working environment in the same space, and a visual feedback system is not necessary; augmented reality technology can be used in collocated teleoperation to provide users with informative virtual content superimposed on the user–robot shared space [11]. Collocated robot teleoperation presents users with a robotic platform that shares a physical environment with the users, who have a natural and clear 3D view of the robot’s workspace [12]. However, although humans and robots physically share the same space, users can often only observe from a third-person viewpoint alongside the robot and cannot inspect the robot’s environment from the robot’s main viewpoint.
Spatially separated teleoperation is used in a wider range of scenarios where the user and the robotic system are far apart or where the user must be separated from the robot for safety reasons and cannot directly view the robot’s movements. This type of robot teleoperation system typically requires visual and haptic feedback systems to show the user how the robot is working at a distance, allowing the operator to make timely adjustments. However, in separated telemanipulation, users can typically only perceive the robot space through 2D camera streams and have difficulty synthesizing the 2D information collected by the robot with contextual knowledge of robot movements in its working space [13].

2.2. Ego-Centric and Exo-Centric Teleoperation

Depending on the operator’s perspective in observing the manipulation process, robot teleoperation can be divided into ego-centric and exo-centric teleoperation. In ego-centric robotic teleoperation, the user observes the remote manipulation process from the same perspective as the robot and perceives themselves as one with the robot [14]. The operator controls the remote robot with human-level dynamics through immersive and multimodal control–feedback schemes. Ego-centric robotic teleoperation transfers human-level manipulation and proficiency in telemanipulation applications and equips robotic systems with human-level motor skills. The user does not need to consider the mapping relationship between the robot’s actions and the user’s input commands [15,16].
Exo-centric teleoperation usually involves inspecting the robot and its interactive environment from a third-person perspective while manipulating the robotic system with a broader view of observation (Figure 1) [17]. With the help of mixed reality technology, users can still experience coexistence with the robot in the same virtual space, but without the immersion and intuition of being one with the robot.

2.3. Robotic Mechanism-Based and Motion Sensor-Based Teleoperation

Robot teleoperation can be divided into robotic mechanism–based and motion sensor–based teleoperation, according to the command input device used. Teleoperation systems that use robots as master mechanisms usually apply robotic devices of the same or different structure as the remotely controlled robot as the user command input system [18]. Using the same robots to form the master–slave architecture for teleoperation, motion synchronization can be accomplished by simple joint space mapping, and the user can directly visualize the pose of the slave robot by observing the master robot being controlled [19,20,21]. When using robots of different configurations as master and slave components for teleoperation, motion synchronization can be achieved using joint space motion remapping or tool center point (TCP) motion in Cartesian space, and a scaling factor can be set to match the working space ranges of the master and slave robots.
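As an illustration of the Cartesian remapping with a scaling factor described above, the following minimal Python sketch maps a master-device TCP displacement into the slave robot’s workspace; the home poses, the 0.5 scaling factor, and the unscaled pass-through of orientation are illustrative assumptions rather than details taken from the cited systems.

import numpy as np

def remap_master_to_slave(master_pose, master_home, slave_home, scale=0.5):
    """Map a master-device TCP pose into the slave robot's workspace.
    Each pose is a (position 3-vector, 3x3 rotation matrix) tuple expressed in
    the corresponding device's base frame; scale shrinks (<1) or amplifies (>1)
    the translational motion, while orientation is passed through unscaled."""
    p_m, R_m = master_pose
    p_m0, R_m0 = master_home
    p_s0, R_s0 = slave_home
    # Translational offset of the master from its home pose, scaled into the slave frame.
    p_s = p_s0 + scale * (p_m - p_m0)
    # Relative rotation of the master since homing, applied to the slave's home orientation.
    R_s = R_s0 @ (R_m0.T @ R_m)
    return p_s, R_s

# Hypothetical usage: identity orientations, master moved 10 cm along x.
home = (np.zeros(3), np.eye(3))
moved = (np.array([0.10, 0.0, 0.0]), np.eye(3))
print(remap_master_to_slave(moved, home, (np.array([0.40, 0.0, 0.30]), np.eye(3))))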
The motion sensor–based robotic teleoperation platforms eliminate the need to introduce expensive and complex robot platforms as human–robot interaction command input devices. Commercial optical and IMU-based sensors, depth cameras, motion controllers, and gloves are often used to capture user motion information [22]. Compared to the robotic master devices, motion sensor–based robotic teleoperation platforms usually have a larger space for motion tracking, the user’s movement is not constrained by the mechanical structure, and the relatively lower price reduces the overall cost of the system [23,24].

3. Intuitive and Natural Teleoperation

In conventional telemanipulation systems, the user controls the remote robotic system with a joystick, gamepad, keyboard and mouse, or 3D mouse and simultaneously receives visual feedback from 2D displays [25,26]. Robot control is not intuitive and natural to the user [27,28]. The mismatch between the range of user control space and the limits of input device workspace can increase the difficulty of telemanipulation and lead to poor operation [29,30]. Another disadvantage of typical telerobotic systems is the lack of depth perception due to monocular, 2D visualization of the remote site [31,32,33], limiting operator performance [34,35] and any feelings of immersion and telepresence in the remote workspace [36,37].
Intuitive human–robot interaction schemes are needed so the operator can easily guide the robotic system using natural motion. A conventional intuitive teleoperation platform was developed in [38] (Figure 2). The user manipulates the seven degree-of-freedom (DOF) master robot to intuitively drive the slave robot at a distance as if they were directly operating the slave robot. This platform requires a symmetrical relationship between the input (master) and output (slave) mechanisms. However, with only 2D visualization of the telemanipulation process, the user lacks a sense of presence. Monitoring the interaction process with a 2D monitor on the user side can also distract the user from effective robot control activities. In addition, using an isomorphic master robot as the motion input device on the user side in telemanipulation systems often leads to a significant hardware investment.
Several noteworthy studies and frameworks have effectively implemented natural motion retargeting and multimodal feedback. One notable example is the work of Akshit et al. [39], in which the authors developed a telerobotic system integrating natural motion retargeting algorithms, significantly enhancing the operator’s control precision. Another novel interface that features a motion retargeting method for effective mimicry-based teleoperation of robot manipulators was introduced in [40]. This framework allows novice users to interact with the remote robotic manipulators effectively and intuitively. Additionally, the framework proposed by [41] successfully incorporates multimodal feedback, combining visual, auditory, and haptic cues, leading to improved user immersion and task performance. These studies demonstrate the practical implementation of natural motion retargeting and multimodal feedback in XR-enhanced telerobotics, emphasizing their impact on enhancing user experience and task efficiency.
With the inevitable limitations in the field of view of remote on-site 2D cameras and the poor quality of the visual feedback sent back from the robot’s workspace, users are often unable to achieve a level of situational awareness sufficient for effective and safe remote manipulation tasks [42]. The operator is unable to perceive the physical environment of the worksite in 3D due to the lack of stereoscopic depth information. Typical monocular visual feedback strategies greatly reduce the efficiency of remote-controlled robotic operations [43,44].
More specifically, the lack of depth information from 2D vision causes ambiguity about the spatial positions of objects in static images. Operators cannot directly perceive depth in monoscopic vision (MV) streams and have difficulty in accurately determining the distance between the robot end-effector and workpieces. Thus, the perception afforded by 2D visual feedback is insufficient for accomplishing remote tasks effectively, and the success and efficiency of teleoperated tasks depend mainly on operator skill and proficiency [45,46].
Multi-camera arrangements can provide the relative position of objects and allow users to view and judge the distance from various perspectives, compensating for the limits of a single monocular 2D camera. However, as telerobotic systems become more complex, avoiding robot collisions with surrounding objects using only multiple 2D video feeds inevitably leads to low robot control efficiency and increased operator workload.
Different levels of depth perception in telemanipulation can improve task performance for various manipulation and grasping tasks [47,48,49]. Using binocular cameras can provide additional depth information to assist operators. Stereoscopic vision and point clouds are two methods of providing stereo-depth information and have been applied in various telerobotic systems for remote vision [50,51,52]. Binocular visual feedback for robotic teleoperation has been widely studied on unmanned aerial vehicles and mobile robots [53,54], but leveraging depth perception in remote teleoperation tasks for general object manipulation has not been well studied on mobile manipulators. In addition, neither multi-camera nor binocular camera setups allow the user the flexibility to continuously change the viewing angle to observe the remote work without introducing additional mechanisms on the robot side [55].
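To make the point cloud option above concrete, the short Python sketch below back-projects a depth image into a 3D point cloud in the camera frame using a pinhole model; the intrinsic parameters, image size, and millimeter depth units are hypothetical placeholders, not values from any of the reviewed systems.

import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy, depth_scale=0.001):
    """Back-project a depth image (H x W, raw sensor units) into an N x 3 point
    cloud in the camera frame, using pinhole intrinsics fx, fy, cx, cy.
    depth_scale converts raw units to meters (e.g., 0.001 for millimeters)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth.astype(np.float32) * depth_scale
    valid = z > 0                                    # drop pixels with no depth return
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=-1)

# Hypothetical 640x480 depth frame with round-number intrinsics.
cloud = depth_to_point_cloud(np.random.randint(400, 2000, (480, 640)), 600.0, 600.0, 320.0, 240.0)
print(cloud.shape)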
Incorporating haptic feedback in telerobotics offers numerous advantages in creating immersive and natural teleoperation schemes, notably enhancing the user’s tactile experience and augmenting immersion [56]. This technological integration enables users to perceive and manipulate objects remotely with a heightened sense of realism. Such realistic feedback proves invaluable in tasks necessitating a keen sense of touch, such as delicate surgical procedures or the handling of fragile items [57]. Additionally, haptic feedback serves to enhance the operator’s spatial awareness, offering vital cues about the robot’s environment and the forces it encounters. This heightened perception significantly contributes to the precise control and manipulation of the robotic system. Moreover, haptic interfaces alleviate the cognitive burden on operators by delivering intuitive, natural feedback. This intuitive interaction allows operators to concentrate more on the task at hand, reducing their dependence on interpreting visual information [44].
Nevertheless, the integration of haptic feedback into telerobotics is not without challenges. One of the primary hurdles involves ensuring real-time responsiveness, as any delay in haptic feedback can disrupt the user’s control and sense of immersion [58]. Achieving high-fidelity haptic feedback demands advanced hardware components and sophisticated algorithms, thereby augmenting the overall complexity and cost of the system. Additionally, the design of haptic interfaces catering to a diverse array of tasks and user preferences poses a formidable challenge [59]. Different applications may require varying levels of force, precision, and feedback sensations, necessitating a delicate balance to accommodate these diverse requirements effectively. Addressing these challenges is imperative for harnessing the full potential of haptic feedback in telerobotics, ensuring seamless integration and optimal user experience in various telerobotic applications.
In the process of integrating robotic teleoperation systems with diverse hardware platforms, several specific challenges were encountered. Ensuring compatibility between various hardware components, such as sensors, actuators, and communication interfaces, was a primary challenge due to differences in data formats, protocols, and communication speeds, which posed hurdles in achieving seamless integration [60]. Additionally, interfacing multiple sensors, including cameras, depth sensors, and force/torque sensors, required careful consideration and calibration to integrate data from different manufacturers in real time, ensuring accurate and coherent information for operators [61]. Communication delays between different hardware components could disrupt the real-time feedback loop, impacting teleoperation tasks that demand quick response times. Minimizing latency and optimizing communication protocols were crucial in maintaining stable and efficient communication [62]. Furthermore, coordinating control signals sent to different hardware modules, such as manipulator arms and grippers, necessitated precise synchronization to prevent erratic movements or inaccuracies, highlighting the importance of synchronization mechanisms for ensuring precise and coordinated actions.
To overcome these challenges and ensure stable and efficient communication in the integration of teleoperation systems with diverse hardware platforms, several strategies were employed. Implementing standardized communication protocols, such as ROS (Robot Operating System) messages, facilitated interoperability between different hardware components, ensuring consistent data exchange and compatibility across various devices that use different programming languages. Designing a unified software architecture with ROS and interface engines, such as Unity 3D, which abstracted underlying hardware complexities, was essential. This design allowed for the implementation of high-level control algorithms and user interfaces without the need to consider hardware-specific details, thereby promoting interoperability and ease of development [63,64]. In summary, the integration of ROS and Unity-based software architectures enables easy expansion and adaptation to new devices and sensors used in telemanufacturing applications, such as remote robotic welding.
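A common way to realize this ROS/Unity bridging is to exchange standard ROS message types over a rosbridge WebSocket; the minimal Python sketch below uses the roslibpy client library to publish a geometry_msgs/PoseStamped target pose that a Unity-side client subscribed to the same topic could consume. The host, port, and topic name are illustrative assumptions.

import roslibpy

# Connect to a rosbridge_server WebSocket running on the robot side (hypothetical host/port).
client = roslibpy.Ros(host='robot.local', port=9090)
client.run()

# A standardized message type keeps the Unity interface and the robot controller interoperable.
cmd_topic = roslibpy.Topic(client, '/teleop/target_pose', 'geometry_msgs/PoseStamped')

pose_msg = {
    'header': {'frame_id': 'base_link'},
    'pose': {
        'position': {'x': 0.45, 'y': 0.0, 'z': 0.30},
        'orientation': {'x': 0.0, 'y': 0.0, 'z': 0.0, 'w': 1.0},
    },
}
cmd_topic.publish(roslibpy.Message(pose_msg))
client.terminate()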
Teleoperation systems cope with dynamic environments by integrating advanced technologies such as sensor fusion, artificial intelligence, machine learning, adaptive algorithms, and human supervision [65,66]. These systems use various sensors to gather real-time data and employ AI algorithms to process and adapt to dynamic changes. Environmental mapping techniques, like simultaneous localization and mapping (SLAM), enable the system to create and update detailed maps, aiding in adaptive navigation. Dynamic path planning algorithms calculate optimal routes in real time, considering obstacles and changing environmental conditions [67]. Additionally, collaborative teleoperation strategies leverage human expertise for high-level decision making while allowing robots to execute tasks based on human guidance, ensuring adaptability and safe operation in complex and unpredictable environments. Human supervisors also play a vital role in remotely monitoring operations, providing contextual understanding and intervention capabilities when necessary [68].
In adapting teleoperation technology for telehealth scenarios, particularly in remote regions, several key strategies can be employed. Firstly, enhancing the robustness of internet connectivity through satellite-based solutions and low-bandwidth protocols can ensure real-time communication between remote robots and healthcare professionals [69]. Developing affordable and portable robotic platforms, customized for telehealth applications, can address cost and accessibility issues. Moreover, comprehensive training programs for local healthcare providers are crucial to ensure effective operation and maintenance of teleoperation equipment [70]. However, these implementations are not without challenges. Limited infrastructure, such as unstable internet connectivity and power supply, poses a significant hurdle. Finding cost-effective solutions, considering the financial constraints of remote regions, is crucial for the widespread adoption of teleoperation technology [71]. Addressing training gaps and ensuring cultural acceptance are equally vital. Ethical considerations, particularly related to patient consent and data security, require careful attention. Moreover, establishing efficient maintenance and technical support systems in remote regions, where specialized technicians are scarce, is essential to guarantee the continuous and effective operation of telehealth robots. By proactively addressing these challenges, teleoperation technology can be successfully adapted to transform healthcare delivery in remote areas [72,73].
This section on intuitive and natural teleoperation highlighted two important requirements for an effective and intuitive teleoperation platform. First, natural motion retargeting, or the use of natural human behavior to teleoperate robotic systems, effectively reduces training costs and grants human-level dynamics to telerobotic systems; it is an important component of natural and intuitive human–robot interaction systems. Second, sufficient multimodal feedback, particularly visual feedback, is pivotal in interactive telemanipulation systems, providing depth information and enhancing situational awareness so the operator knows the dynamic situation in the robot’s workspace. The human operator in the control loop must be aware of the relative positions of the robot end-effector and the workpieces in the workspace to make decisions and avoid collisions or damage to the robotic system, the working area, and/or the environment.

4. Mixed Reality for Human-Robot Interaction

4.1. Human-Robot Interaction

Human–Computer Interaction (HCI) delves into the study of how individuals interact with computers and the extent to which computers are designed for successful human interaction [74]. This discipline encompasses the design, evaluation, and implementation of interactive computing systems tailored for human use, along with the examination of significant phenomena associated with them. HCI fundamentally revolves around comprehending the intricate relationship between humans and computers and crafting computer systems that facilitate meaningful and efficient interactions [75]. It places a strong emphasis on enhancing the usability and user experience of computer interfaces, ensuring technology is accessible, efficient, and enjoyable for users [76]. Human–Robot Interaction (HRI) constitutes a multidisciplinary field focused on comprehending, designing, and evaluating interactions between humans and robots [77]. This domain involves the comprehensive study, design, and analysis of behaviors, interfaces, and systems geared towards robots interacting with humans across diverse contexts [78]. The overarching objective of HRI is to develop robots capable of collaborating effectively and intuitively with humans, offering assistance or services in a variety of settings, including homes, workplaces, healthcare facilities, and public spaces [79].

4.2. Reality–Virtuality Continuum

Extended reality (XR) encompasses the mutual embedding and fusion of physical and virtual scenes, along with the human–computer interaction occurring in the generated environment [80], where X represents the degree of interpenetration and integration of real scenes and virtual content in spatial or immersive computing technologies, involving the user’s sense of presence and acquisition of perception [81]. XR spans the entire spectrum from entirely realistic to fully virtual, encompassing all representative forms and possible variations of AR, VR, AV, MR, and other interpolations between reality and virtuality [82,83]. The reality–virtuality continuum presented by Milgram et al. [84] outlines the XR ecosystem and the relationship between AR/VR/AV/MR and fully real and virtual environments, which are schematically illustrated in Figure 3.
Augmented reality (AR) overlays informative virtual 3D graphics onto the physical world and allows real-time interaction with the 3D graphics, enabling the user to reach into the augmented world and manipulate 3D objects directly as if they were real-world objects [85]. The advantage of AR is that it superimposes important cues on physical objects or hovers the display in real space to indicate the user’s potential or actual future actions, or the robot’s motion planning, while eliminating distractions caused by typical display methods [86]. AR–HRI interfaces provide users with an exo-centric view of the robot and its surroundings and allow operators to maintain situational awareness of the robot and ensure intuitive interaction and communication between the human and robot using multimodal interfaces [87].
Virtual reality (VR) uses fully immersive computer-generated graphics with scenes and objects that appear to be real to completely replace the physical surroundings in which the user is located [88]. In the virtual environment, users perceive immersion in a scene that may be identical to or completely different from their physical surroundings and experience the imaginary environment through multiple modalities of visual, acoustic, and haptic information [89]. VR allows users to move around in the scene and manipulate the virtual objects by using wireless controllers or haptic robotic structures as input sources.
Mixed reality (MR) technology involves the merging of physical and imaginary spaces and does not occur exclusively in the physical or virtual world [90]. MR enables real-time visualization and interaction between physical and digital content [91] and takes full advantage of the visual information of real scenes and the three-dimensional immersion and interaction provided with virtual cues [92]. Head-mounted display (HMD)-based mixed reality can be achieved in two ways: either by displaying the camera captured video feedback of the real world in the HMD or by allowing the user some direct visibility of the real world in the HMD [93,94].
Augmented virtuality (AV) is a fully immersive computer-generated virtuality-dominated environment enhanced by sensory data from physical environments [95]. AV keeps the virtual environment central but superimposes the real-world elements on the virtual content [96]. AV is also classified as a subset of MR. The term AV is less commonly used in the literature for immersive telerobotic interfaces and is often generally referred to as MR.
In MR, the hierarchy of reality spans from partial sensory input to a proximity infinitely close to immersive reality [97]. The reviewed MR-enhanced intuitive and immersive teleoperation schemes in this article fall into the category of partially superimposing real-world visual inputs onto computer-generated virtual content.

4.3. Mixed Reality–Enhanced Telemanipulation

4.3.1. Mixed Reality–Enhanced Intuitive Telemanipulation

In recent years, mixed reality technologies have been widely applied in the field of robotics, and these research directions can be divided into three categories according to application aspects: (1) human–robot interaction (HRI): immersive teleoperation, intuitive telemanipulation, collaborative interaction, wearable robots, haptic effects, and virtual devices; (2) medical robotics: robot-assisted surgery (RAS), prosthetics, and robot-assisted rehabilitation and training devices; (3) robot motion planning and control: trajectory generation, robot programming, simulation, and manipulation. Table 1 outlines the main differences between this article and previous research in the realm of MR-enhanced robotic manipulation.
MR interfaces enhance operator immersion through realistic 3D visualizations and interactive elements. To minimize fatigue, haptic feedback and intuitive gestures are integrated, enhancing the user experience. Additionally, optimizing rendering algorithms is essential. Techniques like foveated rendering, where high-quality graphics are focused on the user’s gaze, ensure a smooth experience while conserving computational resources [101]. MR has been applied to robotic teleoperation systems to optimize operator immersion and enhance user perception of the remote side to enable immersive robotic teleoperation (IRT) [102,103,104]. A three-dimensional virtual world similar to the slave side can be simulated through MR and displayed to the user on the master side. By implementing MR as a human–robot interaction interface (HRI), the user experiences a physical presence in the remote environment and co-existence with the robotic platform via MR subspace, while guiding and monitoring the robotic platform at the local user space [105,106]. MR interfaces blend virtual elements with the real world, providing operators with an immersive 3D visualization of the remote environment, such as a welding site [107]. This immersion allows operators to perceive the task space in depth, enhancing their understanding of the work area and minimizing cognitive load. By feeling present in the remote environment, operators can focus more effectively on their tasks, reducing distractions and mental fatigue [108]. MR-based user interfaces with natural control commands minimize fatigue by utilizing human instincts, such as motion and gestures, to interact with complex robotic systems [109]. MR-enhanced teleoperation allows direct mapping of control commands and actions between the user and the robot, presenting the potential for performance enhancements. It serves as an intermediary for integrating imitation-based motion mapping and 3D visualization techniques [110].
Recent research on MR-enhanced teleoperation systems has primarily concentrated on gathering demonstrations for robotic learning, incorporating haptic feedback and other sensory inputs, developing immersive manipulation, integrating AI and machine learning, and addressing issues related to poor virtual transparency [111,112,113]. However, these telerobotic systems do not fully exploit the potential performance enhancements provided by using an MR subspace as an intermediary for the integration of imitation-based motion mapping and 3D visualization mapping. Figure 4 shows the typical diagram and communication structure of the XR-enhanced intuitive and immersive teleoperation systems. The XR-enhanced robotic teleoperation system consists of four parts: (1) the XR-enhanced human–robot interaction unit; (2) the XR scene for vision and motion mapping; (3) the remotely controlled robot working unit; and (4) the bi-directional communication links. Dennis Krupke et al. [114] developed an MR-enhanced heterogeneous robot telemanipulation system that presents the real robot working space and the corresponding virtual scene presented to the operator in the robot telemanipulation architecture for immersive and intuitive remote grasping. The pose of the robot replica in MR is synchronized with the current pose of the physical robot via messages. The communication between the robot and MR scene is maintained via the ros-bridge. A virtual screen on the left wall augments the virtual scene by displaying an image stream from the camera mounted above the physical robot. However, intuitive teleoperation is only applied to the robotic hand instead of the arm-hand system in its entirety, limiting the workspace and overall application significantly.
In [111], researchers from the University of California, Berkeley, built an MR-based teleoperation system on a PR2 robot through imitation learning. The system allows the operator to teleoperate robots to perform complicated tasks naturally and intuitively. In the proposed teleoperation system, a robot is controlled at a distance by the human operator through an MR-based telerobotic interface with overlaid information, which is an effective approach to collect high-quality demonstrations for training the robot. Imitation learning techniques allow the robot to imitate human behavior and acquire skills through perceiving human demonstrations aiming at performing specific tasks.
An MR control room for dynamic vision and movement mapping between the operator and a dual-arm robotic agent was developed for tele-manufacturing (Figure 5) [115]. The multiple monocular sensor displays and the motion mapping approach via MR outperform telerobotic platforms with direct camera feeds. The control room has objects and controls floating in space, which allows the user to perform movements relative to 3D markers to command the remote-controlled robot. However, the researchers did not determine whether the immersive experience led to performance improvements compared to conventional 2D HRI platforms, so the impact remains unknown and unquantified.
ROS Reality [103,104] is an open-source MR-based telerobotic manipulation framework that was developed at Brown University (Figure 6). This work enables communication and interaction between ROS-based robots and Unity-compatible MR systems. A total of 24 dexterous telerobotic manipulation tasks were conducted by expert users with ROS Reality and compared against direct kinematic manipulation of the Baxter robot; the remote-controlled robotic platform was targeted at expert teleoperators. However, user efficiency and system functionality were not verified with novice users for manufacturing-related tasks. This system can be used as a data acquisition and validation platform for learning from demonstration (LfD) and other machine learning approaches to transfer human expertise and skills to robots.
In [68], the authors developed an MR-based telemanipulation system to control a robotic arm-hand system. The MR scene is augmented by real-time data from the robot task space to enhance the operator’s visual perception. The system incorporates a new interactive agent to control the robot and reduce the operator’s workload. Two control algorithms are introduced into the MR-based teleoperation system to improve the long-distance and fine motions of the robot. Telemanipulation experiments using a UR10 robot arm and Robotiq-85 gripper demonstrate the feasibility of the proposed telerobotic paradigm.
The results of our review indicate that using motion input devices to build intuitive robot teleoperation with XR technology has become a focus of research in natural and immersive HRI applications. In this context, a common application is the use of VR controllers to capture user motion to drive physical robots, as described in [116]. However, few works compare the effectiveness and efficiency between different control–feedback methods for the teleoperation system. An exception is [117], which presents an integrated mapping of motion and visualization scheme based on the MR subspace approach for intuitive and immersive telemanipulation of robotic arm-hand systems. The effectiveness of different control–feedback methods for the teleoperation system is validated and compared.
Evaluating the effectiveness of virtual reality (VR) training technology for preparing operators in dynamic industrial environments, particularly for unforeseen situations during teleoperation, involves several key aspects and methodologies. Researchers use techniques such as assessing performance metrics like task completion time, accuracy, error rates, and response time [118]. They introduce scenario complexity through variations and stress tests, examining adaptability. Cognitive load and decision making are analyzed using eye tracking, think-aloud protocols, and surveys [119]. Biometric data like heart rate, galvanic skin response, and electroencephalography offer insights into emotional and cognitive states [120]. Feedback and iterative design involve post-training debriefing and continuous scenario refinement. Long-term skill retention is evaluated through follow-up assessments, and comparative studies compare VR-trained operators with those from traditional methods [40]. Real-world performance correlation assesses VR training’s translation into job performance. This comprehensive approach aids in understanding VR training effectiveness, informing improvements in preparing operators for unforeseen teleoperation challenges in dynamic industrial settings.

4.3.2. Mixed Reality–Based Vision Mapping and Merging

Mixed reality (MR) scenes feature immersive integration of multiple 3D/2D visual display modes for the user. At present, various teleoperation platforms have focused on MR-enhanced robot control [93], yet it is not known whether using stereoscopic vision and point clouds within an MR environment can enhance users’ stereoscopic perception and task performance in separated telerobotic operations. As noted in Section 4.3.1, the MR control room for dynamic vision and movement mapping between the operator and a dual-arm robot introduced in [115] outperforms telerobotic platforms with direct camera feeds, but the authors did not determine whether the immersive experience led to performance improvements.
In [31], the authors presented an MR teleoperation interface with visual inputs from monoscopic and stereoscopic camera setups for remote mobile manipulation tasks in hazardous production environments. This system is equipped with two monocular cameras and a stereoscopic camera at the robot’s working site. Users acquire multi-view 2D images and stereo vision with depth cues in a Unity-generated MR control room. However, intuitive control of the robotic platform as a whole was not assessed or presented. In addition, no comparative tests with typical telemanipulation systems were performed to verify the performance and efficiency of the proposed system. To reduce the training time for teleoperation, a multi-view merging method via MR was designed in [121] (Figure 7), and the platform provides users with intuitive control of the robot’s motion by using commercial VR controllers.
Yiming Luo et al. [122] explored the use of stereoscopic view in an immersive manner for mobile robot teleoperation and navigation, and the results showed the stereoscopic view and immersive perception via virtual reality head-mounted displays (VR-HMDs) provided the user with depth cues and improved user performance and system usability. However, the effect of stereoscopic perception on telerobotic manipulation and the depth cues provided by other 3D visualization resources such as point cloud was not studied.
One exception is [118]; this work evaluates the impact of depth perception and immersion provided by integrated 3D/2D vision and motion mapping schemes on teleoperation efficiency and user experience in an MR environment. In particular, the MR-enhanced systems maintain spatial awareness and perceptual salience of the remote scene in 3D, facilitating intuitive mixed-reality human–robot interaction (MR-HRI). This study compared two MR-integrated 3D/2D vision and motion mapping schemes against a typical 2D baseline visual display method through pick-and-place, assembly, and dexterous manufacturing tasks.
Incorporating 2D/3D vision mapping into mixed reality is just one facet of the growing adoption of XR-enhanced intuitive HRI; digital twins also play a pivotal role in facilitating intuitive teleoperation within the MR scene [116]. These digital twins, mirroring the physical robots, provide users with real-time information about the remote robot and seamlessly integrate into the human–robot interface [123,124]. Within the mixed reality environment, users can interact with these digital twins naturally, enabling precise and intuitive control over robotic systems while obtaining real-time status information about the remote robot from the digital twins [125,126]. Digital twins enhance the user’s sense of presence and control in teleoperation by providing visual representations, spatial awareness, real-time monitoring, predictive capabilities, and interactivity [127,128]. These capabilities empower operators, enabling them to teleoperate robots effectively and with a heightened sense of immersion and control. Several challenges are associated with the real-time integration of digital twins with robotic systems. Achieving real-time synchronization between the digital twin and the physical robot is crucial for seamless robotic telemanipulation [129]. Ensuring data accuracy and calibration poses another significant challenge, as digital twins heavily depend on precise sensor data to accurately represent the state of the physical robot [130]. Additionally, creating detailed and accurate digital twin models for complex robotic systems presents a formidable obstacle [131,132]. The complexity of the physical system must be faithfully mirrored in the digital twin to provide valuable and meaningful insights, adding to the complexity of the integration process [133].
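To illustrate the synchronization and data-accuracy concerns raised above, the toy Python sketch below mirrors the latest reported joint state of a physical robot in a digital-twin object and flags the twin as stale when updates stop arriving; the class, the names, and the 100 ms staleness threshold are illustrative assumptions rather than elements of any cited framework.

import time

class DigitalTwin:
    """Minimal digital-twin state holder: stores the most recent joint state of
    the physical robot and reports staleness when updates stop, one of the
    real-time synchronization concerns discussed above."""

    def __init__(self, joint_names, max_age_s=0.1):
        self.joint_names = joint_names
        self.positions = [0.0] * len(joint_names)
        self.last_update = None
        self.max_age_s = max_age_s

    def on_joint_state(self, positions, stamp=None):
        # Called whenever a new joint-state message arrives from the real robot.
        self.positions = list(positions)
        self.last_update = stamp if stamp is not None else time.time()

    def is_stale(self):
        # True if no update arrived within the allowed window, i.e., the twin may
        # no longer reflect the physical robot and should be flagged in the MR scene.
        return self.last_update is None or (time.time() - self.last_update) > self.max_age_s

twin = DigitalTwin(['shoulder', 'elbow', 'wrist'])
twin.on_joint_state([0.1, -0.5, 1.2])
print(twin.positions, twin.is_stale())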
Multimodal feedback and haptic interfaces play a crucial role in XR-enhanced telerobotics by providing users with enhanced immersion, increased spatial awareness, and improved perception [134,135]. Haptic interfaces provide users with a sense of touch and force feedback, allowing them to perceive and interact with virtual objects and environments more accurately and safely [57]. Several specific studies and experiments have highlighted their significance. In [136], the operator controls the remote robot in AR-based immersive and multimodal control–feedback schemes in telepresence robot applications. In [32], the authors studied XR-based multimodal interfaces and the improvements in teleoperation through comparison of how all combinations of audio, visual, and haptic interfaces affect performance during manipulation. The authors of [56,63,137] conducted experiments involving practical manipulating and manufacturing tasks, including item manipulation and item delivery of tools and components. The results demonstrate the feasibility and significance of implementing multimodal feedback, especially haptic feedback, in telerobotic systems. In summary, multimodal feedback and haptic interfaces are essential components of XR-enhanced telerobotics, significantly improving user experience and safety and the overall effectiveness of teleoperation tasks [44,58].

4.4. Mixed Reality-Enhanced Robotic Tele-Welding

4.4.1. Robotic Tele-Welding

Welding has been used extensively in the maintenance of nuclear plants, the construction of underwater structures, and the repair of spacecraft in outer space [138]. In these hazardous situations, in which human welders have no effective access, the judgment and intervention of human operators are still required [139]. Customized production is also an application scenario for tele-welding, where welders often work in environments with dust, strong light, radiation, and explosion hazards [140]. Human-in-the-loop (HITL) robotic tele-welding strategies have become a feasible approach for removing humans from these dangerous, harmful, and unpleasant environments while performing welding operations [141]. Robotic tele-welding systems (RTWSs) combine the advantages of humans and robotics and coordinate the functions of all system components efficiently and safely [142]. For example, RTWSs can address geographical limitations for scarce welding professionals and bring a remote workforce into manufacturing.
Welding training is a time-consuming and costly process. Intensive instruction and training are usually required to bring unskilled welders to an intermediate skill level [143]. It is important to analyze the differences between the operating skills of professional and novice welders to facilitate the professional welding level of unskilled welders and to further improve the feasibility, efficiency, and welding quality of RTWSs used by novice welders during remote welding operations.
Thus, the expertise and skill extraction of professional welders as well as the application of robot assistance in on-site welding operations have become popular research topics [144]. The implementation of interactive robots can stabilize the hand movements of novice welders for improved welding quality. However, robot-assisted welding has not been studied in teleoperated welding scenarios. Welding motion capture systems were used in [145] and [146] to differentiate between professional and unskilled welders in terms of operational behavior in the gas tungsten arc welding (GTAW) process, providing an experimental basis for the development of robot-assisted tele-welding schemes. The experiments in [147] revealed the differences between professional and unskilled welders in the trajectory of the GTAW hand movements and indicated the main cause of unsatisfactory welding results is that novice welders make abrupt movements in the direction perpendicular to the weld surface. However, the operational difference of gas metal arc welding (GMAW) between professional and novice users was not well examined [148].
In industrial teleoperation contexts, control algorithms play a critical role in managing communication latency during human–robot interactions [149]. These algorithms employ predictive modeling and adaptive strategies to compensate for the delay between the operator’s input and the robot’s response [66,150]. By anticipating the robot’s movements based on the operator’s commands and real-time feedback, these algorithms help synchronize actions despite the latency [151]. The impact of communication latency on welding precision is noticeable. In tele-welding tasks, precision is paramount, requiring accurate control of the welding tool. Latency can lead to misalignments and imprecise movements, affecting the quality and integrity of the weld [152]. Control algorithms work to minimize this impact, ensuring that the robot’s actions closely align with the operator’s intentions, thus maintaining welding precision even in the presence of communication delays [153]. However, achieving real-time synchronization remains a significant challenge, necessitating continuous advancements in control algorithms to enhance both precision and efficiency in teleoperated robotic welding scenarios.
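One simple form of the predictive compensation described above is constant-velocity extrapolation of the operator’s commanded position over the measured communication delay; the Python sketch below is a deliberately minimal stand-in for the more sophisticated predictive models in the cited work, and the 120 ms latency in the example is a hypothetical value.

import numpy as np

def predict_command(prev_pos, prev_t, curr_pos, curr_t, latency_s):
    """Extrapolate the operator's hand/tool position latency_s seconds ahead,
    assuming constant velocity between the last two samples, so the command
    sent to the remote welding robot partially compensates the round-trip delay."""
    prev_pos = np.asarray(prev_pos, dtype=float)
    curr_pos = np.asarray(curr_pos, dtype=float)
    dt = curr_t - prev_t
    if dt <= 0 or latency_s <= 0:
        return curr_pos
    velocity = (curr_pos - prev_pos) / dt
    return curr_pos + velocity * latency_s

# Operator moving 5 cm/s along the weld seam, 120 ms measured latency.
print(predict_command([0.000, 0.0, 0.0], 0.0, [0.005, 0.0, 0.0], 0.1, 0.12))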
Welding automation has advanced significantly with the integration of teleoperation and artificial intelligence (AI). Teleoperation allows remote control and monitoring of welding robots, while AI algorithms enhance precision and efficiency [140]. Machine learning adapts welding parameters based on historical data, ensuring tailored and optimal welding conditions. AI-powered computer vision systems enable real-time quality control by detecting defects, leading to immediate adjustments. Predictive maintenance algorithms anticipate equipment issues, reducing downtime [154]. Additionally, AI optimizes path planning, minimizing unnecessary movements, and collaborative robots, guided by AI, work alongside human welders, ensuring precise and efficient welding outcomes [155]. These integrations enhance adaptability, productivity, quality, and cost-effectiveness across various industries.

4.4.2. Mixed Reality–Enhanced Robot-Assisted Welding

Recent research on human-centered robotic welding has focused on the development of MR-based robot-assisted welding training platforms, intuitive programming for telerobotic welding, interactive telerobotic welding design, and MR-enhanced tele-welding paradigms. A VR-based haptic-guided welder training system was introduced in [156]. This system provides guidance to welders, simulating a human welding trainer. Both novice and skilled welders can use this platform to improve their welding skills in a virtual environment. However, this system does not integrate real welding scenarios into the virtual environment to allow welders to adjust their movements in real time according to the welding pool status, nor does it transfer human movements to the robot for actual tele-welding operations.
Olaf Ciszak et al. [157] proposed a vision-guided approach for programming automated welding robot paths in 2D, where the programmer draws the target weld pattern in the user presentation space, a low-cost camera in the system captures the image, and an algorithm detects and processes the geometry (contour lines) drawn by the human. This intuitive remote programming system for welding is limited by programming of the contour lines in two-dimensional planes only and does not have the real-time capability of a telerobotic welding system. In [158], the authors analyzed the integration of advanced technologies, such as MR, robot vision, intuitive and immersive teleoperation, and artificial intelligence (AI), to build an interactive telerobotic welding system. This paradigm enables efficient human-centered collaboration between remote welding platforms and operators through multi-channel communication.
A teleoperated wall-climbing robotic welding system was developed to demonstrate the application of various technologies in an innovative robotic interaction system to best achieve natural human–robot interaction. However, the mobile wall-climbing welding robot presented in this system has a simple structure and does not have a flexible robot manipulator to mimic welders’ human-level manipulation and make dexterous welding adjustments. Natural human movement signals were not used to improve the system intuitiveness and control the robot for tele-welding tasks.
More recently, research attention has been focused on MR-enhanced tele-welding paradigms [159]. It was verified in [160] that there were no statistically significant differences in the total welding scores between participants in the physical welding group and the mixed reality–based welding groups. The mixed reality welding user interface gives operators the ability to perform welding at a distance while maintaining a level of manipulation [161]. An optical tracking-based telerobotic welding system was introduced in [162]. The Leap Motion sensor captures the trajectory of a virtual welding gun held by a human welder in the user space to control the remote welding robot for the welding task. However, this welding system requires the use of a physical replica of the welding workpiece in the user space to superimpose a real-time weld pool state and guide the welders to adjust their hand movements to the shape of the workpiece [163]. Thus, this MR welding system is not suitable for a wide range of workpieces.
Liu et al. [163] developed a projection-based MR telerobotic welding system for transferring welder skills and human-level dynamics to the welding robot (Figure 8). Human welders interact with the welding robot by moving a tracked virtual welding torch in 3D space. A UR5 industrial manipulator equipped with vision sensors is operated remotely to perform the welding task. The weld pool stream from the welding site is transmitted back to the user and projected on a mock-up of the workpiece. The human welder can monitor the work process and adjust the movements according to the projected weld pool status from the worksite. However, the operator does not have sufficient visual feedback to check the status of the robot during operation.
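The projection step itself can be illustrated with a short, hypothetical sketch: a homography maps incoming weld pool frames from camera coordinates into projector coordinates so that the video lands on the mock-up workpiece in the user’s space. The calibration points and image sizes below are assumed values for illustration only and do not describe the cited system’s calibration.

```python
# Illustrative sketch (not the authors' code) of projecting a received weld
# pool frame onto a mock-up workpiece via a camera-to-projector homography.
import cv2
import numpy as np

# Four corresponding points: weld camera image <-> projector image (pixels).
# These correspondences are hypothetical calibration values.
camera_pts = np.float32([[0, 0], [639, 0], [639, 479], [0, 479]])
projector_pts = np.float32([[120, 80], [820, 100], [800, 560], [100, 540]])
H = cv2.getPerspectiveTransform(camera_pts, projector_pts)

def project_frame(weld_pool_frame, projector_size=(1024, 768)):
    """Warp the incoming weld pool frame into projector coordinates so it
    lands on the mock-up workpiece in the user's space."""
    return cv2.warpPerspective(weld_pool_frame, H, projector_size)

# Example with a synthetic frame standing in for the on-site video stream.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
cv2.circle(frame, (320, 240), 40, (0, 0, 255), -1)  # mock weld pool blob
projected = project_frame(frame)
```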
Wang et al. [164] developed an MR-based human–robot collaborative welding system (Figure 9). The collaborative tele-welding platform combines the strengths of humans and robots to perform weaving gas tungsten arc welding (GTAW) tasks, and the welder can monitor the welding process through an MR display without needing to be physically present. Welding experiments indicated that collaborative tele-welding produces better welding results than welding performed by humans or robots alone.
MR-based robot-assisted remote welding platforms were developed in [165] to provide welders with more natural and immersive human–robot interaction (HRI). However, in these systems the users rely solely on visual feedback for movement control; without haptic effects, accidental collisions between the robot and the workpiece cannot be reliably prevented when the operator controls the robot from a distance. Hence, these systems remain limited.
A visual and haptic robot programming system based on mixed reality and force feedback was developed in [166]. However, the system was not suitable for real-time remote welding operations and was inefficient in unstructured and dynamic welding situations. Haptic feedback provides welders with an additional scene modality and increases the sense of presence in the remote environment, thereby improving the ability to perform complex tasks [167]; however, it can be difficult to implement effectively.
The primary advantage of integrating haptic effects is to enhance both the performance of tele-welding tasks and the operator’s perception. Current remote robotic welding systems fail to fully utilize the potential improvements in performance offered by various forms of haptic feedback. The rapid advancement of MR-enhanced teleoperation has facilitated the merging of MR and virtual fixtures, aiming to enhance task performance and user perception. This integration effectively addresses the deficiencies in existing telerobotic welding systems. By creating an immersive and interactive MR environment, virtual workpieces can be generated in the user’s space. When combined with Virtual Fixture (VF) technology, this environment provides users with force feedback and guidance, improving the precision of robot movements and preventing accidental collisions [168,169].
The main limitation of existing studies on tele-welding lies in the insufficient incorporation of MR technology and virtual fixtures into remote-controlled robotic welding systems. This lack of integration hampers the elimination of potentially harmful collisions in the tele-welding process and prevents welding robots from achieving human-level dynamics for intricate Gas Metal Arc Welding (GMAW) tasks. Additionally, there has been no effort to simplify operations to assist inexperienced welders in performing welding tasks quickly, addressing the issues of time-consuming training and the shortage of qualified personnel.
An exception is [107], which introduces an integrated approach utilizing MR and haptic feedback for intuitive and immersive teleoperation of robotic welding systems. This approach incorporates MR technology, allowing the user to be fully immersed in a virtual operating space enhanced by real-time visual feedback from the robot’s working area. In this robotic tele-welding system, the robot’s motions imitate the user’s hand movements, enabling spatial velocity-based control of the robot’s tool center point (TCP).
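A minimal sketch of this kind of spatial velocity-based TCP control is given below; the gains, limits, and small-angle orientation handling are assumptions chosen for illustration and do not reproduce the controller in [107].

```python
# Generic illustration of mapping tracked hand motion to a 6D TCP velocity
# command. Gains and limits are hypothetical; orientation change is treated
# with a small-angle approximation for simplicity.
import numpy as np

LINEAR_GAIN = 0.5      # scale hand translation to TCP translation
ANGULAR_GAIN = 0.5     # scale hand rotation to TCP rotation
MAX_LINEAR = 0.10      # m/s safety clamp
MAX_ANGULAR = 0.50     # rad/s safety clamp

def velocity_command(prev_pos, prev_rpy, curr_pos, curr_rpy, dt):
    """Map the change in tracked hand pose over dt seconds to a
    [vx, vy, vz, wx, wy, wz] velocity command for the robot TCP."""
    v = LINEAR_GAIN * (np.asarray(curr_pos) - np.asarray(prev_pos)) / dt
    w = ANGULAR_GAIN * (np.asarray(curr_rpy) - np.asarray(prev_rpy)) / dt
    # Clamp for safety before streaming to the robot controller.
    v = np.clip(v, -MAX_LINEAR, MAX_LINEAR)
    w = np.clip(w, -MAX_ANGULAR, MAX_ANGULAR)
    return np.concatenate([v, w])

# Example: hand moved 5 mm along +x and rotated 0.01 rad about z in 20 ms.
cmd = velocity_command([0, 0, 0], [0, 0, 0], [0.005, 0, 0], [0, 0, 0.01], 0.02)
print(cmd)  # clipped 6D twist to send at the control rate
```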
Several design elements, including mixed reality, digital twins, and virtual fixtures, are integrated and implemented in the user interfaces for intuitive comprehension of the robot’s state, especially in critical welding situations. Digital twin technology was utilized to capture the real-time pose of the physical UR5 robot during its operation, enabling welders to observe the rotational status of each joint. By combining virtual twin data with onsite video streams in the MR space, a comprehensive monitoring system was created, offering real-time insight into the robot’s operations and facilitating accurate and efficient adjustment of welding motions based on the robot model data. To ensure optimal visualization within the MR welding workspace, the virtual UR5 robot’s data and motions were scaled at a ratio of 1:5 to match the user’s perspective. Within this framework, virtual fixtures (VFs) were employed and categorized into guidance and prevention fixtures. The proposed mixed reality virtual fixture (MRVF) interface integrated both types, guiding users effectively to the initial welding point and preventing collisions between the torch tip and the workpiece.
In the welding process, it is essential for the electrode to contact the molten weld pool to transfer filler metal to the workpiece. Simultaneously, contact between the torch tip and the workpiece must be avoided to prevent damage [107]. In the MRVF tele-welding workspace (as illustrated in Figure 10), a transparent prevention VF panel was overlaid on the virtual workpiece. This panel displayed a 2D representation of the actual welding process, minimizing collisions between the user-manipulated torch tip and the workpiece.
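The prevention behaviour can be sketched, under simplifying assumptions, as a velocity filter that cancels any commanded motion component that would drive the torch tip through a virtual plane offset above the workpiece; the geometry and margin below are hypothetical and serve only to illustrate the mechanism, not the implementation in [107].

```python
# Minimal sketch of a prevention-type virtual fixture: the commanded TCP
# velocity is filtered so the torch tip cannot be driven through a virtual
# plane placed just above the workpiece surface. Values are hypothetical.
import numpy as np

PLANE_NORMAL = np.array([0.0, 0.0, 1.0])   # workpiece top-surface normal
PLANE_OFFSET = 0.003                        # keep the tip >= 3 mm above it

def apply_prevention_fixture(tip_position, commanded_velocity):
    """Zero the velocity component that would push the tip below the
    protective plane; motion parallel to the surface is unaffected."""
    height = float(np.dot(tip_position, PLANE_NORMAL)) - PLANE_OFFSET
    approach_speed = float(np.dot(commanded_velocity, PLANE_NORMAL))
    if height <= 0.0 and approach_speed < 0.0:
        # Remove the penetrating component of the command.
        commanded_velocity = commanded_velocity - approach_speed * PLANE_NORMAL
    return commanded_velocity

# Example: tip 1 mm below the 3 mm stand-off and commanded straight down ->
# the downward component is cancelled, lateral motion passes through.
safe = apply_prevention_fixture(np.array([0.0, 0.0, 0.002]),
                                np.array([0.01, 0.0, -0.02]))
print(safe)  # -> [0.01, 0.0, 0.0]
```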

5. Challenges and Future Opportunities

Based on the reviewed and analyzed research, several challenges associated with XR solutions for robotic telemanipulation and telemanufacturing have been identified. One major challenge lies in the technological integration of extended reality technologies such as AR, VR, and XR with robotic systems, giving rise to issues such as synchronization problems, latency, and the need for real-time data processing. Additionally, designing interfaces that offer an intuitive and immersive user experience proves to be a daunting task, demanding seamless interaction with augmented or virtual environments while simultaneously controlling robotic systems in real time. Providing accurate and meaningful haptic and sensory feedback to users presents another obstacle, as replicating real-world touch and feel in virtual environments adds complexity to system design. Furthermore, ensuring the safety of both users and the environment remains paramount, necessitating solutions for challenges like collision detection, emergency shutdown mechanisms, and real-time hazard recognition. Moreover, the cost associated with developing and implementing robust XR-enhanced robotic systems poses a significant barrier, limiting accessibility for smaller organizations or research initiatives aiming to delve into this innovative technology.
Given the dynamic nature of XR technology, we anticipate numerous exciting opportunities in XR-enhanced telerobotics. Advancements in immersive XR interfaces, incorporating enhanced gesture recognition, voice commands, and brain–computer interfaces, are expected, making teleoperation more seamless and intuitive. Customized interfaces tailored to individual preferences, coupled with advancements in MR and haptic feedback integration, promise a realistic and detailed sense of touch and force feedback, enhancing the operator’s ability to perform delicate tasks remotely. Additionally, developments in AI and machine learning could enhance robots’ autonomy and adaptability by learning from human operators, fostering more efficient and versatile telerobotic systems. In summary, innovations in immersive interfaces, haptic feedback, AI integration, and interdisciplinary research are poised to revolutionize remote teleoperation and human–robot collaboration, unlocking the full potential of XR technology in telemanipulation and telemanufacturing domains.

6. Conclusions

This article identifies and summarizes the intuitive telerobotic frameworks proposed in the human–robot interaction (HRI) literature that aim to remove human workers from harmful working environments by equipping complex robotic systems with human intelligence and command/control via intuitive and natural human–robot interaction. These frameworks include the implementation of XR techniques to improve the user’s situational awareness, depth perception, and spatial cognition, all of which are fundamental to effective and efficient teleoperation. The article commences by delving into the prerequisites for achieving effective robotic teleoperation, highlighting two important requirements for an effective and intuitive teleoperation platform. First, natural motion retargeting, or the use of natural human behavior to teleoperate robotic systems, effectively reduces training costs and grants human-level dynamics to telerobotic systems; it is an important component of natural and intuitive human–robot interaction systems. Second, adequate multimodal feedback, especially visual feedback that provides depth information and situational awareness so that the operator knows the dynamic situation in the robot’s workspace, is also critical in interactive telemanipulation systems.
This review has elucidated the state-of-the-art frameworks developed by the robotics community to integrate physical robotic platforms with XR technology. Specifically, it underscores MR-based 3D/2D vision mapping and merging as a method for providing immersive integration of diverse visual display modes for users. In MR-enhanced telerobotic schemes, digital twins also play a pivotal role in facilitating intuitive teleoperation within the MR scene: users can interact with the digital twins naturally, enabling precise and intuitive control over robotic systems while obtaining real-time status information about the remote robot. We have presented the main contributions of the reviewed articles on XR-enhanced telerobotic schemes in industrial systems and classified their application contexts into three main groups: effective and intuitive teleoperation, MR-enhanced intuitive telemanipulation, and MR-based robotic tele-welding. Additionally, this review highlights the pivotal role of XR, especially MR, in the domain of teleoperation systems and robotic welding. Identified as a promising development, the incorporation of haptic interfaces in tele-welding systems holds the potential to enhance task performance and operator perception.

Author Contributions

Conceptualization, Y.-P.S.; methodology, Y.-P.S., L.H.P. and J.G.C.; validation, Y.-P.S., X.-Q.C. and C.G.P.; writing—original draft preparation, Y.-P.S.; writing—review and editing, J.G.C. and C.Z.; supervision, J.G.C.; project administration, J.G.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kheddar, A.; Neo, E.S.; Tadakuma, R.; Yokoi, K. Enhanced Teleoperation through Virtual Reality Techniques. Springer Tracts Adv. Robot. 2007, 31, 139–159. [Google Scholar] [CrossRef]
  2. Ismet Can Dede, M.; Selvi, Ö.; Bilgincan, T.; Kant, Y. Design of a Haptic Device for Teleoperation and Virtual Reality Systems. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, SMC, San Antonio, TX, USA, 11–14 October 2009; pp. 3623–3628. [Google Scholar] [CrossRef]
  3. Phung, A.; Billings, G.; Daniele, A.F.; Walter, M.R.; Camilli, R. Enhancing Scientific Exploration of the Deep Sea through Shared Autonomy in Remote Manipulation. Sci. Robot. 2023, 8, eadi5227. [Google Scholar] [CrossRef] [PubMed]
  4. Vagvolgyi, B.; Niu, W.; Chen, Z.; Wilkening, P.; Kazanzides, P. Augmented Virtuality for Model-Based Teleoperation. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 3826–3833. [Google Scholar] [CrossRef]
  5. Vagvolgyi, B.P.; Pryor, W.; Reedy, R.; Niu, W.; Deguet, A.; Whitcomb, L.L.; Leonard, S.; Kazanzides, P. Scene Modeling and Augmented Virtuality Interface for Telerobotic Satellite Servicing. IEEE Robot. Autom. Lett. 2018, 3, 4241–4248. [Google Scholar] [CrossRef]
  6. Salman, A.E.; Roman, M.R. Augmented Reality-Assisted Gesture-Based Teleoperated System for Robot Motion Planning. Ind. Robot. 2023, 50, 765–780. [Google Scholar] [CrossRef]
  7. Wang, Z.; Reed, I.; Fey, A.M. Toward Intuitive Teleoperation in Surgery: Human-Centric Evaluation of Teleoperation Algorithms for Robotic Needle Steering. In Proceedings of the IEEE International Conference on Robotics and Automation, Brisbane, Australia, 21–25 May 2018; pp. 5799–5806. [Google Scholar]
  8. Martin, E.J.; Erwin, B.; Katija, K.; Phung, A.; Gonzalez, E.; Thun, S.; Von Cullen, H.; Haddock, S.H.D. A Virtual Reality Video System for Deep Ocean Remotely Operated Vehicles. In Proceedings of the OCEANS 2021: San Diego–Porto, San Diego, CA, USA, 20–23 September 2021. [Google Scholar] [CrossRef]
  9. Hedayati, H.; Walker, M.; Szafir, D. Improving Collocated Robot Teleoperation with Augmented Reality. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 5–8 March 2018; pp. 78–86. [Google Scholar] [CrossRef]
  10. Liu, Y.C.; Khong, M.H.; Ou, T.W. Nonlinear Bilateral Teleoperators with Non-Collocated Remote Controller over Delayed Network. Mechatronics 2017, 45, 25–36. [Google Scholar] [CrossRef]
  11. Lai, Z.H.; Tao, W.; Leu, M.C.; Yin, Z. Smart Augmented Reality Instructional System for Mechanical Assembly towards Worker-Centered Intelligent Manufacturing. J. Manuf. Syst. 2020, 55, 69–81. [Google Scholar] [CrossRef]
  12. Walker, M.E.; Hedayati, H.; Szafir, D. Robot Teleoperation with Augmented Reality Virtual Surrogates. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, Daegu, Republic of Korea, 11–14 March 2019; pp. 202–210. [Google Scholar]
  13. Liu, H.; Wang, L. Remote Human—Robot Collaboration: A Cyber—Physical System Application for Hazard Manufacturing Environment. J. Manuf. Syst. 2020, 54, 24–34. [Google Scholar] [CrossRef]
  14. Stotko, P.; Krumpen, S.; Schwarz, M.; Lenz, C.; Behnke, S.; Klein, R.; Weinmann, M. A VR System for Immersive Teleoperation and Live Exploration with a Mobile Robot. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Macao, China, 3–8 November 2019; pp. 3630–3637. [Google Scholar]
  15. Almeida, L.; Menezes, P.; Dias, J. Improving Robot Teleoperation Experience via Immersive Interfaces. In Proceedings of the 2017 4th Experiment at International Conference: Online Experimentation, Faro, Portugal, 6–8 June 2017; pp. 87–92. [Google Scholar] [CrossRef]
  16. De Pace, F.; Gorjup, G.; Bai, H.; Sanna, A.; Liarokapis, M.; Billinghurst, M. Leveraging Enhanced Virtual Reality Methods and Environments for Efficient, Intuitive, and Immersive Teleoperation of Robots. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 5 June 2021; pp. 12967–12973. [Google Scholar] [CrossRef]
  17. Zhou, T.; Zhu, Q.; Du, J. Intuitive Robot Teleoperation for Civil Engineering Operations with Virtual Reality and Deep Learning Scene Reconstruction. Adv. Eng. Inform. 2020, 46, 10170–10190. [Google Scholar] [CrossRef]
  18. Duong, M.D.; Teraoka, C.; Imamura, T.; Miyoshi, T.; Terashima, K. Master-Slave System with Teleoperation for Rehabilitation. IFAC Proc. Vol. 2005, 16, 48–53. [Google Scholar] [CrossRef]
  19. Hung, N.V.Q.; Narikiyo, T.; Tuan, H.D.; Fuwa, K. A Dynamics-Based Adaptive Control for Master-Slave System in Teleoperation. IFAC Proc. Vol. 2001, 34, 237–242. [Google Scholar] [CrossRef]
  20. Ji, Y.; Liu, D.; Guo, Y. Adaptive Neural Network Based Position Tracking Control for Dual-Master/Single-Slave Teleoperation System under Communication Constant Time Delays. ISA Trans. 2019, 93, 80–92. [Google Scholar] [CrossRef] [PubMed]
  21. Liu, G.; Geng, X.; Liu, L.; Wang, Y. Haptic Based Teleoperation with Master-Slave Motion Mapping and Haptic Rendering for Space Exploration. Chin. J. Aeronaut. 2019, 32, 723–736. [Google Scholar] [CrossRef]
  22. Jin, H.; Zhang, L.; Rockel, S.; Zhang, J.; Hu, Y.; Zhang, J. A Novel Optical Tracking Based Tele-Control System for Tabletop Object Manipulation Tasks. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Hamburg, Germany, 28 September–2 October 2015; pp. 636–642. [Google Scholar] [CrossRef]
  23. Cerulo, I.; Ficuciello, F.; Lippiello, V.; Siciliano, B. Teleoperation of the SCHUNK S5FH Under-Actuated Anthropomorphic Hand Using Human Hand Motion Tracking. Rob. Auton. Syst. 2017, 89, 75–84. [Google Scholar] [CrossRef]
  24. Suligoj, F.; Jerbic, B.; Svaco, M.; Sekoranja, B.; Mihalinec, D.; Vidakovic, J. Medical Applicability of a Low-Cost Industrial Robot Arm Guided with an Optical Tracking System. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Hamburg, Germany, 28 September–2 October 2015; pp. 3785–3790. [Google Scholar] [CrossRef]
  25. Dinh, T.Q.; Yoon, J., II; Marco, J.; Jennings, P.; Ahn, K.K.; Ha, C. Sensorless Force Feedback Joystick Control for Teleoperation of Construction Equipment. Int. J. Precis. Eng. Manuf. 2017, 18, 955–969. [Google Scholar] [CrossRef]
  26. Truong, D.Q.; Truong, B.N.M.; Trung, N.T.; Nahian, S.A.; Ahn, K.K. Force Reflecting Joystick Control for Applications to Bilateral Teleoperation in Construction Machinery. Int. J. Precis. Eng. Manuf. 2017, 18, 301–315. [Google Scholar] [CrossRef]
  27. Nakanishi, J.; Itadera, S.; Aoyama, T.; Hasegawa, Y. Towards the Development of an Intuitive Teleoperation System for Human Support Robot Using a VR Device. Adv. Robot. 2020, 34, 1239–1253. [Google Scholar] [CrossRef]
  28. Meeker, C.; Rasmussen, T.; Ciocarlie, M. Intuitive Hand Teleoperation by Novice Operators Using a Continuous Teleoperation Subspace. In Proceedings of the IEEE International Conference on Robotics and Automation, Brisbane, Australia, 21–25 May 2018; pp. 5821–5827. [Google Scholar]
  29. Ellis, S.R.; Adelstein, B.D.; Welch, R.B. Kinesthetic Compensation for Misalignment of Teleoperator Controls through Cross-Modal Transfer of Movement Coordinates. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Santa Barbara, CA, USA, 1 September 2002; Volume 46, pp. 1551–1555. [Google Scholar]
  30. Li, G.; Caponetto, F.; Del Bianco, E.; Katsageorgiou, V.; Sarakoglou, I.; Tsagarakis, N.G. Incomplete Orientation Mapping for Teleoperation with One DoF Master-Slave Asymmetry. IEEE Robot. Autom. Lett. 2020, 5, 5167–5174. [Google Scholar] [CrossRef]
  31. Bejczy, B.; Bozyil, R.; Vaiekauskas, E.; Petersen, S.B.K.; Bogh, S.; Hjorth, S.S.; Hansen, E.B. Mixed Reality Interface for Improving Mobile Manipulator Teleoperation in Contamination Critical Applications. Procedia Manuf. 2020, 51, 620–626. [Google Scholar] [CrossRef]
  32. Triantafyllidis, E.; McGreavy, C.; Gu, J.; Li, Z. Study of Multimodal Interfaces and the Improvements on Teleoperation. IEEE Access 2020, 8, 78213–78227. [Google Scholar] [CrossRef]
  33. Yew, A.W.W.; Ong, S.K.; Nee, A.Y.C. Immersive Augmented Reality Environment for the Teleoperation of Maintenance Robots. Procedia CIRP 2017, 61, 305–310. [Google Scholar] [CrossRef]
  34. Komatsu, R.; Fujii, H.; Tamura, Y.; Yamashita, A.; Asama, H. Free Viewpoint Image Generation System Using Fisheye Cameras and a Laser Rangefinder for Indoor Robot Teleoperation. ROBOMECH J. 2020, 7, 1–10. [Google Scholar] [CrossRef]
  35. Ribeiro, L.G.; Suominen, O.J.; Durmush, A.; Peltonen, S.; Morales, E.R.; Gotchev, A. Retro-Reflective-Marker-Aided Target Pose Estimation in a Safety-Critical Environment. Appl. Sci. 2021, 11, 3. [Google Scholar] [CrossRef]
  36. Illing, B.; Westhoven, M.; Gaspers, B.; Smets, N.; Bruggemann, B.; Mathew, T. Evaluation of Immersive Teleoperation Systems Using Standardized Tasks and Measurements. In Proceedings of the 29th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2020, Naples, Italy, 31 August–4 September 2020; pp. 278–285. [Google Scholar] [CrossRef]
  37. Chen, J.; Glover, M.; Yang, C.; Li, C.; Li, Z.; Cangelosi, A. Development of an Immersive Interface for Robot Teleoperation. In Proceedings of the Annual Conference towards Autonomous Robotic Systems, Guildford, UK, 9–21 July 2017; pp. 1–15. [Google Scholar]
  38. Cody, G.; Brian, R.; Allison, W.; Miller, M.; Stoytchev, A. An Effective and Intuitive Control Interface for Remote Robot Teleoperation with Complete Haptic Feedback. In Proceedings of the Emerging Technologies Conference-ETC, Islamabad, Pakistan, 19–20 October 2009. [Google Scholar]
  39. Kaplish, A.; Yamane, K. Motion Retargeting and Control for Teleoperated Physical Human-Robot Interaction. In Proceedings of the IEEE-RAS International Conference on Humanoid Robots, Toronto, ON, Canada, 15–17 October 2019; pp. 723–730. [Google Scholar] [CrossRef]
  40. Rakita, D.; Mutlu, B.; Gleicher, M. A Motion Retargeting Method for Effective Mimicry-Based Teleoperation of Robot Arms. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction Part. F127194, Vienna, Austria, 6–9 March 2017; pp. 361–370. [Google Scholar] [CrossRef]
  41. Hong, A.; Lee, D.G.; Bülthoff, H.H.; Son, H.I. Multimodal Feedback for Teleoperation of Multiple Mobile Robots in an Outdoor Environment. J. Multimodal User Interfaces 2017, 11, 67–80. [Google Scholar] [CrossRef]
  42. Wang, X.; Dunston, P.S. Mixed Reality—Enhanced Operator Interface for Teleoperation Systems in Unstructured Environment. In Proceedings of the 10th Biennial International Conference on Engineering, Construction, and Operations in Challenging Environments—Earth and Space 2006, Houston, TX, USA, 5–8 March 2006; Volume 2006, p. 93. [Google Scholar] [CrossRef]
  43. Yang, C.; Wang, X.; Li, Z.; Li, Y.; Su, C.Y. Teleoperation Control Based on Combination of Wave Variable and Neural Networks. IEEE Trans. Syst. Man. Cybern. Syst. 2017, 47, 2125–2136. [Google Scholar] [CrossRef]
  44. Girbes-Juan, V.; Schettino, V.; Demiris, Y.; Tornero, J. Haptic and Visual Feedback Assistance for Dual-Arm Robot Teleoperation in Surface Conditioning Tasks. IEEE Trans. Haptics 2021, 14, 44–56. [Google Scholar] [CrossRef] [PubMed]
  45. Zolotas, M.; Wonsick, M.; Long, P.; Padır, T. Motion Polytopes in Virtual Reality for Shared Control in Remote Manipulation Applications. Front. Robot. AI 2021, 8, 730433. [Google Scholar] [CrossRef]
  46. Gao, X.; Silverio, J.; Pignat, E.; Calinon, S.; Li, M.; Xiao, X. Motion Mappings for Continuous Bilateral Teleoperation. IEEE Robot. Autom. Lett. 2021, 6, 5048–5055. [Google Scholar] [CrossRef]
  47. Ferre, M.; Cobos, S.; Aracil, R.; Urán, M.A.S. 3D-Image Visualization and Its Performance in Teleoperation. In Proceedings of the Second International Conference, ICVR 2007, Held as part of HCI International 2007, Beijing, China, 22–27 July 2007; Volume 4563, pp. 21–27. [Google Scholar]
  48. Arévalo Arboleda, S.; Dierks, T.; Rücker, F.; Gerken, J. Exploring the Visual Space to Improve Depth Perception in Robot Teleoperation Using Augmented Reality: The Role of Distance and Target’s Pose in Time, Success, and Certainty. In Proceedings of the IFIP Conference on Human-Computer Interaction, York, UK, 28 August–1 September 2021; Volume 12932, pp. 522–543. [Google Scholar]
  49. Guzsvinecz, T.; Kovacs, C.; Reich, D.; Szucs, V.; Sik-Lanyi, C. Developing a Virtual Reality Application for the Improvement of Depth Perception. In Proceedings of the 2018 9th IEEE International Conference on Cognitive Infocommunications (CogInfoCom), Budapest, Hungary, 22–24 August 2018; pp. 17–22. [Google Scholar]
  50. Chenf, F.; Gao, B.; Selvaggio, M.; Li, Z.; Caldwell, D.; Kershaw, K.; Masi, A.; Di Castro, M.; Losito, R. A Framework of Teleoperated and Stereo Vision Guided Mobile Manipulation for Industrial Automation. In Proceedings of the 2016 IEEE International Conference on Mechatronics and Automation, IEEE ICMA 2016, Harbin, China, 7–10 August 2016; pp. 1641–1648. [Google Scholar]
  51. McHenry, N.; Spencer, J.; Zhong, P.; Cox, J.; Amiscaray, M.; Wong, K.C.; Chamitoff, G. Predictive XR Telepresence for Robotic Operations in Space. In Proceedings of the IEEE Aerospace Conference Proceedings, Big Sky, MT, USA, 6–13 March 2021; pp. 1–10. [Google Scholar] [CrossRef]
  52. Smolyanskiy, N.; Gonzalez-Franco, M. Stereoscopic First Person View System for Drone Navigation. Front. Robot. AI 2017, 4, 11. [Google Scholar] [CrossRef]
  53. Livatino, S.; Muscato, G.; Privitera, F. Stereo Viewing and Virtual Reality Technologies in Mobile Robot Teleguide. IEEE Trans. Robot. 2009, 25, 1343–1355. [Google Scholar] [CrossRef]
  54. Niu, L.; Aha, L.; Mattila, J.; Gotchev, A.; Ruiz, E. A Stereoscopic Eye-in-Hand Vision System for Remote Handling in ITER. Fusion. Eng. Des. 2019, 146, 1790–1795. [Google Scholar] [CrossRef]
  55. Guo, Z.; Zhou, D.; Zhou, Q.; Zhang, X.; Geng, J.; Zeng, S.; Lv, C.; Hao, A. Applications of Virtual Reality in Maintenance during the Industrial Product Lifecycle: A Systematic Review. J. Manuf. Syst. 2020, 56, 525–538. [Google Scholar] [CrossRef]
  56. Panzirsch, M.; Balachandran, R.; Weber, B.; Ferre, M.; Artigas, J. Haptic Augmentation for Teleoperation through Virtual Grasping Points. IEEE Trans. Haptics 2018, 11, 400–416. [Google Scholar] [CrossRef] [PubMed]
  57. Abi-Farrajl, F.; Henze, B.; Werner, A.; Panzirsch, M.; Ott, C.; Roa, M.A. Humanoid Teleoperation Using Task-Relevant Haptic Feedback. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems 2018, Madrid, Spain, 1–5 October 2018. [Google Scholar]
  58. Singh, J.; Srinivasan, A.R.; Neumann, G.; Kucukyilmaz, A. Haptic-Guided Teleoperation of a 7-DoF Collaborative Robot Arm with an Identical Twin Master. IEEE Trans. Haptics 2020, 13, 246–252. [Google Scholar] [CrossRef] [PubMed]
  59. Shankhwar, K.; Chuang, T.J.; Tsai, Y.Y.; Smith, S. A Visuo-Haptic Extended Reality–Based Training System for Hands-on Manual Metal Arc Welding Training. Int. J. Adv. Manuf. Technol. 2022, 121, 249–265. [Google Scholar] [CrossRef]
  60. Nuzzi, C.; Ghidini, S.; Pagani, R.; Pasinetti, S.; Coffetti, G.; Sansoni, G. Hands-Free: A Robot Augmented Reality Teleoperation System. In Proceedings of the 2020 17th International Conference on Ubiquitous Robots, UR, Kyoto, Japan, 22–26 June 2020; pp. 617–624. [Google Scholar] [CrossRef]
  61. Yuan, F.; Zhang, L.; Zhang, H.; Li, D.; Zhang, T. Distributed Teleoperation System for Controlling Heterogeneous Robots Based on ROS. In Proceedings of the IEEE Workshop on Advanced Robotics and its Social Impacts, ARSO 2019, Beijing, China, 31 October–2 November 2019; pp. 7–12. [Google Scholar] [CrossRef]
  62. Bai, W.; Cao, Q.; Wang, P.; Chen, P.; Leng, C.; Pan, T. Modular Design of a Teleoperated Robotic Control System for Laparoscopic Minimally Invasive Surgery Based on ROS & RT-Middleware. Ind. Robot. 2017, 44, 596–608. [Google Scholar] [CrossRef]
  63. Lee, D.; Park, Y.S. Implementation of Augmented Teleoperation System Based on Robot Operating System (ROS). In Proceedings of the IEEE International Conference on Intelligent Robots and Systems 2018, Madrid, Spain, 1–5 October 2018; pp. 5497–5502. [Google Scholar] [CrossRef]
  64. Mortimer, M.; Horan, B.; Joordens, M. Kinect with ROS, Interact with Oculus: Towards Dynamic User Interfaces for Robotic Teleoperation. In Proceedings of the 2016 11th Systems of Systems Engineering Conference, SoSE 2016, Kongsberg, Norway, 12–16 June 2016. [Google Scholar] [CrossRef]
  65. Deng, Y.; Tang, Y.; Yang, B.; Zheng, W.; Liu, S.; Liu, C. A Review of Bilateral Teleoperation Control Strategies with Soft Environment. In Proceedings of the 2021 6th IEEE International Conference on Advanced Robotics and Mechatronics, ICARM 2021, Chongqing, China, 3–5 July 2021; pp. 459–464. [Google Scholar] [CrossRef]
  66. Kebria, P.M.; Khosravi, A.; Nahavandi, S.; Shi, P.; Alizadehsani, R. Robust Adaptive Control Scheme for Teleoperation Systems with Delay and Uncertainties. IEEE Trans. Cybern. 2020, 50, 3243–3253. [Google Scholar] [CrossRef]
  67. Luo, J.; He, W.; Yang, C. Combined Perception, Control, and Learning for Teleoperation: Key Technologies, Applications, and Challenges. Cogn. Comput. Syst. 2020, 2, 33–43. [Google Scholar] [CrossRef]
  68. Sun, D.; Kiselev, A.; Liao, Q.; Stoyanov, T.; Loutfi, A. A New Mixed-Reality-Based Teleoperation System for Telepresence and Maneuverability Enhancement. IEEE Trans. Hum. Mach. Syst. 2020, 50, 55–67. [Google Scholar] [CrossRef]
  69. Nittari, G.; Khuman, R.; Baldoni, S.; Pallotta, G.; Battineni, G.; Sirignano, A.; Amenta, F.; Ricci, G. Telemedicine Practice: Review of the Current Ethical and Legal Challenges. Telemed. E-Health 2020, 26, 1427–1437. [Google Scholar] [CrossRef]
  70. Singh, S.K.; Sharma, J.; Joshua, L.M.; Huda, F.; Kumar, N.; Basu, S.; et al. Telesurgery and Robotics: Current Status and Future Perspectives; IntechOpen: Rijeka, Croatia, 2022. [Google Scholar] [CrossRef]
  71. Miao, Y.; Jiang, Y.; Peng, L.; Hossain, M.S.; Muhammad, G. Telesurgery Robot Based on 5G Tactile Internet. Mob. Netw. Appl. 2018, 23, 1645–1654. [Google Scholar] [CrossRef]
  72. Dinh, A.; Yin, A.L.; Estrin, D.; Greenwald, P.; Fortenko, A. Augmented Reality in Real-Time Telemedicine and Telementoring: Scoping Review. JMIR Mhealth Uhealth 2023, 11, e45464. [Google Scholar] [CrossRef]
  73. Jin, M.L.; Brown, M.M.; Patwa, D.; Nirmalan, A.; Edwards, P.A. Telemedicine, Telementoring, and Telesurgery for Surgical Practices. Curr. Probl. Surg. 2021, 58, 100986. [Google Scholar] [CrossRef] [PubMed]
  74. Liu, K.; Miao, J.; Liao, Z.; Luan, X.; Meng, L. Dynamic Constraint and Objective Generation Approach for Real-Time Train Rescheduling Model under Human-Computer Interaction. High-Speed Railw. 2023. [Google Scholar] [CrossRef]
  75. Liu, J.B.; Ang, M.C.; Chaw, J.K.; Kor, A.L.; Ng, K.W. Emotion Assessment and Application in Human–Computer Interaction Interface Based on Backpropagation Neural Network and Artificial Bee Colony Algorithm. Expert. Syst. Appl. 2023, 232, 120857. [Google Scholar] [CrossRef]
  76. Chen, H.; Zendehdel, N.; Leu, M.C.; Yin, Z. Real-Time Human-Computer Interaction Using Eye Gazes. Manuf. Lett. 2023, 35, 883–894. [Google Scholar] [CrossRef]
  77. Lopes, S.L.; Ferreira, A.I.; Prada, R.; Schwarzer, R. Social Robots as Health Promoting Agents: An Application of the Health Action Process Approach to Human-Robot Interaction at the Workplace. Int. J. Hum. Comput. Stud. 2023, 180, 103124. [Google Scholar] [CrossRef]
  78. Popov, D.; Pashkevich, A.; Klimchik, A. Adaptive Technique for Physical Human–Robot Interaction Handling Using Proprioceptive Sensors. Eng. Appl. Artif. Intell. 2023, 126, 107141. [Google Scholar] [CrossRef]
  79. Spatola, N.; Cherif, E. Spontaneous Humanization of Robots in Passive Observation of Human-Robot Interaction: A Path toward Ethical Consideration and Human-Robot Cooperation. Comput. Hum. Behav. Artif. Hum. 2023, 1, 100012. [Google Scholar] [CrossRef]
  80. Ratcliffe, J.; Soave, F.; Bryan-Kinns, N.; Tokarchuk, L.; Farkhatdinov, I. Extended Reality (XR) Remote Research: A Survey of Drawbacks and Opportunities. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, New York, NY, USA, 8–13 May 2021. [Google Scholar] [CrossRef]
  81. Doolani, S.; Wessels, C.; Kanal, V.; Sevastopoulos, C.; Jaiswal, A.; Nambiappan, H.; Makedon, F. A Review of Extended Reality (XR) Technologies for Manufacturing Training. Technologies 2020, 8, 77. [Google Scholar] [CrossRef]
  82. Morimoto, T.; Kobayashi, T.; Hirata, H.; Otani, K.; Sugimoto, M.; Tsukamoto, M.; Yoshihara, T.; Ueno, M.; Mawatari, M. XR (Extended Reality: Virtual Reality, Augmented Reality, Mixed Reality) Technology in Spine Medicine: Status Quo and Quo Vadis. J. Clin. Med. 2022, 11, 470. [Google Scholar] [CrossRef]
  83. Zwolinski, G.; Kaminska, D.; Laska-Lesniewicz, A.; Eric Haamer, R.; Vairinhos, M.; Raposo, R.; Urem, F.; Reisinho, P. Extended Reality in Education and Training: Case Studies in Management Education. Electronics 2022, 11, 336. [Google Scholar] [CrossRef]
  84. Milgram, P.; Kishino, F. A Taxonomy of Mixed Reality Visual Displays. IEICE Trans. Inf. Syst. 1994, E77-D, 1321–1329. [Google Scholar]
  85. Han, X.; Chen, Y.; Feng, Q.; Luo, H. Augmented Reality in Professional Training: A Review of the Literature from 2001 to 2020. Appl. Sci. 2022, 12, 1024. [Google Scholar] [CrossRef]
  86. Makhataeva, Z.; Varol, H.A. Augmented Reality for Robotics: A Review. Robotics 2020, 9, 21. [Google Scholar] [CrossRef]
  87. Li, C.; Fahmy, A.; Sienz, J. An Augmented Reality Based Human-Robot Interaction Interface Using Kalman Filter Sensor Fusion. Sensors 2019, 19, 4586. [Google Scholar] [CrossRef]
  88. Wonsick, M.; Padir, T. A Systematic Review of Virtual Reality Interfaces for Controlling and Interacting with Robots. Appl. Sci. 2020, 10, 9051. [Google Scholar] [CrossRef]
  89. Prati, E.; Villani, V.; Peruzzini, M.; Sabattini, L. An Approach Based on VR to Design Industrial Human-Robot Collaborative Workstations. Appl. Sci. 2021, 11, 11773. [Google Scholar] [CrossRef]
  90. Sievers, T.S.; Schmitt, B.; Rückert, P.; Petersen, M.; Tracht, K. Concept of a Mixed-Reality Learning Environment for Collaborative Robotics. Procedia Manuf. 2020, 45, 19–24. [Google Scholar] [CrossRef]
  91. Frank, J.A.; Moorhead, M.; Kapila, V. Mobile Mixed-Reality Interfaces That Enhance Human-Robot Interaction in Shared Spaces. Front. Robot. AI 2017, 4, 20. [Google Scholar] [CrossRef]
  92. Zhang, R.; Liu, X.; Shuai, J.; Zheng, L. Collaborative Robot and Mixed Reality Assisted Microgravity Assembly for Large Space Mechanism. Procedia Manuf. 2020, 51, 38–45. [Google Scholar] [CrossRef]
  93. Choi, S.H.; Park, K.B.; Roh, D.H.; Lee, J.Y.; Mohammed, M.; Ghasemi, Y.; Jeong, H. An Integrated Mixed Reality System for Safety-Aware Human-Robot Collaboration Using Deep Learning and Digital Twin Generation. Robot. Comput. Integr. Manuf. 2022, 73, 102258. [Google Scholar] [CrossRef]
  94. Khatib, M.; Al Khudir, K.; De Luca, A. Human-Robot Contactless Collaboration with Mixed Reality Interface. Robot. Comput. Integr. Manuf. 2021, 67, 102030. [Google Scholar] [CrossRef]
  95. Palma, G.; Perry, S.; Cignoni, P. Augmented Virtuality Using Touch-Sensitive 3D-Printed Objects. Remote Sens. 2021, 13, 2186. [Google Scholar] [CrossRef]
  96. Gralak, R. A Method of Navigational Information Display Using Augmented Virtuality. J. Mar. Sci. Eng. 2020, 8, 237. [Google Scholar] [CrossRef]
  97. Ostanin, M.; Yagfarov, R.; Klimchik, A. Interactive Robots Control Using Mixed Reality. IFAC-Pap. 2019, 52, 695–700. [Google Scholar] [CrossRef]
  98. Dianatfar, M.; Latokartano, J.; Lanz, M. Review on Existing VR/AR Solutions in Human–Robot Collaboration. Procedia CIRP 2021, 97, 407–411. [Google Scholar] [CrossRef]
  99. Suzuki, R.; Karim, A.; Xia, T.; Hedayati, H.; Marquardt, N. Augmented Reality and Robotics: A Survey and Taxonomy for AR-Enhanced Human-Robot Interaction and Robotic Interfaces. In Proceedings of the Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 29 April–5 May 2022. [Google Scholar] [CrossRef]
  100. Coronado, E.; Itadera, S.; Ramirez-Alpizar, I.G. Integrating Virtual, Mixed, and Augmented Reality to Human–Robot Interaction Applications Using Game Engines: A Brief Review of Accessible Software Tools and Frameworks. Appl. Sci. 2023, 13, 1292. [Google Scholar] [CrossRef]
  101. Roth, T.; Weier, M.; Hinkenjann, A.; Li, Y.; Slusallek, P. A Quality-Centered Analysis of Eye Tracking Data in Foveated. J. Eye Mov. Res. 2017, 10, 1–12. [Google Scholar] [CrossRef]
  102. Nakamura, K.; Tohashi, K.; Funayama, Y.; Harasawa, H.; Ogawa, J. Dual-Arm Robot Teleoperation Support with the Virtual World. Artif. Life Robot. 2020, 25, 286–293. [Google Scholar] [CrossRef]
  103. Whitney, D.; Rosen, E.; Ullman, D.; Phillips, E.; Tellex, S. ROS Reality: A Virtual Reality Framework Using Consumer-Grade Hardware for ROS-Enabled Robots. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Madrid, Spain, 1–5 October 2018; pp. 5018–5025. [Google Scholar]
  104. Whitney, D.; Rosen, E.; Phillips, E.; Konidaris, G.; Tellex, S. Comparing Robot Grasping Teleoperation Across Desktop and Virtual Reality with ROS Reality. Springer Proc. Adv. Robot. 2020, 10, 335–350. [Google Scholar] [CrossRef]
  105. Delpreto, J.; Lipton, J.I.; Sanneman, L.; Fay, A.J.; Fourie, C.; Choi, C.; Rus, D. Helping Robots Learn: A Human-Robot Master-Apprentice Model Using Demonstrations via Virtual Reality Teleoperation. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 10226–10233. [Google Scholar] [CrossRef]
  106. Britton, N.; Yoshida, K.; Walker, J.; Nagatani, K.; Taylor, G.; Dauphin, L. Lunar Micro Rover Design for Exploration through Virtual Reality Tele-Operation. Springer Tracts Adv. Robot. 2015, 105, 259–272. [Google Scholar] [CrossRef]
  107. Su, Y.P.; Chen, X.Q.; Zhou, T.; Pretty, C.; Chase, J.G. Mixed Reality-Enhanced Intuitive Teleoperation with Hybrid Virtual Fixtures for Intelligent Robotic Welding. Appl. Sci. 2021, 11, 11280. [Google Scholar] [CrossRef]
  108. Livatino, S.; Guastella, D.C.; Muscato, G.; Rinaldi, V.; Cantelli, L.; Melita, C.D.; Caniglia, A.; Mazza, R.; Padula, G. Intuitive Robot Teleoperation through Multi-Sensor Informed Mixed Reality Visual Aids. IEEE Access 2021, 9, 25795–25808. [Google Scholar] [CrossRef]
  109. Shi, Y.; Li, X.; Wang, L.; Cheng, Z.; Mo, Z.; Zhang, S. Research on Mixed Reality Visual Augmentation Method for Teleoperation Interactive System; Springer: Berlin/Heidelberg, Germany, 2023; pp. 490–502. [Google Scholar] [CrossRef]
  110. Naceri, A.; Mazzanti, D.; Bimbo, J.; Prattichizzo, D.; Caldwell, D.G.; Mattos, L.S.; Deshpande, N. Towards a Virtual Reality Interface for Remote Robotic Teleoperation. In Proceedings of the 2019 19th International Conference on Advanced Robotics, ICAR 2019, Belo Horizonte, Brazil, 2–6 December 2019; pp. 284–289. [Google Scholar]
  111. Zhang, T.; McCarthy, Z.; Jowl, O.; Lee, D.; Chen, X.; Goldberg, K.; Abbeel, P. Deep Imitation Learning for Complex Manipulation Tasks from Virtual Reality Teleoperation. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 5628–5635. [Google Scholar] [CrossRef]
  112. Concannon, D.; Flynn, R.; Murray, N. A Quality of Experience Evaluation System and Research Challenges for Networked Virtual Reality-Based Teleoperation Applications. In Proceedings of the 11th ACM Workshop on Immersive Mixed and Virtual Environment Systems, MMVE 2019, Amherst, MA, USA, 18 June 2019; pp. 10–12. [Google Scholar]
  113. Stein, C.; Stein, C. Virtual Reality Design: How Head-Mounted Displays Change Design Paradigms of Virtual Reality Worlds. MediaTropes 2016, 6, 52–85. [Google Scholar] [CrossRef]
  114. Krupke, D.; Einig, L.; Langbehn, E.; Zhang, J.; Steinicke, F. Immersive Remote Grasping: Realtime Gripper Control by a Heterogenous Robot Control System. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST, Munich, Germany, 2–4 November 2016; pp. 337–338. [Google Scholar]
  115. Lipton, J.I.; Fay, A.J.; Rus, D. Baxter’s Homunculus: Virtual Reality Spaces for Teleoperation in Manufacturing. IEEE Robot. Autom. Lett. 2018, 3, 179–186. [Google Scholar] [CrossRef]
  116. Azar, A.T.; Humaidi, A.J.; Al Mhdawi, A.K.; Kaarlela, T.; Pitkäaho, T.; Pieskä, S.; Padrão, P.; Bobadilla, L.; Tikanmäki, M.; Haavisto, T.; et al. Towards Metaverse: Utilizing Extended Reality and Digital Twins to Control Robotic Systems. Actuators 2023, 12, 219. [Google Scholar] [CrossRef]
  117. Su, Y.P.; Chen, X.Q.; Zhou, T.; Pretty, C.; Chase, G. Mixed-Reality-Enhanced Human–Robot Interaction with an Imitation-Based Mapping Approach for Intuitive Teleoperation of a Robotic Arm-Hand System. Appl. Sci. 2022, 12, 4740. [Google Scholar] [CrossRef]
  118. Su, Y.; Chen, X.; Zhou, T.; Pretty, C.; Chase, G. Mixed Reality-Integrated 3D/2D Vision Mapping for Intuitive Teleoperation of Mobile Manipulator. Robot. Comput. Integr. Manuf. 2022, 77, 102332. [Google Scholar] [CrossRef]
  119. Souchet, A.D.; Philippe, S.; Lourdeaux, D.; Leroy, L. Measuring Visual Fatigue and Cognitive Load via Eye Tracking While Learning with Virtual Reality Head-Mounted Displays: A Review. Int. J. Hum. Comput. Interact. 2022, 38, 801–824. [Google Scholar] [CrossRef]
  120. Szczurek, K.A.; Prades, R.M.; Matheson, E.; Perier, H.; Buonocore, L.R.; Di Castro, M. From 2D to 3D Mixed Reality Human-Robot Interface in Hazardous Robotic Interventions with the Use of Redundant Mobile Manipulator. In Proceedings of the 18th International Conference on Informatics in Control, Automation and Robotics, ICINCO 2021, Paris, France, 6–8 July 2021; pp. 388–395. [Google Scholar] [CrossRef]
  121. Wei, D.; Huang, B.; Li, Q. Multi-View Merging for Robot Teleoperation with Virtual Reality. IEEE Robot. Autom. Lett. 2021, 6, 8537–8544. [Google Scholar] [CrossRef]
  122. Luo, Y.; Wang, J.; Liang, H.-N.; Luo, S.; Lim, E.G. Monoscopic vs. Stereoscopic Views and Display Types in the Teleoperation of Unmanned Ground Vehicles for Object Avoidance. In Proceedings of the 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), Vancouver, BC, Canada, 8–12 August 2021; pp. 418–425. [Google Scholar]
  123. Sehad, N.; Tu, X.; Rajasekaran, A.; Hellaoui, H.; Jäntti, R.; Debbah, M. Towards Enabling Reliable Immersive Teleoperation through Digital Twin: A UAV Command and Control Use Case. arXiv 2023, arXiv:2308.14524. [Google Scholar]
  124. Li, H.; Ma, W.; Wang, H.; Liu, G.; Wen, X.; Zhang, Y.; Yang, M.; Luo, G.; Xie, G.; Sun, C. A Framework and Method for Human-Robot Cooperative Safe Control Based on Digital Twin. Adv. Eng. Inform. 2022, 53, 101701. [Google Scholar] [CrossRef]
  125. Mazumder, A.; Sahed, M.F.; Tasneem, Z.; Das, P.; Badal, F.R.; Ali, M.F.; Ahamed, M.H.; Abhi, S.H.; Sarker, S.K.; Das, S.K.; et al. Towards next Generation Digital Twin in Robotics: Trends, Scopes, Challenges, and Future. Heliyon 2023, 9, e13359. [Google Scholar] [CrossRef] [PubMed]
  126. Kuts, V.; Otto, T.; Tähemaa, T.; Bondarenko, Y. Digital Twin Based Synchronised Control and Simulation of the Industrial Robotic Cell Using Virtual Reality. J. Mach. Eng. 2019, 19, 128–144. [Google Scholar] [CrossRef]
  127. Kaarlela, T.; Arnarson, H.; Pitkäaho, T.; Shu, B.; Solvang, B.; Pieskä, S. Common Educational Teleoperation Platform for Robotics Utilizing Digital Twins. Machines 2022, 10, 577. [Google Scholar] [CrossRef]
  128. Cichon, T.; Rossmann, J. Digital Twins: Assisting and Supporting Cooperation in Human-Robot Teams. In Proceedings of the 2018 15th International Conference on Control, Automation, Robotics and Vision, ICARCV 2018, Singapore, 18–21 November 2018; pp. 486–491. [Google Scholar] [CrossRef]
  129. Tsokalo, I.A.; Kuss, D.; Kharabet, I.; Fitzek, F.H.P.; Reisslein, M. Remote Robot Control with Human-in-the-Loop over Long Distances Using Digital Twins. In Proceedings of the 2019 IEEE Global Communications Conference, GLOBECOM 2019—Proceedings, Waikoloa, HI, USA, 9–13 December 2019. [Google Scholar] [CrossRef]
  130. Lee, H.; Kim, S.D.; Al Amin, M.A.U. Control Framework for Collaborative Robot Using Imitation Learning-Based Teleoperation from Human Digital Twin to Robot Digital Twin. Mechatronics 2022, 85, 102833. [Google Scholar] [CrossRef]
  131. Lopez Pulgarin, E.J.; Niu, H.; Herrmann, G.; Carrasco, J. Implementing and Assessing a Remote Teleoperation Setup with a Digital Twin Using Cloud Networking. Lect. Notes Comput. Sci. Incl. Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinform. 2022, 13546, 238–250. [Google Scholar] [CrossRef]
  132. Kattepur, A. Robotic Tele-Operation Performance Analysis via Digital Twin Simulations. In Proceedings of the 2022 14th International Conference on COMmunication Systems and NETworkS, COMSNETS 2022, Bangalore, India, 4–8 January 2022; pp. 415–417. [Google Scholar] [CrossRef]
  133. Li, C.; Zheng, P.; Li, S.; Pang, Y.; Lee, C.K.M. AR-Assisted Digital Twin-Enabled Robot Collaborative Manufacturing System with Human-in-the-Loop. Robot. Comput. Integr. Manuf. 2022, 76, 102321. [Google Scholar] [CrossRef]
  134. Falleni, S.; Filippeschi, A.; Ruffaldi, E.; Avizzano, C.A. Teleoperated Multimodal Robotic Interface for Telemedicine: A Case Study on Remote Auscultation. In Proceedings of the RO-MAN 2017—26th IEEE International Symposium on Robot and Human Interactive Communication, Lisbon, Portugal, 28 August–1 September 2017; pp. 476–482. [Google Scholar] [CrossRef]
  135. Brizzi, F.; Peppoloni, L.; Graziano, A.; Di Stefano, E.; Avizzano, C.A.; Ruffaldi, E. Effects of Augmented Reality on the Performance of Teleoperated Industrial Assembly Tasks in a Robotic Embodiment. IEEE Trans. Hum. Mach. Syst. 2018, 48, 197–206. [Google Scholar] [CrossRef]
  136. Pereira, A.; Carter, E.J.; Leite, I.; Mars, J.; Lehman, J.F. Augmented Reality Dialog Interface for Multimodal Teleoperation. In Proceedings of the RO-MAN 2017—26th IEEE International Symposium on Robot and Human Interactive Communication, Lisbon, Portugal, 28 August–1 September 2017; pp. 764–771. [Google Scholar] [CrossRef]
  137. González, C.; Solanes, J.E.; Muñoz, A.; Gracia, L.; Girbés-Juan, V.; Tornero, J. Advanced Teleoperation and Control System for Industrial Robots Based on Augmented Virtuality and Haptic Feedback. J. Manuf. Syst. 2021, 59, 283–298. [Google Scholar] [CrossRef]
  138. Fu, B.; Seidelman, W.; Liu, Y.; Kent, T.; Carswell, M.; Zhang, Y.; Yang, R. Towards Virtualized Welding: Visualization and Monitoring of Remote Welding. In Proceedings of the 2014 IEEE International Conference on Multimedia and Expo (ICME), Cherbourg, France, 30 June–2 July 2014; pp. 1–6. [Google Scholar]
  139. Baklouti, S.; Gallot, G.; Viaud, J.; Subrin, K. On the Improvement of Ros-Based Control for Teleoperated Yaskawa Robots. Appl. Sci. 2021, 11, 7190. [Google Scholar] [CrossRef]
  140. Wang, B.; Hu, S.J.; Sun, L.; Freiheit, T. Intelligent Welding System Technologies: State-of-the-Art Review and Perspectives. J. Manuf. Syst. 2020, 56, 373–391. [Google Scholar] [CrossRef]
  141. Liu, Y.K. Toward Intelligent Welding Robots: Virtualized Welding Based Learning of Human Welder Behaviors. Weld. World 2016, 60, 719–729. [Google Scholar] [CrossRef]
  142. Ming, H.; Huat, Y.S.; Lin, W.; Hui Bin, Z. On Teleoperation of an Arc Welding Robotic System. Proc. IEEE Int. Conf. Robot. Autom. 1996, 2, 1275–1280. [Google Scholar] [CrossRef]
  143. Ding, D.; Shen, C.; Pan, Z.; Cuiuri, D.; Li, H.; Larkin, N.; Van Duin, S. Towards an Automated Robotic Arc-Welding-Based Additive Manufacturing System from CAD to Finished Part. CAD Comput. Aided Des. 2016, 73, 66–75. [Google Scholar] [CrossRef]
  144. Van Essen, J.; Van Der Jagt, M.; Troll, N.; Wanders, M.; Erden, M.S.; Van Beek, T.; Tomiyama, T. Identifying Welding Skills for Robot Assistance. In Proceedings of the 2008 IEEE/ASME International Conference on Mechatronics and Embedded Systems and Applications, MESA 2008, Beijing, China, 12–15 October 2008; pp. 437–442. [Google Scholar]
  145. Erden, M.S.; Billard, A. End-Point Impedance Measurements across Dominant and Nondominant Hands and Robotic Assistance with Directional Damping. IEEE Trans. Cybern. 2015, 45, 1146–1157. [Google Scholar] [CrossRef] [PubMed]
  146. Liu, Y.K.; Shao, Z.; Zhang, Y.M. Learning Human Welder Movement in Pipe GTAW: A Virtualized Welding Approach. Weld. J. 2014, 93, 388–398. [Google Scholar]
  147. Erden, M.S.; Tomiyama, T. Identifying Welding Skills for Training and Assistance with Robot. Sci. Technol. Weld. Join. 2009, 14, 523–532. [Google Scholar] [CrossRef]
  148. Liu, Y.K.; Zhang, Y.M. Control of Human Arm Movement in Machine-Human Cooperative Welding Process. Control Eng. Pr. 2014, 32, 161–171. [Google Scholar] [CrossRef]
  149. Tian, N.; Tanwani, A.K.; Goldberg, K.; Sojoudi, S. Mitigating Network Latency in Cloud-Based Teleoperation Using Motion Segmentation and Synthesis. Springer Proc. Adv. Robot. 2022, 20, 906–921. [Google Scholar] [CrossRef]
  150. Luz, R.; Silva, J.L.; Ventura, R. Enhanced Teleoperation Interfaces for Multi-Second Latency Conditions: System Design and Evaluation. IEEE Access 2023, 11, 10935–10953. [Google Scholar] [CrossRef]
  151. Chi, P.; Wang, Z.; Liao, H.; Li, T.; Zhan, J.; Wu, X.; Tian, J.; Zhang, Q. Low-Latency Visual-Based High-Quality 3-D Reconstruction Using Point Cloud Optimization. IEEE Sens. J. 2023, 23, 20055–20065. [Google Scholar] [CrossRef]
  152. Qin, B.; Luo, Q.; Luo, Y.; Zhang, J.; Liu, J.; Cui, L. Research and Application of Key Technologies of Edge Computing for Industrial Robots. In Proceedings of the 2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference, ITNEC 2020, Chongqing, China, 12–14 June 2020; pp. 2157–2164. [Google Scholar] [CrossRef]
  153. Ipsita, A.; Erickson, L.; Dong, Y.; Huang, J.; Bushinski, A.K.; Saradhi, S.; Villanueva, A.M.; Peppler, K.A.; Redick, T.S.; Ramani, K. Towards Modeling of Virtual Reality Welding Simulators to Promote Accessible and Scalable Training. In Proceedings of the Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 30 April–5 May 2022. [Google Scholar] [CrossRef]
  154. Pan, Y.; Zhang, L. Roles of Artificial Intelligence in Construction Engineering and Management: A Critical Review and Future Trends. Autom. Constr. 2021, 122, 103517. [Google Scholar] [CrossRef]
  155. Wang, Y.; Chen, Y.; Nan, Z.; Hu, Y. Study on Welder Training by Means of Haptic Guidance and Virtual Reality for Arc Welding. In Proceedings of the 2006 IEEE International Conference on Robotics and Biomimetics, Kunming, China, 17–20 December 2006; pp. 954–958. [Google Scholar]
  156. Ciszak, O.; Juszkiewicz, J.; Suszyński, M. Programming of Industrial Robots Using the Recognition of Geometric Signs in Flexible Welding Process. Symmetry 2020, 12, 1429. [Google Scholar] [CrossRef]
  157. Yu, H.; Qin, J.; Zhao, K. Innovation in Interactive Design of Tele-Robotic Welding in the Trend of Interaction Change. Des. Eng. 2020, 322–330. [Google Scholar] [CrossRef]
  158. Wang, Q.; Jiao, W.; Yu, R.; Johnson, M.T.; Zhang, Y.M. Virtual Reality Robot-Assisted Welding Based on Human Intention Recognition. IEEE Trans. Autom. Sci. Eng. 2020, 17, 799–808. [Google Scholar] [CrossRef]
  159. Wells, T.; Miller, G. The Effect of Virtual Reality Technology on Welding Skill Performance. J. Agric. Educ. 2020, 61, 152–171. [Google Scholar] [CrossRef]
  160. Byrd, A.P.; Stone, R.T.; Anderson, R.G.; Woltjer, K. The Use of Virtual Welding Simulators to Evaluate Experienced Welders. Weld. J. 2015, 94, 389–395. [Google Scholar]
  161. Liu, Y.; Zhang, Y. Human Welder 3-D Hand Movement Learning in Virtualized GTAW: Theory and Experiments. Trans. Intell. Weld. Manuf. 2019, 1, 3–25. [Google Scholar]
  162. Liu, Y.K.; Zhang, Y.M. Supervised Learning of Human Welder Behaviors for Intelligent Robotic Welding. IEEE Trans. Autom. Sci. Eng. 2017, 14, 1532–1541. [Google Scholar] [CrossRef]
  163. Liu, Y.K.; Zhang, Y.M. Fusing Machine Algorithm with Welder Intelligence for Adaptive Welding Robots. J. Manuf. Process 2017, 27, 18–25. [Google Scholar] [CrossRef]
  164. Wang, Q.; Cheng, Y.; Jiao, W.; Johnson, M.T.; Zhang, Y.M. Virtual Reality Human-Robot Collaborative Welding: A Case Study of Weaving Gas Tungsten Arc Welding. J. Manuf. Process. 2019, 48, 210–217. [Google Scholar] [CrossRef]
  165. Ni, D.; Yew, A.W.W.; Ong, S.K.; Nee, A.Y.C. Haptic and Visual Augmented Reality Interface for Programming Welding Robots. Adv. Manuf. 2017, 5, 191–198. [Google Scholar] [CrossRef]
  166. Selvaggio, M.; Notomista, G.; Chen, F.; Gao, B.; Trapani, F.; Caldwell, D. Enhancing Bilateral Teleoperation Using Camera-Based Online Virtual Fixtures Generation. IEEE Int. Conf. Intell. Robot. Syst. 2016, 2016, 1483–1488. [Google Scholar] [CrossRef]
  167. Rokhsaritalemi, S.; Sadeghi-Niaraki, A.; Choi, S.M. A Review on Mixed Reality: Current Trends, Challenges and Prospects. Appl. Sci. 2020, 10, 636. [Google Scholar] [CrossRef]
  168. Aygün, M.M.; Ögüt, Y.Ç.; Baysal, H.; Taşcioglu, Y. Visuo-Haptic Mixed Reality Simulation Using Unbound Handheld Tools. Appl. Sci. 2020, 10, 5344. [Google Scholar] [CrossRef]
  169. Tu, X.; Autiosalo, J.; Jadid, A.; Tammi, K.; Klinker, G. A Mixed Reality Interface for a Digital Twin Based Crane. Appl. Sci. 2021, 11, 9480. [Google Scholar] [CrossRef]
Figure 1. The immersive teleoperation of the Baxter robotic system with two human–robot interaction modes [17]. (a) Ego-centric teleoperation mode. (b) Eco-centric teleoperation mode.
Figure 2. A typical intuitive control approach for remote robotic manipulation. (a) The user manipulates the master robot of 7 DOF to intuitively drive the slave robot at a distance as if they were directly operating the slave robot. (b) The overview of the master–slave teleoperation architecture [38].
Figure 3. Milgram’s reality–virtuality continuum [84].
Figure 4. The XR-based telerobotic manipulation platform for intuitive and immersive telemanipulation. The physical robot environment consists of a robotic arm-hand system with onsite sensors. The XR-enhanced corresponding scene is displayed to the user in HMD.
Figure 5. The MR-based Baxter’s homunculus telerobotic system for a wide range of tele-manufacturing tasks. Reprinted with permission from Ref. [115]. 2018, IEEE.
Figure 6. The ROS Reality–controlled telerobotic manipulation platform. (a) The intuitive and immersive control of the dual-arm Baxter robot; (b) the point-cloud-enhanced MR scene displayed to the users. Reprinted with permission from Refs. [103,104]. 2018, IEEE.
Figure 7. The MR-based multiple-views merging approach for robotic telemanipulation. (a) The user controls the robot’s motion using commercial VR devices. (b) The real robot working environment. (c) The MR-based view-merging scene. (df) Three different vision-merging setups. Reprinted with permission from Ref. [121]. 2021, IEEE.
Figure 8. The projection-based MR remotely controlled robotic welding system for transferring welder skills and human-level dynamics to the welding robot. (a) Overview of the welding station and virtual station. (b) Replica of a welding torch used in the MR welding station. Reprinted with permission from Ref. [163]. 2017, Elsevier.
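Transferring the welder's hand motion to the robot generally involves resampling the tracked torch trajectory onto the robot's control clock and smoothing it so the robot receives commands at its own rate while preserving the human's travel speed and weaving. The helper below is a hedged sketch of that preprocessing step, not the method of [163]; the control rate and smoothing constant are assumptions.

```python
import numpy as np

def resample_and_smooth(torch_path, t_src, robot_rate_hz=125.0, alpha=0.2):
    """Resample a tracked torch path onto the robot control clock and
    exponentially smooth it. `torch_path` is an (N, 3) array of positions
    captured at increasing timestamps `t_src` (seconds). Parameters are
    illustrative assumptions.
    """
    torch_path = np.asarray(torch_path, dtype=float)
    t_src = np.asarray(t_src, dtype=float)
    t_dst = np.arange(t_src[0], t_src[-1], 1.0 / robot_rate_hz)

    # Linear resampling of each Cartesian axis onto the robot clock.
    resampled = np.column_stack(
        [np.interp(t_dst, t_src, torch_path[:, i]) for i in range(3)])

    # First-order exponential smoothing to attenuate hand tremor.
    smoothed = np.empty_like(resampled)
    smoothed[0] = resampled[0]
    for k in range(1, len(resampled)):
        smoothed[k] = alpha * resampled[k] + (1.0 - alpha) * smoothed[k - 1]
    return t_dst, smoothed
```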
Figure 9. Overview of the MR-based telerobotic system for remote welding tasks. (a) System configuration of the MR-based human–robot collaborative welding platform. (b) Telerobotic welding site. (c) The MR scene displayed to the operator, augmented by a real-time on-site welding stream. Reprinted with permission from Ref. [164]. 2019, Elsevier.
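Augmenting the MR scene with a live welding view usually amounts to capturing the on-site camera and publishing a compressed stream that the operator-side client textures onto a virtual screen. The rospy/OpenCV publisher below is a minimal sketch of the robot-side half of that pipeline; the device index, topic name, and frame rate are assumptions rather than the configuration in [164].

```python
#!/usr/bin/env python
# Minimal sketch: publish the on-site welding camera as a compressed ROS stream
# for the operator-side MR overlay. Device index, topic, and rate are assumptions.
import cv2
import rospy
from sensor_msgs.msg import CompressedImage

def main():
    rospy.init_node('welding_camera_publisher')
    pub = rospy.Publisher('/welding_cam/image/compressed', CompressedImage, queue_size=1)
    cap = cv2.VideoCapture(0)                 # on-site welding camera
    rate = rospy.Rate(15)                     # 15 fps is sufficient for an overlay
    while not rospy.is_shutdown():
        ok, frame = cap.read()
        if ok:
            msg = CompressedImage()
            msg.header.stamp = rospy.Time.now()
            msg.format = 'jpeg'
            msg.data = cv2.imencode('.jpg', frame)[1].tobytes()
            pub.publish(msg)
        rate.sleep()
    cap.release()

if __name__ == '__main__':
    main()
```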
Figure 10. The intuitive MR-based user interfaces for comprehending robot states in tele-welding experiments. (a) Typical 2D visual feedback for remote-controlled robotic welding, where the user observes the welding process on a monitor without an immersive HMD; (b) the virtual replica of the physical welding workpiece; (c) the tele-welding MR platform without haptic effects, including the virtual welding workpiece, overlaid RGB stream, virtual welding torch, and the scaled digital twin of the UR5; (d) the MRVF module involving hybrid guidance and prevention VFs [107].
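The haptic behaviour behind such virtual fixtures (VFs) is commonly realised as a pair of spring-like force laws: a guidance fixture that pulls the tool toward the programmed weld seam and a prevention (forbidden-region) fixture that pushes back when the tool approaches the workpiece surface. The function below is an illustrative sketch of that idea, not the MRVF implementation in [107]; the geometry, stiffness values, and clearance threshold are assumptions.

```python
import numpy as np

def virtual_fixture_force(tool_pos, seam_a, seam_b, surface_z,
                          k_guide=80.0, k_prevent=400.0, clearance=0.005):
    """Illustrative hybrid guidance/prevention virtual fixture force (N).

    Guidance: spring toward the closest point on the seam segment a-b.
    Prevention: stiff one-sided spring pushing up when the tool gets within
    `clearance` metres of the workpiece surface (plane z = surface_z).
    """
    p = np.asarray(tool_pos, dtype=float)
    a, b = np.asarray(seam_a, dtype=float), np.asarray(seam_b, dtype=float)

    # Guidance VF: attract the tool toward its projection onto the seam segment.
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    closest = a + t * ab
    f_guide = k_guide * (closest - p)

    # Prevention VF: repel the tool if it dips below the clearance plane.
    penetration = (surface_z + clearance) - p[2]
    f_prevent = np.array([0.0, 0.0, k_prevent * max(penetration, 0.0)])

    return f_guide + f_prevent
```

The summed force would be rendered on the haptic device at its servo rate, so the operator feels both the pull toward the seam and the resistance near the forbidden region.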
Table 1. Differences between this article and comparable studies recently introduced in the literature on MR-enhanced robotics.
Reference [98] (2021)
Topics focused on:
  • Review of the current state of VR/AR solutions in HRI and HRC
  • Solutions summarized in four main categories: operator support, instruction, simulation, and manipulation
  • Referenced citations were published between 2010 and 2019
Differences with this article:
  • Review of novel XR-enhanced robotic approaches for intuitive remote teleoperation applications
  • This article focuses on XR technologies in robotic telemanipulation and telemanufacturing scenarios
  • Articles presented were published from 2016 to 2023

Reference [99] (2022)
Topics focused on:
  • Review of existing AR-enhanced HRI and robotic interfaces
  • Approaches presented in two main categories: augmenting robots and augmenting surroundings
Differences with this article:
  • This article reviews XR-enhanced telerobotic solutions, including the use of VR, MR, and AR
  • The scope of this work is focused on telemanipulation and telemanufacturing tasks in hazardous conditions

Reference [100] (2023)
Topics focused on:
  • Review of existing VR/AR/MR tools and frameworks in HRC using game engines
  • Recent literature on software tools categorized into communication tools and interaction tools
  • Applications presented in four main categories: social robotics, programming of industrial robots, teleoperation, and human–robot collaboration
Differences with this article:
  • This article presents XR systems that extend the user’s reality and provide a more immersive and intuitive interface
  • This article focuses on both software and hardware aspects
  • This work classifies the industrial application context of the reviewed articles into two groups: XR-enhanced telemanipulation and XR-enhanced tele-welding
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
