Review

Human-Centric Digital Twins in Industry: A Comprehensive Review of Enabling Technologies and Implementation Strategies

1
Department of Mechanical Engineering, Capital University of Science and Technology, Islamabad 45750, Pakistan
2
Department of Mechatronics Engineering, National University of Sciences and Technology, Islamabad 44000, Pakistan
3
Digital Innovation Research Group, Department of Engineering, School of Science & Technology, Nottingham Trent University, Nottingham NG11 8NS, UK
*
Author to whom correspondence should be addressed.
Sensors 2023, 23(8), 3938; https://doi.org/10.3390/s23083938
Submission received: 22 February 2023 / Revised: 6 April 2023 / Accepted: 8 April 2023 / Published: 12 April 2023
(This article belongs to the Special Issue Machine Learning in Cyber Physical Systems)

Abstract

The last decade saw the emergence of highly autonomous, flexible, re-configurable Cyber-Physical Systems. Research in this domain has been enhanced by the use of high-fidelity simulations, including Digital Twins, which are virtual representations connected to real assets. Digital Twins have been used for process supervision, prediction, and interaction with physical assets. Interaction with Digital Twins is enhanced by Virtual Reality and Augmented Reality, and Industry 5.0-focused research is evolving to incorporate the human aspect into Digital Twins. This paper aims to review recent research on Human-Centric Digital Twins (HCDTs) and their enabling technologies. A systematic literature review is performed using the VOSviewer keyword mapping technique. Current technologies such as motion sensors, biological sensors, computational intelligence, simulation, and visualization tools are studied for the development of HCDTs in promising application areas. Domain-specific frameworks and guidelines are formed for different HCDT applications that highlight the workflow and desired outcomes, such as the training of AI models, the optimization of ergonomics, security policy, task allocation, etc. A guideline and comparative analysis for the effective development of HCDTs are created based on the criteria of Machine Learning requirements, sensors, interfaces, and Human Digital Twin inputs.

1. Introduction

Industry has a constant need to evolve, innovate, and adopt technological advances in order to meet the challenges of the future in a competitive environment. Broadly, in the literature, historical industrial progress is divided into epochs called “Industrial Revolutions”, characterized by major technological paradigm shifts [1]. In light of the digital revolution and exponential growth in computational power, information and data processing capabilities, smart sensors, and computational intelligence, the focus of Industry 4.0 is to leverage emerging technologies to interconnect smart Cyber-Physical Systems (CPS) and enable mass customization in robust, flexible Smart Factories. A synergistic paradigm of emerging technologies is currently developing, which includes leveraging multi-physics simulations to create Digital Twins (DT) of physical systems. DTs use immersive technologies such as Virtual Reality (VR) and Augmented Reality (AR) to explore and interact with virtual objects and use collaborative robots for safe, intuitive Human-Robot Interaction (HRI), especially by utilizing advances in artificial intelligence. There is an increasing realization that, in addition to technological aspirations, governments and industry should move towards a value-driven future where the focus is on human well-being, sustainability, and resilience under Industry 5.0 [2].

1.1. Cyber-Physical Systems and Digital Twins

In the context of Industry 4.0, certain enabling technologies such as distributed computing, sensor networks, big data, and the Internet of Things (IoT) have found increasing application in manufacturing settings to achieve reliability, flexibility, increased automation, and better performance [3]. Due to advances in computational intelligence, increasing data bandwidth and processing capability, and the widespread use of sensors, it has become possible to create highly autonomous virtual replicas of CPS with advanced decision-making capabilities. These highly autonomous virtual replicas are known as Digital Twins. The term ‘Digital Twin’ was popularized by NASA in the context of flying vehicles [4]. Many definitions of DTs have been proposed in the literature. The definition proposed by AIAA [5] is “a set of virtual information constructs that mimic the structure, context, and behavior of an individual/unique physical asset, or a group of physical assets, are dynamically updated with data from their physical twins throughout their life cycles and inform decisions that realize value”.
Although the concept of DTs originated in the aerospace industry, they can now be used for the digital representation of any complex system [6]. DTs have been employed in many traditional manufacturing areas, including metallurgy, machining, grinding, and hole punching [7]. Functionally, DTs are used for the supervision, interaction, and prediction of assets, often exploiting artificial intelligence and immersive user experiences, for example using Extended Reality (XR) [8]. Furthermore, Digital Twin technology is also finding increasing use among industry leaders, for example, by British Petroleum for monitoring oil and gas facilities and by General Electric for monitoring turbines in the power production sector [9].
This increasing interest in digitization has created a number of opportunities to explore the emerging phenomenon of DTs for CPS to meet the requirements of smart manufacturing. Computer simulations with multi-physics modeling can be employed to create DTs of real machines with a bi-directional data flow between the physical and virtual systems. Such DTs can be utilized for rapid testing, optimization, and deployment in flexible manufacturing using Cyber-Physical Production Systems.

1.2. Human Robot Collaboration

Industrial robots driven by Programmable Logic Controllers executing fixed sets of instructions have become a staple of automation and mass production in advanced factories. Most such industrial robots are entirely separated from human workers by safety fencing or large distances. Such industrial setups require high degrees of automation with few human operators. Increasingly, there is a trend towards reintroducing human workers to the factory floor to work side-by-side with collaborative robots (Cobots). The coexistence of humans and robots leverages human creativity and general intelligence alongside robot precision and repeatability, as well as greatly improved machine perception driven by rapid developments in artificial intelligence technologies. Emerging avenues in Human-Robot Collaboration (HRC) focus on human centricity, where a symbiotic relationship between humans and robots is envisioned and human performance and perception can be enhanced by using exoskeletons, cognitive ability, co-intelligence, mixed reality, and Brain-Computer Interface (BCI) [10].
There can be an array of possible hazards during HRC stemming from the design of the industrial process, malfunction of the control system of the robot, the robot characteristics (such as speed, force, end-effectors, etc.), or mental stress to the operator during collaboration [11]. The strategies employed to ensure safe human-robot collaboration include limiting tool center point (TCP) velocity, limiting robot power and force, collision detection and avoidance, protective stop functions, and ergonomic design of the robot as well as the working space [12].
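The speed- and force-limiting strategies above can be illustrated with a simple speed-and-separation check. The sketch below is loosely inspired by the protective separation distance concept of ISO/TS 15066; the formula is simplified and all parameter values are illustrative, not taken from the standard or from any cited system.

```python
# Simplified speed-and-separation monitoring check for safe HRC.
# All parameter values are illustrative placeholders.

def protective_separation_distance(v_human, v_robot, t_react, t_stop,
                                   c_intrusion=0.1, z_uncertainty=0.05):
    """Minimum human-robot separation (m) before a protective stop is needed.

    v_human: human approach speed toward the robot (m/s)
    v_robot: robot speed toward the human (m/s)
    t_react: controller reaction time (s)
    t_stop:  robot stopping time (s)
    """
    d_human = v_human * (t_react + t_stop)   # human travel until robot halts
    d_robot = v_robot * t_react              # robot travel before braking
    return d_human + d_robot + c_intrusion + z_uncertainty

def is_safe(current_distance, **kwargs):
    return current_distance >= protective_separation_distance(**kwargs)

# Example: a person approaching at 1.6 m/s, cobot moving at 0.25 m/s
print(is_safe(1.5, v_human=1.6, v_robot=0.25, t_react=0.1, t_stop=0.3))
```

In a DT-driven implementation, the distance input would come from depth cameras or motion capture, and the check would trigger a slowdown or protective stop.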

1.3. Human-Centric Digital Twins

Human centricity is an important hallmark of “Industry 5.0”, which is envisioned to be a value-based industrial revolution with a focus on human well-being, sustainability, flexibility, and efficiency [13]. The challenge of improving human well-being and reducing harmful emissions in smart manufacturing requires structural changes, and may even necessitate transformation to a post-growth economy in high-income countries [14]. This comes with the realization that the objective of automation is not to eliminate labor altogether but to leverage the creativity, objective thinking, dexterity, and decision-making power of humans alongside the repeatability, accuracy, and convenience of robots for repetitive, labor-intensive, tedious tasks and tasks that may be hazardous for humans [15]. This gives rise to the idea of Operator 5.0 [16], where human perception, cognition, and interaction capabilities are enhanced by a range of enabling technologies, as shown in Figure 1, with the aim of using these strengths to achieve sustainable development with a focus on social well-being and robustness in the face of unexpected challenges.
In order to better integrate humans into CPS, the concept of Human Digital Twins is gaining popularity as a means to better monitor, evaluate, and optimize human performance, ergonomics, and well-being [18]. The development of Human Digital Twins involves the deployment of a model of humans using sensor data that provides insight into their behavior and attributes, which may include their physical, physiological, cognitive, and emotional states [19]. Although DTs of machines have found broad use in industry, the use of DTs and parallel societies for human-centric social computing is still a developing research topic [20]. Research on human intent recognition is also motivated by the aim of developing symbiotic collaboration between robots and humans so as to distinguish between accidental contact and active collaboration and develop an intuitive and helpful cobot motion control strategy [21]. Creating a symbiotic human-robot collaboration system requires dynamic monitoring of humans and resources using smart sensors, active collision avoidance, dynamic planning, and context-aware adaptive robot control [22].
The main purpose of this study is to present state-of-the-art research conducted on Human-Centric Digital Twins, their enabling technologies, and implementation frameworks for different industrial applications. Firstly, the recent literature in the domain of HCDTs, how it evolved over the years, and areas for future research are discussed using a detailed literature review. Secondly, enabling technologies used by various researchers and engineers in the past and those having potential in the future are discussed. Finally, different applications of HCDT technology along with implementation frameworks are presented, and general guidelines are discussed for the development of HCDTs, as shown in Figure 2.

2. Review Methodology

A comprehensive literature review is conducted in this study, where relevant research studies are exported into a digital library, assessed, and screened for relevance and duplication. The research articles considered in this study were published between 2012 and 2022 and exported from Google Scholar, Science Direct, and Scopus. These databases are selected because they contain the latest full-text peer-reviewed articles, offer advanced search options, and cover the largest body of published research. The following keyword-based search query is used: “Digital Twin” AND (“Human-Centric” OR “Human Centered” OR “Human Robot Collaboration” OR “Industry 4.0” OR “Human Robot Interaction” OR “Human Digital Twin” OR “Human-Centered Design” OR “Industry 5.0”).
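The boolean query above can be assembled programmatically, which is convenient when the same keyword set must be adapted to databases with different search syntaxes. A minimal sketch:

```python
# Assembling the boolean search string used for the database query.
primary = "Digital Twin"
related = ["Human-Centric", "Human Centered", "Human Robot Collaboration",
           "Industry 4.0", "Human Robot Interaction", "Human Digital Twin",
           "Human-Centered Design", "Industry 5.0"]

# Each related term is quoted and OR-ed; the whole clause is AND-ed
# with the primary term.
related_clause = " OR ".join('"{}"'.format(t) for t in related)
query = '"{}" AND ({})'.format(primary, related_clause)
print(query)
```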
In the second phase, a graph-based search method is used to find additional relevant papers, where key papers identified from our initial search are used as seeds. For the selection of key papers, the number of citations of the publications was considered, and some of the most highly cited articles included in this review (Table 1) are used as seed papers. Citation analysis tools (inciteful (https://inciteful.xyz/ (accessed on 12 January 2023)) and Litmaps (https://www.litmaps.com/ (accessed on 15 January 2023))) are utilized to create citation network graphs (Figure 3), and related papers are explored and added to the database using the network graph.
Using this search method, 237 research publications are first screened by studying their abstracts. Relevant papers are imported into a digital library created using Mendeley software and further assessed for duplication and redundancy. Only those papers are selected in which the human element of DTs is an important consideration. The final volume of 119 publications from 2016 to 2022 is included in this review.

2.1. Keyword Mapping and Bibliometric Analysis

A keyword co-occurrence map of the selected publications is obtained using bibliometric analysis in VOSViewer [23] and shown in Figure 4. The main keywords are represented by circles and labels; colors represent keyword clusters, and the number of occurrences is shown by circle size. The distance between two keywords shows how closely they are linked: a larger distance indicates a weak correlation, while a smaller distance indicates a strong one. Figure 4 shows the main keywords in the selected literature, such as ‘human-robot interaction’, ‘virtual reality’, ‘industry 4.0’, ‘artificial intelligence’, and ‘cyber-physical systems’, from different clusters that are closely related to ‘Digital Twin’. The number of occurrences of each keyword and linking strength are also shown in Table 2.
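The occurrence and link-strength statistics underlying such a map can be sketched in a few lines. The per-paper keyword lists below are hypothetical; in practice, VOSviewer derives them from the bibliographic export.

```python
from collections import Counter
from itertools import combinations

# Hypothetical per-paper keyword sets (real maps use the full corpus).
papers = [
    {"digital twin", "human-robot interaction", "virtual reality"},
    {"digital twin", "industry 4.0", "cyber-physical systems"},
    {"digital twin", "artificial intelligence", "industry 4.0"},
]

# Circle size ~ number of occurrences of a keyword
occurrences = Counter(k for paper in papers for k in paper)

# Link strength ~ number of papers in which a pair of keywords co-occurs
links = Counter(frozenset(pair)
                for paper in papers
                for pair in combinations(sorted(paper), 2))

print(occurrences["digital twin"])
print(links[frozenset({"digital twin", "industry 4.0"})])
```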
Figure 5 highlights only the cluster of keywords related to ‘human centricity’ and ‘human centered design’. It shows that the selected recent literature has discussed human centricity in DTs, but it is not emphasized in all articles, as Digital Twin literature in previous years did not consider human participation. The same trend is shown in Table 2, where the keyword ‘digital twin’ has the highest overall link strength of 88, while the keywords ‘human-centricity’ and ‘human-centered design’ have a combined link strength of 12, showing a weak correlation as they occur only six times.

2.2. Related Reviews

In the last two years, a number of review articles on DT-related themes have been published, with their focus and outcome summarized in Table 3. Kunz et al. [24] suggested that nearly all papers “neglect the human factor”. Hosamo et al. [23] concluded that Occupant Centric Building Design is “least developed”. A review of Digital-Twin driven smart manufacturing by Lu et al. found that even though DTs for people can increase understanding of well-being and improve working conditions and training programs, over 95% of DTs are developed for manufacturing assets or factories, not for humans [25]. These review papers highlight the need for human-centricity in their respective domains. Based on the literature review, key enabling technologies and application domains for HCDTs are identified and discussed in the subsequent sections.

3. Enabling Technologies

A number of technical challenges exist in the implementation of HCDTs. This paper discusses the key technologies that can be used for the development of HCDTs and that have been implemented in the literature for different industrial applications, including Human-Robot Interaction (HRI). The following sections focus on sensing technologies; computational intelligence techniques involving artificial intelligence, optimization, and control systems; and simulation and visualization tools. Finally, a generic framework for HCDTs is presented that incorporates the discussed enabling technologies.

3.1. Human-Focused Sensors

To create HCDT-driven CPS, sensors can be deployed to collect digital data from humans and other physical systems, which are further integrated with different modeling and simulation tools to support the whole DT framework. Many mature solutions have been developed to collect data from physical systems, such as data collection from Cobots. However, data collection from humans is still maturing. Various human motion tracking sensors developed in recent decades are able to provide accurate results. However, gaze tracking [30], facial temperature, and other unobtrusive and miniaturized psychological and physiological data sensors are still evolving, making the sensing of human mental states an open challenge [31].
For accurate human skeletal tracking and joint monitoring, optical and non-optical sensing devices have been used by researchers. Optical marker-based devices, comprising active and passive markers, such as Optitrack (https://optitrack.com/ (accessed on 12 December 2022)) and VICON (https://www.vicon.com/ (accessed on 12 December 2022)), are widely used for human motion tracking. On the other hand, optical marker-less technology and video-based human motion tracking devices, including RGB-D cameras [32,33], infrared cameras [34,35], and Kinect [36,37,38,39], are also extensively used in the literature. Non-optical tracking devices, including wearable inertial and magnetic measurement units (IMUs) [40] and magnetometers, have been used to track human movements, trajectories, and positions while collaborating with cobots. Mechanical motion capture systems are also used when direct measurement of human motion is essential [41].
Different biological sensors are also being used to measure the physiological data of humans to monitor human behavior during human-robot collaboration [42]. Physiological sensors, such as Electrooculogram (EOG) [43], Electrocardiogram (ECG) [44], Electroencephalogram (EEG) [45], Magnetoencephalogram (MEG) [46], and Electromyogram (EMG) [47], capture signals generated from the human body and can infer important information. Lately, these signals have been broadly used in HRC systems to predict the intention of human operators [46].
Sensors are also required to collect and transmit data on various environmental parameters such as airflow, humidity, light, noise, temperature, and others. This data is then used to create an accurate digital representation of the physical environment, which can be used for simulations, analysis, and decision-making with regard to human comfort and well-being. For example, sensors can be used to monitor the air quality in a building, which can help identify potential health hazards or optimize the operation of HVAC systems in buildings, leading to improved comfort and energy savings.
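As a minimal sketch of how such environmental sensor data might feed comfort-related decision-making in a DT, the snippet below flags readings outside nominal comfort ranges. The threshold values and field names are illustrative placeholders, not taken from any standard or cited system.

```python
# Illustrative comfort check for an environment DT.
# Ranges are placeholder values, not from any standard.
COMFORT_RANGES = {
    "temperature_c": (20.0, 26.0),
    "humidity_pct": (30.0, 60.0),
    "co2_ppm": (0.0, 1000.0),
    "noise_db": (0.0, 55.0),
}

def comfort_violations(reading):
    """Return the parameters of a sensor reading outside their comfort range."""
    out = []
    for key, (lo, hi) in COMFORT_RANGES.items():
        value = reading.get(key)  # missing sensors are simply skipped
        if value is not None and not (lo <= value <= hi):
            out.append(key)
    return out

print(comfort_violations({"temperature_c": 27.5, "humidity_pct": 45.0,
                          "co2_ppm": 1200.0}))
```

In a deployed HCDT, such flags could drive HVAC setpoint optimization or alerts to building managers.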

3.2. Computational Intelligence

Artificial intelligence and machine learning are central to realizing the promise of HCDTs. In the context of path and motion planning for robotics systems, model-based control systems are still widely used despite being challenged by data-driven approaches. Model-based control theory can offer advantages such as explainability and performance and safety guarantees [48], which are still lacking in AI-based methods. Developing trust in autonomous systems is an active research area that will be central to achieving symbiotic human-robot collaboration.

3.2.1. Computer Vision

Creating a high-fidelity Human Digital Twin may involve recognition of human facial features, expressions, poses, gestures, and so on. Deep Neural Networks and Computer Vision are being used extensively in the literature for this purpose. Yi et al. [32] use RGB-D sensors for posture estimation using a CNN. Dimitropoulos et al. [33] also employed an array of technologies to enable an AI system to safely and ergonomically interact with human operators. The AI system uses convolutional neural networks with RGB-D sensors and real-life as well as virtually generated imagery as training data to perform object recognition as well as human pose estimation. Machine-learning-based computer vision has been widely used in the literature for safe Human-Robot Collaboration [49,50].

3.2.2. Classification Methods

Supervised learning-based classification techniques such as Convolutional Neural Networks (CNNs), Long Short-Term Memory networks (LSTMs), Hidden Markov Models (HMMs), and Spiking Neural Networks (SNNs) have been used in the literature for motion and intent detection and prediction in HRC [51,52,53]. Yi et al. demonstrated 3D human pose estimation using six inertial sensors and a deep neural network, fusing a data-driven and a motion-driven approach [40]. Supervised learning is also used to classify EEG signals in Brain-Computer Interfaces.
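The essence of such intent classification can be sketched with a toy nearest-centroid classifier over windowed motion features. The labels and feature values below are synthetic illustrations; the cited systems use CNNs/LSTMs on far richer sensor streams.

```python
import math

# Toy nearest-centroid classifier for labeling feature windows
# (e.g., "reach" vs "idle"); purely illustrative of the classification
# step in intent-detection pipelines.

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(samples):
    """samples: {label: [feature_vector, ...]} -> {label: centroid}"""
    return {label: centroid(vecs) for label, vecs in samples.items()}

def predict(model, x):
    return min(model, key=lambda label: math.dist(model[label], x))

model = train({
    "reach": [[0.9, 0.8], [1.0, 0.7]],   # high wrist speed, forward lean
    "idle":  [[0.1, 0.1], [0.0, 0.2]],
})
print(predict(model, [0.85, 0.75]))
```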
The use of BCI in human-machine interaction is still in its early stages. Dmytriyev et al. demonstrated the use of BCI based on EEG signals in a collaborative assembly setting, where the operator looks at monitor screens in order to issue commands to the robot controller [54]. Signal classification (unsupervised learning) has also been used in BCI applications; for example, a blink detection algorithm is used by [55] in the context of a collaborative assembly with the aid of an Augmented Reality (AR) device. Machine learning can also be used to improve the capabilities of robot sensors and actuators, which further enhances human-machine interaction. Jin et al. developed a soft-robotic sensory gripper that uses an SVM-based machine learning algorithm for object recognition through tactile feedback [56].

3.2.3. Reinforcement Learning

In Reinforcement Learning (RL), agents learn control policies through trial-and-error interaction with an environment, guided by reward signals rather than large labeled training datasets. Deep Reinforcement Learning techniques leverage deep neural networks in combination with RL algorithms such as Q-Learning to solve complex problems and have been shown to surpass human performance in many situations [57]. Reinforcement learning has found widespread use in localization and mapping [58], motion planning [59,60], and decision-making in human-centric applications [61,62,63], as further described in the applications sections.
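The Q-Learning update at the core of these methods can be shown on a toy problem. The sketch below trains a tabular agent on a five-state corridor (goal at the right end); it is purely illustrative and unrelated to any cited system.

```python
import random

# Tabular Q-learning on a toy 1-D corridor: states 0..4, goal at state 4.
random.seed(0)
N_STATES, GOAL, ACTIONS = 5, 4, (-1, +1)
alpha, gamma, epsilon = 0.5, 0.9, 0.2
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(500):
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda act: Q[(s, act)]))
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: Q <- Q + alpha * (r + gamma * max Q' - Q)
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)  # greedy policy should move right at every non-goal state
```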

3.2.4. Optimization Techniques

Increasingly, data-intensive artificial intelligence methods are being used for computational intelligence. However, a number of optimization techniques, such as simulated annealing [64], ant colony optimization [65], and Genetic Algorithms [66], have also been used in the DT literature, as described in subsequent sections. Kennel-Maushart et al. [67] use Newton’s Method to optimize the solution of the inverse kinematics problem to enhance teleoperation performance via mixed reality for multi-robot systems.
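As a minimal illustration of one of these techniques, the sketch below applies simulated annealing to a one-dimensional multimodal function: uphill moves are occasionally accepted with probability exp(-Δ/T), letting the search escape local minima as the temperature cools. The objective and schedule are arbitrary illustrative choices.

```python
import math
import random

# Simulated annealing sketch minimizing a 1-D multimodal function
# (global minimum at x = 0, local minima near other integers).
random.seed(1)

def f(x):
    return x * x + 10 * (1 - math.cos(2 * math.pi * x))

x = 4.0            # start in a local basin away from the global minimum
best = x
T = 5.0            # initial temperature
for step in range(20000):
    cand = x + random.gauss(0, 0.5)
    delta = f(cand) - f(x)
    # Accept improvements always; accept uphill moves with prob exp(-delta/T)
    if delta < 0 or random.random() < math.exp(-delta / T):
        x = cand
        if f(x) < f(best):
            best = x
    T = max(1e-3, T * 0.9995)  # geometric cooling schedule

print("f(best) =", round(f(best), 3))
```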

3.3. Simulation Tools

For the development of a DT, different simulation tools and packages are required to create an interaction between physical objects and their virtual twins. Many free and commercial simulation environments are available that can be used in the design of DT for HRI, which will be discussed briefly in this section. However, the selection of a specific simulation tool is entirely dependent on the DT application.

3.3.1. Numerical Analysis Tools

Finite Element Analysis (FEA) is often necessary for multiphysics simulations of physical assets involving structural analysis, fluid flow, or thermal loads. Finite Element Analysis tools, such as ANSYS [68,69], COMSOL [66], and ABAQUS [70], have been frequently used to create DTs for biomedical applications and physical assets for which health/condition monitoring is desired [71,72]. FEM-based high-fidelity physics simulations are generally computationally costly. Reduced-Order Models based on FEM simulations can be a useful tool to deploy DTs at scale for complex systems [72], for which commercially available software such as ANSYS Twin Builder (https://www.ansys.com/products/digital-twin/ansys-twin-builder accessed on 14 December 2022) is becoming increasingly popular [73]. As a high-level numerical computing platform, MATLAB is used in a number of studies involving DTs in the medical domain [70,74,75], as further discussed in the applications.
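The Reduced-Order Modeling idea can be sketched in miniature: compress snapshot data from a (hypothetical) full-order simulation into one dominant mode. The snapshots below are made up; production ROM pipelines apply SVD/proper orthogonal decomposition to large FEM snapshot matrices.

```python
# Tiny proper-orthogonal-decomposition (POD) sketch via power iteration,
# illustrating how Reduced-Order Models compress full-order simulation data.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Snapshots of a hypothetical 4-DOF field; they are nearly parallel,
# so a single POD mode captures almost all of the energy.
snapshots = [[1.0, 2.0, 3.0, 4.0],
             [1.1, 2.1, 3.2, 4.1],
             [0.9, 1.9, 2.9, 3.8]]

# Power iteration on the snapshot covariance: v <- sum_i (s_i . v) s_i
v = [1.0, 0.0, 0.0, 0.0]
for _ in range(50):
    w = [0.0] * 4
    for s in snapshots:
        c = dot(s, v)
        w = [wi + c * si for wi, si in zip(w, s)]
    norm = dot(w, w) ** 0.5
    v = [wi / norm for wi in w]

# Reduced representation: each snapshot ~ coefficient * mode
coeffs = [dot(s, v) for s in snapshots]
recon_error = max(
    max(abs(c * vi - si) for vi, si in zip(v, s))
    for c, s in zip(coeffs, snapshots))
print("max reconstruction error:", round(recon_error, 3))
```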

3.3.2. Robotics Simulation Tools

Robotics simulators allow developers and researchers to design, test, and evaluate robotic systems in virtual environments. Gazebo is one of the most commonly used simulators in the HCDT literature [60,65,76,77]. It is an open-source simulator that can be integrated with ROS/ROS2. Robot Operating System (ROS) is one of the most widely used platforms among robotics researchers for a broad domain of applications, including motion planning, control, sensing, localization, and mapping [78]. For robotics manipulation and planning tasks, Gazebo is often used with the MoveIt framework [79], which provides state-of-the-art algorithms for motion planning, collision checking, kinematics, control, and visualization of manipulators. MATLAB/Simulink also provides toolboxes for robotics simulations, such as the Robotics System Toolbox. It can be used in co-simulation settings with other tools such as ROS or game engines such as Unity. Andaluz et al. [80] described a co-simulation scenario involving teleoperation and autonomous control of a robotic arm where Windows Inter-process Communication (IPC) was used to communicate between MATLAB and Unity. Other commonly used robotics simulators include CoppeliaSim [81] and NVIDIA Isaac Sim [82].

3.3.3. Game Physics Engines

Game development applications and physics engines developed for the gaming industry are valuable tools for the development and simulation of DTs [83]. The most commonly used physics engine in the gaming industry is NVIDIA’s PhysX, which is used by Unity and Unreal Engine, the two most popular game development platforms. In recent years, these have been widely used in the development of DTs. For example, Kuts et al. [84] implemented a custom C# script in the Unity game engine to develop a controller for the DT of a Motoman GP8 industrial robot created using Autodesk 3ds Max and Autodesk Maya. Other works that employ game engines are discussed in subsequent sections [30,63,77,85,86,87,88].

3.4. Data Visualization and Interaction

Different 3D visualization and rendering software is used by researchers for design visualization as per the requirements of the application area. In immersive user experiences, AR/VR technologies offer new possibilities. The visualization tools and technologies that can be used with DTs are briefly discussed in this section.

3.4.1. Photorealistic Rendering

The creation of virtual models in DTs begins with 3D modeling tools, which can be further used by simulation tools. Some of the tools used in the cited literature include Blender [89], Autodesk 3ds Max [68,90], CATIA [91] and Siemens NX [92]. In addition, the ability of these tools to create photorealistic rendered images can be used to create training data for ML algorithms and to provide a better realistic user experience for the design and inspection of DTs. In many cases, this is done by exporting the models into platforms such as NVIDIA Omniverse [93], Unity [26], and Unreal Engine [86].

3.4.2. Immersive User Experience

The use of DTs creates opportunities to create a rich immersive user experience through, for example, the use of virtual reality, augmented reality, and mixed reality. In augmented reality, virtual objects are superimposed on real images, using a Head Mounted Display (HMD) such as Microsoft Hololens, by using sensors and trackers along with handheld or fixed displays, and AR development tools such as AR Toolkit [94], Microsoft Mixed Reality Toolkit [24], and PTC Vuforia [92,95]. Mixed Reality (MR) allows for co-existence and interactivity between the physical and the virtual environment [96], which can be used, for example, for intuitive, user-friendly teleoperation of robotic manipulators [97]. In design and manufacturing, AR systems have found widespread applications in training and guiding operators in areas such as operations, maintenance, and quality assurance; however, user acceptance is limited by challenges such as cost, complexity, weight, data security, and privacy issues in AR systems [98]. Interactivity in VR/AR may not be limited to handheld controllers with improvements in gesture tracking through computer vision. For example, Ref. [99] used four monochrome cameras mounted on a VR HMD (Oculus Quest VR) for accurate detection of hand motion, which may be used for a more immersive interactive experience.
Human cognitive ability can be enhanced not just through a visual relay of information to the operator via a mixed reality approach as required, but also through haptic feedback, auditory cues, and communication. Advances in speech recognition, text-to-speech, and turn-based conversational systems can potentially enhance human-robot interaction. However, there are still major challenges, since conversational systems are trained using specific human-human corpora, and their generic practical utility in human-robot interaction remains unproven [100]. With increasingly sophisticated Large Language Models (LLMs) such as OpenAI’s ChatGPT, rich impromptu human-machine interaction in industrial settings carried out via two-way text and voice communication becomes foreseeable, through a fusion of technologies such as Task and Motion Planning, Vision, Language, and Control in Robotics [101].

3.5. Data Management and System Integration

In a Human-Centric Digital Twin, there is interconnectivity between humans, robots (or miscellaneous machines), the environment, and their DTs in the physical and virtual domains. In the implementation of Digital Twins, a wide range of tools and technologies for data management, including data transmission from devices and sensors, data storage, and data fusion, have been used. Industrial IoT platforms such as Siemens MindSphere and GE Predix are gaining popularity for system integration in DTs [102], especially for conventional infrastructure management. In Human-Centric Digital Twins, the cited literature uses more research-oriented tools and standards, such as ROSBridge [79,86,94], MQTT [32], and MTConnect [87] for communication. ROSBridge is a WebSocket server that allows web browsers to talk to ROS. MQTT (Message Queuing Telemetry Transport) is a lightweight machine-to-machine messaging protocol. MTConnect is an open-source standard that provides a semantic vocabulary for manufacturing equipment. System integration to ensure a seamless, secure flow of data to represent physical infrastructure, especially for legacy systems, remains an important challenge in the deployment of DTs [28].
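To make the data-exchange layer concrete, the sketch below shows what a human-state telemetry message published to an HCDT broker (e.g., over MQTT) might look like. The topic string, field names, and values are illustrative assumptions, not taken from any cited system or protocol specification.

```python
import json

# Illustrative human-state telemetry payload for an HCDT data pipeline.
# Topic and field names are hypothetical.
def make_payload(operator_id, heart_rate_bpm, pose_joint_angles):
    return json.dumps({
        "topic": "hcdt/operator/{}/state".format(operator_id),
        "ts": 1700000000.0,  # fixed timestamp for reproducibility
        "heart_rate_bpm": heart_rate_bpm,
        "pose": pose_joint_angles,
    })

msg = make_payload("op-07", 82, {"elbow_r": 45.0, "shoulder_r": 30.0})
decoded = json.loads(msg)
print(decoded["topic"])
```

In practice, such a JSON body would be handed to an MQTT client library for publishing, with the broker routing it to the virtual-twin services that consume it.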
A generic framework for HCDTs is presented in Figure 6, in which many of the devices and methods that may be employed are listed. A virtual model of the environment can be created using CAD models or by reconstruction using image-based or 3D point-based modeling approaches [103]. The design and selection of sensors, computational tools, simulation tools, data visualization tools, and data management tools in the integrated system are carried out according to the specific requirements of the HCDT based on the application area.

4. Application Domains

Digital Twin technology has recently been applied in a range of industries and scenarios, including Smart City construction [104] and the monitoring and optimization of physical assets such as machine tools, vehicles, machinery, mechanical structures, and materials [27,28,29]. Here, we have identified and placed particular emphasis on key application domains where human-centricity is of paramount importance.

4.1. Ergonomics and Safety

In Smart Factories, as humans and collaborative robots begin to share workspaces without physical barriers, it is necessary to ensure human safety and optimize ergonomics during such interactions. Cobots are designed to be intrinsically safe due to speed, force, and torque limits and design considerations. Additionally, there is extensive literature, along with guidelines and standards, dealing with risk assessment and safety requirements for human-robot interaction [105]. Havard et al. [106] created a DT involving the co-simulation of a CPPS using Dassault Digital Factory Suite and Unity 3D for an assembly operation using a UR10 cobot. The Digital Twin was employed to carry out a safety and ergonomics assessment in VR. Agnusdei et al. [29] provide a framework to assess the capability of a DT to improve safety based on three criteria. The first relates to data acquisition, where real-time data acquisition is heavily favored. The second relates to data processing: DTs use statistical methods, multi-physics modeling, or artificial intelligence-based methods for data processing. The third criterion concerns the source of risk, which can be the machine, the human, or human-machine interaction.
In HRC safety, collision avoidance is an important challenge that is commonly addressed in the literature [107], and a range of techniques use DTs for safe, contactless interaction with human operators. In ref. [36], an Oculus Rift HMD and a Kinect sensor feed a collision avoidance control algorithm. Liu et al. [37] employ a novel deep reinforcement learning algorithm, IRDDPG, to allow a cobot to reach a target state while minimizing the risk of collision with a human hand, modeled as a bounding box using a Kinect V2 depth camera. In numerous studies, DTs have been leveraged to optimize ergonomics in manufacturing settings. Greco et al. [38] used a DT simulation in Siemens Tecnomatix Jack, with optical motion capture via Microsoft Kinect and a custom-developed wearable motion capture system, to monitor and optimize ergonomics during a manufacturing task by evaluating and minimizing operational health and safety-related risk metrics.
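At their core, the contactless-interaction schemes above rest on a minimum-distance check between the robot and a bounding box around the tracked human. A minimal sketch of such a check follows; the geometry and the stop/slow thresholds are illustrative assumptions, not the IRDDPG method or any specific safety standard:

```python
import math

def point_to_aabb_distance(p, box_min, box_max):
    """Euclidean distance from a point (e.g., the robot TCP) to an
    axis-aligned bounding box enclosing the tracked human hand."""
    d2 = 0.0
    for pi, lo, hi in zip(p, box_min, box_max):
        if pi < lo:          # point is below the box on this axis
            d2 += (lo - pi) ** 2
        elif pi > hi:        # point is above the box on this axis
            d2 += (pi - hi) ** 2
    return math.sqrt(d2)

def safety_action(tcp, hand_min, hand_max, stop_dist=0.10, slow_dist=0.30):
    """Map the separation distance to a speed-override decision.
    Thresholds (in meters) are illustrative, not normative values."""
    d = point_to_aabb_distance(tcp, hand_min, hand_max)
    if d < stop_dist:
        return "stop"
    if d < slow_dist:
        return "slow"
    return "full_speed"
```

In a real deployment the bounding box would be refreshed each frame from the depth camera's skeleton tracking, and the override would be fed to the robot controller's speed-scaling input.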
Choi et al. [39] created a DT of an HRC scenario with a human operator and a UR3 cobot using DL algorithms applied to 3D point cloud data from two Azure Kinect depth sensors, conveying safety and task information to the human operator through a Mixed Reality HMD (Microsoft HoloLens 2). Table 4 presents a brief summary of related literature. A generic layout for HCDTs for ergonomics and safety is shown in Figure 7. The framework aims to enhance human well-being and safety by monitoring biological sensor data and creating a human intent model, which machine intelligence can use to optimize ergonomics and safety.

4.2. Training and Testing of Robotics Systems

Digital Twins have found considerable utility in the training and testing of robotics systems. Training supervised machine learning algorithms is highly data-intensive, and DTs can provide a high-fidelity virtual environment to generate the vast amounts of data needed to viably train an ML model and transfer it to the physical system or robot. NVIDIA has demonstrated this capability using its newly launched platform, NVIDIA Omniverse, with industrial partners such as Pepsi and Amazon to create DTs of warehouses and distribution centers for training AI models [111], as shown in Figure 8.
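The appeal of a DT as a data generator is that every simulated sample carries its ground-truth label for free. The sketch below illustrates this with abstract feature vectors standing in for rendered images and a trivial nearest-centroid classifier standing in for a deep model; all names are hypothetical, and none of this is the Omniverse API:

```python
import random

def render_frame(seed):
    """Stand-in for one simulated render: because the simulator knows the
    scene, each sample comes with its ground-truth label for free."""
    rng = random.Random(seed)
    label = rng.choice(["box", "pallet", "person"])
    base = {"box": [1, 0, 0], "pallet": [0, 1, 0], "person": [0, 0, 1]}[label]
    features = [b + rng.gauss(0, 0.1) for b in base]  # noisy "observation"
    return features, label

def generate_dataset(seeds):
    """Sweep simulator seeds to produce an arbitrarily large labeled set."""
    return [render_frame(s) for s in seeds]

def fit_centroids(data):
    """Minimal classifier: the per-class mean feature vector."""
    sums, counts = {}, {}
    for x, y in data:
        s = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def predict(centroids, x):
    return min(centroids, key=lambda y: sum((a - b) ** 2
                                            for a, b in zip(x, centroids[y])))

train = generate_dataset(range(600))
test = generate_dataset(range(10_000, 10_200))  # disjoint seeds
model = fit_centroids(train)
accuracy = sum(predict(model, x) == y for x, y in test) / len(test)
```

The Sim2Real step would then transfer the trained model to the physical perception system, where the domain gap (lighting, sensor noise) is the remaining challenge.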
Mania et al. employed photorealistic simulations rendered in a game engine (Unreal Engine 4) to improve the performance of a mobile robot's perception system by comparing its detection results with expected results acquired through physics simulations in the virtual environment [86]. Table 5 presents a brief summary of related literature.
A generic framework for using HCDTs for the training and testing of robotics systems is shown in Figure 9. Here, the intent is to realize a symbiotic human-robot relationship by training a deep learning or deep reinforcement learning algorithm on human and machine data fed into the DT, which includes predicting human intent and creating a machine policy to assist the human in a flexible, intuitive collaborative environment.

4.3. User Training and Education

VR/AR technologies are already widely used for training and teaching [113] in educational, industrial, and military applications. AR/VR headsets enable operators in process industries to gain a better understanding of the process and support remote assistance and operator training [28]. Um et al. showed that distributing computing power allows image recognition and various detection algorithms to be used seamlessly with the Microsoft HoloLens to guide the operator in a production setting. DTs have also been used to enhance surgical training, where the use of DTs with haptic feedback is reported to increase accuracy and reduce the cognitive load on surgeons [114].
In light of the COVID-19 pandemic and subsequent travel restrictions, there is increased recognition of the utility of AR as a distance-learning and remote-assistance tool in education and industry. For example, a case study [115] from PTC Vuforia highlighted how local field engineers from Rockwell Automation used AR as a remote assistance tool to obtain virtual support for the installation of specialized equipment from senior engineers who could not be present due to travel restrictions. In VR-based training, artificial intelligence and optimization methods have been shown to be effective tools for adaptive, customized training based on the user's cognitive and physiological abilities [116]. Unity and C# were used in ref. [117] for VR-based training with turn-based dialog for adaptive de-escalation training.
In ref. [94], an AR headset (Microsoft HoloLens) receives input from ROS through the Rosbridge communication package to enhance the user's experience and perception. VR/AR-based training with DTs is receiving considerable interest in a wide range of industries, such as construction [118,119] and mining [120]. Using immersive virtual simulations, the safety perception of HRC among construction workers is tested and enhanced in [121]. Matsas et al. [122] created a virtual training and testing environment using 3D graphics generated in 3ds Max within a gaming engine (Unity 3D); using a VR HMD, user evaluations of the safety techniques adopted by the HRC AI are carried out and analyzed. Wang et al. [35] used a combination of an industrial camera, a VR HMD, and Unity to assess welder behavior in a teleoperation setting with a UR5 cobot.
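Rosbridge, used in ref. [94], exposes ROS topics to non-ROS clients such as AR headsets as JSON over WebSockets. A sketch of the two core operations of the rosbridge v2.0 protocol follows; the topic names and the gaze-target payload are illustrative, not taken from ref. [94]:

```python
import json

def subscribe_msg(topic, msg_type):
    """rosbridge v2.0 'subscribe' operation: ask the bridge to forward
    messages published on a ROS topic to this WebSocket client."""
    return json.dumps({"op": "subscribe", "topic": topic, "type": msg_type})

def publish_msg(topic, msg):
    """rosbridge v2.0 'publish' operation: inject a message into a ROS topic."""
    return json.dumps({"op": "publish", "topic": topic, "msg": msg})

# An AR headset might subscribe to the robot's joint states...
sub = subscribe_msg("/joint_states", "sensor_msgs/JointState")
# ...and publish the user's gaze target back to the planner (illustrative topic).
pub = publish_msg("/operator/gaze_target", {"x": 0.4, "y": 0.1, "z": 0.9})
```

In practice these strings are sent over a WebSocket connection to the rosbridge server (by default on port 9090), e.g., via a client library such as roslibpy.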
In the context of user training and education, the literature shows that immersive technologies, audio-visual feedback, and the use of artificial intelligence with Digital Twin technology are central to achieving adaptive, intuitive, and customized user assistance and training experiences.

4.4. Product and Process Design, Validation and Testing

DTs combined with immersive technologies such as virtual reality and augmented reality (VR/AR) can enhance and enrich process and product design in manufacturing settings. Malik et al. [123] leveraged DTs and virtual reality for HRC design and planning. The proposed architecture uses bi-directional information sharing between the physical and virtual robots and a human operator via TCP/IP communication and motion capture technology. In the VR environment, assisted by a virtual assistant (chatbot), designers can manipulate objects and study the HRC design from the perspective of visibility, reach, and ergonomics to optimize cycle times and safety. The framework proposed by the authors is shown in Figure 10.
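A minimal sketch of the kind of bi-directional state exchange such a TCP/IP link carries is shown below. The wire format (one timestamp followed by six joint angles as little-endian doubles) is a hypothetical assumption for illustration, not the format used by Malik et al.:

```python
import struct

# Hypothetical wire format for the physical<->virtual robot link:
# a little-endian double timestamp followed by six double joint angles (rad).
FMT = "<7d"

def pack_state(t, joints):
    """Serialize one robot state for transmission over the TCP socket."""
    assert len(joints) == 6, "six-axis arm assumed"
    return struct.pack(FMT, t, *joints)

def unpack_state(payload):
    """Recover (timestamp, joint angles) on the receiving side."""
    t, *joints = struct.unpack(FMT, payload)
    return t, list(joints)
```

The same payload flows in both directions: the physical controller streams measured states to the virtual robot, and the virtual environment streams commanded states back.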
Kousi et al. [79] employed ROS Gazebo, MoveIt, and Lanner Witness simulation in an automotive assembly case study, where a DT-based system was used to generate alternative configurations for the assembly process with a dual-arm mobile robot and human operators and to validate the system's performance. Wang et al. [124] present a framework for Human-Robot Collaborative Assembly using DTs that proposes a data fusion and visualization service to process information from human-centric as well as robot sensors. The data is subsequently processed and used to generate events, schedule tasks, and run the robot's control service. Table 6 presents a brief summary of related literature.
The literature shows that DTs have been leveraged within this application domain to optimize cycle time, enhance workstation layout, ergonomics, and flexibility, and carry out dynamic scheduling, line balancing, and process planning.

4.5. Security of Cyber-Physical Systems

Security and reliability are important issues in DTs, and even more so in HCDTs: humans may be involved in the planning and execution of an attack, and may also be the target of an attack on an anthropocentric CPS (ACPS). In the context of a Collaborative Robotic Cyber-Physical System (CRCPS), Khalid et al. [135] assess attack types, their possible effects, and their severity, and present a framework, shown in Figure 11, for a secure CRCPS based on authentication and data integrity checks, along with an independent module that compares real-time sensor data to a pre-stored specifications library and reports any discrepancies.
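The specification-library comparison described above amounts to a continuous range audit of live sensor data. A minimal sketch follows; the sensor names and limits are purely illustrative, not values from the cited framework:

```python
# Hypothetical specification library: expected operating ranges per sensor.
SPEC_LIBRARY = {
    "joint_2_torque_Nm": (-15.0, 15.0),
    "tcp_speed_mms": (0.0, 250.0),
    "controller_temp_C": (5.0, 60.0),
}

def audit(readings):
    """Compare live sensor readings to the pre-stored specification library
    and report every out-of-range value as a potential integrity violation."""
    return {
        name: (value, SPEC_LIBRARY[name])
        for name, value in readings.items()
        if name in SPEC_LIBRARY
        and not (SPEC_LIBRARY[name][0] <= value <= SPEC_LIBRARY[name][1])
    }
```

Because the module runs independently of the (potentially compromised) controller, a discrepancy report can trigger a safe stop even when the primary control path has been tampered with.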
Security becomes even more central in mission-critical applications such as robotic surgery. Laaki et al. investigate the challenges of latency and security in teleoperated robotic surgery over mobile networks [136]. The authors used an HTC Vive HMD with a haptic feedback controller and a DT created in Unity3D with a 4G mobile connection routed through a VPN server, secured via password protection and biometric authentication, and highlighted security issues such as protection of intellectual property rights, protection of data, and denial-of-service attacks.
Digital Twins and their enabling technologies can themselves be used to enhance security. Deitz et al. [137] set out the formal requirements of a security framework that employs DT simulations to detect, analyze, and handle security incidents. Blockchain technology can be utilized to ensure the reliability and security of Digital Twin data for CPS [138]. A brief description of some other works in this area is shown in Table 7.
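The blockchain-backed integrity mentioned above ultimately rests on hash chaining: each stored DT record commits to its predecessor, so altering any record invalidates every later entry. A minimal sketch of that core idea follows (no consensus or distribution, so this is not a full blockchain):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first block's predecessor

def make_block(record, prev_hash):
    """Append-only ledger entry: the hash commits to both the DT record
    and the previous block's hash."""
    body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return {"record": record, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify(chain):
    """Walk the chain and recompute every hash; any tampering with a
    stored record or link breaks verification."""
    prev = GENESIS
    for block in chain:
        body = json.dumps({"record": block["record"], "prev": block["prev"]},
                          sort_keys=True)
        if block["prev"] != prev or \
           block["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = block["hash"]
    return True
```

A distributed ledger adds replication and consensus on top of this chaining, so that no single compromised node can rewrite the DT's history.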

4.6. Rehabilitation, Well-Being and Health Management

In the medical domain, Human Digital Twin technology has found greater maturity and acceptance. Medical Digital Twins (MDTs) have been used in many areas, including pulmonology [140], orthopedics [91], and hepatology [75]. In a recently publicized campaign, the National Football League (NFL), in partnership with Amazon AWS, is working towards creating a ‘Digital Athlete’ with the aim of improving player health and safety [141]; this may further increase public interest in MDTs.
MDTs may involve using clinical imaging techniques to develop patient-specific solutions, for example for the human tongue [73], tibia [69], and aortic walls [70]. Table 8 presents a brief description of these and other related works.
The use of DTs in medicine is not without challenges or apprehensions. Data-driven patient-specific medical care can potentially lead to segmentation and discrimination, with an increased need to ensure privacy and transparency in data usage [142].
Figure 12 illustrates a generic layout for healthcare applications. Patient-specific DTs are generated using information from sources such as medical imaging, biological sensor data, and medical informatics. Analysis and diagnostic data based on the DT are relayed to healthcare professionals, and DT-enabling technologies may aid them in training and in enhancing robotic surgery through sensory and haptic feedback.

5. Discussion

Cyber-physical systems with synergistic human-machine interaction, aided by Human-Centric Digital Twins, are slowly making their mark in multiple application domains. Regarding the human element in Digital Twin applications, Table 9 shows the utility of different sensors, Human Digital Twin inputs, feedback mechanisms, and prevalent types of machine learning algorithms in the identified application domains. The utility is ranked as low, medium, or high and displayed graphically.
As the table shows, to optimize ergonomics and safety in industrial settings, biological and visual sensor data are often used for pose and motion recognition via deep learning, relaying safety-critical information back to the human operator. Human supervisory control and intervention are necessary to ensure real-time safety during operation or to assess and optimize different offline scenarios. For training and testing robotic systems, Sim2Real technology can include the human element by generating virtual training data for deep learning models and testing agent policies with reinforcement learning (RL) techniques in an environment containing a digital human created with the aid of human sensor data, for example from visual and motion sensors. This can additionally be aided by human intervention through behavior cloning (BC) methods in RL.
The use of HCDTs with VR/AR may be leveraged to create personalized and adaptive learning and collaborative experiences. The audio-visual interface and direct input devices play a central role in this application, which may be aided by artificial intelligence, for example by leveraging Large Language Models (LLMs) or adaptive learning. The supervisors of the training regime should be able to intervene as required. Visual and motion sensors are central to process design applications. HCDTs, using optimization techniques and deep learning, can optimize cycle time; enhance workstation layout, ergonomics, and flexibility; and carry out dynamic scheduling, line balancing, and process planning. An immersive visual interface can be very beneficial for human designers interacting with the DT for process monitoring and assessment.
In security applications, biometric and facial recognition, motion sensing, and data security techniques, including blockchain technology, are central. Here, deep learning techniques can be leveraged to provide safeguards against numerous types of cyberattacks, with human intervention required only in the event of a detected security breach or threat. HCDTs already enjoy considerable acceptance in medical applications, where DTs of patients are created using medical imaging and biological sensors and subjected to deep learning algorithms or FEA analysis. These are used by doctors, who can train and operate with the aid of immersive sensory feedback, including haptic feedback and robotic assistance.

6. Conclusions

In this research work, a state-of-the-art literature review on Human-Centric Digital Twins (HCDTs) and their enabling technologies is conducted. A key shortcoming observed in the current DT literature is that the focus is almost entirely on the physical assets of a CPS rather than on human operators. In the coming years, increasingly ambitious and sophisticated industry- and process-specific DTs may address this shortcoming through the development of HCDTs. A generic framework is proposed to underline the enabling technologies, such as human-focused sensors and computer vision, that can be used for the creation of HCDTs. These enabling technologies have received considerable individual attention; for example, AI-driven algorithms for problems such as object recognition and collision avoidance, or the use of DTs for monitoring and supervising physical assets, have reached some degree of maturity. However, there is a lack of literature on how all these enabling technologies can be utilized together in a synergistic manner to enhance human-machine interaction in industrial settings.
We identified six key application areas for DTs with strong human involvement: ergonomics and safety; training and testing of robotics systems; user training and education; product and process design, validation, and testing; security of cyber-physical systems; and rehabilitation, well-being, and health management. Implementation frameworks for the selected domains highlight the workflow and desired outcomes (such as optimization of ergonomics, security policy, task allocation, etc.). The use of HCDTs in the literature on the security of CPS is currently underdeveloped and merits further exploration. DTs are being extensively used to train robotic systems through simulation and data generation in a virtual environment; these can be further enriched by including more human-focused sensor data to enable synergistic, intuitive collaboration. The development of increasingly specialized modeling tools and the ubiquitous spread of artificial intelligence can transform this expert-driven paradigm into a user-driven one. Over time, the use of HCDTs is expected to expand considerably, owing to the immense interest shown by researchers and industry.

Author Contributions

Conceptualization, A.K. and W.A.L.; Supervision, A.K. and W.A.L.; Writing—original draft, U.A. and M.K.; Methodology, U.A. and M.K.; Visualization, U.A. and M.K.; Writing—review & editing A.K. and W.A.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Nahavandi, S. Industry 5.0-a human-centric solution. Sustainability 2019, 11, 4371. [Google Scholar] [CrossRef] [Green Version]
  2. Lu, Y.; Zheng, H.; Chand, S.; Xia, W.; Liu, Z.; Xu, X.; Wang, L.; Qin, Z.; Bao, J. Outlook on human-centric manufacturing towards Industry 5.0. J. Manuf. Syst. 2022, 62, 612–627. [Google Scholar] [CrossRef]
  3. Gallala, A.; Kumar, A.A.; Hichri, B.; Plapper, P. Digital Twin for Human–Robot Interactions by Means of Industry 4.0 Enabling Technologies. Sensors 2022, 22, 4950. [Google Scholar] [CrossRef]
  4. Shafto, M.; Rich, M.C.; Glaessgen, D.E.; Kemp, C.; Lemoigne, J.; Wang, L. Modeling, Simulation, Information technology, and Processing roadmap. Technology Area 11. Natl. Aeronaut. Space Adm. 2012, 32, 1–38. [Google Scholar]
  5. Digital Twin: Definition & Value. An AIAA and AIA Position Paper. Available online: https://www.aia-aerospace.org/publications/digital-twin-definition-value-an-aiaa-and-aia-position-paper/ (accessed on 27 October 2022).
  6. Ammar, A.; Nassereddine, H.; Dadi, G. Roadmap to a Holistic Highway Digital Twin: A Why, How, & Why Framework. In Critical Infrastructure—Modern Approach and New Developments; Pietro, D.A.D., Marti, P.J., Eds.; IntechOpen: London, UK, 2022; Chapter 3. [Google Scholar] [CrossRef]
  7. Liu, M.; Fang, S.; Dong, H.; Xu, C. Review of digital twin about concepts, technologies, and industrial applications. J. Manuf. Syst. 2021, 58, 346–361. [Google Scholar] [CrossRef]
  8. Turner, C.J.; Garn, W. Next generation DES simulation: A research agenda for human centric manufacturing systems. J. Ind. Inf. Integr. 2022, 28, 100354. [Google Scholar] [CrossRef]
  9. Tao, F.; Zhang, H.; Liu, A.; Nee, A.Y. Digital Twin in Industry: State-of-the-Art. IEEE Trans. Ind. Inform. 2019, 15, 2405–2415. [Google Scholar] [CrossRef]
  10. Wang, L. A futuristic perspective on human-centric assembly. J. Manuf. Syst. 2022, 62, 199–201. [Google Scholar] [CrossRef]
  11. Khalid, A.; Khan, Z.H.; Idrees, M.; Kirisci, P.; Ghrairi, Z.; Thoben, K.D.; Pannek, J. Understanding vulnerabilities in cyber physical production systems. Int. J. Comput. Integr. Manuf. 2022, 35, 569–582. [Google Scholar] [CrossRef]
  12. Michalos, G.; Makris, S.; Tsarouchi, P.; Guasch, T.; Kontovrakis, D.; Chryssolouris, G. Design Considerations for Safe Human-robot Collaborative Workplaces. Procedia CIRP 2015, 37, 248–253. [Google Scholar] [CrossRef]
  13. Xu, X.; Lu, Y.; Vogel-Heuser, B.; Wang, L. Industry 4.0 and Industry 5.0—Inception, conception and perception. J. Manuf. Syst. 2021, 61, 530–535. [Google Scholar] [CrossRef]
  14. Hardt, L.; Barrett, J.; Taylor, P.G.; Foxon, T.J. What structural change is needed for a post-growth economy: A framework of analysis and empirical evidence. Ecol. Econ. 2021, 179, 106845. [Google Scholar] [CrossRef]
  15. Land, N.; Syberfeldt, A.; Almgren, T.; Vallhagen, J. A framework for realizing industrial human-robot collaboration through virtual simulation. Procedia CIRP 2020, 93, 1194–1199. [Google Scholar] [CrossRef]
  16. Wang, B.; Zheng, P.; Yin, Y.; Shih, A.; Wang, L. Toward human-centric smart manufacturing: A human-cyber-physical systems (HCPS) perspective. J. Manuf. Syst. 2022, 63, 471–490. [Google Scholar] [CrossRef]
  17. Longo, F.; Padovano, A.; Umbrello, S. Value-Oriented and Ethical Technology Engineering in Industry 5.0: A Human-Centric Perspective for the Design of the Factory of the Future. Appl. Sci. 2020, 10, 4182. [Google Scholar] [CrossRef]
  18. Löcklin, A.; Jung, T.; Jazdi, N.; Ruppert, T.; Weyrich, M. Architecture of a Human-Digital Twin as Common Interface for Operator 4.0 Applications. Procedia CIRP 2021, 104, 458–463. [Google Scholar] [CrossRef]
  19. Miller, M.E.; Spatz, E. A unified view of a human digital twin. Hum.-Intell. Syst. Integr. 2022, 4, 23–33. [Google Scholar] [CrossRef]
  20. Wang, F.Y.; Qin, R.; Li, J.; Yuan, Y.; Wang, X. Parallel Societies: A Computing Perspective of Social Digital Twins and Virtual-Real Interactions. IEEE Trans. Comput. Soc. Syst. 2020, 7, 2–7. [Google Scholar] [CrossRef]
  21. Zhang, R.; Lv, J.; Li, J.; Bao, J.; Zheng, P.; Peng, T. A graph-based reinforcement learning-enabled approach for adaptive human-robot collaborative assembly operations. J. Manuf. Syst. 2022, 63, 491–503. [Google Scholar] [CrossRef]
  22. Wang, L.; Gao, R.; Váncza, J.; Krüger, J.; Wang, X.; Makris, S.; Chryssolouris, G. Symbiotic human-robot collaborative assembly. CIRP Ann. 2019, 68, 701–726. [Google Scholar] [CrossRef] [Green Version]
  23. Hosamo, H.H.; Imran, A.; Cardenas-Cartagena, J.; Svennevig, P.R.; Svidt, K.; Nielsen, H.K. A Review of the Digital Twin Technology in the AEC-FM Industry. Adv. Civ. Eng. 2022, 2022, 2185170. [Google Scholar] [CrossRef]
  24. Kunz, A.; Rosmann, S.; Loria, E.; Pirker, J. The Potential of Augmented Reality for Digital Twins: A Literature Review. In Proceedings of the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) 2022, Christchurch, New Zealand, 12–16 March 2022; pp. 389–398. [Google Scholar] [CrossRef]
  25. Lu, Y.; Liu, C.; Wang, K.I.; Huang, H.; Xu, X. Digital Twin-driven smart manufacturing: Connotation, reference model, applications and research issues. Robot. Comput.-Integr. Manuf. 2020, 61, 101837. [Google Scholar] [CrossRef]
  26. Dianatfar, M.; Latokartano, J.; Lanz, M. Review on existing VR/AR solutions in human–robot collaboration. Procedia CIRP 2021, 97, 407–411. [Google Scholar] [CrossRef]
  27. Atalay, M.; Murat, U.; Oksuz, B.; Parlaktuna, A.M.; Pisirir, E.; Testik, M.C. Digital twins in manufacturing: Systematic literature review for physical–digital layer categorization and future research directions. Int. J. Comput. Integr. Manuf. 2022, 35, 679–705. [Google Scholar] [CrossRef]
  28. Perno, M.; Hvam, L.; Haug, A. Implementation of digital twins in the process industry: A systematic literature review of enablers and barriers. Comput. Ind. 2022, 134, 103558. [Google Scholar] [CrossRef]
  29. Agnusdei, G.P.; Elia, V.; Gnoni, M.G. A classification proposal of digital twin applications in the safety domain. Comput. Ind. Eng. 2021, 154, 107–137. [Google Scholar] [CrossRef]
  30. Park, K.B.; Choi, S.H.; Lee, J.Y.; Ghasemi, Y.; Mohammed, M.; Jeong, H. Hands-Free Human–Robot Interaction Using Multimodal Gestures and Deep Learning in Wearable Mixed Reality. IEEE Access 2021, 9, 55448–55464. [Google Scholar] [CrossRef]
  31. Tuli, T.B.; Kohl, L.; Chala, S.A.; Manns, M.; Ansari, F. Knowledge-Based Digital Twin for Predicting Interactions in Human-Robot Collaboration. In Proceedings of the 2021 26th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA ) 2021, Vasteras, Sweden, 7–10 September 2021; pp. 1–8. [Google Scholar] [CrossRef]
  32. Yi, S.; Liu, S.; Xu, X.; Wang, X.V.; Yan, S.; Wang, L. A vision-based human-robot collaborative system for digital twin. Procedia CIRP 2022, 107, 552–557. [Google Scholar] [CrossRef]
  33. Dimitropoulos, N.; Togias, T.; Zacharaki, N.; Michalos, G.; Makris, S. Seamless Human–Robot Collaborative Assembly Using Artificial Intelligence and Wearable Devices. Appl. Sci. 2021, 11, 5699. [Google Scholar] [CrossRef]
  34. Ha, E.; Byeon, G.; Yu, S. Full-Body Motion Capture-Based Virtual Reality Multi-Remote Collaboration System. Appl. Sci. 2022, 12, 5862. [Google Scholar] [CrossRef]
  35. Wang, Q.; Jiao, W.; Wang, P.; Zhang, Y. Digital Twin for Human-Robot Interactive Welding and Welder Behavior Analysis. IEEE/CAA J. Autom. Sin. 2021, 8, 334–343. [Google Scholar] [CrossRef]
  36. Khatib, M.; Khudir, K.A.; Luca, A.D. Human-robot contactless collaboration with mixed reality interface. Robot. Comput.-Integr. Manuf. 2021, 67, 102030. [Google Scholar] [CrossRef]
  37. Liu, Q.; Liu, Z.; Xiong, B.; Xu, W.; Liu, Y. Deep reinforcement learning-based safe interaction for industrial human-robot collaboration using intrinsic reward function. Adv. Eng. Inform. 2021, 49, 101360. [Google Scholar] [CrossRef]
  38. Greco, A.; Caterino, M.; Fera, M.; Gerbino, S. Digital Twin for Monitoring Ergonomics during Manufacturing Production. Appl. Sci. 2020, 10, 7758. [Google Scholar] [CrossRef]
  39. Choi, S.H.; Park, K.B.; Roh, D.H.; Lee, J.Y.; Mohammed, M.; Ghasemi, Y.; Jeong, H. An integrated mixed reality system for safety-aware human-robot collaboration using deep learning and digital twin generation. Robot. Comput.-Integr. Manuf. 2022, 73, 102258. [Google Scholar] [CrossRef]
  40. Yi, X.; Zhou, Y.; Xu, F. TransPose. ACM Trans. Graph. (TOG) 2021, 40, 1–13. [Google Scholar] [CrossRef]
  41. Filippeschi, A.; Schmitz, N.; Miezal, M.; Bleser, G.; Ruffaldi, E.; Stricker, D. Survey of Motion Tracking Methods Based on Inertial Sensors: A Focus on Upper Limb Human Motion. Sensors 2017, 17, 1257. [Google Scholar] [CrossRef] [Green Version]
  42. Tiberio, L.; Cesta, A.; Belardinelli, M.O. Psychophysiological Methods to Evaluate User’s Response in Human Robot Interaction: A Review and Feasibility Study. Robotics 2013, 2, 92–121. [Google Scholar] [CrossRef] [Green Version]
  43. Usakli, A.B.; Gurkan, S.; Aloise, F.; Vecchiato, G.; Babiloni, F. On the use of electrooculogram for efficient human computer interfaces. Comput. Intell. Neurosci. 2010, 2010, 135629. [Google Scholar] [CrossRef] [Green Version]
  44. Schalk, G.; Leuthardt, E.C. Brain-computer interfaces using electrocorticographic signals. IEEE Rev. Biomed. Eng. 2011, 4, 140–154. [Google Scholar] [CrossRef]
  45. Bi, L.; Fan, X.A.; Liu, Y. EEG-based brain-controlled mobile robots: A survey. IEEE Trans. Hum.-Mach. Syst. 2013, 43, 161–176. [Google Scholar] [CrossRef]
  46. Hakonen, M.; Piitulainen, H.; Visala, A. Current state of digital signal processing in myoelectric interfaces and related applications. Biomed. Signal Process. Control 2015, 18, 334–359. [Google Scholar] [CrossRef] [Green Version]
  47. Yeom, H.G.; Kim, J.S.; Chung, C.K. Estimation of the velocity and trajectory of three-dimensional reaching movements from non-invasive magnetoencephalography signals. J. Neural Eng. 2013, 10, 026006. [Google Scholar] [CrossRef] [PubMed]
  48. Mujica, M.; Crespo, M.; Benoussaad, M.; Junco, S.; Fourquet, J.Y. Robust variable admittance control for human–robot co-manipulation of objects with unknown load. Robot. Comput.-Integr. Manuf. 2023, 79, 102408. [Google Scholar] [CrossRef]
  49. Dröder, K.; Bobka, P.; Germann, T.; Gabriel, F.; Dietrich, F. A Machine Learning-Enhanced Digital Twin Approach for Human-Robot-Collaboration. Procedia CIRP 2018, 76, 187–192. [Google Scholar] [CrossRef]
  50. Islam, S.O.B.; Lughmani, W.A.; Qureshi, W.S.; Khalid, A.; Mariscal, M.A.; Garcia-Herrero, S. Exploiting visual cues for safe and flexible cyber-physical production systems. Res. Artic. Adv. Mech. Eng. 2019, 11, 1–13. [Google Scholar] [CrossRef]
  51. Zhang, R.; Li, J.; Zheng, P.; Lu, Y.; Bao, J.; Sun, X. A fusion-based spiking neural network approach for predicting collaboration request in human-robot collaboration. Robot. Comput.-Integr. Manuf. 2022, 78, 102383. [Google Scholar] [CrossRef]
  52. Liu, Z.; Liu, Q.; Xu, W.; Liu, Z.; Zhou, Z.; Chen, J. Deep learning-based human motion prediction considering context awareness for human-robot collaboration in manufacturing. Procedia CIRP 2019, 83, 272–278. [Google Scholar] [CrossRef]
  53. Huang, Z.; Shen, Y.; Li, J.; Fey, M.; Brecher, C. A Survey on AI-Driven Digital Twins in Industry 4.0: Smart Manufacturing and Advanced Robotics. Sensors 2021, 21, 6340. [Google Scholar] [CrossRef]
  54. Dmytriyev, Y.; Insero, F.; Carnevale, M.; Giberti, H. Brain–Computer Interface and Hand-Guiding Control in a Human–Robot Collaborative Assembly Task. Machines 2022, 10, 654. [Google Scholar] [CrossRef]
  55. Ji, Z.; Liu, Q.; Xu, W.; Yao, B.; Liu, J.; Zhou, Z. A closed-loop brain-computer interface with augmented reality feedback for industrial human-robot collaboration. Int. J. Adv. Manuf. Technol. 2021, 124, 3083–3098. [Google Scholar] [CrossRef]
  56. Jin, T.; Sun, Z.; Li, L.; Zhang, Q.; Zhu, M.; Zhang, Z.; Yuan, G.; Chen, T.; Tian, Y.; Hou, X.; et al. Triboelectric nanogenerator sensors for soft robotics aiming at digital twin applications. Nat. Commun. 2020, 11, 5381. [Google Scholar] [CrossRef] [PubMed]
  57. Mnih, V.; Kavukcuoglu, K.; Silver, D.; Graves, A.; Antonoglou, I.; Wierstra, D.; Riedmiller, M. Playing Atari with Deep Reinforcement Learning. arXiv 2013, arXiv:1312.5602. [Google Scholar] [CrossRef]
  58. Naveed, K.; Anjum, M.L.; Hussain, W.; Lee, D. Deep introspective SLAM: Deep reinforcement learning based approach to avoid tracking failure in visual SLAM. Auton. Robot. 2022, 46, 705–724. [Google Scholar] [CrossRef]
  59. Liu, Y.; Xu, H.; Liu, D.; Wang, L. A digital twin-based sim-to-real transfer for deep reinforcement learning-enabled industrial robot grasping. Robot. Comput.-Integr. Manuf. 2022, 78, 102365. [Google Scholar] [CrossRef]
  60. Vrabič, R.; Škulj, G.; Malus, A.; Kozjek, D.; Selak, L.; Bračun, D.; Podržaj, P. An architecture for sim-to-real and real-to-sim experimentation in robotic systems. Procedia CIRP 2021, 104, 336–341. [Google Scholar] [CrossRef]
  61. Sun, X.; Zhang, R.; Liu, S.; Lv, Q.; Bao, J.; Li, J. A digital twin-driven human–robot collaborative assembly-commissioning method for complex products. Int. J. Adv. Manuf. Technol. 2022, 118, 3389–3402. [Google Scholar] [CrossRef]
  62. Xia, K.; Sacco, C.; Kirkpatrick, M.; Saidy, C.; Nguyen, L.; Kircaliali, A.; Harik, R. A digital twin to train deep reinforcement learning agent for smart manufacturing plants: Environment, interfaces and intelligence. J. Manuf. Syst. 2021, 58, 210–230. [Google Scholar] [CrossRef]
  63. Matulis, M.; Harvey, C. A robot arm digital twin utilising reinforcement learning. Comput. Graph. 2021, 95, 106–114. [Google Scholar] [CrossRef]
  64. Nourmohammadi, A.; Fathi, M.; Ng, A.H. Balancing and scheduling assembly lines with human-robot collaboration tasks. Comput. Oper. Res. 2022, 140, 105674. [Google Scholar] [CrossRef]
  65. Bansal, R.; Khanesar, M.A.; Branson, D. Ant colony optimization algorithm for industrial robot programming in a digital twin. In Proceedings of the ICAC 2019—2019 25th IEEE International Conference on Automation and Computing, Lancaster, UK, 5–7 September 2019. [Google Scholar] [CrossRef]
  66. Zhu, X.; Ji, Y. A digital twin–driven method for online quality control in process industry. Int. J. Adv. Manuf. Technol. 2022, 119, 3045–3064. [Google Scholar] [CrossRef]
  67. Kennel-Maushart, F.; Poranne, R.; Coros, S. Multi-Arm Payload Manipulation via Mixed Reality; IEEE: Philadelphia, PA, USA, 2022; pp. 11251–11257. [Google Scholar] [CrossRef]
  68. Asad, U.; Rasheed, S.; Lughmani, W.A.; Kazim, T.; Khalid, A.; Pannek, J. Biomechanical Modeling of Human–Robot Accident Scenarios: A Computational Assessment for Heavy-Payload-Capacity Robots. Appl. Sci. 2023, 13, 1957. [Google Scholar] [CrossRef]
  69. Aubert, K.; Germaneau, A.; Rochette, M.; Ye, W.; Severyns, M.; Billot, M.; Rigoard, P.; Vendeuvre, T. Development of Digital Twins to Optimize Trauma Surgery and Postoperative Management. A Case Study Focusing on Tibial Plateau Fracture. Front. Bioeng. Biotechnol. 2021, 9, 722275. [Google Scholar] [CrossRef] [PubMed]
  70. Liang, L.; Liu, M.; Martin, C.; Sun, W. A deep learning approach to estimate stress distribution: A fast and accurate surrogate of finite-element analysis. J. R. Soc. Interface 2018, 15, 20170844. [Google Scholar] [CrossRef] [Green Version]
  71. Aivaliotis, P.; Georgoulias, K.; Chryssolouris, G. The use of Digital Twin for predictive maintenance in manufacturing. Int. J. Comput. Integr. Manuf. 2019, 32, 1067–1080. [Google Scholar] [CrossRef]
  72. Kapteyn, M.G. Mathematical and Computational Foundations to Enable Predictive Digital Twins at Scale. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2021. [Google Scholar]
  73. Calka, M.; Perrier, P.; Ohayon, J.; Grivot-Boichon, C.; Rochette, M.; Payan, Y. Machine-Learning based model order reduction of a biomechanical model of the human tongue. Comput. Methods Programs Biomed. 2021, 198, 105786. [Google Scholar] [CrossRef] [PubMed]
  74. Barricelli, B.R.; Casiraghi, E.; Gliozzo, J.; Petrini, A.; Valtolina, S. Human Digital Twin for Fitness Management. IEEE Access 2020, 8, 26637–26664. [Google Scholar] [CrossRef]
  75. Lauzeral, N.; Borzacchiello, D.; Kugler, M.; George, D.; Rémond, Y.; Hostettler, A.; Chinesta, F. A model order reduction approach to create patient-specific mechanical models of human liver in computational medicine applications. Comput. Methods Programs Biomed. 2019, 170, 95–106. [Google Scholar] [CrossRef] [Green Version]
  76. Thumm, J.; Althoff, M. Provably Safe Deep Reinforcement Learning for Robotic Manipulation in Human Environments. In Proceedings of the 2022 International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 23–27 May 2022. [Google Scholar]
  77. Mahadevan, K.; Sousa, M.; Tang, A.; Grossman, T. “Grip-that-there”: An Investigation of Explicit and Implicit Task Allocation Techniques for Human-Robot Collaboration. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; pp. 1–14. [Google Scholar] [CrossRef]
  78. Macenski, S.; Foote, T.; Gerkey, B.; Lalancette, C.; Woodall, W. Robot Operating System 2: Design, architecture, and uses in the wild. Sci. Robot. 2022, 7, 66. [Google Scholar] [CrossRef]
  79. Kousi, N.; Gkournelos, C.; Aivaliotis, S.; Lotsaris, K.; Bavelos, A.C.; Baris, P.; Michalos, G.; Makris, S. Digital twin for designing and reconfiguring human–robot collaborative assembly lines. Appl. Sci. 2021, 11, 4620. [Google Scholar] [CrossRef]
  80. Andaluz, V.H.; Chicaiza, F.A.; Gallardo, C.; Quevedo, W.X.; Varela, J.; Sánchez, J.S.; Arteaga, O. Unity3D-MatLab Simulator in Real Time for Robotics Applications; Springer: Cham, Switzerland, 2016; Volume 9768, pp. 246–263. [Google Scholar] [CrossRef]
  81. Farley, A.; Wang, J.; Marshall, J.A. How to pick a mobile robot simulator: A quantitative comparison of CoppeliaSim, Gazebo, MORSE and Webots with a focus on accuracy of motion. Simul. Model. Pract. Theory 2022, 120, 102629. [Google Scholar] [CrossRef]
  82. Rojas, M.; Hermosilla, G.; Yunge, D.; Farias, G. An Easy to Use Deep Reinforcement Learning Library for AI Mobile Robots in Isaac Sim. Appl. Sci. 2022, 12, 8429. [Google Scholar] [CrossRef]
  83. Clausen, C.S.B.; Ma, Z.G.; Jørgensen, B.N. Can we benefit from game engines to develop digital twins for planning the deployment of photovoltaics? Energy Inform. 2022, 5, 42. [Google Scholar] [CrossRef]
  84. Kuts, V.; Otto, T.; Tähemaa, T.; Bondarenko, Y. Digital twin based synchronised control and simulation of the industrial robotic cell using virtual reality. J. Mach. Eng. 2019, 19, 128–144. [Google Scholar] [CrossRef] [Green Version]
  85. Srinivasan, M.; Mubarrat, S.T.; Humphrey, Q.; Chen, T.; Binkley, K.; Chowdhury, S.K. The Biomechanical Evaluation of a Human-Robot Collaborative Task in a Physically Interactive Virtual Reality Simulation Testbed. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2021, 65, 403–407. [Google Scholar] [CrossRef]
  86. Mania, P.; Kenfack, F.K.; Neumann, M.; Beetz, M. Imagination-enabled Robot Perception. IEEE Int. Conf. Intell. Robot. Syst. 2020, 936–943. [Google Scholar] [CrossRef]
  87. Yun, H.; Jun, M.B. Immersive and interactive cyber-physical system (I2CPS) and virtual reality interface for human involved robotic manufacturing. J. Manuf. Syst. 2022, 62, 234–248. [Google Scholar] [CrossRef]
  88. Mourtzis, D.; Angelopoulos, J.; Panopoulos, N. Closed-Loop Robotic Arm Manipulation Based on Mixed Reality. Appl. Sci. 2022, 12, 2972. [Google Scholar] [CrossRef]
  89. Alexopoulos, K.; Nikolakis, N.; Chryssolouris, G. Digital twin-driven supervised machine learning for the development of artificial intelligence applications in manufacturing. Int. J. Comput. Integr. Manuf. 2020, 33, 429–439. [Google Scholar] [CrossRef] [Green Version]
  90. Wu, P.; Qi, M.; Gao, L.; Zou, W.; Miao, Q.; Liu, L.L. Research on the virtual reality synchronization of workshop digital twin. In Proceedings of the 2019 IEEE 8th Joint International Information Technology and Artificial Intelligence Conference, ITAIC 2019, Chongqing, China, 24–26 May 2019; pp. 875–879. [Google Scholar] [CrossRef]
  91. Hernigou, P.; Olejnik, R.; Safar, A.; Martinov, S.; Hernigou, J.; Ferre, B. Digital twins, artificial intelligence, and machine learning technology to identify a real personalized motion axis of the tibiotalar joint for robotics in total ankle arthroplasty. Int. Orthop. 2021, 1, 3. [Google Scholar] [CrossRef]
  92. Rabah, S.; Assila, A.; Khouri, E.; Maier, F.; Ababsa, F.; Bourny, V.; Maier, P.; Mérienne, F. Towards improving the future of manufacturing through digital twin and augmented reality technologies. Procedia Manuf. 2018, 17, 460–467. [Google Scholar] [CrossRef]
  93. Akar, C.A.; Tekli, J.; Jess, D.; Khoury, M.; Kamradt, M.; Guthe, M. Synthetic Object Recognition Dataset for Industries; IEEE: Piscataway, NJ, USA, 2022; pp. 150–155. [Google Scholar] [CrossRef]
  94. Blaga, A.; Tamas, L. Augmented Reality for Digital Manufacturing. In Proceedings of the MED 2018—26th Mediterranean Conference on Control and Automation, Zadar, Croatia, 19–22 June 2018; pp. 173–178. [Google Scholar] [CrossRef]
  95. Li, C.; Zheng, P.; Li, S.; Pang, Y.; Lee, C.K. AR-assisted digital twin-enabled robot collaborative manufacturing system with human-in-the-loop. Robot. Comput.-Integr. Manuf. 2022, 76, 102321. [Google Scholar] [CrossRef]
  96. Ke, S.; Xiang, F.; Zhang, Z.; Zuo, Y. A enhanced interaction framework based on VR, AR and MR in digital twin. Procedia CIRP 2019, 83, 753–758. [Google Scholar] [CrossRef]
  97. Su, Y.P.; Chen, X.Q.; Zhou, T.; Pretty, C.; Chase, G. Mixed-Reality-Enhanced Human–Robot Interaction with an Imitation-Based Mapping Approach for Intuitive Teleoperation of a Robotic Arm-Hand System. Appl. Sci. 2022, 12, 4740. [Google Scholar] [CrossRef]
  98. Masood, T.; Egger, J. Augmented reality in support of Industry 4.0—Implementation challenges and success factors. Robot. Comput.-Integr. Manuf. 2019, 58, 181–195. [Google Scholar] [CrossRef]
  99. Han, S.; Liu, B.; Cabezas, R.; Twigg, C.D.; Zhang, P.; Petkau, J.; Yu, T.H.; Tai, C.J.; Akbay, M.; Wang, Z.; et al. MEgATrack. ACM Trans. Graph. (TOG) 2020, 39, 87. [Google Scholar] [CrossRef]
  100. Skantze, G. Turn-taking in Conversational Systems and Human-Robot Interaction: A Review. Comput. Speech Lang. 2021, 67, 101178. [Google Scholar] [CrossRef]
  101. Huang, W.; Xia, F.; Xiao, T.; Chan, H.; Liang, J.; Florence, P.; Zeng, A.; Tompson, J.; Mordatch, I.; Chebotar, Y.; et al. Inner Monologue: Embodied Reasoning through Planning with Language Models. In Proceedings of the 6th Conference on Robot Learning, Volume 205 of Proceedings of Machine Learning Research, Auckland, New Zealand, 14–18 December 2022; pp. 1769–1782. [Google Scholar]
  102. Qi, Q.; Tao, F.; Hu, T.; Anwer, N.; Liu, A.; Wei, Y.; Wang, L.; Nee, A.Y. Enabling technologies and tools for digital twin. J. Manuf. Syst. 2021, 58, 3–21. [Google Scholar] [CrossRef]
  103. Wang, J.; Wu, Q.; Remil, O.; Yi, C.; Guo, Y.; Wei, M. Modeling indoor scenes with repetitions from 3D raw point data. CAD Comput. Aided Des. 2018, 94, 1–15. [Google Scholar] [CrossRef]
  104. Wang, W.; Guo, H.; Li, X.; Tang, S.; Li, Y.; Xie, L.; Lv, Z. BIM Information Integration Based VR Modeling in Digital Twins in Industry 5.0. J. Ind. Inf. Integr. 2022, 28, 100351. [Google Scholar] [CrossRef]
  105. Gualtieri, L.; Rauch, E.; Vidoni, R. Development and validation of guidelines for safety in human-robot collaborative assembly systems. Comput. Ind. Eng. 2022, 163, 107801. [Google Scholar] [CrossRef]
  106. Havard, V.; Jeanne, B.; Lacomblez, M.; Baudry, D. Digital twin and virtual reality: A co-simulation environment for design and assessment of industrial workstations. Prod. Manuf. Res. 2019, 7, 472–489. [Google Scholar] [CrossRef] [Green Version]
  107. Maragkos, C.; Vosniakos, G.C.; Matsas, E. Virtual reality assisted robot programming for human collaboration. Procedia Manuf. 2019, 38, 1697–1704. [Google Scholar] [CrossRef]
  108. Bobka, P.; Germann, T.; Heyn, J.K.; Gerbers, R.; Dietrich, F.; Dröder, K. Simulation Platform to Investigate Safe Operation of Human-Robot Collaboration Systems. Procedia CIRP 2016, 44, 187–192. [Google Scholar] [CrossRef] [Green Version]
  109. Maruyama, T.; Ueshiba, T.; Tada, M.; Toda, H.; Endo, Y.; Domae, Y.; Nakabo, Y.; Mori, T.; Suita, K. Digital Twin-Driven Human Robot Collaboration Using a Digital Human. Sensors 2021, 21, 8266. [Google Scholar] [CrossRef]
  110. Grandi, F.; Prati, E.; Peruzzini, M.; Pellicciari, M.; Campanella, C.E. Design of ergonomic dashboards for tractors and trucks: Innovative method and tools. J. Ind. Inf. Integr. 2022, 25, 100304. [Google Scholar] [CrossRef]
  111. Use Cases—Omniverse Digital Twin Documentation. Available online: https://docs.omniverse.nvidia.com/prod_digital-twins/prod_digital-twins/warehouse-digital-twins/use-cases.html (accessed on 24 October 2022).
  112. Lee, H.; Kim, S.D.; Amin, M.A.U.A. Control framework for collaborative robot using imitation learning-based teleoperation from human digital twin to robot digital twin. Mechatronics 2022, 85, 102833. [Google Scholar] [CrossRef]
  113. 5 Important Augmented And Virtual Reality Trends For 2019 Everyone Should Read. Available online: https://www.forbes.com/sites/bernardmarr/2019/01/14/5-important-augmented-and-virtual-reality-trends-for-2019-everyone-should-read/ (accessed on 5 July 2022).
  114. Hagmann, K.; Hellings-Kuß, A.; Klodmann, J.; Richter, R.; Stulp, F.; Leidner, D. A Digital Twin Approach for Contextual Assistance for Surgeons During Surgical Robotics Training. Front. Robot. AI 2021, 8, 735566. [Google Scholar] [CrossRef]
  115. Rockwell Automation Deployed Collaborative Augmented Reality to Prevent Production Delays During Pandemic. Available online: https://www.ptc.com/en/case-studies/rockwell-automation-collaborative-augmented-reality (accessed on 2 October 2022).
  116. Zahabi, M.; Razak, A.M.A. Adaptive virtual reality-based training: A systematic literature review and framework. Virtual Real. 2020, 24, 725–752. [Google Scholar] [CrossRef]
  117. Blankendaal, R.A.M.; Bosse, T. Using Run-Time Biofeedback During Virtual Agent-Based Aggression De-Escalation Training. In Proceedings of the Advances in Practical Applications of Agents, Multi-Agent Systems, and Complexity: The PAAMS Collection: 16th International Conference, PAAMS 2018, Toledo, Spain, 20–22 June 2018; pp. 97–109. [Google Scholar] [CrossRef]
  118. Harichandran, A.; Johansen, K.W.; Jacobsen, E.L.; Teizer, J. A Conceptual Framework for Construction Safety Training using Dynamic Virtual Reality Games and Digital Twins. In Proceedings of the International Symposium on Automation and Robotics in Construction, Bogotá, Colombia, 13–15 July 2021. [Google Scholar]
119. Joshi, S.; Hamilton, M.; Warren, R.; Faucett, D.; Tian, W.; Wang, Y.; Ma, J. Implementing Virtual Reality technology for safety training in the precast/prestressed concrete industry. Appl. Ergon. 2021, 90, 103286. [Google Scholar] [CrossRef]
  120. Beloglazov, I.I.; Petrov, P.A.; Bazhin, V.Y. The concept of digital twins for tech operator training simulator design for mining and processing industry. Eurasian Mining 2020, 2020, 50–54. [Google Scholar] [CrossRef]
  121. You, S.; Kim, J.H.; Lee, S.H.; Kamat, V.; Robert, L.P. Enhancing perceived safety in human–robot collaborative construction using immersive virtual environments. Autom. Constr. 2018, 96, 161–170. [Google Scholar] [CrossRef]
  122. Matsas, E.; Vosniakos, G.C.; Batras, D. Prototyping proactive and adaptive techniques for human-robot collaboration in manufacturing using virtual reality. Robot. Comput.-Integr. Manuf. 2018, 50, 168–180. [Google Scholar] [CrossRef]
  123. Malik, A.A.; Masood, T.; Bilberg, A. Virtual reality in manufacturing: Immersive and collaborative artificial-reality in design of human-robot workspace. Int. J. Comput. Integr. Manuf. 2019, 33, 22–37. [Google Scholar] [CrossRef]
  124. Wang, Y.; Feng, J.; Liu, J.; Liu, X.; Wang, J. Digital Twin-based Design and Operation of Human-Robot Collaborative Assembly. IFAC-PapersOnLine 2022, 55, 295–300. [Google Scholar] [CrossRef]
  125. Malik, A.A.; Bilberg, A. Digital twins of human robot collaboration in a production setting. Procedia Manuf. 2018, 17, 278–285. [Google Scholar] [CrossRef]
  126. Nikolakis, N.; Alexopoulos, K.; Xanthakis, E.; Chryssolouris, G. The digital twin implementation for linking the virtual representation of human-based production tasks to their physical counterpart in the factory-floor. Int. J. Comput. Integr. Manuf. 2019, 32, 1–12. [Google Scholar] [CrossRef]
  127. Malik, A.A.; Brem, A. Digital twins for collaborative robots: A case study in human-robot interaction. Robot. Comput.-Integr. Manuf. 2021, 68, 102092. [Google Scholar] [CrossRef]
  128. Zhu, Z.; Liu, C.; Xu, X. Visualisation of the Digital Twin data in manufacturing by using Augmented Reality. Procedia CIRP 2019, 81, 898–903. [Google Scholar] [CrossRef]
  129. Ding, K.; Chan, F.T.; Zhang, X.; Zhou, G.; Zhang, F. Defining a Digital Twin-based Cyber-Physical Production System for autonomous manufacturing in smart shop floors. Int. J. Prod. Res. 2019, 57, 6315–6334. [Google Scholar] [CrossRef] [Green Version]
  130. Müller, M.; Mielke, J.; Pavlovskyi, Y.; Pape, A.; Masik, S.; Reggelin, T.; Häberer, S. Real-time combination of material flow simulation, digital twins of manufacturing cells, an AGV and a mixed-reality application. Procedia CIRP 2021, 104, 1607–1612. [Google Scholar] [CrossRef]
  131. Zhou, T.; Tang, D.; Zhu, H.; Zhang, Z. Multi-agent reinforcement learning for online scheduling in smart factories. Robot. Comput.-Integr. Manuf. 2021, 72, 102202. [Google Scholar] [CrossRef]
  132. Lv, Q.; Zhang, R.; Sun, X.; Lu, Y.; Bao, J. A digital twin-driven human-robot collaborative assembly approach in the wake of COVID-19. J. Manuf. Syst. 2021, 60, 837–851. [Google Scholar] [CrossRef] [PubMed]
  133. Chiriatti, G.; Ciccarelli, M.; Forlini, M.; Franchini, M.; Palmieri, G.; Papetti, A.; Germani, M. Human-Centered Design of a Collaborative Robotic System for the Shoe-Polishing Process. Machines 2022, 10, 1082. [Google Scholar] [CrossRef]
  134. Leng, J.; Zhang, H.; Yan, D.; Liu, Q.; Chen, X.; Zhang, D. Digital twin-driven manufacturing cyber-physical system for parallel controlling of smart workshop. J. Ambient. Intell. Humaniz. Comput. 2018, 10, 1155–1166. [Google Scholar] [CrossRef]
  135. Khalid, A.; Kirisci, P.; Khan, Z.H.; Ghrairi, Z.; Thoben, K.D.; Pannek, J. Security framework for industrial collaborative robotic cyber-physical systems. Comput. Ind. 2018, 97, 132–145. [Google Scholar] [CrossRef] [Green Version]
136. Laaki, H.; Miche, Y.; Tammi, K. Prototyping a Digital Twin for Real Time Remote Control over Mobile Networks: Application of Remote Surgery. IEEE Access 2019, 7, 20325–20336. [Google Scholar] [CrossRef]
  137. Dietz, M.; Vielberth, M.; Pernul, G. Integrating Digital Twin Security Simulations in the Security Operations Center. In Proceedings of the 15th International Conference on Availability, Reliability and Security (ARES ’20), Virtual Event, Ireland, 25–28 August 2020; ACM: New York, NY, USA, 2020. [Google Scholar] [CrossRef]
  138. Suhail, S.; Malik, S.U.R.; Jurdak, R.; Hussain, R.; Matulevičius, R.; Svetinovic, D. Towards situational aware cyber-physical systems: A security-enhancing use case of blockchain-based digital twins. Comput. Ind. 2022, 141, 103699. [Google Scholar] [CrossRef]
  139. Lv, Z.; Qiao, L.; Li, Y.; Yuan, Y.; Wang, F.Y. BlockNet: Beyond reliable spatial Digital Twins to Parallel Metaverse. Patterns 2022, 3, 100468. [Google Scholar] [CrossRef]
140. Zhang, J.; Tai, Y. Secure medical digital twin via human-centric interaction and cyber vulnerability resilience. Connect. Sci. 2022, 34, 895–910. [Google Scholar] [CrossRef]
  141. VIDEO: The Digital Athlete and How It’s Revolutionizing Player Health & Safety. Available online: https://www.nfl.com/playerhealthandsafety/equipment-and-innovation/aws-partnership/digital-athlete-spot (accessed on 2 February 2023).
  142. Greenbaum, D.; Lavazza, A.; Beier, K.; Bruynseels, K.; Sio, F.S.D.; Hoven, J.V.D. Digital Twins in Health Care: Ethical Implications of an Emerging Engineering Paradigm. Front. Genet. 2018, 9, 31. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Symbiotic human-machine relationship in Industry 5.0 [17].
Figure 2. Methodology & Structure.
Figure 3. Citation Network Graph.
Figure 4. Keyword Co-occurrence Network Map.
Figure 5. Human-centricity Co-occurrence Network Map.
Figure 6. Enabling Technologies and framework of Human-Centric Digital Twins in Industry 5.0.
Figure 7. Generic framework for HCDT for safety and ergonomics.
Figure 8. Nvidia Omniverse based DTs for Amazon and Pepsi [111].
Figure 9. Generic framework for Robotics System Training.
Figure 10. VR enhanced Digital Twin framework for HRC design [123].
Figure 11. Framework for Human-Robot CPS under Cyber-attack [135].
Figure 12. Generic framework for DTs in healthcare.
Table 1. Most cited articles in this review.

| S# | Citations | Authors | Title | Year | Journal |
| --- | --- | --- | --- | --- | --- |
| 1 | 158 | Lihui Wang, Robert X. Gao, József Váncza, Jörg Krüger, Xi Vincent Wang, Sotiris Makris, George Chryssolouris | Symbiotic human-robot collaborative assembly | 2019 | CIRP Annals |
| 2 | 155 | Kai Ding, Felix T. S. Chan, Zhang Xudong, Guanghui Zhou, Fuqiang Zhang | Defining a Digital Twin-based Cyber-Physical Production System for autonomous manufacturing in smart shop floors | 2019 | International Journal of Production Research |
| 3 | 144 | S. Nahavandi | Industry 5.0—A Human-Centric Solution | 2019 | Sustainability |
| 4 | 130 | Koenraad Bruynseels, Filippo Santoni de Sio, Jeroen van den Hoven | Digital Twins in Health Care: Ethical Implications of an Emerging Engineering Paradigm | 2018 | Frontiers in Genetics |
| 5 | 82 | Nikolaos Nikolakis, Kosmas Alexopoulos, Evangelos Xanthakis, George Chryssolouris | The digital twin implementation for linking the virtual representation of human-based production tasks to their physical counterpart in the factory-floor | 2019 | International Journal of Computer Integrated Manufacturing |
| 6 | 74 | P. Aivaliotis, Konstantinos Georgoulias, George Chryssolouris | The use of Digital Twin for predictive maintenance in manufacturing | 2019 | International Journal of Computer Integrated Manufacturing |
| 7 | 74 | Ali Ahmad Malik, Arne Bilberg | Digital twins of human robot collaboration in a production setting | 2018 | Procedia Manufacturing |
| 8 | 73 | Azfar Khalid, Pierre T. Kirisci, Zeashan Hameed Khan, Zied Ghrairi, Klaus-Dieter Thoben, Jürgen Pannek | Security framework for industrial collaborative robotic cyber-physical systems | 2018 | Computers in Industry |
Table 2. Keywords.

| id | Keyword | Occurrences | Link Strength |
| --- | --- | --- | --- |
| 1 | digital twin | 49 | 89 |
| 2 | human-robot collaboration | 25 | 52 |
| 3 | artificial intelligence | 15 | 37 |
| 4 | industry 4.0 | 12 | 31 |
| 5 | simulation | 15 | 28 |
| 6 | augmented reality | 9 | 27 |
| 7 | cyber-physical system | 9 | 24 |
| 8 | virtual reality | 16 | 23 |
| 9 | assembly | 8 | 20 |
| 10 | manufacturing | 7 | 19 |
| 11 | reinforcement learning | 11 | 18 |
| 12 | human-robot interaction | 8 | 16 |
| 13 | robotics | 5 | 16 |
| 14 | cobot | 5 | 14 |
| 15 | ergonomics | 5 | 13 |
| 16 | mixed reality | 6 | 13 |
| 17 | deep learning | 4 | 12 |
| 18 | safety | 6 | 12 |
| 19 | smart factory | 4 | 12 |
| 20 | smart manufacturing | 5 | 10 |
| 21 | internet of things | 3 | 9 |
| 22 | collision avoidance | 3 | 8 |
| 23 | human-centricity | 3 | 7 |
| 24 | sustainability | 2 | 7 |
| 25 | human-centered design | 3 | 6 |
| 26 | predictive maintenance | 2 | 6 |
| 27 | industry 5.0 | 3 | 5 |
| 28 | robot | 2 | 5 |
| 29 | brain-computer interface | 2 | 4 |
| 30 | digital transformation | 2 | 4 |
| 31 | finite element analysis | 3 | 4 |
| 32 | operator 4.0 | 2 | 4 |
| 33 | safety training | 2 | 4 |
| 34 | personalized medicine | 2 | 3 |
Table 3. Summary of previously published Review Papers on Digital Twins.

| Ref. | Year | Focus | Outcome |
| --- | --- | --- | --- |
| [26] | 2021 | VR/AR solutions in HRC | Challenges for AR/VR solutions are identified, especially with respect to calibration and tracking of objects |
| [27] | 2022 | DTs in manufacturing, classified by publication type, year, country, and manufacturing sector | Classified literature by simulation method and by attributes of the physical and digital layers (such as optimization, monitoring, control, etc.) |
| [23] | 2022 | DTs in Construction and Facility Management | Building Information Management (BIM) is a more developed and applied concept in construction; research on the integration of BIM and IoT in a DT framework lags behind |
| [24] | 2022 | Augmented Reality in Digital Twins | Identified that AR is used for information visualization, guidance, and control in DTs, mostly in the areas of manufacturing, training, and construction |
| [28] | 2022 | DTs in the process industry, focusing on challenges and barriers to adoption, as well as enablers | System integration challenges, data and IP security, performance issues in real-time data exchange, and organizational issues are highlighted as major barriers |
| [29] | 2021 | Classification of DT literature in the safety domain | A framework for assessing the capability of DTs to improve safety |
| [19] | 2022 | Defining Human Digital Twins (HDTs), identifying their attributes and use cases | Attributes of HDTs include physical, physiological, perceptual, cognitive, and emotional attributes; HDTs are commonly used in the health industry and in product design and validation |
Table 4. Applications in Ergonomics and Safety.

| Ref. | Year | Tools | Proposed Idea |
| --- | --- | --- | --- |
| [108] | 2016 | Custom GUI | A custom software tool, the "Human-Industrial-Robot-Interaction-Tool" (HIRIT), was developed for safety evaluation in HRC, using depth sensors for human motion capture and a Genetic Algorithm for safety-distance estimation |
| [32] | 2022 | CNN Motion Capture | Human pose estimation using an RGBD camera for human localization and collision avoidance in HRC |
| [85] | 2021 | OpenSim, Unity | Ergonomic analysis of a pick-and-place task using motion capture, biological sensors, and a musculoskeletal human model |
| [109] | 2021 | ROS | Ergonomic analysis and dynamic scheduling in an HRC scenario with a personalized Human Digital Twin featuring motion analysis and a skin surface model |
| [76] | 2022 | ROS, Gazebo, Open Dynamics Engine | Implementing an Actor-Critic-based Reinforcement Learning algorithm to ensure collision-free HRC |
| [110] | 2022 | Siemens Jack, Unity | Ergonomic human-centered design of a tractor dashboard using VR simulations in Unity and ergonomics analysis in Siemens Jack, incorporating analysis of human motion and physiological sensor data |
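A common ingredient of the safety applications above (e.g., the depth-sensor distance estimation of [108] and the collision avoidance of [32]) is a separation check between tracked human keypoints and the robot, with robot speed scaled down as the human approaches. The sketch below illustrates that speed-and-separation idea; the thresholds, the linear scaling law, and the function names are illustrative assumptions, not values or APIs from the cited works.

```python
import math

def min_separation(human_keypoints, robot_tcp):
    """Smallest Euclidean distance between any tracked human keypoint
    (e.g., from an RGBD pose estimator) and the robot tool centre point."""
    return min(math.dist(p, robot_tcp) for p in human_keypoints)

def speed_limit(separation, stop_dist=0.3, full_speed_dist=1.2, v_max=1.0):
    """Scale the allowed robot speed linearly with human-robot separation:
    full stop inside stop_dist, full speed beyond full_speed_dist.
    All distances in metres, speed in m/s (illustrative values)."""
    if separation <= stop_dist:
        return 0.0
    if separation >= full_speed_dist:
        return v_max
    return v_max * (separation - stop_dist) / (full_speed_dist - stop_dist)
```

In a real HCDT the keypoints would stream from the motion-capture pipeline and the limit would feed the robot controller; the linear profile here stands in for whatever (typically standard-mandated) speed law the deployment requires.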
Table 5. Applications in Training of Robotics Systems.

| Ref. | Year | Tools | Proposed Idea |
| --- | --- | --- | --- |
| [89] | 2021 | CNN, CAD/Blender | Learning to recognize the orientation of arbitrarily placed plastic parts using a CNN-based ML model trained on a DT-generated training set |
| [86] | 2020 | ROS, Unreal Engine 4 | Improving robot perception by comparing real visual sensor data to expected data generated by a photorealistic virtual model |
| [112] | 2022 | MATLAB | Using a DT for CNN-based pose estimation, with transfer to a YuMi cobot for imitation and teleoperation |
| [65] | 2019 | Python, ROS, Gazebo | Using Ant Colony Optimization to train a robot to reach target states while avoiding obstacles, validated in the DT before transfer to the real robot |
| [87] | 2022 | Unity, ROS | VR-based control and programming of a cobot and its DT for a pick-and-place task using ROS, Unity, and MTConnect |
| [88] | 2022 | Unity, HoloLens | Ad hoc robot navigation and safety visualization in HRC using AR, hand-gesture control, and an Android application |
| [77] | 2021 | ROS, Gazebo, MoveIt, Unity | VR-based ad hoc DT-enabled cobot control in pick-and-place tasks, validated by a user study |
| [30] | 2021 | Unity, HoloLens | In a DT-enabled HRC scenario, using DL (RetinaNet) for object recognition; eye tracking, hand gestures, and voice commands to control the system; and a DT for visualization through HoloLens |
| [62] | 2021 | Siemens Tecnomatix | Used a DT of a PLC-controlled cobot for Deep Q-Network (DQN) training to develop dynamic, robust scheduling in a manufacturing setting |
| [63] | 2021 | Unity, TensorFlow | Reinforcement Learning training using TensorFlow on a DT of a robotic arm for a pick-and-place task |
| [59] | 2022 | V-REP, ROS | Using Deep Reinforcement Learning to train a robot grasping policy on a DT in the context of assembly |
| [60] | 2021 | ROS, Gazebo | An architecture for reinforcement training in ROS on a DT, followed by transfer to the real system, discussed through case studies on a Fanuc industrial robot and fleet management of mobile robots |
| [95] | 2022 | Vuforia, Unity | Implementing multi-robot collaborative teleoperation using AR and a DT, with Reinforcement Learning for robot motion planning |
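The recurring pattern in the table above is to train a policy entirely inside the virtual model and only then transfer it to the physical robot. That loop can be sketched with tabular Q-learning on a toy one-dimensional "reach the target" environment standing in for a digital twin; the environment, reward shaping, and hyperparameters are illustrative assumptions — the cited works use DQN, DDPG, and similar methods in far richer simulators (Gazebo, Tecnomatix, Unity).

```python
import random

class LineWorldTwin:
    """Toy 1-D stand-in for a digital twin: an agent on a line of cells
    must reach a target cell (illustrative, not from the cited works)."""
    def __init__(self, size=7, target=6):
        self.size, self.target = size, target
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):
        # action: 0 = move left, 1 = move right (clamped to the grid)
        self.pos = max(0, min(self.size - 1, self.pos + (1 if action == 1 else -1)))
        done = self.pos == self.target
        return self.pos, (1.0 if done else -0.01), done

def train_in_twin(env, episodes=2000, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning run entirely inside the 'twin'; the learned table
    would then be transferred to (and validated on) the physical system."""
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(env.size)]
    for _ in range(episodes):
        s, done, steps = env.reset(), False, 0
        while not done and steps < 50:
            # epsilon-greedy action selection
            a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda x: q[s][x])
            s2, r, done = env.step(a)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s, steps = s2, steps + 1
    return q
```

The same structure — simulated environment, training loop, greedy rollout for validation — is what the DT-based RL pipelines in [59,60,62,63] implement at scale.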
Table 6. Applications in Design, Validation, and Testing.

| Ref. | Year | Tools | Proposed Idea |
| --- | --- | --- | --- |
| [125] | 2018 | Siemens Tecnomatix | Using a DT in an HRC assembly scenario to optimize workstation layout by analyzing collision, placement, human reach, and vision in the virtual environment |
| [126] | 2019 | Camunda BPM, Java, XML3D | In a factory-floor setting, analysis of human motion and action categorization (such as walking, picking, screwing), with subsequent transfer to a virtual environment for process optimization and evaluation of cycle time and ergonomics |
| [127] | 2021 | Siemens Tecnomatix | Tested, analyzed, and optimized a collaborative assembly case study for safety, cycle time, and productivity using a DT with a UR5 cobot |
| [128] | 2019 | Unity, HoloLens | Using AR and a DT in CNC machining operations for visualization and communication (voice/gesture commands) |
| [66] | 2022 | COMSOL Multiphysics | A DT of an optical fiber drawing process used for real-time monitoring and evaluation (using DL algorithms) of process parameters and quality control |
| [129] | 2019 | Siemens Tecnomatix | Process planning and configuration for manufacturing an impeller using industrial robots, machine tools, and an AGV within a DT-based Cyber-Physical Production System (CPPS) |
| [130] | 2021 | VINCENT, Unity | Creating a DT of a proANT 436 AGV, visualized in a projection system using Unity, for material flow simulation in a virtual logistics system |
| [64] | 2022 | GAMS | The HRC Assembly Line Balancing Problem (ALBP) is solved using the General Algebraic Modeling System (GAMS), Simulated Annealing, and Mixed-Integer Linear Programming (MILP) |
| [90] | 2019 | Unity, Visual C#, Autodesk 3ds Max | Implementing a DT of an intelligent workshop created using 3ds Max and Unity, with real-time synchronization for monitoring and digitization |
| [131] | 2021 | AI, Deep Q-Network, GA | Implementation of a reinforcement-learning-based scheduler for dynamic production scheduling in a smart factory with real machine and operation sensor data |
| [61] | 2022 | OpenPose, DDPG | Using Deep Reinforcement Learning (DDPG) for motion planning, with OpenPose and semantic segmentation for human intent and task prediction, in a collaborative assembly-commissioning case study on an automotive generator |
| [132] | 2021 | D-DDPG | Presented a DT-based HRC assembly framework integrating data from digital twin spaces; a double deep deterministic policy gradient (D-DDPG) is applied to optimize the HRC strategy and action sequence |
| [133] | 2022 | Siemens Tecnomatix, CAM NX | Human-centered design of DT-assisted collaborative shoe polishing, simulated in Tecnomatix, using a novel polishing tool controlled by CAM NX on a UR5 robot |
| [134] | 2021 | Java | Implementing a DT-driven smart manufacturing workshop to optimize mass customization for increased flexibility |
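Reference [64] attacks assembly line balancing with MILP and Simulated Annealing in GAMS; the annealing half of that approach is simple enough to sketch. The version below minimizes cycle time (the busiest station's load) over task-to-station assignments; the task times, neighbourhood move, and cooling schedule are illustrative assumptions, and real ALBP formulations additionally enforce task precedence constraints, which are omitted here.

```python
import math
import random

def cycle_time(assignment, times, n_stations):
    """Cycle time of a line = load of the busiest station."""
    loads = [0.0] * n_stations
    for task, station in enumerate(assignment):
        loads[station] += times[task]
    return max(loads)

def anneal_balance(times, n_stations, iters=5000, t0=5.0, cooling=0.999, seed=1):
    """Simulated annealing over task-to-station assignments (no precedence)."""
    random.seed(seed)
    assign = [random.randrange(n_stations) for _ in times]
    cur = best = cycle_time(assign, times, n_stations)
    best_assign, t = assign[:], t0
    for _ in range(iters):
        # Neighbourhood move: reassign one random task to a random station.
        i, s = random.randrange(len(times)), random.randrange(n_stations)
        old = assign[i]
        assign[i] = s
        cand = cycle_time(assign, times, n_stations)
        # Accept improvements always; worse moves with Boltzmann probability.
        if cand <= cur or random.random() < math.exp((cur - cand) / t):
            cur = cand
            if cur < best:
                best, best_assign = cur, assign[:]
        else:
            assign[i] = old
        t *= cooling
    return best, best_assign
```

In the DT context of [64], candidate assignments would be evaluated against the virtual line (human and robot task times) rather than a static time vector.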
Table 7. Applications in Security of Cyber-Physical Systems.

| Ref. | Year | Proposed Idea |
| --- | --- | --- |
| [137] | 2020 | Sets out formal requirements for a security framework that employs DT-based simulations to detect, analyze, and handle security incidents |
| [138] | 2022 | Demonstrates the use of blockchain technology to ensure the reliability and security of DT data for CPS |
| [139] | 2022 | Envisions a parallel metaverse that leverages blockchain technology to ensure data reliability |
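The integrity guarantee that the blockchain-based approaches of [138,139] provide for DT data rests on hash-chaining: each telemetry record commits to the hash of its predecessor, so any retroactive edit invalidates every later link. A minimal sketch of that mechanism (SHA-256 only, with no distributed consensus layer — a deliberate simplification of what the cited systems deploy):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first record

def chain_records(records):
    """Link DT telemetry records into a tamper-evident hash chain."""
    chain, prev = [], GENESIS
    for rec in records:
        payload = json.dumps(rec, sort_keys=True)  # canonical serialization
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        chain.append({"data": rec, "prev": prev, "hash": digest})
        prev = digest
    return chain

def verify(chain):
    """Recompute every link; any edited record or broken link fails."""
    prev = GENESIS
    for block in chain:
        payload = json.dumps(block["data"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True
```

A blockchain adds distributed replication and consensus on top of exactly this chaining, so that no single compromised node can rewrite the DT's history.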
Table 8. Applications in Rehab, Well-Being, and Health Management.

| Ref. | Year | Tools | Proposed Idea |
| --- | --- | --- | --- |
| [140] | 2021 | CNN, Oculus MR | MR-based surgery training and analysis for Video-Assisted Thoracoscopic Surgery (VATS) using patient CT images for mesh generation, visual rendering, and haptic feedback |
| [91] | 2021 | CATIA, PCA | Using patient-specific DTs developed from CT scans, the tibiotalar joint axis is identified by comparing the patient's real ankle to the DT via machine learning |
| [75] | 2019 | MATLAB, ROM | A patient-specific computational model of human liver anatomy created using Reduced-Order Modelling and Statistical Shape Analysis, solved using Sparse Subspace Learning coupled with an FE solver |
| [73] | 2021 | ANSYS Twin Builder | Creating a patient-specific FE model of the human tongue and an ML-based reduced-order model to simulate its nonlinear behavior |
| [69] | 2021 | 3D Slicer, Simpleware, ANSYS | Patient-specific FE model of a tibial fracture based on 3D X-ray imagery to assess stress distribution and fracture risk under different possible interventions |
| [70] | 2018 | MATLAB, TensorFlow, ABAQUS | Stress distribution on aortic walls estimated by a neural network trained on FEM results, with aorta shape encoding via PCA applied to real patient geometries |
| [68] | 2023 | ANSYS, 3ds Max | Quasi-static and dynamic FEM analysis for biomechanical simulation of accidental scenarios during HRC |
| [74] | 2020 | MATLAB | Used machine-learning classifiers and statistical tools to model athlete performance |
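A recurring pattern in the table above ([70,73,75]) is replacing an expensive finite-element solve with a cheap surrogate fitted to precomputed snapshots, so the Human Digital Twin can respond in real time. The sketch below captures only the offline/online split using piecewise-linear interpolation over a single load parameter and a made-up stress function; the cited works use neural networks and proper model-order reduction over far higher-dimensional inputs.

```python
from bisect import bisect_left

def expensive_fe_solve(load):
    """Stand-in for a costly finite-element run: peak stress (arbitrary units)
    as a nonlinear function of applied load. Purely illustrative."""
    return 2.0 * load + 0.05 * load ** 2

def build_surrogate(sample_loads):
    """Offline stage: run the expensive model at a few parameter points
    (snapshots), then return a cheap interpolating surrogate."""
    table = sorted((p, expensive_fe_solve(p)) for p in sample_loads)
    xs = [p for p, _ in table]
    ys = [v for _, v in table]

    def surrogate(load):
        # Online stage: piecewise-linear interpolation replaces the FE solve.
        i = min(max(bisect_left(xs, load), 1), len(xs) - 1)
        x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
        return y0 + (y1 - y0) * (load - x0) / (x1 - x0)

    return surrogate
```

The trade-off is the one the cited papers quantify: surrogate accuracy between snapshots versus the orders-of-magnitude speed-up that makes patient-specific, real-time DTs feasible.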
Table 9. Guidelines for enabling technologies in application domains.
Table 9. Guidelines for enabling technologies in application domains.
| Category | Input/Technology | Ergonomics & Safety | Robotics Training | User Training | Design & Validation | Security of CPS | Health & Well-Being |
|---|---|---|---|---|---|---|---|
| Sensors | Biological Sensors | High | Low | Low | Low | Medium | High |
| Sensors | Visual & Motion | High | High | Low | High | High | Low |
| Sensors | Direct Input | Medium | Medium | High | Low | Low | Low |
| HDT Inputs | Pose | High | High | Low | Medium | Medium | Low |
| HDT Inputs | Hand Gesture | Medium | Low | High | Medium | Low | Low |
| HDT Inputs | Facial Expression | Medium | Low | Medium | Low | Medium | Medium |
| HDT Inputs | Gaze | High | High | Medium | Low | Medium | High |
| HDT Inputs | Motion Prediction | High | High | Low | Medium | Medium | Low |
| HDT Inputs | Language (NLP) | Medium | Medium | High | Low | Medium | Low |
| HDT Inputs | BCI | Medium | Medium | Low | Low | Low | Medium |
| Machine Learning Requirements | Optimization | Medium | Low | Low | High | Low | Low |
| Machine Learning Requirements | Deep Learning | Medium | High | Medium | Medium | High | Medium |
| Machine Learning Requirements | Reinforcement Learning | Medium | High | Low | Low | Low | Low |
| Feedback and Interface | Audio | Medium | Low | High | Low | Medium | Low |
| Feedback and Interface | Visual | Medium | Medium | High | High | Medium | High |
| Feedback and Interface | Haptic | Low | Low | Medium | Low | Low | High |
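A guideline matrix like Table 9 can also be queried programmatically when shortlisting enabling technologies for a target application domain. The sketch below encodes only an excerpt of the table; the function name `rank_technologies` and the numeric scoring of utility levels are illustrative choices, not part of the paper's method.

```python
# Rank enabling technologies for a chosen HCDT application domain,
# using an excerpt of the Table 9 guideline matrix.
UTILITY = {"High": 3, "Medium": 2, "Low": 1}

# {technology: {domain: utility level}} -- excerpt of Table 9
GUIDELINES = {
    "Biological Sensors": {"Ergonomics & Safety": "High",
                           "User Training": "Low",
                           "Health & Well-Being": "High"},
    "Visual & Motion":    {"Ergonomics & Safety": "High",
                           "User Training": "Low",
                           "Health & Well-Being": "Low"},
    "Direct Input":       {"Ergonomics & Safety": "Medium",
                           "User Training": "High",
                           "Health & Well-Being": "Low"},
    "Haptic":             {"Ergonomics & Safety": "Low",
                           "User Training": "Medium",
                           "Health & Well-Being": "High"},
}

def rank_technologies(domain):
    """Return technologies sorted from highest to lowest utility for a domain."""
    scored = [(tech, UTILITY[levels[domain]])
              for tech, levels in GUIDELINES.items()]
    return [tech for tech, _ in sorted(scored, key=lambda t: -t[1])]

print(rank_technologies("User Training"))
# -> ['Direct Input', 'Haptic', 'Biological Sensors', 'Visual & Motion']
```

Such a lookup makes the guideline reusable at design time, e.g. to pre-select sensor suites before building a domain-specific HCDT framework.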
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Asad, U.; Khan, M.; Khalid, A.; Lughmani, W.A. Human-Centric Digital Twins in Industry: A Comprehensive Review of Enabling Technologies and Implementation Strategies. Sensors 2023, 23, 3938. https://doi.org/10.3390/s23083938
