Review

Control System Design and Methods for Collaborative Robots: Review

by Ayesha Hameed 1,†, Andrzej Ordys 1,*,†, Jakub Możaryn 1,† and Anna Sibilska-Mroziewicz 2,†

1 Institute of Automatic Control and Robotics, Warsaw University of Technology, 00-661 Warsaw, Poland
2 Institute of Micromechanics and Photonics, Warsaw University of Technology, 00-661 Warsaw, Poland
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Appl. Sci. 2023, 13(1), 675; https://doi.org/10.3390/app13010675
Submission received: 28 November 2022 / Revised: 27 December 2022 / Accepted: 28 December 2022 / Published: 3 January 2023
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

Collaborative robots cooperate with humans to assist them in undertaking simple-to-complex tasks in several fields, including industry, education, agriculture, healthcare services, security, and space exploration. These robots play a vital role in the revolution of Industry 4.0, which defines new standards of manufacturing and the organization of products in the industry. Incorporating collaborative robots in the workspace improves efficiency, but it also introduces several safety risks; effective safety measures then become indispensable to ensure safe and robust interaction. This paper presents a review of the low-level control methodologies of collaborative robots to assess the current status of human–robot collaboration over the last decade. First, we discuss the classification of human–robot collaboration, the architectures of systems, and the complex requirements on control strategies. The most commonly used control approaches are presented and discussed. Several control methods reported in industrial applications are elaborated upon, with a prime focus on HR-collaborative assembly operations. Since physical HRC is a critical control problem for the co-manipulation task, this article identifies key control challenges, such as the prediction of human intentions, safety, and human-caused disturbances in motion synchronization; the proposed solutions are analyzed afterwards. The discussion at the end of the paper summarizes the features of control systems that should be incorporated within a systematic framework to allow the execution of a robotic task from global task planning to low-level control implementation for safe and robust interactions.

1. Introduction

Human–robot collaboration is an innovative area aiming to construct an environment for safe and efficient collaboration between humans and robots to accomplish a specific task. This area introduces a new type of robot, called collaborative robots or cobots. Unlike traditional robots, collaborative robots can work together with humans to perform tasks in several fields of life, including industry, education, agriculture, healthcare services, security, and space exploration. The term “collaborative robot” was first introduced in the academic literature by Peshkin and Colgate in 1999 [1]. Initially, collaborative robots were known as intelligent assist devices (IADs), until standards for intelligent assist devices with personal safety elements were composed by the Robotic Industries Association (RIA) in 2003; thereafter, the term collaborative robot became standard. The first commercial collaborative robot capable of performing automated and streamlined repetitive industrial tasks was developed by KUKA in 2003 [2]. Figure 1 depicts the stages of development of collaborative robots since 2003. The cobots considered here are single-arm and dual-arm designs with six or seven degrees of freedom.
The concept of collaborative robots is to combine the cognitive skills of humans with the precision and dexterity of robots to accomplish complex tasks. Unlike classical industrial robots, cobots are simple, lightweight, and reliable, with easy-to-deploy enhanced sensors. Thanks to these characteristics, they can detect human motion and are equipped with collision avoidance algorithms. Additionally, they can be programmed from mobile phones. Recently, with the development of Industry 4.0 standards, the vital role of cobots in automation, in manufacturing in general, and in small and medium enterprises (SMEs) in particular has become evident [3]. According to a recent survey report by MarketsandMarkets, the market for collaborative robots is estimated to grow from USD 981 million to USD 7972 million over the period 2021–2026 [4]. The surge in demand for collaborative robots motivates researchers and manufacturers to develop advanced collaborative robotic systems that can integrate with the Internet of Things (IoT) [5,6].
Human–robot (HR) interaction requires different levels of automation and human intervention. These interactions are commonly classified as HR coexistence, HR cooperation, and HR collaboration, with various levels of automation (full, semi, collaborative). When humans and robots share a workspace but work independently without sharing a task, the interaction is known as HR coexistence. The terms HR collaboration and HR cooperation are often used interchangeably; however, the two differ slightly. When the human and robot both seek to achieve the same goal and share the workspace, it is known as HR collaboration, whereas HR cooperation implies a situation in which robots and humans share a workspace and work simultaneously, but on separate sub-tasks.
A collaborative robot (cobot) can be a manipulator or a mobile robot that works with humans on shared tasks in a shared workspace. In this article, the collaborative robots addressed are manipulators that work with humans on a shared task in the same workspace. The term is also applied to multiple robots that interact with each other to perform shared tasks, e.g., co-transporting objects with the help of several collaborative robots. This literature review, however, focuses on collaborative manipulators with physical human–robot interaction and control methods relying on the motion-force control of the robot; cooperative distributed control methods for multiple collaborative robots, based, e.g., on game theory, are out of the scope of this paper.
Examples of these kinds of interaction are, first, industrial robotic systems, which work on specific tasks behind fixed and interlocked guards to prevent human intrusion into their workspace. The second example is cooperative robotic systems, which are equipped with safety configurations that automatically safeguard the interaction between robot and human. The third example is collaborative robotic systems, which are explicitly designed for direct interaction with humans to perform collaborative jobs. Contact between the collaborative robot and the human body might be either intentional, as part of a normal collaboration sequence, or unintentional, as a result of unpredicted movement, sensor errors, or system malfunction. The scope of this article is the physical HR collaboration of collaborative robot manipulators, as this field of research is still in its infancy. Therefore, the fundamental purpose of this review is to describe various control methods developed within the domain of collaborative robot manipulators for industrial applications.
In the past decade, the field of HR collaboration has been widely investigated. Furthermore, issues such as strategies for integrating safety, ergonomics and speed, human–robot task allocation, communication methods, and multiple humans collaborating with multiple robots as a team have been analyzed for advanced collaborative robotic systems. Recently, a systematic literature review by Gualtieri et al. [7] presented emerging research areas addressing the safety and ergonomics issues of industrial collaborative robots reported in the years 2015–2018. In industrial robotics, development is based on modeling and simulation, sensor systems for object tracking, motion planning and control, safety management, and artificial intelligence in industrial applications. The survey concluded that motion planning and control for overall safety focuses on developing strategies for human contact avoidance rather than on detection and mitigation solutions. However, more research is needed on industrial collaborative robots, where the main challenges are physical, cognitive, and organizational ergonomics for motion planning and control.
Another earlier review, by Dobra et al. [8], describes potential research trends involving the teamwork of multiple humans and robots, based on measuring the degree of collaboration and task relocation between human and robot. Hentout et al. [9] presented a comprehensive review classifying HR collaborative applications into software and hardware designs, and discussing robotic programming, augmented reality, physical and cognitive interactions, safety mechanisms, and the fault tolerance of industrial collaborative robots.
Furthermore, Umbrico et al. [10] proposed a shared ontology suitable for human–robot collaboration. The ontology is based on DUL (DOLCE+DnS Ultralite) and semantic sensor network (SSN) ontologies. It used the concepts of ProductionMethod, ProductionTask, ComplexTask and SimpleTask. The study facilitates reasoning regarding agents’ capabilities and the analysis of possible collaborations. The interaction between humans and robots is defined in the class ProductionAction. The concept of risk level is introduced to measure the risk related to the interaction between agents. Such an approach is deemed very helpful in defining the tasks, objectives and constraints of control systems.
Comprehensive reviews in the literature describe the safety and human ergonomics of collaborative robotic systems for HR collaboration. However, an in-depth analysis of control methodologies for collaborative robots over recent years has not been reported, to the best of the authors' knowledge. Therefore, this review article attempts to provide an overview of the development of various control methods for collaborative robots and of the challenges of designing controllers for collaborative robotic systems in industrial applications. The concept of human–robot collaboration and the design and implementation of human–robot collaborative control architectures are discussed in detail to make the integration safe, robust, and precise. Physical human–robot collaboration is a complex control problem. This article addresses low-level control challenges posed to collaborative robots and their proposed solutions, considering the following key challenges: the prediction of human intentions, safety, and human-caused disturbances in motion synchronization. As the focus of this article is restricted to low-level controller strategies, modeling techniques and high-level control methods are beyond its scope.
This article is organized into seven sections. Section 1 introduces the development of collaborative robots and the complex requirements of human–robot collaboration. Section 2 describes the review methodology used in this paper. Section 3 defines the key terminologies and features of human–robot collaboration for collaborative robots. Section 4 and Section 5 report the collaborative control system architectures, followed by a review of various control methodologies for collaborative robot manipulators used in physical HR-collaborative assembly applications in industry, as reported in various publications. Section 6 summarizes and discusses the main attributes and requirements of the control systems presented in the previous sections, providing recommendations for the most important features: the estimation of human intention, safety, and human-caused disturbances. Finally, Section 7 concludes the paper.

2. Literature Review Methodology

The traditional literature review approach was used to analyze scientific articles published between 2010 and 2020 that highlighted the control strategies developed for physical HR collaboration in collaborative robots for industrial applications. To limit the review scope, the focus was restricted to collaborative robot manipulators, with a particular focus on HR collaborative assembly tasks. To this end, peer-reviewed publications were comprehensively evaluated based on controllers developed and practically validated on collaborative robots, highlighting control advantages, challenges, and drawbacks.
Next, we identified three major challenges encountered while designing control techniques: (i) the estimation of human intention; (ii) safety; and (iii) human-caused disturbance. On the basis of these factors, we analyzed the control techniques, control objectives, and controller performance reported for collaborative robots. We summarize the chronological development in this area in tabular form, highlighting the control features together with the robotic platform details requisite for HR interaction. It is worth mentioning that articles considering control methods applied to traditional robots, articles not dealing with direct physical HR interaction, and purely theoretical studies were excluded from the review. Three search engines were utilized to obtain scientific articles, selected using the following search string: (collaborative robot OR cobot) AND (human–robot collaboration) AND (control) AND (industry OR assembly). Since collaborative robots have only been commercially available for the previous decade, the scientific articles cover the period 2010–2020. IEEE Xplore returned 38 results, from which 30 were found to fit our literature review criteria after reading the title and abstract. ScienceDirect returned 191 results, among which 40 were found to be suitable for our literature review. Web of Science returned 120 results, among which 45 were selected based on relevance to our application. Of all these relevant results, 10 were duplicates. Following a thorough evaluation of the articles, 41 papers were considered to fully meet our criteria and were therefore included in this review. It is important to note that 22 articles were cited in the analysis of the co-assembly application. The parameters considered for the systematic analysis are the collaborative robot, robotic platform details, collaborative configuration, physical HR interaction, sensors, control methods, control objective, and controller performance.

3. Human–Robot Collaboration

Before elaborating on the concept of human–robot collaboration for collaborative robots, it is important to discuss human–robot interactions, the types of human–robot collaboration, and the human–robot collaborative operation modes of collaborative robots. Human–robot collaboration is a sub-category of human–robot interaction [11,12].

3.1. Human–Robot Interaction

The human–robot interactions are divided into three subcategories: (i) Human–robot co-existence; (ii) Human–robot cooperation; and (iii) Human–robot collaboration. This classification is based on four criteria: (i) workspace; (ii) working time; (iii) working aim or task; and (iv) the existence of contact (contactless or with-contact).
The workspace can be described as a working area surrounding humans and robots wherein they can perform their tasks individually, as shown in Figure 2. The time during which a human works in the collaborative workspace is known as the working time. Humans and robots interact in a workspace to achieve a common goal or distinct goals. If the workspace is shared between the two entities and they act simultaneously, the interaction is known as HR coexistence [13]. HR cooperation implies an interaction in which they work simultaneously towards the same aim in a shared workspace, whereas HR collaboration covers scenarios in which there is direct contact between humans and robots to accomplish a shared aim or goal. Examples of these interactions are classical industrial robots, cooperative robots, and collaborative robots, respectively.
It is important to consider that the term HR collaboration is ambiguous in its definitions [14,15]. In Figure 2, HR collaboration is shown as the final category of HR interaction that describes a human and robot executing the same task together, wherein the action of the one has an immediate impact on the other.

3.2. Human–Robot Collaboration Types

Human–robot collaboration is the advanced property of robots that allows them to execute a challenging task involving human interaction in two ways: (i) physical collaboration; and (ii) contactless collaboration [14]. Physical collaboration entails direct physical contact, with the force of the human hand exerted on the robot's end-effector; these forces/torques are used to assist or predict the robotic motion accordingly [16]. Contactless collaboration, in contrast, does not involve physical interaction; it is carried out through direct (speech or gestures) or indirect (eye gaze direction, intention recognition, or facial expressions) communication [15]. In these collaboration scenarios, the cognitive skills and decision-making abilities of the human operator are combined with the robotic attributes of performing the job repetitively and more precisely.
Contactless collaboration faces several issues, e.g., communication channel delay, input actuator saturation, bounded input and output, and data transmission delay in bilateral teleoperation systems. Various controller methods have therefore been reported in the literature to deal with these issues, such as output feedback control [17], fuzzy control [18], adaptive robust control [19], model predictive control [20], and sliding mode control [21,22]. This survey, however, focuses on the critical issues faced by collaborative robots during physical HR collaboration. The key challenges in this regard include the prediction of human intentions, motion synchronization under human-caused disturbances, and human safety for efficient physical HR interaction. The following section introduces the different robotic operation modes of a collaborative robot during HR collaboration.

3.3. Collaborative Robotic Operations

The ISO/TS 15066 standard describes four operating modes for collaborative robots to ensure human safety: (1) power and force limiting; (2) speed and separation monitoring; (3) a safety-rated monitored stop; and (4) hand-guiding [23,24]. In these operating modes, collaborative robots work together with a human operator, depending on the application. Table 1 presents the four working modes of collaborative operation in terms of features, speed monitoring, torque sensing, operator control, and workspace limits for safe HR collaboration.

4. Control Design of Human–Robot Collaboration

4.1. Collaborative Control System Architectures

When designing a controller for human–robot collaboration, two essential factors need to be considered: adjustable autonomy and mixed initiative for integrating humans into an autonomous control system. Adjustable autonomy and human initiative switch the control of tasks between an operator and an automated system in response to the changing demands of the robotic system. In this survey, the application scenario considered is a collaborative manipulator robot in direct physical collaboration with a human operator in industrial collaborative assembly.
Simple-to-complex control architectures have been presented in the literature. The collaborative control architecture presents a systematic view of the interaction between humans and robots at both low-level (sensors, actuators) and high-level control (perception and cognition), as shown in Figure 3 [25,26].
Another control architecture, presented in [27,28], offers a more comprehensive view of interaction control in human–robot collaboration. This framework explains the complex requirements and diverse methods for interaction control, motion planning, and interaction planning. The architecture is unconventional compared to a classical control architecture because it plans explicitly for safe collaboration [29]. It is composed of three abstraction layers, which the following subsections explain in detail (Figure 4).

4.1.1. Non-Real-Time Layer

The highest abstraction layer is the top layer of the architecture, which plans the global task of the robot based on skill sets in offline mode. The task planner creates the different skill states needed to accomplish the respective global tasks/actions. It generates the task state and sends the initial desired behavior information to the lower layers. Each skill state holds information regarding the current task and the action to perform; the job of a robotic system can be, e.g., to grasp an object or hand it over. Examples of non-real-time control architectures are presented in [30,31].

4.1.2. Soft Real-Time Layer

The second abstraction layer is responsible for dynamically executing and modifying global plans; it does so by choosing the best action given the current task state, behavior state, human state, and environmental state. This dynamic planning unit is followed by a learning and adaptation unit, which converts global task planning information into the corresponding dynamic planning language. The planning unit can translate the robot's desired actions into safe and task-consistent actions, which instantaneously alter the global task plan. Hence, this layer's primary task is to modify the pre-planned course into safe and consistent actions using the prediction of human intentions. Examples of soft real-time architectures are described in detail in [32,33,34,35,36].

4.1.3. Real-Time Layer

The low-level control layer is the bottom layer; the desired action $a_d$ and behavior $b_d$ are directly forwarded to the robot for task execution. The expected behavior $b_d$ can be altered by reflex behaviors when accidental situations or collision events occur. The control layer provides feedback on the currently active action $a$ and behavior $b$ to the dynamic planning layer, allowing it to plan accordingly.
Human interaction in this control architecture is observed at various levels of abstraction. The human observer states gather all human-related information and knowledge in the second (soft real-time) layer, which can be further used for planning in the lower layer. This control architecture ensures human safety in physical interaction for interactive and cooperative tasks with a collaborative robot [37]. The following subsection highlights the key challenges considered in our literature review.

4.2. Controller Challenges

4.2.1. Estimation of Human Intention

The first challenge posed to HR collaboration is the precise estimation of human intention for controller design. It allows the system to select the correct dynamic plan and anticipate the appropriate safety behavior. The goal here is to equip the robot with an estimate of human intention that is easy to interpret and safe for both humans and robots participating in the collaborative task. In the real-time control layer, there is a two-way interaction: the human's ability to anticipate the robot's movement is as important as the robot's ability to anticipate human behaviors. The prediction of human actions relies on two factors: (1) predicting the next action; and (2) predicting the action time. Human motion prediction relies on predicting the desired tasks (manipulation, navigation) and the characteristics of human motion. On the other hand, with either explicit or implicit cues, the robot can make its intended goals/tasks clearer to co-located humans, facilitating the humans' ability to select safe actions and motions.

4.2.2. Safety

Industrial collaborative robots can work with humans and perform operations beside them. These robots move their arms and bodies and operate with dangerous and sharp objects. Such situations demand specific procedures to ensure human safety during collaborative tasks. This is an important and emerging issue in the field of human–robot collaboration. The problem can be tackled using a collision model for a robot consisting of $n$ joints, with a particular link detecting a collision with a human [38].
The following equation relates the linear and angular velocity vectors to the joint angular velocities $\dot{q}$:

$$\dot{x}_c = \begin{bmatrix} v_c \\ \omega_c \end{bmatrix} = \begin{bmatrix} J_{c,lin}(q) \\ J_{c,ang}(q) \end{bmatrix} \dot{q}$$
where $x_c$ is a state vector, $v_c$ is the linear velocity vector and $\omega_c$ is the angular velocity vector of the related robot link at the collision contact point, and $J_c(q)$ is the contact Jacobian. In the case of a collision, the robot dynamics can be represented as

$$M(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) + \tau_f = \tau + \tau_{ext}$$
where $M(q)$ is the joint space inertia matrix, $C(q,\dot{q})\dot{q}$ is the Coriolis vector, $g(q)$ is the gravity vector, $\tau_f$ is the dissipative friction torque, $\tau$ is the motor torque, and $F_{ext}$ is the external force observed by the joint during the collision; the resulting external joint torque $\tau_{ext}$ is expressed as

$$\tau_{ext} = J_c^T(q)\, F_{ext}$$
Then, the effective mass of the robot along a unit direction $u$ can be estimated as

$$m_u = \left[ u^T \Lambda_v^{-1}(q)\, u \right]^{-1}$$
where $\Lambda_v(q)$ is the Cartesian kinetic energy matrix. When a collision occurs, an important quantity is the force observed at the contact point. This force is characterized in two phases, as shown in Figure 5. In phase I, $F_I$ represents a short impulsive force. In phase II, two types of forces come into play in the case of quasi-static contact: if the human is not clamped, the force is called a pushing force $F_{IIa}$; when the human is clamped, the force is called a crushing force $F_{IIb}$. The mathematical modeling of robotic systems, including kinematic and dynamic modeling, is a precursor for control design. The comprehensive dynamic modeling of point contact between the human hand and a robotic arm is described in [39,40,41,42].
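To make the effective-mass expression concrete, the following minimal sketch evaluates $m_u$ for a two-link planar arm; the link masses, lengths, configuration, and impact direction are illustrative assumptions, not values from the cited studies.

```python
import numpy as np

# Illustrative two-link planar arm (hypothetical parameters, not from the cited works).
m1, m2 = 4.0, 3.0          # link masses [kg], lumped at the link tips
l1, l2 = 0.4, 0.35         # link lengths [m]

def inertia_matrix(q):
    """Joint-space inertia matrix M(q) for a two-link arm with tip point masses."""
    c2 = np.cos(q[1])
    m11 = (m1 + m2) * l1**2 + m2 * l2**2 + 2 * m2 * l1 * l2 * c2
    m12 = m2 * l2**2 + m2 * l1 * l2 * c2
    return np.array([[m11, m12], [m12, m2 * l2**2]])

def jacobian(q):
    """Translational Jacobian J_c(q) of the end-effector (assumed contact point)."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def effective_mass(q, u):
    """m_u = (u^T Lambda_v(q)^{-1} u)^{-1}: the mass felt along unit direction u."""
    M, J = inertia_matrix(q), jacobian(q)
    lambda_v_inv = J @ np.linalg.inv(M) @ J.T   # inverse Cartesian kinetic-energy matrix
    return 1.0 / (u @ lambda_v_inv @ u)

q = np.array([0.3, 1.1])                 # joint configuration [rad]
u = np.array([1.0, 0.0])                 # assumed impact direction (unit vector)
print(f"effective mass along u: {effective_mass(q, u):.2f} kg")
```

A low effective mass along the expected impact direction reduces the impulsive force transferred in phase I, which is one reason lightweight designs and posture optimization improve collision safety.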

4.2.3. Human-Caused Disturbances

Unpredictable disturbances from human motion significantly reduce the performance of robotic control. To ensure stable and robust human–robot collaborative assembly manipulation, human-caused disturbances need to be minimized. Control methodologies for handling these challenges are described in the next section.

5. Control Methodologies

From a low-level control perspective, control methodologies enable smooth task execution despite the complex and unpredictable nature of physical human–robot collaboration. The controller's aims include tracking human motion trajectories while avoiding collisions during physical interaction. We briefly present control methodologies designed and implemented for collaborative robots, particularly for HR-collaborative assembly applications, and then elaborate on the related control methods.

5.1. Impedance Control Strategy

The most popular control technique is impedance control, which handles hybrid force/position control and disturbances in the unknown environments of physical human–robot interaction [43,44]. This method was developed for robust collaborative object manipulation, in which the impact of involuntary human motions can be compensated by adjusting the impedance parameters. Impedance control techniques can handle motion and force in a unified manner for the robotic system. They benefit from hybrid force-motion controllers to produce a motion not constrained by a kinematic workspace [45,46]. This method shapes the closed-loop system as a second-order mass-spring-damper system. The control objective is defined in operational space coordinates $x$ as

$$M_x \ddot{\tilde{x}} + D_x \dot{\tilde{x}} + K_x \tilde{x} = F_{ext}$$
where $M_x$ denotes the desired inertia, $\tilde{x} = x - x_d$ is the position error vector, $x_d$ represents the equilibrium position vector, $D_x$ is the damping matrix, and $K_x$ is the stiffness matrix in the operational space.
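As a minimal illustration of this target dynamics, the sketch below integrates a one-dimensional version of the impedance relation under a constant external force; the gains and force value are assumed for illustration only.

```python
# 1-DOF discretized impedance target dynamics:
#   M_x * dd_e + D_x * d_e + K_x * e = F_ext   (all values assumed)
M_x, D_x, K_x = 2.0, 20.0, 100.0   # desired inertia, damping, stiffness
F_ext = 10.0                        # constant human-applied force [N]
dt, T = 0.001, 2.0                  # integration step and duration [s]

e, de = 0.0, 0.0                    # position error and its rate
for _ in range(int(T / dt)):
    dde = (F_ext - D_x * de - K_x * e) / M_x   # solve the target dynamics
    de += dde * dt                              # explicit Euler integration
    e += de * dt

# At steady state the robot deflects by F_ext / K_x, i.e. it behaves like a spring.
print(f"steady-state deflection: {e:.4f} m (expected {F_ext / K_x:.4f} m)")
```

The simulation converges to the static deflection $F_{ext}/K_x$, showing how the stiffness choice trades tracking accuracy against compliance to human-applied forces.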
In [47], the authors developed a modified impedance controller that requires angles for the operational space and energy functions for physical interpretation; to eliminate singularities, the end-effector orientation and displacement were used. Adaptive impedance controllers can be extended to utilize impedance and feed-forward torques; however, learning the desired motion trajectories is a significant problem for such controllers [48,49]. To address this issue, the authors of [50] proposed an adaptive impedance control method for series elastic actuator (SEA) collaborative robots. This method introduced a controller with two operating modes: in the first, “robot-in-charge” mode, the robot takes the primary role in job execution; in the second, “human-in-charge” mode, the human takes charge of executing the operation (Figure 6). The performance of the proposed controller was found to be satisfactory, as validated on an experimental setup.
Another control method is Cartesian impedance control, which is generally used for grasping objects in co-manipulation tasks, where human and robot share the roles of leader and follower. The authors of [51] proposed modifications to this method that shift the human–robot roles by sharing control throughout the operation. The torques applied to the admittance controller produce a wrist deflection, which is transformed into a virtual vertical force to carry the load; using a second admittance controller, this force was used to extend or lower the robot's end-effector. This controller approach was employed on the mobile robot helper's base to perform the cooperative carrying of a load [52,53]. The authors of [54,55] also used a lifting controller with cascaded second-order virtual admittance controllers in the same applications. The authors of [56,57] designed and evaluated an interactive controller for a robot and a human cooperatively carrying a load. Figure 7 depicts a simplified diagram of admittance-based collaborative task control.
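Such an admittance scheme can be sketched as a virtual mass-damper that maps the measured interaction force to a commanded Cartesian velocity; the virtual parameters below are assumptions for illustration, not values taken from the cited works.

```python
class AdmittanceController:
    """Minimal virtual-admittance sketch: maps a measured interaction force to
    a commanded Cartesian velocity via M_v * dv + D_v * v = F_meas (assumed gains)."""

    def __init__(self, virtual_mass=5.0, virtual_damping=30.0, dt=0.002):
        self.m, self.d, self.dt = virtual_mass, virtual_damping, dt
        self.v = 0.0  # commanded velocity state

    def update(self, f_meas):
        # Integrate the virtual dynamics one step; the robot's low-level velocity
        # controller (not shown) is assumed to track the returned set-point.
        dv = (f_meas - self.d * self.v) / self.m
        self.v += dv * self.dt
        return self.v

ctrl = AdmittanceController()
for _ in range(1000):               # the human pushes with a steady 15 N
    v_cmd = ctrl.update(15.0)
print(f"commanded velocity: {v_cmd:.3f} m/s (approaches F/D = 0.5 m/s)")
```

The virtual damping sets how fast the robot yields to a steady human push, which is why admittance parameters are often tuned, or adapted online, per task.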
The fundamental problem of collision detection and avoidance has also been addressed in collaborative robotics. Geravand et al. [58] introduced a collision detection approach for industrial robots that uses a closed-loop feedback control strategy, with the measurements of joint positions and velocities serving as reference signals for the controller; this technique does not require the robot's dynamics model. Experimental scenarios of human–robot collaboration with various forces were tested on the KUKA KR5 manipulator and the DLR LWR-III [59]. In addition, an impedance controller was designed that adapts itself according to the external collision force, obeying the safety standard (ISO 10218) for force, velocity, and power limitations; the KUKA LWR4 robot was used for the experimental verification of this collision avoidance strategy [60].
Overall, two main approaches are commonly used: active impedance control and passive compliance control for human safety and protection. The active impedance solution exhibits latency [61] in the case of a human–robot collision, which could adversely affect human safety. On the other hand, passive compliance provides an instantaneous, rapid, and robust reaction to an uncertain collision.

5.2. Invariance Control Strategy

A novel invariance control algorithm for dynamic constraints in human–robot collision avoidance was developed in detail and tested on a torque-controlled robotic manipulator [62]. The controller is responsible for handling the physical constraints (velocity/joint limits) and dynamic constraints (human movement) under external disturbances. The idea of virtual constraints defines a safe zone for human activity inside the collaborative workspace. The control scheme is built to keep the state within the admissible subset while keeping the error minimal, also in the presence of external disturbances. Figure 8 presents the invariance control strategy with two control loops: the outer loop contains the nominal controller, and the inner loop contains the invariance controller. The nominal control law and the desired robot motion $p_{des}$ generate control torques $\tau_{no}$ that describe the desired behavior of the robotic manipulator. The invariance controller in the inner loop calculates the corrective control input $\tau_c$, which is close to the nominal control input $\tau_{no}$ and compensates for the effect of disturbances or violations of the constraints. The torque-controlled robotic system is described as [62]

$$\dot{x} = \begin{bmatrix} \dot{q} \\ \ddot{q} \end{bmatrix} = \underbrace{\begin{bmatrix} \dot{q} \\ -M_q^{-1}(q)\left( C_q(q,\dot{q})\dot{q} + g_q(q) \right) \end{bmatrix}}_{f(x)} + \underbrace{\begin{bmatrix} 0 \\ M_q^{-1}(q) \end{bmatrix}}_{G(x) = [g_1(x),\, \dots,\, g_{n_q}(x)]} \tau$$
The nominal control law is designed to generate the control torque $\tau_{no}$ as

$$\tau_{no} = J^T(q)\left[ f_{ext} + M_p \ddot{p}_{des} + D_p\left( \dot{p}_{des} - \dot{p} \right) + K_p\left( p_{des} - p \right) \right] + C_q(q,\dot{q})\dot{q} + g_q(q)$$
where $M_q \in \mathbb{R}^{n_q \times n_q}$ is the mass matrix, $p_{des}$ a sufficiently smooth desired trajectory, $M_p \in \mathbb{R}^{n_p \times n_p}$, $K_p \in \mathbb{R}^{n_p \times n_p}$, and $D_p \in \mathbb{R}^{n_p \times n_p}$ the positive definite Cartesian mass, stiffness, and damping matrices, $J(q) \in \mathbb{R}^{n_q \times n_q}$ the Jacobian matrix, $C_q(q,\dot{q})\dot{q} \in \mathbb{R}^{n_q}$ the Coriolis and centripetal forces, and $g_q(q) \in \mathbb{R}^{n_q}$ the gravitational torques. External forces $f_{ext} \in \mathbb{R}^{n_p}$ are connected to the external torque $\tau_{ext}$. The stiffness and damping parameters may be adapted, sometimes even online, to account for task requirements. Furthermore, solving the constrained minimization problem given below yields the corrective control $\tau_c$:

$$\arg\min_{\tau_c} \left\| \tau_c - \tau_{no} \right\|^2$$
This strategy was tested on a dual-arm robot with seven DOF and direct human contact: a person exerts force on the end-effector, and the robotic manipulator accurately tracks the movement.
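Under the simplifying assumption of a single linear invariance constraint $a^T \tau_c \leq b$, the minimization above has a closed-form solution: the projection of the nominal torque onto the admissible half-space. The sketch below illustrates this; the constraint data are hypothetical, and a real implementation would handle the full constraint set.

```python
import numpy as np

def corrective_torque(tau_no, a, b):
    """Solve  argmin ||tau_c - tau_no||^2  s.t.  a^T tau_c <= b
    (one linear constraint, an assumed stand-in for the full invariance set).
    The minimizer is the Euclidean projection of tau_no onto the half-space."""
    violation = a @ tau_no - b
    if violation <= 0.0:
        return tau_no                      # nominal torque is already admissible
    return tau_no - (violation / (a @ a)) * a

tau_no = np.array([12.0, -5.0, 3.0])       # nominal control torque
a = np.array([1.0, 0.0, 0.5])              # constraint normal (assumed)
b = 8.0                                    # constraint bound (assumed)
tau_c = corrective_torque(tau_no, a, b)
print(tau_c, a @ tau_c)                    # a^T tau_c lands on the boundary b
```

The corrective input thus stays as close as possible to the nominal behavior and deviates only when a constraint, e.g., a virtual safety boundary, is about to be violated.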

5.3. Exteroceptive Sensor-Based Control Strategy

Another strategy providing better human safety in manufacturing applications is exteroceptive sensor-based control [63]. De Santis and Siciliano [64] suggested a sensor system combined with virtual reality under different collision scenarios; moreover, it was observed that the subjective comfort of humans depends on the shape and speed of the robot during its motion. Avanzini et al. [65] built a hardware/software platform to optimize the safety of human–robot collaboration: the authors developed a distributed distance sensor model integrated into an industrial robot, and the controller used the sensors' output to assess the risk level. The module's functionality was verified on the ABB IRB140 robot manipulator [66].
The rigid body dynamics in task coordinates is [64]:

$$\Lambda(x,\dot{x})\ddot{x} + \mu(x,\dot{x})\dot{x} + J^{-T} g(x) = J^{-T}\tau + f,$$

where the operational space quantities are $\Lambda(x,\dot{x}) = J^{-T} M J^{-1}$ and $\mu(x,\dot{x}) = J^{-T}\left( C - M J^{-1}\dot{J} \right) J^{-1}$, with $q \in \mathbb{R}^{N_q}$ the vector of joint angles, $M$ the inertia matrix, $C$ the Coriolis/centrifugal matrix, $g(x)$ the vector of gravity torques, $f$ the vector of external forces, $\tau$ the applied joint torques, and $J$ the Jacobian.
Impedance control in the task space consists of the following control objective:

$$\Lambda_d \ddot{e}_x + D_d \dot{e}_x + K_d e_x = e_f$$

where $e_x = x - x_d$ is the position error between the actual position $x$ and the reference position $x_d$, and $e_f = f - f_d$ measures how much the actually perceived force $f$ deviates from the predicted one $f_d$. $\Lambda_d$, $D_d$, and $K_d$ are the symmetric and positive definite matrices of desired inertia, damping, and stiffness, respectively.
The Cartesian impedance controller can be implemented via the joint torques $\tau$ as follows:

$$\tau = u + J^T \tilde{K} e_x + J^T \tilde{D} \dot{e}_x + J^T \tilde{\Lambda} \dot{e}_f$$

where $u = g + J^T\left( \Lambda \ddot{x}_d + \mu \dot{x}_d \right)$.
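A direct transcription of this torque law, with assumed diagonal gain matrices and a hypothetical planar Jacobian, might look as follows; the feedforward term $u$ is taken as given here.

```python
import numpy as np

# Assumed gains and a hypothetical 2x2 Jacobian; the feedforward term u
# would come from u = g + J^T(Lambda * xdd_d + mu * xd_d).
K_t = np.diag([400.0, 400.0])       # stiffness gain (the tilde-K above)
D_t = np.diag([40.0, 40.0])         # damping gain (the tilde-D above)
L_t = np.diag([0.5, 0.5])           # force-error gain (the tilde-Lambda above)

def impedance_torque(u, J, e_x, de_x, de_f):
    """tau = u + J^T (K e_x + D de_x + Lambda de_f): task-space errors mapped
    to joint torques through the transposed Jacobian."""
    return u + J.T @ (K_t @ e_x + D_t @ de_x + L_t @ de_f)

J = np.array([[-0.3, -0.1],
              [ 0.5,  0.2]])        # hypothetical Jacobian at the current q
tau = impedance_torque(u=np.zeros(2), J=J,
                       e_x=np.array([0.02, -0.01]),   # position error [m]
                       de_x=np.zeros(2),              # velocity error [m/s]
                       de_f=np.array([1.0, 0.0]))     # force-error rate [N/s]
print(tau)
```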
Meziane et al. [67] introduced two units: an inertial measurement unit that measures the torque/force signal, and an indication unit that tracks the human location and movements in real time; the authors developed this hybrid system for a flexible industrial manufacturing system. Other systems handle safety issues by creating safety zones around the human and the robot, with the robot speed varying in the overlapping zone. Currently, researchers are trying to make these zones dynamic with respect to both human movement and robot behavior [68].

5.4. Proprioceptive Sensor-Based Control Strategy

Control strategies that rely on the measurements of the robot's internal sensors are known as proprioceptive sensor-based control strategies. Lacevic and Rocco [69] proposed a mechanism for detecting collisions between an industrial robot and a human that uses proprioceptive sensors fully integrated into the internal software design and does not rely on external sensors. The experimental results showed that coordinated and collision-free motion can be obtained in such a framework. Other researchers suggested different approaches using pose estimation methods [70], an extended Kalman filter [71], and a hybrid extended Kalman filter [72].
Another human–robot control scheme, based on a kinematic control strategy, was implemented to handle human–robot collision avoidance [73]. The proposed algorithm generates the optimal velocity trajectory to satisfy safety constraints. Experimental validation was carried out on dual-arm robots at the joint level rather than in 3D Cartesian space to reduce the computation time. Finite-state automata were used to implement collision detection and avoidance in the human–robot controller, and a linear optimization algorithm was used for motion planning. The controller adjusts the joint velocities of the robot relative to the distance from the human in the collaborative workspace, ensuring that the velocity scaling factor is bounded.
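A minimal sketch of such a bounded, distance-dependent velocity scaling factor is given below; the stop and full-speed distances are assumed values, not thresholds from [73].

```python
def velocity_scale(d, d_stop=0.2, d_full=1.0):
    """Bounded scaling factor in [0, 1]: zero inside the stop distance, one
    beyond the full-speed distance, linear in between (assumed thresholds)."""
    if d <= d_stop:
        return 0.0
    if d >= d_full:
        return 1.0
    return (d - d_stop) / (d_full - d_stop)

q_dot_nominal = [0.5, -0.3, 0.8]              # nominal joint velocities [rad/s]
s = velocity_scale(0.6)                       # human detected 0.6 m away
q_dot_cmd = [s * v for v in q_dot_nominal]    # bounded, safety-scaled command
print(s, q_dot_cmd)
```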
A kinetostatic danger field for the control of a multi-DOF robot was presented in [73,74] for the assessment of human–robot interaction. Assume there are $N$ relevant obstacles in the robot environment and let $r_j$ be the position of obstacle $j$, $j \in \{1, 2, \dots, N\}$; we may take $r_j$ as the position of the point on the obstacle that is nearest to the robot. Define $m$ as:

$$m = \begin{cases} 1 & \text{if } \left\| CDF(r_j) \right\| \leq \Delta_j,\ \forall j \in \{1, 2, \dots, N\} \\ 0 & \text{otherwise,} \end{cases}$$
where $CDF$ is a vector representing the cumulative danger field pointing towards $r$. This means that $m = 1$ if and only if the value of the danger field at each of the relevant locations $r_j$ does not exceed a certain threshold $\Delta_j$. Therefore, the control law is:

$$T = m\, T_{task} + \left[ (1 - m)\, I + m\, N^T(q) \right] T_{subtask}$$
The torque $T_{task}$ is responsible for the task behavior. If $m = 1$, the subtask torque $T_{subtask}$ only affects the robot posture, without altering the end-effector dynamic behavior; this is guaranteed by the matrix $N(q) = I - \tilde{J}(q) J(q)$, which projects an arbitrary torque vector into the null-space of $J^T(q)$. If $m = 0$, $T_{subtask}$ affects the dynamics of the complete robot. The torque $T_{subtask}$ is defined as:

$$T_{subtask} = \sum_{j=1}^{N} \sum_{k=1}^{n} J^T(q, j, k)\, F_k\left( CDF(r_j), \dot{r}_j \right)$$
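The following sketch illustrates the blending law above using a pseudoinverse-based null-space projector; the Jacobian and torque values are hypothetical, and the danger-field test is reduced to the binary flag $m$.

```python
import numpy as np

def blended_torque(m, T_task, T_subtask, J):
    """T = m*T_task + [(1-m)*I + m*N^T(q)]*T_subtask, with N = I - pinv(J) @ J.
    For m = 1 the subtask torque only adjusts the posture, leaving the
    end-effector task undisturbed; for m = 0 it acts on the complete robot
    dynamics (evasive reaction to a high danger field)."""
    n = J.shape[1]
    N = np.eye(n) - np.linalg.pinv(J) @ J        # null-space projector
    return m * T_task + ((1 - m) * np.eye(n) + m * N.T) @ T_subtask

J = np.array([[0.2, -0.4, 0.1],                  # hypothetical 2x3 task Jacobian
              [0.5,  0.3, -0.2]])
T_task = np.array([5.0, 1.0, -2.0])              # task-space tracking torque
T_subtask = np.array([0.5, 0.5, 0.5])            # e.g., danger-field repulsion

print(blended_torque(1, T_task, T_subtask, J))   # danger field below thresholds
print(blended_torque(0, T_task, T_subtask, J))   # danger field exceeded
```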

5.5. Distance/Speed-Based Control Strategy

Another popular control strategy prioritizes speed while monitoring the minimal distance between robots and humans in the workspace. This control method handles the safety issue by means of collision avoidance and detection algorithms based on speed and distance: the measured distance is utilized to adjust the robot speed when a potential collision is detected [75]. Tsarouchi et al. [76] developed and tested ROS modules in the C5G open architecture with ROS sensors and other sensor setups. Michieletto et al. [77] generated repulsive velocity vectors dependent on the distances between the robot and moving human obstacles; this repulsive action is intended to help the manipulator's end-effector avoid collisions when performing a Cartesian motion task. The experimental implementation on a KUKA LWR4 confirmed the properties of this strategy.
The artificial potential field was introduced by Khatib in 1985 [76]. The idea is to create an artificial potential field $U_{art}$ in the robot's environment such that a minimal distance between the robot and the obstacle is kept: when the distance is too short, the robot is repelled by the obstacle, where the potential is strong, while it is attracted by the target, where the potential is weak. The artificial potential field $U_{art}(q)$ is the sum of an attractive potential field $U_{att}(q)$ and a repulsive potential field $U_{rep}(q)$, where $q$ is the robot's geometric configuration:

$$U_{art}(q) = U_{att}(q) + U_{rep}(q)$$
The robot moves following the sum of the attractive and repulsive forces generated by the attractive and repulsive potentials, respectively. The artificial force is determined by:

$$F_{art}(q) = F_{att}(q) + F_{rep}(q),$$
where $F_{att}(q)$ is the attractive force and $F_{rep}(q)$ is the repulsive force at the current position $q$ of the robot. The attractive force is provided by the equation:

$$F_{att} = -\varepsilon \left( q - q_t \right),$$
where $\varepsilon$ is a positive gain (similar to Hooke's law spring constant), $q$ is the current position of the robot, and $q_t$ is the target's position. The repulsive force $F_{rep}$ is provided by the equation:

$$F_{rep} = \begin{cases} \dfrac{\eta}{d^2} \left( \dfrac{1}{d} - \dfrac{1}{d_0} \right) \nabla d & \text{if } d \leq d_0 \\ 0 & \text{if } d > d_0, \end{cases}$$

where $d$ is the minimal distance between the robot and the obstacle, $d_0$ is the influence distance of the obstacle, and $\eta$ is a positive gain.
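The attractive and repulsive forces combine into a single steering force, as in this minimal sketch; the gains, positions, and influence distance are assumed for illustration.

```python
import numpy as np

def apf_force(q, q_target, q_obstacle, eps=1.0, eta=0.05, d0=0.5):
    """Total artificial-potential-field force: spring-like attraction to the
    target plus the short-range repulsion defined above (assumed gains)."""
    f_att = -eps * (q - q_target)             # attraction towards the target
    diff = q - q_obstacle
    d = np.linalg.norm(diff)                  # minimal distance to the obstacle
    if d <= d0:
        grad_d = diff / d                     # distance gradient, points away
        f_rep = (eta / d**2) * (1.0 / d - 1.0 / d0) * grad_d
    else:
        f_rep = np.zeros_like(q)              # obstacle outside influence range
    return f_att + f_rep

q = np.array([0.0, 0.0])                      # current robot position
print(apf_force(q, np.array([1.0, 0.0]), np.array([0.3, 0.1])))
```

Because the repulsion grows without bound as $d \to 0$, the obstacle term dominates near a human, while far from any obstacle the robot simply tracks the target.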

5.6. Probabilistic Method

Several probabilistic techniques have been proposed for translating human intentions. For instance, the authors of [38,78] discussed the human intention estimation problem and proposed a prediction mechanism for online trajectory planning using a hidden Markov model (HMM). The walking human's current and desired positions are used to adjust the robot's movements in an industrial collaborative assembly application, where the three types of human–robot interaction (coexistence, cooperation, and collaboration) are considered. Each motion pattern $\theta_m$ is described by a set of Gaussian distributions with coordinate means $\mu_{mk}$ and covariances $\sigma_{mk}$. In the first phase, online trajectory prediction is performed for each observed human position, with $K$ discretized states along $M$ motion patterns, applying a hidden Markov model (HMM). During the next phase, the probability of each estimated human state $P(\mu_{mk} \mid \pi_i)$ is calculated according to

$$P(\mu_{mk} \mid \pi_i) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{1}{2\sigma^2} \left\| \pi_i - \mu_{mk} \right\|^2} P(\mu_{mk})$$
The probability of each state is modified during the observation process based on the current state probability $P(\mu_{mk})$ and the observed human location $\pi_i = (x_i, y_i)$. If the person visits an interaction area $A_j$ at a specific future time $t$, then Bayes' theorem defines the probability of the interaction area $P(A_{j,t} \mid \pi_i)$ as

$$P(A_{j,t} \mid \pi_i) = P(A_{j,t}, \mu_{mk} \mid \pi_i) = P(A_{j,t} \mid \mu_{mk}, \pi_i)\, P(\mu_{mk} \mid \pi_i)$$
The experiments showed successful results when the intention estimation algorithm was tested in coexistence, cooperation, and collaboration scenarios, predicting the correct interaction zone between robot and human. Ding et al. also suggested an HMM-based approach for efficiently generating the safety-critical regions occupied by the movement of a human arm over a long-term prediction horizon for planning the robot's motion [38]. This probabilistic method uses stochastic transitions between various motion patterns and resolves the uncertainties in the prediction of human movement.
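A small numerical sketch of the two phases, Gaussian weighting of the discretized states followed by the interaction-area probability, is shown below; the states, priors, and area likelihoods are hypothetical.

```python
import numpy as np

def state_posteriors(pi_i, means, priors, sigma):
    """P(mu_mk | pi_i) proportional to the Gaussian likelihood of the observed
    position times the current state probability P(mu_mk), then normalized."""
    d2 = np.sum((means - pi_i) ** 2, axis=1)
    lik = np.exp(-0.5 * d2 / sigma**2) / (np.sqrt(2.0 * np.pi) * sigma)
    post = lik * priors
    return post / post.sum()

# Hypothetical discretized states of the motion patterns and one observation.
means = np.array([[0.0, 0.0], [0.5, 0.2], [1.0, 0.4], [1.2, 1.0]])
priors = np.full(4, 0.25)                     # current probabilities P(mu_mk)
pi_i = np.array([0.45, 0.25])                 # observed human location (x_i, y_i)
post = state_posteriors(pi_i, means, priors, sigma=0.3)

# P(A_j | pi_i): weight each state by an assumed P(A_j | mu_mk, pi_i).
p_area_given_state = np.array([0.1, 0.8, 0.6, 0.2])
print(float(post @ p_area_given_state))
```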
Classical estimation techniques and deterministic methods produce satisfactory short-term predictions but do not work well over a long prediction horizon [79,80]. Research therefore focuses on communicating more human motion information to the robot to make the system efficient and able to replan its trajectories [81]. While prediction has been shown to be helpful for ensuring safe HRC, selecting the appropriate predictors for a given task and environment is essential for prediction accuracy; it has been reported that feeding low-confidence task-level predictions into motion-level prediction can deteriorate the prediction performance [82]. Furthermore, control-based safety methods can be used to handle incorrect predictions.

5.7. Human-Caused Disturbance Methods

Different control methods have been proposed in the literature to predict human motions, as described in the earlier sections. Probabilistic machine learning methods [83] can predict human movements, but these algorithms do not consider the impact of human motion on the manipulated objects. An adaptive neural network tracking control technique [84] can handle mismatched disturbances in linear systems; however, it cannot determine the unknown dynamic parameters in advance.
Recently, Li et al. [20] suggested a hybrid control method based on modified MPC and impedance control to suppress human-caused disturbances occurring during direct interaction between a human and a robot in a co-manipulation task. The experimental results showed that, when human and robot hold a ball together, the proposed controller can balance the ball while reducing the disturbance amplitude to 70% of that obtained with conventional MPC methods. It has been observed that more research is needed in the field of human-caused disturbances in direct HR collaboration.
In the modified MPC control algorithm described in [20], the velocity of the human hand is obtained by differentiating its displacement, and it is assumed, for convenience, that the velocity remains constant over a future time window $T$:

$$^{b}v_{hy}(t+s) = \frac{^{b}l_h(t) - {}^{b}l_h(t - dt)}{dt}, \quad s \in [0, T]$$
where $^{b}v_{hy}(t)$ is the velocity of the human hand in the vertical direction at time $t$, $dt$ is the sampling time, and $^{b}l_h(t)$ is the left-hand position with respect to the base frame $b$. The state of the system is controlled by both the human and the robot:

$$x(t + dt) = A x(t) + B u(t) + C w(t)$$

$$u(t) = {}^{b}v_{ry}(t), \quad w(t) = {}^{b}v_{hy}(t)$$
where $x(t)$ is the state vector at time $t$, $u(t)$ is the input vector, $w(t)$ is the disturbance, $^{b}v_{ry}(t)$ is the velocity of the robot hand in the base frame, and $^{b}v_{hy}(t)$ is the velocity of the human hand in the vertical direction. Furthermore, a cost function is designed and minimized over the input $u$:

$$J_N(x(t), u, w) = \sum_{s=0}^{N-1} x(t+s \mid t)^T D\, x(t+s \mid t) + x(t+N \mid t)^T E\, x(t+N \mid t)$$
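A toy receding-horizon sketch of this scheme, with a scalar state and an exhaustive search over a coarse input grid standing in for a proper QP solver, is shown below; all model and weight values are assumptions, not parameters identified in [20].

```python
import numpy as np
from itertools import product

# Scalar stand-in for the co-manipulation model x(t+dt) = A x + B u + C w.
# A, B, C, the weights D and E, and the disturbance w are assumed values.
A, B, C = 1.0, 0.02, 0.02
D, E = 1.0, 5.0                           # stage and terminal state weights
N = 4                                     # prediction horizon
w = 0.3                                   # predicted constant human-hand velocity
candidates = np.linspace(-1.0, 1.0, 11)   # coarse grid over robot velocity inputs

def cost(x0, u_seq):
    """J_N = sum_s x(t+s|t)^T D x(t+s|t) + x(t+N|t)^T E x(t+N|t), for scalars."""
    x, J = x0, 0.0
    for u in u_seq:
        J += D * x * x                    # stage cost on the predicted state
        x = A * x + B * u + C * w         # roll the model one step forward
    return J + E * x * x                  # terminal cost

x0 = 0.1                                  # current state deviation
best = min(product(candidates, repeat=N), key=lambda u: cost(x0, u))
print("first input of the optimal sequence (receding horizon):", best[0])
```

Only the first input of the optimal sequence is applied; at the next sample the human-hand velocity is re-estimated and the optimization repeats, which is what gives the method its disturbance-rejection behavior.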
Although an adaptive neural network tracking control technique can handle mismatched disturbances in linear systems, it cannot predict unknown dynamic parameters. In dynamic nonlinear systems, the trajectory tracking and optimization control problem has been tackled using dynamic and probabilistic movement primitives [85]; the experimental results showed that these methods cannot support real-time control under disturbances. Model predictive control, in contrast, can handle uncertainties and tracking problems more efficiently because it predicts the future trajectory of the system [86], and there has been substantial research into human–robot prediction and control optimization approaches.

6. Discussions

From our review of current control methods, it can be observed that several control methods have been developed over the last decade for collaborative robots applied to HR collaborative assembly. A collaborative assembly task involves several sub-tasks in which a human operator is required to guide the robot from a random starting point to a fixed target in the robot workspace, with the robot participating in task completion in a similar way. Table 2 presents a detailed analysis of the control methodologies and features of HR collaborative robotic systems designed and implemented for collaborative robots in the last decade. It is apparent that the design of controllers depends upon the underlying application.
Physical human–robot collaboration is, from a control perspective, a challenging task. The challenging factors considered in this survey are the prediction of human intentions, safety, and human-caused disturbances in motion synchronization. These factors are comprehensively investigated in the literature, and the challenges motivate researchers and manufacturers to develop promising control methods that ensure safe, robust, and smooth physical human–robot collaboration. The control methods discussed in this study are classified as impedance control, invariance control, exteroceptive and proprioceptive sensor-based control, and distance-based control. Among these, impedance control is the most widely used low-level control strategy for collaborative robots applied to HR-collaborative assembly; its design depends on the choice of sensors, robotic platform, collaborative interface, and application area. However, the impedance/admittance control methods do not involve an explicit force feedback loop, resulting in an indirect control structure. Direct force control, on the other hand, requires an explicit model of the system and environment and relies on hybrid position/force control.
Furthermore, smooth HR collaboration requires visual feedback, in addition to position and force feedback to the controller, for the co-manipulation task in assembly applications. Visual feedback significantly improves human motion estimation; therefore, this area needs to be investigated widely. The safety and human ergonomics of collaborative robots is an interesting area, and extensive research is underway on strategies targeting collision detection and avoidance; however, real-time responsive control algorithms are still needed for safe human–robot collaboration. Human-caused disturbances during motion synchronization are another under-explored area in which effective control strategies can play a significant role. Our own focus is on developing control strategies addressing human-caused disturbances for a collaborative robot (FANUC CR7iA) in physical HR collaboration in our lab (Figure 9).

7. Conclusions

The controller plays a critical role in the development and design of collaborative robots. This paper presented the findings of a literature review on collaborative robot control strategies, starting from the concept of human–robot collaboration; implicit and explicit human–robot collaboration methods were briefly explained. Collaborative operations for collaborative robots in industrial applications were thoroughly discussed, with a particular emphasis on HR co-assembly applications. The integration of collaboration into the development of controller architectures was then discussed in detail, and a systematic framework for collaborative robot control was described, allowing the execution of a robotic task from global task planning to low-level control implementation for safe interactions. Since physical HRC is a critical control problem for the co-manipulation task, this article identified key control challenges, such as the prediction of human intentions, safety, and human-caused disturbances in motion synchronization. Finally, we reviewed different low-level control methodologies that can handle collision detection and implement avoidance mechanisms for human–robot collaboration. The most frequently used control methods within the domain of collaborative robotics were summarized in tabular form with respect to parameters such as the collaborative robotic platform, collaboration operations, control objectives, and control methods. It was concluded that impedance control methods are the most widely used for physical HRC to address safety issues. Collision detection and avoidance in HRC is a very popular topic of ongoing research; however, human-caused disturbances during the synchronization of human–robot movements have not been well explored in the literature, and further investigation into control strategies can play a significant role in alleviating this problem.

Author Contributions

Conceptualization, A.O., A.H., J.M. and A.S.-M.; methodology, A.H. and J.M.; software, A.S.-M. and J.M.; validation, A.O., A.H. and J.M.; formal analysis, A.O.; investigation, A.H., J.M. and A.S.-M.; resources, A.O.; data curation, A.H.; writing—original draft preparation, A.H.; writing—review and editing, A.O., J.M. and A.S.-M.; visualization, A.O.; supervision, A.O.; project administration, A.O.; funding acquisition, A.O. All authors have read and agreed to the published version of the manuscript.

Funding

This work has received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Peshkin, M.A.; Colgate, J.E.; Wannasuphoprasit, W.; Moore, C.A.; Gillespie, R.B.; Akella, P. Cobot architecture. IEEE Trans. Robot. Autom. 2001, 17, 377–390. [Google Scholar] [CrossRef] [Green Version]
  2. KUKA. KUKA Robotics, 2021. Available online: https://www.kuka.com/ (accessed on 31 December 2022).
  3. Mourtzis, D. Simulation in the design and operation of manufacturing systems: State of the art and new trends. Int. J. Prod. Res. 2020, 58, 1927–1949. [Google Scholar] [CrossRef]
  4. MarketsMarkets. Collaborative Robot Market by Payload Capacity (up to 5 Kg, between 5 and 10 K, above 10 K), Industry, Application, and G. Forecast to 2025. Technical Report. 2021. Available online: https://www.marketsandmarkets.com/Market-Reports/collaborative-robot-market-194541294.html (accessed on 31 December 2022).
  5. Wang, X.V.; Kemény, Z.; Váncza, J.; Wang, L. Human–robot collaborative assembly in cyber-physical production: Classification framework and implementation. CIRP Ann. 2017, 66, 5–8. [Google Scholar] [CrossRef] [Green Version]
  6. Parsa, S.; Saadat, M. Human–robot collaboration disassembly planning for end-of-life product disassembly process. Robot. Comput.-Integr. Manuf. 2021, 71, 102170. [Google Scholar] [CrossRef]
  7. Gualtieri, L.; Rauch, E.; Vidoni, R. Emerging research fields in safety and ergonomics in industrial collaborative robotics: A systematic literature review. Robot. Comput.-Integr. Manuf. 2021, 67, 101998. [Google Scholar] [CrossRef]
  8. Zoltan, D.; S, D.K. Technology jump in the industry: Human–robot cooperation in production. Ind. Robot 2020, 47, 757–775. [Google Scholar] [CrossRef]
9. Hentout, A.; Aouache, M.; Maoudj, A.; Akli, I. Human–robot interaction in industrial collaborative robotics: A literature review of the decade 2008–2017. Adv. Robot. 2019, 33, 764–799.
10. Umbrico, A.; Orlandini, A.; Cesta, A. An Ontology for Human-Robot Collaboration. Procedia CIRP 2020, 93, 1097–1102.
11. Cherubini, A.; Passama, R.; Crosnier, A.; Lasnier, A.; Fraisse, P. Collaborative manufacturing with physical human–robot interaction. Robot. Comput.-Integr. Manuf. 2016, 40, 1–13.
12. Bi, Z.; Luo, M.; Miao, Z.; Zhang, B.; Zhang, W.; Wang, L. Safety assurance mechanisms of collaborative robotic systems in manufacturing. Robot. Comput.-Integr. Manuf. 2021, 67, 102022.
13. Schmidtler, J.; Knott, V.; Hölzel, C.; Bengler, K. Human Centered Assistance Applications for the working environment of the future. Occup. Ergon. 2015, 12, 83–95.
14. Matheson, E.; Minto, R.; Zampieri, E.G.G.; Faccio, M.; Rosati, G. Human–Robot Collaboration in Manufacturing Applications: A Review. Robotics 2019, 8, 100.
15. Wang, L.; Gao, R.; Váncza, J.; Krüger, J.; Wang, X.; Makris, S.; Chryssolouris, G. Symbiotic human–robot collaborative assembly. CIRP Ann. 2019, 68, 701–726.
16. Al-Yacoub, A.; Zhao, Y.; Eaton, W.; Goh, Y.; Lohse, N. Improving human robot collaboration through Force/Torque based learning for object manipulation. Robot. Comput.-Integr. Manuf. 2021, 69, 102111.
17. Rahman, S.M.; Wang, Y. Mutual trust-based subtask allocation for human–robot collaboration in flexible lightweight assembly in manufacturing. Mechatronics 2018, 54, 94–109.
18. Jiang, J.; Huang, Z.; Bi, Z.; Ma, X.; Yu, G. State-of-the-Art control strategies for robotic PiH assembly. Robot. Comput.-Integr. Manuf. 2020, 65, 101894.
19. Cheng, C.; Liu, S.; Wu, H.; Zhang, Y. Neural network–based direct adaptive robust control of unknown MIMO nonlinear systems using state observer. Int. J. Adapt. Control Signal Process. 2020, 34, 1–14.
20. Li, S.; Wang, H.; Zhang, S. Human-Robot Collaborative Manipulation with the Suppression of Human-caused Disturbance. J. Intell. Robot. Syst. 2021, 102, 1–11.
21. Chen, Z.; Huang, F.; Chen, W.; Zhang, J.; Sun, W.; Chen, J.; Gu, J.; Zhu, S. RBFNN-Based Adaptive Sliding Mode Control Design for Delayed Nonlinear Multilateral Telerobotic System With Cooperative Manipulation. IEEE Trans. Ind. Inform. 2020, 16, 1236–1247.
22. Abadi, A.S.S.; Hosseinabadi, P.A.; Mekhilef, S. Fuzzy adaptive fixed-time sliding mode control with state observer for a class of high-order mismatched uncertain systems. Int. J. Control Autom. Syst. 2020, 18, 2492–2508.
23. Scholtz, J. Theory and evaluation of human robot interactions. In Proceedings of the 36th Annual Hawaii International Conference on System Sciences, Big Island, HI, USA, 6–9 January 2003; p. 10.
24. Cherubini, A.; Passama, R.; Meline, A.; Crosnier, A.; Fraisse, P. Multimodal control for human–robot cooperation. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 2202–2207.
25. Hua, C.; Yang, Y.; Liu, P.X. Output-feedback adaptive control of networked teleoperation system with time-varying delay and bounded inputs. IEEE/ASME Trans. Mechatron. 2014, 20, 2009–2020.
26. Zhai, D.H.; Xia, Y. Adaptive fuzzy control of multilateral asymmetric teleoperation for coordinated multiple mobile manipulators. IEEE Trans. Fuzzy Syst. 2015, 24, 57–70.
27. Chen, Z.; Huang, F.; Sun, W.; Gu, J.; Yao, B. RBF-neural-network-based adaptive robust control for nonlinear bilateral teleoperation manipulators with uncertainty and time delay. IEEE/ASME Trans. Mechatron. 2019, 25, 906–918.
28. Rosenstrauch, M.J.; Krüger, J. Safe human–robot-collaboration-introduction and experiment using ISO/TS 15066. In Proceedings of the 3rd International Conference on Control, Automation and Robotics (ICCAR), Nagoya, Japan, 24–26 April 2017; pp. 740–744.
29. Villani, V.; Pini, F.; Leali, F.; Secchi, C. Survey on human–robot collaboration in industrial settings: Safety, intuitive interfaces and applications. Mechatronics 2018, 55, 248–266.
30. Asfour, T.; Kaul, L.; Wächter, M.; Ottenhaus, S.; Weiner, P.; Rader, S.; Grimm, R.; Zhou, Y.; Grotz, M.; Paus, F.; et al. ARMAR-6: A Collaborative Humanoid Robot for Industrial Environments. In Proceedings of the 2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids), Beijing, China, 6–9 November 2018; pp. 447–454.
31. Tang, L.; Jiang, Y.; Lou, J. Reliability architecture for collaborative robot control systems in complex environments. Int. J. Adv. Robot. Syst. 2016, 13, 17.
32. Ye, Y.; Li, P.; Li, Z.; Xie, F.; Liu, X.J.; Liu, J. Real-Time Design Based on PREEMPT_RT and Timing Analysis of Collaborative Robot Control System. In Intelligent Robotics and Applications; Liu, X.J., Nie, Z., Yu, J., Xie, F., Song, R., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 596–606.
33. Dumonteil, G.; Manfredi, G.; Devy, M.; Confetti, A.; Sidobre, D. Reactive planning on a collaborative robot for industrial applications. In Proceedings of the 2015 12th International Conference on Informatics in Control, Automation and Robotics (ICINCO), Colmar, France, 21–23 July 2015; Volume 2, pp. 450–457.
34. Parusel, S.; Haddadin, S.; Albu-Schäffer, A. Modular state-based behavior control for safe human-robot interaction: A lightweight control architecture for a lightweight robot. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 4298–4305.
35. Xi, Q.; Zheng, C.W.; Yao, M.Y.; Kou, W.; Kuang, S.L. Design of a Real-time Robot Control System oriented for Human-Robot Cooperation. In Proceedings of the 2021 International Conference on Artificial Intelligence and Electromechanical Automation (AIEA), Guangzhou, China, 14–16 May 2021; pp. 23–29.
36. Gambao, E.; Hernando, M.; Surdilovic, D. A new generation of collaborative robots for material handling. In Proceedings of the International Symposium on Automation and Robotics in Construction; IAARC Publications: Eindhoven, The Netherlands, 2012; Volume 29, p. 1.
37. Fong, T.; Thorpe, C.; Baur, C. Advanced Interfaces for Vehicle Teleoperation: Collaborative Control, Sensor Fusion Displays, and Remote Driving Tools. Auton. Robot. 2001, 11, 77–85.
38. Haddadin, S.; Croft, E. Physical human–robot interaction. In Springer Handbook of Robotics; Springer: Cham, Switzerland, 2016; pp. 1835–1874.
39. Skrinjar, L.; Slavič, J.; Boltežar, M. A review of continuous contact-force models in multibody dynamics. Int. J. Mech. Sci. 2018, 145, 171–187.
40. Ahmadizadeh, M.; Shafei, A.; Fooladi, M. Dynamic analysis of multiple inclined and frictional impact-contacts in multi-branch robotic systems. Appl. Math. Model. 2021, 91, 24–42.
41. Korayem, M.; Shafei, A.; Seidi, E. Symbolic derivation of governing equations for dual-arm mobile manipulators used in fruit-picking and the pruning of tall trees. Comput. Electron. Agric. 2014, 105, 95–102.
42. Shafei, A.; Shafei, H. Planar multibranch open-loop robotic manipulators subjected to ground collision. J. Comput. Nonlinear Dyn. 2017, 12, 061005.
43. Ding, H.; Reißig, G.; Wijaya, K.; Bortot, D.; Bengler, K.; Stursberg, O. Human arm motion modeling and long-term prediction for safe and efficient Human-Robot-Interaction. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 5875–5880.
44. Thomaz, A.; Hoffman, G.; Cakmak, M. Computational Human-Robot Interaction. Found. Trends Robot. 2016, 4, 105–223.
45. Lasota, P.A.; Song, T.; Shah, J.A. A Survey of Methods for Safe Human-Robot Interaction; Now Publishers: Delft, The Netherlands, 2017; p. 1.
46. Wang, W.; Li, R.; Chen, Y.; Diekel, Z.M.; Jia, Y. Facilitating Human–Robot Collaborative Tasks by Teaching-Learning-Collaboration From Human Demonstrations. IEEE Trans. Autom. Sci. Eng. 2019, 16, 640–653.
47. Ragaglia, M.; Zanchettin, A.M.; Rocco, P. Safety-aware trajectory scaling for Human-Robot Collaboration with prediction of human occupancy. In Proceedings of the 2015 International Conference on Advanced Robotics (ICAR), Istanbul, Turkey, 27–31 July 2015; pp. 85–90.
48. Krämer, M.; Rösmann, C.; Hoffmann, F.; Bertram, T. Model predictive control of a collaborative manipulator considering dynamic obstacles. Optim. Control Appl. Methods 2020, 41, 1211–1232.
49. Hogan, N. Impedance Control: An Approach to Manipulation: Part I—Theory. J. Dyn. Syst. Meas. Control 1985, 107, 1–7.
50. Ott, C.; Mukherjee, R.; Nakamura, Y. Unified Impedance and Admittance Control. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–7 May 2010; pp. 554–561.
51. Hogan, N. Impedance Control: An Approach to Manipulation: Part II—Implementation. J. Dyn. Syst. Meas. Control 1985, 107, 8–16.
52. Benedictis, C.D.; Franco, W.; Maffiodo, D.; Ferraresi, C. Control of Force Impulse in Human–Machine Impact; Springer International Publishing: Cham, Switzerland, 2018; pp. 956–964.
53. Duan, J.; Gan, Y.; Chen, M.; Dai, X. Adaptive variable impedance control for dynamic contact force tracking in uncertain environment. Robot. Auton. Syst. 2018, 102, 54–65.
54. Tagliamonte, N.L.; Sergi, F.; Accoto, D.; Carpino, G.; Guglielmelli, E. Double actuation architectures for rendering variable impedance in compliant robots: A review. Mechatronics 2012, 22, 1187–1203.
55. Li, X.; Pan, Y.; Chen, G.; Yu, H. Adaptive Human–Robot Interaction Control for Robots Driven by Series Elastic Actuators. IEEE Trans. Robot. 2017, 33, 169–182.
56. Albu-Schaffer, A.; Hirzinger, G. Cartesian impedance control techniques for torque controlled light-weight robots. In Proceedings of the 2002 IEEE International Conference on Robotics and Automation (Cat. No.02CH37292), Washington, DC, USA, 11–15 May 2002; Volume 1, pp. 657–663.
57. Kosuge, K.; Sato, M.; Kazamura, N. Mobile robot helper. In Proceedings of the IEEE International Conference on Robotics and Automation. Symposia Proceedings (Cat. No.00CH37065), San Francisco, CA, USA, 24–28 April 2000; Volume 1, pp. 583–588.
58. Geravand, M.; Flacco, F.; Luca, A.D. Human–robot physical interaction and collaboration using an industrial robot with a closed control architecture. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 4000–4007.
59. Salter, T.; Michaud, F.; Létourneau, D.; Lee, D.C.; Werry, I.P. Using proprioceptive sensors for categorizing human–robot interactions. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI '07), Arlington, VA, USA, 10–12 March 2007; pp. 105–112.
60. Ćehajić, D.; Erhart, S.; Hirche, S. Grasp pose estimation in human–robot manipulation tasks using wearable motion sensors. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 1031–1036.
61. Tong, X.; Li, Z.; Han, G.; Liu, N.; Su, Y.; Ning, J.; Yang, F. Adaptive EKF Based on HMM Recognizer for Attitude Estimation Using MEMS MARG Sensors. IEEE Sens. J. 2018, 18, 3299–3310.
62. Kimmel, M.; Hirche, S. Invariance Control for Safe Human–Robot Interaction in Dynamic Environments. IEEE Trans. Robot. 2017, 33, 1327–1342.
63. Albu-Schäffer, A.; Ott, C.; Hirzinger, G. A Unified Passivity-based Control Framework for Position, Torque and Impedance Control of Flexible Joint Robots. Int. J. Robot. Res. 2007, 26, 23–39.
64. Gribovskaya, E.; Kheddar, A.; Billard, A. Motion learning and adaptive impedance for robot control during physical interaction with humans. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 4326–4332.
65. Lu, W.; Meng, Q. Impedance control with adaptation for robotic manipulations. IEEE Trans. Robot. Autom. 1991, 7, 408–415.
66. Zanchettin, A.M.; Ceriani, N.M.; Rocco, P.; Ding, H.; Matthias, B. Safety in human–robot collaborative manufacturing environments: Metrics and control. IEEE Trans. Autom. Sci. Eng. 2016, 13, 882–893.
67. Cherubini, A.; Navarro-Alarcon, D. Sensor-Based Control for Collaborative Robots: Fundamentals, Challenges, and Opportunities. Front. Neurorobot. 2021, 14, 576846.
68. Erden, M.S.; Billard, A. Hand Impedance Measurements During Interactive Manual Welding With a Robot. IEEE Trans. Robot. 2015, 31, 168–179.
69. Lippiello, V.; Siciliano, B.; Villani, L. Robot Interaction Control Using Force and Vision. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006; pp. 1470–1475.
70. Santis, A.D.; Lippiello, V.; Siciliano, B.; Villani, L. Human-Robot Interaction Control Using Force and Vision. In Advances in Control Theory and Applications; Springer: Berlin/Heidelberg, Germany, 2007; Volume 353.
71. Avanzini, G.B.; Ceriani, N.M.; Zanchettin, A.M.; Rocco, P.; Bascetta, L. Safety Control of Industrial Robots Based on a Distributed Distance Sensor. IEEE Trans. Control Syst. Technol. 2014, 22, 2127–2140.
72. Kouris, A.; Dimeas, F.; Aspragathos, N. A Frequency Domain Approach for Contact Type Distinction in Human–Robot Collaboration. IEEE Robot. Autom. Lett. 2018, 3, 720–727.
73. Schimmack, M.; Haus, B.; Mercorelli, P. An Extended Kalman Filter as an Observer in a Control Structure for Health Monitoring of a Metal–Polymer Hybrid Soft Actuator. IEEE/ASME Trans. Mechatron. 2018, 23, 1477–1487.
74. Lacevic, B.; Rocco, P. Kinetostatic danger field—A novel safety assessment for human–robot interaction. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; pp. 2169–2174.
75. Parker, C.A.C.; Croft, E.A. Design & Personalization of a Cooperative Carrying Robot Controller. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation, Saint Paul, MN, USA, 14–18 May 2012; pp. 3916–3921.
76. Meziane, R.; Li, P.; Otis, M.J.; Ezzaidi, H.; Cardou, P. Safer hybrid workspace using human–robot interaction while sharing production activities. In Proceedings of the 2014 IEEE International Symposium on Robotic and Sensors Environments (ROSE) Proceedings, Timisoara, Romania, 16–18 October 2014; pp. 37–42.
77. Michieletto, S.; Ghidoni, S.; Pagello, E.; Moro, M. Why teach robotics using ROS. J. Autom. Mob. Robot. Intell. Syst. 2014, 60–68.
78. Fong, T.; Thorpe, C.; Baur, C. Collaboration, Dialogue, Human-Robot Interaction; Springer: Berlin/Heidelberg, Germany, 2003; pp. 255–266.
79. Kragic, D.; Gustafson, J.; Karaoguz, H.; Jensfelt, P.; Krug, R. Interactive, Collaborative Robots: Challenges and Opportunities; AAAI Press: Palo Alto, CA, USA, 2018; pp. 18–25.
80. Sheng, W.; Thobbi, A.; Gu, Y. An Integrated Framework for Human–Robot Collaborative Manipulation. IEEE Trans. Cybern. 2015, 45, 2030–2041.
81. Hoffman, G. Evaluating Fluency in Human–Robot Collaboration. IEEE Trans. Hum.-Mach. Syst. 2019, 49, 209–218.
82. Bascetta, L.; Ferretti, G.; Rocco, P.; Ardö, H.; Bruyninckx, H.; Demeester, E.; Lello, E.D. Towards safe human–robot interaction in robotic cells: An approach based on visual tracking and intention estimation. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 25–30 September 2011; pp. 2971–2978.
83. Berger, E.; Vogt, D.; Haji-Ghassemi, N.; Jung, B.; Amor, H.B. Inferring guidance information in cooperative human-robot tasks. In Proceedings of the 2013 13th IEEE-RAS International Conference on Humanoid Robots (Humanoids), Atlanta, GA, USA, 15–17 October 2013; pp. 124–129.
84. Liu, P.; Yu, H.; Cang, S. Adaptive neural network tracking control for underactuated systems with matched and mismatched disturbances. Nonlinear Dyn. 2019, 98, 1447–1464.
85. Schaal, S. Dynamic movement primitives-a framework for motor control in humans and humanoid robotics. In Adaptive Motion of Animals and Machines; Springer: Tokyo, Japan, 2006; pp. 261–280.
86. Shyam, R.A.; Lightbody, P.; Das, G.; Liu, P.; Gomez-Gonzalez, S.; Neumann, G. Improving local trajectory optimisation using probabilistic movement primitives. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 2666–2671.
87. Ding, H.; Heyn, J.; Matthias, B.; Staab, H. Structured collaborative behavior of industrial robots in mixed human–robot environments. In Proceedings of the 2013 IEEE International Conference on Automation Science and Engineering (CASE), Madison, WI, USA, 17–20 August 2013; pp. 1101–1106.
88. Ding, H.; Schipper, M.; Matthias, B. Collaborative behavior design of industrial robots for multiple human–robot collaboration. In Proceedings of the IEEE ISR 2013, Seoul, Republic of Korea, 24–26 October 2013; pp. 1–6.
89. Hawkins, K.P.; Bansal, S.; Vo, N.N.; Bobick, A.F. Anticipating human actions for collaboration in the presence of task and sensor uncertainty. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 2215–2222.
90. Mariotti, E.; Magrini, E.; Luca, A.D. Admittance Control for Human-Robot Interaction Using an Industrial Robot Equipped with a F/T Sensor. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 6130–6136.
91. Roveda, L.; Iannacci, N.; Tosatti, L.M. Discrete-Time Formulation for Optimal Impact Control in Interaction Tasks. J. Intell. Robot. Syst. 2018, 90, 407–417.
92. Rahman, S.M.; Wang, Y.; Walker, I.D.; Mears, L.; Pak, R.; Remy, S. Trust-based compliant robot-human handovers of payloads in collaborative assembly in flexible manufacturing. In Proceedings of the 2016 IEEE International Conference on Automation Science and Engineering (CASE), Fort Worth, TX, USA, 21–25 August 2016; pp. 355–360.
93. Rahman, S.M.; Liao, Z.; Jiang, L.; Wang, Y. A regret-based autonomy allocation scheme for human–robot shared vision systems in collaborative assembly in manufacturing. In Proceedings of the IEEE International Conference on Automation Science and Engineering (CASE), Fort Worth, TX, USA, 21–25 August 2016; pp. 897–902.
94. Whitsell, B.; Artemiadis, P. Physical human–robot interaction (pHRI) in 6 DOF with asymmetric cooperation. IEEE Access 2017, 5, 10834–10845.
95. Bös, J.; Wahrburg, A.; Listmann, K.D. Iteratively learned and temporally scaled force control with application to robotic assembly in unstructured environments. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 3000–3007.
96. Vemula, B.; Matthias, B.; Ahmad, A. A design metric for safety assessment of industrial robot design suitable for power- and force-limited collaborative operation. Int. J. Intell. Robot. Appl. 2018, 2, 226–234.
97. Darvish, K.; Bruno, B.; Simetti, E.; Mastrogiovanni, F.; Casalino, G. Interleaved online task planning, simulation, task allocation and motion control for flexible human–robot cooperation. In Proceedings of the 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Nanjing, China, 27–31 August 2018; pp. 58–65.
98. Wojtynek, M.; Oestreich, H.; Beyer, O.; Wrede, S. Collaborative and robot-based plug & produce for rapid reconfiguration of modular production systems. In Proceedings of the 2017 IEEE/SICE International Symposium on System Integration (SII), Taipei, Taiwan, 11–14 December 2017; pp. 1067–1073.
99. Michalos, G.; Kousi, N.; Karagiannis, P.; Gkournelos, C.; Dimoulas, K.; Koukas, S.; Mparis, K.; Papavasileiou, A.; Makris, S. Seamless human robot collaborative assembly–An automotive case study. Mechatronics 2018, 55, 194–211.
100. Tsarouchi, P.; Makris, S.; Michalos, G.; Matthaiakis, S.; Chatzigeorgiou, X.; Athanasatos, A.; Stefos, M.; Aivaliotis, P.; Chryssolouris, G. ROS Based Coordination of Human Robot Cooperative Assembly Tasks-An Industrial Case Study. Procedia CIRP 2015, 37, 254–259.
101. Michieletto, S.; Tosello, E.; Romanelli, F.; Ferrara, V.; Menegatti, E. ROS-I Interface for COMAU Robots. In Simulation, Modeling, and Programming for Autonomous Robots; Brugali, D., Broenink, J.F., Kroeger, T., MacDonald, B.A., Eds.; Springer International Publishing: Cham, Switzerland, 2014; pp. 243–254.
102. Safeea, M.; Neto, P.; Béarée, R. On-line collision avoidance for collaborative robot manipulators by adjusting off-line generated paths: An industrial use case. Robot. Auton. Syst. 2019, 119, 278–288.
Figure 1. Development of collaborative robotic system.
Figure 2. Classification of human–robot interactions.
Figure 3. Block diagram of collaborative control architecture.
Figure 4. Interactive collaborative control framework.
Figure 5. Force profiles of collision occurrence.
Figure 6. Controller layout for SEA robots.
Figure 7. Simplified diagram for an admittance-based collaborative task control.
Figure 8. Invariance control architecture.
Figure 9. Experimental setup at WUT.
Table 1. Types of collaborative robotic operations.

| Robotic Operation | Human Input | Speed | Techniques | Torques |
| --- | --- | --- | --- | --- |
| Power- and force-limiting | Application-dependent | Maximum determined speed to limit forces | The robot cannot exceed the power and force limits | Maximum determined torques |
| Speed and separation monitoring | No human control in the collaborative workspace | Safety-rated monitored speed | Limited contact between robot and human | As necessary to maintain the minimum separation distance and execute the application |
| Hand guiding | Emergency stop | Safety-rated monitored speed | Motion controlled with direct operator input | Operator input |
| Safety-rated monitored stop | Operator has no control | Zero when a human is in the collaborative workspace | Robotic operation stops if a human is present | Gravity and load compensation only |
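To make the speed rules in Table 1 concrete, the following minimal Python sketch maps the four operation modes to a commanded-speed limit. It is an illustration only, not an implementation from any cited study or from ISO/TS 15066; all names and numeric thresholds (`min_separation_m`, `v_max`, `v_safe`) are hypothetical assumptions.

```python
# Minimal sketch: mapping the collaborative operation modes of Table 1
# to an allowed end-effector speed. All thresholds are illustrative.
from enum import Enum, auto

class Mode(Enum):
    POWER_FORCE_LIMITING = auto()
    SPEED_SEPARATION_MONITORING = auto()
    HAND_GUIDING = auto()
    SAFETY_RATED_MONITORED_STOP = auto()

def speed_limit(mode: Mode, human_in_workspace: bool,
                separation_m: float, min_separation_m: float = 0.5,
                v_max: float = 1.0, v_safe: float = 0.25) -> float:
    """Return an allowed end-effector speed in m/s for the given mode."""
    if mode is Mode.SAFETY_RATED_MONITORED_STOP:
        # Speed is zero whenever a human is in the collaborative workspace.
        return 0.0 if human_in_workspace else v_max
    if mode is Mode.SPEED_SEPARATION_MONITORING:
        # Stop if the minimum separation distance is violated;
        # otherwise move at the safety-rated monitored speed.
        return 0.0 if separation_m < min_separation_m else v_safe
    if mode is Mode.HAND_GUIDING:
        # Motion follows direct operator input at safety-rated speed.
        return v_safe if human_in_workspace else 0.0
    # Power- and force-limiting: speed capped so contact forces stay bounded.
    return min(v_max, v_safe)

print(speed_limit(Mode.SPEED_SEPARATION_MONITORING, True, 0.3))  # -> 0.0
```

The design point the sketch captures is that every mode reduces, at run time, to a speed constraint conditioned on human presence or separation distance; real systems would derive the thresholds from a risk assessment rather than hard-code them.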
Table 2. Comparison of control methods applied to collaborative robots in HR-collaborative assembly applications.

| Collaborative Robot | Robotic Platform | Collaborative Robot Operation | Collaboration Configuration | Collaborative Interaction | Collaborative Triggering Parameter | Physical HR Interaction | Collaborative Scenarios | Goal | Sensors | Control Methods | Control Objective | Performance | Year (Reference) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ABB FRIDA | Dual-arm robot | Speed and separation monitoring | One robot–two humans | Two interaction zones | Distance | Yes | Automatic | Safety | Microsoft Kinect | Impedance control | HR collision avoidance | Improves collision-free path for each robotic arm | 2013 [87] |
| ABB FRIDA | Dual-arm robot | Speed and separation monitoring | Multiple robots–multiple humans | Two interaction zones | Distance | Yes | Automatic | Productivity | Microsoft Kinect | Impedance control | Reduce speed | Improves robotic functionality by reducing uptime under safety constraints | 2013 [88] |
| Universal Robots | One-arm robot | Power- and force-limiting | Multiple robots–multiple humans | One interaction zone | Euclidean distance | Yes | Automatic | Productivity | Position, velocity, camera | Control strategy | Handle uncertainty and perceptual perturbations | Improves collaborative task efficiency by reducing disturbances for multi-path anticipation | 2014 [89] |
| KUKA KR5 | One-arm robot | Hand guiding | One robot–one human | One interaction zone | Distance | Yes | Automatic | Safety | Position, force | Web-based control systems | HR collision avoidance | Improves assembly operation | 2015 [90] |
| KUKA LWR4+ | One-arm robot | Hand guiding | One robot–one human | One interaction zone | Distance | Yes | Automatic | Productivity | Force | Safe and task-consistent control | HR collision avoidance | Improves safety during HR interaction | 2015 [91] |
| Kinova | One-arm robot | Speed and separation monitoring | One robot–one human | One interaction zone | Human trust threshold | Yes | Automatic | Safety | Vision | Proprioceptive sensor-based control | N/A | Improves HR interaction by trust-based handover in motion planning | 2016 [92] |
| Rethink Baxter | Dual-arm robot | Hand guiding | One robot–one human | One interaction zone | HR team fluency, human cognitive workload, human trust | Yes | Automatic | Productivity | Vision | Exteroceptive control | Suboptimal autonomy allocation | HR interaction is attained for sub-optimal autonomy allocation in different sensing modes | 2016 [93] |
| KUKA LWR IV | Dual-arm robot | Hand guiding | One robot–one human | One interaction zone | Vision | Yes | Automatic | Safety | Vision, force | Joint-space kinematic control | HR collision avoidance | Intrinsic collision detection obeys safety standards using trajectory optimization and visual gesture monitoring | 2016 [11] |
| KUKA LBR iiwa | One-arm robot | Hand guiding | One robot–one human | One interaction zone | Displacement | Yes | Automatic | Productivity | Force | Impedance control | Motion trajectory tracking | Controller shows smooth trajectory following in assembly application | 2017 [94] |
| ABB YuMi | Dual-arm robot | No | One robot–one human | One interaction zone | Stiffness | Yes | Manual | Productivity | Reduce contact force and trajectory tracking | Iterative learning and temporally scaled force control | Productivity | Increases assembly speed and adjusts the reference trajectory | 2017 [95] |
| KUKA LWR | One-arm robot | Hand guiding | One robot–one human | One interaction zone | Force | Yes | Manual | Safety, productivity | Position, force | Invariance control | HR collision avoidance | Controller provides larger damping with a dynamic constraint perpendicular to the assembly line | 2017 [68] |
| KUKA KR5 | One-arm robot | Power- and force-limiting | One robot–one human | One interaction zone | Velocity | Yes | Automatic | Productivity, safety | Position, force | Impedance control | HR collision detection and avoidance | Fast collision detection and safe robot reaction to unexpected collisions | 2017 [58] |
| DLR | One-arm robot | Power- and force-limiting | One robot–one human | One interaction zone | Force | Yes | Automatic | Safety | Force | Control strategy | HR collision detection | Effect of contact force and human body elasticity is verified in simulation for collisions | 2017 [96] |
| Baxter Robot | Dual-arm robot | Hand guiding | One robot–one human | One interaction zone | Velocity | No | No | Productivity | Microsoft Kinect, acceleration | Control approach | Online motion tracking | Online perception and task planning is implemented for collaborative assembly | 2018 [97] |
| KUKA LBR iiwa | Two-finger gripper robot | Hand guiding | One robot–one human | One interaction zone | Position mounting points | Yes | Manual | Productivity and safety | Force | Exteroceptive sensor-based control | Collision detection with trajectory tracking | Adaptation and verification of robot behavior is performed through a simulation-based planning subsystem | 2017 [98] |
| COMAU | One-arm robot | Hand guiding | One robot–one human | One interaction zone | Force | Yes | Manual | Safety | Force | Admittance control | HR collision avoidance | Cycle time and the human operator's strain are reduced | 2018 [99] |
| Universal Robot | One-arm robot | Hand guiding | One robot–one human | One interaction zone | Position | Yes | Automatic | Safety | Distance | Impedance and admittance control | HR collision detection | Safe HR collaboration is achieved | 2018 [11] |
| Cobot | Dual-arm robot | Hand guiding | One robot–one human | One interaction zone | Position | No | Manual | Productivity | Camera | Impedance and admittance control | HR task coordination | Coordination of an HR assembly task scenario is simulated on the ROS platform | 2018 [100] |
| COMAU Smart5 SiX | One-arm robot | Hand guiding | One robot–one human | One interaction zone | Position | Yes | Manual | Productivity | Camera | Multi-modal control | Motion tracking in HR collaboration | Controller guarantees the same trajectory interpolation | 2018 [101] |
| KUKA | One-arm robot | Hand guiding | One robot–one human | One interaction zone | Position | Yes | Automatic | Safety | Force | State observer control | HR collision avoidance | Collision avoidance guaranteed through repulsion vector reshaping | 2019 [102] |
| Cobot | One-arm robot | Hand guiding | One robot–one human | One interaction zone | Position | Yes | Automatic | Productivity | Position | Admittance control | HR collision avoidance | 3D motion tracking is achieved with accuracy and stability | 2019 [61] |
| Cobot | One-arm robot | Hand guiding | One robot–one human | One interaction zone | Position, vision | No | Manual | Safety | Position | Exteroceptive control | HR collision avoidance | Collision avoidance algorithm is simulated and tested | 2019 [37] |
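Most entries in the "Control Methods" column above are impedance- or admittance-type schemes, which impose the same target interaction dynamics. As a brief generic recap in standard textbook notation following Hogan [49,51] (the symbols are not those of any single study in the table), the controller enforces

$$ M_d\,\ddot{\tilde{x}} + D_d\,\dot{\tilde{x}} + K_d\,\tilde{x} = F_{\mathrm{ext}}, \qquad \tilde{x} = x - x_d, $$

where $M_d$, $D_d$, and $K_d$ are the desired inertia, damping, and stiffness matrices, $x_d$ is the desired end-effector trajectory, and $F_{\mathrm{ext}}$ is the external (human contact) force. Impedance implementations measure motion and command force, whereas admittance implementations measure force and command motion, which is why the force-sensor-equipped industrial platforms in the table tend toward admittance control.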