Review

Smart Industrial Robot Control Trends, Challenges and Opportunities within Manufacturing

Institute of Electronics and Computer Science, 14 Dzerbenes St., LV-1006 Riga, Latvia
*
Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(2), 937; https://doi.org/10.3390/app12020937
Submission received: 24 November 2021 / Revised: 13 December 2021 / Accepted: 24 December 2021 / Published: 17 January 2022
(This article belongs to the Special Issue Smart Robots for Industrial Applications)

Abstract

Industrial robots and associated control methods are continuously developing. With the recent progress in the field of artificial intelligence, new perspectives in industrial robot control strategies have emerged, and prospects towards cognitive robots have arisen. AI-based robotic systems are rapidly becoming one of the main areas of focus, as flexibility and a deep understanding of complex manufacturing processes are becoming the key advantage to raise competitiveness. This review first expresses the significance of smart industrial robot control in manufacturing towards future factories by listing the needs and requirements and by introducing the envisioned concept of smart industrial robots. Secondly, the current trends that are based on different learning strategies and methods are explored. Current computer vision, deep reinforcement learning and imitation learning based robot control approaches and their possible applications in manufacturing are investigated. Gaps, challenges, limitations and open issues are identified along the way.

1. Introduction

Industrial robots and their control methods have come a long way since their first appearance in manufacturing [1]. The essence of industrial robots is to automate repetitive, dangerous, heavy-duty and resource-intensive tasks, and eventually tasks that demand human intelligence. An industrial robot is defined by ISO 8373:2012 as follows: “An automatically controlled, reprogrammable, multipurpose manipulator programmable in three or more axes, which can be either fixed in place or mobile for use in industrial automation applications” [2]. The core element of an industrial robot is the combination of the motors that drive the robot and the control algorithms. A precise sequence of executed actions enables an industrial robot to achieve desirable trajectories and succeed at the given tasks. There are several different mechanical structures of industrial robots [3], and the choice of robot structure depends on the available space and the task itself. Due to their universality, the most commonly used industrial robots have an articulated mechanical design that usually consists of at least six degrees of freedom (DOF).
The third industrial revolution demonstrated to industry and society how the usage of industrial robots helps manufacturing companies become more effective and competitive in their field [4]. Since their first appearance, industrial robots have been applied to automate manufacturing processes on many production floors, thereby transforming the manufacturing sector. Similar developments can be observed today in modern industry in the era of digitization, when Industry 4.0 [5] concepts are being applied and industrial robots with more advanced capabilities are one of the core elements of the transformation process and enablers of smart manufacturing.
Substantial technological developments, new applications and global trends such as rising labour costs and population ageing are only some of the factors behind the growth of annual shipments of industrial robots. In fact, from 2015 to 2020, annual installations of industrial robots increased by 9% on average each year [6]. However, the majority of industrial robots currently installed on production floors have been traditionally programmed to meet the specific needs of the respective factories and are mostly tailored to perform “dull, dirty, or dangerous” jobs [7]. Typically, such industrial robot systems are neither intelligent nor able to adapt to a dynamic environment or learn tasks by themselves. Therefore, a significant amount of time is spent programming industrial robots [8] by handcrafting trajectories for one specific task, which is incredibly laborious, complicated and error-prone. Moreover, the environment must be deterministic, and everything must be in place and precisely positioned. If the design of the product changes, the programming must be performed all over again. Changes in the marketplace translate into uncertainty for manufacturing and end-user mobility services. Customers today want new models or versions of products faster than ever before, and the way for businesses to succeed is by being flexible, smart and effective in the manufacturing process.
Factories of today are still effectively designed for a single purpose, which means there is little or no room for flexibility in terms of product design changes. Looking at smart manufacturing and digitization trends [9], we see that future factories will be multi-purpose and able to adapt to new designs in a very short amount of time. However, the possibilities that come with digitization in smart manufacturing are not yet fully explored or exploited, although some of the technologies have already marked their spot for a long-term stay in the manufacturing sector. AI-based approaches have already been internationally accepted as the main driver [10] of the transformation and digitization of factories, as flexibility and a deep understanding of complex manufacturing processes are becoming the key advantage to raise competitiveness [11]. The smart factory is, in a way, a manufacturing solution driven by smart industrial robots, acting as one of the key elements of Industry 4.0 [12] and as an enabling technology of Industry 5.0, where the creativity of human experts in conjunction with smart, efficient and accurate machines is explored [13].
Smart industrial robots, also referred to as intelligent industrial robots in the literature, have been described as a “remarkably useful combination of a manipulator, sensors and controls” [14]. Even though the need for smart industrial robots and the first discussions date back to the 1980s [15], the high-level idea remains the same: a wide variety of sensors in combination with reasoning abilities and control mechanisms are used to achieve the desired industrial robot motions.
Industrial robots play an important role in the automation process, and smart industrial robots even more so in the context of Industry 4.0 and 5.0. However, such intelligent control methods are usually not provided by industrial robot manufacturers, if provided at all. The majority of smart industrial robot control methods are still in the research phase or are being commercialized or integrated for one specific task. The path from a solution validated in the laboratory to one proven in an operational environment can be quite long and difficult. It becomes even more complex for the engineers who need to apply it in their manufacturing process and the operators who need to maintain the flawless operation of the machine. This is one of the reasons why the industry lacks knowledge of the opportunities that smart industrial robots can offer.
Industrial robots and associated control methods are continuously developing. With the recent progress in the field of artificial intelligence [16], new perspectives in industrial robot control strategies have emerged, and prospects of achieving more human-like performance have arisen. The goal of this article is twofold. First, in Section 2, we aim to introduce the concept of smart industrial robots, list the functionalities required of them and express their significance in manufacturing and future factories. Second, we review the current trends in smart industrial robot control that are based on different learning strategies and methods in Section 3, Section 4 and Section 5. We investigate how they are applied in the automation of industrial processes by considering the advantages and disadvantages of each control mechanism. In doing so, we also identify, in Section 6, gaps, current limitations, challenges and open issues that need to be solved to compete with the dexterity and reasoning abilities of human workers.

2. Significance of Smart Industrial Robots in Manufacturing

Since the first discussions on smart industrial robots, developments and achievements in the sensor field have grown considerably [17]; similarly, the trends in industrial robot control methods have significantly changed. The achievements in the field of AI contribute greatly to smart industrial robot control trends. The use of AI in robotic systems is rapidly becoming one of the main areas of focus, as the manufacturing sector requires increased performance, adaptability to product variations, increased safety, reduced costs, etc. [18]. However, these requirements are neither feasible nor sustainable to achieve with standard control methods. Traditional industrial robot control methods and techniques resemble single-purpose factories [19]: methods that are focused on a single task are hardly adjustable to different scenarios, if adjustable at all. As one of the criteria to succeed is to be flexible to changes, industrial robots need algorithms that can adjust to different scenarios with minor modifications that can be performed by the machine operators.
The need for smart industrial robot control methods arises when there is uncertainty in the environment. By the environment in this context, we mainly mean the region that can be reached by the robot, i.e., the robot's workspace. Typically this workspace scales down to a region of interest or the objects of interest that the robot manipulates. Nevertheless, there can be different types of uncertainties in the robot's workspace, such as randomly placed objects [20], dynamic obstacles [21], human presence, etc. Usually, such tasks cannot be preprogrammed, and the robotic system strictly depends on the information that it receives from the environment and the learned strategies. To avoid collisions, succeed at the given tasks and interact with the environment, the usage of additional sensor systems is beneficial; for example, in the field of human–robot collaboration, at least 13 different sensors/devices are used [11]. Additionally, there is also a need to reduce the time spent on programming industrial robots, as the traditional approach of handcrafting trajectories [8] is laborious, complicated and error-prone.
Smart industrial robot control is an important element yet only a fraction of the smart manufacturing ecosystem. In this paper, the focus is on components, methods and strategies that act as enablers of smart industrial robot control. Therefore, only the required core functionalities [22,23] that differentiate them from traditional industrial robots are investigated, and these are essentially oriented towards cognitive robotics [24]. The overall concept of smart industrial robot control follows the principle of “see-think-act”. Even though this principle was originally addressed in the field of mobile robotics [25], it is also becoming more relevant in the context of smart manufacturing. The diagram illustrated in Figure 1 is built upon this principle and integrates the following core functionalities of smart industrial robots:
  • Perception;
  • High-level instruction and context-aware task execution;
  • Knowledge acquisition and generalization;
  • Adaptive planning.
Perception is understood as a system that provides the robot with perceiving, comprehending and reasoning abilities. This includes sensory data processing, representation and interpretation [26]. Within the perception functionality, several key elements can be distinguished that fundamentally differ between the methods and strategies listed in the following sections. These components are mainly associated with the way knowledge is acquired, generated by different strategies and applied in context-aware task execution. Essentially, the knowledge base is the core element that enables smart industrial robot control. Acquiring new knowledge and generalizing that knowledge enables the robot to undertake new tasks based on the history of decisions. High-level instruction and context-aware task execution, in turn, factor in application scenario-specific constraints when carrying out the tasks, so robots can determine the best possible actions by themselves to effectively succeed at the given task. The knowledge is supported by adaptive planning, which allows events to be anticipated and prepared for in advance, therefore coping with unforeseen situations [22].
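The see-think-act loop and the role of the knowledge base described above can be summarized in a short sketch. The Python fragment below is purely illustrative: every function in it is a hypothetical placeholder for the perception, planning and actuation components of a smart industrial robot, not part of any cited system.

```python
"""Minimal see-think-act loop sketch (illustrative only)."""

def perceive():
    # SEE: acquire and interpret sensory data (stubbed with a fixed scene)
    return {"objects": [{"id": "part_1", "pose": (0.40, 0.10, 0.02)}]}

def plan(task, scene, knowledge):
    # THINK: choose the next action from the task, the perceived scene and
    # previously acquired knowledge (here simply the first detected object)
    target = scene["objects"][0]["pose"]
    return {"action": task, "target": target}

def act(step):
    # ACT: execute the planned motion (stubbed) and report the outcome
    print(f"Executing {step['action']} at {step['target']}")
    return {"success": True}

knowledge = []                                   # history of decisions
for _ in range(3):                               # simplified control loop
    scene = perceive()
    step = plan("pick", scene, knowledge)
    outcome = act(step)
    knowledge.append((scene, step, outcome))     # knowledge acquisition
```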
As illustrated in Figure 1, the knowledge can be acquired in several different ways. The learning strategies fundamentally differ in how the knowledge is acquired and generalized and, essentially, in how much the industrial robot is involved in the learning process. Learning from labeled data is in this article associated with computer vision-based control and supervised learning, whose current trends are explored in Section 3. In this context, the knowledge mostly enables robots to understand complex scenes and extract the required information about the objects of interest. Learning typically takes RGB and/or depth information with respective labels to train neural networks. Most commonly, this knowledge is applied to determine an object's pose in order to grasp it or manipulate it by other means such as painting, welding or inspection. Acquiring knowledge from interactions with the environment is, in this context, based on deep reinforcement learning (Section 4). Within this strategy, the robot is highly involved in the learning process, as the knowledge is acquired through interactions with the environment based on trial and error, therefore giving an understanding of how to interact with the environment based on a history of events. Learning from demonstrations, in turn, is connected to imitation learning (Section 5). The basis of imitation learning is to utilize expert behavior and make use of the demonstrated actions.

3. Computer Vision-Based Control

The recent achievements in the computer vision and deep learning fields [27,28] have raised awareness of the benefits that such systems can provide to industrial robot systems, manufacturing and the business itself. Computer vision-based methods in combination with the control loop of the robot are one of the most common approaches [29] to deal with the uncertainty of the environment and to interact with it. After data acquisition, as illustrated in Figure 2, the first step in the workflow of such systems is to detect objects or the presence of objects or humans. For some scenarios, such as the monitoring of safety zones in human–robot collaboration [11], it is enough to detect just the presence of an object or human to continue, pause or stop task execution. In the majority of scenarios, this is not enough, and all the required information about the detected objects needs to be extracted and taken into account in the planning step. In these scenarios, the typical actions after object detection are object pose estimation, grasp pose determination, motion planning and control accordingly. However, as illustrated in Figure 2, this is only one possible workflow structure, and some solutions bypass the pose-estimation step by directly looking at possible grasping points [30].
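As a rough illustration of the detection stage in this workflow, the sketch below runs an off-the-shelf Faster R-CNN detector from torchvision (the Faster R-CNN family is cited above [32]) on a placeholder image and keeps only confident detections. The random input and the confidence threshold are assumptions; in a real cell the detector would be fine-tuned on part-specific data.

```python
# Sketch of the detection step in the Figure 2 workflow, using a pretrained
# Faster R-CNN from torchvision as a stand-in detector.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 480, 640)           # placeholder for an RGB frame
with torch.no_grad():
    detections = model([image])[0]        # dict with boxes, labels, scores

# Keep only confident detections; downstream steps (pose estimation, grasp
# pose determination, motion planning) would consume these boxes.
keep = detections["scores"] > 0.8         # assumed confidence threshold
boxes = detections["boxes"][keep]
print(boxes)
```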
Novel object detection methods are usually deep-learning-based [31,32]. Even though the main focus of object detection algorithm development has historically been on other fields [33], the need for such systems is also present in the smart manufacturing context to deal with uncertainties of the environment. Moreover, such deep learning-based approaches have proven to be quite flexible if conditions change [34]. Object detection in industrial robotic tasks differs from other tasks, as the extracted information needs to be more detailed and precise in order to successfully manipulate objects. The precision and accuracy of the computer vision system can be directly connected to the quality of the product.
Several different scenarios or combinations of conditions can be distinguished, such as working with overlapping, similar, different, multiple or complex objects with undefined positions, etc.; some situations are illustrated in Figure 3. Additionally, the objects can be known, i.e., information about them is available beforehand, or otherwise unknown. Some combinations of conditions make the task more complex and vice versa, and accordingly, the available methods are similar for some scenarios. The required industrial robot control methods can briefly be divided into two cases:
  • Tasks that require the robot's end-of-arm tooling to be precisely positioned for the entire trajectory;
  • Tasks that require explicit grasp planning.
The first case can be considered to have low complexity with regard to computer vision, as the region of interest usually includes a single object that is not obstructed by other objects. The application scenarios are numerous, but the most applicable cases in the smart manufacturing context are connected to pose estimation of the part and more precise feature extraction for further processing, such as path generation in autonomous spray painting [35], welding [39], inspection [40], polishing, etc. Historically, in these kinds of scenarios, CAD model comparison-based methods have been proposed [41], as well as other solutions based on geometrical characteristics [42]. In some cases, CAD models of the part are not available or accurate enough; moreover, such methods are not capable of handling surfaces without mathematical representations. More recent methods can automatically generate tool paths by utilizing point clouds [43,44].
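To make the point-cloud-based path generation idea more concrete, the following sketch derives tool waypoints from a synthetic surface scan using Open3D: the estimated surface normals define the tool orientation, and a fixed stand-off distance defines the waypoint position. The synthetic surface, voxel size and 10 cm stand-off are illustrative assumptions, not parameters from [43,44].

```python
# Sketch: derive tool waypoints (e.g., for spray painting or inspection)
# from a part's point cloud with Open3D.
import numpy as np
import open3d as o3d

# Synthetic stand-in for a scanned part surface (a mildly curved patch);
# in practice this would come from a 3D camera or laser scanner.
xs, ys = np.meshgrid(np.linspace(0, 0.4, 80), np.linspace(0, 0.4, 80))
zs = 0.02 * np.sin(10 * xs)
points = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)

pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points))
pcd = pcd.voxel_down_sample(voxel_size=0.01)            # thin out the cloud
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.03, max_nn=30))
pcd.orient_normals_to_align_with_direction(np.array([0.0, 0.0, 1.0]))

surface = np.asarray(pcd.points)
normals = np.asarray(pcd.normals)

# Place each waypoint a fixed stand-off distance above the surface, with the
# tool axis pointing back along the surface normal.
stand_off = 0.10                                        # assumed 10 cm
waypoints = surface + stand_off * normals
tool_axes = -normals

# Order the waypoints into a simple raster-like pass along the Y axis.
order = np.lexsort((surface[:, 0], surface[:, 1]))
path = waypoints[order]
print(path.shape, tool_axes.shape)
```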
The second case occurs most frequently in modern industrial robot use-cases. Typically, the region of interest includes multiple, obstructed or complex objects that can require specific grasping approaches. One of the most common robotic grasping problems in industrial settings is bin-picking [45], and accordingly, this problem has already been addressed for several decades [46]. Bin-picking is a task where arbitrarily placed objects that overlap each other in a bin must be taken out and placed in a dedicated position and orientation. Historically, this problem has been regarded as one of the greatest robotic challenges in manufacturing automation [47]. As many different combinations of conditions can be present in this problem, the methods proposed to solve it also cover most robotic grasping issues.
In some cases, the bin-picking problem can be eliminated at its root by avoiding situations where parts mix together, i.e., by structuring object placement in an earlier part of the manufacturing process. If object placements are structured at least along the Z axis, relatively simple computer vision algorithms can be applied to pick and place the parts [48]. However, in many cases this is not possible, as it requires modifying the manufacturing plant. Some factories lack the floor space for the complex and relatively large object dividers or shakers that would have to be installed, which rules out plant modification. The next aspect is high plant adaptation expenses, which in some situations are not sustainable; eventually, the plant is hardly adjustable if the product design changes. Therefore, many factories end up with tasks where randomly placed objects located in a bin must be picked and placed by hand. This is where vision-based grasping and automated bin-picking solutions can be applied.
Vision-based grasping approaches can be divided into analytic and data-driven methods, where analytic methods focus on analyzing the shape of the target object, whereas data-driven methods are based on machine learning [49]. Learning-based approaches tend to be more generic and flexible in dealing with uncertainties of the environment and are therefore also more promising in the smart manufacturing context. Model-based and model-free learning-based grasping approaches can be distinguished, for working with known or unknown objects, respectively.
Typically, the first step in the model-based grasping approach is object detection, which is followed by pose estimation, grasp pose determination and path planning. For model-based solutions, the optimal gripping point can usually be defined beforehand by an expert, whether it is the centre of a simply shaped object or a specific part of the object [50]. Therefore, the overall goal is to find an object that could be picked with the highest confidence and without collisions with the environment. Different approaches have been proposed throughout the years to estimate the 6D pose of the object in cluttered scenes [50,51,52]. Several multi-stage and hybrid approaches proceed with 2D CNN-based object detection algorithms such as SSD [53], YOLO [54], LINEMOD [51], etc., followed by an additional neural network for pose estimation of the detected object. In addition, reverse methods have been proposed that segment objects first and then find the location [55,56] and estimate the position. These methods are effective but also comparatively slow due to their multi-stage architecture. Single-shot 6D pose estimation using only RGB images [57] followed by PnP [58] shows a decrease in time and is accurate for grasping objects in the household environment; however, single-shot methods lack comprehensive industrial use-case studies.
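The PnP step that follows keypoint prediction in single-shot pose estimation can be sketched with OpenCV as below: given predicted 2D keypoints and the corresponding 3D model points, cv2.solvePnP recovers the object pose in the camera frame. The keypoints, model points and camera intrinsics here are placeholder values, not data from the cited works.

```python
# Sketch of the PnP step: recover a 6D object pose from 2D-3D
# correspondences produced by a keypoint-prediction network.
import numpy as np
import cv2

# 3D keypoints on the object model (e.g., bounding-box corners), in metres
object_points = np.array([
    [-0.05, -0.05, 0.0], [0.05, -0.05, 0.0],
    [0.05, 0.05, 0.0],   [-0.05, 0.05, 0.0],
    [-0.05, -0.05, 0.1], [0.05, -0.05, 0.1],
], dtype=np.float64)

# Corresponding 2D detections predicted by the network, in pixels
image_points = np.array([
    [320, 240], [400, 238], [402, 310], [318, 312], [330, 180], [392, 178],
], dtype=np.float64)

camera_matrix = np.array([[600, 0, 320],
                          [0, 600, 240],
                          [0, 0, 1]], dtype=np.float64)
dist_coeffs = np.zeros(5)                 # assume an undistorted image

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
rotation, _ = cv2.Rodrigues(rvec)         # 3x3 rotation of the object
print(rotation, tvec)                     # object pose in the camera frame
```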
When dealing with unknown objects, the approach and respective methods change. Grasping of unknown objects requires model-free grasping approaches that are focused on finding a spot in the region of interest that could be graspable. Such methods are more appealing as they can generalize to unseen objects. Model-free methods skip the pose-estimation step and directly estimate the grasp pose. Grasps across 1500 3D object models were analyzed in [37] to generate a dataset that contains 2.8 million point clouds, suction grasps and robustness labels. The generated dataset (Dex-Net 3.0) was used to train a Grasp Quality Convolutional Neural Network (GQ-CNN) for classifying robust suction targets in single-object point clouds. The trained model was able to generalize to basic (prismatic or cylindrical), typical (more complex geometry) and adversarial (with few available suction-grasp points) objects with respective success rates of 98%, 82% and 58%. A more recent implementation [38] (Dex-Net 4.0) containing 5 million synthetic depth images was used to train policies for a parallel-jaw and a vacuum gripper, resulting in 95% accuracy on gripping novel objects. In [59], a Generative Grasping Convolutional Neural Network (GG-CNN) is proposed, where the quality and pose of grasps are predicted at every pixel, achieving an 83% grasp success rate on unseen objects with adversarial geometry. Moreover, these methods are well suited for closed-loop control at up to 50 Hz, meaning they can enable successful manipulation in dynamic environments where objects are moving. Multiple grasp candidates are provided simultaneously, and the highest quality candidate ensures relatively high accuracy in previously unknown environments.
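The pixel-wise idea behind GG-CNN-style methods can be sketched as follows: a small fully convolutional network maps a depth image to per-pixel grasp quality, angle and width maps, and the grasp primitive is executed at the highest-quality pixel. The architecture below is a toy stand-in, not the published network from [59].

```python
# Simplified sketch of pixel-wise (model-free) grasp prediction.
import torch
import torch.nn as nn

class PixelwiseGraspNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(),
            nn.Conv2d(16, 16, 5, padding=2), nn.ReLU(),
        )
        self.quality = nn.Conv2d(16, 1, 1)   # grasp success likelihood
        self.angle = nn.Conv2d(16, 1, 1)     # gripper rotation
        self.width = nn.Conv2d(16, 1, 1)     # gripper opening

    def forward(self, depth):
        f = self.features(depth)
        return self.quality(f), self.angle(f), self.width(f)

net = PixelwiseGraspNet().eval()
depth = torch.rand(1, 1, 300, 300)           # placeholder depth image
with torch.no_grad():
    q, ang, wid = net(depth)

# Execute the grasp primitive at the highest-quality pixel.
idx = torch.argmax(q)
row, col = divmod(idx.item(), q.shape[-1])
print(row, col, ang.flatten()[idx].item(), wid.flatten()[idx].item())
```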
Often the object grasping problem has been reduced to just the grasping phase; however, in industrial settings, a full pick-and-place cycle is required. Post-gripping [60] is as important as the gripping phase, and eventually grasping strategies should be addressed accordingly. For model-free methods, the post-gripping process is extremely complicated and mostly cannot be executed without additional operations after picking up the object. Some exceptional cases, such as the picking of potentially tangled objects, exist, where the proposed solutions [61,62] only partially address this problem, mostly by avoiding the grasping of such objects.

4. Deep Reinforcement Learning-Based Control

After the tremendous victory of AlphaGo [63], attention on reinforcement learning (RL) has significantly increased, while many potential application scenarios have emerged. Industrial robot control and the field of smart manufacturing are not exceptions. However, robotics as an RL domain is substantially different from most common RL benchmark problems [64]. RL methods are based on trial and error [65], where the agent learns optimal behavior through interactions with the environment. However, the best problem representations in the field of robotics often consist of high-dimensional, continuous states and actions, which are hardly manageable with classical RL approaches [64]. RL in combination with deep learning defines the field of deep reinforcement learning (DRL). As DRL combines the technique of giving rewards based on the agent's actions with the usage of neural networks for learning feature representations, it enables agents to make decisions in complex environment scenarios where input data are unstructured and high-dimensional [66].
The DRL model typically consists of an agent that takes actions in an environment, whereas the environment or “interpreter” returns an observation of how the environment changed after the particular action was taken. Figure 4 illustrates a simplified agent and environment interaction. In the context of industrial robots, the input to the agent is the current state, where the information in this state depends on the type of action space or learning task. The state can consist of such parameters as robot joint angles, tool position, velocity, acceleration, target pose and other sensor information, which typically is environmental information acquired by a 3D camera. The action space is categorized by two main approaches: pixel-wise actions and actions that directly control robot motions. These different categories also require distinct information. For example, within the pixel-wise action space, the agent selects pixels to determine where the action primitive, typically a predefined grasping trajectory, should be executed. In the context of robotic grasping and manipulation, the goal of DRL is to provide an optimal sequence of commands to succeed at the given task [66].
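The interaction of Figure 4 corresponds to the standard RL loop sketched below. RobotEnv is a hypothetical stand-in whose state packs joint angles and a target pose, whose dynamics are heavily simplified, and whose reward is the negative distance to the target; the random policy is just a placeholder for a learned agent.

```python
# Sketch of the agent-environment interaction from Figure 4.
import numpy as np

class RobotEnv:
    def reset(self):
        self.joints = np.zeros(6)                        # 6-DOF arm
        self.target = np.random.uniform(-1, 1, size=6)   # target joint pose
        return np.concatenate([self.joints, self.target])

    def step(self, action):
        self.joints += 0.1 * action                      # simplified dynamics
        dist = np.linalg.norm(self.joints - self.target)
        reward = -dist                                   # closer is better
        done = dist < 0.05
        state = np.concatenate([self.joints, self.target])
        return state, reward, done

env = RobotEnv()
state = env.reset()
for t in range(200):
    action = np.random.uniform(-1, 1, size=6)   # placeholder for the policy
    state, reward, done = env.step(action)      # observation + reward
    if done:
        break
```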
As illustrated in Figure 5, the most commonly applied DRL algorithms can be grouped into model-free and model-based methods. Model-based methods have access to the environment transition dynamics, which are either given or learned from experience. Even though model-based approaches are more sample efficient, they are limited to exact or approximate models of the environment. However, accurate models are rarely available and are often difficult to learn. Model-free methods, in contrast, learn control policies without building or having access to a model. The ability to provide an optimal control sequence without the use of a model can be advantageous in dynamically complex environments, which is mostly the case in the context of industrial robot control. However, model-free approaches can require a large amount of learning data [67]. To overcome this issue, the benefits of combining model-based and model-free methods have been explored in [68]. Both types can be divided into two categories based on the learning approach: policy-based learning and value-based learning [69]. These methods can also be combined, which results in an actor-critic approach [70].
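The two model-free learning signals mentioned above can be contrasted in a few lines: a value-based method regresses Q(s, a) towards a bootstrapped temporal-difference target, while a policy-based method increases the log-probability of actions that led to high return (REINFORCE); an actor-critic combines both. The networks and the single transition below are toy placeholders, not a full training loop.

```python
# Toy contrast of value-based vs. policy-based learning signals.
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))
policy = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2),
                       nn.Softmax(dim=-1))

state = torch.rand(1, 4)
next_state = torch.rand(1, 4)
action, reward, gamma = 1, torch.tensor([1.0]), 0.99

# Value-based: regress Q(s, a) towards the bootstrapped TD target.
with torch.no_grad():
    td_target = reward + gamma * q_net(next_state).max(dim=1).values
value_loss = (q_net(state)[0, action] - td_target) ** 2

# Policy-based: increase the log-probability of actions with high return
# (REINFORCE); here the return is just the single-step reward.
log_prob = torch.log(policy(state)[0, action])
policy_loss = -log_prob * reward

print(value_loss.item(), policy_loss.item())
```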
Even though many deep-learning-based computer vision solutions effectively deal with the grasping problem, there are limits to how much can be achieved without interacting with the environment. When robotic grasping is approached by DRL methods, the system architecture changes, as the optimal policies are learned through interactions with the environment. Unlike computer vision-based techniques, where robot control is mostly performed separately (e.g., by generating trajectories to successfully grasp the object of interest), in DRL-based techniques the robot's performance is mostly evaluated as a whole.

4.1. Typical Grasping Scenarios

In the last few years, DRL-based grasping approaches have been utilized in several works. Many of these are focused on solving tasks where the region of interest includes a single object [71] or a few objects with simple geometry [72] that are not obstructed by other objects. Value-based DRL methods were utilized in [73]: a combination of a double deep Q-learning framework and a novel Grasp-Q-Network. The grasping performance was evaluated on three different objects, a cube, a sphere and a cylinder, and the highest accuracy (91.1%) was achieved by using a multi-view camera setup on the cube object. In a very similar scenario [74], not only the grasping problem was addressed but also the smoothness of the trajectory, by proposing a policy-based DRL method: experience-based deep deterministic policy gradient. A grasping accuracy of 90.68% was achieved. The issue of transferring a DRL grasping agent from simulation to reality was specifically addressed in [75] by utilizing an actor-critic version of the proximal policy optimization (PPO) algorithm and CycleGAN to map RGB-D images from the real world to the simulated environment. The proposed method was able to “trick” the agent into believing it was still performing in the simulator when it was actually deployed in the real world and thus achieved a grasping accuracy of 83% on both previously seen and unseen objects.

4.2. Push and Grasp

Currently, DRL-based grasping in relatively simple environments lags behind deep-learning-based computer vision methods. The greatest advantages of DRL-based grasping can be seen in more complex scenarios, for example, when additional actions are required before an object of interest can be grasped. A cluttered environment can often lack collision-free grasping candidates, which can result in an unsuccessful grasp. Even though collision avoidance is usually incorporated in the grasping task by penalizing the agent if the robot collides with itself or the infrastructure, in some cases intended collisions with the environment are allowed or even required to accomplish the given task. Pushing and grasping actions can singulate the object from its surroundings by removing other objects to make room for the gripper fingers, therefore making it easier to perform the grasping operations. In [76], a Twin Delayed DDPG (TD3) algorithm was employed to learn a pushing policy, and a positive reward was given if a graspable object was separated from the clutter. The synergy between pushing and grasping was also explored in [77], where steadier results in a denser environment were achieved by training deep end-to-end policies. In [78], an exploration-singulation-grasping procedure was proposed to solve another relatively similar problem named “grasping the invisible”. In this scenario, as depicted in Figure 6, the object of interest is initially completely obstructed by other objects, therefore making it “invisible”.
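A toy sketch of the push/grasp synergy is given below: the learned networks of [76,77,78] are replaced by random per-pixel value maps, the primitive and pixel with the highest predicted value are executed, and the reward pays for successful grasps and for pushes that singulate an object. All values, map sizes and reward magnitudes are illustrative assumptions.

```python
# Toy sketch of selecting between push and grasp primitives from
# per-pixel value maps, with a singulation-aware reward.
import numpy as np

def best_action(grasp_map, push_map):
    """Return the primitive and pixel with the highest predicted value."""
    if grasp_map.max() >= push_map.max():
        return "grasp", np.unravel_index(grasp_map.argmax(), grasp_map.shape)
    return "push", np.unravel_index(push_map.argmax(), push_map.shape)

def reward(primitive, grasp_success, object_singulated):
    # Grasps are rewarded when successful; pushes earn a smaller reward
    # only if they separate an object from the clutter.
    if primitive == "grasp":
        return 1.0 if grasp_success else 0.0
    return 0.5 if object_singulated else 0.0

grasp_q = np.random.rand(224, 224)     # placeholder per-pixel grasp values
push_q = np.random.rand(224, 224)      # placeholder per-pixel push values

primitive, pixel = best_action(grasp_q, push_q)
print(primitive, pixel, reward(primitive, True, False))
```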

4.3. Force/torque Information Usage

Research on mimicking human sight has been conducted for a long time; therefore, the majority of DRL-based grasping methods are based on visual perception. However, one of the grasping goals is to achieve more human-like performance, and humans, in the environment exploration process and especially in the grasping process, also strongly rely on the sense of touch. Indeed, humans can even grasp, classify and manipulate objects without visual perception. The sense of touch has been only partly mimicked by artificial sensors, as humans simultaneously feel multiple parameters of the environment, such as pressure, temperature, vibration, roughness, hardness and even pain [79], which is very challenging to recreate technologically. Inspired by the touch-based exploration of children, the authors of [80] formulated a reward that encourages physical interaction and introduced contact-prioritized experience replay, which outperformed state-of-the-art methods in picking, pushing and sliding simple-shaped objects. Contact-rich manipulation can also be beneficial in grasping objects of complex shape [81], in peg-insertion tasks involving unstable objects [82] and in tasks with clearances of and well below 0.2 mm [83]. Applying different amounts of force to sturdy objects usually does not affect the objects' physical properties; however, when it comes to fragile objects, the robot needs to be gentle in the interaction process (Figure 7). Contact-rich yet gentle exploration can be achieved by penalty-based surprise, which is based on the prediction of forceful contacts [84]. Gentle and contact-rich manipulation, and in-hand manipulation especially, requires mimicking the tactile feedback of the human hand. Usage of such anthropomorphic hands with tactile fingertips and flexible tactile skin [85] shows a 97% increase in sample efficiency for training and a simultaneous 21% increase in performance.
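One simple way to fold force/torque feedback into a DRL reward, loosely in the spirit of the contact-rich yet gentle exploration discussed above, is sketched below: the agent receives a small bonus for making contact and a penalty that grows with excessive force. The thresholds and weights are illustrative assumptions, not values from [80,84].

```python
# Sketch of reward shaping with wrist force/torque measurements so the
# agent seeks contact but is penalized for forceful contact.
import numpy as np

def shaped_reward(task_reward, wrench, contact_bonus=0.1,
                  force_limit=15.0, penalty_weight=0.05):
    """wrench: measured (fx, fy, fz, tx, ty, tz) at the wrist sensor, in N/Nm."""
    force = np.linalg.norm(wrench[:3])
    reward = task_reward
    if force > 1e-3:                       # reward making contact at all
        reward += contact_bonus
    if force > force_limit:                # penalize excessive force
        reward -= penalty_weight * (force - force_limit)
    return reward

print(shaped_reward(1.0, np.array([2.0, 0.0, 25.0, 0.0, 0.0, 0.0])))
```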

4.4. DRL-Based Assembly

Similarly to computer vision-based control, most of the attention in the DRL case is focused on the grasping process. However, the majority of real-world tasks require not only accurate grasping but also precise object placement. Apart from a simple stacking task that could be used in the palletization process [86], a widely used process after grasping is assembly. In [87], a method of learning to assemble objects specified in CAD files was proposed, where the CAD data were utilized to compute a geometric motion plan used as a reference trajectory in U-shape assembly (Figure 8), gear assembly and peg insertion tasks. In similar settings, Reference [88] incorporated force/torque information for an assembly task of rigid bodies.
DRL-based grasping approaches have mainly been utilized for grasping unstructured yet static objects, and significantly less attention has been directed to the relatively more challenging tasks of grasping moving objects [89] or living objects, which can move and deform to avoid being grasped [90]. Similarly, DRL-based multi-industrial-robot systems are also yet to be explored.

5. Imitation Learning-Based Control

The ability to utilize expert behavior and learn tasks from demonstrations is the basis of imitation learning [91]. In contrast to reinforcement learning, where the agent learns robot control strategies by interacting with the environment, imitation learning makes use of demonstrated trajectories, which are typically represented by states or state-action pairs [67]. The two major classes of imitation learning are behavioral cloning and inverse reinforcement learning. Behavioral cloning refers to learning a policy that directly maps from the state to the control input. Alternatively, recovering the reward function from demonstrations is referred to as inverse reinforcement learning [92].
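Behavioral cloning in particular reduces to plain supervised learning, as the minimal PyTorch sketch below shows: a policy network is fit by regression from demonstrated states to demonstrated actions. The demonstration data here are synthetic placeholders, and the state/action dimensions are arbitrary assumptions.

```python
# Minimal behavioral cloning sketch: supervised regression from
# demonstrated states to expert actions.
import torch
import torch.nn as nn

states = torch.rand(500, 10)       # demonstrated robot/scene states
actions = torch.rand(500, 6)       # expert actions (e.g., joint velocities)

policy = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 6))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(policy(states), actions)   # imitate the expert mapping
    loss.backward()
    optimizer.step()

# The cloned policy can now propose an action for a new state.
new_state = torch.rand(1, 10)
print(policy(new_state))
```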
The demonstrations can be acquired in several different direct and indirect ways, such as teleoperation, kinesthetic teaching, motion capture, virtual/augmented reality and video demonstrations [93]. Such skill transfer can not only increase usability and cope with the absence of skilled industrial robot programmers on the production floor but also increase efficiency and achieve more human-like performance. The choice of demonstration method typically depends on the details of the task; however, kinesthetic teaching, illustrated in Figure 9 [94,95,96], can be advantageous, as demonstrations are directly acquired from robot movements, which eliminates the imprecision, in the form of noise, that can be introduced by the usage of supplementary sensors. A major limitation of kinesthetic teaching is the requirement of gravity compensation, which is not supported by all industrial robots. Another direct approach, teleoperation teaching, can be performed with different control input devices such as joysticks [97], hand tracking hardware [98] and other remote control devices.
In contrast to direct teaching, indirect teaching does not directly control the robot. Such systems typically learn from visual demonstrations [99] and wearable devices [100] by capturing human motions to generate trajectories for robots. However, such learning from visual representations lacks valuable information about the robot's interaction with the environment, therefore requiring additional extensive training. Moreover, the difficulty of mimicking human behavior based on visual information is increased by a comparatively complex demonstration acquisition process: the actions performed by a human need to be recognized, interpreted and mapped into the imitator space.
A self-supervised representation learning method based on multi-view video was proposed in [99]. The reward function for RL was created by using only raw video demonstrations for supervision and human pose imitation without any explicit joint-level correspondence. The experiments demonstrated effective learning of a pouring task with a real robot, illustrated in Figure 10. The wearable device (a tactile glove) used in [100] enabled the observation of visually concealed changes of the workpiece. The human demonstration included both hand pose and contact forces to learn the task of opening child-safe medicine bottles. Even though indirect teaching has mostly been addressed for everyday situations, applicable, for example, in elderly care [101], the methods used match the smart manufacturing challenges well.
Demonstrations can also be learned in a virtual environment by training a virtual robot, as depicted in Figure 11. Such a virtual reality approach achieved an overall grasping success rate of 74% when grasping fish in real-world settings by acquiring 35 virtual demonstrations, which were synthetically expanded to 100,000 grasping samples [102]. Moreover, with a VR-based demonstration, the expert knowledge can be acquired remotely, without physical presence. Complex manipulation tasks were also learned in [98] by combining virtual reality and teleoperation teaching. Such a system provided an efficient way of collecting high-quality manipulation demonstrations, with the highest success rate of 98.9% achieved in the pushing task.
Imitation learning can be an efficient way to quickly [96] learn a task from demonstrations; however, such demonstrations usually do not cover all the possible states. This is mainly due to the fact that demonstration requires expert knowledge, which can become expensive and time-consuming to acquire for all possible scenarios; therefore, the obtained policy can perform poorly on unseen objects or situations. Similarly, learning from visual representations can lack crucial details about how the robot should interact with the environment and workpiece. To overcome this disadvantage, imitation learning can be followed by reinforcement learning for policy improvement, whereas imitation learning in these settings can decrease the need for exploratory actions [92]. In industrial robot control, such complementary learning strategies exist [99,103] but have not yet gained high popularity.

6. Challenges and Open Issues

6.1. Smart Industrial Robot Deployment and Control

Diverse conditions on the manufacturing floor, a lack of skilled personnel, functional correctness and reliability: these are only some of the challenges, but definitely the core ones, that are holding back the wider applicability of smart industrial robots. Dedicated systems adjusted for distinct environmental conditions are no longer sustainable in smart manufacturing. Diverse conditions, especially in the context of CV-based systems, such as lighting conditions, field-of-view distance and the variety of object types, can considerably decrease the precision of the system. Adapting to changes in the environment can require software modifications or hardware replacement. The respective component change should have no (or only minimal) impact on other components within the system; therefore, the characteristic of modularity [104] is one of the prerequisites of a smart industrial robot system.
The automation and deployment of industrial robot systems replace tasks that are repetitive, lack meaningfulness [105] or pose a high risk of injuries [106], as well as low-skill tasks that traditionally required human intelligence and dexterity. In the process of automation, more creative job openings are introduced; however, there is a lack of skilled personnel to operate and maintain advanced technologies, which also includes smart industrial robot systems. In fact, it is estimated that this skill gap could leave 2.4 million positions unfilled between 2018 and 2028 in the United States alone [107]. The process behind a smart robotic system can be remarkably complex, even for skilled operators. The maintainability of the system and its adaptability to other specified goals should be achievable without any deep specific knowledge of the underlying target technology.
The ability of the smart industrial robot system to accomplish services as specified and maintain its level of performance can become a major issue within AI-based robotic systems, especially when adapting the system to new environments [108]. In manufacturing, these systems are put into continuous operation, often in a mission-critical production environment with short maintenance times. AI must not compromise productivity through unreliable or imprecise operation that requires regular human intervention or leads to failures. In smart industrial robot systems, this concern is highly connected to an issue commonly referred to as the “reality gap” [109]. Even though reliable operation can be achieved in simulations, the trained models can perform unreliably when moved to a real environment. As real-world maintenance times can be very short, a more reliable sim-to-real transfer is required.
Even though in some situations robotic grasping is reliable and functions correctly, it still remains a fundamental challenge in industrial robotics. Several methods for the grasping of unknown and geometrically complex objects have been proposed; however, the precise placement of such objects has not been widely addressed. In some scenarios, the grasping process can include several additional operations, as in the case of tangled objects, where the proposed methods only partially address this issue, mostly by avoiding grasping such kinds of objects.

6.2. Reinforced, Imitated or Combined Learning Strategies

Even though purely computer vision-based solutions in combination with industrial robots can successfully solve a variety of tasks in a dynamic environment [110], they require explicit programming of robot motions. Thus, the absence of an ability to learn from interacting with the environment or from demonstrations prevents this method from achieving more human-like performance. DRL, on the other hand, enables robots to learn a task from scratch and is at present one of the main research topics in robotics [111]. The availability of super-computational power, large-scale datasets and frontier algorithms has been one of the main building blocks of these recent achievements [111]. However, to learn a task from scratch, many trials are required, which relates to sample inefficiency. This aspect, in combination with the reality gap, is holding back industrial robots from optimal performance in solving real-world problems. Even though within industrial robot control an optimal policy is very often already known beforehand or can be acquired from an expert, the vast majority of the proposed methods proceed to learn a task from scratch. Expert knowledge can be applied in robotics by imitation learning through the different types of demonstration methods listed in Section 5, which reduces the need for exploratory actions. Each of the learning strategies has its pros and cons [7,70], and its applicability mainly depends on the complexity and the requirements of the task. At the moment, there is no solution that could fit, for example, all grasping scenarios and perform equally; however, combined learning strategies could balance performance and training efficiency.

6.3. Use of Simulations and Synthetic Data

The use of simulations and synthetic data can facilitate the development of smart industrial robotic systems in various ways and stages. Generating large amounts of data at low cost, accelerating the design cycle whilst reducing costs, and providing a safe and fully controlled testing environment are only some of the opportunities [112]. However, these benefits come with several challenges that are yet to be solved. In the context of the reviewed smart industrial robot control methods, the challenges are highly connected to the differences between simulation and real life, which usually decrease the precision of the system when it is deployed in real-life scenarios. Physics simulations, virtual object representations, sensor data recreation, artificial lighting and countless other aspects of the real environment all contribute to the issue of transferring learned models from simulation to reality.
The collection and processing of data for machine-learning tasks play an important role, and smart industrial robot control is not an exception, as ML-based approaches are widely used to solve challenging tasks. According to [113], on average more than 80% of the time spent on AI projects goes into the collection and processing of data. Several different data collection techniques can be distinguished, and the techniques reviewed in [114] vary for different use cases. For example, in the field of autonomous driving [115] or in more traditional applications, such as household object recognition, people detection or machine translation, there are openly available datasets that can be reused and adapted to specific needs for model training [116].
Due to the specific elements of smart manufacturing and its precision requirements, a different situation can be seen in this field. The product variety changes more quickly, and algorithms have to be trained on new data sets; therefore, the re-usability of existing data sets is fairly low. Manual labeling methods cannot meet the requirements of agile production, as labeling is time-consuming, expensive and usually requires expert knowledge of the specific field; moreover, human error can arise in the labeling process [117]. These aspects also conflict with the goal of smart industrial robots to automate tedious jobs that lack creativity: introducing smart industrial robot control methods that require manual labeling just shifts the manual work to a different sector. Synthetic data generation, on the other hand, decreases the time required for the labeling process by using physics and graphics engines to replicate real-life situations of environmental uncertainty. In a controlled environment and in the non-complex scenes listed in Section 3, the benefits of synthetic data usage might not outweigh the reality-gap issues. However, generalizing the knowledge requires vast amounts of data, and when it comes to many different environmental parameters and complex scenes, all the variables can be taken into account in the generation process. Moreover, combining real and synthetic data could balance the efficiency of the data collection process and the precision of the trained model, but this approach still lacks comprehensive studies in manufacturing applications.
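The domain-randomization recipe that typically underlies such synthetic data generation can be sketched as follows: scene parameters (lighting, object pose, background texture, sensor noise) are sampled anew for every rendered image so that the trained model covers wide environmental variation. The parameter names and ranges below are illustrative, and render_scene() is a placeholder for a physics/graphics engine call.

```python
# Sketch of domain-randomized synthetic data generation.
import random

def sample_scene_parameters():
    return {
        "light_intensity": random.uniform(200, 2000),    # assumed range, lux
        "light_direction": [random.uniform(-1, 1) for _ in range(3)],
        "object_pose": {
            "xyz": [random.uniform(-0.3, 0.3) for _ in range(3)],
            "rpy": [random.uniform(-3.14, 3.14) for _ in range(3)],
        },
        "background_texture": random.choice(["steel", "wood", "cardboard"]),
        "camera_noise_std": random.uniform(0.0, 0.01),
    }

def render_scene(params):
    # Placeholder for a physics/graphics engine call that would return an
    # image together with pixel-perfect labels (poses, masks, grasp points).
    return {"image": None, "labels": params["object_pose"]}

dataset = [render_scene(sample_scene_parameters()) for _ in range(1000)]
print(len(dataset))
```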
Moreover, in scenarios where robots learn by themselves (see Section 4), there is also the possibility of training the robots in real life, interacting with the real environment and using the feedback from it to learn policies. Similarly, this can be tedious, expensive and very time-consuming work [118]. Additionally, in this case, the robot needs to actually execute actions that, in the first learning iterations, can be dangerous to the environment and to the robot itself while it is exploring the action possibilities. To decrease the risks that can arise from random exploratory actions, the training of the robot is mostly moved to digital space. However, complex and unstructured environments, uncertain external loads, actuator noise, friction, wear and tear, impacts, etc., are some of the many sources [112] that are not yet accurately represented by simulations. This means that the solutions learned in a simulated environment can perform differently, and usually with less accuracy, in real-world settings, if they perform at all.

6.4. The Road to Future Factories

Looking at smart manufacturing trends and future factories, the learning of a new task can be neither time-consuming nor laborious. When the supply chains of smart manufacturing need to be reconfigured, a standstill is not beneficial for any organization. The pandemic situation exposed many issues and a lack of automation, which also impacted the economy as a whole [119]. However, this situation also outlined the benefits of digitization for many factories. The road to the future factories is rather incremental, and the acceptance of smart industrial robots along it is partly connected to the maturity of the control methods listed in Section 3, Section 4 and Section 5. The adoption of smart industrial robots will likely be a critical point in productivity growth, due to its potential to reshape global supply chains [120]. However, one of the main challenges is the balance between humans and automation. To achieve a high level of automation and flexibility, industrial robots should be able to achieve more human-like performance and move towards cognitive robotics [24]. If this is not possible, a proper collaboration between humans and robots should be established to maximize the potential of both parties.
The balance between humans and automation also goes beyond Industry 4.0 and is part of the vision of Industry 5.0. In this vision, humans and robots will work together whenever and wherever possible [121], using the creativity of human experts in conjunction with efficient, smart and accurate machines [13]. However, based on the current situation, several important issues are identified in this respect, such as the acceptance of robots, trust in human–robot collaboration, the need to redesign the workplace, and proper education and training [121]. Today, attitudes toward the possibility of close collaboration between humans and robots are rather ambiguous [122]. However, it is likely that attitudes will change together with the development of more natural ways of communicating with machines and an overall understanding of digitization and technologies that can raise the well-being of human workers.

7. Conclusions

The manufacturing sector requires increased performance, adaptability to product variations, increased safety, reduced costs, etc.; however, these requirements are neither feasible nor sustainable to achieve with standard control methods. This article has introduced the concept of smart industrial robot control and the potential of different learning strategies for tackling manufacturing challenges. AI-based robotic systems are rapidly becoming one of the main areas of focus, as flexibility and a deep understanding of complex manufacturing processes are becoming the key advantage to raise competitiveness. Together with the smart industrial robot control trends, the prospects of future factories and the role of robots in them were explored.
The current trends in smart industrial robot control reviewed in this article show the potential and also the limitations of different learning strategies. Compared with traditional robot control methods, learning-based methods enable robots to perceive the environment, learn from it and adapt to dynamically changing situations. The ability to make decisions and learn is an important step towards achieving more human-like performance. However, the possibilities of smart industrial robot control in manufacturing are not yet fully explored or exploited. Several challenges in smart industrial robot control, especially in the learning strategies, require further research attention. For example, sample inefficiency leads to a need for vast amounts of data, which can be difficult to obtain in the manufacturing sector, and more effective knowledge generalization would enable faster adaptation to unseen situations.

Funding

The work has been performed in the project AI4DI: Artificial Intelligence for Digitizing Industry, under grant agreement No. 826060. The project is co-funded by grants from Germany, Austria, Finland, France, Norway, Latvia, Belgium, Italy, Switzerland and the Czech Republic and by the Electronic Component Systems for European Leadership Joint Undertaking (ECSEL JU).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Wallén, J. The History of the Industrial Robot; Linköping University Electronic Press: Linköping, Sweden, 2008. [Google Scholar]
  2. International Organization for Standardization (ISO). ISO 8373:2012: Robots and Robotic Devices—Vocabulary; International Organization for Standardization: Geneva, Switzerland, 2012. [Google Scholar]
  3. Wilson, M. Chapter 2—Industrial Robots. In Implementation of Robot Systems; Wilson, M., Ed.; Butterworth-Heinemann: Oxford, UK, 2015; pp. 19–38. [Google Scholar] [CrossRef]
  4. Carlsson, J. A Decade of Robotics: [Analysis of the Diffusion of Industrial Robots in the 1980s by Countries, Application Areas, Industrial Branches and Types of Robots]; Mekanförbundets Förlag: Stockholm, Sweden, 1991. [Google Scholar]
  5. Zhong, R.Y.; Xu, X.; Klotz, E.; Newman, S.T. Intelligent Manufacturing in the Context of Industry 4.0: A Review. Engineering 2017, 3, 616–630. [Google Scholar] [CrossRef]
  6. Executive Summary World Robotics 2021 Industrial Robots. 2021. Available online: https://ifr.org/img/worldrobotics/Executive_Summary_WR_Industrial_Robots_2021.pdf (accessed on 20 November 2021).
  7. Sanneman, L.; Fourie, C.; Shah, J.A. The state of industrial robotics: Emerging technologies, challenges, and key research directions. arXiv 2020, arXiv:2010.14537. [Google Scholar]
  8. Pan, Z.; Polden, J.; Larkin, N.; Van Duin, S.; Norrish, J. Recent progress on programming methods for industrial robots. Robot. Comput.-Integr. Manuf. 2012, 28, 87–94. [Google Scholar] [CrossRef] [Green Version]
  9. Evjemo, L.; Gjerstad, T.; Grøtli, E.; Sziebig, G. Trends in Smart Manufacturing: Role of Humans and Industrial Robots in Smart Factories. Curr. Robot. Rep. 2020, 1, 35–41. [Google Scholar] [CrossRef] [Green Version]
  10. Probst, L.; Pedersen, B.; Lefebvre, V.; Dakkak, L. USA-China-EU plans for AI: Where do we stand. Digit. Transform. Monit. Eur. Comm. 2018. Available online: https://ati.ec.europa.eu/reports/technology-watch/usa-china-eu-plans-ai-where-do-we-stand-0 (accessed on 13 September 2021).
  11. Arents, J.; Abolins, V.; Judvaitis, J.; Vismanis, O.; Oraby, A.; Ozols, K. Human–Robot Collaboration Trends and Safety Aspects: A Systematic Review. J. Sens. Actuator Netw. 2021, 10, 48. [Google Scholar] [CrossRef]
  12. Osterrieder, P.; Budde, L.; Friedli, T. The smart factory as a key construct of industry 4.0: A systematic literature review. Int. J. Prod. Econ. 2020, 221, 107476. [Google Scholar] [CrossRef]
  13. Maddikunta, P.K.R.; Pham, Q.V.; Prabadevi, B.; Deepa, N.; Dev, K.; Gadekallu, T.R.; Ruby, R.; Liyanage, M. Industry 5.0: A survey on enabling technologies and potential applications. J. Ind. Inf. Integr. 2021. [Google Scholar] [CrossRef]
  14. Golnazarian, W.; Hall, E. Intelligent Industrial Robots. Cent. Robot. Res. 2002, 1050, 72. [Google Scholar] [CrossRef]
  15. Shao, L.; Volz, R. Methods and strategies of object localization. In Proceedings of the NASA Conference on Space Telerobotics, Pasadena, CA, USA, 31 January 1989. [Google Scholar]
  16. Zhang, C.; Lu, Y. Study on artificial intelligence: The state of the art and future prospects. J. Ind. Inf. Integr. 2021, 23, 100224. [Google Scholar] [CrossRef]
  17. Li, P.; Liu, X. Common Sensors in Industrial Robots: A Review. J. Phys. Conf. Ser. 2019, 1267, 012036. [Google Scholar] [CrossRef] [Green Version]
  18. Mittal, S.; Khan, M.A.; Romero, D.; Wuest, T. Smart manufacturing: Characteristics, technologies and enabling factors. Proc. Inst. Mech. Eng. Part B J. Eng. Manuf. 2019, 233, 1342–1361. [Google Scholar] [CrossRef]
  19. Abubakr, M.; Abbas, A.T.; Tomaz, I.; Soliman, M.S.; Luqman, M.; Hegab, H. Sustainable and Smart Manufacturing: An Integrated Approach. Sustainability 2020, 12, 2280. [Google Scholar] [CrossRef] [Green Version]
  20. Arents, J.; Cacurs, R.; Greitans, M. Integration of Computervision and Artificial Intelligence Subsystems with Robot Operating System Based Motion Planning for Industrial Robots. Autom. Control. Comput. Sci. 2018, 52, 392–401. [Google Scholar] [CrossRef]
  21. Wei, K.; Ren, B. A method on dynamic path planning for robotic manipulator autonomous obstacle avoidance based on an improved RRT algorithm. Sensors 2018, 18, 571. [Google Scholar] [CrossRef] [Green Version]
  22. Vernon, D.; Vincze, M. Industrial Priorities for Cognitive Robotics. In Proceedings of the EUCognition 2016—“Cognitive Robot Architectures”, Vienna, Austria, 8–9 December 2016; pp. 6–9. [Google Scholar]
  23. Kraetzschmar, G. Software Engineering Factors for Cognitive Robotics. 2018. Available online: https://cordis.europa.eu/project/id/688441/results (accessed on 3 June 2021).
  24. Samani, H. Cognitive Robotics; CRC Press: Boca Raton, FL, USA, 2015. [Google Scholar]
  25. Siegwart, R.; Nourbakhsh, I.R.; Scaramuzza, D. Introduction to Autonomous Mobile Robots; MIT Press: Cambridge, MA, USA, 2011. [Google Scholar]
  26. Premebida, C.; Ambrus, R.; Marton, Z.C. Intelligent robotic perception systems. In Applications of Mobile Robots; IntechOpen: London, UK, 2018. [Google Scholar]
  27. Dong, S.; Wang, P.; Abbas, K. A survey on deep learning and its applications. Comput. Sci. Rev. 2021, 40, 100379. [Google Scholar] [CrossRef]
  28. Alom, M.Z.; Taha, T.M.; Yakopcic, C.; Westberg, S.; Sidike, P.; Nasrin, M.S.; Hasan, M.; Van Essen, B.C.; Awwal, A.A.S.; Asari, V.K. A State-of-the-Art Survey on Deep Learning Theory and Architectures. Electronics 2019, 8, 292. [Google Scholar] [CrossRef] [Green Version]
  29. Kakani, V.; Nguyen, V.H.; Kumar, B.P.; Kim, H.; Pasupuleti, V.R. A critical review on computer vision and artificial intelligence in food industry. J. Agric. Food Res. 2020, 2, 100033. [Google Scholar] [CrossRef]
  30. Lenz, I.; Lee, H.; Saxena, A. Deep Learning for Detecting Robotic Grasps. Int. J. Robot. Res. 2013, 34, 705–724. [Google Scholar] [CrossRef] [Green Version]
  31. Redmon, J.; Farhadi, A. YOLO9000: Better, faster, stronger. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6517–6525. [Google Scholar] [CrossRef] [Green Version]
  32. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef] [Green Version]
  33. Zou, Z.; Shi, Z.; Guo, Y.; Ye, J. Object detection in 20 years: A survey. arXiv 2019, arXiv:1905.05055. [Google Scholar]
  34. Poss, C.; Mlouka, O.B.; Irrenhauser, T.; Prueglmeier, M.; Goehring, D.; Zoghlami, F.; Salehi, V. Robust Framework for intelligent Gripping Point Detection. In Proceedings of the IECON 2019—45th Annual Conference of the IEEE Industrial Electronics Society, Lisbon, Portugal, 14–17 October 2019; Volume 1, pp. 717–723. [Google Scholar] [CrossRef]
  35. Lin, W.; Anwar, A.; Li, Z.; Tong, M.; Qiu, J.; Gao, H. Recognition and Pose Estimation of Auto Parts for an Autonomous Spray Painting Robot. IEEE Trans. Ind. Inform. 2019, 15, 1709–1719. [Google Scholar] [CrossRef]
  36. Arents, J.; Greitans, M.; Lesser, B. Construction of a Smart Vision-Guided Robot System for Manipulation in a Dynamic Environment. In Artificial Intelligence for Digitising Industry; River Publishers: Gistrup, Denmark, 2021; pp. 205–220. [Google Scholar]
  37. Mahler, J.; Matl, M.; Liu, X.; Li, A.; Gealy, D.; Goldberg, K. Dex-Net 3.0: Computing Robust Robot Vacuum Suction Grasp Targets in Point Clouds using a New Analytic Model and Deep Learning. arXiv 2018, arXiv:1709.06670. [Google Scholar]
  38. Mahler, J.; Matl, M.; Satish, V.; Danielczuk, M.; DeRose, B.; McKinley, S.; Goldberg, K. Learning ambidextrous robot grasping policies. Sci. Robot. 2019, 4, 26. Available online: https://robotics.sciencemag.org/content/4/26/eaau4984.full.pdf (accessed on 14 June 2021).
  39. Kah, P.; Shrestha, M.; Hiltunen, E.; Martikainen, J. Robotic arc welding sensors and programming in industrial applications. Int. J. Mech. Mater. Eng. 2015, 10, 13. [Google Scholar] [CrossRef] [Green Version]
  40. Wu, Q.; Lu, J.; Zou, W.; Xu, D. Path planning for surface inspection on a robot-based scanning system. In Proceedings of the 2015 IEEE International Conference on Mechatronics and Automation (ICMA), Beijing, China, 2–5 August 2015; pp. 2284–2289. [Google Scholar] [CrossRef]
  41. Skotheim, O.; Lind, M.; Ystgaard, P.; Fjerdingen, S.A. A flexible 3D object localization system for industrial part handling. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal, 7–12 October 2012; pp. 3326–3333. [Google Scholar] [CrossRef]
  42. Tsai, M.; Fang, J.J.; Chang, J.L. Robotic Path Planning for an Automatic Mold Polishing System. Int. J. Robot. Autom. 2004, 19, 81–90. [Google Scholar] [CrossRef]
  43. Zhen, X.; Seng, J.C.Y.; Somani, N. Adaptive Automatic Robot Tool Path Generation Based on Point Cloud Projection Algorithm. In Proceedings of the 2019 24th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Zaragoza, Spain, 10–13 September 2019; pp. 341–347. [Google Scholar] [CrossRef]
  44. Peng, R.; Navarro-Alarcon, D.; Wu, V.; Yang, W. A Point Cloud-Based Method for Automatic Groove Detection and Trajectory Generation of Robotic Arc Welding Tasks. In Proceedings of the 2020 17th International Conference on Ubiquitous Robots (UR), Kyoto, Japan, 22–26 June 2020. [Google Scholar]
  45. Fujita, M.; Domae, Y.; Noda, A.; Garcia Ricardez, G.; Nagatani, T.; Zeng, A.; Song, S.; Rodriguez, A.; Causo, A.; Chen, I.M.; et al. What are the important technologies for bin picking? Technology analysis of robots in competitions based on a set of performance metrics. Adv. Robot. 2020, 34, 560–574. [Google Scholar] [CrossRef]
  46. Horn, B.; Ikeuchi, K. The Mechanical Manipulation of Randomly Oriented Parts. Sci. Am. 1984, 251, 100–111. [Google Scholar] [CrossRef]
  47. Marvel, J.A.; Saidi, K.; Eastman, R.; Hong, T.; Cheok, G.; Messina, E. Technology readiness levels for randomized bin picking. In Proceedings of the Workshop on Performance Metrics for Intelligent Systems (PerMIS '12); Special Publication (NIST SP); National Institute of Standards and Technology: Gaithersburg, MD, USA, 2012; pp. 109–113. [Google Scholar]
  48. Holz, D.; Topalidou-Kyniazopoulou, A.; Stückler, J.; Behnke, S. Real-time object detection, localization and verification for fast robotic depalletizing. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 1459–1466. [Google Scholar]
  49. Kleeberger, K.; Bormann, R.; Kraus, W.; Huber, M. A Survey on Learning-Based Robotic Grasping. Curr. Robot. Rep. 2020, 1, 239–249. [Google Scholar] [CrossRef]
  50. Spenrath, F.; Pott, A. Gripping Point Determination for Bin Picking Using Heuristic Search. Procedia CIRP 2017, 62, 606–611. [Google Scholar] [CrossRef]
  51. He, R.; Rojas, J.; Guan, Y. A 3D Object Detection and Pose Estimation Pipeline Using RGB-D Images. arXiv 2017, arXiv:1703.03940. [Google Scholar]
  52. Sock, J.; Kim, K.; Sahin, C.; Kim, T.K. Multi-task deep networks for depth-based 6D object pose and joint registration in crowd scenarios. arXiv 2019, arXiv:1806.03891. [Google Scholar]
  53. Kehl, W.; Manhardt, F.; Tombari, F.; Ilic, S.; Navab, N. SSD-6D: Making RGB-based 3D detection and 6D pose estimation great again. arXiv 2017, arXiv:1711.10006. [Google Scholar]
  54. Olesen, A.; Gergaly, B.; Ryberg, E.; Thomsen, M.; Chrysostomou, D. A collaborative robot cell for random bin-picking based on deep learning policies and a multi-gripper switching strategy. Procedia Manuf. 2020, 51, 3–10. [Google Scholar] [CrossRef]
  55. Rad, M.; Lepetit, V. BB8: A Scalable, Accurate, Robust to Partial Occlusion Method for Predicting the 3D Poses of Challenging Objects without Using Depth. arXiv 2018, arXiv:1703.10896. [Google Scholar]
  56. Xiang, Y.; Schmidt, T.; Narayanan, V.; Fox, D. PoseCNN: A Convolutional Neural Network for 6D Object Pose Estimation in Cluttered Scenes. arXiv 2018, arXiv:1711.00199. [Google Scholar]
  57. Tremblay, J.; To, T.; Sundaralingam, B.; Xiang, Y.; Fox, D.; Birchfield, S. Deep Object Pose Estimation for Semantic Robotic Grasping of Household Objects. arXiv 2018, arXiv:1809.10790. [Google Scholar]
  58. Lepetit, V.; Moreno-Noguer, F.; Fua, P. EPnP: An accurate O(n) solution to the PnP problem. Int. J. Comput. Vis. 2009, 81, 155. [Google Scholar] [CrossRef] [Green Version]
  59. Morrison, D.; Corke, P.; Leitner, J. Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach. arXiv 2018, arXiv:1804.05172. [Google Scholar]
  60. Zoghlami, F.; Kurrek, P.; Jocas, M.; Masala, G.; Salehi, V. Design of a Deep Post Gripping Perception Framework for Industrial Robots. J. Comput. Inf. Sci. Eng. 2021, 21, 021003. [Google Scholar] [CrossRef]
  61. Matsumura, R.; Domae, Y.; Wan, W.; Harada, K. Learning Based Robotic Bin-picking for Potentially Tangled Objects. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 7990–7997. [Google Scholar] [CrossRef]
  62. Moosmann, M.; Spenrath, F.; Kleeberger, K.; Khalid, M.U.; Mönnig, M.; Rosport, J.; Bormann, R. Increasing the Robustness of Random Bin Picking by Avoiding Grasps of Entangled Workpieces. Procedia CIRP 2020, 93, 1212–1217. [Google Scholar] [CrossRef]
  63. Silver, D.; Huang, A.; Maddison, C.; Guez, A.; Sifre, L.; Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; et al. Mastering the game of Go with deep neural networks and tree search. Nature 2016, 529, 484–489. [Google Scholar] [CrossRef]
  64. Kober, J.; Bagnell, J.A.; Peters, J. Reinforcement learning in robotics: A survey. Int. J. Robot. Res. 2013, 32, 1238–1274. [Google Scholar] [CrossRef] [Green Version]
  65. Sutton, R.S.; Barto, A.G. Reinforcement Learning: An Introduction; MIT Press: Cambridge, MA, USA, 2018. [Google Scholar]
  66. Liu, R.; Nageotte, F.; Zanne, P.; de Mathelin, M.; Dresp-Langley, B. Deep Reinforcement Learning for the Control of Robotic Manipulation: A Focussed Mini-Review. Robotics 2021, 10, 22. [Google Scholar] [CrossRef]
  67. Kroemer, O.; Niekum, S.; Konidaris, G. A Review of Robot Learning for Manipulation: Challenges, Representations, and Algorithms. J. Mach. Learn. Res. 2021, 22, 1–82. [Google Scholar]
  68. Chebotar, Y.; Hausman, K.; Zhang, M.; Sukhatme, G.; Schaal, S.; Levine, S. Combining model-based and model-free updates for trajectory-centric reinforcement learning. In Proceedings of the International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; pp. 703–711. [Google Scholar]
  69. Arulkumaran, K.; Deisenroth, M.P.; Brundage, M.; Bharath, A.A. Deep Reinforcement Learning: A Brief Survey. IEEE Signal Process. Mag. 2017, 34, 26–38. [Google Scholar] [CrossRef] [Green Version]
  70. Hua, J.; Zeng, L.; Li, G.; Ju, Z. Learning for a robot: Deep reinforcement learning, imitation learning, transfer learning. Sensors 2021, 21, 1278. [Google Scholar] [CrossRef]
  71. Zhan, A.; Zhao, P.; Pinto, L.; Abbeel, P.; Laskin, M. A Framework for Efficient Robotic Manipulation. arXiv 2020, arXiv:2012.07975. [Google Scholar]
  72. Quillen, D.; Jang, E.; Nachum, O.; Finn, C.; Ibarz, J.; Levine, S. Deep reinforcement learning for vision-based robotic grasping: A simulated comparative evaluation of off-policy methods. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; pp. 6284–6291. [Google Scholar]
  73. Joshi, S.; Kumra, S.; Sahin, F. Robotic Grasping using Deep Reinforcement Learning. In Proceedings of the 2020 IEEE 16th International Conference on Automation Science and Engineering (CASE), Hong Kong, China, 20–21 August 2020; pp. 1461–1466. [Google Scholar] [CrossRef]
  74. Wang, Y.; Lan, X.; Feng, C.; Wan, L.; Li, J.; Liu, Y.; Li, D. An experience-based policy gradient method for smooth manipulation. In Proceedings of the 2019 IEEE 9th Annual International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER), Suzhou, China, 29 July–2 August 2019; pp. 93–97. [Google Scholar] [CrossRef]
  75. Pedersen, O.M.; Misimi, E.; Chaumette, F. Grasping Unknown Objects by Coupling Deep Reinforcement Learning, Generative Adversarial Networks, and Visual Servoing. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 5655–5662. [Google Scholar] [CrossRef]
  76. Chen, Y.; Ju, Z.; Yang, C. Combining reinforcement learning and rule-based method to manipulate objects in clutter. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020; pp. 1–6. [Google Scholar]
  77. Zeng, A.; Song, S.; Welker, S.; Lee, J.; Rodriguez, A.; Funkhouser, T. Learning Synergies Between Pushing and Grasping with Self-Supervised Deep Reinforcement Learning. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 4238–4245. [Google Scholar] [CrossRef] [Green Version]
  78. Yang, Y.; Liang, H.; Choi, C. A deep learning approach to grasping the invisible. IEEE Robot. Autom. Lett. 2020, 5, 2232–2239. [Google Scholar] [CrossRef] [Green Version]
  79. Shin, K.; Sim, M.; Choi, E.; Park, H.; Choi, J.W.; Cho, Y.; Sohn, J.I.; Cha, S.N.; Jang, J.E. Artificial Tactile Sensor Structure for Surface Topography Through Sliding. IEEE/ASME Trans. Mechatronics 2018, 23, 2638–2649. [Google Scholar] [CrossRef]
  80. Vulin, N.; Christen, S.; Stevšić, S.; Hilliges, O. Improved learning of robot manipulation tasks via tactile intrinsic motivation. IEEE Robot. Autom. Lett. 2021, 6, 2194–2201. [Google Scholar] [CrossRef]
  81. Merzić, H.; Bogdanović, M.; Kappler, D.; Righetti, L.; Bohg, J. Leveraging contact forces for learning to grasp. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 3615–3621. [Google Scholar]
  82. Johannink, T.; Bahl, S.; Nair, A.; Luo, J.; Kumar, A.; Loskyll, M.; Ojea, J.A.; Solowjow, E.; Levine, S. Residual reinforcement learning for robot control. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 6023–6029. [Google Scholar]
  83. Beltran-Hernandez, C.C.; Petit, D.; Ramirez-Alpizar, I.G.; Nishi, T.; Kikuchi, S.; Matsubara, T.; Harada, K. Learning force control for contact-rich manipulation tasks with rigid position-controlled robots. IEEE Robot. Autom. Lett. 2020, 5, 5709–5716. [Google Scholar] [CrossRef]
  84. Huang, S.H.; Zambelli, M.; Kay, J.; Martins, M.F.; Tassa, Y.; Pilarski, P.M.; Hadsell, R. Learning gentle object manipulation with curiosity-driven deep reinforcement learning. arXiv 2019, arXiv:1903.08542. [Google Scholar]
  85. Melnik, A.; Lach, L.; Plappert, M.; Korthals, T.; Haschke, R.; Ritter, H. Tactile sensing and deep reinforcement learning for in-hand manipulation tasks. In Proceedings of the IROS Workshop on Autonomous Object Manipulation, Venetian Macao, Macau, China, 8 November 2019. [Google Scholar]
  86. Hundt, A.; Killeen, B.; Greene, N.; Wu, H.; Kwon, H.; Paxton, C.; Hager, G.D. “Good Robot!”: Efficient Reinforcement Learning for Multi-Step Visual Tasks with Sim to Real Transfer. IEEE Robot. Autom. Lett. 2020, 5, 6724–6731. [Google Scholar] [CrossRef]
  87. Thomas, G.; Chien, M.; Tamar, A.; Ojea, J.A.; Abbeel, P. Learning robotic assembly from cad. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; pp. 3524–3531. [Google Scholar]
  88. Luo, J.; Solowjow, E.; Wen, C.; Ojea, J.A.; Agogino, A.M.; Tamar, A.; Abbeel, P. Reinforcement Learning on Variable Impedance Controller for High-Precision Robotic Assembly. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 3080–3087. [Google Scholar] [CrossRef] [Green Version]
  89. Chen, P.; Lu, W. Deep reinforcement learning based moving object grasping. Inf. Sci. 2021, 565, 62–76. [Google Scholar] [CrossRef]
  90. Hu, Z.; Zheng, Y.; Pan, J. Living Object Grasping Using Two-Stage Graph Reinforcement Learning. IEEE Robot. Autom. Lett. 2021, 6, 1950–1957. [Google Scholar] [CrossRef]
  91. Attia, A.; Dayan, S. Global overview of imitation learning. arXiv 2018, arXiv:1801.06503. [Google Scholar]
  92. Osa, T.; Pajarinen, J.; Neumann, G.; Bagnell, J.A.; Abbeel, P.; Peters, J. An algorithmic perspective on imitation learning. arXiv 2018, arXiv:1811.06711. [Google Scholar]
  93. Fang, B.; Jia, S.; Guo, D.; Xu, M.; Wen, S.; Sun, F. Survey of imitation learning for robotic manipulation. Int. J. Intell. Robot. Appl. 2019, 3, 362–369. [Google Scholar] [CrossRef]
  94. Suomalainen, M.; Abu-Dakka, F.J.; Kyrki, V. Imitation learning-based framework for learning 6-D linear compliant motions. Auton. Robot. 2021, 45, 389–405. [Google Scholar] [CrossRef]
  95. Gašpar, T.; Nemec, B.; Morimoto, J.; Ude, A. Skill learning and action recognition by arc-length dynamic movement primitives. Robot. Auton. Syst. 2018, 100, 225–235. [Google Scholar] [CrossRef] [Green Version]
  96. Perico, C.A.V.; De Schutter, J.; Aertbeliën, E. Combining imitation learning with constraint-based task specification and control. IEEE Robot. Autom. Lett. 2019, 4, 1892–1899. [Google Scholar] [CrossRef]
  97. Gubbi, S.; Kolathaya, S.; Amrutur, B. Imitation Learning for High Precision Peg-in-Hole Tasks. In Proceedings of the 2020 6th International Conference on Control, Automation and Robotics (ICCAR), Singapore, 20–23 April 2020; pp. 368–372. [Google Scholar]
  98. Zhang, T.; McCarthy, Z.; Jow, O.; Lee, D.; Chen, X.; Goldberg, K.; Abbeel, P. Deep imitation learning for complex manipulation tasks from virtual reality teleoperation. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; pp. 5628–5635. [Google Scholar]
  99. Sermanet, P.; Lynch, C.; Chebotar, Y.; Hsu, J.; Jang, E.; Schaal, S.; Levine, S.; Brain, G. Time-contrastive networks: Self-supervised learning from video. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; pp. 1134–1141. [Google Scholar]
  100. Edmonds, M.; Gao, F.; Xie, X.; Liu, H.; Qi, S.; Zhu, Y.; Rothrock, B.; Zhu, S.C. Feeling the force: Integrating force and pose for fluent discovery through imitation learning to open medicine bottles. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 3530–3537. [Google Scholar]
  101. Andtfolk, M.; Nyholm, L.; Eide, H.; Fagerström, L. Humanoid robots in the care of older persons: A scoping review. Assist. Technol. 2021, 1–9, Online ahead of print. [Google Scholar] [CrossRef]
  102. Dyrstad, J.S.; Øye, E.R.; Stahl, A.; Mathiassen, J.R. Teaching a robot to grasp real fish by imitation learning from a human supervisor in virtual reality. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 7185–7192. [Google Scholar]
  103. Kober, J.; Peters, J. Policy search for motor primitives in robotics. Mach. Learn. 2011, 84, 171–203. [Google Scholar] [CrossRef] [Green Version]
  104. Karnouskos, S.; Sinha, R.; Leitão, P.; Ribeiro, L.; Strasser, T.I. The applicability of ISO/IEC 25023 measures to the integration of agents and automation systems. In Proceedings of the IECON 2018—44th Annual Conference of the IEEE Industrial Electronics Society, Washington, DC, USA, 21–23 October 2018; pp. 2927–2934. [Google Scholar]
  105. Smids, J.; Nyholm, S.; Berkers, H. Robots in the Workplace: A Threat to—Or Opportunity for—Meaningful Work? Philos. Technol. 2020, 33, 503–522. [Google Scholar] [CrossRef] [Green Version]
  106. Wadsworth, E.; Walters, D. Safety and Health at the Heart of the Future of Work: Building on 100 Years of Experience; International Labour Office: Geneva, Switzerland, 2019. [Google Scholar]
  107. Giffi, C.; Wellener, P.; Dollar, B.; Manolian, H.A.; Monck, L.; Moutray, C. Deloitte and The Manufacturing Institute Skills Gap and Future of Work Study; Deloitte Insights: Birmingham, AL, USA, 2018; Available online: https://www2.deloitte.com/content/dam/insights/us/articles/4736_2018-Deloitte-skills-gap-FoW-manufacturing/DI_2018-Deloitte-skills-gap-FoW-manufacturing-study.pdf (accessed on 25 August 2021).
  108. Urlini, G.; Arents, J.; Latella, A. AI in Industrial Machinery. In Artificial Intelligence for Digitising Industry; River Publishers: Gistrup, Denmark, 2021; pp. 179–185. [Google Scholar]
  109. Jakobi, N.; Husb, P.; Harvey, I. Noise and The Reality Gap: The Use of Simulation in Evolutionary Robotics. In Proceedings of the European Conference on Artificial Life, Lausanne, Switzerland, 13–17 September 1999. [Google Scholar] [CrossRef]
  110. Zeng, R.; Wen, Y.; Zhao, W.; Liu, Y.J. View planning in robot active vision: A survey of systems, algorithms, and applications. Comput. Vis. Media 2020, 6, 225–245. [Google Scholar] [CrossRef]
  111. Zhang, T.; Mo, H. Reinforcement learning for robot research: A comprehensive review and open issues. Int. J. Adv. Robot. Syst. 2021, 18, 1–22. [Google Scholar] [CrossRef]
  112. Choi, H.; Crump, C.; Duriez, C.; Elmquist, A.; Hager, G.; Han, D.; Hearl, F.; Hodgins, J.; Jain, A.; Leve, F.; et al. On the use of simulation in robotics: Opportunities, challenges, and suggestions for moving forward. Proc. Natl. Acad. Sci. USA 2021, 118, e1907856118. [Google Scholar] [CrossRef]
  113. Cognilytica. Data Engineering, Preparation, and Labeling for AI. 2019. Available online: https://www.cloudfactory.com/reports/data-engineering-preparation-labeling-for-ai (accessed on 20 November 2021).
  114. Roh, Y.; Heo, G.; Whang, S. A Survey on Data Collection for Machine Learning: A Big Data—AI Integration Perspective. IEEE Trans. Knowl. Data Eng. 2019, 33, 1328–1347. [Google Scholar] [CrossRef] [Green Version]
  115. Noël Laflamme, C.É.; Pomerleau, F.; Giguère, P. Driving Datasets Literature Review. arXiv 2019, arXiv:1910.11968. [Google Scholar]
  116. Vanschoren, J.; Van Rijn, J.N.; Bischl, B.; Torgo, L. OpenML: Networked science in machine learning. ACM SIGKDD Explor. Newsl. 2014, 15, 49–60. [Google Scholar] [CrossRef] [Green Version]
  117. Bertolini, M.; Mezzogori, D.; Neroni, M.; Zammori, F. Machine Learning for industrial applications: A comprehensive literature review. Expert Syst. Appl. 2021, 175, 114820. [Google Scholar] [CrossRef]
  118. Levine, S.; Pastor, P.; Krizhevsky, A.; Quillen, D. Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection. Int. J. Robot. Res. 2016, 37, 421–436. [Google Scholar] [CrossRef]
  119. De Vet, J.M.; Nigohosyan, D.; Ferrer, J.N.; Gross, A.K.; Kuehl, S.; Flickenschild, M. Impacts of the COVID-19 Pandemic on EU Industries; European Parliament: Strasbourg, France, 2021; Available online: https://www.europarl.europa.eu/RegData/etudes/STUD/2021/662903/IPOL_STU(2021)662903_EN.pdf (accessed on 12 October 2021).
  120. Atkinson, R.D. Robotics and the Future of Production and Work; Technical Report; Information Technology and Innovation Foundation: Washington, DC, USA, 2019; Available online: https://itif.org/publications/2019/10/15/robotics-and-future-production-and-work (accessed on 5 September 2021).
  121. Demir, K.A.; Döven, G.; Sezen, B. Industry 5.0 and human-robot co-working. Procedia Comput. Sci. 2019, 158, 688–695. [Google Scholar] [CrossRef]
  122. Bröhl, C.; Nelles, J.; Brandl, C.; Mertens, A.; Nitsch, V. Human–robot collaboration acceptance model: Development and comparison for Germany, Japan, China and the USA. Int. J. Soc. Robot. 2019, 11, 709–726. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Smart industrial robot concept.
Figure 2. Typical pipelines of computer-vision-based industrial robot control.
Figure 3. Some of the possible combinations of different conditions: (a) single object [35]; (b) multiple, similar and overlapping objects [36]; (c) complex, multiple, different and divided objects [37]; (d) multiple, different and overlapping objects [38].
Figure 4. Agent-environment interaction [65].
Figure 5. The classification of DRL algorithms [70].
Figure 6. Grasping the invisible [78].
Figure 7. Gentle object manipulation with an anthropomorphic robotic hand [84].
Figure 8. U-shape object assembly [87].
Figure 9. Demonstration by kinesthetic teaching [94].
Figure 10. Pouring task with a real robot [99].
Figure 11. Imitation learning in VR [102].
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
