Robotics, Volume 12, Issue 6 (December 2023) – 26 articles

Cover Story: This work proposes a “Learning by Demonstration” framework based on Dynamic Movement Primitives (DMPs) that can be effectively adopted to plan complex activities in robotics. The approach uses Lie theory and integrates the exponential and logarithmic maps into the DMP equations, converting any element of the Lie group SO(3) into an element of the tangent space so(3) and vice versa. Moreover, it includes a dynamic parameterization of the tangent space elements to manage the discontinuity of the logarithmic map. The proposed approach was tested on the Tiago robot in the fulfillment of agricultural activities such as digging, seeding, irrigation, and harvesting. The obtained results (a 100% success rate) demonstrate the method's high capability to manage orientation discontinuity.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats, but PDF is the official version of record. To view a paper in PDF format, click its "PDF Full-text" link and open it with the free Adobe Reader.
18 pages, 5038 KiB  
Article
A Novel Control Architecture Based on Behavior Trees for an Omni-Directional Mobile Robot
Robotics 2023, 12(6), 170; https://doi.org/10.3390/robotics12060170 - 16 Dec 2023
Cited by 1 | Viewed by 1141
Abstract
Robotic systems are increasingly present in dynamic environments. This paper proposes a hierarchical control structure wherein a behavior tree (BT) is used to improve the flexibility and adaptability of an omni-directional mobile robot for point stabilization. Flexibility and adaptability are crucial at each level of the sense–plan–act loop to implement robust and effective robotic solutions in dynamic environments. The proposed BT combines high-level decision making and continuous execution monitoring while applying non-linear model predictive control (NMPC) for the point stabilization of an omni-directional mobile robot. The proposed control architecture can guide the mobile robot to any configuration within the workspace while satisfying state constraints (e.g., obstacle avoidance) and input constraints (e.g., motor limits). The effectiveness of the controller was validated through a set of realistic simulation scenarios and experiments in a real environment, where an industrial omni-directional mobile robot performed a point stabilization task with obstacle avoidance in a workspace.
(This article belongs to the Topic Industrial Robotics: 2nd Volume)
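The Fallback/Sequence structure typical of such behavior trees can be sketched as follows. This is a minimal, hypothetical illustration of the pattern the abstract describes, not the authors' implementation; all leaf names and the stubbed NMPC step are illustrative.

```python
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3

class Sequence:
    """Ticks children left to right; returns FAILURE or RUNNING as soon as a child does."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status is not Status.SUCCESS:
                return status
        return Status.SUCCESS

class Fallback:
    """Ticks children left to right; returns SUCCESS or RUNNING as soon as a child does."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status is not Status.FAILURE:
                return status
        return Status.FAILURE

class Leaf:
    """Wraps a condition or action callback as a tree leaf."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self):
        return self.fn()

# Hypothetical leaves: succeed once the robot reaches the goal pose,
# otherwise run one NMPC control step per tick (both stubbed here).
robot_at_goal = lambda: False
at_goal = Leaf(lambda: Status.SUCCESS if robot_at_goal() else Status.FAILURE)
nmpc_step = Leaf(lambda: Status.RUNNING)
root = Fallback([at_goal, nmpc_step])
```

Ticking `root` repeatedly yields RUNNING until the condition leaf succeeds; real leaves would query the state estimator and call the NMPC solver.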

26 pages, 5465 KiB  
Article
NOHAS: A Novel Orthotic Hand Actuated by Servo Motors and Mobile App for Stroke Rehabilitation
Robotics 2023, 12(6), 169; https://doi.org/10.3390/robotics12060169 - 08 Dec 2023
Viewed by 1796
Abstract
The rehabilitation process after the onset of a stroke primarily deals with assisting in regaining mobility, communication skills, swallowing function, and activities of daily living (ADLs). This entirely depends on the specific regions of the brain that have been affected by the stroke. Patients can learn how to utilize adaptive equipment, regain movement, and reduce muscle spasticity through certain repetitive exercises and therapeutic interventions. These exercises can be performed by wearing soft robotic gloves on the impaired extremity. For post-stroke rehabilitation, we have designed and characterized an interactive hand orthosis with tendon-driven finger actuation mechanisms driven by servo motors, which consists of a fabric glove and force-sensitive resistors (FSRs) at the fingertips. The robotic device moves the user’s hand when operated by mobile phone to replicate normal gripping behavior. In this paper, the characterization of finger movements in response to step input commands from a mobile app was carried out for each finger at the proximal interphalangeal (PIP), distal interphalangeal (DIP), and metacarpophalangeal (MCP) joints. In general, servo motor-based hand orthoses are energy-efficient; however, they generate noise during actuation. Here, we quantified the noise generated by servo motor actuation for each finger as well as when a group of fingers is simultaneously activated. To test ADL ability, we evaluated the device’s effectiveness in holding different objects from the Action Research Arm Test (ARAT) kit. Our device, a novel orthotic hand actuated by servo motors (NOHAS), was tested on ten healthy human subjects and achieved an average success rate of 90% in grasping tasks. Our orthotic hand shows promise for helping post-stroke subjects recover because of its simplicity of use, lightweight construction, and carefully designed components.
(This article belongs to the Special Issue AI for Robotic Exoskeletons and Prostheses)

23 pages, 8736 KiB  
Article
Emotional Experience in Human–Robot Collaboration: Suitability of Virtual Reality Scenarios to Study Interactions beyond Safety Restrictions
Robotics 2023, 12(6), 168; https://doi.org/10.3390/robotics12060168 - 08 Dec 2023
Viewed by 1283
Abstract
Today’s research on fenceless human–robot collaboration (HRC) is challenged by the relatively slow development of safety features. Simultaneously, design recommendations for HRC are requested by industry. To simulate HRC scenarios in advance, virtual reality (VR) technology can be utilized while ensuring safety. VR also allows researchers to study the effects of safety-restricted features, such as close distances during movements and events of robotic malfunction. In this paper, we present a VR experiment with 40 participants collaborating with a heavy-load robot and compare the results to a similar real-world experiment to study transferability and validity. The participants’ proximity to the robot, their interaction level, and occurring system failures were varied. State anxiety, trust, and intention to use were used as dependent variables, and valence and arousal values were assessed over time. Overall, state anxiety was low, and trust and intention to use were high. Only simulated failures significantly increased state anxiety, reduced trust, and resulted in reduced valence and increased arousal. In comparison with the real-world experiment, non-significant differences in all dependent variables and similar progressions of valence and arousal were found during scenarios without system failures. Therefore, the suitability of applying VR in HRC research to study safety-restricted features can be supported; however, further research should examine transferability for high-intensity emotional experiences.
(This article belongs to the Special Issue Collection in Honor of Women's Contribution in Robotics)

30 pages, 5439 KiB  
Article
Evaluating the Performance of Mobile-Convolutional Neural Networks for Spatial and Temporal Human Action Recognition Analysis
Robotics 2023, 12(6), 167; https://doi.org/10.3390/robotics12060167 - 08 Dec 2023
Viewed by 1313
Abstract
Human action recognition is a computer vision task that identifies how a person or a group acts in a video sequence. Various methods that rely on deep-learning techniques, such as two- or three-dimensional convolutional neural networks (2D-CNNs, 3D-CNNs), recurrent neural networks (RNNs), and vision transformers (ViT), have been proposed to address this problem over the years. Motivated by the fact that most of the CNNs used in human action recognition present high complexity, and by the necessity of implementations on mobile platforms characterized by restricted computational resources, in this article we conduct an extensive evaluation of the performance of five lightweight architectures. In particular, we examine how these mobile-oriented CNNs (viz., ShuffleNet-v2, EfficientNet-b0, MobileNet-v3, and GhostNet) perform in spatial analysis compared to a recent tiny ViT, namely EVA-02-Ti, and a higher computational model, ResNet-50. Our models, previously trained on ImageNet and BU101, are measured for their classification accuracy on HMDB51, UCF101, and six classes of the NTU dataset. The average and max scores, as well as the voting approaches, are generated from three and fifteen RGB frames of each video, while two different rates for the dropout layers were assessed during training. Lastly, a temporal analysis via multiple types of RNNs that employ features extracted by the trained networks is presented. Our results reveal that EfficientNet-b0 and EVA-02-Ti surpass the other mobile CNNs, achieving comparable or superior performance to ResNet-50.
(This article belongs to the Special Issue Towards Socially Intelligent Robots)
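The average, max, and voting aggregation schemes over per-frame scores can be sketched as follows. This is a generic illustration under assumed per-frame class-probability inputs, not the authors' code; the function name is hypothetical.

```python
import numpy as np

def aggregate_video_prediction(frame_probs, method="avg"):
    """Combine per-frame class probabilities (n_frames x n_classes)
    into one video-level class index."""
    frame_probs = np.asarray(frame_probs)
    if method == "avg":   # mean score per class over frames, then argmax
        return int(frame_probs.mean(axis=0).argmax())
    if method == "max":   # max score per class over frames, then argmax
        return int(frame_probs.max(axis=0).argmax())
    if method == "vote":  # majority vote over per-frame argmax decisions
        votes = frame_probs.argmax(axis=1)
        return int(np.bincount(votes).argmax())
    raise ValueError(f"unknown method: {method}")
```

Note that the schemes can disagree: one very confident frame can dominate the average while being outvoted frame-by-frame.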

17 pages, 4938 KiB  
Article
Robot Learning by Demonstration with Dynamic Parameterization of the Orientation: An Application to Agricultural Activities
Robotics 2023, 12(6), 166; https://doi.org/10.3390/robotics12060166 - 07 Dec 2023
Viewed by 1296
Abstract
This work proposes a Learning by Demonstration framework based on Dynamic Movement Primitives (DMPs) that can be effectively adopted to plan complex robotic activities, such as those performed in agricultural domains, while avoiding orientation discontinuity during motion learning. The approach resorts to Lie theory and integrates into the DMP equations the exponential and logarithmic maps, which convert any element of the Lie group SO(3) into an element of the tangent space so(3) and vice versa. Moreover, it includes a dynamic parameterization of the tangent space elements to manage the discontinuity of the logarithmic map. The proposed approach was tested on the Tiago robot during the fulfillment of four agricultural activities: digging, seeding, irrigation, and harvesting. The obtained results were compared to those achieved using the original formulation of the DMPs and demonstrated the high capability of the proposed method to manage orientation discontinuity (the success rate was 100% for all the tested poses).
(This article belongs to the Special Issue Robotics and AI for Precision Agriculture)
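The exponential and logarithmic maps between SO(3) and so(3) that the abstract refers to can be sketched with the standard Rodrigues formulas. This is a minimal illustration, not the paper's DMP integration, and it sidesteps the rotation-angle-near-π case, which is exactly the log-map discontinuity that motivates the paper's dynamic parameterization.

```python
import numpy as np

def so3_exp(w):
    """Exponential map: rotation vector w in so(3) -> rotation matrix in SO(3)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])  # skew-symmetric matrix of the unit axis
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def so3_log(R):
    """Logarithmic map: rotation matrix in SO(3) -> rotation vector in so(3).
    Valid for rotation angles in (0, pi); near pi the map is discontinuous."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if theta < 1e-12:
        return np.zeros(3)
    return theta / (2.0 * np.sin(theta)) * np.array(
        [R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
```

For moderate rotations the two maps are mutual inverses, which is what lets the DMP equations operate in the tangent space and map the result back to SO(3).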

16 pages, 40065 KiB  
Article
AgroCableBot: Reconfigurable Cable-Driven Parallel Robot for Greenhouse or Urban Farming Automation
Robotics 2023, 12(6), 165; https://doi.org/10.3390/robotics12060165 - 01 Dec 2023
Viewed by 1456
Abstract
In this paper, a Cable-Driven Parallel Robot developed to automate repetitive and essential tasks in crop production in greenhouse and urban garden environments is introduced. The robot has a suspended configuration with five degrees-of-freedom, composed of a fixed platform (frame) and a moving platform known as the end-effector. To generate its movements and operations, eight cables are used, which move through eight pulley systems and are controlled by four winches. In addition, the robot is equipped with a seedbed that houses potted plants. Unlike conventional suspended cable robots, this robot incorporates four moving pulley systems in the frame, which significantly increases its workspace. The development of this type of robot requires precise control of the end-effector pose, which includes both the position and orientation of the robot extremity. To achieve this control, analysis is performed in two fundamental aspects: kinematic analysis and dynamic analysis. In addition, an analysis of the effective workspace of the robot is carried out, taking into account the distribution of tensions in the cables; the aim is to verify the increase in working area, which is useful for covering a larger crop area. The robot has been validated through simulations, which present possible trajectories that the robot could follow depending on the tasks to be performed in the crop. This work supports the feasibility of using this type of robotic system to automate specific agricultural processes, such as sowing, irrigation, and crop inspection, with the aim of improving crop quality, reducing the consumption of critical resources such as water and fertilizers, and establishing such robots as technological tools in modern agriculture.
(This article belongs to the Special Issue Robotics and AI for Precision Agriculture)
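The kinematic relation at the core of such cable robots — each cable length is the distance from its frame anchor to its attachment point on the moving platform — can be sketched as follows. This is the generic textbook relation with hypothetical geometry, not the authors' model, which additionally accounts for the moving pulleys and cable tensions.

```python
import numpy as np

def cable_lengths(p, R, frame_anchors, platform_points):
    """Inverse kinematics of a cable-driven parallel robot.

    Cable i runs from frame anchor a_i to the platform attachment point
    p + R @ b_i (platform position p, orientation R, body-frame offset b_i),
    so its length is the Euclidean distance between those two points."""
    return np.array([np.linalg.norm(a - (p + R @ b))
                     for a, b in zip(frame_anchors, platform_points)])
```

With the end-effector treated as a point (all `b_i = 0`) suspended below the four top corners of a unit frame, all four cables come out equally long, as symmetry requires.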

30 pages, 5705 KiB  
Article
Length Modelling of Spiral Superficial Soft Strain Sensors Using Geodesics and Covering Spaces
Robotics 2023, 12(6), 164; https://doi.org/10.3390/robotics12060164 - 01 Dec 2023
Viewed by 1413
Abstract
Piecewise constant curvature soft actuators can generate various types of movements. These actuators can undergo extension, bending, rotation, twist, or a combination of these. Proprioceptive sensing provides the ability to track their movement or estimate their state in 3D space. Several proprioceptive sensing solutions have been developed using soft strain sensors. However, current mathematical models are only capable of modelling the length of the soft sensors when they are attached to actuators subjected to extension, bending, and rotation movements. Furthermore, these models are limited to modelling straight sensors and are incapable of modelling spiral sensors. In this study, for both spiral and straight sensors, we utilise concepts from geodesics and covering spaces to present a mathematical length model that includes twist. This study is limited to piecewise constant curvature actuators and demonstrates, among other things, the advantages of our model and its accuracy when including and excluding twist. We verify the model by comparing the results to a finite element analysis involving multiple simulation scenarios designed specifically for the verification process. Finally, we validate the theoretical results against previously published experimental results and discuss the limitations and possible applications of our model using examples from the literature.
(This article belongs to the Special Issue Editorial Board Members' Collection Series: "Soft Robotics")
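The covering-space idea in the title can be illustrated on the simplest case: unrolling a cylinder onto the plane maps a helical (spiral) path to a straight line, so its length follows from the Pythagorean theorem. This is a toy illustration of the geometric principle only, not the paper's full model, which handles bending and twist of the actuator surface.

```python
import math

def spiral_sensor_length(radius, height, wrap_angle):
    """Length of a helical path on a cylinder of the given radius.

    Unrolling the cylinder onto the plane (its covering space) turns the
    helix into the hypotenuse of a right triangle whose legs are the
    cylinder height and the unrolled arc length radius * wrap_angle."""
    return math.hypot(height, radius * wrap_angle)
```

Two sanity checks follow directly: a zero wrap angle gives a straight sensor whose length equals the height, and a full turn at zero height gives the circumference.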

26 pages, 8960 KiB  
Article
Virtual Reality Teleoperation System for Mobile Robot Manipulation
Robotics 2023, 12(6), 163; https://doi.org/10.3390/robotics12060163 - 29 Nov 2023
Cited by 1 | Viewed by 1589
Abstract
Over the past few years, the industry has experienced significant growth, leading to what is now known as Industry 4.0. This advancement has been characterized by the automation of robots. Industries have embraced mobile robots to enhance efficiency in specific manufacturing tasks, aiming for optimal results and reducing human errors. Moreover, robots can perform tasks in areas inaccessible to humans, such as hard-to-reach zones or hazardous environments. However, the challenge lies in the lack of knowledge about the operation and proper use of the robot. This work presents the development of a teleoperation system using HTC Vive Pro 2 virtual reality goggles. This allows individuals to immerse themselves in a fully virtual environment to become familiar with the operation and control of the KUKA youBot robot. The virtual reality experience is created in Unity, and through this, robot movements are executed, followed by a connection to ROS (Robot Operating System). To prevent potential damage to the real robot, a simulation is conducted in Gazebo, facilitating the understanding of the robot’s operation.
(This article belongs to the Special Issue Digital Twin-Based Human–Robot Collaborative Systems)

13 pages, 1457 KiB  
Article
Are Friendly Robots Trusted More? An Analysis of Robot Sociability and Trust
Robotics 2023, 12(6), 162; https://doi.org/10.3390/robotics12060162 - 29 Nov 2023
Viewed by 1360
Abstract
Older individuals prefer to maintain their autonomy while maintaining social connection and engagement with their family, peers, and community. Though individuals can encounter barriers to these goals, socially assistive robots (SARs) hold the potential for promoting aging in place and independence. Such domestic robots must be trusted, easy to use, and capable of behaving within the scope of accepted social norms for successful adoption to scale. We investigated perceived associations between robot sociability and trust in domestic robot support for instrumental activities of daily living (IADLs). In our multi-study approach, we collected responses from adults aged 65 years and older using two separate online surveys (Study 1, N = 51; Study 2, N = 43). We assessed the relationship between perceived robot sociability and robot trust. Our results consistently demonstrated a strong positive relationship between perceived robot sociability and robot trust for IADL tasks. These data have design implications for promoting robot trust and acceptance of SARs for use in the home by older adults.
(This article belongs to the Special Issue Human Factors in Human–Robot Interaction)

16 pages, 4168 KiB  
Article
DDPG-Based Adaptive Sliding Mode Control with Extended State Observer for Multibody Robot Systems
Robotics 2023, 12(6), 161; https://doi.org/10.3390/robotics12060161 - 26 Nov 2023
Viewed by 1313
Abstract
This research introduces a robust control design for multibody robot systems, incorporating sliding mode control (SMC) for robustness against uncertainties and disturbances. SMC achieves this by directing system states toward a predefined sliding surface for finite-time stability. However, a challenge arises in selecting controller parameters, specifically the switching gain, as it depends on the upper bounds of perturbations, including nonlinearities, uncertainties, and disturbances, impacting the system. Consequently, gain selection becomes challenging when the system dynamics are unknown. To address this issue, an extended state observer (ESO) is integrated with SMC, resulting in SMCESO, which treats system dynamics and disturbances as perturbations and estimates them to compensate for their effects on the system response, ensuring robust performance. To further enhance system performance, deep deterministic policy gradient (DDPG) is employed to fine-tune SMCESO, utilizing both actual and estimated states as input states for the DDPG agent and reward selection. This training process enhances both tracking and estimation performance. Furthermore, the proposed method is compared with optimal-PID, SMC, and H∞ controllers in the presence of external disturbances and parameter variation. MATLAB/Simulink simulations confirm that, overall, SMCESO provides robust performance, especially under parameter variations, where the other controllers struggle to converge the tracking error to zero.
(This article belongs to the Special Issue Kinematics and Robot Design VI, KaRD2023)
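The basic SMC structure the abstract refers to — a sliding surface plus a switching term whose gain must dominate the perturbation bound — can be sketched for a scalar tracking error as follows. This is an illustrative textbook form with hypothetical gains, not the paper's SMCESO/DDPG design; the ESO estimate and the learned gain tuning are exactly what this sketch omits.

```python
def smc_control(e, e_dot, lam=2.0, k=5.0, boundary=0.05):
    """One step of sliding mode control for tracking error e.

    Sliding surface: s = e_dot + lam * e. The switching term -k * sat(s)
    drives the state to s = 0, provided k exceeds the perturbation bound;
    estimating that bound is why the paper adds an ESO and tunes the
    gains with DDPG. A boundary layer (saturation instead of sign)
    reduces chattering."""
    s = e_dot + lam * e
    sat = max(-1.0, min(1.0, s / boundary))
    return -k * sat
```

Outside the boundary layer the controller outputs the full switching gain with the sign opposing `s`, and it vanishes exactly on the sliding surface.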

27 pages, 2606 KiB  
Review
Industrial Robots in Mechanical Machining: Perspectives and Limitations
Robotics 2023, 12(6), 160; https://doi.org/10.3390/robotics12060160 - 24 Nov 2023
Cited by 1 | Viewed by 1901
Abstract
Recently, a need has emerged to machine soft materials and extra-large components, requiring special solutions that industrial robots can provide affordably. Industrial robots are suitable for such tasks due to their flexibility, accuracy, and consistency in machining operations. However, robot implementation faces some limitations, such as a huge variety of materials and tools, low adaptability to environmental changes, flexibility issues, a complicated tool-path preparation process, and challenges in quality control. Industrial robotics applications include cutting, milling, drilling, and grinding procedures on various materials, including metal, plastics, and wood. Advanced robotics technologies involve the latest advances in robotics, including the integration of sophisticated control systems, sensors, data fusion techniques, and machine learning algorithms. These innovations enable robots to better adapt to and interact with their environment, ultimately increasing their accuracy. The main focus of this study is to cover the most common industrial robotic machining processes and to identify how specific advanced technologies can improve their performance. In most of the studied literature, the primary research objective across all operations is to enhance the stiffness of the robotic arm’s structure. Some publications propose approaches for planning the robot’s posture or tool orientation, while others focus on optimizing machining parameters through advanced control and computation, including machine learning methods that integrate collected sensor data.
(This article belongs to the Special Issue The State-of-the-Art of Robotics in Europe)

15 pages, 5905 KiB  
Article
Minimum Energy Utilization Strategy for Fleet of Autonomous Robots in Urban Waste Management
Robotics 2023, 12(6), 159; https://doi.org/10.3390/robotics12060159 - 23 Nov 2023
Viewed by 1293
Abstract
Many service robots have to operate in a variety of different Service Event Areas (SEAs). In the case of the waste collection robot MARBLE (Mobile Autonomous Robot for Litter Emptying), every SEA has characteristics like varying area and number of litter bins, different distances between litter bins, and uncertain bin filling levels. The global positions of litter bins, together with the drop-off positions where MARBLEs unload garbage after reaching their maximum capacity, are defined as task-performing waypoints. We provide boundary delimitation for the characteristics that describe an SEA. The boundaries interpolate the synergy between individual SEAs and the developed algorithms, which helps in determining which algorithm best suits an SEA depending on its characteristics. The developed route-planning methodologies are based on vehicle routing with simulated annealing (VRPSA) and knapsack problems (KSPs). VRPSA uses specific weighting based on route permutation operators, an initial temperature, and the nearest-neighbor approach. The KSP optimizes a route for a given capacity, in this case using smart litter bin (SLB) information. The game-theory KSP algorithm with SLB information and the KSP algorithm without SLB information perform better on SEAs smaller than 0.5 km² with fewer than 50 litter bins. When the standard deviation of the bin fill rate is ≈10%, the KSP without SLB information is preferred, and if the standard deviation is between 25 and 40%, the game-theory KSP is selected. Finally, the vehicle routing approach performs best in SEAs with an area of 0.55 km², 50–450 litter bins, and a fill rate of 10–40%.
(This article belongs to the Special Issue Multi-robot Systems: State of the Art and Future Progress)
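The capacity-constrained bin selection that a KSP-based planner performs can be illustrated with a standard 0/1 knapsack dynamic program. This is a simplified stand-in with hypothetical integer units, not the paper's game-theoretic variant: it only picks which bins to visit so that the collected waste is maximized without exceeding the robot's capacity.

```python
def select_bins(fill_levels, capacity):
    """0/1 knapsack over litter bins: choose the subset whose total fill
    maximizes collected waste without exceeding the robot's capacity.
    Fill levels and capacity are integers in the same (hypothetical) unit.

    Returns (best total fill, indices of the chosen bins)."""
    # dp[c] = (best total fill achievable with capacity c, chosen indices)
    dp = [(0, [])] * (capacity + 1)
    for i, fill in enumerate(fill_levels):
        # Iterate capacities downward so each bin is used at most once.
        for c in range(capacity, fill - 1, -1):
            candidate = dp[c - fill][0] + fill
            if candidate > dp[c][0]:
                dp[c] = (candidate, dp[c - fill][1] + [i])
    return dp[capacity]
```

For instance, with bins holding 60, 50, and 40 units and a 100-unit robot, the program skips the 50-unit bin and fills the robot exactly with the other two.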

33 pages, 82129 KiB  
Article
Implicit Shape Model Trees: Recognition of 3-D Indoor Scenes and Prediction of Object Poses for Mobile Robots
Robotics 2023, 12(6), 158; https://doi.org/10.3390/robotics12060158 - 23 Nov 2023
Viewed by 1334
Abstract
This article describes an approach for mobile robots to identify scenes in configurations of objects spread across dense environments. This identification is enabled by intertwining the robotic object search and scene recognition on already-detected objects. We propose “Implicit Shape Model (ISM) trees” as a scene model to solve these two tasks together. This article presents novel algorithms for ISM trees to recognize scenes and predict object poses. For us, scenes are sets of objects, some of which are interrelated by 3D spatial relations. Yet many false positives may occur when using single ISMs to recognize scenes. To remedy this, we developed ISM trees, a hierarchical model of multiple interconnected ISMs. In this article, we contribute a recognition algorithm that allows these trees to be used for recognizing scenes. ISM trees should be generated from human demonstrations of object configurations; since a suitable algorithm was unavailable, we created one for generating ISM trees. In previous work, we integrated object search and scene recognition into an active vision approach that we called “Active Scene Recognition”. Until now, an efficient algorithm to make this integration effective through predicted object poses was unavailable. Physical experiments in this article show that the algorithm we contribute overcomes this problem.
(This article belongs to the Special Issue Active Methods in Autonomous Navigation)