Robotics, Volume 11, Issue 4 (August 2022) – 18 articles

Cover Story: This study tackles a swarm robotics task in which robots explore the environment to detect targets. Our previous results confirmed that Lévy flight outperforms the usual random walk as an exploration strategy in an indoor environment. This paper investigates the search performance of swarm crawler robots with Lévy flight on target-detection problems in large environments through a series of real-robot experiments. To the best of our knowledge, we are the first to investigate the search performance of swarm robots with Lévy flight in a large field. The results suggest that crawler robots with Lévy flight can perform target exploration in large environments.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; PDF is the official version. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
22 pages, 5270 KiB  
Article
VLP Landmark and SLAM-Assisted Automatic Map Calibration for Robot Navigation with Semantic Information
by Yiru Wang, Babar Hussain and Chik Patrick Yue
Robotics 2022, 11(4), 84; https://doi.org/10.3390/robotics11040084 - 21 Aug 2022
Cited by 2 | Viewed by 2978
Abstract
With the rapid development of robotics and in-depth research of automatic navigation technology, mobile robots have been applied in a variety of fields. Map construction is one of the core research focuses of mobile robot development. In this paper, we propose an autonomous map calibration method using visible light positioning (VLP) landmarks and Simultaneous Localization and Mapping (SLAM). A layout map of the environment to be perceived is calibrated by a robot tracking at least two landmarks mounted in the venue. At the same time, the robot’s position on the occupancy grid map generated by SLAM is recorded. The two sequences of positions are synchronized by their time stamps and the occupancy grid map is saved as a sensor map. A map transformation method is then performed to align the orientation of the two maps and to calibrate the scale of the layout map to agree with that of the sensor map. After the calibration, the semantic information on the layout map remains and the accuracy is improved. Experiments are performed in the robot operating system (ROS) to verify the proposed map calibration method. We evaluate the performance on two layout maps: one with high accuracy and the other with rough accuracy of the structures and scale. The results show that the navigation accuracy is improved by 24.6 cm on the high-accuracy map and 22.6 cm on the rough-accuracy map. Full article
(This article belongs to the Topic Intelligent Systems and Robotics)
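The map calibration step described in the abstract, aligning a layout map with a SLAM-generated sensor map from two synchronized position sequences, can be illustrated by a standard least-squares similarity alignment (Umeyama-style). This is a generic sketch under assumed names and 2-D coordinates, not the authors' implementation:

```python
import numpy as np

def align_similarity(layout_pts, sensor_pts):
    """Estimate scale s, rotation R and translation t mapping layout
    coordinates into the sensor-map frame via least-squares
    (Umeyama-style) alignment of two synchronized 2-D point sequences."""
    layout_pts = np.asarray(layout_pts, dtype=float)
    sensor_pts = np.asarray(sensor_pts, dtype=float)
    mu_l, mu_s = layout_pts.mean(axis=0), sensor_pts.mean(axis=0)
    L, S = layout_pts - mu_l, sensor_pts - mu_s
    # Cross-covariance between the two centered point sets.
    H = S.T @ L / len(layout_pts)
    U, D, Vt = np.linalg.svd(H)
    # Guard against a reflection solution.
    sign = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    R = U @ np.diag([1.0, sign]) @ Vt
    # Optimal scale: trace of the corrected singular values over the
    # variance of the centered layout points.
    s = (D * np.array([1.0, sign])).sum() * len(layout_pts) / (L ** 2).sum()
    t = mu_s - s * R @ mu_l
    return s, R, t
```

With noiseless corresponding positions, two or more non-coincident points recover the similarity transform exactly; with noisy SLAM poses the same formula gives the least-squares fit.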

15 pages, 4522 KiB  
Article
Improving the Accuracy of a Robot by Using Neural Networks (Neural Compensators and Nonlinear Dynamics)
by Zhengjie Yan, Yury Klochkov and Lin Xi
Robotics 2022, 11(4), 83; https://doi.org/10.3390/robotics11040083 - 19 Aug 2022
Cited by 3 | Viewed by 1689
Abstract
The subject of this paper is a programmable control system for a robotic manipulator. Considering the complex nonlinear dynamics involved in practical applications of systems and robotic arms, the traditional control method is here replaced by the designed Elman and adaptive radial basis function neural networks, thereby improving the system stability and response rate. Related controllers and compensators were developed and trained using MATLAB-related software. The training results of the two neural network controllers for the robot programming trajectories are presented and the dynamic errors of the different types of neural network controllers and two control methods are analyzed. Full article
(This article belongs to the Section Industrial Robots and Automation)

16 pages, 7877 KiB  
Article
Infrastructure-Aided Localization and State Estimation for Autonomous Mobile Robots
by Daniel Flögel, Neel Pratik Bhatt and Ehsan Hashemi
Robotics 2022, 11(4), 82; https://doi.org/10.3390/robotics11040082 - 18 Aug 2022
Cited by 8 | Viewed by 2132
Abstract
A slip-aware localization framework is proposed for mobile robots experiencing wheel slip in dynamic environments. The framework fuses infrastructure-aided visual tracking data (via fisheye lenses) and proprioceptive sensory data from a skid-steer mobile robot to enhance accuracy and reduce variance of the estimated states. The slip-aware localization framework includes two threads: the visual thread, which detects and tracks the robot in the stereo image through computationally efficient 3D point cloud generation using a region of interest; and the ego-motion thread, which uses a slip-aware odometry mechanism to estimate the robot pose utilizing a motion model that accounts for wheel slip. Covariance intersection is used to fuse the pose prediction (using proprioceptive data) and the visual thread, such that the updated estimate remains consistent. As confirmed by experiments on a skid-steer mobile robot, the designed localization framework addresses state estimation challenges for indoor/outdoor autonomous mobile robots which experience high slip, uneven torque distribution at each wheel (by the motion planner), or occlusion when observed by an infrastructure-mounted camera. The proposed system is real-time capable and scalable to multiple robots and multiple environmental cameras. Full article
(This article belongs to the Special Issue Advances in Industrial Robotics and Intelligent Systems)
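The covariance intersection step named in the abstract has a standard general form: the fused inverse covariance is a convex combination of the two inverse covariances, with the weight chosen so the result stays consistent even when the cross-correlation between the estimates is unknown. A minimal NumPy sketch (the function name and the grid search over the weight are illustrative choices, not the authors' code):

```python
import numpy as np

def covariance_intersection(x_a, P_a, x_b, P_b, n_grid=100):
    """Fuse two state estimates whose cross-correlation is unknown.
    The fused inverse covariance is w*inv(P_a) + (1-w)*inv(P_b); the
    weight w is chosen to minimize the trace of the fused covariance,
    which keeps the fused estimate consistent (never over-confident)
    regardless of the actual correlation between the inputs."""
    Pa_inv, Pb_inv = np.linalg.inv(P_a), np.linalg.inv(P_b)
    best = None
    for w in np.linspace(0.0, 1.0, n_grid + 1):
        P = np.linalg.inv(w * Pa_inv + (1.0 - w) * Pb_inv)
        tr = np.trace(P)
        if best is None or tr < best[0]:
            x = P @ (w * Pa_inv @ x_a + (1.0 - w) * Pb_inv @ x_b)
            best = (tr, x, P)
    return best[1], best[2]
```

The grid search keeps the sketch dependency-free; a scalar optimizer over the weight would serve equally well.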

23 pages, 3945 KiB  
Article
Constrained Reinforcement Learning for Vehicle Motion Planning with Topological Reachability Analysis
by Shangding Gu, Guang Chen, Lijun Zhang, Jing Hou, Yingbai Hu and Alois Knoll
Robotics 2022, 11(4), 81; https://doi.org/10.3390/robotics11040081 - 16 Aug 2022
Cited by 9 | Viewed by 2664
Abstract
Rule-based traditional motion planning methods usually perform well with prior knowledge of the macro-scale environments but encounter challenges in unknown and uncertain environments. Deep reinforcement learning (DRL) is a solution that can effectively deal with micro-scale unknown and uncertain environments. Nevertheless, DRL is unstable and lacks interpretability. Therefore, it raises a new challenge: how to combine the effectiveness of the two methods and overcome their drawbacks while guaranteeing stability in uncertain environments. In this study, a multi-constraint and multi-scale motion planning method is proposed for automated driving with the use of constrained reinforcement learning (RL), named RLTT, comprising RL, a topological reachability analysis used for vehicle path space (TPS), and a trajectory lane model (TLM). First, a dynamic model of vehicles is formulated; then, TLM is developed on the basis of the dynamic model, thus constraining the RL action and state space. Second, macro-scale path planning is achieved through TPS, and in the micro-scale range, discrete routing points are achieved via RLTT. Third, the proposed motion planning method is designed by combining sophisticated rules, and a theoretical analysis is provided to guarantee the efficiency of our method. Finally, related experiments are conducted to evaluate the effectiveness of the proposed method; our method reduces the distance cost by 19.9% in the experiments as compared to the traditional method. Experimental results indicate that the proposed method can help mitigate the gap between data-driven and traditional methods, provide better performance for automated driving, and facilitate the use of RL methods in more fields. Full article
(This article belongs to the Section Intelligent Robots and Mechatronics)

15 pages, 2296 KiB  
Article
An Edge-Based Architecture for Offloading Model Predictive Control for UAVs
by Achilleas Santi Seisa, Sumeet Gajanan Satpute, Björn Lindqvist and George Nikolakopoulos
Robotics 2022, 11(4), 80; https://doi.org/10.3390/robotics11040080 - 6 Aug 2022
Cited by 8 | Viewed by 2625
Abstract
Thanks to the development of 5G networks, edge computing has gained popularity in several areas of technology in which high computational power and low time delays are essential. These requirements are indispensable in the field of robotics, especially for real-time autonomous missions in mobile robots. Edge computing will provide the necessary resources in terms of computation and storage, while 5G technologies will provide minimal latency. High computational capacity is crucial in autonomous missions, especially for cases in which we are using computationally demanding high-level algorithms. In the case of Unmanned Aerial Vehicles (UAVs), the onboard processors usually have limited computational capabilities; therefore, it is necessary to offload some of these tasks to the cloud or edge, depending on the time criticality of the application. Especially in the case of UAVs, the requirement to have large payloads to cover the computational needs conflicts with other payload requirements, reducing the overall flying time and hindering autonomous operations from a regulatory perspective. In this article, we propose an edge-based architecture for autonomous UAV missions in which we offload the high-level control task of the UAV’s trajectory to the edge in order to take advantage of the available resources and push the Model Predictive Controller (MPC) to its limits. Additionally, we use Kubernetes to orchestrate our application running on the edge, and we present multiple experimental results that demonstrate the efficacy of the proposed scheme. Full article
(This article belongs to the Topic Wireless Communications and Edge Computing in 6G)

15 pages, 2141 KiB  
Article
Singularity Avoidance for Cart-Mounted Hand-Guided Collaborative Robots: A Variational Approach
by Erica Salvato, Walter Vanzella, Gianfranco Fenu and Felice Andrea Pellegrino
Robotics 2022, 11(4), 79; https://doi.org/10.3390/robotics11040079 - 6 Aug 2022
Cited by 4 | Viewed by 2116
Abstract
Most collaborative robots (cobots) can be taught by hand guiding: essentially, by manually jogging the robot, an operator teaches some configurations to be employed as via points. Based on those via points, Cartesian end-effector trajectories such as straight lines, circular arcs or splines are then constructed. Such methods can, in principle, be employed for cart-mounted cobots (i.e., when the jogging involves one or two linear axes, besides the cobot axes). However, in some applications, the sole imposition of via points in Cartesian space is not sufficient. On the contrary, although the overall system is redundant, (i) the via points must be reached at the taught joint configurations, and (ii) the undesirable singularity (and near-singularity) conditions must be avoided. The naive approach, consisting of setting the cart trajectory beforehand (for instance, by imposing a linear-in-time motion law that crosses the taught cart configurations), satisfies the first need, but does not guarantee the satisfaction of the second. Here, we propose an approach consisting of (i) a novel strategy for decoupling the planning of the cart trajectory and that of the robot joints, and (ii) a novel variational technique for computing the former in a singularity-aware fashion, ensuring the avoidance of a class of workspace singularity and near-singularity configurations. Full article
(This article belongs to the Special Issue Women in Robotics)

19 pages, 7123 KiB  
Article
Parameter-Adaptive Event-Triggered Sliding Mode Control for a Mobile Robot
by Tri Duc Tran, Trong Trung Nguyen, Van Tu Duong, Huy Hung Nguyen and Tan Tien Nguyen
Robotics 2022, 11(4), 78; https://doi.org/10.3390/robotics11040078 - 2 Aug 2022
Cited by 4 | Viewed by 2459
Abstract
Mobile robots have played a vital role in the transportation industries, service robotics, and autonomous vehicles over the past decades. The development of robust tracking controllers has made mobile robots a powerful tool that can replace humans in industrial work. However, most traditional controller updates are time-based and triggered at every predetermined time interval, which requires high communication bandwidth. Therefore, an event-triggered control scheme is essential to release the redundant data transmission. This paper presents a novel parameter-adaptive event-triggered sliding mode controller for a two-wheeled mobile robot. The adaptive control scheme ensures that the mobile robot system can be controlled accurately without knowledge of its physical parameters. Meanwhile, the event-triggered sliding mode approach guarantees the system robustness and reduces resource usage. A simulation in MATLAB and an experiment are carried out to validate the efficiency of the proposed controller. Full article
(This article belongs to the Section Sensors and Control in Robotics)
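The core event-triggered idea, recomputing the control input only when the tracking error has changed significantly rather than at every sampling instant, can be illustrated with a toy loop. This is a generic sketch using an integrator plant and a proportional law, not the paper's adaptive sliding mode controller; all names and gains are illustrative:

```python
def simulate_event_triggered(ref, x0, k=2.0, eps=0.1, dt=0.01, steps=500):
    """Toy 1-D tracking loop for an integrator plant x' = u. The control
    input is recomputed only when the tracking error has drifted more
    than `eps` since the last trigger, instead of at every sample, so
    far fewer control updates need to be transmitted."""
    x, u, e_last = x0, 0.0, None
    triggers = 0
    for _ in range(steps):
        e = ref - x
        # Event-trigger condition: significant change since last update.
        if e_last is None or abs(e - e_last) > eps:
            u = k * e  # recompute the control law (a simple P law here)
            e_last = e
            triggers += 1
        x += u * dt    # hold u constant between triggers
    return x, triggers
```

The state still converges to a neighborhood of the reference (its size set by `eps`), while the number of control updates is a small fraction of the number of samples, which is the bandwidth saving the abstract refers to.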

15 pages, 6571 KiB  
Article
Robotic Complex for Harvesting Apple Crops
by Oleg Krakhmalev, Sergey Gataullin, Eldar Boltachev, Sergey Korchagin, Ivan Blagoveshchensky and Kang Liang
Robotics 2022, 11(4), 77; https://doi.org/10.3390/robotics11040077 - 24 Jul 2022
Cited by 2 | Viewed by 8195
Abstract
The article deals with the concept of building an automated system for the harvesting of apple crops. This system is a robotic complex mounted on a tractor cart, including an industrial robot and a packaging system with a container for fruit collection. The robot is equipped with a vacuum gripper and a vision system. A generator for power supply, a vacuum pump for the gripper and an equipment control system are also installed on the cart. The developed automated system will have a high degree of reliability that meets the requirements of operation in the field. Full article
(This article belongs to the Special Issue Automation and Robots in Agriculture)

15 pages, 1373 KiB  
Article
Swarm Crawler Robots Using Lévy Flight for Targets Exploration in Large Environments
by Yoshiaki Katada, Sho Hasegawa, Kaito Yamashita, Naoki Okazaki and Kazuhiro Ohkura
Robotics 2022, 11(4), 76; https://doi.org/10.3390/robotics11040076 - 22 Jul 2022
Cited by 8 | Viewed by 2689
Abstract
This study tackles the task of swarm robotics, where robots explore the environment to detect targets. When a robot detects a target, the robot must be connected with a base station via intermediate relay robots for wireless communication. Our previous results confirmed that Lévy flight outperformed the usual random walk for exploration strategy in an indoor environment. This paper investigated the search performance of swarm crawler robots with Lévy flight on target detection problems in large environments through a series of real-robot experiments. The results suggest that the swarm crawler robots with Lévy flight succeeded in discovering the target in the indoor environment with a 100% success rate, and were able to find several targets in a given time in the outdoor environment. Thus, we confirmed that target exploration in a large environment is feasible for crawler robots with Lévy flight, although the detection rate varied significantly with the target's position in the outdoor environment. Full article
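As background, a Lévy flight differs from the usual random walk in that step lengths follow a heavy-tailed power law, so occasional very long ballistic moves are interleaved with many short ones. A minimal 2-D sketch (the exponent, minimum step, and function names are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

def levy_step(mu=1.5, step_min=1.0):
    """Sample a heavy-tailed step length from a power-law (Pareto)
    distribution p(l) ~ l**(-mu) for l >= step_min, via the inverse
    transform l = step_min * u**(-1/(mu - 1)) with u uniform in (0, 1]."""
    u = 1.0 - rng.random()  # in (0, 1], avoids division by zero
    return step_min * u ** (-1.0 / (mu - 1.0))

def levy_walk(n_steps=1000, mu=1.5):
    """2-D Lévy walk: uniform random heading plus power-law step length.
    The occasional long 'flights' let a searcher cover a large area
    faster than a fixed-step random walk."""
    pos = np.zeros(2)
    path = [pos.copy()]
    for _ in range(n_steps):
        theta = rng.uniform(0.0, 2.0 * np.pi)
        pos = pos + levy_step(mu) * np.array([np.cos(theta), np.sin(theta)])
        path.append(pos.copy())
    return np.array(path)
```

With 1 < mu <= 3 the step-length variance diverges, which is what produces the superdiffusive coverage that favors Lévy flight over Brownian-like exploration in large fields.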

28 pages, 2033 KiB  
Review
A Survey on Recent Advances in Social Robotics
by Karim Youssef, Sherif Said, Samer Alkork and Taha Beyrouthy
Robotics 2022, 11(4), 75; https://doi.org/10.3390/robotics11040075 - 18 Jul 2022
Cited by 15 | Viewed by 4806
Abstract
Over decades, social robotics has evolved as a concept that presently covers different areas of application, and interacts with different domains in technology, education, medicine and others. Today, it is possible to envision social robots in tasks that were not expected years ago, and that is not only due to the evolution of social robots, but also to the evolution of the vision humans have for them. This survey addresses recent advances in social robotics from different perspectives. Different contexts and areas of application of social robots are addressed, as well as modalities of interaction with humans. Different robotic platforms used in social contexts are shown and discussed. Relationships of social robotics with advances in other technological areas are surveyed, and methods and metrics used for the human evaluation of the interaction with robots are presented. The future of social robotics is also envisioned based on surveyed works and from different points of view. Full article
(This article belongs to the Section Humanoid and Human Robotics)

17 pages, 3121 KiB  
Article
Toward Avoiding Misalignment: Dimensional Synthesis of Task-Oriented Upper-Limb Hybrid Exoskeleton
by Sakshi Gupta, Anupam Agrawal and Ekta Singla
Robotics 2022, 11(4), 74; https://doi.org/10.3390/robotics11040074 - 13 Jul 2022
Cited by 2 | Viewed by 1977
Abstract
One of the primary reasons for wearable exoskeleton rejection is user discomfort caused by misalignment between the coupled system, i.e., the human limb and the exoskeleton. The article focuses primarily on solution strategies for misalignment issues. The purpose of this work is to facilitate rehabilitative exercise-based exoskeletons for patients with neurological and muscular disorders, which can aid a user in following the appropriate natural trajectory with the least amount of misalignment. A double four-bar planar configuration is used for this purpose. The paper proposes a methodology for developing an optimum task-oriented upper-limb hybrid exoskeleton with few active degrees-of-freedom (dof) that enables users to attain desired task space locations (TSLs) while maintaining an acceptable range of kinematic performance. Additionally, the study examines the influence of an extra restriction placed on elbow motion and the compatibility of the connected systems. The findings and discussion indicate the usefulness of the proposed concept for upper-limb rehabilitation. Full article
(This article belongs to the Section Medical Robotics and Service Robotics)

39 pages, 5282 KiB  
Article
On Fast Jerk-Continuous Motion Functions with Higher-Order Kinematic Restrictions for Online Trajectory Generation
by Burkhard Alpers
Robotics 2022, 11(4), 73; https://doi.org/10.3390/robotics11040073 - 12 Jul 2022
Cited by 5 | Viewed by 2721
Abstract
In robotics and automated manufacturing, motion functions for parts of machines need to be designed. Many proposals for the shape of such functions can be found in the literature. Very often, time efficiency is a major criterion for evaluating the suitability for a given task. If there are higher precision requirements, the reduction in vibration also plays a major role. In this case, motion functions should have a continuous jerk function but still be as fast as possible within the limits of kinematic restrictions. The currently available motion designs all include assumptions that facilitate the computation but are unnecessary and lead to slower functions. In this contribution, we drop these assumptions and provide an algorithm for computing a jerk-continuous fifteen-segment profile with arbitrary initial and final velocities where given kinematic restrictions are met. We proceed by going systematically through the design space using the concept of a varying intermediate velocity and identify critical velocities and jerks where one has to switch models. The systematic approach guarantees that all possible situations are covered. We implemented and validated the model using a large number of random configurations in MATLAB, and we show that the algorithm is fast enough for online trajectory generation. Examples illustrate the improvement in time efficiency compared to existing approaches for a wide range of configurations where the maximum velocity is not held over a period of time. We conclude that faster motion functions are possible at the price of an increase in complexity that is nonetheless still manageable. Full article
(This article belongs to the Section Intelligent Robots and Mechatronics)

26 pages, 1250 KiB  
Article
A Fast and Close-to-Optimal Receding Horizon Control for Trajectory Generation in Dynamic Environments
by Khoi Hoang-Dinh, Marion Leibold and Dirk Wollherr
Robotics 2022, 11(4), 72; https://doi.org/10.3390/robotics11040072 - 6 Jul 2022
Cited by 2 | Viewed by 2119
Abstract
This paper presents a new approach for the optimal trajectory planning of nonlinear systems in a dynamic environment. Given the start and end goals with an objective function, the problem is to find an optimal trajectory from start to end that minimizes the objective while taking into account the changes in the environment. One of the main challenges here is that the optimal control sequence needs to be computed in a limited amount of time and needs to be adapted on-the-fly. The control method presented in this work has two stages: the first-order gradient algorithm is used at the beginning to compute an initial guess of the control sequence that satisfies the constraints but is not yet optimal; then, sequential action control is used to optimize only the portion of the control sequence that will be applied on the system in the next iteration. This helps to reduce the computational effort while still being optimal with regard to the objective; thus, the proposed approach is more applicable for online computation as well as dealing with dynamic environments. We also show that under mild conditions, the proposed controller is asymptotically stable. Different simulated results demonstrate the capability of the controller in terms of solving various tracking problems for different systems under the existence of dynamic obstacles. The proposed method is also compared to the related indirect optimal control approach and sequential action control in terms of cost and computation time to evaluate the improvement of the proposed method. Full article
(This article belongs to the Topic Motion Planning and Control for Robotics)

11 pages, 2150 KiB  
Article
Body-Powered and Portable Soft Hydraulic Actuators as Prosthetic Hands
by Sivakumar Kandasamy, Meiying Teo, Narrendar Ravichandran, Andrew McDaid, Krishnan Jayaraman and Kean Aw
Robotics 2022, 11(4), 71; https://doi.org/10.3390/robotics11040071 - 5 Jul 2022
Cited by 5 | Viewed by 2554
Abstract
Soft robotic actuators are highly flexible, compliant, dexterous, and lightweight alternatives that can potentially replace conventional rigid actuators in various human-centric applications. This research aims to develop a soft robotic actuator that leverages body movements to mimic the function of human fingers for gripping and grasping tasks. Unlike the predominantly used chamber-based actuation, this study utilizes actuators made from elastomers embedded with fiber braiding. The Young’s modulus of the elastomer and braiding angles of the fiber highly influenced the bending angle and force generated by these actuators. In this experiment, the bending and force profiles of these actuators were characterized by varying the combinations of elastomeric materials and braiding angles to suit hand manipulation tasks. Additionally, we found that utilizing water, which is relatively more incompressible than air, as the actuation fluid enabled easier actuation of the actuators using body movements. Lastly, we demonstrated a body-powered actuator setup that can provide comfort to patients in terms of portability, standalone capability, and cost effectiveness, potentially allowing them to be used in a wide range of wearable robotic applications. Full article

13 pages, 3152 KiB  
Article
Online Computation of Time-Optimization-Based, Smooth and Path-Consistent Stop Trajectories for Robots
by Rafael A. Rojas, Andrea Giusti and Renato Vidoni
Robotics 2022, 11(4), 70; https://doi.org/10.3390/robotics11040070 - 1 Jul 2022
Cited by 2 | Viewed by 1964
Abstract
Enforcing the cessation of motion is a common action in robotic systems to avoid the damage that the robot can exert on itself, its environment or, in shared environments, people. This procedure raises two main concerns, which are addressed in this paper. On the one hand, the stopping procedure should respect the collision-free path computed by the motion planner. On the other hand, a sudden stop may produce large current peaks and challenge the limits of the motor’s control capabilities, as well as degrade the mechanical performance of the system, i.e., increase wear. To address these concerns, we propose a novel method to enforce a mechanically feasible, smooth and path-consistent stop of the robot based on a time-minimization algorithm. We present a numerical implementation of the method, as well as a numerical study of its complexity and convergence. Finally, an experimental comparison with an off-the-shelf stopping scheme is presented, showing the effectiveness of the proposed method. Full article
(This article belongs to the Topic Motion Planning and Control for Robotics)

20 pages, 5667 KiB  
Article
Application of Deep Learning in the Deployment of an Industrial SCARA Machine for Real-Time Object Detection
by Tibor Péter Kapusi, Timotei István Erdei, Géza Husi and András Hajdu
Robotics 2022, 11(4), 69; https://doi.org/10.3390/robotics11040069 - 30 Jun 2022
Cited by 13 | Viewed by 3584
Abstract
In the spirit of innovation, the development of an intelligent robot system incorporating the basic principles of Industry 4.0 was one of the objectives of this study. With this aim, an experimental application of an industrial robot unit in its own isolated environment was carried out using neural networks. In this paper, we describe one possible application of deep learning in an Industry 4.0 environment for robotic units. The image datasets required for learning were generated using data synthesis. There are significant benefits to the incorporation of this technology, as old machines can be smartened and made more efficient without additional costs. As an area of application, we present the preparation of a robot unit which at the time it was originally produced and commissioned was not capable of using machine learning technology for object-detection purposes. The results for different scenarios are presented and an overview of similar research topics on neural networks is provided. A method for synthetizing datasets of any size is described in detail. Specifically, the working domain of a given robot unit, a possible solution to compatibility issues and the learning of neural networks from 3D CAD models with rendered images will be discussed. Full article
(This article belongs to the Topic New Frontiers in Industry 4.0)
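The data-synthesis idea in the abstract above — generating an arbitrarily large labeled dataset from rendered model views rather than hand-annotated photographs — can be sketched roughly as follows. This is an illustrative outline, not the authors' pipeline: the single grayscale "render", the blank backgrounds, and the function names are hypothetical stand-ins for rendered 3D CAD views.

```python
import numpy as np

def synthesize_sample(obj, bg_size=(64, 64), rng=None):
    """Composite a rendered object crop onto a background at a random
    position, returning the image and its bounding-box label.
    `obj` is an (h, w) grayscale render of the CAD model (hypothetical)."""
    rng = rng if rng is not None else np.random.default_rng()
    H, W = bg_size
    h, w = obj.shape
    y = int(rng.integers(0, H - h + 1))
    x = int(rng.integers(0, W - w + 1))
    img = np.zeros(bg_size, dtype=obj.dtype)
    img[y:y + h, x:x + w] = obj
    # Label in (x_min, y_min, x_max, y_max) pixel coordinates.
    return img, (x, y, x + w, y + h)

def synthesize_dataset(obj, n, **kw):
    """Generate a dataset of arbitrary size from a single render."""
    rng = np.random.default_rng(0)
    return [synthesize_sample(obj, rng=rng, **kw) for _ in range(n)]
```

A real pipeline would vary pose, lighting, and background per render; the key point is that labels come for free because the compositing step knows exactly where it placed the object.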

14 pages, 10239 KiB  
Article
Sensor Fusion with Deep Learning for Autonomous Classification and Management of Aquatic Invasive Plant Species
by Jackson E. Perrin, Shaphan R. Jernigan, Jacob D. Thayer, Andrew W. Howell, James K. Leary and Gregory D. Buckner
Robotics 2022, 11(4), 68; https://doi.org/10.3390/robotics11040068 - 28 Jun 2022
Cited by 4 | Viewed by 2209
Abstract
Recent advances in deep learning, including the development of AlexNet, Residual Network (ResNet), and transfer learning, offer unprecedented classification accuracy in the field of machine vision. A developing application of deep learning is the automated identification and management of aquatic invasive plants. Classification [...] Read more.
Recent advances in deep learning, including the development of AlexNet, Residual Network (ResNet), and transfer learning, offer unprecedented classification accuracy in the field of machine vision. A developing application of deep learning is the automated identification and management of aquatic invasive plants. Classification of submersed aquatic vegetation (SAV) presents a unique challenge, namely, the lack of a single source of sensor data that can produce robust, interpretable images across a variable range of depth, turbidity, and lighting conditions. This paper focuses on the development of a multi-sensor (RGB and hydroacoustic) classification system for SAV that is robust to environmental conditions and combines the strengths of each sensing modality. The detection of invasive Hydrilla verticillata (hydrilla) is the primary goal. Over 5000 aerial RGB and hydroacoustic images were generated from two Florida lakes via an unmanned aerial vehicle and boat-mounted sonar unit, and tagged for neural network training and evaluation. Classes included “HYDR”, containing hydrilla; “NONE”, lacking SAV; and “OTHER”, containing SAV other than hydrilla. Using a transfer learning approach, deep neural networks with the ResNet architecture were individually trained on the RGB and hydroacoustic datasets. Multiple data fusion methodologies were evaluated to ensemble the outputs of these neural networks for optimal classification accuracy. A method incorporating logic and a Monte Carlo dropout approach yielded the best overall classification accuracy (84%), with recall and precision of 84.5% and 77.5%, respectively, for the hydrilla class. The training and ensembling approaches were repeated for a DenseNet model with identical training and testing datasets. The overall classification accuracy was similar between the ResNet and DenseNet models when averaged across all approaches (1.9% higher accuracy for the ResNet vs. the DenseNet). Full article
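A minimal sketch of the kind of late fusion the abstract describes: Monte Carlo dropout is approximated by averaging several stochastic forward passes per modality (yielding a probability estimate plus a variance as an uncertainty proxy), and the two modalities' class probabilities are then combined by a weighted average. The `forward` callable, the fusion weight, and the class order are illustrative assumptions, not the paper's actual models or logic rules.

```python
import numpy as np

CLASSES = ["HYDR", "NONE", "OTHER"]  # class labels from the study

def mc_dropout_predict(forward, x, T=20, rng=None):
    """Average T stochastic forward passes (dropout left active at test
    time) to estimate class probabilities and their variance.
    `forward(x, rng)` is a hypothetical stochastic model call returning
    a probability vector over CLASSES."""
    rng = rng if rng is not None else np.random.default_rng()
    probs = np.stack([forward(x, rng) for _ in range(T)])
    return probs.mean(axis=0), probs.var(axis=0)

def fuse(p_rgb, p_sonar, w_rgb=0.5):
    """Late fusion: weighted average of the two modalities'
    class-probability vectors, renormalized to sum to 1."""
    p = w_rgb * p_rgb + (1.0 - w_rgb) * p_sonar
    return p / p.sum()
```

In the paper, the per-pass variance feeds a logic layer that decides when to trust each modality; here it is simply returned alongside the mean.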

14 pages, 3957 KiB  
Article
Towards Fully Autonomous Negative Obstacle Traversal via Imitation Learning Based Control
by Brian César-Tondreau, Garrett Warnell, Kevin Kochersberger and Nicholas R. Waytowich
Robotics 2022, 11(4), 67; https://doi.org/10.3390/robotics11040067 - 22 Jun 2022
Cited by 4 | Viewed by 2344
Abstract
Current research in experimental robotics has focused on traditional, cost-based navigation methods. These methods ascribe a value of utility for occupying certain locations in the environment. A path planning algorithm then uses this cost function to compute an optimal path relative [...] Read more.
Current research in experimental robotics has focused on traditional, cost-based navigation methods. These methods ascribe a value of utility for occupying certain locations in the environment. A path planning algorithm then uses this cost function to compute an optimal path relative to obstacle positions based on proximity, visibility, and work efficiency. However, tuning this function to induce more complex navigation behaviors in the robot is not straightforward. For example, this cost-based scheme tends to be pessimistic when assigning traversal cost to negative obstacles. It is often simpler to ascribe high traversal costs to costmap cells based on elevation. This forces the planning algorithm to plan around uneven terrain rather than exploring techniques that determine if and how to traverse it safely. In this paper, imitation learning is applied to the task of negative obstacle traversal with Unmanned Ground Vehicles (UGVs). Specifically, this work introduces a novel point cloud-based state representation of the local terrain shape and employs imitation learning to train a reactive motion controller for negative obstacle detection and traversal. This method is compared to a classical motion planner that uses the dynamic window approach (DWA) to assign traversal cost based on the terrain slope local to the robot's current pose. Full article
(This article belongs to the Special Issue Human-Robot Physical Interaction)
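The baseline controller mentioned above, the dynamic window approach, scores candidate velocity commands by heading alignment, obstacle clearance, and forward speed, then executes the best-scoring pair. The sketch below uses illustrative weights and a simplified heading term; the scoring terms and the `clearance_fn` estimate are assumptions for illustration, not the paper's implementation.

```python
import math

def dwa_score(v, w, goal_heading, clearance,
              alpha=0.8, beta=0.1, gamma=0.1):
    """Classical DWA objective: reward heading alignment, obstacle
    clearance, and forward speed (weights are illustrative)."""
    # Crude alignment term: 1 when angular velocity w steers straight
    # at the goal heading, decreasing linearly with the mismatch.
    heading = 1.0 - abs(goal_heading - w) / math.pi
    return alpha * heading + beta * clearance + gamma * v

def best_velocity(candidates, goal_heading, clearance_fn):
    """Pick the (v, w) pair in the dynamic window that maximizes the
    score. `clearance_fn(v, w)` is a hypothetical distance-to-collision
    estimate for the trajectory induced by that command."""
    return max(candidates,
               key=lambda vw: dwa_score(vw[0], vw[1], goal_heading,
                                        clearance_fn(*vw)))
```

The paper's point is that a slope-based clearance term makes this scheme plan around negative obstacles entirely, whereas the learned controller can decide to drive through them.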
