Smart Robotics for Automation

Collection Editor: Dr. Felipe Nascimento Martins
Sensors and Smart Systems Group, Institute of Engineering, Hanze University of Applied Sciences, 9747 AS Groningen, The Netherlands
Interests: robot control; mobile robotics; educational robotics; collaborative robotics

Topical Collection Information

Dear Colleagues,

In recent years, the demand for efficient automation in distribution centers has increased sharply, mainly due to the growth of e-commerce. This trend continues and is spreading to other sectors. The world’s population is aging, so more efficient automation is needed to cope with the relative reduction in the available workforce in the near future. In addition, as more people move to cities and fewer people are interested in working on farms, food production will also need to be automated more efficiently. Moreover, climate change plays an increasingly important role, for example through the decrease in the area available for food production.

To address these important problems, this Topical Collection on “Smart Robotics for Automation” is dedicated to publishing research papers, communications, and review articles proposing solutions to increase the efficiency of automation systems with the application of smart robotics. By smart robotics we mean solutions that apply the latest developments in artificial intelligence, sensor systems (including computer vision), and control to manipulators and mobile robots. Solutions that increase the adaptability and flexibility of current robotic applications are also welcome.

Topics of interest are related to robotic automation. A non-exhaustive list is as follows:

- Process automation with intelligent robotics;

- Intelligent robotic applications for production systems;

- Robotics applied to precision agriculture;

- AI and machine learning systems applied to robotics;

- Robot control;

- Robot manipulation and picking;

- Mobile robot navigation, localization, and mapping;

- Interpretation of sensor data (including vision systems);

- Human–robot collaboration (including cobots);

- Multi-robot systems.

This Topical Collection focuses primarily on automation. Papers focusing on sensors may instead be submitted to our joint Topical Collection in Sensors (ISSN 1424-8220, IF 3.275).

Dr. Felipe Nascimento Martins
Collection Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the collection website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Automation is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Control theory
  • Manufacturing systems
  • Learning systems
  • Intelligent control systems
  • Artificial neural networks
  • Motion control and sensing
  • Process automation and monitoring
  • Sensors and signal processing
  • Robotics and applications

Published Papers (17 papers)

2024


22 pages, 2334 KiB  
Article
Vision-Based Object Manipulation for Activities of Daily Living Assistance Using Assistive Robot
by Md Tanzil Shahria, Jawhar Ghommam, Raouf Fareh and Mohammad Habibur Rahman
Automation 2024, 5(2), 68-89; https://doi.org/10.3390/automation5020006 - 15 Apr 2024
Viewed by 246
Abstract
The increasing prevalence of upper and lower extremity (ULE) functional deficiencies presents a significant challenge, as it restricts individuals’ ability to perform daily tasks independently. Robotic devices are emerging as assistive devices to assist individuals with limited ULE functionalities in activities of daily living (ADLs). While assistive manipulators are available, manual control through traditional methods like joysticks can be cumbersome, particularly for individuals with severe hand impairments and vision limitations. Therefore, autonomous/semi-autonomous control of a robotic assistive device to perform any ADL task is open to research. This study addresses the necessity of fostering independence in ADLs by proposing a creative approach. We present a vision-based control system for a six-degrees-of-freedom (DoF) robotic manipulator designed for semi-autonomous “pick-and-place” tasks, one of the most common activities among ADLs. Our approach involves selecting and training a deep-learning-based object detection model with a dataset of 47 ADL objects, forming the base for a 3D ADL object localization algorithm. The proposed vision-based control system integrates this localization technique to identify and manipulate ADL objects (e.g., apples, oranges, capsicums, and cups) in real time, returning them to specific locations to complete the “pick-and-place” task. Experimental validation involving an xArm6 (six DoF) robot from UFACTORY in diverse settings demonstrates the system’s adaptability and effectiveness, achieving an overall 72.9% success rate in detecting, localizing, and executing ADL tasks. This research contributes to the growing field of autonomous assistive devices, enhancing independence for individuals with functional impairments. Full article
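As a rough illustration of the 3D localization step described in this abstract, the sketch below back-projects the center of a detected bounding box into a 3D camera-frame point using a depth image and pinhole intrinsics. It is not the authors' implementation; the function name, intrinsics, and synthetic data are assumptions.

```python
import numpy as np

def localize_object(bbox, depth_image, fx, fy, cx, cy):
    """Back-project the center of a 2D bounding box into a 3D camera-frame point.

    bbox:        (u_min, v_min, u_max, v_max) pixel coordinates from a detector
    depth_image: depth map in meters, indexed as depth_image[v, u]
    fx, fy, cx, cy: pinhole camera intrinsics
    """
    u = int((bbox[0] + bbox[2]) / 2)          # horizontal pixel of the box center
    v = int((bbox[1] + bbox[3]) / 2)          # vertical pixel of the box center
    z = float(depth_image[v, u])              # depth of the object center in meters
    x = (u - cx) * z / fx                     # back-projection along the image x-axis
    y = (v - cy) * z / fy                     # back-projection along the image y-axis
    return np.array([x, y, z])                # 3D point in the camera frame

# Example with synthetic data: a 640x480 depth image and an assumed detection.
depth = np.full((480, 640), 0.8)              # flat scene 0.8 m from the camera
point = localize_object((300, 200, 340, 260), depth, fx=600, fy=600, cx=320, cy=240)
print(point)  # 3D coordinates to pass to the manipulator's motion planner
```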

22 pages, 5448 KiB  
Article
Neural Network-Based Classifier for Collision Classification and Identification for a 3-DOF Industrial Robot
by Khaled H. Mahmoud, G. T. Abdel-Jaber and Abdel-Nasser Sharkawy
Automation 2024, 5(1), 13-34; https://doi.org/10.3390/automation5010002 - 14 Mar 2024
Viewed by 546
Abstract
In this paper, the aim is to classify torque signals that are received from a 3-DOF manipulator using a pattern recognition neural network (PR-NN). The output signals of the proposed PR-NN classifier model are classified into four indicators. The first predicts that no collisions occur. The other three indicators predict collisions on the three links of the manipulator. The input data to train the PR-NN model are the values of torque exerted by the joints. The output of the model predicts and identifies the link on which the collision occurs. In our previous work, the position data for a 3-DOF robot were used to estimate the external collision torques exerted by the joints when applying collisions on each link, based on a recurrent neural network (RNN). The estimated external torques were used to design the current PR-NN model. In this work, the PR-NN model, while training, could successfully classify 56,592 samples out of 56,619 samples. Thus, the model achieved overall effectiveness (accuracy) in classifying collisions on the robot of 99.95%, which is almost 100%. The sensitivity of the model in detecting collisions on the links “Link 1, Link 2, and Link 3” was 97.9%, 99.7%, and 99.9%, respectively. The overall effectiveness of the trained model is presented and compared with other previous entries from the literature. Full article
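The classifier described above maps joint-torque samples to one of four classes (no collision, or a collision on link 1, 2, or 3). A minimal sketch of that kind of pattern-recognition network follows, using synthetic torque data and scikit-learn's MLPClassifier rather than the authors' dataset or architecture.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix

# Synthetic stand-in for the joint-torque data: one torque value per joint,
# class 0 = no collision, classes 1-3 = collision on link 1, 2, or 3.
rng = np.random.default_rng(0)
n = 4000
y = rng.integers(0, 4, size=n)
X = rng.normal(0.0, 0.2, size=(n, 3))                  # baseline torque noise
for k in (1, 2, 3):
    X[y == k, k - 1] += 1.5                            # a collision adds torque on that link

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

print("accuracy:", clf.score(X_test, y_test))
print(confusion_matrix(y_test, clf.predict(X_test)))   # per-class detection behaviour
```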

2023


19 pages, 2098 KiB  
Article
Optimisation of Product Recovery Options in End-of-Life Product Disassembly by Robots
by Natalia Hartono, F. Javier Ramírez and Duc Truong Pham
Automation 2023, 4(4), 359-377; https://doi.org/10.3390/automation4040021 - 29 Nov 2023
Viewed by 971
Abstract
In a circular economy, strategies for product recovery, such as reuse, recycling, and remanufacturing, play an important role at the end of a product’s life. A sustainability model was developed to solve the problem of sequence-dependent robotic disassembly line balancing. This research aimed to assess the viability of the model, which was optimised using the Multi-Objective Bees Algorithm in a robotic disassembly setting. Two industrial gear pumps were used as case studies. Four objectives (maximising profit, energy savings, emissions reductions and minimising line imbalance) were set. Several product recovery scenarios were developed to find the best recovery plans for each component. An efficient metaheuristic, the Bees Algorithm, was used to find the best solution. The robotic disassembly plans were generated and assigned to robotic workstations simultaneously. Using the proposed sustainability model on end-of-life industrial gear pumps shows the applicability of the model to real-world problems. The Multi-Objective Bees Algorithm was able to find the best scenario for product recovery by assigning each component to recycling, reuse, remanufacturing, or disposal. The performance of the algorithm is consistent, producing a similar performance for all sustainable strategies. This study addresses issues that arise with product recovery options for end-of-life products and provides optimal solutions through case studies. Full article

28 pages, 10914 KiB  
Article
Task Location to Improve Human–Robot Cooperation: A Condition Number-Based Approach
by Abdel-Nasser Sharkawy
Automation 2023, 4(3), 263-290; https://doi.org/10.3390/automation4030016 - 06 Sep 2023
Viewed by 1520
Abstract
This paper proposes and implements an approach to evaluate human–robot cooperation aimed at achieving high performance. Both the human arm and the manipulator are modeled as a closed kinematic chain. The proposed task performance criterion is based on the condition number of this closed kinematic chain. The robot end-effector is guided by the human operator via an admittance controller to complete a straight-line segment motion, which is the desired task. The best location of the selected task is determined by maximizing the minimum of the condition number along the path. The performance of the proposed approach is evaluated using a criterion related to ergonomics. The experiments are executed with several subjects using a KUKA LWR robot to repeat the specified motion to evaluate the introduced approach. A comparison is presented between the current proposed approach and our previously implemented approach where the task performance criterion was based on the manipulability index of the closed kinematic chain. The results reveal that the condition number-based approach improves the human–robot cooperation in terms of the achieved accuracy, stability, and human comfort, but at the expense of task speed and completion time. On the other hand, the manipulability-index-based approach improves the human–robot cooperation in terms of task speed and human comfort, but at the cost of the achieved accuracy. Full article
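To give a concrete sense of a condition-number-based placement criterion, the sketch below scores candidate joint-space paths by the worst conditioning of a Jacobian along the path and keeps the better-conditioned one. The planar two-link Jacobian, link lengths, and candidate paths are stand-ins, not the closed human–robot chain used in the paper.

```python
import numpy as np

def jacobian_2link(q, l1=0.4, l2=0.35):
    """Planar two-link Jacobian, a stand-in for the closed human-robot chain."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def worst_conditioning_along_path(q_path):
    """Largest condition number encountered along a joint-space path (lower is better)."""
    return max(np.linalg.cond(jacobian_2link(q)) for q in q_path)

# Compare two candidate placements of the same straight-line task
# (represented here simply as two different joint-space paths).
path_a = [np.array([0.3 + 0.01 * k, 1.2]) for k in range(50)]
path_b = [np.array([0.9 + 0.01 * k, 0.6]) for k in range(50)]
print("worst conditioning, placement A:", worst_conditioning_along_path(path_a))
print("worst conditioning, placement B:", worst_conditioning_along_path(path_b))
best = min((path_a, path_b), key=worst_conditioning_along_path)  # keep the better one
```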

27 pages, 21777 KiB  
Article
Trajectory Control in Discrete-Time Nonlinear Coupling Dynamics of a Soft Exo-Digit and a Human Finger Using Input–Output Feedback Linearization
by Umme Kawsar Alam, Kassidy Shedd and Mahdi Haghshenas-Jaryani
Automation 2023, 4(2), 164-190; https://doi.org/10.3390/automation4020011 - 31 May 2023
Cited by 2 | Viewed by 1824
Abstract
This paper presents a quasi-static model-based control algorithm for controlling the motion of a soft robotic exo-digit with three independent actuation joints physically interacting with the human finger. A quasi-static analytical model of physical interaction between the soft exo-digit and a human finger model was developed. Then, the model was presented as a nonlinear discrete-time multiple-input multiple-output (MIMO) state-space representation for the control system design. Input–output feedback linearization was utilized and a control input was designed to linearize the input–output, where the input is the actuation pressure of an individual soft actuator, and the output is the pose of the human fingertip. The asymptotic stability of the nonlinear discrete-time system for trajectory tracking control is discussed. A soft robotic exoskeleton digit (exo-digit) and a 3D-printed human-finger model integrated with IMU sensors were used for the experimental test setup. An Arduino-based electro-pneumatic control hardware was developed to control the actuation pressure of the soft exo-digit. The effectiveness of the controller was examined through simulation studies and experimental testing for following different pose trajectories corresponding to the human finger pose during the activities of daily living. The model-based controller was able to follow the desired trajectories with a very low average root-mean-square error of 2.27 mm in the x-direction, 2.75 mm in the y-direction, and 3.90 degrees in the orientation of the human finger distal link about the z-axis. Full article
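The control idea referred to above can be illustrated on a generic discrete-time system that is affine in the input, y[k+1] = f(y[k]) + G(y[k]) u[k]: choosing u to cancel f and invert G makes the output follow a reference with linear error dynamics. The toy model, gains, and trajectory below are assumptions, not the exo-digit model.

```python
import numpy as np

# Toy discrete-time MIMO plant, affine in the input:  y[k+1] = f(y[k]) + G(y[k]) u[k].
def f(y):
    return y + 0.05 * np.sin(y)             # drift term (stand-in for the quasi-static model)

def G(y):
    return np.array([[1.0, 0.2],            # input map (stand-in for the pressure-to-pose map)
                     [0.1, 0.8]])

def linearizing_control(y, y_des_next, y_des, K=np.diag([0.6, 0.6])):
    """Choose u so that y[k+1] = y_des[k+1] + K (y_des[k] - y[k])."""
    v = y_des_next + K @ (y_des - y)        # desired next output with error feedback
    return np.linalg.solve(G(y), v - f(y))  # cancel the nonlinearity through the input map

# Track a short sinusoidal output trajectory.
y = np.zeros(2)
for k in range(50):
    y_des = np.array([np.sin(0.1 * k), 0.5 * np.sin(0.1 * k)])
    y_des_next = np.array([np.sin(0.1 * (k + 1)), 0.5 * np.sin(0.1 * (k + 1))])
    u = linearizing_control(y, y_des_next, y_des)
    y = f(y) + G(y) @ u                     # apply the input to the plant
print("final tracking error:", y - y_des_next)
```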

2022


14 pages, 7358 KiB  
Article
Experimental Examination of Automatic Tether Winding Method Using Kinetic Energy in Tether Space Mobility Device
by Kazuki Nirayama, Shoichiro Takehara, Satoshi Takayama and Yusuke Ito
Automation 2022, 3(3), 364-377; https://doi.org/10.3390/automation3030019 - 19 Jul 2022
Viewed by 2092
Abstract
Tethers (strings and wires) are used in various mechanical systems because they are lightweight and have excellent storability. Examples of such systems include elevators and cranes. In recent years, the use of tethers in special environments, such as outer space, has also been anticipated, and various systems have been proposed. In this study, we propose a mobility system that moves a human by winding a tether attached to a wall. However, this method has a drawback: the attitude of the human can become unstable while the tether is being wound. We developed an attitude control method for the Tether Space Mobility Device during tether winding, focusing on fluctuations in the rotational kinetic energy of the system. The effectiveness of the control method was shown using numerical simulation. In this paper, the proposed control system is installed in an experimental device to validate the numerical simulation model. We then verified the effectiveness of the proposed control method through experiments using an actual system. The experimental results confirm that the angular velocity of the Tether Space Mobility Device converges to 0 deg/s when control is applied. In addition, it was shown that the proposed control method is effective for automatically winding the tether. Full article

17 pages, 2491 KiB  
Article
A Combined Homing Trajectory Optimization Method of the Parafoil System Considering Intricate Constraints
by Weichao He, Jiayan Wen, Jin Tao and Qinglin Sun
Automation 2022, 3(2), 269-285; https://doi.org/10.3390/automation3020014 - 17 May 2022
Viewed by 1929
Abstract
In order to achieve an accurate airdrop in a real environment, the influence of complex disturbances, such as the wind field and the terrain, must be taken into account. To address this problem, a combined trajectory planning strategy for a parafoil system subject to intricate conditions is proposed in this paper. The method divides the parafoil airdrop area into an obstacle area and a landing area; then, considering the terrain surface, a model of the parafoil system in the wind field is built for the obstacle area. The Gauss pseudo-spectral method is used to transform the complex terrain constraint into a series of nonlinear optimal control problems with complex constraints. Finally, the trajectory in the landing area is designed by means of multiphase homing, and the target parameters are solved by an improved marine predator algorithm. The simulation results show that the proposed method has better realizability than a single homing strategy, and the optimization results of the improved marine predator algorithm have higher accuracy. Full article

11 pages, 1662 KiB  
Article
Inspection Application in an Industrial Environment with Collaborative Robots
by Paulo Magalhaes and Nuno Ferreira
Automation 2022, 3(2), 258-268; https://doi.org/10.3390/automation3020013 - 03 Apr 2022
Cited by 3 | Viewed by 3224
Abstract
In this study, we analyze the potential of collaborative robotics for automated quality inspection in industry. The development of a solution integrating an industrial vision system allowed us to evaluate the performance of collaborative robots in a real case. The use of these tools allows quality defects, as well as costs in the manufacturing process, to be reduced. In this system, the image processing methods rely on high-precision depth and surface measurements. The system fully processes a panel, inspecting the state of its surface to detect any potential defect in the panels produced, thereby increasing the quality of production. Full article

19 pages, 5312 KiB  
Article
Reinforcement Learning for Collaborative Robots Pick-and-Place Applications: A Case Study
by Natanael Magno Gomes, Felipe Nascimento Martins, José Lima and Heinrich Wörtche
Automation 2022, 3(1), 223-241; https://doi.org/10.3390/automation3010011 - 21 Mar 2022
Cited by 11 | Viewed by 6278
Abstract
The number of applications in which industrial robots share their working environment with people is increasing. Robots appropriate for such applications are equipped with safety systems according to ISO/TS 15066:2016 and are often referred to as collaborative robots (cobots). Due to the nature of human–robot collaboration, the working environment of cobots is subjected to unforeseeable modifications caused by people. Vision systems are often used to increase the adaptability of cobots, but they usually require knowledge of the objects to be manipulated. The application of machine learning techniques can increase flexibility by enabling the control system of a cobot to continuously learn and adapt to unexpected changes in the working environment. In this paper we address this issue by investigating the use of Reinforcement Learning (RL) to control a cobot to perform pick-and-place tasks. We present the implementation of a control system that can adapt to changes in position and enables a cobot to grasp objects that were not part of the training. Our proposed system uses deep Q-learning to process color and depth images and generates an ϵ-greedy policy to define robot actions. The Q-values are estimated using Convolutional Neural Networks (CNNs) based on pre-trained models for feature extraction. To reduce training time, we implement a simulation environment to first train the RL agent and then apply the resulting system on a real cobot. System performance is compared when using the pre-trained CNN models ResNext, DenseNet, MobileNet, and MNASNet. Simulation and experimental results validate the proposed approach and show that our system reaches a grasping success rate of 89.9% when manipulating a previously unseen object while operating with the pre-trained CNN model MobileNet. Full article
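Two ingredients named in this abstract — a pre-trained CNN backbone used as a frozen feature extractor for Q-value estimation, and ε-greedy action selection — can be sketched as follows in PyTorch. The action discretization, network head, and image handling are assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torchvision.models as models

N_ACTIONS = 16  # assumed discretization of grasp positions/orientations

class QNetwork(nn.Module):
    """Q-value estimator: frozen MobileNet features followed by a small trainable head."""
    def __init__(self, n_actions=N_ACTIONS):
        super().__init__()
        backbone = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
        self.features = backbone.features
        for p in self.features.parameters():
            p.requires_grad = False                    # keep the pre-trained extractor fixed
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(1280, 256), nn.ReLU(),
            nn.Linear(256, n_actions))                 # one Q-value per discrete action

    def forward(self, rgb):
        return self.head(self.features(rgb))

def epsilon_greedy(q_values, epsilon=0.1):
    """Pick a random action with probability epsilon, otherwise the greedy one."""
    if torch.rand(1).item() < epsilon:
        return int(torch.randint(q_values.shape[-1], (1,)).item())
    return int(q_values.argmax(dim=-1).item())

# Single decision step on a dummy color image (depth could be stacked as extra channels).
net = QNetwork()
image = torch.rand(1, 3, 224, 224)
with torch.no_grad():
    action = epsilon_greedy(net(image), epsilon=0.1)
print("selected action index:", action)
```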

16 pages, 8120 KiB  
Article
Predictive Performance of Mobile Vis–NIR Spectroscopy for Mapping Key Fertility Attributes in Tropical Soils through Local Models Using PLS and ANN
by Mateus Tonini Eitelwein, Tiago Rodrigues Tavares, José Paulo Molin, Rodrigo Gonçalves Trevisan, Rafael Vieira de Sousa and José Alexandre Melo Demattê
Automation 2022, 3(1), 116-131; https://doi.org/10.3390/automation3010006 - 03 Feb 2022
Cited by 3 | Viewed by 3305
Abstract
Mapping soil fertility attributes at fine spatial resolution is crucial for site-specific management in precision agriculture. This paper evaluated the performance of mobile measurements using visible and near-infrared spectroscopy (vis–NIR) to predict and map key fertility attributes in tropical soils through local data modeling with partial least squares regression (PLS) and artificial neural networks (ANN). Models were calibrated and tested in a calibration area (18 ha) with high spatial variability of soil attributes and then extrapolated to the entire field (138 ha). The models calibrated with ANN obtained superior performance for all attributes. Although the ANN models obtained satisfactory predictions in the calibration area (ratio of performance to interquartile range (RPIQ) ≥ 1.7) for clay, organic matter (OM), cation exchange capacity (CEC), base saturation (V), and exchangeable (ex-) Ca, this performance was not maintained for some of them when the models were extrapolated to the entire field. In conclusion, robust mappings (RPIQ = 2.49) were obtained for clay and OM, indicating that these attributes can be successfully mapped in tropical soils using mobile vis–NIR spectroscopy and local calibrations using ANN. This study highlights the need for an independent test to assess the reliability and extrapolability of previously calibrated models, even when extrapolating the models to neighboring areas. Full article
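The RPIQ metric used above is the interquartile range of the observed values divided by the RMSE of the predictions. A minimal sketch, with synthetic spectra standing in for the vis–NIR data and a PLS model from scikit-learn:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

def rpiq(y_true, y_pred):
    """Ratio of performance to interquartile range: IQR(observed) / RMSE(prediction)."""
    iqr = np.percentile(y_true, 75) - np.percentile(y_true, 25)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return iqr / rmse

# Synthetic stand-in for vis-NIR spectra (200 bands) and a soil attribute such as clay.
rng = np.random.default_rng(1)
spectra = rng.normal(size=(300, 200))
clay = spectra[:, :10].sum(axis=1) + rng.normal(scale=0.5, size=300)

X_train, X_test, y_train, y_test = train_test_split(spectra, clay, test_size=0.3, random_state=1)
pls = PLSRegression(n_components=8).fit(X_train, y_train)
y_hat = pls.predict(X_test).ravel()
print("RPIQ on the held-out set:", round(rpiq(y_test, y_hat), 2))
```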

2021


14 pages, 18303 KiB  
Communication
Design and Implementation of a Robotic Arm Assistant with Voice Interaction Using Machine Vision
by George Nantzios, Nikolaos Baras and Minas Dasygenis
Automation 2021, 2(4), 238-251; https://doi.org/10.3390/automation2040015 - 31 Oct 2021
Cited by 3 | Viewed by 6199
Abstract
It is evident that the technological growth of the last few decades has driven the development of several application domains. One application domain that has expanded massively in recent years is robotics. The usage and spread of robotic systems in commercial and non-commercial environments have resulted in increased productivity, efficiency, and a higher quality of life. Many researchers have developed systems based on robotics that improve many aspects of people’s lives. Most engineers, however, use high-cost robotic arms, which are usually out of the reach of typical consumers. We fill this gap by presenting a low-cost, high-accuracy project to be used as a robotic assistant for every consumer. Our project aims to further improve people’s quality of life, and more specifically that of people with physical and mobility impairments. The robotic system is based on the Niryo-One robotic arm, equipped with a USB (Universal Serial Bus) HD (High Definition) camera on the end-effector. To achieve high accuracy, we modified the YOLO algorithm by adding novel features and additional computations to be used in the kinematic model. We evaluated the proposed system by conducting experiments with PhD students from our laboratory and demonstrated its effectiveness. The experimental results indicate that the robotic arm can detect and deliver the requested object in a timely manner with 96.66% accuracy. Full article

15 pages, 3496 KiB  
Article
Design of Tendon-Actuated Robotic Glove Integrated with Optical Fiber Force Myography Sensor
by Antonio Ribas Neto, Julio Fajardo, Willian Hideak Arita da Silva, Matheus Kaue Gomes, Maria Claudia Ferrari de Castro, Eric Fujiwara and Eric Rohmer
Automation 2021, 2(3), 187-201; https://doi.org/10.3390/automation2030012 - 03 Sep 2021
Cited by 10 | Viewed by 4695
Abstract
People affected by upper limb disorders caused by neurological diseases suffer from grip weakening, which affects their quality of life. Research on soft wearable robotics and advances in sensor technology have emerged as promising alternatives for the development of assistive and rehabilitative technologies. However, current systems rely on surface electromyography and complex machine learning classifiers to retrieve user intentions. In addition, grasp assistance through electromechanical or fluidic actuators is passive and does not contribute to the rehabilitation of upper-limb muscles. Therefore, this paper presents a robotic glove integrated with a force myography sensor. The glove-like orthosis features tendon-driven actuation through servo motors, working as an assistive device for people with hand disabilities. The detection of user intentions employs an optical fiber force myography sensor, simplifying the operation compared with the usual electromyography approach. Moreover, the proposed system applies functional electrical stimulation to activate the grasp collaboratively with the tendon mechanism, providing motion support and assisting rehabilitation. Full article

12 pages, 9007 KiB  
Article
UAV Thrust Model Identification Using Spectrogram Analysis
by Igor Henrique Beloti Pizetta, Alexandre Santos Brandão and Mário Sarcinelli-Filho
Automation 2021, 2(3), 141-152; https://doi.org/10.3390/automation2030009 - 01 Aug 2021
Cited by 3 | Viewed by 3198
Abstract
This paper deals with a non-contact method to identify the aerodynamic propeller constants of the Parrot AR.Drone quadrotor. The experimental setup consists of a microphone installed in the flight arena to record audio data. In terms of methodology, a spectrogram analysis is adopted to estimate the propeller velocity from the filtered sound signal. It is known that, in a hovering maneuver, when the UAV mass increases, the propellers rotate faster to produce the necessary thrust increment. In this work, the rotorcraft takes off with its factory settings, first with no hull, corresponding to a mass of 413 g, then with a small hull, corresponding to a mass of 444 g, and finally with a bigger hull, corresponding to a mass of 462 g. Subsequently, the propeller velocities are estimated for each of these three cases using spectrograms of the audio recorded by the microphone, corresponding to the sound generated by the four rotors. Finally, the estimated velocities are used to identify the aerodynamic parameters, thus validating the proposal. Full article
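A minimal version of the described pipeline — estimate the dominant frequency from a spectrogram of the audio, convert it to a rotor speed, and identify the thrust constant from the hover balance m·g = 4·k·ω² — is sketched below on synthetic data. The two-blade assumption, sampling rate, and injected tone are illustrative, not the AR.Drone measurements.

```python
import numpy as np
from scipy.signal import spectrogram

# Synthetic 4 s audio clip with a dominant tone at the blade-passing frequency.
fs = 8000
t = np.arange(0, 4.0, 1.0 / fs)
f_blade = 2 * 180.0                       # two blades per propeller at 180 rev/s (assumed)
audio = np.sin(2 * np.pi * f_blade * t) + 0.3 * np.random.default_rng(2).normal(size=t.size)

# Spectrogram -> strongest frequency in each time slice -> average blade-passing frequency.
f, times, Sxx = spectrogram(audio, fs=fs, nperseg=2048)
f_peak = f[Sxx.argmax(axis=0)].mean()
omega = 2 * np.pi * (f_peak / 2)          # rotor speed in rad/s, assuming two blades

# Hover balance m*g = 4*k*omega^2 gives the aerodynamic thrust constant k.
mass = 0.413                              # kg, AR.Drone without hull (from the abstract)
k_thrust = mass * 9.81 / (4 * omega ** 2)
print(f"estimated rotor speed: {omega:.1f} rad/s, thrust constant: {k_thrust:.2e}")
```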

14 pages, 1593 KiB  
Article
Adaptive 3D Visual Servoing of a Scara Robot Manipulator with Unknown Dynamic and Vision System Parameters
by Jorge Antonio Sarapura, Flavio Roberti and Ricardo Carelli
Automation 2021, 2(3), 127-140; https://doi.org/10.3390/automation2030008 - 27 Jul 2021
Cited by 4 | Viewed by 4016
Abstract
In the present work, we develop an adaptive dynamic controller based on monocular vision for the tracking of objects with a three-degrees of freedom (DOF) Scara robot manipulator. The main characteristic of the proposed control scheme is that it considers the robot dynamics, the depth of the moving object, and the mounting of the fixed camera to be unknown. The design of the control algorithm is based on an adaptive kinematic visual servo controller whose objective is the tracking of moving objects even with uncertainties in the parameters of the camera and its mounting. The design also includes a dynamic controller in cascade with the former one whose objective is to compensate the dynamics of the manipulator by generating the final control actions to the robot even with uncertainties in the parameters of its dynamic model. Using Lyapunov’s theory, we analyze the two proposed adaptive controllers for stability properties, and, through simulations, the performance of the complete control scheme is shown. Full article

11 pages, 320 KiB  
Communication
Improving Automatic Warehouse Throughput by Optimizing Task Allocation and Validating the Algorithm in a Developed Simulation Tool
by Nikolaos Baras, Antonios Chatzisavvas, Dimitris Ziouzios and Minas Dasygenis
Automation 2021, 2(3), 116-126; https://doi.org/10.3390/automation2030007 - 20 Jul 2021
Cited by 8 | Viewed by 3800
Abstract
It is evident that over recent years, the usage of robotics in warehouses has been increasing rapidly. The usage of robot vehicles in storage facilities has resulted in increased efficiency and improved productivity levels. The robots, however, are only as efficient as the algorithms that govern them. Many researchers have attempted to improve the efficiency of industrial robots by improving the internal routing of a warehouse, or by finding the best locations for charging power stations. Because of the popularity of the problem, many research works on warehouse routing can be found in the literature. The majority of these algorithms, however, are statically designed and cannot handle multi-robot situations, especially when robots have different characteristics. The algorithm proposed in this paper addresses this issue by utilizing more than one robot simultaneously: it allocates tasks and tailors the navigation path of each robot based on its characteristics, such as its speed, type, and current location within the warehouse, so as to minimize task delivery time. Moreover, the algorithm finds the optimal locations for the placement of power stations. We evaluated the proposed methodology in a synthetic, realistic environment and demonstrated that the algorithm is capable of finding an improved solution within a realistic time frame. Full article
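A greatly simplified sketch of heterogeneous task allocation in the spirit described above: each task goes to the robot with the earliest estimated completion time given its speed and current location (Manhattan distances on a grid; charging-station placement, congestion, and robot type are ignored). The data structures and numbers are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Robot:
    name: str
    speed: float          # grid cells per second
    pos: tuple            # (row, col) current location
    busy_until: float = 0.0

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def allocate(tasks, robots):
    """Greedily give each task to the robot that would finish it earliest."""
    plan = []
    for pickup, dropoff in tasks:
        def finish_time(r):
            travel = (manhattan(r.pos, pickup) + manhattan(pickup, dropoff)) / r.speed
            return r.busy_until + travel
        robot = min(robots, key=finish_time)          # earliest estimated completion
        robot.busy_until = finish_time(robot)
        robot.pos = dropoff
        plan.append((robot.name, pickup, dropoff, round(robot.busy_until, 1)))
    return plan

robots = [Robot("fast", 2.0, (0, 0)), Robot("slow", 1.0, (10, 10))]
tasks = [((2, 3), (8, 8)), ((10, 9), (1, 1)), ((5, 5), (0, 9))]
for entry in allocate(tasks, robots):
    print(entry)
```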

18 pages, 3873 KiB  
Article
Design, Modeling, and Control of a Differential Drive Rimless Wheel That Can Move Straight and Turn
by Sebastian Sanchez and Pranav A. Bhounsule
Automation 2021, 2(3), 98-115; https://doi.org/10.3390/automation2030006 - 19 Jul 2021
Cited by 8 | Viewed by 4429
Abstract
A rimless wheel, or a wheel without a rim, is the simplest example of a legged robot and is an ideal testbed for understanding the mechanics of locomotion. This paper presents the design, modeling, and control of a differential drive rimless wheel robot that achieves straight-line movement and turning. The robot design comprises a central axis with two 10-spoked springy rimless wheels on either side and a central body that houses the electronics, motors, transmission, computers, and batteries. To move straight, both motors are commanded to hold the pitch of the central body constant. To turn while maintaining constant pitch, a differential current is added to and subtracted from the currents of the two motors. In separate tests, the robot achieved a maximum speed of 4.3 m/s (9.66 miles per hour), a lowest total cost of transport (power per unit weight per unit velocity) of 0.13, and a minimum turning radius of 0.5 m. A kinematics-based model for steering and a dynamics-based sagittal (fore–aft) plane model for forward movement are presented. Finally, parameter studies of the factors that influence the speed, torque, power, and energetics of locomotion are performed. A rimless wheel that can move straight and turn can potentially be used to navigate constrained spaces such as homes and offices. Full article
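The total cost of transport quoted above is defined as power per unit weight per unit velocity, i.e. CoT = P / (m·g·v). A quick check with assumed numbers (the robot's mass and power draw are not given in the abstract):

```python
def cost_of_transport(power_w, mass_kg, speed_mps, g=9.81):
    """Dimensionless total cost of transport: power per unit weight per unit velocity."""
    return power_w / (mass_kg * g * speed_mps)

# Illustrative numbers only: a 5 kg robot drawing 19 W at 3 m/s would give
# roughly the reported CoT of 0.13.
print(round(cost_of_transport(19.0, 5.0, 3.0), 2))
```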

14 pages, 892 KiB  
Communication
Modelling Software Architecture for Visual Simultaneous Localization and Mapping
by Bhavyansh Mishra, Robert Griffin and Hakki Erhan Sevil
Automation 2021, 2(2), 48-61; https://doi.org/10.3390/automation2020003 - 02 Apr 2021
Viewed by 4409
Abstract
Visual simultaneous localization and mapping (VSLAM) is an essential technique used in areas such as robotics and augmented reality for pose estimation and 3D mapping. Research on VSLAM using both monocular and stereo cameras has grown significantly over the last two decades. There is, therefore, a need for a comprehensive review of the evolving architecture of such algorithms in the literature. Although VSLAM algorithm pipelines share similar mathematical backbones, their implementations are individualized, and the ad hoc nature of the interfacing between different modules of VSLAM pipelines complicates code reusability and maintenance. This paper presents a software model for the core components of VSLAM implementations and the interfaces that govern data flow between them, while also attempting to preserve the elements that offer performance improvements over the evolution of VSLAM architectures. The framework presented in this paper employs principles from model-driven engineering (MDE), which are used extensively in the development of large and complicated software systems. The presented VSLAM framework will assist researchers in improving the performance of individual VSLAM modules without having to spend time on system integration of those modules into VSLAM pipelines. Full article
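The kind of explicit module interface the paper argues for can be illustrated with minimal abstract interfaces for feature extraction, tracking, and mapping, wired together by a pipeline class. The module names and data types below are assumptions for illustration, not the paper's software model.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Frame:
    timestamp: float
    keypoints: List[Tuple[float, float]]             # 2D feature locations
    descriptors: List[bytes]                          # feature descriptors

@dataclass
class Pose:
    position: Tuple[float, float, float]
    orientation: Tuple[float, float, float, float]    # quaternion

class FeatureExtractor(ABC):
    @abstractmethod
    def extract(self, image) -> Frame: ...

class Tracker(ABC):
    @abstractmethod
    def track(self, frame: Frame) -> Pose: ...

class Mapper(ABC):
    @abstractmethod
    def update(self, frame: Frame, pose: Pose) -> None: ...

class VSLAMPipeline:
    """Wires concrete modules together through the interfaces only."""
    def __init__(self, extractor: FeatureExtractor, tracker: Tracker, mapper: Mapper):
        self.extractor, self.tracker, self.mapper = extractor, tracker, mapper

    def process(self, image) -> Pose:
        frame = self.extractor.extract(image)         # data flows only through the interfaces
        pose = self.tracker.track(frame)
        self.mapper.update(frame, pose)
        return pose
```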
