Robot Intelligence for Grasping and Manipulation

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (30 December 2023) | Viewed by 7307

Special Issue Editor


Guest Editor
Department of Biomedical Engineering, College of Electronics and Information, Kyung Hee University, Yongin 17104, Republic of Korea
Interests: AI deep learning; machine learning; pattern recognition; brain engineering; biomedical imaging/signal analysis; robot intelligence

Special Issue Information

Dear Colleagues,

Robot intelligence is an exciting interdisciplinary field encompassing robotics, machine learning, pattern recognition, and visuomotor/sensorimotor control. The aim of robot intelligence for grasping and manipulating objects is to match the dexterity of human grasping and manipulation. Recently, advances in machine learning methods, particularly deep learning, have accelerated the growth of this new discipline, enabling robots to learn to grasp and manipulate various objects autonomously, much as humans do.

This Special Issue aims to share novel ideas and work from researchers and technical experts in the field of robot intelligence, pertaining to the grasping and manipulation of objects.

This Special Issue is dedicated to high-quality, original research papers in the following overlapping fields:

  • Robot intelligence for grasping and manipulation;
  • Robotics for dexterous grasping and manipulation;
  • Anthropomorphic grasping and manipulation;
  • Short- and long-horizon robotic grasping and manipulation;
  • Artificial intelligence, machine learning, deep learning, deep reinforcement learning in robotics;
  • Robotic machine vision intelligence;
  • Robot visuomotor and sensorimotor tasks.

Prof. Dr. Tae-Seong Kim
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • robotics
  • robot intelligence
  • machine learning
  • deep learning
  • deep reinforcement learning
  • machine vision intelligence
  • visuomotor/sensorimotor tasks
  • grasping and manipulation

Published Papers (4 papers)


Research

23 pages, 1718 KiB  
Article
Classification of Faults Operation of a Robotic Manipulator Using Symbolic Classifier
by Nikola Anđelić, Sandi Baressi Šegota, Matko Glučina and Ivan Lorencin
Appl. Sci. 2023, 13(3), 1962; https://doi.org/10.3390/app13031962 - 02 Feb 2023
Cited by 1 | Viewed by 1336
Abstract
In autonomous manufacturing lines, it is very important to detect the faulty operation of robot manipulators to prevent potential damage. In this paper, the application of a genetic programming algorithm (symbolic classifier) with a random selection of hyperparameter values, trained using a 5-fold cross-validation process, is proposed to determine expressions for fault detection during robotic manipulator operation, using a dataset that was made publicly available by the original researchers. The original dataset was reduced to a binary dataset (fault vs. normal operation); however, due to the class imbalance, random oversampling and SMOTE methods were applied. The quality of the best symbolic expressions (SEs) was assessed using the highest mean values of accuracy (ACC¯), area under the receiver operating characteristic curve (AUC¯), Precision¯, Recall¯, and F1Score¯. The best results were obtained on the SMOTE dataset, with ACC¯, AUC¯, Precision¯, Recall¯, and F1Score¯ equal to 0.99, 0.99, 0.992, 0.9893, and 0.99, respectively. Finally, the best set of mathematical equations obtained using the GPSC algorithm was evaluated on the initial dataset, where the mean values of ACC¯, AUC¯, Precision¯, Recall¯, and F1Score¯ were equal to 0.9978, 0.998, 1.0, 0.997, and 0.998, respectively. The investigation showed that, using the described procedure, symbolically expressed models with high classification performance can be obtained for the purpose of detecting faults in the operation of robotic manipulators. Full article
(This article belongs to the Special Issue Robot Intelligence for Grasping and Manipulation)
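The class-rebalancing step mentioned in the abstract can be illustrated with a minimal SMOTE sketch in NumPy (an illustrative re-implementation of the general technique, not the authors' code or dataset): each synthetic sample is an interpolation between a minority-class example and one of its k nearest minority-class neighbours.

```python
import numpy as np

rng = np.random.default_rng(0)

def smote(X_minority, n_new, k=5):
    """Minimal SMOTE: synthesize n_new points by interpolating sampled
    minority examples toward one of their k nearest minority neighbours."""
    n = len(X_minority)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_minority[:, None, :] - X_minority[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # a point is not its own neighbour
    neighbours = np.argsort(d, axis=1)[:, :k] # k nearest neighbours per point
    base = rng.integers(0, n, size=n_new)     # which minority point to start from
    nb = neighbours[base, rng.integers(0, k, size=n_new)]
    gap = rng.random((n_new, 1))              # interpolation factor in [0, 1)
    return X_minority[base] + gap * (X_minority[nb] - X_minority[base])

# toy minority class: 20 points in 2D
X_min = rng.normal(size=(20, 2))
X_syn = smote(X_min, n_new=80)
```

Since every synthetic point lies on a segment between two minority points, the oversampled set stays inside the minority class's bounding region, which is the property SMOTE relies on to avoid exact duplicates while not inventing outliers.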

20 pages, 3318 KiB  
Article
Detecting and Controlling Slip through Estimation and Control of the Sliding Velocity
by Marco Costanzo, Giuseppe De Maria and Ciro Natale
Appl. Sci. 2023, 13(2), 921; https://doi.org/10.3390/app13020921 - 09 Jan 2023
Cited by 2 | Viewed by 1219
Abstract
Slipping detection and avoidance are key issues in dexterous robotic manipulation. The capability of robots to grasp and manipulate objects of common use can be greatly enhanced by endowing these robots with force/tactile sensors on their fingertips. Object slipping can be caused by both tangential and torsional loads when the grip force is too low. Contact force and moment measurements are required to counteract such loads and avoid slippage by controlling the grip force. In this paper, we use the SUNTouch force/tactile sensor, which provides the robotic control system with reliable measurements of both normal and tangential contact force components together with the torsional moment. By exploiting the limit surface concept and the LuGre friction model, we build a model of the object/fingertip planar sliding. This model is the basis of a nonlinear observer that estimates the sliding velocity and the friction state variable from the measured contact force and torsional moment. The slipping control system uses the estimated friction state to detect the slipping event and the estimated sliding velocity to control the grasp force. The control modality is twofold: the first one is aimed at avoiding object slip, while the second one allows the object to perform a controlled pivoting about the grasping axis. Experiments show that the robot is able to safely manipulate objects that require grasping forces in a large range, from 0.2 N to 10 N. This level of manipulation autonomy is attained by a suitably identified dynamic model that overcomes the limited generalization capability of existing learning-based approaches in the general roto-translational slip control. Full article
(This article belongs to the Special Issue Robot Intelligence for Grasping and Manipulation)
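The LuGre friction model underlying the paper's observer has a standard state-space form; a minimal simulation sketch (with illustrative parameter values, not the paper's identified model) shows the internal bristle state z converging so that, at constant sliding velocity, the friction force settles at the Coulomb-plus-viscous level.

```python
import numpy as np

# LuGre friction model: the internal state z represents average bristle
# deflection; friction combines bristle stiffness, bristle damping, and
# viscous terms. Parameter values below are illustrative only.
sigma0, sigma1, sigma2 = 1e5, np.sqrt(1e5), 0.4  # stiffness, damping, viscous coeff.
Fc, Fs, vs = 1.0, 1.5, 0.001                     # Coulomb level, stiction level, Stribeck velocity

def g(v):
    """Stribeck curve: friction level decays from Fs toward Fc with speed."""
    return (Fc + (Fs - Fc) * np.exp(-(v / vs) ** 2)) / sigma0

def lugre_step(z, v, dt):
    """One explicit Euler step of the bristle state and the friction force."""
    zdot = v - abs(v) / g(v) * z
    z_new = z + dt * zdot
    F = sigma0 * z_new + sigma1 * zdot + sigma2 * v
    return z_new, F

# Constant sliding velocity: z (and hence F) settle to steady state.
z, dt, v = 0.0, 1e-5, 0.01
for _ in range(20000):
    z, F = lugre_step(z, v, dt)
```

At steady state zdot = 0, so z = g(v)·sgn(v) and F ≈ Fc + sigma2·v for speeds well above the Stribeck velocity; the observer in the paper estimates z and the sliding velocity from measured contact forces rather than simulating them.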

10 pages, 927 KiB  
Article
Task-Specific Grasp Planning for Robotic Assembly by Fine-Tuning GQCNNs on Automatically Generated Synthetic Data
by Artúr István Károly and Péter Galambos
Appl. Sci. 2023, 13(1), 525; https://doi.org/10.3390/app13010525 - 30 Dec 2022
Cited by 2 | Viewed by 1635
Abstract
In modern robot applications, there is often a need to manipulate previously unknown objects in an unstructured environment. The field of grasp-planning deals with the task of finding grasps for a given object that can be successfully executed with a robot. The predicted grasps can be evaluated according to certain criteria, such as analytical metrics, similarity to human-provided grasps, or the success rate of physical trials. The quality of a grasp also depends on the task which will be carried out after the grasping is completed. Current task-specific grasp planning approaches mostly use probabilistic methods, which utilize categorical task encoding. We argue that categorical task encoding may not be suitable for complex assembly tasks. This paper proposes a transfer-learning-based approach for task-specific grasp planning for robotic assembly. The proposed method is based on an automated pipeline that quickly and automatically generates a small-scale task-specific synthetic grasp dataset using Graspit! and Blender. This dataset is utilized to fine-tune pre-trained grasp quality convolutional neural networks (GQCNNs). The aim is to train GQCNNs that can predict grasps which do not result in a collision when placing the objects. Consequently, this paper focuses on the geometric feasibility of the predicted grasps and does not consider the dynamic effects. The fine-tuned GQCNNs are evaluated using the Moveit! Task Constructor motion planning framework, which enables the automated inspection of whether the motion planning for a task is feasible given a predicted grasp and, if not, which part of the task is responsible for the failure. Our results suggest that fine-tuning GQCNN models can result in superior grasp-planning performance (0.9 success rate compared to 0.65) in the context of an assembly task. Our method can be used to rapidly attain new task-specific grasp policies for flexible robotic assembly applications. Full article
(This article belongs to the Special Issue Robot Intelligence for Grasping and Manipulation)

13 pages, 2717 KiB  
Article
Dexterous Object Manipulation with an Anthropomorphic Robot Hand via Natural Hand Pose Transformer and Deep Reinforcement Learning
by Patricio Rivera Lopez, Ji-Heon Oh, Jin Gyun Jeong, Hwanseok Jung, Jin Hyuk Lee, Ismael Espinoza Jaramillo, Channabasava Chola, Won Hee Lee and Tae-Seong Kim
Appl. Sci. 2023, 13(1), 379; https://doi.org/10.3390/app13010379 - 28 Dec 2022
Cited by 2 | Viewed by 2385
Abstract
Dexterous object manipulation using anthropomorphic robot hands is of great interest for natural object manipulation across the areas of healthcare, smart homes, and smart factories. Deep reinforcement learning (DRL) is a particularly promising approach to solving dexterous manipulation tasks with five-fingered robot hands. Yet, controlling an anthropomorphic robot hand via DRL to obtain natural, human-like object manipulation with high dexterity remains a challenging task in robotics. Previous studies have utilized predefined human hand poses to control the robot hand's movements for successful object grasping. However, the hand poses derived from these grasping taxonomies cover only a partial range of the poses the robot hand could perform. In this work, we propose a combined approach: a deep transformer network that produces a wider range of natural hand poses to configure the robot hand's movements, and an adaptive DRL scheme that controls the movements of an anthropomorphic robot hand according to these natural hand poses. The transformer network learns and infers natural robot hand poses according to the object affordance. Then, DRL trains a policy using the transformer output to grasp and relocate the object to the designated target location. Our proposed transformer-based DRL (T-DRL) has been tested using various objects, such as an apple, a banana, a light bulb, a camera, a hammer, and a bottle. Additionally, its performance is compared with that of a baseline DRL model trained via natural policy gradient (NPG). The results demonstrate that our T-DRL achieved an average success rate of 90.1% for object manipulation and outperformed NPG by 24.8%. Full article
(This article belongs to the Special Issue Robot Intelligence for Grasping and Manipulation)
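The policy-training loop described in the abstract can be illustrated at toy scale with a REINFORCE update (a generic policy-gradient sketch with hypothetical rewards; the paper itself uses NPG with a transformer-based pose prior): a softmax policy over three discrete "grasp poses" learns to prefer the pose with the highest expected reward.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy setup: pose 2 gives the highest expected reward,
# so policy-gradient training should learn to prefer it.
true_reward = np.array([0.2, 0.5, 0.9])
theta = np.zeros(3)          # policy logits
baseline = 0.0               # running average reward (variance reduction)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for _ in range(3000):
    p = softmax(theta)
    a = rng.choice(3, p=p)                     # sample an action from the policy
    r = true_reward[a] + 0.1 * rng.normal()    # noisy reward signal
    baseline = 0.95 * baseline + 0.05 * r      # track average reward
    grad_logp = -p
    grad_logp[a] += 1.0                        # grad of log pi(a | theta)
    theta += 0.1 * (r - baseline) * grad_logp  # REINFORCE with baseline

best_pose = int(np.argmax(softmax(theta)))
```

Natural policy gradient, the paper's baseline, replaces this plain gradient step with one preconditioned by the inverse Fisher information matrix, which makes updates invariant to the policy parameterization.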
