Vision Based Sensing and Machine Learning for Robotic Grasping and Manipulation

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (30 June 2021) | Viewed by 20807

Special Issue Editor


Prof. Dr. Yahya Zweiri
Guest Editor
Kingston University London, Kingston, U.K.
Interests: dynamic vision-based tactile sensors; grasping and manipulation; artificial intelligence; robotic systems for extreme environments

Special Issue Information

Dear Colleagues,

The sense of touch is essential for humans to perform coordinated and efficient interactions within their environment. Without the sense of touch, it is very difficult to maintain stable grasping or manipulation.

Robot interaction through object grasping and manipulation is one of the key areas with the potential to drive industry toward economic growth. Recent advancements in vision-based sensors and machine learning techniques have enabled impressive progress in many areas of computer vision and robotics. Robotic grasping and manipulation present many challenges that require novel approaches. Grasping is one of the most fundamental skills for manipulating objects, and one of the first skills robots need to master. Instead of relying on human assistance, a robot has to learn to grasp by interacting with dynamic objects in an unknown environment. Therefore, in order to reach their full potential as autonomous agents, robots must be capable of learning versatile manipulation skills for different objects and situations.

The potential applications of robotics have expanded rapidly in recent years, spanning household chores, tasks in hazardous and extreme environments, industrial automation, healthcare systems, medical instrumentation and devices, augmented reality, and human–machine interaction. This has led the research community to pursue interdisciplinary developments involving people from machine learning, computer vision, electronics, mechanics, materials science, measurement methods, systems engineering, robotics, and bioengineering.

Prof. Dr. Yahya Zweiri
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Tactile sensor technologies
  • Tactile sensor modeling
  • Tactile data interpretation
  • Robot tactile systems
  • Force and tactile sensing
  • Vision-based measurement
  • Grasping and manipulation
  • Deformable object manipulation
  • Contact modeling
  • Dexterous manipulation
  • Artificial skin
  • Object feature recognition
  • Object classification
  • Slip detection
  • Physical human–robot interaction
  • Human–machine interfaces
  • Augmented reality
  • Machine learning
  • Supervised, unsupervised, and reinforcement learning
  • Soft robotics modeling and sensing
  • Soft sensors
  • Compliant robot manipulators and grippers

Published Papers (5 papers)


Research

15 pages, 3274 KiB  
Article
Object Manipulation with an Anthropomorphic Robotic Hand via Deep Reinforcement Learning with a Synergy Space of Natural Hand Poses
by Patricio Rivera, Edwin Valarezo Añazco and Tae-Seong Kim
Sensors 2021, 21(16), 5301; https://doi.org/10.3390/s21165301 - 05 Aug 2021
Cited by 9 | Viewed by 2929
Abstract
Anthropomorphic robotic hands are designed to attain dexterous movements and flexibility much like human hands. Achieving human-like object manipulation remains a challenge, especially due to the control complexity of an anthropomorphic robotic hand with a high number of degrees of freedom. In this work, we propose a deep reinforcement learning (DRL) approach to train a policy in a synergy space for generating natural grasping and relocation of variously shaped objects with an anthropomorphic robotic hand. A synergy space is created using a continuous normalizing flow network with point clouds of haptic areas, representing natural hand poses obtained from human grasping demonstrations. The DRL policy accesses the synergistic representation and derives natural hand poses through a deep regressor for object grasping and relocation tasks. Our proposed synergy-based DRL achieves an average success rate of 88.38% on the object manipulation tasks, while the standard DRL without the synergy space achieves only 50.66%. Qualitative results show that the proposed synergy-based DRL policy produces human-like finger placements over the surface of each object, including an apple, banana, flashlight, camera, lightbulb, and hammer.
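To make the synergy-space idea above concrete, here is a minimal, hypothetical sketch (not the authors' implementation): a policy network acts in a low-dimensional synergy space, and a learned decoder, standing in for the paper's deep regressor, maps synergy vectors to full hand joint angles. All dimensions, names, and the PyTorch framing are assumptions.

```python
import torch
import torch.nn as nn

SYNERGY_DIM = 8   # assumed size of the latent synergy space
N_JOINTS = 24     # assumed degrees of freedom of the anthropomorphic hand
OBS_DIM = 64      # assumed observation size (object pose, hand state, ...)

class SynergyDecoder(nn.Module):
    """Maps a low-dimensional synergy vector to full joint angles
    (a stand-in for the paper's deep regressor over natural hand poses)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SYNERGY_DIM, 128), nn.ReLU(),
            nn.Linear(128, N_JOINTS), nn.Tanh(),  # normalized joint angles
        )

    def forward(self, z):
        return self.net(z)

class SynergyPolicy(nn.Module):
    """DRL policy that outputs actions in the synergy space, not joint space."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(OBS_DIM, 256), nn.ReLU(),
            nn.Linear(256, SYNERGY_DIM),
        )
        self.decoder = SynergyDecoder()  # would be pretrained on human demos

    def forward(self, obs):
        z = self.backbone(obs)   # low-dimensional action in synergy space
        return self.decoder(z)   # decoded natural hand pose in joint space

policy = SynergyPolicy()
joint_targets = policy(torch.randn(1, OBS_DIM))
print(joint_targets.shape)  # torch.Size([1, 24])
```

Constraining the action space to a few synergies is what makes exploration tractable for a hand with this many joints; the full-DOF baseline in the abstract explores joint space directly, which is consistent with its lower reported success rate.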

30 pages, 9202 KiB  
Article
Vision-Based Tactile Sensor Mechanism for the Estimation of Contact Position and Force Distribution Using Deep Learning
by Vijay Kakani, Xuenan Cui, Mingjie Ma and Hakil Kim
Sensors 2021, 21(5), 1920; https://doi.org/10.3390/s21051920 - 09 Mar 2021
Cited by 30 | Viewed by 6330
Abstract
This work describes the development of a vision-based tactile sensor system that utilizes the image-based information of the tactile sensor in conjunction with input loads at various motions to train a neural network for the estimation of tactile contact position, area, and force distribution. The study also addresses pragmatic aspects, such as the choice of thickness and materials for the tactile fingertips and surface tendency. The overall vision-based tactile sensor equipment interacts with an actuating motion controller, a force gauge, and a control PC (personal computer) running LabVIEW software. Image acquisition was carried out using a compact stereo camera setup mounted inside the elastic body to observe and measure the amount of deformation caused by the motion and input load. The vision-based tactile sensor test bench was employed to collect the output contact position, angle, and force distribution caused by various randomly chosen input loads for motions in the X, Y, and Z directions and Rx/Ry rotational motions. The retrieved image information, contact position, area, and force distribution from different input loads with specified 3D positions and angles are utilized for deep learning. A VGG-16 convolutional neural network classification model was modified into a regression network, and transfer learning was applied to suit the regression task of estimating contact position and force distribution. Several experiments were carried out using thick and thin tactile sensors with various shapes, such as circles, squares, and hexagons, for better validation of the predicted contact position, contact area, and force distribution.
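As a rough illustration of the transfer-learning step described above (converting a pretrained VGG-16 classifier into a regression network), here is a minimal sketch; the output size, freezing policy, and loss are assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

N_OUTPUTS = 6  # assumed: e.g., contact position (x, y, z) plus force terms

# Load ImageNet-pretrained VGG-16 and freeze the convolutional features.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False

# Replace the 1000-way classification head with a regression head.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, N_OUTPUTS)

criterion = nn.MSELoss()  # regression loss instead of cross-entropy
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)

# One illustrative training step on dummy data.
images = torch.randn(4, 3, 224, 224)   # stand-in for tactile camera images
targets = torch.randn(4, N_OUTPUTS)    # stand-in for position/force labels
loss = criterion(model(images), targets)
loss.backward()
optimizer.step()
```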

16 pages, 4713 KiB  
Article
GadgetArm—Automatic Grasp Generation and Manipulation of 4-DOF Robot Arm for Arbitrary Objects Through Reinforcement Learning
by JoungMin Park, SangYoon Lee, JaeWoon Lee and Jumyung Um
Sensors 2020, 20(21), 6183; https://doi.org/10.3390/s20216183 - 30 Oct 2020
Cited by 2 | Viewed by 3458
Abstract
An automatic robot gripper system, which involves automated object recognition of work-in-process parts on the production line, is a key technology for the upcoming manufacturing facilities of Industry 4.0. An automatic robot gripper enables the manufacturing system to be autonomous, self-recognizing, and adaptable by using artificial intelligence in robot programming to deal with arbitrary shapes of work-in-process parts. This paper explores the chain of key technologies, such as 3D object recognition with CAD and point cloud data, reinforcement learning of a robot arm, and a customized 3D-printed gripper, in order to enhance the intelligence of the robot controller system. It also proposes the integration of 3D point-cloud-based object recognition and game-engine-based reinforcement learning. The prototype of the intelligent robot gripping system developed with the proposed method on a four-degree-of-freedom robot arm is described in this paper.
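One link in that chain, 3D object recognition against CAD-derived point clouds, can be sketched as a simple template-matching routine; the chamfer-distance matcher below is an illustrative assumption, not the paper's algorithm, and the part names are hypothetical.

```python
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric chamfer distance between two (N, 3) point clouds."""
    d_ab = np.min(np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2), axis=1)
    d_ba = np.min(np.linalg.norm(b[:, None, :] - a[None, :, :], axis=2), axis=1)
    return d_ab.mean() + d_ba.mean()

def recognize(scan: np.ndarray, templates: dict) -> str:
    """Return the name of the CAD template closest to the scanned cloud."""
    return min(templates, key=lambda name: chamfer_distance(scan, templates[name]))

# Dummy CAD templates sampled as point clouds (hypothetical parts).
rng = np.random.default_rng(0)
templates = {
    "bracket": rng.normal(size=(200, 3)),
    "flange": rng.normal(size=(200, 3)) + 1.0,
}
scan = templates["flange"] + rng.normal(scale=0.01, size=(200, 3))
print(recognize(scan, templates))  # "flange"
```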

15 pages, 5414 KiB  
Article
Neuromorphic Vision Based Contact-Level Classification in Robotic Grasping Applications
by Xiaoqian Huang, Rajkumar Muthusamy, Eman Hassan, Zhenwei Niu, Lakmal Seneviratne, Dongming Gan and Yahya Zweiri
Sensors 2020, 20(17), 4724; https://doi.org/10.3390/s20174724 - 21 Aug 2020
Cited by 14 | Viewed by 3559
Abstract
In recent years, robotic sorting has been widely used in industry, driven by necessity and opportunity. In this paper, a novel neuromorphic vision-based tactile sensing approach for robotic sorting applications is proposed. This approach has lower latency and lower power consumption than conventional vision-based tactile sensing techniques. Two Machine Learning (ML) methods, namely, Support Vector Machine (SVM) and Dynamic Time Warping-K Nearest Neighbor (DTW-KNN), are developed to classify material hardness, object size, and grasping force. An Event-Based Object Grasping (EBOG) experimental setup is developed to acquire datasets, and 243 experiments are conducted to train the proposed classifiers. Based on the classifiers' predictions, objects can be sorted automatically. If the prediction accuracy is below a certain threshold, the gripper re-adjusts and re-grasps until a proper grasp is reached. The proposed ML methods achieve good prediction accuracy, which shows the effectiveness and applicability of the proposed approach. The experimental results show that the developed SVM model outperforms the DTW-KNN model in terms of accuracy and efficiency for real-time contact-level classification.
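For readers unfamiliar with the two classifiers named above, the following toy sketch contrasts them: an SVM over fixed-length feature vectors versus a 1-nearest-neighbor classifier under dynamic time warping (the k = 1 case of DTW-KNN). The features, labels, and data are illustrative, not the paper's EBOG dataset.

```python
import numpy as np
from sklearn.svm import SVC

def dtw(s: np.ndarray, t: np.ndarray) -> float:
    """Classic dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(s), len(t)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def dtw_1nn_predict(train_X, train_y, query):
    """1-nearest-neighbor under DTW (the k = 1 case of DTW-KNN)."""
    dists = [dtw(query, x) for x in train_X]
    return train_y[int(np.argmin(dists))]

rng = np.random.default_rng(1)
# Toy event-rate sequences for two contact classes (hypothetical).
X = np.vstack([rng.normal(0, 1, (10, 50)), rng.normal(3, 1, (10, 50))])
y = np.array([0] * 10 + [1] * 10)

svm = SVC(kernel="rbf").fit(X, y)  # SVM on raw fixed-length features
print(svm.predict(X[:1])[0], dtw_1nn_predict(X, y, X[0]))  # 0 0
```

The general trade-off is that DTW-KNN tolerates temporal misalignment but pays an O(n·m) distance cost per comparison, while the SVM is cheaper at inference time once trained.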

15 pages, 2520 KiB  
Article
Dynamic-Vision-Based Force Measurements Using Convolutional Recurrent Neural Networks
by Fariborz Baghaei Naeini, Dimitrios Makris, Dongming Gan and Yahya Zweiri
Sensors 2020, 20(16), 4469; https://doi.org/10.3390/s20164469 - 10 Aug 2020
Cited by 24 | Viewed by 3616
Abstract
In this paper, a novel dynamic Vision-Based Measurement method is proposed to measure contact force independently of object size. A neuromorphic camera (Dynamic Vision Sensor) is utilized to observe intensity changes within the silicone membrane where the object is in contact. Three deep Long Short-Term Memory neural networks combined with convolutional layers are developed and implemented to estimate the contact force from intensity changes over time. Thirty-five experiments are conducted using three objects of different sizes to validate the proposed approach. We demonstrate that the networks with memory gates are robust against variable contact sizes, as the networks learn object size in the early stage of a grasp. Moreover, spatial and temporal features enable the sensor to estimate the contact force accurately every 10 ms. The results are promising, with a Mean Squared Error of less than 0.1 N for grasping and holding contact force using the leave-one-out cross-validation method.
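A minimal sketch of the convolutional-recurrent idea described above: convolutional layers encode each event frame, an LSTM integrates the features over time, and a linear head regresses contact force at every step. All layer sizes are assumptions rather than the authors' architecture.

```python
import torch
import torch.nn as nn

class ConvLSTMForce(nn.Module):
    """CNN per-frame encoder + LSTM over time + linear force regressor."""
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, 1)  # scalar contact force in newtons

    def forward(self, frames):
        # frames: (batch, time, 1, H, W), e.g., one event frame per 10 ms
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out)  # a force estimate at every time step

model = ConvLSTMForce()
force = model(torch.randn(2, 20, 1, 64, 64))  # 20 frames = 200 ms of events
print(force.shape)  # torch.Size([2, 20, 1])
```

The memory gates are what let such a network absorb object size early in the grasp and remain robust to it later, as the abstract reports.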
