Learning Control Design and Analysis for Human-Robot Interaction

A special issue of Machines (ISSN 2075-1702). This special issue belongs to the section "Machines Testing and Maintenance".

Deadline for manuscript submissions: closed (31 July 2022) | Viewed by 7397

Special Issue Editors


Prof. Dr. Dan Zhang
Guest Editor
Department of Mechanical Engineering, Lassonde School of Engineering, York University, Toronto, ON M3J 1P3, Canada
Interests: robotics and mechatronics; high-performance parallel robotic machine development; sustainable/green manufacturing systems; micro/nanomanipulation and MEMS devices (sensors); micro mobile robots and control of multi-robot cooperation; intelligent servo control system for the MEMS-based high-performance micro-robot; web-based remote manipulation; rehabilitation robot and rescue robot
Dr. Bin Wei
Guest Editor
Department of Computer Science and Technology, Algoma University, 1520 Queen St E, Sault Ste. Marie, ON P6A 2G4, Canada
Interests: robotics; control theory; dynamical systems; human–robot interaction; AI

Special Issue Information

Dear colleagues,

Adaptive control for robotics has matured considerably over the last decade, but learning control design and its applications are still at an early stage. In human–robot interaction, for example, learning control is a powerful tool for handling the uncertainties that arise as robots interact with humans. This Special Issue aims to bring researchers together to present the latest advances and technologies in the field of learning control, particularly for human–robot interaction applications, and thereby to consolidate and improve the methodologies in this field. Suitable topics include, but are not limited to, the following:

  • Modeling of learning control
  • Motor control and learning in human nervous systems
  • Learning control for human–robot interaction
  • Biomechanics in robotics control
  • Advanced control and coordination for human–robot interaction

Prof. Dr. Dan Zhang
Dr. Bin Wei
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Machines is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • modeling
  • learning control
  • human–robot interaction
  • dynamics

Published Papers (3 papers)


Research

18 pages, 5336 KiB  
Article
Learning by Watching via Keypoint Extraction and Imitation Learning
by Yin-Tung Albert Sun, Hsin-Chang Lin, Po-Yen Wu and Jung-Tang Huang
Machines 2022, 10(11), 1049; https://doi.org/10.3390/machines10111049 - 09 Nov 2022
Cited by 1 | Viewed by 1315
Abstract
In recent years, the use of reinforcement learning and imitation learning to complete robot control tasks has become more popular. Learning from expert demonstrations has always been a goal for researchers. However, the lack of action data has been a significant limitation to learning from human demonstration. We propose an architecture based on a new 3D keypoint tracking model and generative adversarial imitation learning to learn from expert demonstrations. We used 3D keypoint tracking to make up for the lack of action data in simple images and then used image-to-image conversion to convert human hand demonstrations into robot images, which enabled the subsequent generative adversarial imitation learning to proceed smoothly. The combined estimation time of the 3D keypoint tracking model and computation time of the subsequent optimization algorithm was 30 ms. Under correct detection, the coordinate errors of the model's projected 3D keypoints in the real scene were all within 1.8 cm. Tracking the keypoints did not require any sensors on the body, and the operator did not need vision-related knowledge to correct the camera's accuracy. By merely setting up a generic depth camera to track the mapping changes of keypoints after behavior cloning training, the robot could learn human tasks by watching, including picking and placing an object and pouring water. We used PyBullet to build an experimental environment and confirmed our concept with the simplest behavioral cloning imitation to attest to the success of the learning. The effectiveness of the proposed method was demonstrated by satisfactory performance with a sample efficiency of 20 demonstration sets for pick-and-place and 30 sets for pouring water. Full article
(This article belongs to the Special Issue Learning Control Design and Analysis for Human-Robot Interaction)
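The abstract above describes confirming the concept with "the simplest behavioral cloning imitation". As a rough illustration of that idea only (not the authors' actual architecture, which combines 3D keypoint tracking with generative adversarial imitation learning), behavioral cloning reduces to supervised regression from observations to expert actions; the data, dimensions, and linear model below are entirely hypothetical:

```python
import numpy as np

# Hypothetical demonstration data: keypoint observations -> robot actions.
rng = np.random.default_rng(0)
obs = rng.normal(size=(200, 6))       # e.g., 3D positions of two tracked keypoints
true_w = rng.normal(size=(6, 3))      # unknown expert "policy" (linear for the sketch)
actions = obs @ true_w                # expert actions recorded alongside observations

# Behavioral cloning as least-squares regression from observations to actions.
w_hat, *_ = np.linalg.lstsq(obs, actions, rcond=None)
pred = obs @ w_hat                    # the cloned policy's actions on the same inputs
print(np.allclose(pred, actions, atol=1e-6))
```

In practice the regressor would be a neural network and the observations would come from the keypoint tracker, but the supervised-learning structure is the same.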

20 pages, 5271 KiB  
Article
A Robotic Milling System Based on 3D Point Cloud
by Yongzhuo Gao, Haibo Gao, Kunpeng Bai, Mingyang Li and Wei Dong
Machines 2021, 9(12), 355; https://doi.org/10.3390/machines9120355 - 15 Dec 2021
Cited by 8 | Viewed by 2802
Abstract
Industrial robots have advantages in the processing of large-scale components in the aerospace industry. Compared to CNC machine tools, robot arms are cheaper and easier to deploy. However, the poor consistency of incoming materials and the large-scale, lightweight nature of the components make it difficult to automate robotic machining. In addition, the stiffness of the serial-link structure is quite low, so the stability of the milling process is always a concern. In this paper, robotic milling research is carried out for the welding pre-processing of large-scale components. To realize the automatic production of parts with poor incoming consistency, an on-site measurement–planning–processing method is adopted using a laser profiler. On the one hand, the laser profiler's hand–eye calibration method is optimized to improve measurement accuracy; on the other hand, the stiffness of the robot's processing posture is optimized in combination with the angle of the fixture turntable. Finally, experiments show the feasibility of the on-site measurement–planning–processing method and verify the correctness of the stiffness model. Full article
(This article belongs to the Special Issue Learning Control Design and Analysis for Human-Robot Interaction)
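The milling paper above relies on hand–eye calibration to map laser-profiler measurements into the robot's base frame. A minimal sketch of that coordinate-chain idea follows, with made-up identity rotations and offsets standing in for a real calibration result and controller readout:

```python
import numpy as np

def to_homogeneous(R, t):
    # Pack a 3x3 rotation and a translation into a 4x4 homogeneous transform.
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical hand-eye result: sensor pose in the flange frame (10 cm offset).
T_flange_sensor = to_homogeneous(np.eye(3), [0.0, 0.0, 0.10])
# Hypothetical current flange pose in the base frame, as read from the controller.
T_base_flange = to_homogeneous(np.eye(3), [0.5, 0.0, 0.3])

p_sensor = np.array([0.01, 0.02, 0.15, 1.0])   # a measured point, homogeneous coords
p_base = T_base_flange @ T_flange_sensor @ p_sensor
print(p_base[:3])   # the same point expressed in the robot base frame
```

Improving the hand–eye transform `T_flange_sensor` is exactly what tightens this chain: any calibration error propagates directly into every measured point used for path planning.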

17 pages, 2750 KiB  
Article
Adaptive Admittance Control Scheme with Virtual Reality Interaction for Robot-Assisted Lower Limb Strength Training
by Musong Lin, Hongbo Wang, Jianye Niu, Yu Tian, Xincheng Wang, Guowei Liu and Li Sun
Machines 2021, 9(11), 301; https://doi.org/10.3390/machines9110301 - 22 Nov 2021
Cited by 5 | Viewed by 2186
Abstract
Muscle weakness is the primary impairment causing mobility difficulty among stroke survivors. Every year, millions of people are unable to live normally because of mobility difficulty. Strength training is an effective method to improve lower extremity ability but is limited by the shortage of medical staff. Thus, this paper proposes robot-assisted active training (RAAT) through an adaptive admittance control scheme with virtual reality interaction (AACVRI). AACVRI consists of a variable-stiffness admittance controller, an adaptive controller, and virtual reality (VR) interactions. In order to provide physical human–robot interactions corresponding to the virtual scenes, an admittance control law with a variable stiffness term was developed to define the mechanical properties of the end effector. The adaptive controller improves tracking performance by compensating for interaction forces and dynamics model deviations. A virtual training environment including action following, event feedback, and a competition mechanism is used to make the otherwise monotonous training more engaging and keep users in an active state during cycling training. To verify the controller's performance and the feasibility of RAAT, experiments were conducted with eight subjects. Admittance control provided the desired variable interactions along the trajectory. The robot responded to different virtual events by changing admittance parameters according to trigger feedback. Adaptive control kept tracking errors at a low level. Subjects remained in an active state during the strength training: their physiological signals increased significantly, and interaction forces were at a high level. RAAT is a feasible approach for lower limb strength training, and users can independently complete high-quality active strength training under RAAT. Full article
(This article belongs to the Special Issue Learning Control Design and Analysis for Human-Robot Interaction)
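The abstract above centers on an admittance control law with a variable stiffness term. A minimal sketch of a generic second-order admittance law is given below, with hypothetical parameter values; the paper's actual law, gains, and adaptation scheme are not reproduced here:

```python
import numpy as np

def admittance_step(f_ext, x, x_dot, x_ref, m, b, k, dt):
    # One explicit-Euler step of the admittance law
    #   m*x_dd + b*x_dot + k*(x - x_ref) = f_ext
    # Solving for the acceleration and integrating yields the commanded motion.
    x_dd = (f_ext - b * x_dot - k * (x - x_ref)) / m
    x_dot = x_dot + x_dd * dt
    x = x + x_dot * dt
    return x, x_dot

# Hypothetical parameters: unit mass, critical damping, stiffness 100 N/m.
m, b, k, dt = 1.0, 20.0, 100.0, 0.001
x, x_dot = 0.0, 0.0
for _ in range(5000):                  # 5 s of simulated interaction
    x, x_dot = admittance_step(10.0, x, x_dot, 0.0, m, b, k, dt)
# The end effector settles at x_ref + f_ext/k = 0.1 m: it "gives way" to a
# steady 10 N interaction force in proportion to the compliance 1/k.
print(round(x, 3))
```

Varying `k` online, as the variable stiffness term in AACVRI does, changes how compliantly the end effector yields to the user's force, which is what lets the robot's feel track events in the virtual scene.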
