Advances in Perception and Mixed-Reality for Human-Robot Interactive Applications

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (30 September 2022)

Special Issue Editors


Dr. Saverio Farsoni
Guest Editor
Department of Engineering, University of Ferrara, 44122 Ferrara, Italy
Interests: robotics; motion planning; simulation; industrial robotics; mixed reality

Dr. Marcello Bonfè
Guest Editor
Department of Engineering, University of Ferrara, 44122 Ferrara, Italy
Interests: robotics; motion planning; interaction control; surgical robots

Special Issue Information

Dear Colleagues,

Advanced technologies for perception (e.g., 3D/depth cameras) and mixed reality enhance present-day opportunities and future perspectives for human–robot interactive applications, particularly in manufacturing, surgery, aerospace, and mobile robotics. Strict safety requirements arise when robots share a common workspace with humans, or when robots must provide assistance by interacting with humans while accomplishing a common task. Machine vision therefore plays a fundamental role in achieving effective human–robot collaboration, allowing the robot to plan its motions while being aware of human presence and intent, which are often unpredictable. Another important consequence of recent developments in machine-vision technologies and algorithms is the growth of opportunities for augmented and mixed reality, which can be more easily incorporated into human–robot interactive applications to decrease the cognitive workload required of the user or to improve the usability of the system. Challenging scenarios enabled by the integration of mixed reality in robotics include, for example, the design of new interfaces for programming robots and safely interacting with them during the debugging, deployment, and execution of the desired application, as well as the creation of immersive 3D visualizations and simulations of robotic work-cells, to be used as test benches for training human operators or the robot itself.

This Special Issue welcomes contributions focused on the analysis, design, and implementation of innovative methods that integrate machine vision and augmented/mixed reality within human–robot interactive applications.

Dr. Saverio Farsoni
Dr. Marcello Bonfè
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • virtual, augmented, and mixed reality
  • computer and machine vision
  • human–robot interaction
  • robotic assistance
  • simulation
  • digital twin
  • collaborative robotics
  • motion planning
  • skeleton tracking
  • 3D perception

Published Papers (2 papers)

Research

14 pages, 11526 KiB  
Article
Planning Collision-Free Robot Motions in a Human–Robot Shared Workspace via Mixed Reality and Sensor-Fusion Skeleton Tracking
by Saverio Farsoni, Jacopo Rizzi, Giulia Nenna Ufondu and Marcello Bonfè
Electronics 2022, 11(15), 2407; https://doi.org/10.3390/electronics11152407 - 01 Aug 2022
Cited by 4
Abstract
The paper describes a method for planning collision-free motions of an industrial manipulator that shares its workspace with human operators during a human–robot collaborative application with strict safety requirements. The proposed workflow exploits the advantages of mixed reality to insert real entities into a virtual scene, wherein the robot control command is computed and validated by simulating robot motions without risk to the human. The proposed motion planner relies on a sensor-fusion algorithm that improves the 3D perception of humans inside the robot workspace. The algorithm merges the pose estimates of the human bones, reconstructed by a point-cloud-based skeleton tracking algorithm, with orientation data acquired from wearable inertial measurement units (IMUs) assumed to be rigidly attached to the bones. It provides a final reconstruction of the position and orientation of the human bones that can be used to include the human in the virtual simulation of the robotic work-cell. A dynamic motion-planning algorithm can then be processed within this mixed-reality environment, allowing the computation of a collision-free joint velocity command for the real robot.
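
As a concrete illustration of the fusion step described in the abstract (bone positions from a point-cloud skeleton tracker, orientations blended with wearable IMU readings), the following minimal sketch applies a complementary filter on orientations via spherical linear interpolation (SLERP). The function name, the fixed blending weight, and the assumption that the IMU quaternion is already expressed in the camera frame are illustrative choices, not details taken from the paper.

```python
# Hypothetical per-bone fusion: position from the skeleton tracker,
# orientation blended between the camera and IMU estimates.
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def fuse_bone_pose(p_cam, q_cam, q_imu, alpha=0.8):
    """Return a fused (position, quaternion) estimate for one bone.

    p_cam : (3,) bone position from the point-cloud skeleton tracker
    q_cam : (4,) bone orientation quaternion (x, y, z, w) from the tracker
    q_imu : (4,) IMU orientation quaternion, assumed already expressed
            in the camera frame through a prior extrinsic calibration
    alpha : weight of the IMU orientation (0 = camera only, 1 = IMU only)
    """
    both = Rotation.from_quat(np.vstack([q_cam, q_imu]))
    # SLERP between the two orientation estimates acts as a minimal
    # complementary filter on rotations.
    q_fused = Slerp([0.0, 1.0], both)(alpha).as_quat()
    # The IMU carries no position information, so the fused position is
    # simply the camera estimate (a Kalman filter could smooth it in time).
    return p_cam, q_fused
```

Fused bone poses of this kind are what allow the human to be inserted into the virtual work-cell, where candidate joint velocity commands can be checked for collisions before being sent to the real robot.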

Review

12 pages, 350 KiB  
Review
Augmented Reality and Robotic Systems for Assistance in Percutaneous Nephrolithotomy Procedures: Recent Advances and Future Perspectives
by Federica Ferraguti, Saverio Farsoni and Marcello Bonfè
Electronics 2022, 11(19), 2984; https://doi.org/10.3390/electronics11192984 - 20 Sep 2022
Cited by 4
Abstract
Percutaneous nephrolithotomy (PCNL) is the gold standard for the treatment of renal stones larger than 20 mm in diameter. The treatment outcomes of PCNL depend heavily on the accuracy of the puncture step, which must achieve a suitable renal access and reach the stone along a precise and direct path. Performing the puncture to gain renal access is thus the most crucial and challenging step of the procedure, with the steepest learning curve. Many simulation methods and systems have been developed to help trainees reach the competency level required for suitable renal access; simulators include human cadavers, animal tissues, and virtual-reality simulators of human patients. On the other hand, the availability of pre-operative information (e.g., computed tomography or magnetic resonance imaging) and of intra-operative images (e.g., ultrasound images) has enabled solutions involving augmented reality and robotic systems that assist the surgeon during the operation and help a novice surgeon substantially reduce the learning curve. In this context, real-time awareness of the 3D position and orientation of the relevant anatomical structures with respect to a common frame is fundamental. Such information must be accurately estimated by means of dedicated tracking systems that allow the reconstruction of the motion of the probe and of the tool. This review presents a survey of the leading literature on augmented reality and robotic assistance for PCNL, with a focus on existing methods for tracking the motion of the ultrasound probe and of the surgical needle.
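
The common-frame requirement mentioned in the abstract reduces to chaining rigid transforms: the tracking system reports the pose of the ultrasound probe in a world frame, while a fixed calibration relates the image plane to the probe. The sketch below illustrates that composition under these assumptions; the helper names and transform labels are hypothetical, not taken from the reviewed systems.

```python
# Hypothetical frame composition: express a point picked in the
# ultrasound image in the tracking system's world frame.
import numpy as np

def homogeneous(R, t):
    """Build a 4x4 rigid transform from a 3x3 rotation and a (3,) translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def image_point_in_world(T_world_probe, T_probe_image, p_image):
    """Chain world <- probe <- image to localize an image point.

    T_world_probe : 4x4 probe pose reported by the tracking system
    T_probe_image : 4x4 fixed probe-to-image calibration transform
    p_image       : (3,) point on the ultrasound image plane, in metres
    """
    p_h = np.append(p_image, 1.0)  # homogeneous coordinates
    return (T_world_probe @ T_probe_image @ p_h)[:3]
```

The same composition applies to the tracked surgical needle: once its pose is known in the world frame, the needle tip can be compared in real time against the planned percutaneous path.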