Enhanced Perception in Robotics Control and Manipulation

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (31 March 2024) | Viewed by 6857

Special Issue Editors


Guest Editor
Dr. Marco Todescato
Fraunhofer Italia Research S.c.a.r.l., Bolzano, Italy
Interests: machine learning; deep learning; object detection; semantic segmentation; robotic grasping and manipulation; multi-agent systems; estimation; optimization

Guest Editor
Dr. Fernando De la Prieta
BISITE Research Group, University of Salamanca, Edificio Multiusos I+D+I, 37007 Salamanca, Spain
Interests: artificial intelligence; multi-agent systems; cloud computing and distributed systems; technology-enhanced learning

Special Issue Information

Dear Colleagues,

Robotic systems are ubiquitous in production industries.

Nonetheless, the advent of collaborative solutions, together with advanced and learning-based control approaches, is lowering the entry barriers and thus unlocking the use of robots in “soft” as well as unstructured “everyday” environments.

However, effectively controlling a robot's interactions with its surroundings first requires correctly sensing and perceiving the environment.

The Special Issue on “Enhanced Perception in Robotics Control and Manipulation” invites novel and original papers presenting high-quality research specifically focused on perception technologies for robotic systems. Special emphasis will be placed on novel representations and approaches for robotic manipulation and grasping, with a focus on “adversarial” objects, e.g., those characterized by transparent, reflective, and featureless surfaces.

Although learning-based solutions currently play a key and disruptive role in perception technologies, less data-hungry approaches are also encouraged and fall within the scope of this Special Issue. Topics are invited from a wide range of disciplines and perspectives, including but not restricted to the following:

  • Advances in object detection;
  • Advances in semantic segmentation;
  • Object representation;
  • Deep learning solutions to machine vision;
  • Scene understanding;
  • Learning embeddings;
  • Machine vision for robot grasping and manipulation.

Dr. Marco Todescato
Dr. Fernando De la Prieta
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • robotic perception
  • object detection
  • semantic segmentation
  • deep vision
  • learning embeddings
  • robotic manipulation

Published Papers (1 paper)


Review
29 pages, 622 KiB
Deep Learning for Visual SLAM: The State-of-the-Art and Future Trends
by Margarita N. Favorskaya
Electronics 2023, 12(9), 2006; https://doi.org/10.3390/electronics12092006 - 26 Apr 2023
Cited by 4 | Viewed by 5903
Abstract
Visual Simultaneous Localization and Mapping (VSLAM) has been a hot topic of research since the 1990s, first based on traditional computer vision and recognition techniques and later on deep learning models. Although the implementation of VSLAM methods is far from perfect and complete, recent research in deep learning has yielded promising results for applications such as autonomous driving and navigation, service robots, virtual and augmented reality, and pose estimation. The pipeline of traditional VSLAM methods based on classical image processing algorithms consists of six main steps, including initialization (data acquisition), feature extraction, feature matching, pose estimation, map construction, and loop closure. Since 2017, deep learning has changed this approach from individual steps to implementation as a whole. Currently, three ways are developing with varying degrees of integration of deep learning into traditional VSLAM systems: (1) adding auxiliary modules based on deep learning, (2) replacing the original modules of traditional VSLAM with deep learning modules, and (3) replacing the traditional VSLAM system with end-to-end deep neural networks. The first way is the most elaborate and includes multiple algorithms. The other two are in the early stages of development due to complex requirements and criteria. The available datasets with multi-modal data are also of interest. The discussed challenges, advantages, and disadvantages underlie future VSLAM trends, guiding subsequent directions of research.
(This article belongs to the Special Issue Enhanced Perception in Robotics Control and Manipulation)
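
The abstract above outlines the six classical stages of a traditional VSLAM system. As an editorial illustration only, the following is a minimal Python sketch of that six-step pipeline built on OpenCV; the class and method names are hypothetical placeholders and do not come from the reviewed article, and a real system would add triangulated landmarks, keyframe selection, and drift correction.

```python
# Minimal, illustrative sketch of the six-step traditional VSLAM pipeline
# summarized in the abstract. All names (VSLAMPipeline, extract_features, ...)
# are hypothetical placeholders, not APIs from the reviewed paper.
import cv2
import numpy as np


class VSLAMPipeline:
    def __init__(self):
        # 1. Initialization (data acquisition): set up detector, matcher, empty map.
        self.detector = cv2.ORB_create(nfeatures=1000)
        self.matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        self.keyframes = []            # stored (keypoints, descriptors) per keyframe
        self.trajectory = [np.eye(4)]  # camera poses as 4x4 homogeneous matrices

    def extract_features(self, frame):
        # 2. Feature extraction: detect keypoints and compute descriptors.
        return self.detector.detectAndCompute(frame, None)

    def match_features(self, desc_prev, desc_curr):
        # 3. Feature matching: associate descriptors between consecutive frames.
        return sorted(self.matcher.match(desc_prev, desc_curr),
                      key=lambda m: m.distance)

    def estimate_pose(self, kp_prev, kp_curr, matches, K):
        # 4. Pose estimation: recover relative camera motion from the essential matrix.
        pts_prev = np.float32([kp_prev[m.queryIdx].pt for m in matches])
        pts_curr = np.float32([kp_curr[m.trainIdx].pt for m in matches])
        E, _ = cv2.findEssentialMat(pts_curr, pts_prev, K, method=cv2.RANSAC)
        _, R, t, _ = cv2.recoverPose(E, pts_curr, pts_prev, K)
        T = np.eye(4)
        T[:3, :3], T[:3, 3] = R, t.ravel()
        return T

    def update_map(self, keypoints, descriptors, pose):
        # 5. Map construction: a full system would triangulate and store landmarks;
        # this sketch only keeps keyframe data and chains the poses.
        self.keyframes.append((keypoints, descriptors))
        self.trajectory.append(self.trajectory[-1] @ pose)

    def loop_closure(self):
        # 6. Loop closure: a real system would detect revisited places (e.g., with a
        # bag-of-words model) and correct accumulated drift; omitted in this sketch.
        pass
```

The three integration strategies discussed in the review would modify this skeleton to different degrees: strategy (1) adds learned modules alongside these steps, strategy (2) swaps individual steps (e.g., learned feature extraction) for deep networks, and strategy (3) replaces the whole pipeline with an end-to-end model.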
