Autonomous Mobile Robotics

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Mechanical Engineering".

Deadline for manuscript submissions: closed (31 December 2019) | Viewed by 3484

Special Issue Editors


Guest Editor
Department of Automatics and Robotics, Centrale Nantes Joint Research Laboratory LS2N CNRS 6004, Centrale Nantes, Nantes, France
Interests: machine learning; multi-sensor fusion; vision for robotics; autonomous mobile robotics

Guest Editor
SATIE Laboratory CNRS Joint Research Unit, UMR 8029, Paris-Saclay University, 91190 Gif-sur-Yvette, France
Interests: multi-modal perception; multi-sensor data fusion; computer vision; intelligent transportation systems; mobile robotics

Special Issue Information

Dear Colleagues,

Autonomous, self-navigating robots have been rising in popularity amid the push toward a more technologically assisted future. Their applications are vast and span many fields, as evidenced by recent developments such as smart mobility for intelligent transport systems, advanced manufacturing technologies for the industry of the future, mini-UAVs for missions such as monitoring large infrastructures and search-and-rescue operations, and applications in agriculture and ocean engineering. These autonomous robots, often heterogeneous in shape, energy autonomy, and computing capability, increasingly operate in open, complex, dynamic environments and interact with humans. In addition, the convergence of massive databases, powerful embedded computing, and new paradigms of artificial intelligence has given robotic systems greater autonomy and reasoning capabilities.

This Special Issue aims to collect the most recent studies and applications on deep learning techniques and mobile robotics. Topics include, but are not limited to:

  • Advanced machine learning techniques for SLAM;
  • Deep fusion architectures for robotic perception sensors;
  • Emergent sensing capabilities for mobile robotics;
  • Deep reinforcement learning for mobile robots' navigation;
  • Dynamic scene analysis and multi-object detection and tracking;
  • Real-time inference and hardware implementation;
  • Data-driven navigation and control;
  • Applications of autonomous mobile robots on the ground, in the air, and on water.

Prof. Vincent Frémont
Dr. Sergio Alberto Rodriguez Florez
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Advanced machine learning techniques for SLAM;
  • Deep fusion architectures for robotic perception sensors;
  • Emergent sensing capabilities for mobile robotics;
  • Deep reinforcement learning for mobile robots' navigation;
  • Dynamic scene analysis and multi-object detection and tracking;
  • Real-time inference and hardware implementation;
  • Data-driven navigation and control;
  • Applications of autonomous mobile robots on the ground, in the air, and on water.

Published Papers (1 paper)


Research

16 pages, 11801 KiB  
Article
Regressed Terrain Traversability Cost for Autonomous Navigation Based on Image Textures
by Mohammed Abdessamad Bekhti and Yuichi Kobayashi
Appl. Sci. 2020, 10(4), 1195; https://doi.org/10.3390/app10041195 - 11 Feb 2020
Cited by 14 | Viewed by 3074
Abstract
The exploration of remote, unknown, rough environments by autonomous robots strongly depends on the ability of the on-board system to build an accurate predictor of terrain traversability. Terrain traversability prediction can be made more cost efficient by using texture information of 2D images obtained by a monocular camera. In cases where the robot is required to operate on a variety of terrains, it is important to consider that terrains sometimes contain spiky objects that appear as non-uniform in the texture of terrain images. This paper presents an approach to estimate the terrain traversability cost based on terrain non-uniformity detection (TNUD). Terrain images undergo a multiscale analysis to determine whether a terrain is uniform or non-uniform. Terrains are represented using a texture and a motion feature computed from terrain images and acceleration signal, respectively. Both features are then combined to learn independent Gaussian Process (GP) predictors, and consequently, predict vibrations using only image texture features. The proposed approach outperforms conventional methods relying only on image features without utilizing TNUD.
(This article belongs to the Special Issue Autonomous Mobile Robotics)
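The regression step described in the abstract — learning a Gaussian Process that maps image texture features to measured vibration, so that traversability cost can later be predicted from camera images alone — can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the 4-D toy "texture features", the synthetic vibration targets, and the squared-exponential kernel with a fixed length scale are all assumptions made for the example.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.5):
    """Squared-exponential (RBF) kernel between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

rng = np.random.default_rng(0)

# Toy training set: a 4-D texture descriptor per terrain patch (stand-in for
# e.g. filter-bank responses), paired with the RMS acceleration recorded
# while the robot actually drove over that patch.
X = rng.uniform(0.0, 1.0, size=(60, 4))
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] ** 2 + 0.05 * rng.standard_normal(60)

# Exact GP regression: K + noise*I models smooth structure plus sensor noise.
noise = 1e-2
K = rbf_kernel(X, X) + noise * np.eye(len(X))
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

# At deployment only the camera is needed: predict vibration (a proxy for
# traversability cost) with uncertainty for an unseen terrain patch.
X_new = rng.uniform(0.0, 1.0, size=(1, 4))
K_s = rbf_kernel(X_new, X)
mean = K_s @ alpha
v = np.linalg.solve(L, K_s.T)
var = rbf_kernel(X_new, X_new).diagonal() + noise - (v ** 2).sum(axis=0)
std = np.sqrt(np.maximum(var, 0.0))
print(f"predicted vibration cost: {mean[0]:.3f} +/- {std[0]:.3f}")
```

The predictive standard deviation is a natural by-product of the GP and could support risk-aware navigation: patches whose predicted cost is uncertain can be avoided or probed at lower speed.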
