Novel Sensors and Algorithms for Outdoor Mobile Robot

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: 20 June 2024

Special Issue Editors


Dr. Levente Tamás
Guest Editor
Automation Department, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
Interests: pose estimation; mapping; 3D recognition; sensor fusion; active perception

Dr. Andras Majdik
Guest Editor
Machine Perception Research Laboratory (MPLab), Institute for Computer Science and Control (SZTAKI), Eötvös Loránd Research Network (ELKH), Kende u. 13-17, H-1111 Budapest, Hungary
Interests: spatial AI; semantic scene understanding and its applications to mobile robots

Special Issue Information

Dear Colleagues,

Mobile robot navigation depends greatly on two components: the sensors used for environment perception and the algorithms applied for localization and mapping in the navigation feedback loop. Both components have recently made significant progress, and both require advanced machine learning techniques in order to be implemented efficiently on embedded robot hardware.

Recent advances in deep learning have also had a great impact on mobile robot navigation, especially on 2D–3D perception pipelines, including their segmentation, recognition and pose estimation stages, while for path planning, deep reinforcement learning has also proved feasible with limited hardware resources.

Thus, the use of machine learning algorithms seems to be a promising direction for the future of embedded mobile robot navigation software stacks. Combined with novel sensing techniques, this yields efficient real-time robot platforms for both the industrial and entertainment markets.

This Special Issue focuses on novel sensors and algorithms based on advanced machine learning techniques for outdoor mobile robot navigation. The aim is to highlight the current trend in this specific field of robotics: machine learning under constrained processing power. Thus, this Special Issue welcomes contributions in the field of mobile robotics with a focus on novel sensing techniques used in real-life outdoor navigation scenarios.

The topics of interest for this special issue include, but are not limited to:

  • Robot sensing and perception;
  • SLAM;
  • Mobile robot exploration;
  • Machine learning for mobile robots;
  • Edge computing.

Dr. Levente Tamás
Dr. Andras Majdik
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (10 papers)

Research

20 pages, 7440 KiB  
Article
Hardware Schemes for Smarter Indoor Robotics to Prevent the Backing Crash Framework Using Field Programmable Gate Array-Based Multi-Robots
by Mudasar Basha, Munuswamy Siva Kumar, Mangali Chinna Chinnaiah, Siew-Kei Lam, Thambipillai Srikanthan, Janardhan Narambhatla, Hari Krishna Dodde and Sanjay Dubey
Sensors 2024, 24(6), 1724; https://doi.org/10.3390/s24061724 - 07 Mar 2024
Abstract
The use of smart indoor robotics services is gradually increasing in real-time scenarios. This paper presents a versatile approach to multi-robot backing crash prevention in indoor environments, using hardware schemes to achieve greater competence. Here, sensor fusion was initially used to analyze the state of multi-robots and their orientation within a static or dynamic scenario. The proposed novel hardware scheme-based framework integrates both static and dynamic scenarios for the execution of backing crash prevention. A round-robin (RR) scheduling algorithm was composed for the static scenario. Dynamic backing crash prevention was deployed by embedding a first come, first served (FCFS) scheduling algorithm. The behavioral control mechanism of the distributed multi-robots was integrated with FCFS and adaptive cruise control (ACC) scheduling algorithms. The integration of multiple algorithms is a challenging task for smarter indoor robotics, and the Xilinx-based partial reconfiguration method was deployed to avoid computational issues with multiple algorithms during the run-time. These methods were coded with Verilog HDL and validated using an FPGA (Zynq)-based multi-robot system.
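
The abstract above describes round-robin scheduling for the static scenario and first come, first served (FCFS) arbitration for the dynamic one. As a rough, hypothetical illustration of the FCFS idea only (not the authors' Verilog/FPGA implementation), a minimal Python sketch of an arbiter that grants backing permission to one robot at a time might look as follows; all names are assumptions.

```python
# Hypothetical sketch of FCFS arbitration for backing maneuvers: robots request
# permission to back up and an arbiter grants it one robot at a time, in the
# order the requests arrived. NOT the authors' FPGA scheme, only an illustration.
from collections import deque

class FCFSBackingArbiter:
    """Grants backing permission in first-come, first-served order."""
    def __init__(self):
        self._queue = deque()
        self._active = None

    def request(self, robot_id):
        if robot_id != self._active and robot_id not in self._queue:
            self._queue.append(robot_id)

    def grant_next(self):
        """Called when the backing zone is free; returns the robot allowed to back up."""
        self._active = self._queue.popleft() if self._queue else None
        return self._active

    def release(self, robot_id):
        if robot_id == self._active:
            self._active = None

arbiter = FCFSBackingArbiter()
for rid in ("robot_A", "robot_B", "robot_C"):  # requests arrive in this order
    arbiter.request(rid)
assert arbiter.grant_next() == "robot_A"       # earliest request is served first
```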

19 pages, 11791 KiB  
Article
Robot Grasp Planning: A Learning from Demonstration-Based Approach
by Kaimeng Wang, Yongxiang Fan and Ichiro Sakuma
Sensors 2024, 24(2), 618; https://doi.org/10.3390/s24020618 - 18 Jan 2024
Abstract
Robot grasping constitutes an essential capability in fulfilling the complexities of advanced industrial operations. This field has been extensively investigated to address a range of practical applications. However, the generation of a stable grasp remains challenging, principally due to the constraints imposed by object geometries and the diverse objectives of the tasks. In this work, we propose a novel learning from demonstration-based grasp-planning framework. This framework is designed to extract crucial human grasp skills, namely the contact region and approach direction, from a single demonstration. Then, it formulates an optimization problem that integrates the extracted skills to generate a stable grasp. Distinct from conventional methods that rely on learning implicit synergies through human demonstration or on mapping the dissimilar kinematics between human hands and robot grippers, our approach focuses on learning the intuitive human intent that involves the potential contact regions and the grasping approach direction. Furthermore, our optimization formulation is capable of identifying the optimal grasp by minimizing the surface fitting error between the demonstrated contact regions on the object and the gripper finger surface and imposing a penalty for any misalignment between the demonstrated and the gripper’s approach directions. A series of experiments is conducted to verify the effectiveness of the proposed algorithm through both simulations and real-world scenarios.
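
The optimization described above combines a surface-fitting term with an approach-direction penalty. The following Python/NumPy sketch shows one plausible form of such a cost; the distance metric, the weight lam and all symbols are illustrative assumptions, not the authors' formulation.

```python
# A minimal sketch of a grasp cost with a surface-fitting term between the
# demonstrated contact points and the gripper finger surface, plus a penalty
# on approach-direction misalignment. Illustrative assumption, not the paper's method.
import numpy as np

def grasp_cost(contact_pts, finger_surface_pts, demo_approach, grasp_approach, lam=0.5):
    # surface-fitting error: mean squared nearest-neighbour distance between
    # the demonstrated contact region and the gripper finger surface
    d = np.linalg.norm(contact_pts[:, None, :] - finger_surface_pts[None, :, :], axis=-1)
    fit_err = np.mean(np.min(d, axis=1) ** 2)
    # misalignment penalty: 1 - cosine similarity between approach directions
    cos = np.dot(demo_approach, grasp_approach) / (
        np.linalg.norm(demo_approach) * np.linalg.norm(grasp_approach))
    return fit_err + lam * (1.0 - cos)

# toy usage: a perfectly aligned approach direction incurs no alignment penalty
pts = np.random.rand(50, 3)
print(grasp_cost(pts, pts + 0.01, np.array([0, 0, -1.0]), np.array([0, 0, -1.0])))
```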

18 pages, 6304 KiB  
Article
Hardware-Efficient Scheme for Trailer Robot Parking by Truck Robot in an Indoor Environment with Rendezvous
by Divya Vani G, Srinivasa Rao Karumuri, Chinnaiah M C, Siew-Kei Lam, Janardhan Narambhatlu and Sanjay Dubey
Sensors 2023, 23(11), 5097; https://doi.org/10.3390/s23115097 - 26 May 2023
Cited by 3
Abstract
Autonomous grounded vehicle-based social assistance/service robot parking in an indoor environment is an exciting challenge in urban cities. There are few efficient methods for parking multi-robot/agent teams in an unknown indoor environment. The primary objective of autonomous multi-robot/agent teams is to establish synchronization between them and to stay in behavioral control when static and when in motion. In this regard, the proposed hardware-efficient algorithm addresses the parking of a trailer (follower) robot in indoor environments by a truck (leader) robot with a rendezvous approach. In the process of parking, initial rendezvous behavioral control between the truck and trailer robots is established. Next, the parking space in the environment is estimated by the truck robot, and the trailer robot parks under the supervision of the truck robot. The proposed behavioral control mechanisms were executed between heterogeneous-type computational-based robots. Optimized sensors were used for traversing and the execution of the parking methods. The truck robot leads, and the trailer robot mimics the actions in the execution of path planning and parking. The truck robot was integrated with FPGA (Xilinx Zynq XC7Z020-CLG484-1), and the trailer was integrated with Arduino UNO computing devices; this heterogeneous modeling is adequate in the execution of trailer parking by a truck. The hardware schemes were developed using Verilog HDL for the FPGA (truck)-based robot and Python for the Arduino (trailer)-based robot.
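
As a toy illustration of the leader-follower behaviour described above (the trailer mimicking the truck's actions), the sketch below replays the leader's motion commands after a fixed delay. It is a simplified, assumption-based stand-in for the paper's FPGA/Arduino behavioral control, not the actual scheme.

```python
# Hypothetical "mimic with delay" follower: the trailer replays the truck's
# (linear, angular) velocity commands after a fixed number of steps.
from collections import deque

class MimicFollower:
    def __init__(self, delay_steps=5):
        self._buffer = deque()
        self.delay = delay_steps

    def observe_leader(self, command):
        """Record the leader's (linear, angular) velocity command."""
        self._buffer.append(command)

    def next_command(self):
        """Replay the leader's command once the delay has elapsed, else stay put."""
        if len(self._buffer) > self.delay:
            return self._buffer.popleft()
        return (0.0, 0.0)

follower = MimicFollower(delay_steps=2)
leader_cmds = [(0.2, 0.0), (0.2, 0.1), (0.1, 0.1), (0.0, 0.0)]
for cmd in leader_cmds:
    follower.observe_leader(cmd)
    print(follower.next_command())   # first two steps: (0.0, 0.0), then replayed commands
```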

16 pages, 4091 KiB  
Article
Object-Based Change Detection Algorithm with a Spatial AI Stereo Camera
by Levente Göncz and András László Majdik
Sensors 2022, 22(17), 6342; https://doi.org/10.3390/s22176342 - 23 Aug 2022
Cited by 3
Abstract
This paper presents a real-time object-based 3D change detection method that is built around the concept of semantic object maps. The algorithm is able to maintain an object-oriented metric-semantic map of the environment and can detect object-level changes between consecutive patrol routes. The proposed 3D change detection method exploits the capabilities of the novel ZED 2 stereo camera, which integrates stereo vision and artificial intelligence (AI) to enable the development of spatial AI applications. To design the change detection algorithm and set its parameters, an extensive evaluation of the ZED 2 camera was carried out with respect to depth accuracy and consistency, visual tracking and relocalization accuracy and object detection performance. The outcomes of these findings are reported in the paper. Moreover, the utility of the proposed object-based 3D change detection is shown in real-world indoor and outdoor experiments.
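
A highly simplified sketch of object-level change detection between two patrol runs is given below, assuming each run yields a list of (class label, 3D centroid) detections. The greedy nearest-centroid matching and the distance threshold are illustrative assumptions; the paper's metric-semantic mapping with the ZED 2 is considerably richer.

```python
# Toy object-level change detection between two patrols: objects are matched by
# class label and centroid distance; unmatched ones are reported as appeared or
# disappeared, and matched ones beyond a threshold as moved.
import numpy as np

def detect_changes(prev_objects, curr_objects, dist_thresh=0.5):
    """Return objects that appeared, disappeared, or moved between patrols."""
    appeared, moved = [], []
    unmatched_prev = list(prev_objects)
    for label, pos in curr_objects:
        candidates = [(np.linalg.norm(np.array(pos) - np.array(p)), i)
                      for i, (l, p) in enumerate(unmatched_prev) if l == label]
        if not candidates:
            appeared.append((label, pos))
            continue
        dist, idx = min(candidates)
        prev_label, prev_pos = unmatched_prev.pop(idx)
        if dist > dist_thresh:
            moved.append((label, prev_pos, pos))
    disappeared = unmatched_prev
    return appeared, disappeared, moved

prev = [("chair", (1.0, 0.0, 0.0)), ("plant", (3.0, 1.0, 0.0))]
curr = [("chair", (1.1, 0.0, 0.0)), ("box", (0.0, 2.0, 0.0))]
print(detect_changes(prev, curr))  # box appeared, plant disappeared, chair unchanged
```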

22 pages, 2677 KiB  
Article
A Simulator and First Reinforcement Learning Results for Underwater Mapping
by Matthias Rosynski and Lucian Buşoniu
Sensors 2022, 22(14), 5384; https://doi.org/10.3390/s22145384 - 19 Jul 2022
Cited by 2
Abstract
Underwater mapping with mobile robots has a wide range of applications, and good models are lacking for key parts of the problem, such as sensor behavior. The specific focus here is the huge environmental problem of underwater litter, in the context of the Horizon 2020 SeaClear project, where a team of robots is being developed to map and collect such litter. No reinforcement-learning solution to underwater mapping has been proposed thus far, even though the framework is well suited for robot control in unknown settings. As a key contribution, this paper therefore makes a first attempt to apply deep reinforcement learning (DRL) to this problem by exploiting two state-of-the-art algorithms and making a number of mapping-specific improvements. Since DRL often requires millions of samples to work, a fast simulator is required, and another key contribution is to develop such a simulator from scratch for mapping seafloor objects with an underwater vehicle possessing a sonar-like sensor. Extensive numerical experiments on a range of algorithm variants show that the best DRL method collects litter significantly faster than a baseline lawn mower trajectory.
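
To make the reinforcement-learning formulation concrete, the following self-contained toy environment rewards an agent for newly observed litter cells within a sonar-like footprint on a grid seafloor. It only illustrates the interface a DRL agent would train against; the SeaClear simulator and the algorithms used in the paper are far more detailed, and every name here is an assumption.

```python
# Toy grid-seafloor mapping environment: the vehicle moves on a grid, observes a
# square sonar-like footprint, and is rewarded for newly seen litter cells.
import numpy as np

class ToyMappingEnv:
    def __init__(self, size=20, n_litter=15, sensor_radius=2, seed=0):
        rng = np.random.default_rng(seed)
        self.litter = np.zeros((size, size), dtype=bool)
        idx = rng.choice(size * size, n_litter, replace=False)
        self.litter.flat[idx] = True
        self.seen = np.zeros_like(self.litter)
        self.pos = np.array([size // 2, size // 2])
        self.size, self.r = size, sensor_radius

    def step(self, action):                      # 0..3 = up/down/left/right
        moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
        self.pos = np.clip(self.pos + moves[action], 0, self.size - 1)
        x, y = self.pos
        footprint = np.zeros_like(self.seen)
        footprint[max(0, x - self.r):x + self.r + 1, max(0, y - self.r):y + self.r + 1] = True
        newly_seen = footprint & ~self.seen & self.litter
        self.seen |= footprint
        reward = int(newly_seen.sum())           # reward newly mapped litter cells
        done = self.seen[self.litter].all()      # episode ends when all litter is mapped
        return (self.pos.copy(), self.seen.copy()), reward, done

env = ToyMappingEnv()
obs, reward, done = env.step(3)
print(reward, done)
```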

24 pages, 629 KiB  
Article
A Simulation-Based Approach to Aid Development of Software-Based Hardware Failure Detection and Mitigation Algorithms of a Mobile Robot System
by Jacopo Sini, Andrea Passarino, Stefano Bonicelli and Massimo Violante
Sensors 2022, 22(13), 4665; https://doi.org/10.3390/s22134665 - 21 Jun 2022
Abstract
Mechatronic systems, like mobile robots, are fairly complex. They are composed of electromechanical actuation components and sensing elements supervised by microcontrollers running complex embedded software. This paper proposes a novel approach to aid mobile robotics developers in adopting a rigorous development process to design and verify the robot’s detection and mitigation capabilities against random hardware failures affecting its sensors or actuators. Unfortunately, assessing the interactions between the various safety/mission-critical subsystems is quite complex. The failure mode effect analysis (FMEA) alongside an analysis of the failure detection capabilities (FMEDA) are the state-of-the-art methodologies for performing such an analysis. Various guidelines are available, and the authors decided to follow the one released by AIAG&VDA in June 2019. Since the robot’s behavior is based on embedded software, the FMEA has been integrated with the hardware/software interaction analysis described in the ECSS-Q-ST-30-02C manual. The core of this proposal is to show how a simulation-based approach, where the mechanical and electrical/electronic components are simulated alongside the embedded software, can effectively support FMEA. As a benchmark application, we considered the mobility system of a proof-of-concept assistance rover for Mars exploration designed by the D.I.A.N.A. student team at Politecnico di Torino. Thanks to the adopted approach, we described how to develop the detection and mitigation strategies and how to determine their effectiveness, with a particular focus on those affecting the sensors.
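
The simulation-based idea above can be illustrated with a tiny fault-injection experiment: a stuck-at fault is injected into a simulated sensor stream and a simple plausibility monitor is checked for detection. The fault model, thresholds and function names below are illustrative assumptions, not the paper's FMEA/FMEDA procedure.

```python
# Hypothetical fault-injection sketch: simulate a sensor stream, inject a stuck-at
# fault, and check whether a variance-collapse monitor flags it and at which step.
import random

def simulate_wheel_speed(t):
    """Nominal simulated sensor reading (m/s) with small noise."""
    return 1.0 + 0.02 * random.gauss(0, 1)

def inject_stuck_at(readings, fault_time, stuck_value):
    """Fault model: from fault_time on, the sensor outputs a constant value."""
    return [stuck_value if t >= fault_time else r for t, r in enumerate(readings)]

def detect_stuck_sensor(readings, window=10, eps=1e-6):
    """Flag a fault if the signal range collapses over a sliding window."""
    for i in range(len(readings) - window):
        w = readings[i:i + window]
        if max(w) - min(w) < eps:
            return i + window            # detection time step
    return None

nominal = [simulate_wheel_speed(t) for t in range(100)]
faulty = inject_stuck_at(nominal, fault_time=40, stuck_value=nominal[39])
print("fault detected at step:", detect_stuck_sensor(faulty))
```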

21 pages, 1708 KiB  
Article
Sensor Faults Isolation in Networked Control Systems: Application to Mobile Robot Platoons
by Wijaya Kurniawan and Lorinc Marton
Sensors 2021, 21(20), 6702; https://doi.org/10.3390/s21206702 - 09 Oct 2021
Cited by 2
Abstract
In networked control systems, sensor faults in a subsystem have a major influence on the entire network as the fault effect reaches the other subsystems through the network interconnections. In this paper, a fault diagnosis-oriented model is proposed for linear networked control systems that can be applied to the robotics platoon. In addition, this model can also be used to design distributed Unknown Input Observers (UIO) in each subsystem to accomplish weak sensor faults isolation by treating the network disturbances and fault propagation through the network as unknown inputs. A case study was developed in which the subsystems were represented by robots that are connected in a wireless communication-based leader-follower scheme. The simulation results show that the model successfully reproduces the expected behaviour of the robotics platoon in the presence of sensor faults. Furthermore, weak sensor faults isolation is also achieved by observing the residual signals produced by the UIOs in each of the subsystems.
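
A stripped-down sketch of residual-based sensor fault detection on one linear subsystem is shown below. For brevity it uses a plain Luenberger observer rather than a full Unknown Input Observer (which additionally decouples the unknown inputs); the matrices, gain and fault scenario are illustrative assumptions.

```python
# Observer residual sketch: a sensor bias fault injected at k = 60 makes the
# output residual deviate from (near) zero, which is the signal a fault isolation
# scheme would monitor. Simplified Luenberger observer, not the paper's UIO design.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])       # simple discrete double-integrator-like plant
B = np.array([[0.005], [0.1]])
C = np.array([[1.0, 0.0]])                   # position measurement
L = np.array([[0.5], [1.0]])                 # observer gain (stabilizing for A - L C)

x = np.zeros((2, 1)); x_hat = np.zeros((2, 1))
residuals = []
for k in range(100):
    u = np.array([[1.0]])
    x = A @ x + B @ u
    y = C @ x
    if k >= 60:                              # sensor bias fault injected at k = 60
        y = y + 0.5
    x_hat = A @ x_hat + B @ u + L @ (y - C @ x_hat)
    residuals.append(abs((y - C @ x_hat).item()))

print("mean residual before fault:", np.mean(residuals[:60]))
print("mean residual after fault :", np.mean(residuals[60:]))
```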

18 pages, 9821 KiB  
Article
Feature Pyramid Network Based Efficient Normal Estimation and Filtering for Time-of-Flight Depth Cameras
by Szilárd Molnár, Benjamin Kelényi and Levente Tamas
Sensors 2021, 21(18), 6257; https://doi.org/10.3390/s21186257 - 18 Sep 2021
Cited by 4
Abstract
In this paper, an efficient normal estimation and filtering method for depth images acquired by Time-of-Flight (ToF) cameras is proposed. The method is based on a common feature pyramid networks (FPN) architecture. The normal estimation method is called ToFNest, and the filtering method ToFClean. Both of these low-level 3D point cloud processing methods start from the 2D depth images, projecting the measured data into the 3D space and computing a task-specific loss function. Despite the simplicity, the methods prove to be efficient in terms of robustness and runtime. In order to validate the methods, extensive evaluations on public and custom datasets were performed. Compared with the state-of-the-art methods, the ToFNest and ToFClean algorithms are faster by an order of magnitude without losing precision on public datasets.
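
Two ingredients mentioned above, back-projecting a depth image into 3D points and a task-specific loss on normals, can be sketched as follows. The FPN-based network itself is omitted, and the intrinsics and loss form are illustrative assumptions rather than the ToFNest implementation.

```python
# Back-project an HxW depth image with pinhole intrinsics and compute a cosine
# loss between predicted and reference unit normals. Assumed values throughout.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project an HxW depth image (metres) to an HxWx3 point map."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

def normal_cosine_loss(pred, gt):
    """1 - mean cosine similarity between predicted and reference unit normals."""
    pred = pred / (np.linalg.norm(pred, axis=-1, keepdims=True) + 1e-8)
    gt = gt / (np.linalg.norm(gt, axis=-1, keepdims=True) + 1e-8)
    return 1.0 - np.mean(np.sum(pred * gt, axis=-1))

depth = np.full((4, 4), 2.0)                       # flat wall 2 m away
pts = depth_to_points(depth, fx=500, fy=500, cx=2, cy=2)
gt_normals = np.tile([0.0, 0.0, -1.0], (4, 4, 1))  # wall normal facing the camera
print(pts.shape, normal_cosine_loss(gt_normals, gt_normals))  # loss is 0 for a perfect prediction
```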

17 pages, 13344 KiB  
Article
OctoPath: An OcTree-Based Self-Supervised Learning Approach to Local Trajectory Planning for Mobile Robots
by Bogdan Trăsnea, Cosmin Ginerică, Mihai Zaha, Gigel Măceşanu, Claudiu Pozna and Sorin Grigorescu
Sensors 2021, 21(11), 3606; https://doi.org/10.3390/s21113606 - 22 May 2021
Cited by 5
Abstract
Autonomous mobile robots are usually faced with challenging situations when driving in complex environments. Namely, they have to recognize the static and dynamic obstacles, plan the driving path and execute their motion. For addressing the issue of perception and path planning, in this paper, we introduce OctoPath, which is an encoder-decoder deep neural network, trained in a self-supervised manner to predict the local optimal trajectory for the ego-vehicle. Using the discretization provided by a 3D octree environment model, our approach reformulates trajectory prediction as a classification problem with a configurable resolution. During training, OctoPath minimizes the error between the predicted and the manually driven trajectories in a given training dataset. This allows us to avoid the pitfall of regression-based trajectory estimation, in which there is an infinite state space for the output trajectory points. Environment sensing is performed using a 40-channel mechanical LiDAR sensor, fused with an inertial measurement unit and wheel odometry for state estimation. The experiments are performed both in simulation and real-life, using our own developed GridSim simulator and RovisLab’s Autonomous Mobile Test Unit platform. We evaluate the predictions of OctoPath in different driving scenarios, both indoor and outdoor, while benchmarking our system against a baseline hybrid A-Star algorithm and a regression-based supervised learning method, as well as against a CNN learning-based optimal path planning method.
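
The reformulation of trajectory prediction as classification can be illustrated by quantizing continuous waypoints into cell indices of a fixed-resolution grid, so that a network predicts one class per step. OctoPath uses a 3D octree and a deep encoder-decoder; the 2D grid, cell size and extent below are illustrative assumptions.

```python
# Toy illustration of trajectory prediction as classification: each continuous
# waypoint is mapped to a cell index of a fixed-resolution local grid and back.
import numpy as np

CELL = 0.5      # cell edge length in metres (the configurable resolution)
EXTENT = 20.0   # local map half-size in metres
N = int(2 * EXTENT / CELL)

def waypoint_to_class(xy):
    """Map a local (x, y) waypoint to a single class index in [0, N*N)."""
    ix = int(np.clip((xy[0] + EXTENT) // CELL, 0, N - 1))
    iy = int(np.clip((xy[1] + EXTENT) // CELL, 0, N - 1))
    return ix * N + iy

def class_to_waypoint(c):
    """Return the centre of the cell encoded by class index c."""
    ix, iy = divmod(c, N)
    return (ix * CELL - EXTENT + CELL / 2, iy * CELL - EXTENT + CELL / 2)

traj = [(0.3, 0.1), (1.2, 0.4), (2.6, 1.1)]        # manually driven waypoints
labels = [waypoint_to_class(p) for p in traj]       # classification targets
print(labels, [class_to_waypoint(c) for c in labels])
```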

Review

21 pages, 3926 KiB  
Review
Road Curb Detection: A Historical Survey
by Lucero M. Romero, Jose A. Guerrero and Gerardo Romero
Sensors 2021, 21(21), 6952; https://doi.org/10.3390/s21216952 - 20 Oct 2021
Cited by 11
Abstract
Curbs are used as physical markers to delimit roads and to redirect traffic into multiple directions (e.g., islands and roundabouts). Detection of road curbs is a fundamental task for autonomous vehicle navigation in urban environments. For almost two decades, solutions that use various types of sensors, including vision and Light Detection and Ranging (LiDAR) sensors, among others, have emerged to address the curb detection problem. This survey elaborates on the advances in road curb detection, a research field that has grown over the last two decades and continues to be the ground for new theoretical and applied developments. We identify the tasks involved in road curb detection methods and their applications in autonomous vehicle navigation and advanced driver assistance systems (ADAS). Finally, we present an analysis of the similarities and differences of the wide variety of contributions.
