Sensor Fusion for Vehicles Navigation and Robotic Systems

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Remote Sensors".

Deadline for manuscript submissions: closed (18 August 2022) | Viewed by 21877

Special Issue Editors


Guest Editor
MASCOR Institute, FH Aachen University of Applied Sciences, Eupener Str. 70, 52066 Aachen, Germany
Interests: AI; robotics; autonomous systems; high-level control

Guest Editor
MASCOR Institute, FH Aachen University of Applied Sciences, Hohenstaufenallee 10, 52064 Aachen, Germany
Interests: autonomous driving; automotive electronics and software

Guest Editor
MASCOR Institute, FH Aachen University of Applied Sciences, Eupener Str. 70, 52066 Aachen, Germany
Interests: artificial intelligence; cognitive robotics; human–robot interaction; knowledge-based systems

Special Issue Information

Dear Colleagues,

For self-driving cars, and indeed any autonomous, intelligent robot system, reliable pose information and reliable perception of the environment are of utmost importance. Without them, a self-driving car could not start its journey, and a mobile robot system could not commence any intelligent mission. To compute robust localisation data, the robot needs to integrate several different sensor sources to cope with sensor noise and arrive at a consistent pose estimate. To detect obstacles in all weather conditions, a self-driving car uses various sensor systems, such as cameras, LiDAR, and radar, but none of them is perfect on its own. Therefore, only fusing the sensor data will ensure a robust perception of the situation and the environment.
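To make the benefit of fusing concrete, here is a minimal sketch of inverse-variance weighting, the scalar building block behind Kalman-style fusion. The function name and the example values are purely illustrative, not taken from any system described in this issue:

```python
def fuse(z1, var1, z2, var2):
    """Fuse two independent noisy measurements of the same quantity.

    Each measurement is weighted by the inverse of its variance, so the
    less noisy source dominates; the fused variance is always smaller
    than either input variance.
    """
    w1 = var2 / (var1 + var2)
    w2 = var1 / (var1 + var2)
    fused = w1 * z1 + w2 * z2
    fused_var = (var1 * var2) / (var1 + var2)
    return fused, fused_var

# e.g. a noisy GNSS fix (variance 4.0) fused with odometry (variance 1.0):
pos, var = fuse(10.0, 4.0, 12.0, 1.0)
# pos == 11.6, var == 0.8 -- the estimate leans toward the less noisy sensor
```

A full Kalman filter applies exactly this weighting recursively, with the prediction from a motion model playing the role of one of the two measurements.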

With this Special Issue, we aim to collect current work on sensor fusion in the field of self-driving cars and autonomous robots. We welcome successful application examples of sensor fusion in these fields, but we also invite work on improving the theory or technology of state estimation that focuses on these particular application domains. The Special Issue focuses on, but is not limited to, original work on:

  • Novel sensor fusion techniques;
  • Novel sensor technologies;
  • Success stories in sensor application;
  • Innovative processing of sensor data;
  • Neural networks and their training, e.g., for object detection;
  • Advances in vehicle navigation based on sensor improvements;
  • Mapping based on fused sensor data;
  • Connected vehicles for a common perception;
  • Realistic sensor simulation;
  • Semantic segmentation and data annotation.

Prof. Dr. Alexander Ferrein
Prof. Dr. Michael Reke 
Dr. Stefan Schiffer
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • sensor fusion
  • vehicle navigation
  • novel sensor technologies
  • sensor simulation
  • object detection
  • semantic data annotation/processing

Published Papers (8 papers)


Research

20 pages, 10430 KiB  
Article
Benchmarking of Various LiDAR Sensors for Use in Self-Driving Vehicles in Real-World Environments
by Joschua Schulte-Tigges, Marco Förster, Gjorgji Nikolovski, Michael Reke, Alexander Ferrein, Daniel Kaszner, Dominik Matheis and Thomas Walter
Sensors 2022, 22(19), 7146; https://doi.org/10.3390/s22197146 - 21 Sep 2022
Cited by 7 | Viewed by 3312
Abstract
In this paper, we report on our benchmark results for the LiDAR sensors Livox Horizon, Robosense M1, Blickfeld Cube, Blickfeld Cube Range, Velodyne Velarray H800, and Innoviz Pro. The idea was to test the sensors in different typical scenarios that were defined with real-world use cases in mind, in order to find a sensor that meets the requirements of self-driving vehicles. For this, we defined static and dynamic benchmark scenarios. In the static scenarios, neither the LiDAR sensor nor the detection target moves during the measurement. In the dynamic scenarios, the LiDAR sensor was mounted on a vehicle driving toward the detection target. We tested all of the mentioned LiDAR sensors in both scenarios, present the results regarding the detection accuracy of the targets, and discuss their usefulness for deployment in self-driving cars.
(This article belongs to the Special Issue Sensor Fusion for Vehicles Navigation and Robotic Systems)

13 pages, 7990 KiB  
Article
Improvement of Baro Sensors Matrix for Altitude Estimation
by Łukasz Nagi, Jarosław Zygarlicki, Wojciech P. Hunek, Paweł Majewski, Paweł Młotek, Piotr Warmuzek, Piotr Witkowski and Dariusz Zmarzły
Sensors 2022, 22(18), 7060; https://doi.org/10.3390/s22187060 - 18 Sep 2022
Viewed by 1486
Abstract
The article presents the use of barometric sensors to precisely determine the altitude of a flying object. The sensors are arranged in a hexahedral spatial arrangement with appropriately spaced air inlets. This arrangement reduces the range of measurement uncertainty and thus the probability of error during measurement by improving the accuracy of the estimate. The paper also describes the use of pressure sensors in complex Tracking Vertical Velocity and Height systems, integrating different types of sensors to highlight the importance of this single parameter. The solution can find application in computational systems using different types of data in Kalman filters. The impact of pressure measurements in a geometric system with different spatial orientations of sensors is also presented. In order to compensate for local pressure differences, e.g., side wind gusts, an additional reference sensor was used, making the developed solution relevant for industrial applications.
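The underlying idea can be sketched as averaging several independent pressure readings before converting to altitude with the international barometric formula. The ISA constants below are standard; the sensor count and readings are illustrative, not taken from the paper:

```python
def pressure_to_altitude(p_hpa, p0_hpa=1013.25):
    """International barometric formula (ISA): pressure in hPa -> altitude in metres."""
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))

def fused_altitude(readings_hpa):
    """Average the readings of a sensor matrix first: for N independent
    sensors the noise standard deviation of the mean drops by ~sqrt(N)."""
    mean_p = sum(readings_hpa) / len(readings_hpa)
    return pressure_to_altitude(mean_p)

# six sensors at sea-level pressure should report zero altitude:
alt = fused_altitude([1013.25] * 6)
```

Averaging before conversion (rather than after) also keeps the nonlinearity of the barometric formula from biasing the estimate at small noise levels.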

20 pages, 7837 KiB  
Article
Multi-LiDAR Mapping for Scene Segmentation in Indoor Environments for Mobile Robots
by Pavel Gonzalez, Alicia Mora, Santiago Garrido, Ramon Barber and Luis Moreno
Sensors 2022, 22(10), 3690; https://doi.org/10.3390/s22103690 - 12 May 2022
Cited by 5 | Viewed by 2881
Abstract
Nowadays, most mobile robot applications use two-dimensional LiDAR for indoor mapping, navigation, and low-level scene segmentation. However, single-data-type maps are not enough in a six-degree-of-freedom world. Multi-LiDAR sensor fusion increases the robots' capability to map the surrounding environment at different levels. It exploits the benefits of several data types, offsetting the weaknesses of each individual sensor. This research introduces several techniques to achieve mapping and navigation through indoor environments. First, a scan-matching algorithm based on ICP with a distance-threshold association counter is used as a multi-objective-like fitness function. Then, with Harmony Search, the results are optimized without any initial guess or odometry. A global map is then built during SLAM, reducing the accumulated error and yielding better results than LiDAR matching with odometry alone. As a novelty, both algorithms are implemented for 2D and 3D mapping, overlapping the resulting maps to fuse geometrical information at different heights. Finally, a room segmentation procedure is proposed by analyzing this information, avoiding the occlusions that appear in 2D maps, and its benefits are demonstrated by implementing a door recognition system. Experiments are conducted in both simulated and real scenarios, proving the performance of the proposed algorithms.
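The scan-matching core that this abstract builds on can be sketched as the standard SVD (Kabsch) solution for the best rigid transform between matched point sets. This is the generic least-squares step inside an ICP iteration, not the authors' exact implementation:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~= src @ R.T + t (Kabsch/SVD)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)   # cross-covariance of the centred point sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

In a full ICP loop this step alternates with nearest-neighbour association, where a distance threshold rejects bad correspondences, much like the association counter mentioned in the abstract.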

16 pages, 594 KiB  
Article
A Reconfigurable Framework for Vehicle Localization in Urban Areas
by Kerman Viana, Asier Zubizarreta and Mikel Diez
Sensors 2022, 22(7), 2595; https://doi.org/10.3390/s22072595 - 28 Mar 2022
Cited by 6 | Viewed by 1748
Abstract
Accurate localization is essential for autonomous vehicle operations in dense urban areas. In order to ensure safety, positioning algorithms should implement fault detection and fallback strategies. While many strategies stop the vehicle once a failure is detected, in this work a new framework is proposed that includes an improved reconfiguration module to evaluate the failure scenario and offer alternative positioning strategies, allowing continued driving in a degraded mode until a critical failure is detected. Furthermore, as many sensor failures can be temporary, such as a GPS signal interruption, the proposed approach allows the return to a non-fault state while resetting the alternative algorithms used in the temporary failure scenario. The proposed localization framework is validated in a series of experiments carried out in a simulation environment. Results demonstrate proper localization for the driving task even in the presence of sensor failure, stopping the vehicle only when a fully degraded state is reached. Moreover, the reconfiguration strategies have proven to consistently reset the accumulated drift of the alternative positioning algorithms, improving the overall performance and bounding the mean error.
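The reconfiguration idea, fall back to the best positioning mode the remaining healthy sensors support and recover when a sensor returns, can be sketched as a priority table. The mode names and sensor sets here are hypothetical, chosen only to illustrate the selection logic:

```python
# Priority-ordered localization modes; a mode is usable only when every
# sensor it requires is currently healthy.
MODES = [
    ("gnss_imu_fusion", {"gnss", "imu"}),      # nominal operation
    ("lidar_odometry",  {"lidar"}),            # degraded mode
    ("wheel_odometry",  {"wheel_encoders"}),   # heavily degraded mode
]

def select_mode(healthy_sensors):
    """Pick the highest-priority usable mode; stop only when fully degraded."""
    for name, required in MODES:
        if required <= healthy_sensors:        # set inclusion: all required sensors healthy
            return name
    return "safe_stop"
```

Because the selection is stateless, a temporary failure (e.g., a GNSS outage) automatically reverts to the nominal mode once the sensor reports healthy again; the paper's framework additionally resets the accumulated drift of the fallback algorithms on recovery.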

17 pages, 2313 KiB  
Article
Automating the Calibration of Visible Light Positioning Systems
by Robin Amsters, Simone Ruberto, Eric Demeester, Nobby Stevens and Peter Slaets
Sensors 2022, 22(3), 998; https://doi.org/10.3390/s22030998 - 27 Jan 2022
Viewed by 2130
Abstract
Visible light positioning is one of the most popular technologies used for indoor positioning research. Like many other technologies, a calibration procedure is required before the system can be used. More specifically, the location and identity of each light source need to be determined. These parameters are often measured manually, which can be a labour-intensive and error-prone process. Previous work proposed the use of a mobile robot for data collection. However, this robot still needed to be steered by a human operator. In this work, we significantly improve the efficiency of calibration by proposing two novel methods that allow the robot to autonomously collect the required calibration data. In postprocessing, the necessary system parameters can be calculated from these data. The first novel method, referred to as semi-autonomous calibration, requires some prior knowledge of the LED locations and a map of the environment. The second, fully autonomous calibration procedure requires no prior knowledge. Simulation results show that the two novel methods are both more accurate than manual steering. Fully autonomous calibration requires approximately the same amount of time to complete, whereas semi-autonomous calibration is significantly faster.

19 pages, 5191 KiB  
Article
Vehicle Trajectory Prediction with Lane Stream Attention-Based LSTMs and Road Geometry Linearization
by Dongyeon Yu, Honggyu Lee, Taehoon Kim and Sung-Ho Hwang
Sensors 2021, 21(23), 8152; https://doi.org/10.3390/s21238152 - 06 Dec 2021
Cited by 10 | Viewed by 3483
Abstract
It is essential for autonomous vehicles at level 3 or higher to be able to predict the trajectories of surrounding vehicles in order to safely and effectively plan and drive along trajectories in complex traffic situations. However, predicting the future behavior of vehicles is challenging because surrounding vehicles have different drivers with different driving tendencies and intentions, and they interact with each other. This paper presents a Long Short-Term Memory (LSTM) encoder–decoder model that utilizes an attention mechanism, which focuses on the most relevant information, to predict vehicles' trajectories. The proposed model was trained using the Highway Drone (HighD) dataset, a high-precision, large-scale traffic dataset, and compared with models from previous studies. Our model effectively predicted future trajectories by using the attention mechanism to manage the importance of the driving flow of the target and adjacent vehicles and of the target vehicle's dynamics in each driving situation. Furthermore, this study presents a method of linearizing the road geometry such that the trajectory prediction model can be used in a variety of road environments. We verified that the road geometry linearization mechanism improves the trajectory prediction model's performance on various road environments in a virtual test-driving simulator constructed from actual road data.
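The attention step can be sketched with a generic dot-product attention over neighbour-vehicle encodings. This is the textbook form with made-up shapes and values, not the paper's architecture:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, keys, values):
    """Scaled dot-product attention: weight each neighbour's encoding by
    its relevance to the query, then return the weighted context vector."""
    scores = keys @ query / np.sqrt(query.size)
    weights = softmax(scores)
    return weights @ values, weights
```

In a trajectory predictor, `query` would encode the target vehicle's state, `keys`/`values` the surrounding vehicles, so the context vector emphasises the neighbours most relevant to the current driving situation.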

14 pages, 1413 KiB  
Article
A Scalable Framework for Map Matching Based Cooperative Localization
by Chizhao Yang, Jared Strader and Yu Gu
Sensors 2021, 21(19), 6400; https://doi.org/10.3390/s21196400 - 25 Sep 2021
Cited by 2 | Viewed by 1866
Abstract
Localization based on scalar field map matching (e.g., using gravity anomaly, magnetic anomaly, topographic, or olfaction maps) is a potential solution for navigating in Global Navigation Satellite System (GNSS)-denied environments. In this paper, a scalable framework is presented for cooperatively localizing a group of agents based on map matching, given a prior map modeling the scalar field. In order to satisfy the communication constraints, each agent in the group is assigned to different subgroups. A locally centralized cooperative localization method is performed in each subgroup to estimate the poses and covariances of all agents inside the subgroup. Each agent, at the same time, can belong to multiple subgroups, which means multiple pose and covariance estimates from different subgroups exist for that agent. The improved pose estimate for each agent at each time step is then obtained through an information fusion algorithm. The proposed algorithm is evaluated in two different types of scalar-field-based simulations. The simulation results show that the proposed algorithm can handle large group sizes (e.g., 128 agents) and achieve 10-m-level localization performance over a 180 km traveling distance, even under restrictive communication constraints.
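The per-agent fusion of several Gaussian pose estimates can be sketched with the standard information-filter combination. This is a generic textbook form assuming independent estimates, not necessarily the paper's exact fusion algorithm:

```python
import numpy as np

def fuse_estimates(means, covs):
    """Combine independent Gaussian estimates of the same pose.

    Information (inverse-covariance) matrices add, and each mean is
    weighted by its information, so tighter estimates count for more.
    """
    info = sum(np.linalg.inv(P) for P in covs)                  # fused information matrix
    info_vec = sum(np.linalg.inv(P) @ m for m, P in zip(means, covs))
    P_fused = np.linalg.inv(info)
    return P_fused @ info_vec, P_fused
```

If the subgroup estimates share information (an agent appears in several subgroups), naive information addition double-counts evidence; a conservative alternative in that case is covariance intersection.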

22 pages, 5665 KiB  
Article
Performance Evaluation of the Highway Radar Occupancy Grid
by Jakub Porębski and Krzysztof Kogut
Sensors 2021, 21(6), 2177; https://doi.org/10.3390/s21062177 - 20 Mar 2021
Cited by 2 | Viewed by 3011
Abstract
The quality of environmental perception is crucial for automated vehicle capabilities. In order to ensure the required accuracy, the occupancy grid mapping algorithm is often utilised to fuse data from multiple sensors. This paper focuses on the radar-based occupancy grid for highway applications and describes how to effectively measure the quality of the occupancy map. The evaluation was performed using a novel analysis method based on pole-like objects in the grid. The proposed assessment is versatile and can be applied without detailed ground truth information. It was tested both in simulation and in real-vehicle experiments on the highway.
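The fusion core of occupancy grid mapping can be sketched as a per-cell log-odds update under an inverse sensor model. The increment values below are assumed for illustration; real values depend on the radar's detection and false-alarm characteristics:

```python
import math

L_OCC, L_FREE = 0.85, -0.4   # assumed log-odds increments of the inverse sensor model

def update_cell(l_prev, hit):
    """Bayesian log-odds update of one grid cell from one radar measurement.
    Working in log-odds turns the Bayes update into a simple addition."""
    return l_prev + (L_OCC if hit else L_FREE)

def occupancy_probability(l):
    """Convert a cell's log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

# A cell starts unknown (log-odds 0, probability 0.5) and accumulates evidence:
l = 0.0
for _ in range(3):           # three consecutive radar hits on the same cell
    l = update_cell(l, True)
```

The additive form also makes multi-sensor fusion straightforward: increments from several radars simply sum into the same cell.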
