
Perception Sensors for Road Applications 2022

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Vehicular Sensing".

Deadline for manuscript submissions: closed (30 July 2022) | Viewed by 10557

Special Issue Editor


Dr. Felipe Jiménez
Guest Editor
Instituto Universitario de Investigación del Automóvil (INSIA), Universidad Politécnica de Madrid, 28040 Madrid, Spain
Interests: intelligent transport systems; advanced driver assistance systems; vehicle positioning; inertial sensors; digital maps; vehicle dynamics; driver monitoring; perception; autonomous vehicles; cooperative services; connected and autonomous driving

Special Issue Information

Dear Colleagues,

New driver assistance systems and autonomous driving applications for road vehicles place ever greater demands on perception systems, which must increase the robustness of decisions and avoid both false positives and false negatives. Many technologies can be used for this purpose, both on the vehicle and in the infrastructure. On the vehicle side, technologies such as LiDAR and computer vision are the basis for rising levels of automation, although their deployment is also revealing problems that arise in real scenarios and that must be solved to keep improving the safety and efficiency of road traffic. Given the limitations of each individual technology, it is common to resort to sensor fusion, both between sensors of the same type and between sensors of different types (a minimal illustrative fusion sketch appears after the topic list below). In addition, the data used for decision making do not come only from on-board sensors: wireless communication with the outside world gives the vehicle a wider electronic horizon, and precise positioning on detailed digital maps provides further information that can be very useful for interpreting the environment. Sensors also cover the driver, in order to assess his or her ability to perform the driving task safely.

In all of these areas, it is crucial to study the limitations of each solution and sensor, and to establish tools that mitigate these limitations through improvements in either hardware or software. To this end, the specifications required of the sensors must be defined, and specific methods must be developed to validate those specifications for individual sensors and for complete systems. Finally, state-of-the-art studies on the evolution of perception sensors and their impact on the evolution of road transport are also welcome.

In conclusion, this Special Issue aims to bring together innovative developments in areas related to sensors in vehicles and in the infrastructure, including, but not limited to:

  • Environment perception
  • LiDAR
  • Computer vision
  • Radar
  • Vehicle dynamics sensors
  • Driver monitoring
  • Infrastructure sensors
  • New assistance systems based on perception sensors
  • Sensor fusion techniques for autonomous systems
  • Interaction between autonomous systems and the driver
  • Decision algorithms for autonomous actions
  • Cooperation between autonomous vehicles and infrastructure
  • Sensor requirements
  • State-of-the-art reviews of perception sensors and technologies
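
As a purely illustrative note on the sensor fusion topic above, the following minimal sketch fuses two independent range measurements of the same obstacle, e.g., from a LiDAR and a radar, by inverse-variance weighting. The sensor names and noise values are assumptions chosen for illustration; this is not a reference implementation of any particular fusion architecture.

```python
# Minimal illustrative sketch: inverse-variance fusion of two independent range
# measurements of the same obstacle. Sensor names and noise standard deviations
# are assumptions chosen purely for illustration.

def fuse_measurements(z_lidar: float, sigma_lidar: float,
                      z_radar: float, sigma_radar: float) -> tuple[float, float]:
    """Fuse two noisy range estimates by weighting each with its inverse variance."""
    w_lidar = 1.0 / sigma_lidar ** 2
    w_radar = 1.0 / sigma_radar ** 2
    fused = (w_lidar * z_lidar + w_radar * z_radar) / (w_lidar + w_radar)
    fused_sigma = (1.0 / (w_lidar + w_radar)) ** 0.5
    return fused, fused_sigma

if __name__ == "__main__":
    # Hypothetical readings: LiDAR 25.3 m (sigma 0.05 m), radar 25.6 m (sigma 0.30 m).
    distance, sigma = fuse_measurements(25.3, 0.05, 25.6, 0.30)
    print(f"fused range: {distance:.2f} m (sigma {sigma:.3f} m)")
```

The same weighting corresponds to the measurement-update step of a Kalman filter when the two observations arrive sequentially.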

Authors are invited to contact the guest editor prior to submission if they are uncertain whether their work falls within the general scope of this Special Issue.

Dr. Felipe Jiménez
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (4 papers)


Research

12 pages, 6985 KiB  
Article
Self-Supervised Sidewalk Perception Using Fast Video Semantic Segmentation for Robotic Wheelchairs in Smart Mobility
by Vishnu Pradeep, Redouane Khemmar, Louis Lecrosnier, Yann Duchemin, Romain Rossi and Benoit Decoux
Sensors 2022, 22(14), 5241; https://doi.org/10.3390/s22145241 - 13 Jul 2022
Viewed by 2086
Abstract
The real-time segmentation of sidewalk environments is critical to achieving autonomous navigation for robotic wheelchairs in urban territories. A robust and real-time video semantic segmentation offers an apt solution for advanced visual perception in such complex domains. The key to this proposition is to have a method with lightweight flow estimations and reliable feature extractions. We address this by selecting an approach based on recent trends in video segmentation. Although these approaches demonstrate efficient and cost-effective segmentation performance in cross-domain implementations, they require additional procedures to put their striking characteristics into practical use. We use our method for developing a visual perception technique to perform in urban sidewalk environments for the robotic wheelchair. We generate a collection of synthetic scenes in a blending target distribution to train and validate our approach. Experimental results show that our method improves prediction accuracy on our benchmark with tolerable loss of speed and without additional overhead. Overall, our technique serves as a reference to transfer and develop perception algorithms for any cross-domain visual perception applications with less downtime.
(This article belongs to the Special Issue Perception Sensors for Road Applications 2022)
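
For readers new to the topic, the sketch below illustrates the general keyframe-based video segmentation pattern the abstract alludes to: a heavy backbone runs only on keyframes, and a lightweight module updates the cached result on intermediate frames. All module names and sizes are placeholder assumptions, not the authors' implementation.

```python
# Illustrative keyframe-based video segmentation loop (not the authors' code).
# A heavy backbone runs only on keyframes; intermediate frames reuse the cached
# prediction through a lightweight update network, trading a small accuracy
# loss for real-time speed.
import torch
import torch.nn as nn

class TinyBackbone(nn.Module):          # stand-in for a full segmentation backbone
    def __init__(self, classes: int = 8):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, classes, 1))
    def forward(self, x):
        return self.net(x)

class TinyUpdater(nn.Module):           # stand-in for a lightweight flow/update module
    def __init__(self, classes: int = 8):
        super().__init__()
        self.net = nn.Conv2d(classes + 3, classes, 3, padding=1)
    def forward(self, cached_logits, frame):
        return self.net(torch.cat([cached_logits, frame], dim=1))

def segment_video(frames, keyframe_every: int = 5):
    backbone, updater = TinyBackbone(), TinyUpdater()
    cached, masks = None, []
    with torch.no_grad():
        for i, frame in enumerate(frames):
            if cached is None or i % keyframe_every == 0:
                cached = backbone(frame)          # expensive path on keyframes
            else:
                cached = updater(cached, frame)   # cheap path on intermediate frames
            masks.append(cached.argmax(dim=1))    # per-pixel class prediction
    return masks

if __name__ == "__main__":
    dummy_clip = [torch.rand(1, 3, 64, 64) for _ in range(10)]
    print(len(segment_video(dummy_clip)), "frames segmented")
```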

27 pages, 1367 KiB  
Article
Recent Advances in Vision-Based On-Road Behaviors Understanding: A Critical Survey
by Rim Trabelsi, Redouane Khemmar, Benoit Decoux, Jean-Yves Ertaud and Rémi Butteau
Sensors 2022, 22(7), 2654; https://doi.org/10.3390/s22072654 - 30 Mar 2022
Cited by 4 | Viewed by 2965
Abstract
On-road behavior analysis is a crucial and challenging problem in vision-based autonomous driving, and it has gained wide attention recently; several endeavors have been proposed to deal with its different related tasks. Much of the excitement about on-road behavior understanding stems from the advances witnessed in the fields of computer vision and machine and deep learning, and remarkable achievements have been made in this area over the last few years. This paper reviews more than 100 papers on on-road behavior analysis in light of the milestones achieved over the last two decades. It provides a first attempt to draw smart mobility researchers' attention to the road behavior understanding field and to its potential impact on road safety for all road agents, such as drivers, pedestrians and other road users. To push for a holistic understanding, we investigate the complementary relationships between the elementary tasks that we define as the main components of road behavior understanding, in order to achieve a comprehensive view of approaches and techniques. Five related topics are covered in this review: situational awareness, driver-road interaction, road scene understanding, trajectory forecasting, and driving activity and status analysis. The paper also reviews the contribution of deep learning approaches and makes an in-depth analysis of recent benchmarks, with a specific taxonomy that can help stakeholders select their best-fit architecture. Finally, we provide a comprehensive discussion that leads us to identify novel research directions, some of which have been implemented and validated in our current smart mobility research work. This paper presents the first survey of road-behavior-understanding-related work without overlap with existing reviews.
(This article belongs to the Special Issue Perception Sensors for Road Applications 2022)

16 pages, 4328 KiB  
Article
Deep-Neural-Network-Based Modelling of Longitudinal-Lateral Dynamics to Predict the Vehicle States for Autonomous Driving
by Xiaobo Nie, Chuan Min, Yongjun Pan, Ke Li and Zhixiong Li
Sensors 2022, 22(5), 2013; https://doi.org/10.3390/s22052013 - 04 Mar 2022
Cited by 15 | Viewed by 2538
Abstract
Multibody models built in commercial software packages, e.g., ADAMS, can be used for accurate vehicle dynamics, but computational efficiency and numerical stability are very challenging in complex driving environments. These issues can be addressed by using data-driven models, owing to their robust generalization and computational speed. In this study, we develop a deep neural network (DNN) based model to predict the longitudinal-lateral dynamics of an autonomous vehicle. Dynamic simulations of the autonomous vehicle are performed based on a semirecursive multibody method for data acquisition, and the data are used to train and test the DNN model. The DNN inputs are the torque applied to the wheels and the vehicle's initial speed, imitating a double lane change maneuver. The DNN outputs are the longitudinal driving distance, the lateral driving distance, the final longitudinal velocity, the final lateral velocity, and the yaw angle. The vehicle states predicted by the DNN model are compared with the multibody model results, and the accuracy of the DNN model is investigated in detail in terms of error functions. The DNN model is also verified within the framework of the commercial software package CarSim. The results demonstrate that the DNN model predicts accurate vehicle states in real time; it can be used for real-time simulation and preview control in autonomous vehicles for enhanced transportation safety.
(This article belongs to the Special Issue Perception Sensors for Road Applications 2022)
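
As a rough illustration of the kind of surrogate model the abstract describes, the sketch below maps wheel torques and an initial speed to end-of-maneuver vehicle states with a small fully connected network. The layer sizes and the exact input layout are assumptions, not the authors' architecture, and the network is shown untrained.

```python
# Illustrative surrogate DNN for vehicle dynamics: wheel torques and initial
# speed in, end-of-maneuver states out. Architecture and input layout are
# assumptions for illustration, not the published model.
import torch
import torch.nn as nn

class VehicleStatePredictor(nn.Module):
    """Maps [torque_fl, torque_fr, torque_rl, torque_rr, initial_speed] to
    [longitudinal distance, lateral distance, final longitudinal velocity,
     final lateral velocity, yaw angle]."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(5, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 5),
        )
    def forward(self, x):
        return self.net(x)

if __name__ == "__main__":
    model = VehicleStatePredictor()
    # Hypothetical input: 300 Nm on each wheel, initial speed 20 m/s.
    x = torch.tensor([[300.0, 300.0, 300.0, 300.0, 20.0]])
    print(model(x))   # untrained output, shape (1, 5)
```

In practice such a model would first be trained on the multibody simulation data (e.g., with a mean squared error loss) before its predictions become meaningful.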

13 pages, 1155 KiB  
Article
Impact of Road Marking Retroreflectivity on Machine Vision in Dry Conditions: On-Road Test
by Darko Babić, Dario Babić, Mario Fiolić, Arno Eichberger and Zoltan Ferenc Magosi
Sensors 2022, 22(4), 1303; https://doi.org/10.3390/s22041303 - 09 Feb 2022
Cited by 5 | Viewed by 2103
Abstract
(1) Background: Due to its high safety potential, one of the most common ADAS technologies is the lane support system (LSS). The main purpose of LSS is to prevent road accidents caused by unintended road departure or by entering the lane of other vehicles. Such accidents are especially common on rural roads during nighttime. For LSS to function properly, road markings should be properly maintained and have an adequate level of visibility. During nighttime, the visibility of road markings is determined by their retroreflectivity. The aim of this study is to investigate how road markings' retroreflectivity influences the detection quality and the view range of LSS. (2) Methods: An on-road investigation comprising measurements with Mobileye and a dynamic retroreflectometer was conducted on four rural roads in Croatia. (3) Results: The results show that, as the markings' retroreflectivity increases, the detection quality and the range of view of Mobileye increase as well. Additionally, it was determined that, in "ideal" conditions, the retroreflectivity should be above 55 mcd/lx/m² for a minimum level 2 detection and above 88 mcd/lx/m² for the best detection quality (level 3). The results of this study are valuable to researchers, road authorities and policymakers.
(This article belongs to the Special Issue Perception Sensors for Road Applications 2022)
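
To make the reported thresholds concrete, the helper below maps a dynamic retroreflectivity reading to a detection level using the dry-condition values from the abstract (above 55 mcd/lx/m² for at least level 2, above 88 mcd/lx/m² for level 3); the behaviour below the lower threshold is an assumption for illustration only.

```python
# Tiny helper illustrating the dry-condition thresholds reported in the abstract:
# above ~55 mcd/lx/m^2 for at least detection level 2 and above ~88 mcd/lx/m^2
# for the best quality (level 3). The label returned below the lower threshold
# is an assumption for illustration, not a value from the study.
def mobileye_detection_level(retroreflectivity_mcd: float) -> int:
    """Map a dynamic retroreflectivity reading (mcd/lx/m^2) to a detection level."""
    if retroreflectivity_mcd > 88:
        return 3      # best detection quality reported in the study
    if retroreflectivity_mcd > 55:
        return 2      # minimum reliable detection reported in the study
    return 1          # below the reported threshold (assumed label)

if __name__ == "__main__":
    for value in (40, 60, 120):
        print(value, "->", mobileye_detection_level(value))
```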
