
Advances in Sensor Related Technologies for Autonomous Driving

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: 15 September 2024 | Viewed by 16306

Special Issue Editors


Guest Editor
Department of Mechanical and Aerospace Engineering and the Automated Driving Lab (ADL), Ohio State University, Columbus, OH 43210, USA
Interests: connected and autonomous vehicles (CAVs); intelligent transportation systems; smart mobility; smart cities; vehicle dynamics controllers; electronic stability control; adaptive cruise control; cooperative adaptive cruise control; collision warning and avoidance systems; cooperative collision avoidance systems

Guest Editor
Department of Mechanical and Aerospace Engineering, Department of Electrical and Computer Engineering and the Automated Driving Lab (ADL), Ohio State University, Columbus, OH 43210, USA
Interests: connected and autonomous driving; HIL simulation; autonomous driving datasets; validation of autonomous driving functions

Special Issue Information

Dear Colleagues,

Successful pilot deployments of autonomous driving are increasing at a rapid pace worldwide. These include highway pilots, urban autonomous shuttles, autonomous delivery vehicles and robots, and robo-taxis. Series-produced autonomous road vehicles have started to appear on the market in limited numbers, with more widespread introductions expected in the near future. The most significant enablers of these autonomous driving developments are the sensors, together with the sensor data processing and fusion algorithms, used to perceive the environment for situational awareness.

Along with advances in autonomous driving, there have also been significant advances in perception and localization sensors and their processing algorithms. Motivated by these advances, this Special Issue focuses on recent developments in sensor-related technologies for autonomous vehicles, including lidar, camera, radar, inertial measurement unit (IMU), GPS, and communication sensors. Considerable work is underway to improve these and similar sensors in terms of resolution, range, operation under adverse conditions such as bad weather, and flash versus scanning capture of the sensed environment, all while lowering cost. In some cases, two or more sensors are combined into a multimodal sensor that benefits from using the same reference frame for all modes; examples include IMU and GPS, camera and radar, radar and communication modem, and lidar and camera. Series production favors sensors with built-in intelligence that output sensed-object information rather than raw data. In research applications, raw sensor data are processed in a centralized edge-computing system where sensor fusion and data association also take place. Fast cellular communication networks also make it possible for sensor data from multiple vehicles to be shared and used in edge or cloud computing for enhanced perception.

This Special Issue welcomes contributions dealing with all the technological facets of sensors for autonomous driving, including multimodal sensors, new sensor architectures, algorithms for sensor fusion and data association, deployment issues, soft sensor models for simulation, soft environments for sensor perception algorithms and autonomous driving function validation and development, application case studies, cooperative sensing, combined use of perception sensors with communication, hardware-in-the-loop testing methods for autonomous driving sensors, cybersecurity, and hacking of sensors and datasets.

The topics of interest include but are not limited to the following:

  • Autonomous driving sensor architectures and innovative new sensing approaches;
  • Multimodal sensors for autonomous driving;
  • Soft sensor models and environments for development and testing;
  • Autonomous driving sensor-in-the-loop simulation systems;
  • Autonomous driving sensor perception algorithms for sensor fusion and data association;
  • Datasets of autonomous driving sensors, especially for urban environments;
  • Sensor fault-tolerant localization and perception for autonomous driving.

This Special Issue covers physical sensors, smart/intelligent sensors, sensor devices, sensor technology and applications, sensing principles, signal processing, data fusion, and deep learning in sensor systems, in line with the scope of the Sensors journal.

Prof. Dr. Bilin Aksun Guvenc
Prof. Dr. Levent Guvenc
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (6 papers)


Research


16 pages, 24162 KiB  
Article
Monocular Depth Estimation from a Fisheye Camera Based on Knowledge Distillation
by Eunjin Son, Jiho Choi, Jimin Song, Yongsik Jin and Sang Jun Lee
Sensors 2023, 23(24), 9866; https://doi.org/10.3390/s23249866 - 16 Dec 2023
Viewed by 1160
Abstract
Monocular depth estimation is a task aimed at predicting pixel-level distances from a single RGB image. This task holds significance in various applications including autonomous driving and robotics. In particular, the recognition of surrounding environments is important to avoid collisions during autonomous parking. Fisheye cameras are adequate to acquire visual information from a wide field of view, reducing blind spots and preventing potential collisions. While there have been increasing demands for fisheye cameras in visual-recognition systems, existing research on depth estimation has primarily focused on pinhole camera images. Moreover, depth estimation from fisheye images poses additional challenges due to strong distortion and the lack of public datasets. In this work, we propose a novel underground parking lot dataset called JBNU-Depth360, which consists of fisheye camera images and their corresponding LiDAR projections. Our proposed dataset was composed of 4221 pairs of fisheye images and their corresponding LiDAR point clouds, which were obtained from six driving sequences. Furthermore, we employed a knowledge-distillation technique to improve the performance of the state-of-the-art depth-estimation models. The teacher–student learning framework allows the neural network to leverage the information in dense depth predictions and sparse LiDAR projections. Experiments were conducted on the KITTI-360 and JBNU-Depth360 datasets for analyzing the performance of existing depth-estimation models on fisheye camera images. By utilizing the self-distillation technique, the AbsRel and SILog error metrics were reduced by 1.81% and 1.55% on the JBNU-Depth360 dataset. The experimental results demonstrated that the self-distillation technique is beneficial to improve the performance of depth-estimation models. Full article
(This article belongs to the Special Issue Advances in Sensor Related Technologies for Autonomous Driving)
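The abstract reports reductions in the AbsRel and SILog error metrics. As a point of reference, the sketch below shows one common way these two depth-error metrics are computed over valid pixels; the exact definitions and masking used in the paper may differ.

```python
import numpy as np

def abs_rel(pred, gt):
    """Absolute relative error: mean of |pred - gt| / gt over valid (gt > 0) pixels."""
    mask = gt > 0
    return float(np.mean(np.abs(pred[mask] - gt[mask]) / gt[mask]))

def silog(pred, gt, scale=100.0):
    """Scale-invariant logarithmic error (one common formulation), reported x100."""
    mask = (gt > 0) & (pred > 0)
    err = np.log(pred[mask]) - np.log(gt[mask])
    return float(np.sqrt(np.mean(err ** 2) - np.mean(err) ** 2) * scale)
```

Lower values of both metrics indicate better agreement between the predicted depth map and the (sparse) LiDAR-derived ground truth.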

19 pages, 4136 KiB  
Article
LiDAR-as-Camera for End-to-End Driving
by Ardi Tampuu, Romet Aidla, Jan Aare van Gent and Tambet Matiisen
Sensors 2023, 23(5), 2845; https://doi.org/10.3390/s23052845 - 06 Mar 2023
Cited by 7 | Viewed by 2702
Abstract
The core task of any autonomous driving system is to transform sensory inputs into driving commands. In end-to-end driving, this is achieved via a neural network, with one or multiple cameras as the most commonly used input and low-level driving commands, e.g., steering angle, as output. However, simulation studies have shown that depth-sensing can make the end-to-end driving task easier. On a real car, combining depth and visual information can be challenging due to the difficulty of obtaining good spatial and temporal alignment of the sensors. To alleviate alignment problems, Ouster LiDARs can output surround-view LiDAR images with depth, intensity, and ambient radiation channels. These measurements originate from the same sensor, rendering them perfectly aligned in time and space. The main goal of our study is to investigate how useful such images are as inputs to a self-driving neural network. We demonstrate that such LiDAR images are sufficient for the real-car road-following task. Models using these images as input perform at least as well as camera-based models in the tested conditions. Moreover, LiDAR images are less sensitive to weather conditions and lead to better generalization. In a secondary research direction, we reveal that the temporal smoothness of off-policy prediction sequences correlates with the actual on-policy driving ability equally well as the commonly used mean absolute error. Full article
(This article belongs to the Special Issue Advances in Sensor Related Technologies for Autonomous Driving)
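To illustrate the secondary finding, the sketch below computes the mean absolute error of off-policy steering predictions against recorded steering, together with a simple temporal-smoothness measure (mean absolute change between consecutive predictions). This is an illustrative formulation; the smoothness metric used in the paper may be defined differently.

```python
import numpy as np

def steering_mae(pred, recorded):
    """Mean absolute error between predicted and recorded steering angles."""
    return float(np.mean(np.abs(np.asarray(pred) - np.asarray(recorded))))

def temporal_smoothness(pred):
    """Mean absolute change between consecutive predictions; lower is smoother."""
    return float(np.mean(np.abs(np.diff(np.asarray(pred)))))
```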

18 pages, 9362 KiB  
Article
V2X Communication between Connected and Automated Vehicles (CAVs) and Unmanned Aerial Vehicles (UAVs)
by Ozgenur Kavas-Torris, Sukru Yaren Gelbal, Mustafa Ridvan Cantas, Bilin Aksun Guvenc and Levent Guvenc
Sensors 2022, 22(22), 8941; https://doi.org/10.3390/s22228941 - 18 Nov 2022
Cited by 10 | Viewed by 2998
Abstract
Connectivity between ground vehicles can be extended to include aerial vehicles for coordinated missions. Using Vehicle-to-Everything (V2X) communication technologies, a communication link can be established between Connected and Autonomous Vehicles (CAVs) and Unmanned Aerial Vehicles (UAVs). Hardware implementation and testing of such a ground-to-air communication link are crucial for real-life applications. In this paper, V2X communication and a coordinated mission of a CAV and a UAV are presented. Four methods were used to establish communication between the hardware and software components, namely Dedicated Short Range Communication (DSRC), the User Datagram Protocol (UDP), 4G internet-based WebSocket, and the Transmission Control Protocol (TCP). These communication links were used together in a real-life use case called the Quick Clear demonstration. In this scenario, the first aim was to send the accident location from the CAV to the UAV through DSRC communication. On the UAV side, the wired connection between the DSRC modem and the Raspberry Pi companion computer used UDP to pass the accident location from the CAV to the companion computer. The Raspberry Pi first connected to a traffic contingency management system (CMP) through TCP to send the CAV and UAV locations, as well as the accident location, to the CMP. The Raspberry Pi also used WebSocket communication to connect to a web server and upload photos taken by the camera mounted on the UAV. The Quick Clear demonstration scenario was tested in both stationary and dynamic flight cases. The latency results show satisfactory data transfer speeds between the test components, with UDP having the lowest latency, and the packet drop percentage analysis shows that DSRC performed best among the four methods studied. Overall, the results of this experimental study show that this communication structure can be used successfully in real-life scenarios. Full article
(This article belongs to the Special Issue Advances in Sensor Related Technologies for Autonomous Driving)
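To make the described data path more concrete, here is a minimal sketch of the companion-computer side: accident-location messages arrive over UDP (as from the DSRC modem) and are forwarded over a TCP connection to a management server. The host name, ports, and JSON message format below are hypothetical placeholders, not the values used in the demonstration.

```python
import json
import socket

UDP_PORT = 5005                        # hypothetical port for packets from the DSRC modem
CMP_ADDR = ("cmp.example.org", 9000)   # hypothetical management-server endpoint

def relay_accident_location():
    """Receive accident-location packets over UDP and forward them over TCP."""
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.bind(("0.0.0.0", UDP_PORT))
    tcp = socket.create_connection(CMP_ADDR)
    try:
        while True:
            data, _addr = udp.recvfrom(1024)    # e.g. b'{"lat": 40.00, "lon": -83.02}'
            msg = json.loads(data.decode("utf-8"))
            tcp.sendall((json.dumps(msg) + "\n").encode("utf-8"))
    finally:
        tcp.close()
        udp.close()
```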

21 pages, 7924 KiB  
Article
Optimization of On-Demand Shared Autonomous Vehicle Deployments Utilizing Reinforcement Learning
by Karina Meneses-Cime, Bilin Aksun Guvenc and Levent Guvenc
Sensors 2022, 22(21), 8317; https://doi.org/10.3390/s22218317 - 29 Oct 2022
Cited by 8 | Viewed by 1654
Abstract
Ride-hailed shared autonomous vehicles (SAVs) have emerged recently as an economically feasible way of introducing autonomous driving technologies while serving the mobility needs of under-served communities. There has also been corresponding research on optimizing the operation of these SAVs. However, current state-of-the-art research in this area treats very simple networks, neglects realistic representations of other traffic, and is of limited use for planning SAV service deployments. In contrast, this paper uses a recent autonomous shuttle deployment site in Columbus, Ohio, as the basis for mobility studies and the optimization of SAV fleet deployment. Furthermore, this paper creates an SAV dispatcher based on reinforcement learning (RL) to minimize passenger wait time and maximize the number of passengers served. The resulting taxi dispatcher is then simulated in a realistic scenario while avoiding over-fitting to the area. It is found that an RL-aided taxi dispatcher algorithm can greatly improve the performance of an SAV deployment by increasing the overall number of trips completed and passengers served while decreasing passenger wait time. Full article
(This article belongs to the Special Issue Advances in Sensor Related Technologies for Autonomous Driving)
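For readers unfamiliar with RL-based dispatching, the sketch below shows a generic tabular Q-learning update together with an illustrative reward that credits served passengers and penalizes accumulated wait time. The paper's actual state representation, reward shaping, and learning algorithm are not reproduced here and may differ.

```python
from collections import defaultdict

def dispatch_reward(passengers_served, total_wait_s, wait_penalty=0.01):
    """Illustrative reward: credit served passengers, penalize accumulated waiting time."""
    return passengers_served - wait_penalty * total_wait_s

class TabularDispatcher:
    """Generic tabular Q-learning for assigning an SAV (action) to a request state."""

    def __init__(self, alpha=0.1, gamma=0.95):
        self.q = defaultdict(float)   # (state, action) -> estimated value
        self.alpha = alpha            # learning rate
        self.gamma = gamma            # discount factor

    def update(self, state, action, reward, next_state, next_actions):
        # Standard Q-learning backup toward the best estimated next action.
        best_next = max((self.q[(next_state, a)] for a in next_actions), default=0.0)
        key = (state, action)
        self.q[key] += self.alpha * (reward + self.gamma * best_next - self.q[key])
```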

13 pages, 2943 KiB  
Article
Free Space Detection Algorithm Using Object Tracking for Autonomous Vehicles
by Yeongwon Lee and Byungyong You
Sensors 2022, 22(1), 315; https://doi.org/10.3390/s22010315 - 31 Dec 2021
Cited by 5 | Viewed by 4874
Abstract
In this paper, we propose a new free space detection algorithm for autonomous vehicle driving. Previous free space detection algorithms often use only the per-frame location of obstacles, without information on their speed. In this case, an inefficient path may be generated because the behavior of the obstacles cannot be predicted. To compensate for this shortcoming, the proposed algorithm uses obstacle speed information. Through object tracking, the dynamic behavior of obstacles around the vehicle is identified and predicted, and free space is detected based on this prediction. Within the free space, areas in which driving is possible are separated from areas in which it is not, and a route is created according to this classification. By comparing the paths generated by the previous algorithm and the proposed algorithm, it is confirmed that the proposed algorithm generates more efficient vehicle driving paths. Full article
(This article belongs to the Special Issue Advances in Sensor Related Technologies for Autonomous Driving)
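The key idea, using tracked obstacle speed rather than position alone, can be illustrated with a constant-velocity prediction step: a candidate cell is considered free only if no tracked obstacle is predicted to come within a safety distance of it over a short horizon. This is a simplified sketch under that assumption, not the authors' algorithm, and the parameter values are arbitrary.

```python
def is_cell_free(cell_xy, tracked_obstacles, horizon_s=2.0, dt=0.2, safety_m=1.5):
    """Return True if no obstacle, propagated with a constant-velocity model,
    is predicted to come within safety_m of the candidate cell."""
    cx, cy = cell_xy
    for x, y, vx, vy in tracked_obstacles:   # position [m] and velocity [m/s] from tracking
        t = 0.0
        while t <= horizon_s:
            px, py = x + vx * t, y + vy * t
            if ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5 < safety_m:
                return False
            t += dt
    return True
```

Under this scheme, a static obstacle (vx = vy = 0) blocks only its current neighborhood, while a moving obstacle also blocks the cells along its predicted path, which is what allows the planner to avoid routes that would soon be occupied.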

Review


21 pages, 7766 KiB  
Review
Review of Vision-Based Deep Learning Parking Slot Detection on Surround View Images
by Guan Sheng Wong, Kah Ong Michael Goh, Connie Tee and Aznul Qalid Md. Sabri
Sensors 2023, 23(15), 6869; https://doi.org/10.3390/s23156869 - 02 Aug 2023
Cited by 2 | Viewed by 1832
Abstract
Autonomous vehicles are gaining popularity, and the development of automatic parking systems is a fundamental requirement. Accurately detecting parking slots is the first step towards achieving an automatic parking system. However, modern parking slots present various challenges for the detection task due to their different shapes, colors, and functionalities, and the influence of factors like lighting and obstacles. In this comprehensive review paper, we explore vision-based deep learning methods for parking slot detection. We categorize these methods into four main categories: object detection, image segmentation, regression, and graph neural networks, and provide detailed explanations and insights into the unique features and strengths of each category. Additionally, we analyze the performance of these methods using three widely used datasets: the Tongji Parking-slot Dataset 2.0 (ps 2.0), the Sejong National University (SNU) dataset, and the panoramic surround view (PSV) dataset, which have played a crucial role in assessing advances in parking slot detection. Finally, we summarize the findings of each method and outline future research directions in this field. Full article
(This article belongs to the Special Issue Advances in Sensor Related Technologies for Autonomous Driving)
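As a rough idea of how detectors in such a review are typically compared across datasets, the sketch below matches predicted parking-slot entrance points to ground truth within a distance threshold and reports precision and recall. The greedy matching rule and threshold are illustrative assumptions, not taken from the surveyed papers.

```python
def precision_recall(pred_points, gt_points, match_dist_m=0.5):
    """Greedy nearest-match evaluation of detected parking-slot entrance points."""
    matched_gt = set()
    true_pos = 0
    for px, py in pred_points:
        for i, (gx, gy) in enumerate(gt_points):
            if i in matched_gt:
                continue
            if ((px - gx) ** 2 + (py - gy) ** 2) ** 0.5 <= match_dist_m:
                matched_gt.add(i)
                true_pos += 1
                break
    precision = true_pos / len(pred_points) if pred_points else 0.0
    recall = true_pos / len(gt_points) if gt_points else 0.0
    return precision, recall
```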
