
Sensors for Smart Vehicle Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Vehicular Sensing".

Deadline for manuscript submissions: closed (15 December 2022) | Viewed by 26990

Special Issue Editors

Prof. Dr. Lu Xiong
Guest Editor
School of Automotive Studies, Tongji University, Shanghai 201804, China
Interests: autonomous vehicles; target tracking; vehicle engineering

Dr. Xin Xia
Guest Editor
Department of Civil and Environmental Engineering, University of California, Los Angeles, CA 90095, USA
Interests: cooperative driving automation; state estimation; cooperative localization; cooperative perception; dynamic control of connected autonomous vehicles

Dr. Jiaqi Ma
Guest Editor
Department of Civil and Environmental Engineering, University of California, Los Angeles, CA 90095, USA
Interests: connected and automated vehicles; intelligent transportation systems

Dr. Zhaojian Li
Guest Editor
1. Department of Mechanical Engineering, Michigan State University, East Lansing, MI, USA
2. Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI, USA
Interests: robotics and autonomous vehicles; intelligent transportation systems; reinforcement learning; vehicle dynamics; optimal control

Special Issue Information

Dear Colleagues,

With the development of intelligent transportation systems, autonomous vehicles have attracted extensive attention from scholars in both academia and industry. To improve the safety, comfort, and performance of vehicles, reliable perception of the environment as well as accurate and robust knowledge of the vehicle state (position, velocity, and attitude) are particularly important. Environmental perception is realized by processing data from the on-board sensors of autonomous vehicles (LiDAR, radar, camera, GPS, IMU, etc.) together with information shared through communication (in-vehicle networks, wireless networks, etc.). Simultaneously, the vehicle state (pose, velocity, and position) can be estimated using various positioning algorithms (Bayesian filtering, optimization, machine learning, etc.). In addition, further progress is needed in decision-making, planning, and control systems to ensure reliable autonomous driving performance. To advance autonomous driving technologies in these fields, this Special Issue therefore aims to introduce technologies related to environmental perception, state estimation, and control systems for autonomous vehicles. Original research and comprehensive reviews are welcome. Potential topics include, but are not limited to, the following (an illustrative state-estimation sketch follows the list):

  • Autonomous driving systems;
  • Sensor fusion;
  • Environmental perception (object detection and tracking);
  • Machine learning techniques;
  • GNSS positioning, inertial navigation, and integrated navigation;
  • Simultaneous localization and mapping;
  • Vehicle state estimation;
  • Security for autonomous vehicles;
  • Vehicle dynamics control.
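
As an illustration of the state-estimation theme in the scope above, the sketch below shows a minimal one-dimensional constant-velocity Kalman filter that fuses noisy GNSS position fixes with a simple motion model. It is not taken from any of the published papers; the sampling period and noise covariances are illustrative assumptions.

```python
import numpy as np

# Minimal 1-D constant-velocity Kalman filter: state x = [position, velocity].
# Sample period and noise covariances below are illustrative assumptions.
dt = 0.1                                   # sample period (s), assumed
F = np.array([[1.0, dt], [0.0, 1.0]])      # state-transition model
H = np.array([[1.0, 0.0]])                 # GNSS measures position only
Q = np.diag([0.05, 0.1])                   # process noise covariance, assumed
R = np.array([[2.0]])                      # GNSS measurement noise covariance, assumed

def kf_step(x, P, z):
    """One predict/update cycle with a scalar GNSS position measurement z."""
    x_pred = F @ x                         # predict state
    P_pred = F @ P @ F.T + Q               # predict covariance
    y = np.array([[z]]) - H @ x_pred       # innovation
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    return x_pred + K @ y, (np.eye(2) - K @ H) @ P_pred

# Example: filter a short synthetic GNSS track.
x, P = np.zeros((2, 1)), np.eye(2)
for z in [0.0, 0.9, 2.1, 2.9, 4.2]:
    x, P = kf_step(x, P, z)
print("estimated position and velocity:", x.ravel())
```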

Prof. Dr. Lu Xiong
Dr. Xin Xia
Dr. Jiaqi Ma
Dr. Zhaojian Li
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (7 papers)


Research


21 pages, 12879 KiB  
Article
A Portable Multi-Modal Cushion for Continuous Monitoring of a Driver’s Vital Signs
by Onno Linschmann, Durmus Umutcan Uguz, Bianca Romanski, Immo Baarlink, Pujitha Gunaratne, Steffen Leonhardt, Marian Walter and Markus Lueken
Sensors 2023, 23(8), 4002; https://doi.org/10.3390/s23084002 - 14 Apr 2023
Cited by 2 | Viewed by 1628
Abstract
With higher levels of automation in vehicles, the need for robust driver monitoring systems increases, since it must be ensured that the driver can intervene at any moment. Drowsiness, stress and alcohol are still the main sources of driver distraction. However, physiological problems such as heart attacks and strokes also pose a significant risk to driver safety, especially with respect to the ageing population. In this paper, a portable cushion with four sensor units with multiple measurement modalities is presented. Capacitive electrocardiography, reflective photoplethysmography, magnetic induction measurement and seismocardiography are performed with the embedded sensors. The device can monitor the heart and respiratory rates of a vehicle driver. The promising results of the first proof-of-concept study with twenty participants in a driving simulator not only demonstrate the accuracy of the heart rate (above 70% of medical-grade heart rate estimations according to IEC 60601-2-27) and respiratory rate measurements (around 30% with errors below 2 BPM), but also that the cushion might be useful for monitoring morphological changes in the capacitive electrocardiogram in some cases. The measurements can potentially be used to detect drowsiness and stress and thus the fitness of the driver, since heart rate variability and breathing rate variability can be captured. They are also useful for the early prediction of cardiovascular diseases, one of the main reasons for premature death. The data are publicly available in the UnoVis dataset.
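
To make the heart-rate estimation step concrete, the following is a minimal sketch of deriving a heart rate from any quasi-periodic cardiac signal (ECG, PPG or SCG) by band-pass filtering and peak detection. It is a generic illustration, not the signal-processing chain used in the paper; the filter band and peak-detection parameters are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def estimate_heart_rate(signal, fs):
    """Estimate heart rate (BPM) from a quasi-periodic cardiac signal sampled at fs Hz."""
    # Band-pass around plausible heart rates (0.7-3.5 Hz, i.e. 42-210 BPM); band is assumed.
    b, a = butter(2, [0.7 / (fs / 2), 3.5 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, signal)
    # Require beats at least 0.3 s apart and reasonably prominent.
    peaks, _ = find_peaks(filtered, distance=int(0.3 * fs),
                          prominence=0.5 * np.std(filtered))
    if len(peaks) < 2:
        return None
    return 60.0 / np.mean(np.diff(peaks) / fs)    # beats per minute

# Demo on a synthetic 75-BPM signal with noise.
fs = 250
t = np.arange(0, 30, 1 / fs)
demo = np.sin(2 * np.pi * 1.25 * t) + 0.2 * np.random.randn(t.size)
print(estimate_heart_rate(demo, fs))
```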

17 pages, 15784 KiB  
Article
Design, Implementation, and Empirical Validation of a Framework for Remote Car Driving Using a Commercial Mobile Network
by Javier Saez-Perez, Qi Wang, Jose M. Alcaraz-Calero and Jose Garcia-Rodriguez
Sensors 2023, 23(3), 1671; https://doi.org/10.3390/s23031671 - 03 Feb 2023
Cited by 4 | Viewed by 2221
Abstract
Despite the fact that autonomous driving systems are progressing in terms of their automation levels, the achievement of fully self-driving cars is still far from realization. Currently, most new cars accord with the Society of Automotive Engineers (SAE) Level 2 of automation, which requires the driver to be able to take control of the car when needed: for this reason, it is believed that between now and the achievement of fully automated self-driving car systems, there will be a transition in which remote driving cars will be a reality. In addition, there are tele-operation use cases that require remote driving for health or safety reasons. However, there is a lack of detailed design and implementation available in the public domain for remote driving cars; therefore, in this work we propose a functional framework for remote driving vehicles. We implemented a prototype using a commercial car. The prototype was connected to a commercial 4G/5G mobile network, and empirical experiments were conducted to validate the prototype's functions and to evaluate its performance in real-world driving conditions. The design, implementation, and empirical evaluation provided detailed technical insights into this important research and innovation area.
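
As a rough illustration of how a remote-driving link can carry control commands over a mobile network, the sketch below sends time-stamped, sequence-numbered steering/throttle/brake frames over UDP so a vehicle-side gateway could discard stale or out-of-order frames. The endpoint address, frame format and 50 Hz rate are assumptions for illustration, not the framework described in the paper.

```python
import json
import socket
import time

# Hypothetical address of the in-vehicle gateway; not taken from the paper.
VEHICLE_ADDR = ("192.0.2.10", 5005)

def send_control(sock, seq, steering, throttle, brake):
    """Send one time-stamped control frame; the receiver can drop stale or out-of-order frames."""
    frame = {
        "seq": seq,               # monotonically increasing sequence number
        "t": time.time(),         # sender timestamp, useful for latency measurement
        "steering": steering,     # normalized [-1, 1]
        "throttle": throttle,     # normalized [0, 1]
        "brake": brake,           # normalized [0, 1]
    }
    sock.sendto(json.dumps(frame).encode("utf-8"), VEHICLE_ADDR)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for seq in range(5):              # demo: five frames at 20 ms spacing (50 Hz)
    send_control(sock, seq, steering=0.1, throttle=0.3, brake=0.0)
    time.sleep(0.02)
sock.close()
```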

19 pages, 4099 KiB  
Article
System and Method for Driver Drowsiness Detection Using Behavioral and Sensor-Based Physiological Measures
by Jaspreet Singh Bajaj, Naveen Kumar, Rajesh Kumar Kaushal, H. L. Gururaj, Francesco Flammini and Rajesh Natarajan
Sensors 2023, 23(3), 1292; https://doi.org/10.3390/s23031292 - 23 Jan 2023
Cited by 6 | Viewed by 8338
Abstract
The number of road accidents caused by driver drowsiness is one of the world’s major challenges. These accidents lead to numerous fatal and non-fatal injuries which impose substantial financial strain on individuals and governments every year. As a result, it is critical to prevent catastrophic accidents and reduce the financial burden on society caused by driver drowsiness. The research community has primarily focused on two approaches to identify driver drowsiness during the last decade: intrusive and non-intrusive. The intrusive approach includes physiological measures, and the non-intrusive approach includes vehicle-based and behavioral measures. In an intrusive approach, sensors are used to detect driver drowsiness by placing them on the driver’s body, whereas in a non-intrusive approach, a camera is used for drowsiness detection by identifying yawning patterns, eyelid movement and head inclination. Noticeably, most research on driver drowsiness detection has used only single measures, which failed to produce good outcomes. Furthermore, these measures were only functional in certain conditions. This paper proposes a model that combines the two approaches, non-intrusive and intrusive, to detect driver drowsiness. Behavioral measures as a non-intrusive approach and sensor-based physiological measures as an intrusive approach are combined to detect driver drowsiness. The proposed hybrid model uses AI-based Multi-Task Cascaded Convolutional Neural Networks (MTCNN) as a behavioral measure to recognize the driver’s facial features, and the Galvanic Skin Response (GSR) sensor as a physiological measure to collect the skin conductance of the driver, which helps to increase the overall accuracy. Furthermore, the model’s efficacy has been computed in a simulated environment. The outcome shows that the proposed hybrid model is capable of identifying the transition from awake to a drowsy state in the driver in all conditions with an efficacy of 91%.
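
The hybrid idea of fusing behavioral and physiological cues can be sketched as a simple weighted score, as below. The cue names, weights and thresholds are hypothetical placeholders, not the paper's model, which uses MTCNN-based facial analysis together with GSR measurements.

```python
from dataclasses import dataclass

@dataclass
class DrowsinessSample:
    eye_closure_ratio: float   # fraction of recent frames with eyes closed (camera pipeline)
    yawn_rate: float           # yawns per minute from the behavioral model
    gsr_drop: float            # relative drop in skin conductance vs. an alert baseline

def fuse_drowsiness(sample, w_eye=0.5, w_yawn=0.2, w_gsr=0.3, threshold=0.5):
    """Weighted fusion of behavioral and physiological cues into a drowsy/awake decision."""
    score = (w_eye * min(sample.eye_closure_ratio / 0.4, 1.0)   # PERCLOS-style cue
             + w_yawn * min(sample.yawn_rate / 3.0, 1.0)
             + w_gsr * min(max(sample.gsr_drop, 0.0) / 0.3, 1.0))
    return ("drowsy" if score >= threshold else "awake"), round(score, 3)

print(fuse_drowsiness(DrowsinessSample(eye_closure_ratio=0.35, yawn_rate=2.0, gsr_drop=0.25)))
```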

18 pages, 12953 KiB  
Article
Vision-Based Autonomous Following of a Moving Platform and Landing for an Unmanned Aerial Vehicle
by Jesús Morales, Isabel Castelo, Rodrigo Serra, Pedro U. Lima and Meysam Basiri
Sensors 2023, 23(2), 829; https://doi.org/10.3390/s23020829 - 11 Jan 2023
Cited by 6 | Viewed by 3118
Abstract
Interest in Unmanned Aerial Vehicles (UAVs) has increased due to their versatility and variety of applications; however, their battery life limits their applications. Heterogeneous multi-robot systems can offer a solution to this limitation by allowing an Unmanned Ground Vehicle (UGV) to serve as a recharging station for the aerial one. Moreover, cooperation between aerial and terrestrial robots allows them to overcome other individual limitations, such as communication link coverage or accessibility, and to solve highly complex tasks, e.g., environment exploration, infrastructure inspection or search and rescue. This work proposes a vision-based approach that enables an aerial robot to autonomously detect, follow, and land on a mobile ground platform. For this purpose, ArUco fiducial markers are used to estimate the relative pose between the UAV and UGV by processing RGB images provided by a monocular camera on board the UAV. The pose estimation is fed to a trajectory planner and four decoupled controllers to generate speed set-points relative to the UAV. Using a cascade loop strategy, these set-points are then sent to the UAV autopilot for inner loop control. The proposed solution has been tested both in simulation, with a digital twin of a solar farm using ROS, Gazebo and Ardupilot Software-in-the-Loop (SiL); and in the real world at IST Lisbon’s outdoor facilities, with a UAV built on the basis of a DJ550 Hexacopter and a modified Jackal ground robot from DJI and Clearpath Robotics, respectively. Pose estimation, trajectory planning and speed set-points are computed on board the UAV, using a Single Board Computer (SBC) running Ubuntu and ROS, without the need for external infrastructure.
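
A minimal sketch of marker-based relative pose estimation of the kind described above is shown below, assuming the classic cv2.aruco module from opencv-contrib-python (function names differ in newer OpenCV releases). The dictionary choice, marker size and camera calibration inputs are assumptions, and this is not the authors' implementation.

```python
import cv2
import numpy as np

MARKER_SIZE = 0.30  # marker side length in metres, assumed

# 3-D corner coordinates of the marker in its own frame (z = 0 plane).
OBJ_POINTS = np.array([
    [-MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
    [ MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
    [ MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
    [-MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
], dtype=np.float32)

def relative_pose(frame, camera_matrix, dist_coeffs):
    """Return (rvec, tvec) of the first detected marker relative to the camera, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)
    if ids is None:
        return None
    # solvePnP maps the four known marker corners to their image projections.
    ok, rvec, tvec = cv2.solvePnP(OBJ_POINTS,
                                  corners[0].reshape(4, 2).astype(np.float32),
                                  camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else None
```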

18 pages, 4006 KiB  
Article
Computer Vision Based Pothole Detection under Challenging Conditions
by Boris Bučko, Eva Lieskovská, Katarína Zábovská and Michal Zábovský
Sensors 2022, 22(22), 8878; https://doi.org/10.3390/s22228878 - 17 Nov 2022
Cited by 17 | Viewed by 5437
Abstract
Road discrepancies such as potholes and road cracks are often present in our day-to-day commuting and travel. The cost of damage repairs caused by potholes has always been a concern for owners of any type of vehicle. Thus, an early detection process can contribute to the swift response of road maintenance services and the prevention of pothole-related accidents. In this paper, automatic detection of potholes is performed using the computer vision model library You Only Look Once version 3, also known as Yolo v3. Light and weather during driving naturally affect our ability to observe road damage. Such adverse conditions also negatively influence the performance of visual object detectors. The aim of this work was to examine the effect adverse conditions have on pothole detection. The basic design of this study is therefore composed of two main parts: (1) dataset creation and data processing, and (2) dataset experiments using Yolo v3. Additionally, Sparse R-CNN was incorporated into our experiments. For this purpose, a dataset consisting of subsets of images recorded under different light and weather conditions was developed. To the best of our knowledge, there exists no detailed analysis of pothole detection performance under adverse conditions. Despite the existence of newer libraries, Yolo v3 is still a competitive architecture that provides good results with lower hardware requirements.
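
For readers who want to try a comparable detector, the sketch below runs a Darknet-format Yolo v3 network through OpenCV's DNN module and applies non-maximum suppression. The config and weight file names refer to a hypothetical pothole-trained model; the thresholds and input size are assumptions, not the paper's settings.

```python
import cv2
import numpy as np

# Hypothetical file names for a YOLOv3 network fine-tuned on a pothole dataset.
net = cv2.dnn.readNetFromDarknet("yolov3-pothole.cfg", "yolov3-pothole.weights")
out_names = net.getUnconnectedOutLayersNames()

def detect_potholes(image, conf_thresh=0.5, nms_thresh=0.4):
    """Return [(x, y, w, h, confidence), ...] pothole boxes for one BGR image."""
    h, w = image.shape[:2]
    blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    boxes, scores = [], []
    for output in net.forward(out_names):
        for det in output:                      # det = [cx, cy, bw, bh, objectness, class scores...]
            conf = float(det[4] * det[5:].max())
            if conf < conf_thresh:
                continue
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            scores.append(conf)
    keep = cv2.dnn.NMSBoxes(boxes, scores, conf_thresh, nms_thresh)
    return [(*boxes[i], scores[i]) for i in np.array(keep).flatten()]
```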

15 pages, 1437 KiB  
Article
Smart Steering Sleeve (S3): A Non-Intrusive and Integrative Sensing Platform for Driver Physiological Monitoring
by Chuwei Ye, Wen Li, Zhaojian Li, Gopi Maguluri, John Grimble, Joshua Bonatt, Jacob Miske, Nicusor Iftimia, Shaoting Lin and Michele Grimm
Sensors 2022, 22(19), 7296; https://doi.org/10.3390/s22197296 - 26 Sep 2022
Cited by 1 | Viewed by 1784
Abstract
Driving is a ubiquitous activity that requires both motor skills and cognitive focus. These aspects become more problematic for some seniors, who have underlying medical conditions and tend to lose some of these capabilities. Therefore, driving can be used as a controlled environment for the frequent, non-intrusive monitoring of bio-physical and cognitive status within drivers. Such information can then be utilized for enhanced assistive vehicle controls and/or driver health monitoring. In this paper, we present a novel multi-modal smart steering sleeve (S3) system with an integrated sensing platform that can non-intrusively and continuously measure a driver’s physiological signals, including electrodermal activity (EDA), electromyography (EMG), and hand pressure. The sensor suite was developed by combining low-cost interdigitated electrodes with a piezoresistive force sensor on a single, flexible polymer substrate. Comprehensive characterizations of the sensing modalities were performed, with promising results demonstrated. The sweat-sensing unit (SSU) for EDA monitoring works under a 100 Hz alternating current (AC) source. The EMG signal acquired by the EMG-sensing unit (EMGSU) was amplified to within 5 V. The force-sensing unit (FSU) for hand pressure detection has a range of 25 N. This flexible sensor was mounted on an off-the-shelf steering wheel sleeve, making it an add-on system that can be installed on any existing vehicle for convenient and wide-coverage driver monitoring. A cloud-based communication scheme was developed for ease of data collection and analysis. Sensing platform development, performance, and limitations, as well as other potential applications, are discussed in detail in this paper.
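
The sketch below illustrates, in generic terms, how raw ADC readings from an EDA electrode pair and a piezoresistive force sensor might be converted into skin conductance and grip force. The circuit topology, reference voltage, resistor value and sensitivity are assumptions for illustration and do not describe the S3 hardware.

```python
# Illustrative conversion of raw ADC readings into physical units for an EDA
# electrode pair and a piezoresistive force sensor. All circuit values below
# are assumptions, not the S3 design.
V_REF = 3.3          # ADC reference voltage (V), assumed
ADC_MAX = 4095       # 12-bit ADC, assumed
R_SERIES = 100_000   # series resistor in each voltage divider (ohms), assumed

def adc_to_voltage(raw):
    return raw * V_REF / ADC_MAX

def skin_conductance_uS(raw):
    """EDA in microsiemens, modelling the skin as the lower leg of a voltage divider."""
    v = adc_to_voltage(raw)
    r_skin = R_SERIES * v / (V_REF - v)   # skin resistance in ohms
    return 1e6 / r_skin                   # conductance in microsiemens

def grip_force_N(raw, sensitivity=0.16):
    """Hand-pressure estimate, assuming a roughly linear sensor over a 0-25 N range."""
    return min(adc_to_voltage(raw) / sensitivity, 25.0)

print(skin_conductance_uS(1800), grip_force_N(2400))
```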

Review


34 pages, 1493 KiB  
Review
An Exploration of Recent Intelligent Image Analysis Techniques for Visual Pavement Surface Condition Assessment
by Waqar S. Qureshi, Syed Ibrahim Hassan, Susan McKeever, David Power, Brian Mulry, Kieran Feighan and Dympna O’Sullivan
Sensors 2022, 22(22), 9019; https://doi.org/10.3390/s22229019 - 21 Nov 2022
Cited by 6 | Viewed by 3628
Abstract
Road pavement condition assessment is essential for maintenance, asset management, and budgeting for pavement infrastructure. Countries allocate a substantial annual budget to maintain and improve local, regional, and national highways. Pavement condition is assessed by measuring several pavement characteristics such as roughness, surface skid resistance, pavement strength, deflection, and visual surface distresses. Visual inspection identifies and quantifies surface distresses, and the condition is assessed using standard rating scales. This paper critically analyzes the research trends in the academic literature, professional practices and current commercial solutions for surface condition ratings by civil authorities. We observe that various surface condition rating systems exist, and each uses its own defined subset of pavement characteristics to evaluate pavement conditions. It is noted that automated visual sensing systems using intelligent algorithms can help reduce the cost and time required for assessing the condition of pavement infrastructure, especially for local and regional road networks. However, environmental factors, pavement types, and image collection devices are significant in this domain and lead to challenging variations. Commercial solutions for automatic pavement assessment exist, albeit with certain limitations. The topic is also a focus of academic research. More recently, academic research has pivoted toward deep learning, given that image data is now available in some form. However, research to automate pavement distress assessment often focuses on the regional pavement condition assessment standard that a country or state follows. We observe that the criteria a region adopts to make the evaluation depend on factors such as pavement construction type, type of road network in the area, flow and traffic, environmental conditions, and the region’s economic situation. We summarized a list of publicly available datasets for distress detection and pavement condition assessment. We listed approaches focusing on crack segmentation and methods concentrating on distress detection and identification using object detection and classification. We segregated the recent academic literature in terms of the camera’s view and the dataset used, the year and country in which the work was published, the F1 score, and the architecture type. It is observed that the literature tends to focus more on distress identification (“presence/absence” detection) but less on distress quantification, which is essential for developing approaches for automated pavement rating.
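
Since the review compares methods by their F1 scores, the sketch below shows one common way such a score is computed for distress detection: greedy IoU matching of predicted and ground-truth boxes followed by precision and recall. The IoU threshold of 0.5 is an assumption; evaluation protocols vary across the surveyed papers.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def detection_f1(predictions, ground_truth, iou_thresh=0.5):
    """F1 score via greedy one-to-one matching of predicted and ground-truth distress boxes."""
    unmatched_gt = list(ground_truth)
    tp = 0
    for p in predictions:
        best = max(unmatched_gt, key=lambda g: iou(p, g), default=None)
        if best is not None and iou(p, best) >= iou_thresh:
            tp += 1
            unmatched_gt.remove(best)
    fp, fn = len(predictions) - tp, len(unmatched_gt)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print(detection_f1([(10, 10, 50, 50), (60, 60, 90, 90)], [(12, 12, 48, 52)]))
```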
