Autonomous Driving: Advancements in Cognitive Perception Systems for Increased Level Autonomy

A special issue of Machines (ISSN 2075-1702). This special issue belongs to the section "Robotics, Mechatronics and Intelligent Machines".

Deadline for manuscript submissions: closed (31 December 2022) | Viewed by 26614

Special Issue Editors

Centre for Research and Technology Hellas, Information Technologies Institute, 57001 Thessaloniki, Greece
Interests: cognition and perception; active vision; human-aware SLAM and navigation; safe manipulation; social robots; robot behavior; human–robot collaboration; autonomous vehicles

Special Issue Information

Dear Colleagues,

In recent decades, research on autonomous driving technologies has enabled the automotive industry to bring vehicles equipped with Advanced Driver-Assistance Systems (ADAS) to the market. These systems render the daily driving experience safer and more attractive to humans. The majority of existing assistive driving solutions rank between Level-1 and Level-3 autonomy and involve mainly supportive systems. However, the industry remains far from attaining Level-4 and Level-5 autonomy, a challenge that demands vehicle operation in any road network and weather condition. It is evident that reaching fully autonomous operation (Level-5) requires future vehicles to be equipped with advanced perception and cognition capabilities that allow them to cope with urban, rural and semi-structured environments. Recent advancements in sensors provide 3D data that further augment the perception capabilities of autonomous vehicles, while the machine learning community has released powerful neural networks that amplify a vehicle’s awareness of its surroundings. However, few such approaches have reached mass production, mainly because such solutions typically require an abundance of training data and powerful computational units, which hinders their adoption by the automotive industry.

This Special Issue focuses on the recent advances in perception and cognition technologies of autonomous vehicles. Scientists are encouraged to submit their outstanding work towards addressing this issue, focusing on (but not limited to) the topics listed below:

  • Semantic segmentation of urban and/or rural environments.
  • Efficient navigation in urban environments.
  • Navigation in semi-structured and GPS-denied areas.
  • Trajectory planning and following for narrow areas.
  • Vehicle cognitive reaction methods.
  • Pedestrian detection.
  • Localization and navigation in parking areas.
  • Hybrid SLAM applications.
  • Collision avoidance and safe vehicle reactions.
  • Adaptive decision making and dependable behavior.
  • Light deep learning for autonomous driving.
  • Decentralised control schemes for driverless vehicles.
  • Fleet management systems for self-driving vehicles.
  • Driverless vehicles in supply chain and logistics.

Research, position and survey papers are welcome, in addition to papers proposing new datasets and novel evaluation methods.

Prof. Dr. Antonios Gasteratos
Dr. Ioannis Kostavelis
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Machines is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • autonomous driving
  • vehicle’s perception
  • vehicle’s cognition
  • 3D sensors
  • deep neural networks
  • SLAM
  • navigation
  • trajectory planning
  • collision avoidance
  • decentralized control
  • operations management
  • logistics


Published Papers (13 papers)


Editorial


3 pages, 166 KiB  
Editorial
Editorial
by Antonios Gasteratos and Ioannis Kostavelis
Machines 2023, 11(4), 477; https://doi.org/10.3390/machines11040477 - 14 Apr 2023
Viewed by 706
Abstract
In recent decades, the research on autonomous driving technologies has enabled the automotive industry to introduce vehicles supported by Advanced Driver-Assistance Systems (ADAS) to the market [...] Full article

Research


16 pages, 472 KiB  
Article
On the Legal and Economic Implications of Tele-Driving
by Thomas Hoffmann and Gunnar Prause
Machines 2023, 11(3), 331; https://doi.org/10.3390/machines11030331 - 27 Feb 2023
Cited by 1 | Viewed by 2315
Abstract
While the idea of autonomous vehicles has been enthusiastically embraced by scientists and commercial markets alike, ranging from solving the last mile problem across shared economy models in various segments to human transportation logistics, more than just a few aspects require further development before driverless urban logistics can be organized more thoroughly and meaningfully for our practical purposes. Before fully autonomous vehicles become standard, many of these shortcomings can be addressed (in part) by the remote operation of vehicles. Besides the various technological challenges, remote operation of vehicles also has many important legal and economic implications, impacting a wide area, including data protection, liability for torts performed, and mundane fields such as road traffic law. Based on a case study of a start-up developing remote operation solutions in Germany (Vay), this paper analyses and further develops the regulatory framework of remote operation solutions by highlighting their legal and economic implications. Since remote operation solutions consist of cyber-physical systems, this research is located in the context of Smart Cities and Industry 5.0, i.e., our research contributes to the related regulatory framework of the Smart City concept as well as to Industry 5.0 in international terms. Finally, the paper discusses future perspectives and proposes specific modes of compliance. Full article

18 pages, 4867 KiB  
Article
The Graph Neural Network Detector Based on Neighbor Feature Alignment Mechanism in LIDAR Point Clouds
by Xinyi Liu, Baofeng Zhang and Na Liu
Machines 2023, 11(1), 116; https://doi.org/10.3390/machines11010116 - 14 Jan 2023
Cited by 2 | Viewed by 1619
Abstract
Three-dimensional (3D) object detection has a vital effect on the environmental awareness task of autonomous driving scenarios. At present, the accuracy of 3D object detection has significant improvement potential. In addition, a 3D point cloud is not uniformly distributed on a regular grid because of its disorder, dispersion, and sparseness. The strategy of the convolution neural networks (CNNs) for 3D point cloud feature extraction has the limitations of potential information loss and empty operation. Therefore, we propose a graph neural network (GNN) detector based on neighbor feature alignment mechanism for 3D object detection in LiDAR point clouds. This method exploits the structural information of graphs, and it aggregates the neighbor and edge features to update the state of vertices during the iteration process. This method enables the reduction of the offset error of the vertices, and ensures the invariance of the point cloud in the spatial domain. For experiments performed on the KITTI public benchmark, the results demonstrate that the proposed method achieves competitive experimental results. Full article
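
The vertex-update idea summarized above can be illustrated with a single message-passing step over a k-nearest-neighbor graph of LiDAR points. The sketch below is a minimal illustration only, not the authors' detector: the graph construction, feature sizes, distance-kernel weighting, and the `gnn_step` function are all assumptions.

```python
# Minimal single message-passing step over a k-NN graph of LiDAR points.
# Illustrative only: feature sizes, the alignment weighting, and the update
# rule are assumptions, not the paper's exact formulation.
import numpy as np
from scipy.spatial import cKDTree

def gnn_step(points, features, k=8):
    """One vertex update: aggregate neighbor features weighted by relative offset."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)          # nearest neighbors, incl. the point itself
    idx = idx[:, 1:]                              # drop self
    rel = points[idx] - points[:, None, :]        # (N, k, 3) relative offsets
    neigh = features[idx]                         # (N, k, F) neighbor features
    # Crude "alignment": weight neighbor features by a distance kernel on the offsets.
    w = np.exp(-np.linalg.norm(rel, axis=-1, keepdims=True))
    msg = (w * neigh).sum(axis=1) / (w.sum(axis=1) + 1e-8)
    return features + msg                         # residual vertex update

pts = np.random.rand(1024, 3).astype(np.float32)
feat = np.random.rand(1024, 32).astype(np.float32)
print(gnn_step(pts, feat).shape)  # (1024, 32)
```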

27 pages, 17500 KiB  
Article
Cooperative Adaptive Cruise Algorithm Based on Trajectory Prediction for Driverless Buses
by Hui Xie and Pengbo Xiao
Machines 2022, 10(10), 893; https://doi.org/10.3390/machines10100893 - 03 Oct 2022
Cited by 3 | Viewed by 1381
Abstract
Cooperative adaptive cruise control (CACC) technology offers a proven solution to the current traffic congestion problems caused by the yearly growth of car ownership. Coping with random lane changes of bypass vehicles under congested traffic conditions is a challenge for urban driverless vehicles. In this paper, to meet the demand for high-comfort driverless buses on urban roads, an active anti-disturbance following control method based on bystander-vehicle intention recognition and trajectory prediction is proposed for the cut-in scenario, in order to alleviate the disturbance caused by bystander vehicles, improve passenger comfort, and suppress multi-vehicle oscillation. The simulation results show that the queue based on the intelligent prediction system reduces the traffic oscillation rate by an average of 9.8% and improves comfort by an average of 11% under side-vehicle cut-in conditions. The real-vehicle test results show that vehicles using the intelligent prediction algorithm achieve a 25.5% reduction in maximum speed adjustment, a 14.5 m average reduction in following distance, a 6% improvement in comfort, and a 27% improvement in rear-vehicle comfort. Full article
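
The following-control idea can be sketched as a constant time-gap spacing policy that opens the desired gap when a cut-in is predicted. This is a minimal sketch only; the gains, the time gap, and the `cut_in_predicted` flag are assumed parameters, not the paper's controller.

```python
# Constant time-gap following controller with a simple cut-in adjustment.
# A minimal sketch of the control idea; all gains and thresholds are illustrative.
def cacc_accel(gap, ego_v, lead_v, lead_a, cut_in_predicted=False,
               time_gap=1.5, standstill=5.0, kp=0.45, kd=0.25, kff=0.6):
    desired_gap = standstill + time_gap * ego_v
    if cut_in_predicted:
        desired_gap *= 1.3          # open the gap early to absorb the cut-in
    gap_error = gap - desired_gap
    rel_speed = lead_v - ego_v
    return kp * gap_error + kd * rel_speed + kff * lead_a

a_cmd = cacc_accel(gap=22.0, ego_v=10.0, lead_v=9.0, lead_a=-0.3,
                   cut_in_predicted=True)
print(f"commanded acceleration: {a_cmd:.2f} m/s^2")
```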

17 pages, 6631 KiB  
Article
Millimeter-Wave Radar and Vision Fusion Target Detection Algorithm Based on an Extended Network
by Chunyang Qi, Chuanxue Song, Naifu Zhang, Shixin Song, Xinyu Wang and Feng Xiao
Machines 2022, 10(8), 675; https://doi.org/10.3390/machines10080675 - 10 Aug 2022
Cited by 1 | Viewed by 1762
Abstract
The need for a vehicle to perceive information about the external environment as an independent intelligent individual has grown with the progress of intelligent driving from primary driver assistance to high-level autonomous driving. The ability of a common independent sensing unit to sense the external environment is limited by the sensor’s own characteristics and its algorithm level. Hence, a common independent sensing unit fails to obtain comprehensive sensing information independently under conditions such as rain, fog, and night. Accordingly, an extended network-based fusion target detection algorithm for millimeter-wave radar and vision fusion is proposed in this work by combining the complementary perceptual performance of in-vehicle sensing elements, cost effectiveness, and the maturity of independent detection technologies. Feature-level fusion is first used in this work according to the analysis of technical routes of millimeter-wave radar and vision fusion. Training and test evaluation of the algorithm are carried out on the nuScenes dataset and test data from a homemade data acquisition platform. An extended investigation of the RetinaNet one-stage target detection algorithm based on the VGG-16+FPN backbone detection network is then conducted in this work to introduce millimeter-wave radar images as auxiliary information for visual image target detection. We use two-channel radar and three-channel visual images as inputs of the fusion network. We also propose an extended VGG-16 network applicable to millimeter-wave radar and visual fusion and an extended feature pyramid network. Test results showed that the mAP of the proposed network improves by 2.9% and the small-target accuracy is enhanced by 18.73% compared with those of the reference network for pure visual image target detection. This finding verifies the detection capability and algorithmic feasibility of the proposed extended fusion target detection network for visually insensitive targets. Full article
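
The two-channel radar plus three-channel visual input described above can be illustrated with an early-fusion stem whose first convolution accepts five channels. The sketch assumes a VGG-style layout and illustrative layer widths; it is not the paper's extended network.

```python
# Extending a VGG-style first conv stage to accept 3 camera + 2 radar channels.
# Channel counts follow the abstract; fusion point and layer sizes are illustrative.
import torch
import torch.nn as nn

class EarlyFusionStem(nn.Module):
    def __init__(self, radar_channels=2, rgb_channels=3, out_channels=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(rgb_channels + radar_channels, out_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, 3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, rgb, radar):
        x = torch.cat([rgb, radar], dim=1)   # (B, 5, H, W) early fusion
        return self.conv(x)

stem = EarlyFusionStem()
rgb = torch.randn(1, 3, 360, 640)
radar = torch.randn(1, 2, 360, 640)
print(stem(rgb, radar).shape)  # torch.Size([1, 64, 360, 640])
```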

20 pages, 5098 KiB  
Article
Online Multiple Object Tracking Using Spatial Pyramid Pooling Hashing and Image Retrieval for Autonomous Driving
by Hongjian Wei and Yingping Huang
Machines 2022, 10(8), 668; https://doi.org/10.3390/machines10080668 - 09 Aug 2022
Cited by 3 | Viewed by 1728
Abstract
Multiple object tracking (MOT) is a fundamental issue and has attracted considerable attention in the autonomous driving community. This paper presents a novel MOT framework for autonomous driving. The framework consists of two stages of object representation and data association. In the stage of object representation, we employ appearance, motion, and position features to characterize objects. We design a spatial pyramidal pooling hash network (SPPHNet) to generate the appearance features. Multiple-level representative features in the SPPHNet are mapped into a similarity-preserving binary space, called hash features. The hash features retain the visual discriminability of high-dimensional features and are beneficial for computational efficiency. For data association, a two-tier data association scheme is designed to address the occlusion issue, consisting of an affinity cost model and a hash-based image retrieval model. The affinity cost model accommodates the hash features, disparity, and optical flow as the first tier of data association. The hash-based image retrieval model exploits the hash features and adopts image retrieval technology to handle reappearing objects as the second tier of data association. Experiments on the KITTI public benchmark dataset and our campus scenario sequences show that our method has superior tracking performance to the state-of-the-art vision-based MOT methods. Full article
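
The hash-feature idea can be illustrated with a toy affinity model that combines the normalized Hamming distance between binary codes with a crude motion cue and solves the assignment with the Hungarian algorithm. The weighting, the motion term, and the `associate` interface are assumptions for the sketch, not the SPPHNet-based two-tier scheme.

```python
# Binary (hash) appearance features compared by Hamming distance and combined
# with a simple motion cue into an affinity cost. Illustrative sketch only.
import numpy as np
from scipy.optimize import linear_sum_assignment

def hamming(a, b):
    """Pairwise Hamming distance between two sets of {0,1} hash codes."""
    return (a[:, None, :] != b[None, :, :]).sum(-1)

def associate(track_hashes, det_hashes, track_boxes, det_boxes, w_app=0.7):
    app_cost = hamming(track_hashes, det_hashes) / track_hashes.shape[1]
    # crude motion cue: normalized center distance between boxes (x, y, w, h)
    tc, dc = track_boxes[:, :2], det_boxes[:, :2]
    motion_cost = np.linalg.norm(tc[:, None] - dc[None, :], axis=-1) / 100.0
    cost = w_app * app_cost + (1 - w_app) * motion_cost
    rows, cols = linear_sum_assignment(cost)      # Hungarian assignment
    return list(zip(rows, cols))

tracks = np.random.randint(0, 2, (4, 128))
dets = np.random.randint(0, 2, (5, 128))
tb, db = np.random.rand(4, 4) * 100, np.random.rand(5, 4) * 100
print(associate(tracks, dets, tb, db))
```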

15 pages, 4661 KiB  
Article
An Improved Approach for Real-Time Taillight Intention Detection by Intelligent Vehicles
by Bingming Tong, Wei Chen, Changzhen Li, Luyao Du, Zhihao Xiao and Donghua Zhang
Machines 2022, 10(8), 626; https://doi.org/10.3390/machines10080626 - 29 Jul 2022
Cited by 5 | Viewed by 1374
Abstract
Vehicle taillight intention detection is an important application for perception and decision making by intelligent vehicles. However, effectively improving detection precision with sufficient real-time performance is a critical issue in practical applications. In this study, a vision-based improved lightweight approach focusing on small object detection with a multi-scale strategy is proposed to achieve application-oriented real-time vehicle taillight intention detection. The proposed real-time detection model is designed based on YOLOv4-tiny, and a spatial pyramid pooling fast (SPPF) module is employed to enrich the output layer features. An additional detection scale is added to expand the receptive field corresponding to small objects. Meanwhile, a path aggregation network (PANet) is used to improve the feature resolution of small objects by constructing a feature pyramid with connections between feature layers. An expanded dataset based on the BDD100K dataset is established to verify the performance of the proposed method. Experimental results on the expanded dataset reveal that the proposed method can increase the average precision (AP) of vehicle, brake, left-turn, and right-turn signals by 1.81, 15.16, 40.04, and 41.53%, respectively. The mean average precision (mAP) can be improved by 24.63% (from 62.20% to 86.83%) at over 70 frames per second (FPS), proving that the proposed method can effectively improve detection precision with good real-time performance. Full article
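
The SPPF module mentioned in the abstract follows a well-known pattern of three chained max-pooling layers whose outputs are concatenated. The sketch below uses illustrative channel widths and is not the paper's trained detector head.

```python
# A spatial pyramid pooling fast (SPPF) block in the style popularized by the
# YOLO family: three chained max-pools concatenated with the input projection.
import torch
import torch.nn as nn

class SPPF(nn.Module):
    def __init__(self, c_in, c_out, k=5):
        super().__init__()
        c_hidden = c_in // 2
        self.cv1 = nn.Conv2d(c_in, c_hidden, 1)
        self.cv2 = nn.Conv2d(c_hidden * 4, c_out, 1)
        self.pool = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)

    def forward(self, x):
        x = self.cv1(x)
        y1 = self.pool(x)
        y2 = self.pool(y1)
        y3 = self.pool(y2)
        return self.cv2(torch.cat([x, y1, y2, y3], dim=1))

x = torch.randn(1, 256, 20, 20)
print(SPPF(256, 256)(x).shape)  # torch.Size([1, 256, 20, 20])
```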

16 pages, 6845 KiB  
Article
Research on Tire/Road Peak Friction Coefficient Estimation Considering Effective Contact Characteristics between Tire and Three-Dimensional Road Surface
by Yinfeng Han, Yongjie Lu, Jingxv Liu and Junning Zhang
Machines 2022, 10(8), 614; https://doi.org/10.3390/machines10080614 - 27 Jul 2022
Cited by 6 | Viewed by 1837
Abstract
In the field of transportation, the accurate estimation of tire–road peak friction coefficient (TRPFC) is very important to improve vehicle safety performance and the efficiency of road maintenance. The existing estimation algorithms rarely consider the influence of road roughness and road texture on the estimation results. This paper proposes an estimation method of TRPFC considering the effective contact characteristics between the tire and the three-dimensional road. In the longitudinal and lateral directions of tire–road contact, the model is optimized by incorporating the effective contact area ratio coefficient into the LuGre tire model. The optimized model characterizes the road texture by road power spectrum and establishes the relationship between road texture and tire force. In the vertical direction of tire–road contact, the force transfer between tire and road is represented by a new multi-point contact method. By combining the above models with the normalization method and the unscented Kalman filter (UKF), the timely and accurate estimation of the peak friction coefficient of the tire and the 3D road is realized. Simulation and real vehicle experiments verify the effectiveness of the estimation algorithm. Full article
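
To illustrate how an unscented Kalman filter can track a friction-related parameter from force measurements, the sketch below substitutes a simple Burckhardt-form slip curve for the paper's optimized LuGre model; the state definition, noise settings, and the filterpy-based implementation are assumptions, not the authors' estimator.

```python
# Estimating a peak-friction-related parameter with a UKF. A Burckhardt-form
# slip curve stands in for the paper's optimized LuGre model, so treat every
# value and interface here as illustrative.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

def mu_burckhardt(slip, c1, c2=23.99, c3=0.52):
    return c1 * (1.0 - np.exp(-c2 * slip)) - c3 * slip

def fx(x, dt):                    # state = [c1], assumed slowly varying
    return x

def hx(x, slip=0.08, fz=4000.0):  # measurement: longitudinal tire force
    return np.array([mu_burckhardt(slip, x[0]) * fz])

points = MerweScaledSigmaPoints(n=1, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=1, dim_z=1, dt=0.01, hx=hx, fx=fx, points=points)
ukf.x = np.array([0.8])
ukf.P *= 0.1
ukf.R *= 50.0**2
ukf.Q *= 1e-4

true_c1 = 1.05
for _ in range(200):
    z = mu_burckhardt(0.08, true_c1) * 4000.0 + np.random.randn() * 50.0
    ukf.predict()
    ukf.update(np.array([z]))
print(f"estimated peak-related parameter c1: {ukf.x[0]:.3f}")
```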

15 pages, 2649 KiB  
Article
Utilizing Human Feedback in Autonomous Driving: Discrete vs. Continuous
by Maryam Savari and Yoonsuck Choe
Machines 2022, 10(8), 609; https://doi.org/10.3390/machines10080609 - 26 Jul 2022
Cited by 4 | Viewed by 1603
Abstract
Deep reinforcement learning (Deep RL) algorithms are defined with fully continuous or discrete action spaces. Among DRL algorithms, soft actor–critic (SAC) is a powerful method capable of handling complex and continuous state–action spaces. However, long training times and low data efficiency are the main drawbacks of this algorithm, even though SAC is robust for complex and dynamic environments. One of the proposed solutions to overcome this issue is to utilize human feedback. In this paper, we investigate different forms of human feedback: head direction vs. steering and discrete vs. continuous feedback. To this end, real-time human demonstrations from steering and head direction with discrete or continuous actions were employed as human feedback in an autonomous driving task in the CARLA simulator. We used alternating actions from a human expert and SAC to obtain a real-time human demonstration. Furthermore, to test the method without potential individual differences in human performance, we tested the discrete vs. continuous feedback in an inverted pendulum task, with an ideal controller standing in for the human expert. The results for both the CARLA and the inverted pendulum tasks showed a significant reduction in the training time and a significant increase in gained rewards with discrete feedback, as opposed to continuous feedback, while the action space remained continuous. It was also shown that head direction feedback can be almost as good as steering feedback. We expect our findings to provide a simple yet efficient training method for Deep RL for autonomous driving, utilizing multiple sources of human feedback. Full article
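
The alternating human/agent control scheme can be sketched as follows, with discrete human feedback mapped onto a continuous steering action. The alternation schedule, the `DISCRETE_STEER` mapping, and the interfaces are illustrative assumptions, not the paper's experimental setup.

```python
# Alternating control between a human (or stand-in controller) and the learning
# agent, with discrete human feedback mapped onto a continuous action space.
import numpy as np

DISCRETE_STEER = {"left": -0.5, "straight": 0.0, "right": 0.5}

def select_action(step, agent_action, human_feedback=None, human_period=2):
    """Every `human_period`-th step, the human feedback replaces the agent's action."""
    if human_feedback is not None and step % human_period == 0:
        return np.array([DISCRETE_STEER[human_feedback]], dtype=np.float32)
    return agent_action

for step in range(6):
    agent_a = np.random.uniform(-1, 1, size=1).astype(np.float32)  # stand-in for SAC output
    print(step, select_action(step, agent_a, human_feedback="left"))
```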

19 pages, 6653 KiB  
Article
Monocular Depth and Velocity Estimation Based on Multi-Cue Fusion
by Chunyang Qi, Hongxiang Zhao, Chuanxue Song, Naifu Zhang, Sinxin Song, Haigang Xu and Feng Xiao
Machines 2022, 10(5), 396; https://doi.org/10.3390/machines10050396 - 19 May 2022
Cited by 2 | Viewed by 1881
Abstract
Many consumers and scholars currently focus on driving assistance systems (DAS) and intelligent transportation technologies. The distance and speed measurement technology of the vehicle ahead is an important part of the DAS. Existing vehicle distance and speed estimation algorithms based on monocular cameras still have limitations, such as ignoring the relationship between the underlying features of vehicle speed and distance. A multi-cue fusion monocular velocity and ranging framework is proposed to improve the accuracy of monocular ranging and velocity measurement. We use the attention mechanism to fuse different feature information. The training method is used to jointly train the network through the distance velocity regression loss function and the depth loss as an auxiliary loss function. Finally, experimental validation is performed on the Tusimple dataset and the KITTI dataset. On the Tusimple dataset, the average speed mean square error of the proposed method is less than 0.496 m²/s², and the average mean square error of the distance is 5.695 m². On the KITTI dataset, the average velocity mean square error of our method is less than 0.40 m²/s². In addition, we test in different scenarios and confirm the effectiveness of the network. Full article
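
The joint training objective, a distance/velocity regression loss with the depth loss as an auxiliary term, can be sketched as follows. The specific loss functions and the weighting factor are assumptions; the paper's exact formulation may differ.

```python
# Joint objective: distance/velocity regression plus a weighted auxiliary depth
# loss. Loss choices and the weight are illustrative.
import torch
import torch.nn.functional as F

def joint_loss(pred_dist, pred_vel, pred_depth, gt_dist, gt_vel, gt_depth,
               depth_weight=0.1):
    main = F.smooth_l1_loss(pred_dist, gt_dist) + F.smooth_l1_loss(pred_vel, gt_vel)
    aux = F.l1_loss(pred_depth, gt_depth)       # auxiliary dense-depth term
    return main + depth_weight * aux

loss = joint_loss(torch.rand(8, 1), torch.rand(8, 1), torch.rand(8, 1, 64, 64),
                  torch.rand(8, 1), torch.rand(8, 1), torch.rand(8, 1, 64, 64))
print(loss.item())
```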

19 pages, 4432 KiB  
Article
Online Obstacle Trajectory Prediction for Autonomous Buses
by Yue Linn Chong, Christina Dao Wen Lee, Liushifeng Chen, Chongjiang Shen, Ken Kok Hoe Chan and Marcelo H. Ang, Jr.
Machines 2022, 10(3), 202; https://doi.org/10.3390/machines10030202 - 11 Mar 2022
Cited by 5 | Viewed by 2851
Abstract
We tackle the problem of achieving real-world autonomous driving for buses, where the task is to perceive nearby obstacles and predict their motion in the time ahead, given current and past information of the object. In this paper, we present the development of a modular pipeline for the long-term prediction of dynamic obstacles’ trajectories for an autonomous bus. The pipeline consists of three main tasks, which are the obstacle detection task, tracking task, and trajectory prediction task. Unlike most of the existing literature that performs experiments in the laboratory, our pipeline’s modules depend on the preceding modules in the pipeline: each uses the output of the previous module. This best emulates real-world autonomous driving and reflects the errors that may accumulate and cascade from previous modules with less than 100% accuracy. For the trajectory prediction task, we propose a training method to improve the module’s performance and attain a run-time of 10 Hz. We present the practical problems that arise from realising ready-to-deploy autonomous buses and propose methods to overcome these problems for each task. The pipeline’s performance was evaluated on our Singapore autonomous bus (SGAB) dataset, which is publicly available online. Full article
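
The cascaded structure, in which each module consumes the (possibly imperfect) output of the previous one, can be sketched with placeholder detection, tracking, and prediction stages. The class interfaces and the constant-velocity predictor below are illustrative, not the SGAB pipeline's code.

```python
# A cascaded perception pipeline where each stage consumes the previous stage's
# output, so upstream errors propagate downstream. Placeholder components only.
from dataclasses import dataclass, field

@dataclass
class Track:
    track_id: int
    positions: list = field(default_factory=list)    # history of (x, y)

class Detector:
    def detect(self, frame):
        return [(1.0 * frame, 2.0)]                   # dummy (x, y) detections

class Tracker:
    def __init__(self):
        self.track = Track(track_id=0)
    def update(self, detections):
        self.track.positions.extend(detections)
        return [self.track]

class Predictor:
    def predict(self, track, horizon=3):
        if len(track.positions) < 2:
            return [track.positions[-1]] * horizon
        (x0, y0), (x1, y1) = track.positions[-2], track.positions[-1]
        return [(x1 + (x1 - x0) * k, y1 + (y1 - y0) * k) for k in range(1, horizon + 1)]

detector, tracker, predictor = Detector(), Tracker(), Predictor()
for frame in range(4):                                # frame index stands in for sensor data
    detections = detector.detect(frame)               # errors here cascade downstream
    for trk in tracker.update(detections):
        print(frame, predictor.predict(trk))
```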

Other


15 pages, 4890 KiB  
Viewpoint
A Viewpoint on the Challenges and Solutions for Driverless Last-Mile Delivery
by Vasiliki Balaska, Kosmas Tsiakas, Dimitrios Giakoumis, Ioannis Kostavelis, Dimitrios Folinas, Antonios Gasteratos and Dimitrios Tzovaras
Machines 2022, 10(11), 1059; https://doi.org/10.3390/machines10111059 - 10 Nov 2022
Cited by 8 | Viewed by 2886
Abstract
The ongoing growth of e-commerce is accompanied by an increasing number of first-time delivery failures due to the customer’s absence at the delivery location. Failed deliveries result in rework, causing a significant impact on the carriers’ delivery cost. The last mile, i.e., the portion of a journey that moves people and commodities from a transportation hub to the final destination, therefore needs to be an efficient process. The above-mentioned concept is used in supply chain management and transportation planning. The paper at hand is a position paper that aims to scrutinize the concept of driverless last-mile delivery with autonomous vehicles, in order to highlight the challenges and limitations in existing technology that hinder Level-5 autonomous driving. Specifically, this work documents the current capabilities of existing autonomous vehicles’ perception and cognition systems and outlines the future skills needed to address complete autonomous last-mile delivery, as well as efficient robotic process automation in logistics from the warehouse/distribution center to the hub’s delivery. Full article

29 pages, 12511 KiB  
Technical Note
Simulation and Post-Processing for Advanced Driver Assistance System (ADAS)
by Matteo Dollorenzo, Vincenzo Dodde, Nicola Ivan Giannoccaro and Davide Palermo
Machines 2022, 10(10), 867; https://doi.org/10.3390/machines10100867 - 27 Sep 2022
Cited by 2 | Viewed by 2225
Abstract
Considering the continuous development of the automotive sector and autonomous driving technology, ongoing research is needed to identify the main points that allow further improvement of system autonomy. In addition to designing new components, an important aspect is characterizing the test procedures uniformly. The present work analyzes the testing phases of a vehicle, focusing on the post-processing of the tests using suitable software and routines and on creating an overall summary report that includes information on the instrumentation, the type of test, and the post-processing results. The paper proposes an innovative tool designed to improve the capacity to generate test maneuvers for Advanced Driver Assistance Systems (ADASs) and to automate the collection and analysis of data relating to tests for Lane Support Systems (LSS), Autonomous Emergency Braking (AEB), and the Car-to-Pedestrian Nearside Child (CPNC) scenario, complying with Euro NCAP LSS 3.0.2, Euro NCAP AEB C2C 3.0.2 and UNECE R-152. The goal was achieved with the collaboration of the company Nardò Technical Center S.r.l. The entire post-processing routine was developed from data relating to experimental tests carried out at the company. Full article
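
A post-processing routine of this kind can be sketched as a small script that reads a logged test run and emits a summary record. The column names, the threshold, and the outcome criterion below are placeholders, not the Euro NCAP or UNECE evaluation logic.

```python
# Minimal post-processing sketch: read a logged AEB test run, derive a simple
# outcome indication, and emit a summary record. All column names and the
# pass criterion are placeholders.
import csv
import json

def summarize_aeb_run(csv_path, min_gap_m=0.5):
    min_distance = float("inf")
    final_speed = None
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):        # expects time_s, speed_mps, distance_m
            min_distance = min(min_distance, float(row["distance_m"]))
            final_speed = float(row["speed_mps"])
    return {
        "min_distance_m": round(min_distance, 2),
        "final_speed_mps": round(final_speed, 2),
        "collision_avoided": min_distance > min_gap_m and final_speed == 0.0,
    }

# Create a tiny synthetic log so the sketch is self-contained.
with open("aeb_run_001.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["time_s", "speed_mps", "distance_m"])
    writer.writerows([[0.0, 13.9, 30.0], [1.0, 8.0, 12.0], [2.0, 0.0, 1.4]])

print(json.dumps(summarize_aeb_run("aeb_run_001.csv"), indent=2))
```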
