Computer Vision & Intelligent Transportation Systems

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (20 August 2021) | Viewed by 26206

Special Issue Editors


Guest Editor
Computer Engineering Department, INVETT Research Group, Universidad de Alcalá, Alcalá de Henares, Madrid, Spain
Interests: intelligent transportation systems; autonomous vehicles; control systems; driver assistance systems; artificial vision

Guest Editor
1. Lead Cooperative Driving, 2getthere B.V., Utrecht, The Netherlands;
2. Associate Professor (part-time), Mechanical Engineering Department, Dynamics and Control group, Eindhoven University of Technology, Eindhoven, The Netherlands
Interests: networked control; string stability; agent-based control; vehicle automation; platooning

Guest Editor
Institute of Measurement and Control Systems, Karlsruhe Institute of Technology, Karlsruhe, Germany
Interests: autonomous vehicles; machine vision; machine learning

Guest Editor
Postdoctoral Researcher, INVETT Research Group, Computer Engineering Department, Universidad de Alcalá, Alcalá de Henares, Spain
Interests: robotics; intelligent transportation systems

Guest Editor
Assistant Professor, Computer Engineering Department, INVETT Research Group, Universidad de Alcalá, Alcalá de Henares, Madrid, Spain
Interests: accurate indoor and outdoor global positioning; vehicle localization; autonomous vehicles; driver assistance systems; imaging and image analysis

Guest Editor
Computer Engineering Department, Universidad de Alcalá, Alcalá de Henares, 28805 Madrid, Spain
Interests: computer vision; multi-sensory systems; 3D sensing; mapping and localization; autonomous vehicles and robotics

Special Issue Information

Dear Colleagues,

Perception systems play a key role in intelligent transportation systems (ITS) applications. A wide variety of sensors fall under this topic, with cameras, radars, and lidars being the most common. Radars and cameras are the preferred option in industry, as they avoid unaesthetic effects on the car's appearance. Cameras, in turn, have undergone a small revolution thanks to the application of convolutional neural networks to image processing. These sensors are used in several ITS applications, such as intelligent traffic light control, automatic number plate recognition, traffic flow detection, vehicle speed detection, asphalt pavement crack detection, and so on.

Inside the vehicle, several advanced driver assistance systems (ADAS) rely on perception systems, such as the collision mitigation brake system, the driver monitoring system, and the parking assistance system. In addition, in the self-driving car, these sensors are used for localization (visual odometry, lidar odometry, 3D maps, etc.), perception (trajectory planning, scene understanding, traffic sign detection, drivable space detection, obstacle avoidance, etc.), and so on.

The aim of this Special Issue is to present the latest work in these fields and to give the reader a clear picture of the advances to come. Welcome topics include, but are not strictly limited to, the following:

  • Computer vision and image processing;
  • Lidar and 3D sensors;
  • Radar and other proximity sensors;
  • Infrastructure ITS applications;
  • Advanced driver assistance systems onboard the vehicles;
  • Self-driving car perception and navigation systems.

Prof. Dr. Javier Alonso Ruiz
Dr. Jeroen Ploeg
Dr. Martin Lauer
Dr. Angel Llamazares Llamazares
Prof. Dr. Noelia Hernández Parra
Dr. Carlota Salinas
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Computer vision
  • Lidar
  • Radar
  • 3D perception systems
  • Convolutional neural networks
  • Intelligent traffic light control
  • Automatic number plate recognition
  • Traffic flow detection
  • Vehicle speed detection
  • Asphalt pavement crack detection
  • Collision mitigation brake system
  • Driver monitoring system
  • Parking assistance system
  • Visual odometry
  • Lidar odometry
  • 3D maps construction and localization
  • Scene understanding
  • Traffic sign detection
  • Drivable space detection
  • Obstacle detection…

Published Papers (7 papers)


Research

15 pages, 4933 KiB  
Article
Lightweight Convolutional Neural Networks with Model-Switching Architecture for Multi-Scenario Road Semantic Segmentation
by Peng-Wei Lin and Chih-Ming Hsu
Appl. Sci. 2021, 11(16), 7424; https://doi.org/10.3390/app11167424 - 12 Aug 2021
Cited by 2 | Viewed by 2625
Abstract
A convolutional neural network (CNN) trained on datasets for multiple scenarios was proposed to facilitate real-time road semantic segmentation across the various scenarios encountered in autonomous driving. However, such a CNN exhibited a mutual suppression effect between weights and thus did not perform as well as a network trained on a single scenario. To address this limitation, we used a model-switching architecture in the network and maintained the optimal weights of each individual model, which required considerable space and computation. We subsequently incorporated a lightweight process into the model to reduce the model size and computational load. The experimental results indicated that the proposed lightweight CNN with a model-switching architecture outperformed, and was faster than, the conventional methods across multiple scenarios in road semantic segmentation.
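The model-switching idea described in the abstract, keeping one lightweight network per training scenario and routing each frame to the matching one, can be sketched as follows. The scenario classifier and per-scenario models below are hypothetical stand-ins, not the authors' code.

```python
# Sketch of a model-switching architecture: a cheap scenario classifier
# routes each frame to the segmentation model trained for that scenario,
# so each model keeps its individually optimal weights.

def classify_scenario(frame):
    # Placeholder: a real system would run a small CNN here.
    return "highway" if sum(frame) / len(frame) > 0.5 else "urban"

# One lightweight segmentation model per training scenario (stand-ins).
MODELS = {
    "highway": lambda frame: ["road"] * len(frame),
    "urban":   lambda frame: ["sidewalk"] * len(frame),
}

def segment(frame):
    scenario = classify_scenario(frame)   # switching step
    return MODELS[scenario](frame)        # run only the matching model

print(segment([0.9, 0.8, 0.7]))  # routed to the "highway" model
```

The switch runs only one model per frame, which is what keeps inference cost close to that of a single scenario-specific network.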
(This article belongs to the Special Issue Computer Vision & Intelligent Transportation Systems)

17 pages, 4073 KiB  
Article
Multi-Scale Safety Helmet Detection Based on SAS-YOLOv3-Tiny
by Rao Cheng, Xiaowei He, Zhonglong Zheng and Zhentao Wang
Appl. Sci. 2021, 11(8), 3652; https://doi.org/10.3390/app11083652 - 19 Apr 2021
Cited by 44 | Viewed by 4004
Abstract
In practical safety helmet detection scenarios, the lightweight algorithm You Only Look Once (YOLO) v3-tiny is easy to deploy on embedded devices because it has few parameters. However, its detection accuracy is relatively low, which makes it unsuitable for detecting multi-scale safety helmets. The safety helmet detection algorithm SAS-YOLOv3-tiny is proposed in this paper to balance detection accuracy and model complexity. A light Sandglass-Residual (SR) module based on depthwise separable convolution and a channel attention mechanism is constructed to replace the original convolution layer, and a convolution layer with stride two replaces the max-pooling layer, obtaining more informative features and improving detection performance while reducing the number of parameters and the computation. Instead of two-scale feature prediction, three-scale feature prediction is used to further improve the detection of small objects. In addition, an improved spatial pyramid pooling (SPP) module is added to the feature extraction network to extract local and global features with rich semantic information. Complete-Intersection over Union (CIoU) loss is also introduced to improve positioning accuracy. Results on a self-built helmet dataset show that the improved algorithm is superior to the original. Compared with the original YOLOv3-tiny, SAS-YOLOv3-tiny significantly improves all metrics (Precision (P), Recall (R), mean Average Precision (mAP), and F1) at the cost of only a minor speed reduction, while keeping fewer parameters and less computation. Meanwhile, SAS-YOLOv3-tiny shows an accuracy advantage over other lightweight object detection algorithms and is faster than heavyweight models.
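The Complete-IoU (CIoU) loss mentioned in the abstract has a standard definition combining overlap, center distance, and aspect-ratio consistency. A minimal sketch for axis-aligned boxes, independent of the paper's implementation:

```python
import math

def ciou_loss(box_a, box_b):
    """Complete-IoU loss between two boxes given as (x1, y1, x2, y2).

    CIoU = IoU - rho^2/c^2 - alpha*v, and loss = 1 - CIoU, where rho is
    the distance between box centers, c the diagonal of the smallest
    enclosing box, and v penalises aspect-ratio mismatch.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection and union areas
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    iou = inter / (area_a + area_b - inter)
    # Squared center distance over squared enclosing-box diagonal
    rho2 = (((ax1 + ax2) - (bx1 + bx2)) ** 2
            + ((ay1 + ay2) - (by1 + by2)) ** 2) / 4
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c2 = cw ** 2 + ch ** 2
    # Aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (math.atan((ax2 - ax1) / (ay2 - ay1))
                              - math.atan((bx2 - bx1) / (by2 - by1))) ** 2
    alpha = v / (1 - iou + v) if iou < 1 else 0.0
    return 1 - iou + rho2 / c2 + alpha * v

print(ciou_loss((0, 0, 2, 2), (0, 0, 2, 2)))  # identical boxes -> 0.0
```

Unlike plain IoU loss, the center-distance term still provides a gradient when the boxes do not overlap at all, which is why CIoU tends to improve positioning accuracy.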
(This article belongs to the Special Issue Computer Vision & Intelligent Transportation Systems)

18 pages, 42969 KiB  
Article
Automatic Roadway Features Detection with Oriented Object Detection
by Hesham M. Eraqi, Karim Soliman, Dalia Said, Omar R. Elezaby, Mohamed N. Moustafa and Hossam Abdelgawad
Appl. Sci. 2021, 11(8), 3531; https://doi.org/10.3390/app11083531 - 15 Apr 2021
Cited by 4 | Viewed by 3141
Abstract
Extensive research efforts have been devoted to identifying and improving roadway features that impact safety. Maintaining roadway safety features relies on costly manual operations of regular road surveying and data analysis. This paper introduces an automatic roadway safety feature detection approach that harnesses the potential of artificial intelligence (AI) computer vision to make the process more efficient and less costly. Given a front-facing camera and a global positioning system (GPS) sensor, the proposed system automatically evaluates ten roadway safety features. The system is composed of an oriented (or rotated) object detection model, which solves an orientation encoding discontinuity problem to improve detection accuracy, and a rule-based roadway safety evaluation module. To train and validate the proposed model, a fully annotated dataset for roadway safety feature extraction was collected, covering 473 km of roads. The proposed method's baseline results are encouraging when compared with state-of-the-art models. Different oriented object detection strategies are presented and discussed, and the developed model improves the mean average precision (mAP) by 16.9% compared with the literature. The roadway safety feature prediction accuracy averages 84.39% and ranges between 63.12% and 91.11%. The introduced model can pervasively enable/disable autonomous driving (AD) based on the safety features of the road, and empower connected vehicles (CVs) to send and receive estimated safety features, alerting drivers about black spots or relatively less safe segments or roads.
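The orientation encoding discontinuity mentioned in the abstract arises because a regressed angle wraps around: targets just below 2π and just above 0 are numerically far apart even though the orientations are nearly identical. One common remedy, not necessarily the one used in this paper, is to regress (sin θ, cos θ) instead of θ:

```python
import math

def encode_angle(theta):
    """Encode an orientation as (sin, cos) so that angles just below
    2*pi and just above 0 map to nearby regression targets, avoiding
    the wrap-around discontinuity of raw-angle encoding."""
    return (math.sin(theta), math.cos(theta))

def decode_angle(s, c):
    """Recover the angle in [0, 2*pi) from its (sin, cos) encoding."""
    return math.atan2(s, c) % (2 * math.pi)

# Raw angles 0.01 and 2*pi - 0.01 differ by ~6.26 as regression
# targets, but their (sin, cos) encodings are close:
a = encode_angle(0.01)
b = encode_angle(2 * math.pi - 0.01)
gap = math.hypot(a[0] - b[0], a[1] - b[1])
print(round(gap, 3))  # small encoding distance
```

With this encoding the loss surface is smooth across the wrap-around point, which is the property an oriented detector needs for stable angle regression.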
(This article belongs to the Special Issue Computer Vision & Intelligent Transportation Systems)

18 pages, 4223 KiB  
Article
An Efficiency Enhancing Methodology for Multiple Autonomous Vehicles in an Urban Network Adopting Deep Reinforcement Learning
by Quang-Duy Tran and Sang-Hoon Bae
Appl. Sci. 2021, 11(4), 1514; https://doi.org/10.3390/app11041514 - 08 Feb 2021
Cited by 10 | Viewed by 2724
Abstract
To reduce the impact of congestion, it is necessary to improve our overall understanding of the influence of autonomous vehicles. Recently, deep reinforcement learning has become an effective means of solving complex control tasks. Accordingly, we present an advanced deep reinforcement learning approach that investigates how leading autonomous vehicles affect an urban network in a mixed-traffic environment. We also suggest a set of hyperparameters for achieving better performance. Firstly, we feed a set of hyperparameters into our deep reinforcement learning agents. Secondly, we investigate the leading-autonomous-vehicle experiment in the urban network with different autonomous vehicle penetration rates. Thirdly, the advantage of leading autonomous vehicles is evaluated against entire-manual-vehicle and leading-manual-vehicle experiments. Finally, proximal policy optimization with a clipped objective is compared with proximal policy optimization with an adaptive Kullback–Leibler penalty to verify the superiority of the proposed hyperparameters. We demonstrate that full-automation traffic increased the average speed to 1.27 times that of the entire-manual-vehicle experiment. Our proposed method becomes significantly more effective at higher autonomous vehicle penetration rates. Furthermore, leading autonomous vehicles can help mitigate traffic congestion.
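The clipped objective that the abstract compares against the adaptive KL penalty is PPO's standard surrogate. A minimal sketch for a single sample:

```python
def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Clipped surrogate objective from PPO: the policy-update ratio
    r = pi_new(a|s) / pi_old(a|s) is clipped to [1 - eps, 1 + eps] so a
    single update cannot move the policy too far from the old one.
    This is the alternative to PPO's adaptive KL-penalty variant."""
    clipped = max(1 - eps, min(1 + eps, ratio))
    return min(ratio * advantage, clipped * advantage)

# A large ratio with positive advantage is clipped: the incentive to
# increase the action probability saturates at (1 + eps) * advantage.
print(ppo_clip_objective(1.5, 2.0))   # 2.4, i.e. 1.2 * 2.0
print(ppo_clip_objective(0.9, -1.0))  # -0.9, unclipped
```

Taking the minimum of the clipped and unclipped terms makes the objective a pessimistic lower bound, which removes the need to tune a KL-penalty coefficient.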
(This article belongs to the Special Issue Computer Vision & Intelligent Transportation Systems)

17 pages, 5168 KiB  
Article
Fast Planar Detection System Using a GPU-Based 3D Hough Transform for LiDAR Point Clouds
by Yifei Tian, Wei Song, Long Chen, Yunsick Sung, Jeonghoon Kwak and Su Sun
Appl. Sci. 2020, 10(5), 1744; https://doi.org/10.3390/app10051744 - 04 Mar 2020
Cited by 16 | Viewed by 5084
Abstract
Plane extraction is regarded as a necessary function supporting judgment in many applications, including semantic digital map reconstruction and path planning for unmanned ground vehicles. Owing to the heterogeneous density and unstructured spatial distribution of three-dimensional (3D) point clouds collected by light detection and ranging (LiDAR), plane extraction from them remains a significant challenge. This paper proposes a parallel 3D Hough transform algorithm to achieve rapid and precise plane detection from 3D LiDAR point clouds. After transforming all the 3D points from a Cartesian coordinate system to a pre-defined 3D Hough space, the generated Hough space is rasterised into a series of arranged cells that store the counts of the points residing in each cell. A 3D connected component labeling algorithm is developed to cluster the high-valued cells in Hough space into several clusters. The peaks of these clusters are extracted, yielding the target planar surfaces in polar coordinates. Because the laser beams emitted by a LiDAR sensor are fixed at several angles, the collected 3D point clouds are distributed as horizontal, parallel circles on plane surfaces. These horizontal, parallel circles can mislead the plane detection results from horizontal wall surfaces to parallel planes. To obtain accurate plane parameters, this paper adopts a fraction-to-fraction method that gradually transforms the raw point cloud into a series of sub-Hough-space buffers. In the proposed planar detection algorithm, graphics processing unit (GPU) programming is applied to speed up 3D Hough space updating and peak searching.
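The voting scheme described above can be illustrated on the CPU with a minimal (and deliberately coarse) 3D Hough accumulator; the discretisation steps here are illustrative, and the paper's GPU parallelisation and sub-space buffering are omitted:

```python
import math
from collections import Counter

def hough_planes(points, n_theta=12, n_phi=6, rho_step=0.25):
    """Minimal CPU sketch of 3D Hough voting for planes.

    A plane is parameterised in polar form as
    rho = x*cos(theta)*sin(phi) + y*sin(theta)*sin(phi) + z*cos(phi).
    Every point votes for all discretised (theta, phi) normal
    directions; peaks in the accumulator are plane candidates.
    """
    acc = Counter()
    for x, y, z in points:
        for ti in range(n_theta):
            theta = ti * math.pi / n_theta
            for pi_ in range(n_phi + 1):
                phi = pi_ * math.pi / n_phi
                rho = (x * math.cos(theta) * math.sin(phi)
                       + y * math.sin(theta) * math.sin(phi)
                       + z * math.cos(phi))
                acc[(ti, pi_, round(rho / rho_step))] += 1
    return acc.most_common(1)[0]  # ((theta_i, phi_i, rho_i), votes)

# A 10x10 grid of points on the plane z = 1: every point votes for the
# cell with phi index 0 (normal along +z) and rho = 4 * 0.25 = 1.0.
plane = [(x * 0.1, y * 0.1, 1.0) for x in range(10) for y in range(10)]
cell, votes = hough_planes(plane)
print(cell, votes)
```

Because each point's votes are independent of every other point's, the inner loops map directly onto per-thread accumulator updates, which is what makes the GPU formulation in the paper attractive.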
(This article belongs to the Special Issue Computer Vision & Intelligent Transportation Systems)

17 pages, 1886 KiB  
Article
Adaptive Cruise Control Based on Model Predictive Control with Constraints Softening
by Lie Guo, Pingshu Ge, Dachuan Sun and Yanfu Qiao
Appl. Sci. 2020, 10(5), 1635; https://doi.org/10.3390/app10051635 - 29 Feb 2020
Cited by 21 | Viewed by 4137
Abstract
In this paper, with the aim of meeting the car-following, safety, comfort, and economy requirements of an adaptive cruise control (ACC) system, an ACC algorithm based on model predictive control (MPC) with constraint softening is proposed. A higher-order kinematics model is established based on the mutual longitudinal kinematics between the host vehicle and the preceding vehicle, considering the changing characteristics of the inter-vehicle distance, relative velocity, acceleration, and jerk of the host vehicle. Performance indexes are adopted to represent the multi-objective demands and constraints of the ACC system. To avoid the solution becoming infeasible because of an overlarge feedback correction, a constraint softening method is introduced to improve robustness. Finally, the proposed ACC method is verified in typical car-following scenarios. Through comparisons and case studies, the proposed method is shown to improve the robustness and control precision of the ACC system while satisfying the demands of safety, comfort, and economy.
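Constraint softening replaces a hard constraint with a heavily weighted slack penalty, so the optimisation stays feasible even when the constraint cannot be met exactly. The toy car-following step below illustrates the idea under assumed gains, with a coarse grid search standing in for the QP solver; none of the values are the paper's:

```python
def soft_mpc_step(gap, rel_vel, desired_gap, dt=0.1, horizon=10):
    """Toy ACC step illustrating constraint softening in MPC.

    The hard safety constraint gap >= min_gap is replaced by a slack
    penalty w_slack * max(0, min_gap - gap)**2 in the cost, so the
    optimiser always returns a (penalised) command instead of failing.
    rel_vel is the lead-vehicle speed minus the host speed, so host
    acceleration a reduces rel_vel.
    """
    min_gap, w_slack, w_track, w_ctrl = 5.0, 100.0, 1.0, 0.1
    best_a, best_cost = 0.0, float("inf")
    for a in [x * 0.5 for x in range(-8, 5)]:  # -4.0 .. 2.0 m/s^2
        g, v, cost = gap, rel_vel, 0.0
        for _ in range(horizon):
            v -= a * dt            # host accelerates -> closing faster
            g += v * dt
            slack = max(0.0, min_gap - g)  # softened safety constraint
            cost += (w_track * (g - desired_gap) ** 2
                     + w_slack * slack ** 2 + w_ctrl * a ** 2)
        if cost < best_cost:
            best_a, best_cost = a, cost
    return best_a

# Already inside the minimum gap and still closing: the softened
# problem remains feasible and returns a braking command.
print(soft_mpc_step(gap=4.0, rel_vel=-2.0, desired_gap=10.0))
```

With a hard constraint, this initial state would make the optimisation infeasible from the first step; the slack term is exactly what the abstract's "constraint softening" buys.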
(This article belongs to the Special Issue Computer Vision & Intelligent Transportation Systems)

14 pages, 3442 KiB  
Article
Prediction of Driver’s Attention Points Based on Attention Model
by Shuanfeng Zhao, Guodong Han, Qingqing Zhao and Pei Wei
Appl. Sci. 2020, 10(3), 1083; https://doi.org/10.3390/app10031083 - 06 Feb 2020
Cited by 8 | Viewed by 2496
Abstract
Current intelligent driving systems do not consider the selective attention mechanism of drivers and cannot completely replace drivers in extracting effective road information. A Driver Visual Attention Network (DVAN), based on a deep learning attention model, is proposed in this paper to solve this problem. The DVAN aims to extract the key information affecting the driver's operation by predicting the driver's attention points. It performs fast localization and extraction of the road information most interesting to drivers by merging local appearance features and contextual visual information. Meanwhile, a Cross Convolutional Neural Network (C-CNN) is proposed to ensure the integrity of the extracted information. We verify the network on the KITTI dataset, the world's largest computer vision algorithm evaluation dataset for autonomous driving scenarios. Our results show that the DVAN can quickly locate and identify the target in a picture that the driver is most interested in, with an average prediction accuracy of 96.3%. This will provide a useful theoretical basis and technical methods related to visual perception for intelligent driving vehicles, driver training, and assisted driving systems in the future.
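Predicting an attention point from a saliency map typically reduces to normalising spatial scores and taking the peak. The sketch below uses a plain softmax as a hypothetical stand-in for the DVAN's attention head, not the paper's actual architecture:

```python
import math

def attention_map(scores):
    """Softmax over a 2D grid of spatial saliency scores: the most
    relevant road region receives the largest attention weight."""
    m = max(max(row) for row in scores)  # subtract max for stability
    exp = [[math.exp(s - m) for s in row] for row in scores]
    total = sum(sum(row) for row in exp)
    return [[e / total for e in row] for row in exp]

def attention_point(scores):
    """Predicted attention point = argmax cell of the attention map."""
    att = attention_map(scores)
    flat = [(att[i][j], (i, j))
            for i in range(len(att)) for j in range(len(att[0]))]
    return max(flat)[1]

scores = [[0.1, 0.3, 0.2],
          [0.5, 2.0, 0.4],   # high score where, say, a pedestrian is
          [0.0, 0.1, 0.2]]
print(attention_point(scores))  # (1, 1)
```

The softmax weights sum to one, so downstream modules can treat the map as a probability distribution over where the driver would look.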
(This article belongs to the Special Issue Computer Vision & Intelligent Transportation Systems)
