Navigation Filters for Autonomous Vehicles

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Vehicular Sensing".

Deadline for manuscript submissions: closed (31 January 2024) | Viewed by 8536

Special Issue Editors


Dr. Sun Young Kim
Guest Editor
School of Mechanical Convergence System Engineering, Kunsan National University, Gunsan 54150, Republic of Korea
Interests: navigation system; autonomous vehicles; multi-sensor fusion; multi-target tracking; deep learning

Dr. Jae Hoon Jeong
Guest Editor
School of IT, Information and Control Engineering Major, Kunsan National University, Gunsan 54150, Republic of Korea
Interests: indoor positioning system; autonomous driving; medical device; autonomous wheelchair; medical device evaluation method; sensor & AI system; embedded system

Dr. Chang Ho Kang
Guest Editor
Department of Mechanical System Engineering, Kumoh National Institute of Technology, Gyeongbuk 39177, Republic of Korea
Interests: signal processing; nonlinear filtering; deep learning

Special Issue Information

Dear Colleagues,

Although linear filters have found wide application, practical real-world challenges in autonomous vehicles often involve nonlinear systems. Nonlinearities may arise from the dynamics of the vehicle itself or from the sensors’ observation process. As a fundamental approach to the nonlinear filtering problem encountered across many research areas, Bayesian analysis has attracted sustained interest over the past decades, and it provides a powerful framework for nonlinear state estimation in autonomous vehicles.
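
To make the recursive Bayesian idea concrete, the sketch below implements a minimal bootstrap particle filter in Python with NumPy: particles are pushed through the nonlinear dynamics, reweighted by the measurement likelihood, and resampled. The scalar toy system, noise levels, and particle count are illustrative assumptions, not taken from any contribution to this special issue.

import numpy as np

rng = np.random.default_rng(0)

def particle_filter(z_seq, f, h, Q, R, particles):
    # Bootstrap particle filter for x_k = f(x_{k-1}) + w_k, z_k = h(x_k) + v_k,
    # with w_k ~ N(0, Q) and v_k ~ N(0, R); returns posterior-mean estimates.
    estimates = []
    for z in z_seq:
        # Predict: push every particle through the nonlinear dynamics.
        particles = f(particles) + rng.normal(0.0, np.sqrt(Q), particles.size)
        # Update: weight particles by the Gaussian measurement likelihood.
        w = np.exp(-0.5 * (z - h(particles)) ** 2 / R)
        w /= w.sum()
        estimates.append(float(np.dot(w, particles)))
        # Resample to counter weight degeneracy (multinomial resampling).
        particles = particles[rng.choice(particles.size, particles.size, p=w)]
    return np.array(estimates)

# Toy nonlinear benchmark system (illustrative only).
f = lambda x: 0.5 * x + 25.0 * x / (1.0 + x ** 2)
h = lambda x: x ** 2 / 20.0
x, xs, zs = 0.1, [], []
for _ in range(50):
    x = f(x) + rng.normal(0.0, 1.0)
    xs.append(x)
    zs.append(h(x) + rng.normal(0.0, 1.0))
est = particle_filter(zs, f, h, Q=1.0, R=1.0, particles=rng.normal(0.0, 2.0, 500))
print("mean abs error:", np.mean(np.abs(est - np.array(xs))))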

The goal of this special issue is to present novel results and trends in nonlinear filtering for autonomous vehicles by integrating concepts from nonlinear filtering, machine learning, and information theory (data fusion). Contributions on all types of nonlinear filtering for autonomous vehicles are welcome.

Dr. Sun Young Kim
Dr. Jae Hoon Jeong
Dr. Chang Ho Kang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • integrated navigation
  • indoor positioning system
  • adaptive filter
  • nonlinear filtering
  • estimation theory
  • autonomous vehicles
  • autonomous driving
  • machine learning
  • sensor data fusion

Published Papers (4 papers)

Research

35 pages, 15069 KiB  
Article
Holistic Spatio-Temporal Graph Attention for Trajectory Prediction in Vehicle–Pedestrian Interactions
by Hesham Alghodhaifi and Sridhar Lakshmanan
Sensors 2023, 23(17), 7361; https://doi.org/10.3390/s23177361 - 23 Aug 2023
Cited by 1 | Viewed by 1237
Abstract
Ensuring that intelligent vehicles do not cause fatal collisions remains a persistent challenge due to pedestrians’ unpredictable movements and behavior. The potential for risky situations or collisions arising from even minor misunderstandings in vehicle–pedestrian interactions is a cause for great concern. Considerable research has been dedicated to the advancement of predictive models for pedestrian behavior through trajectory prediction, as well as the exploration of the intricate dynamics of vehicle–pedestrian interactions. However, it is important to note that these studies have certain limitations. In this paper, we propose a novel graph-based trajectory prediction model for vehicle–pedestrian interactions called Holistic Spatio-Temporal Graph Attention (HSTGA) to address these limitations. HSTGA first extracts vehicle–pedestrian interaction spatial features using a multi-layer perceptron (MLP) sub-network and max pooling. Then, the vehicle–pedestrian interaction features are aggregated with the spatial features of pedestrians and vehicles to be fed into the LSTM. The LSTM is modified to learn the vehicle–pedestrian interactions adaptively. Moreover, HSTGA models temporal interactions using an additional LSTM. Then, it models the spatial interactions among pedestrians and between pedestrians and vehicles using graph attention networks (GATs) to combine the hidden states of the LSTMs. We evaluate the performance of HSTGA on three different scenario datasets, including complex unsignalized roundabouts with no crosswalks and unsignalized intersections. The results show that HSTGA outperforms several state-of-the-art methods in predicting linear, curvilinear, and piece-wise linear trajectories of vehicles and pedestrians. Our approach provides a more comprehensive understanding of social interactions, enabling more accurate trajectory prediction for safe vehicle navigation.
(This article belongs to the Special Issue Navigation Filters for Autonomous Vehicles)
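
The authors’ exact HSTGA implementation is not reproduced here, but the graph-attention building block the abstract names can be sketched. Below is a minimal single-head graph attention layer in Python/PyTorch that aggregates per-agent hidden states over an interaction graph; the dimensions, the toy agents, and the fully connected adjacency are illustrative assumptions, not the authors' code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    # Single-head graph attention (in the style of Velickovic et al., 2018)
    # over agent hidden states, e.g. the LSTM states the abstract describes.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)   # shared projection
        self.a = nn.Linear(2 * out_dim, 1, bias=False)    # attention scorer

    def forward(self, h, adj):
        # h: (N, in_dim) agent features; adj: (N, N) 0/1 interaction mask.
        Wh = self.W(h)
        N = Wh.size(0)
        # Pairwise attention logits e_ij = a([Wh_i || Wh_j]).
        e = self.a(torch.cat([Wh.unsqueeze(1).expand(N, N, -1),
                              Wh.unsqueeze(0).expand(N, N, -1)], dim=-1))
        e = F.leaky_relu(e.squeeze(-1), negative_slope=0.2)
        e = e.masked_fill(adj == 0, float("-inf"))   # attend to neighbours only
        alpha = torch.softmax(e, dim=-1)             # (N, N) attention weights
        return F.elu(alpha @ Wh)                     # aggregated agent features

# Toy usage: 3 pedestrians + 1 vehicle, fully connected interaction graph.
h = torch.randn(4, 32)            # hypothetical per-agent hidden states
adj = torch.ones(4, 4)
out = GraphAttentionLayer(32, 16)(h, adj)
print(out.shape)                  # torch.Size([4, 16])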

11 pages, 2512 KiB  
Article
A Study on Object Detection Performance of YOLOv4 for Autonomous Driving of Tram
by Joo Woo, Ji-Hyeon Baek, So-Hyeon Jo, Sun Young Kim and Jae-Hoon Jeong
Sensors 2022, 22(22), 9026; https://doi.org/10.3390/s22229026 - 21 Nov 2022
Cited by 7 | Viewed by 2762
Abstract
Recently, autonomous driving technology has been in the spotlight, but it is still in its infancy in the railway industry. Because trains run on rails, railways have fewer control elements than autonomous road vehicles, but they have the disadvantage that evasive maneuvers cannot be made in a dangerous situation. In addition, when braking, a train cannot decelerate quickly, owing to the weight of the car body and the need to protect passengers. In the case of the tram, one of the railway systems, research has already been conducted on generating a profile that plans braking and acceleration as a base technology for autonomous driving, and on finding the location coordinates of surrounding objects through object recognition. In pilot research on the tram’s automated driving, YOLOv3 was used for object detection to find object coordinates. YOLOv3 is an artificial intelligence model that finds the coordinates, sizes, and classes of objects in an image; it is the third version of YOLO, one of the best-known CNN-based object detection technologies, characterized by moderate accuracy and fast speed. For this paper, we conducted a study to find out whether the object detection performance required for autonomous trams can be sufficiently achieved with an already developed object detection model. For this experiment, we used YOLOv4, the fourth version of YOLO.
(This article belongs to the Special Issue Navigation Filters for Autonomous Vehicles)
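
For readers who want to reproduce the flavour of this experiment, YOLOv4 inference can be run from Python through OpenCV's DNN module, as sketched below. The cfg/weights file names are placeholders: the tram-domain model trained for the paper is not public, so any standard pretrained YOLOv4 pair can stand in.

import cv2
import numpy as np

# Hypothetical file names; substitute a real YOLOv4 cfg/weights pair.
net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
out_names = net.getUnconnectedOutLayersNames()

def detect(image_bgr, conf_thresh=0.5, nms_thresh=0.4):
    h, w = image_bgr.shape[:2]
    # YOLOv4's standard preprocessing: scale to [0, 1], resize, BGR -> RGB.
    blob = cv2.dnn.blobFromImage(image_bgr, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    boxes, confs, class_ids = [], [], []
    for out in net.forward(out_names):
        for det in out:                  # det = [cx, cy, bw, bh, obj, cls...]
            scores = det[5:]
            cls = int(np.argmax(scores))
            conf = float(det[4] * scores[cls])
            if conf > conf_thresh:
                cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
                boxes.append([int(cx - bw / 2), int(cy - bh / 2),
                              int(bw), int(bh)])
                confs.append(conf)
                class_ids.append(cls)
    # Non-maximum suppression to drop overlapping duplicate boxes.
    keep = cv2.dnn.NMSBoxes(boxes, confs, conf_thresh, nms_thresh)
    return [(class_ids[i], confs[i], boxes[i]) for i in np.array(keep).flatten()]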

16 pages, 6142 KiB  
Communication
Application of Convolutional Neural Network (CNN) to Recognize Ship Structures
by Jae-Jun Lim, Dae-Won Kim, Woon-Hee Hong, Min Kim, Dong-Hoon Lee, Sun-Young Kim and Jae-Hoon Jeong
Sensors 2022, 22(10), 3824; https://doi.org/10.3390/s22103824 - 18 May 2022
Cited by 2 | Viewed by 1948
Abstract
The purpose of this paper is to study the recognition of ships and their structures to improve the safety of drone operations engaged in shore-to-ship drone delivery service. This study developed a system that can distinguish between ships and their structures by using a convolutional neural network (CNN). First, the Marine Traffic Management Net dataset is described, and object detection based on the Detectron2 platform is discussed; the experiments and their performance are then presented. In addition, this study was conducted on the basis of actual drone delivery operations, the first air delivery service by drones in Korea.
(This article belongs to the Special Issue Navigation Filters for Autonomous Vehicles)
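
The Detectron2 platform mentioned in the abstract exposes a compact inference API; a minimal sketch follows. The COCO-pretrained Faster R-CNN config is a stand-in assumption, since the Marine Traffic Management Net dataset and the trained ship-structure weights are not publicly available, and the image path is hypothetical.

import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
# Stand-in model: a standard COCO-pretrained Faster R-CNN from the model zoo.
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5   # keep confident detections only
predictor = DefaultPredictor(cfg)

image = cv2.imread("ship.jpg")                # hypothetical drone image
outputs = predictor(image)
instances = outputs["instances"].to("cpu")
# Predicted class ids, bounding boxes, and confidence scores.
print(instances.pred_classes, instances.pred_boxes, instances.scores)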

11 pages, 2324 KiB  
Communication
Determination of Traffic Lane in Tunnel and Positioning of Autonomous Vehicles Using Chromaticity of LED Lights
by Joo Woo, So-Hyeon Jo, Gi-Sig Byun, Sun-Young Kim, Seok-Geun Jee, Ju-Hyeon Seong, Yeon-Man Jeong and Jae-Hoon Jeong
Sensors 2022, 22(8), 2912; https://doi.org/10.3390/s22082912 - 11 Apr 2022
Cited by 2 | Viewed by 1882
Abstract
Location recognition and positioning systems are essential parts of unmanned vehicles. Location estimation in GPS-denied environments is currently being studied using IMU, Wi-Fi, and VLC, but problems such as cumulative error, hardware complexity, and imprecise positioning remain. To address these problems with current positioning systems, the present study proposes a lane positioning technique that analyzes the chromaticity coordinates derived from the color temperature of LED lights in tunnels. A tunnel environment was built using LEDs with three color temperatures, and, to solve nonlinear problems such as lane positioning from chromaticity analysis, a single input single output fuzzy algorithm was developed to estimate the position of an object on the lanes from the chromaticity values of signals measured by RGB sensors. The RGB value measured by the sensor is passed through a pre-processing filter that removes disturbances and accepts only the tunnel LED information, and a fuzzy algorithm then estimates the x-distance indicating the lane position. Finally, the performance of the fuzzy algorithm was evaluated through experiments, and it achieved an average error of less than 4.86%.
(This article belongs to the Special Issue Navigation Filters for Autonomous Vehicles)
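
To illustrate the single input single output fuzzy step, the Python sketch below maps a measured chromaticity-x value to a lane position through triangular membership functions and weighted-average defuzzification. The membership breakpoints and lane centres are invented for illustration; the paper tunes its own sets from the three LED colour temperatures, and the pre-processing filter is omitted here.

import numpy as np

def tri(x, a, b, c):
    # Triangular membership function peaking at b, supported on [a, c].
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Illustrative rule base: chromaticity-x of the measured LED light -> lane.
# Breakpoints and centres below are made-up values, not the paper's.
CHROMA_SETS = [(0.25, 0.30, 0.35), (0.30, 0.35, 0.40), (0.35, 0.40, 0.45)]
LANE_CENTERS = [-1.5, 0.0, 1.5]    # metres from tunnel centreline, per rule

def lane_position(chroma_x):
    # SISO fuzzy inference with weighted-average defuzzification.
    firing = np.array([tri(chroma_x, *s) for s in CHROMA_SETS])
    if firing.sum() == 0.0:
        return None                 # input outside all membership supports
    return float(np.dot(firing, LANE_CENTERS) / firing.sum())

print(lane_position(0.33))          # about -0.6 m in this toy setup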
