Topic Editors

Dr. Yan Huang
State Key Laboratory of Millimeter Waves, School of Information Science and Engineering, Southeast University, Nanjing 210096, China
Dr. Yi Ren
National Laboratory of Radar Signal Processing, School of Electronic Engineering, Xidian University, Xi'an 710071, China
Dr. Penghui Huang
School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
Dr. Jun Wan
School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
Dr. Zhanye Chen
School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
Dr. Shiyang Tang
National Laboratory of Radar Signal Processing, School of Electronic Engineering, Xidian University, Xi'an 710071, China

Information Sensing Technology for Intelligent/Driverless Vehicle, 2nd Volume

Abstract submission deadline
closed (31 March 2024)
Manuscript submission deadline
31 May 2024
Viewed by
10649

Topic Information

Dear Colleagues,

This Topic is a continuation of the previous successful Topic “Information Sensing Technology for Intelligent/Driverless Vehicle”.

As the basis for vehicle positioning and path planning, the environmental perception system is an essential part of an intelligent/driverless vehicle: it acquires information about the vehicle's surroundings, including roads, obstacles, and traffic signs, as well as the vital signs of the driver. In the past few years, environmental perception technology based on various vehicle-mounted sensors (cameras, LiDAR, millimeter-wave radar, and GPS/IMU) has made rapid progress. As research into automated and assisted driving advances, the information sensing technology of driverless cars has become a research hotspot, and the performance of vehicle-mounted sensors must improve to cope with the complex driving environments of daily life. In practice, however, many developmental issues remain, such as immature technology, a lack of advanced instruments, and unrealistic experimental environments. These problems pose great challenges to traditional vehicle-mounted sensor systems and information perception technology, motivating the need for new environmental perception systems, signal processing methods, and even new types of sensors.

This Topic is devoted to highlighting the most advanced studies of the technology, methodology, and applications of sensors mounted on intelligent/driverless vehicles. Papers presenting fundamental theoretical analyses, as well as those demonstrating applications to real-world and/or emerging problems, are welcome. We welcome original research papers and review articles in all areas related to sensors mounted on intelligent/driverless vehicles, including, but not limited to, the following suggested topics:

  • Vehicle-mounted millimeter-wave radar technology;
  • Vehicle-mounted LiDAR technology;
  • Vehicle visual sensors;
  • High-precision positioning technology based on GPS/IMU;
  • Multi-sensor data fusion (MSDF);
  • New sensor systems mounted on intelligent/driverless vehicles.

Dr. Yan Huang
Dr. Yi Ren
Dr. Penghui Huang
Dr. Jun Wan
Dr. Zhanye Chen
Dr. Shiyang Tang
Topic Editors

Keywords

  • information sensing technology
  • intelligent/driverless vehicle
  • millimeter-wave radar
  • LiDAR
  • vehicle visual sensor

Participating Journals

Journal Name     Impact Factor  CiteScore  Launched Year  First Decision (median)  APC
Remote Sensing   5.0            7.9        2009           23 Days                  CHF 2700
Sensors          3.9            6.8        2001           17 Days                  CHF 2600
Smart Cities     6.4            8.5        2018           20.2 Days                CHF 2000
Vehicles         2.2            2.9        2019           22.2 Days                CHF 1600
Geomatics        -              -          2021           18.6 Days                CHF 1000

Preprints.org is a multidisciplinary platform providing preprint services, dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to enjoy the benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your ideas with a time-stamped preprint record;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (8 papers)

17 pages, 9837 KiB  
Article
Robust Calibration Technique for Precise Transformation of Low-Resolution 2D LiDAR Points to Camera Image Pixels in Intelligent Autonomous Driving Systems
by Ravichandran Rajesh and Pudureddiyur Venkataraman Manivannan
Vehicles 2024, 6(2), 711-727; https://doi.org/10.3390/vehicles6020033 - 19 Apr 2024
Viewed by 276
Abstract
In the context of autonomous driving, the fusion of LiDAR and camera sensors is essential for robust obstacle detection and distance estimation. However, accurately estimating the transformation matrix between cost-effective low-resolution LiDAR and cameras presents challenges due to the generation of uncertain points by low-resolution LiDAR. In the present work, a new calibration technique is developed to accurately transform low-resolution 2D LiDAR points into camera pixels by utilizing both static and dynamic calibration patterns. Initially, the key corresponding points are identified at the intersection of 2D LiDAR points and calibration patterns. Subsequently, interpolation is applied to generate additional corresponding points for estimating the homography matrix. The homography matrix is then optimized using the Levenberg–Marquardt algorithm to minimize the rotation error, followed by a Procrustes analysis to minimize the translation error. The accuracy of the developed calibration technique is validated through various experiments (varying distances and orientations). The experimental findings demonstrate that the developed calibration technique significantly reduces the mean reprojection error by 0.45 pixels, rotation error by 65.08%, and distance error by 71.93% compared to the standard homography technique. Thus, the developed calibration technique promises the accurate transformation of low-resolution LiDAR points into camera pixels, thereby contributing to improved obstacle perception in intelligent autonomous driving systems. Full article
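The core of the calibration pipeline described above is estimating a planar homography from LiDAR–camera point correspondences. As an illustration only (not the authors' implementation, and omitting their Levenberg–Marquardt and Procrustes refinement stages), a minimal direct linear transform (DLT) estimator can be sketched as:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 homography mapping src -> dst via the DLT algorithm.
    src, dst: (N, 2) arrays of corresponding points, N >= 4."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pts):
    """Apply homography H to (N, 2) points, returning (N, 2) pixel coordinates."""
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]
```

In practice, the estimated matrix would then be refined exactly as the abstract describes: nonlinear optimization to reduce rotation error, followed by an alignment step for translation.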

30 pages, 14867 KiB  
Article
Architecture and Potential of Connected and Autonomous Vehicles
by Michele Pipicelli, Alfredo Gimelli, Bernardo Sessa, Francesco De Nola, Gianluca Toscano and Gabriele Di Blasio
Vehicles 2024, 6(1), 275-304; https://doi.org/10.3390/vehicles6010012 - 29 Jan 2024
Cited by 1 | Viewed by 1132
Abstract
The transport sector is under an intensive renovation process. Innovative concepts such as shared and intermodal mobility, mobility as a service, and connected and autonomous vehicles (CAVs) will contribute to the transition toward carbon neutrality and are foreseen as crucial parts of future mobility systems, as demonstrated by worldwide efforts in research and industry communities. The main driver of CAV development is road safety, but other benefits, such as comfort and energy saving, are not to be neglected. CAV analysis and development usually focus on Information and Communication Technology (ICT) research themes and less on the entire vehicle system. Many studies on specific aspects of CAVs are available in the literature, including advanced powertrain control strategies and their effects on vehicle efficiency. However, most studies neglect the additional power consumption of the autonomous driving system itself. This work aims to assess the uncertain efficiency improvements of CAVs and offers an overview of their architecture. In particular, a combination of a literature survey and appropriate statistical methods is proposed to provide a comprehensive overview of CAVs. The CAV layout, data processing, and management to be used in energy management strategies are discussed. The data gathered are used to define statistical distributions of the efficiency improvement, the number of sensors, and the computing units and their power requirements. These distributions have been employed within a Monte Carlo simulation to evaluate the effect on vehicle energy consumption and energy saving, using optimal driving behaviour and considering the power consumption of the additional CAV hardware. The results show that the assumption that CAV technologies will reduce energy consumption compared to the reference vehicle should not be taken for granted: in 75% of simulated scenarios, light-duty CAVs worsen energy efficiency, while the results are more promising for heavy-duty vehicles. Full article
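The Monte Carlo step described in the abstract can be illustrated with a toy sketch. The distributions and the 15 kW baseline below are invented for illustration and are not the distributions fitted in the paper; the point is only the mechanism of sampling a saving and a hardware load and counting net-positive scenarios:

```python
import numpy as np

def net_saving_fraction(n=100_000, seed=0):
    """Monte Carlo sketch of the CAV energy trade-off, with made-up
    illustrative distributions: driving-behaviour savings vs. extra
    power drawn by CAV sensing/computing hardware."""
    rng = np.random.default_rng(seed)
    base_kw = 15.0                                  # assumed mean traction power, kW
    saving = rng.normal(0.05, 0.04, n)              # fractional efficiency gain
    hw_kw = rng.normal(1.0, 0.5, n).clip(min=0.0)   # CAV hardware draw, kW
    net = saving * base_kw - hw_kw                  # kW saved net of hardware load
    return float((net > 0).mean())                  # fraction of net-positive scenarios
```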

25 pages, 2259 KiB  
Article
RC-SLAM: Road Constrained Stereo Visual SLAM System Based on Graph Optimization
by Yuan Zhu, Hao An, Huaide Wang, Ruidong Xu, Mingzhi Wu and Ke Lu
Sensors 2024, 24(2), 536; https://doi.org/10.3390/s24020536 - 15 Jan 2024
Viewed by 734
Abstract
Intelligent vehicles are constrained by the road, resulting in a disparity between the six degrees of freedom (DoF) motion assumed by a Visual Simultaneous Localization and Mapping (SLAM) system and the approximately planar motion of vehicles in local areas, which inevitably causes additional pose estimation errors. To address this problem, a stereo Visual SLAM system with road constraints based on graph optimization, called RC-SLAM, is proposed. To address the challenge of representing roads parametrically, a novel method is proposed that approximates local roads as discrete planes and extracts the parameters of local road planes (LRPs) using homography. Unlike conventional methods, constraints between the vehicle and the LRPs are established, effectively mitigating errors arising from the assumed six-DoF motion in the system. Furthermore, to avoid the impact of depth uncertainty in road features, epipolar constraints are employed to estimate rotation by minimizing the distance between road feature points and epipolar lines, so that robust rotation estimation is achieved despite depth uncertainties. Notably, a distinctive nonlinear optimization model based on graph optimization is presented, jointly optimizing the poses of vehicle trajectories, the LRPs, and map points. Experiments on two datasets demonstrate that the proposed system achieves more accurate estimation of vehicle trajectories by introducing constraints between the vehicle and the LRPs. Experiments on a real-world dataset further validate the effectiveness of the proposed system. Full article
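The point-to-epipolar-line distance minimized in rotation estimation of this kind has a compact closed form. A minimal sketch of the generic geometry (not the RC-SLAM code) is:

```python
import numpy as np

def epipolar_distance(F, x1, x2):
    """Point-to-epipolar-line distance for correspondences x1 <-> x2.
    F: 3x3 fundamental matrix; x1, x2: (N, 2) pixel coordinates.
    Returns the distance of each x2 to the epipolar line l' = F x1."""
    h1 = np.hstack([x1, np.ones((len(x1), 1))])   # homogeneous points, image 1
    h2 = np.hstack([x2, np.ones((len(x2), 1))])   # homogeneous points, image 2
    lines = h1 @ F.T                              # epipolar lines in image 2
    num = np.abs(np.sum(h2 * lines, axis=1))      # |x2^T F x1|
    den = np.hypot(lines[:, 0], lines[:, 1])      # line normalization
    return num / den
```

For a rectified stereo pair, epipolar lines are horizontal image rows, which gives a convenient sanity check of the formula.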

19 pages, 2402 KiB  
Article
Controllable Unsupervised Snow Synthesis by Latent Style Space Manipulation
by Hanting Yang, Alexander Carballo, Yuxiao Zhang and Kazuya Takeda
Sensors 2023, 23(20), 8398; https://doi.org/10.3390/s23208398 - 12 Oct 2023
Viewed by 711
Abstract
In the field of intelligent vehicle technology, there is a high dependence on images captured under challenging conditions to develop robust perception algorithms. However, acquiring these images can be both time-consuming and dangerous. To address this issue, unpaired image-to-image translation models offer a solution by synthesizing samples of the desired domain, thus eliminating the reliance on ground truth supervision. However, current methods predominantly produce single projections rather than multiple solutions, let alone allow control over the direction of generation, which leaves room for enhancement. In this study, we propose a generative adversarial network (GAN)-based model, which incorporates both a style encoder and a content encoder, specifically designed to extract relevant information from an image. Further, we employ a decoder to reconstruct an image using these encoded features, while ensuring that the generated output remains within a permissible range by applying a self-regression module to constrain the style latent space. By modifying the hyperparameters, we can generate controllable outputs with specific style codes. We evaluate the performance of our model by generating snow scenes on the Cityscapes and the EuroCity Persons datasets. The results reveal the effectiveness of our proposed methodology, thereby reinforcing the benefits of our approach in the ongoing evolution of intelligent vehicle technology. Full article

18 pages, 7241 KiB  
Article
Road-Network-Map-Assisted Vehicle Positioning Based on Pose Graph Optimization
by Shuchen Xu, Yongrong Sun, Kedong Zhao, Xiyu Fu and Shuaishuai Wang
Sensors 2023, 23(17), 7581; https://doi.org/10.3390/s23177581 - 31 Aug 2023
Viewed by 765
Abstract
Satellite signals are easily lost in urban areas, which makes it difficult to locate vehicles with high precision. Visual odometry has been increasingly applied in navigation systems to solve this problem. However, visual odometry relies on dead-reckoning, in which slight positioning errors accumulate over time and can eventually become catastrophic. Thus, this paper proposes a road-network-map-assisted vehicle positioning method based on the theory of pose graph optimization. This method takes the dead-reckoning result of visual odometry as the input and introduces constraints from a point-line-form road network map to suppress the accumulated error and improve vehicle positioning accuracy. We design an optimization and prediction model in which the original trajectory of visual odometry is optimized to obtain a corrected trajectory by introducing constraints from map correction points. The vehicle positioning result at the next moment is predicted based on the latest output of the visual odometry and the corrected trajectory. Experiments carried out on the KITTI and campus datasets demonstrate the superiority of the proposed method, which provides stable and accurate vehicle position estimation in real time and has higher positioning accuracy than similar map-assisted methods. Full article
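The idea of fusing drifting dead-reckoned odometry with map correction points via pose graph optimization can be sketched in one dimension. This toy linear least-squares version (the weights and setup are illustrative, not the paper's model) shows how a single map anchor pulls an accumulating drift back toward the truth:

```python
import numpy as np

def optimize_1d_trajectory(odom, anchors, w_odom=1.0, w_map=10.0):
    """Tiny 1-D pose graph: poses x_0..x_n linked by odometry increments,
    with map correction points anchoring selected poses.
    odom: list of measured increments x_{i+1} - x_i.
    anchors: dict {pose index: absolute position from the map}."""
    n = len(odom) + 1
    rows, rhs, wts = [], [], []
    # prior on x_0 to fix the gauge freedom
    r = np.zeros(n); r[0] = 1.0
    rows.append(r); rhs.append(0.0); wts.append(w_map)
    for i, d in enumerate(odom):                   # odometry constraints
        r = np.zeros(n); r[i + 1] = 1.0; r[i] = -1.0
        rows.append(r); rhs.append(d); wts.append(w_odom)
    for i, p in anchors.items():                   # map correction points
        r = np.zeros(n); r[i] = 1.0
        rows.append(r); rhs.append(p); wts.append(w_map)
    sw = np.sqrt(np.asarray(wts))[:, None]
    x, *_ = np.linalg.lstsq(np.asarray(rows) * sw, np.asarray(rhs) * sw[:, 0],
                            rcond=None)
    return x
```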

22 pages, 17114 KiB  
Article
Radar Timing Range–Doppler Spectral Target Detection Based on Attention ConvLSTM in Traffic Scenes
by Fengde Jia, Jihong Tan, Xiaochen Lu and Junhui Qian
Remote Sens. 2023, 15(17), 4150; https://doi.org/10.3390/rs15174150 - 24 Aug 2023
Cited by 3 | Viewed by 1322
Abstract
With the development of autonomous driving and the emergence of various intelligent traffic scenarios, object detection technology based on deep learning is increasingly being applied to real traffic scenes. Commonly used detection devices include LiDAR and cameras. Since traffic scene target detection technology must be suitable for mass production, the advantages of millimeter-wave radar, such as low cost and immunity to environmental interference, have come to the fore. The performance of LiDAR and cameras is greatly reduced by their sensitivity to light, which hampers target detection at night and in bad weather, whereas millimeter-wave radar can overcome these harsh environments and is a valuable aid to safe driving. In this work, we propose a deep-learning-based object detection method that operates on the radar range–Doppler spectrum in traffic scenarios. The algorithm uses YOLOv8 as the basic architecture and makes full use of the time-series characteristics of range–Doppler spectrum data by introducing a ConvLSTM network to process the temporal sequence. To improve the model's ability to detect small objects, an efficient and lightweight Efficient Channel Attention (ECA) module is introduced. In extensive experiments, our model shows better performance than other state-of-the-art methods on two publicly available radar datasets, CARRADA and RADDet. Whereas other mainstream methods achieve only 30–60% mAP at an IoU of 0.3, our model achieves 74.51% and 75.62% on the RADDet and CARRADA datasets, respectively, with better robustness and generalization ability. Full article
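Efficient Channel Attention re-weights feature channels using a 1-D convolution over globally pooled channel descriptors, avoiding the dimensionality reduction of squeeze-and-excitation blocks. A dependency-free NumPy sketch of the forward pass (with a fixed averaging kernel standing in for the learned conv weights) is:

```python
import numpy as np

def eca(feature_map, k=3):
    """Efficient Channel Attention, forward-pass sketch.
    feature_map: (C, H, W) array; k: odd 1-D kernel size over channels."""
    C = feature_map.shape[0]
    y = feature_map.mean(axis=(1, 2))              # global average pooling -> (C,)
    yp = np.pad(y, k // 2, mode="edge")            # pad channel descriptor vector
    kernel = np.full(k, 1.0 / k)                   # stand-in for learned conv weights
    z = np.array([np.dot(yp[i:i + k], kernel) for i in range(C)])
    w = 1.0 / (1.0 + np.exp(-z))                   # sigmoid gate per channel
    return feature_map * w[:, None, None]          # re-weight channels
```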

25 pages, 13207 KiB  
Article
Layered SOTIF Analysis and 3σ-Criterion-Based Adaptive EKF for Lidar-Based Multi-Sensor Fusion Localization System on Foggy Days
by Lipeng Cao, Yansong He, Yugong Luo and Jian Chen
Remote Sens. 2023, 15(12), 3047; https://doi.org/10.3390/rs15123047 - 10 Jun 2023
Cited by 3 | Viewed by 1350
Abstract
The detection range and accuracy of light detection and ranging (LiDAR) systems are sensitive to variations in fog concentration, leading to safety of the intended functionality (SOTIF)-related problems in the LiDAR-based multi-sensor fusion localization system (LMSFLS). However, because the weather is uncontrollable, it is almost impossible to quantitatively analyze the effects of fog on the LMSFLS in a realistic environment. Therefore, in this study, we conduct a layered quantitative SOTIF analysis of the LMSFLS on foggy days using fog simulation. Based on the analysis results, we identify the component-level, system-level, and vehicle-level functional insufficiencies of the LMSFLS, the corresponding quantitative triggering conditions, and the potential SOTIF-related risks. To address these risks, we propose a functional modification strategy that incorporates visibility recognition and a 3σ-criterion-based, variance-mismatch-degree-grading adaptive extended Kalman filter. The visibility of a scenario is recognized to judge whether the measurement information of the LiDAR odometry is disturbed by fog, and the proposed filter fuses the abnormal measurement information of the LiDAR odometry with IMU and GNSS data. Simulation results demonstrate that the proposed strategy can inhibit the divergence of the LMSFLS, improve the SOTIF of self-driving cars on foggy days, and accurately recognize the visibility of the scenarios. Full article
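A 3σ-criterion-based grading of this kind can be illustrated with a single Kalman measurement update: the normalized innovation decides which grade of measurement-covariance inflation to apply, down-weighting fog-corrupted data. This is a generic sketch in which the thresholds and grade scales are invented, not the paper's:

```python
import numpy as np

def adaptive_update(x, P, z, H, R, grades=(1.0, 10.0, 1e6)):
    """One Kalman measurement update with a 3-sigma innovation check.
    Measurement covariance R is inflated by a grade chosen from the
    normalized innovation (squared Mahalanobis distance)."""
    y = z - H @ x                                  # innovation
    S = H @ P @ H.T + R
    d2 = float(y.T @ np.linalg.inv(S) @ y)         # squared Mahalanobis distance
    dim = len(y)
    if d2 <= 9.0 * dim:                            # within ~3 sigma: trust it
        scale = grades[0]
    elif d2 <= 36.0 * dim:                         # mildly inconsistent: down-weight
        scale = grades[1]
    else:                                          # grossly inconsistent: reject
        scale = grades[2]
    S = H @ P @ H.T + scale * R
    K = P @ H.T @ np.linalg.inv(S)                 # adapted Kalman gain
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P
```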

25 pages, 30818 KiB  
Article
Improving Pedestrian Safety Using Ultra-Wideband Sensors: A Study of Time-to-Collision Estimation
by Salah Fakhoury and Karim Ismail
Sensors 2023, 23(8), 4171; https://doi.org/10.3390/s23084171 - 21 Apr 2023
Cited by 1 | Viewed by 2694
Abstract
Pedestrian safety has been evaluated based on the mean number of pedestrian-involved collisions. Traffic conflicts have been used as a data source to supplement collision data because of their higher frequency and lower damage. Currently, the main source of traffic conflict observation is through video cameras that can efficiently gather rich data but can be limited by weather and lighting conditions. The utilization of wireless sensors to gather traffic conflict data can augment video sensors because of their robustness to adverse weather conditions and poor illumination. This study presents a prototype of a safety assessment system that utilizes ultra-wideband wireless sensors to detect traffic conflicts. A customized variant of time-to-collision is used to detect conflicts at different severity thresholds. Field trials are conducted using vehicle-mounted beacons and a phone to simulate sensors on vehicles and smart devices on pedestrians. Proximity measures are calculated in real-time to alert smartphones and prevent collisions, even in adverse weather conditions. Validation is conducted to assess the accuracy of time-to-collision measurements at various distances from the phone. Several limitations are identified and discussed, along with recommendations for improvement and lessons learned for future research and development. Full article
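Under a constant-velocity assumption, time-to-collision from successive range measurements reduces to distance over closing speed. A minimal sketch (the authors use a customized variant, not reproduced here) is:

```python
def time_to_collision(ranges, dt):
    """Time-to-collision from successive UWB range measurements.
    ranges: distances in metres at a fixed sampling interval dt (seconds).
    Returns TTC in seconds, or None when the range is not closing."""
    if len(ranges) < 2:
        return None
    closing_speed = (ranges[-2] - ranges[-1]) / dt  # m/s, positive when approaching
    if closing_speed <= 0:
        return None                                 # diverging or stationary
    return ranges[-1] / closing_speed
```

A conflict would then be flagged whenever the returned TTC falls below a chosen severity threshold.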
