Multi-modal Sensor Fusion and 3D LiDARs for Vehicle Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Vehicular Sensing".

Deadline for manuscript submissions: 31 August 2024 | Viewed by 6763

Special Issue Editor


Prof. Dr. Jesús Morales
Guest Editor
Institute for Mechatronics Engineering & Cyber-Physical Systems (IMECH), Universidad de Málaga, 29071 Málaga, Spain
Interests: mobile robotics; search and rescue; self-driving cars; machine learning; sensor fusion

Special Issue Information

Dear Colleagues,

"Multi-Modal Sensor Fusion and 3D LiDARs for Vehicle Applications" is a Special Issue focused on the latest research and developments in the field of sensor fusion and 3D LiDAR technology for vehicles. The aim of this Special Issue is to bring together leading experts from academia and the industry to share their latest findings and advancements in this rapidly evolving field.

Sensor fusion is the process of combining data from multiple sensors to improve robustness and performance. In the context of vehicle applications, sensor fusion is used to enhance the capabilities of mobile robots, advanced driver assistance systems (ADAS), and self-driving cars. By combining measurements from multiple sensors, such as cameras, radars, and LiDARs, a vehicle can perceive its environment more accurately and make better decisions in real time. Sensor fusion also makes it possible to handle individual sensor failures and provides robustness in adverse conditions.
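
To make the idea concrete, the following minimal Python sketch (our own illustration, with hypothetical sensor noise values) fuses independent range estimates from a camera, a radar, and a LiDAR by inverse-variance weighting, and shows how dropping a failed sensor degrades the estimate gracefully rather than breaking the pipeline:

```python
# Hypothetical range estimates (metres) to the same obstacle, with the
# assumed standard deviation of each sensor's measurement noise.
measurements = {"camera": (24.8, 1.5), "radar": (25.3, 0.6), "lidar": (25.1, 0.1)}

def fuse(meas):
    """Inverse-variance weighted fusion: more precise sensors get larger weights."""
    weights = {name: 1.0 / sigma ** 2 for name, (_, sigma) in meas.items()}
    total = sum(weights.values())
    value = sum(w * meas[name][0] for name, w in weights.items()) / total
    return value, (1.0 / total) ** 0.5

fused, sigma = fuse(measurements)
print(f"fused range = {fused:.2f} m (sigma = {sigma:.2f} m)")

# Handling a sensor failure: drop the camera (e.g. at night) and re-fuse.
fused, sigma = fuse({k: v for k, v in measurements.items() if k != "camera"})
print(f"without camera: fused range = {fused:.2f} m (sigma = {sigma:.2f} m)")
```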

Modern 3D LiDARs work by simultaneously emitting multiple laser beams and capturing the elapsed time for each beam to bounce back. This technology has become a key component for autonomous navigation because it perceives the surroundings precisely and quickly. The 3D LiDAR data can be used for both static and dynamic obstacle detection, which is a crucial capability for the development of self-driving cars.
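
As a back-of-the-envelope illustration of this time-of-flight principle (the beam angles and timing below are hypothetical), each return can be converted into a 3D point from the measured round-trip time and the beam direction:

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def returns_to_points(round_trip_times_s, azimuth_rad, elevation_rad):
    """Convert LiDAR returns to Cartesian points in the sensor frame."""
    r = C * np.asarray(round_trip_times_s) / 2.0  # range = c * t / 2
    x = r * np.cos(elevation_rad) * np.cos(azimuth_rad)
    y = r * np.cos(elevation_rad) * np.sin(azimuth_rad)
    z = r * np.sin(elevation_rad)
    return np.stack([x, y, z], axis=-1)

# A return from roughly 25 m straight ahead takes about 167 ns to come back.
pts = returns_to_points([166.8e-9], np.array([0.0]), np.array([0.0]))
print(pts)  # ~[[25.0, 0.0, 0.0]]
```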

The Guest Editor invites contributions to this Special Issue on topics including, but not limited to, the following:

  • Sensor fusion algorithms;
  • 3D LiDARs for perception and localization;
  • Obstacle detection and tracking;
  • Multi-modal sensor fusion for ADAS and autonomous vehicles;
  • LiDAR-camera fusion;
  • LiDAR-radar fusion;
  • Robustness and safety of multi-modal sensor fusion systems.

Overall, this Special Issue will provide a comprehensive view of the latest research and developments in sensor fusion and 3D LiDAR technology for vehicle applications. It will be a valuable resource for researchers, engineers, and practitioners working in this exciting field, providing them with the latest insights and advancements.

Prof. Dr. Jesús Morales
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • mobile robot
  • sensor fusion
  • multi-modal sensors
  • 3D LiDAR
  • robust systems
  • localization
  • obstacle detection

Published Papers (6 papers)


Research

11 pages, 3889 KiB  
Article
New Scheme of MEMS-Based LiDAR by Synchronized Dual-Laser Beams for Detection Range Enhancement
by Chien-Wei Huang, Chun-Nien Liu, Sheng-Chuan Mao, Wan-Shao Tsai, Zingway Pei, Charles W. Tu and Wood-Hi Cheng
Sensors 2024, 24(6), 1897; https://doi.org/10.3390/s24061897 - 15 Mar 2024
Viewed by 521
Abstract
A new scheme is presented for MEMS-based LiDAR with synchronized dual-laser beams that enhances the detection range and yields precise point-cloud data without using higher laser power. The novel MEMS-based LiDAR module uses the principal laser light to build point-cloud data. In addition, an auxiliary laser light amplifies the signal-to-noise ratio to enhance the detection range. This LiDAR module exhibits a field of view (FOV) of 45° (H) × 25° (V), an angular resolution of 0.11° (H) × 0.11° (V), and a maximum detection distance of 124 m. The maximum detection distance is enhanced by 16%, from 107 m to 124 m, with a laser power of 1 W and an additional auxiliary laser power of 0.355 W. Furthermore, the simulation results show that the maximum detection distance can reach 300 m with a laser power of 8 W, or with only 6 W if an auxiliary laser light of 2.84 W (35.5% of the laser power) is used. This result indicates that the synchronized dual-laser beams can achieve a long detection distance and reduce the laser power by 30%, hence saving on overall laser system costs. Therefore, the proposed LiDAR module can be applied for a long detection range in autonomous vehicles without requiring higher laser power if it utilizes an auxiliary laser light.
(This article belongs to the Special Issue Multi-modal Sensor Fusion and 3D LiDARs for Vehicle Applications)
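
As a rough plausibility check of the reported enhancement (our own simplification, not the authors' model), assuming the received power from an extended target scales as P/R² so that the maximum range scales as √P, the added 0.355 W of auxiliary power predicts roughly the 16% gain reported above:

```python
# Rough check under a simple extended-target LiDAR model where received
# power ~ P / R^2, so maximum range ~ sqrt(P). Figures taken from the abstract.
P_principal = 1.0      # W
P_auxiliary = 0.355    # W
R_baseline = 107.0     # m at 1 W

scale = ((P_principal + P_auxiliary) / P_principal) ** 0.5
R_enhanced = R_baseline * scale
print(f"predicted range: {R_enhanced:.0f} m "
      f"(+{100 * (scale - 1):.0f}%)")  # ~125 m, +16%; the paper reports 124 m
```

The paper's actual mechanism relies on the auxiliary beam amplifying the signal-to-noise ratio; the square-root scaling above is only a sanity check on the reported figures.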

20 pages, 4551 KiB  
Article
Extrinsic Calibration of Thermal Camera and 3D LiDAR Sensor via Human Matching in Both Modalities during Sensor Setup Movement
by Farhad Dalirani and Mahmoud R. El-Sakka
Sensors 2024, 24(2), 669; https://doi.org/10.3390/s24020669 - 20 Jan 2024
Viewed by 1012
Abstract
LiDAR sensors, pivotal in various fields like agriculture and robotics for tasks such as 3D object detection and map creation, are increasingly coupled with thermal cameras to harness heat information. This combination proves particularly effective in adverse conditions like darkness and rain. Ensuring seamless fusion between the sensors necessitates precise extrinsic calibration. Our innovative calibration method leverages human presence during sensor setup movements, eliminating the reliance on dedicated calibration targets. It optimizes extrinsic parameters by employing a novel evolutionary algorithm on a specifically designed loss function that measures human alignment across modalities. Our approach showcases a notable 4.43% improvement in the loss over extrinsic parameters obtained from target-based calibration on the FieldSAFE dataset. This advancement reduces costs related to target creation, saves time in diverse pose collection, mitigates repetitive calibration efforts amid sensor drift or setting changes, and broadens accessibility by obviating the need for specific targets. The adaptability of our method to various environments, like urban streets or expansive farm fields, stems from leveraging the ubiquitous presence of humans. Our method presents an efficient, cost-effective, and readily applicable means of extrinsic calibration, enhancing sensor fusion capabilities in critical fields reliant on precise and robust data acquisition.
(This article belongs to the Special Issue Multi-modal Sensor Fusion and 3D LiDARs for Vehicle Applications)
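
The human-matching loss and the evolutionary algorithm are the paper's own contributions; purely as a generic illustration of target-free extrinsic calibration, the sketch below refines a 6-DoF extrinsic guess with a simple (1+λ) evolution strategy against a placeholder cross-modal misalignment loss (all names, values, and the placeholder loss are hypothetical):

```python
import numpy as np

def misalignment_loss(extrinsic_6dof):
    """Placeholder: score how poorly LiDAR detections of a person project onto
    the person's silhouette in the thermal image (lower is better). In practice
    this would project points using the candidate extrinsic parameters."""
    target = np.array([0.05, -0.02, 0.10, 0.01, 0.00, -0.03])  # fake optimum
    return float(np.sum((extrinsic_6dof - target) ** 2))

def evolve_extrinsics(x0, sigma=0.05, population=32, generations=200, seed=0):
    """Minimal (1+lambda) evolution strategy over [tx, ty, tz, roll, pitch, yaw]."""
    rng = np.random.default_rng(seed)
    best = np.asarray(x0, dtype=float)
    best_loss = misalignment_loss(best)
    for _ in range(generations):
        candidates = best + sigma * rng.standard_normal((population, 6))
        losses = np.array([misalignment_loss(c) for c in candidates])
        i = int(np.argmin(losses))
        if losses[i] < best_loss:
            best, best_loss = candidates[i], losses[i]
    return best, best_loss

est, loss = evolve_extrinsics(np.zeros(6))
print("estimated extrinsics:", np.round(est, 3), "loss:", round(loss, 6))
```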

35 pages, 20635 KiB  
Article
Leveraging LiDAR-Based Simulations to Quantify the Complexity of the Static Environment for Autonomous Vehicles in Rural Settings
by Mohamed Abohassan and Karim El-Basyouny
Sensors 2024, 24(2), 452; https://doi.org/10.3390/s24020452 - 11 Jan 2024
Viewed by 787
Abstract
This paper uses virtual simulations to examine the interaction between autonomous vehicles (AVs) and their surrounding environment. A framework was developed to estimate the environment's complexity by calculating the real-time data processing requirements for AVs to navigate effectively. The VISTA simulator was used to synthesize viewpoints to replicate the captured environment accurately. With an emphasis on static physical features, roadways were dissected into relevant road features (RRFs) and the full environment (FE) to study the impact of roadside features on scene complexity and to demonstrate the gravity of wildlife–vehicle collisions (WVCs) for AVs. The results indicate that roadside features substantially increase environmental complexity, by up to 400%. Adding a single lane to the road was observed to increase the processing requirements by 12.3–16.5%. Crest vertical curves decrease data rates due to occlusion challenges, with a reported average of 4.2% data loss, while sag curves can increase the complexity by 7%. In horizontal curves, roadside occlusion contributed to a severe loss of road information, leading to a decrease in data rate requirements by as much as 19%. As for weather conditions, heavy rain increased the AV's processing demands by a staggering 240% compared to normal weather conditions. AV developers and government agencies can exploit the findings of this study to better tailor AV designs and meet the necessary infrastructure requirements.
(This article belongs to the Special Issue Multi-modal Sensor Fusion and 3D LiDARs for Vehicle Applications)
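
The complexity framework is described in the paper itself; as a toy illustration of the underlying metric (all sensor parameters and point counts below are hypothetical), the processing demand of a scene can be expressed as the raw LiDAR data rate it generates, and configurations can then be compared as percentage changes:

```python
def lidar_data_rate_mbit_s(points_per_scan, scans_per_s=10, bytes_per_point=16):
    """Raw LiDAR data rate in megabits per second (hypothetical sensor parameters)."""
    return points_per_scan * scans_per_s * bytes_per_point * 8 / 1e6

rrf_rate = lidar_data_rate_mbit_s(points_per_scan=60_000)    # relevant road features only
fe_rate = lidar_data_rate_mbit_s(points_per_scan=300_000)    # full environment incl. roadside

increase = 100 * (fe_rate - rrf_rate) / rrf_rate
print(f"RRF: {rrf_rate:.1f} Mbit/s, FE: {fe_rate:.1f} Mbit/s, +{increase:.0f}% complexity")
```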

22 pages, 32270 KiB  
Article
Exploring Adversarial Robustness of LiDAR Semantic Segmentation in Autonomous Driving
by K. T. Yasas Mahima, Asanka Perera, Sreenatha Anavatti and Matt Garratt
Sensors 2023, 23(23), 9579; https://doi.org/10.3390/s23239579 - 02 Dec 2023
Cited by 1 | Viewed by 1107
Abstract
Deep learning networks have demonstrated outstanding performance in 2D and 3D vision tasks. However, recent research has demonstrated that these networks fail when imperceptible perturbations, known as adversarial attacks, are added to the input. This phenomenon has recently received increased interest in the field of autonomous vehicles and has been extensively researched on 2D image-based perception tasks and 3D object detection. However, the adversarial robustness of 3D LiDAR semantic segmentation in autonomous vehicles is a relatively unexplored topic. This study extends adversarial examples to LiDAR-based 3D semantic segmentation. We developed and analyzed three LiDAR point-based adversarial attack methods on different networks developed on the SemanticKITTI dataset. The findings illustrate that the Cylinder3D network has the highest adversarial susceptibility to the analyzed attacks. We investigated how the class-wise point distribution influences the adversarial robustness of each class in the SemanticKITTI dataset and discovered that ground-level points are extremely vulnerable to point perturbation attacks. Further, the transferability of each attack strategy was assessed, and we found that networks relying on point data representation demonstrate a notable level of resistance. Our findings will enable future research in developing more complex and specific adversarial attacks against LiDAR segmentation and countermeasures against adversarial attacks.
(This article belongs to the Special Issue Multi-modal Sensor Fusion and 3D LiDARs for Vehicle Applications)
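
The three attack methods studied in the paper are its own contributions; the sketch below only illustrates the general idea of a gradient-based point-perturbation attack (an FGSM-style step on point coordinates) against a toy per-point classifier, so the model, labels, and budget are all hypothetical:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy per-point "segmentation" model: 3D coordinates -> 4 class logits.
model = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 4))
points = torch.randn(1024, 3)                     # one synthetic LiDAR scan (N x 3)
labels = (points[:, 2] > 0).long() + 2 * (points[:, 0] > 0).long()  # synthetic classes

# Fit the toy model briefly so the attack has something to degrade.
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(300):
    opt.zero_grad()
    nn.functional.cross_entropy(model(points), labels).backward()
    opt.step()

# FGSM-style point perturbation: move each point along the sign of the loss gradient.
epsilon = 0.1                                     # per-coordinate perturbation budget
points_adv = points.clone().requires_grad_(True)
nn.functional.cross_entropy(model(points_adv), labels).backward()
points_adv = (points_adv + epsilon * points_adv.grad.sign()).detach()

with torch.no_grad():
    clean = (model(points).argmax(1) == labels).float().mean()
    attacked = (model(points_adv).argmax(1) == labels).float().mean()
print(f"per-point accuracy: clean {clean:.3f} -> attacked {attacked:.3f}")
```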

24 pages, 4257 KiB  
Article
Multi-Level Optimization for Data-Driven Camera–LiDAR Calibration in Data Collection Vehicles
by Zijie Jiang, Zhongliang Cai, Nian Hui and Bozhao Li
Sensors 2023, 23(21), 8889; https://doi.org/10.3390/s23218889 - 01 Nov 2023
Viewed by 1172
Abstract
Accurately calibrating camera–LiDAR systems is crucial for achieving effective data fusion, particularly in data collection vehicles. Data-driven calibration methods have gained prominence over target-based methods due to their superior adaptability to diverse environments. However, current data-driven calibration methods are susceptible to suboptimal initialization parameters, which can significantly impact the accuracy and efficiency of the calibration process. In response to these challenges, this paper proposes a novel general model for camera–LiDAR calibration that abstracts away the technical details of existing methods, introduces an improved objective function that effectively mitigates the issue of suboptimal parameter initialization, and develops a multi-level parameter optimization algorithm that strikes a balance between accuracy and efficiency during iterative optimization. The experimental results demonstrate that the proposed method effectively mitigates the effects of suboptimal initial calibration parameters, achieving highly accurate and efficient calibration results. The suggested technique exhibits versatility and adaptability to accommodate various sensor configurations, making it a notable advancement in the field of camera–LiDAR calibration, with potential applications in diverse fields including autonomous driving, robotics, and computer vision.
(This article belongs to the Special Issue Multi-modal Sensor Fusion and 3D LiDARs for Vehicle Applications)
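
The paper's objective function and optimizer are its own; purely to illustrate the coarse-to-fine intuition behind multi-level parameter optimization (the placeholder objective and all values below are hypothetical), a calibration search can start with a wide grid around the initial extrinsics and repeatedly re-centre a finer grid on the best candidate:

```python
import numpy as np

def calibration_objective(extrinsic_6dof):
    """Placeholder for a data-driven objective, e.g. edge or intensity alignment
    between projected LiDAR points and the camera image (lower is better)."""
    target = np.array([0.12, -0.04, 0.30, 0.02, -0.01, 0.05])  # fake optimum
    return float(np.sum((extrinsic_6dof - target) ** 2))

def coarse_to_fine_search(x0, half_width=0.5, levels=4, steps=5):
    """Multi-level grid refinement over [tx, ty, tz, roll, pitch, yaw]."""
    best = np.asarray(x0, dtype=float)
    for _ in range(levels):
        offsets = np.linspace(-half_width, half_width, steps)
        # Search each parameter axis independently at this resolution
        # (a full 6D grid would need steps**6 evaluations).
        for axis in range(6):
            candidates = [best + np.eye(6)[axis] * o for o in offsets]
            best = min(candidates, key=calibration_objective)
        half_width /= steps          # next level: a finer grid around the best guess
    return best

est = coarse_to_fine_search(np.zeros(6))
print("estimated extrinsics:", np.round(est, 3))
```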

20 pages, 6446 KiB  
Article
AFTR: A Robustness Multi-Sensor Fusion Model for 3D Object Detection Based on Adaptive Fusion Transformer
by Yan Zhang, Kang Liu, Hong Bao, Xu Qian, Zihan Wang, Shiqing Ye and Weicen Wang
Sensors 2023, 23(20), 8400; https://doi.org/10.3390/s23208400 - 12 Oct 2023
Cited by 1 | Viewed by 1405
Abstract
Multi-modal sensors are key to ensuring the robust and accurate operation of autonomous driving systems, where LiDAR and cameras are important on-board sensors. However, current fusion methods face challenges due to inconsistent multi-sensor data representations and the misalignment of dynamic scenes. Specifically, current fusion methods either explicitly correlate multi-sensor data features by calibrating parameters, ignoring the feature-blurring problems caused by misalignment, or find correlated features between multi-sensor data through global attention, causing rapidly escalating computational costs. On this basis, we propose a transformer-based end-to-end multi-sensor fusion framework named the adaptive fusion transformer (AFTR). The proposed AFTR consists of the adaptive spatial cross-attention (ASCA) mechanism and the spatial temporal self-attention (STSA) mechanism. Specifically, ASCA adaptively associates and interacts with multi-sensor data features in 3D space through learnable local attention, alleviating the problem of misaligned geometric information and reducing computational costs, and STSA interacts with cross-temporal information using learnable offsets in deformable attention, mitigating displacements due to dynamic scenes. We show through numerous experiments that the AFTR obtains state-of-the-art (SOTA) performance in the nuScenes 3D object detection task (74.9% NDS and 73.2% mAP) and demonstrates strong robustness to misalignment (only a 0.2% NDS drop with slight noise). At the same time, we demonstrate the effectiveness of the AFTR components through ablation studies. In summary, the proposed AFTR is an accurate, efficient, and robust multi-sensor data fusion framework.
(This article belongs to the Special Issue Multi-modal Sensor Fusion and 3D LiDARs for Vehicle Applications)
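
ASCA and STSA are the paper's own mechanisms; the snippet below is only a minimal reminder of the cross-attention building block they refine, with LiDAR-derived BEV queries attending to flattened camera feature tokens (all shapes and dimensions are hypothetical). The size of the dense attention-weight tensor hints at why global attention becomes costly, which is the motivation for local and deformable attention:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

embed_dim, num_heads = 128, 8
batch = 2
num_bev_queries = 200 * 200 // 100   # e.g. a subsampled 200x200 BEV grid
num_img_tokens = 32 * 88             # e.g. a flattened 32x88 camera feature map

# LiDAR-derived BEV queries attend to camera feature tokens (keys/values).
cross_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

bev_queries = torch.randn(batch, num_bev_queries, embed_dim)
img_features = torch.randn(batch, num_img_tokens, embed_dim)

fused, attn_weights = cross_attn(query=bev_queries, key=img_features, value=img_features)
print(fused.shape)         # torch.Size([2, 400, 128]); each query now mixes image context
print(attn_weights.shape)  # torch.Size([2, 400, 2816]); dense weights are the costly part
```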
