
Multi-Sensor Fusion for Target Detection and Tracking

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Remote Sensors".

Deadline for manuscript submissions: closed (31 October 2023) | Viewed by 3835

Special Issue Editors


Guest Editor
Automatic Target Recognition (ATR) Key Lab, College of Electronic Science and Engineering, National University of Defense Technology (NUDT), Changsha 410073, China
Interests: developing air-to-ground sensing algorithms for drones (e.g., classification, detection, tracking, localization and mapping)

Guest Editor
School of Information and Communication Engineering, Dalian University of Technology, Dalian 116024, China
Interests: computer vision; image processing; object tracking; visual tracking

Guest Editor
College of Electronic Science and Technology, National University of Defense Technology, Changsha 410073, China
Interests: small object detection; multiple object tracking

Guest Editor
College of Intelligence Science and Technology, National University of Defense Technology, Changsha 410073, China
Interests: visual tracking and machine learning

Special Issue Information

Dear Colleagues,

With its expanding range of civilian and military applications, object detection and tracking is drawing increasing attention. Visible light cameras are among the most popular imaging sensors, and existing detection and tracking algorithms handle single-modal (visible) observation data well but fail in dark or foggy scenarios. To address this issue, sensor fusion remains an open research problem: combining sensors promises better detection and tracking results than any single sensor can deliver. In this Special Issue, multi-modal sensor data (i.e., visible, thermal, time, location, altitude, IMU) are collected in real-world outdoor environments. We believe that such multi-modal sensor data can boost object detection and tracking performance. The expected outcomes of this Special Issue are of great theoretical and practical value for improving the environmental perception ability of robots and drones in complex scenarios.

Dr. Dongdong Li
Prof. Dr. Dong Wang
Prof. Dr. Yan Zhang
Dr. Yangliu Kuai
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • object detection
  • object tracking
  • multi-modal sensing
  • multi-sensor fusion

Published Papers (3 papers)


Research

16 pages, 10558 KiB  
Article
Multi-Task Foreground-Aware Network with Depth Completion for Enhanced RGB-D Fusion Object Detection Based on Transformer
by Jiasheng Pan, Songyi Zhong, Tao Yue, Yankun Yin and Yanhao Tang
Sensors 2024, 24(7), 2374; https://doi.org/10.3390/s24072374 - 08 Apr 2024
Viewed by 297
Abstract
Fusing multiple sensor perceptions, specifically LiDAR and camera, is a prevalent method for target recognition in autonomous driving systems. Traditional object detection algorithms are limited by the sparse nature of LiDAR point clouds, resulting in poor fusion performance, especially for detecting small and distant targets. In this paper, a multi-task parallel neural network based on the Transformer is constructed to simultaneously perform depth completion and object detection. The loss functions are redesigned to reduce environmental noise in depth completion, and a new fusion module is designed to enhance the network’s perception of the foreground and background. The network leverages the correlation between RGB pixels for depth completion, completing the LiDAR point cloud and addressing the mismatch between sparse LiDAR features and dense pixel features. Subsequently, we extract depth map features and effectively fuse them with RGB features, fully utilizing the depth feature differences between foreground and background to enhance object detection performance, especially for challenging targets. Compared to the baseline network, improvements of 4.78%, 8.93%, and 15.54% are achieved in the difficult indicators for cars, pedestrians, and cyclists, respectively. Experimental results also demonstrate that the network achieves a speed of 38 fps, validating the efficiency and feasibility of the proposed method.
(This article belongs to the Special Issue Multi-Sensor Fusion for Target Detection and Tracking)
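To make the fusion idea concrete, below is a minimal, hypothetical PyTorch sketch of a foreground-aware RGB-depth feature fusion block: a per-pixel foreground gate is predicted from the completed-depth features and used to weight the depth cues before they are merged with the RGB features. The module name, layer layout and gating scheme are illustrative assumptions, not the architecture published in the paper.

```python
# Hypothetical sketch of foreground-aware RGB-depth feature fusion.
# Layer names and the gating scheme are illustrative assumptions.
import torch
import torch.nn as nn

class ForegroundAwareFusion(nn.Module):
    """Fuse RGB and completed-depth feature maps with a learned foreground gate."""

    def __init__(self, channels: int):
        super().__init__()
        # Predict a per-pixel foreground probability from the depth features,
        # exploiting the depth contrast between foreground objects and background.
        self.fg_gate = nn.Sequential(
            nn.Conv2d(channels, channels // 2, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        # Reduce the concatenated RGB + depth features back to `channels`.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, rgb_feat: torch.Tensor, depth_feat: torch.Tensor) -> torch.Tensor:
        gate = self.fg_gate(depth_feat)      # (B, 1, H, W) foreground mask
        depth_weighted = depth_feat * gate   # emphasize foreground depth cues
        fused = self.fuse(torch.cat([rgb_feat, depth_weighted], dim=1))
        return fused + rgb_feat              # residual connection keeps RGB detail

if __name__ == "__main__":
    m = ForegroundAwareFusion(channels=64)
    rgb = torch.randn(2, 64, 48, 160)
    depth = torch.randn(2, 64, 48, 160)
    print(m(rgb, depth).shape)  # torch.Size([2, 64, 48, 160])
```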

20 pages, 2267 KiB  
Article
Multi-Target Tracking AA Fusion Method for Asynchronous Multi-Sensor Networks
by Kuiwu Wang, Qin Zhang, Guimei Zheng and Xiaolong Hu
Sensors 2023, 23(21), 8751; https://doi.org/10.3390/s23218751 - 27 Oct 2023
Viewed by 723
Abstract
Aiming at the problem of asynchronous multi-target tracking, this paper studies the AA fusion optimization problem of multi-sensor networks. Firstly, each sensor node runs a PHD filter, and the measurement information obtained from different sensor nodes in the fusion interval is shared by flooding communication to form composite measurement information. The Gaussian components representing the same target are associated into a subset by distance correlation. Then, the Bayesian Cramér–Rao Lower Bound of the asynchronous multi-target-tracking error, including radar node selection, is derived by combining the composite measurement information representing the same target. On this basis, a multi-sensor-network-optimization model for asynchronous multi-target tracking is established: with minimization of the asynchronous multi-target-tracking error as the optimization objective, the selection of sensor nodes in the network is adaptively designed, and the sequential quadratic programming (SQP) algorithm is used to select the most suitable sensor nodes for the AA fusion of the Gaussian components representing the same target. The simulation results show that, compared with existing algorithms, the proposed algorithm can effectively improve the asynchronous multi-target-tracking accuracy of multi-sensor networks.
(This article belongs to the Special Issue Multi-Sensor Fusion for Target Detection and Tracking)
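As a rough illustration of the AA (arithmetic average) fusion step, the sketch below fuses the per-node Gaussian components associated with one target by rescaling their weights with node fusion weights and moment-matching the resulting mixture to a single Gaussian. The uniform node weights and the single-Gaussian summary are simplifying assumptions; the paper's SQP-based node selection and the asynchronous timing alignment are not modeled here.

```python
# Simplified sketch of arithmetic-average (AA) fusion of Gaussian components
# that different sensor nodes associate with the same target.
import numpy as np

def aa_fuse(components, fusion_weights=None):
    """AA-fuse per-sensor Gaussian components.

    components: list of (weight, mean, cov) tuples, one per selected node.
    Returns the fused weight and a moment-matched (mean, cov) summary.
    """
    n = len(components)
    if fusion_weights is None:
        fusion_weights = np.full(n, 1.0 / n)  # assume uniform node weights

    # AA fusion averages the local intensities, so component weights are
    # simply rescaled by the node fusion weights.
    fused_w = sum(a * w for a, (w, _, _) in zip(fusion_weights, components))

    # Moment matching of the resulting mixture to a single Gaussian.
    alphas = np.array([a * w for a, (w, _, _) in zip(fusion_weights, components)]) / fused_w
    means = np.array([m for _, m, _ in components])
    fused_m = np.einsum("i,ij->j", alphas, means)
    fused_P = np.zeros_like(components[0][2])
    for a, (_, m, P) in zip(alphas, components):
        d = (m - fused_m)[:, None]
        fused_P += a * (P + d @ d.T)
    return fused_w, fused_m, fused_P

if __name__ == "__main__":
    comps = [
        (0.9, np.array([10.0, 5.0]), np.eye(2) * 2.0),
        (0.8, np.array([10.5, 4.8]), np.eye(2) * 3.0),
    ]
    w, m, P = aa_fuse(comps)
    print(w, m, P, sep="\n")
```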

15 pages, 4486 KiB  
Article
Defogging Algorithm Based on Polarization Characteristics and Atmospheric Transmission Model
by Feng Ling, Yan Zhang, Zhiguang Shi, Jinghua Zhang, Yu Zhang and Yi Zhang
Sensors 2022, 22(21), 8132; https://doi.org/10.3390/s22218132 - 24 Oct 2022
Cited by 1 | Viewed by 1129
Abstract
We propose a polarized image defogging algorithm based on sky segmentation results and transmission map optimization. Firstly, we propose a joint sky segmentation method based on scene polarization information, gradient information and light intensity information. This method can effectively segment the sky region and accurately estimate global parameters such as the atmospheric degree of polarization and the atmospheric light intensity at infinite distance. Then, a Gaussian filter is used to solve for the light intensity map of the target, and the target's degree of polarization is computed. Finally, based on the segmented sky region, a three-step transmission optimization method is proposed, which can effectively suppress the halo effect in reconstructed images containing large sky regions. Experimental results show that defogging markedly improves the average gradient and grayscale standard deviation of the image. Therefore, the proposed algorithm provides strong defogging capability and can improve optical imaging quality in foggy scenes by restoring fog-free images.
(This article belongs to the Special Issue Multi-Sensor Fusion for Target Detection and Tracking)
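For readers unfamiliar with polarization-based dehazing, the following sketch applies the atmospheric transmission model I = J*t + A_inf*(1 - t) to two polarized captures: the airlight is estimated as (I_max - I_min)/p_A, the transmission follows as 1 - A/A_inf, and the scene radiance is recovered by inverting the model. The global parameters p_A and A_inf are assumed to come from the segmented sky region, as in the paper, and the simple clipping used here merely stands in for the paper's three-step transmission optimization.

```python
# Simplified sketch of defogging with the atmospheric transmission model
# driven by two orthogonal polarizer captures (I_max, I_min).
import numpy as np

def polarization_defog(I_max, I_min, p_A, A_inf, t_min=0.1):
    """Recover a fog-free intensity image from two polarized captures.

    I_max, I_min : intensity images through the polarizer orientations that
                   maximize / minimize the scattered airlight.
    p_A          : degree of polarization of the airlight (from the sky region).
    A_inf        : airlight intensity at infinite distance (from the sky region).
    """
    I = I_max + I_min                          # total intensity
    A = (I_max - I_min) / max(p_A, 1e-6)       # estimated airlight component
    t = np.clip(1.0 - A / A_inf, t_min, 1.0)   # transmission map
    J = (I - A) / t                            # scene radiance via I = J*t + A_inf*(1 - t)
    return np.clip(J, 0.0, None), t

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    I_min = rng.uniform(0.2, 0.5, (4, 4))
    I_max = I_min + rng.uniform(0.0, 0.3, (4, 4))
    J, t = polarization_defog(I_max, I_min, p_A=0.4, A_inf=1.0)
    print(J.shape, float(t.min()), float(t.max()))
```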
