Special Issue "When Deep Learning Meets Geometry for Air-to-Ground Perception on Drones"

A special issue of Drones (ISSN 2504-446X).

Deadline for manuscript submissions: 30 November 2023

Special Issue Editors

Dr. Dongdong Li
Guest Editor
Automatic Target Recognition (ATR) Key Lab, College of Electronic Science and Engineering, National University of Defense Technology (NUDT), Changsha 410073, China
Interests: developing air-to-ground sensing algorithms for drones (e.g., classification, detection, tracking, localization, and mapping)

Prof. Dr. Gongjian Wen
Guest Editor
College of Electronic Science and Technology, National University of Defense Technology, Changsha 410073, China
Interests: optimization algorithms; computer vision; image processing; machine vision; pattern recognition; object recognition; feature extraction; 3D reconstruction; pattern matching; image recognition

Dr. Yangliu Kuai
Guest Editor
College of Intelligence Science and Technology, National University of Defense Technology, Changsha 410073, China
Interests: visual tracking and machine learning

Dr. Runmin Cong
Guest Editor
Institute of Information Science, School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100044, China
Interests: visual saliency detection and segmentation

Special Issue Information

Dear Colleagues,

In recent years, drones have attracted increasing attention as data acquisition and aerial perception platforms for many civilian and military applications. Owing to the success of deep learning in computer vision, drone images can be processed in an end-to-end manner to achieve air-to-ground perception (e.g., detection, tracking, recognition). However, drone images are generally processed as ordinary images, ignoring the geometric metadata (e.g., location, altitude, pose) generated by the drone's onboard GPS and IMU sensors. Inspired by Simultaneous Localization and Mapping (SLAM), which utilizes both image data and geometric data, this Special Issue aims to boost the performance of deep-learning-based air-to-ground perception for drones by exploiting geometric metadata. We welcome submissions that provide the community with the most recent advancements on these topics.
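As a simple illustration of the kind of geometry-aware processing this Special Issue targets, the hedged Python sketch below (not taken from any particular submission) converts a drone's altitude and camera intrinsics into a ground sample distance, which in turn gives a detector a physical-scale prior for ground objects. A near-nadir view over flat terrain is assumed, and the lens and pixel-pitch values are hypothetical examples.

```python
# Minimal sketch: turn drone metadata (altitude + camera intrinsics) into a
# pixel-scale prior for air-to-ground detection. Assumes a near-nadir view
# and flat terrain; all numeric values below are hypothetical examples.

def ground_sample_distance(altitude_m: float,
                           focal_length_mm: float,
                           pixel_pitch_um: float) -> float:
    """Metres of ground covered by one image pixel (nadir approximation)."""
    return altitude_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

def expected_size_px(object_size_m: float, gsd_m: float) -> float:
    """Expected on-image extent of a ground object of known physical size."""
    return object_size_m / gsd_m

# Example: a 4.5 m vehicle seen from 120 m with an 8.8 mm lens, 2.4 um pixels
gsd = ground_sample_distance(120.0, 8.8, 2.4)  # ~0.033 m per pixel
print(expected_size_px(4.5, gsd))              # ~137 px: a usable scale prior
```

Such a prior can, for instance, restrict the anchor sizes or feature-pyramid levels a detector needs to consider at a given flight altitude.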

Topics of interest include, but are not limited to, the following:

  • Air-to-ground object detection for drones
  • Air-to-ground single/multiple object tracking for drones
  • Air-to-ground object localization for drones
  • Air-to-ground monocular visual SLAM for drones

Dr. Dongdong Li
Prof. Dr. Gongjian Wen
Dr. Yangliu Kuai
Dr. Runmin Cong
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Drones is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • object detection
  • object tracking
  • object localization
  • visual SLAM
  • embedded vision on drones

Published Papers (4 papers)


Research

Article
Drone Based RGBT Tracking with Dual-Feature Aggregation Network
Drones 2023, 7(9), 585; https://doi.org/10.3390/drones7090585 - 18 Sep 2023
Abstract
In drone-based object tracking, the infrared modality can improve a tracker's robustness in scenes with severe illumination change and occlusion, and it expands the scenarios in which drone object tracking is applicable. Inspired by the achievements of the Transformer architecture in RGB object tracking, we design a Transformer-based dual-modality object tracking network. To better fuse visible and infrared information, we propose a Dual-Feature Aggregation Network that applies attention mechanisms in both the spatial and channel dimensions to aggregate heterogeneous modality features. The proposed algorithm outperforms mainstream algorithms on the drone-based dual-modality object tracking dataset VTUAV. It is also lightweight and can be easily deployed and executed on a drone edge computing platform. In summary, the proposed algorithm targets drone-based dual-modality object tracking and is optimized for deployment on drone edge computing platforms; experiments demonstrate its effectiveness and show that it effectively extends the scope of drone object tracking.
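The abstract above describes aggregating visible and infrared features with attention in both the channel and spatial dimensions. The PyTorch module below is a minimal, generic sketch of such a fusion block (CBAM-style attention over concatenated modality features); it illustrates the general idea only and is not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class DualModalityFusion(nn.Module):
    """Illustrative RGB-T fusion: channel attention, then spatial attention,
    applied to concatenated modality features (a generic CBAM-style sketch,
    not the Dual-Feature Aggregation Network from the paper)."""
    def __init__(self, channels: int):
        super().__init__()
        fused = channels * 2
        # Channel attention: squeeze-and-excitation over the fused features
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(fused, fused // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(fused // 4, fused, 1), nn.Sigmoid())
        # Spatial attention: 7x7 conv over pooled channel statistics
        self.spatial_att = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid())
        self.proj = nn.Conv2d(fused, channels, 1)  # back to input width

    def forward(self, rgb: torch.Tensor, tir: torch.Tensor) -> torch.Tensor:
        x = torch.cat([rgb, tir], dim=1)                    # (B, 2C, H, W)
        x = x * self.channel_att(x)                         # re-weight channels
        stats = torch.cat([x.mean(1, keepdim=True),
                           x.amax(1, keepdim=True)], dim=1) # (B, 2, H, W)
        x = x * self.spatial_att(stats)                     # re-weight locations
        return self.proj(x)

# fused = DualModalityFusion(256)(rgb_feat, tir_feat)
```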

Article
Implicit Neural Mapping for a Data Closed-Loop Unmanned Aerial Vehicle Pose-Estimation Algorithm in a Vision-Only Landing System
Drones 2023, 7(8), 529; https://doi.org/10.3390/drones7080529 - 12 Aug 2023
Abstract
Owing to the low cost, interference resistance, and concealment of vision sensors, vision-based landing systems have received considerable research attention. However, vision sensors serve only as auxiliary components in visual landing systems because of their limited accuracy. To address the inaccurate position estimation of vision-only sensors during landing, a novel data closed-loop pose-estimation algorithm with an implicit neural map is proposed. First, we estimate the UAV pose from the runway's line features, using a flexible coarse-to-fine runway-line-detection method. Then, we propose a mapping and localization method based on the neural radiance field (NeRF), which provides a continuous scene representation and can effectively correct the initial pose estimate. Finally, we develop a closed-loop data annotation system based on a high-fidelity implicit map, which significantly improves annotation efficiency. Experimental results show that the proposed algorithm performs well in various scenarios and achieves state-of-the-art pose-estimation accuracy.
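To illustrate how a trained NeRF can correct an initial pose estimate, the sketch below shows an iNeRF-style photometric refinement loop: a frozen differentiable renderer is assumed (the render_fn argument is hypothetical), and a 6-DoF pose perturbation is optimized by gradient descent. This is a generic sketch of the idea, not the paper's algorithm.

```python
import torch

def se3_exp(xi: torch.Tensor) -> torch.Tensor:
    """Exponential map from a 6-vector twist (rotation, translation)
    to a 4x4 homogeneous pose, via the matrix exponential."""
    wx, wy, wz, tx, ty, tz = xi
    T = torch.zeros(4, 4, dtype=xi.dtype)
    T[0, 1], T[0, 2], T[1, 2] = -wz, wy, -wx   # skew-symmetric rotation part
    T[1, 0], T[2, 0], T[2, 1] = wz, -wy, wx
    T[0, 3], T[1, 3], T[2, 3] = tx, ty, tz     # translation part
    return torch.matrix_exp(T)

def refine_pose(render_fn, image, pose_init, iters=100, lr=1e-2):
    """Photometric pose correction against a trained NeRF.
    `render_fn(pose_4x4) -> image` is an assumed frozen, differentiable
    renderer; only the pose perturbation `xi` is optimized."""
    xi = torch.zeros(6, requires_grad=True)
    opt = torch.optim.Adam([xi], lr=lr)
    for _ in range(iters):
        loss = torch.nn.functional.mse_loss(
            render_fn(se3_exp(xi) @ pose_init), image)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (se3_exp(xi) @ pose_init).detach()
```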

Article
TAN: A Transferable Adversarial Network for DNN-Based UAV SAR Automatic Target Recognition Models
Drones 2023, 7(3), 205; https://doi.org/10.3390/drones7030205 - 16 Mar 2023
Abstract
Recently, unmanned aerial vehicle (UAV) synthetic aperture radar (SAR) has become a highly sought-after topic for its wide applications in target recognition, detection, and tracking. However, SAR automatic target recognition (ATR) models based on deep neural networks (DNNs) are vulnerable to adversarial examples. Because non-cooperators rarely disclose any SAR-ATR model information, adversarial attacks are challenging in practice. To tackle this issue, we propose a novel attack method called the Transferable Adversarial Network (TAN). It crafts highly transferable adversarial examples in real time and attacks SAR-ATR models without any prior knowledge, which is of great significance for real-world black-box attacks. The proposed method improves transferability via a two-player game in which two encoder–decoder models are trained simultaneously: a generator that crafts malicious samples through a one-step forward mapping from the original data, and an attenuator that weakens the effectiveness of malicious samples by capturing the most harmful deformations. In particular, unlike traditional iterative methods, the encoder–decoder model maps original samples to adversarial examples in a single step, enabling real-time attacks. Experimental results indicate that our approach achieves state-of-the-art transferability with acceptable adversarial perturbations and minimal time cost compared to existing attack methods, making real-time black-box attacks without prior knowledge a reality.
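The two-player game described above can be sketched as alternating updates of a generator and an attenuator against a surrogate classifier. The PyTorch training step below is an illustrative reading of that setup, not the paper's actual losses; G, A, and surrogate are assumed user-supplied networks, and the L-infinity budget eps is a typical but hypothetical choice.

```python
import torch
import torch.nn.functional as F

def tan_style_step(G, A, surrogate, x, y, opt_G, opt_A, eps=8 / 255):
    """One schematic two-player update (TAN-inspired sketch, not the
    paper's exact formulation). The surrogate classifier is held fixed."""
    # Attenuator step: learn to undo the current perturbation so that the
    # surrogate classifies the attenuated sample correctly again.
    x_adv = torch.clamp(x + eps * torch.tanh(G(x)), 0, 1).detach()
    loss_A = F.cross_entropy(surrogate(A(x_adv)), y)
    opt_A.zero_grad()
    loss_A.backward()
    opt_A.step()

    # Generator step: one-step forward mapping from x to an adversarial
    # example that fools the surrogate both before and after attenuation.
    x_adv = torch.clamp(x + eps * torch.tanh(G(x)), 0, 1)
    loss_G = -F.cross_entropy(surrogate(x_adv), y) \
             - F.cross_entropy(surrogate(A(x_adv)), y)
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
    return loss_G.item(), loss_A.item()
```

Because the generator produces its perturbation in a single forward pass, the inference-time attack cost is one network evaluation, which is what makes the real-time claim plausible.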

Article
Special Vehicle Detection from UAV Perspective via YOLO-GNS Based Deep Learning Network
Drones 2023, 7(2), 117; https://doi.org/10.3390/drones7020117 - 8 Feb 2023
Cited by 6
Abstract
At present, many special vehicles are engaged in illegal activities such as illegal mining, oil and gas theft, the destruction of green spaces, and illegal construction, which have serious negative impacts on the environment and the economy. These illegal activities are becoming increasingly rampant because of the limited number of inspectors and the high cost of surveillance. The development of drone remote sensing is playing an important role in enabling efficient and intelligent monitoring of special vehicles. However, owing to limited onboard computing resources, special vehicle object detection still faces challenges in practical applications. To balance detection accuracy and computational cost, we propose a novel algorithm named YOLO-GNS for special vehicle detection from the UAV perspective. Firstly, the Single Stage Headless (SSH) context structure is introduced to improve feature extraction and facilitate the detection of small or obscured objects. Meanwhile, the computational cost of the algorithm is reduced by drawing on GhostNet, which replaces complex convolutions with simple linear transformations. To evaluate the algorithm, thousands of UAV-view images of special vehicles were collected across a variety of scenes and weather conditions, and quantitative and comparative experiments were performed. Compared with related derivatives, the algorithm achieves a 4.4% increase in average detection accuracy and an increase of 1.6 in detection frame rate. These improvements are considered useful for UAV applications, especially for special vehicle detection in a variety of scenarios.
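The GhostNet idea the abstract draws on, replacing part of a dense convolution with cheap depthwise (linear) transforms, can be sketched as below. This is the standard Ghost module pattern, not necessarily the exact block used in YOLO-GNS.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Standard GhostNet-style block: a small dense conv produces
    'intrinsic' features, and a cheap depthwise conv synthesizes the
    remaining 'ghost' features (a generic sketch, not YOLO-GNS itself)."""
    def __init__(self, in_ch: int, out_ch: int, ratio: int = 2):
        super().__init__()
        init_ch = out_ch // ratio              # intrinsic channels
        self.primary = nn.Sequential(          # ordinary (dense) 1x1 conv
            nn.Conv2d(in_ch, init_ch, 1, bias=False),
            nn.BatchNorm2d(init_ch), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(            # depthwise 3x3: the cheap map
            nn.Conv2d(init_ch, out_ch - init_ch, 3, padding=1,
                      groups=init_ch, bias=False),
            nn.BatchNorm2d(out_ch - init_ch), nn.ReLU(inplace=True))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        primary = self.primary(x)
        return torch.cat([primary, self.cheap(primary)], dim=1)

# e.g. GhostModule(128, 256)(feat): half the output channels come from a
# dense conv, the other half from a depthwise conv (ratio=2 keeps shapes valid).
```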
