Intelligent Image Processing and Sensing for Drones

A special issue of Drones (ISSN 2504-446X).

Deadline for manuscript submissions: closed (25 December 2023) | Viewed by 41608

Printed Edition Available!
A printed edition of this Special Issue is available.

Special Issue Editor

Special Issue Information

Dear Colleagues,

The use of drones for various purposes has been increasing in recent years. Drones can be remotely controlled or programmed to capture scenes from a distance. Such capture is cost-effective and does not require highly trained personnel. Drones are widely used in fields such as video surveillance, wildlife and farm monitoring, industrial inspection, search and rescue, firefighting, and 3D visualization.

Multiple sensors can be mounted on a drone. In addition to conventional visible-light cameras, infrared thermal and multispectral imagers are available for drones, and active sensors such as LiDAR and SAR can also be carried. These mobile aerial imaging sensors provide a new perspective for research and development across a variety of applications. However, they often pose more challenges than ground-based cameras because of the unique sensing environments and limited resources of a drone. The information acquired by a drone is of tremendous value, so intelligent analysis is necessary to make the best use of it.

This Special Issue focuses on a wide range of intelligent processing of images and sensor data acquired by drones. The objectives of intelligent processing range from the refinement of raw data to the symbolic representation and visualization of the real world. This can be achieved through image/signal processing or deep/machine learning algorithms. The latest technological developments will be shared through this Special Issue. Researchers and investigators are invited to contribute original research or review articles to this Special Issue.


Prof. Dr. Seokwon Yeom
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Drones is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • visible / infrared thermal / multispectral image analysis 
  • LiDAR / SAR with a drone 
  • video surveillance 
  • monitoring and inspection 
  • 3D imaging and visualization 
  • detection, recognition, and tracking 
  • segmentation and feature extraction 
  • image registration and fusion


Published Papers (11 papers)


Editorial

Jump to: Research, Review

4 pages, 154 KiB  
Editorial
Special Issue on Intelligent Image Processing and Sensing for Drones
by Seokwon Yeom
Drones 2024, 8(3), 87; https://doi.org/10.3390/drones8030087 - 04 Mar 2024
Viewed by 1040
Abstract
Recently, the use of drones or unmanned aerial vehicles (UAVs) for various purposes has been increasing [...]
(This article belongs to the Special Issue Intelligent Image Processing and Sensing for Drones)

Research

Jump to: Editorial, Review

15 pages, 9694 KiB  
Article
Thermal Image Tracking for Search and Rescue Missions with a Drone
by Seokwon Yeom
Drones 2024, 8(2), 53; https://doi.org/10.3390/drones8020053 - 05 Feb 2024
Viewed by 1901
Abstract
Infrared thermal imaging is useful for human body recognition in search and rescue (SAR) missions. This paper discusses thermal object tracking for SAR missions with a drone. The entire process consists of object detection and multiple-target tracking. The You-Only-Look-Once (YOLO) detection model is utilized to detect people in thermal videos. Multiple-target tracking is performed via track initialization, maintenance, and termination. Position measurements in two consecutive frames initialize a track. Tracks are maintained using a Kalman filter. A bounding box gating rule is proposed for the measurement-to-track association. This rule is combined with the statistically nearest neighbor association rule to assign measurements to tracks. The track-to-track association selects the fittest track for a track and fuses them. In the experiments, three videos of three hikers simulating being lost in the mountains were captured using a thermal imaging camera on a drone. Capture took place under difficult conditions: the objects are close together or occluded, and the drone flies arbitrarily in horizontal and vertical directions. Robust tracking results were obtained in terms of average total track life and average track purity, whereas the average mean track life was shortened in harsh searching environments.
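The track-maintenance loop described in the abstract (Kalman prediction and update, plus a bounding-box gate for measurement-to-track association) can be sketched as follows. This is a minimal single-axis illustration with invented noise values and gating rule, not the paper's exact filter:

```python
# Minimal constant-velocity Kalman track with a bounding-box gate.
# All noise values (q, r) and the gate are illustrative assumptions.

def predict(state, cov, dt=1.0, q=1.0):
    """Predict a 1-axis state [pos, vel] and its 2x2 covariance one step ahead."""
    p, v = state
    state = [p + dt * v, v]
    a, b, c, d = cov[0][0], cov[0][1], cov[1][0], cov[1][1]
    # F P F^T + Q  with transition F = [[1, dt], [0, 1]]
    cov = [[a + dt * (b + c) + dt * dt * d + q, b + dt * d],
           [c + dt * d,                         d + q]]
    return state, cov

def update(state, cov, z, r=4.0):
    """Fuse a position measurement z (variance r) into the track."""
    p, v = state
    s = cov[0][0] + r                        # innovation variance
    kp, kv = cov[0][0] / s, cov[1][0] / s    # Kalman gain for H = [1, 0]
    innov = z - p
    state = [p + kp * innov, v + kv * innov]
    cov = [[(1 - kp) * cov[0][0], (1 - kp) * cov[0][1]],
           [cov[1][0] - kv * cov[0][0], cov[1][1] - kv * cov[0][1]]]
    return state, cov

def bbox_gate(pred_xy, box):
    """Accept a detection only if the predicted centre lies inside its box."""
    (x, y), (x1, y1, x2, y2) = pred_xy, box
    return x1 <= x <= x2 and y1 <= y <= y2
```

In a full two-axis tracker the same filter runs per coordinate, and only detections passing the gate are handed to the nearest-neighbor association step.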
(This article belongs to the Special Issue Intelligent Image Processing and Sensing for Drones)

23 pages, 191929 KiB  
Article
Transmission Line Segmentation Solutions for UAV Aerial Photography Based on Improved UNet
by Min He, Liang Qin, Xinlan Deng, Sihan Zhou, Haofeng Liu and Kaipei Liu
Drones 2023, 7(4), 274; https://doi.org/10.3390/drones7040274 - 17 Apr 2023
Cited by 6 | Viewed by 1618
Abstract
The accurate and efficient detection of power lines and towers in aerial drone images with complex backgrounds is crucial for the safety of power grid operations and low-altitude drone flights. In this paper, we propose TLSUNet, a new method that enhances the deep learning segmentation model UNet. We enhance the UNet algorithm by using a lightweight backbone structure to extract features and then reconstructing them with contextual information features. To reduce the model's parameters and computational complexity, we adopt DFC-GhostNet (Decoupled Fully Connected) as the backbone feature extraction network; it is composed of the DFC-GhostBottleneck structure and uses asymmetric convolution to capture long-distance targets in transmission lines, thus enhancing the model's extraction capability. Additionally, we design a hybrid feature extraction module based on convolution and a transformer to refine deep semantic features and improve the model's ability to locate towers and transmission lines in complex environments. Finally, we adopt the up-sampling operator CARAFE (Content-Aware ReAssembly of FEatures) to improve segmentation accuracy by enhancing target restoration using contextual neighborhood pixel information correlation during feature decoding. Our experiments on public aerial photography datasets demonstrate that the improved model requires only 8.3% of the original model's computation and 21.4% of its parameters, while reducing inference latency by 0.012 s. The segmentation metrics also show significant improvements, with mIOU improving from 79.75% to 86.46% and mDice from 87.83% to 92.40%. These results confirm the effectiveness of the proposed method.
(This article belongs to the Special Issue Intelligent Image Processing and Sensing for Drones)

18 pages, 2813 KiB  
Article
Non-Linear Signal Processing Methods for UAV Detections from a Multi-Function X-Band Radar
by Mohit Kumar and P. Keith Kelly
Drones 2023, 7(4), 251; https://doi.org/10.3390/drones7040251 - 06 Apr 2023
Cited by 1 | Viewed by 1915
Abstract
This article develops the applicability of non-linear processing techniques such as Compressed Sensing (CS), Principal Component Analysis (PCA), the Iterative Adaptive Approach (IAA), and Multiple-Input-Multiple-Output (MIMO) processing for enhanced UAV detection using portable radar systems. The combined scheme has many advantages and the potential for better detection and classification accuracy. Some of the benefits are discussed here with a phased array platform in mind, the novel portable phased array radar (PWR) by Agile RF Systems (ARS), which offers quadrant outputs. CS and IAA both show promising results when applied to micro-Doppler processing of radar returns, owing to the sparse nature of the target Doppler frequencies. This shows promise in reducing the dwell time and increasing the rate at which a volume can be interrogated. Real-time processing of target information with iterative and non-linear solutions is now possible with the advent of GPU-based graphics processing hardware. Simulations show promising results.
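The sparsity premise behind applying CS and IAA to micro-Doppler returns, namely that a rotor echo concentrates in a few Doppler bins, can be illustrated with a toy slow-time signal. Frequencies, amplitudes, and the support size K below are invented example values:

```python
# Toy illustration of Doppler-domain sparsity: a return with a body line
# (bin 0) and two micro-Doppler lines (bins 5 and 12) is captured almost
# entirely by its K largest spectral coefficients.
import cmath

N = 64
# Slow-time samples: three on-grid complex tones.
x = [1.0
     + 0.8 * cmath.exp(2j * cmath.pi * 5 * n / N)
     + 0.5 * cmath.exp(2j * cmath.pi * 12 * n / N) for n in range(N)]

def dft(sig):
    """Normalized discrete Fourier transform (naive O(N^2) form)."""
    m = len(sig)
    return [sum(sig[n] * cmath.exp(-2j * cmath.pi * k * n / m)
                for n in range(m)) / m for k in range(m)]

spectrum = dft(x)
# The sparse support: the K = 3 strongest Doppler bins.
support = sorted(range(N), key=lambda k: -abs(spectrum[k]))[:3]

energy = sum(abs(c) ** 2 for c in spectrum)
kept = sum(abs(spectrum[k]) ** 2 for k in support)
```

Because nearly all spectral energy lives on a small support, far fewer slow-time samples (shorter dwell) can suffice for recovery, which is what CS and IAA exploit.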
(This article belongs to the Special Issue Intelligent Image Processing and Sensing for Drones)

20 pages, 22592 KiB  
Article
Segmentation Detection Method for Complex Road Cracks Collected by UAV Based on HC-Unet++
by Hongbin Cao, Yuxi Gao, Weiwei Cai, Zhuonong Xu and Liujun Li
Drones 2023, 7(3), 189; https://doi.org/10.3390/drones7030189 - 10 Mar 2023
Cited by 11 | Viewed by 2262
Abstract
Road cracks are one of the external manifestations of safety hazards in transportation. At present, the detection and segmentation of road cracks is still an intensively researched issue. With the development of convolutional-neural-network image segmentation technology, the identification of road cracks has ushered in new opportunities. However, traditional road crack segmentation methods face three problems: 1. they are susceptible to complex background noise; 2. road cracks usually appear in irregular shapes, which increases the difficulty of segmentation; 3. the cracks appear discontinuous in the segmentation results. Aiming at these problems, a network segmentation model for road crack detection, HC-Unet++, is proposed in this paper. In this network model, a deep parallel feature fusion module is first proposed, which can effectively detect cracks of various irregular shapes. Secondly, the SEnet attention mechanism is used to suppress complex backgrounds so that crack information is correctly extracted. Finally, the BlurPool operation replaces the original max pooling in order to resolve the crack discontinuity in the segmentation results. Comparison with several advanced network models shows that the HC-Unet++ model segments road cracks more precisely. The experimental results show that the proposed method achieves 76.32% mIOU, 82.39% mPA, 85.51% mPrecision, 70.26% dice, and an Hd95 of 5.05 on a self-made dataset of 1040 road crack images. Compared with advanced network models, the HC-Unet++ model has stronger generalization ability and higher segmentation accuracy, making it more suitable for the segmentation detection of road cracks. Therefore, the HC-Unet++ network model proposed in this paper plays an important role in road maintenance and traffic safety.
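The BlurPool substitution mentioned in the abstract (Zhang's anti-aliased downsampling) can be sketched in one dimension: take the max densely at stride 1, low-pass with a small binomial filter, and only then subsample. Kernel size, padding, and stride below are illustrative choices, not the paper's exact configuration:

```python
# 1-D sketch of BlurPool-style anti-aliased max pooling. Standard strided
# max pooling aliases thin structures (like cracks); blurring before the
# stride keeps their responses more shift-stable.

def blurpool1d(x, stride=2):
    # Dense max over window 2 (stride 1) keeps peak responses.
    dense = [max(x[i], x[i + 1]) for i in range(len(x) - 1)]
    # Anti-alias with a [1, 2, 1]/4 binomial blur (edge-replicate padding).
    padded = [dense[0]] + dense + [dense[-1]]
    blurred = [(padded[i] + 2 * padded[i + 1] + padded[i + 2]) / 4
               for i in range(len(dense))]
    # Only now reduce resolution.
    return blurred[::stride]
```

Shifting a thin spike by one sample changes a plain strided max pool's output drastically, whereas the blurred version preserves the total response, which is why the segmented cracks come out less fragmented.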
(This article belongs to the Special Issue Intelligent Image Processing and Sensing for Drones)

16 pages, 3584 KiB  
Article
Improved Image Synthesis with Attention Mechanism for Virtual Scenes via UAV Imagery
by Lufeng Mo, Yanbin Zhu, Guoying Wang, Xiaomei Yi, Xiaoping Wu and Peng Wu
Drones 2023, 7(3), 160; https://doi.org/10.3390/drones7030160 - 25 Feb 2023
Cited by 3 | Viewed by 1452
Abstract
Benefiting from the development of unmanned aerial vehicles (UAVs), the types and number of datasets available for image synthesis have greatly increased. Based on such abundant datasets, many types of virtual scenes can be created and visualized using image synthesis technology before they are implemented in the real world, and then used in different applications. Convenient and fast image synthesis models face some common issues, such as blurred semantic information in the normalization layer and the use of only local spatial information of the feature map in image generation. To solve such problems, an improved image synthesis model, SYGAN, is proposed in this paper, which imports a spatially adaptive normalization module (SPADE) and a sparse attention mechanism, YLG, on the basis of a generative adversarial network (GAN). In SYGAN, the SPADE normalization module improves imaging quality by adjusting the normalization layer with spatially adaptively learned transformations, while the sparsified attention mechanism YLG enlarges the receptive field of the model and has lower computational complexity, which saves training time. The experimental results show that the Fréchet Inception Distance (FID) of SYGAN for natural scenes and street scenes is 22.1 and 31.2, the Mean Intersection over Union (MIoU) is 56.6 and 51.4, and the Pixel Accuracy (PA) is 86.1 and 81.3, respectively. Compared with other models such as CRN, SIMS, pix2pixHD and GauGAN, the proposed image synthesis model SYGAN has better performance and improved computational efficiency.
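The SPADE idea the abstract relies on, normalizing a feature map and then re-modulating it per position from the semantic mask, can be sketched in miniature. In the real layer the gamma/beta maps come from small convolutions over the mask; here a toy per-class lookup stands in for that prediction:

```python
# Minimal sketch of SPADE-style spatially adaptive normalization on a
# flattened feature map. The per-class gamma/beta lookup is a stand-in
# for the convolutional modulation network in the actual SPADE layer.

def spade(features, seg_mask, gamma_by_class, beta_by_class, eps=1e-5):
    n = len(features)
    mean = sum(features) / n
    var = sum((f - mean) ** 2 for f in features) / n
    # Parameter-free normalization, as in batch/instance norm.
    normed = [(f - mean) / (var + eps) ** 0.5 for f in features]
    # Per-position modulation driven by the semantic class at that position,
    # so the mask's semantics survive the normalization.
    return [gamma_by_class[c] * x + beta_by_class[c]
            for x, c in zip(normed, seg_mask)]
```

Because gamma and beta vary spatially with the mask, semantic information is not washed out by the normalization, which is the failure mode of unconditional normalization layers the abstract refers to.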
(This article belongs to the Special Issue Intelligent Image Processing and Sensing for Drones)

25 pages, 11167 KiB  
Article
An Automatic Visual Inspection of Oil Tanks Exterior Surface Using Unmanned Aerial Vehicle with Image Processing and Cascading Fuzzy Logic Algorithms
by Mohammed A. H. Ali, Muhammad Baggash, Jaloliddin Rustamov, Rawad Abdulghafor, Najm Al-Deen N. Abdo, Mubarak H. G. Abdo, Talep S. Mohammed, Ameen A. Hasan, Ali N. Abdo, Sherzod Turaev and Yusoff Nukman
Drones 2023, 7(2), 133; https://doi.org/10.3390/drones7020133 - 13 Feb 2023
Cited by 4 | Viewed by 2242
Abstract
This paper presents an automatic visual inspection of exterior surface defects of oil tanks using unmanned aerial vehicles (UAVs) and image processing with two cascading fuzzy logic algorithms. Corrosion is one of the defects that has a serious effect on the safety of the surface of oil and gas tanks. At present, human inspection and climbing-robot inspection are the dominant approaches to rust detection in oil and gas tanks. However, these approaches have many shortcomings, such as long inspection times, high cost, and limited coverage of the tank surface. The purpose of this research is to detect rust in oil tanks by localizing visual inspection technology using UAVs, and to develop algorithms that distinguish between defects and noise. The study focuses on two basic aspects of oil tank inspection through the images captured by the UAV: the detection of defects and the distinction between defects and noise. For the former, an image processing algorithm was developed to reduce or remove noise, adjust the brightness of the captured image, and extract features to identify defects in oil tanks. For the latter, a cascading fuzzy logic algorithm and a thresholding algorithm were developed to distinguish between defects and noise levels and to reduce their impact through three stages of processing. The first fuzzy logic stage distinguishes defects from the low noise generated by objects on the surface of the tank, such as trees or stairs. The second stage distinguishes defects from the medium noise generated by shadows or the presence of small objects on the tank surface. The third, thresholding stage distinguishes defects from the high noise generated by sedimentation on the tank surface. The samples were classified as defective or non-defective based on the output of the third stage. The proposed algorithms were tested on 180 samples, and the results show their superiority in the inspection and detection of defects, with an accuracy of 83%.
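The staged low/medium/high-noise decision scheme described above can be sketched as a cascade in which each stage handles one noise regime and the last applies a hard threshold. The membership shapes, cut-offs, and scores below are invented for illustration, not the paper's tuned rule base:

```python
# Illustrative cascade in the spirit of the paper's fuzzy stages: each
# stage matches a noise regime with a triangular membership function and,
# if it fires, decides with a regime-specific evidence threshold.

def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def cascade(defect_score, noise_score):
    # Stage 1: low-noise regime -- trust the defect score directly.
    if tri(noise_score, -0.1, 0.0, 0.4) > 0.5:
        return defect_score > 0.5
    # Stage 2: medium noise (shadows, small objects) -- demand stronger evidence.
    if tri(noise_score, 0.2, 0.5, 0.8) > 0.5:
        return defect_score > 0.7
    # Stage 3: high noise (sedimentation) -- hard threshold on very strong evidence.
    return defect_score > 0.9
```

The point of cascading is that the same defect evidence is judged against a stricter bar as the estimated noise level rises, so noise-induced false positives are filtered stage by stage.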
(This article belongs to the Special Issue Intelligent Image Processing and Sensing for Drones)

15 pages, 4363 KiB  
Article
A Novel UAV Visual Positioning Algorithm Based on A-YOLOX
by Ying Xu, Dongsheng Zhong, Jianhong Zhou, Ziyi Jiang, Yikui Zhai and Zilu Ying
Drones 2022, 6(11), 362; https://doi.org/10.3390/drones6110362 - 18 Nov 2022
Cited by 3 | Viewed by 2084
Abstract
The application of UAVs is becoming increasingly extensive. However, high-precision autonomous landing is still a major industry difficulty. Current algorithms are not well adapted to light changes, scale transformations, complex backgrounds, etc. To address these difficulties, a deep learning method is introduced into target detection and an attention mechanism is incorporated into YOLOX; thus, a UAV positioning algorithm called attention-based YOLOX (A-YOLOX) is proposed. Firstly, a novel visual positioning pattern was designed to facilitate the algorithm's use for detection and localization; then, a UAV visual positioning database (UAV-VPD) was built through actual data collection and data augmentation, and the A-YOLOX model detector was developed; finally, corresponding visual positioning algorithms were designed for the high- and low-altitude positioning logics. The experimental results in a real environment show that the AP50 of the proposed algorithm reaches 95.5%, the detection speed is 53.7 frames per second, and the actual landing error is within 5 cm, which meets the practical application requirements for automatic UAV landing.
(This article belongs to the Special Issue Intelligent Image Processing and Sensing for Drones)

11 pages, 3852 KiB  
Article
Inverse Airborne Optical Sectioning
by Rakesh John Amala Arokia Nathan, Indrajit Kurmi and Oliver Bimber
Drones 2022, 6(9), 231; https://doi.org/10.3390/drones6090231 - 02 Sep 2022
Cited by 3 | Viewed by 2401
Abstract
We present Inverse Airborne Optical Sectioning (IAOS), an optical analogy to Inverse Synthetic Aperture Radar (ISAR). Moving targets, such as walking people, that are heavily occluded by vegetation can be made visible and tracked with a stationary optical sensor (e.g., a camera drone hovering above a forest). We introduce the principles of IAOS (i.e., inverse synthetic aperture imaging), explain how the signal of occluders can be further suppressed by filtering the Radon transform of the image integral, and present how targets' motion parameters can be estimated manually and automatically. Finally, we show that while tracking occluded targets in conventional aerial images is infeasible, it becomes efficiently possible in the integral images that result from IAOS.
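The core integral-imaging move in IAOS, registering frames along the target's motion and averaging so the target stays sharp while static occluders smear out, can be shown in a 1-D toy. The scene layout and the (assumed known) velocity are invented for the example:

```python
# Toy 1-D integral image: shift each frame back along the target's motion
# and average. The moving target stacks coherently; the static occluder's
# energy is spread over many positions.

def integral_image(frames, v):
    """Average frames after shifting frame t by v*t samples (circularly)."""
    n = len(frames[0])
    out = []
    for i in range(n):
        vals = [f[(i + v * t) % n] for t, f in enumerate(frames)]
        out.append(sum(vals) / len(vals))
    return out

n = 16
frames = []
for t in range(4):
    scene = [0.0] * n
    scene[(3 + 2 * t) % n] = 1.0   # target moving right at v = 2
    scene[8] = 1.0                 # static occluder
    frames.append(scene)

integ = integral_image(frames, v=2)
```

In the integral signal the target's peak keeps its full amplitude while the occluder is diluted by the number of frames, which is why detection and tracking become feasible in integral images but not in the individual frames.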
(This article belongs to the Special Issue Intelligent Image Processing and Sensing for Drones)

Review

Jump to: Editorial, Research

21 pages, 6631 KiB  
Review
An Overview of Drone Applications in the Construction Industry
by Hee-Wook Choi, Hyung-Jin Kim, Sung-Keun Kim and Wongi S. Na
Drones 2023, 7(8), 515; https://doi.org/10.3390/drones7080515 - 03 Aug 2023
Cited by 5 | Viewed by 15631
Abstract
The integration of drones in the construction industry has ushered in a new era of efficiency, accuracy, and safety throughout the various phases of construction projects. This paper presents a comprehensive overview of the applications of drones in the construction industry, focusing on their utilization in the design, construction, and maintenance phases. The differences between the three types of drones are discussed at the beginning of the paper, followed by an overview of drone applications in the construction industry. Overall, the integration of drones has yielded transformative advancements across all phases of construction projects. As technology continues to advance, drones are expected to play an increasingly critical role in shaping the future of the construction industry.
(This article belongs to the Special Issue Intelligent Image Processing and Sensing for Drones)

27 pages, 3381 KiB  
Review
A Comprehensive Survey of Transformers for Computer Vision
by Sonain Jamil, Md. Jalil Piran and Oh-Jin Kwon
Drones 2023, 7(5), 287; https://doi.org/10.3390/drones7050287 - 25 Apr 2023
Cited by 10 | Viewed by 4896
Abstract
As a special type of transformer, vision transformers (ViTs) can be used for various computer vision (CV) applications. Convolutional neural networks (CNNs) have several potential problems that can be resolved with ViTs. For image coding tasks such as compression, super-resolution, segmentation, and denoising, different variants of ViTs are used. In our survey, we identified the many CV applications to which ViTs are applicable, including image classification, object detection, image segmentation, image compression, image super-resolution, image denoising, anomaly detection, and drone imagery. We reviewed the state of the art, compiled a list of available models, and discussed the pros and cons of each model.
(This article belongs to the Special Issue Intelligent Image Processing and Sensing for Drones)
