Topic Editors

Optical Communication Laboratory, Ocean College, Zhejiang University, Zheda Road 1, Zhoushan 316021, China
Network and Telecommunication Research Group, University of Haute-Alsace, 68008 Colmar, France
Department of Engineering, Manchester Metropolitan University, Manchester M1 5GD, UK
Hamdard Institute of Engineering & Technology, Islamabad 44000, Pakistan
Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou 510006, China

Advances, Innovations and Applications of UAV Technology for Remote Sensing

Abstract submission deadline
30 November 2023
Manuscript submission deadline
29 February 2024
Viewed by
6051

Topic Information

Dear Colleagues,

Nowadays, a variety of Unmanned Aerial Vehicles (UAVs) are commercially available and are widely used for many real-world tasks, such as environmental monitoring, construction site surveys, remote sensing data collection, vertical structure inspection, glaciology, smart agriculture, forestry, atmospheric research, disaster prevention, humanitarian observation, biological sensing, reef monitoring, fire monitoring, volcanic gas sampling, gas pipeline monitoring, hydrology, ecology, and archaeology. These less-invasive aerial robots require minimal user intervention, offer high-level autonomous functionality, and can carry payloads for specific missions. UAV operations are highly effective in collecting the valuable quantitative and qualitative data required to monitor isolated and distant regions. The integration of UAVs in such regions has substantially enhanced environmental monitoring by saving time, increasing precision and safety, minimizing the human footprint, and extending the study area of hard-to-access regions. Moreover, we have seen notable growth in emerging technologies such as artificial intelligence (AI), machine learning (ML), deep learning (DL), computer vision, the Internet of Things (IoT), laser scanning, sensing, oblique photogrammetry, aerial imaging, efficient perception, and 3D mapping, all of which assist UAVs in their operations. These promising technologies can outperform humans in sophisticated tasks such as medical image analysis, 3D mapping, aerial photography, and autonomous driving. Accordingly, there is growing interest in applying these cutting-edge technologies to enhance UAVs' level of autonomy and their other capabilities. We have also witnessed tremendous growth in the use of UAVs for remotely sensing rural, urban, suburban, and remote regions.
The extensive applicability and popularity of UAVs are not only strengthening the development of advanced UAV sensors, including RGB, LiDAR, laser scanners, thermal cameras, and hyperspectral and multispectral sensors, but also driving pragmatic, innovative problem-solving features and intelligent decision-making strategies in diverse domains. Autonomous driving, collision avoidance, strong mobility for acquiring images at high temporal and spatial resolutions, environmental awareness, communication, precise control, dynamic data collection, 3D information acquisition, and intelligent algorithms likewise support UAV-based remote sensing technology across applications. The growing advancement of UAVs as a remote sensing platform, together with the miniaturization of instrumentation, has resulted in an expanding uptake of this technology across the disciplines of remote sensing science.

This Topic aims to provide a fresh and modern viewpoint on recent developments, emerging patterns, and applications in the field. Our objective is to gather the latest research contributions from academics and practitioners with diverse interests to fill the gap in the aforementioned research areas. We invite researchers to contribute high-quality scientific articles that bridge the gap between theory, design practice, and applications. We seek reviews, surveys, and original research articles on, but not limited to, the topics given below:

  • Real-time AI for motion planning, trajectory planning and control, and data gathering and analysis in UAVs;
  • Image/LiDAR feature extraction;
  • Processing algorithms for UAV-aided imagery datasets;
  • Semantic/instance segmentation, classification, and object detection and tracking with UAV data using data mining, AI, ML, and DL algorithms;
  • Cooperative perception and mapping utilizing UAV swarms;
  • UAV image/point cloud processing in the power and oil industries, hydraulics, agriculture, ecology, emergency response, and smart cities;
  • UAV-borne hyperspectral remote sensing;
  • Collaborative strategies and mechanisms between UAVs and other systems, including hardware/software architectures (such as multi-agent systems), protocols, and strategies for working together;
  • UAV onboard remote sensing data storage, transmission, and retrieval;
  • Advances in the applications of UAVs in archaeology, precision agriculture, yield protection, atmospheric research, area management, photogrammetry, 3D modeling, object reconstruction, Earth observation, climate change, sensing and imaging for coastal and environment monitoring, construction, mining, pollution monitoring, target tracking, humanitarian localization, security and surveillance, and ecological applications;
  • Use of optical laser, hyperspectral, multi-spectral, and SAR technologies for UAV-based remote sensing.

Dr. Syed Agha Hassnain Mohsan
Prof. Dr. Pascal Lorenz
Dr. Khaled Rabie
Dr. Muhammad Asghar Khan
Dr. Muhammad Shafiq
Topic Editors

Keywords

  • drones
  • UAVs
  • aerial robots
  • remote sensing
  • aerial imagery
  • LiDAR
  • machine learning
  • atmospheric research
  • sensing and imaging
  • processing algorithms

Participating Journals

Journal Name                                Impact Factor  CiteScore  Launched Year  First Decision (median)  APC
AI                                          -              -          2020           25.1 days                1200 CHF
Drones                                      5.532          7.2        2017           13.6 days                2000 CHF
Inventions                                  -              5.2        2016           14.7 days                1500 CHF
Machine Learning and Knowledge Extraction   -              -          2019           16.7 days                1400 CHF
Remote Sensing                              5.349          7.4        2009           19.7 days                2500 CHF
Sensors                                     3.847          6.4        2001           15 days                  2400 CHF

Preprints.org is a platform dedicated to making early versions of research outputs permanently available and citable. MDPI journals allow posting on preprint servers such as Preprints.org prior to publication. For more details about preprints, please visit https://www.preprints.org.

Published Papers (6 papers)

Article
JO-TADP: Learning-Based Cooperative Dynamic Resource Allocation for MEC–UAV-Enabled Wireless Network
Drones 2023, 7(5), 303; https://doi.org/10.3390/drones7050303 - 04 May 2023
Viewed by 683
Abstract
Providing robust communication services to mobile users (MUs) is a challenging task due to the dynamicity of MUs. Unmanned aerial vehicles (UAVs) and mobile edge computing (MEC) are used to improve connectivity by allocating resources to MUs more efficiently in a dynamic environment. However, energy consumption and lifetime issues in UAVs severely limit the resources and communication services. In this paper, we propose a dynamic cooperative resource allocation scheme for MEC–UAV-enabled wireless networks called joint optimization of trajectory, altitude, delay, and power (JO-TADP) using anarchic federated learning (AFL) and other learning algorithms to enhance data rate, use rate, and resource allocation efficiency. Initially, the MEC–UAVs are optimally positioned based on the MU density using the beluga whale optimization (BLWO) algorithm. Optimal clustering is performed in terms of splitting and merging using the triple-mode density peak clustering (TM-DPC) algorithm based on user mobility. Moreover, the trajectory, altitude, and hovering time of MEC–UAVs are predicted and optimized using the self-simulated inner attention long short-term memory (SSIA-LSTM) algorithm. Finally, the MUs and MEC–UAVs play auction games based on the classified requests, using an AFL-based cross-scale attention feature pyramid network (CSAFPN) and enhanced deep Q-learning (EDQN) algorithms for dynamic resource allocation. To validate the proposed approach, our system model has been simulated in Network Simulator 3.26 (NS-3.26). The results demonstrate that the proposed work outperforms the existing works in terms of connectivity, energy efficiency, resource allocation, and data rate.

Article
The Development of Copper Clad Laminate Horn Antennas for Drone Interferometric Synthetic Aperture Radar
Drones 2023, 7(3), 215; https://doi.org/10.3390/drones7030215 - 20 Mar 2023
Viewed by 1109
Abstract
Interferometric synthetic aperture radar (InSAR) is an active remote sensing technique that typically utilises satellite data to quantify Earth surface and structural deformation. Drone InSAR should provide improved spatial-temporal data resolutions and operational flexibility. This necessitates the development of custom radar hardware for drone deployment, including antennas for the transmission and reception of microwave electromagnetic signals. We present the design, simulation, fabrication, and testing of two lightweight and inexpensive copper clad laminate (CCL)/printed circuit board (PCB) horn antennas for C-band radar deployed on the DJI Matrice 600 Pro drone. This is the first demonstration of horn antennas fabricated from CCL, and the first complete overview of antenna development for drone radar applications. The dimensions are optimised for the desired gain and centre frequency of 19 dBi and 5.4 GHz, respectively. The S11, directivity/gain, and half power beam widths (HPBW) are simulated in MATLAB, with the antennas tested in a radio frequency (RF) electromagnetic anechoic chamber using a calibrated vector network analyser (VNA) for comparison. The antennas are highly directive with gains of 15.80 and 16.25 dBi, respectively. The reduction in gain compared to the simulated value is attributed to a resonant frequency shift caused by the brass input feed increasing the electrical dimensions. The measured S11 and azimuth HPBW either meet or exceed the simulated results. A slight performance disparity between the two antennas is attributed to minor artefacts of the manufacturing and testing processes. The incorporation of the antennas into the drone payload is presented. Overall, both antennas satisfy our performance criteria and highlight the potential for CCL/PCB/FR-4 as a lightweight and inexpensive material for custom antenna production in drone radar and other antenna applications.
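The gain figures quoted above follow from the standard aperture relation for horn antennas. The sketch below illustrates that relation; the aperture dimensions and the 0.51 aperture efficiency are illustrative assumptions, not the paper's actual design values.

```python
import math

def horn_gain_dbi(width_m, height_m, freq_hz, efficiency=0.51):
    """Approximate pyramidal-horn gain from its aperture area.

    Uses the standard aperture relation G = eta * 4 * pi * A / lambda^2,
    where eta ~ 0.51 is a typical pyramidal-horn aperture efficiency.
    """
    wavelength = 299_792_458.0 / freq_hz  # c / f
    gain_linear = efficiency * 4 * math.pi * (width_m * height_m) / wavelength ** 2
    return 10 * math.log10(gain_linear)

# Hypothetical aperture dimensions for a C-band horn at 5.4 GHz.
print(round(horn_gain_dbi(0.20, 0.15, 5.4e9), 1))  # roughly 18 dBi
```

Doubling the aperture area adds about 3 dB of gain, which is why horn dimensions trade directly against payload size on a drone platform.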

Article
Mine Pit Wall Geological Mapping Using UAV-Based RGB Imaging and Unsupervised Learning
Remote Sens. 2023, 15(6), 1641; https://doi.org/10.3390/rs15061641 - 18 Mar 2023
Cited by 1 | Viewed by 704
Abstract
In surface mining operations, geological pit wall mapping is important since it provides significant information on the surficial geological features throughout the pit wall faces, thereby improving geological certainty and operational planning. Conventional pit wall geological mapping techniques generally rely on close visual observations and laboratory testing results, which can be both time- and labour-intensive and can expose the technical staff to different safety hazards on the ground. In this work, a case study was conducted by investigating the use of drone-acquired RGB images for pit wall mapping. High spatial resolution RGB image data were collected using a commercially available unmanned aerial vehicle (UAV) at two gold mines in Nevada, USA. Cluster maps were produced using unsupervised learning algorithms, including the implementation of convolutional autoencoders, to explore the use of unlabelled image data for pit wall geological mapping purposes. While the results are promising for simple geological settings, they deviate from human-labelled ground truth maps in more complex geological conditions. This indicates the need to further optimize and explore the algorithms to increase robustness for more complex geological cases.
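As a rough illustration of the unsupervised step, the sketch below clusters per-pixel colour features with a minimal k-means. The paper's pipeline instead feeds features learned by a convolutional autoencoder into its clustering stage, and the random image here is only a stand-in for real pit-wall imagery.

```python
import numpy as np

def kmeans(features, k, iters=20, seed=0):
    """Minimal k-means: cluster per-pixel feature vectors into k classes."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest cluster center.
        d = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centers; keep the old center if a cluster empties.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(1)
image = rng.random((32, 32, 3))       # stand-in RGB tile (H, W, 3)
labels = kmeans(image.reshape(-1, 3), k=4)
cluster_map = labels.reshape(32, 32)  # unsupervised cluster map of the wall face
```

Replacing the raw RGB vectors with autoencoder latents is what lets the clustering respond to texture and structure rather than colour alone.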

Article
Estimating Black Oat Biomass Using Digital Surface Models and a Vegetation Index Derived from RGB-Based Aerial Images
Remote Sens. 2023, 15(5), 1363; https://doi.org/10.3390/rs15051363 - 28 Feb 2023
Cited by 1 | Viewed by 681
Abstract
Responsible for food production and industry inputs, agriculture needs to adapt to worldwide increasing demands and environmental requirements. In this scenario, black oat has gained environmental and economic importance since it can be used in no-tillage systems, green manure, or animal feed supplementation. Despite its importance, few studies have been conducted to introduce more accurate and technological applications. Plant height (H) correlates with biomass production, which is related to yield. Similarly, productivity status can be estimated from vegetation indices (VIs). The use of unmanned aerial vehicles (UAV) for imaging enables greater spatial and temporal resolutions from which to derive information such as H and VI. However, faster and more accurate methodologies are necessary for the application of this technology. This study intended to obtain high-quality digital surface models (DSMs) and orthoimages from UAV-based RGB images via a direct-to-process means; that is, without the use of ground control points or image pre-processing. DSMs and orthoimages were used to derive H (HDSM) and VIs (VIRGB), which were used for H and dry biomass (DB) modeling. Results showed that HDSM presented a strong correlation with actual plant height (HREF) (R2 = 0.85). Modeling biomass based on HDSM demonstrated better performance for data collected up until and including the grain filling (R2 = 0.84) and flowering (R2 = 0.82) stages. Biomass modeling based on VIRGB performed better for data collected up until and including the booting stage (R2 = 0.80). The best results for biomass estimation were obtained by combining HDSM and VIRGB, with data collected up until and including the grain filling stage (R2 = 0.86). Therefore, the presented methodology has permitted the generation of trustworthy models for estimating the H and DB of black oats.
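The combined HDSM + VIRGB model can be sketched as an ordinary least-squares fit on the two predictors. The data below are synthetic stand-ins (not the study's measurements), and the coefficients are arbitrary, chosen only to show how such a model is fitted and scored with R2.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 60
# Synthetic stand-ins: DSM-derived plant height (m) and an RGB vegetation index.
h_dsm = rng.uniform(0.2, 1.2, n)
vi_rgb = rng.uniform(0.1, 0.6, n)
# Hypothetical "true" dry biomass with noise, for illustration only.
biomass = 2.0 + 3.5 * h_dsm + 4.0 * vi_rgb + rng.normal(0, 0.1, n)

# Fit dry biomass on [1, H_DSM, VI_RGB] by ordinary least squares.
X = np.column_stack([np.ones(n), h_dsm, vi_rgb])
coef, *_ = np.linalg.lstsq(X, biomass, rcond=None)

# Coefficient of determination (R2) of the fitted model.
pred = X @ coef
r2 = 1 - np.sum((biomass - pred) ** 2) / np.sum((biomass - biomass.mean()) ** 2)
```

In the study's terms, a high R2 for the combined predictors mirrors the reported result that HDSM and VIRGB together (R2 = 0.86) outperform either alone.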

Article
Fine Classification of UAV Urban Nighttime Light Images Based on Object-Oriented Approach
Sensors 2023, 23(4), 2180; https://doi.org/10.3390/s23042180 - 15 Feb 2023
Viewed by 773
Abstract
Fine classification of urban nighttime lighting is a key prerequisite for small-scale nighttime urban research. To fill the gap in high-resolution urban nighttime light image classification and recognition research, this paper uses a small rotary-wing UAV platform, taking nighttime static monocular tilted light images of communities near Meixi Lake in Changsha City as research data. Using an object-oriented classification method to fully extract the spectral, textural, and geometric features of urban nighttime lights, we build four classification models based on random forest (RF), support vector machine (SVM), K-nearest neighbor (KNN), and decision tree (DT), respectively, to finely extract five types of nighttime lights: window light, neon light, road reflective light, building reflective light, and background. The main conclusions are as follows: (i) Dividing the image equally into three regions according to the visual direction can alleviate the variable-scale problem of monocular tilted images, and multiresolution segmentation results combined with Canny edge detection are more suitable for urban nighttime lighting images; (ii) RF has the highest classification accuracy among the four algorithms, with an overall classification accuracy of 95.36% and a kappa coefficient of 0.9381 in the far-view region, followed by SVM and KNN, with DT the worst; (iii) Among the fine classification results of urban light types, window light and background have the highest classification accuracy, with both UA and PA above 93% in the RF classification model, while road reflective light has the lowest accuracy; (iv) Among the selected classification features, the spectral features have the highest contribution rates, above 59% in all three regions, followed by the textural features, with the geometric features contributing least. This paper demonstrates the feasibility of nighttime UAV static monocular tilted image data for fine classification of urban light types based on an object-oriented classification approach, and provides data and technical support for small-scale urban nighttime research such as community building identification and nighttime human activity perception.
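The overall accuracy and kappa coefficient reported above follow the standard confusion-matrix definitions, sketched below with a hypothetical three-class matrix (not the study's data).

```python
import numpy as np

def overall_accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows: reference classes, columns: predicted classes)."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    observed = np.trace(cm) / total                # overall accuracy
    expected = (cm.sum(0) @ cm.sum(1)) / total**2  # chance agreement
    return observed, (observed - expected) / (1 - expected)

# Hypothetical 3-class matrix (e.g. window light, neon light, background).
cm = [[50, 3, 2],
      [4, 45, 1],
      [2, 2, 41]]
oa, kappa = overall_accuracy_and_kappa(cm)
```

Kappa discounts the agreement that a random classifier would achieve from the class proportions alone, which is why it is reported alongside overall accuracy.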

Article
AFL-Net: Attentional Feature Learning Network for Building Extraction from Remote Sensing Images
Remote Sens. 2023, 15(1), 95; https://doi.org/10.3390/rs15010095 - 24 Dec 2022
Viewed by 849
Abstract
Convolutional neural networks (CNNs) perform well in tasks of segmenting buildings from remote sensing images. However, the intraclass heterogeneity of buildings is high in images, while the interclass homogeneity between buildings and other nonbuilding objects is low. This leads to an inaccurate distinction between buildings and complex backgrounds. To overcome this challenge, we propose an Attentional Feature Learning Network (AFL-Net) that can accurately extract buildings from remote sensing images. We designed an attentional multiscale feature fusion (AMFF) module and a shape feature refinement (SFR) module to improve building recognition accuracy in complex environments. The AMFF module adaptively adjusts the weights of multi-scale features through the attention mechanism, which enhances the global perception and ensures the integrity of building segmentation results. The SFR module captures the shape features of the buildings, which enhances the network capability for identifying the area between building edges and surrounding nonbuilding objects and reduces the over-segmentation of buildings. An ablation study was conducted with both qualitative and quantitative analyses, verifying the effectiveness of the AMFF and SFR modules. The proposed AFL-Net achieved 91.37, 82.10, 73.27, and 79.81% intersection over union (IoU) values on the WHU Building Aerial Imagery, Inria Aerial Image Labeling, Massachusetts Buildings, and Building Instances of Typical Cities in China datasets, respectively. Thus, the AFL-Net offers the prospect of application for successful extraction of buildings from remote sensing images.
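The IoU metric quoted for each benchmark follows the standard definition for binary segmentation masks; a minimal sketch with toy masks (illustrative only):

```python
import numpy as np

def iou(pred, truth):
    """Intersection over union for binary segmentation masks."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

# Toy 4x4 building masks for illustration.
pred  = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 1], [0, 1, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0]])
print(iou(pred, truth))  # intersection 4 px, union 6 px -> 2/3
```

Unlike pixel accuracy, IoU penalizes both missed building pixels and false detections, which is why it is the standard score for building-extraction benchmarks.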
