Editorial

Editorial on the Advances, Innovations and Applications of UAV Technology for Remote Sensing

by Syed Agha Hassnain Mohsan 1,*, Muhammad Asghar Khan 2 and Yazeed Yasin Ghadi 3

1 Ocean College, Zhejiang University, Zheda Road 1, Zhoushan 316021, China
2 Faculty of Engineering Sciences & Technology, Hamdard University Islamabad Campus, Islamabad 44000, Pakistan
3 Department of Computer Science and Software Engineering, Al Ain University, Al Ain P.O. Box 112612, United Arab Emirates
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(21), 5087; https://doi.org/10.3390/rs15215087
Submission received: 19 October 2023 / Accepted: 20 October 2023 / Published: 24 October 2023
Currently, several kinds of Unmanned Aerial Vehicles (UAVs), or drones [1], are available on the market and are being extensively utilized in a wide range of real-world applications, such as archaeology, ecology, hydrology, glaciology, gas pipeline monitoring, volcanic gas sampling, environmental monitoring, fire monitoring, reef monitoring, biological sensing, humanitarian observations, disaster prevention, atmospheric research, high-precision remote sensing, forestry, smart agriculture, geological mapping, photogrammetry, crack detection, vertical structure inspection, and construction site surveys. Different applications of UAVs are represented in Figure 1. This less-invasive drone technology helps reduce user intervention, perform operations autonomously, and carry payloads of several kilograms on critical missions. Drone technology has emerged as a promising tool to gather significant qualitative and quantitative data for observing distant and isolated regions [2]. The use of UAVs in these regions can effectively enhance climate and environmental monitoring by reducing mission times, enhancing safety, reducing the human footprint, increasing precision, and extending the geographical area covered by inspections, including hard-to-access regions [3].
Over the past few decades, we have noticed a remarkable boost in cutting-edge technologies, such as three-dimensional (3D) mapping, efficient perception, aerial imaging, oblique photogrammetry, sensing, laser scanning, the Internet of Things (IoT), computer vision, blockchain, mobile edge computing (MEC), artificial intelligence (AI), deep learning (DL), and machine learning (ML), all of which notably support UAV technology in carrying out diverse operations [4]. The integration of these technologies can significantly reduce human intervention in multiple sophisticated operations, such as autonomous driving, aerial photography, object detection and classification, 3D mapping, aerial surveillance, and medical image analysis. We have noticed a rapidly expanding interest and tremendous growth in integrating these promising technologies with UAVs to substantially enhance autonomy, motion control, trajectory planning, data collection, collaborative strategies, and various other key indicators for efficiently using UAVs in remote sensing of urban, suburban, rural, and remote areas [5]. The rapid popularity and extensive applicability of UAVs not only strengthen advancements in UAV sensing devices, including hyperspectral and multispectral sensors, thermal cameras, laser scanners, Light Detection and Ranging (LiDAR), and red, green, and blue (RGB) sensors, but also drive innovative and pragmatic decision-making strategies and technical problem-solving capabilities in various fields. Particularly for remote sensing, the coexistence of high-mobility platforms that acquire images at high spatial and temporal resolutions, collision avoidance mechanisms, autonomous driving techniques, intelligent algorithms, optimization techniques, 3D information acquisition, dynamic data collection, novel sensing platforms, innovative instrumentation, precise control, and efficient communication under harsh conditions can effectively assist the growing uptake of UAV-aided remote sensing for several use cases [6,7,8,9,10]. Figure 2 shows UAVs applied to remote sensing for various applications.
This Topic Issue was organized to address current advancements, innovations, and applications in the specialized field of UAV technology for remote sensing [11,12], based on theory, simulation analysis, and experimental validation. The covered areas include UAV trajectory planning and control; image feature extraction and processing algorithms; semantic segmentation; object detection; cooperative perception through UAV swarms; hyperspectral imaging; multi-agent systems; remote sensing data operations; and SAR and LiDAR technologies for various applications, including security and surveillance, humanitarian localization, target tracking, pollution monitoring, mining, construction, sensing and imaging for coastal monitoring, atmospheric research, climate change, object reconstruction, 3D modeling, Earth observation, photogrammetry, yield protection, and precision agriculture. These themes have been the focus of the Topic Issue “Advances, Innovations and Applications of UAV Technology for Remote Sensing”, hosted jointly by Remote Sensing, Sensors, Drones, Machine Learning and Knowledge Extraction, Inventions, and AI.
This Topic Issue mainly aims to shed light on current research initiatives from scientific experts and industry practitioners with diverse interests across various fields, in order to unlock promising application scenarios, address recent breakthroughs, explore research novelties, and investigate technical uncertainties. Fifty-six articles were submitted for consideration to this Topic Issue, all of which were subject to a rigorous review process to meet the originality and novelty criteria set by the associated journals, as well as the scope of each journal and of the Topic Issue. In total, twenty-two articles covering viewpoints on recent developments, experimental demonstrations, and various applications in the field were finally accepted for publication. The published high-quality research articles contribute significantly to filling gaps in the aforementioned research domains. The research contributions made by each study are briefly discussed in the remainder of this Editorial. The contributions are listed below:
  • Qiu, Y., Wu, F., Qian, H., Zhai, R., Gong, X., Yin, J., ... & Wang, A. (2022). AFL-Net: Attentional Feature Learning Network for Building Extraction from Remote Sensing Images. Remote Sensing, 15(1), 95.
  • Zhang, D., Li, D., Zhou, L., & Wu, J. (2023). Fine Classification of UAV Urban Nighttime Light Images Based on Object-Oriented Approach. Sensors, 23(4), 2180.
  • Trevisan, L. R., Brichi, L., Gomes, T. M., & Rossi, F. (2023). Estimating Black Oat Biomass Using Digital Surface Models and a Vegetation Index Derived from RGB-Based Aerial Images. Remote Sensing, 15(5), 1363.
  • Yang, P., Esmaeili, K., Goodfellow, S., & Ordóñez Calderón, J. C. (2023). Mine Pit Wall Geological Mapping Using UAV-Based RGB Imaging and Unsupervised Learning. Remote Sensing, 15(6), 1641.
  • Carpenter, A., Lawrence, J. A., Ghail, R., & Mason, P. J. (2023). The Development of Copper Clad Laminate Horn Antennas for Drone Interferometric Synthetic Aperture Radar. Drones, 7(3), 215.
  • Ahmad, S., Zhang, J., Khan, A., Khan, U. A., & Hayat, B. (2023). JO-TADP: Learning-Based Cooperative Dynamic Resource Allocation for MEC–UAV-Enabled Wireless Network. Drones, 7(5), 303.
  • Daniels, L., Eeckhout, E., Wieme, J., Dejaegher, Y., Audenaert, K., & Maes, W. H. (2023). Identifying the Optimal Radiometric Calibration Method for UAV-Based Multispectral Imaging. Remote Sensing, 15(11), 2909.
  • Du, H., Fan, H., Zhang, Q., & Li, S. (2023). Measurements of the Thickness and Area of Thick Oil Slicks Using Ultrasonic and Image Processing Methods. Remote Sensing, 15(12), 2977.
  • Liang, H., Lee, S. C., & Seo, S. (2023). UAV-Based Low Altitude Remote Sensing for Concrete Bridge Multi-Category Damage Automatic Detection System. Drones, 7(6), 386.
  • Zhang, C., Song, Y., Jiang, R., Hu, J., & Xu, S. (2023). A Cognitive Electronic Jamming Decision-Making Method Based on Q-Learning and Ant Colony Fusion Algorithm. Remote Sensing, 15(12), 3108.
  • Yuan, S., Sun, B., Zuo, Z., Huang, H., Wu, P., Li, C., ... & Zhao, Z. (2023). IRSDD-YOLOv5: Focusing on the Infrared Detection of Small Drones. Drones, 7(6), 393.
  • Pitombeira, K., & Mitishita, E. (2023). Influence of On-Site Camera Calibration with Sub-Block of Images on the Accuracy of Spatial Data Obtained by PPK-Based UAS Photogrammetry. Remote Sensing, 15(12), 3126.
  • Shaikh, J. A., Wang, C., Khan, M. A., Mohsan, S. A. H., Ullah, S., Chelloug, S. A., ... & Muthanna, A. (2023). A UAV-Assisted Stackelberg Game Model for Securing IoMT Healthcare Networks. Drones, 7(7), 415.
  • Detka, J., Coyle, H., Gomez, M., & Gilbert, G. S. (2023). A Drone-Powered Deep Learning Methodology for High Precision Remote Sensing in California’s Coastal Shrubs. Drones, 7(7), 421.
  • Gong, K., Liu, B., Xu, X., Xu, Y., He, Y., Zhang, Z., & Rasol, J. (2023). Research of an Unmanned Aerial Vehicle Autonomous Aerial Refueling Docking Method Based on Binocular Vision. Drones, 7(7), 433.
  • Xu, Z., Wang, Y., Hao, X., & Fan, J. (2023). Crack Detection of Bridge Concrete Components Based on Large-Scene Images Using an Unmanned Aerial Vehicle. Sensors, 23(14), 6271.
  • Fan, J., Dai, W., Wang, B., Li, J., Yao, J., & Chen, K. (2023). UAV-Based Terrain Modeling in Low-Vegetation Areas: A Framework Based on Multiscale Elevation Variation Coefficients. Remote Sensing, 15(14), 3569.
  • Rahman, S. U., Khan, A., Usman, M., Bilal, M., Cho, Y. Z., & El-Sayed, H. (2023). Dynamic Repositioning of Aerial Base Stations for Enhanced User Experience in 5G and Beyond. Sensors, 23(16), 7098.
  • Dai, W., Qiu, R., Wang, B., Lu, W., Zheng, G., Amankwah, S. O. Y., & Wang, G. (2023). Enhancing UAV-SfM Photogrammetry for Terrain Modeling from the Perspective of Spatial Structure of Errors. Remote Sensing, 15(17), 4305.
  • Ren, K., Chen, X., Wang, Z., Liang, X., Chen, Z., & Miao, X. (2023). HAM-Transformer: A Hybrid Adaptive Multi-Scaled Transformer Net for Remote Sensing in Complex Scenes. Remote Sensing, 15(19), 4817.
  • Gordon, S., & Brooker, G. (2023). Using Schlieren Imaging and a Radar Acoustic Sounding System for the Detection of Close-in Air Turbulence. Sensors, 23(19), 8255.
  • Cao, H., You, D., Ji, D., Gu, X., Wen, J., Wu, J., Li, Y., Cao, Y., Cui, T., & Zhang, H. (2023). The Method of Multi-Angle Remote Sensing Observation Based on Unmanned Aerial Vehicles and the Validation of BRDF. Remote Sensing, 15(20), 5000.
Convolutional neural networks (CNNs) exhibit good performance for segmentation of remote sensing images. However, complex backgrounds and the difficulty of properly distinguishing different buildings remain critical challenges. In their contribution (1), Qiu et al. introduce the Attentional Feature Learning Network (AFL-Net) to overcome this problem by properly extracting buildings from remote sensing images. The authors developed a feature refinement (SFR) module and an attentional multiscale feature fusion (AMFF) module to enhance building recognition accuracy in complicated scenarios. The SFR module exploits the shapes and features of buildings to strengthen the network's ability to distinguish building edges from non-building objects in the surrounding area. The AMFF module adaptively weights multi-scale features to improve global perception and significantly enhance building segmentation results.
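To make the idea of attention-weighted multi-scale fusion more concrete, the following PyTorch sketch resizes feature maps from several scales to a common resolution and combines them with learned per-scale attention weights. The module, its structure, and all names are illustrative assumptions and are not taken from the AFL-Net implementation.

```python
# Minimal sketch of attention-weighted multi-scale feature fusion (hypothetical module,
# not the AFL-Net AMFF implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleAttentionalFusion(nn.Module):
    """Fuses feature maps from several scales with learned per-scale attention weights."""
    def __init__(self, channels: int, num_scales: int):
        super().__init__()
        # One 1x1 conv per scale produces a single-channel attention map.
        self.attn_convs = nn.ModuleList(
            nn.Conv2d(channels, 1, kernel_size=1) for _ in range(num_scales)
        )

    def forward(self, features):
        # features: list of tensors [B, C, Hi, Wi] at different scales.
        target_size = features[0].shape[-2:]
        # Resize every scale to the finest resolution.
        resized = [F.interpolate(f, size=target_size, mode="bilinear", align_corners=False)
                   for f in features]
        # Per-scale attention logits, normalized across scales with softmax.
        logits = torch.cat([conv(f) for conv, f in zip(self.attn_convs, resized)], dim=1)
        weights = torch.softmax(logits, dim=1)                  # [B, S, H, W]
        stacked = torch.stack(resized, dim=1)                   # [B, S, C, H, W]
        fused = (weights.unsqueeze(2) * stacked).sum(dim=1)     # [B, C, H, W]
        return fused

# Example: fuse three feature maps with 64 channels each.
if __name__ == "__main__":
    feats = [torch.randn(1, 64, 64, 64), torch.randn(1, 64, 32, 32), torch.randn(1, 64, 16, 16)]
    print(SimpleAttentionalFusion(64, 3)(feats).shape)  # torch.Size([1, 64, 64, 64])
```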
To address high-resolution urban nighttime light image recognition and classification, Zhang et al., in contribution (2), focused on a small-rotary-wing UAV design. They considered object-oriented classification techniques to extract the geometric, textural, and spectral features of urban nighttime lights. The authors developed four kinds of classification models based on decision tree (DT), K-nearest neighbor (KNN), support vector machine (SVM), and random forest (RF) classifiers to precisely extract five different types of nighttime lights: building-reflective light, road-reflective light, neon light, window light, and background. The presented study provides data and technical support for nighttime human activity perception and community building identification. UAV-based imaging offers high temporal and spatial resolutions, which can greatly support precision agriculture; such data can help derive vegetation indices (VIs) and plant height (H), although more accurate and faster techniques are still required for these applications. Trevisan et al., in contribution (3), used high-quality orthoimages and digital surface models (DSMs) derived from UAV-acquired RGB images and direct-to-process methodologies, without image pre-processing or ground control points. The orthoimages and DSMs were used to derive VIs (VIRGB) and H (HDSM), which in turn were used for dry biomass (DB) and H modeling. The proposed technique yielded trustworthy models for estimating DB and H in black oats. Yang et al., in contribution (4), conducted a case study investigating the role of drone-based RGB images in pit wall mapping. The authors used commercially available UAVs to collect high-spatial-resolution RGB image data at two gold mines in Nevada, USA. Unsupervised learning algorithms, including convolutional autoencoders, were implemented on unlabeled image data to produce cluster maps for pit wall geological mapping. The proposed techniques performed well in simple geological settings; however, the results indicate the need to further optimize and use more efficient algorithms to enhance robustness in highly complicated geological scenarios.
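As a minimal illustration of the kind of four-way classifier comparison described for contribution (2), the following scikit-learn sketch cross-validates DT, KNN, SVM, and RF models; the features and labels are synthetic placeholders, not the authors' nighttime-light dataset.

```python
# Illustrative comparison of the four classifier families mentioned above on
# synthetic stand-ins for object-level geometric/textural/spectral features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Five classes, e.g., building-reflective, road-reflective, neon, window, background.
X, y = make_classification(n_samples=500, n_features=12, n_informative=8,
                           n_classes=5, random_state=0)

models = {
    "DT": DecisionTreeClassifier(random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf", C=1.0),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validated accuracy
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```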
Interferometric synthetic aperture radar (InSAR) is an effective remote sensing methodology that uses satellite data to quantify deformation of structures and the Earth's surface. Drone-based InSAR can improve operational flexibility and provide enhanced spatio-temporal data resolutions, but deploying it on drones necessitates custom radar hardware, including antennas to transmit and receive microwave electromagnetic signals. Carpenter et al., in contribution (5), presented the design, fabrication, simulation analysis, and testing of two cost-efficient, lightweight printed circuit board (PCB)/copper-clad laminate (CCL) horn antennas for deploying a C-band radar on the DJI Matrice 600 Pro drone. Both antennas met the performance criteria, validating antenna production for drone radar and other application scenarios.
UAV technology has contributed substantially to navigation, security, and communication services. Ensuring robust communication for mobile users is a critical issue due to their mobility. Mobile edge computing (MEC) and UAVs can be utilized to enhance connectivity by efficiently allocating resources to mobile users in a dynamic environment. However, the lifetime and energy consumption of UAVs are critical concerns that affect communication and resource allocation. Ahmad et al., in contribution (6), propose a dynamic cooperative resource allocation strategy for MEC-UAV-aided wireless networks, named joint optimization of trajectory, altitude, delay, and power (JO-TADP), which draws on learning algorithms such as anarchic federated learning (AFL) to improve resource allocation efficiency, usage rate, and data rate.
Advancements in multispectral cameras and UAVs have supported several remote sensing applications with unprecedented spatial resolution. However, uncertainty remains regarding the ability of radiometric calibration techniques to convert raw images into surface reflectance; various calibration techniques exist, but their pros and cons need to be further explored. In contribution (7), Daniels et al. carried out an empirical analysis of five different calibration techniques for a 10-band multispectral camera, using spectrometer measurements as a reference for comparing the multispectral images. They collected two datasets over the same field, one under clear skies and one under overcast conditions. The authors found that the empirical line method (ELM), applied with multiple radiometric reference targets imaged at the mission altitude, performed best in terms of RMSE and bias.
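For readers unfamiliar with the empirical line method (ELM) mentioned above, the following sketch fits a per-band linear gain/offset from reference-target digital numbers to known reflectance; all numbers are made-up placeholders, and a real workflow involves per-band fits and careful target measurement.

```python
# Hedged sketch of the empirical line method (ELM): a linear mapping from raw
# digital numbers (DN) to surface reflectance, fitted on reference targets.
import numpy as np

# Known reflectance of three reference panels (one band shown) and their mean DN
# extracted from imagery acquired at mission altitude (illustrative values).
panel_reflectance = np.array([0.05, 0.23, 0.44])
panel_dn = np.array([2100.0, 9800.0, 18500.0])

# Fit DN -> reflectance as a straight line (gain and offset).
gain, offset = np.polyfit(panel_dn, panel_reflectance, deg=1)

def dn_to_reflectance(dn):
    """Apply the fitted empirical line to convert raw DN to reflectance."""
    return gain * dn + offset

print(dn_to_reflectance(np.array([5000.0, 12000.0])))
```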
The in situ measurement of oil slick thickness, for estimating the volume of an oil spill, is essential to determining the oil spill response methodology and evaluating oil spill disposal efficiency. Du et al., in contribution (8), propose a technique based on ultrasonic inspection and image processing to estimate the volume of thick oil slicks from their thickness and area. They used a remotely operated vehicle (ROV) carrying two ultrasonic immersion transducers to capture ultrasonic reflections from an oil slick, while an optical camera integrated into an airborne drone was used to capture images of the slick. In addition, the authors propose image processing algorithms to measure the oil slick area from these images. They performed various trials to validate the proposed technique, and the results indicated that the thickness, area, and volume of thick oil slicks can be calculated precisely.
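A back-of-the-envelope sketch of the volume estimate described in contribution (8): thickness from ultrasonic two-way travel time, area from segmented drone imagery, and volume as their product. The sound speed in oil, pixel size, and echo delay below are illustrative assumptions, not the authors' measured values.

```python
# Toy volume estimate: ultrasonic thickness x image-derived area (assumed parameters).
def slick_thickness(echo_delay_s: float, sound_speed_oil: float = 1400.0) -> float:
    """Two-way travel time between the top and bottom oil interfaces -> thickness [m]."""
    return sound_speed_oil * echo_delay_s / 2.0

def slick_area(oil_pixel_count: int, ground_sample_distance_m: float) -> float:
    """Number of segmented oil pixels in the aerial image -> area [m^2]."""
    return oil_pixel_count * ground_sample_distance_m ** 2

thickness = slick_thickness(echo_delay_s=2.5e-5)                            # ~17.5 mm
area = slick_area(oil_pixel_count=120_000, ground_sample_distance_m=0.02)   # 48 m^2
print(f"thickness = {thickness*1000:.1f} mm, area = {area:.1f} m^2, "
      f"volume = {thickness*area:.3f} m^3")
```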
Damage detection in bridges can be a difficult task, as manual data acquisition is constrained by environmental conditions, time, and resources. In addition, frequently used damage detection techniques depend on pixel-level segmentation, reducing the feasibility of accurately locating and classifying various damage types. To overcome such issues, Liang et al., in contribution (9), propose a novel, fully automated damage detection technique for concrete bridges using UAV-based remote sensing technology. The presented findings indicate the better performance of the proposed system compared to the YOLOv5-L and YOLOX-L models. The robustness of the presented visual inspection results indicates the system's efficiency and accuracy for bridge inspection, damage detection, and maintenance, showing its potential to ensure the longevity and safety of vital infrastructure.
To enhance the adaptability and efficiency of cognitive radar jamming decision making, Zhang et al., in contribution (10), propose a fusion algorithm (Ant-QL) based on Q-learning and the ant colony algorithm. The proposed algorithm improves adaptability via real-time interactions between the target radar and the jammer, and it does not depend on prior information. It can be applied to single- or multiple-jammer countermeasures with strong jamming effects. The proposed algorithm effectively avoids falling into local optima, thereby improving convergence speed with high robustness and stability.
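To make the learning rule behind such a Q-learning/ant-colony hybrid concrete, the following sketch shows a plain tabular Q-learning loop on a toy environment; the states, actions, and rewards are abstract placeholders rather than the authors' jamming scenario, and the ant-colony component is only indicated in a comment.

```python
# Minimal tabular Q-learning loop (toy environment, not the Ant-QL jamming setup).
import numpy as np

n_states, n_actions = 8, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.2
rng = np.random.default_rng(0)

def step(state, action):
    """Toy environment: random next state, reward 1 for a single 'effective' action."""
    return rng.integers(n_states), 1.0 if action == state % n_actions else 0.0

state = 0
for _ in range(5000):
    # Epsilon-greedy action selection (an ant-colony pheromone term could bias this choice).
    action = rng.integers(n_actions) if rng.random() < epsilon else int(Q[state].argmax())
    next_state, reward = step(state, action)
    # Q-learning temporal-difference update.
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(np.round(Q, 2))
```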
With the rapid expansion of the global drone market, several types of small drones have raised critical public safety concerns. Therefore, effective countermeasures are needed to detect small drones in a timely manner. Even though deep learning-based techniques have achieved remarkable breakthroughs in target detection, they perform poorly for small-drone detection. To address these challenges, Yuan et al., in contribution (11), introduce the IRSDD-YOLOv5 model based on YOLOv5. For feature extraction, the authors developed an infrared small-target detection module (IRSTDM) appropriate for the infrared recognition of small drones, enabling IRSDD-YOLOv5 to detect small targets effectively. For target prediction, the authors utilized a small-target prediction head (PH) to process the prior information output by the IRSTDM. They used mainstream instance segmentation algorithms (BoxInst, BlendMask, etc.) for the training and performance evaluation of the proposed method.
Unmanned Aerial System (UAS) photogrammetry has been extensively utilized for spatial data acquisition. At present, Post-Processed Kinematic (PPK) and Real-Time Kinematic (RTK) corrections are the key techniques for the precise positioning used to directly compute camera station coordinates from UAS imagery. These 3D camera coordinates are generally utilized as observations in Bundle Block Adjustment to carry out Global Navigation Satellite System-Assisted Aerial Triangulation (GNSS-AAT). Pitombeira et al., in contribution (12), highlighted the impact of on-site camera calibration with a sub-block of images on the accuracy of spatial data gathered by PPK-assisted UAS photogrammetry. The findings indicate that the proposed method considerably enhanced the vertical accuracy of the extracted spatial data.
In disaster situations, reliable communication becomes a critical challenge for supporting first aid services. To overcome this problem, UAVs have been used to help hospitals deliver medical care in hard-to-access regions. Even though UAVs can substantially enhance medical services, their limited resources raise security concerns. Thus, introducing efficient and secure communication protocols for UAV-enabled Internet of Medical Things (IoMT) networks is a key research domain for timely and reliable medical services. Shaikh et al., in contribution (13), propose a novel Stackelberg security-aided game theory algorithm, called Stackelberg with ad hoc on-demand distance vector (SBAODV), for detecting black hole attacks and recovering the affected data in UAV-enabled IoMT networks. The proposed method uses the strong Stackelberg equilibrium (SSE) to develop strategies that protect the system from attacks. The authors compared the performance of the proposed routing scheme with existing schemes, and the results showed better performance in terms of end-to-end delay, detection ratio, throughput, network load, and packet delivery ratio (PDR).
Recently, deep learning analysis based on UAV imagery has shown significant promise for the accurate classification of plant species in different communities over large regions. Detka et al., in contribution (14), discussed the use of deep learning models and drone-based imagery to map species in oak woodland, coastal sage scrub, and complex chaparral communities. The authors analyzed the effectiveness of support vector machine, random forest, and CNN classifiers, combined with object-based image analysis (OBIA), for mapping diverse shrublands. The proposed CNN + OBIA technique outperformed the support vector machine and random forest approaches in accurately identifying communities, vegetation gaps, and shrub and tree species.
In contribution (15), Gong et al. propose a visual navigation technique based on deep learning and binocular vision to tackle the navigation challenges of autonomous aerial refueling docking for UAVs. To meet the requirements of high frame rate and high accuracy in the aerial refueling process, the study presents a lightweight single-stage drogue detection model, which significantly enhances inference speed on binocular images through depthwise separable convolutions and image alignment. It also enhances the scale adaptation performance and feature extraction capability of the proposed model through an adaptive spatial feature fusion (ASFF) method and an efficient channel attention (ECA) mechanism. Moreover, the study introduces a novel approach to estimating the pose of the drogue via spatial geometric modeling with optical markers and further enhances robustness and accuracy via visual reprojection.
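As background for the binocular ranging step, the sketch below applies the standard pinhole stereo relation depth = f·B/disparity; it is a generic illustration, not the authors' drogue-detection pipeline, and the focal length, baseline, and disparities are assumed values.

```python
# Generic binocular depth recovery from pixel disparity (assumed camera parameters).
import numpy as np

focal_length_px = 1200.0   # camera focal length in pixels (assumed)
baseline_m = 0.30          # distance between the two cameras (assumed)

def depth_from_disparity(disparity_px: np.ndarray) -> np.ndarray:
    """Convert pixel disparities between matched left/right detections into depth [m]."""
    return focal_length_px * baseline_m / disparity_px

print(depth_from_disparity(np.array([120.0, 60.0, 30.0])))  # [3.0, 6.0, 12.0] m
```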
Existing approaches to crack detection in bridges using UAVs depend on obtaining local images of concrete bridge components, which makes image acquisition very challenging. To overcome this challenge, Xu et al., in contribution (16), propose a crack detection approach that uses large-scene images obtained by a UAV. The proposed scheme first designs a UAV-assisted acquisition plan for large-scene images of bridges and then applies a background denoising algorithm to process the acquired images. The findings of this study demonstrate enhanced detection efficiency.
In UAV photogrammetry, removing low vegetation is a challenging task. On the basis of the topographic features captured by point-cloud data at various scales, Fan et al., in contribution (17), propose a vegetation-filtering approach based on multiscale elevation-variation coefficients. First, virtual grids are constructed at various scales and average elevations are obtained. Second, the elevation change between any two scales is measured in each virtual grid to capture the difference in surface parameters. Third, the elevation variation coefficient corresponding to the largest degree of elevation variation is computed, and threshold segmentation is applied. Finally, the optimal measurement neighborhood radius for the elevation variation coefficients is analyzed. The findings of this study show that the proposed approach can precisely remove vegetation points and retain ground points in low- and dense-vegetation regions.
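The following simplified sketch conveys the multiscale idea: grid the point cloud at two cell sizes, compare elevations between scales, and flag cells whose variation exceeds a threshold as vegetation. The grid sizes, threshold, and synthetic point cloud are assumptions, and the code is a strong simplification of the authors' full framework.

```python
# Simplified two-scale elevation-variation filter on a synthetic point cloud.
import numpy as np

def grid_mean_elevation(points: np.ndarray, cell: float) -> dict:
    """Map each (ix, iy) virtual-grid cell to its mean elevation."""
    sums, counts = {}, {}
    keys = np.floor(points[:, :2] / cell).astype(int)
    for (ix, iy), z in zip(map(tuple, keys), points[:, 2]):
        sums[(ix, iy)] = sums.get((ix, iy), 0.0) + z
        counts[(ix, iy)] = counts.get((ix, iy), 0) + 1
    return {k: sums[k] / counts[k] for k in sums}

def vegetation_mask(points: np.ndarray, small=0.5, large=2.0, threshold=0.3) -> np.ndarray:
    """Flag points in cells where the fine-scale surface rises well above the coarse scale."""
    fine = grid_mean_elevation(points, small)
    coarse = grid_mean_elevation(points, large)
    mask = np.zeros(len(points), dtype=bool)
    for i, (x, y, z) in enumerate(points):
        z_fine = fine[(int(np.floor(x / small)), int(np.floor(y / small)))]
        z_coarse = coarse[(int(np.floor(x / large)), int(np.floor(y / large)))]
        mask[i] = (z_fine - z_coarse) > threshold  # elevation variation between scales
    return mask

pts = np.random.default_rng(1).uniform([0, 0, 10], [20, 20, 10.2], size=(1000, 3))
pts[:50, 2] += 1.0   # simulate low vegetation sticking up ~1 m above the ground
print(vegetation_mask(pts).sum(), "points flagged as vegetation")
```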
The ultra-dense deployment (UDD) of small cells in fifth-generation (5G) networks to improve data rates and capacity is appealing, but because user densities often change, static deployment can cause user dissatisfaction and a waste of resources and capital. To overcome these challenges, Rahman et al., in contribution (18), propose the use of Aerial Base Stations (ABSs), wherein small cells are mounted on UAVs deployed according to user locations. In addition, based on the prevailing user densities, this work focuses on the optimal repositioning of ABSs to improve the signal-to-interference ratio and total received power.
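A toy sketch of the repositioning idea: evaluate candidate ABS positions against current user locations with a simple free-space path-loss model and keep the position that maximizes total received power. The propagation model, geometry, and parameters are illustrative assumptions, not the authors' optimization scheme.

```python
# Toy ABS placement: pick the candidate position with the highest total received power.
import numpy as np

rng = np.random.default_rng(2)
users = rng.uniform(0, 1000, size=(50, 2))        # user positions in a 1 km x 1 km area [m]
candidates = rng.uniform(0, 1000, size=(20, 2))   # candidate ABS ground projections [m]
abs_height, tx_power_dbm, freq_hz = 100.0, 30.0, 2.4e9
c = 3e8

def received_power_dbm(abs_xy):
    """Received power per user under free-space path loss (FSPL)."""
    d = np.sqrt(((users - abs_xy) ** 2).sum(axis=1) + abs_height ** 2)  # 3D distance [m]
    fspl_db = 20 * np.log10(d) + 20 * np.log10(freq_hz) + 20 * np.log10(4 * np.pi / c)
    return tx_power_dbm - fspl_db

totals = [received_power_dbm(xy).sum() for xy in candidates]
best = candidates[int(np.argmax(totals))]
print("best ABS position [m]:", np.round(best, 1))
```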
UAV-SfM photogrammetry has been extensively used in the geoscience and remote sensing communities. Researchers have made efforts to optimize UAV-SfM for terrain modeling using analyses based on the standard deviation (STD), mean error (ME), and root-mean-square error (RMSE). However, the spatial structure of UAV-SfM errors still requires investigation. Therefore, Dai et al., in contribution (19), developed several UAV-SfM photogrammetric cases and studied the impact of ground control points (GCPs) and image collection strategies on terrain modeling. These findings help clarify how GCPs and image collection strategies affect the spatial structure of errors and, consequently, the accuracy and precision of terrain modeling.
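For reference, the accuracy metrics mentioned above (ME, STD, RMSE) can be computed from check-point errors as in the short sketch below; the error values are synthetic placeholders.

```python
# ME, STD, and RMSE from elevation errors at check points (synthetic values).
import numpy as np

# Differences between modeled and reference elevations at check points [m].
errors = np.array([0.04, -0.02, 0.07, 0.01, -0.05, 0.03, -0.01, 0.06])

me = errors.mean()                      # mean error (bias)
std = errors.std(ddof=1)                # standard deviation (precision)
rmse = np.sqrt((errors ** 2).mean())    # root-mean-square error (overall accuracy)
print(f"ME = {me:.3f} m, STD = {std:.3f} m, RMSE = {rmse:.3f} m")
```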
Recently, learning-based object detection has gained significant attention in remote sensing. To accurately detect small objects in complex scenarios, Ren et al., in contribution (20), propose HAM-Transformer Net, a novel hybrid backbone combining CNN and transformer components. First, HAM-Transformer Net extracts feature-map information using convolutional local feature extraction blocks. Second, multi-scale location coding is used to extract hierarchical information. Finally, features from various receptive fields are extracted and fused adaptively using an adaptive multi-scale transformer block. The proposed method proves effective and outperforms existing object detection algorithms.
Gordon et al., in contribution (21), propose a novel sensor design to detect and characterize regions of air turbulence. As a component of the ground truth method, it combines a Radar Acoustic Sounding System (RASS) and a Schlieren imager to create dual-modality images of air movement within a given volume. The ultrasound-modulated Schlieren imager comprises a camera, a light block, a parabolic mirror, and a strobed point light source. The study demonstrates the fine-scale projection of acoustic-pulse-modulated air turbulence, enabling higher-resolution Schlieren images for the identification of turbulence mechanisms.
Recently, UAVs have shown great potential as remote sensing tools, providing cost-effectiveness and convenience while enabling multi-view observations. In contribution (22), Cao et al. derive a polygonal flight path along the hemisphere to obtain bidirectional reflectance distribution function (BRDF) measurements at multiple azimuth and zenith angles. By applying the principle of aerial triangulation from photogrammetry, the observation angles were restored accurately and the “sun-object-view” geometric structure was established. Moreover, the authors compared three BRDF models (RTLSR, RPV, and M_Walthall) and evaluated their performance at the UAV scale in terms of reflectance errors, shape structure, and fitting quality. The findings indicated that the RPV model performed best, followed by M_Walthall, while the performance of RTLSR was comparatively poor.
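As a hedged illustration of fitting an empirical BRDF model to multi-angle UAV reflectance samples, the sketch below performs a least-squares fit of a classic Walthall-type parameterization; the model form and the sample data are assumptions for illustration only and do not reproduce the authors' RTLSR/RPV/M_Walthall comparison.

```python
# Least-squares fit of a Walthall-type empirical BRDF model to synthetic multi-angle samples.
import numpy as np

def walthall_design(view_zenith_rad, rel_azimuth_rad):
    """Design matrix for rho = a*theta_v^2 + b*theta_v*cos(phi) + c."""
    return np.column_stack([
        view_zenith_rad ** 2,
        view_zenith_rad * np.cos(rel_azimuth_rad),
        np.ones_like(view_zenith_rad),
    ])

# Synthetic multi-angle observations (view zenith, relative azimuth, reflectance).
theta_v = np.radians([0, 15, 30, 45, 15, 30, 45, 60])
phi = np.radians([0, 0, 0, 0, 180, 180, 180, 90])
rho = np.array([0.20, 0.21, 0.23, 0.26, 0.19, 0.18, 0.17, 0.22])

coeffs, *_ = np.linalg.lstsq(walthall_design(theta_v, phi), rho, rcond=None)
predicted = walthall_design(theta_v, phi) @ coeffs
rmse = np.sqrt(((predicted - rho) ** 2).mean())   # fitting quality of the model
print("fitted [a, b, c]:", np.round(coeffs, 4), " RMSE:", round(rmse, 4))
```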

Author Contributions

Conceptualization, S.A.H.M. and M.A.K.; Writing—original draft preparation, S.A.H.M. and Y.Y.G.; Writing—review and editing, S.A.H.M., M.A.K. and Y.Y.G.; Supervision, S.A.H.M.; project administration, S.A.H.M.; funding acquisition, S.A.H.M. and M.A.K. All authors have read and agreed to the published version of the manuscript.

Acknowledgments

We would like to thank all the authors and reviewers for their tremendous efforts, as well as the Editor-in-Chief and staff of Remote Sensing, Sensors, Drones, Machine Learning and Knowledge Extraction, Inventions, and AI for their collective support of this Topic Issue.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mohsan, S.A.H.; Khan, M.A.; Noor, F.; Ullah, I.; Alsharif, M.H. Towards the unmanned aerial vehicles (UAVs): A comprehensive review. Drones 2022, 6, 147.
  2. Liu, Q.; Liu, R.; Wang, Z.; Thompson, J.S. UAV swarm-enabled localization in isolated region: A rigidity-constrained deployment perspective. IEEE Wirel. Commun. Lett. 2021, 10, 2032–2036.
  3. Tziavou, O.; Pytharouli, S.; Souter, J. Unmanned Aerial Vehicle (UAV) based mapping in engineering geological surveys: Considerations for optimum results. Eng. Geol. 2018, 232, 12–21.
  4. Alzahrani, B.; Oubbati, O.S.; Barnawi, A.; Atiquzzaman, M.; Alghazzawi, D. UAV assistance paradigm: State-of-the-art in applications and challenges. J. Netw. Comput. Appl. 2020, 166, 102706.
  5. Pajares, G. Overview and current status of remote sensing applications based on unmanned aerial vehicles (UAVs). Photogramm. Eng. Remote Sens. 2015, 81, 281–330.
  6. Mozaffari, M.; Saad, W.; Bennis, M.; Nam, Y.-H.; Debbah, M. A tutorial on UAVs for wireless networks: Applications, challenges, and open problems. IEEE Commun. Surv. Tutor. 2019, 21, 2334–2360.
  7. Saeed, A.S.; Younes, A.B.; Islam, S.; Dias, J.; Seneviratne, L.; Cai, G. A review on the platform design, dynamic modeling and control of hybrid UAVs. In Proceedings of the 2015 International Conference on Unmanned Aircraft Systems (ICUAS), Denver, CO, USA, 9–12 June 2015; pp. 806–815.
  8. Kanellakis, C.; Nikolakopoulos, G. Survey on computer vision for UAVs: Current developments and trends. J. Intell. Robot. Syst. 2017, 87, 141–168.
  9. Yasin, J.N.; Mohamed, S.A.S.; Haghbayan, M.-H.; Heikkonen, J.; Tenhunen, H.; Plosila, J. Unmanned aerial vehicles (UAVs): Collision avoidance systems and approaches. IEEE Access 2020, 8, 105139–105155.
  10. Goh, G.D.; Agarwala, S.; Goh, G.L.; Dikshit, V.; Sing, S.L.; Yeong, W.Y. Additive manufacturing in unmanned aerial vehicles (UAVs): Challenges and potential. Aerosp. Sci. Technol. 2017, 63, 140–151.
  11. Osco, L.P.; Junior, J.M.; Ramos, A.P.; de Castro Jorge, L.A.; Fatholahi, S.N.; de Andrade Silva, J.; Matsubara, E.T.; Pistori, H.; Gonçalves, W.N.; Li, J. A review on deep learning in UAV remote sensing. Int. J. Appl. Earth Obs. Geoinf. 2021, 102, 102456.
  12. Yao, H.; Qin, R.; Chen, X. Unmanned aerial vehicle for remote sensing applications—A review. Remote Sens. 2019, 11, 1443.
Figure 1. Application scenarios of UAVs.
Figure 2. UAV-based remote sensing applications.