2D and 3D Mapping with UAV Data

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (12 February 2022) | Viewed by 24679

Special Issue Editors


Dr. Sudhagar Nagarajan
Chief Guest Editor
College of Engineering and Computer Science, Florida Atlantic University, 777 Glades Road, EG-36, Boca Raton, FL 33431, USA
Interests: low cost mobile mapping techniques; sensor development for change detection applications; UAV based LiDAR; feature based registration techniques

Dr. Scott Peterson
Co-Guest Editor
Civil and Geomatics Engineering, California State University, Fresno, 2320 E. San Ramon Ave, MS/EE 94, Fresno, CA 93740-8030, USA
Interests: LiDAR; mobile LiDAR; UAV mapping; 3D modeling

Dr. Jinha Jung
Co-Guest Editor
Lyles School of Civil Engineering, Purdue University, 550 Stadium Mall Dr, West Lafayette, IN 47907, USA
Interests: UAV; geospatial data science; high performance computing; high throughput phenotyping; data fusion

Special Issue Information

Dear Colleagues,

Recently, the Unmanned Aerial Vehicle (UAV) has come to be considered an efficient and economical alternative to conventional manned aerial platforms, and it is rapidly reshaping existing paradigms in mission planning, cost, and operating procedures. As UAV applications expand into various science and engineering disciplines that require quality mosaics, surface models, and 3D models of the sites of interest, UAVs are increasingly equipped with a variety of onboard sensors, including multi-/hyperspectral cameras, LiDAR, and SAR, among others, enhancing their monitoring, feature tracking, and object detection capabilities.

Recent trends in UAV mapping also reveal several research challenges:

  • Dependency on ready-made hardware for UAV data acquisition
  • Dependency on UAV data processing software
  • A lack of sensor calibration and accuracy assessment tools
  • Gaps between UAV-related research and UAV practice across disciplines
  • A lack of data processing algorithms

To this end, this Special Issue aims (1) to develop a better understanding of UAV research and practice, especially where the quality of results plays a role in the study; (2) to contribute UAV applications relevant to various disciplines, including agriculture, forestry, environmental science, earth science, and natural disasters such as earthquakes and wildfires; (3) to advance UAV data processing algorithms; and (4) to provide guidance on UAV data QA/QC.

The broad topics of this Special Issue of Remote Sensing include, but are not limited to:

  • Accuracy assessment of UAV data and derived products
  • Calibration of UAV sensors
  • Geometric and radiometric properties of UAV data
  • New sensor development for UAV applications
  • UAV sensor integration
  • Sensor and data fusion for performance improvement
  • UAV best practices
  • UAV data processing algorithms
  • UAV applications: agriculture, forestry, environment, earth science, natural disasters, etc.
  • Standardization and data QA/QC

Dr. Sudhagar Nagarajan
Dr. Scott Peterson
Dr. Jinha Jung
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • UAV
  • calibration
  • data processing
  • accuracy
  • best practice
  • sensor fusion
  • data fusion
  • QA/QC

Published Papers (6 papers)


Research


19 pages, 11444 KiB  
Article
A New Methodology for Bridge Inspections in Linear Infrastructures from Optical Images and HD Videos Obtained by UAV
by Miguel Cano, José Luis Pastor, Roberto Tomás, Adrián Riquelme and José Luis Asensio
Remote Sens. 2022, 14(5), 1244; https://doi.org/10.3390/rs14051244 - 03 Mar 2022
Cited by 9 | Viewed by 2744
Abstract
Many bridges and other structures worldwide present a lack of maintenance or a need for rehabilitation. The first step in the rehabilitation process is to perform a bridge inspection to determine the bridge's current state. Routine bridge inspections are usually based only on visual recognition. In this paper, a methodology for bridge inspections in communication routes using images acquired by unmanned aerial vehicle (UAV) flights is proposed. This provides access to the upper parts of the structure safely and without traffic disruptions. A standardized and systematized novel image acquisition protocol is then applied for data acquisition. Afterwards, the images are studied by civil engineers for damage identification and description, and specific structural inspection forms are completed using the acquired information. Recommendations about the need for new and more detailed inspections should be included at this stage when needed. The suggested methodology was tested on two railway bridges in France. Image acquisition of these structures was performed using a UAV, enabling an expert assessment of the damage level. The main advantage of this method is that it makes it possible to safely and accurately identify diverse damage in structures without the need for a specialised engineer to travel to the site. Moreover, the videos can be watched by as many engineers as needed without further site visits. The main objective of this work is to describe the systematized methodology for the development of bridge inspection tasks using a UAV system. Under this proposal, the in situ inspection by a specialised engineer is replaced by images and videos obtained from a UAV flight by a trained flight operator. To this aim, a systematized image/video acquisition method is defined for the study of the morphology and typology of the structural elements of the inspected bridges. Additionally, specific inspection forms are proposed for every type of structural element. The recorded information allows structural engineers to perform a post-analysis of the damage affecting the bridges and to evaluate the subsequent recommendations. Full article
(This article belongs to the Special Issue 2D and 3D Mapping with UAV Data)

28 pages, 12243 KiB  
Article
UAV Oblique Imagery with an Adaptive Micro-Terrain Model for Estimation of Leaf Area Index and Height of Maize Canopy from 3D Point Clouds
by Minhui Li, Redmond R. Shamshiri, Michael Schirrmann, Cornelia Weltzien, Sanaz Shafian and Morten Stigaard Laursen
Remote Sens. 2022, 14(3), 585; https://doi.org/10.3390/rs14030585 - 26 Jan 2022
Cited by 13 | Viewed by 5126
Abstract
Leaf area index (LAI) and height are two critical measures of maize crops that are used in ecophysiological and morphological studies for growth evaluation, health assessment, and yield prediction. However, mapping the spatial and temporal variability of LAI in fields using handheld tools and traditional techniques is a tedious and costly pointwise operation that provides information only within limited areas. The objective of this study was to evaluate the reliability of mapping the LAI and height of a maize canopy from 3D point clouds generated from UAV oblique imagery with an adaptive micro-terrain model. The experiment was carried out in a field planted with three cultivars having different canopy shapes and four replicates covering a total area of 48 × 36 m. RGB images in nadir and oblique view were acquired from the maize field at six different time slots during the growing season. Images were processed by Agisoft Metashape to generate 3D point clouds using the structure-from-motion method and were later processed in MATLAB to obtain a clean canopy structure, including height and density. The LAI was estimated by a multivariate linear regression model using crop canopy descriptors derived from the 3D point cloud, which account for height and leaf density distribution along the canopy height. A simulation analysis based on a sine function effectively demonstrated the micro-terrain model from point clouds. For the ground truth data, a randomized block design with 24 sample areas was used to manually measure LAI, height, N-pen data, and yield during the growing season. It was found that the canopy height data from the 3D point clouds have a relatively strong correlation (R² = 0.89, 0.86, and 0.78) with the manual measurements for the three cultivars when using CH90. The proposed methodology allows cost-effective, high-resolution in-field LAI mapping through UAV 3D data as an alternative to conventional LAI assessments, even in inaccessible regions. Full article
(This article belongs to the Special Issue 2D and 3D Mapping with UAV Data)
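
For readers unfamiliar with point-cloud canopy descriptors, a minimal sketch of the general approach the abstract describes is given below: grid a terrain-normalized point cloud, extract per-cell height percentiles (e.g., CH90) and point density, and regress LAI against those descriptors with ordinary least squares. This is an illustration on synthetic data, not the authors' implementation; the function names, grid size, and feature set are assumptions.

```python
# Sketch (assumed, not the authors' code): canopy descriptors from a
# terrain-normalized point cloud, plus a multivariate linear LAI model.
import numpy as np

def canopy_descriptors(points, cell=0.5):
    """Per-grid-cell descriptors from an (N, 3) array of terrain-normalized
    points (x, y, height): CH90, median height, and point density."""
    ix = np.floor(points[:, 0] / cell).astype(int)
    iy = np.floor(points[:, 1] / cell).astype(int)
    feats = []
    for cx, cy in set(zip(ix, iy)):
        h = points[(ix == cx) & (iy == cy), 2]
        feats.append([np.percentile(h, 90),   # CH90 descriptor
                      np.percentile(h, 50),   # median canopy height
                      len(h) / cell ** 2])    # points per square meter
    return np.asarray(feats)

def fit_lai_model(X, lai):
    """Ordinary least squares: LAI ~ descriptors + intercept."""
    A = np.hstack([X, np.ones((len(X), 1))])
    coef, *_ = np.linalg.lstsq(A, lai, rcond=None)
    return coef

# Demo on synthetic points; real inputs would come from the SfM point cloud.
rng = np.random.default_rng(0)
pts = rng.uniform([0.0, 0.0, 0.0], [10.0, 10.0, 2.5], size=(5000, 3))
X = canopy_descriptors(pts)
lai = 0.8 * X[:, 0] + 0.1 * X[:, 2] + rng.normal(0.0, 0.05, len(X))
print(fit_lai_model(X, lai))
```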

20 pages, 10605 KiB  
Article
Flood Detection Using Real-Time Image Segmentation from Unmanned Aerial Vehicles on Edge-Computing Platform
by Daniel Hernández, José M. Cecilia, Juan-Carlos Cano and Carlos T. Calafate
Remote Sens. 2022, 14(1), 223; https://doi.org/10.3390/rs14010223 - 04 Jan 2022
Cited by 25 | Viewed by 5357
Abstract
With the proliferation of unmanned aerial vehicles (UAVs) in different contexts and application areas, efforts are being made to endow these devices with enough intelligence so as to allow them to perform complex tasks with full autonomy. In particular, covering scenarios such as disaster areas may become particularly difficult due to infrastructure shortage in some areas, often impeding a cloud-based analysis of the data in near-real time. Enabling AI techniques at the edge is therefore fundamental so that UAVs themselves can both capture and process information to gain an understanding of their context, and determine the appropriate course of action in an independent manner. Towards this goal, in this paper, we take determined steps towards UAV autonomy in a disaster scenario such as a flood. In particular, we use a dataset of UAV images relative to different floods taking place in Spain, and then use an AI-based approach that relies on three widely used deep neural networks (DNNs) for semantic segmentation of images, to automatically determine the regions more affected by rains (flooded areas). The targeted algorithms are optimized for GPU-based edge computing platforms, so that the classification can be carried out on the UAVs themselves, and only the algorithm output is uploaded to the cloud for real-time tracking of the flooded areas. This way, we are able to reduce dependency on infrastructure, and to reduce network resource consumption, making the overall process greener and more robust to connection disruptions. Experimental results using different types of hardware and different architectures show that it is feasible to perform advanced real-time processing of UAV images using sophisticated DNN-based solutions. Full article
(This article belongs to the Special Issue 2D and 3D Mapping with UAV Data)
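
The sketch below illustrates the edge-first pattern the abstract describes: run a segmentation DNN on the device and upload only a compact summary (the flooded-area fraction) rather than raw imagery. DeepLabV3 from torchvision stands in for the paper's three networks; the two-class head (background/flooded), the weight file name, and the argmax decoding are assumptions, not the authors' pipeline.

```python
# Sketch (assumed stand-in): on-device flood segmentation with a compact
# summary as the only output that leaves the UAV.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

device = "cuda" if torch.cuda.is_available() else "cpu"
# Two classes assumed: 0 = background, 1 = flooded.
model = deeplabv3_resnet50(weights=None, weights_backbone=None,
                           num_classes=2).to(device).eval()
# model.load_state_dict(torch.load("flood_seg.pt"))  # hypothetical weights

@torch.no_grad()
def flooded_fraction(frame):
    """frame: (3, H, W) float tensor in [0, 1] from the UAV camera."""
    logits = model(frame.unsqueeze(0).to(device))["out"]  # (1, 2, H, W)
    mask = logits.argmax(dim=1)                           # per-pixel class
    return (mask == 1).float().mean().item()              # flooded share

frame = torch.rand(3, 512, 512)                       # placeholder image
print({"flooded_fraction": flooded_fraction(frame)})  # summary to upload
```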

23 pages, 18083 KiB  
Article
Onboard Real-Time Dense Reconstruction in Large Terrain Scene Using Embedded UAV Platform
by Zhengchao Lai, Fei Liu, Shangwei Guo, Xiantong Meng, Shaokun Han and Wenhao Li
Remote Sens. 2021, 13(14), 2778; https://doi.org/10.3390/rs13142778 - 14 Jul 2021
Cited by 8 | Viewed by 3316
Abstract
Using unmanned aerial vehicles (UAVs) for remote sensing has the advantages of high flexibility, convenient operation, low cost, and a wide application range. It fills the need for the rapid acquisition of high-resolution aerial images in modern photogrammetry applications. Due to insufficient parallaxes and the computation-intensive process, dense real-time reconstruction of large terrain scenes is a considerable challenge. To address these problems, we propose a novel SLAM-based MVS (Multi-View Stereo) approach, which can incrementally generate a dense 3D (three-dimensional) model of the terrain using the continuous image stream captured during flight. The pipeline of the proposed methodology starts with pose estimation based on a SLAM algorithm. The tracked frames are then selected by a novel scene-adaptive keyframe selection method to construct a sliding-window frame set. This is followed by depth estimation using a flexible search domain approach, which can improve accuracy without increasing iteration time or memory consumption. The whole system proposed in this study was implemented on an embedded GPU onboard a UAV platform. We propose a highly parallel and memory-efficient CUDA-based depth computing architecture, enabling the system to achieve good real-time performance. The evaluation experiments were carried out in both simulated and real-world environments. A virtual large terrain scene was built using the Gazebo simulator. A simulated UAV equipped with an RGB-D camera was used to obtain synthetic evaluation datasets, which were divided by flight altitude (800, 1000, and 1200 m) and terrain height difference (100, 200, and 300 m). In addition, the system was extensively tested on various types of real scenes. A comparison with commercial 3D reconstruction software was carried out to evaluate the precision on real-world data. On the synthetic datasets, over 93.462% of the estimates had an absolute error distance of less than 0.9%. In the real-world dataset captured at an 800 m flight height, more than 81.27% of our estimated point cloud differed by less than 5 m from the results of Photoscan. All evaluation experiments show that the proposed approach outperforms state-of-the-art methods in terms of accuracy and efficiency. Full article
(This article belongs to the Special Issue 2D and 3D Mapping with UAV Data)
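
The paper's scene-adaptive keyframe selection is not reproduced here, but the underlying idea can be sketched in a simplified, generic form: accept a new keyframe once the SLAM-estimated baseline subtends enough parallax at the current scene depth for reliable multi-view depth estimation. The parallax threshold and function below are illustrative assumptions, not the authors' criterion.

```python
# Sketch (simplified stand-in): parallax-driven keyframe test for MVS.
import numpy as np

def is_new_keyframe(t_last, t_curr, median_depth, min_parallax_deg=2.0):
    """t_last, t_curr: camera positions (3,) from the SLAM tracker;
    median_depth: median depth of the tracked scene points, in meters."""
    baseline = np.linalg.norm(t_curr - t_last)
    # Angle subtended by the baseline at the median scene depth.
    parallax = np.degrees(2.0 * np.arctan2(baseline / 2.0, median_depth))
    return parallax >= min_parallax_deg

# At 800 m altitude, a ~28 m baseline yields roughly 2 degrees of parallax.
print(is_new_keyframe(np.zeros(3), np.array([28.0, 0.0, 0.0]), 800.0))
```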

27 pages, 11031 KiB  
Article
Rethinking the Fourier-Mellin Transform: Multiple Depths in the Camera’s View
by Qingwen Xu, Haofei Kuang, Laurent Kneip and Sören Schwertfeger
Remote Sens. 2021, 13(5), 1000; https://doi.org/10.3390/rs13051000 - 05 Mar 2021
Cited by 5 | Viewed by 2850
Abstract
Remote sensing and robotics often rely on visual odometry (VO) for localization. Many standard approaches to VO use feature detection. However, these methods meet challenges if the environments are feature-deprived or highly repetitive. The Fourier-Mellin Transform (FMT) is an alternative VO approach that has been shown to offer superior performance in these scenarios and is often used in remote sensing. One limitation of FMT is that it requires an environment that is equidistant to the camera, i.e., single-depth. To extend the applications of FMT to multi-depth environments, this paper presents the extended Fourier-Mellin Transform (eFMT), which maintains the advantages of FMT with respect to feature-deprived scenarios. To show the robustness and accuracy of eFMT, we implement an eFMT-based visual odometry framework and test it on toy examples and a large-scale drone dataset. All these experiments are performed on data collected in challenging scenarios, such as trees, wooden boards, and featureless roofs. The results show that eFMT performs better than FMT in multi-depth settings. Moreover, eFMT also outperforms state-of-the-art VO algorithms, such as ORB-SLAM3, SVO, and DSO, in our experiments. Full article
(This article belongs to the Special Issue 2D and 3D Mapping with UAV Data)
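
As background, the classic single-depth Fourier-Mellin registration that eFMT extends can be sketched as follows: the magnitude spectrum is translation-invariant, log-polar resampling turns rotation and scale into shifts recoverable by phase correlation, and a second phase correlation then recovers translation. This OpenCV-based sketch uses illustrative parameter choices; sign conventions depend on the log-polar implementation, and it is not the authors' eFMT code.

```python
# Sketch (classic FMT, not eFMT): recover rotation, scale, and shift
# between two same-size grayscale float32 images of a single-depth scene.
import cv2
import numpy as np

def fmt_register(img0, img1):
    h, w = img0.shape
    win = cv2.createHanningWindow((w, h), cv2.CV_32F)
    # Magnitude spectra are invariant to translation.
    mag = [np.abs(np.fft.fftshift(np.fft.fft2(i * win))).astype(np.float32)
           for i in (img0, img1)]
    m = w / np.log(w / 2.0)  # heuristic log-polar scale factor
    lp = [cv2.logPolar(x, (w / 2.0, h / 2.0), m, cv2.INTER_LINEAR)
          for x in mag]
    # Rotation/scale appear as a shift between the log-polar spectra.
    (dx, dy), _ = cv2.phaseCorrelate(lp[0], lp[1])
    angle = -dy * 360.0 / h          # rows span 360 degrees of angle
    scale = np.exp(dx / m)           # columns span log radius
    # Undo rotation and scale, then phase-correlate for translation.
    R = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0 / scale)
    img1w = cv2.warpAffine(img1, R, (w, h))
    shift, _ = cv2.phaseCorrelate(img0 * win, img1w * win)
    return angle, scale, shift
```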

Other


10 pages, 2714 KiB  
Technical Note
3D Characterization of Sorghum Panicles Using a 3D Point Cloud Derived from UAV Imagery
by Anjin Chang, Jinha Jung, Junho Yeom and Juan Landivar
Remote Sens. 2021, 13(2), 282; https://doi.org/10.3390/rs13020282 - 15 Jan 2021
Cited by 8 | Viewed by 2743
Abstract
Sorghum is one of the most important crops worldwide. An accurate and efficient high-throughput phenotyping method for individual sorghum panicles is needed for assessing genetic diversity, variety selection, and yield estimation. High-resolution imagery acquired using an unmanned aerial vehicle (UAV) provides a high-density 3D point cloud with color information. In this study, we developed a method for detecting and characterizing individual sorghum panicles using a 3D point cloud derived from UAV images. The RGB color ratio was used to filter out non-panicle points and select potential panicle points. Individual sorghum panicles were detected using the concept of tree identification. Panicle length and width were determined from the potential panicle points. We propose cylinder fitting and disk stacking to estimate individual panicle volumes, which are directly related to yield. The results showed that the correlation coefficients of the average panicle length and width between the UAV-based and ground measurements were 0.61 and 0.83, respectively. The UAV-derived panicle length and diameter were more highly correlated with panicle weight than the ground measurements. Cylinder fitting and disk stacking yielded R² values of 0.77 and 0.67 against the actual panicle weight, respectively. The experimental results show that a 3D point cloud derived from UAV imagery can provide reliable and consistent individual sorghum panicle parameters that correlate well with ground measurements of panicle weight. Full article
(This article belongs to the Special Issue 2D and 3D Mapping with UAV Data)
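
The disk-stacking idea in the abstract lends itself to a compact sketch: slice the points of one detected panicle into horizontal bins, estimate a radius per slice, and accumulate the disk volumes. The bin count and the centroid-distance radius estimate below are illustrative assumptions, not the authors' implementation.

```python
# Sketch (assumed): disk-stacking volume estimate for one panicle.
import numpy as np

def panicle_volume_disks(points, n_slices=20):
    """points: (N, 3) array (x, y, z in meters) for one detected panicle."""
    z = points[:, 2]
    edges = np.linspace(z.min(), z.max(), n_slices + 1)
    thickness = edges[1] - edges[0]
    volume = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        s = points[(z >= lo) & (z < hi)]
        if len(s) < 3:                      # too few points for a radius
            continue
        center = s[:, :2].mean(axis=0)
        # Slice radius: mean horizontal distance to the slice centroid.
        r = np.linalg.norm(s[:, :2] - center, axis=1).mean()
        volume += np.pi * r ** 2 * thickness  # one disk per slice
    return volume

rng = np.random.default_rng(1)
demo = rng.normal([0.0, 0.0, 0.1], [0.02, 0.02, 0.05], size=(500, 3))
print(f"estimated panicle volume: {panicle_volume_disks(demo):.6f} m^3")
```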
