Article

Increasing the Geometrical and Interpretation Quality of Unmanned Aerial Vehicle Photogrammetry Products using Super-Resolution Algorithms

by
Pawel Burdziakowski
Department of Geodesy, Faculty of Civil and Environmental Engineering, Gdansk University of Technology, Narutowicza 11-12, 80-233 Gdansk, Poland
Remote Sens. 2020, 12(5), 810; https://doi.org/10.3390/rs12050810
Revised: 19 February 2020 / Accepted: 28 February 2020 / Published: 3 March 2020
(This article belongs to the Special Issue Photogrammetry and Image Analysis in Remote Sensing)

Abstract
Unmanned aerial vehicles (UAVs) have now become very popular in photogrammetric and remote-sensing applications. Every day, these vehicles are used in new applications, new terrains, and new tasks, facing new problems. One of these problems concerns the flight altitude and the resulting ground sample distance in a specific area, especially within cities and industrial and construction areas. The problem is that a safe flight altitude and the camera parameters do not meet the required ground sampling distance or the geometrical and texture quality. In cases where the flight level cannot be reduced and there is no technical ability to change the UAV camera or lens, the author proposes the use of a super-resolution algorithm to enhance the images acquired by UAVs and, consequently, to increase the geometrical and interpretation quality of the final photogrammetric product. The main study objective was to utilize super-resolution (SR) algorithms to improve the geometric and interpretation quality of the final photogrammetric product and to assess their impact on the accuracy of the photogrammetric processing and on the traditional digital photogrammetry workflow. The research concept assumes a comparative analysis of photogrammetric products obtained from data collected with small, commercial UAVs and products obtained from the same data additionally processed by the super-resolution algorithm. As the study concludes, the photogrammetric products created as a result of the algorithms’ operation on high-altitude images show a quality comparable to the reference products from low altitudes and, in some cases, even improve on it.


1. Introduction

Unmanned aerial vehicle applications and new methods in photogrammetry [1] and remote sensing have increased rapidly in recent years [2,3,4,5]. Currently, unmanned aerial vehicles (UAVs) are used by a wide community and for cases and applications that could not be performed in the past. Small UAVs, as a photogrammetric measurement tool, provide flexibility and reliability, are safe and easy to use, can be deployed in minutes, and initial measurements can be delivered in the field. User demands are growing both for the quality of the modeling and for the final resolution. UAVs are used in many areas where visual-spectrum or multi-spectral images, digital surface models (DSMs), and orthoimagery are derived, encompassing the following fields: geodesy [6,7,8,9,10,11,12,13,14,15], agriculture [14,16,17,18,19], forestry [20,21,22], archaeology and architecture [10,23,24,25,26,27], environment and technical infrastructure monitoring [6,7,11,17,18,19,21,28,29,30,31,32,33], and emergency management and traffic monitoring [34,35,36]. Numerous UAV applications have been realized by the author, during which some problems have been encountered [37]. One of these problems is connected with the flight level (altitude, above ground level (AGL)) and the resulting ground sample distance (GSD) in specific areas, especially within cities and industrial and construction areas. The terms flight level, altitude, and above ground level are used interchangeably in this paper and denote the height measured with respect to the ground surface at the take-off position.
The problem is that a safe flight level and the camera parameters do not meet the required ground sampling distance (GSD) (geometrical quality) and the texture quality needed for interpretation (interpretation quality). The safe flight level within an industrial environment can be limited by high cranes, high power lines (which are even more dangerous for UAVs), high buildings [20,36], etc. If the required GSD demands a flight level lower than the highest objects in the area, then the required quality cannot be met. A flight level must allow for a safe separation between the objects and the UAV. This separation (defined as the vertical distance between the highest point of an object and the UAV) varies with the object type and depends on many factors, such as altimeter accuracy, global navigation satellite system (GNSS) accuracy, local law regulations, and the operator’s confidence in the known height of the object.
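The conflict between the demanded GSD and a safe flight level follows directly from the imaging geometry: for a nadir image, the GSD scales linearly with flight altitude. A minimal sketch of this relation is given below; the sensor parameters are illustrative assumptions, not the exact specifications of the UAVs used later in the study.

```python
# Minimal sketch of the GSD-altitude trade-off; the pixel pitch and focal
# length below are illustrative assumptions, not study parameters.

def ground_sample_distance(altitude_m, focal_length_mm, pixel_pitch_um):
    """GSD (cm/pixel) of a nadir image taken at the given altitude."""
    # pixel size on the sensor scaled by the altitude/focal-length ratio
    return (pixel_pitch_um * 1e-6) * altitude_m / (focal_length_mm * 1e-3) * 100.0

# Hypothetical small-UAV camera: 2.4 um pixels behind an 8.8 mm lens.
for altitude in (55.0, 110.0):
    gsd = ground_sample_distance(altitude, focal_length_mm=8.8, pixel_pitch_um=2.4)
    print(f"{altitude:5.0f} m AGL -> {gsd:.2f} cm/pixel")
```

Halving the altitude halves the GSD, which is why a demanding GSD can force the flight level below the highest obstacles in the area.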
In the cases where the flight level cannot be reduced and there is no technical ability to change the UAV camera or lens, the author proposes the use of super-resolution (SR) algorithms for increasing the geometrical and interpretation quality of the final photogrammetric product.
In recent years, many techniques to improve the visual quality of images and videos have been developed. The main reason this kind of technology is being developed is to satisfy user demands for high-quality multimedia content. People expect crystal-clear and visually pleasing pictures displayed on new, high-quality viewing equipment, such as LCD (liquid-crystal display) and LED (light-emitting diode) screens. Moreover, high resolution and image quality are commercially attractive, and producers of display equipment keep increasing screen dimensions (given as the diagonal of the screen) and resolution. High-resolution content is not always available for reasons that include down-sampling due to bandwidth limitations, different types of noise, different compression techniques, different video standards, etc. [38].
The group of techniques for estimating a high-resolution (HR) image from its low-resolution (LR) counterpart is called super-resolution (SR) [38,39]. Super-resolution methods try to upscale and upsize images without sacrificing their detail and visual appearance. Consequently, the main goal of super resolution is to find the values of the missing pixels in the high-resolution image. In the context of the presented research, the idea is to find the values of the pixels in images taken from a higher altitude and make them similar to those taken from a lower altitude. Recent works have considered super-resolution methods in remote sensing [40,41,42,43,44,45], satellite imagery [41,42,43,44,45,46,47,48,49,50,51], medicine [52,53,54,55], and microscopy [56,57,58,59].
Generally, super-resolution methods are classified into two classes [60]: multiple-image super-resolution methods [61,62,63] and single-image super-resolution methods [39,64,65,66,67]. The first group enhances the spatial resolution of images based on multiple images presenting the same scene. Multiple-image super resolution is based on information fusion, which benefits from the differences (mainly subpixel shifts) between the low-resolution images [61]. From the practical point of view of photogrammetry and remote sensing, multiple images are not always available, and if they are, there are slight changes between them. For example, earth observation missions allow for acquisition of the same scene on a regular basis, but the scenes still change too fast in comparison to the revisit time. These changes include shadows, cloud and snow coverage, moving objects, and seasonal changes in vegetation [65].
The second group, single-image super-resolution algorithms, is more practical for UAV photogrammetry and remote-sensing applications. An interpolation method (such as bicubic interpolation) is the simplest approach to the single-image super-resolution problem; however, the results of such methods are far from ideal. Developments in the field of machine learning, and especially example-based learning techniques, use parameters learned during training to enhance the results when evaluating unknown data. Deep-learning techniques, particularly convolutional neural networks (CNNs), are actually able to enhance the data in an information-theoretical sense [65], and for that reason, those techniques were used in the presented experiment.
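As a baseline, the bicubic interpolation mentioned above can be reproduced with any standard image library; a minimal sketch using OpenCV is given below (the file name is hypothetical). The interpolation estimates the missing pixels from their local neighbourhood only and adds no new scene information, which is exactly the limitation that the learning-based methods address.

```python
import cv2

# Minimal sketch of the bicubic-interpolation baseline mentioned above.
# "image_110m.jpg" is a hypothetical file name, not one from the study's dataset.
lr = cv2.imread("image_110m.jpg")

# Upscale by a factor of 2 in each dimension; bicubic interpolation estimates
# each missing pixel from a 4x4 neighbourhood of the original pixels.
hr_bicubic = cv2.resize(lr, None, fx=2.0, fy=2.0, interpolation=cv2.INTER_CUBIC)

cv2.imwrite("image_110m_bicubic_x2.jpg", hr_bicubic)
```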

2. Materials and Methods

This section describes the methodology used in the research. The main objective was to study super-resolution (SR) algorithms as a means to improve the geometric and interpretation quality of the final photogrammetric product, as well as their impact on the accuracy of the photogrammetric processing and on the traditional digital photogrammetry workflow. The research concept assumes a comparative analysis of photogrammetric products obtained on the basis of data collected from small, commercial UAVs and products obtained from the same data but additionally processed by the super-resolution algorithm.
In accordance with the main research intention, the super-resolution algorithm was applied to the image data collected at an altitude of 110 m before the standard processing routine (Figure 1). The data collected at the lower altitude are used as reference data for comparison with the enhanced ones. In other words, the intention was to prove that data collected at a higher altitude can be enhanced using super-resolution algorithms so that, after standard photogrammetric processing, they are comparable to data collected at the lower altitude. In practical cases, where a flight at a lower altitude cannot be performed and the planned data quality cannot be reached otherwise, such algorithmic enhancement can be the only way, and the simplest one, to reach the planned data quality.
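Schematically, the enhanced workflow of Figure 1b inserts a single extra step between acquisition and the standard processing chain: every image is passed through the SR model, and the enhanced copies are the ones loaded into the photogrammetric software. A minimal sketch is given below; the directory names are hypothetical, and the `super_resolve` placeholder stands in for the SRGAN model used in the study (replaced here by bicubic upscaling only so that the sketch runs).

```python
from pathlib import Path

import cv2

def super_resolve(image):
    """Placeholder for the chosen SR model (an SRGAN in this study); bicubic
    2x upscaling is used here only so that the sketch runs end to end."""
    return cv2.resize(image, None, fx=2.0, fy=2.0, interpolation=cv2.INTER_CUBIC)

# Hypothetical directory names; every 110-m image is enhanced before it is
# loaded into the photogrammetric software, and the rest of the workflow
# (alignment, GCP marking, dense cloud, DEM, orthophoto) stays unchanged.
src_dir, dst_dir = Path("flight_110m_raw"), Path("flight_110m_sr")
dst_dir.mkdir(exist_ok=True)

for img_path in sorted(src_dir.glob("*.JPG")):
    enhanced = super_resolve(cv2.imread(str(img_path)))
    cv2.imwrite(str(dst_dir / img_path.name), enhanced)
```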

2.1. Photogrammetric Process

The photogrammetry technique encompasses methods of image measurement and interpretation used to derive the shape and location of an object from photographs. Photogrammetric methods can be applied in any case where the object can be photographically recorded. The purpose of a photogrammetric measurement is a three-dimensional reconstruction in digital or graphical form. The measurements (images), together with a mathematical transformation between the image space and the object space, provide the means to model the object.
Currently, the digital photogrammetry process (Figure 1a) consists of data acquisition, processing, and exporting. All steps within this process are based on raw (unmodified) images. Moreover, photogrammetric software providers underline the fact that the images loaded into the processing software must not be modified [68,69]. Any modification can change the internal or external orientation parameters, and the modeling software will not be able to correctly conduct the reconstruction process. Here, a new method, a photogrammetric process enhanced by super-resolution, was designed and tested in typical, state-of-the-art photogrammetric software [70]. In this research, Agisoft Metashape v. 1.6.1 was used.
The main purpose of the augmentation is to increase the resolution of the images obtained from the flight at the higher altitude, which results in a higher geometric and interpretation quality of the final products. In effect, this approach is close to reducing the flight level of the unmanned aerial vehicle or, in other words, reducing the effective distance to the object. Moreover, the research verified whether, despite the guidelines of the software developers, it is possible to modify the resolution of the images and to process them in commercial software without sacrificing the reconstruction possibilities.

2.2. UAV Flights

The commercial drone market is now dominated by the Chinese company DJI (Da Jiang Innovations Science & Technology Co., Ltd., Shenzhen, China) [71,72], and its products are used in almost every company that uses UAVs for measurements. For this research, the author used the currently most popular representatives of the commercial UAV market: the DJI Phantom 4 Pro (PH) and the DJI Mavic Pro (MP). Both represent the same class of small, commercial UAVs. Apart from different flight capabilities, the two UAVs also carry different cameras; in this class, only 13-Mpix (megapixel) and 20-Mpix sensors are available. Higher-resolution cameras require a different, larger aerial platform, typically mounted on custom constructions, and, due to their minor share of the market, were not used for this research.
In the presented research, a single-grid flight path (Figure 2) was used for both UAVs, with the parameters presented in Table 1. A single-grid flight path is usually used in cases where the main interest is 2D map outputs (orthophotomaps, digital surface models, or digital terrain models) of relatively flat surfaces, such as fields. Typically, the effective area that can be covered during one flight of a small commercial UAV at an altitude of 100 m using a single-grid path is limited to around 600 × 600 m, with a calculated flight time of around 19 minutes. The maximum flight time is calculated for no-wind conditions; consequently, the real coverage in windy conditions will be reduced.
During the study, 4 different UAV flights were conducted. Detailed data of the flight patterns (Figure 2) are presented in Table 1, where: D_Y—width of the area of interest, D_X—length of the area of interest, B_y—distance between two strips, B_x—distance between the perspective centers of two consecutive photos, L_W—image footprint across the flight line, L_H—image footprint along the flight line.
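For reference, the spacing values B_x and B_y in Table 1 follow from the image footprints and the chosen forward and side overlaps. The sketch below reproduces the Mavic Pro 55-m row under the assumption of roughly 70% overlap in both directions; the overlap values are inferred here for illustration and are not reported flight-plan settings.

```python
def strip_spacing(L_W, L_H, forward_overlap, side_overlap):
    """Exposure and strip spacing for a single-grid flight.

    L_W, L_H                      -- image footprint across/along the flight line (m)
    forward_overlap, side_overlap -- overlap fractions (e.g. 0.70 for 70%)
    """
    B_x = L_H * (1.0 - forward_overlap)  # spacing between consecutive exposures
    B_y = L_W * (1.0 - side_overlap)     # spacing between adjacent strips
    return B_x, B_y

# Mavic Pro 55 m footprint from Table 1, with assumed 70%/70% overlap.
print(strip_spacing(L_W=89.3, L_H=67.0, forward_overlap=0.70, side_overlap=0.70))
# -> roughly (20.1, 26.8), consistent with B_x = 20 m and B_y = 27 m in Table 1
```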

2.3. Super Resolution

As mentioned, super-resolution methods try to upscale and upsize images without sacrificing their detail and visual appearance. This super-resolution property, embedded in the classic digital photogrammetry process, should theoretically increase the accuracy of the location of ground-control points and of the photogrammetric reconstruction itself. Based on recent super-resolution methods, review papers [73,74,75,76], and the latest available implementations [60,64,77,78,79,80,81,82,83,84,85], the method based on the super-resolution generative adversarial network (SRGAN) [39] was chosen. The method belongs to the group of single-image super-resolution (SISR) methods.
The SRGAN network uses high-resolution images and their low-resolution equivalents in the training process. The low-resolution images are obtained by applying a Gaussian filter and a down-sampling factor. In the training process, the generator network outputs high-resolution images; the generator employs a deep residual network (ResNet) [86]. The result is evaluated by the critic network with a perceptual loss computed on high-level feature maps of the VGG (Visual Geometry Group) network [87] and then optimized. VGG is a pretrained convolutional neural network model trained on images from the ImageNet database [88]. The VGG network is combined here with a discriminator that encourages solutions that are perceptually hard to distinguish from the high-resolution (reference) images.
The aim of optimizing supervised SR algorithms is usually to minimize the mean squared error (MSE) between the recovered high-resolution image and the reference image. MSE minimization also maximizes the peak signal-to-noise ratio (PSNR), which is commonly used to evaluate and compare super-resolution algorithms [87]. However, using MSE as the critique, especially for real-world images, may give the generator an insufficient training signal [39]. Therefore, the SRGAN method replaces the MSE-based content loss with a loss calculated on feature maps of the VGG network [89]. Since small shifts in image content lead to very poor MSE and PSNR values even when the content is perceptually identical [90], moving to the VGG feature space makes the loss more invariant to changes in pixel space. With this approach, the generator can learn to create solutions that are highly similar to real images, which was the main reason for choosing the SRGAN method to enhance the photogrammetric images.
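A minimal sketch of such a feature-space content loss is given below, written in PyTorch purely for illustration (the study itself used a TensorLayer implementation, described next). The choice of the VGG19 feature layer and the assumption that the inputs are already normalized to the VGG statistics are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Minimal sketch of a VGG-based content loss in the spirit of SRGAN [39].
# The cut after layer 35 (end of the fifth convolutional block) is an
# illustrative assumption; inputs are assumed to be normalized already.
vgg = models.vgg19(pretrained=True).features[:36].eval()
for p in vgg.parameters():
    p.requires_grad = False

def content_loss(sr_batch, hr_batch):
    """MSE between VGG feature maps instead of between raw pixels."""
    return F.mse_loss(vgg(sr_batch), vgg(hr_batch))

def psnr(sr_batch, hr_batch, max_val=1.0):
    """Pixel-space PSNR, still reported for comparison with other SR methods."""
    mse = F.mse_loss(sr_batch, hr_batch)
    return 10.0 * torch.log10(max_val ** 2 / mse)
```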
The photogrammetric image enhancement was realized using the TensorLayer framework [60]. First, the pretrained 19-layer VGG model was downloaded, and the high-resolution images for training the generator network were obtained from [91]. This dataset was designed for the New Trends in Image Restoration and Enhancement (NTIRE) challenge on image super-resolution. Based on the implementation [60] and the trained networks, the final image enhancement was conducted.
The UAV images taken at the higher flight level (110 m) were enhanced using the SRGAN method with a 2× scaling factor. The lower flight level (55 m) provided the reference images for further modeling and model comparison. Additionally, the original images were resized using bicubic interpolation with a scaling factor of 2× (the output pixel value is a weighted average of the pixels in the nearest 4-by-4 neighborhood). The assessment of image quality and the evaluation of the SRGAN method in comparison with bicubic interpolation were conducted on the basis of three different image quality metrics (IQM): the blind referenceless image spatial quality evaluator (BRISQUE) [92], the natural image quality evaluator (NIQE) [93], and the perception-based image quality evaluator (PIQE) [94]. The chosen no-reference image quality scores generally return a non-negative scalar.
The BRISQUE score is in the range from 0 to 100, with lower values reflecting better perceptual image quality. The NIQE model is trained on a database of pristine images and can measure the quality of images with arbitrary distortions. NIQE is opinion-unaware and does not use subjective quality scores; the tradeoff is that the NIQE score of an image might not correlate with human perception of quality as well as the BRISQUE score does. Lower NIQE values reflect better perceptual quality with respect to the input model. The PIQE score is a no-reference image quality score that is inversely correlated with the perceptual quality of an image: a low score indicates high perceptual quality, and a high score indicates low perceptual quality. The image scores are presented in Table 2.
The PIQE quality scale of an image is assigned based on its PIQE score, as given in Table 3. The quality scale and the respective score ranges were assigned through experimental analysis of the dataset in the LIVE database [95].

2.4. Georeferencing Accuracy

Ground-control points (GCPs) can be defined as features with known real-world coordinates that can be clearly identified in an image. These points are required during the photogrammetric process to achieve results of the highest quality, both in terms of geometrical precision and georeferencing accuracy; therefore, it is very important to locate and mark them correctly during photo processing.
Figure 3 and Figure 4 present a visual evaluation of two different types of GCPs marked in the area. Typically, GCPs must be precisely identified at the resolution of the raw image and marked in the processing software. GCPs can be marked in the terrain, as in this case, with white spray paint (GCPs no. 1–4) or with some kind of pattern, e.g., a chessboard pattern (GCP no. 5).
The GCP positions were measured using a GNSS RTK (real-time kinematic) geodetic receiver (Trimble R8, Trimble Inc., Sunnyvale, CA, USA), whose maximum available precision is 8 mm horizontally and 15 mm vertically. After the initial photo alignment, the GCPs were marked in the software, and then camera alignment optimization was performed. Figure 5 presents the GCP locations and error estimates after camera alignment optimization.
Table 4 and Table 5 present detailed values of the GCP error estimates and the calculated percentage changes between the error values for the traditional photogrammetric process and those for the enhanced photogrammetric process. The percentage change is calculated in accordance with the following formula:
DIFF_(SR to T) = 100 × (E_SR − E_T) / E_T
where E_T—calculated error value for the traditional process, and E_SR—calculated error value for the enhanced process.
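As a quick check, the formula can be applied directly to the total-error values reported in Table 4 for the Mavic Pro (traditional 110-m run versus the SR-enhanced run); a minimal sketch follows.

```python
def percentage_change(e_sr, e_t):
    """DIFF_(SR to T): percentage change of the enhanced-process error
    relative to the traditional-process error."""
    return 100.0 * (e_sr - e_t) / e_t

# Mavic Pro total error (Table 4): SR-enhanced run vs. traditional 110-m run.
print(round(percentage_change(49.2856, 40.4744)))  # -> 22, matching "SR to 110 m"
```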

3. Results

The images collected at 110 m were enhanced using the described super-resolution algorithm. As a result, new double-sized images were processed (Table 6). The autocalibration algorithm in the processing software used the enhanced image resolution and the sensor size (in accordance with the provided camera model) to calculate the pixel size. The double-sized images therefore resulted in halved calculated pixel sizes for both UAVs.
The processing report summary and calculated percentage differences for all cases for the photo alignment process are presented in Table 7 and Table 8.
The analysis of the results presented in Table 7 shows that the enhanced process resulted in a 50% decrease in the ground resolution value (Mavic Pro—SR to 110 m), which is the expected result, as the pixel size of the image was reduced by 50%. The number of tie points increased by 21% for the Mavic Pro (Table 7) and 40% for the Phantom 4 Pro (Table 8). The reprojection error increased by 23% in the case of the Mavic Pro but decreased by 15% in the case of the Phantom 4 Pro. The reprojection error is the distance, given in pixels, between a point detected in an image and the corresponding world point projected into the same image. This error depends on the quality of the camera calibration as well as on the quality of the tie points detected in the images. In the context of the images taken by the Phantom 4 Pro camera, it can be assumed that their overall initial quality is better than that of the Mavic Pro images (Table 2) (excellent on the PIQE scale). The number of tie points detected on the Phantom 4 Pro super-resolution images is 19% higher than that detected on the Mavic Pro super-resolution images; therefore, the super-resolution enhancement provides a reprojection error reduction for the Phantom 4 Pro. The SR algorithm increases the number of tie points on the processed images for both UAVs, by 21% and 40%, respectively. In extreme cases, such as the forest mapping presented in [96], the increased number of tie points may raise their number above the minimum required to carry out the modeling process successfully.
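For illustration, the reprojection error defined above can be sketched with a standard projection routine: known 3D coordinates are projected through the estimated camera model and compared, in pixels, with the detected image positions. The camera parameters and point values below are placeholders, not values from this study.

```python
import numpy as np
import cv2

# Placeholder interior orientation (focal length and principal point in pixels).
camera_matrix = np.array([[3666.0, 0.0, 2736.0],
                          [0.0, 3666.0, 1824.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)            # assume lens distortion already removed
rvec = np.zeros(3)                   # nadir-looking camera, no rotation
tvec = np.array([0.0, 0.0, 110.0])   # camera 110 m above the points

# Hypothetical 3D tie points and their detected image positions.
object_points = np.array([[1.0, 2.0, 0.0], [-3.0, 4.0, 0.0]])
detected_px = np.array([[2769.0, 1891.0], [2637.0, 1958.0]])

projected_px, _ = cv2.projectPoints(object_points, rvec, tvec, camera_matrix, dist_coeffs)
errors = np.linalg.norm(projected_px.reshape(-1, 2) - detected_px, axis=1)
print("RMS reprojection error [pix]:", np.sqrt(np.mean(errors ** 2)))
```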
The processing report summary and calculated percentage differences for all cases for the final photogrammetric products are presented in Table 9 and Table 10.
The SR processing significantly increased the number of points in the point clouds (Table 9 and Table 10); the number is even higher than in the reference model. The number of points in the dense point clouds increased by up to 337% (Mavic Pro—SR to 110 m). A visual examination of the dense point clouds for the same example object is presented in Figure 6. This significant improvement carries over to the subsequent modeling quality: it can be expected that the DEM and 3D models will be generated with higher resolution and a higher level of detail.
Figure 7 presents the results of the point cloud comparisons. The point clouds generated from the 55-m images were compared to the point clouds generated from the SR images using a cloud-to-cloud (C2C) comparison technique [97]. Based on the C2C distance visualization and histograms, it can be shown that the SR point clouds are similar to the reference ones and that super-resolution enhancement can be applied to traditional photogrammetric software with no additional modification required. The differences visible in this comparison appear particularly in the areas where the lower-altitude (reference) products suffered from modeling problems or where some objects (trees or bushes) were missing from the model. Some objects, such as trees or bushes, were modeled from the 110-m images, while from the 55-m images, theoretically with a smaller GSD, they were not reconstructed at all (Figure 8). This situation occurs in both cases, for the images taken by the Mavic Pro and the Phantom 4 Pro.
A more detailed analysis of the problem reveals that, in the areas where bushes and trees are present, the algorithm does not find any tie points. This can result from many factors, above all the small but noticeable dynamics of the objects (bushes and trees moving in the wind). Particularly noticeable in the low-altitude flights are the small dimensions of the object elements (branches), many times smaller than the GSD. During the flight at the higher altitude, the size of the GSD allows for some generalization of the object, especially of such small elements as branches and tiny leaves, and the dynamics of the object are also not so noticeable.
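For reference, the cloud-to-cloud comparison used for Figure 7 reduces to computing, for every point of the compared cloud, the distance to its nearest neighbour in the reference cloud. A minimal sketch is given below; the randomly generated clouds are placeholders for the exported 55-m (reference) and SR dense point clouds.

```python
import numpy as np
from scipy.spatial import cKDTree

# Placeholder clouds standing in for the exported dense point clouds.
rng = np.random.default_rng(0)
reference_cloud = rng.uniform(0.0, 100.0, size=(100_000, 3))   # 55-m reference
compared_cloud = rng.uniform(0.0, 100.0, size=(150_000, 3))    # SR-enhanced

# Basic C2C: nearest-neighbour distance from each compared point to the reference.
tree = cKDTree(reference_cloud)
c2c_distances, _ = tree.query(compared_cloud)

print("mean C2C distance:", c2c_distances.mean())
print("95th percentile:", np.percentile(c2c_distances, 95))
```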
As far as the orthophotomap and DEM are concerned, close-ups of parts of the products are shown in Figure 9. The presented part of the products was selected deliberately: the image shows buildings that exceed the average ground level, while the flight altitude was determined relative to the mean terrain level. In the pictures presented in Figure 9, an artifact that formed on one of the structures is visible. There was a series of such artifacts in the entire project, and they occurred at the edges of the higher structures.
The artifacts presented above appear on the products created from the images taken at the 55-m flight level. They do not occur on the products created from the 110-m flight, and the same fragments are reproduced much better on the products resulting from the images processed by the super-resolution algorithms. The analysis found that the artifacts are caused by too low an overlap, which resulted from the low UAV flight (55 m), but only for the higher structures exceeding the average ground level. The building fragment visible in Figure 9a was imaged at the 55-m flight height on only four photographs (Figure 10a), while points on the ground surface were visible on more than six photographs (Figure 10b).
The situation described above proves that, in some circumstances, it might not be practical to reduce the flight level to achieve the desired GSD. Naturally, in a comparable scenario, it is possible to increase the overlap, which will eliminate similar errors, but this will extend the flight time and reduce the area that can be covered during one flight. The orthophotomap and DEM resulting from the super-resolution algorithms do not have similar artifacts, and their GSD is comparable even though the flight was conducted at twice the altitude.

4. Conclusions

This study presents the results of increasing the resolution of photogrammetric aerial images and the effect of super-resolution algorithms on the resulting products. As the study has shown, the photogrammetric products created as a result of the algorithms’ operation show a quality very similar to the reference products and, in some cases, even improve on it. Super-resolution enhancement can be applied within traditional photogrammetric software, with no additional software modification required.
The typical photogrammetric image-processing procedure can be extended by the application of super-resolution algorithms in cases where a reduction in the UAV flight altitude is not feasible, which provides the capability to preserve the desired quality of the processing. As has been calculated, the ground resolution in cm/pixel can remain unaffected for images acquired at double the height if super-resolution algorithms are applied.
Super-resolution algorithms in the photogrammetric process significantly increased the number of points in the point cloud. The number of points increased by up to 337% compared to the point clouds generated from the images without super-resolution, resulting in a significant increase in output quality. These algorithms also do not affect the process itself or the standard functionality of the image-processing software, which correctly solves the tasks and models the objects from photos treated with the super-resolution technique.
The number of tie points increased (by 21% for the Mavic Pro and 40% for the Phantom 4 Pro). In extreme cases, this may raise the number of tie points above the minimum required to carry out the modeling process successfully.
The precision of the position of the ground-control points was reduced, and the better the original quality of the UAV images, the larger this reduction. In the case of the Phantom 4 Pro, this reduction was 310%; however, when expressed as the total error in millimeters, it corresponds to an increase from only 4.7 mm to 19.46 mm.
The study also observed that the products obtained from the lower-altitude images, in addition to the obviously smaller GSD, may suffer from processing issues, such as artifacts, deficiencies in the structure, and missing scene elements. The use of super-resolution algorithms together with a flight at a somewhat higher altitude remarkably eliminated these shortcomings, and the resulting product was complete, without gaps or artifacts.
To summarize, it can be expected that super-resolution methods will be applied in modern photogrammetry and remote sensing with increasing frequency. Their potential enables their implementation on the basis of already known photogrammetric software and well-established workflows. As shown in this paper, photogrammetric products can be created from low-cost cameras installed on common UAVs while, at the same time, the geometrical and interpretation quality of the work is improved by super-resolution algorithms.

Author Contributions

Conceptualization, P.B.; methodology, P.B.; bibliography review, P.B.; acquisition, analysis, and interpretation of data, P.B.; writing—original draft preparation, P.B.; writing—review and editing, P.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Nex, F. UAV-g 2019: Unmanned Aerial Vehicles in Geomatics. Drones 2019, 3, 74. [Google Scholar] [CrossRef] [Green Version]
  2. Meng, L.; Peng, Z.; Zhou, J.; Zhang, J.; Lu, Z.; Baumann, A.; Du, Y. Real-Time Detection of Ground Objects Based on Unmanned Aerial Vehicle Remote Sensing with Deep Learning: Application in Excavator Detection for Pipeline Safety. Remote Sens. 2020, 12, 182. [Google Scholar] [CrossRef] [Green Version]
  3. Wierzbicki, D.; Kedzierski, M.; Fryskowska, A.; Jasinski, J. Quality Assessment of the Bidirectional Reflectance Distribution Function for NIR Imagery Sequences from UAV. Remote Sens. 2018, 10, 1348. [Google Scholar] [CrossRef] [Green Version]
  4. Kedzierski, M.; Wierzbicki, D.; Sekrecka, A.; Fryskowska, A.; Walczykowski, P.; Siewert, J. Influence of Lower Atmosphere on the Radiometric Quality of Unmanned Aerial Vehicle Imagery. Remote Sens. 2019, 11, 1214. [Google Scholar] [CrossRef] [Green Version]
  5. Wierzbicki, D.; Kedzierski, M.; Sekrecka, A. A Method for Dehazing Images Obtained from Low Altitudes during High-Pressure Fronts. Remote Sens. 2019, 12, 25. [Google Scholar] [CrossRef] [Green Version]
  6. Zanutta, A.; Lambertini, A.; Vittuari, L. UAV Photogrammetry and Ground Surveys as a Mapping Tool for Quickly Monitoring Shoreline and Beach Changes. J. Mar. Sci. Eng. 2020, 8, 52. [Google Scholar] [CrossRef] [Green Version]
  7. Šašak, J.; Gallay, M.; Kaňuk, J.; Hofierka, J.; Minár, J. Combined Use of Terrestrial Laser Scanning and UAV Photogrammetry in Mapping Alpine Terrain. Remote Sens. 2019, 11, 2154. [Google Scholar] [CrossRef] [Green Version]
  8. Zongjian, L.I.N.; et al. UAV for mapping—low altitude photogrammetric survey. Int. Arch. Photogramm. Remote Sens. 2008, 37, 1183–1186. [Google Scholar]
  9. Fan, X.; Nie, G.; Gao, N.; Deng, Y.; An, J.; Li, H. Building extraction from UAV remote sensing data based on photogrammetry method. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; pp. 3317–3320. [Google Scholar]
  10. Pei, H.; Wan, P.; Li, C.; Feng, H.; Yang, G.; Xu, B.; Niu, Q. Accuracy analysis of UAV remote sensing imagery mosaicking based on structure-from-motion. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; pp. 5904–5907. [Google Scholar]
  11. Gao, N.; Zhao, J.; Song, D.; Chu, J.; Cao, K.; Zha, X.; Du, X. High-Precision and Light-Small Oblique Photogrammetry UAV Landscape Restoration Monitoring. In Proceedings of the 2018 Ninth International Conference on Intelligent Control and Information Processing (ICICIP), Wanzhou, China, 9–11 November 2018; pp. 301–304. [Google Scholar]
  12. Samad, A.M.; Kamarulzaman, N.; Hamdani, M.A.; Mastor, T.A.; Hashim, K.A. The potential of Unmanned Aerial Vehicle (UAV) for civilian and mapping application. In Proceedings of the 2013 IEEE 3rd International Conference on System Engineering and Technology, Shah Alam, Malaysia, 19–20 August 2013; pp. 313–318. [Google Scholar]
  13. Ismael, R.Q.; Henari, Q.Z. Accuracy Assessment of UAV photogrammetry for Large Scale Topographic Mapping. In Proceedings of the 2019 International Engineering Conference (IEC), Erbil, KRG, Iraq, 23–24 April 2019; pp. 1–5. [Google Scholar]
  14. Tariq, A.; Osama, S.M.; Gillani, A. Development of a Low Cost and Light Weight UAV for Photogrammetry and Precision Land Mapping Using Aerial Imagery. In Proceedings of the 2016 International Conference on Frontiers of Information Technology (FIT), Islamabad, Pakistan, 19–21 December 2016; pp. 360–364. [Google Scholar]
  15. Segales, A.; Gregor, R.; Rodas, J.; Gregor, D.; Toledo, S. Implementation of a low cost UAV for photogrammetry measurement applications. In Proceedings of the 2016 International Conference on Unmanned Aircraft Systems (ICUAS), Arlington, VA, USA, 7–10 June 2016; pp. 926–932. [Google Scholar]
  16. Song, Y.; Wang, J.; Shan, B. An Effective Leaf Area Index Estimation Method for Wheat from UAV-Based Point Cloud Data. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 1801–1804. [Google Scholar]
  17. Mansoori, S.A.; Al-Ruzouq, R.; Dogom, D.A.; al Shamsi, M.; Mazzm, A.A.; Aburaed, N. Photogrammetric Techniques and UAV for Drainage Pattern and Overflow Assessment in Mountainous Terrains—Hatta/UAE. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 951–954. [Google Scholar]
  18. Fernández, T.; Pérez, J.L.; Cardenal, J.; Gómez, J.M.; Colomo, C.; Delgado, J. Analysis of Landslide Evolution Affecting Olive Groves Using UAV and Photogrammetric Techniques. Remote Sens. 2016, 8, 837. [Google Scholar] [CrossRef] [Green Version]
  19. Nevalainen, O.; Honkavaara, E.; Tuominen, S.; Viljanen, N.; Hakala, T.; Yu, X.; Hyyppä, J.; Saari, H.; Pölönen, I.; Imai, N.N.; et al. Individual Tree Detection and Classification with UAV-Based Photogrammetric Point Clouds and Hyperspectral Imaging. Remote Sens. 2017, 9, 185. [Google Scholar] [CrossRef] [Green Version]
  20. Feng, Q.; Liu, J.; Gong, J. UAV remote sensing for urban vegetation mapping using random forest and texture analysis. Remote Sens. 2015, 7, 1074–1094. [Google Scholar] [CrossRef] [Green Version]
  21. Zhang, Y.; Wu, H.; Yang, W. Forests Growth Monitoring Based on Tree Canopy 3D Reconstruction Using UAV Aerial Photogrammetry. Forests 2019, 10, 1052. [Google Scholar] [CrossRef] [Green Version]
  22. Torresan, C.; Berton, A.; Carotenuto, F.; di Gennaro, S.F.; Gioli, B.; Matese, A.; Miglietta, F.; Vagnoli, C.; Zaldei, A.; Wallace, L. Forestry applications of UAVs in Europe: A review. Int. J. Remote Sens. 2017, 38, 2427–2447. [Google Scholar] [CrossRef]
  23. Jizhou, W.; Zongjian, L.; Chengming, L. Reconstruction of buildings from a single UAV image. In Proceedings of the International Society for Photogrammetry and Remote Sensing Congress, Zurich, Switzerland, 6–12 September 2004; pp. 100–103. [Google Scholar]
  24. Saleri, R.; Cappellini, V.; Nony, N.; de Luca, L.; Pierrot-Deseilligny, M.; Bardiere, E.; Campi, M. UAV photogrammetry for archaeological survey: The Theaters area of Pompeii. In Proceedings of the 2013 Digital Heritage International Congress (DigitalHeritage), Marseille, France, 28 October–1 November 2013; Volume 2, pp. 497–502. [Google Scholar]
  25. Tariq, A.; Gillani, S.M.O.A.; Qureshi, H.K.; Haneef, I. Heritage preservation using aerial imagery from light weight low cost Unmanned Aerial Vehicle (UAV). In Proceedings of the 2017 International Conference on Communication Technologies (ComTech), Guayaquil, Ecuador, 6–9 November 2017; pp. 201–205. [Google Scholar]
  26. Hashim, K.A.; Ahmad, A.; Samad, A.M.; NizamTahar, K.; Udin, W.S. Integration of low altitude aerial terrestrial photogrammetry data in 3D heritage building modeling. In Proceedings of the 2012 IEEE Control and System Graduate Research Colloquium, Shah Alam, Selangor, Malaysia, 16–17 July 2012; pp. 225–230. [Google Scholar]
  27. Frankenberger, J.R.; Huang, C.; Nouwakpo, K. Low-Altitude Digital Photogrammetry Technique to Assess Ephemeral Gully Erosion. In Proceedings of the IGARSS 2008—2008 IEEE International Geoscience and Remote Sensing Symposium, Boston, MA, USA, 6–11 July 2008; Volume 4, pp. IV-117–IV-120. [Google Scholar]
  28. Mancini, F.; Castagnetti, C.; Rossi, P.; Dubbini, M.; Fazio, N.L.; Perrotti, M.; Lollino, P. An Integrated Procedure to Assess the Stability of Coastal Rocky Cliffs: From UAV Close-Range Photogrammetry to Geomechanical Finite Element Modeling. Remote Sens. 2017, 9, 1235. [Google Scholar] [CrossRef] [Green Version]
  29. Simpson, J.E.; Wooster, M.J.; Smith, T.E.L.; Trivedi, M.; Vernimmen, R.R.E.; Dedi, R.; Shakti, M.; Dinata, Y. Tropical Peatland Burn Depth and Combustion Heterogeneity Assessed Using UAV Photogrammetry and Airborne LiDAR. Remote Sens. 2016, 8, 1000. [Google Scholar] [CrossRef] [Green Version]
  30. Lu, C. Uav-Based photogrammetry for the application on geomorphic change- the case study of Penghu Kuibishan geopark, Taiwan. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 7840–7842. [Google Scholar]
  31. Özcan, O.; Akay, S.S. Modeling Morphodynamic Processes in Meandering Rivers with UAV-Based Measurements. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 7886–7889. [Google Scholar]
  32. Shi, Y.; Bai, M.; Li, Y.; Li, Y. Study on UAV Remote Sensing Technology in Irrigation District Informationization Construction and Application. In Proceedings of the 2018 10th International Conference on Measuring Technology and Mechatronics Automation (ICMTMA), Changsha, China, 10–11 February 2018; pp. 252–255. [Google Scholar]
  33. Zefri, Y.; Elkcttani, A.; Sebari, I.; Lamallam, S.A. Inspection of Photovoltaic Installations by Thermo-visual UAV Imagery Application Case: Morocco. In Proceedings of the 2017 International Renewable and Sustainable Energy Conference (IRSEC), Tangier, Morocco, 7–20 April 2017; pp. 1–6. [Google Scholar]
  34. Tan, Y.; Li, Y. UAV Photogrammetry-Based 3D Road Distress Detection. ISPRS Int. J. Geo. Inf. 2019, 8, 409. [Google Scholar] [CrossRef] [Green Version]
  35. Ro, K.; Oh, J.-S.; Dong, L. Lessons learned: Application of small uav for urban highway traffic monitoring. In Proceedings of the 45th AIAA aerospace sciences meeting and exhibit, Reno, NV, USA, 8–11 November 2007; p. 596. [Google Scholar]
  36. Semsch, E.; Jakob, M.; Pavlicek, D.; Pechoucek, M. Autonomous UAV surveillance in complex urban environments. In Proceedings of the 2009 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology, Washington, DC, USA, 15–18 September 2009; Volume 2, pp. 82–85. [Google Scholar]
  37. Burdziakowski, P. Uav in todays photogrammetry—Application areas and challenges. In Proceedings of the International Multidisciplinary Scientific GeoConference Surveying Geology and Mining Ecology Management, Albena, Bulgaria, 30 June–9 July 2018. [Google Scholar]
  38. Al-falluji, R.A.A.; Youssif, A.A.-H.; Guirguis, S.K. Single Image Super Resolution Algorithms: A Survey and Evaluation. Int. J. Adv. Res. Comput. Eng. Technol. 2017, 6, 1445–1451. [Google Scholar]
  39. Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), Honolulu, HI, USA, 21–26 July 2017; pp. 4681–4690. [Google Scholar]
  40. Dănișor, C.; Fornaro, G.; Pauciullo, A.; Reale, D.; Datcu, M. Super-Resolution Multi-Look Detection in SAR Tomography. Remote Sens. 2018, 10, 1894. [Google Scholar] [CrossRef] [Green Version]
  41. Jiang, K.; Wang, Z.; Yi, P.; Jiang, J.; Xiao, J.; Yao, Y. Deep Distillation Recursive Network for Remote Sensing Imagery Super-Resolution. Remote Sens. 2018, 10, 1700. [Google Scholar] [CrossRef] [Green Version]
  42. Kwan, C. Remote Sensing Performance Enhancement in Hyperspectral Images. Sensors 2018, 18, 3598. [Google Scholar] [CrossRef] [Green Version]
  43. Mei, S.; Yuan, X.; Ji, J.; Zhang, Y.; Wan, S.; Du, Q. Hyperspectral Image Spatial Super-Resolution via 3D Full Convolutional Neural Network. Remote Sens. 2017, 9, 1139. [Google Scholar] [CrossRef] [Green Version]
  44. Li, L.; Xu, T.; Chen, Y. Improved Urban Flooding Mapping from Remote Sensing Images Using Generalized Regression Neural Network-Based Super-Resolution Algorithm. Remote Sens. 2016, 8, 625. [Google Scholar] [CrossRef] [Green Version]
  45. Hu, J.; Zhao, M.; Li, Y. Hyperspectral Image Super-Resolution by Deep Spatial-Spectral Exploitation. Remote Sens. 2019, 11, 2933. [Google Scholar] [CrossRef] [Green Version]
  46. Demirel, H.; Anbarjafari, G. Discrete wavelet transform-based satellite image resolution enhancement. IEEE Trans. Geosci. Remote Sens. 2011, 49, 1997–2004. [Google Scholar] [CrossRef]
  47. Ducournau, A.; Fablet, R. Deep learning for ocean remote sensing: An application of convolutional neural networks for super-resolution on satellite-derived SST data. In Proceedings of the 2016 9th IAPR Workshop on Pattern Recogniton in Remote Sensing (PRRS), Cancun, Mexico, 4 December 2016; pp. 1–6. [Google Scholar]
  48. Tatem, A.J.; Lewis, H.G.; Atkinson, P.M.; Nixon, M.S. Super-resolution target identification from remotely sensed images using a Hopfield neural network. IEEE Trans. Geosci. Remote Sens. 2001, 39, 781–796. [Google Scholar] [CrossRef] [Green Version]
  49. Harikrishna, O.; Maheshwari, A. Satellite image resolution enhancement using DWT technique. Int. J. Soft Comput. Eng. IJSCE 2012, 2, 274–275. [Google Scholar]
  50. Li, F.; Jia, X.; Fraser, D. Universal HMT based super resolution for remote sensing images. In Proceedings of the 2008 15th IEEE International Conference on Image Processing, San Diego, CA, USA, 12—15 October 2008; pp. 333–336. [Google Scholar]
  51. Thornton, M.W.; Atkinson, P.M.; Holland, D.A. Sub-pixel mapping of rural land cover objects from fine spatial resolution satellite sensor imagery using super-resolution pixel-swapping. Int. J. Remote Sens. 2006, 27, 473–491. [Google Scholar] [CrossRef]
  52. Plenge, E.; Poot, D.H.J.; Bernsen, M.; Kotek, G.; Houston, G.; Wielopolski, P.; van der Weerd, L.; Niessen, W.J.; Meijering, E. Super-resolution methods in MRI: Can they improve the trade-off between resolution, signal-to-noise ratio, and acquisition time? Magn. Reson. Med. 2012, 68, 1983–1993. [Google Scholar] [CrossRef]
  53. Trinh, D.-H.; Luong, M.; Dibos, F.; Rocchisani, J.-M.; Pham, C.-D.; Nguyen, T.Q. Novel example-based method for super-resolution and denoising of medical images. IEEE Trans. Image Process. 2014, 23, 1882–1895. [Google Scholar] [CrossRef]
  54. O’Reilly, M.A.; Hynynen, K. A super-resolution ultrasound method for brain vascular mapping. Med. Phys. 2013, 40, 110701. [Google Scholar] [CrossRef] [Green Version]
  55. Greenspan, H. Super-resolution in medical imaging. Comput. J. 2008, 52, 43–63. [Google Scholar] [CrossRef]
  56. Huang, B.; Bates, M.; Zhuang, X. Super-resolution fluorescence microscopy. Annu. Rev. Biochem. 2009, 78, 993–1016. [Google Scholar] [CrossRef] [Green Version]
  57. Huang, B.; Wang, W.; Bates, M.; Zhuang, X. Three-dimensional super-resolution imaging by stochastic optical reconstruction microscopy. Science 2008, 319, 810–813. [Google Scholar] [CrossRef] [Green Version]
  58. Schermelleh, L.; Heintzmann, R.; Leonhardt, H. A guide to super-resolution fluorescence microscopy. J. Cell Biol. 2010, 190, 165–175. [Google Scholar] [CrossRef] [Green Version]
  59. Nieves, D.J.; Gaus, K.; Baker, M.A.B. DNA-Based Super-Resolution Microscopy: DNA-PAINT. Genes 2018, 9, 621. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  60. Dong, H.; Supratak, A.; Mai, L.; Liu, F.; Oehmichen, A.; Yu, S.; Guo, Y. TensorLayer: A Versatile Library for Efficient Deep Learning Development. ACM Multimedia 2017, 10, 1210–1217. [Google Scholar]
  61. Kawulok, M.; Benecki, P.; Piechaczek, S.; Hrynczenko, K.; Kostrzewa, D.; Nalepa, J. Deep Learning for Multiple-Image Super-Resolution. IEEE Geosci. Remote Sens. Lett. 2019, 1–5. [Google Scholar] [CrossRef]
  62. Yuan, Q.; Zhang, L.; Shen, H.; Li, P. Adaptive multiple-frame image super-resolution based on U-curve. IEEE Trans. Image Process. 2010, 19, 3157–3170. [Google Scholar] [CrossRef] [PubMed]
  63. Capel, D.; Zisserman, A. Super-resolution from multiple views using learnt image models. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001); Volume 2. [Google Scholar]
  64. Lim, B.; Son, S.; Kim, H.; Nah, S.; Lee, K.M. Enhanced Deep Residual Networks for Single Image Super-Resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  65. Liebel, L.; Körner, M. Single-image super resolution for multispectral remote sensing data using convolutional neural networks. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 883–890. [Google Scholar] [CrossRef]
  66. Shi, W.; Caballero, J.; Huszár, F.; Totz, J.; Aitken, A.P.; Bishop, R.; Rueckert, D.; Wang, Z. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proceedings of the IEEE conference on computer vision and pattern recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1874–1883. [Google Scholar]
  67. Zhang, Y.; Zheng, Z.; Luo, Y.; Zhang, Y.; Wu, J.; Peng, Z. A CNN-Based Subpixel Level DSM Generation Approach via Single Image Super-Resolution. Photogramm. Eng. Remote Sens. 2019, 85, 765–775. [Google Scholar] [CrossRef]
  68. Bentley Advancing Infrastructure. ContextCapture–Quick Guide for Photo Acquisition. Available online: https://www.inas.ro/ro/bentley-modelare-virtuala-realitate-contextcapture-center?file=files/docs/bentley/bentley-contextcapture-reguli.pdf (accessed on 12 December 2019).
  69. Agisoft LLC. Agisoft Metashape User Manual Professional Edition, Version 1.5. Available online: https://www.agisoft.com/pdf/metashape-pro_1_5_en.pdf (accessed on 13 February 2020).
  70. Agisoft LLC Agisoft. Available online: https://www.agisoft.com/ (accessed on 13 February 2020).
  71. Xu, F.; Muneyoshi, H. A Case Study of DJI, the Top Drone Maker in the World. Kindai Manag. Rev. 2017, 5, 97–104. [Google Scholar]
  72. Schroth, L. Drone Manufacturer Market Shares: DJI Leads the Way in the US. Available online: https://www.droneii.com/drone-manufacturer-market-shares-dji-leads-the-way-in-the-us (accessed on 12 December 2019).
  73. Burdziakowski, P. A Commercial of the Shelf Components for an Unmanned Air Vehicle Photogrammetry. In Proceedings of the 16th International Multidisciplinary Scientific GeoConference SGEM2016, Informatics, Geoinformatics and Remote Sensing, Albena, Bulgaria, 30 June–6 July 2016. [Google Scholar]
  74. Blaikie, R.J.; Melville, D.O.S.; Alkaisi, M.M. Super-resolution near-field lithography using planar silver lenses: A review of recent developments. Microelectron. Eng. 2006, 83, 723–729. [Google Scholar] [CrossRef]
  75. Siu, W.-C.; Hung, K.-W. Review of image interpolation and super-resolution. In Proceedings of the 2012 Asia Pacific Signal and Information Processing Association Annual Summit and Conference, Hollywood, CA, USA, 3–6 December 2012; pp. 1–10. [Google Scholar]
  76. Yang, W.; Zhang, X.; Tian, Y.; Wang, W.; Xue, J.-H.; Liao, Q. Deep learning for single image super-resolution: A brief review. IEEE Trans. Multimed. 2019, 1, 99. [Google Scholar] [CrossRef] [Green Version]
  77. Dong, C.; Loy, C.C.; Tang, X. Accelerating the Super-Resolution Convolutional Neural Network. In Proceedings of the European conference on computer vision ECCV 2016, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Berlin, Germany; pp. 391–407. [Google Scholar]
  78. Kim, J.; Lee, J.K.; Lee, K.M. Accurate Image Super-Resolution Using Very Deep Convolutional Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1646–1654. [Google Scholar]
  79. Li, Z.; Yang, J.; Liu, Z.; Yang, X.; Jeon, G.; Wu, W. Feedback Network for Image Super-Resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Xi’an, China, 8–11 November 2019. [Google Scholar]
  80. Tai, Y.; Yang, J.; Liu, X.; Xu, C. MemNet: A Persistent Memory Network for Image Restoration. In Proceedings of the International Conference on Computer Vision, Venice, Italy, 22–29 October 2017. [Google Scholar]
  81. Wang, X.; Yu, K.; Wu, S.; Gu, J.; Liu, Y.; Dong, C.; Qiao, Y.; Loy, C.C. ESRGAN: Enhanced super-resolution generative adversarial networks. In Proceedings of the European Conference on Computer Vision Workshops (ECCVW), Munich, Germany, 8–14 September 2018. [Google Scholar]
  82. Zhang, K.; Zuo, W.; Gu, S.; Zhang, L. Learning Deep CNN Denoiser Prior for Image Restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–28 July 2017; pp. 3929–3938. [Google Scholar]
  83. Zhang, K.; Zuo, W.; Zhang, L. Learning a single convolutional super-resolution network for multiple degradations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 3262–3271. [Google Scholar]
  84. Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; Fu, Y. Image Super-Resolution Using Very Deep Residual Channel Attention Networks. In Proceedings of the ECCV, Munich, Germany, 8–14 September 2018. [Google Scholar]
  85. Zhang, Y.; Tian, Y.; Kong, Y.; Zhong, B.; Fu, Y. Residual Dense Network for Image Super-Resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
  86. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Madison, WI, USA, 27–30 June 2016. [Google Scholar]
  87. Wang, Z.; Simoncelli, E.P.; Bovik, A.C. Multi-scale structural similarity for image quality assessment. In Proceedings of the Conference Record of the Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 9–12 November 2003. [Google Scholar]
  88. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis. 2015, 115, 211–252. [Google Scholar] [CrossRef] [Green Version]
  89. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015—Conference Track Proceedings, San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
  90. Agustsson, E.; Timofte, R. NTIRE 2017 Challenge on Single Image Super-Resolution: Dataset and Study. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  91. Computer Vision Laboratory NTIRE 2017. Available online: http://www.vision.ee.ethz.ch/ntire17/ (accessed on 12 December 2019).
  92. Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process. 2012, 21, 4695–4708. [Google Scholar] [CrossRef] [PubMed]
  93. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 2013, 21, 209–212. [Google Scholar] [CrossRef]
  94. Venkatanath, N.; Praneeth, D.; Maruthi Chandrasekhar, B.H.; Channappayya, S.S.; Medasani, S.S. Blind image quality evaluation using perception based features. In Proceedings of the 2015 21st National Conference on Communications, NCC 2015, Bombay, India, 27 February–1 March 2015. [Google Scholar]
  95. Sheikh, H.R.; Wang, Z.; Cormack, L.; Bovik, A.C. LIVE Image Quality Assessment Database Release 2. Available online: https://live.ece.utexas.edu/research/quality/ (accessed on 12 December 2019).
  96. Fraser, B.T.; Congalton, R.G. Issues in Unmanned Aerial Systems (UAS) Data Collection of Complex Forest Environments. Remote Sens. 2018, 10, 908. [Google Scholar] [CrossRef] [Green Version]
  97. Nourbakhshbeidokhti, S.; Kinoshita, A.M.; Chin, A.; Florsheim, J.L. A Workflow to Estimate Topographic and Volumetric Changes and Errors in Channel Sedimentation after Disturbance. Remote Sens. 2019, 11, 586. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Traditional unmanned aerial vehicle (UAV) photogrammetric process (a) and augmented by a super-resolution algorithm (b). GCP: ground-control points and RTK: real-time kinematic.
Figure 2. Single-grid flight path—scheme and parameters.
Figure 3. Images of ground-control points from the Mavic Pro: (a) GCP-2 at 55 m, (b) GCP-2 at 110 m, and (c) GCP-2 at SR and from the Phantom 4 Pro: (d) GCP-2 at 55 m, (e) GCP-2 at 110 m, and (f) GCP-2 at SR. SR: super resolution.
Figure 4. Images of ground-control points from the Mavic Pro: (a) GCP-5 at 55 m, (b) GCP-5 at 110 m, and (c) GCP-5 at SR and from the Phantom 4 Pro: (d) GCP-5 at 55 m, (e) GCP-5 at 110 m, and (f) GCP-5 at SR.
Figure 5. GCP locations and error estimates for the Mavic Pro: (a) at 55 m, (b) at 110 m, and (c) SR and for the Phantom 4 Pro: (d) at 55 m, (e) at 110 m, and (f) SR. Z error is represented by the ellipse color. X,Y errors are represented by the ellipse shape.
Figure 6. Visual comparison of point-cloud details (cars in a parking lot) for (a) the Mavic Pro at 110 m and (b) the Mavic Pro SR.
Figure 7. Point-to-point cloud comparison for the Mavic Pro: (a) scalar field, 55 m to SR, and (b) histogram, 55 m to SR; and for the Phantom 4 Pro: (c) scalar field, 55 m to SR, and (d) histogram, 55 m to SR [97]. C2C: cloud-to-cloud.
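The C2C comparison shown in Figure 7 measures, for every point of the compared cloud, the distance to its nearest neighbour in the reference cloud. Below is a minimal sketch of this nearest-neighbour variant, assuming both clouds are available as N × 3 NumPy arrays; the study's comparisons were produced with a dedicated point-cloud tool, so the array names and the randomly generated example data here are purely illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def c2c_distances(reference: np.ndarray, compared: np.ndarray) -> np.ndarray:
    """Nearest-neighbour cloud-to-cloud (C2C) distances.

    reference, compared: (N, 3) arrays of XYZ coordinates.
    Returns one distance per point of the compared cloud.
    """
    tree = cKDTree(reference)                 # spatial index over the reference cloud
    distances, _ = tree.query(compared, k=1)  # distance to the closest reference point
    return distances

if __name__ == "__main__":
    # Illustrative usage with synthetic clouds standing in for the exported dense clouds.
    rng = np.random.default_rng(0)
    cloud_55m = rng.random((100_000, 3)) * 100.0
    cloud_sr = cloud_55m + rng.normal(scale=0.05, size=cloud_55m.shape)
    d = c2c_distances(cloud_55m, cloud_sr)
    print(f"mean C2C distance: {d.mean():.3f} m, 95th percentile: {np.percentile(d, 95):.3f} m")
```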
Figure 8. The bushes problem: (a) point cloud from 55 m, (b) point cloud from 110 m, (c) C2C comparison (55 m to SR), (d) matches between images (55 m), and (e) point cloud and respective image overlaid (55 m).
Figure 9. Results for the Mavic Pro: orthophotomap at (a) 55 m, (b) 110 m, and (c) SR; DEM at (d) 55 m, (e) 110 m, and (f) SR.
Figure 10. Camera location and image overlap for the Mavic Pro at (a) 55 m and (b) SR.
Table 1. Flight plan parameters. AGL: above ground level and GSD: ground sample distance.
Flight | AGL (m) | GSD (cm/pix) | D_Y (m) | D_X (m) | B_y (m) | B_x (m) | L_W (m) | L_H (m)
Mavic Pro 55 m | 55 | 1.67 | 450 | 300 | 27 | 20 | 89.3 | 67
Mavic Pro 110 m | 110 | 3.4 | 450 | 300 | 54 | 40 | 178.6 | 134
Phantom 4 Pro 55 m | 55 | 1.4 | 450 | 300 | 17 | 11 | 82.5 | 55
Phantom 4 Pro 110 m | 110 | 2.83 | 450 | 300 | 33 | 22 | 165 | 110
Table 2. Image quality metrics (IQM). BRISQUE: blind referenceless image spatial quality evaluator, NIQE: natural image quality evaluator, and PIQE: perception-based image quality evaluator.
Image | BRISQUE | Diff (BRISQUE vs. original) | NIQE | PIQE | PIQE Scale
Mavic Pro—original | 25.6103 | | 2.0092 | 21.5645 | Good
Mavic Pro—super resolution | 38.3286 | −12.72% | 2.3865 | 22.3731 | Good
Mavic Pro—bicubic | 43.2879 | −17.68% | 3.6959 | 57.0822 | Poor
Phantom 4—original | 24.8537 | | 3.6971 | 15.0224 | Excellent
Phantom 4—super resolution | 34.8079 | −9.95% | 3.5549 | 29.6032 | Good
Phantom 4—bicubic | 47.0685 | −22.21% | 5.437 | 66.4054 | Poor
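The bicubic rows in Table 2 serve as a conventional upscaling baseline against the learned super-resolution output, evaluated at the same 2× image dimensions. A minimal sketch of producing such a 2× bicubic baseline is given below, assuming a recent Pillow version; the file name is purely illustrative and not taken from the study.

```python
from PIL import Image

# Hypothetical input file standing in for an original UAV image.
src = Image.open("DJI_0001.JPG")

# 2x bicubic upscaling - the same scale factor as the super-resolution network,
# so that BRISQUE/NIQE/PIQE scores are compared at identical image dimensions.
upscaled = src.resize((src.width * 2, src.height * 2), Image.Resampling.BICUBIC)
upscaled.save("DJI_0001_bicubic_x2.JPG")
```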
Table 3. PIQE scale.
Quality Scale | Score Range
Excellent | 0–20
Good | 21–35
Fair | 36–50
Poor | 51–80
Bad | 81–100
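For convenience, the PIQE ranges in Table 3 can be expressed as a simple lookup. The sketch below merely encodes the table (it is not part of the processing chain); the example scores are taken from Table 2.

```python
def piqe_label(score: float) -> str:
    """Map a PIQE score (0-100, lower is better) to the qualitative scale of Table 3."""
    if score <= 20:
        return "Excellent"
    if score <= 35:
        return "Good"
    if score <= 50:
        return "Fair"
    if score <= 80:
        return "Poor"
    return "Bad"

print(piqe_label(22.3731))  # "Good"  (Mavic Pro, super resolution)
print(piqe_label(66.4054))  # "Poor"  (Phantom 4, bicubic)
```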
Table 4. GCP locations, error estimates, and percentage difference calculations for the Mavic Pro.
GCP | 55 m | 110 m | SR | SR to 55 m | SR to 110 m
X error (mm) | 26.3232 | 18.6667 | 24.0629 | −9% | 29%
Y error (mm) | 61.8786 | 35.9062 | 43.0071 | −30% | 20%
Z error (mm) | 1.0370 | 0.6880 | 0.65627 | −37% | −5%
XY error (mm) | 67.2449 | 40.4685 | 49.2812 | −27% | 22%
Total error (mm) | 67.2529 | 40.4744 | 49.2856 | −27% | 22%
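The derived columns of Table 4 can be reproduced from the per-axis values: the XY error is the root sum of squares of the X and Y errors, the total error additionally includes Z, and the "SR to 55 m" and "SR to 110 m" columns are relative differences with respect to the corresponding reference flight. A short worked check against the Mavic Pro SR column is sketched below (an illustrative verification, not code from the study).

```python
from math import hypot, sqrt

# Per-axis GCP error estimates for the Mavic Pro SR dataset (Table 4), in mm.
x_err, y_err, z_err = 24.0629, 43.0071, 0.65627

xy_err = hypot(x_err, y_err)                      # ~49.28 mm, matches the XY row
total_err = sqrt(x_err**2 + y_err**2 + z_err**2)  # ~49.29 mm, matches the Total row

# Relative difference of the SR result with respect to the 55 m reference flight.
xy_55m = 67.2449
rel_to_55m = (xy_err - xy_55m) / xy_55m * 100.0   # ~ -27%, matches the 'SR to 55 m' column

print(f"XY: {xy_err:.4f} mm, total: {total_err:.4f} mm, SR to 55 m: {rel_to_55m:.0f}%")
```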
Table 5. GCP locations, error estimates, and percentage difference calculations for the Phantom 4 Pro.
 | 55 m | 110 m | SR | SR to 55 m | SR to 110 m
X error (mm) | 1.40237 | 4.13 | 7.69274 | 449% | 86%
Y error (mm) | 2.03867 | 2.0452 | 17.8583 | 776% | 773%
Z error (mm) | 0.08618 | 1.1428 | 0.9549 | 1008% | −16%
XY error (mm) | 2.47444 | 4.6092 | 19.4447 | 686% | 322%
Total error (mm) | 2.2475 | 4.7488 | 19.4682 | 766% | 310%
Table 6. The camera digital sensor parameters calculated in processing software from the super-resolution (SR) images.
 | Camera Model | Resolution | Focal Length | Pixel Size
Mavic Pro SR | FC220 (4.73 mm) | 8000 × 6000 | 4.73 mm | 0.787 × 0.787 μm
Phantom 4 Pro SR | FC6310 (8.8 mm) | 10944 × 7296 | 8.8 mm | 1.21 × 1.21 μm
Table 7. Image alignment results for the Mavic Pro.
 | 55 m | 110 m | SR | SR to 55 m | SR to 110 m
Number of images | 464 | 170 | 170 | −63% | 0%
Flying altitude (m) | 54.3 | 112 | 115 | 112% | 3%
Ground resolution (cm/pix) | 1.87 | 3.76 | 1.88 | 1% | −50%
Coverage area (km²) | 0.184 | 0.261 | 0.244 | 33% | −7%
Camera stations | 464 | 170 | 170 | −63% | 0%
Tie points | 423,069 | 161,698 | 196,262 | −54% | 21%
Projections | 1,278,009 | 531,674 | 522,934 | −59% | −2%
Reprojection error (pix) | 1.36 | 1.5 | 1.85 | 36% | 23%
Camera pixel size (μm) | 1.57 | 1.57 | 0.787 | −50% | −50%
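As an illustrative consistency check (not part of the original text): the −50% change in ground resolution for the SR datasets in Tables 7 and 8 follows from the standard ground sampling distance relation GSD = pH/f, since 2× super-resolution effectively halves the pixel pitch p while the focal length f and flying height H remain unchanged. With the Mavic Pro SR values (p ≈ 0.787 μm, f = 4.73 mm, H ≈ 115 m):

$$\mathrm{GSD} = \frac{p\,H}{f} \approx \frac{0.787\ \mu\mathrm{m} \times 115\ \mathrm{m}}{4.73\ \mathrm{mm}} \approx 1.9\ \mathrm{cm/pix},$$

which is close to the 1.88 cm/pix reported by the processing software; the reported value is an average over the whole block, so the agreement is only approximate.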
Table 8. Image alignment results for the Phantom 4 Pro.
 | 55 m | 110 m | SR | SR to 55 m | SR to 110 m
Number of images | 374 | 206 | 206 | −45% | 0%
Flying altitude (m) | 58.1 | 121 | 120 | 107% | −1%
Ground resolution (cm/pix) | 1.49 | 3.12 | 1.56 | 5% | −50%
Coverage area (km²) | 0.167 | 0.218 | 0.215 | 29% | −1%
Camera stations | 374 | 206 | 206 | −45% | 0%
Tie points | 386,953 | 181,734 | 254,686 | −34% | 40%
Projections | 1,118,423 | 755,904 | 710,762 | −36% | −6%
Reprojection error (pix) | 1.15 | 1.3 | 1.1 | −4% | −15%
Camera pixel size (μm) | 2.41 | 2.41 | 1.21 | −50% | −50%
Table 9. Photogrammetric products data summary for the Mavic Pro.
 | 55 m | 110 m | SR | SR to 55 m | SR to 110 m
Dense Cloud (points) | 53,224,184 | 18,586,628 | 81,307,688 | 53% | 337%
3D model (faces) | 10,534,892 | 3,668,167 | 6,163,657 | 53% | 341%
DEM x size (pix) | 13,259 | 6753 | 13,243 | 0% | 96%
DEM y size (pix) | 10,072 | 5206 | 10,829 | 8% | 108%
DEM resolution (cm/pix) | 6.73 | 13.7 | 6.83 | 1% | −50%
Orthophoto x size (pix) | 36,684 | 20,500 | 40,956 | 12% | 100%
Orthophoto y size (pix) | 29,450 | 16,785 | 33,753 | 15% | 101%
Orthophoto resolution (cm/pix) | 1.68 | 3.42 | 1.71 | 2% | −50%
Table 10. Photogrammetric products data summary for the Phantom 4 Pro.
 | 55 m | 110 m | SR | SR to 55 m | SR to 110 m
Dense Cloud (points) | 78,393,616 | 24,147,651 | 101,512,148 | 29% | 320%
3D model (faces) | 15,559,357 | 4,773,512 | 20,169,847 | 30% | 323%
DEM x size (pix) | 13,891 | 7601 | 12,058 | −13% | 59%
DEM y size (pix) | 11,924 | 7003 | 10,887 | −9% | 55%
DEM resolution (cm/pix) | 5.39 | 11.3 | 5.67 | 5% | −50%
Orthophoto x size (pix) | 44,773 | 23,311 | 45,853 | 2% | 97%
Orthophoto y size (pix) | 34,708 | 19,613 | 39,232 | 13% | 100%
Orthophoto resolution (cm/pix) | 1.35 | 2.83 | 1.42 | 5% | −50%
