Article

Earthwork Volume Calculation, 3D Model Generation, and Comparative Evaluation Using Vertical and High-Oblique Images Acquired by Unmanned Aerial Vehicles

1 Department of Spatial Information, Kyungpook National University, Daegu 41566, Korea
2 Department of Convergence and Fusion System Engineering, Kyungpook National University, Sangju 37224, Korea
* Author to whom correspondence should be addressed.
Aerospace 2022, 9(10), 606; https://doi.org/10.3390/aerospace9100606
Submission received: 23 August 2022 / Revised: 3 October 2022 / Accepted: 12 October 2022 / Published: 15 October 2022
(This article belongs to the Special Issue Applications of Drones)

Abstract

In civil engineering and building construction, the earthwork volume is one of the most important factors in the design and construction stages; therefore, it must be calculated accurately. Because managing earthworks is also highly important, in this study a three-dimensional (3D) model for earthwork calculation and management was generated using an unmanned aerial vehicle (UAV) and an RGB camera. Vertical and high-oblique images (45°, 60°, and 75°) were acquired at flying heights of 50 and 100 m, and the data were generated by dividing the images into eight cases: cases 1–4 used images acquired at 50 m and cases 5–8 images acquired at 100 m (case 1: 90°; case 2: 90° + 45°; case 3: 90° + 60°; case 4: 90° + 75°; case 5: 90°; case 6: 90° + 45°; case 7: 90° + 60°; case 8: 90° + 75°). Three evaluations were performed on the data. First, orthophoto accuracy was evaluated using checkpoints; second, the earthwork volumes calculated via a global positioning system (GPS) and the UAV were compared; finally, the 3D models were evaluated. Case 2, which showed the lowest root mean square error in the orthophoto accuracy evaluation, was the most accurate, and it also outperformed the other cases in the earthwork volume evaluation and the 3D model. This study shows that the best results are obtained by combining a vertical image with a high-oblique image of 40–50° when generating a 3D model for earthwork volume calculation and management. In addition, if the UAV is unobstructed, imaging at approximately 50 m or lower is preferable to flying too high.

1. Introduction

During construction, the earthwork volume is a key factor in determining construction costs in the design and construction stages; hence, accurate measurements are necessary [1]. Currently, earthwork volume is calculated using methods such as total station surveying and global navigation satellite system (GNSS) surveying, or from digital elevation models produced through remote sensing, either directly or with an earthwork calculation program [2,3,4]. To measure and manage the earthwork volume systematically, a technology capable of measuring and analyzing various data in real time at the construction site is also required. Conventional earthwork volume analysis relies on total station and GNSS surveying, which is inefficient because it demands significant time and manpower [5]. Many studies have reconstructed the topography of mining areas using UAVs, but few have addressed the accurate calculation of the earthwork volume [6,7,8,9]. Remote sensing methods, such as aerial photogrammetry, have disadvantages in terms of cost and accuracy.
Recently, as unmanned aerial vehicles (UAVs) and their sensors have been developed and improved, data with high temporal and spatial resolution can be acquired easily. Studies on orthophoto and digital surface model (DSM) generation, precision agriculture, change detection, and temperature measurement are currently being conducted using UAVs [10,11,12,13,14]. Moreover, UAV battery performance has improved considerably, so that, weather permitting, repeated two-dimensional (2D) and three-dimensional (3D) measurements are possible [15]. Because the camera angle of a UAV can be adjusted, 3D models can also be generated from high-oblique images [16]. Since accurate topographic information can be produced for 2D and 3D analyses, UAV imagery can assist site development by providing 3D earthwork data and earthwork volume calculations.
Several studies related to earthwork sites and volume estimation using UAVs have been conducted. Hugenholtz et al. analyzed earthwork volumes using UAVs [2]. Seong et al. compared soil volume measurements from GNSS, total station, and UAV data for a small area [17]. Ronchi et al. analyzed the relationship between earthworks and cut surfaces using UAV-borne light detection and ranging (LiDAR) and multispectral sensors [18]. Kim et al. used UAVs to evaluate topographic changes at mining sites and the feasibility of mountain restoration [19]. Kim et al. performed design verification and earthwork planning by integrating UAV-based point clouds with a building information model (BIM) [20]. Siebert and Teizer generated a 3D map of a civil engineering survey site using a UAV to analyze safety requirements, construction equipment, and earthwork progress tracking for sloped areas [21]. Tucci et al. calculated the volume of a waste site using a UAV and compared vertical and oblique images [22]. Filkin et al. used UAVs to estimate waste stockpile volumes in landfills [23].
In previous studies, mainly vertical images were used when measuring earthwork volume with UAVs [2,5,17,23,24]. One study estimated waste volume using vertical and oblique images, but it acquired oblique photographs at only 30° rather than at various angles [22]. Various studies have used high-oblique images taken by UAVs [25,26,27,28,29,30], demonstrating that precise 3D terrain reconstruction is possible, and high-oblique images have accordingly been taken at construction and earthwork sites to build 3D topographic data. However, studies that calculate earthwork volume from high-oblique images and evaluate the resulting accuracy remain scarce, as do comparisons of high-oblique imagery across different UAV imaging heights. Where high-oblique images have been used to construct topographic data at mining and earthwork sites, the camera angle was typically set arbitrarily, and the accuracy achieved with different fixed camera angles was not evaluated. Moreover, because the accuracy of a 3D model may vary with flying height, accuracy must be evaluated with respect to both imaging height and camera angle. Thus, in this study, the accuracy of the earthwork volume calculation and the 3D model was evaluated by fusing vertical images with various high-oblique images at different UAV flying heights. Comparisons were performed for the earthwork volume and topographic data constructed from vertical images and high-oblique images (45°, 60°, and 75°) at two flying heights (50 and 100 m). Furthermore, because 3D topographic data are useful for managing earthwork volumes in building construction, this study also evaluated the 3D models, which previous studies did not generate.

2. Materials and Methods

As shown in Figure 1, we evaluated the earthwork volume and 3D model by selecting the research site, acquiring UAV data at various angles, and using global positioning system (GPS) data.

2.1. Study Area and Equipment

A civil engineering (reservoir construction) site of the Korea Rural Community Corporation, located in Hwaseo-myeon, Sangju-si, Gyeongsangbuk-do, Republic of Korea, was selected as the study site (latitude: 36.451, longitude: 127.926). The land cover at the study site was composed of soil and gravel, and the construction of the reservoir and embankment had been completed at the time of data acquisition. The total area of the study site was about 0.02 km². Because no reference information on the embankment volume was available for the target area, the data were measured over an area that could be verified with the naked eye.
Inspire 1, a rotary-wing UAV manufactured by DJI (Shenzhen, China), and Zenmuse X3, a dedicated camera for the Inspire 1 also manufactured by DJI, were used for UAV photography in this study. The Inspire 1 and Zenmuse X3 were released in 2014 and, although more than eight years old, are still widely used in photogrammetric surveying studies and at construction sites [31,32,33]. The Inspire 1 weighs 2935 g and has a top speed of 22 m/s, a maximum flight altitude of 4500 m, and a maximum wind speed resistance of 10 m/s. With an optimal battery, it flies for approximately 18 min. The Zenmuse X3 weighs 293 g and has a maximum image size of 4000 × 3000 pixels, a lens diagonal field of view (FOV) of 94°, and a focal length of 3.61 mm. The wide-angle FOV is slightly smaller than that of the human eye, but it is sufficient considering the low imaging altitude of the drone and its low cost compared to a survey camera [34] (Table 1). Furthermore, the hovering position of the Inspire 1 is extremely accurate when the GNSS is connected; even when disconnected, visual positioning gives a hovering accuracy of 2.5 m horizontally and 0.5 m vertically, making stable imaging possible [35]. The adjustable camera angle of the Inspire 1 also allows both oblique and vertical images to be acquired.
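As a rough check on the expected image resolution, the ground sample distance (GSD) for nadir imaging follows directly from the Table 1 specifications; the sketch below (our function name, nadir geometry assumed) gives roughly 2.2 cm/pixel at 50 m and 4.3 cm/pixel at 100 m:

```python
def gsd_m(pixel_size_um: float, focal_length_mm: float, height_m: float) -> float:
    """Ground sample distance (m/pixel) for a nadir image:
    GSD = pixel size x flying height / focal length."""
    return (pixel_size_um * 1e-6) * height_m / (focal_length_mm * 1e-3)

# Zenmuse X3 (Table 1): 1.561 um pixels, 3.61 mm focal length
print(gsd_m(1.561, 3.61, 50))   # ~0.022 m/pixel at 50 m
print(gsd_m(1.561, 3.61, 100))  # ~0.043 m/pixel at 100 m
```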
GNSS coordinate data are required for UAV image acquisition and processing, earthwork volume calculation, and 3D model accuracy evaluation. A Trimble (Colorado, USA) R8s receiver was used for the GNSS survey. The R8s weighs 3.81 kg, has 440 channels, and can receive GPS, GLONASS, SBAS, Galileo, and BeiDou satellite signals. In this study, only GPS signals were received, and the survey was conducted using the virtual reference station (VRS) method, a real-time kinematic (RTK) technique. During VRS surveying, the R8s has a vertical error of approximately 15 mm + 0.5 parts per million (ppm) root mean square (RMS) and a horizontal error of approximately 8 mm + 0.5 ppm RMS (Table 2). In the VRS method, the current GNSS location of the mobile station is transmitted to a virtual reference point server using a GNSS receiver and a mobile phone signal. The transmitted position is integrated with information from three permanent GNSS stations, and systematic errors caused by the ionosphere and troposphere are removed. The resulting position corrections are provided to the mobile station, which performs the RTK survey [36].

2.2. Data Acquisition and Method

2.2.1. UAV Data Acquisition

UAV image acquisition was conducted between December 2017 and January 2018. Images were acquired between 12:00 and 13:00, when the sun was at its highest. The UAV flights were executed automatically using the Pix4Dcapture application. Considering the available flight time and the surrounding environment of the study site, the longitudinal and side overlaps were set to 80% at heights of 50 and 100 m. Batteries discharge quickly in cold air, and imagery can become unstable in strong wind; because the flights took place in winter, a warming pad was attached to the battery, and imaging was conducted on days with as little wind as possible. Vertical and high-oblique images (45°, 60°, and 75°) were acquired at the two flying heights of 50 and 100 m. Images at 30° were also acquired and used to generate orthophotos and 3D models, but they were ultimately excluded because of their low quality: a 30° image contains more information near the horizon than of the ground, which causes image-matching errors. The remaining 45°, 60°, 75°, and 90° images were all used, and image processing was performed by dividing them into cases 1–8.
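The 80% overlap setting determines the exposure spacing once the image footprint is known; the following sketch (our helper, nadir geometry assumed, not the Pix4Dcapture planning logic) illustrates the relation:

```python
def footprint_and_spacing(pixels: int, pixel_size_um: float,
                          focal_length_mm: float, height_m: float,
                          overlap: float):
    """Image footprint along one axis and the exposure/flight-line spacing
    needed to achieve the requested overlap (nadir imaging assumed)."""
    gsd = (pixel_size_um * 1e-6) * height_m / (focal_length_mm * 1e-3)
    footprint = gsd * pixels                 # ground coverage of the image
    spacing = footprint * (1.0 - overlap)    # distance between exposures
    return footprint, spacing

# 4000-px image width, 80% overlap (Section 2.2.1), 50 m flying height
print(footprint_and_spacing(4000, 1.561, 3.61, 50, 0.80))  # ~86 m, ~17 m
```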

2.2.2. GPS Data Acquisition

Because the earthwork amount of the embankment in the target area was unknown, a large quantity of GPS coordinate data was acquired for use as reference data in the accuracy evaluation. GPS coordinate data were acquired after calibration using a virtual base station according to the shape of the terrain. The obtained longitude and latitude coordinates were converted into plane rectangular X and Y coordinates. The KNGeoid14 model, the Korean national geoid model provided by the National Geographic Information Institute (NGII), was applied to calculate the orthometric height. A ground control point (GCP) is a reference point used to obtain a transformation between image coordinates and map coordinates and is utilized to acquire precise 3D coordinates. The GCPs and checkpoints (CPs) were measured using aerial photo signals and identifiable corner points. Because the site is small (0.02 km²), seven GCPs and eight CPs were surveyed, a total of fifteen points (Figure 2). In Korea, UAV surveying regulation no. 2020-5670, established by the NGII of the Republic of Korea, stipulates in principle more than 9 GCPs per square kilometer; in studies of areas similar in size to this one, fewer than 10 GCPs were used [15,37,38]. Approximately 260 points were acquired as GPS coordinate data for the earthwork volume calculation, measured every 2–3 m in areas with large elevation differences and every 5–10 m in areas with small elevation differences. The 260 GPS coordinates were used to calculate the earthwork volume, and the 3D model accuracy was evaluated using 60 points randomly selected from them. L1C/A, L1C, L2C, L2E, and L5 signals from 10 to 15 GPS satellites were received during the survey. The average horizontal accuracy was 0.009 m, and the vertical accuracy was 0.017 m. The position dilution of precision values were six or less, complying with network RTK surveying regulation no. 2019-153 in the Public Surveying Work Regulations established by the NGII of the Republic of Korea (Table 3).
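As an illustration of the coordinate conversion described above, the snippet below projects WGS84 longitude/latitude to Korean plane rectangular coordinates with pyproj; EPSG:5186 (Korea 2000 / Central Belt 2010) is our assumption for this site's longitude, and the geoid step is schematic because the KNGeoid14 grid itself is distributed by the NGII:

```python
from pyproj import Transformer

# WGS84 geographic -> Korea 2000 / Central Belt 2010 (EPSG:5186, assumed belt)
to_plane = Transformer.from_crs("EPSG:4326", "EPSG:5186", always_xy=True)
lon, lat = 127.926, 36.451          # study-site coordinates from Section 2.1
x, y = to_plane.transform(lon, lat)

# Orthometric height = ellipsoidal height - geoid undulation (KNGeoid14);
# the undulation below is a placeholder, not a real KNGeoid14 lookup.
h_ellipsoidal = 150.00
N_geoid = 25.00                      # placeholder undulation (m)
H_orthometric = h_ellipsoidal - N_geoid
print(x, y, H_orthometric)
```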

2.2.3. Method

In previous studies, the earthwork amount was calculated only from vertical images; in this study, the earthwork amount and the 3D model were evaluated by fusing vertical and high-oblique images, and a comparative evaluation was also performed with respect to the UAV imaging height. The UAV data were acquired at 90° (case 1, 50 m), 90° + 45° (case 2, 50 m), 90° + 60° (case 3, 50 m), 90° + 75° (case 4, 50 m), 90° (case 5, 100 m), 90° + 45° (case 6, 100 m), 90° + 60° (case 7, 100 m), and 90° + 75° (case 8, 100 m), and orthophotos, earthwork volume measurements, and 3D models were generated. For each case, the horizontal and vertical accuracies of the orthophoto were evaluated, and the earthwork volume and 3D model evaluations were performed.
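For reference, the eight cases can be summarized compactly (an illustrative Python mapping; the structure and names are ours):

```python
# Flying height (m) and camera angle sets fused in each case;
# 90 deg is the vertical image set, the others are high-oblique sets.
CASES = {
    1: {"height_m": 50,  "angles_deg": (90,)},
    2: {"height_m": 50,  "angles_deg": (90, 45)},
    3: {"height_m": 50,  "angles_deg": (90, 60)},
    4: {"height_m": 50,  "angles_deg": (90, 75)},
    5: {"height_m": 100, "angles_deg": (90,)},
    6: {"height_m": 100, "angles_deg": (90, 45)},
    7: {"height_m": 100, "angles_deg": (90, 60)},
    8: {"height_m": 100, "angles_deg": (90, 75)},
}
```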

3. Image Processing

The data acquired by the UAV were processed using Agisoft (Saint Petersburg, Russia) Metashape software. The Metashape workflow is as follows (Figure 3). First, the photos taken with the UAV are added to the software. Second, feature points are extracted, and relative orientation is performed by identifying conjugate points among them; self-calibration is performed using the conjugate point pairs, during which the approximate calibration parameters are optimized. Third, the relative coordinates are converted to absolute coordinates using the GCPs, the exterior orientation parameters are transformed, and the camera calibration parameters are optimized. Finally, after constructing the high-density point cloud, the DSM and orthophotos are generated [39].
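For readers scripting this workflow, the steps above map roughly onto the Metashape Professional Python API; the following is a hedged sketch based on the version 1.7 API [47], with hypothetical file names, and method names or parameters may differ in other versions:

```python
import Metashape  # Agisoft Metashape Professional Python module

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(["IMG_0001.JPG", "IMG_0002.JPG"])  # hypothetical file names

# Step 2: feature extraction, conjugate-point matching, relative orientation
chunk.matchPhotos(downscale=1)   # downscale=1 keeps full image resolution
chunk.alignCameras()

# Step 3: after placing GCP markers (GUI or chunk.markers), refine the
# exterior orientation and camera calibration parameters
chunk.optimizeCameras()

# Step 4: dense point cloud, DSM, and orthophoto
chunk.buildDepthMaps(downscale=1)
chunk.buildDenseCloud()
chunk.buildDem(source_data=Metashape.DenseCloudData)
chunk.buildOrthomosaic(surface_data=Metashape.ElevationData)
doc.save("earthwork_project.psx")
```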

3.1. Image Matching

Image matching proceeds as follows. First, scale-invariant feature transform (SIFT)-based feature point extraction is performed. SIFT matching was developed by Lowe in 2004 and has been one of the most widely used image-processing methods for nearly 20 years [40]. Feature point extraction creates a scale space from which extreme values are extracted. Generating the scale space would require applying a Laplacian filter after the Gaussian filter; because this increases computational complexity, a difference of Gaussians (DoG) is used instead, and the extrema in each set of difference images are extracted as feature point candidates. When the DoG is generated, local maxima and minima are extracted as candidate points by comparison with the 26 adjacent pixels in the generated scale space. The exact positions of the candidate points are then determined using a second-order Taylor approximation [40], and the final feature points are extracted after removing low-contrast candidates. Next, the direction and magnitude of the gradient are determined for each feature point. The gradients are accumulated into a 4 × 4 × 8 histogram, i.e., a 128-dimensional vector formed by concatenating the histogram bin values, which is then normalized [41,42]. SIFT performs excellently with respect to invariance under image transformations: rotation and scaling, illumination changes, noise, and affine transformations [43]. Relative orientation was performed by identifying conjugate points among the extracted feature points, and the approximate calibration parameters were estimated via self-calibration [39]. Feature points were extracted for cases 1–8 using SIFT (Figure 4); the number of feature points for each case was (a) 4974, (b) 5232, (c) 5122, (d) 5031, (e) 3846, (f) 4783, (g) 4624, and (h) 4478.
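As an illustration of the SIFT extraction and matching described above, the following OpenCV sketch (hypothetical file names; opencv-python 4.4 or later assumed, where SIFT lives in the main module) shows the typical calls, including Lowe's ratio test to keep unambiguous conjugate points:

```python
import cv2

# Load two overlapping UAV frames in grayscale (hypothetical file names).
img1 = cv2.imread("frame_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_b.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)  # keypoints + 128-D descriptors
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors and keep unambiguous pairs via Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(len(kp1), len(kp2), len(good))
```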

3.2. Camera Lens Distortion Correction

Camera calibration is an important issue in photogrammetry. A distorted lens affects measurement accuracy; hence, a camera distortion test must be performed before image registration (after feature point extraction) [44]. Distortion correction is important in tasks that involve quantitative measurements, such as geometric positioning and dimensional measurement [45]. Before applying Brown's distortion model, the GCPs were input and used to transform the exterior orientation parameters from relative to absolute coordinates; Brown's distortion model was then applied to optimize the camera calibration parameters [Equations (1)–(6)].
x = X / Z,  y = Y / Z                                                        (1)
r = sqrt(x^2 + y^2)                                                          (2)
x' = x(1 + K1 r^2 + K2 r^4 + K3 r^6 + K4 r^8) + (P1 (r^2 + 2x^2) + 2 P2 x y) (3)
y' = y(1 + K1 r^2 + K2 r^4 + K3 r^6 + K4 r^8) + (P2 (r^2 + 2y^2) + 2 P1 x y) (4)
u = w × 0.5 + cx + x' f + x' B1 + y' B2                                      (5)
v = h × 0.5 + cy + y' f                                                      (6)
The following definitions are used in the equations: X, Y, and Z are the point coordinates in the local camera coordinate system; x and y are the normalized image coordinates, and x' and y' their distortion-corrected values; u and v are the projected point coordinates in the image coordinate system (in pixels); f is the focal length; cx and cy are the principal point offsets; K1, K2, K3, and K4 are the radial distortion coefficients; P1 and P2 are the tangential distortion coefficients; B1 and B2 are the affinity and non-orthogonality (skew) coefficients; and w and h are the image width and height. Brown's distortion model is widely used to correct lens distortion in digital photography and multispectral cameras [46]. Because the Zenmuse X3 is an inexpensive camera, residual distortion is severe toward the image edges, unlike in expensive survey cameras.
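Equations (1)–(6) can be applied directly; the following sketch (our function name and default coefficients) projects a point from camera coordinates to pixel coordinates with the Brown model:

```python
import numpy as np

def brown_project(X, Y, Z, f, cx, cy, w, h,
                  K=(0.0, 0.0, 0.0, 0.0), P=(0.0, 0.0), B=(0.0, 0.0)):
    """Project a point from local camera coordinates to pixel coordinates
    using the Brown distortion model of Equations (1)-(6)."""
    x, y = X / Z, Y / Z                      # Eq. (1): normalized coordinates
    r2 = x * x + y * y                       # Eq. (2): squared radial distance
    K1, K2, K3, K4 = K
    P1, P2 = P
    B1, B2 = B
    radial = 1 + K1 * r2 + K2 * r2**2 + K3 * r2**3 + K4 * r2**4
    xd = x * radial + (P1 * (r2 + 2 * x * x) + 2 * P2 * x * y)   # Eq. (3)
    yd = y * radial + (P2 * (r2 + 2 * y * y) + 2 * P1 * x * y)   # Eq. (4)
    u = w * 0.5 + cx + xd * f + xd * B1 + yd * B2                # Eq. (5)
    v = h * 0.5 + cy + yd * f                                    # Eq. (6)
    return u, v
```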

3.3. DSM and Orthophoto Generation

DSM generation uses structure-from-motion (SfM) [47]. SfM emerged as a method for extracting 3D structure from multiple images through bundle adjustment [48], and its utility has grown with the popularity of UAVs [49,50]. SfM was initially used mainly for buildings, statues, and cultural properties [51]; as the use of UAVs expanded, it spread to various fields, such as topography and archaeological monitoring [52,53,54]. The SfM technique uses the feature points extracted by SIFT and can form a high-density point cloud with 3D relative coordinates even when the camera pose and scale differ between images [55]. Because the coordinates obtained from the GPS transmitter of a general-purpose drone are not exact, the GCPs obtained using the VRS method must be input to convert to absolute coordinates [56]. After inputting the GCPs and CPs, a high-density point cloud is constructed; the cloud quality can be selected in Metashape, and the highest quality was selected in this study. High-density point construction converts low-density points to high density based on the camera poses estimated through SfM and the absolute coordinates of the GCPs; in this process, height values are computed for the dense points. A polygon mesh model was reconstructed from the dense points, and a grid-type DSM was created. Finally, orthophotos were generated through an optimization process after constructing a texture using the DSM (Figure 5).

3.4. 3D Model Generation

The 3D model was generated from the high-density points using the build tiled model function of the Metashape software [47], which is based on SfM. The SfM technique is suitable for 3D modeling because it can construct accurate data for regular topography as well as for topography with abrupt changes [57]. The results of the 3D models are shown in Figure 6.

3.5. Earthwork Volume Calculation

Coordinates acquired using the VRS survey were extracted as points with the Geosystem GeoUtile2 software. The earthwork volume was calculated from the point data using ArcMap 10.1. Existing earthwork volume calculation methods, such as the conventional raster or cross-section methods, have disadvantages, such as the large effort required or low calculation accuracy [58]. Using a triangulated irregular network (TIN) as the calculation model instead allows the earthwork to be visualized and calculated accurately [59,60]. The TIN was created with the TIN tool in ArcMap 10.1 to convert the points into continuous spatial data. When calculating the earthwork volume, a horizontal reference plane was used, and the volume above it was computed with the Surface Volume tool applied to the generated TIN. The earthwork volume calculated from the GPS data was 147,316.15 m³.
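The ArcMap workflow above can be mimicked with a small prism-based computation over a Delaunay TIN; this is a minimal sketch (function name ours), not the Surface Volume tool's exact algorithm, and it simply skips triangles whose mean height falls below the reference plane:

```python
import numpy as np
from scipy.spatial import Delaunay

def tin_volume_above(points_xyz, z_ref):
    """Approximate volume between a TIN surface and a horizontal
    reference plane, summed over triangular prisms."""
    pts = np.asarray(points_xyz, dtype=float)
    tri = Delaunay(pts[:, :2])                # 2D Delaunay triangulation (TIN)
    volume = 0.0
    for simplex in tri.simplices:
        p = pts[simplex]
        a = p[1, :2] - p[0, :2]
        b = p[2, :2] - p[0, :2]
        area = 0.5 * abs(a[0] * b[1] - a[1] * b[0])  # planimetric area
        mean_h = p[:, 2].mean() - z_ref              # mean prism height
        if mean_h > 0:
            volume += area * mean_h
    return volume
```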
Metashape was used to calculate the earthwork volume from the UAV images, and the calculation proceeded after the DSM and orthophoto were generated. After selecting the area of interest, the earthwork volume was calculated by applying Delaunay triangulation (Equation (7)) [61].
V = Σ_i (L_i × W_i × H_i)                                                    (7)
where L_i is the length of the cell, W_i is the width of the cell, and H_i is the height of the center of the cell. The length and width are equal to the ground sample distance (GSD), and the height is equal to the ground level (Z_Bi) of the center of the cell. Therefore, the formula for calculating the earthwork volume is as follows (Equation (8)) [61]:
V = Σ_i (GSD × GSD × Z_Bi)                                                   (8)
The calculated earthwork volume is shown in Table 4.
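Equation (8) amounts to summing, over all DSM cells, the cell footprint (GSD × GSD) times the cell height above the base level; a minimal numpy sketch (hypothetical function and values) is:

```python
import numpy as np

def grid_volume(dsm, z_base, gsd):
    """Earthwork volume from a DSM grid per Equation (8):
    each cell contributes GSD x GSD x its height above the base level."""
    heights = np.asarray(dsm, dtype=float) - z_base
    heights[heights < 0] = 0.0            # ignore cells below the base plane
    return float(np.sum(heights) * gsd * gsd)

# Example: a 3x3 DSM (m) with 0.02 m GSD and a base level of 100 m
dsm = np.array([[100.5, 100.8, 100.6],
                [100.9, 101.2, 100.7],
                [100.4, 100.6, 100.5]])
print(grid_volume(dsm, 100.0, 0.02))
```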

4. Results and Discussion

4.1. Orthophoto Accuracy Assessment

The accuracies of the generated orthophotos were evaluated using a total of eight CPs. The number of CPs complied with the requirement of at least one third of the number of GCPs according to UAV surveying regulation no. 2020-5670, established by the NGII of the Republic of Korea. The accuracies of the generated orthophotos were evaluated against Aerial Photogrammetry Work Regulation no. 2020-5165, Chapter 4, Article 50 (limitations of adjustment calculations and errors), as stipulated by the NGII (Table 5). For case 1, the maximum errors for x, y, and z were 0.10, 0.15, and 0.09 m, and the root mean square error (RMSE) values were 0.08, 0.07, and 0.07 m, respectively. For case 2, the maximum errors for x, y, and z were 0.10, 0.06, and 0.05 m, and the RMSE values were 0.06, 0.06, and 0.04 m, respectively. For case 3, the maximum errors for x, y, and z were 0.11, 0.11, and 0.17 m, and the RMSE values were 0.08, 0.08, and 0.11 m, respectively. For case 4, the maximum errors for x, y, and z were 0.12, 0.16, and 0.12 m, and the RMSE values were 0.07, 0.07, and 0.11 m, respectively (Table 6). Cases 1 and 2 satisfied the tolerance for a ground sampling distance (GSD) within 8 cm, whereas cases 3 and 4 satisfied only the tolerance for a GSD within 12 cm because of their larger z errors. For case 5, the maximum errors for x, y, and z were 0.12, 0.10, and 0.10 m, and the RMSE values were 0.08, 0.08, and 0.08 m, respectively. For case 6, the maximum errors for x, y, and z were 0.09, 0.10, and 0.08 m, and the RMSE values were 0.06, 0.07, and 0.05 m, respectively. For case 7, the maximum errors for x, y, and z were 0.11, 0.12, and 0.13 m, and the RMSE values were 0.09, 0.08, and 0.11 m, respectively. For case 8, the maximum errors for x, y, and z were 0.12, 0.13, and 0.13 m, and the RMSE values were 0.10, 0.10, and 0.11 m, respectively. Cases 5 and 6 satisfied the tolerance for a GSD within 8 cm, whereas cases 7 and 8 satisfied only the tolerance for a GSD within 12 cm because of their larger z errors (Table 7).
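The RMSE and maximum error statistics reported in Tables 6 and 7 can be computed from checkpoint residuals as follows (a generic sketch; the function and array names are ours):

```python
import numpy as np

def cp_errors(measured_xyz, reference_xyz):
    """Per-axis RMSE and maximum absolute error at checkpoints.
    Both arrays are shaped (n_points, 3) for x, y, z."""
    d = np.asarray(measured_xyz, float) - np.asarray(reference_xyz, float)
    rmse = np.sqrt(np.mean(d**2, axis=0))
    max_err = np.max(np.abs(d), axis=0)
    return rmse, max_err

# Usage (hypothetical arrays of orthophoto-derived vs. surveyed CPs):
# rmse, max_err = cp_errors(uav_xyz, gps_xyz)
```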

4.2. Earthwork Volume and 3D Model Accuracy Assessment

4.2.1. Earthwork Volume Accuracy Assessment

The accuracy of the calculated earthwork volume was evaluated quantitatively, and the 3D model was evaluated by visual inspection. Because the true earthwork amount at the study site was not known, the reference volume was calculated from the GPS measurements, the method mainly used in the field, and the GPS-derived and UAV-derived earthwork volumes were compared and analyzed. The earthwork volume measured by GPS was 147,316.15 m³, and the earthwork volumes calculated from the UAV data were 149,214.71 m³ in case 1, 146,913.10 m³ in case 2, 144,681.48 m³ in case 3, 150,879.87 m³ in case 4, 150,408.24 m³ in case 5, 145,787.72 m³ in case 6, 153,547.39 m³ in case 7, and 152,475.12 m³ in case 8 (Table 4). Compared with the GPS result, the highest accuracy was found in case 2 (99.73%); the accuracy of case 1 was 98.71%, case 3 was 98.21%, case 4 was 97.58%, case 5 was 97.90%, case 6 was 98.96%, case 7 was 95.77%, and case 8 was 96.50% (Table 8). The vertical value (z) is important for the earthwork volume calculation; the z error of case 2 was 0.04 m in RMSE with a maximum error of 0.05 m, the lowest among the generated orthophotos (Table 6). Cases 5–8 were taken from a height of 100 m; among them, case 6 was the most accurate and the second most accurate among all of cases 1–8. As in case 2, the accuracy of case 6 was highest with the 90° + 45° combination. As the UAV flying height increased, the accuracy of the earthwork amount decreased slightly, but the 90° + 45° combination remained the most accurate. In a previous study, the z error was lowered when vertical and high-oblique images were fused, and in this study, the case 2 and case 6 (90° + 45°) images had the lowest z errors [62]. Consequently, the earthwork volume appears to have been calculated accurately.

4.2.2. 3D Model Accuracy Assessment

The accuracy of the 3D models was compared through quantitative evaluation and visual inspection. The quantitative evaluation used the z values of the GPS coordinate data [57,63]: 60 of the 260 GPS points were randomly extracted, and the differences between the z values of the generated 3D model and the z values obtained by VRS measurement were evaluated using the RMSE. The resulting RMSEs were ±0.11 m in case 1, ±0.05 m in case 2, ±0.09 m in case 3, ±0.14 m in case 4, ±0.12 m in case 5, ±0.07 m in case 6, ±0.13 m in case 7, and ±0.16 m in case 8 (Table 9). The 3D model of case 2 clearly showed superior quantitative results compared with the other models, and the case 6 model (±0.07 m) was the most accurate of those generated at a height of 100 m. It was also confirmed that 3D model accuracy decreased as the UAV flying height increased.
For the visual inspection, the earthwork study site was divided into sides A and B, as shown in Figure 7 and Figure 8 (side A yellow, side B green). In the enlarged views of sides A and B in Figure 7, the pictures on the left are from case 2, and the pictures on the right are from cases 1, 3, and 4; in the latter three cases, the surfaces were distorted or bent compared with case 2, whereas the case 2 model was generated cleanly without errors such as distortion or warping of the surface. In the enlarged views of sides A and B in Figure 8, the pictures on the left are again from case 2, and the pictures on the right are from cases 5, 6, 7, and 8; in these four cases, the surfaces were more distorted or warped than in case 2. Case 6, flown at 100 m, had fewer errors (such as distortion or bending) than cases 5, 7, and 8, but because of the height difference it did not achieve as clean a result as case 2. The visual inspection verified that case 2 produced a superior 3D model compared with the other cases; case 2 was thus the most accurate in both the quantitative and visual inspection evaluations.

5. Conclusions

In this study, vertical images and high-oblique (45°, 60°, and 75°) images were captured at heights of 50 and 100 m using a UAV with an RGB camera. Orthophotos, DSMs, and 3D models were generated from the captured vertical images and the images for each angle, with the fusions of vertical and high-oblique images divided into eight cases (case 1: 90°, 50 m; case 2: 90° + 45°, 50 m; case 3: 90° + 60°, 50 m; case 4: 90° + 75°, 50 m; case 5: 90°, 100 m; case 6: 90° + 45°, 100 m; case 7: 90° + 60°, 100 m; case 8: 90° + 75°, 100 m). An accuracy evaluation was performed on the generated orthophotos using CPs. In addition, the earthwork amount was calculated using the generated orthophoto and DSM and compared with the earthwork amount calculated from the coordinate data obtained via GPS. Compared with the GPS-derived earthwork volume, case 2 was the most accurate; when the orthophoto accuracy was evaluated with the CPs, the horizontal (x, y) and vertical (z) RMSE and maximum errors were smaller in case 2 than in the other cases. Moreover, comparing across flying heights, cases 1–4 at 50 m produced more accurate orthophotos and earthwork amounts than the 100 m cases. Among the 100 m cases (5–8), case 6, which fused vertical and 45° images, was more accurate than cases 5, 7, and 8 in the orthophoto and earthwork volume calculations.
For the vertical images at 50 m (case 1), the earthwork volume was calculated more accurately than in cases 3 and 4 because the horizontal and vertical RMSE and maximum errors were small. However, case 2 was more accurate than case 1: because the z value is important when calculating the earthwork amount, case 2, whose z error was the lowest, was calculated most accurately. Even for the 100 m vertical images (case 5), the small horizontal and vertical RMSE and maximum errors led to more accurate earthwork amounts than in cases 7 and 8; however, as with case 2, case 6 had the smallest z error, so its orthophoto and earthwork volume were calculated more accurately than in case 5. The accuracy of the 3D models was evaluated through quantitative evaluation and visual inspection. For the quantitative evaluation, 60 points were randomly extracted from the z-value data acquired by GPS, and the differences from the z values of the generated 3D models were evaluated using the RMSE. The RMSE of case 2 was the most accurate at ±0.05 m, and the RMSE values were smaller for the 50 m cases than for the 100 m cases. At 100 m, case 6, which had the lowest z error, produced the most accurate 3D model (±0.07 m) compared with cases 5, 7, and 8. The visual inspection assessed whether the surface texturing of the 3D model was produced smoothly: case 2 showed the best surface texturing, whereas cases 1, 3, 4, 5, 7, and 8 each showed uneven surfaces, such as warped surface texturing.
In previous studies, the earthwork amount was calculated only from vertical images, and when high-oblique images were used to generate a 3D model for earthworks, the camera angle was set arbitrarily; accurate evaluation with respect to the UAV imaging height was also not considered. In this study, the earthwork volume calculation and the 3D model were evaluated by fusing vertical images with various high-oblique images, and the accuracy was evaluated according to the UAV flying height. Based on the results, when calculating the earthwork amount using a UAV with an RGB camera, it is best to combine the vertical image with the 45° image. Between the 50 and 100 m flying heights, the orthophoto and 3D model generated by combining the vertical and 45° images taken at 50 m were the best. In this study, 30° images were also acquired, but the data generated by fusing them with the 90° images were of low quality and were therefore excluded. Accordingly, where a 45° image is difficult to acquire, setting the angle to 40–50° would be advisable. If the UAV is not affected by obstacles, imaging from around 50 m is better than flying too high. Volume measurement and 3D modeling are also important in managing earthworks; therefore, based on these results, it is advisable to fuse vertical images with high-oblique images of 40–50° when generating a 3D model for earthwork volume calculation and management. In the future, it will be necessary to study earthwork volume calculation by angle and 3D modeling using RTK UAVs that do not require GCP surveying, as well as to investigate the effect of the overlap ratio.

Author Contributions

Conceptualization, K.L.; methodology, W.H.L.; software, K.L.; formal analysis, W.H.L. and K.L.; writing—original draft preparation, K.L.; writing—review and editing, W.H.L.; funding acquisition, W.H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2020R1I1A3061750) and a National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (no. NRF-2021R1A5A8033165).

Data Availability Statement

Not applicable.

Acknowledgments

The authors wish to acknowledge the Research Institute of Artificial Intelligent Diagnosis Technology for Multi-scale Organic and Inorganic Structure, Kyungpook National University, Sangju, Republic of Korea, for providing laboratory facilities.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Akgul, M.; Yurtseven, H.; Gulci, S.; Akay, A.E. Evaluation of UAV- and GNSS-Based DEMs for Earthwork Volume. Arab. J. Sci. Eng. 2018, 43, 1893–1909.
  2. Hugenholtz, C.H.; Walker, J.; Brown, O.; Myshak, S. Earthwork Volumetrics with an Unmanned Aerial Vehicle and Softcopy Photogrammetry. J. Surv. Eng. 2015, 141, 06014003.
  3. Ali, F.A. Comparison among Height Observation of GPS, Total Station and Level and Their Suitability in Mining Works by Using GIS Technology. Int. Res. J. Eng. Technol. 2017, 4, 953–956.
  4. Abd-elqader, Y.S.; Fawaz, D.E.M.; Hamdy, A.M. Evaluation Study of GNSS Technology and Traditional Surveying in DEM Generation and Volumes Estimation. Aust. J. Basic Appl. Sci. 2020, 14, 18–25.
  5. Seong, J.; Cho, S.I.; Xu, C.; Yun, H.C. UAV Utilization for Efficient Estimation of Earthwork Volume Based on DEM. J. Korean Soc. Surv. Geod. Photogramm. Cartogr. 2021, 39, 279–288.
  6. Nesbit, P.R.; Hugenholtz, C.H. Enhancing UAV–SfM 3D Model Accuracy in High-Relief Landscapes by Incorporating Oblique Images. Remote Sens. 2019, 11, 239.
  7. Rossi, P.; Mancini, F.; Dubbini, M.; Mazzone, F.; Capra, A. Combining Nadir and Oblique UAV Imagery to Reconstruct Quarry Topography: Methodology and Feasibility Analysis. Eur. J. Remote Sens. 2017, 50, 211–221.
  8. Zheng, J.; Yao, W.; Lin, X.; Ma, B.; Bai, L. An Accurate Digital Subsidence Model for Deformation Detection of Coal Mining Areas Using a UAV-Based LiDAR. Remote Sens. 2022, 14, 421.
  9. Mello, C.C.D.S.; Salim, D.H.C.; Simões, G.F. UAV-Based Landfill Operation Monitoring: A Year of Volume and Topographic Measurements. Waste Manag. 2022, 137, 253–263.
  10. Kameyama, S.; Sugiura, K. Estimating Tree Height and Volume Using Unmanned Aerial Vehicle Photography and SfM Technology, with Verification of Result Accuracy. Drones 2020, 4, 19.
  11. Demir, N.; Sönmez, N.K.; Akar, T.; Ünal, S. Automated Measurement of Plant Height of Wheat Genotypes Using a DSM Derived from UAV Imagery. Proceedings 2018, 2, 350.
  12. Lee, K.; Lee, W.H. Temperature Accuracy Analysis by Land Cover According to the Angle of the Thermal Infrared Imaging Camera for Unmanned Aerial Vehicles. ISPRS Int. J. Geo-Inf. 2022, 11, 204.
  13. Jung, S.; Lee, W.H.; Han, Y. Change Detection of Building Objects in High-Resolution Single-Sensor and Multi-Sensor Imagery Considering the Sun and Sensor's Elevation and Azimuth Angles. Remote Sens. 2021, 13, 3660.
  14. Jung, S.; Lee, K.; Yun, Y.; Lee, W.H.; Han, Y. Detection of Collapse Buildings Using UAV and Bitemporal Satellite Imagery. J. Korean Soc. Surv. Geod. Photogramm. Cartogr. 2020, 38, 187–196.
  15. Cho, J.; Lee, J.; Park, J. Large-Scale Earthwork Progress Digitalization Practices Using Series of 3D Models Generated from UAS Images. Drones 2021, 5, 147.
  16. Lee, K.R.; Lee, W.H. Comparison of Orthophoto and 3D Model Using Vertical and High Oblique Images Taken by UAV. J. Korean Soc. Geospat. Inf. Syst. 2017, 25, 35–45.
  17. Seong, J.H.; Han, Y.K.; Lee, W.H. Earth-Volume Measurement of Small Area Using Low-Cost UAV. J. Korean Soc. Surv. Geod. Photogramm. Cartogr. 2018, 36, 279–286.
  18. Ronchi, D.; Limongiello, M.; Barba, S. Correlation Among Earthwork and Cropmark Anomalies Within Archaeological Landscape Investigation by Using LiDAR and Multispectral Technologies from UAV. Drones 2020, 4, 72.
  19. Kim, D.; Kim, S.; Back, K. Analysis of Mine Change Using 3D Spatial Information Based on Drone Image. Sustainability 2022, 14, 3433.
  20. Kim, J.; Lee, S.; Seo, J.; Lee, D.; Choi, H.S. The Integration of Earthwork Design Review and Planning Using UAV-Based Point Cloud and BIM. Appl. Sci. 2021, 11, 3435.
  21. Siebert, S.; Teizer, J. Mobile 3D Mapping for Surveying Earthwork Projects Using an Unmanned Aerial Vehicle (UAV) System. Autom. Constr. 2014, 41, 1–14.
  22. Tucci, G.; Gebbia, A.; Conti, A.; Fiorini, L.; Lubello, C. Monitoring and Computation of the Volumes of Stockpiles of Bulk Material by Means of UAV Photogrammetric Surveying. Remote Sens. 2019, 11, 1471.
  23. Filkin, T.; Sliusar, N.; Huber-Humer, M.; Ritzkowski, M.; Korotaev, V. Estimation of Dump and Landfill Waste Volumes Using Unmanned Aerial Systems. Waste Manag. 2022, 139, 301–308.
  24. Cho, S.I.; Lim, J.H.; Lim, S.B.; Yun, H.C. A Study on DEM-Based Automatic Calculation of Earthwork Volume for BIM Application. J. Korean Soc. Surv. Geod. Photogramm. Cartogr. 2020, 38, 131–140.
  25. Grenzdörffer, G.J.; Guretzki, M.; Friedlander, I. Photogrammetric Image Acquisition and Image Analysis of Oblique Imagery. Photogramm. Rec. 2008, 23, 372–386.
  26. Cheng, M.; Matsuoka, M. Extracting Three-Dimensional (3D) Spatial Information from Sequential Oblique Unmanned Aerial System (UAS) Imagery for Digital Surface Model. Int. J. Remote Sens. 2021, 42, 1643–1663.
  27. Yang, B.; Ali, F.; Yin, P.; Yang, T.; Yu, Y.; Li, S.; Liu, X. Approaches for Exploration of Improving Multi-Slice Mapping via Forwarding Intersection Based on Images of UAV Oblique Photogrammetry. Comput. Electr. Eng. 2021, 92, 107135.
  28. Vacca, G.; Dessì, A.; Sacco, A. The Use of Nadir and Oblique UAV Images for Building Knowledge. ISPRS Int. J. Geo-Inf. 2017, 6, 393.
  29. Jiang, S.; Jiang, W. Efficient SfM for Oblique UAV Images: From Match Pair Selection to Geometrical Verification. Remote Sens. 2018, 10, 1246.
  30. Zhang, X.; Zhao, P.; Hu, Q.; Ai, M.; Hu, D.; Li, J. A UAV-Based Panoramic Oblique Photogrammetry (POP) Approach Using Spherical Projection. ISPRS J. Photogramm. Remote Sens. 2020, 159, 198–219.
  31. Bhatnagar, S.; Gill, L.; Ghosh, B. Drone Image Segmentation Using Machine and Deep Learning for Mapping Raised Bog Vegetation Communities. Remote Sens. 2020, 12, 2602.
  32. Zhu, W.; Sun, Z.; Huang, Y.; Yang, T.; Li, J.; Zhu, K.; Zhang, J.; Yang, B.; Shao, C.; Peng, J. Optimization of Multi-Source UAV RS Agro-Monitoring Schemes Designed for Field-Scale Crop Phenotyping. Precis. Agric. 2021, 22, 1768–1802.
  33. Ahmad, N.; Iqbal, J.; Shaheen, A.; Ghfar, A.; Al-Anazy, M.M.; Ouladsmane, M. Spatio-Temporal Analysis of Chickpea Crop in Arid Environment by Comparing High-Resolution UAV Image and LANDSAT Imagery. Int. J. Environ. Sci. Technol. 2022, 19, 6595–6610.
  34. BN, P.K.; Chai, Y.H.; Patil, A.K. Inspired by Human Eye: Vestibular Ocular Reflex Based Gimbal Camera Movement to Minimize Viewpoint Changes. Symmetry 2019, 11, 101.
  35. Lee, K.; Park, J.; Jung, S.; Lee, W. Roof Color-Based Warm Roof Evaluation in Cold Regions Using a UAV Mounted Thermal Infrared Imaging Camera. Energies 2021, 14, 6488.
  36. Wanninger, L. Virtual Reference Stations (VRS). GPS Solut. 2003, 7, 143–144.
  37. Park, J.; Lee, W. Orthophoto and DEM Generation in Small Slope Areas Using Low Specification UAV. J. Korean Soc. Surv. Geod. Photogramm. Cartogr. 2016, 34, 283–290.
  38. Elkhrachy, I. Accuracy Assessment of Low-Cost Unmanned Aerial Vehicle (UAV) Photogrammetry. Alex. Eng. J. 2021, 60, 5579–5590.
  39. Han, S.; Park, J.; Lee, W. On-Site vs. Laboratorial Implementation of Camera Self-Calibration for UAV Photogrammetry. J. Korean Soc. Surv. Geod. Photogramm. Cartogr. 2016, 34, 349–356.
  40. Zhu, G.; Wang, Q.; Yuan, Y.; Yan, P. SIFT on Manifold: An Intrinsic Description. Neurocomputing 2013, 113, 227–233.
  41. Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
  42. Han, Y.K.; Kim, Y.I.; Han, D.Y.; Choi, J.W. Mosaic Image Generation of AISA Eagle Hyperspectral Sensor Using SIFT Method. J. Korean Soc. Surv. Geod. Photogramm. Cartogr. 2013, 31, 165–172.
  43. Mousavi, V.; Varshosaz, M.; Remondino, F. Using Information Content to Select Keypoints for UAV Image Matching. Remote Sens. 2021, 13, 1302.
  44. Sun, Q.; Wang, X.; Xu, J.; Wang, L.; Zhang, H.; Yu, J.; Su, T.; Zhang, X. Camera Self-Calibration with Lens Distortion. Optik 2016, 127, 4506–4513.
  45. Weng, J.; Zhou, W.; Ma, S.; Qi, P.; Zhong, J. Model-Free Lens Distortion Correction Based on Phase Analysis of Fringe-Patterns. Sensors 2021, 21, 209.
  46. Jiang, J.; Zheng, H.; Ji, X.; Cheng, T.; Tian, Y.; Zhu, Y.; Cao, W.; Ehsani, R.; Yao, X. Analysis and Evaluation of the Image Pre-Processing Process of a Six-Band Multispectral Camera Mounted on an Unmanned Aerial Vehicle for Winter Wheat Monitoring. Sensors 2019, 19, 747.
  47. Agisoft LLC. Agisoft Metashape User Manual, Professional Edition, Version 1.7; 2021. Available online: http://www.agisoft.com/pdf/metashape-pro_1_7_en.pdf (accessed on 8 December 2021).
  48. Snavely, N.; Seitz, S.M.; Szeliski, R. Modeling the World from Internet Photo Collections. Int. J. Comput. Vis. 2008, 80, 189–210.
  49. Rossi, G.; Tanteri, L.; Tofani, V.; Vannocci, P.; Moretti, S.; Casagli, N. Multitemporal UAV Surveys for Landslide Mapping and Characterization. Landslides 2018, 15, 1045–1052.
  50. Mlambo, R.; Woodhouse, I.H.; Gerard, F.; Anderson, K. Structure from Motion (SfM) Photogrammetry with Drone Data: A Low Cost Method for Monitoring Greenhouse Gas Emissions from Forests in Developing Countries. Forests 2017, 8, 68.
  51. Mosbrucker, A.R.; Major, J.J.; Spicer, K.R.; Pitlick, J. Camera System Considerations for Geomorphic Applications of SfM Photogrammetry. Earth Surf. Process. Landf. 2017, 42, 969–986.
  52. Anderson, S.; Pitlick, J. Using Repeat Lidar to Estimate Sediment Transport in a Steep Stream. J. Geophys. Res. Earth Surf. 2014, 119, 621–643.
  53. Cucchiaro, S.; Fallu, D.J.; Zhao, P.; Waddington, C.; Cockcroft, D.; Tarolli, P.; Brown, A.G. SfM Photogrammetry for Geoarchaeology. Dev. Earth Surf. Process. 2020, 23, 183–205.
  54. Ferreira, E.; Chandler, J.; Wackrow, R.; Shiono, K. Automated Extraction of Free Surface Topography Using SfM-MVS Photogrammetry. Flow Meas. Instrum. 2017, 54, 243–249.
  55. Lucieer, A.; de Jong, S.M.; Turner, D. Mapping Landslide Displacements Using Structure from Motion (SfM) and Image Correlation of Multi-Temporal UAV Photography. Prog. Phys. Geogr. 2014, 38, 97–116.
  56. Lee, K.R.; Lee, W.H. Orthophoto and DEM Generation Using Low Specification UAV Images from Different Altitudes. J. Korean Soc. Surv. Geod. Photogramm. Cartogr. 2016, 34, 535–544.
  57. Lee, K.R.; Han, Y.K.; Lee, W.H. Comparison of Orthophotos and 3D Models Generated by UAV-Based Oblique Images Taken in Various Angles. J. Korean Soc. Surv. Geod. Photogramm. Cartogr. 2018, 36, 117–126.
  58. Hao, X.; Pan, Y. Accuracy Analysis of Earthwork Calculation Based on Triangulated Irregular Network (TIN). Intell. Autom. Soft Comput. 2011, 17, 793–802.
  59. Chen, Z.A.; Luo, Y.Y.; Zhang, L.T. Precision Analysis and Earthwork Computation of Land Consolidation Based on Surfer. Geotech. Investig. Surv. 2010, 5, 53–56.
  60. Kavaliauskas, P.; Židanavičius, D.; Jurelionis, A. Geometric Accuracy of 3D Reality Mesh Utilization for BIM-Based Earthwork Quantity Estimation Workflows. ISPRS Int. J. Geo-Inf. 2021, 10, 399.
  61. Ajayi, O.G.; Oyeboade, T.O.; Samaila-Ija, H.A.; Adewale, T.J. Development of a UAV-Based System for the Semi-Automatic Estimation of the Volume of Earthworks. Rep. Geod. Geoinf. 2020, 110, 21–28.
  62. Cho, J.; Lee, J.; Lee, B. A Study on the Optimal Shooting Conditions of UAV for 3D Production and Orthophoto Generation. J. Korean Soc. Surv. Geod. Photogramm. Cartogr. 2020, 38, 645–653.
  63. Lee, K.R.; Han, Y.K.; Lee, W.H. Generation and Comparison of Orthophotos and 3D Models of Small-Scale Terraced Topography Using Vertical and High Oblique Images Taken by UAV. J. Korean Soc. Geospat. Inf. Sci. 2018, 26, 23–30.
Figure 1. Research flow chart.
Figure 2. Arrangement of GCPs and CPs of the study site.
Figure 3. Image processing of UAV data.
Figure 4. Feature point extraction using scale-invariant feature transform (SIFT) matching: (a) case 1, (b) case 2, (c) case 3, (d) case 4, (e) case 5, (f) case 6, (g) case 7, and (h) case 8.
Figure 5. Orthophotos obtained using mesh construction and texturing with data acquired by the UAV: (a) case 1, (b) case 2, (c) case 3, (d) case 4, (e) case 5, (f) case 6, (g) case 7, and (h) case 8.
Figure 6. Three-dimensional (3D) model results: (a) case 1, (b) case 2, (c) case 3, (d) case 4, (e) case 5, (f) case 6, (g) case 7, and (h) case 8.
Figure 7. The 3D model visual inspection evaluation of the earthworks study site (left images: case 2, right images: cases 1, 3, and 4).
Figure 8. The 3D model visual inspection evaluation of the earthworks study site (left images: case 2, right images: cases 5, 6, 7, and 8).
Table 1. Specifications of Inspire 1 and Zenmuse X3.

UAV: Inspire 1 | RGB camera: Zenmuse X3
Weight: 2935 g | Resolution: 4000 × 3000
Flight altitude: max. 4500 m | Pixel size: 1.561 × 1.561 μm
Flight time: max. 18 min | FOV: 94°
Speed: max. 22 m/s | Focal length: 3.61 mm
Maximum wind resistance: 10 m/s | F-stop: f/2.8
Table 2. Specifications of the Trimble R8s receiver.

GNSS receiver: Trimble R8s
Weight: 3.81 kg
Number of channels: 440
Satellite signals: GPS L1C/A, L1C, L2C, L2E, L5; GLONASS L1C/A, L1P, L2C/A, L2P, L3; SBAS L1C/A, L5 (for SBAS satellites that support L5); Galileo E1, E5A, E5B; BeiDou B1, B2
VRS precision: horizontal 8 mm + 0.5 ppm RMS; vertical 15 mm + 0.5 ppm RMS
Table 3. Synopsis of the RTK public surveying work regulation.

1. Network RTK surveys use data observed at three or more fixed points.
2. The lowest satellite elevation angle should be set at 15°, or at 20° in areas with many obstacles.
3. Five or more satellites should be received simultaneously.
4. The allowable precision is within 0.05 m horizontally and 0.10 m vertically, and observation is performed when the transmission delay of the correction information is within 2 s.
Table 4. Earthwork volume calculation results using the UAV (unit: m³).

GPS: 147,316.15
Case 1: 149,214.71 | Case 2: 146,913.10 | Case 3: 144,681.48 | Case 4: 150,879.87
Case 5: 150,408.24 | Case 6: 145,787.72 | Case 7: 153,547.39 | Case 8: 152,475.12
Table 5. Limitations of adjustment calculations and errors stipulated by the NGII.

GSD (cm) | RMSE (m) | Maximum error (m)
Within 8 | 0.08 | 0.16
Within 12 | 0.12 | 0.24
Within 25 | 0.25 | 0.50
Within 42 | 0.42 | 0.84
Within 65 | 0.65 | 1.30
Within 80 | 0.80 | 1.60
Table 6. Checkpoint errors for horizontal (x, y) and vertical (z) coordinates of the orthophotos for each case at 50 m (unit: m).

CP No. | Case 1 (X, Y, Z) | Case 2 (X, Y, Z) | Case 3 (X, Y, Z) | Case 4 (X, Y, Z)
1 | −0.03, 0.15, −0.01 | 0.02, −0.02, 0.02 | −0.08, 0.10, 0.09 | 0.01, 0.04, 0.12
2 | 0.10, 0.14, 0.09 | −0.09, −0.04, 0.05 | −0.09, 0.09, −0.10 | 0.02, 0.08, 0.08
3 | 0.03, 0.05, −0.11 | 0.05, −0.10, −0.05 | −0.01, −0.05, −0.10 | 0.01, 0.05, −0.13
4 | −0.02, −0.04, −0.03 | −0.01, 0.02, 0.01 | −0.12, −0.11, −0.12 | 0.02, 0.07, −0.02
5 | −0.16, 0.03, 0.02 | −0.02, 0.06, −0.06 | 0.02, 0.07, 0.01 | 0.11, −0.09, −0.04
6 | 0.04, 0.01, 0.05 | 0.01, −0.09, −0.07 | 0.11, 0.04, 0.17 | −0.11, 0.16, 0.12
7 | 0.10, 0.11, 0.04 | −0.08, 0.05, −0.01 | −0.09, 0.02, −0.10 | 0.12, 0.13, −0.14
8 | −0.05, 0.12, −0.11 | 0.10, 0.06, 0.02 | −0.10, 0.11, 0.10 | −0.05, 0.01, 0.12
RMSE | 0.08, 0.07, 0.07 | 0.06, 0.06, 0.04 | 0.08, 0.08, 0.11 | 0.07, 0.07, 0.11
Maximum error | 0.10, 0.15, 0.09 | 0.10, 0.06, 0.05 | 0.11, 0.11, 0.17 | 0.12, 0.16, 0.12
Table 7. Checkpoint errors for horizontal (x, y) and vertical (z) coordinates of the orthophotos for each case at 100 m (unit: m).

CP No. | Case 5 (X, Y, Z) | Case 6 (X, Y, Z) | Case 7 (X, Y, Z) | Case 8 (X, Y, Z)
1 | 0.12, 0.10, 0.10 | 0.03, −0.08, 0.05 | 0.11, 0.06, 0.12 | −0.08, −0.06, 0.09
2 | 0.11, 0.10, −0.04 | −0.05, −0.04, 0.08 | 0.11, −0.06, −0.08 | 0.05, −0.09, −0.09
3 | 0.01, −0.03, −0.10 | 0.02, −0.04, −0.04 | 0.08, −0.08, −0.10 | 0.12, 0.09, −0.10
4 | −0.05, 0.02, −0.04 | −0.07, 0.04, 0.02 | −0.08, −0.12, 0.12 | −0.02, −0.08, −0.11
5 | 0.10, −0.09, 0.09 | −0.09, −0.06, 0.06 | −0.05, −0.07, 0.10 | 0.11, −0.09, 0.11
6 | −0.10, −0.02, 0.06 | −0.02, 0.10, −0.05 | 0.09, 0.06, 0.12 | −0.12, 0.13, −0.10
7 | 0.08, 0.10, 0.07 | 0.02, −0.03, 0.02 | 0.05, 0.05, −0.10 | 0.10, 0.11, −0.12
8 | 0.10, −0.10, −0.09 | 0.09, 0.06, 0.02 | −0.09, 0.10, 0.13 | −0.09, 0.06, 0.13
RMSE | 0.08, 0.08, 0.08 | 0.06, 0.07, 0.05 | 0.09, 0.08, 0.11 | 0.10, 0.10, 0.11
Maximum error | 0.12, 0.10, 0.10 | 0.09, 0.10, 0.08 | 0.11, 0.12, 0.13 | 0.12, 0.13, 0.13
Table 8. Comparison of earthwork volume accuracy values (reference: GPS surveying).

Surveying type | Accuracy
GPS | 100%
UAV, case 1 | 98.71%
UAV, case 2 | 99.73%
UAV, case 3 | 98.21%
UAV, case 4 | 97.58%
UAV, case 5 | 97.90%
UAV, case 6 | 98.96%
UAV, case 7 | 95.77%
UAV, case 8 | 96.50%
Table 9. The 3D model accuracy comparison (RMSE of z differences).

Case 1 | Case 2 | Case 3 | Case 4
±0.11 m | ±0.05 m | ±0.09 m | ±0.14 m
Case 5 | Case 6 | Case 7 | Case 8
±0.12 m | ±0.07 m | ±0.13 m | ±0.16 m

