Article

Accuracy Verification of Surface Models of Architectural Objects from the iPad LiDAR in the Context of Photogrammetry Methods

1 Faculty of Computer Science and Telecommunications, Cracow University of Technology, Warszawska 24, 31-155 Kraków, Poland
2 Faculty of Architecture, Cracow University of Technology, Warszawska 24, 31-155 Kraków, Poland
3 Faculty of Mechanical Engineering, Cracow University of Technology, al. Jana Pawła II 37, 31-864 Kraków, Poland
* Author to whom correspondence should be addressed.
Sensors 2022, 22(21), 8504; https://doi.org/10.3390/s22218504
Submission received: 1 September 2022 / Revised: 26 October 2022 / Accepted: 31 October 2022 / Published: 4 November 2022

Abstract: The creation of accurate three-dimensional models has been radically simplified in recent years by the development of photogrammetric methods. However, the photogrammetric procedure requires complex data processing and does not provide an immediate 3D model, so its use during field (in situ) surveys is infeasible. This paper presents the mapping of fragments of built structures at different scales (the finest detail, a garden sculpture, an architectural interior, and a building facade) by using the LiDAR sensor of the Apple iPad Pro mobile device. The resulting iPad LiDAR and photogrammetric models were compared with reference models derived from laser scanning and point measurements. For small objects with complex geometries acquired by iPad LiDAR, up to 50% of the points were unaligned with the reference models, much more than for the photogrammetric models. This was primarily due to much less frequent sampling and, consequently, a sparser grid. This simplification of object surfaces is, however, highly beneficial in the case of walls and building facades, as it smooths out their representation. The application potential of the iPad Pro LiDAR is severely constrained by its 5-m range cap, which greatly limits the size of objects that can be recorded and excludes most buildings.

1. Introduction

Methods of reconstructing three-dimensional models have found wide use in various fields of research. Three-dimensional modelling is becoming increasingly popular in the design and surveying of architecture and landscape-architecture objects, replacing the traditionally used two-dimensional drawings and photographs [1]. The precision of the resultant models, in this case, translates into reliable information about the objects. This information can be used in popularising works of art and can play an educational role [2,3], as well as form a basis for conservation and reconstruction works [4]. In such cases, it supports an entire range of activities, covering the following phases: construction work planning, creating technical documentation and cost estimates, and finally construction work supervision and verification of compliance with the design documentation. A precisely developed model can be a key element of building information management (BIM) [5] and, in the case of historic buildings, historical building information management (HBIM) [6,7]. It also allows monitoring of the technical condition of a historical site and of possible changes in its structure occurring over time [6,8].
Emerging sophisticated tools allow either partial or complete automation of the modelling process, so it is possible to obtain 3D reconstructions based on satellite data [9], stereoscopic images [10], or light detection and ranging (LiDAR) data [11,12,13,14]. Photogrammetric reconstruction is also widely used (e.g., [2,15,16,17,18]). It consists of recovering the positions of, and spatial relations between, 3D points of observed surfaces on the basis of 2D images [19]. Such an approach is often referred to as structure from motion (SfM) [20].
Photogrammetry has become generally available thanks to the development of applications that use the optical sensors installed in mobile devices [21]. This applies both to photographs obtained at human eye level and to those collected by unmanned aerial vehicles (UAVs) (e.g., [16,22,23,24,25,26,27,28]). Due to their relatively easy availability, models created by photogrammetric methods are used in a broad spectrum of fields, ranging from the digitisation of museum collections [29], through feasibility studies and construction planning [30], to applications in the entertainment industry [31]. Such wide dissemination of the method has prompted questions about the accuracy of the models created in this way [16,28,32,33], research on their verification by other methods (e.g., [16,24,34,35]), and attempts to increase their precision [36,37].
LiDAR is a technology that uses beams of electromagnetic radiation to generate information about objects. It has been associated mainly with large-scale applications, such as forestry, mining, oceanography, archaeology, topography, land surveying, and urban planning [38,39]. The development of consumer devices such as smartphones and tablets has shown that this technology can also be applied in mobile devices. LiDAR, in this case, is based on time-of-flight (ToF) technology, which determines the time it takes a pulsed or modulated light signal to travel to and return from an object [40,41]. Currently, there are several mobile devices on the market that use these technologies (LiDAR, ToF, optical sensors) [42]. This paper discusses research conducted by using an Apple iPad Pro device (Apple Inc., Cupertino, CA, USA) [43], whose LiDAR sensor is based on direct time-of-flight (dToF) technology [44].
To the best of our knowledge, there have been no comprehensive studies in the literature concerning the verification of the quality of 3D reconstructions derived from the LiDAR sensors in iPad devices, especially in the context of architectural and landscape objects. A comparative analysis of the accuracy of scans from this device against scans from an industrial 3D scanner was presented in [40]; however, it applied to objects of minuscule size. A slightly more extensive study was presented by Gollob et al., who used an iPad device to investigate the accuracy of forest inventory variables [45]. Heinrichs and Yang presented an analysis verifying the bias and repeatability of 3D scans made with the device [46]. Various comparative analyses can also be found online in blogs or vlogs, but these do not have the value of peer-reviewed academic studies. Therefore, it is expedient to demonstrate the iPad LiDAR's potential for creating surface models and to indicate the types of built objects that could be digitised in this way, as no similar studies were found in the literature.
The purpose of this paper is to compare the measurement accuracy of 3D scans acquired by using the Apple iPad Pro LiDAR sensor with that of photogrammetric reconstructions. Analyses were performed for models of four types of architectural elements: (1) a small detail of significant geometric complexity, (2) a piece of garden furniture (garden sculpture), (3) an architectural interior, and (4) a building's facade. The measurement capabilities of the devices under study in relation to these groups are presented, and the quality of the obtained measurements is summarised. This study allowed recommendations to be formulated on the use of the Apple iPad Pro in various applications. The theoretical contribution of this study is a proposed and validated method for comparing surface models based on positional statistics.

2. Materials and Methods

2.1. Accuracy of Reference Object Measurements

A set of calibration balls from the Laboratory of Coordinate Metrology (at the Cracow University of Technology) was used to measure the accuracy of the scanning device (Figure 1).
The set consisted of two matte metal balls. Their diameters were d1 = 85.02 mm and d2 = 85.01 mm, respectively, with a shape deviation of δk = 0.02 mm. The spheres were placed on a metal frame with their centres at a distance of dk = 269.4 mm. The listed values were confirmed by a calibration certificate issued by an accredited laboratory. The measurement set allowed for precise verification of the quality of the spatial representation of the objects. The adopted sphere measurement strategy consisted of:
(1) manually marking the parts of the mesh that represented the spheres;
(2) determining the equation of the sphere fitted to each marked area by using a root mean square (RMS) error minimisation algorithm, thereby creating two spheres in the observation area whose centre distance could be determined (a code sketch of this step follows below); and
(3) after a series of measurements, determining the average distance between the observed sphere centres, d_avg, and comparing this quantity against the nominal value dk.
A small measurement error indicates the potential suitability of the device for generating accurate 3D models of built objects.
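The sphere-fitting step (2) can be reproduced with a standard least-squares routine. The following Python sketch is a minimal illustration using NumPy and SciPy; variable names are hypothetical, and the study itself performed this step in Zeiss GOM software:

    import numpy as np
    from scipy.optimize import least_squares

    def fit_sphere(points):
        """Fit a sphere to an (n, 3) array of mesh vertices by
        minimising the RMS radial error; returns (centre, radius)."""
        def residuals(p):
            # signed radial error of each vertex w.r.t. sphere (p[0:3], p[3])
            return np.linalg.norm(points - p[:3], axis=1) - p[3]
        c0 = points.mean(axis=0)                          # initial centre: centroid
        r0 = np.linalg.norm(points - c0, axis=1).mean()   # initial radius
        sol = least_squares(residuals, x0=np.append(c0, r0))
        return sol.x[:3], sol.x[3]

    # sphere_a, sphere_b: manually marked mesh regions (step 1)
    # centre_a, _ = fit_sphere(sphere_a)
    # centre_b, _ = fit_sphere(sphere_b)
    # d_measured = np.linalg.norm(centre_a - centre_b)    # compare with d_k = 269.4 mm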

2.2. Input Data

In order to precisely analyse the actual 3D scanning capabilities of the device mentioned above, an approach was adopted that involved selecting objects at different spatial scales. Such a study makes it possible to determine the accuracy the device can achieve for different applications and to identify applications in which the device can give correct results. Indeed, although the maximum range of the LiDAR sensor is specified, its resolution is not documented. It is therefore unclear whether fine details are imaged correctly and with sufficient precision.
With this in mind, analyses were conducted for four sites, each representing a different scale and type of architectural work.
1. Fine detail. The item analysed was a plaster figurine covered with paint, characterised by fine detail and surface irregularities. Its overall dimensions are 99.2 × 40.8 × 90 mm. This type of form may represent the finest architectural detail found in ornamental elements, such as window and door frames or stucco mouldings. Because of the figurine's small size, a large area of the surroundings was captured together with the actual object, which entailed the need to select the scanning location appropriately (Figure 2a).
2. Small architectural object. The bust of Tadeusz Kościuszko—a Polish national hero and patron of the Cracow University of Technology—placed on a granite pedestal was chosen as the second test object. The bronze monument, designed by Professor Stefan Dousa, is located in the central part of the university's campus, surrounded by trees and footpaths, which brings greenery within the theoretical range of the LiDAR sensor. Both the location and the material of the sculpture can present challenges for photogrammetric methods due to potentially significant differences in the illumination of its different parts. The dimensions of the statue are 678 × 458 × 754 mm (Figure 2b).
3. Indoor space. The interior shown in Figure 2c is a fragment of a hall leading to lecture rooms, located in a modernist building of the Faculty of Chemical Engineering and Technology at the Cracow University of Technology (design: E. Moj, building in use since 1970). It was chosen as a test object for several reasons. Its area is too large to cover from a single standpoint, so it is necessary to move with the device. In addition, the corridor contains many elements that may be difficult to represent in a 3D model, such as structural columns, one of the walls, the ceiling beams, fixtures, and glazed doors. The dimensions of the room are 690 × 525 × 275 cm.
4. Building facade. One of the Cracow University of Technology's campus buildings was used to analyse the LiDAR sensor's performance in outdoor scenes with an object of significant size. Purchased by the university in 2010, this 1918 military building was initially used as an artillery equipment warehouse and later as a military bathhouse and laundry. It has been adapted for teaching purposes while retaining its original external form. The main problem in this case was posed by the building's dimensions, which exceed the theoretical range of the sensor. For a slight simplification, only one of the elevations (the southern one, which was preserved as a "witness to history" during the building's adaptation while the others were remodelled) was selected for comparison. Even so, it posed a challenge to the methods tested because of the historical architectural detailing that gives it a spatial, three-dimensional form. In the articulation of the gable elevation of the two-story building, covered with a gable roof, four symmetrically spaced pilasters stand out and divide it into three fields. The central part contains two windows (one above the other) with arched brick lintels; above them, there is a small round window with a brick frame. Doors, also topped with brick arches, are visible in the side fields. All these apertures are now covered with blind windows. A diagonal strip of windows along the eaves was introduced to provide light to the rooms. The dimensions of the selected fragment of the building are 1740 × 123 × 1203 cm (Figure 2d).
All scans were performed during the summer months of 2021 on the campus of the Cracow University of Technology. During summer, trees sport full crowns of leaves and there is a lot of sun, creating demanding conditions for image acquisition and photogrammetric reconstruction: lighting with high contrast and brightness, and lush vegetation obscuring the buildings. In order to properly compare the properties of the technologies under study, scanning with the iPad device and photogrammetric acquisition were performed under the same lighting conditions. These conditions varied depending on the object (e.g., artificial light for the room interior, natural light for the garden sculpture), and in the case of natural lighting, different acquisition conditions were tested: sunlight, shadow, and twilight.

2.3. Photogrammetric Reconstruction

Photogrammetric reconstruction entailed taking photographs by using a DSLM-type camera with a fixed focal length lens. Registration was carried out without the use of additional supporting markers. Photogrammetric reconstruction was performed by using Agisoft Metashape version 1.6.2 software [47], and each time the process went through multiple steps, starting with matching photographs, generating a sparse point cloud on their basis, creating a depth map for each of the matched photographs, and finally creating a dense point cloud.
Once the photographs were loaded into the program, their suitability for further processing was evaluated based on their sharpness. The algorithms used for feature extraction and descriptor creation resemble the approach known from the scale-invariant feature transform (SIFT) method [48]; however, due to its closed structure, the software should be treated as a black-box tool. The possibility of defining reconstruction parameters is limited to essential elements; in any case, the values of these parameters were kept constant across all performed reconstructions.
Agisoft Metashape usually generates very dense point clouds [49], which also contain redundant information in the form of noise. To eliminate it, a "confidence" parameter computed for each point is used, defined as the number of depth maps (created from the input images) in which the given point occurs. The values of this coefficient depend on the number and coverage of the input images; in practice, they usually do not exceed 10% of the number of input images. Points with high coefficient values are treated as more accurate and reliable, whereas those with low confidence can be rejected in the filtering process. In this study, points with confidence levels of 1–2 were discarded, which significantly reduced the noise level in the photogrammetric reconstructions.
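For illustration, the described pipeline can also be scripted through the Metashape Python API rather than the GUI. The sketch below is based on the publicly documented 1.6 API; file names are placeholders, and the confidence-filtering calls follow Agisoft's published scripting recipe rather than the authors' actual (GUI-based) workflow:

    import Metashape  # Agisoft Metashape Professional Python module

    doc = Metashape.Document()
    chunk = doc.addChunk()
    chunk.addPhotos(["IMG_0001.JPG", "IMG_0002.JPG"])   # input photographs

    chunk.matchPhotos(downscale=1)      # feature detection and matching
    chunk.alignCameras()                # camera poses + sparse point cloud
    chunk.buildDepthMaps(downscale=2)   # depth map per aligned photograph
    chunk.buildDenseCloud(point_confidence=True)   # dense cloud with per-point confidence

    # discard low-confidence points (levels 1-2), as described above
    chunk.dense_cloud.setConfidenceFilter(1, 2)
    chunk.dense_cloud.removePoints(list(range(128)))   # remove all classes within the filter
    chunk.dense_cloud.resetFilters()
    doc.save("reconstruction.psx")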

2.4. LiDAR Measurements with Apple iPad Pro

For this study, scans were obtained by using an Apple iPad Pro 12.9 128 GB (Apple Inc., Cupertino, CA, USA) [43] running iOS version 15.0.1. Apple does not provide exact specifications of the sensors the device is fitted with. From the laconic statements available, it can be deduced that the sensor is based on ToF technology and that its range is about 5 m [43]. Through reverse engineering, independent researchers have determined that the LiDAR module consists of an emitter (a vertical-cavity surface-emitting laser with a diffractive optics element, VCSEL DOE) and a receiver (a single-photon avalanche diode array-based near-infrared CMOS image sensor, SPAD NIR CMOS) based on direct time-of-flight technology [44].
Registration quality with LiDAR sensors is only slightly affected by lighting conditions, so 3D scanning under varying natural light conditions in outdoor applications is possible while respecting possible imperfections in the quality of the resulting textures [45]. This is quite a promising property compared to photogrammetric techniques, where unfavourable lighting conditions, such as harsh shadows or backlighting, negatively affect the quality of the reconstructions obtained [50].
Of the vast and growing range of applications using the LiDAR sensor for 3D scanning, four were selected after initial testing: 3D Scanner App [51], Polycam [52], Scaniverse [53], and SiteScape [54]. After many tests, the Scaniverse app was selected for further research, as it allows obtaining 3D models in the form of point clouds or meshes with high accuracy. Owing to its reliance on a LiDAR sensor, it is supported by the following devices: the iPhone 12 Pro, the iPhone 12 Pro Max, the iPhone 13 Pro, the iPhone 13 Pro Max, and the iPad Pro (2020, 2021). Scaniverse allows scanned models to be visualised both in 3D and directly in augmented reality. The app offers a Pro version that exports high-resolution models to the following formats: FBX, OBJ, GLB, USDZ, STL, PLY, and LAS. An attractive Scaniverse option is the ability to limit the scanning range so as to skip areas that are not needed; for small models, the maximum scanning range of 5 m is unnecessary, as only a specific object, not a large area, needs to be scanned. After scanning, Scaniverse offers saving the model in one of three resolutions: standard (2k texture and 12-mm grid), high (4k texture and 8-mm grid), and ultra (8k texture and 6-mm grid). The scanned model can also be edited within the application, e.g., improved in appearance or trimmed. Another essential feature is the ability to measure the actual size of the scanned model. All these features made Scaniverse the most adequate, among the many available applications, for achieving the research aim.
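Scaniverse's PLY export makes the scans directly usable in common analysis tools. A small sketch (file name hypothetical) of loading an exported model with the open-source Open3D library, in preparation for the comparisons described later:

    import open3d as o3d

    # load a Scaniverse "ultra" export (PLY is one of the Pro formats listed above)
    mesh = o3d.io.read_triangle_mesh("scan_ultra.ply")
    cloud = mesh.sample_points_uniformly(number_of_points=100_000)  # point sample of the surface
    print(mesh)  # prints vertex and triangle counts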

2.5. Reference Data

The results were verified by using two approaches. For objects with highly complex geometries (sculptures at different scales), 3D scans were used. For geometricised, cuboid objects, 3D CAD models were developed.
A Konica Minolta scanner was used to create a reference surface model of the gypsum figurine (Figure 3a). The boundary dimensions (given in Section 2.2) were additionally verified by using callipers: the three components of the sculpture were found to have heights of 80.5, 90.0, and 84.4 mm, with a measurement accuracy of 0.1 mm.
The second object, a sculpture on a pedestal located in a park, also had a complex geometry. The Creaform Academia structured-light 3D scanner was used to acquire its measurement data. The surface model produced had an accuracy of 1 mm. The reference mesh featured over 3.6 million vertices (Figure 3b).
CAD techniques worked very well for modelling the interior of the lobby room. A laser rangefinder and a measuring tape were used as measuring tools. The measurement accuracy was 1 cm (Figure 3c).
The same technique and tools were used to generate a model of part of the building. In this case, there was limited access to the upper parts of the facade. The surface model was compared to a point cloud in FLS format obtained from a laser scan. The scan, which was obtained courtesy of Bimtelligent (www.bimtelligent.pl, accessed on 4 October 2021), was taken with the scanner positioned approximately 7.3 m from the wall, in the middle of the wall width. By superimposing the model on the point cloud, the points of the surface model located in the higher parts of the building could be verified and considered reliable (Figure 3d).

2.6. Fit Measures Used

Comparisons between the models were performed by using CloudCompare software. The distance to the reference mesh surface was determined for every point of the photogrammetric reconstructions and the iPad LiDAR scans. From these distances, a root mean square (RMS) matching measure was computed, which can be formalised as

D_{RMS} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \lVert p_i - m_i \rVert^2}

where n is the number of reconstruction points, p_i is a reconstruction point, and m_i is the point closest to p_i on the reference model. The distribution of the per-point distances \lVert p_i - m_i \rVert is also used for the analysis; histograms showing the fit of the reconstruction to the model, as well as positional statistics describing the extent of the fit, are built from it.
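A simple way to approximate this measure is to represent the reference mesh by a dense point sample and use nearest-neighbour distances; a Python sketch follows (CloudCompare computes true point-to-surface distances, so this is only an approximation):

    import numpy as np
    from scipy.spatial import cKDTree

    def d_rms(cloud, reference_pts):
        """D_RMS between reconstruction points (n, 3) and a dense
        point sampling (m, 3) of the reference model surface."""
        tree = cKDTree(reference_pts)
        dist, _ = tree.query(cloud)               # ||p_i - m_i|| for every point
        return np.sqrt(np.mean(dist ** 2)), dist  # RMS measure + per-point distances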

2.7. Methods of Statistical Analysis

The statistical comparison method used was described at length by Łabędź et al. [36]. In this method, the analysed variable is treated as positional: the smaller the distance between the obtained point cloud and the reference model, the more correct the result. The statistical values considered are the quartiles and the interquartile range. The quartiles provide information about how far from the reference model the nearest 25% (Q1), 50% (Q2, the median), and 75% (Q3) of the points lie. They are calculated from the absolute distances of the cloud points to the reference model, so the side on which a point lies (in front of or behind the model) is not taken into account; such information is of little importance in the context of mapping correctness. The smaller the values of these quantities, the better the result, because more points lie closer to the reference model. Another analysed value is the interquartile range (IQR), which is a measure of dispersion. Unlike the quartiles, it is calculated on the signed distance data and is defined as the difference between the third and first quartiles: IQR = Q3 − Q1. From this definition, it follows that 50% of the points lie within an interval of width IQR around the model, so the IQR can be treated as a measure of diversity: a narrower interval means less diversity in the analysed variable and, in the presented case, a greater concentration of points close to the reference model, and thus a better score. The last statistical value examined is the number of points lying beyond a particular distance from the reference model. The average of the Q3 values calculated on the unsigned data was taken as this distance (σ), so these are points whose distance from the reference data is significant. A larger number of such points indicates a lower accuracy of the experimental data [36].
One value can be determined directly, namely the number of points in the resulting cloud, which is also not without significance. Photogrammetric methods tend to generate very dense clouds, which may or may not indicate higher model accuracy. Conversely, a small number of points for models of significant size may indicate a high degree of data generalisation.
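The positional statistics described above reduce to a few lines of NumPy; a minimal sketch, with signed distances as input and σ supplied per experiment:

    import numpy as np

    def positional_stats(signed_d, sigma):
        """Quartiles of |d|, IQR of signed d, and the percentage of
        points beyond sigma, 2*sigma, and 3*sigma."""
        abs_d = np.abs(signed_d)
        q1, q2, q3 = np.percentile(abs_d, [25, 50, 75])
        iqr = np.subtract(*np.percentile(signed_d, [75, 25]))  # Q3 - Q1 on signed data
        beyond = {k: float((abs_d > k * sigma).mean() * 100) for k in (1, 2, 3)}
        return {"Q1": q1, "median": q2, "Q3": q3, "IQR": iqr, "beyond_sigma": beyond}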

3. Results

The object of this study was to compare 3D scans acquired by using the LiDAR sensor of the Apple iPad Pro with photogrammetric reconstructions, in order to verify the measurement capabilities of the device when applied to specific object groups. The experimental data collected by photogrammetric methods and with the aforementioned device were compared with the reference data presented in Section 2.5. To compare the datasets, the dense point cloud created by photogrammetric methods and the cloud created with the Apple iPad Pro were registered to the reference models by using the ICP method [55]. The distance between each analysed cloud and the reference model was then calculated. The resulting data were used to perform both visual and statistical analyses.
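The ICP registration step can be illustrated with Open3D (the study used CloudCompare's ICP implementation; the threshold and estimation choices here are assumptions):

    import numpy as np
    import open3d as o3d

    def register_to_reference(source, target, threshold=0.05):
        """Rigidly align a reconstruction cloud (source) to the
        reference cloud (target) with point-to-point ICP before
        computing cloud-to-model distances."""
        result = o3d.pipelines.registration.registration_icp(
            source, target, threshold, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        return source.transform(result.transformation)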

3.1. Measuring Accuracy of the iPad LiDAR Sensor

The first step in testing the iPad LiDAR sensor was to compare measurement accuracy by using reference spheres. For this purpose, we used reference spheres provided by the Laboratory of Coordinate Metrology of Cracow University of Technology (Figure 1). The calibration set consisted of two matte metal balls with calibrated diameters, whose centres were located at the distance dk = 269.4 mm. Our goal was to estimate the distance between the spheres based on iPad LiDAR scans and compare the readings with the nominal value dk. To this end, 20 scans of the surface of the reference spheres were taken, of which two were rejected due to discontinuities in the reconstructed surface. This number is more than sufficient, as seven measurements suffice to assess the measurement accuracy of a device [56]. Zeiss GOM software was then used to fit sphere equations to the scanned surfaces. In each case, the distance between the centres of the fitted spheres was determined (Figure 4). From these 18 measurements, the mean distance between the sphere centres and the standard deviation were estimated: d_avg = 270.81 mm, std = 4.06 mm. The difference between the mean value d_avg and the nominal value dk was Δk = 1.41 mm, which was 0.52% of the nominal distance dk.
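The error figures follow directly from the 18 accepted centre-to-centre distances; a minimal sketch of the arithmetic (the individual distance values themselves are not reproduced here):

    import numpy as np

    def sphere_distance_error(distances_mm, d_nominal=269.4):
        """Mean centre distance, sample standard deviation, and the
        absolute and relative deviation from the nominal value d_k."""
        d = np.asarray(distances_mm, dtype=float)
        d_avg, std = d.mean(), d.std(ddof=1)
        delta = d_avg - d_nominal             # reported: 270.81 - 269.4 = 1.41 mm
        return d_avg, std, delta, 100.0 * delta / d_nominal   # ~0.52 %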

3.2. Comparison to the Model

Model preparation based on experimental data sometimes involved repeating the acquisition process many times. Although the procedure of collecting photographs for photogrammetric reconstruction has been known and studied for years, the use of an Apple iPad Pro equipped with a LiDAR sensor required multiple attempts to acquire the necessary experience. The first necessary element of the procedure was the selection of the appropriate software, described in Section 2.4, with which the acquisition procedure was then performed. During its execution, many scanning attempts failed. This was particularly evident in the case of the fine detail and the small architectural object, for which up to a dozen attempts were necessary.
The data obtained were subjected to a preliminary visual verification. At this stage, models containing coarse errors in spatial data were rejected (Figure 5).
The intended purpose of the iPad distance sensor predestines it for measuring medium-sized objects and interiors up to 5 m away from the sensor. For small objects measuring a few centimetres, the measurement density is insufficient, resulting in distortions and structural discontinuities (Figure 5a,b). For larger objects, which are difficult to cover within the sensor's range, errors related to the acquisition mechanics occur, caused by quick or abrupt sensor movement during data capture; these result in unnatural shifts of the structure and the duplication of fragments (Figure 5c,d).
In the next stage of verification, the models were evaluated in terms of their IQR and Q3 values, and those with the best values were selected for the final presentation. A similar procedure was carried out for the data collected by photogrammetric reconstruction; in this case, however, no dataset contained coarse errors, and at most three acquisition repetitions were needed.

3.2.1. Figure Three Wise Monkeys—Fine Detail

The smallest model considered was a small gypsum figurine, characterised by a significant number of fine details and surface irregularities, which presented a challenge for the data acquisition methods. The visualisations clearly show the difference in data density between the photogrammetric model and the one coming from the Apple iPad Pro (Figure 6). The cloud derived from the iPad has apparent regularity features (Figure 6e), and the mesh constructed from it is characterised by significant data generalisation, leading to the loss of a fair amount of detail. This is particularly evident in the right part of the surface model (Figure 6d). Interestingly, the distances of the measurement points obtained from the iPad to the reference model were not very large: 25% of them were less than 1 mm from the model, and only the farthest 25% were more than about 3 mm away (Table 1). Careful analysis of the distribution of the cloud points shows that the worst results were obtained at the various depressions of the model (cf. Figure 6d,e), whereas relatively good results were obtained at the convexities. It should be reiterated that the presented model was the best one obtained by using the Apple iPad Pro.
The photogrammetric reconstruction of the figure shows surprisingly high accuracy (Figure 6a–c), despite being obtained from only 19 photographs. The number of points that can be considered to lie at a considerable distance from the reference model is negligible (Table 2). It should be noted, however, that the σ value of 0.15 cm was, according to the methodology adopted, influenced by both the high Q3 value for the iPad model and the very low value for the photogrammetry. Not surprisingly, more than 50% of the points of the iPad model lay beyond the σ value. The data presented indicate that the Apple iPad Pro is unsuitable for imaging fine architectural details and ornaments.

3.2.2. Garden Architectural Object–Bust on Plinth

The bust of T. Kościuszko, placed on a pedestal, served as an example of a small built object. It has a relatively complex surface: a non-convex structure with multiple grooves and concavities, which proved challenging to model. The Creaform Academia structured-light handheld 3D scanner was used for this purpose, reconstructing the object surface with an accuracy of 1 mm. The outdoor location of the sculpture, among trees, made it problematic to ensure reasonably uniform lighting conditions. The influence of light reflections was particularly troublesome when acquiring images for photogrammetric reconstruction. For the iPad LiDAR sensor, scanning the surface of the plinth was very problematic, as its glossy granite pattern introduced distortions in the surface reading.
Due to the varied colours of the surface details, the photogrammetric reconstruction (Figure 7a,b) was able to produce a high-resolution model (approximately 200,000 points). Unfortunately, the need to acquire the images in the evening, due to better lighting conditions, resulted in poorer colour reproduction (Figure 7a). Mapping the object by using the iPad Pro LiDAR resulted in a much sparser, but also more regular, point cloud structure (Figure 7e). The textured mesh showed surface discontinuities and irregularities (Figure 7d), which nevertheless did not deform it as drastically as observed for the small object (Figure 6d). The discontinuities and minor duplications seen in the eye and mouth area are related to deformation of the texture, not of the geometric mesh (Figure 7d); in this area, the mesh matched the reference model quite well, as illustrated by the blue points in Figure 7e.
The point cloud obtained with photogrammetric methods was much sparser than those obtained in the other analysed cases, yet it was still roughly 20 times denser than the one obtained with the iPad (Table 3). In the statistical data, we can observe similar accuracy of both methods for the first quartile and the median; for the iPad data, these are the points lying on the surfaces of the plinth walls. On the other hand, the data for the third quartile (Table 3), as well as the σ-based outlier counts for the iPad (Table 4), show the existence of a considerable number of inaccurately reconstructed points, which can be attributed to the already mentioned sharp edges. However, it is noteworthy that beyond the 2σ value, i.e., beyond a 1-cm distance from the reference model, the number of these points is no longer significant, and beyond 3 cm there is not a single point in the iPad data (Table 4).

3.2.3. CUT Building Hallway–Interior

The indoor space, a fragment of a hall leading to a series of classrooms, had a slightly different character than the other analysed models. First of all, it was an enclosed space, which made it difficult to present its overall appearance in visualisation drawings. For this reason, it was decided to present a fragment of it, cut diagonally, without the data for the ceiling. The size of the room (690 × 525 × 275 cm) and the multitude of difficult-to-map elements made it necessary to move along the walls, as well as to penetrate recesses and niches, during data acquisition with the Apple iPad Pro. This type of measurement procedure may have caused a build-up of errors when certain sections of the room were scanned multiple times, which could not be avoided due to the room's layout. This resulted in the rejection of several models due to coarse errors such as doubled wall geometry. Another problem was the mixing of different types of lighting, i.e., artificial light and natural light coming from behind the glazed doors.
In the presented visualisation (Figure 8b,d), one can again notice a significant difference in the density and regularity of the obtained point clouds. This is confirmed by the numerical data (Table 5), which show that the cloud derived from the photogrammetric reconstruction contains as much as 220 times as many points. The mapping accuracy is already suggested by the shapes of the histograms: the histogram for the iPad data (Figure 8f) is stretched much more strongly toward higher values than that for the photogrammetric data (Figure 8c). Again, confirmation can be found in the statistics (Table 5): 75% of the cloud points obtained from photogrammetry lie within 2.8 cm of the model, whereas for the cloud from the iPad this value is 6.6 cm. It should be noted that, compared to the room size, these values are not large: 0.41% and 0.96% of the largest dimension, respectively. However, the data spread was almost three times larger for the cloud coming from the Apple iPad Pro.
The regularity of the point cloud obtained with the Apple iPad Pro made the quality of the mesh based on this cloud much higher than that of the mesh based on the photogrammetric data (Figure 8a,d). The photogrammetric data showed various distortions on smooth surfaces and in insufficiently illuminated areas. The iPad data acquisition procedure, which required penetrating recesses, resulted in a much more accurate reproduction of such elements.
Analysis of the deviations beyond the σ value of 5 cm (Table 6) led, in this case, to interesting conclusions. Although for the data collected with the Apple iPad Pro the number of points above the σ value was significant (more than 41%), the number of points above the 2σ value decreases considerably, taking on acceptable values at 3σ. For photogrammetry, these values were very low in all ranges.

3.2.4. Building Facade

The building facade was the largest object selected for analysis. As with the small architectural object, data collection had to occur under outdoor conditions that were consistent for both acquisition methods. The first observation from the models obtained was the inability to scan the entire facade by using the Apple iPad Pro (Figure 9), due to the roughly 5-m range of the LiDAR sensor. For this reason, in further considerations we limited the analysed object to the parts visible in all reconstructions (Figure 10). The quality of the acquired data was not, however, affected by the atmospheric conditions during acquisition: both under harsh sunlight and under overcast conditions, the device mapped the geometry correctly, although the textures differed. For photogrammetry, the soft light of an overcast day provided better acquisition conditions. The visualisation (Figure 10b) shows that the representation of cavities posed the biggest problem for both methods; here, the distance from the reference model was the largest.
The histograms of the distances of the collected experimental data from the reference model had reasonably similar shapes (Figure 10c,f). However, it should be kept in mind that the numbers of points in the two models differed by roughly two orders of magnitude (Table 7), as was observed in all the presented examples. The difference in the density of the obtained data is also clearly visible in Figure 10b,e.
Analysing the data statistically, one can notice very similar values of the Q1 coefficient for both experimental models (Table 7). This means that 25% of the reconstructed points lay within about 1 cm of the reference model, which, considering the dimensions of the object, should be considered a good result. Compared to the room model, the values of the statistical coefficients were stable and close to each other; the difference in the IQR coefficient was insignificant, compared with more than 300% in the earlier example. Moreover, the analysis of the number of outlying points (Table 8) showed that the accuracy of data representation was similar and relatively high for both methods. For the σ value (also 5 cm in this case), the share of points above it was at a similar level of about 20% for both the photogrammetric and the iPad reconstruction, so in this case the differences can be considered marginal.
The facade of the building that was chosen for the experiments featured moderate geometric complexity. Some of the architectural details were fairly fine; nonetheless, the cavities on the elevation were not too deep. Moreover, there were no nooks and recesses that could not be accessed by the measurement devices. It should be noted that there is a great variety of architectural styles, ranging from those characterised by the simplest form (such as modernism) to the most formally complex (such as baroque). For the latter, the approach suited to small-scale details and garden sculpture would be more appropriate, with the caveat, however, that the rear parts of sculptures adjacent to the facade may be impossible to represent in models.

4. Discussion and Conclusions

Mobile consumer devices equipped with various types of ToF sensors are entering the market increasingly prominently. Their role is to support the acquisition of depth maps and thus facilitate the creation of digital 3D models of the surrounding reality, which can be used in a variety of ways. One such device is the Apple iPad Pro, which comes equipped with a LiDAR sensor based on dToF technology. The presented research aimed to verify the measurement capabilities of this device against both reference models and models derived from photogrammetric reconstruction or laser scanners. Four types of architectural objects were selected for this research.
The first notable element when preparing spatial data acquisition with the device is the multitude of applications offering such functionality. It was not the authors' intention to compare different applications, so one of them, Scaniverse, was chosen after preliminary tests. The data acquisition process was carried out without controlling the lighting conditions, i.e., the conditions were not tailored to a specific acquisition. Obtaining an optimal result with the Apple iPad Pro was not an easy task; sometimes several attempts were necessary for the model to meet expectations at least to some extent. The most frequent errors concerned geometry multiplication and the incorrect merging of adjacent model fragments. The accumulation of measurement errors made during the scanning process was another characteristic effect.
Depending on the application and method used, the result of spatial data acquisition can be either a point cloud or a polygonal mesh whose topology is derived from the cloud structure. The two technologies produce meshes with different densities and structures: meshes derived from photogrammetry are dense and their surfaces are rough, whereas the meshes generated by the iPad are smooth and highly generalised. This observation was common to all four types of architectural objects tested.
Based on a statistically significant series of measurements made on the reference spheres, the average measurement error of the iPad LiDAR was estimated. Afterward, models obtained with the device were compared with reference models to gauge accuracy. Statistical analysis showed that the high cloud density derived from photogrammetry was accompanied by high accuracy. The situation was different for the data coming from the Apple iPad Pro: for the smallest objects (fine detail, ornament), the deviation analysis showed high inconsistency with the reference models, whereas for larger objects (interior, building facade), the statistics approached the values of the photogrammetric data. It should be emphasised, however, that the statistics relate to the distance from the reference model at the nodes of the regular grid derived from the experimental data. The already mentioned low data density and the related generalisation negatively influence the quality of the resulting data, especially in the case of small objects with a high level of detail.
These properties determine the appropriate uses of the technologies studied. Scanning with an iPad may be sufficient for acquiring objects with smooth surfaces, or objects for which the fine surface structure is not essential; the generalisation it provides may even be desirable in many situations. For example, in the case of a wall, floor, or ceiling surface, the edge dimensions are crucial, not the surface texture or minor imperfections and cavities. The iPad therefore works quite well for indoor space mapping. Unlike photogrammetry, the data from such a scan is already correctly scaled, and no loss of representation continuity is observed for colour-uniform surfaces. However, one should keep in mind the 5-m range limitation of the iPad's LiDAR sensor, which significantly limits the usefulness of the device in field spatial data acquisition.

Author Contributions

Conceptualization, K.S., P.Ł. and P.O.; methodology, K.S., P.Ł. and P.O.; data acquisition, K.S., P.Ł., A.O., P.O., D.R. and K.O.; data processing, K.S. and P.Ł.; investigation, K.S. and P.Ł.; writing—original draft preparation, K.S., P.Ł., A.O., P.O. and D.R.; writing—review and editing, K.S., P.Ł., A.O. and P.O.; visualization, K.S., P.Ł. and P.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data are available from the corresponding author upon reasonable request.

Acknowledgments

We would like to express our sincere thanks to Bimtelligent S.C. for providing the laser scan of the building facade.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
LiDAR: Light Detection and Ranging
ToF: Time of Flight
dToF: direct Time of Flight
DSLM: Digital Single-Lens Mirrorless
SIFT: Scale-Invariant Feature Transform
CAD: Computer-Aided Design
RMS: Root Mean Square
IQR: Interquartile Range
ICP: Iterative Closest Point
CUT: Cracow University of Technology

References

  1. Gosztyła, M.; Pásztor, P. Konserwacja i Ochrona Zabytków Architektury, 1st ed.; Oficyna Wydawnicza Politechniki Rzeszowskiej: Rzeszów, Poland, 2014; pp. 93–100. [Google Scholar]
  2. Apollonio, F.I.; Fantini, F.; Garagnani, S.; Gaiani, M. A Photogrammetry-Based Workflow for the Accurate 3D Construction and Visualization of Museums Assets. Remote Sens. 2021, 13, 486. [Google Scholar] [CrossRef]
  3. Kłopotowska, A.; Kłopotowski, M. Dotykowe Modele Architektoniczne w Przestrzeniach Polskich Miast, 1st ed.; Oficyna Wydawnicza Politechniki Białostockiej: Białystok, Poland, 2018; Volume 1, pp. 27–82. [Google Scholar]
  4. Donato, E.; Giuffrida, D. Combined Methodologies for the Survey and Documentation of Historical Buildings: The Castle of Scalea (CS, Italy). Heritage 2019, 2, 2384–2397. [Google Scholar] [CrossRef] [Green Version]
  5. Eastman, C.; Teicholz, P.; Sacks, R.; Liston, K. BIM Handbook: A Guide to Building Information Modeling for Owners, Managers, Designers, Engineers and Contractors; John Wiley & Sons: Hoboken, NJ, USA, 2008. [Google Scholar]
  6. Attenni, M. Informative Models for Architectural Heritage. Heritage 2019, 2, 2067–2089. [Google Scholar] [CrossRef] [Green Version]
  7. Croce, V.; Caroti, G.; De Luca, L.; Jacquot, K.; Piemonte, A.; Véron, P. From the Semantic Point Cloud to Heritage-Building Information Modeling: A Semiautomatic Approach Exploiting Machine Learning. Remote Sens. 2021, 13, 461. [Google Scholar] [CrossRef]
  8. Reinoso-Gordo, J.F.; Gámiz-Gordo, A.; Barrero-Ortega, P. Digital Graphic Documentation and Architectural Heritage: Deformations in a 16th-Century Ceiling of the Pinelo Palace in Seville (Spain). ISPRS Int. J. Geo-Inf. 2021, 10, 85. [Google Scholar] [CrossRef]
  9. Partovi, T.; Fraundorfer, F.; Bahmanyar, R.; Huang, H.; Reinartz, P. Automatic 3-D Building Model Reconstruction from Very High Resolution Stereo Satellite Imagery. Remote Sens. 2019, 11, 1660. [Google Scholar] [CrossRef] [Green Version]
  10. Bacharidis, K.; Sarri, F.; Paravolidakis, V.; Ragia, L.; Zervakis, M. Fusing Georeferenced and Stereoscopic Image Data for 3D Building Façade Reconstruction. ISPRS Int. J. Geo-Inf. 2018, 7, 151. [Google Scholar] [CrossRef] [Green Version]
  11. Hu, P.; Yang, B.; Dong, Z.; Yuan, P.; Huang, R.; Fan, H.; Sun, X. Towards Reconstructing 3D Buildings from ALS Data Based on Gestalt Laws. Remote Sens. 2018, 10, 1127. [Google Scholar] [CrossRef] [Green Version]
  12. Zheng, Y.; Weng, Q.; Zheng, Y. A Hybrid Approach for Three-Dimensional Building Reconstruction in Indianapolis from LiDAR Data. Remote Sens. 2017, 9, 310. [Google Scholar] [CrossRef] [Green Version]
  13. Jung, J.; Jwa, Y.; Sohn, G. Implicit Regularization for Reconstructing 3D Building Rooftop Models Using Airborne LiDAR Data. Sensors 2017, 17, 621. [Google Scholar] [CrossRef]
  14. Yang, B.; Huang, R.; Li, J.; Tian, M.; Dai, W.; Zhong, R. Automated Reconstruction of Building LoDs from Airborne LiDAR Point Clouds Using an Improved Morphological Scale Space. Remote Sens. 2017, 9, 14. [Google Scholar] [CrossRef] [Green Version]
  15. Skabek, K.; Tomaka, A. Comparison of photogrammetric techniques for surface reconstruction from images to reconstruction from laser scanning. Theor. Appl. Inform. 2014, 26, 161–178. [Google Scholar]
  16. Ozimek, A.; Ozimek, P.; Skabek, K.; Łabędź, P. Digital Modelling and Accuracy Verification of a Complex Architectural Object Based on Photogrammetric Reconstruction. Buildings 2021, 11, 206. [Google Scholar] [CrossRef]
  17. Knyaz, V.A.; Kniaz, V.V.; Remondino, F.; Zheltov, S.Y.; Gruen, A. 3D Reconstruction of a Complex Grid Structure Combining UAS Images and Deep Learning. Remote Sens. 2020, 12, 3128. [Google Scholar] [CrossRef]
  18. Carrivick, J.L.; Smith, M.W.; Quincey, D.J. Structure from Motion in the Geosciences; John Wiley & Sons: Hoboken, NJ, USA, 2016. [Google Scholar]
  19. Klein, N.; Li, N.; Becerik-Gerber, B. Imaged-based verification of as-built documentation of operational buildings. Autom. Constr. 2012, 21, 161–171. [Google Scholar] [CrossRef]
  20. Jebara, T.; Azarbayenjani, A.; Pentl, A. 3D structure from 2D motion. IEEE Signal Process. Mag. 1999, 16, 66–84. [Google Scholar] [CrossRef] [Green Version]
  21. Nocerino, E.; Poiesi, F.; Locher, A.; Tefera, Y.T.; Remondino, F.; Chippendale, P.; Gool, L.V. 3D Reconstruction with a Collaborative Approach Based on Smartphones and a Cloud-Based Server. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 187–194. [Google Scholar] [CrossRef] [Green Version]
  22. Jacob-Loyola, N.; Muñoz-La Rivera, F.; Herrera, R.F.; Atencio, E. Unmanned Aerial Vehicles (UAVs) for Physical Progress Monitoring of Construction. Sensors 2021, 21, 4227. [Google Scholar] [CrossRef]
  23. Zhang, H.; Bauters, M.; Boeckx, P.; Van Oost, K. Mapping Canopy Heights in Dense Tropical Forests Using Low-Cost UAV-Derived Photogrammetric Point Clouds and Machine Learning Approaches. Remote Sens. 2021, 13, 3777. [Google Scholar] [CrossRef]
  24. Mohammadi, M.; Rashidi, M.; Mousavi, V.; Karami, A.; Yu, Y.; Samali, B. Quality Evaluation of Digital Twins Generated Based on UAV Photogrammetry and TLS: Bridge Case Study. Remote Sens. 2021, 13, 3499. [Google Scholar] [CrossRef]
  25. Cali, M.; Ambu, R. Advanced 3D Photogrammetric Surface Reconstruction of Extensive Objects by UAV Camera Image Acquisition. Sensors 2018, 18, 2815. [Google Scholar] [CrossRef] [PubMed]
  26. Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79–97. [Google Scholar] [CrossRef] [Green Version]
  27. Nex, F.; Remondino, F. UAV for 3D mapping applications: A review. Appl. Geomat. 2013, 6, 1–15. [Google Scholar] [CrossRef]
  28. Jiménez-Jiménez, S.I.; Ojeda-Bustamante, W.; Marcial-Pablo, M.d.J.; Enciso, J. Digital Terrain Models Generated with Low-Cost UAV Photogrammetry: Methodology and Accuracy. ISPRS Int. J. Geo-Inf. 2021, 10, 285. [Google Scholar] [CrossRef]
  29. Di Angelo, L.; Di Stefano, P.; Guardiani, E.; Morabito, A.E. A 3D Informational Database for Automatic Archiving of Archaeological Pottery Finds. Sensors 2021, 21, 978. [Google Scholar] [CrossRef] [PubMed]
  30. Goedert, J.; Bonsell, J.; Samura, F. Integrating Laser Scanning and Rapid Prototyping to enhance Construction Modeling. J. Archit. Eng. 2005, 11, 71–74. [Google Scholar] [CrossRef]
  31. Ma, Y.-P. Extending 3D-GIS District Models and BIM-Based Building Models into Computer Gaming Environment for Better Workflow of Cultural Heritage Conservation. Appl. Sci. 2021, 11, 2101. [Google Scholar] [CrossRef]
  32. Moyano, J.; Nieto-Julián, J.E.; Bienvenido-Huertas, D.; Marín-García, D. Validation of Close-Range Photogrammetry for Architectural and Archaeological Heritage: Analysis of Point Density and 3D Mesh Geometry. Remote Sens. 2020, 12, 3571. [Google Scholar] [CrossRef]
  33. Burdziakowski, P.; Bobkowska, K. UAV Photogrammetry under Poor Lighting Conditions—Accuracy Considerations. Sensors 2021, 21, 3531. [Google Scholar] [CrossRef]
  34. Grau, J.; Liang, K.; Ogilvie, J.; Arp, P.; Li, S.; Robertson, B.; Meng, F.-R. Improved Accuracy of Riparian Zone Mapping Using Near Ground Unmanned Aerial Vehicle and Photogrammetry Method. Remote Sens. 2021, 13, 1997. [Google Scholar] [CrossRef]
  35. Li, M.; Li, Z.; Liu, Q.; Chen, E. Comparison of Coniferous Plantation Heights Using Unmanned Aerial Vehicle (UAV) Laser Scanning and Stereo Photogrammetry. Remote Sens. 2021, 13, 2885. [Google Scholar] [CrossRef]
  36. Łabędź, P.; Skabek, K.; Ozimek, P.; Nytko, M. Histogram Adjustment of Images for Improving Photogrammetric Reconstruction. Sensors 2021, 21, 4654. [Google Scholar] [CrossRef] [PubMed]
  37. Farella, E.M.; Torresani, A.; Remondino, F. Refining the Joint 3D Processing of Terrestrial and UAV Images Using Quality Measures. Remote Sens. 2020, 12, 2873. [Google Scholar] [CrossRef]
  38. McManamon, P.F. LiDAR Technologies and Systems; SPIE Press: Bellingham, WA, USA, 2019. [Google Scholar]
  39. Gatziolis, D.; Andersen, H.-E.-E. A Guide to LIDAR Data Acquisition and Processing for the Forests of the Pacific Northwest; U.S. Department of Agriculture, Forest Service, Pacific Northwest Research Station: Portland, OR, USA, 2008. [Google Scholar]
  40. Vogt, M.; Rips, A.; Emmelmann, C. Comparison of iPad Pro®’s LiDAR and TrueDepth Capabilities with an Industrial 3D Scanning Solution. Technologies 2021, 9, 25. [Google Scholar] [CrossRef]
  41. Schuon, S.; Theobalt, C.; Davis, J.; Thrun, S. High-quality scanning using time-of-flight depth superresolution. In Proceedings of the 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Anchorage, AK, USA, 23–28 June 2008; IEEE: Piscataway, NJ, USA, 2008. ISBN 9781424423392. [Google Scholar]
  42. vGis. Available online: https://www.vgis.io/2020/12/02/lidar-in-iphone-and-ipad-spatial-tracking-capabilities-test-take-2/ (accessed on 23 September 2021).
  43. iPad Pro-Apple. Available online: https://www.apple.com/ipad-pro/ (accessed on 23 September 2021).
  44. Junko Yoshida, EETimes. Available online: https://www.eetimes.com/breaking-down-ipad-pro-11s-lidar-scanner/ (accessed on 23 September 2021).
  45. Gollob, C.; Ritter, T.; Kraßnitzer, R.; Tockner, A.; Nothdurft, A. Measurement of Forest Inventory Parameters with Apple iPad Pro and Integrated LiDAR Technology. Remote Sens. 2021, 13, 3129. [Google Scholar] [CrossRef]
  46. Heinrichs, B.E.; Yang, M. Bias and Repeatability of Measurements from 3D Scans Made Using iOS-Based Lidar; SAE Technical Paper; SAE International: Warrendale, PA, USA, 2021. [Google Scholar]
  47. Agisoft LLC. Agisoft Metashape (Version 1.6.3); Agisoft LLC: Saint Petersburg, Russia, 2020. [Google Scholar]
  48. Lowe, D. Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; Volume 2, pp. 1150–1157. [Google Scholar]
  49. Shan, J.; Toth, C.K. Topographic Laser Ranging and Scanning: Principles and Processing, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2018. [Google Scholar]
  50. Bruno, N.; Giacomini, A.; Roncella, R.; Thoeni, K. Influence of Illumination Changes on Image-Based 3d Surface Reconstruction. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2021, B2, 701–708. [Google Scholar] [CrossRef]
  51. Laan Labs 3D Scanner App-LIDAR Scanner for iPad & iPhone Pro. Available online: https://www.3dscannerapp.com/ (accessed on 1 July 2021).
  52. Polycam-LiDAR 3D Scanner. Available online: https://poly.cam/ (accessed on 1 July 2021).
  53. Scaniverse-3D LiDAR Scanner for iPhone and iPad. Available online: https://scaniverse.com/ (accessed on 1 July 2021).
  54. SiteScape. Available online: https://www.sitescape.ai/ (accessed on 21 April 2021).
  55. Besl, P.J.; McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256. [Google Scholar]
  56. VDI/VDE 2634; Optische 3-D-Messsysteme-Bildgebende Systeme mit flächenhafter Antastung in Mehreren Einzelansichten/Optical 3D-Measuring Systems-Multiple View Systems Based on Area Scanning. Engl. VDI/VDE-Gesellschaft Mess- und Automatisierungstechnik: Düsseldorf, Germany, 2008.
Figure 1. Calibration balls in the Laboratory of Coordinate Metrology, Cracow University of Technology.
Figure 2. Objects at various scales used during the study: (a) fine detail; (b) small architecture object; (c) room interior; (d) building facade.
Figure 3. Reference models: (a) small figure model—Three Wise Monkeys, Konica-Minolta laser scanner; (b) medium model—bust of Tadeusz Kościuszko, Creaform Academia 3D scanner; (c) interior model—student lounge, CAD model; (d) facade model—CUT cannon shed building, CAD model.
Figure 4. Measurements with calibration balls using GOM software.
Figure 5. Common bugs in iPad meshes: (a) discontinuity and deformation; (b) deformation and displacement; (c) duplication; (d) duplication.
Figure 6. Comparison of statistical values for fine detail. Photogrammetry: (a) textured mesh, (b) point cloud, (c) histogram of distances to the model; iPad: (d) mesh, (e) point cloud, (f) histogram of distances to the model. Point clouds are coloured according to histogram data.
Figure 7. Comparison of measured data with the model for the small architectural object. Photogrammetry: (a) textured mesh, (b) point cloud, (c) histogram of distances to the model; iPad: (d) mesh, (e) point cloud, (f) histogram of distances to the model. Point clouds are coloured according to histogram data.
Figure 8. Comparison of measured data with the model for the room interior. Photogrammetry: (a) textured mesh, (b) point cloud, (c) histogram of distances to the model; iPad: (d) mesh, (e) point cloud, (f) histogram of distances to the model. Point clouds are coloured according to histogram data.
Figure 9. Measurement limits: (a) photogrammetry, (b) iPad LiDAR.
Figure 10. Comparison of statistical values for the building facade. Photogrammetry: (a) textured mesh, (b) point cloud, (c) histogram of distances to the model; iPad: (d) mesh, (e) point cloud, (f) histogram of distances to the model. Point clouds are coloured according to histogram data.
Table 1. Comparison of statistical values for fine detail [mm].

Model            Points      Q1     Median   Q3     IQR
iPad             1254        0.75   1.68     2.95   2.52
photogrammetry   3,622,007   0.08   0.16     0.27   0.25
Table 2. Analysis of the number of outlying points (σ = Q3 = 1.5 mm) for fine detail.

Model            >σ     [%]     >2σ   [%]     >3σ   [%]
iPad             665    53.03   293   23.37   116   9.25
photogrammetry   4874   0.13    589   0.02    296   0.01
Table 3. Comparison of statistical values for the small architecture object [mm].

Model            Points    Q1     Median   Q3     IQR
iPad             9625      1.56   3.47     6.13   7.0
photogrammetry   193,127   0.70   1.50     2.52   3.0
Table 4. Analysis of the number of outlying points (σ = Q3 = 5 mm) for the small architectural object.

Model            >σ     [%]     >2σ   [%]    >3σ   [%]
iPad             1630   16.94   449   4.66   76    0.69
photogrammetry   1799   0.93    620   0.32   460   0.24
Table 5. Comparison of statistical values for the room model [cm].

Model            Points      Q1    Median   Q3    IQR
iPad             27,776      2.0   4.2      6.6   7.7
photogrammetry   6,138,644   0.8   1.7      2.8   2.5
Table 6. Analysis of the number of outlying points (σ = Q3 = 5 cm) for the indoor space model.

Model            >σ        [%]     >2σ      [%]    >3σ    [%]
iPad             11,401    41.05   1710     6.16   362    1.30
photogrammetry   426,017   6.94    26,974   0.44   6830   0.11
Table 7. Comparison of statistical values for the building facade [cm].

Model            Points      Q1    Median   Q3    IQR
iPad             33,363      1.2   2.9      5.2   5.3
photogrammetry   3,031,591   1.2   2.5      4.5   5.1
Table 8. Analysis of the number of outlying points (σ = Q3 = 5 cm) for the building facade.

Model            >σ        [%]     >2σ       [%]    >3σ      [%]
iPad             6699      20.08   1905      5.71   806      2.42
photogrammetry   574,692   18.96   104,422   3.44   39,174   1.29
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
