Article

Use of Smartphone Lidar Technology for Low-Cost 3D Building Documentation with iPhone 13 Pro: A Comparative Analysis of Mobile Scanning Applications

Cigdem Askar and Harald Sternberg
Department of Geomatics, HafenCity University, 20457 Hamburg, Germany
* Author to whom correspondence should be addressed.
Geomatics 2023, 3(4), 563-579; https://doi.org/10.3390/geomatics3040030
Submission received: 12 August 2023 / Revised: 8 December 2023 / Accepted: 8 December 2023 / Published: 11 December 2023

Abstract

Laser scanning technology has long been the preferred method for capturing interior scenes in various industries. With a growing market, smaller and more affordable scanners have emerged, offering end products with sufficient accuracy. While its sensors are not on par with professional scanners, Apple has made laser scanning technology accessible to everyday users with the introduction of its recent iPhone Pro models, democratizing 3D scanning. This study therefore assessed the performance of the iPhone's lidar technology as a low-cost solution for building documentation. Four scanning applications were evaluated to determine the accuracy and precision of the generated point clouds compared with a terrestrial laser scanner, as well as the user experience. The results reveal varying performance across applications on the same device, highlighting the influence of the software. Notably, there is room for improvement, particularly in tracking the device's position through software solutions. As it stands, the technology is well suited for applications such as indoor navigation and the quick generation of floor plans in the context of building documentation.

1. Introduction

Over the past decades, laser scanning has emerged as a cutting-edge technology. Laser scanners generate point clouds that are highly effective in representing objects of varying complexity at different scales [1]. In the 1990s, terrestrial laser scanners (TLS) were introduced to the surveying industry [2], and towards the 2010s, they became more accurate and capable of scanning ranges of hundreds of meters. TLSs are widely used in a variety of applications, including cultural heritage [3,4], change detection [5,6], monitoring and deformation [7,8,9], as-built modelling [10], and forestry [11]. In the late 2000s, mobile mapping systems (MMS), which operate on a vehicle such as a car, were introduced into mapping operations, mainly for data capture on road infrastructure and building facades [12], and their use has since extended to various applications [13,14,15,16,17,18,19,20].
These systems utilize active or passive sensing to capture the object of interest, along with GNSS and IMU for accurate georeferencing. While the GNSS and IMU combination works well for outdoor applications, in GNSS-denied spaces like indoors, using only inertial sensors leads to a drift that grows over time and cannot be corrected, since the error is an unknown function of time [21]. Simultaneous localization and mapping (SLAM) is one of the techniques that offers a solution to this problem. Its fundamental concept is tracking the sensor's position and orientation (pose) over time, each in three degrees of freedom (DoF), in a relative coordinate frame. This is achieved by utilizing overlaps in optical data, such as re-observations of previously seen features [21].
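To illustrate the drift problem described above, the following sketch (our illustration, not from the paper; the bias value is chosen arbitrarily) shows how a small constant accelerometer bias, double-integrated in dead reckoning, produces a position error that grows quadratically with time:

```python
import numpy as np

dt = 0.01                      # 100 Hz IMU sampling interval [s]
t = np.arange(0.0, 60.0, dt)   # one minute of standstill
bias = 0.02                    # hypothetical accelerometer bias [m/s^2]
acc = np.full_like(t, bias)    # true acceleration is zero; only bias is sensed

vel = np.cumsum(acc) * dt      # first integration: velocity error
pos = np.cumsum(vel) * dt      # second integration: position error

# The position error follows ~0.5 * bias * t^2, i.e. roughly 36 m after 60 s,
# which is why indoor systems cannot rely on inertial sensing alone.
print(f"Position drift after 60 s: {pos[-1]:.1f} m")
```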
Nowadays, numerous low-cost MMS rely on SLAM and can be utilized on various platforms like trolleys, backpacks, and hand-held devices. Although many of these systems were initially developed for entertainment, some have spurred research into further applications. In addition to mobile laser scanner solutions, depth cameras represent another commonly employed low-cost alternative in 3D documentation. The integration of RGB and depth cameras generates a 3D representation of the scene by capturing the distance between the object and the camera within their field of view (FOV) and is frequently utilized in computer vision [22]. Two common approaches for depth cameras are time-of-flight (ToF) and structured light. ToF cameras, exemplified by devices like the Azure Kinect and HoloLens, emit light pulses and capture the reflected signal, calculating distance from the measured time for the light to travel to an object and back. Numerous studies have incorporated both systems in indoor mapping [23,24,25]. Structured-light cameras project a known light pattern onto the scene and calculate depth from the distortion of the pattern on the observed object surface. Early generations of the Kinect are a well-known example of this type of camera and have been used in various studies investigating their capabilities in indoor mapping [22,26,27].
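The ToF principle mentioned above reduces to a one-line computation; the following sketch (our illustration, not from the paper) converts a measured round-trip time into a range:

```python
C = 299_792_458.0  # speed of light [m/s]

def tof_range(round_trip_time_s: float) -> float:
    """One-way range from a time-of-flight measurement: the pulse travels
    to the object and back, so the distance is c * t / 2."""
    return C * round_trip_time_s / 2.0

# A round trip of ~33.3 ns corresponds to a range of about 5 m, which is
# roughly the maximum scanning distance recommended by the tested apps.
print(f"{tof_range(33.3e-9):.2f} m")
```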
The developments in laser scanning technology and the rapid advancement of low-cost sensor technology have made 3D laser scanning more accessible and cost-effective. Over the years, researchers have carried out comparative evaluations of lidar-based indoor MMS, such as [28,29,30,31]. Even consumer technology, like some iPhone models, now incorporates laser scanning technology, opening possibilities for the democratization of 3D scanning. This paper aims to perform scanning experiments with the Apple iPhone 13 Pro lidar for the 3D documentation of indoor environments. People spend most of their time in indoor environments [32], yet these lack proper and up-to-date map representations. Though developments in scanning technology have made it possible to capture indoor environments with both time efficiency and accuracy, the cost remains considerable and the technology requires expertise. Therefore, research into low-cost opportunities for indoor mapping, as in other domains [33], is still an ongoing effort.
In this regard, this paper will assess the possibility of using a consumer-grade smartphone equipped with lidar (the Apple iPhone 13 Pro) as a low-cost alternative to mobile mapping systems or terrestrial laser scanners for the 3D documentation of indoor environments, such as the quick generation of floor plans and indoor navigation maps, the detection of changes in spaces, or the filling of gaps in a previous scan. For the experiment, a room occupied by laboratory inventory will be scanned with different 3D scanning applications installed on an iPhone 13 Pro, and the resulting point clouds will be compared with terrestrial laser scanner data.
The remainder of this paper is organized as follows. Section 2 briefly describes the key related works on smartphone-based MMS and existing solutions. Section 3 explains the methodologies used as well as the data acquisition. The results are presented in Section 4, and, finally, the paper is discussed and concluded in Section 5.

2. Related Works

Using smartphones to obtain spatial information is not a new concept, as smartphones are equipped with inertial sensors that are commonly used in indoor positioning, such as in [34], and cameras that are used in image- or video-based 3D reconstruction [35,36]. Most earlier studies worked intensively with Google Tango, launched in 2014 [37]: a hardware and software bundle that permits the development of augmented/mixed/virtual reality content solely through a smartphone or tablet, whose dependability, influence, and user engagement were evaluated in [38]. The Tango project was only available on a limited number of compatible phones and tablets; in 2018, it was terminated and replaced with ARCore [39]. Some studies include [40,41], both of which tested the Tango tablet's capability for the 3D documentation of indoor spaces. Other examples are [42], which investigated 3D reconstruction using a Tango smartphone in the context of cultural heritage, and [38], which assessed the quality and potential of the system.
Apple introduced lidar sensors into the Pro lines of its tablets and smartphones, the iPad Pro and iPhone 12 Pro, in 2020. Incorporating a lidar sensor into consumer-grade smartphones brought a novelty to 3D scanning and raised the question of whether these devices could be a low-cost alternative with sufficient accuracy. Apple's aim, however, was primarily to improve the camera and enhance the augmented reality experience for its users. Hence, Apple has not released any 3D scanning application for large spaces or objects since the initial release, apart from the Measure app, which is designed as a measuring tool. However, Apple has provided a software development kit (SDK), and many developers have since built 3D scanning apps with Apple's ARKit. While the technology initially appealed to novice users seeking to generate a floor plan for home design or to try out furniture before buying, more applications targeting scanning experts have been released over time. It has also received attention from researchers as a low-cost and off-the-shelf alternative for 3D documentation, and different subjects have been investigated since the release of the first Apple device equipped with the lidar sensor. The authors of [43,44] evaluated the iPhone 12 Pro for use in geoscience applications: the former reports a 10 cm sensor accuracy when demonstrating its use on a coastal cliff, while the latter concludes that the tested iPhone 12 Pro could become a standard tool for capturing rocky slopes and investigating discontinuities, despite its limited range. The authors of [33,45] assessed Apple lidar devices for use in heritage documentation and concluded that this technology holds great promise for the near future. The authors of [46] investigated these devices for indoor/outdoor modelling and reported 53 cm for local precision and 10 cm for global correctness; their indoor test space consisted of two adjacent rooms covering a total of around 200 m². The authors of [47] evaluated the iPad Pro from the architectural surveying perspective and reported 2 cm precision and 4 cm accuracy for a 1:200 map scale.

3. Materials and Methods

The device tested in this study, the iPhone 13 Pro, was released in September 2021. It weighs 204 g, is 7.7 mm thick, and features a 6.1-inch Super Retina display. It is powered by an A15 Bionic chip with a 6-core CPU, 5-core GPU, and 16-core Neural Engine, and has 6 GB RAM and 128 GB storage. Additionally, the iPhone 13 Pro includes three 12 MP rear cameras (telephoto, wide, and ultrawide) and a 3D time-of-flight (ToF) lidar. Although Apple publishes limited information about the technical details of the laser used in its products, the authors of [34] have stated that the laser sensor is a solid-state device without motorized mechanical parts, providing higher scalability and reliability. According to [27], the lidar sensor of the iPhone 13 Pro emits near-infrared light from a vertical cavity surface emitting laser with a diffractive optics element (VCSEL DOE) in a 2D array, and the returns are received by a single-photon avalanche diode (SPAD). A total of 576 points are emitted: an 8 × 8 array diffracted into 3 × 3 grids.
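As a quick check, the stated point count follows directly from this geometry:

$$N = \underbrace{(8 \times 8)}_{\text{VCSEL array}} \times \underbrace{(3 \times 3)}_{\text{DOE diffraction}} = 64 \times 9 = 576$$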
Although Apple does not offer a dedicated 3D scanning application, developers can access the sensors on iOS 14 and later versions through ARKit to create 3D mapping applications. As a result, several 3D scanning applications are available in the App Store. This study used four of them: the 3D Scanner app, PolyCam, Scaniverse, and SiteScape. The selection was based on three criteria: (1) the application was free or had a free-use option, (2) the product generated a point cloud, and (3) the lidar sensor was utilized in point cloud generation. Each application is explained in the following subsections, and a summary of the applications' specifications is given in Table 1 below.

3.1. 3D Scanner App

The 3D Scanner app (version 2.0.13(1)) is a free application offering multiple scan modes, including LIDAR, LIDAR Advance, Point Cloud, RoomPlan, Photos, and TrueDepth. The application's help page explains each mode to help users select the one best suited to the scanning purpose. Although the LIDAR Advance mode offers flexibility in setting parameters (resolution, max depth, etc.) before the scan, the LIDAR mode was used in this study, as suggested for large areas. In the advanced mode, the scan ends automatically after a short capture time due to the large number of points, while the LIDAR mode enables longer scans; the quality produced by both modes is reported to be the same. Once the capture is completed, the scan is processed (smoothing, simplifying, and texturing) in HD, fast, or custom modes. The app includes extra features such as extending a scan, viewing the camera trajectory, measuring within the scan, and capturing a floor plan image. Exports are either point clouds (PCD, PLY, LAS, e57, PTS, XYZ) or meshes (OBJ, KMZ, FBX, etc.). The LAS format exports georeferenced point clouds in WGS84 coordinates. The scanned data were exported in XYZ format, compatible with the point cloud processing software CloudCompare (version 2.12.4).
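Since the XYZ export is a plain-text format with one point per line, it can also be inspected outside CloudCompare with a few lines of Python; the sketch below (file name hypothetical) assumes whitespace-separated columns with the coordinates first:

```python
import numpy as np

# Load only the x, y, z columns; any trailing r g b columns are ignored.
points = np.loadtxt("geolab_part1.xyz", usecols=(0, 1, 2))

print(points.shape)            # (n_points, 3)
print(points.min(axis=0))      # one corner of the axis-aligned bounding box
print(points.max(axis=0))      # the opposite corner
```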

3.2. PolyCam

PolyCam (version 3.0.2) offers free, team ($14.99/seat), and pro ($14.99/month) versions. The free version was sufficient for this study as it does not limit lidar captures; however, the free trial of the pro version was used for ease of data export. The available scan modes are LIDAR, photo, and room. The photo mode uses photogrammetry and is suitable for smaller objects, while the room mode generates 3D models instantly. Captured scans can be processed under fast, space, object, or custom categories. Measurements on scans and extending or editing an existing scan are possible. Scans can be exported as point clouds (DXF, PLY, XYZ, LAS, PTS) or meshes (OBJ, FBX, STL, etc.). This work used the LIDAR mode for data capture, and the output was exported in XYZ format.

3.3. SiteScape

SiteScape (version 1.6.9) also offers free, team (price not listed), and pro ($49.99/month, €52.99) versions. The free version includes scans of up to 50 m², with one scan synced to the SiteScape web viewer, and export is limited to the PLY and E57 formats. SiteScape works only in LIDAR mode. The user can set the point density (low, med, or high), which affects how quickly a scan reaches the maximum allowed point count, and the point size (low, med, or high), which only sets the display size of the points while scanning. After approximately one minute, the maximum point limit was reached for one scan, and up to ten scans could be captured consecutively. Completed scans can be exported as point clouds or synced to the SiteScape cloud for viewing in a web app or sharing with multiple users. The Geolab capture was completed with the medium (med) point density setting as ten partial scans, conducted consecutively with some overlapping areas between them. The scans were exported in E57 format.

3.4. Scaniverse

Scaniverse (version 2.0.3) is a free application that offers scan modes based on the size of the object (small, medium, large). Processing is available in speed, area, and detail modes. Processed scans can be exported as point clouds (PLY, LAS) or meshes (OBJ, FBX, STL, GLB, USDZ). The LAS format exports point clouds georeferenced with UTM Cartesian coordinates and was used in this work. The scans were completed in large object mode and processed in area mode.
Data were collected in the geomatics laboratory (Geolab) at HafenCity University, Hamburg (Figure 1). The Geolab is an ideal location for testing the capacity of the iPhone's lidar sensor in a controlled space. It has a 35 m long straight concrete wall on one of its longer edges. The other long edge comprises two walls measuring 13 and 23 m in length, which gradually widen towards the center and connect with each other. Large windows cover these walls; the broader ones were curtained before scanning. The short side walls are 7 and 9 m long. There are six surveying pillars, approximately 1.5 m high and 40 cm across, as well as many laser scanning targets, some of which had been previously measured with a total station. Additionally, the Geolab is cluttered with furniture and equipment.
Each application was used to scan the Geolab in two parts (Figure 1), creating a loop for each part (except with SiteScape). Scanning was repeated a number of times, and the optimal result was achieved when the phone was held parallel to the walls and moved up and down while sliding slowly towards one side at each step. Attention was given to keeping the distance between the scanned surface and the camera below 5 m, as recommended by the applications. Efforts were made to cover the ceiling and the floor entirely while adhering to the recommendations in the applications' manuals by avoiding rapid movements and sudden turns. The scanning time was similar for each application, taking between 20 and 25 min to capture the entire room. Furthermore, the laboratory was scanned with the terrestrial laser scanner (TLS) Z+F Imager 5016 [48] from eight scan positions, and the TLS data served as the reference in the evaluation.
This paper investigates the capabilities of the iPhone 13 Pro lidar as a low-cost alternative for the 3D documentation of indoor environments, with a focus on the quality of the sensor and the generated point cloud. The global accuracy of the generated point clouds was evaluated by comparing them to the terrestrial laser scanner data using a cloud-to-cloud method, and segmented planes were analyzed to assess the precision of the sensor. Distances between targets already available in the Geolab were calculated to determine the local accuracy of the system. The use of different applications in the evaluation aims to reveal the effect of the software on the quality of the final point cloud. Finally, the user experience is addressed in the discussion and conclusions.

4. Results

Upon data collection, all data processing for each application was conducted using the open-source point cloud processing software CloudCompare [49]. First, the point clouds of Part 1 and Part 2 for each application were roughly aligned with the TLS cloud by utilizing existing laser scanning targets or other distinctive points. Next, the iterative closest point (ICP) algorithm performed a fine registration on each part. The registered parts were then merged to generate a single point cloud of the test room for each application. Figure 2 displays the registered point clouds for each application. PolyCam appears to have less distortion than the other applications, which, for example, exhibit more distorted edges. The 3D Scanner app left some areas on the ceiling uncaptured due to gaps during capture. The SiteScape point cloud has a very high number of points (115,883,552) in comparison with PolyCam (6,685,940), the 3D Scanner app (6,568,595), and Scaniverse (787,819). On one of the flat wall surfaces, the point density was assessed within a 1 m² box; the point counts were 204,231 for SiteScape, 9128 for PolyCam, 6405 for the 3D Scanner app, and 1183 for Scaniverse.
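The registration itself was done in CloudCompare; for readers who prefer a scripted route, the following is a minimal sketch of an equivalent ICP fine registration using the open-source Open3D library (file names and the 10 cm correspondence threshold are our assumptions):

```python
import numpy as np
import open3d as o3d

part1 = o3d.io.read_point_cloud("polycam_part1.ply")
part2 = o3d.io.read_point_cloud("polycam_part2.ply")

# In practice the initial transform comes from a rough alignment on
# laser scanning targets or other distinctive points; identity here.
init = np.eye(4)

result = o3d.pipelines.registration.registration_icp(
    part2, part1,
    max_correspondence_distance=0.10,  # 10 cm search radius (assumption)
    init=init,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
part2.transform(result.transformation)

merged = part1 + part2  # single point cloud of the whole test room
```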
A closer look at the point clouds shows some split surfaces, particularly where two parts overlap and loops end. These stem from the drift error accumulating over time and are a known problem in SLAM systems. Figure 3 shows examples of the split surfaces in each application’s point cloud.
The comparison initially assesses global accuracy using the multiscale model-to-model cloud comparison (M3C2) method [5]. This method calculates the distance between point clouds along the local surface normal, up to a specified search depth, and the resulting deviations from the reference point cloud are represented as color-coded M3C2 distances on the cloud. For this study, a search depth of 40 cm was used, and distances beyond this value were discarded.
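As a rough intuition for the method, the sketch below computes a strongly simplified version of the M3C2 distance: for each core point, reference points inside a small cylinder along the local normal are averaged into one signed offset. The real M3C2 [5] additionally estimates multi-scale normals and confidence intervals; the 5 cm cylinder radius here is our assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

def simplified_m3c2(core, normals, reference, search_depth=0.40, radius=0.05):
    """Signed distance from each core point to the reference cloud along
    the point's unit normal; NaN where nothing lies within search_depth.
    core, normals, reference are (n, 3) numpy arrays."""
    tree = cKDTree(reference)
    dists = np.full(len(core), np.nan)
    for i, (p, n) in enumerate(zip(core, normals)):
        idx = tree.query_ball_point(p, search_depth)
        if not idx:
            continue
        diff = reference[idx] - p
        along = diff @ n                                  # offsets along the normal
        radial = np.linalg.norm(diff - np.outer(along, n), axis=1)
        inside = radial < radius                          # points inside the cylinder
        if inside.any():
            dists[i] = along[inside].mean()
    return dists
```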
Figure 4 presents the results, where a range of 40 cm and a color saturation of 14 cm were used. The 3D Scanner app shows higher deviations on the walls compared with the floor and ceiling. PolyCam demonstrates overall balanced and low deviations, with some peaks observed on the floor and ceiling. Similarly, SiteScape displays balanced deviations, but the walls show partially higher deviations. Scaniverse does not stand out in any specific area, but its walls show smaller deviations compared with the floor and ceiling. Despite a consistent scanning approach, with the phone held parallel to the side walls and moved up and down by the same user at a normal to slow pace, the varying performance in different areas of the test room is attributed to the SLAM algorithm. A discernible line reveals the operator's path, walking along one wall in one direction and back along the other, and can be observed as a faint or dominant line on the point clouds (Figure 4). Additionally, areas where the loops end or where the two parts of the room connect exhibit higher deviations across all applications. These observations highlight the consequences of errors in pose estimation during dynamic scanning, resulting in misaligned points and failures in loop closure.
Additionally, one of the long side walls was partially covered by large windows, which were mostly shielded from direct sunlight during the capture. The applications, particularly PolyCam and SiteScape, performed satisfactorily along this wall, suggesting that changes in lighting conditions during the scans had minimal effect; this assumption was, however, not assessed within this work. On the other hand, the back wall exhibited higher deviations from the TLS in each application's point cloud. This can be partly attributed to the clutter in front of the wall, which prevented scanning at a closer range. In particular, Scaniverse experienced difficulties in this area, as indicated by the dashed circle in Figure 4, where it failed to capture any data within a 40 cm distance of the reference cloud.
Table 2 summarizes the results visualized in Figure 4. SiteScape exhibits the lowest standard deviation at 6 cm, followed by PolyCam with 7 cm, Scaniverse with 8 cm, and the 3D Scanner app with 9 cm. Each application has its highest share of points within the 1–3 cm range, with SiteScape leading at 46%, followed by PolyCam at 32%, Scaniverse at 31%, and the 3D Scanner app at 30%. While only 3% of Scaniverse's points fall within the deviation range of 20 to 40 cm, it is important to note that, as shown in Figure 4, most of the data on the back wall were not captured because their deviations exceeded the 40 cm search depth set for the M3C2 algorithm. Overall, the applications demonstrate achievable accuracies of up to 5 cm, considering that the percentage of deviations within this range is 69% for the 3D Scanner app, 77% for PolyCam, 83% for SiteScape, and 70% for Scaniverse. The problem lies in areas with split or uneven surfaces caused by the drift error that accumulates over time, showing that there is room for improvement in the software component of the applications.
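For reproducibility, the percentages in Table 2 amount to a simple histogram over the absolute M3C2 distances; a sketch of this summary step:

```python
import numpy as np

def deviation_summary(m3c2_distances):
    """Bin absolute deviations into the ranges of Table 2; NaN entries
    (no neighbor within the search depth) are dropped first."""
    signed = m3c2_distances[~np.isnan(m3c2_distances)]
    d = np.abs(signed)
    edges = [0.0, 0.005, 0.01, 0.03, 0.05, 0.10, 0.20, 0.40]
    labels = ["<5 mm", "5 mm-1 cm", "1-3 cm", "3-5 cm",
              "5-10 cm", "10-20 cm", "20-40 cm"]
    counts, _ = np.histogram(d, bins=edges)
    for label, count in zip(labels, counts):
        print(f"{label:>10}: {100.0 * count / len(d):5.1f} %")
    print(f"  Std (cm): {100.0 * signed.std():.0f}")
```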
The presented results are also compared with the level of accuracy (LOA) specification defined by the U.S. Institute of Building Documentation [50], widely adhered to in Scan2BIM projects and outlined in Table 3. These LOA levels are specified at the 95 percent confidence level (2σ), a common practice in surveying (e.g., the German standard DIN 18710). LOA50 represents the highest class, with accuracies of up to 1 mm, while LOA10 is the lowest, indicating accuracies coarser than 5 cm. Comparing the values in Table 2 with those in Table 3, it is evident that no application achieves at least 95% of all distances within the LOA levels up to 5 cm. The achievable accuracies for each application are in the range of 10–20 cm at the 95% confidence level, signifying that the software component is not yet capable of producing a highly accurate point cloud that aligns with widely referenced standards.
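The mapping from a standard deviation to a USIBD class is mechanical; a sketch under the 2σ convention of Table 3:

```python
def loa_class(std_m: float) -> str:
    """Map a standard deviation [m] to a USIBD LOA class using the
    95% (2-sigma) deviation bands of Table 3."""
    two_sigma = 2.0 * std_m
    if two_sigma <= 0.001:
        return "LOA50"
    if two_sigma <= 0.005:
        return "LOA40"
    if two_sigma <= 0.015:
        return "LOA30"
    if two_sigma <= 0.05:
        return "LOA20"
    return "LOA10"

# SiteScape's 6 cm standard deviation gives 2-sigma = 12 cm, i.e. LOA10,
# consistent with the 10-20 cm range stated above.
print(loa_class(0.06))
```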
The subsequent analysis assessed the noise of the point clouds on flat surfaces, namely the walls, floor, and ceiling. To achieve this, clutter covering the flat surfaces, such as furniture or wall accessories, was segmented away, leaving behind the relevant areas. A plane was fitted to these remaining parts to represent the flat areas accurately. The planes were constructed with the random sample consensus (RANSAC) algorithm, which estimates the parameters of a corresponding primitive from a minimum set of points [51]. The distance between each point and the fitted plane was then calculated to measure the noise present in the data sets, and this information was visualized to gain insight into the noise levels across the flat surfaces.
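A scripted equivalent of this step is available in Open3D, whose segment_plane routine implements a RANSAC plane fit in the spirit of [51]; the sketch below (file name and the 2 cm inlier tolerance are our assumptions) also derives the signed point-to-plane distances used as the noise measure:

```python
import numpy as np
import open3d as o3d

wall = o3d.io.read_point_cloud("wall_segment.ply")  # pre-segmented flat area

# RANSAC plane fit: three random points define a candidate plane, and the
# plane with the most inliers within the distance threshold wins.
(a, b, c, d), inliers = wall.segment_plane(
    distance_threshold=0.02, ransac_n=3, num_iterations=1000
)

# Signed distance of every point to the fitted plane ax + by + cz + d = 0.
pts = np.asarray(wall.points)
dist = (pts @ np.array([a, b, c]) + d) / np.linalg.norm([a, b, c])

print(f"Std: {100 * dist.std():.1f} cm, "
      f"share beyond 10 cm: {100 * (np.abs(dist) > 0.10).mean():.0f} %")
```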
Figure 5 illustrates the results for the TLS data. The floor and walls exhibit smooth, flat surfaces, while the ceiling partially deviates from a flat surface, with variations of up to 4 cm along the middle line.
Figure 6 illustrates the distances from each point to the fitted plane along the long concrete wall, while Figure 7 focuses on the floor and ceiling. A comprehensive summary of these comparisons can be found in Table 4. Notably, no consistent pattern is observed across the applications on these surfaces. For instance, the 3D Scanner app exhibits the least deviation on the ceiling, while PolyCam and Scaniverse perform better on the wall, and SiteScape performs better on the floor. Within the applications, the percentage of points exceeding a distance of 10 cm remains below 10% on all surfaces, except for SiteScape on the wall (12%, Table 4). The standard deviation ranges from 2 to 7 cm for all applications. PolyCam and SiteScape generally perform better than the 3D Scanner app and Scaniverse, demonstrating higher point densities in the lower deviation ranges. The deviation pattern identified on the ceiling in the TLS data (Figure 5) is not clearly reflected in the results from the applications. An important factor contributing to noise in the point clouds is the presence of split or uneven surfaces (e.g., Figure 3) arising from drift over time. Overall, these observations indicate that the results are primarily influenced by the software rather than the hardware.
Furthermore, a comparison was undertaken to assess the local accuracy of the point clouds. This evaluation involved measuring the Euclidean distances between multiple points distributed throughout the room, with the point coordinates manually extracted from each point cloud. The coordinates derived from the TLS point cloud were taken as the reference values. Before this comparison, a preliminary assessment compared the TLS points with ground truth coordinates obtained from total station measurements, revealing a root mean square error (RMSE) of 3 mm for the X coordinates and 6 mm for both the Y and Z coordinates.
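The evaluation in Tables 5 and 6 reduces to Euclidean distances and their deviations; the sketch below reproduces one published value as a consistency check:

```python
import numpy as np

def distance_deviations(ref_pts, app_pts, pairs):
    """Deviations of an application's point-to-point distances from the
    TLS reference; returns (mean deviation, RMSE) in meters. Inputs are
    {name: (x, y, z)} dictionaries of manually picked target coordinates."""
    devs = []
    for a, b in pairs:
        d_ref = np.linalg.norm(np.subtract(ref_pts[a], ref_pts[b]))
        d_app = np.linalg.norm(np.subtract(app_pts[a], app_pts[b]))
        devs.append(d_ref - d_app)
    devs = np.asarray(devs)
    return devs.mean(), np.sqrt(np.mean(devs ** 2))

# Coordinates of P2 and P3 taken from Table 5 (TLS and PolyCam columns):
tls = {"P2": (8.464, -4.221, -0.703), "P3": (0.669, -0.602, -0.968)}
poly = {"P2": (8.452, -4.091, -0.690), "P3": (0.675, -0.465, -0.984)}

# Matches the P2-P3 row of Table 6: d_TLS = 8.599 m, deviation = 0.013 m.
print(distance_deviations(tls, poly, [("P2", "P3")]))
```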
Figure 8 illustrates the targets used for evaluation and the corresponding calculated distances. A uniformly distributed point network was selected for the comparison, ensuring comprehensive coverage of all possible distances. The results are presented in Table 5 and Table 6.
Table 5 displays the point coordinates obtained from each point cloud. Notably, the blurriness of the 3D Scanner and Scaniverse point clouds made it challenging to read all of the coordinates accurately. This issue is evident in Figure 9, which shows a wall section with numerous laser scanning targets: the 3D Scanner app exhibits distortions that hinder precise target selection, while Scaniverse's sparse point cloud necessitates zooming out to identify a point. Consequently, only targets consistently readable across all point clouds, marked with an asterisk in Table 5, were considered for the evaluation.
Table 6 presents the calculated distances and their deviations from the TLS values. Beforehand, the TLS distances between four points (T03, T06, T11, and T13) were compared with the distances calculated from control points with known coordinates in the Geolab to validate the TLS results; this comparison yielded a 6 mm root mean square error (RMSE). The mean deviation from the TLS values is 5 cm for PolyCam and 6 cm for SiteScape, while the 3D Scanner app has a mean deviation of 10 cm and Scaniverse a substantial 44 cm. The RMSE was used to assess the precision of the measured distances. PolyCam demonstrates the best performance with an RMSE of 10 cm, followed by SiteScape with 14 cm. The 3D Scanner app and Scaniverse show higher RMSE values of 17 cm and 56 cm, respectively. Notably, Scaniverse performs considerably worse than the other applications regarding local accuracy.
After identifying that the higher values in the 3D Scanner app point cloud were associated with distances from point P2 (Table 6), it was suspected that P2 might be an erroneous point due to the distorted view of the point cloud. To address this, the mean and RMSE values were recalculated excluding the distances to and from P2. The mean deviation values were thus revised to 2.2 cm for the 3D Scanner app, 2 cm for PolyCam, −0.7 cm for SiteScape, and 39 cm for Scaniverse; the RMSE values became 10 cm for the 3D Scanner app, 10 cm for PolyCam, 7 cm for SiteScape, and 55 cm for Scaniverse. These adjusted values provide a more accurate assessment of each application's performance, considering the potential error associated with P2.
Regarding handling of the apps, the 3D Scanner app, PolyCam, and Scaniverse are more convenient for scanning larger spaces, as there is no need to cut the scan until the entire space is captured. However, during the tests in the Geolab, it was observed that capturing bigger rooms in one scan can sometimes result in split surfaces, as shown in Figure 3. SiteScape, on the other hand, requires multiple scans to cover larger areas, as it quickly reaches the maximum point limit and automatically cuts off the scan; this approach is likely implemented to counter drift errors that accumulate over time. Although this leads to longer scanning times and requires overlaps between consecutive scans, the quality of the generated point cloud remains satisfactory. In terms of processing and data export, all applications are convenient.

5. Discussion and Conclusions

This paper explored the potential of a lidar-enabled consumer smartphone, the iPhone 13 Pro, as a low-cost alternative for building documentation. The analysis focused on the accuracy and precision of the generated point clouds and on the handling of the applications (the 3D Scanner app, PolyCam, SiteScape, and Scaniverse) when dynamically capturing large spaces.
Global accuracy was assessed by comparing the point clouds using the M3C2 method, which revealed the deviations from the reference point cloud. No application met the commonly used accuracy specifications (LOA). However, there was a notable concentration of deviations within the 5 cm range for each application, which can be considered a promising result. The observed accuracy applies to scans conducted within an approximate range of 3–5 m and to the application versions stated in this study. The precision evaluation revealed that PolyCam and SiteScape produced less noisy point clouds than the 3D Scanner app and Scaniverse. Noisy areas often coincide with the higher deviation areas observed in the global accuracy analysis, which is attributed to the drift error accumulated while moving the phone and to failures in closing loops. The local accuracy assessment showed the challenges of reading targets accurately in the point clouds from Scaniverse and the 3D Scanner app. All applications except Scaniverse achieved RMSE values of up to 10 cm, while Scaniverse lagged behind with an RMSE of 55 cm. In terms of usability, the 3D Scanner app, PolyCam, and Scaniverse offer convenience when scanning larger spaces without the need to cut the scan, while SiteScape requires multiple scans to cover a single area but maintains satisfactory point cloud quality. All applications offer convenient processing and data export capabilities.
Overall, the analyses have highlighted the challenges associated with position tracking, revealed in the data as split or uneven surfaces causing locally higher deviations. The small FoV and the drift errors accumulated during movement contribute to the observed deviations, despite precautions taken during data capture, such as following a closed loop and avoiding revisiting the same area. As a result, the current versions of the tested applications do not fully meet high accuracy expectations, especially when confronted with the complexities of scanning extensive areas. The identified challenges in position tracking are likely to be exacerbated in environments with diverse room layouts; how well these applications can adapt to the intricacies of varied spatial configurations therefore remains a critical aspect of their utility. Nevertheless, the technology holds promise for achieving better accuracies with improved software. The capability to capture a space at any time using just a smartphone, with applications that do not demand significant expertise, offers great flexibility.
Integrating laser scanning technology into consumer-grade devices introduces new research topics within mobile mapping systems. While it may not replace professional scanners, it offers a cost-effective and accessible solution for various industries and applications. As software advancements continue, the accuracy and precision of the generated point clouds are likely to improve, expanding the capabilities of this technology even further. Overall, the iPhone 13 Pro’s lidar technology represents a positive step towards democratizing 3D scanning and making it more widely available for a range of practical uses.
A comparative evaluation between the investigated applications and depth cameras can be considered as a next step. The existing literature underlines the significance of depth cameras as valuable and cost-effective alternatives in the realm of building documentation, and a direct comparison between these consumer-level low-cost solutions promises compelling insights. Furthermore, given the continuous release of new versions of these applications, future publications incorporating the latest versions could track the technology's development by comparing results across versions. Looking ahead, subsequent phases of this study could deepen the exploration of the Scan2BIM process, involving a comprehensive investigation into the full potential of integrating low-cost technology into building modelling. Lastly, drawing on the experience gained during this work, using 3D reference points instead of classical black-and-white targets might enhance the metric quality of future evaluations.

Author Contributions

Conceptualization, C.A. and H.S.; methodology, C.A. and H.S.; validation, C.A. and H.S.; investigation, C.A.; data curation, C.A.; writing—original draft preparation, C.A.; writing—review and editing, H.S.; visualization, C.A.; supervision, H.S. All authors have read and agreed to the published version of the manuscript.

Funding

This paper was written within the Level 5 Indoor Navigation (L5 IN) project, funded by the German Federal Ministry of Transport and Digital Infrastructure (BMVI), grant number VB5GFHAMB.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Due to data security concerns imposed by facility management, we regret to inform readers that the data presented in this paper cannot be published.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Di Stefano, F.; Chiappini, S.; Gorreja, A.; Balestra, M.; Pierdicca, R. Mobile 3D scan LiDAR: A literature review. Geomatics Nat. Hazards Risk 2021, 12, 2387–2429. [Google Scholar] [CrossRef]
  2. Scantech International. Timeline of 3D Laser Scanners | By Scantech International. Available online: https://scantech-international.com/blog/timeline-of-3d-laser-scanners (accessed on 3 March 2023).
  3. Pritchard, D.; Sperner, J.; Hoepner, S.; Tenschert, R. Terrestrial laser scanning for heritage conservation: The Cologne Cathedral documentation project. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, IV-2/W2, 213. [Google Scholar] [CrossRef]
  4. Kushwaha, S.K.P.; Dayal, K.R.; Sachchidanand; Raghavendra, S.; Pande, H.; Tiwari, P.S.; Agrawal, S.; Srivastava, S.K. 3D Digital Documentation of a Cultural Heritage Site Using Terrestrial Laser Scanner—A Case Study. In Applications of Geomatics in Civil Engineering, 1st ed.; Lecture Notes in Civil Engineering; Ghosh, J.K., da Silva, I., Eds.; Springer: Singapore, 2020; Volume 33, pp. 49–58. [Google Scholar] [CrossRef]
  5. Lague, D.; Brodu, N.; Leroux, J. Accurate 3D comparison of complex topography with terrestrial laser scanner: Application to the Rangitikei canyon (N-Z). ISPRS J. Photogramm. Remote Sens. 2013, 82, 10–26. [Google Scholar] [CrossRef]
  6. Schürch, P.; Densmore, A.L.; Rosser, N.J.; Lim, M.; McArdell, B.W. Detection of surface change in complex topography using terrestrial laser scanning: Application to the Illgraben debris-flow channel. Earth Surf. Process. Landf. 2011, 36, 1847–1859. [Google Scholar] [CrossRef]
  7. Sternberg, H. Deformation Measurements at Historical Buildings with Terrestrial Laserscanners. In Proceedings of the ISPRS Commission V Symposium Image Engineering and Vision Metrology, Dresden, Germany, 25–27 September 2006; pp. 303–308. Available online: https://www.isprs.org/proceedings/xxxvi/part5/paper/STER_620.pdf (accessed on 17 July 2023).
  8. Wang, W.; Zhao, W.; Huang, L.; Vimarlund, V.; Wang, Z. Applications of terrestrial laser scanning for tunnels: A review. J. Traffic Transp. Eng. (Eng. Ed.) 2014, 1, 325–337. [Google Scholar] [CrossRef]
  9. Mukupa, W.; Roberts, G.W.; Hancock, C.M.; Al-Manasir, K. A review of the use of terrestrial laser scanning application for change detection and deformation monitoring of structures. Surv. Rev. 2017, 49, 99–116. [Google Scholar] [CrossRef]
  10. Raza, M. BIM for Existing Buildings: A Study of Terrestrial Laser Scanning and Conventional Measurement Technique. Master’s Thesis, Metropolia University of Applied Sciences, Helsinki, Finland, 2017. [Google Scholar]
  11. Park, H.; Lim, S.; Trinder, J.; Turner, R. 3D surface reconstruction of Terrestrial Laser Scanner data for forestry. In Proceedings of the IGARSS 2010–2010 IEEE International Geoscience and Remote Sensing Symposium, Honolulu, HI, USA, 25–30 July 2010; pp. 4366–4369. [Google Scholar]
  12. Petrie, G. An Introduction to the Technology: Mobile Mapping Systems. GeoInformatics 2016, 13, 32–43. Available online: http://petriefied.info/Petrie_Mobile_Mapping_Systems_Jan-Feb_2010.pdf (accessed on 17 July 2023).
  13. Hamraz, H.; Contreras, M.A.; Zhang, J. Forest understory trees can be segmented accurately within sufficiently dense airborne laser scanning point clouds. Sci. Rep. 2017, 7, 6770. [Google Scholar] [CrossRef]
  14. Chen, D.; Wang, R.; Peethambaran, J. Topologically Aware Building Rooftop Reconstruction From Airborne Laser Scanning Point Clouds. IEEE Trans. Geosci. Remote Sens. 2017, 55, 7032–7052. [Google Scholar] [CrossRef]
  15. Toth, C.; Grejner-Brzezinska, D. Redefining the Paradigm of Modern Mobile Mapping. Photogramm. Eng. Remote Sens. 2004, 70, 685–694. [Google Scholar] [CrossRef]
  16. Briese, C.; Zach, G.; Verhoeven, G.; Ressl, C.; Ullrich, A.; Studnicka, N.; Doneus, M. Analysis of mobile laser scanning data and multi-view image reconstruction. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, XXXIX-B5, 163–168. [Google Scholar] [CrossRef]
  17. Stojanovic, V.; Shoushtari, H.; Askar, C.; Scheider, A.; Schuldt, C.; Hellweg, N.; Sternberg, H. A Conceptual Digital Twin for 5G Indoor Navigation. 2021. Available online: https://www.researchgate.net/publication/351234064 (accessed on 30 November 2023).
  18. Ibrahimkhil, M.H.; Shen, X.; Barati, K.; Wang, C.C. Dynamic Progress Monitoring of Masonry Construction through Mobile SLAM Mapping and As-Built Modeling. Buildings 2023, 13, 930. [Google Scholar] [CrossRef]
  19. Mahdjoubi, L.; Moobela, C.; Laing, R. Providing real-estate services through the integration of 3D laser scanning and building information modelling. Comput. Ind. 2013, 64, 1272–1281. [Google Scholar] [CrossRef]
  20. Sgrenzaroli, M.; Barrientos, J.O.; Vassena, G.; Sanchez, A.; Ciribini, A.; Ventura, S.M.; Comai, S. Indoor mobile mapping systems and (bim) digital models for construction progress monitoring. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, XLIII-B1-2, 121–127. [Google Scholar] [CrossRef]
  21. Lehtola, V.V.; Nikoohemat, S.; Nüchter, A. Indoor 3D: Overview on Scanning and Reconstruction Methods. In Handbook of Big Geospatial Data; Werner, M., Chiang, Y.-Y., Eds.; Springer: Berlin, Germany, 2021; pp. 55–97. [Google Scholar] [CrossRef]
  22. Lachat, E.; Macher, H.; Mittet, M.-A.; Landes, T.; Grussenmeyer, P. First experiences with kinect v2 sensor for close range 3d modelling. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, XL-5/W4, 93–100. [Google Scholar] [CrossRef]
  23. Khoshelham, K.; Tran, H.; Acharya, D. Indoor mapping eyewear: Geometric evaluation of spatial mapping capability of hololens. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-2/W13, 805–810. [Google Scholar] [CrossRef]
  24. Delasse, C.; Lafkiri, H.; Hajji, R.; Rached, I.; Landes, T. Indoor 3D Reconstruction of Buildings via Azure Kinect RGB-D Camera. Sensors 2022, 22, 9222. [Google Scholar] [CrossRef]
  25. Hübner, P.; Clintworth, K.; Liu, Q.; Weinmann, M.; Wursthorn, S. Evaluation of HoloLens Tracking and Depth Sensing for Indoor Mapping Applications. Sensors 2020, 20, 1021. [Google Scholar] [CrossRef]
  26. Weinmann, M.; Wursthorn, S.; Jutzi, B. Semi-automatic image-based co-registration of range imaging data with different characteristics. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, XXXVIII-3, 119–124. [Google Scholar] [CrossRef]
  27. Kalantari, M.; Nechifor, M. 3D Indoor Surveying—A Low Cost Approach. Surv. Rev. 2016, 49, 1–6. [Google Scholar] [CrossRef]
  28. Lehtola, V.V.; Kaartinen, H.; Nüchter, A.; Kaijaluoto, R.; Kukko, A.; Litkey, P.; Honkavaara, E.; Rosnell, T.; Vaaja, M.T.; Virtanen, J.-P.; et al. Comparison of the Selected State-Of-The-Art 3D Indoor Scanning and Point Cloud Generation Methods. Remote Sens. 2017, 9, 796. [Google Scholar] [CrossRef]
  29. Tucci, G.; Visintini, D.; Bonora, V.; Parisi, E.I. Examination of Indoor Mobile Mapping Systems in a Diversified Internal/External Test Field. Appl. Sci. 2018, 8, 401. [Google Scholar] [CrossRef]
  30. di Filippo, A.; Sánchez-Aparicio, L.J.; Barba, S.; Martín-Jiménez, J.A.; Mora, R.; Aguilera, D.G. Use of a Wearable Mobile Laser System in Seamless Indoor 3D Mapping of a Complex Historical Site. Remote Sens. 2018, 10, 1897. [Google Scholar] [CrossRef]
  31. Salgues, H.; Macher, H.; Landes, T. Evaluation of mobile mapping systems for indoor surveys. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, XLIV-4/W1-2020, 119–125. [Google Scholar] [CrossRef]
  32. Wilkening, J.; Kapaj, A.; Cron, J. Creating a 3D Campus Routing Information System with ArcGIS Indoors. In Dreiländertagung der OVG, DGPF und SGPF Photogrammetrie-Fernerkundung-Geoinformation-2019; Thomas Kersten: Hamburg, Germany, 2019. [Google Scholar]
  33. Murtiyoso, A.; Grussenmeyer, P.; Landes, T.; Macher, H. First assessments into the use of commercial-grade solid state lidar for low cost heritage documentation. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2021, XLIII-B2-2, 599–604. [Google Scholar] [CrossRef]
  34. Shoushtari, H.; Willemsen, T.; Sternberg, H. Many Ways Lead to the Goal—Possibilities of Autonomous and Infrastructure-Based Indoor Positioning. Electronics 2021, 10, 397. [Google Scholar] [CrossRef]
  35. Tanskanen, P.; Kolev, K.; Meier, L.; Camposeco, F.; Saurer, O.; Pollefeys, M. Live Metric 3D Reconstruction on Mobile Phones. In Proceedings of the 2013 IEEE International Conference on Computer Vision (ICCV), Sydney, Australia, 1–8 December 2013; pp. 65–72. [Google Scholar]
  36. Kersten, T.P. The Smartphone as a Professional Mapping Tool | GIM International. GIM International, 25 February 2020. Available online: https://www.gim-international.com/content/article/the-smartphone-as-a-professional-mapping-tool (accessed on 17 July 2023).
  37. Wikipedia. Tango (Platform)-Wikipedia. Available online: https://en.wikipedia.org/wiki/Tango_(platform) (accessed on 10 May 2023).
  38. Bianchini, C.; Catena, L. The Democratization of 3D Capturing: An Application Investigating Google Tango Potentials. Int. J. Bus. Hum. Soc. Sci. 2019, 12, 3298576. [Google Scholar] [CrossRef]
  39. Google. Build New Augmented Reality Experiences that Seamlessly Blend the Digital and Physical Worlds | ARCore | Google Developers. Available online: https://developers.google.com/ar (accessed on 10 May 2023).
  40. Diakité, A.A.; Zlatanova, S. First experiments with the tango tablet for indoor scanning. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, III-4, 67–72. [Google Scholar] [CrossRef]
  41. Froehlich, M.; Azhar, S.; Vanture, M. An Investigation of Google Tango® Tablet for Low Cost 3D Scanning. In Proceedings of the 34th International Symposium on Automation and Robotics in Construction, Taipei, Taiwan, 28 June–1 July 2017; pp. 864–871. [Google Scholar]
  42. Boboc, R.G.; Gîrbacia, F.; Postelnicu, C.C.; Gîrbacia, T. Evaluation of Using Mobile Devices for 3D Reconstruction of Cultural Heritage Artifacts. In VR Technologies in Cultural Heritage; Duguleană, M., Carrozzino, M., Gams, M., Tanea, I., Eds.; Communications in Computer and Information Science; Springer International Publishing: Cham, Switzerland, 2019; Volume 904, pp. 46–59. [Google Scholar] [CrossRef]
  43. Luetzenburg, G.; Kroon, A.; Bjørk, A.A. Evaluation of the Apple iPhone 12 Pro LiDAR for an Application in Geosciences. Sci. Rep. 2021, 11, 1–9. [Google Scholar] [CrossRef]
  44. Riquelme, A.; Tomás, R.; Cano, M.; Pastor, J.L.; Jordá-Bordehore, L. Extraction of discontinuity sets of rocky slopes using iPhone-12 derived 3DPC and comparison to TLS and SfM datasets. IOP Conf. Ser. Earth Environ. Sci. 2021, 833, 012056. [Google Scholar] [CrossRef]
  45. Losè, L.T.; Spreafico, A.; Chiabrando, F.; Tonolo, F.G. Apple LiDAR Sensor for 3D Surveying: Tests and Results in the Cultural Heritage Domain. Remote Sens. 2022, 14, 4157. [Google Scholar] [CrossRef]
  46. Díaz-Vilariño, L.; Tran, H.; Frías, E.; Balado, J.; Khoshelham, K. 3D mapping of indoor and outdoor environments using Apple smart devices. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, XLIII-B4-2, 303–308. [Google Scholar] [CrossRef]
  47. Spreafico, A.; Chiabrando, F.; Losè, L.T.; Tonolo, F.G. The ipad pro built-in lidar sensor: 3D rapid mapping tests and quality assessment. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2021, XLIII-B1-2, 63–69. [Google Scholar] [CrossRef]
  48. Zoller+Fröhlich. Z+F IMAGER® 5016: Zoller+Fröhlich. Available online: https://www.zofre.de/laserscanner/3d-laserscanner/z-f-imagerr-5016 (accessed on 17 July 2023).
  49. CloudCompare. (version. 2.12.4) [GPL Software]. Available online: http://www.cloudcompare.org/ (accessed on 14 July 2022).
  50. U.S. Institute of Building Documentation. USIBD Level of Accuracy (LOA) Specification Guide, v2.0-2016. 2016. Available online: https://cdn.ymaws.com/www.nysapls.org/resource/resmgr/2019_conference/handouts/hale-g_bim_loa_guide_c120_v2.pdf (accessed on 12 October 2022).
  51. Schnabel, R.; Wahl, R.; Klein, R. Efficient RANSAC for Point-Cloud Shape Detection. Comput. Graph. Forum 2007, 26, 214–226. [Google Scholar] [CrossRef]
Figure 1. The images on the left (a–c) illustrate the Geolab test room. The whole room was scanned in Part 1 and Part 2, covering a common area as shown on the right side (d). The scan concludes at the starting point, identified as a point on the image (d). The colored arrows (d) indicate the walking direction during the scanning process.
Figure 2. Registered and merged point clouds from each application.
Figure 3. Examples of split surfaces from the datasets. Red boxes mark split walls and uneven surfaces on the floor or ceiling.
Figure 4. Cloud-to-cloud comparison of each point cloud with the reference TLS point cloud (depicted on the left). Distances were compared within a 40 cm range, with deviations beyond this range resulting in empty spaces as observed within the marked circle in Scaniverse’s point cloud. Higher deviations are seen in different parts for different point clouds.
Figure 5. TLS data: point-to-plane (P2Plane) comparison on the floor, ceiling, and southern wall (red dashed circle).
Figure 6. Point-to-plane (P2Plane) comparison on the southern wall (red dashed circle).
Figure 7. Point-to-plane (P2Plane) comparison on the floor (left) and ceiling (right).
Figure 8. Target points across Geolab, indicated by yellow stars and circles in the pictures, are used for distance calculations (represented by yellow lines).
Figure 9. A zoomed view of each application's point cloud on a wall decorated with targets. Distortions in the 3D Scanner app and blurriness in Scaniverse posed problems when picking target points.
Table 1. Summary of the specifications of each application, based on the versions used at the time of data capture. Specifications may have changed by the time of publication.
| | 3D Scanner App | PolyCam | SiteScape | Scaniverse |
|---|---|---|---|---|
| Scan mode | LIDAR, LIDAR Advance, Point Cloud, Photos, TrueDepth | LIDAR, Photo, Room | LIDAR | Small object, medium object, large object (area) |
| Scan settings | Resolution, max depth | - | Point density and size (low, med, high) | Range setting (max 5 m) |
| Processing options | HD, Fast, Custom | Fast, Space, Object, Custom | Syncing to the SiteScape cloud | Speed, area, detail |
| Processing steps | Smoothing, simplifying, texturing | - | - | - |
| Export as | Point cloud, mesh | Point cloud, mesh | Point cloud | Point cloud, mesh |
| Export formats | PCD, PLY, LAS, e57, PTS, XYZ, OBJ, KMZ, FBX, etc. | DXF, PLY, LAS, PTS, XYZ, OBJ, STL, FBX, etc. | e57 | PLY, LAS, OBJ, FBX, STL, GLB, USDZ |
Table 2. Numerical summary of the cloud-to-cloud comparison. (Std: standard deviation).
| Deviation | 3D Scanner App | PolyCam | SiteScape | Scaniverse |
|---|---|---|---|---|
| <5 mm | 17% | 19% | 8% | 10% |
| 5 mm–1 cm | 11% | 17% | 10% | 10% |
| 1–3 cm | 30% | 32% | 46% | 31% |
| 3–5 cm | 11% | 9% | 19% | 19% |
| 5–10 cm | 12% | 9% | 9% | 18% |
| 10–20 cm | 11% | 12% | 6% | 9% |
| 20–40 cm | 8% | 2% | 2% | 3% |
| Std (cm) | 9 | 7 | 6 | 8 |
Table 3. LOA definitions (based on deviations of 2σ) by the U.S. Institute of Building Documentation.
| Level | Upper Range | Lower Range |
|---|---|---|
| LOA10 | User-defined | 5 cm |
| LOA20 | 5 cm | 15 mm |
| LOA30 | 15 mm | 5 mm |
| LOA40 | 5 mm | 1 mm |
| LOA50 | 1 mm | 0 |
Table 4. Summary of plane fitting. (Std: standard deviation).
| | 3D Scanner App | | | PolyCam | | | SiteScape | | | Scaniverse | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | Floor | Ceiling | Wall | Floor | Ceiling | Wall | Floor | Ceiling | Wall | Floor | Ceiling | Wall |
| <1 cm | 21% | 38% | 8% | 27% | 27% | 44% | 45% | 37% | 18% | 13% | 28% | 24% |
| 1–3 cm | 34% | 40% | 27% | 44% | 47% | 46% | 45% | 48% | 35% | 26% | 32% | 43% |
| 3–5 cm | 29% | 14% | 24% | 17% | 16% | 8% | 8% | 12% | 17% | 24% | 19% | 19% |
| 5–10 cm | 9% | 5% | 33% | 8% | 7% | 1% | 1% | 3% | 17% | 29% | 20% | 12% |
| >10 cm | 7% | 2% | 7% | 4% | 4% | 0% | 0% | 0% | 12% | 8% | 2% | 1% |
| Std (cm) | 5.7 | 3.5 | 5.7 | 3.8 | 3.5 | 1.9 | 2.0 | 2.2 | 6.5 | 5.5 | 4.0 | 3.4 |
Table 5. The coordinates of the picked target points in each application as well as the TLS data. The points marked with an asterisk (*) are those used for the calculations.
| | TLS | | | 3D Scanner App | | | PolyCam | | | SiteScape | | | Scaniverse | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Points | X | Y | Z | X | Y | Z | X | Y | Z | X | Y | Z | X | Y | Z |
| P1 | 1.399 | −4.199 | −1.127 | 1.402 | −4.183 | −1.097 | 1.406 | −4.043 | −1.135 | 1.356 | −4.108 | −1.095 | - | - | - |
| P2 * | 8.464 | −4.221 | −0.703 | 8.301 | −4.074 | −0.708 | 8.452 | −4.091 | −0.690 | 8.452 | −4.059 | −0.671 | 8.131 | −3.531 | −0.678 |
| P3 * | 0.669 | −0.602 | −0.968 | 0.674 | −0.602 | −0.965 | 0.675 | −0.465 | −0.984 | 0.686 | −0.649 | −0.918 | 0.648 | 0.379 | −0.989 |
| T03 * | 0.002 | 14.021 | 0.012 | −0.132 | 13.838 | −0.013 | −0.095 | 13.986 | 0.040 | 0.055 | 14.042 | 0.068 | −0.029 | 14.377 | 0.033 |
| T04 | 0.007 | 18.441 | 0.953 | - | - | - | −0.072 | 18.364 | 0.982 | 0.098 | 18.370 | 1.057 | −0.105 | 18.648 | 0.058 |
| T06 * | 0.024 | 30.649 | 0.267 | 0.099 | 30.587 | 0.045 | 0.022 | 30.632 | 0.297 | 0.042 | 30.615 | 0.309 | 0.037 | 30.615 | 0.258 |
| T08 | 6.771 | 27.013 | −1.939 | 6.816 | 27.048 | −1.919 | 6.828 | 26.964 | −1.946 | 6.675 | 27.330 | −1.907 | - | - | - |
| T09 | 7.950 | 19.152 | 0.882 | 8.008 | 19.205 | 0.914 | 8.002 | 19.312 | 0.921 | 7.919 | 19.163 | 0.853 | - | - | - |
| T11 * | 9.524 | 8.143 | −0.043 | 9.438 | 8.066 | 0.032 | 9.630 | 8.141 | −0.036 | 9.503 | 7.983 | 0.036 | 9.599 | 8.245 | −0.195 |
| T13 | 8.731 | −4.133 | 0.107 | - | - | - | 8.705 | −4.086 | 0.144 | 8.679 | −4.027 | 0.113 | 8.394 | −3.528 | 0.181 |
| T146 * | 1.974 | 30.446 | −0.413 | 2.099 | 30.413 | −0.276 | 2.003 | 30.431 | −0.413 | 2.000 | 30.385 | −0.427 | 1.965 | 30.444 | −0.360 |
Table 6. The calculated distances between target points for the TLS and each respective application. The deviations from the TLS distances are also provided.
| Distances (m) | TLS (d_TLS) | 3D Scanner (d_3D) | PolyCam (d_P) | SiteScape (d_SI) | Scaniverse (d_SC) | d_TLS − d_3D | d_TLS − d_P | d_TLS − d_SI | d_TLS − d_SC |
|---|---|---|---|---|---|---|---|---|---|
| P2–P3 | 8.599 | 8.384 | 8.586 | 8.485 | 8.449 | 0.215 | 0.013 | 0.113 | 0.150 |
| P2–T03 | 20.122 | 19.810 | 20.009 | 19.968 | 19.692 | 0.312 | 0.113 | 0.154 | 0.429 |
| P2–T06 | 35.890 | 35.626 | 35.745 | 35.693 | 35.105 | 0.264 | 0.145 | 0.198 | 0.786 |
| P2–T11 | 12.427 | 12.216 | 12.306 | 12.108 | 11.877 | 0.211 | 0.121 | 0.318 | 0.550 |
| P2–T146 | 35.270 | 35.043 | 35.120 | 35.044 | 34.531 | 0.228 | 0.150 | 0.227 | 0.739 |
| P3–T03 | 14.671 | 14.494 | 14.508 | 14.738 | 14.052 | 0.177 | 0.163 | −0.066 | 0.620 |
| P3–T06 | 31.283 | 31.211 | 31.130 | 31.295 | 30.268 | 0.072 | 0.152 | −0.012 | 1.015 |
| P3–T11 | 12.480 | 12.367 | 12.456 | 12.376 | 11.943 | 0.113 | 0.024 | 0.104 | 0.537 |
| P3–T146 | 31.081 | 31.055 | 30.930 | 31.066 | 30.100 | 0.025 | 0.151 | 0.015 | 0.980 |
| T03–T06 | 16.630 | 16.751 | 16.648 | 16.575 | 16.240 | −0.120 | −0.018 | 0.056 | 0.391 |
| T03–T11 | 11.190 | 11.176 | 11.347 | 11.224 | 11.417 | 0.014 | −0.157 | −0.034 | −0.227 |
| T03–T146 | 16.548 | 16.727 | 16.584 | 16.466 | 16.195 | −0.178 | −0.036 | 0.083 | 0.353 |
| T06–T11 | 24.431 | 24.381 | 24.460 | 24.531 | 24.332 | 0.051 | −0.028 | −0.100 | 0.099 |
| T06–T146 | 2.075 | 2.033 | 2.114 | 2.104 | 2.032 | 0.042 | −0.039 | −0.029 | 0.044 |
| T11–T146 | 23.549 | 23.523 | 23.562 | 23.630 | 23.476 | 0.026 | −0.013 | −0.080 | 0.074 |
| Mean (m) | | | | | | 0.097 | 0.049 | 0.063 | 0.436 |
| RMSE (m) | | | | | | 0.166 | 0.107 | 0.135 | 0.559 |
| Mean (m), excl. P2 | | | | | | 0.022 | 0.020 | −0.007 | 0.388 |
| RMSE (m), excl. P2 | | | | | | 0.101 | 0.101 | 0.066 | 0.549 |

The last two rows give the mean and RMSE values recalculated after excluding P2.

