Article

Automation of Construction Progress Monitoring by Integrating 3D Point Cloud Data with an IFC-Based BIM Model

by Paulius Kavaliauskas 1,*, Jaime B. Fernandez 2, Kevin McGuinness 2 and Andrius Jurelionis 1
1 Faculty of Civil Engineering and Architecture, Kaunas University of Technology, 44249 Kaunas, Lithuania
2 Insight SFI Centre for Data Analytics, Dublin City University, Glasnevin, D9 Dublin, Ireland
* Author to whom correspondence should be addressed.
Buildings 2022, 12(10), 1754; https://doi.org/10.3390/buildings12101754
Submission received: 30 September 2022 / Revised: 15 October 2022 / Accepted: 18 October 2022 / Published: 20 October 2022

Abstract

Automated construction progress monitoring based on the integration of as-planned building information modeling (BIM) and as-built point cloud data has substantial potential to fast-track construction work and identify discrepancies. Laser scanning is becoming mainstream for construction surveys due to the accuracy of the data obtained and the speed of the process; however, construction progress monitoring techniques are still limited by the complexity of the methods, the incompleteness of the scanned areas, or obstructions by temporary objects on construction sites. The novel method proposed in this study extracts BIM data, calculates the plane equation of each face, and performs a point-to-plane distance estimation, which overcomes some limitations reported in previous studies, including automated object detection in occluded environments. Six datasets, each consisting of a point cloud collected by static or mobile laser scanning and the corresponding BIM model, were analyzed. In all the analyzed cases, the proposed method automatically detected whether the construction of an object was completed in the as-built point cloud compared to the provided as-planned BIM model.

1. Introduction

Despite the rapid digitization of construction, modern construction projects face increasingly complex challenges, such as constant changes during the construction execution phase, the insufficient adoption of innovations, and growing project complexity [1,2]. Each construction project requires progress monitoring to ensure that up-to-date information is available to control the decisions needed to deliver project goals on schedule, as miscalculations of progress can lead to cost overruns [3,4,5,6]. Technological advances play an important role in improving the efficiency and quality of automated progress monitoring [7,8]; however, hesitation remains due to the lack of theoretical understanding and several factors, such as point cloud density, occlusions, and the environment, which may affect the efficient implementation of the process [9].
The monitoring of construction progress requires up-to-date as-built data. Traditional measurement methods are still widely used in construction, although they are time-consuming and prone to human error [10,11], resulting in time-inefficient project monitoring and decision-making. Moreover, timely and accurate spatial information is essential in today’s complex civil engineering projects [12]. Several techniques, such as laser scanning or photogrammetry, can be used to automate construction progress monitoring in a more efficient and accurate way. A recent study analyzed the accuracy of such techniques, focusing on UAV-based photogrammetry for building surveying applications [13], and although the application demonstrated many benefits, such technology would be difficult to apply inside a building. Close-range photogrammetry would be more suitable for this purpose, although its accuracy is highly dependent on several factors, such as the optical precision of the camera lens, the shooting distance, camera calibration, and the color and material properties of the scanned surfaces [14]. A rather expensive method, but one often used in construction due to its high accuracy, is terrestrial laser scanning [15]. Obtaining accurate measurements with this method requires an understanding of the technique and its performance, including limitations such as the impact of obstacles, unseen points, or reflection problems related to object materials that affect accuracy, as well as acquisition times that may be much longer than expected to obtain a high-resolution result [16]. The use of a mobile laser scanner would be a more efficient way to collect as-built data and reduce occlusion [17], although for engineering applications the accuracy of this approach is reported to be insufficient [18,19,20,21]. Point cloud generation is a widely studied and well-established technique [22] and is beyond the scope of this study.
To automate the construction progress monitoring process, the obtained as-built data should be compared with the as-planned model. Most studies focus on combining point cloud data with as-planned building information modeling (BIM) data expressed in 3D Computer-Aided Design (CAD) or mesh formats [23,24]. Currently, the Industry Foundation Classes (IFC) is a well-known open standard for BIM information exchange. IFC is a standardized digital description of the built asset industry released by buildingSMART, without which data exchange and collaboration among the variety of stakeholders would be hard to imagine today. It has disadvantages, such as consistently modeled elements not being properly associated with semantics or information being lost when exporting to the IFC format [25,26]; however, it is constantly being improved and is widely used in modern construction. Therefore, IFC-based BIM is considered in this study.
Although methods for monitoring construction progress are constantly evolving, some challenges remain unsolved. To verify that an object is present in the as-built data, it needs to be identified and compared to the as-planned model. The point cloud model needs to be aligned with the BIM model, which is challenging since the datasets are in different coordinate systems [27,28,29,30]. Moreover, registering IFC-based BIM and point cloud models presents additional challenges because the two use completely different file structures. Alignment is still often performed manually or semi-automatically.
Object detection in a point cloud is another challenging part of progress monitoring. Noise in the data obtained during scanning can cause measurement errors [31], which can make it difficult to detect objects. Construction sites inevitably contain obstacles such as scaffolding, moving objects, building materials, or machinery. Due to the insufficient quality of the obtained data, the as-built model may contain various holes or drifted objects. The environment on construction sites is occluded, which means that the spatial data is incomplete; therefore, a purely geometric comparison with the BIM model may be insufficient to obtain reliable results [32,33,34]. These challenges remain unresolved.
This research aims to develop a methodology for automated construction progress monitoring, focusing on the automated detection of objects in noisy and occluded environments and the representation of the as-built state. In this study, a novel approach to construction progress tracking is presented, and the results of the method are demonstrated using different data collection techniques. This, in turn, allows the results to be analyzed in relation to datasets of different quality and precision. The proposed method performs monitoring through point cloud and BIM data integration by extracting points from the vertices of IFC-based BIM model objects. This allows the as-built objects of building structures to be successfully detected against the provided IFC data in both clean and occluded environments, as well as in scans of differing quality.
The paper is organized as follows. The literature is reviewed in the next section. The research methodology is described in Section 3, and the achieved results are presented in Section 4. Section 5 presents the conclusions and future directions of the study.

2. Related Work

For monitoring construction progress, it is first important to obtain as-built data. One of the most reliable ways is to obtain this data from laser scanners in the form of point clouds. Point cloud registration is a key step in most point cloud processing. Global point cloud registration methods typically use geometric feature matching, while local registration techniques usually do not employ any features. Fontana et al. [35] presented a public and shared benchmark for point cloud registration algorithms, using the Generalized ICP (G-ICP), Normal Distribution Transform (NDT), and Probabilistic Point Clouds Registration (PPCR) algorithms for local registration, and TEASER++, Fast Global Registration, and Random Sample Consensus (RANSAC) for global registration. Ge [36] analyzed sampling-based algorithms using RANSAC to overcome coarse registration problems in the automated alignment of two markerless point clouds of building scenes.
A well-known classical method is the Iterative Closest Point (ICP) algorithm, which is used to co-register two point clouds or to align a point cloud to a BIM model [37,38,39]. To obtain the correct transformation parameters for a fine registration, a coarse registration needs to be performed beforehand. ICP assumes that the point clouds are already approximately aligned and seeks the rigid transformation that best refines the alignment. The algorithm approximates the correspondences by iteratively searching for the closest distance between points, thus improving the alignment at each step.
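To make the iteration concrete, the following is a minimal NumPy/SciPy sketch of one point-to-point ICP variant; the function name, fixed iteration count, and lack of a convergence test are illustrative simplifications rather than the implementation used later in this paper.

import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=50):
    # source, target: (N, 3) and (M, 3) arrays of roughly pre-aligned points
    tree = cKDTree(target)
    for _ in range(iterations):
        # 1. Correspondences: the closest target point for every source point
        _, idx = tree.query(source)
        matched = target[idx]
        # 2. Best rigid transform for these correspondences (Kabsch algorithm)
        sc, mc = source.mean(axis=0), matched.mean(axis=0)
        U, _, Vt = np.linalg.svd((source - sc).T @ (matched - mc))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:  # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mc - R @ sc
        # 3. Apply the transform and repeat, refining the alignment at each step
        source = source @ R.T + t
    return source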
Bouaziz et al. [40] proposed a formulation of the ICP algorithm that achieves robustness via sparsity optimization. This Sparse ICP method addresses the problem by using sparsity-inducing norms, improving the resilience of the registration process to high noise and outliers, but at the cost of significantly degraded performance [41].
The problem of a Euclidean alignment of two roughly pre-registered, partially overlapping, noisy point clouds was investigated by Chetverikov et al. [42]. The authors presented an extension of the ICP, called Trimmed ICP (TrICP), which is based on the consistent use of the Least Trimmed Squares approach. They used a simple boxing structure that divides the space into uniform boxes, or cubes. Depending on the point in space, only the box that contains that point and the adjacent boxes should be considered in the search. The box size is updated as both sets get closer to each other.
To automate the monitoring of construction progress, the alignment of as-built and BIM data is a very important process. Recently, Kaiser et al. [43] proposed an automated co-registration algorithm that aligns photogrammetric point clouds with building models defined in the IFC standard without the use of control points, with the building model serving as the reference. Bosché [44] and Bueno et al. [45] used a plane-based registration method in which planar patches were extracted from the point cloud and the building information model. The main disadvantages of this method are its sensitivity to missing data, outliers, and insufficient overlap.
Turkan et al. [46] described a construction progress monitoring system that combines 3D object recognition technology with schedule information. Based on the laser scan’s acquisition date, the system automatically constructs a time-adjusted 3D model and compares it to the laser scan. The system compares the number of recognized objects with the number of expected objects, and the schedule is then automatically updated based on the progress estimates. An automated approach for the recognition of physical progress based on an as-built structure-from-motion (SfM) point cloud model, derived from unordered site photographs, and an IFC-based BIM is presented in [47]. The authors propose a machine-learning scheme based on a Bayesian probabilistic model for automating the monitoring of physical progress, together with a system that automatically colors the deviations over the IFC-based BIM. Pučko et al. [48] proposed a Scan-vs-BIM method for automated construction progress monitoring in which the as-built model is constantly updated using low-precision small scanning devices that can be attached to safety helmets or machinery. The presented method allows the differences between the two models and deviations from the time schedule to be identified. Another as-built vs. as-designed model comparison for progress monitoring was made using daily site images and BIM [49]. An as-built point cloud was generated from the images, then a CAD model was generated from that point cloud and compared to the as-planned model. A framework describing the main steps of a scan-to-BIM process and their relationships is presented in [50].
Braun et al. [51] presented a machine-learning-based object-detection approach that supports progress monitoring by verifying element categories against the expected data from the digital model. The authors’ idea was to improve the geometric as-planned IFC vs. as-built point cloud comparison with additional layers of information, focusing on schedule analysis, color detection, and image-based element detection. Several articles have presented methods for automatically segmenting and classifying point clouds, generating 3D surfaces, and creating IFC building elements [52,53].
Recent studies have explored the parameterization capabilities of BIM, focusing on the automated updating of the initial BIM to reflect as-built conditions [54,55]. Laser scanning data is converted into well-connected as-built BIMs, where algorithms can detect both irregularly and regularly shaped components considering both shape and pose parameters. Point cloud compression approaches and their integration into BIM models using an IFC schema extension and a binary serialization format were proposed by Krijnen and Beetz [56].

3. Methodology

The proposed construction progress monitoring methodology is based on the alignment and spatial association of point cloud data obtained from laser sensors with point cloud data created by extracting the vertices of objects stored in IFC files. The presented methodology was tested under different conditions using six datasets, which included data obtained using terrestrial and handheld laser scanning techniques, point cloud data modified by removing objects of building structures, and scans of noisier and more occluded scenes. Laser scanners of different specifications and price ranges were used, with considerable differences in the accuracies and other parameters declared by the manufacturers. Each point cloud had a corresponding as-planned BIM model in the IFC format. The research methodology scheme is provided in Figure 1.
To evaluate the performance of the proposed method, three experiments were performed, each using two datasets consisting of a point cloud and the corresponding IFC model. In the first experiment, the datasets consisted of as-built data obtained by static laser scanning from an office building and data obtained by mobile scanning from the construction of a residential building. The scans were performed in a relatively clean environment, and unwanted points were removed from the data. In the second experiment, the point clouds were modified by removing some objects, such as columns and walls. Finally, data was collected using static laser scanning in an occluded environment in another part of the office building. Both scans were performed at the same location using different static scanners. The obtained data was minimally processed, without removing unwanted points, so that the point cloud reflected the real situation at the construction site. These data were used in the third experiment. A concise description of the datasets is given in Table 1. The data are described in more detail in Section 3.1.

3.1. Data Description

The data were collected on the construction sites of the Sqveras office building and Piliamiestis A1 residential building projects located in Kaunas, Lithuania. The investigations were carried out one floor at a time; therefore, the scans covered only those parts of the buildings, and scanning of the whole building was not considered. In the office building, the data was collected from the first and third floors. The structures of both floors were the same, with each floor area being 635 m². The third floor was chosen for its relatively clean environment, as the building structures were not covered with building materials or other equipment. A FARO Focus S terrestrial laser scanner was used to scan this area. The resulting data was used for the initial object detection research, as the point cloud was of high quality and free of occlusions. The point cloud obtained during this scan was georeferenced using seven paper-printed checkerboard-pattern control points and a TOPCON GT series positioning system. Other scans in this building were performed on the first floor, purposefully selecting the location to evaluate the performance of the method in an occluded and noisy environment. In this case, the scanning was performed with two different laser scanners: the same FARO Focus S and a Leica BLK360. Finally, data was collected in another project, a residential building construction site, using the mobile scanning method with a ZEB-GO handheld laser scanner. The workflow of the static and mobile data acquisition and pre-processing is schematically illustrated in Figure 2. The main parameters of the scanners are presented in Table 2.
The scan registrations were performed using software provided by the manufacturers, such as Leica Cyclone and GeoSLAM Hub, and the commercially available Autodesk ReCap. The Autodesk ReCap software was used for the final point cloud processing and subsequent use in automating the monitoring process. Unnecessary points were removed, and unified files with the registered scans were exported into the PTS (plain text file) format, which lists the x, y, z coordinates of each point. PTS is a single-scan file that combines information from the files in a project. It is a simple text file used to store point data.
The proposed automated object detection method requires only the geometric information of the point cloud data. The main focus of this research is on the detection of walls and columns; the floor and ceiling planes were not analyzed; therefore, the upper slabs were removed in the models for better visualization (Figure 3). The main point cloud parameters are listed in Table 3.
The BIM models were created using the Tekla Structures 2017 software and exported in the IFC format using the IFC2X3 file schema. The resulting files were edited in the Simplebim 8.2 SR1 software, removing all unnecessary elements and element assemblies from the model and keeping only the required floor. The modified file was saved again in the IFC format.

3.2. IFC and Point Cloud Data Alignment

IFC files and point cloud files contain data of two different natures that cannot be compared directly. IFC files contain a well-structured list of elements that belong to a building, such as walls, columns, beams, and slabs. Each element is stored with its properties, such as vertices, lines, and faces, and their x, y, z spatial locations. On the other hand, point cloud files contain a list of points representing objects found in a building during the scanning process. These points are listed along with their x, y, z spatial locations; however, apart from spatial information, these points carry no information about what they represent, i.e., a wall, a column, or a beam. At this point, we had two options: (1) convert the point cloud data to IFC data or (2) convert the IFC data to point cloud data. The first option is complex and requires a detailed understanding of the IFC format; the second option only requires an understanding of how element vertices are stored in IFC files and how to access them. In addition, a widely used method for automatic alignment is the ICP algorithm, which works on two point clouds: a source and a target. In this case, the source point cloud was the data obtained from the lidar scanners, and the target point cloud was created by extracting the vertices of the elements in the IFC files.

3.2.1. IFC Data Extraction

To perform the task of the automated alignment of the point cloud data with the IFC data, in this work we first explored the IFC file format and executed an element vertex extraction, which allowed us to create a point cloud data version of the IFC data, as shown in Figure 4.
IfcOpenShell http://www.ifcopenshell.org (accessed on 14 July 2022) is an open-source (LGPL) software library for working with the IFC file format. In this work, IfcOpenShell was used together with Python to read the IFC format and extract the geometry information of the objects in the IFC file. The geometry information for each object is represented as a list of faces, lines, and vertices and the relationships between them. Only the x, y, z spatial information of the vertices of each object is needed to create a point cloud.
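For illustration, the sketch below shows how such a vertex extraction can be performed with IfcOpenShell; the file name is hypothetical, and the element filter is one reasonable choice rather than the exact query used in this study.

import numpy as np
import ifcopenshell
import ifcopenshell.geom

settings = ifcopenshell.geom.settings()
settings.set(settings.USE_WORLD_COORDS, True)  # express vertices in model coordinates

model = ifcopenshell.open("building.ifc")      # hypothetical file name
points = []
for element in model.by_type("IfcBuildingElement"):  # walls, columns, beams, slabs, ...
    try:
        shape = ifcopenshell.geom.create_shape(settings, element)
    except RuntimeError:
        continue                               # element without a geometric representation
    verts = shape.geometry.verts               # flat list: x0, y0, z0, x1, y1, z1, ...
    points.append(np.asarray(verts).reshape(-1, 3))

target = np.vstack(points)                     # point cloud assembled from the IFC vertices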

3.2.2. Transformation Estimation Using Three Points of Reference

In this work, instead of using the ICP method without any information other than a simple point cloud, the alignment was initiated by allowing the user to select three reference points in each point cloud (source and target data), as shown in Figure 5. The reference points are listed in matching order, so that the first reference point selected in the source data corresponds to the first reference point selected in the target data, and so on; the 1st is yellow, the 2nd is magenta, and the 3rd is orange. This process calculates an initial transformation matrix that was applied to the source data to perform an initial alignment.
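A sketch of this step using Open3D follows; the coordinates are placeholder values standing in for the three user-picked points, listed in matching order.

import numpy as np
import open3d as o3d

# Three corresponding reference points in each cloud (placeholder coordinates)
src_pts = np.array([[412.8, 612.3, 1.2], [420.1, 612.5, 1.1], [419.9, 618.0, 4.0]])
dst_pts = np.array([[0.0, 0.0, 1.2], [7.3, 0.2, 1.1], [7.1, 5.7, 4.0]])

src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(src_pts))
dst = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(dst_pts))
corres = o3d.utility.Vector2iVector([[0, 0], [1, 1], [2, 2]])  # 1st-to-1st, 2nd-to-2nd, 3rd-to-3rd

est = o3d.pipelines.registration.TransformationEstimationPointToPoint()
T_init = est.compute_transformation(src, dst, corres)  # 4x4 initial transformation matrix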

3.2.3. ICP Point-to-Point Alignment

After the initial alignment, an ICP point-to-point alignment was performed on all data points in the source and target point clouds. This process produced a transformation matrix that was applied to the source data to perform the final alignment. This final alignment was used to perform the as-built vs. as-planned object monitoring process.
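A minimal Open3D sketch of this refinement, assuming source and target point clouds and the initial matrix T_init from the previous step; the 0.03 m correspondence threshold reported in Section 4.1 is used here.

import open3d as o3d

# source: scanner point cloud; target: cloud of IFC vertices; T_init: initial alignment
result = o3d.pipelines.registration.registration_icp(
    source, target, 0.03, T_init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
source.transform(result.transformation)  # apply the final alignment to the source data
print("RMSE:", result.inlier_rmse)       # alignment quality over inlier correspondences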

3.3. As-Built vs. IFC Object Monitoring

Once the source point cloud from the LIDAR sensor was aligned with the target point cloud, the process shown in Figure 6 was executed to perform the as-built vs. as-planned object monitoring.
The process steps are explained as follows:
  • Object extraction: As described before, IFC files contain a structured list of objects that belong to a building. This object extraction task was used to explore the IFC files to produce a list of object IDs.
  • Geometry extraction: Each object in an IFC file consists of three main properties: (1) vertices, (2) lines, and (3) faces. The geometry extraction process allowed us to extract these properties.
  • Face extraction: IFC files provide face information for each element as a relationship between vertices. As a result, we only extracted face and vertex information in this process.
  • Plane extraction: Since we were working with 3D information, this module allowed us to determine which plane, namely (x, y), (x, z), or (y, z), each face belonged to. This was determined by looking at the axes along which the face extends furthest.
  • Plane equation: The methodology we adopted to verify that an object in the IFC model was complete using point cloud data was to count how many points lie close to the faces of each object. This task is called point-to-plane distance estimation, and to perform it, we first need to calculate the face plane equation. In IFC files, the object faces are represented as mesh triangles, which is convenient since we only need three points to calculate the equation of the plane:
    ax + by + cz + d = 0
  • Plane limits: Since we used the equation of the plane to calculate the point-to-plane distance, this calculation yields a distance for every point, even those that do not lie within the specific limits of the plane, as shown in Figure 7a. This process allowed us to discard the points located outside the limits of the current plane (see Figure 7b).
  • Point-to-object relation: Each object was explored and analyzed individually, face by face. For each face, the point-to-plane distance estimation was performed. A point is associated with a face when its distance to the face is less than a threshold. The first output of this module was a list of faces and the number of points related to each face of each object. The final output was a list of objects marked as completed or not completed. A condensed sketch of these plane-based steps follows this list.
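As a condensed illustration of the plane equation, plane limits, and point-to-face association steps, the NumPy sketch below processes one mesh triangle of a face; the plane limits are approximated by the triangle's bounding box, a simplification of the limit check described above, and the 0.03 m threshold is a placeholder.

import numpy as np

def face_plane(p0, p1, p2):
    # Plane ax + by + cz + d = 0 from one mesh triangle of the face
    n = np.cross(p1 - p0, p2 - p0)
    n = n / np.linalg.norm(n)
    return n, -np.dot(n, p0)

def points_on_face(cloud, p0, p1, p2, threshold=0.03):
    # cloud: (N, 3) array of aligned as-built points
    n, d = face_plane(p0, p1, p2)
    dist = np.abs(cloud @ n + d)                      # point-to-plane distance
    lo = np.minimum.reduce([p0, p1, p2]) - threshold  # plane limits, approximated here
    hi = np.maximum.reduce([p0, p1, p2]) + threshold  # by the triangle's bounding box
    inside = np.all((cloud >= lo) & (cloud <= hi), axis=1)
    return int(np.count_nonzero((dist < threshold) & inside))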

3.4. Optimization

3.4.1. Downsampling

When working with point cloud data, the sheer amount of data is always a problem. Point cloud data is useful because it provides a geometric description of real-world entities by discretizing them into a group of points that collectively resemble the shape of the environment of interest. However, the challenge with 3D point clouds is that the data density can be higher than necessary for a given application. This often results in higher computational costs for subsequent data processing or visualization. To manage dense point clouds more efficiently, their data density can be reduced. In this work, we applied sub-sampling to reduce the size of the point cloud data. A function provided by Open3D (open3d.geometry.voxel_down_sample) with a voxel size of 5 cm (0.05 m) was used to perform the downsampling (see Figure 8).
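As an illustration of this step, the call below performs the 5 cm voxel downsampling; the file name is a placeholder, and recent Open3D releases expose the function named above as a method on the point cloud object.

import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.pts")      # PTS export described in Section 3.1
down = pcd.voxel_down_sample(voxel_size=0.05)  # keep one representative point per 5 cm voxel
print(len(pcd.points), "->", len(down.points))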

3.4.2. Exploration of One Triangle per Face

The proposed method requires finding the faces that compose each object in the IFC data. IFC files represent faces as triangles that form a mesh, so each face of an object consists of two triangles, as presented in Figure 9. The example wall consists of six faces, each composed of two triangles. Since we used the point-to-plane distance estimation to determine whether a point was related to a face, it was observed during the experiment phase that the same number of points were associated with both triangles of a face. This makes sense because the two triangles lie in the same plane and thus yield the same plane equation. Based on this observation, we processed only one triangle per face to determine whether that face was completed or not.

4. Results and Discussion

The results of the study are presented in the following way: Section 4.1 looks at the alignment aspects, and Section 4.2 evaluates the object detection method in various environments, including occlusion and downsampling. In Section 4.3, the outputs, such as the error report and visualizations, are analyzed.

4.1. Alignment

Part of the point cloud data used in this study was in a global coordinate system, while the other part was in a zero-origin coordinate system. In the Open3D library, it is more convenient to work with x, y, z data converted to zero-origin coordinates. One reason is that the source data and the target data can otherwise be very far apart. Another reason is that the calculations become more complicated when the coordinate values of each point run into the hundreds of thousands or millions. For this reason, the georeferenced data was transformed to a zero origin (Figure 10a). An initial alignment of the point cloud and the vertices extracted from the IFC file was performed using three user-selected reference points, followed by the ICP method for final refinement. The calculated transformation matrix was applied to the source point cloud obtained with the scanners. The distance threshold had a significant impact on the alignment performance, and changing the threshold value would give different results, despite using the same data. Since the rough alignment was first calculated using the correspondences given by the user, a threshold distance of 0.03 m (3 cm) was then used for the final point-to-point ICP alignment. The results of the initial alignment after selecting three correspondences are shown in Figure 10b. After this process, the models were reasonably well aligned. During the testing process, both the point-to-plane and point-to-point ICP alignment methods were tested, and we obtained better results using the point-to-point ICP method. For this reason, the point-to-point ICP algorithm was used to minimize the difference between the two point clouds, and after refinement, the alignment accuracy improved to approximately 0.021 m root mean square error (RMSE). An example of the final alignment result is shown in Figure 11.

4.2. Evaluation of Object Detection

The detection of IFC objects in a point cloud model was performed according to defined parameters that are not strict and allow flexibility in case the scan is not performed accurately or the object surfaces are incompletely scanned. First, our method was evaluated using Datasets 1 and 2. To decide whether an object existed in the point cloud, we first calculated the area of each face of each IFC object and then marked a face as complete if it had at least 400 associated points per square meter, as shown in Figure 8. Each object has several faces, and if at least 60% of its faces were marked as complete, the object was marked as complete. In all the tests performed, these parameters accommodated the aforementioned conditions and allowed complete objects to be detected automatically without errors.
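The decision rule can be summarized by the following sketch, which uses the 400 points per square meter and 60% thresholds stated above; the function and its inputs (per-face areas in square meters and associated point counts) are illustrative of the rule, not the study's exact implementation.

def object_complete(face_areas, face_point_counts, density=400.0, face_ratio=0.6):
    # A face is complete when it has at least 400 associated points per square meter;
    # an object is complete when at least 60% of its faces are complete.
    complete = [n / a >= density for a, n in zip(face_areas, face_point_counts)]
    return sum(complete) / len(complete) >= face_ratio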

4.2.1. Removed Objects

Another important aspect of the evaluation of the proposed method was to eliminate certain objects from the as-built model and evaluate how effectively it coped with object detection after some objects were removed from the point cloud, simulating works that had not been completed in reality. Dataset 3 and Dataset 4 were used for this purpose. This work focuses on the monitoring of vertical structures; as a result, objects such as the walls and columns shown in Figure 12 were removed from the point cloud models. Our progress monitoring method coped with this task and automatically detected complete and incomplete objects. A visualization of the results is shown in Figure 13, where the completed objects are automatically represented in green and the incomplete objects in blue.

4.2.2. Noise Level and Occlusion

After successful test results using good-quality processed scan data and after removing certain objects from that data, the next step was to perform an object detection evaluation on models with a poorer scan quality: unnecessary points caused by noise were not removed, and a more occluded scanning environment was selected. Dataset 5 and Dataset 6 were used in this step. All of this contributed to the slight displacement and duplication of some columns and to incomplete object surfaces in the final point cloud output (Figure 14); however, despite these challenges, our method successfully detected the IFC objects in the point cloud data.

4.2.3. Downsampled Data

A very important aspect of making the process more efficient is reducing the amount of data, which significantly reduces the computational costs. We performed a downsampling process using a 5 cm voxel size, resulting in a 91–96% reduction in data points for each point cloud compared to the original data. Decreasing the point density deteriorated the visual appearance and surface detail (Figure 15); however, since we did not consider small objects, no additional constraints arose from this point of view. After downsampling, our method successfully detected the objects in the SQVERAS 3rd and PILIAMIESTIS A1 datasets, which are considered good-quality models with redundant points removed. In the other cases, when downsampling was applied to lower-quality models including the occluded environment, the object detection task was challenging. In both the SQVERAS 1st F (obtained using the Faro scanner) and SQVERAS 1st L (obtained using the Leica scanner) datasets, some of the objects were not detected: two columns were missed in the SQVERAS 1st F model, and three columns and two walls were missed in the SQVERAS 1st L model. In noisy and occluded environments, a higher density of points was required to detect the objects, so downsampling resulted in a significant performance degradation. The object detection results comparing the original point cloud data with the downsampled data are presented in Table 4.

4.3. Visualization and Report

The monitoring report results in an ordered list of 0s and 1s, where 0 means the object was not detected in the point cloud and 1 means the object was built. The results of the experiments are reported in Figure 16. From the results, we can see that without downsampling, the objects were automatically identified with minimal error. The objects removed from the point cloud were identified as not completed. One existing object was not detected in Dataset 4, while all the others were successfully identified. After applying downsampling, two existing columns were not detected in Dataset 5, and three existing columns and two existing walls were not detected in Dataset 6.
In addition, results can be reported based on object global IDs defined in the IFC model and visualized to help track progress. The visualization is based on point cloud data derived from the BIM model, which automatically marks completed and not completed objects with different colors, and provides the integration of both source and target point clouds for additional analysis. The aforementioned methods of reporting and visualizing the results are shown in Figure 17.
Automated progress monitoring is limited due to the constant changes and occluded environments of construction sites. Under such conditions, it is challenging to achieve a high object detection rate, and most detection methods are complex. The proposed methodology provides an improved solution for aligning IFC and point cloud data. The obtained results show that approximately 99 percent of objects can be detected, including in occluded environments. The proposed methodology is believed to be novel for automated construction progress monitoring and can help ensure that up-to-date information is available to control decisions needed to deliver project goals on schedule and within budget.

5. Conclusions and Future Directions

The presented construction progress monitoring method is based on the alignment of point cloud data with the IFC model and automatic object detection. The alignment is based on extracting the vertices of the IFC objects, thus creating a second version of the point cloud. Automated object detection is performed by calculating the relationship between points and objects in the aligned data.
To address the construction monitoring task, three experiments were conducted in two different types of buildings across three different scanning scenes. A total of six datasets were obtained, with each experiment using two datasets consisting of a point cloud and the corresponding IFC model. Dataset 1 consists of a point cloud obtained in an office-type building in an orderly environment using a high-spec static laser scanner. The data was georeferenced and carefully processed to produce a high-quality point cloud model. In addition, a separate scan using a mobile laser scanner was performed in a residential building; the resulting point cloud was used in Dataset 2. Based on these datasets, a baseline assessment of the alignment and object detection was performed. The point cloud models from Dataset 1 and Dataset 2 were modified by removing some objects to simulate these objects not having been built. The modified data were assigned to Dataset 3 and Dataset 4 to evaluate object detection after the removal of some objects. As construction sites tend to be challenging environments, the next step was to perform scans in the office building, but in a different, more occluded location. In one case, a high-spec laser scanner was used, and this data was assigned to Dataset 5; additionally, a lower-spec laser scanner was used, and the resulting point cloud was used in Dataset 6. Less effort was devoted to the quality of these point cloud models, and the noise captured during the scans was not reduced.
The main evaluation criteria of the method are the effectiveness of the source and target data alignment and whether the objects in the point cloud were automatically detected in comparison with the BIM model. Laser scanners of different types and specifications were used in the study, and the method was evaluated under different conditions. Additionally, in some experiments, the number of points was reduced by 91–96%, resulting in a proportional increase in computational efficiency. From the obtained results, it can be concluded that our method copes well with noisy data in an occluded environment, but the quality of the scanning has a large influence on the quality and efficiency of the monitoring, which is especially evident after downsampling.
The main limitations of the method are, first, that small faces normally have no related points in the cloud. Second, the method can automatically detect whether an object is built or not; however, it cannot detect deviations, and in theory, an object that is built inaccurately or is only sixty percent complete, but falls within the detection threshold, will still be identified as built. Third, the coordinates must be transformed to a zero origin. The transformation itself is not a challenge, but using coordinates in a global coordinate system could be beneficial in some cases, and it is not yet clear how effectively automatic alignment would work when aligning a small portion of scan data to a large-scale BIM model. Further research could seek to overcome these limitations. Experiments should be performed on a larger scale, and improvements in the visualization and reporting are also needed.

Author Contributions

Conceptualization, P.K. and A.J.; methodology, P.K. and J.B.F.; data acquisition, P.K.; software tools, P.K. and J.B.F.; validation, J.B.F.; investigation, P.K. and J.B.F.; writing—original draft preparation, P.K.; writing—review and editing, P.K., J.B.F., A.J. and K.M.; visualization, P.K. and J.B.F.; supervision, J.B.F., A.J. and K.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

The authors thank Bentley Systems for their support and YIT Lithuania for providing access to the construction facilities, technical equipment, and data necessary for this study. This publication has emanated from research conducted with the financial support of Science Foundation Ireland (SFI) under grant number SFI/12/RC/2289_P2.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lavikka, R.; Seppanen, O.; Peltokorpi, A.; Lehtovaara, J. Fostering process innovations in construction through industry–university consortium. Constr. Innov. 2020, 20, 569–586. [Google Scholar] [CrossRef]
  2. Alizadehsalehi, S.; Yitmen, I. A Concept for Automated Construction Progress Monitoring: Technologies Adoption for Benchmarking Project Performance Control. Arab. J. Sci. Eng. 2019, 44, 4993–5008. [Google Scholar] [CrossRef]
  3. Kim, C.; Son, H.; Kim, C. Automated Construction Progress Measurement Using A 4D Building Information Model And 3D Data. Autom. Constr. 2013, 31, 75–82. [Google Scholar] [CrossRef]
  4. Navon, R. Research in Automated Measurement of Project Performance Indicators. Autom. Constr. 2007, 16, 176–188. [Google Scholar] [CrossRef]
  5. Turkan, Y.; Bosché, F.; Haas, C.; Haas, R. Tracking Secondary and Temporary Concrete Construction Objects Using 3D Imaging Technologies. Comput. Civ. Eng. 2013, 14, 145–167. [Google Scholar] [CrossRef]
  6. Jadidi, H.; Ravanshadnia, M.; Alipour, M. Visualization of As-Built Progress Data Using Constructionsite Photographs: Two Case Studies. Proc. Int. Symp. Autom. Robot. Constr. (IAARC) 2014, 31, 706–713. [Google Scholar] [CrossRef] [Green Version]
  7. Bosché, F.; Ahmed, M.; Turkan, Y.; Haas, C.; Haas, R. The Value of Integrating Scan-To-BIM And Scan-Vs-BIM Techniques For Construction Monitoring Using Laser Scanning And BIM: The Case Of Cylindrical MEP Components. Autom. Constr. 2015, 49, 201–213. [Google Scholar] [CrossRef]
  8. Son, H.; Bosché, F.; Kim, C. As-Built Data Acquisition And Its Use In Production Monitoring And Automated Layout Of Civil Infrastructure: A Survey. Adv. Eng. Inform. 2015, 29, 172–183. [Google Scholar] [CrossRef]
  9. Hannan Qureshi, A.; Alaloul, W.; Wing, W.; Saad, S.; Ammad, S.; Musarat, M. Factors Impacting the Implementation Process of Automated Construction Progress Monitoring. Ain Shams Eng. J. 2022, 13, 101808. [Google Scholar] [CrossRef]
  10. Carrera-Hernández, J.; Levresse, G.; Lacan, P. Is UAV-SfM Surveying Ready to Replace Traditional Surveying Techniques? Int. J. Remote Sens. 2020, 41, 4820–4837. [Google Scholar] [CrossRef]
  11. Abd-Elmaaboud, A.; El-Tokhey, M.; Ragheb, A.; Mogahed, Y. Comparative assessment of terrestrial laser scanner against traditional surveying methods. Int. J. Eng. Appl. Sci. 2019, 6, 79–84. [Google Scholar]
  12. Giel, B.; Issa, R. Using Laser Scanning to Access the Accuracy of As-Built BIM. In Proceedings of the 2011 ASCE International Workshop on Computing in Civil Engineering, Miami, FL, USA, 19–22 June 2011; pp. 665–672. [Google Scholar] [CrossRef]
  13. Martinez, J.; Albeaino, G.; Gheisari, M.; Volkmann, W.; Alarcón, L. UAS Point Cloud Accuracy Assessment Using Structure from Motion–Based Photogrammetry and PPK Georeferencing Technique for Building Surveying Applications. J. Comput. Civ. Eng. 2021, 35, 05020004. [Google Scholar] [CrossRef]
  14. Dai, F.; Lu, M. Assessing the Accuracy of Applying Photogrammetry to Take Geometric Measurements on Building Products. J. Constr. Eng. Manag. 2010, 136, 242–250. [Google Scholar] [CrossRef]
  15. El-Din Fawzy, H. 3D laser scanning and close-range photogrammetry for buildings documentation: A hybrid technique towards a better accuracy. Alex. Eng. J. 2019, 58, 1191–1204. [Google Scholar] [CrossRef]
  16. Korumaz, S. Terrestrial Laser Scanning with Potentials and Limitations for Archaeological Documentation: A Case Study of the Çatalhöyük. Adv. LiDAR 2021, 1, 32–38. [Google Scholar]
  17. Bauwens, S.; Bartholomeus, H.; Calders, K.; Lejeune, P. Forest Inventory with Terrestrial LiDAR: A Comparison of Static and Hand-Held Mobile Laser Scanning. Forests 2016, 7, 127. [Google Scholar] [CrossRef] [Green Version]
  18. Thomson, C.; Apostolopoulos, G.; Backes, D.; Boehm, J. Mobile Laser Scanning for Indoor Modelling. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, II-5/W2, 289–293. [Google Scholar] [CrossRef] [Green Version]
  19. Trzeciak, M.; Brilakis, I. Comparison of accuracy and density of static and mobile laser scanners. In Proceedings of the 2021 European Conference on Computing in Construction, Rhodes, Greece, 25–27 July 2021. [Google Scholar] [CrossRef]
  20. Lehtola, V.; Kaartinen, H.; Nüchter, A.; Kaijaluoto, R.; Kukko, A.; Litkey, P.; Honkavaara, E.; Rosnell, T.; Vaaja, M.; Virtanen, J.; et al. Comparison of the Selected State-Of-The-Art 3D Indoor Scanning and Point Cloud Generation Methods. Remote Sens. 2017, 9, 796. [Google Scholar] [CrossRef] [Green Version]
  21. Wang, B.; Yin, C.; Luo, H.; Cheng, J.; Wang, Q. Fully automated generation of parametric BIM for MEP scenes based on terrestrial laser scanning data. Autom. Constr. 2021, 125, 103615. [Google Scholar] [CrossRef]
  22. Tzedaki, V.; Kamara, J. Capturing As-Built Information for a BIM Environment Using 3D Laser Scanner: A Process Model. AEI 2013, 2013, 485–494. [Google Scholar] [CrossRef]
  23. Zhang, C.; Arditi, D. Automated progress control using laser scanning technology. Autom. Constr. 2013, 36, 108–116. [Google Scholar] [CrossRef]
  24. Turkan, Y.; Bosché, F.; Haas, C.; Haas, R. Automated Progress Tracking of Erection of Concrete Structures. In Proceedings of the Annual Conference of the Canadian Society for Civil Engineering, Ottawa, ON, Canada, 14–17 June 2011. [Google Scholar]
  25. Noardo, F.; Wu, T.; Arroyo Ohori, K.; Krijnen, T.; Stoter, J. IFC models for semi-automating common planning checks for building permits. Autom. Constr. 2022, 134, 104097. [Google Scholar] [CrossRef]
  26. Koo, B.; Shin, B. Applying novelty detection to identify model element to IFC class misclassifications on architectural and infrastructure Building Information Models. J. Comput. Des. Eng. 2018, 5, 391–400. [Google Scholar] [CrossRef]
  27. Asadi, K.; Ramshankar, H.; Noghabaei, M.; Han, K. Real-Time Image Localization and Registration with BIM Using Perspective Alignment for Indoor Monitoring of Construction. J. Comput. Civ. Eng. 2019, 33, 04019031. [Google Scholar] [CrossRef]
  28. Chen, J.; Li, S.; Lu, W. Align to locate: Registering photogrammetric point clouds to BIM for robust indoor localization. Build. Environ. 2022, 209, 108675. [Google Scholar] [CrossRef]
  29. Soilán, M.; Justo, A.; Sánchez-Rodríguez, A.; Riveiro, B. 3D Point Cloud to BIM: Semi-Automated Framework to Define IFC Alignment Entities from MLS-Acquired LiDAR Data of Highway Roads. Remote Sens. 2020, 12, 2301. [Google Scholar] [CrossRef]
  30. Huang, R.; Xu, Y.; Yao, W.; Hoegner, L.; Stilla, U. Robust global registration of point clouds by closed-form solution in the frequency domain. ISPRS J. Photogramm. Remote Sens. 2021, 171, 310–329. [Google Scholar] [CrossRef]
  31. Fan, L.; Atkinson, P. Accuracy of Digital Elevation Models Derived From Terrestrial Laser Scanning Data. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1923–1927. [Google Scholar] [CrossRef]
  32. Meyer, T.; Brunn, A.; Stilla, U. Change detection for indoor construction progress monitoring based on BIM, point clouds and uncertainties. Autom. Constr. 2022, 141, 104442. [Google Scholar] [CrossRef]
  33. Pexman, K.; Lichti, D.; Dawson, P. Automated Storey Separation and Door and Window Extraction for Building Models from Complete Laser Scans. Remote Sens. 2021, 13, 3384. [Google Scholar] [CrossRef]
  34. Maalek, R.; Lichti, D.; Ruwanpura, J. Robust Segmentation of Planar and Linear Features of Terrestrial Laser Scanner Point Clouds Acquired from Construction Sites. Sensors 2018, 18, 819. [Google Scholar] [CrossRef]
  35. Fontana, S.; Cattaneo, D.; Ballardini, A.; Vaghi, M.; Sorrenti, D. A benchmark for point clouds registration algorithms. Robot. Auton. Syst. 2021, 140, 103734. [Google Scholar] [CrossRef]
  36. Ge, X. Automatic markerless registration of point clouds with semantic-keypoint-based 4-points congruent sets. ISPRS J. Photogramm. Remote Sens. 2017, 130, 344–357. [Google Scholar] [CrossRef] [Green Version]
  37. Liu, W.; Sun, W.; Wang, S.; Liu, Y. Coarse registration of point clouds with low overlap rate on feature regions. Signal Process. Image Commun. 2021, 98, 116428. [Google Scholar] [CrossRef]
  38. Masood, M.; Aikala, A.; Seppänen, O.; Singh, V. Multi-Building Extraction and Alignment for As-Built Point Clouds: A Case Study with Crane Cameras. Front. Built Environ. 2020, 6, 581295. [Google Scholar] [CrossRef]
  39. Zhang, J.; Yao, Y.; Deng, B. Fast and Robust Iterative Closest Point. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 1. [Google Scholar] [CrossRef]
  40. Bouaziz, S.; Tagliasacchi, A.; Pauly, M. Sparse Iterative Closest Point. Comput. Graph. Forum 2013, 32, 113–123. [Google Scholar] [CrossRef] [Green Version]
  41. Mavridis, P.; Andreadis, A.; Papaioannou, G. Efficient Sparse ICP. Comput. Aided Geom. Des. 2015, 35, 16–26. [Google Scholar] [CrossRef]
  42. Chetverikov, D.; Stepanov, D.; Krsek, P. Robust Euclidean alignment of 3D point sets: The trimmed iterative closest point algorithm. Image Vis. Comput. 2005, 23, 299–309. [Google Scholar] [CrossRef]
  43. Kaiser, T.; Clemen, C.; Maas, H. Automatic co-registration of photogrammetric point clouds with digital building models. Autom. Constr. 2022, 134, 104098. [Google Scholar] [CrossRef]
  44. Bosché, F. Plane-based registration of construction laser scans with 3D/4D building models. Adv. Eng. Inform. 2012, 26, 90–102. [Google Scholar] [CrossRef]
  45. Bueno, M.; Bosché, F.; González-Jorge, H.; Martínez-Sánchez, J.; Arias, P. 4-Plane congruent sets for automatic registration of as-is 3D point clouds with 3D BIM models. Autom. Constr. 2018, 89, 120–134. [Google Scholar] [CrossRef]
  46. Turkan, Y.; Bosché, F.; Haas, C.; Haas, R. Automated progress tracking using 4D schedule and 3D sensing technologies. Autom. Constr. 2012, 22, 414–421. [Google Scholar] [CrossRef]
  47. Golparvar-Fard, M.; Peña-Mora, F.; Savarese, S. Automated Progress Monitoring Using Unordered Daily Construction Photographs and IFC-Based Building Information Models. J. Comput. Civ. Eng. 2015, 29, 04014025. [Google Scholar] [CrossRef]
  48. Pučko, Z.; Šuman, N.; Rebolj, D. Automated continuous construction progress monitoring using multiple workplace real time 3D scans. Adv. Eng. Inform. 2018, 38, 27–40. [Google Scholar] [CrossRef]
  49. Mahami, H.; Nasirzadeh, F.; Hosseininaveh Ahmadabadian, A.; Nahavandi, S. Automated Progress Controlling and Monitoring Using Daily Site Images and Building Information Modelling. Buildings 2019, 9, 70. [Google Scholar] [CrossRef] [Green Version]
  50. Wang, Q.; Guo, J.; Kim, M. An Application Oriented Scan-to-BIM Framework. Remote Sens. 2019, 11, 365. [Google Scholar] [CrossRef] [Green Version]
  51. Braun, A.; Tuttas, S.; Borrmann, A.; Stilla, U. Improving progress monitoring by fusing point clouds, semantic data and computer vision. Autom. Constr. 2020, 116, 103210. [Google Scholar] [CrossRef]
  52. Romero-Jarén, R.; Arranz, J. Automatic segmentation and classification of BIM elements from point clouds. Autom. Constr. 2021, 124, 103576. [Google Scholar] [CrossRef]
  53. Bassier, M.; Klein, R.; Van Genechten, B.; Vergauwen, M. IFC walls reconstruction from unstructured point clouds. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, IV-2, 33–39. [Google Scholar] [CrossRef]
  54. Rausch, C.; Haas, C. Automated shape and pose updating of building information model elements from 3D point clouds. Autom. Constr. 2021, 124, 103561. [Google Scholar] [CrossRef]
  55. Yang, L.; Cheng, J.; Wang, Q. Semi-automated generation of parametric BIM for steel structures based on terrestrial laser scanning data. Autom. Constr. 2020, 112, 103037. [Google Scholar] [CrossRef]
  56. Krijnen, T.; Beetz, J. An IFC schema extension and binary serialization format to efficiently integrate point cloud data into building models. Adv. Eng. Inform. 2017, 33, 473–490. [Google Scholar] [CrossRef]
Figure 1. Research methodology.
Figure 2. Workflow of data acquisition and pre-processing: (a) using a ZEB-GO handheld laser scanner, (b) using a FARO Focus S terrestrial laser scanner, and (c) example of a control point, TOPCON GT series positioning system, and FARO Focus S laser scanner.
Figure 3. Main experimental data: (a) point cloud of the office building project SQVERAS; (b) point cloud of the residential building project PILIAMIESTIS A1; (c) industry foundation class (IFC) model of the office building project SQVERAS; (d) IFC model of the residential building project PILIAMIESTIS A1.
Figure 4. Extraction of IFC element vertices: (a) IFC SQVERAS data to point cloud, and (b) IFC A1 data to point cloud.
Figure 5. The three selected reference points are shown in yellow (1st), magenta (2nd), and orange (3rd): (a) office building, and (b) residential building.
Figure 6. Progress monitoring pipeline.
Figure 7. Plane limits: (a) calculates the distance to all points, (b) discards the points that are outside the limits of the current plane.
Figure 8. Points per face area.
Figure 9. IFC object faces by triangles.
Figure 10. Initial alignment: (a) source and target point cloud’s spatial location after conversion to zero origin, and (b) source and target point cloud’s spatial location after initial alignment using 3 reference points.
Figure 11. Final alignment of the source and target point cloud data after applying the Iterative Closest Point (ICP) method: (a) the data was obtained using the FARO Focus S scanner from the third floor of the office building, and (b) the data was obtained using the ZEB-GO scanner from the residential building.
Figure 12. Method evaluation by removing objects in point cloud data: (a) a wall and two columns were removed in SQVERAS 3rd, and (b) two walls and two columns were removed in PILIAMIESTIS A1 point cloud models.
Figure 13. Object detection: (a) SQVERAS 3rd, and (b) PILIAMIESTIS A1.
Figure 14. Data output in occluded and noisy environment: (a) some of the columns have shifted and duplicated, the surface of some objects is not fully scanned and is, therefore, incomplete, (b) an obstacle caused a hole in the surface, and unnecessary points caused by noise were not removed, and (c) formwork is installed above the columns, and there are also scaffolding and other items nearby.
Figure 15. Example of downsampling: (a) original point cloud, and (b) downsampled point cloud.
Figure 16. The results of the experiments.
Figure 17. Reporting and visualization of results.
Table 1. A concise description of the datasets.
Dataset 1: Consists of as-built data obtained from the third floor of the office building and the corresponding IFC-based BIM model. As-built data was obtained using a FARO Focus S70 static laser scanner in a clean environment. The resulting point cloud was georeferenced to the global coordinate system.
Dataset 2: Consists of as-built data obtained from the residential building and the corresponding IFC-based BIM model. As-built data was obtained using a ZEB-GO mobile laser scanner in a clean environment.
Dataset 3: Consists of modified data from Dataset 1. A wall and two columns were removed from the point cloud model. The corresponding IFC-based BIM model was not modified.
Dataset 4: Consists of modified data from Dataset 2. Two walls and two columns were removed from the point cloud model. The corresponding IFC-based BIM model was not modified.
Dataset 5: Consists of as-built data obtained from the first floor of the office building and the corresponding IFC-based BIM model. As-built data was obtained using a FARO Focus S70 static laser scanner in an occluded environment. Unnecessary points in the point cloud affected by noise were not removed.
Dataset 6: Consists of as-built data obtained from the first floor of the office building and the corresponding IFC-based BIM model. As-built data was obtained using a Leica BLK360 static laser scanner in an occluded environment. Unnecessary points in the point cloud affected by noise were not removed.
Table 2. Main parameters of laser scanners.
Device | Type | Distance Accuracy | Scanning Range (m) | Scanning Speed (points/s)
FARO Focus S70 laser scanner | Terrestrial | 1 mm | 70 | 1,000,000
ZEB-GO handheld laser scanner and ZEB-DL2600 data logger | Handheld | 10–30 mm | 30 | 43,000
Leica BLK360 laser scanner | Terrestrial | 4 mm at 10 m; 7 mm at 20 m | 60 | 360,000
Table 3. Main parameters of point cloud data.
Point Cloud Data | Date | Average Population (points/m³) | Number of Points | File Size (MB)
SQVERAS 3rd floor, obtained using Faro laser scanner | 2 July 2019 | 1438.78 | 9,451,351 | 758
A1, obtained using ZEB-GO laser scanner | 24 December 2021 | 521.48 | 12,510,712 | 677
SQVERAS 1st floor, obtained using Faro laser scanner | 2 July 2019 | 2211.69 | 13,882,774 | 1007
SQVERAS 1st floor, obtained using Leica laser scanner | 4 June 2019 | 726.60 | 12,990,132 | 903
Table 4. Object detection results considering data reduction.
Point Cloud Data | Original PC (Number of Points) | Method Validation | Downsampled PC (Number of Points) | Method Validation
SQVERAS 3rd floor, obtained using a Faro laser scanner | 9,451,351 | All objects detected | 834,551 | All objects detected
A1, obtained using a ZEB-GO laser scanner | 12,510,712 | All objects detected | 1,236,999 | All objects detected
SQVERAS 1st floor, obtained using a Faro laser scanner | 13,882,774 | All objects detected | 641,578 | Not all objects detected
SQVERAS 1st floor, obtained using a Leica laser scanner | 12,990,132 | All objects detected | 532,111 | Not all objects detected
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
