Article

Sustainable Application of Hybrid Point Cloud and BIM Method for Tracking Construction Progress

1 Department of Architecture, Yeungnam University College, Nam-gu, Daegu 42415, Korea
2 School of Architecture, Yeungnam University, Gyeongsan-si, Gyeongbuk 38541, Korea
3 School of Architecture & Civil and Architectural Engineering, Kyungpook National University, Buk-gu, Daegu 41566, Korea
* Author to whom correspondence should be addressed.
Sustainability 2020, 12(10), 4106; https://doi.org/10.3390/su12104106
Submission received: 18 April 2020 / Revised: 10 May 2020 / Accepted: 14 May 2020 / Published: 18 May 2020

Abstract

Construction projects have become more complex than in the past as structures have grown larger and taller. This has given rise to many unexpected problems, such as various uncertainties and risk factors, with an increasing frequency of occurrence. Recently, research has been conducted to solve this problem by integrating automated data-collection tools into the measurement of construction-project progress, with most methods relying on spatial sensing technology. This study therefore reviewed the representative technologies applied to construction-progress data collection and identified the unique characteristics of each. The progress-tracking approach proposed in this study is executed through the point cloud and the attributes of BIM and was studied in five stages: (1) acquisition of construction completion data using a point cloud, (2) production of a completed 3D model, (3) interworking of an as-planned BIM model and an as-built model, (4) construction progress tracking via the overlap of the two 3D models, and (5) verification by comparison with actual data. The results confirm that construction progress tracking through the point cloud faces no fundamental technical limitations and that progress data with fairly high efficiency and accuracy can be collected.

1. Introduction

Data related to the progress of construction projects are very useful both to determine whether timelines are being kept and to assess the quality of the work done, and these data are essential to improving the productivity of construction management [1,2]. However, the progress of construction projects is currently tracked in various ways, such as scheduling, utilizing construction methods, expenditure management, and resource/quality management, and it is difficult to accurately track and record all of those activities [3].
The information required to measure the progress of a construction project can be classified into two categories. The first is information related to the plan and design, which can be acquired at the end of the design phase. The second is information related to the current construction progress; this type cannot be easily collected and changes continuously. Unfortunately, on most construction sites, data acquisition depends on the manual recording of information on paper, and the use of photos and documents imposes many constraints in time and space. Automation is considered the most economical solution to these data acquisition problems [4,5].
The goal of this study is to improve the efficiency and accuracy with which progress data are acquired, as this is important to the overall management of the progress of each construction project. To achieve this goal, the study considers recent trends in construction projects and proposes an alternative process for solving the problems related to data collection on project sites. This study conducts verification procedures on several buildings to confirm the validity of the proposed measures and to identify methods of post-processing the acquired data. The contents of each of the performance stages presented in this study are shown in Figure 1.

2. Existing Studies on Automated Progress Data Acquisition

Conventional processes employed to acquire data related to the progress of construction projects are inefficient, both in terms of time and cost, and this has led to many studies in the field of automation technologies [6,7,8,9]. Various mobile IT devices were initially proposed as a way to automate data acquisition because they can transmit information via the Internet. Initial representative studies involved the development of various automated methods to perform field data acquisition using data acquisition technologies (DATs) such as radio frequency identification (RFID), global positioning systems (GPSs), bar codes, time-lapse cameras, and ultra-wideband (UWB) [5,10,11,12,13]. These studies generally indicated that the introduction of mobile-based IT could enhance the efficiency with which data are collected from project sites in real time. Nevertheless, in practice, the proposed methods had technical limitations and were therefore not commonly applied to construction projects. Typical problems include the cost of purchasing equipment and software for construction projects and the cost of upgrading hardware for maintenance. Furthermore, these approaches have not yet moved beyond the conceptual stage in terms of automation, and the usefulness of the information collected has been poorer than that of information collected through other techniques [14].
Photogrammetry is a technique that develops a point cloud model from digital photos in order to acquire data about the progress of a construction project. El-Omari and Moselhi [15] conducted one of the representative studies on photogrammetry, estimating the amount of work done over a certain time from images captured during the corresponding interval. However, as progress data need to be stored over time, sufficient memory space must be reserved for data storage.
Video-based measurement collects progress data by filming the construction site. This method is effective because sequential video frame data can be extracted [16]. Studies that have utilized video-based measurement to acquire construction data have focused on civil engineering projects such as roads and bridges, concentrating mainly on the damage detection and safety assessment of facilities and the detection of mobile equipment [17,18,19]. However, video-based measurements are affected by many factors, including temperature changes of objects, focus, the data-capture range, and camera resolution.
In 3D laser scanning, laser lights are emitted onto an object, and the distance to the object is calculated using the return time of travel of the light. This method is widely used in the engineering field [16]. Representative studies in this area have examined monitoring methods for the process and interference of mechanical, electrical, and plumbing (MEP) by comparing two 3D models, or by utilizing other methods to create 3D models using actual progress data acquired by LIDAR [20,21]. However, as data acquisition using LIDAR requires the emission of laser lights, if an object has a high reflectance, the efficiency decreases [22]. Besides, the high cost and limited applicability of LIDAR in complex indoor environments are obstacles to its popularity.
Augmented reality (AR) is a combination of various technologies, where virtual images from a computer are added to a real environment [23]. BIM is the representative software used for AR, and it is also applicable to visualization, simulation, information modeling, and safety testing [24,25]. The advantages of AR are that the construction progress and potential defects can be easily determined during the decision-making process, and corrections can be made if necessary. While AR has been adopted in a large number of studies, there are still many problems related to user convenience, noise, and data-interference filtering, and practical methods of solving those problems need to be developed. Table 1 summarizes the characteristics of the data acquisition technologies with respect to the elements that should be considered when applying them to measure progress in a construction project.
In recent years, studies have been conducted to verify the progress by comparing as-built 3D models collected through LIDAR with those produced during the design phase [20,26]. Representative studies in this area have examined the visualization of process rate monitoring through a 4D simulation model conducted in combination with modeling based on laser scanning.
Han and Golparvar-Fard [27] developed a progressive model via laser scanning and studied the construction progress through the BIM. Patraucean et al. [28] also conducted research on the modeling method for the progressive status of a project through the BIM. Meanwhile, Adan et al. [29] focused on the recognition of objects. After segmenting the point clouds corresponding to the walls of a building, a set of candidate objects was detected independently in the color and geometric spaces, and an original consensus procedure integrated both results to infer recognition. In addition, the recognized object was positioned and inserted in the as-is semantically rich 3D model, or BIM model. Wang et al. [30] developed a technique to automatically estimate the dimensions of precast concrete bridge deck panels and create as-built building information modeling (BIM) models to store the real dimensions of the panels. Bueno et al. [31] presented a novel automatic coarse registration method that adapts as-is 3D point clouds to 3D BIM models. Rebolj et al. [32] proposed a methodology with three parameters (minimum local density, minimum local accuracy, and level of scatter) to measure the quality of point cloud data for construction progress tracking. While a recent study investigated the relationship between the quality of point cloud data and the successful identification of building elements, research that can identify the required point cloud data quality for each specific application is still lacking.
Therefore, Wang et al. [33] suggested three main future research directions within the scan-to-BIM framework. First, the information requirements for different BIM applications should be identified, and the quantitative relationships between the modelling accuracy or point cloud quality and the reliability of as-is BIM for its intended use should be investigated. Second, scan planning techniques should be further studied for cases in which an as-designed BIM does not exist and for UAV-mounted laser scanning. Third, as-is BIM reconstruction techniques should be improved with regard to accuracy, applicability, and level of automation. Puri and Turkan [34] mentioned that future work should focus on multiple larger construction projects that contain elements with complex geometrical shapes.

3. Point Cloud-Based Progress Data Acquisition

3.1. LIDAR-Based Point Cloud Data

Image scanning is a technique that involves optically reading images and converting them into data, information and objects, and a LIDAR device is a device that supports image scanning [35]. LIDAR emits a laser beam to an object at specific intervals, and expresses the shape of the object in a set of 3D coordinates by using the direction of the reflected laser and the distance measured [36].
Points obtained in this way have 3D X, Y, and Z coordinates, including geo-information, and each constituent point is formed at a location where the laser of the LIDAR is reflected from the object. Accordingly, although no geometric information about the object is given directly, the surface coordinates of the object are included, from which the length, height, and other similar attributes of the object can be acquired (Figure 2). Consequently, a point cloud that includes the geo-information of an object can provide high-resolution data without distortion using a 3D mesh model. More information can be acquired by modeling the points obtained from each scanning task into a shape.
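To make this conversion concrete, the following minimal Python sketch turns a round-trip return time and beam direction into a 3D coordinate. It is illustrative only: the scanner's firmware performs this internally, and the timing and angle values in the example are invented.

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def tof_to_point(return_time_s, azimuth_rad, elevation_rad):
    """Convert one time-of-flight return into an X, Y, Z coordinate.

    The beam travels to the object and back, so the one-way range is
    half the round-trip time multiplied by the speed of light.
    """
    r = 0.5 * C * return_time_s                       # one-way distance (m)
    x = r * np.cos(elevation_rad) * np.cos(azimuth_rad)
    y = r * np.cos(elevation_rad) * np.sin(azimuth_rad)
    z = r * np.sin(elevation_rad)
    return np.array([x, y, z])

# Example: a return after ~66.7 ns corresponds to an object ~10 m away.
print(tof_to_point(66.7e-9, np.deg2rad(30), np.deg2rad(10)))
```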
In each scanning iteration, LIDAR can only scan the object visible in a straight line from it. If there is another object between the LIDAR and the target, no scanning data are acquired; thus, where a laser beam cannot reach a point from the measurement position, information about that point cannot be determined. In addition, as shown in Figure 3, LIDAR emits its light source radially and thus generates a shadow area. In other words, even if a projection plane is created vertical to the scanning direction of the LIDAR, there may be an overlap such as the one shown in the dotted line inside the circle. To prevent such a phenomenon from occurring behind the object to be measured, all of the information pertaining to the appearance of the object needs to be scanned, which means that an object should be scanned at least twice.
Three-dimensional scanning is classified mainly into contact and noncontact methods. Contact scanning is a measurement method that attaches a contact sensor, called a touch probe, to an object; the coordinate measuring machine (CMM) is a representative device. Nevertheless, because the sensor directly touches the surface of the object, the object may be easily deformed, making the measurement either impossible or time consuming for materials that are likely to become deformed [35].
The first principle of noncontact scanning is that 3D coordinates are formed by timing the return of a laser beam emitted to and reflected from an object surface, on the basis of the time-of-flight (TOF) measurement [37]. As this method does not require the sensor to contact the surface of the object, wide areas can be measured at much faster speeds [38]. For TOF measurement, the device is installed on an axis of rotation and rotated by a certain angle for horizontal scanning, while for vertical scanning, the laser reflection mirror inside the device is moved by a certain angle. The second principle of noncontact scanning is laser-based triangulation, as illustrated in Figure 4. The light reflected from a target object, onto which a line-shaped laser beam is irradiated, is measured at a specific cell of a charge-coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS). In other words, this method restores points obtained by scanning a target object into a 3D plane or figure.
The distance between the laser oscillator and the optic sensor is known, and the oscillation angle is also given. Thus, in the triangle formed by the laser oscillator, the optic sensor, and the target object, the remaining side lengths can be obtained from the known side and two angles. A larger number of points can be measured within a given time period compared with the TOF method; however, rotation is needed to scan the whole area. Other methods used to acquire the 3D shapes of objects are shape from shading (SFS) and the structured light system (SLS). SFS restores the 3D shape of an object by illuminating it and measuring the intensity of the reflected light. SLS identifies the outer shape of an object by projecting a light source with a regular pattern onto the target object and using the shape of the reflected pattern.
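A minimal sketch of this triangulation geometry, assuming a simple law-of-sines formulation with invented baseline and angle values:

```python
import numpy as np

def triangulated_range(baseline_m, alpha_rad, beta_rad):
    """Distance from the laser oscillator to the target by triangulation.

    baseline_m : known distance between laser oscillator and optic sensor
    alpha_rad  : emission angle at the oscillator
    beta_rad   : angle at which the sensor observes the reflected spot

    Law of sines: the side opposite beta (oscillator-to-target) equals
    baseline * sin(beta) / sin(gamma), where gamma is the angle at the
    target, i.e., pi - alpha - beta.
    """
    gamma = np.pi - alpha_rad - beta_rad
    return baseline_m * np.sin(beta_rad) / np.sin(gamma)

# 10 cm baseline with two measured angles (values are illustrative).
print(triangulated_range(0.10, np.deg2rad(80), np.deg2rad(75)))
```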
The point cloud data of each scene, obtained by scanning a target object, need to be combined into a single coordinate system in order to measure the object's dimensions, analyze points with nonuniform curvatures, and model shapes. The alignment target is the criterion for this alignment process. Generally, the data of a single scanned scene consist of numerous points, and several hundred million or even billions of points remain after the alignment process. Accordingly, it takes a long time to align data accurately. However, depending on users' demands, the scanning or alignment time may be prioritized over the alignment accuracy, and the scanning or alignment time varies according to the alignment method employed. Table 2 presents the characteristics of alignment methods for point cloud data.
Cloud-to-cloud alignment does not require any specific target but utilizes particular points in a point cloud. After selecting the model space of the two stations to be aligned, a particular point is picked in the same place, and individual points are selected in a multi-pick mode and aligned. Here, a station is a scanning position, that is, the point at which the laser scanner is set up. When selecting the feature points of the scanned scenes obtained at each station, the scenes need to be maximally magnified so that an identical point can be selected and picked, and a fixed point on a nonreflecting material should be chosen. In addition, accurate picking is required because it affects the alignment quality and error rate. When stations are aligned, at least three pairs of identical points are needed between each scan, and the three points need to form as large a triangle as possible to minimize the alignment error. Where three or more stations are to be aligned, this alignment process is repeated.
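The mathematical core of aligning two stations from at least three picked identical points is a least-squares rigid transform. The sketch below uses the standard SVD-based (Kabsch) solution; it is a generic illustration with invented coordinates, not the algorithm of any particular alignment package.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst.

    src, dst : (N, 3) arrays of N >= 3 corresponding picked points, as
    in cloud-to-cloud alignment with identical points in two stations.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Three identical points picked in station 1 (src) and station 2 (dst).
src = np.array([[0., 0., 0.], [4., 0., 0.], [0., 3., 0.]])
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([1.0, 2.0, 0.5])
R, t = rigid_align(src, dst)
print(np.allclose(R, R_true), np.round(t, 3))    # True [1. 2. 0.5]
```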
Target-to-target alignment utilizes targets to align two scanned scenes and combine them into a single scene. Targets are installed beforehand on the plane or bottom, wall, and edge of a target object, and the central points or edges of the targets are used for alignment. Targets must be firmly installed. Where a shadow area is included in a scene captured by an installed LIDAR, the object needs to be scanned from a different direction so that at least three common points can be recognized between the two scan datasets; in this way, accurate alignment is possible. If a target is installed on ground that may be inclined or uneven, care is required because the alignment software may not recognize the target. Besides, as the apparent size of a target varies according to the scanning position, the alignment program may not recognize it; accordingly, if the target object is far from the scanner, the target needs to be larger.
Visual alignment is a manual alignment performed by a user who imports two scanning stations to be aligned into the same space. With this method, after two scanning stations are aligned in the X-axis and Y-axis from the user’s perspective, they are also aligned by being moved on the Z-axis and rotated. Visual alignment is most effective for the same or similar features and is also easy for beginners to master.
Cloud-to-cloud alignment and target-to-target alignment, which identify the coordinates of each point and are basically manual operations, are the representative methods for the geometric modeling of scanning data. However, if the scanned object is complex, a lot of time and alignment work is required. In such cases, a dedicated reverse engineering program is usually implemented to automatically extract and align the parts desired by the user. Nevertheless, automatic extraction using reverse engineering software is limited in the shapes it can handle, and shapes are often wrongly recognized, which results in inaccurate data alignment. For this reason, the user needs to confirm the result of the automatic alignment produced by the reverse engineering software and manually remove the wrongly extracted parts. In other words, manual modeling is still necessary.

3.2. Drone-Based Point Cloud Data

Drone-based photogrammetry can acquire data on large buildings and terrains. As this method is applicable to large areas, it is recognized as an alternative or supplementary approach to conventional measuring devices [39]. With this advantage, drone-based photogrammetry has been used for measurement tasks in diverse fields such as building construction, civil work, cultural property management, disaster prevention, and agriculture [40,41,42]. However, this method produces different outputs depending on the weather and the brightness of the photos. Besides, it is difficult to obtain close-up images, and a large relative error tends to occur depending on the skill of the workers and the performance of the equipment. Recently, numerous studies have aimed to mitigate these disadvantages in several ways, with the majority focusing on verifying the accuracy of the data and enhancing it to a suitable level. In particular, a marker is used for point matching in order to reduce the error range of drone-based scanning [37]. As shown in Figure 5, drone-based photogrammetry can extract point clouds using various software programs such as Pix4D and ContextCapture (Bentley), and it can also capture hardly accessible sites at high altitudes. Thus, this method is being used increasingly widely for data acquisition while monitoring, managing, and inspecting facilities.

4. Verification of Accuracy of Point Cloud Data

4.1. Selection of Target Object and Identification of Recognition Rate

This study acquired point cloud data and verified the accuracy of the data obtained using LIDAR, which can scan both the exterior and interior of buildings, and using a drone, which can capture areas inaccessible to managers. This study also examined a method of acquiring usable data through post-processing, and finally determined the accuracy and error of the data acquisition according to building shape.
In this study, three buildings were selected to acquire point cloud data, which were obtained from the framework of those buildings, that is, from columns, girders, beams, and slabs. Building A consisted of two stories and a rooftop. The framework of this building included 12 columns, 20 girders, 23 beams, and 17 slabs. Building B also comprised two stories and a rooftop. The framework of this building included 25 columns, 36 girders, 40 beams, and 62 slabs. Building C consisted of five stories and a rooftop. For Building C, after point cloud data were acquired by using a drone, ultimate data were obtained by image matching. The base data employed for accuracy verification were acquired by comparing the data that were generated by aligning point clouds with measurements. Table 3 presents details of Buildings A, B, and C, where point cloud data were collected for accuracy verification.
This study adopted the visual alignment where data were visually aligned and rotated in the X, Y, and Z axes of the same space. After 3D point cloud data of Building A were completely aligned, the recognition rate of data acquisition was determined for the members of the framework, which include columns, girders, beams, and slabs. In the case of Building A, when the scanning was conducted, the framework had already been completed, but the finishing work had not yet started. For this reason, data for the member of the framework could be easily acquired, and the scanning conditions were similar to those of real construction sites. Fifty rounds of scanning were carried out, and the total duration was 6 h. To prevent incomplete alignment, the scanning interval was set with an overlap of at least 50–60%. Thus, the recognition rate of the members was 100%, and the cloud data could be reliably acquired using LIDAR.
Building B was not under construction but had already been completed. However, as special finishing work had not yet been conducted, the members of the framework could be identified, and the conditions were thus similar to those of real construction sites. Thirty rounds of scanning were carried out, and the total duration was 3 h. The LIDAR scanning of this building was performed with the acquisition density set to "medium." To prevent incomplete alignment, the scanning interval was set with an overlap of at least 50–60%. Although the acquisition density was set to only medium, the recognition rate of the members was 100%. Thus, the point cloud data acquisition using LIDAR was found to be reliable.
Building A was selected to verify the accuracy of object recognition. However, this building had a rooftop that could not be scanned using a terrestrial LIDAR. Therefore, a drone was needed to obtain aerial photos, from which data on the overall external building shape could be aligned. This study utilized a rotary-wing drone in an experiment to acquire point cloud data. The aerial shots obtained using the drone should be as accurate as possible to minimize alignment errors. However, there are limitations to data acquisition using a drone, pertaining to battery, safety, and GPS technology, which make it almost impossible to acquire the high-quality data required by the user. Where accurate engineering data are needed, the quality of the scanned data should be examined against an appropriate criterion before the data are applied. For Buildings A and C, 209 and 134 photos, respectively, were obtained by operating the drone, and point cloud alignment was then carried out using those photos.

4.2. Determining Error of Aligned Data

The error of the point cloud data was determined by comparing the measurement data of Buildings A and B with the LIDAR-based alignment models of the scanned data. For the measurement data, the real distances within each building were measured using a measuring device. For the alignment model data, the distance between point clouds was measured using a software program. The error was measured by comparing the dimensions of the external width, the distance between columns, and the column height, which correspond to the width, length, and height of the building, respectively. Table 4 presents the errors obtained by comparing measurements and LIDAR scanning results for Buildings A and B. In the case of Building A, the average error values were 0.011 m for the external width, 0.012 m for the distance between columns, and 0.019 m for the column height. In the case of Building B, the corresponding average error values were 0.012 m, 0.011 m, and 0.012 m.
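As a simple illustration of how such average errors are obtained, the sketch below uses the external-width values transcribed from Table 4 for Building A; it yields roughly 0.012 m, on the order of the reported 0.011 m (small differences reflect the paper's own rounding).

```python
import numpy as np

measured = np.array([29.28, 6.62, 10.45])     # tape measurements (m)
scanned  = np.array([29.289, 6.608, 10.465])  # distances picked in the point cloud (m)

avg_error = np.mean(np.abs(scanned - measured))
print(f"average error: {avg_error:.3f} m")    # ~0.012 m, i.e., about 12 mm
```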
According to the BIM guide for 3D imaging published by the General Services Administration (GSA) of the USA, the error must be at most 51 mm for urban design projects and at most 13 mm for architectural designs; otherwise, practical accuracy cannot be maintained. In this study, the errors for each item, identified by performing comparative measurements, were between 11 mm and 19 mm. This result is remarkably close to the 13 mm recommended by the GSA for the application of point cloud data to architectural designs. The distance between the two end points of a target member in the scanned data was measured by mouse picking; as this method entails an unavoidable error, the above results indicate that very accurate data were acquired in this study. Consequently, based on the cases of Buildings A and B, the LIDAR-based measurement and alignment of this study is shown to be accurate.
Errors present in the point cloud data obtained using a drone were examined in the same way as in the LIDAR-based error verification. The target was Building A. As the drone could capture only the external building shape, the measurement data of external members were compared with the drone-based alignment model of the scanned data. Table 5 presents the errors between the measurements and the drone-based alignment data for Building A. The average errors for the width, length, and height of the building were 0.378 m (external width), 0.358 m (distance between columns), and 0.072 m (column height), respectively. These values are far above the 13 mm recommended by the GSA for the application of point cloud data to architectural designs. Such a large gap is attributable to the following intrinsic characteristics of drones. First, because a drone captures a target while flying, it is difficult to acquire accurate data. Second, images obtained by a drone need to be converted to point clouds and then imported into a software program that can measure distance, and these steps introduce significant errors. Accordingly, this study used the drone-based point cloud data only for the parts for which data could not be acquired using LIDAR.

5. 3D Modeling of Point Cloud Data

5.1. Creation of 3D Model of Point Cloud Data for Target Object

Upon verification of the accuracy of the point cloud data acquired by drone- and LIDAR-based scanning, the LIDAR data were shown to have higher accuracy than the drone data. However, progress data acquisition is likely to involve inaccessible areas, such as rooftops, and acquiring data there can carry risk. In such cases, the application of LIDAR may be restricted, leaving uncertain parts in the alignment of the whole point cloud. In this regard, combining the datasets acquired by a drone and by LIDAR prevents the loss of data, thus improving the accuracy of progress data for construction sites. As shown in Figure 6, to improve the accuracy of progress data, this study combined the two types of point cloud data. The mixing process can be summarized as follows (a minimal sketch is given after the list).
  • As the two types of point cloud data acquired by LIDAR and a drone have different file formats, their file attributes need to be unified prior to mixing those data. In this study, the data acquired by a drone had the p4d format and were thus converted to xyz coordinates, which indicated GPS coordinates, in order to be combined with the LIDAR-based data.
  • As the drone-based data thus converted to xyz coordinates had fixed coordinates, automatic alignment was possible without the need for any additional alignment tasks. Accordingly, when the files were imported into the program for LIDAR, the alignment was automatically completed.
  • As the drone-based data were acquired by an aerial shot, they included not only the target object but also the surrounding area. Therefore, the noise was removed to obtain only the necessary part.
  • From the drone-based data, which had been completely imported, only the part available for mixed data was selected, and the remaining parts were removed.
  • The final mixed data were completed by conducting the cloud-to-cloud alignment between the selected drone-based data and the LIDAR-based data.
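As referenced above, a minimal sketch of the merge, assuming both clouds have already been exported to ASCII xyz files in a shared GPS coordinate system as described in the steps; the file names are hypothetical, and the final cloud-to-cloud fine alignment is left to the scanning software.

```python
import numpy as np

def load_xyz(path):
    """Read an ASCII .xyz file: one 'x y z' point per line."""
    return np.loadtxt(path)[:, :3]

def crop_to_box(points, lo, hi):
    """Keep only points inside the axis-aligned box [lo, hi].

    Used here to strip the surrounding terrain ("noise") captured by
    the aerial shot, keeping only the target building.
    """
    mask = np.all((points >= lo) & (points <= hi), axis=1)
    return points[mask]

# Hypothetical file names for illustration.
lidar = load_xyz("building_a_lidar.xyz")
drone = load_xyz("building_a_drone.xyz")   # exported from p4d to xyz

# Remove surroundings from the drone cloud, then merge the two sets.
drone = crop_to_box(drone, lo=lidar.min(axis=0) - 1.0, hi=lidar.max(axis=0) + 1.0)
merged = np.vstack([lidar, drone])
np.savetxt("building_a_merged.xyz", merged, fmt="%.3f")
```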

5.2. 3D Polygon Mesh Modeling

Delaunay triangulation (DT) and the Voronoi diagram (VD) are the basic concepts for 3D polygon mesh modeling. A DT connects points on a plane into triangles such that the minimum interior angle over all triangles is maximized and the circumcircle of any triangle contains no vertex other than the triangle's own three. In other words, of the various possible triangulations, it is the division in which each triangle is as close as possible to an equilateral triangle. Meanwhile, a VD divides a plane containing a set of points into polygons, one per point: each pair of adjacent generating points is connected by a line segment, and the perpendicular bisector of each segment is drawn; these bisectors bound the polygon around each generating point. DT and VD are duals, and if one is known, the other can be obtained immediately.
Figure 7a shows the VD and DT of the same set of points. The VD is created by sequentially linking the centers of the circumcircles of the DT triangles that share a generating point as a common vertex, and by linking points between adjacent VD regions, the DT can be generated. For 3D stereoscopic modeling from 3D point clouds, DT allows a polygon mesh to be obtained from a collection of surface points. Triangulation in 3D is called tetrahedralization or tetrahedrization [43]: the partitioning of the input domain into a collection of tetrahedra that meet only at shared faces (vertices, edges, or triangles). Polygons are typically ideal for accurately representing measurement results, providing an optimal surface description. However, the results of tetrahedralization are much more complicated than those of a 2D triangulation. Therefore, this study utilized commercial modeling software packages: the Leica Cyclone platform for 3D point cloud data visualization and processing, and the Leica 3D Reshaper platform for polygon mesh model generation (Figure 7b,c).
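For illustration, the 3D Delaunay partition described above (tetrahedralization) and its dual Voronoi diagram can be computed with SciPy's Qhull bindings; this generic sketch with random points stands in for the commercial packages actually used in the study.

```python
import numpy as np
from scipy.spatial import Delaunay, Voronoi

rng = np.random.default_rng(0)
points = rng.random((200, 3)) * 10.0       # stand-in for a small 3D point cloud (m)

# In 3D, Delaunay produces tetrahedra: each simplex has 4 vertex indices.
tet = Delaunay(points)
print("tetrahedra:", tet.simplices.shape)  # (n_tetrahedra, 4)

# The dual Voronoi diagram of the same generating points.
vor = Voronoi(points)
print("Voronoi regions:", len(vor.regions))
```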
Mixed point cloud data can be configured into a 3D model using a modeling process that generates polygons from the outline of the point cloud. After the polygon model of each member is generated, the ultimate 3D model is completed by an editing process. However, this modeling method cannot reflect all the details of the acquired data. Construction projects usually include the installation of molds and the casting of concrete, which may cause errors or bent surfaces that were not originally planned. Manual 3D modeling sets the surfaces of each member and allocates heights in the form of straight lines; accordingly, fine defects such as a small slope or bends on a target surface cannot be modeled. Nevertheless, such deviations can still be detected by comparing the model with the actual plan. Figure 8 illustrates a representative process of 3D modeling for a completely aligned point cloud.

5.3. Determination of Errors in the Created 3D Model

The acquisition of accurate data is the most essential part of reverse engineering that uses progress data acquired from a construction site. In the process proposed in this study, data accuracy rests on the 3D shapes of buildings; accordingly, error identification is necessary for the 3D shape of a target building. This study therefore verified the shape of the created 3D model by comparing its volume with actual data, using the amount of concrete poured during construction as the actual data. Table 6 presents the locations, dates, and volumes (m3) of concrete poured in Building A. Concrete was poured six times, and the total volume was 522 m3.
In the case of Building A, the initial data acquired by LIDAR were limited to the above-ground part, and backfilled parts, such as the sub-slab concrete and foundation, were excluded. In other words, the 3D model was generated from the point cloud data of the above-ground part only. Accordingly, the poured-concrete volume data were compared against the 468 m3 that covered the PIT, 1F, 2F, and the protective concrete and rooftop.
Like other commercial software programs, the software that automatically creates a 3D model from an imported point cloud enables the length, area, and volume of each object to be identified. The volume of the 3D model of Building A was measured to be 479 m3. When the actual data were compared with the volume of the 3D model, the difference was 11 m3, corresponding to a difference of less than 3% relative to the actual data. Thus, the 3D model showed relatively little error compared with the volume based on the actual data, demonstrating high accuracy.
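For reference, the volume of a closed triangle mesh can be computed by summing signed tetrahedron volumes (the divergence theorem). The sketch below is a generic illustration, not the routine of the commercial software used, and ends with the percent-difference arithmetic for Building A.

```python
import numpy as np

def mesh_volume(vertices, faces):
    """Volume enclosed by a closed triangle mesh via signed tetrahedra.

    Each face (a, b, c) forms a tetrahedron with the origin whose signed
    volume is dot(a, cross(b, c)) / 6; consistently oriented faces sum
    to the enclosed volume.
    """
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    return abs(np.einsum('ij,ij->i', v0, np.cross(v1, v2)).sum()) / 6.0

# Sanity check with a unit right tetrahedron: expected volume 1/6.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
faces = np.array([[0, 2, 1], [0, 1, 3], [0, 3, 2], [1, 2, 3]])
print(mesh_volume(verts, faces))           # 0.1666...

# Percent difference between the model volume and the poured concrete.
print(abs(479 - 468) / 468 * 100)          # ~2.35%, i.e., under 3%
```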

5.4. Visualization of Construction Progress

In order to track the progress of a project, the current status should be compared with the planned status. This study examined an overlap-based method of comparing the BIM model, which provides the as-planned data of a project, with the point cloud-based 3D model, which shows the as-built data. For the comparison, the point cloud-based 3D model needs to be imported into the BIM model. However, the two types of 3D models are implemented on different software bases, so file conversion is required to import the data. Figure 9 shows the process involved in comparing the two models.
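One common way to realize such an overlap comparison (a scan-vs-BIM check) is a nearest-neighbor deviation test between points sampled from the as-planned BIM surfaces and the as-built cloud. The sketch below, with invented data and a hypothetical 5 cm tolerance, illustrates the idea rather than the exact procedure of this study.

```python
import numpy as np
from scipy.spatial import cKDTree

def deviation_to_plan(as_built_pts, as_planned_pts, tol=0.05):
    """For each as-built point, distance to the nearest as-planned point.

    Points within `tol` (m) of the planned surface are treated as built
    according to plan; the rest flag deviations or unbuilt work. Both
    clouds must share one coordinate system, i.e., the point cloud model
    has already been registered to the BIM model.
    """
    tree = cKDTree(as_planned_pts)
    dist, _ = tree.query(as_built_pts)
    return dist, dist <= tol

# Hypothetical arrays: points sampled from BIM surfaces and the scan.
planned = np.random.default_rng(1).random((1000, 3)) * 10
built = planned + np.random.default_rng(2).normal(0, 0.01, planned.shape)
dist, ok = deviation_to_plan(built, planned)
print(f"{ok.mean():.0%} of scanned points within tolerance")
```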

6. Conclusions

This study proposed methods that can be used to track the progress of construction projects, and each of the proposed methods was verified. With respect to data acquisition, the drone- and LIDAR-based point cloud data acquisition methods were examined, and the accuracy of the data was verified with respect to their application to actual construction projects. The LIDAR-based point cloud data had errors of roughly 11–19 mm, indicating a high accuracy level, whereas the drone-based data showed a considerably lower accuracy level. Because the progress data are based on the 3D shapes of buildings, errors in the 3D shapes were also examined. In the case of Building A, the 3D model based on point cloud data differed from the actual data by 11 m3, a difference of less than 3%, thereby demonstrating a low error rate.
In order to track the progress of a project, the current status should be compared with the planned status. The overlapping method proposed in this study for the BIM model and the point cloud-based 3D model enabled the actual progress to be visualized and compared with the corresponding plan. It is therefore expected to permit project managers to track project progress more easily and to identify the precise status of work that has not proceeded as planned. This offers the advantage that progress management can be carried out through the establishment of future construction plans and the review of schedules. Various reports and related data based on the visualized three-dimensional models should also greatly help project participants and stakeholders. All additionally accumulated data could likewise serve as a basis for the maintenance phase after the end of the project or for similar projects in the future.
Based on the results obtained, the data acquisition method proposed in this study appears to be very efficient and can enable project managers to assess progress and manage projects comprehensively. In particular, as decisions can be made quickly based on rapid information delivery, workers' errors and the accompanying need for reconstruction can be prevented, leading to reductions in time and cost overruns. However, this study also showed that errors and omissions in the alignment of point cloud data caused poor-quality data alignment. The representative causes were the reflectivity of the surfaces onto which the laser was emitted, the distance, and the atmospheric environment; the laser path was also problematic. If it is possible to omit a specific section, or to utilize an independent section that does not need to be aligned with others, the problem may be trivial. However, if the section is an essential one that interfaces with different sections, the problem must be resolved.

Author Contributions

In this paper, S.K. (Seungho Kim) collected the data and wrote the paper. S.K. (Sangyong Kim) analyzed the data and conceived the methodology. D.-E.L. developed the ideas and designed the research framework. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. NRF-2018R1A5A1025137).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Akinci, B.; Boukamp, F.; Gordon, C.; Huber, D.; Lyons, C.; Park, K. A formalism for utilization of sensor systems and integrated project models for active construction quality control. Autom. Constr. 2006, 15, 124–138.
  2. Tang, P.; Huber, D.; Akinci, B.; Lipman, R.; Lytle, A. Automatic reconstruction of as-built building information models from laser-scanned point clouds: A review of related techniques. Autom. Constr. 2010, 19, 829–843.
  3. Nahangi, M.; Haas, C. Automated 3D compliance checking in pipe spool fabrication. Adv. Eng. Inform. 2014, 28, 360–369.
  4. Cheng, T.; Venugopal, M.; Teizer, J.; Vela, P. Automated trajectory and path planning analysis based on ultra wideband data. J. Comput. Civ. Eng. 2011, 26, 151–160.
  5. Woo, S.; Jeong, S.; Mok, E.; Xia, L.; Choi, C.; Pyeon, M.; Heo, J. Application of WiFi-based indoor positioning system for labor tracking at construction sites: A case study in Guangzhou MTR. Autom. Constr. 2011, 20, 3–13.
  6. El-Omari, S.; Moselhi, O. Integrating automated data acquisition technologies for progress reporting of construction projects. Autom. Constr. 2011, 20, 699–705.
  7. Golparvar-Fard, M.; Peña-Mora, F.; Savarese, S. Automated progress monitoring using unordered daily construction photographs and IFC-based building information models. J. Comput. Civ. Eng. 2015, 29, 04014025.
  8. Kim, C.; Son, H.; Kim, C. Automated construction progress measurement using a 4D building information model and 3D data. Autom. Constr. 2013, 31, 75–82.
  9. Kim, S.; Kim, S.; Tang, L.; Kim, G. Efficient management of construction process using RFID+PMIS system: A case study. Appl. Math. Inf. Sci. 2013, 7, 19–26.
  10. Navon, R.; Berkovich, O. Development and on-site evaluation of an automated materials management and control model. J. Constr. Eng. Manag. 2005, 131, 1328–1336.
  11. Cheng, T.; Venugopal, M.; Teizer, J.; Vela, P. Performance evaluation of ultra wideband technology for construction resource location tracking in harsh environments. Autom. Constr. 2011, 20, 1173–1184.
  12. Giretti, A.; Carbonari, A.; Naticchia, B.; DeGrassi, M. Design and first development of an automated real-time safety management system for construction sites. J. Civ. Eng. Manag. 2009, 15, 325–336.
  13. Golparvar-Fard, M.; Peña-Mora, F.; Arboleda, C.; Lee, S. Visualization of construction progress monitoring with 4D simulation model overlaid on time-lapsed photographs. J. Comput. Civ. Eng. 2009, 23, 391–404.
  14. Wang, Q.; Kim, M.K. Applications of 3D point cloud data in the construction industry: A fifteen-year review from 2004 to 2018. Adv. Eng. Inform. 2019, 39, 306–319.
  15. El-Omari, S.; Moselhi, O. Data acquisition from construction sites for tracking purposes. Eng. Constr. Archit. Manag. 2009, 16, 490–503.
  16. Omar, T.; Nehdi, M. Data acquisition technologies for construction progress tracking. Autom. Constr. 2016, 70, 143–155.
  17. Bosché, F.; Ahmed, M.; Turkan, Y.; Haas, C.; Haas, R. Tracking the built status of MEP works: Assessing the value of a Scan-vs-BIM system. J. Comput. Civ. Eng. 2014, 28, 05014004.
  18. Koch, C.; Jog, G.; Brilakis, I. Automated pothole distress assessment using asphalt pavement video data. J. Comput. Civ. Eng. 2013, 27, 370–378.
  19. Zhu, Z.; Brilakis, I. Machine vision-based concrete surface quality assessment. J. Constr. Eng. Manag. 2010, 136, 210–218.
  20. Bosché, F.; Ahmed, M.; Turkan, Y.; Haas, C.; Haas, R. The value of integrating Scan-to-BIM and Scan-vs-BIM techniques for construction monitoring using laser scanning and BIM: The case of cylindrical MEP components. Autom. Constr. 2015, 49, 201–213.
  21. Turkan, Y.; Bosché, F.; Haas, C.; Haas, R. Automated progress tracking using 4D schedule and 3D sensing technologies. Autom. Constr. 2012, 22, 414–421.
  22. Dai, F.; Rashidi, A.; Brilakis, I.; Vela, P. Comparison of image-based and time-of-flight-based technologies for three-dimensional reconstruction of infrastructure. J. Constr. Eng. Manag. 2013, 139, 69–79.
  23. Wang, X.; Truijens, M.; Hou, L.; Wang, Y.; Zhou, Y. Integrating augmented reality with building information modeling: Onsite construction process controlling for liquefied natural gas industry. Autom. Constr. 2014, 40, 96–105.
  24. Rankohi, S.; Waugh, L. Review and analysis of augmented reality literature for construction industry. Vis. Eng. 2013, 1, 9.
  25. Shirazi, A.; Behzadan, A.H. Design and assessment of a mobile augmented reality-based information delivery tool for construction and civil engineering curriculum. J. Prof. Issues Eng. Educ. Pract. 2015, 141, 04014012.
  26. Liu, Z.; Lu, Y.; Peh, L.C. A review and scientometric analysis of global building information modeling (BIM) research in the architecture, engineering and construction (AEC) industry. Buildings 2019, 9, 210.
  27. Han, K.; Golparvar-Fard, M. Appearance-based material classification for monitoring of operation-level construction progress using 4D BIM and site photologs. Autom. Constr. 2015, 53, 44–57.
  28. Patraucean, V.; Armeni, I.; Nahangi, M.; Yeung, J.; Brilakis, I.; Haas, C. State of research in automatic as-built modelling. Adv. Eng. Inform. 2015, 29, 162–171.
  29. Adan, A.; Quintana, B.; Prieto, S.A.; Bosché, F. Scan-to-BIM for 'secondary' building components. Adv. Eng. Inform. 2018, 37, 119–138.
  30. Wang, Q.; Sohn, H.; Cheng, J.C. Automatic as-built BIM creation of precast concrete bridge deck panels using laser scan data. J. Comput. Civ. Eng. 2018, 32, 04018011.
  31. Bueno, M.; Bosché, F.; González-Jorge, H.; Martínez-Sánchez, J.; Arias, P. 4-plane congruent sets for automatic registration of as-is 3D point clouds with 3D BIM models. Autom. Constr. 2018, 89, 120–134.
  32. Rebolj, D.; Pučko, Z.; Babič, N.C.; Bizjak, M.; Mongus, D. Point cloud quality requirements for Scan-vs-BIM based automated construction progress monitoring. Autom. Constr. 2017, 84, 323–334.
  33. Wang, Q.; Guo, J.; Kim, M.K. An application oriented scan-to-BIM framework. Remote Sens. 2019, 11, 365.
  34. Puri, N.; Turkan, Y. Bridge construction progress monitoring using lidar and 4D design models. Autom. Constr. 2020, 109, 102961.
  35. Kang, T. 3D Scanning Vision Reverse Engineering; CIR Publishing: Seoul, Korea, 2017.
  36. Kwon, S. Object recognition and modeling technology using laser scanning and BIM for construction industry. AIK 2009, 53, 31–38.
  37. Choi, G. A Study on the Comparison and Utilization of 3D Point Cloud Data for Building Objects Using Laser Scanning and Photogrammetry; Sungkyunkwan University: Seoul, Korea, 2017.
  38. Tonon, F.; Kottenstette, J.T. Laser and photogrammetric methods for rock face characterization. In Proceedings of the 41st US Rock Mechanics Symposium, Golden, CO, USA, 17–18 June 2006.
  39. Siebert, S.; Teizer, J. Mobile 3D mapping for surveying earthwork projects using an unmanned aerial vehicle (UAV) system. Autom. Constr. 2014, 41, 1–14.
  40. McLeod, T.; Samson, C.; Labrie, M.; Shehata, K.; Mah, J.; Lai, P.; Elder, J.H. Using video acquired from an unmanned aerial vehicle (UAV) to measure fracture orientation in an open-pit mine. Geomatica 2013, 67, 173–180.
  41. Park, M.H.; Kim, S.G.; Choi, S.Y. The study about building method of geospatial informations at construction sites by unmanned aircraft system (UAS). Korean Assoc. Cadastre Inf. 2013, 15, 145–156.
  42. Zarco-Tejada, P.J.; Díaz-Varela, R.; Angileri, V.; Loudjani, P. Tree height quantification using very high resolution imagery acquired from an unmanned aerial vehicle (UAV) and automatic 3D photo-reconstruction methods. Eur. J. Agron. 2014, 55, 89–99.
  43. Remondino, F. From point cloud to surface: The modeling and visualization problem. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2003, 34, W10.
Figure 1. Construction progress tracking procedure.
Figure 2. Geometric information acquired by LIDAR.
Figure 3. Example of shadow area due to LIDAR scanning.
Figure 4. Laser measurement by triangulation.
Figure 5. Geometric information obtained by a drone.
Figure 6. Combination of drone-based data and LIDAR-based data.
Figure 7. Voronoi diagram and Delaunay triangulation of the same point set (a) and polygon mesh model generation (b,c).
Figure 8. Three-dimensional modeling process for point cloud.
Figure 9. Comparison of the BIM model and the point cloud-based model.
Table 1. Comparison of data acquisition technologies.

Criterion              Mobile IT   Photogrammetry/Videogrammetry   LIDAR    Augmented Reality
Cost                   Medium      Low                             High     High
Automation level       Medium      Medium                          High     High
Educational necessity  Low         Low                             High     Medium
Portability            Medium      Medium                          High     High
Potentiality           Low         Low                             Medium   High
Table 2. Alignment methods for point clouds and their characteristics.

Alignment Method      Scan Time   Alignment Time   Alignment Accuracy
Cloud to Cloud        High        Medium-Low       High
Target to Target      Low         High             High
Auto Registration     High        Low              Medium
Visual Registration   High        Medium           High-Medium
Table 3. Target buildings used to acquire point cloud data.

                 Building A                   Building B                   Building C
Scan type        LIDAR/Drone                  LIDAR                        Drone
Scanning rate    50 scans/209 images          30 scans                     134 images
Duration         6 h/30 min                   3 h                          20 min
Acquired data    Column, Girder, Beam, Slab   Column, Girder, Beam, Slab   Building Exterior
Table 4. Errors of LIDAR-based point cloud data.

                                        Measured Distance (m)   LIDAR Scan (m)          Average Error (m)
Building A   External width             29.28, 6.62, 10.45      29.289, 6.608, 10.465   0.011
             Distance between columns   7.53, 6.67, 1.48        7.547, 6.673, 1.495     0.012
             Column height              2.76                    2.779                   0.019
Building B   External width             35.81, 18.49, 8.08      35.829, 18.491, 8.098   0.012
             Distance between columns   3.09, 3.09, 3.02        3.115, 3.097, 3.021     0.011
             Column height              6.25, 6.71              6.261, 6.724            0.012
Table 5. Errors of drone-based point cloud data.

                                        Measured Distance (m)   Drone Scan (m)          Average Error (m)
Building A   External width             29.28, 6.62, 10.45      28.173, 6.638, 10.46    0.378
             Distance between columns   7.25, 2.85, 5.46        6.651, 2.839, 4.994     0.358
             Column height              2.76                    2.688                   0.072
Table 6. Details of concrete pouring in Building A.

Location      Subslab Concrete   Foundation   PIT      1 F      2 F      Protective Concrete and Rooftop   Total
Date          D + 0              D + 8        D + 21   D + 47   D + 68   D + 131                           -
Volume (m3)   36                 18           174      134      126      34                                522
