Article

Processing Strategy and Comparative Performance of Different Mobile LiDAR System Grades for Bridge Monitoring: A Case Study

1 Lyles School of Civil Engineering, Purdue University, West Lafayette, IN 47907, USA
2 Indiana Department of Transportation Research and Development, West Lafayette, IN 47907, USA
* Author to whom correspondence should be addressed.
Sensors 2021, 21(22), 7550; https://doi.org/10.3390/s21227550
Submission received: 25 October 2021 / Revised: 9 November 2021 / Accepted: 11 November 2021 / Published: 13 November 2021
(This article belongs to the Special Issue Remote Sensing for Health Monitoring of Infrastructure)

Abstract

Collecting precise as-built data is essential for tracking construction progress. Three-dimensional models generated from such data capture the as-is conditions of the structures, providing valuable information for monitoring existing infrastructure over time. As-built data can be acquired using a wide range of remote sensing technologies, among which mobile LiDAR is gaining increasing attention due to its ability to collect high-resolution data over a relatively large area in a short time. The quality of mobile LiDAR data depends not only on the grade of onboard LiDAR scanners but also on the accuracy of direct georeferencing information and system calibration. Consequently, millimeter-level accuracy is difficult to achieve. In this study, the performance of mapping-grade and surveying-grade mobile LiDAR systems for bridge monitoring is evaluated against static laser scanners. Field surveys were conducted over a concrete bridge where grinding was required to achieve desired smoothness. A semi-automated, feature-based fine registration strategy is proposed to compensate for the impact of georeferencing and system calibration errors on mobile LiDAR data. Bridge deck thickness is evaluated using surface segments to minimize the impact of inherent noise in the point cloud. The results show that the two grades of mobile LiDAR delivered thickness estimates that are in agreement with those derived from static laser scanning in the 1 cm range. The mobile LiDAR data acquisition took roughly five minutes without having a significant impact on traffic, while the static laser scanning required more than three hours.

1. Introduction

Keeping bridges in good structural and functional condition starts with effective quality assurance and quality control during construction to ensure that such infrastructure fully satisfies the design requirements [1]. In addition, frequent and accurate inspection of the structural and functional conditions of each bridge is required to ensure traffic safety and prioritize maintenance [2]. Traditional bridge evaluation practices rely on visual inspection and point-based measurements. Deck thickness is typically checked during and after construction to ensure structural adequacy and conformance. For instance, the American Association of State Highway and Transportation Officials (AASHTO) requires that the minimum thickness of a concrete deck be no less than 7 inches [3]. To evaluate bridge deck thickness during construction, measurements are typically taken from a string line pulled between the screed rails or a pole stabbed into the plastic concrete—i.e., freshly poured concrete [4]. Post-construction, ultrasonic thickness meters are usually used. Such approaches are labor-intensive and prone to human error, as they require trained personnel to identify structurally unsound locations. Furthermore, this spot sampling is not sufficient to capture the thickness values over the entire bridge, which are necessary to provide as-built documentation that can be used for long-term asset inventory and management.
Modern remote sensing techniques provide promising non-contact alternatives for during- and post-construction bridge evaluation and inspection. Prior research has established principles and procedures for using unmanned aerial vehicle (UAV) imagery for bridge visual inspection and 3D information reconstruction [5,6,7,8,9]. Moreover, with several protection mechanisms, UAVs can maneuver in close proximity to the bridge and even perform contact inspection tasks [10,11,12]. In contrast to imaging sensors, light detection and ranging (LiDAR) provides direct 3D measurements that can be used for quantitative evaluation. Terrestrial laser scanners (TLSs) can acquire high-resolution point clouds with millimeter-level precision and are deemed the gold standard in construction management and infrastructure monitoring. However, the time and labor required for data acquisition and post-processing increase significantly for a large site. The scanner might have to be set up within the driving lane to acquire sufficient point density along the structure, which in turn would affect traffic flow. Therefore, monitoring large-sized infrastructure using TLSs might not be practical and/or scalable. Mobile LiDAR mapping systems (MLMSs) facilitate efficient field surveys that can cover large areas with a minimal impact on traffic flow. For MLMSs, direct georeferencing, i.e., trajectory information provided by the onboard global navigation satellite system/inertial navigation system (GNSS/INS) unit, is typically adopted to reconstruct point clouds in a common reference/mapping frame. Point cloud quality therefore depends on the ranging accuracy of the LiDAR units, the grade of the onboard GNSS/INS unit, and the reliability of the system calibration procedure. Centimeter-level positional accuracy can be expected if the quality of the derived georeferencing data from the GNSS/INS unit and the system calibration parameters are guaranteed. However, intermittent access to a GNSS signal and issues pertaining to system calibration could result in discrepancies of more than several centimeters between point clouds from different drive runs/flight lines. In this case, registration is required to fine-tune the point cloud alignment.
This paper describes an assessment of alternative mobile LiDAR systems for bridge monitoring. More specifically, this study addresses the following research questions: (i) can MLMS data (with centimeter-level positional accuracy) be used to derive quantitative measures of a bridge with an accuracy similar to that from TLS data, and (ii) are surveying-grade MLMSs (with millimeter- to centimeter-level accuracy) better than mapping-grade MLMSs (with an accuracy of a few centimeters) in terms of providing quantitative assessments of bridges? The key contributions of this study can be summarized as follows:
  • A semi-automated feature-based fine registration is proposed to compensate for the impact of georeferencing and system calibration errors on mobile LiDAR data. The developed registration strategy can also be used to fine-tune the point cloud alignment among different TLS scans.
  • A bridge deck thickness evaluation strategy based on surface-to-surface distance is proposed to minimize the impact of inherent noise on the point clouds. The aim is to achieve an accuracy better than ±1 cm for the derived thickness measures.
  • The performance of different MLMS grades is assessed against TLS data in terms of the quality of derived thickness measures, scalability, and impact on traffic flow.
The remainder of this paper is structured as follows: Section 2 provides an overview of prior research; the mobile mapping systems, study site, and data collection procedure are introduced in Section 3; the feature-based registration and bridge deck thickness evaluation approaches are covered in Section 4; the results together with their analysis are discussed in Section 5, which is followed by a summary of the study conclusions and recommendations for future research in Section 6.

2. Related Work

2.1. LiDAR for Infrastructure Mapping

LiDAR, known for its ability to directly generate accurate 3D point clouds with high density, has recently been receiving an increasing amount of interest from the construction management and infrastructure monitoring research/professional communities. Point clouds acquired by TLSs have been used for generating precise 3D models to evaluate the progression of construction processes [13,14,15]. The high accuracy of TLS point clouds also allows for millimeter-level displacement and deformation evaluation post-construction [16,17,18]. However, the point density and accuracy of TLS point clouds drop quickly as the distance from the sensor increases. To ensure full coverage of the structure in question, multiple TLS stations are required. The station locations should be carefully chosen to minimize occlusions and provide sufficient overlap (i.e., common areas) for the registration of point clouds from different scans. For a large site, the time, labor, and location requirements for data acquisition increase significantly, making it unrealistic to apply such a technique in a scalable manner.
Mobile LiDAR has emerged as a promising alternative that can overcome the shortcomings of TLSs. One or more LiDAR units can be mounted on various platforms, e.g., UAVs, trucks, tractors, and robots. Field surveys with mobile LiDAR are efficient and can cover large areas that are impractical to survey with TLSs. These key benefits have stimulated the interest of the research/professional community in applying mobile LiDAR to the analysis of complex road environments, such as lane marking detection and road boundary extraction [19,20,21,22], as well as to mapping railroads and tunnels [23,24,25,26]. Several studies have validated and reported the accuracy of mobile LiDAR data for monitoring civil infrastructure. Puri and Turkan [27] used a wheel-based mobile LiDAR system equipped with a Velodyne HDL-64E LiDAR unit for tracking the construction progress of a bridge. They pointed out that a noise level in the range of ±3–4 cm was present in the as-built data from MLMSs, affecting the performance of progress tracking. Lin et al. [28] evaluated the performance of a wheel-based MLMS equipped with four Velodyne LiDAR units for mapping airfield pavement before and after a resurfacing process. In their study, the positional accuracy of the LiDAR point clouds was in the ±5 cm range, and a 9 cm increase in pavement elevation after resurfacing was detected. Another study verified the absolute accuracy of point clouds acquired by a commercially available system, the Lynx Mobile Mapper from Optech Inc. (Toronto, ON, Canada) [29]. The test site was a university campus comprising both infrastructure and vegetation. Although the range accuracy of the onboard LiDAR unit in that study was ±8 mm, the absolute accuracy of the point cloud was determined to be in the range of ±1 to ±5 cm, mainly attributed to the trajectory quality. Moreover, in areas with limited/intermittent access to GNSS signals, the positional accuracy might deteriorate to the ±0.3 m range. Inaccuracy of the system calibration parameters would cause additional deterioration in the derived point clouds. Overall, millimeter-level positional accuracy is difficult to achieve even with high-end LiDAR units due to GNSS reception issues and/or system calibration artifacts. Data processing and analysis strategies that reduce the impact of the above factors, as well as the inherent noise in the point cloud, are required to take full advantage of the potential benefits of mobile LiDAR.

2.2. Point Cloud Registration

Point cloud registration aims at aligning LiDAR data from different TLS scans and/or MLMS drive runs/flight lines to a common reference frame. Registration approaches can be categorized based on the primitives used, namely, direct cloud-to-cloud registration and feature-based registration. The well-known iterative closest point (ICP) algorithm [30,31] and its variants belong to the cloud-to-cloud category. Such algorithms assume that some initial transformation parameters exist, and their aim is to refine these parameters. Besl and McKay [30] assumed that the closest points from two point clouds after coarse registration constitute a conjugate point pair, and used such pairs for the estimation of the transformation parameters. Chen and Medioni [31] further considered surface normals during point matching; more specifically, point-to-point correspondence is established by projecting a point in one scan along the surface normal onto its adjacent surface in the other scan. Several variants were introduced to improve the ICP robustness using local neighborhood characteristics [32,33]. Approaches that share similar concepts, such as the iterative closest projected point [34], point-to-plane registration [35], and multiscale model-to-model cloud comparison [36], have also been introduced. In addition, some registration algorithms start by generating a 3D mesh or surface model (e.g., a triangulated irregular network), and then use cloud-to-surface or surface-to-surface pairings to estimate the transformation parameters [37,38]. However, creating a mesh is a complex task, especially for point clouds with vertical discontinuities and/or excessive occlusions. In general, cloud-to-cloud fine registration using the ICP or its variants has some disadvantages: (i) it requires large overlap areas between point clouds; (ii) it is sensitive to the point density distribution and noise level within the point clouds; and (iii) it requires solid surfaces with good variation in orientation/slope/aspect. For TLS and mobile LiDAR data, a large overlap is not always guaranteed due to the varying sensor-to-object distance, occlusions, and constraints imposed by the scanning environment (e.g., traffic along transportation corridors). In addition, the point density and precision drop quickly as one moves away from the sensor/trajectory, and the distribution of the surface orientation variation within the study site can be unbalanced. These factors could lead to overweighting when estimating the transformation parameters. In order to meet high accuracy requirements, user intervention is unavoidable.
Another group of registration algorithms utilizes common features that can be identified in point clouds captured at different locations. In general, such algorithms do not require coarse alignment of the point clouds. The major task in feature-based registration is the identification of common points/features, which can be performed quickly in a semi-automated manner. Special targets (e.g., highly reflective checkerboard and/or spherical targets), which can be identified in the point clouds, have commonly been used to increase the level of automation within the registration process [39,40,41,42]. Among such targets, spherical targets are used more frequently since they eliminate the need to re-orient the target to face the scanner during point cloud acquisition [40]. Target-free registration is nonetheless the ultimate goal and, therefore, many studies have focused on the automated identification and matching of features. In urban areas, geometric primitives such as point, linear, cylindrical, and planar features can be reliably extracted from point clouds. Such primitives have been employed to estimate the transformation parameters between two point clouds [41,43,44,45,46,47,48,49,50,51,52]. Some studies demonstrated the ability of feature-based registration to handle point clouds acquired by different platforms, including airborne LiDAR, mobile LiDAR, TLSs, and even imagery [53,54,55,56,57,58]. Habib et al. [59,60] registered LiDAR and photogrammetric data using linear features. Renaudin et al. [61] utilized photogrammetric data to aid feature extraction and the registration of TLS data with minimal overlap. In addition to geometric primitives, key points based on local feature descriptors that encompass local shape geometry have been used for registration [62,63,64]. Recently, learning-based local feature descriptors and point cloud registration frameworks targeting fully automated feature detection and matching have evolved rapidly [65,66,67,68,69,70,71]. While deep-learning-based methods have shown superior performance on several benchmark datasets, their generalization ability on unseen real-world datasets needs careful evaluation.
Currently, the vast majority of existing registration approaches are based on pair-wise registration (i.e., the registration is established sequentially, two scans at a time) and/or require initial segmentation to derive the feature parameters used in the registration process. Pair-wise registration has two disadvantages: (i) it makes the process time-consuming when dealing with multiple scans and/or drive runs; and (ii) the sequential registration leads to the propagation of errors, which increases as one moves away from the reference scan. Furthermore, existing algorithms commonly utilize feature parameters (e.g., line endpoints, the direction vector of linear features/axis of cylindrical features, and the normal vector of a plane) for the registration process [40,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60]. Direct use of these parameters would not allow for sufficient mitigation of the inherent noise and/or point density variations in the point cloud. In response to these challenges, this study provides a strategy for the simultaneous registration of several TLS scans and MLMS drive runs. Moreover, the individual points along the registration features, rather than their parameters, are used in the registration process, thus allowing for the mitigation of noise level/point density variation in the point cloud as well as the negative impact of the partial coverage of registration primitives in the different scans.

3. Data Acquisition Systems and Field Surveys

In this study, MLMS and TLS datasets were collected over and underneath a highway bridge that was suspected to have an inadequate deck thickness. This section introduces the data acquisition systems and calibration strategy. We also describe the study site and provide information regarding the field surveys.

3.1. System Description and Calibration

The MLMS data used in this research were captured by a mapping-grade system—Purdue Wheel-based Mobile Mapping System-High Accuracy (PWMMS-HA)—and a surveying-grade system—Purdue Wheel-based Mobile Mapping System-Ultra High Accuracy (PWMMS-UHA). Figure 1a shows the PWMMS-HA, which is equipped with four LiDAR units (three Velodyne HDL-32E and one Velodyne VLP-16 High Resolution), three cameras, and a GNSS/INS unit for direct georeferencing. The front-left, front-right, rear-left, and rear-right LiDAR units are hereafter denoted as HDL-FL, VLP-FR, HDL-RL, and HDL-RR, respectively. The range accuracy of the Velodyne HDL-32E and VLP-16 is ±2 cm and ±3 cm, respectively [72,73]. The post-processing positional accuracy of the GNSS/INS unit is ±2 cm, with an attitude accuracy of 0.020° and 0.025° for the roll/pitch and heading, respectively [74]. The PWMMS-UHA, shown in Figure 1b, is outfitted with two 2D profiler LiDAR units: a Riegl VUX-1HA (hereafter denoted as RI) and a Z+F Profiler 9012 (hereafter denoted as ZF). Two rear-looking cameras and a GNSS/INS georeferencing unit are also installed on this system. The range accuracy of the RI and ZF scanners is ±5 mm and ±3 mm, respectively [75,76]. The GNSS/INS post-processing positional accuracy is ±1–2 cm, with an attitude accuracy of ±0.003° for pitch/roll and ±0.004° for heading [77].
The rigorous system calibration introduced by Ravi et al. [78] was carried out for both MLMS vehicles to determine the relative position and orientation (denoted hereafter as mounting parameters) between the onboard sensors and IMU body frame, whose position and orientation are derived through the GNSS/INS integration process. The expected post-calibration positional accuracy of the derived point cloud was estimated using the individual sensors’ specifications and standard deviations of the estimated mounting parameters through the LiDAR error propagation calculator developed by Habib et al. [79]. The calculator suggests an expected accuracy of about ±4 cm and ±2 cm at a range of 30 m from the vehicle for the PWMMS-HA and PWMMS-UHA, respectively.
The TLS point clouds were captured by FARO Focus 3D X330 and Trimble TX8b laser scanners. The FARO Focus 3D X330 laser scanner is integrated with a high-dynamic-range imaging unit. It has a range systematic error of ±2 mm and a range noise better than ±0.5 mm for objects 10 m to 25 m away from the scanner (one sigma). The scanning speed is up to 976,000 points per second with a maximum range of 330 m [80]. The Trimble TX8b laser scanner has a range systematic error of ±2 mm and a range noise better than ±2 mm for objects 2 m to 120 m away from the scanner (in standard modes at one sigma). It can scan up to one million points per second with a maximum range of 120 m [81].

3.2. Study Site and Field Surveys

Field surveys were conducted over and underneath a westbound bridge along an interstate highway at the intersection of I-74 and US-231 near Crawfordsville, Indiana, USA (shown in Figure 2a). The bridge in question required post-construction grinding to meet smoothness and ride quality standards. Once grinding was completed, it was important to perform an as-built survey to document the bridge deck thickness. Figure 2b,c displays images acquired by one of the cameras onboard the PWMMS-HA, showing the concrete deck and a side view of the bridge, respectively.
Table 1 lists the specifications of the three datasets collected in this study. The MLMS drive run configuration and TLS scan locations are shown in Figure 3a. Both MLMS vehicles started by driving westbound on I-74 (magenta track in Figure 3a, T1), and then drove southbound and northbound below the bridge on US-231 (green—T2, T4, T6, and T8—and yellow—T3, T5, T7, and T9—tracks in Figure 3a). Just before reaching the bridge on I-74, a highway patrol cruiser slowed down the traffic while the PWMMS-UHA and PWMMS-HA were driven. Tracks T2–T9, which were designed to investigate the impact of partial GNSS signal outage, were conducted while stopping the southbound and northbound traffic on US-231 for less than three minutes. In total, the data acquisition on I-74 and US-231 took about five minutes. The average driving speed was 30 km/h for both MLMS vehicles. Figure 4 illustrates the position accuracy charts reported by the GNSS/INS integration software, providing a glimpse of the trajectory quality for the two MLMS vehicles. As can be observed in the figure, the position error of the PWMMS-UHA is smaller compared to that of the PWMMS-HA, owing to the higher-end IMU onboard. The eight highlighted peaks in Figure 4 correspond to the eight tracks below the bridge (T2–T9 in Figure 3a), indicating suboptimal accuracy for the below-bridge tracks due to intermittent access to a GNSS signal. The position error, however, does not increase over time since (i) the signal was restored once the vehicles cleared the bridge, (ii) the GNSS data were processed in both forward and backward directions, and (iii) the relatively short GNSS data outage can be handled by the onboard IMU.
To speed up the scanning process, TLS point clouds underneath and over the bridge were simultaneously captured by the FARO Focus 3D X330 and Trimble TX8b laser scanners, respectively. Figure 3a illustrates the scan locations, with the Trimble stations set up outside the barrier rail atop the I-74 bridge embankment to capture the top surface of the deck—Figure 3b. The FARO stations were located along US-231 to capture the bottom surface of the deck—Figure 3c. While the below-bridge FARO scan locations ensured sufficient coverage of the bottom surface of the deck, the setup of the Trimble scans was less than optimal, as the overlap region between the east (S5) and west (S4 and S6) scans, which are 55 m apart, falls at the middle of the bridge. Therefore, one might expect a less than optimal registration of the Trimble scans due to the large separation, lower point density, and shallow scan angles within the overlap region. In terms of data collection time, each TLS scan took about thirty-five minutes, for a total of more than three hours when considering the time needed to move and set up the scanners between the different locations.

4. Methodology for Point Cloud Registration and Bridge Deck Thickness Evaluation

The proposed workflow (shown in Figure 5) comprises registration, bridge deck thickness evaluation, and comparative quality assessment. For MLMS tracks, the point clouds are directly georeferenced in a global mapping frame, and coarse alignment is thus guaranteed. Feature-based fine registration is carried out to fine-tune the alignment between point clouds from different tracks. For TLSs, the point cloud acquired by each scan is available in a different local reference frame, which is defined by the scan location/setup. A coarse registration is carried out based on manually identified conjugate points in the respective point clouds, so that the feature extraction (covered later) for the individual scans can be conducted simultaneously. A subsequent feature-based fine registration is performed to align the point clouds from the different scans. One should note that whether a point cloud is in a local or global reference frame is not critical for the evaluation of bridge deck thickness. In this study, a coarse registration between the TLS and MLMS data is performed only to compare their thickness estimates. The following subsections describe the proposed feature-based fine registration, bridge deck thickness evaluation, and comparative quality assessment.

4.1. Feature-Based Fine Registration

The key advantage of the proposed registration strategy is the simultaneous alignment of point clouds from multiple TLS scans or MLMS drive-runs using planar, linear, and cylindrical features without the setup of specific targets. Besides point cloud alignment, the parametric model of registration primitives is derived and can be used for developing an as-built 3D model as well as the subsequent monitoring of bridge elements (i.e., settlement and deformation of bridge support elements).

4.1.1. Semi-Automated Feature Extraction

The proposed feature extraction strategy is adapted from the multi-class simultaneous segmentation proposed by Habib and Lin [82]. First, a seed point is manually identified in the point cloud from the individual scans/tracks. If the point clouds are roughly aligned, the same seed point can be used to extract the feature from all scans/tracks. Otherwise, the user needs to select the seed point for each scan/track. Next, a seed region is established by identifying the neighboring points of the seed point. Dimensionality analysis [83] is then performed to classify the seed region as a linear/cylindrical, planar, or rough feature. Only planar, linear, and cylindrical seed regions are considered for feature extraction. The parameters of the best-fitting plane/line/cylinder, which will be discussed later in this section, are estimated via iterative model fitting and outlier removal. More specifically, the model parameters are estimated in an iterative manner while assigning lower weights to points that are farther from the fitted surface in the previous iteration. Next, neighboring points that belong to the current feature are augmented through region growing. Concretely, neighboring points whose normal distances are smaller than a multiplication factor times the root-mean-square error (RMSE) of the fitted model are added. The output of the feature extraction includes the segmented points (an example is shown in Figure 6) and the parameters describing the respective feature model—both are required in the least-squares adjustment (LSA) for registration and in the model parameterization.
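To make the above procedure concrete, the following Python sketch illustrates iterative plane fitting with outlier down-weighting followed by region growing. It is a minimal sketch under stated assumptions, not the authors' implementation: the specific weight function, the neighborhood radius, and the multiplication factor are illustrative choices, and the dimensionality analysis used for seed-region classification is omitted.

```python
import numpy as np
from scipy.spatial import cKDTree

def fit_plane_irls(pts, n_iter=10):
    """Iteratively fit a plane, down-weighting points far from the current
    fit (an illustrative reweighting; the paper only states that farther
    points receive lower weights)."""
    w = np.ones(len(pts))
    for _ in range(n_iter):
        c = np.average(pts, axis=0, weights=w)                 # weighted centroid
        _, _, vt = np.linalg.svd((pts - c) * np.sqrt(w)[:, None])
        normal = vt[-1]                                        # direction of smallest variation
        dist = np.abs((pts - c) @ normal)                      # normal distances to the plane
        rmse = np.sqrt(np.average(dist ** 2, weights=w))
        w = 1.0 / (1.0 + (dist / max(rmse, 1e-9)) ** 2)        # down-weight likely outliers
    return normal, c, rmse

def grow_plane(cloud, seed_idx, radius=0.3, factor=3.0):
    """Region growing from a manually picked seed: fit the seed region,
    then add neighbors whose normal distance is below factor * RMSE."""
    tree = cKDTree(cloud)
    region = set(tree.query_ball_point(cloud[seed_idx], radius))   # seed region
    while True:
        normal, c, rmse = fit_plane_irls(cloud[list(region)])
        candidates = set()
        for i in region:
            candidates.update(tree.query_ball_point(cloud[i], radius))
        new = {i for i in candidates - region
               if abs((cloud[i] - c) @ normal) < factor * rmse}
        if not new:                                            # no more points to add
            return np.array(sorted(region)), normal, c, rmse
        region |= new
```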
In terms of the parametric model representation, a 3D plane is defined by its normal vector, $[w_x\ w_y\ w_z]^T$, and the signed normal distance from the origin to the plane, $d$, as shown in Equation (1). To establish an independent set of parameters, one of the normal vector components is fixed to 1. The fixed component is chosen according to the orientation of the plane, which is defined based on the eigenvector corresponding to the smallest eigenvalue, as illustrated in Figure 7. A cylindrical feature, on the other hand, is defined by a 3D line representing its axis and a radius, $r$. The cylinder axis is represented by a point, $[x_0\ y_0\ z_0]^T$, and a direction vector, $[u_x\ u_y\ u_z]^T$, as can be seen in Equation (2), where $q$ represents the point location along the cylinder axis. Since a line has only four degrees of freedom, one of the coordinate components is set to zero, with the respective direction vector component set to 1. The fixed parameters for the cylinder axis are chosen according to its orientation, which is defined based on the eigenvector corresponding to the largest eigenvalue, as depicted in Figure 8. Finally, a linear feature is represented the same way as the axis of a cylindrical feature (i.e., four parameters, with two representing a point along the line and two defining its direction).
$$\frac{w_x x + w_y y + w_z z}{\sqrt{w_x^2 + w_y^2 + w_z^2}} + d = 0 \tag{1}$$

$$x = q\,u_x + x_0, \qquad y = q\,u_y + y_0, \qquad z = q\,u_z + z_0 \tag{2}$$
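The reduced parameter sets can be illustrated with a short sketch. Here, the component to fix is chosen as the largest-magnitude component of the fitted normal/direction vector, an illustrative stand-in for the eigenvector-based orientation rules of Figures 7 and 8; the function names are hypothetical.

```python
import numpy as np

def plane_params(normal, point_on_plane):
    """Reduce a fitted plane to an independent parameter set by fixing the
    dominant normal component to 1, consistent with Equation (1)."""
    k = int(np.argmax(np.abs(normal)))              # component to fix to 1
    w = normal / normal[k]                          # w[k] == 1 exactly
    d = -(w @ point_on_plane) / np.linalg.norm(w)   # signed normal distance from origin
    return k, w, d

def line_params(direction, point_on_line):
    """Reduce a 3D line (or cylinder axis) to four parameters: fix one
    direction component to 1 and zero the corresponding coordinate of the
    point by sliding it along the line, consistent with Equation (2)."""
    k = int(np.argmax(np.abs(direction)))           # dominant direction component
    u = direction / direction[k]                    # u[k] == 1 exactly
    p0 = point_on_line - point_on_line[k] * u       # p0[k] == 0 after sliding
    return k, u, p0
```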

4.1.2. Least-Squares Adjustment for Feature-Based Fine Registration

The conceptual basis of the proposed LSA model is that conjugate features would fit a single parametric model after registration. The objective function of the LSA model estimates the transformation parameters as well as the feature parameters in the common reference frame. For the simultaneous registration of $m_s$ point clouds, one of them is selected to define a common reference frame ($p_c^c = [x_c\ y_c\ z_c]^T$), and the rest are considered as sources ($p_c^{s_i} = [x_{s_i}\ y_{s_i}\ z_{s_i}]^T$, $i = 1, 2, \ldots, m_s - 1$). The 3D similarity model in Equation (3) is used to represent the transformation from the $i$th source ($s_i$) to the common ($c$) reference frames, where $t_{s_i}^c$, $R_{s_i}^c$, and $s_{s_i}^c$ denote the translation vector, rotation matrix, and scale factor, respectively:
$$p_c^c = t_{s_i}^c + s_{s_i}^c\, R_{s_i}^c\, p_c^{s_i} \tag{3}$$
For planar features, the normal distance vector, $[nd_x\ nd_y\ nd_z]^T$, between a transformed point, $[x_c\ y_c\ z_c]^T$, and the post-registration parametric model, defined by $[w_x\ w_y\ w_z]^T$ and $d$, is presented in Equation (4)—refer to the illustration in Figure 9a. For a linear/cylindrical feature, the normal distance vector, $[nd_x\ nd_y\ nd_z]^T$, between a transformed point, $[x_c\ y_c\ z_c]^T$, and the post-registration parametric model representing the linear feature/axis of a cylindrical feature, defined by $[x_0\ y_0\ z_0]^T$ and $[u_x\ u_y\ u_z]^T$, is expressed in Equation (5). An illustration of the linear/cylindrical feature post-registration model fitting is shown in Figure 9b,c, respectively. For planar and linear features, the LSA aims at minimizing the squared sum of the normal distance vector components in Equations (4) and (5), respectively, for all the points in the different scans/tracks that encompass such features. In these equations, $\bullet$ denotes the dot product of two vectors. For cylindrical features, on the other hand, the LSA aims at minimizing the squared sum of normal distances between the points that belong to such features and their surface, as given by Equation (6):
$$\begin{bmatrix} nd_x \\ nd_y \\ nd_z \end{bmatrix} = -d\,\frac{[w_x\ w_y\ w_z]^T}{\sqrt{w_x^2 + w_y^2 + w_z^2}} - \frac{\left([x_c\ y_c\ z_c] \bullet [w_x\ w_y\ w_z]\right)[w_x\ w_y\ w_z]^T}{w_x^2 + w_y^2 + w_z^2} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \tag{4}$$

$$\begin{bmatrix} nd_x \\ nd_y \\ nd_z \end{bmatrix} = \begin{bmatrix} x_c - x_0 \\ y_c - y_0 \\ z_c - z_0 \end{bmatrix} - \frac{\left([x_c - x_0\ \ y_c - y_0\ \ z_c - z_0] \bullet [u_x\ u_y\ u_z]\right)[u_x\ u_y\ u_z]^T}{u_x^2 + u_y^2 + u_z^2} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \tag{5}$$

$$nd_x^2 + nd_y^2 + nd_z^2 - r^2 = 0 \tag{6}$$
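As a worked illustration, the misclosure quantities in Equations (4)–(6) translate directly into code. The sketch below evaluates them for a single transformed point; the function names are hypothetical, and the sign conventions follow the reconstructed equations above.

```python
import numpy as np

def plane_misclosure(p, w, d):
    """Normal-distance vector from point p to the plane defined by normal w
    and signed distance d (Equation (4)); zero when p lies on the plane."""
    w_hat = w / np.linalg.norm(w)
    return -d * w_hat - (p @ w_hat) * w_hat

def line_misclosure(p, p0, u):
    """Normal-distance vector from p to the line through p0 with direction u
    (Equation (5)): the rejection of (p - p0) from u."""
    v = p - p0
    return v - ((v @ u) / (u @ u)) * u

def cylinder_misclosure(p, p0, u, r):
    """Scalar misclosure for a cylindrical feature (Equation (6)): squared
    normal distance to the axis minus the squared radius."""
    nd = line_misclosure(p, p0, u)
    return nd @ nd - r ** 2
```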
The linearized mathematical model corresponding to Equations (4)–(6) can be written as in Equation (7), which is commonly known as the Gauss–Helmert model [84]. Here, $y$ is the discrepancy vector arising from the linearization process; $e$ is the vector of random noise contaminating the point cloud coordinates in the different scans/tracks, which follows a stochastic distribution with a zero mean and a variance–covariance matrix of $\sigma_0^2 P^{-1}$, with $\sigma_0^2$ representing the a priori variance factor and the weight matrix, $P$, depending on the specifications of the data acquisition system; and $x$ is the vector of unknown parameters (including transformation and feature parameters). The matrices $A$ and $B$ are composed of the partial derivatives of the models in Equations (4)–(6) with respect to the unknown parameters and point cloud coordinates, respectively. For a planar or linear feature, the respective $B$ matrix is rank-deficient, which can be deduced by analyzing the effective contribution of Equations (4) and (5) towards the overall redundancy. Although a point on a planar feature provides three equations, it only has a net contribution of one towards the redundancy—corresponding to the minimization of the normal distance to the plane. For linear features, the three equations have a net contribution of two towards the redundancy—corresponding to the minimization of the normal distance to the linear feature. Lastly, a point on a cylindrical feature provides a net contribution of one towards the redundancy. In order to ensure that the resulting normal matrix is full-rank and that the transformation and feature parameters can be reliably estimated, it is critical to have features with different orientations over the area of interest. The solution vector of the LSA model can be written as per Equation (8), where $(B P^{-1} B^T)^{+}$ denotes the Moore–Penrose pseudoinverse that has to be used to compensate for the rank deficiency of the $B$ matrix. The residuals, $\tilde{e}$, are given in Equation (9). The a posteriori variance factor, $\hat{\sigma}_0^2$, can be computed using Equation (10), where $m$ is the number of unknown parameters. The variance–covariance matrix of the estimated parameters is given in Equation (11). Interested readers can refer to Ravi and Habib [85] for more details regarding LSA with a rank-deficient weight matrix. Figure 10 presents a sample registration result using the ICP and the proposed feature-based registration approach. The point clouds were acquired from two tracks in opposite driving directions, and thus capture different sides of the bridge piers. While the ICP incorrectly aligned the two sides of the piers, the proposed approach produced superior results by fitting conjugate features to a single cylinder model:
$$y = A x + B e, \qquad e \sim \left(0,\ \sigma_0^2 P^{-1}\right) \tag{7}$$

$$\hat{x} = \left(A^T (B P^{-1} B^T)^{+} A\right)^{-1} \left(A^T (B P^{-1} B^T)^{+} y\right) \tag{8}$$

$$\tilde{e} = y - A\hat{x} \tag{9}$$

$$\hat{\sigma}_0^2 = \frac{\tilde{e}^T (B P^{-1} B^T)^{+}\, \tilde{e}}{\mathrm{rank}(B P^{-1} B^T) - m} \tag{10}$$

$$\Sigma = \hat{\sigma}_0^2 \left(A^T (B P^{-1} B^T)^{+} A\right)^{-1} \tag{11}$$
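For readers who prefer code, the following sketch solves Equations (8)–(11) numerically, assuming the design matrices $A$ and $B$, the inverse weight matrix, and the discrepancy vector $y$ have already been built from the linearized feature equations. It is a single-iteration sketch; a practical implementation would re-linearize and iterate until convergence.

```python
import numpy as np

def gauss_helmert_solve(A, B, P_inv, y):
    """One iteration of the rank-deficient Gauss-Helmert solution of
    Equations (7)-(11): y = A x + B e, e ~ (0, sigma0^2 * P^{-1})."""
    Q = B @ P_inv @ B.T
    Q_plus = np.linalg.pinv(Q)                      # Moore-Penrose pseudoinverse
    N = A.T @ Q_plus @ A                            # normal matrix
    x_hat = np.linalg.solve(N, A.T @ Q_plus @ y)    # estimated parameters (Eq. (8))
    e_tilde = y - A @ x_hat                         # residuals (Eq. (9))
    dof = np.linalg.matrix_rank(Q) - A.shape[1]     # redundancy (Eq. (10))
    sigma0_sq = (e_tilde @ Q_plus @ e_tilde) / dof  # a posteriori variance factor
    Sigma = sigma0_sq * np.linalg.inv(N)            # parameter covariance (Eq. (11))
    return x_hat, sigma0_sq, Sigma
```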

4.2. Bridge Deck Thickness Evaluation

The proposed bridge deck thickness evaluation utilizes the surface-to-surface distance in order to mitigate the impact of inherent noise on the post-registration point cloud. First, point clouds capturing the top and bottom surfaces of the deck are extracted—one is selected as the target and the other as the source. The target and source point clouds are then partitioned into segments of a pre-defined size (30 cm × 30 cm in this study). For a given segment, a dimensionality analysis is carried out to test its planarity, and only planar segments are included in the subsequent evaluation. An iterative plane fitting is then performed to remove potential outlier points and estimate the parameters of the plane (refer to Section 4.1.1 for detailed information). A segment is rejected from the thickness evaluation if (i) the plane-fitting RMSE exceeds a user-defined threshold ($thres_{RMSE}$) or (ii) the number of remaining inlier points after iterative plane fitting fails to reach another user-defined threshold ($thres_{npt}$). The first threshold ($thres_{RMSE}$) can be selected according to the expected noise level in the post-registration point cloud. The second threshold ($thres_{npt}$) can be defined based on the percentage of remaining points after iterative plane fitting—e.g., if more than 50% of the original points are removed by the outlier removal procedure, the segment is rejected from the thickness evaluation. Next, the center of a target plane and its projection onto the source plane are determined. The thickness of the deck at that location is evaluated as the normal distance between the center of the target plane and the source plane. Finally, the deck thickness estimates from all the surface segments are visualized as a heat map. Figure 11 illustrates an example, showing the point clouds from the above- and below-bridge tracks (Figure 11a) as well as the corresponding thickness map (Figure 11b). As evident in Figure 11a, the point cloud from the below-bridge tracks shows some gaps because of the occlusion caused by the bridge pier caps.
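The segment-based evaluation can be sketched as follows, under simplifying assumptions: a single outlier-rejection pass stands in for the full iterative fitting, the planarity test is folded into the RMSE check, and the two point clouds are assumed to be in a common frame so that grid cells correspond. Threshold values and function names are illustrative.

```python
import numpy as np

def fit_segment_plane(seg, thres_rmse, thres_npt):
    """Plane fit with one outlier-rejection pass; returns (normal, centroid)
    or None if the segment fails the RMSE or inlier-count checks."""
    c = seg.mean(axis=0)
    n = np.linalg.svd(seg - c)[2][-1]
    dist = np.abs((seg - c) @ n)
    inliers = seg[dist < 3 * max(dist.std(), 1e-6)]
    if len(inliers) < thres_npt * len(seg):                 # thres_npt check
        return None
    c = inliers.mean(axis=0)
    n = np.linalg.svd(inliers - c)[2][-1]
    rmse = np.sqrt((((inliers - c) @ n) ** 2).mean())
    return (n, c) if rmse < thres_rmse else None            # thres_RMSE check

def deck_thickness(top_pts, bottom_pts, cell=0.30, thres_rmse=0.02, thres_npt=0.5):
    """Partition the top (target) and bottom (source) deck surfaces into
    cell x cell segments and report, per segment, the normal distance from
    the target-segment center to the corresponding source plane."""
    def grid(pts):
        cells = {}
        for key, p in zip(map(tuple, np.floor(pts[:, :2] / cell).astype(int)), pts):
            cells.setdefault(key, []).append(p)
        return {k: np.asarray(v) for k, v in cells.items() if len(v) >= 10}

    source = {k: fit_segment_plane(s, thres_rmse, thres_npt)
              for k, s in grid(bottom_pts).items()}
    thickness = {}
    for key, seg in grid(top_pts).items():
        plane = source.get(key)
        if plane is not None and fit_segment_plane(seg, thres_rmse, thres_npt):
            n, c = plane
            thickness[key] = abs((seg.mean(axis=0) - c) @ n)  # center-to-plane distance
    return thickness
```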

4.3. Comparative Quality Assessment

In this study, the bridge deck thickness derived using MLMS data is compared to that estimated from the TLS point clouds. The comparison is carried out by evaluating the difference between the thickness estimates for corresponding surface segments in the MLMS and TLS data. Therefore, ensuring the alignment between the MLMS and TLS data is a prerequisite. A coarse registration is conducted to transform the TLS point cloud into the MLMS mapping frame. Since the proposed segment-based thickness evaluation is conducted at 30 cm intervals and the coarse registration accuracy is better than ±10 cm, a fine registration procedure is not necessary. Once the point clouds are aligned, the correspondence is established by searching for the closest surface segment in the MLMS point cloud for each segment in the TLS point cloud. The difference between the bridge deck thickness estimates for corresponding surface segments is evaluated and visualized as a heat map. Moreover, the mean, standard deviation, and RMSE of the differences are reported as quantitative measures of the comparative performance of the different sensing modalities (i.e., TLSs and MLMSs).
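A compact sketch of this matching and reporting step is given below, assuming the per-segment centers and thickness estimates from each modality are already available as arrays; the distance cutoff for accepting a correspondence is an illustrative assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

def compare_thickness(mlms_centers, mlms_thick, tls_centers, tls_thick,
                      max_dist=0.15):
    """Match each TLS segment to the closest MLMS segment center, difference
    the thickness estimates, and report mean, std, and RMSE."""
    tree = cKDTree(mlms_centers)
    dist, idx = tree.query(tls_centers)          # closest-segment correspondence
    diff = tls_thick - mlms_thick[idx]
    diff = diff[dist < max_dist]                 # drop unmatched segments
    return diff.mean(), diff.std(), np.sqrt((diff ** 2).mean())
```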

5. Experimental Results and Discussion

This section starts with reporting the registration results for the MLMS and TLS datasets. For the MLMS datasets, the alignment between point clouds from different tracks before and after registration is examined to illustrate the quality of the system calibration and the impact of GNSS outage during data acquisition. Next, the bridge deck thickness is evaluated, and a comparative analysis of the derived metrics is reported.

5.1. Point Cloud Registration and Alignment

As described in Section 3.2, one TLS and two MLMS datasets were collected in this study. The TLS dataset comprised six scans, each with its own local coordinate system. First, a coarse registration using manually identified point pairs in the TLS point clouds was performed. For this registration, one of the FARO scans, Scan S1 (see Figure 3), was selected to define the common reference frame. In contrast to the TLS data, the point clouds acquired by the MLMSs were directly georeferenced to a global mapping frame through the onboard GNSS/INS unit—the UTM coordinate system with WGS84 as the datum was chosen as the reference frame in this study. The point density over the bridge deck from one track is depicted in Figure 12, where the median is 7700 and 6800 points per square meter for the PWMMS-HA and PWMMS-UHA, respectively. The PWMMS-HA had a higher and more uniform point density across the bridge deck compared to the PWMMS-UHA.
Prior to fine registration, the alignment of the MLMS point clouds from different tracks was inspected. Figure 13 depicts a cross-section of the bridge deck before registration. The top surface of the bridge deck and the inside surface of the barrier rails were captured by the above-bridge track (T1), while the bottom surface of the bridge deck and the outside surface of the barrier rails were scanned by the below-bridge tracks (T2–T9). To assess the agreement between the point clouds acquired by the four LiDAR units onboard the PWMMS-HA, we performed plane fitting over a 1 m × 1 m segment on the top surface of the bridge deck (see Figure 13a). The plane-fitting RMSE was 1.3 cm, indicating that the point clouds from the different LiDAR units in Track T1 were in good agreement, thus verifying the quality of the system calibration parameters (i.e., inaccurate system calibration would result in discrepancies among the point clouds captured by different sensors in the same track). A segment on the bottom surface of the bridge deck, in contrast, yielded a plane-fitting RMSE of 3.0 cm. Since this area was captured by eight tracks (T2–T9), one can deduce that the alignment among the point clouds from those tracks was not as good as the alignment between the point clouds captured by the PWMMS-HA LiDAR sensors within a given track. The lower trajectory quality due to intermittent access to a GNSS signal below the bridge was the cause of this systematic discrepancy among the tracks, as can be seen in Figure 13a. A similar analysis was conducted for the PWMMS-UHA point clouds, as can be seen in Figure 13b. The zoomed-in area on the top surface of the bridge deck reveals that, for a given track, the precision of the point clouds from the two LiDAR units onboard the PWMMS-UHA was in the range of a few millimeters. However, due to the intermittent access to a GNSS signal during below-bridge data acquisition, a degradation in the RMSE to the 2.0 cm range can be seen. The barrier rails along the two sides of the bridge were used to evaluate the alignment between the above-bridge and below-bridge tracks. The misalignment was interactively quantified by evaluating the distance between two manually selected points on the top of the barrier rail from the above-bridge and below-bridge point clouds. As shown in Figure 13a,b, a misalignment of about 7 cm and 2 cm along the vertical direction was present between the above-bridge and below-bridge point clouds for the PWMMS-HA and PWMMS-UHA, respectively. Such misalignment would undoubtedly lead to an unreliable estimation of the bridge deck thickness.
The proposed feature-based fine registration was then performed on the TLS and MLMS datasets to refine the alignment between scans/tracks. First, planar/linear/cylindrical features were semi-automatically extracted from the point clouds, and the results are shown in Figure 14. In order to reliably solve for the transformation parameters, features with different orientations were extracted from the point clouds. These features should be well-distributed over the area of interest, and the number of points contributing to the estimation of the different transformation parameters should be of similar magnitude to prevent over-weighting in the LSA model. A total of 46, 30, and 29 features were extracted from the TLS, PWMMS-HA, and PWMMS-UHA point clouds, respectively. For the TLS dataset, the overlap between the above-bridge and below-bridge scans was adequate, as the Trimble scanner was placed on the I-74 embankments next to the barrier rails and was thus able to capture objects on US-231 without much occlusion. For the MLMS datasets, on the other hand, the overlap between the above- and below-bridge tracks was limited due to occlusions caused by the barrier rails. Point clouds acquired by the above-bridge track barely captured the road surface, signboards, and other objects on US-231. Common features among the above-bridge and below-bridge tracks include signboards, light poles, and the north side of the barrier rail on the westbound I-74, as can be seen in Figure 14b,c.
Once the features were extracted, the proposed LSA strategy was carried out to estimate the transformation and post-registration feature parameters. For the MLMS datasets, the LSA estimated the transformation parameters among the point clouds acquired by each sensor from the individual tracks (i.e., the registration was simultaneously conducted for a total of thirty-six and eighteen point clouds for the PWMMS-HA and PWMMS-UHA, respectively). Track T1 from the HDL-RR and Track T1 from the RI were selected as the target tracks for the PWMMS-HA and PWMMS-UHA, respectively. For the TLSs, Scan S1 was used as the target scan when registering the six point clouds captured by the FARO and Trimble scanners. Figure 15 shows box and whisker plots of the transformation parameters for the TLS and MLMS datasets, where one can observe that the magnitude and variance of the parameters for each system were at the same level. The square root of the a posteriori variance factor after registration (which represents the noise level in the data as well as the quality of the post-registration alignment) is 0.60 cm, 1.47 cm, and 0.74 cm for the TLSs, PWMMS-HA, and PWMMS-UHA, respectively. The weighted average of the RMSE of the normal distances between the LiDAR points and the best-fitting plane/line/cylinder before and after registration for each dataset is listed in Table 2. The reduction in the post-registration RMSE reflects an improvement in the alignment of the features after registration. As expected, the post-registration RMSE values for the different features are in agreement with the square root of the a posteriori variance factor.
One of the advantages of the proposed alignment strategy is the production of a parametric model representation of the registration primitives, which could be used for bridge monitoring over time (e.g., the cylindrical columns supporting the bridge). To show the comparative performance of TLS and MLMSs in terms of the similarity of derived post-registration parametric models for cylindrical features, Figure 16 depicts a top view of the twelve columns supporting the I-74 bridge together with their axes. As evident from the figure, the cylindrical columns from different sensing modalities are well-aligned. Table 3 reports the estimated radii of the twelve columns where the estimates from different systems are in agreement within the 1 cm range. Moreover, the relative planimetric discrepancies between the TLS and MLMS datasets were evaluated using the derived horizontal locations of the cylindrical columns. The results show that the relative horizontal locations are compatible within a 1 cm range. Once again, these values are in agreement with the post-registration estimates of the square root of a posteriori variance factors reported earlier.
As another verification of the impact of the fine registration of the MLMS point clouds, a cross-section of the bridge deck (at the same location shown in Figure 13) extracted from the post-registration point clouds is depicted in Figure 17, where a significant improvement can be observed. Zoomed-in areas at the barrier rails show that the above-bridge and below-bridge tracks/scans are in agreement within a 1 cm range for all systems. Moreover, zoomed-in areas at the top and bottom surfaces of the bridge deck verify that the point clouds from different tracks/scans are in good agreement along the vertical direction. According to the plane-fitting RMSE shown in Figure 17, the precision of the point cloud from the PWMMS-HA is in the ±1.5 cm range, which is better than the expected value of ±4 cm. Moreover, both the PWMMS-UHA and TLS point clouds achieve millimeter-level precision (i.e., in the ±0.3 cm and ±0.4 cm range, respectively). In summary, the results show that the proposed feature-based fine registration can effectively minimize the impact of trajectory and system calibration errors, and thus improve the point cloud quality.

5.2. Bridge Deck Thickness Estimation and Comparative Analysis

Having examined the point cloud alignment, the bridge deck thickness was then evaluated using the proposed surface segment-based approach. The segment size was set to 30 cm × 30 cm. The thresholds $thres_{RMSE}$ and $thres_{npt}$ were set to three times the square root of the a posteriori variance factor after registration and 50% of the original points in a segment, respectively. The thickness estimates are visualized as heat maps in Figure 18. The spatial patterns from the three datasets are similar, as they all indicate a smaller thickness value over the right two lanes towards the west side of the bridge.
To quantify the similarity of the thickness estimates from the different modalities, a coarse registration between the TLS and MLMS datasets was performed to align the former to the reference frame of the latter. The registration accuracy was better than ±10 cm, which is sufficient for identifying corresponding surface segments. The difference between the thickness estimates at each segment was evaluated and visualized as a heat map, as can be seen in Figure 19. The mean, standard deviation, and RMSE of the thickness differences are reported in Table 4. One should note that although the PWMMS-HA point cloud is less accurate, it has a higher point density, which in turn provides larger redundancy for the plane fitting in the bridge deck thickness evaluation. Therefore, the thickness evaluation accuracy for the PWMMS-HA is similar to that for the PWMMS-UHA. The results show that the bridge deck thickness estimates from the different systems are in agreement within the 1–3 cm range. A closer investigation of the results in Figure 19 and Table 4 reveals a higher level of compatibility between the thickness estimates from the PWMMS-HA and PWMMS-UHA systems. When compared to the TLS-based thickness estimates, one can observe a trend in the difference (i.e., underestimation/overestimation of the thickness at the west side/east side of the bridge, respectively). This trend is the result of the Trimble scan locations leading to less-than-optimal overlap between the east and west above-bridge scans. Accounting for this trend, the variation in the thickness estimates is believed to be in the 1 cm range. In summary, the PWMMS-HA, although equipped with centimeter-level accuracy LiDAR units, performs similarly to the PWMMS-UHA. The thickness derived from the MLMS units is comparable to that derived using the TLSs, with the latter being more sensitive to the scan locations. The results also reveal that the proposed segment-based thickness evaluation can handle the inherent noise in the point cloud and provide reliable thickness estimates.

6. Conclusions and Recommendations for Future Work

This paper presented an evaluation of the performance of mapping-grade and surveying-grade mobile LiDAR systems for bridge monitoring, assessed against static laser scanners. To take full advantage of MLMS-based point clouds, a semi-automated feature-based fine registration was proposed to mitigate the negative impact of georeferencing and system calibration errors. The proposed procedure can simultaneously estimate the transformation parameters needed to align all the point clouds derived by the various sensors onboard the MLMSs from different tracks. In addition, the post-alignment parametric models of the registration primitives (planar, linear, and cylindrical features) are also estimated. Bridge deck thickness was evaluated using surface segments while minimizing the impact of inherent noise in the point clouds. Field surveys were carried out over a representative bridge that underwent grinding to achieve the desired pavement smoothness and ride quality. The results show that the proposed feature-based fine registration effectively mitigated the impact of intermittent access to a GNSS signal below the bridge. The post-registration alignment quality for the point clouds captured by the mapping-grade MLMS, surveying-grade MLMS, and TLS units is ±1.5 cm, ±0.7 cm, and ±0.6 cm, respectively. Although the point clouds from the mapping-grade system had a higher noise level, the evaluated bridge deck thickness was compatible with that derived from the surveying-grade system in the 1 cm range. The thickness estimates from both MLMS units were also compatible with those derived from the TLSs. The MLMS data acquisition was conducted in five minutes, while the TLSs took more than three hours. The proposed fine-registration strategy also delivered parametric models of bridge elements, which can be used for monitoring these elements over time. The MLMS-based parametric models are in agreement with those from the TLSs in the 1 cm range.
Future research will focus on improving the automation level of feature extraction, as well as deriving other quantitative measures for the identification of structural issues. Moreover, larger infrastructure will be inspected to evaluate the impact of extended outage in GNSS signal reception on derived MLMS point clouds. Finally, we will be focusing on automated segmentation and parametric model representation of different structural elements of the infrastructure to aid the periodic monitoring process.

Author Contributions

Conceptualization, T.W., D.B. and A.H.; methodology, Y.-C.L., J.L. and A.H.; software, Y.-C.L. and J.L.; investigation, Y.-C.L., J.L., Y.-T.C., S.M.H., T.W., D.B. and A.H.; data curation, Y.-C.L., J.L. and Y.-T.C.; writing—original draft preparation, Y.-C.L. and J.L.; writing—review and editing, Y.-T.C., S.M.H., D.B. and A.H.; supervision, T.W., D.B. and A.H.; project administration, T.W., D.B. and A.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Joint Transportation Research Program, administered by the Indiana Department of Transportation and Purdue University (grant No. SPR-4407 and SPR-4438). The contents of this paper reflect the views of the authors, who are responsible for the facts and the accuracy of the data presented herein, and do not necessarily reflect the official views or policies of the sponsoring organizations. These contents do not constitute a standard, specification, or regulation.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable to this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Grubb, M.A.; Wilson, K.E.; White, C.D.; Nickas, W.N. Load and Resistance Factor Design (LRFD) for Highway Bridge Superstructures Reference Manual; U.S. Department of Transportation, Federal Highway Administration, National Highway Institute: Washington, DC, USA, 2015.
2. AASHTO. The Manual for Bridge Evaluation; American Association of State Highway and Transportation Officials: Washington, DC, USA, 2011.
3. AASHTO. AASHTO LRFD Bridge Design Specifications, 9th ed.; American Association of State Highway and Transportation Officials: Washington, DC, USA, 2020; ISBN 1560517387.
4. Bridge Deck Construction Manual; The State of California, Department of Transportation, Division of Engineering Services: Sacramento, CA, USA, 2015.
5. Morgenthal, G.; Hallermann, N.; Kersten, J.; Taraben, J.; Debus, P.; Helmrich, M.; Rodehorst, V. Framework for automated UAS-based structural condition assessment of bridges. Autom. Constr. 2019, 97, 77–95.
6. Spencer, B.F.; Hoskere, V.; Narazaki, Y. Advances in Computer Vision-Based Civil Infrastructure Inspection and Monitoring. Engineering 2019, 5, 199–222.
7. Seo, J.; Duque, L.; Wacker, J. Drone-enabled bridge inspection methodology and application. Autom. Constr. 2018, 94, 112–126.
8. Chen, S.; Laefer, D.F.; Mangina, E.; Zolanvari, S.M.I.; Byrne, J. UAV Bridge Inspection through Evaluated 3D Reconstructions. J. Bridg. Eng. 2019, 24, 05019001.
9. Khaloo, A.; Lattanzi, D.; Cunningham, K.; Dell’Andrea, R.; Riley, M. Unmanned aerial vehicle inspection of the Placer River Trail Bridge through image-based 3D modelling. Struct. Infrastruct. Eng. 2018, 14, 124–136.
10. Salaan, C.J.O.; Okada, Y.; Mizutani, S.; Ishii, T.; Koura, K.; Ohno, K.; Tadokoro, S. Close visual bridge inspection using a UAV with a passive rotating spherical shell. J. Field Robot. 2018, 35, 850–867.
11. Sanchez-Cuevas, P.J.; Heredia, G.; Ollero, A. Multirotor UAS for Bridge Inspection by Contact Using the Ceiling Effect. In Proceedings of the 2017 International Conference on Unmanned Aircraft Systems (ICUAS), Miami, FL, USA, 13–16 June 2017; pp. 767–774.
12. Sanchez-Cuevas, P.J.; Ramon-Soria, P.; Arrue, B.; Ollero, A.; Heredia, G. Robotic system for inspection by contact of bridge beams using UAVs. Sensors 2019, 19, 305.
13. Zhang, C.; Arditi, D. Advanced progress control of infrastructure construction projects using terrestrial laser scanning technology. Infrastructures 2020, 5, 83.
14. Son, H.; Kim, C.; Kwon Cho, Y. Automated Schedule Updates Using As-Built Data and a 4D Building Information Model. J. Manag. Eng. 2017, 33, 04017012.
15. Pučko, Z.; Šuman, N.; Rebolj, D. Automated continuous construction progress monitoring using multiple workplace real time 3D scans. Adv. Eng. Inform. 2018, 38, 27–40.
16. Ham, N.; Lee, S.H. Empirical study on structural safety diagnosis of large-scale civil infrastructure using laser scanning and BIM. Sustainability 2018, 10, 4024.
17. Lee, J.; Lee, K.C.; Lee, S.; Lee, Y.J.; Sim, S.H. Long-term displacement measurement of bridges using a LiDAR system. Struct. Control Health Monit. 2019, 26, e2428.
18. Cha, G.; Park, S.; Oh, T. A Terrestrial LiDAR-Based Detection of Shape Deformation for Maintenance of Bridge Structures. J. Constr. Eng. Manag. 2019, 145, 04019075.
19. Cheng, Y.T.; Patel, A.; Wen, C.; Bullock, D.; Habib, A. Intensity thresholding and deep learning based lane marking extraction and lane width estimation from mobile light detection and ranging (LiDAR) point clouds. Remote Sens. 2020, 12, 1379.
20. Wen, C.; Sun, X.; Li, J.; Wang, C.; Guo, Y.; Habib, A. A deep learning framework for road marking extraction, classification and completion from mobile laser scanning point clouds. ISPRS J. Photogramm. Remote Sens. 2019, 147, 178–192.
21. Lee, S.; Kim, J.; Yoon, J.S.; Shin, S.; Bailo, O.; Kim, N.; Lee, T.H.; Hong, H.S.; Han, S.H.; Kweon, I.S. VPGNet: Vanishing Point Guided Network for Lane and Road Marking Detection and Recognition. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 1965–1973.
22. Luo, Z.; Li, J.; Xiao, Z.; Mou, Z.G.; Cai, X.; Wang, C. Learning high-level features by fusing multi-view representation of MLS point clouds for 3D object recognition in road environments. ISPRS J. Photogramm. Remote Sens. 2019, 150, 44–58.
23. Arastounia, M. Automated recognition of railroad infrastructure in rural areas from LIDAR data. Remote Sens. 2015, 7, 14916–14938.
24. Puente, I.; Akinci, B.; González-Jorge, H.; Díaz-Vilariño, L.; Arias, P. A semi-automated method for extracting vertical clearance and cross sections in tunnels using mobile LiDAR data. Tunn. Undergr. Space Technol. 2016, 59, 48–54.
25. Sánchez-Rodríguez, A.; Riveiro, B.; Soilán, M.; González-deSantos, L.M. Automated detection and decomposition of railway tunnels from Mobile Laser Scanning Datasets. Autom. Constr. 2018, 96, 171–179.
26. Karunathilake, A.; Honma, R.; Niina, Y. Self-organized model fitting method for railway structures monitoring using LiDAR point cloud. Remote Sens. 2020, 12, 3702.
27. Puri, N.; Turkan, Y. Bridge construction progress monitoring using LiDAR and 4D design models. Autom. Constr. 2020, 109, 102961.
28. Lin, Y.C.; Cheng, Y.T.; Lin, Y.J.; Flatt, J.E.; Habib, A.; Bullock, D. Evaluating the Accuracy of Mobile LiDAR for Mapping Airfield Infrastructure. Transp. Res. Rec. 2019, 2673, 117–124.
29. Puente, I.; González-Jorge, H.; Riveiro, B.; Arias, P. Accuracy verification of the Lynx Mobile Mapper system. Opt. Laser Technol. 2013, 45, 578–586.
30. Besl, P.J.; McKay, N.D. Method for registration of 3-D shapes. In Sensor Fusion IV: Control Paradigms and Data Structures, Proceedings of the Robotics ’91, Boston, MA, USA, 14–15 November 1991; International Society for Optics and Photonics: Bellingham, WA, USA, 1992; Volume 1611, pp. 586–606.
31. Chen, Y.; Medioni, G. Object modelling by registration of multiple range images. Image Vis. Comput. 1992, 10, 145–155.
32. Sharp, G.C.; Lee, S.W.; Wehe, D.K. ICP registration using invariant features. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 90–102.
33. Gressin, A.; Mallet, C.; Demantké, J.Ô.; David, N. Towards 3D lidar point cloud registration improvement using optimal neighborhood knowledge. ISPRS J. Photogramm. Remote Sens. 2013, 79, 240–251.
34. Al-Durgham, M.; Detchev, I.; Habib, A. Analysis of Two Triangle-Based Multi-Surface Registration Algorithms of Irregular Point Clouds. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, XXXVIII-5/W12, 61–66.
35. Grant, D.; Bethel, J.; Crawford, M. Point-to-plane registration of terrestrial laser scans. ISPRS J. Photogramm. Remote Sens. 2012, 72, 16–26.
36. Lague, D.; Brodu, N.; Leroux, J. Accurate 3D comparison of complex topography with terrestrial laser scanner: Application to the Rangitikei canyon (N-Z). ISPRS J. Photogramm. Remote Sens. 2013, 82, 10–26.
37. Habib, A.; Detchev, I.; Bang, K. A comparative analysis of two approaches for multiple-surface registration of irregular point clouds. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. ISPRS Arch. 2010, 38, 1–6.
38. Gruen, A.; Akca, D. Least squares 3D surface and curve matching. ISPRS J. Photogramm. Remote Sens. 2005, 59, 151–174.
39. Akca, D. Full automatic registration of laser scanner point clouds. In Proceedings of the Optical 3-D Measurement Techniques VI, Zurich, Switzerland, 22–25 September 2003; Volume I, pp. 330–337.
40. Franaszek, M.; Cheok, G.S.; Witzgall, C. Fast automatic registration of range images from 3D imaging systems using sphere targets. Autom. Constr. 2009, 18, 265–274.
41. Bosché, F. Plane-based registration of construction laser scans with 3D/4D building models. Adv. Eng. Inform. 2012, 26, 90–102.
42. Liu, W.I. Novel method for sphere target detection and center estimation from mobile terrestrial laser scanner data. Meas. J. Int. Meas. Confed. 2019, 137, 617–623.
43. Han, J.Y. A noniterative approach for the quick alignment of multistation unregistered LiDAR point clouds. IEEE Geosci. Remote Sens. Lett. 2010, 7, 727–730.
44. Kim, P.; Chen, J.; Cho, Y.K. Automated Point Cloud Registration Using Visual and Planar Features for Construction Environments. J. Comput. Civ. Eng. 2018, 32, 04017076.
45. Han, J.Y.; Jaw, J.J. Solving a similarity transformation between two reference frames using hybrid geometric control features. J. Chin. Inst. Eng. Trans. Chin. Inst. Eng. A 2013, 36, 304–313.
46. Huang, T. Registration method for terrestrial LiDAR point clouds using geometric features. Opt. Eng. 2012, 51, 021114.
47. Stamos, I.; Leordeanu, M. Automated Feature-Based Range Registration of Urban Scenes of Large Scale. In Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Madison, WI, USA, 18–20 June 2003; pp. 0–6.
48. Dold, C.; Brenner, C. Registration of terrestrial laser scanning data using planar patches and image data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. ISPRS Arch. 2006, 36, 78–83.
49. Al-Durgham, K.; Habib, A. Association-matrix-based sample consensus approach for automated registration of terrestrial laser scans using linear features. Photogramm. Eng. Remote Sens. 2014, 80, 1029–1039.
50. Al-Durgham, K.; Habib, A.; Mazaheri, M. Solution frequency-based procedure for automated registration of terrestrial laser scans using linear features. In Proceedings of the ASPRS 2014 Annual Conference, Louisville, KY, USA, 23–28 March 2014.
51. He, F.; Habib, A. A Closed-Form Solution for Coarse Registration of Point Clouds Using Linear Features. J. Surv. Eng. 2016, 142, 04016006.
52. Al-Durgham, K.; Habib, A.; Kwak, E. RANSAC approach for automated registration of terrestrial laser scans using linear features. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, 2, 13–18.
53. Chuang, T.Y.; Jaw, J.J. Multi-feature registration of point clouds. Remote Sens. 2017, 9, 281.
54. Wu, H.; Scaioni, M.; Li, H.; Li, N.; Lu, M.; Liu, C. Feature-constrained registration of building point clouds acquired by terrestrial and airborne laser scanners. J. Appl. Remote Sens. 2014, 8, 083587.
55. Li, J.; Yang, B.; Chen, C.; Habib, A. NRLI-UAV: Non-rigid registration of sequential raw laser scans and images for low-cost UAV LiDAR point cloud quality improvement. ISPRS J. Photogramm. Remote Sens. 2019, 158, 123–145.
56. Stamos, I.; Allen, P.K. Geometry and texture recovery of scenes of large scale. Comput. Vis. Image Underst. 2002, 88, 94–118.
57. Al-Rawabdeh, A.; Al-Gurrani, H.; Al-Durgham, K.; Detchev, I.; He, F.; El-Sheimy, N.; Habib, A. A robust registration algorithm for point clouds from UAV images for change detection. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. ISPRS Arch. 2016, 41, 765–772.
58. Von Hansen, W.; Gross, H.; Thoennessen, U. Line-Based Registration of Terrestrial and Airborne Lidar Data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 37, 161–166.
59. Habib, A.F.; Ghanma, M.; Morgan, M.; Al-Ruzouq, R. Photogrammetric and Lidar Data Registration Using Linear Features. Photogramm. Eng. Remote Sens. 2005, 71, 699–707.
60. Wendt, A. A concept for feature based data registration by simultaneous consideration of laser scanner data and photogrammetric images. ISPRS J. Photogramm. Remote Sens. 2007, 62, 122–134.
61. Renaudin, E.; Habib, A.; Kersting, A.P. Featured-based registration of terrestrial laser scans with minimum overlap using photogrammetric data. ETRI J. 2011, 33, 517–527.
62. Yang, J.; Cao, Z.; Zhang, Q. A fast and robust local descriptor for 3D point cloud registration. Inf. Sci. 2016, 346–347, 163–179.
63. Bueno, M.; González-Jorge, H.; Martínez-Sánchez, J.; Lorenzo, H. Automatic point cloud coarse registration using geometric keypoint descriptors for indoor scenes. Autom. Constr. 2017, 81, 134–148.
64. Barnea, S.; Filin, S. Keypoint based autonomous registration of terrestrial laser point-clouds. ISPRS J. Photogramm. Remote Sens. 2008, 63, 19–35.
65. Choy, C.; Park, J.; Koltun, V. Fully Convolutional Geometric Features. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27–28 October 2019.
66. Bai, X.; Luo, Z.; Zhou, L.; Fu, H.; Quan, L.; Tai, C.L. D3Feat: Joint learning of dense detection and description of 3D local features. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 6358–6366.
67. Choy, C.; Dong, W.; Koltun, V. Deep global registration. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 2511–2520.
68. Fu, K.; Liu, S.; Luo, X.; Wang, M. Robust Point Cloud Registration Framework Based on Deep Graph Matching. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 8893–8902.
69. Ao, S.; Hu, Q.; Yang, B.; Markham, A.; Guo, Y. SpinNet: Learning a General Surface Descriptor for 3D Point Cloud Registration. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 11753–11762.
70. Huang, S.; Gojcic, Z.; Usvyatsov, M.; Wieser, A.; Schindler, K. PREDATOR: Registration of 3D Point Clouds with Low Overlap. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 4267–4276.
71. Bai, X.; Luo, Z.; Zhou, L.; Chen, H.; Li, L.; Hu, Z.; Fu, H.; Tai, C.-L. PointDSC: Robust Point Cloud Registration using Deep Spatial Consistency. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 15859–15869.
72. Velodyne HDL-32E Datasheet. Available online: https://velodynelidar.com/products/hdl-32e/ (accessed on 26 May 2021).
73. Velodyne Puck Hi-Res Datasheet. Available online: https://velodynelidar.com/products/puck-hi-res/ (accessed on 26 May 2021).
74. Applanix POSLV 220 Datasheet. Available online: https://www.applanix.com/products/poslv.htm (accessed on 26 April 2020).
75. Riegl VUX-1HA. Available online: http://www.riegl.com/products/newriegl-vux-1-series/newriegl-vux-1ha (accessed on 26 April 2020).
76. Z+F Profiler 9012. Available online: https://www.zf-laser.com/Z-F-PROFILER-R-9012.2d_laserscanner.0.html (accessed on 26 April 2020).
77. Novatel IMU-ISA-100C. Available online: https://docs.novatel.com/OEM7/Content/Technical_Specs_IMU/ISA_100C_Overview.htm (accessed on 26 April 2020).
78. Ravi, R.; Lin, Y.J.; Elbahnasawy, M.; Shamseldin, T.; Habib, A. Simultaneous System Calibration of a Multi-LiDAR Multicamera Mobile Mapping Platform. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1694–1714.
79. Habib, A.; Lay, J.; Wong, C. LIDAR Error Propagation Calculator. Available online: https://engineering.purdue.edu/CE/Academics/Groups/Geomatics/DPRG/files/LIDARErrorPropagation.zip (accessed on 10 October 2021).
80. FARO Focus 3D X330. Available online: https://www.connectcec.com/sites/cec/uploads/TechSheet_Focus3D_X_330.pdf (accessed on 7 February 2021).
81. Trimble TX8. Available online: https://fieldtech.trimble.com/resources/product-guides-brochures-data-sheets/datasheet-trimble-tx8-3d-laser-scanner (accessed on 7 February 2021).
82. Habib, A.; Lin, Y.J. Multi-class simultaneous adaptive segmentation and quality control of point cloud data. Remote Sens. 2016, 8, 104.
83. Demantké, J.; Mallet, C.; David, N.; Vallet, B. Dimensionality Based Scale Selection in 3D Lidar Point Clouds. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, XXXVIII-5/W12, 97–102.
84. Mikhail, E.M.; Ackermann, F.E. Observations and Least Squares; University Press of America: Washington, DC, USA, 1982; ISBN 978-0819123978.
85. Ravi, R.; Habib, A. Least Squares Adjustment with a Rank-deficient Weight Matrix and its Applicability towards Image/LiDAR Data Processing. Photogramm. Eng. Remote Sens. 2021, 87, 717–733.
Figure 1. The wheel-based mobile mapping systems and onboard sensors used in this study: (a) Purdue Wheel-based Mobile Mapping System-High Accuracy (PWMMS-HA) and (b) Purdue Wheel-based Mobile Mapping System-Ultra High Accuracy (PWMMS-UHA). Both platforms are non-commercial systems designed and integrated by the research group.
Figure 2. Study site: (a) the westbound bridge at the intersection of I-74 and US-231 near Crawfordsville, Indiana, USA (aerial photo adapted from Google Earth imagery), (b) image of the bridge captured by the PWMMS-HA while driving westbound on I-74, and (c) side view of the bridge captured by the PWMMS-HA while driving southbound on US-231.
Figure 3. Data acquisition: (a) drive run configuration for the vehicles (Tracks T1–T9) and TLS scan locations (Scans S1–S6), (b) image of the Trimble station (Scan S4) atop the I-74 embankment outside the barrier rail, and (c) image of the FARO station (Scan S1) on US-231 under the I-74 bridge.
Figure 4. Global navigation satellite system/inertial navigation system (GNSS/INS) position accuracy charts for the (a) PWMMS-HA (north and east positions), (b) PWMMS-HA (down position), and (c) PWMMS-UHA vehicles. The highlighted eight peaks correspond to the eight southbound and northbound tracks (Tracks T2–T9) on US-231 below the bridge, where suboptimal position accuracy can be observed.
Figure 5. Workflow of the proposed bridge deck thickness evaluation and comparative quality assessment strategy.
Figure 6. An example of cylindrical features (in green) that have been segmented from a LiDAR point cloud (colored by intensity).
Figure 7. Different options for representing planar features showing the normal vectors to the planes (defined by the eigenvector corresponding to the smallest eigenvalue) that are mainly along the (a) Z-axis (i.e., the eigenvector component along the Z-axis is larger than those along the X and Y axes), (b) Y-axis (i.e., the eigenvector component along the Y-axis is larger than those along the X and Z axes), and (c) X-axis (i.e., the eigenvector component along the X-axis is larger than those along the Y and Z axes).
Figure 8. Different options for representing cylindrical features with directions (defined by the eigenvector corresponding to the largest eigenvalue) that are mainly along the (a) Z-axis (i.e., the eigenvector component along the Z-axis is larger than those along the X and Y axes), (b) Y-axis (i.e., the eigenvector component along the Y-axis is larger than those along the X and Z axes), and (c) X-axis (i.e., the eigenvector component along the X-axis is larger than those along the Y and Z axes).
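Figures 7 and 8 encode the convention used in the feature-based registration: a planar feature is characterized by the eigenvector of its point covariance matrix with the smallest eigenvalue (the normal), a cylindrical feature by the eigenvector with the largest eigenvalue (the axis), and each feature is then grouped by the dominant component of that vector. A minimal NumPy sketch of this classification follows; the function name and interface are illustrative and not taken from the paper.

```python
import numpy as np

def dominant_axis(points: np.ndarray, feature: str = "plane") -> str:
    """Label a segmented feature as X-, Y-, or Z-dominant.

    For a planar patch the defining direction is the normal, i.e., the
    eigenvector of the point covariance with the smallest eigenvalue
    (Figure 7); for a cylindrical/linear feature it is the axis, i.e.,
    the eigenvector with the largest eigenvalue (Figure 8).
    """
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    _, eigvecs = np.linalg.eigh(cov)               # eigenvalues in ascending order
    vec = eigvecs[:, 0] if feature == "plane" else eigvecs[:, -1]
    return "XYZ"[int(np.argmax(np.abs(vec)))]      # dominant component of the vector
```

For example, the points of a vertical bridge column passed as `dominant_axis(points, "cylinder")` would return "Z", corresponding to the case in Figure 8a.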
Figure 9. Schematic illustration of the normal distance vector components that are minimized using the (a) plane-fitting model, (b) 3D line-fitting model, and (c) cylinder-fitting model.
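Figure 9 depicts the quantity minimized during feature fitting: the normal distance of each point to its plane, line, or cylinder model. The paper's adjustment, following [84,85], handles all three feature types; as an illustration, a total-least-squares fit for the planar case (Figure 9a) can be sketched as follows.

```python
import numpy as np

def fit_plane(points: np.ndarray):
    """Fit a plane by minimizing point-to-plane normal distances (Figure 9a).

    Returns the plane centroid, the unit normal, and the RMSE of the signed
    normal distances.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    # The normal is the right singular vector associated with the smallest
    # singular value of the centered coordinates (total least squares).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    dist = centered @ normal                       # signed normal distances
    return centroid, normal, float(np.sqrt(np.mean(dist ** 2)))
```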
Figure 10. Sample registration results using the iterative closest point (ICP) and proposed approach, showing a cylindrical feature before and after registration.
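Figure 10 contrasts the proposed feature-based fine registration with a conventional ICP baseline. The paper does not specify an ICP implementation; assuming the open-source Open3D library as a stand-in, a comparable point-to-plane ICP baseline could be set up as in this sketch.

```python
import numpy as np
import open3d as o3d  # assumption: Open3D used here only as an ICP baseline

def icp_baseline(src_xyz: np.ndarray, dst_xyz: np.ndarray,
                 max_dist: float = 0.5) -> np.ndarray:
    """Point-to-plane ICP between two Nx3 arrays; returns a 4x4 rigid transform."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(src_xyz))
    dst = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(dst_xyz))
    dst.estimate_normals()  # target normals are required for point-to-plane ICP
    result = o3d.pipelines.registration.registration_icp(
        src, dst, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation
```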
Figure 11. Example of (a) a point cloud capturing the top and bottom surfaces of the bridge deck and (b) a heat map representing the estimated bridge deck thickness.
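Figure 11 summarizes the thickness evaluation: the deck point cloud is split into top and bottom surfaces, and thickness is derived per plan-view cell rather than per point so that surface segments suppress the inherent noise. A simplified sketch of this idea follows; the 0.5 m cell size and the use of the median elevation are assumptions for illustration, not parameters reported in the caption.

```python
import numpy as np

def deck_thickness(top: np.ndarray, bottom: np.ndarray, cell: float = 0.5):
    """Per-cell deck thickness from top/bottom surface points (Nx3 XYZ arrays).

    Points are binned into plan-view cells; the thickness of a cell is the
    difference between the median elevations of the two surfaces, which
    averages out individual noisy returns.
    """
    origin = np.minimum(top[:, :2].min(axis=0), bottom[:, :2].min(axis=0))

    def cell_medians(pts):
        keys = map(tuple, np.floor((pts[:, :2] - origin) / cell).astype(int))
        bins = {}
        for key, z in zip(keys, pts[:, 2]):
            bins.setdefault(key, []).append(z)
        return {k: float(np.median(v)) for k, v in bins.items()}

    top_z, bot_z = cell_medians(top), cell_medians(bottom)
    # Only cells observed on both surfaces yield a thickness value.
    return {k: top_z[k] - bot_z[k] for k in top_z.keys() & bot_z.keys()}
```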
Figure 12. Point density over the bridge deck from one track (Track T1) for (a) PWMMS-HA and (b) PWMMS-UHA.
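Density maps such as those in Figure 12 can be produced by counting returns per plan-view cell and normalizing by cell area; a minimal sketch, with a 0.2 m cell size assumed for illustration:

```python
import numpy as np

def point_density(xyz: np.ndarray, cell: float = 0.2) -> np.ndarray:
    """Points per square meter on a plan-view grid (cf. Figure 12)."""
    ij = np.floor((xyz[:, :2] - xyz[:, :2].min(axis=0)) / cell).astype(int)
    counts = np.zeros(tuple(ij.max(axis=0) + 1))
    np.add.at(counts, (ij[:, 0], ij[:, 1]), 1)   # histogram the returns per cell
    return counts / cell ** 2                    # normalize by cell area
```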
Figure 13. Sample cross-sectional profile of the bridge deck showing the point cloud alignment before registration for the (a) PWMMS-HA and (b) PWMMS-UHA datasets. Both datasets have nine tracks (T1 is above the bridge and T2–T9 are below the bridge).
Figure 14. Extracted planar/linear/cylindrical features (in blue/red/green, respectively) for the registration of (a) TLS, (b) PWMMS-HA, and (c) PWMMS-UHA point clouds.
Figure 15. Box and whisker plots of the fine-registration transformation parameters for the (a) TLS, (b) PWMMS-HA, and (c) PWMMS-UHA datasets.
Figure 16. The twelve cylindrical columns and their axes derived from TLS (in blue), PWMMS-HA (in red), and PWMMS-UHA (in green) data.
Figure 17. Sample cross-sectional profile showing the post-registration point cloud alignment for the (a) TLS, (b) PWMMS-HA, and (c) PWMMS-UHA datasets.
Figure 18. Bridge deck thickness estimates shown as a heat map using the (a) TLS, (b) PWMMS-HA, and (c) PWMMS-UHA datasets.
Figure 19. Heat map visualization of the difference in bridge deck thickness estimates between (a) TLS and PWMMS-HA, (b) TLS and PWMMS-UHA, and (c) PWMMS-HA and PWMMS-UHA.
Table 1. Specifications of the point clouds acquired by the PWMMS-HA, PWMMS-UHA, and terrestrial laser scanners (TLSs) above and below the bridge in question.

| System | Scanner | Number of Tracks/Scans | Number of Points per Track/Scan | Data Acquisition Time |
|---|---|---|---|---|
| PWMMS-HA | HDL-RR | 9 | ~7 million | 5 min |
| | HDL-RL | 9 | ~7 million | |
| | HDL-FL | 9 | ~7 million | |
| | VLP-FR | 9 | ~2 million | |
| PWMMS-UHA | RI | 9 | ~15 million | 5 min |
| | ZF | 9 | ~15 million | |
| TLS | FARO | 3 | ~167 million | 3 h |
| | Trimble | 3 | ~199 million | |
Table 2. Weighted average of the RMSE of plane/line/cylinder fittings before and after registration for TLS, PWMMS-HA, and PWMMS-UHA.

| | | TLS | PWMMS-HA | PWMMS-UHA |
|---|---|---|---|---|
| Number of features | Planar | 19 | 11 | 10 |
| | Linear | 6 | 5 | 4 |
| | Cylindrical | 21 | 14 | 15 |
| | Total | 46 | 30 | 29 |
| Weighted average of RMSE (m) | Before | 0.014 | 0.026 | 0.029 |
| | After | 0.004 | 0.014 | 0.007 |
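The summary metric in Table 2 is an average of the per-feature fitting RMSE values across the planar, linear, and cylindrical features. Assuming the weights are the per-feature point counts (the exact weighting scheme is not restated in the table), the computation reduces to the following sketch.

```python
import numpy as np

def weighted_avg_rmse(rmse_values, weights):
    """Weighted average of per-feature fitting RMSE values (cf. Table 2).

    `weights` are assumed to be the per-feature point counts; the weighting
    actually used in the paper may differ.
    """
    r = np.asarray(rmse_values, dtype=float)
    w = np.asarray(weights, dtype=float)
    return float(np.sum(w * r) / np.sum(w))
```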
Table 3. Estimated post-registration radii of the cylindrical columns supporting the I-74 bridge from the TLS, PWMMS-HA, and PWMMS-UHA point clouds.

| ID | Points (TLS) | Points (HA) | Points (UHA) | Radius TLS (m) | Radius HA (m) | Radius UHA (m) | Diff. TLS vs. HA (m) | Diff. TLS vs. UHA (m) | Diff. HA vs. UHA (m) |
|---|---|---|---|---|---|---|---|---|---|
| C1 | 24,752 | 155,483 | 86,355 | 0.314 | 0.314 | 0.308 | 0.000 | −0.006 | −0.006 |
| C2 | 25,764 | 150,622 | 80,216 | 0.305 | 0.305 | 0.297 | 0.000 | −0.008 | −0.008 |
| C3 | 25,840 | 149,090 | 78,308 | 0.306 | 0.304 | 0.294 | −0.002 | −0.012 | −0.010 |
| C4 | 25,366 | 149,641 | 78,516 | 0.305 | 0.303 | 0.291 | −0.002 | −0.014 | −0.012 |
| C5 | 30,775 | 242,075 | 158,403 | 0.316 | 0.317 | 0.313 | 0.001 | −0.003 | −0.003 |
| C6 | 20,143 | 234,633 | 156,096 | 0.305 | 0.306 | 0.302 | 0.001 | −0.004 | −0.004 |
| C7 | 20,267 | 233,235 | 152,566 | 0.305 | 0.306 | 0.301 | 0.001 | −0.004 | −0.005 |
| C8 | 20,123 | 231,872 | 149,291 | 0.305 | 0.306 | 0.301 | 0.001 | −0.004 | −0.005 |
| C9 | 25,985 | 140,673 | 113,636 | 0.315 | 0.310 | 0.297 | −0.005 | −0.018 | −0.012 |
| C10 | 26,942 | 136,115 | 108,052 | 0.305 | 0.302 | 0.291 | −0.003 | −0.014 | −0.011 |
| C11 | 27,124 | 137,503 | 103,772 | 0.305 | 0.303 | 0.295 | −0.002 | −0.010 | −0.008 |
| C12 | 25,314 | 136,477 | 102,548 | 0.305 | 0.303 | 0.298 | −0.002 | −0.007 | −0.006 |
| Mean | | | | | | | −0.001 | −0.009 | −0.008 |
| Std. Dev. | | | | | | | 0.002 | 0.005 | 0.003 |
| RMSE | | | | | | | 0.002 | 0.010 | 0.008 |
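The Mean, Std. Dev., and RMSE rows of Table 3 can be verified directly from the per-column differences; for instance, for the TLS vs. PWMMS-UHA column (a sample standard deviation is assumed):

```python
import numpy as np

# TLS vs. PWMMS-UHA radius differences (m) for columns C1-C12 (Table 3).
diff = np.array([-0.006, -0.008, -0.012, -0.014, -0.003, -0.004,
                 -0.004, -0.004, -0.018, -0.014, -0.010, -0.007])
print(f"{diff.mean():.3f}")                   # -0.009 (Mean row)
print(f"{diff.std(ddof=1):.3f}")              #  0.005 (Std. Dev. row)
print(f"{np.sqrt(np.mean(diff ** 2)):.3f}")   #  0.010 (RMSE row)
```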
Table 4. Statistics of the difference between bridge deck thickness estimates from different systems.

| Statistic | TLS vs. PWMMS-HA | TLS vs. PWMMS-UHA | PWMMS-HA vs. PWMMS-UHA |
|---|---|---|---|
| Mean (m) | 0.010 | 0.005 | −0.006 |
| Std. Dev. (m) | 0.018 | 0.025 | 0.012 |
| RMSE (m) | 0.021 | 0.025 | 0.013 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
