Article

A Rapid Method of the Rock Mass Surface Reconstruction for Surface Deformation Detection at Close Range

1 School of Civil Engineering and Geomatics, Southwest Petroleum University, Chengdu 610500, China
2 School of Mechatronic Engineering, Southwest Petroleum University, Chengdu 610500, China
3 School of Transportation and Logistics, Southwest Jiaotong University, Chengdu 610031, China
* Author to whom correspondence should be addressed.
Sensors 2020, 20(18), 5371; https://doi.org/10.3390/s20185371
Submission received: 17 August 2020 / Revised: 15 September 2020 / Accepted: 15 September 2020 / Published: 19 September 2020
(This article belongs to the Section Remote Sensors)

Abstract

Characterizing the surface deformation during the inter-survey period could assist in understanding rock mass progressive failure processes, and 3D reconstruction of the rock mass surface is a crucial step in surface deformation detection. This study presents a method to rapidly reconstruct the rock mass surface at close range for surface deformation detection, using an improved structure from motion–multi-view stereo (SfM-MVS) algorithm. To adapt to the unique features of rock mass surfaces, the AKAZE algorithm, which showed the best performance in rock mass feature detection, is introduced to improve SfM. The surface reconstruction procedure mainly consists of image acquisition, feature point detection, sparse reconstruction, and dense reconstruction. The proposed method was verified by three experiments. Experiment 1 showed that the method effectively reconstructed the rock mass model. Experiment 2 demonstrated the improved accuracy of the modified SfM compared with the traditional algorithm in reconstructing the rock mass surface. Finally, in Experiment 3, the surface deformation of the rock mass was quantified by reconstructing images taken before and after a disturbance. All results show that the proposed method can provide reliable information for rock mass surface reconstruction and deformation detection.

1. Introduction

Constrained by topographic and environmental conditions, projects in Southwest China that have been built or are under construction are closely related to rock masses, for example, tunnel engineering, slope engineering, and foundation engineering. Much work has been done to evaluate the properties of rock engineering by analyzing the mechanics and failure characteristics of rock masses [1,2]. Surface deformation analysis of rock masses can also provide useful information for understanding their failure mechanism and stability [3,4]. Rock mass surface deformation detection is of great significance in the safety management of a construction project and can, to some extent, even give early warning of danger in rock engineering. Surface reconstruction is the basis for quantifying the surface deformation process in rock engineering. This study explores a rapid method of three-dimensional (3D) rock mass surface reconstruction for surface deformation detection at close range.
On-site monitoring is an essential means of surface deformation detection in rock engineering. In the past decade, remote surveying technology has made significant progress in rapidly acquiring 3D high-resolution digital images [2]. It is commonplace to characterize rock mass surfaces through digital photogrammetry [5,6], light detection and ranging (LiDAR) [7], Interferometric Synthetic Aperture Radar (InSAR) [8], 3D laser scanning [9,10], and unmanned aerial vehicles (UAVs) [10,11]. These techniques are suitable for measuring a variety of processes according to their own features. Recent developments in 3D photogrammetry can create 3D surface models or triangulated irregular networks (TINs) from multiple photos [12,13]. Owing to its economy, convenience, and intelligence, structure from motion–multi-view stereo (SfM-MVS) has attracted continuous investigation for acquiring digital information in 3D reconstruction [14,15]. SfM calculates the object's position based on reference points deduced from the photos and is mainly used for sparse reconstruction, while MVS generates a broad range of point clouds based on the predicted object position and reference points and is used for dense reconstruction. This technology has been extended to land surface changes [16,17], river erosion [18], and rock failure [19], among others. Some works have even achieved highly accurate reconstruction models, indicating that SfM-MVS can provide a survey precision comparable to current measurement methods [20,21,22,23,24].
However, the literature lacks investigation into the surface reconstruction of rock mass at close range using SfM-MVS. The quality of 3D reconstruction is one of the most critical issues affecting rock mass surface deformation detection. First of all, the selection of verification methods can lead to different assessment results [25,26,27]. Secondly, the pixel footprint grows with range [28], so reconstruction accuracy decreases roughly linearly as range increases. As to the error measurement criterion, root mean squared error (RMSE) is the most common metric for evaluating the difference between model measurements and independent observations [25,29,30]. Last, but not least, regarding the data acquisition facility, sensors that capture different types of images may complicate the results of 3D reconstruction. To adapt to the unique features of rock mass surfaces, several factors that affect quality should be considered in practical 3D reconstruction of rock mass surfaces at close range.
Characterizing the surface deformation during the inter-survey period could assist in understanding rock mass progressive failure processes. The quality of surface deformation detection depends on the quality of the surface reconstruction produced by the SfM algorithm. This study proposes an improved 3D reconstruction algorithm, A-SfM, to achieve more accurate rock mass surface reconstruction by obtaining a higher-accuracy 3D point cloud of the rock surface. The surface deformation of the rock mass was detected from 3D reconstructions at different times using the open-source software CloudCompare. This study aims to provide more accurate monitoring information for predicting further disasters in rock engineering and helps, to some extent, to understand the gradual failure process of the rock mass surface.

2. Proposed Method

SfM, an algorithm for 3D reconstruction from multiple unordered images, is introduced to reconstruct the surface of rock mass with images acquired at close range. The accuracy and cost of 3D reconstruction depend on the number of feature points extracted and matched, the calculation time, the central processing unit (CPU) utilization, and the mismatch rate of feature points. Regarding the feature extraction stage of SfM, the SIFT algorithm, with its scale and rotation invariance, is the mainstream choice. However, edge information may still be lost because the Gaussian blur smooths all scales of the target image to the same degree, blurring detail and noise alike. To build a 3D reconstruction model better suited to the characteristics of rock mass images, this study focuses on improving the feature extraction of the SfM algorithm.
The image data sets of rock mass were taken in Aba, Sichuan, China. A total of 41 images were classified into four groups according to transformations in intensity, rotation, scale, and blur, respectively. Table 1 and Table 2 list the parameters of five feature extraction algorithms (AKAZE, ORB, SURF, SIFT, and BRISK), including the number of feature points, number of interior points, number of matching points, execution time, CPU utilization, efficiency of feature points, and matching accuracy.
Test results were analyzed comprehensively with the Entropy Weight–TOPSIS method. The comprehensive evaluation index $C_i^*$ is calculated as follows:
$$C_i^* = [0.1234,\ 0.1325,\ 0.3304,\ 0.3656,\ 0.0481]$$
where SIFT is 0.1234, SURF is 0.1325, ORB is 0.3304, AKAZE is 0.3656, and BRISK is 0.0481. According to the comprehensive evaluation index, the five algorithms are ranked as follows: AKAZE, ORB, SURF, SIFT, and BRISK. Therefore, AKAZE was used to improve the traditional SfM-MVS.
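As an illustration of the Entropy Weight–TOPSIS scoring used here, the NumPy sketch below computes entropy weights and closeness coefficients for a decision matrix. The matrix values in the usage example are hypothetical placeholders, not the measurements from Tables 1 and 2.

```python
import numpy as np

def entropy_weights(X):
    # Entropy weighting: criteria whose values vary more across the
    # alternatives receive larger weights.
    P = X / X.sum(axis=0)                     # column-wise proportions
    m = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)
    e = -(P * logs).sum(axis=0) / np.log(m)   # entropy per criterion
    d = 1.0 - e                               # degree of divergence
    return d / d.sum()

def topsis(X, weights, benefit):
    # TOPSIS: score each alternative by its closeness to the ideal solution.
    R = X / np.linalg.norm(X, axis=0)         # vector normalization
    V = R * weights
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - worst, axis=1)
    return d_neg / (d_pos + d_neg)            # closeness coefficient C*
```

For the feature detector comparison, rows would be the five algorithms and columns the indices of Tables 1 and 2 (feature point counts and matching accuracy as benefit criteria; execution time and CPU utilization as cost criteria).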
The framework for 3D reconstruction of rock mass surface mainly includes image acquisition, feature point detection, sparse reconstruction, and dense reconstruction, as shown in Figure 1.

3. Basic Theory and Methods

3.1. Image Acquisition

Reconstruction using SfM places fewer requirements on image acquisition sensors. Table 3 shows the related technical specifications of sensors for image acquisition. The primary purpose of image acquisition sensor design is to select the most appropriate sensor to acquire images with sufficient resolution. A high-performance single-lens reflex camera is used for image acquisition, in order to best adapt to the actual situation of the rock engineering site and to reduce time and economic costs. Most of these cameras can achieve ground resolution at the centimeter level and have strong robustness.

3.2. Feature Point Detection

Feature extraction. The AKAZE algorithm is used to improve SfM in surface reconstruction owing to its suitability for rock mass image characteristics. AKAZE is a feature extraction algorithm with good robustness due to the introduced modified-local difference binary (M-LDB) descriptor. The main principle is as follows.
A nonlinear diffusion filter describes the variation of image brightness L at different scales using the divergence of a flow function, Formula (1):
$$\frac{\partial L}{\partial t} = \mathrm{div}\left( c(x, y, t) \cdot \nabla L \right)$$
where L is the brightness matrix of the image; div and $\nabla$ represent the divergence and gradient operators, respectively; $c(x, y, t)$ is the conduction function; and t is the evolution time.
The conduction function allows the diffusion equation to adapt to the local structure characteristics of the image. The conduction function is defined as Formula (2).
$$c(x, y, t) = g\left( \left| \nabla L_\sigma(x, y, t) \right| \right)$$
where $L_\sigma$ is the Gaussian-smoothed image. The conduction kernel function selected for optimal diffusion smoothing is given by Formula (3):
$$g_2 = \frac{1}{1 + \left| \nabla L_\sigma \right|^2 / \lambda^2}$$
where $\lambda$ is the contrast factor, which controls the degree of diffusion and determines which edge regions are enhanced and which flat regions are smoothed.
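To make Formulas (1)–(3) concrete, the sketch below applies the $g_2$ conduction function in one explicit diffusion step. This is an illustration, not the authors' implementation; the values of $\lambda$ and the step size $\tau$ are illustrative assumptions.

```python
import numpy as np

def conduction_g2(grad_mag, lam):
    # Formula (3): small near edges (large gradient), ~1 in flat regions,
    # so edges are preserved while flat areas are smoothed.
    return 1.0 / (1.0 + (grad_mag / lam) ** 2)

def diffusion_step(L, lam=10.0, tau=0.2):
    # One explicit step of Formula (1): L <- L + tau * div(c * grad L)
    gy, gx = np.gradient(L.astype(float))
    c = conduction_g2(np.hypot(gx, gy), lam)
    div_y, _ = np.gradient(c * gy)     # d/dy of the y flow component
    _, div_x = np.gradient(c * gx)     # d/dx of the x flow component
    return L + tau * (div_x + div_y)
```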
The evolution time $t_i$ is obtained by converting the scale parameter $\sigma_i$ (in pixels):
$$t_i = \frac{1}{2} \sigma_i^2, \quad i = 0, \ldots, M$$
The nonlinear scale space can be built using the fast explicit diffusion (FED) algorithm [31] to solve the partial differential equation of Formula (1):
$$L^{i+1} = \left( I + \tau A(L^i) \right) L^i, \quad i = 0, 1, \ldots, n-1$$
where $A(L^i)$ is the conduction matrix encoding the image, constructed from the histogram of the Gaussian-filtered scale image; I is the identity matrix; and $\tau$ is the step size, which comes from the factorization of a box filter [32]. The matrix $A(L^i)$ remains unchanged throughout one FED cycle; when the FED cycle ends, the algorithm recomputes $A(L^i)$.
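The FED step sizes mentioned above follow from the box-filter factorization of the published FED scheme [31]; a small illustrative sketch is:

```python
import math

def fed_tau(n, tau_max=0.25):
    # FED cycle of n steps: tau_j = tau_max / (2 cos^2(pi (2j+1) / (4n+2))).
    # The steps are individually unstable but the whole cycle realizes a
    # stopping time of tau_max * (n^2 + n) / 3.
    return [tau_max / (2.0 * math.cos(math.pi * (2 * j + 1) / (4 * n + 2)) ** 2)
            for j in range(n)]
```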
After the nonlinear scale space is constructed, the Hessian matrix is used to extract the feature points. Meanwhile, as in SIFT, each candidate is compared with the 26 points at the same position in the layers above and below (including the current layer) to check whether it is still an extreme point. With this method, local feature points are extracted. To verify the feature extraction effect of the AKAZE algorithm, a slope image taken in Aba, Sichuan, China was used to display the extraction results of feature points, shown in Figure 2.
Feature matching. After feature extraction, neighborhood matching is established to find all matching points. The Euclidean distance is adopted to screen the feature point pairs; pairs that do not meet the threshold are removed.
In this step, the FLANN feature point matching algorithm is adopted: a K-Dimensional Tree (KD-Tree) is used first for the feature point search, and the matching degree is then determined according to the Euclidean distance. This method can partition feature points into different spaces and effectively obtain matching point pairs in different spatial domains.
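A brute-force stand-in for the FLANN/KD-tree search, using the Euclidean-distance ratio test to screen feature point pairs, can be sketched as follows. This is a hedged illustration: real pipelines run approximate nearest-neighbour search over much larger descriptor sets, and the ratio value 0.8 is an assumed threshold.

```python
import numpy as np

def match_descriptors(d1, d2, ratio=0.8):
    # For each descriptor in d1, find its two nearest neighbours in d2 and
    # accept the match only if the best distance is clearly smaller than
    # the second best (the ratio test rejects ambiguous matches).
    matches = []
    for i, d in enumerate(d1):
        dist = np.linalg.norm(d2 - d, axis=1)
        j1, j2 = np.argsort(dist)[:2]
        if dist[j1] < ratio * dist[j2]:
            matches.append((i, j1))
    return matches
```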
Mismatch elimination. Exact matching is conducted with the Random Sample Consensus (RANSAC) algorithm. This step yields the transformation relationship between images [33].
The idea of the RANSAC algorithm is as follows: (1) the data consist of inliers; (2) outliers cannot be fitted by the model; (3) the remaining data are noise points. RANSAC can estimate high-precision parameters from a data set containing a large number of outliers, making it an excellent mismatch elimination method.
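The RANSAC idea can be sketched on a toy 2D line-fitting problem (a minimal illustration; in the actual pipeline the minimal model is the fundamental matrix, not a line, and the iteration count and threshold below are assumptions):

```python
import numpy as np

def ransac_line(pts, n_iter=200, thresh=0.1, seed=0):
    # Repeatedly fit a minimal model (a line through 2 sampled points) and
    # keep the hypothesis supported by the largest consensus (inlier) set.
    rng = np.random.default_rng(seed)
    best = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        p, d = pts[i], pts[j] - pts[i]
        norm = np.hypot(d[0], d[1])
        if norm == 0.0:
            continue
        # Perpendicular distance of every point to the candidate line
        dist = np.abs(d[0] * (pts[:, 1] - p[1]) - d[1] * (pts[:, 0] - p[0])) / norm
        inliers = dist < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return best
```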

3.3. Sparse Reconstruction

Figure 3 shows the schematic of sparse reconstruction for two images.
After the interior orientation parameters are obtained by camera self-calibration, the exterior parameters need to be solved. Set the world coordinate system to coincide with the camera coordinate system of the first image, so that for the first image the rotation matrix is R = I and the translation vector is $T = (0, 0, 0)^T$. The projection matrix $P_1$ of the first image is shown in Formula (6):
$$P_1 = K[I \mid 0] = [K \mid 0]$$
where I is the identity matrix and K is the intrinsic parameter matrix.
Similarly, the projection matrix P 2 of the second image could be represented as Formula (7):
$$P_2 = K[R \mid T]$$
As the essential matrix contains the rotation and translation, it can be obtained according to Formula (8):
$$E = K^{T} F K$$
where F is the fundamental matrix, which can be obtained from the matching points in the initial image pair.
The relative pose between the two cameras can be obtained by the singular value decomposition (SVD) of the essential matrix, Formula (9):
$$E = U D V^{T}$$
where U and V are orthogonal matrices of order 3, and D is a diagonal matrix.
There are four possible solutions for the projection matrix recovered from the essential matrix (see Figure 4 and Formula (10)):
$$P_2 = [UWV^{T} \mid u_3];\ [UWV^{T} \mid -u_3];\ [UW^{T}V^{T} \mid u_3];\ [UW^{T}V^{T} \mid -u_3]$$
where $u_3$ is the last column of the matrix U, and
$$W = \begin{bmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
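Formula (10) can be sketched as follows. This is an illustrative NumPy version: sign conventions vary between implementations, and a cheirality (points-in-front-of-both-cameras) test is still needed to pick the one physical solution among the four.

```python
import numpy as np

def pose_candidates(E):
    # Decompose the essential matrix E = U D V^T into the four (R, t)
    # candidates of Formula (10); t is the last column u3 of U (up to sign).
    U, _, Vt = np.linalg.svd(E)
    # Force proper rotations: flip a factor if its determinant is negative
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    u3 = U[:, 2]
    return [(U @ W @ Vt, u3), (U @ W @ Vt, -u3),
            (U @ W.T @ Vt, u3), (U @ W.T @ Vt, -u3)]
```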
Three-dimensional points can then be calculated from the image positions of the points and the projection matrices $P_1$ and $P_2$. This process is called triangulation.
Suppose $P_{1k}$ and $P_{2k}$ are the k-th row vectors of $P_1$ and $P_2$, respectively, $M_w = (X_w, Y_w, Z_w, 1)^T$ is the space coordinate of point M, and $(u_1, v_1, 1)^T$ and $(u_2, v_2, 1)^T$ are its image coordinates in image 1 and image 2, respectively. The linear equations in Formula (11) are obtained using the coordinate transformation relations:
$$\begin{bmatrix} u_1 P_{13} - P_{11} \\ v_1 P_{13} - P_{12} \\ u_2 P_{23} - P_{21} \\ v_2 P_{23} - P_{22} \end{bmatrix} M_w = 0$$
Since Formula (11) has more equations than unknowns, least squares is used to solve for the space coordinates of point M. Errors may exist in the obtained coordinates as a result of errors in feature matching. Therefore, bundle adjustment (BA) is introduced to further improve the precision of the coordinates, as it optimizes the camera parameters and 3D coordinates by minimizing the reprojection error.
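The least-squares triangulation of Formula (11) can be sketched with an SVD null-space solution. This is an illustration with toy camera matrices (K = I), not the paper's calibrated setup.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    # Stack the four rows of Formula (11) and take the right singular
    # vector of the smallest singular value as the least-squares
    # homogeneous solution for M_w.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    M = Vt[-1]
    return M[:3] / M[3]          # dehomogenize to (X_w, Y_w, Z_w)
```

For example, with $P_1 = [I \mid 0]$ and $P_2 = [I \mid t]$ for a pure translation t, the function recovers the 3D point that projects to the two observed image coordinates.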
The BA needs to be initialized with a good image pair. First, BA is performed on the two initial images; then new images are added cyclically for a new BA. BA is an iterative process in which all valid images are computed continuously until the iteration ends, yielding the camera parameters and scene geometry. Reprojection errors are the distances between the projected points and the observed points in the images. For m images and n trajectory points, the reprojection error is given by Formula (12):
$$\min \sum_{k=1}^{m} \sum_{i=1}^{n} \left\| x_{ki} - P_k X_i \right\|^2$$
where $P_k$ is the projection matrix and $x_{ki}$ is the image position of point i in image k. The purpose of BA is to minimize this function.
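The BA objective of Formula (12) can be evaluated as below. This sketches only the cost function; an actual BA would minimize it over camera parameters and 3D points with a nonlinear solver such as Levenberg–Marquardt (an assumption about the solver, not stated in the paper).

```python
import numpy as np

def reprojection_error(cameras, points, observations):
    # Formula (12): sum over observations (k, i, x_ki) of the squared
    # distance between x_ki and the projection of point X_i by camera P_k.
    total = 0.0
    for k, i, x in observations:
        proj = cameras[k] @ np.append(points[i], 1.0)
        total += float(np.sum((np.asarray(x) - proj[:2] / proj[2]) ** 2))
    return total
```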
Reconstruction from multiple images is consistent with the two-image case. After the initial projection matrix P is solved, Formula (11) is used to recover the 3D coordinates of the matching points in the remaining images. However, as the number of images increases, the difference between newly added images and the previous ones becomes larger, and the fewer the matching pairs, the more difficult it is to calculate the fundamental matrix.
Therefore, the projection matrix P of a newly added image is calculated from the already-reconstructed 3D point coordinates. Suppose $(u_i, v_i, 1)^T$ is the image coordinate of space point $M_i = (X_i, Y_i, Z_i, 1)^T$ in a newly added image; then Formula (13) can be derived:
$$\begin{bmatrix} 0^{T} & -M_i^{T} & v_i M_i^{T} \\ M_i^{T} & 0^{T} & -u_i M_i^{T} \end{bmatrix} \begin{bmatrix} P_{i1} \\ P_{i2} \\ P_{i3} \end{bmatrix} = 0$$
The projection matrix has 11 degrees of freedom (DOF), so the projection matrix of a new image can be obtained from the projections of six reconstructed 3D points on that image. When more than six such points are available, RANSAC can help obtain a more accurate projection matrix.
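A direct linear transform (DLT) sketch of Formula (13), recovering a camera from 3D–2D correspondences, might look as follows. This is illustrative only and assumes at least six well-distributed, non-coplanar points.

```python
import numpy as np

def resection_dlt(points3d, points2d):
    # Formula (13): each 3D-2D pair contributes two linear rows; the 11-DOF
    # projection matrix (up to scale) is the SVD null-space of the stack.
    rows = []
    for X, (u, v) in zip(points3d, points2d):
        M = np.append(X, 1.0)
        Z = np.zeros(4)
        rows.append(np.concatenate([Z, -M, v * M]))
        rows.append(np.concatenate([M, Z, -u * M]))
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)
```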

3.4. Dense Reconstruction

The sparse point cloud is only useful for regular objects with obvious features and fails to present the surface information of an object well. Complex scenarios such as rock engineering therefore need a denser point cloud. The patch-based multi-view stereo (PMVS) algorithm [34] can reconstruct high-precision models with rich surface details for scenes with unclear texture, limited viewpoints, large curvature, and so on.
A patch is a rectangle on a local tangent plane that approximates the object's surface. V(p) is defined as the set of images in which patch p is visible, and R(p) is the reference image of p, with $R(p) \in V(p)$. The discrepancy function can be defined as Formula (14):
$$g(p) = \frac{1}{\left| V(p) \setminus R(p) \right|} \sum_{I \in V(p) \setminus R(p)} h(p, I, R(p))$$
where $V(p) \setminus R(p)$ denotes the images of V(p) excluding R(p), and $h(p, I_1, I_2)$ is the grayscale consistency function between images $I_1$ and $I_2$. The steps of the solution are as follows [35,36]:
(1) Divide patch p into smaller squares of size u × u.
(2) Calculate the difference value of patch p on image $I_i$ to obtain the pixel grayscale $q(p, I_i)$ through bilinear interpolation.
(3) Subtract the normalized cross correlation (NCC) value of $q(p, I_1)$ and $q(p, I_2)$ from 1.
(4) Initialize and optimize the relevant parameters.
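Step (3) relies on NCC; a minimal sketch of the computation (an illustration, with PMVS's photometric discrepancy then being 1 − NCC) is:

```python
import numpy as np

def ncc(a, b):
    # Normalized cross correlation of two equally-sized patch samplings:
    # invariant to affine brightness changes, 1 for identical patterns.
    a = np.asarray(a).astype(float).ravel()
    b = np.asarray(b).astype(float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0
```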
Ensuring the continuity of the patches is a major difficulty. To solve this problem, the image $I_i$ is divided into many $\beta_1 \times \beta_1$ pixel cells $C_i(x, y)$, where i is the i-th image and (x, y) is the index of a cell. For a patch p and the corresponding set V(p), p is projected onto the images of V(p) to obtain the cell corresponding to p. The set $Q_i(x, y)$ records all the patches projected onto each cell.

4. Experiment Preparation

A series of 3D reconstruction experiments based on images from different angles and directions was performed using the A-SfM algorithm.
Three experiments were designed. Experiment 1 was used to evaluate the results of 3D reconstruction with the proposed A-SfM. Experiment 2 was employed to test the accuracy of A-SfM. Experiment 3 was conducted to detect the deformation of the rock mass surface.
Experiment 1.
Two groups of images were acquired: (1) a rock in an indoor environment without interference; (2) the surface of soil outdoors. The two groups contained 32 and 16 images, respectively. An iPhone XR camera was used to acquire images in a counterclockwise direction around the objects, with a distance of 2 m between the object and the camera. Examples of the image samples are shown in Figure 5.
Experiment 2.
A slope model was built in a laboratory environment to explore the accuracy of the A-SfM algorithm, as shown in Figure 6. The model is 35 cm in length, 35.5 cm in width, and 12 cm in height, with a gradient of 50°. It mainly consists of sand, low-grade gravel, and a small amount of mudstone. Mark points were used for binocular vision monitoring to serve as reference data for the A-SfM analysis. Three groups of tests were designed to distinguish different mark points and different photograph distances:
Group 1: The photograph distance was 2 m, and the mark points were common.
Group 2: The photograph distance was 1 m, and the mark points were common.
Group 3: The photograph distance was 1 m, and the mark points were concentric circles.
Experiment 3.
The surface deformation of rock mass was quantified based on the 3D reconstruction results of the surface before and after the disturbance. Geodetic control points were applied to compare the two results in the same coordinate system (Figure 7). The procedure was as follows:
(1) The geodetic control points were measured and recorded with a total station electronic tachymeter.
(2) The distance between the wall and each image capture station was measured using a laser rangefinder, and the locations were marked.
(3) Eight images before the disturbance were captured sequentially.
(4) Four sandstone samples of 50 mm in diameter and 50 mm in height were used to simulate uplift and deformation. The samples were placed lightly on the model to avoid disturbing the rock mass at other locations.
(5) Eight images after the disturbance were captured sequentially.

5. Results and Analysis

5.1. Description of Results

5.1.1. Results for Experiment 1

Figure 8 shows the 3D reconstruction results with the proposed A-SfM algorithm, where (a) and (b) display the reconstruction results of the first group of images, and (c) and (d) are the results of the second group.

5.1.2. Results for Experiment 2

The results of binocular vision measurement served as reference data for the accuracy analysis of the reconstructions. Studies related to binocular vision measurement have been completed and published [5,37]. The measurement results, shown in Table 4, Table 5 and Table 6, were obtained through camera calibration, pixel coordinate positioning, and space coordinate calculation.
Two methods, SfM and A-SfM, were used to establish the 3D reconstruction; the results of A-SfM are shown in Figure 9.

5.1.3. Results for Experiment 3

The images captured before and after the disturbance were then processed, and reconstructions using A-SfM before and after the disturbance were established, as shown in Figure 10.

5.2. Reconstruction Results Analysis

The results of Experiment 1 showed that both groups of images were reconstructed well, and the reconstructed models were almost identical to the real objects. However, there were still some disturbance point clouds that affected the modeling. These disturbance points were dominated by environmental features, shadows and interfering objects, that appeared in multiple images, which affected the essential matrix of the object and the reconstruction of the contour. Besides, the first image took about 80 times longer than the others in feature extraction because more time was needed to select the initial relative features. It can be inferred that reconstruction using A-SfM performs well in environments with prominent characteristics and strong contrast, such as the rock engineering environment studied in this paper, which supports its application to rock mass surface detection. Even so, the model accuracy needed further discussion, which is why Experiment 2 was designed.
In Experiment 2, to evaluate the accuracy of the A-SfM algorithm, the reconstructions were compared with the binocular vision measurements. However, the distances calculated from the reconstruction results are relative distances rather than actual measured distances in the space coordinate system. A scaling factor is introduced to map the relative distances to physical distances with Formula (15) [38]:
$$S = \frac{d_{known}}{I_{known}}$$
where S is the scaling factor, $d_{known}$ is the physical length of an object, and $I_{known}$ is the pixel length of the object on the imaging plane. Table 7 lists the calculated scaling factors.
According to the scaling factor and the pixel length, the physical length between the mark points in the two reconstructions, one for SfM and the other for A-SfM, was calculated, as shown in Table 8 and Table 9.
Table 10 shows the comparison of the two reconstructions, using SfM and A-SfM, against the binocular vision measurements. This comparison verified the 3D reconstruction performance of SfM before and after the improvement. The improved SfM algorithm significantly increased the measurement accuracy and more effectively reflects the real situation (Figure 11). The measurement error was reduced from 4.58 mm to 2.7 mm in group 1, from 3.51 mm to 0.53 mm in group 2, and from 2.01 mm to 0.25 mm in group 3.

5.3. Rock Mass Surface Deformation Analysis

Unlike most traditional measuring methods, which detect deformation at single points, the 3D point cloud can quantify the variation of the whole monitoring region. The surface deformation of the rock mass can be quantified by comparing the 3D reconstruction results of the surface before and after the disturbance. The proposed procedure includes point cloud data cleaning, geodetic control point registration, iterative closest point (ICP) registration, and calculation of the Euclidean distance between the registered and reference point clouds.
Point cloud data cleaning. As discussed in Experiment 1, numerous invalid points and outliers were present when the 3D point cloud data reconstructed by A-SfM were imported into CloudCompare. Because only the detected deformation area should be preserved, the point cloud data were first preprocessed with a bounding box. Then, outliers were eliminated using a statistical analysis filter. Finally, the remaining invalid points were cut and removed manually. Figure 12 shows the results before and after point cloud data cleaning, for data 1 (reconstruction before the disturbance) and data 2 (after the disturbance).
Geodetic control point registration. To place the 3D point clouds before and after the disturbance in the same world coordinate system, the geodetic control points in the reconstructions were registered after the point cloud data cleaning. The cloud data before the disturbance served as the reference, and the cloud data after the disturbance as the matching data. Figure 13 shows the results of geodetic control point registration, where (a) shows the position and distribution of the data before registration, and (b) shows the data after registration.
Iterative closest point (ICP) registration. Some errors remained because the control points were selected manually during the geodetic control point registration. Therefore, ICP was used for precise registration, again with the cloud data before the disturbance as the reference and the cloud data after the disturbance as the matching data. Figure 14 presents the results after ICP registration.
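The ICP step can be sketched as a minimal point-to-point version with a closed-form SVD (Kabsch) alignment; CloudCompare's implementation additionally uses sampling, pair rejection, and convergence criteria, so this is an illustration only.

```python
import numpy as np

def icp(src, ref, n_iter=10):
    # Minimal point-to-point ICP: pair each source point with its nearest
    # reference point, then apply the closed-form rigid transform that
    # minimizes the pairing residual; repeat.
    src = np.asarray(src, dtype=float).copy()
    ref = np.asarray(ref, dtype=float)
    for _ in range(n_iter):
        d2 = ((src[:, None, :] - ref[None, :, :]) ** 2).sum(axis=2)
        pairs = ref[d2.argmin(axis=1)]            # nearest-neighbour pairing
        mu_s, mu_r = src.mean(axis=0), pairs.mean(axis=0)
        H = (src - mu_s).T @ (pairs - mu_r)       # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                        # proper rotation (det = +1)
        src = (src - mu_s) @ R.T + mu_r
    # Per-point Euclidean distance to the reference after alignment
    d2 = ((src[:, None, :] - ref[None, :, :]) ** 2).sum(axis=2)
    return src, np.sqrt(d2.min(axis=1))
```

The returned distances are the per-point cloud-to-cloud values that, after registration, indicate surface deformation.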
Rock mass surface deformation detection. The Euclidean distance between each point and its nearest neighbors was calculated from the precisely registered data. The deformation ranged from 8.97 × 10⁻⁵ m to 0.61 × 10⁻¹ m; the color scale is shown in Figure 15.
Results analysis. The uplift used to simulate the deformation consisted of rock samples with a diameter of 50 mm. However, the lower part of each sample was slightly inserted into the rock mass model while the upper part rested on the surface. As seen in Figure 15, the smallest surface deformation in the undisturbed zones is 0.094 mm, whereas the maximum deformation in the disturbed zones is 48.43 mm. The detected results were generally consistent with the actual situation.

6. Conclusions

Three-dimensional reconstruction of the rock mass surface is a crucial step in surface deformation detection, which could assist in understanding rock mass progressive failure processes. On the basis of the SfM method, an A-SfM method was proposed for rock engineering applications, so as to acquire a 3D reconstruction suited to the characteristics of the rock mass surface. The AKAZE algorithm is used to improve the feature extraction stage of SfM so as to extract features of the rock mass more easily at close range. Three experiments verified the capability of the proposed A-SfM method. The specific conclusions are as follows:
(1) The results of 3D reconstruction in Experiment 1 using the proposed A-SfM showed that the reconstructed models were almost identical to the real objects.
(2) In Experiment 2, the measurement accuracy of A-SfM was higher than that of the traditional SfM.
(3) Experiment 3 showed that the detected results were generally consistent with the actual situation. The deformation detection based on the 3D reconstruction results of the surface before and after the disturbance confirmed that the proposed method effectively quantified the surface deformation of the rock mass.

Author Contributions

The asterisk indicates the corresponding author, and the first two authors contributed equally to this work. The vision sensor system was developed by C.M. and Y.B., under the supervision of Q.H. and L.H. Experiment planning, setup, and measurement of laboratory and field tests were conducted by J.T. Q.C. and J.Z. provided valuable insight in preparing this manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by “the Research and Innovation Team of Provincial Universities in Sichuan” (18TD0014), “Excellent Youth Foundation of Sichuan Scientific Committee” (2019JDJQ0037), “the Scientific Research Starting Project of SWPU” (2018QHZ025), and “the Sichuan Science and Technology Program” (2020JDRC0091).

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. The framework for 3D reconstruction of rock mass surface. FED, fast explicit diffusion.
Figure 2. The feature extraction using the AKAZE algorithm.
Figure 3. The schematic of sparse reconstruction for two images.
Figure 4. The four possible camera poses.
Figure 5. Examples of image samples.
Figure 6. The slope model built in a laboratory environment.
Figure 7. Layout of geodetic control points.
Figure 8. The 3D reconstruction results using the A-SfM (structure from motion) algorithm in Experiment 1.
Figure 9. Results of A-SfM reconstruction in Experiment 2: (a) the photograph distance was 2 m, and the mark points were common; (b) the photograph distance was 1 m, and the mark points were common; (c) the photograph distance was 1 m, and the mark points were concentric.
Figure 10. Reconstruction before and after the disturbance.
Figure 11. The reconstruction errors of the three groups of tests.
Figure 12. Results before and after the point cloud data cleaning.
Figure 13. Results before and after the geodetic control point registration: (a) the position and distribution of data before the geodetic control point registration; (b) the data after the geodetic control point registration.
Figure 14. The results after iterative closest point (ICP) registration.
Figure 15. Rock mass surface deformation detection.
Table 1. Average time cost and average calculation cost of five types of operators.

| Algorithm Type | SIFT | SURF | ORB | AKAZE | BRISK |
| --- | --- | --- | --- | --- | --- |
| Average time cost (ms) | 6877.32 | 1205.54 | 571.909 | 189.281 | 99,348.45 |
| Average calculation cost (%) | 75.20 | 74.48 | 65.78 | 66.49 | 77.01 |
Table 2. Average number of effective feature points and accuracy of five types of operators.

| Algorithm Type | Number of Feature Points | Number of Interior Points | Number of Matching Points | Efficiency of Feature Points (%) | Matching Accuracy (%) |
| --- | --- | --- | --- | --- | --- |
| SIFT | 107,326 | 44,211 | 48,756 | 45.43 | 90.68 |
| SURF | 30,259 | 9370 | 11,032 | 36.46 | 84.93 |
| ORB | 15,075 | 4224 | 5022 | 33.31 | 84.10 |
| AKAZE | 31,685 | 14,553 | 15,971 | 50.40 | 91.12 |
| BRISK | 131,602 | 55,573 | 61,584 | 46.80 | 90.24 |
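The two percentage columns in Table 2 follow directly from the raw counts: the efficiency of feature points is the share of detected features that end up matched, and the matching accuracy is the share of matches retained as interior (inlier) points. A minimal sketch of this bookkeeping (the function names are illustrative, not from the paper):

```python
def feature_efficiency(n_features: int, n_matches: int) -> float:
    """Percentage of detected feature points that yield a match."""
    return 100.0 * n_matches / n_features

def matching_accuracy(n_interior: int, n_matches: int) -> float:
    """Percentage of matches retained as interior (inlier) points."""
    return 100.0 * n_interior / n_matches

# SIFT row of Table 2: 107,326 features, 44,211 interior, 48,756 matching points
print(round(feature_efficiency(107326, 48756), 2))  # 45.43
print(round(matching_accuracy(44211, 48756), 2))    # 90.68
```

On these two ratios AKAZE leads all five detectors (50.40% and 91.12%), which is the basis for selecting it in the improved pipeline.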
Table 3. The related technical specifications of sensors for image acquisition.

| Sensors | Effective Pixels (MP) | Resolution (Pixels) | Focal Length (mm) | Types of Sensors | Weight |
| --- | --- | --- | --- | --- | --- |
| iPhone 6 Plus | 7.99 | 2449 × 3264 | 29 | Complementary metal oxide semiconductor (CMOS) | 172 g |
| iPhone 6s | - | 2449 × 3265 | 29 | CMOS | 143 g |
| iPhone XR | - | 4032 × 3024 | 26 | CMOS | 194 g |
| Panasonic Lumix LX5 | 9.52 | 2520 × 3776 | 24~90 | Charge coupled device (CCD) | 271 g |
| Panasonic Lumix ZX20 | 14.1 | 3240 × 4230 | 24~480 | CMOS | 204 g |
| Canon EOS 7D | 17.92 | 3456 × 5184 | 29~216 | CMOS | 820 g |
| Acorn Trailcam 5310 | 5.0~12.0 | 4000 × 3000 | 6 | CMOS | 245 g |
| Nikon D750 | 24.3 | 6016 × 4016 | 24~120 | CMOS | 750 g |
| Nikon D3100 | 14.2 | 4608 × 3072 | 18~200 | CMOS | 455 g |
Table 4. Binocular vision measurements from group 1.

| Frames | u (Left) | v (Left) | u (Right) | v (Right) | x | y | z | Physical Length (mm) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 319.264 | 194.168 | 253.782 | 183.666 | 72.4124 | −28.5082 | 2129.84 | 92.69611 |
| 2 | 389.689 | 194.786 | 323.589 | 181.813 | 165.042 | −28.4272 | 2126.33 | 101.072 |
| 3 | 455.172 | 208.376 | 387.218 | 194.786 | 244.194 | −9.80191 | 2066.3 | 98.59255 |
| 4 | 387.836 | 231.234 | 319.882 | 217.643 | 155.528 | 18.482 | 2033.76 | 99.2226 |
| 5 | 319.882 | 242.353 | 251.311 | 230.616 | 68.3561 | 32.1841 | 1988.39 | 153.9743 |
Table 5. Binocular vision measurements from group 2.

| Frames | u (Left) | v (Left) | u (Right) | v (Right) | x | y | z | Physical Length (mm) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 382.276 | 223.82 | 296.407 | 213.936 | 36.4941 | 39.9201 | 1470.81 | 117.3779 |
| 2 | 304.438 | 259.651 | 215.481 | 249.149 | 33.6499 | 88.4536 | 1377.29 | 105.402 |
| 3 | 303.82 | 320.191 | 210.539 | 307.836 | 178.935 | 87.0295 | 1392.37 | 146.0726 |
| 4 | 471.851 | 317.102 | 377.952 | 302.276 | 175.729 | 41.2305 | 1482.06 | 100.7577 |
| 5 | 455.79 | 260.268 | 366.214 | 247.913 | 113.495 | 8.54642 | 1553.66 | 100.3388 |
Table 6. Binocular vision measurements from group 3.

| Frames | u (Left) | v (Left) | u (Right) | v (Right) | x | y | z | Physical Length (mm) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 272.315 | 270.77 | 174.708 | 260.886 | 6.48721 | 44.9195 | 1320.11 | 160.9808 |
| 2 | 455.172 | 245.442 | 358.183 | 231.851 | 160.852 | 24.7255 | 1361.08 | 110.4423 |
| 3 | 458.261 | 311.542 | 355.712 | 297.952 | 151.693 | 74.9193 | 1263.13 | 152.3925 |
| 4 | 285.905 | 356.639 | 180.886 | 344.902 | 113.495 | 8.54642 | 1553.66 | 134.0647 |
Table 7. The scaling factor for each group. SfM, structure from motion.

| Groups | Physical Length (mm) | Pixel Length, SfM (pixels) | Pixel Length, A-SfM (pixels) | Scaling Factor, SfM | Scaling Factor, A-SfM |
| --- | --- | --- | --- | --- | --- |
| Experiment 1 | 40 | 0.203489 | 0.202819 | 196.5708 | 197.2201 |
| Experiment 2 | 40 | 0.16536 | 0.165183 | 241.8965 | 242.1563 |
| Experiment 3 | 50 | 1.04286 | 1.039834 | 47.94507 | 48.0846 |
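The scaling factor in Table 7 is simply the known physical length of the reference distance divided by its length in the reconstructed model; multiplying any reconstructed distance by this factor converts it to millimetres. A minimal sketch (function names are illustrative, not from the paper):

```python
def scaling_factor(physical_mm: float, model_length: float) -> float:
    """Millimetres per model unit, from a reference distance of known length."""
    return physical_mm / model_length

def to_physical(model_length: float, scale: float) -> float:
    """Convert a reconstructed model distance to millimetres."""
    return model_length * scale

# Experiment 1 in Table 7: a 40 mm reference measures 0.203489 units in the SfM model
scale = scaling_factor(40.0, 0.203489)
print(round(scale, 4))  # 196.5708
```

Tables 8 and 9 apply these per-experiment factors to convert the pixel lengths between mark points into physical lengths.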
Table 8. Pixel length and physical length between the mark points in reconstruction using SfM.

| No. | Group 1 Pixel Length (pixels) | Group 1 Physical Length (mm) | Group 2 Pixel Length (pixels) | Group 2 Physical Length (mm) | Group 3 Pixel Length (pixels) | Group 3 Physical Length (mm) |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 0.394639 | 77.5745126 | 0.435196 | 105.2724 | 3.15192 | 151.119 |
| 2 | 0.444712 | 87.4174034 | 0.391893 | 94.79753 | 2.1669 | 103.8922 |
| 3 | 0.43411 | 85.3333595 | 0.552053 | 133.5397 | 3.04844 | 146.1577 |
| 4 | 0.428673 | 84.264604 | 0.36491 | 88.27044 | 2.60767 | 125.0249 |
| 5 | 0.709373 | 139.442034 | 0.367464 | 88.88824 | - | - |
Table 9. Pixel length and physical length between the mark points in reconstruction using A-SfM.

| No. | Group 1 Pixel Length (pixels) | Group 1 Physical Length (mm) | Group 2 Pixel Length (pixels) | Group 2 Physical Length (mm) | Group 3 Pixel Length (pixels) | Group 3 Physical Length (mm) |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 0.410428 | 80.94468 | 0.43693 | 105.8052 | 3.18461 | 153.1307 |
| 2 | 0.460962 | 90.91099 | 0.39595 | 95.8819 | 2.175576 | 104.6117 |
| 3 | 0.446506 | 88.05996 | 0.565952 | 137.0488 | 3.057618 | 147.0243 |
| 4 | 0.450515 | 88.85052 | 0.374568 | 90.70391 | 2.605403 | 125.2798 |
| 5 | 0.720824 | 142.161 | 0.373274 | 90.39073 | - | - |
Table 10. Results of comparing the two reconstructions with the binocular vision measurements.

| Groups | Binocular Vision (mm) | SfM (mm) | A-SfM (mm) | Error 1 (mm) | Error 2 (mm) | Accuracy Improvement (mm) |
| --- | --- | --- | --- | --- | --- | --- |
| Experiment 1 | 92.69611 | 77.57451 | 80.94468 | 15.1216 | 11.75143 | 3.370167 |
| | 101.072 | 87.4174 | 90.91099 | 13.65456 | 10.16098 | 3.493582 |
| | 98.59255 | 85.33336 | 88.05996 | 13.25919 | 10.53259 | 2.726596 |
| | 99.2226 | 84.2646 | 88.85052 | 14.958 | 10.37208 | 4.585919 |
| | 153.9743 | 139.442 | 142.161 | 14.53227 | 11.81327 | 2.719006 |
| Experiment 2 | 117.3779 | 105.2724 | 105.8052 | 12.10551 | 11.57264 | 0.532871 |
| | 105.402 | 94.79753 | 95.8819 | 10.60446 | 9.520093 | 1.084369 |
| | 146.0726 | 133.5397 | 137.0488 | 12.5329 | 9.023786 | 3.50911 |
| | 100.7577 | 88.27044 | 90.70391 | 12.4873 | 10.05383 | 2.433472 |
| | 100.3388 | 88.88824 | 90.39073 | 11.45058 | 9.94809 | 1.502491 |
| Experiment 3 | 160.9808 | 151.119 | 153.1307 | 9.86179 | 7.85013 | 2.01166 |
| | 110.4423 | 103.8922 | 104.6117 | 6.550145 | 5.830614 | 0.719531 |
| | 152.3925 | 146.1577 | 147.0243 | 6.234797 | 5.368136 | 0.866661 |
| | 134.0647 | 125.0249 | 125.2798 | 9.039812 | 8.784983 | 0.254829 |

Share and Cite

MDPI and ACS Style

Hu, Q.; Ma, C.; Bai, Y.; He, L.; Tan, J.; Cai, Q.; Zeng, J. A Rapid Method of the Rock Mass Surface Reconstruction for Surface Deformation Detection at Close Range. Sensors 2020, 20, 5371. https://doi.org/10.3390/s20185371