Article

Single-Camera Three-Dimensional Digital Image Correlation with Enhanced Accuracy Based on Four-View Imaging

Department of Engineering Mechanics, School of Civil Engineering, Southeast University, Nanjing 211189, China
* Authors to whom correspondence should be addressed.
Materials 2023, 16(7), 2726; https://doi.org/10.3390/ma16072726
Submission received: 20 February 2023 / Revised: 26 March 2023 / Accepted: 27 March 2023 / Published: 29 March 2023

Abstract

Owing to the advantages of cost-effectiveness, compactness, and the avoidance of complicated camera synchronization, single-camera three-dimensional (3D) digital image correlation (DIC) techniques have gained increasing attention for deformation measurement of materials and structures. In the traditional single-camera 3D-DIC system, the left- and right-view images can be recorded by a single camera using a diffraction grating, a bi-prism, or a set of planar mirrors. To further improve the measurement accuracy of single-camera 3D-DIC, this paper introduces a single-camera four-view imaging technique in which a pyramidal prism is installed in front of the camera. The 3D reconstruction of the measured points before and after deformation is realized with eight governing equations induced by the four views, and the strong geometric constraints of the four views help to improve the measurement accuracy. A static experiment, a rigid body translation experiment, and a four-point bending experiment show that the proposed method achieves higher measurement accuracy than dual-view single-camera 3D-DIC techniques and has advantages in reducing both random error and systematic error.

1. Introduction

Three-dimensional (3D) digital image correlation (DIC) is now a standard technique used to determine the mechanical properties of materials and structures [1,2]. In terms of the number of cameras used, the current 3D-DIC can be divided into traditional dual-camera 3D-DIC [1], single-camera 3D-DIC [3], and multi-camera 3D-DIC [4]. Owing to the advantages of cost-effectiveness, compactness, and the avoidance of complicated camera synchronization, single-camera 3D-DIC techniques have attracted increasing attention for deformation measurement. Pankow et al. developed a single-lens 3D-DIC system using a single camera and a series of mirrors and they applied the system for high-speed out-of-plane displacement measurements [5]. Genovese et al. presented a single-camera pseudo-stereo system using a bi-prism in front of the camera lens to split the scene into two equivalent lateral stereo views in the two halves of the sensor [6,7]. Xia et al. developed a diffraction-assisted image correlation for 3D displacement measurement using a single camera and 2D-DIC algorithm [8,9]. The color separation-based single-camera 3D-DIC using the 3CCD color camera or an industrial color CCD camera has also been developed [10,11]. In these single-camera 3D-DIC techniques, left and right-view images of the sample can be obtained directly with a single camera. After camera calibration, temporal matching, and stereo matching, the morphological information and 3D displacement of the specimens can be measured using a single camera. At present, the single-camera 3D-DIC techniques have been used in many applications, including vibration modal measurement [12], impact deformation measurement [13], video extensometer measurement [14], internal deformation measurement of pipelines [15], and single-event-camera based 3D trajectory measurement [16].
The single-camera 3D-DIC systems reported so far all use two views for stereo imaging, namely a left view and a right view. Compared to the two-view imaging technique, the four-view imaging technique has been proven to improve the accuracy of 3D reconstruction [17,18]. In the traditional four-view imaging-based stereo vision system, four cameras are used to capture images from four different views [17]. The use of multiple cameras can give rise to problems such as high cost and complicated camera synchronization. Combining the single-camera 3D-DIC technique with the four-view imaging technique therefore offers an effective way to improve the measurement accuracy without increasing the cost.
In this work, a single-camera four-view 3D-DIC technique is proposed for high-accuracy full-field deformation measurement to further enhance the measurement accuracy. By installing a pyramidal prism in front of the camera, the four different view images of the specimen can be easily obtained. The 3D reconstruction of the full-field object points before and after deformation is realized with eight governing equations induced by these four views, and the strong constraints of four views can help to improve the measurement accuracy. This paper focuses on improving the measurement accuracy of single-camera 3D-DIC for full-field displacement and strain measurement. Compared to the video extensometer [18], full-field deformation measurement has a very broad application prospect in the field of experimental mechanics.
The rest of the paper is organized as follows: The principles of the four-view imaging technique are described, and the governing equations for four-view 3D reconstruction are given in Section 2. In Section 3, to validate the effectiveness and accuracy of the proposed single-camera four-view 3D-DIC method, a static experiment, a rigid body translation experiment, and a four-point bending experiment were conducted, and the measurement results were analyzed and compared to the three-view and dual-view measurements. Conclusions are drawn in Section 4.

2. Methodology

Figure 1 shows the basic schematic diagram of the pyramidal prism imaging path. Through the refraction of the pyramidal prism, images of the specimen from four views can be obtained. If a camera is placed behind the pyramidal prism, the speckle image of the specimen from four views can be recorded for the single-camera 3D-DIC measurement, as shown in Figure 2. In detail, the initial field of view is divided into four equally sized sub-views. Ideally, the center of the measured object, the prism center, and the optical center of the camera are aligned, so that, neglecting device layout errors, the sensor is split exactly in half both vertically and horizontally.
Figure 3 shows the schematic diagram of the four-view stereo vision; the four virtual cameras represent shots from different views. The point $Q$ is the object point to be measured, and $Q_1$, $Q_2$, $Q_3$, and $Q_4$ are its image points projected from the four different views. With lens distortion removed using the distortion parameters obtained from the four-view calibration, the projection equations can be written as follows:
$$
\left\{
\begin{aligned}
x_i &= f_x^i \frac{r_{11}^i X_W + r_{12}^i Y_W + r_{13}^i Z_W + t_x^i}{r_{31}^i X_W + r_{32}^i Y_W + r_{33}^i Z_W + t_z^i} + f_s^i \frac{r_{21}^i X_W + r_{22}^i Y_W + r_{23}^i Z_W + t_y^i}{r_{31}^i X_W + r_{32}^i Y_W + r_{33}^i Z_W + t_z^i} + c_x^i \\
y_i &= f_y^i \frac{r_{21}^i X_W + r_{22}^i Y_W + r_{23}^i Z_W + t_y^i}{r_{31}^i X_W + r_{32}^i Y_W + r_{33}^i Z_W + t_z^i} + c_y^i
\end{aligned}
\right. \qquad i = 1, 2, 3, 4 \tag{1}
$$
where $(X_W, Y_W, Z_W)$ are the world coordinates of point $Q$; $(x_i, y_i)$, $i = 1, 2, 3, 4$, are the image coordinates of points $Q_1$, $Q_2$, $Q_3$, and $Q_4$; and $c_x^i$, $c_y^i$, $f_x^i$, $f_y^i$, $f_s^i$ are the intrinsic parameters of the camera for the $i$-th view: $(c_x^i, c_y^i)$ is the principal point, i.e., the intersection of the camera optical axis with the sensor, $(f_x^i, f_y^i)$ is the equivalent focal length, and $f_s^i$ represents the skew of the camera sensor. $[t_x^i, t_y^i, t_z^i]^T$ is the translation vector, and $r_{11}^i$ to $r_{33}^i$ are the entries of the rotation matrix from the upper-left view coordinate system to the $i$-th view coordinate system; this matrix can be obtained from a rotation vector using Rodrigues' transformation formula. The superscript $i$ denotes the $i$-th view.
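As a concrete illustration of the projection model in Equation (1), the sketch below projects a world point through one virtual camera. The intrinsic and extrinsic values are made-up examples, not the paper's calibration results.

```python
import numpy as np

def project(XW, K, R, t):
    """Project world point XW (3,) into a view following Eq. (1).
    K = [[fx, fs, cx], [0, fy, cy], [0, 0, 1]]; R (3,3); t (3,)."""
    Xc = R @ XW + t        # world frame -> i-th view camera frame
    x = K @ Xc             # pinhole projection with skew
    return x[:2] / x[2]    # image coordinates (x_i, y_i)

# Illustrative intrinsics (zero skew) and the reference pose of view 1.
K = np.array([[5000.0, 0.0, 1024.0],
              [0.0, 5000.0, 1024.0],
              [0.0, 0.0, 1.0]])
Q = np.array([10.0, 5.0, 300.0])          # a world point (mm)
x1 = project(Q, K, np.eye(3), np.zeros(3))
```

For the other three views the same function is called with that view's rotation matrix and translation vector.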
According to Equation (1), the 3D coordinates of the object point can be solved by the least-squares method; the solving equation is as follows:
$$
\begin{bmatrix}
x_i r_{31}^i - c_x^i r_{31}^i - f_s^i r_{21}^i - f_x^i r_{11}^i &
x_i r_{32}^i - c_x^i r_{32}^i - f_s^i r_{22}^i - f_x^i r_{12}^i &
x_i r_{33}^i - c_x^i r_{33}^i - f_s^i r_{23}^i - f_x^i r_{13}^i \\
y_i r_{31}^i - c_y^i r_{31}^i - f_y^i r_{21}^i &
y_i r_{32}^i - c_y^i r_{32}^i - f_y^i r_{22}^i &
y_i r_{33}^i - c_y^i r_{33}^i - f_y^i r_{23}^i
\end{bmatrix}
\begin{bmatrix} X_W \\ Y_W \\ Z_W \end{bmatrix}
=
\begin{bmatrix}
f_x^i t_x^i + f_s^i t_y^i + c_x^i t_z^i - x_i t_z^i \\
f_y^i t_y^i + c_y^i t_z^i - y_i t_z^i
\end{bmatrix}, \qquad i = 1, 2, 3, 4 \tag{2}
$$
It can be seen from Equation (2) that a total of eight equations are available to solve for the three unknown 3D coordinates of the object point, whose position is determined by the intersection of four rays. Compared to the traditional dual-view single-camera 3D-DIC, there are two additional ray constraints; the four extra equations they contribute reduce the uncertainty of the 3D reconstruction [14].
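The stacked least-squares solve of Equation (2) can be sketched as follows; the four virtual cameras in the demo are illustrative (simple rotations about the y-axis), not the calibrated geometry of the paper.

```python
import numpy as np

def triangulate(views, pts):
    """Solve Eq. (2): each view contributes two rows of a (2N x 3)
    linear system A [XW, YW, ZW]^T = b, solved by least squares."""
    A, b = [], []
    for (K, R, t), (x, y) in zip(views, pts):
        fx, fs, cx = K[0]
        fy, cy = K[1, 1], K[1, 2]
        r1, r2, r3 = R                      # rows of the rotation matrix
        tx, ty, tz = t
        A.append(x * r3 - cx * r3 - fs * r2 - fx * r1)
        A.append(y * r3 - cy * r3 - fy * r2)
        b.append(fx * tx + fs * ty + cx * tz - x * tz)
        b.append(fy * ty + cy * tz - y * tz)
    X, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return X

def rot_y(deg):
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def project(K, R, t, Q):
    x = K @ (R @ Q + t)
    return x[:2] / x[2]

# Four illustrative virtual cameras; view 1 is the reference frame.
K = np.array([[5000.0, 0.0, 512.0], [0.0, 5000.0, 512.0], [0.0, 0.0, 1.0]])
views = [(K, rot_y(a), np.array([tx, 0.0, 5.0 * i]))
         for i, (a, tx) in enumerate([(0.0, 0.0), (8.0, -40.0),
                                      (-8.0, 40.0), (16.0, -80.0)])]
Q = np.array([12.0, -7.0, 300.0])
pts = [project(Ki, Ri, ti, Q) for (Ki, Ri, ti) in views]
Q_rec = triangulate(views, pts)   # recovers Q from the four projections
```

Passing only the first two entries of `views` and `pts` gives the dual-view case with four equations instead of eight, i.e., the weaker constraint set discussed above.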
By calibrating the four-view single-camera 3D-DIC system, the intrinsic parameters and the extrinsic parameters relating the second, third, and fourth views to the first view can be obtained. A set of photos of a planar checkerboard calibration board at different poses was captured simultaneously by the four virtual cameras, and the intrinsic and extrinsic parameters were then calculated using Zhang's calibration method [19]. An example of the calibration images is shown in Figure 4.
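Since the four virtual cameras share one physical frame, the first processing step is simply quadrant splitting; a minimal sketch (the downstream OpenCV calls in the comments are one common way to realize Zhang's method, not necessarily the authors' implementation):

```python
import numpy as np

def split_views(frame):
    """Split one pyramidal-prism frame into its four equally sized
    sub-views: upper-left, upper-right, lower-left, lower-right."""
    h, w = frame.shape[:2]
    h2, w2 = h // 2, w // 2
    return [frame[:h2, :w2], frame[:h2, w2:],
            frame[h2:, :w2], frame[h2:, w2:]]

# Each sub-view stream can then be calibrated like an independent
# camera, e.g. with OpenCV: cv2.findChessboardCorners on every
# calibration frame, cv2.calibrateCamera per view (Zhang's method),
# and cv2.stereoCalibrate to obtain the extrinsics of views 2-4
# relative to view 1.
```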
Figure 5 demonstrates the stereo image matching among the four virtual cameras and the sequence image matching within each camera. Once image matching and calibration are completed, the morphological information of the object can be obtained, as shown in Figure 3. The 3D displacements can then be calculated from the 3D coordinates of the same point before and after deformation. Based on the reconstructed 3D shape and the calculated 3D displacement, the local surface strains can be computed. Once the initial topology of the specimen has been measured, the normal direction of each point on it can be defined. Using a least-squares plane to fit the data array around each point, the local surface at each point is defined. From this local surface and the X direction of the Lagrangian world coordinate system after 3D reconstruction, a local coordinate system is defined. All three displacement components of each surface point are then projected into this local Cartesian coordinate system, and the strain is calculated using the least-squares method.
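The two least-squares fits described above can be sketched as follows: a local plane fit giving the surface normal, and a local linear fit of a displacement component whose x-gradient is the small-strain component; window sizes and field values here are illustrative.

```python
import numpy as np

def plane_normal(pts):
    """Least-squares plane fit z = a*x + b*y + c over a local data
    array (N, 3); returns the unit normal of the fitted plane."""
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    (a, b, _), *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    nvec = np.array([-a, -b, 1.0])
    return nvec / np.linalg.norm(nvec)

def local_exx(x, y, u):
    """Least-squares fit u ~ u0 + (du/dx)*x + (du/dy)*y over a local
    window; the small-strain normal component exx is du/dx."""
    A = np.column_stack([np.ones_like(x), x, y])
    coef, *_ = np.linalg.lstsq(A, u, rcond=None)
    return coef[1]
```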

3. Experimental Results

To validate the effectiveness and accuracy of the proposed single-camera four-view 3D-DIC, a static experiment, a rigid body translation experiment, and a four-point bending experiment were conducted. In the static experiment and the translation experiment, the displacement accuracy in both the in-plane and out-of-plane directions was verified. In the four-point bending experiment, the strain field was measured and compared to the strain gauge technique.
As shown in Figure 6, the single-camera four-view 3D-DIC system, consisting of a single camera and a pyramidal prism, was mounted on a slide rail (Daheng Optics, GCM-72) on a heavy tripod (FOBA, ASLAI). It is worth mentioning that poor mechanical stability of the setup can affect the final camera calibration results, as can lens shake and motion blur in the calibration images [20,21]; the heavy tripod and the slide rail must be highly stable to ensure the accuracy of the experiments. The geometry of the pyramidal prism is shown in Figure 7, and the distance between the pyramidal prism and the lens was 105 mm. The detailed system configuration is listed in Table 1, and the extrinsic parameters between the first view and the other views are shown in Table 2. To reduce the effect of ambient light and refraction, a blue light source and a narrow-band filter were used.
As shown in Figure 7, the designed prism angle is 59.35°. In calculating the geometric sizes of the prism, the field of view size, the prism diameter, and the distance between the object and the prism are predetermined, from which the angle can be calculated. The presented prism angle is not the only possible configuration: the smaller the prism angle, the more strongly the prism converges light and therefore the larger the field of view. With all other dimensions unchanged, a larger field of view can be obtained by reducing the prism angle appropriately. We recommend an angle no larger than the designed 59.35°, otherwise the field of view becomes too small. However, the angle cannot be reduced below about 40°, as total internal reflection would then occur in the prism.
Figure 8 shows the geometric optical path diagram of the pyramidal prism. The relation between the geometric parameters can be calculated according to Equations (3)–(6),
$$\tan\alpha_2 = \frac{s}{4v} \tag{3}$$
$$\frac{\sin\alpha_2}{\sin\alpha_1} = n \tag{4}$$
$$\frac{\sin\alpha}{\sin\left(90^\circ - \beta - \alpha_1\right)} = n \tag{5}$$
$$\varphi = 2\left(\alpha - \left(90^\circ - \beta\right)\right) \tag{6}$$
where $s$ is the size of the imaging chip, $v$ is the distance between the lens and the camera sensor, $n$ is the refractive index, $\varphi$ is the rotation angle, and $\alpha$, $\alpha_1$, $\alpha_2$ are the angles of refraction. In the present setup, $s = 11.264$ mm, $v \approx 20$ mm, $n = 1.5168$, and $\beta = 59.35°$, so the rotation angle $\varphi$ is close to 16°, which is consistent with the calibrated value in Table 2.
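A quick numerical check of Equations (3)–(6) can be sketched as below; the chained-refraction reading of Equation (5) is our interpretation of the optical path, and since $v$ is only approximate the computed rotation angle should be taken as an order-of-magnitude check against the calibrated ≈16°.

```python
import math

# Nominal values from the text: sensor size (mm), image distance
# (mm, approximate), refractive index, prism angle (degrees).
s, v, n, beta = 11.264, 20.0, 1.5168, 59.35

a2 = math.degrees(math.atan(s / (4 * v)))                          # Eq. (3)
a1 = math.degrees(math.asin(math.sin(math.radians(a2)) / n))       # Eq. (4)
a = math.degrees(math.asin(n * math.sin(math.radians(90 - beta - a1))))  # Eq. (5)
phi = 2 * (a - (90 - beta))                                        # Eq. (6)
```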
Furthermore, the distance between the prism and the lens set and the distance between the prism and the object can also be roughly estimated according to Equations (7) and (8):
$$\frac{s}{2v} \approx \frac{d}{l_1 + l_2} \tag{7}$$
$$\frac{l_1}{l_2} \approx \frac{2d\sin\beta}{b} \tag{8}$$
where $d$ is the field of view, $l_1$ is the distance between the prism and the object, $l_2$ is the distance between the prism and the lens set, and $b$ is the prism diameter. In the present setup, $d = 100$ mm, so $l_1 + l_2$ is about 300 mm, with $l_1 \approx 200$ mm and $l_2 \approx 100$ mm.
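Equation (7) gives the rough working-distance estimate directly; with the nominal numbers the sketch below yields roughly 350 mm, the same order as the stated ≈300 mm (the discrepancy is within the roughness of $v \approx 20$ mm).

```python
# Rough working-distance estimate from Eq. (7): s/(2v) = d/(l1 + l2).
s, v, d = 11.264, 20.0, 100.0   # sensor size (mm), image distance (mm), field of view (mm)
total = 2 * v * d / s           # l1 + l2 in mm
```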
A conventional five-parameter distortion model, consisting of three radial and two tangential distortion parameters, is used in this work. The prism and the camera lens are treated as a single lens set, and we assume that the conventional distortion model can correct the overall distortion. The distortion calibration plot of the single-camera four-view 3D-DIC system is shown in Figure 9.
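The five-parameter (Brown-type) model can be sketched as follows on normalized image coordinates; the coefficient values in the test are arbitrary examples, not the calibrated values of Figure 9.

```python
def distort(xn, yn, k1, k2, k3, p1, p2):
    """Apply the conventional five-parameter distortion model
    (three radial terms k1-k3, two tangential terms p1-p2) to
    normalized image coordinates (xn, yn)."""
    r2 = xn**2 + yn**2
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = xn * radial + 2 * p1 * xn * yn + p2 * (r2 + 2 * xn**2)
    yd = yn * radial + p1 * (r2 + 2 * yn**2) + 2 * p2 * xn * yn
    return xd, yd
```

Calibration estimates the five coefficients alongside the intrinsics; the inverse mapping (undistortion) is then applied to the speckle images before the DIC matching.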
The static experiment configuration is shown in Figure 6a. Artificial speckles were generated on the surface of the specimen in Figure 6b, and the working distance between the specimen and the prism was 200 mm. Twenty-five images were captured by the single-camera four-view system and processed by DIC. In detail, VIEW 1 and VIEW 2 were used to construct a conventional two-view DIC, and all four views were combined to provide a four-view DIC measurement. The standard deviation errors of displacement in the different directions (DX, DY, and DZ) are shown in Figure 10; here, the standard deviation represents the random error. Four-view DIC shows a distinct improvement: compared to the two-view measurement, the in-plane displacement random error is reduced by 0.0004 mm, and the out-of-plane displacement random error is reduced from 0.0035 mm to 0.002 mm. The random error is thus clearly reduced by the strong geometric constraints of the four views.
The translation experiment configuration is shown in Figure 11a. A translation stage (Newport, MFA-CC) controlled by a motion controller (Newport, SMC100) drove the specimen to move in a set direction. The specimen and the translation stage are shown in Figure 11b. Speckle patterns on the specimen were generated by the water transfer printing (WTP) technique [22], and the diameter of each speckle was 1 mm. In the experiment, the translation stage moved 0.5 mm per step for a total of eight steps. At each translation, twenty images were captured and averaged to reduce the influence of environmental disturbances. The dual-view, three-view, and four-view single-camera 3D-DIC were then used to calculate the displacement field.
For comparison, the full-field displacements at each translation were averaged to calculate the absolute bias error, as shown in Figure 12. It can be seen in Figure 12a that the in-plane displacement error of four-view 3D-DIC is the smallest: the average error over all translations is about 0.002 mm and remains relatively stable. The three-view 3D-DIC shows lower accuracy, with an average error of about 0.01 mm. Among the dual-view combinations, VIEW14 has the smallest systematic error, yet its absolute bias error still reaches 0.02 mm. For out-of-plane displacement, four-view 3D-DIC again achieves the highest accuracy, with an average error of about 0.003 mm, while the errors of the three-view and dual-view 3D-DIC rise to 0.009 mm and 0.07 mm, respectively. The translation experiment effectively demonstrates the high accuracy and robustness of the single-camera four-view 3D-DIC. The nearly order-of-magnitude reduction in displacement error relative to the two-view system is mainly attributed to the sensitivity of the fewer-view systems to inaccurate calibration: Figure 12 shows a noticeable increase in error with increasing translation in DX and DZ for the two- and three-view systems, whereas the error of the four-view system remains relatively stable. The impact of calibration on the different view systems will be further analyzed in our future work.
Figure 13a shows the setup of the four-point bending experiment, and the specimen is shown in Figure 13b. The four-point bending beam was 150 mm long and 20 mm wide. Speckle patterns were also made via the WTP technique, with a speckle diameter of 0.8 mm. As shown in Figure 13c, strain gauges were attached to the middle edges of the beam to provide reference strain values. The world coordinate system after 3D reconstruction was transformed to the front surface of the specimen, with the X and Y directions coinciding with the horizontal and vertical directions of the beam, respectively. Since the size of the gauges was known, the frontal region corresponding to the strain gauge grid area could therefore be selected precisely for comparison. The specimen was loaded at ten different load levels. At each load, twenty images were captured and averaged to reduce the influence of environmental disturbances. The four-view speckle image is shown in Figure 14a. The dual-view, three-view, and four-view single-camera 3D-DIC were then used to calculate the strain field. The strain field in the pure bending section obtained by the four-view 3D-DIC is presented in Figure 14b; the direction of the strain component Exx is the horizontal direction of the specimen. Because subsets are used in the displacement and strain calculations, a band of half the subset size along the upper and lower boundaries of the specimen cannot be evaluated. It should be noted that, although the strain gauges were placed at the specimen edge, there was still some distance between the grid and the edge, and the strain reported by the strain instrument is actually the average strain over the central grid of the gauge. As a result, the strain of the frontal region corresponding to the strain gauge grid could still be fully measured.
For comparison, the average strain of the speckle region corresponding to the strain gauge was extracted. The strains obtained by 3D-DIC and by the strain gauge are shown in Figure 15a, and the absolute errors between them are shown in Figure 15b. Note that only the strain gauge at the lower edge, i.e., on the tensile side, is analyzed in Figure 15. It can be seen that the four-view 3D-DIC consistently achieves the highest strain accuracy (error below 20 με) and that its error remains relatively stable under different loads, demonstrating the precision and stability of the single-camera four-view 3D-DIC system. In Figure 15b, the strain error of three-view 3D-DIC is about 50 με, while the dual-view 3D-DIC leads to larger errors: even VIEW14, the dual-view combination with the smallest systematic error, reaches an absolute error of 100 με. The four-point bending experiment effectively proves the high accuracy and robustness of the single-camera four-view 3D-DIC.

4. Discussion and Conclusions

Compared to traditional binocular stereo systems, the pseudo-stereo image captured by single-camera 3D-DIC techniques typically suffers a considerable resolution reduction. To demonstrate this clearly, a resolution comparison of binocular stereo imaging, traditional single-camera 3D-DIC based on two-view imaging, and the proposed accuracy-enhanced single-camera 3D-DIC based on four-view imaging is shown in Figure 16. When the measured object is rectangular, traditional two-view imaging offers the same resolution as the binocular system, while the four-view system loses resolution; in this case, the four-view geometry improves the measurement accuracy while the reduced resolution degrades it, so the two effects partially offset each other. When the object is square, the single-camera two-view and four-view imaging systems both show reduced resolution, but there is no resolution difference between them. Our aim is to compare the proposed method with two-view single-camera 3D-DIC techniques, not with traditional binocular stereo systems; in that comparison, the proposed four-view system offers higher accuracy with no additional resolution loss when measuring square objects. Resolution reduction is an inherent drawback of single-camera 3D-DIC techniques, and the central region of the lens and camera sensor, which has the best image quality, is underutilized.
Despite these drawbacks, single-camera 3D-DIC techniques have gained increasing attention for deformation measurement due to their outstanding advantages of cost-effectiveness, compactness, and avoidance of complicated camera synchronization. In terms of accuracy, precision, and error reduction, the proposed setup does not have obvious advantages over the previously reported four-camera setup [17]. However, compared to the four-camera system, the method in this work still has some significant advantages:
(1) Due to the compactness of the setup proposed in this work, the rigid connection between devices is easier to achieve than the four-camera system, and it is very important for measurement accuracy;
(2) Hard synchronization of four cameras requires additional synchronous trigger instruments, which greatly increases the complexity and cost of the system;
(3) In some cases where the experimental space is limited, the method in this work has the advantage of being more flexible, such as the experiment of observing experimental objects through a small window in a high-temperature box.
In summary, an accuracy-enhanced single-camera 3D-DIC technique based on four-view imaging is proposed. By installing a pyramidal prism in front of the camera, speckle images of the specimen from four different views can be obtained. The basic principles and system configuration are described in detail. Both the rigid body translation experiment and the four-point bending experiment show that the proposed method achieves higher measurement accuracy than the dual-view and three-view single-camera 3D-DIC techniques; in the four-point bending experiment, the absolute strain errors of the proposed four-view single-camera 3D-DIC are less than 20 με. In general, the setup is an interesting generalization of a camera coupled with Risley prism scanners [23,24,25]. Although the proposed method has some limitations, such as a decreased field of view and an increased computation cost, the applications of single-camera 3D-DIC will be greatly expanded as its accuracy improves.

Author Contributions

Conceptualization, X.S.; investigation, J.Q. and W.C.; methodology, X.S.; validation, J.Q. and W.C.; writing—original draft preparation, X.S. and W.C.; writing—review and editing, X.S.; supervision, X.S.; project administration, X.S.; funding acquisition, X.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (under grants 12272093 and 11902074).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Luo, P.F.; Chao, Y.J.; Sutton, M.A.; Petters, W.H. Accurate measurement of three-dimensional deformations in deformable and rigid bodies using computer vision. Exp. Mech. 1993, 33, 123–132. [Google Scholar] [CrossRef]
  2. Sutton, M.A.; Orteu, J.J.; Schreier, H. Image Correlation for Shape, Motion and Deformation Measurements; Springer: New York, NY, USA, 2009. [Google Scholar]
  3. Pan, B.; Yu, L.; Zhang, Q. Review of single-camera stereo-digital image correlation techniques for full-field 3D shape and deformation measurement. Sci. China Technol. Sci. 2018, 61, 2–20. [Google Scholar] [CrossRef] [Green Version]
  4. Orteu, J.J.; Bugarin, F.; Harvent, J.; Robert, L.; Velay, V. Multiple-camera instrumentation of a single point incremental forming process pilot for shape and 3D displacement measurements: Methodology and results. Exp. Mech. 2011, 51, 625–639. [Google Scholar] [CrossRef] [Green Version]
  5. Pankow, M.; Justusson, B.; Waas, A.M. Three-dimensional digital image correlation technique using single high-speed camera for measuring large out-of-plane displacements at high framing rates. Appl. Opt. 2010, 49, 3418–3427. [Google Scholar] [CrossRef]
  6. Genovese, K.; Casaletto, L.; Rayas, J.A.; Flore, V.; Martinez, A. Stereo-Digital Image Correlation (DIC) measurements with a single camera using a biprism. Opt. Lasers Eng. 2013, 51, 278–285. [Google Scholar] [CrossRef]
  7. Wu, L.; Zhu, J.; Xie, H.; Zhou, M. Single-lens 3D digital image correlation system based on a bilateral telecentric lens and a bi-prism: Validation and application. Appl. Opt. 2015, 54, 7842–7850. [Google Scholar] [CrossRef]
  8. Xia, S.; Gdoutou, A.; Ravichandran, G. Diffraction assisted image correlation: A novel method for measuring three-dimensional deformation using two-dimensional digital image correlation. Exp. Mech. 2013, 53, 755–765. [Google Scholar] [CrossRef]
  9. Pan, B.; Wang, Q. Single-camera microscopic stereo digital image correlation using a diffraction grating. Opt. Express 2013, 21, 25056–25068. [Google Scholar] [CrossRef] [PubMed]
  10. Yu, L.; Pan, B. Color stereo-digital image correlation method using a single 3CCD color camera. Exp. Mech. 2017, 57, 649–657. [Google Scholar] [CrossRef]
  11. Li, J.; Dan, X.; Xu, W.; Wang, Y.; Yang, G.; Yang, L. 3D digital image correlation using single color camera pseudo-stereo system. Opt. Laser Technol. 2017, 95, 1–7. [Google Scholar] [CrossRef]
  12. Yu, L.; Pan, B. Single-camera high-speed stereo-digital image correlation for full-field vibration measurement. Mech. Syst. Signal Process. 2017, 94, 374–383. [Google Scholar] [CrossRef]
  13. Pan, B.; Yu, L.; Yang, Y.; Song, W.; Guo, L. Full-field transient 3D deformation measurement of 3D braided composite panels during ballistic impact using single-camera high-speed stereo-digital image correlation. Compos. Struct. 2016, 157, 25–32. [Google Scholar] [CrossRef]
  14. Shao, X.; Eisa, M.M.; Chen, Z.; Dong, S.; He, X. Self-calibration single-lens 3D video extensometer for high-accuracy and real-time strain measurement. Opt. Express 2016, 24, 30124–30138. [Google Scholar] [CrossRef] [PubMed]
  15. Yuan, T.; Dai, X.; Shao, X.; Zu, Z.; Cheng, X.; Yang, F.; He, X. Dual-biprism-based digital image correlation for defect detection of pipelines. Opt. Eng. 2019, 58, 014107. [Google Scholar] [CrossRef]
  16. Gao, Z.; Su, Y.; Zhang, Q. Single-event-camera-based 3D trajectory measurement method for high-speed moving targets. Chin. Opt. Lett. 2022, 20, 061101. [Google Scholar] [CrossRef]
  17. Zhu, C.; Shao, X.; Liu, C.; He, X. Accuracy analysis of an orthogonally arranged four-camera 3D digital image correlation system. Appl. Opt. 2019, 58, 6535–6544. [Google Scholar] [CrossRef]
  18. Wu, K.; Qu, J.; Shao, X.; He, X. Biaxial three-dimensional video extensometer based on single-camera four-view imaging. Acta Opt. Sin. 2022, 42, 1315001. [Google Scholar]
  19. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 22, 1330–1334. [Google Scholar] [CrossRef] [Green Version]
  20. Bravo-Medina, B.; Strojnik, M.; Garcia-Torales, G.; Torres-Ortega, H.; Estrada-Marmolejo, R.; Beltrán-González, A.; Flores, J.L. Error compensation in a pointing system based on Risley prisms. Appl. Opt. 2017, 56, 2209–2216. [Google Scholar] [CrossRef]
  21. Ge, Y.; Liu, J.; Xue, F.; Guan, E.; Yan, W.; Zhao, Y. Effect of mechanical error on dual-wedge laser scanning system and error correction. Appl. Opt. 2018, 57, 6047–6054. [Google Scholar] [CrossRef]
  22. Chen, Z.; Quan, C.; Zhu, F.; He, X. A method to transfer speckle patterns for digital image correlation. Meas. Sci. Technol. 2015, 26, 095201. [Google Scholar] [CrossRef]
  23. Deng, Z.; Liu, X.; Zhao, Z. A cooperative camera surveillance method based on the principle of coarse-fine coupling boresight adjustment. Precis. Eng. 2020, 66, 96–109. [Google Scholar]
  24. Duma, V.F.; Dimb, A.L. Exact Scan Patterns of Rotational Risley Prisms Obtained with a Graphical Method: Multi-Parameter Analysis and Design. Appl. Sci. 2021, 11, 8451. [Google Scholar] [CrossRef]
  25. Liu, X.; Li, A. Multiview three-dimensional imaging using a Risley-prism-based spatially adaptive virtual camera field. Appl. Opt. 2022, 61, 3619–3629. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Basic schematic diagram of a pyramidal prism imaging path.
Figure 1. Basic schematic diagram of a pyramidal prism imaging path.
Materials 16 02726 g001
Figure 2. Speckle image of the specimen from four views.
Figure 2. Speckle image of the specimen from four views.
Materials 16 02726 g002
Figure 3. Schematic diagram of the four-view stereo vision.
Figure 3. Schematic diagram of the four-view stereo vision.
Materials 16 02726 g003
Figure 4. Calibration image of a chessboard calibrator from four views.
Figure 4. Calibration image of a chessboard calibrator from four views.
Materials 16 02726 g004
Figure 5. Four-view stereo vision matching and sequence image matching.
Figure 5. Four-view stereo vision matching and sequence image matching.
Materials 16 02726 g005
Figure 6. Static experiment for: (a) experiment setup; (b) specimen.
Figure 6. Static experiment for: (a) experiment setup; (b) specimen.
Materials 16 02726 g006
Figure 7. The geometry of the pyramidal prism.
Figure 7. The geometry of the pyramidal prism.
Materials 16 02726 g007
Figure 8. The geometric optical path diagram of the pyramidal prism.
Figure 8. The geometric optical path diagram of the pyramidal prism.
Materials 16 02726 g008
Figure 9. Distortion calibration plot of the single-camera four-view 3D-DIC system.
Figure 10. Comparison of displacement standard deviations in the static experiment between case VIEW-12 (results obtained from views 1 and 2) and case VIEW-1234 (results obtained from all four views): (a) DX; (b) DY; (c) DZ.
Figure 11. Translation experiment: (a) experimental setup; (b) specimen and translation stage.
Figure 12. Results and comparisons in the rigid body translation experiment: (a) absolute error of in-plane displacement; (b) absolute error of out-of-plane displacement.
Figure 13. Four-point bending experiment: (a) experimental setup; (b) specimen; (c) strain gauge.
Figure 14. (a) Speckle image of the four-point bending beam from four views in the experiment; (b) full-field strain εxx of the pure bending section.
Figure 15. Results and comparisons in the four-point bending experiment: (a) measured strain; (b) absolute error.
Figure 16. Resolution comparison of binocular stereo imaging, single-camera two-view, and four-view imaging for rectangular and square objects.
Table 1. System parameters for the experiments.

Camera                     IDS, uEye CP3370
Resolution/pixel size      2048 × 2048 pixels / 5.5 μm
Lens                       Kowa, 25 mm
Working distance           200 mm
Field of view              100 mm
Subset/step                27 × 27 pixels / 5 pixels
Illumination/wavelength    Single narrowband blue light / 450 nm
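As a rough consistency check on the parameters in Table 1, the lateral field of view can be estimated from the sensor size, focal length, and working distance with a thin-lens approximation. This is a sketch, not the authors' calibration: definitions of "working distance" vary (lens front vs. principal plane), so the estimate only roughly matches the ~100 mm field of view quoted.

```python
# Estimate the lateral field of view from Table 1's parameters using
# the thin-lens magnification m = f / (d - f), where d is the working
# distance. A rough sketch under stated assumptions, not the paper's method.

def approx_fov_mm(pixels: int, pixel_size_um: float,
                  focal_mm: float, working_dist_mm: float) -> float:
    sensor_mm = pixels * pixel_size_um / 1000.0      # sensor width in mm
    magnification = focal_mm / (working_dist_mm - focal_mm)
    return sensor_mm / magnification                 # object-side width in mm

fov = approx_fov_mm(2048, 5.5, 25.0, 200.0)
print(round(fov, 1))  # ≈ 78.8 mm, the same order as the ~100 mm quoted
```

The gap between the estimate and the quoted 100 mm is expected: the pyramidal prism lengthens the optical path, and the catalogue working distance is not the thin-lens object distance.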
Table 2. Extrinsic parameters between the first view and the other views.

External Parameters    View 1–2     View 1–3     View 1–4
Rotation-X             0.14°        15.77°       15.74°
Rotation-Y             −15.53°      0.65°        −15.89°
Rotation-Z             3.61°        0.99°        0.17°
Translation-X          −55.27 mm    2.85 mm      −57.03 mm
Translation-Y          −3.21 mm     −57.15 mm    −57.15 mm
Translation-Z          −6.98 mm     −1.74 mm     −6.12 mm
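The Euler angles and translations of Table 2 can be assembled into a 4×4 extrinsic matrix [R | t] relating view 1 to each of the other views. The sketch below uses the composition order R = Rz·Ry·Rx, which is an assumption for illustration; the paper does not state its rotation convention, and the function name `extrinsic` is hypothetical.

```python
import math

# Assemble a 4x4 extrinsic matrix [R | t] from Euler angles (degrees) and
# translations (mm), as listed in Table 2. The order R = Rz * Ry * Rx is an
# assumed convention, not one stated in the paper.

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def extrinsic(rx_deg, ry_deg, rz_deg, tx, ty, tz):
    rx, ry, rz = (math.radians(v) for v in (rx_deg, ry_deg, rz_deg))
    R = matmul(rot_z(rz), matmul(rot_y(ry), rot_x(rx)))
    return [R[0] + [tx], R[1] + [ty], R[2] + [tz], [0.0, 0.0, 0.0, 1.0]]

# View 1 -> View 2 parameters from Table 2
T_12 = extrinsic(0.14, -15.53, 3.61, -55.27, -3.21, -6.98)
```

With four such transforms, the eight projection equations mentioned in the abstract (two per view) over-determine each 3D point, which is the source of the reported accuracy gain.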
Shao, X.; Qu, J.; Chen, W. Single-Camera Three-Dimensional Digital Image Correlation with Enhanced Accuracy Based on Four-View Imaging. Materials 2023, 16, 2726. https://doi.org/10.3390/ma16072726