Article

Gradient-Based Automatic Exposure Control for Digital Image Correlation

1 School of Civil Engineering and Architecture, Guangzhou City Construction College, Guangzhou 510925, China
2 Key Laboratory of Earthquake Resistance, Earthquake Mitigation and Structural Safety, Ministry of Education, Guangzhou University, Guangzhou 510405, China
3 Guangdong Provincial Key Laboratory of Earthquake Engineering and Applied Technology, Guangzhou 510405, China
* Author to whom correspondence should be addressed.
Sustainability 2023, 15(2), 1149; https://doi.org/10.3390/su15021149
Submission received: 11 December 2022 / Revised: 1 January 2023 / Accepted: 4 January 2023 / Published: 7 January 2023
(This article belongs to the Special Issue Advances in Intelligent and Sustainable Mining)

Abstract

Digital image correlation (DIC) is widely used in material experiments, such as those on ores, and the quality of the speckle image directly affects the accuracy of the DIC calculation. This study aims to acquire high-quality speckle pattern images and to improve the calculation accuracy and stability. A gradient-based image quality metric was selected to evaluate image quality, and its validity was verified by a rigid-body experiment and a numerical experiment. Based on maximizing the image quality metric, an automatic exposure control algorithm and control procedure were proposed to obtain the optimal exposure time. Finally, nine sets of images with different poses and illuminations were captured, and the displacement and strain fields were calculated at the fixed exposure time and at the optimized exposure time. The results of the rigid-body motion experiments show that the data calculated at the optimized exposure time are smoother and less noisy and the error is smaller, which verifies the effectiveness of the exposure control procedure and its algorithm and improves the accuracy and stability of the DIC calculation.

1. Introduction

Digital image correlation (DIC) is a common optical measurement method for full-field deformation and strain and is widely used in material experiments, such as those on ores [1,2,3,4]. In DIC calculation, the quality of the speckle image directly affects the calculation accuracy [5,6]. In most practical measurements, uniform illumination of the tested object is required so that speckle images with a uniform background intensity can be captured under different deformation states. However, in practical engineering and experiments, especially multi-camera, outdoor or low-speed dynamic experiments, it is often more feasible and convenient to use sunlight or skylight than a light source with a specific wavelength. Furthermore, because of the large size of the measured object, the long duration of the experiment, the motion of the object, the partial occlusion of the light caused by the deformation of the object, etc., the illumination conditions may change, and this illumination variation directly affects the accuracy of the DIC calculation [7,8,9]. In addition, the response of a CCD camera is generally nonlinear because of its response function and vignetting characteristics, so the pixel values obtained from the measured object differ at different positions and light intensities, which also produces a nonuniform background intensity distribution in the captured speckle image. Therefore, it is necessary to study the influence of illumination variation and to correct the related influencing factors in order to improve the accuracy and stability of the DIC algorithm.
Recently, a considerable body of literature has emerged on overcoming the difficulties caused by illumination variations. In general, the approaches can be divided into two categories. One is to improve the correlation algorithms. The commonly applied correlation criteria, including the zero-mean normalized cross correlation (ZMCC), the zero-mean normalized sum of squared difference (ZMSSD) and the parametric sum of squared differences (PSSD), are recommended for dealing with linear intensity variations [10]. Other researchers, such as Liu [11] and Xu [8], improved the correlation algorithm to handle nonlinear intensity changes. The other approach is to improve the quality of the speckle pattern images. A number of studies [5,12,13] have reported and demonstrated that high-quality speckle pattern images are the prerequisite and basis for high-accuracy DIC measurements. To acquire high-quality speckle pattern images, various factors should be carefully controlled to regulate the amount of light reaching the camera sensor (i.e., the camera exposure) [14,15]. Without proper exposure control, images are overexposed or underexposed. Several parameters can be used to adjust the camera exposure [7,16]; the most critical and practical approach is to adjust the exposure time automatically without physically touching the system [15,16].
In this study, with the aim of acquiring high-quality speckle pattern images for DIC calculation, the nonlinear response of the camera and the nonuniform illumination were considered in order to improve the calculation accuracy and stability. To this end, an existing gradient-based image quality metric was chosen as the evaluation metric, and a rigid-body experiment and a numerical experiment were carried out to verify its effectiveness. Then, based on maximizing the image quality metric, the optimal exposure time is found using the proposed exposure control algorithm. Finally, nine sets of images captured at a fixed exposure time and at the optimized exposure time were compared, which verifies the efficiency and accuracy of the exposure control and its algorithm.

2. Existing Problems

The correlation criterion is defined to quantify the degree of similarity (or difference) between the reference subset and its deformed counterpart; thus, it is of fundamental importance in DIC. In this study, the widely used zero-mean normalized sum of squared difference (ZNSSD) [17] is chosen as the correlation function. This correlation function is insensitive to translation and scaling of the gray levels of the target sub-region. For a rectangular sub-region, there are three deformation modes of the gray levels (translation, scaling and translation + scaling), as shown in Figure 1. If the sub-region of the reference image $f(x,y)$ is linearly transformed into the sub-region of the current image $g(x,y)=a f(x,y)+b$, where $a$ and $b$ are constant coefficients, then the three deformation modes produce no difference in the value of the ZNSSD criterion.
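For reference, the ZNSSD criterion of [17] is commonly written in the following form; the notation here follows the general DIC literature rather than being quoted verbatim from [17]:

$$C_{ZNSSD}=\sum_{(x,y)}\left[\frac{f(x,y)-\bar{f}}{\sqrt{\sum_{(x,y)}\left(f(x,y)-\bar{f}\right)^{2}}}-\frac{g(x',y')-\bar{g}}{\sqrt{\sum_{(x,y)}\left(g(x',y')-\bar{g}\right)^{2}}}\right]^{2}$$

where $\bar{f}$ and $\bar{g}$ are the mean gray values of the reference and deformed subsets and $(x',y')$ is the location in the current image given by the subset shape function. The zero-mean terms and the normalization by the subset standard deviation are what make the criterion insensitive to the offset $b$ and the scale $a$.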
However, when the image of the sub-region is subject to nonuniform illumination or the illumination is partially occluded, the pixel values in the sub-region may, in addition to translation and scaling, also exhibit a deflection, for example $g(x,y)=a f(x,y)+\alpha\left(x-x_c\right)+b$, where $x_c$ is the central position of the sub-region and $\alpha$ is the deflection coefficient, as shown in Figure 2a. Similarly, due to the nonlinear response function of the image acquisition equipment, $h\left(f(x,y)\right)$, the variation in the pixel values no longer conforms to the form $a f(x,y)+b$, as shown in Figure 2b. In fact, the actually collected data come from an image combining the nonlinear response of the equipment and the nonuniform illumination, as shown in Figure 2c. Therefore, in addition to the correlation function, the nonlinear response of the equipment and the nonuniform illumination must be considered in the DIC calculation.

2.1. Nonlinear Response of the Camera

Optical vignetting is caused by the physical size of one or more lens elements: the front element partially shields the rear element, which reduces the effective incident light reaching the rear element off-axis, so the light intensity gradually decreases from the center of the image towards the periphery. The relationship between the input exposure and the output brightness value is called the response function, and it is generally nonlinear. In order to reduce the influence of illumination and obtain the experimental image more accurately, a calibration method for the nonlinear response function of the camera sensor, called photometric calibration, is introduced [18]. For the details of the photometric calibration process, please refer to [19,20].
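As a minimal sketch of how such a calibration is applied, the following function undoes the response curve and the vignetting of a raw 8-bit frame under the photometric model used in [19,20], i.e., I(x) = G(t · V(x) · B(x)); the function and argument names are illustrative assumptions, and the lookup table and vignetting map are assumed to come from a prior photometric calibration.

```python
import numpy as np

def photometric_correction(image, inv_response, vignette, exposure_time):
    """Undo the camera response and vignetting of a raw 8-bit image.

    Photometric model (refs. [19,20]): I(x) = G(t * V(x) * B(x)),
    so the scene irradiance is B(x) = G^{-1}(I(x)) / (t * V(x)).
    `inv_response` is a 256-entry lookup table for G^{-1}; `vignette`
    is a per-pixel attenuation map in (0, 1].
    """
    linear = inv_response[image.astype(np.uint8)]       # undo response curve G
    irradiance = linear / (exposure_time * vignette)    # undo exposure and vignetting
    return irradiance
```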

2.2. Nonuniform Illumination

For nonuniform illumination, improving the quality of the light source is a feasible way to eliminate or reduce its influence. Generally, a more stable blue light source [21,22] or fluorescence spraying [23,24] can be used. However, when sunlight or skylight is used as the light source, it is still necessary to reduce the influence of the illumination and to improve the accuracy of the DIC calculation in the sub-regions.
The image pixel values in the sub-region can be regarded as a varying waveform, as shown in Figure 3. When the exposure time is short, the image in the sub-region looks dark, so the difference between the peak and the trough is small. When the exposure time is long, the image in the sub-region looks bright, the values of the waveform are generally high and the difference between the peak and the trough is also small. For a proper exposure time, however, there is a large difference between the peak and the trough. Therefore, the maximum difference in the waveform (i.e., the maximum gradient information) can be found by adjusting the exposure time, so as to improve the accuracy of the DIC calculation in the sub-region.
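As a simple numerical illustration of this argument (not an experiment from the paper), the sketch below scales an idealized speckle waveform by the exposure time, clips it at the 8-bit sensor limit and reports the peak-to-trough difference; the profile, gain and exposure values are illustrative assumptions.

```python
import numpy as np

def peak_trough_difference(exposure_ms, profile, gain=30.0):
    """Peak-to-trough difference of an idealized speckle waveform after
    scaling by exposure time and clipping to the 8-bit sensor range."""
    counts = np.clip(gain * exposure_ms * profile, 0, 255)   # sensor saturation
    return counts.max() - counts.min()

x = np.linspace(0, 2 * np.pi, 200)
profile = 0.3 + 0.2 * np.sin(5 * x)      # synthetic waveform, values in [0.1, 0.5]
for t in (2.0, 15.0, 100.0):             # under-, well- and over-exposed cases
    print(f"{t:6.1f} ms -> peak-trough difference = {peak_trough_difference(t, profile):.0f}")
```

For the short exposure the difference stays small, for the moderate exposure it is large, and for the very long exposure both the peak and the trough saturate and the difference collapses, matching the behavior sketched in Figure 3.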

3. Image Quality Metrics

3.1. Gradient-Based Metrics

In order to acquire high-quality images, a proper metric of image quality is critical, although such metrics are highly application-dependent [18,25,26]. For DIC applications, gradient-based metrics are usually used as the image quality metrics [27], because the gradient is a dominant source of visual information [28] and the gradient domain is robust against illumination changes [29]. Two existing image quality metrics are analyzed below.
For an image $I$ captured at exposure time $t$, the magnitude of the gradient at a pixel $p$ is:

$$G(p,t)=\left\|\nabla I(p,t)\right\|_{2} \qquad (1)$$

where $\nabla I(\cdot)=\left[\partial I/\partial x,\ \partial I/\partial y\right]^{T}$.

Usually, the direct sum of the gradient magnitudes can be used as an evaluation metric of image quality:

$$M_{sum}=\sum_{p_i\in I}G(p_i) \qquad (2)$$

Shim et al. [30] defined the gradient information at pixel location $p_i$ as:

$$m(p_i)=\begin{cases}\dfrac{1}{N}\log\!\left(\lambda\left(\tilde{G}(p_i)-\sigma\right)+1\right), & G(p_i)\ge\sigma\\[4pt] 0, & G(p_i)<\sigma\end{cases}\qquad \text{s.t.}\ \ N=\log\!\left(\lambda\left(1-\sigma\right)+1\right) \qquad (3)$$

where $\sigma$ is an activation threshold, $\tilde{G}$ is the gradient magnitude normalized to the range of [0, 1], $\lambda$ is a control parameter for adjusting mapping tendencies and $N$ is a normalization factor that binds the gradient information to the range of [0, 1].

Then, the total gradient information in an image is:

$$M_{shim}=\sum_{p_i\in I}m(p_i) \qquad (4)$$

Zhang et al. [18] defined the soft percentile metric as a weighted sum of the sorted gradient magnitudes:

$$M_{softperc}(p)=\sum_{i\in[0,s]}W_i^{p}\,G_i \qquad (5)$$

where $p$ denotes the percentage of the pixels whose gradient magnitudes are smaller than a certain percentile of all the gradient magnitudes, $s$ is the total number of pixels in the image and $G_i$ are the gradient magnitudes sorted in ascending order.

The weights are:

$$W_i=\begin{cases}\dfrac{1}{N}\sin^{k}\!\left(\dfrac{\pi}{2\left\lfloor p\cdot s+0.5\right\rfloor}\,i\right), & i\le\left\lfloor p\cdot s+0.5\right\rfloor\\[6pt]\dfrac{1}{N}\sin^{k}\!\left(\dfrac{\pi}{2}-\dfrac{\pi}{2\left\lfloor p\cdot s+0.5\right\rfloor}\left(i-\left\lfloor p\cdot s+0.5\right\rfloor\right)\right), & i>\left\lfloor p\cdot s+0.5\right\rfloor\end{cases} \qquad (6)$$

where $\left\lfloor\cdot\right\rfloor$ rounds a number down to the closest integer and $N$ is the normalization factor chosen so that $\sum_{i=0}^{s}W_i=1$.
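For concreteness, a Python sketch of the gradient magnitude and the two metrics is given below. The Sobel derivatives (via OpenCV) and the default values of σ, λ, p and k are illustrative assumptions rather than parameters prescribed by [18,30], and the tail of the weight function is clamped at zero as an implementation choice.

```python
import numpy as np
import cv2  # used only for the Sobel derivatives

def gradient_magnitude(image):
    """G(p) of Equation (1), computed with 3x3 Sobel derivatives."""
    gx = cv2.Sobel(image, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(image, cv2.CV_64F, 0, 1, ksize=3)
    return np.sqrt(gx ** 2 + gy ** 2)

def m_shim(image, sigma=0.06, lam=1000.0):
    """Total gradient information of Shim et al. [30], Equations (3)-(4)."""
    g = gradient_magnitude(image)
    g_norm = g / (g.max() + 1e-12)                # normalize G to [0, 1]
    n = np.log(lam * (1.0 - sigma) + 1.0)         # normalization factor N
    m = np.zeros_like(g_norm)
    mask = g_norm >= sigma                        # activation threshold
    m[mask] = np.log(lam * (g_norm[mask] - sigma) + 1.0) / n
    return m.sum()

def m_softperc(image, p=0.8, k=5):
    """Soft percentile metric of Zhang et al. [18], Equations (5)-(6)."""
    g = np.sort(gradient_magnitude(image).ravel())        # G_i, ascending order
    s = g.size
    split = int(np.floor(p * s + 0.5))
    i = np.arange(s, dtype=np.float64)
    angle = np.where(i <= split,
                     np.pi / (2.0 * split) * i,
                     np.pi / 2.0 - np.pi / (2.0 * split) * (i - split))
    w = np.sin(np.maximum(angle, 0.0)) ** k       # clamp keeps the tail weights >= 0
    w /= w.sum()                                  # normalization factor N
    return float(np.dot(w, g))
```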

3.2. Experimental Evaluation

In order to understand the difference between the aforementioned metrics and to verify the efficiency and accuracy of the subsequent algorithms, the following experimental model was designed. A steel cube specimen with a side length of 10 cm was machined on a high-precision machine tool. On one face of the specimen there was a control point every 10 mm in the vertical and horizontal directions, 100 control points in total, and this face was sprayed with a speckle pattern, as shown in Figure 4. The control points were designed for the subsequent SFM (structure-from-motion) calculation.
A sequence of images of the same scene was taken at exposure times ranging from 1 ms to 10 ms in 0.2 ms intervals. Figure 5 shows the images at exposure times of 2, 3, 4, 5, 6, 7, 8, 9 and 10 ms. After computing both metrics for all the images, Figure 6 displays their comparison. It can be observed that the overall image quality cannot be evaluated well using $M_{shim}$ because of the monotonic rise of its curve, while $M_{softperc}$ can evaluate the image quality according to the extreme point of its curve. It can also be concluded that an optimal exposure time exists for a specific scene under fixed illumination conditions, and the image quality metric $M_{softperc}$ can be used to find it. Based on these observations, $M_{softperc}$ is used in the rest of this work.

3.3. Numerical Evaluation

In order to verify the efficiency of the image quality metric in image calculation, the reference image at each exposure time in Section 3.2 is taken as $f(x,y)$; an illumination term centered at the point $(x_c, y_c)$, namely $\alpha\left(x-x_c\right)+\beta\left(y-y_c\right)$ ($\alpha$ and $\beta$ are constants), is linearly superimposed, and a uniformly distributed noise term $\mathrm{rand}(s)$ is superimposed as well. The changed image function is then:

$$g(x,y)=f(x,y)+\alpha\left(x-x_c\right)+\beta\left(y-y_c\right)+\mathrm{rand}(s) \qquad (7)$$

Four cases of $(\alpha,\beta,s)$ are selected for the calculation: (1) $\alpha=0$, $\beta=0$, $s=50$; (2) $\alpha=\sin 8^{\circ}$, $\beta=\sin 10^{\circ}$, $s=50$; (3) $\alpha=0$, $\beta=0$, $s=30$; (4) $\alpha=\sin 8^{\circ}$, $\beta=\sin 10^{\circ}$, $s=30$.
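A minimal sketch of this perturbation is shown below, assuming rand(s) denotes zero-mean uniform noise of amplitude s in gray levels (the exact noise convention is not stated in the text); the function name and the centering on the image midpoint are illustrative assumptions.

```python
import numpy as np

def perturb_image(f, alpha, beta, s, center=None, rng=None):
    """Build g(x, y) of Equation (7): a linear illumination tilt about
    (x_c, y_c) plus uniform noise of amplitude s added to the reference
    image f. The zero-mean noise convention is an assumption."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = f.shape
    yc, xc = (h / 2.0, w / 2.0) if center is None else center
    y, x = np.mgrid[0:h, 0:w]
    tilt = alpha * (x - xc) + beta * (y - yc)             # illumination deflection
    noise = rng.uniform(-s / 2.0, s / 2.0, size=f.shape)  # rand(s)
    return np.clip(f.astype(np.float64) + tilt + noise, 0, 255)

# Case (2) of the numerical evaluation: alpha = sin 8 deg, beta = sin 10 deg, s = 50
# g = perturb_image(f, np.sin(np.radians(8)), np.sin(np.radians(10)), 50)
```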
Figure 7 shows the evaluation of the image quality metric at different exposure times for the above four cases. The black curve is the image quality metric $M_{softperc}$, obtained from Equations (5) and (6) with $p=0.8$. The orange area corresponds to the calculated horizontal displacement $U_x$ (pixel) or vertical displacement $U_y$ (pixel) between the changed image $g(x,y)$ and the original reference image $f(x,y)$ at different exposure times. It can be observed that, at an exposure time of 1 ms, the maximum horizontal and vertical displacement values are 0.16 pix and 0.21 pix, respectively, which shows that the change in illumination has a large impact on the calculation results. When the exposure time is within the range of 7–9 ms, the calculation error basically stays within 0.03 pix, which shows that the change in illumination has little impact on the calculation results and that the calculation is highly robust. It can be inferred that the calculation error is smallest at the peak of the evaluation metric $M_{softperc}$ and that this metric describes the image quality at different exposure times in DIC calculation well.

4. Exposure Control

Figure 8 shows the schematic diagram of the exposure control. First, the vignetting map and the camera response curve are obtained from a series of images by the photometric calibration method. Then, the image quality is evaluated on the photometrically calibrated image so as to maximize the image quality metric. Based on the maximum of the image quality metric, the optimal exposure time can be found, and, finally, the improved current image is obtained.
Figure 9 shows the schematic diagram of the search algorithm for the optimal exposure time. First, two images are taken at exposure times $t_1$ and $t_2$, where $t_2=t_1+\Delta t$. Then, the image quality metrics of the two consecutive frames ($M_{softperc,1}$ and $M_{softperc,2}$) are calculated according to Equations (5) and (6), and the approximate partial derivative over the first pair of frames (exposure interval $\Delta t$) is obtained:

$$\left.\frac{\partial M_{softperc}}{\partial t}\right|_{1}=\frac{M_{softperc,2}-M_{softperc,1}}{t_2-t_1} \qquad (8)$$

Define the first secant magnification as $\eta_1$ (the initial coefficient); then, the predicted location of the next exposure time point is:

$$t_3=t_2+\Delta t+\eta_1\left.\frac{\partial M_{softperc}}{\partial t}\right|_{1} \qquad (9)$$

The exposure times of the second pair of consecutive frames are $t_3$ and $t_4$, with $t_4=t_3+\Delta t$. The approximate partial derivative is:

$$\left.\frac{\partial M_{softperc}}{\partial t}\right|_{2}=\frac{M_{softperc,4}-M_{softperc,3}}{t_4-t_3} \qquad (10)$$

The second secant magnification is:

$$\eta_2=\eta_1\frac{\left.\partial M_{softperc}/\partial t\right|_{1}}{\left.\partial M_{softperc}/\partial t\right|_{2}} \qquad (11)$$

Similarly, the exposure times of the $i$th pair of consecutive frames are $t_{2i-1}$ and $t_{2i}$, with $t_{2i}=t_{2i-1}+\Delta t$. The approximate partial derivative is:

$$\left.\frac{\partial M_{softperc}}{\partial t}\right|_{i}=\frac{M_{softperc,2i}-M_{softperc,2i-1}}{t_{2i}-t_{2i-1}} \qquad (12)$$

The $i$th secant magnification is:

$$\eta_i=\eta_1\frac{\left.\partial M_{softperc}/\partial t\right|_{1}}{\left.\partial M_{softperc}/\partial t\right|_{i}} \qquad (13)$$

The predicted location of the next exposure time point is:

$$t_{2i+1}=t_{2i}+\Delta t+\eta_i\left.\frac{\partial M_{softperc}}{\partial t}\right|_{i} \qquad (14)$$

If $\left|t_{2i+1}-t_{2i-1}\right|\le\epsilon$ ($\epsilon$ is an error control parameter), then $x^{*}=t_{2i+1}$ is the optimal exposure time. Alternatively, if $\left|\left.\partial M_{softperc}/\partial t\right|_{i}\right|\le\delta$ and $\left|\left.\partial M_{softperc}/\partial t\right|_{i+1}\right|\le\delta$ ($\delta$ is an error control parameter), then $x^{*}=\left(t_{2i+1}+t_{2i-1}\right)/2$ is the optimal exposure time. If either of these two conditions is satisfied, the search terminates.
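The search can be written compactly as the following sketch, assuming hypothetical `capture` and `softperc` interfaces for the camera acquisition and for the metric of Equation (5); the guard against a zero derivative and the iteration cap are implementation choices not stated in the text.

```python
def optimal_exposure(capture, softperc, t1, dt, eta1, eps, delta, max_iter=50):
    """Secant-style search for the exposure time that maximizes M_softperc,
    following Equations (8)-(14). `capture(t)` returns an image taken at
    exposure time t and `softperc(img)` evaluates Equation (5); eps and
    delta are the two error control parameters of the stopping conditions."""
    t_odd = t1                     # t_{2i-1}
    d1 = None                      # first derivative, Eq. (8), reused in Eq. (13)
    d_prev, t_odd_prev = None, None
    eta = eta1
    for _ in range(max_iter):
        t_even = t_odd + dt                                   # t_{2i} = t_{2i-1} + dt
        m_odd, m_even = softperc(capture(t_odd)), softperc(capture(t_even))
        d = (m_even - m_odd) / (t_even - t_odd)               # Eq. (8)/(10)/(12)
        if d1 is None:
            d1 = d
        elif d != 0.0:
            eta = eta1 * d1 / d                               # Eq. (11)/(13)
        # Condition 2: the derivative stays below delta on two successive pairs.
        if d_prev is not None and abs(d_prev) <= delta and abs(d) <= delta:
            return 0.5 * (t_odd + t_odd_prev)
        t_next = t_even + dt + eta * d                        # Eq. (9)/(14): t_{2i+1}
        # Condition 1: consecutive predicted exposure times have converged.
        if abs(t_next - t_odd) <= eps:
            return t_next
        d_prev, t_odd_prev = d, t_odd
        t_odd = t_next
    return t_odd
```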

5. Experiments and Results

5.1. Experimental Procedures

In order to verify the efficiency and accuracy of the above exposure control and its algorithm, rigid-body motion experiments under different exposure conditions were carried out using the above exposure control method. The specimen from Section 3.2 was used again. Nine sets of images were designed for the analysis: set (a) kept the specimen's pose and illumination unchanged; sets (b–e) shared a second pose but had varying illumination conditions; sets (f–i) shared a third pose but had varying illumination conditions. Four direct-current LED lights with a power of 40 W were used, and the position and angle of the lights were placed randomly in the experiment. It is worth noting that both the second and the third pose were obtained by rotating the first pose about the vertical axis, so the motion of the specimen is a rigid-body motion. Under each working condition, two kinds of exposure time were used to capture images: one was the fixed exposure time of 6 ms, and the other was the optimal exposure time obtained with the exposure control algorithm, giving a group of images under that illumination condition. The pose of the specimen or the illumination condition was then adjusted and the above process repeated, so that sequences of images of the specimen under the two exposure conditions were obtained. Figure 10 shows the nine sets of images at the two kinds of exposure times under the above nine working conditions. Herein, images without a prime, such as (a), represent the case of the fixed exposure time, and images with a prime, such as (a′), represent the case of the optimized exposure time; the same convention is used hereafter.
Figure 11 shows the schematic diagram for searching for the optimal exposure time. A Hikvision MV-CA023-10UM/C camera with a 25 mm lens was used to capture the images, and Zhang's calibration method [31] was used for the camera calibration. The specimen was placed on an optical platform, and the monocular camera was adjusted so that the specimen was in its field of view. First, the exposure time of the camera was set to 6 ms to capture image (a); the camera was then switched to automatic exposure, and the optimal exposure time was calculated according to the image quality metric $M_{softperc}$ (with $p=0.7$, $\eta_1=1000$, $\Delta t=0.2$ ms and $t_1=6$ ms) so as to obtain the optimal exposure image (a′). The other eight sets of images were obtained in the same way. After calculation, the optimized exposure times were 4.00 ms, 7.39 ms, 4.21 ms, 8.42 ms, 9.56 ms, 3.92 ms, 6.77 ms, 4.99 ms and 4.57 ms, respectively.
Next, the displacement and strain calculations were performed by the DIC technique. The image resolution was 1920 × 1200, the window size for the displacement calculation was 45 × 45 pixels and the window size for the strain calculation was 15 × 15 pixels; the displacement and strain fields were then obtained.
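To make the subset matching concrete, the following minimal sketch performs an integer-pixel ZNSSD search for one 45 × 45 subset; real DIC software adds subset shape functions and sub-pixel optimization, so this is only an illustrative simplification, and the function names are assumptions.

```python
import numpy as np

def znssd(ref, cur):
    """ZNSSD between two equal-sized subsets (smaller means more similar)."""
    ref = ref - ref.mean()
    cur = cur - cur.mean()
    ref = ref / np.sqrt((ref ** 2).sum())
    cur = cur / np.sqrt((cur ** 2).sum())
    return float(((ref - cur) ** 2).sum())

def integer_displacement(ref_img, cur_img, x, y, half=22, search=10):
    """Integer-pixel displacement of the (2*half+1)^2 subset centered at (x, y)
    (half=22 gives the 45 x 45 window used in the text), found by exhaustive
    ZNSSD search within +/- `search` pixels; the point is assumed to lie far
    enough from the image border for all slices to be valid."""
    ref = ref_img[y - half:y + half + 1, x - half:x + half + 1].astype(np.float64)
    best_cost, best_uv = np.inf, (0, 0)
    for v in range(-search, search + 1):
        for u in range(-search, search + 1):
            cur = cur_img[y + v - half:y + v + half + 1,
                          x + u - half:x + u + half + 1].astype(np.float64)
            cost = znssd(ref, cur)
            if cost < best_cost:
                best_cost, best_uv = cost, (u, v)
    return best_uv   # (u, v) in pixels, horizontal and vertical
```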

5.2. Results

Due to space limitations, only the typical calculation results of three sets of images, (a), (c) and (i), are presented for comparison, as shown in Figure 12 and Figure 13. On the whole, the displacement data are basically the same at the two exposure times, except for images (a) and (a′). However, a careful look at the degree of color fluctuation in the displacement fields shows that the data at the optimal exposure time are smoother and less noisy than those at the fixed exposure time. Comparing images (a) and (a′), the displacement error under exposure control is significantly lower than that at the fixed exposure time, and the data are more uniform and reasonable, because the rigid body does not actually move.
Figure 13 shows the comparison of the strain fields ($\varepsilon_{xx}$, $\varepsilon_{xy}$ and $\varepsilon_{yy}$) in the above three working conditions at the two different exposure times. On the whole, comparing (a) and (a′) for $\varepsilon_{xx}$, $\varepsilon_{xy}$ and $\varepsilon_{yy}$ shows that the strain under exposure control is smaller than that under the fixed exposure, and the strain data are more uniform and reasonable; comparing (c) with (c′) and (i) with (i′), it can be seen from the degree of color fluctuation that the strain data under exposure control are more continuous, smoother and less noisy than those under the fixed exposure. In addition, since the poses of (c) and (i) are obtained by rotating the pose of (a) about the vertical axis, these images have a large horizontal strain and a small vertical strain. From the comparison of $\varepsilon_{xx}$, it can be observed that the strain increases as the rotation angle increases (i.e., from (c) to (i)). After enlarging a local area of the strain fields (i) and (i′), the relative error of the strain in the horizontal direction is found to be small, and the data are basically consistent. This is mainly because, compared with the large horizontal strain caused by the rotation of the specimen, the illumination has little influence on the horizontal strain; thus, the relative error is small. For $\varepsilon_{xy}$ and $\varepsilon_{yy}$, the strain distribution fluctuates noticeably because the shear strain and the vertical strain are small. It is worth noting that the fluctuation of $\varepsilon_{xy}$ and $\varepsilon_{yy}$ is caused by the influence of illumination and is inconsistent with the deformation produced by the specimen's rigid-body motion.
Figure 14 shows the boxplots for the error comparison of the five sets of data at the two exposure times. The errors are obtained by comparing the aforementioned DIC results for displacement and strain in cases (i) and (i′) with those calculated by the existing SFM method [32]. In the SFM calculation, the poses of the control points are estimated from multiple perspectives, and the displacement and strain fields caused by the change in perspective are then calculated. In general, the SFM calculation based on control points is more robust to illumination variation than the DIC calculation. Since the motion of the measured specimen is a rigid-body motion, the SFM method has a higher accuracy and is therefore regarded as the benchmark for comparison. It can be observed in Figure 14 that the errors of all five sets of data at the optimized exposure time are much smaller than those at the fixed exposure time, and the data distribution is more concentrated, which shows that DIC calculation at the optimized exposure time effectively reduces the data noise and improves the calculation accuracy.

5.3. Discussion

Since the motion of the measured specimen is a rigid-body motion, its true strain is zero. For the monocular camera, the apparent strain caused by the image deformation due to the pose change of the specimen is regarded as the strain of the specimen in this study. In fact, this strain arises from the affine change of the view, not from real material deformation. The experiment was designed in this way to exclude other external influences and to compare the effects of different exposure conditions more directly. Compared with a real mechanical experiment, this design has the following advantages: (1) a mechanical experiment is costly and easily affected by many external factors, which makes it more difficult to quantify the error of the proposed exposure control method; (2) the deformation of the specimen in the image caused by the pose change can be calculated by the existing SFM method, so a reference strain of the specimen can be obtained. The experimental results show that the data obtained under exposure control are smoother, more consistent and more stable, which demonstrates the effectiveness of the experimental design and the exposure control method.

6. Conclusions

Based on an existing image quality metric, the applicability and effectiveness of the metric in DIC applications were evaluated and verified for the first time. On this basis, an exposure control algorithm and control procedure were proposed to calculate the optimal exposure time. Finally, the effectiveness of the exposure control method in DIC calculation was verified through a comparative analysis of the displacement and strain fields at the fixed exposure time and the optimal exposure time in nine sets of rigid-body motion experiments. The experimental results show that the results under exposure control are more reasonable, i.e., the data are smoother and more consistent and the error is smaller, which improves the accuracy and stability of DIC calculation.

Author Contributions

Conceptualization, W.T.; formal analysis, W.T.; funding acquisition, J.C. and W.T.; investigation, J.C.; methodology, W.T.; project administration, J.C. and W.T.; resources, J.C.; software, W.T.; supervision, J.C. and W.T.; visualization, W.T.; writing—original draft, J.C. and W.T.; writing—review & editing, J.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Key R&D Project of China (No. 2021YFE0112200), the Featured Innovation Project of Guangdong Universities (2022KTSCX375) and the College-level Project (2020Zzk02).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw/processed data required to reproduce these findings cannot be shared at this time as the data also forms part of an ongoing study.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Gao, M.Z.; Zhang, J.G.; Li, S.W.; Wang, M.; Wang, Y.W.; Cui, P.F. Calculating changes in fractal dimension of surface cracks to quantify how the dynamic loading rate affects rock failure in deep mining. J. Cent. South Univ. 2020, 27, 3013–3024.
2. Liu, B.; Gao, Y.T.; Jin, A.B.; Elmo, D. Fracture Characteristics of Orebody Rock with Varied Grade Under Dynamic Brazilian Tests. Rock Mech. Rock Eng. 2020, 53, 2381–2398.
3. Yan, Y.; Li, J.; Li, X. Dynamic viscoelastic model for rock joints under compressive loading. Int. J. Rock Mech. Min. Sci. 2022, 154, 105123.
4. Li, J.; Yuan, W.; Li, H.; Zou, C. Study on dynamic shear deformation behaviors and test methodology of sawtooth-shaped rock joints under impact load. Int. J. Rock Mech. Min. Sci. 2022, 158, 105210.
5. Bomarito, G.F.; Hochhalter, J.D.; Ruggles, T.J.; Cannon, A.H. Increasing accuracy and precision of digital image correlation through pattern optimization. Opt. Lasers Eng. 2017, 91, 73–85.
6. Pan, B.; Xie, H.; Wang, Z.; Qian, K.; Wang, Z. Study on subset size selection in digital image correlation for speckle patterns. Opt. Express 2008, 16, 7037–7048.
7. Thai, T.Q.; Hansen, R.S.; Smith, A.J.; Lambros, J.; Berke, R.B. Importance of Exposure Time on DIC Measurement Uncertainty at Extreme Temperatures. Exp. Tech. 2019, 43, 261–271.
8. Xu, J.; Moussawi, A.; Gras, R.; Lubineau, G. Using Image Gradients to Improve Robustness of Digital Image Correlation to Non-uniform Illumination: Effects of Weighting and Normalization Choices. Exp. Mech. 2015, 55, 963–979.
9. Yang, D.; Zhang, S.; Wang, S.; Yu, Q.; Su, Z.; Zhang, D. Real-time illumination adjustment for video deflectometers. Struct. Control. Health Monit. 2022, 29, e2930.
10. Pan, B. Digital image correlation for surface deformation measurement: Historical developments, recent advances and future goals. Meas. Sci. Technol. 2018, 29, 082001.
11. Liu, X.Y.; Tan, Q.C.; Xiong, L.; Liu, G.D.; Liu, J.Y.; Yang, X.; Wang, C.Y. Performance of iterative gradient-based algorithms with different intensity change models in digital image correlation. Opt. Laser Technol. 2012, 44, 1060–1067.
12. Lecompte, D.; Smits, A.S.H.J.D.; Bossuyt, S.; Sol, H.; Vantomme, J.; Van Hemelrijck, D.; Habraken, A.M. Quality assessment of speckle patterns for digital image correlation. Opt. Lasers Eng. 2006, 44, 1132–1145.
13. Li, B.J.; Wang, Q.B.; Duan, D.P.; Chen, J.A. Using grey intensity adjustment strategy to enhance the measurement accuracy of digital image correlation considering the effect of intensity saturation. Opt. Lasers Eng. 2018, 104, 173–180.
14. Wang, Y.; Gao, Y.; Liu, Y.; Gao, Z.; Su, Y.; Zhang, Q. Optimal Aperture and Digital Speckle Optimization in Digital Image Correlation. Exp. Mech. 2021, 61, 677–684.
15. Pan, B.; Zhang, X.; Lv, Y.; Yu, L. Automatic optimal camera exposure time control for digital image correlation. Meas. Sci. Technol. 2022, 33, 105205.
16. Zhang, S. Rapid and automatic optimal exposure control for digital fringe projection technique. Opt. Lasers Eng. 2020, 128, 106029.
17. Tong, W. An Evaluation of Digital Image Correlation Criteria for Strain Mapping Applications. Strain 2005, 41, 167–175.
18. Zhang, Z.; Forster, C.; Scaramuzza, D. Active Exposure Control for Robust Visual Odometry in HDR Environments. In Proceedings of the IEEE International Conference on Robotics and Automation, Singapore, 29 May–3 June 2017.
19. Debevec, P.E.; Malik, J. Recovering High Dynamic Range Radiance Maps from Photographs. ACM SIGGRAPH 2008, 31, 1–10.
20. Engel, J.; Usenko, V.; Cremers, D. A Photometrically Calibrated Benchmark For Monocular Visual Odometry. arXiv 2016, arXiv:1607.02555.
21. Mahlein, A.K.; Hammersley, S.; Oerke, E.C.; Dehne, H.W.; Goldbach, H.; Grieve, B. Supplemental blue LED lighting array to improve the signal quality in hyperspectral imaging of plants. Sensors 2015, 15, 12834–12840.
22. Mustafic, A.; Li, C.; Haidekker, M. Blue and UV LED-induced fluorescence in cotton foreign matter. J. Biol. Eng. 2014, 8, 29.
23. Hiemann, R.; Hilger, N.; Sack, U.; Weigert, M. Objective quality evaluation of fluorescence images to optimize automatic image acquisition. Cytom. Part A 2006, 69A, 182–184.
24. Zhang, R.; Ying, Y.; Rao, X.; Li, J. Quality and safety assessment of food and agricultural products by hyperspectral fluorescence imaging. J. Sci. Food Agric. 2012, 92, 2397–2408.
25. Kim, J.; Cho, Y.; Kim, A. Proactive Camera Attribute Control Using Bayesian Optimization for Illumination-Resilient Visual Navigation. IEEE Trans. Robot. 2020, 36, 1256–1271.
26. Pan, B.; Lu, Z.; Xie, H. Mean intensity gradient: An effective global parameter for quality assessment of the speckle patterns used in digital image correlation. Opt. Lasers Eng. 2010, 48, 469–477.
27. Mehta, I.; Tang, M.; Barfoot, T.D. Gradient-Based Auto-Exposure Control Applied to a Self-Driving Car. In Proceedings of the 2020 17th Conference on Computer and Robot Vision (CRV), Ottawa, ON, Canada, 13–15 May 2020; pp. 166–173.
28. Vondrick, C.; Khosla, A.; Malisiewicz, T.; Torralba, A. HOGgles: Visualizing Object Detection Features. In Proceedings of the 2013 IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; pp. 1–8.
29. Shim, I.; Oh, T.H.; Lee, J.Y.; Choi, J.; Choi, D.G.; Kweon, I.S. Gradient-Based Camera Exposure Control for Outdoor Mobile Platforms. IEEE Trans. Circuits Syst. Video Technol. 2019, 29, 1569–1583.
30. Shim, I.; Lee, J.Y.; Kweon, I.S. Auto-adjusting Camera Exposure for Outdoor Robotics using Gradient Information. In Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014; pp. 1011–1017.
31. Zhang, Z. A Flexible New Technique for Camera Calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
32. Parashar, S.; Pizarro, D.; Bartoli, A. Robust Isometric Non-Rigid Structure-From-Motion. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 6409–6423.
Figure 1. (a) Translation, (b) scaling, (c) translation + scaling of image pixels between the reference image and the current image.
Figure 2. (a) Nonuniform illumination, (b) nonlinear response of the equipment, (c) nonuniform illumination + nonlinear response of the equipment.
Figure 3. Comparison of pixel values for different exposure times.
Figure 4. Photographs of the specimen.
Figure 5. Images at the exposure times of 2, 3, 4, 5, 6, 7, 8, 9 and 10 ms.
Figure 6. Comparison of the two image quality metrics.
Figure 7. Evaluation of image quality metrics at different exposure times.
Figure 8. Schematic diagram of exposure control.
Figure 9. Schematic diagram of the search algorithm for the optimal exposure time.
Figure 10. Images at (A) the fixed exposure time and (B) the optimal exposure time. ((a–i) represent the case of the fixed exposure time; (a′–i′) represent the case of the optimized exposure time.)
Figure 11. Schematic diagram for searching for the optimal exposure time.
Figure 12. Comparison of displacement fields ($u_x$ and $u_y$) under different working conditions at the two exposure times. ((a,c,i) represent the case of the fixed exposure time; (a′,c′,i′) represent the case of the optimized exposure time.)
Figure 13. Comparison of strain fields under different working conditions at the two exposure times. ((a,c,i) represent the case of the fixed exposure time; (a′,c′,i′) represent the case of the optimized exposure time.)
Figure 14. Boxplots for the error comparison of the five sets of data at the two exposure times. (The left part of each boxplot represents the data distribution at the fixed exposure time, and the right part represents that at the optimized exposure time.)