
Online Phase Measurement Profilometry for a Fast-Moving Object

Department of Opto-Electronics, Sichuan University, Chengdu 610064, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(6), 2805; https://doi.org/10.3390/app11062805
Submission received: 8 February 2021 / Revised: 11 March 2021 / Accepted: 17 March 2021 / Published: 21 March 2021

Abstract

When the measured object moves quickly online, the captured deformed pattern may suffer motion blur, and some phase information is lost. The camera's frame rate therefore has to be raised by adjusting its image-acquisition mode to keep up with a fast-moving object, but this sacrifices the resolution of the captured deformed patterns. A super-resolution image reconstruction method based on maximum a posteriori (MAP) estimation is therefore adopted to obtain high-resolution deformed patterns; the reconstructed patterns also show good noise suppression. Finally, all the reconstructed high-resolution equivalent phase-shifting deformed patterns are used for online three-dimensional (3D) reconstruction. Experimental results verify the effectiveness of the proposed method, which has good application prospects in high-precision, fast online 3D measurement.

1. Introduction

With the development of optics, computing and information technology, 3D measurement technology plays an important role in reverse engineering, industrial 3D testing, medical diagnosis, cultural relic protection and so on [1,2,3]. The most commonly used 3D measurement technologies are phase measurement profilometry (PMP) [4,5] and Fourier transform profilometry (FTP) [6,7]. Compared with FTP, PMP performs a point-to-point phase calculation over multiple frames of deformed patterns; it has higher precision and is more favored by industry. Conventional PMP-based 3D measurement requires the object to remain fixed, but in online 3D measurement the object is moving, so the object coordinates in the captured frames of deformed patterns do not correspond, which leads to errors in PMP phase demodulation. The pixel matching method [8] is used to obtain deformed patterns with consistent object coordinates. Scholars have carried out extensive research in the field of online 3D measurement. To improve the matching speed, Peng Kuang et al. [9] proposed a new pixel matching method using the modulation of shadow areas in online 3D measurement. To improve the measurement precision, Chen Cheng et al. [10] proposed an online phase measuring profilometry for objects moving with straight-line motion. To avoid the loss of phase information due to frequency filtering, Peng Kuang et al. [11] proposed a dual-frequency online PMP method with phase shifting parallel to the moving direction of the measured object.
Generally, these methods work well when the speed of the production line is below 0.2 m/s. However, when the object moves fast online, the captured deformed patterns suffer motion blur. To adapt to the faster speed, the frame rate of the camera can be increased by adjusting the image acquisition mode, but the resolution of the captured deformed patterns is inevitably sacrificed. To recover high-resolution deformed patterns and improve the measurement precision, we propose an online phase measurement profilometry method based on super-resolution image reconstruction [12,13], which builds a high-resolution image from multiple frames of low-resolution images with sub-pixel [14] shifts among them. Specifically, this paper adopts a super-resolution reconstruction method based on maximum a posteriori (MAP) estimation [15,16], using Gauss and Markov–Gibbs [17,18] random field models to construct the posterior probability of the high-resolution deformed pattern; the optimal estimate of the high-resolution deformed pattern is obtained by minimizing an objective function. Finally, all the equivalent phase-shifting deformed patterns are demodulated for online 3D reconstruction.

2. Principle

2.1. Principle of PMP

The sinusoidal grating is designed to be parallel to the moving direction, and the projector projects it onto the surface of the object. The deformed pattern I(x, y) captured by the camera is:
I(x, y) = A(x, y) + B(x, y) cos(φ(x, y) + δ)    (1)
where A(x, y) represents the background intensity of the deformed pattern, B(x, y) reflects its contrast, φ(x, y) is the phase modulated by the height of the object, and δ is the shifting phase. Then, in N-step PMP (N ≥ 3), the n-th deformed pattern is:
I_n(x, y) = A(x, y) + B(x, y) cos(φ(x, y) + 2nπ/N)    (2)
The phase φ(x, y) is calculated by Equation (3):
φ(x, y) = arctan[ Σ_{n=1}^{N} I_n(x, y) sin(2nπ/N) / Σ_{n=1}^{N} I_n(x, y) cos(2nπ/N) ]    (3)
Because φ(x, y) is wrapped into (−π, π] by the arctan function, it is unwrapped into the continuous phase Φ(x, y) using the rhombus phase unwrapping algorithm [19], and the object height distribution h(x, y) is reconstructed by the phase-to-height mapping algorithm [20]:
1/h(x, y) = a(x, y) + b_1(x, y)·(1/Φ(x, y)) + b_2(x, y)·(Φ_C(x, y)/Φ(x, y))    (4)
where a(x, y), b_1(x, y) and b_2(x, y) are obtained by plane calibration, and Φ_C(x, y) is the phase of the reference plane, obtained by measuring the reference plane in advance.
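As a sanity check of the N-step demodulation, Equation (3) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' implementation; note that with the patterns defined as in Equation (2), the formula recovers the modulated phase up to a global sign, which is absorbed by calibration.

```python
import numpy as np

def pmp_phase(patterns):
    """Wrapped phase via Equation (3) from N equal-step patterns I_n,
    n = 1..N, each shifted by 2*pi*n/N as in Equation (2)."""
    N = len(patterns)
    num = sum(I * np.sin(2 * np.pi * n / N) for n, I in enumerate(patterns, start=1))
    den = sum(I * np.cos(2 * np.pi * n / N) for n, I in enumerate(patterns, start=1))
    # arctan2 keeps the full (-pi, pi] range instead of arctan's (-pi/2, pi/2)
    return np.arctan2(num, den)

# Synthetic check: patterns built from a known phase map phi (Equation (2))
phi = np.linspace(-1.0, 1.0, 64).reshape(8, 8)
N = 5
frames = [100 + 50 * np.cos(phi + 2 * np.pi * n / N) for n in range(1, N + 1)]
wrapped = pmp_phase(frames)   # equals -phi here (sign convention of Equation (3))
```

The `arctan2` form avoids the quadrant ambiguity of a plain `arctan` of the ratio.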

2.2. Super-Resolution Image Reconstruction Algorithm Based on MAP

2.2.1. The Basic Principles of the MAP

Because the camera acquisition mode is changed for fast online measurement, the captured deformed pattern degrades into a low-resolution deformed pattern. The main degradation process can be expressed as follows:
I_{n,k} = D_{n,k} B_{n,k} M_{n,k} I_n + ΔI_{n,k}    (5)
where I_{n,k} is the low-resolution deformed pattern, i.e., the k-th (k = 1, 2, …, K) frame captured at the n-th (n = 1, 2, …, N) phase-shifting position, and K is the number of low-resolution deformed patterns captured at the same work station (the same shifting phase). D_{n,k} is the down-sampling matrix, which samples the high-resolution image at a fixed interval so that its resolution is reduced. B_{n,k} is the blur matrix, expressing the effect of the optical system's blur and aberration on the high-resolution image; mathematically it is expressed by the point spread function (PSF). M_{n,k} is the motion matrix, characterizing the pixel displacement of the low-resolution image relative to the high-resolution image. I_n is the original high-resolution deformed pattern matrix, and ΔI_{n,k} is the additive random noise matrix. B_{n,k} and D_{n,k} are the same for every low-resolution deformed pattern once a reference is selected, but M_{n,k} may differ between patterns because of the motion of the object. For convenience, only one phase-shifting position is analyzed here, so Equation (5) can be written as:
I_k = H_k I + ΔI_k    (6)
where H_k = D_k B_k M_k is a degenerate matrix. Only I_k is known in Equation (6), so an algorithm is needed to estimate H_k and ΔI_k in order to solve for I.
In the experiment, to obtain the high-resolution image from the low-resolution images, we set the resolution magnification factor q and interpolate the first low-resolution frame by bicubic interpolation; this interpolated image is regarded as the reference frame of the high-resolution image. The down-sampling matrix D_k can then be solved, and the motion matrix M_k between the other low-resolution frames and the reference frame can be calculated by pixel matching. The blur matrix B_k is represented by the PSF. In this way, H_k is estimated.
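For intuition, the forward model of Equation (6) can be simulated as follows. This is a minimal sketch with assumed ingredients (a circular integer shift standing in for M_k, a 3 × 3 box blur standing in for the PSF-based B_k, and q-fold decimation for D_k); it illustrates the degradation, not the estimation procedure itself.

```python
import numpy as np

def degrade(I, shift=(0, 0), q=2):
    """Apply one degradation H_k = D_k B_k M_k (Equation (6)) to a
    high-resolution image I; additive noise is added separately."""
    # M_k: integer pixel displacement (circular shift for simplicity)
    moved = np.roll(I, shift, axis=(0, 1))
    # B_k: crude 3x3 box PSF (a measured or Gaussian PSF would be used in practice)
    pad = np.pad(moved, 1, mode='edge')
    h, w = moved.shape
    blurred = sum(pad[dy:dy + h, dx:dx + w]
                  for dy in range(3) for dx in range(3)) / 9.0
    # D_k: keep every q-th pixel in both directions
    return blurred[::q, ::q]

rng = np.random.default_rng(0)
high = rng.random((8, 8))
low = degrade(high, shift=(1, 0), q=2) + 0.01 * rng.standard_normal((4, 4))
```

Only M_k varies between frames here, mirroring the observation above that B_k and D_k are shared once a reference is fixed.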
Because super-resolution image reconstruction is an ill-posed problem, the data alone admit a set of feasible estimates of I rather than a unique solution. However, under Bayesian theory [21], MAP can flexibly add the prior probability of the image, which is the mathematical expression of image features, and convert the ill-posed problem into a well-posed one by means of regularization. Finally, a unique solution can be obtained.
MAP estimation seeks the high-resolution image of maximum probability given the known low-resolution image sequence {I_k}, i.e., it solves max P(I | {I_k}). According to Bayesian theory, this is calculated as:
Î = argmax P(I | {I_k}) = argmax [ P({I_k} | I) P(I) / P({I_k}) ]    (7)
where Î is the estimate of the high-resolution image I, P(I | {I_k}) is the posterior probability, P({I_k} | I) is the conditional probability of the high-resolution image degenerating into the low-resolution images, and P(I) and P({I_k}) are the prior probabilities of the high-resolution image and the low-resolution images, respectively. Since P({I_k}) has no effect on the solution of Î, it can be omitted, and Equation (7) reduces to:
Î = argmax [ P({I_k} | I) P(I) ]    (8)
Taking the logarithm gives:
Î = argmax [ log P({I_k} | I) + log P(I) ]    (9)
or, equivalently:
Î = argmin [ −log P({I_k} | I) − log P(I) ]    (10)
Equation (10) is the initial form of the objective function. To solve it, the prior probability P(I) and the conditional probability P({I_k} | I) must be determined; their distributions depend on the assumed statistical model of the image.
According to the degeneration model I_k = H_k I + ΔI_k, a low-resolution image can be regarded as a random field whose mean is H_k I because of the random noise, so a Gauss random field can be used to model P({I_k} | I). A Markov random field describes the local statistical properties of an image, while a Gibbs random field describes the global properties through a joint probability; by their equivalence the two can be combined, and the global statistics can be calculated using the local Gibbs distribution model.

2.2.2. Establishment of Objective Equation

When the image statistical model is selected, Equation (10) can be written as
Î = argmin [ Σ_{k=1}^{K} ‖I_k − H_k Î‖² + α Σ_{c∈C} ρ_α(d_i^c) ]    (11)
where Σ_{k=1}^{K} ‖I_k − H_k Î‖² measures the difference between the observed data and the estimate, Σ_{c∈C} ρ_α(d_i^c) is the regularization term, and α is its coefficient, which determines its influence on the image estimate. C is the set of all cliques [22] in the neighborhood system of the image matrix, ρ_α(·) is the potential function associated with the cliques (different potential functions yield different texture statistics), and d_i^c is the variance of the pixel mean of clique c. The objective function can thus be expressed as:
Θ(Î) = Σ_{k=1}^{K} ‖I_k − H_k Î‖² + α Σ_{c∈C} ρ_α(d_i^c)    (12)
The function of the regularization term is to constrain the estimate Î toward the desired goal and reduce the deviation from the optimal solution. The regularization term used in this experiment is a function of the Gibbs distribution model, which describes the energy of a feature in the image neighborhood: the higher the energy, the lower the probability that the feature appears. According to the objective equation, the higher the energy of a feature, the larger the regularization term and the more strongly that feature is suppressed when the objective is minimized. Therefore, to reduce the influence of noise on the estimate Î, the chosen feature should distinguish noise from the original image and assign noise a high energy. The potential function ρ_α(d_i^c) is chosen according to the desired penalty for removing image features; examples include the Huber function and a linear function.
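As an illustration of Equation (12), the objective with a Huber potential can be sketched as below. The clique differences d_i^c are approximated here by first-order pixel differences, the operators H_k are passed in as callables, and all names are illustrative rather than the authors' code.

```python
import numpy as np

def huber(x, t=1.0):
    """Huber potential: quadratic near zero (penalizes small noise),
    linear in the tails (avoids over-smoothing genuine edges)."""
    a = np.abs(x)
    return np.where(a <= t, x ** 2, 2 * t * a - t ** 2)

def objective(est, lows, H_ops, alpha=0.05):
    """Equation (12): data-fidelity term plus Huber-regularized
    neighborhood differences."""
    data = sum(np.sum((Ik - H(est)) ** 2) for Ik, H in zip(lows, H_ops))
    dx = np.diff(est, axis=1)       # horizontal clique differences
    dy = np.diff(est, axis=0)       # vertical clique differences
    return data + alpha * (huber(dx).sum() + huber(dy).sum())
```

For a constant image observed without degradation the objective is zero; noise raises both the data term and the regularizer, which is exactly the suppression behavior described above.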

2.2.3. Iterative Solution

Since it is hard to solve directly for the Î that minimizes Θ(Î), we estimate Î iteratively using gradient descent [23], projecting the negative gradient of the objective function into the constraint space at each iteration, so that Î converges to a local minimum and, as far as possible, to the global minimum.
The whole iterative process can be described as:
  • Firstly, the low-resolution image is interpolated by bicubic interpolation to obtain the initial estimate Î_0. Set m = 0, where m is the iteration index, and calculate the mean square error (MSE) of the initial negative gradient, MSE_0.
  • Calculate the gradient of the objective function, g_m = ∇Θ(Î_m).
  • Calculate the projection onto the constraint space, p_m = P g_m, where P is the projection operator, P = E − H_k^T (H_k H_k^T)^{−1} H_k, and E is the identity matrix.
  • Use the gradient of the objective function to update Î_m:
    Î_{m+1} = Î_m − β p_m
    where β is the learning rate, i.e., the coefficient of the negative gradient of the objective function.
  • Calculate the MSE of the negative gradient of this iteration, MSE_m. If MSE_m − MSE_0 ≤ ε, Î_m is taken as the best estimate, where ε is the iteration termination threshold; otherwise, return to the second step.
MSE is defined as:
MSE = (1/(L_1 L_2)) Σ_{x,y} [Î(x, y) − I_r(x, y)]²    (13)
where L_1 and L_2 are the width and height of the high-resolution deformed pattern, and I_r is the high-resolution reference deformed pattern, obtained by interpolating the first frame of the low-resolution images. The whole MAP process is shown in Figure 1.
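The iteration can be sketched as below. For brevity this hypothetical version keeps only the data-fidelity gradient and stops when the gradient MSE is small, rather than applying the projection operator P, the regularizer gradient and the MSE_0 comparison described in the steps above.

```python
import numpy as np

def map_sr(lows, H, Ht, init, beta=0.25, eps=1e-6, max_iter=200):
    """Gradient-descent sketch of the MAP iteration. H applies one
    degradation H_k, Ht its adjoint (transpose); init is the bicubic
    initial estimate described in the first step."""
    est = init.copy()
    for m in range(max_iter):
        # gradient of the data term sum_k ||I_k - H_k est||^2 (second step)
        g = sum(Ht(H(est) - Ik) for Ik in lows)
        est = est - beta * g            # descent update (fourth step)
        if np.mean(g ** 2) <= eps:      # simplified stopping test (fifth step)
            break
    return est

# Demo: with H the identity, the estimate converges to the observation.
target = np.full((4, 4), 2.0)
est = map_sr([target], H=lambda x: x, Ht=lambda x: x, init=np.zeros((4, 4)))
```

With the defaults above, the residual shrinks by a factor of (1 − β) per iteration in this identity-operator demo, so convergence is well within the 200-iteration budget used in the experiments.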

2.3. Equivalent Deformed Patterns

Using the preceding MAP super-resolution reconstruction, we obtain a set of high-resolution deformed patterns at different positions of the online moving object. To obtain equivalent deformed patterns, in which the object occupies the same position in every frame, this paper adopts the improved-optical-flow pixel matching of Peng Kuang et al. [24]. As shown in Figure 2, Figure 2a is the modulation of deformed pattern I_1 and Figure 2b is the modulation of deformed pattern I_N. With Figure 2a as the reference, the motion displacement of the object is calculated by pixel matching between Figure 2a,b; using this displacement, the modulation with the object in the same position is obtained, as shown in Figure 2d, where the black area is the information missing after the leftward shift of the pixel matching. Phase demodulation is carried out over the region of interest (ROI), the dotted region in Figure 2c,d. I_N is moved in the reverse direction according to the calculated displacement, and the equivalent deformed patterns I_1′ and I_N′ are obtained by intercepting I_1 and the shifted I_N on the ROI of Figure 2d. Similarly, by matching the modulations of I_2 … I_{N−1} against that of I_1 and intercepting on the same ROI, a set of equivalent deformed patterns I_1′, I_2′, I_3′, …, I_N′ is obtained. Substituting them into Equation (3) yields the phase φ(x, y) of the online object; the continuous phase Φ(x, y) is then obtained with the rhombus phase unwrapping algorithm, and the 3D shape of the online object follows by substituting Φ(x, y) into Equation (4).
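The intercepting step can be sketched as follows, assuming purely horizontal motion (as on the translation stage) and displacements already found by the modulation-based pixel matching; the function name and arguments are illustrative.

```python
import numpy as np

def equivalent_patterns(frames, shifts, roi_width):
    """Move each frame back by its measured displacement (in pixels,
    along the motion direction) and crop the shared ROI so the object
    occupies the same pixels in every equivalent pattern."""
    out = []
    for I, dx in zip(frames, shifts):
        moved = np.roll(I, -dx, axis=1)   # undo the rightward motion
        out.append(moved[:, :roi_width])  # keep only the valid ROI columns
    return out
```

The columns discarded by the ROI crop correspond to the black strip of missing information in Figure 2d.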

3. Experiment and Analysis

To verify the feasibility and practicability of the proposed method, an online 3D measurement experimental system was built. As shown in Figure 3, the projector used in this experiment is a PLED-W200 DLP, and the image collector is an SDI-C2010M camera, whose highest frame rate is 60 fps at an image size of 1920 × 1080 pixels; to adapt to the fast speed, we increase the frame rate to 100 fps while the image size is decreased to 256 × 256 pixels. During the measurement, the object is driven from left to right by the Y100SC01 electric translation platform at a fast speed. Through prior experimental calibration, the relationship between pixels and motion distance was measured to be approximately 1.245 mm/pixel. The entire image acquisition process is:
  • The designed 5 frames of sinusoidal gratings with a shifting phase of 2π/5 are combined into a repeated video.
  • The projector projects the video onto the object.
  • Start the electric translation platform at a given speed.
  • Turn on the DLP frame synchronization signal to trigger the camera, thus achieving synchronous acquisition.
  • After 0.2 s, turn off the DLP frame synchronization.
In this experiment, the frame rate of the video is 25 fps and the frame rate of the camera is 100 fps, so 4 frames (K = 4) of deformed patterns are captured at each work station. Image acquisition is real time, while data processing is performed afterwards; it includes super-resolution reconstruction, obtaining equivalent deformed patterns and PMP 3D reconstruction. Because the iterative super-resolution reconstruction takes some time, and the iteration time depends on the hardware and the data processing mode, in the future we will use a more powerful computer or adopt graphics processing unit (GPU) acceleration to speed up data processing.
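The acquisition numbers above fit together as a small bookkeeping exercise (values taken directly from this section):

```python
# Projector video at 25 fps, camera at 100 fps, capture window 0.2 s.
camera_fps, video_fps, capture_time = 100, 25, 0.2

K = camera_fps // video_fps                      # low-res frames per work station
stations = round(capture_time * video_fps)       # phase-shifting positions
frames_total = round(capture_time * camera_fps)  # low-res frames captured in all
```

So each of the 5 phase-shifting positions contributes K = 4 low-resolution frames, 20 frames in total over the 0.2 s window.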
The proposed method is compared with the averaging method, and the experimental data are shown in Figure 4. Figure 4a shows the measured object, a “face” model; in this experiment, the speed of the object is 0.2 m/s. Figure 4b shows the set of low-resolution deformed patterns captured at the first work station, I_{1,1}, I_{1,2}, I_{1,3}, I_{1,4}, each 256 × 256 pixels.
The reconstructed deformed patterns are shown in Figure 5. Figure 5a is I_{1,1}, regarded as the reference frame. The pixel matching process is illustrated in Figure 5a,b: the points marked in Figure 5a are the selected feature points of I_{1,1}, and the points marked in Figure 5b are the matching points in I_{1,4}. The difference between the two sets of points represents the movement of the “face”. The estimated motion matrix, blur matrix and down-sampling matrix are then used in the subsequent iterations. In this experiment, the super-resolution factor q is 2 and the maximum number of iterations m is 200. Figure 5c,d show the reconstruction results of the averaging method and MAP, respectively, each 512 × 512 pixels. By comparison, the deformed pattern reconstructed by MAP is clearer and free of large motion blur; compared with the original low-resolution deformed pattern, the MAP reconstruction also suppresses noise well. Figure 5e,f are enlargements of the rectangular areas in Figure 5c,d, respectively. Compared with Figure 5e, Figure 5f has a clearer edge, which further indicates that noise is effectively reduced by MAP.
In the same way, high-resolution deformed patterns at the other four work stations can also be obtained; the reconstruction results are shown in Figure 6. Figure 6a shows the five frames of high-resolution deformed patterns reconstructed by the averaging method, and Figure 6c shows the five reconstructed by the proposed method, namely I_1, I_2, I_3, I_4, I_5. In Figure 6a,c, the marked dotted line shows that the position of the object differs from frame to frame. We adopt optical flow [24] to obtain equivalent deformed patterns, and the results are shown in Figure 6b,d; their size is 458 × 512 pixels, and the position of the object is the same in every frame. Figure 6e,f show the PMP 3D reconstruction results obtained from Figure 6b,d, respectively. From Figure 6e,f, it can be seen that the reconstruction by the proposed method is better than that by the averaging method, and the phase information is more complete. To better assess the results, the measurement by eight-step PMP [25], applied to the same object at rest, is taken as the quasi-truth value.
Figure 7 shows the 255th column sectional view of the reconstructed 3D profile of the object with different methods.
Figure 7b is an enlargement of the forehead (rectangular area) in Figure 7a. The dot-and-dash line is the result of the eight-step PMP, the dotted line that of the averaging method, and the solid line that of the proposed method. As can be seen from Figure 7a, the reconstruction by the proposed method is closer to that of the eight-step PMP and better than that of the averaging method. The details in Figure 7b show that the results of the proposed method remain very close to those of the eight-step PMP. The reconstruction by the proposed method is better than that by the averaging method, both overall and in detail. The experimental results indicate that the proposed method improves the resolution while preserving the details of the object.
To further verify the applicability of the proposed method, this experiment used a more complex “snail” model; the speed of the object is 0.5 m/s, the super-resolution factor q is 2, and the maximum number of iterations m is 200. The experimental data and comparison results are shown in Figure 8. Figure 8a–c show one frame of the high-resolution deformed patterns obtained by the averaging method, the proposed method and eight-step PMP, respectively. Figure 8d–f show the corresponding wrapped phases, obtained from all the equivalent deformed patterns; after phase unwrapping and height mapping, the reconstruction results are shown in Figure 8g–i. The motion blur in Figure 8b is less than that in Figure 8a; the wrapped phase in Figure 8e is clearer than that in Figure 8d and closer to that in Figure 8f. Comparing Figure 8g–i, the height reconstruction by the proposed method is clearly better than that by the averaging method and closer to that by eight-step PMP.
To further analyze the details, Figure 9 shows a cross-sectional comparison of the three methods; the dot-and-dash line is the result of the eight-step PMP, the dotted line that of the averaging method, and the solid line that of the proposed method. It can again be seen that the reconstruction by the proposed method is better than that by the averaging method, both overall and in detail, which means the proposed method has higher precision. The experimental results in Figure 8 and Figure 9 prove that the proposed method also works well on a more complex object.
To quantitatively analyze the errors of the proposed method and the averaging method, we measured planes of different known heights. The heights, measured by a metrological grating and taken as the truth values, are 5 mm, 10 mm and 15 mm. Each plane was measured online with both the proposed method and the averaging method, and the errors were analyzed with the root mean squared error (RMSE). The results are shown in Table 1, where RMSE1 and RMSE2 are the RMSEs of the averaging method and the proposed method, respectively. For example, for the 5 mm plane, the RMSE of the averaging method is 0.358 mm, while that of the proposed method is 0.091 mm. The results prove that the proposed method has higher precision and higher reliability than the averaging method.
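The RMSE figure of merit used in Table 1 is the standard one; a small sketch of the plane check with purely illustrative numbers:

```python
import numpy as np

def rmse(measured, truth):
    """Root mean squared error between a measured height map and the
    known plane height, as reported in Table 1."""
    measured = np.asarray(measured, dtype=float)
    return float(np.sqrt(np.mean((measured - truth) ** 2)))

# Illustrative numbers only: a noisy synthetic measurement of a 5 mm plane.
plane = 5.0 + 0.1 * np.array([1.0, -1.0, 1.0, -1.0])
err = rmse(plane, 5.0)
```

In the actual experiment `measured` would be the reconstructed height map of the plane and `truth` the metrological-grating height.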

4. Conclusions

In the online 3D measurement of a fast-moving object, the frame rate of the camera can be raised to capture the deformed patterns of the fast-moving object, at the cost of camera resolution. In this paper, an online phase measurement profilometry method for a fast-moving object was proposed, and the experimental results prove that the proposed method not only improves the resolution of the deformed patterns but also restores most of the details of the object. The proposed method can handle online 3D measurement of an object moving at 0.5 m/s, and we believe it has good application prospects in high-precision, fast online 3D measurement. However, the method is mainly limited by the frame rate of the camera; with a higher-frame-rate camera, higher speeds would be possible. In later work, we may adopt a super-resolution image reconstruction method based on deep learning, whose trained model can account for more degradation factors.

Author Contributions

Conceptualization, Y.C. and J.G.; methodology, Y.C.; validation, J.G., J.C. and X.H.; investigation, J.C., X.H. and J.G.; resources, Y.C.; writing—original draft preparation, J.G.; writing—review and editing, Y.C. and J.G.; visualization, J.G.; project administration, Y.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the Special Grand National Project of China (under grant No. 2009ZX02204-008).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, S. High-speed 3D shape measurement with structured light methods: A review. Opt. Lasers Eng. 2018, 106, 119–131. [Google Scholar] [CrossRef]
  2. Yen, H.-N.; Tsai, D.-M.; Yang, J.-Y. Full-Field 3-D Measurement of Solder Pastes Using LCD-Based Phase Shifting Techniques. IEEE Trans. Electron. Packag. Manuf. 2006, 29, 50–57. [Google Scholar] [CrossRef]
  3. Zuo, C.; Feng, S.; Huang, L.; Tao, T.; Yin, W.; Chen, Q. Phase shifting algorithms for fringe projection profilometry: A review. Opt. Lasers Eng. 2018, 109, 23–59. [Google Scholar] [CrossRef]
  4. Srinivasan, V.; Liu, H.C.; Halioua, M. Automated phase-measuring profilometry of 3-D diffuse objects. Appl. Opt. 1984, 23, 3105. [Google Scholar] [CrossRef]
  5. Zhu, L.; Cao, Y.; He, D.; Chen, C. Grayscale imbalance correction in real-time phase measuring profilometry. Opt. Commun. 2016, 376, 72–80. [Google Scholar] [CrossRef]
  6. Jeught, S.V.; Dirckx, J. Real-time structured light profilometry: A review. Opt. Lasers Eng. 2016, 87, 18–31. [Google Scholar] [CrossRef]
  7. Takeda, M.; Mutoh, K. Fourier transform profilometry for the automatic measurement of 3-D object shapes. Appl. Opt. 1983, 22, 3977–3982. [Google Scholar] [CrossRef]
  8. Xu, X.; Cao, Y.; Wang, Y.; Chen, C.; Fu, G.; Sun, S. A fast pixel matching method based on phase feature extraction in online phase-measuring profilometry. J. Mod. Opt. 2017, 64, 1907–1914. [Google Scholar] [CrossRef]
  9. Peng, K.; Cao, Y.; Wu, Y.; Xiao, Y. A new pixel matching method using the modulation of shadow areas in online 3D measurement. Opt. Lasers Eng. 2013, 51, 1078–1084. [Google Scholar] [CrossRef]
  10. Chen, C.; Cao, Y.P.; Zhong, L.J.; Peng, K. An online phase measuring profilometry for objects moving with straight-line motion. Opt. Commun. 2015, 336, 301–305. [Google Scholar] [CrossRef]
  11. Peng, K.; Cao, Y.; Wu, Y.; Chen, C.; Wan, Y. A dual-frequency online PMP method with phase-shifting parallel to moving direction of measured object. Opt. Commun. 2017, 383, 491–499. [Google Scholar] [CrossRef]
  12. Dong, C.; Loy, C.C.; He, K.; Tang, X. Image Super-Resolution Using Deep Convolutional Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 38, 295–307. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Verveer, P.J.; van Kempen, G.M.P.; Jovin, T.M. Super-resolution MAP algorithms applied to fluorescence imaging. In Three-Dimensional Microscopy: Image Acquisition and Processing IV; Int. Soc. Opt. Photonics 1997, 2984, 125–135. [Google Scholar]
  14. Keren, D.; Peleg, S.; Brada, R. Image sequence enhancement using sub-pixel displacements. In Proceedings of the CVPR ’88: The Computer Society Conference on Computer Vision and Pattern Recognition, Ann Arbor, MI, USA, 5–9 June 1988. [Google Scholar]
  15. Datsenko, D.; Elad, M. Example-Based Single Document Image Super-Resolution: A Global MAP Approach with Outlier Rejection. Multidimens. Syst. Signal Process. 2007, 18, 103–121. [Google Scholar]
  16. Fu, B.; Wang, L.; Wu, Y.; Wu, Y.; Fu, S.; Ren, Y. Weak Texture Information Map Guided Image Super-resolution with Deep Residual Networks. arXiv 2020, arXiv:2003.00451. [Google Scholar]
  17. Qing, C.; Ruan, J.; Xu, X.; Ren, J.; Zabalza, J. Spatial-spectral classification of hyperspectral images: A deep learning framework with Markov Random fields based modelling. IET Image Process. 2019, 13, 235–245. [Google Scholar] [CrossRef] [Green Version]
  18. Chan, M.T.; Herman, G.T.; Levitan, E. Bayesian image reconstruction using image-modeling Gibbs priors. Int. J. Imaging Syst. Technol. 1998, 9, 85–98. [Google Scholar] [CrossRef]
  19. Su, X.; Chen, W. Reliability-guided phase unwrapping algorithm: A review. Opt. Lasers Eng. 2004, 42, 245–261. [Google Scholar] [CrossRef]
  20. Ma, Q.; Cao, Y.; Chen, C.; Wan, Y.; Fu, G.; Wang, Y. Intrinsic feature revelation of phase-to-height mapping in phase measuring profilometry. Opt. Laser Technol. 2018, 108, 46–52. [Google Scholar] [CrossRef]
  21. Hanson, K.M. Introduction to Bayesian image analysis. In Medical Imaging 1993: Image Processing; Int. Soc. Opt. Photonics 1993, 1898, 716–731. [Google Scholar]
  22. Li, S.Z. Markov Random Field Modeling in Image Analysis; Springer Science & Business Media: Berlin, Germany, 2009. [Google Scholar]
  23. Ruder, S. An overview of gradient descent optimization algorithms. arXiv 2016, arXiv:1609.04747. [Google Scholar]
  24. Dai, M.; Peng, K.; Zhao, J.; Wan, M.; Wang, W.; Cao, Y. Fast 3D measurement based on improved optical flow for dynamic objects. Opt. Express 2020, 28, 18969. [Google Scholar] [CrossRef] [PubMed]
  25. Zhang, S. Recent progresses on real-time 3D shape measurement using digital fringe projection techniques. Opt. Laser Eng. 2010, 48, 149–158. [Google Scholar] [CrossRef]
Figure 1. The flow-process diagram of maximum a posteriori (MAP).
Figure 2. Modulation of high-resolution fringe pattern: (a) Modulation of I 1 ; (b) Modulation of I N ; (c) Modulation of I 1 with ROI; (d) Modulation of I N with ROI. ROI: region of interest.
Figure 3. Experimental system.
Figure 4. Measured data of “face” model: (a) Measured object; (b) A set of deformed patterns.
Figure 5. Low-resolution deformed patterns and super-resolution reconstructions: (a) Reference; (b) I 14 with matching points; (c) Reconstruction by averaging; (d) Reconstruction by MAP; (e) Enlargement of rectangular area in (c); (f) Enlargement of rectangular area in (d).
Figure 6. Experimental results for “face” model, pixel size of Figure 6a,c is 512 × 512, pixel size of (b,d) is 458 × 512: (a) High-resolution deformed patterns by averaging; (b) Equivalent deformed patterns by averaging; (c) High-resolution deformed patterns by the proposed method; (d) Equivalent deformed patterns by the proposed method; (e) Object reconstruction by averaging; (f) Object reconstruction by the proposed method.
Figure 7. The 255th column data of the reconstructed object: (a) Sectional view of the 3D profile of the object; (b) Magnified view of the forehead area.
Figure 8. Experimental data and the comparison experiment results of the “snail” model: (a) High-resolution deformed pattern by averaging; (b) High-resolution deformed pattern by the proposed method; (c) High-resolution deformed pattern by 8-step PMP; (d) Wrapped phase by averaging; (e) Wrapped phase by the proposed method; (f) Wrapped phase by 8-step PMP; (g) Reconstruction by averaging; (h) Reconstruction by the proposed method; (i) Reconstruction by 8-step PMP.
Figure 9. The 255th column data of the reconstructed object.
Table 1. Error analysis of planes with different heights (mm).

Height    RMSE1    RMSE2
5.000     0.358    0.091
10.000    0.317    0.074
15.000    0.336    0.086
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Gao, J.; Cao, Y.; Chen, J.; Huang, X. Online Phase Measurement Profilometry for a Fast-Moving Object. Appl. Sci. 2021, 11, 2805. https://doi.org/10.3390/app11062805