Article

Research on the Hand–Eye Calibration Method of Variable Height and Analysis of Experimental Results Based on Rigid Transformation

School of Mechanical Engineering, Hangzhou Dianzi University, Hangzhou 310018, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(9), 4415; https://doi.org/10.3390/app12094415
Submission received: 28 March 2022 / Revised: 22 April 2022 / Accepted: 24 April 2022 / Published: 27 April 2022

Abstract

In an eye-to-hand hand–eye calibration system, camera imaging exhibits a near-large, far-small phenomenon: after a single hand–eye calibration, the manipulator is only suitable for grasping objects of the same height, and the calibration result cannot be applied to grasping products of variable height. Based on a study of the pinhole camera model and the rigid transformation model between coordinate systems, the calibration height is introduced as a parameter, and the relationship between the parameters of the rigid transformation matrix from the image coordinate system to the robot coordinate system and the sampling height is established. In the experiment, the camera parameters are first calibrated to eliminate the influence of camera distortion on imaging quality, while the influence of the calibration height is ignored. Then, the calibration plate is calibrated in the robot coordinate system and the image coordinate system at different heights using the four-point calibration method, and the parameters of the rigid transformation matrix at the different heights H are calculated. Finally, experimental analysis shows that the parameters of the rigid transformation matrix from the image coordinate system to the robot coordinate system have a highly linear relationship with the calibration height, and this relationship is fitted. By analyzing the random error of the experiment, the linear relationship between the calibration height and the pixel density is further established, and the systematic error of the experimental process is analyzed in depth. The experimental results show that a hand–eye calibration system based on this linear relationship is precise and suitable for grasping products of any height, with a positioning error of less than 0.08%.

1. Introduction

When we want to use the manipulator combined with vision to grasp, we usually obtain the pose information of the object in space through the camera. However, the pose information is based on the camera coordinate system and cannot be used directly. If we want the robot end effector to reach the target position, we need to know the coordinate information of the target position in the robot coordinate system. Robot hand–eye calibration is the key technology to solve this problem. It refers to the establishment of the relationship between the manipulator coordinate system and the image coordinate system so that the manipulator can accurately reach the specified position under the guidance of the camera.
At present, many experts and scholars have proposed many hand–eye calibration methods, which are mainly divided into the linear calibration method and the iterative calibration method. The iterative method is mainly used to improve accuracy and robustness. The linear method is an efficient on-line hand–eye calibration method [1].
Shiu and Ahmad [2] first introduced the homogeneous transform equation AX = XB into hand–eye calibration and provided the minimum configuration for a unique solution. Tsai and Lenz [3] proposed another linear method: the translation of the hand–eye transformation is estimated first, and then its rotation. This method has been widely used in many systems. The traditional hand–eye calibration method is mainly based on the two-step approach [4,5]: the rotation matrix of the hand–eye parameters is calculated first, the translation vector is then solved using the rotation matrix, and the hand–eye matrix is obtained by combining the rotation and translation parameters. This method is simple and easy to understand, but its inherent defect is error transmission [6]. Jianfeng Jiang et al. [7] summarized the hand–eye calibration method into four steps: camera pose, gripper pose, mathematical model, and error measurement. Huajian Song et al. [8] proposed a new analytical solution based on a cost function to estimate the hand–eye matrix under measurement error; to obtain a nonsingular analytical solution based on the modified Rodrigues parameters, a new additional rotation theory must be introduced. Haihua Cui et al. [9] solved the rotation relationship by twice transforming the motion of the target carried by the robot and then solved the translation relationship through multiple rotating motions in the robot tool coordinate system.
To solve this problem, some scholars have proposed synchronous calibration of the rotation and translation parameters. Yuan Zhang et al. [10] proposed a simultaneous optimization method for the calibration and measurement of a typical hand–eye positioning system; by establishing a measurement calibration model under nonlinear constraints, an iterative method and a closed-form solution were proposed to effectively suppress the influence of calibration error on measurement accuracy. Jinsong Zeng et al. [11] proposed a calculation method for nonlinear robot vision guidance calibration parameters based on maximum likelihood estimation after analyzing the transformation relationships of the coordinate systems in robot vision guidance. The method constructs a matrix transformation measure function according to the maximum likelihood estimation and uses the Levenberg–Marquardt algorithm to solve the nonlinear least squares problem, so that the camera calibration parameters and hand–eye calibration parameters can be solved at one time.
In addition, many scholars have discussed modeling and solution methods for hand–eye calibration. Common modeling methods include quaternions, dual quaternions, and Denavit–Hartenberg (D–H) parameters. Zhongtao Fu et al. [12] proposed a collaborative-motion coordinate calibration method based on dual quaternions, which effectively solved the problem of synchronous calibration. Deng and Feng put forward an octet-based hand–eye calibration algorithm [13] on the basis of dual quaternions and realized the synchronous optimal solution of the rotation and translation matrices through the AX = XB model. Xiao Wang et al. [14] proposed a new model based on a pair of dual equations; simultaneous and separable solutions of the dual equations are given for the cases in which the homogeneous matrix is or is not decoupled, and simulations and experiments verify the feasibility and superiority of the method in terms of accuracy and robustness. Kenji Koide and Emanuele Menegatti [15] proposed a hand–eye calibration technique based on minimizing the re-projection error. This method operates directly on images of the calibration pattern without explicitly estimating the camera pose for each input image, effectively solves the estimation problem, and is easy to extend to different projection models: it can deal with different camera models simply by changing the projection model.
Some scholars have also combined big data with deep learning [16,17] to explore a new method of hand–eye system calibration. This method does not need to establish an accurate system mathematical model. Based on a large number of experimental data and neural network learning modes, the parameters of the hand–eye system are obtained through training. However, there are some disadvantages. In order to ensure the accuracy of training, tens of thousands of data samples are usually needed, and it takes a lot of time and energy to collect samples.
In hand–eye calibration, the vision system in which the camera is installed in a fixed position outside the manipulator body and the camera does not move with the manipulator is called an eye-to-hand hand–eye system [18]. This installation method has the advantages of convenient installation, simple calculation, and a low rate of measurement errors and is the preferred scheme in machine vision projects [19], as shown in Figure 1 [20].
The defect is that, in camera imaging, near objects appear large and far objects appear small. Therefore, one hand–eye calibration is only suitable for grasping objects of the same height. To grasp objects of different heights, several calibrations are needed, and the results are only applicable to grasping objects at the calibrated heights, which are discrete. Once a new product is introduced into the system, the hand–eye transformation matrix must be recalibrated at the height of that product. Moreover, each recalibration requires disassembly of the robot end effector, and considering the mechanical installation error, the previously calibrated heights also need to be recalibrated. In this way, as the number of product models increases, the calibration work becomes more and more arduous.
In order to solve the above problems, this paper first calibrates the camera parameters to eliminate the effect of camera distortion on imaging quality. It ignores the effect of the calibration height, simplifies the hand–eye transformation into a planar rigid transformation, and then introduces the calibration height as a parameter to establish a mathematical model in which the parameters of the hand–eye rigid transformation matrix have a highly linear relationship with the calibration height; the correctness of the model is verified through experiments. According to this hand–eye calibration model, in any eye-to-hand-type hand–eye system, only a few key heights need to be calibrated, and the hand–eye relationship for an object of any height in the camera field of view can then be calculated from this linear relationship. The number of hand–eye calibrations at different heights is reduced, and the efficiency of hand–eye calibration is improved.
The rest of this paper is organized as follows: in the second section, the pinhole camera model is introduced. The third section introduces the rigid transformation of the coordinate system. The fourth section is experimental verification and analysis, including camera parameter calibration, rigid transformation from the calibration plate coordinate system to robot coordinate system, and analysis of the experimental results. The fifth part is the conclusion. The sixth part is the acknowledgement of the project.

2. Pinhole Camera Model

A pinhole camera model is a model in which every scene point captured by the camera is projected, through the camera's projection center, onto the imaging plane upside down [21]. The axis of the camera coordinate system that coincides with the optical axis of the camera passes through this projection center, which is usually called the optical center of the camera. As shown in Figure 2, O represents the center point of the optical axis in the camera, i.e., the projection center of the camera. The image plane O′-{u, v} in Figure 2 is a virtual plane; the true image plane is the plane O-{u, v}, which is symmetrical to it about the projection center, and the image displayed on that plane is upside down. To facilitate calculation, the orientation and size of the image are transformed so that the transformed image is consistent with the orientation of the original object. In this way, the imaging plane is equivalently converted to the image plane O′-{u, v}.
When the projection center and optical axis of the camera are parallel to the Z axis, the distance from the projection center to the image plane O′-{u, v} is the focal length f of the camera. Assume that there is a point P(XB,YB,ZB) on the real-world calibration plate whose projection on the image plane is Pc (u, v), according to the similar triangle theory:
$$\frac{f}{Z_B} = \frac{u}{X_B} = \frac{v}{Y_B}$$
That is,
$$u = \frac{f X_B}{Z_B}, \qquad v = \frac{f Y_B}{Z_B}$$
When the origin of the two-dimensional image coordinate system and the intersection point O′ of the optical axis and the imaging plane do not coincide, the origin of the two-dimensional image coordinate system needs to be translated to the O′ position. Remember this transformation as (tu, tv); therefore
$$u = \frac{f X_B}{Z_B} + t_u, \qquad v = \frac{f Y_B}{Z_B} + t_v$$
It can be seen that in the standard pinhole camera model, the coordinate value of point (u, v) on the two-dimensional image coordinate system has a linear relationship with the coordinate value of point (XB, YB) in the world coordinate system. Rewritten into homogeneous coordinate form, it is expressed as
$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & t_u \\ 0 & f & t_v \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_B \\ Y_B \\ Z_B \end{bmatrix} = M \begin{bmatrix} X_B \\ Y_B \\ Z_B \end{bmatrix}$$
The matrix M is called the camera’s internal parameter matrix, and it determines how the points in the real world are projected onto the camera.
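As a minimal numerical sketch of this projection (the focal length, principal point, and object point below are illustrative values only, not the calibrated parameters reported in Section 4), the mapping can be written as:

```python
import numpy as np

# Illustrative intrinsics (not the calibrated values from Section 4.1)
f, t_u, t_v = 5000.0, 2700.0, 1800.0
M = np.array([[f, 0.0, t_u],
              [0.0, f, t_v],
              [0.0, 0.0, 1.0]])

# Illustrative point on the calibration plate, in camera coordinates (mm)
X_B, Y_B, Z_B = 120.0, -80.0, 900.0

# Perspective projection: apply M, then normalize by the depth Z_B
u_h = M @ np.array([X_B, Y_B, Z_B])
u, v = u_h[0] / Z_B, u_h[1] / Z_B   # u = f*X_B/Z_B + t_u, v = f*Y_B/Z_B + t_v
print(u, v)                         # approx. 3366.7, 1355.6
```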
The ideal perspective model is a pinhole imaging model, in which the object and image satisfy the similar-triangle relationship. In fact, due to processing and assembly errors of the camera optical system, the lens cannot satisfy this relationship exactly, so there is distortion between the actual image on the camera image plane and the ideal image. Distortion is a geometric aberration of imaging: the picture is distorted and deformed because the magnification differs across different areas of the image plane. The degree of this deformation increases from the center of the picture to its edge and is mainly visible at the edge of the picture. The two main factors causing distortion are (1) the lens shape (radial distortion) and (2) the lens not being parallel to the imaging plane (tangential distortion). The two distortions are shown in Figure 3 and Figure 4.
To reduce distortion, avoid shooting at the widest-angle end or the longest end of the lens focal length. Because of manufacturing or installation errors, the camera lens always has a certain amount of distortion; the lens distortion is therefore usually corrected first so that the imaging model of the camera satisfies the standard pinhole imaging model [22,23].
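For illustration, a common radial–tangential (Brown–Conrady) distortion model, which is assumed here rather than stated explicitly in this paper, can be sketched as follows; the coefficients are placeholders of the same order of magnitude as those reported later in Table 2:

```python
import numpy as np

def distort_normalized(x, y, k1, k2, p1, p2):
    """Apply radial (k1, k2) and tangential (p1, p2) distortion to normalized
    image coordinates (x, y), following the common Brown-Conrady model."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d

# Placeholder coefficients, roughly the magnitude of those in Table 2
x_d, y_d = distort_normalized(0.2, 0.1, k1=-0.063, k2=0.134, p1=-1.2e-4, p2=1.6e-3)
print(x_d, y_d)
```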

3. Rigid Transformation of the Coordinate System

An affine transformation is a transformation model of a two-dimensional plane coordinate system to another two-dimensional plane coordinate system [24]. When the objects in the two coordinate systems only have rotation and translation transformations, it is said that there is a rigid transformation relationship between the two coordinate systems [25]. In Euclidean space, suppose a point A (x, y) obtains a point B (x′, y′) through a rotation R transform and a translation t transform, and the two transformations are combined to make
$$f(x', y') = R \, f(x, y) + t$$
where f (x, y) is the plane before transformation and f (x′, y′) is the plane after transformation. Let the rotation angle of the plane about the Z axis be θ, and then
$$R = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}, \qquad t = \begin{bmatrix} t_x \\ t_y \end{bmatrix}$$
If point B (x′, y′) is subjected to a rotation R′ transformation and a translation t’ transformation again to obtain point C (x″, y″), then the transformation process from point A to point C can be expressed as
$$f(x'', y'') = R' \left( R \, f(x, y) + t \right) + t'$$
This is obviously not a linear transformation process, and as more basic transformation factors are combined, the transformation process becomes more and more complicated. For convenience of calculation, homogeneous coordinates are introduced here to represent points in the plane, and the transformation from point A to point B can then be expressed as
$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = T \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$
Equation (6) can be expressed as
$$\begin{bmatrix} x'' \\ y'' \\ 1 \end{bmatrix} = \begin{bmatrix} R' & t' \\ 0 & 1 \end{bmatrix} \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = T' T \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$
In this way, no matter how many times the basic transformation factors are combined, the relationship between the original coordinate points and the transformed coordinate points remains linear. In terms of the mechanical installation of the entire system, it is necessary to ensure that the robot, the plane of the conveyor belt, and the optical axis of the camera are parallel to each other. When the Z axes of the two coordinate systems are parallel, the displacement in the Z direction is temporarily ignored so that the three-dimensional robot coordinate system is compressed into a two-dimensional plane coordinate system, which is convenient for subsequent calculations. As shown in Figure 5, $O_C(X_C, Y_C, Z_C)$ represents the image coordinate system and $O_R(X_R, Y_R, Z_R)$ represents the robot coordinate system. Since the motion of the robot is rigid and the camera is fixed, when the Z axes of the two coordinate systems are parallel, the transformation between the two spatial coordinate systems consists only of rotation and translation transformations (a linear transformation model, according to Equation (8)). Therefore, the transformation model from the image coordinate system to the robot coordinate system can be expressed as
$$\begin{bmatrix} X_R \\ Y_R \\ 1 \end{bmatrix} = T_1 T_2 \cdots T_n \begin{bmatrix} X_C \\ Y_C \\ 1 \end{bmatrix} = \begin{bmatrix} R_{CR} & t_{CR} \\ 0 & 1 \end{bmatrix} \begin{bmatrix} X_C \\ Y_C \\ 1 \end{bmatrix} = T_{CR} \begin{bmatrix} X_C \\ Y_C \\ 1 \end{bmatrix}$$
where n is the number of transformations from the image coordinate system to the robot coordinate system, $R_{CR}$ represents the composite rotation matrix, $t_{CR}$ represents the composite translation vector, and $T_{CR}$ represents the final transformation matrix.
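The following sketch illustrates this composition of planar rigid transformations in homogeneous form; the angles and translations are arbitrary illustrative values:

```python
import numpy as np

def rigid_2d(theta, tx, ty):
    """Homogeneous 3x3 matrix of a planar rigid transformation:
    rotation by theta (rad) about the Z axis followed by translation (tx, ty)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0.0, 0.0, 1.0]])

# Two successive transformations composed into a single matrix
T1 = rigid_2d(np.deg2rad(10.0), 5.0, -3.0)
T2 = rigid_2d(np.deg2rad(-4.0), 1.5, 2.0)
T = T2 @ T1                      # point_C = T2 @ (T1 @ point_A) = (T2 @ T1) @ point_A

point_A = np.array([100.0, 50.0, 1.0])   # homogeneous coordinates of point A
point_C = T @ point_A
print(point_C[:2])
```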
From Equations (1) and (4), the linear relationship between the coordinate value of the image coordinate system and the coordinate value of the world coordinate system is only affected by the height $Z_B$ of the sampling point from the camera optical center. Because the robot coordinate system is equivalent to the world coordinate system, the transformation matrix of the robot coordinate system and the image coordinate system should also have a certain relationship with the sampling height:
$$R_{CR} = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}, \qquad t_{CR} = \begin{bmatrix} T_x \\ T_y \end{bmatrix}$$
Suppose that
$$A_{11} = f_1(H), \quad A_{12} = f_2(H), \quad T_x = f_3(H), \quad A_{21} = f_4(H), \quad A_{22} = f_5(H), \quad T_y = f_6(H)$$
where H is the sampling height. Therefore, Equation (9) can be expressed as
$$\begin{bmatrix} X_R \\ Y_R \\ 1 \end{bmatrix} = \begin{bmatrix} f_1(H) & f_2(H) & f_3(H) \\ f_4(H) & f_5(H) & f_6(H) \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_C \\ Y_C \\ 1 \end{bmatrix}$$
The above formula establishes a rigid transformation model of the image coordinate system to the robot coordinate system at an arbitrary height H. The following is an experiment to verify the correctness of the model.
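A sketch of how such a height-dependent model could be evaluated is given below. The linear forms and coefficients of f1(H)–f6(H) are placeholders (roughly of the magnitude fitted later in Section 4.3), not values prescribed by the model itself:

```python
import numpy as np

def T_CR(H, coeffs):
    """Build the image-to-robot transformation matrix at height H.
    coeffs maps each parameter name to an assumed linear model (slope, intercept)."""
    p = {name: slope * H + intercept for name, (slope, intercept) in coeffs.items()}
    return np.array([[p["A11"], p["A12"], p["Tx"]],
                     [p["A21"], p["A22"], p["Ty"]],
                     [0.0, 0.0, 1.0]])

# Placeholder (slope, intercept) pairs for f1(H)..f6(H); real values come from fitting
coeffs = {"A11": (1e-5, -0.006), "A12": (-2e-4, 0.70), "Tx": (0.35, -1750.0),
          "A21": (-2e-4, 0.70), "A22": (-1e-5, 0.005), "Ty": (0.55, 680.0)}

image_point = np.array([1200.0, 450.0, 1.0])      # (X_C, Y_C, 1) in pixels
robot_point = T_CR(H=60.0, coeffs=coeffs) @ image_point
print(robot_point[:2])                            # (X_R, Y_R) at height H = 60 mm
```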

4. Experimental Verification and Analysis

In Section 3, based on the standard pinhole imaging model introduced in Section 2, we established a rigid transformation mathematical model from the image coordinate system to the robot coordinate system at any height H. Next, we verify the correctness of this rigid transformation model through experiments.
First, in the standard pinhole camera model established in Section 2, the matrix M in Equation (4) is called the camera's internal parameter matrix, which determines how points in the real world are projected onto the camera. Using the Zhang Zhengyou calibration method, we obtain the internal parameter matrix M of the camera.
Second, by eliminating the barrel distortion of the camera lens, the four-point calibration method is used to calibrate the four-point coordinates of the calibration plate at different heights in the machine coordinate system and the image coordinate system. The M_CR matrix parameters at different heights H are calculated.
Finally, we plotted the M_CR matrix parameters against the calibration height H in a three-dimensional coordinate system, which clearly showed the distribution of the M_CR matrix parameters at different heights and revealed an obvious linear relationship between the M_CR matrix parameters and the calibration height; this linear relationship was then fitted. From the experimental results, the random error in the calibration system is analyzed, the linear relationship between the calibration height and the camera pixel density is studied, and the systematic error in the experimental process is examined in depth. Verification shows that the positioning error of the proposed arbitrary-height rigid transformation model in practical application is less than 0.08%. The specific experiment and analysis process are as follows.

4.1. Camera Parameter Calibration

The internal parameter matrix and distortion coefficients of the camera are the objects of camera calibration, which has been studied extensively by scholars at home and abroad. Dainis and Juberts [26] obtained the parameter matrix of the camera by a linear transformation; Ganapathy [27] first gave a method to obtain the camera parameter matrix through the perspective transformation matrix; but the simplest and most practical is Zhang Zhengyou's calibration method [28,29]. This calibration method does not require high-precision calibration equipment, and its calibration accuracy is higher than that of traditional calibration methods. Therefore, this paper chooses Zhang Zhengyou's calibration method to obtain the camera parameters. The calibration samples are shown in Figure 6 below.
This paper uses a high-precision alumina calibration plate with a low thermal expansion coefficient. The specific parameters are shown in Table 1.
We used the Camera Calibrator toolbox in MATLAB to import the samples in Figure 6 and enter the parameters of the calibration plate. The output is shown in Figure 7.
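For readers without MATLAB, an equivalent calibration workflow can be sketched with OpenCV; this is an assumed alternative, not the procedure used in this paper, and the file paths and the 12 × 11 inner-corner count (derived from the 13 × 12 checker array of Table 1) are assumptions:

```python
import glob
import cv2
import numpy as np

# Assumption: the 13 x 12 checker array of Table 1 yields 12 x 11 inner corners,
# each square 30 mm; the image directory is hypothetical.
pattern = (12, 11)
square = 30.0
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
gray = None
for fname in glob.glob("calib_samples/*.png"):
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Returns the RMS reprojection error, intrinsic matrix M, and distortion coefficients
rms, M, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print(rms, M, dist)
```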
The final calibration results are shown in the table below:
Table 2. Calibration results of camera internal parameters.
Parameter Type | Calibration Results
Equivalent focal length | [5010.81789826726, 5011.24572785725]
Principal point coordinates | [2768.70959640510, 1806.34169717229]
Radial distortion | k1 = −0.0625967952090845, k2 = 0.133984194777852
Tangential distortion | p1 = −0.000122713140590104, p2 = 0.00160031845139996
Scaling factor | Γ = 0.168076608984161
Mean reprojection error | 0.08
From the Table 2 calibration results, the camera imaging quality is much less affected by tangential distortion than by radial distortion. Therefore, when the accuracy requirements of the project are low, in order to improve the calculation speed, the influence of tangential distortion on the imaging quality can be ignored. The internal parameter matrix of the camera can be derived from Table 2:
$$M = \begin{bmatrix} 5010.8179 & 0.6924 & 2768.7096 \\ 0 & 5011.2457 & 1806.3417 \\ 0 & 0 & 1 \end{bmatrix}$$

4.2. Rigid Transformation from Calibration Plate Coordinate System to Robot Coordinate System

The calibration plate selected in this experimental step is a rectangular plate of 2000 mm × 1200 mm. The smallest enclosing rectangle of the original image is shown in Figure 8a. Due to the effects of lens distortion, the entire image is barrel-shaped [30]. The image effect after distortion correction is shown in Figure 8b. The image exactly matches the standard rectangular frame, indicating that the image distortion has been significantly corrected.
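A sketch of such a distortion correction, assuming OpenCV's distortion convention and reusing the coefficients of Table 2 (the file names are hypothetical), is:

```python
import cv2
import numpy as np

# Intrinsics and distortion coefficients taken from Table 2 (k1, k2, p1, p2, k3=0)
M = np.array([[5010.8179, 0.0, 2768.7096],
              [0.0, 5011.2457, 1806.3417],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.0625968, 0.1339842, -0.0001227, 0.0016003, 0.0])

img = cv2.imread("calibration_plate_raw.png")        # hypothetical file name
corrected = cv2.undistort(img, M, dist)              # remove barrel distortion
cv2.imwrite("calibration_plate_corrected.png", corrected)
```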
The calibration plate selected in this paper is a rectangular calibration plate. We calibrated the four vertices on the upper surface of the rectangular calibration plate, moved the calibration plate to different heights, calibrated the four-point coordinates in turn, obtained the four-point coordinates of the workpiece in the image coordinate system and the machine coordinate system at different heights, and calculated the different M_CR matrix parameters.
For workpieces of different sizes and heights, we know that three non-collinear points can determine a plane. If the accuracy of the four-point calibration results is enough, there is no need for more point calibration. For a quadrilateral workpiece, you can take the four vertices of the upper surface of the workpiece, move the workpiece to different heights, calibrate the four-point coordinates of the upper surface in turn, obtain the four-point coordinates of the workpiece in the image coordinate system and the machine coordinate system at different heights, and calculate the M_CR matrix parameters at different heights. For irregular workpieces, we can take four non-collinear key points on the upper surface of the workpiece to fit a quadrilateral, determine the key quadrilateral shape of the workpiece, move the workpiece to different heights, calibrate the four-point coordinates of the upper surface in turn, obtain the four-point coordinates of the workpiece in the image coordinate system and the machine coordinate system at different heights, and calculate the M_CR matrix parameters at different heights.
In the actual grasping work, the transformation from the image coordinate system to the robot coordinate system is realized by the M_CR matrix parameters at different heights; from the four-point positions obtained by the camera and the M_CR matrix, the shape and position of the workpiece can be determined, and the target grasping work of the robot can then be realized.
We used the robot's tool center point (TCP) to establish a tool coordinate system, sequentially aligned the four corner points of the calibration board, and read the corresponding coordinate values on the teaching pendant, as shown in Figure 9.
Due to the large variety of products and the different sizes of products at different heights, a hand–eye calibration is required at each product height position. However, in order to adapt to new height products, hand–eye calibration is generally required at a sufficiently small height step. According to the four-point calibration method, the calibration is performed for every 10 mm height, and some calibration results are shown in Table 3.
According to the principle of the four-point calibration method [31,32], the rigid transformation matrix between the image coordinate system and the robot coordinate system at each height is obtained, as shown in Table 4.
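As a sketch of the underlying computation (not necessarily the exact implementation used here), the M_CR matrix at one height can be estimated from the four point pairs by linear least squares; using the H = 15 mm correspondences from Table 3, the result should approximate the first row of Table 4:

```python
import numpy as np

# Four corner correspondences at H = 15 mm (image pixels -> robot mm), from Table 3
img = np.array([[1284.804, 432.0025],
                [1290.65, 3296.744],
                [3015.627, 3293.224],
                [3009.781, 428.4819]])
rob = np.array([[-1449.7, 1585.93],
                [549.17, 1604.26],
                [536.96, 2804.99],
                [-1462.22, 2785.86]])

# Each correspondence gives two linear equations in (A11, A12, Tx, A21, A22, Ty)
A = np.zeros((8, 6))
b = np.zeros(8)
for i, ((xc, yc), (xr, yr)) in enumerate(zip(img, rob)):
    A[2 * i] = [xc, yc, 1.0, 0.0, 0.0, 0.0]
    A[2 * i + 1] = [0.0, 0.0, 0.0, xc, yc, 1.0]
    b[2 * i], b[2 * i + 1] = xr, yr

params, *_ = np.linalg.lstsq(A, b, rcond=None)
A11, A12, Tx, A21, A22, Ty = params
M_CR = np.array([[A11, A12, Tx], [A21, A22, Ty], [0.0, 0.0, 1.0]])
print(M_CR)   # should be close to the H = 15 mm row of Table 4
```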

4.3. Analysis of Experimental Results

The robot grasping process is the inverse process of the camera imaging process, that is, the transformation process from the image coordinate system to the world coordinate system. By calibrating the calibration plates at different heights, we can obtain the rigid transformation matrix M_CR parameters of each height, which realize the transformation from the image coordinate system to the robot coordinate system and enable the target grasping of the robot. However, in this conversion, an important parameter—the height of the workpiece—is lost. In this way, there will be the problem that the same image coordinate points may represent different world coordinate points, resulting in the robot's "inaccurate grasp." In addition, due to the wide variety of products and different imaging dimensions of products at different heights, the height position of each product should be calibrated once.
Secondly, if a new product height has not been calibrated, we can only use the transformation matrix calibrated at an adjacent height for the calculation. Even if many calibration experiments are carried out with a small step, there will still be errors between the transformation matrix generated at a discrete calibration height and the transformation matrix at the actual product height. To solve this problem, we know from the rigid transformation model established in Section 3 that the transformation matrix between the robot coordinate system and the image coordinate system should have a definite relationship with the sampling height. Using this relationship, we can fit the rigid transformation matrices obtained at the sampled heights in Section 4.2 to obtain the rigid transformation matrix at any height. According to Table 4, we plot the relationship between the parameter values of M_CR and the height as broken lines in the same three-dimensional coordinate system, as shown in Figure 10.
It can be seen from Figure 10 that the M_CR matrix parameters show an obvious linear relationship with H in the broken line diagram. Tx and Ty reflect the change in the translation vector, while A11, A12, A21, and A22 all lie near the z = 0 plane, which is fully consistent with the rotation matrix parameters proposed above. Based on Table 4, we further analyze the relationship between the calibration height and each parameter of M_CR. The results are shown in Figure 11.
As can be seen from Figure 11, all of the parameters are fitted well except A11 and A22, for which some points are under-fitted. However, since A11 and A22 are relatively small, treating them as approximately linear does not have a great impact on the results. In practical application, the points with high dispersion can be eliminated and the data refitted.
We further analyze the relationship between the calibration height and the various parameters of M_CR, as shown in Table 5.
As can be seen from Table 5, except for the parameters A11 and A22, the parameters have a strict linear relationship with the calibration height, with correlation coefficient R² > 0.995, |Pearson's r| > 0.997, and adjusted R² > 0.995. The fitted linear relationships are shown in Equation (11).
$$\begin{aligned} A_{11} &= 10^{-5} H - 0.0056, & A_{12} &= -2 \times 10^{-4} H + 0.7004, & T_x &= 0.3511 H - 1748.8, \\ A_{21} &= -2 \times 10^{-4} H + 0.6984, & A_{22} &= -10^{-5} H + 0.005, & T_y &= 0.5465 H + 682.24 \end{aligned}$$
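These fits can be reproduced with a simple first-order least-squares fit per parameter; the sketch below uses the heights and the Tx and Ty columns of Table 4 (the remaining columns are handled identically):

```python
import numpy as np

H = np.array([15.0, 30.0, 45.0, 55.0, 75.0, 85.0, 95.0, 105.0])
Tx = np.array([-1743.855103, -1738.144653, -1732.568604, -1729.682251,
               -1722.456787, -1718.754883, -1716.002808, -1711.740601])
Ty = np.array([689.4714355, 699.4268799, 706.3405762, 713.5334473,
               723.1809692, 728.6638184, 733.9542236, 739.3514404])

for name, values in (("Tx", Tx), ("Ty", Ty)):
    slope, intercept = np.polyfit(H, values, 1)   # first-order polynomial fit
    r = np.corrcoef(H, values)[0, 1]              # Pearson correlation coefficient
    print(f"{name} = {slope:.4f} * H + {intercept:.2f}   (r = {r:.5f})")
```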
Considering the errors introduced by the experiment, we further analyzed the experimental results. The calibration coordinates in the image and robot coordinate systems at the different heights in Table 3 are shown in Figure 12.
It can be found that when H is 105 mm, the image coordinate X_C of corner B has a large error; errors inevitably occur when reading data during the actual calibration process. Since A11 and A22 are relatively small, the approximate linear processing does not have a great impact on the results. In practical application, a data preprocessing step can be carried out first, and the M_CR matrix parameter points with high dispersion can then be eliminated and refitted. To further study the experimental results, we found that there is also a linear relationship between the pixel density and the camera calibration height H. The analysis is as follows:
Pixels per inch (PPI) indicates the number of pixels per unit length of content. The larger the PPI, the higher the resolution and fidelity of the representative image. The size of the research object in this paper is in mm, so the pixel density is defined as the number of pixels contained per mm, using Pr as an indicator.
As can be seen from Figure 4,
$$\frac{f}{H'} = \frac{\mu}{W}$$
when the focal length and position of the camera are fixed, the greater the height of the workpiece, the smaller the distance H′ between the optical center of the camera and the upper surface of the workpiece, and the smaller the field of view of the camera. The equation is
$$W = \frac{\mu H'}{f}$$
Let the resolution of the camera be m × n. If m > n, the calculation formula for the pixel density is
$$P_r = \frac{m}{W} = \frac{f m}{\mu H'}$$
It can be seen that the pixel density Pr is inversely proportional to the distance H′ between the optical center of the camera and the upper surface of the workpiece and therefore increases with the height H of the workpiece.
From the four-point calibration results in Table 3, the corresponding length pixel ratio (Pr_l) and width pixel ratio (Pr_w) are calculated as shown in Equation (15), where l1 and l2 are the lengths of the workpiece in the image, w1 and w2 are the widths of the workpiece in the image, and L and W are the length and width of the workpiece:
$$l_1 = Y_{CB} - Y_{CA}, \quad l_2 = Y_{CC} - Y_{CD}, \quad w_1 = X_{CD} - X_{CA}, \quad w_2 = X_{CC} - X_{CB}, \quad P_{r\_l} = \frac{l_1 + l_2}{2L}, \quad P_{r\_w} = \frac{w_1 + w_2}{2W}$$
The pixel ratios at each height, calculated according to Equation (15), are shown in Table 6.
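For example, using the H = 15 mm image corners from Table 3 and the 2000 mm × 1200 mm workpiece, Equation (15) gives approximately the first row of Table 6:

```python
# Image corners A, B, C, D at H = 15 mm (from Table 3), in pixels
XA, YA = 1284.804, 432.0025
XB, YB = 1290.65, 3296.744
XC, YC = 3015.627, 3293.224
XD, YD = 3009.781, 428.4819
L, W = 2000.0, 1200.0          # workpiece length and width in mm

l1, l2 = YB - YA, YC - YD      # image length along the 2000 mm side
w1, w2 = XD - XA, XC - XB      # image width along the 1200 mm side
Pr_l = (l1 + l2) / (2 * L)     # length pixel ratio, pixels per mm
Pr_w = (w1 + w2) / (2 * W)     # width pixel ratio, pixels per mm
print(Pr_l, Pr_w)              # approx. 1.4324 and 1.4375, cf. Table 6
```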
Through Table 6, we fit the relationship between the length–width pixel ratio and the workpiece height, as well as the correlation coefficient, as shown in Figure 13.
As shown in Figure 13, the fitted data in the length direction and in the width direction are basically the same, but there is a deviation between them; ideally, the two lines should coincide. This deviation is caused by camera installation error, unevenness of the plate, experimental operation, and other errors, which make the pixel density inconsistent between the two directions.
Further analysis shows that the correlation coefficients of the fitting results in the length and width directions are also inconsistent. This is because a fixed-focus lens is used in this paper: when the workpiece height changes, "defocus blur" inevitably appears in the camera image. However, since the workpiece is always captured within a certain height range in practical application, the error caused by this effect is within the tolerance of the system. Moreover, the lens used has a certain depth of field (DOF), so there is no need to compensate for the error caused by defocus blur in practical application.
From Figure 13, the correlation coefficient of the fitting result in the width direction is larger, and the pixel density in the width direction is greater than that in the length direction. To better reflect the relationship between pixel density and calibration height, this paper takes the fitting result in the width direction as the pixel density Pr of the workpiece at height H, namely
$$P_r = 0.0004 H + 1.4317$$
It can be seen from Equation (11) that, even in the presence of multiple error sources, there is still a highly linear relationship between the parameters of the rigid transformation matrix from the image coordinate system to the robot coordinate system and the calibration height. In an eye-to-hand hand–eye system, the parameters of the M_CR matrix at any height can therefore be calculated quickly from the fitting results. The measured result is the teaching-pendant reading for a point on the calibration board at a certain height, obtained with the four-point calibration method shown in Figure 9. The calculated result is obtained by computing the M_CR matrix at that height according to Equation (11) and substituting it into Equation (10) to calculate the coordinate value. The verification results are shown in Table 7.
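The error measure used in Table 7 is the relative deviation between the calculated and measured coordinates; as a small sketch using the H = 20 mm row of Table 7:

```python
# Measured (teach pendant) vs. calculated coordinates at H = 20 mm, from Table 7
x_meas, y_meas = 691.07, 1983.25
x_calc, y_calc = 691.2215, 1982.9732

x_err = abs(x_calc - x_meas) / abs(x_meas) * 100.0   # relative error in percent
y_err = abs(y_calc - y_meas) / abs(y_meas) * 100.0
print(f"X error = {x_err:.4f}%, Y error = {y_err:.4f}%")  # approx. 0.022% and 0.014%
```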
As can be seen from Table 7, firstly, the rigid transformation matrix is calculated according to the linear relationship between the transformation matrix parameters and the calibration height, and then the transformation matrix is used to calculate the position of the robot’s target point under the visual guidance. The error is less than 0.08%, so this linear relationship is reasonable and effective in the calibration system.

5. Conclusions

The introduction of this paper describes the specific working mode of hand–eye calibration and the current research status of hand–eye calibration methods, as well as a defect of eye-to-hand calibration: because of the near-large, far-small phenomenon of camera imaging, a single calibration cannot be applied to machine grasping at arbitrary heights. Based on the imaging model of a standard pinhole camera, the rigid transformation matrix is introduced to establish a rigid transformation mathematical model from the image coordinate system to the robot coordinate system at any height H, and the hand–eye transformation is simplified into a planar rigid transformation. In the experiment, the internal parameters and distortion parameters of the camera were obtained with the Zhang Zhengyou calibration method, which eliminates the influence of the camera's barrel distortion on imaging quality; the influence of the calibration height was ignored. Then, the calibration plate was calibrated in the robot coordinate system and the image coordinate system at different heights using the four-point calibration method, and the M_CR parameter matrix at the different heights H was obtained. From the obvious linear distribution of the M_CR parameter matrix in the three-dimensional coordinate system, the linear relationship between the parameters of the rigid transformation matrix from the image coordinate system to the robot coordinate system and the calibration height was fitted. The random error of the experimental results was analyzed, the linear relationship between the calibration height and the pixel density of the image was further studied, and the systematic errors in the experimental process were analyzed in depth. The experimental results show that the hand–eye relationship of an object at any height in the camera's field of view can be calculated from this linear relationship, which is accurate and suitable for positioning products of any height, with a positioning error of less than 0.08%.
According to this conclusion, an eye-to-hand hand–eye calibration system only needs to calibrate the rigid transformation matrix at several sampling heights in order to calculate the hand–eye rigid transformation matrix of the system at any height, and the result is accurate and effective. This brings great convenience to the calibration of the hand–eye system.

Author Contributions

Resources, S.S.; Writing—original draft, D.Z. and W.W.; Writing—review & editing, S.G., D.Z. and W.W. All authors have read and agreed to the published version of the manuscript.

Funding

This paper is supported by the National Natural Science Foundation of China (grant no. 51475129, 51675148, 51405117). This research was funded by the application and demonstration of “intelligent generation” technology for small- and medium-sized enterprises—R & D and demonstration application of the chair of the industry Internet innovation service platform based on artificial intelligence (grant no. 2020C01061).

Conflicts of Interest

The authors declare that there is no conflict of interest related to this manuscript.

References

  1. Liu, J.; Wu, J.; Li, X. Robust and accurate hand–eye calibration method based on schur matric decomposition. Sensors 2019, 19, 4490. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Shiu, Y.C.; Ahmad, S. Calibration of Wrist-Mounted Robotic Sensors by Solving Homogeneous Transform Equations of the Form AX = XB. IEEE Trans. Robot. Autom. 1989, 5, 16–29. [Google Scholar] [CrossRef] [Green Version]
  3. Tsai, R.Y.; Lenz, R.K. A new technique for fully autonomous and efficient 3d robotics hand-eye calibration. In Proceedings of the 4th International Symposium on Robotics Research, Santa Clara, CA, USA, 1 May 1988; pp. 287–297. [Google Scholar]
  4. Huang, C.; Chen, D.; Tang, X. Robotic hand-eye calibration based on active vision. In Proceedings of the 2015 8th International Symposium on Computational Intelligence and Design (ISCID), Hangzhou, China, 12–13 December 2015; pp. 55–59. [Google Scholar]
  5. Wu, J.; Liu, M.; Qi, Y. Computationally Efficient Robust Algorithm for Generalized Sensor Calibration Problem AR=RB. IEEE Sens. J. 2019, 19, 9512–9521. [Google Scholar] [CrossRef]
  6. Ali, I.; Suominen, O.; Gotchev, A.; Morales, E.R. Methods for simultaneous robot-world-hand–eye calibration: A comparative study. Sensors 2019, 19, 2837. [Google Scholar] [CrossRef] [Green Version]
  7. Jiang, J.; Luo, X.; Luo, Q.; Qiao, L.; Li, M. An overview of hand-eye calibration. Int. J. Adv. Manuf. Technol. 2021, 119, 77–97. [Google Scholar] [CrossRef]
  8. Song, H.; Du, Z.; Wang, W.; Sun, L. Singularity analysis for the existing closed-form solutions of the hand-eye calibration. IEEE Access 2018, 6, 75407–75421. [Google Scholar] [CrossRef]
  9. Cui, H.; Sun, R.; Fang, Z.; Lou, H.; Tian, W.; Liao, W. A novel flexible two-step method for eye-to-hand calibration for robot assembly system. Meas. Control 2020, 53, 2020–2029. [Google Scholar] [CrossRef]
  10. Zhang, Y.; Qiu, Z.; Zhang, X. A Simultaneous Optimization Method of Calibration and Measurement for a Typical Hand–Eye Positioning System. IEEE Trans. Instrum. Meas. 2020, 70, 1–11. [Google Scholar] [CrossRef]
  11. Zeng, J.; Xue, W.; Zhai, X. A Synchronous Solution Method of Calibration Parameters for Visual Guidance of Robot. Mach. Tool Hydraul. 2019, 47, 37–40. [Google Scholar]
  12. Fu, Z.; Pan, J.; Spyrakos-Papastavridis, E.; Chen, X.; Li, M. A dual quaternion-based approach for coordinate calibration of dual robots in collaborative motion. IEEE Robot. Autom. Lett. 2020, 5, 4086–4093. [Google Scholar] [CrossRef]
  13. Deng, S.; Feng, M. Research on Hand-eye Calibration of Monocular Robot. Modul. Mach. Tool Autom. Manuf. Tech. 2021, 12, 53–57. [Google Scholar]
  14. Wang, X.; Huang, J.; Song, H. Simultaneous robot–world and hand–eye calibration based on a pair of dual equations. Measurement 2021, 181, 109623. [Google Scholar] [CrossRef]
  15. Koide, K.; Menegatti, E. General hand–eye calibration based on reprojection error minimization. IEEE Robot. Autom. Lett. 2019, 4, 1021–1028. [Google Scholar] [CrossRef]
  16. Levine, S.; Pastor, P.; Krizhevsky, A.; Ibarz, J.; Quillen, D. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. Int. J. Robot. Res. 2018, 37, 421–436. [Google Scholar] [CrossRef]
  17. Pinto, L.; Gupta, A. Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 3406–3413. [Google Scholar]
  18. Wang, L.; Qian, L. Research on Robot Hand-Eye Calibration Method Based on Three-Dimensional Calibration Block. Laser Optoelectron. Prog. 2021, 58, 539–547. [Google Scholar]
  19. Muis, A.; Ohnishi, K. Eye-to-hand approach on eye-in-hand configuration within real-time visual servoing. IEEE/ASME Trans. Mechatron. 2005, 10, 404–410. [Google Scholar] [CrossRef]
  20. Liang, P.; Lin, W.; Luo, G.; Zhang, C. Research of Hand–Eye System with 3D Vision towards Flexible Assembly Application. Electronics 2022, 11, 354. [Google Scholar] [CrossRef]
  21. Juarez-Salazar, R.; Zheng, J.; Diaz-Ramirez, V.H. Distorted pinhole camera modeling and calibration. Appl. Opt. 2020, 59, 11310–11318. [Google Scholar] [CrossRef]
  22. Zhang, Z.; Luo, B. Research and implementation of robot vision positioning. J. Harbin Inst. Technol. 1997, 29, 85–89. [Google Scholar]
  23. Tang, W. Design and Implementation of Visual Localization Algorithm for Demo System of Construction Robots. Master’s Thesis, Harbin Institute of Technology, Harbin, China, 2020. [Google Scholar]
  24. Ying, J.; Chen, W.; Yang, H. Research on Parking spaces recognization and counting algorithm based on affine transformation and template matching. Appl. Res. Comput. 2022, 39, 919–924. [Google Scholar]
  25. Han, J. Higher Institutional Science; Mechanical Industry Press: Beijing, China, 2004. [Google Scholar]
  26. Dainis, A.; Juberts, M. Accurate remote measurement of robot trajectory motion. In Proceedings of the 1985 IEEE International Conference on Robotics and Automation, St. Louis, MO, USA, 25–28 March 1985; pp. 92–99. [Google Scholar]
  27. Ganapathy, S. Decomposition of transformation matrices for robot vision. Pattern Recognit. Lett. 1984, 2, 401–412. [Google Scholar] [CrossRef]
  28. Ma, S.; Zhang, Z. Computer Vision: Basics of Computing Theory and Algorithms; Science Press: Beijing, China, 1998. [Google Scholar]
  29. Wang, T.; Wang, L.; Zhang, W.; Duan, X.; Wang, W. Design of infrared target system with Zhang Zhenyou calibration method. Opt. Precis. Eng. 2019, 27, 1828–1835. [Google Scholar] [CrossRef]
  30. Feng, W. Research and Implementation of Image Barrel Distortion Correction. Master’s Thesis, North China University of Technology, Beijing, China, 2011. [Google Scholar]
  31. Zhang, Y.; Qiu, Z.; Zhang, X. Calibration method for hand-eye system with rotation and translation couplings. Appl. Opt. 2019, 58, 5375–5387. [Google Scholar] [CrossRef]
  32. Wu, A.; He, W.; Ouyang, X. A method of hand-eye calibration for palletizing robot based on openCV. Manuf. Technol. Mach. Tool 2018, 6, 45–49. [Google Scholar]
Figure 1. Eye-to-hand diagram.
Figure 2. Pinhole imaging model.
Figure 3. Radial distortion diagram.
Figure 4. Tangential distortion diagram.
Figure 5. Coordinate system transformation.
Figure 6. Camera calibration sample collection.
Figure 7. Calibration plate attitude and calibration average projection error.
Figure 8. Image processing results of the calibration plate before and after distortion correction: (a) before distortion correction and (b) after distortion correction.
Figure 9. Four-point calibration method.
Figure 10. Broken line diagram of rigid transformation matrix parameters and calibrated height parameters: (a) The Z−axis is the parameter value, and (b) Z−axis is the calibration height.
Figure 11. Relationship between rigid transformation parameters and calibration height.
Figure 12. Coordinate analysis diagram: (a) three−dimensional display of calibration plate coordinates and (b) actual camera robot coordinates.
Figure 13. Relationship between length–width pixel ratio and calibration plate height.
Table 1. Alumina calibration plate parameters.
Model | Dimensions (mm) | Checker Side Length (mm) | Pattern Array | Accuracy (mm)
LGP500-300 | 500 × 500 | 30 | 13 × 12 | ±0.02
Table 3. Four-point calibration results.
Height (mm) | Corner Position | X_R (mm) | Y_R (mm) | X_C (pixel) | Y_C (pixel)
15 | A | −1449.7 | 1585.93 | 1284.804 | 432.0025
15 | B | 549.17 | 1604.26 | 1290.65 | 3296.744
15 | C | 536.96 | 2804.99 | 3015.627 | 3293.224
15 | D | −1462.22 | 2785.86 | 3009.781 | 428.4819
45 | A | −1414.15 | 1530.26 | 1191.118 | 470.0939
45 | B | 584.55 | 1561.77 | 1218.606 | 3365.488
45 | C | 564.22 | 2762.3 | 2958.563 | 3348.97
45 | D | −1434.56 | 2729.52 | 2931.075 | 453.5752
75 | A | −1417.77 | 1522.52 | 1165.935 | 453.0028
75 | B | 581.1 | 1536.81 | 1170.013 | 3373.189
75 | C | 571.3 | 2737.35 | 2925.071 | 3370.738
75 | D | −1427.86 | 2722.06 | 2920.992 | 450.5518
105 | A | −1429.65 | 1529.60 | 1163.014 | 423.642
105 | B | 569.51 | 1530.36 | 145.789 | 3368.993
105 | C | 568.25 | 2730.58 | 2916.097 | 3379.346
105 | D | −1431.17 | 2729.40 | 2933.322 | 433.995
Table 4. Parameters of rigid transformation matrix M_CR at each height.
Height (mm) | A11 | A12 | Tx | A21 | A22 | Ty
15 | −0.00574 | 0.697815001 | −1743.855103 | 0.695863008 | 0.005118 | 689.4714355
30 | −0.00531 | 0.694389999 | −1738.144653 | 0.692277014 | 0.004842 | 699.4268799
45 | −0.00515 | 0.69036603 | −1732.568604 | 0.689655006 | 0.004555 | 706.3405762
55 | −0.00428 | 0.688233972 | −1729.682251 | 0.687260985 | 0.003596 | 713.5334473
75 | −0.00471 | 0.684557021 | −1722.456787 | 0.683767021 | 0.00411 | 723.1809692
85 | −0.00460 | 0.682471991 | −1718.754883 | 0.681801021 | 0.003916 | 728.6638184
95 | −0.00443 | 0.680664003 | −1716.002808 | 0.679901004 | 0.003839 | 733.9542236
105 | −0.00475 | 0.678767979 | −1711.740601 | 0.677829027 | 0.004294 | 739.3514404
Table 5. Rigid transform matrix parameters fitting table.
Linear fitting: Y = A + B × X; weight: no weighting.
Quantity | A11 | A12 | Tx | A21 | A22 | Ty
Intercept (A) | −0.00561 ± 2.8386 × 10−4 | 0.70041 ± 3.91838 × 10−4 | −1748.81438 ± 0.28725 | 0.69845 ± 2.05966 × 10−4 | 0.005 ± 3.34172 × 10−4 | 682.24144 ± 0.62448
Slope (B) | 1.16124 × 10−5 ± 4.06162 × 10−6 | −2.09957 × 10−4 ± 5.60664 × 10−6 | 0.35111 ± 0.00411 | −1.96476 × 10−4 ± 2.94707 × 10−6 | −1.13693 × 10−5 ± 4.78151 × 10−6 | 0.54652 ± 0.00894
Residual sum of squares | 7.12351 × 10−7 | 1.35738 × 10−6 | 0.72949 | 3.75039 × 10−7 | 9.87247 × 10−7 | 3.44764
Pearson's r | 0.75941 | −0.99787 | 0.99959 | −0.99933 | −0.69652 | 0.9992
R-Square (COD) | 0.5767 | 0.99574 | 0.99918 | 0.99865 | 0.48515 | 0.9984
Adj. R-Square | 0.50615 | 0.99503 | 0.99904 | 0.99843 | 0.39934 | 0.99813
Table 6. Length and width pixel ratio at different heights.
Height (mm) | Workpiece Size (mm) | Image Size (pixel) | Length Pixel Ratio (Pr_l) | Width Pixel Ratio (Pr_w)
15 | 2000 × 1200 | 2865 × 1724 | 1.432374 | 1.437484
30 | 2000 × 1200 | 2879 × 1734 | 1.439345 | 1.444716
45 | 2000 × 1200 | 2896 × 1740 | 1.447762 | 1.45003
55 | 2000 × 1200 | 2906 × 1746 | 1.452295 | 1.454739
75 | 2000 × 1200 | 2920 × 1755 | 1.460094 | 1.46255
85 | 2000 × 1200 | 2929 × 1760 | 1.464606 | 1.466992
95 | 2000 × 1200 | 2937 × 1765 | 1.4685 | 1.470833
105 | 2000 × 1200 | 2945 × 1770 | 1.4727 | 1.475282
Table 7. Verification of the applicability of the rigid transformation matrix.
Height (mm) | Measured X (mm) | Measured Y (mm) | Calculated X (mm) | X Error | Calculated Y (mm) | Y Error
20 | 691.07 | 1983.25 | 691.2215 | 0.0219% | 1982.9732 | 0.0139%
55 | −1487.95 | 1901.81 | −1488.0752 | 0.0084% | 1901.3576 | 0.0238%
88 | −1419.21 | 2504.49 | −1419.7325 | 0.0368% | 2503.8941 | 0.0238%
105 | 537.27 | 2542.82 | 537.6739 | 0.0752% | 2543.1803 | 0.0142%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
