Article

Significance of Camera Pixel Error in the Calibration Process of a Robotic Vision System

by Mohammad Farhan Khan 1, Elham M. A. Dannoun 2, Muaffaq M. Nofal 3 and M. Mursaleen 4,*

1 Electrical Engineering Department, Indian Institute of Technology Roorkee, Roorkee 247667, India
2 Department of Mathematics and Sciences, Woman Campus, Prince Sultan University, Riyadh 11586, Saudi Arabia
3 Department of Mathematics and General Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia
4 Department of Medical Research, China Medical University Hospital, China Medical University (Taiwan), Taichung 40402, Taiwan
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(13), 6406; https://doi.org/10.3390/app12136406
Submission received: 12 May 2022 / Revised: 14 June 2022 / Accepted: 20 June 2022 / Published: 23 June 2022
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract: Although robotic vision systems offer a promising technology solution for rapid and reconfigurable in-process 3D inspection of complex and large parts in contemporary manufacturing, measurement accuracy poses a challenge for their wide deployment. One of the key issues in adopting a robotic vision system is understanding the extent of its measurement errors, which are directly correlated with the calibration process. In this paper, possible sources of practical and inherent measurement uncertainty in the calibration process of a robotic vision system are discussed. The system considered in this work consists of an image sensor mounted on an industrial robot manipulator with six degrees of freedom. Based on a series of experimental tests and computer simulations, the paper gives a comprehensive performance comparison of different calibration approaches and shows the impact of measurement uncertainties on the calibration process. The error sensitivity analysis shows that minor uncertainties in the calibration process can significantly affect the accuracy of the robotic vision system. Further investigations suggest that errors induced in the image calibration pattern have a more adverse effect on the hand–eye calibration process than angular errors in the robot joints.

1. Introduction

Robotic vision systems offer a promising technology solution for rapid inspection in complex scenarios such as industrial assessment, underwater inspection, remote sensing, and more. However, the reliability of vision-based solutions to real-world problems is substantially limited by measurement errors, which makes it challenging for such systems to take accurate decisions [1].
The measurement accuracy of an object with the help of an image sensor mounted on a multi-axis-robot substantially depends on precise estimation of three geometrical transformation matrices, namely matrix A, which represents the transformation from the robot wrist (flange) coordinate frame to the robot world coordinate frame (base coordinate), matrix B, which represents the transformation from the object coordinate frame to the camera coordinate frame, and matrix X, which represents the transformation from the flange coordinate frame to the camera coordinate frame.
To estimate matrix A, the kinematic model of the robot needs to be developed first, and its kinematic parameters are then estimated with the help of robot calibration methods. Several calibration methods for estimating the kinematic parameters have been introduced in the literature; examples include the circle point analysis method [2], the arm signature identification method [3], and the screw axis identification method [4]. On the other hand, estimating matrix B through camera calibration is common practice and well documented in the computer vision community [5,6,7,8,9]. In the calibration process, the camera mounted on the robot flange captures images of a calibration board (checkerboard) from a number of different positions and orientations. A simple yet effective algorithm suggested by Zhang [7] is adopted in this work, which is briefly discussed in Section 2.2.
To investigate the major sources of measurement uncertainty and their overall impact on a common robotic vision system, a practical setup consisting of a camera, a six-axis robotic arm and a calibration object is used. The system calibration requires accurate estimation of the relative position and orientation of the camera with respect to the flange, with the help of existing hand–eye calibration algorithms [10,11,12,13,14,15].
The practical setup used in this paper to calibrate the robotic vision system is illustrated in Figure 1. The developed robotic vision system follows the classical approach to estimating all three transformation matrices required in this experiment. That is, the camera mounted on the flange of the robotic arm is navigated from position i to j to capture images of the calibration object. Note that A and B are transformation matrices that can be readily obtained by recording the pose of the flange relative to its base for η positions and capturing images of the calibration object from the same η positions, respectively. To calculate the unknown matrix X by solving the system equation A X = X B, the minimum value of η should not be less than three [10].
In Figure 1, let P be a transformation matrix that defines the transformation of coordinates from the hand coordinates (flange of the robot) to the robot world coordinates (robot base). The matrix P of dimension 4 × 4 can be defined as:

P = \begin{bmatrix} R_P & T_P \\ 0 & 1 \end{bmatrix},

where R_P is a 3 × 3 rotation matrix, and T_P is a 3 × 1 translation vector. Let the matrix P be known for positions i and j; then matrix A can be estimated using the following expression:

A = P_j^{-1} P_i.
Similarly, let Q be a transformation matrix that defines the transformation of coordinates from the object coordinates to the camera coordinates. The matrix Q is again of dimension 4 × 4 and is defined below:

Q = \begin{bmatrix} R_Q & T_Q \\ 0 & 1 \end{bmatrix},

where R_Q is a 3 × 3 rotation matrix, and T_Q is a 3 × 1 translation vector. Let the matrix Q be known for positions i and j by capturing images at both positions and performing extrinsic camera calibration; then matrix B can be estimated using:

B = Q_j Q_i^{-1}.
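The construction of A and B from recorded poses can be sketched in a few lines of NumPy. The function and variable names below are illustrative (not from the paper); the snippet simply assembles 4 × 4 homogeneous transforms from a rotation and a translation and forms the flange motion A and camera motion B between two poses:

```python
import numpy as np

def make_homogeneous(R, t):
    """Assemble a 4x4 homogeneous transform from a 3x3 rotation R
    and a length-3 translation t, as in the definitions of P and Q."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.ravel(t)
    return T

def motion_A(P_i, P_j):
    """Flange motion between poses i and j:  A = P_j^{-1} P_i."""
    return np.linalg.inv(P_j) @ P_i

def motion_B(Q_i, Q_j):
    """Camera motion between poses i and j:  B = Q_j Q_i^{-1}."""
    return Q_j @ np.linalg.inv(Q_i)
```

For any hand–eye-consistent pair of poses, the resulting A and B satisfy A X = X B by construction, which is what the calibration algorithms in Section 2.3 exploit.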
Although various robotic vision systems have been proposed for in-process 3D measurement, with the advantages of flexibility and reconfigurability, the level of measurement uncertainty caused by system calibration errors has not been fully investigated. It is worth noting that developing a robotic vision system for 3D assessment is a challenging task because all three calibration processes involved in the system are prone to inaccuracies in the evaluation of the transformation matrices. Inaccuracy in estimating matrices A and B can be expected to significantly impact the hand–eye calibration process. In this paper, the analysis focuses on the propagation of errors from matrices A and B to matrix X. The theoretical and experimental investigations reveal that, owing to the difference in how the transformation matrices A and B are evaluated, errors in matrix B predominantly impact the overall calibration process.
The remainder of the paper is arranged as follows: Section 2 gives an overview of the experimental setup and possible sources of measurement uncertainty for hand–eye calibration based on the kinematic model of a six-axis KUKA robot manipulator. Section 3 presents results on the impact of measurement uncertainties on the calibration process. Section 4 demonstrates a case study on measuring the dimensions of 3D parts. Finally, the paper is concluded in Section 5.

2. Experimental Setup and Measurement Uncertainties

The robotic vision system illustrated in Figure 1 comprises a six-degrees-of-freedom (DOF) robot, i.e., a KUKA Agilus KR6 R900 sixx robot, a 3CCD Jai camera mounted on the flange of the robot, and a calibration object in the form of a 63-square (7 × 9) checkerboard. The specifications of the Jai camera are given in Table 1. The accuracy of the robotic vision system depends on multiple factors; examples include kinematic errors, the resolution of the camera, the accuracy of the parameters determined during camera calibration, the accuracy of the image processing algorithms used to detect objects, etc. [16]. The types of calibration involved in developing a robotic vision system and the possible causes of uncertainty in each step are discussed below. Further, feasible ways to overcome the effect of such uncertainties are reported in each sub-section.

2.1. Robot Base to Flange Calibration

Robot calibration, which compensates for the inaccuracy of robots, has been studied for more than three decades. Although robot manufacturing has improved considerably during these years, the theory of robot calibration remains the same. It requires a mathematical model tailored to the individual robot to be calibrated. This software model of the robot is then applied to robot paths, replacing the nominal physical model.
Calibrating and testing a robot should be performed using high-end measurement equipment such as a laser tracker. Two types of parameters are commonly used for assessing the performance of an industrial robot: repeatability and accuracy. Modern robots can achieve reasonably good repeatability; for example, a small robot with a 5 kg payload typically has a repeatability of 0.02–0.03 mm. To improve the absolute accuracy of a robot, the calibration is carried out in four steps:
1.
Determination of a mathematical model of the robot geometry and motion (kinematics and dynamics);
2.
Measurement of the position and orientation of the robot end-effector (flange);
3.
Identification of a relationship of the measurements and robot joint positions;
4.
Modification of software control commands to compensate for the determined model.
Among the aforementioned four steps, the determination of the robot model (step 1) and the identification process (step 3) are the most challenging. A complete calibration model should contain corrections of the geometric parameters (also referred to as kinematic parameters) and the necessary dynamic parameters, e.g., stiffness. A correction for an unknown error could be an offset or a linear approximation. In this section, applying the well-known Denavit–Hartenberg (D–H) coordinate system to the KUKA robot manipulator [17], the offset values associated with the parameters of the KUKA robot are described.
The D–H coordinate system is helpful in estimating the kinematics of the robot, describing the relative position of the flange with respect to its base. The robot kinematics description is a geometric transformation, where the kinematic parameters describe the motion of the robot manipulator without considering forces and torques. The generic form of the convention for link l of the robot manipulator illustrated in Figure 2 can be described as follows:
T_{l-1}^{l} = R_{z,\theta_l}\, t_{z,d_l}\, t_{x,a_l}\, R_{x,\alpha_l} =
\begin{bmatrix}
\cos\theta_l & -\sin\theta_l \cos\alpha_l & \sin\theta_l \sin\alpha_l & a_l \cos\theta_l \\
\sin\theta_l & \cos\theta_l \cos\alpha_l & -\cos\theta_l \sin\alpha_l & a_l \sin\theta_l \\
0 & \sin\alpha_l & \cos\alpha_l & d_l \\
0 & 0 & 0 & 1
\end{bmatrix}.
Note that the link transformation T_{l-1}^{l} contains two basic rotations and two translations: specifically, R_{z,\theta_l} is a rotation around the z-axis by \theta_l degrees; t_{z,d_l} is a translation along the z-axis by distance d_l; t_{x,a_l} is a translation along the x-axis by distance a_l; and R_{x,\alpha_l} is a rotation around the x-axis by \alpha_l degrees. The KUKA robot manipulator used in this study has six links (joints); therefore, its overall kinematic transformation is described using six transformation matrices. Ideally, the relationship between successive joints l−1 and l (contiguous links) in the form of the transformation matrix T_{l-1}^{l} for the KUKA robot is given below.
T_0^1 = \begin{bmatrix} \cos\theta_1 & 0 & -\sin\theta_1 & a_1\cos\theta_1 \\ \sin\theta_1 & 0 & \cos\theta_1 & a_1\sin\theta_1 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

T_1^2 = \begin{bmatrix} \cos\theta_2 & -\sin\theta_2 & 0 & a_2\cos\theta_2 \\ \sin\theta_2 & \cos\theta_2 & 0 & a_2\sin\theta_2 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

T_2^3 = \begin{bmatrix} \cos(\theta_3 - \pi/2) & 0 & -\sin(\theta_3 - \pi/2) & a_3\cos(\theta_3 - \pi/2) \\ \sin(\theta_3 - \pi/2) & 0 & \cos(\theta_3 - \pi/2) & a_3\sin(\theta_3 - \pi/2) \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

T_3^4 = \begin{bmatrix} \cos\theta_4 & 0 & \sin\theta_4 & 0 \\ \sin\theta_4 & 0 & -\cos\theta_4 & 0 \\ 0 & 1 & 0 & d_4 \\ 0 & 0 & 0 & 1 \end{bmatrix}

T_4^5 = \begin{bmatrix} \cos\theta_5 & 0 & -\sin\theta_5 & 0 \\ \sin\theta_5 & 0 & \cos\theta_5 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

T_5^6 = \begin{bmatrix} \cos\theta_6 & -\sin\theta_6 & 0 & 0 \\ \sin\theta_6 & \cos\theta_6 & 0 & 0 \\ 0 & 0 & 1 & d_6 \\ 0 & 0 & 0 & 1 \end{bmatrix},
where \theta_r, r \in \{1, \ldots, 6\}, are the six axis-specific angles of the KUKA robot, a_1 = 25, a_2 = 455, a_3 = 35, d_4 = −420, and d_6 = −80 (in mm). The total transformation matrix T_{total} from base to flange can be estimated by simply multiplying all the transformation matrices, that is,

T_{total} = T_0^1 \, T_1^2 \, T_2^3 \, T_3^4 \, T_4^5 \, T_5^6.
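The forward-kinematic chain above can be sketched directly from the generic D–H link transform. The parameter table below is a sketch, not the authors' exact model: the link lengths and offsets are the nominal values quoted in the text, while the twist angles \alpha_l are assumptions chosen to reproduce the structure of the six matrices above.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Generic D-H link transform T_{l-1}^{l} = R_z(theta) t_z(d) t_x(a) R_x(alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# (theta offset, d, a, alpha) per link; a and d in mm from the paper,
# alpha values ASSUMED for illustration; theta_3 carries the -pi/2 offset.
DH_PARAMS = [
    (0.0,          0.0,    25.0, -np.pi / 2),
    (0.0,          0.0,   455.0,  0.0),
    (-np.pi / 2,   0.0,    35.0, -np.pi / 2),
    (0.0,       -420.0,     0.0,  np.pi / 2),
    (0.0,          0.0,     0.0, -np.pi / 2),
    (0.0,        -80.0,     0.0,  0.0),
]

def forward_kinematics(thetas):
    """Chain the six link transforms: T_total = T_0^1 T_1^2 ... T_5^6."""
    T = np.eye(4)
    for (theta_off, d, a, alpha), th in zip(DH_PARAMS, thetas):
        T = T @ dh_transform(th + theta_off, d, a, alpha)
    return T
```

Whatever twist convention is used, the product of valid link transforms must itself be a rigid transform, which gives a quick sanity check on any implementation.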
The result of this calibration is a transformation matrix that describes the movement of the robot's flange with respect to its base from position i to j, in the form of a 4 × 4 matrix comprising the sub-matrix R_P and vector T_P. Note that the above kinematic model does not accurately define the movement of industrial robots, because such robots are inherently less accurate than CNC machines. Robot paths suffer from inaccuracies introduced by the way multiple robot axes are linked together. The manipulator arm is also prone to dynamic uncertainties arising from its speed of travel, payload, backlash, impact forces applied to the robot end-effector by the part, etc.
The identification of a relationship between the measurements and the robot joint positions is essentially an inverse displacement problem (IDP). This is the core part of robot calibration and can be solved analytically or numerically. An analytical solution is the direct inverse solution and should be used whenever possible. A numerical solution relies on iterative computation at the cost of computation time, but it generally copes well with complex equations. In this work, we have opted for the latter technique to estimate the offset values associated with the robot parameters. The offset values (Δ) associated with our KUKA robot are defined in Table 2.
While performing robot base to flange calibration, the overall accuracy of the robot plays an important role. The KUKA robotic arm considered in this experiment has a pose repeatability radius of ±0.03 mm [18], which can be regarded as one form of kinematic error. One reason for such kinematic error is the ageing of mechanical components, which creates a difference between the movement reported by the sensors and the actual arm movement. Non-kinematic errors that may also impact the calibration process include mechanical backlash, temperature effects on mechanical components, etc. To ensure that the KUKA robotic arm considered in this work is free from both kinematic and non-kinematic errors, inverse kinematics is used to evaluate the joint angles that achieve a desired position and orientation of the flange relative to the base frame.

2.2. Camera to Object Calibration

Consider a 3D point P, represented by homogeneous coordinates (x, y, z, 1)^T, and its corresponding 2D image projection point p, represented by homogeneous coordinates (α, β, 1)^T. Assuming a pinhole camera model, the mapping between the 3D point and the corresponding 2D image point can be defined as:
s\,[\alpha, \beta, 1]^T = M \cdot [x, y, z, 1]^T
= \begin{bmatrix} m_{11} & m_{12} & m_{13} & m_{14} \\ m_{21} & m_{22} & m_{23} & m_{24} \\ m_{31} & m_{32} & m_{33} & m_{34} \end{bmatrix} \cdot [x, y, z, 1]^T
= K\,[R_Q \mid T_Q] \cdot [x, y, z, 1]^T,

where s is a scaling factor, M is the camera projection matrix of dimension 3 × 4, K is the 3 × 3 camera intrinsic matrix, and [R_Q | T_Q] is the 3 × 4 extrinsic camera matrix, whose first three columns R_Q represent rotation and whose last column T_Q represents translation, relating the object coordinate frame to the camera coordinate frame.
Finally, the 2D image point coordinates α and β are calculated as:

\alpha = \frac{m_{11}x + m_{12}y + m_{13}z + m_{14}}{m_{31}x + m_{32}y + m_{33}z + m_{34}}, \qquad
\beta = \frac{m_{21}x + m_{22}y + m_{23}z + m_{24}}{m_{31}x + m_{32}y + m_{33}z + m_{34}}.
To find the matrices K and R_Q and the vector T_Q, camera calibration is required [7]. For that process, known 3D points in the object coordinate frame (e.g., a checkerboard pattern) and their corresponding 2D image points are often used.
While performing this type of calibration, various factors can influence its accuracy; examples include camera resolution, sensor noise, inaccurate feature point detection, instability of the camera mounting device, the accuracy of the numerical technique used, etc. One of the most feasible ways to estimate the uncertainty associated with this type of calibration is to evaluate the re-projection error for each image. The re-projection error is a quantitative measure of accuracy, evaluated by estimating the distance between a feature point detected in the calibration object (checkerboard) and the corresponding world point projected into the same image. Ideally, the re-projection error should be close to zero.
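The projection equations and the re-projection error check can be sketched together in NumPy. The function names are illustrative (not from the paper); `project` applies s p = K [R|t] P and the homogeneous division above, and `reprojection_rmse` measures the pixel distance between detected and re-projected points:

```python
import numpy as np

def project(K, R, t, pts3d):
    """Project Nx3 object points through the pinhole model s p = K [R|t] P."""
    P = K @ np.hstack([R, np.reshape(t, (3, 1))])      # 3x4 projection matrix M
    ph = np.hstack([pts3d, np.ones((len(pts3d), 1))])  # homogeneous coordinates
    proj = (P @ ph.T).T
    return proj[:, :2] / proj[:, 2:3]                  # divide by the scale s

def reprojection_rmse(K, R, t, pts3d, detected2d):
    """Root-mean-square pixel distance between detected corners and
    re-projected object points; ideally close to zero."""
    err = project(K, R, t, pts3d) - detected2d
    return np.sqrt(np.mean(np.sum(err ** 2, axis=1)))
```

Running this per calibration image gives the quantitative accuracy measure described above, with larger values flagging poor corner detection or unstable extrinsics.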

2.3. Flange to Camera Calibration

The estimation of the unknown matrix X from the equation A X = X B is also known as hand–eye calibration. The system equation can be decomposed into two components, namely orientational and positional components, which are derived below:
\begin{bmatrix} R_A & t_A \\ 0 & 1 \end{bmatrix} \begin{bmatrix} R_X & t_X \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} R_X & t_X \\ 0 & 1 \end{bmatrix} \begin{bmatrix} R_B & t_B \\ 0 & 1 \end{bmatrix}

\begin{bmatrix} R_A R_X & R_A t_X + t_A \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} R_X R_B & R_X t_B + t_X \\ 0 & 1 \end{bmatrix}

Using the above expression, the respective orientational and positional components can be defined as:

R_A R_X = R_X R_B

R_A t_X + t_A = R_X t_B + t_X.
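These two component equations can be turned into a linear solver. The sketch below uses the Kronecker-product linearisation of R_A R_X = R_X R_B, in the spirit of the Andreff et al. method [14] discussed later; it is an illustrative simplification (rotation from an SVD null-space, then translation by least squares), not the authors' exact implementation:

```python
import numpy as np

def hand_eye_kronecker(As, Bs):
    """Solve AX = XB for X from paired 4x4 motions (As, Bs).

    Rotation: stack (I (x) R_A - R_B^T (x) I) vec(R_X) = 0 over all motions
    (column-major vec), take the smallest right singular vector, and
    re-orthogonalise onto SO(3).  Translation: solve the stacked system
    (R_A - I) t_X = R_X t_B - t_A in the least-squares sense.
    """
    M = []
    for A, B in zip(As, Bs):
        RA, RB = A[:3, :3], B[:3, :3]
        M.append(np.kron(np.eye(3), RA) - np.kron(RB.T, np.eye(3)))
    _, _, Vt = np.linalg.svd(np.vstack(M))
    RX = Vt[-1].reshape(3, 3, order="F")      # column-major un-vec
    U, _, Wt = np.linalg.svd(RX)              # project onto the closest rotation
    RX = U @ Wt
    if np.linalg.det(RX) < 0:                 # fix the sign ambiguity of the null vector
        RX = -RX
    C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    d = np.concatenate([RX @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    tX, *_ = np.linalg.lstsq(C, d, rcond=None)
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = RX, tX
    return X
```

At least two motions with non-parallel rotation axes are needed for a unique solution, which matches the η ≥ 3 pose requirement stated earlier.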
Hand–eye calibration techniques are broadly classified into two categories: the rotation and translation parts of the matrix X are evaluated either individually or in combined form. In the former category, Shiu–Ahmad [10] and Tsai–Lenz [11] estimated the rotation matrix and translation vector individually using least-squares fitting algorithms. Least-squares fitting algorithms are fast and simple but might result in an inaccurate solution [19]. For estimating the rotation matrix of X, the concept of quaternions has been used by Zhuang et al. [20], while Chou–Kamel [21] simplified the expression by linearising the equation before estimating the solution using singular value decomposition. Horaud–Dornaika [12] formulated the rotation matrix with quaternions and conducted a non-linear optimisation on the rotation matrix and translation vector.
In the latter category, Remy et al. [13] proposed a formulation in which the number of unknown parameters remains constant and constrained this condition over the complete data set. Andreff et al. [14] estimated the solution of the system equation by applying the Kronecker product to a reformulated linear system. Zhuang–Shiu [22] applied non-linear optimisation to both the rotation matrix and the translation vector and introduced the possibility of disregarding the orientation of the camera for parameter estimation. A similar approach was presented by Fassi–Legnani [23]. Park–Martin [24] also conducted a similar non-linear optimisation with the help of Lie group theory. To estimate the relative position and orientation between the reference frames and the robot link, Lu–Chou [25] combined the kinematic equations of rotation and translation into two simple and well-structured linear systems.
The major possible sources of uncertainty in hand–eye calibration are erroneous estimates of matrices A and B. It is well known that robotic vision systems are not completely free from errors, which makes the hand–eye calibration process more challenging. The hand–eye calibration algorithms discussed above perform ideally under error-free theoretical setups but might fail to deliver the same performance in practical setups.

3. Results and Discussion

To estimate the theoretical performance of existing hand–eye calibration algorithms, the matrix B is estimated by moving the camera-mounted flange of the KUKA robot to a number of different positions and capturing an image of the calibration object from each position. Further, a theoretical value of X is assumed, and the equation A X = X B is used to estimate matrix A for each flange navigation from position i to j. The purpose of opting for a theoretical robotic vision setup is to estimate the impact of systematic error on hand–eye calibration algorithms. The theoretical setup is also helpful in comparing the roles of uncertainties in matrices A and B in estimating matrix X. In this section, the impact of measurement uncertainties on theoretical and practical robotic vision systems is thoroughly discussed.

3.1. Error Sensitivity Analysis of Theoretical Setup

The same setup as illustrated in Figure 1 is used to calibrate the theoretical robotic vision system. Let the camera mounted on the flange of the robotic arm be navigated from position i to j, and at each position an image of the calibration object is captured. Then, a classical camera calibration method is used to estimate matrix Q for each position. The navigation of the camera-mounted robot flange from position i to j (refer to Figure 1) can be expressed by the following system equation:

P_i X Q_i = P_j X Q_j.

In Equation (22), the values of Q_i and Q_j are known with the help of the camera calibration approach suggested by Zhang [7]. Assuming a theoretical value of matrix X (denoted as X^*), matrix P_j can be estimated from P_i by using the system equation defined in Equation (23):

P_j = (P_i X^* Q_i)(X^* Q_j)^{-1}.
After obtaining all the required matrices, the matrices A and B of the theoretical robotic vision system can be evaluated from Equations (2) and (4). The main objective of the theoretical estimation of X is to analyse the performance of hand–eye calibration algorithms under systematic errors. The advantage of opting for the theoretical setup is that the matrix X = X^* is accurately known and can be used as the reference matrix while performing uncertainty analyses by adding systematic errors to matrices A and B.
One of the most practical ways to introduce errors in matrix A is to systematically perturb the joint angles of the robotic manipulator. After each additive uncertainty, the erroneous transformation matrix \hat{P} is evaluated with the help of the kinematic model. A similar approach is applied for matrix B; that is, a systematic error is introduced on the perfectly detected calibration pattern in the images. After each error injection, the camera calibration algorithm is used to estimate the erroneous matrix \hat{Q}. The expression defined in Equation (22) is then modified to:
\hat{P}_i X \hat{Q}_i = \hat{P}_j X \hat{Q}_j

Solving this system equation gives

\hat{P}_j^{-1} \hat{P}_i X = X \hat{Q}_j \hat{Q}_i^{-1}

\hat{A} X = X \hat{B},
where the erroneous matrix \hat{A} represents the cumulative kinematic and non-kinematic errors discussed in Section 2.1, while the erroneous matrix \hat{B} represents the cumulative accuracy error discussed in Section 2.2. The transformation matrix X is unknown and needs to be estimated with the help of hand–eye calibration algorithms for different numbers of robot positions (η). Note that to estimate X, the value of η should not be less than three; hence we have opted for η = 4 and η = 5 in this study.
After introducing systematic errors in the transformation matrices P and Q, the system equation is solved using three popular hand–eye calibration methods. The first two methods belong to the first computation category, based on the individual estimation of the rotation matrix and translation vector [11,12], and the third belongs to the second computation category, based on combined estimation [14].
Note that uncertainties in the angles of the robot manipulator are obtained by introducing an error δ_A in all six joint angles (θ's) of the robot in the form of normally distributed random numbers (N(μ_A, σ_A)) simulating noise, where μ_A (in degrees) ranges between [−1, 1] with a step size of 10^{-1} and σ_A = 0.05 μ_A. Similarly, the uncertainty in the calibration pattern is obtained by introducing an error δ_B in all the detected image corner points (α_i, β_i) in the form of normally distributed random numbers lying within a window size of [0, 5] with unity step size. The concept of error introduction in one of the detected corner points is illustrated in Figure 3; such errors can occur due to the use of a low-resolution camera.
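The two noise-injection steps can be sketched as follows. This is an illustrative sketch under the assumption that joint angles are held in radians internally while the perturbation μ_A, σ_A is specified in degrees, as in the text; function names are not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducible simulation runs

def perturb_joint_angles(thetas, mu_deg, sigma_deg):
    """delta_A: add N(mu_A, sigma_A) angular noise (degrees) to all six
    joint angles before re-evaluating the erroneous P-hat via the
    kinematic model.  Assumes thetas are in radians."""
    noise = rng.normal(mu_deg, sigma_deg, size=len(thetas))
    return np.asarray(thetas, float) + np.deg2rad(noise)

def perturb_corners(corners, delta_b):
    """delta_B: add pixel-level Gaussian noise to the detected checkerboard
    corner points (alpha_i, beta_i) before re-running camera calibration
    to obtain the erroneous Q-hat."""
    return np.asarray(corners, float) + rng.normal(0.0, delta_b,
                                                   size=np.shape(corners))
```

Sweeping μ_A over [−1, 1] and δ_B over [0, 5], and re-solving the system equation after each perturbation, reproduces the kind of sensitivity surfaces reported in Figure 4.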
The norms of the errors associated with the estimation of matrix X using \hat{A} and \hat{B} are given by:
U_R = \frac{\sum_{i=1}^{n} \sum_{k=1}^{o} \| R_{X^*} - \hat{R}_X^{i,k} \|_F}{n \times o}

U_T = \frac{\sum_{i=1}^{n} \sum_{k=1}^{o} \| T_{X^*} - \hat{T}_X^{i,k} \|_E}{n \times o},
where n (= 56) represents the number of different combinations of flange positions, o (= 1000) is the number of simulation runs for the normally distributed noise, ||·||_F is the Frobenius norm, and ||·||_E is the Euclidean norm (in mm), obtained by solving the system equation \hat{A} X = X \hat{B}. R_{X^*} and T_{X^*} are the rotation matrix and translation vector of the theoretical X^*, while \hat{R}_X and \hat{T}_X are the rotation matrix and translation vector of the transformation matrix X obtained after introducing uncertainty in the system. Ideally, the values of U_R and U_T are zero when both errors δ_A and δ_B are zero.
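The averaged error norms U_R and U_T above reduce to a few array operations. In this sketch (names illustrative), the estimates are stacked as an (n, o, 3, 3) rotation array and an (n, o, 3) translation array over the n position combinations and o noise realisations:

```python
import numpy as np

def uncertainty_norms(RX_ref, tX_ref, RX_est, tX_est):
    """Average Frobenius / Euclidean error norms U_R and U_T over
    n flange-position combinations and o noise realisations."""
    RX_est = np.asarray(RX_est)   # shape (n, o, 3, 3)
    tX_est = np.asarray(tX_est)   # shape (n, o, 3)
    U_R = np.mean(np.linalg.norm(RX_ref - RX_est, ord="fro", axis=(2, 3)))
    U_T = np.mean(np.linalg.norm(tX_ref - tX_est, axis=2))
    return U_R, U_T
```

Both quantities vanish exactly when every estimate matches the reference, mirroring the δ_A = δ_B = 0 baseline.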
Figure 4 illustrates the impact of measurement uncertainties on estimating matrix X from the system equation using three different hand–eye calibration algorithms, where the change in colour from navy blue to red represents increasing error in accurately estimating the transformation matrix X. Figure 4a–f show the sensitivity of the three hand–eye calibration methods to uncertainty in both matrices A and B. The uncertainties in matrix A are introduced by adding normally distributed noise to all six axes present in the robot kinematic model, while uncertainty in matrix B is introduced by adding noise to all the perfectly detected calibration points in the image.
Based on the three hand–eye calibration algorithms, the observations suggest that inaccuracies in matrices A and B have an adverse impact on the correct estimation of X, especially for smaller η, as shown in Figure 4. It is seen that when δ_A ≠ 0 and δ_B = 0, the Horaud–Dornaika and Tsai–Lenz methods are able to estimate R_X accurately, while the Andreff et al. method is not able to find the true value of R_X, and the magnitude of the error function U_R increases as δ_A increases. This observation suggests that the two methods belonging to the first computation category of hand–eye calibration algorithms can be expected to estimate a more accurate value of R_X.
On the other hand, when δ_A = 0 and δ_B ≠ 0, none of the methods is able to estimate T_X accurately. However, out of the three hand–eye calibration methods, the performance of the Horaud–Dornaika and Tsai–Lenz methods is better than that of the Andreff et al. method. One reason behind this difference between the two computation categories is the additive effect of errors propagated from the rotation matrix to the translation vector in the Andreff et al. method. Further, the Andreff et al. method evaluates matrix X by linearising the system equation. Although linearising the system equation can reduce the complexity of the system and estimate matrix X in less time, this process is expected to discard useful constraints of the equation, which can result in a wrong estimation of the solution.
A close observation of Figure 4a–f reveals that introducing a minimal error of δ_B = 1 significantly impacts the accurate estimation of R_X and T_X. Note that δ_B = 1 corresponds to an area of a 3 × 3 pixel block, which comes out to be 194.6 μm², where each pixel size is 4.65 μm (refer to Table 1). Hence, it can be asserted that introducing angular errors in the robot joints impacts the hand–eye calibration process, but the magnitude of the resulting error is not significantly large. On the other hand, inducing errors in image pixel estimation (as in the case of a low-resolution camera setup) significantly affects the hand–eye calibration process. Nevertheless, such error can be significantly reduced by increasing the number of image captures at different robot positions, as evidenced by the differences between the mesh plot for η = 4 and the solid surface for η = 5 in Figure 4 for all three methods. Note that when the errors in both matrices A and B are zero, all three hand–eye calibration algorithms are able to estimate the correct value of X.

3.2. Error Sensitivity Analysis of Practical Setup

The theoretical error sensitivity analysis is extended to the practical setup illustrated in Figure 1. Unlike the theoretical setup, the true value of X is unknown and needs to be estimated with the help of hand–eye calibration algorithms. The norms of the errors associated with the estimation of matrix X with the help of the practically estimated A and B are given by:
U_R = \frac{\sum_{i=1}^{n} \sum_{k=1}^{o} \| R_X - \hat{R}_X^{i,k} \|_F}{n \times o}

U_T = \frac{\sum_{i=1}^{n} \sum_{k=1}^{o} \| T_X - \hat{T}_X^{i,k} \|_E}{n \times o},
where n (= 56) represents the number of different combinations of flange positions, o (= 1000) represents the number of simulation runs for the normally distributed noise, ||·||_F is the Frobenius norm, and ||·||_E is the Euclidean norm, obtained by solving the system equations A X = X B and \hat{A} X = X \hat{B}. R_X and T_X are the rotation matrix and translation vector of X when no systematic error is introduced in the system equation, i.e., when both δ_A = 0 and δ_B = 0, while \hat{R}_X and \hat{T}_X are the rotation matrix and translation vector obtained after solving the uncertain system equation \hat{A} X = X \hat{B}.
Figure 5 shows the uncertainties in the rotation and translation of the transformation X obtained by solving the system equation using the same three hand–eye calibration algorithms. The change in colour from navy blue to yellow represents increasing deviation of R_X and T_X from \hat{R}_X and \hat{T}_X, respectively. A quick glance at Figure 5 suggests good similarity in the overall behaviour with respect to that of the theoretical setup (refer to Figure 4). A comparison of Figure 4a–d and Figure 5a–d for δ_B = 0 reveals that, for the practical setup, the hand–eye calibration methods are relatively sensitive to uncertainty and result in higher magnitudes of error.
A similar trend can be observed for the Andreff et al. method from Figure 4e,f and Figure 5e,f, with much higher magnitudes of error in the practical setup. One reason behind this sensitive behaviour is the pre-existence of errors in the image pattern estimation, which impacts the hand–eye calibration process. It can be asserted that the Horaud–Dornaika method performs considerably better than the other methods in overriding the robotic and camera errors in the calibration process.

4. Case Study: Measuring Dimension of 3D Parts

In this section, a 3D-printed car part is considered for estimating its dimensions through the robotic vision system under variable camera settings. The objective of choosing different camera resolutions is to observe the proposed proposition in a real-world scenario, where capturing low-resolution images replicates errors in image pixels.
Figure 6a shows an image of a car door with a planar edge-to-edge dimension of 2.37 × 0.98 m², captured from the top-left position with the door occupying the whole frame of the image. Note that in Figure 6a, 42 colourful circular stickers of 1 cm diameter are affixed at different door locations, which are helpful in estimating the error propagation as the pixel error increases.
To predict the door size and the diameter of the stickers, the image is first manually extracted from its background and then merged with the corresponding depth image, shown in Figure 6c. Note that in the depth image, the gray-shaded region indicates the variation in the distance of the camera sensor from the local door regions. The darker the depth image, the farther the camera sensor, and hence the higher the chance of uncertainty in pixel estimation compared to the light gray regions. Further, the variation in the shades of gray in the door region also indicates the existence of combined rotational and translational variation. Such variations are very common in industrial settings where large objects are inspected on the production belt.
Figure 7 illustrates the diameters of all stickers captured under three camera resolutions, i.e., 1392 × 1040, 1044 × 780, and 696 × 520. The falling trend in Figure 7 indicates that as the camera resolution decreases, so does the capability to estimate the object size accurately. It can be noted that when the resolution is reduced to 25% of its original value, the average error in estimating the sticker diameter also dropped by 21%. Further, a relation between the sticker-diameter estimate and the sticker distance can be established in a similar way under all three resolution settings, i.e., as the distance from the sensor to the object increases, the error in estimating the sticker diameter decreases. Similarly, the door dimensions estimated using the three camera settings, i.e., 1392 × 1040, 1044 × 780, and 696 × 520, come out to be 2.36 × 0.98 m², 2.33 × 0.96 m², and 2.31 × 0.95 m², respectively.
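The resolution trend is consistent with a simple pinhole-projection argument. As a back-of-envelope sketch (the pixel pitch and focal length come from Table 1, but the 1.5 m working distance and the whole-pixel quantisation model are assumptions for illustration), a 1 cm sticker spans only a handful of pixels, so rounding its apparent diameter to whole pixels produces a size error that grows as the effective pixel pitch increases at lower resolutions:

```python
# Back-of-envelope pinhole sketch: pixel pitch and focal length are taken from
# Table 1; the 1.5 m object distance and whole-pixel rounding are assumptions.
PIXEL_PITCH = 4.65e-6   # m per pixel at full 1392x1040 resolution (Table 1)
FOCAL = 12e-3           # m (Table 1)

def projected_pixels(size_m, depth_m, pitch):
    """Pixels spanned by an object of physical size size_m at distance depth_m."""
    return size_m * FOCAL / (pitch * depth_m)

def size_from_pixels(n_px, depth_m, pitch):
    """Physical size implied by a measured pixel span (inverse projection)."""
    return n_px * pitch * depth_m / FOCAL

STICKER, DEPTH = 0.01, 1.5   # 1 cm sticker at an assumed 1.5 m working distance
for scale, label in [(1, "1392x1040"), (2, "696x520")]:
    pitch = PIXEL_PITCH * scale                 # halving resolution doubles the pitch
    span = projected_pixels(STICKER, DEPTH, pitch)
    measured = size_from_pixels(round(span), DEPTH, pitch)  # whole-pixel reading
    print(f"{label}: {span:.1f} px, quantisation error "
          f"{abs(measured - STICKER) * 1e3:.2f} mm")
```

The exact millimetre values depend on the assumed working distance, but doubling the effective pixel pitch at quarter resolution roughly doubles the quantisation floor, in line with the falling accuracy seen in Figure 7.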

5. Conclusions

In this paper, the impact of measurement uncertainties on estimating the exact solution of a robotic vision system has been investigated. The solution of the system equation A X = X B has been evaluated using three different approaches, of which two are based on separate estimation of the rotation matrix and translation vector, whilst the third is based on their combined estimation. Investigations have revealed that introducing angular errors in the robot joints affects the hand–eye calibration process less significantly than errors in image pixel estimation. The simulation results also suggest that the solution estimated by the Horaud–Dornaika method is best, followed by the Tsai–Lenz method; both are less affected by robot rotation and translation uncertainties than the Andreff et al. method. The reason for this difference between the two computational categories is the additive effect of errors propagated from the rotation matrix to the translation vector in the Andreff et al. method. Further, the Andreff et al. method evaluates the matrix X by linearising the system equation, which is expected to relax useful constraints of the equation and result in a poorer estimate of the solution. Hence, of the three methods, the Horaud–Dornaika method should be considered for more accurate hand–eye calibration of practical robotic vision systems.

Author Contributions

Formal analysis, M.F.K., E.M.A.D. and M.M.N.; Investigation, M.F.K., E.M.A.D. and M.M.N.; Methodology, M.F.K., E.M.A.D. and M.M.N.; Visualization, M.F.K., E.M.A.D. and M.M.N.; Writing—original draft, M.F.K., E.M.A.D. and M.M.N.; Writing—review & editing, M.F.K., E.M.A.D., M.M.N. and M.M. All authors have read and agreed to the published version of the manuscript.

Funding

The APC was funded by Prince Sultan University.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to acknowledge the financial support of Prince Sultan University in paying the Article Processing Charges (APC) of this publication. The authors would like to thank X. Y. Guan from the University of Central Lancashire (Burnley campus, UK) for providing data for analysing the error sensitivity of the KUKA robot. The authors also thank G. Hall, D. Ceglarek, B. Matuszewski and L. K. Shark for providing technical feedback.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gongal, A.; Karkee, M.; Amatya, S. Apple fruit size estimation using a 3D machine vision system. Inf. Process. Agric. 2018, 5, 498–503. [Google Scholar] [CrossRef]
  2. Mooring, B.; Driels, M.; Roth, Z. Fundamentals of Manipulator Calibration; John Wiley & Sons, Inc.: New York, NY, USA, 1991. [Google Scholar]
  3. Stone, H.; Sanderson, A. A prototype arm signature identification system. In Proceedings of the 1987 IEEE International Conference on Robotics and Automation, Raleigh, NC, USA, 31 March–3 April 1987; Institute of Electrical and Electronics Engineers: Piscataway, NJ, USA, 1987; Volume 4, pp. 175–182. [Google Scholar]
  4. Wang, H.; Shen, S.; Lu, X. A screw axis identification method for serial robot calibration based on the POE model. Ind. Robot. Int. J. 2012, 39, 146–153. [Google Scholar] [CrossRef] [Green Version]
  5. Tsai, R. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Robot. Autom. 1987, 3, 323–344. [Google Scholar] [CrossRef] [Green Version]
  6. Heikkila, J.; Silven, O. A four-step camera calibration procedure with implicit image correction. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Juan, PR, USA, 17–19 June 1997; pp. 1106–1112. [Google Scholar]
  7. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef] [Green Version]
  8. Enebuse, I.; Foo, M.; Ibrahim, B.S.K.K.; Ahmed, H.; Supmak, F.; Eyobu, O.S. A comparative review of hand–eye calibration techniques for vision guided robots. IEEE Access 2021, 9, 113143–113155. [Google Scholar] [CrossRef]
  9. Tabb, A.; Yousef, K.M.A. Solving the robot-world hand–eye(s) calibration problem with iterative methods. Mach. Vis. Appl. 2017, 28, 569–590. [Google Scholar] [CrossRef]
  10. Shiu, Y.; Ahmad, S. Calibration of wrist-mounted robotic sensors by solving homogeneous transform equations of the form AX = XB. IEEE Trans. Robot. Autom. 1989, 5, 16–29. [Google Scholar] [CrossRef] [Green Version]
  11. Tsai, R.; Lenz, R. A new technique for fully autonomous and efficient 3D robotics hand/eye calibration. IEEE Trans. Robot. Autom. 1989, 5, 345–358. [Google Scholar] [CrossRef] [Green Version]
  12. Horaud, R.; Dornaika, F. Hand-Eye Calibration. Int. J. Robot. Res. 1995, 14, 195–210. [Google Scholar] [CrossRef]
  13. Remy, S.; Dhome, M.; Lavest, J.; Daucher, N. Hand-eye calibration. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robot and Systems, Innovative Robotics for Real-World Applications, IROS ’97, Grenoble, France, 7–11 September 1997; Volume 2, pp. 1057–1065. [Google Scholar]
  14. Andreff, N.; Horaud, R.; Espiau, B. On-line hand–eye calibration. In Proceedings of the Second International Conference on 3-D Digital Imaging and Modeling (Cat. No. PR00062), Ottawa, ON, Canada, 8 October 1999; IEEE Computer Society: Washington, DC, USA; pp. 430–436. [Google Scholar]
  15. Daniilidis, K. Hand-Eye Calibration Using Dual Quaternions. Int. J. Robot. Res. 1999, 18, 286–298. [Google Scholar] [CrossRef]
  16. Sankowski, W.; Włodarczyk, M.; Kacperski, D.; Grabowski, K. Estimation of measurement uncertainty in stereo vision system. Image Vis. Comput. 2017, 61, 70–81. [Google Scholar] [CrossRef]
  17. Craig, J.J. Introduction to Robotics: Mechanics and Control; Pearson Educacion: Upper Saddle River, NJ, USA, 2005. [Google Scholar]
  18. KR Agilus Cleanroom Variant. Available online: https://www.kuka.com/en-in/products/robotics-systems/industrial-robots (accessed on 25 February 2021).
  19. Furrer, F.; Fehr, M.; Novkovic, T.; Sommer, H.; Gilitschenski, I.; Siegwart, R. Evaluation of Combined Time-Offset Estimation and Hand-Eye Calibration on Robotic Datasets. In Proceedings of the 11th International Conference on Field and Service Robotics, Zurich, Switzerland, 12–15 September 2017; Springer: Cham, Switzerland, 2017; Volume 5, pp. 145–159. [Google Scholar]
  20. Zhuang, H.; Roth, Z.; Shiu, Y.; Ahmad, S. Comments on “Calibration of wrist-mounted robotic sensors by solving homogeneous transform equations of the form AX = XB” [with reply]. IEEE Trans. Robot. Autom. 1991, 7, 877–878. [Google Scholar] [CrossRef]
  21. Chou, J.C.K.; Kamel, M. Finding the Position and Orientation of a Sensor on a Robot Manipulator Using Quaternions. Int. J. Robot. Res. 1991, 10, 240–254. [Google Scholar] [CrossRef]
  22. Zhuang, H.; Shiu, Y.C. A noise-tolerant algorithm for robotic hand–eye calibration with or without sensor orientation measurement. IEEE Trans. Syst. Man Cybern. 1993, 23, 1168–1175. [Google Scholar] [CrossRef]
  23. Fassi, I.; Legnani, G. Hand to sensor calibration: A geometrical interpretation of the matrix equation. J. Robot. Syst. 2005, 22, 497–506. [Google Scholar] [CrossRef]
  24. Park, F.; Martin, B. Robot sensor calibration: Solving AX = XB on the Euclidean group. IEEE Trans. Robot. Autom. 1994, 10, 717–721. [Google Scholar] [CrossRef]
  25. Lu, Y.-C.; Chou, J. Eight-space quaternion approach for robotic hand–eye calibration. In Proceedings of the 1995 IEEE International Conference on Systems, Man and Cybernetics, Intelligent Systems for the 21st Century, Vancouver, BC, Canada, 22–25 October 1995; IEEE: Piscataway, NJ, USA, 1995; Volume 4, pp. 3316–3321. [Google Scholar]
Figure 1. Classical framework for evaluating the unknown matrix X of a robotic vision system. The image sensor mounted on the flange of a multi-axis robot is navigated from position i to j.
Figure 2. D–H coordinate system of link l of robot.
Figure 3. Introduction of error δB in one of the perfectly detected image points (αᵢ, βᵢ). Adding a noise of |γ| pixels to image point (αᵢ, βᵢ) results in a region of δB = γ. The error in B can be estimated by evaluating the area (in μm²) of the uncertain region using the pixel size value from Table 1.
Figure 4. Error norms associated with rotation matrix and translation vector estimated using: (a,b) Horaud–Dornaika method; (c,d) Tsai–Lenz method; (e,f) Andreff et al. method. The mesh plot represents error values for η = 4 , while solid surface plot represents error values for η = 5 .
Figure 5. Error norms associated with rotation matrix and translation vector estimated using: (a,b) Horaud–Dornaika method; (c,d) Tsai–Lenz method; (e,f) Andreff et al. method.
Figure 6. Image of 3D printed car door captured under different camera resolution, i.e., 1392 × 1040, 1044 × 780, and 696 × 520. (a) Original image, (b) Manually extracted image, (c) Respective depth image.
Figure 7. Variation in estimation of sticker diameter under different camera settings.
Table 1. Specifications of 3CCD Jai camera.
Specification              Quantitative Value
Resolution                 1392 × 1040
Pixel size                 4.65 μm
Frame rate                 20.8 fps
Focal length               12 mm
Sensor size                1/2″
Minimum working distance   150 mm
Table 2. Parameter values of KUKA robot with offset ( Δ ).
Kuka Link   α ± Δα (deg)    a ± Δa (mm)    θ ± Δθ (deg)    d ± Δd (mm)
1           90 − 0.013      25 + 0.009     θ1 + 0          0
2           0 + 0.005       455 + 0.447    θ2 + 0.005      0 + 0.010
3           90 + 0.008      35 + 0.172     θ3 − 89.990     0
4           −90 + 0.041     0 − 0.104      θ4 − 0.083      −420 − 0.341
5           90 − 0.001      0 + 0.081      θ5 + 0.038      0 + 0.056
6           0               0              θ6 + 0          −80 + 0
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
