Calibration of In-Plane Center Alignment Errors in the Installation of a Circular Slide with Machine-Vision Sensor and a Reflective Marker

School of Mechanical Engineering, Soongsil University, Seoul 06978, Korea
*
Author to whom correspondence should be addressed.
Sensors 2020, 20(20), 5916; https://doi.org/10.3390/s20205916
Submission received: 1 September 2020 / Revised: 9 October 2020 / Accepted: 13 October 2020 / Published: 20 October 2020
(This article belongs to the Section Optical Sensors)

Abstract

This paper describes a method for calibrating the in-plane center alignment error (IPCA) that occurs when installing a circular motion slide (CMS). In this study, by combining the moving carriage of the CMS and a planar parallel kinematic mechanism (PKM) with a machine tool, the small workspace of the PKM is expanded, and a workpiece placed on the table inside the installed CMS is processed by the machine tool. However, to rigidly mount the CMS on the table, the preload between the guide and the support bearings must be adjusted with an eccentric bearing, and in this process the IPCA occurs. After a reflective marker is installed on the PKM, the PKM is slowly rotated along the ring guide in a stop-and-go manner without any motion of the PKM itself. Then, using a machine-vision camera installed above the CMS, the IPCA, which is the difference between the actual center position and the nominal center position of the CMS with respect to the camera, can be successfully calibrated through a circle-fitting process. Consequently, it was confirmed that an IPCA of 0.37 mm can be successfully identified with the proposed method.

1. Introduction

The structure of manipulators used as industrial robots can be mainly divided into the serial kinematic mechanism (SKM) and the parallel kinematic mechanism (PKM). The PKM is naturally associated with a set of constraint functions characterized by its kinematic closure constraints. It has a closed-loop mechanical structure composed of two or more mutually connected links and has relatively high structural stiffness compared with the open-chained structure. However, due to these mutually constrained structural characteristics, various types of singularities exist inside its workspace [1,2], and the workspace itself is very small. To overcome these limitations inherent to the structure of the PKM, various studies have been conducted to eliminate the actuator and end-effector singularities, as well as to efficiently enlarge the usable workspace [3]. F.C. Park and J.W. Kim [4] used differential geometric tools to study the PKM's singularities and provided a finer classification of them. In their later works, they proposed actuator redundancy as a means of eliminating actuator singularities and enlarging the usable spatial workspace; a six-axis spatial PKM was manufactured based on this principle [5,6]. Gao, F. described the characteristics of the workspace according to variable link lengths and studied workspace optimization of parallel mechanisms, and Merlet, J.-P. studied the determination of 3D and 6D workspaces. In addition, Ryu, S.-J. carried out workspace optimization by adding a linear joint, or motion guide, to the mechanism. Many such studies have been conducted, either performing optimal design of the device itself or installing the device on top of a motion guide [7,8,9,10,11,12,13,14,15].
Due to the nature of the PKM for machine tools in this study, the end effector, as shown in Figure 1, should be capable of approaching and performing operations in any direction of the workpieces. Thus, by combining the PKM with machine tools with the moving carriage on the ring guide, the workspace was expanded by rotating the singularity-free workspace of the PKM about a vertical axis passing through the center of the ring guide.
Ring slides, together with linear slides, have been used extensively as transfer systems in machine-tool and automation applications [16].
Figure 2 shows applications that perform processes through ring slides and linear slides. (a) Optical lens assembly machines are the most representative form of industrial process using motion guides; ring slides are used for switching directions, and transport operations are carried out through linear slides. (b) The moving saw for tube cutting is a piece of equipment made to cut tubes; a cutting saw placed on the ring slide moves along it and cuts the long tube. (c) The pick-and-place device is a ring slide equipped with a serial mechanism performing a steady 360° rotational motion. As such, ring slides have mainly been used to redirect transport routes and have been applied in areas where high precision is not required, even when the process takes place on the ring slide.
However, in a variety of applications, such as Trioptics' high-precision optical measurement system "Image master cineflex" shown in Figure 3, a measurement camera is mounted on the ring slide to ensure a flange focal length accuracy of 10 µm and an effective focal length accuracy of ±0.2% [17]. This shows that ring slides can be used in applications that require relatively precise control.
However, there is an important difference between this study and Trioptics' application of ring slides: in the "Image master cineflex", the ring slide is fixed on the base platform, and the bearing and pinion carrying the measurement camera rotate. In this study, on the other hand, the bearings and motors secure the rotating pinion to the base platform, above which the PKM-equipped ring slide rotates.
Figure 4 shows the V-bearings and circular motion slide of an external single-edge ring system. When rigidly integrating the V-surface edge of the ring slide with three support bearings, two concentric bearings and one eccentric bearing should be evenly spaced at 120° intervals according to the installation guide. For the eccentric bearing used in this study, the eccentric offset between the central axis of rotation and the stud axis of the bearing is 1.9–5.5 mm. Preload adjustment using this eccentric offset can prevent improper slide operation caused by positional dimension errors between the bearing mounting stud holes of the base.
However, as shown in Figure 5, a difference then occurs between the nominal and actual positions of the slide center relative to the base reference. In this study, the process is carried out by rotating the ring slide directly, so the in-plane center alignment error (IPCA) that can occur when installing the ring slide must be identified and calibrated, because it seriously affects the relative positioning accuracy between the workpiece fixed inside the ring slide and the cutting part rotating along the circular guide [18,19,20].
Thus, in this study, a machine-vision-camera-based calibration method is proposed to identify this IPCA with a reflective marker mounted on a T-shaped fixture at the end effector of the PKM. Retro-reflective-marker-based real-time positioning and localization with a machine-vision camera has been actively used in fields requiring precise measurement, such as robotic neurosurgery [21,22], until recently. After a reflective marker is installed on the PKM, the PKM is slowly rotated along the ring guide in a stop-and-go manner without any motion of the PKM itself, and then the automated marker localization process is performed to obtain the position of the marker with respect to the camera installed on the top of the circular motion slide (CMS). After all the center coordinates of the marker with reference to the camera coordinate system are obtained through this look-then-move process, the centroid coordinates of the CMS are estimated with a circle fit of the set of reflective-marker positions. As a result, the IPCA can be determined by calculating the relative distance from the nominal origin.

2. Theoretical Background

2.1. Definition of IPCA in CMS

Premise 1: The origins of all the coordinate systems described below are coplanar.
Premise 2: Frame {C} and the frame {E} estimated through circle fitting have the same orientation:
^C R_E ≈ I    (1)
Premise 3: The nominal reference frame {Nr} and the nominal center frame {Nc} of the circular slide have the same orientation:
^Nr R_Nc ≈ I    (2)
Figure 6 shows the camera frame {C}, the reference nominal frame {Nr}, the nominal center frame of the "design" slide {Nc}, the actual frame {A}, the tool frame {T} attached to the T-shaped calibration tool, the frame {E} estimated from the trajectory of frame {T}, and the field of view (FOV) of the camera. Because the actual location {A} of the CMS center is unknown, to define the nominal origin, a reference square with three reflective markers attached is installed at the "design" center on the optical table where the mechanism and camera are mounted, and {Nc} is defined. The detailed method of defining the nominal center frame is described in Section 3.
In this study, we aim to identify the alignment error ^Nc ΔP_{E.ORG} between the origins of the two coordinate systems through frames {Nc} and {E} expressed with reference to frame {C}, as indicated by Equations (3) and (4).
^C T_{Nr} · ^Nr T_{Nc} = ^C T_{Nc}    (3)
^Nc ΔP_{E.ORG} = ^C P_E − ^C P_{Nc}    (4)
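The subtraction in Equation (4) can be made concrete with a short numerical sketch. The coordinates below are illustrative placeholders, not measured values from the paper:

```python
import numpy as np

# Sketch of Eq. (4): the IPCA is the offset between the estimated centre {E}
# and the nominal centre {Nc}, both expressed in the camera frame {C}.
# The coordinates are assumed placeholder values (mm), not measured data.
p_E_in_C = np.array([102.41, 98.73])    # estimated CMS centre from circle fit
p_Nc_in_C = np.array([102.20, 98.43])   # nominal centre from the drawing

ipca_vec = p_E_in_C - p_Nc_in_C         # ^Nc ΔP_{E.ORG}, in-plane components
ipca_mag = float(np.linalg.norm(ipca_vec))  # scalar alignment error, mm
print(ipca_vec, round(ipca_mag, 4))
```

The scalar magnitude is what the paper reports as the IPCA (0.37 mm in the actual experiment).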
To estimate {E}, a T-shaped tool with a reflective marker attached to the edge of the cutting part is installed, the marker moves on the cutting part at a constant angular displacement along the CMS (counterclockwise, CCW), and “look-then-move” [23] is repeated, where the camera recognizes the position of the marker with reference to {C}. Five images are captured at each position, and the mean position of the marker is estimated through the machine-vision process with reference to frame {C}. Through this process, with circle fitting of the collected trajectory of the marker, the origin position of the coordinate system {E} is obtained. In this case, all the measurement coordinates are expressed with reference to frame {C}.
The entire process of look-then-move is summarized in Figure 7.

2.2. Measurement System Configuration

As shown in Figure 8 and Table 1, the measurement system consists of a machine-vision camera, dimmable light-emitting diode (LED) lights, a CMS, a parallel cutting part, a calibration tool, reflective markers, a reference square, and a controller.
w = (n × h) / f    (5)
d = (t × h) / f    (6)
(f × α / n) s = (f × β / t) s = h    (7)
Resolution = w / α = d / β = s    (8)
Here, h represents the height from the calibration tool to the camera, R represents the radius of the CMS, r represents the radius of rotation of the marker, w represents the width of the FOV at h, and d represents the depth of the FOV at h. The main parameters related to the camera and lens selection are the working distance h, which is the installation height of the camera; the minimum FOV width w of the camera; the depth d; the sensor width n; the depth t; and the pixel size m of the image acquired by the camera. Additionally, f represents the focal length of the lens, and α and β represent the horizontal and vertical pixel resolutions of the camera, respectively. The factor that determines the measurement precision of the entire measurement system is the size of the unit pixel (in mm), and once the machine-vision camera is selected, this value is determined linearly by the working distance h, i.e., the installation height of the camera. Therefore, in this study, the camera and the working distance were determined using Equations (5)–(8) so that the minimum size of the unit pixel would be approximately 0.05 mm. The results are presented in Table 2. The detailed specifications of the camera and lens were presented in a previous study [24].
Here, the closer the calibration tool that functions as the end effector of the mechanism is to the CMS serving as its supporting base, the smaller the vibration resulting from the position transitions of the look-then-move process. As shown in Figure 9, the look-then-move experiment was performed after adjusting the circular trajectory to be circumscribed to the greatest possible extent within the given FOV.

3. Experiment and Analysis

3.1. Calibration of Camera

Prior to calibrating the IPCA using the previously selected machine-vision camera, as shown in Figure 10, camera calibration should be performed. Here, error factors are identified and calibrated, such as the distortion of the lens (including the radial distortion and tangential distortion that arise when a point in a three-dimensional space is mapped onto a two-dimensional image plane) and installation uncertainties [25,26,27].
The camera calibration error factors are mainly divided into external and internal factors. The external factors include the working distance of the camera, the light intensity, and the horizontal accuracy of the camera, and the internal factors include the focal length f (the distance between the lens center and the image sensor) and the principal point (the image coordinate of the foot of the perpendicular from the center of the lens to the image sensor).
Through the camera calibration, the aforementioned errors are calibrated, increasing the measurement and calculation accuracy in the machine-vision process. There are many methods for camera calibration [28,29,30,31,32,33], and in this study, calibration was performed using the camera calibration tool Vision Assistant provided by NI LabVIEW (Figure 11 and Figure 12) [34,35].

3.2. Image Acquisition and Processing

The vision sensor output must undergo several processing steps for the accurate recognition of the reflective marker [36,37]. First, the grayscale image is converted into a binary image. The purpose of this conversion is to turn an image with grayscale data in the range 0–255 into an image consisting of only 0 s and 1 s. This reduces the total data size, and only the reflective markers are displayed on the binary image. In LabVIEW, this process is performed using the threshold function. When the brightness of the image is expressed on the scale 0–255, the values from 0 to 170 correspond to 0 (black), and the values from 171 to 255 correspond to 1 (shown in red) (Figure 12).
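The thresholding step above can be sketched in a few lines; the 5 × 5 array stands in for a camera image, and the 170 cut-off matches the mapping described in the text:

```python
import numpy as np

# Minimal sketch of the grayscale -> binary threshold step: intensities
# 0-170 map to 0 and 171-255 map to 1, so only bright reflective-marker
# pixels survive. The tiny array below is a stand-in for a camera image.
img = np.array([[ 10,  40, 200, 210,  30],
                [ 15, 180, 255, 190,  20],
                [ 12, 175, 250, 185,  25],
                [ 18,  35, 205,  60,  22],
                [  9,  14,  11,  17,  13]], dtype=np.uint8)

binary = (img > 170).astype(np.uint8)   # same cut-off as the LabVIEW threshold
print(binary.sum())                     # count of "marker" pixels
```

In the real pipeline this operation is performed by LabVIEW's threshold function rather than NumPy; the logic is the same.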
Second, all the points in the image that have a circular shape are identified through the “Finding circle” function. At this time, as shown in Figure 12, the head of the fastening screw also has a circular shape, and these are also recognized as circles.
Therefore, when the circle diameter is limited to 12–13 mm (so that only the reflective marker with the diameter of 12.66 mm can be recognized), only three circles are recognized, the center coordinates of these circles are acquired, and the process continues to the marker-identification step [38].
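The diameter gate described above can be sketched as a simple filter. The pixel scale is the 0.07841 mm/pixel value from the camera setup; the detected-circle list is a hypothetical example (one marker-sized circle among two screw-head-sized ones):

```python
# Sketch of the diameter gate: detected circles whose diameter, converted
# to mm via the per-pixel scale, falls outside 12-13 mm are rejected, so
# screw heads are ignored and only the 12.66 mm markers remain.
MM_PER_PIXEL = 0.07841          # unit-pixel size from the camera setup

def filter_markers(circles_px, d_min=12.0, d_max=13.0):
    """circles_px: list of (cx, cy, diameter_in_pixels) tuples."""
    kept = []
    for cx, cy, d_px in circles_px:
        d_mm = d_px * MM_PER_PIXEL
        if d_min <= d_mm <= d_max:
            kept.append((cx, cy))
    return kept

# Hypothetical detections: one 12.66 mm marker (~161.5 px) and two
# ~8 mm screw heads (~100 px).
detected = [(120, 80, 161.5), (300, 220, 102.0), (410, 95, 100.0)]
print(filter_markers(detected))
```

Only the first circle passes, mirroring how the LabVIEW "Finding circle" results are pruned before the marker-identification step.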
Figure 13 and Figure 14 show the LabVIEW code for the machine-vision process and the hardware configuration for the experiment. After the look-then-move process, the center coordinates of the target marker at each angle were obtained through the machine-vision process, and among the circle fitting functions [39,40,41,42], the Pratt method [42] is used to estimate the origin of the frame {E} of the CMS with reference to the coordinate system {C}.
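The paper uses the Pratt method for the circle fit; the simpler Kåsa least-squares fit below illustrates the same idea (estimating the circle centre from points sampled on the marker trajectory) under an assumed, noise-free synthetic geometry:

```python
import numpy as np

# Kåsa algebraic circle fit (a simpler stand-in for the Pratt method used
# in the paper): solve x^2 + y^2 = D*x + E*y + F in the least-squares sense,
# then recover the centre (D/2, E/2) and radius sqrt(F + cx^2 + cy^2).
def kasa_circle_fit(x, y):
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x**2 + y**2
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = D / 2.0, E / 2.0
    r = np.sqrt(F + cx**2 + cy**2)
    return cx, cy, r

# Synthetic stop-and-go trajectory: marker positions at 10 degree steps on
# a circle of radius 150 mm centred at (50, -30) -- placeholder geometry.
theta = np.deg2rad(np.arange(0, 360, 10))
x = 50.0 + 150.0 * np.cos(theta)
y = -30.0 + 150.0 * np.sin(theta)
cx, cy, r = kasa_circle_fit(x, y)
print(round(cx, 3), round(cy, 3), round(r, 3))
```

On noisy measured trajectories the Pratt fit [42] is less biased for small arcs, which is why it is preferred in the paper.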
The estimated origin {E} of the circular slide is compared with the nominal origin, and the following procedure is performed using the machine-vision process and the design drawings to define this nominal origin. First, as shown in Figure 15, the center coordinates of the three reflective markers attached to the reference square are obtained through the machine-vision process and defined as P1, P2, and P3. The nominal reference frame {Nr} is defined by these three coordinates, and a rotation of 1.3° with reference to frame {C} is calculated. The rotation matrix ^C R_{Nr} and position ^C P_{Nr} at this point are presented in Table 3 and Equation (9). Second, the nominal center frame {Nc} of the CMS is defined from the design drawing. In this case, the nominal origin is the center of the circle whose circumference is the V-surface edge line of the CMS connected to the aforementioned three V-bearings. Because the CMS is fixed by the eccentric bearing, to calculate {Nc}, the center coordinates are obtained with all three bearings set as the center, together with the diameter of the trajectory of possible CMS center positions obtained with reference to the eccentric bearing offset of 3.6 mm. Therefore, owing to the distance error from the origin of frame {E}, the error rate occurs within this range. In the design, the fastening angle α of the eccentric bearing is determined to be between 0° and 31°, and the resulting diameter of the trajectory of possible CMS center positions, 1.23 mm, is shown in Figure 15. Additionally, Figure 16 shows the V-surface edge lines that can be connected for each angle α.
^C R_{Nr} = [ 0.99  −0.023  0
              0.023  0.99   0
              0      0      1 ]    (9)
Therefore, the nominal origin coordinate ^C P_{Nc} can be obtained from the estimated {Nr} with reference to the coordinate system {C} and the already known ^Nr P_{Nc}, as follows:
^C P_{Nr} + ^C R_{Nr} · ^Nr P_{Nc} = ^C P_{Nc}    (10)
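The frame composition above can be evaluated numerically. The 1.3° rotation matches the reference-square measurement; the two translation vectors are illustrative placeholders, not the paper's measured values:

```python
import numpy as np

# Sketch of ^C P_Nc = ^C P_Nr + ^C R_Nr * ^Nr P_Nc: express the nominal
# CMS centre in the camera frame. The 1.3 degree rotation is from the
# reference-square measurement; the translations are assumed placeholders.
ang = np.deg2rad(1.3)
R_C_Nr = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0,          0.0,         1.0]])
P_C_Nr = np.array([80.0, 60.0, 0.0])    # origin of {Nr} in {C}, mm (assumed)
P_Nr_Nc = np.array([25.0, 40.0, 0.0])   # nominal centre in {Nr}, mm (assumed)

P_C_Nc = P_C_Nr + R_C_Nr @ P_Nr_Nc      # nominal centre in {C}
print(np.round(P_C_Nc, 3))
```

The result is then compared with the circle-fit estimate of {E} to obtain the IPCA of Equation (4).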

3.3. Analysis of Experimental Results

An experiment was performed in which the light intensity and the angle shifting interval of the look-then-move process were varied. The conditions were 480 and 770 lux and 5°, 10°, and 20°, respectively. Table 4 and Table 5 present the mean value and the standard deviation (STD) of the estimated radius for each condition, as well as the distance error, which was the in-plane alignment error.
As indicated by the experimental results, the distance error varied by up to 640%, depending on the light intensity. A circle fitted at 770 lux exhibited little variation in the radii estimated at all the angles and in the STD, whereas a circle fitted at 480 lux exhibited a large variation. This is because the vision sensor was unable to properly recognize the target marker under the low light intensity, as confirmed by Figure 17a. Even after the deviations of the values were calibrated using the Pratt method, the results were unreliable, indicating an adverse effect on the precision.
Figure 18 shows the origin fitting at 770 lux, plotted at a scale within 0.8 mm in total. The results were valid, as they were almost independent of the angle-shifting interval. Additionally, an alignment error is evident in Figure 18 when the fitted origins are compared with the nominal origin. The alignment error was 0.37 mm, and the error rate was 27% with respect to the range of the nominal origin.

4. Conclusions

To calibrate the center alignment error that occurs when a CMS is used for expanding the workspace of a parallel mechanism, a method for determining the error relative to the nominal origin was developed. The method involves rotating the mechanism with a reflective marker attached and defining the center of the circular trajectory of the marker as the actual origin.
For this purpose, a camera was selected, and the working distance h, which linearly determines the length per unit pixel, was chosen as the camera height for adequate calibration precision: h was set at 565 mm, giving a length per unit pixel of 0.07841 mm. This value was then implemented in the actual experimental environment. Additionally, to validate the experimental results, the experiments were conducted several times while the light intensity and shifting angle were varied. As a result, the origin center alignment error was identified as 0.37 mm under the 770-lux brightness condition at all angle-shifting intervals.
Because the proposed calibration method involves a camera that can be easily installed and leveled on a tripod, it can be applied wherever reflective-marker recognition is possible, even when the available space is limited. Thus, it is very useful in CMS applications.

Author Contributions

This work was carried out in collaboration among all authors. D.L. provided supervision and guidance in the camera calibration, image acquisition, and machine-vision processing. H.J. and J.Y. carried out the vision-sensor image acquisition and data processing. All authors participated in the original draft preparation. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Basic Science Research Program through a National Research Foundation of Korea (NRF) grant funded by the Ministry of Education (NRF-2016R1D1A1B03932054) and the Korean government (MSIT) (NRF-2019R1F1A1045834).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cleary, K.; Brooks, T. Kinematic analysis of a novel 6-DOF parallel manipulator. In Proceedings of the IEEE International Conference on Robotics and Automation, Atlanta, GA, USA, 2–6 May 1993; pp. 708–713. [Google Scholar]
  2. Attia, H.A. Dynamic modelling of planar mechanisms using point coordinates. KSME Int. J. 2003, 17, 1977–1985. [Google Scholar] [CrossRef]
  3. Müller, A. Internal preload control of redundantly actuated parallel manipulator—Its application to backlash avoiding control. IEEE Trans. Robot. Autom. 2005, 21, 668–677. [Google Scholar] [CrossRef]
  4. Park, F.C.; Kim, J.W. Singularity Analysis of Closed Kinematic Chains. J. Mech. Des. 1999, 121, 32–38. [Google Scholar] [CrossRef] [Green Version]
  5. Ryu, S.-J.; Kim, J.W.; Hwang, J.C.; Park, C.; Cho, H.S.; Lee, K.; Lee, Y.; Cornel, U.; Park, F.C. Eclipse: An overactuated parallel mechanism for rapid machining. ASME Manuf. Sci. Eng. 1998, 8, 681–689. [Google Scholar]
  6. Kim, W.J.; Hwang, J.C.; Kim, J.S.; Park, F.C. Eclipse–II: A new parallel mechanism enabling continuous 360-degree spinning plus three-axis translational motions. Proc. IEEE Int. Conf. Robot. Autom. 2001, 18, 3274–3279. [Google Scholar]
  7. Gao, F.; Liu, X.-J.; Chen, X. The relationships between the shapes of the workspaces and the link lengths of 3-DOF symmetrical planar parallel manipulators. Mech. Mach. Theory 2001, 36, 205–220. [Google Scholar] [CrossRef]
  8. Merlet, J.-P. Determination of 6D Workspaces of Gough-Type Parallel Manipulator and Comparison between Different Geometries. Int. J. Robot. Res. 1999, 18, 902–916. [Google Scholar] [CrossRef]
  9. Merlet, J.-P.; Gosselin, C.M.; Mouly, N. Workspaces of planar parallel manipulators. Mech. Mach. Theory 1998, 33, 7–20. [Google Scholar] [CrossRef]
  10. Kim, B.-S.; Lee, J.-W.; Kim, Y.-S.; Kim, J.-D.; Lee, H.-J. The Study of Kinematic Analysis and Control by Optimum Design of Redundantly Actuated Parallel Robot. J. Korean Soc. Precis. Eng. 2012, 29, 426–432. [Google Scholar] [CrossRef]
  11. Lou, Y.; Liu, G.; Chen, N.; Li, Z. Optimal design of parallel manipulators for maximum effective regular workspace. In Proceedings of the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, AB, Canada, 2–6 August 2005; pp. 795–800. [Google Scholar]
  12. Ryu, S.-J.; Kim, J.W.; Hwang, J.C.; Park, C.; Sang Cho, H.; Lee, K.; Lee, Y.; Cornel, U.; Park, F.C.; Kim, J. Eclipse: An Overactuated Parallel Mechanism for Rapid Machining. In Parallel Kinematic Machines; Boër, C.R., Molinari-Tosatti, L., Smith, K.S., Eds.; Advanced Manufacturing; Springer: London, UK, 1999; pp. 441–455. ISBN 978-1-4471-1228-0. [Google Scholar]
  13. Chablat, D.; Wenger, P. Architecture optimization of a 3-DOF translational parallel mechanism for machining applications, the orthoglide. IEEE Trans. Robot. Automat. 2003, 19, 403–410. [Google Scholar] [CrossRef]
  14. Bai, S. Optimum design of spherical parallel manipulators for a prescribed workspace. Mech. Mach. Theory 2010, 45, 200–211. [Google Scholar] [CrossRef]
  15. Rashoyan, G.V.; Lastochkin, A.B.; Glazunov, V.A. Kinematic analysis of a spatial parallel structure mechanism with a circular guide. J. Mach. Manuf. Reliab. 2015, 44, 626–632. [Google Scholar] [CrossRef]
  16. Hepcomotion Webpage. Available online: https://www.hepcomotion.com/product/ring-guides-track-systems-and-segments/hdrt-heavy-duty-ring-guides-and-track-systems/ (accessed on 1 September 2020).
  17. Trioptics Products Home Page. Available online: https://trioptics.com/products/imagemaster-hr-universal-image-quality-mtf-testing (accessed on 1 September 2020).
  18. Qian, S.; Bao, K.; Zi, B.; Wang, N. Kinematic Calibration of a Cable-Driven Parallel Robot for 3D Printing. Sensors 2018, 18, 2898. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  19. Chen, H.-H. In-situ alignment calibration of attitude and ultra short baseline sensors for precision underwater positioning. Ocean. Eng. 2008, 35, 1448–1462. [Google Scholar] [CrossRef]
  20. Lee, N.K.S.; Yu, G.; Joneja, A.; Ceglarek, D. The modeling and analysis of a butting assembly in the presence of workpiece surface roughness and part dimensional error. Int. J. Adv. Manuf. Technol. 2006, 31, 528–538. [Google Scholar] [CrossRef]
  21. Šuligoj, F.; Svaco, M.; Jerbic, B.; Sekoranja, B.; Vidakovic, J. Automated marker localization in the planning phase of robotic neurosurgery. IEEE Access 2017, 5, 12265–12274. [Google Scholar] [CrossRef]
  22. Nycz, C.J.; Gondokaryono, R.; Carvalho, P.; Patel, N.; Wartenberg, M.; Pilitsis, J.G.; Fischer, G.S. Mechanical validation of an MRI compatible stereotactic neurosurgery robot in preparation for pre-clinical trials. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 1677–1684. [Google Scholar]
  23. Hauck, A.; Sorg, M.; Farber, G.; Schenk, T. What can be learned from human reach-to-grasp movements for the design of robotic hand-eye systems? In Proceedings of the 1999 IEEE International Conference on Robotics and Automation (Cat. No. 99CH36288C), Detroit, MI, USA, 10–15 May 1999; Volume 4, pp. 2521–2526. [Google Scholar]
  24. Basler Web Home Page. Available online: https://www.baslerweb.com/ko/products/cameras/area-scan-cameras/ace/aca3800-10gm/#tab=spec (accessed on 1 September 2020).
  25. Zeng, J.; Chang, B.; Du, D.; Peng, G.; Chang, S.; Hong, Y.; Wang, L.; Shan, J. A Vision-Aided 3D Path Teaching Method before Narrow Butt Joint Welding. Sensors 2017, 17, 1099. [Google Scholar] [CrossRef] [Green Version]
  26. Wang, Z.; Walsh, K.; Verma, B. On-Tree Mango Fruit Size Estimation Using RGB-D Images. Sensors 2017, 17, 2738. [Google Scholar] [CrossRef] [Green Version]
  27. Nasirahmadi, A.; Sturm, B.; Edwards, S.; Jeppsson, K.-H.; Olsson, A.-C.; Müller, S.; Hensel, O. Deep Learning and Machine Vision Approaches for Posture Detection of Individual Pigs. Sensors 2019, 19, 3738. [Google Scholar] [CrossRef] [Green Version]
  28. Zhang, Z.-Q. Cameras and Inertial/Magnetic Sensor Units Alignment Calibration. IEEE Trans. Instrum. Meas. 2016, 65, 1495–1502. [Google Scholar] [CrossRef] [Green Version]
  29. Weng, J.; Cohen, P.; Hemiou, M. Camera calibration with distortion models and accuracy evaluation. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 965–980. [Google Scholar] [CrossRef] [Green Version]
  30. Pollefeys, M.; Koch, R.; Van Gool, L. Self-Calibration and Metric Reconstruction Inspite of Varying and Unknown Intrinsic Camera Parameters. Int. J. Comput. Vis. 1999, 32, 7–25. [Google Scholar] [CrossRef]
  31. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Machine Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef] [Green Version]
  32. Hu, J.-S.; Chang, Y.-J. Automatic Calibration of Hand–Eye–Workspace and Camera Using Hand-Mounted Line Laser. IEEE/ASME Trans. Mechatron. 2013, 18, 1778–1786. [Google Scholar] [CrossRef]
  33. Zhang, Z. Camera calibration with one-dimensional objects. IEEE Trans. Pattern Anal. Machine Intell. 2004, 26, 892–899. [Google Scholar] [CrossRef]
  34. Klinger, T. Image Processing with LabVIEW and IMAQ Vision; Prentice Hall Professional: Upper Saddle River, NJ, USA, 2003. [Google Scholar]
  35. Kwon, K.S.; Ready, S. Practical Guide to Machine Vision Software: An. Introduction with LabVIEW; John Wiley & Sons: Hoboken, NJ, USA, 2014. [Google Scholar]
  36. Lee, D.-M.; Yang, S.-H.; Lee, S.-R.; Lee, Y.-M. Development of machine vision system and dimensional analysis of the automobile front-chassis-module. KSME Int. J. 2004, 18, 2209. [Google Scholar] [CrossRef]
  37. Oh, S.; You, J.-H.; Kim, Y.-K. FPGA Acceleration of Bolt Inspection Algorithm for a High-Speed Embedded Machine Vision System (ICCAS 2019). In Proceedings of the 2019 19th International Conference on Control, Jeju, Korea, 15–18 October 2019; pp. 1070–1073. [Google Scholar]
  38. Lee, D.J.; Kim, S.H.; Ahn, J.H. Breakage detection of small-diameter tap using vision system in high-speed tapping machine with open architecture controller. KSME Int. J. 2004, 18, 1055–1061. [Google Scholar] [CrossRef]
  39. Thomas, S.M.; Chan, Y.T. A Simple Approach for the Estimation of Circular Arc Center and Its Radius. Comput. Vis. Graph. Image Process. 1989, 45, 362–370. [Google Scholar] [CrossRef]
  40. Taubin, G. Estimation of planar curves, surfaces, and nonplanar space curves defined by implicit equations with applications to edge and range image segmentation. IEEE Trans. Pattern Anal. Machine Intell. 1991, 13, 1115–1138. [Google Scholar] [CrossRef] [Green Version]
  41. Umbach, D.; Jones, K.N. A few methods for fitting circles to data. IEEE Trans. Instrum. Meas. 2003, 52, 1881–1885. [Google Scholar] [CrossRef] [Green Version]
  42. Pratt, V. Direct least squares fitting of algebraic surfaces. Comput. Graph. 1987, 21, 145–152. [Google Scholar] [CrossRef]
Figure 1. (a) Workspace of the fixed parallel kinematic machine, (b) workspace of parallel kinematic machine sweeping the center axis.
Figure 1. (a) Workspace of the fixed parallel kinematic machine, (b) workspace of parallel kinematic machine sweeping the center axis.
Sensors 20 05916 g001
Figure 2. Application of circular and linear slide: (a) Optical lens assembly machine, (b) moving saw for tube cutting, (c) pick and place robot.
Figure 2. Application of circular and linear slide: (a) Optical lens assembly machine, (b) moving saw for tube cutting, (c) pick and place robot.
Sensors 20 05916 g002
Figure 3. Tripotics Products: (a) Image master cineflex (b) Image master HR/universal.
Figure 3. Tripotics Products: (a) Image master cineflex (b) Image master HR/universal.
Sensors 20 05916 g003
Figure 4. V-bearings of the single-edge ring system and circular motion slide system.
Figure 4. V-bearings of the single-edge ring system and circular motion slide system.
Sensors 20 05916 g004
Figure 5. The center alignment error is caused by the eccentric bearing of the circular slide.
Figure 5. The center alignment error is caused by the eccentric bearing of the circular slide.
Sensors 20 05916 g005
Figure 6. Schematic of the in-plane center alignment error (IPCA) of the circular motion slide (CMS), due to the uncertain assembly of the eccentric typed bearing.
Figure 6. Schematic of the in-plane center alignment error (IPCA) of the circular motion slide (CMS), due to the uncertain assembly of the eccentric typed bearing.
Sensors 20 05916 g006
Figure 7. Flowchart of the look-then-move machine-vision process.
Figure 7. Flowchart of the look-then-move machine-vision process.
Sensors 20 05916 g007
Figure 8. Schematic of the entire measurement system.
Figure 8. Schematic of the entire measurement system.
Sensors 20 05916 g008
Figure 9. Schematic of the FOV and marker position.
Figure 10. Mapping world coordinates to pixel coordinates.
Figure 11. Snapshot of the NI Vision Assistant-based machine-vision camera calibration.
Figure 12. Image threshold in LabVIEW. (left) Grayscale image; (right) binary image.
Figure 13. LabVIEW code for the look-then-move process.
Figure 14. Snapshot of the experimental environment. (left) Side view; (right) top view of the vision-camera part with two levels.
Figure 15. Range of the center point of the CMS with the variation of the eccentric angle.
Figure 16. Schematic of the reference square and eccentric bearing.
Figure 17. Results of circle fittings with respect to the frame {C} for different angle shifting intervals and LED light intensities: (a) 480 lux; (b) 770 lux.
Figure 18. Results of circle fittings with respect to frame {Nc} under the 770-lux light-intensity condition.
Table 1. Information about all the hardware and software used in this study.
| Part | Item | Model Number | Specifications |
|---|---|---|---|
| Driving part | Stepper motor | EZi-servo M2 60M | Drive method: BI-POLAR; pulses/revolution: 20,000; current per phase: 4 A |
| | Controller | NI Compact RIO 9033 | Software: LabVIEW |
| Driven part | Circular slide | Hepcomotion RIM 627R360 P N | Pitch diameter: 606.5 mm; number of teeth: 500 |
| | Pinion | P125W14T34B | Number of teeth: 34 |
| Measurement | Vision camera | Basler acA3800-10gm | See Table 2 |
| | Lens | TC1214-3MPG | See Table 2 |
| | Reflective marker | Optitrack MKR127M4-10 | Diameter: 12.7 mm |
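The entries in Table 1 also fix the angular resolution of the drive train. A minimal sketch of that arithmetic, assuming the stepper motor drives the pinion directly with no intermediate gearbox (an assumption, not stated in the table):

```python
# Drive-train resolution implied by Table 1 (assumed direct stepper-to-pinion drive).
pulses_per_motor_rev = 20_000      # stepper pulses per motor revolution
ring_teeth = 500                   # circular slide ring gear teeth
pinion_teeth = 34                  # pinion teeth

gear_ratio = ring_teeth / pinion_teeth                   # ~14.7:1 reduction
pulses_per_cms_rev = pulses_per_motor_rev * gear_ratio   # pulses per full CMS turn
deg_per_pulse = 360.0 / pulses_per_cms_rev               # ~0.0012 deg per pulse
```

Under this assumption, one stepper pulse corresponds to roughly a thousandth of a degree of ring rotation, comfortably finer than the angle-shifting intervals used in the experiments.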
Table 2. Specifications of the machine-vision camera and lens. FOV, field of view.
| Parameter | Value |
|---|---|
| Sensor size n × t | 6.4 × 4.6 [mm] |
| Focal length f | 12 [mm] |
| Camera resolution | 3840 × 2748 [pixel] |
| Length per unit pixel | 0.07841 [mm] |
| FOV size w × d | 300 × 218 [mm] |
| Working distance h | 565 [mm] |
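The FOV and pixel-scale entries of Table 2 follow from the standard pinhole-camera relation w = n·h/f (a thin-lens approximation; the small gaps to the listed values are assumed to come from rounding and measurement of the working distance):

```python
# Pinhole-camera consistency check for the values in Table 2.
n, t = 6.4, 4.6        # sensor size [mm]
f = 12.0               # focal length [mm]
h = 565.0              # working distance [mm]
res_x = 3840           # horizontal resolution [pixel]

w = n * h / f              # FOV width  ~301 mm (listed: 300 mm)
d = t * h / f              # FOV height ~217 mm (listed: 218 mm)
mm_per_pixel = w / res_x   # ~0.0785 mm/pixel (listed: 0.07841 mm)
```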
Table 3. Estimated displacement vectors (mm).
Table 3. Estimated displacement vectors (mm).
ΔPΔxΔy
P N r C 144.90112.60
P N c N r 0.83−8.53
P N c C 145.73104.07
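The three displacement vectors in Table 3 are consistent with a simple frame-chain sum, P_{NcC} = P_{NrC} + P_{NcNr}; this composition relation is inferred from the subscripts, and a quick numerical check confirms it:

```python
# Frame-chain consistency check for Table 3 (all values in mm).
p_nr_c = (144.90, 112.60)   # P_{NrC}
p_nc_nr = (0.83, -8.53)     # P_{NcNr}

# Componentwise sum should reproduce P_{NcC} = (145.73, 104.07).
p_nc_c = tuple(a + b for a, b in zip(p_nr_c, p_nc_nr))
```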
Table 4. Experimental results obtained at 480 lux (units: mm).

| Δθ | Radius | STD | Dist. Error |
|---|---|---|---|
| 20° | 92.29 | 4.67 | 2.38 |
| 10° | 92.74 | 4.35 | 1.98 |
| 5° | 91.96 | 5.92 | 1.36 |
Table 5. Experimental results obtained at 770 lux (units: mm).

| Δθ | Radius | STD | Dist. Error | Error Rate |
|---|---|---|---|---|
| 20° | 94.50 | 0.25 | 0.37 | 27% |
| 10° | 94.46 | 0.25 | 0.37 | 27% |
| 5° | 94.44 | 0.26 | 0.36 | 27% |
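The fitted radii and center estimates in Tables 4 and 5 come from circular fitting of the measured marker positions. As a minimal sketch of such a fit, the following uses the simpler Kåsa-style algebraic formulation rather than Pratt's method [42], in pure Python; the sample points in the usage note are illustrative, built from the 770-lux center estimate of Table 3 and the radius of Table 5:

```python
import math

def fit_circle(points):
    """Algebraic least-squares circle fit (Kasa formulation).
    Minimizes sum of (x^2 + y^2 + D*x + E*y + F)^2 over the points,
    then recovers center (a, b) = (-D/2, -E/2) and radius r."""
    # Accumulate the moments needed for the 3x3 normal equations.
    Sxx = Sxy = Syy = Sx = Sy = Sxz = Syz = Sz = 0.0
    for x, y in points:
        z = x * x + y * y
        Sxx += x * x; Sxy += x * y; Syy += y * y
        Sx += x; Sy += y
        Sxz += x * z; Syz += y * z; Sz += z
    A = [[Sxx, Sxy, Sx],
         [Sxy, Syy, Sy],
         [Sx,  Sy,  float(len(points))]]
    rhs = [-Sxz, -Syz, -Sz]
    D, E, F = _solve3(A, rhs)
    a, b = -D / 2.0, -E / 2.0
    r = math.sqrt(a * a + b * b - F)
    return a, b, r

def _solve3(A, b):
    """Gauss-Jordan elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [v] for row, v in zip(A, b)]
    for col in range(3):
        p = max(range(col, 3), key=lambda i: abs(M[i][col]))
        M[col], M[p] = M[p], M[col]
        for i in range(3):
            if i != col:
                f = M[i][col] / M[col][col]
                M[i] = [u - f * v for u, v in zip(M[i], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]
```

For noise-free points sampled on a circle, e.g. `[(145.73 + 94.44*math.cos(math.radians(t)), 104.07 + 94.44*math.sin(math.radians(t))) for t in range(0, 360, 5)]`, the fit recovers the center and radius exactly; with measured marker positions, the scatter of the residuals plays the role of the STD column in Tables 4 and 5.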
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Jeong, H.; Yu, J.; Lee, D. Calibration of In-Plane Center Alignment Errors in the Installation of a Circular Slide with Machine-Vision Sensor and a Reflective Marker. Sensors 2020, 20, 5916. https://doi.org/10.3390/s20205916


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
