Article

Single-Shot Intrinsic Calibration for Autonomous Driving Applications

1 Graduate School of Information Science, Nagoya University, Nagoya 464-8603, Japan
2 Graduate School of Information Science and Technology, The University of Tokyo, Tokyo 113-8656, Japan
* Author to whom correspondence should be addressed.
Sensors 2022, 22(5), 2067; https://doi.org/10.3390/s22052067
Submission received: 30 January 2022 / Revised: 26 February 2022 / Accepted: 3 March 2022 / Published: 7 March 2022

Abstract: In this paper, we present a first-of-its-kind method to determine clear and repeatable guidelines for single-shot camera intrinsic calibration using multiple checkerboards. With the help of a simulator, we found the position and rotation intervals that allow optimal corner detector performance. With these intervals defined, we generated thousands of multiple-checkerboard poses and evaluated them against ground truth values in order to obtain configurations that lead to accurate camera intrinsic parameters. We used these results to define guidelines for creating multiple-checkerboard setups. We tested and verified the robustness of the guidelines in the simulator, and additionally in the real world with cameras with different focal lengths and distortion profiles, which helps generalize our findings. Finally, we projected the point cloud from a 3D LiDAR (Light Detection and Ranging) sensor onto the image to confirm the quality of the estimated intrinsic parameters. We found that it is possible to obtain accurate intrinsic parameters for 3D applications with at least seven checkerboards in a single image, arranged following our positioning guidelines.

1. Introduction

Navigation robots, such as autonomous vehicles, require a highly accurate representation of their surroundings to navigate and reach their targets safely. Sensors such as cameras, radars, and LiDARs (Light Detection and Ranging) are commonly used to provide rich perception information, and they complement each other to supply reliable and accurate data. For example, cameras produce a dense representation of the world, including color, texture, and shape; however, they cannot provide reliable depth information at longer distances. LiDARs, on the other hand, capture dense and highly accurate range information at short, middle, and often long range, regardless of the lighting conditions.
The simultaneous integration of data from multiple sensors is known as fusion, and it is used to overcome the weaknesses of each individual sensor. State-of-the-art perception algorithms utilize fused data inside deep neural networks to improve detection accuracy. For example, some of these networks require the image, the point cloud data, and accurate camera intrinsic and camera-LiDAR extrinsic parameters to enable training and inference [1,2,3,4]. Another common application that requires precise calibration is camera-based localization, also known as visual SLAM (Simultaneous Localization and Mapping) [5,6]. In contrast, applications that do not require fusion and only operate on images might not be significantly affected by small errors in the camera intrinsic parameters; recent advances in deep learning [7,8] apply data augmentation techniques to increase resilience to image distortions. Regardless of whether camera data is used independently or as part of a fusion methodology, any application involving 3D geometry requires accurate and careful sensor calibration. Fundamental to this is how a camera is modeled in terms of its intrinsic parameters.
Cameras have become ubiquitous thanks to their low cost, high quality, and ability to represent the world with dense, feature-rich images. The images created by these devices resemble our own vision, depicting objects located at different distances with different apparent dimensions. The mathematical model commonly used to project the three-dimensional world is the pinhole camera model. In addition, the plumb-bob model, also known as the Brown–Conrady model, represents the distortion caused by the lens attached to the camera [9]. The model parameters can be estimated using a method known as camera calibration (also referred to as geometric camera calibration or camera resectioning). This method requires capturing images while moving either the camera itself or a calibration target with identifiable features of known dimensions, aiming to cover the camera's entire field of view. The targets used in this calibration process depend on the selected algorithm. Methods such as the one presented by Zhang [10] use one-dimensional targets in the form of a stick with beads attached to it, separated by a known distance. Two-dimensional target arrays have a low cost, and the methods developed for this modality provide sub-pixel accuracy. Finally, three-dimensional targets, often used for photogrammetry applications [11], offer higher accuracy but are not easily obtainable. For these reasons, computer vision and robotics applications traditionally employ 2D planar targets in the form of large flat boards with identifiable patterns such as checkerboards, arranged circles, or fiducial markers, among others. Correspondences between feature points on the planar target across all of the frames are determined in order to calculate the intrinsic parameters. This process can be tedious, especially on robots or vehicles featuring large arrays of cameras.
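As a concrete reference, the following minimal sketch projects 3D points expressed in the camera frame to pixel coordinates using the pinhole model with plumb-bob (Brown–Conrady) distortion. The parameter names (fx, fy, cx, cy, k1–k3, p1, p2) follow the common OpenCV convention and are not notation taken from this paper.

import numpy as np

def project_plumb_bob(points_cam, fx, fy, cx, cy, k1, k2, k3, p1, p2):
    """Project 3D points (N x 3, camera frame, Z forward) to pixel coordinates
    using the pinhole model with plumb-bob (Brown-Conrady) distortion."""
    x = points_cam[:, 0] / points_cam[:, 2]      # normalized image coordinates
    y = points_cam[:, 1] / points_cam[:, 2]
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    # radial plus tangential distortion terms
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    # apply the intrinsic matrix
    u = fx * x_d + cx
    v = fy * y_d + cy
    return np.stack([u, v], axis=1)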
There are general instructions for the data capture procedure, such as covering the entire frame, making sure the focus is correct, taking multiple images while keeping the focus and focal length fixed, or placing the target at the same distance as the objects to be measured [12,13,14,15]; however, there are no clear instructions on setting the checkerboard poses in a way that facilitates the process while also yielding accurate parameters. For this reason, after finishing the data acquisition and estimating the parameters, their validity is not apparent until they are applied to project 3D points. The metric commonly used to evaluate the accuracy of the parameters is the re-projection error, which measures the distance between the detected feature points and their re-projected counterparts. However, this metric uses the same points that were used to estimate the parameters, which reduces its reliability. For the above reasons, we decided to simplify and automate the calibration process using multiple checkerboards in a single image. This method could be used to build calibration stations featuring static arrays of checkerboards arranged in a specific configuration. These could be installed inside factories where vehicles with multiple cameras can be accurately calibrated from a single shot without the intervention of specialized staff, simultaneously reducing possible human error while manipulating the calibration targets. Furthermore, the parameter estimation process takes only a few seconds, helping to reduce the calibration time of multiple vehicles with multiple cameras.
Given the importance of accurate camera intrinsic parameters for 3D applications, we aim to: (1) define clear guidelines to calibrate monocular cameras accurately; and (2) create a method that allows accurate calibration from a single shot using a predefined setup with multiple checkerboards. To accomplish this, we employ a realistic simulator to generate, calibrate, and evaluate hundreds of combinations to obtain the minimum number of checkerboards, and the positions and rotations, that provide accurate intrinsic parameters. We additionally evaluate the corner detection accuracy, which is an integral part of the calibration process and often overlooked in other work. To overcome the weakness of the re-projection error metric, we intentionally project virtual 3D points, labeled as Control Points, onto the edges of the camera field of view. We then use these Control Points to verify the distortion correction qualitatively and select the best checkerboard arrangements based on combined scores. Finally, we use the top-performing checkerboard poses to calibrate real cameras and project the point cloud generated by a 3D LiDAR sensor using the estimated intrinsic parameters, in order to validate our calibration guidelines. To the best of our knowledge, this is the first work to carry out an in-depth study using simulations to provide an optimized set of guidelines for one-shot calibration.
In summary, the main contributions of this work are as follows:
  • Determination of the minimum practical number of checkerboard poses needed to accurately calculate the camera projection parameters for 3D applications.
  • Definition of guidelines for checkerboard position and rotation with respect to the camera to estimate accurate camera intrinsic parameters.
  • Validation of this single-image calibration setup in the real world with different cameras, lenses, and focal lengths, thus accelerating the camera calibration process.
  • Release of the source code and the synthetic images to facilitate the practical application and reproduction of these guidelines in the real world.
We organize this paper as follows: Section 2 discusses the related work. Section 3.1 introduces how we obtained the baseline intrinsic parameters based on a real camera; we used these parameters to generate the ground truth synthetic datasets and to evaluate the calibration checkerboard poses. Section 3.2 presents the simulator we used throughout this work, the checkerboard model, the synthetic camera, and their coordinate systems. Section 3.5 explains in detail the metrics and the experiments we carried out to understand the practical limitations of the corner detector and draws guidelines for obtaining reliable checkerboard corners. Section 3.6 introduces the metrics and the experiments we used to investigate the effects of the checkerboard poses on the camera intrinsic parameters; this section also identifies multiple checkerboard setups that show reduced error with respect to the ground truth parameters when using one-shot calibration. Section 4.1 presents the testing and evaluation we performed to replicate the optimal synthetic setups when calibrating a camera in a real-world setting and estimating the actual intrinsic parameters. Additionally, we present the validation of these parameters by projecting the point cloud generated by a 3D LiDAR onto the rectified image using the estimated camera intrinsic parameters. Finally, Section 6 presents our findings and summarizes the guidelines for accurate single-shot camera calibration that we obtained through synthetic experiments.

2. Related Work

There exists a considerable amount of work dedicated to developing techniques for the estimation of camera intrinsic parameters. Notable mentions include the work by Zhang [15], Kannala and Brandt [16], and Heikkila and Silven [17]. Despite being published more than twenty years ago, these methods provide consistent and reliable results. Moreover, the widely used open-source computer vision library OpenCV [18] and the proprietary MATLAB [19] platform use these methods in their camera calibration toolboxes due to their proven accuracy. More recent approaches use deep learning to estimate the camera intrinsic parameters with neural networks trained on large datasets of images with known intrinsic parameters [20,21]. These methods are convenient since they do not require any targets or calibration datasets. Nevertheless, these approaches are still far from matching the accuracy achieved by target-based techniques.
Zhang [15] used synthetic data while testing his calibration method to evaluate resilience against noise. He obtained good results with as few as three checkerboards, without aiming to use the parameters in 3D applications. However, he only used the checkerboard corners to measure the error, which might result in over-fitted parameters. Moreover, he did not consider the error introduced by the corner detection phase.
The work dedicated to the extrinsic calibration of LiDARs, radars, and cameras explicitly states that accurate camera intrinsic parameters are required [2,3,4,22,23,24,25,26,27]. These methods find shared features between the 2D perspective space of the images generated by the camera and the 3D Euclidean space employed by radars and LiDARs. The shared features are then input to an optimizer that estimates the extrinsic parameters (relative rotation R and translation t) by minimizing the projection error of the 3D features p_lidar when projected with the given camera intrinsic matrix P, in the form p_cam = P · [R | t] · p_lidar. This equation illustrates the importance of having high-quality camera intrinsic parameters in order to obtain accurate sensor extrinsic parameters.
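As an illustration of this projection, the sketch below projects LiDAR points into the image with OpenCV, assuming the intrinsic matrix K, the distortion coefficients, and the LiDAR-to-camera rotation R and translation t are already known; the function and variable names are ours and not part of any released code.

import cv2
import numpy as np

def project_lidar_to_image(points_lidar, K, dist_coeffs, R, t):
    """Project LiDAR points (N x 3) into the image using the camera intrinsics
    K (3 x 3), the distortion coefficients, and the LiDAR-to-camera extrinsics
    (rotation matrix R, translation vector t). Points behind the camera should
    be filtered out beforehand."""
    rvec, _ = cv2.Rodrigues(R)  # rotation matrix -> rotation vector
    pixels, _ = cv2.projectPoints(points_lidar.astype(np.float64),
                                  rvec, t.astype(np.float64), K, dist_coeffs)
    return pixels.reshape(-1, 2)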
There are a limited number of published studies about verifying estimated camera intrinsic parameters for use in 3D applications. Basso et al. [28] stressed the requirement of accurate intrinsic parameters for 3D applications such as SLAM, and introduced a method for the intrinsic and extrinsic calibration optimizer for short-range time-of-flight (ToF) sensors such as the Microsoft Kinect. Geiger et al. [29] presented a single-shot calibration method for short and long-range LiDARs and cameras. Their approach uses multiple checkerboards in a single frame to accelerate and simplify the data acquisition. However, they did not analyze or demonstrate why these positions are optimal. We extend this research direction to create clear guidelines on how to achieve consistent and accurate intrinsic parameters and introduce validation metrics to verify them.

3. Methods

3.1. Real-World Camera Baseline

While this paper focuses on the use of a simulator to optimize the calibration methodology, instead of using a purely virtual camera, we decided to use a real camera as our baseline. We therefore first needed to calibrate it, obtain the intrinsic parameters, and verify that these are appropriate for 3D applications. To calibrate our camera, we decided to use planar checkerboard targets, since they are widely available, are low cost, and have established corner detection methods that provide high accuracy [29,30]. To make the simulation closer to reality, we modeled and simulated the same checkerboard used to calibrate our real camera.
Additionally, to calibrate our real baseline camera and the virtual cameras generated by the simulator, we re-implemented the corner detection method presented by Geiger et al. [29], based on Ha’s algorithm [31]. This is due to its simplicity and proven advantage in noisy and blurry environments when compared to the Harris [32], and Shi-Tomasi [33] corner detectors included in OpenCV.

3.1.1. Baseline Calibration

We intrinsically calibrated a 5.4 MP Lucid Vision Labs machine vision camera (TRI054S) paired with an 8 mm focal length Fujinon lens. We used an 800 mm × 600 mm planar checkerboard printed on 4 mm thick aluminum, with an eight-by-six pattern and a 100 mm square size. A total of 292 checkerboard poses were used to generate the baseline. We then used the OpenCV [18] camera calibration toolkit based on Zhang's method [15] and MATLAB's [34] Adaptive Thresholding to obtain the camera intrinsic parameters (principal point, focal length, axis skew), three radial distortion coefficients, and two tangential distortion coefficients. To validate that these parameters are accurate for 3D applications, we projected the point cloud generated by a 3D LiDAR sensor, a Hesai Pandar 64, extrinsically calibrated using the method by Zhou et al. [25].
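A minimal sketch of this kind of calibration with the OpenCV toolkit is given below. It uses OpenCV's built-in chessboard detector rather than the re-implemented detector described in Section 3.1, and the folder name and pattern constants are illustrative assumptions (7 × 5 inner corners for the eight-by-six pattern, 100 mm squares).

import glob
import cv2
import numpy as np

rows, cols, square = 5, 7, 0.1  # inner corners of an 8 x 6 pattern, 100 mm squares

# 3D corner coordinates on the checkerboard plane (Z = 0)
objp = np.zeros((rows * cols, 3), np.float32)
objp[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * square

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calibration_images/*.png"):  # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, (cols, rows))
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("re-projection RMSE:", rms)
print("K:\n", K, "\ndistortion (k1, k2, p1, p2, k3):", dist.ravel())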

3.2. Simulation

Having a baseline defined by a real camera calibrated with a checkerboard, we created a 3D model with the help of Blender [35], based on the printed planar checkerboard mentioned in Section 3.1.1. We then converted the checkerboard model for use within the LGSVL (LG Silicon Valley) simulator [36] as a controllable object.
The LGSVL Simulator allows the creation of virtual locations, weather scenarios, obstacles, and one ego-vehicle. Any number of sensors can be attached to the ego-vehicle, such as cameras, LiDARs, and GNSS. With the help of the simulator API, we generated an empty scene with an ego vehicle and one camera with the parameters introduced in Section 3.1.1. We then dynamically generated multiple instances of our checkerboard controllable object, as illustrated in Figure 1. The camera simulated by the LGSVL simulator rendered images using the plumb-bob model with the given intrinsic parameters, and the simulator API allowed us to save these renders as image files.

3.3. Checkerboard Coordinate System

We defined the checkerboard coordinate system to be right-handed. The Z-axis is normal to the checkerboard plane, the X-axis is parallel to the checkerboard’s short side, and the Y-axis is parallel to the long side of the checkerboard. The origin is located at the center of the checkerboard as illustrated in Figure 1.

3.4. Simulator Coordinate System

The simulator coordinate system is left-handed. The X-axis faces to the right, the Y-axis points upwards, and the Z-axis is normal to the camera plane and faces forward.
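For clarity, a small sketch of a possible conversion between the two conventions is given below. It assumes the target right-handed, camera-style frame has the X-axis pointing right, the Y-axis pointing down, and the Z-axis pointing forward; this target convention is an assumption on our part and not one stated in the paper.

import numpy as np

# Map a point from the left-handed simulator frame (X right, Y up, Z forward)
# to a right-handed camera-style frame (X right, Y down, Z forward) by
# flipping the Y axis.
SIM_TO_CAM = np.diag([1.0, -1.0, 1.0])

def sim_to_camera(points_sim):
    """points_sim: N x 3 array of points in simulator coordinates."""
    return points_sim @ SIM_TO_CAM.T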

3.5. Checkerboard Corner Detector Evaluation

Before starting to simulate multiple checkerboards, we decided to initially evaluate the limits of our re-implemented version of the corner detector (based on Geiger et al. [29], and Guiming and Jidong [30]) inside the simulator. We used this information to decide the pose and distance intervals that will have a higher probability of detecting the checkerboard corners, and therefore produce more accurate intrinsic parameters. This step is of utmost importance since the detected checkerboard corners are the inputs for the optimizer. If they contain significant errors, the estimated parameters will be inaccurate.

3.5.1. Corner Detector Metrics

To experimentally obtain the intervals at which the checkerboard corner detector fails, we located the checkerboard at the center of the camera's field of view in the simulated world. We rotated the checkerboard with respect to its X (roll, α), Y (pitch, β), and Z (yaw, γ) axes over a [−90, 90] degree interval, with one-degree steps for the roll and pitch and five-degree steps for the yaw. To obtain the maximum distance at which the detector would fail, we moved the checkerboard away from the camera in 1 m steps until the detector failed. Additionally, to evaluate the corner detector, we defined the following variables and statistics:
  • Ground Truth 3D corners C_3Dt are the N three-dimensional points in camera space, where N = u × v, and u and v are the number of inner rows and columns of the checkerboard, respectively.
  • Corner RMSE is calculated between the true 2D corners C_2Dt in the distorted images generated by the simulator and the corners C_c computed by the corner detector:
    $$ \mathrm{Corner\ RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left\lVert C_{2Dt}^{\,i} - C_{c}^{\,i}\right\rVert^{2}} $$
    The true 2D corners are obtained by projecting the true 3D corners as C_2Dt = P · C_3Dt, where P is the optimized projection matrix from 3D camera space to 2D image space after applying distortion correction, defined as
    $$ P = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}. $$
    Translation and rotation are not required in this case because the corner points in C_3Dt are already in camera space.
  • Inner Checkerboard Area is calculated as the sum of the areas of the two triangles formed by the outermost detected corners of the checkerboard. The area calculation would be exact in an undistorted image but not in a distorted one; for this reason, we estimate the area with two triangles, which we propose produces better results than assuming a parallelogram (see the sketch after this list).
  • Checkerboard-Image Plane Angle is defined as the angle between the checkerboard plane normal (as defined by the corners) and the image plane normal (camera z-axis).
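The following sketch illustrates the Corner RMSE and the two-triangle inner area described above, assuming the corners are given as N × 2 pixel arrays and that the four outermost inner corners have already been identified; the function names are ours.

import numpy as np

def corner_rmse(corners_true, corners_detected):
    """RMSE between the true 2D corners and the detected corners (both N x 2)."""
    d = np.linalg.norm(corners_true - corners_detected, axis=1)
    return np.sqrt(np.mean(d ** 2))

def triangle_area(a, b, c):
    """Area of a 2D triangle via the cross product (shoelace formula)."""
    return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))

def inner_checkerboard_area(tl, tr, br, bl):
    """Approximate the (possibly distorted) inner quadrilateral area with two
    triangles built from the four outermost inner corners."""
    return triangle_area(tl, tr, br) + triangle_area(tl, br, bl)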

3.5.2. Experiments

Rolling Experiment. For this experiment, we positioned the origin of the checkerboard at the same height as the camera origin, and set the checkerboard 4 m away along the Z-axis, then varied the roll angle between 0 and 90 degrees in one-degree steps.
Pitching Experiment. In this experiment, we aligned the checkerboard and camera origins, placed the checkerboard 4 m away from the camera, and varied the pitch rotation in the [ 0 , 90 ] interval with one-degree steps.
Yaw Experiment. For this experiment, we aligned the checkerboard and camera origins and positioned the checkerboard 4 m away from the camera on the Z-axis. We then rotated the checkerboard with respect to the checkerboard's Z-axis over the [−90, 90] degree interval in five-degree steps.
Simultaneous Rolling and Pitching Experiment. In this experiment, we examined the effects of simultaneously varying the pitch and roll on the corner detector. We set the checkerboard origin height to match the camera's and placed the checkerboard 4 m away from the camera along the Z-axis. Additionally, we fixed the yaw rotation to 53.14 degrees; this angle aligns the checkerboard's longer diagonal with the vertical axis, which helped us simulate the same circumstances that we would use in an actual camera–3D sensor extrinsic calibration [25]. We then simultaneously varied the roll (α) and pitch (β) over the intervals [−80, 80] and [−60, 60] degrees, respectively.
Range Experiment. In this experiment, we set the camera origin and the checkerboard origin to have the same height and initially separated them by 4 m along the Z-axis. To verify the maximum detection distance of the checkerboard corner detector, we moved the checkerboard away from the camera in 1 m steps until it failed. Additionally, once we obtained the checkerboard corner detector failure range, we repeated the experiment focusing on the working area with 0.5 m steps to understand better the detector’s performance.

3.5.3. Results

Figure 2 and Figure 3 show the results of the corner detector experiments in the simulator. From these, we can draw the following guidelines regarding the corner detector:
  • We found that the corner detector's peak performance with respect to the roll rotation between the camera plane normal and the checkerboard normal lies between 0 and 60 degrees, as shown in Figure 2a. Rotations below 70 degrees still yield reliable corner detections, but performance quickly decreases at angles larger than 70 degrees, and the detector completely fails for angles larger than 78 degrees.
  • We observed in Figure 2b that the corner detector performs best between 20 and 60 degrees when varying the pitch angle between the camera plane normal and the checkerboard normal. The performance degrades at angles larger than 60 degrees, until it cannot detect any corners at all after 78 degrees.
  • From Figure 2c, we can see that the corner detector also performs best between 20 and 60 degrees when simultaneously varying the pitch and roll between the camera plane normal and the checkerboard normal. Similarly, we see a sharp reduction in accuracy when the rotations surpass 78 degrees.
  • From Figure 3a, we can see that the corner detector can detect corners reliably up to 35 m, and trivially confirm that the corner RMSE increases with distance; however, Figure 3b only shows a pronounced drop after 10 m. With the intention of including multiple checkerboards per frame, we suggest placing the checkerboards within 10 m of the camera, or ensuring that the checkerboard's visible inner area covers at least 20,000 px². This area value is resolution independent, so we propose it as a guideline for perspective cameras and lenses that produce a different field of view.

3.6. Simulated Calibration Experiments

With the knowledge obtained about the impacts of distance and rotation on the corner detector, we investigated the checkerboard positions in the image frame and their influence on the camera intrinsic parameters. To achieve this, we needed to define the metrics to assist evaluation for each set of checkerboard poses and positions.

3.6.1. Checkerboard Pose Metrics

To verify whether a set of checkerboard poses provides a better estimation of the camera's intrinsic parameters, we calculated the root mean square error (RMSE) between the ground truth parameters and those estimated from the corners of the checkerboards detected in the image using OpenCV [18]. We measured the following parameters:
  • Focal length (f_x, f_y).
  • Center point (c_x, c_y).
  • Distortion coefficients: three radial (k_1, k_2, k_3) and two tangential (p_1, p_2).
In addition to the intrinsic parameters, we also obtained:
  • The RMSE between the ground truth corner positions and the projected corner points using the estimated intrinsic parameters.
  • The checkerboard corner re-projection error, which is the distance between the detected corners in a calibration image, and the corresponding 3D corner points projected into the same image.
  • The Control Points re-projection error, which is the distance between the projections of a 3D Control Point when using the estimated and the ground truth intrinsic parameters. In Section 3.6.2, we introduce and describe the “Control Points” in more detail.

3.6.2. Control Points

The re-projection error is a metric used to quantify the distance between the estimated projection of a 3D point and its actual projection. This metric is widely used to estimate the performance of the camera intrinsic parameters by measuring the detected corners against their re-projected 3D counterparts. However, since these points only cover areas belonging to the checkerboard, this metric is unreliable in other parts of the image. For this reason, we decided to purposely insert 3D virtual points into the simulation to help measure the performance at the edges of the image.
The Control Points are 3D virtual points strategically positioned in the camera frustum that we defined in Section 3.1. They target the areas of the camera field of view that are prone to project points incorrectly due to lens distortions. We located these points systematically as shown in Figure 4 at 5 m and 50 m from the camera origin. We defined those two distances to test the performance at close and long-range.
Figure 5 shows two simulated checkerboards with an almost identical re-projection error value when using only the checkerboard corners. The ground truth checkerboard corners are drawn with a red crosshair, while the re-projected corners are drawn with a green crosshair. The corner points for both checkerboards are re-projected after un-distortion with sub-pixel accuracy. However, when using the Control Points to calculate the re-projection error in Figure 5a, the error metric increases considerably. The checkerboard in Figure 5b, on the other hand, has lower control point re-projection error due to the intrinsic parameters being closer to the ground truth. We use this new metric to help us determine whether a checkerboard pose is adequate or not.
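The Control Point re-projection error described above can be computed along the lines of the following sketch, which projects the same camera-frame 3D points with the estimated and the ground truth intrinsics using OpenCV and averages the pixel distances; the function name and argument layout are illustrative and not taken from the released code.

import cv2
import numpy as np

def control_point_error(points_3d, K_est, dist_est, K_gt, dist_gt):
    """Mean pixel distance between the projections of the 3D Control Points
    using the estimated and the ground truth intrinsic parameters. The points
    are expressed in the camera frame, so no extrinsics are needed (zero
    rotation and translation)."""
    pts = points_3d.astype(np.float64)
    zero = np.zeros(3)
    est, _ = cv2.projectPoints(pts, zero, zero, K_est, dist_est)
    gt, _ = cv2.projectPoints(pts, zero, zero, K_gt, dist_gt)
    diff = est.reshape(-1, 2) - gt.reshape(-1, 2)
    return float(np.mean(np.linalg.norm(diff, axis=1)))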

3.6.3. Dual Checkerboard Calibration

In this series of experiments, we evaluated the effect of varying the position and rotation of each checkerboard. This paper aims to formulate guidelines that will help to narrow down the number of combinations required when increasing the number of checkerboards. To do this, we must determine the poses that result in a minimized error.
Having explained the proposed method for corner detector evaluation and defined the required metrics, we also aim to evaluate how the following items affect the calibration parameters: the checkerboards' rotations (Section 3.6.4), their horizontal positioning (Section 3.6.5), their vertical positioning (Section 3.6.6), and their distance from the camera (Section 3.6.7).
Finally, we present the results of all of the above items in Section 3.6.8.

3.6.4. Dual Checkerboard Rotation Experiments

In these experiments, we investigate the checkerboard pose performance when varying the roll (α) and pitch (β) angles in different rotational combination patterns: individual, simultaneous, symmetric, and asymmetric. For all of these experiments, we fixed the checkerboard yaw angle to 53.14 degrees because this aligns the checkerboard's longer diagonal with the vertical axis, which allowed us to simplify the camera–LiDAR calibration [25]. This condition was required to simulate the same settings used in the real-world scenario.
We used the roll and pitch rotation intervals we obtained in Section 3.5 for the following experiments, since we previously determined that these would provide accurate checkerboard corner detections.
We prepend all of the experiments in this section with the letter A, denoting the angle variation. The following list summarizes the experiments we carried out, with the rotation experiments additionally summarized in Table 1:
A1. Varying the pitch and roll of the right checkerboard, while keeping the left checkerboard static. We simulated all the roll (α) and pitch (β) combinations over the [−60, 60] degree interval in 10-degree steps.
A2. Varying the pitch and roll of the left checkerboard, while keeping the right checkerboard static. We varied the roll and pitch over the [−60, 60] degree interval in 10-degree steps.
A3. Varying the pitch and roll of the left and right checkerboards simultaneously over the [−60, 60] degree interval in 10-degree steps. The yaw angle of both checkerboards was set to 53.14 degrees.
A4. This experiment was similar to A3: we varied the pitch and roll of the left and right checkerboards simultaneously over the [−60, 60] degree interval in 10-degree steps. However, in this experiment, we mirrored the yaw angle between checkerboards, setting the left yaw γ_l to 53.14 degrees and the right yaw γ_r to −53.14 degrees.
A5. Similar to the A3 experiment, we varied the roll and pitch angles simultaneously over the [−60, 60] degree interval in 10-degree steps, but in this experiment we mirrored all of the rotations (α_l = −α_r, β_l = −β_r, and γ_l = −γ_r).

3.6.5. Dual Checkerboard Horizontal Positioning Experiments

In these experiments, we investigated the checkerboard pose performance when varying the horizontal positioning for two simultaneous checkerboards in the camera field-of-view. We used different combinations while keeping the vertical positioning fixed.
From Section 3.5.3, we obtained the performance limits of the corner detector we selected in Section 3.2. Additionally, we confirmed that the closer the checkerboard is to the camera plane, the better the corner detector performs. For this reason, we positioned both checkerboards five meters from the camera plane along the camera's Z-axis. Additionally, for these experiments we selected the roll and pitch rotations that provided the best corner detection results. With these initial conditions, we simulated the following experiments involving variation of the horizontal positioning of the checkerboards. We prepend all of the experiments in this section with the letter H, denoting the horizontal variation. We also summarize the experiment parameters and results in Table 2:
H1. In this experiment, we modified the horizontal position of both checkerboards simultaneously along the camera X-axis, while each checkerboard faced the camera with different rotation angles. We positioned the left checkerboard at the left edge of the image, with the right checkerboard next to its right side, separated by 0.9 m. Additionally, we set the right checkerboard to −40, 20, and 53.14 degrees for the roll, pitch, and yaw, respectively, and the left checkerboard to 0, 0, and 53.14 degrees. We then moved both checkerboards in 0.02 m steps until the right checkerboard reached the image edge.
H2. Similar to H1, in this experiment we changed only the horizontal position, in 0.02 m steps, until the right checkerboard reached the image edge. However, the checkerboard normals had their rotation angles mirrored: the left checkerboard roll, pitch, and yaw were set to 50, −20, and 53.14 degrees, respectively, while the right ones were set to −50, 20, and 53.14 degrees.
H3. Contrary to H1 and H2, in this experiment we fixed the initial position of both checkerboards at the center of the image and moved each checkerboard simultaneously towards the left and right image edges in 0.01 m steps. We set the right checkerboard roll, pitch, and yaw rotations to 40, 20, and 53.14 degrees, respectively, while setting the left checkerboard rotations to 0, 0, and 53.14 degrees.
H4. In this experiment, we initially positioned both checkerboards at the center of the frame, separated by 0.8 m. We then moved each checkerboard towards the left and right edges, respectively, in 0.01 m steps. We also set the left and right checkerboards to point in opposite directions: the right checkerboard roll was set to −50, the pitch to 20, and the yaw to 53.14 degrees, while the left one was set to 50, −20, and 53.14 degrees for the roll, pitch, and yaw, respectively.
H5. In this experiment, we set both checkerboards at the center of the frame, separated by 0.8 m, with their rotation angles mirrored. We then moved the left checkerboard towards the left edge of the image and simultaneously moved the right checkerboard towards the right side of the frame, both in 0.01 m steps. The right checkerboard roll, pitch, and yaw rotations were set to 50, −20, and 53.14 degrees, respectively, while the left checkerboard rotations were set to −50, 20, and 53.14 degrees.

3.6.6. Dual Checkerboard Vertical Positioning Experiments

In these experiments, we examined the checkerboard pose performance when varying the vertical positioning of the two checkerboards in the camera field of view, both symmetrically and asymmetrically. Similar to the experiments of Section 3.6.5, we selected the best-performing rotations for the corner detector from Section 3.5.3. The following list explains the vertical positioning experiments. We prepend the experiments in this section with V, denoting the vertical variation. We additionally include a summary of these experiments in Table 2:
V1. In this experiment, we initially positioned the left checkerboard at the bottom-left edge and the right checkerboard at the lower right of the frame. We fixed the rotations of both checkerboards to 40, 20, and 53.14 degrees for the roll, pitch, and yaw, respectively. We then moved both checkerboards simultaneously towards the top of the image in 0.02 m steps, without modifying the horizontal position.
V2. In this experiment, we initially positioned the left checkerboard at the top-left edge and the right checkerboard at the lower right of the frame. We fixed both checkerboard rotations to 40, 20, and 53.14 degrees for the roll, pitch, and yaw, respectively. We then moved the left checkerboard towards the bottom left and the right checkerboard towards the top right in 0.02 m steps, without modifying the horizontal position.

3.6.7. Dual Checkerboard Distance Experiments

Although we know that the corner detector performs better the closer the checkerboard is to the camera, in these experiments we evaluated the impact of checkerboard scale on the accuracy of the intrinsic parameters. Similar to the horizontal and vertical experiments, we selected the best-performing poses from Section 3.5.3. Additionally, following Section 3.6.5, we chose to keep the checkerboards near the left and right edges, since this positioning provided better parameter estimation. The following list describes the distance experiments. We prepend the experiments in this section with D, denoting the distance variation. We additionally include a summary of these experiments in Table 3:
D1. In this experiment, we initially positioned both checkerboards at the camera height, with the left checkerboard 5 m from the camera and the right one at 10 m. We kept the left checkerboard static on the left image edge during the whole experiment, while we reduced the right checkerboard's distance in 0.1 m steps until it reached 4.6 m. The roll, pitch, and yaw of both checkerboards were set to 40, 20, and 53.14 degrees, respectively.
D2. In this experiment, we initially positioned both checkerboards at the camera height, with one on the left image edge and the other on the right. We set the initial distance of both checkerboards to 10 m and then moved them towards the camera until they reached a distance of 5 m. We fixed the roll, pitch, and yaw of both checkerboards to 40, 20, and 53.14 degrees, respectively.

3.6.8. Dual Checkerboard Calibration Results

Looking at the results in Table 1, Table 2 and Table 3, we can infer the following guidelines:
1. It is essential to locate the checkerboards near the image edges in order to generate distortion parameters closer to the ground truth, a well-known fact [11,37,38].
2. Having a single checkerboard pointing directly at the camera while the other is rotated seems to produce better overall parameters. We hypothesize that this dual checkerboard combination provides enough information to estimate the center point and the focal length accurately, while also producing sufficient data to calculate the distortion parameters.
3. Keeping the checkerboard rotation inside the [−60, 60] degree interval, as we defined in Section 3.5.3, is essential to increase the corner detection accuracy.
4. Checkerboards located at the image corners are not required, as long as the user can position the checkerboards near the image edges.
5. Rotating the checkerboards relative to the camera plane is more beneficial than placing them farther away, since closer checkerboards produce more accurate corners, and checkerboards rotated near the acceptable rotation limits also provide the required scale variation.

3.6.9. Multiple Checkerboards Calibration

Based on the guidelines from Section 3.6.8, we defined the poses of up to 10 checkerboards in the image frame. Since we had narrowed down the combinations, all of these experiments contain only fixed positions and rotations, instead of simulation intervals. Additionally, starting from the top-scoring poses, we manually picked other performant poses to form two setups: one with six and the other with seven checkerboards. To form these "custom" setups, denoted by the subscript C, we chose positions that we considered "easy" to replicate in the real world. For instance, we preferred (but did not prioritize) checkerboard positions closer to the ground or well separated from each other; this helps accelerate the checkerboard positioning setup in a real-world scenario. However, we always followed the guidelines defined in Section 3.6.8.
The summary of the simulation experiments with multiple checkerboards is in Table 4. We decided to limit the number of checkerboards to 10 because adding more checkerboards would prove a challenge to set up in the real and simulated worlds while maintaining all the checkerboards at a close range to maximize the performance of the corner detector.

3.6.10. Multiple Checkerboards Calibration Results

Table 5 and Table 6 show that it might be feasible to calibrate with as few as three checkerboards and obtain good calibration results. However, in Figure 6 we can see that the α1 experiment, with three checkerboards, presents a larger-than-average focal length error, which might produce scaling problems and an irregular projection at different distances.
Additionally, from the same results, we can see that having more checkerboards produces more accurate intrinsic parameters. However, we can also infer that using six or seven checkerboards provides enough information to estimate highly accurate calibration parameters for 3D applications. This number of checkerboards gives a realistic and balanced approach, since having more than seven simultaneous checkerboards in the frame might cause difficulties when replicating the setups presented in Table 4 with real checkerboards and tripods. For this reason, in the next section, we investigate and test these setups with real cameras, project the point cloud from a 3D LiDAR, and verify the quality of the camera intrinsic parameters.

4. Results

4.1. Real-World Calibration Verification

For real-world calibration verification, we used the Lucid Vision Labs machine vision camera introduced in Section 3.1.1 and an additional FLIR Blackfly camera (PGE-23S6C) paired with an 8 mm μTron lens. This camera has a lower resolution (1920 × 1200 pixels, approximately 2.3 MP) and a larger image sensor, leading to a wider field of view even when using a lens with the same focal length. Additionally, we paired a 25 mm Fujinon lens with the Lucid camera to verify a telephoto scenario. We calibrated and validated the three sets as described in Section 3.1.1. A summary of the cameras we used to validate our guidelines is given in Table 6.
These experiments also helped us study how our guidelines perform for multiple camera resolutions and for wide and narrow fields of view. Wide-angle lenses are commonly used in autonomous driving or robotics applications for peripheral sensing. In contrast, telephoto lenses are used for long-range applications, such as traffic light detection and classification, or object detection at long range when driving on highways. We proceeded to replicate the checkerboard positions we obtained in Section 3.6.10 for the setups with six and seven checkerboards (δ1, δC, ϵ1, ϵC) for the 8 mm and 25 mm Fujinon lenses on the Lucid camera and the 8 mm μTron lens on the FLIR camera, for a total of 12 experiments.

4.2. Multiple Checkerboard Verification Experiments

The checkerboard poses we obtained from the simulator are defined by their roll, pitch, and yaw rotations. These angles are difficult for a human to replicate in the real world with such accuracy. For this reason, and to simplify and accelerate the checkerboard positioning in a garage, we created transparent image overlays of the checkerboard poses rendered by the simulator. With the help of the Robot Operating System (ROS) [39], we wrote a node that receives the camera stream, superimposes the overlay in real time, and projects the composed image onto a large screen, which we used while replicating the checkerboard poses defined by the δ1, δC, ϵ1, and ϵC experiments in a garage. Figure 7 shows the setup we used while matching the checkerboard poses with the help of tripods. To simplify the replication of our experiments by the community, we publicly released the image overlays and the ROS node at the following repository: https://gitlab.com/perceptionengine/pe-datatools/ros_image_overlay (accessed on 29 January 2022).
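A minimal sketch of such an overlay node is shown below; the topic names, parameters, and blending weight are placeholders and not necessarily those used in the released ros_image_overlay package.

#!/usr/bin/env python
import cv2
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

class ImageOverlay:
    """Superimpose a semi-transparent checkerboard-pose overlay on the
    incoming camera stream (topic and parameter names are placeholders)."""
    def __init__(self):
        self.bridge = CvBridge()
        self.overlay = cv2.imread(rospy.get_param("~overlay_path"))
        self.alpha = rospy.get_param("~alpha", 0.4)
        self.pub = rospy.Publisher("image_overlay", Image, queue_size=1)
        rospy.Subscriber("image_raw", Image, self.callback, queue_size=1)

    def callback(self, msg):
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        overlay = cv2.resize(self.overlay, (frame.shape[1], frame.shape[0]))
        blended = cv2.addWeighted(frame, 1.0 - self.alpha, overlay, self.alpha, 0.0)
        self.pub.publish(self.bridge.cv2_to_imgmsg(blended, encoding="bgr8"))

if __name__ == "__main__":
    rospy.init_node("image_overlay")
    node = ImageOverlay()
    rospy.spin()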
Having matched the checkerboard positions with our simulation setups, we captured a single image for the δ1, δC, ϵ1, and ϵC experiments from Section 3.6.9 with the Lucid and FLIR cameras; we then detected the checkerboard corners with the corner detector introduced in Section 3.1.1 and obtained the intrinsic parameters for each camera setup using OpenCV [18].
In addition to the one-shot calibration dataset, we captured camera and LiDAR data outdoors. With the help of this dataset, we obtained the extrinsic calibration parameters between the Lucid and FLIR cameras and the 3D LiDAR, which we summarize in Table 7. We then individually projected the 3D LiDAR point cloud onto the same image captured by the camera using the four sets of intrinsic parameters (δ1, δC, ϵ1, ϵC). We repeated this procedure for the lower-resolution FLIR camera and for the Lucid camera with the telephoto lens.

5. Discussion

Real-World Calibration Results

Having estimated the intrinsic parameters for the setups outlined in Table 8 and Table 9 using the single-shot setups, we obtained the absolute error between these intrinsic parameters and our baseline values. We prepared qualitative tests, for which we projected the point cloud from a Hesai Pandar 64 LiDAR onto the image rectified with each set of estimated parameters from the δ1, δC, ϵ1, and ϵC experiments, and compiled them in Figure 8, Figure 9 and Figure 10. Additionally, as a quantitative evaluation, we calculated the checkerboard corner re-projection error and summarized the results in Figure 11. In this figure, we can verify that both of the seven-checkerboard setups provided the most reliable intrinsic parameters, as we expected.
Both of the ϵ experiments (seven checkerboards) show better overall performance than their equivalent δ counterparts (six checkerboards), as can be observed in Figure 11a,b. This holds true for both the Lucid cameras and the FLIR camera, which is consistent with our simulation results in Figure 6 and the qualitative results in Figure 8 and Figure 9. The best-performing experiment in our simulation (ϵC) obtained the lowest error in our real-world experiments. Moreover, the corresponding qualitative results from both cameras also exhibit an excellent point cloud projection, as illustrated by Figure 9g,h and Figure 10g,j.
The worst-performing simulated experiment (δC) also matched our real experiment results, in both the wide-angle and the telephoto lens experiments. While obtaining the extrinsic parameters, we noted that the roll rotation for both cameras in the δC test is slightly different from the rest of the experiments, as we can see in Table 7. The extrinsic calibration method searches for shared features in the image and LiDAR domains and uses these as input for an optimizer that minimizes the re-projection error. It utilizes the detected 3D features and projects them using the given camera intrinsic parameters. However, if the intrinsic parameters project the 3D features erratically, the optimizer will estimate an incorrect relative camera–LiDAR position, leading us to infer that the camera–LiDAR extrinsic calibration algorithm converged incorrectly due to inaccurate intrinsic parameters. Additionally, after analyzing the qualitative results for the δC experiment, we noticed an adequate projection at long range; however, in the bottom-right section of Figure 8c,d and Figure 9c,d, we found that the point cloud hitting the barricade located closer to the camera is incorrectly projected, indicating a large error in the focal length. This is confirmed by the quantitative results presented in Figure 11a,b.
Telephoto lenses tend to exhibit low pincushion distortion in the center of the image, which is characterized by larger k3 coefficients. For this reason, it might appear that the ϵ1 experiment with the telephoto lens performs worse in terms of the distortion coefficients. However, even if the difference visible in Figure 11c might seem significant, it only slightly affects the edges of the image, as shown in Figure 12; the qualitative results in Figure 10 also show that the ϵ parameters provide a good point cloud projection. Additionally, in terms of extrinsic calibration, we see that, similar to the wide-angle experiments, the close-range projection in the δ experiments appears to have an incorrect scale, as shown by the barricade we purposely placed close to the camera (Figure 10a,c). Finally, it is essential to mention that we carried out the experiments with the telephoto camera using a slightly different setup; for that reason, the values of the extrinsic parameters for the telephoto setup in Table 7 differ from their wide-angle counterparts.

6. Conclusions

We presented a first-of-its-kind method to generate clear guidelines for single-shot camera intrinsic calibration using multiple checkerboards, suitable for use in 3D applications. With the help of a simulator, we defined the position and rotation intervals that allow a corner detector to obtain optimal detections; we then generated thousands of multiple-checkerboard poses and evaluated them to obtain position and rotation intervals that maximize the probability of estimating accurate camera intrinsic parameters. These results gave us enough information to generate checkerboard pose guidelines. Using these guidelines, we developed sets of multiple checkerboard poses and evaluated them synthetically and in the real world using three different camera sets with different resolutions and fields of view.
The overall results show that the camera simulations helped us to accelerate the camera modeling process, its evaluation, and ultimately the creation of guidelines to obtain accurate intrinsic parameters. We can also infer that even if the simulations create ideal image conditions, i.e., images without chromatic aberration, vignetting, and so on, we can still transfer the lessons learned to the real world. It would have been challenging and costly to exactly replicate the simulated experiments in the real world since they require specialized equipment to position and rotate the checkerboards. Moreover, to obtain the ground truth corner coordinates, a team of labelers would be necessary to identify each corner at the pixel level, extending the time required to complete this work.
In this paper, we simulated and evaluated a single plumb-bob camera synthetically. For this reason, the distances we mention might not apply to lenses with other focal lengths. Nevertheless, as an additional finding, we learned in Section 3.5.3 that if the checkerboard guidelines defined in Section 3.6.10 are to be used with a camera with a different field of view, then instead of following the recommended checkerboard distance, the user should aim for a projected checkerboard inner area of at least 20,000 px² to obtain precise corner detection. We validated this experimentally when testing on the FLIR camera in Section 4.1, which has a wider field of view and less than half the resolution of the Lucid camera. We additionally validated our guidelines on a telephoto lens with about a third of the field of view of the simulated lens. It is important to note that the simulated experiments and Control Point validation were designed for a specific wide-angle lens and would need to be adapted for a telephoto lens with a narrower field of view and longer reach. Furthermore, our results might not apply to cameras paired with fish-eye lenses, since these are normally modeled with non-perspective projection models, which we aim to test in future work. Nevertheless, we found consistent results between our simulations and our real-world evaluations, thanks to the real-world experiments and the validation experiments with the telephoto lens. This enabled us to confirm that, if our guidelines are followed, accurate intrinsic parameters for 3D applications can be obtained using seven checkerboards.
Finally, the checkerboard pose guidelines we defined in Section 3.6.8 and Table 4 can also be used with the original method presented by Zhang [15], which involves using a single checkerboard and capturing multiple images with random orientations. Instead of moving the checkerboard randomly around the whole camera field of view, the user might aim to replicate the checkerboard poses we defined for the δ1 and ϵC experiments, or even the ζ, η, or θ experiments, and use the images as input for either the OpenCV or MATLAB calibration toolboxes. This would simplify and accelerate the calibration process, while helping to estimate accurate intrinsic parameters for 3D applications such as robotics, autonomous driving, or other applications that require high-quality parameters.

Author Contributions

Conceptualization, A.M.C.; methodology, A.M.C. and J.L.; software, A.M.C. and J.L.; validation, A.M.C., J.L., S.K. and M.E.; formal analysis, A.M.C.; investigation, A.M.C. and J.L.; resources, S.K. and M.E.; data curation, A.M.C. and J.L.; writing—original draft preparation, A.M.C.; writing—review and editing, A.M.C. and J.L.; visualization, A.M.C. and J.L.; supervision, S.K. and M.E.; project administration, S.K. and M.E.; funding acquisition, S.K. and M.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
LiDAR   Light Detection and Ranging
SLAM    Simultaneous Localization and Mapping
ROS     Robot Operating System
ToF     Time-of-Flight

References

  1. Vora, S.; Lang, A.H.; Helou, B.; Beijbom, O. PointPainting: Sequential Fusion for 3D Object Detection. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 4603–4611.
  2. Ku, J.; Mozifian, M.; Lee, J.; Harakeh, A.; Waslander, S.L. Joint 3D Proposal Generation and Object Detection from View Aggregation. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 1–8.
  3. Qi, C.R.; Liu, W.; Wu, C.; Su, H.; Guibas, L.J. Frustum PointNets for 3D Object Detection from RGB-D Data. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 918–927.
  4. Chen, X.; Ma, H.; Wan, J.; Li, B.; Xia, T. Multi-view 3D Object Detection Network for Autonomous Driving. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6526–6534.
  5. Caselitz, T.; Steder, B.; Ruhnke, M.; Burgard, W. Monocular camera localization in 3D LiDAR maps. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 1926–1931.
  6. Wolcott, R.W.; Eustice, R.M. Visual localization within LIDAR maps for automated urban driving. In Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014; pp. 176–183.
  7. Muñoz-Bulnes, J.; Fernandez, C.; Parra, I.; Fernández-Llorca, D.; Sotelo, M.A. Deep fully convolutional networks with random data augmentation for enhanced generalization in road detection. In Proceedings of the 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), Yokohama, Japan, 16–19 October 2017; pp. 366–371.
  8. Mikołajczyk, A.; Grochowski, M. Data augmentation for improving deep learning in image classification problem. In Proceedings of the 2018 International Interdisciplinary PhD Workshop (IIPhDW), Swinoujscie, Poland, 9–12 May 2018; pp. 117–122.
  9. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision, 2nd ed.; Cambridge University Press: New York, NY, USA, 2003.
  10. Zhang, Z. Camera calibration with one-dimensional objects. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 892–899.
  11. Remondino, F.; Fraser, C. Digital camera calibration methods: Considerations and comparisons. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2005, 36, 266–272.
  12. Ricolfe-Viala, C.; Esparza, A. The Influence of Autofocus Lenses in the Camera Calibration Process. IEEE Trans. Instrum. Meas. 2021, 70, 1–15.
  13. The Mathworks, Inc. Evaluating the Accuracy of Single Camera Calibration. 2021. Available online: https://www.mathworks.com/help/vision/ug/evaluating-the-accuracy-of-single-camera-calibration.html (accessed on 29 January 2022).
  14. Abeles, P. BoofCV. 2011–2021. Available online: http://boofcv.org/ (accessed on 29 January 2022).
  15. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
  16. Kannala, J.; Brandt, S. A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 1335–1340.
  17. Heikkila, J.; Silven, O. A four-step camera calibration procedure with implicit image correction. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Juan, PR, USA, 17–19 June 1997; pp. 1106–1112.
  18. Bradski, G. Open Source Computer Vision Library. Dr. Dobb's Journal of Software Tools, 2000. Available online: https://github.com/opencv/opencv/wiki/CiteOpenCV (accessed on 29 January 2022).
  19. The Mathworks, Inc. MATLAB Version (R2021a); The Mathworks, Inc.: Natick, MA, USA, 2021.
  20. Bogdan, O.; Eckstein, V.; Rameau, F.; Bazin, J.C. DeepCalib: A deep learning approach for automatic intrinsic calibration of wide field-of-view cameras. In Proceedings of the 15th ACM SIGGRAPH European Conference on Visual Media Production, London, UK, 13–14 December 2018.
  21. Hold-Geoffroy, Y.; Sunkavalli, K.; Eisenmann, J.; Fisher, M.; Gambaretto, E.; Hadap, S.; Lalonde, J.F. A Perceptual Measure for Deep Single Image Camera Calibration. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 2354–2363.
  22. Vasconcelos, F.; Barreto, J.P.; Nunes, U. A Minimal Solution for the Extrinsic Calibration of a Camera and a Laser-Rangefinder. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2097–2107.
  23. Pusztai, Z.; Hajder, L. Accurate Calibration of LiDAR-Camera Systems Using Ordinary Boxes. In Proceedings of the 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), Venice, Italy, 22–29 October 2017; pp. 394–402.
  24. Wang, W.; Sakurada, K.; Kawaguchi, N. Reflectance Intensity Assisted Automatic and Accurate Extrinsic Calibration of 3D LiDAR and Panoramic Camera Using a Printed Chessboard. Remote Sens. 2017, 9, 851.
  25. Zhou, L.; Li, Z.; Kaess, M. Automatic Extrinsic Calibration of a Camera and a 3D LiDAR Using Line and Plane Correspondences. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 5562–5569.
  26. Domhof, J.; Kooij, J.F.; Gavrila, D.M. An Extrinsic Calibration Tool for Radar, Camera and Lidar. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 8107–8113.
  27. Huang, J.K.; Grizzle, J.W. Improvements to Target-Based 3D LiDAR to Camera Calibration. IEEE Access 2020, 8, 134101–134110.
  28. Basso, F.; Menegatti, E.; Pretto, A. Robust Intrinsic and Extrinsic Calibration of RGB-D Cameras. IEEE Trans. Robot. 2018, 34, 1315–1332.
  29. Geiger, A.; Moosmann, F.; Car, O.; Schuster, B. Automatic camera and range sensor calibration using a single shot. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation, St Paul, MN, USA, 14–18 May 2012; pp. 3936–3943.
  30. Guiming, S.; Jidong, S. Multi-Scale Harris Corner Detection Algorithm Based on Canny Edge-Detection. In Proceedings of the 2018 IEEE International Conference on Computer and Communication Engineering Technology (CCET), Beijing, China, 18–20 August 2018; pp. 305–309.
  31. Ha, J.E. Automatic detection of chessboard and its applications. Opt. Eng. 2009, 48, 067205.
  32. Harris, C.; Stephens, M. A combined corner and edge detector. In Proceedings of the Fourth Alvey Vision Conference, Manchester, UK, 31 August–2 September 1988; pp. 147–151.
  33. Shi, J.; Tomasi, C. Good features to track. In Proceedings of the 1994 IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 21–23 June 1994; pp. 593–600. [Google Scholar] [CrossRef]
  34. Bouguet, J.Y. A Release of a Camera Calibration Toolbox for Matlab; 2008. Available online: http://www.vision.caltech.edu/bouguetj/calib_doc/ (accessed on 29 January 2022).
  35. Community, B.O. Blender—A 3D Modelling and Rendering Package; Blender Foundation, Stichting Blender Foundation: Amsterdam, The Netherlands, 2018. [Google Scholar]
  36. Rong, G.; Shin, B.H.; Tabatabaee, H.; Lu, Q.; Lemke, S.; Možeiko, M.; Boise, E.; Uhm, G.; Gerow, M.; Mehta, S.; et al. LGSVL Simulator: A High Fidelity Simulator for Autonomous Driving. In Proceedings of the 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), Rhodes, Greece, 20–23 September 2020; pp. 1–6. [Google Scholar] [CrossRef]
  37. Abraham, S.; Hau, T. Towards autonomous high-precision calibration of digital cameras. Proc. SPIE 1997, 3174, 82–93. [Google Scholar]
  38. Rosten, E.; Loveland, R. Camera distortion self-calibration using the plumb-line constraint and minimal Hough entropy. Mach. Vis. Appl. 2011, 22, 77–85. [Google Scholar] [CrossRef] [Green Version]
  39. Stanford Artificial Intelligence Laboratory. Robot Operating System; ICRA Workshop on Open Source Software; Stanford Artificial Intelligence Laboratory: Stanford, CA, USA, 2009. [Google Scholar]
Figure 1. Our modeled checkerboard simulated in the LGSVL.
Figure 2. Effects of roll ( α ), pitch ( β ) and yaw ( γ ) rotations on the checkerboard corner detector. (a) Effects of Roll ( α ) Angle, (b) Effects of Pitch ( β ) Angle, (c) Effects of Simultaneous Roll ( α ) and Pitch ( β ), and (d) Effects of Yaw ( γ ) Angle.
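The rotation sweeps summarized in Figure 2 can be reproduced with a standard checkerboard detector. The following is a minimal sketch, assuming one rendered image per board orientation and using OpenCV corner detection with sub-pixel refinement; the pattern size, pose list, and file naming are illustrative placeholders rather than the exact values used in the experiments.
```python
import cv2

# Placeholder sweep: (roll, pitch, yaw) in degrees and the inner-corner count of the board.
pattern_size = (7, 5)                                         # assumed, not the paper's exact board
poses = [(-60, 0, -53.14), (0, 0, -53.14), (60, 0, -53.14)]   # placeholder orientation sweep

criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)

for roll, pitch, yaw in poses:
    # Hypothetical file naming for the rendered views.
    img = cv2.imread(f"render_r{roll}_p{pitch}_y{yaw}.png", cv2.IMREAD_GRAYSCALE)
    if img is None:  # skip poses for which no render exists
        continue
    found, corners = cv2.findChessboardCorners(img, pattern_size)
    if found:
        # Refine to sub-pixel accuracy before counting the detection as valid.
        corners = cv2.cornerSubPix(img, corners, (11, 11), (-1, -1), criteria)
    print(f"roll={roll} pitch={pitch} yaw={yaw} detected={found}")
```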
Figure 3. Effects of distance between the checkerboard and the camera on the checkerboard corner detector. (a) Effects of Distance, 4 to 51 m interval, (b) Effects of Pitch ( β ) Angle.
Figure 4. Control Points systematically located inside the baseline camera frustum.
Figure 5. Control points as an auxiliary metric. Green marks represent the “Control points” projected with the ground truth intrinsic parameters, while purple marks represent the projection of the “Control Points” using the estimated intrinsic parameters. Both (a,b) have the same subpixel checkerboard corner re-projection error value, and corners in the checkerboard are correctly re-projected in both cases. However, the estimated intrinsic parameters have a large error in (a), correlating to Control Point re-projection error.
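A minimal sketch of the control-point metric illustrated in Figure 5 is given below. It assumes the control points are known 3D positions expressed in the camera frame and that both parameter sets follow the pinhole model with radial and tangential distortion; the example control-point grid is a placeholder, not the exact layout of Figure 4.
```python
import cv2
import numpy as np

def control_point_rmse(points_3d, K_gt, D_gt, K_est, D_est):
    """Project known 3D control points with the ground-truth and the estimated
    intrinsics and return the RMSE between the two sets of pixel coordinates."""
    rvec = np.zeros(3)  # the points are assumed to be expressed in the camera frame
    tvec = np.zeros(3)
    px_gt, _ = cv2.projectPoints(points_3d, rvec, tvec, K_gt, D_gt)
    px_est, _ = cv2.projectPoints(points_3d, rvec, tvec, K_est, D_est)
    return float(np.sqrt(np.mean(np.sum((px_gt - px_est) ** 2, axis=2))))

# Placeholder control points spread inside the camera frustum, (x, y, z) in meters.
points_3d = np.array([[x, y, z] for x in (-2.0, 0.0, 2.0)
                                for y in (-1.0, 0.0, 1.0)
                                for z in (5.0, 15.0, 30.0)], dtype=np.float64)
```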
Figure 6. Quantitative result summary for the simulation experiments with multiple checkerboards.
Figure 7. Multiple checkerboard verification experiments in a garage. (a) shows the image composed of the camera stream and the checkerboard overlays. (b) shows the camera and the screen we used during the experiments to help us match the checkerboard poses.
Figure 8. Point cloud projection on the Lucid camera with the wide angle lens, using the intrinsic parameters obtained from the one-shot experiments replicating the δ 1 , δ C , ϵ 1 , and ϵ C setups. In the left column, the projected point cloud is colored by distance. In the right column, the projected point cloud is colored by the laser intensity of each return value. (a) Projection using δ 1 ’s intrinsic parameters, point cloud colored by depth, (b) projection using δ 1 ’s intrinsic parameters, point cloud colored by laser intensity, (c) projection using δ C ’s intrinsic parameters, point cloud colored by depth, (d) projection using δ C ’s intrinsic parameters, point cloud colored by laser intensity, (e) projection using ϵ 1 ’s intrinsic parameters, point cloud colored by depth, (f) projection using ϵ 1 ’s intrinsic parameters, point cloud colored by laser intensity, (g) projection using ϵ C ’s intrinsic parameters, point cloud colored by depth, and (h) projection using ϵ C ’s intrinsic parameters, point cloud colored by laser intensity.
Figure 9. Point cloud projection on the FLIR camera with the wide angle lens, using the intrinsic parameters obtained from the one-shot experiments replicating the δ 1 , δ C , ϵ 1 , and ϵ C setups. In the left column, the projected point cloud is colored by distance. In the right column, the projected point cloud is colored by the laser intensity of each return value. (a) Projection using δ 1 ’s intrinsic parameters, point cloud colored by depth, (b) projection using δ 1 ’s intrinsic parameters, point cloud colored by laser intensity, (c) projection using δ C ’s intrinsic parameters, point cloud colored by depth, (d) projection using δ C ’s intrinsic parameters, point cloud colored by laser intensity, (e) projection using ϵ 1 ’s intrinsic parameters, point cloud colored by depth, (f) projection using ϵ 1 ’s intrinsic parameters, point cloud colored by laser intensity, (g) projection using ϵ C ’s intrinsic parameters, point cloud colored by depth, and (h) projection using ϵ C ’s intrinsic parameters, point cloud colored by laser intensity.
Figure 10. Point cloud projection on the Lucid camera with the telephoto lens, using the intrinsic parameters obtained from the one-shot experiments replicating the δ 1 , δ C , ϵ 1 , and ϵ C setups. In the left column, the projected point cloud is colored by distance. In the right column, the projected point cloud is colored by the laser intensity of each return value. (a) Projection using δ 1 ’s intrinsic parameters, point cloud colored by depth, (b) projection using δ 1 ’s intrinsic parameters, point cloud colored by laser intensity, (c) projection using δ C ’s intrinsic parameters, point cloud colored by depth, (d) projection using δ C ’s intrinsic parameters, point cloud colored by laser intensity, (e) projection using ϵ 1 ’s intrinsic parameters, point cloud colored by depth, (f) projection using ϵ 1 ’s intrinsic parameters, point cloud colored by laser intensity, (g) projection using ϵ C ’s intrinsic parameters, point cloud colored by depth, and (h) projection using ϵ C ’s intrinsic parameters, point cloud colored by laser intensity.
Figure 11. Real-world results for the multi-checkerboard experiments. The left axis denotes the error in pixels for the sum of both focal lengths (f) and the sum of the center point (c). The right axis denotes the absolute error for the sum of distortion coefficients ( k 1 , k 2 , k 3 , p 1 , p 2 ) and checkerboard corner re-projection error ( ϵ ). (a) Absolute error comparison for the Lucid camera with the wide angle lens, (b) absolute error comparison for the FLIR camera with the wide angle lens, and (c) absolute error comparison for the Lucid camera with the telephoto lens.
Figure 12. Third-order radial distortion coefficient effect comparison on the telephoto lens.
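Figure 12 compares the effect of the third-order radial coefficient k3 on the telephoto lens. A hedged sketch of how such a comparison can be set up with OpenCV is shown below: the same detections are calibrated twice, once with k3 free and once with it fixed at zero via the CALIB_FIX_K3 flag. The object- and image-point lists are assumed to come from the checkerboard detections.
```python
import cv2

def compare_k3(obj_points, img_points, image_size):
    """Calibrate the same detections twice: with the third-order radial
    coefficient k3 free, and with it held at zero (CALIB_FIX_K3)."""
    rms_free, K_free, D_free, _, _ = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)
    rms_fixed, K_fixed, D_fixed, _, _ = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None,
        flags=cv2.CALIB_FIX_K3)
    # Return the RMS re-projection error and distortion vector of each run.
    return (rms_free, D_free.ravel()), (rms_fixed, D_fixed.ravel())
```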
Table 1. Summary of the rotation experiments with dual checkerboards and their results. The best poses are defined as the 2% of poses with the lowest projected control-point error. The subscript letter identifies the left ( l ) and right ( r ) checkerboards, respectively. The # symbol in parentheses represents the index of the experiment.
Experiment | Left Checkerboard Parameters | Right Checkerboard Parameters | Top 2% Poses, Their Control-Point and Corner Re-Projection Errors ( ϵ ctrl , ϵ corner ) [px]
A1
Left Fixed
Right Rotate
x , y , z : (−0.6, 1.0, 5.0) m
α : 0°
β : 0°
γ : −53.14°
x , y , z : (0.6, 1.0, 5.0) m
α : [−60, 60, 10]°
β : [−60, 60, 10]°
γ : −53.14°
(#98) α r : 10°, β r : 10°, ϵ c t r l : 1.807, ϵ c o r n e r : 0.1543
(#72) α r : −10°, β r : 10°, ϵ c t r l : 1.8260, ϵ c o r n e r : 0.1484
(#110) α r : 20°, β r : 0°, ϵ c t r l : 1.8867, ϵ c o r n e r : 0.1611
(#30) α r : −40°, β r : −20°, ϵ c t r l : 1.8930, ϵ c o r n e r : 0.1605
A2
Left Rotate
Right Fixed
x , y , z : (−0.6, 1.0, 5.0) m
α : [−60, 60, 10]°
β : [−60, 60, 10]°
γ : −53.14°
x , y , z : (0.6, 1.0, 5.0) m
α : 0°
β : 0°
γ : −53.14°
(#52) α l : −20°, β l : −60°, ϵ c t r l : 1.1576, ϵ c o r n e r : 0.1686
(#132) α l : 40°, β l : −40°, ϵ c t r l : 1.2514, ϵ c o r n e r : 0.1685
(#94) α l : 10°, β l : −30°, ϵ c t r l : 1.3005, ϵ c o r n e r : 0.1674
(#92) α l : 10°, β l : −50°, ϵ c t r l : 1.3379, ϵ c o r n e r : 0.1507
A3
Left Rotate
Right Rotate
x , y , z : (−0.6, 1.0, 5.0) m
α : [−60, 60, 10]°
β : [−60, 60, 10]°
γ : −53.14°
x , y , z : (0.6, 1.0, 5.0) m
α : [−60, 60, 10]°
β : [−60, 60, 10]°
γ : −53.14°
(#132) α l : 40°, β l : −40°, α r : 40°, β r : −40°, ϵ c t r l : 1.5329, ϵ c o r n e r : 0.1915
(#58) α l : −20°, β l : 0°, α r : −20°, β r : 0°, ϵ c t r l : 1.6313, ϵ c o r n e r : 0.1535
(#157) α l : 60°, β l : −50°, α r : 60°, β r : 50°, ϵ c t r l : 1.8264, ϵ c o r n e r : 0.1928
(#36) α l : −40°, β l : 40°, α r : −40°, β r : 30°, ϵ c t r l : 1.8902, ϵ c o r n e r : 0.1752
A4
Left Rotate
Right Rotate
Mirror Yaw
x , y , z : (−0.6, 1.0, 5.0) m
α : [−60, 60, 10]°
β : [−60, 60, 10]°
γ : 53.14°
x , y , z : (0.6, 1.0, 5.0) m
α : [−60, 60, 10]°
β : [−60, 60, 10]°
γ : −53.14°
(#28) α l : −40°, β l : −40°, α r : −40°, β r : −40°, ϵ c t r l : 1.5135, ϵ c o r n e r : 0.17
(#98) α l : 10°, β l : 10°, α r : 10°, β r : 10°, ϵ c t r l : 1.6489, ϵ c o r n e r : 0.1634
(#73) α l : −10°, β l : 20°, α r : −10°, β r : 20°, ϵ c t r l : 1.9723, ϵ c o r n e r : 0.1635
(#68) α l : −10°, β l : −30°, α r : −10°, β r : −30°, ϵ c t r l : 2.1704, ϵ c o r n e r : 0.1579
A5
Left Rotate
Right Rotate
Mirror Roll/Pitch
x , y , z : (−0.6, 1.0, 5.0) m
α : [−60, 60, 10]°
β : [−60, 60, 10]°
γ : 53.14°
x , y , z : (0.6, 1.0, 5.0) m
α : [60, −60, −10]°
β : [60, −60, −10]°
γ : −53.14°
(#74) α l : −10°, β l : 30°, α r : 10°, β r : 30°, ϵ c t r l : 1.7785, ϵ c o r n e r : 0.1658
(#73) α l : −10°, β l : 20°, α r : 10°, β r : 20°, ϵ c t r l : 1.7820, ϵ c o r n e r : 0.1602
(#21) α l : −50°, β l : 20°, α r : 50°, β r : 20°, ϵ c t r l : 1.7852, ϵ c o r n e r : 0.1853
(#93) α l : 10°, β l : −40°, α r : −10°, β r : 40°, ϵ c t r l : 2.0763, ϵ c o r n e r : 0.1585
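Each entry in Tables 1, 2 and 3 evaluates a simulated pair of checkerboard poses against the ground-truth camera model. The sketch below shows one way to generate the ideal 2D detections for a single posed board, assuming the pose is given in the camera frame and that roll/pitch/yaw compose as fixed-axis x-y-z rotations; the board geometry and Euler convention are illustrative assumptions, not necessarily those of the simulator.
```python
import cv2
import numpy as np
from scipy.spatial.transform import Rotation

def synthetic_board_corners(square_size, cols, rows, xyz, rpy_deg, K_gt, D_gt):
    """Project the inner corners of a checkerboard placed at (xyz, rpy_deg) in the
    camera frame, using the ground-truth intrinsics, to obtain the ideal 2D
    detections used when scoring a candidate pose."""
    # Board-frame corner grid, centered on the board (z = 0 on the board plane).
    xs = (np.arange(cols) - (cols - 1) / 2.0) * square_size
    ys = (np.arange(rows) - (rows - 1) / 2.0) * square_size
    board = np.array([[x, y, 0.0] for y in ys for x in xs], dtype=np.float64)
    rvec = Rotation.from_euler("xyz", rpy_deg, degrees=True).as_rotvec()
    tvec = np.asarray(xyz, dtype=np.float64)
    corners_2d, _ = cv2.projectPoints(board, rvec, tvec, K_gt, D_gt)
    return board, corners_2d.reshape(-1, 2)
```
Feeding the per-board 3D grids and their projected 2D corners to cv2.calibrateCamera then yields the corner re-projection error ( ϵ corner ) and an estimated parameter set that can be scored with the control-point metric ( ϵ ctrl ) sketched after Figure 5.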
Table 2. Summary of the horizontal and vertical positioning experiments with dual checkerboards and their results. The best poses are defined as the 2% of poses with the lowest projected control-point error. The subscripts l and r identify the left and right checkerboards, respectively. The # symbol in parentheses represents the index of the experiment.
Experiment | Left Checkerboard Parameters | Right Checkerboard Parameters | Top 2% Poses, Their Control-Point and Corner Re-Projection Errors ( ϵ ctrl , ϵ corner ) [px]
H1
Left Perpendicular
Horizontally
Together
α , β , γ : (0, 0, −53.14)°
x: [−1.2, 0.4, 0.02] m
y: 1.0 m
z: 5.0 m
α , β , γ : (−40, −20, −53.14)°
x: [−0.3, 1.3, 0.02] m
y: 1.0 m
z: 5.0 m
(#74) x l : 0.28 m, x r : 1.18 m, ϵ c t r l : 1.0277, ϵ c o r n e r : 0.1751
(#78) x l : 0.36 m, x r : 1.26 m, ϵ c t r l : 1.0783, ϵ c o r n e r : 0.1739
(#15) x l : −0.89 m, x r : 0.0 m, ϵ c t r l : 1.3312, ϵ c o r n e r : 0.1227
(#5) x l : −1.10 m, x r : −0.02 m, ϵ c t r l : 1.3568, ϵ c o r n e r : 0.1268
H2
Left/Right Mirrored
Horizontally
Together
α , β , γ : (50, −20, −53.14)°
x: [−1.2, 1.3, 0.2] m
y: 1.0 m
z: 5.0 m
α , β , γ : (−50, 20, −53.14)°
x: [−0.3, 0.4, 0.02] m
y: 1.0 m
z: 5.0 m
(#14) x l : −0.92 m, x r : −0.2 m, ϵ c t r l : 1.1834, ϵ c o r n e r : 0.144
(#75) x l : 0.3 m, x r : −1.2 m, ϵ c t r l : 2.0962, ϵ c o r n e r : 0.1826
(#17) x l : −0.86 m, x r : 0.04 m, ϵ c t r l : 2.1201, ϵ c o r n e r : 0.1440
(#18) x l : −0.84 m, x r : 0.06 m, ϵ c t r l : 2.1817, ϵ c o r n e r : 0.1488
H3
Left Perpendicular
Horizontally
Separate
α , β , γ : (0, 0, −53.14)°
x: [−0.4, −1.2, −0.01] m
y: 1.0 m
z: 5.0 m
α , β , γ : (−40, −20, −53.14)°
x: [0.4, 1.2, 0.01] m
y: 1.0 m
z: 5.0 m
(#74) x l : −1.14 m, x r : 1.14 m, ϵ c t r l : 1.0383, ϵ c o r n e r : 0.1465
(#65) x l : −1.05 m, x r : 1.05 m, ϵ c t r l : 1.0566, ϵ c o r n e r : 0.1714
(#73) x l : −1.13 m, x r : 1.13 m, ϵ c t r l : 1.0842, ϵ c o r n e r : 0.1405
(#78) x l : −1.18 m, x r : 1.18 m, ϵ c t r l : 1.1105, ϵ c o r n e r : 0.1781
H4
Left/Right Mirrored
Horizontally
Separate
α , β , γ : (50, −20, −53.14)°
x: [−0.4, −1.2, −0.01] m
y: 1.0 m
z: 5.0 m
α , β , γ : (−50, 20, −53.14)°
x: [0.4, 1.2, 0.01] m
y: 1.0 m
z: 5.0 m
(#20) x l : −0.6 m, x r : 0.6 m, ϵ c t r l : 1.7851, ϵ c o r n e r : 0.1853
(#43) x l : −0.83 m, x r : 0.83 m, ϵ c t r l : 1.9118, ϵ c o r n e r : 0.2020
(#54) x l : −0.94 m, x r : 0.94 m, ϵ c t r l : 1.9452, ϵ c o r n e r : 0.2076
(#31) x l : −0.71 m, x r : 0.71 m, ϵ c t r l : 2.0131, ϵ c o r n e r : 0.1901
H5
Left/Right Mirrored
Inverted
Horizontally
Separate
α , β , γ : (−50, 20, −53.14)°
x: [−0.4, 1.2, 0.01] m
y: 1.0 m
z: 5.0 m
α , β , γ : (50, −20, −53.14)°
x: [0.4, −1.2, 0.01] m
y: 1.0 m
z: 5.0 m
(#6) x l : −0.92 m, x r : −0.2 m, ϵ c t r l : 4.084, ϵ c o r n e r : 0.2147
(#14) x l : 0.3 m, x r : −1.2 m, ϵ c t r l : 4.7277, ϵ c o r n e r : 0.2521
(#13) x l : −0.86 m, x r : 0.04 m, ϵ c t r l : 4.7709, ϵ c o r n e r : 0.2238
(#23) x l : −0.84 m, x r : 0.06 m, ϵ c t r l : 5.43005, ϵ c o r n e r : 0.2391
V1
Left/Right Identical
Rotations
Vertically Together
α , β , γ : (−40, −20, −53.14)°
x: −1.13 m
y: [0.35, 1.65, 0.02] m
z: 5.0 m
α , β , γ : (−40, −20, −53.14)°
x: 1.13 m
y: [0.35, 1.65, 0.02] m
z: 5.0 m
(#42) y l : 1.19 m, y r : 1.19 m, ϵ c t r l : 0.6763, ϵ c o r n e r : 0.1595
(#37) y l : 1.09 m, y r : 1.09 m, ϵ c t r l : 0.7285, ϵ c o r n e r : 0.1654
(#55) y l : 1.45 m, y r : 1.45 m, ϵ c t r l : 0.7542, ϵ c o r n e r : 0.1459
(#57) y l : 1.49 m, y r : 1.49 m, ϵ c t r l : 0.8238, ϵ c o r n e r : 0.1568
V2
Left/Right Identical
Rotations
Vertically Opposite
α , β , γ : (−40, −20, −53.14)°
x: −1.13 m
y: [1.65, 0.35, −0.02] m
z: 5.0 m
α , β , γ : (−40, −20, −53.14)°
x: 1.13 m
y: [0.35, 1.65, 0.02] m
z: 5.0 m
(#34) y l : 0.97 m, y r : 1.03 m, ϵ c t r l : 0.8916, ϵ c o r n e r : 0.1508
(#46) y l : 0.73 m, y r : 1.27 m, ϵ c t r l : 0.9385, ϵ c o r n e r : 0.1476
(#37) y l : 0.91 m, y r : 1.09 m, ϵ c t r l : 1.0684, ϵ c o r n e r : 0.1491
(#64) y l : 0.37 m, y r : 1.63 m, ϵ c t r l : 1.1009, ϵ c o r n e r : 0.1516
Table 3. Summary of the camera-to-checkerboard distance experiments with dual checkerboards and their results. The best poses are defined as the 2% of poses with the lowest projected control-point error. The subscripts l and r identify the left and right checkerboards, respectively. The # symbol in parentheses represents the index of the experiment.
Experiment | Left Checkerboard Parameters | Right Checkerboard Parameters | Top 2% Poses, Their Control-Point and Corner Re-Projection Errors ( ϵ ctrl , ϵ corner ) [px]
D1
Left Fixed
Right Approach
α , β , γ : (−40, −20, −53.14)°
x: −1.13 m
y: 1.0 m
z: 5.0 m
α , β , γ : (−40, −20, −53.14)°
x: 1.13 m
y: 1.0 m
z: [10.0, 4.6, −0.1] m
(#32) z r : 6.8 m, ϵ c t r l : 1.2415, ϵ c o r n e r : 0.1671
(#43) z r : 5.7 m, ϵ c t r l : 1.4161, ϵ c o r n e r : 0.1646
(#52) z r : 4.8 m, ϵ c t r l : 1.4630, ϵ c o r n e r : 0.1638
(#54) z r : 4.6 m, ϵ c t r l : 1.4680, ϵ c o r n e r : 0.1489
D2
Left/Right Approach
α , β , γ : (−40, −20, −53.14)°
x: −1.13 m
y: 1.0 m
z: [10.0, 4.6, −0.1] m
α , β , γ : (−40, −20, −53.14)°
x: 1.13 m
y: 1.0 m
z: [10.0, 4.6, −0.1] m
(#49) z l : 5.1 m, z r : 5.1 m, ϵ c t r l : 1.3418, ϵ c o r n e r : 0.1898
(#30) z l : 7.0 m, z r : 7.0 m, ϵ c t r l : 1.5207, ϵ c o r n e r : 0.1872
(#50) z l : 5.0 m, z r : 5.0 m, ϵ c t r l : 1.5672, ϵ c o r n e r : 0.1541
(#46) z l : 5.4 m, z r : 5.4 m, ϵ c t r l : 1.5840, ϵ c o r n e r : 0.2177
Table 4. Summary of the simulation experiments with multiple checkerboards. The Experiment column contains the number of checkerboards and the pose of each checkerboard, one per line. Positions are expressed in meters and rotations in degrees. The subscript C denotes the experiments whose poses were manually selected following our guidelines. Numeric subscripts represent the top-performing poses.
Experiment: (x, y, z) [m], (r, p, y) [°] per checkerboard
Three Checkerboards α 1
(−0.05, 1.0, 4) m, (0, 0, −53.14)°
(1.05, 1.4, 4.9) m, (10.07, 51.04, 40.53)°
(−1.35, 0.65, 5.6) m, (34.04, −27.33, −35.99)°
Three Checkerboards α 2
(−0.05, 1, 4) m, (0, 0, −53.14)°
(1.05, 1.4, 4.9) m, (36.84, 42.16, −62.15)°
(−1.35, 0.65, 5.6) m, (−33.57, 18.33, −6.1)°
Four Checkerboards β 1
(−0.45, 1.5, 5) m, (4.44, −14.28, −64.04)°
(0.3, 0.7, 4.5) m, (−11.53, −38.45, −60.02)°
(1.05, 1.4, 4.8) m, (10.07, 51.04, 40.53)°
(−1.35, 0.65, 5.6) m, (34.04, −27.33, −35.99)°
Four Checkerboards β 2
(−0.45, 1.5, 5.0) m, (−11.53, −38.45, −60.02)°
(0.3, 0.7, 4.5) m, (0, 0, −53.14)°
(1.05, 1.4, 4.8) m, (10.07, 51.04, 40.53)°
(−1.35, 0.65, 5.6) m, (34.04, −27.33, −35.99)°
Five Checkerboards γ 1
(0.05, 1, 4.1) m, (0, 0, −53.14)°
(0.95, 1.6, 4.6) m, (−40.17, −20.68, −1.58)°
(0.95, 0.65, 4.6) m, (10.07, 51.04, 40.53)°
(−0.95, 1.5, 4.8) m, (−11.53, −38.45, −60.02)°
(−0.75, 0.6, 4.3) m, (34.04, −27.33, −35.99)°
Five Checkerboards γ 2
(0.05, 1, 4.3) m, (0, 0, −53.14)°
(0.95, 1.6, 4.8) m, (−40.17, −20.68, −1.58)°
(0.95, 0.65, 4.8) m, (10.07, 51.04, 40.53)°
(−0.95, 1.5, 5) m, (−11.53, −38.45, −60.02)°
(−0.75, 0.6, 4.5) m, (34.04, −27.33, −35.99)°
Six Checkerboards δ 1
(−0.05, 1.4, 4.4) m, (0, 0, −53.14)°
(0.1, 0.6, 4.2) m, (−36.16, −43.53, −61.14)°
(0.95, 1.6, 4.6) m, (20.17, −20.68, −1.58)°
(0.95, 0.65, 4.6) m, (10.07, 51.04, 40.53)°
(−0.95, 1.5, 4.7) m, (−11.53, −38.45, −60.02)°
(−0.75, 0.6, 4.3) m, (34.04, −27.33, −35.99)°
Seven Checkerboards ϵ 1
(−0.05, 1.7, 4.9) m, (−36.16, −43.53, 10.14)°
(0.15, 1.0, 5.2) m, (0, 0, −53.14)°
(0.1, 0.4, 4.7) m, (64.74, 8.31, −24.57)°
(0.95, 1.6, 4.5) m, (20.17, −20.68, −1.58)°
(0.95, 0.65, 4.5) m, (10.07, 51.04, 40.53)°
(−0.95, 1.5, 4.7) m, (−11.53, −38.45, −60.02)°
(−0.75, 0.6, 4.3) m, (34.04, −27.33, −35.99)°
Eight Checkerboards ζ 1
(−1.15, 1.1, 5) m, (−11.53, −38.45, −60.02)°
(−0.5, 1.7, 4.9) m, (−36.16, −43.53, 10.14)°
(−0.6, 0.4, 5) m, (34.04, −27.33, −35.99)°
(−0.35, 1.0, 5.4) m, (0, 0, −53.14)°
(0.55, 0.95, 5) m, (40.93, 38.59, 36.58)°
(0.5, 0.5, 4.4) m, (64.74, 8.31, 24.57)°
(0.55, 1.59, 4.5) m, (20.17, −20.68, −1.58)°
(1.35, 0.98, 5.05) m, (10.07, 51.04, 89.53)°
Nine Checkerboards η 1
(−1.37, 0.9, 5.1) m, (−11.53, −58.45, −80.02)°
(−0.8, 1.7, 4.9) m, (−36.16, −43.53, 10.14)°
(−0.6, 0.4, 5.0) m, (34.04, −27.33, −35.99)°
(−0.53, 1.08, 4.8) m, (−20.84, 26.11, −4.00)°
(0.03, 1.65, 5.2) m, (0, 0, −53.14)°
(0.4, 0.98, 4.65) m, (40.93, 38.59, 36.58)°
(0.5, 0.5, 4.4) m, (64.74, 8.31, 24.57)°
(0.75, 1.59, 4.5) m, (20.17, −20.68, −1.58)°
(1.35, 0.98, 5.05) m, (10.07, 51.04, 89.53)°
Ten Checkerboards θ 1
(−1.41, 1.05, 5.2) m, (−11.53, −58.45, −80.02)°
(−0.8, 1.75, 5.0) m, (−36.16, −43.53, 10.14)°
(−1.06, 0.32, 5.1) m, (39.04, −37.33, −15.99)°
(−0.48, 1.0, 4.4) m, (−20.84, 26.11, −4.00)°
(0.03, 1.65, 5.2) m, (0, 0, −53.14)°
(0.4, 0.98, 4.65) m, (30.93, 28.59, 36.58)°
(0.9, 0.4, 4.7) m, (64.74, 8.31, 24.57)°
(0.89, 1.69, 4.9) m, (20.17, −20.68, −1.58)°
(1.40, 0.95, 5.2) m, (10.07, 51.04, 89.53)°
(−0.06, 0.3, 4.95) m, (64.74, 8.31, −24.57)°
Six Checkerboards δ C
(0.05, 1.5, 5.1) m, (0, 0, −53.14)°
(0.05, 0.30, 5.2) m, (10, −5, 0)°
(−1.18, 1.3, 4.9) m, (9.23, −4.70, −1.58)°
(1.18, 1.3, 5.1) m, (−4.72, 23.40, −15.88)°
(1.29, 0.24, 5.2) m, (63.74, 18.31, −12.57)°
(−1.38, 0.32, 5.3) m, (−22.92, −58.33, 5.44)°
Seven Checkerboards ϵ C
(0.05, 1.90, 5.5) m, (0, 5, 0)°
(0.05, 1.05, 5.4) m, (0, 0, −53.14)°
(0.05, 0.28, 5.0) m, (10, −5, 0)°
(−1.18, 1.3, 4.9) m, (9.23, −4.70, −1.58)°
(1.18, 1.3, 5.1) m, (−4.72, 23.40, −15.88)°
(1.29, 0.24, 5.2) m, (63.74, 18.31, −12.57)°
(−1.38, 0.32, 5.3) m, (−22.92, −58.33, 5.44)°
Table 5. Summary of the results for the simulation experiments with multiple checkerboards. The subscript C denotes the experiments whose poses were manually selected following our guidelines. Numeric subscripts represent the top-performing poses.
Experiment | Distorted Corners RMSE [px] | Undistorted Corners RMSE [px] | Control Points RMSE [px] | Focal Length Error [px] | Center Point Error [px] | Distortion Error
Ten Checkerboards θ 1 | 0.526 | 0.671 | 0.787 | 1.48 | 7.40 | 0.033
Nine Checkerboards η 1 | 0.515 | 0.830 | 0.888 | 8.34 | 49.91 | 0.076
Eight Checkerboards ζ 1 | 0.519 | 0.708 | 1.01 | 1.58 | 9.47 | 0.025
Seven Checkerboards ϵ 1 | 0.521 | 0.672 | 0.776 | 4.94 | 54.13 | 0.011
Six Checkerboards δ 1 | 0.521 | 0.778 | 0.971 | 7.62 | 6.45 | 0.050
Five Checkerboards γ 2 | 0.494 | 0.947 | 1.22 | 3.89 | 11.5 | 0.065
Five Checkerboards γ 1 | 0.507 | 0.798 | 0.941 | 8.59 | 6.45 | 0.101
Four Checkerboards β 2 | 0.522 | 0.867 | 0.974 | 4.68 | 11.8 | 0.018
Four Checkerboards β 1 | 0.537 | 1.13 | 1.32 | 4.09 | 16.8 | 0.075
Three Checkerboards α 2 | 0.524 | 0.651 | 0.693 | 32.5 | 21.4 | 0.0803
Three Checkerboards α 1 | 0.508 | 0.762 | 0.808 | 9.79 | 1.72 | 0.0232
Six Checkerboards δ C | 0.512 | 0.628 | 0.628 | 6.97 | 3.26 | 0.0092
Seven Checkerboards ϵ C | 0.512 | 0.871 | 0.985 | 5.61 | 10.65 | 0.007
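The error columns in Table 5 are absolute differences between the estimated parameters and the ground truth. A small sketch of one plausible aggregation is shown below, assuming K is the 3 × 3 pinhole matrix and D holds the distortion coefficients in OpenCV order (k1, k2, p1, p2, k3); the exact aggregation used to produce Table 5 may differ.
```python
import numpy as np

def parameter_errors(K_est, D_est, K_gt, D_gt):
    """Summary errors in the style of Table 5 / Figure 11 (assumed aggregation):
    summed absolute focal-length, principal-point, and distortion differences."""
    f_err = abs(K_est[0, 0] - K_gt[0, 0]) + abs(K_est[1, 1] - K_gt[1, 1])
    c_err = abs(K_est[0, 2] - K_gt[0, 2]) + abs(K_est[1, 2] - K_gt[1, 2])
    d_err = float(np.sum(np.abs(np.ravel(D_est) - np.ravel(D_gt))))
    return f_err, c_err, d_err
```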
Table 6. List of the real-world cameras and their characteristics. HFOV stands for Horizontal Field of View.
Camera | Resolution | Sensor Size | Focal Length | HFOV
Lucid | 2880 × 1860 (5.4 MP) | 10.36 mm | 8 mm | 56.7°
Lucid | 2880 × 1860 (5.4 MP) | 10.36 mm | 25 mm | 19.6°
FLIR | 1920 × 1260 (2.3 MP) | 13.4 mm | 8 mm | 69.7°
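As a rough sanity check on Table 6, the horizontal field of view can be approximated from the focal length and the sensor geometry. The sketch below assumes the listed sensor size is the sensor diagonal and derives the width from the pixel aspect ratio, so small deviations from the listed HFOV values are expected.
```python
import math

def hfov_deg(focal_mm, diag_mm, width_px, height_px):
    """Approximate horizontal FOV, assuming the listed sensor size is the diagonal."""
    width_mm = diag_mm * width_px / math.hypot(width_px, height_px)
    return math.degrees(2.0 * math.atan(width_mm / (2.0 * focal_mm)))

print(hfov_deg(8.0, 10.36, 2880, 1860))   # Lucid + 8 mm lens  -> ~57 deg
print(hfov_deg(25.0, 10.36, 2880, 1860))  # Lucid + 25 mm lens -> ~20 deg
print(hfov_deg(8.0, 13.4, 1920, 1260))    # FLIR + 8 mm lens   -> ~70 deg
```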
Table 7. Summary of the extrinsic parameters between the cameras and the 3D LiDAR for the outdoor dataset: x, y, z are in meters; roll, pitch, and yaw are in radians. We truncated the floating-point values to three digits for formatting purposes. The W next to the camera name stands for wide and the T for telephoto, representing the measurements obtained with the 8 mm and 25 mm lenses, respectively.
Experiment | x | y | z | roll | pitch | yaw
Lucid W δ 1 | 0.031 | −0.14 | 0.03 | −1.515 | 0.005 | −1.628
Lucid W δ C | 0.031 | −0.14 | 0.03 | −1.474 | 0.01 | −1.625
Lucid W ϵ 1 | 0.031 | −0.14 | 0.05 | −1.513 | −0.01 | −1.619
Lucid W ϵ C | 0.031 | −0.12 | 0.05 | −1.51 | −0.01 | −1.623
FLIR W δ 1 | 0.031 | −0.14 | 0.03 | −1.5 | 0.005 | −1.564
FLIR W δ C | 0.031 | −0.14 | 0.03 | −1.48 | 0.005 | −1.564
FLIR W ϵ 1 | 0.031 | −0.14 | 0.03 | −1.5 | 0.005 | −1.564
FLIR W ϵ C | 0.031 | −0.12 | 0.03 | −1.504 | −0.02 | −1.563
Lucid T δ 1 | 0.05 | −0.14 | 0.031 | −1.588 | −0.05 | −1.509
Lucid T δ C | 0.05 | −0.14 | 0.031 | −1.559 | −0.049 | −1.521
Lucid T ϵ 1 | 0.05 | −0.14 | 0.031 | −1.553 | −0.045 | −1.517
Lucid T ϵ C | 0.05 | −0.15 | 0.031 | −1.548 | −0.05 | −1.515
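The projections shown in Figures 8–10 combine the estimated intrinsic parameters with the camera–LiDAR extrinsics of Table 7. A minimal sketch of that projection is given below; it assumes the (x, y, z) translation and roll/pitch/yaw of Table 7 describe the LiDAR-to-camera transform and that the angles compose as fixed-axis x-y-z rotations, which may differ from the convention actually used.
```python
import cv2
import numpy as np
from scipy.spatial.transform import Rotation

def project_cloud(points_lidar, K, D, xyz, rpy):
    """Transform LiDAR points into the camera frame and project them into the image.
    `xyz` is the translation in meters and `rpy` the roll/pitch/yaw in radians (Table 7)."""
    R = Rotation.from_euler("xyz", rpy).as_matrix()   # assumed Euler convention
    pts_cam = points_lidar @ R.T + np.asarray(xyz)    # LiDAR frame -> camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0.0]            # keep points in front of the camera
    pixels, _ = cv2.projectPoints(pts_cam, np.zeros(3), np.zeros(3), K, D)
    return pixels.reshape(-1, 2), pts_cam[:, 2]       # pixel coordinates and depths
```
The returned depths (or the original intensity channel) can then be mapped to colors to reproduce the depth- and intensity-colored overlays of Figures 8–10.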
Table 8. Summary of the real-world experiments with multiple checkerboards on two different cameras with wide angle lenses.
Experiment | Image
Lucid Six Checkerboards δ 1 | (image)
Lucid Six Checkerboards δ C | (image)
Lucid Seven Checkerboards ϵ 1 | (image)
Lucid Seven Checkerboards ϵ C | (image)
FLIR Six Checkerboards δ 1 | (image)
FLIR Six Checkerboards δ C | (image)
FLIR Seven Checkerboards ϵ 1 | (image)
FLIR Seven Checkerboards ϵ C | (image)
Table 9. Summary of the real-world experiments with multiple checkerboards with a telephoto lens.
Experiment | Image
Lucid Six Checkerboards δ 1 | (image)
Lucid Six Checkerboards δ C | (image)
Lucid Seven Checkerboards ϵ 1 | (image)
Lucid Seven Checkerboards ϵ C | (image)