Article

An Improved Calibration Method for Photonic Mixer Device Solid-State Array Lidars Based on Electrical Analog Delay

Key Laboratory of Biomimetic Robots and Systems (Ministry of Education), Beijing Institute of Technology, Beijing 100081, China
* Author to whom correspondence should be addressed.
Sensors 2020, 20(24), 7329; https://doi.org/10.3390/s20247329
Submission received: 15 November 2020 / Revised: 6 December 2020 / Accepted: 16 December 2020 / Published: 20 December 2020
(This article belongs to the Special Issue Solid-State LiDAR Sensors)

Abstract
As a typical application of indirect time-of-flight (ToF) technology, the photonic mixer device (PMD) solid-state array Lidar has developed rapidly in recent years. With the advantages of high resolution, high frame rate and high accuracy, the equipment is widely used in target recognition, simultaneous localization and mapping (SLAM), industrial inspection, etc. However, the PMD Lidar is vulnerable to several factors such as ambient light, temperature and target characteristics. To eliminate the impact of such factors, a proper calibration is needed, yet conventional calibration methods require measurements at several distances over large areas, which results in low efficiency and low accuracy. To address these problems, this paper presents an improved calibration method based on electrical analog delay. The method first eliminates lens distortion using a self-adaptive interpolation algorithm and calibrates the grayscale image using an integration time simulation based method. The grayscale image is then used to estimate the parameters of ambient light compensation in the depth calibration. Finally, by combining four types of compensation, the method effectively improves the performance of the depth calibration. Several experiments show that the proposed method is more adaptive to multiple scenes with targets of different reflectivities, which significantly improves the ranging accuracy and adaptability of the PMD Lidar.

1. Introduction

Three-dimensional information acquisition has gained extensive attention in the fields of computer vision, robot navigation, human–computer interaction, automatic driving, etc. [1]. Generally, the methods to obtain three-dimensional information mainly include stereo vision [2,3], structured light [4], single-pixel 3D imaging [5] and time-of-flight (ToF) [6]. Stereo vision needs advanced matching algorithms to obtain accurate depth information and is vulnerable to ambient light. Structured light needs projection optimization compensation, which places high demands on the processing system. Compared with the two methods above, the ToF sensor system utilizes an active infrared laser to acquire depth information, which has the advantages of low cost, high frame rate and high reliability [7,8].
Photonic mixer device (PMD) solid-state array Lidar, as one typical kind of ToF sensor system, is widely used in computer vision [9,10]. However, several error sources inevitably exist (such as ambient light, integration time, temperature drift and reflectivity), which significantly reduce the performance of the ToF sensor. Hence, the equipment needs to be properly calibrated to achieve reliable depth information acquisition [11].
Several works have addressed PMD Lidar calibration. Lindner [11,12,13,14] put forward a calibration approach that combined overall intrinsic, distance and reflectivity related error calibration. Compared with other approaches, the calibration contributed significantly to reducing the amount of calibration data. Kahlmann [15,16] presented a parameter based calibration approach that considered multiple error sources including integration time, reflectivity, distance and temperature. The accuracy was improved to 10 mm at a distance of 2.5 m after calibration. Steiger [17] discussed the influence of internal and environmental factors, whose effects were then compensated by experiments; however, the errors remained above the uncertainties specified by the manufacturer even after calibration. Swadzba [18] put forward a calibration algorithm based on stepwise optimization and the particle filter framework. The experimental results showed that the accuracy of the method was higher than that of the traditional calibration method, while the efficiency decreased. Schiller [19] discussed a joint calibration method based on a PMD camera and a standard 2D CCD camera. Results showed that the internal camera parameters were estimated more precisely, and the limitations of the small field of view were overcome by the method. Fuchs [20,21] presented a calibration process for the ToF camera with respect to the intrinsic parameters, the depth measurement distortion and the pose of the camera relative to a robot's end effector. Chiabrando [22] performed two aspects of calibration: distance calibration and photogrammetric calibration. For distance calibration, they reduced the distance error to ±15 mm in the range of 1.5–4 m. For photogrammetric calibration, they verified the stability of the estimated internal camera parameters. Christian [23] presented a calibration approach based on the depth and reflectance images of a planar checkerboard. The method improved the efficiency and accuracy of the calibration of the focal length and 3D pose of the camera; however, the depth accuracy was not improved. Kuhnert [24] proposed a joint calibration method based on two types of 3D cameras, a PMD camera and a stereo camera system, to improve the range accuracy by using one camera to compensate the other. Schmidt [25] proposed a dynamic calibration method that can be executed on systems with limited resources. Huang [26] proposed an integration time auto-adaptation method based on amplitude data, which allows each pixel to obtain depth information under the best conditions; a Gaussian process regression model was utilized to calibrate the depth errors. He [27] analyzed the influence of several external factors (including material, color, distance, lighting, etc.) and proposed an error correction method based on the particle filter-support vector machine (PF-SVM).
To sum up, most research on ToF camera calibration focuses on parameters related to measurement accuracy, such as integration time, pixel related error or depth data distortion [28,29,30,31,32,33,34,35]. These methods generally need to place the calibration plate at different distances to acquire a depth compensation look-up table (LUT), which requires a significant amount of work. In addition, the calibration plate needs to be placed manually; even a slight change of the angle or the location introduces an extra measurement error of several millimeters, so the process is vulnerable to human factors. Others [36,37,38,39,40,41] obtain depth compensation data by changing the attitude of the ToF camera with external devices, which requires complex algorithms to fuse the multisource information. Consequently, unresolved issues remain, such as a heavy workload, complex calculation requirements and serious human disturbance. A simpler calibration method is needed to improve the applicability, accuracy and convenience of ToF cameras.
To deal with the abovementioned challenges, in our previous work we [42] proposed a calibration method for PMD solid-state array Lidar based on a black-box calibration device and an electrical analog delay method. The method solved part of the problems of traditional calibration methods, such as low efficiency, low accuracy and serious human disturbance. However, some factors had still not been taken into account, such as temperature drift, targets with different reflectivities and the disturbance of ambient light, which can introduce extra errors during the application of PMD solid-state array Lidar.
For this reason, this paper improves the calibration setup and the method, and the main contributions are as follows:
(1)
This paper proposes a self-adaptive grayscale correlation based depth calibration method (SA-GCDCM) for PMD solid-state array Lidar. Due to its special structure, the PMD Lidar is able to obtain a grayscale image and a depth image simultaneously, and the amplitude of the grayscale image is closely related to the ambient light. Based on this, the grayscale image is used to estimate the parameters of ambient light compensation in depth calibration. Through SA-GCDCM, the disturbance of ambient light can be effectively eliminated. Traditional joint calibration methods always need an extra RGB camera to cooperate with the ToF camera; the mismatch between the parameters of the two cameras, such as the image resolution and the field of view, introduces extra errors into the system, leading to low calibration accuracy and efficiency. Compared with the traditional methods, this method requires no coordinate transformation or feature matching, leading to better data consistency and self-adaptability.
(2)
This paper proposes a grayscale calibration method based on integration time simulation. Firstly, raw grayscale images are acquired under multiple ambient light levels by setting the integration time to several values. Then the spatial variances are calculated from the images to estimate the dark signal non-uniformity (DSNU) and photo response non-uniformity (PRNU). Finally, the influence of DSNU and PRNU is eliminated by a correction algorithm.
(3)
Based on the electrical analog delay method, a comprehensive, multiscene adaptive, multifactor calibration model was established by combining SA-GCDCM with raw distance demodulation compensation, distance response non-uniformity (DRNU) compensation and temperature drift compensation. Compared with prior methods, the established model is more adaptive to multiple scenes with targets of different reflectivities, which significantly improves the ranging accuracy and adaptability of the PMD Lidar.
The article is structured as follows: an introduction to the working principle of the PMD Lidar is given in Section 2, along with a discussion of known error sources such as integration time, temperature and DRNU. The combined calibration method is presented in Section 3. Section 4 introduces the experiments and discusses the results. Finally, a short summary is given in Section 5.

2. System Introduction

2.1. Principle of PMD Solid-State Array Lidar

The PMD solid-state array Lidar mainly includes three parts: the emitting unit, the receiving unit and the processing unit. The emitting unit is composed of four vertical-cavity surface-emitting lasers (VCSELs), the driver, the delay-locked loop (DLL) and the modulator. Compared with the light emitting diode (LED), the VCSEL has attracted attention in recent ToF Lidar research [43,44,45] because of its lower power consumption, higher speed and higher reliability [46]. The receiving unit is composed of the lens, the sensor, the demodulator, the A/D converter and the sequence controller. The processing unit is a digital signal processing (DSP) controller.
The fundamental principle [47,48] of PMD solid-state array Lidar is illustrated in Figure 1. The continuous modulated near infrared (NIR) laser is generated and emitted by the emitting unit. After reflecting at the surface of the objects, the laser is received by the receiving unit. The optical signal is converted into an electrical signal in the receiving unit. Then the electrical signal is transmitted to the processing unit, which calculates the distance data by demodulating the phase delay between the emitted and the detected signal. Finally, three types of images, point cloud, grayscale and depth images, can be output from the DSP controller through data processing.
Signal demodulation is the key step in the working process of the PMD solid-state array Lidar, as shown in Figure 2. Two capacitors (CA and CB) with two phase windows (0° and 180°) are placed under each pixel of the ToF chip. The differential correlation sampling (DCS) method is used to demodulate the received signal. In general, the sampling number determines the accuracy of the demodulation, while the efficiency is affected accordingly. In this paper, the four-step phase-shift method was adopted for sampling. In other words, the demodulation samples the received signal at four different phases (0°, 90°, 180° and 270°) using the capacitors of the two phase windows, and then suppresses the noise by taking the difference between these capacitors. The phase shift of the modulated light is calculated from the sampled amplitudes, and finally the target distance is calculated from the phase shift.
The specific process of the four-step phase-shift method is as follows. The emitted light signal can be expressed as $E(t) = kA\cos(\omega t)$, while the received signal can be expressed as $R(t) = B + kA\cos(\omega t + \Delta\varphi)$, where $\omega$ is the modulation frequency, $A$ is the amplitude of the emitted signal, $\Delta\varphi$ is the phase shift between the emitted and received signals, $B$ is the noise signal generated during the transmission of light and $k$ is the signal attenuation coefficient. The sampling process can be expressed as Equation (1):
$$\begin{aligned}
Q_{1,DC_0} &= \int_{0}^{\frac{\pi}{\omega}} \left[ B + kA\cos(\omega t + \Delta\varphi) \right] dt, &
Q_{2,DC_0} &= \int_{\frac{\pi}{\omega}}^{\frac{2\pi}{\omega}} \left[ B + kA\cos(\omega t + \Delta\varphi) \right] dt, \\
Q_{1,DC_1} &= \int_{\frac{\pi}{2\omega}}^{\frac{3\pi}{2\omega}} \left[ B + kA\cos(\omega t + \Delta\varphi) \right] dt, &
Q_{2,DC_1} &= \int_{\frac{3\pi}{2\omega}}^{\frac{5\pi}{2\omega}} \left[ B + kA\cos(\omega t + \Delta\varphi) \right] dt, \\
Q_{1,DC_2} &= \int_{\frac{\pi}{\omega}}^{\frac{2\pi}{\omega}} \left[ B + kA\cos(\omega t + \Delta\varphi) \right] dt, &
Q_{2,DC_2} &= \int_{\frac{2\pi}{\omega}}^{\frac{3\pi}{\omega}} \left[ B + kA\cos(\omega t + \Delta\varphi) \right] dt, \\
Q_{1,DC_3} &= \int_{\frac{3\pi}{2\omega}}^{\frac{5\pi}{2\omega}} \left[ B + kA\cos(\omega t + \Delta\varphi) \right] dt, &
Q_{2,DC_3} &= \int_{\frac{5\pi}{2\omega}}^{\frac{7\pi}{2\omega}} \left[ B + kA\cos(\omega t + \Delta\varphi) \right] dt
\end{aligned} \tag{1}$$
where $Q_{1,DC_i}$ and $Q_{2,DC_i}$ are the integral values of capacitors CA and CB at sampling point $i$, respectively. The four differential samples are then obtained as Equation (2):
$$\begin{aligned}
DC_0 &= Q_{1,DC_0} - Q_{2,DC_0}, \qquad & DC_1 &= Q_{1,DC_1} - Q_{2,DC_1}, \\
DC_2 &= Q_{1,DC_2} - Q_{2,DC_2}, \qquad & DC_3 &= Q_{1,DC_3} - Q_{2,DC_3}
\end{aligned} \tag{2}$$
The distance is calculated by Equation (3):
$$D_{raw}(x,y) = \frac{c}{2} \cdot \frac{1}{2\pi f} \cdot \operatorname{atan2}\left( DC_3(x,y) - DC_1(x,y),\; DC_2(x,y) - DC_0(x,y) \right) \tag{3}$$
where $c$ is the speed of light, $f$ is the modulation frequency and $(x,y)$ denotes the pixel coordinate.
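For illustration, the following Python sketch shows how Equations (2) and (3) can be evaluated numerically; the tap values and the 12 MHz modulation frequency are hypothetical and are not taken from the paper.

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def demodulate_distance(q1, q2, f_mod):
    """Estimate distance from four differential correlation samples (DCS).

    q1, q2 : arrays of shape (4, H, W) holding the two tap values
             (capacitors CA and CB) at phases 0, 90, 180 and 270 degrees.
    f_mod  : modulation frequency in Hz.
    """
    dcs = q1 - q2                                   # Equation (2): DC_i = Q1_DCi - Q2_DCi
    phase = np.arctan2(dcs[3] - dcs[1],
                       dcs[2] - dcs[0])             # four-step phase shift
    phase = np.mod(phase, 2 * np.pi)                # keep the phase in [0, 2*pi)
    return (C / 2) * phase / (2 * np.pi * f_mod)    # Equation (3)

# Hypothetical single-pixel example at a 12 MHz modulation frequency.
q1 = np.array([[[120.0]], [[180.0]], [[200.0]], [[140.0]]])
q2 = np.array([[[200.0]], [[140.0]], [[120.0]], [[180.0]]])
print(demodulate_distance(q1, q2, 12e6))
```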

2.2. Analysis of Error Sources of PMD Solid-State Array Lidar

The PMD Lidar is vulnerable to several factors, such as the internal non-uniformity of the ToF sensor, the demodulation process, temperature drift, ambient light, etc. Some of these factors have been discussed in our previous work [42] and are not repeated in this paper. However, factors such as integration time, temperature drift and DRNU still need to be considered. The analysis of these factors and the qualitative study are described in detail as follows.

2.2.1. Integration Time

Integration time is the time span over which an individual frame is exposed. In general, the integration time can be set from tens to thousands of microseconds. Too short an integration time causes the loss of local information, as shown in Figure 3a, while too long an integration time exceeds the ToF sensor's receiving range, leading to local saturation, as shown in Figure 3c. A proper value is needed to capture a sufficient number of photoelectrons without saturation, as shown in Figure 3b.

2.2.2. Temperature Drift

The ToF sensor is susceptible to the environmental temperature and to the heat it generates during operation, leading to an uneven temperature distribution. Since the electron mobility in the sensor is temperature dependent (the higher the temperature, the lower the electron mobility), this results in a non-uniformity of the measurement, as shown in Figure 4.
In addition, the illumination driver and the external circuit also have temperature dependent demodulation delay, which affect the distance measurement.

2.2.3. Distance Response Non-Uniformity (DRNU)

The ToF sensor typically has many analog-to-digital converters (ADCs) arranged along the columns of the pixel field. The different ADCs behave slightly differently, which results in a non-uniformity between the columns. In addition, a similar error exists between rows due to the non-uniformity of the row addressing signals. These two types of non-uniformity lead to differences in demodulation from pixel to pixel, which is called distance response non-uniformity (DRNU). This type of error also needs to be compensated.
For instance, the phase shift is calculated with Equation (4):
$$\varphi = \arctan\left( \frac{DC_3 - DC_1}{DC_2 - DC_0} \right) \tag{4}$$
while the phase shift actually measured without DRNU compensation is given by Equation (5):
$$\varphi' = \arctan\left( \frac{(DC_3 + a) - (DC_1 + b)}{(DC_2 + c) - (DC_0 + d)} \right) \tag{5}$$
where $a$, $b$, $c$ and $d$ are the per-pixel offsets introduced by the column ADC and row addressing non-uniformities.
Figure 5 shows a depth image without DRNU compensation, where the non-uniformity is clearly non-negligible.

3. Methodology

Conventional calibration methods are time-consuming and complex due to the existence of multiple error sources. Based on the previous work, this paper puts forward an improved calibration method based on electrical analog delay. The method fuses various error compensations into a comprehensive calibration model, in which the grayscale image and the depth image are calibrated jointly, as shown in Figure 6. The lens distortion is corrected using a self-adaptive interpolation algorithm based on Zhang's [49] calibration method. For grayscale image correction, DSNU and PRNU are compensated with an integration time simulation based method. For depth information correction, the grayscale image is used to estimate the parameters of ambient light compensation. After calculating the raw depth data, the pixel fixed pattern noise is eliminated by DRNU error compensation, and the temperature drift error is compensated last.

3.1. Lens Distortion Correction

The internal and external parameters of the PMD solid-state array Lidar were obtained through Zhang's [49] calibration method, which is not detailed in this paper. Different from the grayscale image, the depth image may contain holes (pixels where the received signal is too weak to demodulate a valid distance), which makes the traditional correction algorithm inapplicable. The pixel adaptive interpolation strategy we proposed in [42] was therefore utilized in this paper, as presented in Table 1.
Green points represent the projection of the corrected pixels on the raw image. Purple points are the surrounding pixels of the green ones, in which the solid purple points denote real distance pixels while the hollow ones represent holes. D0, D1, D2, D3 and D4 denote the pixel to be interpolated and its lower-left, top-left, lower-right and top-right pixels, respectively. (xp0, yp0), (xp1, yp1), (xp2, yp2), (xp3, yp3) and (xp4, yp4) are the corresponding coordinates. αx = xp0 − xp1, αy = yp0 − yp1, and (u, v) are the coordinates of the pixel to be interpolated in a barycentric coordinate system.
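A minimal sketch of such a pixel-adaptive interpolation is given below. It assumes that hole pixels are stored as NaN and simply renormalizes the bilinear weights over the valid neighbours, which is a simplified stand-in for the case-by-case rules of Table 1 rather than the exact strategy of [42].

```python
import numpy as np

def adaptive_interpolate(depth, xp, yp):
    """Interpolate a depth value at sub-pixel position (xp, yp).

    depth : 2D array with holes stored as NaN.
    Falls back gracefully when some of the four surrounding pixels are holes
    by renormalizing the bilinear weights of the valid neighbours.
    """
    x0, y0 = int(np.floor(xp)), int(np.floor(yp))
    ax, ay = xp - x0, yp - y0                       # fractional offsets
    neigh = depth[y0:y0 + 2, x0:x0 + 2]             # 2x2 neighbourhood
    w = np.array([[(1 - ax) * (1 - ay), ax * (1 - ay)],
                  [(1 - ax) * ay,       ax * ay]])  # bilinear weights
    valid = ~np.isnan(neigh)
    if not valid.any():
        return np.nan                               # all four neighbours are holes
    w_valid = w[valid]                              # keep weights of valid pixels only
    return float(np.sum(w_valid * neigh[valid]) / np.sum(w_valid))

# Hypothetical 3x3 depth patch (metres) with one hole.
patch = np.array([[1.00, 1.02, 1.01],
                  [1.03, np.nan, 1.02],
                  [1.05, 1.04, 1.03]])
print(adaptive_interpolate(patch, 0.6, 0.4))
```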

3.2. Grayscale Image Calibration

A grayscale image is acquired by the ToF chip when the PMD solid-state array Lidar works in passive mode. The acquisition process is consistent with the intensity image obtained by a traditional complementary metal oxide semiconductor (CMOS) chip, so the grayscale image can be used to characterize the intensity of the ambient light.
In general, the grayscale image is vulnerable to DSNU and PRNU, where DSNU represents the differences in gray values between pixels captured under dark conditions and PRNU represents the differences in gray values between pixels captured under normal illumination.
The most commonly used approach to correct the influence of DSNU and PRNU follows a standardized process, which can be found in the European Machine Vision Association (EMVA) Standard 1288 [50]. In this approach, one dark image and one bright image are captured under the same exposure condition, and DSNU and PRNU are then calculated from the images. However, the approach has some limitations: the images are captured under a specific condition, leading to poor applicability, and the ambient light is set artificially, which introduces extra error. Based on this approach, an integration time simulation based method is proposed in this paper. The main contributions of the method are as follows: (I) instead of setting the ambient light artificially, the ambient light levels are simulated by setting the integration time; and (II) the non-uniformity of the exposure is eliminated by calculating the mean value over multiple ambient light levels. The process of the method is given as follows.
(1)
Set the integration time to 0 μs to simulate the dark condition. Collect N = 100 frames of grayscale images and calculate the mean value.
$$Y_{dark}^{avg} = \frac{1}{N} \sum_{n=1}^{N} \frac{1}{320 \times 240} \sum_{x=1}^{320} \sum_{y=1}^{240} I_{dark}(x,y,n) \tag{6}$$
(2)
Change the integration time to simulate different ambient light levels. Collect N = 100 frames of grayscale images under amplitudes of 10%, 30%, 50% and 80% respectively. Similarly, the mean values with different amplitudes are obtained.
$$Y_{AL}^{avg} = \frac{1}{N} \sum_{n=1}^{N} \frac{1}{320 \times 240} \sum_{x=1}^{320} \sum_{y=1}^{240} I_{AL}(x,y,n) \tag{7}$$
(3)
Calculate the spatial variances under different ambient levels. Spatial variance is simply an overall measure of the spatial nonuniformity, which is helpful to estimate DSNU and PRNU.
$$\begin{aligned}
S_{dark}^2 &= \frac{1}{320 \times 240 - 1} \sum_{x=1}^{320} \sum_{y=1}^{240} \left[ I_{dark}(x,y) - Y_{dark}^{avg} \right]^2, \\
S_{AL}^2 &= \frac{1}{320 \times 240 - 1} \sum_{x=1}^{320} \sum_{y=1}^{240} \left[ I_{AL}(x,y) - Y_{AL}^{avg} \right]^2
\end{aligned} \tag{8}$$
(4)
Calculate the correction values of DSNU and PRNU.
$$b_{DSNU} = S_{dark}, \qquad k_{PRNU} = \frac{\sqrt{S_{AL}^2 - S_{dark}^2}}{Y_{AL}^{avg} - Y_{dark}^{avg}} \tag{9}$$
(5)
The grayscale compensation of pixel ( x , y ) is calculated by Equation (10).
$$I_{corr}(x,y) = \left[ I_{raw}(x,y) - b_{DSNU}(x,y) \right] \times k_{PRNU} \tag{10}$$
where $Y_{dark}^{avg}$ is the mean value of the grayscale images under the dark condition, $Y_{AL}^{avg}$ is the mean value of the grayscale images under ambient light, $N$ is the number of frames, $S_{dark}^2$ and $S_{AL}^2$ are the spatial variances under dark and ambient light conditions, respectively, $b_{DSNU}$ is the DSNU offset, $k_{PRNU}$ is the PRNU gain and $I_{corr}(x,y)$ is the compensated grayscale value after calibration.
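The sketch below illustrates one plausible implementation of Equations (6)–(10), assuming the 100 frames per ambient light level are stored as NumPy arrays. Note that a per-pixel dark offset is used here, which is a common practical variant and an assumption on our part, not the exact scalar statistic of Equation (9).

```python
import numpy as np

def estimate_dsnu_prnu(dark_frames, bright_frames):
    """Estimate a DSNU offset map and a PRNU gain from two frame stacks.

    dark_frames, bright_frames : arrays of shape (N, H, W) acquired with the
    integration time set to 0 (dark) and to a non-zero value that simulates
    a given ambient light level (bright).
    """
    dark_mean = dark_frames.mean(axis=0)        # per-pixel temporal mean
    bright_mean = bright_frames.mean(axis=0)
    y_dark = dark_mean.mean()                   # overall dark mean, Eq. (6)
    y_bright = bright_mean.mean()               # overall bright mean, Eq. (7)
    s2_dark = dark_mean.var(ddof=1)             # spatial variances, Eq. (8)
    s2_bright = bright_mean.var(ddof=1)
    b_dsnu = dark_mean - y_dark                 # per-pixel DSNU offset (assumption)
    k_prnu = np.sqrt(max(s2_bright - s2_dark, 0.0)) / (y_bright - y_dark)  # Eq. (9)
    return b_dsnu, k_prnu

def correct_grayscale(raw, b_dsnu, k_prnu):
    """Apply Equation (10) to a raw grayscale image."""
    return (raw - b_dsnu) * k_prnu
```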

3.3. Depth Image Calibration

3.3.1. Ambient Light Compensation

Due to its special structure, the PMD solid-state array Lidar is able to obtain a grayscale image and a depth image simultaneously, and the amplitude of the grayscale image is closely related to the ambient light. Based on this, the grayscale image is used to estimate the parameters of ambient light compensation in depth calibration, which is the self-adaptive grayscale correlation based depth calibration method (SA-GCDCM). The basic idea of the method is to eliminate the influence of ambient light in the sampling stage by introducing an ambient light correction factor KAL. The factor KAL is calculated from several DCs sampled under different ambient light levels, where the ambient light is controlled accurately by adjusting the integration time. KAL is then utilized to correct the DCs in the real sampling process. Internal noise and external error inevitably exist during sampling and are corrected in the calculation. The process of the method is as follows (all of the following measurements use the spatial average of the region of interest (ROI) between coordinates (100, 70) and (220, 165) and the temporal average of 100 frames by default):
(1)
Turn the ambient light on and record the amplitude of the grayscale image as Qgray.
(2)
Change the amplitude of the grayscale image to 0.5 times that of Qgray by adjusting the integration time. Measure the DC0/2 and record as DC0setting1 and DC2setting1, respectively.
(3)
Change the amplitude of the grayscale image to 1.5 times that of Qgray by adjusting the integration time. Measure the DC0/2 and record as DC0setting2 and DC2setting2, respectively.
(4)
Turn the ambient light off and measure the DC0/2, which are recorded as DC0no and DC2no, respectively.
(5)
Calculate four measurements, Q01, Q02, Q21 and Q22.
$$\begin{aligned}
Q_{0,1} &= DC_{0}^{setting1} - DC_{0}^{no}, \qquad & Q_{2,1} &= DC_{2}^{setting1} - DC_{2}^{no}, \\
Q_{0,2} &= DC_{0}^{setting2} - DC_{0}^{no}, \qquad & Q_{2,2} &= DC_{2}^{setting2} - DC_{2}^{no}
\end{aligned} \tag{11}$$
(6)
Correct the errors generated in the sampling. Internal noise and external errors inevitably exist: the internal noise mainly comes from the internal circuit and can be eliminated by subtracting two samples at the same phase, while the external error mainly comes from the instability of the environment and can be suppressed by averaging the samples.
$$k = \frac{(Q_{2,2} - Q_{2,1}) + (Q_{0,2} - Q_{0,1})}{2} \tag{12}$$
(7)
The ambient light correction factor KAL is calculated by Equation (13):
$$K_{AL} = \frac{Q_{gray}}{k} \tag{13}$$
where DC0/1/2/3 are the sample values acquired at 0°, 90°, 180° and 270°, respectively. KAL is used to compensate the ambient light error during the demodulation process, as in Equation (14):
$$DC_{0/1}^{corr}(x,y) = DC_{0/1}(x,y) - \frac{I_{corr}(x,y)}{K_{AL}} \tag{14}$$
where $DC_{0/1}^{corr}(x,y)$ represents the corrected value of $DC_{0/1}(x,y)$.
As introduced in Section 2.1, distance measurements are taken by acquiring the four DCs and calculated pixel-by-pixel during runtime as Equation (15):
$$D_{raw}(x,y) = \frac{c}{2} \cdot \frac{1}{2\pi f} \cdot \operatorname{atan2}\left( DC_3(x,y) - DC_1(x,y),\; DC_2(x,y) - DC_0(x,y) \right) \tag{15}$$
The equation is revised after ambient light compensation as Equation (16):
$$D_{raw}(x,y) = \frac{c}{2} \cdot \frac{1}{2\pi f} \cdot \operatorname{atan2}\left( DC_3(x,y) - DC_1^{corr}(x,y),\; DC_2(x,y) - DC_0^{corr}(x,y) \right) \tag{16}$$
where $DC_1^{corr}(x,y)$ and $DC_0^{corr}(x,y)$ are the sample values corrected by $K_{AL}$.
Through SA-GCDCM, the disturbance of ambient light could be effectively eliminated. Compared with the traditional joint method using a common RGB camera with a ToF camera, this method has no requirement of coordinate transformation and feature matching, leading to a better data consistency and self-adaptability.
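A compact sketch of the ambient light compensation steps (Equations (11)–(14)) is given below; all sample values in the usage example are placeholders rather than measured data.

```python
def ambient_correction_factor(q_gray, dc0_s1, dc2_s1, dc0_s2, dc2_s2, dc0_no, dc2_no):
    """Compute the ambient light correction factor K_AL, Equations (11)-(13).

    The 'setting1'/'setting2' samples are taken with the grayscale amplitude
    adjusted to 0.5x and 1.5x of Q_gray, and the 'no' samples with the
    ambient light switched off.
    """
    q0_1, q2_1 = dc0_s1 - dc0_no, dc2_s1 - dc2_no      # Equation (11)
    q0_2, q2_2 = dc0_s2 - dc0_no, dc2_s2 - dc2_no
    k = ((q2_2 - q2_1) + (q0_2 - q0_1)) / 2.0          # Equation (12)
    return q_gray / k                                   # Equation (13)

def compensate_dcs(dc, gray_corr, k_al):
    """Subtract the ambient light contribution from a DCS sample, Eq. (14)."""
    return dc - gray_corr / k_al

# Hypothetical ROI-averaged values.
k_al = ambient_correction_factor(q_gray=800.0,
                                 dc0_s1=150.0, dc2_s1=160.0,
                                 dc0_s2=250.0, dc2_s2=260.0,
                                 dc0_no=50.0, dc2_no=60.0)
dc0_corr = compensate_dcs(dc=120.0, gray_corr=400.0, k_al=k_al)
print(k_al, dc0_corr)
```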

3.3.2. Demodulation Error Correction

In general, a sinusoidal wave is adopted as the modulated continuous wave signal, which can be represented as $E(t) = kA\cos(\omega t)$. Similarly, the received signal is treated as a sinusoidal wave in demodulation as well. However, because of the limitations of the generator bandwidth, the actual received signal is closer to a rectangular wave [42], as shown in Figure 7. Therefore, the rectangular wave was used for the demodulation analysis in this paper.
Different from Section 2.1, the sampling process is modified as Equation (17):
$$\begin{aligned}
Q_{1,DC_0} &= A\,t_{ToF}, & 0 < t_{ToF} < \tfrac{\pi}{\omega} \\
Q_{2,DC_0} &= A\left(\tfrac{\pi}{\omega} - t_{ToF}\right), & 0 < t_{ToF} < \tfrac{\pi}{\omega} \\
Q_{1,DC_1} &= \begin{cases} A\left(\tfrac{\pi}{2\omega} - t_{ToF}\right), & 0 < t_{ToF} < \tfrac{\pi}{2\omega} \\ A\left(t_{ToF} - \tfrac{\pi}{2\omega}\right), & \tfrac{\pi}{2\omega} < t_{ToF} < \tfrac{\pi}{\omega} \end{cases} \\
Q_{2,DC_1} &= \begin{cases} A\left(t_{ToF} + \tfrac{\pi}{2\omega}\right), & 0 < t_{ToF} < \tfrac{\pi}{2\omega} \\ A\left(\tfrac{3\pi}{2\omega} - t_{ToF}\right), & \tfrac{\pi}{2\omega} < t_{ToF} < \tfrac{\pi}{\omega} \end{cases} \\
Q_{1,DC_2} &= A\left(\tfrac{\pi}{\omega} - t_{ToF}\right), & 0 < t_{ToF} < \tfrac{\pi}{\omega} \\
Q_{2,DC_2} &= A\,t_{ToF}, & 0 < t_{ToF} < \tfrac{\pi}{\omega} \\
Q_{1,DC_3} &= \begin{cases} A\left(t_{ToF} + \tfrac{\pi}{2\omega}\right), & 0 < t_{ToF} < \tfrac{\pi}{2\omega} \\ A\left(\tfrac{3\pi}{2\omega} - t_{ToF}\right), & \tfrac{\pi}{2\omega} < t_{ToF} < \tfrac{\pi}{\omega} \end{cases} \\
Q_{2,DC_3} &= \begin{cases} A\left(\tfrac{\pi}{2\omega} - t_{ToF}\right), & 0 < t_{ToF} < \tfrac{\pi}{2\omega} \\ A\left(t_{ToF} - \tfrac{\pi}{2\omega}\right), & \tfrac{\pi}{2\omega} < t_{ToF} < \tfrac{\pi}{\omega} \end{cases}
\end{aligned} \tag{17}$$
Thus, an extra error, called the demodulation error, arises from this revised sampling process. The method to correct the demodulation error has been discussed extensively in [42] and is not repeated in this paper.

3.3.3. DRNU Error Compensation

The calibration was based on the electrical analog delay method: instead of changing the real distance, a delay-locked loop was used to simulate the distance. The simulated distance is composed of two parts, as shown in Equation (18). The first part is the simulated distance of the DLLs; the system contains several DLLs and each DLL represents a specific simulated distance, e.g., 0.3 m. The second part is the real distance between the calibration plate and the PMD Lidar. By combining the two parts, multiple distances can be simulated without moving the calibration plate. However, the DLL is susceptible to temperature changes and circuit delay, leading to a deviation between the simulated distance and the set distance. Thus, a DRNU error compensation was conducted as in Equation (19):
$$D_{sim}(x,y) = n \times d_{DLL} + o_{zero} \tag{18}$$
$$DRNU(x,y) = D_{cal}(x,y) - D_{sim}(x,y) \tag{19}$$
where dDLL represents the simulated distance of a single DLL, n is the number of DLLs, ozero is the distance between the PMD solid-state array Lidar and the reflecting plate, Dsim(x,y) represents the overall simulated distance, Dcal(x,y) represents the corrected distance after compensation and DRNU(x,y) is the compensation value.
Since the DRNU error is distance dependent and the limited number of compensation values cannot completely cover the whole distance range, a linear interpolation was carried out to obtain a continuous offset curve. The interpolation method is standard and is not detailed here.
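A possible runtime implementation of this compensation is sketched below, assuming the per-pixel offsets of Equation (19) are computed from the distances measured at each DLL setting and then linearly interpolated at the measured distance; the explicit pixel loop is kept simple for clarity.

```python
import numpy as np

def build_drnu_lut(d_measured, d_sim):
    """Per-pixel DRNU offsets at the simulated distances, Equation (19).

    d_measured : array (K, H, W) of distances measured at K DLL settings.
    d_sim      : array (K,) of the corresponding simulated distances, Eq. (18),
                 assumed to be in increasing order.
    """
    return d_measured - d_sim[:, None, None]

def apply_drnu(d_raw, lut, d_sim):
    """Subtract the linearly interpolated DRNU offset for each pixel."""
    corrected = np.empty_like(d_raw)
    h, w = d_raw.shape
    for y in range(h):
        for x in range(w):
            offset = np.interp(d_raw[y, x], d_sim, lut[:, y, x])
            corrected[y, x] = d_raw[y, x] - offset
    return corrected
```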

3.3.4. Temperature Compensation

Several studies have reported the influence of temperature on the ToF camera [7,15,35]. In this paper, the main components related to the temperature error of the PMD Lidar are classified into three parts: the ToF sensor, the illumination driver and the external circuit, as discussed in Section 2.2.2. Experiments showed that the error has a linear relation with temperature: the higher the temperature, the larger the error. The error arising from temperature drift was therefore compensated with a joint equation, as shown in Equation (20):
$$D_{final}(x,y) = D_{cal}(x,y) - (T_{act} - T_{cal}) \times (TC_{pix} + TC_{laser} + n \times TC_{DLL}) \tag{20}$$
where Dfinal(x,y) is the corrected distance after temperature compensation, Tact is the actual operating temperature, Tcal is the temperature during calibration, TCpix is the temperature coefficient of the pixel, TClaser is the temperature coefficient of the illumination unit and TCDLL is the temperature coefficient of the DLL stage.
For the device used in this paper, TCpix was 11.3 mm/K, TClaser was 2.7 mm/K and TCDLL was 0.7 mm/K. It is worth mentioning that these parameters were obtained for this specific device and are not directly applicable to other devices, due to the fabrication variations of the circuit board, chip and other components.
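As a worked illustration of Equation (20) using the coefficients above, with a hypothetical temperature offset of 5 K and n = 10 DLL stages:

```python
def temperature_compensation(d_cal_mm, t_act, t_cal, n_dll,
                             tc_pix=11.3, tc_laser=2.7, tc_dll=0.7):
    """Equation (20): remove the temperature drift from a calibrated distance.

    Temperature coefficients are in mm/K; distances are in mm.
    """
    drift = (t_act - t_cal) * (tc_pix + tc_laser + n_dll * tc_dll)
    return d_cal_mm - drift

# Hypothetical example: 5 K above the calibration temperature, 10 DLL stages
# -> drift = 5 * (11.3 + 2.7 + 10 * 0.7) = 105 mm, subtracted from 2000 mm.
print(temperature_compensation(2000.0, t_act=45.0, t_cal=40.0, n_dll=10))
```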
After utilizing the temperature compensation, the multiscene adaptive multifactor calibration model was established. Then the feasibility of the model was verified by experiments in Section 4.

4. Experiments and Discussions

4.1. Experimental Settings

The PMD solid-state array Lidar, shown in Figure 8, is mainly composed of four parts: the emitting unit, the receiving unit, the processing unit and the transmission unit. The emitting unit generates and emits the NIR light with the VCSEL. The receiving unit receives the returned light with a CMOS sensor and converts the optical signal into an electrical signal. The processing unit calculates the distance data by demodulating the phase delay between the emitted and the detected signal. The transmission unit transmits the distance data to the computer.
As illustrated in Figure 9, the experimental settings were established. The grayscale image calibration system, as shown in Figure 9a, was used to calibrate the lens distortion and eliminate the effect of DSNU and PRNU. The depth image calibration system, as shown in Figure 9b, was used to compensate multierror sources in distance measurements.
The grayscale image calibration system mainly includes the checkerboard, the PMD solid-state array Lidar, the clamping device and the rail. Based on the system, the grayscale image calibration was conducted as follows.
(1)
Clamp the PMD Lidar on the clamping device.
(2)
Adjust the clamping device to a proper location where the checkerboard is suitable in size and position in the field of view of the PMD Lidar.
(3)
Calibrate the grayscale image based on integration time simulation.
(4)
Obtain several grayscale images with the checkerboard in different orientations to calibrate the lens distortion.
(5)
Utilize the pixel adaptive interpolation strategy to fill the holes.
The depth image calibration system mainly includes the reflecting plate, the PMD solid-state array Lidar, the cylindrical tube, the ambient light source, the clamping device and the rail. The cylindrical tube was used to protect the ToF sensor from stray light, and the ambient light source was used to provide assistant lighting. Based on this system, the depth image calibration was conducted as follows.
(1)
Install the cylinder on the PMD Lidar and clamp the PMD Lidar on the clamping device.
(2)
Adjust the clamping device to a proper location where the quality of the light spot projected on the reflecting plate is optimized.
(3)
Change the distance with the electrical analog delay method to perform the depth calibration. Multiple error compensation is included in this step.
(4)
Change the reflecting plate to adapt the method to objects of different reflectivities.
(5)
Conduct the interpolation on the data to obtain the continuous offset curves.

4.2. Results with Grayscale Image Calibration

4.2.1. Lens Distortion Correction

Several grayscale images with the checkerboard in different orientations were obtained to calibrate the lens distortion. The pixel adaptive interpolation strategy was used to fill the holes. The results are shown in Figure 10.
The internal parameters and the distortion coefficients are shown in Table 2.

4.2.2. DSNU and PRNU

The influences of DSNU and PRNU were eliminated based on the integration time simulation method introduced in Section 3.2. Firstly, the raw grayscale images were acquired under multiple ambient light levels by setting the integration time to several values. Then the spatial variances were calculated from the images to estimate the DSNU and PRNU. Finally, the influence of DSNU and PRNU was eliminated by the correction algorithm.
To evaluate the effectiveness of the method, several experiments were conducted in different scenes. A checkerboard and a flat white board were used as the test scenes to conduct a qualitative analysis and a quantitative analysis. Then gray images were captured under two real scenes to verify the feasibility of the method. The results are shown in Figure 11.
The images in the left column were captured before calibration, while the images in the right column were captured after calibration. Figure 11a,b were captured with the checkerboard. Compared with the image before calibration, two types of non-uniformity were compensated. In the raw image, the central area was brighter and the surroundings were darker because of the uneven exposure, and there were distinct vertical light and dark stripes. In the image after calibration, both phenomena were obviously suppressed.
To further prove the effectiveness of the method, a quantitative analysis was then conducted. A flat white board was suitable for this analysis because of its good flatness and smoothness. The images are shown in Figure 11c,d. The visual improvement was consistent with the results of the checkerboard: both types of non-uniformity were obviously compensated. To better verify the improvement, the mean value, root mean square error (RMSE) and peak signal to noise ratio (PSNR) [51] were chosen to evaluate the quality of the images, and the results are shown in Table 3. The mean value shows little difference before and after the calibration, which means the method did not change the overall level of the grayscale signal. However, the RMSE was significantly reduced after calibration, which indicates that the uniformity of the grayscale signal was distinctly improved. In addition, the PSNR improved after calibration, which means the noise derived from PRNU and DSNU was reduced.
Experiments were then conducted in two real scenes to verify the applicability of the method in practice, as shown in Figure 11e–h. It can be seen that the vertical stripes were effectively suppressed after calibration, and the uneven exposure, which leads to uneven brightness of the image, was obviously suppressed as well. Similarly, a quantitative analysis was conducted and the results are shown in Table 3. The mean values showed no distinct differences before and after the calibration, while the reduction of the RMSE was obvious, meaning that the non-uniformity of the grayscale images was suppressed. The PSNRs were higher after calibration, which means the noise was effectively reduced. The results in the real scenes were in accordance with the results in the test scenes, which means the calibration method is feasible in practice.

4.3. Results with Depth Image Calibration

Depth image calibration was carried out after grayscale image calibration. The results are shown in Figure 12. In the left image, which was obtained before calibration, there were many incorrect or even invalid data points, and bright and dark stripes can be observed distinctly. The non-uniformity of the whole image indicates that the depth data were untrustworthy before calibration. After applying ambient light compensation, demodulation error correction, DRNU error compensation and temperature compensation, the quality of the depth image improved significantly: the number of noise points was obviously reduced and the confidence of the depth information was greatly improved.

4.4. Ranging Accuracy Verification under Real Environment

To verify the ranging accuracy and the adaptability of the calibration method, several tests under the real environment were conducted. The test system is shown in Figure 13. Firstly, the test system was placed in the dark environment (the ambient light was about 0 Lux), indoor environment (about 500 Lux) and outdoor environment (about 1200 Lux), respectively. Then the reflecting plate was set as 80% reflectivity, 50% reflectivity and 20% reflectivity, respectively.
In each test, the distance between the PMD solid-state array Lidar and the reflecting plate was varied from 0.5 to 5 m in steps. The mean value in the ROI (1000 pixels in the central region) was recorded as the measured distance, and the distance error was then calculated. The test results are illustrated in Figure 14.
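For reference, a short sketch of how the measured distance and error can be extracted from a stack of depth frames is given below; the ROI slice used here (roughly 1000 central pixels) is a hypothetical choice, not the exact region used in the experiments.

```python
import numpy as np

def ranging_error(depth_frames, ground_truth_m,
                  roi=(slice(103, 137), slice(145, 175))):
    """Mean measured distance and ranging error over a central ROI.

    depth_frames   : array (N, H, W) of depth images in metres.
    ground_truth_m : true plate distance in metres.
    roi            : hypothetical slices covering roughly 1000 central pixels.
    """
    measured = np.nanmean(depth_frames[:, roi[0], roi[1]])
    return measured, measured - ground_truth_m
```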
Compared with the method in [42], the method proposed in this paper effectively reduced the error over the distance range of 0.5–5 m, and the adaptivity under different environments improved considerably. Table 4 gives the detailed performance comparison of the two methods.
The maximal error, average error and RMSE were chosen to compare the detailed performance. From Table 4, the maximum error was reduced distinctly by the proposed method, and the non-uniformity was also reduced considerably. Although the difference in average error was not as distinct as the other two indicators, the proposed method had better adaptability; in other words, it was more adaptive to multiple scenes and different reflectivities.
The proposed method was also compared with several traditional methods, and the results are shown in Table 5. It can be clearly seen that the proposed method has better range accuracy than the methods in [13,17,26]. Although the mean distance error shows no distinct improvement compared with the method in [19], the proposed method performs better in terms of calibration time and scene scope. In addition, the experimental results show that the proposed method has outstanding adaptability.

5. Conclusions

To improve the range accuracy of the PMD solid-state array Lidar, this paper presents a self-adaptive grayscale correlation based depth calibration method (SA-GCDCM) based on electrical analog delay. Based on the characteristics of the PMD solid-state array Lidar, the grayscale image was used to estimate the parameters of ambient light compensation in depth calibration. To obtain uniform and stable grayscale images, an integration time simulation based method was proposed to eliminate the influence of DSNU and PRNU. By combining SA-GCDCM with demodulation error correction, DRNU error compensation and temperature compensation, a comprehensive, multiscene adaptive, multifactor calibration model was established. A series of experiments was conducted to verify the ranging accuracy and adaptability of the method. Compared with the prior work, the maximum error and the RMSE were both distinctly reduced, indicating better accuracy and adaptability. Compared with traditional methods, the proposed method performed better in terms of range accuracy, calibration time and scene scope. The proposed method is more adaptive to multiple scenes with targets of different reflectivities, which significantly improves the ranging accuracy and adaptability of the PMD Lidar.

Author Contributions

X.W. and P.S. developed the improved calibration method and procedure for PMD solid-state array Lidar. X.W. and W.Z. established the calibration system and verification test platform, meanwhile, analyzed the experiment results. All authors have read and agreed to the published version of the manuscript.

Funding

The research was funded by National Defense Basic Scientific Research Program of China grant number JCKY2019602B010.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sansoni, G.; Trebeschi, M.; Docchio, F. State-of-The-Art and Applications of 3D Imaging Sensors in Industry, cultural heritage, medicine, and criminal investigation. Sensors 2009, 9, 568–601. [Google Scholar] [CrossRef] [PubMed]
  2. Henry, P.; Krainin, M.; Herbst, E.; Ren, X.; Fox, D. RGB-D mapping: Using kinect-style depth cameras for dense 3D modeling of indoor environments. Int. J. Robot. Res. 2012, 31, 647–663. [Google Scholar] [CrossRef] [Green Version]
  3. Okada, K.; Inaba, M.; Inoue, H. Integration of real-time binocular stereo vision and whole body information for dynamic walking navigation of humanoid robot. In Proceedings of the IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, (MFI 2003), Tokyo, Japan, 1 August 2003; pp. 131–136. [Google Scholar]
  4. Scharstein, D.; Szeliski, R. High-accuracy stereo depth maps using structured light. In Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Madison, WI, USA, 18–20 June 2003; IEEE: Piscataway, NJ, USA, 2003; Volume 1, pp. 195–202. [Google Scholar]
  5. Sun, M.J.; Edgar, M.P.; Gibson, G.M.; Sun, B.; Radwell, N.; Lamb, R.; Padgett, M.J. Single-pixel three-dimensional imaging with time-based depth resolution. Nat. Commun. 2016, 7, 12010. [Google Scholar] [CrossRef] [PubMed]
  6. Cui, Y.; Schuon, S.; Chan, D.; Thrun, S.; Theobalt, C. 3D shape scanning with a time-of-flight camera. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 1173–1180. [Google Scholar]
  7. He, Y.; Chen, S.Y. Recent advances in 3D Data acquisition and processing by Time-of-Flight camera. IEEE Access. 2019, 7, 12495–12510. [Google Scholar] [CrossRef]
  8. Horaud, R.; Hansard, M.; Evangelidis, G.; Ménier, C. An overview of depth cameras and range scanners based on time-of-flight technologies. Mach. Vis. Appl. 2016, 27, 1005–1020. [Google Scholar] [CrossRef] [Green Version]
  9. Rueda, H.; Fu, C.; Lau, D.L.; Arce, G.R. Single aperture spectral+ToF compressive camera: Toward hyperspectral+depth imagery. IEEE J. Sel. Top. Signal Process. 2017, 11, 992–1003. [Google Scholar] [CrossRef]
  10. Rueda-Chacon, H.; Florez, J.F.; Lau, D.L.; Arce, G.R. Snapshot compressive ToF+Spectral imaging via optimized color-coded apertures. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 42, 2346–2360. [Google Scholar]
  11. Lindner, M.; Schiller, I.; Kolb, A.; Koch, R. Time-of-Flight sensor calibration for accurate range sensing. Comput. Vis. Image Underst. 2010, 114, 1318–1328. [Google Scholar] [CrossRef]
  12. Lindner, M.; Kolb, A. Lateral and depth calibration of PMD-distance sensors. advances in visual computing. In Advance of Visual Computing, Proceedings of the Second International Symposium on Visual Computing, Lake Tahoe, NV, USA, 6–8 November 2006; Springer Science and Business Media: Berlin/Heidelberg, Germany, 2006; Volume 4292, pp. 524–533. [Google Scholar]
  13. Lindner, M.; Kolb, A. Calibration of the intensity-related distance error of the PMD TOF-camera. In Intelligent Robots and Computer Vision XXV: Algorithms, Techniques, and Active Vision, Proceedings of Optics East, Boston, MA, USA, 9–12 September 2007; Casanent, D.P., Hall, E.L., Röning, R., Eds.; SPIE: Bellingham, WA, USA, 2007; Volume 6764, p. 67640W. [Google Scholar]
  14. Lindner, M.; Lambers, M.; Kolb, A. Sub-pixel data fusion and edge-enhanced distance refinement for 2d/3d images. Int. J. Intell. Syst. Technol. Appl. 2008, 5, 344–354. [Google Scholar] [CrossRef] [Green Version]
  15. Kahlmann, T.; Remondino, F.; Ingensand, H. Calibration for increased accuracy of the range imaging camera SwissRanger. In Proceedings of the ISPRS Commission V Symposium “Image Engineering and Vision Metrology”, Dresden, Germany, 25–27 September 2006; Volume 36, pp. 136–141. [Google Scholar]
  16. Kahlmann, T.; Ingensand, H. Calibration and development for increased accuracy of 3D range imaging cameras. J. Appl. Geodesy. 2008, 2, 1–11. [Google Scholar] [CrossRef]
  17. Steiger, O.; Felder, J.; Weiss, S. Calibration of time-of-flight range imaging cameras. In Proceedings of the 2008 15th IEEE International Conference on Image Processing, San Diego, CA, USA, 12–15 October 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 1968–1971. [Google Scholar]
  18. Swadzba, A.; Beuter, N.; Schmidt, J.; Sagerer, G. Tracking objects in 6D for reconstructing static scenes. In Proceedings of the 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Anchorage, AK, USA, 23–28 June 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 1–7. [Google Scholar]
  19. Schiller, I.; Beder, C.; Koch, R. Calibration of a PMD-camera using a planar calibration pattern together with a multicamera setup. ISPRS Int. J. Geo-Inf. 2008, 21, 297–302. [Google Scholar]
  20. Fuchs, S.; May, S. Calibration and registration for precise surface reconstruction with time of flight cameras. Int. J. Int. Syst. Technol. App. 2008, 5, 274–284. [Google Scholar] [CrossRef]
  21. Fuchs, S.; Hirzinger, G. Extrinsic and depth calibration of ToF-cameras. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Anchorage, AK, USA, 23–28 June 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 1–6. [Google Scholar]
  22. Chiabrando, F.; Chiabrando, R.; Piatti, D.; Rinaudo, F. Sensors for 3D Imaging: Metric Evaluation and Calibration of a CCD/CMOS Time-of-Flight Camera. Sensors 2009, 9, 10080–10096. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  23. Christian, B.; Reinhard, K. Calibration of focal length and 3D pose based on the reflectance and depth image of a planar object. Int. J. Intell. Syst. Technol. Appl. 2008, 5, 285–294. [Google Scholar]
  24. Kuhnert, K.D.; Stommel, M. Fusion of stereo-camera and PMD-camera data for real-time suited precise 3D environment reconstruction. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006; pp. 4780–4785. [Google Scholar]
  25. Schmidt, M. Analysis, Modeling and Dynamic Optimization of 3d Time-of-Flight Imaging Systems. Ph.D. Thesis, University of Heidelberg, Heidelberg, Germany, 20 July 2011; pp. 1–158. [Google Scholar]
  26. Huang, T.; Qian, K.; Li, Y. All Pixels Calibration for ToF Camera. In Proceedings of the IOP Conference Series: Earth and Environmental Science, Ordos, China, 28–29 April 2018; IOP Publishing: Bristol, UK, 2018; Volume 170, p. 022164. [Google Scholar]
  27. Ying, H.; Bin, L.; Yu, Z.; Jin, H.; Jun, Y. Depth errors analysis and correction for Time-of-Flight (ToF) cameras. Sensors 2017, 17, 92. [Google Scholar]
  28. Radmer, J.; Fuste, P.M.; Schmidt, H.; Kruger, J. Incident light related distance error study and calibration of the PMD-range imaging camera. In Proceedings of the 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Anchorage, AK, USA, 23–28 June 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 1–6. [Google Scholar]
  29. Frank, M.; Plaue, M.; Rapp, H.; Köthe, U.; Jähne, B.; Hamprecht, F.A. Theoretical and experimental error analysis of continuous-wave time-of-flight range cameras. Opt. Eng. 2009, 48, 013602. [Google Scholar]
  30. Schiller, I.; Bartczak, B.; Kellner, F.; Kollmann, J.; Koch, R. Increasing realism and supporting content planning for dynamic scenes in a mixed reality system incorporating a Time-of-Flight camera. In Proceedings of the IET 5th European Conference on Visual Media Production (CVMP 2008), London, UK, 26–27 November 2008; IET: Stevenage, UK, 2008; Volume 7, p. 11. [Google Scholar] [CrossRef] [Green Version]
  31. Foix, S.; Alenya, G.; Torras, C. Lock-in time-of-flight (ToF) cameras: A survey. IEEE Sens. J. 2011, 11, 1917–1926. [Google Scholar] [CrossRef] [Green Version]
  32. Piatti, D.; Rinaudo, F. SR-4000 and CamCube3.0 time of flight (ToF) cameras: Tests and comparison. Remote Sens. 2012, 4, 1069–1089. [Google Scholar] [CrossRef] [Green Version]
  33. Lee, S. Time-of-flight depth camera accuracy enhancement. Opt. Eng. 2012, 51, 083203. [Google Scholar] [CrossRef]
  34. Lee, C.; Kim, S.Y.; Kwon, Y.M. Depth error compensation for camera fusion system. Opt. Eng. 2013, 52, 073103. [Google Scholar] [CrossRef]
  35. Fürsattel, P.; Placht, S.; Balda, M.; Schaller, C.; Hofmann, H.; Maier, A.; Riess, C. A Comparative error analysis of current Time-of-Flight sensors. IEEE Trans. Comput. Imaging 2017, 2, 27–41. [Google Scholar] [CrossRef]
  36. Karel, W. Integrated range camera calibration using image sequences from hand-held operation. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 37, 945–952. [Google Scholar]
  37. Gao, J.; Jia, B.; Zhang, X.; Hu, L. PMD camera calibration based on adaptive bilateral filter. In Proceedings of the 2011 Symposium on Photonics and Optoelectronics (SOPO), Wuhan, China, 16–18 May 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 1–4. [Google Scholar]
  38. Park, Y.; Yun, S.; Won, C.S.; Cho, K.; Um, K.; Sim, S. Calibration between color camera and 3D LIDAR instruments with a polygonal planar board. Sensors 2014, 14, 5333–5353. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  39. Jung, J.; Lee, J.-Y.; Jeong, Y.; Kweon, I.S. Time-of-flight sensor calibration for a color and depth camera pair. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1501–1513. [Google Scholar] [CrossRef]
  40. Villena-Martínez, V.; Fuster-Guilló, A.; Azorín-López, J.; Saval-Calvo, M.; Mora-Pascual, J.; Garcia-Rodriguez, J.; Garcia-Garcia, A. A quantitative comparison of calibration methods for RGB-D sensors using different technologies. Sensors 2017, 17, 243. [Google Scholar] [CrossRef] [Green Version]
  41. Zeng, Y.; Yu, H.; Dai, H.; Song, S.; Lin, M.; Sun, B.; Jiang, W.; Meng, M.Q. An improved calibration method for a rotating 2D LiDAR system. Sensors 2018, 18, 497. [Google Scholar] [CrossRef] [Green Version]
  42. Zhai, Y.; Song, P.; Chen, X. A fast calibration method for photonic mixer device solid-state array lidars. Sensors 2019, 19, 822. [Google Scholar] [CrossRef] [Green Version]
  43. Fujioka, I.; Ho, Z.; Gu, X.; Koyama, F. Solid State LiDAR with Sensing Distance of over 40m using a VCSEL Beam Scanner. In Proceedings of the 2020 Conference on Lasers and Electro-Optics, San Jose, CA, USA, 10–15 May 2020; OSA Publishing: Washington, DC, USA, 2020; p. SM2M.4. [Google Scholar]
  44. Prafulla, M.; Marshall, T.D.; Zhu, Z.; Sridhar, C.; Eric, P.F.; Blair, M.K.; John, W.M. VCSEL Array for a Depth Camera. U.S. Patent US20150229912A1, 27 September 2016. [Google Scholar]
  45. Seurin, J.F.; Zhou, D.; Xu, G.; Miglo, A.; Li, D.; Chen, T.; Guo, B.; Ghosh, C. High-efficiency VCSEL arrays for illumination and sensing in consumer applications. In Vertical-Cavity Surface-Emitting Lasers XX, Proceedings of the SPIE OPTO, San Francisco, CA, USA, 4 March 2016; SPIE: Bellingham, WA, USA, 2016; Volume 9766, p. 97660D. [Google Scholar]
  46. Tatum, J. VCSEL proliferation. In Vertical-Cavity Surface-Emitting Lasers XI, Proceedings of the Integrated Optoelectronic Devices 2007, San Jose, CA, USA; SPIE: Bellingham, WA, USA, 2007; Volume 6484, p. 648403. [Google Scholar]
  47. Kurtti, S.; Nissinen, J.; Kostamovaara, J. A wide dynamic range CMOS laser radar receiver with a time-domain walk error compensation scheme. IEEE Trans. Circuits Syst. I-Regul. Pap. 2016, 64, 550–561. [Google Scholar] [CrossRef]
  48. Kadambi, A.; Whyte, R.; Bhandari, A.; Streeter, L.; Barsi, C.; Dorrington, A.; Raskar, R. Coded time of flight cameras: Sparse deconvolution to address multipath interference and recover time profiles. ACM Trans. Graph. 2013, 32, 167. [Google Scholar] [CrossRef] [Green Version]
  49. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef] [Green Version]
  50. EMVA. European Machine Vision Association: Downloads. Available online: www.emva.org/standards-technology/emva-1288/emva-standard-1288-downloads/ (accessed on 30 December 2016).
  51. Pappas, T.N.; Safranek, R.J.; Chen, J. Perceptual Criteria for Image Quality Evaluation. In Handbook of Image and Video Processing, 2nd ed.; Elsevier: Houston, TX, USA, 2005; pp. 939–959. [Google Scholar]
Figure 1. The fundamental principle of the photonic mixer device (PMD) solid-state array Lidar.
Figure 2. Signal demodulation method.
Figure 3. Depth images under different integration times. (a) under 50 μs; (b) under 300 μs and (c) under 650 μs. The images were captured with a flat board.
Figure 4. Non-uniform measurement due to uneven temperature distribution.
Figure 5. Depth image without distance response non-uniformity (DRNU) compensation.
Figure 6. The comprehensive calibration model process.
Figure 7. The actual sinusoidal modulation signal and its Fourier transform. (a) Actual sinusoidal modulation signal. (b) Fast Fourier transform of the sinusoidal modulation signal.
Figure 8. The PMD solid-state array Lidar.
Figure 9. Experimental settings. (a) The grayscale image calibration system and (b) the depth image calibration system.
Figure 10. Results of lens distortion correction. (a) Before calibration and (b) after calibration.
Figure 11. Results of grayscale image calibration. Images (a, c, e, g) are captured before calibration, while images (b, d, f, h) are captured after calibration.
Figure 12. Results of depth image calibration. (a) Before calibration and (b) after calibration.
Figure 13. The test system.
Figure 14. The test results. (a) Under different ambient conditions and (b) with targets of different reflectivities.
Table 1. Pixel adaptive interpolation strategy.
Case | Pixel Adaptive Interpolation Strategy
Sensors 20 07329 i001 | D0 = (1 − αx)(1 − αy)·D1 + (1 − αx)·αy·D2 + αx·(1 − αy)·D3 + αx·αy·D4
Sensors 20 07329 i002 | D0 = u·D1 + v·D2 + (1 − u − v)·D4
Sensors 20 07329 i003 | D0 = (1 − (xp0 + yp0)/2)·D1 + ((xp0 + yp0)/2)·D4
Sensors 20 07329 i004 | D0 = αy·D4 + (1 − αy)·D3
Sensors 20 07329 i005 | D0 = Dx
Sensors 20 07329 i006 | D0 = NaN
Table 2. Lens parameters.
Internal parameters | fx = 208.915 | fy = 209.647 | cx = 159.404 | cy = 127.822
Distortion coefficients | k1 = −0.37917 | k2 = 0.17410 | p1 = 0.00021 | p2 = 0.00124
Table 3. Quantitative analysis of the grayscale calibration method.
Metric | Flat White Board (Before / After) | Real Scene 1 (Before / After) | Real Scene 2 (Before / After)
Mean value | 913.98 / 915.73 | 559.85 / 571.83 | 278.91 / 305.03
RMSE | 168.22 / 12.89 | 291.30 / 221.23 | 256.65 / 192.86
PSNR | 43.97 / 55.13 | 41.58 / 42.78 | 42.13 / 43.37
Table 4. Detailed performance comparison results of the two methods.
Method | Maximal Error (mm) | Average Error (mm) | RMSE (mm)
The proposed method | 16.4 | 8.13 | 4.47
Reference [42] method | 20.5 | 9.68 | 5.56
Table 5. Detailed performance compared with traditional methods.
Method | Distance Error (mm) at 900 / 1100 / 1300 / 1700 / 2100 / 2500 / 3000 / 3500 / 4000 mm | Calibration Time | Scene Scope
Lindner et al. [13] | 19.4 / 28.2 / 21.0 / 28.9 / 13.5 / 17.3 / 15.9 / 21.8 / 26.7 | About dozens of minutes | About 4 m × 0.6 m × 0.4 m
Steiger et al. [17] | NaN / 3 (at 1207) / 25 / 5 / 7 / NaN / NaN / NaN / NaN | About dozens of minutes | Not mentioned
Schiller et al. [19] (automatic feature detection) | 7.45 (mean) | About dozens of minutes | About 3 m × 0.6 m × 0.4 m
Schiller et al. [19] (some manual feature selection) | 7.51 (mean) | About dozens of minutes | About 3 m × 0.6 m × 0.4 m
Huang et al. [26] | 42 / 23 / 18 / 24 / 46 / 60 / 58 / 76 / NaN | Not mentioned | About 1.5 m × 1.5 m × 2 m
The proposed method | 3.1 / 4.4 / 5.5 / 7 / 7.4 / 8.1 / 9.8 / 9.6 / 12 | 90 s (calculation); 10 min (calculation, scene setup and initialization) | About 1.0 m × 1.0 m × 1.5 m
