Article

A Study of Correction to the Point Cloud Distortion Based on MEMS LiDAR System

1 National Key Laboratory of Tunable Laser Technology, Harbin Institute of Technology, Harbin 150001, China
2 Shenzhen Geling Institute of Artificial Intelligence and Robotics, Shenzhen 518000, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(5), 2418; https://doi.org/10.3390/app11052418
Submission received: 5 January 2021 / Revised: 25 February 2021 / Accepted: 1 March 2021 / Published: 9 March 2021

Abstract

Active imaging technology can perceive the surrounding environment and acquire three-dimensional information about a target. Among active imaging techniques, light detection and ranging (LiDAR) imaging systems are one of the hottest topics in the field of photoelectric active imaging. Owing to their small size, fast scanning speed, low power consumption, low price, and strong anti-interference capability, micro-electro-mechanical system (MEMS) based micro-scanning mirrors are widely used in LiDAR imaging systems. However, the resulting point cloud is distorted, which hinders the accurate acquisition of target information. In this article, we first analyze the causes of the distortion and then introduce a novel coordinate correction method that corrects the point cloud distortion of a MEMS-based micro-scanning LiDAR system. We implemented the method in a two-dimensional MEMS LiDAR system to verify its feasibility. Experiments show that the point cloud distortion is essentially corrected, with the distortion reduced by almost 72.5%. This method can serve as an effective reference for the correction of point cloud distortion.

1. Introduction

In active optical imaging, a target is illuminated and the backscattered light is detected to extract depth and spatial information for 3D image formation. Light detection and ranging (LiDAR) systems are among the most important sensors in 3D optical imaging [1,2] and have shown enormous potential in applications including target recognition, ground surface estimation, robotics, and autonomous driving [3,4,5,6,7,8]. To obtain high-quality target information, appropriate scanning components must be selected to improve the imaging performance of LiDAR, e.g., a galvanometer scanner [9] or Risley prisms [10]. With the rapid development of laser technology, photoelectric control technology, and image processing technology, 3D imaging LiDAR is trending toward miniaturization, high resolution, high speed, and low cost. Traditional mechanical scanners (such as the VLP-16 [11], LMS511, and Lux-4L/8L) are the earliest developed and most mature technology and have been applied to autonomous driving concept vehicles, such as those of Google and Baidu, as well as to industrial applications [12,13,14]. Their core technique is to mount a linear laser array and a linear detector array on a speed-regulated motor that rotates at high speed in one dimension to achieve panoramic imaging. However, mechanical scanners are generally large and bulky, consume considerable power (from 6 W in the VLP-16 to 22 W in the Alpha Prime), and are typically mounted on the roof of the car, so their accuracy is unstable. In solid-state laser 3D image sensors, the core technique is to steer laser pulses in arbitrary directions electronically using waveguide or liquid-crystal optical phased array devices; however, because optical phased array technology is essentially based on the diffraction effect, optical transmittance is the bottleneck of its further development. The core technique of hybrid solid-state laser 3D image sensors is to replace the traditional one-dimensional mechanical rotation device with a micro-electro-mechanical system (MEMS) mirror placed inside the panoramic optical system. MEMS mirrors have been widely adopted in LiDAR systems due to their small size, fast scanning speed, low power consumption, low price, and strong anti-interference capability, and MEMS scanning LiDAR has become the main application direction of today's LiDAR imaging systems [15,16]. However, in the actual detection process, the acquired point cloud exhibits obvious distortion, which seriously degrades the application performance of MEMS LiDAR, reduces the coverage of the detected target, and causes the loss of target information. Hence, to address this point cloud distortion, we first analyze its causes and then propose and experimentally demonstrate a de-distortion method based on an improved coordinate correction.

2. Methods

2.1. Principle

Figure 1 shows a schematic diagram of a MEMS LiDAR system. The pulsed laser beam generated by the laser is collimated by a collimator and then split by a beam splitter. One beam is deflected by a plane mirror onto the MEMS scanning mirror, which illuminates the target. The MEMS mirror is a two-dimensional, electrostatically driven galvanometer that can deflect in the x and y directions simultaneously; it steers the pulsed light into space to actively illuminate the object, thereby scanning the target. The other beam is transmitted directly to a photodetector (PD) in the processing module as a reference signal for the pulsed laser emission. The light backscattered from the target surface is collected by the non-imaging objective lens of the receiving system and focused onto the photosensitive surface of an avalanche photodiode (APD). The detector converts the optical signal into an electrical signal, and the subsequent processing module extracts the laser return signal. Combining this with the reference signal recorded by the PD, the time of flight (TOF) of the pulsed laser can be calculated, yielding the z coordinate, i.e., the distance between the MEMS LiDAR system and the target. Fusing this with the two-dimensional spatial coordinates (x, y) generated by the scanning trajectory of the MEMS mirror gives the three-dimensional coordinates (x, y, z) of the object, which are transmitted to a PC and displayed as 3D point cloud images in real time.
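As a minimal illustration of this TOF step, the sketch below (our own; the function name and example timestamps are illustrative, not taken from the paper) converts the interval between the PD reference signal and the APD return into a range:

```python
# Sketch of the TOF-to-range conversion described above. The function
# name and example timestamps are illustrative assumptions.
C = 299_792_458.0  # speed of light in free space, m/s

def tof_to_range(t_emit_s: float, t_return_s: float) -> float:
    """Range from the round-trip time of a laser pulse: r = c * t / 2."""
    return (t_return_s - t_emit_s) * C / 2.0

# A ~33.4 ns round trip corresponds to a target about 5 m away.
print(tof_to_range(0.0, 33.4e-9))  # ~5.0 m
```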
There are two main scanning methods for MEMS mirrors: point-to-point scanning and continuous scanning [17,18,19]. The frequency of point-to-point scanning is relatively low: the MEMS mirror must start, accelerate, and decelerate for every scanned point, so high-speed scanning cannot be performed. The frequency of continuous scanning is considerably higher. In this paper, the continuous scanning method is selected, and the simulated scanning trajectory of the MEMS mirror is shown in Figure 2, where x and y are normalized coordinates ranging from −1 to 1. Each point in the normalized coordinates is assigned a corresponding z coordinate, forming the three-dimensional coordinates of the target.
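As a concrete illustration, the sketch below (ours; the serpentine sweep is an assumption about the continuous drive, while the 64 × 256 grid matches the experimental parameters of Section 3) generates such a raster trajectory in normalized coordinates:

```python
import numpy as np

# Illustrative continuous raster trajectory in normalized coordinates.
# 64 lines x 256 points per line, as in the experiment of Section 3;
# the bidirectional (serpentine) sweep is an assumption, chosen so the
# mirror never stops between lines, unlike point-to-point scanning.
LINES, PTS = 64, 256

# Slow axis: y advances one step per line from -1 to 1.
y = np.repeat(np.linspace(-1.0, 1.0, LINES), PTS)

# Fast axis: x sweeps -1..1 within each line, reversing on alternate lines.
sweep = np.linspace(-1.0, 1.0, PTS)
x = np.concatenate([sweep if i % 2 == 0 else sweep[::-1] for i in range(LINES)])

# Each (x, y) sample is later fused with a TOF-derived z to give (x, y, z).
assert x.shape == y.shape == (LINES * PTS,)
```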

2.2. Analysis of the Causes of Distortion and Correction Methods

The MEMS mirror performs two-dimensional scanning of the target in the x and y directions, the receiving optical system detects the echo signal of each point in the scanning field of view (FOV), and the time-measurement system computes the TOF. The distance between the target and the receiver is obtained from r = c × t/2, where r is the range to the target, c is the velocity of light in free space, and t is the time taken by the energy pulse to travel from the emitter to the observed object and back to the receiver. However, the distance r corresponding to the measured TOF is not the true z coordinate of the point on the target surface but the radial distance from that point to the receiving objective lens. As shown in Figure 3, the radial distance between the target and the receiving objective lens is ri; when a plane is scanned, ri differs across positions on the plane. If ri is directly taken as the z coordinate of the object in the three-dimensional rectangular coordinate system, a flat target forms the distorted point cloud surface shown in Figure 3. Moreover, the degree of distortion depends on the angle of the scanning FOV: the larger the FOV, the more obvious the distortion.
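A short simulation (ours, with an assumed wall distance and FOV) reproduces this effect numerically: for a flat wall perpendicular to the optical axis, the measured radial range grows toward the edges of the FOV, so plotting r as the z coordinate bows the plane into the curved surface of Figure 3:

```python
import numpy as np

# Why a flat wall appears curved: for a plane at distance Z0 normal to
# the optical axis, the radial range is r = Z0 / (cos(tx) * cos(ty))
# (from Z = r*cos(tx)*cos(ty), cf. Equation (1)). Z0 and the FOV are
# assumed values for illustration.
Z0 = 5.0                                  # wall distance, m (assumed)
half_fov = np.deg2rad(30.0)               # half of a 60 deg FOV
tx = np.linspace(-half_fov, half_fov, 256)
ty = np.linspace(-half_fov, half_fov, 64)
TX, TY = np.meshgrid(tx, ty)

r = Z0 / (np.cos(TX) * np.cos(TY))        # what the TOF actually measures
print(r.min(), r.max())                   # 5.0 m on-axis, ~6.67 m at a corner

# Treating r directly as the z coordinate bows the plane outward by up to
# ~1.67 m here, and the error grows with the FOV, as stated above.
```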
During the experiment, we converted the scanning coordinates (x, y) of the MEMS mirror into normalized coordinates with values ranging from −1 to 1. There is therefore a scale factor between the real coordinates (X, Y) and the normalized coordinates (x, y), which is related to the distance to the target and the maximum deflection angle of the galvanometer. Assume that the normalized coordinates of a point i obtained by the MEMS scanner are (x0i, y0i), the coordinates after fusing the distance information are (x0i, y0i, ri), and the corresponding real coordinates in the LiDAR system are (Xi, Yi, Zi). The deflection angles of the MEMS galvanometer along the x-axis and y-axis are θx and θy, respectively, with maximum deflection angles θxmax and θymax, as shown in Figure 4.
The real coordinates (Xi, Yi, Zi) in the LiDAR system of the point i, given the target distance ri and the MEMS deflection angles θx and θy, satisfy:
$$X_i = r_i \sin\theta_x \cos\theta_y,\quad Y_i = r_i \cos\theta_x \sin\theta_y,\quad Z_i = r_i \cos\theta_x \cos\theta_y \tag{1}$$
Let n be the proportionality coefficient between the normalized coordinates and the real coordinates in the LiDAR system; setting x0i = 1 gives $\frac{1}{n} = z \tan\theta_{x\max}$. Therefore, any point (x0i, y0i) in the normalized coordinate system satisfies:
$$\frac{x_{0i}}{n} = z \tan\theta_x,\quad \frac{y_{0i}}{n} = z \tan\theta_y \tag{2}$$
From Equation (2), we solve for the coordinates x0i and y0i in terms of the deflection angles:
$$x_{0i} = \frac{\tan\theta_x}{\tan\theta_{x\max}},\quad y_{0i} = \frac{\tan\theta_y}{\tan\theta_{y\max}} \tag{3}$$
Thus, x0i and y0i represent the ratios of the horizontal and vertical scanning angles to the corresponding maximum scanning angles. From Equations (2) and (3), the following relations are obtained:
$$\cos\theta_x = \frac{1}{\sqrt{1 + x_{0i}^2\tan^2\theta_{x\max}}},\quad \sin\theta_x = \frac{x_{0i}\tan\theta_{x\max}}{\sqrt{1 + x_{0i}^2\tan^2\theta_{x\max}}}$$
$$\cos\theta_y = \frac{1}{\sqrt{1 + y_{0i}^2\tan^2\theta_{y\max}}},\quad \sin\theta_y = \frac{y_{0i}\tan\theta_{y\max}}{\sqrt{1 + y_{0i}^2\tan^2\theta_{y\max}}} \tag{4}$$
Since x0i and y0i lie between −1 and 1 in the normalized coordinate system, and tan θxmax and tan θymax are less than 1, we apply a Taylor expansion to $1/\sqrt{1 + x_{0i}^2 \tan^2\theta_{x\max}}$ and $1/\sqrt{1 + y_{0i}^2 \tan^2\theta_{y\max}}$ and omit terms above second order in $x_{0i}\tan\theta_{x\max}$ and $y_{0i}\tan\theta_{y\max}$. Equation (4) then becomes:
$$\cos\theta_x \approx 1 - \tfrac{1}{2}x_{0i}^2\tan^2\theta_{x\max},\quad \sin\theta_x \approx x_{0i}\tan\theta_{x\max}\left(1 - \tfrac{1}{2}x_{0i}^2\tan^2\theta_{x\max}\right)$$
$$\cos\theta_y \approx 1 - \tfrac{1}{2}y_{0i}^2\tan^2\theta_{y\max},\quad \sin\theta_y \approx y_{0i}\tan\theta_{y\max}\left(1 - \tfrac{1}{2}y_{0i}^2\tan^2\theta_{y\max}\right) \tag{5}$$
Substituting Equation (5) into Equation (1) gives:
$$X_i = r_i\left[1 - \tfrac{1}{2}\left(x_{0i}^2\tan^2\theta_{x\max} + y_{0i}^2\tan^2\theta_{y\max}\right)\right]x_{0i}\tan\theta_{x\max}$$
$$Y_i = r_i\left[1 - \tfrac{1}{2}\left(x_{0i}^2\tan^2\theta_{x\max} + y_{0i}^2\tan^2\theta_{y\max}\right)\right]y_{0i}\tan\theta_{y\max}$$
$$Z_i = r_i\left[1 - \tfrac{1}{2}\left(x_{0i}^2\tan^2\theta_{x\max} + y_{0i}^2\tan^2\theta_{y\max}\right)\right] \tag{6}$$
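As a quick sanity check (ours, with an assumed 30° maximum deflection angle), the second-order expansion in Equation (6) can be compared numerically against the exact projection implied by Equations (1) and (4):

```python
import numpy as np

# Compare the exact Z/r from Equations (1) and (4) with the second-order
# approximation of Equation (6) at the corner of the normalized range.
# The 30 deg maximum deflection angle is an assumed example value.
t = np.tan(np.deg2rad(30.0))              # tan(theta_max) < 1, as required
x0 = y0 = 1.0                             # worst case: corner of the scan

exact = 1.0 / np.sqrt((1 + x0**2 * t**2) * (1 + y0**2 * t**2))
approx = 1.0 - 0.5 * (x0**2 * t**2 + y0**2 * t**2)
print(exact, approx)                      # ~0.750 vs ~0.667 at the corner
```

The approximation is coarsest at the extreme corner of the scan and improves rapidly toward the center of the FOV, where x0i and y0i are small.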
Let
$$M = 1 - \tfrac{1}{2}\left(x_{0i}^2\tan^2\theta_{x\max} + y_{0i}^2\tan^2\theta_{y\max}\right)$$
Then Equation (6) can be expressed in matrix form as:
$$\begin{pmatrix} X_i \\ Y_i \\ Z_i \end{pmatrix} = \begin{pmatrix} x_{0i} M_{Xi} \\ y_{0i} M_{Yi} \\ M_{Zi} \end{pmatrix} r_i \tag{7}$$
where
$$M_{Xi} = M\tan\theta_{x\max},\quad M_{Yi} = M\tan\theta_{y\max},\quad M_{Zi} = M \tag{8}$$
Here, MXi, MYi, and MZi are the conversion factors that map the normalized coordinates (x0i, y0i, ri) to the Cartesian coordinates (Xi, Yi, Zi) of the LiDAR system.
Theoretically, the three-dimensional coordinates (x, y, z) of each point depend on the radial distance ri between the target and the receiver. In Equation (7), since the two-dimensional scan coordinates (x0i, y0i) of each point are preset and the maximum scan angle is known, the conversion factors for every point in the scan can be determined, and the actual coordinates of each point in the FOV satisfy Equation (7); the coefficients multiplying ri on the right-hand side are all known. In practice, after the maximum scan angle is fixed, we compute the conversion factor M, and hence MXi, MYi, and MZi, for each normalized scan position from the coordinate sequence (x0i, y0i) of the MEMS scan trajectory; once the time-measurement system provides the distance ri, each point is converted to the required (Xi, Yi, Zi) according to Equations (7) and (8).
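In code, this data-processing step might look like the following sketch (the function name and vectorized interface are our own; the paper specifies only the formulas):

```python
import numpy as np

def correct_point_cloud(x0, y0, r, theta_x_max, theta_y_max):
    """Coordinate correction per Equations (6)-(8).

    x0, y0 -- normalized MEMS scan coordinates in [-1, 1] (numpy arrays)
    r      -- radial TOF ranges, same shape as x0 and y0
    Returns the corrected Cartesian coordinates (X, Y, Z).
    """
    tx, ty = np.tan(theta_x_max), np.tan(theta_y_max)
    # M depends only on the preset scan grid, so it can be precomputed
    # once per trajectory and reused for every frame, as noted above.
    M = 1.0 - 0.5 * (x0**2 * tx**2 + y0**2 * ty**2)
    return r * x0 * M * tx, r * y0 * M * ty, r * M   # M_Xi, M_Yi, M_Zi

# Example: a point at the x edge of the FOV, 5 m away, 60 x 60 deg FOV.
X, Y, Z = correct_point_cloud(np.array([1.0]), np.array([0.0]),
                              np.array([5.0]), np.deg2rad(30), np.deg2rad(30))
print(X, Y, Z)  # Z ~ 4.17 m: the radial range is folded back onto the axis
```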

3. Experiment

To verify the above theory, we conducted the following experiments. The specifications of the MEMS LiDAR system are summarized in Table 1, and the correction method was applied to the system's 3D imaging experiments. The experimental parameters are listed in Table 2. The scanning parameters of the MEMS mirror are as follows: the sampling rate is 50 kHz; each frame scans 64 lines of 256 points each, so a frame contains 64 × 256 = 16,384 points; the frame rate is therefore about 3 Hz (50,000 samples per second ÷ 16,384 points per frame ≈ 3.05 frames per second); and the scanning trajectory is a raster scan.
For each frame, the two-dimensional coordinates (x, y) of the MEMS scanner are combined with the z coordinates converted from the time measurements; the correction method described in this article is then applied to the data to reconstruct the three-dimensional image.
We first measured a simple whiteboard plane. The experimental results are shown in Figure 5. Figure 5a is the point cloud image of the whiteboard before correction, with obvious distortion. Figure 5b is the point cloud after correction with the proposed method. To quantify the effectiveness of the method, we analyze the edge contours of the whiteboard, as shown in Figure 6. The four panels of Figure 6 show the planar contour point clouds obtained by superimposing the boundary contours in the x and y directions with the z coordinates. We define the deflection angle of the boundary point of the plane contour centerline relative to the horizontal axis as the index of the degree of distortion of the point cloud image; that is, the degree of point cloud distortion is judged by the angles α and β in Figure 6. The specific data are given in Table 3.
As Table 3 shows, the distortion of the XZ plane profile is 15.255° before correction and 3.901° after correction, a reduction of 74.428%. The contour distortion of the YZ plane is likewise reduced by more than 70%. Overall, our method reduces the point cloud distortion by approximately 72.5%, demonstrating that it works well.
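For reference, the percentages in Table 3 follow directly from the quoted angles, as the short check below (ours) shows:

```python
# Reproducing the reduction percentages of Table 3 from the quoted angles.
before = {"XZ": 15.255, "YZ": 17.650}
after = {"XZ": 3.901, "YZ": 5.194}
reductions = {p: (1 - after[p] / before[p]) * 100 for p in before}
print(reductions)                      # {'XZ': 74.428..., 'YZ': 70.572...}
print(sum(reductions.values()) / 2)    # ~72.5, the overall figure quoted
```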
To demonstrate the robustness of the proposed method, we designed a further experiment, shown in Figure 7, in which the LiDAR system scans complex targets. The experimental results are shown in Figure 8. Figure 8a shows the imaging inversion obtained by combining the scanning coordinates (x, y) with the ranging coordinate z, i.e., the result before correction; the inverted point cloud image is obviously distorted. Figure 8c shows the corrected point cloud image. To present the results more clearly, singular points and large-scale outliers in the point cloud were removed, i.e., the laser 3D scanning point cloud data were de-noised directly [20]; the de-noised point clouds are shown in Figure 8b,d, respectively.
As Figure 8 shows, after applying the correction method the point cloud image exhibits almost no distortion, which further demonstrates the robustness of the proposed method. The method proposed in this paper is therefore well suited to future vision imaging requirements.

4. Conclusions

To address the distortion observed in the point clouds of MEMS micro-scanning imaging LiDAR, we analyzed the causes of the distortion, simulated the MEMS scanning trajectory, and proposed a novel coordinate correction method to correct the distorted point clouds. Applying the method to our experimental equipment, the point cloud distortion of both simple and complex scanned targets was well corrected. In the experiments, the average distortion of the point cloud was approximately 16.45° before correction and only about 4.5° after correction, so the degree of distortion was reduced by nearly 72.5%. This fully demonstrates the de-distortion capability of our method.

Author Contributions

Conceptualization: D.G.; data curation: D.G., B.Q., Y.Z., and Q.L.; formal analysis: D.G., B.Q., Y.Z., and Q.L.; funding acquisition: C.W.; investigation: D.G.; methodology: D.G., B.Q., Y.Z., and Q.L.; resources: C.W.; software: D.G.; supervision: C.W.; validation: D.G., B.Q., Y.Z., and Q.L.; visualization: D.G.; writing—original draft: D.G. and Y.Z.; writing—review and editing: D.G., B.Q., Y.Z., and Q.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Shenzhen Fundamental Research Program (Grant No. JCYJ2020109150808037), the National Key Scientific Instrument and Equipment Development Projects of China (Grant No. 62027823), and the National Natural Science Foundation of China (Grant No. 61775048).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. McManamon, P.F. Review of ladar: A historic, yet emerging, sensor technology with rich phenomenology. Opt. Eng. 2012, 51, 060901.
  2. Cheng, Y.; Cao, J.; Zhang, F.; Hao, Q. Design and modeling of pulsed-laser three-dimensional imaging system inspired by compound and human hybrid eye. Sci. Rep. 2018, 8, 17164.
  3. Richmond, R.D.; Cain, S.C. Direct-Detection LADAR Systems; SPIE: Bellingham, WA, USA, 2010.
  4. Ma, X.; Wang, C.; Han, G.; Ma, Y.; Li, S.; Gong, W.; Chen, J. Regional Atmospheric Aerosol Pollution Detection Based on LiDAR Remote Sensing. Remote Sens. 2019, 11, 2339.
  5. Xing, Y.; Huang, J.; Gruen, A.; Qin, L. Assessing the Performance of ICESat-2/ATLAS Multi-Channel Photon Data for Estimating Ground Topography in Forested Terrain. Remote Sens. 2020, 12, 2084.
  6. Roback, V.; Bulyshev, A.; Amzajerdian, F.; Reisse, R. Helicopter flight test of 3D imaging flash LIDAR technology for safe, autonomous, and precise planetary landing. Proc. SPIE 2013, 8731, 1–20.
  7. Sun, B.; Edgar, M.P.; Bowman, R.; Vittert, L.E.; Welsh, S.; Bowman, A.; Padgett, M.J. 3D computational imaging with single-pixel detectors. Science 2013, 340, 844–847.
  8. Gong, W.; Zhao, C.; Yu, H.; Chen, M.; Xu, W.; Han, S. Three-dimensional ghost imaging lidar via sparsity constraint. Sci. Rep. 2016, 6, 26133.
  9. Montagu, J. Galvanometric and Resonant Scanners. In Handbook of Optical and Laser Scanning, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2016; pp. 418–473.
  10. Zhou, Y.; Lu, Y.; Hei, M.; Liu, G.; Fan, D. Motion control of the wedge prisms in Risley-prism-based beam steering system for precise target tracking. Appl. Opt. 2013, 52, 2849–2857.
  11. Glennie, C.L.; Kusari, A.; Facchin, A. Calibration and Stability Analysis of the VLP-16 Laser Scanner. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 9, 55–60.
  12. Miądlicki, K.; Saków, M. LiDAR Based System for Tracking Loader Crane Operator. In Proceedings of the International Scientific-Technical Conference MANUFACTURING, Poznan, Poland, 19–22 May 2019; Springer: Cham, Switzerland, 2019; pp. 406–421.
  13. Miądlicki, K.; Pajor, M.; Saków, M. Real-time ground filtration method for a loader crane environment monitoring system using sparse LIDAR data. In Proceedings of the 2017 IEEE International Conference on Innovations in Intelligent Systems and Applications (INISTA), Gdynia, Poland, 3–5 July 2017.
  14. Miądlicki, K.; Pajor, M.; Saków, M. Ground plane estimation from sparse LIDAR data for loader crane sensor fusion system. In Proceedings of the 2017 22nd International Conference on Methods and Models in Automation and Robotics (MMAR), Miedzyzdroje, Poland, 28–31 August 2017; pp. 717–722.
  15. Takashima, Y.; Hellman, B.; Rodriguez, J.; Chen, G.; Smith, B.; Gin, A.; Espinoza, A.; Winkler, P.; Perl, C.; Luo, C.; et al. MEMS-based imaging LIDAR. In Optics and Photonics for Energy and the Environment; Optical Society of America: Washington, DC, USA, 2018; p. ET4A.
  16. Tsuji, H.; Imaki, M.; Kotake, N.; Hirai, A.; Nakaji, M.; Kameyama, S. Range imaging pulsed laser sensor with two-dimensional scanning of transmitted beam and scanless receiver using high-aspect avalanche photodiode array for eye-safe wavelength. Opt. Eng. 2016, 56, 031216.
  17. Milanović, V.; Lo, W.K. Fast and high-precision 3D tracking and position measurement with MEMS micromirrors. In Proceedings of the 2008 IEEE/LEOS International Conference on Optical MEMs and Nanophotonics, Freiburg, Germany, 11–14 August 2008; pp. 72–73.
  18. Milanović, V.; Siu, N.; Kasturi, A.; Radojičić, M.; Su, Y. MEMSEye for optical 3D position and orientation measurement. Proc. SPIE 2011, 7930, 79300U.
  19. Milanović, V.; Kasturi, A. Real-time 3D Tracking. Opt. Photon. 2013, 8, 55–59.
  20. Zheng, Y.; Li, G.; Wu, S.; Liu, Y.; Gao, Y. Guided point cloud denoising via sharp feature skeletons. Vis. Comput. 2017, 33, 857–867.
Figure 1. Schematic diagram of a micro-electro-mechanical system (MEMS) light detection and ranging (LiDAR) system.
Figure 2. Scanning trace simulation of MEMS mirror.
Figure 3. Analysis of the causes of point cloud distortion.
Figure 4. Schematic diagram of scanning angle of MEMS mirror.
Figure 5. Planar point cloud image obtained by experiment: (a) point cloud before correction; (b) point cloud after correction.
Figure 6. (a) XZ plane contour before correction; (b) YZ plane contour before correction; (c) XZ plane contour after correction; (d) YZ plane contour after correction; α and β are the deflection angles.
Figure 7. (a) Schematic diagram of the LiDAR system; (b) target in the experiment.
Figure 8. Distorted point cloud obtained in the experiment and the corrected point cloud: (a) imaging inversion obtained from the scanning coordinates (x, y) and the ranging coordinate z (before correction); (c) corrected point cloud image; (b,d) the corresponding point clouds after de-noising.
Table 1. Specifications of the MEMS LiDAR system.

Sensor | 1 laser; 60° (azimuth) × 60° (vertical) FOV; range: up to 50 m; 13 cm range accuracy; 100 kHz; 0.48 W; Class 1
Laser | 1064 nm wavelength; 4 ns pulse width
Table 2. Parameter values used in the experiments.

Parameters | Values
Transmitter |
Beam diameter | 3.0 mm
Pulse energy | 4.8 μJ
Angle of divergence | 0.51 mrad
Transmitting system optical aperture | 32 mm
Receiver |
Receiving system optical aperture | 50 mm
APD photosensitive surface size | 3.0 mm
APD response frequency band | 250 MHz
Spectral response range (minimum) | 600 to 1150 nm
ADC sampling rate | 5 GSa/s
ADC bandwidth | 1 GHz
ADC resolution | 14 bit
Table 3. Comparison of point cloud distortion before and after correction.

Plane | Before Correction (°) | After Correction (°) | Distortion Reduction
XZ plane | 15.255 | 3.901 | 74.428%
YZ plane | 17.650 | 5.194 | 70.572%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

