Article

Non-Scanning Three-Dimensional Imaging System with a Single-Pixel Detector: Simulation and Experimental Study

School of Mechanical Engineering, Hangzhou Dianzi University, Hangzhou 310018, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(9), 3100; https://doi.org/10.3390/app10093100
Submission received: 3 April 2020 / Revised: 24 April 2020 / Accepted: 27 April 2020 / Published: 29 April 2020
(This article belongs to the Special Issue Manufacturing Metrology)

Abstract

Existing scanning laser three-dimensional (3D) imaging technology suffers from slow measurement speed. In addition, the measurement accuracy of non-scanning laser 3D imaging technology based on area array detectors is limited by the resolution and response frequency of the detectors. As a result, applications of laser 3D imaging technology are limited. This paper presents simulations and experiments of a non-scanning 3D imaging system with a single-pixel detector. Through compressed sensing, the single-pixel detector can achieve 3D imaging of a target and overcome the shortcomings of existing laser 3D imaging technology. First, the effects of different sampling rates, sparse transform bases, measurement matrices, and reconstruction algorithms on the measurement results were compared through simulation experiments. Second, a non-scanning 3D imaging experimental platform was designed and constructed. Finally, an experiment was performed to compare the effects of different sampling rates and reconstruction algorithms on the 3D reconstruction, yielding a 3D image with a resolution of 8 × 8. The simulation and experimental results show that the Hadamard measurement matrix and the minimum total variation reconstruction algorithm perform well.

1. Introduction

At present, using three-dimensional (3D) imaging technology to obtain 3D information about the surrounding environment is important in many application fields, particularly in the fields of autonomous driving, 3D printing, machine vision, and virtual reality [1,2]. Traditional laser 3D imaging is divided into two main types: scanning and non-scanning. Scanning technology performs a point-by-point measurement of the target through a mechanical scanning device. The entire system occupies a large volume and has low measurement efficiency [3,4]. Non-scanning technology usually uses an area array detector to measure the reflected light at every point in the target area simultaneously. However, imaging using an area array detector has problems, such as a low signal-to-noise ratio, limited imaging resolution, and limited response frequency; these lead to low imaging resolution and accuracy [5,6,7].
Nyquist sampling theory requires that the sampling rate be more than twice the signal bandwidth to reconstruct the original signal without distortion from the discrete samples, but the resulting redundant information occupies the resources of the signal sampling system. In 2006, Donoho, Candès, and Tao proposed the theory of compressed sensing [8,9,10]. It was proven that, for sparse or sparsely expressible signals, the original signal can be reconstructed accurately with high probability from a small number of measurements. This resolves the contradiction between measurement resolution and measurement efficiency in traditional 3D imaging signal sampling and makes 3D imaging possible with a single-pixel detector. In 2008, Takhar et al. of Rice University developed a single-pixel camera using the theory of compressed sensing, applying compressed sensing to imaging systems for the first time [11]. In 2011, Howland of the University of Rochester used a photon-counting method with a pulsed laser at a wavelength of 780 nm to achieve 3D compressed sensing imaging [12], reaching a distance resolution of 30 cm. In 2014, Guo of Beijing Institute of Technology proposed a 3D compressed sensing imaging method based on the phase method ranging principle [13] and conducted a preliminary simulation verification of the entire system but did not study 3D imaging experiments further. In 2015, Sun of Beihang University used a slicing method and a pulsed laser to realize 3D compressed sensing imaging [14], accurately reconstructing a target scene with a resolution of 128 × 128 and achieving an accuracy of 3 mm within a distance of 5 m.
In this paper, a single-pixel non-scanning 3D imaging system was designed and investigated in terms of theory, simulations, and experiments. First, the effects of different sampling rates, sparse transform bases, measurement matrices, and reconstruction algorithms on the measurement results were compared through simulation experiments. Second, a non-scanning 3D imaging experimental platform based on a single-pixel detector was designed and constructed. Finally, an experiment was performed to compare the effects of different sampling rates and reconstruction algorithms on the 3D reconstruction, yielding a 3D image with a resolution of 8 × 8. The experimental results showed that the 3D imaging system works and that the Hadamard measurement matrix combined with the minimum total variation reconstruction algorithm performs well.

2. Three-Dimensional Imaging Theory

2.1. System Structure

In this study, the non-scanning 3D imaging system with a single-pixel detector is based on traditional compressed sensing two-dimensional imaging, with modulation added to the laser light intensity. Distance information about the target area is then obtained by phase detection to complete the 3D imaging. The system structure is shown in Figure 1.
The 3D imaging process of the system works as follows. The laser emits a continuous beam with sine wave intensity modulation. After collimation, the laser enters the micro-mirror area of a digital micro-mirror device (DMD). The DMD's micro-mirror array flips according to the elements of the first row of the measurement matrix, which are loaded in advance. The laser beam reflected by the DMD is projected onto the target through a beam expander. The laser reflected from the target surface is focused by the condensing lens onto the photo-sensitive element of the single-pixel detector, which converts it into an electrical signal. The signal is collected by the capture card, and a measured value is obtained after the phase detection process.
The remaining elements of the measurement matrix are entered in sequence, and the above operation is repeated to obtain a number of measured values that correspond to the required sampling rate. After performing the compressed sensing reconstruction processing on the obtained measurement values, the delay phases of the modulated laser at various points in the target area are obtained. Finally, according to the phase method ranging principle, the distance of each point in the target area is calculated, and the 3D information about the target can be obtained.

2.2. Measurement Principle

The two-dimensional signal of the target image, with a resolution of n × n, can be concatenated by columns or rows into a one-dimensional discrete signal x. There are $N = n^2$ elements in x, of which K are non-zero. After obtaining the linear measurement values $y \in \mathbb{R}^{M}$ under the measurement matrix $\Phi \in \mathbb{R}^{M \times N}$ ($M < N$), the compressed sensing sampling process can be described as follows [15]:
$$y = \Phi x. \tag{1}$$
Equation (1) is solved by a reconstruction algorithm: the one-dimensional signal x is reconstructed from y and then restored to the two-dimensional image.
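To make Equation (1) concrete, the following Python sketch builds a toy 0/1 measurement matrix, takes M < N measurements of a sparse test signal, and recovers it with a simple greedy solver (a basic orthogonal matching pursuit). The sizes, the random matrix, and the OMP routine are illustrative assumptions and are not the system's actual processing chain.

```python
import numpy as np

# Toy compressed-sensing example for y = Phi x (Eq. 1); all values are assumptions.
n = 8                          # image is n x n, so N = n^2
N, M = n * n, 32               # M < N measurements (sampling rate 0.5)

rng = np.random.default_rng(0)
x = np.zeros(N)
x[rng.choice(N, size=5, replace=False)] = 1.0        # K-sparse test signal (K = 5)

Phi = rng.integers(0, 2, size=(M, N)).astype(float)  # 0/1 measurement matrix
y = Phi @ x                                           # compressed measurements

def omp(Phi, y, k):
    """Basic orthogonal matching pursuit: recover a k-sparse signal from y = Phi x."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))  # most correlated column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, k=5)       # reshape to (n, n) to recover the 2D image
```

With a 0/1 random matrix at this sampling rate, exact recovery is not guaranteed; the sketch only mirrors the structure of the sampling and reconstruction steps.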
According to the above-mentioned compressed sensing imaging theory, M measurements must be performed by the system used in this study; that is, the DMD needs to be flipped M times. Each row element (1 or 0) of the pre-selected measurement matrix Φ (M × N) corresponds to the switching state of a micro-mirror at one pixel of the DMD in a single measurement. The resolution of the DMD micro-mirror array is n × n. Let the horizontal and vertical coordinates of a DMD pixel be i and j, respectively. In the kth of the M measurements, the state of the micro-mirror at (i, j) on the DMD is
$$\Phi_{ij}^{k} \in \{1, 0\}. \tag{2}$$
$\Phi_{ij}^{k} = 1$ means that the micro-mirror is on (+12° flip), and $\Phi_{ij}^{k} = 0$ means that the micro-mirror is off (−12° flip). The transmitted laser signal after sine wave modulation is
$$X_T^{k}(t) = M_T^{k} \cos(\omega t + \varphi_0), \tag{3}$$
where $M_T^{k}$ is the amplitude of the transmitted signal, $\omega$ is the modulation angular frequency of the transmitted signal, and $\varphi_0$ is the initial phase of the transmitted signal. The optical signal after DMD modulation can be expressed as
$$X_T^{k}(t) = \Phi_{ij}^{k} \cdot M_T^{k} \cos(\omega t + \varphi_0). \tag{4}$$
Because the distance from each point of the target surface to the transmitting end is different, the transmitted signal will have different phase delays at each point. After the emitted light is reflected by the target, the reflected signal at the corresponding pixel point (i, j) is
$$X_{Rij}^{k}(t) = \Phi_{ij}^{k} \cdot M_{Rij} \cos(\omega t + \varphi_0 + \Delta\varphi_{ij}), \tag{5}$$
where $M_{Rij}$ is the amplitude of the reflected signal at (i, j) due to the attenuation of light intensity, and $\Delta\varphi_{ij}$ is the delay phase of the reflected signal at (i, j) relative to the transmitted signal.
The total reflected light received at the kth time step can be expressed as the sum of the reflected light at each point:
$$X_R^{k}(t) = \sum_{i=1}^{n}\sum_{j=1}^{n} \Phi_{ij}^{k} \cdot X_{Rij}(t) = \cos(\omega t + \varphi_0)\cdot\sum_{i=1}^{n}\sum_{j=1}^{n}\left(\Phi_{ij}^{k} \cdot M_{Rij}\cos\Delta\varphi_{ij}\right) - \sin(\omega t + \varphi_0)\cdot\sum_{i=1}^{n}\sum_{j=1}^{n}\left(\Phi_{ij}^{k} \cdot M_{Rij}\sin\Delta\varphi_{ij}\right). \tag{6}$$
The total reflected light can be received and measured by a single-pixel photo-detector. The total reflected signal amplitude is $M_R^{k}$, and the delay phase relative to the transmitted signal is $\Delta\Phi^{k}$. An alternative expression of the total reflected light is
$$X_R^{k}(t) = M_R^{k}\cos(\omega t + \varphi_0 + \Delta\Phi^{k}) = M_R^{k}\left[\cos(\omega t + \varphi_0)\cos\Delta\Phi^{k} - \sin(\omega t + \varphi_0)\sin\Delta\Phi^{k}\right]. \tag{7}$$
Comparing Equations (6) and (7), the following can be obtained:
$$\begin{cases} \displaystyle\sum_{i=1}^{n}\sum_{j=1}^{n}\Phi_{ij}^{k}\cdot M_{Rij}\cos\Delta\varphi_{ij} = M_R^{k}\cos\Delta\Phi^{k} = a_k \\[2ex] \displaystyle\sum_{i=1}^{n}\sum_{j=1}^{n}\Phi_{ij}^{k}\cdot M_{Rij}\sin\Delta\varphi_{ij} = M_R^{k}\sin\Delta\Phi^{k} = b_k \end{cases} \tag{8}$$
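As an illustration of how $a_k$ and $b_k$ in Equation (8) could be extracted from the sampled detector signal of a single measurement, the following sketch demodulates the trace against in-phase and quadrature references (I/Q, or lock-in, detection). The amplitude, delay phase, and trace length are assumptions; the modulation and sampling frequencies follow the simulation settings of Section 3.1.

```python
import numpy as np

# Hypothetical I/Q (lock-in) phase detection for the k-th measurement.
# All numeric values (amplitude, delay phase, trace length) are assumptions.
f, fs = 5e6, 1e9                        # modulation and sampling frequency
phi0 = 0.0                              # known initial phase of the transmitted signal
t = np.arange(0, 200 / f, 1 / fs)       # an integer number of modulation periods

M_R, dPhi = 0.7, 0.9                    # unknown total amplitude and delay phase
x_R = M_R * np.cos(2 * np.pi * f * t + phi0 + dPhi)   # sampled detector output

ref = 2 * np.pi * f * t + phi0
a_k = 2 * np.mean(x_R * np.cos(ref))    # -> M_R * cos(dPhi)
b_k = -2 * np.mean(x_R * np.sin(ref))   # -> M_R * sin(dPhi)
# Check: np.hypot(a_k, b_k) ~ M_R and np.arctan2(b_k, a_k) ~ dPhi
```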
After M measurements, two sets of measurement values, A and B, are obtained from Equation (8):
$$\begin{cases} A = [a_1, a_2, a_3, \ldots, a_k, \ldots, a_M]^{T} \\ B = [b_1, b_2, b_3, \ldots, b_k, \ldots, b_M]^{T} \end{cases} \tag{9}$$
Equation (10) expresses the products of the signal amplitude and the sine and cosine of the delay phase at each pixel as two column vectors:
$$\begin{cases} x_1 = [M_{R11}\sin\Delta\varphi_{11}, M_{R12}\sin\Delta\varphi_{12}, \ldots, M_{R1n}\sin\Delta\varphi_{1n}, \ldots, M_{Rn1}\sin\Delta\varphi_{n1}, \ldots, M_{Rnn}\sin\Delta\varphi_{nn}]^{T} \\ x_2 = [M_{R11}\cos\Delta\varphi_{11}, M_{R12}\cos\Delta\varphi_{12}, \ldots, M_{R1n}\cos\Delta\varphi_{1n}, \ldots, M_{Rn1}\cos\Delta\varphi_{n1}, \ldots, M_{Rnn}\cos\Delta\varphi_{nn}]^{T} \end{cases} \tag{10}$$
According to Equations (8)–(10), and the measurement matrix Φ corresponding to M measurements, we can obtain
$$\begin{cases} A = \Phi x_1 \\ B = \Phi x_2 \end{cases} \tag{11}$$
The two equations in Equation (11) are solved by the compressed sensing reconstruction algorithm to reconstruct the vectors x1 and x2. Eliminating the unknown amplitude $M_{Rij}$ between x1 and x2 gives the delay phase $\Delta\varphi_{ij}$ at each point of the target. The distance of each point is then calculated according to the phase method ranging principle, completing the compressed sensing 3D imaging of the target.
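A minimal end-to-end sketch of Equations (9)–(11) and the phase method ranging step is given below. It simulates a flat square target, uses a random 0/1 measurement matrix, and substitutes a pseudo-inverse for the BP/OMP/TV solvers; the scene, sizes, and solver choice are assumptions for illustration only.

```python
import numpy as np

# Illustrative reconstruction pipeline; the pseudo-inverse is only a stand-in
# for a real compressed-sensing solver (BP, OMP, or TV).
rng = np.random.default_rng(0)
n, M = 8, 48                            # 8 x 8 scene, M measurements (rate 0.75)
N = n * n
c, f = 3e8, 5e6                         # speed of light, modulation frequency

# Assumed ground truth: a square target at 1 m in front of a 2 m background
depth = np.full((n, n), 2.0)
depth[2:6, 2:6] = 1.0
dphi_true = 4 * np.pi * f * depth.ravel() / c       # round-trip delay phase
M_R = np.full(N, 0.7)                               # assumed uniform reflectance

x1_true = M_R * np.sin(dphi_true)                   # Eq. (10)
x2_true = M_R * np.cos(dphi_true)

Phi = rng.integers(0, 2, size=(M, N)).astype(float) # 0/1 measurement matrix
A = Phi @ x1_true                                   # Eq. (11)
B = Phi @ x2_true

x1 = np.linalg.pinv(Phi) @ A                        # placeholder reconstruction
x2 = np.linalg.pinv(Phi) @ B

dphi = np.arctan2(x1, x2)                           # amplitude M_R cancels out
depth_rec = (c * dphi / (4 * np.pi * f)).reshape(n, n)   # phase-method ranging
```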

3. Simulation Experiments

3.1. Simulation Process

To study the effects of different sampling rates, sparse transform bases, measurement matrices, and reconstruction algorithms on 3D imaging reconstruction in compressed sensing, simulation experiments were performed in the MATLAB environment. Figure 2 shows the target used in the simulation experiments. In the figure, pixels with grayscale value 255 (white) are at distance d1 = 1 m, and pixels with grayscale value 0 (black) are at distance d2 = 6 m.
The laser modulation frequency was f = 5 × 10⁶ Hz, and the sampling frequency was fs = 1 × 10⁹ Hz. Comparative experiments were conducted using the following compressed sensing parameters:
  • Sampling rate: 0.2, 0.4, 0.6, and 0.8.
  • Sparse transformation basis: Discrete cosine transform (DCT) and fast Fourier transform (FFT).
  • Measurement matrix: Partial Hadamard matrix (HA) [16] and Bernoulli matrix (BE) [17].
  • Reconstruction algorithm: Basis pursuit (BP) [18], orthogonal matching pursuit (OMP) [19], and minimum total variation (TV) [10,20].
Because the DMD micro-mirrors can only represent 1 or 0, we selected the partial Hadamard matrix and the Bernoulli matrix as measurement matrices and replaced every −1 entry with 0 [21]. TV is an image restoration method based on the discrete gradient of an object; because the discrete gradient of most natural images is sparse, this algorithm does not need a separate sparse transformation. The simulations use the root mean square error (RMSE) of the reconstructed delay phase over all points as the metric to evaluate the reconstruction effect.
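The following sketch illustrates, under assumed sizes and a random row selection, how the two 0/1 measurement matrices and the phase RMSE metric could be generated.

```python
import numpy as np
from scipy.linalg import hadamard

# Illustrative construction of the 0/1 measurement matrices and the RMSE metric.
def partial_hadamard_01(M, N, rng):
    """Randomly pick M rows of an N x N Hadamard matrix and map -1 -> 0."""
    H = hadamard(N)                      # entries are +1/-1; N must be a power of 2
    rows = rng.choice(N, size=M, replace=False)
    return (H[rows] + 1) // 2

def bernoulli_01(M, N, rng):
    """Bernoulli matrix with i.i.d. 0/1 entries (after replacing -1 by 0)."""
    return rng.integers(0, 2, size=(M, N))

def phase_rmse(dphi_rec, dphi_true):
    """Root mean square error of the reconstructed delay phase over all points."""
    return np.sqrt(np.mean((np.asarray(dphi_rec) - np.asarray(dphi_true)) ** 2))

rng = np.random.default_rng(0)
Phi_HA = partial_hadamard_01(38, 64, rng)   # e.g. sampling rate ~0.6 for an 8 x 8 scene
Phi_BE = bernoulli_01(38, 64, rng)
```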

3.2. Simulation Experiment Conclusion

The simulation results are shown in Figure 3.
The results show that under the same sparse transform basis, measurement matrix, and reconstruction algorithm, increasing the sampling rate reduces the phase RMSE and improves the reconstruction effect. For the TV algorithm at the same sampling rate, the phase RMSE after reconstruction differs little between the two measurement matrices, indicating that both matrices give a comparable reconstruction effect for this algorithm. The reconstruction effect of the TV algorithm is better than that of the BP or OMP algorithm, and the reconstruction effect of the BP algorithm is better than that of the OMP algorithm. The reconstruction effect using the FFT sparse transform is clearly better than that using the DCT sparse transform. At the same sampling rate, the Hadamard matrix performs slightly better than the Bernoulli matrix.
A comparison of the three reconstruction algorithms is shown in Figure 4 using the Hadamard matrix and FFT sparse transform.
Figure 4 shows that the BP algorithm can reconstruct the target map more accurately at a higher sampling rate (0.8 or 0.6), but its phase RMSE is larger at a low sampling rate. The OMP algorithm cannot reconstruct the target image accurately. The phase RMSE of the TV algorithm is maintained at approximately 0.03, and the reconstruction effect is good.
The simulation results show that the reconstruction effect of the FFT sparse transform is obviously better than that of the DCT sparse transform; the reconstruction effect of the Hadamard measurement matrix is slightly better than that of the Bernoulli measurement matrix; and the TV reconstruction algorithm is slightly better than BP, with OMP producing the worst results.
Figure 5 shows the 3D image recovered by the TV reconstruction algorithm with the Hadamard matrix. When the sampling rate is 0.8 or 0.6, the 3D imaging effect is good.
To further verify that this scheme is also feasible for a complex target, we increased the imaging resolution and the complexity of the imaging target in the simulation experiments. Figure 6 shows a target with a resolution of 16 × 16, and Figure 7 shows the corresponding 3D image reconstructed with the TV algorithm. The complex target can also be successfully reconstructed at a sampling rate of 0.8.

4. Imaging Experiments

4.1. Experimental Process

According to the system structure described in Section 2.1, we constructed an experimental system platform, as shown in Figure 8. The experimental device was installed on a vibration isolation platform, and the position and relative distance of the lens and other experimental devices were manually adjusted.
In the experiment, four different sampling rates (0.2, 0.4, 0.6, and 0.8) were selected, and compressed sensing 3D imaging was performed at these sampling rates. The test target is shown in Figure 9.
The target to be measured was a square target with a set distance of 2 m and a modulation frequency of 5 × 10⁶ Hz. The measurement matrix was the Hadamard matrix, the sparse transform was the FFT, and the reconstruction algorithms were BP, OMP, and TV.

4.2. Experimental Results and Discussion

After multiple experiments and post-processing, 3D imaging phase RMSE comparison maps of the three reconstruction algorithms were obtained, as shown in Figure 10. Figure 11 shows the 3D image recovered by the three reconstruction algorithms.
Figure 10 shows that the phase RMSE of the three reconstruction algorithms decreases as the sampling rate increases, which indicates a better 3D imaging effect. The TV reconstruction algorithm has the best reconstruction effect, followed by BP, with OMP being the worst, which is consistent with the simulation results of Section 3.2. From the experimental 3D image in Figure 11, it can be observed that when the TV algorithm has a sampling rate of 0.8, the imaging effect of the square target can maintain the basic contour shape (dashed box). When the sampling rate is reduced to 0.6, the edge information of the 3D imaging is partially missing, and the basic contour shape is not obvious. When the sampling rate is 0.4 or 0.2, the noise is large, and the target shape contour can no longer be discerned at all. BP and OMP cannot display the basic shape of the target well for any sampling rate. The experimental results verify the effectiveness of the single-pixel detector non-scanning 3D imaging system, and they further verify that the TV reconstruction algorithm is far better than the BP and OMP algorithms.
In the experiment, the imaging results were blurry. Limited by the existing laboratory conditions, we set the DMD resolution to 8 × 8, which resulted in a low lateral imaging resolution. The weak laser reflection from the diffuse target and the limited laser modulation frequency resulted in low longitudinal accuracy. In future work, we will optimize the optical path to reduce laser power loss in the optical system and increase the laser modulation frequency to improve the longitudinal accuracy. A higher imaging resolution can then be obtained by setting the DMD to a higher resolution.

5. Conclusions

In this paper, a single-pixel non-scanning 3D imaging system was designed, which combines compressed sensing technology with phase method laser ranging technology. It can use a single-pixel detector to achieve 3D imaging of a target. Parametric simulation studies and actual imaging experiments prove that the Hadamard measurement matrix and the TV reconstruction algorithm produce better imaging results at the same sampling rate. In the future, we will attempt to increase the laser modulation frequency and imaging resolution to further improve the measurement accuracy and imaging resolution. This system will have even wider application prospects in the field of 3D imaging.

Author Contributions

G.S. conceived the method, designed the experiments, and revised the manuscript; L.Z. performed the simulations and experiments, processed the data and wrote the original manuscript; W.W. provided the experimental funds and reviewed the manuscript; K.L. provided assistance to the experiments and proofread the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under Grant No. 51505113, the Zhejiang Provincial Natural Science Foundation of China under Grant No. LZ16E050001, and the State Key Laboratory of Precision Measuring Technology and Instruments Project under Grant No. PIL1601.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Anthes, J.P.; Garcia, P.; Pierce, J.T.; Dressendorfer, P.V. Nonscanned ladar imaging and applications. Proc. SPIE. 1993, 1936, 11–22. [Google Scholar] [CrossRef]
  2. Wang, H.; Liu, Z. 3-D laser imaging technology and applications. Electron. Des. Eng. 2012, 12, 160–163, 168. [Google Scholar]
  3. Albota, M.A.; Heinrichs, R.M.; Kocher, D.G.; Fouche, D.G.; Player, B.E.; O’Brien, M.E.; Aull, B.F.; Zayhowski, J.J.; Mooney, J.; Willard, B.C.; et al. Three-dimensional imaging laser radar with a photon-counting avalanche photodiode array and microchip laser. Appl. Opt. 2002, 41, 7671–7678. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Li, L.; Hu, Y.; Zhao, N.; He, M. Application of Three-Dimensional Laser Imaging Technology. Laser Optoelectron. Prog. 2009, 46, 66–71. [Google Scholar] [CrossRef]
  5. Aull, B.F.; Schuette, D.R.; Young, D.J.; Craig, D.M.; Felton, B.J.; Warner, K. A study of crosstalk in a 256 × 256 photon counting imager based on silicon Geiger-mode avalanche photodiodes. IEEE Sens. J. 2015, 15, 2123–2132. [Google Scholar] [CrossRef]
  6. Aull, B.F. Silicon Geiger-mode avalanche photodiode arrays for photon-starved imaging. Proc. SPIE 2015, 9492, 94920. [Google Scholar] [CrossRef]
  7. Chen, N. Review of 3D laser imaging technology. Laser Infrared 2015, 10, 14–18. [Google Scholar]
  8. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar] [CrossRef]
  9. Candès, E.J.; Tao, T. Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies? IEEE Trans. Inf. Theory 2006, 52, 5406–5425. [Google Scholar] [CrossRef] [Green Version]
  10. Candes, E.; Romberg, J.; Tao, T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 2006, 52, 489–509. [Google Scholar] [CrossRef] [Green Version]
  11. Duarte, M.; Davenport, M.A.; Takhar, D.; Laska, J.N.; Sun, T.; Kelly, K.; Baraniuk, R.G. Single-pixel imaging via compressive sampling. IEEE Signal Process. Mag. 2008, 25, 83–91. [Google Scholar] [CrossRef] [Green Version]
  12. Howland, G.; Dixon, P.B.; Howell, J.C. Photon-counting compressive sensing laser radar for 3D imaging. Appl. Opt. 2011, 50, 5917–5920. [Google Scholar] [CrossRef] [Green Version]
  13. Guo, B. Three-Dimensional Compressed Sensing Imaging Using Phase-Shift Laser Range Finding. Master’s Thesis, Beijing Institute of Technology, Beijing, China, 2014. [Google Scholar]
  14. Sun, M.-J.; Edgar, M.P.; Gibson, G.M.; Sun, B.; Radwell, N.; Lamb, R.; Padgett, M.J. Single-pixel three-dimensional imaging with time-based depth resolution. Nat. Commun. 2016, 7, 12010. [Google Scholar] [CrossRef] [PubMed]
  15. Eldar, Y.C.; Kutyniok, G. Compressed Sensing: Theory and Applications; Cambridge University Press: Cambridge, UK, 2012. [Google Scholar]
  16. Hedayat, A.; Wallis, W.D. Hadamard Matrices and Their Applications. Ann. Stat. 1978, 6, 1184–1238. [Google Scholar] [CrossRef]
  17. Yu, L.; Barbot, J.P.; Zheng, G.; Sun, H. Compressive Sensing With Chaotic Sequence. IEEE Signal Process. Lett. 2010, 17, 731–734. [Google Scholar] [CrossRef] [Green Version]
  18. Chen, S.S.; Donoho, D.L.; Saunders, M.A. Atomic Decomposition by Basis Pursuit. SIAM Rev. 2001, 43, 129–159. [Google Scholar] [CrossRef] [Green Version]
  19. Foucart, S.; Rauhut, H. A Mathematical Introduction to Compressive Sensing; Springer: New York, NY, USA, 2013. [Google Scholar]
  20. Beck, A.; Teboulle, M. Fast Gradient-Based Algorithms for Constrained Total Variation Image Denoising and Deblurring Problems. IEEE Trans. Image Process. 2009, 18, 2419–2434. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  21. Bah, B.; Tanner, J. Improved Bounds on Restricted Isometry Constants for Gaussian Matrices. SIAM J. Matrix Anal. Appl. 2010, 31, 2882–2898. [Google Scholar] [CrossRef] [Green Version]
Figure 1. System structure.
Figure 2. An 8 × 8 target map.
Figure 3. Reconstruction results of three reconstruction algorithms, based on different measurement matrices and sparse transforms. (a) Basis pursuit (BP) algorithm; (b) orthogonal matching pursuit (OMP) algorithm; (c) total variation (TV) algorithm.
Figure 4. Comparison of the results of three reconstruction algorithms.
Figure 5. Three-dimensional image reconstructed by TV reconstruction algorithms when the sampling rate is 0.8, 0.6, 0.4, and 0.2.
Figure 6. A 16 × 16 target map.
Figure 7. Reconstructed three-dimensional image of a complex target when the sampling rates are 0.8, 0.6, 0.4, and 0.2.
Figure 8. Experimental system platform.
Figure 9. Experimental test target.
Figure 10. Comparison of 3D imaging phase root mean square error (RMSE) with three reconstruction algorithms.
Figure 11. Three-dimensional image reconstructed by three reconstruction algorithms when the sampling rates are 0.8, 0.6, 0.4, and 0.2. (a) BP algorithm; (b) OMP algorithm; (c) TV algorithm.
