Article

Reflective Tomography Lidar Image Reconstruction for Long Distance Non-Cooperative Target

1 School of Automation, Northwestern Polytechnical University, Xi’an 710072, China
2 School of Electronics and Information, Northwestern Polytechnical University, Xi’an 710072, China
3 State Key Laboratory of Pulsed Power Laser Technology, National University of Defense Technology, Hefei 230037, China
4 School of Physics and Optoelectronic Engineering, Xidian University, Xi’an 710071, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(14), 3310; https://doi.org/10.3390/rs14143310
Submission received: 26 May 2022 / Revised: 5 July 2022 / Accepted: 7 July 2022 / Published: 9 July 2022

Abstract: In long-distance space target detection, laser reflection tomography (LRT) has shown great potential and is attracting growing attention for further study and practical use. However, space targets are usually non-cooperative, so a complete 360° view of reflection projections normally cannot be obtained. This article therefore first introduces an improved LRT system design with more advanced laser equipment for long-distance non-cooperative detection, ensuring a high-quality lidar beam and high-quality projection data. It then focuses on a laser image reconstruction method that applies total variation (TV) minimization to the sparse algebraic reconstruction technique (ART) model, so that the laser image can be reconstructed from a sparse or incomplete view of projections. Finally, comparative experiments with the system are performed to validate the advantages of this method. In both near-field and far-field experiments, the effectiveness and superiority of the proposed method are verified on different types of projection data through comparison with typical methods.

1. Introduction

With the rapid development of aerospace science and technology, space target detection and identification have become a hot topic. Reflective tomography lidar overcomes the resolution limitation of conventional optical imaging with little influence from the external environment, especially for targets against a dark background such as space satellites and debris, and has proved to be a new lidar modality offering both long range and high resolution.
Developed from computed tomography (CT), the concept of laser reflective tomography (LRT) was first introduced in 1988 by Parker et al. [1], after which several reflection tomography lidar imaging experiments were carried out. In recent years, research on lidar imaging technology has entered a new stage [2]. Lidar acquires laser echo projection data at different angles and reconstructs the image of the target with image reconstruction algorithms. For non-cooperative space target detection, optical imaging systems struggle to capture detailed information about a long-distance target due to the limitation of the receiver aperture. However, the spatial resolution of lidar imaging based on reflection tomography is independent of the range and receiver aperture, depending only on the pulse width, bandwidth, and noise. As the lidar experimental systems improved, the theory of LRT was further developed, and many efforts have been made toward non-cooperative space target detection. Jin conducted reflection tomography lidar imaging by pulse detection to image a conical object in the laboratory environment and ran preliminary LRT imaging experiments with an incomplete view of the projection data [3,4,5]. Zhang determined the limiting conditions on the sampling interval and sampling angle for laser reflective tomography imaging of targets with typical shapes [6,7]. Hu applied LRT to planar target centroid detection for the first time and proposed a method for calculating the centroid distance using multi-angle echo data [8]. However, to image non-cooperative space targets accurately from incomplete angles, breakthroughs are needed in both the system design and the laser image reconstruction method, which is the focus of this article.
The Radon–Fourier transform [3,9] is the basic tomographic algorithm, but its accuracy is low and the reconstructed image shows obvious artifacts. The FBP algorithm [10,11,12,13] has therefore become the most commonly used algorithm for laser reflection tomography. However, in the space environment almost all space targets are non-cooperative, which significantly degrades the result of the FBP algorithm and makes it unsuitable under such circumstances. R. Gordon [14] first proposed ART, which reconstructs the image by solving a system of linear equations. Since the projection matrix is huge and cannot be solved directly, the reconstructed image is generally obtained by iterative approximation [15]. ART breaks through the viewing-angle limitation and can roughly reconstruct the image from sparse viewing angles. ART was later applied to LRT, where it can be solved iteratively in combination with prior knowledge [16]. To mitigate the complex and time-consuming iterations caused by the huge measurement matrix, related acceleration methods have also been utilized [17]. However, ART is complicated to implement, and its reconstructed image quality is limited and sensitive to noise. It is therefore necessary to improve the ART algorithm.
For non-cooperative targets in space, image reconstruction from a small number of observations is at the heart of the problem, which points to sparse recovery techniques. Many prior works in other domains have improved source reconstruction quality in this way. In medical CT image restoration, Liu et al. proposed a total variation-Stokes-projection onto convex sets (TVS-POCS) reconstruction method to preserve consistencies and eliminate patchy artifacts [18]. Kim et al. combined non-local total variation (NLTV) minimization with a reweighted L1-norm for compressed sensing CT reconstruction [19]. A comparable method can be found in electron tomography, where Goris et al. applied the TV reconstruction technique [20]. Zhang et al. applied a sparse recovery iterative minimization method in ship classification to estimate high-resolution range profiles of ships [21]. Rostami et al. resolved the limitation in sensor networks by using diffusive compressed sensing and sparse recovery under a heat-equation constraint [22]. Inspired by these excellent prior works, this article introduces the TV sparse reconstruction approach into a more advanced LRT imaging system for detecting long-distance non-cooperative targets in space, with better resolution maintenance and artifact elimination in lidar imaging. Furthermore, given the angle limitation in space non-cooperative target detection imposed by the orbits of the target and the transceiver, LRT imaging with the TV sparse approach is particularly strong in high-quality image reconstruction of space targets from a sparse or incomplete view of projections.
The rest of the article is organized as follows. Section 2 presents the LRT model and our experimental system. Section 3 introduces the framework of LRT for real data and then presents the TV sparse reconstruction approach with the ART model for laser image reconstruction, which improves the anti-noise ability and reduces the artifacts in images reconstructed from an incomplete view of projections. Section 4 reports experiments in both near-field and far-field environments that validate the effectiveness of the proposed method, and Section 5 discusses experiments on real data from more complicated environments. Finally, Section 6 concludes the article.

2. LRT Model and Experimental Design

2.1. Laser Reflection Tomography Model

LRT imaging detects multi-view one-dimensional echo signals of the target, i.e., projections from different angles. The 2-D contour image of the target is then reconstructed by image reconstruction techniques.
The LRT imaging model is shown in Figure 1. The $rov$ coordinate system is obtained by rotating the $xoy$ coordinate system by an angle $\theta$, which relates the $(x, y)$ coordinates to the $(r, v)$ coordinates. The emitted laser beam covers the entire target surface $g(x, y)$ along the detection direction, and the Radon transform yields the reflection projection $p_r(r, \theta)$ of the target at angle $\theta$ [8]:

$$p_r(r,\theta) = \int_{L_{r,\theta}} g(x,y)\,\mathrm{d}v = \int_{L_{r,\theta}} g(r\cos\theta - v\sin\theta,\; r\sin\theta + v\cos\theta)\,\mathrm{d}v \tag{1}$$

The integration path $L_{r,\theta}$ is perpendicular to the illumination direction, $r = x\cos\theta + y\sin\theta$ is the polar diameter, and $v$ is the coordinate along the integration path. In laser reflection tomography, the laser cannot penetrate the target, so the reflection coefficient is 0 outside the surface, i.e.,

$$g(x,y) = 0, \quad (x,y) \notin D \tag{2}$$

where $D$ is the set of points on the target surface.
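As a concrete illustration of the projection model in Equation (1), one projection $p_r(r,\theta)$ can be computed by rotating the reflectivity map by $-\theta$ and integrating along the beam direction $v$. The sketch below uses an arbitrary toy target and angle set (not the paper's experimental setup):

```python
import numpy as np
from scipy.ndimage import rotate

def radon_projection(g, theta_deg):
    """One reflection projection p_r(r, theta) of a 2-D reflectivity map:
    rotate the scene by -theta, then integrate along the beam direction v."""
    rotated = rotate(g, -theta_deg, reshape=False, order=1)
    return rotated.sum(axis=0)  # integrate along v for every polar diameter r

# Toy target: reflectivity is zero outside the surface point set D (Eq. (2))
g = np.zeros((64, 64))
g[24:40, 24:40] = 1.0
sinogram = np.stack([radon_projection(g, t) for t in range(0, 360, 10)])
print(sinogram.shape)  # (36, 64): 36 viewing angles x 64 range bins
```

Since rotation approximately preserves total reflectivity, each projection sums to roughly the same value, which is a quick sanity check on the model.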

2.2. Experimental System

The laser reflection tomography system is implemented at the preset test environment. Figure 2a shows the actual photograph of the laser transmitter system. As shown in Figure 2b, the scene of the laser reflection tomography experimental platform includes the laser transmitter, the turntable and wireless remote receiving module, and one target. In the experiment, the laser pulse width is 100 ps, the bandwidth is 10 GHz, and the laser beam completely covers the target. The target is placed on the turntable, which is 1 m above the ground.
While transmitting the laser, the turntable rotates at a uniform angular speed of 25° per second to ensure that the laser frequency and the received echo frequency match the uniform speed of the target.
The detailed schematic diagram of the LRT experimental system is shown in Figure 3. The LRT radar prototype is composed of four parts: the transmitting part, the receiving part, the data acquisition part, and the data processing part. The transmitting part consists of a microchip laser, two reflectors, one beam splitter, and one beam expander system, whereas the receiving part consists of a receiving telescope and a 7.5 GHz avalanche photodiode (APD) detection module, along with a 15 GHz bandwidth PIN photodiode detection module on the other side of the splitter. A laser pulse high-speed data acquisition module with a bandwidth of 4.25 GHz and a sampling rate of 50 GSPS is employed in the data acquisition part, and the data processing part, controlled by an industrial personal computer, completes real-time data processing [6].

3. LRT Image Reconstruction

3.1. LRT Data Processing Framework

Via the LRT experimental system mentioned in Section 2, we are able to conduct the LRT experiment, where we transmit the laser toward the intended target and obtain reflection projection data from laser echo. Then, it is necessary to process the echo data to reconstruct the image of the target. The flow chart of the image reconstruction by projection data is shown in Figure 4:
The first step is the registration of the projection data. In the outfield experimental environment, atmospheric turbulence and the jitter of the target and the platform lead to inconsistency in the rotation center position of the projection data collected over different angles. Image reconstruction based on misaligned projection data causes dislocation and distortion in the imaging results, which can seriously affect identification of the target contour. Phase retrieval and feature tracking techniques are commonly used for registration [23,24].
The second step is the projection data conversion. The difference between LRT and computed tomography (CT) imaging is that LRT is based on the reflection coefficient of the object surface, whereas CT imaging is based on the internal transmission coefficient [25]. Therefore, the reflection projection data need to be preprocessed and converted into transmission data, which greatly reduces the difficulty of image reconstruction. The relationship between them can be expressed as

$$p(r,\theta + 90^{\circ}) = p_r(r,\theta) + \left[p_r(r,\theta + 180^{\circ})\right]^{T} \tag{3}$$

where $\theta$ ($0^{\circ} \le \theta \le 179^{\circ}$) is the incident angle, $p_r(r,\theta)$ is the $0^{\circ}$–$179^{\circ}$ reflection projection data, $p_r(r,\theta + 180^{\circ})$ is the $180^{\circ}$–$359^{\circ}$ projection data, $p(r,\theta + 90^{\circ})$ denotes the transmission projection data, and $T$ denotes the symmetric transformation.
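On a sinogram stored as a (views × range bins) array, the conversion in Equation (3) amounts to combining each view with the symmetrically transformed opposite view. The sketch below is a minimal interpretation in which the symmetric transformation $T$ is taken as a mirror along the $r$ axis and the $90^{\circ}$ shift, which only relabels the view index, is omitted; the data are synthetic:

```python
import numpy as np

def reflection_to_transmission(p_refl):
    """Eq. (3) sketch: combine each reflection projection p_r(r, theta) with
    the symmetrically transformed opposite view p_r(r, theta + 180), yielding
    180 transmission-style projections."""
    front = p_refl[:180]          # views 0..179 degrees
    back = p_refl[180:]           # views 180..359 degrees
    return front + back[:, ::-1]  # T interpreted as a mirror along the r axis

# Synthetic sinogram: 360 views x 128 range bins (illustrative values only)
rng = np.random.default_rng(0)
p_refl = rng.random((360, 128))
p_trans = reflection_to_transmission(p_refl)
print(p_trans.shape)  # (180, 128)
```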
The last step is tomography, which obtains the cross-section information of an object by measuring its projections from different viewing angles; it is the most important step in the whole process. With the emergence and development of lidar, tomography algorithms were introduced, improving the accuracy and efficiency of laser imaging technology. At present, the commonly used classical algorithms include the iRadon transform [3], FBP [10,11,12,13], and ART [14]. However, the iRadon transform can only restore the image roughly, which is far from the ideal imaging result. The FBP algorithm requires complete projection data; otherwise there are serious artifacts and distortions, so the reconstructed image quality cannot be guaranteed. In practice, it is difficult to sample 360° projections of the target due to clutter interference, insufficient data, or missing data. On the other hand, although ART can reconstruct the image from incomplete projection data, it is sensitive to noise and complex to implement, so the effectiveness and timeliness of ART cannot be guaranteed at the same time.
Therefore, aiming at solving the problem of large artifacts, low anti-noise performance of the ART and incomplete projections limitations of the FBP algorithm, we utilize the gradient sparse characteristics of the laser reconstructed image, add the regularization term of the imaging model, and combine the ART with the TV sparse reconstruction approach for the LRT technique, which is discussed in detail in the remainder of this section.

3.2. Tomographic Imaging of Lidar Projection Data

3.2.1. ART for Lidar Image Reconstruction

ART reconstructs the image of the target by solving the linear equations in Equation (4). Since the projection measurement matrix is huge and cannot be solved directly, the reconstructed image is generally obtained by iterative approximation. The iterative method first discretizes the continuous image $u(x, y)$. The whole image is divided into $N = n \times n$ pixels of width $\delta$, and the gray value of each pixel is taken as constant. Thus, the image can be represented by an $N$-dimensional vector $u = [u_1, u_2, \ldots, u_N]$. Assuming the projection data obtained from $M$ rays are represented by a vector $p = [p_1, p_2, \ldots, p_M]$, image reconstruction is equivalent to solving for the gray value of each pixel from the received projection data, i.e., the essence of iterative reconstruction is to solve the linear equations:

$$\begin{cases} \omega_{11}u_1 + \omega_{12}u_2 + \cdots + \omega_{1N}u_N = p_1 \\ \omega_{21}u_1 + \omega_{22}u_2 + \cdots + \omega_{2N}u_N = p_2 \\ \qquad\vdots \\ \omega_{M1}u_1 + \omega_{M2}u_2 + \cdots + \omega_{MN}u_N = p_M \end{cases} \tag{4}$$

$$p_i = \sum_{j=1}^{N} p_{ij} = \sum_{j=1}^{N} \omega_{ij} u_j, \quad i = 1, 2, \ldots, M \tag{5}$$

where $\omega_{ij}$ is the weight factor, representing the weight of the $j$th pixel on the $i$th projection value, i.e., the length of the intersection of the ray with each pixel grid. Equation (5) can also be expressed in matrix form:

$$p = Au \tag{6}$$

In practice, it is very difficult to solve the equations directly, so the Kaczmarz relaxation method [15] is generally used. For the $i$th set of projection data, the iterative equation of ART is:

$$u_j^{k+1} = u_j^{k} + \lambda_1 \frac{p_i - \sum_{n=1}^{N} \omega_{in} u_n^{k}}{\sum_{n=1}^{N} \omega_{in}^{2}}\,\omega_{ij}, \quad j = 1, 2, \ldots, N \tag{7}$$

where $i$ is the projection index, $j$ is the pixel index, $k$ is the iteration number, and $\lambda_1$ denotes the relaxation factor. After traversing all grids, one iteration is completed.
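The Kaczmarz update in Equation (7) can be sketched compactly: each ray updates the whole image along its weight row. The system below is a tiny synthetic example, not lidar data:

```python
import numpy as np

def art_sweep(u, A, p, lam=1.0):
    """One ART (Kaczmarz) sweep over all M rays, Eq. (7):
    u <- u + lam * (p_i - A_i.u) / ||A_i||^2 * A_i for each row i."""
    for i in range(A.shape[0]):
        row = A[i]
        norm2 = row @ row
        if norm2 > 0:
            u += lam * (p[i] - row @ u) / norm2 * row
    return u

# Tiny consistent system: recover a 4-pixel "image" from 6 rays
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))
u_true = np.array([0.0, 1.0, 1.0, 0.0])
p = A @ u_true
u = np.zeros(4)
for _ in range(2000):
    u = art_sweep(u, A, p)
print(np.round(u, 3))
```

For a consistent system, repeated sweeps converge to the exact solution; the relaxation factor `lam` trades convergence speed against noise sensitivity.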

3.2.2. Sparse Reconstruction with ART Model

Based on traditional ART, and considering that most laser reflection tomography images are sparse, or have a sparse representation under certain transform bases, the sparse reconstruction method is proposed for image reconstruction [15]. Either the image $u$ itself is sparse, or its representation under some transform basis is sparse, i.e., there exists a matrix $\psi$ such that $u = \psi\alpha$, where $\alpha$ is sparse and $\psi$ is a sparse transformation matrix. In this way, the original ART model (4) is transformed into

$$p = Au = A\psi\alpha = B\alpha \tag{8}$$

where $B = A\psi$. Then, $\alpha$ is reconstructed from the observation $p$ and the matrix $B$, and the original image is recovered by $u = \psi\alpha$.

Hence, the image recovery problem based on compressed sensing is transformed into an $L_0$-norm minimization problem, and the $L_1$ norm is used to replace the $L_0$ norm, making it a convex optimization problem that is convenient to solve. The problem can be expressed as:

$$\min_{\alpha} \|\alpha\|_1, \quad \text{s.t. } p = B\alpha \tag{9}$$

where $\|\cdot\|_i$ ($i = 0, 1, 2$) denotes the $L_i$ norm. Common sparse model solvers include basis pursuit (BP) and orthogonal matching pursuit (OMP). The principle of the BP algorithm is to keep searching for an $\alpha$ with a smaller $L_1$ norm that explains the projection data $p$; by iterating until a sufficiently sparse signal (one with a sufficiently small $L_1$ norm) satisfies the constraint, the most appropriate solution of the equation is considered found. For the linear system $p = B\alpha$, if each column of the matrix is taken as a variable, OMP selects in each iteration the variable most correlated with the current residual. The selected variables form a subspace of $B$, and each residual is obtained as the difference between $p$ and the orthogonal projection of $p$ onto that subspace.

In other words, because $\alpha$ has many zero elements, $p$ belongs to a subspace of $\mathrm{Span}(B)$, the column space of $B$. This subspace can be represented as $\mathrm{Span}(B_{sub})$, where $B_{sub}$ is composed of some columns of $B$, and the entries of $\alpha$ corresponding to these columns are recorded as $\alpha_{sub}$. If $B_{sub}$ can be identified, the following optimization problem can be solved:

$$\arg\min_{\alpha_{sub}} \left\| B_{sub}\alpha_{sub} - p \right\|_2 \tag{10}$$

Linear regression then gives $\alpha_{sub} = (B_{sub}^{T} B_{sub})^{-1} B_{sub}^{T} p$, and $\alpha$ is obtained (its other elements are zero). The OMP algorithm is much more efficient than BP, but the appropriate sparsity fluctuates greatly across situations, resulting in low image quality. Therefore, this article adopts the idea of sparse reconstruction combined with the TV regularization approach to reconstruct the image.
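The greedy selection and least-squares re-fit of Equation (10) can be sketched as below. This is a generic OMP sketch on a random synthetic system with a known sparse $\alpha$, not the paper's measurement matrix:

```python
import numpy as np

def omp(B, p, sparsity):
    """Orthogonal matching pursuit: greedily pick the column of B most
    correlated with the residual, then re-fit on the support by least
    squares (Eq. (10))."""
    residual = p.copy()
    support = []
    alpha = np.zeros(B.shape[1])
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(B.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(B[:, support], p, rcond=None)
        residual = p - B[:, support] @ coef
    alpha[support] = coef
    return alpha

rng = np.random.default_rng(2)
B = rng.standard_normal((40, 60))
alpha_true = np.zeros(60)
alpha_true[[5, 17, 42]] = [1.5, -2.0, 0.7]
p = B @ alpha_true
alpha_hat = omp(B, p, sparsity=3)
print(np.flatnonzero(alpha_hat))
```

The drawback noted above is visible here: the `sparsity` parameter must be supplied, and an inappropriate value degrades the recovered solution.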

3.2.3. TV Sparse Reconstruction with ART Model

In this article, considering that the laser reconstructed image has gradient-sparse characteristics, an image reconstruction method based on the TV minimization approach is proposed. To reduce the reconstruction error, ART is used to obtain the initial image for the TV iteration. Compressed sensing theory points out that most images can be sparsely represented by appropriate transforms, such as the DCT, FFT, wavelet transform, and so on [15]. In this article, the total variation serves as the sparsifying transform, and a regularization term is added to the imaging model to achieve a sparse representation of the image. The total variation of an image is defined as the sum, over all pixels, of the magnitude of the discrete gradient in the two image dimensions:

$$\mathrm{TV}(u) = \sum_{i,j} \left| \nabla u_{i,j} \right| = \sum_{i,j} \sqrt{\left| u(x+1,y) - u(x,y) \right|^{2} + \left| u(x,y+1) - u(x,y) \right|^{2}} \tag{11}$$

For the ART model $p = Au$, the TV sparse regularization model based on TV minimization is established as follows:

$$\min_{u} \left\| Au - p \right\|_2^2 + \lambda_2 \mathrm{TV}(u) \tag{12}$$

where $\lambda_2$ is the regularization parameter. The image is updated with ART and then iterated with the gradient descent method to minimize the TV. The iterative equation is:

$$u_{i,j}^{k+1} = u_{i,j}^{k} - \alpha \frac{\partial\, \mathrm{TV}(u^{k})}{\partial u_{i,j}^{k}} \tag{13}$$

where $u_{i,j}^{k+1}$ is the updated image, $k$ is the iteration number, $\partial\,\mathrm{TV}/\partial u$ is the gradient of the TV term, and $\alpha$ is the gradient descent step size. The image is updated alternately by the ART algorithm and TV minimization; the steps of TV sparse reconstruction with the ART model are described in Algorithm 1.
Algorithm 1: Implementation of TV sparse reconstruction with ART model.
    Given p, A, N, maxiter, α
    Initialization: u^0 = 0
    for k = 1, 2, 3, ..., maxiter do
      ART updating:
        for j = 1, 2, 3, ..., N do
          u_j^k = u_j^{k−1} + λ · (p_i − Σ_{n=1}^{N} ω_{in} u_n^{k−1}) / (Σ_{n=1}^{N} ω_{in}^2) · ω_{ij}
          u_j^k = max(u_j^k, 0)
        end
      TV minimization:
        t^{k−1} = u^k
        v = ∇TV(t^{k−1}) / ‖∇TV(t^{k−1})‖
        t^k = t^{k−1} − α·v
        u^k = t^k
    end
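The alternating scheme of Algorithm 1 can be sketched as follows. This is a simplified sketch, not the paper's implementation: a full ART sweep plays the role of the inner pixel loop, the TV gradient uses forward differences with a small smoothing constant, and the phantom, matrix, and step sizes are arbitrary:

```python
import numpy as np

def tv_grad(u, eps=1e-8):
    """Gradient of the (smoothed) discrete TV term of Eq. (11)."""
    ux = np.diff(u, axis=0, append=u[-1:, :])   # forward difference in x
    uy = np.diff(u, axis=1, append=u[:, -1:])   # forward difference in y
    mag = np.sqrt(ux**2 + uy**2 + eps)
    px, py = ux / mag, uy / mag
    gx = px - np.roll(px, 1, axis=0); gx[0] = px[0]
    gy = py - np.roll(py, 1, axis=1); gy[:, 0] = py[:, 0]
    return -(gx + gy)                           # -div(grad u / |grad u|)

def art_tv(A, p, shape, maxiter=30, lam=0.5, alpha=0.05, tv_steps=5):
    """Algorithm 1 sketch: alternate one ART sweep (with non-negativity
    clipping) and a few normalized TV gradient-descent steps (Eq. (13))."""
    u = np.zeros(A.shape[1])
    for _ in range(maxiter):
        for i in range(A.shape[0]):             # ART updating, Eq. (7)
            row = A[i]
            n2 = row @ row
            if n2 > 0:
                u += lam * (p[i] - row @ u) / n2 * row
        u = np.maximum(u, 0.0)                  # enforce u >= 0
        img = u.reshape(shape)
        for _ in range(tv_steps):               # TV minimization
            g = tv_grad(img)
            img = img - alpha * g / (np.linalg.norm(g) + 1e-12)
        u = img.ravel()
    return u.reshape(shape)

# Toy demo: 8x8 block phantom observed through a random 40x64 matrix
rng = np.random.default_rng(3)
shape = (8, 8)
x_true = np.zeros(shape); x_true[2:6, 2:6] = 1.0
A = rng.standard_normal((40, 64))
p = A @ x_true.ravel()
rec = art_tv(A, p, shape)
```

The ART sweep enforces data consistency while the TV steps suppress noise and artifacts, which is the division of labor the text describes.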

4. Experiments

In this section, outfield experiments are carried out on the detection target shown in Figure 5 under both near-field and far-field conditions. The detection target, made of retroreflective material, is composed of three squares, 4 cm on each side, with an included angle of 135° between each pair. The laser transmitter is located around 30 m away from the detection target in the near field and 1 km away in the far field. In both cases, echo data are collected every 1° of target rotation, so 360 groups of echo data are collected as the turntable completes one turn.

4.1. Near-Field Experiment

Figure 6a shows the laser reflection projection data after registration in the near-field experiment. A small amount of negative data is caused by the negative noise of the detector, so the negative values in the echo are eliminated. The projection data are then converted according to Equation (3), as shown in Figure 6b.
For the projection data in the near-field experiment, the TV sparse imaging reconstruction method proposed in Section 3.2 is compared with FBP [10,11,12,13], ART [14], and sparse ART, using a complete view of projections and projections uniformly sampled at 5°, 10°, and 20° viewing intervals. The results are shown in Figure 7. In Figure 7a, the iRadon [3,9] results are shown just to illustrate the target contour according to the definition in Equation (1), as mentioned in Section 3.1. In Figure 7(a4), the sparse ART with the OMP algorithm mentioned in Section 3.2.2 uses the discrete Fourier transform as its sparse basis and thus cannot exploit the gradient-sparse features of the laser image, leading to obvious deviation between the reconstructed contour and the original target. In Figure 7(a2,a3,a5), the target contour is very clear for all three approaches. In Figure 7(a5), the resolution of the reconstructed image is significantly improved by the proposed algorithm compared to Figure 7(a2). Comparing Figure 7(a5) with Figure 7(a3), the artifacts present in the image reconstructed by traditional ART are mostly eliminated in the result of the proposed method. Therefore, in the complete-angle case, the proposed algorithm greatly improves the resolution of the reconstructed image and reduces the image artifacts.
Next, experiments are carried out on projections with viewing intervals of 5°, 10°, and 20°, respectively. In the near-field experiment, according to the Nyquist sampling theorem, the maximum projection angle sampling interval required to completely reconstruct the detected target's laser image within the 360° range is about 9°. In Figure 7b, the sampling rate still satisfies the Nyquist criterion, and the reconstruction result shows little difference from Figure 7a. When the viewing interval of the projection data is 10° or 20°, the sampling no longer satisfies the Nyquist criterion. As seen in Figure 7c,d, as the sampling angle interval increases, the artifacts in the images reconstructed by the iRadon and FBP algorithms increase significantly, seriously affecting identification of the target contour. However, in the three ART-related results in Figure 7(c3–c5) and Figure 7(d3–d5), the artifacts remain well suppressed. At the same time, the imaging resolution of the proposed algorithm is better than that of traditional ART and of sparse ART with the OMP algorithm. Comparing Figure 7(d5) with Figure 7(a5,b5,c5), even as the angular sampling becomes sparser, the image resolution stays generally consistent with that of the complete-angle case; the artifacts remain well suppressed and the image edges are smooth. This greatly reduces the requirements on the detection angles of non-cooperative targets and facilitates high-efficiency, high-precision laser image reconstruction of non-cooperative targets.
For quantitative assessment, the information entropy (IE), no-reference signal-to-noise ratio (NRSNR), and variance (Var) are introduced to evaluate the imaging results of the different approaches. IE is defined as follows [26,27]:

$$IE = -\sum_{i} p_i \ln p_i \tag{14}$$

where $p_i$ is the probability that a pixel value in the image equals $i$. According to Shannon's information theory, larger entropy indicates more information; that is, in LRT reconstruction, the image is clearer when the IE is larger.
The SNR is the ratio of signal to noise and reflects the influence of noise on the signal or image. Here, the NRSNR is adopted to measure the denoising and artifact elimination performance of the different reconstruction algorithms. The NRSNR is expressed as follows [28]:

$$NRSNR = 10 \log_{10} \frac{255 \times 255}{K}\ (\mathrm{dB}) \tag{15}$$

$$K = \frac{G}{M \times N} \tag{16}$$

where $K$ is the noise level of the image, $G = (g(x,y) > T_H)$ is the noise distribution, $g(x,y)$ is the pixel value of the image, $T_H$ is a set threshold, and $M$ and $N$ are the dimensions of the image. The impact of the noise is smaller when the NRSNR value is higher, which indicates better image quality. In this article, NRSNR is utilized particularly to evaluate the anti-noise performance of the different algorithms.
Var reflects the contrast ratio of the image [26] and is defined as:

$$Var = \sum_{y} \sum_{x} \left| g(x,y) - \mu \right| \tag{17}$$

where $g(x,y)$ is the pixel value of the image at coordinates $(x,y)$ and $\mu$ is the mean pixel value of the whole image. In other words, different objects are easier to distinguish when the Var is higher.
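The three metrics above are straightforward to compute from a gray-level image. The sketch below is a minimal interpretation: the histogram binning for IE and the thresholding rule for NRSNR (counting pixels above `th` as noise) are assumptions, not the paper's exact procedure:

```python
import numpy as np

def info_entropy(img):
    """IE = -sum p_i ln p_i over the gray-level histogram (Eq. (14))."""
    hist, _ = np.histogram(img, bins=256, range=(0.0, 1.0))
    prob = hist / hist.sum()
    prob = prob[prob > 0]
    return float(-np.sum(prob * np.log(prob)))

def nrsnr(img, th=0.8):
    """No-reference SNR, Eqs. (15)-(16); here G counts pixels above th."""
    m, n = img.shape
    g = np.count_nonzero(img > th)
    k = g / (m * n) if g else 1e-12
    return 10 * np.log10(255 * 255 / k)

def contrast_var(img):
    """Var = sum |g(x, y) - mu| (Eq. (17)): mean-absolute-deviation contrast."""
    return float(np.sum(np.abs(img - img.mean())))

img = np.clip(np.random.default_rng(4).random((32, 32)), 0, 1)
print(info_entropy(img), nrsnr(img), contrast_var(img))
```

A constant image gives zero IE and zero Var, which matches the interpretation that larger values mean more information and more contrast.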
Based on the metrics above, all the reconstructed results in Figure 7 are measured by IE, NRSNR, and Var, as illustrated in Table 1. As shown in Table 1, the IE value of the TV sparse reconstruction method is the highest and is maintained well at different sampling intervals, which indicates that the image reconstructed by the proposed method best reflects the features of the target. Meanwhile, the NRSNR and Var values of the proposed method are also better than those of the other algorithms at the different sampling intervals, further proving the effectiveness of the TV sparse reconstruction method in LRT.
We then test the anti-noise performance of the proposed algorithm and explore whether it still works with non-uniformly sampled projection data. After adding Gaussian noise to the projection data at the complete view angles and at 10° intervals, the images reconstructed by the iRadon and FBP algorithms are destroyed by the noise in Figure 8b, yet the proposed algorithm shows excellent denoising performance regardless of the angle in Figure 8(b5). It also performs better than traditional ART, not only in denoising ability but also in edge preservation. In addition, experiments with random sampling at 10° intervals are conducted, as shown in Figure 8c. The result of the proposed algorithm in Figure 8(c5) is superior to the other algorithms in artifact elimination, and the image is generally well reconstructed despite the random sampling. For further analysis, all the reconstructed results are evaluated by IE, NRSNR, and Var, as illustrated in Table 2. It can be concluded from Table 2 that whether the projection data are corrupted with noise or non-uniformly sampled, the IE and Var of the proposed method remain better than those of the other three. It can also be noted that the NRSNR of the image reconstructed by the proposed method is 2.2245 and 2.5897 for the complete views and 10° intervals, respectively, almost 30% higher than those of the FBP, ART, and OMP approaches, which highlights the anti-noise performance of the TV sparse reconstruction method with incomplete views of projection data.
Finally, the proposed algorithm is tested under 0–60°, 90°, 120°, and 150° views of projections. Among the results, the image can be roughly reconstructed under the 150° view of projections, as shown in Figure 9c, and Figure 9(c5) shows that the resolution achieved by the proposed algorithm is significantly better. When the viewing range is less than 120°, as in Figure 9a,b, it is hard to recognize the contour of the image. To assess the LRT reconstruction performance under a limited range of projections, the correlation coefficient (CC) [29] is employed to measure the similarity between the images reconstructed with the complete viewing angles and those with the incomplete viewing angles by the same algorithm. The CC is defined as follows:

$$CC = \frac{\mathrm{cov}(A,B)}{\sqrt{\mathrm{var}(A)\,\mathrm{var}(B)}} \tag{18}$$

where $A$ and $B$ are the two compared images, $\mathrm{cov}(A,B)$ is the covariance between $A$ and $B$, and $\mathrm{var}(A)$ is the variance of $A$. The CC value lies between 0 and 1; the closer it is to 1, the more similar the two images are. Therefore, the reconstructions from incompletely sampled views can be effectively evaluated by calculating the CC value between these images and the image from fully sampled views.
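The similarity measure in Equation (18) is a few lines of NumPy; the images below are synthetic stand-ins for a complete-view and a limited-view reconstruction:

```python
import numpy as np

def corr_coeff(a, b):
    """CC = cov(A, B) / sqrt(var(A) * var(B)), Eq. (18)."""
    a, b = a.ravel().astype(float), b.ravel().astype(float)
    cov = np.mean((a - a.mean()) * (b - b.mean()))
    return float(cov / np.sqrt(a.var() * b.var()))

# Compare a "complete-view" image with a degraded "limited-view" image
rng = np.random.default_rng(5)
full = rng.random((16, 16))
limited = full + 0.3 * rng.standard_normal((16, 16))
print(round(corr_coeff(full, full), 3))  # identical images give CC = 1.0
print(corr_coeff(full, limited) < 1.0)   # a degraded image scores lower
```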
From Table 3, it is obvious that the reconstructed images under the 0–150° view of projections in Figure 9d are the most similar to the images under the complete view of projections in Figure 7a, since Figure 9d covers more viewing angles than the 0–60°, 90°, and 120° projections. With the proposed method, the CC values between the LRT images under the limited range of projections and the images under the complete view of projections are all the highest, which validates the superiority of the method over the other algorithms in the case of incomplete viewing angles.

4.2. Far-Field Experiment

After improving the optical system in the laser transmitter and using a more advanced module, the far-field experiment is performed on the same detection target shown in Figure 5. After preprocessing, Figure 10 shows the converted projection data of the far-field experiment. The quality of the projection data is clearly improved: the echo data hardly fluctuate and their continuity is well maintained.
Similarly, for the projection data in the improved far-field experiment, the proposed imaging reconstruction algorithm is performed compared to iRadon, FBP, and ART with the complete view of projections and uniformly sampled projections in 5°, 10°, and 20° viewing intervals. The results are shown in Figure 11.
As shown in Figure 11, the reconstruction result is significantly optimized and the contour of the detection target is well embodied, which indicates that the new optical system greatly benefits the experiment. From the images in Figure 11a, the included angle between each pair of squares can be clearly seen, surpassing the imaging results of the near-field experiment. However, more artifacts appear in this experiment due to interference over the long distance.
The reconstruction effect of the different algorithms resembles that in the near-field experiment. The proposed TV sparse reconstruction with the ART algorithm performs better under both complete-view sampling and sparse sampling, which further proves the high precision of the algorithm. Table 4 analyzes the IE, NRSNR, and Var of the four approaches in the far-field experiment. The IE, NRSNR, and Var of the proposed method are still better than those of FBP, ART, and OMP. Taking the results at 20° intervals as an example, the IE, NRSNR, and Var of the proposed method are 9.6316, 4.7132, and 0.0503, respectively, improvements of about 160%, 50%, and 25% over the other approaches. As a result, the effectiveness of the proposed TV sparse reconstruction with the ART algorithm under sparse views is again verified by the far-field experiment.
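The IE metric can be read as the Shannon entropy of the image's gray-level histogram (our reading of [27]; the implementation below, including the bin count, is an illustrative assumption, not the authors' code):

```python
import numpy as np

def information_entropy(img: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy (bits) of the gray-level histogram of an image."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist[hist > 0] / hist.sum()   # drop empty bins; 0*log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

# A two-level image with equal areas carries exactly 1 bit per pixel.
img = np.array([[0.0, 1.0], [0.0, 1.0]])
print(information_entropy(img, bins=2))  # → 1.0
```

Under this reading, a richer gray-level distribution (more detail preserved) yields a higher IE, consistent with the proposed method scoring highest in Tables 1, 2, 4, and 5.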
Figure 12 shows the imaging results of the second far-field experiment, from which the same conclusions can be drawn as in the near-field experiments. In Figure 12a,b, the proposed method clearly reconstructs the contour of the target well and performs better in denoising than the FBP and ART algorithms. In Figure 12c, the reconstruction under random sampling is not ideal, but the proposed method still generally restores the contour of the target, which proves it effective under random sampling as well. As shown in Table 5, the proposed method remains the best among the four algorithms whether the projection data are noisy or non-uniformly sampled, again proving its reliability. Furthermore, the NRSNR of the image reconstructed by the proposed method is also the highest, showing that the TV sparse reconstruction method handles noise well.
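The non-uniform (random) sampling used in Figure 12c can be simulated by drawing a random subset of view indices; a sketch with hypothetical sizes chosen to match the 10°-interval view count:

```python
import numpy as np

rng = np.random.default_rng(42)
angles = np.arange(0, 360, 2)        # hypothetical 180 views, one per 2° of rotation
n_keep = angles.size // 5            # keep as many views as 10°-interval sampling
idx = np.sort(rng.choice(angles.size, n_keep, replace=False))
sparse_angles = angles[idx]          # 36 non-uniformly spaced viewing angles
print(sparse_angles.size)
```

Unlike uniform decimation, the angular gaps here vary from view to view, so reconstruction artifacts are spread irregularly, the harder case that the random-sampling rows of Table 5 quantify.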
Finally, the proposed algorithm is tested under 0–60°, 0–90°, 0–120°, and 0–150° views of projections with the new far-field data. When the viewing angle is less than 120°, as in Figure 13a,b, it is still hard to recognize the contour of the image. However, in this experiment, the image can be reconstructed well under the 0–150° view of projections, as shown in Figure 13d, which is superior to the near-field results. Further, Figure 13(d5) shows that the quality of the image reconstructed by the proposed algorithm under the 0–150° view of projections reaches the same level as under complete angles. Correspondingly, Table 6 analyzes the similarity between Figure 13a–d and Figure 11a in terms of CC, and the superiority of the LRT TV-ART method is obvious from this quantitative comparison.

5. Discussion

In Section 4, both near-field and far-field experiments are performed, and IE, NRSNR, Var, and CC are employed as quantitative metrics to validate the proposed method on real measured data. The proposed method shows superiority under a sparse or limited view of projections and better noise suppression than the iRadon, FBP, ART, and OMP approaches.
However, although the near-field and far-field experiments verify the effectiveness of the TV sparse reconstruction with the ART model, some limitations remain. Consider another far-field experiment in which atmospheric turbulence is relatively severe; in addition, platform jitter introduces extra noise into the measured data, and poor illumination makes it harder to obtain a high-quality image.
In this far-field experiment, the distance between the laser transmitter and the detection target is 1 km. The echo data are collected every 2° of target rotation, so 180 groups of echo data are collected. The detection target is a triangular prism whose cross section is an isosceles triangle with a short side of 11 cm and a long side of 15 cm. Figure 14a shows the far-field target, and Figure 14b shows the laser reflection projection data after registration. In Figure 14b, the echo data are significantly influenced: the noise destroys the continuity of the echo data and the echo fluctuates violently, in contrast to the echo data in the near-field environment. Similarly, the projection data in this far-field experiment are preprocessed to obtain the converted projection data over the complete viewing angles, as shown in Figure 14c.
Then, for the projection data in this far-field experiment, the proposed imaging reconstruction algorithm is compared to iRadon, FBP, and ART with the complete view of projections and with uniformly sampled projections at 5°, 10°, and 20° viewing intervals. The results are shown in Figure 15. In Figure 15a, for the projection data over the complete angle, all algorithms can generally reconstruct the target, and the reconstructed contour of the target in Figure 15(a4) is clearer than the others. However, compared with Figure 7a, the reconstructed image is disturbed and has more artifacts because of the severe noise. When the Nyquist sampling law is satisfied, Figure 15b shows that the reconstruction result of each algorithm is basically consistent with its result under complete projections. As the viewing interval increases in Figure 15c,d, the projection sampling no longer meets the Nyquist sampling law; under this condition, the artifacts in the reconstructions of the iRadon and FBP algorithms increase significantly, so these algorithms are not suitable for reconstruction from sparse-angle projection data. In contrast, the proposed TV sparse ART method clearly shows its advantages in artifact elimination and anti-noise performance, as displayed in Figure 15(c4,d4). Although this far-field experiment is affected by atmospheric turbulence and jitter of the target and platform, the proposed TV sparse ART method still maintains the contour resolution of the detected target under sparse-angle projection data owing to its good anti-noise performance and artifact elimination.
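The alternation at the core of the method, ART sweeps for data consistency followed by gradient descent on the TV norm, can be sketched as follows. This is our minimal illustration of the idea, not the authors' code: the toy system, step sizes, and periodic-boundary TV gradient are all assumptions.

```python
import numpy as np

def art_sweep(A, b, x, lam=0.5):
    """One Kaczmarz (ART) sweep: relax x toward each measurement hyperplane."""
    for a_i, b_i in zip(A, b):
        denom = a_i @ a_i
        if denom > 0:
            x = x + lam * (b_i - a_i @ x) / denom * a_i
    return x

def tv_gradient(u, eps=1e-8):
    """Gradient of the smoothed isotropic TV of a 2-D image (periodic boundaries)."""
    ux = np.roll(u, -1, axis=1) - u
    uy = np.roll(u, -1, axis=0) - u
    mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
    px, py = ux / mag, uy / mag
    return -((px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0)))

def tv_art(A, b, shape, n_iter=20, lam=0.5, alpha=0.02, tv_steps=3):
    """Alternate ART data-consistency sweeps with TV gradient-descent smoothing."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = art_sweep(A, b, x, lam)
        img = x.reshape(shape)
        for _ in range(tv_steps):
            img = img - alpha * tv_gradient(img)
        x = img.ravel()
    return x.reshape(shape)

# Toy 8x8 phantom; a random matrix stands in for the lidar projection operator.
rng = np.random.default_rng(1)
truth = np.zeros((8, 8)); truth[2:6, 2:6] = 1.0
A = rng.random((40, 64))        # under-determined: 40 "rays", 64 pixels
b = A @ truth.ravel()
rec = tv_art(A, b, (8, 8))
print(np.linalg.norm(A @ rec.ravel() - b) < 0.5 * np.linalg.norm(b))  # data fit check
```

In the authors' pipeline, A would be the discretized projection operator of the LRT geometry rather than a random stand-in, and the TV steps are what suppress the streak artifacts that plague FBP under sparse angles.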
To conclude, the TV sparse reconstruction method shows great potential in LRT of space targets: its better performance under a sparse or limited view of projections, together with stronger noise suppression and artifact elimination, allows it to surpass traditional approaches in reconstruction quality. However, although in most cases the image of the target can be generally restored, some improvements still need to be made, especially in more complex environments. Our future work will focus on the system design to promote its applicability and on improving the algorithm to balance noise suppression and reconstructed image quality.

6. Conclusions

In this article, the tomography model of laser reflection projection data is introduced, and the designed LRT experimental system is described. Then, aiming at high-resolution image reconstruction for long-distance non-cooperative target detection under a sparse or incomplete view of reflection projections, a laser image reconstruction approach, namely TV sparse reconstruction with the ART model, is proposed for LRT lidar imaging. Since the ART model can be cast as a sparse reconstruction model, this article uses ART to reconstruct the initial image and then applies the TV model with a gradient descent algorithm to reduce the total variation of the iterative image and improve its quality. Finally, near-field and far-field experiments are performed, and comparisons between different algorithms verify the effectiveness and universality of the proposed method. The experimental results demonstrate that LRT imaging using TV sparse reconstruction with the ART model largely maintains the resolution of the reconstructed image while improving artifact elimination and anti-noise performance, which is of major significance to long-distance non-cooperative space target detection. The proposed method also shows superiority under random sampling and limited viewing angles, although its performance on far-field targets can still be improved. The experiments also expose a weakness of the LRT system: the reconstruction results are badly degraded by interference such as wind or vibration. Therefore, in future work, both the LRT system and the algorithm will be improved to strengthen reliability and accuracy.

Author Contributions

R.G. and Z.J. (Zheyi Jiang) conceptualized the study and contributed to the article’s organization. Z.J. (Zheyi Jiang), Z.J. (Zhihan Jin) and Z.Z. contributed to the data processing of the research; X.Z., L.G. and Y.H. contributed to the data collection. R.G. and Z.J. (Zheyi Jiang) drafted the manuscript, which was revised by all authors. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Shanghai Aerospace Science and Technology Innovation Foundation (grant number SAST2020-028), the Research Plan Project of the National University of Defense Technology (grant number ZK18-01-02), and the National Natural Science Foundation of China (grant number 61871389).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
APD: Avalanche Photodiode
ART: Algebraic Reconstruction Technique
CC: Correlation Coefficient
CT: Computed Tomography
DCT: Discrete Cosine Transform
FBP: Filtered Back Projection
FFT: Fast Fourier Transform
IE: Information Entropy
IPC: Industrial Personal Computer
LRT: Laser Reflection Tomography
MC Laser: Microchip Laser
NLTV: Non-local Total Variation
NRSNR: No-reference Signal-noise Ratio
NPBS: Non-polarizing Beam Splitter
OMP: Orthogonal Matching Pursuit
PIN: Positive Intrinsic Negative
SMF: Single Mode Fiber
SNR: Signal-noise Ratio
TV: Total Variation
TVS-POCS: Total Variation-Stokes Projection onto Convex Sets
Var: Variance

References

1. Parker, J.K.; Craig, E.B.; Klick, D.I.; Knight, F.K.; Kulkarni, S.R.; Marino, R.M.; Senning, J.R.; Tussey, B.K. Reflective tomography: Images from range-resolved laser radar measurements. Appl. Opt. 1988, 27, 2642–2643.
2. Jin, X.; Zhang, P.; Liu, C. Techniques on long-range and high-resolution imaging lidar. Laser Optoelectron. Prog. 2013, 50, 32–43.
3. Jin, X.; Sun, J.; Yan, Y.; Zhou, Y.; Liu, L. Modified Radon-Fourier transform for reflective tomography laser radar imaging. In Proceedings of the SPIE International Symposium on Photoelectronic Detection and Imaging, Beijing, China, 23–26 April 2011; Volume 8192, pp. S1–S9.
4. Jin, X.; Sun, J.; Yan, Y.; Zhou, Y.; Liu, L. Imaging resolution analysis in limited-view Laser Radar reflective tomography. Opt. Commun. 2012, 285, 2575–2579.
5. Wang, J.-C.; Zhou, S.-W.; Shi, L.; Hu, Y.-H.; Wang, Y. Image quality analysis and improvement of Ladar reflective tomography for space object recognition. Opt. Commun. 2016, 359, 177–183.
6. Zhang, X.; Hu, Y.; Wang, Y.; Shen, S.; Fang, J.; Liu, Y.; Han, F. Determining the limiting conditions of sampling interval and sampling angle for laser reflective tomography imaging in sensing targets with typical shapes. Opt. Commun. 2022, 519, 128413.
7. Zhang, X.; Hu, Y.; Xu, S.; Han, F.; Wang, Y. Application of Image Fusion Algorithm Combined with Visual Saliency in Target Extraction of Reflective Tomography Lidar Image. Comput. Intell. Neurosci. 2022, 2022, 8247344.
8. Zhang, X.-Y.; Hu, Y.-H.; Shen, S.-Y.; Fang, J.-J.; Wang, Y.-C.; Liu, Y.-F.; Han, F. Kilometer-level laser reflective tomography experiment and debris barycenter estimation. Acta Phys. Sin. 2022, 71, 114205.
9. He, X. CT image reconstruction based on iradon function. Comput. Inf. Technol. 2020, 28, 21–23.
10. Knight, F.K.; Kulkarni, S.R.; Marine, R.M.; Parker, J.K. Tomographic techniques applied to laser radar reflective measurements. Linc. Lab. J. 1989, 2, 143–158.
11. Pan, X.; Sidky, E.Y.; Vannier, M. Why do commercial CT scanners still employ traditional, filtered back-projection for image reconstruction? Inverse Probl. 2009, 25, 1230009.
12. Li, X.; He, Y.; Hua, Q. Application of Computed Tomographic Image Reconstruction Algorithms Based on Filtered Back-Projection in Diagnosis of Bone Trauma Diseases. J. Med Imaging Health Inform. 2020, 10, 1219–1224.
13. Hu, Y.; Hou, A.; Ma, Q.; Zhao, N.; Xu, S.; Fang, J. Analytical Formula to Investigate the Modulation of Sloped Targets Using LiDAR Waveform. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5700512.
14. Gordon, R. A tutorial on ART. IEEE Trans. Nucl. Sci. 1974, 21, 78–93.
15. Lian, Q.; Hao, P. Image reconstruction for CT based on compressed sensing and ART. Opt. Tech. 2009, 35, 422–425.
16. Yang, B.; Hu, Y. Laser reflection tomography target reconstruction algorithm based on algebraic iteration. Infrared Laser Eng. 2019, 48, 0726002.1–0726002.7.
17. Petrovici, M.-A.; Damian, C.; Coltuc, D. Image reconstruction from incomplete measurements: Maximum Entropy versus L1 norm optimization. In Proceedings of the IEEE International Symposium on Signals, Circuits and Systems (ISSCS), Iasi, Romania, 13–14 July 2017; pp. 54–58.
18. Liu, Y.; Liang, Z.; Ma, J.; Lu, H.; Wang, K.; Zhang, H.; Moore, W. Total Variation-Stokes Strategy for Sparse-View X-ray CT Image Reconstruction. IEEE Trans. Med. Imaging 2014, 33, 749–763.
19. Kim, H.; Chen, J.; Wang, A.; Chuang, C.; Held, M.; Pouliot, J. Non-local total-variation (NLTV) minimization combined with reweighted L1-norm for compressed sensing CT reconstruction. Phys. Med. Biol. 2016, 61, 6878–6891.
20. Goris, B.; Broek, W.V.D.; Batenburg, K.; Mezerji, H.H.; Bals, S. Electron tomography based on a total variation minimization reconstruction technique. Ultramicroscopy 2011, 113, 120–130.
21. Zhang, K.; Shui, P.-L. Estimation of Complex High-Resolution Range Profiles of Ships by Sparse Recovery Iterative Minimization Method. IEEE Trans. Aerosp. Electron. Syst. 2022, 99, 15–21.
22. Rostami, M.; Cheung, N.-M.; Quek, T.Q.S. Compressed sensing of diffusion fields under heat equation constraint. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; pp. 4271–4274.
23. Zhao, N.; Hu, Y. Research of phase retrieval algorithm in laser reflective tomography imaging. Infrared Laser Eng. 2019, 48, 1005005.
24. Chen, J.; Sun, H. Research on projection alignment method in laser reflection tomography. Opt. Commun. 2019, 455, 124548.
25. Hu, Y.; Tang, J.; Yang, B. Full-waveform echo tomography radar target reconstruction modeling and simulation. Proc. SPIE 2018, 1084, 108460T.
26. Li, Y.; Chen, N.; Zhang, J. Fast and high sensitivity focusing evaluation function. Appl. Res. Comput. 2010, 27, 1534–1536.
27. Zhang, Y.; Li, C.; Shi, J. Evaluation of no-reference image sharpening results based on information entropy and the ratio of detail variance mean to background variance mean. Nat. Sci. J. Harbin Norm. Univ. 2019, 35, 36–40.
28. Fan, Y.; Sang, Y.; Shen, X. No-reference image SNR assessment under visual masking. J. Appl. Opt. 2012, 33, 711–716.
29. Jiang, Z.; Zhang, S.; Guo, R.; Gao, Y.; Zhi, Y. An adaptive subpixel coregistration for high resolution InSAR image data. In Proceedings of the IEEE International Geoscience and Remote Sensing (IGARSS), Brussels, Belgium, 11–16 July 2021; pp. 3368–3371.
Figure 1. LRT imaging model.
Figure 2. Experimental platform. (a) The laser transmitter and (b) the outfield experiment environment.
Figure 3. Schematic diagram of LRT radar prototype: R, reflector; NPBS, Non-polarizing beam splitter; APD, avalanche photodiode; Pin, positive intrinsic negative; SMF, single mode fiber; MC Laser, microchip laser.
Figure 4. Flowchart of image reconstruction by projection data.
Figure 5. Planar combination target with retroreflective material.
Figure 6. (a) Laser reflection projection data after registration; (b) converted projection data in complete angle.
Figure 7. Near-field experimental imaging results with uniformly sampled views of projections (Far left) iRadon; (Left) FBP; (Middle) ART; (Right) sparse ART with OMP; (Far right) TV sparse reconstruction with ART. (a) Complete view data (a1a5); (b) sampled data at 5° intervals (b1b5); (c) sampled data at 10° intervals (c1c5); (d) sampled data at 20° intervals (d1d5).
Figure 8. Second near-field experimental imaging results (Far left) iRadon; (Left) FBP; (Middle) ART; (Right) sparse ART with OMP; (Far right) TV sparse reconstruction with ART. (a) Complete view data with noise (a1a5); (b) sampled data at 10° intervals with noise (b1b5); (c) random sampled data at 10° intervals (c1c5).
Figure 9. Near-field experimental imaging results under limited view of projections (Far left) iRadon; (Left) FBP; (Middle) ART; (Right) sparse ART with OMP; (Far right) TV sparse reconstruction with ART. (a) 0–60° sampled data (a1a5); (b) 0–90° sampled data (b1b5); (c) 0–120° sampled data (c1c5); (d) 0–150° sampled data (d1d5).
Figure 10. Converted projection data in complete angle.
Figure 11. Far-field experimental imaging results with uniformly sampled views of projections (Far left) iRadon; (Left) FBP; (Middle) ART; (Right) sparse ART with OMP; (Far right) TV sparse reconstruction with ART. (a) Complete view data (a1a5); (b) sampled data at 5° intervals (b1b5); (c) sampled data at 10° intervals (c1c5); (d) sampled data at 20° intervals (d1d5).
Figure 12. Far-field experimental imaging results (Far left) iRadon; (Left) FBP; (Middle) ART; (Right) sparse ART with OMP; (Far right) TV sparse reconstruction with ART. (a) Complete view data with noise (a1a5); (b) sampled data at 10° intervals with noise (b1b5); (c) random sampled data at 10° intervals (c1c5).
Figure 13. Far-field experimental imaging results under a limited view of projections (Far left) iRadon; (Left) FBP; (Middle) ART; (Right) sparse ART with OMP; (Far right) TV sparse reconstruction with ART. (a) 0–60° sampled data (a1a5); (b) 0–90° sampled data (b1b5); (c) 0–120° sampled data (c1c5); (d) 0–150° sampled data (d1d5).
Figure 14. (a) Far-field target; (b) laser reflection projection data after registration; (c) converted projection data in complete angle.
Figure 15. Far field experimental imaging results with uniformly sampled view of projections (Far left) iRadon; (Left) FBP; (Right) ART; (Far right) TV sparse reconstruction with ART. (a) Complete view data (a1:a4); (b) sampled data at 5° intervals (b1:b4); (c) sampled data at 10° intervals (c1:c4); (d) sampled data at 20° intervals (d1:d4).
Table 1. Comparison of the metric values of the results by using different algorithms.

| View | Metric | FBP | ART | OMP | TV-ART |
|---|---|---|---|---|---|
| complete view | IE | 5.1485 | 6.2253 | 6.1133 | 9.6305 |
| complete view | NRSNR (dB) | 1.5081 | 2.0758 | 2.6317 | 3.2437 |
| complete view | Var | 0.0463 | 0.2478 | 0.1318 | 0.3428 |
| 5° intervals | IE | 5.2353 | 5.2453 | 6.197 | 9.6313 |
| 5° intervals | NRSNR (dB) | 1.5746 | 1.9728 | 2.3742 | 3.1592 |
| 5° intervals | Var | 0.0470 | 0.0423 | 0.0881 | 0.1572 |
| 10° intervals | IE | 5.3875 | 5.1411 | 6.0973 | 9.6319 |
| 10° intervals | NRSNR (dB) | 1.7853 | 1.8311 | 2.4937 | 3.4724 |
| 10° intervals | Var | 0.0481 | 0.109 | 0.0649 | 0.1776 |
| 20° intervals | IE | 5.4357 | 4.6286 | 5.7298 | 9.6313 |
| 20° intervals | NRSNR (dB) | 1.9615 | 1.574 | 2.623 | 3.2858 |
| 20° intervals | Var | 0.0461 | 0.0618 | 0.031 | 0.0912 |
Table 2. Comparison of anti-noise and random sampled results with the three metrics.

| View | Metric | FBP | ART | OMP | TV-ART |
|---|---|---|---|---|---|
| complete view with noise | IE | 5.3208 | 5.4121 | 6.1208 | 9.6315 |
| complete view with noise | NRSNR (dB) | 1.9989 | 1.9855 | 1.6487 | 2.2245 |
| complete view with noise | Var | 0.0298 | 0.0563 | 0.1359 | 0.2552 |
| 10° intervals with noise | IE | 5.4262 | 4.6257 | 6.1078 | 9.6305 |
| 10° intervals with noise | NRSNR (dB) | 2.0265 | 1.6557 | 1.3547 | 2.5897 |
| 10° intervals with noise | Var | 0.0137 | 0.0379 | 0.0641 | 0.1966 |
| 10° intervals random sample | IE | 5.2839 | 4.7442 | 6.5712 | 9.6314 |
| 10° intervals random sample | NRSNR (dB) | 2.0345 | 1.5427 | 1.3827 | 2.3098 |
| 10° intervals random sample | Var | 0.0418 | 0.0375 | 0.0651 | 0.0982 |
Table 3. CC values of different algorithms with incomplete views of projection.

| View | FBP | ART | OMP | TV-ART |
|---|---|---|---|---|
| 0–60° sampled | 0.4443 | 0.5061 | 0.4562 | 0.5380 |
| 0–90° sampled | 0.4934 | 0.4848 | 0.5178 | 0.5789 |
| 0–120° sampled | 0.6348 | 0.6538 | 0.6048 | 0.6919 |
| 0–150° sampled | 0.8480 | 0.8612 | 0.6916 | 0.9319 |
Table 4. IE, NRSNR, and Var values of different algorithms in the far-field experiment.

| View | Metric | FBP | ART | OMP | TV-ART |
|---|---|---|---|---|---|
| complete view | IE | 4.5046 | 2.2764 | 3.9911 | 9.6317 |
| complete view | NRSNR (dB) | 4.251 | 1.5794 | 3.3193 | 5.7955 |
| complete view | Var | 0.0781 | 0.0101 | 0.0563 | 0.1015 |
| 5° intervals | IE | 4.4616 | 1.9777 | 4.0875 | 9.6308 |
| 5° intervals | NRSNR (dB) | 3.734 | 1.4916 | 4.0301 | 4.6043 |
| 5° intervals | Var | 0.0933 | 0.0187 | 0.0227 | 0.098 |
| 10° intervals | IE | 4.6289 | 1.764 | 4.4602 | 9.6312 |
| 10° intervals | NRSNR (dB) | 3.3437 | 1.4779 | 3.9256 | 4.5962 |
| 10° intervals | Var | 0.0526 | 0.039 | 0.0545 | 0.0559 |
| 20° intervals | IE | 4.5152 | 1.7045 | 4.6932 | 9.6316 |
| 20° intervals | NRSNR (dB) | 4.1645 | 1.5371 | 3.8309 | 4.7132 |
| 20° intervals | Var | 0.0319 | 0.0446 | 0.0452 | 0.0503 |
Table 5. IE, NRSNR, and Var values of anti-noise and random sampled results.

| View | Metric | FBP | ART | OMP | TV-ART |
|---|---|---|---|---|---|
| complete view with noise | IE | 5.0696 | 2.1788 | 4.0617 | 9.6315 |
| complete view with noise | NRSNR (dB) | 6.978 | 2.1002 | 4.8256 | 11.8431 |
| complete view with noise | Var | 0.0084 | 0.0086 | 0.0164 | 0.01672 |
| 10° intervals with noise | IE | 5.154 | 1.9383 | 4.2768 | 9.6314 |
| 10° intervals with noise | NRSNR (dB) | 7.3181 | 1.974 | 5.2402 | 9.1245 |
| 10° intervals with noise | Var | 0.0022 | 0.0044 | 0.0553 | 0.0479 |
| 10° intervals random sample | IE | 4.373 | 1.8787 | 4.491 | 9.6313 |
| 10° intervals random sample | NRSNR (dB) | 4.9049 | 1.8302 | 2.4533 | 6.9627 |
| 10° intervals random sample | Var | 0.0219 | 0.005 | 0.0297 | 0.0411 |
Table 6. CC values of different algorithms with incomplete views of projection.

| View | FBP | ART | OMP | TV-ART |
|---|---|---|---|---|
| 0–60° sampled | 0.3978 | 0.3687 | 0.1975 | 0.4913 |
| 0–90° sampled | 0.471 | 0.4134 | 0.3701 | 0.4925 |
| 0–120° sampled | 0.4353 | 0.4269 | 0.4066 | 0.4727 |
| 0–150° sampled | 0.7586 | 0.7499 | 0.6805 | 0.7979 |

Guo, R.; Jiang, Z.; Jin, Z.; Zhang, Z.; Zhang, X.; Guo, L.; Hu, Y. Reflective Tomography Lidar Image Reconstruction for Long Distance Non-Cooperative Target. Remote Sens. 2022, 14, 3310. https://doi.org/10.3390/rs14143310
