Article

Physics-Based TOF Imaging Simulation for Space Targets Based on Improved Path Tracing

Space Optical Engineering Research Center, Harbin Institute of Technology, Harbin 150001, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(12), 2868; https://doi.org/10.3390/rs14122868
Submission received: 15 May 2022 / Revised: 11 June 2022 / Accepted: 13 June 2022 / Published: 15 June 2022
(This article belongs to the Special Issue SAR Images Processing and Analysis)

Abstract

For close-range space measurement applications based on time-of-flight (TOF) cameras, this paper proposes a physics-based amplitude modulated continuous wave (AMCW) TOF camera imaging simulation method for space targets built on improved path tracing, drawing on an analysis of the space background environment and the imaging characteristics of the TOF camera. Firstly, microfacet bidirectional reflection distribution function (BRDF) models of several typical space target surface materials are fitted to BRDF data measured in the TOF camera's response band, making the simulation physics-based. Secondly, an improved path tracing algorithm adapted to the TOF camera is developed by introducing a cosine component that characterizes the modulated light. Then, an imaging link simulation model considering the coupled effects of material BRDF, suppression of background illumination (SBI), the optical system, the detector, electronics, platform vibration, and noise is established, and simulated TOF camera images are obtained. Finally, ground tests are carried out; they show that the relative errors of the grey mean, grey variance, depth mean, and depth variance are 2.59%, 3.80%, 18.29%, and 14.58%, respectively, and the MSE, SSIM, and PSNR results of our method are also better than those of the reference method. The ground test results verify the correctness of the proposed simulation model, which can provide image data support for ground testing of TOF camera algorithms for space targets.

Graphical Abstract

1. Introduction

In recent years, time-of-flight (TOF) imaging technology has been widely used in ground robot positioning and navigation, pose estimation, 3D reconstruction, indoor gaming, and other fields due to its structural and performance advantages. Notably, researchers are extending TOF imaging to space tasks such as spatial pose estimation and relative navigation [1,2,3,4,5,6]. However, because of the particularity of the space environment, the imaging results of an actual TOF camera are difficult to obtain before the space mission scheme is formulated, the mission content is planned, and the related algorithms are designed, so the algorithms' capability cannot be evaluated in advance to ensure smooth implementation of the mission [7]. Imaging simulation can provide data input for back-end algorithm tests of space-based TOF cameras. Nevertheless, unlike visible-light cameras [8,9,10,11,12], infrared cameras [13,14,15], and radar [16,17], few imaging simulation methods exist for TOF cameras because the imaging principle is essentially different. It is therefore of great significance to develop a TOF camera imaging simulation method for space targets.
The research status of TOF camera imaging simulation is as follows. References [18,19] proposed a real-time TOF simulation framework for simple geometry based on standard graphics rasterization techniques. This method considers only errors such as dynamic motion blur and flying pixels, not the background environment, so it is suitable only for indoor scenes. Reference [20] presented an amplitude modulated continuous wave (AMCW) TOF simulation method using global illumination, based on bidirectional path tracing, for indoor scenes such as kitchens. References [21,22] each established physics-based TOF camera simulation methods. Reference [21] focused on realistic and practical sensor parameterization in order to evaluate two alternative continuous-wave TOF sensor designs. Based on the reflective shadow map (RSM) algorithm, reference [22] introduced measured bidirectional reflection distribution function (BRDF) data so that materials exhibit their physical imaging characteristics, but this method likewise does not consider background illumination. Reference [23] proposed a pulsed TOF simulation method for indoor scenes based on Vulkan shaders and NVIDIA VKRAY ray tracing. In short, most existing TOF imaging simulation methods target indoor scenes and neglect background illumination, and none addresses imaging simulation of TOF cameras specifically for space targets.
The main difference between the space environment and the indoor environment is the influence of sunlight. Sunlight degrades the signal-to-noise ratio and can even saturate the detector. Therefore, the camera hardware must provide suppression of background illumination (SBI), and many TOF detectors with SBI capability have been developed [24,25,26,27,28]. For TOF camera simulation, SBI must likewise be considered and a corresponding simulation model developed. Reference [29] constructed a theoretical SBI model for PMD detectors that effectively characterizes the detector's ability to suppress background illumination.
In summary, this paper proposes a physics-based imaging simulation method for TOF cameras observing space targets, built on improved path tracing and aimed at space applications of TOF cameras. The main contributions of this method are as follows:
(1)
An improved path tracing algorithm is developed to adapt to the TOF camera by introducing a cosine component to characterize the modulated light in the TOF camera.
(2)
The background light suppression model is introduced, and the physics-based simulation is realized by considering a BRDF model fitted to measured data of space materials in the near-infrared band.
(3)
A ground test scene is built, and the correctness of the proposed TOF camera imaging simulation method is verified by quantitative evaluation between the simulated image and measured image.

2. Materials and Methods

2.1. Imaging Principle of TOF Camera

Based on the homodyne detection principle, the AMCW TOF camera measures distance via the cross-correlation between the reflected light signal and a reference signal. The camera first transmits a near-infrared optical signal modulated by a sine wave. The optical signal is reflected by the target surface and received by the infrared detector, and the target distance is computed from the phase delay of the received signal relative to the transmitted signal. The specific principle is shown in Figure 1.
It is assumed that the transmitted infrared optical signal is $g(t) = a\cos(2\pi f_0 t)$, where $a$ is its amplitude and $f_0$ is the signal modulation frequency. The received optical signal $s(t)$ is then
$$s(t) = a_r\cos(2\pi f_0 t + \varphi) + b \tag{1}$$
where $a_r$ is the amplitude of the reflected signal light, $\varphi$ is the phase delay caused by the target distance, and $b$ is the offset caused by ambient light. The cross-correlation between the transmitted and received optical signals is:
$$c_\tau(\varphi) = s \otimes g = \lim_{T \to \infty} \frac{1}{T}\int_{-T/2}^{T/2} s(t)\, g(t+\tau)\, dt \tag{2}$$
where $\tau$ is the time delay and $\otimes$ is the correlation operator.
In order to recover the amplitude $a_r$ and phase $\varphi$ of the reflected light signal, four sequential amplitude images are generally collected, defined as:
$$C_i = c_{\tau_i}(\varphi), \quad \tau_i = \frac{i\pi}{2 \times 2\pi f_0}, \quad i \in \{0, 1, 2, 3\} \tag{3}$$
Then, the phase delay $\varphi$, reflected signal amplitude $a_r$, and offset $b$ can be obtained as:
$$\varphi = \arctan\left(\frac{C_3 - C_1}{C_0 - C_2}\right) \tag{4}$$
$$a_r = \frac{1}{2a}\sqrt{(C_3 - C_1)^2 + (C_0 - C_2)^2} \tag{5}$$
$$b = \frac{1}{4}\left(C_0 + C_1 + C_2 + C_3\right) \tag{6}$$
Finally, the distance $d$ between the TOF camera and the scene is:
$$d = \frac{1}{2}\, c_{light}\, \frac{\varphi}{2\pi f_0} \tag{7}$$
where $c_{light}$ is the speed of light. The purpose of imaging simulation is to obtain this distance $d$ and the intensity of the reflected signal, both of which depend on the characteristics of the target material, the background, the TOF camera, and so on.
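To make the decoding step concrete, the sketch below shows how Equations (3)-(7) are applied to four sampled correlation images. This is our own illustrative NumPy code, not the authors' implementation; the function and variable names are hypothetical.

```python
import numpy as np

C_LIGHT = 299_792_458.0  # speed of light [m/s]

def decode_four_phase(C0, C1, C2, C3, f0, a=1.0):
    """Recover phase, amplitude, offset, and depth from the four
    correlation samples of Eq. (3), via Eqs. (4)-(7)."""
    # arctan2 resolves the quadrant that a plain arctan of the ratio would lose
    phi = np.mod(np.arctan2(C3 - C1, C0 - C2), 2 * np.pi)   # Eq. (4)
    a_r = np.hypot(C3 - C1, C0 - C2) / (2 * a)              # Eq. (5)
    b = (C0 + C1 + C2 + C3) / 4.0                           # Eq. (6)
    d = 0.5 * C_LIGHT * phi / (2 * np.pi * f0)              # Eq. (7)
    return phi, a_r, b, d
```

For example, with a modulation frequency $f_0 = 30$ MHz the unambiguous range is $c_{light}/(2f_0) \approx 5$ m, which is why the phase is wrapped into $[0, 2\pi)$ before conversion to depth.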

2.2. Imaging Characteristic Modeling

2.2.1. Target Material Characteristics Modeling

In order to achieve a physics-based simulation, the reflection characteristics of the actual materials on the target surface must be introduced. The BRDF is usually used to express the reflection characteristics of a material surface. As shown in Figure 2a, for a surface element $dA$, the incident light direction is $(\theta_i, \phi_i)$ and the observation direction is $(\theta_r, \phi_r)$, where $\theta$ and $\phi$ denote the zenith and azimuth angles, respectively, and $Z$ is the surface normal. The BRDF is defined as the ratio of the radiance $dL_r(\theta_i, \phi_i, \theta_r, \phi_r)$ leaving along $(\theta_r, \phi_r)$ to the irradiance $dE_i(\theta_i, \phi_i)$ incident on the measured surface along $(\theta_i, \phi_i)$:
$$f_r(\theta_i, \phi_i, \theta_r, \phi_r) = \frac{dL_r(\theta_i, \phi_i, \theta_r, \phi_r)}{dE_i(\theta_i, \phi_i)} \quad (\mathrm{sr}^{-1}) \tag{8}$$
Yellow thermal control material and silicon solar cells are two primary surface materials of space targets. In this paper, their BRDF data are measured by the REFLET-180 BRDF measuring instrument as shown in Figure 2b. The specific material samples and some corresponding measurement results are shown in Figure 3.
At the same time, the above measured BRDF data are theoretically modeled using the microfacet BRDF model [14,15]. The microfacet BRDF model includes the specular reflection term and the diffuse reflection term, and the specular reflection term is the Torrance–Sparrow BRDF model [30]. The definition of the microfacet BRDF model is as follows.
$$f_r(\theta_i, \phi_i, \theta_r, \phi_r) = \frac{k_s}{4\cos\theta_i\cos\theta_r}\, DFG + \frac{k_d}{\pi} \tag{9}$$
where $k_s$ is the specular reflection coefficient, $k_d$ is the diffuse reflection coefficient, $D$ is the microfacet distribution factor, $F$ is the Fresnel factor [31], and $G$ is the geometric attenuation factor. They are defined as follows:
$$\begin{cases} D = e^{-\tan^2\theta_h/\alpha^2} / \left(\pi\alpha^2\cos^4\theta_h\right) \\ F = F_0 + (1 - F_0)(1 - \cos\gamma)^n \\ G = \min\left(1,\; 2\cos\theta_h\cos\theta_r/\cos\gamma,\; 2\cos\theta_h\cos\theta_i/\cos\gamma\right) \end{cases} \tag{10}$$
where $\alpha = 2\sigma$, $\sigma$ being the root mean square slope of the microfacets; $\theta_h$ is the angle between the surface normal and the half vector; $\gamma$ is the angle between the incident direction and the half vector; $F_0$ is the Fresnel coefficient at normal incidence; and $n$ is an undetermined coefficient.
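For illustration, the following sketch (our own code with hypothetical names; the half-vector angles $\theta_h$ and $\gamma$ are assumed to be precomputed from the incident and observation directions) evaluates Equations (9) and (10):

```python
import numpy as np

def microfacet_brdf(theta_i, theta_r, theta_h, gamma, ks, kd, alpha, F0, n):
    """Microfacet BRDF of Eq. (9) with the D, F, G factors of Eq. (10).
    All angles in radians; accepts arrays or scalars."""
    # microfacet distribution factor D (Beckmann-style)
    D = np.exp(-np.tan(theta_h)**2 / alpha**2) / (np.pi * alpha**2 * np.cos(theta_h)**4)
    # Fresnel factor F
    F = F0 + (1.0 - F0) * (1.0 - np.cos(gamma))**n
    # geometric attenuation factor G
    G = np.minimum(1.0, np.minimum(
        2.0 * np.cos(theta_h) * np.cos(theta_r) / np.cos(gamma),
        2.0 * np.cos(theta_h) * np.cos(theta_i) / np.cos(gamma)))
    specular = ks * D * F * G / (4.0 * np.cos(theta_i) * np.cos(theta_r))
    return specular + kd / np.pi
```

With the Table 1 parameters for the thermal control material (ks = 0.649, kd = 0.081, F0 = 0.775, n = 8.51), the specular lobe dominates near the mirror direction, as in Figure 4a.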
Since the TOF camera light source used in this paper operates at 850 nm, the BRDF model parameters are fitted at 850 nm, with the fitting error expressed as [32,33]:
$$e_{fit} = \sqrt{\frac{\sum_{\theta_i}\sum_{\theta_r}\left[f_{r\text{-}model}\cos\theta_r - f_{r\text{-}measured}\cos\theta_r\right]^2}{\sum_{\theta_i}\sum_{\theta_r}\left[f_{r\text{-}measured}\cos\theta_r\right]^2}} \tag{11}$$
where $f_{r\text{-}measured}$ is the measured BRDF value and $f_{r\text{-}model}$ is the fitted BRDF value. The fitted parameters are listed in Table 1, and the fitting results are visualized in Figure 4.
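Such a fit can be reproduced with a generic nonlinear least-squares routine. The sketch below is our own illustration (hypothetical array names, reusing microfacet_brdf from the previous sketch); it minimizes the cosine-weighted residual that underlies Equation (11):

```python
import numpy as np
from scipy.optimize import least_squares

def fit_brdf(theta_i, theta_r, theta_h, gamma, f_measured):
    """Fit (ks, kd, alpha, F0, n) to measured BRDF samples and
    report the relative fitting error of Eq. (11)."""
    def residual(p):
        ks, kd, alpha, F0, n = p
        f_model = microfacet_brdf(theta_i, theta_r, theta_h, gamma,
                                  ks, kd, alpha, F0, n)
        return (f_model - f_measured) * np.cos(theta_r)

    p0 = np.array([0.5, 0.1, 0.05, 0.5, 5.0])        # initial guess
    sol = least_squares(residual, p0, bounds=(1e-6, np.inf))
    e_fit = np.sqrt(np.sum(residual(sol.x)**2)
                    / np.sum((f_measured * np.cos(theta_r))**2))
    return sol.x, e_fit
```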

2.2.2. Background Characteristics Modeling

Space targets operate in Earth orbit. In the imaging band of the TOF camera, the radiation reaching the target consists mainly of direct solar radiation, solar radiation reflected by the Earth, solar radiation reflected by the Moon, and stellar radiation. Since stellar radiation is minimal compared to the other sources, it can be ignored. The radiation environment of the target is shown in Figure 5, and a numerical sketch of the dominant irradiance terms is given after the list below.
(1)
The irradiance generated by direct solar radiation at the target is:
$$E_S = \int_{\lambda_1}^{\lambda_2} \frac{c_1}{\lambda^5}\left(e^{c_2/\lambda T_S} - 1\right)^{-1} \frac{R_S^2}{R_{SE}^2}\, d\lambda \tag{12}$$
where $\lambda$ is the wavelength in µm; $c_1$ and $c_2$ are the first and second blackbody radiation constants; $T_S = 5900$ K is the solar radiation temperature; $R_S = 6.955 \times 10^5$ km is the solar radius; $R_{SE}$ is the distance between the Sun and the Earth; and $\lambda_1 \sim \lambda_2$ is the observation band of the TOF camera.
(2)
Assuming that the space target is in a high Earth orbit and that the Earth is a diffuse sphere, the irradiance generated at the target by solar radiation reflected from the Earth is approximated as:
$$E_E = \frac{E_S \rho_E R_E^2}{\pi (R_E + H_{TE})^2}\int_{\theta_E - \frac{\pi}{2}}^{\frac{\pi}{2}}\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \cos^3 B\left(\cos\theta_E\cos^2 l + \sin\theta_E\sin l\cos l\right) dB\, dl = \frac{4E_S\rho_E R_E^2}{3\pi (R_E + H_{TE})^2}\left[\cos\theta_E\left(\frac{\pi}{2} - \frac{\theta_E}{2} + \frac{\sin 2\theta_E}{4}\right) + \sin\theta_E\left(\frac{1}{4} - \frac{\cos 2\theta_E}{4}\right)\right] \tag{13}$$
where $\rho_E = 0.35$ is the average albedo of the Earth; $R_E = 6370$ km is the radius of the Earth; $H_{TE}$ is the height of the target above the ground; and $\theta_E$ is the angle between the vector $v_{Sun\text{-}Earth}$ and the vector $v_{Earth\text{-}Target}$, with range $[0, \pi]$.
(3)
Similarly, assuming that the Moon is a diffuse sphere, the irradiance generated at the target by solar radiation reflected from the Moon is:
$$E_M \approx \frac{E_S \rho_M R_M^2}{\pi R_{TM}^2}\int_{\theta_M - \frac{\pi}{2}}^{\frac{\pi}{2}}\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \cos^3 B\left(\cos\theta_M\cos^2 l + \sin\theta_M\sin l\cos l\right) dB\, dl = \frac{4E_S\rho_M R_M^2}{3\pi R_{TM}^2}\left[\cos\theta_M\left(\frac{\pi}{2} - \frac{\theta_M}{2} + \frac{\sin 2\theta_M}{4}\right) + \sin\theta_M\left(\frac{1}{4} - \frac{\cos 2\theta_M}{4}\right)\right] \tag{14}$$
where $\rho_M = 0.12$ is the average albedo of the Moon; $R_M = 1738$ km is the radius of the Moon; $R_{TM}$ is the distance between the target and the center of mass of the Moon; and $\theta_M$ is the angle between the vector $v_{Sun\text{-}Moon}$ and the vector $v_{Moon\text{-}Target}$, with range $[0, \pi]$.
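As a numerical sketch of Equations (12) and (13), the helpers below are our own illustration with assumed constants; the Moon term of Equation (14) has the same form with the lunar radius, albedo, and distance substituted:

```python
import numpy as np
from scipy.integrate import quad

C1 = 3.7418e8   # first blackbody radiation constant [W um^4 / m^2]
C2 = 1.4388e4   # second blackbody radiation constant [um K]

def solar_irradiance(lam1, lam2, T_s=5900.0, R_s=6.955e5, R_se=1.496e8):
    """In-band direct solar irradiance at the target, Eq. (12).
    Wavelengths in um, radii and distances in km; returns W/m^2."""
    planck = lambda lam: C1 / lam**5 / (np.exp(C2 / (lam * T_s)) - 1.0)
    E_band, _ = quad(planck, lam1, lam2)
    return E_band * (R_s / R_se)**2

def earth_reflected_irradiance(E_s, theta_e, H_te, rho_e=0.35, R_e=6370.0):
    """Irradiance from sunlight diffusely reflected by the Earth, Eq. (13)."""
    k = 4.0 * E_s * rho_e * R_e**2 / (3.0 * np.pi * (R_e + H_te)**2)
    return k * (np.cos(theta_e) * (np.pi/2 - theta_e/2 + np.sin(2*theta_e)/4)
                + np.sin(theta_e) * (0.25 - np.cos(2*theta_e)/4))

# example: 0.8-0.9 um band, target at GEO altitude, 30 deg phase angle
E_s = solar_irradiance(0.8, 0.9)
E_e = earth_reflected_irradiance(E_s, np.radians(30.0), 35_786.0)
```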

2.2.3. SBI Characteristics Modeling

The most common TOF sensor is the photonic mixer device (PMD). The PMD is a two-tap sensor whose structure is shown in Figure 6a [34]. Interference of background light with the actively modulated light saturates the pixel's quantum wells prematurely, so that less reflected active light carrying depth information is detected, which increases noise and reduces the signal-to-noise ratio (SNR). The influence of background light on the PMD is illustrated in Figure 6b [35]. Designing a system that effectively reduces the impact of background illumination is therefore essential.
The suppression of background illumination (SBI) developed by PMDTec has been successfully applied to TOF cameras such as the CamCube. The SBI is in-pixel circuitry that subtracts ambient light and prevents the pixels from saturating. The manufacturer has not published the details of the SBI compensation circuit, but reference [36] experimentally characterized the response of the A and B channels of a PMD pixel as a function of exposure time, as shown in Figure 7. During the exposure time, the PMD pixel responds photoelectrically to both the signal light and the background illumination. In the linear region, the SBI is not activated. When the number of electrons in quantum well A or B reaches $n_{SBI,start}$, the SBI is activated. As the exposure time increases further, the pixel enters the SBI limit region, where its data become invalid.
Reference [29] established the SBI model shown in Figure 8 by analyzing the response of the TOF camera to illumination intensity or exposure time. This SBI model is also adopted in this paper and can be described as follows. The charge stored in the two quantum wells is continuously compared with a reference value $n_{SBI,start}$; once the stored charge in one of the wells exceeds this value, i.e., the difference $n_\Delta$ between the stored charge and $n_{SBI,start}$ becomes positive, a compensation process is triggered. During compensation, two compensation currents are injected into the two quantum wells, each carrying approximately $n_\Delta$ charges. After compensation, the quantum well containing more electrons is reset to $n_{SBI,start}$, and the charge of the other well is set to a value below $n_{SBI,start}$.
The compensation process does not lose any critical information, since the phase delay can be estimated from the difference between the charges in the two quantum wells alone. As shown below, each sub-amplitude image in Equation (3) is the difference between the amplitude images of channels A and B.
$$C_i = C_i^A - C_i^B \tag{15}$$
where $C_i^A$ and $C_i^B$ are the amplitude images output by channels A and B, respectively.
As shown in Figure 8, both the background illumination and the SBI process introduce additional Poisson noise. The Poisson distribution is given by
$$P_\lambda(k) = \frac{\lambda^k}{k!}\, e^{-\lambda} \tag{16}$$
where $\lambda$ is the mean value, here the number of generated electrons, and $P_\lambda(k)$ is the probability of detecting $k$ electrons for a given $\lambda$.
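The compensation logic can be sketched as follows. This is our reading of the model in [29] (hypothetical function and variable names): equal-mean Poisson charge packets are subtracted from both wells, so the A-B difference that carries the phase is preserved on average while the additional shot noise of Equation (16) is introduced.

```python
import numpy as np

def sbi_compensate(nA, nB, n_sbi_start, rng=None):
    """Sketch of the SBI compensation of Figure 8 for per-pixel
    electron counts nA, nB (arrays or scalars)."""
    if rng is None:
        rng = np.random.default_rng()
    # overshoot of the fuller quantum well above the trigger level
    n_delta = np.maximum(np.maximum(nA, nB) - n_sbi_start, 0.0)
    # each injected compensation packet carries Poisson shot noise, Eq. (16)
    nA_out = nA - rng.poisson(n_delta)
    nB_out = nB - rng.poisson(n_delta)
    return nA_out, nB_out
```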

2.3. Imaging Simulation Modeling

In order to obtain simulated images of an actual target, an AMCW TOF camera imaging simulation model based on path tracing is proposed in this paper. First, the three-dimensional space target scene is established, the TOF camera's emission of the modulated light signal into the scene is simulated, and the radiance image of the scene is obtained by path tracing. Second, the radiance image is coupled with imaging chain factors such as platform motion, the optical system, and the detector, and the influence of background light is suppressed by the SBI module. Then, four frames of amplitude images are obtained through time sampling. Finally, the depth and grey images are computed via Equations (4)-(7). The complete simulation model is shown in Figure 9.

2.3.1. Improved Path Tracing Algorithm of the TOF Camera

The general path tracing algorithm is suitable for imaging simulation of visible-light and infrared cameras. However, since the light signal of a TOF camera is modulated, it carries a cosine component that varies with propagation distance. To apply path tracing to the TOF camera, an improved algorithm is therefore developed by introducing a cosine component that characterizes the modulated light, as follows.
Define $T(g(t), d)$ as the result of propagating the optical signal $g(t) = b + a\cos(2\pi f_0 t) = g_{=}(t) + \mathrm{real}\left(g_{\cos}(t)\right)$ over a distance $d$ [m] without attenuation:
$$T(g(t), d) = b + a\cos\left(2\pi f_0 t + \frac{2\pi f_0 d}{c}\right) = g_{=}(t) + \mathrm{real}\left(g_{\cos}(t)\, e^{i\Psi d}\right) \tag{17}$$
where $g_{=}(t) = b$ is the DC component, $g_{\cos}(t) = a e^{i 2\pi f_0 t}$ is the cosine component, $\mathrm{real}(\cdot)$ takes the real part of a complex number, and $\Psi = 2\pi f_0 / c$.
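In a phasor implementation, a modulated signal can be carried as a (DC, complex amplitude) pair, and $T$ then reduces to one complex multiplication per propagation segment. A minimal sketch (ours, not the paper's code):

```python
import numpy as np

C_LIGHT = 299_792_458.0

def propagate(dc, phasor, d, f0):
    """Apply T(g, d) of Eq. (17): the DC component is unchanged and the
    cosine component picks up the phase factor exp(i * Psi * d)."""
    psi = 2.0 * np.pi * f0 / C_LIGHT
    return dc, phasor * np.exp(1j * psi * d)

# g(t) = b + a*cos(2*pi*f0*t)  ->  dc = b, phasor = a (t = 0 reference)
dc, ph = propagate(dc=0.5, phasor=1.0 + 0.0j, d=2.5, f0=30e6)
```

Amplitude attenuation (e.g., the $1/d^2$ falloff in Equation (19)) is applied to both components separately, since $T$ itself models only the phase shift.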
The light source $L$ of the TOF camera is assumed to be a uniform point source, with light intensity
$$I_L(t) = \frac{P_L}{\Omega_L}\left[1 + \cos(2\pi f_0 t)\right] \tag{18}$$
where $P_L$ [W] is the power of the light, $\Omega_L$ is the solid angle, and $f_0$ is the modulation frequency of the optical signal.
The illuminance $E_L(t, d)$ at a micro-plane perpendicular to the propagation direction at distance $d$ [m] from the light source is:
$$E_L(t, d) = T\left(I_L(t)/d^2,\, d\right) \tag{19}$$
For a point $P$ in the scene shown in Figure 10, the illuminance generated by direct lighting at $P$ is:
$$E_{L@P}(t, d_{LP}) = E_L(t, d_{LP})\cos\theta_{PL} \tag{20}$$
where $d_{LP}$ is the distance between the light source $L$ and the point $P$, and $\theta_{PL}$ is the angle between the vector $v_{P \to L}$ from $P$ to $L$ and the surface normal $n_P$ at $P$.
The direct illumination radiance $L^{direct}_{@P \to S}(t)$ generated by the infrared light source at point $P$ along the direction $P \to S$ is:
$$L^{direct}_{@P \to S}(t) = E_{L@P}(t, d_{LP})\, f_{L \to P \to S} \tag{21}$$
where $f_{L \to P \to S}$ is the BRDF of the surface. For ambient light sources $L$ such as the Sun, the Earth, and the Moon, the irradiance $E_{L@P}(t, d_{LP})$ given by Equations (12)-(14) has no cosine component, only a DC component. The derivation below is therefore written for the active light of the TOF camera; when ambient light is encountered, the cosine component is simply set to zero.
In addition to direct illumination from light sources, light reflected by other surfaces also contributes to the radiance at point $P$, as shown in Figure 11. We call this contribution the indirect illumination radiance, denoted $L^{indirect}_{@P \to S}(t)$ and defined as:
$$L^{indirect}_{@P \to S}(t) = \int_{\Omega} f_{P' \to P \to S}\, L_{P'@P}(t)\cos\theta_i\, d\varpi_i \tag{22}$$
where $L_{P'@P}(t)$ is the radiance generated by $P'$ at point $P$, $\Omega$ is the solid angle subtended by the surfaces that contribute to the radiance at $P$, and $d\varpi_i$ is the differential solid angle in the $\omega_i$ direction, defined as:
$$d\varpi_i = \frac{\cos\theta_o\, dA(P')}{d_{P'P}^2} \tag{23}$$
Since the radiant energy of light in the scene is conserved, the radiance at $P$ can be related to the radiance at another point $P'$:
$$L_{P'@P}(t) = T\left(L_o\left(isect(P, \omega_i), -\omega_i, t\right),\, d_{P'P}\right) \tag{24}$$
where $isect(P, \omega_i)$ computes the first intersection $P'$ between the scene and the ray propagating from $P$ along the $\omega_i$ direction, and $dA(P')$ is the microfacet at the intersection $P'$.
Substituting Equation (23) into Equation (22) gives:
$$L^{indirect}_{@P \to S}(t) = \int_{A} f_{P' \to P \to S}\, L_{P'@P}(t)\, V(P \leftrightarrow P')\, \frac{|\cos\theta_i||\cos\theta_o|}{d_{P'P}^2}\, dA(P') \tag{25}$$
where $A$ is the set of all surfaces in the scene and $V(P \leftrightarrow P')$ is the visibility between points $P$ and $P'$: if the two points are mutually visible, $V(P \leftrightarrow P') = 1$; otherwise, $V(P \leftrightarrow P') = 0$. Let
$$G(P \leftrightarrow P') = V(P \leftrightarrow P')\, \frac{|\cos\theta_i||\cos\theta_o|}{d_{P'P}^2} \tag{26}$$
Then
$$L^{indirect}_{@P \to S}(t) = \int_{A} f_{P' \to P \to S}\, L_{P'@P}(t)\, G(P \leftrightarrow P')\, dA(P') \tag{27}$$
Combining the direct and indirect radiance, the radiance $L_{@P \to S}(t)$ at the point $P$ can be expressed as:
$$L_{@P \to S}(t) = L^{direct}_{@P \to S}(t) + L^{indirect}_{@P \to S}(t) = E_{L@P}(t, d_{LP})\, f_{L \to P \to S} + \int_{A} f_{P' \to P \to S}\, L_{P'@P}(t)\, G(P \leftrightarrow P')\, dA(P') \tag{28}$$
After the radiance $L_{@P \to S}(t)$ propagates over the distance $d_{PS}$, the radiance $L_{P@S}(t)$ at the camera sensor $S$ is:
$$L_{P@S}(t) = T\left(L_{@P \to S}(t),\, d_{PS}\right) \tag{29}$$
The Monte Carlo path tracing algorithm [37,38] is utilized to solve Equation (28), which is approximated as:
$$L_{@P_1 \to S}(t) \approx \frac{1}{N}\sum_{k=1}^{N}\sum_{l=1}^{NumLight}\sum_{n=1}^{MaxDepth} L^{k}_{MC}(\bar{p}_n) \tag{30}$$
where $MaxDepth$ is the maximum number of light bounces, $NumLight$ is the number of light sources, $N$ is the number of samples per pixel, and $L^{k}_{MC}(\bar{p}_n)$ is the Monte Carlo estimate for the $k$th sample:
$$L_{MC}(\bar{p}_n) = \mathrm{real}\Bigg\{\frac{E^{=}_{L@P_n}(t, d_{LP_n})\, f_{L \to P_n \to S}\, f_{P_n \to P_{n-1} \to P_{n-2}}\, G(P_n \leftrightarrow P_{n-1})}{p_A(P_n)} \times \left(\prod_{i=1}^{n-2}\frac{f_{P_{i+1} \to P_i \to P_{i-1}}\, V(P_i \leftrightarrow P_{i+1})\, |\cos\theta_i|}{p_\omega(P_i \to P_{i+1})}\right) + \frac{E^{\cos}_{L@P_n}(t, d_{LP_n})\, f_{L \to P_n \to S}\, e^{i\Psi d_{P_n P_{n-1}}}\, f_{P_n \to P_{n-1} \to P_{n-2}}\, G(P_n \leftrightarrow P_{n-1})}{p_A(P_n)} \times \left(\prod_{i=1}^{n-2} e^{i\Psi d_{P_{i+1} P_i}}\, \frac{f_{P_{i+1} \to P_i \to P_{i-1}}\, V(P_i \leftrightarrow P_{i+1})\, |\cos\theta_i|}{p_\omega(P_i \to P_{i+1})}\right)\Bigg\} \tag{31}$$
where $p_A(P_n)$ is the sampling probability function at point $P_n$, defined as $p_A(P_n) = 1/\sum_n A_n$ with $\sum_n A_n$ the total surface area of the scene, and $p_\omega(P_i \to P_{i+1})$ is the sampling probability density function in the $P_i \to P_{i+1}$ direction.
The irradiance $E_{pixel}$ received by a pixel of the TOF sensor is then:
$$E_{pixel}(t) = T\left(L_{@P_1 \to S}(t),\, d_{P_1 S}\right)\, \tau\, \frac{\pi}{4}\left(\frac{1}{F/\#}\right)^2 \cos^4\alpha \tag{32}$$
where $\tau$ is the transmittance of the AMCW TOF camera, $F/\#$ is its f-number, and $\alpha$ is the angle between the optical axis and the vector $v_{S \to P_1}$.
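Pulling Equations (30)-(32) together, one Monte Carlo sample can be organized as a running (DC, complex phasor) throughput along the path. The sketch below is a strongly simplified illustration under assumed interfaces: scene.intersect, scene.sample_light, and hit.sample_brdf are hypothetical helpers, and the returned light terms are assumed to already include the distance falloff and the light-sampling pdf.

```python
import numpy as np

def trace_sample(scene, cam_ray, f0, max_depth, rng):
    """One sample of the estimator in Eq. (30): accumulate DC and
    complex cosine contributions over up to max_depth bounces."""
    psi = 2.0 * np.pi * f0 / 299_792_458.0
    dc, phasor = 0.0, 0.0 + 0.0j          # accumulated radiance parts
    thr_dc, thr_cos = 1.0, 1.0 + 0.0j     # path throughput
    ray = cam_ray
    for _ in range(max_depth):
        hit = scene.intersect(ray)                   # isect() of Eq. (24)
        if hit is None:
            break
        thr_cos *= np.exp(1j * psi * hit.distance)   # segment phase, Eq. (17)
        # next-event estimation toward the lights; only the TOF source is
        # modulated, so ambient (Sun/Earth/Moon) terms have E_cos = 0
        E_dc, E_cos, d_lp = scene.sample_light(hit)
        f = hit.brdf_to_light * hit.cos_to_light
        dc += thr_dc * E_dc * f
        phasor += thr_cos * E_cos * f * np.exp(1j * psi * d_lp)
        # continue the path by importance-sampling the BRDF
        ray, f_bounce, cos_i, pdf = hit.sample_brdf(rng)
        thr_dc *= f_bounce * cos_i / pdf
        thr_cos *= f_bounce * cos_i / pdf
    return dc, phasor   # signal over time: dc + real(phasor * exp(i*2*pi*f0*t))
```

Averaging dc and phasor over the N samples per pixel and applying the lens term of Equation (32) yields the pixel irradiance used for time sampling.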

2.3.2. Imaging Link Impact Modeling

As shown in Figure 9, the target scene radiation undergoes a "radiation-voltage-grey value" conversion as it passes through the component modules of the TOF camera. In this process, the signal is affected by the optical system, detector, circuit processing unit, and platform, which manifests in the image as a dispersion of the radiance image. These dispersion effects can be described by each module's modulation transfer function (MTF), as shown in Figure 12.
The final output simulated image [39] is:
$$Image_{sim} = \mathrm{IFFT}\left(\mathrm{FFT}(Image_{Rad}) \cdot MTF_{opt} \cdot MTF_{det} \cdot MTF_{e} \cdot MTF_{mot}\right) + Noise \tag{33}$$
where $\mathrm{FFT}$ denotes the fast Fourier transform; $MTF_{opt}$, $MTF_{det}$, $MTF_{e}$, and $MTF_{mot}$ are the MTFs of the optical system, the detector, the signal processing circuit, and the platform motion, respectively; and $Noise$ is the image noise. MTF models for the individual processes can be found in references [8,17,40].
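A minimal sketch of Equation (33) follows (our own code; the MTF arrays are assumed to be sampled on the same centered frequency grid as the shifted image spectrum, and the noise term is simplified to additive Gaussian noise):

```python
import numpy as np

def apply_imaging_chain(image_rad, mtf_opt, mtf_det, mtf_e, mtf_mot,
                        noise_sigma, rng=None):
    """Blur the radiance image by the cascaded system MTFs and add
    noise, per Eq. (33)."""
    if rng is None:
        rng = np.random.default_rng()
    spectrum = np.fft.fftshift(np.fft.fft2(image_rad))
    spectrum *= mtf_opt * mtf_det * mtf_e * mtf_mot    # cascaded MTFs
    blurred = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))
    return blurred + rng.normal(0.0, noise_sigma, image_rad.shape)
```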

3. Results

In order to verify the correctness of the proposed TOF camera imaging simulation method for space targets, the ground experiment scene shown in Figure 13 is built. Considering that it is difficult to simulate the radiation of the Earth and the Moon on the ground, and the influence of these radiations is minimal, the ground experiment only considers the direct solar radiation. The experimental scene comprises a satellite model, a TOF camera, a turntable, a black background, and a solar simulator. The surface of the satellite model is mainly composed of yellow thermal control materials and silicon solar cells, and the TOF camera is placed horizontally on a tripod. The TOF camera used in this paper is the PMD CamBoard camera, and its performance indexes are shown in Table 2. The solar simulator used is the Newport Oriel Sol3A, and its performance indexes are shown in Table 3. The experimental background is a dark background composed of black light-absorbing flannel, and the light beam emitted by the solar simulator completely covers the target model.
During the experiment, all light sources in the room were turned off, the solar simulator was turned on with its output set to the typical power (1 SUN), and the PMD camera captured images of the target model. Because of the size of the satellite model and the limited illumination area of the solar simulator, the satellite model was inclined at an angle so that the simulator beam covered it entirely. At the same time, the TOF imaging simulation method proposed in this paper was used to generate the corresponding simulated images under the experimental conditions; the remaining simulation parameters are listed in Table 4. The measured and simulated images are shown in Figure 14. To facilitate comparison with the simulated images, the background area of the measured images was removed manually.
In Table 4, $d_{cam\text{-}sat}$ is the distance between the PMD TOF camera and the satellite body center, $\theta_{cam\text{-}panel}$ is the angle between the optical axis of the camera and the plane of the solar panel, and $\theta_{solarSim\text{-}panel}$ is the angle between the beam of the solar simulator and the plane of the solar panel. According to references [41,42], the value of $n_{SBI,start}$ is about 36,500.
To evaluate the simulation quantitatively, the mean, variance, mean square error (MSE), structural similarity (SSIM), and peak signal-to-noise ratio (PSNR) of the grey and depth images are calculated. A smaller MSE and larger SSIM and PSNR indicate more realistic simulation results. These indexes are listed in Table 5, where bold marks the better results. In addition, the depth images are converted into point clouds and denoised; the point clouds before and after denoising are shown in Figure 15.
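The evaluation indexes of Table 5 can be computed as in the following sketch (our illustration using scikit-image; data_range is the dynamic range of the image pair):

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def compare(measured, simulated, data_range):
    """MSE, SSIM, PSNR, and relative mean/variance errors for one pair."""
    mse = float(np.mean((measured - simulated) ** 2))
    ssim = structural_similarity(measured, simulated, data_range=data_range)
    psnr = peak_signal_noise_ratio(measured, simulated, data_range=data_range)
    rel_mean = abs(simulated.mean() - measured.mean()) / measured.mean()
    rel_var = abs(simulated.var() - measured.var()) / measured.var()
    return mse, ssim, psnr, rel_mean, rel_var
```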

4. Discussion

As shown in Figure 14, our simulated grey and depth images closely resemble the measured grey and depth images. The grey image simulated by the method of reference [22] is also very similar to the measured grey image, but its simulated depth image differs considerably from the measured depth image. The reason is that this method does not account for the error introduced by background illumination, so its depth noise is unrealistically low. Likewise, as shown in Figure 15, the point cloud produced by our method agrees with the measured point cloud better than that of reference [22]. Table 5 shows that the relative errors of the grey mean, grey variance, depth mean, and depth variance are 2.59%, 3.80%, 18.29%, and 14.58%, respectively, all better than the results of reference [22]. Our method's MSE, SSIM, and PSNR results are also better than those of reference [22].

5. Conclusions

Based on an analysis of the space background environment and the TOF camera imaging mechanism, this paper proposed a physics-based AMCW TOF camera imaging simulation method for space targets built on improved path tracing. Firstly, the BRDF of the yellow thermal control material and the silicon solar cells was measured at 850 nm, and the parameters of the microfacet BRDF model were fitted with a fitting error below 5.2%. Secondly, an improved path tracing algorithm adapted to the TOF camera was developed by introducing a cosine component that characterizes the modulated light. Then, an imaging link simulation model considering the coupled effects of material BRDF, SBI, the optical system, the detector, electronics, platform vibration, and noise was established. Finally, a ground experiment was carried out; the relative errors of the grey mean, grey variance, depth mean, and depth variance were 2.59%, 3.80%, 18.29%, and 14.58%, respectively, and our method's MSE, SSIM, and PSNR results were also better than those of the reference method. The experimental results verify the correctness of the proposed simulation method, which can provide image data support for ground testing of TOF camera algorithms for space targets.

Author Contributions

Conceptualization, H.W.; methodology, Z.Y.; software, Z.Y.; validation, Z.Y., X.L., Q.N. and Y.L.; formal analysis, Z.Y.; investigation, Z.Y., X.L. and Q.N.; writing—original draft preparation, Z.Y.; writing—review and editing, Z.Y. and H.W.; visualization, Z.Y., X.L. and Q.N.; supervision, H.W.; project administration, H.W.; funding acquisition, H.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Natural Science Foundation of China, grant number 61705220.

Data Availability Statement

Not applicable.

Acknowledgments

We sincerely thank the National Natural Science Foundation of China for its support.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Opromolla, R.; Fasano, G.; Rufino, G.; Grassi, M. A review of cooperative and uncooperative spacecraft pose determination techniques for close-proximity operations. Prog. Aerosp. Sci. 2017, 93, 53–72.
2. Klionovska, K.; Burri, M. Hardware-in-the-Loop Simulations with Umbra Conditions for Spacecraft Rendezvous with PMD Visual Sensors. Sensors 2021, 21, 1455.
3. Ravandoor, K.; Busch, S.; Regoli, L.; Schilling, K. Evaluation and Performance Optimization of PMD Camera for RvD Application. IFAC Proc. Vol. 2013, 46, 149–154.
4. Potier, A.; Kuwahara, T.; Pala, A.; Fujita, S.; Sato, Y.; Shibuya, Y.; Tomio, H.; Tanghanakanond, P.; Honda, T.; Shibuya, T.; et al. Time-of-Flight Monitoring Camera System of the De-orbiting Drag Sail for Microsatellite ALE-1. Trans. Jpn. Soc. Aeronaut. Space Sci. Aerosp. Technol. Jpn. 2021, 19, 774–783.
5. Martínez, H.G.; Giorgi, G.; Eissfeller, B. Pose estimation and tracking of non-cooperative rocket bodies using Time-of-Flight cameras. Acta Astronaut. 2017, 139, 165–175.
6. Ruel, S.; English, C.; Anctil, M.; Daly, J.; Smith, C.; Zhu, S. Real-time 3D vision solution for on-orbit autonomous rendezvous and docking. In Spaceborne Sensors III; SPIE: Bellingham, WA, USA, 2006; p. 622009.
7. Lebreton, J.; Brochard, R.; Baudry, M.; Jonniaux, G.; Salah, A.H.; Kanani, K.; Goff, M.L.; Masson, A.; Ollagnier, N.; Panicucci, P. Image simulation for space applications with the SurRender software. arXiv 2021, arXiv:2106.11322.
8. Han, Y.; Lin, L.; Sun, H.; Jiang, J.; He, X. Modeling the space-based optical imaging of complex space target based on the pixel method. Optik 2015, 126, 1474–1478.
9. Zhang, Y.; Lv, L.; Yang, C.; Gu, Y. Research on Digital Imaging Simulation Method of Space Target Navigation Camera. In Proceedings of the 2021 IEEE 16th Conference on Industrial Electronics and Applications (ICIEA), Chengdu, China, 1–4 August 2021; pp. 1643–1648.
10. Li, W.; Cao, Y.; Meng, D.; Wu, Z. Space target scattering characteristic imaging in the visible range based on ray tracing algorithm. In Proceedings of the 12th International Symposium on Antennas, Propagation and EM Theory (ISAPE), Hangzhou, China, 3–6 December 2018; pp. 1–3.
11. Xu, C.; Shi, H.; Gao, Y.; Zhou, L.; Shi, Q.; Li, J. Space-Based optical imaging dynamic simulation for spatial target. In Proceedings of the AOPC 2019: Optical Sensing and Imaging Technology, Beijing, China, 7–9 July 2019; p. 1133815.
12. Wang, H.; Zhang, W. Visible imaging characteristics of the space target based on bidirectional reflection distribution function. J. Mod. Opt. 2012, 59, 547–554.
13. Wang, H.; Zhang, W.; Wang, F. Infrared imaging characteristics of space-based targets based on bidirectional reflection distribution function. Infrared Phys. Technol. 2012, 55, 368–375.
14. Ding, Z.; Han, Y. Infrared characteristics of satellite based on bidirectional reflection distribution function. Infrared Phys. Technol. 2019, 97, 93–100.
15. Wang, H.; Chen, Y. Modeling and simulation of infrared dynamic characteristics of space-based space targets. Infrared Laser Eng. 2016, 45, 0504002.
16. Wang, F.; Eibert, T.F.; Jin, Y.-Q. Simulation of ISAR Imaging for a Space Target and Reconstruction under Sparse Sampling via Compressed Sensing. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3432–3441.
17. Schlutz, M. Synthetic Aperture Radar Imaging Simulated in MATLAB. Master's Thesis, California Polytechnic State University, San Luis Obispo, CA, USA, 2009.
18. Keller, M.; Kolb, A. Real-time simulation of time-of-flight sensors. Simul. Model. Pract. Theory 2009, 17, 967–978.
19. Keller, M.; Orthmann, J.; Kolb, A.; Peters, V. A simulation framework for time-of-flight sensors. In Proceedings of the 2007 International Symposium on Signals, Circuits and Systems, Iasi, Romania, 13–14 July 2007; pp. 1–4.
20. Meister, S.; Nair, R.; Kondermann, D. Simulation of Time-of-Flight Sensors using Global Illumination. In Proceedings of the VMV, Lugano, Switzerland, 11–13 September 2013; pp. 33–40.
21. Lambers, M.; Hoberg, S.; Kolb, A. Simulation of Time-of-Flight Sensors for Evaluation of Chip Layout Variants. IEEE Sens. J. 2015, 15, 4019–4026.
22. Bulczak, D.; Lambers, M.; Kolb, A. Quantified, Interactive Simulation of AMCW ToF Camera Including Multipath Effects. Sensors 2018, 18, 13.
23. Thoman, P.; Wippler, M.; Hranitzky, R.; Fahringer, T. RTX-RSim: Accelerated Vulkan room response simulation for time-of-flight imaging. In Proceedings of the International Workshop on OpenCL, Munich, Germany, 27–29 April 2020; pp. 1–11.
24. Cho, J.; Choi, J.; Kim, S.-J.; Park, S.; Shin, J.; Kim, J.D.K.; Yoon, E. A 3-D Camera With Adaptable Background Light Suppression Using Pixel-Binning and Super-Resolution. IEEE J. Solid-State Circuits 2014, 49, 2319–2332.
25. Shin, J.; Kang, B.; Lee, K.; Kim, J.D.K. A 3D image sensor with adaptable charge subtraction scheme for background light suppression. In Proceedings of the Sensors, Cameras, and Systems for Industrial and Scientific Applications XIV, Burlingame, CA, USA, 3–7 February 2013; p. 865907.
26. Davidovic, M.; Seiter, J.; Hofbauer, M.; Gaberl, W.; Zimmermann, H. A background light resistant TOF range finder with integrated PIN photodiode in 0.35 µm CMOS. In Proceedings of the Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, Munich, Germany, 13–16 May 2013; p. 87910R.
27. Davidovic, M.; Hofbauer, M.; Schneider-Hornstein, K.; Zimmermann, H. High dynamic range background light suppression for a TOF distance measurement sensor in 180 nm CMOS. In Proceedings of the IEEE SENSORS, Limerick, Ireland, 28–31 October 2011; pp. 359–362.
28. Davidovic, M.; Zach, G.; Schneider-Hornstein, K.; Zimmermann, H. TOF range finding sensor in 90 nm CMOS capable of suppressing 180 klx ambient light. In Proceedings of the IEEE SENSORS, Waikoloa, HI, USA, 1–4 November 2010; pp. 2413–2416.
29. Schmidt, M.; Jähne, B. A Physical Model of Time-of-Flight 3D Imaging Systems, Including Suppression of Ambient Light. In Workshop on Dynamic 3D Imaging; Springer: Berlin/Heidelberg, Germany, 2009; pp. 1–15.
30. Torrance, K.E.; Sparrow, E.M. Theory for Off-Specular Reflection from Roughened Surfaces. J. Opt. Soc. Am. 1967, 57, 1105–1114.
31. Hou, Q.; Zhi, X.; Zhang, H.; Zhang, W. Modeling and validation of spectral BRDF on material surface of space target. In International Symposium on Optoelectronic Technology and Application 2014: Optical Remote Sensing Technology and Applications; International Society for Optics and Photonics: Bellingham, WA, USA, 2014; p. 929914.
32. Sun, C.; Yuan, Y.; Zhang, X.; Wang, Q.; Zhou, Z. Research on the model of spectral BRDF for space target surface material. In Proceedings of the 2010 International Symposium on Optomechatronic Technologies, Toronto, ON, Canada, 25–27 October 2010; pp. 1–6.
33. Li, P.; Li, Z.; Xu, C. Measuring and Modeling the Bidirectional Reflection Distribution Function of Space Object's Surface Material. In Proceedings of the 3rd International Conference on Materials Engineering, Manufacturing Technology and Control (ICMEMTC 2016), Taiyuan, China, 27–28 February 2016.
34. Schwarte, R.; Xu, Z.; Heinol, H.-G.; Olk, J.; Klein, R.; Buxbaum, B.; Fischer, H.; Schulte, J. New electro-optical mixing and correlating sensor: Facilities and applications of the photonic mixer device (PMD). In Proceedings of the Sensors, Sensor Systems, and Sensor Data Processing, Munich, Germany, 16–17 June 1997; pp. 245–253.
35. Ringbeck, T.; Möller, T.; Hagebeuker, B. Multidimensional measurement by using 3-D PMD sensors. Adv. Radio Sci. 2007, 5, 135–146.
36. Conde, M.H. Compressive Sensing for the Photonic Mixer Device; Springer: Berlin/Heidelberg, Germany, 2017.
37. Kajiya, J.T. The rendering equation. In Proceedings of the 13th Annual Conference on Computer Graphics and Interactive Techniques, Dallas, TX, USA, 18–22 August 1986; pp. 143–150.
38. Pharr, M.; Jakob, W.; Humphreys, G. Physically Based Rendering: From Theory to Implementation; Morgan Kaufmann: Burlington, MA, USA, 2016.
39. Li, J.; Liu, Z. Image quality enhancement method for on-orbit remote sensing cameras using invariable modulation transfer function. Opt. Express 2017, 25, 17134–17149.
40. Sukumar, V.; Hess, H.L.; Noren, K.V.; Donohoe, G.; Ay, S. Imaging system MTF-modeling with modulation functions. In Proceedings of the 2008 34th Annual Conference of IEEE Industrial Electronics, Orlando, FL, USA, 10–13 November 2008; pp. 1748–1753.
41. Langmann, B.; Hartmann, K.; Loffeld, O. Increasing the accuracy of Time-of-Flight cameras for machine vision applications. Comput. Ind. 2013, 64, 1090–1098.
42. Langmann, B. Wide Area 2D/3D Imaging: Development, Analysis and Applications; Springer: Berlin/Heidelberg, Germany, 2014.
Figure 1. Schematic diagram of the continuous wave TOF system.
Figure 2. Schematic diagram of the BRDF geometry and the measuring instrument used in this paper. (a) Schematic diagram of the BRDF geometry; (b) the REFLET-180 BRDF measuring instrument.
Figure 3. Material samples and some measured BRDF data of the yellow thermal control material and silicon solar cells. The brighter the color, the greater the BRDF value. (a) Sample of thermal control material; (b) sample of silicon solar cells; (c) measured BRDF of thermal control material at $\theta_i$ = 30°; (d) measured BRDF of silicon solar cells at $\theta_i$ = 30°; (e) measured BRDF of thermal control material at $\theta_i$ = 60°; (f) measured BRDF of silicon solar cells at $\theta_i$ = 60°.
Figure 4. Measurement and model fitting results of BRDF at 850 nm for yellow thermal control material and silicon solar cells. (a) Fitting results of the BRDF model of yellow thermal control material; (b) fitting results of the BRDF model of silicon solar cells.
Figure 5. Physical radiation model of space targets.
Figure 6. The structure diagram of the PMD and the schematic diagram of the influence of background light on the PMD. (a) The structure diagram of the PMD; (b) the schematic diagram of the influence of background illumination on the PMD.
Figure 7. The response relationship of the A and B channels of the PMD pixel to the exposure time.
Figure 8. The SBI model. $K_A$ and $K_B$ represent the charge-to-voltage coefficients. $C_{DK}$ and $C_{D0}$ represent the charge difference coefficient and offset, respectively. $C_{AK}$ (or $C_{BK}$) and $C_{A0}$ (or $C_{B0}$) represent the compensation charge coefficient and offset, respectively.
Figure 9. AMCW TOF camera imaging simulation model.
Figure 10. The schematic diagram of direct lighting.
Figure 11. Radiative transfer between two points.
Figure 12. Mathematical model of the radiative transfer process.
Figure 13. Ground experiment scene. (a) Schematic diagram of the ground experiment scene; (b) actual ground experiment scene; (c) the Newport Oriel Sol3A Solar Simulator, Model 94123A.
Figure 14. The measured images of the ground experiment scene (the background area is eliminated) and the corresponding simulated images. (a) Measured grey image; (b) measured depth image; (c) grey image simulated by the method of Ref. [22]; (d) depth image simulated by the method of Ref. [22]; (e) simulated grey image of ours; (f) simulated depth image of ours. (c,d) only consider the influence of the active light of the TOF camera and do not consider the ambient solar light. (e,f) consider the process of the SBI model suppressing solar ambient illumination.
Figure 15. The simulated [22] and measured point cloud. (a) Original simulated and measured point clouds; (b) simulation and measured point cloud after denoising.
Table 1. Fitting results of BRDF model parameters at 850 nm.

Material                   $k_s$   $k_d$   $n$    $\sigma$   $F_0$   $e_{fit}$
Thermal control material   0.649   0.081   8.51   0.014      0.775   2.6%
Silicon solar cells        0.350   0.053   2.22   0.010      0.021   5.2%
Table 2. The performance indexes of the PMD CamBoard.

Indexes                        Value
Resolution                     224 × 171 pixels
Wavelength of light source     850 nm
Field angle                    62° × 45°
Focal length (f/dx, f/dy)      (208.33, 208.33)
Aperture                       2 mm
Acquisition time per frame     4.8 ms typical at 45 fps
Average power consumption      300 mW
Table 3. Some performance indexes of the Newport Oriel Sol3A Solar Simulator, Model 94123A.

Indexes                        Value
Illumination area              305 mm × 305 mm
Maximum angle of incidence     (half angle) < ±0.5°
Typical power output           100 mW/cm² (1 SUN), ±20% adjustable
Uniformity                     <±2%
Spectral match                 9.7–16.1% (800–900 nm)
Table 4. Other simulation parameters used.

Indexes                            Value
Size of satellite body             20 × 20 × 20 cm
Size of solar panel                63 × 35 cm
$d_{cam\text{-}sat}$               1.5 m
$\theta_{cam\text{-}panel}$        60°
$\theta_{solarSim\text{-}panel}$   10°
$n_{SBI,start}$                    36,500
Table 5. Comparison of indexes between measured images and simulated images.

Image   Index   Measured    Ref. [22]'s Results   Our Results   Ref. [22]'s Error   Our Error
Grey    Mean    17.79       16.73                 18.25         5.96%               2.59%
        Var     1411.67     1347.53               1358.06       4.54%               3.80%
        MSE     —           1788.35               1782.60       —                   —
        SSIM    —           0.70                  0.72          —                   —
        PSNR    —           15.60                 15.62         —                   —
Depth   Mean    403.04      750.28                476.75        86.16%              18.29%
        Var     2.95 × 10^5  4.15 × 10^5          3.38 × 10^5   40.68%              14.58%
        MSE     —           5.46 × 10^5           4.45 × 10^5   —                   —
        SSIM    —           0.80                  0.85          —                   —
        PSNR    —           38.95                 39.85         —                   —
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

