
Relative Motion Estimation Algorithm for Noncooperative Targets Considering Multiple Solutions of Rotational Parameters

1 School of Aeronautics and Astronautics, Shenzhen Campus of Sun Yat-sen University, Shenzhen 518107, China
2 Beijing Institute of Aerospace Control Devices, Beijing 100094, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(6), 1811; https://doi.org/10.3390/s24061811
Submission received: 19 September 2023 / Revised: 21 February 2024 / Accepted: 8 March 2024 / Published: 12 March 2024
(This article belongs to the Special Issue Sensors and Robots for Space Applications)

Abstract

On-orbit servicing using a space robot is gaining popularity in the space community for both economic and safety reasons. In particular, the estimation of the relative motion of a noncooperative target is a challenging problem. This study presents a stereovision-based relative motion estimation scheme for noncooperative targets that accounts for the multiple solutions of the rotational parameters. Specifically, the mass distribution of the target is identified based on the least-squares method and the principle of conservation of angular momentum. Then, a unique principal axis coordinate frame of the target is determined to resolve the multiple-solution problem. In addition, an extended Kalman filter (EKF)-based filter with global observability is designed to estimate the full motion states and inertia parameters of the target. The convergence performance of the proposed method is verified by numerical simulation. The results also demonstrate that the method is robust to occlusion.

1. Introduction

With intense space activity, an ever-growing number of malfunctioning spacecraft remain in orbit, seriously threatening the safety of operational spacecraft. On-orbit servicing (OOS) technology for repairing, refueling, and deorbiting these defunct spacecraft has attracted widespread interest in the last decade. Relative pose and motion estimation of the target to be serviced is a key technology in OOS missions. Frequently, these missions are considered cooperative. In this case, the state of the target can be measured through a global positioning system (GPS) and position-sensing diodes (PSDs) mounted on the target [1]. However, some defunct spacecraft are noncooperative targets; i.e., they are unable to actively (through a communication link) or passively (through an auxiliary marker) exchange information with the servicing spacecraft, which makes the cooperative architecture inapplicable [2]. Thus, relative motion estimation technology for noncooperative targets is urgently needed and remains challenging.
To address this issue, the servicing spacecraft has to detect the target remotely on its own. Recent work suggests that electro-optical (EO) sensors are the best option for relative motion estimation [3,4] when proximity operations with noncooperative targets are required. Active LIDAR (Light Detection and Ranging) systems and passive monocular/stereovision systems are typical EO sensors for space applications. A LIDAR system can acquire a 3D point cloud of the target, which can be used for motion estimation. The iterative closest point (ICP) algorithm may be the most popular approach for processing point clouds to track the pose of a target [4]. In [5], the pose is initialized by matching silhouette image template data with the LIDAR points. The templates are built offline, and the samples are restricted to the 2D attitude domain to simplify template matching. Aghili et al. [6] used the pose calculated through the ICP algorithm as the measurement for an extended Kalman filter and derived the covariance of the measurement noise. However, LIDAR systems have obvious drawbacks in terms of mass, power consumption, computational load, and hardware complexity, especially for servicing spacecraft with limited weight and energy budgets [7].
Comparatively, passive sensor-based approaches have received more attention for noncooperative close-proximity operations. These methods rely on features of the target surface, extracted from a sequence of images, to realize motion estimation. For a single-camera vision system, a model-based estimation architecture is proposed in [8], using line segments identified from the edges of the target in images. The pose is then computed by solving a feature-matching problem using the efficient perspective-n-point (PnP) method. Since satellite nozzles and docking rings are equivalent to spatial circles, several researchers select circle or ellipse features as the recognized objects to estimate the pose of the target [9,10]. However, the symmetry of these features results in ambiguity of the normal vector and the loss of one rotational degree of freedom [11]. Zhang et al. [12] exploited the elliptical cone model to determine the pose of the docking ring and addressed the duality by introducing images of a redundant nozzle. In [13], a convolutional neural network (CNN) was applied to monocular images for pose determination. Then, an unscented Kalman filter with adaptive process noise was designed to estimate the motion of the target. Nevertheless, reliable datasets with labeled images covering different motion states and illumination conditions are required for CNN training, which is costly for space applications. Because monocular vision offers bearing information only, it suffers from scale ambiguity in the position magnitude, which limits its application [14].
A stereovision system can acquire two perspective views of the features, and therefore the depth information can be recovered. Several studies have utilized stereovision to address motion estimation of uncooperative targets. In [15], the rectangle feature of the framework on the backboard was recognized by two collaborative cameras to realize pose measurement. Hu et al. [16] introduced extra line features to recover information on the roll angle around the circle normal. However, the above methods will not work if particular artificial features, e.g., rectangles, lines, or circles, are not attainable on the target. Point features always exist on a noncooperative target, making them ideal candidates for recognition, especially when no a priori knowledge of the target's structure or appearance is accessible [14]. An example of a point-based scheme in which the pose as well as the linear and angular velocity are estimated is shown in [1]. Segal et al. [17] built the observation model of a set of feature points based on the coupled translational–rotational kinematics. Several iterated EKFs with different inertia tensors are run in parallel, and the optimal one is determined by maximum a posteriori identification. Another work [18] reorganized the Euler equation and incorporated a pseudo-measurement equation into the observation model.
Since feature point measurements carry no direct information about the target's attitude, defining a target-fixed coordinate frame to describe the orientation of the target is a crucial aspect. According to recent research, the principal axis coordinate frame is typically preferred for motion estimation problems of a noncooperative target [6,18,19,20]. However, the principal axis coordinate frame is not unique for a rigid body. If a principal axis coordinate frame is chosen as a state to be estimated in the filter without any constraint, a multiple-solution problem arises for the angular velocity, the inertia matrix, and the coordinates of the features, because these values depend entirely on which coordinate frame is used to describe the target's rotation. In other words, multiple sets of these rotational parameters share the same measurement history, resulting in a lack of global observability of the estimation problem [21]. To the knowledge of the authors, the multiple-solution problem of rotational parameters has not been mentioned or investigated in the literature on relative navigation. Another aspect to take into account is that the rotation of the target inevitably leads to the occlusion of feature points. During the occlusion period, the estimates rely solely on the propagation of the dynamic model until these feature points become visible to the sensor again. In this circumstance, the filter may suffer serious convergence problems if global observability cannot be guaranteed, making it vulnerable to occlusion.
Motivated by this, this study develops a relative motion estimation algorithm for noncooperative targets that accounts for the multiple solutions of the rotational parameters. The method proposed herein depends only on the tracking of feature points using stereovision measurements; prior information about the geometric shape of the target is not required. The original contributions of our work are twofold: First, we propose a method to determine the attitude of the target that yields a unique solution for the principal axis coordinate frame. Second, we use an EKF along with the uniquely determined principal axis coordinate frame to guarantee global observability. In numerical simulation, the robustness of the algorithm to occlusion is presented and validated.
The rest of the article is organized as follows: Section 2 introduces the observation model of the stereovision system, as well as the dynamic model of the noncooperative target. Section 3 illustrates the multiple-solution problem of rotational parameters in detail and introduces the method for the determination of the principal axis coordinate frame. Section 4 formulates the EKF-based filtering scheme with a determined principal axis coordinate frame. Then, in Section 5, the simulation results are presented. Finally, a conclusion is drawn in Section 6.

2. Mathematical Model

The aim of the relative motion estimation problem is to estimate the relative translational and rotational motion states and the inertia parameters of a noncooperative target using the stereovision system mounted on the servicing satellite. In this section, the system model, namely, the measurement model of the stereovision system and the dynamic model of the target, is presented. Several coordinate systems are introduced to help describe these models.

2.1. Measurement Model of Stereovision

As depicted in Figure 1, a stereovision system is employed on the space robot to observe the target. A simplified measurement model is applied, characterized by two parallel image planes that are perpendicular to the optical axis. Let $C$ denote the sensor coordinate frame, which is attached to the center of projection of the left camera. The x axis of frame $C$ points to the center of projection of the right camera, the y axis is aligned with the optical axis, and the z axis obeys the right-hand rule. It is assumed that $N$ feature points on the surface of the target can be detected. The projections of the $i$-th feature point in the left and right image planes are denoted as $(u_i^l, v_i^l)$ and $(u_i^r, v_i^r)$, respectively. Then, the coordinates expressed in frame $C$ can be recovered using a pinhole camera model [17]:
$$\mathbf{m}_i = \begin{bmatrix} x_i \\ y_i \\ z_i \end{bmatrix} = \begin{bmatrix} z_i u_i^l / f \\ z_i v_i^l / f \\ f b / (u_i^l - u_i^r) \end{bmatrix} \tag{1}$$
where f is the focal length and b is the baseline length.
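As a minimal illustration of Equation (1) as reconstructed above, the following sketch recovers a feature point's coordinates from a pair of stereo projections, with the depth coordinate obtained from the horizontal disparity (Python/NumPy; the function name and numeric values are illustrative assumptions):

```python
import numpy as np

def triangulate(u_l, v_l, u_r, f, b):
    """Recover a feature point's coordinates in the sensor frame C
    from its stereo projections, following Equation (1).
    u_l, v_l: image coordinates in the left camera (same units as f);
    u_r: horizontal coordinate in the right camera; f: focal length;
    b: baseline length."""
    disparity = u_l - u_r
    z = f * b / disparity      # depth from disparity
    x = z * u_l / f
    y = z * v_l / f
    return np.array([x, y, z])

# Example with assumed values: f = 0.02 m, b = 0.5 m.
m = triangulate(0.001, 0.0005, -0.0005, 0.02, 0.5)
```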

2.2. Dynamic Model of Rotational Motion

In this article, the attitude of the target is parameterized using a quaternion. The kinematics of the quaternion is given as [22,23]
$$\dot{q}_I^G = \frac{1}{2} \begin{bmatrix} \boldsymbol{\omega}_{GI}^G \\ 0 \end{bmatrix} \otimes q_I^G \tag{2}$$
where $q_I^G$ describes the orientation of any target-fixed coordinate frame ($G$) with respect to the inertial coordinate frame, $I$; $\boldsymbol{\omega}_{GI}^G$ is the angular velocity of frame $G$ with respect to frame $I$ expressed in $G$; and $\otimes$ represents quaternion multiplication. When the target is regarded as a rigid body, the rotational dynamics are given as [6]
$$I \dot{\boldsymbol{\omega}} = -\boldsymbol{\omega} \times I \boldsymbol{\omega} + \mathbf{L} \tag{3}$$
where $I$ denotes the target's inertia matrix and $\mathbf{L}$ is the external disturbance torque.
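For concreteness, a minimal propagation sketch of Equations (2) and (3) is given below. The quaternion is scalar-last, the kinematics are written via the $\Xi(q)$ matrix that reappears later in Equation (16), and the function names and forward-Euler step are illustrative assumptions:

```python
import numpy as np

def skew(v):
    """Cross-product matrix [v x]."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def attitude_rates(q, w, inertia, torque):
    """Right-hand sides of Equations (2) and (3):
    q_dot = 0.5 * Xi(q) @ w, and rigid-body Euler dynamics."""
    qv, q4 = q[:3], q[3]
    Xi = np.vstack([q4 * np.eye(3) + skew(qv), -qv])
    q_dot = 0.5 * Xi @ w
    w_dot = np.linalg.solve(inertia, torque - np.cross(w, inertia @ w))
    return q_dot, w_dot

# Illustrative Euler step using the Section 5 target inertia:
q = np.array([0.0, 0.0, 0.0, 1.0])
w = np.array([0.1, 0.1, 0.1])            # rad/s
I = np.diag([8.0, 5.0, 4.0])             # kg m^2
q_dot, w_dot = attitude_rates(q, w, I, np.zeros(3))
q = q + 0.01 * q_dot
q /= np.linalg.norm(q)                   # restore unit norm
w = w + 0.01 * w_dot
```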

2.3. Dynamic Model of Translational Motion

Assuming that the servicing spacecraft moves in a circular orbit, the translational motion of the target in the Hill coordinate frame of the space robot (denoted by L ) can be described by the Hill equation [24]:
$$\begin{bmatrix} \dot{\mathbf{r}} \\ \dot{\mathbf{v}} \end{bmatrix} = \begin{bmatrix} 0_{3\times3} & I_{3\times3} \\ E_1 & E_2 \end{bmatrix} \begin{bmatrix} \mathbf{r} \\ \mathbf{v} \end{bmatrix} + \begin{bmatrix} 0_{3\times3} \\ I_{3\times3} \end{bmatrix} \mathbf{n}_t \tag{4}$$
where
$$E_1 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & -\omega_c^2 & 0 \\ 0 & 0 & 3\omega_c^2 \end{bmatrix}, \quad E_2 = \begin{bmatrix} 0 & 0 & -2\omega_c \\ 0 & 0 & 0 \\ 2\omega_c & 0 & 0 \end{bmatrix} \tag{5}$$
$\mathbf{r} = [x\ y\ z]^T$ and $\mathbf{v} = [\dot{x}\ \dot{y}\ \dot{z}]^T$ are the relative position and velocity of the target with respect to the servicing satellite expressed in frame $L$, and $\omega_c$ is the orbital angular velocity of the servicing spacecraft.
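A sketch of Equations (4) and (5) follows; it assembles the Hill-model system matrix and propagates the translational state over one sample interval with a matrix exponential. The helper name, the use of scipy, and the velocity signs are illustrative assumptions; the orbital rate and initial position are taken from Section 5:

```python
import numpy as np
from scipy.linalg import expm

def hill_system_matrix(w_c):
    """System matrix of Equation (4), with E1 and E2 as in Equation (5)."""
    E1 = np.diag([0.0, -w_c**2, 3.0 * w_c**2])
    E2 = np.array([[0.0,       0.0, -2.0 * w_c],
                   [0.0,       0.0,  0.0],
                   [2.0 * w_c, 0.0,  0.0]])
    return np.block([[np.zeros((3, 3)), np.eye(3)],
                     [E1,               E2]])

# Noise-free propagation of [r; v] over one 0.1 s measurement interval.
A = hill_system_matrix(0.0012)
state = np.array([10.0, 10.0, 10.0, 0.3889, 0.4932, 0.8264])
state_next = expm(A * 0.1) @ state
```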

3. Determination of Principal Axis Coordinate Frame

As mentioned in the Introduction, the principal axis coordinate frame, i.e., the target-fixed coordinate frame aligned with the principal axes of inertia, is preferable for describing the orientation of a noncooperative target and is often set as the state to be estimated in a filter. One reason is that the principal axis coordinate frame reflects the mass distribution of the target, which is useful information for the subsequent design of the capture strategy. Moreover, because the corresponding inertia matrix is diagonal, it reduces the number of unknown inertia parameters. Notice that there are different ways to define a principal axis coordinate frame. (Figure 2 shows a total of 24 principal axis coordinate frames for a rigid body. The principal axes of the target are represented by dashed lines.) Because the principal axes are determined only by the mass properties of the target, they usually cannot be directly measured by the optical sensor. In other words, the target-fixed frame, $G$, which is related to the visual measurements, does not coincide with a principal axis frame in general. As shown in Figure 2, the attitudes of these principal axis coordinate frames differ from each other, which further causes different values of the corresponding angular velocity, inertia matrix, and feature coordinates. These 24 sets of rotational parameters produce the same time history of measurements and lead to a multiple-solution problem. Consequently, the navigation filter loses global observability.
Motivated by this, an algorithm for the determination of a unique principal axis coordinate frame is proposed in this section. In this phase, three non-collinear feature points are exploited to define frame $G$, owing to the lack of direct pose measurements of the target. The coordinates of these points are computed based on Equation (1). Three orthogonal unit vectors, $\mathbf{c}_i$, can be obtained using the following equations:
$$\begin{aligned} \mathbf{c}_1 &= (\mathbf{m}_2 - \mathbf{m}_1) / \|\mathbf{m}_2 - \mathbf{m}_1\| \\ \mathbf{c}_3 &= (\mathbf{m}_2 - \mathbf{m}_1) \times (\mathbf{m}_3 - \mathbf{m}_1) / \|(\mathbf{m}_2 - \mathbf{m}_1) \times (\mathbf{m}_3 - \mathbf{m}_1)\| \\ \mathbf{c}_2 &= \mathbf{c}_3 \times \mathbf{c}_1 \end{aligned} \tag{6}$$
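Equation (6) translates directly into code (a minimal sketch; the helper name is an assumption, and stacking the returned vectors as rows yields the rotation matrix defined in the next equation):

```python
import numpy as np

def orthonormal_triad(m1, m2, m3):
    """Equation (6): three orthogonal unit vectors built from three
    non-collinear feature points expressed in the sensor frame C."""
    c1 = (m2 - m1) / np.linalg.norm(m2 - m1)
    n = np.cross(m2 - m1, m3 - m1)
    c3 = n / np.linalg.norm(n)
    c2 = np.cross(c3, c1)
    return c1, c2, c3   # rows of the rotation matrix R_C^G below
```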
Then, a target-fixed photogrammetric coordinate frame, G , is defined, which conforms to
$$R_C^G = [\mathbf{c}_1\ \mathbf{c}_2\ \mathbf{c}_3]^T \tag{7}$$
where R C G is the rotation matrix from frame C to frame G . The corresponding inertial matrix expressed in frame G is denoted as
$$I^G = \begin{bmatrix} I_{xx} & I_{xy} & I_{xz} \\ I_{xy} & I_{yy} & I_{yz} \\ I_{xz} & I_{yz} & I_{zz} \end{bmatrix} \tag{8}$$
Since these feature points are randomly distributed and selected, the products of inertia appear as nonzero off-diagonal elements of the inertia matrix in general. It is worth noting that only five inertia parameters are independent, because the inertia matrix will still conform to Equation (3) when multiplied by any constant. After being divided by its first element, the inertia matrix can be normalized as
$$\bar{I}^G = \begin{bmatrix} 1 & \bar{I}_{xy} & \bar{I}_{xz} \\ \bar{I}_{xy} & \bar{I}_{yy} & \bar{I}_{yz} \\ \bar{I}_{xz} & \bar{I}_{yz} & \bar{I}_{zz} \end{bmatrix} \tag{9}$$
and the constant inertia ratio vector is expressed as $\mathbf{l} = [\bar{I}_{yy}\ \bar{I}_{zz}\ \bar{I}_{xy}\ \bar{I}_{xz}\ \bar{I}_{yz}]^T$. According to attitude dynamics, the angular momentum of the target can be formulated as
$$\bar{I}^G \boldsymbol{\omega} = R_I^G \mathbf{H} \tag{10}$$
where $\mathbf{H} = [h_1\ h_2\ h_3]^T$ denotes the angular momentum expressed in frame $I$. It is assumed that the target is a torque-free tumbling rigid body, so the principle of conservation of angular momentum can be adopted to estimate the inertia parameters. In that case, $\mathbf{H}$ remains constant. Consequently, Equation (10) can be rewritten as the following linear equation:
$$a(t) \mathbf{X} = b(t) \tag{11}$$
where
$$a(t) = \begin{bmatrix} \Omega(t) & -R_I^G(t) \end{bmatrix}, \quad \Omega(t) = \begin{bmatrix} 0 & 0 & \omega_y(t) & \omega_z(t) & 0 \\ \omega_y(t) & 0 & \omega_x(t) & 0 & \omega_z(t) \\ 0 & \omega_z(t) & 0 & \omega_x(t) & \omega_y(t) \end{bmatrix}, \quad b(t) = \begin{bmatrix} -\omega_x(t) \\ 0 \\ 0 \end{bmatrix} \tag{12}$$
$\mathbf{X} = [\mathbf{l}^T\ \mathbf{H}^T]^T$. The unknown constant $\mathbf{X}$ is estimated by the least-squares method once observation data from different epochs are acquired:
$$\hat{\mathbf{X}} = (A^T A)^{-1} A^T B \tag{13}$$
where
$$A = \begin{bmatrix} a(t_1) \\ a(t_2) \\ \vdots \\ a(t_k) \end{bmatrix}, \quad B = \begin{bmatrix} b(t_1) \\ b(t_2) \\ \vdots \\ b(t_k) \end{bmatrix} \tag{14}$$
To estimate the angular velocity in Equation (12), we rewrite Equation (2) as
$$\boldsymbol{\omega} = 2 \Xi^T(q) \dot{q} \tag{15}$$
where
$$\Xi(q) \equiv \begin{bmatrix} q_4 I_3 + [\mathbf{q}_{1:3} \times] \\ -\mathbf{q}_{1:3}^T \end{bmatrix} \tag{16}$$
The angular velocity can be approximated from the numerical differentiation of q [25].
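Under these assumptions, the identification step of Equations (11)-(15) can be sketched as follows. The function names, the finite-difference quaternion differentiation, and the use of numpy.linalg.lstsq in place of the explicit normal equations of Equation (13) are illustrative assumptions:

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def omega_from_quats(q, q_prev, dt):
    """Equation (15): w = 2 Xi(q)^T q_dot, with q_dot approximated by
    finite differences of scalar-last quaternions [25]."""
    q_dot = (q - q_prev) / dt
    Xi = np.vstack([q[3] * np.eye(3) + skew(q[:3]), -q[:3]])
    return 2.0 * Xi.T @ q_dot

def ls_blocks(w, R_IG):
    """One epoch of Equation (12), with the unknowns ordered as
    X = [Iyy, Izz, Ixy, Ixz, Iyz, h1, h2, h3] (normalized)."""
    wx, wy, wz = w
    Omega = np.array([[0.0, 0.0, wy,  wz,  0.0],
                      [wy,  0.0, wx,  0.0, wz],
                      [0.0, wz,  0.0, wx,  wy]])
    return np.hstack([Omega, -R_IG]), np.array([-wx, 0.0, 0.0])

def identify_inertia(w_hist, R_hist):
    """Stack the epochs of Equation (14) and solve Equation (13)."""
    blocks = [ls_blocks(w, R) for w, R in zip(w_hist, R_hist)]
    A = np.vstack([a for a, _ in blocks])
    B = np.concatenate([b for _, b in blocks])
    X, *_ = np.linalg.lstsq(A, B, rcond=None)
    return X[:5], X[5:]   # inertia ratio vector l, momentum H
```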
The resulting $\hat{\bar{I}}^G$ relates the mass distribution of the target to the stereovision measurements. Therefore, the orientation of a principal axis coordinate frame, $T$, can be determined through orthogonal diagonalization of $\hat{\bar{I}}^G$:
$$R_G^T \hat{\bar{I}}^G (R_G^T)^T = [\mathbf{r}_1\ \mathbf{r}_2\ \mathbf{r}_3]^T \hat{\bar{I}}^G [\mathbf{r}_1\ \mathbf{r}_2\ \mathbf{r}_3] = \begin{bmatrix} \lambda_1 & & \\ & \lambda_2 & \\ & & \lambda_3 \end{bmatrix} \tag{17}$$
where $\lambda_i$ are the eigenvalues of $\hat{\bar{I}}^G$. The corresponding eigenvectors, $\mathbf{r}_i$, are the column vectors of the rotation matrix from frame $T$ to frame $G$. Note that multiple solutions of frame $T$ exist, resulting from the selection of $\mathbf{r}_i$, as shown in Figure 2. Therefore, an approach to uniquely determine frame $T$ is proposed as follows: the $\mathbf{r}_i$ are ordered so that the inequality constraint $\lambda_1 > \lambda_2 > \lambda_3$ is satisfied, and the first element of $\mathbf{r}_1$ is set to be positive. In fact, these conditions are equivalent to the constraints that the x axis and z axis of frame $T$ are aligned with the principal axes of the largest and smallest moments of inertia, respectively, and that the x axis of frame $T$ forms an acute angle with the x axis of frame $G$. In this way, the principal axis coordinate frame, $T$, can be uniquely determined.
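A sketch of this determination step is given below. Note that the two constraints in the text are supplemented here by an explicit sign convention on $\mathbf{r}_2$, an assumption of this sketch, so that the right-handed triad is fully fixed in code:

```python
import numpy as np

def unique_principal_frame(I_bar):
    """Diagonalize the identified (normalized) inertia matrix as in
    Equation (17) and fix frame T uniquely: eigenvalues ordered
    lambda1 > lambda2 > lambda3, first element of r1 positive, and
    the triad kept right-handed by recomputing r3 = r1 x r2."""
    lam, R = np.linalg.eigh(I_bar)        # symmetric eigendecomposition
    order = np.argsort(lam)[::-1]         # descending eigenvalues
    lam, R = lam[order], R[:, order]
    if R[0, 0] < 0:                       # acute angle with x axis of G
        R[:, 0] = -R[:, 0]
    if R[1, 1] < 0:                       # extra sign convention on r2
        R[:, 1] = -R[:, 1]                # (sketch assumption)
    R[:, 2] = np.cross(R[:, 0], R[:, 1])  # enforce right-handedness
    return R, lam                         # columns r_i map frame T to G
```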

4. Extended Kalman Filter with Determined Principal Axis Coordinate Frame

In our implementation, an EKF-based scheme is employed, using the estimated attitude of frame $T$ and the observed feature points, to estimate the rotational and translational motion of the target. Meanwhile, the orientation of frame $T$ is set as the rotational state of the target to be filtered. Because frame $T$ has been uniquely defined and a rough estimate is directly acquired in Section 3, the multiple-solution problem is resolved.
We denote the inertia matrix expressed in the principal axis coordinate frame, $T$, as
$$I = \begin{bmatrix} I_x & 0 & 0 \\ 0 & I_y & 0 \\ 0 & 0 & I_z \end{bmatrix} \tag{18}$$
which is parameterized in a similar way as in [6]:
$$p_x = \frac{I_y - I_z}{I_x}, \quad p_y = \frac{I_z - I_x}{I_y}, \quad p_z = \frac{I_x - I_y}{I_z} \tag{19}$$
where $\mathbf{p} = [p_x\ p_y\ p_z]^T$ is the inertia ratio vector, with
$$\dot{\mathbf{p}} = 0 \tag{20}$$
Therefore, Euler dynamics (3) can be rewritten in terms of the inertia ratio as
$$\dot{\boldsymbol{\omega}} = K(\boldsymbol{\omega}) + J(\mathbf{p}) \mathbf{n}_r \tag{21}$$
where
$$K(\boldsymbol{\omega}) = \begin{bmatrix} p_x \omega_y \omega_z \\ p_y \omega_x \omega_z \\ p_z \omega_x \omega_y \end{bmatrix}, \quad J(\mathbf{p}) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \dfrac{1 - p_y}{1 + p_x} & 0 \\ 0 & 0 & \dfrac{1 + p_z}{1 - p_x} \end{bmatrix} \tag{22}$$
where $\mathbf{n}_r$ is the disturbance torque.
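The inertia-ratio form of Equations (19)-(22) can be checked numerically, as in the sketch below; the target inertia values are taken from Section 5, and the identity $J(\mathbf{p}) = \mathrm{diag}(1, I_x/I_y, I_x/I_z)$ used in the assertion follows algebraically from Equation (19):

```python
import numpy as np

def K(w, p):
    """Gyroscopic term of Equation (22)."""
    wx, wy, wz = w
    px, py, pz = p
    return np.array([px * wy * wz, py * wx * wz, pz * wx * wy])

def J(p):
    """Input matrix of Equation (22), i.e., diag(1, Ix/Iy, Ix/Iz)
    written purely in terms of the inertia ratios."""
    px, py, pz = p
    return np.diag([1.0,
                    (1.0 - py) / (1.0 + px),
                    (1.0 + pz) / (1.0 - px)])

# Consistency check with the Section 5 target, I = diag(8, 5, 4):
Ix, Iy, Iz = 8.0, 5.0, 4.0
p = np.array([(Iy - Iz) / Ix, (Iz - Ix) / Iy, (Ix - Iy) / Iz])
assert np.allclose(np.diag(J(p)), [1.0, Ix / Iy, Ix / Iz])
```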
Let $\mathbf{f}_i$ denote the coordinates of the $i$-th feature point in frame $T$, which are constant, i.e.,
$$\dot{\mathbf{f}}_i = 0 \tag{23}$$
The state vector to be estimated by the filter is therefore
$$\mathbf{x} = [\,(q_I^T)^T\ \ \boldsymbol{\omega}^T\ \ \mathbf{p}^T\ \ \mathbf{r}^T\ \ \mathbf{v}^T\ \ \mathbf{f}_1^T\ \ \mathbf{f}_2^T\ \cdots\ \mathbf{f}_N^T\,]^T \tag{24}$$
From Equations (2), (4), (20), (21) and (23), the system model is described by
$$\dot{\mathbf{x}} = f(\mathbf{x}) \tag{25}$$
Considering the composition rules of the quaternion, the error-state vector is given by
$$\Delta \mathbf{x} = [\,\delta\boldsymbol{\theta}^T\ \ \delta\boldsymbol{\omega}^T\ \ \delta\mathbf{p}^T\ \ \delta\mathbf{r}^T\ \ \delta\mathbf{v}^T\ \ \delta\mathbf{f}_1^T\ \ \delta\mathbf{f}_2^T\ \cdots\ \delta\mathbf{f}_N^T\,]^T \tag{26}$$
For the angular velocity, inertia ratio, position, velocity, and coordinates of feature points, the error in the estimate $\hat{b}$ of a state $b$ is defined as $\delta b = b - \hat{b}$. For the quaternion, the error is parameterized using a rotation vector, $\delta\boldsymbol{\theta}$, which satisfies
$$q = \begin{bmatrix} \frac{1}{2}\delta\boldsymbol{\theta} \\ \sqrt{1 - \left\|\frac{1}{2}\delta\boldsymbol{\theta}\right\|^2} \end{bmatrix} \otimes \hat{q} \tag{27}$$
The linearized continuous-time model for the error states is obtained by retaining only the first-order terms of the Taylor expansion of Equation (25) around the current estimate [22]:
$$\Delta\dot{\mathbf{x}} = F \Delta\mathbf{x} + G \mathbf{n} \tag{28}$$
where
$$F = \begin{bmatrix} -[\hat{\boldsymbol{\omega}}\times] & I_3 & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3N} \\ 0_{3\times3} & M(\hat{\boldsymbol{\omega}},\hat{\mathbf{p}}) & N(\hat{\boldsymbol{\omega}}) & 0_{3\times3} & 0_{3\times3} & 0_{3\times3N} \\ 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3N} \\ 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & I_{3\times3} & 0_{3\times3N} \\ 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & E_1 & E_2 & 0_{3\times3N} \\ 0_{3N\times3} & 0_{3N\times3} & 0_{3N\times3} & 0_{3N\times3} & 0_{3N\times3} & 0_{3N\times3N} \end{bmatrix}, \quad G = \begin{bmatrix} 0_{3\times3} & 0_{3\times3} \\ J(\hat{\mathbf{p}}) & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & I_3 \\ 0_{3N\times3} & 0_{3N\times3} \end{bmatrix} \tag{29}$$
$\mathbf{n} = [\mathbf{n}_r^T\ \mathbf{n}_t^T]^T$, with
$$M(\hat{\boldsymbol{\omega}}, \hat{\mathbf{p}}) = \begin{bmatrix} 0 & \hat{p}_x \hat{\omega}_z & \hat{p}_x \hat{\omega}_y \\ \hat{p}_y \hat{\omega}_z & 0 & \hat{p}_y \hat{\omega}_x \\ \hat{p}_z \hat{\omega}_y & \hat{p}_z \hat{\omega}_x & 0 \end{bmatrix}, \quad N(\hat{\boldsymbol{\omega}}) = \begin{bmatrix} \hat{\omega}_y \hat{\omega}_z & 0 & 0 \\ 0 & \hat{\omega}_x \hat{\omega}_z & 0 \\ 0 & 0 & \hat{\omega}_x \hat{\omega}_y \end{bmatrix} \tag{30}$$
The covariance of the process noise, $\mathbf{n}$, is denoted by $Q$.
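A block-assembly sketch of Equation (29) follows, mirroring the error-state ordering of Equation (26). The function names are illustrative assumptions, and all quantities are evaluated at the current estimates:

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def J(p):
    """Input matrix of Equation (22)."""
    px, py, pz = p
    return np.diag([1.0, (1.0 - py) / (1.0 + px),
                         (1.0 + pz) / (1.0 - px)])

def error_state_jacobians(w, p, E1, E2, n_feat):
    """Assemble F and G of Equation (29) for the (15 + 3N)-dimensional
    error state of Equation (26)."""
    n = 15 + 3 * n_feat
    F = np.zeros((n, n))
    F[0:3, 0:3] = -skew(w)                      # attitude-error block
    F[0:3, 3:6] = np.eye(3)
    wx, wy, wz = w
    px, py, pz = p
    F[3:6, 3:6] = np.array([[0.0,     px * wz, px * wy],
                            [py * wz, 0.0,     py * wx],
                            [pz * wy, pz * wx, 0.0]])   # M(w, p)
    F[3:6, 6:9] = np.diag([wy * wz, wx * wz, wx * wy])  # N(w)
    F[9:12, 12:15] = np.eye(3)                  # position/velocity
    F[12:15, 9:12] = E1
    F[12:15, 12:15] = E2
    G = np.zeros((n, 6))
    G[3:6, 0:3] = J(p)                          # torque noise channel
    G[12:15, 3:6] = np.eye(3)                   # orbital force noise
    return F, G
```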
As mentioned above, the estimated attitude of frame $T$ from Section 3 offers a direct observation of the attitude state within the filter, the measurement model of which is simply
$$\mathbf{y}_0 = q_I^T = q_C^T \otimes q_I^C \tag{31}$$
where $q_I^C$ is computed based on the attitude installation matrix of the stereovision system and the attitude estimate of the servicing spacecraft. Meanwhile, the coordinates of the $i$-th feature point in frame $L$ satisfy
$$\mathbf{y}_i = \mathbf{r} + R_I^L R_T^I \mathbf{f}_i \tag{32}$$
where $R_I^L$ is obtained from the absolute orbit determination of the servicing spacecraft. Putting all the components together, the observation model of the filter is defined as
$$\mathbf{y} = h(\mathbf{x}) = [\,\mathbf{y}_0^T\ \ \mathbf{y}_1^T\ \ \mathbf{y}_2^T\ \cdots\ \mathbf{y}_N^T\,]^T \tag{33}$$
The error measurement model is approximated by linearizing Equation (33):
$$\Delta\mathbf{y} = H \Delta\mathbf{x} + \boldsymbol{\eta} \tag{34}$$
where $\boldsymbol{\eta}$ is the measurement noise and the measurement sensitivity matrix, $H$, is given as
$$H = \begin{bmatrix} I_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & \cdots & 0_{3\times3} \\ -\hat{R}_T^L[\hat{\mathbf{f}}_1\times] & 0_{3\times3} & 0_{3\times3} & I_{3\times3} & 0_{3\times3} & \hat{R}_T^L & \cdots & 0_{3\times3} \\ \vdots & \vdots & \vdots & \vdots & \vdots & & \ddots & \\ -\hat{R}_T^L[\hat{\mathbf{f}}_N\times] & 0_{3\times3} & 0_{3\times3} & I_{3\times3} & 0_{3\times3} & 0_{3\times3} & \cdots & \hat{R}_T^L \end{bmatrix} \tag{35}$$
In this study, a continuous–discrete type of EKF is employed to solve the relative motion estimation problem. The implementation of the EKF is based on two processes: prediction and update.
In the prediction step, the state estimate ($\hat{\mathbf{x}}$) and error covariance ($P$) are propagated over the time interval $(t_k, t_{k+1})$ through
$$\hat{\mathbf{x}}_{k+1}^- = \hat{\mathbf{x}}_k^+ + \int_{t_k}^{t_{k+1}} f(\mathbf{x}(t))\, dt \tag{36}$$
$$P_{k+1}^- = \Phi(t_{k+1}, t_k) P_k^+ \Phi^T(t_{k+1}, t_k) + Q_k \tag{37}$$
The error-state transition matrix $\Phi(t_{k+1}, t_k)$ can be computed by numerical integration of the following differential equation:
$$\dot{\Phi}(\tau, t_k) = F(\tau) \Phi(\tau, t_k) \tag{38}$$
with the initial condition $\Phi(t_k, t_k) = I$. $Q_k$ is the discrete covariance of the system noise and is calculated by [26]
$$Q_k = \int_{t_k}^{t_{k+1}} \Phi(t_{k+1}, \tau)\, G Q G^T\, \Phi^T(t_{k+1}, \tau)\, d\tau \tag{39}$$
Once the measurement is available, the error state, $\Delta\mathbf{x}$, and the corresponding covariance are corrected according to
$$\Delta\hat{\mathbf{x}}_{k+1} = K_{k+1} \Delta\mathbf{y}_{k+1} \tag{40}$$
$$P_{k+1}^+ = (I - K_{k+1} H_{k+1}) P_{k+1}^- (I - K_{k+1} H_{k+1})^T + K_{k+1} R_{k+1} K_{k+1}^T \tag{41}$$
where $R_{k+1}$ is the covariance of the measurement noise and $K_{k+1}$ is the Kalman gain:
$$K_{k+1} = P_{k+1}^- H_{k+1}^T \left( H_{k+1} P_{k+1}^- H_{k+1}^T + R_{k+1} \right)^{-1} \tag{42}$$
The estimates of the filter states, except the attitude, can be updated as
$$\hat{\mathbf{x}}_{k+1}^+ = \hat{\mathbf{x}}_{k+1}^- + \Delta\hat{\mathbf{x}}_{k+1} \tag{43}$$
To satisfy the unit-norm constraint, the quaternion update is realized through
$$\hat{q}_{k+1}^+ = \begin{bmatrix} \frac{1}{2}\delta\hat{\boldsymbol{\theta}}_{k+1} \\ \sqrt{1 - \left\|\frac{1}{2}\delta\hat{\boldsymbol{\theta}}_{k+1}\right\|^2} \end{bmatrix} \otimes \hat{q}_{k+1}^- \tag{44}$$
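Putting the update equations together, one correction step might look as follows. This is a sketch under the state layout of Equation (24): the quaternion composition matches the convention of Equation (2), and the function names are hypothetical:

```python
import numpy as np

def quat_mult(a, b):
    """Scalar-last quaternion composition in the successive-rotation
    convention of Equation (2) (as in [22])."""
    av, aw = a[:3], a[3]
    bv, bw = b[:3], b[3]
    return np.concatenate([aw * bv + bw * av - np.cross(av, bv),
                           [aw * bw - np.dot(av, bv)]])

def ekf_update(x_pred, P_pred, H, R, dy):
    """Equations (40)-(44): Kalman gain, Joseph-form covariance
    update, additive update of the non-attitude states, and a
    multiplicative quaternion correction. x stores the quaternion
    in x[:4]; dy is the measurement residual of Equation (40)."""
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)          # Equation (42)
    dx = K @ dy                                  # Equation (40)
    IKH = np.eye(P_pred.shape[0]) - K @ H
    P = IKH @ P_pred @ IKH.T + K @ R @ K.T       # Equation (41)
    dth = dx[0:3]                                # rotation-vector error
    dq = np.concatenate([0.5 * dth,
                         [np.sqrt(max(0.0, 1.0 - 0.25 * dth @ dth))]])
    x = x_pred.copy()
    x[:4] = quat_mult(dq, x_pred[:4])            # Equation (44)
    x[:4] /= np.linalg.norm(x[:4])
    x[4:] += dx[3:]                              # Equation (43)
    return x, P
```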

5. Numerical Simulation

In this section, numerical simulations are carried out to verify the proposed relative motion estimation method. Specifically, the objectives of the simulation experiments are to (a) evaluate the validity of the determination of the principal axis coordinate frame and (b) investigate the estimation performance of the EKF-based filter with a uniquely determined attitude.
It is assumed that the servicing spacecraft is in a circular orbit with a radius of 6800 km, and thus the orbital angular rate is 0.0012 rad/s. The initial relative position and velocity of the target with respect to the servicing spacecraft are set as $\mathbf{r}_0 = [10\ 10\ 10]^T$ m and $\mathbf{v}_0 = [0.3889\ 0.4932\ 0.8264]^T$ m/s. In this simulation, a microsatellite mentioned in [6] is selected as the target spacecraft, which has the inertia matrix $\mathrm{diag}(8, 5, 4)\ \mathrm{kg\,m^2}$. The initial attitude of the target, parameterized by the quaternion, is given as $q_0 = [0\ 0\ 0\ 1]^T$. According to [27], a malfunctioning spacecraft may tumble at a rate varying greatly, from 2.9 deg/s to 36 deg/s. Based on this, a typical value of $\boldsymbol{\omega}_0 = [0.1\ 0.1\ 0.1]^T$ rad/s is considered as the initial angular velocity. We assume that the coordinates of the feature points expressed in frame $T$ are subject to a uniform distribution with lower and upper bounds of -1.5 m and 1.5 m, the same as in [17]. Six feature points, instead of ten, are assumed to be measured in order to test the proposed method under extreme conditions. The measurements are generated at a rate of 10 Hz (similarly to [28]). Due to the self-occlusion of the target caused by rotation, these feature points will inevitably be unobservable during some time intervals. The availability of a feature point can be identified by evaluating whether the angle between the observation direction and the normal direction is acute [29], as in the sketch below. However, this criterion is reliable only when the body is convex. For simplicity, a hypothesis of a fixed occlusion period is made, which is sufficient for the purpose of this study. In view of the magnitude of the initial angular rate, $\|\boldsymbol{\omega}_0\| \approx 10$ deg/s, we consider a complete unavailability of measurements within 20 s, about half of the rotation period of the target. Figure 3 illustrates this concept. In addition, Figure 4 shows the visible trajectories of these six feature points.
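The visibility test mentioned above reduces to a dot-product check (a sketch, valid for convex bodies; the vector names are illustrative):

```python
import numpy as np

def is_visible(view_dir, normal):
    """A feature point is treated as visible when the angle between
    the direction toward the camera and the outward surface normal
    is acute, i.e., their dot product is positive [29]."""
    return float(np.dot(view_dir, normal)) > 0.0
```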
(1) Results of principal axis coordinate determination
This section presents the results of the principal axis coordinate frame determination. To evaluate the influence of the measurement noise level, we consider stereovision systems with angular resolutions of $0.1\times10^{-5}$ rad, $0.5\times10^{-5}$ rad, $2\times10^{-5}$ rad, and $5\times10^{-5}$ rad, similar to the setup used in [30]. The root mean square error (RMSE) is applied as the metric of the algorithm, which is defined as
$$\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \mathbf{e}_i^T \mathbf{e}_i} \tag{45}$$
where $\mathbf{e}_i$ is the estimation error in the $i$-th run and $N$ is the number of Monte Carlo runs.
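Equation (45) translates directly into code (a sketch; the array layout is an assumption):

```python
import numpy as np

def rmse(errors):
    """Equation (45): errors is an (N, d) array whose i-th row is the
    estimation-error vector e_i of the i-th Monte Carlo run."""
    errors = np.atleast_2d(errors)
    return float(np.sqrt(np.mean(np.sum(errors**2, axis=1))))
```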
The RMSE of the estimated attitude of the principal axis coordinate frame over 500 Monte Carlo runs is shown in Figure 5 for different noise levels. As seen, all RMSE curves converge within a few seconds with acceptable performance, even before the occurrence of occlusion. The noise level clearly has a strong influence on the accuracy and convergence performance: a higher noise level implies lower accuracy and slower convergence. This is because the determination of the principal axis coordinate frame is based on the identification and diagonalization of the inertia matrix, the accuracy of which is directly related to the measurement accuracy. Overall, it is safe to state that the principal axis coordinate frame can be determined uniquely and with relatively high accuracy through the proposed algorithm, and the result is used during the subsequent filtering phase.
(2) Results of motion estimation
In this section, the EKF with the determination of the principal axis coordinate frame (DPACF) is implemented to estimate the motion of the target. The covariances of the orbital force and disturbance torque are set as $\sigma_t^2 = 2\times10^{-6} I_3\ \mathrm{m^2/s^4}$ and $\sigma_r^2 = 2.5\times10^{-5} I_3\ \mathrm{rad^2/s^4}$, the same as in [6]. Note that during the occlusion period shown in Figure 3, the states and parameters are predicted by relying solely on their latest values and the propagation of the dynamics. We evaluate the performance of the proposed scheme by comparing it with the method in [6], which does not employ DPACF. The simulation time is 200 s.
Figure 6, Figure 7, Figure 8 and Figure 9 show the estimation errors of the rotational parameters, i.e., the attitude angle, angular velocity, inertia ratio, and coordinates of the feature points. It is evident from these figures that the estimation errors of the method without DPACF are characterized by significant nonzero-mean deviations throughout the whole process. This is because, due to the multiple solutions of the principal axis coordinate frame, the rotational parameters are not unique; in other words, the filter is not globally observable. Moreover, when the feature points are missing during occlusion, the process noise causes the estimates to deviate from the solution toward which they were converging. Therefore, there is no guarantee that the filter will converge to the previous, or any specific, solution among the 24 candidates in the next estimation phase when the measurements become available again, which results in the fluctuation of the estimates. This makes the estimation process fragile to occlusion. Comparatively, the error curves of our method converge within a short period and finally remain within a small neighborhood of zero. This mainly benefits from the determination of the principal axis coordinate frame, which forces the filter to converge to the unique solution defined in Section 3. The estimation errors of the relative position and velocity are shown in Figure 10 and Figure 11. It can be seen that the divergence of the rotational parameters even causes the translational states to deviate, although there is no multiple-solution problem in these states. In conclusion, the proposed method removes the multiple-solution problem and exhibits robustness to the inevitable occlusion.

6. Conclusions

A relative motion estimation scheme for noncooperative targets considering the multiple solutions of the rotational parameters is presented in this paper. The scheme depends on the determination of the principal axis coordinate frame before the implementation of the filter. Through the identification of the mass distribution of the target and the diagonalization of the normalized inertia matrix, the multiple-solution problem of the rotational parameters is eliminated and the global observability of the estimation problem is guaranteed. The results demonstrate good convergence performance and robustness to occlusion, owing to the new filter structure with a unique solution. Such an algorithm improves the estimation capability and provides a feasible solution for the relative motion estimation problem of future OOS missions.

Author Contributions

Conceptualization, Q.H. and F.M.; writing—original draft, Q.H. and S.W.; writing—review and editing, S.W. and F.M.; supervision, Z.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Natural Science Foundation of China (62388101, 91748203, and 11972102), the Shenzhen Science and Technology Program (grant nos. 202206193000001 and 20220816231330001), and the State Key Laboratory of Robotics and Systems (HIT) (SKLRS-2022-KF-08).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Feng, Q.; Zhu, Z.H.; Pan, Q.; Liu, Y. Pose and motion estimation of unknown tumbling spacecraft using stereoscopic vision. Adv. Space Res. 2018, 62, 359–369.
2. Nocerino, A.; Opromolla, R.; Fasano, G.; Grassi, M.; Fontdegloria Balaguer, P.; John, S.; Cho, H.; Bevilacqua, R. Experimental validation of inertia parameters and attitude estimation of uncooperative space targets using solid state LIDAR. Acta Astronaut. 2023, 210, 428–436.
3. Pasqualetto Cassinis, L.; Fonod, R.; Gill, E. Review of the robustness and applicability of monocular pose estimation systems for relative navigation with an uncooperative spacecraft. Prog. Aerosp. Sci. 2019, 110, 100548.
4. Opromolla, R.; Fasano, G.; Rufino, G.; Grassi, M. A review of cooperative and uncooperative spacecraft pose determination techniques for close-proximity operations. Prog. Aerosp. Sci. 2017, 93, 53–72.
5. Woods, J.O.; Christian, J.A. Lidar-based relative navigation with respect to non-cooperative objects. Acta Astronaut. 2016, 126, 298–311.
6. Aghili, F.; Parsa, K. Motion and Parameter Estimation of Space Objects Using Laser-Vision Data. J. Guid. Control. Dyn. 2009, 32, 538–550.
7. Zhu, C.; Gao, Z.; Zhao, J.; Long, H.; Liu, C. Weighted total least squares–Bayes filter-based estimation of relative pose for a space non-cooperative unknown target without a priori knowledge. Meas. Sci. Technol. 2021, 33, 025004.
8. Sharma, S.; Ventura, J.; D’Amico, S. Robust Model-Based Monocular Pose Initialization for Noncooperative Spacecraft Rendezvous. J. Spacecr. Rocket. 2018, 55, 1414–1429.
9. Peng, J.; Xu, W.; Yan, L.; Pan, E.; Liang, B.; Wu, A.-G. A Pose Measurement Method of a Space Noncooperative Target Based on Maximum Outer Contour Recognition. IEEE Trans. Aerosp. Electron. Syst. 2020, 56, 512–526.
10. Peng, J.; Xu, W.; Liang, B.; Wu, A.-G. Virtual Stereovision Pose Measurement of Noncooperative Space Targets for a Dual-Arm Space Robot. IEEE Trans. Instrum. Meas. 2020, 69, 76–88.
11. Huang, Y.F.; Zhang, Z.X.; Cui, H.T.; Zhang, L. A low-dimensional binary-based descriptor for unknown satellite relative pose estimation. Acta Astronaut. 2021, 181, 427–438.
12. Zhang, D.; Yang, G.; Ji, J.; Fan, S.; Jin, M.; Liu, Q. Pose measurement and motion estimation of non-cooperative satellite based on spatial circle feature. Adv. Space Res. 2023, 71, 1721–1734.
13. Park, T.H.; D’Amico, S. Adaptive Neural-Network-Based Unscented Kalman Filter for Robust Pose Tracking of Noncooperative Spacecraft. J. Guid. Control. Dyn. 2023, 46, 1671–1688.
14. Jiang, C.; Hu, Q. Constrained Kalman filter for uncooperative spacecraft estimation by stereovision. Aerosp. Sci. Technol. 2020, 106, 106133.
15. Du, X.; Liang, B.; Xu, W.; Qiu, Y. Pose measurement of large non-cooperative satellite based on collaborative cameras. Acta Astronaut. 2011, 68, 2047–2065.
16. Hu, Q.; Jiang, C. Relative Stereovision-Based Navigation for Noncooperative Spacecraft via Feature Extraction. IEEE/ASME Trans. Mechatron. 2021, 27, 2942–2952.
17. Segal, S.; Carmi, A.; Gurfil, P. Stereovision-Based Estimation of Relative Dynamics Between Noncooperative Satellites: Theory and Experiments. IEEE Trans. Control Syst. Technol. 2014, 22, 568–584.
18. Pesce, V.; Opromolla, R.; Sarno, S.; Lavagna, M.; Grassi, M. Autonomous relative navigation around uncooperative spacecraft based on a single camera. Aerosp. Sci. Technol. 2019, 84, 1070–1080.
19. Pesce, V.; Lavagna, M.; Bevilacqua, R. Stereovision-based pose and inertia estimation of unknown and uncooperative space objects. Adv. Space Res. 2017, 59, 236–251.
20. Li, Y.; Wang, Y.; Xie, Y. Using consecutive point clouds for pose and motion estimation of tumbling non-cooperative target. Adv. Space Res. 2019, 63, 1576–1587.
21. Christian, J.A. Relative Navigation Using Only Intersatellite Range Measurements. J. Spacecr. Rocket. 2017, 54, 13–28.
22. Markley, F.L.; Crassidis, J.L. Fundamentals of Spacecraft Attitude Determination and Control; Springer: New York, NY, USA, 2014.
23. Zeng, X.; Li, Z.; Gan, Q.; Circi, C. Numerical Study on Low-Velocity Impact between Asteroid Lander and Deformable Regolith. J. Guid. Control. Dyn. 2022, 45, 1644–1660.
24. Alfriend, K.T.; Vadali, S.R.; Gurfil, P.; How, J.P.; Breger, L.S. Spacecraft Formation Flying: Dynamics, Control and Navigation; Elsevier: Amsterdam, The Netherlands, 2009.
25. Bar-Itzhack, I.Y. Classification of algorithms for angular velocity estimation. J. Guid. Control. Dyn. 2001, 24, 214–218.
26. Crassidis, J.L.; Junkins, J.L. Optimal Estimation of Dynamic Systems; Chapman and Hall/CRC: Boca Raton, FL, USA, 2011.
27. Lampariello, R.; Mishra, H.; Oumer, N.W.; Peters, J. Robust Motion Prediction of a Free-Tumbling Satellite with On-Ground Experimental Validation. J. Guid. Control. Dyn. 2021, 44, 1777–1793.
28. Pesce, V.; Haydar, M.F.; Lavagna, M.; Lovera, M. Comparison of filtering techniques for relative attitude estimation of uncooperative space objects. Aerosp. Sci. Technol. 2019, 84, 318–328.
29. Biondi, G.; Mauro, S.; Mohtar, T.; Pastorelli, S.; Sorli, M. Attitude recovery from feature tracking for estimating angular rate of non-cooperative spacecraft. Mech. Syst. Signal Process. 2017, 83, 321–336.
30. Feng, Q.; Zhu, Z.H.; Pan, Q.; Hou, X. Relative State and Inertia Estimation of Unknown Tumbling Spacecraft by Stereo Vision. IEEE Access 2018, 6, 54126–54138.
Figure 1. Diagram of stereovision system.
Figure 2. Multiple solutions of principal axis coordinate frame.
Figure 3. Raw coordinate information of feature points in the presence of occlusion.
Figure 4. Visible trajectories of six different feature points.
Figure 5. RMSE of attitude determination of principal axis coordinate frame.
Figure 6. Estimation error of attitude angle.
Figure 7. Estimation error of angular velocity.
Figure 8. Estimation error of inertia ratio.
Figure 9. Estimation error of coordinates of feature point.
Figure 10. Estimation error of relative position.
Figure 11. Estimation error of relative velocity.