Article

Autonomous Trajectory Generation Algorithms for Spacecraft Slew Maneuvers

1 Sibley School of Mechanical and Aerospace Engineering, Cornell University, Ithaca, NY 14853, USA
2 Department of Mechanical and Aerospace Engineering, Naval Postgraduate School, Monterey, CA 93943, USA
* Author to whom correspondence should be addressed.
Aerospace 2022, 9(3), 135; https://doi.org/10.3390/aerospace9030135
Submission received: 7 January 2022 / Revised: 24 January 2022 / Accepted: 1 March 2022 / Published: 3 March 2022
(This article belongs to the Special Issue Recent Advances in Spacecraft Dynamics and Control)

Abstract

Spacecraft need to slew quickly and reliably, and rather than simply commanding a final angle, a trajectory calculated and known throughout the maneuver is preferred. A fully solved trajectory allows control based on comparing the current attitude to a time-varying desired attitude, allowing far better use of control effort and command over the slew orientation. This manuscript introduces slew trajectories using sinusoidal functions and compares them to optimal trajectories using Pontryagin’s method. Use of Pontryagin’s method yields approximately 1.5% lower control effort compared to sinusoidal trajectories. Analysis of the simulated system response demonstrates that a correct understanding of the effect of cross-coupling is necessary to avoid unwarranted control costs. Additionally, a combination of feedforward with proportional derivative control generates a system response with a 3% reduction in control cost compared to a feedforward with proportional integral derivative control architecture. Use of a calculated trajectory is shown to reduce control cost by five orders of magnitude and allows gains to be raised by an order of magnitude. When control gains are raised, an error eight orders of magnitude lower is achieved in the slew direction, and rather than an increase in control cost, a decrease of 11.7% is observed. This manuscript concludes that Pontryagin’s method for generating slew trajectories outperforms the use of sinusoidal trajectories and that trajectory generation schemes are essential for efficient spacecraft maneuvering.

1. Introduction

The James Webb Space Telescope (JWST), depicted in Figure 1, exemplifies how critical attitude and angular momentum trajectories are to space mission success. JWST will study the formation of stars, galaxies, and planets 100–250 million years after the Big Bang. To see the first stars and galaxies of the early universe, JWST must look deep into space, necessitating fine pointing accuracy, one of the reasons JWST is positioned so far from Earth and the perturbations of the atmosphere. Due to the incredible distance between JWST and potential imaging targets, pointing errors of mere arcseconds can lead to the telescope field of view not including the intended target, and any spacecraft jitter results in blurry, unusable images. Additionally, the control cost of pointing maneuvers must be considered carefully. Lower control costs result in “cheaper” maneuvers, which can allow for a less expensive spacecraft launch due to mass savings, more possible maneuvers over the spacecraft lifetime due to energy savings, or simply greater error tolerances and recovery capabilities.
The same holds true for near Earth observational satellites. While satellites such as Landsat-9 [3] image targets much closer than JWST does, the increase in magnification power and resolution of such satellites warrants the same need for fine pointing accuracy. Understanding and application of trajectory generation and solution optimality leads to even lower control cost and greater pointing accuracy than has been possible. As pointing accuracy needs increase, attitude trajectories and the provided decrease in pointing error and control cost are vital in making future missions possible.
This manuscript describes two methods of formulating spacecraft attitude slew trajectories and compares various control architectures in performing the generated maneuvers. Rather than the classical formulation of feedback control on a single desired value subtracted from the current state, attention has shifted to more intelligent schemes where the slew trajectory is mapped out for the entire maneuver. Not only does a solved maneuver trajectory allow for more control, customization, and confidence in a control architecture, but pointing conditions and restraints can also be imposed and validated; for example, a spacecraft may need to keep a sun sensor pointed at the sun as closely as possible throughout a maneuver. The entire trajectory can be simulated, visualized, and checked ahead of time with a much lower degree of randomness than classical control methods, based on final conditions rather than trajectories, might allow. Using trajectories also introduces possibilities for finding solutions that are optimal in fuel use, time, or distance. One such manner of generating optimal trajectories is by use of Pontryagin’s method. In this manuscript, Pontryagin’s principle of using necessary conditions of optimality to solve for trajectories that are optimal in terms of control cost will be examined and contrasted with simpler trajectory generation techniques. Additionally, progress has been made in the development of deterministic artificial intelligence, a control scheme based on a statement of self-awareness and optimal parameter learning, to determine the optimal control to follow a desired trajectory [4,5,6]. Naturally, the question arises of the effects of applying optimal trajectory generation methods to such adaptive feedforward methods of optimal control, as well as to other control architectures. This manuscript derives trajectories to perform a given slew maneuver of 30 degrees in yaw, and presents the formalism behind adaptive control techniques and deterministic artificial intelligence. Section 2 describes the materials and methods used to produce the results presented in Section 3, and Section 4 provides a brief discussion of those results.
Attitude trajectory adaptation was developed and proposed in [4], but the parameterization suffered from expression in inertial coordinates, leading to a high computational burden (thus evaluation of techniques on this basis remains important). Within just a couple of years, development of the identical technique parameterized in the body frame [5] proved to eliminate a large part of the computational burden. Several years later the burden was reduced still further [6] by 33% and 66% parameter reductions, with subsequent experimental validation in [7]. Adaptability is often desirable, but the results are sometimes difficult to prove optimal with respect to some prescribed cost function. Position and attitude constraints have been considered simultaneously [8], as has inherent disturbance rejection. Limited control actuation has also been treated [9,10], while optimization has been presented for spacecraft actuated by control moment gyroscopes [11], where gyroscope singularity is enormously concerning. Guidance trajectories near small bodies have been derived using only attitude control [12]. In 2018, trajectories began to be presented [13] to support the newly proposed deterministic artificial intelligence [14], while at the same time constrained optimization problems of attitude trajectories were proposed in [15]. Following the former deterministic thread of thinking, deterministic optimal assertion of self-awareness and optimal learning were formulated in [16]. With a renewed focus on attitude trajectory optimization [17], disturbance-minimizing trajectories for space robots were recently proposed [18]. Finally, convex optimization for trajectory generation [19] was presented amidst a focus on rapidly maneuvering agile satellites in constellations [20]. Garcia et al. [8] highlighted the complexity of optimizing space trajectories generated by the nonlinear, coupled governing equations of motion. Sanyal et al. [21] emphasized the desirability of analytic, continuous trajectory equations over discontinuous ones from the perspective of proof of stability. Walker [15,22] utilized genetic-algorithm-tuned fuzzy controller solutions compared to a similar linear quadratic regulator solution (as opposed to the nonlinear solutions presented here).
Chen et al. [17] illustrated that attitude trajectory optimization can simplify control system design and improve relative performance. Following this most recent resurgent strand of research, this manuscript examines trajectory optimization using Pontryagin’s method for control-effort minimization [23] and compares several instantiations to comparable variants utilizing the sinusoidal trajectory generation ubiquitously presented in deterministic artificial intelligence research and expanded in the present treatment.
This work proposes to:
  • Present and derive two methods of autonomous slew trajectory generation, sinusoidal and Pontryagin based generation.
  • Compare the performance of the derived trajectories when combined with various feedforward and feedback control schemes.
  • Present two iterations on Pontryagin based trajectory generation and compare the instantiations.
  • Validate the strength of trajectory-based control schemes over single-state feedback control methods.
The manuscript begins in Section 2 with a re-introduction of sinusoidal trajectory generation before deriving control minimizing (analytic) optimal trajectories using Pontryagin’s method of imposing necessary conditions of optimality followed by solution of boundary value problems for families of solutions. A brief presentation of each of the control methods follows. Section 3 presents the results of simulations of the control methods and lastly Section 4 discusses and interprets the results of Section 3.

2. Materials and Methods

2.1. Autonomous Trajectory Generation

Given an end state, the goal is for the system to autonomously generate a trajectory to follow. Two methods of doing so will be examined: sinusoidal trajectory generation, motivated by the structure of the slew problem and its solution, and trajectory generation using Pontryagin’s method, which imposes necessary conditions of optimality to form and solve a boundary value problem for the optimal state and rate trajectories. The trajectory shall be composed of an initial quiescent period, followed by a slew, and ending in a final quiescent period.

2.1.1. Sinusoidal Trajectories

In response to a commanded slew, a spacecraft should quickly snap to the desired attitude, where the definition of quick is determined by the human controller. If attitude maneuvering were instantaneous, the plot of the spacecraft’s attitude representation would appear as a step function in the commanded maneuver channel. However, step functions are not smooth, represent discontinuities, and cannot be easily differentiated. Recognizing that an instantaneous slew is impossible and that a smooth, differentiable trajectory is preferable, we turn to an analysis of the structure of the problem.
Simple ordinary differential equations of the form $\dot{z} = Az$, such as a state space representation of spacecraft rotational dynamics, can be recognized as a harmonic oscillator, whose solution is an exponential function of the form $z = A\exp(\lambda t)$ and can be written as a sinusoid, $z = A\sin(\omega t)$ [6]. Given the structure of the solution to the dynamics, an idea is for the commanded slew to follow the sinusoidal structure. Studying the sine wave in Figure 2 demonstrates how a square wave can be approximated by a piecewise trajectory of an initial quiescent period, a sine wave with period approaching zero, and a final quiescent period. As sine waves are smooth and differentiable, this structure readily lends itself to a sinusoidal trajectory generation scheme.
A nominal sine curve is depicted in Figure 2b; note the abrupt start at time $t = 0$. A smooth initiation is preferable to avoid sudden impulsive maneuvers and undesirable resonance of flexible structures such as solar panels. In Figure 2b, at time $t = 3T/4$, where $T$ is the sine wave period, we note the smooth ramp up: the derivative is zero and increases slowly, rather than an initial derivative demanding immediate velocity. A non-instantaneous derivative reduces strain on the system; thus time $t = 3T/4$ is desirable for the slew start. Choosing the quiescent time, $\Delta t_{\text{quiescent}}$, and the maneuver time, $\Delta t_{\text{maneuver}}$, the sinusoidal trajectory is designed in the following steps.
$$z = \sin(\omega t)$$
1.
The maneuver time, Δ t maneuver , determines the period of the sine wave. A sine wave takes half the period to go from the lowest value to the highest so the desired period should be twice the slew time. Thus,
$$\omega = \frac{2\pi}{T} = \frac{2\pi}{2\,\Delta t_{\text{maneuver}}} = \frac{\pi}{\Delta t_{\text{maneuver}}}$$
2.
Phase shift the sine wave such that the desired low point at time $t = 3T/4$ occurs at the desired maneuver start time after the initial $\Delta t_{\text{quiescent}}$, recalling that a positive phase shift translates the sine wave in the negative direction and a negative phase shift translates it in the positive direction. We would like to translate the sine wave to the left by $3T/4$, so that the low point at time $t = 3T/4$ instead occurs at time $t = 0$, and then to the right by $\Delta t_{\text{quiescent}}$, shifting the low point to the start of the slew.
$$z = \sin(\omega(t + \phi))$$
3.
Next, we would like to manipulate the amplitude of the sine wave to match the desired change in attitude.
$$A = A_f - A_0$$
where $A_f$ is the final desired attitude and $A_0$ the initial attitude.
4.
Amplitude-shift the curve up for smooth initiation at A 0 by adding A 0 .
$$z = \frac{A_f - A_0}{2}\left[1 + \sin\!\left(\omega\left(t + \frac{3\,\Delta t_{\text{maneuver}}}{2} - \Delta t_{\text{quiescent}}\right)\right)\right] + A_0$$
5.
Craft a piecewise continuous trajectory with an initial quiescent period of Δ t quiescent , followed by the sinusoidal function formed in the preceding steps occurring during Δ t maneuver , and ending in a quiescent period of constant final attitude.
$$z = \begin{cases} A_0 & t < \Delta t_{\text{quiescent}} \\ \dfrac{A_f - A_0}{2}\left[1 + \sin\!\left(\omega\left(t + \dfrac{3\,\Delta t_{\text{maneuver}}}{2} - \Delta t_{\text{quiescent}}\right)\right)\right] + A_0 & \Delta t_{\text{quiescent}} \le t < \Delta t_{\text{maneuver}} + \Delta t_{\text{quiescent}} \\ A_f & t \ge \Delta t_{\text{maneuver}} + \Delta t_{\text{quiescent}} \end{cases}$$
Equation (6) can then be easily differentiated to solve for the state, rate, and acceleration.
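For illustration, a minimal numerical sketch of the piecewise sinusoidal trajectory of Equation (6) and its first two derivatives is given below. The function name, example angles, and times are illustrative only; the published implementation is the Simulink model of Appendix A.

```python
import numpy as np

def sinusoidal_trajectory(t, A0, Af, dt_quiescent, dt_maneuver):
    """Piecewise sinusoidal attitude trajectory: quiescent, slew, quiescent.
    Returns position z, rate z_dot, and acceleration z_ddot at time t."""
    w = np.pi / dt_maneuver                      # Equation (2): omega = pi / dt_maneuver
    phase = 1.5 * dt_maneuver - dt_quiescent     # shift the low point to the slew start
    amp = 0.5 * (Af - A0)
    if t < dt_quiescent:                         # initial quiescent period
        return A0, 0.0, 0.0
    if t < dt_quiescent + dt_maneuver:           # sinusoidal slew segment
        arg = w * (t + phase)
        z = amp * (1.0 + np.sin(arg)) + A0
        z_dot = amp * w * np.cos(arg)
        z_ddot = -amp * w**2 * np.sin(arg)
        return z, z_dot, z_ddot
    return Af, 0.0, 0.0                          # final quiescent period

# Example: 30-degree yaw slew, 5 s quiescent periods, 5 s maneuver
ts = np.arange(0.0, 15.0, 0.001)
traj = np.array([sinusoidal_trajectory(t, 0.0, np.deg2rad(30.0), 5.0, 5.0) for t in ts])
```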

2.1.2. Optimal Trajectory Using Pontryagin’s Method

Another technique of formulating the trajectory is by use of Pontryagin’s principle. Pontryagin’s principle forms a boundary value problem of the trajectory and makes use of necessary conditions of optimality to solve for the boundary values. The result is an optimal trajectory with respect to the cost function used.

Defining the Problem

The system consists of the 3 degree-of-freedom rotational motion of a rigid body, which when the angular velocity is measured in a non-inertial frame, is represented as
$$\underbrace{T}_{\text{external torque}} = \underbrace{I\ddot{\theta}}_{\text{double integrator}} + \underbrace{\omega \times I\omega}_{\text{transport theorem}}$$
where I is the moment of inertia of the body, θ ¨ is the second derivative of the state θ , which represents the three body angles, and T is the applied torque in the body frame. The initial states were initialized at zero, and the system was commanded to reach a final state of θ d , or a desired attitude, with a final angular velocity of zero, θ ˙ ( t f ) = 0 . The model was built in Matlab Simulink and the block diagram can be found in Appendix A Figure A1.
The trajectory optimization problem begins with the dominant double-integrator dynamics as a quadratic control (DQC) problem, and can be summarized as Equation (8), whose result will yield optimal trajectories that are subsequently utilized to control the transport theorem terms.
$$\text{Minimize}\quad J[x(\cdot), u(\cdot)] = \frac{1}{2}\int_{t_0}^{t_f}\tau^2\,dt$$
$$\text{Subject to}\quad \dot{\theta} = \omega,\quad \dot{\omega} = \tau/I,\quad (\theta_0,\omega_0) = (0,0),\quad (\theta_f,\omega_f) = (1,0),\quad t_0 = 0,\quad t_f = 1$$
The state is given by $[\theta, \omega]$, the angle of rotation and the angular velocity, respectively. The value $\tau$ is the applied control torque and $I$ the inertia of the system. The cost function $J$ is the quadratic cost, computed by integrating the square of the applied torque. The quadratic cost represents the amount of work or energy needed to control the system and thus translates readily to quantities such as fuel used and real-world dollars. Note the simplification from Equation (7): in Equation (8) the acceleration should be $\dot{\omega} = I^{-1}(\tau - \omega \times I\omega)$; however, the cross-coupling makes the boundary value problem difficult to solve. Instead, the trajectory of each Euler angle can be solved for individually, under the assumption that disregarding the cross-coupling effects does not affect optimality. To do so, the inertia matrix must also be manipulated, as the off-axis products of inertia complicate matters as well. Three methods were used and compared: one where the principal axes and principal moments of inertia were found, the Euler angle trajectories solved in the principal frame, and then converted to the body frame; one where the principal moments of inertia were used but the trajectories were not converted from the principal frame to the body frame, i.e., of the form $\dot{\omega}_1 = \tau_1/I_1$, where the subscript 1 indicates the first Euler angle and the principal moment of inertia corresponding to the principal axis best aligned with the first body axis; and lastly, one where the off-axis products of inertia were simply ignored and the body frame moments of inertia were used in calculating the trajectories. These techniques result in independently solvable Euler angle trajectories.

Principles of Pontryagin: Optimal States, Rates, and Controls

Pontryagin’s principle consists of forming a boundary value problem by first forming a Hamiltonian from the given cost function and dynamics, second minimizing the Hamiltonian with respect to the control, third taking the derivative of the Hamiltonian with respect to the states to obtain the dynamics of the co-states (the adjoint equations), and, if needed, using the transversality condition of the endpoint Lagrangian to solve for the final values of the co-states and generate enough boundary conditions to solve the boundary value problem.

Form the Hamiltonian

$$H = F + \lambda^{T} f(x, u)$$
where $F$ is the running cost function (in general, the part of the cost function being continuously integrated; in Equation (8), $\tfrac{1}{2}\tau^2$), $\lambda$ is the costate vector, given by $[\lambda_\theta \ \ \lambda_\omega]$, one costate per state, and $f(x,u)$ represents the dynamics of the system, $\dot{x}$. In Equation (8) the state derivatives are $f(x,u) = [\omega \ \ \tau/I]^{T}$.
$$H = \tfrac{1}{2}\tau^2 + [\lambda_\theta \ \ \lambda_\omega]\begin{bmatrix}\omega \\ \tau/I\end{bmatrix}$$

Minimize the Hamiltonian

Pontryagin’s principle states the optimal solution can be found by taking the derivative of the Hamiltonian with respect to the control and setting the result equal to zero. Applying the derivative to Equation (10) and setting it equal to zero results in
$$\frac{\partial H}{\partial u} = \frac{\partial H}{\partial \tau} = \tau + \lambda_\omega/I = 0$$
which gives $\tau = -\lambda_\omega/I$.

Adjoint Equation

The next step is to take the derivative of the Hamiltonian with respect to the states and set the result equal to the negative derivative of the co-states. Differentiating Equation (10) with respect to θ and ω results in
$$\frac{\partial H}{\partial \theta} = -\dot{\lambda}_\theta = 0 \;\;\Rightarrow\;\; \lambda_\theta = a$$
$$\frac{\partial H}{\partial \omega} = -\dot{\lambda}_\omega = \lambda_\theta \;\;\Rightarrow\;\; \lambda_\omega = -\lambda_\theta t + b = -at + b$$

Terminal Transversality of the Endpoint Lagrangian

Conditions necessitating the use of terminal transversality of the endpoint Lagrangian are generally present when there are not enough boundary conditions to solve the boundary value problem formed in the previous steps. The method consists of forming a modified final cost function, $\bar{E}$, from the final cost (in the DQC problem, zero) plus the boundary conditions multiplied by unknown Lagrange multipliers. The derivative of $\bar{E}$ with respect to the endpoint states $x_f$ equals the final values of the co-states. For this problem, terminal transversality is not required because the boundary value problem can already be solved.

The Boundary Value Problem

The boundary value problem can now be summarized as
$$\dot{\theta} = \omega,\quad \dot{\omega} = \tau/I,\quad \tau = -\lambda_\omega/I,\quad \lambda_\theta = a,\quad \lambda_\omega = -at + b$$
$$(\theta_0,\omega_0) = (0,0),\quad (\theta_f,\omega_f) = (1,0),\quad t_0 = 0,\quad t_f = 1$$
Solving the problem gives an optimal solution for the state, rate, acceleration, and torque. Substituting the equation for $\lambda_\omega$ into the equation for the torque and integrating twice yields the optimal rate and state trajectories,
$$\tau = -(I^{-1})(-at + b) = (I^{-1})(at - b),\qquad \omega = (I^{-1})^2\left(\tfrac{1}{2}at^2 - bt + c\right),\qquad \theta = (I^{-1})^2\left(\tfrac{1}{6}at^3 - \tfrac{1}{2}bt^2 + ct + d\right)$$
Finally, the final values are used to solve for the coefficients and the solution for the optimal state, rate, and torque is as follows
$$\theta^{*} = 3t^2 - 2t^3,\qquad \omega^{*} = 6t - 6t^2,\qquad \tau^{*} = I\,(6 - 12t)$$
where * denotes the optimal solution. These optimal solutions to the trajectory are plotted in Figure 3 for a 30-degree yaw maneuver.
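For a single axis, the normalized optimal profile above scales directly to an arbitrary final angle, maneuver time, and axis inertia. A minimal sketch, assuming a rest-to-rest slew from zero to theta_f over t_f (names and example values are illustrative):

```python
import numpy as np

def pontryagin_trajectory(t, theta_f, t_f, inertia):
    """Control-effort-optimal rest-to-rest profile for double-integrator dynamics.
    Returns angle, rate, and torque at time t (normalized time s = t / t_f)."""
    s = np.clip(t / t_f, 0.0, 1.0)
    theta = theta_f * (3.0 * s**2 - 2.0 * s**3)            # cubic position profile
    omega = theta_f * (6.0 * s - 6.0 * s**2) / t_f          # chain rule: d/dt = (1/t_f) d/ds
    torque = inertia * theta_f * (6.0 - 12.0 * s) / t_f**2  # torque = I * angular acceleration
    return theta, omega, torque

# Example: 30-degree yaw maneuver in 5 s about an axis with I = 16.67 kg m^2
ts = np.linspace(0.0, 5.0, 501)
traj = np.array([pontryagin_trajectory(t, np.deg2rad(30.0), 5.0, 16.67) for t in ts])
```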

2.2. Feedback Controllers

A summary of the utilized controllers will be given in Section 2.2 beginning with Section 2.2.1, a standard proportional plus derivative (PD) controller establishing a classical performance benchmark, followed by a proportional plus integral plus derivative (PID) controller. A proportional plus derivative plus integral (PDI) control is introduced next in Section 2.2.3 followed by an enhanced PDI in Section 2.2.4. Feedforward controllers are introduced in Section 2.3 beginning with classical, ideal feedforward control in Section 2.3.1 followed by adaptive feedforward control in Section 2.3.2.

2.2.1. Proportional, Derivative (PD)

Given the desired final conditions are a function of the angle θ and the angular velocity θ ˙ , which is being driven to zero, a proportional derivative or PD controller is a logical choice. The PD controller computes the input to the plant, or the control as
$$\tau = u_{fb} = -K_d(\dot{q} - \dot{q}_d) - K_p(q - q_d) = -K_d\dot{\tilde{q}} - K_p\tilde{q}$$
where τ is the control torque composed of the feedback control, K d and K p are the derivative and proportional gains respectively, q and q ˙ are the state and state derivative, q d and q d ˙ are the desired state and state derivative and the tilde represents the tracking errors, the error between the state and the desired state.
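A minimal sketch of this PD law, using the gain values of Section 3 as defaults and illustrative variable names:

```python
import numpy as np

def pd_control(q, q_dot, q_d, q_d_dot, Kp=100_000.0, Kd=1_000.0):
    """Feedback torque from attitude and rate tracking errors (element-wise)."""
    q_err = q - q_d              # attitude tracking error (q tilde)
    q_dot_err = q_dot - q_d_dot  # rate tracking error
    return -Kp * q_err - Kd * q_dot_err
```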

2.2.2. Proportional, Integral, Derivative (PID)

Another classical control choice is the proportional integral derivative controller. The PID controller computes the input to the plant, or the control, as
$$\tau = u_{fb} = -K_p(q - q_d) - K_d\frac{d}{dt}(q - q_d) - K_I\int(q - q_d)\,dt$$
where $\tau$ is the control torque composed of the feedback control; $K_p$, $K_I$, and $K_d$ are the proportional, integral, and derivative gains, respectively; $q$ is the state; and $q_d$ is the desired state.

2.2.3. Proportional, Derivative, Integral (PDI)

The proportional derivative integral controller differentiates itself from the PID controller by not differentiating the state to get the rate for calculation of the rate error. Instead, the PDI uses the rate from the state estimates directly. Eliminating the differentiation results in less noise in the error signal and smoother control. The PDI controller computes the input to the plant, or the control as displayed in Equation (19).
$$\tau = u_{fb} = -K_p(q - q_d) - K_d(\dot{q} - \dot{q}_d) - K_I\int(q - q_d)\,dt$$

2.2.4. Enhanced PDI

Given the non-linearities imposed by the transport theorem cross product in the system dynamics, a proposed improvement is the addition of a cross product term in the feedback control to account for the coupled motion [13]. The Enhanced PDI controller then has the form displayed in Equation (20)
$$\tau = u_{fb} = -K_p(q - q_d) - K_d(\dot{q} - \dot{q}_d) - K_I\int(q - q_d)\,dt - (\dot{q} - \dot{q}_d) \times I(\dot{q} - \dot{q}_d)$$
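A minimal sketch of the enhanced PDI law, assuming the tracking errors and their running integral are available as three-element vectors (names and gains are illustrative):

```python
import numpy as np

def enhanced_pdi(q_err, q_dot_err, q_err_int, I, Kp=100_000.0, Kd=1_000.0, Ki=10.0):
    """PDI feedback plus a transport-theorem cross-product term on the rate error."""
    coupling = np.cross(q_dot_err, I @ q_dot_err)   # (rate error) x I (rate error)
    return -Kp * q_err - Kd * q_dot_err - Ki * q_err_int - coupling
```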

2.3. Feedforward Controllers

Feedback controllers are ubiquitously applicable to control systems from initial points to desired final points, utilizing a particular strength of feedback to imbue robustness. On the other hand, particularly when the goal is tracking a prescribed trajectory, tracking controllers are often of the feedforward nature lacking feedback, but instead emphasizing the analytic expressions of the controlled system dynamics and the prescribed desired trajectory. This section describes classical feedforward controllers first followed by nonlinear adaptive feedforward controllers (with a few disparate instantiations including various regression models and deterministic artificial intelligence).

2.3.1. Classical Feedforward Controller

Given the desired state, rate, and accelerations have been solved for, the feedforward control can be defined from an understanding of the system dynamics, and the feedforward control is displayed in Equation (21).
$$\tau = u_{ff} = I\dot{\omega}_d + \omega_d \times I\omega_d$$
where the subscript $d$ refers to the desired trajectory. The control is formed by substituting the desired states, rates, and accelerations into the equations of motion of Equation (7), giving the control that should produce the desired states, rates, and accelerations given perfect system modeling.
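A minimal sketch of this feedforward law, using the non-symmetric inertia matrix of Section 3.2 and illustrative desired rates:

```python
import numpy as np

def feedforward_torque(I, omega_d, omega_dot_d):
    """u_ff = I * omega_dot_d + omega_d x (I * omega_d), Equation (21)."""
    return I @ omega_dot_d + np.cross(omega_d, I @ omega_d)

I = np.array([[90.0, 10.0, 10.0],
              [10.0, 100.0, -20.0],
              [10.0, -20.0, 250.0]])          # non-symmetric inertia from Section 3.2
tau_ff = feedforward_torque(I, omega_d=np.array([0.0, 0.0, 0.1]),
                            omega_dot_d=np.array([0.0, 0.0, 0.02]))
```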

2.3.2. Adaptive Feedforward Development

If the dynamics were always exactly known, then given a desired state trajectory, an ideal control could always be formulated and commanded, as in Equation (21), to accomplish the desired maneuver. In practice, however, determining the inertia dyadic perfectly is very difficult; at best a close estimate is made, resulting in imperfect feedforward control. To better estimate the system and the ideal control, a method of updating the system model based on the feedback error is proposed. For inertia matrix $[I]$, Coriolis matrix $[C]$, and applied external torque $\tau$, the equations of motion are
$$\{\tau_{\text{ideal}}\} = [I]\{\ddot{q}\} + [C]\{\dot{q}\} = [I]\{\ddot{q}_d\} + [C]\{\dot{q}_d\}$$
where q is the state, and q d the desired state. In more common terms
$$\{\tau_{\text{ideal}}\} = I\dot{\omega} + \omega \times I\omega = [I]\{\ddot{q}_d\} + [C]\{\dot{q}_d\} = [\Phi(\omega_d, \dot{\omega}_d)]\{\theta\}$$
The goal is to break the feedforward torque command into a regression model of a product of a matrix of knowns, [ Φ ] , times a vector of unknowns, { θ } . There are several choices available for the parameterization of { θ } , discussed in Section 2.3.3.
The adaptive aspect is an adaptation of the vector of unknowns, in response to error of the state with respect to the desired trajectory. The rate is defined in Equation (24).
$$\dot{\hat{\Theta}} = -\Gamma\,\Phi^{T}\left(\dot{\tilde{q}} + \lambda\tilde{q}\right)$$
where Γ > 0 and q ˜ ˙ is the error in the state derivative with respect to the desired state derivative q ˙ q ˙ d , as defined above. Equation (24) can be integrated starting from an initial estimate of Θ , to form a continuously learning, adaptive system.
Another choice is to form $[\Phi]$ using the reference trajectories in Equation (25):
$$\dot{q}_r = \dot{q}_d - \lambda(q - q_d) = \dot{q}_d - \lambda\tilde{q},\qquad \ddot{q}_r = \ddot{q}_d - \lambda(\dot{q} - \dot{q}_d) = \ddot{q}_d - \lambda\dot{\tilde{q}}$$
for $\lambda > 0$, at which point $q_d$ in Equation (23) would be replaced with $q_r$. Reference trajectories compensate for trajectory error by giving the commanded rate a “boost”. For instance, suppose the desired final state is five radians at time t = 1 s, the desired state derivative is a constant four radians per second, and the state should therefore start at one radian. If at t = 0 the state is instead at zero, then to arrive at the desired final state of five radians the state derivative must be boosted to five radians per second, assuming linear motion. The reference trajectory should in theory help ensure the final conditions are better met without the need for classical feedback control.
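A minimal sketch of one Euler-integration step of the adaptation law in Equation (24), with the composite error built from the gain λ of Equation (25). The names, default gains, and the sign convention of the Lyapunov-based law in [4] are assumptions of this sketch:

```python
import numpy as np

def adapt_step(theta_hat, Phi, q_err, q_dot_err, dt, Gamma=30.0, lam=100.0):
    """One update of the unknown-parameter estimate theta_hat (Equation (24))."""
    s = q_dot_err + lam * q_err                 # composite tracking error
    theta_hat_dot = -Gamma * (Phi.T @ s)        # adaptation law, Slotine-style sign
    return theta_hat + theta_hat_dot * dt       # Euler integration over one time step
```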
Through formulation of a Lyapunov function [4] and Barbalat’s lemma [6], control of the form of Equation (23), in conjunction with feedback control of the form of Equation (17), is proven stable, and the state error is proven to tend to zero. The proof is omitted here for brevity but can be found in [6].

2.3.3. Regression Modeling

9-Parameter Regression

Parameter regression seeks to write the governing dynamics equations, Equations (22) and (23), as products of a matrix of knowns and a vector of unknowns. While the governing equations of motion are messy and nonlinear in the state, they are linear in terms of the inertia dyadic $[I]$. Noting the state is already known for control purposes, and the inertia dyadic is the most difficult quantity to estimate, Equation (23) can be separated into a matrix of knowns, a function of the state, and a vector of unknowns, a function of the inertia dyadic and the angular momentum $H$. The presented 9-parameter regression model is derived in Slotine [4].
$$[\Phi(\omega_r,\dot{\omega}_r)]_{3\times 9}\{\hat{\Theta}\}_{9\times 1} = \begin{bmatrix} \dot{\omega}_x & \dot{\omega}_y & \dot{\omega}_z & 0 & 0 & 0 & 0 & -\omega_z & \omega_y \\ 0 & \dot{\omega}_x & 0 & \dot{\omega}_y & \dot{\omega}_z & 0 & \omega_z & 0 & -\omega_x \\ 0 & 0 & \dot{\omega}_x & 0 & \dot{\omega}_y & \dot{\omega}_z & -\omega_y & \omega_x & 0 \end{bmatrix} \begin{Bmatrix} \hat{J}_{xx} \\ \hat{J}_{xy} \\ \hat{J}_{xz} \\ \hat{J}_{yy} \\ \hat{J}_{yz} \\ \hat{J}_{zz} \\ \hat{H}_x \\ \hat{H}_y \\ \hat{H}_z \end{Bmatrix}$$
Multiplying out the expression, Equation (26) is equivalent to Equation (23).

6-Parameter Regression

Recalling H = I ω , the nine-parameter model can be simplified to an equivalent six parameter model [6]:
$$[\Phi(\omega_r,\dot{\omega}_r)]_{3\times 6}\{\hat{\Theta}\}_{6\times 1} = \begin{bmatrix} \dot{\omega}_x & \dot{\omega}_y - \omega_x\omega_z & \dot{\omega}_z + \omega_x\omega_y & -\omega_y\omega_z & \omega_y^2 - \omega_z^2 & \omega_y\omega_z \\ \omega_x\omega_z & \dot{\omega}_x + \omega_y\omega_z & \omega_z^2 - \omega_x^2 & \dot{\omega}_y & \dot{\omega}_z - \omega_x\omega_y & -\omega_x\omega_z \\ -\omega_x\omega_y & \omega_x^2 - \omega_y^2 & \dot{\omega}_x - \omega_y\omega_z & \omega_x\omega_y & \dot{\omega}_y + \omega_x\omega_z & \dot{\omega}_z \end{bmatrix} \begin{Bmatrix} \hat{J}_{xx} \\ \hat{J}_{xy} \\ \hat{J}_{xz} \\ \hat{J}_{yy} \\ \hat{J}_{yz} \\ \hat{J}_{zz} \end{Bmatrix}$$
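A minimal sketch that builds this 6-parameter regressor and numerically checks it against the full dynamics of Equation (23); the function and variable names are illustrative:

```python
import numpy as np

def regressor_6(w, wd):
    """3x6 regressor Phi(omega, omega_dot) such that tau = Phi @ theta,
    with theta = [Jxx, Jxy, Jxz, Jyy, Jyz, Jzz]."""
    wx, wy, wz = w
    ax, ay, az = wd
    return np.array([
        [ax,      ay - wx*wz,    az + wx*wy,   -wy*wz,  wy**2 - wz**2, wy*wz],
        [wx*wz,   ax + wy*wz,    wz**2 - wx**2, ay,     az - wx*wy,   -wx*wz],
        [-wx*wy,  wx**2 - wy**2, ax - wy*wz,    wx*wy,  ay + wx*wz,    az],
    ])

# Consistency check against I*omega_dot + omega x (I*omega) for an arbitrary inertia
I = np.array([[90.0, 10.0, 10.0], [10.0, 100.0, -20.0], [10.0, -20.0, 250.0]])
theta = np.array([I[0, 0], I[0, 1], I[0, 2], I[1, 1], I[1, 2], I[2, 2]])
w, wd = np.array([0.1, -0.2, 0.3]), np.array([0.01, 0.02, -0.03])
assert np.allclose(regressor_6(w, wd) @ theta, I @ wd + np.cross(w, I @ w))
```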

2.3.4. Deterministic Artificial Intelligence

Deterministic Artificial Intelligence begins with the same regression modeling as Adaptive Control, which can be portrayed as a statement of self, as the model is learning the system dynamics that govern itself. However, the crucial difference is the optimal learning of DAI. DAI uses the standard solution to the batch least squares regression problem [14,16]
$$\hat{\Theta} = (\Phi^{T}\Phi)^{-1}\Phi^{T} u$$
where $\hat{\Theta}$ is estimated using the feedback control, and the estimated states in $\Phi$ are provided by a Luenberger observer. Equation (28) represents optimal learning. Differentiating Equation (28) results in Equation (29).
$$\delta\hat{\Theta} = (\Phi^{T}\Phi)^{-1}\Phi^{T}\,\delta u$$
which allows the change in $\hat{\Theta}$ to be estimated and then integrated to obtain a smoother estimate of $\hat{\Theta}$. The optimal learning expressed in Equations (28) and (29), together with the declaration of self-awareness, learns the system parameters and can then be used to form an optimal feedforward control, $u_{ff} = \Phi\hat{\Theta}$.
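A minimal sketch of the batch least-squares learning step of Equations (28) and (29), with the regressor rows stacked over time; names are illustrative, and a least-squares solver is used rather than an explicit matrix inverse for numerical robustness:

```python
import numpy as np

def learn_parameters(Phi_stack, u_stack):
    """theta_hat = (Phi^T Phi)^{-1} Phi^T u, via a numerically stable solver.
    Phi_stack: (3N x 6) stacked regressors; u_stack: (3N,) stacked control samples."""
    theta_hat, *_ = np.linalg.lstsq(Phi_stack, u_stack, rcond=None)
    return theta_hat
```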

2.4. Luenberger Observer

A Luenberger Observer is used to estimate the control observed to act on the system. Given errors in the estimation of the inertia matrix, the control commanded will not result in the state and rate expected. Therefore, the Luenberger Observer examines the actual state and rate outputted by the plant and uses them to estimate the control responsible for the system response given the estimate of the inertia matrix, resulting in a simulated observer estimation of the control torque. The Luenberger Observer is of the form
$$\hat{\theta} = \int\left[\int J^{-1}\left(K_p e + K_i\int e\,dt\right)dt + K_d\dot{e}\right]dt$$
where e is the error of the observed state. The torque can be extracted as
$$\tau = K_p e + K_i\int e\,dt$$
The control residual $\delta u$ in Equation (29) is then found by subtracting the result of Equation (31) from the commanded torque output by the chosen controller.
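A minimal sketch of the observer-based torque estimate of Equation (31) and the resulting control residual δu; gains and names are illustrative:

```python
import numpy as np

def observed_torque(e, e_int, Kp=100_000.0, Ki=10.0):
    """tau_obs = Kp*e + Ki*integral(e dt), from the observed state error (Equation (31))."""
    return Kp * e + Ki * e_int

def control_residual(tau_commanded, e, e_int):
    """delta u used in Equation (29): commanded torque minus the observer estimate."""
    return tau_commanded - observed_torque(e, e_int)
```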

2.5. Simulation

A simulation was developed in MATLAB Simulink to examine the difference between feedback control, adaptive feedforward control, and feedback plus adaptive feedforward control. A time step of 0.001 s was used in combination with a fourth-order Runge–Kutta integration solver.
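A minimal sketch of one fixed-step fourth-order Runge–Kutta propagation of the rigid-body dynamics of Equation (7), using the paper's simplified kinematics (body angle rates taken equal to the angular velocity); names are illustrative:

```python
import numpy as np

def rigid_body_rates(x, tau, I, I_inv):
    """State x = [theta (3), omega (3)]; returns x_dot for Equation (7)."""
    omega = x[3:]
    omega_dot = I_inv @ (tau - np.cross(omega, I @ omega))  # transport theorem term
    return np.concatenate([omega, omega_dot])

def rk4_step(x, tau, I, I_inv, dt=0.001):
    """One 0.001 s fourth-order Runge-Kutta step with torque held constant."""
    k1 = rigid_body_rates(x, tau, I, I_inv)
    k2 = rigid_body_rates(x + 0.5 * dt * k1, tau, I, I_inv)
    k3 = rigid_body_rates(x + 0.5 * dt * k2, tau, I, I_inv)
    k4 = rigid_body_rates(x + dt * k3, tau, I, I_inv)
    return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
```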

3. Results

The various controllers and generation techniques were compared. For all controllers the gains were $K_P = 100{,}000$, $K_D = 1000$, $K_I = 10$, $\Gamma = 30$, and $\lambda = 100$.

3.1. Comparison of Sinusoidal and Pontryagin Based Trajectory Generation

All the control methods presented in Section 2 were run, as well as all combinations of feedforward and feedback control techniques. Table 1 shows the system responses when a sinusoidal trajectory was used, and Table 2 shows the responses when Pontryagin’s method was used. For both tables, a symmetric inertia matrix of [16.67 0 0; 0 16.67 0; 0 0 16.67], representing a small CubeSat, was used. Due to the symmetry, there were no complications due to cross-coupling.

3.2. Non-Symmetrical Inertia Matrix

The analysis in Section 3.1 was repeated with a non-symmetric inertia matrix of [90 10 10; 10 100 −20; 10 −20 250] with results displayed in Table 3. To eliminate the cross-coupling, the method of solving for the trajectory in the principal frame and then converting to the body frame was used.

3.3. Pontryagin Cross-Coupling Techniques

Noting the large spikes in cost in Table 4, the analysis of Table 4 was repeated with the Pontryagin trajectory solved using the principal moments of inertia but not converting from the principal frame to the body frame, instead assuming the control could be taken to be the same in either frame. The results are in Table 5.
Lastly the three techniques for eliminating the cross-coupling in Pontryagin generation of optimal trajectories were compared for several different maneuvers. The maneuvers consisted of [ θ f , ϕ f , ψ f ] where the final desired value for each maneuver was specified. Note K P and K D had to be raised to 100,000 and 10,000 respectively, to eliminate ringing in the second and third maneuvers. DAI + PD was used as the controller; the results are displayed in Table 6.

3.4. Constant Trajectory Feedback Control

For comparison, feedback control with a constant trajectory consisting of the desired end-state, was simulated. A PD controller was used with the same gains as in the previous simulations, K P = 100,000 , K D = 1000 . The non-symmetric inertial-dyadic presented in Section 3.2 was used. The system response is given in Figure 4.

4. Discussion

Looking at Table 1 and Table 2, note the addition of feedforward decreases the control effort by 0.8% on average. Additionally, the three feedforward techniques seem to perform equally well. Comparing Table 2 with Table 1, Pontryagin trajectory generation lowers control cost by about 1–1.5%.
Looking at Table 3 and Table 4, the Pontryagin techniques with feedback incur a large control cost. Examining the control torques and trajectories in Figure 5 reveals that the Pontryagin method of converting from the principal frame to the body frame commands motion in all three axes (see Figure 5e) due to the cross-coupling. Comparing Figure 5e with Figure 5f,d reveals that the sinusoidal trajectory does not command motion in all three axes.
Looking at the tracking of the Euler angles in Figure 6, there is significant change in the non-slew directions. Once the slew is over, the error in commanding undesirable motion in the non-slew directions manifests itself in a spike of feedback control, making the control cost abnormally high. When Pontryagin is used without converting back into the body frame from the principal, essentially deriving a trajectory in the non-principal frame, undesirable motion in the non-slew directions was eliminated. In the formulation used in Figure 6 and by virtue of non-zero cross-products of inertia, principal axes were mis-aligned with the body axes, and while a seemingly fine idea, solving for the trajectory in the principal frame and then converting to the body frame introduces error. The misalignment causes undesired rotation in the body frame when the trajectory is converted, via cross-coupled terms in the inertia dyadic.
Table 5 proves not converting between principal and body frames fixes the error. Table 5 also demonstrates the flexibility of the control architecture, with low error independent of which axis was being rotated around. The disparity between principal moments of inertia and moments of inertia in the body frame, 81.5970, 105.326, 253.0783, compared with 90, 100, 250, is worth noting. Using the moments of inertia in the body frame to solve for the optimal trajectory in Pontryagin’s method applied in the non-principal frame, was predicted to have a detrimental effect, as the control might not be scaled properly. However, Table 6 shows no difference between using principal or non-principal moments of inertia.
Figure 4 shows the stark contrast between using a calculated trajectory and not. Comparing Figure 4b to the PD control in Table 5, note the similar final errors in the roll, pitch, and yaw channels, and a control cost five orders of magnitude worse. In Table 7, the PD controller gains were raised, revealing that use of a calculated trajectory results in better final state error compared to control without a calculated trajectory. Additionally, in response to elevated gains, control cost rose by a factor of ten in the non-trajectory case (between Figure 4b and the second row of Table 7); however, when using the Pontryagin optimal trajectory, the control cost actually decreased in response to elevated gains, by 11.7% (comparing the third row of Table 5 and the first row of Table 7), while the error decreased by an order of magnitude in the roll and pitch channels and seven orders of magnitude in the yaw channel. The use of a calculated trajectory is thus shown to allow higher gains that result in significantly lower error and lower control cost. Additionally, the plotted system response with no calculated trajectory in Figure 4a shows significant overshoot and significant oscillatory motion compared to using a calculated trajectory.
As for a computational burden analysis, run time was recorded in each of the trials. Unfortunately, run time was found to be dependent on the hardware the simulation was run on and how much of the CPU was available, resulting in an inability to compare run time across different tables. However, looking at each table individually, no run time difference of more than 20% was observed, and no observable patterns were found. With more detailed analysis it could be determined whether any of the control methods used were more computationally demanding than the others; however, for the purposes of this study, all control methods were deemed relatively similar in terms of computational burden.

4.1. Conclusions

Trajectory generation demonstrates a significant decrease in control cost and attitude error, as well as allowing more versatile gain tuning. Pontryagin’s method shows promise, lowering control cost slightly over sinusoidal generation, and lays the groundwork for more sophisticated autonomous trajectory generation through modification of the Hamiltonian (Equations (9) and (10)), such as the imposition of pointing restraints, which is not possible with the sinusoidal method. Proper understanding of cross-coupling dynamical effects proved crucial in the formulation of Pontryagin’s method, yet proved simple to deal with. Lower pointing errors make missions focused on objects farther away, or requiring higher magnification and object resolution, possible, and lower control costs improve the efficiency of such missions, increasing viability.

4.2. Future Work

Real-time optimal control (RTOC) was not implemented or studied. RTOC inherently uses the trajectory solved for in Pontryagin’s method, and a comparison of RTOC to control using sinusoidal trajectory generation would be interesting. Additionally, the assumption of solving for each Euler angle trajectory separately using Pontryagin’s method inherently introduces some error. The amount of error in this formulation should be examined. A better approach would be to solve Pontryagin’s method for the full coupled dynamics. While methods to do so exist, they lie outside the scope of this work and were not attempted.
Of note, lower errors are achievable with higher gains and modest increases in cost; however, the investigation of this work was not aimed at achieving the lowest error possible.
The true power of Pontryagin’s method lies in the ability to parameterize desired trajectories and solve for optimal solutions. For instance, during a slew a requirement might be specified declaring the spacecraft shall never point in a certain direction. With sinusoidal trajectory generation, there is no way to enforce a requirement on pointing restrictions without analysis of the sinusoidal trajectory and careful design of piecewise maneuvers around restrictions. However, Pontryagin’s method allows for such specifications in the formation of the Hamiltonian. Pontryagin’s method could significantly reduce control efforts of more complicated maneuvers with pointing restriction.

Author Contributions

Conceptualization, A.S. and T.S.; methodology, A.S. and T.S.; software, A.S. and T.S.; validation, A.S. and T.S.; formal analysis, A.S. and T.S.; investigation, A.S. and T.S.; resources, T.S.; data curation, A.S.; writing—original draft preparation, A.S.; writing—review and editing, A.S. and T.S.; visualization, A.S.; supervision, T.S.; project administration, A.S and T.S.; funding acquisition, T.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding. The APC was funded by the corresponding author.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data may be made available by contacting the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Table of variable definitions for Section 2.1.1.
Variable | Definition
z , z ˙ , z ¨ Arbitrary motion states variables used to formulate autonomous trajectories
A , A 0 , A f Arbitrary motion state amplitude, initial and final amplitude used to formulate autonomous trajectories
t Time
λ Eigenvalue associated with exponential solution to ordinary differential equations
ω Frequency of sinusoidal functions
T Period of sinusoidal functions
Δ t quiescent User-defined quiescent period (no motion should occur during the quiescent period)
Δ t maneuver User-defined duration of maneuver (often established by time-optimization problems)
ϕ Phase angle of sinusoidal functions
Table A2. Table of variable definitions for Section 2.1.2 through Section 4.1.
Variable | Definition
T ,   τ Torque
I Inertia Matrix
θ , θ ˙ , θ ¨ Three body angle representation of attitude, its derivative and second derivative
ω , ω d , ω ˙ , ω ˙ d Angular velocity, desired angular velocity, derivative, and desired derivative of angular velocity
J Control cost
H Hamiltonian function
F Running cost function
λ Vector of the co-states (Section 2.1.2) and reference trajectory gain (Section 2.3.2 on)
x , x ˙ , x ¨ Arbitrary state space representation of the state, its derivative and its second derivative
u ,   δ u ,
u f f , u f b
Control and control derivative in state space form, feedforward, and feedback control
f ( x , u ) State space dynamics
a , b , c , d Arbitrary constants used in solving of trajectory
K d , K p , K I Derivative, Proportional, and Integral gains
q , q d , q ˙ , q ˙ d Arbitrary representation of state, state derivative, desired state, and desired state derivative
q ˜ , q ˜ ˙ Error between state and desired state, error between state derivative and desired state derivative
[ C ] Coriolis Matrix
Φ , Φ ^ Matrix of knowns, learned matrix of knowns
θ , θ ^ Vector of unknowns, learned vector of unknowns
Γ Learning rate gain
e Error of Luenberger observed state
Table A3. Consolidated table of acronyms.
Acronym | Definition
PDProportional Derivative Controller
PIDProportional, Integral, Derivative Controller
PDIProportional, Derivative, Integral Controller
FFFeedforward
FBFeedback
DAIDeterministic Artificial Intelligence
Figure A1. Simulink model: complete display of all Simulink systems and subsystems.
Figure A2. Trajectory generation block model.
Figure A3. Sinusoidal trajectory generation.
Figure A4. Pontryagin-based trajectory generation.
Figure A5. Controller block model.
Figure A6. Sensors and observers block model.
Figure A7. Luenberger Observer block model.
Figure A8. Rigid body dynamics block model.

References

  1. NASA to Provide Update on James Webb Space Telescope. 15 July 2020, Media Advisory M20-083. Available online: https://www.nasa.gov/press-release/nasa-to-provide-update-on-james-webb-space-telescope (accessed on 2 March 2020).
  2. NASA Image Use Policy. Available online: https://gpm.nasa.gov/image-use-policy (accessed on 23 December 2021).
  3. Garner, R. Landsat Overview. NASA. Available online: https://www.nasa.gov/mission_pages/landsat/overview/index.html (accessed on 1 April 2015).
  4. Slotine, J.-J.E.; Weiping, L. Applied Nonlinear Control; Prentice-Hall: Hoboken, NJ, USA, 1991.
  5. Fossen, T. Comments on Hamiltonian adaptive control of spacecraft by Slotine, J.J.E. and Di Benedetto, M.D. IEEE Trans. Autom. Control. 1993, 38, 671–672.
  6. Sands, T.; Kim, J.; Agrawal, B. Improved Hamiltonian Adaptive Control of spacecraft. In Proceedings of the IEEE Aerospace Conference, Big Sky, MT, USA, 11 May 2009.
  7. Sands, T.; Kim, J.; Agrawal, B. Spacecraft Adaptive Control Evaluation. In Proceedings of the Infotech@Aerospace 2012, Garden Grove, CA, USA, 19–21 June 2012.
  8. Garcia, I.; How, J. Trajectory Optimization for Satellite Reconfiguration Maneuvers with Position and Attitude Constraints. In Proceedings of the American Control Conference, Portland, OR, USA, 8–10 June 2005.
  9. Sanyal, A.; Fosbury, A.; Chaturvedi, N.; Bernstein, D. Inertia-free Spacecraft Attitude Trajectory Tracking with Internal-Model-Based Disturbance Rejection and Almost Global Stabilization. In Proceedings of the American Control Conference, St. Louis, MO, USA, 10–12 June 2009.
  10. Yoshimura, Y.; Matsuno, T.; Hokamoto, S. Global trajectory design for position and attitude control of an underactuated satellite. Trans. Jap. Soc. Aero. Space Sci. 2016, 59, 107–114.
  11. Zhang, L.; Yu, C.; Zhang, S.; Cai, H. Optimal attitude trajectory planning method for CMG actuated spacecraft. Proc. Inst. Mech. Eng. Part G J. Aero. Eng. 2018, 232, 131–142.
  12. Li, X.; Warier, R.R.; Sanyal, A.K.; Qiao, D. Trajectory tracking near small bodies using only attitude control. J. Guid. Control. Dyn. 2019, 42, 109–122.
  13. Baker, K.; Cooper, M.; Heidlauf, P.; Sands, T. Autonomous Trajectory Generation for Deterministic Artificial Intelligence. Electr. Electron. Eng. 2018, 8, 59–68.
  14. Sands, T. Development of Deterministic Artificial Intelligence for Unmanned Underwater Vehicles (UUV); MDPI, Multidisciplinary Digital Publishing Institute: Basel, Switzerland, 2020. Available online: https://www.mdpi.com/2077-1312/8/8/578/htm (accessed on 23 December 2021).
  15. Walker, A. Genetic Fuzzy Attitude State Trajectory Optimization for a 3U Cube Sat. Ph.D. Dissertation, University of Cincinnati, Cincinnati, OH, USA, 15 June 2020.
  16. Smeresky, B.; Rizzo, A.; Sands, T. Optimal Learning and Self-Awareness Versus PDI. Algorithms 2020, 13, 23.
  17. Chen, C.; Guo, W.; Wang, P.; Sun, L.; Zha, F.; Shi, J.; Li, M. Attitude Trajectory Optimization to Ensure Balance Hexapod Locomotion. Sensors 2020, 20, 6295.
  18. Zhou, Q.; Liu, X.; Cai, G. Base attitude disturbance minimizing trajectory planning for a dual-arm space robot. Proc. Inst. Mech. Eng. Part G J. Aero. Eng. 2021.
  19. Malyuta, D.; Reynolds, T.; Szmuk, M.; Lew, T.; Bonalli, R.; Pavone, M.; Acikmese, B. Convex Optimization for Trajectory Generation. arXiv 2021, arXiv:2106.09125.
  20. Sin, E.; Arcak, M.; Nag, S.; Ravindra, V.; Li, L.; Levinson, R. Attitude Trajectory Optimization for Agile Satellites in Autonomous Remote Sensing Constellations. In Proceedings of the AIAA Scitech Forum, Virtual Online Event, 4 January 2021.
  21. Sanyal, A.; Fosbury, A.; Chaturvedi, N.; Bernstein, D. Inertia-Free Spacecraft Attitude Tracking with Disturbance Rejection and Almost Global Stabilization. In Proceedings of the American Control Conference, Hyatt Regency Riverfront, St. Louis, MO, USA, 10–12 June 2009.
  22. Walker, A.; Putman, P.; Cohen, K. Solely Magnetic Genetic/Fuzzy-Attitude-Control Algorithm for a CubeSat. J. Space. Rock. 2015, 52, 1627–1639.
  23. Sands, T. Virtual Sensoring of Motion Using Pontryagin’s Treatment of Hamiltonian Systems. Sensors 2021, 21, 4603.
Figure 1. The James Webb Space Telescope positioned at the second Lagrange point, L2, from which attitude maneuvers point the telescope toward distant targets in deep space [1]. Image used consistent with NASA policy: “NASA content (images, videos, audio, etc.) are generally not copyrighted and may be used for educational or informational purposes without needing explicit permissions” [2].
Figure 2. (a) Piecewise sinusoidal trajectory with a slew time/maneuver time of 5 s and a quiescent time of 5 s before and after the slew. (b) A nominal sine wave used to reveal the relationship between points on the curve’s time and phase angle.
Figure 3. Optimal Trajectories from Pontryagin’s Method with time in seconds on the abscissas. (a) angular acceleration ω̇(t) on the ordinate, (b) angular velocity ω(t) on the ordinate, (c) angular displacement θ(t) on the ordinate. Note the similarity of (c) to Figure 2a.
Figure 4. System response of PD control with constant trajectory. (a) Plot of Euler angles during the maneuver. Roll is solid blue, pitch is thick dashed red, yaw is thin dashed yellow. (b) Characteristics of the response. * Note, this run time was found to be much lower due to differences in available processing power and, when compared to re-simulations of some of the above methods, found to be comparable.
Figure 5. Comparison of system response to Pontryagin generated trajectory and Sinusoidal trajectory. Roll angle in degrees is displayed as a solid, blue line; pitch angle as a dashed red line, and yaw angle as a dotted gold line. The top row displays results using Pontryagin, while the bottom row displays results using Sinusoidal. (a) Feedback Control Pontryagin, (b) Feedforward Control Pontryagin, (c) Acceleration Trajectory Pontryagin, (d) Rate Trajectory Pontryagin, (e) State Trajectory Pontryagin, (f) Feedback Control Sinusoidal, (g) Feedforward Control Sinusoidal, (h) Acceleration Trajectory Sinusoidal, (i) Rate Trajectory Sinusoidal, (j) State Trajectory Sinusoidal.
Figure 6. Comparison of True Euler angles during slew, (a,d) roll, (b,e) pitch, and (c,f) yaw. Top row (a–c) using Pontryagin derivation in principal frame, bottom row (d–f) using Pontryagin derivation in non-principal frame. (a) roll using principal frame, y-axis [−1,1.5] (b) pitch in principal frame, y-axis [−4,2], (c) yaw using principal frame, y-axis [0,30], (d) roll using non-principal, y-axis [−1.5 × 10−3,1 × 10−3], (e) pitch using non-principal axis, y-axis [−3 × 10−3, 4 × 10−3], (f) yaw using non-principal, y-axis [0,30].
Table 1. System responses for various controller architectures in response to sinusoidal trajectory for a symmetric spacecraft.
Controller | Final Roll Error | Final Pitch Error | Final Yaw Error | Control Measure | Run Time
Classical Feedforward (FF) 1.1611 × 10 4 1.8036 × 10 5 1.4804 × 10 1 7.421110.6987
PID1.7617 × 10 10 −1.7788 × 10 11 5.3291 × 10 14 7.771811.0032
PD1.7617 × 10 10 −1.7788 × 10 11 5.3291 × 10 14 7.512110.3543
PDI1.7592 × 10 10 −1.783 × 10 11 −4.9919 × 10 11 7.512111.5215
Enhanced PDI1.7592 × 10 10 −1.783 × 10 11 −4.9919 × 10 11 7.512110.9573
Adaptive Feedforward 1.18 × 10 10 1.8098 × 10 5 −2.124 × 10 2 7.332511.9128
DAI1.1761 × 10 10 1.8684 × 10 5 1.4849 × 10 1 7.421210.2718
Classical FF + PID1.7617 × 10 10 −1.7788 × 10 11 5.3291 × 10 14 7.700410.8363
Classical FF + PD1.7617 × 10 10 −1.7788 × 10 11 5.3291 × 10 14 7.45210.6769
Classical FF + PDI1.7592 × 10 10 −1.783 × 10 11 −3.2875 × 10 10 7.45210.7868
Classical FF + Enhanced PDI1.7592 × 10 10 −1.783 × 10 11 −3.2875 × 10 10 7.45210.7328
Adaptive FF + PID1.7617 × 10 10 −1.7788 × 10 11 5.3291 × 10 14 7.700411.038
Adaptive FF + PD1.7617 × 10 10 −1.7788 × 10 11 5.3291 × 10 14 7.45211.4322
Adaptive FF + PDI1.7592 × 10 10 −1.783 × 10 11 −3.2954 × 10 10 7.45210.9991
Adaptive FF + Enhanced PDI1.7592 × 10 10 −1.783 × 10 11 −3.2954 × 10 10 7.45210.9991
DAI + PID 1.7617 × 10 10 −1.7788 × 10 11 5.3291 × 10 14 7.700412.2044
DAI + PD1.7617 × 10 10 −1.7788 × 10 11 5.3291 × 10 14 7.45210.4896
DAI + PDI1.7592 × 10 10 −1.7829 × 10 11 −3.2945 × 10 10 7.45211.211
DAI + Enhanced PDI1.7592 × 10 10 −1.7829 × 10 11 −3.2945 × 10 10 7.45210.6421
Table 2. System responses for various controller architectures in response to Pontryagin based trajectory for a symmetric spacecraft.
Controller | Final Roll Error | Final Pitch Error | Final Yaw Error | Control Measure | Run Time
Classical Feedforward (FF)1.1611 × 10 4 1.8452 × 10 5 −1.8 × 10 1 7.313710.7324
PID1.7617 × 10 10 −1.7788 × 10 11 5.3291 × 10 14 7.688910.6961
PD1.7617 × 10 10 −1.7788 × 10 11 5.3291 × 10 14 7.427210.7384
PDI1.7592 × 10 10 −1.783 × 10 11 −4.9919 × 10 11 7.427210.5332
Enhanced PDI1.7592 × 10 10 −1.783 × 10 11 −4.9919 × 10 11 7.427211.5323
Adaptive Feedforward1.18 × 10 4 1.8098 × 10 5 −2.124 × 10 2 7.332511.9128
DAI1.1761 × 10 4 1.8684 × 10 5 1.4849 × 10 1 7.421210.2718
Classical FF + PID1.7617 × 10 10 −1.7788 × 10 11 5.3291 × 10 14 7.613211.2188
Classical FF + PD1.7617 × 10 10 −1.7788 × 10 11 5.3291 × 10 14 7.372110.5145
Classical FF + PDI1.7592 × 10 10 −1.783 × 10 11 −3.9984 × 10 10 7.372111.6424
Classical FF + Enhanced PDI1.7617 × 10 10 −1.7788 × 10 11 −3.9984 × 10 10 7.372110.9332
Adaptive FF + PID1.7617 × 10 10 −1.7788 × 10 11 5.3291 × 10 14 7.613210.9133
Adaptive FF + PD1.7617 × 10 10 −1.7788 × 10 11 5.3291 × 10 14 7.372111.4322
Adaptive FF + PDI1.7592 × 10 10 −1.783 × 10 11 −3.9918 × 10 10 7.372111.1862
Adaptive FF + Enhanced PDI1.7592 × 10 10 −1.783 × 10 11 −3.9918 × 10 10 7.372110.8416
DAI + PID1.7617 × 10 10 −1.7788 × 10 11 5.3291 × 10 14 7.613211.821
DAI + PD1.7617 × 10 10 −1.7788 × 10 11 5.3291 × 10 14 7.372110.9259
DAI + PDI1.7592 × 10 10 −1.7829 × 10 11 −3.9918 × 10 10 7.372111.8363
DAI + Enhanced PDI1.7592 × 10 10 −1.7829 × 10 11 −3.9918 × 10 10 7.372110.5461
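For reference, the sinusoidal trajectories evaluated in Tables 1 and 2 can be illustrated with a short sketch. The Python snippet below is a minimal example, assuming a rest-to-rest, single-axis slew whose commanded angular acceleration follows one full sine period so that the rate starts and ends at zero and the angle settles at the commanded value; the function name and the 30-degree, 10-second maneuver are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sinusoidal_slew(theta_f_deg, t_f, n=1001):
    """Rest-to-rest single-axis slew with a one-period sinusoidal
    acceleration profile (illustrative sketch, not the paper's code).

    Acceleration: a(t)  = A*sin(2*pi*t/t_f)
    Rate:         w(t)  = (A*t_f/(2*pi))*(1 - cos(2*pi*t/t_f))
    Angle:        th(t) = (A*t_f/(2*pi))*(t - (t_f/(2*pi))*sin(2*pi*t/t_f))
    Choosing A = 2*pi*theta_f/t_f**2 gives th(t_f) = theta_f and w(0) = w(t_f) = 0.
    """
    theta_f = np.deg2rad(theta_f_deg)
    t = np.linspace(0.0, t_f, n)
    A = 2.0 * np.pi * theta_f / t_f**2      # peak angular acceleration
    w0 = 2.0 * np.pi / t_f                  # profile frequency
    accel = A * np.sin(w0 * t)
    rate = A / w0 * (1.0 - np.cos(w0 * t))
    angle = A / w0 * (t - np.sin(w0 * t) / w0)
    return t, angle, rate, accel

# Example: a 30-degree slew completed in 10 seconds
t, angle, rate, accel = sinusoidal_slew(30.0, 10.0)
print(np.rad2deg(angle[-1]))   # ~30 deg at the final time
print(rate[0], rate[-1])       # both ~0 rad/s (rest-to-rest)
```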
Table 3. System responses for various controller architectures in response to the sinusoidal trajectory for a non-symmetric spacecraft.
Controller | Final Roll Error | Final Pitch Error | Final Yaw Error | Control Measure | Run Time
Classical Feedforward | −7.8465 × 10^−3 | 7.4289 × 10^−3 | 1.4755 × 10^−1 | 1683.1175 | 11.3062
PID | 7.02999 × 10^−8 | −1.4242 × 10^−3 | 3.2021 × 10^−7 | 1859.8548 | 11.0474
PD | 4.6554 × 10^−8 | 4.2169 × 10^−3 | −1.2343 × 10^−7 | 1856.8902 | 11.1187
PDI | 1.2405 × 10^−7 | 8.0862 × 10^−8 | −1.2423 × 10^−7 | 1856.8949 | 12.1823
Enhanced PDI | −7.7385 × 10^−5 | −3.8538 × 10^−5 | −1.2461 × 10^−7 | 1854.5367 | 10.8186
Adaptive Feedforward | −46.2241 | 12.0051 | 0.91503 | 1691.4834 | 11.3136
DAI | −23.5631 | −6.9415 | 4.4973 | 1682.6394 | 11.4354
DAI + PID | 5.2592 × 10^−8 | 2.7825 × 10^−8 | −1.0617 × 10^−8 | 1683.4428 | 12.3925
DAI + PD | 5.1693 × 10^−8 | 2.9962 × 10^−8 | −2.7428 × 10^−8 | 1683.1285 | 11.1785
DAI + PDI | −1.2907 × 10^−7 | 6.8226 × 10^−8 | −3.2272 × 10^−8 | 1683.1286 | 11.3506
DAI + Enhanced PDI | −7.738 × 10^−5 | −3.8551 × 10^−5 | −3.2648 × 10^−8 | 1680.7742 | 12.0274
Table 4. System responses for various controller architectures in response to the Pontryagin-derived trajectory for a non-symmetric spacecraft.
Controller | Final Roll Error | Final Pitch Error | Final Yaw Error | Control Measure | Run Time
Classical Feedforward | 5.7025 × 10^−1 | −3.9878 | −4.671 × 10^−1 | 1685.6928 | 10.9308
PID | 6.2024 × 10^−7 | −1.3186 × 10^−6 | 1.0581 × 10^−5 | 6,039,807.0998 | 11.1978
PD | −1.4968 × 10^−7 | 5.0851 × 10^−7 | −3.7909 × 10^−6 | 2,308,873.8017 | 10.7945
PDI | −7.6111 × 10^−7 | 4.6852 × 10^−6 | −3.6072 × 10^−6 | 2,308,895.124 | 12.1382
Enhanced PDI | −7.6111 × 10^−7 | 4.6852 × 10^−6 | −3.6072 × 10^−6 | 2,308,895.125 | 10.7405
Adaptive Feedforward | −41.5959 | 7.5133 | −1.8841 | 1661.3028 | 10.8365
DAI | −16.6137 | −5.5011 | 1.1828 | 1576.6269 | 11.0189
DAI + PID | 6.022 × 10^−7 | −1.2758 × 10^−6 | 1.0244 × 10^−6 | 6,039,357.2503 | 12.3928
DAI + PD | −1.3938 × 10^−7 | 4.8404 × 10^−7 | −3.5984 × 10^−6 | 2,307,705 | 10.6427
DAI + PDI | −7.5074 × 10^−7 | 4.6608 × 10^−6 | −3.408 × 10^−6 | 2,307,726.8977 | 12.251
DAI + Enhanced PDI | −7.5074 × 10^−7 | 4.6608 × 10^−6 | −3.408 × 10^−6 | 2,307,726.8977 | 11.9634
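The Pontryagin-derived trajectories in Tables 2 and 4 can likewise be sketched in simplified form. The snippet below works out one common Pontryagin construction, assuming a single-axis double-integrator model and a quadratic control-effort cost, for which the costate is linear in time, the optimal acceleration is therefore linear in time, and the angle follows a cubic; the full rigid-body formulation used in the paper may differ, so this is a sketch of the idea rather than the authors' algorithm.

```python
import numpy as np

def pontryagin_rest_to_rest(theta_f_deg, t_f, n=1001):
    """Minimum-effort rest-to-rest slew for a single-axis double integrator,
    theta_ddot = u, minimizing the integral of u**2 (illustrative sketch).

    Pontryagin's principle gives u = -lambda_2 with a costate linear in time:
        u(t)     = 6*th_f/t_f**2 - 12*th_f*t/t_f**3
        omega(t) = 6*th_f*t*(t_f - t)/t_f**3
        theta(t) = th_f*(3*(t/t_f)**2 - 2*(t/t_f)**3)
    """
    th_f = np.deg2rad(theta_f_deg)
    t = np.linspace(0.0, t_f, n)
    u = 6.0 * th_f / t_f**2 - 12.0 * th_f * t / t_f**3   # linear acceleration
    omega = 6.0 * th_f * t * (t_f - t) / t_f**3          # parabolic rate
    theta = th_f * (3.0 * (t / t_f) ** 2 - 2.0 * (t / t_f) ** 3)  # cubic angle
    return t, theta, omega, u

t, theta, omega, u = pontryagin_rest_to_rest(30.0, 10.0)
print(np.rad2deg(theta[-1]), omega[0], omega[-1])   # ~30 deg, 0, 0
```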
Table 5. Controller analysis for the Pontryagin-generated trajectory without principal-to-body frame conversion.
Controller | Final Roll Error | Final Pitch Error | Final Yaw Error | Control Measure | Run Time
Classical Feedforward | 1.94649 × 10^−2 | 1.6969 × 10^−2 | −1.7869 × 10^−1 | 1658.712 | 11.9093
PID | 7.3164 × 10^−8 | −2.1048 × 10^−8 | 3.7374 × 10^−7 | 1914.5126 | 12.5761
PD | 4.4187 × 10^−8 | 4.7793 × 10^−8 | −1.6766 × 10^−7 | 1910.2443 | 11.9447
PDI | 1.1956 × 10^−7 | 8.5436 × 10^−8 | −1.6847 × 10^−7 | 1910.2511 | 11.6654
Enhanced PDI | −7.5271 × 10^−5 | −3.7483 × 10^−5 | −1.689 × 10^−7 | 1908.3834 | 13.7759
Adaptive Feedforward | −43.7245 | 11.2064 | 7.9855 × 10^−1 | 1667.4484 | 12.5086
DAI | −22.9724 | −6.3489 | 3.938 | 1658.4612 | 12.3755
DAI + PID | 5.4829 × 10^−8 | 2.2512 × 10^−8 | 3.1165 × 10^−8 | 1665.493 | 12.0513
DAI + PD | 5.4538 × 10^−8 | 2.3202 × 10^−8 | 2.5734 × 10^−8 | 1665.1394 | 11.874
DAI + PDI | 1.2938 × 10^−7 | 6.1282 × 10^−8 | 3.192 × 10^−8 | 1665.1394 | 11.6999
DAI + Enhanced PDI | −7.5261 × 10^−5 | −3.7507 × 10^−5 | 3.1499 × 10^−8 | 1663.2691 | 12.6906
Table 6. Comparison of various methods of cross-coupling elimination in Pontryagin trajectory generation versus sinusoidal trajectory generation for different maneuvers.
Trajectory Generation | Maneuver | Final Roll Error | Final Pitch Error | Final Yaw Error | Control Measure | Run Time
Sinusoidal | [0;0;30] | 5.1819 × 10^−8 | 2.9662 × 10^−8 | −2.5066 × 10^−8 | 1681.0949 | 11.687
Pontryagin in principal frame | [0;0;30] | −1.3882 × 10^−7 | 4.8269 × 10^−7 | −3.5878 × 10^−6 | 2,304,236.6321 | 11.941
Pontryagin in non-principal frame with principal moments | [0;0;30] | 5.4792 × 10^−8 | 2.26 × 10^−8 | 3.0474 × 10^−8 | 1662.3456 | 11.7472
Pontryagin in non-principal frame with non-principal moments | [0;0;30] | 5.4792 × 10^−8 | 2.26 × 10^−8 | 3.0474 × 10^−8 | 1662.3456 | 12.62
Sinusoidal | [30;0;0] | 1.9871 × 10^−8 | 3.1413 × 10^−9 | −1.8097 × 10^−9 | 227.4904 | 11.4441
Pontryagin in principal frame | [30;0;0] | 7.5466 × 10^−8 | 1.1922 × 10^−9 | −6.9042 × 10^−10 | 5.1764 × 10^9 | 11.6651
Pontryagin in non-principal frame with principal moments | [30;0;0] | 1.9871 × 10^−8 | 3.1413 × 10^−9 | −1.8097 × 10^−9 | 225.2765 | 11.6865
Pontryagin in non-principal frame with non-principal moments | [30;0;0] | 1.9871 × 10^−8 | 3.1413 × 10^−9 | −1.8097 × 10^−9 | 225.2765 | 11.952
Sinusoidal | [30;0;30] | 1.9872 × 10^−8 | 3.1322 × 10^−9 | −1.8096 × 10^−9 | 1921.2983 | 11.3724
Pontryagin in non-principal frame | [30;0;30] | 1.2193 × 10^−8 | 1.9152 × 10^−9 | −1.1148 × 10^−9 | 4.7500 × 10^9 | 11.4976
Pontryagin in principal frame | [30;0;30] | 1.9872 × 10^−9 | 3.1322 × 10^−9 | −1.8096 × 10^−9 | 1896.2548 | 11.5732
Pontryagin in non-principal frame with non-principal moments | [30;0;30] | 1.9872 × 10^−8 | 3.1322 × 10^−9 | −1.8096 × 10^−9 | 1896.2548 | 11.4064
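Tables 5 and 6 hinge on how gyroscopic cross-coupling and the principal-to-body frame conversion are handled when the planned rates and accelerations are converted into feedforward torque. The sketch below evaluates the standard rigid-body feedforward, tau = J*omega_dot_d + omega_d × (J*omega_d), once with a diagonal (principal-axis) inertia and once with a full inertia matrix containing products of inertia; the inertia values and rate commands are placeholders, not the spacecraft model from the paper.

```python
import numpy as np

def feedforward_torque(J, omega_d, omega_dot_d):
    """Rigid-body feedforward torque tau = J*omega_dot_d + omega_d x (J*omega_d).
    Dropping the cross-product term ignores gyroscopic cross-coupling."""
    return J @ omega_dot_d + np.cross(omega_d, J @ omega_d)

# Placeholder inertias (kg*m^2); not the spacecraft used in the paper.
J_principal = np.diag([100.0, 120.0, 80.0])
J_body = np.array([[100.0,   5.0,  -2.0],
                   [  5.0, 120.0,   3.0],
                   [ -2.0,   3.0,  80.0]])

omega_d = np.array([0.0, 0.0, 0.05])       # desired body rate (rad/s)
omega_dot_d = np.array([0.0, 0.0, 0.01])   # desired angular acceleration (rad/s^2)

print(feedforward_torque(J_principal, omega_d, omega_dot_d))  # no coupling torque
print(feedforward_torque(J_body, omega_d, omega_dot_d))       # coupling appears off-axis
```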
Table 7. Comparison of PD control with and without the Pontryagin optimal trajectory in the non-principal frame with elevated gains of K_P = 1,000,000 and K_D = 10,000.
Controller | Final Roll Error | Final Pitch Error | Final Yaw Error | Control Measure | Run Time
PD with trajectory | 5.3161 × 10^−9 | 2.6474 × 10^−9 | 3.5527 × 10^−15 | 1687.5 | 9.0155
PD without trajectory | 2.8341 × 10^−8 | 1.416 × 10^−8 | 3.5527 × 10^−15 | 3.4862 × 10^9 | 8.648
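The two rows of Table 7 differ only in the reference that the PD law tracks: with a trajectory, attitude and rate errors are measured against the time-varying commanded profile, whereas without one they are measured against the fixed final angle. The sketch below reproduces that comparison for a single axis using the elevated gains from the caption (K_P = 1,000,000 and K_D = 10,000); the double-integrator plant, placeholder inertia, cubic reference, and semi-implicit Euler integration are illustrative assumptions rather than the simulation used in the paper.

```python
import numpy as np

# Elevated gains from Table 7
KP, KD = 1.0e6, 1.0e4
Jz = 80.0                        # placeholder inertia about the slew axis (kg*m^2)
t_f, dt = 10.0, 0.001
theta_f = np.deg2rad(30.0)

def simulate(track_trajectory):
    theta, omega, cost = 0.0, 0.0, 0.0
    for k in range(int(t_f / dt)):
        t = k * dt
        if track_trajectory:
            # Cubic rest-to-rest reference (see the earlier Pontryagin sketch)
            s = t / t_f
            theta_d = theta_f * (3 * s**2 - 2 * s**3)
            omega_d = 6 * theta_f * t * (t_f - t) / t_f**3
        else:
            theta_d, omega_d = theta_f, 0.0   # step command to the final angle
        u = KP * (theta_d - theta) + KD * (omega_d - omega)   # PD torque
        cost += u**2 * dt                     # simple control-effort proxy
        omega += (u / Jz) * dt                # semi-implicit Euler, double integrator
        theta += omega * dt
    return np.rad2deg(theta_f - theta), cost

print(simulate(True))    # small final error, modest control effort
print(simulate(False))   # large initial torque drives the control effort up
```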
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
