Article

Convex Optimization-Based Techniques for Trajectory Design and Control of Nonlinear Systems with Polytopic Range

Mechanical and Aerospace Engineering Department, Utah State University, Logan, UT 84322, USA
* Author to whom correspondence should be addressed.
Aerospace 2023, 10(1), 71; https://doi.org/10.3390/aerospace10010071
Submission received: 21 October 2022 / Revised: 26 November 2022 / Accepted: 4 January 2023 / Published: 10 January 2023
(This article belongs to the Special Issue Convex Optimization for Aerospace Guidance and Control Applications)

Abstract
This paper presents new techniques for the trajectory design and control of nonlinear dynamical systems. The technique uses a convex polytope to bound the range of the nonlinear function and associates with each vertex an auxiliary linear system. Provided controls associated with the linear systems can be generated to satisfy an ordering constraint, the nonlinear control is computable by the interpolation of controls obtained by convex optimization. This theoretical result leads to two numerical approaches for solving the nonlinear constrained problem: one requires solving a single convex optimization problem and the other requires solving a sequence of convex optimization problems. The approaches are applied to two practical problems in aerospace engineering: a constrained relative orbital motion problem and an attitude control problem. The solve times for both problems and approaches are on the order of seconds. It is concluded that these techniques are rigorous and of practical use in solving nonlinear trajectory design and control problems.

1. Introduction

This paper presents new techniques for the trajectory design and control of nonlinear dynamical systems. Assuming that the state and control are constrained to convex sets, a sufficient condition for the nonlinear control to be computable by interpolation of controls obtained by convex optimization is stated and proved. This theoretical result leads to two numerical approaches for solving the nonlinear constrained problem: one requires solving a single convex optimization problem and the other requires solving a sequence of convex optimization problems.
The approaches are applied to two spacecraft trajectory design and control problems: (1) a spacecraft is constrained to stay a certain distance from another space object in orbit while moving from one location to another in a finite time; (2) a spacecraft is to change its attitude in a finite time. The first problem may arise in the inspection phase of an on-orbit servicing (OOS) mission [1]. For such a mission, the controlled spacecraft must stay far enough from the target space object for safety reasons but also close enough to efficiently perform the inspection. In general, OOS missions are needed to either repair a damaged spacecraft or to extend the active lifetime of a spacecraft. An example of a spacecraft that has had multiple OOS missions to service it is the Hubble Space Telescope [2]. Beyond OOS missions, there are other types of missions where this type of problem could arise, such as asteroid proximity operations [3,4].
The finite-time attitude control problem arises in optical applications where a picture of a space object is to be taken at a certain time [5]. There may, for example, be a short time window when the lighting of a space object is sufficient due to the relative positions of the space object, the controlled spacecraft, and the sun [6,7]. The problem could also arise when a spacecraft wants to perform an impulsive or so-called ’delta-v’ maneuver [8] and its thrusters need to be oriented in the correct direction.
The class of problems studied in this paper has nonlinear dynamics, and there is a large body of literature on feedback control, optimization, and trajectory design for such systems. The next two subsections situate the present work relative to control-related work and optimization/trajectory-design-related work.

1.1. Control Theory

There are multiple techniques to solve nonlinear control problems based on linearization. One of the most common techniques is feedback linearization [9]. This technique, when applicable, allows one to use well-studied linear control techniques on the linearized problem. Feedback linearization is an exact linearization technique because no approximations are needed. However, errors in state estimation, modeling, and time delays may destabilize the system. Because the nonlinear term becomes part of the feedback, incorporating control constraints is challenging.
There are other techniques that produce exact linearizations, such as the ones discussed in [10,11]. In both of these cases, a linear system is achieved by the transformation of variables. The challenge is that these types of exact linearizations exist for only a small subset of dynamical systems, and even when they do, the transformation may not be obvious, requiring the control engineer to spend significant time deriving it.
In addition to exact linearization techniques, there are techniques that approximate the nonlinear system by a linear one. The best known is Jacobian linearization about an operating point. Other techniques include approximation of the Koopman operator and the use of machine learning [12,13,14]. These techniques approximate the nonlinear dynamics by introducing new state variables/lifting functions to arrive at a high-order linear model. The results of this paper leverage ‘auxiliary’ linear systems associated with a nonlinear system; however, neither exact nor approximate linearizations are used.
Yet another technique to handle a nonlinear system is to represent it as a linear parameter-varying (LPV) system [15]. LPV systems have been used to represent nonlinear systems in many applications, such as aerospace, ground vehicle, and robotic control [16]. Controller design for LPV systems usually takes advantage of linear matrix inequalities (LMIs) and convex combinations of linear systems [17]. LPV controller techniques are often used in robust control, where the model has uncertainty and the controller is designed to be robust to these uncertainties. The LPV controller techniques using LMIs are limited to controllers with asymptotic stability, whereas the controller design technique introduced in this paper can be used to solve fixed finite-time problems. A non-LPV technique where the nonlinear system is bounded by linear ones was introduced in [18].
Control design using control Lyapunov functions has been performed for nonlinear control affine systems with ball-type control constraints [19] and polytopic control constraints [20]. The algorithm in [20] minimizes the control magnitude pointwise in time; as such, it requires the solution of a quadratic program (QP) pointwise in time. Similarly, control barrier functions have been used to simultaneously ensure system ‘safety’ and stability, again often requiring pointwise optimization [21]. The technique in this paper requires the solution of only one second-order cone program (SOCP), and then the nonlinear control may be interpolated.

1.2. Optimization and Trajectory Design

Classical approaches for control of linear and nonlinear systems do not account for state and control constraints. When such constraints are present, optimization-based techniques may be used. Most relevant here are the recent results in lossless convexification and sequential convex programming. The fundamental idea in lossless convexification is to relax non-convex constraints to convex form and then prove that the relaxation did not introduce new optimal solutions [22]. Motivating this research have been minimum-fuel-type optimal control problems with linear dynamics and annular control constraints [23]. The earliest papers required a technical assumption related to the transversality condition of optimal control. This assumption has recently been weakened, allowing a broader class of problems, including ones with fixed final time [24]. The theory has also been extended to problems with linear state constraints [25,26], quadratic state constraints [27,28], and disconnected control sets [29]. For problems with state constraints, a key condition in the proofs is related to the strong observability property of the dynamical system [30].
The theory has also been extended to annularly constrained problems with nonlinear dynamics [31]. The relaxation applies only to the control constraint. Consequently, the relaxed problem remains non-convex because of the nonlinear dynamics, and non-convex techniques, such as sequential quadratic programming and sequential convex programming [32,33,34,35], are required to solve the problem. Those techniques rely upon local linearizations/convexifications and iterative updates to converge to local extrema. In sequential convex programming, multiple convex programs must be solved in the process. In contrast, our Theorem 1 provides sufficient conditions for the nonlinear trajectory design and control problem to be solved with a single convex program.

1.3. Contributions

The problem of interest in this paper is a nonlinear control problem with convex state and control constraints. The primary theoretical contribution is a set of sufficient conditions under which the problem is solvable as a single convex program. This is in contrast to:
  • Classical control techniques, which do not account for state and control constraints;
  • Lossless convexifications, which do not generate convex problems when nonlinear dynamics are present;
  • Sequential convex programming, which requires the solution of many convex programs.
Our technique uses a convex polytope to bound the range of the nonlinear function, and associates with each vertex of the polytope an ‘auxiliary’ linear system. The sufficient conditions presented in Theorem 1 guarantee that a feasible control for the nonlinear system is interpolatable from the ‘auxiliary’ linear controls obtained via convex optimization. A conceptual understanding of the proof leads to a ‘resetting approach’ that applies even when the sufficient conditions are violated.
The primary applied contribution is the application of the theory and algorithms to two problems in aerospace engineering. The first problem is a spherically constrained relative orbital motion problem, which has been studied from the lossless convexification [28] and nonlinear dynamics [36] perspectives. The second problem is an attitude control problem using the dynamics of the Euler axis.

1.4. Outline

The remainder of the paper is structured as follows: Section 2 describes the problem of interest, which is a nonlinear control problem with convex state and control constraints, and proves sufficient conditions (see Theorem 1) for the problem to be solvable in a single convex program. A sequential or resetting approach is then outlined for cases in which the sufficient conditions are not satisfied. The theoretical results and associated algorithms are applied to a constrained relative orbital motion problem in Section 3 and an attitude control problem in Section 4. Conclusions are presented in Section 5. Because the paper focuses on discrete-time systems, a particular strategy for discretizing continuous-time systems is presented in Appendix A.

2. Problem and Main Result

This section describes the problem of interest, provides a sufficient condition for its solution as a single convex program, and describes a practical algorithm for implementing the theoretical results. Consider a nonlinear system of the form
$$\dot{x} = Ax + Bu + E\eta(x), \qquad x_0 = x(t_0) \ \text{given} \tag{1}$$
where $A \in \mathbb{R}^{n \times n}$ is the system matrix, $B \in \mathbb{R}^{n \times m}$ is the control-influence matrix, and $E \in \mathbb{R}^{n \times p}$ is a mapping for the nonlinearity $\eta : D \to \mathbb{R}^p$. The system is defined on a spatial domain $D \subseteq \mathbb{R}^n$ and time domain $I = [t_0, t_f]$, where $t_0$ is the initial time and $t_f$ is the final time. The given initial condition is $x_0 = x(t_0)$. It is assumed that the range of the nonlinearity $\eta$ is bounded by a convex polytope as
$$\forall x \in D, \qquad \eta(x) \in \mathrm{co}\{\delta_1, \delta_2, \ldots, \delta_q\} \subset \mathbb{R}^p \tag{2}$$
where $\delta_i \in \mathbb{R}^p$ are the vertices of the polytope and ‘co’ denotes the convex hull. The trajectory design and control problem is to drive the state to the terminal constraint set $X_f \subseteq D$ while maintaining $x \in X \subseteq D$ and $u \in U \subseteq \mathbb{R}^m$. All of the constraint sets $X_f$, $X$, and $U$ are assumed to be convex. The polytopic range assumption is satisfied when $X$ is compact and $\eta$ is continuous since each component of $\eta$ is guaranteed to attain a minimum and maximum. These minima and maxima may then serve as vertices of the polytope. Note that this is a feasibility problem; no objective function is present.
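As a simple illustration of how such a polytope might be obtained in practice, the sketch below estimates a componentwise bounding box for η by sampling the constraint set and takes the box corners as the vertices δ_i. This is only a heuristic (sampling does not certify a bound; interval analysis or global optimization would be needed for a rigorous one), and the callables eta and sample_X are hypothetical placeholders for problem-specific code.

import numpy as np

def estimate_box_vertices(eta, sample_X, n_samples=100000, inflate=0.1):
    """Estimate a bounding box co{delta_1, ..., delta_q} for eta over X by sampling.

    eta      : callable, eta(x) -> array of shape (p,)
    sample_X : callable returning a random state x drawn from the constraint set X
    inflate  : fraction of the sampled spread added on each side as a heuristic margin
    """
    samples = np.array([eta(sample_X()) for _ in range(n_samples)])   # (n_samples, p)
    lo, hi = samples.min(axis=0), samples.max(axis=0)
    spread = hi - lo
    lo, hi = lo - inflate * spread, hi + inflate * spread
    p = samples.shape[1]
    # The q = 2^p vertices of the box are all combinations of the componentwise bounds.
    vertices = np.array([[hi[j] if (i >> j) & 1 else lo[j] for j in range(p)]
                         for i in range(2 ** p)])
    return vertices   # shape (2^p, p); each row is one delta_i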
Though motivated by continuous-time dynamics in aerospace applications, all analysis is conducted on an analogous discrete-time system
$$x[k+1] = A_d x[k] + B_d u[k] + E_d\,\eta(x[k]), \qquad x[0] = x(t_0) \ \text{given} \tag{3}$$
where the discrete-time index is $k \in I_d = \{0, \ldots, N-1\}$. A significant portion of the analysis in this section can be performed in either continuous or discrete time. However, for use as a control-synthesis tool using finite-dimensional optimization, it is best to use a discrete-time formulation. To our knowledge, all results on the lossless convexification of optimal control problems are based on continuous-time formulations and measure-theoretic considerations. As such, the ‘lossless’ guarantees do not hold up in the associated discrete-time algorithms. The use of discrete-time dynamics in the formulation is superior in this respect. A particular discretization strategy used in the forthcoming examples is provided in Appendix A.
Now, consider the auxiliary systems
$$x_i[k+1] = A_d x_i[k] + B_d u_i[k] + E_d\delta_i, \qquad x_i[0] = x_0, \qquad i \in \{1, \ldots, q\} \tag{4}$$
where in the $i$th system the nonlinearity $\eta$ is replaced by the $\delta_i$ vertex of its bounding polytope. Solutions to these linear time-invariant (LTI) difference equations are, for every $k \in \{1, \ldots, N\}$, given by
$$x_i[k] = A_d^k x_0 + \sum_{j=0}^{k-1} A_d^{k-1-j}\big(B_d u_i[j] + E_d\delta_i\big) = A_d^k x_0 + \sum_{j=0}^{k-1}\alpha_i[k,j] \tag{5}$$
where for every $k \in \{1, \ldots, N\}$ and $j \in \{0, \ldots, k-1\}$
$$\alpha_i[k,j] = A_d^{k-1-j}\big(B_d u_i[j] + E_d\delta_i\big). \tag{6}$$
For a given sequence $\{u_i[k]\}_{k=0}^{N-1}$, the sequence $\{\alpha_i[k,j]\}_{j=0}^{k-1}$ becomes known. This fact motivates the following theorem, which is the main theoretical result of the paper.
Theorem 1. 
For each $i \in \{1, \ldots, q\}$, let $\{u_i[k]\}_{k=0}^{N-1}$ be a control sequence that solves the trajectory design and control problem for the $i$th auxiliary system. Let $\alpha_i[k,j]$ be given by (6). If for every $k \in \{1, \ldots, N\}$ and $l \in \{1, \ldots, n\}$ there exist indices $⋎, ⋏ \in \{1, 2, \ldots, q\}$ such that for every $i \in \{1, \ldots, q\}$ and $j \in \{0, \ldots, k-1\}$
$$\alpha^l_{⋎}[k,j] \le \alpha^l_i[k,j], \qquad \alpha^l_{⋏}[k,j] \ge \alpha^l_i[k,j] \tag{7}$$
then the sequence $\{u[k]\}_{k=0}^{N-1}$ with $u[k] = u_1[k]\lambda_1[k] + \cdots + u_q[k]\lambda_q[k]$ solves the trajectory design and control problem for the nonlinear system, where $\lambda_i[k]$ satisfies the interpolation constraint
$$\eta(x[k]) = \delta_1\lambda_1[k] + \cdots + \delta_q\lambda_q[k] \tag{8}$$
and the convex combination constraints
$$\sum_{i=1}^{q}\lambda_i[k] = 1, \qquad 0 \le \lambda_i[k] \le 1. \tag{9}$$
Proof. 
For every $k \in I_d$, define the following quantities:
$$U[k] = \big[u_1[k], \ldots, u_q[k]\big], \qquad \Delta = \big[\delta_1, \ldots, \delta_q\big], \qquad \lambda[k] = \big[\lambda_1[k], \ldots, \lambda_q[k]\big]^\top. \tag{10}$$
For any sequence $\{\lambda[k]\}_{k=0}^{N-1}$ satisfying (9), the linear system
$$x[k+1] = A_d x[k] + B_d U[k]\lambda[k] + E_d\Delta\lambda[k] \tag{11}$$
has a unique solution for every $k \in \{1, \ldots, N\}$ given by
$$x[k] = A_d^k x_0 + \sum_{j=0}^{k-1} A_d^{k-1-j}\big(B_d U[j] + E_d\Delta\big)\lambda[j] = A_d^k x_0 + \sum_{j=0}^{k-1}\big[\alpha_1[k,j], \ldots, \alpha_q[k,j]\big]\lambda[j] = A_d^k x_0 + \sum_{j=0}^{k-1}\beta[k,j] \tag{12}$$
where for every $k \in \{1, \ldots, N\}$ and $j \in \{0, \ldots, k-1\}$
$$\beta[k,j] = \big[\alpha_1[k,j], \ldots, \alpha_q[k,j]\big]\lambda[j]. \tag{13}$$
Because $\beta[k,j]$ is defined as a convex combination of $\alpha_1[k,j], \ldots, \alpha_q[k,j]$, it is known that each component $l \in \{1, 2, \ldots, n\}$ of $\beta[k,j]$ is constrained based on (7) as
$$\forall k \in \{1, \ldots, N\},\ j \in \{0, \ldots, k-1\}, \qquad \alpha^l_{⋎}[k,j] \le \beta^l[k,j] \le \alpha^l_{⋏}[k,j]. \tag{14}$$
Consequently, their sums are ordered in the same way:
$$\forall k \in \{1, \ldots, N\}, \qquad \sum_{j=0}^{k-1}\alpha^l_{⋎}[k,j] \le \sum_{j=0}^{k-1}\beta^l[k,j] \le \sum_{j=0}^{k-1}\alpha^l_{⋏}[k,j]. \tag{15}$$
Adding the $l$th component of $A_d^k x_0$ to each, it follows that for every $k \in \{1, \ldots, N\}$
$$x^l_{⋎}[k] \le x^l[k] \le x^l_{⋏}[k]. \tag{16}$$
That is, elements of the sequence $\{x^l[k]\}_{k=0}^{N}$ are always between elements of the sequences $\{x^l_{⋎}[k]\}_{k=0}^{N}$ and $\{x^l_{⋏}[k]\}_{k=0}^{N}$. From the convexity of $X$ and $X_f$, it is concluded that for every $k \in \{0, \ldots, N\}$ the states $x[k] \in X$ and $x[N] \in X_f$.
Upon choosing, for every $k \in I_d$, $\lambda[k]$ such that (8) is satisfied and $u[k] = U[k]\lambda[k]$, the original discrete-time system (3) is obtained. By convexity of $U$, the interpolated control $u[k] \in U$. That is, the sequence $\{u[k]\}_{k=0}^{N-1}$ solves the nonlinear trajectory design and control problem. □
Conceptually, it is convenient to think of the controls and nonlinearities as non-homogeneous forcing terms whose effects are captured in $\alpha_i[k,j]$; see (6). The theorem requires through (7) that it be possible for the non-homogeneous effects to be bounded componentwise by only two of the possibly many $\alpha_i[k,j]$ terms. For this reason, we refer to (7) as the ‘ordering constraint’. It is this restriction, as detailed in the proof, that allows the nonlinear trajectories to be bounded by the linear ones, thus allowing interpolation for the nonlinear controls.
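To make these quantities concrete, the following Python sketch computes the α_i[k, j] terms of (6) for given auxiliary control sequences and checks the ordering constraint (7) against user-supplied lower and upper index maps (the ⋎ and ⋏ of the theorem). The array layout and function names are illustrative assumptions rather than code from the paper.

import numpy as np

def alpha_terms(Ad, Bd, Ed, deltas, U):
    """alpha_i[k, j] = Ad^(k-1-j) (Bd u_i[j] + Ed delta_i), see (6).

    deltas : (q, p) array of polytope vertices
    U      : (q, N, m) array of auxiliary controls u_i[j]
    Returns alpha with shape (q, N+1, N, n); entries with j >= k are unused (zero).
    """
    q, N, _ = U.shape
    n = Ad.shape[0]
    Ad_pow = [np.linalg.matrix_power(Ad, pw) for pw in range(N)]
    alpha = np.zeros((q, N + 1, N, n))
    for i in range(q):
        forcing = U[i] @ Bd.T + deltas[i] @ Ed.T          # row j is Bd u_i[j] + Ed delta_i
        for k in range(1, N + 1):
            for j in range(k):
                alpha[i, k, j] = Ad_pow[k - 1 - j] @ forcing[j]
    return alpha

def ordering_satisfied(alpha, lower_idx, upper_idx):
    """Check (7): alpha_lower^l <= alpha_i^l <= alpha_upper^l for all i, k, j, l.

    lower_idx, upper_idx : length-n integer arrays giving the vertex index used as the
                           lower and upper bound for each state component l.
    """
    q, Np1, N, n = alpha.shape
    for k in range(1, Np1):
        for j in range(k):
            for l in range(n):
                lo = alpha[lower_idx[l], k, j, l]
                hi = alpha[upper_idx[l], k, j, l]
                if np.any(alpha[:, k, j, l] < lo) or np.any(alpha[:, k, j, l] > hi):
                    return False
    return True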
This paper uses two techniques to design trajectories for nonlinear systems, both based on Theorem 1. The first technique sets (7) as a constraint in an optimization problem. Including this constraint does not add any nonlinearities to the optimization problem. However, it does require the control engineer to pre-determine what the ⋎ and ⋏ indices are for each $l$ and each $k$. In other words, the lower and upper limits on $\alpha$ can vary by the element of $\alpha$ as well as by the time instant $k$; however, they must be pre-determined by the control engineer in this formulation. In the forthcoming examples, this is called the ‘constrained approach’.
The second technique computes a trajectory for the full time domain $I_d$ and then checks if the condition (7) is violated. If it is, the problem is reset at the first $k \in I_d$ where the condition is violated. Not introducing the constraint (7) into the optimization problem makes solving the optimization problem faster; however, this technique does require computing the controller multiple times with a shrinking time horizon (similar to MPC). This technique also does not require the control engineer to pre-determine ⋎ or ⋏, which can be a challenging task, especially for higher-dimensional systems. The algorithm for this technique is given in Algorithm 1 and is referred to as the ‘resetting approach’.
Algorithm 1 Resetting Approach.
Input: $x_0$, $N$, $A_d$, $B_d$, $E_d$, $\Delta$, $\eta$
Output: $u$
1: Set $K = 0$ and $x[0] = x_0$.
2: while $K < N - 1$ do
3:   For each $k \in \{K, K+1, \ldots, N-1\}$ find $u_1[k], u_2[k], \ldots, u_q[k]$.
4:   for $k = K$ to $N-1$ do
5:     For each $j \in \{K, K+1, \ldots, k\}$ compute $\alpha_1[k,j], \alpha_2[k,j], \ldots, \alpha_q[k,j]$.
6:     Determine $\alpha_{⋎}[k]$ and $\alpha_{⋏}[k]$.
7:     if $\exists\, j \in \{K, K+1, \ldots, k\}$ such that $\alpha[k,j] < \alpha_{⋎}[k]$ or $\alpha[k,j] > \alpha_{⋏}[k]$ then
8:       Set $K = k$.
9:       break for
10:    end if
11:    Use (9) to get $\lambda[k]$.
12:    Compute nonlinear control as $u[k] = U[k]\lambda[k]$.
13:    Use $u[k]$ in (3) to get $x[k+1]$.
14:   end for
15: end while
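The listing above translates into the following structural sketch in Python. The interpolation weights λ[k] of (8) and (9) are computed with a linear-programming feasibility solve via SciPy, while solve_auxiliary_controls and ordering_violated are hypothetical placeholders for the problem-specific SOCP solve (Line 3) and the check in Lines 5-7. The sketch illustrates the control flow only and is not the authors' implementation.

import numpy as np
from scipy.optimize import linprog

def interpolation_weights(eta_xk, deltas):
    """Solve (8)-(9) for lambda[k]: deltas.T @ lam = eta(x[k]), sum(lam) = 1, 0 <= lam <= 1."""
    q = deltas.shape[0]
    A_eq = np.vstack([deltas.T, np.ones((1, q))])
    b_eq = np.concatenate([eta_xk, [1.0]])
    res = linprog(c=np.zeros(q), A_eq=A_eq, b_eq=b_eq, bounds=[(0.0, 1.0)] * q)
    if not res.success:
        raise RuntimeError("eta(x[k]) is not contained in co{delta_1, ..., delta_q}")
    return res.x

def resetting_approach(x0, N, Ad, Bd, Ed, deltas, eta,
                       solve_auxiliary_controls, ordering_violated):
    """Structural sketch of Algorithm 1.

    solve_auxiliary_controls(xK, K, N) -> U with U[i, k-K, :] = u_i[k]  (one convex solve)
    ordering_violated(U, K, k)         -> True if Lines 6-7 of Algorithm 1 detect a violation
    Both callables are placeholders for the problem-specific pieces.
    """
    n = Ad.shape[0]
    x = np.zeros((N + 1, n)); x[0] = x0
    u = [None] * N
    K = 0
    while K < N - 1:
        U = solve_auxiliary_controls(x[K], K, N)                      # Line 3
        finished = True
        for k in range(K, N):
            if ordering_violated(U, K, k):                            # Lines 5-7
                K = k                                                 # Line 8
                finished = False
                break                                                 # Line 9
            lam = interpolation_weights(eta(x[k]), deltas)            # Line 11
            u[k] = U[:, k - K, :].T @ lam                             # Line 12
            x[k + 1] = Ad @ x[k] + Bd @ u[k] + Ed @ eta(x[k])         # Line 13
        if finished:
            break
    return u, x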

3. Spherically Constrained Relative Motion Trajectory Design

The approaches developed in Section 2 are now applied to a spherically constrained relative orbital motion problem. Variations of the problem have motivated a recent lossless convexification [28] and nonlinear dynamical analysis [36], which identified periodic and chaotic motion.
The relative motion of two spacecraft in proximity to each other in low earth orbit is described by the Clohessy–Wiltshire (CW) equations [37] as
$$\dot{y} = v, \qquad \dot{v} = M_1 y + M_2 v + \tau \tag{17}$$
where $y \in \mathbb{R}^3$ is the relative position of the spacecraft in the local vertical local horizontal (LVLH) frame, $v \in \mathbb{R}^3$ is the relative velocity of the spacecraft in the LVLH frame, and $\tau \in \mathbb{R}^3$ is the thrust-to-mass ratio. The matrices $M_1$ and $M_2$ are
$$M_1 = \begin{bmatrix} 3\omega^2 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -\omega^2 \end{bmatrix}, \qquad M_2 = \begin{bmatrix} 0 & 2\omega & 0 \\ -2\omega & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \tag{18}$$
where the constant $\omega \in \mathbb{R}$ is the mean motion of the reference orbit. To prevent the vehicles from colliding while keeping them close together, the relative position is constrained to a sphere of radius $R$, i.e., $\|y\| = R$. Because of this constraint, use of a spherical coordinate system is convenient. The transformation from Cartesian to spherical coordinates is given as
$$\begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} = r\begin{bmatrix} \cos(\phi)\cos(\theta) \\ \cos(\phi)\sin(\theta) \\ \sin(\phi) \end{bmatrix} \tag{19}$$
where $r \in \mathbb{R}$ is the radial distance of the spacecraft from the target space object, which in the case of a spherical constraint is constant and equal to $R$. The spherical angles are $\phi \in (-\pi/2, \pi/2)$ and $\theta \in [0, 2\pi]$. An illustration of the coordinate transformation is shown in Figure 1.
The thrust-to-mass ratio may be transformed from Cartesian to spherical coordinates as
$$\begin{bmatrix} u_r \\ u_\theta \\ u_\phi \end{bmatrix} = \begin{bmatrix} \cos(\phi)\cos(\theta) & \cos(\phi)\sin(\theta) & \sin(\phi) \\ -\sin(\theta) & \cos(\theta) & 0 \\ -\sin(\phi)\cos(\theta) & -\sin(\phi)\sin(\theta) & \cos(\phi) \end{bmatrix}\begin{bmatrix} \tau_1 \\ \tau_2 \\ \tau_3 \end{bmatrix}. \tag{20}$$
Using the defined transformations, the CW dynamics in the spherical coordinate system are
$$\ddot{r} = r\dot{\phi}^2 + r\omega^2\big(3\cos^2(\phi)\cos^2(\theta) - \sin^2(\theta)\big) + r\dot{\theta}^2\cos^2(\phi) + 2\omega r\dot{\theta}\cos^2(\phi) + u_r \tag{21a}$$
$$\ddot{\theta} = 2(\dot{\theta} + \omega)\big(\dot{\phi}\tan(\phi) - \dot{r}/r\big) - 3\omega^2\sin(\theta)\cos(\theta) + u_\theta/\big(r\cos(\phi)\big) \tag{21b}$$
$$\ddot{\phi} = -2\dot{\phi}\dot{r}/r - \sin(2\phi)(\dot{\theta} + \omega)^2/2 - 3\omega^2\sin(\phi)\cos(\phi)\cos^2(\theta) + u_\phi/r. \tag{21c}$$
Keeping in mind that the constraint is to stay on a spherical surface centered at the target space object, we know that $r = R$, $\dot{r} = 0$, and $\ddot{r} = 0$. Using this information, (21a) can be solved for the radial control $u_r$ to obtain
$$u_r = -R\left[\dot{\phi}^2 + \omega^2\big(3\cos^2(\phi)\cos^2(\theta) - \sin^2(\theta)\big) + \dot{\theta}^2\cos^2(\phi) + 2\omega\dot{\theta}\cos^2(\phi)\right]. \tag{22}$$
The spherical controls $u_\theta$ and $u_\phi$ may then take any values, and the relative distance between the spacecraft will remain $R$. Furthermore, since $r$ is constant, the $\ddot{r}$ equation does not need to be considered in the dynamics, and the simplified equations for $\ddot{\theta}$ and $\ddot{\phi}$ with $r = R$ and $\dot{r} = 0$ are
$$\ddot{\theta} = 2(\dot{\theta} + \omega)\dot{\phi}\tan(\phi) - 3\omega^2\sin(\theta)\cos(\theta) + u_\theta/\big(R\cos(\phi)\big), \qquad \ddot{\phi} = -\sin(2\phi)(\dot{\theta} + \omega)^2/2 - 3\omega^2\sin(\phi)\cos(\phi)\cos^2(\theta) + u_\phi/R. \tag{23}$$
Now, if a new control vector is defined as $u = [u_\theta/\cos(\phi),\ u_\phi]^\top$ and a state vector is defined as $x = [\theta,\ \phi,\ \dot{\theta},\ \dot{\phi}]^\top$, the above equations can be written in the following form
$$\dot{x} = \begin{bmatrix} 0_{2\times 2} & I_{2\times 2} \\ 0_{2\times 2} & 0_{2\times 2} \end{bmatrix} x + \frac{1}{R}\begin{bmatrix} 0_{2\times 2} \\ I_{2\times 2} \end{bmatrix} u + \begin{bmatrix} 0_{2\times 1} \\ 2(\dot{\theta} + \omega)\dot{\phi}\tan(\phi) - 3\omega^2\sin(\theta)\cos(\theta) \\ -\sin(2\phi)(\dot{\theta} + \omega)^2/2 - 3\omega^2\sin(\phi)\cos(\phi)\cos^2(\theta) \end{bmatrix}. \tag{24}$$
This nonlinear system is in the form of (1) with the system matrices being
$$A = \begin{bmatrix} 0_{2\times 2} & I_{2\times 2} \\ 0_{2\times 2} & 0_{2\times 2} \end{bmatrix}, \qquad B = \frac{1}{R}\begin{bmatrix} 0_{2\times 2} \\ I_{2\times 2} \end{bmatrix}, \qquad E = \begin{bmatrix} 0_{2\times 2} \\ I_{2\times 2} \end{bmatrix} \tag{25}$$
and the nonlinear function η given by
$$\eta(\theta, \phi, \dot{\theta}, \dot{\phi}) = \begin{bmatrix} 2(\dot{\theta} + \omega)\dot{\phi}\tan(\phi) - 3\omega^2\sin(\theta)\cos(\theta) \\ -\sin(2\phi)(\dot{\theta} + \omega)^2/2 - 3\omega^2\sin(\phi)\cos(\phi)\cos^2(\theta) \end{bmatrix}. \tag{26}$$
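For later numerical use, a direct transcription of (26) into Python might read as follows; the argument order, the default mean motion, and the function name are illustrative choices, with angles in rad and rates in rad/h as in the text.

import numpy as np

def eta_relative_motion(theta, phi, theta_dot, phi_dot, omega=4.0):
    """Nonlinearity eta in (26) for the spherically constrained CW dynamics.

    Angles in rad, rates in rad/h, omega (mean motion) in rad/h; output in rad/h^2.
    """
    eta1 = 2.0 * (theta_dot + omega) * phi_dot * np.tan(phi) \
           - 3.0 * omega**2 * np.sin(theta) * np.cos(theta)
    eta2 = -0.5 * np.sin(2.0 * phi) * (theta_dot + omega)**2 \
           - 3.0 * omega**2 * np.sin(phi) * np.cos(phi) * np.cos(theta)**2
    return np.array([eta1, eta2])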
For numerical purposes, the radius $R$ of the sphere is 100 m and the mean motion $\omega$ is 4 rad/h, which corresponds to a low earth orbit with a period of $\pi/2$ h. The dynamics in (24) are discretized according to the procedure in Appendix A with $\Delta t = 0.05$ h to obtain
$$A_d = \begin{bmatrix} 1 & 0 & 0.05 & 0 \\ 0 & 1 & 0 & 0.05 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad B_d = \begin{bmatrix} 0.0125 & 0 \\ 0 & 0.0125 \\ 0.5 & 0 \\ 0 & 0.5 \end{bmatrix}\times 10^{-3}, \qquad E_d = \begin{bmatrix} 0.00125 & 0 \\ 0 & 0.00125 \\ 0.05 & 0 \\ 0 & 0.05 \end{bmatrix}. \tag{27}$$
The initial relative position of the spacecraft is defined as $\theta_0 = 3\pi/4$ rad and $\phi_0 = \pi/4$ rad with initial relative velocity of zero. The desired final relative position of the spacecraft is $\theta_f = \phi_f = 0$ rad and the desired final relative velocity is zero. This maneuver is to be completed in 6 h. With the chosen time step of 0.05 h, $N$ becomes 120. The vertices for the bounding polytope of the nonlinearity are chosen as (all in rad/h²)
$$\delta_1 = \begin{bmatrix} -35 \\ -42 \end{bmatrix}, \qquad \delta_2 = \begin{bmatrix} -35 \\ +42 \end{bmatrix}, \qquad \delta_3 = \begin{bmatrix} +35 \\ -42 \end{bmatrix}, \qquad \delta_4 = \begin{bmatrix} +35 \\ +42 \end{bmatrix}. \tag{28}$$
With this in place, Theorem 1 may be applied to generate feasible solutions to the nonlinear trajectory design and control problem. First, the ‘constrained approach’ is used, which is followed by the ‘resetting approach’. Both of the solutions are compared against a solution that was achieved by solving a non-convex optimization problem using MATLAB’s [38] fmincon solver. The ‘constrained’ and ‘resetting’ solutions are used as initial guesses for the non-convex solver.

3.1. Constrained Approach

In this approach, the constraints in (7) are incorporated into an optimization problem. The second-order cone programming (SOCP) problem to be solved is
$$\min_{u}\ \sum_{i=1}^{4}\left[\sum_{k=0}^{N-1} 10^{-12}\,\|u_i[k]\|^2 + \|x_i[N] - x_f\|^2\right] \tag{29a}$$
$$\text{s.t.}\quad x_i[k+1] = A_d x_i[k] + B_d u_i[k] + E_d\delta_i, \qquad i = 1, \ldots, 4, \quad k = 0, \ldots, N-1 \tag{29b}$$
$$x_i[0] = x_0, \qquad i = 1, \ldots, 4 \tag{29c}$$
Equation (7) with ⋎ = [4, 4, 4, 4] and ⋏ = [2, 3, 2, 3], $k = 1, \ldots, N$. (29d)
The cost function in (29a) has two quadratic terms. The first term penalizes the control inputs of the auxiliary LTI systems. This term is not needed; it serves to regularize or smooth the resulting solutions. The second term penalizes errors in the final state. Weights multiplying the terms have been chosen based on their magnitudes. The control inputs for the auxiliary LTI systems are on the order of $10^4$, whereas the final state error gets very close to zero; therefore, the multiplier for the final state error is chosen to be significantly larger than that for the control inputs. The dynamics of the auxiliary LTI systems are enforced in (29b) with initial conditions in (29c). The ordering constraint (7) of Theorem 1 is enforced in (29d). Note that other orderings can be used, and it is not required for one ordering to be enforced at all $k$. The user is free to choose the ordering.
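For concreteness, a condensed CVXPY transcription of (29) is sketched below. It assumes the discrete matrices of (27) and the vertices of (28), encodes the ordering bounds through integer index vectors (the zero-based counterparts of ⋎ and ⋏ in (29d)), and builds the O(N²) ordering constraints naively; the variable names, solver defaults, and structure are illustrative and differ from the authors' YALMIP/Gurobi implementation.

import numpy as np
import cvxpy as cp

def constrained_approach(Ad, Bd, Ed, deltas, x0, xf, N,
                         lower_idx, upper_idx, control_weight=1e-12):
    n, m = Bd.shape
    q = deltas.shape[0]
    X = [cp.Variable((n, N + 1)) for _ in range(q)]     # auxiliary states x_i[k]
    U = [cp.Variable((m, N)) for _ in range(q)]         # auxiliary controls u_i[k]
    Ad_pow = [np.linalg.matrix_power(Ad, pw) for pw in range(N)]

    cost = 0
    constraints = []
    for i in range(q):
        constraints += [X[i][:, 0] == x0]                                        # (29c)
        for k in range(N):
            constraints += [X[i][:, k + 1] ==
                            Ad @ X[i][:, k] + Bd @ U[i][:, k] + Ed @ deltas[i]]  # (29b)
            cost += control_weight * cp.sum_squares(U[i][:, k])
        cost += cp.sum_squares(X[i][:, N] - xf)                                  # (29a)

    # Ordering constraint (29d): for each component l, vertex lower_idx[l] bounds from
    # below and vertex upper_idx[l] bounds from above, for all i, k, j.
    def alpha(i, k, j):
        return Ad_pow[k - 1 - j] @ (Bd @ U[i][:, j] + Ed @ deltas[i])
    for k in range(1, N + 1):
        for j in range(k):
            a = [alpha(i, k, j) for i in range(q)]
            for l in range(n):
                for i in range(q):
                    constraints += [a[lower_idx[l]][l] <= a[i][l],
                                    a[i][l] <= a[upper_idx[l]][l]]

    prob = cp.Problem(cp.Minimize(cost), constraints)
    prob.solve()
    return [Ui.value for Ui in U], [Xi.value for Xi in X]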
Solving the optimization problem (29) took 1.14 s on a laptop with a 2.30 GHz Intel i7 processor using Gurobi [39] in MATLAB [38] through YALMIP [40]. The same setup was used for all the computations in the following sections as well.
Figure 2 shows the relative angular displacement trajectories of the spacecraft and Figure 3 shows the relative angular velocity trajectories of the spacecraft. In the figures, the linear system trajectories are covered by the actual nonlinear system trajectories; however, the nonlinear system trajectory is between the upper and lower bounds of the linear system trajectories. From inspection of the magnified inset, it is evident that the final angular displacements are on the order of $10^{-4}$ rad, corresponding to a final position error on the centimeter level.

3.2. Resetting Approach

In this section, the constrained relative motion problem is solved using the ’resetting approach’ described in Algorithm 1. The SOCP problem used to find control solutions in Line 3 of Algorithm 1 is the same as (29) but with (29d) removed. The resets are performed as described in Algorithm 1.
The results are shown in Figure 4 and Figure 5. The results are similar to those shown in Figure 2 and Figure 3, except that towards the end of the time horizon, the angular velocity trajectories of the linear systems diverge from the desired final velocity. The noticeable difference is that designing this controller requires the solution of 11 SOCP problems, as there were 10 resets due to violations of the ordering constraint in (7). On average, solving each problem required 0.17 s for a total solve time of 1.87 s. The number of variables in each optimization problem also reduces on each reset as the time horizon gets shorter.

4. Spacecraft Attitude Control

The approaches developed in Section 2 are now applied to a spacecraft attitude control problem. To represent the attitude of a spacecraft, a rotation vector $\theta = \alpha e \in \mathbb{R}^3$ is used. The unit vector $e \in \mathbb{R}^3$ is the Euler rotation axis and $\alpha \in \mathbb{R}$ is the angular displacement. To derive the kinematics of the spacecraft attitude, a quaternion $q \in \mathbb{R}^4$ representing the spacecraft attitude is defined as [41]
$$q = \begin{bmatrix} \frac{1}{\alpha}\sin(\alpha/2)\,\theta \\ \cos(\alpha/2) \end{bmatrix}. \tag{30}$$
Taking the time derivative leads to
$$\dot{q} = \begin{bmatrix} \frac{1}{\alpha}\sin(\alpha/2)\,\dot{\theta} \\ 0 \end{bmatrix} + \begin{bmatrix} \left(-\frac{1}{\alpha^2}\sin(\alpha/2) + \frac{1}{2\alpha}\cos(\alpha/2)\right)\dot{\alpha}\,\theta \\ -\frac{1}{2}\sin(\alpha/2)\,\dot{\alpha} \end{bmatrix}. \tag{31}$$
It is also well-known that the quaternion kinematics can be written as [42]
$$\dot{q} = \frac{1}{2}\begin{bmatrix} q_4\,\omega + q_{1:3}\times\omega \\ -q_{1:3}^\top\omega \end{bmatrix} \tag{32}$$
$$\phantom{\dot{q}} = \frac{1}{2}\begin{bmatrix} \cos(\alpha/2)\,\omega + \frac{1}{\alpha}\sin(\alpha/2)\,\theta\times\omega \\ -\frac{1}{\alpha}\sin(\alpha/2)\,\theta^\top\omega \end{bmatrix} \tag{33}$$
where $\omega \in \mathbb{R}^3$ is the attitude rate of the spacecraft expressed in its body frame. Comparing the last element of (31) and (33) gives an expression for $\dot{\alpha}$ as $\dot{\alpha} = \alpha^{-1}\theta^\top\omega$. Comparing the first three elements of (31) and (33), substituting the expression for $\dot{\alpha}$, and solving for $\dot{\theta}$, gives
$$\dot{\theta} = \left[\frac{1}{\alpha} - \frac{1}{2}\cot(\alpha/2)\right]\frac{1}{\alpha}\,\theta^\top\omega\,\theta + \frac{1}{2}\big[\alpha\cot(\alpha/2)\,\omega + \theta\times\omega\big]. \tag{34}$$
Subtracting and adding $\alpha^{-2}\theta\theta^\top\omega$ on the right-hand side, using the vector triple product, and noting that $\alpha = \|\theta\|$ (provided $0 < \alpha < 2\pi$), simplifies the above to
$$\dot{\theta} = \omega + \frac{1}{2}\theta\times\omega + \left[\frac{1}{\|\theta\|^2} - \frac{1}{2\|\theta\|}\cot(\|\theta\|/2)\right]\theta\times(\theta\times\omega). \tag{35}$$
The attitude dynamics of the spacecraft can be derived from Euler’s equations [42]. Assuming that the only external torque affecting the system is a control torque τ R 3 , the dynamics are
$$\dot{\omega} = J^{-1}\big(-\omega\times J\omega + \tau\big) \tag{36}$$
where $J$ is the moment of inertia matrix of the spacecraft about its center of mass, represented in the body frame. Defining the state vector as $x = [\theta^\top,\ \omega^\top]^\top \in \mathbb{R}^6$, Equations (35) and (36) can be written as
$$\dot{x} = \begin{bmatrix} 0_{3\times 3} & I_{3\times 3} \\ 0_{3\times 3} & 0_{3\times 3} \end{bmatrix} x + \begin{bmatrix} 0_{3\times 3} \\ J^{-1} \end{bmatrix}\tau + \begin{bmatrix} I_{3\times 3} & 0_{3\times 3} \\ 0_{3\times 3} & J^{-1} \end{bmatrix}\begin{bmatrix} \frac{1}{2}\theta\times\omega + \left[\frac{1}{\|\theta\|^2} - \frac{1}{2\|\theta\|}\cot(\|\theta\|/2)\right]\theta\times(\theta\times\omega) \\ -\omega\times J\omega \end{bmatrix}. \tag{37}$$
To simplify the nonlinear effects, it is assumed that the rate vector $\omega$ is known from navigation and available for feedback linearization. Upon defining the control $\tau = u + \omega\times(J\omega)$, the nonlinear system is
$$\dot{x} = \begin{bmatrix} 0_{3\times 3} & I_{3\times 3} \\ 0_{3\times 3} & 0_{3\times 3} \end{bmatrix} x + \begin{bmatrix} 0_{3\times 3} \\ J^{-1} \end{bmatrix} u + \begin{bmatrix} I_{3\times 3} \\ 0_{3\times 3} \end{bmatrix}\left(\frac{1}{2}\theta\times\omega + \left[\frac{1}{\|\theta\|^2} - \frac{1}{2\|\theta\|}\cot(\|\theta\|/2)\right]\theta\times(\theta\times\omega)\right). \tag{38}$$
Feedback linearization has the effect of reducing the dimension of the nonlinearity from six to three, which in turn reduces the dimension of the bounding convex polytope.
This nonlinear system is in the form of (1), with the system matrices being
$$A = \begin{bmatrix} 0_{3\times 3} & I_{3\times 3} \\ 0_{3\times 3} & 0_{3\times 3} \end{bmatrix}, \qquad B = \begin{bmatrix} 0_{3\times 3} \\ J^{-1} \end{bmatrix}, \qquad E = \begin{bmatrix} I_{3\times 3} \\ 0_{3\times 3} \end{bmatrix} \tag{39}$$
and the nonlinear function η given by
$$\eta(\theta, \omega) = \frac{1}{2}\theta\times\omega + \left[\frac{1}{\|\theta\|^2} - \frac{1}{2\|\theta\|}\cot(\|\theta\|/2)\right]\theta\times(\theta\times\omega). \tag{40}$$
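A direct transcription of (40) might read as follows. The guard for the removable singularity at ‖θ‖ → 0, where the bracketed coefficient tends to 1/12, is an implementation choice not discussed in the paper.

import numpy as np

def eta_attitude(theta, omega_body):
    """Nonlinearity eta in (40) for the rotation-vector attitude kinematics.

    theta      : rotation vector (rad), shape (3,)
    omega_body : body angular velocity (rad/s), shape (3,)
    """
    a = np.linalg.norm(theta)
    if a < 1e-6:
        # Removable singularity: 1/a^2 - cot(a/2)/(2a) -> 1/12 as a -> 0.
        coeff = 1.0 / 12.0
    else:
        coeff = 1.0 / a**2 - 1.0 / (2.0 * a * np.tan(a / 2.0))
    return 0.5 * np.cross(theta, omega_body) + coeff * np.cross(theta, np.cross(theta, omega_body))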
For numerical purposes, the spacecraft is assumed to have a cylindrical shape with constant density. The radius of the spacecraft is $r = 0.25$ m, the height is $h = 0.5$ m, and the mass is $m = 100$ kg. The body $z$-axis is defined to be aligned with the axis of the cylinder. The moment of inertia matrix is then given as $J = \mathrm{diag}\{J_{xx}, J_{yy}, J_{zz}\}$, where $J_{xx} = J_{yy} = \frac{1}{4}mr^2 + \frac{1}{12}mh^2$ and $J_{zz} = \frac{1}{2}mr^2$. The dynamics in (37) are discretized according to the procedure in Appendix A with $\Delta t = 0.05$ s to obtain
$$A_d = \begin{bmatrix} 1 & 0 & 0 & 0.05 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0.05 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0.05 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}, \quad B_d = \begin{bmatrix} 0.343 & 0 & 0 \\ 0 & 0.343 & 0 \\ 0 & 0 & 0.4 \\ 13.7 & 0 & 0 \\ 0 & 13.7 & 0 \\ 0 & 0 & 16 \end{bmatrix}\times 10^{-3}, \quad E_d = \begin{bmatrix} 0.05 & 0 & 0 \\ 0 & 0.05 & 0 \\ 0 & 0 & 0.05 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}. \tag{41}$$
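As a numeric check of these matrices, note that the A matrix in (39) is nilpotent (A² = 0), so the integrals in (A3) reduce to closed form: A_d = I + AΔt, B_d has blocks ½Δt²J⁻¹ and ΔtJ⁻¹, and E_d has blocks ΔtI and 0. A short sketch reproducing the values above under that observation:

import numpy as np

# Cylinder inertia (r = 0.25 m, h = 0.5 m, m = 100 kg), body z along the cylinder axis.
r, h, m, dt = 0.25, 0.5, 100.0, 0.05
Jxx = Jyy = 0.25 * m * r**2 + (1.0 / 12.0) * m * h**2      # ~3.646 kg m^2
Jzz = 0.5 * m * r**2                                        # 3.125 kg m^2
J_inv = np.diag([1.0 / Jxx, 1.0 / Jyy, 1.0 / Jzz])

I3, Z3 = np.eye(3), np.zeros((3, 3))
# A in (39) satisfies A @ A = 0, so expm(A*dt) = I + A*dt and the integrals in (A3)
# reduce to closed form.
Ad = np.block([[I3, dt * I3], [Z3, I3]])
Bd = np.vstack([0.5 * dt**2 * J_inv, dt * J_inv])   # top ~0.343e-3, 0.4e-3; bottom ~13.7e-3, 16e-3
Ed = np.vstack([dt * I3, Z3])                       # 0.05 on the top block, zeros below

print(np.round(Bd * 1e3, 3))   # compare with the B_d entries of (41), in units of 1e-3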
The initial attitude of the spacecraft is given by an axis of rotation of $\frac{1}{\sqrt{3}}[1, 1, 1]^\top$ and an angle of $45^\circ$. The desired final attitude is given by an axis of rotation of $\frac{1}{\sqrt{3}}[1, 1, 1]^\top$ and an angle of $90^\circ$. The initial angular velocity of the spacecraft is $[5, 10, 15]^\top$ deg/s and the desired final angular velocity is zero. The maneuver is to be performed in 6 s. With the chosen time step of 0.05 s, $N$ becomes 120. The vertices for the bounding polytope of the nonlinearity are chosen as (all in rad/s)
$$\delta_1 = \begin{bmatrix} -0.1 \\ -0.1 \\ -0.1 \end{bmatrix}, \quad \delta_2 = \begin{bmatrix} -0.1 \\ -0.1 \\ +0.1 \end{bmatrix}, \quad \delta_3 = \begin{bmatrix} -0.1 \\ +0.1 \\ -0.1 \end{bmatrix}, \quad \delta_4 = \begin{bmatrix} -0.1 \\ +0.1 \\ +0.1 \end{bmatrix}, \quad \delta_5 = \begin{bmatrix} +0.1 \\ -0.1 \\ -0.1 \end{bmatrix}, \quad \delta_6 = \begin{bmatrix} +0.1 \\ -0.1 \\ +0.1 \end{bmatrix}, \quad \delta_7 = \begin{bmatrix} +0.1 \\ +0.1 \\ -0.1 \end{bmatrix}, \quad \delta_8 = \begin{bmatrix} +0.1 \\ +0.1 \\ +0.1 \end{bmatrix}. \tag{42}$$
Again, Theorem 1 may be applied to generate feasible solutions to the nonlinear trajectory design and control problem. The ‘constrained’ and ‘resetting’ approaches are compared against a solution that was achieved by solving a non-convex optimization problem using MATLAB’s [38] fmincon solver with the initial guesses obtained from the ‘constrained’ and ‘resetting’ solutions.

4.1. Constrained Approach

In this approach, the constraints in (7) are incorporated into an optimization problem. The second-order cone programming (SOCP) problem to be solved is
$$\min_{u}\ \sum_{i=1}^{8}\left[\sum_{k=0}^{N-1} 10^{-8}\,\|u_i[k]\|^2 + \|x_i[N] - x_f\|^2\right] \tag{43a}$$
$$\text{s.t.}\quad x_i[k+1] = A_d x_i[k] + B_d u_i[k] + E_d\delta_i, \qquad i = 1, \ldots, 8, \quad k = 0, \ldots, N-1 \tag{43b}$$
$$x_i[0] = x_0, \qquad i = 1, \ldots, 8 \tag{43c}$$
Equation (7) with ⋎ = [4, 6, 7, 8, 8, 8] and ⋏ = [8, 8, 8, 4, 6, 7], $k = 1, \ldots, N$. (43d)
Due to the different scaling between the auxiliary system control magnitudes and the errors between the actual and desired final states compared with the relative motion problem of the previous section, the weight on the control term was set to $10^{-8}$. As before, the control penalty is not needed but has a regularizing effect. The dynamics of the auxiliary LTI systems are enforced in (43b) with initial conditions in (43c). The ordering constraint (7) of Theorem 1 is enforced in (43d).
Solving the SOCP problem took 1.09 s. Figure 6 shows the attitude of the spacecraft with the dashed line as the desired final attitude. Figure 7 shows the angular velocity of the spacecraft. It can be seen in these figures that the nonlinear system does not arrive exactly at the desired final state but stays between the upper and lower bounding linear systems. The magnitude of the error between the desired final rotation vector and the final rotation vector of the nonlinear system is 0.107 rad. However, the average magnitude of the error between the desired final rotation vector and the final rotation vectors of the linear auxiliary systems is 0.217 rad. Since the linear auxiliary systems provide the limits for the nonlinear system trajectories, better performance is desired and is investigated as part of the ‘resetting approach’.

4.2. Resetting Approach

In this section, the attitude control problem is solved using the ’resetting approach’ described in Algorithm 1. The SOCP problem used to find control solutions in Line 3 of Algorithm 1 is the same as (43) but with (43d) removed. The resets are performed as described in Algorithm 1.
Figure 8 shows the attitude of the spacecraft with the dashed line as the desired final attitude. Figure 9 shows the angular velocity of the spacecraft. It can be seen in these figures that the resetting controller performs significantly better than the constrained controller of the previous section, especially when comparing the errors between the desired final attitude and the actual final attitude. The magnitude of the error between the desired final rotation vector and the final rotation vector of the nonlinear system is $2.17\times 10^{-3}$ rad. Even the average magnitude of the error between the desired final rotation vector and the final rotation vectors of the auxiliary systems is only $2.9\times 10^{-3}$ rad. These are significantly lower compared with the final attitude errors reached with the constrained controller. The design of this controller required solving 20 SOCP problems with an average solution time of 0.233 s for a total solver time of 4.67 s. The resets are also easily noticeable because of the saw-blade-like linear system trajectories.

5. Summary and Conclusions

Convex optimization-based techniques for trajectory design and control of nonlinear systems with convex state and control constraints were presented. The techniques apply to systems for which the range of the nonlinear dynamics function is contained in a convex polytope. Sufficient conditions for the problem to be solvable by a single convex program were given. The theoretical results were applied to two practical problems in aerospace engineering: a constrained relative orbital motion problem and an attitude control problem. Solution times were on the order of seconds. An alternative ‘resetting approach’ was also introduced that applies to problems not satisfying the sufficient conditions or when the so-called ordering constraint is difficult to implement. This resetting approach requires the solution of multiple convex programs; however, solve times remained on the order of seconds. Both methods were compared against a non-convex optimization solution as well. It is concluded that this technique is rigorous and is a practical means of solving nonlinear trajectory design and control problems efficiently.

Author Contributions

Conceptualization, O.J. and M.W.H.; methodology, O.J. and M.W.H.; software, O.J.; validation, O.J. and M.W.H.; formal analysis, O.J. and M.W.H.; investigation, O.J. and M.W.H.; resources, O.J. and M.W.H.; data curation, O.J. and M.W.H.; writing—original draft preparation, O.J.; writing—review and editing, O.J. and M.W.H.; visualization, O.J.; supervision, M.W.H.; project administration, M.W.H.; funding acquisition, M.W.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Office of Naval Research, grant number N00014-22-1-2131.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

The continuous-time system of interest is
$$\dot{x} = Ax + Bu + E\eta(x). \tag{A1}$$
Upon defining the fundamental matrix $\Phi(t_{k+1}, t_k) = e^{A(t_{k+1} - t_k)}$, it can be shown by direct differentiation that a solution to (A1) on the interval $[t_k, t_{k+1}]$ is
$$x(t_{k+1}) = \Phi(t_{k+1}, t_k)\,x(t_k) + \int_{t_k}^{t_{k+1}}\Phi(t_{k+1}, \tau)\big[Bu(\tau) + E\eta(x(\tau))\big]\,d\tau. \tag{A2}$$
For small time steps $\Delta t = t_{k+1} - t_k$, evaluation of the integral may be approximated by fixing $u$ and $\eta(x)$ at their initial values $u(t_k)$ and $\eta(x(t_k))$, respectively. Upon defining
$$A_d = \Phi(t_{k+1}, t_k), \qquad B_d = \int_{t_k}^{t_{k+1}}\Phi(t_{k+1}, \tau)\,B\,d\tau, \qquad E_d = \int_{t_k}^{t_{k+1}}\Phi(t_{k+1}, \tau)\,E\,d\tau, \tag{A3}$$
a discrete-time approximation of the continuous-time system is
$$x[k+1] = A_d x[k] + B_d u[k] + E_d\,\eta(x[k]), \tag{A4}$$
which is the same as (3).
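One standard way to evaluate (A3) numerically is through an augmented matrix exponential: exponentiating the block matrix [A, [B E]; 0, 0] over Δt returns A_d in the upper-left block and [B_d E_d] in the upper-right block. The sketch below follows that approach and, as a check, reproduces the relative-motion matrices in (27); it is an illustrative implementation choice, not necessarily the discretization code used by the authors.

import numpy as np
from scipy.linalg import expm

def discretize(A, B, E, dt):
    """Discretization of (A1) per (A3), evaluated via an augmented matrix exponential."""
    n = A.shape[0]
    G = np.hstack([B, E])                       # combined forcing matrix [B E]
    M = np.zeros((n + G.shape[1], n + G.shape[1]))
    M[:n, :n] = A
    M[:n, n:] = G
    Phi = expm(M * dt)
    Ad = Phi[:n, :n]
    Bd = Phi[:n, n:n + B.shape[1]]
    Ed = Phi[:n, n + B.shape[1]:]
    return Ad, Bd, Ed

# Example: the relative-motion system of Section 3 (R = 100 m, dt = 0.05 h)
R, dt = 100.0, 0.05
Z2, I2 = np.zeros((2, 2)), np.eye(2)
A = np.block([[Z2, I2], [Z2, Z2]])
B = (1.0 / R) * np.vstack([Z2, I2])
E = np.vstack([Z2, I2])
Ad, Bd, Ed = discretize(A, B, E, dt)   # reproduces the matrices in (27)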

References

1. Flores-Abad, A.; Ma, O.; Pham, K.; Ulrich, S. A review of space robotics technologies for on-orbit servicing. Prog. Aerosp. Sci. 2014, 68, 1–26.
2. Li, W.J.; Cheng, D.Y.; Liu, X.G.; Wang, Y.B.; Shi, W.H.; Tang, Z.X.; Gao, F.; Zeng, F.M.; Chai, H.Y.; Luo, W.B.; et al. On-orbit service (OOS) of spacecraft: A review of engineering developments. Prog. Aerosp. Sci. 2019, 108, 32–120.
3. Tsuda, Y.; Yoshikawa, M.; Abe, M.; Minamino, H.; Nakazawa, S. System design of the Hayabusa 2—Asteroid sample return mission to 1999 JU3. Acta Astronaut. 2013, 91, 356–362.
4. Gaudet, B.; Linares, R.; Furfaro, R. Terminal adaptive guidance via reinforcement meta-learning: Applications to autonomous asteroid close-proximity operations. Acta Astronaut. 2020, 171, 1–13.
5. D’Amico, S.; Benn, M.; Jørgensen, J.L. Pose estimation of an uncooperative spacecraft from actual space imagery. Int. J. Space Sci. Eng. 2014, 2, 171–189.
6. Stastny, N.B.; Geller, D.K. Autonomous optical navigation at Jupiter: A linear covariance analysis. J. Spacecr. Rocket. 2008, 45, 290–298.
7. Bradley, N.; Olikara, Z.; Bhaskaran, S.; Young, B. Cislunar navigation accuracy using optical observations of natural and artificial targets. J. Spacecr. Rocket. 2020, 57, 777–792.
8. Curtis, H. Orbital Mechanics for Engineering Students, 2nd ed.; Butterworth-Heinemann: Oxford, UK, 2009.
9. Khalil, H.K. Nonlinear Systems; Pearson: New York, NY, USA, 2001.
10. Harris, M.W.; Açıkmeşe, B. Maximum divert for planetary landing using convex optimization. J. Optim. Theory Appl. 2014, 162, 975–995.
11. Brunton, S.L.; Brunton, B.W.; Proctor, J.L.; Kutz, J.N. Koopman Invariant Subspaces and Finite Linear Representations of Nonlinear Dynamical Systems for Control. PLoS ONE 2016, 11, e0150171.
12. Yeung, E.; Kundu, S.; Hodas, N. Learning deep neural network representations for Koopman operators of nonlinear dynamical systems. In Proceedings of the 2019 American Control Conference (ACC), Philadelphia, PA, USA, 10–12 July 2019; pp. 4832–4839.
13. Brunton, S.L.; Kutz, J.N. Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control; Cambridge University Press: Cambridge, UK, 2022.
14. Lusch, B.; Kutz, J.N.; Brunton, S.L. Deep learning for universal linear embeddings of nonlinear dynamics. Nat. Commun. 2018, 9, 1–10.
15. Rugh, W.J.; Shamma, J.S. Research on gain scheduling. Automatica 2000, 36, 1401–1425.
16. Sename, O.; Gaspar, P.; Bokor, J. Robust Control and Linear Parameter Varying Approaches: Application to Vehicle Dynamics; Springer: Berlin/Heidelberg, Germany, 2013; Volume 437.
17. He, T. Smooth Switching LPV Control and Its Applications; Michigan State University: East Lansing, MI, USA, 2019.
18. Jansson, O.; Harris, M.W. Nonlinear Control Algorithm for Systems with Convex Polytope Bounded Nonlinearities. In Proceedings of the 2022 Intermountain Engineering, Technology and Computing (IETC), Orem, UT, USA, 13–14 May 2022; pp. 1–6.
19. Malisoff, M.; Sontag, E.D. Universal formulas for feedback stabilization with respect to Minkowski balls. Syst. Control Lett. 2000, 40, 247–260.
20. Pylorof, D.; Bakolas, E. Nonlinear control under polytopic input constraints with application to the attitude control problem. In Proceedings of the 2015 American Control Conference (ACC), Chicago, IL, USA, 1–3 July 2015; pp. 4555–4560.
21. Ames, A.D.; Coogan, S.; Egerstedt, M.; Notomista, G.; Sreenath, K.; Tabuada, P. Control Barrier Functions: Theory and Applications. In Proceedings of the 2019 18th European Control Conference (ECC), Naples, Italy, 25–28 June 2019; pp. 3420–3431.
22. Harris, M.W. Lossless Convexification of Optimal Control Problems; The University of Texas at Austin: Austin, TX, USA, 2014.
23. Açıkmeşe, B.; Blackmore, L. Lossless convexification for a class of optimal control problems with nonconvex control constraints. Automatica 2011, 47, 341–347.
24. Kunhippurayil, S.; Harris, M.W.; Jansson, O. Lossless Convexification of Optimal Control Problems with Annular Control Constraints. Automatica 2021, 133, 109848.
25. Harris, M.W.; Açıkmeşe, B. Lossless convexification of non-convex optimal control problems for state constrained linear systems. Automatica 2014, 50, 2304–2311.
26. Harris, M.W.; Açıkmeşe, B. Lossless Convexification for a Class of Optimal Control Problems with Linear State Constraints. In Proceedings of the 52nd IEEE Conference on Decision and Control, Florence, Italy, 10–13 December 2013.
27. Harris, M.W.; Açıkmeşe, B. Lossless Convexification for a Class of Optimal Control Problems with Quadratic State Constraints. In Proceedings of the 2013 American Control Conference, Washington, DC, USA, 17–19 June 2013.
28. Woodford, N.; Harris, M.W. Geometric Properties of Time Optimal Controls with State Constraints Using Strong Observability. IEEE Trans. Autom. Control 2022, 67, 6881–6887.
29. Harris, M.W. Optimal Control on Disconnected Sets using Extreme Point Relaxations and Normality Approximations. IEEE Trans. Autom. Control 2021, 66, 6063–6070.
30. Kunhippurayil, S.; Harris, M.W. Strong observability as a sufficient condition for non-singularity and lossless convexification in optimal control with mixed constraints. Control Theory Technol. 2022, 20, 475–487.
31. Blackmore, L.; Açıkmeşe, B.; Carson, J.M. Lossless convexification of control constraints for a class of nonlinear optimal control problems. Syst. Control Lett. 2012, 61, 863–871.
32. Lu, P.; Liu, X. Autonomous trajectory planning for rendezvous and proximity operations by conic optimization. J. Guid. Control Dyn. 2013, 36, 375–389.
33. Lu, P. Introducing computational guidance and control. J. Guid. Control Dyn. 2017, 40, 193.
34. Liu, X.; Lu, P.; Pan, B. Survey of convex optimization for aerospace applications. Astrodynamics 2017, 1, 23–40.
35. Liu, X.; Lu, P. Solving nonconvex optimal control problems by convex optimization. J. Guid. Control Dyn. 2014, 37, 750–765.
36. Harris, M.W.; Woodford, N.T. Equilibria, Periodicity, and Chaotic Behavior in Spherically Constrained Relative Orbital Motion. Nonlinear Dyn. 2023, 111, 1–17.
37. Clohessy, W.; Wiltshire, R. Terminal guidance system for satellite rendezvous. J. Aerosp. Sci. 1960, 27, 653–658.
38. MATLAB 2021a; The Mathworks, Inc.: Natick, MA, USA, 2021.
39. Gurobi Optimization, LLC. Gurobi Optimizer Reference Manual; Gurobi Optimization, LLC: Houston, TX, USA, 2022.
40. Löfberg, J. YALMIP: A Toolbox for Modeling and Optimization in MATLAB. In Proceedings of the CACSD Conference, Taipei, Taiwan, 2–4 September 2004.
41. Shuster, M.D. The kinematic equation for the rotation vector. IEEE Trans. Aerosp. Electron. Syst. 1993, 29, 263–267.
42. Markley, F.L.; Crassidis, J.L. Fundamentals of Spacecraft Attitude Determination and Control; Springer: New York, NY, USA, 2014; Volume 1286.
Figure 1. Coordinate transformation of relative coordinates.
Figure 2. Angular displacement trajectories of the spacecraft. The black curves are the angular displacements of the actual nonlinear system, whereas the gray curves are the angular displacements of the auxiliary linear systems. The dashed line is the desired final position and the dotted line a solution using a non-convex solver.
Figure 3. Angular velocity trajectories of the spacecraft. The black curves are the angular velocities of the actual nonlinear system, whereas the gray curves are the angular velocities of the auxiliary linear systems. The dashed line is the desired velocity at final time and the dotted line a solution using a non-convex solver.
Figure 4. Angular displacement trajectories of the spacecraft. The black curves are the angular displacements of the actual nonlinear system, whereas the gray curves are the angular displacements of the auxiliary linear systems. The dashed line is the desired final position and the dotted line a solution using a non-convex solver.
Figure 5. Angular velocity trajectories of the spacecraft. The black curves are the angular velocities of the actual nonlinear system, whereas the gray curves are the angular velocities of the auxiliary linear systems. The dashed line is the desired velocity at final time and the dotted line a solution using a non-convex solver.
Figure 6. Attitude of the spacecraft. The black curves are the attitude of the actual nonlinear system, whereas the gray curves are the attitudes of the auxiliary linear systems. The dashed line is the desired final attitude and the dotted line a solution using a non-convex solver.
Figure 7. Angular velocity of the spacecraft. The black curves are the angular velocities of the actual nonlinear system, whereas the gray curves are the angular velocities of the auxiliary linear systems. The dashed line is the desired angular velocity and the dotted line a solution using a non-convex solver.
Figure 8. Attitude of the spacecraft. The black curves are the attitude of the actual nonlinear system, whereas the gray curves are the attitudes of the auxiliary linear systems. The dashed line is the desired final attitude and the dotted line a solution using a non-convex solver.
Figure 9. Angular velocity of the spacecraft. The black curves are the angular velocity of the actual nonlinear system, whereas the gray curves are the angular velocities of the auxiliary linear systems. The dashed line is the desired angular velocity and the dotted line a solution using a non-convex solver.