Article

Nonlinear Simulation and Performance Characterisation of an Adaptive Model Predictive Control Method for Booster Separation and Re-Entry

Joseph Chai and Erkan Kayacan
1 School of Mechanical and Mining Engineering, University of Queensland, Brisbane 4072, Australia
2 School of Aerospace and Mechanical Engineering, University of Oklahoma, Norman, OK 73019, USA
* Author to whom correspondence should be addressed.
Electronics 2023, 12(6), 1488; https://doi.org/10.3390/electronics12061488
Submission received: 6 January 2023 / Revised: 13 March 2023 / Accepted: 20 March 2023 / Published: 21 March 2023

Abstract

This paper evaluates the L1 adaptive model predictive control (AMPC-L1) method in terms of its control performance and computational load. The control performance is assessed on the basis of the nonlinear simulation of a fly-back booster conducting stage separation and re-entry, and compared to baseline nonadaptive MPC and a pole placement controller in both longitudinal and lateral control tasks. Simulation results show that AMPC-L1 exhibits superior control performance under nominal conditions and under aerodynamic and guidance law uncertainties. The computational load of AMPC-L1 is also evaluated on an embedded platform to demonstrate that AMPC-L1 preserves the efficiency properties of AMPC while improving its performance.

1. Introduction

Adaptive control and model predictive control (MPC) are two historically separate areas of control theory that provide different performance benefits to the control problem of an uncertain system. At each control update, MPC solves a finite-horizon optimal control problem with a receding horizon principle. The solution to the optimal control problem is the control input calculated on the basis of the prediction of future states [1,2]. MPC is a control method with predictive capability and superior constraint handling compared to those of classical PID controllers, thereby improving control performance [3,4]. As a form of optimal control, MPC seeks minimal tracking errors while abiding by the constraints of the system model [5,6]. Moreover, the predictive element of MPC means that the controller acts pre-emptively to squash tracking errors instead of waiting to react to errors as in PID control. An excellent review of MPC studies in aerospace systems was provided by Eren et al. [7]. With respect to re-entry vehicles, Van Soest et al. implemented MPC for the re-entry simulation of the X-38 unpowered crew return vehicle (CRV) at 10 Hz [8], and Pascucci et al. implemented MPC for a re-entry vehicle undergoing powered descent using thrust vectoring [9].
The field of adaptive control addresses a different issue. The inherent presence of uncertainties in a system model results in a mismatch between the model that is used to synthesise the control algorithm and the real system, resulting in performance degradation for classical and optimal control methods [10,11,12,13]. Adaptive control seeks to compensate for uncertainties in the system model by either (i) estimating the system model explicitly and modifying the control law accordingly or (ii) estimating the representative matched and unmatched uncertainties present in the system, and compensating for them in the control law [14,15,16,17,18]. Both categories aim to retain the nominal design performance of the nonadaptive controller in the presence of model uncertainties.
Adaptive control has seen much activity within the aerospace community. Banerjee applied an L1 augmentation to a pole placement controller for the longitudinal dynamics of a hypersonic glider; simulation studies found that the adaptive augmentation improved control performance [19]. An optimal control modification was applied to a model reference adaptive controller (MRAC) for a fighter aircraft [20]; compared to the baseline controller and simple MRAC, the optimal control modification performed better. Zhang et al. presented a sliding-mode adaptive control algorithm based on a backstepping approach [21]; in that study, the sliding-mode control law was used to suppress the effects of parameter variations and disturbances in the attitude rate loop. Liu et al. designed a high-performance adaptive controller for hypersonic flight vehicles, focusing on parametric uncertainty and asymptotic tracking [22]; the simulation results showed that the developed control technique could improve the transient response of hypersonic flight vehicles compared to other adaptive strategies.
Despite the different historical roots of adaptive control and MPC, there have been many attempts to derive algorithms that combine elements of the two, resulting in adaptive MPC. Various forms of adaptive MPC exist in the literature that have been applied to aerospace control problems. Chowdhary et al. proposed a concurrent learning algorithm combined with constrained model predictive control [23,24]. Feedback linearisation is used to linearise a nonlinear aircraft wing-rock model. Concurrent learning (CL) is a modified direct model reference adaptive control (MRAC) scheme that relaxes the persistent excitation requirement by utilising the long-term memory of past parameter estimates and tracking errors. The CL controller is used online to identify the plant parameters. Upon convergence, a switching rule is used to switch to an MPC control algorithm that utilises the estimated plant parameters in its optimisation routine. Improvements to roll tracking performance were observed once concurrent learning converged and MPC was switched on. If the parameters deviate substantially while MPC is active, the algorithm switches back to concurrent learning to relearn the correct parameters.
Mehndiratta et al. presented a learning-based adaptive nonlinear MPC (NMPC) implementation for a ground robot and a small unmanned aerial vehicle (UAV) with experimental results [25]. The learning algorithm was a nonlinear moving horizon estimator (NMHE) that estimated the state variables and plant parameters. The estimated state and parameters were then used in the NMPC algorithm to compute the optimal control input. For the UAV experiment, the NMHE estimated the velocities and mass of the vehicle. The estimated velocities and mass were used in the position hold NMPC to compute the desired vertical force and attitude angles. Experimental results showed that, with the NMHE updating the parameter estimates, the UAV tracked the desired trajectory with less tracking error than that in the case with NMHE off, especially in altitude.
Pereida and Schoellig presented an adaptive MPC scheme using L 1 adaptive control for the trajectory tracking of an agile quadrotor [26]. The implementation of the L 1 adaptive control method estimated the matched uncertainties of the system using an adaptive law, but did not account for unmatched uncertainties. MPC was used to generate reference commands that were augmented with the L 1 adaptive control law. This resulted in improved experimental performance in contrast to that of nonadaptive and nonpredictive methods despite dynamic environmental disturbances.
While MPC provides many performance benefits, autonomous aerospace vehicles impose form factor requirements on computing hardware, which has driven research into reducing the computational complexity of MPC. Explicit MPC involves precomputing solutions to the optimal control problem offline and storing them in multidimensional look-up tables; it has been applied to a spacecraft attitude control problem [27]. Another approach to reducing the computational requirements of MPC is to reduce the number of prediction points over the finite horizon. Algebraic MPC is a form of efficient MPC first proposed by Gibbens and Medagoda that reduces the computational burden of conventional MPC by replacing the Taylor series expansion of the state transition matrix with an eigendecomposition, allowing for the exact evaluation of the state transition [28]. This eliminates the truncation error associated with large time steps in the Taylor series approximation, thereby allowing for a reduced number of prediction time steps without the usual penalties.
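To make the difference between the two evaluations of the state transition matrix concrete, the sketch below computes Φ(δt) exactly via an eigendecomposition and approximately via a truncated Taylor series, using Eigen3 (the library listed in Table 6). The matrix A, the prediction step, and the truncation order are illustrative placeholders, not values from the booster model.

```cpp
#include <Eigen/Dense>
#include <complex>
#include <iostream>

// Exact state transition Phi(dt) = S * diag(exp(lambda_i * dt)) * S^-1,
// assuming A is diagonalisable.
Eigen::MatrixXd phiEigen(const Eigen::MatrixXd& A, double dt) {
    Eigen::EigenSolver<Eigen::MatrixXd> es(A);
    Eigen::MatrixXcd S = es.eigenvectors();
    Eigen::VectorXcd lam = es.eigenvalues();
    Eigen::VectorXcd expLam(lam.size());
    for (Eigen::Index i = 0; i < lam.size(); ++i)
        expLam(i) = std::exp(lam(i) * dt);
    return (S * expLam.asDiagonal() * S.inverse()).real();
}

// Truncated Taylor series: I + A*dt + (A*dt)^2/2! + ... up to the given order.
Eigen::MatrixXd phiTaylor(const Eigen::MatrixXd& A, double dt, int order) {
    Eigen::MatrixXd phi = Eigen::MatrixXd::Identity(A.rows(), A.cols());
    Eigen::MatrixXd term = phi;
    for (int k = 1; k <= order; ++k) {
        term = term * A * dt / static_cast<double>(k);
        phi += term;
    }
    return phi;
}

int main() {
    Eigen::MatrixXd A(2, 2);
    A << -0.5,  1.0,
         -4.0, -0.8;           // placeholder second-order dynamics
    const double dt = 0.5;      // long prediction step where truncation error matters
    std::cout << "Eigendecomposition:\n" << phiEigen(A, dt) << "\n";
    std::cout << "2nd-order Taylor:\n"   << phiTaylor(A, dt, 2) << "\n";
}
```

For short prediction steps the two agree closely; as the step grows, the truncated series drifts while the eigendecomposition remains exact, which is what allows algebraic MPC to use fewer, longer prediction steps.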
Chai et al. proposed an adaptive and efficient MPC scheme based on algebraic MPC and L1 adaptive control, and applied the algorithm to a fly-back booster re-entry longitudinal linear time-varying (LTV) model [29]. AMPC-L1 combines the efficiency benefits of AMPC with the performance guarantees of L1 adaptive control in the presence of uncertainties. AMPC-L1 retains the nominal design performance of AMPC despite disturbances and model mismatch, while baseline AMPC experiences degraded performance. Although Chai et al. presented the derivation of AMPC-L1 and demonstrated the performance of the algorithm on an LTV model, this paper extends the existing research by: (i) including stage separation in the mission scenario, (ii) using a high-fidelity nonlinear simulation model with both longitudinal and lateral dynamics, and (iii) describing the implementation of AMPC-L1 on an embedded Linux computer. Specifically, the contributions of this paper are as follows:
  • Presenting an AMPC- L 1 controller design for longitudinal and lateral control of a re-entry vehicle undergoing stage separation and re-entry maneuvers.
  • Evaluating AMPC- L 1 against baseline AMPC and pole placement through nonlinear simulation of booster separation and re-entry with modeling and input uncertainties.
  • Presenting an overview of a software implementation of AMPC- L 1 .
  • Demonstrating AMPC- L 1 implemented for a representative embedded computer and characterising the computational expense.

2. Fly-Back Booster Dynamic Model

The fly-back booster model is based on a dynamic aircraft model provided by Zipfel with different vehicle-specific aerodynamics, gravimetrics, and mission profiles [30]. The fly-back booster is intended as the first stage for a second-stage scramjet-powered accelerator [31]. More details on the development of the fly-back booster model may be found in [32,33].
The system states of a fly-back booster during stage separation and re-entry are position (s_N, s_E, s_D), velocity (u, v, w), attitude (ϕ, θ, ψ), and body rates (p, q, r). The nonlinear system state dynamics equations are as follows:
\[
\begin{bmatrix} \dot{u} \\ \dot{v} \\ \dot{w} \end{bmatrix} =
\begin{bmatrix} 0 & r & -q \\ -r & 0 & p \\ q & -p & 0 \end{bmatrix}
\begin{bmatrix} u \\ v \\ w \end{bmatrix} +
\frac{1}{m} \left( \begin{bmatrix} f_{a,x} \\ f_{a,y} \\ f_{a,z} \end{bmatrix} +
T_{lb} \begin{bmatrix} 0 \\ 0 \\ mg \end{bmatrix} \right),
\]
\[
\begin{bmatrix} \dot{s}_N \\ \dot{s}_E \\ \dot{s}_D \end{bmatrix} =
\left( T_{lb} \right)^{T} \begin{bmatrix} u \\ v \\ w \end{bmatrix},
\]
\[
\begin{bmatrix} \dot{p} \\ \dot{q} \\ \dot{r} \end{bmatrix} =
\begin{bmatrix} I_{11} & I_{12} & I_{13} \\ I_{12} & I_{22} & I_{23} \\ I_{13} & I_{23} & I_{33} \end{bmatrix}^{-1}
\left(
\begin{bmatrix} 0 & r & -q \\ -r & 0 & p \\ q & -p & 0 \end{bmatrix}
\begin{bmatrix} I_{11} & I_{12} & I_{13} \\ I_{12} & I_{22} & I_{23} \\ I_{13} & I_{23} & I_{33} \end{bmatrix}
\begin{bmatrix} p \\ q \\ r \end{bmatrix} +
\begin{bmatrix} m_{a,x} \\ m_{a,y} \\ m_{a,z} \end{bmatrix}
\right),
\]
\[
\begin{bmatrix} \dot{\phi} \\ \dot{\theta} \\ \dot{\psi} \end{bmatrix} =
\begin{bmatrix} 1 & \sin\phi\tan\theta & \cos\phi\tan\theta \\ 0 & \cos\phi & -\sin\phi \\ 0 & \sin\phi / \cos\theta & \cos\phi / \cos\theta \end{bmatrix}
\begin{bmatrix} p \\ q \\ r \end{bmatrix},
\]
where I is the inertia matrix, m is the mass, g is the gravitational acceleration, f_a is the vector of aerodynamic forces, m_a is the vector of aerodynamic moments, and T_lb is the transformation matrix from local-level to body coordinates. Written in full, T_lb is expressed as follows:
\[
T_{lb} =
\begin{bmatrix}
\cos\psi\cos\theta & \sin\psi\cos\theta & -\sin\theta \\
\cos\psi\sin\theta\sin\phi - \sin\psi\cos\phi & \sin\psi\sin\theta\sin\phi + \cos\psi\cos\phi & \cos\theta\sin\phi \\
\cos\psi\sin\theta\cos\phi + \sin\psi\sin\phi & \sin\psi\sin\theta\cos\phi - \cos\psi\sin\phi & \cos\theta\cos\phi
\end{bmatrix}.
\]
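The following is a minimal helper, written against Eigen3 (the library listed in Table 6), that builds T_lb from the Euler angles exactly as written above; it is an illustrative sketch rather than code from the paper.

```cpp
#include <Eigen/Dense>
#include <cmath>

// Local-level-to-body transformation for a 3-2-1 (yaw, pitch, roll) Euler sequence.
Eigen::Matrix3d transformLocalToBody(double phi, double theta, double psi) {
    const double cphi = std::cos(phi),   sphi = std::sin(phi);
    const double cth  = std::cos(theta), sth  = std::sin(theta);
    const double cpsi = std::cos(psi),   spsi = std::sin(psi);
    Eigen::Matrix3d T;
    T << cpsi * cth,                       spsi * cth,                       -sth,
         cpsi * sth * sphi - spsi * cphi,  spsi * sth * sphi + cpsi * cphi,   cth * sphi,
         cpsi * sth * cphi + spsi * sphi,  spsi * sth * cphi - cpsi * sphi,   cth * cphi;
    return T;
}
```

The navigation equation above then uses the transpose of this matrix to rotate the body-frame velocity (u, v, w) into local-level coordinates.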
As the booster was unpowered during stage separation and re-entry, no propulsive forces were accounted for in this model. The aerodynamic forces and moments were defined using the following equations:
\[
\begin{bmatrix} f_{a,x} \\ f_{a,y} \\ f_{a,z} \end{bmatrix} = \bar{q}\, S \begin{bmatrix} C_X \\ C_Y \\ C_Z \end{bmatrix}, \quad \text{and}
\]
\[
\begin{bmatrix} m_{a,x} \\ m_{a,y} \\ m_{a,z} \end{bmatrix} = \bar{q}\, S\, b \begin{bmatrix} C_l \\ C_m \\ C_n \end{bmatrix},
\]
where C_X, C_Y, and C_Z are the aerodynamic force coefficients, and C_l, C_m, and C_n are the aerodynamic moment coefficients, computed from a combination of look-up tables and aerodynamic derivatives via a first-order Taylor series expansion. q̄ is the dynamic pressure, S is the reference area, and b is the reference length for the lateral coefficients. The aerodynamic derivatives were stored in look-up tables as functions of Mach number (M) and angle of attack (α). The equations for the aerodynamic coefficients are as follows:
\[
C_X = C_{X_0}(M,\alpha) + C_{X_{\delta_e}}(M,\alpha)\,|\delta_e| + \frac{c}{2V} C_{X_q}(M,\alpha)\, q,
\]
\[
C_Y = C_{Y_0}(M,\alpha) + C_{Y_{\delta_r}}(M,\alpha)\,\delta_r + C_{Y_\beta}(M,\alpha)\,\beta + \frac{b}{2V}\big( C_{Y_r}(M,\alpha)\, r + C_{Y_p}(M,\alpha)\, p \big),
\]
\[
C_Z = C_{Z_0}(M,\alpha) + C_{Z_{\delta_e}}(M,\alpha)\,\delta_e + C_{Z_\beta}(M,\alpha)\,\beta + \frac{c}{2V} C_{Z_q}(M,\alpha)\, q,
\]
\[
C_l = C_{l_0}(M,\alpha) + C_{l_\beta}(M,\alpha)\,\beta + C_{l_{\delta_a}}(M,\alpha)\,\delta_a + C_{l_{\delta_r}}(M,\alpha)\,\delta_r + \frac{b}{2V}\big( C_{l_r}(M)\, r + C_{l_p}(M)\, p \big),
\]
\[
C_m = C_{m_0}(M,\alpha) + C_{m_{\delta_e}}(M,\alpha)\,\delta_e + \frac{c}{2V} C_{m_q}(M,\alpha)\, q,
\]
\[
C_n = C_{n_0}(M,\alpha) + C_{n_\beta}(M,\alpha)\,\beta + C_{n_{\delta_r}}(M,\alpha)\,\delta_r + \frac{b}{2V}\big( C_{n_r}(M,\alpha)\, r + C_{n_p}(M)\, p \big),
\]
where V is the vehicle velocity magnitude, c is the reference length for the longitudinal coefficients, and β is the side-slip angle. The control inputs are the aileron (δ_a), elevator (δ_e), and rudder (δ_r) deflections.
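As an illustration of the coefficient build-up, the sketch below assembles the pitching-moment coefficient C_m from derivative values that have already been interpolated from the Mach/angle-of-attack look-up tables. The LongitudinalDerivs struct and function names are hypothetical; the paper does not specify its implementation.

```cpp
// Interpolated longitudinal derivatives at the current (M, alpha).
struct LongitudinalDerivs {
    double Cm0;    // C_m0(M, alpha)
    double CmDe;   // C_m_delta_e(M, alpha)
    double CmQ;    // C_m_q(M, alpha)
};

// C_m = C_m0 + C_m_delta_e * delta_e + (c / (2V)) * C_m_q * q
double pitchingMomentCoefficient(const LongitudinalDerivs& d,
                                 double deltaE,  // elevator deflection [rad]
                                 double q,       // pitch rate [rad/s]
                                 double V,       // velocity magnitude [m/s]
                                 double c)       // longitudinal reference length [m]
{
    return d.Cm0 + d.CmDe * deltaE + (c / (2.0 * V)) * d.CmQ * q;
}
```

The remaining coefficients follow the same pattern, with the lateral terms scaled by b/(2V) rather than c/(2V).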

3. Controller Design

In this section, the controllers are presented for both the longitudinal and the lateral cases using the AMPC- L 1 control algorithm, which consists of a state predictor, the L 1 adaptive law, the L 1 control law, and the AMPC optimal control law.

3.1. Summary of AMPC- L 1 Algorithm

An overview of the AMPC-L1 architecture is shown in Figure 1; the details of the AMPC-L1 algorithm are found in [29]. The dashed box shows the L1 adaptive control structure, which interacts with AMPC via the desired closed-loop dynamics matrix A_m, the reference y_r, and the state x(t).
The main results from the AMPC- L 1 derivation are summarised below in algorithm form and may be interpreted as pseudocode. The full derivation and list of assumptions associated with AMPC- L 1 are not repeated here, but may be found in [29]. Algorithm 1 describes the required computation for a single update of AMPC- L 1 on the k-th iteration. It was assumed that, prior to these computations, the system dynamics matrices were updated on the basis of the current flight condition.
Algorithm 1: A single update of AMPC-L1 for the k-th iteration
Data: Reference command ($y_r$), current state ($x_k$), state prediction from the previous update ($\hat{x}_k$), unmatched state from the previous update ($x_{um,k}$)
Result: Optimal control input with uncertainty compensation ($u_{full}$), updated state prediction ($\hat{x}_{k+1}$), updated unmatched state ($x_{um,k+1}$)
Compute the AMPC optimal gain K:
  • $\Phi(\delta t) = S \,\mathrm{diag}(e^{\lambda_1 \delta t}, \ldots, e^{\lambda_k \delta t})\, S^{-1}$
  • $G \leftarrow C A^{-1} (\Phi(\delta t) - I) B_m$
  • $F \leftarrow C\, \Phi(\delta t)$
  • $K \leftarrow (G^T Q G + R)^{-1} G^T Q$
Compute the desired closed-loop system dynamics:
  • $A_m \leftarrow A - B_m K F$
Update the estimated uncertainties ($\hat{\sigma}$) on the basis of the predicted state from the last update:
  • $B \leftarrow [\, B_m \;\; B_{um} \,]$
  • $\Phi_{ad}(T_s) \leftarrow A_m^{-1} \left( e^{A_m T_s} - I_n \right)$
  • $\tilde{x}_k \leftarrow \hat{x}_k - x_k$
  • $\mu_k \leftarrow e^{A_m T_s}\, \tilde{x}_k$
  • $\hat{\sigma}_{k,1} \leftarrow -[\, \mathbb{I}_m \;\; 0 \,]\, B^{-1}\, \Phi_{ad}^{-1}(T_s)\, \mu_k$
  • $\hat{\sigma}_{k,2} \leftarrow -[\, 0 \;\; \mathbb{I}_{n-m} \,]\, B^{-1}\, \Phi_{ad}^{-1}(T_s)\, \mu_k$
Compute the control input on the basis of the estimated uncertainties:
  • $H_{m,\mathrm{inv}} \leftarrow 1 / (C A_m^{-1} B_m)$
  • $\eta_1 \leftarrow \hat{\sigma}_{k,1}$
  • $\eta_2 \leftarrow \hat{\sigma}_{k,2}$
  • $\eta_{2m} \leftarrow H_{m,\mathrm{inv}}\, C\, x_{um,k}$
  • $u_{raw} \leftarrow \eta_1 + \eta_{2m} - K y_r$
  • $u_{ad} \leftarrow -\mathrm{LowPassFilter}(u_{raw}, \Delta t, \omega_c)$
  • $u_{full} = u_{ad} - K F x_k$
Update the unmatched state:
  • $\dot{x}_{um} \leftarrow A_m x_{um,k} + B_{um}\, \eta_2$
  • $x_{um,k+1} \leftarrow x_{um,k} + \dot{x}_{um}\, \Delta t$
Compute the state prediction for the next update ($\hat{x}_{k+1}$):
  • $\dot{\hat{x}}_k \leftarrow A_m \hat{x}_k + B_m (u_{ad} + \hat{\sigma}_{k,1}) + B_{um}\, \hat{\sigma}_{k,2}$
  • $\hat{x}_{k+1} \leftarrow \hat{x}_k + \dot{\hat{x}}_k\, \Delta t$
Matrix G describes the forced response, Q is the matrix of output error weights, and R is the matrix of control input weights. F is the free-response matrix with $F \in \mathbb{R}^{1 \times n}$, and $G \in \mathbb{R}$. $\delta t$ is the prediction horizon, and A, $B_m$, and C are the state dynamics, control input, and measurement matrices, respectively, for a linear model as formulated below:
\[
\dot{x} = A x + B_m u,
\]
\[
y = C x.
\]
The state transition matrix $\Phi(\delta t)$ was calculated using an eigendecomposition, where the columns of S are the eigenvectors of the system associated with the eigenvalues $\lambda_1$ to $\lambda_k$. K is the optimal control gain matrix from AMPC and was used to update the desired closed-loop dynamics matrix ($A_m$), which is used by the L1 adaptive law to predict the system response. The mismatch between the predicted and actual states was used to estimate the uncertainties ($\hat{\sigma}_{k,1}$, $\hat{\sigma}_{k,2}$), which were then used to compute the adaptive control input that compensates for them.
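To make the data flow of Algorithm 1 concrete, the following is a minimal sketch of a single AMPC-L1 update written against Eigen3 (the library listed in Table 6). It is an illustrative reading of the algorithm rather than the authors' flight code: the AmpcL1State struct, the first-order discretisation of the low-pass filter, and the choice of the unmatched input matrix B_um are assumptions of this sketch; it assumes A and A_m are invertible and diagonalisable, and the unmatched-uncertainty compensation is written for the case where C·A_m^{-1}·B_m is square and invertible (as in the single-input longitudinal design).

```cpp
#include <Eigen/Dense>
#include <cmath>
#include <complex>

struct AmpcL1State {
    Eigen::VectorXd xHat;  // state prediction x^_k (initialise to the measured state)
    Eigen::VectorXd xUm;   // unmatched state x_um,k (initialise to zero)
    Eigen::VectorXd uAd;   // low-pass filter state, i.e. the previous u_ad (initialise to zero)
};

// Exact e^{A*dt} via eigendecomposition (assumes A is diagonalisable).
Eigen::MatrixXd stateTransition(const Eigen::MatrixXd& A, double dt) {
    Eigen::EigenSolver<Eigen::MatrixXd> es(A);
    Eigen::MatrixXcd S = es.eigenvectors();
    Eigen::VectorXcd lam = es.eigenvalues();
    Eigen::VectorXcd expLam(lam.size());
    for (Eigen::Index i = 0; i < lam.size(); ++i)
        expLam(i) = std::exp(lam(i) * dt);
    return (S * expLam.asDiagonal() * S.inverse()).real();
}

Eigen::VectorXd ampcL1Update(const Eigen::MatrixXd& A, const Eigen::MatrixXd& Bm,
                             const Eigen::MatrixXd& Bum, const Eigen::MatrixXd& C,
                             const Eigen::MatrixXd& Q, const Eigen::MatrixXd& R,
                             const Eigen::VectorXd& yRef, const Eigen::VectorXd& x,
                             double dtPred, double Ts, double omegaC, AmpcL1State& s) {
    const Eigen::Index n = A.rows(), m = Bm.cols();
    const Eigen::MatrixXd In = Eigen::MatrixXd::Identity(n, n);

    // AMPC optimal gain: Phi(dtPred), G, F, K.
    Eigen::MatrixXd Phi = stateTransition(A, dtPred);
    Eigen::MatrixXd G = C * A.inverse() * (Phi - In) * Bm;
    Eigen::MatrixXd F = C * Phi;
    Eigen::MatrixXd K = (G.transpose() * Q * G + R).inverse() * G.transpose() * Q;

    // Desired closed-loop dynamics.
    Eigen::MatrixXd Am = A - Bm * K * F;

    // Piecewise-constant adaptive law for matched/unmatched uncertainty estimates.
    Eigen::MatrixXd B(n, n);
    B << Bm, Bum;
    Eigen::MatrixXd expAmTs = stateTransition(Am, Ts);
    Eigen::MatrixXd PhiAd = Am.inverse() * (expAmTs - In);
    Eigen::VectorXd mu = expAmTs * (s.xHat - x);
    Eigen::VectorXd sigma = -B.inverse() * PhiAd.inverse() * mu;
    Eigen::VectorXd sigma1 = sigma.head(m);      // matched estimate
    Eigen::VectorXd sigma2 = sigma.tail(n - m);  // unmatched estimate

    // Control law with uncertainty compensation.
    Eigen::MatrixXd HmInv = (C * Am.inverse() * Bm).inverse();
    Eigen::VectorXd eta2m = HmInv * C * s.xUm;
    Eigen::VectorXd uRaw = sigma1 + eta2m - K * yRef;
    const double a = 1.0 - std::exp(-omegaC * Ts);  // first-order LPF coefficient (omegaC in rad/s, assumed)
    s.uAd += a * (-uRaw - s.uAd);                   // u_ad = -LowPassFilter(u_raw)
    Eigen::VectorXd uFull = s.uAd - K * F * x;

    // Propagate the unmatched state and the state predictor (forward Euler, step Ts).
    Eigen::VectorXd xUmDot = Am * s.xUm + Bum * sigma2;
    Eigen::VectorXd xHatDot = Am * s.xHat + Bm * (s.uAd + sigma1) + Bum * sigma2;
    s.xUm += Ts * xUmDot;
    s.xHat += Ts * xHatDot;
    return uFull;
}
```

When the uncertainty estimates and the unmatched state are zero, u_full reduces to the baseline AMPC law K(y_r − F x_k), so the adaptive path only acts on the mismatch between the predicted and measured states.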

3.2. Longitudinal Control

While the model used for the simulation and performance comparison in this paper was nonlinear, the longitudinal controller used the linearised angle of attack dynamics [30], which are as follows:
\[
\dot{x}(t) = A(t)\, x(t) + B_m(t)\, u(t),
\]
\[
\begin{bmatrix} \dot{q}(t) \\ \dot{\alpha}(t) \end{bmatrix} =
\begin{bmatrix} M_q(t) & M_\alpha(t) \\ \alpha_{\dot{q}} & -N_\alpha(t)/V(t) \end{bmatrix}
\begin{bmatrix} q(t) \\ \alpha(t) \end{bmatrix} +
\begin{bmatrix} M_{\delta_e}(t) \\ -N_{\delta_e}(t)/V(t) \end{bmatrix} u(t),
\]
\[
y(t) = C x(t) = \begin{bmatrix} 0 & 1 \end{bmatrix} \begin{bmatrix} q(t) \\ \alpha(t) \end{bmatrix},
\]
where
\[
M_\alpha = \frac{\bar{q}\,\bar{S}\, c}{I_2} C_{m_\alpha}, \quad
N_\alpha = \frac{\bar{q}\,\bar{S}}{\bar{m}} C_{N_\alpha}, \quad
M_q = \frac{\bar{q}\,\bar{S}\, c^2}{2 I_2 V} C_{m_q}, \quad
M_{\delta_e} = \frac{\bar{q}\,\bar{S}\, c}{I_2} C_{m_{\delta_e}}, \quad
N_{\delta_e} = \frac{\bar{q}\,\bar{S}}{\bar{m}} C_{N_{\delta_e}}.
\]
Parameter $\alpha_{\dot{q}}$ is the pitch-rate-to-angle-of-attack map. Linearising the nonlinear vehicle model yields a linear time-varying model, so the system parameters are updated to the current flight condition at each control update. The AMPC design parameters are shown in Table 1. Q was chosen to prioritise tracking performance. The same tuning parameters were used for all AMPC algorithms assessed in this paper.
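As an illustration of this successive linearisation, a minimal Eigen3 sketch of how the longitudinal A(t) and B_m(t) matrices can be rebuilt from the current flight condition is given below. The FlightCondition struct and its field names are assumptions of this sketch, not code from the paper; the expressions follow the definitions above.

```cpp
#include <Eigen/Dense>

struct FlightCondition {
    double qbar;   // dynamic pressure [Pa]
    double S;      // reference area [m^2]
    double c;      // longitudinal reference length [m]
    double I2;     // pitch moment of inertia [kg m^2]
    double mass;   // vehicle mass [kg]
    double V;      // velocity magnitude [m/s]
    double Cma, CNa, Cmq, Cmde, CNde;  // derivatives interpolated at (M, alpha)
    double alphaDotQ;                  // pitch-rate-to-angle-of-attack map
};

// Rebuild the longitudinal LTV model at the current flight condition.
void longitudinalModel(const FlightCondition& fc, Eigen::Matrix2d& A, Eigen::Vector2d& Bm) {
    const double Ma  = fc.qbar * fc.S * fc.c * fc.Cma / fc.I2;
    const double Na  = fc.qbar * fc.S * fc.CNa / fc.mass;
    const double Mq  = fc.qbar * fc.S * fc.c * fc.c * fc.Cmq / (2.0 * fc.I2 * fc.V);
    const double Mde = fc.qbar * fc.S * fc.c * fc.Cmde / fc.I2;
    const double Nde = fc.qbar * fc.S * fc.CNde / fc.mass;

    A  << Mq,            Ma,
          fc.alphaDotQ, -Na / fc.V;   // states: [q, alpha]
    Bm << Mde,
         -Nde / fc.V;                 // input: elevator deflection
}
```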
The design parameters of the L1 adaptive control are shown in Table 2. The adaptive update time step equals the control update period, since the scheme adapts at every control update. The low-pass filter cut-off frequency was selected to satisfy the L1-norm condition [34].
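For reference, the sketch below shows one way the Table 2 parameters can drive a first-order low-pass filter for the adaptive control channel. The exact discretisation used in the paper is not stated; a simple exponential smoothing form is assumed here.

```cpp
#include <cmath>

class LowPassFilter {
public:
    // cutoffHz: filter cut-off frequency [Hz]; dt: update time step [s].
    LowPassFilter(double cutoffHz, double dt)
        : alpha_(1.0 - std::exp(-2.0 * kPi * cutoffHz * dt)), y_(0.0) {}

    // Advance the filter by one step with input u and return the filtered value.
    double update(double u) { y_ += alpha_ * (u - y_); return y_; }

private:
    static constexpr double kPi = 3.14159265358979323846;
    double alpha_;  // smoothing factor derived from the cut-off frequency and time step
    double y_;      // filter state (previous output)
};

// Usage with the Table 2 values: LowPassFilter lpf(20.0, 0.002); double uAd = -lpf.update(uRaw);
```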

3.3. Lateral Control

The lateral control design used for AMPC and AMPC- L 1 in this study utilises a model with combined roll and yaw dynamics with yaw rate r, side-slip angle β , roll rate p, and roll angle ϕ [30]. The linear state-space model for lateral dynamics is as follows:
\[
\dot{x} = A x + B_m u,
\]
\[
\begin{bmatrix} \dot{r} \\ \dot{\beta} \\ \dot{p} \\ \dot{\phi} \end{bmatrix} =
\begin{bmatrix}
L_{N_r} & L_{N_\beta} & L_{N_p} & 0 \\
-1 & Y_\beta / V & 0 & g / V \\
L_{L_r} & L_{L_\beta} & L_{L_p} & 0 \\
0 & 0 & 1 & 0
\end{bmatrix}
\begin{bmatrix} r \\ \beta \\ p \\ \phi \end{bmatrix} +
\begin{bmatrix}
0 & L_{N_{\delta_r}} \\
0 & Y_{\delta_r} / V \\
L_{L_{\delta_a}} & 0 \\
0 & 0
\end{bmatrix}
\begin{bmatrix} \delta_a \\ \delta_r \end{bmatrix},
\]
\[
y = C x =
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} r \\ \beta \\ p \\ \phi \end{bmatrix},
\]
where
\[
L_{N_\beta} = \frac{\bar{q}\,\bar{S}\, b}{I_3} C_{n_\beta}, \quad
L_{N_r} = \frac{\bar{q}\,\bar{S}\, b^2}{2 I_3 V} C_{n_r}, \quad
L_{N_p} = \frac{\bar{q}\,\bar{S}\, b^2}{2 I_3 V} C_{n_p}, \quad
Y_\beta = \frac{\bar{q}\,\bar{S}}{\bar{m}} C_{y_\beta},
\]
\[
L_{L_r} = \frac{\bar{q}\,\bar{S}\, b^2}{2 I_1 V} C_{l_r}, \quad
L_{L_p} = \frac{\bar{q}\,\bar{S}\, b^2}{2 I_1 V} C_{l_p}, \quad
L_{L_\beta} = \frac{\bar{q}\,\bar{S}\, b}{I_1} C_{l_\beta}.
\]
Algorithm 1 was applied generically to both longitudinal and lateral dynamics. The design parameters for lateral control using AMPC- L 1 are shown in Table 3 and Table 4. The prediction time horizon was chosen to balance the need for aggressive short- and long-term corrections.
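For concreteness, the following sketch assembles the lateral measurement matrix and the Table 3 weighting matrices in the output order (r, β, ϕ) used by the C matrix above. The function and variable names are illustrative assumptions.

```cpp
#include <Eigen/Dense>

// Build the lateral output selection and the AMPC weights from Table 3.
void lateralWeights(Eigen::Matrix3d& Q, Eigen::Matrix2d& R,
                    Eigen::Matrix<double, 3, 4>& C) {
    // Output error weights for yaw rate, side-slip angle, and roll angle.
    Q = Eigen::Vector3d(0.5, 0.9, 0.99).asDiagonal();
    // Control input weights for aileron and rudder.
    R = Eigen::Vector2d(0.001, 0.001).asDiagonal();
    // Measurement matrix selecting r, beta, and phi from the state [r, beta, p, phi].
    C << 1, 0, 0, 0,
         0, 1, 0, 0,
         0, 0, 0, 1;
}
```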

4. Results and Discussion

Three different control algorithms are assessed for their control performance under various simulated conditions in this section. The control algorithms were pole placement [33], baseline AMPC, and AMPC- L 1 [29].
At the start of stage separation, the booster was assumed to have an initial attitude of ϕ = 90°, θ = 0°, and ψ = 0°. First, the booster must undergo a stage separation maneuver to detach itself from the launch stack quickly and safely. Then, it performs a reorientation to an attitude of ϕ = 90°. Lastly, it decelerates in a controlled manner, performing a pull-up maneuver to further decrease its speed. In a practical scenario, the vehicle would then deploy wings for a fly-back phase, landing at the launch site; however, this final fly-back phase was not considered in this analysis. Figure 2 shows the nominal commands assumed to come from the guidance law for stage separation, reorientation, and descent.
The stage separation phase lasts from t = 0 s to t = 18 s and is marked by a ramp-up of the commanded α while keeping ϕ = 90°. The reorientation phase involves ramping the commanded roll to zero while keeping α = 0°. The pull-up consists of ramping the commanded α to 7°. For the AMPC controllers, side slip (β) and yaw rate (r) are regulated, but only the yaw rate is regulated for the pole placement controller.

4.1. Nominal Case

In the nominal case, the control algorithms were assumed to have perfect knowledge of the dynamic model with a deterministic set of guidance commands. The guidance and control loops were updated at 500 Hz. Figure 3, Figure 4, Figure 5 and Figure 6 show the control performance of the pole placement, AMPC, and AMPC- L 1 under nominal conditions.
Figure 3 shows that all controllers had similar levels of performance, with AMPC-L1 exhibiting slightly less error during the ramp-up and ramp-down of the commanded angle of attack. This was due to the ability of the L1 adaptation to compensate for the model mismatch arising from the nonlinearities.
Figure 4 shows that the baseline AMPC had the worst performance, accumulating errors of over 20° during the reorientation maneuver before converging back to zero during the pull-up maneuver. The pole placement controller resulted in a steady-state error throughout the stage separation phase. Arguably, the AMPC and pole placement controllers could be tuned further to produce better error tracking. However, the AMPC-L1 controller, which used the same tuning parameters as the baseline AMPC, eliminated the steady-state error and kept the tracking error well below 5°.
The side-slip regulation performance is shown in Figure 5. The stage-separation and reorientation maneuvers produced perturbations in lateral velocity, which translated into disturbances to the side-slip angle. All controllers performed acceptably, keeping the tracking error below 2°.
The yaw-rate regulation performance is shown in Figure 6. The stage-separation and reorientation maneuvers produced perturbations in the yaw rate. All controllers performed acceptably, keeping the tracking error below 1°/s.

4.2. Aerodynamic Uncertainties

Although the control algorithms performed acceptably under nominal conditions, it is important to consider the effects of aerodynamic uncertainty on the control performance. In reality, the controller always has imperfect knowledge of the aerodynamic parameters. The control performance of the pole placement, AMPC, and AMPC- L 1 was assessed in the presence of an aerodynamic model mismatch. Aerodynamic errors were introduced into the nonlinear simulation model on the basis of past research pertaining to the relative accuracy of modern aerodynamic prediction methods for re-entry vehicles [35]. The aerodynamic errors are shown in Table 5.
The aerodynamic errors were not intended to capture the worst-case scenario; rather, they represent a realistic case that is within the bounds of possible errors. The simulations were run with the aerodynamic errors, and the results are shown in Figure 7, Figure 8, Figure 9 and Figure 10.
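A minimal sketch of one way such a mismatch can be injected is shown below: the truth model uses a coefficient perturbed by the Table 5 percentage while the controller keeps the nominal value. The multiplicative interpretation of the percentage error and the helper name are assumptions of this sketch; the paper does not detail its implementation.

```cpp
// Apply a Table 5 percentage error to a nominal aerodynamic coefficient.
double applyError(double nominalCoefficient, double percentError) {
    return nominalCoefficient * (1.0 + percentError / 100.0);
}

// Example: a perturbed pitch-damping derivative with the -66% error from Table 5.
// double CmqTruth = applyError(CmqNominal, -66.0);
```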
Figure 7 shows that the pole placement controller exhibited large transient and steady-state errors. The transient error occurred during stage separation and pull-up maneuvers. Both AMPC and AMPC- L 1 exhibited acceptable control performance, while AMPC- L 1 had the lowest tracking error.
In the roll tracking task, a large transient error of over 50° was present when baseline AMPC was used, as shown in Figure 8. Compared to the results for the nominal case in Figure 4, the pole placement and AMPC controllers produced roughly double the maximum transient error in the presence of aerodynamic errors. This is in contrast to AMPC-L1, which retained the control performance of the nominal case.
Compared to the nominal case in Figure 5, all controllers produced similar levels of side-slip performance, as shown in Figure 9.
Compared to the nominal case in Figure 6, the pole placement and AMPC- L 1 controllers achieved similar performance in yaw-rate regulation as shown in Figure 10. However, the performance of the baseline AMPC slightly deteriorated.

4.3. Guidance Law Uncertainties

The focus of this study was evaluating the attitude control performance of pole placement, AMPC, and AMPC-L1. So far, the guidance law has been assumed to produce a fixed sequence of α and ϕ commands that are sent to the attitude controllers. In reality, however, the guidance law might produce commands different from those originally planned, reacting to the vehicle's current state to fulfil a broader mission plan. For instance, the α commands may be increased or decreased in order to change the vehicle drag, thereby changing its deceleration. The ϕ commands may also be increased or decreased during the pull-up phase to allow the vehicle to track a certain heading.
The specifics of the guidance law design and requirements are outside the scope of this study. However, uncertainty in the reference commands tracked by the attitude controllers may still be considered. Therefore, a modified set of guidance commands, shown in Figure 11, is considered in this section. There were no further changes to the controller design parameters.
The control performance plots are shown in Figure 12, Figure 13, Figure 14 and Figure 15. The three control algorithms show similar α tracking performance. Baseline AMPC has a large roll tracking error of over 20°. The pole placement yaw damper results in the largest errors in side slip and yaw rate. In all channels, AMPC-L1 achieves the best control performance.

4.4. Computational Expense Characterisation

This section characterises the computational expense of running AMPC-L1 on an embedded Linux computer. A hardware-in-the-loop simulation was configured to examine the computational loads. The main goal of the study presented in this section is to establish a set of feasible loop rates for both AMPC and AMPC-L1, given the software, algorithm, and hardware implementation. The embedded computer is a Raspberry Pi 3 Model B with a 1.2 GHz 64-bit CPU and 1 GB of RAM. Modern flight computers typically have more processing power and memory, making this a conservative test.
The hardware-in-the-loop simulation in this study used the models and controller described in Section 3.3. The lateral dynamics with four states and two control inputs are simulated with the AMPC and AMPC- L 1 control schemes, implemented using Algorithm 1. Two software applications are required: a dynamics simulator that runs on a native x86 computer and the flight controller application that runs on the Raspberry Pi.
Communication between the simulator and the flight controller follows a server-client model over TCP sockets; the send() and recv() calls are repeated synchronously until the end of the simulation. The simulator application:
  • Receives control input data from the flight controller application.
  • Processes the dynamics and updates the state of the vehicle.
  • Sends the state of the vehicle back to the flight controller application.
The flight controller application similarly performs the following tasks:
  • Receives the state of the vehicle from the simulator application.
  • Computes the best control inputs on the basis of vehicle state.
  • Sends the computed control input data back to the simulator application.
The x86 computer used to run the simulator application was an Intel Core i7-8550U CPU @ 1.80 GHz with 8 GB RAM. Due to the much faster processor on the x86 computer, any bottlenecks in computational performance may be attributed to the flight controller application. Both applications are single-threaded.
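As an illustration of this synchronous exchange, the sketch below shows the flight-controller side of the loop: it blocks on recv() for the latest vehicle state, computes the control input, and sends it back with send(). The wire format (raw arrays of doubles), the address and port, and the choice of which application acts as the TCP server are assumptions of this sketch; the paper only states that the two applications exchange data synchronously over TCP until the simulation ends.

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>
#include <cstdio>

int main() {
    // Connect to the simulator application (hypothetical address and port).
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5000);
    inet_pton(AF_INET, "192.168.1.10", &addr.sin_addr);
    if (connect(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        std::perror("connect");
        return 1;
    }

    double state[4];    // lateral states: r, beta, p, phi
    double control[2];  // aileron, rudder
    for (;;) {
        // Blocking receive of the latest vehicle state from the simulator.
        ssize_t got = recv(sock, state, sizeof(state), MSG_WAITALL);
        if (got <= 0) break;  // simulation finished or connection dropped

        // Placeholder for the AMPC / AMPC-L1 update (see the Algorithm 1 sketch).
        control[0] = 0.0;
        control[1] = 0.0;

        // Send the computed control input back to the simulator.
        if (send(sock, control, sizeof(control), 0) <= 0) break;
    }
    close(sock);
    return 0;
}
```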
A network was set up between the Raspberry Pi and the x86 computer. The network router was isolated from the Internet and any other network traffic in order to create a deterministic and fair testing environment. As such, only the two above-mentioned devices were configured on the network.
The software toolchain used to build the simulator and flight controller applications is presented in Table 6.
Both applications were built on Ubuntu 20. The optimisation compiler option was set to prioritise run-time performance at the expense of compilation time and executable size.
The Linux program htop was used to record CPU usage. The CPU usage over one simulation was averaged to obtain a performance metric for comparing the computational load at different loop update rates; these results are presented in Table 7. On the basis of Table 7, even at a 1 kHz loop rate, the average CPU usage of a single core on the Raspberry Pi 3 was only 39% for AMPC-L1. The L1 augmentation increased the average CPU load by approximately 3 percentage points, less than 10% of the computational usage of baseline AMPC, demonstrating the suitability of AMPC-L1 as an adaptive and efficient model predictive control algorithm for implementation on embedded systems.

5. Conclusions

In this paper, the AMPC-L1 adaptive model predictive control algorithm was implemented in a nonlinear simulation of fly-back booster stage separation and re-entry. The performance of AMPC-L1 was evaluated against baseline AMPC and pole placement controllers under nominal conditions, aerodynamic uncertainties, and guidance law uncertainties. The controllers were synthesised using linear models of the longitudinal and lateral dynamics, which were successively linearised on the basis of the latest flight condition. In all cases, AMPC-L1 showed superior control performance compared to baseline AMPC and pole placement. AMPC-L1 compensated for model mismatch due to nonlinearities and other sources of uncertainty, preserving the nominal performance of AMPC. The computational load of AMPC-L1 was assessed and benchmarked against baseline AMPC. AMPC-L1 could be updated at up to 1 kHz on a single core of a Raspberry Pi 3 Model B while using an average CPU load of 39%, which demonstrates its suitability for implementation on resource-constrained systems.

Author Contributions

Conceptualisation, J.C. and E.K.; methodology, J.C. and E.K.; software, J.C.; validation, J.C.; investigation, J.C. and E.K.; writing—original draft preparation, J.C.; writing—review and editing, E.K.; supervision, E.K. All authors have read and agreed to the published version of the manuscript.

Funding

The second author was a recipient of the Australian Government Research Training Program stipend while undertaking the research activities that culminated in the publication of this note.

Data Availability Statement

Data available on request from the authors.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Golzari, A.; Nejat Pishkenari, H.; Salarieh, H.; Abdollahi, T. Quaternion based linear time-varying model predictive attitude control for satellites with two reaction wheels. Aerosp. Sci. Technol. 2020, 98, 105677.
  2. Xu, Q.; Wang, Z.; Li, Y. Fuzzy adaptive nonlinear information fusion model predictive attitude control of unmanned rotorcrafts. Aerosp. Sci. Technol. 2020, 98, 105686.
  3. Grüne, L.; Pannek, J. Nonlinear Model Predictive Control. In Nonlinear Model Predictive Control: Theory and Algorithms; Springer International Publishing: Cham, Switzerland, 2017; pp. 45–69.
  4. Kayacan, E.; Park, S.; Ratti, C.; Rus, D. Learning-based Nonlinear Model Predictive Control of Reconfigurable Autonomous Robotic Boats: Roboats. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Venetian Macao, Macau, 3–8 November 2019; pp. 8230–8237.
  5. Camacho, E.F.; Bordons, C.; Alba, C.B. Model Predictive Control; Advanced Textbooks in Control and Signal Processing; Springer: London, UK, 2004.
  6. Greer, W.B.; Sultan, C. Infinite horizon model predictive control tracking application to helicopters. Aerosp. Sci. Technol. 2020, 98, 105675.
  7. Eren, U.; Prach, A.; Koçer, B.B.; Raković, S.V.; Kayacan, E.; Açıkmeşe, B. Model Predictive Control in Aerospace Systems: Current State and Opportunities. J. Guid. Control. Dyn. 2017, 40, 1541–1566.
  8. Van Soest, W.R.; Chu, Q.P.; Mulder, J.A. Combined Feedback Linearization and Constrained Model Predictive Control for Entry Flight. J. Guid. Control. Dyn. 2006, 29, 427–434.
  9. Pascucci, C.A.; Bennani, S.; Bemporad, A. Model Predictive Control for Powered Descent Guidance and Control. In Proceedings of the 2015 European Control Conference (ECC), Linz, Austria, 15–17 July 2015; pp. 1388–1393.
  10. Tohidi, S.S.; Yildiz, Y.; Kolmanovsky, I. Adaptive control allocation for constrained systems. Automatica 2020, 121, 109161.
  11. Kayacan, E. Sliding mode learning control of uncertain nonlinear systems with Lyapunov stability analysis. Trans. Inst. Meas. Control 2019, 41, 1750–1760.
  12. Shao, X.; Shi, Y. Neural Adaptive Control for MEMS Gyroscope With Full-State Constraints and Quantized Input. IEEE Trans. Ind. Inform. 2020, 16, 6444–6454.
  13. Chen, K.; Zhu, S.; Wei, C.; Xu, T.; Zhang, X. Output constrained adaptive neural control for generic hypersonic vehicles suffering from non-affine aerodynamic characteristics and stochastic disturbances. Aerosp. Sci. Technol. 2021, 111, 106469.
  14. Åström, K.J.; Wittenmark, B. Adaptive Control; Courier Corporation: Chelmsford, MA, USA, 2013.
  15. Kayacan, E. Closed-Loop Error Learning Control for Uncertain Nonlinear Systems With Experimental Validation on a Mobile Robot. IEEE/ASME Trans. Mechatron. 2019, 24, 2397–2405.
  16. Wang, S.; Na, J. Parameter Estimation and Adaptive Control for Servo Mechanisms With Friction Compensation. IEEE Trans. Ind. Inform. 2020, 16, 6816–6825.
  17. Bhandari, V.; Kayacan, E. Concurrent Learning Based Tracking Control of Nonlinear Systems using Gaussian Process. In Proceedings of the 2021 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), Delft, The Netherlands, 12–16 July 2021; pp. 970–975.
  18. Kayacan, E.; Park, S.; Ratti, C.; Rus, D. Online System Identification Algorithm without Persistent Excitation for Robotic Systems: Application to Reconfigurable Autonomous Vessels. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Venetian Macao, Macau, 3–8 November 2019; pp. 1840–1847.
  19. Banerjee, S. L1 Adaptive Control Augmentation for a Hypersonic Glider. Ph.D. Thesis, University of Queensland, St Lucia, Australia, 2016.
  20. Nguyen, N.; Hanson, C.; Burken, J.; Schaefer, J. Normalized optimal control modification and flight experiments on NASA F/A-18 aircraft. J. Guid. Control. Dyn. 2017, 40, 1060–1074.
  21. Zhang, J.; Sun, C.; Zhang, R.; Qian, C. Adaptive sliding mode control for re-entry attitude of near space hypersonic vehicle based on backstepping design. IEEE/CAA J. Autom. Sin. 2015, 2, 94–101.
  22. Liu, J.; An, H.; Gao, Y.; Wang, C.; Wu, L. Adaptive Control of Hypersonic Flight Vehicles With Limited Angle-of-Attack. IEEE/ASME Trans. Mechatron. 2018, 23, 883–894.
  23. Chowdhary, G.; Miihlegg, M.; How, J.P.; Holzapfel, F. A concurrent learning adaptive-optimal control architecture for nonlinear systems. In Proceedings of the IEEE Conference on Decision and Control, Firenze, Italy, 10–13 December 2013; pp. 868–873.
  24. Chowdhary, G.; Muhlegg, M.; How, J.P.; Holzapfel, F. Concurrent Learning Adaptive Model Predictive Control. In Selected Papers of the Second CEAS Specialist Conference on Guidance, Navigation and Control; Chu, Q., Mulder, B., Choukroun, D., van Kampen, E.J., de Visser, C., Looye, G., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 29–47.
  25. Mehndiratta, M.; Kayacan, E.; Patel, S.; Kayacan, E.; Chowdhary, G. Learning-Based Fast Nonlinear Model Predictive Control for Custom-Made 3D Printed Ground and Aerial Robots. In Handbook of Model Predictive Control; Birkhäuser: Cham, Switzerland, 2019; pp. 581–605.
  26. Pereida, K.; Schoellig, A.P. Adaptive Model Predictive Control for High-Accuracy Trajectory Tracking in Changing Conditions. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Madrid, Spain, 1–5 October 2018; pp. 7831–7837.
  27. Hegrenæs, Ø.; Gravdahl, J.T.; Tøndel, P. Spacecraft Attitude Control using Explicit Model Predictive Control. Automatica 2005, 41, 2107–2114.
  28. Gibbens, P.W.; Medagoda, E.D.B. Efficient Model Predictive Control Algorithm for Aircraft. J. Guid. Control. Dyn. 2011, 34, 1909–1915.
  29. Chai, J.; Medagoda, E.; Kayacan, E. Adaptive and Efficient Model Predictive Control for Booster Reentry. J. Guid. Control. Dyn. 2020, 43, 2372–2382.
  30. Zipfel, P.H. Modeling and Simulation of Aerospace Vehicle Dynamics; American Institute of Aeronautics and Astronautics (AIAA): San Diego, CA, USA, 2000.
  31. Preller, D.; Smart, M. SPARTAN: Scramjet Powered Accelerator for Reusable Technology AdvaNcement. In Proceedings of the 12th Reinventing Space Conference; Springer: Berlin/Heidelberg, Germany, 2017; pp. 139–147.
  32. Chai, J.; Smart, M.K.; Forbes-Spyratos, S.; Kearney, M. Fly Back Booster Design for Mach 5 Scramjet Launch. In Proceedings of the 68th International Astronautical Congress, International Astronautical Federation, Adelaide, Australia, 25–29 September 2017; pp. 1–11.
  33. Chai, J.; Smart, M.K.; Kearney, M. Re-entry Dynamics and Control of Pivot Wing Fly Back Boosters. In Proceedings of the 2018 AIAA SPACE and Astronautics Forum and Exposition, Orlando, FL, USA, 17–19 September 2018; p. 5400.
  34. Hovakimyan, N.; Cao, C. L1 Adaptive Control Theory: Guaranteed Robustness with Fast Adaptation; SIAM: Philadelphia, PA, USA, 2011; pp. 112–114.
  35. Cobleigh, B.R. Development Uncertainty of the X-33 Aerodynamic Model; Technical Report; National Aeronautics and Space Administration, Dryden Flight Research Centre: Edwards, CA, USA, 1998.
Figure 1. AMPC-L1 control structure for systems with uncertainties σ̂_1(t), σ̂_2(t).
Figure 2. Nominal guidance commands for a fly-back booster during re-entry.
Figure 3. Angle-of-attack tracking error for the nominal case.
Figure 4. Roll tracking error for the nominal case.
Figure 5. Side-slip regulation error for the nominal case.
Figure 6. Yaw-rate regulation error for the nominal case.
Figure 7. Angle-of-attack tracking error for the aerodynamic uncertainty case.
Figure 8. Roll tracking error for the aerodynamic uncertainty case.
Figure 9. Side-slip regulation error for the aerodynamic uncertainty case.
Figure 10. Yaw-rate regulation error for the aerodynamic uncertainty case.
Figure 11. Modified guidance commands for a fly-back booster during re-entry.
Figure 12. Angle-of-attack tracking error for the guidance uncertainty case.
Figure 13. Roll tracking error for the guidance uncertainty case.
Figure 14. Side-slip regulation error for the guidance uncertainty case.
Figure 15. Yaw-rate regulation error for the guidance uncertainty case.
Table 1. Design parameters of AMPC for longitudinal control.
Parameter | Description | Value
Q | Weight for tracking error | 0.99
R | Weight for control input | 0.001
δt_p | Prediction horizon | 0.5 s
Table 2. Design parameters of L1 adaptive control for longitudinal control.
Parameter | Description | Value
ω_c | Cut-off frequency of the low-pass filter | 20 Hz
δt_ad | Time step for adaptive update | 0.002 s
Table 3. Design parameters of AMPC for lateral control.
Parameter | Description | Value
Q_ϕ | Tracking error penalty weight for roll angle | 0.99
Q_β | Tracking error penalty weight for side-slip angle | 0.9
Q_r | Tracking error penalty weight for yaw rate | 0.5
R_δa | Control input penalty weight for aileron | 0.001
R_δr | Control input penalty weight for rudder | 0.001
δt_p | Prediction horizon | 0.5 s
Table 4. Design parameters of L1 adaptive control for lateral control.
Parameter | Description | Value
ω_c | Cut-off frequency of the low-pass filter | 20 Hz
δt_ad | Time step for adaptive update | 0.002 s
Table 5. Aerodynamic errors injected into the simulation model [35].
Aerodynamic Parameter | % Error
C_L | 21
C_D | 53
C_m | −34
C_mδe | 46
C_mq | −66
C_lβ | 4.4
C_nβ | 7.3
C_yβ | 12.7
C_lδa | 5.57
C_yδa | −4.52
C_nδr | −6.9
C_yδr | 17
C_lp | −2.85
C_lr | −80.4
C_np | 50.4
C_nr | 23.77
Table 6. Toolchain for building the flight controller and simulator applications.
Tool | Simulator | Flight Controller
C compiler | GCC 7.5.0 | arm-linux-gnueabihf-gcc 8.3.0
C++ compiler | G++ 7.5.0 | arm-linux-gnueabihf-g++ 8.3.0
Runtime optimisation | -O3 | -O3
Eigen3 | 3.3.4 | 3.3.4
cmake | 3.14.4 | 3.14.4
C++ standard | 17 | 17
Table 7. Comparison of computational load on the Raspberry Pi at various loop update rates.
Control Scheme | Loop Rate (Hz) | CPU %
AMPC | 100 | 7%
AMPC-L1 | 100 | 7.5%
AMPC | 200 | 14.5%
AMPC-L1 | 200 | 14.5%
AMPC | 333 | 19.2%
AMPC-L1 | 333 | 20.3%
AMPC | 500 | 22.5%
AMPC-L1 | 500 | 22.9%
AMPC | 1000 | 36%
AMPC-L1 | 1000 | 39%
