Application of Deep Reinforcement Learning in Reconfiguration Control of Aircraft Anti-Skid Braking System

College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
Electronic Engineering Department, Aviation Key Laboratory of Science and Technology on Aero Electromechanical System Integration, Nanjing 211106, China
Author to whom correspondence should be addressed.
Aerospace 2022, 9(10), 555;
Received: 11 July 2022 / Revised: 17 September 2022 / Accepted: 19 September 2022 / Published: 26 September 2022


The aircraft anti-skid braking system (AABS) plays an important role in aircraft takeoff, taxiing, and safe landing. In addition to disturbances from the complex runway environment, potential component faults, such as actuator faults, can also reduce the safety and reliability of AABS. To meet the increasing performance requirements of AABS under fault and disturbance conditions, a novel reconfiguration controller based on linear active disturbance rejection control combined with deep reinforcement learning was proposed in this paper. The proposed controller treated component faults, external perturbations, and measurement noise as the total disturbances. The twin delayed deep deterministic policy gradient (TD3) algorithm was introduced to realize the parameter self-adjustment of both the extended state observer and the state error feedback law. The action space, state space, reward function, and network structure for the algorithm training were properly designed, so that the total disturbances could be estimated and compensated for more accurately. The simulation results validated the environmental adaptability and robustness of the proposed reconfiguration controller.

1. Introduction

The aircraft anti-skid braking system (AABS) is an essential airborne system for ensuring the safe and smooth landing of aircraft [1]. As aircraft develop toward higher speeds and larger tonnage, the performance requirements of AABS continue to increase. Moreover, AABS is a complex system with strong nonlinearity, strong coupling, and time-varying parameters, and it is sensitive to the runway environment [2]. These characteristics make AABS controller design an interesting and challenging topic.
The control method most widely used in practice is PID + PBM (pressure bias modulation), a speed-differential control law. However, it suffers from low-speed slipping and underutilization of the ground adhesion forces, making it difficult to meet high performance requirements. To this end, researchers have proposed many advanced control methods to improve AABS performance, such as mixed slip deceleration PID control [3], model predictive control [4], extremum-seeking control [5], sliding mode control [6], reinforcement Q-learning control [7], and so on. Zhang et al. [8] proposed a feedback linearization controller with a prescribed performance function to ensure transient and steady-state braking performance. Qiu et al. [9] combined backstepping dynamic surface control with an asymmetric barrier Lyapunov function to obtain a robust tracking response in the presence of disturbances and runway surface transitions. Mirzaei et al. [10] developed a fuzzy braking controller optimized by a genetic algorithm and introduced an error-based global optimization approach for fast convergence near the optimum point. The above-mentioned works study AABS control in depth; however, they neglect the adverse effects caused by typical component faults such as actuator faults. Since most AABS are designed around hydraulic control systems, the long hydraulic pipes create a substantial risk of air mixing with the oil and of internal leakage. Without regular maintenance, functional degradation or even failure can easily occur, which raises many safety concerns [11,12]. How to ensure the stability and acceptable braking performance of AABS after actuator faults therefore becomes a key issue.
To genuinely improve the safety and reliability of AABS, the fault probability can, on the one hand, be reduced through reliability design and redundancy [13]. However, due to production factors (cost, weight, technological level), the redundancy of aircraft components is so limited that system reliability is hard to increase further. On the other hand, fault-tolerant control (FTC) technology can be introduced into the AABS controller design, which is the future development direction of AABS and a key technology that needs urgent attention [14]. Reconfiguration control is a popular branch of FTC that has been widely used in many safety-critical systems, especially in aerospace engineering [15,16]. The essence of reconfiguration control is to consider the possible faults of the plant in the controller design process. When component faults occur, the fault system information is used to reconfigure the controller structure or parameters automatically [17]. In this way, the adverse effects caused by faults can be restrained or eliminated, thus realizing asymptotic stability and acceptable performance of the closed-loop system. Common reconfiguration control methods include adaptive control [18,19], multi-model switching control [20], sliding mode control [21], fuzzy control [22], and other robust control approaches [23]. In addition, the characteristics of AABS make accurate modeling difficult, and many nonlinear reconfiguration control methods are complex and relatively hard to apply in engineering. Therefore, it is crucial to design a reconfiguration controller that has a clear structure, is model-independent, strongly resists fault perturbations, and is easy to implement.
Han retained the essence of PID control and proposed the active disturbance rejection control (ADRC) technique, which requires low model accuracy and shows good control performance [24]. ADRC can estimate internal and external disturbances and compensate for them [25]. Furthermore, ADRC has been widely used in FTC system design because of its obvious advantages in solving control problems of nonlinear models with uncertainty and strong disturbances [26,27,28]. Although its structure is not difficult to implement with modern digital computer technology, ADRC requires tuning many parameters, which makes it hard to use in practice [29]. To overcome this difficulty, Gao proposed linear active disturbance rejection control (LADRC), which is based on a linear extended state observer (LESO) and linear state error feedback (LSEF) [30,31]. The bandwidth tuning method greatly reduces the number of LADRC parameters. LADRC has been applied to solve various control problems [32,33,34].
However, it is well known that a controller with fixed parameters may not be able to maintain acceptable (rated or degraded) performance of a faulty system. For this reason, researchers have introduced advanced algorithms with parameter-adaptive capabilities that further improve the robustness and environmental adaptability of ADRC, such as neural networks [35,36], fuzzy logic [37,38], and sliding modes [39,40]. With the development of artificial intelligence techniques, reinforcement learning has been applied to control science and engineering [41,42], and good results have been achieved. Yuan et al. proposed a novel online control algorithm for a thickener based on reinforcement learning [43]. Pang et al. studied the infinite-horizon adaptive optimal control of continuous-time linear periodic systems using reinforcement learning techniques [44]. A Q-learning-based adaptive method for ADRC parameters was proposed by Chen et al. and applied to ship course control [45].
Motivated by the above observations, in this paper a reconfiguration control scheme combining LADRC with deep reinforcement learning was developed for AABS subject to various fault perturbations. The proposed reconfiguration control method offers advantages over previous methods for three reasons:
(1) AABS is extended with a new state variable, the sum of all unknown dynamics and disturbances not captured in the fault-free system description. This state variable can be estimated using LESO, which indirectly simplifies AABS modeling;
(2) Artificial intelligence technology is introduced and combined with the traditional control method to solve special control problems. By combining LADRC with the deep reinforcement learning TD3 algorithm, the selection of controller parameters is equivalent to the choice of agent actions. LESO and LSEF acquire parameter-adaptive capabilities through the continuous interaction between the agent and the environment, which not only eliminates tedious manual parameter tuning, but also yields more accurate estimation and compensation of the adverse effects of fault perturbations;
(3) It is a data-driven robust control strategy that does not require any additional fault detection or identification (FDI) module, while the controller parameters are adaptive. Therefore, the proposed method corresponds to a novel combination of active reconfiguration control and FDI-free reconfiguration control, which makes it an interesting solution under unknown fault conditions.
The paper is organized as follows. Section 2 describes AABS dynamics with an actuator fault factor. The reconfiguration controller is presented in Section 3. The simulation results are presented to demonstrate the merits of the proposed method in Section 4, and conclusions are drawn in Section 5.

2. AABS Modeling

The AABS mainly consists of the following components: aircraft fuselage, landing gear, wheels, a hydraulic servo system, a braking device, and an anti-skid braking controller. The subsystems are strongly coupled and exhibit strong nonlinearity and complexity.
Based on the actual process and objective facts of anti-skid braking, the following reasonable assumptions can be made [46]:
The aircraft fuselage is regarded as a rigid body with concentrated mass;
The gyroscopic moment generated by the engine rotor is not considered during the aircraft braking process;
The crosswind effect is ignored;
Only the longitudinal deformation of the tire is taken into account and the deformation of the ground is ignored;
All wheels are the same and controlled synchronously.

2.1. Aircraft Fuselage Dynamics

The force diagram of the aircraft fuselage is shown in Figure 1 and the specific parameters described in the diagram are shown in Table 1.
The aircraft force and torque equilibrium equations are:
$$\begin{cases} m\dot{V} + F_x + F_s + f_1 + f_2 - T_0 = 0 \\ F_y + N_1 + N_2 - mg = 0 \\ N_2 b + F_s h_s - N_1 a - T_0 h_t - f_1 H - f_2 H = 0 \end{cases}$$
Considering the aerodynamic characteristics, we can obtain [46]:
$$T_0 = T_{00} + K_v V, \quad F_x = \tfrac{1}{2}\rho C_x S V^2, \quad F_y = \tfrac{1}{2}\rho C_y S V^2, \quad F_s = \tfrac{1}{2}\rho C_{xs} S_s V^2, \quad f_1 = \mu_1 N_1, \quad f_2 = \mu_2 N_2$$

2.2. Landing Gear Dynamics

The main function of the landing gear is to support and buffer the aircraft, bearing the longitudinal and vertical forces. In addition to the wheel and braking device, the struts, buffers, and torque arm are also main components of the landing gear. In this paper, the stiffness of the torque arm is assumed to be large enough, and the torsional freedom of the wheel with respect to the strut and the buffer is ignored, so the torque arm is not considered.
The buffer can be reasonably simplified as a mass-spring-damping system [46], and the force acting on the aircraft fuselage by the buffer can be described as:
$$\begin{cases} N_1 = K_1 X_1 + C_1 \dot{X}_1^2 \\ N_2 = K_2 X_2 + C_2 \dot{X}_2^2 \end{cases}, \qquad \begin{cases} X_1 = a + y \\ X_2 = b + y \end{cases}$$
whose parameters are shown in Table 2.
Due to the non-rigid connection between the landing gear and the aircraft fuselage, horizontal and angular displacements are generated under the action of braking forces. However, the struts are cantilever beams, and their angular displacements are very small and negligible. Therefore, the lateral stiffness model can be expressed by the following equivalent second-order equation:
$$d_a = \frac{f_1 K_0}{\frac{1}{W_n^2}s^2 + \frac{2\xi}{W_n}s + 1}, \qquad d_V = \frac{d}{dt}(d_a)$$
whose parameters are shown in Table 3.

2.3. Wheel Dynamics

The force diagram of the main wheel brake is shown in Figure 2.
It can be seen that during taxiing, the main wheel is subjected to the combined effect of the braking torque $M_s$ and the ground friction torque $M_j$. Due to the lateral stiffness effect, there is a longitudinal axle velocity $V_{zx}$ along the fuselage, which is the superposition of the aircraft velocity $V$ and the landing gear vibration velocity $d_V$. The dynamics equation of the main wheel is [46]:
$$\begin{cases} \dot{\omega} = \dfrac{M_j - M_s}{J} \\ V_w = \omega R_g \\ V_{zx} = V + d_V \\ R_g = R_N - k\sigma \\ M_j = \dfrac{\mu N R_g}{n} \end{cases}$$
whose parameters are shown in Table 4.
During braking, the tires are subjected to a braking torque that keeps the aircraft speed always greater than the wheel speed, i.e., $V > V_w$. Thus, the slip ratio $\lambda$ is defined to represent the slip of the wheels relative to the runway. For the main wheel, using $V_{zx}$ instead of $V$ to calculate $\lambda$ avoids false brake release due to landing gear deformation, thus effectively reducing gear walk [46]. The following equation is used to calculate the slip ratio in this paper:
$$\lambda = \frac{V_{zx} - V_w}{V_{zx}}$$
The tire–runway combination coefficient $\mu$ is related to many factors, including real-time runway conditions, aircraft speed, slip ratio, and so on. A simple empirical formula called the 'magic formula', developed by Pacejka [47], is widely used to calculate $\mu$ and can be expressed as follows:
$$\mu(\lambda, \tau_j) = \tau_1 \sin(\tau_2 \arctan(\tau_3 \lambda))$$
where $\tau_j$ ($j = 1, 2, 3$), i.e., $\tau_1$, $\tau_2$, and $\tau_3$, are the peak factor, the stiffness factor, and the curve shape factor, respectively. Table 5 lists the specific parameters for several different runway statuses [48].
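As an illustration of Equations (7) and (8), the slip ratio and the magic formula can be evaluated in a few lines of Python; the coefficient values below are hypothetical stand-ins, not the Table 5 values:

```python
import math

def slip_ratio(v_zx: float, v_w: float) -> float:
    """Slip ratio of Equation (7): lambda = (V_zx - V_w) / V_zx."""
    return (v_zx - v_w) / v_zx

def magic_formula(lam: float, tau1: float, tau2: float, tau3: float) -> float:
    """Pacejka 'magic formula' of Equation (8):
    mu = tau1 * sin(tau2 * arctan(tau3 * lambda))."""
    return tau1 * math.sin(tau2 * math.atan(tau3 * lam))

# Hypothetical dry-runway-like coefficients (NOT the paper's Table 5 values).
lam = slip_ratio(v_zx=70.0, v_w=63.0)
mu = magic_formula(lam, tau1=0.8, tau2=1.5344, tau3=14.0326)
```

The combination coefficient rises steeply from zero at $\lambda = 0$, peaks at a small slip ratio, and then falls off, which is why the controller tries to hold the wheel near the peak-adhesion slip.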

2.4. Hydraulic Servo System and Braking Device Modeling

Due to the complex structure of the hydraulic servo system, in this paper, some simplifications have been made so that only electro-hydraulic servo valves and pipes are considered. Their transfer functions are given as follows:
$$M(s) = \frac{K_{sv}}{\frac{s^2}{\omega_{sv}^2} + \frac{2\xi_{sv}s}{\omega_{sv}} + 1}, \qquad L(s) = \frac{K_p}{T_p s + 1}$$
whose parameters are shown in Table 6.
It should be noted that the anti-skid braking controller should realize both braking control and anti-skid control. To this end, there is an approximately linear relationship between the brake pressure P and the control current I c , which can be described as follows:
$$P = I_c M(s) L(s) + P_0$$
where $P_0 = 1 \times 10^7\ \mathrm{Pa}$.
The braking device serves to convert the brake pressure into brake torque, which is calculated as follows:
$$M_s = \mu_{mc} N_{mc} P R_{mc}$$
whose parameters are shown in Table 6.
The hydraulic servo system, as the actuator of AABS, is inevitably subject to some potential faults. Problems such as hydraulic oil mixing with air, internal leakage, and vibration seriously affect the efficiency of the hydraulic servo system [49]. Therefore, in this paper, the loss of efficiency (LOE) is introduced to represent a typical AABS actuator fault, which is characterized by a decrease in the actuator gain from its nominal value [26]. In the case of an actuator LOE fault, the brake pressure generated by the hydraulic servo system deviates from the commanded output expected by the controller. In other words, one instead has:
$$P_{fault} = k_{LOE} P$$
where $P_{fault}$ represents the actual actuator output, and $k_{LOE} \in (0, 1]$ denotes the LOE fault factor.
Remark 1.
An $n\%$ LOE is equivalent to the LOE fault gain $k_{LOE} = 1 - n/100$; $k_{LOE} = 1$ indicates that the actuator is fault-free.
Remark 2.
Note that when components no longer exhibit their fault-free characteristics, it is necessary to establish a fault model. This not only provides an accurate model for the subsequent reconfiguration controller design, but also ensures that the adverse effects caused by fault perturbations can be effectively observed and compensated for.
Thus, Equation (11) can be rewritten as follows:
$$M_s = \mu_{mc} N_{mc} P_{fault} R_{mc}$$
where $M_s$ is the actual brake torque.
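A minimal sketch of the faulted brake-torque chain of Equations (12) and (13); all numerical parameter values here are hypothetical placeholders, not those of Table 6:

```python
def brake_torque(p_cmd: float, k_loe: float,
                 mu_mc: float, n_mc: int, r_mc: float) -> float:
    """Brake torque under an actuator LOE fault, Equations (12)-(13):
    P_fault = k_LOE * P, then M_s = mu_mc * N_mc * P_fault * R_mc.
    k_loe lies in (0, 1]; k_loe = 1 means fault-free."""
    assert 0.0 < k_loe <= 1.0
    p_fault = k_loe * p_cmd
    return mu_mc * n_mc * p_fault * r_mc

# A 20% LOE fault (k_LOE = 0.8) scales the delivered torque by 0.8.
# The friction coefficient, disc count, and radius below are hypothetical.
nominal = brake_torque(1.0e7, 1.0, mu_mc=0.25, n_mc=4, r_mc=0.1)
faulted = brake_torque(1.0e7, 0.8, mu_mc=0.25, n_mc=4, r_mc=0.1)
```

Because the fault enters multiplicatively before the torque conversion, an $n\%$ LOE reduces the brake torque by exactly $n\%$, which is the perturbation the LESO later has to estimate.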
Remark 3.
As can be seen from the entire modeling process described above, AABS is nonlinear and highly coupled. The actuator fault leads to a sudden jump in the model parameters with greater internal perturbation compared to the fault-free case. Meanwhile, external disturbances such as the runway environment cannot be ignored.

3. Reconfiguration Controller Design

3.1. Problem Description

Although the aircraft has three degrees of freedom, only longitudinal taxiing is considered in AABS. In this paper, AABS adopted the slip-speed control type [48]: the braked wheel speed $V_w$ was used as the reference input, and the aircraft speed $V$ was dynamically adjusted by the AABS controller to achieve anti-skid braking. According to Section 2, the AABS longitudinal dynamics model can be rewritten as follows:
$$\ddot{V} = f(V, \dot{V}, \varpi_{out}, \varpi_f) + b_v u$$
where $f(\cdot)$ is the controlled plant dynamics, $\varpi_{out}$ represents the external disturbance, $\varpi_f$ is an uncertain term including component faults, $b_v$ is the control gain, and $u$ is the system input.
Let $x_1 = V$, $x_2 = \dot{V}$. Set $f(V, \dot{V}, \varpi_{out}, \varpi_f)$ as the system generalized total perturbation and extend it to a new system state variable, i.e., $x_3 = f(V, \dot{V}, \varpi_{out}, \varpi_f)$. Then the state equation of System (14) can be obtained:
$$\begin{cases} \dot{x}_1 = x_2 \\ \dot{x}_2 = x_3 + b_v u \\ \dot{x}_3 = h(V, \dot{V}, \varpi_{out}, \varpi_f) \end{cases}$$
where $x_1$, $x_2$, $x_3$ are system state variables, and $h(V, \dot{V}, \varpi_{out}, \varpi_f) = \dot{f}(V, \dot{V}, \varpi_{out}, \varpi_f)$.
Assumption 1.
Both the system generalized total perturbation $f(V, \dot{V}, \varpi_{out}, \varpi_f)$ and its derivative $h(V, \dot{V}, \varpi_{out}, \varpi_f)$ are bounded, i.e., $|f(V, \dot{V}, \varpi_{out}, \varpi_f)| \le \sigma_1$, $|h(V, \dot{V}, \varpi_{out}, \varpi_f)| \le \sigma_2$, where $\sigma_1$, $\sigma_2$ are two positive numbers.
For System (14), affected by the total perturbation, a LADRC reconfiguration controller was designed next to restrain or eliminate the adverse effects, thus realizing the asymptotic stability and acceptable performance of the closed-loop system.

3.2. LADRC Controller Design

The control schematic of the LADRC is shown in Figure 3.
Firstly, the following tracking differentiator (TD) was designed:
$$\begin{cases} e(k) = v_1(k) - v_r(k) \\ fh = \mathrm{fhan}(e(k), v_2(k), r, h) \\ v_1(k+1) = v_1(k) + h v_2(k) \\ v_2(k+1) = v_2(k) + h \cdot fh \end{cases}$$
where $v_r$ is the desired input, $v_1$ is the transition process of $v_r$, $v_2$ is the derivative of $v_1$, and $r$ and $h$ are adjustable filter coefficients. The function fhan is defined as follows:
$$\mathrm{fhan}(x_1, x_2, r, h) = \begin{cases} -r\,\mathrm{sgn}(a), & |a| > d \\ -r\dfrac{a}{d}, & |a| \le d \end{cases}$$
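Equation (17) shows only the two output branches of the fhan function; the intermediate quantities $a$ and $d$ are not defined in the text. A Python sketch of the standard discrete-time fhan construction (an assumption following Han's ADRC literature, not code from the paper):

```python
import math

def fhan(x1: float, x2: float, r: float, h: float) -> float:
    """Han's time-optimal synthesis function used by the tracking
    differentiator. The intermediate quantities a and d follow the
    standard ADRC construction; Equation (17) only shows the final
    two branches."""
    d = r * h
    d0 = h * d
    y = x1 + h * x2
    a0 = math.sqrt(d * d + 8.0 * r * abs(y))
    if abs(y) > d0:
        a = x2 + 0.5 * (a0 - d) * math.copysign(1.0, y)
    else:
        a = x2 + y / h
    if abs(a) > d:
        return -r * math.copysign(1.0, a)  # saturated branch
    return -r * a / d                      # linear branch near the origin
```

The output is always bounded by $r$, which is what lets the TD arrange a smooth, rate-limited transition process for the reference signal.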
The LESO was established in the following form:
$$\begin{cases} \dot{z}_1 = z_2 - \beta_1(z_1 - v_1) \\ \dot{z}_2 = z_3 - \beta_2(z_1 - v_1) + b_v u \\ \dot{z}_3 = -\beta_3(z_1 - v_1) \end{cases}$$
By selecting suitable observer gains $(\beta_1, \beta_2, \beta_3)$, the LESO enables real-time observation of the variables in System (14) [50], i.e., $z_1 \to v_1$, $z_2 \to v_2$, $z_3 \to f(V, \dot{V}, \varpi_{out}, \varpi_f)$.
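For illustration, one forward-Euler integration step of the LESO in Equation (18) can be sketched as follows (a sketch of the stated structure, not the authors' implementation; the gain values used in the test are placeholders):

```python
def leso_step(z, v1, u, beta, b_v, dt):
    """One forward-Euler step of the linear ESO in Equation (18).
    z = [z1, z2, z3] is the observer state, beta = [beta1, beta2, beta3]
    are the observer gains, b_v is the control gain, dt the step size."""
    z1, z2, z3 = z
    e = z1 - v1                       # observation error
    dz1 = z2 - beta[0] * e
    dz2 = z3 - beta[1] * e + b_v * u  # z3 absorbs the total perturbation
    dz3 = -beta[2] * e
    return [z1 + dt * dz1, z2 + dt * dz2, z3 + dt * dz3]
```

When the observation error is zero and no input is applied, the estimate is stationary; a nonzero error drives all three states, with the third channel integrating the error into the total-perturbation estimate.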
The control law was designed as:
$$u = \frac{u_0 - z_3}{b_v}$$
When $z_3$ estimates $f(V, \dot{V}, \varpi_{out}, \varpi_f)$ without error, let the LSEF be:
$$\begin{cases} e_1 = v_1 - z_1 \\ e_2 = v_2 - z_2 \\ u_0 = k_1 e_1 + k_2 e_2 \end{cases}$$
then the system (15) can be simplified to a double integral series structure:
$$\ddot{V} = (f(V, \dot{V}, \varpi_{out}, \varpi_f) - z_3) + u_0 \approx u_0$$
Further, the bandwidth method [50] was used to obtain:
$$\beta_1 = 3\omega_o, \quad \beta_2 = 3\omega_o^2, \quad \beta_3 = \omega_o^3$$
where $\omega_o$ is the observer bandwidth. The larger $\omega_o$ is, the smaller the LESO observation errors are; however, a larger $\omega_o$ also increases the sensitivity of the system to noise, so its selection requires comprehensive consideration.
Similarly, according to the parameterization method and engineering experience [32], the LSEF parameters can be chosen as:
$$k_1 = \omega_c^2, \quad k_2 = 2\xi\omega_c$$
where $\omega_c$ is the controller bandwidth and $\xi$ is the damping ratio; in this paper $\xi = 1$. Therefore, the parameter tuning problem of the LADRC controller was simplified to configuring the observer bandwidth $\omega_o$ and the controller bandwidth $\omega_c$.
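The bandwidth parameterization of Equations (22) and (23) reduces the whole tuning problem to two scalars, which can be sketched as:

```python
def ladrc_gains(omega_o: float, omega_c: float, xi: float = 1.0):
    """Bandwidth parameterization of LADRC, Equations (22) and (23):
    observer gains (beta1, beta2, beta3) from omega_o, and
    LSEF gains (k1, k2) from omega_c and the damping ratio xi."""
    beta1 = 3.0 * omega_o
    beta2 = 3.0 * omega_o ** 2
    beta3 = omega_o ** 3
    k1 = omega_c ** 2
    k2 = 2.0 * xi * omega_c
    return (beta1, beta2, beta3), (k1, k2)

# Example with xi = 1 as in the paper; the bandwidth values are illustrative.
betas, ks = ladrc_gains(omega_o=100.0, omega_c=4.0)
```

This is exactly the mapping the TD3 agent exploits later: choosing an action $(\omega_o, \omega_c)$ fixes all five LESO/LSEF gains at once.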

3.3. TD3 Algorithm

The TD3 algorithm is an off-policy RL algorithm based on DDPG, which was proposed in 2015 [51]. It adopts a method similar to that of Double-DQN [52] to reduce overestimation in function approximation, delays the update frequency of the actor network, and adds noise to the target actor network to mitigate the sensitivity and instability of DDPG. The structure of TD3 is shown in Figure 4.
The parameters of the critic networks are updated by minimizing the loss:
$$L = N^{-1} \sum \left( y - Q_{\theta_i}(s, a) \right)^2$$
where $s$ is the current state, $a$ is the current action, and $Q_{\theta_i}(s, a)$ stands for the parameterized state–action value function $Q$ with parameter $\theta_i$.
$$y = r + \gamma \min_{i=1,2} Q_{\theta_i'}(s', \tilde{a})$$
is the target value of the function $Q_\theta(s, a)$, $\gamma \in (0, 1)$ is the discount factor, and the target action is defined as:
$$\tilde{a} = \pi_{\phi'}(s') + \epsilon$$
where the noise $\epsilon$ follows a clipped normal distribution $\mathrm{clip}(\mathcal{N}(0, \tilde{\sigma}), -c, c)$ with $c > 0$. This implies that $\epsilon$ is a random variable drawn from $\mathcal{N}(0, \tilde{\sigma})$ and clipped to the interval $[-c, c]$.
The inputs of the actor network are both $Q_\theta(s, a)$ from the critic network and the minibatch from the replay memory, and the output is the action given by:
$$a_t = \pi_\phi(s_t) + \epsilon$$
where $\phi$ is the parameter of the actor network, and $\pi_\phi$ is the output of the actor network, which is a deterministic and continuous value. The noise $\epsilon$ follows the normal distribution $\mathcal{N}(0, \sigma)$ and is added for exploration.
The parameters of the actor network are updated based on the deterministic policy gradient:
$$\nabla_\phi J(\phi) = N^{-1} \sum \nabla_a Q_{\theta_1}(s, a)\big|_{a = \pi_\phi(s)} \nabla_\phi \pi_\phi(s)$$
TD3 updates the actor network and all three target networks every $d$ steps in order to avoid too fast a convergence. The parameters of the critic target networks and the actor target network are updated according to:
$$\theta_i' \leftarrow \tau \theta_i + (1 - \tau)\theta_i', \qquad \phi' \leftarrow \tau \phi + (1 - \tau)\phi'$$
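The three TD3 ingredients above (the clipped double-Q target, target-policy smoothing noise, and the soft update) each reduce to a line or two of Python; the default values of gamma, tau, sigma, and c below are illustrative, not the paper's settings:

```python
import numpy as np

def td3_target(r: float, q1_next: float, q2_next: float,
               gamma: float = 0.99) -> float:
    """Clipped double-Q target: y = r + gamma * min(Q'_1, Q'_2)."""
    return r + gamma * min(q1_next, q2_next)

def soft_update(target_params, params, tau: float = 0.005):
    """Polyak averaging: theta' <- tau * theta + (1 - tau) * theta'.
    Parameters are represented here as flat lists of floats for brevity."""
    return [(1.0 - tau) * tp + tau * p for tp, p in zip(target_params, params)]

def clipped_noise(sigma: float = 0.2, c: float = 0.5, rng=None) -> float:
    """Target-policy smoothing noise: eps ~ clip(N(0, sigma), -c, c)."""
    if rng is None:
        rng = np.random.default_rng()
    return float(np.clip(rng.normal(0.0, sigma), -c, c))
```

Taking the minimum over the two critics counteracts value overestimation, and the small tau makes the target networks trail the online networks slowly, both of which stabilize training.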
The pseudocode of the proposed approach is given in Algorithm 1.
Algorithm 1. TD3
Initialize critic networks $Q_{\theta_1}$, $Q_{\theta_2}$ and actor network $\pi_\phi$ with random parameters $\theta_1$, $\theta_2$, $\phi$;
Initialize target networks $Q_{\theta_1'}$, $Q_{\theta_2'}$ with $\theta_1' \leftarrow \theta_1$, $\theta_2' \leftarrow \theta_2$, and target actor network $\pi_{\phi'}$ with $\phi' \leftarrow \phi$;
Initialize replay buffer $R$;
For every episode:
    Initialize state $s$;
    Select action with exploration noise $a \leftarrow \pi_\phi(s) + \epsilon$, $\epsilon \sim \mathcal{N}(0, \sigma)$;
    Observe reward $r$ and new state $s'$;
    Store transition tuple $(s, a, r, s')$ in $R$;
    Sample mini-batch of $N$ transitions $(s, a, r, s')$ from $R$;
    Attain $\tilde{a} \leftarrow \pi_{\phi'}(s') + \epsilon$, where $\epsilon \sim \mathrm{clip}(\mathcal{N}(0, \tilde{\sigma}), -c, c)$;
    Update critics $\theta_i \leftarrow \arg\min_{\theta_i} N^{-1} \sum (y - Q_{\theta_i}(s, a))^2$;
    Every $d$ steps:
        Update $\phi$ by the deterministic policy gradient:
        $\nabla_\phi J(\phi) = N^{-1} \sum \nabla_a Q_{\theta_1}(s, a)\big|_{a = \pi_\phi(s)} \nabla_\phi \pi_\phi(s)$;
        Update target networks:
        $\theta_i' \leftarrow \tau \theta_i + (1 - \tau)\theta_i'$;
        $\phi' \leftarrow \tau \phi + (1 - \tau)\phi'$;
    $s \leftarrow s'$;
Until $s$ reaches terminal state $s_T$.

3.4. TD3-LADRC Reconfiguration Controller Design

Lack of environment adaptability, poor control performance, and weak robustness are the main shortcomings of parameter-fixed controllers [36]. When a fault occurs, it may not be possible to maintain the acceptable (rated or degraded) performance of the damaged system. Motivated by the above analysis, a reconfiguration controller called TD3-LADRC is proposed in this paper, and its control schematic is shown in Figure 5.
The deep reinforcement learning algorithm TD3 is introduced to realize LADRC parameter adaptation. The details of each part have been described above. The selection of control parameters is treated as the agent's action $a_t$, and the response of the control system $s_t$ is considered as the state, as follows:
$$a_t = [\omega_o, \omega_c]^T, \qquad s_t = s_{obs} = [e, \dot{e}, V, \dot{V}]^T$$
where $e = V - V_w$, and $s_{obs}$ is the agent observation vector.
The range of each controller parameter is selected as follows:
$$\omega_c \in [0, 4], \qquad \omega_o \in [100, 200]$$
The reward function plays a crucial role in the reinforcement learning algorithm. The appropriateness of the reward function design directly affects the training effect of the reinforcement learning, which in turn affects the effectiveness of the whole reconfiguration controller. According to the working characteristics of AABS, the following reward function is selected after several attempts to ensure stable and smooth braking:
$$r_t = \begin{cases} 1, & -6 \le \dot{V} \le -4 \\ 0, & \dot{V} > -4 \ \text{or} \ \dot{V} < -6 \\ -100, & V < 2 \ \text{or} \ e > 20 \end{cases}$$
The stop conditions for each training episode are as follows; meeting any one of the three ends the episode:
The aircraft speed V < 2 ;
The error between main wheel speed and aircraft speed e > 20 ;
Simulation time t > 20   s .
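One plausible Python reading of the reward function and stop conditions above; the 4–6 m/s² deceleration band and the −100 terminal penalty are assumptions of this reconstruction, not the authors' code:

```python
def reward(v: float, v_dot: float, e: float) -> float:
    """Shaped reward for the TD3 agent (a hedged reconstruction):
    +1 inside the assumed target deceleration band -6 <= V_dot <= -4 m/s^2,
    0 outside it, and -100 on the terminal conditions V < 2 m/s or
    |e| > 20 m/s (the speed-error threshold is taken as a magnitude here)."""
    if v < 2.0 or abs(e) > 20.0:
        return -100.0
    if -6.0 <= v_dot <= -4.0:
        return 1.0
    return 0.0
```

With a 0.001 s sampling time over a roughly 20 s episode, a per-step reward of 1 inside the deceleration band is consistent with the training target of an average reward near 12,000 reported in Section 4.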
Remark 4.
TD3, TD, LESO, and LSEF together constitute the TD3-LADRC controller. Compared to standard LADRC, TD3-LADRC realizes parameter adaptation, which makes the controller reconfigurable and greatly improves its robustness and disturbance rejection. It can effectively compensate for the adverse effects caused by the total perturbations, including faults.

3.5. TD3-LESO Estimation Capability Analysis

In order to prove the stability of the whole closed-loop system, the convergence of TD3-LESO is first analyzed in conjunction with Assumption 1 [53]. Let the estimation errors of TD3-LESO be $\tilde{x}_i = x_i - z_i$, $i = 1, 2, 3$; the estimation error equation of the observer can be obtained as:
$$\begin{cases} \dot{\tilde{x}}_1 = \tilde{x}_2 - 3\omega_o \tilde{x}_1 \\ \dot{\tilde{x}}_2 = \tilde{x}_3 - 3\omega_o^2 \tilde{x}_1 \\ \dot{\tilde{x}}_3 = h(V, \dot{V}, \varpi_{out}, \varpi_f) - \omega_o^3 \tilde{x}_1 \end{cases}$$
Let $\varepsilon_i = \tilde{x}_i / \omega_o^{i-1}$, $i = 1, 2, 3$; then Equation (33) can be rewritten as:
$$\dot{\varepsilon} = \omega_o A_3 \varepsilon + B \frac{h(V, \dot{V}, \varpi_{out}, \varpi_f)}{\omega_o^2}$$
where $A_3 = \begin{bmatrix} -3 & 1 & 0 \\ -3 & 0 & 1 \\ -1 & 0 & 0 \end{bmatrix}$, $B = [0 \ \ 0 \ \ 1]^T$.
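With the sign convention adopted here for $A_3$ (itself a reconstruction), it can be checked numerically that $A_3$ is Hurwitz with all eigenvalues at $-1$, which is the property Theorem 1 relies on:

```python
import numpy as np

# A_3 from Equation (34), with the sign convention assumed in this text.
# Its characteristic polynomial is (s + 1)^3, so all three eigenvalues
# are -1 and A_3 is Hurwitz.
A3 = np.array([[-3.0, 1.0, 0.0],
               [-3.0, 0.0, 1.0],
               [-1.0, 0.0, 0.0]])
eigenvalues = np.linalg.eigvals(A3)
hurwitz = bool((eigenvalues.real < 0.0).all())
```

Since $A_3$ is Hurwitz, scaling it by $\omega_o > 0$ only moves the eigenvalues further into the left half-plane, which is why larger observer bandwidths shrink the estimation-error bound.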
Based on Assumption 1 and Theorem 2 in Reference [54], the following theorem can be obtained:
Theorem 1.
Under the condition that $h(V, \dot{V}, \varpi_{out}, \varpi_f)$ is bounded, the TD3-LESO estimation errors are bounded, and their upper bounds decrease monotonically with increasing observer bandwidth $\omega_o$.
The proof is given in Appendix A. Thus, there exist three positive numbers $\upsilon_i$, $i = 1, 2, 3$, such that the state estimation errors satisfy $|\tilde{x}_i| \le \upsilon_i$; i.e., the TD3-LESO estimation errors are bounded, and the observer can effectively estimate the states of the controlled plant and the total perturbation.

3.6. Stability Analysis of Closed-loop System

The closed-loop system, consisting of the control laws (19) and (20) and the controlled plant (21), is:
$$\ddot{V} = f - z_3 + k_1 e_1 + k_2 e_2$$
Defining the tracking errors as $\varepsilon_i = v_i - x_i$, $i = 1, 2$, we obtain:
$$\begin{cases} \dot{\varepsilon}_1 = \dot{v}_1 - \dot{x}_1 = v_2 - x_2 = \varepsilon_2 \\ \dot{\varepsilon}_2 = \dot{v}_2 - \ddot{V} = -k_1 \varepsilon_1 - k_1 \tilde{x}_1 - k_2 \varepsilon_2 - k_2 \tilde{x}_2 - \tilde{x}_3 \end{cases}$$
Let $\varepsilon = [\varepsilon_1, \varepsilon_2]^T$, $\tilde{x} = [\tilde{x}_1, \tilde{x}_2, \tilde{x}_3]^T$; then:
$$\dot{\varepsilon}(t) = A_\varepsilon \varepsilon(t) + A_{\tilde{x}} \tilde{x}(t)$$
where $A_\varepsilon = \begin{bmatrix} 0 & 1 \\ -k_1 & -k_2 \end{bmatrix}$, $A_{\tilde{x}} = \begin{bmatrix} 0 & 0 & 0 \\ -k_1 & -k_2 & -1 \end{bmatrix}$.
By solving Equation (37):
$$\varepsilon(t) = e^{A_\varepsilon t} \varepsilon(0) + \int_0^t e^{A_\varepsilon (t - \tau)} A_{\tilde{x}} \tilde{x}(\tau)\, d\tau$$
Combining Assumption 1 and Theorem 1 above with Theorems 3 and 4 in the literature [54], the following theorem was proposed to analyze the stability of the closed-loop system:
Theorem 2.
Under the condition that the TD3-LESO estimation errors are bounded, there exists a controller bandwidth ω c , such that the tracking error of the closed-loop system is bounded. Thus, for a bounded input, the output of the closed-loop system is bounded, i.e., the closed-loop system is BIBO-stable.
See Appendix A for the proof.

4. Simulation Results

In order to verify the reconfiguration and disturbance rejection capabilities of the proposed method, the corresponding simulations were carried out in this section and compared with conventional PID + PBM and LADRC.
The initial states of the aircraft are set as follows:
The initial speed of aircraft landing V 0 = 72   m / s ;
The initial height of the center of gravity H h = 2.178   m .
To prevent deep wheel slip as well as tire blowout, the wheel speed was first made to follow the aircraft speed closely, and the brake pressure was applied only after 1.5 s. The anti-skid brake control was considered over when $V$ fell below 2 m/s.
In the experiment, both the critic networks and the actor networks were realized by a fully connected neural network with three hidden layers. The number of neurons in the hidden layer was (50,25,25). The activation function of the hidden layer was selected as the ReLU function, and the activation function of the output layer of the actor network was selected as the tanh function. In addition, the parameters of the actor network and the critic network were tuned by an Adam optimizer. The remaining parameters of TD3-LADRC are shown in Table 7.
Remark 5.
It is noted that the braking time t and braking distance x are selected as the criteria for braking efficiency, and the system stability is observed by slip rate λ .
The model simulation was carried out in MATLAB 2022a, and the TD3 algorithm was realized through the Reinforcement Learning Toolbox. The simulation time was 20 s, and the sampling time was 0.001 s. The training stopped when the average reward reached 12,000 and took about 6 h to complete. The learning curves of the reward obtained by the agent for each interaction with the environment during the training process are shown in Figure 6.
It can be seen that at the beginning of the training, the agent was in the exploration phase and the reward obtained was relatively low. Later, the reward gradually increased, and after 40 episodes, the reward was steadily maintained at a high level and the algorithm gradually converged.

4.1. Case 1: Fault-Free and External Disturbance-Free in Dry Runway Condition

The simulation results of the dynamic braking process for different control schemes are shown in Figure 7 and Figure 8 and Table 8.
As can be seen from Figure 7, PID + PBM leads to numerous skids during braking, which may cause serious loss to the tires. In contrast, LADRC and TD3-LADRC not only skid less frequently, but also have shorter braking time and braking distance. Moreover, the control effect of TD3-LADRC is better than LADRC. Figure 8 shows that TD3-LADRC can dynamically tune the controller parameters to accurately observe and compensate for the total disturbances, and thus improve the AABS performance.
Remark 6.
During the braking process, it is observed that at some instants $\omega_c = 0$. This does not necessarily affect the stability of the whole system. On the one hand, the value of $\omega_c$ does not change the fact that $A_\varepsilon$ is Hurwitz (see the Proof of Theorem 2 for details). On the other hand, $\omega_c$ is continuously adjusted by the agent through interaction with the environment, and at these instants the agent considers $\omega_c = 0$ optimal, i.e., briefly applying no anti-skid braking control leads to better braking results.

4.2. Case 2: Actuator LOE Fault in Dry Runway Condition

The fault considered here assumed a 20% actuator LOE at 5 s, escalating to a 40% LOE at 10 s. The simulation results are shown in Figure 9 and Figure 10 and Table 9.
As can be seen in Figure 9, PID + PBM continuously performed large braking and releasing operations under the combined effect of fault and disturbance. This makes braking much less efficient and risks dragging and flat tires. In addition, LADRC cannot brake the aircraft to a stop, which is not allowed in practice. Figure 9c shows that there is a high frequency of wheel slip in the low-speed phase of the aircraft. In contrast, TD3-LADRC retains the experience gained from the agent's prior training and continuously adjusts the controller parameters online based on the plant states, which ultimately allows the aircraft to brake smoothly. From Figure 10a, it can be seen that the total fault perturbations are estimated quickly and accurately based on the adaptive LESO. Overall, TD3-LADRC not only improves the robustness and disturbance rejection of the controller under fault-perturbed conditions, but also significantly improves the safety and reliability of AABS.

4.3. Case 3: Actuator LOE Fault in Mixed Runway Condition

The mixed runway is structured as follows: dry runway during 0–10 s, wet runway during 10–20 s, and snow runway after 20 s. The fault considered here assumed a 10% actuator LOE at 10 s. The simulation results are shown in Figure 11 and Figure 12 and Table 10.
The deterioration of the runway conditions results in very poor tire–ground adhesion. As can be seen from Figure 11, both braking time and braking distance increase compared to the dry runway. Figure 12 shows that TD3-LADRC is still able to adapt the controller parameters, accurately observe the total fault perturbations, and effectively compensate for their adverse effects. The whole reconfiguration control system adapts well to runway changes, improving the environmental adaptability of AABS.
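The runway dependence enters through the tire–ground friction curve; Table 5 lists parameters (τ_1, τ_2, τ_3) fitting the "magic formula" tire model of ref. [47]. The sketch below assumes the common form μ(λ) = τ_1·sin(τ_2·arctan(τ_3·λ)) and reconstructs the Case 3 runway schedule; whether the paper uses exactly this form is an assumption.

```python
import math

def mu(slip, tau1, tau2, tau3):
    """Pacejka 'magic formula' friction vs. slip ratio (form assumed, cf. [47])."""
    return tau1 * math.sin(tau2 * math.atan(tau3 * slip))

RUNWAYS = {          # (tau1, tau2, tau3) taken from Table 5
    "dry":  (0.85, 1.5344, 14.5),
    "wet":  (0.40, 2.0, 8.2),
    "snow": (0.28, 2.0875, 10.0),
}

def runway_at(t):
    """Mixed-runway schedule of Case 3: dry 0-10 s, wet 10-20 s, then snow."""
    return "dry" if t < 10.0 else ("wet" if t < 20.0 else "snow")

# Peak adhesion drops sharply as the surface degrades, which is why braking
# time and distance grow in Table 10.
for name, p in RUNWAYS.items():
    peak = max(mu(l / 1000.0, *p) for l in range(0, 1001))
    print(name, round(peak, 3))    # dry 0.85, wet 0.4, snow 0.28
```

The peak friction coefficient equals τ_1 for each surface, so the dry-to-snow transition cuts the available braking force by roughly two-thirds, consistent with the longer braking times in Table 10.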

5. Conclusions

A linear active disturbance rejection reconfiguration control scheme based on deep reinforcement learning was proposed to meet the higher performance requirements of AABS under fault-perturbed conditions. Based on the composition and working principle of the system, an AABS mathematical model with an actuator fault factor was established. A TD3-LADRC reconfiguration controller was then developed, in which the parameters of the LSEF and LESO are adjusted online by the TD3 algorithm. Simulation results under different conditions verified that the designed controller effectively improves anti-skid braking performance under faults, perturbations, and varying runway environments. It strengthens the robustness, disturbance rejection, and environmental adaptability of the AABS, thereby improving the safety and reliability of the aircraft. However, TD3-LADRC is complex, and its control effectiveness was verified only by simulations in this paper; the combined effect of the various uncertainties encountered in practical applications on the robustness of the controller could not be fully considered. In future work, it is therefore necessary to build an aircraft braking hardware-in-the-loop experimental platform consisting of a host PC, a target CPU, the anti-skid braking controller, the actuators, and the aircraft wheel, where the host PC and target CPU form the software simulation part and the remaining four components form the hardware part.

Author Contributions

Conceptualization, S.L., Z.Y. and Z.Z.; methodology, S.L., Z.Y. and Z.Z.; software, S.L. and Z.Z.; validation, S.L., Z.Y., Z.Z., R.J., T.R., Y.J., S.C. and X.Z.; formal analysis, S.L., Z.Y. and Z.Z.; investigation, S.L., Z.Y., Z.Z., R.J., T.R., Y.J., S.C. and X.Z.; resources, Z.Y., Z.Z., R.J., T.R., Y.J., S.C. and X.Z.; data curation, S.L., Z.Y., Z.Z., R.J., T.R., Y.J., S.C. and X.Z.; writing—original draft, S.L. and Z.Z.; writing—review and editing, S.L. and Z.Z.; visualization, S.L. and Z.Z.; supervision, S.L., Z.Y., Z.Z., R.J., T.R., Y.J., S.C. and X.Z.; project administration, S.L., Z.Y. and Z.Z.; funding acquisition, Z.Y., Z.Z., R.J., T.R., Y.J., S.C. and X.Z. All authors have read and agreed to the published version of the manuscript.


Funding

This research was funded by the Key Laboratory Projects of Aeronautical Science Foundation of China, grant numbers 201928052006 and 20162852031, and the Postgraduate Research & Practice Innovation Program of NUAA, grant number xcxjh20210332.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of Theorems

Proof of Theorem 1.
By solving Equation (34), we obtain:

$$\varepsilon(t)=e^{\omega_o A_3 t}\varepsilon(0)+\int_0^t e^{\omega_o A_3(t-\tau)}B\,\frac{h\big(V(\tau),\dot V(\tau),\varpi_{out},\varpi_f\big)}{\omega_o^{2}}\,d\tau \tag{A1}$$

Define $\zeta(t)$ as follows:

$$\zeta(t)=\int_0^t e^{\omega_o A_3(t-\tau)}B\,\frac{h\big(V(\tau),\dot V(\tau),\varpi_{out},\varpi_f\big)}{\omega_o^{2}}\,d\tau \tag{A2}$$

From the fact that $h\big(V(\tau),\dot V(\tau),\varpi_{out},\varpi_f\big)$ is bounded by $\sigma$, we have:

$$\big|\zeta_i(t)\big|\le\frac{\sigma}{\omega_o^{3}}\Big(\big|\big[A_3^{-1}B\big]_i\big|+\big|\big[A_3^{-1}e^{\omega_o A_3 t}B\big]_i\big|\Big) \tag{A3}$$

Because $A_3^{-1}=\begin{bmatrix}0&0&-1\\1&0&-3\\0&1&-3\end{bmatrix}$, we obtain:

$$\big|\big[A_3^{-1}B\big]_i\big|\le 3 \tag{A4}$$

Considering that $A_3$ is Hurwitz, there is a finite time $T_1$ such that for any $t\ge T_1$ and $i,j=1,2,3$, the following formula holds [54]:

$$\big|\big[e^{\omega_o A_3 t}\big]_{ij}\big|\le\frac{1}{\omega_o^{3}} \tag{A5}$$

Therefore, the following formula is satisfied:

$$\big|\big[e^{\omega_o A_3 t}B\big]_i\big|\le\frac{1}{\omega_o^{3}} \tag{A6}$$

Finally, we obtain:

$$\big|\big[A_3^{-1}e^{\omega_o A_3 t}B\big]_i\big|\le\frac{4}{\omega_o^{3}} \tag{A7}$$

From Equations (A3), (A4), and (A7), we obtain:

$$\big|\zeta_i(t)\big|\le\frac{3\sigma}{\omega_o^{3}}+\frac{4\sigma}{\omega_o^{6}} \tag{A8}$$

Let $\varepsilon_{sum}(0)=|\varepsilon_1(0)|+|\varepsilon_2(0)|+|\varepsilon_3(0)|$; then for all $t\ge T_1$, the following formula holds:

$$\big|\big[e^{\omega_o A_3 t}\varepsilon(0)\big]_i\big|\le\frac{\varepsilon_{sum}(0)}{\omega_o^{3}} \tag{A9}$$

From Equation (A1), we obtain:

$$\big|\varepsilon_i(t)\big|\le\big|\big[e^{\omega_o A_3 t}\varepsilon(0)\big]_i\big|+\big|\zeta_i(t)\big| \tag{A10}$$

Let $\tilde x_{sum}(0)=|\tilde x_1(0)|+|\tilde x_2(0)|+|\tilde x_3(0)|$; from $\varepsilon_i=\tilde x_i/\omega_o^{\,i-1}$ and Equations (A8)–(A10), we obtain:

$$\big|\tilde x_i(t)\big|\le\frac{\tilde x_{sum}(0)}{\omega_o^{3}}+\frac{3\sigma}{\omega_o^{4-i}}+\frac{4\sigma}{\omega_o^{7-i}}=\upsilon_i \tag{A11}$$

For all $t\ge T_1$, $i=1,2,3$, the above formula holds. □
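The matrix claims in the proof above can be checked numerically. The sketch below assumes $A_3$ is the companion matrix of $(s+1)^3$, the standard choice in the cited stability analysis [54]; the paper's own $A_3$ is defined in the main text, so this is an illustrative consistency check only.

```python
# Sanity check of the quoted inverse and the bound |(A_3^{-1} B)_i| <= 3,
# assuming A_3 is the companion matrix of (s + 1)^3 as in [54].
from fractions import Fraction as F

A3 = [[F(-3), F(1), F(0)],
      [F(-3), F(0), F(1)],
      [F(-1), F(0), F(0)]]
A3_inv = [[F(0), F(0), F(-1)],      # the inverse quoted in the proof
          [F(1), F(0), F(-3)],
          [F(0), F(1), F(-3)]]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

I3 = [[F(int(i == j)) for j in range(3)] for i in range(3)]
assert matmul(A3, A3_inv) == I3            # A3 * A3^{-1} = I holds exactly

B = [F(0), F(0), F(1)]
A3_inv_B = [sum(A3_inv[i][k] * B[k] for k in range(3)) for i in range(3)]
assert all(abs(v) <= 3 for v in A3_inv_B)  # the bound used in the proof
print([str(v) for v in A3_inv_B])
```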
Proof of Theorem 2.
According to Equation (37) and Theorem 1, we obtain:

$$\big|\big[A_{\tilde x}\tilde x(\tau)\big]_1\big|=0,\qquad\big|\big[A_{\tilde x}\tilde x(\tau)\big]_2\big|\le k_{sum}\upsilon_i=\gamma_l,\qquad t\ge T_1 \tag{A12}$$

where $k_{sum}=1+k_1+k_2$. Substituting the controller bandwidth parameterization $k_1=\omega_c^{2}$, $k_2=2\omega_c$ gives $k_{sum}=1+\omega_c^{2}+2\omega_c$; choosing the parameters in this way ensures that $A_\varepsilon$ is Hurwitz [54].

Define $\Theta=[0\ \ \gamma_l]^{T}$ and let $\vartheta(t)=\int_0^t e^{A_\varepsilon(t-\tau)}A_{\tilde x}\tilde x(\tau)\,d\tau$; then we obtain:

$$\big|\vartheta_i(t)\big|\le\big|\big[A_\varepsilon^{-1}\Theta\big]_i\big|+\big|\big[A_\varepsilon^{-1}e^{A_\varepsilon t}\Theta\big]_i\big|,\qquad i=1,2 \tag{A13}$$

$$\big|\big[A_\varepsilon^{-1}\Theta\big]_1\big|=\frac{\gamma_l}{k_1}=\frac{\gamma_l}{\omega_c^{2}},\qquad\big|\big[A_\varepsilon^{-1}\Theta\big]_2\big|=0 \tag{A14}$$

Considering that $A_\varepsilon$ is Hurwitz, there is a finite time $T_2$ such that for any $t\ge T_2$ and $i,j=1,2$, the following formula holds [54]:

$$\big|\big[e^{A_\varepsilon t}\big]_{ij}\big|\le\frac{1}{\omega_c^{3}} \tag{A15}$$

Let $T_3=\max\{T_1,T_2\}$; then for any $t\ge T_3$, $i=1,2$, we obtain:

$$\big|\big[e^{A_\varepsilon t}\Theta\big]_i\big|\le\frac{\gamma_l}{\omega_c^{3}} \tag{A16}$$

Then we obtain:

$$\big|\big[A_\varepsilon^{-1}e^{A_\varepsilon t}\Theta\big]_i\big|\le\begin{cases}\dfrac{1+k_2}{\omega_c^{2}}\cdot\dfrac{\gamma_l}{\omega_c^{3}}, & i=1\\[6pt]\dfrac{\gamma_l}{\omega_c^{3}}, & i=2\end{cases} \tag{A17}$$

From Equations (A13), (A14), and (A17), we obtain that for any $t\ge T_3$:

$$\big|\vartheta_i(t)\big|\le\begin{cases}\dfrac{\gamma_l}{\omega_c^{2}}+\dfrac{(1+k_2)\gamma_l}{\omega_c^{5}}, & i=1\\[6pt]\dfrac{\gamma_l}{\omega_c^{3}}, & i=2\end{cases} \tag{A18}$$

Let $\varepsilon_s(0)=|\varepsilon_1(0)|+|\varepsilon_2(0)|$; then for any $t\ge T_3$:

$$\big|\big[e^{A_\varepsilon t}\varepsilon(0)\big]_i\big|\le\frac{\varepsilon_s(0)}{\omega_c^{3}} \tag{A19}$$

From Equation (A12), we obtain:

$$\big|\varepsilon_i(t)\big|\le\big|\big[e^{A_\varepsilon t}\varepsilon(0)\big]_i\big|+\big|\vartheta_i(t)\big| \tag{A20}$$

From Equations (A12) and (A18)–(A20), we obtain that for any $t\ge T_3$, $i=1,2$:

$$\big|\varepsilon_i(t)\big|\le\begin{cases}\dfrac{\varepsilon_s(0)}{\omega_c^{3}}+\dfrac{k_{sum}\upsilon_i}{\omega_c^{2}}+\dfrac{(1+k_2)k_{sum}\upsilon_i}{\omega_c^{5}}, & i=1\\[6pt]\dfrac{k_{sum}\upsilon_i+\varepsilon_s(0)}{\omega_c^{3}}, & i=2\end{cases} \tag{A21}$$

□
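Similarly, the Hurwitz claim and the value of $A_\varepsilon^{-1}\Theta$ in (A14) can be spot-checked, assuming the standard 2 × 2 error matrix $A_\varepsilon=\begin{bmatrix}0&1\\-k_1&-k_2\end{bmatrix}$ with $k_1=\omega_c^2$, $k_2=2\omega_c$ (characteristic polynomial $(s+\omega_c)^2$); again, the paper's exact $A_\varepsilon$ is defined in the main text, so this is an illustrative check only.

```python
# Check of the Hurwitz property and of A_eps^{-1} Theta, with assumed
# A_eps = [[0, 1], [-k1, -k2]], k1 = wc**2, k2 = 2*wc.
wc, gl = 3.0, 7.0                     # illustrative bandwidth and bound gamma_l
k1, k2 = wc**2, 2.0 * wc

# Characteristic polynomial s^2 + k2*s + k1 has a double root at -wc < 0,
# so A_eps is Hurwitz for any wc > 0.
disc = k2**2 - 4.0 * k1
assert disc == 0.0 and -k2 / 2.0 == -wc

# A_eps^{-1} = (1/k1) * [[-k2, -1], [k1, 0]]; Theta = [0, gl]^T
v1 = (-k2 * 0.0 - 1.0 * gl) / k1      # first entry of A_eps^{-1} Theta
v2 = (k1 * 0.0 + 0.0 * gl) / k1       # second entry
assert abs(v1) == gl / k1             # matches |(A_eps^{-1} Theta)_1| = gl/wc^2
assert v2 == 0.0                      # matches (A_eps^{-1} Theta)_2 = 0
```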


References

1. Li, F.; Jiao, Z. Adaptive Aircraft Anti-Skid Braking Control Based on Joint Force Model. J. Beijing Univ. Aeronaut. Astronaut. 2013, 4, 447–452.
2. Jiao, Z.; Sun, D.; Shang, Y.; Liu, X.; Wu, S. A high efficiency aircraft anti-skid brake control with runway identification. Aerosp. Sci. Technol. 2019, 91, 82–95.
3. Chen, M.; Liu, W.; Ma, Y.; Wang, J.; Xu, F.; Wang, Y. Mixed slip-deceleration PID control of aircraft wheel braking system. IFAC-PapersOnLine 2018, 51, 160–165.
4. Chen, M.; Xu, F.; Liang, X.; Liu, W. MSD-based NMPC Aircraft Anti-skid Brake Control Method Considering Runway Variation. IEEE Access 2021, 9, 51793–51804.
5. Dinçmen, E.; Güvenç, B.A.; Acarman, T. Extremum-seeking control of ABS braking in road vehicles with lateral force improvement. IEEE Trans. Control Syst. Technol. 2012, 22, 230–237.
6. Li, F.B.; Huang, P.M.; Yang, C.H.; Liao, L.Q.; Gui, W.H. Sliding mode control design of aircraft electric brake system based on nonlinear disturbance observer. Acta Autom. Sin. 2021, 47, 2557–2569.
7. Radac, M.B.; Precup, R.E. Data-driven model-free slip control of anti-lock braking systems using reinforcement Q-learning. Neurocomputing 2018, 275, 317–329.
8. Zhang, R.; Peng, J.; Chen, B.; Gao, K.; Yang, Y.; Huang, Z. Prescribed Performance Active Braking Control with Reference Adaptation for High-Speed Trains. Actuators 2021, 10, 313.
9. Qiu, Y.; Liang, X.; Dai, Z. Backstepping dynamic surface control for an anti-skid braking system. Control Eng. Pract. 2015, 42, 140–152.
10. Mirzaei, A.; Moallem, M.; Dehkordi, B.M.; Fahimi, B. Design of an Optimal Fuzzy Controller for Antilock Braking Systems. IEEE Trans. Veh. Technol. 2006, 55, 1725–1730.
11. Xiang, Y.; Jin, J. Hybrid Fault-Tolerant Flight Control System Design Against Partial Actuator Failures. IEEE Trans. Control Syst. Technol. 2012, 20, 871–886.
12. Niksefat, N.; Sepehri, N. A QFT fault-tolerant control for electrohydraulic positioning systems. IEEE Trans. Control Syst. Technol. 2002, 4, 626–632.
13. Wang, D.Y.; Tu, Y.; Liu, C. Connotation and research of reconfigurability for spacecraft control systems: A review. Acta Autom. Sin. 2017, 43, 1687–1702.
14. Han, Y.G.; Liu, Z.P.; Dong, Z.C. Research on Present Situation and Development Direction of Aircraft Anti-Skid Braking System; China Aviation Publishing & Media Co., Ltd.: Xi'an, China, 2020; Volume 5, pp. 525–529.
15. Calise, A.J.; Lee, S.; Sharma, M. Development of a reconfigurable flight control law for tailless aircraft. J. Guid. Control Dyn. 2001, 24, 896–902.
16. Yin, S.; Xiao, B.; Ding, S.X.; Zhou, D. A review on recent development of spacecraft attitude fault tolerant control system. IEEE Trans. Ind. Electron. 2016, 63, 3311–3320.
17. Zhang, Y.; Jin, J. Bibliographical review on reconfigurable fault-tolerant control systems. Annu. Rev. Control 2008, 32, 229–252.
18. Zhang, Z.; Yang, Z.; Xiong, S.; Chen, S.; Liu, S.; Zhang, X. Simple Adaptive Control-Based Reconfiguration Design of Cabin Pressure Control System. Complexity 2021, 2021, 6635571.
19. Chen, F.; Wu, Q.; Jiang, B.; Tao, G. A reconfiguration scheme for quadrotor helicopter via simple adaptive control and quantum logic. IEEE Trans. Ind. Electron. 2015, 62, 4328–4335.
20. Guo, Y.; Jiang, B. Multiple model-based adaptive reconfiguration control for actuator fault. Acta Autom. Sin. 2009, 35, 1452–1458.
21. Gao, Z.; Jiang, B.; Shi, P.; Qian, M.; Lin, J. Active fault tolerant control design for reusable launch vehicle using adaptive sliding mode technique. J. Frankl. Inst. 2012, 349, 1543–1560.
22. Shen, Q.; Jiang, B.; Cocquempot, V. Fuzzy Logic System-Based Adaptive Fault-Tolerant Control for Near-Space Vehicle Attitude Dynamics with Actuator Faults. IEEE Trans. Fuzzy Syst. 2013, 21, 289–300.
23. Lv, X.; Jiang, B.; Qi, R.; Zhao, J. Survey on nonlinear reconfigurable flight control. J. Syst. Eng. Electron. 2013, 24, 971–983.
24. Han, J. From PID to active disturbance rejection control. IEEE Trans. Ind. Electron. 2009, 56, 900–906.
25. Huang, Y.; Xue, W. Active disturbance rejection control: Methodology and theoretical analysis. ISA Trans. 2014, 53, 963–976.
26. Guo, Y.; Jiang, B.; Zhang, Y. A novel robust attitude control for quadrotor aircraft subject to actuator faults and wind gusts. IEEE/CAA J. Autom. Sin. 2017, 5, 292–300.
27. Zhang, Z.; Yang, Z.; Zhou, G.; Liu, S.; Zhou, D.; Chen, S.; Zhang, X. Adaptive Fuzzy Active-Disturbance Rejection Control-Based Reconfiguration Controller Design for Aircraft Anti-Skid Braking System. Actuators 2021, 10, 201.
28. Zhou, L.; Ma, L.; Wang, J. Fault tolerant control for a class of nonlinear system based on active disturbance rejection control and RBF neural networks. In Proceedings of the 2017 36th Chinese Control Conference (CCC), Dalian, China, 26–28 July 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 7321–7326.
29. Tan, W.; Fu, C. Linear active disturbance-rejection control: Analysis and tuning via IMC. IEEE Trans. Ind. Electron. 2015, 63, 2350–2359.
30. Gao, Z. Scaling and bandwidth-parameterization based controller tuning. In Proceedings of the 2003 American Control Conference, Denver, CO, USA, 4–6 June 2003; pp. 4989–4996.
31. Gao, Z. Active disturbance rejection control: A paradigm shift in feedback control system design. In Proceedings of the 2006 American Control Conference, Minneapolis, MN, USA, 14–16 June 2006; IEEE: Piscataway, NJ, USA, 2006; pp. 2399–2405.
32. Li, P.; Zhu, G.; Zhang, M. Linear active disturbance rejection control for servo motor systems with input delay via internal model control rules. IEEE Trans. Ind. Electron. 2020, 68, 1077–1086.
33. Wang, L.X.; Zhao, D.X.; Liu, F.C.; Liu, Q.; Meng, F.L. Linear active disturbance rejection control for electro-hydraulic proportional position synchronous. Control Theory Appl. 2018, 35, 1618–1625.
34. Li, J.; Qi, X.H.; Wan, H.; Xia, Y.Q. Active disturbance rejection control: Theoretical results summary and future researches. Control Theory Appl. 2017, 34, 281–295.
35. Qiao, G.L.; Tong, C.N.; Sun, Y.K. Study on Mould Level and Casting Speed Coordination Control Based on ADRC with DRNN Optimization. Acta Autom. Sin. 2007, 33, 641–648.
36. Qi, X.H.; Li, J.; Han, S.T. Adaptive active disturbance rejection control and its simulation based on BP neural network. Acta Armamentarii 2013, 34, 776–782.
37. Sun, K.; Xu, Z.L.; Gai, K.; Zou, J.Y.; Dou, R.Z. Novel Position Controller of PMSM Servo System Based on Active-disturbance Rejection Controller. Proc. Chin. Soc. Electr. Eng. 2007, 27, 43–46.
38. Dou, J.X.; Kong, X.X.; Wen, B.C. Attitude fuzzy active disturbance rejection controller design of quadrotor UAV and its stability analysis. J. Chin. Inert. Technol. 2015, 23, 824–830.
39. Zhao, X.; Wu, C. Current deviation decoupling control based on sliding mode active disturbance rejection for PMLSM. Opt. Precis. Eng. 2022, 30, 431–441.
40. Li, B.; Zeng, L.; Zhang, P.; Zhu, Z. Sliding mode active disturbance rejection decoupling control for active magnetic bearings. Electr. Mach. Control 2021, 7, 129–138.
41. Buşoniu, L.; de Bruin, T.; Tolić, D.; Kober, J.; Palunko, I. Reinforcement learning for control: Performance, stability, and deep approximators. Annu. Rev. Control 2018, 46, 8–28.
42. Nian, R.; Liu, J.; Huang, B. A review on reinforcement learning: Introduction and applications in industrial process control. Comput. Chem. Eng. 2020, 139, 106886.
43. Yuan, Z.L.; He, R.Z.; Yao, C.; Li, J.; Ban, X.J. Online reinforcement learning control algorithm for concentration of thickener underflow. Acta Autom. Sin. 2021, 47, 1558–1571.
44. Pang, B.; Jiang, Z.P.; Mareels, I. Reinforcement learning for adaptive optimal control of continuous-time linear periodic systems. Automatica 2020, 118, 109035.
45. Chen, Z.; Qin, B.; Sun, M.; Sun, Q. Q-Learning-based parameters adaptive algorithm for active disturbance rejection control and its application to ship course control. Neurocomputing 2020, 408, 51–63.
46. Zou, M.Y. Design and Simulation Research on New Control Law of Aircraft Anti-skid Braking System. Master's Thesis, Northwestern Polytechnical University, Xi'an, China, 2005.
47. Pacejka, H.B.; Bakker, E. The magic formula tyre model. Veh. Syst. Dyn. 1992, 21, 1–18.
48. Wang, J.S. Nonlinear Control Theory and its Application to Aircraft Antiskid Brake Systems. Master's Thesis, Northwestern Polytechnical University, Xi'an, China, 2001.
49. Jiao, Z.; Liu, X.; Shang, Y.; Huang, C. An integrated self-energized brake system for aircrafts based on a switching valve control. Aerosp. Sci. Technol. 2017, 60, 20–30.
50. Yuan, D.; Ma, X.J.; Zeng, Q.H.; Qiu, X. Research on frequency-band characteristics and parameters configuration of linear active disturbance rejection control for second-order systems. Control Theory Appl. 2013, 30, 1630–1640.
51. Lillicrap, T.P.; Hunt, J.J.; Pritzel, A.; Heess, N.; Erez, T.; Tassa, Y.; Silver, D.; Wierstra, D. Continuous control with deep reinforcement learning. arXiv 2015, arXiv:1509.02971.
52. Fujimoto, S.; Hoof, H.; Meger, D. Addressing function approximation error in actor-critic methods. In Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; pp. 1587–1596.
53. Chen, Z.Q.; Sun, M.W.; Yang, R.G. On the Stability of Linear Active Disturbance Rejection Control. Acta Autom. Sin. 2013, 39, 574–580.
54. Zheng, Q.; Gao, L.Q.; Gao, Z. On stability analysis of active disturbance rejection control for nonlinear time-varying plants with unknown dynamics. In Proceedings of the 2007 46th IEEE Conference on Decision and Control, New Orleans, LA, USA, 12–14 December 2007; IEEE: Piscataway, NJ, USA, 2007; pp. 3501–3506.
Figure 1. Force diagram of aircraft fuselage.
Figure 2. Force diagram of the main wheel.
Figure 3. Control schematic of LADRC.
Figure 4. Structure of TD3.
Figure 5. Control schematic of TD3-LADRC.
Figure 6. Learning curves.
Figure 7. (a) Aircraft velocity and wheel velocity; (b) braking distance; (c) slip ratio; (d) control input.
Figure 8. (a) Extended state of TD3-LADRC; (b) controller bandwidth ω_c; (c) observer bandwidth ω_o.
Figure 9. (a) Aircraft velocity and wheel velocity; (b) braking distance; (c) slip ratio; (d) control input.
Figure 10. (a) Extended state of TD3-LADRC; (b) controller bandwidth ω_c; (c) observer bandwidth ω_o.
Figure 11. (a) Aircraft velocity and wheel velocity; (b) braking distance; (c) slip ratio; (d) control input.
Figure 12. (a) Extended state of TD3-LADRC; (b) controller bandwidth ω_c; (c) observer bandwidth ω_o.
Table 1. Parameters of aircraft fuselage dynamics.

H      Center of gravity height
y      Center of gravity height variation
V      Aircraft speed
T_0    Engine force
F_x    Aerodynamic drag
F_y    Aerodynamic lift
F_s    Parachute drag
f_1    Braking friction force between main wheel and ground
f_2    Braking friction force between front wheel and ground
N_1    Main wheel support force
N_2    Front wheel support force
m      Mass of the aircraft                                          1761 kg
g      Gravitational acceleration                                    9.8 m/s²
h_t    Distance between engine force line and center of gravity      0.1 m
h_s    Distance between parachute drag line and center of gravity    0.67 m
a      Distance between main wheel and center of gravity             1.076 m
b      Distance between front wheel and center of gravity            6.727 m
I      Fuselage inertia                                              4000 kg·s²·m
S      Wing area                                                     50.88 m²
S_s    Parachute area                                                20 m²
C_x    Aerodynamic drag coefficient                                  0.1027
C_y    Aerodynamic lift coefficient                                  0.6
C_xs   Parachute drag coefficient                                    0.75
T_0    Initial engine force                                          426 kg
K_v    Velocity coefficient of engine                                1 kg·s/m
ρ      Air density                                                   4000 kg·s²/m⁴
Table 2. Parameters of the buffer.

X_1    Main buffer compression
X_2    Front buffer compression
K_1    Main buffer stiffness coefficient     42,529
K_2    Front buffer stiffness coefficient    2500
C_1    Main buffer damping coefficient       800
C_2    Front buffer damping coefficient      800
Table 3. Parameters of the landing gear lateral stiffness model.

d_a    Navigation vibration displacement     See Equation (5)
d_V    Navigation vibration speed            See Equation (5)
K_0    Dynamic stiffness coefficient         536,000
ξ      Dynamic stiffness coefficient         0.2
W_n    Equivalent model natural frequency    60 Hz
Table 4. Parameters of the main wheel.

ω      Main wheel angular velocity
ω̇      Main wheel angular acceleration
V_w    Main wheel line speed
R_g    Main wheel rolling radius
N      Radial load
J      Main wheel inertia                    1.855 kg·s²·m
R      Wheel free radius                     0.4 m
k_σ    Tire compression coefficient          1.07 × 10⁻⁵ m/kg
n      Equivalent model natural frequency    4
Table 5. Parameters of the runway status.

Runway Status    τ_1     τ_2       τ_3
Dry runway       0.85    1.5344    14.5
Wet runway       0.40    2.0       8.2
Snow runway      0.28    2.0875    10
Table 6. Parameters of the hydraulic servo system.

K_sv    Servo valve gain                          1
ω_sv    Servo valve natural frequency             17.7074 rad/s
ξ_sv    Servo valve damping ratio                 0.36
K_p     Main wheel rolling radius                 1
T_p     Pipe gain                                 0.01
μ_mc    Friction coefficient of brake material    0.23
N_mc    Number of friction surfaces               4
R_mc    Effective brake friction radius           0.142 m
Table 7. Parameters of TD3-LADRC.

Control gain b_v         2
TD r                     0.001
TD h                     1
Discount factor γ        0.99
Actor learning rate      0.0001
Critic learning rate     0.001
Target update rate τ     0.001
Table 8. AABS performance index.

Performance Index       PID + PBM    LADRC     TD3-LADRC
Braking time (s)        20.48        16.73     14.79
Braking distance (m)    811.9        595.46    571.18

Table 9. AABS performance index.

Performance Index       PID + PBM    LADRC    TD3-LADRC
Braking time (s)        23.48        -        17.70
Braking distance (m)    838.46       -        618.12

Table 10. AABS performance index.

Performance Index       PID + PBM    LADRC     TD3-LADRC
Braking time (s)        49.14        29.82     24.19
Braking distance (m)    1228.71      739.99    672.03
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Liu, S.; Yang, Z.; Zhang, Z.; Jiang, R.; Ren, T.; Jiang, Y.; Chen, S.; Zhang, X. Application of Deep Reinforcement Learning in Reconfiguration Control of Aircraft Anti-Skid Braking System. Aerospace 2022, 9, 555.