Article

Analytical Design of Optimal Model Predictive Control and Its Application in Small-Scale Helicopters

by Weijun Hu, Jiale Quan, Xianlong Ma, Mostafa M. Salah and Ahmed Shaker
1 School of Astronautics, Northwestern Polytechnical University, Xi’an 710072, China
2 Electrical Engineering Department, Future University in Egypt, Cairo 11835, Egypt
3 Engineering Physics and Mathematics Department, Faculty of Engineering, Ain Shams University, Cairo 11535, Egypt
* Authors to whom correspondence should be addressed.
Mathematics 2023, 11(8), 1845; https://doi.org/10.3390/math11081845
Submission received: 4 March 2023 / Revised: 2 April 2023 / Accepted: 4 April 2023 / Published: 13 April 2023
(This article belongs to the Special Issue Analysis and Control of Dynamical Systems)

Abstract

A new method for controlling the position and speed of a small-scale helicopter based on optimal model predictive control is presented in this paper. In the proposed method, the homotopy perturbation technique is used to solve the optimization problem analytically and, as a result, to find the control signal. To assess the proposed method, a small-scale helicopter system is modeled and controlled using the proposed method. The proposed method has been investigated under different conditions, and its results have been compared with the conventional predictive control method. The simulation results show that the proposed technique performs well in the face of various uncertainties and disturbances and can quickly return the helicopter to its path.

1. Introduction

A control system is a tool that directs and regulates the behavior of a device or process toward desired values by generating and executing a set of control commands and inputs. One of the newer control methods that embeds an explicit model of the system in the controller structure is model predictive control (MPC) [1,2,3,4,5]. Predictive control refers to a family of algorithms that calculate the control signal by optimizing the predicted future behavior of a system. This approach has been developed since the 1970s and has been widely used in industrial processes. However, most MPC techniques focus on discrete time and much less on continuous time. Various methods and techniques have been developed in recent years to solve the continuous-time predictive control problem, some of which are summarized below [6,7,8,9,10].
In [11], based on the direct multiple shooting method, the prediction horizon is divided into subintervals, and by discretizing the control variables and parameterizing the state variables, the infinite-dimensional optimal control problem is reduced to a nonlinear programming (NLP) problem. The NLP problem is then solved using fast numerical methods based on derivative information. In [12], the control variable is taken to be constant over each sampling interval. The main feature of that method is the piecewise nature of the control signal, which leads to a tractable optimization problem with a reduced number of control variables. In [13], to reduce the computational burden and speed up the optimization, the control function is expressed as a series of orthogonal real functions. In [14], the optimal control problem is first turned into a nonlinear programming problem using an orthogonal collocation method; global optimization methods are then used to find the absolute minimum. In [15], using a direct method, the continuous-time predictive control problem and the underlying optimal control problem are converted into NLP problems at each sampling instant, i.e., by parameterization and discretization of the control and state variables, and the result is solved using nonlinear programming.
Moreover, in [16], the authors addressed the issue that the discretization of the optimal control problem in MPC is often performed at equally spaced points, and such a discretization is sufficiently accurate only if the sampling intervals are small enough. To improve the discretization accuracy and avoid a large number of subintervals, a pseudospectral method is used that approximates the state and control variables with Lagrange polynomials. In [17], the tracking problem for nonlinear systems using MPC is considered: successive derivatives of the system output equation are first obtained and used in a Taylor expansion of the output, so that the tracking error over the prediction horizon is approximated by a Taylor series; the resulting optimization problem is then solved with conventional methods. In [18], the authors proposed a model predictive control design for discrete-time nonlinear systems using global optimization; since the resulting nonlinear programming problem may be nonconvex, it is first convexified using existing methods and then solved after linearization. In 2010, the authors of [19] proposed a solution to the continuous-time linear regulator problem: the time interval is first subdivided into subintervals by appropriate methods, and in each subinterval the control function is parameterized in one of three ways (zero-order hold, piecewise-linear, or first-order hold). By converting the optimal control problem into a finite-dimensional optimization problem, the unknown parameters of these parameterizations, and hence the optimal control variables, are determined. The problem of nonlinear predictive control using a derivative-free optimization algorithm is addressed in [20]. Since gradient-based optimization methods are not suitable for nondifferentiable functions, a derivative-free algorithm is investigated and used: the optimal control problems in predictive control are first parameterized using a shooting method, and the resulting nonlinear programming problem is then solved with the derivative-free algorithm. There are also many other challenges in the field of control. For example, regarding trajectory tracking, various studies have been carried out in [21,22,23,24,25,26], which may interest the reader. On the other hand, with progress in various control fields, new methods have also been proposed to solve existing challenges [27,28,29,30]. Among these challenges, we can mention the transmission efficiency of robots: many studies have proposed methods to increase it, among them event-triggered control [31,32,33,34,35,36].
According to these studies, all of the above methods rely on linearization, discretization, or parameterization while treating the continuous-time predictive control problem, and none of them solves the problem directly in continuous time. On the other hand, as is well known, the theory of robust control in continuous time is rich. These observations motivate this paper: we propose a continuous-time predictive control scheme and show that the design of a continuous-time nonlinear predictive controller is equivalent to a system of differential-algebraic equations. Then, by solving this system with the semianalytical homotopy perturbation method, the optimal control function and state function are determined in continuous form.
The remainder of this article is organized as follows. The continuous-time predictive control problem is stated in Section 2. Since an optimal control problem must be solved at each update instant of the algorithm, Section 3 reviews the optimal control problem and the approaches to solving it. An indirect method is used to solve the optimal control problem, which leads to a set of differential-algebraic equations with boundary conditions; the semianalytical homotopy perturbation method used to solve this system is described in Section 4. Section 5 presents the proposed design algorithm for predictive control of continuous-time systems. Section 6 contains numerical examples and simulations, and Section 7 concludes the paper.

2. Problem Statement of Continuous-Time Predictive Control

Consider the following nonlinear continuous-time system:
$\dot{x}(t) = f(x(t), u(t), t), \quad y(t) = k(x(t)), \quad x(t_0) = x_0$  (1)
with state x(t) ∈ R^n, initial condition x_0 ∈ R^n, output y(t): R → R^r, and control input u(t) ∈ U ⊂ R^m, where U is a compact set containing the origin. The nonlinear functions f: R^(n+m+1) → R^n and k: R^n → R^r are smooth (continuously differentiable) with respect to all their arguments. It is assumed that system (1) has a unique solution for each initial condition x_0 ∈ R^n and for each continuous control function u: R^+ → U [37,38,39,40].
According to the designer’s goal, which is usually to bring the state or output of the system to the desired value with minimal control effort, a cost function with the following infinite horizon is considered:
$J(x_0, u) = \int_{0}^{\infty} g(x(t), u(t), t)\, dt$  (2)
where g: R^(n+m+1) → R^+ is a continuous function. Thus, (1) and (2) form an optimization problem with differential-algebraic constraints. If this optimization problem could be solved over the infinite horizon and there were no model or system mismatch, then the control function found at t = 0 could be applied to the system for all t ≥ 0. This is not the case in general: because of disturbances and model–system mismatch, the actual system behavior differs from the behavior predicted by the model. In this paper, receding horizon control techniques such as predictive control are used to compensate for this difference.
In predictive control, the optimal controls are first calculated based on the current values of the state variables, by predicting the future behavior of the system from the system model while taking the cost function into account. The calculated optimal controls are then applied to the dynamic system over a small part of the time interval. By moving the sampling interval forward and repeating this process, a control loop is created; this closed loop is obtained by recomputing the optimal controls from the instantaneous values of the state variables. Following the idea of predictive control and the theory of optimal control, the design of a predictive controller for the continuous-time system (1) with cost (2) is performed in the four steps of Algorithm 1 below:
Algorithm 1: Predictive controller for continuous-time systems (1) and (2)
0. Choose the prediction horizon H_P and the update step δ > 0 (note that δ is not necessarily constant, but it is smaller than the prediction horizon), and consider the sequence of update instants {t_i}_{i≥0} such that t_{i+1} = t_i + δ.
1. Measure or estimate the state of the system x_{t_i} at instant t_i.
2. Formulate the optimal control problem below and solve it to determine the solution u*_{H_P}: [t_i, t_i + H_P) → R^m:
$\min_{u \in pc([t_i, t_i + H_P), \mathbb{R}^m)} J_{H_P}(x, u) = \int_{t_i}^{t_i + H_P} g(x(t), u(t), t)\, dt$
$\text{s.t.} \quad \dot{x}(t) = f(x(t), u(t), t), \quad x(t_i) = x_{t_i}$
where pc(a, b, R^m) denotes the set of all piecewise continuous functions α: [a, b] → R^m.
3. Apply the control u_MPC(t) = u*_{H_P}(t) to the system over t ∈ [t_i, t_i + δ) and discard the remainder of the control signal.
4. Repeat the above process for the next update moment t i + 1 = t i + δ .
In the next sections, we study these equations and their control structures in more detail; for further background, see [41,42,43,44,45]. A schematic implementation of the receding-horizon loop of Algorithm 1 is sketched below.
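The following Python sketch illustrates the receding-horizon loop of Algorithm 1. It is a minimal illustration, not the paper's implementation: the routine solve_ocp, the Euler integration, and the number of sub-steps are assumptions introduced here, and in the paper the open-loop problem of step 2 is solved analytically by the homotopy perturbation method.

```python
import numpy as np

def receding_horizon_mpc(f, solve_ocp, x0, t0, HP, delta, n_updates):
    """Schematic receding-horizon loop of Algorithm 1 (illustrative only).

    f(x, u, t)          -- system dynamics from Equation (1)
    solve_ocp(x, t, HP) -- user-supplied routine returning a callable u*(t)
                           on [t, t + HP); in the paper this step is solved
                           analytically by the homotopy perturbation method
    """
    x, t = np.asarray(x0, dtype=float), float(t0)
    trajectory = [(t, x.copy())]
    for _ in range(n_updates):
        u_star = solve_ocp(x, t, HP)           # step 2: open-loop optimal control
        # step 3: apply u* only on [t_i, t_i + delta), here via simple Euler steps
        n_sub = 20
        dt = delta / n_sub
        for _ in range(n_sub):
            x = x + dt * f(x, u_star(t), t)
            t += dt
        trajectory.append((t, x.copy()))       # step 4: shift the horizon and repeat
    return trajectory
```

Any open-loop solver with the same interface could be plugged in for solve_ocp, which is what the remainder of the paper develops.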

3. Optimal Control Problem and Approaches to Solve It

As stated in Section 2, at each step of predictive control we encounter an optimal control (OC) problem that can be expressed as follows:
$\min J = \int_{t_0}^{t_f} g(x(t), u(t), t)\, dt$
$\text{s.t.} \quad \dot{x}(t) = f(x(t), u(t), t)$
$x(t_0) = x_0$  (3)
The purpose of solving the optimal control problem is to determine the admissible control function u*(t) = [u_1*(t), ..., u_m*(t)] that causes the system to follow the admissible state trajectory x*(t) = [x_1*(t), ..., x_n*(t)] while minimizing J. To solve this problem and ensure steady-state stability, a final (terminal) cost h(x(t_f), t_f), a function of the final time and the final state, is added to the cost function, so that J becomes:
$J = h(x(t_f), t_f) + \int_{t_0}^{t_f} g(x(t), u(t), t)\, dt$  (4)
Definition 1.
A pair (x(·), u(·)) is an admissible solution of (3) if it satisfies the constraints of the problem.
Definition 2.
An admissible solution (x^0(·), u^0(·)) is a weak local minimum of problem (3) with cost function (4) if there exist ε_1, ε_2 > 0 such that J(x^0(·), u^0(·)) ≤ J(x(·), u(·)) for all admissible solutions (x(·), u(·)) satisfying ‖x − x^0‖ < ε_1 and ‖u − u^0‖ < ε_2. A local minimum is usually called an optimal solution.
Definition 3.
A solution (x^0(·), u^0(·)) is called an absolute (global) minimum if the above conditions hold for ε_1 = ε_2 = ∞.
In general, there are three common approaches to solving the optimal control problem: dynamic programming, direct methods, and indirect methods. In dynamic programming [11], an optimal policy is obtained by applying the principle of optimality, which leads to solving recursive equations. In direct methods [12], the optimal control problem is turned into a nonlinear programming problem through discretization and parameterization of the variables, and the resulting problem is solved with conventional algorithms. In indirect methods, the calculus of variations and Pontryagin's minimum principle are used to derive the necessary optimality conditions, which constitute a two-point boundary value problem; solving it with suitable methods yields the optimal control and state functions. In this paper, the indirect method is used to solve the optimal control problem, and it is reviewed next.

Indirect Method of Solving Optimal Control Problems

In the indirect method of solving the optimal control problem, thanks to its analytical procedure based on the calculus of variations, the obtained solution satisfies at least the necessary conditions of optimality. Therefore, among the available solution methods, the indirect method can provide the optimal solution with high accuracy [46,47,48].
Theorem 1.
Assume that (x*(t), u*(t)) is a local minimum of problem (3) with cost function (4). Then there exists a continuously differentiable costate function λ(t): [t_0, t_f] → R^n such that, with the Hamiltonian function
$H(x(t), u(t), \lambda(t), t) = g(x(t), u(t), t) + \lambda^T(t)\, f(x(t), u(t), t)$  (5)
The following relationships are established:
$\dot{x}^* = \partial H / \partial \lambda \,(x^*(t), u^*(t), \lambda^*(t), t)$
$\dot{\lambda}^* = -\partial H / \partial x \,(x^*(t), u^*(t), \lambda^*(t), t)$
$H(x^*(t), u^*(t), \lambda^*(t), t) \leq H(x^*(t), u(t), \lambda^*(t), t) \quad \forall\, u(t) \in U$  (6)
and
$[h_x(x^*(t_f), t_f) - \lambda^*(t_f)]^T \delta x_f + [H(x^*(t_f), u^*(t_f), \lambda^*(t_f), t_f) + h_t(x^*(t_f), t_f)]\, \delta t_f = 0$  (7)
The inequality H(x*(t), u*(t), λ*(t), t) ≤ H(x*(t), u(t), λ*(t), t) expresses Pontryagin's minimum principle: u* must minimize the Hamiltonian H(x*(t), u(t), λ*(t), t).
Theorem 2.
Suppose that (x(t), u(t)) is an admissible solution of problem (3) with cost function (4) that satisfies the necessary optimality conditions. If g(x, u, t) and every component of f(x, u, t) are differentiable and convex with respect to (x, u) for every t ∈ [t_0, t_f], then (x(t), u(t)) is an absolute minimum of the optimal control problem.
Under the hypotheses of the above theorems, the optimal solution of the optimal control problem (3) with cost function (4) can be found by solving the system (6). The set of equations (6) constitutes a boundary value problem; when the differential and algebraic equations are nonlinear, it is difficult to find an exact analytical solution, even if the initial and boundary conditions are linear. In [14], the semianalytical homotopy perturbation method is used to solve it.
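To make the necessary conditions of Theorem 1 concrete, the short symbolic sketch below derives Equations (5) and (6) for a scalar example; the dynamics f = −x + u and running cost g = x² + u² are illustrative choices of ours, not taken from the paper.

```python
import sympy as sp

x, u, lam = sp.symbols('x u lam')

# Scalar illustrative example (our choice, not from the paper):
#   dynamics  f(x, u) = -x + u,   running cost  g(x, u) = x**2 + u**2
f = -x + u
g = x**2 + u**2

H = g + lam * f                                     # Hamiltonian, Equation (5)

x_dot   = sp.diff(H, lam)                           # state equation:   x'   =  dH/dlam
lam_dot = -sp.diff(H, x)                            # costate equation: lam' = -dH/dx
u_star  = sp.solve(sp.Eq(sp.diff(H, u), 0), u)[0]   # stationarity:     dH/du = 0

print(x_dot, lam_dot, u_star)                       # -x + u,  -2*x + lam,  -lam/2
```

For this example the stationarity condition gives u* = −λ/2, and the state and costate equations together with the boundary conditions form the two-point boundary value problem discussed above.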

4. Homotopy Perturbation Method

Most of the phenomena encountered in nature and technology are modeled by nonlinear differential equations for which, in many cases, an exact solution cannot be found. Various semianalytical and numerical methods exist for solving nonlinear differential equations. The semianalytical homotopy perturbation method has been used to solve nonlinear equations [15], systems of differential-algebraic equations with initial values [16], partial differential equations [17], and more.
The homotopy perturbation method is a combination of the classical perturbation method and the homotopy method [48]. The perturbation method relies on the existence of a small parameter (the perturbation) in the equation. To extend this method to equations that do not contain such a parameter, several remedies have been proposed [49,50]. The idea is to first use the concept of homotopy from topology to construct a homotopy equation associated with the equation of interest; this homotopy equation contains an embedding parameter that varies over the range [0, 1]. The perturbation method is then applied to the homotopy equation: the solution is written as a power series in the embedding parameter with unknown coefficient functions, this series is substituted into the homotopy equation, and the coefficients of like powers of the parameter are equated. This yields a sequence of linear equations which, solved successively, give the unknown functions of the series. To explain the main idea of the homotopy perturbation method, consider the following differential equation:
$A(x(t)) - f(t) = 0, \quad B(x) = 0, \quad t \in \Omega$  (8)
where A is a general differential operator, B is a boundary operator, and f(t) is a known analytical function. In general, the operator A can be divided into L and N, where L and N are linear and nonlinear operators, respectively. Thus, Equation (8) can be written as:
$L(x(t)) + N(x(t)) - f(t) = 0, \quad B(x) = 0$  (9)
To solve Equation (9), a homotopy v(t, p): Ω × [0, 1] → R is first constructed that satisfies the following condition:
$H(v, p) = L(v(t, p)) - L(x_0(t)) + p L(x_0(t)) + p N(v(t, p)) - p f(t) = 0$  (10)
where v(t, p) is an unknown function, p ∈ [0, 1] is the embedding parameter, and x_0(t) is an initial approximation of the solution of Equation (8) that satisfies the boundary condition. It is clear from Equation (10) that v(t, 0) = x_0(t) when p = 0 and v(t, 1) = x(t) when p = 1. In other words, as p varies from zero to one, v(t, p) changes continuously from x_0(t) to x(t). Suppose the solution of Equation (9) is written as a power series in p:
$v = v_0 + p v_1 + p^2 v_2 + \cdots$  (11)
where v_i(t), i = 0, 1, ..., are unknown functions obtained by the perturbation procedure. Setting p = 1 in the series (11) yields the approximate solution of Equation (8):
$x(t) = \lim_{p \to 1} v = v_0 + v_1 + v_2 + \cdots$  (12)
The rate of convergence of the series (11) depends on the operator A(v) of Equation (8). By substituting the series (11) into the homotopy (10) and equating the coefficients of like powers of p on both sides, the following equations are obtained:
$L(v_m(t)) = -N_{m-1}(v_0(t), \ldots, v_{m-1}(t)), \quad m \geq 1, \quad B(v_m) = 0$  (13)
where N_i, i ≥ 0, are the coefficients of p^i in the expansion of the nonlinear operator N:
$N(v(t)) = N_0(v_0(t)) + p N_1(v_0(t), v_1(t)) + p^2 N_2(v_0(t), v_1(t), v_2(t)) + \cdots$
For m ≥ 1, the functions v_m(t) are easily obtained by solving Equation (13) successively.
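As an illustration of the procedure in Equations (10)-(13), the sketch below applies the homotopy perturbation method symbolically to the simple nonlinear initial value problem y' + y² = 0, y(0) = 1 (exact solution 1/(1+t)). This example and the series order are our own choices for demonstration; they are not taken from the paper.

```python
import sympy as sp

t, p = sp.symbols('t p')
S = 4  # order of the HPM series

# Illustrative example (our choice, not from the paper):
#   y'(t) + y(t)**2 = 0,  y(0) = 1,  exact solution y = 1/(1 + t)
# Split A(y) = L(y) + N(y) with L(y) = y' (linear) and N(y) = y**2 (nonlinear).
y0 = sp.Integer(1)                       # initial approximation satisfying y(0) = 1

v = [sp.Function(f'v{i}')(t) for i in range(S)]
series = sum(p**i * v[i] for i in range(S))

# Homotopy (10):  L(v) - L(y0) + p*L(y0) + p*N(v) = 0   (here f(t) = 0)
homotopy = sp.expand(sp.diff(series, t) - sp.diff(y0, t)
                     + p * sp.diff(y0, t) + p * series**2)

sol, approx = {}, 0
for i in range(S):
    eq = homotopy.coeff(p, i).subs(sol)          # coefficient of p**i, Equation (13)
    ic = {v[i].subs(t, 0): 1 if i == 0 else 0}   # v0(0) = 1, v_i(0) = 0 for i >= 1
    sol[v[i]] = sp.dsolve(sp.Eq(eq, 0), v[i], ics=ic).rhs
    approx += sol[v[i]]                          # setting p = 1, Equation (12)

print(sp.expand(approx))   # 1 - t + t**2 - t**3  ~  1/(1 + t)
```

The printed result, 1 − t + t² − t³, is the fourth-order HPM approximation of 1/(1+t); higher orders follow by increasing S.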

5. The Proposed Method

As stated in Section 2, according to Algorithm 1, the continuous-time predictive control problem is solved repeatedly in four steps. Considering the system dynamics, the cost function, and the indirect method of solving the optimal control problem, we can now present the following algorithm for the continuous-time predictive control problem. Consider the following cost function and dynamic system:
$J = h(x(T), T) + \int_{t_0}^{T} g(x(t), u(t), t)\, dt$
$\dot{x}(t) = f(x(t), u(t), t)$
$y(t) = k(x(t))$
$x(t_0) = x_0$  (14)
The design of the predictive controller for problem (14), which is a feedback controller in the receding horizon framework, is carried out by solving a system of differential-algebraic equations according to the following Algorithm 2 in four steps:
Algorithm 2: Predictive controller for the problem (14)
0. Choose the prediction horizon H_P and the update step δ > 0 (note that δ is not necessarily fixed, but it is smaller than the prediction horizon), and consider the sequence of update instants {t_i}_{i≥0} such that t_{i+1} = t_i + δ.
1. Solve the system of differential-algebraic equations (DAEs) given below using the homotopy perturbation method to obtain approximate expressions for the state x_MPC(t), the costate λ_MPC(t), and the control u_MPC(t) on [t_i, t_i + H_P).
2. Apply u MPC ( t ) control over the system in the range of t   [ t i ,   t i + δ ) and ignore the rest of the control signal.
3. Determine the system state at moment t i + 1 with the x MPC ( t ) state function obtained in step (1), x t i + 1 = x MPC ( t i + 1 ) .
4. Repeat the above process for the next update moment t i + 1 = t i + δ .
The system of DAEs referred to in step 1 of the above algorithm is the following:
$\dot{x} = f(x(t), u(t), t)$
$\dot{\lambda} = -[f_x(x(t), u(t), t)]^T \lambda(t) - g_x$
$0 = g_u(x(t), u(t), t) + [f_u(x(t), u(t), t)]^T \lambda(t)$
$x(t_i) = x_i$
$\lambda(t_i + H_P) = h_x(x(t_i + H_P))$  (15)
In the homotopy perturbation method, a homotopy is first constructed for each of the equations (algebraic and differential) of the system (15). Approximate solutions of (15) are sought as series of arbitrary order s with unknown coefficient functions, as follows; substituting them into the constructed homotopies yields the unknown functions of the series:
$x(t) \approx \hat{x}(t) = x_0(t) + p x_1(t) + \cdots + p^s x_s(t)$
$\lambda(t) \approx \hat{\lambda}(t) = \lambda_0(t) + p \lambda_1(t) + \cdots + p^s \lambda_s(t)$
$u(t) \approx \hat{u}(t) = u_0(t) + p u_1(t) + \cdots + p^s u_s(t)$  (16)
Then, by setting p = 1, the approximate solutions of the system (15) are determined:
$x_{\mathrm{MPC}}(t) = x_0(t) + x_1(t) + \cdots + x_s(t)$
$\lambda_{\mathrm{MPC}}(t) = \lambda_0(t) + \lambda_1(t) + \cdots + \lambda_s(t)$
$u_{\mathrm{MPC}}(t) = u_0(t) + u_1(t) + \cdots + u_s(t)$  (17)
When solving the system (15) by the homotopy perturbation method, the initial values of the costate variables, λ(t_0), are not known. Therefore, λ(t_0) = α, α ∈ R^n, is first treated as an unknown. Then, after solving the system, the unknown values of α are obtained from the boundary condition λ(t_i + H_P) = h_x(x(t_i + H_P)).
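The role of the unknown vector α can also be illustrated numerically with a shooting-type scheme: guess α, integrate the combined state/costate system (15) forward, and adjust α until the terminal condition on λ is met. The sketch below is only such a numerical illustration; in the paper the same unknowns are determined analytically from the HPM series, and the functions aug_rhs and h_x are user-supplied placeholders rather than quantities defined in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

def find_alpha(aug_rhs, h_x, x_i, t_i, HP):
    """Shooting-style determination of the unknown costate alpha = lambda(t_i).

    aug_rhs(t, z) -- right-hand side of the combined state/costate system (15),
                     z = [x; lambda], with u eliminated via the algebraic
                     condition; supplied by the user (a placeholder here)
    h_x(x)        -- gradient of the terminal cost h evaluated at x
    """
    n = len(x_i)

    def residual(alpha):
        z0 = np.concatenate([x_i, alpha])
        sol = solve_ivp(aug_rhs, (t_i, t_i + HP), z0, rtol=1e-8, atol=1e-8)
        x_T, lam_T = sol.y[:n, -1], sol.y[n:, -1]
        return lam_T - h_x(x_T)      # enforce lambda(t_i + HP) = h_x(x(t_i + HP))

    return fsolve(residual, np.zeros(n))
```

The returned α plays the same role as the unknown initial costate in the HPM series; only the way it is computed differs.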
Note 1.
To deploy the designed continuous-time controller, we implement it in sampled-data form, so that the effect of discretization on the designed continuous controller can be observed when it runs on digital systems.

Stability

In MPC, an infinite horizon optimal control problem is approximated by a sequence of finite horizon problems within a receding horizon control scheme. In general, the stability of the closed-loop system is not guaranteed when finite prediction horizons are used, and various methods and techniques have been proposed to achieve it. In [44], under hypotheses on the system dynamics, a terminal penalty term x(t + H_P)^T P x(t + H_P) is added to the cost function and a terminal inequality constraint x(t + H_P) ∈ Ω is added to the constraints of the open-loop optimal control problem. The existence and selection of the terminal penalty matrix P and the terminal region Ω are then shown such that stability of the closed-loop system is guaranteed. The stated conditions are sufficient for stability, not necessary. The added terminal inequality constraint makes the open-loop optimal control problem harder to solve, and its online execution becomes demanding in terms of computation time.
In [49], it is shown that a suitable choice of the prediction horizon can ensure satisfaction of the terminal inequality constraint (Lemma 2, as described in [39]); the stability conditions of the closed-loop system are then stated in Theorem 1 of [38]. In [45], the stability of receding horizon controllers is analyzed using a Lyapunov function as the terminal penalty term. Theorem 4 in [20] establishes the existence of a prediction horizon that guarantees stability of the closed-loop system.

6. Simulation

In this section, to show the capability and efficiency of the proposed algorithm, the performance of the proposed predictive controller design is evaluated on a CE150 laboratory helicopter. In this helicopter, the angles of the main and lateral rotor planes are fixed, and the only function of the blades is to maintain angular balance in the vertical and horizontal planes, described by the angles ψ and ϕ. The only control inputs of the helicopter are the rotational speeds of its rotors, denoted u1 and u2. The torque produced by the main rotor mainly affects the vertical plane, i.e., the elevation (ψ), while the torque produced by the side rotor affects the horizontal plane, i.e., the azimuth (ϕ). By examining the controllability and observability conditions of the horizontal and vertical SISO subsystems, we verify that both are controllable.
In this section, the goal is to control each decoupled SISO channel separately. In practice, two independent SISO subsystems can be created by tightening one of the two screws embedded in the horizontal and vertical planes, which locks the corresponding motion, and by applying no command to the locked channel's input. To begin the controller design, we first consider the nonlinear state-space differential equations governing the helicopter:
$f_\psi(x(t), u_1(t), t) = \begin{bmatrix} x_2 \\ \frac{1}{I}\left(-\tau_g \sin x_1 - B_\varphi x_2 + a_1 x_3^2 + b_1 x_3\right) \\ x_4 \\ \frac{1}{T_1^2} u_1 - \frac{1}{T_1^2} x_3 - \frac{2}{T_1} x_4 \end{bmatrix}$
$x(t) = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} \psi \\ \dot{\psi} \\ \alpha \\ \dot{\alpha} \end{bmatrix}$
$g_\psi(x(t), u_1(t), t) = [x_1]$
In the stated dynamic model, four state variables describe the state and behavior of the system at any given time, and one control variable determines how the system behaves. The parameters of this model are given in Table 1, where T_1 is the rotor thrust and I is the moment of inertia, whose terms can be found from the definition by assuming that the point masses are concentrated at the motors and the counterweight. The remaining parameters are constant coefficients.
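For reference, the elevation-channel dynamics above can be coded directly with the Table 1 values. The signs in this sketch follow our reconstruction of the typeset model equations (some minus signs were lost in the extraction), so it should be read as illustrative rather than definitive.

```python
import numpy as np

# Parameter values from Table 1
I_M, B_PHI, TAU_G = 0.0055, 0.0011, 0.0810
A1, B1, T1 = 0.1950, 0.0030, 0.3550

def f_psi(x, u1):
    """Elevation-channel dynamics f_psi; x = [psi, psi_dot, alpha, alpha_dot].

    Signs follow our reading of the model in the text; illustrative only.
    """
    x1, x2, x3, x4 = x
    return np.array([
        x2,
        (-TAU_G * np.sin(x1) - B_PHI * x2 + A1 * x3**2 + B1 * x3) / I_M,
        x4,
        (u1 - x3 - 2.0 * T1 * x4) / T1**2,
    ])
```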
Since the cost function is written in terms of the state variables, the following equations must be solved to obtain the equilibrium point x̂ for a given reference output y_d, i.e., the state in which the helicopter points straight ahead at an angle r:
$\dot{x}(t) = f(x(t), u(t), t) = 0, \quad y_d = r$
By solving the above equations, the equilibrium point is obtained:
if $y_d = 70^\circ = 0.9397$ rad, then $\hat{x} = [0.9397, 0, 0.6189, 0]^T$ and $\hat{u} = [0.6189]$;
if $y_d = 90^\circ = 1.5708$ rad, then $\hat{x} = [1.5708, 0, 0.6387, 0]^T$ and $\hat{u} = [0.6387]$.
The goal is to keep the helicopter at equilibrium; therefore, to design the controller, we consider the cost function as follows:
$\min J = [x_1(t_f) - x_{1\mathrm{ref}}(t_f)]^T H_1 [x_1(t_f) - x_{1\mathrm{ref}}(t_f)] + [x_3(t_f) - x_{3\mathrm{ref}}(t_f)]^T H_3 [x_3(t_f) - x_{3\mathrm{ref}}(t_f)] + \int_{t_0}^{t_f} \left\{ [x_1(t) - x_{1\mathrm{ref}}(t)]^T Q_1 [x_1(t) - x_{1\mathrm{ref}}(t)] + [x_3(t) - x_{3\mathrm{ref}}(t)]^T Q_3 [x_3(t) - x_{3\mathrm{ref}}(t)] + u^T R u \right\} dt$
According to Algorithm 2, by solving the system of differential equations with the following boundary conditions using the method of homotopy perturbation in each update interval, the state function and the optimal control function are obtained:
$\dot{x}_1 = x_2$
$\dot{x}_2 = \frac{1}{I}\left(-\tau_g \sin x_1 - B_\varphi x_2 + a_1 x_3^2 + b_1 x_3\right)$
$\dot{x}_3 = x_4$
$\dot{x}_4 = -\frac{1}{T_1^2} x_3 - \frac{2}{T_1} x_4 - \frac{1}{2 R T_1^2} \lambda_4$
$\dot{\lambda}_1 = -2 Q_1 (x_1 - x_{1d}) + \frac{\tau_g}{I} \lambda_2 \cos x_1$
$\dot{\lambda}_2 = -\lambda_1 + \frac{B_\varphi}{I} \lambda_2$
$\dot{\lambda}_3 = -2 Q_3 (x_3 - x_{3d}) - \frac{2 a_1}{I} \lambda_2 x_3 - \frac{b_1}{I} \lambda_2 + \frac{1}{T_1^2} \lambda_4$
$\dot{\lambda}_4 = -\lambda_3 + \frac{2}{T_1} \lambda_4$
$u = -\frac{1}{2 R T_1^2} \lambda_4$
with boundary conditions
$x_1(t_0) = x_{10}, \quad x_2(t_0) = x_{20}, \quad x_3(t_0) = x_{30}, \quad x_4(t_0) = x_{40}$
and
$2 H_1 (x_1(t_f) - x_{1\mathrm{ref}}) - \lambda_1(t_f) = 0, \quad \lambda_2(t_f) = 0$
$2 H_3 (x_3(t_f) - x_{3\mathrm{ref}}) - \lambda_3(t_f) = 0, \quad \lambda_4(t_f) = 0$
By implementing the proposed method, the approximate solutions of system states and control signals are calculated as follows.
$x_1(t) = \begin{cases} 1.0472 + 0.3153 t^2, & t \in [0, 0.05) \\ 1.0472 + 0.0017 t + 0.2966 t^2, & t \in [0.05, 0.1] \end{cases}$
$x_2(t) = \begin{cases} 0.6306 t + 0.0642 t^2, & t \in [0, 0.05) \\ 0.0003 + 0.6481 t + 0.5495 t^2, & t \in [0.05, 0.1] \end{cases}$
$x_3(t) = \begin{cases} 0.6093 - 0.2476 t^2, & t \in [0, 0.05) \\ 0.6095 - 0.0138 t - 0.0462 t^2, & t \in [0.05, 0.1] \end{cases}$
$x_4(t) = \begin{cases} -0.4952 t + 2.5437 t^2, & t \in [0, 0.05) \\ -0.0080 - 0.3240 t + 2.3161 t^2, & t \in [0.05, 0.1] \end{cases}$
$u(t) = \begin{cases} 3.6041 - 97.4538 t + 614.2150 t^2, & t \in [0, 0.05) \\ 9.8578 - 155.3022 t + 597.2224 t^2, & t \in [0.05, 0.1] \end{cases}$
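The piecewise expressions above can be evaluated directly, for example to tabulate the applied control and elevation angle over the first two update intervals. In the sketch below the signs of coefficients whose minus signs were lost in the typesetting follow our reconstruction, so the numbers are illustrative.

```python
import numpy as np

def u_mpc(t):
    """Piecewise control signal over the first two update intervals (reconstructed signs)."""
    t = np.asarray(t, dtype=float)
    first = 3.6041 - 97.4538 * t + 614.2150 * t**2    # t in [0, 0.05)
    second = 9.8578 - 155.3022 * t + 597.2224 * t**2  # t in [0.05, 0.1]
    return np.where(t < 0.05, first, second)

def x1_mpc(t):
    """Approximate elevation angle x1(t) from the HPM series."""
    t = np.asarray(t, dtype=float)
    first = 1.0472 + 0.3153 * t**2
    second = 1.0472 + 0.0017 * t + 0.2966 * t**2
    return np.where(t < 0.05, first, second)

ts = np.linspace(0.0, 0.1, 11)
print(np.column_stack([ts, x1_mpc(ts), u_mpc(ts)]))
```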
In the first scenario, it is assumed that the helicopter flies in a vertical movement from point (0, 0, 0) to point (0, 0, 10), as displayed in Figure 1.
In Figure 1, our proposed optimal MPC method is compared with the sliding mode control (SMC) method presented in [50]. As can be seen, our proposed method deviates less from the path and has better performance. In the second scenario, the performance of the control system in the horizontal movement of the helicopter from point (0, 0, 10) to point (10, 10, 10) is evaluated, as illustrated in Figure 2.
As can be inferred from Figure 2, the optimal MPC control system performs better than SMC in horizontal movement and has less deviation and error. In the third scenario, both take-off and landing are defined, together with a diagonal movement. The helicopter first goes from point (0, 0, 0) to point (0, 0, 10) and then to point (50, 50, 60); in the last movement, it lands vertically at point (50, 50, 0) (see Figure 3). For more clarity, the third scenario is viewed from different angles in the three views of Figure 4.
Next, in the fourth scenario, movement along a curved, semicircular path is considered (Figure 5). As can be seen in Figure 5, the proposed method demonstrates good performance on the curved path, and the control system keeps the helicopter on the path. Next, the wind speed is increased from 10 m/s to 15 m/s and applied to the helicopter from different directions, as shown in Figure 6. It can be observed from Figure 6 and Figure 7 that the proposed control system performs very well when faced with this relatively strong wind, while the SMC method has an error of more than 3 m from the desired path. Table 2 compares the proposed method against three methods presented in other articles based on the root mean square error (RMSE) criterion.
Finally, Table 2 reports the RMSE along the x, y, and z axes. As can be seen, the proposed method (last column of the table) performs best among all methods. The reason for this superiority is the use of the homotopy perturbation technique to find an analytical optimal solution within the MPC structure.

7. Conclusions

In this paper, a new analytical method for optimal model predictive control (MPC) design was presented. In other works, the continuous-time MPC design is ultimately transformed into a discrete-time problem and solved in discrete time. On the other hand, as is well known, the theory of robust control in continuous time is rich and strong. These are the motivations of this paper: continuous-time MPC is proposed, and it has been shown that the design of a predictive controller for continuous-time nonlinear systems is equivalent to a set of differential-algebraic equations (DAEs) with boundary conditions. By solving these sets sequentially with the homotopy perturbation method, the control function and state function are obtained. Unlike methods that assume a constant control function over each update interval, this method specifies the control function and the optimal state function at all times. The presented method was tested on a small-scale helicopter, and the desired results were obtained. This requires an adequate helicopter model. The attitude of the helicopter can be controlled through the following four control inputs: collective, longitudinal cyclic, lateral cyclic (main rotor), and pedal (tail rotor) commands. By obtaining the linear model under different conditions, an accurate design can be provided for the helicopter. It should also be noted that, in general, helicopter control is performed using the main rotor and the tail rotor. This flying system is very sensitive to wind, which was also addressed in the simulation. A comparison with the SMC method showed that the proposed method performs much better in all motion scenarios. The values of RMSE(x), RMSE(y), and RMSE(z) with the proposed controller are 0.76, 0.71, and 0.59, respectively, which shows the superiority of this controller over the other controllers.

Author Contributions

Conceptualization, W.H., J.Q., X.M., and A.S.; methodology, W.H., J.Q., and X.M.; software, X.M., J.Q., and W.H.; validation, X.M., A.S., and M.M.S.; formal analysis, W.H., J.Q., and X.M.; investigation, all authors; resources, X.M. and M.M.S.; data curation, M.M.S.; writing—original draft preparation, W.H., J.Q., and X.M.; writing—review and editing, all authors. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Essa, M.E.-S.M.; Elsisi, M.; Saleh Elsayed, M.; Fawzy Ahmed, M.; Elshafeey, A.M. An Improvement of Model Predictive for Aircraft Longitudinal Flight Control Based on Intelligent Technique. Mathematics 2022, 10, 3510. [Google Scholar] [CrossRef]
  2. Guo, X.; Shirkhani, M.; Ahmed, E.M. Machine-Learning-Based Improved Smith Predictive Control for MIMO Processes. Mathematics 2022, 10, 3696. [Google Scholar] [CrossRef]
  3. Huang, H.; Shirkhani, M.; Tavoosi, J.; Mahmoud, O. A New Intelligent Dynamic Control Method for a Class of Stochastic Nonlinear Systems. Mathematics 2022, 10, 1406. [Google Scholar] [CrossRef]
  4. Chen, W.H.; You, F. Sustainable building climate control with renewable energy sources using nonlinear model predictive control. Renew. Sustain. Energy Rev. 2022, 168, 112830. [Google Scholar] [CrossRef]
  5. Danyali, S.; Aghaei, O.; Shirkhani, M.; Aazami, R.; Tavoosi, J.; Mohammadzadeh, A.; Mosavi, A. A New Model Predictive Control Method for Buck-Boost Inverter-Based Photovoltaic Systems. Sustainability 2022, 14, 11731. [Google Scholar] [CrossRef]
  6. Tavoosi, J.; Shirkhani, M.; Abdali, A.; Mohammadzadeh, A.; Nazari, M.; Mobayen, S.; Asad, J.H.; Bartoszewicz, A. A New General Type-2 Fuzzy Predictive Scheme for PID Tuning. Appl. Sci. 2021, 11, 10392. [Google Scholar] [CrossRef]
  7. Yang, H.; Xi, D.; Weng, X.; Qian, F.; Tan, B. A Numerical Algorithm for Self-Learning Model Predictive Control in Servo Systems. Mathematics 2022, 10, 3152. [Google Scholar] [CrossRef]
  8. Tavoosi, J.; Shirkhani, M.; Azizi, A.; Din, S.U.; Mohammadzadeh, A.; Mobayen, S. A hybrid approach for fault location in power distributed networks: Impedance-based and machine learning technique. Electr. Power Syst. Res. 2022, 210, 108073. [Google Scholar] [CrossRef]
  9. Aazami, R.; Heydari, O.; Tavoosi, J.; Shirkhani, M.; Mohammadzadeh, A.; Mosavi, A. Optimal Control of an Energy-Storage System in a Microgrid for Reducing Wind-Power Fluctuations. Sustainability 2022, 14, 6183. [Google Scholar] [CrossRef]
  10. Mohammadi, F.; Mohammadi-Ivatloo, B.; Gharehpetian, G.B.; Ali, M.H.; Wei, W.; Erdinç, O.; Shirkhani, M. Robust control strategies for microgrids: A review. IEEE Syst. J. 2021, 16, 2401–2412. [Google Scholar] [CrossRef]
  11. Hu, Q. Boundless Data Analytics through Progressive Mining. Ph.D. Dissertation, Rutgers University-School of Graduate Studies, New Brunswick, NJ, USA, 2018. [Google Scholar]
  12. Wu, T.; Xiong, L.; Cheng, J.; Xie, X. New results on stabilization analysis for fuzzy semi-Markov jump chaotic systems with state quantized sampled-data controller. Inf. Sci. 2020, 521, 231–250. [Google Scholar] [CrossRef]
  13. Karamanakos, P.; Geyer, T. Guidelines for the design of finite control set model predictive controllers. IEEE Trans. Power Electron. 2019, 35, 7434–7450. [Google Scholar] [CrossRef]
  14. Mahoui, S.; Moulay, M.S.; Omrane, A. Finite element approximation to optimal pointwise control of parabolic problems with incomplete data. In Proceedings of the International Conference on Mathematical Modelling in Applied Sciences, Saint Petersburg, Russia, 24–28 July 2017; p. 65. [Google Scholar]
  15. Fontes, F.A.; Paiva, L.T. Guaranteed constraint satisfaction in continuous-time control problems. IEEE Control Syst. Lett. 2018, 3, 13–18. [Google Scholar] [CrossRef]
  16. Faedo, N.; Olaya, S.; Ringwood, J.V. Optimal control, MPC and MPC-like algorithms for wave energy systems: An overview. IFAC J. Syst. Control 2017, 1, 37–56. [Google Scholar] [CrossRef]
  17. Berberich, J.; Köhler, J.; Müller, M.A.; Allgöwer, F. Linear tracking MPC for nonlinear systems—Part II: The data-driven case. IEEE Trans. Autom. Control 2022, 67, 4406–4421. [Google Scholar] [CrossRef]
  18. Pereira, G.C.; Lima, P.F.; Wahlberg, B.; Pettersson, H.; Mårtensson, J. Linear time-varying robust model predictive control for discrete-time nonlinear systems. In Proceedings of the 2018 IEEE Conference on Decision and Control (CDC), Miami Beach, FL, USA, 17–19 December 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 2659–2666. [Google Scholar]
  19. Gregory, J.; Lin, C. Constrained Optimization in the Calculus of Variations and Optimal Control Theory; Chapman and Hall/CRC: Boca Raton, FL, USA, 2018. [Google Scholar]
  20. Nasir, Y.; Yu, W.; Sepehrnoori, K. Hybrid derivative-free technique and effective machine learning surrogate for nonlinear constrained well placement and production optimization. J. Pet. Sci. Eng. 2020, 186, 106726. [Google Scholar] [CrossRef]
  21. Li, J.; Zhang, G.; Li, B. Robust Adaptive Neural Cooperative Control for the USV-UAV Based on the LVS-LVA Guidance Principle. J. Mar. Sci. Eng. 2022, 10, 51. [Google Scholar] [CrossRef]
  22. Zhang, H.; Zou, Q.; Ju, Y.; Song, C.; Chen, D. Distance-based support vector machine to predict DNA N6-methyladenine modification. Curr. Bioinform. 2022, 17, 473–482. [Google Scholar]
  23. Zhang, H.; Zhao, X.; Zhang, L.; Niu, B.; Zong, G.; Xu, N. Observer-based adaptive fuzzy hierarchical sliding mode control of uncertain under-actuated switched nonlinear systems with input quantization. Int. J. Robust Nonlinear Control 2022, 32, 8163–8185. [Google Scholar] [CrossRef]
  24. Si, Z.; Yang, M.; Yu, Y.; Ding, T. Photovoltaic power forecast based on satellite images considering effects of solar position. Appl. Energy 2021, 302, 117514. [Google Scholar] [CrossRef]
  25. Ding, T.; Zhang, Y.; Ma, G.; Cao, Z.; Zhao, X.; Tao, B. Trajectory tracking of redundantly actuated mobile robot by MPC velocity control under steering strategy constraint. Mechatronics 2022, 84, 102779. [Google Scholar] [CrossRef]
  26. Chang, Y.; Niu, B.; Wang, H.; Zhang, L.; Ahmad, A.M.; Alassafi, M.O. Adaptive tracking control for nonlinear system in pure-feedback form with prescribed performance and unknown hysteresis. IMA J. Math. Control Inf. 2022, 39, 892–911. [Google Scholar] [CrossRef]
  27. Zhang, H.; Wang, H.; Niu, B.; Zhang, L.; Ahmad, A.M. Sliding-mode surface-based adaptive actor-critic optimal control for switched nonlinear systems with average dwell time. Inf. Sci. 2021, 580, 756–774. [Google Scholar] [CrossRef]
  28. Cheng, F.; Liang, H.; Wang, H.; Zong, G.; Xu, N. Adaptive Neural Self-Triggered Bipartite Fault-Tolerant Control for Nonlinear MASs With Dead-Zone Constraints. IEEE Trans. Autom. Sci. Eng. 2022. [Google Scholar] [CrossRef]
  29. Wang, Y.; Niu, B.; Ahmad, A.; Liu, Y.; Wang, H.; Zong, G.; Alsaadi, F. Adaptive command filtered control for switched multi-input multi-output nonlinear systems with hysteresis inputs. Int. J. Adapt. Control Signal Process. 2022, 36, 3023–3042. [Google Scholar] [CrossRef]
  30. Li, Y.; Niu, B.; Zong, G.; Zhao, J.; Zhao, X. Command filter-based adaptive neural finite-time control for stochastic nonlinear systems with time-varying full-state constraints and asymmetric input saturation. Int. J. Syst. Sci. 2022, 53, 199–221. [Google Scholar] [CrossRef]
  31. Zhang, G.; Wang, L.; Li, J.; Zhang, W. Improved LVS guidance and path-following control for unmanned sailboat robot with the minimum triggered setting. Ocean Eng. 2023, 272, 113860. [Google Scholar] [CrossRef]
  32. Liu, S.; Niu, B.; Zong, G.; Zhao, X.; Xu, N. Adaptive fixed-time hierarchical sliding mode control for switched under-actuated systems with dead-zone constraints via event-triggered strategy. Appl. Math. Comput. 2022, 435, 127441. [Google Scholar] [CrossRef]
  33. Gao, H.; Shi, K.; Zhang, H. A novel event-triggered strategy for networked switched control systems. J. Frankl. Inst. 2021, 358, 251–267. [Google Scholar] [CrossRef]
  34. Cao, Y.; Zhao, N.; Xu, N.; Zhao, X.; Alsaadi, F.E. Minimal-Approximation-Based Adaptive Event-Triggered Control of Switched Nonlinear Systems with Unknown Control Direction. Electronics 2022, 11, 3386. [Google Scholar] [CrossRef]
  35. Cao, C.; Wang, J.; Kwok, D.; Cui, F.; Zhang, Z.; Zhao, D.; Li, M.J.; Zou, Q. webTWAS: A resource for disease candidate susceptibility genes identified by transcriptome-wide association study. Nucleic Acids Res. 2022, 50, D1123–D1130. [Google Scholar] [CrossRef] [PubMed]
  36. Tavoosi, J.; Shirkhani, M.; Azizi, A. Control engineering solutions during epidemics: A review. Int. J. Model. Identif. Control 2021, 39, 97–106. [Google Scholar] [CrossRef]
  37. Panchal, B.; Mate, N.; Talole, S.E. Continuous-time predictive control-based integrated guidance and control. J. Guid. Control Dyn. 2017, 40, 1579–1595. [Google Scholar] [CrossRef]
  38. Wang, T.; Kang, Y.; Li, P.; Zhao, Y.B.; Yu, P. Robust model predictive control for constrained networked nonlinear systems: An approximation-based approach. Neurocomputing 2020, 418, 56–65. [Google Scholar] [CrossRef]
  39. Wang, M.; Yang, M.; Fang, Z.; Wang, M.; Wu, Q. A Practical Feeder Planning Model for Urban Distribution System. IEEE Trans. Power Syst. 2022, 38, 1297–1308. [Google Scholar] [CrossRef]
  40. Köhler, J.; Soloperto, R.; Müller, M.A.; Allgöwer, F. A computationally efficient robust model predictive control framework for uncertain nonlinear systems. IEEE Trans. Autom. Control 2020, 66, 794–801. [Google Scholar] [CrossRef]
  41. Sparasci, M. Nonlinear Modeling and Control of Coaxial Rotor UAVs with Application to the Mars Helicopter. Master’s Thesis, Politecnico di Milano University, Milan, Italy, 2022. [Google Scholar] [CrossRef]
  42. Fethalla, N. Modelling, Identification, and Control of a Quadrotor Helicopter. Ph.D. Dissertation, École de Technologie Supérieure, Montreal, QC, Canada, 2019. [Google Scholar]
  43. Ifkirne, S. Helicopter Mathematical Modelling and Optimal Controller Design. Bachelor’s Thesis, Universitat Politècnica de Catalunya, Barcelona, Spain, 2015. [Google Scholar]
  44. Iranmehr, H.; Aazami, R.; Tavoosi, J.; Shirkhani, M.; Azizi, A.R.; Mohammadzadeh, A.; Mosavi, A.H.; Guo, W. Modeling the price of emergency power transmission lines in the reserve market due to the influence of renewable energies. Front. Energy Res. 2022, 9, 944. [Google Scholar] [CrossRef]
  45. Chen, H.; Allgower, F. A Quasi-Infinite Horizon Nonlinear Model Predictive Control Scheme with Guaranteed Stability. Automatica 1998, 34, 1205–1217. [Google Scholar] [CrossRef]
  46. Salati, A.B.; Shamsi, M.; Torres, D.F. Direct transcription methods based on fractional integral approximation formulas for solving nonlinear fractional optimal control problems. Commun. Nonlinear Sci. Numer. Simul. 2019, 67, 334–350. [Google Scholar] [CrossRef]
  47. Li, P.; Yang, M.; Wu, Q. Confidence interval based distributionally robust real-time economic dispatch approach considering wind power accommodation risk. IEEE Trans. Sustain. Energy 2020, 12, 58–69. [Google Scholar] [CrossRef]
  48. Chen, H.; Allgower, F. A quasi infinite horizon nonlinear predictive control scheme for stable. IFAC Proc. Vol. 1997, 30, 529–534. [Google Scholar] [CrossRef]
  49. Fang, X.; Wu, A.; Shang, Y.; Dong, N. Robust control of small-scale unmanned helicopter with matched and mismatched disturbances. J. Frankl. Inst. 2016, 353, 4803–4820. [Google Scholar] [CrossRef]
  50. Guo, X.; Qi, G.; Li, X.; Ma, S. Chaos control of small-scale UAV helicopter based on high order differential feedback controller. Int. J. Control 2022, 95, 2473–2484. [Google Scholar] [CrossRef]
  51. Zhao, W.; Meng, Z.; Wang, K.; Zhang, H. Backstepping Control of an Unmanned Helicopter Subjected to External Disturbance and Model Uncertainty. Appl. Sci. 2021, 11, 5331. [Google Scholar] [CrossRef]
Figure 1. The result of applying the proposed control method and SMC method [50] to the helicopter in the first scenario.
Figure 2. The result of applying the proposed control method and SMC method [50] to the helicopter in the second scenario.
Figure 3. The result of applying the proposed control method and SMC method [50] to the helicopter in the third scenario.
Figure 4. XY (a), XZ (b), and YZ (c) planes of Figure 3.
Figure 5. The result of applying the proposed control method and SMC method [50] to the helicopter in the fourth scenario.
Figure 6. The result of applying the proposed control method and SMC method [50] to the helicopter in the fourth scenario with 15 m/s wind force.
Figure 7. The XY plane of Figure 6.
Table 1. State space parameters of the helicopter.
a_1 = 0.1950    I = 0.0055 (N·m)    T_1 = 0.3550 (N)
b_1 = 0.0030    B_φ = 0.0011        τ_g = 0.0810
Table 2. The RMSE compared for four different control methods.
            H [49]    SMC [50]    Backstepping [51]    Optimal MPC
RMSE(x)     1.15      0.93        0.91                 0.76
RMSE(y)     1.02      0.97        0.89                 0.71
RMSE(z)     0.93      0.78        0.74                 0.59
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
