Article

On a Method for Optimizing Controlled Polynomial Systems with Constraints

by Alexander Buldaev and Dmitry Trunin
1 Department of Applied Mathematics, Buryat State University, 670000 Ulan-Ude, Russia
2 Buryat State University, 670000 Ulan-Ude, Russia
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(7), 1695; https://doi.org/10.3390/math11071695
Submission received: 30 January 2023 / Revised: 29 March 2023 / Accepted: 30 March 2023 / Published: 2 April 2023

Abstract

A new optimization approach is considered for the class of optimal control problems with constraints that are polynomial in the state. It is based on nonlocal control improvement conditions, which are constructed in the form of special fixed-point problems in the control space. The proposed method of successive control approximations preserves all constraints at each iteration and, in contrast to known gradient methods, does not use the operation of parametric control variation at each iteration. In addition, the initial approximation of the iterative process need not satisfy the constraints, which is a significant factor in increasing the efficiency of the approach. The comparative efficiency of the proposed fixed-point method in the considered class of problems is illustrated in a model example.

1. Introduction

Polynomial optimal control problems arise in many topical applications. First, systems of ordinary differential equations that are polynomial in the state are traditionally used to describe models of ecological-economic [1,2,3] and biological [4,5,6] processes, including models of immunological processes with delays [7]. It should be noted that, in general, questions about the adequacy of introducing controls and of choosing optimization criteria when formulating the corresponding optimal control problems, in particular for biomedical and ecological-economic processes, still require further research. The state of affairs here is such that the apparatus of optimal control methods currently serves as a means of studying models: showing their consistency and adequacy to real processes, testing hypotheses, and solving controllability problems, i.e., transferring a process from one state to another. Second, various classes of polynomial optimal control problems arise from regularizing inverse problems of mathematical physics, in particular, problems of identifying parameters of systems of ordinary differential equations [8,9]. Third, polynomial optimal control problems can be formed by polynomial approximation of the right-hand sides of controlled nonlinear systems for their approximate solution [10].
Polynomial systems of ordinary differential equations are also considered in problems of observability, controllability, stabilization, and regulation of controlled systems [11,12,13,14,15]. Topical problems in the analysis of polynomial systems include the stability and qualitative analysis of their solutions [16,17,18,19,20]. Special approaches and methods are being developed for solving polynomial systems of differential equations [21,22].
In connection with the above, it seems relevant to develop specialized mathematical and algorithmic support for the effective solution of classes of polynomial optimal control problems. Such software can serve as a tool for automating research and as the basis for mobile expert automated decision-making systems with intelligent support that do not require time-consuming experimental tuning of optimization methods to a specific problem of the class under consideration.
A well-known approach to solving some classes of polynomial optimal control problems is partial discretization with respect to the controlled variables, with a reduction to finite-dimensional quadratic programming problems [23,24]. Another specialized approach focuses on the class of controlled systems that are linear in the state with a quadratic optimality criterion and no constraints, for which nonlocal control improvement methods were proposed in [25]. These methods are based on special formulas for the increment of the objective functional that contain no residual expansion terms. An improvement of the control is obtained by solving two Cauchy problems. This feature of the methods is an essential factor in the efficiency of solving optimal control problems, which is estimated by the total number of solved Cauchy problems. A further approach was developed for optimal control problems that are polynomial in the state and have no constraints, for which nonlocal control improvement methods generalizing the methods of [25] were constructed in [26,27]. These methods are also based on non-standard increment formulas for the objective functional without residual expansion terms, for which special modifications of the standard conjugate system were developed. An improvement of the control is obtained by solving a boundary value problem that is much simpler than the well-known boundary value problem of the maximum principle. In the class of optimal control problems linear in the state, the solution of such a boundary value problem reduces to solving two Cauchy problems, and the considered methods become equivalent to the methods of [25]. In the general polynomial case, iterative algorithms based on the well-known perturbation method were developed to solve the above boundary value problem for improving the control.
In this paper, for the considered class of polynomial optimal control problems with constraints, we construct conditions for improving the control in the form of a special fixed-point problem in the control space. To solve problems of the class under consideration, iterative algorithms are proposed based on the well-known theory and methods of fixed points.

2. Polynomial Optimal Control Problem

We consider a class of polynomial in-state and linear in-control problems of optimal control with one terminal constraint:
$$\dot{x}(t) = A(x(t), t)\,u(t) + b(x(t), t), \qquad x(t_0) = x^0, \qquad u(t) \in U, \qquad t \in T = [t_0, t_1],$$
$$\Phi_0(u) = \langle c, x(t_1) \rangle \to \inf_{u \in V},$$
$$\Phi_1(u) = x_1(t_1) - x_1^1 = 0,$$
where $x(t) = (x_1(t), x_2(t), \ldots, x_n(t))$ is the state vector and $u(t) = (u_1(t), u_2(t), \ldots, u_r(t))$ is the control vector. The interval $T$ is fixed. The initial state $x^0 \in \mathbb{R}^n$, the value $x_1^1 \in \mathbb{R}$, and the vector $c = (c_1, c_2, \ldots, c_n)$ are given, while $c_1 = 0$. The matrix function $A(x, t)$ and the vector function $b(x, t)$ are polynomial in $x$ of degree $l \geq 1$ and continuous in $t$ on the set $\mathbb{R}^n \times T$. The set of control values $U \subset \mathbb{R}^r$ is compact and convex. The set of available controls $V$ is considered in the space $PC(T)$ of piecewise continuous functions on the interval $T$:
$$V = \{\, u \in PC(T) : u(t) \in U,\ t \in T \,\}.$$
For the scalar product of vectors, the standard notation $\langle \cdot , \cdot \rangle$ is used.
Under the considered conditions for setting the problem (1)–(3), the local existence of a unique solution to the Cauchy problem (1) is guaranteed for any available control. The global existence of a solution to the Cauchy problem (1) over the entire considered time interval is assumed by default. Sufficient conditions for the existence of a global solution for nonlinear Cauchy problems are known in the literature. In particular, such conditions are given in [25].
For an available control $v \in V$, we denote by $x(t, v)$, $t \in T$, the solution of the Cauchy problem (1) for $u(t) = v(t)$, $t \in T$. Denote the set of admissible controls:
$$W = \{\, u \in V : x_1(t_1, u) = x_1^1 \,\}.$$
Many polynomial optimal control problems with phase, terminal, and mixed constraints can be reduced to the form (1)–(3) using standard penalization techniques for constraint violation. In particular, this applies to the general optimal control problem that is polynomial in the state and linear in the control with functional equality constraints, in which the functionals specifying the goal and the constraints have, respectively, the form:
$$\Phi_0(u) \to \inf, \qquad \Phi_i(u) = 0, \quad i = 1, \ldots, s, \quad s \geq 1,$$
$$\Phi_i(u) = \varphi_i(x(t_1)) + \int_T \bigl( d_i(x(t), t) + \langle g_i(x(t), t), u(t) \rangle \bigr)\,dt, \quad i = 0, \ldots, s.$$
In this problem, the functions $\varphi_i(x)$, $i = 0, \ldots, s$, are polynomials of degree $l_1 \geq 1$ on $\mathbb{R}^n$, and the functions $d_i(x, t)$, $g_i(x, t)$, $i = 0, \ldots, s$, are polynomial in $x$ of degree $l_1 \geq 1$ and continuous in $t$ on $\mathbb{R}^n \times T$.
For problem (1)–(3), the Pontryagin function has the form:
$$H(p, x, u, t) = H_0(p, x, t) + \langle H_1(p, x, t), u \rangle,$$
where $p \in \mathbb{R}^n$ is the conjugate vector, $H_0(p, x, t) = \langle p, b(x, t) \rangle$, and $H_1(p, x, t) = A(x, t)^T p$.
We introduce the regular Lagrange functional with the multiplier $\lambda \in \mathbb{R}$:
$$L(u, \lambda) = \Phi_0(u) + \lambda \Phi_1(u) = \langle c, x(t_1) \rangle + \lambda \bigl( x_1(t_1) - x_1^1 \bigr).$$
Let us consider an auxiliary problem of optimal control without constraints:
$$L(u, \lambda) \to \inf_{u \in V}.$$
In accordance with [26,27], for controls $u^0 \in V$, $v \in V$, there is a formula for the increment of the Lagrange functional without remainder terms of the expansions:
$$\Delta_v L(u^0, \lambda) = L(v, \lambda) - L(u^0, \lambda) = -\int_T \bigl\langle H_1(p(t, u^0, v, \lambda), x(t, v), t),\ v(t) - u^0(t) \bigr\rangle\,dt.$$
In Formula (5), the function $p(t, u^0, v, \lambda)$, $t \in T$, is the solution of the modified conjugate system:
$$\dot{p}(t) = -\Bigl( H_x + \frac{1}{2!}\,\langle H_x, z \rangle_x + \ldots + \frac{1}{l!}\,\bigl\langle \ldots \langle H_x, z \rangle_x \ldots, z \bigr\rangle_x \Bigr), \qquad p_1(t_1) = -\lambda, \quad p_i(t_1) = -c_i, \quad i = \overline{2, n},$$
in which the partial derivatives with respect to $x$ are calculated at the argument values $x = x(t, u^0)$, $u = u^0(t)$, $z = x(t, v) - x(t, u^0)$. For the partial derivatives of the Pontryagin function with respect to $x$ and $u$, the corresponding standard notation $H_x$ and $H_u$ is used.
In a problem linear in the state and the control, the modified conjugate system (6) becomes equivalent to the standard conjugate system:
$$\dot{\psi}(t) = -H_x, \qquad \psi_1(t_1) = -\lambda, \quad \psi_i(t_1) = -c_i, \quad i = \overline{2, n}.$$
Let $\psi(t, u^0, \lambda)$, $t \in T$, be the solution of the Cauchy problem (7) for $x = x(t, u^0)$, $u = u^0(t)$. It is obvious that $p(t, u^0, u^0, \lambda) = \psi(t, u^0, \lambda)$, $t \in T$.
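Numerically, the standard and modified conjugate systems are Cauchy problems posed backward in time from $t_1$. The following minimal sketch (an illustration assuming a SciPy-based implementation; it is not the authors' code) shows how such a backward sweep can be performed; `rhs_p` is a placeholder for the right-hand side of system (6) or (7) with the trajectories $x(t, u^0)$ and $x(t, v)$ already substituted.

```python
# Backward integration of a conjugate system with SciPy: solve_ivp accepts a
# decreasing time span, so the terminal condition p(t1) is passed as the
# "initial" value.  rhs_p(t, p) is a user-supplied placeholder.
from scipy.integrate import solve_ivp

def solve_conjugate(rhs_p, p_terminal, t0, t1):
    """Return a callable p(t) on [t0, t1] given the terminal value p(t1)."""
    sol = solve_ivp(rhs_p, (t1, t0), p_terminal,
                    dense_output=True, rtol=1e-8, atol=1e-10)
    return sol.sol  # interpolant, valid for t in [t0, t1]
```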

3. Conditions and Method for Improving Control

For the control $u^0 \in V$ and a given parameter $\alpha > 0$, consider the auxiliary vector function:
$$u_\alpha(p, x, t) = P_U\bigl( u^0(t) + \alpha H_1(p, x, t) \bigr), \qquad p \in \mathbb{R}^n, \quad x \in \mathbb{R}^n, \quad t \in T,$$
where $P_U$ is the operator of projection onto the set $U$ in the Euclidean norm.
We will assume that the problem of projection onto the set U admits an analytical solution.
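For the box-shaped control sets used in the examples below, the Euclidean projection reduces to component-wise clipping. A minimal sketch (illustrative names, assuming $U$ is a box; not the authors' code):

```python
import numpy as np

def u_alpha(u0_t, H1_t, alpha, lo, hi):
    """u_alpha(p, x, t) = P_U(u^0(t) + alpha * H_1(p, x, t)) for a box set U = [lo, hi]."""
    return np.clip(u0_t + alpha * H1_t, lo, hi)

# Example: scalar control with U = [0, 0.5]
print(u_alpha(np.array([0.3]), np.array([1.2]), 0.5, 0.0, 0.5))  # -> [0.5]
```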
In accordance with the well-known property of the projection operator, we have the inequality:
$$\int_T \bigl\langle H_1(p, x, t),\ u_\alpha(p, x, t) - u^0(t) \bigr\rangle\,dt \geq \frac{1}{\alpha} \int_T \| u_\alpha(p, x, t) - u^0(t) \|^2\,dt.$$
For the Euclidean norm of a vector, the standard notation $\| \cdot \|$ is used.
For the auxiliary problem (4), the known necessary optimality condition (Pontryagin's maximum principle) for a control $u \in V$ can be represented, using the function $u_\alpha$, in the following form:
$$u(t) = u_\alpha\bigl( \psi(t, u, \lambda), x(t, u), t \bigr), \quad t \in T.$$
This condition is equivalent to the well-known condition of the maximum principle in the non-degenerate problem (1)–(3) for a control $u \in W$ with a certain multiplier $\lambda \in \mathbb{R}$. For convenience, controls that satisfy the condition of the maximum principle are called extremal controls.
Let the control $v \in V$ be a solution of the following system of equations:
$$v(t) = u_\alpha\bigl( p(t, u^0, v, \lambda), x(t, v), t \bigr), \quad t \in T, \quad \lambda \in \mathbb{R}, \qquad \Phi_1(v) = x_1(t_1, v) - x_1^1 = 0.$$
It is obvious that $v \in W$. From inequality (8) and the increment formula (5), we obtain an estimate for the increment of the Lagrange functional:
$$\Delta_v L(u^0, \lambda) \leq -\frac{1}{\alpha} \int_T \| v(t) - u^0(t) \|^2\,dt.$$
If $u^0 \in W$, then on the controls $u^0$, $v$ the Lagrange functional coincides with the objective functional. Then, by virtue of estimate (10), the objective functional $\Phi_0$ is improved with the estimate:
$$\Delta_v \Phi_0(u^0) = \Phi_0(v) - \Phi_0(u^0) \leq -\frac{1}{\alpha} \int_T \| v(t) - u^0(t) \|^2\,dt.$$
For an extremal control $u^0 \in W$ in the non-degenerate problem (1)–(3), system (9) has the obvious solution $v = u^0$. Thus, if system (9) for an extremal control $u^0 \in W$ has a non-unique solution, then the extremal control $u^0 \in W$ can be strictly improved with estimate (11).
Based on these properties, we obtain the following assertions.
Theorem 1
(maximum principle). Let the control $u^0 \in W$ be optimal in the non-degenerate problem (1)–(3). Then $u^0 \in W$ is a solution to system (9) for some $\alpha > 0$.
Theorem 2
(strengthened necessary optimality condition). Let the control $u^0 \in W$ be optimal in the non-degenerate problem (1)–(3). Then, for all $\alpha > 0$, the control $u^0 \in W$ is the only solution to system (9).
Thus, system (9) allows us to formulate a new necessary optimality condition in the non-degenerate problem (1)–(3), strengthened in comparison with the well-known maximum principle.
Theorem 3.
System (9) is equivalent to the following boundary value problem:
$$\dot{x}(t) = A(x(t), t)\,u_\alpha(p(t), x(t), t) + b(x(t), t), \qquad x(t_0) = x^0, \quad x_1(t_1) = x_1^1,$$
$$\dot{p}(t) = -\Bigl( H_x + \frac{1}{2!}\,\langle H_x, z \rangle_x + \ldots + \frac{1}{l!}\,\bigl\langle \ldots \langle H_x, z \rangle_x \ldots, z \bigr\rangle_x \Bigr), \qquad p_1(t_1) = -\lambda, \quad p_i(t_1) = -c_i, \quad i = \overline{2, n},$$
in which the partial derivatives with respect to $x$ are calculated at the argument values $x = x(t, u^0)$, $u = u^0(t)$, $z = x(t) - x(t, u^0)$.
Proof. 
If the control $v \in V$ with the corresponding multiplier $\lambda \in \mathbb{R}$ is a solution to system (9), then the pair of functions $(x(t, v), p(t, u^0, v, \lambda))$, $t \in T$, with this multiplier is a solution to the indicated boundary value problem. Conversely, if a pair of functions $(x(t), p(t))$, $t \in T$, with the corresponding multiplier $\lambda \in \mathbb{R}$ is a solution to the indicated boundary value problem, then the control $v(t) = u_\alpha(p(t), x(t), t)$, $t \in T$, with this multiplier is a solution to system (9). □
Corollary. For an extremal control $u^0 \in W$, the boundary value problem is always solvable.
Proof. 
The control $v = u^0$ is a solution to system (9) for some $\lambda \in \mathbb{R}$. Then it follows from Theorem 3 that the pair of functions $(x(t, u^0), \psi(t, u^0, \lambda))$, $t \in T$, with this multiplier is a solution to the indicated boundary value problem. □
Let us consider the sequence of controls $u^s \in V$, $s \geq 0$, where the control $u^s \in W$ for $s \geq 1$ is the solution of the corresponding system (9) in which the control $u^{s-1}$ is considered instead of the control $u^0$. The sequence $\Phi_0(u^s)$ is non-increasing: $\Phi_0(u^{s+1}) \leq \Phi_0(u^s)$. The value $\delta(u^s) = \Phi_0(u^s) - \Phi_0(u^{s+1}) \geq 0$ for $s \geq 1$ characterizes the residual of the maximum principle on the control $u^s \in W$ in the non-degenerate problem (1)–(3). If $\delta(u^s) = 0$ for $s \geq 1$, then, on the basis of estimate (11), we obtain that the control $u^s(t)$, $t \in T$, satisfies the condition of the maximum principle in the non-degenerate problem (1)–(3).
Thus, the following convergence assertion can be easily obtained.
Theorem 4.
Let the functional $\Phi_0(u)$ in the non-degenerate problem (1)–(3) be bounded from below on the set $W$. Then the sequence $u^s \in V$, $s \geq 0$, converges in the sense of the residual of the maximum principle:
$$\delta(u^s) \to 0, \quad s \to \infty.$$
The system of control improvement conditions (9) is considered as a special operator fixed-point problem with an additional algebraic equation in the space of available controls.
For a given $\alpha > 0$, to solve system (9) the following iterative process is proposed for $k \geq 0$, with initial control $v^0 \in V$ at $k = 0$:
$$v^{k+1}(t) = u_\alpha\bigl( p(t, u^0, v^k, \lambda), x(t, v^{k+1}), t \bigr), \quad t \in T, \quad \lambda \in \mathbb{R}, \qquad \Phi_1(v^{k+1}) = x_1(t_1, v^{k+1}) - x_1^1 = 0.$$
The initial control $v^0 \in V$ may not be an admissible control. At each iteration of the process, a special Cauchy problem is solved:
$$\dot{x}(t) = A(x(t), t)\,u_\alpha\bigl( p(t, u^0, v^k, \lambda), x(t), t \bigr) + b(x(t), t), \qquad x(t_0) = x^0.$$
Then an auxiliary control is constructed according to the rule:
$$v^{k+1}(t) = u_\alpha\bigl( p(t, u^0, v^k, \lambda), x(t), t \bigr), \quad t \in T, \quad \lambda \in \mathbb{R}.$$
By construction, we obtain:
$$x(t) = x(t, v^{k+1}).$$
As a result, an auxiliary control is determined that satisfies the first equation of system (12) and depends on $\lambda \in \mathbb{R}$. Hence, system (12) reduces to an algebraic equation with respect to the unknown Lagrange multiplier $\lambda \in \mathbb{R}$. It is assumed that a solution to this equation exists.
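A minimal sketch of one iteration of process (12) is given below, under the assumption that the modified conjugate solution $p(t, u^0, v^k, \lambda)$ has already been computed by a backward sweep and is available as a callable `p_of(t, lam)`. The callables `A`, `b`, `u_alpha` and the bracket `[lam_lo, lam_hi]` are placeholders, and a sign-changing constraint residual is assumed so that a bracketing root solver can be used (the paper itself uses a derivative-free simplex search for the multiplier). This is not the authors' code.

```python
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def forward_state(lam, p_of, u_alpha, A, b, x0, t_span):
    """Integrate the special Cauchy problem x' = A(x,t) u_alpha(p(t,lam), x, t) + b(x,t)."""
    def rhs(t, x):
        u = u_alpha(p_of(t, lam), x, t)
        return A(x, t) @ u + b(x, t)   # A(x, t) is assumed to return an (n x r) NumPy array
    return solve_ivp(rhs, t_span, x0, rtol=1e-8, atol=1e-10)

def solve_multiplier(p_of, u_alpha, A, b, x0, t_span, x1_target, lam_lo, lam_hi):
    """Pick lambda so that the terminal constraint x_1(t_1) = x_1^1 holds."""
    def residual(lam):
        sol = forward_state(lam, p_of, u_alpha, A, b, x0, t_span)
        return sol.y[0, -1] - x1_target        # Phi_1 of the candidate control
    return brentq(residual, lam_lo, lam_hi)    # algebraic equation in lambda
```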
Thus, the main feature of the proposed iterative process is that the constraints of the optimal control problem are satisfied at each iteration for $k \geq 1$.
The convergence of process (12) is controlled by the choice of the projecting parameter α > 0 and can be proven under certain conditions similarly to [26] for sufficiently small α > 0 based on the well-known principle of contraction mappings.
The iterative process (12) is applied until the first improvement of the control $u^0 \in V$ is obtained. Next, for the resulting control, a new fixed-point problem is constructed. The calculation of successive fixed-point problems ends when no further improvement of the control with respect to the objective functional is achieved. Thus, an iterative method of fixed points is formed for constructing a relaxation sequence of admissible controls, i.e., controls satisfying the constraints of the problem. Satisfaction of the constraints at each iteration of the successive control approximations is achieved by choosing the Lagrange multiplier. This makes it possible to effectively solve the fundamental problem of choosing the Lagrange multiplier and to narrow the search for improving controls to the space of admissible controls in optimal control problems with constraints. A sketch of this outer loop is given below.
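The following minimal sketch illustrates the outer loop just described; `solve_fixed_point_problem` is a placeholder that is assumed to run iterations (12) built around the current control and to return either the first strictly improving control or the current control if the iterations stagnate. It is not the authors' code.

```python
def method_of_fixed_points(Phi0, solve_fixed_point_problem, u_initial, max_problems=100):
    """Outer loop: build and solve successive fixed-point problems (9)."""
    u = u_initial
    for _ in range(max_problems):
        v = solve_fixed_point_problem(u)   # iterations (12) around the control u
        if Phi0(v) < Phi0(u):              # first strict improvement found
            u = v                          # rebuild the next problem around v
        else:
            break                          # no improvement: stop
    return u
```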
In optimal control problems, the convergence of relaxation sequences of controls is often studied in terms of the residual of the maximum principle, which can be defined in different ways [25]. One way of defining this residual was indicated above. Under certain conditions, it is possible to prove the convergence, in terms of the residual of the maximum principle in the non-degenerate problem (1)–(3), of the relaxation sequence of admissible controls generated by the proposed method of fixed points for a sufficiently small $\alpha > 0$.

4. Examples

Example 1.
The comparative efficiency of the proposed fixed-point method is illustrated by a model example of the problem of optimal control of the immune process without delay. In accordance with [7,9], the model of the controlled system in dimensionless form can be represented as:
$$\dot{x}_1 = h_1 x_1 - h_2 x_1 x_2 - u x_1, \qquad u(t) \in [0, u_{\max}], \quad t \in T = [0, t_1],$$
$$\dot{x}_2 = h_4 (x_3 - x_2) - h_8 x_1 x_2, \qquad \dot{x}_3 = h_3 x_1 x_2 - h_5 (x_3 - 1), \qquad \dot{x}_4 = h_6 x_1 - h_7 x_4,$$
$$x_1(0) = x_1^0 > 0, \quad x_2(0) = 1, \quad x_3(0) = 1, \quad x_4(0) = 0.$$
The variable $x_1 = x_1(t)$ characterizes the infectious pathogen (virus), and the variables $x_2 = x_2(t)$, $x_3 = x_3(t)$ characterize the organism's defenses (antibodies and plasma cells, respectively). The variable $x_4 = x_4(t)$ characterizes the degree of damage to the organism, and $h_i > 0$, $i = \overline{1, 8}$, are given constant coefficients. The initial conditions simulate the situation of infection of the organism with a small initial dose of the virus $x_1^0$ at the initial moment $t = 0$. The control $u(t)$, $t \in T$, characterizes the intensity of the introduction of immunoglobulins that neutralize the virus.
The control $u(t) \equiv 0$, $t \in T$, corresponds to the case of no treatment by means of immunoglobulin injections. This model situation corresponds to an acute illness with recovery for the following values of the coefficients [7]:
$$h_1 = 2, \quad h_2 = 0.8, \quad h_3 = 10^4, \quad h_4 = 0.17, \quad h_5 = 0.5,$$
$$h_6 = 10, \quad h_7 = 0.12, \quad h_8 = 8, \quad x_1^0 = 10^{-6}.$$
The unit of time corresponds to one day.
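As an illustration, the uncontrolled scenario ($u(t) \equiv 0$) of system (13) with the coefficients listed above can be reproduced with a standard ODE solver. The sketch below uses the sign conventions of system (13) as written here and is not the authors' code.

```python
from scipy.integrate import solve_ivp, trapezoid

# coefficients of the dimensionless model (13) and the initial virus dose
h1, h2, h3, h4, h5, h6, h7, h8 = 2.0, 0.8, 1e4, 0.17, 0.5, 10.0, 0.12, 8.0
x1_0, t1 = 1e-6, 20.0

def rhs(t, x, u=0.0):
    x1, x2, x3, x4 = x
    return [h1 * x1 - h2 * x1 * x2 - u * x1,   # virus
            h4 * (x3 - x2) - h8 * x1 * x2,     # antibodies
            h3 * x1 * x2 - h5 * (x3 - 1.0),    # plasma cells
            h6 * x1 - h7 * x4]                 # organism damage

sol = solve_ivp(rhs, (0.0, t1), [x1_0, 1.0, 1.0, 0.0],
                method="LSODA", rtol=1e-8, atol=1e-12)
print("x1(t1) =", sol.y[0, -1])
print("integral of x4 over T ~", trapezoid(sol.y[3], sol.t))
```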
The purpose of the control is to minimize the virus level by the end of the treatment interval, subject to a limitation on the organism damage indicator over the given time interval:
$$\Phi_0(u) = x_1(t_1) \to \inf,$$
$$\int_T x_4(t)\,dt \leq m, \quad m > 0.$$
Constraint (14) is important when modeling the acute form of a viral disease, when the consequences of damage to the organism cannot be neglected.
The maximum intensity of the control action was set to $u_{\max} = 0.5$. The time interval $T$ was set equal to 20 days: $t_1 = 20$. The maximum admissible damage to the organism was chosen equal to $m = 0.1$.
We introduce an additional variable according to the rule:
$$\dot{x}_5 = x_4, \qquad x_5(0) = 0.$$
Then the integral constraint (14) reduces to the terminal constraint:
$$x_5(t_1) \leq m, \quad m > 0.$$
In numerical calculations of the problem with constraint (14), it was established that inequality (14) is active at the solution. As a result, the following problem of the considered class was studied:
$$\Phi_0(u) = x_1(t_1) \to \inf,$$
$$\Phi_1(u) = x_5(t_1) - m = 0, \quad m > 0.$$
The Pontryagin function in problem (13), (15)–(17) is represented as:
$$H(p, x, u, t) = H_0(p, x, t) + H_1(p, x, t)\,u,$$
$$H_0(p, x, t) = p_1 (h_1 x_1 - h_2 x_1 x_2) + p_2 \bigl( h_4 (x_3 - x_2) - h_8 x_1 x_2 \bigr) + p_3 \bigl( h_3 x_1 x_2 - h_5 (x_3 - 1) \bigr) + p_4 (h_6 x_1 - h_7 x_4) + p_5 x_4,$$
$$H_1(p, x, t) = -p_1 x_1.$$
To solve problem (13), (15)–(17), we used the proposed method of fixed points (M2) based on the iterative process (12) and the well-known method of penalty functionals (M1) with the objective functional of the following form:
$$\Phi(u) = \Phi_0(u) + \gamma_s \Phi_1^2(u) \to \inf,$$
where $\gamma_s > 0$, $s \geq 0$, is a given penalty parameter.
Auxiliary penalty problems (13), (15) and (18) were calculated using the well-known conditional gradient method [28]. As the criterion for stopping the calculation of the penalty problem for a fixed value of the penalty parameter $\gamma_s > 0$, the following condition was chosen:
$$| \Phi(u^{k+1}) - \Phi(u^k) | < \varepsilon_1 | \Phi(u^k) |,$$
where $k > 0$ is the iteration index of the conditional gradient method and $\varepsilon_1 = 10^{-5}$.
After reaching stopping criterion (19), the fulfillment of the terminal constraint was checked:
$$| x_5(t_1, u^{k+1}) - m | < \varepsilon_2,$$
where $\varepsilon_2 = 10^{-4}$ is the specified accuracy.
If condition (20) was not satisfied, then a new penalty problem was calculated with the penalty parameter:
$$\gamma_{s+1} = \beta \gamma_s, \qquad \beta = 10.$$
The initial value of the penalty parameter was set to $\gamma_0 = 10^{-10}$.
When calculating a new penalty problem with the conditional gradient method, the control computed in the previous penalty problem was chosen as the initial approximation. The calculation by method M1 ended when conditions (19) and (20) were satisfied simultaneously. A sketch of this outer penalty loop is given below.
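The following minimal sketch illustrates the outer loop of M1; `conditional_gradient` is a placeholder for an inner solver that minimizes the penalized functional until criterion (19) holds, and `Phi0`, `Phi1` are placeholders for evaluations of the functionals on a control. It is not the authors' code.

```python
def penalty_method(Phi0, Phi1, conditional_gradient, u_start,
                   gamma0=1e-10, beta=10.0, eps1=1e-5, eps2=1e-4, max_outer=50):
    """Outer loop of M1: increase the penalty parameter until check (20) holds."""
    u, gamma = u_start, gamma0
    for _ in range(max_outer):
        Phi = lambda w, g=gamma: Phi0(w) + g * Phi1(w) ** 2   # penalized objective (18)
        u = conditional_gradient(Phi, u, eps1)                # inner solve until (19), warm start
        if abs(Phi1(u)) < eps2:                               # terminal-constraint check (20)
            break
        gamma *= beta                                         # gamma_{s+1} = 10 * gamma_s
    return u, gamma
```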
To implement the proposed method M2, we consider the auxiliary regular Lagrange functional with a multiplier $\lambda \in \mathbb{R}$:
$$L(u, \lambda) = x_1(t_1) + \lambda \bigl( x_5(t_1) - m \bigr).$$
The modified conjugate system (6) takes the form:
$$\dot{p}_1 = -h_1 p_1 + h_2 x_2 p_1 + p_1 u + h_8 x_2 p_2 - h_3 x_2 p_3 - h_6 p_4 + \tfrac{1}{2} (h_2 p_1 + h_8 p_2 - h_3 p_3) z_2,$$
$$\dot{p}_2 = h_2 x_1 p_1 + h_4 p_2 + h_8 x_1 p_2 - h_3 x_1 p_3 + \tfrac{1}{2} (h_2 p_1 + h_8 p_2 - h_3 p_3) z_1,$$
$$\dot{p}_3 = -h_4 p_2 + h_5 p_3, \qquad \dot{p}_4 = h_7 p_4 - p_5, \qquad \dot{p}_5 = 0,$$
$$p_1(t_1) = -1, \quad p_2(t_1) = p_3(t_1) = p_4(t_1) = 0, \quad p_5(t_1) = -\lambda.$$
For available controls $u^0$, $v$, the function $p(t, u^0, v, \lambda)$, $t \in T$, is the solution of the modified conjugate system for $u = u^0(t)$, $x_i = x_i(t, u^0)$, $z_i = x_i(t, v) - x_i(t, u^0)$, $i = 1, 2$.
The auxiliary vector function $u_\alpha(p, x, t)$ based on the projection operation is determined by the formula:
$$u_\alpha(p, x, t) = \begin{cases} u_{\max}, & u^0(t) - \alpha p_1 x_1 > u_{\max}, \\ u^0(t) - \alpha p_1 x_1, & 0 \leq u^0(t) - \alpha p_1 x_1 \leq u_{\max}, \\ 0, & u^0(t) - \alpha p_1 x_1 < 0. \end{cases}$$
The fixed-point problem (9) for improving the available control $u^0$ takes the form:
$$v(t) = u_\alpha\bigl( p(t, u^0, v, \lambda), x(t, v), t \bigr), \quad t \in T, \quad \lambda \in \mathbb{R}, \qquad \Phi_1(v) = x_5(t_1, v) - m = 0.$$
For a given $\alpha > 0$, iterative process (12) for $k \geq 0$ with an initial available control $v^0$ at $k = 0$ has the form:
$$v^{k+1}(t) = u_\alpha\bigl( p(t, u^0, v^k, \lambda), x(t, v^{k+1}), t \bigr), \quad t \in T, \quad \lambda \in \mathbb{R}, \qquad \Phi_1(v^{k+1}) = x_5(t_1, v^{k+1}) - m = 0.$$
At each iteration of process (22), a special Cauchy problem is solved:
$$\dot{x}_1(t) = h_1 x_1(t) - h_2 x_1(t) x_2(t) - u_\alpha\bigl( p(t, u^0, v^k, \lambda), x(t), t \bigr)\,x_1(t),$$
$$\dot{x}_2(t) = h_4 \bigl( x_3(t) - x_2(t) \bigr) - h_8 x_1(t) x_2(t), \qquad \dot{x}_3(t) = h_3 x_1(t) x_2(t) - h_5 \bigl( x_3(t) - 1 \bigr),$$
$$\dot{x}_4(t) = h_6 x_1(t) - h_7 x_4(t), \qquad \dot{x}_5(t) = x_4(t),$$
$$x_1(0) = x_1^0 > 0, \quad x_2(0) = 1, \quad x_3(0) = 1, \quad x_4(0) = 0, \quad x_5(0) = 0,$$
with the simultaneous calculation of the auxiliary control:
$$v^{k+1}(t) = u_\alpha\bigl( p(t, u^0, v^k, \lambda), x(t), t \bigr), \quad t \in T, \quad \lambda \in \mathbb{R}.$$
The resulting control, which depends on the Lagrange multiplier, satisfies the first equation of system (22).
To solve the corresponding algebraic equation of system (22) with respect to the Lagrange multiplier, the dumpol procedure from the Fortran IMSL library [29], which implements the deformable polyhedron (Nelder–Mead) method, was used. The accuracy of solving the equation was chosen equal to $10^{-4}$, which corresponds to the accuracy of criterion (20).
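SciPy's Nelder–Mead simplex routine can play the same role as dumpol. The sketch below (illustrative, not the authors' code) searches for the multiplier by minimizing the absolute value of the constraint residual; `constraint_residual` is a placeholder that integrates the special Cauchy problem for a given $\lambda$ and returns $x_5(t_1, v^{k+1}) - m$.

```python
import numpy as np
from scipy.optimize import minimize

def find_multiplier(constraint_residual, lam_init=0.0, tol=1e-4):
    """Derivative-free (Nelder-Mead) search for lambda with |Phi_1| below tol."""
    res = minimize(lambda lam: abs(constraint_residual(float(lam[0]))),
                   x0=np.array([lam_init]), method="Nelder-Mead",
                   options={"xatol": tol, "fatol": tol})
    return float(res.x[0])
```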
For a given $\alpha > 0$, iterative process (22) was carried out until the first fulfillment of the condition:
$$\Phi_0(u^{k+1}) < \Phi_0(u^k).$$
In this case, to improve the resulting control, a new problem (21) and algorithm (22) were constructed, with the computed control chosen as the initial approximation at $k = 0$ for iterative process (22).
Thus, starting from the second computational improvement problem (21), the sequence of computational controls forms a relaxation sequence of controls that satisfy constraint (17) with a given accuracy.
If a strict improvement of the control was not achieved in the course of iterations (22), then the numerical calculation of fixed-point problem (21) was carried out until the condition
$$| \Phi_0(u^{k+1}) - \Phi_0(u^k) | < \varepsilon_3 | \Phi_0(u^k) |,$$
where $\varepsilon_3 = 10^{-5}$, was satisfied. At this point, the construction and calculation of successive fixed-point problems for improving the control ended.
As the starting initial approximation in both methods M1 and M2, the control $u(t) \equiv 0$, $t \in T$, was chosen.
The comparative results of the calculations are shown in Table 1, in which $\Phi_0$ is the computed value of the objective functional of the problem, $|\Phi_1|$ is the absolute value of the computed functional corresponding to constraint (17), and $N$ is the total number of solved Cauchy problems. For method M1, the note gives the value of the penalty parameter that provided the specified accuracy (20) of the terminal constraint. For the proposed method M2, the note indicates the specified value of the projection parameter $\alpha > 0$ that ensured the convergence of iterative process (22).
The computational control in methods M1 and M2 is a piecewise constant function (resolved to within one day) that switches at $t = 5$ from the maximum value $u_{\max} = 0.5$ to the minimum value of zero, with a reverse switching at $t = 14$.
Figure 1, Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6 show approximate graphs of the computational control and the corresponding phase variables with a time discretization step of 0.1.
According to the optimal strategy for the treatment of an acute disease that takes into account the limitation on the severity of the disease, immunoglobulins should be administered with maximum intensity at the initial stage of the disease in order to reduce its severity while the organism's immune response is still weak. Then, as the organism's defenses are formed, the administration of the drug should be stopped so that the immune response develops at full strength as feedback on the pathogen. At the last stage, corresponding to the recovery of the organism, immunoglobulins must be administered again with maximum intensity in order to minimize the virus level by the end of the specified treatment interval. A similar strategy for treating the disease by the exacerbation method was previously proposed and tested in computational model experiments in [7]. The formulation of optimal control problems for the treatment of a disease makes it possible to substantiate and effectively regulate treatment by the exacerbation method.
Within the framework of this model example, the proposed method of fixed points provides a significant reduction in computational complexity, estimated by the total number of solved Cauchy problems, compared to the standard method of penalty functionals.
Example 2.
The comparative efficiency of the proposed fixed-point method is illustrated with the example of the well-known model problem of satellite rotation stabilization [30,31], which is considered in the following formulation:
$$\dot{x}_1 = \tfrac{1}{3} x_2 x_3 + 100 u_1, \qquad t \in T = [0, t_1], \quad t_1 = 0.1,$$
$$\dot{x}_2 = -x_1 x_3 + 25 u_2, \qquad \dot{x}_3 = -x_1 x_2 + 100 u_3,$$
$$x_1(0) = 200, \quad x_2(0) = 30, \quad x_3(0) = 40,$$
$$|u_1(t)| \leq 40, \quad |u_2(t)| \leq 20, \quad |u_3(t)| \leq 40, \quad t \in T,$$
$$\Phi_0(u) = \tfrac{1}{2} \bigl( x_2^2(t_1) + x_3^2(t_1) \bigr) \to \inf,$$
$$\Phi_1(u) = x_1(t_1) = 0.$$
The equations of the system describe the rotation dynamics of a satellite equipped with three jet engines. The controls characterize fuel consumption. The minimized functional, together with the terminal constraint, reflects the goal of reaching a state with no satellite rotation (stabilization).
The Pontryagin function in problem (23)–(25) is represented as:
$$H(p, x, u, t) = H_0(p, x, t) + H_{11}(p, x, t)\,u_1 + H_{12}(p, x, t)\,u_2 + H_{13}(p, x, t)\,u_3,$$
$$H_0(p, x, t) = \tfrac{1}{3} p_1 x_2 x_3 - p_2 x_1 x_3 - p_3 x_1 x_2,$$
$$H_{11}(p, x, t) = 100 p_1, \qquad H_{12}(p, x, t) = 25 p_2, \qquad H_{13}(p, x, t) = 100 p_3.$$
To solve problem (23)–(25), we used the proposed method of fixed points (M3) based on the iterative process (12) and the known method of penalty functionals with the objective functional in the following form:
$$\Phi(u) = \Phi_0(u) + \gamma_s \Phi_1^2(u) \to \inf,$$
where $\gamma_s > 0$, $s \geq 0$, is a given penalty parameter.
Auxiliary penalty problems (23) and (26) were calculated using the well-known conditional gradient method (M1) and gradient projection method (M2) [28]. As a criterion for stopping the calculation of the penalty problem for a fixed value of the penalty parameter, the following condition was chosen:
$$| \Phi(u^{k+1}) - \Phi(u^k) | < \varepsilon_1 | \Phi(u^k) |,$$
where $k > 0$ is the iteration index of the conditional gradient and gradient projection methods and $\varepsilon_1 = 10^{-5}$.
After reaching the stopping criterion (27), the fulfillment of the terminal constraint was checked:
$$| x_1(t_1, u^{k+1}) | < \varepsilon_2,$$
where $\varepsilon_2 = 10^{-6}$ is the specified accuracy.
If condition (28) was not satisfied, then a new penalty problem was calculated with the penalty parameter:
$$\gamma_{s+1} = \beta \gamma_s, \qquad \beta = 10.$$
The initial value of the penalty parameter was set to $\gamma_0 = 0.5$.
When calculating a new penalty problem using the M1 and M2 methods, the resulting computational control in the previous penalty problem was chosen as the initial approximation. The calculation using methods M1 and M2 ended when conditions (27) and (28) were simultaneously satisfied.
To implement the proposed M3 method, an auxiliary regular Lagrange functional with the multiplier $\lambda \in \mathbb{R}$ was considered:
$$L(u, \lambda) = \tfrac{1}{2} \bigl( x_2^2(t_1) + x_3^2(t_1) \bigr) + \lambda x_1(t_1).$$
First, problem (23)–(25) was reduced to the form (1)–(3) by introducing an auxiliary variable $x_4(t) = \tfrac{1}{2} \bigl( x_2^2(t) + x_3^2(t) \bigr)$. After compiling the modified conjugate system and excluding the corresponding conjugate variable $p_4(t)$ from it, the modified conjugate system takes the form:
$$\dot{p}_1 = p_2 x_3 + p_3 x_2 + \tfrac{1}{2} (p_3 z_2 + p_2 z_3),$$
$$\dot{p}_2 = -\tfrac{1}{3} p_1 x_3 + p_3 x_1 + \tfrac{1}{2} \bigl( p_3 z_1 - \tfrac{1}{3} p_1 z_3 \bigr),$$
$$\dot{p}_3 = -\tfrac{1}{3} p_1 x_2 + p_2 x_1 + \tfrac{1}{2} \bigl( p_2 z_1 - \tfrac{1}{3} p_1 z_2 \bigr),$$
$$p_1(t_1) = -\lambda, \quad p_2(t_1) = -x_2(t_1) - \tfrac{1}{2} z_2(t_1), \quad p_3(t_1) = -x_3(t_1) - \tfrac{1}{2} z_3(t_1).$$
For available controls $u^0$, $v$, the function $p(t, u^0, v, \lambda)$, $t \in T$, is the solution of the modified conjugate system for $x_i = x_i(t, u^0)$, $z_i = x_i(t, v) - x_i(t, u^0)$, $i = 1, 2, 3$.
The auxiliary vector function $u_\alpha(p, x, t)$ based on the projection operation is determined by the formula:
$$u_\alpha(p, x, t) = \bigl( u_{1\alpha}(p, x, t),\ u_{2\alpha}(p, x, t),\ u_{3\alpha}(p, x, t) \bigr),$$
where
$$u_{1\alpha}(p, x, t) = \begin{cases} 40, & u_1^0(t) + 100 \alpha p_1 > 40, \\ u_1^0(t) + 100 \alpha p_1, & -40 \leq u_1^0(t) + 100 \alpha p_1 \leq 40, \\ -40, & u_1^0(t) + 100 \alpha p_1 < -40, \end{cases}$$
$$u_{2\alpha}(p, x, t) = \begin{cases} 20, & u_2^0(t) + 25 \alpha p_2 > 20, \\ u_2^0(t) + 25 \alpha p_2, & -20 \leq u_2^0(t) + 25 \alpha p_2 \leq 20, \\ -20, & u_2^0(t) + 25 \alpha p_2 < -20, \end{cases}$$
$$u_{3\alpha}(p, x, t) = \begin{cases} 40, & u_3^0(t) + 100 \alpha p_3 > 40, \\ u_3^0(t) + 100 \alpha p_3, & -40 \leq u_3^0(t) + 100 \alpha p_3 \leq 40, \\ -40, & u_3^0(t) + 100 \alpha p_3 < -40. \end{cases}$$
Fixed-point problem (9) for improving the available control $u^0$ takes the form:
$$v(t) = u_\alpha\bigl( p(t, u^0, v, \lambda), x(t, v), t \bigr), \quad t \in T, \quad \lambda \in \mathbb{R}, \qquad \Phi_1(v) = x_1(t_1, v) = 0.$$
For a given $\alpha > 0$, iterative process (12) for $k \geq 0$ with an initial available control $v^0$ at $k = 0$ has the form:
$$v^{k+1}(t) = u_\alpha\bigl( p(t, u^0, v^k, \lambda), x(t, v^{k+1}), t \bigr), \quad t \in T, \quad \lambda \in \mathbb{R}, \qquad \Phi_1(v^{k+1}) = x_1(t_1, v^{k+1}) = 0.$$
At each iteration of process (30), a special Cauchy problem is solved:
$$\dot{x}_1 = \tfrac{1}{3} x_2 x_3 + 100\,u_{1\alpha}\bigl( p(t, u^0, v^k, \lambda), x, t \bigr), \qquad t \in T = [0, t_1],$$
$$\dot{x}_2 = -x_1 x_3 + 25\,u_{2\alpha}\bigl( p(t, u^0, v^k, \lambda), x, t \bigr),$$
$$\dot{x}_3 = -x_1 x_2 + 100\,u_{3\alpha}\bigl( p(t, u^0, v^k, \lambda), x, t \bigr),$$
$$x_1(0) = 200, \quad x_2(0) = 30, \quad x_3(0) = 40,$$
with the simultaneous calculation of the auxiliary control:
$$v^{k+1}(t) = u_\alpha\bigl( p(t, u^0, v^k, \lambda), x(t), t \bigr), \quad t \in T, \quad \lambda \in \mathbb{R}.$$
The resulting control, which depends on the Lagrange multiplier, satisfies the first equation of system (30).
To solve the corresponding algebraic equation of system (30) with respect to the Lagrange multiplier, the dumpol procedure from the Fortran IMSL library [29], which implements the deformable polyhedron (Nelder–Mead) method, was used. The accuracy of solving the equation was chosen equal to $10^{-6}$, which corresponds to the accuracy of criterion (28).
For a given $\alpha > 0$, iterative process (30) was carried out until the first fulfillment of the condition:
$$\Phi_0(u^{k+1}) < \Phi_0(u^k).$$
In this case, to improve the resulting control, a new problem (29) and algorithm (30) were constructed, with the computed control chosen as the initial approximation at $k = 0$ for iterative process (30).
Thus, starting from the second computational improvement problem (29), the sequence of computational controls forms a relaxation sequence of controls that satisfy constraint (25) with a given accuracy.
If a strict improvement of the control was not achieved in the course of iterations (30), then the numerical calculation of fixed-point problem (29) was carried out until the condition
$$| \Phi_0(u^{k+1}) - \Phi_0(u^k) | < \varepsilon_3 | \Phi_0(u^k) |,$$
where $\varepsilon_3 = 10^{-5}$, was satisfied. At this point, the construction and calculation of successive fixed-point problems for improving the control ended.
As the starting initial approximation in all methods, the control $u(t) \equiv 0$, $t \in T$, was chosen.
Comparative results of the calculations are shown in Table 2, in which $\Phi_0$ is the computed value of the objective functional of the problem, $|\Phi_1|$ is the absolute value of the computed functional corresponding to constraint (25), and $N$ is the total number of solved Cauchy problems. For methods M1 and M2, the note gives the value of the penalty parameter that provided the specified accuracy of the terminal constraint (25). For the proposed method M3, the note indicates the specified value of the projection parameter $\alpha > 0$ that ensured the convergence of iterative process (30).
Within the framework of Example 2 with a multidimensional control, the proposed method of fixed points provides a significant reduction in computational complexity, estimated by the total number of solved Cauchy problems, compared to the known gradient methods based on penalty functionals.
Example 3.
This example illustrates the possibility of strictly improving a non-optimal control that satisfies the maximum principle using the proposed fixed-point method. Gradient methods do not have this capability.
$$\dot{x}(t) = u(t), \quad x(0) = 0, \quad |u(t)| \leq 20, \quad t \in T = [0, \pi],$$
$$\Phi_0(u) = -\int_0^\pi x^2(t)\,dt \to \inf, \qquad \Phi_1(u) = x(\pi) = 0.$$
The Pontryagin function has the form:
$$H = p u + x^2, \qquad H_0 = x^2, \qquad H_1 = p.$$
For a control $u^0 \in V$ and a given parameter $\alpha > 0$, the auxiliary vector function $u_\alpha(p, x, t)$ based on the projection operation takes the form:
$$u_\alpha(p, x, t) = \begin{cases} 20, & u^0(t) + \alpha p > 20, \\ u^0(t) + \alpha p, & -20 \leq u^0(t) + \alpha p \leq 20, \\ -20, & u^0(t) + \alpha p < -20. \end{cases}$$
A simple analysis of the problem, taking into account its reduction to the form (1)–(3), shows that the problem under consideration is non-degenerate and that the maximum principle condition for a control $u \in W$ with some $\lambda \in \mathbb{R}$ can be represented in the following projection form:
$$u(t) = u_\alpha\bigl( \psi(t, u, \lambda), x(t, u), t \bigr),$$
where the function $\psi(t, u, \lambda)$, $t \in T$, is the solution of the standard conjugate system:
$$\dot{\psi}(t) = -2 x(t), \qquad \psi(\pi) = -\lambda$$
for $x(t) = x(t, u)$, $t \in T$. The control $u^0(t) \equiv 0$, $t \in T$, is a non-optimal extremal control, with $x(t, u^0) \equiv 0$, $t \in T$, and $\Phi_0(u^0) = 0$.
The improvement problem for the control $u^0 \in V$ based on the regular Lagrange functional $L(u, \lambda) = \Phi_0(u) + \lambda \Phi_1(u)$ has the following form:
$$v(t) = u_\alpha\bigl( p(t, u^0, v, \lambda), x(t, v), t \bigr), \quad t \in T, \quad \lambda \in \mathbb{R}, \qquad \Phi_1(v) = x(\pi, v) = 0,$$
where the function $p(t, u^0, v, \lambda)$, $t \in T$, is the solution of the modified conjugate system:
$$\dot{p}(t) = -2 x(t) - z(t), \qquad p(\pi) = -\lambda$$
for $x(t) = x(t, u^0)$, $z(t) = x(t, v) - x(t, u^0)$, $t \in T$. From this we obtain that for the extremal control $u^0(t) \equiv 0$, $t \in T$, the function $p(t, u^0, v, \lambda)$, $t \in T$, is the solution of the modified conjugate system:
$$\dot{p}(t) = -z(t), \qquad p(\pi) = -\lambda$$
for $z(t) = x(t, v)$, $t \in T$.
System (31) for improving the extremal control $u^0(t) \equiv 0$, $t \in T$, is equivalent to the boundary value problem:
$$\dot{x}(t) = u_\alpha(p, x, t), \quad x(0) = x(\pi) = 0, \qquad \dot{p}(t) = -x(t), \quad p(\pi) = -\lambda.$$
In accordance with Theorem 3, the pair of functions $\psi(t, u^0, \lambda) \equiv 0$, $x(t) = x(t, u^0) \equiv 0$, $t \in T$, with $\lambda = 0$ is an obvious solution to this boundary value problem.
A simple analysis shows that for $\alpha = 1$ this boundary value problem has solutions of the following form with $\lambda = C$:
$$p(t) = C \cos t, \qquad x(t) = C \sin t, \qquad |C| \leq 20, \quad t \in T.$$
These solutions of the boundary value problem correspond to the following solutions of system (31):
$$v(t) = C \cos t, \quad t \in T,$$
with the corresponding values of the objective functional $\Phi_0(v) = -\frac{\pi}{2} C^2$.
Thus, system (31) at $\alpha = 1$ for the extremal control $u^0 = 0$ has a non-unique solution, and the extremal control $u^0 = 0$ is strictly improved on the other solutions of system (31) with $0 < |C| \leq 20$.
Let us show the possibility of a strict improvement of the extremal control $u^0 = 0$ using the proposed fixed-point method with the parameter $\alpha = 1$.
The iterative process for solving system (31) takes the form:
$$v^{k+1}(t) = u_\alpha\bigl( p(t, u^0, v^k, \lambda), x(t, v^{k+1}), t \bigr), \quad t \in T, \quad \lambda \in \mathbb{R}, \qquad \Phi_1(v^{k+1}) = x(\pi, v^{k+1}) = 0.$$
As an initial approximation for iterative process (32), we consider the available control $v^0(t) \equiv 6$, $t \in T$, which corresponds to the phase trajectory $x(t, v^0) = 6t$, $t \in T$. In this case, the function $p(t, u^0, v^0, \lambda)$, $t \in T$, is the solution of the modified conjugate equation:
$$\dot{p}(t) = -6t, \qquad p(\pi) = -\lambda.$$
The solution of this equation is the function:
$$p(t, u^0, v^0, \lambda) = -3t^2 - \lambda + 3\pi^2, \quad t \in T.$$
Let us assume that $| p(t, u^0, v^0, \lambda) | \leq 20$, $t \in T$. Then the corresponding Cauchy problem for the phase system takes the form:
$$\dot{x} = -3t^2 - \lambda + 3\pi^2, \qquad x(0) = 0.$$
The solution of this equation is the function:
$$x(t) = -t^3 + (3\pi^2 - \lambda)\,t, \quad t \in T.$$
The condition $x(\pi, v^1) = 0$ determines the value of the Lagrange multiplier $\bar{\lambda} = 2\pi^2$. This gives the function:
$$p(t, u^0, v^0, \bar{\lambda}) = -3t^2 + \pi^2, \quad t \in T,$$
which satisfies the condition $| p(t, u^0, v^0, \bar{\lambda}) | \leq 20$, $t \in T$.
From here, we obtain the control:
$$v^1(t) = -3t^2 + \pi^2, \quad t \in T.$$
This control corresponds to the phase trajectory $x(t, v^1) = -t^3 + \pi^2 t$, $t \in T$, and the value of the objective functional:
$$\Phi_0(v^1) = -\tfrac{8}{105}\,\pi^7 \approx -230.118 < \Phi_0(u^0) = 0.$$
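As a direct check of this value (added here for the reader), substituting the trajectory $x(t, v^1) = -t^3 + \pi^2 t$ into the objective functional gives
$$\Phi_0(v^1) = -\int_0^\pi \bigl( t^3 - \pi^2 t \bigr)^2\,dt = -\int_0^\pi \bigl( t^6 - 2\pi^2 t^4 + \pi^4 t^2 \bigr)\,dt = -\pi^7 \Bigl( \tfrac{1}{7} - \tfrac{2}{5} + \tfrac{1}{3} \Bigr) = -\tfrac{8}{105}\,\pi^7 \approx -230.118.$$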
Thus, already at the first iteration, the fixed-point method strictly improves the non-optimal extremal control $u^0 = 0$. The possibility of strictly improving the extremal control arises from the freedom to choose the starting initial control $v^0 \in V$ different from the extremal control. Gradient methods have no such freedom.

5. Conclusions

Let us single out the difference between the proposed fixed-point approach and the well-known Lagrange approach in problems with constraints. The Lagrange method is based on the necessary conditions for the optimality of control (Pontryagin’s maximum principle) in problems with constraints, represented by the generalized Lagrange functional. The proposed method of fixed points is based on the conditions for improving control, represented by the regular Lagrange functional in the form of a fixed-point problem in the control space. The developed conditions for improving control in the form of a fixed-point problem make it possible to apply and modify the known theory and methods of fixed points to find a solution to the considered polynomial optimal control problems with constraints.
The proposed method of fixed points is characterized by the property of non-local improvement of control; the possibility of rigorous improvement of non-optimal controls that satisfy the maximum principle; the absence of a control variation procedure, which is typical for gradient methods; precise fulfillment of constraints at each iteration of the method; the presence of one main tuning projection parameter α > 0 , which regulates the speed, quality, and area of convergence of the iterative process. These properties are important for improving the efficiency of solving polynomial optimal control problems with constraints compared to known methods.
One of the main limitations on the application of the proposed method is the assumption that, at each iteration of the method, the corresponding algebraic equation is solvable with respect to the Lagrange multiplier. In cases where, at some iterations of the proposed method, the indicated algebraic equation is unsolvable, a generalized modification of the method can be used, which consists in finding the Lagrange multiplier that minimizes the absolute value of the constraint functional.
The proposed optimization approach based on constructing control improvement conditions in the form of fixed-point problems can be extended to other polynomial optimal control problems, including those with delays, mixed control functions and parameters, piecewise constant controls, and other features. In particular, it is planned to develop a modification of the proposed fixed-point method for polynomial optimal control problems with constant delays, which are typical for models of the immune process in diseases.

Author Contributions

Conceptualization, A.B.; Methodology, A.B.; Software, D.T.; Investigation, D.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement


Conflicts of Interest

The authors declare no conflict of interest.

References

1. Modeling the Socio-Ecological-Economic System of a Region; Gurman, V.; Ryumina, E. (Eds.) Nauka: Moscow, Russia, 2001; pp. 1–175.
2. Modeling and Control of Regional Development Processes; Vasiliev, S. (Ed.) Fizmatlit: Moscow, Russia, 2001; pp. 1–432.
3. Proops, J.; Safonov, P. Modeling in Ecological Economics; Edward Elgar: Cheltenham, UK, 2004; pp. 1–203.
4. Murray, J. Nonlinear Differential Equations in Biology. Lectures on Models; Mir: Moscow, Russia, 1983; pp. 1–397.
5. Riznichenko, G. Lectures on Mathematical Models in Biology; Regulyarnaya i Khaoticheskaya Dinamika: Izhevsk, Russia, 2002; pp. 1–232.
6. Bratus, A.; Novozhilov, A.; Platonov, A. Dynamic Systems and Models in Biology; Fizmatlit: Moscow, Russia, 2010; pp. 1–400.
7. Marchuk, G. Mathematical Models of Immune Response in Infectious Diseases; Kluwer Press: Dordrecht, The Netherlands, 1997; pp. 1–360.
8. Shestakov, A.; Sviridyuk, G.; Keller, A.; Zamyshlyaeva, A.; Khudyakov, Y. Numerical investigation of optimal dynamic measurements. Acta IMEKO 2018, 7, 65–72.
9. Kabanikhin, S.; Krivorot'ko, O. Optimization methods for solving inverse immunology and epidemiology problems. Comput. Math. Math. Phys. 2020, 60, 580–589.
10. Nduka, M.; Oruh, B. A Fermat polynomial method for solving optimal control problems. IJMAM Int. J. Math. Anal. Model. 2022, 5, 56–68.
11. Gerbet, D.; Röbenack, K. A high-gain observer for embedded polynomial dynamical systems. Mathematics 2023, 11, 190.
12. Guo, M.; De Persis, C.; Tesi, P. Data-driven stabilization of nonlinear polynomial systems with noisy data. IEEE Trans. Autom. Control 2022, 67, 4210–4217.
13. Shumafov, M. Stabilization of linear control systems and pole assignment problem: A survey. Vestnik St. Petersburg Univ. Math. 2019, 6, 564–591.
14. Warrad, B.I.; Bouafoura, M.K.; Braiek, N.B. Tracking control design for nonlinear polynomial systems via augmented error system approach and block pulse functions technique. Kybernetika 2019, 55, 831–851.
15. Baillieul, J. Controllability and observability of polynomial dynamical systems. Nonlinear Anal. Theory Meth. Appl. 1981, 5, 543–552.
16. Chen, C. Explicit solutions and stability properties of homogeneous polynomial dynamical systems. IEEE Trans. Autom. Control 2022, 1–8.
17. Ahmadi, A.A.; El Khadir, B. On algebraic proofs of stability for homogeneous vector fields. IEEE Trans. Autom. Control 2020, 65, 325–332.
18. Chukanov, S.; Chukanov, I. The investigation of nonlinear polynomial control systems. MAIS 2021, 28, 238–249.
19. Xiao, B.; Lam, H.-K.; Zhong, Z. Iterative stability analysis for general polynomial control systems. Nonlinear Dyn. 2021, 105, 3139–3148.
20. Roitenberg, V. On generic polynomial differential equations of second order on the circle. Sib. Elektron. Mat. Izv. 2020, 17, 2122–2130.
21. Zaytsev, M.; Akkerman, V. Explicit transformation of the Riccati equation and other polynomial ODEs to systems of linear ODEs. Tomsk State Univ. J. Math. Mech. 2021, 72, 5–14.
22. Yousif, A.; Qasim, A. A novel iterative method based on Bernstein-Adomian polynomials to solve non-linear differential equations. Open Access Libr. J. 2020, 7, e6267.
23. Wan, C.; Dai, R.; Lu, P. Alternating minimization algorithm for polynomial optimal control problems. J. Guid. Control Dyn. 2019, 42, 723–736.
24. Arguchintsev, A.; Srochko, V. Procedure for Regularization of Bilinear Optimal Control Problems Based on a Finite-Dimensional Model; St Petersburg State University: St Petersburg, Russia, 2022; Volume 18, pp. 179–187.
25. Srochko, V. Iterative Methods for Solving Optimal Control Problems; Fizmatlit: Moscow, Russia, 2000; pp. 1–160.
26. Buldaev, A. Perturbation Methods in Problem of the Improvement and Optimization of the Controlled Systems; Buryat State University: Ulan-Ude, Russia, 2008; pp. 1–260.
27. Buldaev, A. A boundary improvement problem for linearly controlled processes. Autom. Remote Control 2011, 72, 1221–1228.
28. Vasiliev, O. Optimization Methods; World Federation Publishers Company: Atlanta, GA, USA, 1996; pp. 1–276.
29. Bartenev, O. Fortran for Professionals. IMSL Mathematical Library. Part 2; Dialog-MIFI: Moscow, Russia, 2001; pp. 1–320.
30. Tyatushkin, A. Numerical Methods and Software Tools for Optimization of Controlled Systems; Nauka: Novosibirsk, Russia, 1992; pp. 1–192.
31. Fedorenko, R. Approximate Solution of Optimal Control Problems; Nauka: Moscow, Russia, 1978; pp. 1–488.
Figure 1. 1—control $u = 0$; 2—computational control $u$.
Figure 2. 1—trajectory $\lg x_1$ for $u = 0$; 2—computational trajectory $\lg x_1$.
Figure 3. 1—trajectory $x_2$ for $u = 0$; 2—computational trajectory $x_2$.
Figure 4. 1—trajectory $x_3$ for $u = 0$; 2—computational trajectory $x_3$.
Figure 5. 1—trajectory $x_4$ for $u = 0$; 2—computational trajectory $x_4$.
Figure 6. 1—trajectory $x_5$ for $u = 0$; 2—computational trajectory $x_5$.
Table 1. Quantitative indicators of calculations for methods M1 and M2.

Method | $\Phi_0$ | $|\Phi_1|$ | $N$ | Note
M1 | $2.686698 \times 10^{-19}$ | $1.854861 \times 10^{-5}$ | 464 | penalty parameter $10^{-6}$
M2 | $1.172261 \times 10^{-20}$ | $1.534792 \times 10^{-5}$ | 88 | $\alpha = 10^{3}$

Table 2. Quantitative indicators of calculations for methods M1, M2 and M3.

Method | $\Phi_0$ | $|\Phi_1|$ | $N$ | Note
M1 | $3.16428 \times 10^{-13}$ | $2.45074 \times 10^{-7}$ | 8512 | penalty parameter 0.5
M2 | $1.48471 \times 10^{-13}$ | $3.13041 \times 10^{-7}$ | 2642 | penalty parameter 0.5
M3 | $3.63122 \times 10^{-13}$ | $5.22914 \times 10^{-8}$ | 1458 | $\alpha = 10^{-5}$