Article

Robust Solution of the Multi-Model Singular Linear-Quadratic Optimal Control Problem: Regularization Approach

The Galilee Research Center for Applied Mathematics, Braude College of Engineering, Karmiel 2161002, Israel
Axioms 2023, 12(10), 955; https://doi.org/10.3390/axioms12100955
Submission received: 11 September 2023 / Revised: 3 October 2023 / Accepted: 9 October 2023 / Published: 10 October 2023
(This article belongs to the Special Issue Advances in Analysis and Control of Systems with Uncertainties II)

Abstract
We consider a finite horizon multi-model linear-quadratic optimal control problem. For this problem, we treat the case where the problem’s functional does not contain a control function. The latter means that the problem under consideration is a singular optimal control problem. To solve this problem, we associate it with a new optimal control problem for the same multi-model system. The functional in this new problem is the sum of the original functional and an integral of the square of the Euclidean norm of the vector-valued control with a small positive weighting coefficient. Thus, the new problem is regular. Moreover, it is a multi-model cheap control problem. Using the solvability conditions (the Robust Maximum Principle), the solution of this cheap control problem is reduced to the solution of the following three problems: (i) a terminal-value problem for an extended matrix Riccati-type differential equation; (ii) an initial-value problem for an extended vector linear differential equation; (iii) a nonlinear optimization (mathematical programming) problem. We analyze the asymptotic behavior of these problems. Using this asymptotic analysis, we design the minimizing sequence of state-feedback controls for the original multi-model singular optimal control problem and obtain the infimum of the functional of this problem. We illustrate the theoretical results with an academic example.

1. Introduction

Multi-model systems represent the class of uncertain systems depending on an unknown numerical parameter, which belongs to some given set. This set can be either finite or infinite and compact. Thus, a multi-model system represents a set of single-model systems, each of which is associated with one of the aforementioned parameters. The optimal control problem for a multi-model system is a Min-Max type optimization problem. In such an optimal control problem, the functional is maximized with respect to the parameter and minimized with respect to the control. For multi-model optimal control problems, the first-order optimality condition (the Robust Maximum Principle) was recently developed in the book [1] (see also the book [2]). Among other topics, where versions of multi-model systems different from those in [1,2] and their analysis are considered, we can mention, for instance, the following: robust optimization in spline regression for multi-model regulatory networks (see, e.g., [3] and references therein), multi-regime stochastic differential games with jumps (see, e.g., [4] and references therein), games with fuzzy uncertainty (see, e.g., [5] and references therein), and robust portfolio optimization under parallelepiped uncertainty (see, e.g., [6] and references therein).
The singular optimal control problem is such that the first-order optimality conditions, namely the Maximum Principle ([7]), the Robust Maximum Principle ([1]) and the Hamilton–Jacobi–Bellman equation ([8]), are not applicable for obtaining its solution. Single-model singular optimal control problems are extensively studied in the literature, and several approaches to the analysis and solution of such problems are widely used. Thus, higher-order necessary/sufficient control optimality conditions can be useful in solving singular optimal control problems (see, e.g., [9,10,11,12,13] and references therein). However, the higher-order optimality conditions fail to yield a candidate optimal control/optimal control for a problem that does not have an optimal control in the class of regular (non-generalized) functions, even if the problem’s functional has a finite infimum/supremum in this class of functions. The second approach is based on the design of a singular optimal control as a minimizing sequence of regular open-loop controls. This minimizing sequence is a sequence of regular control functions of time, along which the functional tends to its infimum/supremum (see, e.g., [12,14,15] and references therein). A generalization of this approach is the extension approach (see [16,17,18]). The third approach combines geometric and analytic methods. Namely, this approach is based on a decomposition of the state space into the “singular” and “regular” subspaces, and a design of an optimal open-loop control as a sum of impulsive (in the singular subspace) and regular (in the regular subspace) functions (see, e.g., [19,20,21,22] and references therein). The fourth approach proposes to look for a solution to a singular optimal control problem in a properly defined class of generalized functions (see, e.g., [23]).
Finally, the fifth approach is based on regularization of the original singular problem by a “small” correction of its “singular” functional (see e.g., [24,25,26] and references therein). Such a regularization is a kind of Tikhonov’s regularization of ill-posed problems [27]. This approach yields the solution to the original problem in the form of a minimizing sequence of state feedback controls.
However, to the best of our knowledge, multi-model singular optimal control problems have not been considered in the literature. In this paper, we consider the finite horizon multi-model singular linear-quadratic optimal control problem. We solve this problem by application of the regularization approach, which yields a new regular optimal control problem. The latter problem is a multi-model cheap control problem. To the best of our knowledge, multi-model cheap control problems also have not been considered in the literature. We carry out an asymptotic analysis of the multi-model cheap control problem obtained in this paper. Based on this analysis, a minimizing sequence of state-feedback controls in the original multi-model singular control problem is designed, and the infimum of the functional of this problem is derived.
It should be noted that the present paper is rather theoretical. Its motivation is to extend the regularization approach to analysis and solution of multi-model singular optimal control problems. Since the Robust Maximum Principle, applied to multi-model optimal control problems, differs considerably from the Maximum Principle, applied to single-model optimal control problems, the aforementioned extension is not trivial. It requires obtaining significantly new results in the asymptotic analysis of singularly perturbed problems, as well as significantly new results in the asymptotic analysis of optimization (mathematical programming) problems.
We organize the paper as follows. In the next section (Section 2), we present the rigorous formulation of the considered problem, as well as the main definitions. In Section 3, we regularize the original singular problem. This regularization yields a new problem, the multi-model cheap control problem. Using the Robust Maximum Principle, we present the solvability conditions of this new problem. In Section 4, we analyze these solvability conditions asymptotically. Based on this analysis, in Section 5, we design the minimizing sequence of state-feedback controls for the original multi-model singular optimal control problem and obtain the infimum of the functional of this problem. In Section 6, we present an illustrative academic example. We devote Section 7 to the concluding remarks and outlook.
The following main notations are applied in the paper.
1. $E^n$ denotes the $n$-dimensional real Euclidean space.
2. $\|\cdot\|$ denotes the Euclidean norm either of a vector or of a matrix.
3. The superscript “$T$” denotes the transposition of a matrix $A$ ($A^T$), or of a vector $x$ ($x^T$).
4. $L^2[a,b;E^n]$ denotes the linear space of $n$-dimensional vector-valued real functions, square-integrable in the finite interval $[a,b]$.
5. $O_{n_1\times n_2}$ is used for the zero matrix of the dimension $n_1\times n_2$, except in the cases where the dimension of the zero matrix is obvious. In such cases, the notation 0 is used for the zero matrix.
6. $I_n$ is the $n$-dimensional identity matrix.
7. $\operatorname{col}(x,y)$, where $x\in E^n$, $y\in E^m$, denotes the column block-vector of the dimension $n+m$ with the upper block $x$ and the lower block $y$.
8. The inequality $A\le B$, where $A$ and $B$ are square symmetric matrices of the same dimension, means that the matrix $B-A$ is positive semi-definite.

2. Problem Formulation and Main Definitions

Consider the following multi-model system:
$$\frac{dw_k(t)}{dt}=A_k(t)w_k(t)+B_k(t)u(t),\quad w_k(0)=\tilde{w}_0,\quad t\in[0,t_f],\quad k\in\{1,2,\dots,K\},\ K>1,$$
where $w_k(t)$, $(k\in\{1,2,\dots,K\})$ is a state in the multi-model system and $w_k(t)\in E^n$, $(k=1,2,\dots,K)$; $u(t)$ is a control in the multi-model system and $u(t)\in E^r$, $(r\le n)$; $t_f>0$ is a given time instant; $A_k(t)$ and $B_k(t)$, $t\in[0,t_f]$, $(k=1,2,\dots,K)$ are given matrix-valued continuous functions of corresponding dimensions; $\tilde{w}_0\in E^n$ is a given vector.
Let us consider the following functional:
$$F(u,k)=w_k^T(t_f)\tilde{H}w_k(t_f)+\int_0^{t_f}w_k^T(t)\tilde{D}(t)w_k(t)\,dt,\quad k\in\{1,2,\dots,K\},$$
where H ˜ is a constant symmetric positive semi-definite n × n -matrix; for any t [ 0 , t f ] , D ˜ ( t ) is a symmetric positive semi-definite n × n -matrix.
Based on the functional F ( u , k ) , we construct the performance index evaluating the control process of the multi-model system (1)
$$J(u)=\max_{k\in\{1,2,\dots,K\}}F(u,k)\to\inf_u.$$
Remark 1. 
Since the control u ( · ) is not present explicitly in the functional F ( u , k ) (and, therefore, in the functional J ( u ) ), the first-order optimality conditions (see [1]) fail to yield an optimal control to the problem (1), (3). Thus, this problem is a singular optimal control problem.
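As a toy numerical illustration of the worst-case functional $J(u)=\max_k F(u,k)$, the following sketch simulates a hypothetical two-model set under a fixed state feedback and takes the largest realized cost. All matrices, the feedback law and the forward-Euler discretization are illustrative assumptions, not the paper's data.

```python
import numpy as np

def model_cost(A, B, H, D, feedback, w0, tf=1.0, steps=2000):
    """Cost F(u,k) of one model: w(tf)' H w(tf) + integral of w' D w dt,
    under dw/dt = A w + B u(w,t), w(0) = w0 (forward Euler; toy setup)."""
    dt = tf / steps
    w, cost = np.array(w0, dtype=float), 0.0
    for i in range(steps):
        u = feedback(w, i * dt)
        cost += float(w @ D @ w) * dt
        w = w + dt * (A @ w + B @ u)
    return cost + float(w @ H @ w)

def worst_case_cost(models, H, D, feedback, w0):
    """J(u) = max over the finite model set {(A_k, B_k)} of F(u,k)."""
    return max(model_cost(A, B, H, D, feedback, w0) for A, B in models)
```

Note that the same admissible feedback is applied to every model; only the realized cost differs between models, and the performance index keeps the worst one.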
Let us introduce the vectors
$$w=\operatorname{col}(w_1,w_2,\dots,w_K)\in E^{Kn},\qquad w^0=\operatorname{col}(\tilde{w}_0,\tilde{w}_0,\dots,\tilde{w}_0)\in E^{Kn},$$
and the set U of all functions u = u ( w , t ) : E K n × [ 0 , t f ] E r , which are measurable with respect to t [ 0 , t f ] for any fixed w E K n and satisfy the local Lipschitz condition with respect to w E K n uniformly in t [ 0 , t f ] .
Definition 1. 
By U, we denote the subset of the set U , such that the following conditions are valid:
(i) 
for any $u(w,t)\in U$ and any $w^0\in E^{Kn}$ of the form in (4), the initial-value problem (1) with $k=1,2,\dots,K$ and $u(t)=u(w,t)$ has the unique absolutely continuous solution $w_u(t;w^0)=\operatorname{col}\bigl(w_{1,u}(t;w^0),w_{2,u}(t;w^0),\dots,w_{K,u}(t;w^0)\bigr)$ in the entire interval $[0,t_f]$;
(ii) 
$u\bigl(w_u(t;w^0),t\bigr)\in L^2[0,t_f;E^r]$.
Such a defined set U is called the set of all admissible state-feedback controls in the problem (1) and (3).
Remark 2. 
Since for any k { 1 , 2 , , K } , any u ( w , t ) U and any w 0 E K n of the form in (4), the value of the functional F ( u , k ) with u ( t ) = u ( w , t ) is non-negative, then for any aforementioned w 0 E K n , there exists a finite infimum J * ( w 0 ) of the functional J ( u ) with respect to u ( t ) = u ( w , t ) U in the problem (1) and (3).
Consider a sequence of the functions u q * ( w , t ) U , ( q = 1 , 2 , ) .
Definition 2. 
The sequence u q * ( w , t ) q = 1 + is called a minimizing robust control sequence (or briefly, a minimizing sequence) of the optimal control problem (1) and (3) if for any w 0 E K n of the form in (4):
(a) 
there exists $\lim_{q\to+\infty}J\bigl(u_q^*(w,t)\bigr)$;
(b) 
the following equality is valid:
$$\lim_{q\to+\infty}J\bigl(u_q^*(w,t)\bigr)=J^*(w^0).$$
In this case, the value J * ( w 0 ) is called an optimal value of the functional in the problem (1) and (3).
The objective of the paper is to design the minimizing sequence of the optimal control problem (1) and (3) and to derive the expression for the optimal value of the functional in this problem.

3. Regularization of the Optimal Control Problem (1) and (3)

3.1. Multi-Model Cheap Control Problem

To design the minimizing sequence of the problem (1) and (3), first, we are going to regularize it. Namely, we replace (approximately) the singular problem (1) and (3) with a parameter-dependent regular optimal control problem. This new problem has the same multi-model dynamics (1) as the original singular problem has. However, the functional in the new problem, having the regular form, differs from the original functional J ( u ) . Namely, the functional in the new problem has the form
$$J_\varepsilon(u)=\max_{k\in\{1,2,\dots,K\}}F_\varepsilon(u,k),$$
where
$$F_\varepsilon(u,k)=w_k^T(t_f)\tilde{H}w_k(t_f)+\int_0^{t_f}\bigl[w_k^T(t)\tilde{D}(t)w_k(t)+\varepsilon^2u^T(t)u(t)\bigr]dt,\quad k\in\{1,2,\dots,K\},$$
ε > 0 is a small parameter.
Remark 3. 
As in the original optimal control problem (1) and (3), the objective of the control u in the new problem (1), (5) and (6) is the minimization of its functional by a proper choice of $u=u(w,t)\in U$.
Remark 4. 
Since the parameter ε > 0 is small, the problem (1), (5) and (6) is a cheap control problem, i.e., an optimal control problem with a control cost much smaller than a state cost in the functional. Single-model cheap control problems have been studied extensively in the literature (see e.g., [24,25,26,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46] and references therein). However, to the best of our knowledge, multi-model cheap control problems have not yet been studied in the literature. It is important to note that, due to the smallness of the control cost, a cheap control problem can be transformed to an optimal control problem for a singularly perturbed system. Various results in the topic of optimal control problems for singularly perturbed single-model systems can be found, for instance, in [36,39,40,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61] and references therein. However, to the best of our knowledge, optimal control problems for singularly perturbed multi-model systems have not yet been studied in the literature.
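The regularized functional (6) can be sketched numerically as follows: the running cost gains the cheap-control penalty $\varepsilon^2 u^T u$, which vanishes as $\varepsilon\to 0$. The single-model setup, matrices and feedback below are illustrative assumptions only.

```python
import numpy as np

def regularized_cost(A, B, H, D, feedback, w0, eps, tf=1.0, steps=2000):
    """F_eps(u,k): the original quadratic cost plus the cheap-control
    penalty eps**2 * integral of u'u dt (hypothetical single-model sketch,
    forward Euler in time)."""
    dt = tf / steps
    w, cost = np.array(w0, dtype=float), 0.0
    for i in range(steps):
        u = feedback(w, i * dt)
        cost += (float(w @ D @ w) + eps**2 * float(u @ u)) * dt
        w = w + dt * (A @ w + B @ u)
    return cost + float(w @ H @ w)
```

For a fixed feedback, the trajectory does not depend on $\varepsilon$, so the regularized cost exceeds the original one by exactly the penalty term and decreases monotonically as $\varepsilon$ shrinks.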

3.2. Solvability Conditions of the Optimal Control Problem (1), (5) and (6)

Based on the results of the book [1] (Section 9.4), let us introduce for consideration the following block-diagonal K n × K n -matrices:
$$A(t)=\operatorname{diag}\bigl(A_1(t),A_2(t),\dots,A_K(t)\bigr),\qquad H=\operatorname{diag}\bigl(\tilde H,\tilde H,\dots,\tilde H\bigr),$$
$$D(t)=\operatorname{diag}\bigl(\tilde D(t),\tilde D(t),\dots,\tilde D(t)\bigr),\qquad \Lambda=\operatorname{diag}\bigl(\lambda_1I_n,\lambda_2I_n,\dots,\lambda_KI_n\bigr),$$
where λ k , ( k = 1 , 2 , , K ) are scalar nonnegative parameters satisfying the condition k = 1 K λ k = 1 , i.e., the vector λ = col ( λ 1 , λ 2 , , λ K ) belongs to the set
$$\Omega_\lambda=\Bigl\{\lambda=\operatorname{col}(\lambda_1,\lambda_2,\dots,\lambda_K)\in E^K:\ \lambda_1\ge0,\ \lambda_2\ge0,\ \dots,\ \lambda_K\ge0,\ \sum_{k=1}^K\lambda_k=1\Bigr\}.$$
Along with the above-introduced block-diagonal matrices, let us introduce for consideration the following block-form matrix:
$$B(t)=\begin{pmatrix}B_1(t)\\B_2(t)\\\vdots\\B_K(t)\end{pmatrix}.$$
Based on the matrices in (7) and (8), we consider the following terminal-value problem for the matrix Riccati differential equation:
$$\frac{dP(t)}{dt}=-P(t)A(t)-A^T(t)P(t)+P(t)S(t,\varepsilon)P(t)-\Lambda D(t),\quad t\in[0,t_f],\qquad P(t_f)=\Lambda H,$$
where
$$S(t,\varepsilon)=\frac{1}{\varepsilon^2}B(t)B^T(t).$$
Remark 5. 
For any λ Ω λ and any ε > 0 , the terminal-value problem (9) has the unique solution P ( t ) = P ( t , λ , ε ) in the entire interval [ 0 , t f ] , and P T ( t , λ , ε ) = P ( t , λ , ε ) .
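A terminal-value Riccati problem of the form (9) can be integrated numerically backward in time. The explicit-Euler solver below, together with its test data, is an illustrative sketch under assumed matrices; the paper treats (9) analytically.

```python
import numpy as np

def solve_riccati_backward(A, S, Q, Pf, tf, steps=4000):
    """Integrate dP/dt = -P A - A'P + P S P - Q backward from the terminal
    condition P(tf) = Pf (explicit Euler in reversed time); returns an
    approximation of P(0)."""
    dt = tf / steps
    P = np.array(Pf, dtype=float)
    for _ in range(steps):
        dP = -P @ A - A.T @ P + P @ S @ P - Q
        P = P - dt * dP            # step from t down to t - dt
        P = 0.5 * (P + P.T)        # enforce symmetry against roundoff drift
    return P
```

The symmetrization step reflects the property stated in Remark 5 that the exact solution is symmetric; numerically it also suppresses accumulation of asymmetric roundoff.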
Let us introduce the vector κ E K , the set
$$\Omega_\kappa=\Bigl\{\kappa=\operatorname{col}(\kappa_1,\kappa_2,\dots,\kappa_K)\in E^K:\ \kappa_1\ge0,\ \kappa_2\ge0,\ \dots,\ \kappa_K\ge0,\ \sum_{k=1}^K\kappa_k=1\Bigr\},$$
and the matrices
$$H_\kappa=\operatorname{diag}\bigl(\kappa_1\tilde H,\kappa_2\tilde H,\dots,\kappa_K\tilde H\bigr),\qquad D_\kappa(t)=\operatorname{diag}\bigl(\kappa_1\tilde D(t),\kappa_2\tilde D(t),\dots,\kappa_K\tilde D(t)\bigr).$$
Proposition 1. 
For a given ε > 0 , the robust optimal state-feedback control u = u ε * ( w , t , λ * ) of the multi-model cheap control problem (1), (5) and (6) has the form
$$u_\varepsilon^*(w,t,\lambda^*)=-\frac{1}{\varepsilon^2}B^T(t)P(t,\lambda^*,\varepsilon)w,\quad w\in E^{Kn},\ t\in[0,t_f],$$
where
$$\lambda^*=\lambda^*(\varepsilon)=\operatorname*{argmin}_{\lambda\in\Omega_\lambda}I(\lambda,\varepsilon),$$
$$I(\lambda,\varepsilon)=(w^0)^TP(0,\lambda,\varepsilon)w^0-w^T(t_f)\Lambda Hw(t_f)-\int_0^{t_f}w^T(t)\Lambda D(t)w(t)\,dt+\max_{\kappa\in\Omega_\kappa}\Bigl[\int_0^{t_f}w^T(t)D_\kappa(t)w(t)\,dt+w^T(t_f)H_\kappa w(t_f)\Bigr];$$
the vector w 0 E K n is of the form in (4) and w ( t ) = w ( t , λ , ε ) is the solution of the initial-value problem
$$\frac{dw(t)}{dt}=\bigl[A(t)-S(t,\varepsilon)P(t,\lambda,\varepsilon)\bigr]w(t),\quad w(0)=w^0,\quad t\in[0,t_f].$$
The optimal value I ε * of the functional in the problem (1), (5) and (6) is
$$I_\varepsilon^*=I\bigl(\lambda^*(\varepsilon),\varepsilon\bigr).$$
Proof. 
The statements of the proposition are direct consequences of the results of [1] (Section 9.4). □
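The optimization (14) runs over the simplex $\Omega_\lambda$. For $K=2$ the simplex degenerates to a segment, and a brute-force grid search is enough to sketch the idea. Here `I_func` is any user-supplied surrogate for $I(\lambda,\varepsilon)$; in the paper it would be assembled from the solutions of (9) and (16), which this sketch does not reproduce.

```python
import numpy as np

def argmin_on_simplex_k2(I_func, grid_points=201):
    """Brute-force search for argmin of I(lambda) over Omega_lambda with
    K = 2, where the simplex is the segment {(s, 1 - s): s in [0, 1]}."""
    best_val, best_lam = np.inf, None
    for s in np.linspace(0.0, 1.0, grid_points):
        lam = np.array([s, 1.0 - s])
        val = I_func(lam)
        if val < best_val:
            best_val, best_lam = val, lam
    return best_lam, best_val
```

For larger K one would search over a K-dimensional simplex grid or use a constrained optimizer; the grid version is only meant to make the argmin in (14) concrete.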

4. Asymptotic Analysis of the Solvability Conditions to the Problem (1), (5) and (6)

4.1. Transformation of the Terminal-Value Problem (9), the Initial-Value Problem (16) and the Optimization Problem (14) and (15)

In what follows, we assume that:
AI. 
For any k { 1 , 2 , , K } and any t [ 0 , t f ] , the matrix B k ( t ) has the column rank r.
AII. 
For any $k\in\{1,2,\dots,K\}$ and any $t\in[0,t_f]$, $\det\bigl(B_k^T(t)\tilde D(t)B_k(t)\bigr)\neq0$.
AIII. 
For any k { 1 , 2 , , K } , H ˜ B k ( t f ) = 0 .
AIV. 
The matrix-valued functions A k ( t ) , ( k = 1 , 2 , , K ) are continuously differentiable in the interval [ 0 , t f ] .
AV. 
The matrix-valued functions B k ( t ) , ( k = 1 , 2 , , K ) and D ˜ ( t ) are twice continuously differentiable in the interval [ 0 , t f ] .
Let, for any $t\in[0,t_f]$, $B^c(t)$ be a complement matrix to the matrix $B(t)$ defined in (8). Thus, the dimension of the matrix $B^c(t)$ is $Kn\times(Kn-r)$, and the block-form matrix $\bigl(B^c(t),B(t)\bigr)$ is invertible for all $t\in[0,t_f]$. Due to the definition of the matrix-valued function $B(t)$, as well as the assumption AV and the results of the book [62] (Section 3.3), the matrix-valued function $B^c(t)$ can be chosen twice continuously differentiable in the interval $[0,t_f]$.
Lemma 1. 
Let the assumptions AII and AV be satisfied. Then, there exist numbers 0 < ν min ν max such that, for all t [ 0 , t f ] and all λ Ω λ , the following relation is valid:
$$\nu_{\min}I_r\le B^T(t)\Lambda D(t)B(t)\le\nu_{\max}I_r.$$
Thus, for all t [ 0 , t f ] and all λ Ω λ , the matrix B T ( t ) Λ D ( t ) B ( t ) is invertible and
$$\frac{1}{\nu_{\max}}I_r\le\bigl(B^T(t)\Lambda D(t)B(t)\bigr)^{-1}\le\frac{1}{\nu_{\min}}I_r.$$
Proof. 
Using the definitions of the matrices D ( t ) , Λ and B ( t ) in Equations (7) and (8), we directly obtain
$$B^T(t)\Lambda D(t)B(t)=\sum_{k=1}^K\lambda_kB_k^T(t)\tilde D(t)B_k(t),\quad t\in[0,t_f],\quad\lambda=\operatorname{col}(\lambda_1,\lambda_2,\dots,\lambda_K)\in\Omega_\lambda.$$
Since the matrix $\tilde D(t)$ is symmetric and positive semi-definite for each $t\in[0,t_f]$ and, by the assumption AII, $\det\bigl(B_k^T(t)\tilde D(t)B_k(t)\bigr)\neq0$, the matrices $B_k^T(t)\tilde D(t)B_k(t)$, $(k=1,2,\dots,K)$ are symmetric and positive definite for each $t\in[0,t_f]$. Therefore, all the eigenvalues of each of these matrices are real and positive numbers for each $t\in[0,t_f]$. Moreover, due to the assumption AV and the results of [63], these eigenvalues are continuous functions of $t\in[0,t_f]$. Let $\mu_{k,i}(t)$, $(k=1,2,\dots,K;\ i=1,\dots,r)$, $t\in[0,t_f]$ be all the eigenvalues (including equal ones) of the matrix $B_k^T(t)\tilde D(t)B_k(t)$. Then, we have
$$0<\mu_{k,\min}=\min_{t\in[0,t_f]}\min_{i\in\{1,\dots,r\}}\mu_{k,i}(t)\le\max_{t\in[0,t_f]}\max_{i\in\{1,\dots,r\}}\mu_{k,i}(t)=\mu_{k,\max},\quad k=1,2,\dots,K.$$
Note that μ k , max , ( k = 1 , 2 , , K ) are finite values.
Using Equations (19) and (20), we obtain
$$\Bigl(\sum_{k=1}^K\lambda_k\mu_{k,\min}\Bigr)I_r\le B^T(t)\Lambda D(t)B(t)\le\Bigl(\sum_{k=1}^K\lambda_k\mu_{k,\max}\Bigr)I_r.$$
Let us choose the numbers ν min and ν max as:
$$\nu_{\min}=\min_{k\in\{1,2,\dots,K\}}\mu_{k,\min},\qquad\nu_{\max}=\max_{k\in\{1,2,\dots,K\}}\mu_{k,\max}.$$
Such a choice of ν min and ν max , along with Equation (20), the relation (21) and the inclusion col ( λ 1 , λ 2 , , λ K ) Ω λ , directly yields the relation (17). The relation (18) is an immediate consequence of the relation (17). This completes the proof of the lemma. □
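The bounds of Lemma 1 can be checked numerically: compute the eigenvalues $\mu_{k,i}(t)$ of $B_k^T(t)\tilde D(t)B_k(t)$ on a time grid and take the extreme values (22). The callables and matrices below are toy assumptions used only to exercise the construction.

```python
import numpy as np

def nu_bounds(Bs, Dtilde, times):
    """nu_min, nu_max of Lemma 1 from the eigenvalues of
    B_k(t)' Dtilde(t) B_k(t) over a time grid; Bs is a list of callables
    t -> B_k(t), Dtilde a callable t -> Dtilde(t) (toy sketch)."""
    mu_min, mu_max = [], []
    for Bk in Bs:
        eigs = np.array([np.linalg.eigvalsh(Bk(t).T @ Dtilde(t) @ Bk(t))
                         for t in times])
        mu_min.append(eigs.min())
        mu_max.append(eigs.max())
    return min(mu_min), max(mu_max)
```

By (19) and (21), any convex combination $\sum_k\lambda_k B_k^T\tilde D B_k$ is then sandwiched between $\nu_{\min}I_r$ and $\nu_{\max}I_r$ for every $\lambda\in\Omega_\lambda$.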
Consider the following matrix-valued functions of ( t , λ ) [ 0 , t f ] × Ω λ :
$$L(t,\lambda)=B^c(t)-B(t)\bigl(B^T(t)\Lambda D(t)B(t)\bigr)^{-1}B^T(t)\Lambda D(t)B^c(t),\qquad R(t,\lambda)=\bigl(L(t,\lambda),\,B(t)\bigr).$$
Remark 6. 
Due to Lemma 1 and the results of [62] (Section 3.3), the matrix R ( t , λ ) is invertible and R ( t , λ ) , R 1 ( t , λ ) are bounded for all ( t , λ ) [ 0 , t f ] × Ω λ . Moreover, the matrix-valued function R ( t , λ ) is twice continuously differentiable with respect to t [ 0 , t f ] uniformly in λ Ω λ , and this function is continuous with respect to λ Ω λ uniformly in t [ 0 , t f ] .
Using the aforementioned matrix-valued function R ( t , λ ) and its properties, we transform the unknown P ( t ) in the terminal-value problem (9) as follows:
$$P(t)=\bigl(R^T(t,\lambda)\bigr)^{-1}\mathcal{P}(t)R^{-1}(t,\lambda),\quad t\in[0,t_f],\ \lambda\in\Omega_\lambda,$$
where $\mathcal{P}(t)$ is a new unknown matrix-valued function.
By virtue of the results of [62] (Section 3.3), as well as Equation (10), Lemma 1 and Remark 6, we directly have the following assertion.
Proposition 2. 
Let the assumptions AI-AV be valid. Then, for any ε > 0 and any λ Ω λ , the transformation (23) converts the terminal-value problem (9) to the new terminal-value problem
$$\frac{d\mathcal{P}(t)}{dt}=-\mathcal{P}(t)A(t,\lambda)-A^T(t,\lambda)\mathcal{P}(t)+\mathcal{P}(t)S(\varepsilon)\mathcal{P}(t)-D(t,\lambda),\quad t\in[0,t_f],\qquad\mathcal{P}(t_f)=H(\lambda),$$
where
$$A(t,\lambda)=R^{-1}(t,\lambda)\bigl(A(t)R(t,\lambda)-dR(t,\lambda)/dt\bigr),$$
$$\bar B=R^{-1}(t,\lambda)B(t)=\begin{pmatrix}O_{(Kn-r)\times r}\\ I_r\end{pmatrix},$$
$$S(\varepsilon)=\frac{1}{\varepsilon^2}\bar B\bar B^T=\begin{pmatrix}O_{(Kn-r)\times(Kn-r)}&O_{(Kn-r)\times r}\\O_{r\times(Kn-r)}&(1/\varepsilon^2)I_r\end{pmatrix},$$
$$D(t,\lambda)=R^T(t,\lambda)\Lambda D(t)R(t,\lambda)=\begin{pmatrix}D_1(t,\lambda)&O_{(Kn-r)\times r}\\O_{r\times(Kn-r)}&D_2(t,\lambda)\end{pmatrix},$$
$$H(\lambda)=R^T(t_f,\lambda)\Lambda HR(t_f,\lambda)=\begin{pmatrix}H_1(\lambda)&O_{(Kn-r)\times r}\\O_{r\times(Kn-r)}&O_{r\times r}\end{pmatrix},$$
$$D_1(t,\lambda)=L^T(t,\lambda)\Lambda D(t)L(t,\lambda),\qquad D_2(t,\lambda)=B^T(t)\Lambda D(t)B(t),$$
$$H_1(\lambda)=L^T(t_f,\lambda)\Lambda HL(t_f,\lambda).$$
For all t [ 0 , t f ] and λ Ω λ , the matrix D 1 ( t , λ ) is positive semi-definite, while the matrix D 2 ( t , λ ) is positive definite. For all λ Ω λ , the matrix H 1 ( λ ) is positive semi-definite, and the matrix-valued function H 1 ( λ ) is continuous. The matrix-valued functions A ( t , λ ) , D ( t , λ ) are continuously differentiable with respect to t [ 0 , t f ] uniformly in λ Ω λ , and these functions are continuous with respect to λ Ω λ uniformly in t [ 0 , t f ] .
Remark 7. 
For any $\lambda\in\Omega_\lambda$ and any $\varepsilon>0$, the terminal-value problem (24) has the unique solution $\mathcal{P}(t)=\mathcal{P}(t,\lambda,\varepsilon)$ in the entire interval $[0,t_f]$, and $\mathcal{P}^T(t,\lambda,\varepsilon)=\mathcal{P}(t,\lambda,\varepsilon)$.
Now, let us make the following transformation of the unknown w ( t ) in the initial-value problem (16):
$$w(t)=R(t,\lambda)z(t),\quad t\in[0,t_f],\ \lambda\in\Omega_\lambda,$$
where z ( t ) is a new unknown vector-valued function.
As a direct consequence of Proposition 2 and Remark 6, we have the following assertion.
Corollary 1. 
Let the assumptions AI-AV be valid. Then, for any ε > 0 and any λ Ω λ , the transformation (32), along with the transformation (23), converts the initial-value problem (16) to the new initial-value problem
$$\frac{dz(t)}{dt}=\bigl[A(t,\lambda)-S(\varepsilon)\mathcal{P}(t)\bigr]z(t),\quad z(0)=z_0(\lambda),\quad t\in[0,t_f],$$
where
$$z_0(\lambda)=R^{-1}(0,\lambda)w^0,$$
and the vector-valued function z 0 ( λ ) is continuous for λ Ω λ .
Corollary 2. 
Let the assumptions AI-AV be valid. Then, for any ε > 0 , the transformations (23) and (32) convert the optimization problem (14) and (15) to the equivalent optimization problem
$$\lambda^*=\lambda^*(\varepsilon)=\operatorname*{argmin}_{\lambda\in\Omega_\lambda}\mathcal{J}(\lambda,\varepsilon),$$
$$\mathcal{J}(\lambda,\varepsilon)=z_0^T(\lambda)\mathcal{P}(0,\lambda,\varepsilon)z_0(\lambda)-z^T(t_f,\lambda,\varepsilon)H(\lambda)z(t_f,\lambda,\varepsilon)-\int_0^{t_f}z^T(t,\lambda,\varepsilon)D(t,\lambda)z(t,\lambda,\varepsilon)\,dt+\max_{\kappa\in\Omega_\kappa}\Bigl[\int_0^{t_f}z^T(t,\lambda,\varepsilon)R^T(t,\lambda)D_\kappa(t)R(t,\lambda)z(t,\lambda,\varepsilon)\,dt+z^T(t_f,\lambda,\varepsilon)R^T(t_f,\lambda)H_\kappa R(t_f,\lambda)z(t_f,\lambda,\varepsilon)\Bigr],$$
where the vector $z_0(\lambda)$ is given by (34); $z(t,\lambda,\varepsilon)$ is the solution of the initial-value problem (33); $\mathcal{P}(t,\lambda,\varepsilon)$ is the solution of the terminal-value problem (24); the matrices $D(t,\lambda)$ and $H(\lambda)$ are given by (28) and (29), respectively; the set $\Omega_\kappa$ is given by (11); the matrices $H_\kappa$ and $D_\kappa(t)$ are given in (12).
Moreover,
$$\mathcal{J}\bigl(\lambda^*(\varepsilon),\varepsilon\bigr)=I\bigl(\lambda^*(\varepsilon),\varepsilon\bigr),\quad\varepsilon>0.$$
Proof. 
The statements of the corollary follow immediately from Propositions 1 and 2 and Corollary 1. □

4.2. Asymptotic Solution of the Terminal-Value Problem (24)

First of all, let us note that, due to the block form of the matrix $S(\varepsilon)$ (see Equation (27)), the right-hand side of the differential equation in (24) has a singularity with respect to $\varepsilon$ at $\varepsilon=0$. In order to remove this singularity, we look for the solution $\mathcal{P}(t)=\mathcal{P}(t,\lambda,\varepsilon)$ of the problem (24) in the form of the block matrix
$$\mathcal{P}(t,\lambda,\varepsilon)=\begin{pmatrix}P_1(t,\lambda,\varepsilon)&\varepsilon P_2(t,\lambda,\varepsilon)\\\varepsilon P_2^T(t,\lambda,\varepsilon)&\varepsilon P_3(t,\lambda,\varepsilon)\end{pmatrix},$$
where the matrices $P_1(t,\lambda,\varepsilon)$, $P_2(t,\lambda,\varepsilon)$ and $P_3(t,\lambda,\varepsilon)$ are of the dimensions $(Kn-r)\times(Kn-r)$, $(Kn-r)\times r$ and $r\times r$, respectively; $P_1^T(t,\lambda,\varepsilon)=P_1(t,\lambda,\varepsilon)$, $P_3^T(t,\lambda,\varepsilon)=P_3(t,\lambda,\varepsilon)$.
As with the partitioning the matrix P ( t , λ , ε ) , let us also partition the matrix A ( t , λ ) into blocks as follows:
$$A(t,\lambda)=\begin{pmatrix}A_1(t,\lambda)&A_2(t,\lambda)\\A_3(t,\lambda)&A_4(t,\lambda)\end{pmatrix},$$
where the matrices A 1 ( t , λ ) , A 2 ( t , λ ) , A 3 ( t , λ ) and A 4 ( t , λ ) are of the dimensions ( K n r ) × ( K n r ) , ( K n r ) × r , r × ( K n r ) and r × r , respectively.
Now, substituting the block forms of the matrices $S(\varepsilon)$, $D(t,\lambda)$, $H(\lambda)$, $\mathcal{P}(t,\lambda,\varepsilon)$, $A(t,\lambda)$ (see Equations (27)–(29), (37) and (38)) into the problem (24), we obtain after a routine matrix algebra the following equivalent terminal-value problem in the time interval $[0,t_f]$:
$$\frac{dP_1(t,\lambda,\varepsilon)}{dt}=-P_1(t,\lambda,\varepsilon)A_1(t,\lambda)-\varepsilon P_2(t,\lambda,\varepsilon)A_3(t,\lambda)-A_1^T(t,\lambda)P_1(t,\lambda,\varepsilon)-\varepsilon A_3^T(t,\lambda)P_2^T(t,\lambda,\varepsilon)+P_2(t,\lambda,\varepsilon)P_2^T(t,\lambda,\varepsilon)-D_1(t,\lambda),\qquad P_1(t_f,\lambda,\varepsilon)=H_1(\lambda),$$
$$\varepsilon\frac{dP_2(t,\lambda,\varepsilon)}{dt}=-P_1(t,\lambda,\varepsilon)A_2(t,\lambda)-\varepsilon P_2(t,\lambda,\varepsilon)A_4(t,\lambda)-\varepsilon A_1^T(t,\lambda)P_2(t,\lambda,\varepsilon)-\varepsilon A_3^T(t,\lambda)P_3(t,\lambda,\varepsilon)+P_2(t,\lambda,\varepsilon)P_3(t,\lambda,\varepsilon),\qquad P_2(t_f,\lambda,\varepsilon)=0,$$
$$\varepsilon\frac{dP_3(t,\lambda,\varepsilon)}{dt}=-\varepsilon P_2^T(t,\lambda,\varepsilon)A_2(t,\lambda)-\varepsilon P_3(t,\lambda,\varepsilon)A_4(t,\lambda)-\varepsilon A_2^T(t,\lambda)P_2(t,\lambda,\varepsilon)-\varepsilon A_4^T(t,\lambda)P_3(t,\lambda,\varepsilon)+P_3^2(t,\lambda,\varepsilon)-D_2(t,\lambda),\qquad P_3(t_f,\lambda,\varepsilon)=0.$$
Remark 8. 
Since the terminal-value problem (39)–(41) is equivalent to the problem (24), then (due to Remark 7), for any $\lambda\in\Omega_\lambda$ and any $\varepsilon>0$, the problem (39)–(41) has the unique solution $\bigl(P_1(t,\lambda,\varepsilon),P_2(t,\lambda,\varepsilon),P_3(t,\lambda,\varepsilon)\bigr)$ in the entire interval $[0,t_f]$. Also, it should be noted that, for any $\lambda\in\Omega_\lambda$, the terminal-value problem (39)–(41) is a singularly perturbed one for a set of Riccati-type matrix differential equations. In what follows in this subsection, based on the Boundary Function Method (see, e.g., [64]), we construct and justify the zero-order asymptotic solution of this problem. Namely, we seek this asymptotic solution in the form
$$P_{j0}(t,\lambda,\varepsilon)=P_{j0}^o(t,\lambda)+P_{j0}^b(\tau,\lambda),\quad j=1,2,3,\qquad\tau=(t-t_f)/\varepsilon,$$
where the terms with the upper index “o” constitute the so-called outer solution, while the terms with the upper index “b” are the boundary correction terms in a left-hand neighborhood of $t=t_f$; $\tau\le0$ is a new independent variable, called the stretched time. For any $t\in[0,t_f)$, $\tau\to-\infty$ as $\varepsilon\to+0$. Equations and conditions for obtaining the outer solution and the boundary correction terms are derived by substituting the representation (42) into the terminal-value problem (39)–(41) instead of $P_j(t,\lambda,\varepsilon)$, $(j=1,2,3)$, and equating the coefficients of the same powers of ε on both sides of the resulting equations, separately the coefficients depending on t and on τ.

4.2.1. Obtaining the Boundary Layer Correction $P_{10}^b(\tau,\lambda)$

For this boundary layer correction, we have the equation
$$\frac{dP_{10}^b(\tau,\lambda)}{d\tau}=0,\quad\tau\le0,\ \lambda\in\Omega_\lambda.$$
As in the Boundary Function Method, we require that the boundary layer correction terms tend to zero as τ tends to $-\infty$. Thus, we require that
$$\lim_{\tau\to-\infty}P_{10}^b(\tau,\lambda)=0.$$
Moreover, we require that the limit (44) is uniform with respect to λ Ω λ .
From Equation (43), we obtain
$$P_{10}^b(\tau,\lambda)=C(\lambda),\quad\tau\le0,$$
where C ( λ ) is an arbitrary matrix-valued function of λ Ω λ .
Equation (45), along with the requirement of the fulfillment of the limit relation (44) uniformly in λ Ω λ , yields
$$P_{10}^b(\tau,\lambda)\equiv0,\quad\tau\le0,\ \lambda\in\Omega_\lambda.$$

4.2.2. Obtaining the Outer Solution Terms

The equations and conditions for these terms are the following for all t [ 0 , t f ] and λ Ω λ :
$$\frac{dP_{10}^o(t,\lambda)}{dt}=-P_{10}^o(t,\lambda)A_1(t,\lambda)-A_1^T(t,\lambda)P_{10}^o(t,\lambda)+P_{20}^o(t,\lambda)\bigl(P_{20}^o(t,\lambda)\bigr)^T-D_1(t,\lambda),\qquad P_{10}^o(t_f,\lambda)=H_1(\lambda),$$
$$-P_{10}^o(t,\lambda)A_2(t,\lambda)+P_{20}^o(t,\lambda)P_{30}^o(t,\lambda)=0,$$
$$\bigl(P_{30}^o(t,\lambda)\bigr)^2-D_2(t,\lambda)=0.$$
Remark 9. 
It is important to note that in the system (47)–(49), the unknown matrix-valued functions P 20 o ( t , λ ) and P 30 o ( t , λ ) are not subject to any terminal conditions. This occurs because in (47)–(49) these unknowns are subject to the algebraic (but not differential) equations.
Solving the algebraic Equation (49) and taking into account the positive definiteness of the matrix D 2 ( t , λ ) , we obtain
$$P_{30}^o(t,\lambda)=\bigl(D_2(t,\lambda)\bigr)^{1/2},\quad t\in[0,t_f],\ \lambda\in\Omega_\lambda,$$
where the superscript “ 1 / 2 ” denotes the unique positive definite square root of the corresponding positive definite matrix.
Remark 10. 
Due to Proposition 2, P 30 o ( t , λ ) is bounded for all ( t , λ ) [ 0 , t f ] × Ω λ . Moreover, due to Proposition 2 and the Implicit Function Theorem [65], the matrix-valued function P 30 o ( t , λ ) is continuously differentiable with respect to t [ 0 , t f ] uniformly in λ Ω λ , and d P 30 o ( t , λ ) / d t is bounded for all ( t , λ ) [ 0 , t f ] × Ω λ . In addition, since D 2 ( t , λ ) is continuous with respect to λ Ω λ uniformly in t [ 0 , t f ] , then P 30 o ( t , λ ) also is continuous with respect to λ Ω λ uniformly in t [ 0 , t f ] .
Solving Equation (48) with respect to $P_{20}^o(t,\lambda)$ and using (50), we have
$$P_{20}^o(t,\lambda)=P_{10}^o(t,\lambda)A_2(t,\lambda)\bigl(D_2(t,\lambda)\bigr)^{-1/2},\quad t\in[0,t_f],\ \lambda\in\Omega_\lambda,$$
where the superscript “ 1 / 2 ” denotes the inverse matrix for the unique positive definite square root of corresponding positive definite matrix.
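The unique positive definite square root appearing in (50) and its inverse appearing in (51) can be computed from the eigendecomposition of the symmetric positive definite matrix. The helper below is a generic numerical sketch, not tied to the paper's specific $D_2(t,\lambda)$.

```python
import numpy as np

def spd_sqrt(M):
    """Unique symmetric positive definite square root M^{1/2} of an SPD
    matrix, via the eigendecomposition M = V diag(mu) V'."""
    mu, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(mu)) @ V.T

def spd_inv_sqrt(M):
    """M^{-1/2}: the inverse of the unique SPD square root."""
    mu, V = np.linalg.eigh(M)
    return V @ np.diag(1.0 / np.sqrt(mu)) @ V.T
```

Uniqueness of the SPD square root is what makes (50) well defined; any other square root of $D_2$ would fail to be symmetric positive definite.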
The substitution of (51) into (47) yields the following terminal-value problem with respect to P 10 o ( t , λ ) for all λ Ω λ :
$$\frac{dP_{10}^o(t,\lambda)}{dt}=-P_{10}^o(t,\lambda)A_1(t,\lambda)-A_1^T(t,\lambda)P_{10}^o(t,\lambda)+P_{10}^o(t,\lambda)S_1^o(t,\lambda)P_{10}^o(t,\lambda)-D_1(t,\lambda),\quad t\in[0,t_f],\qquad P_{10}^o(t_f,\lambda)=H_1(\lambda),$$
where
$$S_1^o(t,\lambda)=A_2(t,\lambda)D_2^{-1}(t,\lambda)A_2^T(t,\lambda).$$
Remark 11. 
Since, for all t [ 0 , t f ] and all λ Ω λ , the matrices D 1 ( t , λ ) , H 1 ( λ ) are positive semi-definite and the matrix D 2 ( t , λ ) is positive definite (see Proposition 2), then for all λ Ω λ , the terminal-value problem (52) has the unique solution P 10 o ( t , λ ) in the entire interval [ 0 , t f ] . Moreover, due to Proposition 2, P 10 o ( t , λ ) and d P 10 o ( t , λ ) / d t are bounded for all ( t , λ ) [ 0 , t f ] × Ω λ . Therefore, due to Remark 10 and Equations (50) and (51), P 20 o ( t , λ ) and d P 20 o ( t , λ ) / d t are bounded for all ( t , λ ) [ 0 , t f ] × Ω λ . In addition, since A 1 ( t , λ ) , S 1 o ( t , λ ) , D 1 ( t , λ ) are continuous with respect to λ Ω λ uniformly in t [ 0 , t f ] and H 1 ( λ ) is continuous with respect to λ Ω λ then, by virtue of the results of [66] (Chapter 5), P 10 o ( t , λ ) also is continuous with respect to λ Ω λ uniformly in t [ 0 , t f ] . Therefore, due to Equations (50) and (51), Remark 10 and the continuity of A 2 ( t , λ ) with respect to λ Ω λ uniformly in t [ 0 , t f ] , P 20 o ( t , λ ) also is continuous with respect to λ Ω λ uniformly in t [ 0 , t f ] .

4.2.3. Control-Theoretic Interpretation of the Terminal-Value Problem (52)

For any given λ Ω λ , let us consider the optimal control problem with the dynamics described by the system
d x o ( t ) d t = A 1 ( t , λ ) x o ( t ) + A 2 ( t , λ ) u o ( t ) , x o ( 0 ) = w up 0 , t [ 0 , t f ] ,
where x o ( t ) E K n r is a state vector, u o ( t ) E r is a control; w up 0 E K n r is the upper block of the vector w 0 defined in (4).
The functional, to be minimized by u o ( t ) , has the form
J o ( u o ) = x o ( t f ) T H 1 ( λ ) x o ( t f ) + 0 t f x o ( t ) T D 1 ( t , λ ) x o ( t ) + u o ( t ) T D 2 ( t , λ ) u o ( t ) d t .
Let us introduce the set U o of all functions u o = u o ( x o , t , λ ) : E K n r × [ 0 , t f ] × Ω λ E r , which are measurable with respect to t [ 0 , t f ] for any fixed ( x o , λ ) E K n r × Ω λ and satisfy the local Lipschitz condition with respect to x o E K n r uniformly in ( t , λ ) [ 0 , t f ] × Ω λ .
Definition 3. 
By U o , we denote the subset of the set U o , such that the following conditions are valid:
(i) 
for any λ Ω λ , any u o ( x o , t , λ ) U o and any w up 0 E K n r , the initial-value problem (54) with u o ( t ) = u o ( x o , t , λ ) has the unique absolutely continuous solution x u o ( t ; x 0 , λ ) in the entire interval [ 0 , t f ] ;
(ii) 
u o x u o ( t ; x 0 , λ ) , t , λ L 2 [ 0 , t f ; E r ] .
The set U o defined in this way is called the set of all admissible state-feedback controls in the problem (54) and (55).
Based on the results of [67] (Section 5) and [1] (Section 4.3), we immediately have the following assertion.
Proposition 3. 
Let the assumptions AI-AV be satisfied. Then, for any λ Ω λ , the optimal state-feedback control u o = u o * ( x o , t , λ ) of the problem (54) and (55) is
u o * ( x o , t , λ ) = D 2 1 ( t , λ ) A 2 T ( t , λ ) P 10 o ( t , λ ) x o U o .
The optimal value of the functional in the problem (54) and (55) has the form
J o * ( x 0 , λ ) = J o u o * ( x o , t , λ ) = ( w up 0 ) T P 10 o ( 0 , λ ) w up 0 .

4.2.4. Obtaining the Boundary Layer Correction Terms P 20 b ( τ , λ ) and P 30 b ( τ , λ )

These terms are obtained as the solution of the terminal-value problem
d P 20 b ( τ , λ ) d τ = P 20 o ( t f , λ ) P 30 b ( τ , λ ) + P 20 b ( τ , λ ) P 30 o ( t f , λ ) + P 20 b ( τ , λ ) P 30 b ( τ , λ ) , d P 30 b ( τ , λ ) d τ = P 30 o ( t f , λ ) P 30 b ( τ , λ ) + P 30 b ( τ , λ ) P 30 o ( t f , λ ) + ( P 30 b ( τ , λ ) ) 2 , P 20 b ( 0 , λ ) = P 20 o ( t f , λ ) , P 30 b ( 0 , λ ) = P 30 o ( t f , λ ) ,
where τ 0 , λ Ω λ .
Substituting the expressions for P 30 o ( t , λ ) and P 20 o ( t , λ ) (see Equations (50) and (51)) into the terminal-value problem (56) and taking into account the terminal condition for P 10 o ( t , λ ) (see Equation (52)), we transform the aforementioned terminal-value problem as follows:
d P 20 b ( τ , λ ) d τ = P 20 b ( τ , λ ) D 2 ( t f , λ ) 1 / 2 + P 30 b ( τ , λ ) + H 1 ( λ ) A 2 ( t f , λ ) D 2 ( t f , λ ) 1 / 2 P 30 b ( τ , λ ) , P 20 b ( 0 , λ ) = H 1 ( λ ) A 2 ( t f , λ ) D 2 ( t f , λ ) 1 / 2 , τ 0 , λ Ω λ ,
d P 30 b ( τ , λ ) d τ = D 2 ( t f , λ ) 1 / 2 P 30 b ( τ , λ ) + P 30 b ( τ , λ ) D 2 ( t f , λ ) 1 / 2 + P 30 b ( τ , λ ) 2 , P 30 b ( 0 , λ ) = D 2 ( t f , λ ) 1 / 2 , τ 0 , λ Ω λ .
Based on the results of [62] (Section 4.5), we obtain the solution of the terminal-value problem (57) and (58) in the form
P 20 b ( τ , λ ) = 2 H 1 ( λ ) A 2 ( t f , λ ) D 2 ( t f , λ ) 1 / 2 exp 2 D 2 ( t f , λ ) 1 / 2 τ [ I r + exp 2 D 2 ( t f , λ ) 1 / 2 τ ] 1 , τ 0 , λ Ω λ ,
P 30 b ( τ , λ ) = 2 D 2 ( t f , λ ) 1 / 2 exp 2 D 2 ( t f , λ ) 1 / 2 τ [ I r + exp 2 D 2 ( t f , λ ) 1 / 2 τ ] 1 , τ 0 , λ Ω λ .
Due to Lemma 1 (see the inequalities in (17)) and Proposition 2 (see the expression for D 2 ( t , λ ) in (30)), the matrix-valued functions P 20 b ( τ , λ ) and P 30 b ( τ , λ ) are exponentially decaying for τ uniformly with respect to λ Ω λ , i.e., there exist scalar constants a > 0 and β > 0 independent of λ Ω λ such that P 20 b ( τ , λ ) and P 30 b ( τ , λ ) satisfy the inequalities
P 20 b ( τ , λ ) a exp ( β τ ) , P 30 b ( τ , λ ) a exp ( β τ ) , τ 0 , λ Ω λ .
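The closed form of P 30 b ( τ , λ ) can be verified numerically. The sketch below uses an illustrative positive definite matrix D 2 ( t f , λ ) (an assumption for the example) and checks the terminal condition P 30 b ( 0 , λ ) = − D 2 1 / 2 ( t f , λ ) , the boundary-layer differential equation (58) via a central finite difference, and the exponential decay for τ → − ∞ :

```python
import numpy as np
from scipy.linalg import sqrtm, expm

# Illustrative positive definite D_2(t_f, lambda) (an assumption for the example).
D2 = np.array([[4.0, 1.0], [1.0, 3.0]])
Dh = np.real(sqrtm(D2))  # unique positive definite square root D_2^{1/2}

def P30b(tau):
    # Closed form (60): P_30^b(tau) = -2 D^{1/2} e^{2 D^{1/2} tau} (I + e^{2 D^{1/2} tau})^{-1}
    E = expm(2.0 * Dh * tau)
    return -2.0 * Dh @ E @ np.linalg.inv(np.eye(2) + E)

# Terminal condition of (58): P_30^b(0) = -D_2^{1/2}.
cond_ok = np.allclose(P30b(0.0), -Dh)

# ODE of (58): dP/dtau = D^{1/2} P + P D^{1/2} + P^2, checked by a central difference.
tau, h = -1.0, 1e-6
lhs = (P30b(tau + h) - P30b(tau - h)) / (2.0 * h)
P = P30b(tau)
ode_ok = np.allclose(lhs, Dh @ P + P @ Dh + P @ P, atol=1e-6)

# Exponential decay, inequality (61), for tau -> -infinity.
decay_ok = np.linalg.norm(P30b(-10.0)) < 1e-3
print(cond_ok, ode_ok, decay_ok)
```

Since Dh and expm(2·Dh·τ) are both functions of the same symmetric matrix, they commute, which is what makes the matrix closed form work exactly as in the scalar case.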

4.2.5. Justification of the Asymptotic Solution to the Terminal-Value Problem (39)–(41)

Theorem 1. 
Let the assumptions AI-AV be fulfilled. Then, there exists a number ε 0 > 0 independent of λ Ω λ such that, for all ε ( 0 , ε 0 ] , the entries of the solution to the terminal-value problem (39)–(41) P 1 ( t , λ , ε ) , P 2 ( t , λ , ε ) , P 3 ( t , λ , ε ) satisfy the inequalities
P 1 ( t , λ , ε ) P 10 o ( t , λ ) c ε , P j ( t , λ , ε ) P j 0 ( t , λ , ε ) c ε , j = 2 , 3 , t [ 0 , t f ] , λ Ω λ ,
where P j 0 ( t , λ , ε ) , ( j = 2 , 3 ) are given in (42); c > 0 is some constant independent of ε and λ Ω λ .
Proof. 
The proof of the theorem is based on the results of [62] (Section 4.5, Lemma 4.2 and its proof), with proper changes associated with the dependence of the solution to the problem (39)–(41) not only on the parameter ε but also on the vector-valued parameter λ . These changes allow us to prove the uniformity of the inequalities in (62) with respect to λ Ω λ .
Let us make the transformation of the variables in the problem (39)–(41)
P 1 ( t , λ , ε ) = P 10 o ( t , λ ) + Δ 1 ( t , λ , ε ) , P j ( t , λ , ε ) = P j 0 ( t , λ , ε ) + Δ j ( t , λ , ε ) , j = 2 , 3 ,
where Δ j ( t , λ , ε ) , ( j = 1 , 2 , 3 ) are new unknown matrix-valued functions; Δ 1 T ( t , λ , ε ) = Δ 1 ( t , λ , ε ) , Δ 3 T ( t , λ , ε ) = Δ 3 ( t , λ , ε ) .
Using these new unknown matrix-valued functions, let us construct the following block-form matrix-valued function:
Δ ( t , λ , ε ) = Δ 1 ( t , λ , ε ) ε Δ 2 ( t , λ , ε ) ε Δ 2 T ( t , λ , ε ) ε Δ 3 ( t , λ , ε ) .
Now, let us substitute the representation (63) into the problem (39)–(41). Due to this substitution and the use of Equations (46)–(49) and (56), as well as the block representations of the matrices S ( ε ) , D ( t , λ ) , H ( λ ) , P ( t , λ , ε ) , A ( t , λ ) (see the Equations (27)–(29), (37) and (38)), we obtain after a routine matrix algebra the terminal-value problem for Δ ( t , λ , ε )
d Δ ( t , λ , ε ) d t = Δ ( t , λ , ε ) Θ ( t , λ , ε ) Θ T ( t , λ , ε ) Δ ( t , λ , ε ) + Δ ( t , λ , ε ) S ( ε ) Δ ( t , λ , ε ) Γ ( t , λ , ε ) , t [ 0 , t f ] , Δ ( t f , λ , ε ) = 0 ,
where λ Ω λ ;
Θ ( t , λ , ε ) = A ( t , λ ) S ( ε ) P 0 ( t , λ , ε ) ;
P 0 ( t , λ , ε ) = P 10 ( t , λ , ε ) ε P 20 ( t , λ , ε ) ε P 20 T ( t , λ , ε ) ε P 30 ( t , λ , ε ) ;
the matrix-valued function Γ ( t , λ , ε ) has the block form
Γ ( t , λ , ε ) = Γ 1 ( t , λ , ε ) Γ 2 ( t , λ , ε ) Γ 2 T ( t , λ , ε ) Γ 3 ( t , λ , ε ) ,
and
Γ 1 ( t , λ , ε ) = ε P 20 o ( t , λ ) + P 20 b ( τ , λ ) A 3 ( t , λ ) ε A 3 T ( t , λ ) P 20 o ( t , λ ) + P 20 b ( τ , λ ) T + P 20 o ( t , λ ) P 20 b ( τ , λ ) T + P 20 b ( τ , λ ) P 20 o ( t , λ ) T + P 20 b ( τ , λ ) P 20 b ( τ , λ ) T , Γ 2 ( t , λ , ε ) = ε d P 20 o ( t , λ ) d t ε P 20 o ( t , λ ) + P 20 b ( τ , λ ) A 4 ( t , λ ) ε A 1 T ( t , λ ) P 20 o ( t , λ ) + P 20 b ( τ , λ ) ε A 3 T ( t , λ ) P 30 o ( t , λ ) + P 30 b ( τ , λ ) + P 20 o ( t , λ ) P 20 o ( t f , λ ) P 30 b ( τ , λ ) + P 20 b ( τ , λ ) P 30 o ( t , λ ) P 30 o ( t f , λ ) , Γ 3 ( t , λ , ε ) = ε d P 30 o ( t , λ ) d t ε P 20 o ( t , λ ) + P 20 b ( τ , λ ) T A 2 ( t , λ ) ε P 30 o ( t , λ ) + P 30 b ( τ , λ ) A 4 ( t , λ ) ε A 2 T ( t , λ ) P 20 o ( t , λ ) + P 20 b ( τ , λ ) ε A 4 T ( t , λ ) P 30 o ( t , λ ) + P 30 b ( τ , λ ) + P 30 o ( t , λ ) P 30 o ( t f , λ ) P 30 b ( τ , λ ) + P 30 b ( τ , λ ) P 30 o ( t , λ ) P 30 o ( t f , λ ) .
Remark 12. 
Since the terminal-value problem (9) (and, therefore, each of the terminal-value problems (24) and (39)–(41)) has the unique solution in the entire interval [ 0 , t f ] for any λ Ω λ and any ε > 0 , then the terminal-value problem (65) also has the unique solution in the entire interval [ 0 , t f ] for any λ Ω λ and any ε > 0 .
Let us estimate the matrix-valued functions Γ j ( t , λ , ε ) , ( j = 1 , 2 , 3 ) . To accomplish this, we first estimate the last two addends in the expressions for Γ 2 ( t , λ , ε ) and Γ 3 ( t , λ , ε ) . Let us start with the addend P 20 o ( t , λ ) P 20 o ( t f , λ ) P 30 b ( τ , λ ) . Using Lagrange's formula ([68]) and the expression for the variable τ in Equation (42), we can rewrite this addend as
P 20 o ( t , λ ) P 20 o ( t f , λ ) P 30 b ( τ , λ ) = d P 20 o ( t ˜ , λ ) d t ( t t f ) P 30 b ( τ , λ ) = ε d P 20 o ( t ˜ , λ ) d t τ P 30 b ( τ , λ ) , t [ 0 , t f ] , λ Ω λ ,
where t ˜ ( t , t f ) , τ = ( t t f ) / ε , ε > 0 .
Due to the inequality for P 30 b ( τ , λ ) in (61), we directly obtain the existence of scalar constants 0 < a 1 < a and 0 < β 1 < β independent of λ Ω λ such that
τ P 30 b ( τ , λ ) a 1 exp ( β 1 τ ) , τ 0 , λ Ω λ .
Equation (69), along with the boundedness of d P 20 o ( t , λ ) / d t (see Remark 11) and the inequality (70), yield immediately the inequality
P 20 o ( t , λ ) P 20 o ( t f , λ ) P 30 b ( τ , λ ) α 1 ε exp ( β 1 τ ) , t [ 0 , t f ] , λ Ω λ ,
where τ = ( t t f ) / ε , ε > 0 , α 1 > 0 is some constant independent of ε and λ Ω λ .
Using the boundedness of d P 30 o ( t , λ ) / d t (see Remark 10) and the inequalities in (61), we obtain (quite similarly to the inequality (71)) the following inequalities:
P 20 b ( τ , λ ) P 30 o ( t , λ ) P 30 o ( t f , λ ) α 2 ε exp ( β 1 τ ) , t [ 0 , t f ] , λ Ω λ , P 30 o ( t , λ ) P 30 o ( t f , λ ) P 30 b ( τ , λ ) α 2 ε exp ( β 1 τ ) , t [ 0 , t f ] , λ Ω λ , P 30 b ( τ , λ ) P 30 o ( t , λ ) P 30 o ( t f , λ ) α 2 ε exp ( β 1 τ ) , t [ 0 , t f ] , λ Ω λ ,
where τ = ( t t f ) / ε , ε > 0 , α 2 > 0 is some constant independent of ε and λ Ω λ .
Now, using Equation (68), the inequalities in (61), and Remarks 10, 11, we directly obtain the following inequalities:
Γ 1 ( t , λ , ε ) b 1 [ ε + exp ( β τ ) ] , Γ l ( t , λ , ε ) b 1 ε 1 + exp ( β 1 τ ) , l = 2 , 3 , τ = ( t t f ) / ε , t [ 0 , t f ] , ε > 0 , λ Ω λ ,
where b 1 > 0 is some constant independent of ε and λ Ω λ ; β is the positive number introduced in (61); β 1 is the positive number introduced in (70).
By virtue of the results of [69], the problem (65) can be rewritten in the equivalent integral form
Δ ( t , λ , ε ) = t f t Φ T ( σ , t , λ , ε ) [ Δ ( σ , λ , ε ) S ( ε ) Δ ( σ , λ , ε ) Γ ( σ , λ , ε ) ] Φ ( σ , t , λ , ε ) d σ , t [ 0 , t f ] , λ Ω λ , ε > 0 ,
where, for any given t [ 0 , t f ] , λ Ω λ and ε > 0 , the K n × K n -matrix-valued function Φ ( σ , t , λ , ε ) is the unique solution of the problem
d Φ ( σ , t , λ , ε ) d σ = Θ ( σ , λ , ε ) Φ ( σ , t , λ , ε ) , Φ ( t , t , λ , ε ) = I K n , σ [ t , t f ] .
By Φ 1 ( σ , t , λ , ε ) , Φ 2 ( σ , t , λ , ε ) , Φ 3 ( σ , t , λ , ε ) and Φ 4 ( σ , t , λ , ε ) , we denote the upper left-hand, upper right-hand, lower left-hand and lower right-hand blocks of the matrix Φ ( σ , t , λ , ε ) of the dimensions ( K n r ) × ( K n r ) , ( K n r ) × r , r × ( K n r ) and r × r , respectively, i.e.,
Φ ( σ , t , λ , ε ) = Φ 1 ( σ , t , λ , ε ) Φ 2 ( σ , t , λ , ε ) Φ 3 ( σ , t , λ , ε ) Φ 4 ( σ , t , λ , ε ) .
Based on the results of [30] (Lemma 4.2) and taking into account Proposition 2, the Equation (50), the inequalities in (61) and Remarks 10 and 11, we immediately have the following estimates of these blocks for all 0 t σ t f and all λ Ω λ :
Φ l ( σ , t , λ , ε ) b 2 , l = 1 , 3 , Φ 2 ( σ , t , λ , ε ) b 2 ε , Φ 4 ( σ , t , λ , ε ) b 2 ε + exp 0.5 β ( σ t ) / ε , ε ( 0 , ε 1 ] ,
where ε 1 > 0 is some sufficiently small number; b 2 > 0 is some constant independent of ε and λ Ω λ .
Now, we are going to apply the method of successive approximations to the Equation (73). For this purpose, we consider the sequence of the matrix-valued functions Δ i ( t , λ , ε ) i = 0 + given as:
Δ i + 1 ( t , λ , ε ) = t f t Φ T ( σ , t , λ , ε ) [ Δ i ( σ , λ , ε ) S ( ε ) Δ i ( σ , λ , ε ) Γ ( σ , λ , ε ) ] Φ ( σ , t , λ , ε ) d σ , i = 0 , 1 , , t [ 0 , t f ] , λ Ω λ , ε ( 0 , ε 1 ] ,
where the initial guess Δ 0 ( t , λ , ε ) = 0 , t [ 0 , t f ] , λ Ω λ , ε ( 0 , ε 1 ] .
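The convergence mechanism of this iteration can be illustrated on a scalar analogue. In the sketch below, the matrix integral equation is replaced by the scalar equation δ ( t ) = ∫ t f t [ δ 2 ( s ) − γ ( s ) ] d s , where the forcing term γ is a placeholder (an assumption for the example) standing in for the matrix-valued Γ ( t , λ , ε ) ; the Picard iterates, started from the zero initial guess as in the proof, converge geometrically:

```python
import numpy as np

# Scalar analogue of the successive approximations (76): solve
#   delta(t) = integral_{t_f}^{t} [ delta(s)^2 - gamma(s) ] ds
# by Picard iteration; gamma is a placeholder forcing term (an assumption).
tf, N = 1.0, 2001
t = np.linspace(0.0, tf, N)
dt = t[1] - t[0]
gamma = 0.1 * np.cos(t)

def picard_step(delta):
    integrand = delta**2 - gamma
    # cumulative trapezoidal integral from 0 to each t
    cum = np.concatenate(([0.0],
                          np.cumsum((integrand[1:] + integrand[:-1]) * dt / 2.0)))
    return cum - cum[-1]  # = integral_{t_f}^{t} integrand ds

delta = np.zeros(N)  # zero initial guess, as in the proof
for _ in range(30):
    new = picard_step(delta)
    err = np.max(np.abs(new - delta))
    delta = new
print(err < 1e-12)  # the iterates converge uniformly on [0, t_f]
```

Because the forcing term is small, the integral operator is a contraction on [ 0 , t f ] , which mirrors the role of the uniform (in λ and ε ) bounds (72) and (75) in the actual proof.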
Since the matrices S ( ε ) , Γ ( t , λ , ε ) and Δ 0 ( t , λ , ε ) are symmetric, the matrices Δ i ( σ , λ , ε ) , ( i = 1 , 2 , ) are also symmetric. Let us represent these matrices in the following block form:
Δ i ( σ , λ , ε ) = Δ i , 1 ( t , λ , ε ) ε Δ i , 2 ( t , λ , ε ) ε Δ i , 2 T ( t , λ , ε ) ε Δ i , 3 ( t , λ , ε ) , i = 1 , 2 , ,
where the dimensions of the blocks in each of these matrices are the same as the dimensions of the corresponding blocks in (64).
Using the block representations of the matrices S ( ε ) , Γ ( t , λ , ε ) , Φ ( σ , t , λ , ε ) and Δ i ( t , λ , ε ) (see Equations (27), (67), (74) and (77)), as well as the inequalities (72) and (75), we obtain the existence of a positive number ε 0 ε 1 such that, for any ε ( 0 , ε 0 ] and any λ Ω λ , the sequence Δ i ( t , λ , ε ) i = 0 + converges in the linear space of all K n × K n -matrix-valued functions continuous in the interval [ 0 , t f ] . Since the inequalities (72) and (75) are uniform with respect to λ Ω λ and ε ( 0 , ε 0 ] , this convergence is also uniform with respect to λ Ω λ and ε ( 0 , ε 0 ] . Moreover, the following inequalities are fulfilled:
Δ i , j ( t , λ , ε ) c ε , i = 1 , 2 , j = 1 , 2 , 3 , t [ 0 , t f ] , λ Ω λ , ε ( 0 , ε 0 ] ,
where c > 0 is some constant independent of λ , ε , i and j.
Let
Δ * ( t , λ , ε ) = lim i + Δ i ( t , λ , ε ) , t [ 0 , t f ] , λ Ω λ , ε ( 0 , ε 0 ] .
Comparison of (73) and (76) directly yields that Δ * ( t , λ , ε ) is the solution of the integral Equation (73) and, therefore, of the terminal-value problem (65) in the entire interval [ 0 , t f ] . Moreover, this solution has a block form similar to (64) and satisfies the inequalities
Δ j * ( t , λ , ε ) c ε , j = 1 , 2 , 3 , t [ 0 , t f ] , λ Ω λ , ε ( 0 , ε 0 ] .
Taking into account the uniqueness of the solution to the problem (65) (see Remark 12), we have that
Δ j ( t , λ , ε ) = Δ j * ( t , λ , ε ) , j = 1 , 2 , 3 , t [ 0 , t f ] , λ Ω λ , ε ( 0 , ε 0 ] .
Using this equation, as well as Equation (63) and the inequalities in (78), we directly obtain the inequalities in (62). This completes the proof of the theorem. □

4.3. Asymptotic Solution of the Initial-Value Problem (33)

First of all, let us note that the matrix P ( t ) , appearing in the right-hand side of the differential Equation in (33), is the unique solution of the terminal-value problem (24). Thus, P ( t ) = P ( t , λ , ε ) , which has the block form (37). Hence, calculating the product S ( ε ) P ( t ) appearing in the right-hand side of the differential Equation in (33), and using Equation (27), we obtain for t [ 0 , t f ] , λ Ω λ , ε > 0
S ( ε ) P ( t ) = S ( ε ) P ( t , λ , ε ) = O ( K n r ) × ( K n r ) O ( K n r ) × r ( 1 / ε ) P 2 T ( t , λ , ε ) ( 1 / ε ) P 3 ( t , λ , ε ) .
Due to Equation (79), the right-hand side of the differential Equation in (33) has a singularity with respect to ε at ε = 0 , meaning that the initial-value problem (33) is singularly perturbed. However, this problem is not in an explicit singular perturbation form. In order to transform the problem (33) to the explicit singular perturbation form, we look for its solution z ( t ) = z ( t , λ , ε ) in the form of the block vector
z ( t , λ , ε ) = col x ( t , λ , ε ) , y ( t , λ , ε ) , t [ 0 , t f ] , λ Ω λ , ε > 0 ,
where x ( t , λ , ε ) E K n r , y ( t , λ , ε ) E r .
Also, let us partition the vector z 0 ( λ ) as follows:
z 0 ( λ ) = col x 0 ( λ ) , y 0 ( λ ) , λ Ω λ ,
where x 0 ( λ ) E K n r , y 0 ( λ ) E r .
Now, substituting the block forms of the matrices A ( t , λ , ε ) , S ( ε ) P ( t , λ , ε ) and the block forms of the vectors z ( t , λ , ε ) , z 0 ( λ ) (see Equations (38), (79)–(81)) into the problem (33), we obtain after a routine matrix-vector algebra the following equivalent initial-value problem in the time interval [ 0 , t f ] :
d x ( t , λ , ε ) d t = A 1 ( t , λ ) x ( t , λ , ε ) + A 2 ( t , λ ) y ( t , λ , ε ) , ε d y ( t , λ , ε ) d t = ε A 3 ( t , λ ) P 2 T ( t , λ , ε ) x ( t , λ , ε ) + ε A 4 ( t , λ ) P 3 ( t , λ , ε ) y ( t , λ , ε ) , x ( 0 , λ , ε ) = x 0 ( λ ) , y ( 0 , λ , ε ) = y 0 ( λ ) ,
where λ Ω λ , ε > 0 .
In the remainder of this subsection, based on the Boundary Function Method (see, e.g., [64]), we construct and justify the zero-order asymptotic solution of the singularly perturbed initial-value problem (82). Taking into account the zero-order asymptotic solution to the terminal-value problem (39)–(41) (see Equation (42)), we look for the zero-order asymptotic solution of the problem (82) in the form
x 0 ( t , λ , ε ) = x 0 o ( t , λ ) + x 0 b , 1 ( θ , λ ) + x 0 b , 2 ( τ , λ ) , y 0 ( t , λ , ε ) = y 0 o ( t , λ ) + y 0 b , 1 ( θ , λ ) + y 0 b , 2 ( τ , λ ) , θ = t / ε , τ = ( t t f ) / ε ,
where the terms with the upper index “o” constitute the outer solution; the terms with the upper index “ b , 1 ” are the boundary correction terms in a right-hand neighbourhood of t = 0 ; the terms with the upper index “ b , 2 ” are the boundary correction terms in a left-hand neighbourhood of t = t f ; θ 0 and τ 0 are new independent variables. For any t ( 0 , t f ] , θ + as ε + 0 . For any t [ 0 , t f ) , τ as ε + 0 . Equations and conditions for obtaining the outer solution and the boundary correction terms of each type are derived by substituting the expressions for x 0 ( t , λ , ε ) , y 0 ( t , λ , ε ) , P 20 ( t , λ , ε ) and P 30 ( t , λ , ε ) (see Equations (42) and (83)) into the initial-value problem (82) instead of x ( t , λ , ε ) , y ( t , λ , ε ) , P 2 ( t , λ , ε ) and P 3 ( t , λ , ε ) , respectively, and equating the coefficients for the same power of ε on both sides of the resulting equations, separately the coefficients depending on t, on θ and on τ .

4.3.1. Obtaining the Boundary Layer Corrections x 0 b , 1 ( θ , λ ) and x 0 b , 2 ( τ , λ )

For these boundary layer corrections, we have the equations
d x 0 b , 1 ( θ , λ ) d θ = 0 , θ 0 , λ Ω λ ,
d x 0 b , 2 ( τ , λ ) d τ = 0 , τ 0 , λ Ω λ .
Due to the Boundary Function Method, we require that the boundary layer correction terms in a right-hand neighborhood of t = 0 tend to zero for θ tending to + , while the boundary layer correction terms in a left-hand neighborhood of t = t f tend to zero for τ tending to . Thus, we require that
lim θ + x 0 b , 1 ( θ , λ ) = 0 , lim τ x 0 b , 2 ( τ , λ ) = 0 .
Moreover, we require that these limits are uniform with respect to λ Ω λ .
From Equations (84)–(86) we obtain (quite similarly to Equation (46) in Section 4.2.1)
x 0 b , 1 ( θ , λ ) 0 , θ 0 , λ Ω λ ,
x 0 b , 2 ( τ , λ ) 0 , τ 0 , λ Ω λ .

4.3.2. Obtaining the Outer Solution Terms

The equations and conditions for these terms have the following form for all t [ 0 , t f ] and λ Ω λ :
d x 0 o ( t , λ ) d t = A 1 ( t , λ ) x 0 o ( t , λ ) + A 2 ( t , λ ) y 0 o ( t , λ ) , x 0 o ( 0 , λ ) = x 0 ( λ ) , P 20 o ( t , λ ) T x 0 o ( t , λ ) + P 30 o ( t , λ ) y 0 o ( t , λ ) = 0 .
Remark 13. 
As in Remark 9, let us note that in the system (89), the unknown vector-valued function y 0 o ( t , λ ) is not subject to any initial condition. This occurs because in (89) this unknown is subject to an algebraic (not a differential) equation.
Solving the algebraic equation of the system (89) with respect to y 0 o ( t , λ ) and using Equations (50) and (51), we obtain
y 0 o ( t , λ ) = D 2 1 ( t , λ ) A 2 T ( t , λ ) P 10 o ( t , λ ) x 0 o ( t , λ ) , t [ 0 , t f ] , λ Ω λ .
Substituting (90) into the differential equation of the system (89) and using the Equation (53) yield the following initial-value problem with respect to x 0 o ( t , λ ) for all λ Ω λ :
d x 0 o ( t , λ ) d t = A 1 ( t , λ ) S 1 o ( t , λ ) P 10 o ( t , λ ) x 0 o ( t , λ ) , x 0 o ( 0 , λ ) = x 0 ( λ ) , t [ 0 , t f ] .
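The initial-value problem (91) for the outer solution term x 0 o can be solved numerically once P 10 o is available. The sketch below (with the same kind of illustrative placeholder matrices as before, not data from this paper) first integrates the Riccati problem (52) backward with dense output, and then propagates the closed-loop slow dynamics forward:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative placeholder data (not from the paper).
A1 = np.array([[0.0, 1.0], [-1.0, -0.5]])
A2 = np.array([[0.0], [1.0]])
D1, D2, H1 = np.eye(2), np.array([[2.0]]), 0.5 * np.eye(2)
S1 = A2 @ np.linalg.inv(D2) @ A2.T
tf = 1.0

def riccati(t, p):
    P = p.reshape(2, 2)
    return (-(P @ A1 + A1.T @ P - P @ S1 @ P + D1)).ravel()

# Step 1: solve (52) backward, keeping a dense interpolant of P_10^o(t).
ric = solve_ivp(riccati, (tf, 0.0), H1.ravel(), dense_output=True, rtol=1e-8)

# Step 2: propagate the slow dynamics (91): dx/dt = [A_1 - S_1^o P_10^o(t)] x.
def slow(t, x):
    P = ric.sol(t).reshape(2, 2)
    return (A1 - S1 @ P) @ x

x0 = np.array([1.0, 0.0])
out = solve_ivp(slow, (0.0, tf), x0, rtol=1e-8)
x_tf = out.y[:, -1]  # x_0^o(t_f), later used by the boundary layer term
print(np.isfinite(x_tf).all())
```

The value x 0 o ( t f , λ ) obtained this way is exactly the quantity that feeds the left-hand boundary layer correction in Section 4.3.4.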
Remark 14. 
Due to Proposition 2 and Remark 11, x 0 o ( t , λ ) and d x 0 o ( t , λ ) / d t are bounded for all ( t , λ ) [ 0 , t f ] × Ω λ . Therefore, due to Equation (90), y 0 o ( t , λ ) and d y 0 o ( t , λ ) / d t are bounded for all ( t , λ ) [ 0 , t f ] × Ω λ . In addition, since A 1 ( t , λ ) , S 1 o ( t , λ ) , P 10 o ( t , λ ) are continuous with respect to λ Ω λ uniformly in t [ 0 , t f ] , then, by virtue of the results of [66] (Chapter 5), x 0 o ( t , λ ) is also continuous with respect to λ Ω λ uniformly in t [ 0 , t f ] . Therefore, due to Equation (90), Remark 10 and the continuity of A 2 ( t , λ ) with respect to λ Ω λ uniformly in t [ 0 , t f ] , y 0 o ( t , λ ) is also continuous with respect to λ Ω λ uniformly in t [ 0 , t f ] .

4.3.3. Obtaining the Boundary Layer Correction Term y 0 b , 1 ( θ , λ )

This term is obtained as the solution of the initial-value problem
d y 0 b , 1 ( θ , λ ) d θ = P 30 o ( 0 , λ ) y 0 b , 1 ( θ , λ ) , θ 0 , λ Ω λ , y 0 b , 1 ( 0 ) = y 0 ( λ ) y 0 o ( 0 , λ ) = y 0 ( λ ) + D 2 1 ( 0 , λ ) A 2 T ( 0 , λ ) P 10 o ( 0 , λ ) x 0 ( λ ) , λ Ω λ ,
where, due to Equation (50), P 30 o ( 0 , λ ) = D 2 ( 0 , λ ) 1 / 2 .
Solving the problem (92), we directly have
y 0 b , 1 ( θ , λ ) = y 0 ( λ ) + D 2 1 ( 0 , λ ) A 2 T ( 0 , λ ) P 10 o ( 0 , λ ) x 0 ( λ ) exp D 2 ( 0 , λ ) 1 / 2 θ , θ 0 , λ Ω λ .
Since all the matrices and vectors appearing in the right-hand side of Equation (93) are bounded for all λ Ω λ , and the matrix D 2 ( 0 , λ ) 1 / 2 is positive definite and continuous for all λ Ω λ (see Remark 10), the vector-valued function y 0 b , 1 ( θ , λ ) , given by Equation (93), satisfies the inequality
y 0 b , 1 ( θ , λ ) a 2 exp ( β 2 θ ) , θ 0 , λ Ω λ ,
where a 2 > 0 and β 2 > 0 are some constants independent of λ .
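The boundary-layer behavior (93) and the decay estimate (94) can be checked numerically. In the sketch below, D 2 ( 0 , λ ) and the initial mismatch vector are illustrative placeholders (assumptions for the example); the decay rate β 2 can be taken as the smallest eigenvalue of D 2 1 / 2 ( 0 , λ ) :

```python
import numpy as np
from scipy.linalg import sqrtm, expm

# Illustrative placeholders: D_2(0, lambda) and the initial mismatch vector.
D2 = np.array([[3.0, 0.5], [0.5, 2.0]])
Dh = np.real(sqrtm(D2))    # D_2^{1/2}(0, lambda), positive definite
v = np.array([1.0, -2.0])  # stands in for y^0 - y_0^o(0, lambda)

def y0b1(theta):
    # Closed form (93): y_0^{b,1}(theta) = exp(-D_2^{1/2} theta) v
    return expm(-Dh * theta) @ v

# ODE of (92): dy/dtheta = P_30^o(0, lambda) y = -D_2^{1/2} y (central difference).
th, h = 0.7, 1e-6
lhs = (y0b1(th + h) - y0b1(th - h)) / (2.0 * h)
ode_ok = np.allclose(lhs, -Dh @ y0b1(th), atol=1e-6)

# Decay estimate (94) with a_2 = ||v|| and beta_2 = lambda_min(D_2^{1/2}).
beta2 = np.min(np.linalg.eigvalsh(Dh))
thetas = np.linspace(0.0, 5.0, 50)
decay_ok = all(np.linalg.norm(y0b1(s)) <= np.linalg.norm(v) * np.exp(-beta2 * s) + 1e-12
               for s in thetas)
print(ode_ok, decay_ok)
```

For a symmetric positive definite D 2 1 / 2 , the operator norm of exp ( − D 2 1 / 2 θ ) is exp ( − λ min θ ) , which is why this choice of β 2 works.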

4.3.4. Obtaining the Boundary Layer Correction Term y 0 b , 2 ( τ , λ )

For this term, we have the equation
d y 0 b , 2 ( τ , λ ) d τ = P 30 o ( t f , λ ) + P 30 b ( τ , λ ) y 0 b , 2 ( τ , λ ) P 20 b ( τ , λ ) T x 0 o ( t f , λ ) P 30 b ( τ , λ ) y 0 o ( t f , λ ) , τ 0 , λ Ω λ ,
where, due to Equation (50), P 30 o ( t f , λ ) = D 2 ( t f , λ ) 1 / 2 ; P 20 b ( τ , λ ) and P 30 b ( τ , λ ) are given by Equations (59) and (60), respectively; x 0 o ( t , λ ) is the unique solution of the initial-value problem (91), while y 0 o ( t , λ ) is given by Equation (90).
By virtue of the results of [24], the fundamental matrix of the homogeneous equation, corresponding to Equation (95), is the following:
Y 0 b , 2 ( τ , σ , λ ) = Ψ ( τ , λ ) Ψ 1 ( σ , λ ) , < τ σ 0 , λ Ω λ ,
where
Ψ ( τ , λ ) = exp D 2 ( t f , λ ) 1 / 2 τ + exp D 2 ( t f , λ ) 1 / 2 τ , τ 0 , λ Ω λ .
Since, for all λ Ω λ , the matrix D 2 ( t f , λ ) 1 / 2 is positive definite and the matrix-valued function D 2 ( t f , λ ) 1 / 2 is continuous, then
lim τ Ψ ( τ , λ ) = + , lim τ Ψ 1 ( τ , λ ) = 0 ,
and both limits are uniform with respect to λ Ω λ .
Solving Equation (95) with a given initial value y 0 b , 2 ( 0 , λ ) of y 0 b , 2 ( τ , λ ) and using the form (96) and (97) of the corresponding fundamental matrix, we directly have
y 0 b , 2 ( τ , λ ) = Ψ ( τ , λ ) [ 1 2 y 0 b , 2 ( 0 , λ ) 0 τ Ψ 1 ( σ , λ ) P 20 b ( σ , λ ) T d σ x 0 o ( t f , λ ) 0 τ Ψ 1 ( σ , λ ) P 30 b ( σ , λ ) d σ y 0 o ( t f , λ ) ] , τ 0 , λ Ω λ .
This equation can be rewritten as:
Ψ 1 ( τ , λ ) y 0 b , 2 ( τ , λ ) = 1 2 y 0 b , 2 ( 0 , λ ) 0 τ Ψ 1 ( σ , λ ) P 20 b ( σ , λ ) T d σ x 0 o ( t f , λ ) 0 τ Ψ 1 ( σ , λ ) P 30 b ( σ , λ ) d σ y 0 o ( t f , λ ) , τ 0 , λ Ω λ .
Applying to Equation (100) the second limit relation in Equation (98), as well as the aforementioned requirement that the boundary layer correction terms in a left-hand neighborhood of t = t f tend to zero for τ , we immediately have
y 0 b , 2 ( 0 , λ ) = 2 [ 0 Ψ 1 ( σ , λ ) P 20 b ( σ , λ ) T d σ x 0 o ( t f , λ ) + 0 Ψ 1 ( σ , λ ) P 30 b ( σ , λ ) d σ y 0 o ( t f , λ ) ] , λ Ω λ .
Due to the inequalities in (61) and the second limit relation in (98), each of the integrals in the right-hand side of the equality in (101) converges and this convergence is uniform with respect to λ Ω λ .
Substitution of (101) into Equation (99) yields after a routine rearrangement
y 0 b , 2 ( τ , λ ) = τ Ψ ( τ , λ ) Ψ 1 ( σ , λ ) P 20 b ( σ , λ ) T d σ x 0 o ( t f , λ ) + τ Ψ ( τ , λ ) Ψ 1 ( σ , λ ) P 30 b ( σ , λ ) d σ y 0 o ( t f , λ ) , τ 0 , λ Ω λ .
Let us estimate y 0 b , 2 ( τ , λ ) . To accomplish this, first, let us estimate the product Ψ ( τ , λ ) Ψ 1 ( σ , λ ) for < σ τ 0 and λ Ω λ . Such an estimation directly follows from Equation (97), as well as from the positive definiteness and boundedness of D 2 ( t f , λ ) 1 / 2 uniform with respect to λ Ω λ . Thus, we have
Ψ ( τ , λ ) Ψ 1 ( σ , λ ) a 3 exp β 3 ( σ τ ) , < σ τ 0 , λ Ω λ ,
where a 3 > 0 and β 3 > 0 are some constants independent of λ Ω λ .
Now, using the inequalities in (61), the inequality (103) and Remark 14, we directly obtain the following estimate of y 0 b , 2 ( τ , λ ) given by (102):
y 0 b , 2 ( τ , λ ) a 4 exp ( β 4 τ ) , τ 0 , λ Ω λ ,
where a 4 > 0 and β 4 > 0 are some constants independent of λ Ω λ .

4.3.5. Justification of the Asymptotic Solution to the Initial-Value Problem (82)

Theorem 2. 
Let the assumptions AI-AV be fulfilled. Then, there exists a number ε ˜ 0 ( 0 , ε 0 ] independent of λ Ω λ such that, for all ε ( 0 , ε ˜ 0 ] , the entries of the solution to the initial-value problem (82) x ( t , λ , ε ) , y ( t , λ , ε ) satisfy the inequalities
x ( t , λ , ε ) x 0 o ( t , λ ) c ˜ 1 ε , t [ 0 , t f ] , λ Ω λ , y ( t , λ , ε ) y 0 ( t , λ , ε ) c ˜ 1 ε , t [ 0 , t f ] , λ Ω λ ,
where y 0 ( t , λ , ε ) is given in (83); c ˜ 1 > 0 is some constant independent of ε and λ Ω λ ; ε 0 > 0 is the number introduced in Theorem 1.
Proof. 
Let us make the transformation of the variables in the problem (82)
x ( t , λ , ε ) = x 0 o ( t , λ ) + δ x ( t , λ , ε ) , y ( t , λ , ε ) = y 0 ( t , λ , ε ) + δ y ( t , λ , ε ) ,
where δ x ( t , λ , ε ) and δ y ( t , λ , ε ) are new unknown vector-valued functions.
Substitution of (106) into the problem (82), and use of the Equations (87)–(89), (92), (95) and (102) and Equations (42) and (63) yield after a routine algebra the following initial-value problem for the unknowns δ x ( t , λ , ε ) and δ y ( t , λ , ε ) in the time interval [ 0 , t f ] :
d δ x ( t , λ , ε ) d t = A 1 ( t , λ ) δ x ( t , λ , ε ) + A 2 ( t , λ ) δ y ( t , λ , ε ) + γ x ( t , λ , ε ) , δ x ( 0 , λ , ε ) = 0 , ε d δ y ( t , λ , ε ) d t = ε A 3 ( t , λ ) P 2 T ( t , λ , ε ) δ x ( t , λ , ε ) + ε A 4 ( t , λ ) P 3 ( t , λ , ε ) δ y ( t , λ , ε ) + γ y ( t , λ , ε ) , δ y ( 0 , λ , ε ) = φ y ( λ , ε ) ,
where λ Ω λ ,
γ x ( t , λ , ε ) = A 2 ( t , λ ) y 0 b , 1 ( θ , λ ) + y 0 b , 2 ( τ , λ ) , γ y ( t , λ , ε ) = P 20 b ( τ , λ ) T x 0 o ( t , λ ) x 0 o ( t f , λ ) P 30 b ( τ , λ ) y 0 o ( t , λ ) y 0 o ( t f , λ ) P 30 b ( τ , λ ) y 0 b , 1 ( θ , λ ) P 30 o ( t , λ ) P 30 o ( 0 , λ ) y 0 b , 1 ( θ , λ ) + ε A 3 ( t , λ ) x 0 o ( t , λ ) + ε A 4 ( t , λ ) y 0 o ( t , λ ) + y 0 b , 1 ( θ , λ ) + y 0 b , 2 ( τ , λ ) Δ 2 ( t , λ , ε ) T x 0 o ( t , λ ) Δ 3 ( t , λ , ε ) y 0 o ( t , λ ) + y 0 b , 1 ( θ , λ ) + y 0 b , 2 ( τ , λ ) , φ y ( λ , ε ) = y 0 b , 2 ( τ 0 , λ ) = τ 0 Ψ ( τ 0 , λ ) Ψ 1 ( σ , λ ) P 20 b ( σ , λ ) T d σ x 0 o ( t f , λ ) + τ 0 Ψ ( τ 0 , λ ) Ψ 1 ( σ , λ ) P 30 b ( σ , λ ) d σ y 0 o ( t f , λ ) , τ 0 = t f / ε .
Let us estimate the vector-valued functions γ x ( t , λ , ε ) , γ y ( t , λ , ε ) and φ y ( λ , ε ) . Using the boundedness of the matrix-valued function A 2 ( t , λ ) for all ( t , λ ) [ 0 , t f ] × Ω λ (see Proposition 2), as well as the inequalities (94) and (104), we directly have
γ x ( t , λ , ε ) b x exp ( β 2 θ ) + exp ( β 4 τ ) , θ = t / ε , τ = ( t t f ) / ε , t [ 0 , t f ] , ε > 0 , λ Ω λ ,
where b x > 0 is some constant independent of ε and λ Ω λ ; β 2 and β 4 are positive constants introduced in (94) and (104), respectively.
To estimate the vector-valued function γ y ( t , λ , ε ) , we should estimate each of its addends. Using the boundedness of d P 30 o ( t , λ ) / d t , d x 0 o ( t , λ ) / d t , d y 0 o ( t , λ ) / d t for all ( t , λ ) [ 0 , t f ] × Ω λ (see Remarks 10, 14) and the inequalities (61) and (94), we obtain (quite similarly to the inequality (71)) the following inequalities:
P 20 b ( τ , λ ) T x 0 o ( t , λ ) x 0 o ( t f , λ ) b y , 1 ε exp ( β 1 τ ) , t [ 0 , t f ] , λ Ω λ , P 30 b ( τ , λ ) y 0 o ( t , λ ) y 0 o ( t f , λ ) b y , 1 ε exp ( β 1 τ ) , t [ 0 , t f ] , λ Ω λ , P 30 o ( t , λ ) P 30 o ( 0 , λ ) y 0 b , 1 ( θ , λ ) b y , 1 ε exp ( β y , 1 θ ) , t [ 0 , t f ] , λ Ω λ ,
where τ = ( t t f ) / ε , θ = t / ε , ε > 0 ; b y , 1 > 0 is some constant independent of ε and λ Ω λ ; β 1 > 0 is the constant introduced in the inequality (70); 0 < β y , 1 < β 2 is some constant independent of ε and λ Ω λ ; β 2 > 0 is the constant introduced in (94).
Furthermore, using the second inequality in (61) and the inequality (94), we have
P 30 b ( τ , λ ) y 0 b , 1 ( θ , λ ) b y , 2 exp ( β y , 2 t f / ε ) ,
where b y , 2 = a a 2 , β y , 2 = min { β , β 2 } , ε > 0 .
Finally, using the boundedness of the matrix-valued functions A 3 ( t , λ ) and A 4 ( t , λ ) for all ( t , λ ) [ 0 , t f ] × Ω λ (see Proposition 2), the boundedness of the vector-valued functions x 0 o ( t , λ ) and y 0 o ( t , λ ) for all ( t , λ ) [ 0 , t f ] × Ω λ (see Remark 14), inequalities (94) and (104), as well as using Theorem 1 and Equation (63), yield the inequalities
ε A 3 ( t , λ ) x 0 o ( t , λ ) b y , 3 ε , t [ 0 , t f ] , λ Ω λ , ε > 0 , ε A 4 ( t , λ ) y 0 o ( t , λ ) + y 0 b , 1 ( θ , λ ) + y 0 b , 2 ( τ , λ ) b y , 3 ε , t [ 0 , t f ] , λ Ω λ , ε > 0 ,
Δ 2 ( t , λ , ε ) T x 0 o ( t , λ ) b y , 3 ε , t [ 0 , t f ] , λ Ω λ , ε > 0 , Δ 3 ( t , λ , ε ) y 0 o ( t , λ ) + y 0 b , 1 ( θ , λ ) + y 0 b , 2 ( τ , λ ) b y , 3 ε , t [ 0 , t f ] , λ Ω λ , ε > 0 ,
where b y , 3 > 0 is some constant independent of ε and λ Ω λ .
The inequalities (110)–(112) directly yield the estimate of γ y ( t , λ , ε )
γ y ( t , λ , ε ) b y ε , t [ 0 , t f ] , λ Ω λ , ε > 0 ,
where b y > 0 is some constant independent of ε and λ Ω λ .
We proceed to the estimate of φ y ( λ , ε ) . Using the inequalities (61) and (103), we obtain the following chain of inequalities and equality:
φ y ( λ , ε ) τ 0 Ψ ( τ 0 , λ ) Ψ 1 ( σ , λ ) P 20 b ( σ , λ ) T d σ x 0 o ( t f , λ ) + τ 0 Ψ ( τ 0 , λ ) Ψ 1 ( σ , λ ) P 30 b ( σ , λ ) d σ y 0 o ( t f , λ ) τ 0 Ψ ( τ 0 , λ ) Ψ 1 ( σ , λ ) P 20 b ( σ , λ ) T d σ x 0 o ( t f , λ ) + τ 0 Ψ ( τ 0 , λ ) Ψ 1 ( σ , λ ) P 30 b ( σ , λ ) d σ y 0 o ( t f , λ ) a a 3 x 0 o ( t f , λ ) + y 0 o ( t f , λ ) τ 0 exp ( β σ ) d σ = a a 3 β x 0 o ( t f , λ ) + y 0 o ( t f , λ ) exp ( β τ 0 ) , λ Ω λ , ε > 0 .
This chain of the inequalities and the equality, along with the expression for τ 0 (see Equation (108)) and the boundedness of the vector-valued functions x 0 o ( t , λ ) , y 0 o ( t , λ ) for all ( t , λ ) [ 0 , t f ] × Ω λ (see Remark 14), implies immediately the estimate of φ y ( λ , ε )
φ y ( λ , ε ) b φ ε , λ Ω λ , ε > 0 ,
where b φ > 0 is some constant independent of ε and λ Ω λ .
Let us introduce the following vectors of the dimension K n :
δ ( t , λ , ε ) = δ x ( t , λ , ε ) δ y ( t , λ , ε ) , γ ( t , λ , ε ) = γ x ( t , λ , ε ) γ y ( t , λ , ε ) , t [ 0 , t f ] , λ Ω λ , ε > 0 , φ ( λ , ε ) = 0 φ y ( λ , ε ) , λ Ω λ , ε > 0 .
Also, let us introduce the following matrix:
Δ ˜ ( t , λ , ε ) = O ( K n r ) × ( K n r ) O ( K n r ) × r ( 1 / ε ) Δ 2 ( t , λ , ε ) T ( 1 / ε ) Δ 3 ( t , λ , ε ) , t [ 0 , t f ] , λ Ω λ , ε ( 0 , ε 0 ] ,
where Δ 2 ( t , λ , ε ) and Δ 3 ( t , λ , ε ) are defined in Equation (63); ε 0 is introduced in Theorem 1.
Due to Theorem 1 (see the inequalities in (62)) and Equation (63), we immediately have
( 1 / ε ) Δ 2 ( t , λ , ε ) T c , ( 1 / ε ) Δ 3 ( t , λ , ε ) c , t [ 0 , t f ] , λ Ω λ , ε ( 0 , ε 0 ] .
Using the vectors δ ( t , λ , ε ) , γ ( t , λ , ε ) , φ ( λ , ε ) and the matrix Δ ˜ ( t , λ , ε ) as well as the matrix Θ ( t , λ , ε ) (see Equation (66)), we can rewrite the initial-value problem (107) in the form
d δ ( t , λ , ε ) d t = Θ ( t , λ , ε ) δ ( t , λ , ε ) Δ ˜ ( t , λ , ε ) δ ( t , λ , ε ) + γ ( t , λ , ε ) , δ ( 0 , λ , ε ) = φ ( λ , ε ) , t [ 0 , t f ] , λ Ω λ , ε ( 0 , ε 0 ] .
Let the $Kn\times Kn$-matrix-valued function $\Upsilon(t,\chi,\lambda,\varepsilon)$, $0\le\chi\le t\le t_f$, be the unique solution to the following initial-value problem:
$$\frac{d\Upsilon(t,\chi,\lambda,\varepsilon)}{dt}=\Theta(t,\lambda,\varepsilon)\Upsilon(t,\chi,\lambda,\varepsilon),\qquad \Upsilon(\chi,\chi,\lambda,\varepsilon)=I_{Kn},\qquad t\in[\chi,t_f].$$
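For readers who wish to experiment numerically, a transition matrix of this kind can be generated by integrating the matrix differential equation entry-wise. The sketch below is only an illustration under an assumed constant coefficient matrix (it is not part of the paper's argument); in the time-invariant case, the exact transition matrix is the matrix exponential, which serves as a sanity check.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

def transition_matrix(Theta, chi, t, n):
    """Return Y(t), where dY/ds = Theta(s) Y, Y(chi) = I_n (cf. (119))."""
    rhs = lambda s, y: (Theta(s) @ y.reshape(n, n)).ravel()
    sol = solve_ivp(rhs, (chi, t), np.eye(n).ravel(), rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(n, n)

# Sanity check with a constant (illustrative) coefficient matrix: then
# Y(t, chi) = expm(Theta * (t - chi)) is the exact transition matrix.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
Y = transition_matrix(lambda s: A, 0.0, 1.0, 2)
print(np.allclose(Y, expm(A), atol=1e-6))  # True
```

For a time-varying $\Theta(t,\lambda,\varepsilon)$, the same routine applies with `Theta` returning the matrix at time `s`.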
By Υ 1 ( t , χ , λ , ε ) , Υ 2 ( t , χ , λ , ε ) , Υ 3 ( t , χ , λ , ε ) and Υ 4 ( t , χ , λ , ε ) , we denote the upper left-hand, upper right-hand, lower left-hand and lower right-hand blocks of the matrix Υ ( t , χ , λ , ε ) of the dimensions ( K n r ) × ( K n r ) , ( K n r ) × r , r × ( K n r ) and r × r , respectively, i.e.,
$$\Upsilon(t,\chi,\lambda,\varepsilon)=\begin{pmatrix}\Upsilon_1(t,\chi,\lambda,\varepsilon) & \Upsilon_2(t,\chi,\lambda,\varepsilon)\\ \Upsilon_3(t,\chi,\lambda,\varepsilon) & \Upsilon_4(t,\chi,\lambda,\varepsilon)\end{pmatrix}.$$
Similarly to the inequalities in (75), we have the following estimates of these blocks for all $0\le\chi\le t\le t_f$ and all $\lambda\in\Omega_\lambda$:
$$\|\Upsilon_l(t,\chi,\lambda,\varepsilon)\|\le b_2,\ \ l=1,3,\qquad \|\Upsilon_2(t,\chi,\lambda,\varepsilon)\|\le b_2\varepsilon,\qquad \|\Upsilon_4(t,\chi,\lambda,\varepsilon)\|\le b_2\Big(\varepsilon+\exp\big(-0.5\beta(t-\chi)/\varepsilon\big)\Big),\qquad \varepsilon\in(0,\varepsilon_1],$$
where the constant β > 0 is introduced in (61); the constants ε 1 > 0 and b 2 > 0 are introduced in (75).
Using the matrix-valued function Υ ( t , χ , λ , ε ) , let us rewrite the initial-value problem (118) in the equivalent integral form
$$\delta(t,\lambda,\varepsilon)=\Upsilon(t,0,\lambda,\varepsilon)\varphi(\lambda,\varepsilon)-\int_0^{t}\Upsilon(t,\chi,\lambda,\varepsilon)\Big(\widetilde\Delta(\chi,\lambda,\varepsilon)\delta(\chi,\lambda,\varepsilon)-\gamma(\chi,\lambda,\varepsilon)\Big)d\chi,\qquad t\in[0,t_f],\ \lambda\in\Omega_\lambda,\ \varepsilon\in(0,\varepsilon_0].$$
Now (similarly to the proof of Theorem 1), we apply the method of successive approximations to Equation (121). For this purpose, we consider the sequence of vector-valued functions $\{\delta_i(t,\lambda,\varepsilon)\}_{i=0}^{+\infty}$ given as:
$$\delta_{i+1}(t,\lambda,\varepsilon)=\Upsilon(t,0,\lambda,\varepsilon)\varphi(\lambda,\varepsilon)-\int_0^{t}\Upsilon(t,\chi,\lambda,\varepsilon)\Big(\widetilde\Delta(\chi,\lambda,\varepsilon)\delta_i(\chi,\lambda,\varepsilon)-\gamma(\chi,\lambda,\varepsilon)\Big)d\chi,$$
$$i=0,1,\ldots,\qquad t\in[0,t_f],\ \lambda\in\Omega_\lambda,\ \varepsilon\in(0,\varepsilon_0],$$
where the initial guess δ 0 ( t , λ , ε ) = 0 , t [ 0 , t f ] , λ Ω λ , ε ( 0 , ε 0 ] .
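This successive-approximations scheme is a Picard iteration for a Volterra integral equation. As a hedged illustration (a scalar toy equation with made-up constants `k`, `g`, `phi`, not the paper's operators), the sketch below shows the iteration, started from the zero initial guess, converging to the exact solution:

```python
import numpy as np

# Scalar toy analogue of the iteration (122): solve the Volterra equation
#   delta(t) = phi - int_0^t (k*delta(chi) - g) dchi,
# i.e. delta' = -k*delta + g, delta(0) = phi, by Picard iteration starting
# from delta_0 = 0.  The constants k, g, phi are illustrative assumptions.
k, g, phi = 2.0, 1.0, 1.0
t = np.linspace(0.0, 1.0, 2001)
h = t[1] - t[0]

delta = np.zeros_like(t)                 # initial guess delta_0 = 0
for _ in range(30):                      # successive approximations
    integrand = k * delta - g
    # cumulative trapezoid rule for int_0^t integrand(chi) dchi
    cumint = np.concatenate(([0.0], np.cumsum((integrand[1:] + integrand[:-1]) * h / 2)))
    delta = phi - cumint

exact = g / k + (phi - g / k) * np.exp(-k * t)   # closed-form solution
print(np.max(np.abs(delta - exact)))             # small discretization error
```

The contraction factor of the iteration decays like $(kt)^i/i!$, so a few dozen iterations suffice on a bounded interval; the residual error is due to the quadrature, not the iteration.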
Let us represent the vector-valued functions δ i ( t , λ , ε ) , ( i = 1 , 2 , ) in the block form as follows:
$$\delta_i(t,\lambda,\varepsilon)=\begin{pmatrix}\delta_{i,x}(t,\lambda,\varepsilon)\\ \delta_{i,y}(t,\lambda,\varepsilon)\end{pmatrix},\qquad i=1,2,\ldots,$$
where the dimension of the upper block is K n r , while the dimension of the lower block is r, i.e., the dimensions of these blocks are the same as the dimensions of the corresponding blocks in the vector-valued function δ ( t , λ , ε ) (see Equation (115)).
Using the block representations of the matrices $\widetilde\Delta(t,\lambda,\varepsilon)$, $\Upsilon(t,\chi,\lambda,\varepsilon)$ (see Equations (116) and (119)) and the block representations of the vectors $\gamma(t,\lambda,\varepsilon)$, $\varphi(\lambda,\varepsilon)$, $\delta_i(t,\lambda,\varepsilon)$ (see Equations (115) and (123)), as well as the inequalities (109), (113), (114), (117) and (120), we obtain the existence of a positive number $\tilde\varepsilon_0\le\varepsilon_0$ such that, for any $\varepsilon\in(0,\tilde\varepsilon_0]$ and any $\lambda\in\Omega_\lambda$, the sequence $\{\delta_i(t,\lambda,\varepsilon)\}_{i=0}^{+\infty}$ converges in the linear space of all $Kn$-vector-valued functions continuous on the interval $[0,t_f]$. Since the aforementioned inequalities are uniform with respect to $\lambda\in\Omega_\lambda$ and $\varepsilon\in(0,\tilde\varepsilon_0]$, this convergence is also uniform with respect to $\lambda\in\Omega_\lambda$ and $\varepsilon\in(0,\tilde\varepsilon_0]$. Moreover, the following inequalities are fulfilled:
$$\|\delta_{i,x}(t,\lambda,\varepsilon)\|\le\tilde c_1\varepsilon,\qquad \|\delta_{i,y}(t,\lambda,\varepsilon)\|\le\tilde c_1\varepsilon,\qquad i=1,2,\ldots,\quad t\in[0,t_f],\ \lambda\in\Omega_\lambda,\ \varepsilon\in(0,\tilde\varepsilon_0],$$
where c ˜ 1 > 0 is some constant independent of λ , ε and i.
Let us denote
$$\delta^{*}(t,\lambda,\varepsilon)=\lim_{i\to+\infty}\delta_i(t,\lambda,\varepsilon),\qquad t\in[0,t_f],\ \lambda\in\Omega_\lambda,\ \varepsilon\in(0,\tilde\varepsilon_0].$$
Equations (121) and (122) immediately imply that $\delta^{*}(t,\lambda,\varepsilon)$ is the solution of the integral Equation (121) and, therefore, of the initial-value problem (118) in the entire interval $[0,t_f]$. Moreover, this solution has a block form similar to that of the vector $\delta(t,\lambda,\varepsilon)$ (see Equation (115)) and satisfies the inequalities
$$\|\delta_x^{*}(t,\lambda,\varepsilon)\|\le\tilde c_1\varepsilon,\qquad \|\delta_y^{*}(t,\lambda,\varepsilon)\|\le\tilde c_1\varepsilon,\qquad t\in[0,t_f],\ \lambda\in\Omega_\lambda,\ \varepsilon\in(0,\tilde\varepsilon_0].$$
Since the initial-value problem (118) has a unique solution,
$$\delta(t,\lambda,\varepsilon)=\delta^{*}(t,\lambda,\varepsilon),\qquad t\in[0,t_f],\ \lambda\in\Omega_\lambda,\ \varepsilon\in(0,\tilde\varepsilon_0].$$
This equation, along with Equation (106) and the inequalities in (124), directly yields the inequalities in (105). Thus, the theorem is proven. □
Let us introduce the following vector-valued functions of the dimension K n :
$$z_0^{o}(t,\lambda)=\operatorname{col}\big(x_0^{o}(t,\lambda),\,y_0^{o}(t,\lambda)\big),\qquad z_0^{b,1}(\theta,\lambda)=\operatorname{col}\big(0,\,y_0^{b,1}(\theta,\lambda)\big),\qquad z_0^{b,2}(\tau,\lambda)=\operatorname{col}\big(0,\,y_0^{b,2}(\tau,\lambda)\big),$$
$$z_0(t,\lambda,\varepsilon)=z_0^{o}(t,\lambda)+z_0^{b,1}(\theta,\lambda)+z_0^{b,2}(\tau,\lambda),\qquad t\in[0,t_f],\ \theta=t/\varepsilon,\ \tau=(t-t_f)/\varepsilon,\ \lambda\in\Omega_\lambda,\ \varepsilon\in(0,\tilde\varepsilon_0].$$
Thus, by virtue of Theorem 2, we have
$$\|z(t,\lambda,\varepsilon)-z_0(t,\lambda,\varepsilon)\|\le 2\tilde c_1\varepsilon,\qquad t\in[0,t_f],\ \lambda\in\Omega_\lambda,\ \varepsilon\in(0,\tilde\varepsilon_0].$$

4.4. Transformation of the Optimal Control in the Problem (1), (5) and (6)

To transform the expression (13) of the optimal control in the problem (1), (5) and (6), we first observe the following. Since $P(t,\lambda,\varepsilon)$, $t\in[0,t_f]$, is the unique solution of the terminal-value problem (24) for any $\lambda\in\Omega_\lambda$ and $\varepsilon>0$, $P\big(t,\lambda^{*}(\varepsilon),\varepsilon\big)$, $t\in[0,t_f]$, is the unique solution of the problem (24) with $\lambda=\lambda^{*}(\varepsilon)$ for any $\varepsilon>0$. Recall that $\lambda=\lambda^{*}(\varepsilon)$, $\varepsilon>0$, is the solution of the optimization problem (14) and (15) and, due to Corollary 2, of the optimization problem (35) and (36). Taking into account this observation, as well as Equation (23) and Proposition 2, we directly have that
P t , λ * ( ε ) , ε = R T t , λ * ( ε ) 1 P t , λ * ( ε ) , ε R 1 t , λ * ( ε ) , t [ 0 , t f ] , ε > 0
is the unique solution of the terminal-value problem (9) with λ = λ * ( ε ) .
Substituting (126) into Equation (13) and using Equations (26) and (37), we obtain after a routine rearrangement the following expression for the optimal control of the problem (1), (5) and (6):
$$u_\varepsilon^{*}\big(w,t,\lambda^{*}(\varepsilon)\big)=-\frac{1}{\varepsilon}\Big(P_2^{T}\big(t,\lambda^{*}(\varepsilon),\varepsilon\big),\;P_3\big(t,\lambda^{*}(\varepsilon),\varepsilon\big)\Big)R^{-1}\big(t,\lambda^{*}(\varepsilon)\big)w,\qquad w\in E^{Kn},\ t\in[0,t_f],\ \varepsilon>0.$$
Finally, substituting the solution w t , λ * ( ε ) , ε of the initial-value problem (16) with λ = λ * ( ε ) into (127) and using Corollary 1, we obtain the time realization u * t , λ * ( ε ) , ε of the state-feedback optimal control in the problem (1), (5) and (6) along w = w t , λ * ( ε ) , ε (the open-loop optimal control in this problem)
$$
\begin{aligned}
u^{*}\big(t,\lambda^{*}(\varepsilon),\varepsilon\big)&=u_\varepsilon^{*}\big(w(t,\lambda^{*}(\varepsilon),\varepsilon),t,\lambda^{*}(\varepsilon)\big)
=-\frac{1}{\varepsilon}\Big(P_2^{T}\big(t,\lambda^{*}(\varepsilon),\varepsilon\big),\;P_3\big(t,\lambda^{*}(\varepsilon),\varepsilon\big)\Big)R^{-1}\big(t,\lambda^{*}(\varepsilon)\big)w\big(t,\lambda^{*}(\varepsilon),\varepsilon\big)\\
&=-\frac{1}{\varepsilon}\Big(P_2^{T}\big(t,\lambda^{*}(\varepsilon),\varepsilon\big),\;P_3\big(t,\lambda^{*}(\varepsilon),\varepsilon\big)\Big)z\big(t,\lambda^{*}(\varepsilon),\varepsilon\big),\qquad t\in[0,t_f],\ \varepsilon>0.
\end{aligned}
$$
Since $u_\varepsilon^{*}\big(w,t,\lambda^{*}(\varepsilon)\big)$ is the state-feedback optimal control in the problem (1), (5) and (6) and $u^{*}\big(t,\lambda^{*}(\varepsilon),\varepsilon\big)$ is the open-loop optimal control in this problem, Proposition 1 and Corollary 2 yield
$$J\big(\lambda^{*}(\varepsilon),\varepsilon\big)=I\big(\lambda^{*}(\varepsilon),\varepsilon\big)=J_\varepsilon\big[u_\varepsilon^{*}\big(w,t,\lambda^{*}(\varepsilon)\big)\big]=J_\varepsilon\big[u^{*}\big(t,\lambda^{*}(\varepsilon),\varepsilon\big)\big],\qquad \varepsilon>0.$$

4.5. Asymptotic Behaviour of the Solution to the Optimization Problem (35) and (36)

Along with the optimization problem (35) and (36), let us consider the following optimization problem:
$$\lambda_0^{*}=\operatorname*{argmin}_{\lambda\in\Omega_\lambda}J_0(\lambda),$$
$$
\begin{aligned}
J_0(\lambda)=\;& x_0^{T}(\lambda)P_{10}^{o}(0,\lambda)x_0(\lambda)
-\big(x_0^{o}(t_f,\lambda)\big)^{T}H_1(\lambda)x_0^{o}(t_f,\lambda)\\
&-\int_0^{t_f}\Big(\big(x_0^{o}(t,\lambda)\big)^{T}D_1(t,\lambda)x_0^{o}(t,\lambda)
+\big(y_0^{o}(t,\lambda)\big)^{T}D_2(t,\lambda)y_0^{o}(t,\lambda)\Big)dt\\
&+\max_{\kappa\in\Omega_\kappa}\bigg[\int_0^{t_f}\big(z_0^{o}(t,\lambda)\big)^{T}R^{T}(t,\lambda)D_\kappa(t)R(t,\lambda)z_0^{o}(t,\lambda)\,dt
+\big(x_0^{o}(t_f,\lambda)\big)^{T}L^{T}(t_f,\lambda)H_\kappa L(t_f,\lambda)x_0^{o}(t_f,\lambda)\bigg],
\end{aligned}
$$
where the matrices D 1 ( t , λ ) and D 2 ( t , λ ) are defined in (28); the matrix H 1 ( λ ) is defined in (29); the set Ω κ is given by (11); the matrices H κ and D κ ( t ) are given in (12); the K n -vector z 0 o ( t , λ ) is given in (125).
In contrast with the optimization problem (35) and (36), the optimization problem (130) and (131) is independent of ε .
Lemma 2. 
Let the assumptions AI-AV be fulfilled. Then, the function J 0 ( λ ) is continuous with respect to λ Ω λ . Moreover, the following limit equality is valid:
$$\lim_{\varepsilon\to+0}J(\lambda,\varepsilon)=J_0(\lambda)\quad\text{uniformly in }\lambda\in\Omega_\lambda.$$
Proof. 
We start with the proof of the first statement of the lemma. Let us observe that the functions $D_1(t,\lambda)$, $D_2(t,\lambda)$, $H_1(\lambda)$, $R(t,\lambda)$, $P_{10}^{o}(t,\lambda)$, $x_0^{o}(t,\lambda)$, $y_0^{o}(t,\lambda)$ are bounded for $(t,\lambda)\in[0,t_f]\times\Omega_\lambda$ and continuous with respect to $\lambda\in\Omega_\lambda$ uniformly in $t\in[0,t_f]$ (see Proposition 2, Remarks 6, 11, 14). Also, let us observe that the function $H_\kappa$ is continuous with respect to $\kappa\in\Omega_\kappa$, while the function $D_\kappa(t)$ is continuous with respect to $\kappa\in\Omega_\kappa$ uniformly in $t\in[0,t_f]$. These observations, as well as the theorem on continuity of an integral with respect to a parameter [65,68] and the Maximum Theorem [70], directly yield the continuity of the function $J_0(\lambda)$ with respect to $\lambda\in\Omega_\lambda$. Thus, the first statement of the lemma is proven.
Proceed to the proof of the limit equality (132). To prove this equality, we first transform the expression $z^{T}(t_f,\lambda,\varepsilon)R^{T}(t_f,\lambda)H_\kappa R(t_f,\lambda)z(t_f,\lambda,\varepsilon)$ appearing in the function $J(\lambda,\varepsilon)$ (see Equation (36)). Namely, using the assumption AIII, the symmetry of the matrix $\widetilde H$ and Equations (8), (12), (22) and (80), we have
$$
\begin{aligned}
&z^{T}(t_f,\lambda,\varepsilon)R^{T}(t_f,\lambda)H_\kappa R(t_f,\lambda)z(t_f,\lambda,\varepsilon)\\
&\quad=\big(x^{T}(t_f,\lambda,\varepsilon),\;y^{T}(t_f,\lambda,\varepsilon)\big)
\begin{pmatrix}L^{T}(t_f,\lambda)\\ B^{T}(t_f)\end{pmatrix}
H_\kappa\big(L(t_f,\lambda),\;B(t_f)\big)
\begin{pmatrix}x(t_f,\lambda,\varepsilon)\\ y(t_f,\lambda,\varepsilon)\end{pmatrix}\\
&\quad=\big(x^{T}(t_f,\lambda,\varepsilon)L^{T}(t_f,\lambda)+y^{T}(t_f,\lambda,\varepsilon)B^{T}(t_f)\big)H_\kappa L(t_f,\lambda)x(t_f,\lambda,\varepsilon)\\
&\quad=x^{T}(t_f,\lambda,\varepsilon)L^{T}(t_f,\lambda)H_\kappa L(t_f,\lambda)x(t_f,\lambda,\varepsilon)
+y^{T}(t_f,\lambda,\varepsilon)B^{T}(t_f)H_\kappa L(t_f,\lambda)x(t_f,\lambda,\varepsilon)\\
&\quad=x^{T}(t_f,\lambda,\varepsilon)L^{T}(t_f,\lambda)H_\kappa L(t_f,\lambda)x(t_f,\lambda,\varepsilon),\qquad
\lambda\in\Omega_\lambda,\ \kappa\in\Omega_\kappa,\ \varepsilon>0.
\end{aligned}
$$
Now, using Equations (28), (29), (36), (37), (80), (131), and (133), as well as Theorems 1 and 2, and the inequalities (61), (94) and (104), we obtain the limit equality (132). This completes the proof of the lemma. □
In what follows, we assume the following:
AVI. 
The optimization problem (130) and (131) has a unique solution $\lambda_0^{*}$.
Theorem 3. 
Let the assumptions AI-AVI be fulfilled. Then the solution $\lambda^{*}(\varepsilon)$, $\varepsilon\in(0,\tilde\varepsilon_0]$, of the optimization problem (35) and (36) tends to the solution $\lambda_0^{*}$ of the optimization problem (130) and (131) as $\varepsilon\to+0$, i.e.,
$$\lim_{\varepsilon\to+0}\lambda^{*}(\varepsilon)=\lambda_0^{*}.$$
Proof. 
(by contradiction). Assume that the statement of the theorem does not hold. This means the existence of sequences $\{\varepsilon_i\}_{i=1}^{+\infty}$, $\{\lambda_i\}_{i=1}^{+\infty}$ and a number $\eta>0$ satisfying the following conditions: (a) $\varepsilon_i\in(0,\tilde\varepsilon_0]$, $(i=1,2,\ldots)$, and $\lim_{i\to+\infty}\varepsilon_i=0$; (b) $\lambda_i\in\Omega_\lambda$, $(i=1,2,\ldots)$; (c) for any $i\in\{1,2,\ldots\}$, $\lambda_i=\operatorname{argmin}_{\lambda\in\Omega_\lambda}J(\lambda,\varepsilon_i)$, i.e., this vector minimizes the function (36) with $\varepsilon=\varepsilon_i$; (d) for any $i\in\{1,2,\ldots\}$, $\|\lambda_i-\lambda_0^{*}\|\ge\eta$.
From the conditions (b) and (c), we directly have
$$J(\lambda_i,\varepsilon_i)\le J(\lambda,\varepsilon_i)\qquad \forall i\in\{1,2,\ldots\},\ \forall\lambda\in\Omega_\lambda.$$
Since the set $\Omega_\lambda$ is bounded and closed, condition (b) implies the existence of a subsequence of the sequence $\{\lambda_i\}_{i=1}^{+\infty}$ that converges in this set. For the sake of simplicity (but without loss of generality), we assume that $\{\lambda_i\}_{i=1}^{+\infty}$ itself is such a subsequence. Thus, there exists
$$\lim_{i\to+\infty}\lambda_i=\bar\lambda\in\Omega_\lambda.$$
Moreover, by virtue of the aforementioned condition (d),
$$\|\bar\lambda-\lambda_0^{*}\|\ge\eta>0.$$
Now, using the aforementioned condition (a) on the sequence $\{\varepsilon_i\}_{i=1}^{+\infty}$, as well as Equation (135) and Lemma 2, we obtain the limit equality
$$\lim_{i\to+\infty}J(\lambda_i,\varepsilon_i)=J_0(\bar\lambda).$$
The inequality (134), along with the equalities (132) and (137), yields immediately the following inequality:
$$J_0(\bar\lambda)\le J_0(\lambda)\qquad\forall\lambda\in\Omega_\lambda,$$
meaning that the vector $\bar\lambda\in\Omega_\lambda$ minimizes the function $J_0(\lambda)$ over the set $\Omega_\lambda$. Hence, due to the assumption AVI, $\bar\lambda=\lambda_0^{*}$. However, this equality contradicts the inequality (136). This contradiction proves the theorem. □
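The convergence of minimizers established in Theorem 3 can be illustrated numerically: if $J(\lambda,\varepsilon)$ converges to $J_0(\lambda)$ uniformly on a compact set and $J_0$ has a unique minimizer, then the minimizers of $J(\cdot,\varepsilon)$ approach it. The toy functions in the sketch below are illustrative choices, not the paper's functionals:

```python
import numpy as np

# Toy illustration: J(lam, eps) = J0(lam) + eps*perturbation converges to J0
# uniformly on the compact set [-1, 1], and J0 has the unique minimizer 0.3.
# Both functions are illustrative assumptions, not the paper's functionals.
lam = np.linspace(-1.0, 1.0, 20001)      # grid over Omega_lambda = [-1, 1]
J0 = (lam - 0.3) ** 2                    # unique minimizer lam0* = 0.3

for eps in (0.1, 0.01, 0.001):
    J = J0 + eps * np.sin(5.0 * lam)     # uniform O(eps) perturbation of J0
    lam_star = lam[np.argmin(J)]         # grid minimizer lam*(eps)
    print(eps, lam_star)                 # approaches 0.3 as eps -> +0
```

The compactness of the feasible set is what makes the contradiction argument in the proof work; on an unbounded set the minimizers could escape to infinity.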
As a direct consequence of Lemma 2 and Theorem 3, we have the following assertion.
Corollary 3. 
Let the assumptions AI-AVI be fulfilled. Then, for the solution $\lambda^{*}(\varepsilon)$, $\varepsilon\in(0,\tilde\varepsilon_0]$, of the optimization problem (35) and (36), there exists a function $g^{*}(\varepsilon)>0$, $\varepsilon\in(0,\tilde\varepsilon_0]$, such that $\lim_{\varepsilon\to+0}g^{*}(\varepsilon)=0$ and
$$\big|J\big(\lambda^{*}(\varepsilon),\varepsilon\big)-J_0(\lambda_0^{*})\big|\le g^{*}(\varepsilon),\qquad \varepsilon\in(0,\tilde\varepsilon_0].$$

4.6. Asymptotically Suboptimal Control of the Problem (1), (5) and (6)

4.6.1. Formal Construction of the Suboptimal Control

Replacing in the right-hand side of (127) λ * ( ε ) with λ 0 * , as well as P 2 t , λ * ( ε ) , ε with P 20 o ( t , λ 0 * ) and P 3 t , λ * ( ε ) , ε with P 30 o ( t , λ 0 * ) , we obtain the following state-feedback control:
$$\hat u_\varepsilon(w,t,\lambda_0^{*})=-\frac{1}{\varepsilon}\Big(\big(P_{20}^{o}(t,\lambda_0^{*})\big)^{T},\;P_{30}^{o}(t,\lambda_0^{*})\Big)R^{-1}(t,\lambda_0^{*})w,\qquad w\in E^{Kn},\ t\in[0,t_f],\ \varepsilon>0.$$
It is clear that, for all $\varepsilon>0$, $\hat u_\varepsilon(w,t,\lambda_0^{*})\in U$, i.e., this control is admissible in the problem (1), (5) and (6). In the remainder of this subsection, we show that $\hat u_\varepsilon(w,t,\lambda_0^{*})$ is asymptotically suboptimal in this problem. The latter means that this control provides a value of the functional in the problem (1), (5) and (6) that is arbitrarily close to the optimal value of this functional for all sufficiently small $\varepsilon>0$.
Substituting the control (138) into the initial-value problem (1) with k = 1 , 2 , , K and using Equations (4), (7) and (8), we obtain the corresponding closed-loop system with the trajectory denoted as w ^ ( t , ε )
$$\frac{d\hat w(t,\varepsilon)}{dt}=\Big(A(t)-\frac{1}{\varepsilon}B(t)\big(\big(P_{20}^{o}(t,\lambda_0^{*})\big)^{T},\;P_{30}^{o}(t,\lambda_0^{*})\big)R^{-1}(t,\lambda_0^{*})\Big)\hat w(t,\varepsilon),\qquad \hat w(0,\varepsilon)=w_0,\quad t\in[0,t_f],\ \varepsilon>0.$$
Below, we analyze the asymptotic (with respect to $\varepsilon$) behaviour of $\hat w(t,\varepsilon)$.

4.6.2. Asymptotic Behaviour of the Solution to the Initial-Value Problem (139)

To analyze the asymptotic behaviour of w ^ ( t , ε ) , we make the following transformation of variables in (139):
$$\hat w(t,\varepsilon)=R(t,\lambda_0^{*})\hat z(t,\varepsilon),\qquad t\in[0,t_f],\ \varepsilon>0,$$
where z ^ ( t , ε ) is a new unknown vector-valued function.
The transformation (140), along with Equations (25), (26), (34), and (38), converts the initial-value problem (139) to the new initial-value problem with respect to z ^ ( t , ε )
$$\frac{d\hat z(t,\varepsilon)}{dt}=\begin{pmatrix}A_1(t,\lambda_0^{*}) & A_2(t,\lambda_0^{*})\\[2pt] A_3(t,\lambda_0^{*})-\dfrac{1}{\varepsilon}\big(P_{20}^{o}(t,\lambda_0^{*})\big)^{T} & A_4(t,\lambda_0^{*})-\dfrac{1}{\varepsilon}P_{30}^{o}(t,\lambda_0^{*})\end{pmatrix}\hat z(t,\varepsilon),\qquad \hat z(0,\varepsilon)=z_0(\lambda_0^{*}),\quad t\in[0,t_f],\ \varepsilon>0.$$
Similarly to the results of Section 4.3 (see Equation (80)), we represent the solution $\hat z(t,\varepsilon)$ of the initial-value problem (141) in the block form
$$\hat z(t,\varepsilon)=\operatorname{col}\big(\hat x(t,\varepsilon),\,\hat y(t,\varepsilon)\big),\qquad t\in[0,t_f],\ \varepsilon>0,$$
where $\hat x(t,\varepsilon)\in E^{Kn-r}$, $\hat y(t,\varepsilon)\in E^{r}$.
Due to the representation (142) and Equation (81), the initial-value problem (141) is transformed to the following equivalent initial-value problem in the time interval [ 0 , t f ] :
$$
\begin{aligned}
&\frac{d\hat x(t,\varepsilon)}{dt}=A_1(t,\lambda_0^{*})\hat x(t,\varepsilon)+A_2(t,\lambda_0^{*})\hat y(t,\varepsilon),\\
&\varepsilon\frac{d\hat y(t,\varepsilon)}{dt}=\Big(\varepsilon A_3(t,\lambda_0^{*})-\big(P_{20}^{o}(t,\lambda_0^{*})\big)^{T}\Big)\hat x(t,\varepsilon)+\Big(\varepsilon A_4(t,\lambda_0^{*})-P_{30}^{o}(t,\lambda_0^{*})\Big)\hat y(t,\varepsilon),\\
&\hat x(0,\varepsilon)=x_0(\lambda_0^{*}),\qquad \hat y(0,\varepsilon)=y_0(\lambda_0^{*}),
\end{aligned}
$$
where ε > 0 .
Quite similarly to the results of Section 4.3 (see Equation (83)), we construct the zero-order asymptotic solution of the problem (143) in the form
$$\hat x_0(t,\varepsilon)=\hat x_0^{o}(t)+\hat x_0^{b,1}(\theta)+\hat x_0^{b,2}(\tau),\qquad \hat y_0(t,\varepsilon)=\hat y_0^{o}(t)+\hat y_0^{b,1}(\theta)+\hat y_0^{b,2}(\tau),\qquad \theta=t/\varepsilon,\ \tau=(t-t_f)/\varepsilon,$$
where, similarly to (91) and (90), x ^ 0 o ( t ) and y ^ 0 o ( t ) are obtained from the system
$$\frac{d\hat x_0^{o}(t)}{dt}=\Big(A_1(t,\lambda_0^{*})-S_1^{o}(t,\lambda_0^{*})P_{10}^{o}(t,\lambda_0^{*})\Big)\hat x_0^{o}(t),\qquad \hat x_0^{o}(0)=x_0(\lambda_0^{*}),\quad t\in[0,t_f],$$
$$\hat y_0^{o}(t)=-D_2^{-1}(t,\lambda_0^{*})A_2^{T}(t,\lambda_0^{*})P_{10}^{o}(t,\lambda_0^{*})\hat x_0^{o}(t),\qquad t\in[0,t_f];$$
similarly to (84)–(88), we have
$$\hat x_0^{b,1}(\theta)\equiv 0,\ \ \theta\ge 0;\qquad \hat x_0^{b,2}(\tau)\equiv 0,\ \ \tau\le 0;$$
similarly to (92)–(94), we obtain
$$\hat y_0^{b,1}(\theta)=\Big(y_0(\lambda_0^{*})+D_2^{-1}(0,\lambda_0^{*})A_2^{T}(0,\lambda_0^{*})P_{10}^{o}(0,\lambda_0^{*})x_0(\lambda_0^{*})\Big)\exp\Big(-\big(D_2(0,\lambda_0^{*})\big)^{1/2}\theta\Big),\qquad \theta\ge 0,$$
yielding the estimate
$$\|\hat y_0^{b,1}(\theta)\|\le \hat a\exp(-\hat\beta\theta),\qquad \theta\ge 0,$$
where $\hat a>0$ and $\hat\beta>0$ are some constants.
The vector-valued function $\hat y_0^{b,2}(\tau)$ is obtained differently from the vector-valued function $y_0^{b,2}(\tau)$ (see Section 4.3.4), because the initial-value problem (143) involves only $P_{20}^{o}(\cdot)$ and $P_{30}^{o}(\cdot)$ (rather than $P_2(\cdot)$ and $P_3(\cdot)$, as in (82)). Namely, in contrast with Equation (95), the vector-valued function $\hat y_0^{b,2}(\tau)$ satisfies the following differential equation:
$$\frac{d\hat y_0^{b,2}(\tau)}{d\tau}=-P_{30}^{o}(t_f,\lambda_0^{*})\,\hat y_0^{b,2}(\tau),\qquad \tau\le 0,$$
where, due to Equation (50), $P_{30}^{o}(t_f,\lambda)=\big(D_2(t_f,\lambda)\big)^{1/2}$.
Solving Equation (149) with the initial value $\hat y_0^{b,2}(0)$ of $\hat y_0^{b,2}(\tau)$ yields
$$\hat y_0^{b,2}(\tau)=\exp\Big(-\big(D_2(t_f,\lambda)\big)^{1/2}\tau\Big)\hat y_0^{b,2}(0),\qquad \tau\le 0.$$
Taking into account the positive definiteness of the matrix $\big(D_2(t_f,\lambda)\big)^{1/2}$, we directly obtain that the only initial value $\hat y_0^{b,2}(0)$ for which $\hat y_0^{b,2}(\tau)$ from (150) satisfies the Boundary Function Method requirement $\lim_{\tau\to-\infty}\hat y_0^{b,2}(\tau)=0$ is $\hat y_0^{b,2}(0)=0$. The latter, along with (150), implies
$$\hat y_0^{b,2}(\tau)\equiv 0,\qquad \tau\le 0.$$
Now, based on Equations (144)–(148) and (151), we obtain (quite similarly to Theorem 2) the following assertion.
Lemma 3. 
Let the assumptions AI-AV be fulfilled. Then, there exists a number $\hat\varepsilon_0>0$ such that, for all $\varepsilon\in(0,\hat\varepsilon_0]$, the blocks $\hat x(t,\varepsilon)$, $\hat y(t,\varepsilon)$ of the solution to the initial-value problem (143) satisfy the inequalities
$$\|\hat x(t,\varepsilon)-\hat x_0^{o}(t)\|\le\hat c_1\varepsilon,\qquad t\in[0,t_f],$$
$$\|\hat y(t,\varepsilon)-\hat y_0^{o}(t)-\hat y_0^{b,1}(\theta)\|\le\hat c_1\varepsilon,\qquad t\in[0,t_f],\ \theta=t/\varepsilon,$$
where c ^ 1 > 0 is some constant independent of ε.
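The $O(\varepsilon)$ accuracy of an outer-plus-boundary-layer approximation of the kind asserted by Lemma 3 can be checked on a toy slow-fast system (the system, initial values, and tolerances below are illustrative assumptions, not the paper's problem (143)):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy slow-fast system (an illustrative assumption, not the paper's problem):
#   slow variable:  dx/dt = -y,
#   fast variable:  eps * dy/dt = x - y.
# Zero-order approximation in the spirit of (144):
#   outer solution  x ~ x0*exp(-t), y ~ x0*exp(-t),
#   boundary layer at t = 0:  (y0 - x0)*exp(-t/eps);  no layer at t = t_f.
x0, y0, tf = 1.0, 3.0, 1.0
t = np.linspace(0.0, tf, 401)

errs = {}
for eps in (1e-2, 1e-3):
    sol = solve_ivp(lambda s, z: [-z[1], (z[0] - z[1]) / eps],
                    (0.0, tf), [x0, y0], method="Radau",
                    rtol=1e-10, atol=1e-12, dense_output=True)
    y_exact = sol.sol(t)[1]
    y_approx = x0 * np.exp(-t) + (y0 - x0) * np.exp(-t / eps)
    errs[eps] = np.max(np.abs(y_exact - y_approx))
    print(eps, errs[eps])   # the error shrinks roughly like O(eps)
```

A stiff integrator (here `Radau`) is needed for the reference solution, since the fast mode has a time constant of order $\varepsilon$.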
Let us introduce the following vector-valued functions of the dimension K n :
$$\hat z_0^{o}(t)=\operatorname{col}\big(\hat x_0^{o}(t),\,\hat y_0^{o}(t)\big),\qquad \hat z_0^{b,1}(\theta)=\operatorname{col}\big(0,\,\hat y_0^{b,1}(\theta)\big),\qquad \hat z_0(t,\varepsilon)=\hat z_0^{o}(t)+\hat z_0^{b,1}(\theta),\qquad t\in[0,t_f],\ \theta=t/\varepsilon,\ \varepsilon\in(0,\hat\varepsilon_0].$$
Thus, by virtue of Lemma 3, we have
$$\|\hat z(t,\varepsilon)-\hat z_0(t,\varepsilon)\|\le 2\hat c_1\varepsilon,\qquad t\in[0,t_f],\ \varepsilon\in(0,\hat\varepsilon_0].$$

4.6.3. Time Realization of the Control (138) in the Problem (1), (5) and (6)

The time realization of the control (138) along w = w ^ ( t , ε ) , which is an open-loop control in the problem (1), (5) and (6), has the form
u ^ ( t , λ 0 * , ε ) = u ^ ε ( w ^ ( t , ε ) , t , λ 0 * ) = 1 ε