Article

Operational Matrix of New Shifted Wavelet Functions for Solving Optimal Control Problem

Department of Applied Science, University of Technology, Baghdad 10066, Iraq
*
Author to whom correspondence should be addressed.
Mathematics 2023, 11(14), 3040; https://doi.org/10.3390/math11143040
Submission received: 7 June 2023 / Revised: 28 June 2023 / Accepted: 7 July 2023 / Published: 8 July 2023
(This article belongs to the Special Issue Modeling and Simulation in Engineering, 3rd Edition)

Abstract
This paper proposes an approximate numerical algorithm, based on the state parameterization technique, for finding the solution of the optimal control problem (OCP). An explicit formula for the new shifted wavelet (NSW) functions is constructed. A new formula expressing the first-order derivative of the NSW in terms of the original NSW functions is established, and from this derivative formula a new operational matrix of derivatives is extracted. The convergence of the expansion is studied in detail, and some illustrative examples of OCPs are displayed. The proposed algorithm is compared with the exact solution and with some other methods in the literature. This confirms the accuracy and the high efficiency of the presented algorithm.

1. Introduction

The dynamics in some mathematical models are represented by a system of ordinary differential equations for a set of dependent functions x(t), where the system is driven by an independent set of control inputs u(t). In this case, the aim is to determine the u(t) that optimizes the behavior of the dynamical system, and this problem is called optimal control. Numerous studies have focused on the approximate solutions of optimal control problems, which arise in many fields [1,2,3,4,5]. Two general techniques, indirect and direct, are used for the approximate solution of optimal control problems. An indirect method transforms the original optimal control problem into a boundary value problem, which can be solved either analytically or numerically.
Direct methods are more convenient techniques and can be quickly and simply applied to a new optimal control problem. In direct methods, the optimal control problem is treated as a standard optimization problem: one searches for the control function u(t) that optimizes the performance index. Different algorithms have been used for solving optimal control problems, including the indirect modified pseudospectral method [6], a direct Chebyshev cardinal functions method [7], the Cauchy discretization technique [8], the synthesized optimal control technique [9], the Legendre functions method [10], evolutionary algorithm-control input range estimation [11], and a hybrid of block-pulse functions and orthonormal Taylor polynomials [12]. (See [13,14,15,16,17] for some other articles exploring various optimal control problems.) Wavelet functions play important roles in numerical analysis for solving optimal control problems [18,19,20,21]. In particular, the Chebyshev wavelet families are widely applied in the field of approximation theory. For example, the authors in [22] employed the Boubaker wavelets together with the operational matrix of derivatives in order to solve singular initial value problems. A collocation method based on the second kind of Chebyshev wavelets is presented in [23] for solving calculus of variations problems. The use of operational matrices of derivatives and integrals has been highlighted in the field of numerical analysis [24]. This approach yields special algorithms that obtain accurate approximate solutions of many types of differential and integral equations with flexible computations. An operational matrix of derivatives is extracted by choosing suitable basis functions in terms of celebrated special functions and expressing the first derivative of these basis functions in terms of their original types.
Motivated by the above discussion, we are mainly interested in presenting new shifted wavelet functions with some important properties. A novel state parameterization method is suggested to solve the optimal control problem; this method uses the NSW as basis functions to parameterize the state variables. The proposed technique is constructed to achieve both accuracy and efficiency. The rest of the work is organized as follows: Section 2 provides the definition of the NSW. In Section 3, the convergence of the NSW expansion is studied. The general exact formula of the NSW differentiation operational matrix is derived in Section 4, and the suggested algorithm for solving the optimal control problem is illustrated in Section 5. Section 6 discusses the application of the NSW by considering various examples in optimal control. Simulation results are discussed in Section 7, followed by concluding remarks in Section 8.

2. The New Shifted Wavelet Functions

The special polynomials M_m(t) on the interval [−1, 1] can be defined as below:
M_0(t) = 2, \qquad M_1(t) = t, \qquad M_2(t) = t^2 - 2.
The general recurrence relation for obtaining M_m(t) is given by:
M_{m+1}(t) = t\,M_m(t) - M_{m-1}(t), \qquad m = 1, 2, 3, \ldots,
with the given initial conditions M_0(t) and M_1(t).
Sometimes, it is convenient to use the half-interval [0, 1] instead of the interval [−1, 1]. In this case, the polynomials are termed shifted and are denoted by Ms_m(t). In this work, Ms_m(t) are defined as Ms_m(t) = M_m(2t - 1).
Wavelet functions have been used successfully in scientific and engineering fields. The special new shifted wavelet functions can be defined as below:
Q_{nm}(t) = \begin{cases} \dfrac{2^{(k-1)/2}}{\sqrt{\pi}}\,Ms_m\!\left(2^k t - 2n + 1\right), & \dfrac{n-1}{2^{k-1}} \leq t \leq \dfrac{n}{2^{k-1}}, \\ 0, & \text{otherwise}, \end{cases} \qquad (1)
where n = 1, 2, \ldots, 2^{k-1}, k can be assumed to be any positive integer, m = 0, 1, \ldots, M is the degree of the shifted polynomials, and t denotes the time.
Here, Ms_m(t) are called the shifted special polynomials of order m, which are orthogonal with respect to the weight function w(t) and which satisfy the following recursive formula:
Ms_m(t) = (2t - 1)\,Ms_{m-1}(t) - Ms_{m-2}(t), \qquad m = 2, 3, 4, \ldots,
with the initial conditions:
Ms_0(t) = 2, \qquad Ms_1(t) = 2t - 1.
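The recurrences above are straightforward to evaluate; the following is a minimal sketch (the function names `Ms` and `Q` are ours, and the wavelet normalization follows the definition as reconstructed here):

```python
import numpy as np

def Ms(m, t):
    """Shifted special polynomials on [0, 1]:
    Ms_0 = 2, Ms_1 = 2t - 1, Ms_m = (2t - 1) Ms_{m-1} - Ms_{m-2}."""
    t = np.asarray(t, dtype=float)
    p0 = np.full_like(t, 2.0)
    p1 = 2.0 * t - 1.0
    if m == 0:
        return p0
    if m == 1:
        return p1
    for _ in range(2, m + 1):
        p0, p1 = p1, (2.0 * t - 1.0) * p1 - p0
    return p1

def Q(n, m, k, t):
    """New shifted wavelet Q_nm, supported on [(n-1)/2^{k-1}, n/2^{k-1}]."""
    t = np.asarray(t, dtype=float)
    lo, hi = (n - 1) / 2 ** (k - 1), n / 2 ** (k - 1)
    inside = (t >= lo) & (t <= hi)
    val = 2 ** ((k - 1) / 2) / np.sqrt(np.pi) * Ms(m, 2 ** k * t - 2 * n + 1)
    return np.where(inside, val, 0.0)
```

For instance, Ms_2(t) = (2t − 1)^2 − 2, so Ms_2(1) = −1, and each Q_nm vanishes outside its dyadic support interval.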

3. Convergence Analysis of New Wavelet Functions

A function f \in C^2[0, 1) with |f''(t)| \leq L, L > 0, may be expanded in terms of the new shifted wavelets as below:
f(t) = \sum_{n=1}^{\infty}\sum_{m=0}^{\infty} c_{nm}\,Q_{nm}(t), \qquad (4)
where:
c_{nm} = \langle f(t), Q_{nm}(t)\rangle. \qquad (5)
In Equation (5), the symbol \langle\cdot,\cdot\rangle denotes the inner product operator on the Hilbert space over the interval (0, 1].
If the infinite series in Equation (4) is truncated, then f(t) can be rewritten in matrix form as below:
f(t) \approx \sum_{n=1}^{2^{k-1}}\sum_{m=0}^{M} c_{nm}\,Q_{nm}(t) = C^T\,\Phi(t),
where \Phi(t) and C are vectors of dimension 2^{k-1}(M+1) \times 1, given by:
C = \left[c_{1,0}\ c_{1,1}\ \cdots\ c_{1,M}\ c_{2,0}\ \cdots\ c_{2,M}\ \cdots\ c_{2^{k-1},0}\ \cdots\ c_{2^{k-1},M}\right]^T
and:
\Phi(t) = \left[Q_{1,0}\ Q_{1,1}\ \cdots\ Q_{1,M}\ Q_{2,0}\ \cdots\ Q_{2,M}\ \cdots\ Q_{2^{k-1},0}\ \cdots\ Q_{2^{k-1},M}\right]^T.
Note that both k and n are integer numbers, and m is the degree of shifted polynomials. Now, we state and prove a theorem in order to ensure the convergence of the new shifted wavelet expansion of a function.
Theorem 1.
Assume that a function f(t) \in L^2_w[0, 1], where w(t) = \frac{1}{\sqrt{1-t^2}}, t \neq \pm 1, has a bounded second derivative, |f''(t)| \leq L, L > 0, and is expanded as the infinite series (4) of the new shifted wavelets (1). Then the series converges uniformly to f; in particular, the coefficients c_{nm} satisfy the inequality:
\left|c_{nm}\right| \leq \frac{2^{3/2}\sqrt{\pi}\,L}{n^{3/2}\left(m^2-1\right)}, \qquad m > 1. \qquad (7)
Proof. 
Let:
f(t) = \sum_{n=1}^{\infty}\sum_{m=0}^{\infty} c_{nm}\,Q_{nm}(t).
It follows that, for k = 1, 2, 3, \ldots; n = 1, 2, \ldots, 2^{k-1}; m = 0, 1, \ldots, M:
c_{nm} = \langle f(t), Q_{nm}(t)\rangle = \int_0^1 f(t)\,Q_{nm}(t)\,w_k(t)\,dt = \int_0^{\frac{n-1}{2^{k-1}}} f(t)\,Q_{nm}(t)\,w_k(t)\,dt + \int_{\frac{n-1}{2^{k-1}}}^{\frac{n}{2^{k-1}}} f(t)\,Q_{nm}(t)\,w_k(t)\,dt + \int_{\frac{n}{2^{k-1}}}^{1} f(t)\,Q_{nm}(t)\,w_k(t)\,dt.
Using Equation (1), the first and third integrals vanish, and one obtains:
c_{nm} = \int_{\frac{n-1}{2^{k-1}}}^{\frac{n}{2^{k-1}}} f(t)\,\frac{2^{(k-1)/2}}{\sqrt{\pi}}\,Ms_m\!\left(2^k t - 2n + 1\right) w\!\left(2^k t - 2n + 1\right)dt.
If m > 1, substituting:
2^k t - 2n + 1 = \cos\theta, \qquad t = \frac{\cos\theta + 2n - 1}{2^k}, \qquad dt = -\frac{\sin\theta}{2^k}\,d\theta,
gives:
c_{nm} = \frac{2^{(k-1)/2}}{\sqrt{\pi}} \int_0^{\pi} f\!\left(\frac{\cos\theta + 2n - 1}{2^k}\right) 2\cos(m\theta)\,\frac{1}{\sqrt{1-\cos^2\theta}}\,\frac{\sin\theta}{2^k}\,d\theta,
that is:
c_{nm} = \frac{2^{(1-k)/2}}{\sqrt{\pi}} \int_0^{\pi} f\!\left(\frac{\cos\theta + 2n - 1}{2^k}\right) \cos(m\theta)\,d\theta.
Using the method of integration by parts with:
u = f\!\left(\frac{\cos\theta + 2n - 1}{2^k}\right), \qquad du = -f'\!\left(\frac{\cos\theta + 2n - 1}{2^k}\right)\frac{\sin\theta}{2^k}\,d\theta, \qquad dv = \cos(m\theta)\,d\theta, \qquad v = \frac{\sin(m\theta)}{m}, \quad m \geq 1,
and noting that the boundary term vanishes, we obtain:
c_{nm} = \frac{2^{(1-k)/2}}{m\,2^{k}\sqrt{\pi}} \int_0^{\pi} f'\!\left(\frac{\cos\theta + 2n - 1}{2^k}\right)\sin(m\theta)\sin\theta\,d\theta.
Using again the method of integration by parts, with:
u = f'\!\left(\frac{\cos\theta + 2n - 1}{2^k}\right), \qquad du = -f''\!\left(\frac{\cos\theta + 2n - 1}{2^k}\right)\frac{\sin\theta}{2^k}\,d\theta, \qquad dv = \sin(m\theta)\sin\theta\,d\theta,
v = \frac{\sin(m-1)\theta}{m-1} - \frac{\sin(m+1)\theta}{m+1},
and again the boundary term vanishes, we have:
c_{nm} = \frac{2^{(1-k)/2}}{m\,2^{2k}\sqrt{\pi}} \int_0^{\pi} f''\!\left(\frac{\cos\theta + 2n - 1}{2^k}\right)\sin\theta\left(\frac{\sin(m-1)\theta}{m-1} - \frac{\sin(m+1)\theta}{m+1}\right)d\theta.
Thus, we obtain:
\left|c_{nm}\right| \leq \frac{2^{(1-k)/2}\,L}{m\,2^{2k}\sqrt{\pi}} \int_0^{\pi} \left|\sin\theta\left(\frac{\sin(m-1)\theta}{m-1} - \frac{\sin(m+1)\theta}{m+1}\right)\right| d\theta.
However:
\int_0^{\pi} \left|\sin\theta\left(\frac{\sin(m-1)\theta}{m-1} - \frac{\sin(m+1)\theta}{m+1}\right)\right| d\theta \leq \int_0^{\pi}\left(\frac{\left|\sin\theta\,\sin(m-1)\theta\right|}{m-1} + \frac{\left|\sin\theta\,\sin(m+1)\theta\right|}{m+1}\right)d\theta \leq \frac{\pi}{m-1} + \frac{\pi}{m+1} = \frac{2m\pi}{m^2-1}.
Hence:
\left|c_{nm}\right| \leq \frac{2^{(1-k)/2}\,L}{m\,2^{2k}\sqrt{\pi}} \cdot \frac{2m\pi}{m^2-1} = \frac{2^{3/2}\sqrt{\pi}\,L}{2^{5k/2}\left(m^2-1\right)}.
Since n \leq 2^{k-1} < 2^k, we have 2^{-5k/2} \leq 2^{-3k/2} \leq n^{-3/2}, and the inequality becomes:
\left|c_{nm}\right| \leq \frac{2^{3/2}\sqrt{\pi}\,L}{n^{3/2}\left(m^2-1\right)}.
Therefore, the wavelet expansion \sum_{n=1}^{\infty}\sum_{m=0}^{\infty} c_{nm}\,Q_{nm}(t) converges to f(t) uniformly. □
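The coefficient bound of Theorem 1 can be spot-checked numerically for a smooth test function, using the theta-variable form of the coefficient integral from the proof (this assumes the normalization as reconstructed above, so it is an illustrative check rather than a definitive one); the ratio of each |c_nm| to its bound should stay below one:

```python
import numpy as np

def trap(y, x):
    # simple trapezoidal rule (kept explicit for NumPy-version safety)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# test function and a bound L on |f''| over [0, 1]
f = lambda t: np.sin(2 * np.pi * t)
L = (2 * np.pi) ** 2

k = 2
theta = np.linspace(0.0, np.pi, 20001)
worst = 0.0
for n in range(1, 2 ** (k - 1) + 1):
    t = (np.cos(theta) + 2 * n - 1) / 2 ** k      # t stays inside the support
    for m in range(2, 8):
        # c_nm = 2^{(1-k)/2}/sqrt(pi) * int_0^pi f(t(theta)) cos(m theta) dtheta
        c = 2 ** ((1 - k) / 2) / np.sqrt(np.pi) * trap(f(t) * np.cos(m * theta), theta)
        bound = 2 ** 1.5 * np.sqrt(np.pi) * L / (n ** 1.5 * (m ** 2 - 1))
        worst = max(worst, abs(c) / bound)
print(worst)
```

In this sketch every computed ratio is far below one, consistent with the inequality (the bound is conservative).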

Accuracy Analysis

If the function f(t) is expanded in terms of the new shifted wavelet functions as in Equations (4) and (5), that is:
f(t) = \sum_{n=1}^{\infty}\sum_{m=0}^{\infty} c_{nm}\,Q_{nm}(t),
then it is not possible to compute an infinite number of terms, and we must thus truncate the series as below:
f_M(t) = \sum_{n=1}^{2^{k-1}}\sum_{m=0}^{M-1} c_{nm}\,Q_{nm}(t),
so that:
f(t) - f_M(t) = r(t),
where r(t) is the residual function defined by:
r(t) = \sum_{n=2^{k-1}+1}^{\infty}\sum_{m=M}^{\infty} c_{nm}\,Q_{nm}(t).
We must select the coefficients such that the norm of r(t) is less than some convergence value \epsilon, that is:
\left(\int_0^1 \left|f(t) - f_M(t)\right|^2 w_n(t)\,dt\right)^{1/2} < \epsilon,
for all M greater than some positive integer value M_0.
The accuracy of a numerical method is crucial to describing its applicability and performance in solving problems. Theorem 2 discusses the accuracy of the wavelet representation of a function.
Theorem 2.
Let f be a continuous function defined on the interval [0, 1) with |f''(t)| < L. Then the accuracy estimate is given by:
\|r\| \leq 2^{3/2}\sqrt{\pi}\,L \left(\sum_{n=2^{k-1}+1}^{\infty}\sum_{m=M}^{\infty} \frac{1}{n^3 \left(m^2-1\right)^2}\right)^{1/2},
where:
\|r\| = \left(\int_0^1 r^2(t)\,w_n(t)\,dt\right)^{1/2}.
Proof. 
Since:
\|r\|^2 = \int_0^1 r^2(t)\,w_n(t)\,dt = \int_0^1 \left(\sum_{n=2^{k-1}+1}^{\infty}\sum_{m=M}^{\infty} c_{nm}\,Q_{nm}(t)\right)^2 w_n(t)\,dt = \sum_{n=2^{k-1}+1}^{\infty}\sum_{m=M}^{\infty} c_{nm}^2 \int_0^1 Q_{nm}^2(t)\,w_n(t)\,dt,
from the orthonormality of Q_{nm}, one obtains:
\|r\|^2 = \sum_{n=2^{k-1}+1}^{\infty}\sum_{m=M}^{\infty} c_{nm}^2.
Using the bound of Equation (7), c_{nm}^2 \leq \frac{8\pi L^2}{n^3\left(m^2-1\right)^2}, and hence:
\|r\|^2 \leq 8\pi L^2 \sum_{n=2^{k-1}+1}^{\infty}\sum_{m=M}^{\infty} \frac{1}{n^3\left(m^2-1\right)^2}.
Taking the square root gives the stated estimate. □

4. Operational Matrix of the NSW

The present section derives the operational matrix of derivatives for the NSW. Based on the NSW vector \Phi(t) introduced through Equation (1), the operational matrix of the first derivative can be determined as below.
The following theorem is needed hereafter:
Theorem 3.
Let \Phi(t) be the NSW vector defined through Equation (1). Then the first derivative of the vector \Phi(t) can be expressed as:
\frac{d\Phi(t)}{dt} = D_\Phi\,\Phi(t),
where D_\Phi is the 2^{k-1}(M+1) \times 2^{k-1}(M+1) operational matrix of differentiation, which has the block-diagonal form:
D_\Phi = \begin{pmatrix} D & O & \cdots & O \\ O & D & \cdots & O \\ \vdots & \vdots & \ddots & \vdots \\ O & O & \cdots & D \end{pmatrix}, \qquad (9)
in which D is an (M+1) \times (M+1) matrix whose elements (i, j = 0, 1, \ldots, M) can be explicitly obtained as below:
D_{i,j} = \begin{cases} 2^{k}\,i, & i \text{ odd and } j = 0, \\ 2^{k+1}\,i, & i > j \geq 1 \text{ and } i - j \text{ odd}, \\ 0, & \text{otherwise}. \end{cases} \qquad (10)
Proof. 
By using the NSW, the r-th element of the vector \Phi(t) can be written as:
Q_r(t) = Q_{n,m}(t) = \frac{2^{(k-1)/2}}{\sqrt{\pi}}\,Ms_m\!\left(2^k t - 2n + 1\right)
for \frac{n-1}{2^{k-1}} \leq t \leq \frac{n}{2^{k-1}}, and Q_r(t) = 0 outside this interval, where r = (n-1)(M+1) + m + 1, m = 0, 1, \ldots, M, n = 1, 2, \ldots, 2^{k-1}. Equivalently:
Q_{n,m}(t) = \frac{2^{(k-1)/2}}{\sqrt{\pi}}\,Ms_m\!\left(2^k t - 2n + 1\right)\chi_{\left[\frac{n-1}{2^{k-1}},\,\frac{n}{2^{k-1}}\right]}(t), \qquad (11)
where \chi denotes the indicator function:
\chi_{\left[\frac{n-1}{2^{k-1}},\,\frac{n}{2^{k-1}}\right]}(t) = \begin{cases} 1, & t \in \left[\frac{n-1}{2^{k-1}}, \frac{n}{2^{k-1}}\right], \\ 0, & \text{otherwise}. \end{cases}
Differentiating Equation (11) with respect to t yields:
\frac{dQ_{n,m}(t)}{dt} = \frac{2^{(k-1)/2}}{\sqrt{\pi}}\,2^k\,\dot{Ms}_m\!\left(2^k t - 2n + 1\right), \qquad \text{for } t \in \left[\frac{n-1}{2^{k-1}}, \frac{n}{2^{k-1}}\right]. \qquad (13)
Hence, the NSW expansion of \frac{dQ_{n,m}(t)}{dt} contains only those elements of \Phi(t) that are non-zero on \left[\frac{n-1}{2^{k-1}}, \frac{n}{2^{k-1}}\right], namely Q_r(t), r = (n-1)(M+1)+1, \ldots, n(M+1). This enables us to expand \frac{dQ_{n,m}(t)}{dt} in terms of the NSW in the form:
\frac{dQ_{n,m}(t)}{dt} = \sum_{r=(n-1)(M+1)+1}^{n(M+1)} c_r\,Q_r(t),
which implies that the operational matrix D_\Phi is a block matrix, as defined in Equation (9). Moreover, since Q_{n,0}(t) is constant on its support, \frac{dQ_{n,0}(t)}{dt} = 0, that is, the derivative vanishes for r = (n-1)(M+1)+1, n = 1, 2, \ldots, 2^{k-1}; as a result, the elements of the first row of the matrix D given in Equation (10) are zeros.
Now, the derivative \dot{Ms}_m can be expressed in terms of the polynomials Ms themselves:
\dot{Ms}_m(t) = 2m \begin{cases} \displaystyle\sum_{i=1}^{(m-1)/2} Ms_{m-2i+1}(t) + \frac{1}{2}\,Ms_0(t), & m \text{ odd}, \\ \displaystyle\sum_{i=1}^{m/2} Ms_{m-2i+1}(t), & m \text{ even}. \end{cases} \qquad (15)
Substituting Equation (15) back into Equation (13) and expanding in terms of the NSW basis allows us to obtain:
\frac{dQ_{n,m}(t)}{dt} = 2^{k+1} m \begin{cases} \displaystyle\sum_{i=1}^{(m-1)/2} Q_{n,m-2i+1}(t) + \frac{1}{2}\,Q_{n,0}(t), & m \text{ odd}, \\ \displaystyle\sum_{i=1}^{m/2} Q_{n,m-2i+1}(t), & m \text{ even}. \end{cases}
Choosing D_{i,j} as in Equation (10), the relation \frac{d\Phi(t)}{dt} = D_\Phi\,\Phi(t) holds. □
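The derivative relation of Theorem 3 can be verified exactly, for the single-block case k = 1, by representing each wavelet as a polynomial. This is a sketch under the element formula of Theorem 3 as reconstructed in this section; `Ms_at` and `D_block` are illustrative names of ours:

```python
import numpy as np
from numpy.polynomial import Polynomial as P

def Ms_at(y, m):
    # Ms_m evaluated at the (polynomial) argument y, via the recurrence
    # Ms_0 = 2, Ms_1 = 2s - 1, Ms_m = (2s - 1) Ms_{m-1} - Ms_{m-2}
    p0, p1 = P([2.0]), 2 * y - 1
    if m == 0:
        return p0
    if m == 1:
        return p1
    for _ in range(2, m + 1):
        p0, p1 = p1, (2 * y - 1) * p1 - p0
    return p1

def D_block(k, M):
    # element formula of Theorem 3 (as reconstructed here)
    D = np.zeros((M + 1, M + 1))
    for i in range(M + 1):
        for j in range(M + 1):
            if j == 0 and i % 2 == 1:
                D[i, j] = 2 ** k * i
            elif i > j >= 1 and (i - j) % 2 == 1:
                D[i, j] = 2 ** (k + 1) * i
    return D

# single-block check (k = 1, n = 1, support [0, 1])
k, M, n = 1, 5, 1
y = P([-(2 * n - 1), 2 ** k])                 # y(t) = 2^k t - 2n + 1
Q = [2 ** ((k - 1) / 2) / np.sqrt(np.pi) * Ms_at(y, m) for m in range(M + 1)]
D = D_block(k, M)
max_err = 0.0
for i in range(M + 1):
    diff = Q[i].deriv() - sum(Q[j] * D[i, j] for j in range(M + 1))
    max_err = max(max_err, float(np.max(np.abs(diff.coef))))
print(max_err)
```

The coefficient-wise difference between dQ_i/dt and the matrix-vector expansion is zero up to rounding, which supports the block and element structure stated above.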

5. The NSW Algorithm for Solving Optimal Control Problem

In this section, the task of optimizing systems governed by ordinary differential equations, which leads to optimal control problems, is investigated; such problems arise in many applications in astronautics and aeronautics.
Consider the following problem on the fixed interval [0, 1]: minimize
J = \int_0^1 F\left(t, u(t), x(t)\right)dt,
subject to:
u(t) = f\left(t, x(t), \dot{x}(t)\right), \qquad (17)
together with the conditions:
x(0) = x_0, \qquad x(1) = x_1,
where x: [0, 1] \to \mathbb{R} is the state variable, u: [0, 1] \to \mathbb{R} is the control variable, and the function f is assumed to be real-valued and continuously differentiable.
First, we approximate the state variable x(t) and its derivative \dot{x}(t) in terms of the NSW as below:
x(t) = \sum_{i=0}^{m} c_i\,Q_i(t), \qquad (19)
\dot{x}(t) = \sum_{i=0}^{m} c_i\,\dot{Q}_i(t) = C^T D_\Phi\,\Phi(t), \qquad (20)
where C = \left[c_0\ c_1\ \cdots\ c_m\right]^T is the vector of unknown parameters.
The second step is to obtain the approximation of the control variable by substituting Equations (19) and (20) into Equation (17):
u(t) = f\left(t, \sum_{i=0}^{m} c_i\,Q_i(t), \sum_{i=0}^{m} c_i\,\dot{Q}_i(t)\right).
Finally, the performance index J is obtained as a function of the unknowns c_0, c_1, \ldots, c_m:
J(C) = \int_0^1 F\left(t, \sum_{i=0}^{m} c_i\,Q_i(t), \sum_{i=0}^{m} c_i\,\dot{Q}_i(t)\right)dt.
For the quadratic performance indices considered here, the resulting mathematical programming problem simplifies to the quadratic program:
J = \frac{1}{2}\,C^T H\,C,
where H is the constant Hessian matrix of J with respect to C; for instance, for F = u^2 + x^2 with u = \dot{x}, one has H = 2\int_0^1 \left(\Phi(t)\Phi^T(t) + D_\Phi\Phi(t)\Phi^T(t)D_\Phi^T\right)dt. The problem is subject to the boundary-condition constraints:
FC - b = 0,
where:
F = \begin{bmatrix} \Phi^T(0) \\ \Phi^T(1) \end{bmatrix}, \qquad b = \begin{bmatrix} x_0 \\ x_1 \end{bmatrix}.
Using the Lagrange multiplier technique, the optimal values of the unknown parameters C are:
C^{*} = H^{-1} F^T \left(F H^{-1} F^T\right)^{-1} b.
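The final Lagrange-multiplier expression is the standard closed-form solution of an equality-constrained quadratic program. A small self-contained sketch (the toy H, F, and b below are of our choosing, not taken from the paper):

```python
import numpy as np

def solve_qp_equality(H, F, b):
    """Minimize (1/2) C^T H C subject to F C = b via the Lagrange
    multiplier formula C* = H^{-1} F^T (F H^{-1} F^T)^{-1} b."""
    Hinv = np.linalg.inv(H)
    return Hinv @ F.T @ np.linalg.solve(F @ Hinv @ F.T, b)

# toy check: with H = 2I the minimizer is the minimum-norm solution of FC = b
H = 2.0 * np.eye(4)
F = np.array([[1.0, 1.0, 1.0, 1.0],
              [1.0, -1.0, 1.0, -1.0]])
b = np.array([1.0, 0.0])
C = solve_qp_equality(H, F, b)
```

For H proportional to the identity, the formula reduces to the pseudoinverse (minimum-norm) solution, which gives an easy correctness check.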

6. Test Examples

In this section, the results of the numerical simulation of optimal control problems solved with the proposed new shifted wavelet method are presented. Different test cases for m, defined on the interval [0, 1], are considered with a single state function and a single control function; note that the proposed method can also solve problems with multiple controls. The test problems are continuous optimal control problems whose analytic solutions are known, which allows the validation of the proposed algorithm by comparing its results with the exact solutions.
Example 1.
In the following example, we have one state function x(t) and one control function u(t). This problem is concerned with the minimization of [25,26]:
\min J = \int_0^1 \left(u^2(t) + x^2(t)\right)dt,
subject to:
u(t) = \dot{x}(t),
with the boundary conditions x(0) = 0, x(1) = 0.5.
The exact value of the performance index is J = 0.328258821379.
Table 1 shows the values of the coefficients, and Table 2 and Table 3 give the values of the state and the control, respectively.
Table 4 gives the absolute errors E_m = |J_exact − J_m| that the NSW method produces in comparison with the following methods:
  • The method existing in [25].
  • Chebyshev method proposed in [26].
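For illustration, Example 1 can be reproduced with the state-parameterization steps of Section 5 using a plain polynomial basis in place of the NSW (an assumption made here for simplicity; the closed-form Hessian below is specific to this example's integrand):

```python
import numpy as np

# state parameterization x(t) = sum_i c_i t^i (illustrative stand-in basis);
# J = int_0^1 (xdot^2 + x^2) dt = C^T H C with
# H[i,j] = int_0^1 (i j t^{i+j-2} + t^{i+j}) dt = ij/(i+j-1) + 1/(i+j+1)
m = 5
n = m + 1
H = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        dterm = i * j / (i + j - 1) if i > 0 and j > 0 else 0.0
        H[i, j] = dterm + 1.0 / (i + j + 1)

# boundary conditions x(0) = 0, x(1) = 0.5 as linear constraints F C = b
F = np.zeros((2, n))
F[0, 0] = 1.0          # x(0) = c_0
F[1, :] = 1.0          # x(1) = sum of coefficients
b = np.array([0.0, 0.5])

# Lagrange-multiplier solution of min C^T H C subject to F C = b
Hinv = np.linalg.inv(H)
C = Hinv @ F.T @ np.linalg.solve(F @ Hinv @ F.T, b)
J = float(C @ H @ C)
print(J)   # close to the exact value 0.25*coth(1) = 0.328258821...
```

Because the trial space is a restriction of the admissible functions, the computed J is an upper bound on the exact performance index and approaches it quickly as the degree grows.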
Example 2.
Consider the second test problem [26]:
\min J = \int_0^1 \left(u^2(t) + 3x^2(t)\right)dt,
u(t) = \dot{x}(t) - x(t), \qquad x(0) = 1, \qquad x(1) = 0.51314538.
The exact solution of this problem is:
u(t) = -\frac{3e^4}{e^4 + 3}\,e^{-2t} + \frac{3}{e^4 + 3}\,e^{2t}, \qquad x(t) = \frac{e^4}{e^4 + 3}\,e^{-2t} + \frac{3}{e^4 + 3}\,e^{2t}, \qquad J = 2.791659975.
Table 5 shows the values of the coefficients, and Table 6 and Table 7 give the values of the state and the control, respectively, whereas Table 8 lists the absolute errors that the NSW method produces and compares our technique with the method presented in [26]. From these tables, it can be seen that the state and the control variables are accurately approximated by the proposed method.
Table 8 also illustrates the fast convergence rate of the proposed method, since the errors decay rapidly as the number of NSW increases.
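Example 2 can be checked the same way. Again a plain polynomial basis is substituted for the NSW, so the error magnitudes below are only indicative of the state-parameterization approach, not reproductions of the NSW values in Table 8:

```python
import numpy as np

def solve_example2(m):
    # x(t) = sum c_i t^i; J = int_0^1 ((xdot - x)^2 + 3 x^2) dt = C^T H C
    n = m + 1
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            h = 4.0 / (i + j + 1)             # t^{i+j} terms: (1 + 3) t^{i+j}
            if i + j >= 1:
                h -= 1.0                      # -(i+j) t^{i+j-1} integrates to -1
            if i >= 1 and j >= 1:
                h += i * j / (i + j - 1)      # xdot * xdot term
            H[i, j] = h
    F = np.zeros((2, n))
    F[0, 0] = 1.0                             # x(0) = 1
    F[1, :] = 1.0                             # x(1) = 0.51314538
    b = np.array([1.0, 0.51314538])
    Hinv = np.linalg.inv(H)
    C = Hinv @ F.T @ np.linalg.solve(F @ Hinv @ F.T, b)
    return float(C @ H @ C)

errs = [abs(solve_example2(m) - 2.791659975) for m in (3, 4, 5)]
print(errs)
```

Because the polynomial trial spaces are nested, the error is non-increasing in m, mirroring the qualitative behavior reported in Table 8.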
Example 3.
Consider the third test problem
J = \frac{1}{2}\int_0^1 \left(u^2(t) + x^2(t)\right)dt,
u(t) = \dot{x}(t) - x(t), \qquad x(0) = 1, \qquad x(1) = 0.3678794412, \qquad J_{exact} = 1.
Table 9 shows the values of the coefficients, while Table 10 and Table 11 compare the exact and approximate solutions of x(t) and u(t), respectively, for m = 3, 4, 5. The absolute errors of J for various values of m are listed in Table 12. From these results, it is worth noting that the approximate solutions obtained by the proposed method coincide with the exact solutions.
Example 4.
Consider the fourth test problem [26]:
\min J = \int_0^1 \left(0.5\,u^2(t) + x^2(t)\right)dt,
u(t) = \dot{x}(t) - 0.5\,x(t), \qquad x(0) = 1, \qquad x(1) = 0.5018480732.
The exact solution of this problem is:
u(t) = \left(2e^{3t} - e^{3}\right)a, \qquad x(t) = \left(2e^{3t} + e^{3}\right)a, \qquad \text{where } a = \frac{2e^{-3t/2}}{2\left(1 + e^{3}\right)}, \qquad J = 0.8641644978.
Table 13 compares the absolute errors of the presented wavelet method with those of the existing method in [26] for different values of m. The absolute errors of the presented method are smaller than those of the other method, and they decrease as the value of m increases.
It is clear that the approximate solution of the performance index at m = 8 is in very good agreement with the corresponding exact solution. Table 13 reports the absolute errors of J_m obtained by the proposed method at m = 3, 4, 5 in comparison with the method in [26] at m = 2, 3, 4. The obtained results show that the approximate solutions of the proposed method are more accurate than those of the method in [26]. In addition, the fast convergence rate of the proposed method is also illustrated by the absolute error results, since the errors decay rapidly as the number of NSW increases.

7. Discussion

The NSW coefficients for the state function x(t) and the control function u(t), the NSW approximations x_m(t) and u_m(t) of orders m = 3, 4 and 5, and the error estimates E_m for different values of m are reported in Table 1, Table 2, Table 3 and Table 4 for Example 1, in Table 5, Table 6, Table 7 and Table 8 for Example 2, and in Table 9, Table 10, Table 11 and Table 12 for Example 3, while Table 13 gives the error estimates E_m for Example 4 at different values of m. A comparison between the NSW approximation and the exact solution shows that the errors decay rapidly as m increases. One important advantage of the NSW method is that J_m converges faster than with some other methods in the literature (see [25,26]). Therefore, by proceeding to approximations with a suitable value of m, the results obtained by the proposed method rapidly tend to the exact results. The NSW approximation of order five is a very accurate approximation of the exact solution. Examples 2–4 have been solved by many researchers using different approaches, and the results obtained by the NSW method with state parameterization are the best among them. From the results of Examples 2–4, it is clear that our algorithm gives better or comparable results relative to the algorithms in [25,26], although the amount of computation in our method is much less than in those algorithms.
A comparison between the exact results and those for m = 3 shows that the error in the performance index is of the order of 10^{−4}, while for m = 5, agreement of about nine decimal places is obtained in the performance index. The results gradually tend towards the exact results as we systematically proceed to higher-order approximations.
Table 4, Table 8, Table 12 and Table 13 report the absolute errors of the performance index obtained by our method in comparison with the methods in [25,26] at m = 3, 4, 5. The obtained results show that the absolute errors of the proposed method are better than those obtained in [25,26]. From these tables, it can be seen that the state and the control variables are accurately approximated by the presented method.

8. Conclusions

This paper presents a new technique for obtaining numerical solutions of optimal control problems. The derivation of the method is based on the construction of a new shifted wavelet together with its operational matrix of derivatives. One of the advantages of the proposed technique is that it adopts only a limited number of wavelet basis functions.
Approximate and exact solutions of the examples are compared. For Example 1, Table 4 shows that at m = 5, the results obtained by the proposed method are better than those in [26] and [25], with absolute errors of the performance index of 9.3 × 10−9 (present method), 1.6 × 10−8 ([26]) and 2.1 × 10−4 ([25]). Numerical results for Example 2 were presented in [26] with a best absolute error of 4.4 × 10−3, while in our method the best absolute error is 2.0 × 10−6. Absolute errors for Example 4 were also given as 1.9 × 10−4 and 7.1 × 10−8 in [26] and the present work, respectively; the best absolute errors of Example 4 presented in [26] are given in Table 13. As can be seen from Table 4, Table 8, Table 12 and Table 13, the present method is highly efficient and accurate, even when only a small number of basis wavelet functions is used.

Author Contributions

Methodology, G.A.; validation, S.S.; formal analysis, G.A. and S.S.; resources, G.A.; supervision, S.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

M_m(t): the special polynomials defined on the interval [−1, 1].
Ms_m(t): the shifted special polynomials defined on the interval [0, 1].
Q_{nm}(t): the new shifted wavelet functions.
Φ(t): the vector of the basis functions.
w(t): the weight function.
f̈(t): the second derivative of f.
⟨·, ·⟩: the inner product operator on Hilbert space.
f(t) ∈ L²_w[0, 1]: means that ∫₀¹ f²(t)w(t)dt is finite.
f ∈ C²[0, 1): f and its first derivative ḟ are continuous.
J: performance index value.
x(t): state variable.
u(t): control variable.
r(t): residual function.
ε: convergence value greater than zero.
D_Φ: operational matrix of derivatives.
ℝ: real numbers.
χ: indicator (characteristic) function.
u_exact: exact values of the control variable.
x_exact: exact values of the state variable.
u_m: approximate values of the control variable.
x_m: approximate values of the state variable.
J_exact: exact value of the performance index.
J_m: approximate value of the performance index.
F: integrand function.
C: vector of unknown parameters.
C*: vector of optimal parameters.
E_m = |J_exact − J_m|: the absolute error.

References

  1. Zhaohua, G.; Chongyang, L.; Kok, L.; Song, W.; Yonghong, W. Numerical solution of free final time fractional optimal control problems. Appl. Math. Comput. 2021, 405, 126270.
  2. Hans, G.; Christian, K.; Andreas, M.; Andreas, P. Numerical solution of optimal control problems with explicit and implicit switches. Optim. Methods Softw. 2018, 33, 450–474.
  3. Wang, Z.; Yan, L. An Indirect Method for Inequality Constrained Optimal Control Problems. IFAC-PapersOnLine 2017, 50, 4070–4075.
  4. Yang, C.; Fabien, B. An adaptive mesh refinement method for indirectly solving optimal control problems. Numer. Algorithms 2022, 91, 193–225.
  5. Nave, O. Modification of Semi-Analytical Method Applied System of ODE. Mod. Appl. Sci. 2020, 14, 75.
  6. Mohammad, A. A modified pseudospectral method for indirect solving a class of switching optimal control problems. Proc. Inst. Mech. Eng. Part G J. Aerosp. Eng. 2022, 234, 1531–1542.
  7. Mohammad, H. A new direct method based on the Chebyshev cardinal functions for variable-order fractional optimal control problems. J. Frankl. Inst. 2018, 355, 4970–4995.
  8. Mohamed, A.; Mohand, B.; Nacima, M.; Philippe, M. Direct method to solve linear-quadratic optimal control problems. Numer. Algebra Control. Optim. 2021, 11, 645–663.
  9. Askhat, D.; Elena, S.; Sergey, K. Approaches to Numerical Solution of Optimal Control Problem Using Evolutionary Computations. Appl. Sci. 2021, 11, 7096.
  10. Mirvakili, M.; Allahviranloo, T.; Soltanian, F. A numerical method for approximating the solution of fuzzy fractional optimal control problems in Caputo sense using Legendre functions. J. Intell. Fuzzy Syst. 2022, 43, 3827–3858.
  11. Viorel, M.; Iulian, A. Optimal Control Systems Using Evolutionary Algorithm-Control Input Range Estimation. Automation 2022, 3, 95–115.
  12. Marzban, H.R.; Malakoutikhah, F. Solution of delay fractional optimal control problems using a hybrid of block-pulse functions and orthonormal Taylor polynomials. J. Frankl. Inst. 2019, 356, 8182–8215.
  13. Khamis, N.; Selamat, H.; Ismail, F.S.; Lutfy, O.F. Optimal exit configuration of factory layout for a safer emergency evacuation using crowd simulation model and multi-objective artificial bee colony optimization. Int. J. Integr. Eng. 2019, 11, 183–191.
  14. Behzad, K.; Delavarkhalafi, A.; Karbassi, M.; Boubaker, K. A Numerical Approach for Solving Optimal Control Problems Using the Boubaker Polynomials Expansion Scheme. J. Interpolat. Approx. Sci. Comput. 2014, 3, 1–18.
  15. Ayat, O.; Mirkamal, M. Solving optimal control problems by using Hermite polynomials. Comput. Methods Differ. Equ. 2020, 8, 314–329.
  16. Abed, M.S.; Lutfy, O.F.; Al-Doori, Q.F. Online Path Planning of Mobile Robots Based on African Vultures Optimization Algorithm in Unknown Environments. J. Eur. Des Syst. Autom. 2022, 55, 405–412.
  17. Sayevand, K.; Zarvan, Z.; Nikan, O. On Approximate Solution of Optimal Control Problems by Parabolic Equations. Int. J. Appl. Comput. Math. 2022, 8, 248.
  18. Suman, S.; Kumar, A.; Singh, G.K. A new closed form method for design of variable bandwidth linear phase FIR filter using Bernstein multiwavelets. Int. J. Electron. 2015, 102, 635–650.
  19. Mahdi, S.M.; Lutfy, O.F. Control of a servo-hydraulic system utilizing an extended wavelet functional link neural network based on sine cosine algorithms. Indones. J. Electr. Eng. Comput. Sci. 2022, 25, 847–856.
  20. Keshavarz, E.; Ordokhani, Y.; Razzaghi, M. The Taylor wavelets method for solving the initial and boundary value problems of Bratu-type equations. Appl. Numer. Math. 2018, 128, 205–216.
  21. Akram, K.; Asadollah, M.; Sohrab, E. Solving Optimal Control Problem Using Hermite Wavelet. Numer. Algebra Control. Optim. 2019, 9, 101–112.
  22. Rabiei, K.; Ordokhani, Y. A new operational matrix based on Boubaker wavelet for solving optimal control problems of arbitrary order. Trans. Inst. Meas. Control 2020, 42, 1858–1870.
  23. Jafari, H.; Nemati, S.; Ganji, R.M. Operational matrices based on the shifted fifth-kind Chebyshev polynomials for solving nonlinear variable order integro-differential equations. Adv. Differ. Equ. 2021, 2021, 435.
  24. Vellappandi, M.; Govindaraj, V. Operator theoretic approach to optimal control problems characterized by the Caputo fractional differential equations. Results Control Optim. 2023, 10, 100194.
  25. Kafash, B.; Delavarkhalafi, A. Restarted State Parameterization Method for Optimal Control Problems. J. Math. Comput. Sci. 2015, 14, 151–161.
  26. Kafash, B.; Delavarkhalafi, A.; Karbassi, S.M. Application of Chebyshev polynomials to derive efficient algorithms for the solution of optimal control problems. Sci. Iran. 2012, 19, 795–805.
Table 1. The unknown coefficients c_i in Example 1.

c_i    m = 3                m = 4                m = 5
c_0    0.230545711338576    0.236727303476872    0.256199322878168
c_1    0.123366816327231    0.130013877070087    0.160508907830196
c_2    0.006294225322818    0.017822378525363    0.047112195129104
c_3                         0.003860998979349    0.026814740481690
c_4                                              0.005789066578131
Table 2. Approximate and exact values of x(t) for Example 1.

t      m = 3             m = 4             m = 5              x_exact
0.2    0.081818181818    0.085725158561    0.0856632657718    0.0856602272147
0.4    0.172727272727    0.174680761099    0.1747677613426    0.1747583001210
0.6    0.272727272727    0.270773784355    0.2708607845984    0.2708700372292
0.8    0.381818181818    0.377911205073    0.3778493122834    0.3778527400206
1      0.500000000000    0.500000000000    0.5000000000000    0.5000000000000
Table 3. Approximate and exact values of u(t) for Example 1.

t      m = 3                m = 4                m = 5                u_exact
0.2    0.431818181818182    0.433446088794942    0.434113187975870    0.433996647185271
0.4    0.477272727272728    0.459365750528545    0.459887849303989    0.459952039568011
0.6    0.522727272727274    0.504820295983084    0.504298197207639    0.504366922299765
0.8    0.568181818181821    0.569809725158559    0.569142625977633    0.569023820575788
1      0.613636363636367    0.654334038054970    0.656219529904781    0.656517642749666
Table 4. A comparison of the results of Example 1 (errors E_m).

m | Presented method | Method in [26] | Method in [25]
3 | 0.00033 | 0.00033 | 0.0050
4 | 0.00000051 | 0.00000052 | 0.0034
5 | 0.0000000093 | 0.000000016 | 0.00021
Table 5. The unknown coefficients c_i in Example 2.

c_i | m = 3 | m = 4 | m = 5
c_0 | 0.355507506871318 | 0.346802467917968 | 0.353341978261637
c_1 | 0.011865347625915 | 0.003658158193373 | 0.011466719011776
c_2 | 0.059865632941091 | 0.047554848792278 | 0.053850500952116
c_3 |  | −0.004103594716271 | 0.00067914878500
c_4 |  |  | 0.001195685875318
Table 6. Approximate and exact values of x(t) for Example 2.

t | m = 3 | m = 4 | m = 5 | x_exact
0.2 | 0.72969817542857 | 0.71547355348770 | 0.71303748341008 | 0.7131081208852
0.4 | 0.54586180114285 | 0.53874949017242 | 0.54172690915617 | 0.5418429752453
0.6 | 0.44849087714285 | 0.45560318811329 | 0.45858060709704 | 0.4584348199397
0.8 | 0.43758540342857 | 0.45181002536944 | 0.44937395529182 | 0.4493594610058
1 | 0.51314538000000 | 0.51314538000000 | 0.51314537999999 | 0.5131453766955
Table 7. Approximate and exact values of u(t) for Example 2.

t | m = 3 | m = 4 | m = 5 | u_exact
0.2 | −2.56767274857141 | −2.71584589378880 | −2.78633403260869 | −2.79165997531006
0.4 | −1.86504367257142 | −1.85674597643924 | −1.83028754865182 | −1.82851756831608
0.6 | −1.24888004685713 | −1.17657155199105 | −1.16048897823791 | −1.16185967374545
0.8 | −0.71918187142857 | −0.66109799850335 | −0.68313541022400 | −0.68359121816281
1 | −0.275949146285715 | −0.296100694035280 | −0.31768698166747 | −0.31616348542896
Table 8. Estimated values of J_m for m = 3, 4, 5 for Example 2.

m | J_m | E_m | J_m in [26] | E_m in [26]
3 | 2.79718233539 | 0.0055 | 2.7977436304 | 0.0060
4 | 2.79237308337 | 0.00071 | 2.7960838642 | 0.0044
5 | 2.79166202469 | 0.0000020 | 2.7960838642 | 0.0044
Table 9. The unknown coefficients c_i in Example 3.

c_i | m = 3 | m = 4 | m = 5
c_0 | 0.317388943264860 | 0.314366203750058 | 0.314854167521389
c_1 | −0.10561159916574 | −0.10846146531065 | −0.10789004268669
c_2 | 0.017219482834726 | 0.012944683617373 | 0.013407812437593
c_3 |  | −0.00142493307245 | −0.00107009805596
c_4 |  |  | 0.00008870875412
Table 10. Approximate and exact values of x(t) for Example 3.

t | m = 3 | m = 4 | m = 5 | x_exact
0.2 | 0.8238348176509 | 0.8188954570054 | 0.8187261332539 | 0.81873075307798
0.4 | 0.6725401705963 | 0.6700704902736 | 0.6703085019619 | 0.67032004603563
0.6 | 0.5461160588363 | 0.5485857391591 | 0.5488237508474 | 0.54881163609402
0.8 | 0.4445624823709 | 0.4495018430164 | 0.4493325192649 | 0.44932896411722
1 | 0.3678794412000 | 0.3678794411999 | 0.3678794412000 | 0.36787944117144
Table 11. Approximate and exact values of u(t) for Example 3.

t | m = 3 | m = 4 | m = 5 | u_exact
0.2 | −1.642484391160 | −1.6396030974501 | −1.6376087511890 | −1.6374615061559
0.4 | −1.366837067632 | −1.3417286510180 | −1.3405383263440 | −1.3406400920712
0.6 | −1.116060279400 | −1.0958912234308 | −1.0975575714816 | −1.0976232721880
0.8 | −0.890154026461 | −0.8971514540429 | −0.89880715280116 | −0.8986579282344
1 | −0.689118308818 | −0.7405699822088 | −0.73541173113306 | −0.7357588823428
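For Example 3, the exact columns of Tables 10 and 11 agree to all tabulated digits with x(t) = e^(−t) and u(t) = −2e^(−t). As with Example 1, this closed form is inferred from the tabulated values rather than stated explicitly here, so the sketch below is a verification under that assumption:

```python
import math

# Closed forms inferred (assumption) from the "exact" columns of Tables 10 and 11:
#   x(t) = exp(-t),   u(t) = -2*exp(-t)
def x_exact(t):
    return math.exp(-t)

def u_exact(t):
    return -2.0 * math.exp(-t)

# (t, x_exact, u_exact) triples copied from Tables 10 and 11
rows = [
    (0.2, 0.81873075307798, -1.6374615061559),
    (0.6, 0.54881163609402, -1.0976232721880),
    (1.0, 0.36787944117144, -0.7357588823428),
]
for t, xv, uv in rows:
    # tolerances reflect the ~13-14 digit rounding of the tabulated values
    assert abs(x_exact(t) - xv) < 1e-11, t
    assert abs(u_exact(t) - uv) < 1e-10, t
```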
Table 12. Estimated values of J_m for m = 3, 4, 5 of Example 3.

m | J_m | E_m
3 | 1.000272934759060 | 0.00027
4 | 1.000001904254601 | 0.0000019
5 | 1.000000007516538 | 0.0000000075
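The error column of Table 12 equals J_m − 1 to the two significant figures shown, which suggests the exact cost for Example 3 is J* = 1. Treating J* = 1 as an assumption inferred from the table rather than a stated fact, the relation E_m = J_m − J* can be checked directly:

```python
# Assumption: the exact cost of Example 3 is J* = 1, so E_m = J_m - J*.
# (J_m, E_m) pairs copied from Table 12; E_m is rounded to 2 significant figures.
J_EXACT = 1.0
rows = [
    (1.000272934759060, 0.00027),
    (1.000001904254601, 0.0000019),
    (1.000000007516538, 0.0000000075),
]
for J_m, E_m in rows:
    # agreement within the rounding of the tabulated E_m
    assert abs((J_m - J_EXACT) - E_m) <= 0.05 * E_m
```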
Table 13. Estimated values of J_m for m = 3, 4, 5 of Example 4.

m | J_m | E_m | J_m in [26] | E_m in [26]
3 | 0.86472880938 | 0.000562 | 0.8645390446 | 0.00037
4 | 0.86421807235 | 0.0000533 | 0.8644550472 | 0.00029
5 | 0.86416456896 | 0.0000000714 | 0.8643546452 | 0.00019

Abass, G.; Shihab, S. Operational Matrix of New Shifted Wavelet Functions for Solving Optimal Control Problem. Mathematics 2023, 11, 3040. https://doi.org/10.3390/math11143040