Article

An Efficient Numerical Approach for Solving Systems of Fractional Problems and Their Applications in Science

1 Institute of Mathematical Sciences, Universiti Malaya, Kuala Lumpur 50603, Malaysia
2 Department of Mathematics, AL-Qunfudhah University College, Umm Al-Qura University, Al Qunfudhah 24382, Saudi Arabia
3 Department of Mathematics, Academy of Engineering and Medical Sciences, Khartoum 11115, Sudan
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(14), 3132; https://doi.org/10.3390/math11143132
Submission received: 17 June 2023 / Revised: 7 July 2023 / Accepted: 10 July 2023 / Published: 16 July 2023

Abstract

In this article, we present a new numerical approach for solving a class of systems of fractional initial value problems based on the operational matrix method. We derive the method and provide a convergence analysis. To reduce computational cost, we transform the algebraic problem produced by this approach into a set of 2 × 2 nonlinear equations, instead of solving a system of 2m × 2m equations. We apply our approach to three main applications in science: optimal control problems, Riccati equations, and clock reactions. We compare our results with those of other researchers, considering computational time, cost, and absolute errors. Additionally, we validate our numerical method by comparing our results with the integer model when the fractional order approaches one. We present numerous figures and tables to illustrate our findings. The results demonstrate the effectiveness of the proposed approach.

1. Introduction

The area of mathematics that focuses on the integration and differentiation of real or complex orders is known as fractional calculus. Despite its ancient origins, fractional calculus (FC) has gained significant popularity in recent years due to its wide range of applications [1,2,3,4,5,6]. One intriguing aspect of FC is the existence of multiple fractional operators, allowing researchers to choose the most suitable operator to describe real-world phenomena. In [7], the authors solved a system of fractional linear equations, while in [8], Al-Refai discussed some fundamental results for fractional derivatives with nonsingular kernels.
Initially, FC was limited to fractional integrals obtained by iteratively applying integrals to acquire nth-order integrals and replacing k with any integer. The corresponding derivatives were defined using classical methods. However, researchers have discovered new fractional operators with nonlocal and nonsingular kernels that better capture real-world phenomena by utilizing the limiting process and the Dirac delta function [9,10,11,12,13,14,15,16].
Various numerical techniques have been employed to solve nonlinear fractional differential equations, such as the Adomian decomposition method, the homotopy perturbation method, and the collocation method.
Among the numerical tools used to solve fractional problems, the operational matrix method (OMM) is particularly useful. Zamanpour and Ezzati [17] applied the OM of fractional integration of trigonometric functions to numerically solve nonlinear fractional weakly singular two-dimensional partial Volterra integral equations. Syam et al. [18] investigated delay equations and demonstrated the effectiveness of the OMM. They employed OMs and the properties of two-dimensional block-pulse functions (BPFs) to reduce two-dimensional fractional integral equations to systems of algebraic equations. Najafalizadeh and Ezzati [19] provided examples, both linear and nonlinear, to showcase the accuracy, efficiency, and speed of the operational matrix method.
In this paper, we investigate a fractional system of initial value problems given by
D^α ξ_1(τ) = P_1(ξ_1(τ), ξ_2(τ)),
D^α ξ_2(τ) = P_2(ξ_1(τ), ξ_2(τ)),
subject to the initial conditions
ξ_1(0) = η_1, ξ_2(0) = η_2,
where P_1 and P_2 are second-order polynomial functions of two variables, 0 < α ≤ 1, ξ_1(τ) and ξ_2(τ) are unknown functions defined on the interval 0 ≤ τ ≤ T, T is a positive constant, and η_1, η_2 are constants. The fractional derivative is considered in the Caputo sense. We can express P_1 and P_2 as follows:
P_1(ξ_1, ξ_2) = a_{2,0} ξ_1² + a_{1,1} ξ_1 ξ_2 + a_{0,2} ξ_2² + a_{1,0} ξ_1 + a_{0,1} ξ_2 + a_{0,0},
P_2(ξ_1, ξ_2) = b_{2,0} ξ_1² + b_{1,1} ξ_1 ξ_2 + b_{0,2} ξ_2² + b_{1,0} ξ_1 + b_{0,1} ξ_2 + b_{0,0},
where a_{i,j} and b_{i,j} are constants for i, j = 0, 1, 2. Systems (1)–(3) have several applications. For example, clock reactions have been investigated for over a hundred years by researchers such as Richards and Loomis [20], Forbes et al. [21], and Horváth and Nagypál [22]. Clock reactions are intriguing reactions that exhibit a dramatic, sudden change, such as a noticeable color change in the solution. The precise definition of this scientific phenomenon is yet to be determined. However, the iodate–sulfite reaction demonstrates the characteristics of a clock reaction, with an induction time followed by a rapid transition. One of the more recently discovered clock reactions is the chlorate–iodine reaction, which only takes place under UV light. This reaction involves the dissociation of iodine molecules, which then react sequentially with the chlorate. Clock reactions are not only important in chemistry education but also have industrial and biological applications. The clock reaction of vitamin C serves as an accessible model system for mathematical chemistry. Kerr, Thomson, and Smith [23] mathematically modeled a substrate-depletive, non-auto-catalytic clock reaction involving household chemicals (vitamin C, iodine, hydrogen peroxide, and starch) using a system of nonlinear ordinary differential equations. Its mathematical model, as described in Write [24,25], can be given as follows:
D^α ξ_1(τ) = −ξ_1(τ) ξ_2(τ) + η_1 η_2 (1 − 2 ξ_1(τ))²,
D^α ξ_2(τ) = −η_2 ξ_1(τ) ξ_2(τ),
with
ξ_1(0) = η, ξ_2(0) = 1,
where η, η_1, η_2 are positive real constants, 0 < α ≤ 1, and ξ_1(τ) and ξ_2(τ) are unknown functions with 0 ≤ τ ≤ T. Another interesting application of Systems (1)–(3) is the set of nonlinear Riccati differential equations (RDEs). RDEs find applications in various fields such as physics, algebraic geometry, and conformal mapping theory, see [9,10]. They also arise in numerous real-world problems. The OMM can be employed to solve systems of RDEs: it reduces nonlinear fractional-order RDEs to an algebraic system. This approach offers advantages such as low setup costs for the equations without the need for projection techniques like Galerkin or collocation methods. It is worth mentioning that we consider a system of 2 × 2 equations because the applications we discuss involve such systems. However, the technique generalizes to n × n systems, in which case we obtain a system of n × n algebraic equations. We can also study general nonlinear equations; in that case, we again end up with an n × n system and need a numerical method such as Newton's method to solve it.
As an example of the problems that we will discuss as an application of Systems (1)–(3), consider the following problem:
D^α ξ_1(τ) = a_0 + a_1 ξ_1(τ) + a_2 ξ_2(τ) + c_0 ξ_1(τ) ξ_2(τ) + a_3 ξ_1²(τ),
D^α ξ_2(τ) = b_0 + b_1 ξ_1(τ) + b_2 ξ_2(τ) + c_0 ξ_1(τ) ξ_2(τ) + b_3 ξ_2²(τ),
with
ξ_1(0) = η_0, ξ_2(0) = η_1,
where η_0, η_1, a_i, b_i are constants for i = 0, 1, 2, 3, and c_0 ≠ 0.
The third application which we discuss in this paper is optimal control problems (OCPs). Numerous fields, including aerospace engineering, robotics, economics, and finance, utilize optimal control. In aerospace engineering, optimal control is employed to design aircraft and spacecraft control systems that achieve desired trajectories while minimizing fuel consumption. The development of motion planning and trajectory optimization algorithms for robots in robotics heavily relies on optimal control techniques. In the field of finance, optimal control is applied to devise investment plans that maximize returns while adhering to predetermined risk levels, see [26,27]. We will study the following OCP:
F = (1/2) ∫_0^T [a_1(τ) ψ_1²(τ) + a_2(τ) ψ_2²(τ)] dτ
subject to
D^α ψ_1(t) = b_1(t) ψ_1(t) + b_2(t) ψ_2(t), ψ_1(t_0) = θ_1,
where a_1(t) ≥ 0, a_2(t) > 0, b_2(t) ≠ 0, and 0 < α ≤ 1.
The article is organized into six sections. Section 1 provides a brief literature overview of the applications related to the problem being studied and introduces the OMM as the solution approach. Section 2 presents the definitions and key findings for the BPF and Caputo derivative. In Section 3, we develop the OM approach. The convergence analysis is discussed in Section 4. Section 5 presents the numerical results, showcasing the application of the method. Finally, in Section 6, we provide conclusions and discuss the findings in detail.

2. Preliminaries

We present the definitions of the Caputo derivative and the fractional integral operator [28]. Additionally, we provide the definition and main properties of the block-pulse functions [18].
Definition 1.
A real function ξ(τ), τ > 0, is said to be in the space C_λ, λ ∈ ℝ, if there exists a real number q > λ such that ξ(τ) = τ^q ξ_1(τ), where ξ_1(τ) ∈ C[0, ∞), and it is said to be in the space C_λ^k if ξ^(k) ∈ C_λ, k ∈ ℕ [9].
Definition 2.
For α > 0, k − 1 < α < k, k ∈ ℕ, τ > 0, and ξ ∈ C_{−1}^k, the Caputo fractional derivative is defined by
D^α ξ(τ) = (1/Γ(k − α)) ∫_0^τ (τ − r)^{k−1−α} ξ^(k)(r) dr for α > 0, and D^α ξ(τ) = ξ(τ) for α = 0,
where Γ is the Gamma function [9].
Using the substitution ν = τ − r, we can observe that, for α > 0,
D^α τ^η = 0 if η ∈ {0, 1, 2, …} and η < α, and D^α τ^η = (Γ(η + 1)/Γ(η − α + 1)) τ^{η−α} otherwise.
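The power rule above can be checked numerically against the integral definition of the Caputo derivative. The following sketch is our own illustration (not part of the paper; the substitution-based quadrature and helper names are our choices): it evaluates D^{1/2} of f(τ) = τ² directly and compares with the closed form.

```python
import math

def caputo_half_derivative(fprime, tau, n=100000):
    # Caputo derivative of order alpha = 1/2 (so k = 1):
    #   D^{1/2} f(tau) = (1/Gamma(1/2)) * int_0^tau (tau - r)^{-1/2} f'(r) dr.
    # The substitution r = tau - s^2 (dr = -2s ds) removes the weak
    # singularity: the integral becomes int_0^{sqrt(tau)} 2 f'(tau - s^2) ds,
    # which we evaluate with the midpoint rule.
    h = math.sqrt(tau) / n
    integral = sum(2.0 * fprime(tau - ((i + 0.5) * h) ** 2) for i in range(n)) * h
    return integral / math.gamma(0.5)

# Check D^alpha tau^eta = (Gamma(eta+1)/Gamma(eta-alpha+1)) tau^(eta-alpha)
# for f(tau) = tau^2 (so f'(r) = 2r), alpha = 1/2, at tau = 1.
numeric = caputo_half_derivative(lambda r: 2.0 * r, 1.0)
exact = math.gamma(3.0) / math.gamma(2.5)  # times tau^{3/2} with tau = 1
print(numeric, exact)
```

Both values agree to roughly single-precision accuracy, confirming the formula for a non-integer order.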
The Riemann–Liouville fractional integral operator is given as follows.
Definition 3.
The Riemann–Liouville fractional integral operator of order α > 0 is defined for τ > 0 by [28]
I^α ξ(τ) = (1/Γ(α)) ∫_0^τ ξ(r) (τ − r)^{α−1} dr.
The following two properties are important as they connect the Caputo derivative with the fractional integral operator.
D^α I^α ξ(τ) = ξ(τ)
and
I^α D^α ξ(τ) = ξ(τ) − Σ_{l=0}^{k−1} (ξ^(l)(0)/l!) τ^l,
where k − 1 < α ≤ k. For more details, we refer the reader to [9,28].
Another important concept in this paper is the BPF, which is defined as follows [18].
Definition 4.
Let m be a positive integer and T be a given positive real number. Then, the jth BPF is a function from [0, T) into {0, 1} given by
β_j(τ) = 1 if jΔ ≤ τ < (j + 1)Δ, and β_j(τ) = 0 otherwise,
where Δ = T/m and j = 0, 1, …, m − 1.
We can express the set of block-pulse functions as a vector function of the form
β(τ) = (β_0(τ), β_1(τ), …, β_{m−1}(τ))^T.
The BPFs have two important properties: the product relation and the orthogonality relation.
Theorem 1.
Let β_0(τ), β_1(τ), …, β_{m−1}(τ) be given as in Equation (18) on [0, T). Then,
β_i(τ) β_j(τ) = β_i(τ) if i = j, and β_i(τ) β_j(τ) = 0 if i ≠ j,
and
∫_0^T β_i(τ) β_j(τ) dτ = Δ if i = j, and 0 if i ≠ j,
for 0 ≤ i, j ≤ m − 1.
Proof. 
This follows directly from the fact that the BPFs are supported on disjoint sub-intervals of length Δ. □
Following Theorem 1, it is established that any function in L 2 [ 0 , T ) can be expressed in terms of the BPFs as stated in Theorem 2.
Theorem 2.
If ξ ∈ L²[0, T), then
ξ(τ) ≈ Σ_{j=0}^{m−1} ξ_j β_j(τ),
where
ξ_j = (1/Δ) ∫_{jΔ}^{(j+1)Δ} ξ(τ) dτ.
Proof. 
To obtain the result of the theorem, we multiply both sides of Equation (22) by β_i(τ) and integrate over the interval [0, T). The result then follows directly from Equations (20) and (21). □
From Equation (22), we can express the function ξ ( τ ) in matrix form as follows:
ξ(τ) ≈ Ξ β(τ),
where Ξ = (ξ_0, ξ_1, …, ξ_{m−1}).
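As a concrete illustration of Theorem 2 and Equation (23), the following Python sketch (our own; helper names are hypothetical and NumPy is assumed) computes the BPF coefficients with a fine midpoint rule and evaluates the resulting piecewise-constant expansion.

```python
import numpy as np

def bpf_coefficients(f, T, m, sub=400):
    # xi_j = (1/Delta) * int_{j*Delta}^{(j+1)*Delta} f(tau) d tau (Equation (23)),
    # approximated by a midpoint rule with `sub` points per sub-interval.
    delta = T / m
    coeffs = np.empty(m)
    for j in range(m):
        s = j * delta + (np.arange(sub) + 0.5) * delta / sub
        coeffs[j] = f(s).mean()
    return coeffs

def bpf_series(coeffs, T, tau):
    # Evaluate sum_j xi_j beta_j(tau): pick the coefficient of the
    # sub-interval that contains each tau.
    m = len(coeffs)
    idx = np.minimum((np.asarray(tau) * m / T).astype(int), m - 1)
    return coeffs[idx]

# For f(tau) = tau on [0, 1) with m = 4, the cell averages are exactly the
# sub-interval midpoints.
c = bpf_coefficients(lambda t: t, 1.0, 4)
print(c)               # [0.125 0.375 0.625 0.875]
print(bpf_series(c, 1.0, [0.1, 0.9]))   # [0.125 0.875]
```

The coefficients are simply local averages, which is what makes the later operational-matrix algebra so cheap.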

3. Method of Solution

In this section, we outline the method used to solve Systems (1)–(3). The OM approach, combined with the properties of the Caputo derivative and the BPFs, provides an efficient and accurate technique for solving fractional differential equations.
First, we approximate the unknown functions ξ 1 ( τ ) and ξ 2 ( τ ) using the BPFs as shown in Equation (22). This approximation allows us to express the functions in terms of a finite number of basis functions β i ( τ ) . Then, we obtain the following
ξ_1(τ) ≈ Σ_{i=0}^{m−1} ξ_{1,i} β_i(τ), ξ_2(τ) ≈ Σ_{j=0}^{m−1} ξ_{2,j} β_j(τ).
This can be rewritten in matrix form as
ξ_1(τ) = Ξ_1 β(τ), ξ_2(τ) = Ξ_2 β(τ),
where Ξ_1 = (ξ_{1,0}, ξ_{1,1}, …, ξ_{1,m−1}) and Ξ_2 = (ξ_{2,0}, ξ_{2,1}, …, ξ_{2,m−1}). Next, we derive the OM of the product of the functions ξ_1(τ) and ξ_2(τ), as described in Theorem 3. This operational matrix enables us to represent the product of the functions in matrix form.
Theorem 3.
Let ξ_1, ξ_2 ∈ L²[0, T) be two functions. Then,
ξ_1(τ) ξ_2(τ) = Ω_{1,2} β(τ),
where Ω_{1,2} = Ξ_1 ∘ Ξ_2 and ∘ denotes the Hadamard product.
Proof. 
From Theorem 1 and Equation (25), we have
ξ_1(τ) ξ_2(τ) = (Σ_{i=0}^{m−1} ξ_{1,i} β_i(τ)) (Σ_{j=0}^{m−1} ξ_{2,j} β_j(τ)) = Σ_{i=0}^{m−1} Σ_{j=0}^{m−1} ξ_{1,i} ξ_{2,j} β_i(τ) β_j(τ) = Σ_{j=0}^{m−1} ξ_{1,j} ξ_{2,j} β_j(τ) = Ω_{1,2} β(τ),
where
Ω_{1,2} = (ξ_{1,0} ξ_{2,0}, ξ_{1,1} ξ_{2,1}, …, ξ_{1,m−1} ξ_{2,m−1}) = Ξ_1 ∘ Ξ_2
using the Hadamard product. □
Theorem 3 establishes the relationship between the product of the functions ξ_1(τ) and ξ_2(τ) and the BPFs β(τ). The product can be represented as a linear combination of the BPFs with coefficients given by the Hadamard product of the coefficient vectors Ξ_1 and Ξ_2. This result provides a convenient form for evaluating the product of the functions in terms of the BPFs and their corresponding coefficients. A similar argument can be used to prove that if ξ_1, ξ_2 ∈ L²[0, T), then
ξ_1²(τ) = Ω_{1,1} β(τ), ξ_2²(τ) = Ω_{2,2} β(τ),
where
Ω_{1,1} = (ξ_{1,0}², ξ_{1,1}², …, ξ_{1,m−1}²) = Ξ_1 ∘ Ξ_1, Ω_{2,2} = (ξ_{2,0}², ξ_{2,1}², …, ξ_{2,m−1}²) = Ξ_2 ∘ Ξ_2.
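The Hadamard-product rule is easy to verify numerically. In the sketch below (our illustration; midpoint sampling stands in for the exact cell averages of Equation (23), to which it agrees to O(Δ²)), the elementwise product of two coefficient vectors reproduces the expansion of the pointwise product.

```python
import numpy as np

T, m = 1.0, 200
delta = T / m
mid = (np.arange(m) + 0.5) * delta   # midpoint of each sub-interval

# Coefficient vectors Xi_1 and Xi_2 for xi_1(tau) = e^tau, xi_2(tau) = cos(tau).
xi1 = np.exp(mid)
xi2 = np.cos(mid)

# Theorem 3: the BPF expansion of xi_1*xi_2 has coefficient vector
# Xi_1 o Xi_2 (Hadamard product), because beta_i*beta_j vanishes for
# i != j and beta_j^2 = beta_j.
omega12 = xi1 * xi2

# The resulting piecewise-constant function approximates e^tau * cos(tau).
tau = np.linspace(0.0, T, 1000, endpoint=False)
approx = omega12[np.minimum((tau / delta).astype(int), m - 1)]
err = np.max(np.abs(approx - np.exp(tau) * np.cos(tau)))
print(err)   # on the order of Delta/2
```

No m × m product matrix is ever formed, which is the efficiency the method exploits.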
In the next theorem, we derive the OM of the Riemann–Liouville fractional integral operator I α .
Theorem 4.
The OM of the Riemann–Liouville fractional integral operator I α is
Ω = (Δ^α / Γ(α + 2)) ×
[ 1  ϱ_1  ϱ_2  ⋯  ϱ_{m−2}  ϱ_{m−1} ]
[ 0  1    ϱ_1  ⋯  ϱ_{m−3}  ϱ_{m−2} ]
[ 0  0    1    ⋯  ϱ_{m−4}  ϱ_{m−3} ]
[ ⋮  ⋮    ⋮    ⋱  ⋮        ⋮       ]
[ 0  0    0    ⋯  1        ϱ_1     ]
[ 0  0    0    ⋯  0        1       ].
Proof. 
For any 0 ≤ j < m, we have
I^α β_j(τ) = (1/Γ(α)) ∫_0^τ (τ − r)^{α−1} β_j(r) dr = 0 for τ < jΔ; (τ − jΔ)^α / Γ(α + 1) for jΔ ≤ τ < (j + 1)Δ; and [(τ − jΔ)^α − (τ − (j + 1)Δ)^α] / Γ(α + 1) for (j + 1)Δ ≤ τ < T.
Let
I^α β_i(τ) ≈ Σ_{j=0}^{m−1} π_{i,j} β_j(τ);
then
π_{i,j} = (1/Δ) ∫_0^T I^α β_i(τ) β_j(τ) dτ = (1/Δ) ∫_{jΔ}^{(j+1)Δ} I^α β_i(τ) dτ = Δ^α / Γ(α + 2) for 0 ≤ i = j ≤ m − 1; Δ^α [(j − i + 1)^{α+1} − 2(j − i)^{α+1} + (j − i − 1)^{α+1}] / Γ(α + 2) for 0 ≤ i < j ≤ m − 1; and 0 for 0 ≤ j < i ≤ m − 1.
Let q = j − i and ϱ_q = (q + 1)^{α+1} − 2q^{α+1} + (q − 1)^{α+1}. Then, the operational matrix of I^α is
Ω = (Δ^α / Γ(α + 2)) ×
[ 1  ϱ_1  ϱ_2  ⋯  ϱ_{m−2}  ϱ_{m−1} ]
[ 0  1    ϱ_1  ⋯  ϱ_{m−3}  ϱ_{m−2} ]
[ 0  0    1    ⋯  ϱ_{m−4}  ϱ_{m−3} ]
[ ⋮  ⋮    ⋮    ⋱  ⋮        ⋮       ]
[ 0  0    0    ⋯  1        ϱ_1     ]
[ 0  0    0    ⋯  0        1       ]. □
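Theorem 4 translates directly into code. The sketch below (our own helper, assuming NumPy) builds Ω and sanity-checks it for α = 1, where I¹ applied to the constant function 1 should reproduce the BPF coefficients of τ, namely the cell midpoints.

```python
import math
import numpy as np

def fractional_integration_matrix(alpha, T, m):
    # Operational matrix Omega of I^alpha for m block-pulse functions on
    # [0, T) (Theorem 4): upper triangular, with 1 on the diagonal and
    # rho_q = (q+1)^(alpha+1) - 2*q^(alpha+1) + (q-1)^(alpha+1) above it,
    # all scaled by Delta^alpha / Gamma(alpha + 2).
    delta = T / m
    rho = lambda q: (q + 1) ** (alpha + 1) - 2 * q ** (alpha + 1) + (q - 1) ** (alpha + 1)
    omega = np.zeros((m, m))
    for i in range(m):
        omega[i, i] = 1.0
        for j in range(i + 1, m):
            omega[i, j] = rho(j - i)
    return delta ** alpha / math.gamma(alpha + 2) * omega

# Sanity check with alpha = 1: I^1 applied to the constant 1 gives tau,
# whose BPF coefficients are the cell averages (j + 1/2) * Delta.
m, T = 8, 1.0
omega = fractional_integration_matrix(1.0, T, m)
ones = np.ones(m)
print(ones @ omega)   # approx [(j + 0.5) / m for j in range(m)]
```

For α = 1 every weight ϱ_q equals 2, and the matrix reduces to the familiar trapezoid-like integration stencil; for 0 < α < 1 the weights decay, encoding the memory of the fractional integral.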
Now, we will describe how to use the OMs to solve Systems (1)–(3). Theorem 2 implies that the OM of the constant function h ( τ ) = 1 is
h(τ) = I_1 β(τ), where I_1 = (1, 1, …, 1).
Using Equations (26), (27), (29), and (33), we can rewrite Systems (1)–(3) as
D^α ξ_1(τ) = [a_{2,0} Ω_{1,1} + a_{1,1} Ω_{1,2} + a_{0,2} Ω_{2,2} + a_{1,0} Ξ_1 + a_{0,1} Ξ_2 + a_{0,0} I_1] β(τ),
D^α ξ_2(τ) = [b_{2,0} Ω_{1,1} + b_{1,1} Ω_{1,2} + b_{0,2} Ω_{2,2} + b_{1,0} Ξ_1 + b_{0,1} Ξ_2 + b_{0,0} I_1] β(τ).
Applying the fractional integral operator I^α to both sides of Equations (34) and (35) and using Equation (17), we obtain
ξ_1(τ) − ξ_1(0) = [a_{2,0} Ω_{1,1} + a_{1,1} Ω_{1,2} + a_{0,2} Ω_{2,2} + a_{1,0} Ξ_1 + a_{0,1} Ξ_2 + a_{0,0} I_1] I^α β(τ),
ξ_2(τ) − ξ_2(0) = [b_{2,0} Ω_{1,1} + b_{1,1} Ω_{1,2} + b_{0,2} Ω_{2,2} + b_{1,0} Ξ_1 + b_{0,1} Ξ_2 + b_{0,0} I_1] I^α β(τ).
Theorem 4 and Equations (26) and (33) imply that
([a_{2,0} Ω_{1,1} + a_{1,1} Ω_{1,2} + a_{0,2} Ω_{2,2} + a_{1,0} Ξ_1 + a_{0,1} Ξ_2 + a_{0,0} I_1] Ω + η_1 I_1 − Ξ_1) β(τ) = 0,
([b_{2,0} Ω_{1,1} + b_{1,1} Ω_{1,2} + b_{0,2} Ω_{2,2} + b_{1,0} Ξ_1 + b_{0,1} Ξ_2 + b_{0,0} I_1] Ω + η_2 I_1 − Ξ_2) β(τ) = 0.
The orthogonality relation in Theorem 1 implies that
[a_{2,0} Ω_{1,1} + a_{1,1} Ω_{1,2} + a_{0,2} Ω_{2,2} + a_{1,0} Ξ_1 + a_{0,1} Ξ_2 + a_{0,0} I_1] Ω + η_1 I_1 − Ξ_1 = 0,
[b_{2,0} Ω_{1,1} + b_{1,1} Ω_{1,2} + b_{0,2} Ω_{2,2} + b_{1,0} Ξ_1 + b_{0,1} Ξ_2 + b_{0,0} I_1] Ω + η_2 I_1 − Ξ_2 = 0.
Since Ω is an upper triangular matrix, the components of Equations (40) and (41) can be written as
(Δ^α / Γ(α + 2)) [a_{2,0} ξ_{1,i}² + a_{1,1} ξ_{1,i} ξ_{2,i} + a_{0,2} ξ_{2,i}² + a_{1,0} ξ_{1,i} + a_{0,1} ξ_{2,i}] − ξ_{1,i} = λ_{1,i},
(Δ^α / Γ(α + 2)) [b_{2,0} ξ_{1,i}² + b_{1,1} ξ_{1,i} ξ_{2,i} + b_{0,2} ξ_{2,i}² + b_{1,0} ξ_{1,i} + b_{0,1} ξ_{2,i}] − ξ_{2,i} = λ_{2,i},
where
λ_{1,i} = −Σ_{j=0}^{i−1} [a_{2,0} ξ_{1,j}² + a_{1,1} ξ_{1,j} ξ_{2,j} + a_{0,2} ξ_{2,j}² + a_{1,0} ξ_{1,j} + a_{0,1} ξ_{2,j} + a_{0,0}] (Δ^α ϱ_{i−j} / Γ(α + 2)) − η_1 − a_{0,0} Δ^α / Γ(α + 2), λ_{2,i} = −Σ_{j=0}^{i−1} [b_{2,0} ξ_{1,j}² + b_{1,1} ξ_{1,j} ξ_{2,j} + b_{0,2} ξ_{2,j}² + b_{1,0} ξ_{1,j} + b_{0,1} ξ_{2,j} + b_{0,0}] (Δ^α ϱ_{i−j} / Γ(α + 2)) − η_2 − b_{0,0} Δ^α / Γ(α + 2),
for i = 0, 1, …, m − 1. Therefore, we solve Systems (40) and (41) iteratively: at each step we solve one 2 × 2 nonlinear system of the form of Equations (42) and (43). This reduces the computational cost, since we solve m systems of 2 × 2 equations instead of one nonlinear system of 2m × 2m equations and unknowns.

4. Convergence Analysis

Let ξ ∈ L²([0, T)); its norm is defined by
‖ξ‖ = (∫_0^T ξ(τ)² dτ)^{1/2}.
From Equation (44), we can approximate ξ(τ) by
ξ_m(τ) = Σ_{j=0}^{m−1} ξ_j β_j(τ).
In the first theorem, we aim to prove that the mean square error achieves its minimum value when ξ j is given by Equation (23).
Theorem 5.
Let ξ ∈ L²([0, T)) and ξ_m(τ) be given by Equation (45). Then, the error term
ϵ(ξ_0, ξ_1, …, ξ_{m−1}) = ∫_0^T (ξ(τ) − ξ_m(τ))² dτ
reaches its minimum value when ξ j is given by Equation (23) for j = 0 , 1 , , m 1 .
Proof. 
Let 0 ≤ j ≤ m − 1. Using Theorem 1, we have
∂ϵ/∂ξ_j = −2 ∫_0^T (ξ(τ) − ξ_m(τ)) β_j(τ) dτ = −2 (∫_0^T ξ(τ) β_j(τ) dτ − Δ ξ_j) = 0.
Then,
ξ_j = (1/Δ) ∫_0^T ξ(τ) β_j(τ) dτ.
Theorem 1 implies that
∂²ϵ/∂ξ_j ∂ξ_k = 2 ∫_0^T β_j(τ) β_k(τ) dτ = 2Δ if j = k, and 0 if j ≠ k, for 0 ≤ j, k ≤ m − 1.
For any 0 ≤ j ≤ m − 1, the leading principal minor of the Hessian satisfies
det [∂²ϵ/∂ξ_r ∂ξ_s]_{r,s=0,…,j} = det diag(2Δ, 2Δ, …, 2Δ) = (2Δ)^{j+1} > 0.
Thus, ϵ reaches its minimum value when ξ j is given by Equation (23) for j = 0 , 1 , , m 1 .
Theorem 6.
Let ξ(τ) be a continuous bounded function on [0, T). Then, ξ_m(τ) converges pointwise to ξ(τ) on [0, T). Moreover,
∫_0^T ξ(τ)² dτ = Σ_{i=0}^∞ ξ_i² ‖β_i‖².
Proof. 
Using Equation (23), we have
ξ_i = (m/T) ∫_{iT/m}^{(i+1)T/m} ξ(τ) dτ, i = 0, 1, …, m − 1.
Let i = 0. Partition the interval [0, T/m] into q uniform sub-intervals of length h = T/(mq). Using the left-endpoint Riemann sum, we have
ξ_0 = (m/T) ∫_0^{T/m} ξ(τ) dτ = (m/T) lim_{h→0} Σ_{j=0}^{q−1} ξ(jh) h = (m/T) lim_{h→0} Σ_{j=0}^{q−1} ξ(jh) (T/(mq)) = lim_{h→0} Σ_{j=0}^{q−1} ξ(jh)/q.
Since ξ is a continuous function, we have
lim_{m→∞} ξ_0 = lim_{m→∞} lim_{h→0} Σ_{j=0}^{q−1} ξ(jh)/q = lim_{h→0} Σ_{j=0}^{q−1} ξ(lim_{m→∞} jT/(mq))/q = lim_{h→0} Σ_{j=0}^{q−1} ξ(0)/q = ξ(0).
Similarly, for any τ_j ∈ [0, T), we can prove that
lim_{m→∞} ξ_j = ξ(τ_j).
Thus, ξ_m(τ) converges pointwise to ξ(τ) on [0, T). Now, using Theorem 1, we have
∫_0^T ξ(τ)² dτ = lim_{m→∞} ∫_0^T ξ_m(τ)² dτ = lim_{m→∞} Σ_{j=0}^{m−1} ξ_j² ∫_0^T β_j(τ)² dτ = Σ_{i=0}^∞ ξ_i² ‖β_i‖². □
Now, we want to find the order of the mean square error in the approximation of ξ ( τ ) on the interval [ 0 , T ) .
Theorem 7.
Let ξ(τ) be a differentiable function on [0, T) such that
|ξ′(τ)| ≤ Π
for all τ ∈ [0, T), where Π is a positive real number. Then,
‖ϵ(τ)‖² ≤ C_1 Δ²,
where ϵ(τ) = ξ(τ) − ξ_m(τ), τ ∈ [0, T), ξ_m(τ) is given by Equation (45), and Δ = T/m.
Proof. 
Let τ_j = jΔ and I_j = [τ_j, τ_{j+1}), where Δ = T/m and j = 0, 1, …, m − 1. By the mean value theorem for integrals and Equation (23), for τ ∈ [τ_j, τ_{j+1}) we have
ξ_m(τ) = ξ_j = (1/Δ) ∫_{τ_j}^{τ_{j+1}} ξ(τ) dτ = ξ(ν_j), ν_j ∈ [τ_j, τ_{j+1}), j = 0, 1, …, m − 1.
Then, by the mean value theorem for integrals, we have
‖ϵ(τ)‖² = ∫_0^T (ξ(τ) − ξ_m(τ))² dτ = Σ_{j=0}^{m−1} ∫_{τ_j}^{τ_{j+1}} (ξ(τ) − ξ(ν_j))² dτ = Δ Σ_{j=0}^{m−1} (ξ(ω_j) − ξ(ν_j))²,
where ω_j, ν_j ∈ [τ_j, τ_{j+1}) and j = 0, 1, …, m − 1. By the mean value theorem and Equation (47), we obtain
‖ϵ(τ)‖² ≤ Δ Π² Σ_{j=0}^{m−1} (ω_j − ν_j)² ≤ Δ Π² Σ_{j=0}^{m−1} Δ² = C_1 Δ²,
where C_1 = Π² T. □
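Theorem 7 predicts first-order decay of the L² error in Δ: halving Δ should roughly halve the error. The short experiment below (ours; a fine midpoint rule approximates the integrals, and all helper names are hypothetical) checks this rate for a smooth function.

```python
import numpy as np

def bpf_l2_error(f, T, m, samples=2000):
    # L2 error || f - f_m || where f_m uses the optimal coefficients of
    # Equation (23) (cell averages), both integrals approximated by a
    # midpoint rule with `samples` points per sub-interval.
    delta = T / m
    err2 = 0.0
    for j in range(m):
        s = j * delta + (np.arange(samples) + 0.5) * delta / samples
        fj = f(s).mean()                       # xi_j, the cell average
        err2 += np.sum((f(s) - fj) ** 2) * delta / samples
    return np.sqrt(err2)

# Doubling m (halving Delta) should halve the error, per Theorem 7.
e1 = bpf_l2_error(np.sin, 1.0, 25)
e2 = bpf_l2_error(np.sin, 1.0, 50)
print(e1, e2, e1 / e2)   # ratio approx 2
```

The observed ratio near 2 matches the bound ‖ϵ‖ ≤ √C₁ Δ, and it is this O(Δ) rate that carries over to the convergence of the full scheme in Theorem 9.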
Now, we will discuss the pointwise convergence of ξ 1 , m ( τ ) ξ 2 , m ( τ ) and ξ 1 , m 2 ( τ ) .
Theorem 8.
Let ξ 1 ( τ ) and ξ 2 ( τ ) be bounded continuous functions in [ 0 , T ) . Then:
  • ξ 1 , m ( τ ) ξ 2 , m ( τ ) is pointwise convergent to ξ 1 ( τ ) ξ 2 ( τ ) .
  • ξ 1 , m 2 ( τ ) is pointwise convergent to ξ 1 2 ( τ ) .
  • ξ 2 , m 2 ( τ ) is pointwise convergent to ξ 2 2 ( τ ) .
Proof. 
Assume that ξ 1 ( τ ) and ξ 2 ( τ ) are bounded continuous functions in [ 0 , T ) . Then, there exist two positive real numbers Π 1 and Π 2 such that
|ξ_1(τ)| ≤ Π_1, |ξ_2(τ)| ≤ Π_2.
Let 0 < ϵ ≤ 3 Π_1 Π_2. By Theorem 6, ξ_{1,m}(τ) and ξ_{2,m}(τ) are pointwise convergent to ξ_1(τ) and ξ_2(τ), respectively. Then, for m ≥ N(ϵ), we have
|ξ_{1,m}(τ) − ξ_1(τ)| < ϵ/(3 Π_2), |ξ_{2,m}(τ) − ξ_2(τ)| < ϵ/(3 Π_1),
for some positive integer N(ϵ). Then, for m ≥ N(ϵ), we have
|ξ_{1,m}(τ) ξ_{2,m}(τ) − ξ_1(τ) ξ_2(τ)| ≤ |(ξ_{1,m}(τ) − ξ_1(τ))(ξ_{2,m}(τ) − ξ_2(τ))| + |ξ_1(τ)(ξ_{2,m}(τ) − ξ_2(τ))| + |ξ_2(τ)(ξ_{1,m}(τ) − ξ_1(τ))| ≤ ϵ²/(9 Π_1 Π_2) + Π_1 ϵ/(3 Π_1) + Π_2 ϵ/(3 Π_2) ≤ ϵ.
Then, ξ 1 , m ( τ ) ξ 2 , m ( τ ) is pointwise convergent to ξ 1 ( τ ) ξ 2 ( τ ) . The proofs of Parts (2) and (3) are similar to the proof of Part (1). □
Finally, we will prove the main theorem, which is the convergence theorem.
Theorem 9.
Let ξ 1 ( τ ) and ξ 2 ( τ ) be bounded continuous functions in [ 0 , T ) . Then, ξ 1 , m ( τ ) and ξ 2 , m ( τ ) are pointwise convergent to the exact solutions of Problems (1)–(3), ξ 1 ( τ ) and ξ 2 ( τ ) , respectively.
Proof. 
Let ξ 1 ( τ ) and ξ 2 ( τ ) be the exact solutions of Systems (1)–(3). Let ξ 1 , m ( τ ) and ξ 2 , m ( τ ) be the approximate solutions of Systems (1)–(3) given by Equation (22). Then, we have
D^α ξ_1(τ) = P_1(ξ_1(τ), ξ_2(τ)), D^α ξ_{1,m}(τ) = P_1(ξ_{1,m}(τ), ξ_{2,m}(τ)), D^α ξ_2(τ) = P_2(ξ_1(τ), ξ_2(τ)), D^α ξ_{2,m}(τ) = P_2(ξ_{1,m}(τ), ξ_{2,m}(τ)).
Taking the fractional integral operator (15) and using Equation (17), we obtain
ξ_1(τ) = ξ_1(0) + (1/Γ(α)) ∫_0^τ P_1(ξ_1(r), ξ_2(r)) (τ − r)^{α−1} dr, ξ_{1,m}(τ) = ξ_{1,m}(0) + (1/Γ(α)) ∫_0^τ P_1(ξ_{1,m}(r), ξ_{2,m}(r)) (τ − r)^{α−1} dr, ξ_2(τ) = ξ_2(0) + (η_2/Γ(α)) ∫_0^τ P_2(ξ_1(r), ξ_2(r)) (τ − r)^{α−1} dr, ξ_{2,m}(τ) = ξ_{2,m}(0) + (η_2/Γ(α)) ∫_0^τ P_2(ξ_{1,m}(r), ξ_{2,m}(r)) (τ − r)^{α−1} dr.
Let
ϵ_{1,m} = |ξ_1(τ) − ξ_{1,m}(τ)|, ϵ_{2,m} = |ξ_2(τ) − ξ_{2,m}(τ)|.
From the proof of Theorem 5, we have
lim_{m→∞} ξ_{1,m}(0) = ξ_1(0) = η, lim_{m→∞} ξ_{2,m}(0) = ξ_2(0) = 1.
Therefore, there exists a positive integer m 0 such that for any m m 0 , we have
|ξ_1(0) − ξ_{1,m}(0)| ≤ Δ/2, |ξ_2(0) − ξ_{2,m}(0)| ≤ Δ/2.
By the triangle inequality, we obtain
|ξ_1(τ) − ξ_{1,m}(τ)| ≤ Δ/2 + (1/Γ(α)) (∫_0^τ (τ − r)^{α−1} dr) J_1,
|ξ_2(τ) − ξ_{2,m}(τ)| ≤ Δ/2 + (η_2/Γ(α)) (∫_0^τ (τ − r)^{α−1} dr) J_2,
where
J_1 = |P_1(ξ_1(r), ξ_2(r)) − P_1(ξ_{1,m}(r), ξ_{2,m}(r))|
and
J_2 = |P_2(ξ_1(r), ξ_2(r)) − P_2(ξ_{1,m}(r), ξ_{2,m}(r))|.
Using Equation (53) from the proof of Theorem 8 and Equation (48), we have
J 1 < Δ ,
J 2 < 4 Δ + 4 Δ = 8 Δ .
Note that we used ϵ < Δ in Equation (53). Then, we have
|ξ_1(τ) − ξ_{1,m}(τ)| ≤ Δ/2 + (T^α/Γ(α + 1)) Δ,
|ξ_2(τ) − ξ_{2,m}(τ)| ≤ Δ/2 + η_2 (T^α/Γ(α + 1)) Δ.
As Δ approaches zero, the right-hand sides of Equations (60) and (61) go to zero. Thus, we have
lim_{m→∞} ξ_{1,m}(τ) = ξ_1(τ), lim_{m→∞} ξ_{2,m}(τ) = ξ_2(τ),
for all τ [ 0 , T ) . Therefore, the approximate solutions in Equation (25) converge pointwise to the exact solution of the model in Problems (1)–(3) in [ 0 , T ) . □

5. Numerical Results

In this section, we will examine three applications utilizing the solution method described in Section 3. These applications include the clock reaction of vitamin C, a system of Riccati equations that arises in control theory, and a problem from optimal control theory.

5.1. Clock Reaction of Vitamin C

In this subsection, we solve Systems (6)–(8) for η 1 = 0.001 , η 2 = 2 , and η = 0.2 . We will investigate the influence of the fractional derivative α on the solution. We compare the behavior of the solution as α approaches 1 with the results presented in [23]. Additionally, we will discuss the impact of η 1 and η 2 .
In all calculations in this subsection, we set the number of BPFs m to 40. Figure 1 and Figure 2 depict the approximate solutions of ξ 1 ( τ ) and ξ 2 ( τ ) for various values of α : 0.7, 0.8, 0.9, 0.95, and 1. To facilitate comparison with [23] and demonstrate the behavior of the approximate solution using our proposed method, we provide sketches of the approximate solutions of ξ 1 ( τ ) and ξ 2 ( τ ) for α = 1 in four distinct regions, as shown in Figure 3, Figure 4, Figure 5 and Figure 6. In Figure 7, we present the approximate solutions of ξ 1 ( τ ) and ξ 2 ( τ ) for α = 1 over the entire interval [ 0 , 1200 ] .
To numerically measure the accuracy of the approximate solutions, and since the exact solution of Systems (1)–(3) is unknown, we compute the residual errors defined as follows:
Res_1(ξ_1) = D^α ξ_1(τ) + ξ_1(τ) ξ_2(τ) − η_1 η_2 (1 − 2 ξ_1(τ))²,
Res_2(ξ_2) = D^α ξ_2(τ) + η_2 ξ_1(τ) ξ_2(τ).
Figure 8 illustrates the residual errors of ξ 1 ( τ ) and ξ 2 ( τ ) for α = 1 . Since the residual errors are nearly zero, except at the beginning of the interval, we present them over a large interval to demonstrate the error behavior. Additionally, we provide plots of the residual errors over smaller intervals to showcase their magnitudes.
In Figure 9, we analyze the behavior of the solutions for different values of α in comparison with each other.
Next, we examine the influence of η 2 in Figure 10 and Figure 11 for α = 0.95 and η 2 = 2 , 3 , 4 , 5 .
Finally, we study the length of the induction period, defined as the value of τ at which ξ_1(τ) = ξ_2(τ). Table 1 reports the length of the induction period for η_1 = 0.001, η_2 = 2, and η = 0.2 across different values of α.
Furthermore, we compute the length of the induction period for α = 0.95 and different values of η 2 . The results are presented in Table 2.

5.2. System of Riccati Equations

Consider the following system of fractional Riccati equations:
D^α ξ_1(τ) = ξ_2(τ) − ξ_1²(τ) − ξ_1(τ) ξ_2(τ), ξ_1(0) = 0,
D^α ξ_2(τ) = ξ_1(τ) − ξ_2²(τ) − ξ_1(τ) ξ_2(τ), ξ_2(0) = 1.
Then, the approximate solutions of ξ 1 ( τ ) and ξ 2 ( τ ) for different values of α are given in Figure 12 and Figure 13, respectively. Let us define the maximum error as follows:
error = max{ ‖(D^α ξ_1(τ_i) − ξ_2(τ_i) + ξ_1²(τ_i) + ξ_1(τ_i) ξ_2(τ_i), D^α ξ_2(τ_i) − ξ_1(τ_i) + ξ_2²(τ_i) + ξ_1(τ_i) ξ_2(τ_i))‖_2 : τ_i = (i − 1)/m, i = 1, 2, …, 101 }.
Then, the errors in our approach are reported in Table 3.

5.3. Optimal Control Problem

Consider the following optimal control problem [29,30,31]:
J = (1/2) ∫_0^1 [ψ_1²(t) + ψ_2²(t)] dt
subject to
D^α ψ_1(t) = −ψ_1(t) + ψ_2(t), ψ_1(0) = 1.
To minimize J, we follow the method described in [30]. The dynamics of the state and adjoint equations are given in fractional form as
D^α ψ_1(t) = −ψ_1(t) − ψ_3(t), ψ_1(0) = 1,
D^α ψ_3(t) = −ψ_1(t) + ψ_3(t), ψ_3(1) = 0,
where ψ_1 and ψ_3 are the state and co-state variables, respectively, while the optimal control function is given by
ψ_2(t) = −ψ_3(t).
To solve the previous system using the OMM, we assume that ψ_3(0) = θ and use the linear shooting method to find θ. We compared our results with those in [29,30] in terms of computational time; we implemented their approaches ourselves, and the timing comparison is reported in Table 4. We also report the values of J in Table 5.
In addition, the absolute errors for the state functions in [29,31] are given in Table 5 for the case where Legendre and Bernoulli polynomials of degree at most five were used.
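For the classical case α = 1, the linear shooting step described above can be sketched as follows. This is our own illustration: a standard RK4 integrator stands in for the OMM solver, and all helper names are ours. Since the state/co-state system is linear, ψ_3(1) depends affinely on the guess θ = ψ_3(0), so two trial integrations determine θ exactly.

```python
import numpy as np

def rk4(f, y0, t0, t1, n=1000):
    # Classical fourth-order Runge-Kutta integrator (alpha = 1 case).
    h = (t1 - t0) / n
    y = np.array(y0, dtype=float)
    t = t0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# State/co-state dynamics for alpha = 1:
#   psi1' = -psi1 - psi3,   psi3' = -psi1 + psi3.
f = lambda t, y: np.array([-y[0] - y[1], -y[0] + y[1]])

# Linear shooting: psi3(1) is affine in theta = psi3(0), so two trial
# integrations pin down the root of psi3(1) = 0.
end0 = rk4(f, [1.0, 0.0], 0.0, 1.0)   # trial with theta = 0
end1 = rk4(f, [1.0, 1.0], 0.0, 1.0)   # trial with theta = 1
theta = -end0[1] / (end1[1] - end0[1])
end = rk4(f, [1.0, theta], 0.0, 1.0)
print(theta, end[1])   # end[1] = psi3(1), which should be ~ 0
```

The same two-trial idea applies with the OMM in place of RK4: each trial is one forward solve, and the terminal condition ψ_3(1) = 0 fixes θ.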

6. Conclusions and Discussion of the Results

In this article, we have presented a new numerical approach for solving a class of systems of fractional initial value problems based on the operational matrix method. The method has been derived, and a convergence analysis has been provided. By effectively simplifying the resulting algebraic system, we have reduced the computational cost by transforming the problem into a set of 2 × 2 nonlinear equations instead of solving a system of 2m × 2m equations.
We have applied our approach to three main applications in science: optimal control problems, Riccati equations, and clock reactions. The clock reaction not only holds significance in chemistry education but also has industrial and biological applications. We have compared our results with those of other researchers, considering computational time, cost, and absolute errors. Additionally, we have validated our numerical method by comparing our results with the integer model when the fractional order approaches one. To support our findings, we have included numerous figures and tables.
Based on the presented figures and tables, we can draw the following conclusions:
  • Figure 1 and Figure 2 demonstrate that the profile of ξ1 increases as α increases, while the profile of ξ2 decreases as α increases.
  • Figure 3, Figure 4, Figure 5, Figure 6 and Figure 7 reveal that the behavior of the concentrations ξ1 and ξ2 changes with τ when α = 1. The domain can be divided into four parts: rapid decay of both ξ1 and ξ2 in the first interval; linear increase in ξ1 and linear decay of ξ2 in the second; fast increase in ξ1 and fast decay of ξ2 in the third; and linear increase in ξ1 and rapid decay of ξ2 in the fourth. This behavior is consistent with [23].
  • Figure 5 visually indicates the end of the induction period τe.
  • Figure 8 shows that the residual errors of ξ1 and ξ2 are approximately 10⁻⁶, providing strong numerical evidence for the convergence of the proposed method.
  • Figure 9 plots the two approximate solutions against each other and confirms their similarity to [23].
  • Figure 10 and Figure 11 demonstrate the influence of η2 on the approximate solutions: it affects the induction period, at which ξ1(τ) = ξ2(τ), but does not alter the shape of the solutions.
  • Table 1 and Table 2 reveal that the induction period decreases as α increases toward one, with a similar effect as η2 increases.
  • As α approaches one, the approximate solutions converge to the solutions in [23].
  • Figure 12 and Figure 13 illustrate the approximate solutions ξ1 and ξ2 of the Riccati differential equations. As the fractional order approaches one, the approximate solutions converge to those obtained for α = 1.
  • Table 3 presents the maximum error for different values of α, which is approximately 10⁻¹⁵, indicating that the approximate solution converges to the exact solution.
  • Table 4 compares the computational time of our approach with those of [29,30,31] for the optimal control problem. Our approach requires significantly less computational time, demonstrating faster convergence to the exact solution; this is attributed to reducing the algebraic system to a set of 2 × 2 systems.
  • Table 5 compares the absolute error of our approach with those of [29,31]. The absolute error in our approach is smaller, providing strong numerical evidence that our approach converges faster to the exact solution.
  • Our approach can be generalized to boundary value problems using the linear shooting method: we assume initial conditions θ1 and θ2 and then determine these values by solving the system subject to the given boundary conditions.
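The linear shooting extension in the last point can be sketched as follows. For a linear system, the map from the unknown initial data θ = (θ1, θ2) to the values at the right boundary is itself linear, so θ is recovered from a single 2 × 2 solve, matching the 2 × 2 theme of the method. The system, interval, and boundary data below are hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical linear system xi' = A xi with both components prescribed at tau = T.
# By linearity, xi(T; theta) = Phi(T) @ theta, where the columns of Phi(T) are the
# IVP solutions started from the unit vectors e1, e2. The unknown initial data
# theta = (theta1, theta2) then solves one 2x2 linear system Phi(T) theta = b.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
f = lambda t, xi: A @ xi
T = 1.0
b = np.array([np.sin(1.0), np.cos(1.0)])    # boundary data; exact solution (sin t, cos t)

cols = []
for e in np.eye(2):                          # integrate the IVP from e1 and from e2
    sol = solve_ivp(f, (0.0, T), e, rtol=1e-10, atol=1e-12)
    cols.append(sol.y[:, -1])
Phi_T = np.column_stack(cols)

theta = np.linalg.solve(Phi_T, b)            # recovered initial conditions (theta1, theta2)
```

For a nonlinear system, the same idea applies with the 2 × 2 linear solve replaced by a 2 × 2 root-finding problem on the boundary residual.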

Author Contributions

Methodology, S.M.S.; software, S.M.S.; formal analysis, S.M.S.; investigation, S.M.S.; writing—original draft, R.M.K.; writing—review and editing, S.H.A.; supervision, Z.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Deanship of Scientific Research at Umm Al-Qura University, grant number 23UQU4310382DSR002.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Amin, R.; Ahmad, H.; Shah, K.; Bilal Hafeez, M.; Sumelka, W. Theoretical and computational analysis of nonlinear fractional integro-differential equations via collocation method. Chaos Solitons Fractals 2021, 151, 111252.
  2. Biazar, J.; Sadri, K. Solution of weakly singular fractional integro-differential equations by using a new operational approach. J. Comput. Appl. Math. 2019, 352, 453–477.
  3. Sahu, P.; Saha Ray, S. Legendre spectral collocation method for the solution of the model describing biological species living together. J. Comput. Appl. Math. 2016, 296, 47–55.
  4. Roul, P.; Meyer, P. Numerical solutions of systems of nonlinear integro-differential equations by Homotopy-perturbation method. Appl. Math. Model. 2011, 35, 4234–4242.
  5. Shakeri, F.; Dehghan, M. Solution of a model describing biological species living together using the variational iteration method. Math. Comput. Model. 2008, 48, 685–699.
  6. Hatamzadeh-Varmazyar, S.; Masouri, Z.; Babolian, E. Numerical method for solving arbitrary linear differential equations using a set of orthogonal basis functions and operational matrix. Appl. Math. Model. 2016, 40, 233–253.
  7. Odibat, Z.M. Analytic study on linear systems of fractional differential equations. Comput. Math. Appl. 2010, 59, 1171–1183.
  8. Al-Refai, M. Fundamental results on systems of fractional differential equations involving Caputo-Fabrizio fractional derivative. Jordan J. Math. Stat. 2020, 13, 389–399.
  9. Podlubny, I. Fractional Differential Equations; Academic Press: San Diego, CA, USA, 1999.
  10. Syam, S.M.; Siri, Z.; Altoum, S.H.; Kasmani, R.M. Analytical and Numerical Methods for Solving Second-Order Two-Dimensional Symmetric Sequential Fractional Integro-Differential Equations. Symmetry 2023, 15, 1263.
  11. Katugampola, U.N. New approach to generalized fractional integral. Appl. Math. Comput. 2011, 218, 860–865.
  12. Baleanu, D.; Jajarmi, A.; Sajjadi, S.S.; Mozyrska, D. A new fractional model and optimal control of a tumor-immune surveillance with non-singular derivative operator. Chaos 2019, 29, 083127.
  13. Caputo, M.; Fabrizio, M. A new definition of fractional derivative without singular kernel. Prog. Fract. Differ. Appl. 2015, 1, 73–85.
  14. Atangana, A.; Baleanu, D. New fractional derivative with non-local and non-singular kernel. Therm. Sci. 2016, 20, 757–763.
  15. Vanterler da C. Sousa, J.; Capelas de Oliveira, E. A Gronwall inequality and the Cauchy-type problem by means of Ψ-Hilfer operator. arXiv 2017, arXiv:1709.03634.
  16. Youbi, F.; Momani, S.; Hasan, S.; Al-Smadi, M. Effective numerical technique for nonlinear Caputo-Fabrizio systems of fractional Volterra integro-differential equations in Hilbert space. Alex. Eng. J. 2022, 61, 1778–1786.
  17. Zamanpour, I.; Ezzati, R. Operational matrix method for solving fractional weakly singular 2D partial Volterra integral equations. J. Comput. Appl. Math. 2023, 419, 114704.
  18. Syam, M.I.; Sharadga, M.; Hashim, I. A numerical method for solving fractional delay differential equations based on the operational matrix method. Chaos Solitons Fractals 2021, 147, 110977.
  19. Najafalizadeh, S.; Ezzati, R. A block pulse operational matrix method for solving two-dimensional nonlinear integro-differential equations of fractional order. J. Comput. Appl. Math. 2017, 326, 159–170.
  20. Richards, W.T.; Loomis, A.L. The chemical effects of high frequency sound waves I. A preliminary survey. J. Am. Chem. Soc. 1927, 49, 3086–3100.
  21. Forbes, G.S.; Estill, H.W.; Walker, O.J. Induction periods in reactions between thiosulfate and arsenite or arsenate: A useful clock reaction. J. Am. Chem. Soc. 1922, 44, 97–102.
  22. Horváth, A.K.; Nagypál, I. Classification of clock reactions. ChemPhysChem 2015, 16, 588–594.
  23. Kerr, R.; Thomson, W.M.; Smith, D.J. Mathematical modelling of the vitamin C clock reaction. R. Soc. Open Sci. 2019, 6, 181367.
  24. Wright, S.W. Tick tock, a vitamin C clock. J. Chem. Educ. 2002, 79, 40A.
  25. Wright, S.W. The vitamin C clock reaction. J. Chem. Educ. 2002, 79, 41.
  26. Rawlings, J.B.; Mayne, D.Q.; Diehl, M. Model Predictive Control: Theory and Design, 2nd ed.; Nob Hill Publishing: Santa Barbara, CA, USA, 2017.
  27. Mehta, P.G. Principles of Control Systems Engineering; PHI Learning Pvt. Ltd.: New Delhi, India, 2011.
  28. Samko, S.G.; Kilbas, A.A.; Marichev, O.I. Fractional Integrals and Derivatives: Theory and Applications; Gordon and Breach: Yverdon, Switzerland, 1993.
  29. Lotfi, A.; Dehghan, M.; Yousefi, S.A. A numerical technique for solving fractional optimal control problems. Comput. Math. Appl. 2011, 62, 1055–1067.
  30. Agrawal, O.P.; Baleanu, D. A Hamiltonian Formulation and a Direct Numerical Scheme for Fractional Optimal Control Problems. J. Vib. Control 2007, 13, 1269–1281.
  31. Keshavarz, E.; Ordokhani, Y.; Razzaghi, M. A numerical solution for fractional optimal control problems via Bernoulli polynomials. J. Vib. Control 2016, 22, 3889–3903.
Figure 1. The approximate solution ξ1(τ) for α = 0.7, 0.8, 0.9, 0.95, 1.
Figure 2. The approximate solution ξ2(τ) for α = 0.7, 0.8, 0.9, 0.95, 1.
Figure 3. The approximate solutions ξ1(τ) and ξ2(τ) for α = 1 on [0, 30].
Figure 4. The approximate solutions ξ1(τ) and ξ2(τ) for α = 1 on [0, 160].
Figure 5. The approximate solutions ξ1(τ) and ξ2(τ) for α = 1 on [90, 210].
Figure 6. The approximate solutions ξ1(τ) and ξ2(τ) for α = 1 on [160, 1100].
Figure 7. The approximate solutions ξ1(τ) and ξ2(τ) for α = 1 on [0, 1200].
Figure 8. Residual errors of ξ1(τ) and ξ2(τ) for α = 1.
Figure 9. The approximate solutions ξ1(τ) plotted against each other.
Figure 10. The approximate solutions ξ1(τ) and ξ2(τ) for η2 = 2, 3.
Figure 11. The approximate solutions ξ1(τ) and ξ2(τ) for η2 = 4, 5.
Figure 12. The approximate solution ξ1(τ) for different values of α.
Figure 13. The approximate solution ξ2(τ) for different values of α.
Table 1. Length of the induction period when η1 = 0.001, η2 = 2, and η = 0.2.

α       τ
0.7     234.889
0.8     205.528
0.9     182.691
0.95    173.076
1       164.422
Table 2. Length of the induction period for α = 0.95 and η2 = 2, 3, 4, 5.

η2      τ
2       173.076
3       56.4086
4       22.6705
5       9.85377
Table 3. The errors for α = 0.7, 0.8, 0.9, 0.95, 1.

α       Error
1       2.3848 × 10⁻¹⁵
0.95    2.4925 × 10⁻¹⁵
0.8     2.7809 × 10⁻¹⁵
0.7     2.9809 × 10⁻¹⁵
Table 4. The computational time (CT) in seconds and the value of J.

α       J           CT in OMM   CT in [29]  CT in [30]  CT in [31]
0.8     0.167581    0.59        3.26        4.21        4.46
0.9     0.167581    0.58        3.29        4.22        4.42
0.99    0.167581    0.59        3.19        4.97        4.99
1       0.167581    0.58        3.22        4.78        4.98
Table 5. The absolute error in OMM and in [29,31].

t       Abs. Error in [29]  Abs. Error in [31]  Abs. Error in OMM
0.0     6.25 × 10⁻⁶         6.25 × 10⁻⁶         6.25 × 10⁻⁹
0.1     1.34 × 10⁻⁵         2.39 × 10⁻⁶         5.23 × 10⁻⁹
0.2     2.12 × 10⁻⁵         1.21 × 10⁻⁶         4.24 × 10⁻⁹
0.3     3.24 × 10⁻⁵         1.72 × 10⁻⁶         3.98 × 10⁻⁹
0.4     4.73 × 10⁻⁵         6.82 × 10⁻⁷         4.01 × 10⁻⁹
0.5     6.20 × 10⁻⁵         1.93 × 10⁻⁶         4.24 × 10⁻⁹
0.6     7.49 × 10⁻⁵         3.11 × 10⁻⁷         5.21 × 10⁻⁹
0.7     8.88 × 10⁻⁵         1.90 × 10⁻⁶         4.83 × 10⁻⁹
0.8     1.07 × 10⁻⁵         9.17 × 10⁻⁷         3.99 × 10⁻⁹
0.9     1.31 × 10⁻⁵         2.49 × 10⁻⁶         2.11 × 10⁻⁹
