Article

Development of a Higher-Order 𝒜-Stable Block Approach with Symmetric Hybrid Points and an Adaptive Step-Size Strategy for Integrating Differential Systems Efficiently

1
Department of Mathematics, I. K. Gujral Punjab Technical University Jalandhar, Main Campus, Kapurthala 144603, Punjab, India
2
Department of Mathematics, Akal University, Raman Road, Talwandi Sabo 151302, Punjab, India
3
Scientific Computing Group, Universidad de Salamanca, Plaza de la Merced, 37008 Salamanca, Spain
4
Department of Mathematics, Escuela Politécnica Superior de Zamora, Campus Viriato, 49022 Zamora, Spain
5
University Institute of Engineering and Technology, Panjab University, Sector-25, Chandigarh 160025, Chandigarh, India
*
Author to whom correspondence should be addressed.
Current address: Department of Engineering, Plaksha University, Alpha City, Sector-101, Mohali 140306, Punjab, India.
These authors contributed equally to this work.
Symmetry 2023, 15(9), 1635; https://doi.org/10.3390/sym15091635
Submission received: 27 July 2023 / Revised: 18 August 2023 / Accepted: 21 August 2023 / Published: 24 August 2023

Abstract: This article introduces a computational hybrid one-step technique for solving first-order initial-value differential systems that makes use of second-derivative function evaluations. The method incorporates three intra-step symmetric points that are selected so as to provide an optimized version of the suggested scheme. By combining the hybrid and block methodologies, an efficient numerical method is achieved. The hybrid nature of the algorithm ensures that the first Dahlquist barrier is overcome. The proposed technique exhibits an eighth order of convergence and is $\mathcal{A}$-stable, making it particularly well suited for handling stiff problems. Additionally, an adaptive step-size variant of the algorithm is developed by means of an embedded-type technique. Numerical experiments show that the suggested approach outperforms some other well-known methods with similar properties when applied to initial-value ordinary differential problems.

1. Introduction

In this article, our aim is to construct an efficient algorithm for integrating initial-value differential problems given by
$z'(x) = f(x, z); \quad z(x_0) = z_0,$  (1)
with $x \in [x_0, x_N]$, $z : [x_0, x_N] \to \mathbb{R}^m$, $f : [x_0, x_N] \times \mathbb{R}^m \to \mathbb{R}^m$, assuming that all prerequisites for the existence of a unique solution are fulfilled.
Differential equations are used to model continuous phenomena that frequently occur in real-world situations. Unfortunately, very few of such equations can be tackled analytically. In this scenario, the problem of interest is usually dealt with numerically, that is, an approximate solution is obtained on a discrete set of points. The classes of Runge–Kutta and linear multi-step techniques have usually been used to obtain reliable approximations of the true solution of (1). For more details, one can see the monographs written by Butcher [1], Hairer [2,3], Lambert [4], Brugnano [5], Rosser [6] and Milne [7]. In Matlab and Mathematica, many ODE solvers are incorporated with the purpose of carrying out the task efficiently. For a detailed description of the codes, one can see the references by Shampine et al. [8,9] and Dormand and Prince [10]. Some of these built-in codes are specifically designed for solving stiff and non-stiff systems; for instance, the ode23s solver is an integrated function explicitly crafted to manage stiff systems, as noted by Shampine et al. [11]. It excels particularly when dealing with coarse tolerances. This solver relies on a modified second-order Rosenbrock approach, employing a variable step-size methodology through a blend of second- and third-order formulas to estimate the solution effectively. The development of efficient algorithms with good stability characteristics is a major problem of interest in the numerical analysis of differential systems.
The first Dahlquist barrier, as is well known, limits the order that can be achieved in the class of zero-stable linear multi-step methods. Dahlquist established that the order of accuracy, say p, of a zero-stable linear m-step method satisfies the following:
$p \le m + 1$, when $m$ is odd; $\quad p \le m + 2$, when $m$ is even.
To overcome the limitations of linear multi-step methods, numerous researchers have proposed hybrid approaches which include information from the solution at off-step points within the required interval of interest. These hybrid methods have also been referred to as linear multi-step methods of a modified type [1]. For a more comprehensive understanding of these techniques, Lambert’s book [4] serves as a valuable resource. Block methods were initially designed to obtain starting guesses for implicit multi-step methods, but they have evolved to be used in general-purpose codes [12]. These block methods aim to provide the approximate solution at various grid points at once (see [13]). For an extensive list of references on these methods, interested readers may refer to Lambert’s book and Brugnano’s work [4,5].
Here, we have used both approaches, namely hybrid and block, to develop a new method using interpolation and collocation techniques. An optimized version of the method can be obtained by following an appropriate optimization strategy. The obtained algorithm can be considered an extended version of the one presented in [14]. Additionally, the technique is devised in an adaptive step size version, employing an embedded-type technique.
The article’s content is organized into the following sections: Section 2 contains the derivation of the new algorithm. In Section 3, the primary characteristics of the proposed method are examined. The formulation of the new method in adaptive step-size mode is elaborated on in Section 4. The performance of the proposed method, compared to some existing methods in the literature, is demonstrated through numerical experiments in Section 5. In Section 6, efficiency curves are plotted, which reflect the better performance of the proposed scheme. Finally, Section 7 presents some conclusions drawn from the study.

2. Construction of the Proposed Scheme

For the sake of simplicity, the method is derived for solving the differential system (1) with $m = 1$; then, by using a component-wise strategy, it can be applied for solving problems with $m > 1$. Let us consider a fixed step size $h = x_{j+1} - x_j$ on a discrete grid with $N + 1$ points, $x_0 < x_1 < \cdots < x_N$. To derive the method, begin by considering the theoretical solution of (1) in the form of an interpolating polynomial expressed as:
$z(x) \simeq R(x) = \sum_{i=0}^{\eta_1 + \eta_2 - 1} \Psi_i \Theta_i(x), \quad x \in [x_j, x_{j+k}].$  (2)
Consider $\eta_1$ as the number of interpolation points and $\eta_2$ as that of collocation points, satisfying $0 \le \eta_1 \le k$ and $\eta_2 > 0$, with $k > 0$ denoting the number of steps in the block. The terms $\Psi_i$ are unknown constants to be calculated, and $\Theta_i(x) = (x - x_j)^i$ represents the polynomial basis functions. To introduce a hybrid nature to the proposed method, three values, $r_1$, $r_2 = 1/2$, and $r_3$, in $(0, 1)$, are chosen such that $x_{j+r_1} = x_j + r_1 h$, $x_{j+r_2} = x_j + r_2 h$, and $x_{j+r_3} = x_j + r_3 h$ represent three intra-step points. The development of the present method involves specifying the parameters $\eta_1 = 1$, $\eta_2 = 8$, and $k = 1$. The derivation of the method is detailed below:
Step 1: 
The unknown coefficients Ψ i in (2) are determined by imposing the following conditions:
(i)
$z(x_j) = R(x_j)$
(ii)
$z'(x_{j+v}) = R'(x_{j+v}), \quad v = 0, r_1, r_2, r_3, 1$
(iii)
$z''(x_{j+v}) = R''(x_{j+v}), \quad v = 0, r_2, 1$
Hence, a system of nine equations in nine unknowns Ψ i , i = 0 ( 1 ) 8 is obtained. This system can be solved by using a computer algebra system (CAS). After substituting the obtained Ψ i s into (2), we get
$z(x) \simeq R(x) = \lambda(x) z_j + h\left[\sum_{i=0}^{1} \mu_i(x) f_{j+i} + \sum_{i=1}^{3} \mu_{r_i}(x) f_{j+r_i}\right] + h^2\left[\sigma_0(x) f'_j + \sigma_{r_2}(x) f'_{j+r_2} + \sigma_1(x) f'_{j+1}\right],$  (3)
where
$f_{j+m} = f(x_{j+m}, z_{j+m}), \; m = 0, r_1, r_2, r_3, 1$; $\quad f'_{j+m} = f'(x_{j+m}, z_{j+m}), \; m = 0, r_2, 1$; and $z_{j+m} \simeq z(x_{j+m})$.
Step 2: 
In order to obtain optimized values for r 1 and r 3 , we evaluate expression (3) at x = x j + 1 and x = x j + r 2 . This allows us to approximate the true solution at the final point and at the midpoint of the interval [ x j , x j + 1 ] , denoted as x j + 1 and x j + r 2 , respectively, in terms of r 1 and r 3 , as shown in [15]. The evaluation of z ( x j + 1 ) and z ( x j + r 2 ) can be readily obtained through a CAS, although the resulting expressions can be quite lengthy, and so they are not presented here. To determine the appropriate values of the unknown parameters r 1 and r 3 , the following optimization strategy is employed:
(i)
By expanding the formulas for z ( x j + 1 ) and z ( x j + r 2 ) using the Taylor series around x j , we obtain the local truncation errors of these formulas, which are given by
$L(z(x_{j+1}), h) = \dfrac{\left(-2 + r_1(3 - 6 r_3) + 3 r_3\right) z^{(9)}(x_j)\, h^9}{203{,}212{,}800} + O(h^{10})$  (4)
and
$L(z(x_{j+r_2}), h) = \dfrac{\left(-23 + r_1(87 - 384 r_3) + 87 r_3\right) z^{(9)}(x_j)\, h^9}{26{,}011{,}238{,}400} + O(h^{10}).$  (5)
(ii)
By setting the leading terms of the truncation errors in (4) and (5) equal to zero, we arrive at the following system of nonlinear equations:
$-2 + r_1(3 - 6 r_3) + 3 r_3 = 0, \qquad -23 + r_1(87 - 384 r_3) + 87 r_3 = 0.$  (6)
(iii)
The implicit system of equations above represents two curves in the r 1 r 3 -plane, exhibiting symmetry with respect to the diagonal r 1 = r 3 . A unique solution satisfying the condition 0 < r 1 < r 2 < r 3 < 1 is obtained, and it is given by:
$r_1 = \dfrac{1}{6}\left(3 - \sqrt{3}\right) \approx 0.211325, \qquad r_3 = \dfrac{1}{6}\left(3 + \sqrt{3}\right) \approx 0.788675.$  (7)
Substituting the optimal values of r 1 and r 3 in the expressions (4) and (5), we get
$L(z(x_{j+1}), h) = \dfrac{z^{(11)}(x_j)\, h^{11}}{1{,}207{,}084{,}032{,}000} + O(h^{12}),$
$L(z(x_{j+r_2}), h) = -\dfrac{z^{(10)}(x_j)\, h^{10}}{133{,}772{,}083{,}200} + O(h^{11}).$
Note that using the optimized values of r 1 and r 3 , we gain at least an order of accuracy in the formulas to approximate z j + 1 and z j + r 2 .
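As a quick sanity check (a sketch, not part of the paper's derivation), the optimal nodes in (7) can be verified numerically to annihilate the leading error coefficients of (4) and (5):

```python
import math

# Check that r1 = (3 - sqrt(3))/6 and r3 = (3 + sqrt(3))/6 annihilate the
# leading truncation-error coefficients of the two formulas.
r1 = (3 - math.sqrt(3)) / 6   # ~ 0.211325
r3 = (3 + math.sqrt(3)) / 6   # ~ 0.788675

eq1 = -2 + r1 * (3 - 6 * r3) + 3 * r3        # leading coefficient of (4)
eq2 = -23 + r1 * (87 - 384 * r3) + 87 * r3   # leading coefficient of (5)
print(eq1, eq2)  # both vanish up to rounding
```

Both residuals are zero up to floating-point rounding, since $r_1 + r_3 = 1$ and $r_1 r_3 = 1/6$.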
Step 3: 
Finally, we require approximations of the true solution at the remaining intra-step points, that is, at $x = x_{j+r_1}$ and $x = x_{j+r_3}$. These are obtained by substituting the optimized values of $r_1$, $r_3$ and $x = x_j + r_1 h$, $x = x_j + r_3 h$ in the expression in (3). The proposed hybrid method thus provides four approximations of the true solution, at $x_{j+r_1}$, $x_{j+r_2}$, $x_{j+r_3}$, $x_{j+1}$. The coefficients of each of the formulas of the block method are listed in Table 1.
This proposed method is implicit in nature, requiring the solution of a system of four equations at each iteration. The commonly employed approach to solve this system is Newton’s method or its variants, as outlined in [12].

3. Method Analysis: Examining Its Characteristics

Now, we discuss some basic characteristics of the method presented in Table 1, such as the order of accuracy, zero-stability, and linear stability.

3.1. Order of Accuracy and Consistency

To address the convergence analysis, we rewrite the method whose coefficients are given in Table 1 as
$A Z_J = h B F_J + h^2 C G_J,$  (8)
where A , B , and C are matrices of dimension 4 × 5 , given by
$A = \begin{pmatrix} -1 & 1 & 0 & 0 & 0 \\ -1 & 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 1 & 0 \\ -1 & 0 & 0 & 0 & 1 \end{pmatrix},$

$B = \begin{pmatrix} \frac{727 + 44\sqrt{3}}{7560} & \frac{108 + \sqrt{3}}{840} & \frac{4(36 - 23\sqrt{3})}{945} & \frac{36 - 23\sqrt{3}}{280} & \frac{-43 + 44\sqrt{3}}{7560} \\ \frac{619}{6720} & \frac{9}{70} + \frac{9\sqrt{3}}{128} & \frac{16}{105} & \frac{9}{70} - \frac{9\sqrt{3}}{128} & -\frac{11}{6720} \\ \frac{727 - 44\sqrt{3}}{7560} & \frac{36 + 23\sqrt{3}}{280} & \frac{4(36 + 23\sqrt{3})}{945} & \frac{108 - \sqrt{3}}{840} & -\frac{43 + 44\sqrt{3}}{7560} \\ \frac{19}{210} & \frac{9}{35} & \frac{32}{105} & \frac{9}{35} & \frac{19}{210} \end{pmatrix},$

$C = \begin{pmatrix} \frac{62 + 9\sqrt{3}}{22{,}680} & 0 & \frac{1}{162} & 0 & \frac{8 - 9\sqrt{3}}{22{,}680} \\ \frac{67}{26{,}880} & 0 & -\frac{1}{96} & 0 & \frac{1}{8960} \\ \frac{62 - 9\sqrt{3}}{22{,}680} & 0 & \frac{1}{162} & 0 & \frac{8 + 9\sqrt{3}}{22{,}680} \\ \frac{1}{420} & 0 & 0 & 0 & -\frac{1}{420} \end{pmatrix}$
and
$Z_J = (z_j, z_{j+r_1}, z_{j+r_2}, z_{j+r_3}, z_{j+1})^T$, $\quad F_J = (f_j, f_{j+r_1}, f_{j+r_2}, f_{j+r_3}, f_{j+1})^T$, $\quad G_J = (f'_j, f'_{j+r_1}, f'_{j+r_2}, f'_{j+r_3}, f'_{j+1})^T.$
The linear difference operator L ¯ related to the difference block given in (8) can be expressed as follows:
$\bar{L}[z(x), h] = \sum_{j} \bar{\rho}_j z(x + jh) - h \sum_{j} \bar{\delta}_j z'(x + jh) - h^2 \sum_{j} \bar{\gamma}_j z''(x + jh), \quad j = 0, r_1, r_2, r_3, 1,$  (9)
where $\bar{\rho}_j$, $\bar{\delta}_j$, and $\bar{\gamma}_j$ are precisely the column vectors of the matrices A, B, and C. We make the assumption that $z(x)$ is sufficiently differentiable to expand $z(x + jh)$, $z'(x + jh)$, and $z''(x + jh)$ in Taylor series around x. In this way we get
$\bar{L}[z(x), h] = \bar{\tau}_0 z(x) + \bar{\tau}_1 h z'(x) + \bar{\tau}_2 h^2 z''(x) + \cdots + \bar{\tau}_p h^p z^{(p)}(x) + \cdots$  (10)
Definition 1.
The linear operator in (9) and the method whose coefficients are given in Table 1 are regarded as being of order p when the expansion in (10) satisfies
$\bar{\tau}_0 = \bar{\tau}_1 = \bar{\tau}_2 = \cdots = \bar{\tau}_p = 0 \quad \text{and} \quad \bar{\tau}_{p+1} \ne 0.$  (11)
We note that the τ ¯ i s are vectors and τ ¯ p + 1 is known as the vector of error constants, being
$\bar{L}[z(x), h] = \bar{\tau}_{p+1} h^{p+1} z^{(p+1)}(x) + O(h^{p+2}).$
Concerning the obtained method, we have $\bar{\tau}_0 = \bar{\tau}_1 = \bar{\tau}_2 = \cdots = \bar{\tau}_8 = 0$ and
$\bar{\tau}_9 = \left( \dfrac{1}{1{,}881{,}169{,}920\sqrt{3}}, \; 0, \; -\dfrac{1}{1{,}881{,}169{,}920\sqrt{3}}, \; 0 \right)^T$
with
$\bar{L}[z(x), h] = \begin{pmatrix} \dfrac{h^9 z^{(9)}(x)}{1{,}881{,}169{,}920\sqrt{3}} + O(h^{10}) \\ -\dfrac{h^{10} z^{(10)}(x)}{133{,}772{,}083{,}200} + O(h^{11}) \\ -\dfrac{h^9 z^{(9)}(x)}{1{,}881{,}169{,}920\sqrt{3}} + O(h^{10}) \\ \dfrac{h^{11} z^{(11)}(x)}{1{,}207{,}084{,}032{,}000} + O(h^{12}) \end{pmatrix}.$
This indicates that the proposed method in Table 1 has order p = 8 . Since p 1 , this characteristic serves as a sufficient condition for ensuring the consistency of the corresponding block method.
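The stated accuracy can be illustrated on monomials. The following sketch (with the nodes and the last-row weights of the block method transcribed from Table 1) checks that the endpoint formula integrates $z(x) = x^q$ exactly up to degree 10 over one unit step, failing first at degree 11:

```python
import math

# Endpoint formula of the block: z_{j+1} = z_j + h*sum(b_i f) + h^2*sum(s_i f');
# nodes c, first-derivative weights b, second-derivative weights s (Table 1).
rt3 = math.sqrt(3)
c = [0.0, (3 - rt3) / 6, 0.5, (3 + rt3) / 6, 1.0]
b = [19/210, 9/35, 32/105, 9/35, 19/210]
s = [1/420, 0.0, 0.0, 0.0, -1/420]

def err(q):
    """Residual for z(x) = x^q on [0, 1]: exact value 1 minus the formula."""
    approx = sum(wb * q * ci**(q - 1) for wb, ci in zip(b, c))
    if q >= 2:
        approx += sum(ws * q * (q - 1) * ci**(q - 2) for ws, ci in zip(s, c))
    return 1.0 - approx

residuals = [err(q) for q in range(1, 11)]  # all zero: exact through degree 10
err11 = err(11)                             # first nonzero residual
```

The degree-11 residual equals $1/30240 = 11!/1{,}207{,}084{,}032{,}000$, matching the error constant of the endpoint formula quoted above.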

3.2. Zero-Stability Analysis

The primary focus of this subsection lies in analyzing the stability of the difference Equation (8) when h tends to zero. In this case, the method in Table 1 reduces to $z_{j+r_1} = z_j$, $z_{j+r_2} = z_j$, $z_{j+r_3} = z_j$, $z_{j+1} = z_j$, which can further be rewritten in a more convenient way as
$\hat{W}_\zeta = \bar{A} \hat{W}_{\zeta - 1},$  (12)
where
$\bar{A} = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 \end{pmatrix}$
and
$\hat{W}_\zeta = (z_{j+r_1}, z_{j+r_2}, z_{j+r_3}, z_{j+1})^T, \qquad \hat{W}_{\zeta - 1} = (z_{j+r_1-1}, z_{j+r_2-1}, z_{j+r_3-1}, z_j)^T.$
The first characteristic polynomial of the proposed block method is
$\rho(\eta) = \det[ I_4 \eta - \bar{A} ] = \eta^3 (\eta - 1).$  (13)
Since the roots of the equation $\rho(\eta) = 0$ fulfill $|\eta_j| \le 1$, and the root with a modulus of one is simple, the proposed method is considered to be zero-stable according to [2].
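This can be illustrated directly (a minimal sketch, not the authors' code, assuming NumPy is available): the eigenvalues of $\bar{A}$ are exactly the roots of $\rho(\eta)$.

```python
import numpy as np

# The matrix A-bar of the zero-stability analysis: each row is (0, 0, 0, 1).
Abar = np.zeros((4, 4))
Abar[:, 3] = 1.0

# Three roots at 0 and one simple root at 1, as rho(eta) = eta^3 (eta - 1).
eigs = sorted(abs(np.linalg.eigvals(Abar)))
```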

3.3. Convergence

As per the Lax equivalence theorem, a method is deemed to be convergent if and only if it satisfies both consistency and zero-stability. In the preceding sections, we have demonstrated the consistency and zero-stability of the newly developed method, leading to the conclusion that the method presented in Table 1 is convergent.

3.4. Linear Stability Analysis

The notion of linear stability differs from that of zero stability, since zero-stability analysis considers the limit $h \to 0$, whereas in practical applications we always work with finite step sizes, i.e., $h > 0$.
Let us consider the well-known Dahlquist’s test equation:
$z'(x) = \gamma z(x), \quad Re(\gamma) < 0.$  (14)
The theoretical solution to this problem is $z(x) = e^{\gamma x}$, which tends to zero as x approaches infinity. It is anticipated that, when employing the proposed method to solve the test problem (14), the resulting numerical solution will exhibit a similar behavior to the analytical solution. To ascertain the region in which the numerical method reproduces the true behavior of the test problem’s solutions, we apply the proposed method to the problem described in (14), thereby obtaining the system of equations
$M \hat{W}_\zeta = N \hat{W}_{\zeta - 1},$  (15)
where
$M = \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix}$
with
$B_{11} = \begin{pmatrix} \dfrac{840 - (108 + \sqrt{3})H}{840} & \dfrac{\left(-864 + 552\sqrt{3} - 35H\right)H}{5670} \\ -\dfrac{9(64 + 35\sqrt{3})H}{4480} & 1 - \dfrac{16H}{105} + \dfrac{H^2}{96} \end{pmatrix},$
$B_{12} = \begin{pmatrix} \dfrac{(-36 + 23\sqrt{3})H}{280} & \dfrac{\left(129 - 132\sqrt{3} - (8 - 9\sqrt{3})H\right)H}{22{,}680} \\ \dfrac{(-576 + 315\sqrt{3})H}{4480} & 1 + \dfrac{(44 - 3H)H}{26{,}880} \end{pmatrix},$
$B_{21} = \begin{pmatrix} -\dfrac{(36 + 23\sqrt{3})H}{280} & -\dfrac{\left(864 + 552\sqrt{3} + 35H\right)H}{5670} \\ -\dfrac{9H}{35} & -\dfrac{32H}{105} \end{pmatrix},$
$B_{22} = \begin{pmatrix} \dfrac{840 - (108 - \sqrt{3})H}{840} & \dfrac{\left(129 + 132\sqrt{3} - (8 + 9\sqrt{3})H\right)H}{22{,}680} \\ -\dfrac{9H}{35} & \dfrac{420 - (38 - H)H}{420} \end{pmatrix}$
and
$N = \begin{pmatrix} 0 & 0 & 0 & \dfrac{22{,}680 + H\left(2181 + 132\sqrt{3} + (62 + 9\sqrt{3})H\right)}{22{,}680} \\ 0 & 0 & 0 & \dfrac{26{,}880 + H(2476 + 67H)}{26{,}880} \\ 0 & 0 & 0 & \dfrac{22{,}680 + H\left(2181 - 132\sqrt{3} + (62 - 9\sqrt{3})H\right)}{22{,}680} \\ 0 & 0 & 0 & \dfrac{420 + H(38 + H)}{420} \end{pmatrix},$
with H = γ h and W ^ ζ , W ^ ζ 1 as in (12).
In order to understand the stability characteristics of the proposed hybrid block method, (15) is rewritten as
$\hat{W}_\zeta = G(H) \hat{W}_{\zeta - 1},$
where G ( H ) = M 1 N is known as the stability matrix. The stability properties of the derived method rely on evaluating the magnitude of the eigenvalues of the stability matrix, with particular emphasis on the spectral radius, denoted as ρ [ G ( H ) ] . The region of absolute stability, denoted as S, is defined as explained in [3].
$S = \{ H \in \mathbb{C} : \rho[G(H)] < 1 \}.$  (16)
If the region of absolute stability contains the entire left-half complex plane, i.e., C S , then the method is considered to be A -stable. The spectral radius of the stability matrix G ( H ) is given by
$\rho[G(H)] = \left| \dfrac{R(H)}{S(H)} \right|,$  (17)
where
$R(H) = 483{,}840 + 241{,}920 H + 55{,}440 H^2 + 7560 H^3 + 660 H^4 + 36 H^5 + H^6,$
$S(H) = 483{,}840 - 241{,}920 H + 55{,}440 H^2 - 7560 H^3 + 660 H^4 - 36 H^5 + H^6.$
The spectral radius given in (17) is less than unity on the whole left-half complex plane. This establishes the conclusion that the method whose coefficients are given in Table 1 is $\mathcal{A}$-stable. In particular, the interval of absolute stability for the proposed method is $(-\infty, 0)$.
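A quick numerical illustration (a sketch, with R and S copied from the text): since $S(H) = R(-H)$ and R has real coefficients, $|R(iy)| = |S(iy)|$ on the imaginary axis, while at sample points with $Re(H) < 0$ the amplification factor drops strictly below one.

```python
# Sample the amplification factor |R(H)/S(H)| of the stability matrix.
def R(H):
    return 483840 + 241920*H + 55440*H**2 + 7560*H**3 + 660*H**4 + 36*H**5 + H**6

def S(H):
    return 483840 - 241920*H + 55440*H**2 - 7560*H**3 + 660*H**4 - 36*H**5 + H**6

# Points inside the left half-plane: the ratio is strictly below 1.
inside = [complex(x, y) for x in (-0.1, -1.0, -10.0, -1000.0)
                         for y in (0.0, 1.0, 100.0)]
ratios = [abs(R(H) / S(H)) for H in inside]

# Points on the imaginary axis: |R| and |S| coincide (boundary of the region).
boundary = [abs(abs(R(1j*y)) - abs(S(1j*y))) / abs(S(1j*y)) for y in (0.5, 2.0, 50.0)]
```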

4. Adaptive Step-Size Formulation

The proposed method (8) can be transformed into an adaptive step-size algorithm by adopting a procedure similar to the one described in [11]. This procedure involves executing two approaches of different orders, denoted as p and q, with p > q , simultaneously. The combination of formulas involves using the lower-order formula to estimate the local error at each integration step, while the higher-order method advances the solution. The careful selection of this pair is crucial. To maintain the computational cost at the same level, the formula of the lower order is chosen in a manner that it does not need new function evaluations. In this way, there will be no increase in the computational cost in terms of the number of function evaluations.
The detailed procedure is as follows:
In the present case, the following implicit block formula of a lower order ($q = 7$), with local truncation error $LTE = -\dfrac{19 z^{(8)}(x_j) h^8}{304{,}819{,}200} + O(h^9)$, is considered:
$z^*_{j+1} = z_j + h\left[\dfrac{19}{105} f_j + \dfrac{36 - 19\sqrt{3}}{140} f_{j+r_1} + \dfrac{32}{105} f_{j+r_2} + \dfrac{36 + 19\sqrt{3}}{140} f_{j+r_3}\right] + h^2\left[\dfrac{5}{504} f'_j - \dfrac{19}{315} f'_{j+r_2} + \dfrac{13}{2520} f'_{j+1}\right].$
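As with the main method, the order of this embedded formula can be checked on monomials (a sketch, with the coefficients transcribed from the formula above):

```python
import math

# Order check for the embedded (q = 7) formula on z(x) = x^q over [0, 1]:
# nodes c, first-derivative weights b, second-derivative weights s.
rt3 = math.sqrt(3)
c = [0.0, (3 - rt3) / 6, 0.5, (3 + rt3) / 6, 1.0]
b = [19/105, (36 - 19*rt3)/140, 32/105, (36 + 19*rt3)/140, 0.0]
s = [5/504, 0.0, -19/315, 0.0, 13/2520]

def err(q):
    approx = sum(wb * q * ci**(q - 1) for wb, ci in zip(b, c))
    if q >= 2:
        approx += sum(ws * q * (q - 1) * ci**(q - 2) for ws, ci in zip(s, c))
    return 1.0 - approx

residuals = [err(q) for q in range(1, 8)]  # exact through degree 7
err8 = err(8)                              # small but nonzero: order 7
```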
The considered lower-order formula is firstly used to compute the approximate solution of the system at the final point of the block interval, x j + 1 , denoted as z j + 1 * . It is worth noting that this computation will not involve any additional computational efforts concerning new function evaluations. We have established that the local error L T E 1 obtained for the approximation z j + 1 * is
$LTE_1 = z(x_j + h) - z^*_{j+1} = O(h^{q+1}),$  (18)
where z ( x ) represents the continuous solution of the problem.
Secondly, the solution of the problem is also approximated at the same block interval with the proposed higher-order block method ( p = 8 ) of interest. The obtained numerical solution at the final point is denoted as z j + 1 and the local error L T E 2 obtained in this case is
$LTE_2 = z(x_j + h) - z_{j+1} = O(h^{p+1}).$  (19)
In the next step, the unknown exact solution $z(x_j + h)$ can be eliminated by subtracting (19) from (18); the resulting expression is denoted as $E_{est}$:
$E_{est} = z_{j+1} - z^*_{j+1} = LTE_1 + O(h^{p+1}).$  (20)
For sufficiently small values of h, the term $O(h^{q+1})$ dominates in expression (20); hence, $E_{est}$ provides a computable estimate of the local error of the lower-order formula. The advancing of the integration process can be carried out with the more accurate higher-order solution $z_{j+1}$. Therefore, the procedure uses the lower-order formula to estimate the local error at each step of the integration, while the higher-order method advances the computation.
This local error estimate $E_{est}$ helps in predicting a suitable step size for the forthcoming block. The solver will change the step size from $h_{old}$ to $h_{new}$ until the magnitude of the local error estimate $E_{est}$ is less than the predefined user tolerance $Tol$. The new step size is given by
$h_{new} = \xi\, h_{old} \left( \dfrac{TOL}{\| E_{est} \|} \right)^{1/(q+1)},$  (21)
where q denotes the order of the lower-order formula ($q = 7$) and $\xi$ is a safety factor lying in $(0, 1)$, introduced to reduce the risk of failed integration steps.
To initiate the strategy, an initial step size h i n i is needed, which can be selected using existing techniques from the literature (refer to [9,16]), or simply by choosing a very small starting step size as described in Sedgwick [17]. Subsequently, the algorithm will adjust this value if required, based on the chosen step size modification strategy. The entire process yields an embedded-type approach, which can be summarized in the following steps:
  • Firstly, assume the step size $h = h_{ini}$, solve the IVP (1) numerically using the developed method with that step size, and denote the approximate solution by $z_{j+1}$.
  • Secondly, use the lower-order formula to obtain a different approximate solution z j + 1 * .
  • As a next step, obtain the local error estimate E e s t by considering the difference of the obtained approximate solutions in Step 1 and Step 2, which is defined as E e s t = z j + 1 z j + 1 * .
  • Here, the user has to predefine the tolerance level T o l such that
    (i)
    If $\|E_{est}\| \le Tol$, then the new step size can be taken as $h_{new} = 2 \times h_{old}$; this makes the computation more efficient, since the final point is reached in fewer steps. The obtained values are kept and the computations proceed.
    (ii)
    If $\|E_{est}\| > Tol$, then the current step size is revised to the new step size given in (21), until $\|E_{est}\| \le Tol$ is reached, and the computations are carried out using the new step size.
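The accept/reject policy above can be sketched as follows (function and variable names are illustrative, not from the authors' code):

```python
def next_step(h_old, est_norm, tol, q=7, xi=0.9):
    """One decision of the embedded step-size control described above.

    Returns (h_new, accepted). xi is the safety factor in (0, 1); the
    rejection branch shrinks the step according to formula (21) with q = 7.
    """
    if est_norm <= tol:
        # Error test passed: keep the values and double the step size.
        return 2.0 * h_old, True
    # Error test failed: shrink the step as in (21) and redo the step.
    return xi * h_old * (tol / est_norm) ** (1.0 / (q + 1)), False

h_ok, accepted = next_step(0.1, 1e-8, 1e-6)   # accepted: step doubles
h_redo, redo = next_step(0.1, 1e-2, 1e-6)     # rejected: step shrinks
```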

5. Numerical Experimentation

This section focuses on the application of the adaptive step size version of the algorithm presented in (8) to various well-known problems in the literature. Some of these differential systems model different chemical reactions, where certain variables undergo rapid changes while others vary slowly, indicating stiffness. The abbreviations presented in the tables are identified as follows:
(i)
h i n i (initial step size): This denotes the initial step size used with the variable step-size algorithm for conducting the integration procedure. The initial step size in adaptive step-size solvers is crucial, as it sets the starting point for the dynamic adjustments. In this paper, we conduct numerical experiments employing identical initial step sizes for all solvers.
(ii)
TOL (tolerance): This represents the error tolerance set by the user, signifying the predetermined tolerance of error for numerical integration using the chosen step size. Error tolerance is a key parameter in adaptive step-size solvers for numerical integration of differential equations. It defines the acceptable level of error between the numerical solution and the true solution of the differential equation. Adaptive solvers adjust their step sizes dynamically to ensure that the computed solution remains within this error tolerance. Likewise, in this context, we conduct experiments with all solvers, maintaining the same error tolerance.
(iii)
MaxErr (maximum error): This refers to the highest magnitude of error observed among all the absolute errors computed for the numerical solution z k j at the calculated grid points.
$\text{MaxErr} = \max_{1 \le k \le m} \left\{ \max_{0 \le j \le N} \left\{ \left| z_k(x_j) - z_{kj} \right| \right\} \right\},$
and Δ z k (maximum absolute errors in computing the numerical solution z k j on the grid points) is given by
$\Delta z_k = \max_{0 \le j \le N} \left\{ \left| z_k(x_j) - z_{kj} \right| \right\},$
where $N + 1$ is the number of grid points, and $z_k(x_j)$ and $z_{kj}$ represent the exact and approximate $k$th components of the solution of an $m$-system at the point $x_j$.
(iv)
N (no. of steps): In this context, it indicates the count of integration steps executed by the solver to reach the final step of the interval.
(v)
FEVALs (number of function evaluations): Here, this represents the overall count of function evaluations (including derivatives) performed by the solver to reach the final integration step.
(vi)
C t i m e (computational time): It refers to the CPU time in seconds. The computational time taken by a solver to integrate a differential equation numerically is influenced by algorithm complexity, problem characteristics, step size, hardware resources, parallelization, and various other factors. Choosing an appropriate algorithm, adjusting parameters wisely, and utilizing available hardware effectively can help balance accuracy and computational efficiency.
(vii)
J.Eval (Jacobian evaluations): It denotes the total count of Jacobian evaluations performed by the solver throughout the numerical integration. In the context of numerical methods for solving ordinary differential equations (ODEs), particularly implicit methods or methods that involve solving systems of equations, “Jacobian evaluations” refers to the number of times the Jacobian matrix is computed and evaluated during the integration process. The Jacobian matrix captures the partial derivatives of the system of equations with respect to the variables involved. It plays a crucial role in implicit methods, where a system of equations is solved at each integration step.
The following well-studied methods were used for comparison purposes:
  • Lob-IIIC: This is a Runge–Kutta implicit approach, utilizing a Lobatto quadrature technique. Specifically, it is a five-stage eighth-order method of Lobatto type. To facilitate adaptive step size integration, the same approach is adapted using the strategy outlined in Section 4. In the adaptive step-size formulation, a three-stage Lobatto-type method is employed as a lower-order approach. This lower-order method necessitates eight function evaluations at each integration step.
  • RKGauss: This is a ‘Gaussian quadrature’-based implicit Runge–Kutta method following an A -stable characteristic. Here, the method is also formulated in a variable step-size mode using the same strategy. In this approach, the lower-order method used employs the same function evaluations as in the main method. This method evaluates six functions at each integration step. The method chosen is a five-stage method of order ten [1].
  • RADAU: The solver utilized in the experiments is designed with variable orders (1,5,9,13) and incorporates step-size control. It uses implicit Runge–Kutta approaches, specifically Radau–IIa. In the experiments, we employed the Matlab code provided by Hairer. The code can be found in the Matlab Stiff package at http://www.unige.ch/~hairer/software.html (accessed on 10 July 2023).
  • EMOHB: This refers to the new hybrid scheme in (8), incorporating the variable step-size strategy elucidated in Section 4. As the method is implicit, solving a system of equations is required at each iteration. The code for this method has been formulated using Mathematica. In order to tackle the resulting algebraic systems, the FindRoot command was utilized.
Remark 1.
Both solvers, that is, RADAU and the new scheme, change the step size using different strategies. RADAU considers both variable-step and variable-order methods, while the proposed scheme, RKGauss, and Lob-IIIC methods have been formulated in variable step-size versions, following the technique outlined in Section 4. We considered $TOL = AbsTOL = RelTOL$, where $AbsTOL$ and $RelTOL$ represent the absolute and relative error tolerances, respectively. RADAU utilizes $AbsTOL$ and $RelTOL$, while the hybrid scheme in Table 1 uses $TOL$ to get an estimate of the error on each iteration. For the implementation, we used MATLAB R2009b for the RADAU solver, while the derived method, Lob-IIIC, and RKGauss codes were implemented in Wolfram Mathematica version 11.0.1.0. The computations were performed on a laptop with an i3-4030U CPU @ 1.90 GHz processor, running on Windows 10.

5.1. The Oregonator

The origin of this stiff system can be traced back to the renowned Belousov–Zhabotinskii reaction. The problem is given by
$z_1'(x) = a\left( z_2(x) + z_1(x)\left(1 - b z_1(x) - z_2(x)\right) \right),$
$z_2'(x) = \left( z_3(x) - \left(1 + z_1(x)\right) z_2(x) \right)/a,$
$z_3'(x) = c\left( z_1(x) - z_3(x) \right).$
For the computation, the constants are assumed as follows: a = 77.27 , b = 8.375 × 10 6 , and c = 0.161 , with initial values z ( 0 ) = ( 1 , 2 , 3 ) T and integration interval [ 0 , 360 ] . The reference values at x N are as follows:
z 1 ( x N ) = 0.1000814870318523 × 10 1 , z 2 ( x N ) = 0.1228178521549917 × 10 4 , z 3 ( x N ) = 0.1320554942846706 × 10 3 .
The problem is tackled numerically using two solvers: Lob-IIIC and EMOHB. A comparison of their results is presented in Table 2, carried out over the same number of steps and function evaluations. To ensure an equal number of steps, a varied initial step size h i n i and predefined tolerances T o l were employed. The data presented in Table 2 demonstrate the superior performance of the proposed algorithm in comparison to Lob-IIIC. The discrete solutions of the problem are depicted in Figure 1. Additionally, efficiency curves for this problem are plotted in Figure 2, further illustrating the improved accuracy and fewer Jacobian computations achieved by the new method.
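The stiffness of the Oregonator can be illustrated by the spread of the Jacobian eigenvalues. The following sketch (not from the paper; it assumes NumPy) evaluates the Jacobian of the right-hand side, obtained by differentiating the system above, at the initial state:

```python
import numpy as np

# Constants of the Oregonator as used in the experiments.
a, b, c = 77.27, 8.375e-6, 0.161

def jacobian(z1, z2, z3):
    """Partial derivatives of the Oregonator right-hand side w.r.t. (z1, z2, z3)."""
    return np.array([
        [a * (1 - 2*b*z1 - z2), a * (1 - z1), 0.0],
        [-z2 / a, -(1 + z1) / a, 1.0 / a],
        [c, 0.0, -c],
    ])

lam = np.linalg.eigvals(jacobian(1.0, 2.0, 3.0))  # at z(0) = (1, 2, 3)
stiffness_ratio = max(abs(lam)) / min(abs(lam))   # wide eigenvalue spread
```

Already at $x = 0$ the eigenvalue magnitudes span about three orders of magnitude, and all eigenvalues lie in the left half-plane.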

5.2. The Robertson Chemistry Problem

We now turn our attention to a classical stiff problem from chemical sciences, which models the kinetics of an autocatalytic reaction [3]. This problem has been extensively studied by Cash [18,19] and has also been addressed in [14,20]. The problem is
$z_1'(x) = -0.04 z_1(x) + 10^4 z_2(x) z_3(x), \quad z_1(0) = 1,$
$z_2'(x) = 0.04 z_1(x) - 10^4 z_2(x) z_3(x) - 3 \times 10^7 z_2^2(x), \quad z_2(0) = 0,$
$z_3'(x) = 3 \times 10^7 z_2^2(x), \quad z_3(0) = 0.$
The numerical solution to the problem is computed over the integration interval [0, 40], following [18]. Here, the NDSolve Mathematica command with a 12th-order implicit Runge–Kutta method has been used to compute the reference solutions at $x_N = 40$, which help to assess the efficiency of the proposed scheme in terms of errors.
z 1 ( 40 ) = 0.71582706871940509022276063873209 , z 2 ( 40 ) = 9.185534764557763892160044740155 × 10 6 , z 3 ( 40 ) = 0.28416374574583035201334720122317 .
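A useful consistency check on these reference values (a sketch, not from the paper): the components of the Robertson right-hand side cancel pairwise, so $z_1 + z_2 + z_3 = 1$ is an invariant of the flow, and the quoted values respect it.

```python
# Robertson right-hand side; the three components sum to zero identically,
# so z1 + z2 + z3 is conserved along every solution.
def robertson_rhs(z1, z2, z3):
    r1 = -0.04 * z1 + 1.0e4 * z2 * z3
    r2 = 0.04 * z1 - 1.0e4 * z2 * z3 - 3.0e7 * z2**2
    r3 = 3.0e7 * z2**2
    return r1, r2, r3

drift = sum(robertson_rhs(0.7, 2.0e-5, 0.3))  # ~0 for any state

ref_sum = (0.71582706871940509022276063873209
           + 9.185534764557763892160044740155e-6
           + 0.28416374574583035201334720122317)  # reference values at x = 40
```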
The computational data in Table 3 are presented considering different initial step sizes $h_{ini}$ and tolerances $TOL$, comparing the proposed method to the other solvers RKGauss and RADAU. By examining these data, it becomes evident that the proposed method outperforms the other methods. The efficiency curves were plotted taking
$(h_{ini}, TOL) = (10^{-a}, 10^{-(a+4)}), \quad a = 3(1)6,$
which are represented in Figure 3.

5.3. The Brusselator System

Let us consider the “Brusselator system” [2], which has also been studied in previous works such as [14]. The problem is given by
$z_1'(x) = L + z_1^2(x) z_2(x) - (M + 1) z_1(x), \quad z_1(0) = z_1^0,$
$z_2'(x) = M z_1(x) - z_1^2(x) z_2(x), \quad z_2(0) = z_2^0.$
The system involves the positive real constants L and M [21]. It can be shown that (z_1*, z_2*) = (L, M/L) is the critical point of the system. For numerical purposes, the constants were taken as L = 1 and M = 3, with z_1^0 = 1.5 and z_2^0 = 3. The numerical integration spans the interval [0, 20]. The reference solution at x_N = 20 was obtained in Mathematica using NDSolve with a 12th-order implicit Runge–Kutta approach. The obtained values are
z_1(x_N) = 0.498637071268347848635481287883, z_2(x_N) = 4.596780349452011183183066998636.
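Both the stationarity of (z_1*, z_2*) = (L, M/L) and the reference values above can be cross-checked in a few lines of Python; the fixed-step classical RK4 below is only an illustrative stand-in for the solvers compared in the paper (step size and step count are arbitrary choices):

```python
def brusselator(z, L=1.0, M=3.0):
    # Brusselator right-hand side with L = 1, M = 3
    z1, z2 = z
    return [L + z1 * z1 * z2 - (M + 1.0) * z1,
            M * z1 - z1 * z1 * z2]

def rk4(f, z, h, steps):
    # classical fourth-order Runge-Kutta with a fixed step
    n = len(z)
    for _ in range(steps):
        k1 = f(z)
        k2 = f([z[i] + 0.5 * h * k1[i] for i in range(n)])
        k3 = f([z[i] + 0.5 * h * k2[i] for i in range(n)])
        k4 = f([z[i] + h * k3[i] for i in range(n)])
        z = [z[i] + h * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6.0
             for i in range(n)]
    return z

# the vector field vanishes at the critical point (L, M/L) = (1, 3)
residual = brusselator([1.0, 3.0])

# integrate from (1.5, 3) over [0, 20] with h = 1e-3
zN = rk4(brusselator, [1.5, 3.0], 1e-3, 20000)
```

With these parameters the solution winds onto the Brusselator limit cycle, so the endpoint values depend smoothly on the data and a small fixed step reproduces the quoted reference solution closely.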
In the numerical experiments, we used the combinations
(h_ini, TOL) = (10^{-a}, 10^{-(a+3)}), a = 1, 2, 3.
The data presented in Table 4 indicate that the new method outperforms the other methods. Figure 4 illustrates the plot of the obtained solution components z_1^j and z_2^j on [0, 20], using TOL = 10^{-9} and h_ini = 10^{-7}. The adaptive step-size scheme discussed in Section 4 works effectively with the proposed method: larger step sizes are employed on the smoother portions of the curves, reducing the computation time and the number of function evaluations.
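The embedded-type step-size control mentioned above follows the standard pattern: compare a local error estimate with the tolerance and rescale h by (TOL/err)^{1/(p+1)}, with a safety factor and growth/shrink bounds. A generic sketch follows (the constants fac, fac_min, and fac_max are illustrative assumptions, not the exact values used in EMOHB; p = 8 matches the order of the proposed method):

```python
def new_step(h, err, tol, p=8, fac=0.9, fac_min=0.2, fac_max=5.0):
    """Generic embedded-pair step-size controller: rescale h so the
    local error estimate err meets the tolerance tol; p is the order."""
    if err == 0.0:
        return h * fac_max  # error estimate vanished: grow as much as allowed
    r = fac * (tol / err) ** (1.0 / (p + 1))
    return h * min(fac_max, max(fac_min, r))
```

A step is accepted when err <= tol, and in either case the next (or retried) step uses the rescaled h, which is how large steps arise automatically on smooth portions of the solution.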

5.4. A Mildly Stiff Linear System

As the next test problem, we consider a mildly stiff, well-known classical system with a stiffness ratio of 1:1000, which was also discussed in [13]. The problem, composed of two first-order equations, is
z_1'(x) = 998 z_1(x) + 1998 z_2(x), z_1(0) = 1,
z_2'(x) = -999 z_1(x) - 1999 z_2(x), z_2(0) = 1.
The results are computed over the interval [0, 10] taking (h_ini, TOL) = (10^{-ξ}, 10^{-(ξ+1)}), ξ = 2, 3, 4. The exact solution of the system, with decaying exponential components, is
z_1(x) = 4 e^{-x} - 3 e^{-1000x}, z_2(x) = -2 e^{-x} + 3 e^{-1000x}.
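The exact solution can be verified directly against the system, and the 1:1000 stiffness ratio follows from the eigenvalues of the coefficient matrix, which are the roots of λ^2 + 1001λ + 1000 = 0, i.e., λ = -1 and λ = -1000. A short check (the sample points and the finite-difference tolerance are illustrative choices):

```python
import math

def rhs(z):
    # right-hand side of the mildly stiff linear system
    return [998.0 * z[0] + 1998.0 * z[1],
            -999.0 * z[0] - 1999.0 * z[1]]

def exact(x):
    return [4.0 * math.exp(-x) - 3.0 * math.exp(-1000.0 * x),
            -2.0 * math.exp(-x) + 3.0 * math.exp(-1000.0 * x)]

def exact_prime(x):
    # derivative of the exact solution
    return [-4.0 * math.exp(-x) + 3000.0 * math.exp(-1000.0 * x),
            2.0 * math.exp(-x) - 3000.0 * math.exp(-1000.0 * x)]

# eigenvalues of [[998, 1998], [-999, -1999]]: roots of l^2 + 1001 l + 1000 = 0
disc = math.sqrt(1001.0 ** 2 - 4.0 * 1000.0)
eigs = sorted([(-1001.0 - disc) / 2.0, (-1001.0 + disc) / 2.0])
```

The ratio |eigs[0]/eigs[1]| = 1000 is exactly the stiffness ratio quoted in the text; the fast e^{-1000x} component dies out almost immediately, after which an adaptive solver can take much larger steps.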
The numerical results in Table 5 reveal the good performance of the derived method compared with the considered solvers. The exact and discrete solutions are presented in Figure 5 on the time interval [0,10].

5.5. Jacobi Elliptic Functions

Next, we explore a well-known system associated with Jacobi elliptic functions s n , c n , and d n , which is commonly employed as a numerical test [22]. It is given by
sn'(x) = cn(x) dn(x), sn(0) = 0,
cn'(x) = -sn(x) dn(x), cn(0) = 1,
dn'(x) = -m sn(x) cn(x), dn(0) = 1,
with m = 1/2. The exact solution can be expressed as follows:
sn(x) = (2π/(m^{1/2} K)) Σ_{n=0}^∞ [q^{n+1/2}/(1 - q^{2n+1})] sin((2n+1)v),
cn(x) = (2π/(m^{1/2} K)) Σ_{n=0}^∞ [q^{n+1/2}/(1 + q^{2n+1})] cos((2n+1)v),
dn(x) = π/(2K) + (2π/K) Σ_{n=1}^∞ [q^n/(1 + q^{2n})] cos(2nv).
The values of the parameters appearing in the analytical solution are
q = e^{-π}, v = πx/(2K), and K = ∫_0^{π/2} dθ/√(1 - (1/2) sin^2 θ) ≈ 1.85.
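Both the constant K and any computed trajectory can be sanity-checked numerically: K follows from a composite Simpson quadrature, and the differential system implies the invariants sn^2 + cn^2 = 1 and dn^2 + m sn^2 = 1, which an accurate solution must preserve. A small pure-Python check (fixed-step RK4 with an illustrative step size, not the method of the paper):

```python
import math

m = 0.5

def simpson(g, a, b, n=200):
    # composite Simpson rule (n must be even)
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3.0

# complete elliptic integral K for m = 1/2
K = simpson(lambda t: 1.0 / math.sqrt(1.0 - m * math.sin(t) ** 2),
            0.0, math.pi / 2)

def field(z):
    # Jacobi elliptic system sn' = cn dn, cn' = -sn dn, dn' = -m sn cn
    sn, cn, dn = z
    return [cn * dn, -sn * dn, -m * sn * cn]

def rk4(z, h, steps):
    for _ in range(steps):
        k1 = field(z)
        k2 = field([z[i] + 0.5 * h * k1[i] for i in range(3)])
        k3 = field([z[i] + 0.5 * h * k2[i] for i in range(3)])
        k4 = field([z[i] + h * k3[i] for i in range(3)])
        z = [z[i] + h * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6.0
             for i in range(3)]
    return z

sn, cn, dn = rk4([0.0, 1.0, 1.0], 1e-3, 5000)  # integrate to x = 5
```

The invariants make this system a convenient accuracy test even when the series solution is not evaluated.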
We solved this problem over the integration interval [ 0 , 50 ] . The computations were carried out for
(h_ini, TOL) = (10^{-a}, 10^{-(a+3)}), a = 1, 2, 3.
The collected data in Table 6 show the excellent performance of the proposed scheme.

5.6. Van der Pol System

The well-known Van der Pol system described in [23] is considered. The system is given by
z_1'(x) = z_2(x), z_1(0) = 2,
z_2'(x) = ((1 - z_1^2(x)) z_2(x) - z_1(x))/ϵ, z_2(0) = -2/3 + (10/81)ϵ - (292/2187)ϵ^2 - (1814/19683)ϵ^3.
We have integrated this system on [0, 0.55139], using ϵ = 10^{-1}, which aligns with the approach in [23]. We have considered the combinations
(h_ini, TOL) = (10^{-a}, 10^{-(a+3)}), a = 3, 4, 5.
The reference solution for the problem
z_1(x_N) = 1.563373944230092, z_2(x_N) = -1.000020831854273,
was provided using a 12th order Runge–Kutta implicit method. The data presented in Table 7 strongly support the excellent performance of the new scheme.
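With the asymptotic initial condition above, the solution is smooth on [0, 0.55139], so even a fixed-step classical RK4 with a small step reproduces the quoted reference values; the sketch below uses an illustrative step count and is not the adaptive EMOHB scheme:

```python
eps = 0.1

def vdp(z):
    # Van der Pol system in singularly perturbed form
    z1, z2 = z
    return [z2, ((1.0 - z1 * z1) * z2 - z1) / eps]

# series initial condition for the smooth solution, evaluated at eps = 0.1
z2_0 = (-2.0 / 3.0 + (10.0 / 81.0) * eps
        - (292.0 / 2187.0) * eps ** 2 - (1814.0 / 19683.0) * eps ** 3)

def rk4(z, h, steps):
    for _ in range(steps):
        k1 = vdp(z)
        k2 = vdp([z[i] + 0.5 * h * k1[i] for i in range(2)])
        k3 = vdp([z[i] + 0.5 * h * k2[i] for i in range(2)])
        k4 = vdp([z[i] + h * k3[i] for i in range(2)])
        z = [z[i] + h * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6.0
             for i in range(2)]
    return z

steps = 20000
zN = rk4([2.0, z2_0], 0.55139 / steps, steps)
```

The special initial condition places the trajectory on the slow manifold, suppressing the fast transient that would otherwise make the problem much harder for explicit integrators.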

5.7. Test Problem

Finally, we have considered a challenging first-order differential equation given by
z'(x) = -20 z(x) (z(x) - 1) cos x, z(0) = 0.5,
whose exact solution is z(x) = 1/(1 + e^{-20 sin x}). From the plot of the exact solution shown in Figure 6, one can anticipate that this problem is demanding for solvers using a constant step size. We have considered h_ini = 10^{-4} with various tolerances TOL = 10^{-k}, k = 11, 12, 13, 14. We have compared the proposed method only with the solver RADAU; the RKGauss and Lob-IIIC methods were not considered, as they do not perform well on this problem, needing a high number of function evaluations to reach acceptable accuracy. The data in Table 8 show the good performance of the method. Figure 6 presents the plot of the exact and discrete solutions for h_ini = 10^{-5}, TOL = 10^{-11} with the EMOHB approach.
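That z(x) = 1/(1 + e^{-20 sin x}) solves the problem can be confirmed numerically by comparing a finite-difference derivative of the exact solution with the right-hand side (the sample points and difference step below are illustrative choices):

```python
import math

def exact(x):
    # exact solution z(x) = 1 / (1 + exp(-20 sin x)); note exact(0) = 0.5
    return 1.0 / (1.0 + math.exp(-20.0 * math.sin(x)))

def rhs(x, z):
    # right-hand side z' = -20 z (z - 1) cos x
    return -20.0 * z * (z - 1.0) * math.cos(x)

def dz(x, h=1e-6):
    # central-difference approximation of z'(x)
    return (exact(x + h) - exact(x - h)) / (2.0 * h)
```

The steep sigmoidal transitions of z near sin x = ±1, where the factor 20 dominates, are exactly the regions where a constant-step solver wastes effort and an adaptive one shines.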

6. About the Efficiency Curves

Here, we examine the efficiency curves that compare the performance of the new hybrid method (EMOHB) with RADAU, based on the maximum absolute errors (EMAX) versus the total number of function evaluations (FEvals). The curves presented in Figure 7 illustrate the comparison for the problems in Sections 5.3–5.7. We have excluded the plots for the LOBIIIC and RKGauss methods, as the tabular data already indicate their higher numbers of function evaluations compared with the other methods under consideration. The efficiency curves make it evident that the newly proposed method is the most efficient approach for solving the type of problems considered.

7. Conclusions

This paper introduces an optimized hybrid technique specially crafted for the integration of first-order initial value problems (IVPs), employing three intra-step points. To increase accuracy, the method incorporates second-derivative evaluations in its formulas. The approach arises from a fusion of two methodologies: the hybrid and block approaches. The hybrid nature enables the method to overcome the first Dahlquist barrier, while the block approach concurrently evaluates the numerical solution at several grid points, including off-step points, yielding computational efficiency. The hybrid point values are computed using an optimization strategy, culminating in an accurate scheme. Moreover, a variant of the proposed approach incorporating an adaptive step-size capability is presented, which allows the method to adjust the step size dynamically as required. Numerical experiments evaluating the new scheme's performance reveal its potential as a promising and efficient option for the problems addressed.

Author Contributions

The proposed method was conceived and analyzed by R.S., G.S. and H.R., who also took charge of writing the main manuscript text. The experiment section and figure plotting were carried out by H.R. and V.K. Conceptualization, R.S., G.S. and H.R.; methodology, R.S., G.S. and H.R.; software, R.S., G.S. and H.R.; validation, R.S., G.S. and H.R.; formal analysis, R.S. and G.S.; investigation, H.R. and V.K.; resources, H.R.; data curation, V.K.; writing—original draft preparation, R.S. and G.S.; writing—review and editing, R.S., G.S. and H.R.; visualization, R.S., G.S. and H.R.; supervision, G.S., H.R. and V.K.; project administration, H.R. and V.K. All authors have read and agreed to the published version of the manuscript.

Funding

No specific grant from funding agencies in the public, commercial, or not-for-profit sectors was received for this research.

Data Availability Statement

The numerical data used to support the findings of this study are available from Rajat Singla upon request.

Acknowledgments

We would like to thank the anonymous reviewers for their constructive comments, which have greatly contributed to improving the manuscript. Further, Rajat Singla would like to thank I.K. Gujral Punjab Technical University Jalandhar, Punjab (India), for providing research facilities for the present work.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
IVP Initial value problem
ODEs Ordinary differential systems
TOL Tolerance
FEVALs Number of function evaluations
h_ini Initial step size
MaxErr Maximum absolute error along the integration interval
Δz_k Maximum absolute error in computing the numerical solution z_k^j along the integration interval
N Number of integration steps
C_time CPU time in seconds
J.Eval Number of Jacobian evaluations

References

  1. Butcher, J.C. Numerical Methods for Ordinary Differential Equations; John Wiley & Sons Ltd.: Hoboken, NJ, USA, 2008. [Google Scholar]
  2. Hairer, E.; Nörsett, S.P.; Wanner, G. Solving Ordinary Differential Equations-I; Springer: Berlin, Germany, 1993. [Google Scholar]
  3. Hairer, E.; Wanner, G. Solving Ordinary Differential Equations-II; Springer: Berlin, Germany, 1996. [Google Scholar]
  4. Lambert, J.D. Numerical Methods for Ordinary Differential Systems: The Initial Value Problem; John Wiley & Sons: New York, NY, USA, 1991. [Google Scholar]
  5. Brugnano, L.; Trigiante, D. Solving Differential Problems by Multi-Step Initial and Boundary Value Methods; Gordon and Breach Science Publishers: Philadelphia, PA, USA, 1998. [Google Scholar]
  6. Rosser, J.B. A Runge–Kutta for all seasons. SIAM Rev. 1967, 9, 417–452. [Google Scholar] [CrossRef]
  7. Milne, W.E. Numerical Solution of Differential Equations; John Wiley and Sons: New York, NY, USA, 1953. [Google Scholar]
  8. Shampine, L.F.; Reichelt, M.W. The MATLAB ODE suite. SIAM J. Sci. Comput. 1997, 18, 1–22. [Google Scholar]
  9. Shampine, L.F.; Gordon, M.K. Computer Solution of Ordinary Differential Equations: The Initial Value Problem; Freeman: San Francisco, CA, USA, 1975. [Google Scholar]
  10. Dormand, J.R.; Prince, P.J. A family of embedded Runge–Kutta formulae. J. Comput. Appl. Math. 1980, 6, 19–26. [Google Scholar]
  11. Shampine, L.F.; Gladwell, I.; Thompson, S. Solving ODEs with MATLAB; Cambridge University Press: New York, NY, USA, 2003. [Google Scholar]
  12. Brugnano, L.; Trigiante, D. Block Implicit Methods for ODEs. In Recent Trends in Numerical Analysis; Trigiante, D., Ed.; Nova Science Publ. Inc.: New York, NY, USA, 2001; pp. 81–105. [Google Scholar]
  13. Ramos, H.; Singh, G. A tenth order A–stable two–step hybrid block method for solving initial value problems of ODEs. Appl. Math. Comput. 2017, 310, 75–88. [Google Scholar] [CrossRef]
  14. Singh, G.; Garg, A.; Kanwar, V.; Ramos, H. An efficient optimized adaptive step size hybrid block method for integrating differential systems. Appl. Math. Comput. 2019, 362, 124567. [Google Scholar] [CrossRef]
  15. Singh, G.; Garg, A.; Singla, R.; Kanwar, V. A novel two-parameter class of optimized hybrid block methods for integrating differential systems numerically. Comput. Math. Methods 2021, 3, e1214. [Google Scholar] [CrossRef]
  16. Watts, H.A. Starting step–size for an ODE solver. J. Comput. Appl. Math. 1983, 9, 177–191. [Google Scholar] [CrossRef]
  17. Sedgwick, A.E. An Effective Variable Order Variable Step Adams Method; Department of Computer Science, Report 53; University of Toronto: Toronto, ON, Canada, 1973. [Google Scholar]
  18. Cash, J.R. Second derivative extended backward differentiation formulas for the numerical integration of stiff systems. SIAM J. Numer. Anal. 1981, 18, 21–36. [Google Scholar] [CrossRef]
  19. Cash, J.R. On the integration of stiff systems of ODEs using extended backward differentiation formulae. Numer. Math. 1980, 34, 235–246. [Google Scholar] [CrossRef]
  20. Ramos, H.; Singh, G. A note on variable step–size formulation of a Simpson’s–type second derivative block method for solving stiff systems. Appl. Math. Lett. 2017, 64, 101–107. [Google Scholar] [CrossRef]
  21. Twizell, E.H.; Gumel, A.B.; Cao, Q. A second-order scheme for the Brusselator reaction–diffusion system. J. Math. Chem. 1999, 26, 333–344. [Google Scholar] [CrossRef]
  22. Glaser, A.; Rokhlin, V. A new class of highly accurate solvers for ordinary differential equations. J. Sci. Comput. 2009, 38, 368–399. [Google Scholar] [CrossRef]
  23. Bras, M.; Izzo, G.; Jackiewicz, Z. Accurate implicit–explicit general linear methods with inherent Runge–Kutta stability. J. Sci. Comput. 2016, 70, 1105–1143. [Google Scholar] [CrossRef]
Figure 1. Discrete solutions of problem in Section 5.1 using EMOHB with h_ini = 10^{-2}, TOL = 10^{-3}.
Figure 2. Efficiency curves for problem in Section 5.1.
Figure 3. Efficiency curves for problem in Section 5.2.
Figure 4. Plot of discrete solution components z_1^j and z_2^j of problem in Section 5.3 on [0, 20], with h_ini = 10^{-7}, TOL = 10^{-9}.
Figure 5. Exact and discrete solutions of problem in Section 5.4 using the proposed method EMOHB with h_ini = 10^{-6}, TOL = 10^{-10}. (a) Plot of solution z_1(x) of problem in Section 5.4; (b) plot of solution z_2(x) of problem in Section 5.4.
Figure 6. Plot of exact (blue line) and discrete (red dots) solutions of problem in Section 5.7 with h_ini = 10^{-5}, TOL = 10^{-11}.
Figure 7. Efficiency curves. (a) Efficiency curves for problem in Section 5.3; (b) efficiency curves for problem in Section 5.4; (c) efficiency curves for problem in Section 5.5; (d) efficiency curves for problem in Section 5.6; (e) efficiency curves for problem in Section 5.7.
Table 1. Coefficients of the method.

z | λ | μ_0 | μ_{r_1} | μ_{r_2} | μ_{r_3} | μ_1 | σ_0 | σ_{r_2} | σ_1
z_{j+r_1} | 1 | (727 + 44√3)/7560 | (108 + √3)/840 | 4(36 - 23√3)/945 | (36 - 23√3)/280 | (-43 + 44√3)/7560 | (62 + 9√3)/22,680 | 1/162 | (8 - 9√3)/22,680
z_{j+r_2} | 1 | 619/6720 | 9/70 + 9√3/128 | 16/105 | 9/70 - 9√3/128 | -11/6720 | 67/26,880 | -1/96 | 1/8960
z_{j+r_3} | 1 | (727 - 44√3)/7560 | (36 + 23√3)/280 | 4(36 + 23√3)/945 | (108 - √3)/840 | -(43 + 44√3)/7560 | (62 - 9√3)/22,680 | 1/162 | (8 + 9√3)/22,680
z_{j+1} | 1 | 19/210 | 9/35 | 32/105 | 9/35 | 19/210 | 1/420 | 0 | -1/420
Table 2. Data for problem in Section 5.1.

N | FEVALs | Method | MaxErr | J.Eval
808 | 6464 | Lob-IIIC | 5.22381 × 10^0 | 5652
808 | 6464 | EMOHB | 8.71751 × 10^{-10} | 4862
1712 | 13,696 | Lob-IIIC | 2.12699 × 10^{-4} | 10,421
1712 | 13,696 | EMOHB | 6.32099 × 10^{-11} | 8874
1865 | 14,920 | Lob-IIIC | 8.45584 × 10^{-5} | 11,082
1865 | 14,920 | EMOHB | 3.93356 × 10^{-11} | 9476
3852 | 30,816 | Lob-IIIC | 8.09143 × 10^{-7} | 20,826
3852 | 30,816 | EMOHB | 2.04636 × 10^{-12} | 17,909
Table 3. Data for problem in Section 5.2.

Method | h_ini | TOL | Δz_1 | Δz_2 | Δz_3 | N | FEVALs | J.Eval | C_time
EMOHB | 10^{-10} | 10^{-12} | 1.5 × 10^{-17} | 6.0 × 10^{-20} | 1.5 × 10^{-17} | 49 | 392 | 225 | 1.17
EMOHB | 10^{-10} | 10^{-13} | 1.4 × 10^{-18} | 2.0 × 10^{-21} | 1.4 × 10^{-18} | 60 | 480 | 255 | 1.43
EMOHB | 10^{-10} | 10^{-14} | 6.4 × 10^{-18} | 2.7 × 10^{-22} | 6.4 × 10^{-18} | 75 | 600 | 260 | 1.32
RKGauss | 10^{-6} | 10^{-9} | 1.0 × 10^{-7} | 4.1 × 10^{-12} | 1.0 × 10^{-7} | 250 | 1500 | 1465 | 4.16
RKGauss | 10^{-6} | 10^{-10} | 3.5 × 10^{-8} | 1.3 × 10^{-12} | 3.5 × 10^{-8} | 712 | 4272 | 3951 | 12.82
RADAU | 10^{-6} | 10^{-9} | 5.1 × 10^{-10} | 0.0 | 5.1 × 10^{-10} | 55 | 467 | 43 | 0.5470
RADAU | 10^{-6} | 10^{-10} | 2.4 × 10^{-12} | 0.0 | 2.4 × 10^{-12} | 53 | 736 | 28 | 0.6148
RADAU | 10^{-10} | 10^{-13} | 1.7 × 10^{-13} | 0.0 | 1.7 × 10^{-13} | 102 | 1213 | 41 | 0.8941
RADAU | 10^{-10} | 10^{-14} | 5.0 × 10^{-14} | 0.1 × 10^{-16} | 5.0 × 10^{-14} | 131 | 1437 | 43 | 0.7285
Table 4. Data for problem in Section 5.3.

h_ini | TOL | Method | MaxErr | N | FEVALs
10^{-1} | 10^{-4} | RADAU | 8.2435 × 10^{-6} | 113 | 957
10^{-1} | 10^{-4} | RKGauss | 4.03571 × 10^{-5} | 64 | 384
10^{-1} | 10^{-4} | EMOHB | 1.972285 × 10^{-7} | 36 | 288
10^{-2} | 10^{-5} | RADAU | 1.8202 × 10^{-7} | 156 | 1296
10^{-2} | 10^{-5} | RKGauss | 2.37942 × 10^{-5} | 101 | 606
10^{-2} | 10^{-5} | EMOHB | 2.358920 × 10^{-8} | 45 | 360
10^{-3} | 10^{-6} | RADAU | 4.0993 × 10^{-7} | 173 | 1692
10^{-3} | 10^{-6} | RKGauss | 1.43751 × 10^{-5} | 197 | 1182
10^{-3} | 10^{-6} | EMOHB | 1.53089 × 10^{-9} | 56 | 448
Table 5. Data for problem in Section 5.4.

h_ini | TOL | Method | MaxErr | N | FEVALs
10^{-2} | 10^{-3} | RADAU | 2.0618 × 10^{-5} | 24 | 110
10^{-2} | 10^{-3} | RKGauss | 2.3902 × 10^{-4} | 22 | 132
10^{-2} | 10^{-3} | EMOHB | 4.12974 × 10^{-6} | 12 | 96
10^{-3} | 10^{-4} | RADAU | 1.8942 × 10^{-6} | 30 | 143
10^{-3} | 10^{-4} | RKGauss | 1.0263 × 10^{-4} | 30 | 180
10^{-3} | 10^{-4} | EMOHB | 9.46409 × 10^{-8} | 14 | 112
10^{-4} | 10^{-5} | RADAU | 3.6773 × 10^{-7} | 35 | 173
10^{-4} | 10^{-5} | RKGauss | 2.0540 × 10^{-5} | 52 | 312
10^{-4} | 10^{-5} | EMOHB | 9.82063 × 10^{-9} | 16 | 128
Table 6. Data for problem in Section 5.5.

h_ini | TOL | Method | MaxErr | N | FEVALs
10^{-1} | 10^{-4} | RADAU | 1.0737 × 10^{-3} | 145 | 1115
10^{-1} | 10^{-4} | RKGauss | 2.20187 × 10^{-3} | 89 | 534
10^{-1} | 10^{-4} | EMOHB | 1.73727 × 10^{-6} | 42 | 336
10^{-2} | 10^{-5} | RADAU | 2.6296 × 10^{-4} | 185 | 1323
10^{-2} | 10^{-5} | RKGauss | 1.48977 × 10^{-3} | 145 | 870
10^{-2} | 10^{-5} | EMOHB | 8.56278 × 10^{-8} | 56 | 448
10^{-3} | 10^{-6} | RADAU | 2.7531 × 10^{-5} | 254 | 1764
10^{-3} | 10^{-6} | RKGauss | 5.42132 × 10^{-4} | 326 | 1956
10^{-3} | 10^{-6} | EMOHB | 2.41961 × 10^{-8} | 74 | 592
Table 7. Data for problem in Section 5.6.

h_ini | TOL | Method | MaxErr | N | FEVALs
10^{-3} | 10^{-6} | RADAU | 1.5397 × 10^{-6} | 13 | 119
10^{-3} | 10^{-6} | RKGauss | 5.7954 × 10^{-6} | 13 | 78
10^{-3} | 10^{-6} | EMOHB | 1.93659 × 10^{-9} | 4 | 32
10^{-4} | 10^{-7} | RADAU | 1.8324 × 10^{-7} | 16 | 148
10^{-4} | 10^{-7} | RKGauss | 2.4281 × 10^{-6} | 23 | 138
10^{-4} | 10^{-7} | EMOHB | 6.75444 × 10^{-11} | 5 | 40
10^{-5} | 10^{-8} | RADAU | 2.0754 × 10^{-8} | 18 | 154
10^{-5} | 10^{-8} | RKGauss | 6.64973 × 10^{-7} | 56 | 336
10^{-5} | 10^{-8} | EMOHB | 1.84577 × 10^{-11} | 8 | 48
Table 8. Data for problem in Section 5.7.

TOL | Method | MaxErr | N | FEVALs
10^{-11} | RADAU | 4.6038 × 10^{-5} | 1651 | 35,131
10^{-11} | EMOHB | 4.83376 × 10^{-6} | 876 | 7008
10^{-12} | RADAU | 2.7569 × 10^{-5} | 2067 | 42,622
10^{-12} | EMOHB | 1.84514 × 10^{-7} | 1142 | 9136
10^{-13} | RADAU | 8.1137 × 10^{-6} | 2572 | 50,018
10^{-13} | EMOHB | 9.48677 × 10^{-9} | 1499 | 11,992
10^{-14} | RADAU | 3.3542 × 10^{-7} | 2628 | 55,925
10^{-14} | EMOHB | 1.36784 × 10^{-9} | 1971 | 15,768