Article

Some New Time and Cost Efficient Quadrature Formulas to Compute Integrals Using Derivatives with Error Analysis

by Sara Mahesar 1, Muhammad Mujtaba Shaikh 1,*, Muhammad Saleem Chandio 2 and Abdul Wasim Shaikh 2

1 Department of Basic Sciences and Related Studies, Mehran University of Engineering and Technology, Jamshoro 76020, Pakistan
2 Institute of Mathematics and Computer Science, University of Sindh, Jamshoro 76020, Pakistan
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(12), 2611; https://doi.org/10.3390/sym14122611
Submission received: 7 September 2022 / Revised: 25 October 2022 / Accepted: 2 December 2022 / Published: 9 December 2022
(This article belongs to the Special Issue Numerical Analysis, Approximation Theory, Differential Equations)

Abstract: In this research, some new and efficient quadrature rules are proposed involving the combination of function and first-derivative evaluations at equally spaced data points, with the main focus on their computational efficiency in terms of cost and time usage. The methods are derived theoretically, and theorems on the order of accuracy, degree of precision and error terms are proved. The proposed methods are semi-open-type rules with derivatives. The order of accuracy and degree of precision of the proposed methods are higher than those of the classical rules, for which a systematic and symmetrical ascendancy has been proved. Various numerical tests are performed to compare the performance of the proposed methods with the existing methods in terms of accuracy, precision, leading local and global truncation errors, numerical convergence rates and computational cost with average CPU usage. In addition to the classical semi-open rules, the proposed methods have also been compared with some Gauss–Legendre methods on various integrals involving oscillatory and periodic integrands as well as integrands with derivative singularities. The analysis of the results proves that the devised techniques are more efficient than the classical semi-open Newton–Cotes rules from theoretical and numerical perspectives because of their promisingly reduced functional cost and shorter execution times. The proposed methods compete well with the spectral Gauss–Legendre rules, and in some cases outperform them. Symmetric error distributions are observed in regular cases of integrands, whereas asymmetrical behavior is evidenced in oscillatory and highly nonlinear cases.

1. Introduction

Numerical analysis offers a rich store of methods for solving practical problems by purely arithmetical operations. Many problems in applied sciences and mathematical physics are posed in the form of integrals. Although various analytical techniques are available for such integrals, numerous numerical techniques have been developed to obtain approximate solutions for broader classes of integrals. Methods of numerical integration are often referred to as quadrature rules because they approximate the integrals of functions of one variable. Since quadrature rules provide a very close estimate of the true integral in most critical problems, the numerical evaluation of integrals through these techniques has long been a key topic in mathematical research. These formulas calculate the area under the region defined by the integrand over a finite or infinite range; when the integrand cannot be integrated analytically, or its values are available only in tabular form without an explicit mathematical description, numerical integration is the only option [1,2]. The worst cases are definite integrals whose analytical evaluation is not possible, especially those associated with singularities and nonlinearities arising in the solution of differential and integral equations [2,3], for example:
$$\int_0^1 \frac{e^x}{\sqrt[3]{x^2}}\,dx, \qquad \int_0^1 \frac{e^{x^2}}{\sqrt{1-x^2}}\,dx, \qquad \int_1^{\infty} \sin(x^2)\,dx, \qquad \text{and} \qquad \int_0^1 \frac{\sin x}{x}\,dx$$
The efficiency of quadrature rules is usually characterized in terms of the degree of precision and the order of accuracy. Therefore, obtaining higher precision and accuracy in numerical integration formulas is one of the standing challenges of numerical analysis. A numerical integration formula is generally given as:
$$\int_a^b f(x)\,dx \approx \sum_{i=0}^{n} c_i f(x_i)$$
where the constants $c_i$ are the weights and $x_0, x_1, \ldots, x_n$ are $n+1$ equidistant nodes in the interval $[a, b]$. Formula (2) represents the Newton–Cotes quadrature rules in general form. In the semi-open Newton–Cotes formulae, the function evaluation at one of the endpoints of the interval is excluded. In this paper, we exclude the endpoint on the right side of the interval, i.e., nodes at equally spaced points of $[a, b)$ are considered. We can rewrite (2) as:
$$\int_a^b f(x)\,dx = \int_{x_0}^{x_n} f(x)\,dx \approx \sum_{i=0}^{n-1} c_i f(x_i)$$
where $x_0, x_1, \ldots, x_{n-1}$ are the $n$ distinct integration points and $c_i$ are the $n$ weights within the interval $[a, b)$, with $x_i = a + ih$, $i = 0, 1, 2, \ldots, n-1$, and $h = (b-a)/(n+1)$.
The starting semi-open Newton–Cotes quadrature rule in basic form along with the local error term is defined as:
$$\int_a^b f(x)\,dx = (b-a)\,f(a) + \frac{(b-a)^2}{2} f'(\xi)$$
where $\xi \in (a, b)$; this is known as the one-point semi-open Newton–Cotes rule. The composite form of this method with the global error term is defined as:
$$\int_{x_0}^{x_n} f(x)\,dx = h\left[\sum_{i=0}^{n-1} f(x_i)\right] + \frac{h}{2}(b-a)\,f'(\eta)$$
where $\eta \in (a, b)$, $h = \frac{b-a}{n+1}$, $x_i = a + ih$ and $i = 0, 1, 2, \ldots, n-1$.
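As a concrete illustration (ours, not code from the paper), the composite one-point semi-open rule above can be sketched in Python. The function name `sonc_composite` and the choice of `m` equal strips are our own; the node in each strip is its left endpoint, so $f(b)$ is never evaluated:

```python
def sonc_composite(f, a, b, m):
    """Composite one-point semi-open (left-point) rule with m equal strips.

    Only the left endpoint of each strip is used as a node, so the
    right endpoint b of the interval is never evaluated.
    """
    h = (b - a) / m                      # strip width
    return h * sum(f(a + i * h) for i in range(m))
```

The rule is exact for constants and first-order accurate otherwise: halving the strip width roughly halves the error.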
The Newton–Cotes quadrature rules are interpolatory in nature. This means that the rules are derived by assuming that the integrand is an interpolating polynomial of a suitable degree, which can thus be expressed exactly as a regular Taylor series approximation. The formulation of Newton–Cotes rules is frequently based upon interpolating polynomials due to Lagrange. Similarly, Hermite-type polynomials can also be employed, as these are based on the exactness of the integrand as well as its first-order derivative. This formulation of integration formulas is referred to as corrected Newton–Cotes quadrature rules. These formulas are more accurate than the conventional rules since they have a higher degree of precision and order of accuracy [4]. Several new quadrature rules were discovered that were termed optimal for different families of integrands [4], and these have proven to be much more significant. Various numerical techniques have been proposed as improvements of the classical Newton–Cotes rules. A unified approach to solving systems of linear equations with coefficient matrices of the Vandermonde type for closed and open Newton–Cotes rules was given by El-Mikkawy in [5,6]. Improvements in Newton–Cotes formulas with the usual nodes, along with nodes at both, none, or only one endpoint of the interval of integration, were obtained by changing the endpoints from constants to variables to raise the degree of precision and accuracy of the methods [7,8,9]. The key idea of this research was to extend the space of monomials, which increases the number of equations and unknowns. Additionally, first- and second-kind Chebyshev quadrature rules of Newton–Cotes type were devised for open and semi-open rules in [10,11]. Burg proposed enhanced classes of Newton–Cotes rules using first-order derivatives at both endpoints [12], and also using the derivatives at the midpoint [13] of the interval, based on the concept of precision degree.
In [14], a class of methods for closed Newton–Cotes formulas was proposed using the midpoint derivative value, and these methods were proved to gain two orders of precision over the classic closed Newton–Cotes formulas. The authors in [15] presented a derivative-based trapezoid rule for the Riemann–Stieltjes integral. In [3], a comparison was made between uniformly spaced polynomial collocation and quadrature methods for the Fredholm integral equation of the second kind. Memon et al. [16] proposed derivative-based schemes for Riemann–Stieltjes integration, and the same authors [17] devised a new technique for four-point Riemann–Stieltjes integrals. An error analysis of Newton–Cotes cubature rules was carried out in detail by Malik et al. in [18].
Integration has wide applications in many fields of science and engineering, for instance in probability theory [19,20], stochastic processes and oscillators [21,22,23], and functional analysis [24], particularly in the spectral theorem for self-adjoint operators in Hilbert space [25]. Some special types of integrals based on the oscillatory, periodic, or singular nature of integrands have also been highlighted in detail in the literature, and their approximation using quadrature rules has been encouraged [26,27,28,29,30]. In the recent past, numerous new approaches have been devised to approximate definite integrals in which the derivatives of the function at different statistical means were used. In [31], a Newton–Cotes rule was proposed using the derivative at the midpoint of the interval for algebraic functions. This work was extended in [32], where the derivatives of the function were evaluated at the geometric mean and harmonic mean of the endpoints of the interval. A comparison between the three techniques using the arithmetic mean, geometric mean, and harmonic mean was made in [33]. Additionally, several other approaches for closed and open Newton–Cotes rules were invented in which the derivatives of the integrand were employed at the centroidal mean [34], contra-harmonic mean [35], and Heronian mean [36]. Two efficient derivative-based schemes were introduced by Rike and Imran in [37], where the arithmetic mean was used in the midpoint rule. These new derivative-based schemes were proved to be more effective than the original Newton–Cotes formulas in terms of error terms and approximate integral values. The literature thus shows that semi-open Newton–Cotes rules have received less attention from the perspective of derivative-based endpoint corrections to improve the accuracy and precision of the conventional rules.
Such improvements can prove more efficient than closed methods in dealing directly with integrals having an endpoint singularity. Consequently, derivative-based refinements of the basic semi-open Newton–Cotes rules will, on one hand, give rise to further studies on their application to the numerical approximation of higher-dimensional integrals, Riemann–Stieltjes integrals, and complex line integrals. On the other hand, more appropriate numerical techniques can be devised for the numerical solution of differential equations in one or more independent variables, as well as one-dimensional and boundary integral equations [15,16,17,18].
In this research, some new quadrature formulas that utilize derivatives to compute integrals are proposed and proven to be time-efficient and cost-effective. This is achieved by modifying the classical semi-open Newton–Cotes (SONC) rule through the introduction of first-order derivatives at all nodes, excluding the upper endpoint of the interval $[a, b]$. The proposed methods are proved to be more efficient than the classical SONC rules in terms of order of accuracy and degree of precision. To increase the accuracy of the new methods, the weights of the first-order derivatives of the function are used as additional parameters and are computed using the concept of precision through associated systems of linear equations. The theoretical development of the new methods, error analysis and exhaustive numerical experiments are used to demonstrate the performance of the proposed rules against the conventional semi-open Newton–Cotes rules. In parallel, the proposed modifications are also tested against Gauss–Legendre (GL) rules [29] of similar orders of accuracy on integrands of varying nature, including periodic and oscillatory integrands and those with derivative singularities. The proposed formulas guarantee a substantial reduction in computational cost and execution time for a fixed predefined error tolerance against SONC without derivatives, and compete well with the GL methods.

2. Derivation of Proposed Quadrature Formulas Using Derivatives

Let $f \in C^{2n+1}[a, b]$ be a real-valued function, and let the interval $[a, b]$ be subdivided into $n$ sub-intervals by the $n+1$ nodes $x_0, x_1, \ldots, x_n$.
The proposed formulas are based on the conventional semi-open Newton–Cotes rule but include additional derivative terms as perturbations that reduce the truncation errors without compromising the efficiency of the existing formulas. In fact, the new formulas raise the orders of accuracy and degrees of precision while promising reductions in execution time and computational overhead. The modifications are carried out in four ways, as explained below with the adopted notations.

2.1. Modified Semi-Open Newton–Cotes Rule-1 with Derivatives (MSONC1)

MSONC1 uses function values and first derivatives at all interior points of $[a, b]$, including the left endpoint. The derivation of MSONC1 is discussed first, and then its degree of precision is proved in Theorem 1. We propose a rule with greater precision than the classical semi-open Newton–Cotes rule by using the first-order derivative of the integrand at all points except the right endpoint of the interval $[a, b]$, whilst enhancing the order of accuracy. The basic form of the first proposed method (MSONC1) is:
$$\int_a^b f(x)\,dx \approx MSONC1 = (b-a)\,f(a) + \frac{(b-a)^2}{2} f'(a)$$
As the number of function evaluations in (6) is two, which is an even number, the precision of (6) is at most 1 (in general, the precision is at most $n-1$ when the number of evaluations $n$ is even and $n$ when $n$ is odd). This leads to the condition that (6) must be exact for all monomials $x^k$ of degree $k = 0, 1$. Therefore, a system of 2 equations in 2 unknowns is formed using first-order polynomials,
$$f(x) = a_0 + a_1 x$$
The general form of this scheme is:
$$\int_a^b f(x)\,dx \approx MSONC1 = c_0 f(a) + c_1 f'(a)$$
The two equations formed by this approach are:
$$\begin{cases} \text{For } f(x) = 1: & \displaystyle\int_a^b dx = b-a = c_0 \\ \text{For } f(x) = x: & \displaystyle\int_a^b x\,dx = \frac{b^2-a^2}{2} = c_0 a + c_1 \end{cases}$$
Solving the system (9) simultaneously for the weight coefficients, we obtain $c_0 = (b-a)$ and $c_1 = \frac{(b-a)^2}{2}$. Hence, the following quadrature rule is obtained:
$$\int_a^b f(x)\,dx \approx MSONC1 = (b-a)\,f(a) + \frac{(b-a)^2}{2} f'(a)$$
Theorem 1. 
The degree of precision of one-point MSONC1 is one.
Proof. 
To prove this theorem, we verify that the new method (8) is exact for $f(x) = 1, x$. The exact values are:
$$\int_a^b 1\,dx = b-a \quad\text{and}\quad \int_a^b x\,dx = \frac{b^2-a^2}{2}$$
Moreover, the approximate results using MSONC1 are:
$$\text{For } f(x)=1,\ MSONC1 = b-a \quad\text{and for } f(x)=x,\ MSONC1 = \frac{b^2-a^2}{2}$$
However, the approximate value using MSONC1 for $f(x) = x^2$ is not exact, i.e.,
$$\int_a^b x^2\,dx \approx MSONC1(x^2; a, b) = ab(b-a)$$
$$\int_a^b x^2\,dx = \frac{b^3-a^3}{3} \neq MSONC1(x^2; a, b)$$
which shows that the degree of precision of the proposed method MSONC1 is one. □
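For concreteness, the basic MSONC1 rule derived above can be written as a short Python helper (a sketch of ours, not code from the paper); `f` and `df` stand for the integrand and its first derivative:

```python
def msonc1(f, df, a, b):
    """Basic MSONC1 rule: (b - a) f(a) + (b - a)^2 / 2 * f'(a)."""
    return (b - a) * f(a) + (b - a) ** 2 / 2 * df(a)
```

Evaluating it on $f(x) = 1$ and $f(x) = x$ reproduces the exact integrals, while $f(x) = x^2$ is not integrated exactly, consistent with the degree of precision one proved in Theorem 1.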

2.2. Modified Semi-Open Newton–Cotes Rule-2 with Derivatives (MSONC2)

MSONC2 uses function values along with derivatives at each point $x_i \in [a, b)$. MSONC2 is a new rule with greater precision than the classical semi-open Newton–Cotes rule; it is obtained by using the first-order derivatives only at interior points of the interval $[a, b]$, whilst enhancing the order of accuracy. The basic form of the proposed method MSONC2 is:
$$\int_a^b f(x)\,dx \approx MSONC2 = (b-a)\,f(a) + \frac{(b-a)^2}{2} f'\!\left(\frac{a+b}{2}\right)$$
As the number of function evaluations in (13) is two, which is an even number, the precision of (13) is at most 1. This leads to the condition that (13) must be exact for all monomials $x^k$ of degree $k = 0, 1$. Therefore, a system of 2 equations in 2 unknowns is formed using first-order polynomials,
$$f(x) = a_0 + a_1 x$$
The general form of this scheme is:
$$\int_a^b f(x)\,dx \approx MSONC2 = c_0 f(a) + c_1 f'\!\left(\frac{a+b}{2}\right)$$
The two equations formed by this approach are:
$$\begin{cases} \text{For } f(x) = 1: & \displaystyle\int_a^b dx = b-a = c_0 \\ \text{For } f(x) = x: & \displaystyle\int_a^b x\,dx = \frac{b^2-a^2}{2} = c_0 a + c_1 \end{cases}$$
Solving system (16) simultaneously for the weight coefficients, we obtain:
$$c_0 = (b-a), \qquad c_1 = \frac{(b-a)^2}{2}$$
Hence, the following quadrature rule MSONC2 is obtained:
$$\int_a^b f(x)\,dx \approx MSONC2 = (b-a)\,f(a) + \frac{(b-a)^2}{2} f'\!\left(\frac{a+b}{2}\right)$$
Theorem 2 discusses the degree of precision of MSONC2.
Theorem 2. 
The degree of precision of one-point MSONC2 is one.
Proof. 
To prove this theorem, we verify that the new method MSONC2 is exact for $f(x) = 1, x$. The exact values are:
$$\int_a^b 1\,dx = b-a \quad\text{and}\quad \int_a^b x\,dx = \frac{b^2-a^2}{2}$$
Moreover, the approximate results using MSONC2 are:
$$\text{For } f(x)=1,\ MSONC2 = b-a \quad\text{and for } f(x)=x,\ MSONC2 = \frac{b^2-a^2}{2}$$
However, the approximate value using MSONC2 for $f(x) = x^2$ is not exact, i.e.,
$$MSONC2(x^2; a, b) = \frac{(b-a)(a^2+b^2)}{2}$$
$$\int_a^b x^2\,dx = \frac{b^3-a^3}{3} \neq MSONC2(x^2; a, b)$$
which shows that the degree of precision of the proposed method MSONC2 is one. □
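MSONC2 differs from MSONC1 only in where the derivative is sampled; a minimal Python sketch (ours, not code from the paper) makes the comparison explicit:

```python
def msonc2(f, df, a, b):
    """Basic MSONC2 rule: derivative taken at the interval midpoint."""
    return (b - a) * f(a) + (b - a) ** 2 / 2 * df((a + b) / 2)
```

On $[0, 2]$ with $f(x) = x^2$ it returns $(b-a)(a^2+b^2)/2 = 4$, matching the value in the proof of Theorem 2, whereas the exact integral is $8/3$.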

2.3. Modified Semi-Open Newton–Cotes Rule-3 with Derivatives (MSONC3)

MSONC3 uses function values over [ a , b ) and the first derivatives at the endpoints of the interval [ a , b ] only. The theoretical development of MSONC3 is detailed here, and results on the degree of precision are proved in Theorem 3.
MSONC3 is a new rule with greater precision than the classical semi-open Newton–Cotes rule. In this method, the evaluation of the first-order derivatives is restricted to the endpoints of the interval $[a, b]$ only, whilst enhancing the order of accuracy. The basic form of the proposed method MSONC3 is:
$$\int_a^b f(x)\,dx \approx MSONC3 = (b-a)\,f(a) + \frac{(b-a)^2}{6}\left[2 f'(a) + f'(b)\right]$$
As the number of function evaluations in (20) is three, which is an odd number, the precision of (20) is at most 2. This leads to the exactness condition on (20) for all monomials $x^k$ of degree $k = 0, 1, 2$. Therefore, a system of 3 equations in 3 unknowns is formed using second-order polynomials,
$$f(x) = a_0 + a_1 x + a_2 x^2$$
The general form of this scheme is:
$$\int_a^b f(x)\,dx \approx MSONC3 = c_0 f(a) + c_1 f'(a) + c_2 f'(b)$$
The three equations formed by this approach are:
$$\begin{cases} \text{For } f(x)=1: & \displaystyle\int_a^b dx = b-a = c_0 \\ \text{For } f(x)=x: & \displaystyle\int_a^b x\,dx = \frac{b^2-a^2}{2} = c_0 a + c_1 + c_2 \\ \text{For } f(x)=x^2: & \displaystyle\int_a^b x^2\,dx = \frac{b^3-a^3}{3} = c_0 a^2 + 2a c_1 + 2b c_2 \end{cases}$$
Solving system (23) simultaneously for the weight coefficients, we obtain:
$$c_0 = (b-a), \qquad c_1 = \frac{(b-a)^2}{3} \qquad\text{and}\qquad c_2 = \frac{(b-a)^2}{6}$$
The following quadrature rule MSONC3 is obtained:
$$\int_a^b f(x)\,dx \approx MSONC3 = (b-a)\,f(a) + \frac{(b-a)^2}{6}\left[2 f'(a) + f'(b)\right]$$
Theorem 3. 
The degree of precision of one-point MSONC3 is two.
Proof. 
To prove this theorem, we verify that the new method (24) is exact for $f(x) = 1, x, x^2$. The exact values are:
$$\int_a^b 1\,dx = b-a, \quad \int_a^b x\,dx = \frac{b^2-a^2}{2} \quad\text{and}\quad \int_a^b x^2\,dx = \frac{b^3-a^3}{3}$$
Moreover, the approximate results using MSONC3 are:
$$\text{For } f(x)=1,\ MSONC3 = b-a;\quad \text{for } f(x)=x,\ MSONC3 = \frac{b^2-a^2}{2};\quad \text{and for } f(x)=x^2,\ MSONC3 = \frac{b^3-a^3}{3}.$$
However, the approximate value using MSONC3 for $f(x) = x^3$ is not exact, i.e.,
$$\int_a^b x^3\,dx \approx MSONC3(x^3; a, b) = \frac{1}{2}\left[b^4 - 2ab^3 + 3a^2b^2 - 2a^3b\right]$$
$$\text{Moreover,}\quad \int_a^b x^3\,dx = \frac{b^4-a^4}{4} \neq MSONC3(x^3; a, b)$$
which shows that the degree of precision of the proposed method MSONC3 is two. □
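A Python sketch of the basic MSONC3 rule (our illustration, not code from the paper), using the endpoint derivatives with the weights just derived:

```python
def msonc3(f, df, a, b):
    """Basic MSONC3 rule: (b - a) f(a) + (b - a)^2 / 6 * (2 f'(a) + f'(b))."""
    return (b - a) * f(a) + (b - a) ** 2 / 6 * (2.0 * df(a) + df(b))
```

It reproduces $\int_0^1 x^2\,dx = 1/3$ exactly but misses $\int_0^1 x^3\,dx = 1/4$, consistent with the degree of precision two proved in Theorem 3.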

2.4. Modified Semi-Open Newton–Cotes Rule-4 with Derivatives (MSONC4)

MSONC4 uses function values over [ a , b ) and the first derivatives at all points of the interval [ a , b ] . The construction is explained first followed by the degree of precision of the proposed MSONC4 in Theorem 4.
A rule with greater precision than the classical semi-open Newton–Cotes rule is proposed by using the first-order derivative of the integrand at all interior points as well as the endpoints of the interval $[a, b]$, whilst enhancing the order of accuracy. The basic form of the proposed method MSONC4 is:
$$\int_a^b f(x)\,dx \approx MSONC4 = (b-a)\,f(a) + \frac{(b-a)^2}{6}\left[f'(a) + 2 f'\!\left(\frac{a+b}{2}\right) + 0 \times f'(b)\right]$$
As the number of function evaluations in (27) is four, which is an even number, the precision of (27) is at most 3. This leads to the condition that (27) must be exact for all monomials $x^k$ of degree $k = 0, 1, 2, 3$. Therefore, a system of 4 equations in 4 unknowns is formed using a third-order polynomial,
$$f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3$$
The general form of this scheme is:
$$\int_a^b f(x)\,dx \approx MSONC4 = c_0 f(a) + c_1 f'(a) + c_2 f'\!\left(\frac{a+b}{2}\right) + c_3 f'(b)$$
The four equations formed by this approach are:
$$\begin{cases} \text{For } f(x)=1: & \displaystyle\int_a^b dx = b-a = c_0 \\ \text{For } f(x)=x: & \displaystyle\int_a^b x\,dx = \frac{b^2-a^2}{2} = c_0 a + c_1 + c_2 + c_3 \\ \text{For } f(x)=x^2: & \displaystyle\int_a^b x^2\,dx = \frac{b^3-a^3}{3} = c_0 a^2 + 2a c_1 + 2 c_2 \left(\frac{a+b}{2}\right) + 2b c_3 \\ \text{For } f(x)=x^3: & \displaystyle\int_a^b x^3\,dx = \frac{b^4-a^4}{4} = c_0 a^3 + 3a^2 c_1 + 3 c_2 \left(\frac{a+b}{2}\right)^2 + 3b^2 c_3 \end{cases}$$
Solving the system (30) simultaneously for the weight coefficients, we obtain $c_0 = (b-a)$, $c_1 = \frac{(b-a)^2}{6}$, $c_2 = \frac{(b-a)^2}{3}$ and $c_3 = 0$.
Hence, the following numerical quadrature rule MSONC4 is obtained:
$$\int_a^b f(x)\,dx \approx MSONC4 = (b-a)\,f(a) + \frac{(b-a)^2}{6}\left[f'(a) + 2 f'\!\left(\frac{a+b}{2}\right) + 0 \times f'(b)\right]$$
Theorem 4. 
The degree of precision of one-point MSONC4 is three.
Proof. 
To prove this theorem, we verify that the new method (31) is exact for $f(x) = 1, x, x^2, x^3$. The exact values are:
$$\int_a^b 1\,dx = b-a, \quad \int_a^b x\,dx = \frac{b^2-a^2}{2}, \quad \int_a^b x^2\,dx = \frac{b^3-a^3}{3} \quad\text{and}\quad \int_a^b x^3\,dx = \frac{b^4-a^4}{4}$$
Moreover, the approximate results using MSONC4 are:
$$\text{For } f(x)=1,\ MSONC4 = b-a;\ \text{for } f(x)=x,\ MSONC4 = \frac{b^2-a^2}{2};\ \text{for } f(x)=x^2,\ MSONC4 = \frac{b^3-a^3}{3};\ \text{and } MSONC4(x^3; a, b) = \frac{b^4-a^4}{4}.$$
However, the approximate value using MSONC4 for f ( x ) = x 4 is not exact, i.e.,
$$MSONC4(x^4; a, b) = (b-a)\,a^4 + \frac{(b-a)^2}{6}\left[4a^3 + 8\left(\frac{a+b}{2}\right)^3\right]$$
Moreover,
$$\int_a^b x^4\,dx = \frac{b^5-a^5}{5} \neq MSONC4(x^4; a, b)$$
which shows that the degree of precision of the proposed method MSONC4 is three. □
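The basic MSONC4 rule can be sketched the same way (our illustration, not code from the paper); the zero weight of $f'(b)$ means the right-endpoint derivative need not be evaluated at all:

```python
def msonc4(f, df, a, b):
    """Basic MSONC4 rule:
    (b - a) f(a) + (b - a)^2 / 6 * (f'(a) + 2 f'((a + b) / 2));
    the weight of f'(b) is zero, so it is omitted."""
    return (b - a) * f(a) + (b - a) ** 2 / 6 * (df(a) + 2.0 * df((a + b) / 2))
```

It reproduces $\int_0^1 x^3\,dx = 1/4$ exactly but misses $\int_0^1 x^4\,dx = 1/5$, consistent with the degree of precision three proved in Theorem 4.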

3. Error Analysis of Proposed Quadrature Formulas

In this section, we derive the local and global error terms of all the proposed methods using the difference between the result of the modified quadrature rule applied to the monomial $\frac{x^{p+1}}{(p+1)!}$ and the exact value $\frac{1}{(p+1)!}\int_a^b x^{p+1}\,dx$, where $p$ is the precision of the method [12]. In the forthcoming theorems, the error terms are derived and the orders of accuracy of the proposed rules are established in basic form. Theorems 5–8 discuss the errors and accuracy of MSONC1–4 in basic form.
Theorem 5. 
The local error term of MSONC1 is:
$$E_{MSONC1} = \frac{(b-a)^3}{6} f''(\xi)$$
where $\xi \in (a, b)$, and the local order of accuracy is three.
Proof. 
As the proposed method MSONC1 is exact for all the monomials of degrees 0 and 1, the second-order term of the Taylor series of $f(x)$ about $x = x_0$ is:
$$f(x) = \frac{1}{2!}(x-x_0)^2 f''(x_0)$$
Using (35) the error term of MSONC1 can be represented as:
$$E_{MSONC1} = \left[\text{Exact}\left(\frac{x^2}{2!}; a, b\right) - MSONC1\left(\frac{x^2}{2!}; a, b\right)\right] f''(\xi)$$
The exact integral value is:
$$\text{Exact}\left(\frac{x^2}{2!}; a, b\right) = \frac{b^3-a^3}{6}$$
The approximate value of the integral by MSONC1 is:
$$MSONC1\left(\frac{x^2}{2!}; a, b\right) = \frac{ab(b-a)}{2}$$
Using the exact and approximate evaluations in (36), we have:
$$E_{MSONC1} = \frac{(b-a)^3}{6} f''(\xi)$$
Setting $h = (b-a)$, (39) becomes:
$$E_{MSONC1} = \frac{h^3}{6} f''(\xi)$$
where $\xi \in (a, b)$, and the order of accuracy of this method is three. □
Theorem 6. 
The local error term of MSONC2 is:
$$E_{MSONC2} = -\frac{(b-a)^3}{12} f''(\xi)$$
where $\xi \in (a, b)$, and the order of accuracy is three.
Proof. 
As the proposed method MSONC2 is exact for the monomials of degrees 0 and 1, the second-order term of the Taylor series of $f(x)$ about $x = x_0$ is:
$$f(x) = \frac{1}{2!}(x-x_0)^2 f''(x_0)$$
Using (42) the error term of MSONC2 can be represented as:
$$E_{MSONC2} = \left[\text{Exact}\left(\frac{x^2}{2!}; a, b\right) - MSONC2\left(\frac{x^2}{2!}; a, b\right)\right] f''(\xi)$$
The exact integral value is:
$$\text{Exact}\left(\frac{x^2}{2!}; a, b\right) = \frac{b^3-a^3}{6}$$
The approximate value of the integral by MSONC2 is:
$$MSONC2\left(\frac{x^2}{2!}; a, b\right) = \frac{(b-a)(a^2+b^2)}{4}$$
Using the exact and approximate evaluations in (43), we have:
$$E_{MSONC2} = -\frac{(b-a)^3}{12} f''(\xi)$$
For $(b-a) = h$, we have:
$$E_{MSONC2} = -\frac{h^3}{12} f''(\xi)$$
where $\xi \in (a, b)$, and the order of accuracy of this method is three. □
Theorem 7. 
The local error term of MSONC3 is:
$$E_{MSONC3} = -\frac{1}{24}(b-a)^4 f^{(3)}(\xi)$$
where $\xi \in (a, b)$, and the order of accuracy for MSONC3 is four.
Proof. 
As the proposed method MSONC3 is exact for the monomials of degrees 0, 1 and 2, the third-order term of the Taylor series of $f(x)$ about $x = x_0$ is:
$$f(x) = \frac{1}{3!}(x-x_0)^3 f^{(3)}(x_0)$$
Using (49) the error term of MSONC3 can be represented as:
$$E_{MSONC3} = \left[\text{Exact}\left(\frac{x^3}{3!}; a, b\right) - MSONC3\left(\frac{x^3}{3!}; a, b\right)\right] f^{(3)}(\xi)$$
The exact integral value is:
$$\text{Exact}\left(\frac{x^3}{3!}; a, b\right) = \frac{1}{3!}\left(\frac{b^4-a^4}{4}\right)$$
The approximate value of the integral by MSONC3 is:
$$MSONC3\left(\frac{x^3}{3!}; a, b\right) = \frac{1}{3!}\left(-a^3 b + \frac{3}{2} a^2 b^2 - a b^3 + \frac{1}{2} b^4\right)$$
Using the exact and approximate evaluations in (50), we have:
$$E_{MSONC3} = -\frac{1}{24}(b-a)^4 f^{(3)}(\xi)$$
For $h = (b-a)$, we have:
$$E_{MSONC3} = -\frac{1}{24} h^4 f^{(3)}(\xi)$$
where $\xi \in (a, b)$; hence, from (54), we conclude that the order of accuracy of MSONC3 is four. □
Theorem 8. 
The local error term of MSONC4 is:
$$E_{MSONC4} = \frac{1}{720}(b-a)^5 f^{(4)}(\xi)$$
where $\xi \in (a, b)$, and the order of accuracy for MSONC4 is five.
Proof. 
As MSONC4 is exact for the monomials of degrees 0, 1, 2 and 3, the fourth-order term of the Taylor series of $f(x)$ about $x = x_0$ is:
$$f(x) = \frac{1}{4!}(x-x_0)^4 f^{(4)}(x_0)$$
Using (56), the error term of MSONC4 can be represented as:
$$E_{MSONC4} = \left[\text{Exact}\left(\frac{x^4}{4!}; a, b\right) - MSONC4\left(\frac{x^4}{4!}; a, b\right)\right] f^{(4)}(\xi)$$
The exact integral value is:
$$\text{Exact}\left(\frac{x^4}{4!}; a, b\right) = \frac{1}{4!}\left(\frac{b^5-a^5}{5}\right)$$
The approximate value of the integral by MSONC4 is:
$$MSONC4\left(\frac{x^4}{4!}; a, b\right) = \frac{1}{4!}\left((b-a)\,a^4 + \frac{(b-a)^2}{6}\left[4a^3 + 8\left(\frac{a+b}{2}\right)^3\right]\right)$$
Using the exact and approximate evaluations in (57), we have:
$$E_{MSONC4} = \frac{1}{720}(b-a)^5 f^{(4)}(\xi)$$
For $h = (b-a)$, we have:
$$E_{MSONC4} = \frac{1}{720} h^5 f^{(4)}(\xi)$$
where $\xi \in (a, b)$, and the order of accuracy of this method is five. □
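The local orders stated in Theorems 5–8 can be probed numerically (an illustrative check of ours, not an experiment from the paper): applying a basic rule on a shrinking interval $[0, h]$ and halving $h$ should reduce the error by roughly $2^{\text{order}}$. A sketch for MSONC1, whose local error is $O(h^3)$:

```python
import math

def msonc1(f, df, a, b):
    """Basic MSONC1 rule: (b - a) f(a) + (b - a)^2 / 2 * f'(a)."""
    return (b - a) * f(a) + (b - a) ** 2 / 2 * df(a)

def local_order(rule, f, df, exact, h):
    """Estimate the local order of a basic rule from errors on [0, h] and [0, h/2]."""
    e1 = abs(exact(h) - rule(f, df, 0.0, h))
    e2 = abs(exact(h / 2) - rule(f, df, 0.0, h / 2))
    return math.log(e1 / e2, 2)

# f = exp, with exact(t) = e^t - 1 giving the true integral over [0, t]
order_msonc1 = local_order(msonc1, math.exp, math.exp, lambda t: math.exp(t) - 1.0, 0.1)
```

Here `order_msonc1` comes out close to 3, matching Theorem 5; swapping in the other basic rules and a higher derivative-order test function probes Theorems 6–8 in the same way.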

4. Results and Discussion

Here, various numerical tests are conducted on the proposed quadrature formulas MSONC1–4 with derivatives against the existing conventional SONC rules without derivatives, as well as the one-point and two-point Gauss–Legendre rules (GL1 and GL2); the tests confirm the validity of the theoretical results. While all the proposed methods are modifications of the one-point derivative-free SONC rule only, we have also included the GL2 rule in the comparison to analyze the improvement in the accuracy of the proposed rules. The GL methods have been programmed in composite form, like the other methods, to compare efficiency under similar conditions.
Ten numerical problems, taken from [4,7] with motivations on special integrands [22,23,26,27,28,29,30,38], have been solved with each scheme; their exact values were determined using MATLAB (R2014b) in double-precision arithmetic. All the results were obtained on an Intel(R) Core i5 laptop with 8.00 GB of RAM and a processing speed of 1.8 GHz. Additionally, the computational order of accuracy (COC) and the absolute errors are computed for all the integrals. The following integrals are analyzed to prove our results; the exact value with 15 decimal places is shown against each example. Examples 1–5 represent regular integrals involving polynomial, rational, exponential, logarithmic and trigonometric integrands. Example 6 represents an integrand with a derivative singularity, Examples 7 and 8 are periodic integrals defined over intervals of period length, and Examples 9 and 10 are more challenging cases concerning the evaluation of complicated and highly oscillatory integrals. These ten examples allow an exhaustive assessment of the computational efficiency of the proposed and existing methods in different situations.
Example 1. 
$$\int_0^1 x e^{-x}\,dx = 0.264241117657115$$
Example 2. 
$$\int_0^{\pi/4} \cos^2(x)\,dx = 0.642699081698724$$
Example 3. 
$$\int_0^1 \frac{1}{1+x}\,dx = 0.693147180559945$$
Example 4. 
$$\int_0^{\pi/4} e^{\cos(x)}\,dx = 1.939734850623649$$
Example 5. 
$$\int_0^1 \frac{x \ln(1+x)}{1+x^2}\,dx = 0.162865005917789$$
Example 6. 
$$\int_0^1 \sqrt{1-x^2}\,dx = 0.785398163397448$$
Example 7. 
$$\int_0^{2\pi} e^{\cos(x)}\,dx = 7.954926521012844$$
Example 8. 
$$\int_0^{2\pi} \left[1.1 + 2.3\cos x + 3.6\cos 2x - 4.32\cos 3x + 1.6\sin x - 2.35\sin 2x + 8.6\sin 3x\right] dx = 6.911503837897544$$
Example 9. 
$$\int_{-1}^{1} e^{\left(x + \sin\left(e^{\left(e^{(x + 1/3)}\right)}\right)\right)}\,dx = 1.281138806443303788097$$
Example 10. 
$$\int_0^{2\pi} x \cos(20x) \sin(50x)\,dx = -0.149599650170943$$
We compute and discuss the results of the comparative analysis of the methods in several ways to show the distinct aspects of the performance of the proposed methods. Following the derivation and theoretical error analysis of the previous section, where we observed the rapidly decreasing error patterns and distributions of the methods and the ascendance in precision degrees and orders of accuracy, we begin with the computational order of accuracy (COC), analyzed using the following formula defined in [4]:
$$\text{COC} = \frac{\ln\left(\left|N(2h) - N(0)\right| / \left|N(h) - N(0)\right|\right)}{\ln 2}$$
where $N(0)$ denotes the exact result, and $N(h)$ and $N(2h)$ are the numerical results of the definite integral with step sizes $h$ and $2h$, respectively.
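The COC formula above is straightforward to implement; a minimal Python version (ours — the function and variable names are illustrative, not from the paper):

```python
import math

def coc(n_2h, n_h, n_exact):
    """Computational order of accuracy from results at step sizes 2h and h."""
    return math.log(abs(n_2h - n_exact) / abs(n_h - n_exact)) / math.log(2.0)

# synthetic second-order data: N(h) = I + C h^2 with I = 1, C = 1, h = 0.1
estimated = coc(1.0 + 0.04, 1.0 + 0.01, 1.0)
```

For the synthetic second-order data the estimate comes out as 2; in practice the formula is applied to successive rows of the tabulated results.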
Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 9 and Table 10 show the computational order of accuracy (COC) of all the methods for Examples 1–10, respectively, against the number of strips ($m$), which also confirms the theoretical order of accuracy of the proposed methods. The columns under the heading SONC give the computational order of accuracy of the classical SONC quadrature rule, while the columns under MSONC1, MSONC2, MSONC3 and MSONC4 give the computational order of accuracy of the proposed derivative-based semi-open Newton–Cotes rules. The COC indices for the GL1 and GL2 rules are also worked out in these tables. In the case of Examples 1–5, the observed orders of accuracy of the modified derivative-based schemes MSONC1, MSONC2, MSONC3 and MSONC4 are 2, 2, 3 and 4, respectively, whereas the order of accuracy of the classical SONC method is 1 and those of the GL1 and GL2 rules are 2 and 4. This shows that the proposed methods MSONC1–2 are efficient in comparison to the conventional SONC rule and compete well with the GL1 rule; MSONC3 is higher-order accurate than the SONC and GL1 rules; and MSONC4 shows enhanced accuracy over all the others and competes well with the GL2 rule. Although the proposed rules are one-point methods, the evident enhancement in the approximation arises from the fact that derivatives are used as additional information. For Example 6, where the integrand has a derivative singularity, neither the proposed derivative-based rules nor the derivative-free SONC, GL1 and GL2 rules meet the expected order of accuracy, which is observed as 1 for SONC and 1.5 for all the others instead of 2, 3 or 4. This is because even the derivative-free methods contain the derivatives passively in their error terms; thus, singularities affect the convergence of all the methods.
Example 6 highlights the fact that derivative-based methods can behave similarly to derivative-free methods in such situations, whereas in cases without singularities the former show accelerated convergence. Examples 7 and 8 are periodic integrals defined over their full period. The proposed derivative-based MSONC1–4 and the conventional derivative-free SONC methods converge much faster than theory predicts, whereas the GL1 and GL2 methods take somewhat more effort to reach the same accuracy. Finally, for the highly nonlinear and oscillatory integrands of Examples 9 and 10, all methods show oscillatory error distributions. The integrand oscillates many times over the interval of integration, so as strips are added (equivalently, as the number of sub-intervals in the composite forms increases), the consequent errors are moderated by the oscillations. In these examples, the proposed methods still reach lower errors more frequently with increased strips than the GL1 and GL2 rules.
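The convergence boost that derivative information provides can be illustrated with a classical analogue of the proposed rules (not the MSONC formulas themselves, which are derived in the earlier sections): the Euler–Maclaurin-corrected trapezoidal rule, which adds only two end-point derivative evaluations yet raises the order of accuracy from 2 to 4:

```python
import math

def trapezoid(f, a, b, m):
    # Composite trapezoidal rule: second-order accurate.
    h = (b - a) / m
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, m)))

def corrected_trapezoid(f, df, a, b, m):
    # Euler-Maclaurin end correction: two derivative evaluations
    # raise the order of accuracy from 2 to 4.
    h = (b - a) / m
    return trapezoid(f, a, b, m) + h * h / 12.0 * (df(a) - df(b))

exact = math.e - 1.0  # integral of e^x over [0, 1]
err_plain = abs(trapezoid(math.exp, 0.0, 1.0, 64) - exact)
err_deriv = abs(corrected_trapezoid(math.exp, math.exp, 0.0, 1.0, 64) - exact)
print(err_deriv < err_plain)  # → True: derivative data gives a far smaller error
```

The same principle, a small number of derivative evaluations buying a higher order, underlies the accelerated convergence of the MSONC rules observed above.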
We compare the absolute error distributions of the new and conventional approaches against the number of strips. The absolute errors are computed with the following formula [1].
Absolute Error = | f*(x) − f(x) |
where f*(x) and f(x) represent the exact and approximate values of the integral, the latter obtained by the proposed methods. Figure 1, Figure 2, Figure 3, Figure 4 and Figure 5 present the absolute errors of the classical SONC, GL1 and GL2 rules and the modified derivative-based MSONC1–4 methods versus the number of strips for the first five integrals mentioned above. The figures show decreasing absolute error distributions for all the integrals; the trends depict the faster convergence of the proposed methods, and the observed orders of accuracy are consistent with the derived error terms. The new methods consistently produce lower errors than the original ones. In particular, MSONC4 performs best of all, even against the GL2 rule of the same order of accuracy (4), while MSONC2 and MSONC3 outperform the SONC and GL1 rules. For Example 6 (Figure 6), where the integrand has a derivative singularity, all applicable proposed methods maintain a similar improvement and exhibit lower absolute errors as the strips increase than the respective derivative-free SONC, GL1 and GL2 rules. For the periodic integrals of Examples 7 and 8, the error drops in Figure 7 and Figure 8 confirm that the proposed MSONC1–4 and the conventional SONC rules converge much faster than theoretically expected and reach double-precision accuracy in fewer strips than the GL1 and GL2 rules. This is because Newton–Cotes rules and their improvements suit periodic integrals better than methods that use the zeros of orthogonal polynomials as nodes. Figure 9 and Figure 10 show the expected oscillatory pattern of error drops for the integrals of Examples 9 and 10 for all methods. Nevertheless, the higher-order methods, whether or not they use derivatives, tend to hit lower errors in alternation.
MSONC4 and GL2 behave best in this respect in Examples 9 and 10, respectively. These last two examples highlight that when integrands are highly nonlinear and oscillatory, the decreasing error patterns are neither smooth nor stable; all interpolatory quadrature methods have to compromise on accuracy regardless of whether derivatives are used.
Finally, the total computational cost per integration step needed to achieve a pre-specified error tolerance, namely 1 × 10⁻⁷ for the regular examples and somewhat looser for the complicated integrals, is observed, and the average CPU time (in seconds) is also computed. Owing to a higher number of function evaluations per integration step, a quadrature rule may provide reasonable accuracy in fewer steps yet still be computationally more expensive and less effective than other approaches. Table 11 first summarizes, for the modified and existing methods, the total evaluations required per strip, from which the computational cost of each test problem is obtained. Table 12 and Table 13 list the total computational costs for the integrals of Examples 1–6 and Examples 7–10, respectively. The numerical results show that the computational cost of the proposed MSONC1–4 methods is lower than that of the conventional SONC and GL-1 methods for Examples 1–5, whereas MSONC4 and GL-2 are close in performance, with GL-2 taking a slight edge over MSONC4. It should be noted, however, that the error drops of MSONC4 are smaller than those of GL-2 in the same examples (Figure 1, Figure 2, Figure 3, Figure 4 and Figure 5). Thus, the proposed methods are cost-effective compared with the conventional ones, and the one-point MSONC4 competes well with the two-point GL-2 rule. For Example 6, Table 12 shows that MSONC1, 2 and 4 are more cost-efficient than all the derivative-free methods (SONC and GL-1,2); here, MSONC4 is computationally the best of all for the integral with a derivative singularity.
All the SONC versions, the existing derivative-free rule as well as the proposed derivative-based MSONC1–4, outperform the GL-1,2 rules for the periodic integrals of Examples 7 and 8 by achieving double-precision accuracy at lower cost, as shown in Table 13. For the oscillatory integrals of Examples 9 and 10 in Table 13, the SONC, GL-2 and MSONC1–4 methods perform similarly, with GL-2 and MSONC4 taking the edge in reaching one-decimal-place accuracy. For higher accuracy, however, all methods continue to show the oscillatory error drops already seen in Figure 9 and Figure 10.
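The bookkeeping behind the cost tables can be sketched as follows: multiply the per-strip evaluation counts of Table 11 by the number of strips needed to meet the tolerance. In this hedged sketch a generic midpoint rule stands in for the actual integrators; only the per-strip counts are taken from Table 11:

```python
import math

# Evaluations (function + derivative) per strip, from Table 11.
EVALS_PER_STRIP = {"SONC": 1, "MSONC1": 2, "MSONC2": 2,
                   "MSONC4": 3, "GL-1": 1, "GL-2": 2}

def midpoint(f, a, b, m):
    # Stand-in one-point integrator used only for this illustration.
    h = (b - a) / m
    return h * sum(f(a + (i + 0.5) * h) for i in range(m))

def cost_to_tolerance(f, exact, a, b, evals_per_strip, tol):
    # Double the number of strips until the absolute error meets tol,
    # then report the total evaluation count m * evals_per_strip.
    m = 1
    while abs(midpoint(f, a, b, m) - exact) > tol:
        m *= 2
    return m * evals_per_strip

# A one-point rule reaching 1e-7 on the integral of e^x over [0, 1].
print(cost_to_tolerance(math.exp, math.e - 1.0, 0.0, 1.0,
                        EVALS_PER_STRIP["SONC"], 1e-7))  # → 1024
```

A higher-order rule reaches the same tolerance with far fewer strips, which is why the MSONC entries in Table 12 are orders of magnitude below the SONC entries despite costing more evaluations per strip.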
Having established the computational ascendancy in terms of cost to achieve a preset error of at most 1 × 10⁻⁷ in the regular examples (and slightly looser in some), we now examine the execution times, i.e., the CPU times (in seconds) of the processor runtime in MATLAB for each method's code to meet the preset accuracy level. The execution times account for all evaluations, derivative as well as functional, and thus determine the time efficiency of the methods. A salient reason to examine execution times is to check whether the proposed rules with derivatives place a much greater burden on the processor, since derivatives can sometimes be more expensive to compute than the functions alone. As noted above, the proposed quadrature formulas require a lower total number of evaluations (functional as well as derivative) than the SONC and other conventional derivative-free rules in terms of computational cost. The execution times show that the processing power required for the functional and derivative evaluations of the proposed formulas is no greater a burden than that required by the conventional rules with their many functional evaluations. Table 14 and Table 15 list the CPU times (in seconds) for all methods for Examples 1–6 and 7–10, respectively. In this sense as well, the proposed formulas take ascendancy over the existing derivative-free one-point methods SONC and GL-1. Against the two-point GL-2 method, the results of the proposed MSONC1–4 compare well and are lower in some instances. Moreover, the average CPU time of the new techniques is smaller than that of the original SONC method and comparable with that of the GL-1,2 rules, which use special nodes at the zeros of Legendre polynomials, to achieve the same error.
Thus, the proposed methods are time-efficient as well, and the numerical evidence suggests that the derivative evaluations cost no more time than the usual nodal function evaluations.
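The timing methodology behind Tables 14 and 15 can be sketched as follows. The paper used MATLAB's timers; here Python's `time.perf_counter` is an assumed stand-in, and the integrand and strip count are illustrative:

```python
import math
import time

def midpoint(f, a, b, m):
    # One-point composite rule used here only as a timing workload.
    h = (b - a) / m
    return h * sum(f(a + (i + 0.5) * h) for i in range(m))

def average_cpu_time(task, runs=5):
    # Average execution time over several runs, analogous to the
    # averaged CPU seconds reported in Tables 14 and 15.
    start = time.perf_counter()
    for _ in range(runs):
        task()
    return (time.perf_counter() - start) / runs

t = average_cpu_time(lambda: midpoint(math.exp, 0.0, 1.0, 4096))
print(f"average time per run: {t:.2e} s")
```

Averaging over several runs smooths out scheduler noise, which matters when the per-call times are fractions of a second, as in the tables above.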

5. Conclusions

In this research, four efficient derivative-based quadrature formulas for computing integrals have been proposed as computationally efficient, cost-effective and time-efficient modifications of the conventional semi-open Newton–Cotes quadrature rule. The theoretical development of the methods was presented, along with proved theorems on the order of accuracy, degree of precision and error terms for all the proposed methods. An exhaustive comparison was carried out on several integrals from the literature involving regular, periodic, derivative-singular and oscillatory integrands. In addition, computational results in terms of computational cost, observed orders of accuracy, average CPU times (in seconds) and absolute error drops were determined for ten test integrals from the literature to compare all the proposed derivative-based methods MSONC1, MSONC2, MSONC3 and MSONC4 with the classical derivative-free SONC, GL-1 and GL-2 rules. The analysis of the results demonstrates the efficiency of the modified methods over the conventional ones. The main features are the cost-effectiveness and time efficiency of the proposed derivative-based methods, with enhanced theoretical properties over the existing derivative-free methods, not only for regular integrals but also for some special integrals.

Author Contributions

Conceptualization, S.M. and M.M.S.; methodology, S.M.; software, S.M.; validation, M.M.S. and M.S.C.; formal analysis, A.W.S.; investigation, S.M.; resources, M.M.S.; data curation, S.M.; writing—original draft preparation, S.M.; supervision, M.S.C.; project administration, A.W.S.; funding acquisition, S.M. All authors have read and agreed to the published version of the manuscript.

Funding

The authors received no specific funding for this study.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors are thankful to Michael Ruzhansky from the Department of Mathematics, University of Ghent, for his kind encouragement and cooperation.

Conflicts of Interest

The authors declare that they have no conflict of interest to report regarding the present study.

References

  1. Chapra, S.C. Applied Numerical Methods with MATLAB, 3rd ed.; McGraw Hill Education Private Ltd.: New Delhi, India, 2012; pp. 103–106.
  2. Atkinson, K. An Introduction to Numerical Analysis. In Interpolation Theory, 2nd ed.; John Wiley & Sons: New York, NY, USA, 1988; p. 170.
  3. Shaikh, M.M. Analysis of Polynomial Collocation and Uniformly Spaced Quadrature Methods for Second Kind Linear Fredholm Integral Equations—A Comparison. Turk. J. Anal. Number Theory 2019, 7, 91–97.
  4. Zafar, F.; Saleem, S.; Burg, C.O.E. New Derivative Based Open Newton-Cotes Quadrature Rules. Abstr. Appl. Anal. 2014, 2014, 109138.
  5. El-Mikkawy, M. A unified approach to Newton–Cotes quadrature formulae. Appl. Math. Comput. 2003, 138, 403–413.
  6. El-Mikkawy, M.E.A. On the Error Analysis Associated with the Newton-Cotes Formulae. Int. J. Comput. Math. 2002, 79, 1043–1047.
  7. Dehghan, M.; Masjed-Jamei, M.; Eslahchi, M. On numerical improvement of closed Newton–Cotes quadrature rules. Appl. Math. Comput. 2004, 165, 251–260.
  8. Dehghan, M.; Masjed-Jamei, M.; Eslahchi, M. On numerical improvement of open Newton–Cotes quadrature rules. Appl. Math. Comput. 2006, 175, 618–627.
  9. Dehghan, M.; Masjed-Jamei, M.; Eslahchi, M. The semi-open Newton–Cotes quadrature rule and its numerical improvement. Appl. Math. Comput. 2005, 171, 1129–1140.
  10. Hashemiparast, S.; Eslahchi, M.; Dehghan, M.; Masjed-Jamei, M. The first kind Chebyshev–Newton–Cotes quadrature rules (semi-open type) and its numerical improvement. Appl. Math. Comput. 2006, 174, 1020–1032.
  11. Hashemiparast, S.M.; Masjed-Jamei, M.; Eslahchi, M.R.; Dehghan, M. The second kind Chebyshev–Newton–Cotes quadrature rule (open type) and its numerical improvement. Appl. Math. Comput. 2006, 180, 605–613.
  12. Burg, C.O.E. Derivative-based closed Newton–Cotes numerical quadrature. Appl. Math. Comput. 2012, 218, 7052–7065.
  13. Burg, C.O.E.; Degny, E. Derivative-Based Midpoint Quadrature Rule. Appl. Math. 2013, 4, 228–234.
  14. Zhao, W.; Li, H. Midpoint Derivative-Based Closed Newton–Cotes Quadrature. Abstr. Appl. Anal. 2013, 2013, 492507.
  15. Zhao, W.; Zhang, Z. Derivative-Based Trapezoid Rule for the Riemann–Stieltjes Integral. Math. Probl. Eng. 2014, 2014, 874651.
  16. Memon, K.; Shaikh, M.M.; Chandio, M.S.; Shaikh, A.W. A Modified Derivative-Based Scheme for the Riemann–Stieltjes Integral. Sindh Univ. Res. J. (Sci. Ser.) 2020, 52, 37–40.
  17. Memon, K.; Shaikh, M.M.; Chandio, M.S.; Shaikh, A.W. An Efficient Four-Point Quadrature Rule for Reimann Stieljes Integral. J. Mech. Contin. Math. Sci. 2021, 16, 2454–7190.
  18. Malik, K.; Shaikh, M.M.; Chandio, M.S.; Shaikh, A.W. Error Analysis of Newton Cotes Cubature Rules. J. Mech. Contin. Math. Sci. 2020, 15, 2454–7190.
  19. Billingsley, P. Probability and Measure; John Wiley and Sons, Inc.: New York, NY, USA, 1995.
  20. Egghe, L. Construction of concentration measures for General Lorenz curves using Riemann–Stieltjes integrals. Math. Comput. Model. 2002, 35, 1149–1163.
  21. Rossberg, H.-J. Review of Kopp, P.E., Martingales and Stochastic Integrals; Cambridge University Press: Cambridge, UK, 1984. Z. Angew. Math. Mech. 1985, 65, 536.
  22. D'Ambrosio, R.; Scalone, C. Filon quadrature for stochastic oscillators driven by time-varying forces. Appl. Numer. Math. 2021, 169, 21–31.
  23. D'Ambrosio, R.; Scalone, C. Asymptotic Quadrature Based Numerical Integration of Stochastic Damped Oscillators. In Proceedings of the ICCSA 2021: Computational Science and Its Applications, Cagliari, Italy, 13–16 September 2021; LNCS 12950, pp. 622–629.
  24. Rudin, W. Functional Analysis; McGraw Hill Science: New York, NY, USA, 1991.
  25. Kanwal, R.P. Linear Integral Equations; Academic Press: New York, NY, USA, 1971; pp. 167–193.
  26. Iserles, A.; Nørsett, S.P. Efficient quadrature of highly oscillatory integrals using derivatives. Proc. R. Soc. A Math. Phys. Eng. Sci. 2005, 461, 1383–1399.
  27. Iserles, A.; Nørsett, S.P. On Quadrature Methods for Highly Oscillatory Integrals and Their Implementation. BIT Numer. Math. 2004, 44, 755–772.
  28. Khanamiryan, M. Quadrature methods for highly oscillatory linear and nonlinear systems of ordinary differential equations: Part I. BIT Numer. Math. 2008, 48, 743–761.
  29. Dahlquist, G.; Björck, Å. Numerical Methods; Courier Corporation: Chelmsford, MA, USA, 2003.
  30. Trefethen, L.N.; Weideman, J.A.C. The exponentially convergent trapezoidal rule. SIAM Rev. 2014, 56, 385–458.
  31. Ramachandran, T.; Parimala, R. Open Newton Cotes quadrature with midpoint derivative for integration of algebraic functions. Int. J. Res. Eng. Technol. 2015, 4, 430–435.
  32. Ramachandran, T.; Udayakumar, D.; Parimala, R. Geometric mean derivative-based closed Newton Cotes quadrature. Int. J. Pure Eng. Math. 2016, 4, 107–116.
  33. Ramachandran, T.; Udayakumar, D.; Parimala, R. Comparison of Arithmetic Mean, Geometric Mean and Harmonic Mean Derivative-Based Closed Newton Cotes Quadrature. Prog. Nonlinear Dyn. Chaos 2016, 4, 35–43.
  34. Ramachandran, T.; Parimala, R. Centroidal Mean Derivative-Based Closed Newton Cotes Quadrature. Int. J. Sci. Res. 2016, 5, 338–343.
  35. Rana, K. Harmonic Mean and Contra-Harmonic Mean Derivative-Based Closed Newton-Cotes Quadrature. Integr. J. Res. Arts Humanit. 2022, 2, 55–61.
  36. Ramachandran, T.; Udayakumar, D.; Parimala, R. Heronian mean derivative-based closed Newton Cotes quadrature. Int. J. Math. Arch. 2016, 7, 53–58.
  37. Marjulisa, R.; Imran, M.; Syamsudhuha. Arithmetic mean derivative based midpoint rule. Appl. Math. Sci. 2018, 12, 625–633.
  38. Ehrenmark, U.T. A three-point formula for numerical quadrature of oscillatory integrals with variable frequency. J. Comput. Appl. Math. 1988, 21, 87–99.
Figure 1. Absolute error drops versus number of strips for Example 1.
Figure 2. Absolute error drops versus number of strips for Example 2.
Figure 3. Absolute error drops versus number of strips for Example 3.
Figure 4. Absolute error drops versus number of strips for Example 4.
Figure 5. Absolute error drops versus number of strips for Example 5.
Figure 6. Absolute error drops versus number of strips for Example 6.
Figure 7. Absolute error drops versus number of strips for Example 7.
Figure 8. Absolute error drops versus number of strips for Example 8.
Figure 9. Absolute error drops versus number of strips for Example 9.
Figure 10. Absolute error drops versus number of strips for Example 10.
Table 1. Comparison of COC for Example 1.
m    SONC     MSONC1   MSONC2   MSONC3   MSONC4   GL1      GL2
1    NA       NA       NA       NA       NA       NA       NA
2    1.1376   2.1302   2.1168   3.0083   4.0442   1.9821   3.9866
4    1.0750   2.0692   2.0657   3.0062   4.0254   1.9955   3.9966
8    1.0391   2.0357   2.0347   3.0036   4.0135   1.9988   3.9991
16   1.0199   2.0181   2.0178   3.0019   4.0069   1.9997   3.9997
32   1.0101   2.0091   2.0090   3.0010   4.0035   1.9999   3.9999
64   1.0050   2.0045   2.0045   3.0005   4.0017   1.9999   4.0000
Table 2. Comparison of COC for Example 2.
m    SONC     MSONC1   MSONC2   MSONC3   MSONC4   GL1      GL2
1    NA       NA       NA       NA       NA       NA       NA
2    0.8932   2.1210   2.1363   2.9918   4.0824   2.0196   4.0213
4    0.9501   2.0650   2.0689   2.9932   4.0381   2.0048   4.0053
8    0.9757   2.0338   2.0348   2.9959   4.0183   2.0012   4.0013
16   0.9880   2.0173   2.0175   2.9978   4.0090   2.0003   4.0003
32   0.9940   2.0087   2.0088   2.9988   4.0044   2.0000   4.0000
64   0.9970   2.0044   2.0044   2.9994   4.0022   2.0000   3.9994
Table 3. Comparison of COC for Example 3.
m    SONC     MSONC1   MSONC2   MSONC3   MSONC4   GL1      GL2
1    NA       NA       NA       NA       NA       NA       NA
2    1.0785   2.1706   2.1274   2.9824   4.0256   1.9473   3.8512
4    1.0425   2.0962   2.0842   3.0041   4.0496   1.9856   3.9574
8    1.0219   2.0505   2.0474   3.0058   4.0356   1.9963   3.9888
16   1.0111   2.0258   2.0250   3.0038   4.0205   1.9990   3.9971
32   1.0056   2.0130   2.0128   3.0022   4.0109   1.9997   3.9992
64   1.0028   2.0065   2.0064   3.0011   4.0056   1.9999   3.9998
Table 4. Comparison of COC for Example 4.
m    SONC      MSONC1   MSONC2   MSONC3   MSONC4   GL1      GL2
1    NA        NA       NA       NA       NA       NA       NA
2    0.8893    2.1012   2.1127   3.0021   4.1402   2.0129   4.0219
4    0.9481    2.0531   2.0559   2.9965   4.0676   2.0032   4.0054
8    0.97481   2.0273   2.0280   2.9972   4.0335   2.0007   4.0013
16   0.9875    2.0139   2.0141   2.9983   4.0167   2.0001   4.0003
32   0.9938    2.0070   2.0070   2.9991   4.0083   2.0000   4.0000
64   0.9969    2.0035   2.0035   2.9995   4.0042   2.0000   3.9987
Table 5. Comparison of COC for Example 5.
m    SONC     MSONC1   MSONC2   MSONC3   MSONC4   GL1      GL2
1    NA       NA       NA       NA       NA       NA       NA
2    0.9598   2.5059   2.4841   3.0613   3.7301   1.8611   4.2112
4    0.9783   2.3138   2.3031   3.0168   3.8789   1.9718   4.0288
8    0.9891   2.1831   2.1792   3.0061   3.9424   1.9931   4.0067
16   0.9945   2.1003   2.0992   3.0025   3.9713   1.9983   4.0016
32   0.9972   2.0527   2.0524   3.0011   3.9856   1.9995   4.0004
64   0.9986   2.0271   2.0270   3.0005   3.9928   1.9998   4.0000
Table 6. Comparison of COC for Example 6.
m    SONC     MSONC1   MSONC2   MSONC4   GL1      GL2
1    NA       NA       NA       NA       NA       NA
2    0.7376   1.4833   1.4667   1.5227   1.4760   1.5144
4    0.8370   1.4834   1.4720   1.5114   1.4879   1.2175
8    0.8941   1.4887   1.4818   1.5058   1.4939   1.7940
16   0.9292   1.4932   1.4894   1.5029   1.4969   1.5019
32   0.9519   1.4962   1.4942   1.5015   1.4984   1.5010
64   0.9668   1.4979   1.4969   1.5007   1.4992   1.5005
Table 7. Comparison of COC for Example 7.
m    SONC      MSONC1    MSONC2    MSONC3    MSONC4    GL1       GL2
1    NA        NA        NA        NA        NA        NA        NA
2    5.6611    5.6611    5.66110   5.66110   5.6611    5.6030    5.5205
4    14.7461   14.7461   14.7461   14.7461   14.7461   14.7460   14.7458
8    29.3923   Exact     Exact     Exact     Exact     25.9918   26.7521
16   Exact     Exact     Exact     Exact     Exact     29.1521   Exact
32   Exact     Exact     Exact     Exact     Exact     Exact     Exact
64   Exact     Exact     Exact     Exact     Exact     Exact     Exact
Table 8. Comparison of COC for Example 8.
m    SONC      MSONC1   MSONC2    MSONC3   MSONC4   GL1      GL2
1    NA        NA       NA        NA       NA       NA       NA
2    8.79849   8.8762   10.4098   8.8762   9.5513   2.2230   1.0256
4    Exact     Exact    Exact     Exact    Exact    1.3561   2.5331
8    Exact     Exact    Exact     Exact    Exact    Exact    1.4084
16   Exact     Exact    Exact     Exact    Exact    Exact    Exact
32   Exact     Exact    Exact     Exact    Exact    Exact    Exact
64   Exact     Exact    Exact     Exact    Exact    Exact    Exact
Table 9. Comparison of COC for Example 9.
m    SONC      MSONC1    MSONC2    MSONC3    MSONC4   GL1      GL2
1    NA        NA        NA        NA        NA       NA       NA
2    0.4242    1.0814    0.9198    2.1786    0.6852   3.3588   0.7325
4    1.0956    0.2941    2.6744    1.0871    4.1346   3.3569   3.2343
8    2.3054    2.7365    1.1266    2.24420   0.1062   0.2746   0.7258
16   1.2447    0.3174    5.4726    4.59556   2.3525   0.3515   1.8474
32   1.15493   2.08493   0.6634    1.2380    1.4605   2.2956   2.6009
64   0.1039    0.5170    12.7859   1.0265    1.1917   0.0181   0.1329
Table 10. Comparison of COC for Example 10.
m    SONC         MSONC1   MSONC2    MSONC3   MSONC4    GL1      GL2
1    NA           NA       NA        NA       NA        NA       NA
2    8.4 × 10⁻¹⁴  2.0140   11.0010   4.3575   3.6161    2.0991   0.9050
4    1.4022       2.1120   1.4022    2.3730   2.3730    2.0999   0.1044
8    0.5370       2.2459   0.5370    3.2258   3.2258    0.7037   1.1193
16   7.8 × 10⁻²   4.4224   0.0784    0.2806   0.2806    1.9470   1.0007
32   1.4051       0.9833   1.40518   1.1234   1.12346   1.5658   0.8118
64   0.5459       2.4857   0.5459    0.9111   0.9111    0.5207   0.1913
Table 11. Computational costs for m strips of all methods.
Methods   Total
SONC      m
MSONC1    2m
MSONC2    2m
MSONC3    2m + 1
MSONC4    3m
GL-1      m
GL-2      2m
Table 12. Computational costs comparison to achieve 1 × 10⁻⁷ absolute error for Examples 1–6.
Methods   Example 1   Example 2   Example 3   Example 4   Example 5   Example 6
SONC      919,698     981,747     1,250,000   1,360,000   3,465,800   500,000
MSONC1    1290        1014        1118        1214        6468        516
MSONC2    912         718         792         860         4564        313
MSONC3    87          75          93          79          101         NA
MSONC4    18          21          27          182         117         48
GL-1      1292        1016        1120        1216        6481        8104
GL-2      16          14          20          14          184         380
Table 13. Computational costs comparison to achieve 1 × 10⁻¹⁵ (Examples 7 and 8) and 1 × 10⁻¹ (Examples 9 and 10).
Methods   Example 7   Example 8   Example 9   Example 10
SONC      7           3           7           7
MSONC1    7           3           8           16
MSONC2    7           3           10          11
MSONC3    7           3           9           9
MSONC4    7           3           4           9
GL-1      15          6           4           14
GL-2      14          18          4           4
Table 14. Average CPU time (in seconds) comparison to achieve 1 × 10⁻⁷ absolute error for Examples 1–6.
Methods   Example 1   Example 2   Example 3   Example 4   Example 5   Example 6
SONC      1.0190      1.0238      0.8856      1.0461      20.5804     140.08610
MSONC1    0.1623      0.1619      0.1074      0.1561      0.2571      3.07367
MSONC2    0.4316      0.4432      0.2968      0.4175      0.5388      1.8850
MSONC3    0.4229      0.4184      0.2976      0.4215      0.5191      NA
MSONC4    0.4360      0.4371      0.3180      0.4162      0.5442      1.3438
GL-1      0.5945      0.5055      0.5897      0.5470      0.7102      1.7778
GL-2      0.4524      0.4299      0.4288      0.4176      0.5424      1.7599
Table 15. Average CPU time (in seconds) comparison to achieve 1 × 10⁻² absolute error for Examples 7–10.
Methods   Example 7   Example 8   Example 9   Example 10
SONC      1.1232      2.4760      1.7046      1.9635
MSONC1    1.0804      2.5746      1.7105      1.5916
MSONC2    1.2569      2.5071      1.7613      1.9557
MSONC3    1.1261      2.4570      1.5616      1.8320
MSONC4    1.2377      2.4157      1.6502      1.8456
GL-1      1.2034      2.6656      1.8347      1.9282
GL-2      1.1181      2.0629      1.6647      1.8177
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
