Article

A Multiplicative Calculus Approach to Solve Applied Nonlinear Models

1 Department of Mathematics, Chandigarh University, Gharuan, Mohali 140413, Punjab, India
2 Mathematical Modelling and Applied Computation Research Group (MMAC), Department of Mathematics, King Abdulaziz University, P.O. Box 80203, Jeddah 21589, Saudi Arabia
* Author to whom correspondence should be addressed.
Math. Comput. Appl. 2023, 28(2), 28; https://doi.org/10.3390/mca28020028
Submission received: 30 December 2022 / Revised: 13 February 2023 / Accepted: 17 February 2023 / Published: 21 February 2023

Abstract: Problems such as population growth, continuous stirred tank reactors (CSTR), and ideal gases have been studied over the last four decades in the fields of medical science, engineering, and applied science, respectively. Some of the main motivations were to understand the patterns of such problems and how to obtain their solutions. With the help of applied mathematics, these problems can be converted or modeled as nonlinear expressions with similar properties. Then, the required solutions can be obtained by means of iterative techniques. In this manuscript, we propose a new iterative scheme for computing multiple roots (without prior knowledge of the multiplicity m) based on multiplicative calculus rather than standard calculus. The structure of our scheme stands on the well-known Schröder method and also retains the same convergence order. Some numerical examples are tested to find the roots of nonlinear equations, and the results are found to be competent compared with ordinary derivative methods. Finally, the new scheme is also analyzed by means of basins of attraction, which also support the theoretical aspects.

1. Introduction

In the seventeenth century, Newton and Leibniz created the concept of differential and integral calculus based on the addition and subtraction operations. Later, in the 1970s, Grossman and Katz [1] developed a different definition of differential and integral calculus that utilizes multiplication and division instead of addition and subtraction. This form of calculus is called multiplicative calculus. In 2008, Bashirov et al. [2] contributed to multiplicative calculus and its applications. Since then, several authors have worked on applications of multiplicative calculus in different areas such as biology [3], science and finance [4], biomedical sciences [5], economic growth [6], etc. From the above discussion, we can say that the multiplicative calculus approach plays an important role in the field of applied sciences [7,8,9,10,11,12,13,14,15,16,17].
In 2016 and 2020, Özyapıcı et al. [18] and Özyapıcı [19] suggested new ways to solve nonlinear equations with the help of the multiplicative calculus approach (MCA). The numerical results of these methods [18,19] were found to be much better than those of iterative techniques based on the standard calculus approach. In these studies [18,19], the researchers focused only on the simple roots of nonlinear equations. They did not discuss multiple roots, because finding the multiple roots of nonlinear expressions is a more complicated and challenging task than finding simple roots: one has to retain the convergence order while dealing with lengthy and complicated calculations, a more complex structure of the iterative method, and its computational efficiency. In addition, most iterative methods for multiple roots require prior knowledge of the multiplicity m, which is not practically possible to obtain in advance. To the best of our knowledge, no iterative method based on the multiplicative calculus approach for multiple roots of nonlinear equations is available in the literature.
Keeping these things in mind, we propose a new iterative technique, based on the MCA, for multiple roots with unknown multiplicity. To the best of our knowledge, this is the first scheme based on the multiplicative calculus approach that can handle multiple roots. In addition, our scheme does not require prior knowledge of the multiplicity m. The structure of our scheme stands on the well-known Schröder method. We compare our scheme with existing methods on the basis of the absolute error difference between two consecutive iterations, the order of convergence, the number of iterations, CPU timing, graphs of the absolute errors, and bar graphs. We found that our method performs much better in all of these comparisons. Finally, we study the basins of attraction of our method, which also support the numerical results.
The rest of the paper is organized as follows: Section 2 states the proposed multiplicative method; Section 3 presents the convergence analysis of the suggested method; Section 4 demonstrates the experimental work with the newly constructed scheme; Section 5 is devoted to the graphical analysis of the new method; and Section 6 gives the concluding remarks.

1.1. Some Basic Terminologies

Definition 1
([2]). The nonlinear function g : Ω ⊂ R → R is multiplicative differentiable (with multiplicative derivative g*) at x or on Ω if it is positive and differentiable at x or on Ω, and it is defined as
g^{*}(x) = \frac{d^{*}g}{dx} = \lim_{h \to 0} \left( \frac{g(x+h)}{g(x)} \right)^{\frac{1}{h}} = e^{\left( \ln g(x) \right)'} = e^{\frac{g'(x)}{g(x)}}.   (1)
In a similar pattern, the higher-order multiplicative derivative is defined as
g^{**}(x) = e^{\left( \ln g^{*}(x) \right)'} = e^{\left( \ln g(x) \right)''},   (2)
and, more generally,
g^{*(n)}(x) = e^{\left( \ln g(x) \right)^{(n)}}, \quad n = 0, 1, 2, \ldots   (3)
where (\ln g)^{(n)}(x) denotes the n-th ordinary derivative of \ln g(x). Note in Equation (3) that n = 0 means that no multiplicative derivative is taken, and the formula returns the original function, g^{*(0)}(x) = e^{\ln g(x)} = g(x).
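To make the definition concrete, here is a minimal numerical sketch (not from the paper; the test function g(x) = x² + 1, the point x = 2, and the step size h are illustrative assumptions) checking that the limit in Equation (1) and the closed form e^{g'(x)/g(x)} agree:

```python
import math

def g(x):
    # Illustrative positive test function (an assumption, not from the paper)
    return x**2 + 1

def mult_derivative_limit(g, x, h=1e-6):
    # Ratio-based approximation of the limit in Equation (1): g*(x) ~ (g(x+h)/g(x))**(1/h)
    return (g(x + h) / g(x)) ** (1.0 / h)

def mult_derivative_closed(g, x, h=1e-6):
    # Closed form g*(x) = exp((ln g)'(x)) = exp(g'(x)/g(x)),
    # with g'(x) approximated by a central difference
    dg = (g(x + h) - g(x - h)) / (2.0 * h)
    return math.exp(dg / g(x))

x0 = 2.0
print(mult_derivative_limit(g, x0))   # ~ exp(4/5) ~ 2.2255
print(mult_derivative_closed(g, x0))  # same value, up to discretization error
```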

1.2. Some Results on Multiplicative Differentiation

Consider g and h to be multiplicative differentiable functions and ψ an ordinary differentiable function. Let c be a positive constant; then, we have the following rules (the product rule is verified symbolically in the sketch after this list):
  • (c)^{*} = 1
  • (cg)^{*}(x) = g^{*}(x)
  • (gh)^{*}(x) = g^{*}(x)\, h^{*}(x)
  • \left( \frac{g}{h} \right)^{*}(x) = \frac{g^{*}(x)}{h^{*}(x)}
  • (g^{\psi})^{*}(x) = g^{*}(x)^{\psi(x)} \cdot g(x)^{\psi'(x)}
  • (g \circ \psi)^{*}(x) = g^{*}(\psi(x))^{\psi'(x)}
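As a quick sanity check of the product rule above, the following SymPy sketch (illustrative only; the functions g = x² + 1 and h = eˣ are assumptions, not taken from the paper) verifies that (gh)* = g* h*:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
g = x**2 + 1     # illustrative positive functions (assumptions)
h = sp.exp(x)

# multiplicative derivative f*(x) = exp((ln f)'(x))
mstar = lambda f: sp.exp(sp.diff(sp.log(f), x))

lhs = sp.simplify(mstar(g * h))
rhs = sp.simplify(mstar(g) * mstar(h))
print(sp.simplify(lhs / rhs))  # prints 1, i.e., (g h)* = g* h*
```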
Definition 2.
Suppose g : Ω ⊂ R → R⁺ is a positive nonlinear function. Then, the multiplicative nonlinear equation is defined as
g(x) = 1.   (4)
Theorem 1
([20]). Let g : Ω ⊂ R → R be (n + 1) times multiplicative differentiable in an open interval Ω. Then, for any x, x + a ∈ Ω, there exists a number η ∈ (0, 1) such that
g(x + a) = \prod_{l=0}^{n} \left( g^{*(l)}(x) \right)^{\frac{a^{l}}{l!}} \cdot \left( g^{*(n+1)}(x + \eta a) \right)^{\frac{a^{n+1}}{(n+1)!}}.   (5)
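The product form of the expansion in Equation (5) is easy to check numerically. The sketch below is an illustration only: it assumes g(x) = e^{sin x} (so that (ln g)^{(l)} can be written down by hand) and compares the degree-n multiplicative Taylor value with the exact g(x + a):

```python
import math

def lng_derivs(x, n):
    # derivatives of ln g = sin(x): the cycle sin, cos, -sin, -cos repeats
    cycle = [math.sin(x), math.cos(x), -math.sin(x), -math.cos(x)]
    return [cycle[l % 4] for l in range(n + 1)]

x, a, n = 0.4, 0.3, 6
product = 1.0
for l, d in enumerate(lng_derivs(x, n)):
    # g^{*(l)}(x) = exp((ln g)^{(l)}(x)), raised to the power a^l / l!
    product *= math.exp(d) ** (a**l / math.factorial(l))

print(product)                    # degree-n multiplicative Taylor value
print(math.exp(math.sin(x + a)))  # exact g(x + a); the two agree to about 1e-7
```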

2. Proposed Schemes

Here, we consider the well-known Schröder method, defined as
x_{k+1} = x_k - \frac{g(x_k)\, g'(x_k)}{\left( g'(x_k) \right)^2 - g(x_k)\, g''(x_k)}, \quad k = 0, 1, 2, \ldots   (6)
We replace the function g(x_k) and its ordinary derivatives g'(x_k) and g''(x_k) in the method (6) with ln g(x_k) and the multiplicative derivatives ln g*(x_k) and ln g**(x_k), respectively, and obtain the following iterative method to solve the multiplicative nonlinear equation:
Multiplicative Schröder Method (MSM)
x_{k+1} = x_k - \frac{\ln g(x_k)\, \ln g^{*}(x_k)}{\left( \ln g^{*}(x_k) \right)^2 - \ln g(x_k)\, \ln g^{**}(x_k)}, \quad k = 0, 1, 2, \ldots   (7)
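The numerical work in Section 4 was carried out in Mathematica 11; the Python sketch below is only an illustration of the two iterations, not the authors' code (the function names, tolerance, and iteration cap are assumptions). It uses the identities ln g*(x) = (ln g)'(x) = g'(x)/g(x) and ln g**(x) = (ln g)''(x), so the update (7) is Schröder's formula (6) applied to ln g:

```python
import math

def schroder(g, dg, d2g, x, tol=1e-12, maxit=100):
    """Classical Schröder method (6) for g(x) = 0."""
    for k in range(1, maxit + 1):
        gx, dgx, d2gx = g(x), dg(x), d2g(x)
        step = gx * dgx / (dgx**2 - gx * d2gx)
        x -= step
        if abs(step) < tol:
            break
    return x, k

def mult_schroder(g, dg, d2g, x, tol=1e-12, maxit=100):
    """Multiplicative Schröder method (7) for the multiplicative equation g(x) = 1.
    g must stay positive along the iteration."""
    for k in range(1, maxit + 1):
        gx = g(x)
        L  = math.log(gx)          # ln g(x_k)
        L1 = dg(x) / gx            # ln g*(x_k)  = (ln g)'(x_k)
        L2 = d2g(x) / gx - L1**2   # ln g**(x_k) = (ln g)''(x_k)
        step = L * L1 / (L1**2 - L * L2)
        x -= step
        if abs(step) < tol:
            break
    return x, k
```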

3. Convergence Analysis

Theorem 2.
Assume that g : Ω ⊂ R → R⁺ is a sufficiently multiplicative differentiable function with a multiplicative root r_1 in an open interval Ω. Then, whenever x_0 is sufficiently close to r_1, the multiplicative Schröder scheme (7) has quadratic convergence.
Proof. 
Let r_1 be a multiplicative root of the function g(x), so that g(r_1) = 1. Since the function g(x) is sufficiently multiplicative differentiable, by using Equation (5) and the error e_k = r_1 - x_k, we have
g(r_1) = 1 = g(x_k) \cdot g^{*}(x_k)^{e_k} \cdot g^{**}(x_k)^{\frac{e_k^2}{2}} \cdot g^{***}(c_1)^{\frac{e_k^3}{6}},   (8)
g(r_1) = 1 = g(x_k) \cdot g^{*}(x_k)^{e_k} \cdot g^{**}(c_2)^{\frac{e_k^2}{2}},   (9)
where c_1 and c_2 lie between r_1 and x_k. Now, raising both sides of (8) to the power \ln g^{*}(x_k) gives
1 = g(x_k)^{\ln g^{*}(x_k)} \cdot g^{*}(x_k)^{\ln g^{*}(x_k)\, e_k} \cdot g^{**}(x_k)^{\frac{\ln g^{*}(x_k)}{2}\, e_k^2} \cdot g^{***}(c_1)^{\frac{\ln g^{*}(x_k)}{6}\, e_k^3},   (10)
and raising both sides of (9) to the power e_k \ln g^{**}(x_k) gives
1 = g(x_k)^{e_k \ln g^{**}(x_k)} \cdot g^{*}(x_k)^{\ln g^{**}(x_k)\, e_k^2} \cdot g^{**}(c_2)^{\frac{\ln g^{**}(x_k)}{2}\, e_k^3}.   (11)
Dividing (10) by (11) gives
g(x_k)^{\ln g^{*}(x_k)} \cdot \left( \frac{g^{*}(x_k)^{\ln g^{*}(x_k)}}{g(x_k)^{\ln g^{**}(x_k)}} \right)^{e_k} \cdot \left( \frac{g^{**}(x_k)^{\frac{\ln g^{*}(x_k)}{2}}}{g^{*}(x_k)^{\ln g^{**}(x_k)}} \right)^{e_k^2} \cdot \left( \frac{g^{***}(c_1)^{\frac{\ln g^{*}(x_k)}{6}}}{g^{**}(c_2)^{\frac{\ln g^{**}(x_k)}{2}}} \right)^{e_k^3} = 1.   (12)
Taking the natural logarithm on both sides of (12) and using the properties of the logarithm, one obtains
\ln g^{*}(x_k) \ln g(x_k) + \ln\!\left( \frac{g^{*}(x_k)^{\ln g^{*}(x_k)}}{g(x_k)^{\ln g^{**}(x_k)}} \right) e_k + \ln\!\left( \frac{g^{**}(x_k)^{\frac{\ln g^{*}(x_k)}{2}}}{g^{*}(x_k)^{\ln g^{**}(x_k)}} \right) e_k^2 + O(e_k^3) = 0,
\ln g^{*}(x_k) \ln g(x_k) + \left( \ln g^{*}(x_k) \ln g^{*}(x_k) - \ln g(x_k) \ln g^{**}(x_k) \right) e_k + \left( \frac{\ln g^{**}(x_k) \ln g^{*}(x_k)}{2} - \ln g^{*}(x_k) \ln g^{**}(x_k) \right) e_k^2 + O(e_k^3) = 0,
\ln g^{*}(x_k) \ln g(x_k) + \left( \left( \ln g^{*}(x_k) \right)^2 - \ln g(x_k) \ln g^{**}(x_k) \right) e_k - \frac{\ln g^{*}(x_k) \ln g^{**}(x_k)}{2}\, e_k^2 + O(e_k^3) = 0.   (13)
Rearranging the terms of the Equation (13), we have
\frac{\ln g(x_k)\, \ln g^{*}(x_k)}{\left( \ln g^{*}(x_k) \right)^2 - \ln g(x_k)\, \ln g^{**}(x_k)} = -e_k + \frac{e_k^2}{2} \cdot \frac{\ln g^{*}(x_k)\, \ln g^{**}(x_k)}{\left( \ln g^{*}(x_k) \right)^2 - \ln g(x_k)\, \ln g^{**}(x_k)} + O(e_k^3).   (14)
Now, using e_k = r_1 - x_k and subtracting both sides of Equation (7) from the root r_1, we obtain
r_1 - x_{k+1} = r_1 - x_k + \frac{\ln g(x_k)\, \ln g^{*}(x_k)}{\left( \ln g^{*}(x_k) \right)^2 - \ln g(x_k)\, \ln g^{**}(x_k)},
e_{k+1} = e_k - e_k + e_k^2\, B + O(e_k^3),
e_{k+1} = B\, e_k^2 + O(e_k^3),   (15)
where B = \frac{1}{2} \cdot \frac{\ln g^{*}(x_k)\, \ln g^{**}(x_k)}{\left( \ln g^{*}(x_k) \right)^2 - \ln g(x_k)\, \ln g^{**}(x_k)}.
Hence, technique (7) has quadratic convergence. □

4. Experimental Work

In this section, some experiments are performed with our iterative method, and it is compared with existing methods of similar convergence order. We contrast our multiplicative Schröder method (MSM) with the well-known classical Schröder method (SM) (6). In addition, we also compare it with the modified Newton's method (MNM) [21], which is defined as
x_{k+1} = x_k - m\, \frac{g(x_k)}{g'(x_k)}.   (16)
The method (16) requires prior knowledge of the multiplicity m of the required root. All the numerical work has been conducted using Mathematica 11. For the ordinary derivative case, the stopping criterion is |g(x_k)| < 10^{-50}, and in the multiplicative derivative case, |g_1(x_k) - 1| < 10^{-50}. The iteration index k, the CPU time, and the consecutive iteration error |x_{k+1} - x_k| are presented in Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6. Finally, the approximate computational order of convergence (ACOC) ρ is calculated with the following formula:
\rho = \frac{\ln \left( \frac{|x_{p+1} - x_p|}{|x_p - x_{p-1}|} \right)}{\ln \left( \frac{|x_p - x_{p-1}|}{|x_{p-1} - x_{p-2}|} \right)}, \quad \text{for each } p = 2, 3, \ldots   (17)
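A small helper for evaluating this formula from a stored list of iterates (an illustrative sketch; the function name and list-based interface are assumptions, not the authors' code):

```python
import math

def acoc(xs):
    """Approximate computational order of convergence (17).
    xs is the list of iterates x_0, x_1, ...; at least four entries are needed."""
    rhos = []
    for p in range(2, len(xs) - 1):
        num = math.log(abs(xs[p + 1] - xs[p]) / abs(xs[p] - xs[p - 1]))
        den = math.log(abs(xs[p] - xs[p - 1]) / abs(xs[p - 1] - xs[p - 2]))
        rhos.append(num / den)
    return rhos
```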
Remark 1.
The expression m(±n) stands for m × 10^{±n} in all the tables.
Example 1.
Firstly, we consider the population growth model, which leads to the following nonlinear function:
g(x) = \frac{1000}{1564}\, e^{x} + \frac{435}{1564\, x} \left( e^{x} - 1 \right) - 1.
In this model, we determine the birth rate, denoted by x, of a population that initially contains 1000 thousand individuals, with 435 thousand more moving into the area during the first year and 1564 thousand individuals present at the end of that year. The computed results for the required zero x_r = 0.1009979 are displayed in Table 1. Clearly, the method MSM demonstrates better results in terms of consecutive error, number of iterations, and CPU time in comparison with the existing ones.
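For readers who wish to reproduce this example outside Mathematica, the classical Schröder iteration (6) on this function can be sketched in a few lines of Python (the hand-written derivatives, the 10-iteration cap, and the tolerance are assumptions of this illustration; the starting point x0 = 1 follows Table 1):

```python
import math

# Population-growth function of Example 1 and its first two derivatives
def g(x):
    return 1000/1564 * math.exp(x) + 435/(1564*x) * (math.exp(x) - 1) - 1

def dg(x):
    return 1000/1564 * math.exp(x) + 435/1564 * (math.exp(x)/x - (math.exp(x) - 1)/x**2)

def d2g(x):
    return (1000/1564 * math.exp(x)
            + 435/1564 * (math.exp(x)/x - 2*math.exp(x)/x**2 + 2*(math.exp(x) - 1)/x**3))

# Classical Schröder iteration (6), started from x0 = 1 as in Table 1
x = 1.0
for k in range(10):
    step = g(x)*dg(x) / (dg(x)**2 - g(x)*d2g(x))
    x -= step
    if abs(step) < 1e-14:
        break
print(x)  # ~0.1009979, the birth rate reported in the text
```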
Example 2.
Here, we study the nonlinear problem g(x) = x e^{x^2} - \sin^2(x) + 3\cos(x) - 4, which has a zero x_r = 1.06513. The evaluated results are demonstrated in Table 2. From the results in Table 2, we can say that our method is faster than the existing methods, since our scheme converges to the required root in only four iterations, compared with the seven and eight required by the others. In addition, our scheme has the lowest absolute error difference and CPU time among the mentioned methods.
Example 3.
Now, we test the methods on the continuous stirred tank reactor problem, which was converted into the following mathematical expression by Douglas [22]:
\kappa\, \frac{2.98\, (s + 2.25)}{(s + 1.45)(s + 2.85)^2 (s + 4.35)} = -1.
Here, κ denotes the gain of the proportional controller. The control system is stable for appropriate values of κ; however, when κ = 0, the poles of the open-loop transfer function are obtained as the zeros of the following nonlinear function:
g(x) = x^4 + 11.50\, x^3 + 47.49\, x^2 + 83.06325\, x + 51.23266875.
The function g(x) has the zero -2.85 with multiplicity m = 2. The outcomes of the suggested method are reported in Table 3, and the results are equally competent compared with those of MNM and SM.
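Under the coefficient values written above, the double zero can be confirmed quickly with NumPy (an illustrative check, not part of the paper):

```python
import numpy as np

# Coefficients of g(x) = x^4 + 11.50x^3 + 47.49x^2 + 83.06325x + 51.23266875
coeffs = [1.0, 11.50, 47.49, 83.06325, 51.23266875]
print(np.roots(coeffs))  # expect -4.35, -1.45, and -2.85 twice (a double root may
                         # appear as a nearly-conjugate complex pair due to rounding)
```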
Example 4.
Next, we consider the Van der Waals equation of state [23], which describes the characteristics of a real gas, and reduce it to the following mathematical expression:
g(x) = x^3 - 5.22\, x^2 + 9.0825\, x - 5.2675.
One of its zeros, x r = 1.75 , has multiplicity m = 2 . The performance of different iterative schemes has been shown in Table 4 and one can easily conclude that the proposed method M S M converges much faster to the root than the other methods M N M and S M .
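The multiplicity of this zero can also be confirmed symbolically; in the sketch below (illustrative only, not the authors' code) the coefficients are entered as exact rationals so that SymPy can factor the cubic:

```python
import sympy as sp

x = sp.symbols('x')
g = x**3 - sp.Rational(522, 100)*x**2 + sp.Rational(90825, 10000)*x - sp.Rational(52675, 10000)
print(sp.factor(g))
# factors as (4*x - 7)**2 * (25*x - 43) / 10000, i.e., a double zero at 7/4 = 1.75
# and a simple zero at 43/25 = 1.72
```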
Example 5.
Eigenvalues play a significant role in linear algebra and in many applications of image processing. However, evaluating the eigenvalues of a matrix of larger size can be a tough task. So, here, we focus on finding the eigenvalues of the following ninth-order matrix:
B = \frac{1}{8}
\begin{bmatrix}
 12 &  0 &  0 & 19 & 19 &  76 & 19 & 18 & 437 \\
 64 & 24 &  0 & 24 & 24 &  64 &  8 & 32 & 376 \\
 16 &  0 & 24 &  4 &  4 &  16 &  4 &  8 &  92 \\
 40 &  0 &  0 & 10 & 50 &  40 &  2 & 20 & 242 \\
  4 &  0 &  0 &  1 & 41 &   4 &  1 &  2 &  25 \\
 40 &  0 &  0 & 18 & 18 & 104 & 18 & 20 & 462 \\
 84 &  0 &  0 & 29 & 29 &  84 & 21 & 42 & 501 \\
 16 &  0 &  0 &  4 &  4 &  16 &  4 & 16 &  92 \\
  0 &  0 &  0 &  0 &  0 &   0 &  0 &  0 &  24
\end{bmatrix}.
The characteristic equation of matrix B forms the following polynomial function:
g_5(x) = x \left( x^8 - 29x^7 + 349x^6 - 2261x^5 + 8455x^4 - 17663x^3 + 15927x^2 + 6993x - 24732 \right) + 12960.
This function has a zero x = 3 of multiplicity m = 4. Table 5 reports the results of the proposed scheme, which compare favorably with the available techniques in terms of error, order of convergence, and CPU time. Although MSM takes the same number of iterations, it does so with the lowest CPU time and the smallest error.
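The stated multiplicity can be checked directly from the expanded form of g_5 (using the signs reconstructed above): the polynomial and its first three derivatives vanish at x = 3, while the fourth derivative does not. A short NumPy sketch (illustrative, not the authors' code):

```python
import numpy as np

# g5(x) expanded: x^9 - 29x^8 + 349x^7 - 2261x^6 + 8455x^5 - 17663x^4
#                 + 15927x^3 + 6993x^2 - 24732x + 12960
p = np.poly1d([1, -29, 349, -2261, 8455, -17663, 15927, 6993, -24732, 12960])

q = p
for m in range(4):
    print(m, q(3.0))   # p(3), p'(3), p''(3), p'''(3) all evaluate to 0
    q = q.deriv()
print(4, q(3.0))       # the 4th derivative is nonzero, so the multiplicity is 4
```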
Example 6.
Lastly, we applied the proposed methods to the clustering problem defined as
g_6(x) = (x - 1)^{120} (x - 2)^{150} (x - 3)^{100} (x - 4)^{55}.
The function g_6(x) has the zeros 1, 2, 3, and 4 with multiplicities 120, 150, 100, and 55, respectively. In this example, we approximate the zero 1 of multiplicity 120. The numerical results are reported in Table 6.
Remark 2.
The graphical error analysis of Examples 1 to 6 is shown in Figure 1. It is clear from all subfigures of Figure 1 that our method reduces the error faster than the existing methods. Similarly, iteration comparisons of the different existing methods with our method are given in Figure 2. Clearly, the proposed method converges to the root in fewer iterations than the other schemes.

5. Basin of Attraction

The concept of the basin of attraction confirms the convergence to all the possible roots of a nonlinear equation within a specified rectangular region. So, we also present dynamical planes [24] of the modified Newton's method (MNM), the ordinary Schröder method (SM), and the multiplicative Schröder method (MSM) for different initial values in the rectangular region [-2.5, 2.5] × [-2.5, 2.5]. We have chosen three problems to compare the basins of attraction of these three methods. Each image is plotted over a grid of 256 × 256 complex points (256 along each of the real and imaginary axes) used as initial guesses. If an initial point does not converge to a root within the tolerance 10^{-3}, it is plotted in black; otherwise, different colors are used to represent convergence to the different roots.
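For readers who want to reproduce a plot of this kind, the sketch below colors the basins of the classical Schröder iteration for p(z) = z² − 1 on the same grid and tolerance described above; it is an illustrative approximation of the procedure (256 × 256 starting points, 50 iterations, Matplotlib's default colormap), not the exact code behind Figures 3-5:

```python
import numpy as np
import matplotlib.pyplot as plt

def schroder_step(z):
    # One classical Schröder step for p(z) = z**2 - 1
    p, dp, d2p = z**2 - 1, 2*z, 2.0
    return z - p * dp / (dp**2 - p * d2p)

x_vals = np.linspace(-2.5, 2.5, 256)
y_vals = np.linspace(-2.5, 2.5, 256)
Z = x_vals[None, :] + 1j * y_vals[:, None]   # grid of initial guesses

roots = np.array([1.0, -1.0])
basin = np.full(Z.shape, -1)                 # -1 = not converged (plotted black in the paper)
for _ in range(50):
    Z = schroder_step(Z)                     # points where the denominator vanishes become NaN
for i, r in enumerate(roots):
    basin[np.abs(Z - r) < 1e-3] = i          # index of the root each starting point reached

plt.imshow(basin, extent=[-2.5, 2.5, -2.5, 2.5], origin="lower")
plt.title("Basins of attraction, Schröder method, $z^2 - 1$")
plt.show()
```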
Example 7.
The scalar function z^2 - 1 has the zeros {-1, 1}. In Figure 3, the pink and yellow colors represent convergence to the zeros, and the black color represents divergence. It is clear that the proposed method approaches the desired zeros.
Example 8.
The nonlinear function z^3 - 1, having the zeros {1, e^{2\pi i/3}, e^{4\pi i/3}}, is tested, and the basins of attraction are shown in Figure 4. The divergence area is very small for MSM.
Example 9.
Lastly, the basins of attraction of the nonlinear function z^3 + z, with zeros {0, i, -i}, are shown in Figure 5. It is clear that the method SM has a larger divergence area in comparison with the proposed method.

6. Conclusions

In this paper, we proposed a new iterative method with the help of the MCA. Schröder's iterative method and multiplicative derivatives are the two main pillars of our scheme. We studied the convergence analysis of the presented method. The suggested scheme does not require prior knowledge of the multiplicity m. In addition, we also provide a more efficient solution to the population growth, continuous stirred tank reactor (CSTR), ideal gas, and academic problems compared with the existing solutions.
We compared our technique with others on the basis of (i) the absolute error difference between two consecutive iterations, (ii) the order of convergence, (iii) the number of iterations, (iv) CPU timing, (v) graphs of the absolute errors, and (vi) bar graphs. In all six of these comparisons, we found that our method performs much better than the existing methods. Finally, we studied the basins of attraction, whose findings also support the numerical results. In future work, we will focus on multi-point iterative methods for multiple roots as well as for systems of nonlinear equations. This direction opens up a wide range of new iterative methods.

Author Contributions

Conceptualization, S.B. and R.B.; Methodology, G.S., S.B. and R.B.; Software, S.B. and R.B.; Validation, G.S.; Writing—original draft, G.S., S.B. and R.B.; Supervision, S.B. and R.B. All authors have read and agreed to the published version of the manuscript.

Funding

The Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia has funded this project under grant no.(KEP-MSc-58-130-1443).

Acknowledgments

The Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia has funded this project under grant no.(KEP-MSc-58-130-1443). The authors, therefore, acknowledge with thanks DSR for technical and financial support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Grossman, M.; Katz, R. Non-Newtonian Calculus; Lee Press: Pigeon Cove, MA, USA, 1972.
  2. Bashirov, A.E.; Kurpınar, E.M.; Özyapıcı, A. Multiplicative calculus and its applications. J. Math. Anal. Appl. 2008, 337, 36–48.
  3. Englehardt, J.; Swartout, J.; Loewenstine, C. A new theoretical discrete growth distribution with verification for microbial counts in water. Risk Anal. 2009, 29, 841–856.
  4. Bashirov, A.E.; Mısırlı, E.; Tandoğdu, Y. On modeling with multiplicative differential equations. Appl. Math. J. Chin. Univ. 2011, 26, 425–438.
  5. Florack, L.; Assen, H.V. Multiplicative calculus in biomedical image analysis. J. Math. Imaging Vis. 2012, 42, 64–75.
  6. Filip, D.A.; Piatecki, C. A Non-Newtonian Examination of the Theory of Exogenous Economic Growth; Laboratoire d'Économie d'Orléans: Orléans, France, 2010.
  7. Narayanaswamy, M.K.; Jagan, K.; Sivasankaran, S. Go-MoS2/water flow over a shrinking cylinder with Stefan blowing, Joule heating, and thermal radiation. Math. Comput. Appl. 2022, 27, 110.
  8. Sivasankaran, S.; Pan, K.L. Natural convection of nanofluids in a cavity with nonuniform temperature distributions on side walls. Numer. Heat Transf. Part A Appl. 2014, 65, 247–268.
  9. Sivasankaran, S.; Bhuvaneswari, M.; Alzahrani, A.K. Numerical simulation on convection of non-Newtonian fluid in a porous enclosure with non-uniform heating and thermal radiation. Alex. Eng. J. 2020, 59, 3315–3323.
  10. Sivanandam, S.; Chamkha, A.J.; Mallawi, F.O.M.; Alghamdi, M.S.; Alqahtani, A.M. Effects of entropy generation, thermal radiation and moving-wall direction on mixed convective flow of nanofluid in an enclosure. Mathematics 2020, 8, 1471.
  11. Jagan, K.; Sivasankaran, S. Three-dimensional non-linearly thermally radiated flow of Jeffrey nanoliquid towards a stretchy surface with convective boundary and Cattaneo–Christov flux. Math. Comput. Appl. 2022, 27, 98.
  12. Sivasankaran, S.; Ho, C.J. Effect of temperature-dependent properties on MHD convection of water near its density maximum in a square cavity. Int. J. Therm. Sci. 2008, 47, 1184–1194.
  13. Sivasankaran, S.; Sivakumar, V.; Prakash, P. Numerical study on mixed convection in a lid-driven cavity with non-uniform heating on both sidewalls. Int. J. Heat Mass Transf. 2010, 53, 4304–4315.
  14. Sivasankaran, S.; Malleswaran, A.; Lee, J.; Sundar, P. Hydro-magnetic combined convection in a lid-driven cavity with sinusoidal boundary conditions on both sidewalls. Int. J. Heat Mass Transf. 2011, 54, 512–525.
  15. Bhuvaneswari, M.; Sivasankaran, S.; Kim, Y.J. Magneto convection in a square enclosure with sinusoidal temperature distributions on both side walls. Numer. Heat Transf. Part A Appl. 2011, 59, 167–184.
  16. Bhuvaneswari, M.; Sivasankaran, S.; Kim, Y.J. Numerical study on double diffusive mixed convection with a Soret effect in a two-sided lid-driven cavity. Numer. Heat Transf. Part A Appl. 2011, 59, 543–560.
  17. Sivasankaran, S.; Sivakumar, V.; Hussein, A.K. Numerical study on mixed convection in an inclined lid-driven cavity with discrete heating. Int. Commun. Heat Mass Transf. 2013, 46, 112–125.
  18. Özyapıcı, A.; Sensoy, Z.B.; Karanfiller, T. Effective root-finding methods for nonlinear equations based on multiplicative calculi. J. Math. 2016, 2016, 8174610.
  19. Özyapıcı, A. Effective numerical methods for non-linear equations. Int. J. Appl. Comput. Math. 2020, 6, 1–8.
  20. Cumhur, I.; Gokdogan, A.; Unal, E. Multiplicative Newton's methods with cubic convergence. New Trends Math. Sci. 2017, 3, 299–307.
  21. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964.
  22. Douglas, J.M. Process Dynamics and Control; Prentice Hall: Englewood Cliffs, NJ, USA, 1972.
  23. Behl, R.; Bhalla, S.; Magreñán, Á.A.; Moysi, A. An optimal derivative free family of Chebyshev–Halley's method for multiple zeros. Mathematics 2021, 9, 546.
  24. Chicharro, F.I.; Cordero, A.; Torregrosa, J.R. Drawing dynamical and parameters planes of iterative families and methods. Sci. World J. 2013, 2013, 780153.
Figure 1. Graphical error analysis.
Figure 2. Iteration analysis.
Figure 3. Dynamical planes of new and existing methods for Example 7.
Figure 4. Dynamical planes of new and existing methods for Example 8.
Figure 5. Dynamical planes of new and existing methods for Example 9.
Table 1. Convergence behavior of the methods MNM, SM, and MSM at the initial approximation x0 = 1.

Scheme   k   |x^(k+1) - x^(k)|   ρ       Total Iterations   CPU Time (s)
MNM      2   3.7(-2)             2.000   5                  0.203
         3   6.7(-4)
         4   2.1(-7)
SM       2   1.1(-1)             2.006   6                  0.157
         3   5.8(-3)
         4   1.6(-5)
MSM      2   1.2(-5)             2.001   4                  0.062
         3   6.4(-12)
         4   1.8(-24)
Table 2. Convergence behavior of the methods MNM, SM, and MSM at the initial approximation x0 = 0.75.

Scheme   k   |x^(k+1) - x^(k)|   ρ       Total Iterations   CPU Time (s)
MNM      2   2.1(-1)             2.938   8                  0.078
         3   2.1(-1)
         4   1.9(-1)
SM       2   9.1(-2)             2.000   7                  0.109
         3   4.3(-2)
         4   6.0(-3)
MSM      2   3.7(-3)             1.998   4                  0.062
         3   1.1(-5)
         4   8.7(-11)
Table 3. Convergence behavior of the methods MNM, SM, and MSM at the initial approximation x0 = -2.5.

Scheme   k   |x^(k+1) - x^(k)|   ρ       Total Iterations   CPU Time (s)
MNM      2   8.0(-6)             1.903   4                  0.093
         3   1.5(-12)
         4   5.6(-26)
SM       2   1.6(-4)             2.187   4                  0.078
         3   5.8(-10)
         4   8.1(-21)
MSM      2   3.4(-4)             2.266   4                  0.125
         3   2.7(-9)
         4   1.7(-19)
Table 4. Convergence behavior of the methods MNM, SM, and MSM at the initial approximation x0 = 1.9.

Scheme   k   |x^(k+1) - x^(k)|   ρ       Total Iterations   CPU Time (s)
MNM      2   9.0(-3)             1.995   6                  0.078
         3   1.1(-3)
         4   2.0(-5)
SM       2   1.7(-3)             2.050   5                  0.094
         3   5.4(-5)
         4   5.0(-8)
MSM      2   1.2(-3)             2.033   5                  0.062
         3   2.7(-5)
         4   1.3(-8)
Table 5. Convergence behavior of the methods MNM, SM, and MSM at the initial approximation x0 = 31/10.

Scheme   k   |x^(k+1) - x^(k)|   ρ       Total Iterations   CPU Time (s)
MNM      2   2.0(-6)             2.000   4                  0.156
         3   9.2(-13)
         4   2.0(-25)
SM       2   2.5(-6)             2.000   4                  0.163
         3   1.5(-12)
         4   5.5(-25)
MSM      2   8.7(-7)             2.000   4                  0.125
         3   1.8(-13)
         4   7.6(-27)
Table 6. Convergence behavior of the methods MNM, SM, and MSM at the initial approximation x0 = 9/10.

Scheme   k   |x^(k+1) - x^(k)|   ρ       Total Iterations   CPU Time (s)
MNM      2   3.6(-4)             2.000   4                  0.093
         3   2.4(-7)
         4   1.0(-13)
SM       2   4.4(-4)             2.000   4                  0.188
         3   3.5(-7)
         4   2.2(-13)
MSM      2   4.4(-4)             2.000   4                  0.156
         3   3.5(-7)
         4   2.2(-13)
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
