Article

Resolution of Initial Value Problems of Ordinary Differential Equations Systems

by Josep Vicent Arnau i Córdoba 1 and Màrius Josep Fullana i Alfonso 2,*

1 Departament de Matemàtica Aplicada, Universitat de València, Avinguda Vicent Andrés Estellés, s/n, 46100 Valencia, Spain
2 Institut Universitari de Matemàtica Multidisciplinària, Universitat Politècnica de València, Camí de Vera, s/n, 46022 Valencia, Spain
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(4), 593; https://doi.org/10.3390/math10040593
Submission received: 18 December 2021 / Revised: 31 January 2022 / Accepted: 1 February 2022 / Published: 14 February 2022
(This article belongs to the Special Issue New Trends and Developments in Numerical Analysis)

Abstract: In this work, we present some techniques applicable to Initial Value Problems when solving a System of Ordinary Differential Equations (ODE). Such techniques should be used when applying adaptive step-size numerical methods. In our case, a Runge-Kutta-Fehlberg algorithm (RKF45) has been employed, but the procedure presented here can also be applied to other adaptive methods, such as those used in N-body problems (AP3M or similar schemes). By doing so, catastrophic cancellations were eliminated. A mathematical optimization was carried out by introducing the objective function into the ODE System (ODES). Rescaling of the local errors was also utilised in order to address the problem. This rescaling implies the use of certain variables to adjust the integration step, while the other variables are used as parameters to determine the coefficients of the ODE system. The rescaling was carried out by using the asymptotic solution of this system. The change of variables is necessary to guarantee the stability of the integration. Therefore, the linearization of the ODES is possible and can be used as a powerful control test. All these tools are applied to a physical problem. The example we present here is the effective numerical resolution of Lemaître-Tolman-Bondi space-time solutions of Einstein Equations.

1. Introduction

In mathematics, a dynamical system is a system in which a function describes the time-dependence of a point in a geometrical space. Numerous examples can be found in the literature, applied to a wide variety of fields in science and technology. In order to make a prediction about the system’s future behaviour, an analytical solution of the governing equations or their integration over time through computer simulations must be carried out. When modelling such a dynamical system to investigate certain aspects of its behaviour, we end up generating a system of ordinary differential equations (ODES), partial differential equations (PDE), and algebraic equations (AE), whose analytical solution is unknown in the vast majority of cases. We need to resort to numerical methods for ODES which allow us to evaluate approximate solutions within a certain tolerance after fixing the indeterminate constants of the system, either by initial values (IVP or initial value problem) or boundary conditions (boundary Cauchy problem). In any case, we will find a large number of numerical methods at our disposal that can be divided into two large groups:
(i) The predictor-corrector type (also called Adams methods), either implicit or explicit. These are usually multistep methods which we can use when we want to increase the order of the method (that is, its precision), and they usually require a uniform step spacing x_{i+1} = x_i + h (prior discretization of the interval [x_0, x_N = x_0 + Nh]). These are more efficient methods in the sense that they usually require fewer evaluations of the ODES, but it is clear that if the value at the new node depends on the k previous values x_{k-1}, ..., x_0, it will be necessary to generate or impose more initial conditions, and it is not very clear whether we can still consider the integration of the ODES as an IVP.
(ii) The Runge–Kutta (RK) type [1], also explicit or implicit, are one-step methods and manage to increase their order by evaluating at more intermediate points of the interval [x_i, x_i + h]. In this way, we can say that the value at x_i + h depends only on the value at x_i, so we are dealing with an IVP in each subinterval. This feature of the RK methods allows us to use a variable h (different in each step) and adapt them to the particular ODES to achieve the same estimated error order in each step. See the Ref. [2].
Consequently, our first choice was between an Adams method or an RK. We opted for the RKF45, or explicit Runge-Kutta-Fehlberg 4(5) [3], which is one of the most widely used adaptive step methods. However, as we intended to also address more realistic simulations with density defects compensated by a wall at a certain distance, we implemented the Cash–Karp variant [4], which allowed us to choose between an RKF45 of order O(h^5) or one of variable order. In such a case, rapid changes in the gradient of some functions must be expected and must therefore be taken into account, so as to be prepared for such an eventuality.
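To make the adaptive step-size idea concrete, the following minimal sketch (in Python) shows the generic accept/reject logic used by embedded Runge–Kutta pairs such as RKF45 or Cash–Karp. For brevity it uses the low-order Heun–Euler pair rather than the actual 4(5) tableau, and the tolerance, safety factor and step limits are illustrative assumptions, not the values used in our codes.

```python
# Minimal sketch of the accept/reject logic behind adaptive embedded Runge-Kutta
# pairs such as RKF45 or Cash-Karp.  The low-order Heun-Euler pair replaces the
# real 4(5) tableau here; the step-size control is the same in spirit.
import numpy as np

def embedded_step(f, t, y, h):
    """One step giving two estimates of different order and their difference."""
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    y_low = y + h * k1                 # order 1 (Euler)
    y_high = y + 0.5 * h * (k1 + k2)   # order 2 (Heun)
    return y_high, y_high - y_low      # higher-order solution + local error estimate

def integrate_adaptive(f, t0, tf, y0, tol=1e-10, h0=1e-3):
    t, y, h = t0, np.asarray(y0, dtype=float), h0
    while (tf - t) * np.sign(h) > 0:
        h = np.sign(h) * min(abs(h), abs(tf - t))      # do not overshoot tf
        y_new, err = embedded_step(f, t, y, h)
        scale = tol * (1.0 + np.abs(y))                # mixed absolute/relative tolerance
        ratio = max(np.max(np.abs(err) / scale), 1e-16)
        if ratio <= 1.0:                               # error acceptable: accept the step
            t, y = t + h, y_new
        # shrink or grow h from the error ratio (safety factor 0.9, order exponent 1/2)
        h *= min(5.0, max(0.2, 0.9 * ratio ** -0.5))
    return t, y

# usage: y' = -y, y(0) = 1, whose exact solution at t = 1 is exp(-1)
print(integrate_adaptive(lambda t, y: -y, 0.0, 1.0, [1.0]))
```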
Once our dynamical system is modelled, we must proceed to establish the objective function that models the particular aspect that we want to simulate for investigating its evolution and be in a position to predict future values (meteorology, evolution of the pandemic, volcanology, behaviour of a structure in the face of earthquakes, and so forth.). The fields of application are multiple, and the accuracy of the predictions depends on the accuracy of the model and the rigour of the tests required for the applied integration method.
Nowadays, there are applications of all kinds that include sophisticated solve, odeint, or similar commands, such as MAPLE [5], MATLAB [6], Mathematica [7] or Python [8], among others. These tools require writing our system in a certain way and assigning values to certain parameters that configure the method to be applied. After a while, they print results and/or graphs which have to be interpreted without clearly knowing what has been done, why the method has changed in certain cases, and so forth, and it is therefore difficult to make modifications to the integrating algorithm. At most, we can change our dynamical system or try to vary its parameters in a series of trials and errors that we hope will provide us with a more or less stable response. Then, we calculate means, variances, and other statistical properties which are published. Of course they make our work easier, but we must always keep in mind that those applications or commands are quite generic, and they have not been written for our specific problem. They will be very sophisticated, but it is very difficult for them to be optimal and to give us control of the process. What is very fast is usually not very reliable. If we aim to take control over the evolution of our numerical simulation, some tools of Numerical Analysis must be taken into account whenever we face the resolution of a Dynamical System. Such tools are described in Section 2. When solving Dynamical Systems, the convergence and stability analysis of the problem must also be considered; see, for instance [9,10]. In the example we present here, such a study was also undertaken.
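As an illustration of the kind of generic command usage referred to above, the short sketch below hands a toy system to SciPy's solve_ivp with its built-in RK45 method; only the tolerances are exposed to the user, and everything else (step control, rejection policy, error norm) stays hidden inside the library. The system and tolerance values are illustrative.

```python
# Typical "black box" usage: a few lines hand the whole integration to a generic
# adaptive solver, exposing little more than the tolerances.
from scipy.integrate import solve_ivp

def rhs(t, y):
    # harmonic oscillator y'' = -y written as a first-order system
    return [y[1], -y[0]]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], method="RK45", rtol=1e-10, atol=1e-12)
print(sol.t.size, sol.y[:, -1])   # number of accepted time points and final state
```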
In this paper, we solve the ODE (Initial Value Problem) associated with the gravitational effect of a pressureless, spherically symmetric perturbation on free photons, which are described within the General Relativity Theory (GRT) and follow null geodesics. These null geodesics describe the Cosmic Microwave Background (CMB) Radiation. We numerically compute the anisotropies on the CMB caused by a great inhomogeneity in the universe [11,12,13,14,15,16,17,18]. A detailed explanation of the main physical results and where they were published is presented in Section 8. Some important definitions related to our problem are presented in Appendix A.
Examples of modern physical problems where analytic solutions of differential equations are not possible, such as the one treated in the present paper, are found in quantum systems. Many-body quantum systems appear and have to be solved by applying advanced algorithms [19,20,21]. The ODEs are the Schrödinger and Dirac equations [20,22,23]. In such kinds of problems, the consideration of the finite size of nucleons and nuclei requires the use of advantageous numerical techniques [20,21,23]. Nowadays, there is increasing interest in dealing with numerical solutions of wave functions for N-body problems using machine learning methods [24,25,26,27].
Other kinds of problems which also deal with the development of numerical integration schemes for physical systems are based on variational theory. The integration methods known as discrete variational integrators, which treat the discretization of Hamilton’s principle, can be found in the said theory. It is worth noting that the preservation of the geometric features of the dynamical system makes these methods appropriate for both conservative and nearly dissipative systems [28,29,30,31]. Systems that present highly oscillatory behaviour [28] are among the most difficult challenges when solving ODEs. Here, standard numerical methods require a huge number of (time) steps to trace the oscillations, while geometric integrators are advantageous tools [32]. Geometric integrators allow highly reliable simulations for long-term integration processes, avoiding spurious effects [32,33,34,35,36,37]. It is interesting to derive numerical integration schemes that respect the geometric characteristics of such systems [38,39].
In a totally different field, such as image processing, the Ref. [40] deals with the blind deconvolution of an image, looking for the convolution kernel using fractional powers of the Laplacian (see e.g., [41,42] and references therein), which turns the method into a dynamic regularization system comparable to our rescaled variables, although ODES are replaced by successive FFTs (Fast Fourier Transforms). Likewise, the Ref. [43] deals with the restoration of images by means of non-linear Gaussian approximations on the luminance of the image.
The example we present here is the effective numerical resolution of Lemaître-Tolman-Bondi (LTB) space-time solutions of Einstein Equations (EE); see [44]. These techniques can also be used in other LTB applications, so knowing them is of great interest. In the near future, we plan to explain similar techniques that we have applied to N-body algorithms in the cosmological treatment of large-scale structure, such as in the Hydra project (see [45,46]). In addition, we intend to share some of the important features of the algorithms we have designed for relativistic positioning systems (see [47]), which we are currently working on. All of these and similar techniques are powerful tools for other astrophysics, physics, and engineering issues in general. We find it appropriate to present them so that they may be used, where appropriate, in the improvement of numerical methods for solving systems of equations both in our field and in science and technology in general.
The paper is organized in the following sections. Section 2 describes some previous considerations about the techniques we apply. Section 3 introduces the main features of the physical problem that is solved with our numerical and computational techniques. Section 4 gives details of the EE solutions used in the cosmological problem solved. In Section 5, the numerical algorithm CS is described. Section 6 deals with other applications of the CS algorithm. Section 7 introduces the numerical resolution of the null geodesic equations that describe the CMB anisotropies in a LTB universe. In this section, our particular application of the RKF algorithm is also described. Some words about the convergence and stability of the algorithms are also presented in Section 7. Section 8 presents the main results and a brief discussion of them, in conjunction with other papers of the current literature. Section 9 has a summary of our paper and the main conclusions. This section includes: (i) a short summary of the paper’s novelties and its scientific contribution, (ii) the conclusions extracted by the present study, and (iii) future extensions and outlook. At the end of the paper, we included two appendices which try to add more information about some of the aspects we consider relevant in our work.

2. Numerical Analysis Tools

In this section, we specify certain tools of Numerical Analysis that must be considered to solve a Dynamical System. These tools allow us to take control of the evolution of our numerical simulation. Sometimes, this process uses more CPU time, and in some cases, also RAM memory. Nevertheless, following it, we have a guarantee that the solution will fit the problem to be solved.
The aspects that should be considered the most in these integrations are:
(1)
Working with computers that use finite precision arithmetic, the first question to keep in mind is that, in computational models, one has to avoid, whenever possible, the subtraction of functions that can have values of the same order. Such a subtraction produces a result smaller than the operands and can lead to the loss of the most significant digits of the given value, producing the so-called catastrophic cancellation; see [48] (chapter 1). This operation is especially disastrous in numerical derivation processes, as well as in recursive algorithms, which are quite fashionable for their beauty and the simplicity of their implementation, as checked in the Ref. [48] (problem 2, chapter 1). It is also dangerous when a derivative (total or partial) is expected to change sign (detection of extremals of the corresponding function), because if it changes sign, it has had to pass through zero and lose all its significant digits. Therefore, we must always look for alternative formulas to those that appear in books and manuals (see Appendix B for the IEEE standards) for floating point computation, for intrinsic functions of programming languages, and for numerical derivatives. A minimal numerical illustration of this point is given after this list.
(2)
The next step is to try to base our model on another with an analytical solution that is the limit of ours when a certain parameter tends to zero, for example (asymptotic model). If the asymptotic model has an analytical solution that provides us with certain AE to obtain values of some variables near those of our model, such values eventually serve to rescale or renormalize the variables of our model. This technique is very common in the N-body processes of physics (renormalization of sources), and even in the numerical solution of hyperbolic ODE and PDE which requires reconstruction of the axes at each step to avoid distortions if the coefficients vary with time.
(3)
At this point we need to verify that the functions we are going to use are evaluable with a precision greater than the tolerance we impose (standard IEEE). This analysis shows to what extent it is feasible to solve our model and which problems require modifications of the integrating algorithm.
(4)
A very important tool in simulations is to make variable changes, such as passing from an affine parameter λ to the t variable, which tells us more directly how much we have advanced and how much remains. We can also rely on the easily calculated asymptotic solution in order to define new variables that are quotients of the same variable in each system. These ratios have a very small range of variation, and we thus reduce the impact of noise (local error) on the objective function. Sometimes it is even possible to decouple some ODEs, as in the case discussed in the present paper.
(5)
We enter the phase of incorporating our objective function into the dynamical system that we have modelled, through its analytical derivation (if it is an AE), so that it can evolve with the global model. If the values that this function takes are very small compared to those obtained from the integration of the other ODEs, we have to rescale the estimated relative errors of each ODE in each step to avoid situations where the big ones dominate the small ones.
(6)
The decisive step is to modify the integrating method so that it only takes into account the relative error of the principal variables for the purpose of re-evaluating the integration step.
(7)
Extremal detection is also very important in order to modify the step. Usually, some of these extremals (zeros of the derivatives) indicate the areas of greatest effect and should not be ignored. In our RKF45ng algorithm, we indicate a way to achieve this, which has given us very good results in the past, although it can probably be improved.
(8)
Finally, any modification we introduce must be intensively tested. The most accurate test is to get the algorithm of our nonlinear dynamical system to obtain the values of the asymptotic system when we make the corresponding parameter zero (background test), at least within the precision of the machine used for the defined variables. If the final algorithm passes this test, we can be sure that it will pass any other reasonable test.
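As announced in item (1), the following minimal illustration shows the catastrophic cancellation that appears, for instance, in the small-η limit of the combinations cosh η − 1 and sinh η − η of the parametric LTB solution, together with alternative formulas (an exact identity and a truncated series) that keep full precision. The value of η is arbitrary.

```python
# Illustration of item (1): for small eta, cosh(eta) - 1 and sinh(eta) - eta
# subtract nearly equal numbers and lose most of their significant digits.
# Rewriting them (exact identity, or truncated series) restores full precision.
import math

eta = 1.0e-6

naive_c = math.cosh(eta) - 1.0             # catastrophic cancellation
safe_c = 2.0 * math.sinh(eta / 2.0) ** 2   # algebraically identical, stable

naive_s = math.sinh(eta) - eta             # cancellation again
safe_s = eta**3 / 6.0 * (1.0 + eta**2 / 20.0)   # truncated series, exact here

print(f"cosh(eta)-1   : naive = {naive_c:.16e}   safe = {safe_c:.16e}")
print(f"sinh(eta)-eta : naive = {naive_s:.16e}   safe = {safe_s:.16e}")
```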

3. The Physical Problem to Solve

In the present paper, we show the numerical algorithm built to solve the equations that compute the gravitational effect of a pressureless, spherically symmetric perturbation on free photons. Those photons are described within the General Relativity Theory (GRT) and follow null geodesics. Moreover, they account for the Cosmic Background Radiation (CBR). For some important definitions related to our problem, see Appendix A.
Now, the applied cosmological theory is presented. A spherically symmetric pressureless perturbation may be described in Cosmology using the following exact solutions of Einstein Equations:
(a)
Friedmann-Lemaître-Robertson-Walker solution (FLRW) which represents a homogeneous and isotropic space-time (with constant energy density ρ B ). This is the background Universe.
(b)
Lemaître-Tolman-Bondi solution (LTB) which accounts for a Universe with a unique great perturbation (an excess or a defect of energy density ρ over the mean) which is a pressureless spherically symmetric solution and which asymptotically tends to a FLRW metric (background). This is a parametric solution and requires numerical methods to determine the necessary functions in every space-time point. See ([44,49]).
If an observer O is placed at an arbitrary point of LTB, photons arriving from different directions will be affected in a different way by the perturbation. Given a direction ψ at t_0, we call T(ψ) the photon temperature received by O evolving in LTB and T_B(ψ) the one received by the same observer but evolving in the asymptotic FLRW. We are interested in computing (see [50]):
$$\delta_B = \frac{T(\psi) - T_B(\psi)}{T_B(\psi)} .$$
This temperature contrast is hereafter named the gravitational anisotropy produced by the perturbation, and it is expected to have values of the order of 10^{-5}. Notice that the spherical symmetry and the absence of external fields allow us to ensure that all planes with φ = constant are equivalent, and for this reason only the case φ = 0 will be studied. This fact can be seen in Figure 1.
The procedure to solve the problem is as follows:
  • Build a suitable algorithm (we will name it CS) to numerically solve the parametric LTB metric with sufficient precision.
  • Generate a second numerical algorithm (we will call it RKF45ng) that allows the integration of the null geodesics equations (or paths of free photons) in LTB with appropriate initial conditions.
  • Compute the gravitational anisotropy.
  • Design suitable controls and tests to check the consistency of the obtained values. In order to do that, the following considerations are taken into account:
    4.1.
    If the LTB perturbation, while photons are crossing it, keeps small density contrasts, |ρ − ρ_B| ≤ 0.01 ρ_B, it is admissible to make a first-order analytical approximation, both in the Einstein Equations and in the null geodesics. Then one can integrate this linear analytical approximation. A linear code has been built for such a case. This code supplies a method of comparison for the case when ρ ≈ ρ_B.
    4.2.
    If one integrates the null geodesics of an LTB generated with a null perturbation (ρ = ρ_B), one should obtain a FLRW Universe and, therefore, null values of the anisotropy. This is a good test to detect systematic errors and loss of precision. Moreover, it will be the main tool to guarantee that the RKF45ng is stable (background test).
That means our LTB null geodesics integration algorithm is nonlinear, in the sense that no analytical approximations of the equations to integrate are made, although it should also be useful for the linear case studied first. Such a requirement may originate additional problems in the error control and in the numerical stability of the system to integrate in these two situations: (1) when LTB is very close to the FLRW limit (points far away from the symmetry centre); and (2) when the null geodesic is very near to the symmetry centre. The main goal of our research is the detection and solution of such kinds of problems.

4. Solutions of EE Used

4.1. LTB Solution of EE

The spherically symmetric conditions imposed on the LTB universe, as well as the use of comoving coordinates and null pressure, make most of the components of Einstein’s tensor null. Thus, their evolution equations are reduced to:
$$G_{11} = 0; \qquad G_{14} = 0; \qquad G_{44} = 8\pi G\, \rho(M,t).$$
The G_{44} = 8πGρ(M,t) equation becomes a relation between M and R(M,t) through ρ(M,t). This relation may be expressed through either of the two following equivalent equations:
$$M = 4\pi \int_0^R x^2 \rho(x,t)\, dx, \qquad \rho = \left( 4\pi R^2 \frac{\partial R}{\partial M} \right)^{-1}, \tag{1}$$
and the second one allows us to obtain ∂R/∂M; this partial derivative can then be obtained with the same relative precision as that obtained in R and ρ.
The equations G_{11} = 0 and G_{14} = 0 are analytically integrable and lead to the parametric solution of LTB, with two arbitrary functions f(M) and t_1(M) to be determined from the initial conditions. This solution is:
$$f > 0 \ (\text{non-connected shell}): \quad R = \frac{GM}{f}(\cosh\eta - 1), \qquad t + t_1(M) = \frac{GM}{f^{3/2}}(\sinh\eta - \eta), \tag{2}$$

$$f < 0 \ (\text{connected shell}): \quad R = \frac{GM}{-f}(1 - \cos\eta), \qquad t + t_1(M) = \frac{GM}{(-f)^{3/2}}(\eta - \sin\eta), \tag{3}$$

$$f = 0 \ (\text{limiting shell}): \quad R = \left( \frac{9GM}{2} \right)^{1/3} (t + t_1(M))^{2/3}. \tag{4}$$
On the one hand, the spherically symmetric LTB metric stands for:
$$ds^2 = \frac{1}{\Phi^2}\left( \frac{\partial R}{\partial M} \right)^2 dM^2 + R^2 (d\theta^2 + \sin^2\theta\, d\varphi^2) - dt^2, \tag{5}$$
where:
$$\Phi^2 = 1 + f(M) = 1 + \left( \frac{\partial R}{\partial t} \right)^2 - \frac{2GM}{R}. \tag{6}$$
The function f(M) is subject to the restriction f(M) > −1 so that the metric is well-defined. Furthermore, once f(M) is known, ∂R/∂t can be computed. This function then establishes the variation profile of the radial coordinate with respect to t. On the other hand, f(M) plays the role of the difference between the kinetic and potential energies of the M shell. Its sign allows one to distinguish the connected and non-connected shells of the gravitational field. As f(M) does not depend on time, the connectedness of a shell is determined by the initial conditions.
Once this equation has been solved, the next step is to compute the Christoffel symbols which appear in the null geodesics equations, and read:
$$\Gamma^4_{11} = \frac{1}{\Phi^2}\,\frac{\partial R}{\partial M}\,\frac{\partial^2 R}{\partial t\, \partial M}; \qquad \Gamma^4_{22} = R\,\frac{\partial R}{\partial t}; \qquad \Gamma^2_{24} = \frac{1}{R}\frac{\partial R}{\partial t};$$

$$\Gamma^1_{11} = -\frac{1}{\Phi}\frac{\partial \Phi}{\partial M} + \frac{\partial^2 R}{\partial M^2}\,\frac{\partial M}{\partial R}; \qquad \Gamma^2_{12} = \frac{1}{R}\frac{\partial R}{\partial M};$$

$$\Gamma^1_{14} = \frac{\partial^2 R}{\partial t\, \partial M}\,\frac{\partial M}{\partial R}; \qquad \Gamma^1_{22} = -R\,\Phi^2\,\frac{\partial M}{\partial R}.$$
Such functions will appear as coefficients of a coupled differential equation system when integrating the null geodesics. Therefore, maximum precision is required. As a matter of fact, the Γ^1_{11} symbol is the only one which is the sum of two functions, and it can have precision problems in the case where it becomes very small.

4.2. FLRW Solution of EE

For a FLRW universe, one has:
$$ds^2 = \frac{a^2}{1 \pm r^2}\, dr^2 + a^2 r^2 (d\theta^2 + \sin^2\theta\, d\varphi^2) - dt^2,$$
and that is why if one takes:
$$\Phi^2 = 1 \pm r^2, \qquad R = a(t)\, r, \qquad M = \frac{4\pi \rho_B R^3}{3},$$
then FLRW becomes an LTB with constant ρ = ρ_B for every t, with r a comoving radial coordinate (independent of t) and a(t) the scale factor. The computations to complete, in this case, are evident. Note that these equations are AE, not ODE or PDE.

5. CS Algorithm

In order to complete the effective numerical computation of the parametric LTB solution, one has to perform the following steps:

5.1. Initial Functions

To determine the two arbitrary functions of the parametric solution, one has to proceed in the following way:
The function t_1(M) is usually taken to be zero at t_D, although it need not be so.
To determine the function f ( M ) through the density profile defined at t D (see Equation (A1) at Appendix A), one has to proceed this way:
(i)
Computation of R(M,t): From Equation (1) and the density profile, one has
$$M = \frac{4\pi \rho R_v^3}{3}\left[ \left( \frac{R}{R_v} \right)^3 + 3\varepsilon \left( \frac{R}{R_v} - \arctan\frac{R}{R_v} \right) \right].$$
This equation has a unique positive solution x = R/R_v for each M ≥ 0. Once M is known, it can be written in the following way:
$$x^3 + 3\varepsilon\, (x - \arctan x) - d = 0, \qquad d = \frac{3M}{4\pi \rho R_v^3},$$
and any root-finding method can be applied to it, as the solution is unique and simple. As in the FLRW, one has M_B = 4π ρ_B R_B^3/3, and x_0 = R_B/R_v can be taken as an initial value for the Newton method (a minimal sketch of this root computation is given after this list). It still has to be determined whether the error tolerance is admissible for R. In this way, a method to obtain the radial coordinate R(M, t_D) associated with each shell is achieved with a prefixed relative error bound. That is, the profile R(M, t_D) is obtained.
(ii)
Computation of f(M): By taking z = ( R f ) / ( G M ) in Equations (2) and (3), one has:
$$z > 0: \qquad \left( \frac{GM}{R^3} \right)^{1/2} t_D = \frac{\sqrt{z^2 + 2z} - \mathrm{ArgCh}(z+1)}{z^{3/2}},$$

$$z < 0: \qquad \left( \frac{GM}{R^3} \right)^{1/2} t_D = \frac{\arccos(z+1) - \sqrt{-z(z+2)}}{(-z)^{3/2}}.$$
If z ≠ 0, one of these equations has a solution (see Figure 2), from which z is obtained through a root computation and, afterwards, f(M) with the same precision, when numerically solved for an arbitrary M and its corresponding R(M, t_D). If neither equation has a solution, that means that the following holds in the shell:
$$R = \left( \frac{9GM}{2} \right)^{1/3} (t_D)^{2/3},$$
and then, f ( M ) = 0 (limiting shell).
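A minimal sketch (with our own naming) of the root computation of step (i) is given below: Newton's method applied to x³ + 3ε(x − arctan x) − d = 0, starting from the FLRW guess x_0 = R_B/R_v. The numbers in the usage line are purely illustrative.

```python
# Newton's method for x^3 + 3*eps*(x - arctan x) - d = 0, with x = R/R_v.
import math

def shell_radius_ratio(d, eps, x0, rel_tol=1e-10, max_iter=50):
    """Return the unique positive root x with relative precision rel_tol."""
    x = max(x0, 1e-30)
    for _ in range(max_iter):
        g = x**3 + 3.0 * eps * (x - math.atan(x)) - d
        dg = 3.0 * x**2 + 3.0 * eps * (1.0 - 1.0 / (1.0 + x * x))   # g'(x) > 0 for x > 0
        x_new = x - g / dg
        if abs(x_new - x) <= rel_tol * abs(x_new):
            return x_new
        x = x_new
    raise RuntimeError("Newton iteration did not converge")

# usage with illustrative numbers (not data from the paper)
print(shell_radius_ratio(d=8.0, eps=0.05, x0=2.0))
```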

5.2. The Other Functions at t D

In order to determine the rest of the necessary functions, one should proceed this way:
  • As ρ is known at t_D (initial profile), from Equation (1) one can directly compute the value
    $$\frac{\partial R}{\partial M} = \frac{1}{4\pi \rho R^2}$$
    with the same relative error bound as R.
  • Furthermore, once f ( M ) and R ( M , t D ) are known, one can also obtain
    $$\left| \frac{\partial R}{\partial t} \right| = \sqrt{f(M) + \frac{2GM}{R}}$$
    with the same relative error bound. So as to know the sign of this derivative, one needs to know whether, at a later time, R ( M , t ) increases or decreases for the same shell. The advantage of this procedure is that the derivative value is precise and no significant digits are lost (see Appendix B).
  • The other functions appearing in the Christoffel symbols, such as ∂²R/∂t∂M, ∂²R/∂M² and ∂Φ/∂M, need numerical derivation techniques (at t = constant) applied to the known functions ∂R/∂t, ∂R/∂M and Φ = √(1 + f(M)), although one cannot now guarantee a fixed relative error bound, as the catastrophic cancellation phenomenon can appear through the loss of significant digits in the subtraction of similar numbers. The logarithmic derivative is crucial here.
We tried to choose ΔM so that the maximum of the truncation and rounding errors was as small as possible (see Appendix B).
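The following small sketch shows a standard heuristic for this balance (not necessarily the exact choice made in our code): in a central difference the truncation error scales as h² and the rounding error as ε/h, so the optimal step is of the order of ε^{1/3} times a characteristic scale of the variable.

```python
# Standard heuristic for the finite-difference step: balance truncation and rounding.
import numpy as np

def central_derivative(f, m, scale=1.0):
    eps = np.finfo(float).eps
    h = eps ** (1.0 / 3.0) * max(abs(m), scale)   # near-optimal step for central differences
    return (f(m + h) - f(m - h)) / (2.0 * h)

# usage: derivative of a smooth test function, compared with its exact value
f = lambda m: np.sin(m) * m**2
m0 = 1.3
print(central_derivative(f, m0), np.cos(m0) * m0**2 + 2.0 * m0 * np.sin(m0))
```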

5.3. Functions for t > t D

As f(M) is now known, given a value of M, one can directly choose among Equations (2)–(4) to determine which is suitable for this shell. By substituting t_D by the desired t and using the same change z = (R f)/(G M), one has:
$$f > 0: \qquad \frac{f^{3/2}}{GM}\, t = \sqrt{z^2 + 2z} - \mathrm{ArgCh}(z+1),$$

$$-1 < f < 0: \qquad \frac{(-f)^{3/2}}{GM}\, t = \arccos(z+1) - \sqrt{-z(z+2)},$$

$$f = 0: \qquad R = \left( \frac{9GM}{2} \right)^{1/3} t^{2/3}.$$
Let us now compute the approximated root for z (if necessary) and let us obtain the corresponding R ( M , t ) . From now on, easy computations allow us to obtain the rest of the necessary functions at each point ( M , t ) .
For example, for known R(M,t) and f(M), Equation (6) gives the value of |∂R/∂t|, and a numerical derivation at constant t gives ∂R/∂M. Then Equation (1) gives the value of ρ(M,t). The rest of the procedure is as in the former case.
Once all the commented functions are obtained, one can compute the Christoffel symbols at any shell M and at any time t.

6. Additional Utility of the CS Algorithm

A variation of this algorithm allows one to obtain, for different FLRW background universes, the ε and R_v parameters at t_D, and this lets us “simulate” the present profiles of certain realistic structures, such as the Boötes Void, the Great Attractor, the Virgo Cluster, and so forth. To do so, one proceeds as follows:
  • Choose an Ω ∈ [0.2, 1[ and, together with the critical density value ρ_c(t_0), determine ρ_B(t_0) = Ω ρ_c(t_0) for the asymptotic FLRW.
  • With the core “radius” R_v(t_0), choose R_v1(t_D) ≈ a(t_D) R_v(t_0)/a(t_0) (application of the scale factor). Additionally, choose a value of the contrast at t_D, ε_1, and compute the ρ_B(t_D) value of the FLRW.
  • Simplify the CS algorithm to obtain only the R ( M , t ) and ρ ( M , t ) values. Apply it only at t D and t 0 for a predetermined sample of shells M i and obtain R ( M i , t 0 ) and ρ ( M i , t 0 ) .
  • Compute ρ_0 = ρ(0, t_0) by extrapolation and ε̂(t_0) = (ρ_0 − ρ_B)/ρ_B (maximum present density contrast).
  • Additionally, compute R̂_v, the present core radius, by interpolation, first searching for the value of the core shell, M_v, such that ρ(M_v, t_0) − ρ_B = ρ_B ε̂(t_0)/2, and the corresponding value of R(M_v, t_0).
  • In this way, a function of two variables has been defined:
    [ε̂(t_0), R̂_v] = PARAM(ε_1, R_v1)
    evaluable for each pair of initial data.
  • Now the numerical secant method for systems of equations allows one to perform iterations, starting with two initial values (ε_1, R_v1) and (ε_2, R_v2), until it is able to find the desired present maximum contrast and core radius (in the case where it converges, of course). A minimal sketch of this kind of iteration is given below.
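In the sketch below, param() is only a smooth stand-in for the real PARAM algorithm and the target values are illustrative; the Jacobian is approximated by finite differences, in the spirit of the secant method for systems.

```python
# Quasi-Newton (secant-type) iteration for a two-variable system: find (eps1, Rv1)
# such that PARAM(eps1, Rv1) matches target present values.
import numpy as np

def param(eps1, rv1):
    # smooth stand-in for PARAM(eps1, Rv1), for illustration only
    return np.array([eps1 * (1.0 + 0.3 * rv1), rv1 * (1.0 + 0.1 * eps1)])

def solve_param(target, x_start, tol=1e-8, max_iter=30):
    x = np.asarray(x_start, dtype=float)
    for _ in range(max_iter):
        f = param(*x) - target
        if np.linalg.norm(f) < tol:
            return x
        # finite-difference (secant-like) approximation of the 2x2 Jacobian
        J = np.empty((2, 2))
        for j in range(2):
            dx = np.zeros(2)
            dx[j] = 1e-6 * max(abs(x[j]), 1.0)
            J[:, j] = (param(*(x + dx)) - param(*x)) / dx[j]
        x = x - np.linalg.solve(J, f)
    raise RuntimeError("iteration did not converge")

# usage with purely illustrative target values (not data from Table 1)
print(solve_param(target=np.array([0.9, 20.0]), x_start=[0.6, 18.0]))
```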
This process is able to simulate perturbations with positive contrast (overdensities), with the same present linear contrast (as shown in Figure 3) and for universes with different values of Ω. The initial parameters of each overdensity are shown in Table 1b. For these cases, high precision is not required in the PARAM algorithm, as the present contrast is obtained from observational low-precision data. The initial R_v are all of the order of 180/51 h^{-1} Mpc, and that is the reason why they are not written in Table 1b.
So as to generate initial values that lead to nonlinear negative present contrasts (underdensities), a negative present profile was chosen (see Figure 4) and the data found are written in Table 1a. The relation R_v1 ≈ 22.5/51 h^{-1} Mpc also holds here, but is not included in Table 1a, as it is the same for all underdensities.
The data of Table 1b have been used to generate overdensities. The effects of these overdensities in the CMB anisotropies have been used to compare the results of our nonlinear algorithm with the linear algorithm stated before. Meanwhile, the underdensities created with the data of Table 1a have been used to test the nonlinear results.

7. Resolution of the Null Geodesics ODES

From Equation (5) and the fact that φ = constant for spherical symmetry reasons, along any null geodesic one has: ds² = 0 = g_11 dM² + g_22 dθ² − dt², where
$$g_{11} = \frac{1}{\Phi^2}\left( \frac{\partial R}{\partial M} \right)^2, \qquad g_{22} = R^2.$$
On the other hand, if one defines λ as the affine parameter of the geodesic and the tangent components at a point of the geodesic as (K^1, K^2, K^3, K^4), the following relations hold:
$$K^1 = \frac{dM}{d\lambda}, \qquad K^2 = \frac{d\theta}{d\lambda}, \qquad K^3 = 0, \qquad K^4 = \frac{dt}{d\lambda}.$$
Finally, the free photon evolution in GRT must satisfy the condition dK^α/dλ = −Γ^α_{βγ} K^β K^γ. In LTB, that means the resolution of the following system:
$$\text{System A}: \quad
\begin{cases}
K^1 = dM/d\lambda, \quad K^2 = d\theta/d\lambda, \quad K^4 = dt/d\lambda \\
dK^1/d\lambda = -2\Gamma^1_{14} K^1 K^4 - \Gamma^1_{11} K^1 K^1 - \Gamma^1_{22} K^2 K^2 \\
dK^2/d\lambda = -2 K^2 (\Gamma^2_{24} K^4 + \Gamma^2_{12} K^1) \\
dK^4/d\lambda = -\Gamma^4_{11} K^1 K^1 - \Gamma^4_{22} K^2 K^2 \\
\text{Control Eq.}: \ (K^4)^2 = g_{11} (K^1)^2 + g_{22} (K^2)^2 \\
\text{Objective}: \ \delta_B = a(t_0) K_0^4 / (a(t_D) K_D^4) - 1
\end{cases}$$
where the Christoffel symbols that we must compute with the CS algorithm appear. The independent variable is the affine parameter λ and the dependent variables are M, θ, t, K^1, K^2, K^4.
These equations form a system of six coupled first-order differential equations that have to be integrated with suitable initial conditions. Since giving conditions at t_D makes it very difficult to ensure that the photon arrives at the observer at t_0, the best procedure is to proceed in the inverse way. It is supposed that the observer is looking in the direction that forms an angle Ψ_0 with the O C_s axis, and then the initial conditions are:
$$\lambda_0 = 0, \qquad K_0^1 = \frac{\cos\Psi_0}{\sqrt{g_{11}(M_O, t_0)}}, \qquad K_0^2 = \frac{\sin\Psi_0}{\sqrt{g_{22}(M_O, t_0)}}, \qquad K_0^4 = 1,$$
where the 0 subscript stands for the initial value while M O is the M coordinate of the observer shell.
Let us also suppose that CMB is homogeneous in the hypersurface t = t D and that the temperature of all the photons that start from that hypersurface is constant. In these conditions, it is well-known that the gravitational anisotropy to be computed verifies:
$$\delta_B = \frac{a(t_0)\, K_0^4}{a(t_D)\, K_D^4} - 1, \tag{18}$$
where a ( t ) is the scale factor.
Theoretically, one only has to compute K_D^4 by integrating (backwards in time) the system of equations. In any case, it can be expected that problems with the stability of the system will appear, since one needs to obtain δ_B of the order of 10^{-5} with Equation (18); this is the precision required in K_D^4 to obtain at least seven correct decimals in the quotient and to be able to guarantee that δ_B has at least three correct significant digits. If what one tries to obtain is a quasi-null anisotropy (FLRW limit), the precision requirements are even greater.
As can be noticed, the problem is not only the choice of an appropriate integration method. One also needs the integration step to adapt to the chosen precision: (a) for great values of M, LTB tends to FLRW; a very small step only increases the global rounding error and the CPU cost and, moreover, in these conditions the gravitational anisotropy changes very little; (b) for small values of M, the density contrast increases in one direction and decreases in the opposite one, so one has to be very precise in the chosen step, as in this case the greater changes of the gravitational anisotropy are present.
We have chosen a Runge-Kutta-Fehlberg of order 4(5) with a variable step, requiring, at first, a local relative truncation error of the order of 10^{-10} in every variable. Of course, this forces us to demand that the roots x and z of the equations solved with the CS algorithm also have the same relative precision, although the functions determined through numerical derivation may not have it.

7.1. Numerical Instability of the Initial System

When System A was integrated for ε = 0 (FLRW test) on the VAX computer (from the IFIC, València), it was found that, in some directions Ψ_0, the integration stopped without reaching t_D because it found negative values of M, despite the quadruple precision used to minimize the rounding errors. This absurd result indicated uncontrolled oscillations in the system coefficients (numerical instability). When the values involved in the Γ^α_{βγ} were plotted, amplified oscillations were detected in Γ^1_{11}, as was predictable, for it is the only term with a sum/subtraction of functions.

7.2. First Numerically Stable System

Problem 1.
In order to avoid the problems with the Γ^1_{11} term, it must not appear in the system to integrate.
Solution 1: Substitute the equation in which the Γ^1_{11} term appears by the control equation. At the same time, the independent variable is changed, considering now the t variable instead of the affine parameter λ. In this way, greater control is achieved over the time at which the integration must be stopped.
Once these changes are made, the new system to solve is the following one:
$$\text{System B}: \quad
\begin{cases}
K^1/K^4 = dM/dt, \quad K^2/K^4 = d\theta/dt \\
1 = g_{11} (dM/dt)^2 + g_{22} (d\theta/dt)^2 \\
dK^2/dt = -2 (K^2/K^4)(\Gamma^2_{24} K^4 + \Gamma^2_{12} K^1) \\
dK^4/dt = -(\Gamma^4_{11} K^1 K^1 + \Gamma^4_{22} K^2 K^2)/K^4 \\
\text{Control Eq.}: \ K^4\, dK^1/dt = -2\Gamma^1_{14} K^1 K^4 - \Gamma^1_{11} K^1 K^1 - \Gamma^1_{22} K^2 K^2 \\
\text{Objective}: \ \delta_B = a(t_0) K_0^4 / (a(t_D) K_D^4) - 1
\end{cases}$$
where the dλ = dt/K^4 equation is unnecessary, because the value of this parameter is not of interest, and the eliminated equation is now used as the control equation with the purpose of monitoring the Γ^1_{11} oscillations (by comparing the value obtained when solving for the unknown in this equation with the value obtained from the CS algorithm).
The integration of System B with ε = 0 (FLRW test) allows one to arrive at t_D if |Ψ_0| ≥ 0.1 degrees. The numerical instability of the system has thus disappeared, and the diagnosis of Problem 1 has been confirmed. Nevertheless, the final values of δ_B are of the order of 10^{-5}, depending on the initial value of Ψ_0, although they should be close to 0. This fact indicates that either (1) there are systematic errors in the integration or (2) the step adjustment of the integration is not suitable. As a matter of fact, both cases appear when the observation angle Ψ_0 is small.
In the case that Ψ_0 is small, the trajectory of the photon which comes from t_D gets ever nearer to C_s, and therefore travels through shells with decreasing M until it starts to move away from C_s. This indicates that dM/dt changes its sign at some point of the photon trajectory, defining a minimum at M < M_O. This minimum should be detected by the integration method, or else systematic errors might appear. Moreover, the objective function does not help in the adjustment of the integration step, and this is dangerous, as significant digits could be lost.

7.3. Inclusion of the Objective Function

In the previous section (Section 7.2), some troubles were presented. They can be summarized this way:
Problem 2.
(2a) The possible change in the sign of d M / d t must be detected.
(2b) The objective function must be included in the system of equations.
Solution 2.
(2a) The second line of System B suggests that dM/dt could be solved for as a function of dθ/dt, or vice versa, but then the sign would not be well-defined. This equation is of the type 1 = cos²α + sin²α, as both g_11 and g_22 are positive, and therefore an angle α must exist such that these derivatives can be defined through trigonometric functions.
As a matter of fact, let (K^1, K^2, 0, 0) be the spatial projection of the tangent vector to the null geodesic and (1/√g_11(M,t), 0, 0, 0) be the radial direction which joins C_s with the photon. The cosine of the angle formed by these two vectors is given by the quotient of the dot product of both vectors and their norms, that is:
$$\cos\alpha = \frac{K^1 \sqrt{g_{11}}}{\sqrt{K^1 K^1 g_{11} + K^2 K^2 g_{22}}},$$
which, when using the first and second equations of System B, gives:
$$\frac{dM}{dt} = \frac{\cos\alpha}{\sqrt{g_{11}}}, \qquad \frac{d\theta}{dt} = \frac{\sin\alpha}{\sqrt{g_{22}}},$$
and allows rewriting the other equations in terms of α. Now the sign of dM/dt is perfectly defined, and this derivative changes its sign when α = π/2 ± nπ, with n an integer.
On the other hand, the initial value of α at t 0 is, evidently, Ψ 0 (see Figure 5), and the following can be written:
$$\frac{d\alpha}{dt} = \sin\alpha \left[ (\Gamma^1_{14} - \Gamma^2_{24}) \cos\alpha - \frac{\Gamma^2_{12}}{\sqrt{g_{11}}} \right].$$
Now the integration from t_0 to t_D is done in two parts when cos α changes its sign, each of them with a constant sign. In this way, it is guaranteed that no “jumps” over the extremal point are produced because of a large integration step. This modifies the RKF45ng in the following way (if |Ψ_0| < π/2): (i) While |α| ≤ π/2 both at t and at t + h, the RKF45ng controls the change of the step. (ii) The first time that a change of sign between t and t + h appears, the increase of the step h is forbidden until indicated, the values at t + h are not accepted, and the step is divided by 10. The integration follows with such a restriction, and every time that |α| > π/2 is exceeded, the new value is not accepted and h is divided by 10. (iii) When |h| < tol_h or π/2 − |α| < tol_α, the control of the step is returned to the RKF45ng so that the integration is completed by increasing or decreasing the step. A schematic sketch of this step restriction is given at the end of this subsection.
As our goal is to approach the extremal point with a desired precision, and it is also known that the extremal point is unique (if it exists), no more changes are needed, except for the obvious ones that must be made when the integration is from t_D to t_0 in the double-integration test proposed further on.
(2b) With the aim that the integration algorithm also controls the error of the objective function, the following auxiliary function is defined:
$$\delta(t) = \frac{a(t_0)\, K_0^4}{a(t)\, K^4(t)} - 1,$$
which verifies the relation δ_B = δ(t_D). The total derivative of this function with respect to time,
$$\frac{d\delta}{dt} = (1 + \delta) \left[ (\Gamma^1_{14} - \Gamma^2_{24}) \cos^2\alpha + \Gamma^2_{24} - \frac{1}{a}\frac{da}{dt} \right],$$
gives a new differential equation for δ with initial value δ ( t 0 ) = 0 and that leads to the following system:
$$\text{System C}: \quad
\begin{cases}
\dfrac{dM}{dt} = \dfrac{\cos\alpha}{\sqrt{g_{11}}} \\[2mm]
\dfrac{d\delta}{dt} = (\delta + 1) \left[ (\Gamma^1_{14} - \Gamma^2_{24}) \cos^2\alpha + \Gamma^2_{24} - \dfrac{1}{a}\dfrac{da}{dt} \right] \\[2mm]
\dfrac{d\alpha}{dt} = \sin\alpha \left[ (\Gamma^1_{14} - \Gamma^2_{24}) \cos\alpha - \dfrac{\Gamma^2_{12}}{\sqrt{g_{11}}} \right]
\end{cases}$$
with the variables α, δ and M decoupled from the other ones, although the integration of System C with ε = 0 does not substantially improve the results of System B.
Our conclusion is that the true problem with this test lies in the different ranges of values that the variables and their total derivatives with respect to t have. Thus: (i) M has a very great range of positive values that depends on the observation direction Ψ_0, but dM/dt has a constant sign in some directions and changes sign in others. (ii) α always takes values in ]0, π] (if the observation direction is positive) and behaves monotonically along the null geodesic. (iii) δ reaches a maximum (as M reaches a minimum) whenever |Ψ_0| mod π is small but, in any case, its variation range is very small.
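The following schematic (in Python, with our own naming) illustrates the step restriction near α = π/2 described in (2a): when cos α is about to change sign between t and t + h, step growth is forbidden and h is divided by 10 until the extremal point is approached within the tolerances, after which ordinary adaptive control is restored. Here rk_step is a placeholder for one embedded-RK step; it is not the production RKF45ng code.

```python
# Schematic of the extremal-detection step restriction (not the production code).
# rk_step(t, y, h) is a placeholder returning (y_new, h_next) for one embedded step.
import math

def integrate_with_extremal_control(rk_step, t0, t_end, y0, h0,
                                    alpha_index, tol_h=1e-12, tol_alpha=1e-8):
    t, y, h = t0, list(y0), h0
    forbid_growth = False
    while (t_end - t) * h > 0:                        # also valid for backward integration
        y_new, h_next = rk_step(t, y, h)
        crossing = math.cos(y[alpha_index]) * math.cos(y_new[alpha_index]) < 0.0
        near = abs(abs(y[alpha_index]) - math.pi / 2) < tol_alpha or abs(h) < tol_h
        if crossing and not near:
            forbid_growth = True                      # (ii) reject the trial values
            h /= 10.0
            continue
        if near:
            forbid_growth = False                     # (iii) hand control back
        t, y = t + h, y_new
        # (i) normal step control, except that growth is blocked while restricted
        h = h_next if not forbid_growth else math.copysign(min(abs(h), abs(h_next)), h)
    return t, y
```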

7.4. Rescaling of the Variables. RKF45ng Algorithm

Therefore, it is necessary to rescale the truncation error of each variable; otherwise, the integration method will adjust the step according to the truncation error of the dominant variable.
The question here is that we are building a general algorithm, and there is no “perfect rescaling” that allows us to consider all the possible combinations of LTB, linear or not, with guarantees of success. Moreover, the scaling of errors will depend on the observation direction, and this cannot be achieved with the classical method of defining a vector of “relative weights” for the variables.
The only way to make all the integration variables have a similar truncation error is to change the variables M and α to two new variables, y_1 and y_2, that exhibit small changes in their values and in the values of their derivatives. Such a change of variables must be understood as a rescaling of the variables, at every integration step, that helps the RKF45ng to control the local truncation error according to the physical conditions of the problem.
Given the direction Ψ_0 and the null geodesic γ of LTB that arrives at O in this direction, let γ_B be a fictitious null geodesic of FLRW that arrives at O in the same direction. (a) Associate points of both geodesics having the same coordinate t: the point (M, α, t) ∈ γ is made to correspond with the point (M_B, α_B, t) ∈ γ_B. (b) Use M_B and α_B to define the integration variables y_1 = M_B/M, y_2 = α_B/α. Now the variables y_1, y_2, δ will be the ones that control the step, and the first two have their range reduced to a neighbourhood of 1.
In terms of the variables δ , y 1 and y 2 , the system to solve is formed by the following equations:
$$\text{Final System}: \quad
\begin{cases}
\dfrac{d\delta}{dt} = (1 + \delta) \left[ (\Gamma^1_{14} - \Gamma^2_{24}) \cos^2\alpha + \Gamma^2_{24} - \dfrac{1}{a}\dfrac{da}{dt} \right], \\[2mm]
\dfrac{dy_1}{dt} = \dfrac{1}{M} \left( \dfrac{dM_B}{dt} - y_1 \dfrac{dM}{dt} \right), \\[2mm]
\dfrac{dy_2}{dt} = \dfrac{1}{\alpha} \left( \dfrac{d\alpha_B}{dt} - y_2 \dfrac{d\alpha}{dt} \right), \\[2mm]
\dfrac{d\alpha}{dt} = \sin\alpha \left[ (\Gamma^1_{14} - \Gamma^2_{24}) \cos\alpha - \dfrac{\Gamma^2_{12}}{\sqrt{g_{11}}} \right], \\[2mm]
\dfrac{d\alpha_B}{dt} = -\dfrac{\sin\alpha_B \sqrt{1 + r^2}}{a\, r}, \\[2mm]
\dfrac{dM_B}{dt} = -3 M_B\, \dfrac{d\alpha_B}{dt}\, \cot\alpha_B,
\end{cases}$$
with M_B = 4π ρ_B a³ r³/3 and M_B = M, α_B = α at t_0. The last two equations now allow us to obtain acceptable values of M_B and α_B, based on the FLRW background, with which to rescale the variables M and α into y_1 and y_2.
If the order of the variables chosen is δ, y_1, y_2, α, α_B, M_B, the rescaling vector can be defined as scale = (fac_δ, 1, 1, 0, 0, 0), which multiplies the truncation error of each variable. This definition implies that the RKF45ng will not consider the variables α, α_B, M_B to adjust the step. For y_1 and y_2 it will simply consider their truncation error, while for δ it will consider its truncation error multiplied by fac_δ. Such a choice first allows the integration with fac_δ = 1 and, if the results are not satisfactory under the new suitable tests, fac_δ may progressively be increased to force the RKF45ng to make smaller steps.
More concisely, define: (a) Δ_i = (y_{n+1})_i − (y*_{n+1})_i for each integration variable (see [4]). (b) The vector reesc_i = Δ_i · scale_i. (c) Finally, Δ = ‖reesc‖_∞ (the maximum-modulus component). The RKF45ng is now prepared so that it only considers the truncation errors of the first three variables to adjust the step.
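The following small sketch shows how the rescaled error measure is formed from the two embedded estimates and the scale vector (fac_δ, 1, 1, 0, 0, 0); the numbers in the usage lines are made up.

```python
# Rescaled error measure: only delta, y1, y2 contribute, with delta amplified by fac_delta.
# y_high and y_low are the two embedded-RK estimates of (delta, y1, y2, alpha, alpha_B, M_B).
import numpy as np

def rescaled_error(y_high, y_low, fac_delta=1.0):
    scale = np.array([fac_delta, 1.0, 1.0, 0.0, 0.0, 0.0])
    delta_i = np.asarray(y_high) - np.asarray(y_low)   # per-variable truncation estimate
    reesc = delta_i * scale                            # alpha, alpha_B, M_B are ignored
    return np.max(np.abs(reesc))                       # maximum-modulus component

def accept_step(y_high, y_low, tol_r=1e-10, fac_delta=1.0):
    return rescaled_error(y_high, y_low, fac_delta) <= tol_r

# usage with made-up numbers
y5 = np.array([1.2e-6, 1.0000001, 0.9999998, 0.7, 0.69, 3.0e14])
y4 = np.array([1.2e-6 + 3e-12, 1.0000001 + 1e-11, 0.9999998, 0.7, 0.69, 3.0e14])
print(accept_step(y5, y4))   # True: rescaled error 1e-11 <= 1e-10
```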
Theoretically, there is no limit for fac_δ. Thus, with fac_δ = 10^5, the condition Δ ≤ 10^{-10} reads 10^5 Δ_1 ≤ 10^{-10}: the local truncation error in δ is required to be of the order of 10^{-15}(!), and the relative error in the final step of the order of 10^{-12} after 1000 steps. This must be avoided, as the greater fac_δ is, the greater the CPU cost, the more integration steps are done, and the more the global rounding error increases.
The variables y_1, y_2 should oscillate around unity, so that, with fac_δ = 1 and an absolute local truncation error bound of 10^{-10}, we are asking for 9 or 10 correct decimal digits in y_1, y_2 and δ. If δ reaches values of the order of 10^{-5}, at least two or three correct digits are expected after 1000 integration steps. Such precision is acceptable to determine the order of the gravitational anisotropy, and it makes no sense to demand more precision given the CPU cost and the problem conditions.

7.5. Convergence and Stability. FLRW Test for RKF45ng with ε = 0

Once the final system is integrated with ε = 0, tol_r = 10^{-10}, fac_δ = 1, with local truncation of the order of tol_r and quadruple precision on the VAX computer of the IFIC: (a) the RKF45ng arrives at t_D if |Ψ_0| ≥ 0.001 degrees, in a number of steps between 800 and 1200 according to the direction; (b) the |δ_B| values obtained are in the interval [10^{-26}, 10^{-24}] ≤ tol_r² × 10^{-4}. As some of the functions computed by derivation have a relative error greater than the local truncation error, it must be concluded that the final system is numerically stable in the FLRW limit.

7.6. Convergence and Stability. New Tests for the RKF45ng When ε ≠ 0

Once the stability of the system in the FLRW limit has been checked, the guarantee that the final system works properly for great values of M is achieved, and therefore one can be confident that the integration process does not “contaminate” the δ variable at t_D. A check of the final results obtained for LTB is still lacking. In order to do this, the following tests were designed.

7.6.1. Double-Integration Test

Let us call rkf45ng the algorithm of integration of the Final System of the null geodesics. If valini stands for the initial set of values and valfin for the final set of values, we use the notation [valfin] = rkf45ng(valini, ±1), where the + sign indicates integration from t_D to t_0 and the − sign the inverse. If:
  • [valfin] = rkf45ng(valini, −1),
  • [valini1] = rkf45ng(valfin, +1),
the difference |valini − valini1| allows one to “estimate” the number of recovered digits of each initial variable.
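A sketch of this test is given below; rkf45ng(values, direction) is a placeholder for the real integrator, with the sign convention defined above.

```python
# Double-integration test: go t0 -> tD, then back tD -> t0, and estimate recovered digits.
import numpy as np

def double_integration_test(rkf45ng, val_ini):
    val_fin = rkf45ng(val_ini, -1)            # t0 -> tD
    val_ini_1 = rkf45ng(val_fin, +1)          # tD -> t0, should recover val_ini
    diff = np.abs(np.asarray(val_ini) - np.asarray(val_ini_1))
    digits = -np.log10(np.maximum(diff, np.finfo(float).tiny))   # recovered decimal digits
    return diff, digits
```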

7.6.2. Direction Test

Integrate the final system with Ψ_0, −Ψ_0 and Ψ_0 ± 2π; this allows us to detect possible disarrangements in the rescaling vector when the modulus of α or its sign is changed.

7.6.3. Extrapolation Test

Analysis of the final plots of δ_B as a function of Ψ_0 allows us to determine the smoothness of this curve and the appropriate working of the extrapolation to Ψ_0 = 0.

7.6.4. Linear Test

If some of the inhomogeneities that are in a linear regime at present are generated, one can obtain δ_B with the linear code and with the rkf45ng, and compare their results.
The settings here are ε ≠ 0, tol_r = 10^{-10}, fac_δ = 1, with local truncation of the order of the rescaled tol_r and quadruple precision on the VAX of the IFIC.
For the linear test, the following steps are given:
  • With both codes, linear and rkf45ng, the present temperature contrast δ_B is computed for values of Ψ_0 = 1, 2, ..., 180 degrees. At Ψ_0 = 0 the result is obtained by extrapolation. In this way, the values of δ_B are known at 181 evenly spaced points.
  • By taking into account that T(t_0)/T_D = (1 + δ_B)/(1 + Z_D), the temperatures T_0(Ψ_0) in each direction and their mean are computed (a small numerical sketch of this step is given after this list):

    $$\langle T_0 \rangle = \frac{1}{2} \int_0^{\pi} T_0(\Psi_0) \sin(\Psi_0)\, d\Psi_0 .$$
  • Now the new contrast is computed:
    $$\delta_T(\Psi_0) = \frac{T(\Psi_0) - \langle T_0 \rangle}{\langle T_0 \rangle} .$$
    As a consequence of this definition, the monopole of δ T is null.
  • The dipolar contribution to δ_T is related to the Observer–Symmetry Centre relative velocity (Doppler effect). The other contributions to δ_T are named δ_R, or the nondipolar anisotropy contribution, and are computed by subtracting the dipolar part from δ_T.
  • Given any inhomogeneity O_3–O_7 of Table 1b, the δ_T plots of the linear solution and of the rkf45ng solution are indistinguishable.
  • For the case O_4, the plots of δ_R(Ψ_0) are shown in Figure 6a. The continuous curve corresponds to the linear computations, while the discontinuous one corresponds to the non-linear ones (rkf45ng algorithm). The figure shows that both results are very similar, which means both schemes are simultaneously effective. For the other inhomogeneities of Table 1b, we also obtained very similar results for both algorithms (linear and non-linear). Figure 6b plots the non-linear computations of δ_R(Ψ_0) for the cases O_3–O_7 of Table 1b. In Figure 6b, we can appreciate that the CMB anisotropy grows as the value of Ω decreases. Therefore, these kinds of CMB anisotropies, the δ_R(Ψ_0) values, are greater the more open the model of the universe is.
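The following small numerical sketch reproduces the quadrature and contrast computation described in the list above, with a made-up temperature profile in place of the computed T_0(Ψ_0).

```python
# Sin-weighted mean, contrast with null monopole, and dipole removal (Legendre P1).
import numpy as np

def trapezoid(y, x):
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

psi = np.deg2rad(np.arange(181))                    # 181 evenly spaced directions
T0 = 2.725 * (1.0 + 1.0e-5 * np.cos(psi))           # toy T0(Psi_0) in kelvin, illustrative

mean_T0 = 0.5 * trapezoid(T0 * np.sin(psi), psi)    # <T0> = (1/2) int T0 sin(Psi) dPsi
delta_T = (T0 - mean_T0) / mean_T0                  # contrast: its monopole is null

a1 = 1.5 * trapezoid(delta_T * np.cos(psi) * np.sin(psi), psi)   # dipole coefficient
delta_R = delta_T - a1 * np.cos(psi)                # nondipolar anisotropy contribution
print(mean_T0, np.max(np.abs(delta_R)))
```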

8. Results and Discussion

Next, some of the most important physical results of the numerical work presented in this paper are discussed, as well as an indication of where they have been published. The detailed numerical and computational techniques that were not presented in such papers are presented here. The value of the results presented here and their original features have been analysed throughout our present paper.
The first results of the physical application presented in this paper can be found in the Ref. [11]. This was the first time our method was presented. Linear tests and preliminary results with non-linear structures were demonstrated. The first conclusion was that, for linear structures, both our non-linear and linear algorithms produced graphically indistinguishable results. The results had to be expanded down to the quadrupole to appreciate the difference. As a side effect, we found that our non-linear algorithm produced higher levels of precision in δ_B than expected. The reason for this was that even the octupole (with values between 10^{-7} and 10^{-9}) “appeared” to behave properly, although only tol_r = 10^{-10} was required as the local error bound in each step and between 800 and 1200 steps were performed in the cases treated there. A less robust code would, in all likelihood, have presented octupole noise contamination.
Concrete simulated experiments with the non-linear version (part) of the algorithm were performed on the clearly non-linear structures of the Great Attractor and the Virgo cluster [12]. These calculations referred to the computation of the quadrupole. This was separated into two parts: (a) Q_D, or relativistic quadrupole, and (b) Q_R, or the remainder. Comparisons with results of COBE satellite data were made, which revealed that Q_R ∼ 10^{-7} was too small to be detected with the technology of the epoch (the nineties of the past century). At the same time, the robustness of our code in [11] was confirmed, which reinforced our results. The effect of a Great Attractor-like structure located at Z_0 was also studied, for redshifts between 2 and 30 [13]. When the density parameter Ω_0 was varied between 0.15 and 0.4, the conclusion was that the pure gravitational effect of these structures was sufficient to explain a significant part of the large-scale anisotropy of the Universe. Such a conclusion meant that it might be necessary to review the concept of large-scale angular anisotropy generation. The results presented in the Ref. [13] were published as a rapid Letter due to the importance of their findings.
More results were obtained after the previous papers. In the Ref. [14], a modification of the density profile at t_D was presented. This modification included two contrast values, ε_1 < 0 and ε_2 > 0, and two core radii, R_v1 >> R_v2, that is, two superimposed disturbances with the same centre of symmetry. These values had to be adjusted so that at a certain R >> R_v1, at t_D, the density contrast would be null (compensated structures at a predetermined distance). To do this, the PARAM algorithm was modified in a non-trivial way. Many graphs and tables which helped clarify the analysis were generated, and important conclusions, for instance, the compatibility with observational data, were obtained. The most interesting findings were produced for very open universes and objects very far away from the observer. A deeper study of the structures presented in the Ref. [13] can be found in [15]. A more in-depth study of great underdensities was also performed: in the Ref. [16], the study of the Boötes void with a technique similar to that of paper [14] was carried out; that is, simulating a density vacuum surrounded by a “wall” with excess density which compensated the internal void.
Moreover, we have the following papers about CMB anisotropies in LTB universes: [51,52,53,54,55,56]. The reader may find comprehensive explanations, including important consequences of previous applications performed with our algorithm, in the book [17] and in the review paper of the Ref. [18]. The latter paper also outlines an overview of CMB and CMB anisotropies research so far.
At this stage, we would like to say some words about the results and the convergence and stability of our algorithms. They are written in the next three paragraphs.
With the example we were trying to solve, we have shown that the expected signal of our objective function, the temperature contrast or anisotropy of the CMB in an LTB universe, was persistently polluted by the integration method. If ε = 0, the density contrast was null and our objective function should be 0 for any value of t, in particular for t = t_D, since we would be integrating in a FLRW universe, the asymptotic limit. It could be argued that the ODES of the LTB is inadequate for the FLRW, but it is clear that, at a great distance from the centre of the disturbance, the two universes almost coincide in any direction of observation, and the integration method was contaminating the signal we were studying. Our integration method was one of the most widely used among Runge–Kutta methods, the Cash–Karp version of the RKF45. However, since we did not need to resort to the coefficients which provide methods of order h^3, h^2 and h^1, we had actually used an RKF45. The convergence and theoretical stability of such a method is sufficiently analysed in the bibliography (see, for instance, [3,4]).
Our tools are now presented successively, to show which specific problems each one improves and to make a normalized analysis of the value of each tool; with that aim, specific precision requirements are given. These were: (i) tol_r = 10^-10 as the bound of the local truncation error and of the relative precision of the root calculation when it was performed, (ii) ε = 0, so that our objective function gave us the order of the polluting noise, and (iii) calculation in quadruple precision, to minimize the effect of rounding error (see Appendix B). With this, we intended to estimate the impact of the noise on our objective function. We found that the initial system could not reach t_D in some directions because it obtained M < 0 in some layers, which is absurd. We rewrote our ODES to avoid this problem. Thus, in a first correction, we managed to reach t_D for observation angles down to 0.1 degrees. It was only after applying all the mentioned tools and obtaining the final system that we verified that our objective function was less than tol_r^2/100^2. This is effectively zero given the tolerance, tol_r, required for each function and its derivatives. It must be noted that, if the local truncation error is of the order of tol_r, when performing about 1000 integration steps the theory tells us that the global truncation error can be of the order of 1000 × tol_r, that is, it can become of the order of 10^-7. On the other hand, we checked that the overall noise was of an order less than 10^-24 when working in quadruple precision, that is, with about 34 decimal digits. These findings show that our algorithms are robust, convergent and very stable.
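As an illustration of this kind of control test, the following minimal Python sketch (not the authors' code) integrates a toy problem whose exact solution is known with a tight local tolerance and measures the residual noise; this is the analogue of setting ε = 0 and checking that the objective function stays at the level of the accumulated local errors. SciPy's RK45 (Dormand-Prince) stands in here for the RKF45 used in the paper.

```python
# Illustrative sketch only: a toy ODE with known exact solution plays the role
# of the LTB system in its FLRW (epsilon = 0) limit.
import numpy as np
from scipy.integrate import solve_ivp

tol_r = 1e-10                       # local tolerance, as in the text

def rhs(t, y):
    # exact solution: y0(t) = exp(-t), y1(t) = cos(t)
    return [-y[0], -np.sin(t)]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 1.0], method="RK45",
                rtol=tol_r, atol=tol_r)

exact = np.array([np.exp(-10.0), np.cos(10.0)])
noise = np.abs(sol.y[:, -1] - exact)
print(sol.t.size, noise)            # global error ~ (number of steps) x tol_r
```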
Our algorithms have been used in the series of papers described above. They have been applied to both linear and non-linear structures. Some interesting results about stability and convergence can be summarized as follows: (i) The double integration test gives values in the range |valin_i − valin_{i−1}| ∈ [10^-8, 10^-7] for δ_B. As the initial value of δ_B is 0 and the final value is δ_B ≈ 10^-5, it is deduced that the final value should have at least 8-9 correct decimal digits, which is enough. As a matter of fact, it very probably has more correct digits, because the way to "force" the approximation to α = π/2 is not equivalent for both integrations. (ii) The other initial variables give similar relative error results, i.e., |valin_i − valin_{i−1}|/|valin_i| ∈ [10^-8, 10^-7]. (iii) All the published plots are smooth enough to allow extrapolation at Ψ_0 = 0.
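A minimal sketch of a double-integration consistency test of the kind mentioned in point (i), under the assumption that re-running the same IVP with a different tolerance plays the role of the second integration: the difference between the two final values estimates how many digits of the result can be trusted.

```python
# Hedged sketch (toy problem, not the LTB code): solve the same IVP twice and
# use the difference of the final values as an estimate of the trustworthy digits.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    return [-y[0], -np.sin(t)]      # exact solution: [exp(-t), cos(t)]

def final_value(tol):
    sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 1.0], method="RK45",
                    rtol=tol, atol=tol)
    return sol.y[:, -1]

val_a, val_b = final_value(1e-10), final_value(1e-12)
print(np.abs(val_a - val_b))        # differences ~1e-8 would mean ~8 reliable decimals
```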
There are recent examples of optimization of numerical methods similar to ours. For instance, the numerical resolution of the Dirac equations (see [19], where the DiracSolver is introduced, and references therein). Another technique which improves numerical algorithms following a procedure with some similarities to our work is that presented in the Ref. [38], in which the author extended methods of deriving variational integrators to provide solutions of systems with oscillatory behaviour. His techniques are applied to problems of discrete Lagrangian functions, discrete Euler-Lagrange equations and the definition of a weighted sum of "continuous intermediate Lagrangians", each of them evaluated at an intermediate time node. Additionally, the author introduced different numbers of intermediate points. He assessed the accuracy, convergence and computational time of the proposed technique by testing it and comparing it with well-known standards. This work has points of connection with our research, as the tests have similarities in the standards used (see also references therein in the Ref. [38]). There is also recent numerical research in Astrophysics which could be connected to ours. In the Ref. [57], the authors worked numerically with McVittie's metric and the Mathisson-Papapetrou (MP) equations in the post-Newtonian approach [58]. They analytically and numerically investigated the orbits of spinning particles around black holes in the post-Newtonian limit and in the presence of cosmic expansion. On the one hand, the study of LTB and FLRW metrics could be connected with McVittie's one. On the other hand, the spinning particle orbits could be compared with the numerical Relativistic Positioning Systems techniques we used in the Refs. [47,59].

9. Summary and Conclusions

We have presented here the resolution of an ODES applied to a cosmological problem. We chose an RKF45 instead of an Adams method. We have described all the adaptations made to solve our physical problem, and the objective function has been defined. The originality of our work is that the numerical tools used in our algorithms, although particular to our problem, can be extended to other problems in the fields of physics and engineering, as we remark in the text. The accuracy of the predictions depends on the accuracy of the model and the rigour of the tests; this is also discussed along the text. The numerical algorithms built to solve the physical problem have been described for the use of adaptive step-size numerical methods. In their application, some general properties have been used which can be extrapolated to other problems. Catastrophic cancellations have been suppressed, and this kind of suppression can be considered in similar problems. Additionally, the introduction of the objective function helps the optimization of the resolution of the ODES. Moreover, local error resizing is applied, and its usefulness is explained in our paper. This resizing is performed by using the asymptotic solution of the ODES, and the technique can also be applied to similar problems. The change of variables guarantees the stability of the solution, and the linearization of the problem is used as a powerful control test. What we state here is that this kind of tool should solve a wide range of problems that are unstable or do not have a proper solution with modern numerical tools. These results represent one of the main novelties of our paper and its scientific contribution.
Some conclusions can be extracted from our present study. The IVP techniques used in our research can be applied to solve ODES. Such techniques should be used when applying adaptive step-size numerical methods with variables spanning a wide range of values and when an almost residual objective function value is sought. In order to consider these Dynamical Systems as a disturbance of the asymptotic system, it is necessary to find an asymptotic solution on which to rely. In this case, the Dynamical System is remodeled in order to study the evolution of this disturbance. This is the meaning of our change to the variables y_1, y_2 and of the rescaling of the errors. The RKF45 algorithm used and its adaptive methods can be applied to other similar problems, such as N-body problems, like AP3M, or similar ones. The way catastrophic cancellations are eliminated should be used in problems where ODES have to be solved. Mathematical optimization was carried out by introducing the objective function in the ODES; we encourage the use of this procedure when an ODES admits a similar treatment. Local error resizing was also executed to solve the problem. This resizing implies the use of certain variables to adjust the integration step, while the other variables are used as parameters to determine the coefficients of the ODES. This resizing is made by using the asymptotic solution of this system. The approach has proven to be very useful and should be applicable to other problems similar to ours. The change of variables is necessary to guarantee the stability of the integration. The linearization of the ODES, when possible, should work as a powerful control test. All the skills named in these paragraphs are powerful tactics to be applied to physical problems where ODES are needed.
We will now say a few words about future extensions of our research and an outlook. The case we have treated here is the effective numerical resolution of LTB space-time solutions of EE. These tools can also be applied to other LTB problems, which makes them relevant. In addition, our procedures may be used with other metrics, such as McVittie's one [57]. As a matter of fact, we have used them in N-body algorithms when studying the large-scale structure of the universe in Cosmology, as well as the CMB anisotropies, both in the framework of the Hydra project (see [45,46]) and in similar ones (see [60]). The Hydra code is based on AP3M (Adaptive Particle-Particle, Particle-Mesh) and is adapted with our routines to the evolution of Dark Matter (with and without gas) in Cosmology. At present, some of our techniques are also being applied to Relativistic Positioning Systems in the Solar System (see [47,59]). We solved an ODES of 36 equations whose initial values were obtained from ephemerides and evolved with EE, using an asymptotic system given by Schwarzschild's metric. Our rescaling approach is crucial there, as the large differences between the semi-axes of the satellites and those of the celestial objects in the Solar System must be handled. In the near future, we want to present the progress of our techniques and the new algorithms built so far in another paper. All of these and similar techniques are powerful tools for other astrophysics, physics and engineering issues in general. We hope all the procedures presented here and in other publications will be useful for the resolution of ODES, both in our specific field and in science and technology in general.

Author Contributions

Conceptualization, J.V.A.i.C. and M.J.F.i.A.; formal analysis, J.V.A.i.C. and M.J.F.i.A.; funding acquisition, M.J.F.i.A.; investigation, J.V.A.i.C. and M.J.F.i.A.; methodology, J.V.A.i.C. and M.J.F.i.A.; project administration, M.J.F.i.A.; resources, M.J.F.i.A.; software, J.V.A.i.C. and M.J.F.i.A.; supervision, M.J.F.i.A.; validation, J.V.A.i.C. and M.J.F.i.A.; visualization, J.V.A.i.C.; writing—original draft, J.V.A.i.C. and M.J.F.i.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the Spanish Ministerio de Ciencia, Innovación y Universidades and the Fondo Europeo de Desarrollo Regional, Projects PID2019-109753GB-C21 and PID2019-109753GB-C22, the Generalitat Valenciana Project AICO/2020/125 and the Universitat de València Special Action Project UV-INVAE19-1197312. The APC was funded by the Spanish Ministerio de Ciencia, Innovación y Universidades and the Fondo Europeo de Desarrollo Regional, Projects PID2019-109753GB-C21 and PID2019-109753GB-C22.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We get a lump in our throat and become emotional every time we remember (and we can well assure you that this happens many times) our dear friend and companion Diego Pascual Sáez Milán, to whom we owe a great deal of our knowledge and to whom we will be forever deeply grateful for his guidance and mentorship. It was an honour and a privilege to know him and to work alongside him for much of our lives. That is why we want to take the opportunity that this space gives us to thank him once again. Diego left us four years ago, but his person and his memory will be in this universe as long as we are in it. With him we shared this and most of the work of our research, developed over so many years of patience, debates, discussions, hardships, laughter and joy; in other words, many moments of happiness to remember for as long as our breath and force are with us. There are no words to describe our gratitude, both for his human and for his scientific teachings. Our friend and colleague Diego, our esteem is with you; it accompanies us and pushes us through our work, scientific and of any other sort. Thank you very much, dear colleague! This work was supported by the Spanish Ministerio de Ciencia, Innovación y Universidades and the Fondo Europeo de Desarrollo Regional, Projects PID2019-109753GB-C21 and PID2019-109753GB-C22, the Generalitat Valenciana Project AICO/2020/125 and the Universitat de València Special Action Project UV-INVAE19-1197312.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Some Important Definitions

  • Comoving coordinates. These coordinates assign constant spatial coordinate values to observers who measure the universe as isotropic. As such observers move along with the Hubble flow, they are called comoving observers. The CMB is also perceived by them to be isotropic. It is useful, for our purposes, to assign comoving coordinates to every point of the universe because FLRW and LTB universes are in expansion. Moreover, due to the spherical symmetry of such universes, we choose coordinates P(M, θ, φ, t), with the origin in the symmetry centre C_s of the perfect fluid which is their source. Furthermore, we take the coordinates O(M_O, π, 0, t) (O being the observer), as the φ coordinate becomes useless in our study. Additionally, for symmetry reasons, it is sufficient to study the spatial half-plane that contains the O C_s axis (see Figure 1).
    The M coordinate is a label associated with every spatial spherical surface centred at C_s, named a shell, and is related to the mass-energy contained within the surface defined by the shell. The radial coordinate R(M, t) of such a shell depends on time and is related to M and ρ(M, t) through the EE.
    In FLRW, we choose an arbitrary point as the origin of coordinates and an arbitrary direction as the axis (at constant t, all the axes are equivalent), and the coordinates Q(r, θ, φ, t) are also defined, where r is a radial coordinate, which is also comoving. The shells in this universe are a mere auxiliary definition which depends on the origin of coordinates.
  • Decoupling time. This is defined as the time t_D from which the dominant effect on the free photons can be considered to be gravitational. This moment is dated by the redshift Z_D measured from the spectral lines. Decoupling takes place at Z_D ≈ 1100. From this moment, photons propagate freely in the gravitational field of the Universe and its structures. As we are interested in computing the gravitational effect produced by nonlinear structures, we can take initial values at some suitable Z_D < 1100. In our work, we take initial values at Z_D = 50 (~10^3 Mpc; 1 Mpc ≈ 3.26 × 10^6 light years).
  • Scale factor. This is the time function a = a(t) included in the FLRW metric. The distances among galaxies are proportional to the scale factor, and the following relation holds: a(t) = a(t_0)/(1 + Z), where t_0 is the present time and Z is the redshift at time t.
  • Critical density. This is the mass density value ρ_c such that, when it is exceeded, the effect of gravity in the universe is great enough to stop its expansion in finite time. In our work, we used it to define different values of ρ_B as background FLRW universes for our LTB model. This definition is made through the density parameter Ω = ρ_B/ρ_c. Thus, values with Ω < 1 originate open FLRW universes, while values with Ω > 1 originate closed FLRW universes. The Ω = 1 case corresponds to a flat FLRW universe.
  • Density profile. This is an initial condition used to determine the LTB metric. At the initial time t_D we take, in this work, a density profile given by:
    ρ(M, t_D) = ρ_B [ 1 + ε / ( 1 + (R/R_v)^2 ) ],
    where ε gives the maximum density contrast, (ρ − ρ_B)/ρ_B, and R_v indicates the radial coordinate at which this contrast is reduced to a half (we call it the core). As t varies, ρ(M, t) changes. The new profile at t may change its shape if the perturbation is nonlinear. Nevertheless, for linear perturbations it is acceptable to assume that the shape of the profile remains the same.
    Both R_v and ε, besides Ω, are initial values for the searched algorithm. Values of Ω in [0.2, 1) have been taken in the present work. With this initial profile, the spherical symmetry is guaranteed, as well as the asymptotic FLRW limit with ρ_B = const. (see the short sketch after this list).
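As a short sketch (hypothetical helper name and sample values only, not the PARAM code), the profile above can be written and checked as follows: the contrast equals ε at R = 0, ε/2 at R = R_v, and tends to 0, that is, to the FLRW background, for R >> R_v.

```python
# Hedged sketch of the initial density profile of Appendix A:
# rho(M, t_D) = rho_B * [1 + eps / (1 + (R/R_v)**2)]
def density(R, rho_B, eps, R_v):
    contrast = eps / (1.0 + (R / R_v) ** 2)   # (rho - rho_B) / rho_B
    return rho_B * (1.0 + contrast)

rho_B, eps, R_v = 1.0, -0.11, 22.5            # sample values in the spirit of Table 1a
print(density(0.0, rho_B, eps, R_v))          # rho_B * (1 + eps): maximum contrast
print(density(R_v, rho_B, eps, R_v))          # rho_B * (1 + eps/2): contrast halved
print(density(100 * R_v, rho_B, eps, R_v))    # ~ rho_B: asymptotic FLRW limit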

Appendix B. Numerical Aspects to Take into Account

Appendix B.1. The IEEE Standard

The standard representation of numbers in floating-point arithmetic (scientific computing) is of the type y = ±(.d_1 d_2 … d_t)_β × β^e, and it is defined by four integer values: the base β, the precision or word length t, and the range of exponents e_min ≤ e ≤ e_max. The mantissa m = ±(.d_1 d_2 … d_t) satisfies |m| < 1 and 0 ≤ d_i < β. In order to ensure a unique representation, d_1 ≠ 0, and in this case the representation is said to be normalized; if, in addition, the base is 2, then d_1 = 1, so it is not necessary to store it, just as the decimal point is not stored.
The word length t is chosen for each variable by assigning a type such as float, double, quadruple, and so forth, and this is where the concepts of precision and rounding error for each variable come in. The IEEE standard specifies three basic formats, among others. In order not to extend ourselves too much, we limit ourselves to binary machines (see Table A1).
Table A1. Storage of variables in the IEEE standard in binary machines. Rounding units and range of variables according to the declared type.

Type   | Length   | Mantissa | Exponent | Rounding Unit            | Range
Float  | 32 bits  | 24 bits  | 8 bits   | 2^-24 ≈ 5.96 × 10^-8     | 10^±38
Double | 64 bits  | 53 bits  | 11 bits  | 2^-53 ≈ 1.11 × 10^-16    | 10^±308
Quad   | 128 bits | 113 bits | 15 bits  | 2^-113 ≈ 9.63 × 10^-35   | 10^±4932
The IEEE standard also specifies that, if w is an elementary operation on the operands (x, y), the result z = (x w y) must be given as if the operation (x w y) had been performed exactly and then rounded. This standard allows us to guarantee that the relative rounding error of the elementary operations is of the order of the values given in the fifth column of the table above. Naturally, the loss of significant digits has nothing to do with rounding, but with the negligence of the programmer.
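A small Python/NumPy sketch (assuming NumPy is available) that recovers the orders of magnitude of Table A1; note that NumPy reports the machine epsilon (the spacing between 1 and the next representable number), which is twice the rounding unit quoted in the table, and that true IEEE quadruple precision is only available on some platforms.

```python
# Sketch: inspect the rounding behaviour of the IEEE binary formats of Table A1.
import numpy as np

for dtype in (np.float32, np.float64):
    fi = np.finfo(dtype)
    print(dtype.__name__, fi.eps, fi.max)
# float32: eps ~ 1.19e-07 (rounding unit 2**-24 ~ 5.96e-08), max ~ 3.4e+38
# float64: eps ~ 2.22e-16 (rounding unit 2**-53 ~ 1.11e-16), max ~ 1.8e+308
# np.longdouble may be 80-bit extended rather than IEEE quad, depending on the platform.
```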

Appendix B.2. Intrinsic Functions

We refer in this section to the functions that compilers incorporate as elementary functions: exp, log, sqrt, all the hyperbolic and trigonometric functions and their inverses, and so forth. Such functions are optimized not only in CPU time; they also choose the method best suited to the value of the input variable, to ensure that the error in the output is of the order of the rounding unit. Such functions should be used whenever possible in order to have an estimate of the rounding.

Appendix B.3. Numerical Differentiation

In numerical analysis, it is well known that, given g(x) ∈ C^2 and a step h, Taylor's expansion allows us to write
g(x + h) = g(x) + g′(x) h + (1/2) g″(ξ) h^2,   ξ ∈ ] min(x, x + h), max(x, x + h) [,
so it is very common to calculate the numerical derivative of the function g ( x ) using the Mean Value Theorem
g′(x) ≈ ( g(x + h) − g(x) ) / h,   E(g′) ≤ (1/2) max_ξ |g″(ξ)| h,
where E(g′) designates a bound of the truncation error for the derivative: an error because we have truncated the Taylor expansion at the linear approximation.
With the relative error of g′ we are already presented with the first problem, because we should calculate E(g′)/g′(x), and that can give problems in a neighbourhood of the extrema of g(x), where the derivative is close to 0. It is true that if the derivative vanishes it should be g(x + h) = g(x), but in finite-precision arithmetic finding strict equality is a lottery, and we cannot subject our algorithms to such chance. Additionally, the estimate of g′ can lose significant digits if h is too small, making our attempt to control the error even more disastrous.
We refer to this problem when we say that the truncation error of the derivative is of order O(h), while the rounding is of order O(1/h). We are actually speaking improperly, and we should say that the rounding is of the order of the rounding unit mentioned in the IEEE standard divided by h. If h is small, we are losing precision in the derivative, as if our double variable became a float in a single calculation when h ≈ 10^-8. Due to this circumstance, it is essential to implement ways to detect the extrema by means of geometric or trigonometric formulas that allow us to avoid these problems.
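The competition between the O(h) truncation term and the rounding term of order u/h (u being the rounding unit) can be seen directly in a few lines; this is a generic test function, not one of the quantities of our problem. The forward-difference error first decreases with h and then grows again once h drops below roughly the square root of the rounding unit.

```python
# Sketch of the O(h) truncation versus O(u/h) rounding trade-off in double precision.
import numpy as np

g, dg = np.sin, np.cos            # test function with known derivative
x = 1.0
for h in (1e-2, 1e-5, 1e-8, 1e-11, 1e-14):
    forward = (g(x + h) - g(x)) / h
    print(f"h = {h:.0e}   error = {abs(forward - dg(x)):.2e}")
# the error is smallest near h ~ 1e-8 ~ sqrt(2**-53) and grows for smaller h
```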
While it does not seem feasible to avoid the amplification, through the division by h, of any errors in the evaluation of g(x), we can protect ourselves against the possible catastrophic cancellation in Δg = g(x + h) − g(x), which occurs when both evaluations have the same sign, since otherwise there is no problem. The solution is as simple as using the Taylor expansion of the function log(g(x + h)). Suppose g(x + h), g(x) > 0; then we have:
log( g(x + h) ) = log( g(x) ) + ( g′(x)/g(x) ) h + (1/2) [ g″(ν)/g(ν) − ( g′(ν)/g(ν) )^2 ] h^2,
with the same conditions for ν that ξ satisfied before. If we now keep the linear approximation and solve for g′(x), we are left with:
g′(x) ≈ g(x) [ log( g(x + h) ) − log( g(x) ) ] / h = ( g(x)/h ) log( g(x + h)/g(x) ).
Now the quotient of the evaluations of g is an elementary operation and log is an intrinsic function, so all the operations are within the scope of the rounding unit corresponding to the type of the declared variable. The h can be very small and will only affect the value of g′, but since both values g(x + h) and g(x) will be very close, the quotient will be close to 1 and its logarithm tends to zero. This does not solve all the problems, but in our case it helps to a high degree to calculate the three partial derivatives with respect to M in the CS algorithm.
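A minimal sketch of the reformulated derivative (hypothetical helper name, and a simple positive test function in place of the functions handled by the CS algorithm):

```python
# Sketch of the logarithmic reformulation: g'(x) ~ (g(x)/h) * log(g(x+h)/g(x)),
# valid when g(x+h) and g(x) are positive; the subtraction g(x+h) - g(x) is
# replaced by a quotient close to 1 passed to the intrinsic log.
import numpy as np

def log_derivative(g, x, h):
    return g(x) / h * np.log(g(x + h) / g(x))

g = lambda x: np.exp(0.5 * x)      # positive test function with g'(x) = 0.5 * g(x)
x, h = 1.0, 1e-6
print(log_derivative(g, x, h), 0.5 * g(x))   # the two values should agree closely
```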

References

1. Perrin, C.L. Numerical Recipes in Fortran 90: The Art of Scientific Computing, Volume 2 (3 CD-ROMs and Manual); Press, W.H., Teukolsky, S.A., Vetterling, W.T., Flannery, B.P., Eds.; Cambridge University Press: New York, NY, USA, 1997.
2. Butcher, J.C. A history of Runge-Kutta methods. Appl. Numer. Math. 1996, 20, 247–260.
3. Fehlberg, E. Low-Order Classical Runge-Kutta Formulas with Stepsize Control and Their Application to Some Heat Transfer Problems; National Aeronautics and Space Administration: Washington, DC, USA, 1969; Volume 315.
4. Cash, J.R.; Karp, A.H. A variable order Runge-Kutta method for initial value problems with rapidly varying right-hand sides. ACM Trans. Math. Softw. (TOMS) 1990, 16, 201–222.
5. Lynch, S. Dynamical Systems with Applications Using Maple™; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2009.
6. Lynch, S. Dynamical Systems with Applications Using MATLAB®; Springer: Berlin/Heidelberg, Germany, 2014.
7. Lynch, S. Dynamical Systems with Applications Using Mathematica®; Springer: Berlin/Heidelberg, Germany, 2017.
8. Lynch, S. Dynamical Systems with Applications Using Python; Springer: Berlin/Heidelberg, Germany, 2018.
9. Zhu, Q. Stabilization of stochastic nonlinear delay systems with exogenous disturbances and the event-triggered feedback control. IEEE Trans. Autom. Control 2018, 64, 3764–3771.
10. Wang, H.; Zhu, Q. Global stabilization of a class of stochastic nonlinear time-delay systems with SISS inverse dynamics. IEEE Trans. Autom. Control 2020, 65, 4448–4455.
11. Arnau, J.V.; Fullana, M.J.; Monreal, L.; Sáez, D. On the microwave background anisotropies produced by nonlinear voids. Astrophys. J. 1993, 402, 359–368.
12. Sáez, D.; Arnau, J.V.; Fullana, M.J. The imprints of the Great Attractor and the Virgo cluster on the microwave background. Mon. Not. R. Astron. Soc. 1993, 263, 681–686.
13. Arnau, J.V.; Fullana, M.J.; Sáez, D. Great Attractor-like structures and large-scale anisotropy. Mon. Not. R. Astron. Soc. 1994, 268, L17–L21.
14. Fullana, M.J.; Sáez, D.; Arnau, J.V. On the microwave background anisotropy produced by Great Attractor-like structures. Astrophys. J. Suppl. Ser. 1994, 94, 1–16.
15. Sáez, D.; Arnau, J.V.; Fullana, M.J. Effects of Great Attractor-like objects on the cosmic microwave background. Astrophys. Lett. Commun. 1995, 32, 75.
16. Fullana, M.J.; Arnau, J.V.; Sáez, D. On the microwave background anisotropy produced by big voids in open universes. Mon. Not. R. Astron. Soc. 1996, 280, 1181–1189.
17. Fullana i Alfonso, M.J. Anisotropies de la Radiació de Fons de Microones Produïdes per Inhomogeneïtats Cosmològiques No Lineals; Servei de Publicacions de la Universitat de València: València, Spain, 1996.
18. Fullana, M.J.; Arnau, J.V.; Sáez, D. Looking for the imprints of nonlinear structures on the cosmic microwave background. Vistas Astron. 1997, 41, 467–492.
19. Tsoulos, I.G.; Kosmas, O.T.; Stavrou, V.N. DiracSolver: A tool for solving the Dirac equation. Comput. Phys. Commun. 2019, 236, 237–243.
20. Kosmas, T.S.; Lagaris, I.E. On the muon–nucleus integrals entering the neutrinoless μ → e conversion rates. J. Phys. G Nucl. Part. Phys. 2002, 28, 2907.
21. Kitano, R.; Koike, M.; Okada, Y. Detailed calculation of lepton flavor violating muon-electron conversion rate for various nuclei. Phys. Rev. D 2002, 66, 096002.
22. Stoica, S.; Mirea, M. New calculations for phase space factors involved in double-β decay. Phys. Rev. C 2013, 88, 037303.
23. Stoica, S.; Mirea, M.; Niţescu, O.; Nabi, J.; Ishfaq, M. New phase space calculations for β-decay half-lives. Adv. High Energy Phys. 2016.
24. Carleo, G.; Troyer, M. Solving the quantum many-body problem with artificial neural networks. Science 2017, 355, 602–606.
25. Snyder, J.C.; Rupp, M.; Hansen, K.; Müller, K.; Burke, K. Finding density functionals with machine learning. Phys. Rev. Lett. 2012, 108, 253002.
26. Arsenault, L.; Lopez-Bezanilla, A.; Lilienfeld, O.A.v.; Millis, A.J. Machine learning for many-body physics: The case of the Anderson impurity model. Phys. Rev. B 2014, 90, 155136.
27. Torlai, G.; Mazzola, G.; Carrasquilla, J.; Troyer, M.; Melko, R.; Carleo, G. Neural-network quantum state tomography. Nat. Phys. 2018, 14, 447–450.
28. Engquist, B.; Fokas, A.; Hairer, E.; Iserles, A. Highly Oscillatory Problems; Cambridge University Press: Cambridge, UK, 2009; p. 366.
29. Wendlandt, J.M.; Marsden, J.E. Mechanical integrators derived from a discrete variational principle. Phys. D Nonlinear Phenom. 1997, 106, 223–246.
30. Kane, C.; Marsden, J.E.; Ortiz, M. Symplectic-energy-momentum preserving variational integrators. J. Math. Phys. 1999, 40, 335–337.
31. Marsden, J.E.; West, M. Discrete mechanics and variational integrators. Acta Numer. 2001, 10, 357–514.
32. Hairer, E.; Lubich, C.; Wanner, G. Geometric numerical integration illustrated by the Störmer–Verlet method. Acta Numer. 2003, 12, 399–450.
33. Leimkuhler, B.; Reich, S. Simulating Hamiltonian Dynamics; Cambridge University Press: Cambridge, UK, 2004; Number 14.
34. Ober-Blöbaum, S. Galerkin variational integrators and modified symplectic Runge–Kutta methods. IMA J. Numer. Anal. 2017, 37, 375–406.
35. Ober-Blöbaum, S.; Saake, N. Construction and analysis of higher order Galerkin variational integrators. Adv. Comput. Math. 2015, 41, 955–986.
36. Campos, C.M.; Junge, O.; Ober-Blöbaum, S. Higher order variational time discretization of optimal control problems. arXiv 2012, arXiv:1204.6171.
37. Kosmas, O.; Leyendecker, S. Variational integrators for orbital problems using frequency estimation. Adv. Comput. Math. 2019, 45, 1–21.
38. Kosmas, O. Energy minimization scheme for split potential systems using exponential variational integrators. Appl. Mech. 2021, 2, 431–441.
39. Kosmas, O.; Leyendecker, S. Family of higher order exponential variational integrators for split potential systems. J. Phys. Conf. Ser. 2015, 574, 012002.
40. Romero, P.D.; Candela, V.F. Blind deconvolution models regularized by fractional powers of the Laplacian. J. Math. Imaging Vis. 2008, 32, 181–191.
41. Abbas, M.I.; Ragusa, M.A. On the hybrid fractional differential equations with fractional proportional derivatives of a function with respect to a certain function. Symmetry 2021, 13, 264.
42. Treeby, B.E.; Cox, B.T. Modeling power law absorption and dispersion in viscoelastic solids using a split-field and the fractional Laplacian. J. Acoust. Soc. Am. 2014, 136, 1499–1510.
43. Candela, V.; Falcó, A.; Romero, P.D. A general framework for a class of non-linear approximations with applications to image restoration. J. Comput. Appl. Math. 2018, 330, 982–994.
44. Lemaître, G. Expansion of the universe, The expanding universe. Mon. Not. R. Astron. Soc. 1931, 91, 490–501.
45. Fullana, M.J.; Arnau, J.V.; Thacker, R.J.; Couchman, H.M.P.; Sáez, D. Estimating small angular scale cosmic microwave background anisotropy with high-resolution N-body simulations: Weak lensing. Astrophys. J. 2010, 712, 367.
46. Fullana, M.J.; Arnau, J.V.; Thacker, R.J.; Couchman, H.M.P.; Sáez, D. On the estimation and detection of the Rees–Sciama effect. Mon. Not. R. Astron. Soc. 2016, 464, 3784–3795.
47. Colmenero, N.P.; Córdoba, J.V.A.; Fullana i Alfonso, M.J. Relativistic positioning: Including the influence of the gravitational action of the Sun and the Moon and the Earth's oblateness on Galileo satellites. Astrophys. Space Sci. 2021, 366, 1–19.
48. Amat, S.; Aràndiga, F.; Arnau, J.V.; Donat Beneito, R.M.; Mulet Mestre, P.; Peris Sancho, R. Aproximació Numèrica; Servei de Publicacions de la Universitat de València: València, Spain, 2002.
49. Bondi, H. Spherically symmetrical models in general relativity. Mon. Not. R. Astron. Soc. 1947, 107, 410–425.
50. Bardeen, J.M. Gauge-invariant cosmological perturbations. Phys. Rev. D 1980, 22, 1882–1905.
51. Saez, D.; Arnau, J.V. On the Tolman Bondi solution of Einstein's Equations. Numerical Applications. In Recent Developments in Gravitation; World Scientific Publishing Co Pte Ltd.: Singapore, 1990; p. 415.
52. Fullana, M.J.; Saez, D.P.; Monreal, L.; Arnau, J.V. The Great Attractor and the anisotropy of the microwave background. In Recent Developments in Gravitation; World Scientific Publishing Co Pte Ltd.: Singapore, 1992; p. 131.
53. Fullana, M.J.; Arnau, J.V. The Boötes void and the Cosmic Microwave Background. In Inhomogeneous Cosmological Models; World Scientific Publishing Co Pte Ltd.: Singapore, 1995; p. 125.
54. Fullana, M.J.; Saez, D.; Arnau, J.V. The imprints of great voids on the cosmic microwave background. In Relativistic Astrophysics and Cosmology; World Scientific Publishing Co Pte Ltd.: Singapore, 1997; p. 121.
55. Saez, D.; Arnau, J.V. Accurate simulations of the microwave sky at small angular scales. In Microwave Background Anisotropies, Proceedings of the XVIth Moriond Astrophysics Meeting, Les Arcs, Savoie, France, 16–23 March 1996; Editions Frontières: Singapore, 1997; p. 245.
56. Sáez, D.; Arnau, J.V. Learning from observations of the microwave background at small angular scales. Astrophys. J. 1997, 476, 1.
57. Antoniou, I.; Papadopoulos, D.; Perivolaropoulos, L. Spinning particle orbits around a black hole in an expanding background. Class. Quantum Gravity 2019, 36, 085002.
58. Mathisson, M. Neue Mechanik materieller Systeme. Acta Phys. Polon. 1937, 6, 163–200.
59. Fullana Alfonso, M.J.; Saez Milán, D.P.; Arnau, J.V.; Colmenero, N.P. Some improvements on relativistic positioning systems. Appl. Math. Nonlinear Sci. 2018, 3, 161–166.
60. Puchades, N.; Fullana, M.J.; Arnau, J.V.; Saez, D. On the Rees–Sciama effect: Maps and statistics. Mon. Not. R. Astron. Soc. 2006, 370, 1849–1858.
Figure 1. LTB plane with t = constant and φ = 0. The slices M_O (observer) and another arbitrary slice are plotted, as well as the symmetry centre C_s. The discontinuous line γ represents the null geodesic of a photon arriving at the observer, O, at time t_0.
Figure 2. Plot of the right-hand side of Equations (13) and (14) for z ∈ [−2, 2]. The plot for negative values of z corresponds to Equation (14) and the plot for positive values of z corresponds to Equation (13).
Figure 3. Present linear density contrast as a function of R, for the perturbations O_1–O_7, with R_v(t_0) = 180 h^-1 Mpc and ε(t_0) = 0.0108. The initial values at t_D that generate each overdensity appear in Table 1b.
Figure 4. Present nonlinear negative density contrast as a function of R, for the perturbations U_1–U_10, with R_v(t_0) = 22.5 h^-1 Mpc and ε(t_0) = 0.74. The initial values at t_D that generate each underdensity appear in Table 1a.
Figure 5. Plane of the LTB with t = constant and φ = 0. The angle α(t) and the observation direction Ψ_0 are plotted.
Figure 6. Plot of δR × 10^6 as a function of the observational angle, Ψ_0, in degrees. (a) Inhomogeneity O_4. The continuous plot is the result of the linear computation. The discontinuous plot is the result of the rkf45ng computation. (b) Anisotropies of overdensities in universes with different Ω values.
Table 1. Initial parameters, for universes with different Ω, with the same present energy density contrast, δ.

(a) Nonlinear Underdensities
Case | Ω     | ε(t_D)
U_1  | 0.2   | −1.10 × 10^-1
U_2  | 0.3   | −8.42 × 10^-2
U_3  | 0.4   | −7.00 × 10^-2
U_4  | 0.5   | −6.09 × 10^-2
U_5  | 0.6   | −5.45 × 10^-2
U_6  | 0.7   | −4.97 × 10^-2
U_7  | 0.8   | −4.60 × 10^-2
U_8  | 0.9   | −4.30 × 10^-2
U_9  | 0.95  | −4.18 × 10^-2
U_10 | 0.999 | −4.06 × 10^-2

(b) Overdensities
Case | Ω   | ε(t_D)
O_1  | 0.2 | 5.985 × 10^-4
O_2  | 0.3 | 4.496 × 10^-4
O_3  | 0.4 | 3.700 × 10^-4
O_4  | 0.5 | 3.200 × 10^-4
O_5  | 0.6 | 2.870 × 10^-4
O_6  | 0.7 | 2.550 × 10^-4
O_7  | 0.8 | 2.400 × 10^-4
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
