Article

A Unified Approach for the Calculation of Different Sample-Based Measures with the Single Sampling Method

by Maciej Leszczynski, Przemyslaw Perlikowski and Piotr Brzeski *
Division of Dynamics, Lodz University of Technology, Stefanowskiego 1/15, 90-924 Lodz, Poland
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(7), 987; https://doi.org/10.3390/math12070987
Submission received: 21 February 2024 / Revised: 19 March 2024 / Accepted: 23 March 2024 / Published: 26 March 2024
(This article belongs to the Special Issue Advances in Computational Dynamics and Mechanical Engineering)

Abstract:
This paper explores two sample-based methods for analysing multistable systems: basin stability and basin entropy. Both methods rely on many numerical integration trials conducted with diverse initial conditions. The collected data is categorised and used to compute metrics that characterise solution stability, phase space structure, and system dynamics predictability. Basin stability assesses the overall likelihood of reaching specific solutions, while the basin entropy measure aims to capture the structure of attraction basins and the complexity of their boundaries. Although these two metrics complement each other effectively, their original procedures for computation differ significantly. This paper introduces a universal approach and algorithm for calculating basin stability and entropy measures. The suitability of these procedures is demonstrated through the analysis of two non-linear systems.

1. Introduction

Mathematical models given by ordinary differential equations (ODEs) [1,2] are used to describe the time evolution of dynamical systems in many branches of science. The ODEs model different dynamical phenomena that originate from mechanical engineering [3,4,5], neuroscience [6,7], physics [8,9,10,11], medicine [12,13], electromechanics [14,15], production technologies [16,17], ecology [18,19], biology [20,21,22,23], climate dynamics [24,25], and many more. Still, every case may require a different approach for dynamical analysis.
One of the most challenging dynamical phenomena that is now widely investigated is the coexistence of more than one stable attractor in the phase space, which is called multistability [26,27,28,29,30,31,32,33]. Typically, in multistable systems we observe a few attractors; however, in the extreme case there can be tens or even infinitely many coexisting solutions, which is called extreme multistability [34,35,36,37,38,39]. Usually, analytical analysis of multistable systems requires simplifying the equations or imposing strong assumptions on the solution. Hence, nowadays, most non-linear multistable ODE systems are tackled with numerical methods. Several numerical approaches allow the investigation of multistability. Bifurcation analysis with path following [40,41,42] is the most sophisticated: it enables following the branches of stable and unstable equilibria or periodic solutions, detecting bifurcations, and tracking them in a one- or two-parameter space. Nevertheless, this method is complex and requires in-depth knowledge of the system's dynamics. Alternatively, bifurcation diagrams can be obtained with brute-force integration forwards and backwards. In such an implementation, they help detect coexisting solutions and determine bifurcation sequences and ranges of complex behaviour. However, such diagrams miss solutions when multiple attractors coexist or when attractors are hidden. A way to tackle that is to calculate the basins of attraction: if the range of initial conditions is appropriately chosen, we can be certain that all possible solutions are detected. The problem appears for higher-dimensional systems, where we can present only two-dimensional cross-sections of the multidimensional phase space. It is possible to overcome this with a method proposed by Menck et al. [43], called basin stability, which can be used to characterise the volume of basins of attraction in the multidimensional phase space. To estimate the basin stability measure, one performs many Bernoulli trials, each time drawing initial conditions at random and checking which attractor is reached. The method is relatively new but has already been applied successfully in numerous scenarios [44,45,46,47,48,49,50]. Its main advantages are that it can be applied to all types of systems and that the procedure is straightforward; thus, a person who is not an expert in non-linear dynamics can use it to estimate the risk of unwanted behaviour. In 2017, an experimental validation of the basin stability approach was performed [51], and the results proved that its accuracy is comparable with classical methods.
While basin stability quantifies the probability of reaching a particular attractor, and thus shows how robust a given attractor is against random perturbations, it does not take the structure of the phase space into account. A method to describe the structure of basins of attraction and the complexity of their boundaries was proposed by Daza et al. [52] in 2016. The new measure, called basin entropy, aims to reflect the structure of the phase space and provides supplementary information about the predictability of the dynamical system. The originally proposed algorithm builds a grid on the phase space, subdividing it into boxes, and estimates basin stability within each box. The obtained values are then used to compute the Gibbs entropy of each box, and summing the entropies leads to a quantitative measure of the uncertainty associated with the state space.
The motivation of this paper is to combine these two metrics to analyse the response of a dynamical system. This is not trivial because the metrics are based on different sampling approaches; hence, it is not obvious how to obtain both of them simultaneously. An important part of the analysis is to study how the number of random trials, the size of the selected box, and the number of trials in each box influence the accuracy of the calculation. Moreover, the idea of scaling the ranges of initial conditions is proposed.
This paper is organised as follows: In Section 2.1, we introduce the basin stability metric and perform an exemplary analysis of well-understood Van der Pol–Duffing equations. In Section 2.2, the basin entropy is introduced, illustrated with the same model as in the previous section, and two modifications of the method are discussed. In Section 2.3, we present the comparison of basin entropy calculated using several different approaches, one of them utilising the data from the basin stability procedure. In Section 3, we combine all the presented results to analyse a more sophisticated model, namely, a double pendulum system. Finally, in Section 4, we conclude our work.

2. Methods

2.1. Basin Stability

The basin stability, introduced in [43], is a method that allows a numerical description of the stability of attractors of dynamical systems. For low-dimensional systems, the basins of attraction give similar information about the phase space structure as basin stability. However, for multidimensional systems, the basins of attraction fall short because they show only two-dimensional cross-sections of the phase space; hence, it is possible to analyse only some of the possible planes. The idea behind the basin stability method is elementary, yet it is a powerful tool in the analysis of complex systems. It allows the comparison of the stability of different attractors in the sense of how large the attracted region is for each of them. Here, we present a quick overview of the method; for more details, see [53].
We define basin stability for an n-dimensional (n ∈ ℕ) dynamical system with N_A attractors in an analysed region of the state space Ω ⊂ ℝ^n. Integrating the system's equations of motion multiple times with random initial conditions from Ω allows us to estimate the probability of reaching each attractor. A single integration will be called a trial. The proportion of initial conditions that reach a certain attractor, relative to the overall number of trials, estimates how stable that attractor is and is called the basin stability B_s(A) of attractor A. The selection of the ranges from which initial conditions are drawn is crucial because one or more attractors could be omitted due to a wrongly chosen set. Hence, one has to carefully determine the bounds of the drawn initial conditions to get a general overview of the asymptotic response of the system. On the other hand, one could also use a narrower set of initial conditions to focus only on practically accessible initial states or to account for constraints imposed on the system.
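Conceptually, the estimation is a plain Monte Carlo procedure. The sketch below (not the authors' code) assumes a user-supplied function classify_attractor that integrates the system from a given initial condition and returns an attractor label; everything else follows the Bernoulli-trial description above.

```python
import numpy as np

def estimate_basin_stability(classify_attractor, bounds, n_trials, seed=None):
    """Monte Carlo estimate of basin stability.

    classify_attractor : callable mapping an initial condition (1D array)
        to a hashable attractor label (assumed to be supplied by the user).
    bounds   : list of (low, high) pairs defining the sampled region Omega.
    n_trials : number of Bernoulli trials (random initial conditions).
    Returns a dict {label: fraction of trials that reached that attractor}.
    """
    rng = np.random.default_rng(seed)
    lows = np.array([b[0] for b in bounds])
    highs = np.array([b[1] for b in bounds])
    counts = {}
    for _ in range(n_trials):
        x0 = rng.uniform(lows, highs)       # draw an initial condition from Omega
        label = classify_attractor(x0)      # integrate the trial and classify it
        counts[label] = counts.get(label, 0) + 1
    return {label: c / n_trials for label, c in counts.items()}
```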
We will now present the usage of basin stability on an archetypal model of an externally excited oscillator (the Van der Pol–Duffing system) given by the equation
$\ddot{x} - \alpha (1 - x^2)\dot{x} + x^3 = F \sin(\omega t),$ (1)
where α, F, and ω are positive constants. For the purpose of illustration, we fixed the parameters to the following values: α = 0.1, ω = 0.981, and F = 1. The initial conditions x and ẋ were drawn uniformly from the set [−2, 2] × [−1, 3], integrated forward, and classified with regard to the attractor they tend to. In this way, 200,000 trials were performed. The calculated basin stability for each attractor is given in Table 1.
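For completeness, a possible way to classify a single trial by its periodicity is sketched below: Equation (1) is integrated with SciPy past a transient, and the stroboscopic (period-T) samples of the steady state are compared. The transient length, tolerances, and matching threshold are illustrative choices, not values taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

ALPHA, OMEGA, F = 0.1, 0.981, 1.0
T = 2 * np.pi / OMEGA                      # excitation period

def rhs(t, y):
    x, v = y
    # Equation (1): x'' - alpha (1 - x^2) x' + x^3 = F sin(omega t)
    return [v, ALPHA * (1 - x**2) * v - x**3 + F * np.sin(OMEGA * t)]

def classify_period(x0, n_transient=200, n_max=20, tol=1e-3):
    """Number of excitation periods after which the stroboscopic map
    returns to its starting point (None if larger than n_max)."""
    sol = solve_ivp(rhs, (0.0, n_transient * T), x0, rtol=1e-9, atol=1e-9)
    points = [sol.y[:, -1]]                # steady-state stroboscopic point
    for k in range(1, n_max + 1):
        t0 = (n_transient + k - 1) * T
        sol = solve_ivp(rhs, (t0, t0 + T), points[-1], rtol=1e-9, atol=1e-9)
        points.append(sol.y[:, -1])
        if np.linalg.norm(points[-1] - points[0]) < tol:
            return k                       # orbit closes after k excitation periods
    return None
```

A classifier of this kind can be plugged into the generic estimator sketched in Section 2.1 to produce statistics of the kind reported in Table 1.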
We can see that all the points chosen from the state space are classified as reaching one of three periodic attractors. We classify them based on their periodicity; hence, periods 1, 7, and 11 mean that the system's orbit closes after 1, 7, and 11 excitation periods, respectively. The most stable attractor in this analysis is the one with period 11, and the least stable is the period-7 attractor. However, none of the detected attractors are rare in the sense of the definition presented in [54], where B_s(A) ≪ 1.
As the system given by Equation (1) is a second-order ODE with one degree of freedom, we can also use the data obtained during the calculation of basin stability to plot the basins of attraction.
Figure 1 was created by plotting, in the state space, the initial conditions chosen in the basin stability procedure. The colours specify the basin to which a given initial condition is assigned, so we obtain the basins of attraction of the system. In the centre of the analysed rectangle, (x, ẋ) = (0, 1), we can see the dominance of the attractor with period 1; however, the further an initial condition is from the centre, the more probable it is to change its asymptotic behaviour. Moreover, we can see that the period-7 attractor (green) is significantly less present in the state space than the other two. The basin stability results lead to the same conclusion. This metric, however, does not indicate how riddled [55,56] the basins are in the state space, and from Figure 1 we can infer that initial conditions far from the centre of the rectangle are not robust against perturbations. Furthermore, there might be cases where the basin stability of two attractors is similar, but one of the basins has a very regular shape while the other is riddled. To analyse this feature, we will present the basin entropy metric and calculate it for the model given by Equation (1).

2.2. Basin Entropy

In principle, dynamical systems can have simple, compact basins of attraction or riddled ones. Boundaries between the basins of attraction can be smooth, irregular, or even fractal. The basin entropy measure, first introduced in [52], is a metric that can quantify and mathematically distinguish such cases. Its formulation originates from the Gibbs entropy definition. This metric helps quantify the structure of basins of attraction and provides information about the system's unpredictability.
We can define basin entropy for an n-dimensional (n ∈ ℕ) dynamical system with N_A attractors in an analysed region of the state space Ω ⊂ ℝ^n. We then cover Ω with k ∈ ℕ disjoint n-dimensional hypercubes of linear size ε in each dimension. Each of these boxes, in principle, contains infinitely many trajectories, and each such trajectory leads to one of the N_A attractors mentioned before. For such a formulation, the Gibbs entropy of every box, denoted i ∈ {1, …, k}, is given by
$S_i = \sum_{j=1}^{N_A} p_{i,j} \log \frac{1}{p_{i,j}},$ (2)
where p_{i,j} is the basin stability of the jth attractor, j ∈ {1, …, N_A}, calculated in the ith box. It is worth noting that, if a box does not cover a part of any basin boundary, its entropy equals zero. Thus, it is possible to limit the calculations to the boxes that cover the boundaries of the basins of attraction. Then, to obtain the basin entropy of the whole Ω, we sum the entropy of each box. This is true because we chose a non-overlapping cover. Thus, the entropy is given as
$S = \sum_{i=1}^{k} S_i = \sum_{i=1}^{k} \sum_{j=1}^{N_A} p_{i,j} \log \frac{1}{p_{i,j}}.$ (3)
As the number of boxes k depends on the chosen value of ε, the above sum also depends on the chosen box size. With smaller ε (and, thus, smaller boxes), the value of the entropy S increases. Daza et al. [52] proposed normalising the entropy by the total number of boxes k to overcome this effect. Thus, the ratio
$S_b = \frac{S}{k} = \frac{1}{k} \sum_{i=1}^{k} \sum_{j=1}^{N_A} p_{i,j} \log \frac{1}{p_{i,j}},$ (4)
is called the basin entropy. This indicator determines the degree of uncertainty of the subset Ω of the state space, with a minimal value of 0 (a single attractor in the state space) and a maximal value of ln(N_A) for N_A equiprobable attractors.
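Given the per-box probabilities p_{i,j}, Equations (2)–(4) translate directly into code. The sketch below is only an illustration and assumes the probabilities are stored as a two-dimensional array with one row per box and one column per attractor.

```python
import numpy as np

def basin_entropy(p):
    """Basin entropy S_b from a (k x N_A) array of per-box probabilities.

    p[i, j] is the fraction of trials in box i that reach attractor j.
    Boxes containing a single attractor contribute zero entropy.
    """
    p = np.asarray(p, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p > 0, p * np.log(1.0 / p), 0.0)  # p log(1/p), 0 when p = 0
    box_entropies = terms.sum(axis=1)      # S_i, Equation (2)
    S = box_entropies.sum()                # S,   Equation (3)
    return S / p.shape[0]                  # S_b = S / k, Equation (4)
```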
The basin entropy can be estimated using numerical simulations. In the original formulation [52], the authors suggested using 25 trials per box, as this tends to keep the relative error below 5% without prolonging the computation. To illustrate the method, we use Equation (1) with the same parameter values as for basin stability (see Table 1) and calculate the basin entropy for different values of ε with 25 trials per box.
Figure 2 presents values of basin entropy calculated for different box sizes ε in two projections. Panel (a) uses a linear scale: increasing the box size increases the basin entropy, because each box covers a larger area of the phase space. The right plot (panel (b)) shows that in the log–log scale the relation between ε and basin entropy is linear, as proved in detail in [52] for the general case.
This scenario has one important feature which makes the calculation of basin entropy a straightforward procedure. Namely, the state space is a Cartesian product of two (or more, in the general case) intervals of the same length. Thus, when we choose ε such that one dimension of the state space can be covered with disjoint intervals of length ε, the same holds for the other dimensions. In the general case, it can be hard to find an ε satisfying this condition. To overcome that, if the original state space is rectangular, one can normalise it to the n-dimensional unit cube. In the analysed case, we normalised the set of initial conditions x_0 ∈ [−2, 2] × [−1, 3] to x_0s ∈ [0, 1] × [0, 1] and calculated the values of basin entropy for different values of ε and of the scaled ε_s. In this case, we expected no difference in the results because the original phase space has the same span (four units) in both directions.
Figure 3 presents the comparison of basin entropy values for these two approaches (scaled and unscaled phase variables). The values of ε_s were scaled back to match their corresponding values in the non-scaled case. This comparison shows that the normalisation of the phase space does not affect the calculated values of basin entropy. Thus, all the following calculations of basin entropy will be done on scaled variables. This is helpful in cases where the ranges of initial values for one state variable differ significantly from the others.
Both basin entropy and basin stability yield a single value (or a short list of values) describing the whole state space, but they do not characterise local stability. However, we can use these concepts to perform such an analysis. For both cases, we divided the state space into subsets (in our case, two-dimensional intervals, 20 for each dimension, resulting in 400 boxes) and calculated the basin stability of every attractor and the entropy in every box.
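A possible way to produce such per-box values from randomly sampled, already classified initial conditions is sketched below; the function name, the 20 × 20 grid, and the two-dimensional setting are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def per_box_metrics(points, labels, bounds, n_bins=20):
    """Local basin stability and Gibbs entropy on a regular 2D grid.

    points : (N, 2) array of initial conditions; labels : length-N array of
    attractor labels; bounds : [(x_min, x_max), (y_min, y_max)].
    Returns (basin_stability, entropy), where basin_stability[a] and entropy
    are (n_bins, n_bins) maps.
    """
    points = np.asarray(points, dtype=float)
    labels = np.asarray(labels)
    # box index of every point along each dimension
    idx = np.empty(points.shape, dtype=int)
    for d, (lo, hi) in enumerate(bounds):
        idx[:, d] = np.clip(((points[:, d] - lo) / (hi - lo) * n_bins).astype(int),
                            0, n_bins - 1)
    attractors = np.unique(labels)
    counts = np.zeros((len(attractors), n_bins, n_bins))
    for a, att in enumerate(attractors):
        mask = labels == att
        np.add.at(counts[a], (idx[mask, 0], idx[mask, 1]), 1)
    totals = counts.sum(axis=0)
    with np.errstate(divide="ignore", invalid="ignore"):
        p = np.where(totals > 0, counts / totals, 0.0)       # local basin stability
        entropy = np.where(p > 0, p * np.log(1.0 / p), 0.0).sum(axis=0)
    return {att: p[a] for a, att in enumerate(attractors)}, entropy
```

For the Van der Pol–Duffing example, bounds would be [(-2, 2), (-1, 3)]; the returned maps correspond to the colourmaps discussed next.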
Figure 4 presents the colourmaps of basin stability for different subsets of the phase space for every detected attractor. By choosing a box with a high value of basin stability (close to a red colour), we have a high probability that an initial condition from this box will asymptotically lead to the desired attractor. Conversely, when the value is low (closer to a blue colour), the box is thought to be more unstable, meaning there is a higher probability of reaching an undesired attractor.
Figure 5 presents colourmaps of the entropy calculated for the previously defined subsets of the state space. In the centre of the plot, (x, ẋ) = (0, 1), we can see a region with low entropy, suggesting that an initial condition from that region is very likely to reach the desired state. However, moving away from the centre, the value steadily increases, indicating a higher uncertainty in choosing the right initial conditions.

2.3. Estimating Basin Entropy with Sample-Based Methods

Both presented metrics, basin stability and basin entropy, take a set of classified initial conditions as input, where the classification specifies to which attractor particular initial conditions tend. However, there is one crucial difference between the required input data. The basin entropy requires the initial conditions to be distributed over the state space so that every box created during the calculation contains the same number of trials. The basin stability, on the other hand, initialises all trials at random. Thus, it cannot be guaranteed that the basin entropy calculated on such data matches the one obtained with an equal distribution of trials per box. In [52], it is noted that it is possible to estimate the basin entropy using random sampling. Nevertheless, that paper does not study how the number of trials per box or the value of ε influences the estimation accuracy. When random sampling is used, such a study is crucial and cannot be omitted from the analysis. We therefore performed an in-depth analysis of the error of this approach. Firstly, one needs to know the reference value of basin entropy for the phase space. Because the outcome is a scaled sum of entropies calculated for each box, we can narrow the domain to a single box in this analysis.
We performed 1000 trials using the model described by Equation (1) with α = 0.1, ω = 0.981, and F = 1 on the set [0, 0.5] × [0, 0.5], which corresponds to a box of side 0.125 in the rescaled variables (see Section 2.2 for details). The initial conditions were chosen randomly with a uniform distribution. Then, we calculated the basin entropy using a fraction of the available data, starting with the first point and adding one point at each step until all trials were used. As the reference value, we took the basin entropy calculated for all 1000 points and checked the relative error at each step.
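The convergence check just described can be reproduced with a few lines; in this sketch, labels is assumed to hold the attractor labels of the trials in the box, in the order in which they were drawn.

```python
import numpy as np

def single_box_entropy(labels):
    """Gibbs entropy of one box from a list of attractor labels."""
    _, counts = np.unique(np.asarray(labels), return_counts=True)
    p = counts / counts.sum()
    return float(np.sum(p * np.log(1.0 / p)))

def relative_error_curve(labels):
    """Relative error of the single-box entropy estimate as trials are added
    one by one, measured against the value obtained from the full data set."""
    reference = single_box_entropy(labels)
    errors = []
    for n in range(1, len(labels) + 1):
        estimate = single_box_entropy(labels[:n])
        errors.append(abs(estimate - reference) / reference if reference > 0 else 0.0)
    return np.array(errors)
```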
Figure 6 presents the relative error between the basin entropy calculated at each step and the reference value. We can see that, in principle, the more points we take into the analysis, the closer we are to the actual value of basin entropy. Thus, if the basin entropy calculated for data from the basin stability procedure is close to the basin entropy for data with an equal number of points per box, it is a good estimate. When choosing the number of trials N and the box size ε, one also needs to take into account that, when picking random initial conditions, some of the boxes may not contain any chosen point. Assuming that we have k boxes of the same size, the probability that one initial condition is drawn from a certain box equals 1/k, and the probability that it is not drawn from this box is 1 − 1/k. As the initial condition for each trial is chosen independently, the probability that none of the N trials is initialised in the analysed box is (1 − 1/k)^N. Finally, taking into account that this event can happen for every box, the probability that at least one of the boxes dividing the state space does not contain any drawn initial condition is bounded by min(k(1 − 1/k)^N, 1) (for some cases, like k = 3 and N = 2, the product exceeds 1; thus, it is bounded with the min function); a small numerical check of this bound is given after the list below. To evaluate this, we performed three types of experiments:
  • Equal distribution with 25 points per box, with box length ε ;
  • Equal distribution with 100 points per box, with box length ε ;
  • Random sampling for 4,000,000 trials.
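The empty-box bound derived above can be checked numerically; evaluating it in log space avoids underflow for large N, and the second example anticipates the double pendulum setting used later (k = 10^4 boxes, N = 2,000,000 trials).

```python
import numpy as np

def prob_some_box_empty(k, n_trials):
    """Upper bound min(k (1 - 1/k)^N, 1) on the probability that at least
    one of k equally sized boxes receives no initial condition."""
    log_p = np.log(k) + n_trials * np.log1p(-1.0 / k)   # log of k (1 - 1/k)^N
    return min(np.exp(log_p), 1.0)

# Example from the text: k = 3, N = 2 exceeds 1 and is clipped to 1;
# the double pendulum setting gives a vanishingly small probability.
print(prob_some_box_empty(3, 2))               # -> 1.0
print(prob_some_box_empty(10**4, 2_000_000))   # -> ~1.37e-83
```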
The number of trials for the equal-distribution experiments varied, as we took the values of ε_s from the set ε_s ∈ {0.005, 0.00625, 0.008, 0.01, 0.0125, 0.02, 0.025}. It is important to note that these values define the box sizes in the scaled state space and correspond to ε ∈ {0.02, 0.025, 0.032, 0.04, 0.05, 0.08, 0.1} in the non-scaled case. For each experiment, we then calculated the basin entropy for the different box sizes and compared the results.
Figure 7 presents the comparison of the calculated values of basin entropy for the three previously described experiments, depending on the box side length ε. The values calculated from random trials are closer to the case with 100 initial conditions per box than to the case with 25 points per box. This confirms the data presented in Figure 6 and means that the basin entropy calculated with input data generated by the basin stability procedure is an accurate estimate of the true basin entropy.
To summarise Section 2.2 and Section 2.3, we can accurately estimate the values of basin entropy with the data generated in the basin stability procedure. In addition, if the state space is not a multidimensional cube, one can scale the domain to the unit cube and calculate the basin entropy on the scaled variables. In Section 3, such a procedure is presented.

3. Trial-Based Basin Entropy for a Double Pendulum Model

The idea behind basin stability-type metrics is to evaluate the asymptotic behaviour of a general system, where one cannot, in principle, visualise the results. Hence, this section presents previously discussed metrics for a double pendulum model, a paradigmatic example in nonlinear dynamics. The system was designed, constructed, and tested in our laboratory [51,57,58]. Hence, equations of motion refer to a physically existing experimental rig.
The model and a photo of the considered system are presented in Figure 8 in panels (a) and (b), respectively. The system consists of two pendulums. The first one is horizontal, and its angular position is given by φ1; the second one, with angular position φ2, is attached to the first pendulum and rotates freely around its pivot point. The upper pendulum has length l1, moment of inertia J1, mass m1, and a centre of gravity located at distance d1 from its pin joint. A linear spring with stiffness coefficient k supports it at its right end. The second pendulum has moment of inertia J2, mass m2, and a centre of gravity located at distance d2 from its pin joint. The damping in the system is small, so it can be modelled as linear without compromising how precisely the model replicates the system's motion. For the first pendulum, the damping coefficient is given by c1, while for the second one it is given by c2. The whole system is placed on a shaker, which excites it kinematically with a harmonic function of amplitude A and frequency ω.
The equations of motion of the investigated system are given as follows:
$(J_2 + m_2 d_2^2)\,\ddot{\varphi}_2 + m_2 d_2 \left(A\omega^2 \cos(\omega t) + g\right)\sin\varphi_2 + m_2 d_2 x_1 \left(\cos(\varphi_1 - \varphi_2)\,\dot{\varphi}_1^2 + \sin(\varphi_1 - \varphi_2)\,\ddot{\varphi}_1\right) + c_2 \dot{\varphi}_2 = 0,$ (5)
$(J_1 + m_1 d_1^2 + m_2 x_1^2)\,\ddot{\varphi}_1 + c_1 \dot{\varphi}_1 + \tfrac{1}{2} l_1^2 k \sin(2\varphi_1) + m_2 x_1 d_2 \left(\sin(\varphi_1 - \varphi_2)\,\ddot{\varphi}_2 - \cos(\varphi_1 - \varphi_2)\,\dot{\varphi}_2^2\right) + (m_1 d_1 + m_2 x_1)\left(A\omega^2 \cos(\omega t) + g\right)\cos\varphi_1 = 0.$ (6)
In this study, the values of the parameters are equal to the ones introduced in the original paper [51]: J1 = 4.524 × 10⁻³ kg·m², m1 = 0.5562 kg, l1 = 0.315 m, d1 = 0.18 m, x1 = 0.153 m, c1 = 0.05 N·m·s, k = 6850 N/m, J2 = 4.469 × 10⁻⁵ kg·m², m2 = 0.02077 kg, d2 = 0.063 m, c2 = 7 × 10⁻⁶ N·m·s, and g = 9.81 m/s². Furthermore, we chose ω = 38 rad/s and A = 6 × 10⁻³ m as the frequency and amplitude of the excitation. As the system under analysis is described by two coupled second-order ordinary differential equations, its state space is four-dimensional; thus, we need four initial conditions to integrate the system. We set the ranges of acceptable initial conditions to φ1 ∈ [−0.015, 0.015] rad, φ2 ∈ [−π, π] rad, φ̇1 ∈ [−1.2, 1.2] rad/s, and φ̇2 ∈ [−60, 60] rad/s. As is easy to see, there is a large difference between the ranges of initial conditions for the first and second pendulums. It is caused by the peculiar design of the rig, where the first pendulum is fixed to the spring at its end and has a minimal amplitude of motion, while the second pendulum can rotate freely around its pivot point. Such a large difference in the ranges of initial conditions shows that scaling the ranges during the calculation of basin entropy is necessary. With such a setting, several periodic attractors coexist, as shown in [51]. These attractors were analysed using the basin stability and basin entropy metrics. To do so, we first needed to decide how many trials to perform: although a higher number of trials gives more precise results, we also needed to consider the overall computation time required. Hence, we performed a procedure similar to the one presented in the previous section.
Firstly, one needs to choose the box size ε. As the size of the state space varies across dimensions, we normalised the variables before the calculations. We chose the scaled box sizes ε_s = 0.1 and ε_s = 0.05; thus, every analysed box side was a fixed fraction of the size of the state space (1/10 or 1/20, respectively). Then, we picked one point x_0 from the state space at random, such that the box of side ε_s initialised at x_0 was fully contained in the state space. The next step was to perform a number of numerical trials, similar to the calculation of basin stability but in a smaller domain. This number may vary from case to case; in our example, we chose 1000 trials. We then calculated the entropy of the selected box, taking as input a fraction of the data from the previous step. Finally, we repeated the whole procedure several times, ten times in our example.
Figure 9 presents the values of entropy in these two cases (ε = 0.1 and ε = 0.05) for random boxes from the state space, depending on the percentage of trials taken to calculate the basin entropy. The calculations were performed for each box separately. We can see that in all cases the results stabilise as more data points are included. Taking as the baseline the entropy calculated from all the data (1000 trials here), it is rational to set some arbitrary error threshold and check how many simulations are needed to get values below that threshold. From Figure 9c,d, which present the values from Figure 9a,b relative to the baseline, we concluded that the smaller the ε, the less data we need per box to get close to the desired value. This, however, forces us to perform more trials overall, as we need more boxes to cover the state space: the case of ε = 0.05 requires twice the number of boxes per dimension of the state space compared to ε = 0.1. Thus, we focused on the case with ε = 0.1 to limit the number of simulations needed. We can see that most values tend to the limit quickly, reaching less than 10% error with fewer than 200 trials. There is an outlier, marked with a violet line, whose relative error remains high even with many simulations. However, such a high relative error is caused by the small value of the baseline basin entropy; thus, even if such a case occurred in the final simulation, the error introduced by such a box would have a negligible impact on the final value of the calculations. Furthermore, Figure 9e,f present the discrete differential of the functions presented in Figure 9a,b, i.e., the difference between the entropy calculated using a specified number of samples and the entropy calculated using ten fewer samples. Thus, for a value n on the x-axis, we calculated the entropy using n samples and subtracted the entropy calculated using n − 10 samples. From this type of plot, we can see that the calculated entropies stabilise as the number of samples increases. Again, one can choose a specific threshold for the specific scenario; in our case, as chosen earlier, 200 samples gave a reliable result.
From such an initial analysis, one can decide the number of required simulations depending on the investigated problem and the required accuracy. In our case, we chose 200 trials per box with box length ε = 0.1, dictated by the fact that we only needed an estimate of the basin entropy. If a case requires more precision, say a relative error below 5%, one may take 400 or even more trials per box; this needs to be analysed and customised to the system under analysis. For our setting, we had 10⁴ boxes, as the state space is four-dimensional, which led to an overall number of 2,000,000 trials. From the formula derived in Section 2.3, with k = 10⁴ and N = 2,000,000, the probability of having an empty box is about 1.37 × 10⁻⁸³.
Taking all the information described earlier into consideration, we calculated the basin stability and basin entropy for the double pendulum described by Equations (5) and (6). We generated 2,000,000 points from the set of acceptable initial conditions and classified them according to the attractor they tended to. We detected seven main attractors for this case and classified them according to the properties of the motion. The upper pendulum always performed an oscillatory motion, while for the second pendulum we observed both oscillations and rotations with different locking ratios with respect to the frequency of the excitation. Therefore, we named each solution based on the behaviour of the second pendulum and the ratio n_p:m_p, which means that during m_p periods of excitation we observe n_p full oscillations or rotations of the second pendulum. Based on those assumptions, we obtained the following types of periodic motion: 1:1, 1:4, and 1:5 oscillations, and 1:1, 1:2, 2:3, 1:5, 7:7, 8:8, and 9:9 rotations. There were also several rare attractors; however, their summarised share of the state space was less than 0.2%, so we could neglect them in the analysis. It is important to note that in our previous paper [51] we detected fewer solutions for the considered values of the system parameters. This is caused by the significant difference in the number of trials: in [51], we drew 172,000 sets of initial conditions together with two parameters (the amplitude A and frequency ω of excitation), while in this paper we fixed the parameters of the system and drew only initial conditions. Hence, the trials were much more densely distributed, and we detected solutions with small basins of attraction. Furthermore, it is worth noting that for periodic motions of the kind n:n for some n ∈ ℕ (like 1:1 or 9:9), the periodic trajectory is completed only after n periods of excitation.
As there were four state variables, we could not plot the estimated basin of attraction in the phase space. To get some insight, we chose two of the variables and plotted the projection of the basins to the plane containing the rectangle of acceptable values of these variables.
Figure 10 presents the projections for the pairs of variables (φ1, φ̇1) and (φ2, φ̇2). Panel (b) presents the projection onto the (φ2, φ̇2) plane, where we can see some structure: two big basins of the period-1 rotation attractor, one central basin of the period-1 oscillation attractor, and several smaller basins. In comparison, panel (a) shows that, with respect to the chosen two variables, the basins are mixed throughout the state space. An important feature here is that both plots are generated from the same dataset, only projected onto different axes. Thus, in panel (a), although the distribution of basins may seem uniform throughout the state space, it is in fact a mix of basins of attraction. This suggests a high susceptibility to inaccuracy in choosing the initial conditions of the variables φ1, φ̇1. At the same time, there are regions in the (φ2, φ̇2) plane where there is high certainty about the asymptotic behaviour of the system. For example, it can be seen that the attractors of period 1 are more stable than the other ones. To verify that, we calculated the basin stability metric from the generated data.
Table 2 shows the number of trials classified to the main attractors and their basin stability. As concluded from inspecting the projections of the basins of attraction, the two attractors of period 1 are the dominant attractors in the state space. The third most dominant attractor, with period-4 oscillations, occupies roughly a quarter of the analysed subset. However, these results do not indicate how riddled the basins of attraction are in the phase space. To understand this aspect, we calculated the basin entropy using the data we already had.
As determined earlier, we started the calculations of basin entropy with ε = 0.1. We then increased the box size to see how the values change for bigger subsets. As the analysis of the basin entropy error presented previously was done only for ε = 0.1, one might ask whether it is also valid for bigger values. However, as the boxes grow, more trials are taken into the calculation of the basin entropy of every box; thus, the precision should not be lower than in the limiting case of ε = 0.1. We defined the step of varying ε to be 0.0025 and performed 40 calculations to reach ε = 0.2. We compared these results with a similar procedure that takes only 50,000 classified trajectories to calculate the basin entropy.
Figure 11 presents the values of basin entropy calculated for the two mentioned cases (2,000,000 trials and 50,000 trials). Firstly, we see that considering a lower number of trials may lead to results suggesting that the system is much more organised than it is in reality. The extreme case would be when the number of trials is smaller than the number of boxes, so that some boxes are empty. Thus, we encourage performing an analysis of the required number of trials similar to the one presented earlier in this paper.
Moving on to the case of 2,000,000 trials, we see that the basin entropy of this system does not reach either of the extreme values (i.e., 0 or ln(N_A) for N_A different attractors). Thus, we reasoned that the state space consists of regular regions as well as regions whose basins have mixed boundaries. In such a case, it is possible to choose initial conditions that are robust against inaccuracies so that the system operates as intended. To get a deeper insight into the dynamics, further analysis is required; we propose an exemplary approach to achieving that below.
We divided the state space into smaller subsets to obtain a more local result. More precisely, we created multidimensional intervals dividing the (φ2, φ̇2) projection, as it has a more organised structure than (φ1, φ̇1) (see Figure 10). Thus, we took intervals of the form [−0.015, 0.015] × [−π + j·(2π/20), −π + (j + 1)·(2π/20)] × [−1.2, 1.2] × [−60 + i·(120/20), −60 + (i + 1)·(120/20)] for j, i ∈ {0, 1, …, 19}, i.e., the full ranges of φ1 and φ̇1 combined with a 20 × 20 grid over the (φ2, φ̇2) plane. For such intervals, we performed a similar analysis (a sketch of the computation is given below).
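One possible implementation of this local analysis, mirroring the two-dimensional sketch from Section 2.2 but applied to the four-dimensional data, is given below. It assumes the classified trials are stored as an (N, 4) array ordered (φ1, φ2, φ̇1, φ̇2), so only columns 1 and 3 decide which box a trial falls into; the function name and grid size are illustrative.

```python
import numpy as np

def local_metrics_phi2_plane(points, labels, n_bins=20):
    """Per-box basin stability and entropy on an n_bins x n_bins grid over the
    (phi_2, dphi_2) projection; phi_1 and dphi_1 remain unrestricted."""
    points = np.asarray(points, dtype=float)
    labels = np.asarray(labels)
    phi2, dphi2 = points[:, 1], points[:, 3]
    j = np.clip(((phi2 + np.pi) / (2 * np.pi) * n_bins).astype(int), 0, n_bins - 1)
    i = np.clip(((dphi2 + 60.0) / 120.0 * n_bins).astype(int), 0, n_bins - 1)
    attractors = np.unique(labels)
    counts = np.zeros((len(attractors), n_bins, n_bins))
    for a, att in enumerate(attractors):
        mask = labels == att
        np.add.at(counts[a], (j[mask], i[mask]), 1)
    totals = counts.sum(axis=0)
    with np.errstate(divide="ignore", invalid="ignore"):
        p = np.where(totals > 0, counts / totals, 0.0)        # local basin stability
        entropy = np.where(p > 0, p * np.log(1.0 / p), 0.0).sum(axis=0)
    return {att: p[a] for a, att in enumerate(attractors)}, entropy
```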
Figure 12 presents the basin stability metric for the nine attractors listed in Table 2, calculated for the multidimensional intervals described above. In this way, we can determine locally, for smaller subsets of the state space, the probability of reaching specific solutions. Several conclusions follow from this further analysis. Firstly, the regions where the two most dominant attractors, the 1:1 oscillations and the 1:1 rotations, dominate are in good accordance with the conclusions from the basins of attraction method. The advantage of this approach is that it gives a numerical value describing the attraction of a region (the basins of attraction method gives only a visual cue). Additionally, it does not bear the risk of some basins visually covering others (the visualisation tool we used plots the graph basin by basin, covering the previously drawn points). Another aspect uncovered is that reaching the 1:2 rotation attractor is highly probable in the two boxes near φ2 = ±π and φ̇2 = 20.
For boxes dominated by a single attractor, it is easy to determine how mixed the basins are. However, for other subsets, the distribution of the basins is not obvious. To evaluate that, we calculated the entropy for all the intervals defined above.
Figure 13 presents the distribution of entropies throughout the projection of the state space. Using such a plot, we can find which boxes are more mixed. Three central regions have small entropy, corresponding to the two attractors of period 1:1 and the attractor with 1:4 oscillations. Interestingly, even though the 1:4 attractor has relatively high basin stability, the entropy is not close to zero in the regions where it dominates. This suggests that it may be impossible to be confident that the system will follow this trajectory. Conversely, although the basin stability of the 1:2 rotation attractor calculated over the whole phase space is relatively small, in the previously mentioned boxes near φ2 = ±π and φ̇2 = 20 the entropy value is also low. With that in mind, one could choose initial conditions from that region and be confident that the asymptotic behaviour of the system will tend to the selected 1:2 rotation attractor.
To summarise, when choosing initial conditions for the system with the presented approach, one should look for regions where the basin stability of the desired attractor is high while the corresponding entropy is low. This ensures predictable asymptotic behaviour of the system.
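This selection rule can be stated compactly; in the sketch below, bs_map and entropy_map are assumed to be per-box arrays of the kind produced by the grid analyses sketched earlier, and the thresholds are purely illustrative.

```python
import numpy as np

def reliable_boxes(bs_map, entropy_map, bs_min=0.8, entropy_max=0.1):
    """Grid indices of boxes where the chosen attractor's local basin
    stability is high and the local entropy is low, i.e. regions from
    which the system is very likely to evolve as intended."""
    return np.argwhere((bs_map >= bs_min) & (entropy_map <= entropy_max))
```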

4. Conclusions

In this article, we discussed two metrics that describe the asymptotic behaviour of a dynamical system. Firstly, in Section 2.1, we presented the basin stability method, which evaluates the stability of the attractors of a system by analysing how large a fraction of the state space belongs to each attractor's basin of attraction. As input, it requires a random set of initial conditions, each classified according to the attractor it tends to. Being a straightforward method, it gives great insight into the system's dynamics. However, its main drawback is that it does not consider how riddled the basins are in the state space. The basin entropy metric, on the other hand, enables us to distinguish regular phase spaces from ones with riddled boundaries between the basins. Its main idea is to divide the state space into smaller sets, perform a defined number of numerical simulations in each set, and calculate the Gibbs entropy on such sets. These two methods were illustrated using the classical model given by the Van der Pol–Duffing equation.
We also used this model to compare the classical basin entropy with two modified versions. Firstly, we showed that if the initial conditions are taken from a rectangular set, one can normalise it and calculate the basin entropy on the normalised set. This approach can be used when the acceptable initial values of the state variables come from intervals of different sizes. It was shown that the values of the classical basin entropy and of the normalised version are almost identical. The second modification was to draw the initial conditions randomly instead of using a fixed number of trials per box: we draw the initial values from the whole state space and then assign them to their corresponding boxes for the calculation of basin entropy. It was shown that, with enough trials, such a procedure gives a good estimate of the classically calculated basin entropy.
The basin stability and the modified basin entropy were then used to analyse the double pendulum system. Owing to the described modifications, we calculated the basin entropy on the scaled state space using the randomised trials obtained with the basin stability procedure. To choose the proper number of simulations, we analysed the entropy of randomised subsets of the state space. The values of basin stability for the main attractors of the system were also presented, and we detected the two most stable periodic attractors. We then calculated the basin entropy, concluding that the analysed basins were neither fractal nor fully regular. As this conclusion was not decisive, a further local analysis was performed: the state space was divided into smaller subsets, and the calculations of basin stability were repeated for every subset. From this analysis, we were able to find subsets with high basin stability of attractors that were not dominant in the global analysis. For the chosen subsets, the entropy was also calculated. This analysis helped us to understand which parts of the state space are occupied by points from multiple basins of attraction. With such an extended analysis, it is possible to adequately choose the regions of operation for which the system is most likely to evolve as intended.
To summarise, the presented results provide important guidance for investigating the structure of the phase space of multidimensional and multistable systems. We showed a methodology for tackling the simultaneous calculation of basin stability and basin entropy and studied how the number of random trials, the size of the selected box, and the number of trials in each box influence the accuracy of the calculation. We also introduced scaled ranges of initial conditions to obtain reliable results for systems with significantly different spans of initial conditions. We believe that this approach can be used for a wide range of dynamical systems.

Author Contributions

M.L.: methodology, software, validation, writing—original draft, visualisation, investigation, and conceptualisation. P.P.: conceptualisation, software, writing—review and editing, and methodology. P.B.: conceptualisation, supervision, funding acquisition, methodology, and writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Science Center Poland based on the decision number 2018/31/D/ST8/02439.

Data Availability Statement

Data generated or analysed during this study are included in this published article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Towers, D.A.; Edwards, D.; Hamson, M. Guide to Mathematical Modelling; Bloomsbury Publishing: London, UK, 2020. [Google Scholar]
  2. Mesterton-Gibbons, M. A Concrete Approach to Mathematical Modelling; John Wiley & Sons: Hoboken, NJ, USA, 2011. [Google Scholar]
  3. Leonov, G.; Kuznetsov, N.; Kiseleva, M.; Solovyeva, E.; Zaretskiy, A. Hidden oscillations in mathematical model of drilling system actuated by induction motor with a wound rotor. Nonlinear Dyn. 2014, 77, 277–288. [Google Scholar] [CrossRef]
  4. Olt, J.; Liivapuu, O.; Maksarov, V.; Liyvapuu, A.; Tärgla, T. Mathematical modelling of cutting process system. In Engineering Mathematics I; Springer: Berlin/Heidelberg, Germany, 2016; pp. 173–186. [Google Scholar]
  5. Benić, Z.; Piljek, P.; Kotarski, D. Mathematical modelling of unmanned aerial vehicles with four rotors. Interdiscip. Descr. Complex Syst. Indecs 2016, 14, 88–100. [Google Scholar] [CrossRef]
  6. Avitabile, D.; Homer, M.; Champneys, A.; Jackson, J.; Robert, D. Mathematical modelling of the active hearing process in mosquitoes. J. R. Soc. Interface 2010, 7, 105–122. [Google Scholar] [CrossRef] [PubMed]
  7. Kaplan, D.M.; Craver, C.F. The explanatory force of dynamical and mathematical models in neuroscience: A mechanistic perspective. Philos. Sci. 2011, 78, 601–627. [Google Scholar] [CrossRef]
  8. Fasano, A.; Hömberg, D.; Naumov, D. On a mathematical model for laser-induced thermotherapy. Appl. Math. Model. 2010, 34, 3831–3840. [Google Scholar] [CrossRef]
  9. Sun, Y.; Hao, M. Statistical analysis and optimization of process parameters in Ti6Al4V laser cladding using Nd: YAG laser. Opt. Lasers Eng. 2012, 50, 985–995. [Google Scholar] [CrossRef]
  10. Ladd, T.D.; Press, D.; De Greve, K.; McMahon, P.L.; Friess, B.; Schneider, C.; Kamp, M.; Höfling, S.; Forchel, A.; Yamamoto, Y. Pulsed nuclear pumping and spin diffusion in a single charged quantum dot. Phys. Rev. Lett. 2010, 105, 107401. [Google Scholar] [CrossRef]
  11. Koyano, Y.; Suematsu, N.J.; Kitahata, H. Rotational motion of a camphor disk in a circular region. Phys. Rev. E 2019, 99, 022211. [Google Scholar] [CrossRef] [PubMed]
  12. Beauchemin, C.A.; Handel, A. A review of mathematical models of influenza A infections within a host or cell culture: Lessons learned and challenges ahead. BMC Public Health 2011, 11, S7. [Google Scholar] [CrossRef]
  13. Huppert, A.; Katriel, G. Mathematical modelling and prediction in infectious disease epidemiology. Clin. Microbiol. Infect. 2013, 19, 999–1005. [Google Scholar] [CrossRef]
  14. Pustovetov, M.Y. A mathematical model of the three-phase induction motor in three-phase stator reference frame describing electromagnetic and electromechanical processes. In Proceedings of the 2016 Dynamics of Systems, Mechanisms and Machines (Dynamics), Omsk, Russia, 15–17 November 2016; pp. 1–5. [Google Scholar]
  15. Lyshevski, S.E. Electromechanical Systems, Electric Machines, and Applied Mechatronics; CRC Press: Boca Raton, FL, USA, 2018. [Google Scholar]
  16. Özgüven, C.; Özbakır, L.; Yavuz, Y. Mathematical models for job-shop scheduling problems with routing and process plan flexibility. Appl. Math. Model. 2010, 34, 1539–1548. [Google Scholar] [CrossRef]
  17. Mahdavi, I.; Aalaei, A.; Paydar, M.M.; Solimanpur, M. Designing a mathematical model for dynamic cellular manufacturing systems considering production planning and worker assignment. Comput. Math. Appl. 2010, 60, 1014–1025. [Google Scholar] [CrossRef]
  18. Wade, M.J.; Harmand, J.; Benyahia, B.; Bouchez, T.; Chaillou, S.; Cloez, B.; Godon, J.J.; Boudjemaa, B.M.; Rapaport, A.; Sari, T.; et al. Perspectives in mathematical modelling for microbial ecology. Ecol. Model. 2016, 321, 64–74. [Google Scholar] [CrossRef]
  19. pada Das, K.; Kundu, K.; Chattopadhyay, J. A predator–prey mathematical model with both the populations affected by diseases. Ecol. Complex. 2011, 8, 68–80. [Google Scholar] [CrossRef]
  20. Cui, Q.; Xu, C.; Ou, W.; Pang, Y.; Liu, Z.; Li, P.; Yao, L. Bifurcation Behavior and Hybrid Controller Design of a 2D Lotka–Volterra Commensal Symbiosis System Accompanying Delay. Mathematics 2023, 11, 4808. [Google Scholar] [CrossRef]
  21. Du, W.; Xiao, M.; Ding, J.; Yao, Y.; Wang, Z.; Yang, X. Fractional-order PD control at Hopf bifurcation in a delayed predator–prey system with trans-species infectious diseases. Math. Comput. Simul. 2023, 205, 414–438. [Google Scholar] [CrossRef]
  22. Naik, M.K.; Baishya, C.; Veeresha, P. A chaos control strategy for the fractional 3D Lotka–Volterra like attractor. Math. Comput. Simul. 2023, 211, 1–22. [Google Scholar] [CrossRef]
  23. Ou, W.; Xu, C.; Cui, Q.; Pang, Y.; Liu, Z.; Shen, J.; Baber, M.Z.; Farman, M.; Ahmad, S. Hopf bifurcation exploration and control technique in a predator-prey system incorporating delay. AIMS Math 2024, 9, 1622–1651. [Google Scholar] [CrossRef]
  24. Boers, N.; Marwan, N.; Barbosa, H.M.; Kurths, J. A deforestation-induced tipping point for the South American monsoon system. Sci. Rep. 2017, 7, 41489. [Google Scholar] [CrossRef]
  25. Rajagopal, K.; Jafari, S.; Pham, V.T.; Wei, Z.; Premraj, D.; Thamilmaran, K.; Karthikeyan, A. Antimonotonicity, bifurcation and multistability in the vallis model for El Niño. Int. J. Bifurc. Chaos 2019, 29, 1950032. [Google Scholar] [CrossRef]
  26. Feudel, U.; Pisarchik, A.N.; Showalter, K. Multistability and tipping: From mathematics and physics to climate and brain Minireview and preface to the focus issue. Chaos Interdiscip. J. Nonlinear Sci. 2018, 28, 033501. [Google Scholar] [CrossRef] [PubMed]
  27. Skardal, P.S.; Arenas, A. Abrupt desynchronization and extensive multistability in globally coupled oscillator simplexes. Phys. Rev. Lett. 2019, 122, 248301. [Google Scholar] [CrossRef] [PubMed]
  28. Farhan, A.K.; Ali, R.S.; Natiq, H.; Al-Saidi, N.M. A new S-box generation algorithm based on multistability behavior of a plasma perturbation model. IEEE Access 2019, 7, 124914–124924. [Google Scholar] [CrossRef]
  29. Lin, H.; Wang, C.; Tan, Y. Hidden extreme multistability with hyperchaos and transient chaos in a Hopfield neural network affected by electromagnetic radiation. Nonlinear Dyn. 2020, 99, 2369–2386. [Google Scholar] [CrossRef]
  30. Hellmann, F.; Schultz, P.; Jaros, P.; Levchenko, R.; Kapitaniak, T.; Kurths, J.; Maistrenko, Y. Network-induced multistability through lossy coupling and exotic solitary states. Nat. Commun. 2020, 11, 592. [Google Scholar] [CrossRef]
  31. Wang, N.; Zhang, G.; Kuznetsov, N.V.; Bao, H. Hidden attractors and multistability in a modified Chua's circuit. Commun. Nonlinear Sci. Numer. Simul. 2021, 92, 105494. [Google Scholar] [CrossRef]
  32. Zhu, R.; del Rio-Salgado, J.M.; Garcia-Ojalvo, J.; Elowitz, M.B. Synthetic multistability in mammalian cells. Science 2021, 375, eabg9765. [Google Scholar] [CrossRef] [PubMed]
  33. Fang, S.; Zhou, S.; Yurchenko, D.; Yang, T.; Liao, W.H. Multistability phenomenon in signal processing, energy harvesting, composite structures, and metamaterials: A review. Mech. Syst. Signal Process. 2022, 166, 108419. [Google Scholar] [CrossRef]
  34. Ngonghala, C.N.; Feudel, U.; Showalter, K. Extreme multistability in a chemical model system. Phys. Rev. E 2011, 83, 056206. [Google Scholar] [CrossRef]
  35. Hens, C.; Dana, S.K.; Feudel, U. Extreme multistability: Attractor manipulation and robustness. Chaos Interdiscip. J. Nonlinear Sci. 2015, 25, 053112. [Google Scholar] [CrossRef]
  36. Jaros, P.; Perlikowski, P.; Kapitaniak, T. Synchronization and multistability in the ring of modified Rössler oscillators. Eur. Phys. J. Spec. Top. 2015, 224, 1541–1552. [Google Scholar] [CrossRef]
  37. Li, C.; Sprott, J.C.; Hu, W.; Xu, Y. Infinite multistability in a self-reproducing chaotic system. Int. J. Bifurc. Chaos 2017, 27, 1750160. [Google Scholar] [CrossRef]
  38. Louodop, P.; Tchitnga, R.; Fagundes, F.F.; Kountchou, M.; Tamba, V.K.; Cerdeira, H.A. Extreme multistability in a Josephson-junction-based circuit. Phys. Rev. E 2019, 99, 042208. [Google Scholar] [CrossRef] [PubMed]
  39. Pisarchik, A.N.; Jaimes-Reátegui, R.; Rodríguez-Flores, C.; García-López, J.; Huerta-Cuéllar, G.; Martín-Pasquín, F. Secure chaotic communication based on extreme multistability. J. Frankl. Inst. 2021, 358, 2561–2575. [Google Scholar] [CrossRef]
  40. Doedel, E.J.; Champneys, A.R.; Dercole, F.; Fairgrieve, T.F.; Kuznetsov, Y.A.; Oldeman, B.; Paffenroth, R.; Sandstede, B.; Wang, X.; Zhang, C. AUTO-07P: Continuation and Bifurcation Software for Ordinary Differential Equations; Concordia University: Montreal, QC, Canada, 2007. [Google Scholar]
  41. Doedel, E.; Keller, H.B.; Kernevez, J.P. Numerical Analysis and Control of Bifurcation Problems (I): Bifurcation in Finite Dimensions. Int. J. Bifurc. Chaos 1991, 1, 493–520. [Google Scholar] [CrossRef]
  42. Dhooge, A.; Govaerts, W.; Kuznetsov, Y.A. MATCONT: A MATLAB package for numerical bifurcation analysis of ODEs. ACM Trans. Math. Softw. (TOMS) 2003, 29, 141–164. [Google Scholar] [CrossRef]
  43. Menck, P.J.; Heitzig, J.; Marwan, N.; Kurths, J. How basin stability complements the linear-stability paradigm. Nat. Phys. 2013, 9, 89–92. [Google Scholar] [CrossRef]
  44. Menck, P.J.; Heitzig, J.; Kurths, J.; Joachim Schellnhuber, H. How dead ends undermine power grid stability. Nat. Commun. 2014, 5, 3969. [Google Scholar] [CrossRef]
  45. Kerswell, R.R.; Pringle, C.C.T.; Willis, A.P. An optimization approach for analysing nonlinear stability with transition to turbulence in fluids as an exemplar. Rep. Prog. Phys. 2014, 77, 085901. [Google Scholar] [CrossRef]
  46. Ji, P.; Kurths, J. Basin Stability in Complex Oscillator Networks. In Nonlinear Dynamics of Electronic Systems; Mladenov, V.M., Ivanov, P.C., Eds.; Springer: Cham, Switzerland, 2014; pp. 211–218. [Google Scholar]
  47. Leng, S.; Lin, W.; Kurths, J. Basin stability in delayed dynamics. Sci. Rep. 2016, 6, 21449. [Google Scholar] [CrossRef]
  48. Brzeski, P.; Lazarek, M.; Kapitaniak, T.; Kurths, J.; Perlikowski, P. Basin stability approach for quantifying responses of multistable systems with parameters mismatch. Meccanica 2016, 51, 2713–2726. [Google Scholar] [CrossRef]
  49. Dudkowski, D.; Jafari, S.; Kapitaniak, T.; Kuznetsov, N.V.; Leonov, G.A.; Prasad, A. Hidden attractors in dynamical systems. Phys. Rep. 2016, 637, 1–50. [Google Scholar] [CrossRef]
  50. Pattanayak, D.; Mishra, A.; Dana, S.K.; Bairagi, N. Bistability in a tri-trophic food chain model: Basin stability perspective. Chaos Interdiscip. J. Nonlinear Sci. 2021, 31, 073124. [Google Scholar] [CrossRef] [PubMed]
  51. Brzeski, P.; Wojewoda, J.; Kapitaniak, T.; Kurths, J.; Perlikowski, P. Sample-based approach can outperform the classical dynamical analysis-experimental confirmation of the basin stability method. Sci. Rep. 2017, 7, 6121. [Google Scholar] [CrossRef] [PubMed]
  52. Daza, A.; Wagemakers, A.; Georgeot, B.; Guéry-Odelin, D.; Sanjuán, M.A.F. Basin entropy: A new tool to analyze uncertainty in dynamical systems. Sci. Rep. 2016, 6, 31416. [Google Scholar] [CrossRef]
  53. Brzeski, P.; Perlikowski, P. Sample-based methods of analysis for multistable dynamical systems. Arch. Comput. Methods Eng. 2019, 26, 1515–1545. [Google Scholar] [CrossRef]
  54. Chudzik, A.; Perlikowski, P.; Stefanski, A.; Kapitaniak, T. Multistability and rare attractors in van der Pol–Duffing oscillator. Int. J. Bifurc. Chaos 2011, 21, 1907–1912. [Google Scholar] [CrossRef]
  55. Alexander, J.; Yorke, J.A.; You, Z.; Kan, I. Riddled basins. Int. J. Bifurc. Chaos 1992, 2, 795–813. [Google Scholar] [CrossRef]
  56. Lai, Y.C.; Grebogi, C. Characterizing riddled fractal sets. Phys. Rev. E 1996, 53, 1371. [Google Scholar] [CrossRef]
  57. Dudkowski, D.; Grabski, J.; Wojewoda, J.; Perlikowski, P.; Maistrenko, Y.; Kapitaniak, T. Experimental multistable states for small network of coupled pendula. Sci. Rep. 2016, 6, 29833. [Google Scholar] [CrossRef]
  58. Strzalko, J.; Grabski, J.; Wojewoda, J.; Wiercigroch, M.; Kapitaniak, T. Synchronous rotation of the set of double pendula: Experimental observations. Chaos Interdiscip. J. Nonlinear Sci. 2012, 22, 047503. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Plot of points drawn randomly in the procedure of calculating basin stability, estimating the basin of attraction for the Van der Pol–Duffing Equation (1).
Figure 2. Basin entropy (a) and basin entropy in log–log scale (b) for the Van der Pol–Duffing Equation (1).
Figure 3. Comparison of values of basin entropy for scaled and unscaled phase variables.
Figure 4. Basin stability for three different attractors calculated for subsets that are multidimensional intervals. The colour marks the value of basin stability for every box.
Figure 5. Entropy calculated for subsets that are multidimensional intervals. The colour marks the value of entropy for every box.
Figure 6. The relative error between basin entropy calculated for 1000 trials in a single box, and basin entropy calculated with fewer points for the Van der Pol–Duffing system given by Equation (1).
Figure 7. Values of basin entropy for different box sizes ε for three different approaches: equal distribution of 25 per box, 100 per box, and random sampling with 4,000,000 trials.
Figure 8. The physical model of the double pendulum system (a) and its realisation (b) in the laboratory.
Figure 9. Values of entropy for 10 chosen boxes from the state space with scaled box length ε = 0.1 (left) and ε = 0.05 (right) depending on the percentage of overall 1000 trials. Each colour represents entropy calculated for a single random box. Panels (a,b) present the raw values of entropy, while panels (c,d) present the relative error of entropy. Two dashed lines mark 5% and 10% error levels. Panels (e,f) present the discrete differentials of the functions presented in panels (a,b).
Figure 10. The projections of the basins of attraction of the system given by Equations (5) and (6) for the pairs (φ1, φ̇1) (a) and (φ2, φ̇2) (b).
Figure 11. Values of basin entropy for different ε for the system given by the Equations (5) and (6).
Figure 12. Values of basin stability for different attractors for the system given by the Equations (5) and (6) with phase space divided into smaller subsets.
Figure 13. Values of entropy for the system given by the Equations (5) and (6) with phase space divided into smaller subsets.
Table 1. Basin stability for the Van der Pol–Duffing Equation (1) with parameter values α = 0.1, ω = 0.981, and F = 1.

Attractor | Number of Points | Basin Stability
Period 1 | 77,933 | 0.3896
Period 7 | 28,634 | 0.1433
Period 11 | 93,433 | 0.4671
Table 2. The values of basin stability for different attractors of the system given by Equations (5) and (6).

Attractor | Number of Trials | Basin Stability
1:1 oscillations | 641,769 | 0.3208
1:1 rotations | 641,788 | 0.3208
1:2 rotations | 109,410 | 0.0547
2:3 rotations | 5978 | 0.0029
1:4 oscillations | 514,247 | 0.2571
1:5 oscillations | 32,401 | 0.0162
7:7 rotations | 8080 | 0.0040
9:9 rotations | 34,056 | 0.0170
8:8 rotations | 8313 | 0.0041
other | 3958 | 0.0019
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
