Article

Necessary Condition of Self-Organisation in Nonextensive Open Systems

1 Department of Physics, Faculty of Science, Ege University, Izmir 35100, Turkey
2 Department of Physics, Faculty of Arts and Sciences, Izmir University of Economics, Izmir 35330, Turkey
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Entropy 2023, 25(3), 517; https://doi.org/10.3390/e25030517
Submission received: 15 February 2023 / Revised: 10 March 2023 / Accepted: 14 March 2023 / Published: 17 March 2023
(This article belongs to the Special Issue Non-additive Entropy Formulas: Motivation and Derivations)

Abstract:
In this paper, we focus on the evolution from an equilibrium state in a power-law form, expressed by means of q-exponentials, to an arbitrary one. Introducing new q-Gibbsian equalities as the necessary condition of self-organization in nonextensive open systems, we theoretically show how to derive the connections between q-renormalized entropies ($\Delta\tilde{S}_q$) and q-relative entropies ($KL_q$) in both Bregman and Csiszar forms, after clearly explaining the connection between the renormalized entropy of Klimontovich and the relative entropy of Kullback and Leibler without using any predefined effective Hamiltonian. This function, in our treatment, emerges directly from the calculations. We also explain the difference between using ordinary and normalized q-expectations in the mean energy calculations of the states. To verify the results numerically, we use a toy model of complexity, namely the logistic map defined as $X_{t+1} = 1 - aX_t^2$, where $a \in [0,2]$ is the map parameter. We measure the level of self-organization using two distinct forms of the q-renormalized entropy through period doublings and chaotic band mergings of the map as the number of periods/chaotic bands increases/decreases. We associate the behaviour of the q-renormalized entropies with the emergence/disappearance of complex structures in the phase space as the control parameter of the map changes. Similar to the Shiner-Davison-Landsberg (SDL) complexity, we categorize the tendencies of the q-renormalized entropies for the evolution of the map over the whole control parameter space. Moreover, we show that any evolution between two states possesses a unique value $q = q^*$ (not a range of q values) for which the q-Gibbsian equalities hold, and this value is the same for the Bregman and Csiszar forms. Interestingly, if the evolution is from $a = 0$ to $a = a_c \simeq 1.4011$, this unique $q^*$ value is found to be $q^* \simeq 0.2445$, which is the same as the value of $q_{sensitivity}$ given in the literature.

1. Introduction

The main problem for researchers interested in entropy-based measures is discerning which measure is the most suitable one for the complex system under consideration. Here, 'suitable' refers to the measure whose behaviour is compatible with the dynamics of the system among the definitions available in the literature. For example, the measure is expected to distinguish among the possible phases of the system as the parameter set of the system slightly changes. Despite the numerous definitions [1,2,3,4,5,6,7,8], the measures can formally be categorized into three types (Figure 1), namely type-I, type-II and type-III, similar to the SDL complexity [5]. The first type considers the measure as a monotonically increasing function of disorder. In the second type, the measure is a convex function of disorder; hence, it is minimal for both complete order and complete disorder, and maximal at a point between them. In the last type, the measure is a monotonically decreasing function of disorder. The crucial point in this classification is that the classic notion of entropy by Shannon [9] is associated with the degree of disorder. It should be noted that the control parameter of the system can also be used as the degree of disorder if the Shannon entropy $S$ is a monotonically increasing function of some system parameter, say $a$, as the system evolves in its parameter space from $a$ to $a + \Delta a$ [10].
Even if changing the parameter seems not directly related to time, it always takes time for a real system to pass from one state to another, since an evolution depends on the change of conditions, i.e., of parameters, with time [11]. For the evolution of dissipative dynamical systems which pave their way through successive stable branches, and then through successive chaotic band mergings, as the parameter set of the system slightly changes, the classic entropy of Shannon, the relative entropy of Kullback and Leibler and the renormalized entropy of Klimontovich are good examples of entropy-based measures of type-I, type-II and type-III, respectively [10]. The Shannon entropy increases monotonically from the first branch to the most chaotic state (type-I). The relative entropy increases monotonically from the first branch up to the edge of chaos, and then decreases monotonically up to the first chaotic band merging (type-II). The renormalized entropy decreases monotonically from the first branch up to the edge of chaos (type-III), and then increases monotonically up to the first chaotic band merging (type-I). In other words, when the sequence of branches emerges, the relative order increases, i.e., the measure of complexity, the renormalized entropy, decreases [12].
The behaviour of the renormalized entropy indicates the relative degree of order in the system, as first suggested by Haken [13] in the context of self-organization. The S-theorem, which is the basis of the renormalized entropy method, was proved for the transition from laminar to turbulent flow [14]. Such a transformation confirmed that turbulent structures are more ordered, which implies more highly organized, than laminar ones [15]. Moreover, Rayleigh–Bénard convection [16,17], the Taylor instability experiment [18], and bacterial [19] and Dictyostelium discoideum [20] colonies are some typical examples in which highly ordered spatial patterns emerge in the phase spaces via changing conditions of the systems, indicating a high level of self-organization.
The Shannon entropy expression, setting $k_B = 1$, reads
$$S = -\sum_i p_i \ln p_i \qquad (1)$$
where $p_i$ is the probability of an event $i$ of a sample set. Maximizing the Shannon entropy of the system subject to suitable constraints (namely, the mean energy and probability normalization constraints) using the method of Lagrange multipliers, one obtains the equilibrium distribution
$$p_i^{eq} = \frac{e^{-\beta \epsilon_i}}{Z} \qquad (2)$$
where $\beta$ is the inverse temperature and $Z = \sum_i e^{-\beta\epsilon_i}$ is the partition function. For the evolution from an arbitrary state to the equilibrium one ($p \to p^{eq}$), the Shannon entropy can be used to find the difference between the entropies, $\Delta S(p \to p^{eq}) = S(p^{eq}) - S(p) \ge 0$, which is known as the second law of thermodynamics. However, from both Boltzmann's H-theorem and the Gibbs theorem, it is well known that this inequality is valid only for an isolated evolution between the states. Hence, it is violated for the evolution of open systems, in which the exchange of energy/matter with the surroundings is allowed. In other words, these theorems state that $\Delta S(p \to p^{eq})$ equals the relative entropy under the limitation that the mean energy remains constant (i.e., $\langle E \rangle_{p^{eq}} = \langle E \rangle_p$) [21]. Therefore, in such an evolution (from $p$ to $p^{eq}$), as long as the mean energy is the same, it follows from these theorems that
$$\Delta S = S^{eq} - S = \sum_i p_i \ln\frac{p_i}{p_i^{eq}} = K(p\|p^{eq}) \ge 0 \qquad (3)$$
where $p^{eq}$ and $p$ are the probability distributions corresponding to the equilibrium state and the arbitrary one, respectively. Equation (3) implies that the equilibrium state has the greatest disorder (or chaoticity) compared to the arbitrary state. The usual expression of the relative entropy, $K(p\|r) = \sum_i p_i \ln(p_i/r_i)$, gives the entropy produced by the change from the state $p$ to the state $r$. Whenever $r = p^{eq}$, it can be written in terms of the free energy difference of the states,
$$K(p\|p^{eq}) \propto (F - F^{eq}) \qquad (4)$$
where $F = \langle E \rangle_p - \frac{1}{\beta} S$ and $F^{eq} = \langle E \rangle_{p^{eq}} - \frac{1}{\beta} S^{eq}$.
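As a quick numerical illustration of Equations (2)–(4), the following sketch (a hypothetical four-level spectrum and an arbitrary nonequilibrium state, both assumed for illustration only) verifies that $K(p\|p^{eq}) \ge 0$ and that the proportionality constant in Equation (4) is exactly $\beta$:

```python
import numpy as np

# Hypothetical four-level spectrum (illustrative values, not from the paper)
eps = np.array([0.0, 1.0, 2.0, 3.0])
beta = 0.7  # inverse temperature, k_B = 1

Z = np.sum(np.exp(-beta * eps))        # partition function
p_eq = np.exp(-beta * eps) / Z         # Eq. (2): canonical equilibrium state

p = np.array([0.4, 0.3, 0.2, 0.1])     # an arbitrary (nonequilibrium) state

S = lambda r: -np.sum(r * np.log(r))   # Shannon entropy, Eq. (1)
K = np.sum(p * np.log(p / p_eq))       # relative entropy K(p||p_eq)

# Free energies F = <E> - S/beta of both states, cf. Eq. (4)
F = np.dot(p, eps) - S(p) / beta
F_eq = np.dot(p_eq, eps) - S(p_eq) / beta

print(K >= 0.0, np.isclose(K, beta * (F - F_eq)))
```

The check makes the "associated with the free energy difference" statement concrete: for the exponential equilibrium form, $K(p\|p^{eq}) = \beta(F - F^{eq})$ holds identically.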
Due to the strong limitations on the second law, Klimontovich introduced his S-theorem, where 'S' stands for 'self-organization', which makes it possible to analyze open systems in terms of the Gibbs theorem [21]. According to the S-theorem, it is possible to compare two distinct states, namely the equilibrium state and a non-equilibrium stationary state, under dissipation of energy, which implies the evolution of an open system. To compensate for the dissipation, a new mean energy equality, $\langle E \rangle_{\tilde{p}^{eq}} = \langle E \rangle_p$, similar to the Gibbs equality, is defined via a renormalization procedure $p^{eq} \to \tilde{p}^{eq}$. After such a renormalization, the renormalized entropy $\Delta\tilde{S}$ is defined as (noticing that the evolution is from $\tilde{p}^{eq}$ to $p$)
$$\Delta\tilde{S} = S - \tilde{S}^{eq} = -\sum_i p_i \ln\frac{p_i}{\tilde{p}_i^{eq}} = -K(p\|\tilde{p}^{eq}) \le 0 \qquad (5)$$
where $\tilde{p}^{eq}$ and $p$ are the renormalized equilibrium and non-equilibrium stationary states, respectively (all proofs regarding Equations (3)–(5) will be given in the next part of the paper).
In Section 2, we show that the definition of the mean energy equality (and also of Equation (5)) in the concept of the renormalized entropy by Klimontovich is valid for an evolution to/from the canonical equilibrium distribution of the exponential form in Equation (2). We also discuss the connections among the Shannon entropy, the Kullback-Leibler relative entropy and the renormalized entropy within a thermodynamic perspective.
In Section 3, we theoretically show how to apply this procedure to a nonextensive open system, whose generalized canonical probability distribution is of the q-exponential form
$$e_q^x = \left[1 + (1-q)x\right]^{1/(1-q)}. \qquad (6)$$
This distribution is the one that comes from the maximization of the Tsallis entropy, given by [22]
$$S_q = \frac{1 - \sum_i p_i^q}{q - 1}, \qquad (7)$$
and it recovers the Shannon entropy $S = -\sum_i p_i \ln p_i$ in the limit $q \to 1$ as a special case. It is well known that the maximization of the Tsallis entropy subject to the ordinary constraints ($\sum_i p_i \epsilon_i = E$ and $\sum_i p_i = 1$) yields a canonical distribution in a q-exponential form,
$$p_i^{ord} = \frac{e_{2-q}^{-\beta^* \epsilon_i}}{Z(\beta^*)}, \qquad (8)$$
where $1/Z(\beta^*)$ is a normalization constant [23]. On the other hand, when normalized q-expectations are employed instead of the ordinary constraints ($\sum_i [p_i^q \epsilon_i / \sum_j p_j^q] = E_q$ and $\sum_i [p_i^q / \sum_j p_j^q] = 1$), the canonical probability distribution reads
$$p_i^{(nor)} = \frac{e_q^{-\hat{\beta}(\epsilon_i - E_q)}}{Z(\hat{\beta})} \qquad (9)$$
where $\hat{\beta} = \beta / C_q$ and $C_q = \sum_i (p_i^{nor})^q$ [22]. When the generalized canonical distributions in Equations (8) and (9) are respectively put into the two kinds of q-generalized relative entropies, the well-known Bregman type $K_q^B(p\|p^{ord})$ and the Csiszar type $K_q^C(p\|p^{nor})$, it can be shown that the generalized relative entropies are associated with the q-generalized free energy differences of the states [24]:
$$K_q^B(p\|p^{ord}) \propto (F_q - F_q^{ord}) \qquad (10a)$$
$$K_q^C(p\|p^{nor}) \propto (F_q - F_q^{nor}) \qquad (10b)$$
which are similar to Equation (4). It can also be noted that there is one more version of the formalism, using unnormalized q-expectations in the constraints. However, it is shown in [25] that all these versions are equivalent to each other.
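Since the Bregman and Csiszar forms are used throughout the rest of the paper (their explicit expressions appear later as Equations (17) and (18)), a minimal numerical sketch of both is given below. The two test distributions are arbitrary assumptions; both forms reduce to the Kullback-Leibler relative entropy as $q \to 1$:

```python
import numpy as np

def K_bregman(p, p0, q):
    """Bregman form of the q-relative entropy, Eq. (17); KL form at q = 1."""
    if abs(q - 1.0) < 1e-12:
        return np.sum(p * np.log(p / p0))
    return (np.sum(p * (p**(q - 1) - p0**(q - 1))) / (q - 1)
            - np.sum((p - p0) * p0**(q - 1)))

def K_csiszar(p, p0, q):
    """Csiszar form of the q-relative entropy, Eq. (18); KL form at q = 1."""
    if abs(q - 1.0) < 1e-12:
        return np.sum(p * np.log(p / p0))
    return np.sum(p * ((p / p0)**(q - 1) - 1.0)) / (q - 1)

# Arbitrary (assumed) test distributions
p  = np.array([0.5, 0.3, 0.2])
p0 = np.array([0.4, 0.4, 0.2])

for q in (0.5, 0.999999, 1.5):
    print(q, K_bregman(p, p0, q), K_csiszar(p, p0, q))
```

Both divergences are non-negative and vanish only for identical distributions, as required of relative entropies; at $q \approx 1$ the two outputs coincide with the Kullback-Leibler value.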
In the following subsections, we explain the necessary condition (the q-mean energy equality) for the self-organization of open systems using the Bregman and Csiszar forms of the generalized relative entropies, respectively. Klimontovich himself, as well as some recent efforts on the generalization of the renormalized entropy [26,27], invoked a predefined 'effective Hamiltonian' function to obtain the mean energy equality. We theoretically show here what kind of equalities would be necessary conditions for the self-organization of nonextensive systems without using any predefined effective Hamiltonian function; the results come up as a direct consequence of our approach. Moreover, we derive relations between the q-renormalized entropy and the generalized relative entropies from the viewpoint of information-theoretic approaches in Section 2.
In Section 4, we use a paradigmatic toy model, the logistic map, in order to numerically show the level of self-organization (or the degree of complexity from self-organisation) for a system that paves its way through successive stable branches, and then through successive chaotic band mergings, as the parameter of the system slightly changes. We show the behaviour of the q-relative entropies in both Bregman and Csiszar forms as suitable measures for self-organisation, and define their types of complexity (type-I, -II or -III). Finally, we show the unique $q^*$ value obtained through the evolution of the states with the system parameter and relate its value at the edge of chaos to the $q_{sensitivity}$ value obtained for the logistic map [28,29].

2. Thermodynamic Perspective of the Renormalized Entropy and Connections

Let us consider an evolution from an equilibrium state $p_0$ to an arbitrary one $p$ as the control parameter of the complex system slightly changes from $a_0$ to $a_0 + \Delta a$. The entropy produced by the change of state (i.e., the corresponding information gain) in such an evolution can be given by the Kullback-Leibler relative entropy [30]
$$K(p\|p_0) = \sum_i p_i \ln\frac{p_i}{p_{0i}} \ge 0. \qquad (11)$$
Adding and subtracting $\sum_i p_{0i} \ln p_{0i}$ to and from the right-hand side of Equation (11), this can be rewritten as
$$K(p\|p_0) = -\Delta S(p_0 \to p) - \sum_i (p_i - p_{0i}) \ln p_{0i} \qquad (12)$$
where $\Delta S(p_0 \to p) = \sum_i p_{0i} \ln p_{0i} - \sum_i p_i \ln p_i$.
Substituting $p_{0i} = p_i^{eq}$, the canonical equilibrium distribution of the exponential form in Equation (2), into the logarithmic function $\ln p_{0i}$ in Equation (12), it can immediately be shown that
$$K(p\|p^{eq}) = -\Delta S(p^{eq} \to p) + \beta\,\Delta E(p^{eq} \to p) \qquad (13)$$
where $\Delta E(p^{eq} \to p) = \langle E \rangle_p - \langle E \rangle_{p^{eq}}$ is the difference between the mean energies through the evolution from the state $p^{eq}$ to the state $p$. Comparing Equations (12) and (13), it follows that
$$\Delta E(p^{eq} \to p) = -\frac{1}{\beta} \sum_i (p_i - p_i^{eq}) \ln p_i^{eq}. \qquad (14)$$
One can notice that Equations (12)–(14) lead to three important connections regarding the proofs of Equations (3)–(5):
(i)
Equation (3), the second law of thermodynamics $\Delta S(p \to p^{eq}) \ge 0$, can be derived from Equation (13) by noticing that $\Delta S(p \to p^{eq}) = -\Delta S(p^{eq} \to p)$ and $K(p\|p^{eq}) \ge 0$. This derivation requires the limitation that the Gibbs equality holds, i.e., $\sum_i (p_i - p_i^{eq}) \ln p_i^{eq} = -\beta\,\Delta E = 0$, which implies that the evolution is isolated, i.e., the mean energy is the same through the evolution.
(ii)
Equation (4) can easily be obtained from Equation (13) using the definition of the free energy, $F = \langle E \rangle_r - \frac{1}{\beta} S$, for any state $r$. This means that the Kullback-Leibler relative entropy is associated with the free energy difference of the states in such an evolution.
(iii)
Equation (5), the result of the S-theorem by Klimontovich, can be obtained from Equation (13) by a transformation $p^{eq} \to \tilde{p}^{eq}$ ensuring a renormalization of the state that compensates for the mean energy difference between the renormalized equilibrium and the non-equilibrium stationary state, i.e., $\Delta E(\tilde{p}^{eq} \to p) = 0$. The compensation requires the Gibbs equality defined as
$$\sum_i p_i \ln p_i^{eq} = \sum_i \tilde{p}_i^{eq} \ln p_i^{eq}. \qquad (15)$$
Such a renormalization enables us to use the Gibbs theorem for an open system with an energy flux. To compare the states in terms of the renormalized entropy $\Delta\tilde{S}$ and the Kullback-Leibler relative entropy $K$, the connection can be written as
$$\Delta\tilde{S} = -K(p\|\tilde{p}^{eq}) \le 0 \qquad (16)$$
by means of Equations (12) and (15).
It should be noted that our approach reveals, through Equations (12)–(14), why a quantity called the effective Hamiltonian, $H_{eff} = -\ln p_{0i}$, for the reference equilibrium state was preferred by Klimontovich. In our treatment, choosing the reference state as $p_0 = p^{eq}$ yields $\Delta E(p^{eq} \to p)$ in Equation (13) spontaneously. The details and applications of the renormalized entropy by Klimontovich, on both synthetic and real data, can be found in references [10,12,21,31] and [32,33,34,35], respectively.
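The decomposition in Equations (13) and (14) and the S-theorem renormalization can be checked numerically. In the sketch below, the four-level spectrum and the nonequilibrium state are assumed for illustration; the renormalized inverse temperature is found by bisection, exploiting the fact that the canonical mean energy decreases monotonically with the inverse temperature:

```python
import numpy as np

eps = np.array([0.0, 1.0, 2.0, 3.0])   # assumed toy energy levels
beta = 1.0

def canonical(b):
    """Canonical distribution e^(-b*eps)/Z, Eq. (2)."""
    w = np.exp(-b * eps)
    return w / w.sum()

S = lambda r: -np.sum(r * np.log(r))           # Shannon entropy
K = lambda x, y: np.sum(x * np.log(x / y))     # Kullback-Leibler relative entropy

p_eq = canonical(beta)
p = np.array([0.4, 0.3, 0.2, 0.1])             # arbitrary nonequilibrium state

# Eq. (13): K(p||p_eq) = -dS + beta*dE
dS = S(p) - S(p_eq)
dE = np.dot(p, eps) - np.dot(p_eq, eps)
assert np.isclose(K(p, p_eq), -dS + beta * dE)

# S-theorem: renormalize p_eq -> p_ren so that the mean energies match,
# by bisection on the inverse temperature
lo, hi = 1e-6, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if np.dot(canonical(mid), eps) > np.dot(p, eps):
        lo = mid
    else:
        hi = mid
p_ren = canonical(0.5 * (lo + hi))

# Eqs. (5)/(16): once <E> is compensated, dS_ren = -K(p||p_ren) <= 0
dS_ren = S(p) - S(p_ren)
print(np.isclose(dS_ren, -K(p, p_ren)), dS_ren <= 0)
```

The bisection is a stand-in for whatever root-finder one prefers; the point is that after the mean energy compensation the renormalized entropy is exactly $-K \le 0$, in agreement with Equation (16).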

3. Derivation of the q-Renormalized Entropy and Connections

Similar to the connection in Equation (4) between the Kullback-Leibler relative entropy and free energy differences, there are two types of q-generalized relative entropies whose connections to q-free energy differences exist [36]. They are, respectively, the Bregman form, given by
$$K_q^B(p\|p_0) = \frac{1}{q-1}\sum_i p_i\left(p_i^{q-1} - p_{0i}^{q-1}\right) - \sum_i (p_i - p_{0i})\,p_{0i}^{q-1} \qquad (17)$$
and the Csiszar form, defined as
$$K_q^C(p\|p_0) = \frac{1}{q-1}\sum_i p_i\left[\left(\frac{p_i}{p_{0i}}\right)^{q-1} - 1\right]. \qquad (18)$$
Noticing the dependence of the generalized relative entropies on the constraints from Equation (10), we derive the q-renormalized entropies for the evolution from a stationary state in the functional form of an inverse power law (i.e., the q-exponentials in Equations (8) and (9)) within a thermodynamic perspective similar to that of Section 2. The crucial point of such an approach is that a predefined effective Hamiltonian function is not necessary. Moreover, we show the necessary conditions of self-organization in nonextensive open systems using a q-Gibbsian equality in the following subsections.

3.1. Derivation and Connection I: q-Renormalized Entropy and Bregman Form of Relative Entropy

Firstly, we reorganize the Bregman form of the generalized relative entropy in Equation (17) as
$$K_q^B(p\|p_0) = -\sum_i p_{0i}\ln_{2-q}p_{0i} + \sum_i p_i\ln_{2-q}p_i - q\sum_i (p_i - p_{0i})\ln_{2-q}p_{0i} \qquad (19)$$
where the identity for the $(2-q)$-deformed logarithm, $\ln_{2-q}x = (x^{q-1}-1)/(q-1)$, has been used. It should be noted that the same relation leads to the q-logarithmic form of the Tsallis entropy in Equation (7), which is given by
$$S_q = -\sum_i p_i \ln_{2-q} p_i. \qquad (20)$$
Putting this form in Equation (19), we have an expression similar to Equation (12), which reads
$$K_q^B(p\|p_0) = -\Delta S_q(p_0 \to p) - q\sum_i (p_i - p_{0i})\ln_{2-q}p_{0i} \qquad (21)$$
where $\Delta S_q(p_0 \to p) = S_q(p) - S_q(p_0)$ is the change in the Tsallis entropy through an evolution from the state $p_0$ to $p$.
Substituting $p_{0i} = p_i^{ord}$, the stationary distribution of the $(2-q)$-exponential form in Equation (8), into the $(2-q)$-logarithmic function $\ln_{2-q}p_{0i}$ in Equation (21), we can immediately write
$$K_q^B(p\|p^{ord}) = -\Delta S_q(p^{ord} \to p) + \beta'\sum_i (p_i - p_i^{ord})\,\epsilon_i \qquad (22)$$
where $\beta' = q\beta^*/Z^{q-1}$. From the transformation $p_i^{ord} \to \tilde{p}_i^{ord}$ on the reference state, one can easily obtain
$$K_q^B(p\|\tilde{p}^{ord}) = -\Delta S_q(\tilde{p}^{ord} \to p) + \beta'\,\Delta E(p^{ord} \to p) \qquad (23)$$
where $\Delta E(p^{ord} \to p) = \langle E \rangle_p - \langle E_q \rangle_{p^{ord}}$ is the mean energy difference and $\tilde{p}_i^{ord} = (p_i^{ord})^q / \sum_i (p_i^{ord})^q$ is the distribution chosen so that the second terms on the right-hand sides of Equations (22) and (23) vanish at a unique value $q = q^*$, namely,
$$K_{q^*}^B(p\|\tilde{p}^{ord}) = -\Delta S_{q^*}(\tilde{p}^{ord} \to p). \qquad (24)$$
It should be noted here that the transformation enables us to equate the mean energies, i.e., $\langle E_{q^*} \rangle_{p^{ord}} = \langle E \rangle_p$, at a unique value of $q$ by taking the normalized q-average instead of the ordinary average in the calculation of the mean energy of the reference state. In other words, it renormalizes the mean energy of the reference state.
Comparing Equations (21)–(24), it can easily be shown that the compensation requires a q-Gibbsian equality given by
$$\sum_i p_i \ln_{2-q^*} p_i^{ord} = \sum_i \tilde{p}_i^{ord} \ln_{2-q^*} p_i^{ord} \qquad (25)$$
where the unique q * can be found numerically.
One can easily show that the Bregman form of the generalized relative entropy in Equation (22) is associated with the q-generalized free energy difference, as given in Equation (10a), where $F_q = \langle E \rangle_p - S_q/\beta$ and $F_q^{ord} = \langle E \rangle_{p^{ord}} - S_q/\beta$. Moreover, as can be seen in Equation (24), there is a one-to-one correspondence between the generalized relative entropy and the q-renormalized entropy due to the compensation of the mean energy differences.
It is also worth noting that the q-renormalized entropy is not a generalization of the usual renormalized entropy by Klimontovich. Although the generalized relative entropy in Equation (17) recovers the Kullback-Leibler relative entropy in the limit $q \to 1$, the Gibbsian equality in Equation (25) is ensured at $q^* = 1$ only if the transition from $p^{ord}$ to $p$ belongs to a cyclic process or the states possess the same degree of complexity. At $q = q^* \ne 1$, the value of $q$ holds the Gibbsian equality as the necessary condition of self-organization for the transition between distinct states and leads to the connection in Equation (24). Therefore, the parameter $q^*$, which is the unique value of $q$, measures the relative degree of order/disorder between the states. We confirm this using the toy model in Section 4.
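The numerical search for $q^*$ can be sketched as follows: for a pair of assumed four-bin distributions (illustrative stand-ins for the spectral distributions of Section 4), the difference between the two sides of Equation (25) is scanned in $q$, and its sign change brackets $q^*$:

```python
import numpy as np

def ln_2q(x, q):
    """(2-q)-logarithm: ln_{2-q} x = (x^(q-1) - 1)/(q - 1); ln x as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.log(x)
    return (x**(q - 1.0) - 1.0) / (q - 1.0)

def gibbs_gap(p, p_ord, q):
    """Difference between the two sides of the q-Gibbsian equality, Eq. (25)."""
    esc = p_ord**q / np.sum(p_ord**q)     # escort (renormalized) reference state
    return np.sum((p - esc) * ln_2q(p_ord, q))

# Illustrative states: a concentrated reference and a broader evolved state
p_ord = np.array([0.70, 0.10, 0.10, 0.10])
p     = np.array([0.30, 0.30, 0.20, 0.20])

# Scan q on (0, 1] and bracket the root of the gap by its sign change
qs = np.linspace(0.05, 1.0, 1000)
gaps = np.array([gibbs_gap(p, p_ord, q) for q in qs])
idx = int(np.where(np.sign(gaps[:-1]) != np.sign(gaps[1:]))[0][0])
q_star = 0.5 * (qs[idx] + qs[idx + 1])
print("q* ~", q_star)
```

The grid scan can of course be replaced by bisection once the sign change is bracketed; for these assumed states the root lies strictly between 0 and 1, mirroring the behaviour reported in Section 4.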

3.2. Derivation and Connection II: q-Renormalized Entropy and Csiszar Type of Relative Entropy

Substituting $p_{0i} = p_i^{nor}$, the stationary distribution of the q-exponential form in Equation (9), into Equation (18) and using the identity $Z^{1-q} = C_q$, where $C_q = \sum_i (p_i^{nor})^q$, the Csiszar form of the generalized relative entropy can be written as
$$K_q^C(p\|p^{nor}) = -\Delta S_q(p^{nor} \to p) + \hat{\beta}\,D_q\left[\sum_i \frac{p_i^q}{D_q}\,\epsilon_i - \langle E_q \rangle_{p^{nor}}\right] \qquad (26)$$
where $D_q = \sum_i p_i^q$. Rewriting the second term on the right-hand side of Equation (26) in terms of the q-deformed logarithm, $\ln_q x$, it follows that
$$K_q^C(p\|p^{nor}) = -\Delta S_q(p^{nor} \to p) - C_q D_q \sum_i \left[\frac{p_i^q}{D_q} - \frac{(p_i^{nor})^q}{C_q}\right]\ln_q(p_i^{nor}). \qquad (27)$$
By the transformation $p_i \to \tilde{p}_i$ on the other state, Equation (26) yields
$$K_{q^*}^C(\tilde{p}\|p^{nor}) = -\Delta S_{q^*}(p^{nor} \to \tilde{p}) \qquad (28)$$
where $\tilde{p}_i = p_i^{1/q} / \sum_i p_i^{1/q}$ is the distribution chosen so that the second terms on the right-hand sides of Equations (26) and (27) vanish at a unique value $q = q^*$, satisfying $\langle E_{q^*} \rangle_{p^{nor}} = \langle E \rangle_p$.
Comparing Equations (26) and (27), it can easily be shown that the compensation of the mean energy difference requires a Gibbsian equality given by
$$\sum_i p_i \ln_{q^*} p_i^{nor} = \sum_i \frac{(p_i^{nor})^{q^*}}{C_{q^*}} \ln_{q^*} p_i^{nor} \qquad (29)$$
where the unique q * can be found numerically.
At this point, it should be emphasized that the Gibbsian equalities in Equations (25) and (29) both lead to the same mean energy equality, $\langle E_{q^*} \rangle_{p_0} = \langle E \rangle_p$, if one applies a transformation on the reference state for the Bregman form of the generalized relative entropy (taking $p_0 = p^{ord}$) and on the other state for the Csiszar form (taking $p_0 = p^{nor}$).
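An analogous sketch locates $q^*$ from the Csiszar-side equality in Equation (29). The distributions are the same illustrative assumptions as in the Bregman sketch, and a single sign change of the gap is assumed within the scanned interval:

```python
import numpy as np

def ln_q(x, q):
    """q-logarithm: ln_q x = (x^(1-q) - 1)/(1 - q); ln x as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.log(x)
    return (x**(1.0 - q) - 1.0) / (1.0 - q)

def gibbs_gap_csiszar(p, p_nor, q):
    """Difference between the two sides of the Gibbsian equality, Eq. (29)."""
    C_q = np.sum(p_nor**q)
    lhs = np.sum(p * ln_q(p_nor, q))
    rhs = np.sum((p_nor**q / C_q) * ln_q(p_nor, q))
    return lhs - rhs

# Illustrative states (same assumed pair as in the Bregman sketch)
p_nor = np.array([0.70, 0.10, 0.10, 0.10])
p     = np.array([0.30, 0.30, 0.20, 0.20])

qs = np.linspace(0.05, 1.0, 1000)
gaps = np.array([gibbs_gap_csiszar(p, p_nor, q) for q in qs])
idx = int(np.where(np.sign(gaps[:-1]) != np.sign(gaps[1:]))[0][0])
q_star = 0.5 * (qs[idx] + qs[idx + 1])
print("q* ~", q_star)
```

For actual spectral distributions one would solve both equalities and, per the claim above, obtain the same $q^*$; the toy pair here only demonstrates the root-bracketing mechanics.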

4. Application: Logistic Map

To identify the behaviour of the q-renormalized entropies for an evolution in the control parameter space, and to illustrate the consistency of the Bregman and Csiszar forms of the relative entropies within the context of self-organization, we apply these procedures to the logistic map. In addition to its 'very simple expression' as a toy model, the logistic map has 'highly complicated' dynamics in the phase space [37]. Moreover, it is very convenient for searching whether there exists a connection between self-organization and bifurcation processes as the system parameter slightly changes.
The expression of the logistic map reads
$$f(X_t) = X_{t+1} = 1 - aX_t^2 \qquad (30)$$
where $X_t \in [-1,1]$ constitutes a sufficiently long phase-space trajectory, $t$ is the iteration step ($t = 1, 2, \ldots, N$) and $a \in [0,2]$ is the control parameter of the map.
For the evolution of the map from $a_0$ to $a_0 + \Delta a$, one can easily generate the trajectories $\{X_t(a_0)\}$ and $\{X_t(a_0 + \Delta a)\}$. The corresponding probability distributions estimated from the trajectories are $p_0 = p_0(X, a_0)$ for the reference state and $p_1 = p_1(X, a_0 + \Delta a)$ for the other state, where $\sum p_0 = \sum p_1 = 1$.
For the estimation of the distributions, we use the dependence of the spectral intensities on the frequency $w$. In other words, we use the Fourier transform $p(w, a) = F(w, a) \cdot F^*(w, a)$ of the trajectory $\{X_t(a)\}$ instead of the residence time distribution. Technically, we generate trajectories of the map in Equation (30) with a length of 65,536 points after 4096 points are discarded as transients. The spectrum is then averaged over 16 periodograms with a length of 4096 points each. The details of the estimation procedure can be found in a recent paper [10].
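A minimal sketch of this estimation procedure (with the trajectory length, transient and periodogram parameters stated above) might look as follows; the initial condition is an arbitrary assumption:

```python
import numpy as np

def logistic_trajectory(a, n, n_trans=4096, x0=0.1):
    """Iterate X_{t+1} = 1 - a*X_t^2, discarding n_trans transient points."""
    x = x0
    for _ in range(n_trans):
        x = 1.0 - a * x * x
    out = np.empty(n)
    for t in range(n):
        x = 1.0 - a * x * x
        out[t] = x
    return out

def spectral_distribution(a, n_seg=16, seg_len=4096):
    """Probability distribution over frequencies from the averaged periodogram,
    p(w, a) = F(w, a) * F^*(w, a), normalized to unit total weight."""
    traj = logistic_trajectory(a, n_seg * seg_len)
    segs = traj.reshape(n_seg, seg_len)
    spec = np.mean(np.abs(np.fft.rfft(segs, axis=1))**2, axis=0)
    return spec / spec.sum()

p0 = spectral_distribution(0.5)   # periodic (period-1) regime
p1 = spectral_distribution(2.0)   # fully chaotic regime
```

In the periodic regime the spectral weight concentrates on a few frequencies (essentially the DC bin for period 1), whereas the chaotic regime yields a broadband distribution, so the Shannon entropy of `p1` exceeds that of `p0`.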
It is well known that the bifurcation diagram of the logistic map represents very rich dynamics, where transitions with a period-doubling route to chaos arise as the control parameter changes in the range $a \in [0,2]$, as can be seen in Figure 2. The map has a critical point at $a = a_c = 1.401155$, which can be approached from the most ordered state, where the value of the control parameter is $a = 0$. From $a = 0$ (where period 1 occurs) to $a = a_c$ (where $2^\infty$ periods accumulate), the map exhibits a period-doubling cascade of $2^n$ periods. One can also approach the critical point from the most chaotic state (where the value of the control parameter is $a = 2$) via a band-splitting procedure, where $2^\infty$ bands split at the critical point. In other words, the map is in a periodic region from $a = 0$ up to $a = a_c$ with a period-doubling procedure, while it is in a chaotic region from $a = 2$ up to $a = a_c$ with a band-splitting procedure. It is also possible to see narrow periodic windows in the chaotic region that possess structures similar to those of the map in the whole range of the control parameter, i.e., $a \in [0,2]$. Moreover, the Lyapunov exponent of the map can be calculated using
$$\lambda = \lim_{N\to\infty} \frac{1}{N} \sum_{t=0}^{N-1} \log |f'(X_t)|, \qquad (31)$$
by substituting the first derivative of the map function into Equation (31), and it is used to distinguish between periodic regions ($\lambda < 0$) and chaotic ones ($\lambda > 0$) [38].
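Equation (31) can be sketched directly, with $f'(X_t) = -2aX_t$ for the map in Equation (30); the iteration counts and the initial condition below are assumptions:

```python
import numpy as np

def lyapunov(a, n=100_000, n_trans=1000, x0=0.1):
    """Lyapunov exponent of X_{t+1} = 1 - a*X_t^2, Eq. (31), f'(X) = -2aX."""
    x = x0
    for _ in range(n_trans):          # discard transients
        x = 1.0 - a * x * x
    s = 0.0
    for _ in range(n):
        x = 1.0 - a * x * x
        s += np.log(abs(-2.0 * a * x) + 1e-300)  # guard against log(0)
    return s / n

print(lyapunov(0.5))  # negative: periodic (period-1) regime
print(lyapunov(2.0))  # positive: fully developed chaos (analytically ln 2)
```

At $a = 2$ the map is conjugate to the tent map, so the estimate should approach $\ln 2 \approx 0.693$; at $a = 0.5$ the orbit settles on a stable fixed point and the exponent is negative.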
To compare the reference state of the map with all other states within the q-renormalized entropies, we choose the reference state $p_0$ at $a = a_0 = 0$ and all other states in the region $a \in [0,2]$, i.e., $p_0 = p(w, a_0 = 0)$ and $p_1 = p(w, a_0 + \Delta a)$ with a parameter increase step $\Delta a = 0.01$. To calculate the entropies, we first numerically determine the unique $q^*$ values that satisfy the Gibbsian equalities in Equations (25) and (29). We then obtain the q-renormalized entropies $\Delta S_{q^*}(\tilde{p}_0 \to p)$ and $\Delta S_{q^*}(p_0 \to \tilde{p})$, which are associated with the q-relative entropies $K_{q^*}^B(p\|\tilde{p}_0)$ and $K_{q^*}^C(\tilde{p}\|p_0)$, respectively.
In Figure 3, from top to bottom, we plot the bifurcation diagram, the Lyapunov exponent, the q-renormalized entropies in Bregman and Csiszar forms and the evolution of the $q^*$ values in the control parameter space. We denote some points just above the bifurcation diagram, $a_0, a_1, a_2, \ldots, \tilde{a}_2, \tilde{a}_1, \tilde{a}_0$, to divide the map into distinct regions analogous to the periodic and chaotic band regions in Figure 2. There are $2^{n-1}$ periods and chaotic bands in the regions $a \in [a_{n-1}, a_n]$ and $a \in [\tilde{a}_{n-1}, \tilde{a}_n]$, respectively, where $n = 1, 2, 3, \ldots, \infty$. As the control parameter evolves from $a_0 = 0$ to $\tilde{a}_0 = 2$, the periodic trajectories bifurcate at the critical points $a_n$, where the first bifurcation point is $a_1$, up to the chaos threshold $a_c$, where an infinite number of periods exists. As a reverse process, the infinite number of chaotic bands which exist at the chaos threshold $a_c$ start to merge through the critical points $\tilde{a}_n$ up to $\tilde{a}_1$, where the last chaotic band merging occurs. The Lyapunov exponent vanishes at all critical points from $a_1$ to $\tilde{a}_1$; it is negative in the range $a \in [a_0, a_c)$ and positive in the range $a \in (a_c, \tilde{a}_0]$. It can also be seen that the q-renormalized entropies in both Bregman and Csiszar forms indicate the same relative degree of order/disorder in the period-1 range, which implies a low level of self-organization/complexity. When the sequence of branches emerges at $a_1$, the relative order, i.e., the level of self-organization/complexity, increases, and the q-renormalized entropies decrease through successive bifurcations up to the chaos threshold $a_c$. Such behaviour of the q-renormalized entropy in the control parameter space $a \in [a_1, a_c]$ is compatible with that of the entropy-based measures of type-III in Figure 1.
In the chaotic band merging range $a \in (a_c, \tilde{a}_1]$, as a reverse process, the increase in the q-renormalized entropies corresponds to the entropy-based measures of type-I in Figure 1. This means that the relative order, i.e., the level of self-organization/complexity, decreases through the band merging area. It should be noted that the q-renormalized entropy in the Csiszar form evaluates the level of order in the period-1 range $a \in [a_0, a_1]$ as having the same degree of complexity as the level of disorder in the chaotic band-1 range $a \in (\tilde{a}_1, \tilde{a}_0]$. However, the degree of order/disorder in the chaotic band-1 range $a \in (\tilde{a}_1, \tilde{a}_0]$ decreases/increases except for a very thin periodic window, which is similar to the behaviour of the Lyapunov exponent in the same range of the control parameter. Moreover, the q-renormalized entropy in the Bregman form is more accurate for the localization of the chaos threshold, such that it corresponds to a local minimum in $a \in (a_1, \tilde{a}_1]$ at $a_c = 1.4011$.
The equalities between the q-generalized relative entropies and the q-renormalized entropies in Equations (24) and (28) guarantee that the evolution of the q-generalized relative entropies as complexity measures, in the range of period doublings and chaotic band mergings $a \in [a_1, \tilde{a}_1]$ at a unique $q = q^*$ value, conforms with the behaviour of the entropy-based measure of type-II in Figure 1. Such behaviour of the complexity function is similar to the behaviour of complexity in the coffee automaton (or in the experiment of coffee mixing with milk), which was discussed by defining a 'complextropy' measure that first increases and then decreases in closed thermodynamic systems, in contrast to the usual Shannon entropy, which increases monotonically [39]. Similar to that model, in the range $a_1 < a_c < \tilde{a}_1$, the $q^*$-generalized relative entropy represents the most organized spatial pattern at the chaos threshold $a = a_c$, due to the relation $K_{q^*} = -\Delta\tilde{S}_{q^*}$, where $\Delta\tilde{S}_{q^*}$ is the general definition of the q-renormalized entropy. The q-relative entropy is an evaluation of the change in entropy relative to a chosen reference state. For the transition between the reference equilibrium state and the other arbitrary state, one can numerically localize the unique $q^*$ value as the one for which the Gibbsian equalities hold. We show at the bottom of Figure 3 that the $q^*$ values are the same for both Bregman and Csiszar forms of the q-renormalized entropies for an evolution in the control parameter space of the logistic map, satisfying the Gibbsian equalities as the necessary condition of self-organization. In other words, the renormalization enables us to equate the mean energies of the states at a unique $q^*$ value in a manner that the evolution of the reference state is isolated after renormalization.
For the evolution of the $q^*$ values in the control parameter range $a \in [0,2]$, the $q^*$ values decrease from $q^* = 1$ to $q^* = 0$, where the maximum value indicates the most ordered state (period 1) and the minimum value the most disordered (strongly chaotic) state. The process offers a method, by means of the q-renormalized entropies, for measuring the level of self-organization in spatially-extended fractals. On the other hand, the Shannon entropy increases since it is proportional to the logarithm of the accessible volume in phase space; however, a decrease in entropy is necessary to establish a connection to self-organization.
In Figure 4, we zoom in on the chaos threshold in order to localize the unique q* value at the critical point. Intriguingly, this unique q* value coincides with the q_sensitivity value [28,29] (i.e., q* ≈ 0.2445). It is worth noting that this tendency of the q parameter to vary with the control parameter of the map is strongly reminiscent of the running q parameter varying with the energy scale detected in recent cosmological studies [40,41].

5. Conclusions

It is well known that the second law of thermodynamics (ΔS ≥ 0) is only valid for an isolated evolution of an arbitrary state towards an equilibrium state. This inequality can be derived by substituting the Gibbs equality into the definition of the Kullback-Leibler relative entropy, which implies that the equilibrium state exhibits the greatest disorder (or chaoticity) compared to any arbitrary state of the same mean energy. The equality of mean energies throughout the evolution is a consequence of the Gibbs equality and points to a strong limitation of the law: it is violated for the evolution of open systems, in which exchange of energy/matter with the surroundings is allowed. The problem was solved by Klimontovich via the S-theorem, where 'S' stands for the criterion of self-organization. The theorem is based on renormalizing the equilibrium distribution in such a way that the Gibbs equality holds; after renormalization, the mean energy, written in terms of a predefined effective Hamiltonian function for the open system, is constant through the evolution. The renormalization of the distribution leads to a renormalized entropy as a new complexity measure for comparing distinct states, i.e., a renormalized equilibrium state and an arbitrary one. For an isolated evolution from the renormalized equilibrium state to an arbitrary one, a decrease in the renormalized entropy indicates an increase in the relative degree of order in the system, i.e., the creation of complicated structures in phase space, as first suggested by Haken in the context of self-organization [13]. Although the renormalized entropy is a suitable measure for explaining the highly organized structures that emerge in phase space, we have shown that its expression (ΔS̃ = −K_L ≤ 0) is valid only for systems that evolve from the canonical equilibrium state.
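The derivation mentioned above can be verified on a toy example. In the sketch below (a three-level system of our own choosing, not taken from the paper), p₀ is a canonical Gibbs state and p an arbitrary state with the same mean energy; substituting the Gibbs equality ln p₀ᵢ = −βEᵢ − ln Z into the Kullback–Leibler relative entropy yields KL(p‖p₀) = S(p₀) − S(p) ≥ 0, which is exactly the second-law inequality:

```python
import numpy as np

# Canonical (Gibbs) equilibrium state p0 of a toy three-level system.
E = np.array([0.0, 1.0, 2.0])   # energy levels (illustrative)
beta = 1.0
p0 = np.exp(-beta * E)
p0 /= p0.sum()

# An arbitrary state p with the SAME mean energy: the direction (1, -2, 1)
# preserves both normalisation and <E> for these energy levels.
p = p0 + 0.05 * np.array([1.0, -2.0, 1.0])
assert np.isclose(p.sum(), 1.0) and np.isclose(p @ E, p0 @ E)

S  = lambda r: float(-np.sum(r * np.log(r)))          # Shannon entropy
KL = lambda r, s: float(np.sum(r * np.log(r / s)))    # relative entropy

# Gibbs equality turns the relative entropy into an entropy difference:
# KL(p||p0) = S(p0) - S(p) >= 0, i.e., the equilibrium state is the most
# disordered among states of equal mean energy.
print(KL(p, p0), S(p0) - S(p))
```

Once the mean energies differ, as they do for an open system, this identity breaks down, which is what motivates the renormalization step.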
Moreover, choosing a reference state in exponential form spontaneously reveals Klimontovich's predefined effective Hamiltonian function (H_eff = −ln p_0) directly from the calculations. We have also shown that an analogous relation between the q-renormalized entropy and the q-generalized relative entropies (in both the Bregman and Csiszar forms) can be written by introducing new q-Gibbsian equalities as the necessary conditions of self-organisation. The crucial point about the new equalities is that they are only valid at a unique q = q* value for the transition between two states, and they lead to the same mean-energy equality, namely ⟨E⟩_{q*}^{p_0} = ⟨E⟩_p. To achieve this result, it is necessary to apply a transformation to the reference state for the Bregman form of the generalized relative entropy, taking p_0 = p_ord, and to the other state for the Csiszar form, taking p_0 = p_nor, as the stationary distributions of the (2−q)-exponential and q-exponential forms, respectively. To verify the results numerically, we have used the control parameter evolution of the logistic map. As the control parameter changes over a ∈ [0, 2], we have shown a fall in the q-renormalized entropies through the period doublings in a ∈ [0, a_c] and a rise through the chaotic band mergings in a ∈ [a_c, 2]. This behaviour of the q-renormalized entropy is compatible with the SDL complexities of type III and type I, signalling, respectively, the emergence and destruction of highly organized structures in phase space [5]. We have also looked closely at the chaos threshold of the map and, interestingly, found that the unique q* value is q* ≈ 0.2445, which coincides with the value of q_sensitivity given in the literature [28,29].
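The appearance of both the q-exponential and the (2−q)-exponential forms reflects a standard duality of the deformed logarithm, ln_q(1/x) = −ln_{2−q}(x). The sketch below only verifies this identity numerically; it is consistent with, but not a derivation of, the pairing of forms described above, and the value q = 0.2445 is simply the threshold value quoted in the text (the identity holds for any q ≠ 1):

```python
import numpy as np

def ln_q(x, q):
    """Tsallis q-logarithm, ln_q(x) = (x**(1-q) - 1)/(1-q); -> ln(x) as q -> 1."""
    if np.isclose(q, 1.0):
        return np.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

x = np.linspace(0.1, 5.0, 50)
q = 0.2445   # the q* value found at the chaos threshold (illustrative here)
# Duality connecting the q- and (2-q)-deformed logarithms:
assert np.allclose(ln_q(1.0 / x, q), -ln_q(x, 2.0 - q))
```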
Finally, it is worth noting that these considerations could be applied to specific classes of nonextensive systems, such as black holes and other gravitational systems. Future work addressing a possible extension of our scheme to such systems would be highly welcome.

Author Contributions

Conceptualization, O.A. and U.T.; Methodology, O.A. and U.T.; Writing—review & editing, O.A. and U.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

U.T. is a member of the Science Academy, Bilim Akademisi, Turkey and acknowledges partial support from TUBITAK (Turkish Agency) under the Research Project number 121F269.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kolmogorov, A.N. Three approaches to the quantitative definition of information. Probl. Inf. Transm. 1965, 1, 1–7.
  2. Chaitin, G.J. On the length of programs for computing finite binary sequences. J. ACM 1966, 13, 547–569.
  3. Bennett, C.H. On the nature and origin of complexity in discrete, homogeneous, locally-interacting systems. Found. Phys. 1986, 16, 585–592.
  4. Lloyd, S.; Pagels, H. Complexity as thermodynamic depth. Ann. Phys. 1988, 188, 186–213.
  5. Shiner, J.S.; Davison, M.; Landsberg, P.T. Simple measure for complexity. Phys. Rev. E 1999, 59, 1459–1464.
  6. López-Ruiz, R.; Mancini, H.; Calbet, X. A statistical measure of complexity. Phys. Lett. A 1995, 209, 321–326.
  7. Feldman, D.P.; Crutchfield, J.P. Measures of statistical complexity: Why? Phys. Lett. A 1998, 238, 244–252.
  8. Newman, M.E.J. Resource letter CS-1: Complex systems. Am. J. Phys. 2011, 79, 800–810.
  9. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423.
  10. Afsar, O. A comparison of Shannon, Kullback–Leibler and renormalized entropies within successive bifurcations. Physica D 2019, 390, 62–68.
  11. Blum, H.F. Perspectives in evolution. Am. Sci. 1955, 43, 595–610.
  12. Saparin, P.; Witt, A.; Kurths, J.; Anishchenko, V. The renormalized entropy—An appropriate complexity measure? Chaos Solitons Fractals 1994, 4, 1907–1916.
  13. Haken, H. Information and Self-Organization: A Macroscopic Approach to Complex Systems (Springer Series in Synergetics); Springer: Secaucus, NJ, USA, 2006.
  14. Klimontovich, Y.L. Entropy and entropy production in the laminar and turbulent flows. Pis'ma ZhTF 1984, 10, 80.
  15. Klimontovich, Y.L. Statistical Physics; Harwood Academic Publishers: Cambridgeshire, UK, 1986.
  16. Rayleigh, L. LIX. On convection currents in a horizontal layer of fluid, when the higher temperature is on the under side. Lond. Edinb. Dubl. Philos. Mag. 1916, 32, 529–546.
  17. Bénard, H. Les tourbillons cellulaires dans une nappe liquide. Méthodes optiques d'observation et d'enregistrement. J. Phys. Theor. Appl. 1901, 10, 254–266.
  18. Taylor, G.I. VIII. Stability of a viscous liquid contained between two rotating cylinders. Philos. Trans. R. Soc. Lond. Ser. A 1923, 223, 289–343.
  19. Ben-Jacob, E.; Schochet, O.; Tenenbaum, A.; Cohen, I.; Czirók, A.; Vicsek, T. Generic modelling of cooperative growth patterns in bacterial colonies. Nature 1994, 368, 46.
  20. Pálsson, E.; Cox, E.C. Origin and evolution of circular waves and spirals in Dictyostelium discoideum territories. Proc. Natl. Acad. Sci. USA 1996, 93, 1151–1155.
  21. Klimontovich, Y.L. Criteria of self-organization. Chaos Solitons Fractals 1995, 5, 1985–2002.
  22. Tsallis, C.; Mendes, R.S.; Plastino, A.R. The role of constraints within generalized nonextensive statistics. Physica A 1998, 261, 534–554.
  23. Oikonomou, T.; Bagci, G.B. The maximization of Tsallis entropy with complete deformed functions and the problem of constraints. Phys. Lett. A 2010, 374, 2225–2229.
  24. Abe, S.; Bagci, G.B. Necessity of q-expectation value in nonextensive statistical mechanics. Phys. Rev. E 2005, 71, 016139.
  25. Ferri, G.L.; Martinez, S.; Plastino, A. Equivalence of the four versions of Tsallis's statistics. J. Stat. Mech. 2005, P04009.
  26. Bagci, G.B. Nonadditive open systems and the problem of constraints. Int. J. Mod. Phys. B 2008, 22, 3381–3396.
  27. Bagci, G.B.; Tirnakli, U. Self-organization in nonadditive systems with external noise. Int. J. Bifurcat. Chaos 2009, 19, 4247–4252.
  28. Tsallis, C.; Plastino, A.R.; Zheng, W.-M. Power-law sensitivity to initial conditions—New entropic representation. Chaos Solitons Fractals 1997, 8, 885.
  29. Costa, U.M.S.; Lyra, M.L.; Plastino, A.R.; Tsallis, C. Power-law sensitivity to initial conditions within a logisticlike family of maps: Fractality and nonextensivity. Phys. Rev. E 1997, 56, 245.
  30. Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Stat. 1951, 22, 79–86.
  31. Afsar, O.; Bagci, G.B.; Tirnakli, U. Renormalized entropy for one dimensional discrete maps: Periodic and quasi-periodic route to chaos and their robustness. Eur. Phys. J. B 2013, 86, 307.
  32. Kurths, J.; Voss, A.; Saparin, P.; Witt, A.; Kleiner, H.J.; Wessel, N. Quantitative analysis of heart rate variability. Chaos 1995, 5, 88–94.
  33. Voss, A.; Kurths, J.; Kleiner, H.; Witt, A.; Wessel, N.; Saparin, P.; Osterziel, K.; Schurath, R.; Dietz, R. The application of methods of non-linear dynamics for the improved and predictive recognition of patients threatened by sudden cardiac death. Cardiovasc. Res. 1996, 31, 419–433.
  34. Kopitzki, K.; Warnke, P.C.; Timmer, J. Quantitative analysis by renormalized entropy of invasive electroencephalograph recordings in focal epilepsy. Phys. Rev. E 1998, 58, 4859–4864.
  35. Afsar, O.; Tirnakli, U.; Kurths, J. Entropy-based complexity measures for gait data of patients with Parkinson's disease. Chaos 2016, 26, 023115.
  36. Venkatesan, R.C.; Plastino, A. Scaled Bregman divergences in a Tsallis scenario. Physica A 2011, 390, 2749–2758.
  37. May, R.M. Simple mathematical models with very complicated dynamics. Nature 1976, 261, 459–467.
  38. Schuster, H.G. Deterministic Chaos: An Introduction; Wiley-VCH: Weinheim, Germany, 2006.
  39. Aaronson, S.; Carroll, S.; Ouellette, L. Quantifying the rise and fall of complexity in closed systems: The coffee automaton. arXiv 2014, arXiv:1405.6903.
  40. Luciano, G.G.; Blasone, M. q-generalized Tsallis thermostatistics in Unruh effect for mixed fields. Phys. Rev. D 2021, 104, 045004.
  41. Nojiri, S.; Odintsov, S.D.; Saridakis, E.N. Modified cosmology from extended entropy with varying exponent. Eur. Phys. J. C 2019, 79, 242.
Figure 1. An example of types of entropy-based measures as a function of a control parameter (adapted from [5]).
Figure 2. A representation of the pitchfork bifurcations in the periodic regime (black) and the band-merging structures in the chaotic regime (blue, green, and red) of the logistic map. The black dashed lines mark the bifurcation points (a_n) and the band-merging points (ã_n).
Figure 3. For the evolution of the logistic map in the control parameter space, from top to bottom: bifurcation diagram, Lyapunov exponent, Bregman form of the q-renormalized entropy, Csiszar form of the q-renormalized entropy, and the q* values that satisfy the Gibbsian equalities in Equations (25) and (29), respectively.
Figure 4. q-Renormalized entropy (top) and q* values for the control parameter evolution of the logistic map near the chaos threshold (a ∈ [1.390, 1.4051]).
