Article

Some Properties of Fractal Tsallis Entropy

by
Vasile Preda
1,2,3,* and
Răzvan-Cornel Sfetcu
1
1
Faculty of Mathematics and Computer Science, University of Bucharest, Str. Academiei 14, 010014 Bucharest, Romania
2
“Gheorghe Mihoc-Caius Iacob” Institute of Mathematical Statistics and Applied Mathematics, Calea 13 Septembrie 13, 050711 Bucharest, Romania
3
“Costin C. Kiriţescu” National Institute of Economic Research, Calea 13 Septembrie 13, 050711 Bucharest, Romania
*
Author to whom correspondence should be addressed.
Fractal Fract. 2023, 7(5), 375; https://doi.org/10.3390/fractalfract7050375
Submission received: 31 March 2023 / Revised: 23 April 2023 / Accepted: 27 April 2023 / Published: 30 April 2023
(This article belongs to the Section Life Science, Biophysics)

Abstract
We introduce fractal Tsallis entropy and show that it satisfies Shannon–Khinchin axioms. Analogously to Tsallis divergence (or Tsallis relative entropy, according to some authors), fractal Tsallis divergence is defined and some properties of it are studied. Within this framework, Lesche stability is verified and an example concerning the microcanonical ensemble is given. We generalize the LMC complexity measure (LMC stands for López-Ruiz, Mancini and Calbet), apply it to a two-level system and define the statistical complexity by using the Euclidean and Wootters’ distance measures in order to analyze it for two-level systems.

1. Introduction

In recent years, the concept of entropy has been intensively studied and multiple generalizations have appeared, e.g., Tsallis entropy, Varma entropy, Rényi entropy, Kaniadakis entropy, fractional entropy, fractal entropy, natural time entropy, etc., with applications in many areas such as earthquakes [1,2,3,4,5,6], stock exchanges [7,8], plasma [9,10,11], signal processing [12], Markov chains [13,14,15], astrophysics [16,17], model selection [18,19], finance [20,21,22,23] and Lie symmetries [24,25].
There are also many applications in medicine, more exactly the following: electroencephalographic (EEG) records [26,27,28], medical images [29,30,31], electromyogram (EMG) records [32,33], electrocardiograms (ECG) ([34] and references therein) and photoplethysmography (PPG) ([35] and references therein).
Deng et al. [26] proposed MMSWEP (multivariate multi-scale weighted permutation entropy) for detecting the complex abnormality of Alzheimer’s disease in the resting state. This method takes into consideration permutation patterns that incorporate the correlations between different brain regions, different temporal scales and the amplitude information. In the beginning, synthetic data are used in order to validate the performance of MMSWEP; then, the complexity features of Alzheimer’s disease patients and of normal controls in different EEG frequency bands are extracted using the MMSWEP method.
Tylová et al. [27] used the permutation entropy of equidistantly sampled data. This method reduces the bias of permutation entropy estimates, as well as the time complexity and memory requirements of the permutation analysis; hence, there are no limitations on the permutation sample and EEG signal lengths.
Yin et al. [28] introduced MPEr (multiscale permutation Rényi entropy), a new complexity algorithm which is applied to epileptic EEG signals. Its performance is compared with MPE (multiscale permutation entropy), PEr (permutation Rényi entropy) and other complexity measure algorithms. Moreover, the complexity of different epileptic EEG signals and the statistical analysis are performed using the MPEr algorithm.
Chen and Ramabadran [29] implemented the entropy-coded DPCM (Differential Pulse Code Modulation) method using two different quantizers, each having a large number of quantization levels. The authors evaluated this method from the point of view of the quality of the reconstructed images and the compression performance, using some MR (magnetic resonance) and US (ultrasound) images. They also show that the entropy-coded DPCM method can meaningfully outperform the JPEG (Joint Photographic Experts Group) standard.
Rodrigues and Giraldi [30] proposed a generalization of the pseudo-additive property of Tsallis entropy. Image segmentation is presented as an application of this entropy. As in other papers, we can see that it is precisely the nonadditivity of Tsallis entropy that helps us to obtain results which are applied in many areas.
Studholme et al. [31] studied the development of entropy-based registration criteria for automatic alignment of multi-modality 3D medical images. The authors introduced a normalized measure, defined as the ratio of the sum of the marginal entropies to the joint entropy. Using experiments on clinical data and a simple image model, the behaviour of the new normalized measure under changing overlap is examined. The normalized entropy measure shows significantly improved behaviour.
Cuesta-Frau [32] defined SlopEn (Slope Entropy). This concept operates in a different way, by using a novel encoding method based on the slope generated by two consecutive data samples, disregarding time series amplitude information and keeping the symbolic representation of subsequences.
Zhang and Su [33] proposed an improved particle swarm optimization algorithm. The experimental results show that the combined sample entropy algorithm has strong robustness, real-time capability and anti-interference in the signal recognition of dual-channel EMGs.
In order to define most of the entropies mentioned above, a deformed logarithm was used. The fractal and fractional entropies were defined using the classical logarithm. Wang [36] introduced the fractal entropy in order to describe complex systems which exhibit a fractal or chaotic phase space. Ubriaco [37] introduced the fractional entropy and used it to study anomalous diffusion in [38]. Radhakrishnan et al. [39] defined a two-parameter generalized entropy which can be reduced, in some particular cases, to these entropies. Using these ideas, we introduce, in this paper, fractal Tsallis entropy. More exactly, the Tsallis logarithm is used instead of the natural logarithm. We prove the Shannon–Khinchin axioms corresponding to this entropy. We introduce the fractal Tsallis divergence and study some of its properties. As a thermodynamic property, we check Lesche stability for the aforementioned entropy and give an example concerning the microcanonical ensemble. We define a generalization of the LMC (López-Ruiz, Mancini and Calbet) complexity measure (see [40]), by using the fractal Tsallis entropy, in order to apply it to a two-level system. We construct complexity measures for disequilibrium distances such as the Euclidean distance and Wootters’ distance.

2. Fractal Tsallis Entropy

Definition 1. 
For any $\alpha \in \mathbb{R}^{*}$ and any $x \in [0,\infty)$, we define the Tsallis logarithm via
$$\log_{\alpha}^{T} x = \begin{cases} \dfrac{x^{\alpha}-1}{\alpha} & \text{if } x > 0, \\[4pt] 0 & \text{if } x = 0. \end{cases}$$
For more information on the properties of the functions defined by Tsallis, we recommend [41]. In this paper, we denote $\alpha = 1-q$, where $q$ is the Tsallis entropic index.
We consider $\alpha \in (0,\infty)$, $\beta \in (0,1]$, $n \in \mathbb{N}^{*} = \{1,2,\dots\}$ and $p = (p_1,\dots,p_n)$ a probability distribution, i.e., $p_1,\dots,p_n \in [0,1]$ such that $\sum_{i=1}^{n} p_i = 1$.
Definition 2. 
We define the fractal Tsallis entropy by
$$S_{\alpha,\beta}^{T}(p) = S_{\alpha,\beta}^{T}(p_1,\dots,p_n) = -\sum_{i=1}^{n} p_i^{\beta}\log_{\alpha}^{T}(p_i) = \sum_{i=1}^{n}\frac{p_i^{\beta}-p_i^{\alpha+\beta}}{\alpha}.$$
Remark 1. 
If $\beta = 1$, we obtain Tsallis entropy (see [42,43]) in the previous definition.
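As a purely illustrative complement (not part of the original text), the following Python sketch, assuming NumPy is available and using arbitrary parameter values, evaluates the fractal Tsallis entropy directly from Definition 2 and checks the reduction of Remark 1 at $\beta = 1$.

```python
# Illustrative sketch: fractal Tsallis entropy of Definition 2.
import numpy as np

def tsallis_log(x, alpha):
    """Tsallis logarithm log_alpha^T(x), with the convention log_alpha^T(0) = 0."""
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, (x**alpha - 1.0) / alpha, 0.0)

def fractal_tsallis_entropy(p, alpha, beta):
    """S_{alpha,beta}^T(p) = -sum_i p_i^beta * log_alpha^T(p_i)."""
    p = np.asarray(p, dtype=float)
    return -np.sum(p**beta * tsallis_log(p, alpha))

p = np.array([0.5, 0.3, 0.2])
alpha, beta = 0.7, 0.9
print(fractal_tsallis_entropy(p, alpha, beta))

# beta = 1 reduces to (1 - sum_i p_i^(1+alpha)) / alpha, i.e., the Tsallis entropy of Remark 1.
S_beta1 = fractal_tsallis_entropy(p, alpha, 1.0)
assert np.isclose(S_beta1, (1.0 - np.sum(p**(1.0 + alpha))) / alpha)
```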
Theorem 1. 
We have the following properties for fractal Tsallis entropy:
1. $S_{\alpha,\beta}^{T}(p) \geq 0$.
2. The function $S_{\alpha,\beta}^{T}$ is continuous in each variable.
3. $S_{\alpha,\beta}^{T}(p_1,\dots,p_n,0) = S_{\alpha,\beta}^{T}(p_1,\dots,p_n)$.
4. $S_{\alpha,\beta}^{T}(p_1,\dots,p_n) \leq S_{\alpha,\beta}^{T}\left(\frac{1}{n},\dots,\frac{1}{n}\right)$.
Proof. 
1. Because $\alpha+\beta > \beta$ and $p_i \in [0,1]$ for any $i \in \{1,\dots,n\}$, we have $p_i^{\alpha+\beta} \leq p_i^{\beta}$; hence, $S_{\alpha,\beta}^{T}(p) \geq 0$.
2. Because in the expression of $S_{\alpha,\beta}^{T}(p)$ only operations with continuous functions appear, we obtain the conclusion.
3. $S_{\alpha,\beta}^{T}(p_1,\dots,p_n,0) = -\sum_{i=1}^{n} p_i^{\beta}\log_{\alpha}^{T}(p_i) - 0^{\beta}\log_{\alpha}^{T}(0) = -\sum_{i=1}^{n} p_i^{\beta}\log_{\alpha}^{T}(p_i) = S_{\alpha,\beta}^{T}(p_1,\dots,p_n)$.
4. Taking into account the continuity (see 2.), there exists $(p_1^{*},\dots,p_n^{*}) \in [0,1]^{n}$ such that $\sum_{i=1}^{n} p_i^{*} = 1$ and $S_{\alpha,\beta}^{T}(p_1^{*},\dots,p_n^{*}) = \max\left\{S_{\alpha,\beta}^{T}(p_1,\dots,p_n) : (p_1,\dots,p_n)\in[0,1]^{n},\ \sum_{i=1}^{n}p_i=1\right\}$.
If there is $i \in \{1,\dots,n\}$ such that $p_i^{*} = 1$, then $p_j^{*} = 0$ for any $j \in \{1,\dots,n\}$, $j \neq i$, and $S_{\alpha,\beta}^{T}(p_1^{*},\dots,p_n^{*}) = 0$, which is the minimum, not the maximum.
Hence $(p_1^{*},\dots,p_n^{*}) \in (0,1)^{n}$ and $(p_1^{*},\dots,p_n^{*})$ is a stationary point of $S_{\alpha,\beta}^{T}$ conditioned by $\sum_{i=1}^{n} p_i = 1$. Applying Lagrange’s Multipliers Theorem, we obtain that, for any $i,j \in \{1,\dots,n\}$, the following relationships hold:
$$\beta p_i^{*\,\beta-1} - (\alpha+\beta)\, p_i^{*\,\alpha+\beta-1} = \beta p_j^{*\,\beta-1} - (\alpha+\beta)\, p_j^{*\,\alpha+\beta-1} \;\Longleftrightarrow\; p_i^{*\,\beta-1}\left[\beta - (\alpha+\beta)\, p_i^{*\,\alpha}\right] = p_j^{*\,\beta-1}\left[\beta - (\alpha+\beta)\, p_j^{*\,\alpha}\right].$$
We shall prove that $p_i^{*} = p_j^{*}$.
If $p_i^{*} < p_j^{*}$, then $\beta p_i^{*\,\beta-1} \geq \beta p_j^{*\,\beta-1}$ and $\beta - (\alpha+\beta)\, p_i^{*\,\alpha} > \beta - (\alpha+\beta)\, p_j^{*\,\alpha}$; hence
$$p_i^{*\,\beta-1}\left[\beta - (\alpha+\beta)\, p_i^{*\,\alpha}\right] > p_j^{*\,\beta-1}\left[\beta - (\alpha+\beta)\, p_j^{*\,\alpha}\right],$$
which is a contradiction.
If $p_i^{*} > p_j^{*}$, then $\beta p_i^{*\,\beta-1} \leq \beta p_j^{*\,\beta-1}$ and $\beta - (\alpha+\beta)\, p_i^{*\,\alpha} < \beta - (\alpha+\beta)\, p_j^{*\,\alpha}$; hence
$$p_i^{*\,\beta-1}\left[\beta - (\alpha+\beta)\, p_i^{*\,\alpha}\right] < p_j^{*\,\beta-1}\left[\beta - (\alpha+\beta)\, p_j^{*\,\alpha}\right],$$
which is a contradiction.
So $p_i^{*} = p_j^{*}$ for any $i,j \in \{1,\dots,n\}$ and, because $\sum_{i=1}^{n} p_i^{*} = 1$, it follows that $p_i^{*} = \frac{1}{n}$ for any $i \in \{1,\dots,n\}$. □
Notation. We denote
$$S_{\alpha,\beta}^{T,\max} = S_{\alpha,\beta}^{T}\left(\frac{1}{n},\dots,\frac{1}{n}\right) = \frac{n\left[\left(\frac{1}{n}\right)^{\beta}-\left(\frac{1}{n}\right)^{\alpha+\beta}\right]}{\alpha} = \frac{n^{\alpha}-1}{\alpha\, n^{\alpha+\beta-1}}.$$
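The maximality property and the closed form of $S_{\alpha,\beta}^{T,\max}$ can be spot-checked numerically. The following sketch (illustrative only, assuming NumPy; the values of $\alpha$, $\beta$ and $n$ are arbitrary) samples random probability vectors and compares them with the uniform distribution.

```python
# Illustrative sketch: Theorem 1, property 4, and the closed form of S^{T,max}.
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, n = 0.6, 0.8, 5

def S(p):
    return np.sum((p**beta - p**(alpha + beta)) / alpha)

S_max_closed = (n**alpha - 1.0) / (alpha * n**(alpha + beta - 1.0))
S_uniform = S(np.full(n, 1.0 / n))
assert np.isclose(S_uniform, S_max_closed)

# Random points of the simplex never exceed the value at the uniform distribution.
samples = rng.dirichlet(np.ones(n), size=10_000)
assert np.all([S(p) <= S_uniform + 1e-12 for p in samples])
```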
Lemma 1 
(see [43]). We have the following pseudo-additivity property concerning the Tsallis logarithm:
$$\log_{\alpha}^{T}(xy) = \log_{\alpha}^{T}(x) + \log_{\alpha}^{T}(y) + \alpha\,\log_{\alpha}^{T}(x)\log_{\alpha}^{T}(y),$$
valid for any $x, y \in (0,\infty)$.
Theorem 2. 
Let $m, n \in \mathbb{N}^{*}$. We take, for any $i \in \{1,\dots,m\}$ and any $j \in \{1,\dots,n\}$, the probability of occurrence of a joint event, denoted by $p_{ij}$, in which $p_i$ and $q_j$ are the probabilities of occurrence of the individual events. We assume that the two events are independent, i.e., $p_{ij} = p_i q_j$ for any $i \in \{1,\dots,m\}$ and $j \in \{1,\dots,n\}$.
Let $P = (p_{11},\dots,p_{1n},p_{21},\dots,p_{mn})$.
Then we have the following pseudo-additivity property for fractal Tsallis entropy:
$$S_{\alpha,\beta}^{T}(P) = \sum_{j=1}^{n} q_j^{\beta}\, S_{\alpha,\beta}^{T}(p_1,\dots,p_m) + \sum_{i=1}^{m} p_i^{\beta}\, S_{\alpha,\beta}^{T}(q_1,\dots,q_n) - \alpha\, S_{\alpha,\beta}^{T}(p_1,\dots,p_m)\, S_{\alpha,\beta}^{T}(q_1,\dots,q_n).$$
Proof. 
Using Lemma 1, we obtain:
$$\begin{aligned}
S_{\alpha,\beta}^{T}(P) &= -\sum_{i=1}^{m}\sum_{j=1}^{n} p_{ij}^{\beta}\log_{\alpha}^{T} p_{ij} = -\sum_{i=1}^{m}\sum_{j=1}^{n} (p_i q_j)^{\beta}\log_{\alpha}^{T}(p_i q_j) \\
&= -\sum_{i=1}^{m}\sum_{j=1}^{n} p_i^{\beta} q_j^{\beta}\left[\log_{\alpha}^{T} p_i + \log_{\alpha}^{T} q_j + \alpha\,\log_{\alpha}^{T} p_i \log_{\alpha}^{T} q_j\right] \\
&= \sum_{j=1}^{n} q_j^{\beta}\, S_{\alpha,\beta}^{T}(p_1,\dots,p_m) + \sum_{i=1}^{m} p_i^{\beta}\, S_{\alpha,\beta}^{T}(q_1,\dots,q_n) - \alpha\, S_{\alpha,\beta}^{T}(p_1,\dots,p_m)\, S_{\alpha,\beta}^{T}(q_1,\dots,q_n).
\end{aligned}$$
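A quick numerical spot-check of this pseudo-additivity identity (an illustrative sketch, not part of the paper; NumPy assumed, parameter values arbitrary) is the following.

```python
# Illustrative sketch: pseudo-additivity of Theorem 2 for p_ij = p_i * q_j.
import numpy as np

def S(p, alpha, beta):
    p = np.asarray(p, dtype=float)
    return np.sum((p**beta - p**(alpha + beta)) / alpha)

alpha, beta = 0.4, 0.7
p = np.array([0.2, 0.5, 0.3])
q = np.array([0.6, 0.4])
P = np.outer(p, q).ravel()          # joint distribution of the two independent events

lhs = S(P, alpha, beta)
rhs = (np.sum(q**beta) * S(p, alpha, beta)
       + np.sum(p**beta) * S(q, alpha, beta)
       - alpha * S(p, alpha, beta) * S(q, alpha, beta))
assert np.isclose(lhs, rhs)
```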

3. Fractal Tsallis Divergence

Let $\alpha \in (0,\infty)$, $\beta \in (0,1]$, $n \in \mathbb{N}^{*}$ and $p = (p_1,\dots,p_n)$, $q = (q_1,\dots,q_n)$ be two probability distributions. We use the convention $\frac{0}{0} = 0$ and assume that the following implication is true: $q_i = 0 \Rightarrow p_i = 0$.
Definition 3. 
The fractal Tsallis divergence is given by
$$D_{\alpha,\beta}^{T}(p\,\|\,q) = \sum_{i=1}^{n} p_i^{\beta}\log_{\alpha}^{T}\frac{p_i}{q_i}.$$
Remark 2. 
If $\beta = 1$, we obtain Tsallis divergence (see [44,45,46]) in the last definition.
Lemma 2 
(see [47]). For any $x \in (0,\infty)\setminus\{1\}$, let $\varphi_x : \mathbb{R}\setminus\{0\} \to \mathbb{R}$,
$$\varphi_x(t) = \frac{x^{t}-1}{t}.$$
The function $\varphi_x$ is strictly increasing.
Theorem 3. 
We have the following properties for the fractal Tsallis divergence:
1. If $\sum_{i=1}^{n} p_i^{\beta} \geq \sum_{i=1}^{n} q_i^{\beta}$, then $D_{\alpha,\beta}^{T}(p\,\|\,q) \geq 0$.
2. Assume that $\sum_{i=1}^{n} p_i^{\beta} \geq \sum_{i=1}^{n} q_i^{\beta}$. The next assertions are equivalent:
(i) $D_{\alpha,\beta}^{T}(p\,\|\,q) = 0$.
(ii) $p = q$ (i.e., $p_i = q_i$ for any $i \in \{1,\dots,n\}$).
3. $D_{\alpha,\beta}^{T}(p\,\|\,q)$ is a continuous function in each variable.
4. $D_{\alpha,\beta}^{T}((p_1,\dots,p_i,\dots,p_j,\dots,p_n)\,\|\,(q_1,\dots,q_i,\dots,q_j,\dots,q_n)) = D_{\alpha,\beta}^{T}((p_1,\dots,p_j,\dots,p_i,\dots,p_n)\,\|\,(q_1,\dots,q_j,\dots,q_i,\dots,q_n))$.
5. $D_{\alpha,\beta}^{T}((p_1,\dots,p_n,0)\,\|\,(q_1,\dots,q_n,0)) = D_{\alpha,\beta}^{T}((p_1,\dots,p_n)\,\|\,(q_1,\dots,q_n))$.
Proof. 
1. $D_{\alpha,\beta}^{T}(p\,\|\,q) = \sum_{i=1}^{n} p_i^{\beta}\log_{\alpha}^{T}\frac{p_i}{q_i} = \sum_{i=1}^{n} p_i^{\beta}\,\frac{\left(\frac{p_i}{q_i}\right)^{\alpha}-1}{\alpha}$.
Using Lemma 2 (with $-\beta < \alpha$), we obtain that $\frac{\left(\frac{p_i}{q_i}\right)^{\alpha}-1}{\alpha} \geq \frac{\left(\frac{p_i}{q_i}\right)^{-\beta}-1}{-\beta}$.
Hence $D_{\alpha,\beta}^{T}(p\,\|\,q) \geq \sum_{i=1}^{n} p_i^{\beta}\,\frac{\left(\frac{p_i}{q_i}\right)^{-\beta}-1}{-\beta} = \frac{\sum_{i=1}^{n} p_i^{\beta} - \sum_{i=1}^{n} q_i^{\beta}}{\beta} \geq 0$.
2. (i) $\Rightarrow$ (ii)
If there exists $i_0 \in \{1,\dots,n\}$ such that $p_{i_0} \neq q_{i_0}$, then
$$\frac{\left(\frac{p_{i_0}}{q_{i_0}}\right)^{\alpha}-1}{\alpha} > \frac{\left(\frac{p_{i_0}}{q_{i_0}}\right)^{-\beta}-1}{-\beta};$$
hence
$$\sum_{i=1}^{n} p_i^{\beta}\,\frac{\left(\frac{p_i}{q_i}\right)^{\alpha}-1}{\alpha} > \sum_{i=1}^{n} p_i^{\beta}\,\frac{\left(\frac{p_i}{q_i}\right)^{-\beta}-1}{-\beta} = \frac{\sum_{i=1}^{n}\left(p_i^{\beta} - q_i^{\beta}\right)}{\beta} \geq 0,$$
which is a contradiction.
So $p_i = q_i$ for any $i \in \{1,\dots,n\}$.
(ii) $\Rightarrow$ (i).
If $p_i = q_i$ for any $i \in \{1,\dots,n\}$, then
$$D_{\alpha,\beta}^{T}(p\,\|\,q) = \sum_{i=1}^{n} p_i^{\beta}\log_{\alpha}^{T} 1 = \sum_{i=1}^{n} p_i^{\beta}\cdot 0 = 0.$$
3. Because in the expression of $D_{\alpha,\beta}^{T}(p\,\|\,q)$ only operations with continuous functions appear, we obtain the conclusion.
4. $D_{\alpha,\beta}^{T}((p_1,\dots,p_i,\dots,p_j,\dots,p_n)\,\|\,(q_1,\dots,q_i,\dots,q_j,\dots,q_n)) = p_1^{\beta}\log_{\alpha}^{T}\frac{p_1}{q_1} + \dots + p_i^{\beta}\log_{\alpha}^{T}\frac{p_i}{q_i} + \dots + p_j^{\beta}\log_{\alpha}^{T}\frac{p_j}{q_j} + \dots + p_n^{\beta}\log_{\alpha}^{T}\frac{p_n}{q_n} = p_1^{\beta}\log_{\alpha}^{T}\frac{p_1}{q_1} + \dots + p_j^{\beta}\log_{\alpha}^{T}\frac{p_j}{q_j} + \dots + p_i^{\beta}\log_{\alpha}^{T}\frac{p_i}{q_i} + \dots + p_n^{\beta}\log_{\alpha}^{T}\frac{p_n}{q_n} = D_{\alpha,\beta}^{T}((p_1,\dots,p_j,\dots,p_i,\dots,p_n)\,\|\,(q_1,\dots,q_j,\dots,q_i,\dots,q_n))$.
5. Using the convention $\frac{0}{0} = 0$, we have
$$D_{\alpha,\beta}^{T}((p_1,\dots,p_n,0)\,\|\,(q_1,\dots,q_n,0)) = \sum_{i=1}^{n} p_i^{\beta}\log_{\alpha}^{T}\frac{p_i}{q_i} + 0^{\beta}\log_{\alpha}^{T}\frac{0}{0} = \sum_{i=1}^{n} p_i^{\beta}\log_{\alpha}^{T}\frac{p_i}{q_i} = D_{\alpha,\beta}^{T}((p_1,\dots,p_n)\,\|\,(q_1,\dots,q_n)).$$
The following example shows that, if we drop the condition $\sum_{i=1}^{n} p_i^{\beta} \geq \sum_{i=1}^{n} q_i^{\beta}$ in Theorem 3, we can have $D_{\alpha,\beta}^{T}(p\,\|\,q) < 0$.
Example 1. 
Let $n = 3$, $\alpha = \frac{1}{2}$, $\beta = \frac{1}{2}$, $p = (p_1,p_2,p_3) = \left(\frac{4^2}{100^2}, \frac{25^2}{100^2}, \frac{9^2}{100^2}\right)$ and $q = (q_1,q_2,q_3) = \left(\frac{1}{100^2}, \frac{50^2}{100^2}, \frac{9^2}{100^2}\right)$.
We remark that
$$\sum_{i=1}^{3} p_i^{\beta} = \frac{38}{100} < \frac{60}{100} = \sum_{i=1}^{3} q_i^{\beta}.$$
We also have
$$D_{\alpha,\beta}^{T}(p\,\|\,q) = \sum_{i=1}^{3} p_i^{\frac{1}{2}}\,\frac{\left(\frac{p_i}{q_i}\right)^{\frac{1}{2}}-1}{\frac{1}{2}} = 2\left(\frac{16}{100} + \frac{25}{2\cdot 100} + \frac{9}{100} - \frac{38}{100}\right) < 0.$$
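The example can be replayed directly from Definition 3, as in the following illustrative sketch (NumPy assumed; the vectors are those of Example 1 as reconstructed above).

```python
# Illustrative sketch: evaluating the fractal Tsallis divergence of Example 1.
import numpy as np

def tsallis_log(x, alpha):
    return (x**alpha - 1.0) / alpha

def D(p, q, alpha, beta):
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sum(p**beta * tsallis_log(p / q, alpha))

alpha = beta = 0.5
p = np.array([4**2, 25**2, 9**2]) / 100**2
q = np.array([1, 50**2, 9**2]) / 100**2
print(np.sum(p**beta), np.sum(q**beta))   # 0.38 < 0.60: the condition of Theorem 3 fails
print(D(p, q, alpha, beta))               # negative, as claimed in Example 1
```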
Theorem 4. 
Let $m, n \in \mathbb{N}^{*}$, $p^{(1)} = (p_1^{(1)},\dots,p_m^{(1)})$, $q^{(1)} = (q_1^{(1)},\dots,q_n^{(1)})$, $p^{(2)} = (p_1^{(2)},\dots,p_m^{(2)})$, $q^{(2)} = (q_1^{(2)},\dots,q_n^{(2)})$ be probability distributions and $P^{(1)} = (p_{11}^{(1)},\dots,p_{1n}^{(1)},p_{21}^{(1)},\dots,p_{mn}^{(1)})$, $P^{(2)} = (p_{11}^{(2)},\dots,p_{1n}^{(2)},p_{21}^{(2)},\dots,p_{mn}^{(2)})$, where $p_{ij}^{(1)} = p_i^{(1)} q_j^{(1)}$, $p_{ij}^{(2)} = p_i^{(2)} q_j^{(2)}$ for any $i \in \{1,\dots,m\}$ and $j \in \{1,\dots,n\}$. Hence
$$D_{\alpha,\beta}^{T}(P^{(1)}\,\|\,P^{(2)}) = \sum_{j=1}^{n}\left(q_j^{(1)}\right)^{\beta} D_{\alpha,\beta}^{T}(p^{(1)}\,\|\,p^{(2)}) + \sum_{i=1}^{m}\left(p_i^{(1)}\right)^{\beta} D_{\alpha,\beta}^{T}(q^{(1)}\,\|\,q^{(2)}) + \alpha\, D_{\alpha,\beta}^{T}(p^{(1)}\,\|\,p^{(2)})\, D_{\alpha,\beta}^{T}(q^{(1)}\,\|\,q^{(2)}).$$
Proof. 
We have:
$$\begin{aligned}
D_{\alpha,\beta}^{T}(P^{(1)}\,\|\,P^{(2)}) &= \sum_{i=1}^{m}\sum_{j=1}^{n}\left(p_{ij}^{(1)}\right)^{\beta}\log_{\alpha}^{T}\frac{p_{ij}^{(1)}}{p_{ij}^{(2)}} = \sum_{i=1}^{m}\sum_{j=1}^{n}\left(p_i^{(1)} q_j^{(1)}\right)^{\beta}\log_{\alpha}^{T}\frac{p_i^{(1)} q_j^{(1)}}{p_i^{(2)} q_j^{(2)}} \\
&= \sum_{i=1}^{m}\sum_{j=1}^{n}\left(p_i^{(1)}\right)^{\beta}\left(q_j^{(1)}\right)^{\beta}\left[\log_{\alpha}^{T}\frac{p_i^{(1)}}{p_i^{(2)}} + \log_{\alpha}^{T}\frac{q_j^{(1)}}{q_j^{(2)}} + \alpha\log_{\alpha}^{T}\frac{p_i^{(1)}}{p_i^{(2)}}\log_{\alpha}^{T}\frac{q_j^{(1)}}{q_j^{(2)}}\right] \\
&= \sum_{j=1}^{n}\left(q_j^{(1)}\right)^{\beta}\sum_{i=1}^{m}\left(p_i^{(1)}\right)^{\beta}\log_{\alpha}^{T}\frac{p_i^{(1)}}{p_i^{(2)}} + \sum_{i=1}^{m}\left(p_i^{(1)}\right)^{\beta}\sum_{j=1}^{n}\left(q_j^{(1)}\right)^{\beta}\log_{\alpha}^{T}\frac{q_j^{(1)}}{q_j^{(2)}} \\
&\quad + \alpha\sum_{i=1}^{m}\left(p_i^{(1)}\right)^{\beta}\log_{\alpha}^{T}\frac{p_i^{(1)}}{p_i^{(2)}}\sum_{j=1}^{n}\left(q_j^{(1)}\right)^{\beta}\log_{\alpha}^{T}\frac{q_j^{(1)}}{q_j^{(2)}} \\
&= \sum_{j=1}^{n}\left(q_j^{(1)}\right)^{\beta} D_{\alpha,\beta}^{T}(p^{(1)}\,\|\,p^{(2)}) + \sum_{i=1}^{m}\left(p_i^{(1)}\right)^{\beta} D_{\alpha,\beta}^{T}(q^{(1)}\,\|\,q^{(2)}) + \alpha\, D_{\alpha,\beta}^{T}(p^{(1)}\,\|\,p^{(2)})\, D_{\alpha,\beta}^{T}(q^{(1)}\,\|\,q^{(2)}).
\end{aligned}$$
Because the fractal Tsallis divergence is not generally symmetric in p and q, we define the symmetric measure
$$J_{\alpha,\beta}^{T}(p,q) = \frac{1}{2}\left[D_{\alpha,\beta}^{T}(p\,\|\,q) + D_{\alpha,\beta}^{T}(q\,\|\,p)\right],$$
which has the following properties:
1. $J_{\alpha,\beta}^{T}(p,q) = J_{\alpha,\beta}^{T}(q,p)$.
2. If $\sum_{i=1}^{n} p_i^{\beta} = \sum_{i=1}^{n} q_i^{\beta}$, then $J_{\alpha,\beta}^{T}(p,q) \geq 0$.
3. Assume that $\sum_{i=1}^{n} p_i^{\beta} = \sum_{i=1}^{n} q_i^{\beta}$. The next assertions are equivalent:
(i) $J_{\alpha,\beta}^{T}(p,q) = 0$.
(ii) $p = q$.
In the sequel we withdraw the assumption $q_i = 0 \Rightarrow p_i = 0$. The mathematical expression corresponding to the modified fractal Tsallis divergence is
$$D_{\alpha,\beta}^{T}(p,q) = D_{\alpha,\beta}^{T}\!\left(p \,\Big\|\, \frac{p+q}{2}\right) = \sum_{i=1}^{n} p_i^{\beta}\log_{\alpha}^{T}\frac{p_i}{\frac{p_i+q_i}{2}} = \sum_{i=1}^{n} p_i^{\beta}\log_{\alpha}^{T}\frac{2p_i}{p_i+q_i}.$$
This measure satisfies all the aforementioned properties of the fractal Tsallis divergence, but it is not symmetric. As above, we consider
$$J_{\alpha,\beta}^{T}(p,q) = \frac{1}{2}\left[D_{\alpha,\beta}^{T}(p,q) + D_{\alpha,\beta}^{T}(q,p)\right].$$
We also have:
1. $J_{\alpha,\beta}^{T}(p,q) = J_{\alpha,\beta}^{T}(q,p)$.
2. If $\sum_{i=1}^{n} p_i^{\beta} \geq \sum_{i=1}^{n}\left(\frac{p_i+q_i}{2}\right)^{\beta}$ and $\sum_{i=1}^{n} q_i^{\beta} \geq \sum_{i=1}^{n}\left(\frac{p_i+q_i}{2}\right)^{\beta}$, then $J_{\alpha,\beta}^{T}(p,q) \geq 0$.
3. Assume that $\sum_{i=1}^{n} p_i^{\beta} \geq \sum_{i=1}^{n}\left(\frac{p_i+q_i}{2}\right)^{\beta}$ and $\sum_{i=1}^{n} q_i^{\beta} \geq \sum_{i=1}^{n}\left(\frac{p_i+q_i}{2}\right)^{\beta}$. The next assertions are equivalent:
(i) $J_{\alpha,\beta}^{T}(p,q) = 0$.
(ii) $p = q$.
The next proposition gives a relationship between the last two symmetric measures.
Proposition 1. 
We have
$$J_{\alpha,\beta}^{T}(p,q) \leq \frac{1}{2}\, J_{\frac{\alpha}{2},\beta}^{T}(p,q),$$
where the left-hand side is the symmetric measure built from the modified divergence and the right-hand side is the one built from the original fractal Tsallis divergence.
Proof. 
Because, for any $i \in \{1,\dots,n\}$,
$$\frac{2p_i}{p_i+q_i} \leq \sqrt{\frac{p_i}{q_i}},$$
we obtain
$$\log_{\alpha}^{T}\frac{2p_i}{p_i+q_i} \leq \frac{1}{2}\log_{\frac{\alpha}{2}}^{T}\frac{p_i}{q_i}.$$
Hence
$$\sum_{i=1}^{n} p_i^{\beta}\log_{\alpha}^{T}\frac{2p_i}{p_i+q_i} \leq \frac{1}{2}\sum_{i=1}^{n} p_i^{\beta}\log_{\frac{\alpha}{2}}^{T}\frac{p_i}{q_i}.$$
In the same way, it can be seen that
$$\sum_{i=1}^{n} q_i^{\beta}\log_{\alpha}^{T}\frac{2q_i}{p_i+q_i} \leq \frac{1}{2}\sum_{i=1}^{n} q_i^{\beta}\log_{\frac{\alpha}{2}}^{T}\frac{q_i}{p_i}.$$
We conclude that
$$J_{\alpha,\beta}^{T}(p,q) \leq \frac{1}{2}\, J_{\frac{\alpha}{2},\beta}^{T}(p,q).$$
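The inequality of Proposition 1 can be spot-checked numerically on random probability vectors, as in the following illustrative sketch (NumPy assumed; parameter values arbitrary).

```python
# Illustrative sketch: Proposition 1 on randomly sampled probability vectors.
import numpy as np

def tsallis_log(x, alpha):
    return (x**alpha - 1.0) / alpha

def D(p, q, alpha, beta):                 # divergence of Definition 3
    return np.sum(p**beta * tsallis_log(p / q, alpha))

def J(p, q, alpha, beta):                 # symmetric measure built from Definition 3
    return 0.5 * (D(p, q, alpha, beta) + D(q, p, alpha, beta))

def J_mod(p, q, alpha, beta):             # symmetric measure built from the modified divergence
    m = 0.5 * (p + q)
    return 0.5 * (D(p, m, alpha, beta) + D(q, m, alpha, beta))

rng = np.random.default_rng(1)
alpha, beta, n = 0.9, 0.6, 4
for _ in range(1000):
    p, q = rng.dirichlet(np.ones(n)), rng.dirichlet(np.ones(n))
    assert J_mod(p, q, alpha, beta) <= 0.5 * J(p, q, alpha / 2.0, beta) + 1e-12
```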

4. Thermodynamic Properties

It is known that, by optimizing the entropy subject to the norm constraint and the energy constraint, we can obtain the canonical probability distribution $p_i$. Using an analogous procedure for the fractal Tsallis entropy, we construct the functional
$$L_{\alpha,\beta}^{T}(p_1,\dots,p_n) = -\sum_{i=1}^{n} p_i^{\beta}\log_{\alpha}^{T} p_i - A\left(\sum_{i=1}^{n} p_i - 1\right) - B\left(\sum_{i=1}^{n} p_i\varepsilon_i - E\right) = \sum_{i=1}^{n}\frac{p_i^{\beta}-p_i^{\alpha+\beta}}{\alpha} - A\left(\sum_{i=1}^{n} p_i - 1\right) - B\left(\sum_{i=1}^{n} p_i\varepsilon_i - E\right),$$
where $A$ and $B$ are Lagrange’s multipliers, $\varepsilon_i$ is the energy eigenvalue and $E$ is the internal energy.
Differentiating the functional $L_{\alpha,\beta}^{T}$, we obtain
$$\frac{\partial L_{\alpha,\beta}^{T}}{\partial p_i} = \frac{\beta p_i^{\beta-1} - (\alpha+\beta)\, p_i^{\alpha+\beta-1}}{\alpha} - (A + B\varepsilon_i).$$
When the functional $L_{\alpha,\beta}^{T}$ attains a maximum, its variation with respect to $p_i$ is null; hence
$$\frac{\beta p_i^{\beta-1} - (\alpha+\beta)\, p_i^{\alpha+\beta-1}}{\alpha} = A + B\varepsilon_i.$$
Integrating the last relationship with respect to $p_i$, we obtain
$$\frac{p_i^{\beta} - p_i^{\alpha+\beta}}{\alpha} = (A + B\varepsilon_i)\, p_i + C,$$
where $C$ is a real constant.
Now we can find the p i ’s for given α and β either analytically or numerically. The specific values of α and β come from real physical systems.
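For a numerical illustration of this point (a minimal sketch, not part of the paper; it assumes SciPy is available, and the energy eigenvalues $\varepsilon_i$ and internal energy $E$ below are arbitrary), one can maximize the fractal Tsallis entropy directly under the two constraints encoded by the functional $L_{\alpha,\beta}^{T}$.

```python
# Illustrative sketch: constrained maximization of the fractal Tsallis entropy.
import numpy as np
from scipy.optimize import minimize

alpha, beta = 0.8, 0.9
eps = np.array([0.0, 1.0, 2.0, 3.0])   # illustrative energy eigenvalues
E = 1.2                                # prescribed internal energy

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)         # guard against tiny bound violations during the search
    return -np.sum((p**beta - p**(alpha + beta)) / alpha)

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},      # normalization constraint
    {"type": "eq", "fun": lambda p: np.sum(p * eps) - E},   # energy constraint
]
x0 = np.full(eps.size, 1.0 / eps.size)
res = minimize(neg_entropy, x0, method="SLSQP",
               bounds=[(1e-9, 1.0)] * eps.size, constraints=constraints)
print(res.x)   # numerically obtained maximizing distribution p_i
```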

4.1. Lesche Stability

Lesche introduced in [48,49] a stability criterion to study the stabilities of Rényi and Boltzmann–Gibbs entropies. This stability criterion is concerned with the fact that an infinitesimal change in the probabilities will produce equally infinitesimal changes in the observable; i.e., for any $\varepsilon > 0$, there exists $\delta > 0$ such that for any $n \in \mathbb{N}^{*}$ and for any $p = (p_1,\dots,p_n)$ and $q = (q_1,\dots,q_n)$, two probability distributions, the following implication is true:
$$\sum_{i=1}^{n}|p_i - q_i| < \delta \;\Longrightarrow\; \frac{\left|S_{\alpha,\beta}^{T}(p) - S_{\alpha,\beta}^{T}(q)\right|}{S_{\alpha,\beta}^{T,\max}} < \varepsilon.$$
In order to show that fractal Tsallis entropy satisfies Lesche stability, we will use the stability criterion from [50]. More exactly, it is sufficient to prove that $\lim\limits_{\|p-q\|_1 \to 0}\lim\limits_{n \to \infty} B\!\left(\|p-q\|_1, n\right) = 0$, where
$$\|p-q\|_1 = \sum_{i=1}^{n}|p_i - q_i|$$
and
$$B\!\left(\|p-q\|_1, n\right) = \frac{\displaystyle\int_{0}^{\frac{\|p-q\|_1}{n}} f^{-1}(t)\,dt + \frac{\|p-q\|_1}{n}\, t_{\min}}{\displaystyle\int_{0}^{\frac{1}{n}} f^{-1}(t)\,dt - \frac{1}{n}\int_{0}^{1} f^{-1}(t)\,dt},$$
the function $f^{-1}(p)$ being the inverse probability distribution found in (1).
More exactly, in the last formula, we used Equation (18) from [50], having $f^{-1}(p) = \frac{\beta p^{\beta-1} - (\alpha+\beta)\, p^{\alpha+\beta-1}}{\alpha}$ and $t_{\min} = 1$.
Hence
$$B\!\left(\|p-q\|_1, n\right) = \frac{\left(\frac{\|p-q\|_1}{n}\right)^{\beta} - \left(\frac{\|p-q\|_1}{n}\right)^{\alpha+\beta} + \alpha\,\frac{\|p-q\|_1}{n}}{\left(\frac{1}{n}\right)^{\beta} - \left(\frac{1}{n}\right)^{\alpha+\beta}} = \left[\left(\frac{\|p-q\|_1}{n}\right)^{\beta} - \left(\frac{\|p-q\|_1}{n}\right)^{\alpha+\beta} + \alpha\,\frac{\|p-q\|_1}{n}\right]\cdot\frac{n^{\alpha+\beta}}{n^{\alpha}-1} = \frac{\|p-q\|_1^{\beta}\, n^{\alpha}}{n^{\alpha}-1} - \frac{\|p-q\|_1^{\alpha+\beta}}{n^{\alpha}-1} + \frac{\alpha\,\|p-q\|_1\, n^{\alpha+\beta-1}}{n^{\alpha}-1}.$$
Now, we can easily see that $\lim\limits_{\|p-q\|_1 \to 0}\lim\limits_{n \to \infty} B\!\left(\|p-q\|_1, n\right) = 0$; i.e., the Lesche stability criterion is satisfied.
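The behaviour of the double limit can be illustrated numerically. The following sketch (illustrative only, assuming NumPy) evaluates the bound $B$ in the form reconstructed above for large $n$ and shrinking $\|p-q\|_1$.

```python
# Illustrative sketch: B(||p-q||_1, n), in the form written above, for large n.
import numpy as np

def B(x, n, alpha, beta):
    num = (x / n)**beta - (x / n)**(alpha + beta) + alpha * (x / n)
    den = (1.0 / n)**beta - (1.0 / n)**(alpha + beta)
    return num / den

alpha, beta = 0.7, 0.9          # illustrative values, beta in (0, 1]
for x in (1e-1, 1e-2, 1e-3):
    print(x, B(x, n=10**8, alpha=alpha, beta=beta))   # shrinks as x -> 0
```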

4.2. Generic Example in the Microcanonical Ensemble

The microcanonical ensemble can be used to describe an isolated system in thermodynamic equilibrium. All the microstates are equally probable in the microcanonical picture. Hence, taking into account the equiprobability, i.e., $p_i = \frac{1}{W}$ for any $i \in \{1,\dots,W\}$, the fractal Tsallis entropy becomes
$$S_{\alpha,\beta}^{T}(p) = -\sum_{i=1}^{W}\left(\frac{1}{W}\right)^{\beta}\log_{\alpha}^{T}\frac{1}{W} = W\cdot\left(\frac{1}{W}\right)^{\beta}\cdot\frac{1-\left(\frac{1}{W}\right)^{\alpha}}{\alpha} = \frac{W\cdot\left[\left(\frac{1}{W}\right)^{\beta} - \left(\frac{1}{W}\right)^{\beta+\alpha}\right]}{\alpha} = \frac{W^{1-\beta} - W^{1-\alpha-\beta}}{\alpha},$$
where $W$ is the total number of microstates. In this case, we call $S_{\alpha,\beta}^{T}$ the microcanonical fractal Tsallis entropy derived from the fractal Tsallis entropy.
For this entropy, the temperature is given via
$$\frac{1}{T} = \frac{\partial S_{\alpha,\beta}^{T}}{\partial E} = \frac{(1-\beta)\, W^{-\beta} - (1-\alpha-\beta)\, W^{-\alpha-\beta}}{\alpha}\cdot\frac{\partial W}{\partial E}.$$
If $W = C E^{f}$, the temperature is given by
$$\frac{1}{T} = \frac{(1-\beta)(CE^{f})^{-\beta} - (1-\alpha-\beta)(CE^{f})^{-\alpha-\beta}}{\alpha}\cdot f C E^{f-1} = \frac{f\left[(1-\beta)\, C^{1-\beta} E^{(1-\beta)f-1} - (1-\alpha-\beta)\, C^{1-\alpha-\beta} E^{(1-\alpha-\beta)f-1}\right]}{\alpha};$$
i.e.,
$$T = \frac{\alpha}{f\left[(1-\beta)\, C^{1-\beta} E^{(1-\beta)f-1} - (1-\alpha-\beta)\, C^{1-\alpha-\beta} E^{(1-\alpha-\beta)f-1}\right]}.$$
Although Formula (2) is given only for the microcanonical ensemble, a direct extension of this method to include other kinds of adiabatic ensembles can be obtained.
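As a small consistency check (an illustrative sketch, not part of the paper; NumPy assumed, constants arbitrary), the closed-form temperature written above can be compared with a numerical derivative of the microcanonical fractal Tsallis entropy for $W = CE^{f}$.

```python
# Illustrative sketch: 1/T against a numerical derivative of S(E) for W = C E^f.
import numpy as np

alpha, beta, C, f = 0.5, 0.8, 2.0, 1.5   # illustrative constants

def S_of_E(E):
    W = C * E**f
    return (W**(1.0 - beta) - W**(1.0 - alpha - beta)) / alpha

def T_closed(E):
    return alpha / (f * ((1.0 - beta) * C**(1.0 - beta) * E**((1.0 - beta) * f - 1.0)
                         - (1.0 - alpha - beta) * C**(1.0 - alpha - beta) * E**((1.0 - alpha - beta) * f - 1.0)))

E, h = 3.0, 1e-6
dS_dE = (S_of_E(E + h) - S_of_E(E - h)) / (2.0 * h)   # central difference for dS/dE
assert np.isclose(1.0 / T_closed(E), dS_dE, rtol=1e-5)
```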

5. Complexity Measures

For a system consisting of $N$ accessible states with a set of probabilities $P = (p_1,\dots,p_N)$ such that $\sum_{i=1}^{N} p_i = 1$, the complexity measure is given as $C(P) = S(P)\, D(P)$, where $D(P) = \sum_{i=1}^{N}\left(p_i - \frac{1}{N}\right)^{2}$.
The LMC (López-Ruiz, Mancini and Calbet) measure of complexity is considered to be a nonextensive quantity. Yamano [51] proposed a generalized measure of complexity based on Tsallis entropy with a view to absorbing the nonadditive features of the entropy. Using these ideas, we define a statistical measure corresponding to the fractal Tsallis entropy via
$$C_{\alpha,\beta}^{T}(P) = S_{\alpha,\beta}^{T}(P)\, D(P) = \left[-\sum_{i=1}^{N} p_i^{\beta}\log_{\alpha}^{T} p_i\right]\left[\sum_{i=1}^{N}\left(p_i - \frac{1}{N}\right)^{2}\right].$$
In cases in which we work with a two-level system with probabilities $P = (p, 1-p)$, the expressions for the entropy and the disequilibrium measure are as follows:
$$S_{\alpha,\beta}^{T}(P) = -p^{\beta}\log_{\alpha}^{T}(p) - (1-p)^{\beta}\log_{\alpha}^{T}(1-p) = \frac{p^{\beta} - p^{\alpha+\beta} + (1-p)^{\beta} - (1-p)^{\alpha+\beta}}{\alpha}$$
and
$$D(P) = 2\left(p - \frac{1}{2}\right)^{2},$$
respectively.
Having this framework, the statistical complexity is
$$C_{\alpha,\beta}^{T}(P) = \frac{2\left[p^{\beta} - p^{\alpha+\beta} + (1-p)^{\beta} - (1-p)^{\alpha+\beta}\right]}{\alpha}\left(p - \frac{1}{2}\right)^{2}.$$
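The dependence of this complexity on $p$ can be explored directly from the closed form above; the following sketch (illustrative only, assuming NumPy, with arbitrary $\alpha$ and $\beta$) evaluates it on a grid.

```python
# Illustrative sketch: statistical complexity of a two-level system versus p.
import numpy as np

def complexity_two_level(p, alpha, beta):
    S = (p**beta - p**(alpha + beta) + (1 - p)**beta - (1 - p)**(alpha + beta)) / alpha
    D = 2.0 * (p - 0.5)**2
    return S * D

alpha, beta = 0.6, 0.9
grid = np.linspace(0.01, 0.99, 99)
values = complexity_two_level(grid, alpha, beta)
# The complexity vanishes at p = 1/2 (zero disequilibrium) and near p in {0, 1} (zero entropy).
print(grid[np.argmax(values)], values.max())
```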
The natural step is to generalize this measure to the continuous case, considering the probability distribution function $p(x)$ with the normalization condition $\int p(x)\,dx = 1$. Fractal Tsallis entropy and the disequilibrium measure are, in this case,
$$S_{\alpha,\beta}^{T}(P) = -\int p(x)^{\beta}\log_{\alpha}^{T}(p(x))\,dx,$$
$$D(P) = \int p(x)^{2}\,dx,$$
respectively.
With this framework, the fractal Tsallis generalization of the LMC complexity measure is
$$C_{\alpha,\beta}^{T}(P) = \left[-\int p(x)^{\beta}\log_{\alpha}^{T}(p(x))\,dx\right]\left[\int p(x)^{2}\,dx\right].$$
Martin et al. [52] introduced a modified definition in order to scale the complexity to lie in the interval $[0,1]$. The modified statistical measure of complexity corresponding to fractal Tsallis entropy is given as follows:
$$C_{\alpha,\beta}^{T}(P) = \frac{S_{\alpha,\beta}^{T}(P)}{S_{\alpha,\beta}^{T,\max}}\cdot D_x(P),$$
where $D_x$ is the normalized disequilibrium measure based on a particular distance measure $x$.
In the sequel we give some examples of the disequilibrium based on different distance measures.

5.1. Disequilibrium Measure Based on Euclidean Distance

In this case, the disequilibrium $D_E$ is given by
$$D_E(P) = N_E\left\|(p_1,\dots,p_N) - \left(\frac{1}{N},\dots,\frac{1}{N}\right)\right\|_E = N_E\sum_{i=1}^{N}\left(p_i - \frac{1}{N}\right)^{2},$$
where $N$ is the number of accessible states and $N_E = \frac{N}{N-1}$.
In the case of a two-level system with probabilities $P = (p, 1-p)$, we have:
$$S_{\alpha,\beta}^{T}(P) = -p^{\beta}\log_{\alpha}^{T}(p) - (1-p)^{\beta}\log_{\alpha}^{T}(1-p) = \frac{p^{\beta} - p^{\alpha+\beta} + (1-p)^{\beta} - (1-p)^{\alpha+\beta}}{\alpha},$$
$$S_{\alpha,\beta}^{T,\max} = \frac{2^{1-\beta} - 2^{1-\alpha-\beta}}{\alpha}$$
and
$$D_E(P) = \left(p - \frac{1}{2}\right)^{2} + \left(1 - p - \frac{1}{2}\right)^{2} = 2\left(p - \frac{1}{2}\right)^{2}.$$
Hence, in this case,
$$C_{\alpha,\beta}^{T}(P) = \frac{\frac{p^{\beta} - p^{\alpha+\beta} + (1-p)^{\beta} - (1-p)^{\alpha+\beta}}{\alpha}}{\frac{2^{1-\beta} - 2^{1-\alpha-\beta}}{\alpha}}\cdot 2\left(p - \frac{1}{2}\right)^{2} = \frac{2\left[p^{\beta} - p^{\alpha+\beta} + (1-p)^{\beta} - (1-p)^{\alpha+\beta}\right]}{2^{1-\beta} - 2^{1-\alpha-\beta}}\left(p - \frac{1}{2}\right)^{2}.$$
However, the Euclidean distance used above does not take into account the stochastic nature of the probabilities. In order to avoid this, the statistical distance measure introduced by Wootters (see [53]) can be used to redefine the disequilibrium.

5.2. Disequilibrium Measure Based on Wootters’ Distance

Wootters [53] introduced a new statistical measure in which the distance between two quantities is determined by the size of statistical fluctuations in a measurement. Let $P = (p_1,\dots,p_N)$ and $Q = (q_1,\dots,q_N)$ be two probability distributions. Assume that they are indistinguishable when the difference between them is smaller than the size of the typical fluctuation. Then the statistical distance is the shortest curve connecting two points in the probability space. This shortest distance is equal to the angle between the two probability vectors and is given by
$$D_W(P,Q) = \|P - Q\|_W = \cos^{-1}\left(\sum_{i=1}^{N}\sqrt{p_i q_i}\right).$$
In the case $Q = \left(\frac{1}{N},\frac{1}{N},\dots,\frac{1}{N}\right)$, we denote
$$D_W(P) = \left\|(p_1,\dots,p_N) - \left(\frac{1}{N},\dots,\frac{1}{N}\right)\right\|_W = \cos^{-1}\left(\sum_{i=1}^{N}\sqrt{\frac{p_i}{N}}\right).$$
In the case the two probability distributions coincide with each other, this measure vanishes. It attains the maximum value in cases in which each outcome has a positive probability according to one distribution and zero probability according to the other distribution. Using Wootters’ distance, Martin et al. [52] defined the disequilibrium measure for the extensive Boltzmann–Gibbs entropy. We continue this work, dealing with fractal Tsallis entropy. The disequilibrium defined using a normalized version of the statistical distance is given via
$$D_W(P,Q) = N_W\cos^{-1}\left(\sum_{i=1}^{N}\sqrt{p_i q_i}\right),$$
where $N$ is the number of accessible states and $N_W = \frac{1}{\cos^{-1}\left(\frac{1}{\sqrt{N}}\right)}$ is the corresponding normalization constant.
As above, in the case $Q = \left(\frac{1}{N},\dots,\frac{1}{N}\right)$, we denote
$$D_W(P) = N_W\cos^{-1}\left(\sum_{i=1}^{N}\sqrt{\frac{p_i}{N}}\right).$$
The statistical measure of complexity of a two-level system with probabilities $P = (p, 1-p)$, computed using this disequilibrium distance, is
$$C_{\alpha,\beta}^{T}(P) = \frac{p^{\beta} - p^{\alpha+\beta} + (1-p)^{\beta} - (1-p)^{\alpha+\beta}}{2^{1-\beta} - 2^{1-\alpha-\beta}}\cdot\frac{\cos^{-1}\left(\sqrt{\frac{p}{2}} + \sqrt{\frac{1-p}{2}}\right)}{\cos^{-1}\left(\frac{1}{\sqrt{2}}\right)}.$$
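The two normalized complexities of Sections 5.1 and 5.2 can be compared numerically for the two-level system, following the closed forms written above (an illustrative sketch, assuming NumPy; $\alpha$ and $\beta$ are arbitrary).

```python
# Illustrative sketch: Euclidean- and Wootters-based normalized complexities.
import numpy as np

def C_euclid(p, alpha, beta):
    num = p**beta - p**(alpha + beta) + (1 - p)**beta - (1 - p)**(alpha + beta)
    den = 2**(1 - beta) - 2**(1 - alpha - beta)
    return 2.0 * num / den * (p - 0.5)**2

def C_wootters(p, alpha, beta):
    num = p**beta - p**(alpha + beta) + (1 - p)**beta - (1 - p)**(alpha + beta)
    den = 2**(1 - beta) - 2**(1 - alpha - beta)
    d_w = np.arccos(np.sqrt(p / 2.0) + np.sqrt((1 - p) / 2.0)) / np.arccos(1.0 / np.sqrt(2.0))
    return num / den * d_w

alpha, beta = 0.6, 0.9
for p in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(p, C_euclid(p, alpha, beta), C_wootters(p, alpha, beta))
```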
Remark 3. 
We use the hypothesis $\beta \in (0,1]$ only at Theorem 1 point 3 and when we show that Lesche stability is valid for fractal Tsallis entropy. The rest of the proofs are valid without this assumption. Of course, in the absence of this hypothesis, we do not know the value of $S_{\alpha,\beta}^{T,\max}$.

6. Conclusions

We introduced fractal Tsallis entropy and fractal Tsallis divergence, which generalize Tsallis entropy and Tsallis divergence, respectively. The standard properties, like the Shannon–Khinchin axioms, are verified. We also showed that, in some cases, Lesche stability is preserved. A generalization of the LMC complexity measure is introduced, which is applied to measure the complexity of a two-level system. We analyzed the complexity of a given system by using a normalized form of the entropy and distance measure. The complexity is defined with the help of two different distance measures, namely the normalized form of the Euclidean distance and the normalized Wootters’ distance; then, the complexity of a two-level system is thoroughly analyzed. Because Wootters’ distance captures the stochastic effects of the probability distribution, we remark that it is the more appropriate choice for the disequilibrium measure. Motivated by the applications of entropy in medical image processing, we hope that this new entropy is a step forward in this framework.

Author Contributions

Conceptualization, V.P. and R.-C.S.; Formal analysis, V.P. and R.-C.S.; Funding acquisition, V.P.; Investigation, R.-C.S.; Methodology, V.P.; Supervision, V.P.; Validation, R.-C.S.; Visualization, V.P. and R.-C.S.; Writing—original draft, R.-C.S.; Writing—review and editing, V.P. All authors contributed equally to the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

The authors are very much indebted to the anonymous referees and to the editors for their most valuable comments and suggestions which improved the quality of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Abe, S.; Suzuki, N. Law for the distance between successive earthquakes. J. Geophys. Res. 2003, 108, 2113. [Google Scholar] [CrossRef]
  2. Darooneh, A.H.; Dadashinia, C. Analysis of the spatial and temporal distributions between successive earthquakes: Nonextensive statistical mechanics viewpoint. Phys. A 2008, 387, 3647–3654. [Google Scholar] [CrossRef]
  3. Hasumi, T. Hypocenter interval statistics between successive earthquakes in the two-dimensional Burridge–Knopoff model. Phys. A 2009, 388, 477–482. [Google Scholar] [CrossRef]
  4. Ramírez-Rojas, A.; Flores-Márquez, E.L.; Sarlis, N.V.; Varotsos, P.A. The complexity measures associated with the fluctuations of the entropy in natural time before the deadly México M8.2 Earthquake on 7 September 2017. Entropy 2018, 20, 477. [Google Scholar] [CrossRef] [PubMed]
  5. Sarlis, N.V.; Skordas, E.S.; Varotsos, P.A. A remarkable change of the entropy of seismicity in natural time under time reversal before the super-giant M9 Tohoku earthquake on 11 March 2011. EPL 2018, 124, 29001. [Google Scholar] [CrossRef]
  6. Varotsos, P.A.; Sarlis, N.V.; Skordas, E.S. Tsallis entropy index q and the complexity measure of seismicity in natural time under time reversal before the M9 Tohoku earthquake in 2011. Entropy 2018, 20, 757. [Google Scholar] [CrossRef]
  7. Jiang, Z.Q.; Chen, W.; Zhou, W.X. Scaling in the distribution of intertrade durations of Chinese stocks. Phys. A 2008, 387, 5818–5825. [Google Scholar] [CrossRef]
  8. Kaizoji, T. An interacting-agent model of financial markets from the viewpoint of nonextensive statistical mechanics. Phys. A 2006, 370, 109–113. [Google Scholar] [CrossRef]
  9. Lima, J.A.S.; Silva, R., Jr.; Santos, J. Plasma oscillations and nonextensive statistics. Phys. Rev. E 2000, 61, 3260. [Google Scholar] [CrossRef]
  10. Livadiotis, G.; McComas, D.J. Beyond kappa distributions: Exploiting Tsallis statistical mechanics in space plasmas. J. Geophys. Res. 2009, 114, A11105. [Google Scholar] [CrossRef]
  11. Livadiotis, G.; McComas, D.J. Thermodynamic definitions of temperature and kappa and introduction of the entropy defect. Entropy 2021, 23, 1683. [Google Scholar] [CrossRef] [PubMed]
  12. Kailath, T. The divergence and Bhattacharyya distance measures in signal selection. IEEE Trans. Commun. Technol. 1967, 15, 52–60. [Google Scholar] [CrossRef]
  13. Barbu, V.S.; Karagrigoriou, A.; Preda, V. Entropy, divergence rates and weighted divergence rates for Markov chains. I: The alpha-gamma and beta-gamma case. Proc. Rom. Acad. Ser. A Math. Phys. Tech. Sci. Inf. Sci. 2017, 18, 293–301. [Google Scholar]
  14. Barbu, V.S.; Karagrigoriou, A.; Preda, V. Entropy and divergence rates for Markov chains. II: The weighted case. Proc. Rom. Acad. Ser. A Math. Phys. Tech. Sci. Inf. Sci. 2018, 19, 3–10. [Google Scholar]
  15. Barbu, V.S.; Karagrigoriou, A.; Preda, V. Entropy and divergence rates for Markov chains. III: The Cressie and Read case and applications. Proc. Rom. Acad. Ser. A Math. Phys. Tech. Sci. Inf. Sci. 2018, 19, 413–421. [Google Scholar]
  16. Abreu, E.M.C.; Neto, J.A.; Barboza, E.M.; Nunes, R.C. Jeans instability criterion from the viewpoint of Kaniadakis’ statistics. Europhys. Lett. 2016, 114, 55001. [Google Scholar] [CrossRef]
  17. Cure, M.; Rial, D.F.; Christen, A.; Cassetti, J. A method to deconvolve stellar rotational velocities. Astron. Astrophys. 2014, 564, A85. [Google Scholar] [CrossRef]
  18. Toma, A. Model selection criteria using divergences. Entropy 2014, 16, 2686–2698. [Google Scholar] [CrossRef]
  19. Toma, A.; Karagrigoriou, A.; Trentou, P. Robust model selection criteria based on pseudodistances. Entropy 2020, 22, 304. [Google Scholar] [CrossRef]
  20. Preda, V.; Dedu, S.; Sheraz, M. New measure selection for Hunt-Devolder semi-Markov regime switching interest rate models. Phys. A 2014, 407, 350–359. [Google Scholar] [CrossRef]
  21. Preda, V.; Dedu, S.; Iatan, I.; Dănilă Cernat, I.; Sheraz, M. Tsallis entropy for loss models and survival models involving truncated and censored random variables. Entropy 2022, 24, 1654. [Google Scholar] [CrossRef] [PubMed]
  22. Trivellato, B. The minimal k-entropy martingale measure. Int. J. Theor. Appl. Financ. 2012, 15, 1250038. [Google Scholar] [CrossRef]
  23. Trivellato, B. Deformed exponentials and applications to finance. Entropy 2013, 15, 3471–3489. [Google Scholar] [CrossRef]
  24. Hirică, I.-E.; Pripoae, C.-L.; Pripoae, G.-T.; Preda, V. Lie symmetries of the nonlinear Fokker-Planck equation based on weighted Kaniadakis entropy. Mathematics 2022, 10, 2776. [Google Scholar] [CrossRef]
  25. Pripoae, C.-L.; Hirică, I.-E.; Pripoae, G.-T.; Preda, V. Lie symmetries of the nonlinear Fokker-Planck equation based on weighted Tsallis entropy. Carpathian J. Math. 2022, 38, 597–617. [Google Scholar] [CrossRef]
  26. Deng, B.; Cai, L.; Li, S.; Wang, R.; Yu, H.; Chen, Y. Multivariate multi-scale weighted permutation entropy analysis of EEG complexity for Alzheimer’s disease. Cogn. Neurodyn 2017, 11, 217–231. [Google Scholar] [CrossRef]
  27. Tylová, L.; Kukal, J.; Hubata-Vacek, V.; Vyšata, O. Unbiased estimation of permutation entropy in EEG analysis for Alzheimer’s disease classification. Biomed. Signal Process. Control 2018, 39, 424–430. [Google Scholar] [CrossRef]
  28. Yin, Y.; Sun, K.; He, S. Multiscale permutation Rényi entropy and its application for EEG signals. PLoS ONE 2018, 13, 0202558. [Google Scholar] [CrossRef]
  29. Chen, K.; Ramabadran, T.V. Near-lossless compression of medical images through entropy-coded DPCM. IEEE Trans. Med. Imaging 1994, 13, 538–548. [Google Scholar] [CrossRef]
  30. Rodrigues, P.; Giraldi, G. Improving the non-extensive medical image segmentation based on Tsallis entropy. Pattern Anal. Appl. 2011, 14, 369–379. [Google Scholar] [CrossRef]
  31. Studholme, C.; Hill, D.L.G.; Hawkes, D.J. An overlap invariant entropy measure of 3D medical image alignment. Pattern Recognit. 1999, 32, 71–86. [Google Scholar] [CrossRef]
  32. Cuesta-Frau, D. Slope entropy: A new time series complexity estimator based on both symbolic patterns and amplitude information. Entropy 2019, 21, 1167. [Google Scholar] [CrossRef]
  33. Zhang, H.; Su, M. Hand gesture recognition of double-channel EMG signals based on sample entropy and PSO-SVM. J. Phys. Conf. Ser. 2020, 1631, 012001. [Google Scholar] [CrossRef]
  34. Sarlis, N.V.; Christopoulos, S.-R.G.; Bemplidaki, M.M. Change ∆S of the entropy in natural time under time reversal: Complexity measures upon change of scale. Europhys. Lett. 2015, 109, 18002. [Google Scholar] [CrossRef]
  35. Baldoumas, G.; Peschos, D.; Tatsis, G.; Chronopoulos, S.K.; Christofilakis, V.; Kostarakis, P.; Varotsos, P.; Sarlis, N.V.; Skordas, E.S.; Bechlioulis, A.; et al. A prototype photoplethysmography electronic device that distinguishes congestive heart failure from healthy individuals by applying natural time analysis. Electronics 2019, 8, 1288. [Google Scholar] [CrossRef]
  36. Wang, Q.A. Extensive generalization of statistical mechanics based on incomplete information theory. Entropy 2003, 5, 220–232. [Google Scholar] [CrossRef]
  37. Ubriaco, M.R. Entropies based on fractional calculus. Phys. Lett. A 2009, 373, 2516–2519. [Google Scholar] [CrossRef]
  38. Ubriaco, M.R. A simple mathematical model for anomalous diffusion via Fisher’s information theory. Phys. Lett. A 2009, 373, 4017–4021. [Google Scholar] [CrossRef]
  39. Radhakrishnan, C.; Chinnarasu, R.; Jambulingam, S. A fractional entropy in fractal phase space: Properties and characterization. Int. J. Stat. Mech. 2014, 2014, 460364. [Google Scholar] [CrossRef]
  40. López-Ruiz, R.; Mancini, H.L.; Calbet, X. A statistical measure of complexity. Phys. Lett. A 1995, 209, 321–326. [Google Scholar] [CrossRef]
  41. Umarov, S.; Tsallis, C.; Steinberg, S. On a q-Central Limit Theorem Consistent with Nonextensive Statistical Mechanics. Milan J. Math. 2008, 76, 307–328. [Google Scholar] [CrossRef]
  42. Tsallis, C. Possible generalization of Boltzmann–Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487. [Google Scholar] [CrossRef]
  43. Tsallis, C. Introduction to Nonextensive Statistical Mechanics; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  44. Furuichi, S.; Yanagi, K.; Kuriyama, K. Fundamental properties of Tsallis relative entropy. J. Math. Phys. 2004, 45, 4868–4877. [Google Scholar] [CrossRef]
  45. Huang, J.; Yong, W.-A.; Hong, L. Generalization of the Kullback-Leibler divergence in the Tsallis statistics. J. Math. Anal. Appl. 2016, 436, 501–512. [Google Scholar] [CrossRef]
  46. Sfetcu, R.-C. Tsallis and Rényi divergences of generalized Jacobi polynomials. Phys. A 2016, 460, 131–138. [Google Scholar] [CrossRef]
  47. Sfetcu, R.-C.; Sfetcu, S.-C.; Preda, V. On Tsallis and Kaniadakis divergences. Math. Phys. Anal. Geom. 2022, 25, 23. [Google Scholar] [CrossRef]
  48. Lesche, B. Instabilities of Rényi entropies. J. Stat. Phys. 1982, 27, 419–422. [Google Scholar] [CrossRef]
  49. Lesche, B. Rényi entropies and observables. Phys. Rev. E 2004, 70, 017102. [Google Scholar] [CrossRef]
  50. Abe, S.; Kaniadakis, G.; Scarfone, A.M. Stabilities of generalized entropies. J. Phys. A Math. Gen. 2004, 37, 10513–10519. [Google Scholar] [CrossRef]
  51. Yamano, T. A statistical complexity measure with nonextensive entropy and quasi-multiplicativity. J. Math. Phys. 2004, 45, 1974–1987. [Google Scholar] [CrossRef]
  52. Martin, M.T.; Plastino, A.; Rosso, O.A. Statistical complexity and disequilibrium. Phys. Lett. A 2003, 311, 126–132. [Google Scholar] [CrossRef]
  53. Wootters, W.K. Statistical distance and Hilbert space. Phys. Rev. D Part. Fields 1981, 23, 357–362. [Google Scholar] [CrossRef]