Article

Logical Entropy of Information Sources

1
School of Computer Science and Information Technology, Qiannan Normal University for Nationalities, Duyun, Guizhou 558000, China
2
Department of Mathematics, Sirjan University of Technology, Sirjan 7813733385, Iran
3
Department of Mathematics, COMSATS University Islamabad, Lahore Campus, Islamabad 54000, Pakistan
*
Author to whom correspondence should be addressed.
Entropy 2022, 24(9), 1174; https://doi.org/10.3390/e24091174
Submission received: 27 July 2022 / Revised: 13 August 2022 / Accepted: 18 August 2022 / Published: 23 August 2022
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

In this paper, we present the concepts of the logical entropy of order m, logical mutual information, and the logical entropy of information sources. We find upper and lower bounds for the logical entropy of a random variable by using convex functions. We show that the logical entropy of the joint distribution of $X_1$ and $X_2$ never exceeds the sum of the logical entropies of $X_1$ and $X_2$. We define the logical Shannon entropy and the logical metric permutation entropy of an information source and examine the properties of these entropies. Finally, we compute the logical metric entropy and the logical metric permutation entropy of maps.
MSC:
94A17; 37B40; 26A51; 81P10

1. Introduction and Basic Notions

Entropy is an influential quantity that has been explored in a wide range of studies, from the applied to the physical sciences. In the 19th century, Carnot and Clausius diversified the concept of entropy into three main directions: entropy associated with heat engines (where it behaves similarly to a thermal charge), statistical entropy, and (following Boltzmann and Shannon) entropy in communication channels and information security. Thus, the theory of entropy plays a key role in mathematics, statistics, dynamical systems (where complexity is mostly measured by entropy), information theory [1], chemistry [2], and physics [3] (see also [4,5,6]).
In recent years, other information source entropies have been studied [7,8,9]. Butt et al., in [10,11], introduced new bounds for the Shannon, relative, and Mandelbrot entropies via interpolating polynomials. Amigó and colleagues defined the entropy of a random process and the permutation entropy of a source [1,12].
Ellerman [13] was the first to give a detailed introduction to the concept of logical entropy and to establish its relationship with the renowned Shannon entropy. In recent years, many researchers have focused on extending the notion of logical entropy in new directions. Markechová et al. [14] studied the logical entropy and logical mutual information of experiments in the intuitionistic fuzzy case. Ebrahimzadeh [15] proposed the logical entropy of a quantum dynamical system and investigated its ergodic properties, while the logical entropy of a fuzzy dynamical system was investigated in [7] (see also [16]). Tamir et al. [17] extended the idea of logical entropy to the quantum domain and expressed it in terms of the density matrix. In [18], Ellerman defined logical conditional entropy and logical relative entropy. In fact, logical entropy is the particular case of Tsallis entropy with q = 2, and it resembles the information measure introduced by Brukner and Zeilinger [19]. In [13], Ellerman introduced the concept of logical entropy for a random variable and defined the logical entropy of the joint distribution p(x, y) over X × Y as:
$$h(x,y) = 1 - \sum_{x,y}\big[p(x,y)\big]^2.$$
The motive of this study is to extend the concept of logical entropy presented in [13] to information sources. Since estimating entropy from an information source can be difficult [20], we define the logical metric permutation entropy of a map and apply it to information sources.
In this article, $(\Gamma, G, \mu)$ is a probability space (i.e., $G$ is a $\sigma$-algebra of subsets of $\Gamma$ and $\mu(\Gamma) = 1$). Further, if $X$ is a random variable on $\Gamma$ possessing the discrete finite state space $A = \{a_1, \ldots, a_n\}$, then the function $p : A \to [0,1]$ defined by
$$p(x) = \mu\{\gamma \in \Gamma : X(\gamma) = x\}$$
is a probability function. $H_\mu(X) = -\sum_{x\in A} p(x)\log p(x)$ denotes the Shannon entropy of $X$ [1]. If $(X_n)_{n=1}^{\infty}$ is a sequence of random variables on $\Gamma$, the sequence $(X_n)$ is called an information source (also called a stochastic process, S.P.). Similarly, if $m \geq 1$, then we define $p : A^m \to [0,1]$ by
$$p(x_1,\ldots,x_m) = \mu\{\gamma\in\Gamma : X_1(\gamma)=x_1,\ldots,X_m(\gamma)=x_m\}.$$
We know that
$$\sum_{x_1,\ldots,x_m\in A} p(x_1,\ldots,x_m) = \mu(\Gamma) = 1$$
for every natural number $m$. A finite-state S.P. $X = (X_n)_{n=1}^{\infty}$ is called a stationary finite-state S.P. if
$$p(x_1,\ldots,x_m) = \mu\{\gamma\in\Gamma : X_{k+1}(\gamma)=x_1,\ldots,X_{k+m}(\gamma)=x_m\}$$
for every $m, k\in\mathbb{N}$. In an information-theoretical setting, one may regard a stationary S.P. $X$ as a data source. A finite-state S.P. $X$ is a strictly stationary finite-state S.P. if
$$p(x_1,\ldots,x_m) = \mu\{\gamma\in\Gamma : X_{k_1}(\gamma)=x_1,\ldots,X_{k_m}(\gamma)=x_m\}$$
for every $\{k_1,\ldots,k_m\}\subseteq\mathbb{N}$. The Shannon entropy of order $m$ of the source $X$ is defined by [1,12]
$$H_\mu(X_1^m) = -\sum_{x_1,\ldots,x_m\in A} p(x_1,\ldots,x_m)\log p(x_1,\ldots,x_m).$$
The Shannon entropy of the source $X$ is defined by $h_\mu(X) = \lim_{m\to\infty}\frac{1}{m}H_\mu(X_1^m)$. If we assume that the alphabet $A$ of the source $X$ admits an order $\leq$, so that $(A,\leq)$ is a totally ordered set, then we define another order $\prec$ on $A$ by [1]
$$t_i \prec t_j \iff t_i < t_j \ \text{ or }\ (t_i = t_j \text{ and } i < j).$$
We say that a length-$m$ sequence $t_k^{k+m-1} = (t_k,\ldots,t_{k+m-1})$ has the order pattern $\pi$ if $t_{k+\pi(0)} \prec t_{k+\pi(1)} \prec \cdots \prec t_{k+\pi(m-1)}$, where $t_i, t_j\in A$, $k\in\mathbb{N}$ and $i\neq j$. To an S.P. $X = (X_n)_{n\in\mathbb{N}_0}$ we associate a process $R = (R_n)_{n\in\mathbb{N}_0}$ defined by $R_m(\gamma) = \sum_{i=0}^{m}\delta(X_i(\gamma)\prec X_m(\gamma))$. The sequence $R$ defines a discrete-time process that is non-stationary. The metric permutation entropy of order $m$ and the metric permutation entropy of the source $X$ are, respectively, defined by [1,12]
$$H_\mu(X_0^{m-1}) = H_\mu(R_0^{m-1}) = -\frac{1}{m-1}\sum_{r_0,\ldots,r_{m-1}} p(r_0^{m-1})\log p(r_0^{m-1}),$$
and $h_\mu(X) = \limsup_{m\to\infty} H_\mu(X_0^{m-1})$.

2. Main Results

In this section, we use the symbol $x_1^m$ for $(x_1,\ldots,x_m)$ to simplify the notation.
Definition 1.
Reference [13]. Let $X$ be a random variable on $\Gamma$ with discrete finite state space $A = \{a_1,\ldots,a_n\}$. Then,
$$H^{l}_{\mu}(X) = \sum_{x\in A} p(x)\,[1-p(x)] = 1 - \sum_{x\in A}[p(x)]^2$$
is called the logical Shannon entropy of X.
Theorem 1.
Reference [21]. If $f$ is convex on $I$ and $\zeta = \min_{1\le i\le n}\{y_i\}$, $\eta = \max_{1\le i\le n}\{y_i\}$, then
$$\frac{1}{n}\left[\frac{f(\zeta)+f(\eta)}{2} - f\!\left(\frac{\zeta+\eta}{2}\right)\right] \;\le\; \frac{1}{n}\sum_{i=1}^{n} f(y_i) - f\!\left(\frac{1}{n}\sum_{i=1}^{n} y_i\right) \;\le\; \frac{f(\zeta)+f(\eta)}{2} - f\!\left(\frac{\zeta+\eta}{2}\right).$$
Theorem 2.
Suppose that $X$ is a random variable on $\Gamma$ with discrete finite state space $A=\{a_1,\ldots,a_n\}$, $\zeta = \min_{1\le i\le n}\{p(a_i)\}$ and $\eta = \max_{1\le i\le n}\{p(a_i)\}$. Then
$$0 \le \Delta(\zeta,\eta) := \frac{(\zeta-\eta)^2}{4} \;\le\; \frac{n-1}{n} - H^{l}_{\mu}(X) \;\le\; \frac{n(\zeta-\eta)^2}{4} = n\,\Delta(\zeta,\eta).$$
Proof. 
Applying Theorem 1 with $f(x) = x^2 - x$, we obtain
$$\frac{1}{n}\left[\frac{(\zeta^2-\zeta)+(\eta^2-\eta)}{2} - \left(\Big(\frac{\zeta+\eta}{2}\Big)^2 - \frac{\zeta+\eta}{2}\right)\right] \le \frac{1}{n}\sum_{i=1}^{n}(y_i^2-y_i) - \left[\Big(\frac{1}{n}\sum_{i=1}^{n} y_i\Big)^2 - \frac{1}{n}\sum_{i=1}^{n} y_i\right] \le \frac{(\zeta^2-\zeta)+(\eta^2-\eta)}{2} - \left(\Big(\frac{\zeta+\eta}{2}\Big)^2 - \frac{\zeta+\eta}{2}\right).$$
Putting $y_i = p(a_i)$, it follows that
$$\frac{1}{n}\left[\frac{(\zeta^2-\zeta)+(\eta^2-\eta)}{2} - \left(\Big(\frac{\zeta+\eta}{2}\Big)^2 - \frac{\zeta+\eta}{2}\right)\right] \le \frac{1}{n}\sum_{i=1}^{n}\big((p(a_i))^2 - p(a_i)\big) - \left[\Big(\frac{1}{n}\sum_{i=1}^{n} p(a_i)\Big)^2 - \frac{1}{n}\sum_{i=1}^{n} p(a_i)\right] \le \frac{(\zeta^2-\zeta)+(\eta^2-\eta)}{2} - \left(\Big(\frac{\zeta+\eta}{2}\Big)^2 - \frac{\zeta+\eta}{2}\right).$$
Thus,
$$\frac{1}{n}\left[\frac{(\zeta^2-\zeta)+(\eta^2-\eta)}{2} - \left(\Big(\frac{\zeta+\eta}{2}\Big)^2 - \frac{\zeta+\eta}{2}\right)\right] \le \frac{1}{n}\sum_{i=1}^{n}(p(a_i))^2 - \frac{1}{n}\sum_{i=1}^{n} p(a_i) - \Big(\frac{1}{n^2} - \frac{1}{n}\Big) \le \frac{(\zeta^2-\zeta)+(\eta^2-\eta)}{2} - \left(\Big(\frac{\zeta+\eta}{2}\Big)^2 - \frac{\zeta+\eta}{2}\right).$$
Hence,
$$\frac{1}{n}\left[\frac{(\zeta^2-\zeta)+(\eta^2-\eta)}{2} - \left(\Big(\frac{\zeta+\eta}{2}\Big)^2 - \frac{\zeta+\eta}{2}\right)\right] \le \frac{1}{n}\big(1 - H^{l}_{\mu}(X)\big) - \frac{1}{n} - \Big(\frac{1}{n^2} - \frac{1}{n}\Big) \le \frac{(\zeta^2-\zeta)+(\eta^2-\eta)}{2} - \left(\Big(\frac{\zeta+\eta}{2}\Big)^2 - \frac{\zeta+\eta}{2}\right).$$
After some calculations, it turns out that
$$\Delta(\zeta,\eta) := \frac{(\zeta-\eta)^2}{4} \le \frac{n-1}{n} - H^{l}_{\mu}(X) \le \frac{n(\zeta-\eta)^2}{4}. \qquad\square$$
Lemma 1.
Let $X$ be a random variable with alphabet $A=\{a_1,\ldots,a_n\}$. Then, $0 \le H^{l}_{\mu}(X) \le \frac{n-1}{n}$, and $H^{l}_{\mu}(X) = \frac{n-1}{n}$ holds if and only if $p(a_i) = p(a_j)$ for every $1\le i,j\le n$.
Proof. 
Using Theorem 2, we obtain $0 \le H^{l}_{\mu}(X) \le \frac{n-1}{n}$. Now, let $H^{l}_{\mu}(X) = \frac{n-1}{n}$. By Theorem 2, we have $\Delta(\zeta,\eta) = \frac{(\zeta-\eta)^2}{4} = 0$ and, thus, $\zeta = \eta$. Therefore, $\max_{1\le i\le n}\{p(a_i)\} = \min_{1\le i\le n}\{p(a_i)\}$, so $p(a_i) = p(a_j)$ for every $1\le i,j\le n$. On the other hand, if $p(a_i) = p(a_j)$ for every $1\le i,j\le n$, then $\zeta = \eta$, so $\Delta(\zeta,\eta) = 0$ and, by Theorem 2, $\frac{n-1}{n} - H^{l}_{\mu}(X) = 0$. Hence, $H^{l}_{\mu}(X) = \frac{n-1}{n}$. □
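As a quick illustration of Definition 1, Theorem 2, and Lemma 1, the following sketch computes the logical entropy of an arbitrary (hypothetical) probability vector and checks the stated bounds; it is not part of the original development.

```python
import numpy as np

def logical_entropy(p):
    """Logical Shannon entropy H_mu^l(X) = 1 - sum_x p(x)^2 (Definition 1)."""
    p = np.asarray(p, dtype=float)
    return 1.0 - np.sum(p ** 2)

p = np.array([0.5, 0.2, 0.2, 0.1])          # an arbitrary distribution on n = 4 states
n = len(p)
H = logical_entropy(p)                       # 0.66
zeta, eta = p.min(), p.max()
Delta = (zeta - eta) ** 2 / 4.0              # Delta(zeta, eta) from Theorem 2

print(Delta <= (n - 1) / n - H <= n * Delta)                          # True (Theorem 2)
print(np.isclose(logical_entropy(np.full(n, 1 / n)), (n - 1) / n))    # True (Lemma 1)
```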
Definition 2.
The logical Shannon entropy of order m of source X is defined by
$$H^{l}_{\mu}(X_1^m) = H^{l}_{\mu}(X_1,\ldots,X_m) := \sum_{x_1,\ldots,x_m\in A} p(x_1,\ldots,x_m)\big(1-p(x_1,\ldots,x_m)\big) = 1 - \sum_{x_1,\ldots,x_m\in A}\big(p(x_1,\ldots,x_m)\big)^2.$$
It may happen that $p(x_1,x_2)\neq p(x_2,x_1)$, but for every two random variables $X_1, X_2$ we have $H^{l}_{\mu}(X_1,X_2) = H^{l}_{\mu}(X_2,X_1)$.
Definition 3.
Let $m$ be a natural number and $1\le i_1,\ldots,i_m\le n$. We define the sets $A_{i_1 i_2\ldots i_m}$ by
$$A_{i_1 i_2\ldots i_m} = \{\gamma\in\Gamma : X_1(\gamma)=a_{i_1},\, X_2(\gamma)=a_{i_2},\ldots,\, X_m(\gamma)=a_{i_m}\}$$
and $\mu(A_{i_1 i_2\ldots i_m}) := a_{i_1 i_2\ldots i_m}$.
Moreover, $A_{i_1 i_2\ldots i_m}\cap A_{j_1 j_2\ldots j_m} = \emptyset$ for every $(i_1,i_2,\ldots,i_m)\neq(j_1,j_2,\ldots,j_m)$ and for every $m\in\mathbb{N}$. Furthermore, if $\gamma\in\bigcup_{j=1}^{n} A_{i_1 i_2\ldots i_m j}$, then $\gamma\in A_{i_1 i_2\ldots i_m j_0}$ for some $j_0\in\{1,\ldots,n\}$. Hence,
$$X_1(\gamma)=a_{i_1},\ldots,X_m(\gamma)=a_{i_m},\ X_{m+1}(\gamma)=a_{j_0}$$
for some $j_0\in\{1,\ldots,n\}$ and, thus, $\gamma\in A_{i_1 i_2\ldots i_m}$. Conversely, if $\gamma\in A_{i_1 i_2\ldots i_m}$, then
$$X_1(\gamma)=a_{i_1},\ldots,X_m(\gamma)=a_{i_m}.$$
Since $X_{m+1}(\gamma)\in A$, we have $X_{m+1}(\gamma)=a_{j_0}$ for some $j_0\in\{1,\ldots,n\}$. Therefore,
$$X_1(\gamma)=a_{i_1},\ldots,X_m(\gamma)=a_{i_m},\ X_{m+1}(\gamma)=a_{j_0}.$$
Hence, $\gamma\in A_{i_1 i_2\ldots i_m j_0}$ and, thus, $\gamma\in\bigcup_{j=1}^{n} A_{i_1 i_2\ldots i_m j}$. So,
$$A_{i_1 i_2\ldots i_m} = \bigcup_{j=1}^{n} A_{i_1 i_2\ldots i_m j}$$
and, therefore, $\Gamma = \bigcup_{i_1,i_2,\ldots,i_m} A_{i_1 i_2\ldots i_m}$. Hence, we obtain
$$\sum_{i_1 i_2\ldots i_m} a_{i_1 i_2\ldots i_m} = 1$$
and
$$a_{i_1 i_2\ldots i_m} = \mu(A_{i_1 i_2\ldots i_m}) = \mu\Big(\bigcup_{j=1}^{n} A_{i_1 i_2\ldots i_m j}\Big) = \sum_{j=1}^{n}\mu(A_{i_1 i_2\ldots i_m j}) = \sum_{j=1}^{n} a_{i_1 i_2\ldots i_m j}$$
for every $1\le i_1,i_2,\ldots,i_m\le n$.
We now prove the following Theorem by employing Lemma A1 (see Appendix A):
Theorem 3.
If X 1 and X 2 are two random variables on Γ, then
$$\max\{H^{l}_{\mu}(X_1),\, H^{l}_{\mu}(X_2)\} \;\le\; H^{l}_{\mu}(X_1,X_2) \;\le\; H^{l}_{\mu}(X_1) + H^{l}_{\mu}(X_2).$$
Proof. 
Suppose $A = \{a_1,\ldots,a_n\}$. For every $1\le i,j\le n$, we consider
$$B_i = \{\gamma\in\Gamma : X_1(\gamma)=a_i\},\quad C_j = \{\gamma\in\Gamma : X_2(\gamma)=a_j\},\quad A_{ij} = B_i\cap C_j,\quad b_i = \mu(B_i),\quad c_j := \mu(C_j),\quad a_{ij} = \mu(A_{ij}).$$
Moreover, $C_i\cap C_j = \emptyset$ and $B_i\cap B_j = \emptyset$ for every $1\le i\neq j\le n$; thus, $A_{ij}\cap A_{kl} = \emptyset$ for every ordered pair $(i,j)\neq(k,l)$. For obvious reasons, $B_i = \bigcup_{j=1}^{n} A_{ij}$ for each $1\le i\le n$, $C_j = \bigcup_{i=1}^{n} A_{ij}$ for each $1\le j\le n$, and $\Gamma = \bigcup_{i,j} A_{ij}$. So, we have $\sum_{i,j} a_{ij} = 1$ and, for every $1\le i,j\le n$,
$$b_i = \mu(B_i) = \mu\Big(\bigcup_{j=1}^{n} A_{ij}\Big) = \sum_{j=1}^{n}\mu(A_{ij}) = \sum_{j=1}^{n} a_{ij},$$
and
$$c_j = \mu(C_j) = \mu\Big(\bigcup_{i=1}^{n} A_{ij}\Big) = \sum_{i=1}^{n}\mu(A_{ij}) = \sum_{i=1}^{n} a_{ij}.$$
With the use of Lemma A1, we have
$$\sum_{i=1}^{n}\Big(\sum_{j=1}^{n} a_{ij}\Big)^2 + \sum_{j=1}^{n}\Big(\sum_{i=1}^{n} a_{ij}\Big)^2 \le 1 + \sum_{i,j} a_{ij}^2.$$
Therefore,
$$\sum_{i=1}^{n} b_i^2 + \sum_{j=1}^{n} c_j^2 \le 1 + \sum_{i,j} a_{ij}^2.$$
Consequently,
$$\sum_{i=1}^{n}\big(\mu(B_i)\big)^2 + \sum_{j=1}^{n}\big(\mu(C_j)\big)^2 \le 1 + \sum_{i,j}\big(\mu(A_{ij})\big)^2,$$
and
$$\sum_{i,j}\big(\mu(A_{ij})\big)^2 \ge 1 - \sum_{i=1}^{n}\big(\mu(B_i)\big)^2 - \sum_{j=1}^{n}\big(\mu(C_j)\big)^2.$$
Hence,
$$1 - \sum_{i,j}\big(\mu(A_{ij})\big)^2 \le \Big(1 - \sum_{i=1}^{n}\big(\mu(B_i)\big)^2\Big) + \Big(1 - \sum_{j=1}^{n}\big(\mu(C_j)\big)^2\Big),$$
and it follows that $H^{l}_{\mu}(X_1,X_2) \le H^{l}_{\mu}(X_1) + H^{l}_{\mu}(X_2)$.
Now, we prove the left-hand inequality. Since
$$b_i = \mu(B_i) = \mu\Big(\bigcup_{j=1}^{n} A_{ij}\Big) = \sum_{j=1}^{n}\mu(A_{ij}) = \sum_{j=1}^{n} a_{ij}$$
for every $1\le i\le n$, we have $b_i^2 = \big(\sum_{j=1}^{n} a_{ij}\big)^2 \ge \sum_{j=1}^{n} a_{ij}^2$. Therefore,
$$\big(\mu(B_i)\big)^2 \ge \sum_{j=1}^{n}\big(\mu(A_{ij})\big)^2,$$
and, thus,
$$\sum_{i=1}^{n}\big(\mu(B_i)\big)^2 \ge \sum_{i=1}^{n}\sum_{j=1}^{n}\big(\mu(A_{ij})\big)^2.$$
So, $H^{l}_{\mu}(X_1) \le H^{l}_{\mu}(X_1,X_2)$.
Similarly, $H^{l}_{\mu}(X_2) \le H^{l}_{\mu}(X_1,X_2)$. Consequently,
$$\max\{H^{l}_{\mu}(X_1),\, H^{l}_{\mu}(X_2)\} \le H^{l}_{\mu}(X_1,X_2). \qquad\square$$
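The bounds of Theorem 3 can be checked numerically on a randomly generated joint distribution; the table below is an arbitrary illustration, not data from the paper.

```python
import numpy as np

def H_l(q):
    """Logical entropy of a (possibly joint) distribution given as an array."""
    return 1.0 - np.sum(np.asarray(q, dtype=float) ** 2)

rng = np.random.default_rng(0)
P = rng.random((4, 4))
P /= P.sum()                        # joint distribution p(x1, x2) on a 4 x 4 alphabet

H12 = H_l(P)                        # H^l(X1, X2), Definition 2
H1 = H_l(P.sum(axis=1))             # logical entropy of the marginal of X1
H2 = H_l(P.sum(axis=0))             # logical entropy of the marginal of X2

print(max(H1, H2) <= H12 <= H1 + H2)   # True, as stated in Theorem 3
```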
Corollary 1.
If X is an information source, then
$$\max\{H^{l}_{\mu}(X_i) : 1\le i\le k\} \;\le\; H^{l}_{\mu}(X_1,\ldots,X_k) \;\le\; \sum_{i=1}^{k} H^{l}_{\mu}(X_i), \qquad (k\in\mathbb{N}).$$
Proof. 
This follows from Theorem 3. □
Definition 4.
The logical metric permutation entropy of order $m$ of the source $X = \{X_0, X_1,\ldots\}$ is defined by
$$H^{l}_{\mu}(X_0^{m-1}) = H^{l}_{\mu}(R_0^{m-1}) = 1 - \sum_{r_0,\ldots,r_{m-1}}\big(p(r_0^{m-1})\big)^2.$$
Lemma 2.
For an S.P. $X$, the sequence $\{H^{l}_{\mu}(X_1^m)\}_{m}$ is increasing. Since it is bounded above by 1, $\lim_{m\to\infty} H^{l}_{\mu}(X_1^m)$ exists.
Proof. 
According to (1),
$$p(x_1,\ldots,x_m) = \sum_{x_{m+1}} p(x_1,\ldots,x_m,x_{m+1})$$
for every $m\in\mathbb{N}$. Therefore,
$$\big(p(x_1,\ldots,x_m)\big)^2 = \Big(\sum_{x_{m+1}} p(x_1,\ldots,x_m,x_{m+1})\Big)^2 \ge \sum_{x_{m+1}}\big(p(x_1,\ldots,x_m,x_{m+1})\big)^2,$$
and
$$\sum_{x_1^m}\big(p(x_1,\ldots,x_m)\big)^2 \ge \sum_{x_1^{m+1}}\big(p(x_1,\ldots,x_m,x_{m+1})\big)^2.$$
This means that
$$H^{l}_{\mu}(X_1^m) = 1 - \sum_{x_1^m}\big(p(x_1,\ldots,x_m)\big)^2 \le 1 - \sum_{x_1^{m+1}}\big(p(x_1,\ldots,x_m,x_{m+1})\big)^2 = H^{l}_{\mu}(X_1^{m+1}). \qquad\square$$
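The monotonicity of Lemma 2 can be observed on a concrete source, for instance a two-state stationary Markov chain; the transition matrix below is an illustrative assumption, not an object from the paper.

```python
import numpy as np
from itertools import product

T = np.array([[0.9, 0.1],
              [0.4, 0.6]])            # assumed transition matrix of a two-state chain
pi = np.array([0.8, 0.2])             # its stationary distribution (pi @ T == pi)

def block_probs(m):
    """Joint probabilities p(x_1, ..., x_m) of the stationary chain."""
    probs = []
    for word in product(range(2), repeat=m):
        p = pi[word[0]]
        for a, b in zip(word, word[1:]):
            p *= T[a, b]
        probs.append(p)
    return np.array(probs)

H = [1.0 - np.sum(block_probs(m) ** 2) for m in range(1, 7)]
print(all(h1 <= h2 for h1, h2 in zip(H, H[1:])))   # True: H^l(X_1^m) increases with m
```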
Definition 5.
The logical Shannon entropy of source X = { X 1 , X 2 , } is defined by
$$h^{l}_{\mu}(X) = \lim_{m\to\infty} H^{l}_{\mu}(X_1^m).$$
Definition 6.
The logical metric permutation entropy of source X = { X 0 , X 1 , } is defined by
$$h^{l}_{\mu}(X) = \lim_{m\to\infty} H^{l}_{\mu}(X_0^{m-1}).$$
Remark 1.
Let $m$ be a positive integer. Then $0 \le H^{l}_{\mu}(X_1^m) \le 1$ and $0 \le h^{l}_{\mu}(X) \le 1$.
Lemma 3.
Let X = ( X 1 , X 1 , X 1 , ) be an information source. Then the following holds:
1. 
$H^{l}_{\mu}(\underbrace{X_1, X_1,\ldots,X_1}_{m\ \mathrm{times}}) = H^{l}_{\mu}(X_1)$, for every $m\in\mathbb{N}$.
2. 
$h^{l}_{\mu}(X) = H^{l}_{\mu}(X_1)$.
Proof. 
  • If $X = (X_1, X_1, X_1,\ldots)$, then
$$p(x_1,x_2,\ldots,x_m) = \begin{cases} p(x_1) & x_1 = x_2 = \cdots = x_m, \\ 0 & x_i\neq x_j \text{ for some } 1\le i\neq j\le m.\end{cases}$$
    Hence,
$$H^{l}_{\mu}(X_1,\ldots,X_m) = \sum_{x_1,\ldots,x_m\in A} p(x_1,\ldots,x_m)\big(1 - p(x_1,\ldots,x_m)\big) = \sum_{x_1\in A} p(x_1)\big(1 - p(x_1)\big) = H^{l}_{\mu}(X_1).$$
  • We derive from (3) that
$$h^{l}_{\mu}(X) = \lim_{m\to\infty} H^{l}_{\mu}(X_1,\ldots,X_m) = \lim_{m\to\infty} H^{l}_{\mu}(X_1,\ldots,X_1) = \lim_{m\to\infty} H^{l}_{\mu}(X_1) = H^{l}_{\mu}(X_1). \qquad\square$$
Theorem 4.
Suppose that X represents an information source on Γ with the discrete finite state space A = { a 1 , , a n } .
1. 
If $\zeta_m = \min_{x_1^m\in A^m}\{p(x_1^m)\}$ and $\eta_m = \max_{x_1^m\in A^m}\{p(x_1^m)\}$, then
$$0 \le \Delta(\zeta_m,\eta_m) \;\le\; \frac{n^m - 1}{n^m} - H^{l}_{\mu}(X_1^m) \;\le\; n^m\,\Delta(\zeta_m,\eta_m),$$
2. 
$$\lim_{m\to\infty}\Delta(\zeta_m,\eta_m) \;\le\; 1 - h^{l}_{\mu}(X) \;\le\; \lim_{m\to\infty} n^m\,\Delta(\zeta_m,\eta_m).$$
Proof. 
  • The result follows from Theorem 2.
  • Taking the limit as $m\to\infty$ in the inequality of part 1, we obtain part 2. □
Lemma 4.
Let $X$ represent an information source on $\Gamma$ with the discrete finite state space $A = \{a_1,\ldots,a_n\}$. Then $0 \le H^{l}_{\mu}(X_1^m) \le \frac{n^m - 1}{n^m}$, and $H^{l}_{\mu}(X_1^m) = \frac{n^m - 1}{n^m}$ holds if and only if $p(x_1^m) = p(t_1^m)$ for every $x_1^m, t_1^m\in A^m$.
Proof. 
By Theorem 4, $0 \le H^{l}_{\mu}(X_1^m) \le \frac{n^m-1}{n^m}$. If $H^{l}_{\mu}(X_1^m) = \frac{n^m-1}{n^m}$, then by Theorem 4 we obtain $\Delta(\zeta_m,\eta_m) = \frac{(\zeta_m - \eta_m)^2}{4} = 0$. Hence $\zeta_m = \eta_m$. Therefore $\max_{x_1^m\in A^m}\{p(x_1^m)\} = \min_{x_1^m\in A^m}\{p(x_1^m)\}$, and thus $p(x_1^m) = p(t_1^m)$ for every $x_1^m, t_1^m\in A^m$. On the other hand, if $p(x_1^m) = p(t_1^m)$ for every $x_1^m, t_1^m\in A^m$, then $\zeta_m = \eta_m$. Therefore $\Delta(\zeta_m,\eta_m) = 0$ and, by Theorem 4, $\frac{n^m-1}{n^m} - H^{l}_{\mu}(X_1^m) = 0$; thus $H^{l}_{\mu}(X_1^m) = \frac{n^m-1}{n^m}$. □
Definition 7.
Let $p(x)\neq 0$; the conditional probability function is defined by $p(y\,|\,x) := \frac{p(x,y)}{p(x)}$. In general, for $p(x_2,\ldots,x_{n+1})\neq 0$, the conditional probability function is defined by $p(x_1\,|\,x_2,\ldots,x_{n+1}) := \frac{p(x_1,x_2,\ldots,x_{n+1})}{p(x_2,x_3,\ldots,x_{n+1})}$.
Lemma 5.
Let $x_1, x_2,\ldots,x_{m+1}$ be a word. Then
$$p(x_{m+1}, x_m,\ldots,x_1) = \prod_{i=1}^{m+1} p(x_i\,|\,x_{i-1},\ldots,x_1),$$
where $m\in\mathbb{N}$ and $p(x_1\,|\,x_0) := p(x_1)$.
Proof. 
We prove the lemma by induction. If $m = 1$, we have $p(x_2, x_1) = p(x_1)\, p(x_2\,|\,x_1)$. Thus, the statement is true for $m = 1$. Now suppose the statement is true for $m = k - 1$; we prove it for $m = k$:
$$\prod_{i=1}^{k+1} p(x_i\,|\,x_{i-1},\ldots,x_1) = \Big(\prod_{i=1}^{k} p(x_i\,|\,x_{i-1},\ldots,x_1)\Big)\, p(x_{k+1}\,|\,x_k,\ldots,x_1) = p(x_k, x_{k-1},\ldots,x_1)\, p(x_{k+1}\,|\,x_k,\ldots,x_1) = p(x_k, x_{k-1},\ldots,x_1)\,\frac{p(x_{k+1}, x_k,\ldots,x_1)}{p(x_k, x_{k-1},\ldots,x_1)} = p(x_{k+1}, x_k,\ldots,x_1),$$
which completes the proof. □
Definition 8.
Let X 1 and X 2 be two random variables on Γ. We define the conditional logical entropy of X 2 given X 1 by
$$H^{l}_{\mu}(X_2\,|\,X_1) := \sum_{x_1,x_2}\big(p(x_1)\big)^2\Big(p(x_2) - \big(p(x_2\,|\,x_1)\big)^2\Big).$$
Note: if $p(x_1) = 0$, we set $\big(p(x_1)\big)^2\big(p(x_2) - (p(x_2\,|\,x_1))^2\big) = 0$.
Definition 9.
Suppose X 1 , X 2 , , X m are m random variables on Γ. Define the conditional logical entropy of X m given X 1 , , X m 1 by
$$H^{l}_{\mu}(X_m\,|\,X_{m-1},\ldots,X_2,X_1) := \sum_{x_1^m}\big(p(x_{m-1},\ldots,x_2,x_1)\big)^2\Big[p(x_m) - \big(p(x_m\,|\,x_{m-1},\ldots,x_1)\big)^2\Big].$$
Lemma 6.
Suppose $X_1, X_2,\ldots,X_m$ are $m$ random variables on $\Gamma$; then
$$H^{l}_{\mu}(X_m\,|\,X_{m-1},\ldots,X_2,X_1) = \sum_{x_1^{m-1}}\big(p(x_{m-1},\ldots,x_2,x_1)\big)^2 - \sum_{x_1^{m}}\big(p(x_m,\ldots,x_2,x_1)\big)^2 = H^{l}_{\mu}(X_m, X_{m-1},\ldots,X_2,X_1) - H^{l}_{\mu}(X_{m-1},\ldots,X_2,X_1).$$
Proof. 
According to Definition 9, we obtain
$$\begin{aligned} H^{l}_{\mu}(X_m\,|\,X_{m-1},\ldots,X_2,X_1) &= \sum_{x_1^m}\big(p(x_{m-1},\ldots,x_2,x_1)\big)^2\Big(p(x_m) - \big(p(x_m\,|\,x_{m-1},\ldots,x_1)\big)^2\Big)\\ &= \sum_{x_1^m}\big(p(x_{m-1},\ldots,x_2,x_1)\big)^2\, p(x_m) - \sum_{x_1^m}\big(p(x_{m-1},\ldots,x_2,x_1)\big)^2\big(p(x_m\,|\,x_{m-1},\ldots,x_1)\big)^2\\ &= \Big(\sum_{x_1^{m-1}}\big(p(x_{m-1},\ldots,x_2,x_1)\big)^2\Big)\Big(\sum_{x_m} p(x_m)\Big) - \sum_{x_1^m}\big(p(x_{m-1},\ldots,x_2,x_1)\big)^2\Big(\frac{p(x_m,x_{m-1},\ldots,x_1)}{p(x_{m-1},\ldots,x_1)}\Big)^2\\ &= \sum_{x_1^{m-1}}\big(p(x_{m-1},\ldots,x_2,x_1)\big)^2 - \sum_{x_1^m}\big(p(x_m,\ldots,x_2,x_1)\big)^2. \qquad\square\end{aligned}$$
Lemma 7.
Let $X$ be a stationary finite-state S.P. Then
$$\sum_{x_2^n}\big(p(x_n,\ldots,x_2)\big)^2 = \sum_{x_1^{n-1}}\big(p(x_{n-1},\ldots,x_2,x_1)\big)^2.$$
Proof. 
Since X is stationary,
$$\begin{aligned}\sum_{x_2^n}\big(p(x_n,\ldots,x_2)\big)^2 &= \sum_{x_2^n}\Big(\mu\big(\{\gamma\in\Gamma : X_n(\gamma)=x_n,\ldots,X_2(\gamma)=x_2\}\big)\Big)^2\\ &= \sum_{x_2^n}\Big(\mu\big(\{\gamma\in\Gamma : X_{n-1}(\gamma)=x_n,\ldots,X_1(\gamma)=x_2\}\big)\Big)^2\\ &= \sum_{x_1^{n-1}}\big(p(x_{n-1},\ldots,x_2,x_1)\big)^2,\end{aligned}$$
which yields (5). □
Theorem 5.
Let $X$ be a stationary finite-state S.P. with discrete finite state space $A = \{a_1,\ldots,a_n\}$. Then the sequence of conditional logical entropies $H^{l}_{\mu}(X_m\,|\,X_{m-1},\ldots,X_1)$ is decreasing.
Proof. 
Under the notation of Definition 3, define
$$\{A_{i_1 i_2\ldots i_m} : 1\le i_1, i_2,\ldots,i_m\le n\} = \{D_1, D_2,\ldots,D_M\}$$
and $\mu(D_r) = d_r$, where $M = n^m$. Furthermore, set
$$D_{ij} = D_i\cap\{\gamma\in\Gamma : X_{m+1}(\gamma) = a_j\},\qquad \mu(D_{ij}) = d_{ij},$$
$$D_{ijk} = D_{ij}\cap\{\gamma\in\Gamma : X_{m+2}(\gamma) = a_k\},\qquad \mu(D_{ijk}) = d_{ijk},$$
where $1\le i\le M$ and $1\le j,k\le n$. It is easy to see that $D_i\cap D_j = \emptyset$ for every $1\le i\neq j\le M$, and $D_{ij}\cap D_{rs} = \emptyset$ for every ordered pair $(i,j)\neq(r,s)$. Therefore, $D_{ijk}\cap D_{rst} = \emptyset$ for every $(i,j,k)\neq(r,s,t)$. For obvious reasons, $D_i = \bigcup_{j=1}^{n} D_{ij}$ for each $1\le i\le M$, $D_{ij} = \bigcup_{k=1}^{n} D_{ijk}$ for every $1\le i\le M$, $1\le j\le n$, and $\Gamma = \bigcup_{i,j,k} D_{ijk}$. Consequently, $\sum_{i,j,k} d_{ijk} = 1$ and
$$d_i = \mu(D_i) = \mu\Big(\bigcup_{j=1}^{n} D_{ij}\Big) = \sum_{j=1}^{n}\mu(D_{ij}) = \sum_{j=1}^{n} d_{ij}$$
and
$$d_{ij} = \mu(D_{ij}) = \mu\Big(\bigcup_{k=1}^{n} D_{ijk}\Big) = \sum_{k=1}^{n}\mu(D_{ijk}) = \sum_{k=1}^{n} d_{ijk}$$
for every $1\le i\le M$, $1\le j\le n$.
Using Theorem A1 and Lemma 7, we deduce that
$$\sum_{x_1^{m+2}}\big(p(x_{m+2},\ldots,x_2,x_1)\big)^2 = \sum_{i=1}^{M}\sum_{j=1}^{n}\sum_{k=1}^{n} d_{ijk}^2 = \sum_{i,j,k} d_{ijk}^2,$$
$$\sum_{x_1^{m+1}}\big(p(x_{m+1},\ldots,x_2,x_1)\big)^2 = \sum_{i=1}^{M}\sum_{j=1}^{n} d_{ij}^2 = \sum_{i,j} d_{ij}^2 = \sum_{i,j}\Big(\sum_{k=1}^{n} d_{ijk}\Big)^2,$$
$$\sum_{x_1^{m}}\big(p(x_{m},\ldots,x_2,x_1)\big)^2 = \sum_{i=1}^{M} d_i^2 = \sum_{i=1}^{M}\Big(\sum_{j,k=1}^{n} d_{ijk}\Big)^2,$$
$$\sum_{x_2^{m+2}}\big(p(x_{m+2},\ldots,x_2)\big)^2 = \sum_{x_1^{m+1}}\big(p(x_{m+1},\ldots,x_2,x_1)\big)^2 = \sum_{i,j} d_{ij}^2.$$
With the use of Theorem A1, we obtain
$$\begin{aligned} H^{l}_{\mu}(X_{m+2}\,|\,X_{m+1},\ldots,X_2,X_1) &= \sum_{x_1^{m+1}}\big(p(x_{m+1},\ldots,x_2,x_1)\big)^2 - \sum_{x_1^{m+2}}\big(p(x_{m+2},\ldots,x_2,x_1)\big)^2\\ &= \sum_{i,j}\Big(\sum_{k=1}^{n} d_{ijk}\Big)^2 - \sum_{i,j,k} d_{ijk}^2\\ &\le \sum_{i=1}^{M}\Big(\sum_{j,k=1}^{n} d_{ijk}\Big)^2 - \sum_{i,j}\Big(\sum_{k=1}^{n} d_{ijk}\Big)^2\\ &= \sum_{x_1^{m}}\big(p(x_{m},\ldots,x_2,x_1)\big)^2 - \sum_{x_1^{m+1}}\big(p(x_{m+1},\ldots,x_2,x_1)\big)^2\\ &= H^{l}_{\mu}(X_{m+1}\,|\,X_m,\ldots,X_2,X_1).\end{aligned}$$
This means that the sequence of conditional logical entropies $H^{l}_{\mu}(X_m\,|\,X_{m-1},\ldots,X_1)$ is decreasing, so
$$0 \le H^{l}_{\mu}(X_{m+1}\,|\,X_m,\ldots,X_1) \le H^{l}_{\mu}(X_m\,|\,X_{m-1},\ldots,X_1) \le \cdots \le H^{l}_{\mu}(X_1). \qquad\square$$
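Using Lemma 6, the conditional logical entropies of Theorem 5 can be computed for the same kind of stationary Markov chain as in the earlier sketch (again an assumed example, not an object from the paper); the values decrease with m, as the theorem asserts.

```python
import numpy as np
from itertools import product

T = np.array([[0.9, 0.1],
              [0.4, 0.6]])            # assumed transition matrix
pi = np.array([0.8, 0.2])             # its stationary distribution

def sum_sq(m):
    """sum over x_1..x_m of p(x_1,...,x_m)^2 for the stationary chain."""
    total = 0.0
    for word in product(range(2), repeat=m):
        p = pi[word[0]]
        for a, b in zip(word, word[1:]):
            p *= T[a, b]
        total += p ** 2
    return total

# Lemma 6: H^l(X_m | X_{m-1},...,X_1) = sum_sq(m-1) - sum_sq(m), with sum_sq(0) = 1.
cond = [(sum_sq(m - 1) if m > 1 else 1.0) - sum_sq(m) for m in range(1, 7)]
print(all(c1 >= c2 for c1, c2 in zip(cond, cond[1:])))   # True: decreasing (Theorem 5)
```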
Corollary 2.
Let $X = (X_1, X_2, X_3,\ldots)$ be a source. Then the limit $\lim_{n\to\infty} H^{l}_{\mu}(X_n\,|\,X_{n-1},\ldots,X_1)$ exists.
Lemma 8.
Let $X = (X_m)_{m=1}^{\infty}$ be a stationary finite-state S.P. Then
$$\sum_{x_2^{m+1}}\big(p(x_{m+1}\,|\,x_m,\ldots,x_2)\big)^2 = \sum_{x_1^{m}}\big(p(x_m\,|\,x_{m-1},\ldots,x_1)\big)^2.$$
Proof. 
Since X is stationary,
$$\begin{aligned}\sum_{x_2^{m+1}}\big(p(x_{m+1}\,|\,x_m,\ldots,x_2)\big)^2 &= \sum_{x_2^{m+1}}\left(\frac{p(x_{m+1},x_m,\ldots,x_2)}{p(x_m,\ldots,x_2)}\right)^2\\ &= \sum_{x_2^{m+1}}\left(\frac{\mu(\{\gamma\in\Gamma : X_{m+1}(\gamma)=x_{m+1},\ldots,X_2(\gamma)=x_2\})}{\mu(\{\gamma\in\Gamma : X_m(\gamma)=x_m,\ldots,X_2(\gamma)=x_2\})}\right)^2\\ &= \sum_{x_2^{m+1}}\left(\frac{\mu(\{\gamma\in\Gamma : X_m(\gamma)=x_{m+1},\ldots,X_1(\gamma)=x_2\})}{\mu(\{\gamma\in\Gamma : X_{m-1}(\gamma)=x_m,\ldots,X_1(\gamma)=x_2\})}\right)^2\\ &= \sum_{x_1^{m}}\big(p(x_m\,|\,x_{m-1},\ldots,x_1)\big)^2,\end{aligned}$$
which completes the proof. □
Theorem 6.
Let $X = (X_n)_{n=1}^{\infty}$ be a stationary finite-state S.P. Then
$$H^{l}_{\mu}(X_{m+1}\,|\,X_m,\ldots,X_2) = H^{l}_{\mu}(X_m\,|\,X_{m-1},\ldots,X_1).$$
Proof. 
According to Lemma 7,
$$H^{l}_{\mu}(X_{m+1}\,|\,X_m,\ldots,X_2) = \sum_{x_2^{m}}\big(p(x_m,\ldots,x_2)\big)^2 - \sum_{x_2^{m+1}}\big(p(x_{m+1},\ldots,x_2)\big)^2 = \sum_{x_1^{m-1}}\big(p(x_{m-1},\ldots,x_1)\big)^2 - \sum_{x_1^{m}}\big(p(x_m,\ldots,x_1)\big)^2 = H^{l}_{\mu}(X_m\,|\,X_{m-1},\ldots,X_2,X_1).$$
Theorem 6 is thus proved. □
Theorem 7.
Let X 1 and X 2 be two random variables on Γ. Then the following hold:
1. 
$H^{l}_{\mu}(X_2\,|\,X_1) = H^{l}_{\mu}(X_1,X_2) - H^{l}_{\mu}(X_1)$.
2. 
$H^{l}_{\mu}(X_2\,|\,X_1) + H^{l}_{\mu}(X_1) = H^{l}_{\mu}(X_1\,|\,X_2) + H^{l}_{\mu}(X_2)$.
Proof. 
  • Using the definition of conditional logical entropy, we deduce
$$H^{l}_{\mu}(X_2\,|\,X_1) = \sum_{x_1}\big(p(x_1)\big)^2 - \sum_{x_1,x_2}\big(p(x_1,x_2)\big)^2 = \Big(1 - \sum_{x_1,x_2}\big(p(x_1,x_2)\big)^2\Big) - \Big(1 - \sum_{x_1}\big(p(x_1)\big)^2\Big) = H^{l}_{\mu}(X_1,X_2) - H^{l}_{\mu}(X_1),$$
    which completes the proof.
  • From the previous part, and since $H^{l}_{\mu}(X_1,X_2) = H^{l}_{\mu}(X_2,X_1)$, we have
$$H^{l}_{\mu}(X_2\,|\,X_1) + H^{l}_{\mu}(X_1) = H^{l}_{\mu}(X_1,X_2) = H^{l}_{\mu}(X_2,X_1) = H^{l}_{\mu}(X_1\,|\,X_2) + H^{l}_{\mu}(X_2). \qquad\square$$
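Theorem 7 (and Definition 8) can be verified numerically on an arbitrary joint distribution; the table below is an illustrative assumption.

```python
import numpy as np

def H_l(q):
    return 1.0 - np.sum(np.asarray(q, dtype=float) ** 2)

rng = np.random.default_rng(1)
P = rng.random((3, 5))
P /= P.sum()                            # joint distribution p(x1, x2), all entries > 0
p1, p2 = P.sum(axis=1), P.sum(axis=0)   # marginals of X1 and X2

# Definition 8: H^l(X2 | X1) = sum_{x1,x2} p(x1)^2 * (p(x2) - p(x2|x1)^2)
H2_given_1 = sum(p1[i] ** 2 * (p2[j] - (P[i, j] / p1[i]) ** 2)
                 for i in range(P.shape[0]) for j in range(P.shape[1]))
H1_given_2 = sum(p2[j] ** 2 * (p1[i] - (P[i, j] / p2[j]) ** 2)
                 for i in range(P.shape[0]) for j in range(P.shape[1]))

print(np.isclose(H2_given_1, H_l(P) - H_l(p1)))                 # part 1 of Theorem 7
print(np.isclose(H2_given_1 + H_l(p1), H1_given_2 + H_l(p2)))   # part 2 of Theorem 7
```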
Theorem 8.
Let $X = (X_1, X_2, X_3,\ldots)$ be an information source. Then
$$H^{l}_{\mu}(X_1,\ldots,X_m) = \sum_{i=1}^{m} H^{l}_{\mu}(X_i\,|\,X_{i-1},\ldots,X_1),$$
where $H^{l}_{\mu}(X_1\,|\,X_0) := H^{l}_{\mu}(X_1)$.
Proof. 
According to Lemma 6, we obtain
$$\sum_{i=1}^{m} H^{l}_{\mu}(X_i\,|\,X_{i-1},\ldots,X_1) = H^{l}_{\mu}(X_1) + \sum_{i=2}^{m}\Big(H^{l}_{\mu}(X_i,\ldots,X_1) - H^{l}_{\mu}(X_{i-1},\ldots,X_1)\Big) = H^{l}_{\mu}(X_1,\ldots,X_m),$$
hence the theorem is proven. □
Theorem 9.
Let X = ( X 1 , X 2 , X 3 , ) be an information source. Then
$$h^{l}_{\mu}(X) = \sum_{i=1}^{\infty} H^{l}_{\mu}(X_i\,|\,X_{i-1},\ldots,X_1).$$
Proof. 
By the use of Theorem 8, we obtain
$$h^{l}_{\mu}(X) = \lim_{n\to\infty}\sum_{i=1}^{n} H^{l}_{\mu}(X_i\,|\,X_{i-1},\ldots,X_1) = \sum_{i=1}^{\infty} H^{l}_{\mu}(X_i\,|\,X_{i-1},\ldots,X_1),$$
which completes the proof. □
Definition 10.
An independent information source, X = ( X 1 , X 2 , X 3 , ) , is a source with the following property
$$p(x_1,x_2,\ldots,x_m) = \prod_{i=1}^{m} p(x_i)$$
for all $x_1^m$.
Theorem 10.
Let X = ( X 1 , X 2 , X 3 , ) be an independent information source. Then
$$H^{l}_{\mu}(X_{m+1}\,|\,X_m,\ldots,X_1) = \big(1 - H^{l}_{\mu}(X_m,\ldots,X_1)\big)\, H^{l}_{\mu}(X_{m+1})$$
for every m N .
Proof. 
Since the random variables of $X = (X_1, X_2, X_3,\ldots)$ are independent, we have
$$\begin{aligned} H^{l}_{\mu}(X_{m+1}\,|\,X_m,\ldots,X_1) &= \sum_{x_1^{m+1}}\big(p(x_m,\ldots,x_1)\big)^2\, p(x_{m+1}) - \sum_{x_1^{m+1}}\big(p(x_{m+1},\ldots,x_1)\big)^2\\ &= \sum_{x_1^{m+1}}\big(p(x_m,\ldots,x_1)\big)^2\, p(x_{m+1}) - \sum_{x_1^{m+1}}\big(p(x_m,\ldots,x_1)\big)^2\big(p(x_{m+1})\big)^2\\ &= \sum_{x_1^{m+1}}\big(p(x_m,\ldots,x_1)\big)^2\Big(p(x_{m+1}) - \big(p(x_{m+1})\big)^2\Big)\\ &= \Big(\sum_{x_1^{m}}\big(p(x_m,\ldots,x_1)\big)^2\Big)\Big(\sum_{x_{m+1}}\big(p(x_{m+1}) - (p(x_{m+1}))^2\big)\Big)\\ &= \big(1 - H^{l}_{\mu}(X_m,\ldots,X_1)\big)\, H^{l}_{\mu}(X_{m+1}).\end{aligned}$$
The result follows from (6). □
Theorem 11.
Suppose that $X = (X_1, X_2, X_3,\ldots)$ is an independent information source and $\lim_{n\to\infty} H^{l}_{\mu}(X_n) \neq 0$. Then $h^{l}_{\mu}(X) = 1$.
Proof. 
In view of Theorem 10 and Lemma A2, we conclude that
$$\lim_{n\to\infty} H^{l}_{\mu}(X_{n+1}\,|\,X_n,\ldots,X_1) = \lim_{n\to\infty}\big(1 - H^{l}_{\mu}(X_n,\ldots,X_1)\big)\, H^{l}_{\mu}(X_{n+1}) = \lim_{n\to\infty}\big(1 - H^{l}_{\mu}(X_n,\ldots,X_1)\big)\times\lim_{n\to\infty} H^{l}_{\mu}(X_{n+1}) = 0.$$
Since $\lim_{n\to\infty} H^{l}_{\mu}(X_n)\neq 0$, we get $\lim_{n\to\infty}\big(1 - H^{l}_{\mu}(X_n,\ldots,X_1)\big) = 0$. Hence,
$$h^{l}_{\mu}(X) = \lim_{n\to\infty} H^{l}_{\mu}(X_n,\ldots,X_1) = 1. \qquad\square$$
Theorem 12.
Let X = ( X 1 , X 2 , X 3 , ) be an independent information source. Then
$$H^{l}_{\mu}(X_m,\ldots,X_1) = 1 - \prod_{i=1}^{m}\big(1 - H^{l}_{\mu}(X_i)\big)$$
for every m N .
Proof. 
Since X is an independent source,
$$H^{l}_{\mu}(X_m,\ldots,X_1) = 1 - \sum_{x_1,\ldots,x_m}\big(p(x_1,\ldots,x_m)\big)^2 = 1 - \sum_{x_1,\ldots,x_m}\Big(\prod_{i=1}^{m} p(x_i)\Big)^2 = 1 - \sum_{x_1,\ldots,x_m}\prod_{i=1}^{m}\big(p(x_i)\big)^2 = 1 - \prod_{i=1}^{m}\Big(\sum_{x_i}\big(p(x_i)\big)^2\Big) = 1 - \prod_{i=1}^{m}\big(1 - H^{l}_{\mu}(X_i)\big),$$
which is the desired result. □
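For an independent source, Theorem 12 can be checked directly; the marginal distributions below are arbitrary assumptions.

```python
import numpy as np
from functools import reduce

def H_l(q):
    return 1.0 - np.sum(np.asarray(q, dtype=float) ** 2)

marginals = [np.array([0.6, 0.4]),
             np.array([0.5, 0.3, 0.2]),
             np.array([0.7, 0.1, 0.1, 0.1])]        # assumed p(x_i) for X_1, X_2, X_3

# For an independent source the joint distribution is the outer product of the marginals.
joint = reduce(np.multiply.outer, marginals)

lhs = H_l(joint)                                          # H^l(X_3, X_2, X_1)
rhs = 1.0 - np.prod([1.0 - H_l(p) for p in marginals])    # 1 - prod_i (1 - H^l(X_i))
print(np.isclose(lhs, rhs))                               # True (Theorem 12)
```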
Theorem 13.
If X = ( X 1 , X 2 , X 3 , ) is an independent information source, then
1. 
$\lim_{n\to\infty}\prod_{i=1}^{n}\big(1 - H^{l}_{\mu}(X_i)\big) = 1 - h^{l}_{\mu}(X)$.
2. 
If there exists $k\in\mathbb{N}$ such that $H^{l}_{\mu}(X_k) = 1$, then $h^{l}_{\mu}(X) = 1$.
Proof. 
  • This follows from Theorem 12.
  • Let $H^{l}_{\mu}(X_k) = 1$ for some $k\in\mathbb{N}$. Since $H^{l}_{\mu}(X_k) = 1$,
$$1 - h^{l}_{\mu}(X) = \lim_{n\to\infty}\prod_{i=1}^{n}\big(1 - H^{l}_{\mu}(X_i)\big) = 0.$$
Hence, h μ l ( X ) = 1 . □
Definition 11.
Let X 1 and X 2 be two random variables on Γ. Define the logical mutual information of X 2 and X 1 by
$$I^{l}_{\mu}(X_1, X_2) := H^{l}_{\mu}(X_1) - H^{l}_{\mu}(X_1\,|\,X_2).$$
Lemma 9.
Let X 1 and X 2 be two random variables on Γ. Then the following hold:
1. 
$I^{l}_{\mu}(X_1,X_2) = H^{l}_{\mu}(X_2) - H^{l}_{\mu}(X_2\,|\,X_1)$.
2. 
$I^{l}_{\mu}(X_1,X_2) = H^{l}_{\mu}(X_1) + H^{l}_{\mu}(X_2) - H^{l}_{\mu}(X_1,X_2)$.
3. 
$I^{l}_{\mu}(X_1,X_2) = I^{l}_{\mu}(X_2,X_1)$.
4. 
$I^{l}_{\mu}(X_1,X_1) = H^{l}_{\mu}(X_1)$.
5. 
If $X_1$ and $X_2$ are independent random variables, then
$$I^{l}_{\mu}(X_1,X_2) = H^{l}_{\mu}(X_1)\, H^{l}_{\mu}(X_2).$$
Proof. 
1–3.
These follow from Definition 11 and Theorem 7.
4.
According to Lemma 3, $H^{l}_{\mu}(X_1,X_1) = H^{l}_{\mu}(X_1)$. Therefore,
$$I^{l}_{\mu}(X_1,X_1) = H^{l}_{\mu}(X_1) + H^{l}_{\mu}(X_1) - H^{l}_{\mu}(X_1,X_1) = 2H^{l}_{\mu}(X_1) - H^{l}_{\mu}(X_1) = H^{l}_{\mu}(X_1).$$
5.
It follows from Theorem 12 that
$$H^{l}_{\mu}(X_1,X_2) = 1 - \big(1 - H^{l}_{\mu}(X_1)\big)\big(1 - H^{l}_{\mu}(X_2)\big) = H^{l}_{\mu}(X_1) + H^{l}_{\mu}(X_2) - H^{l}_{\mu}(X_1)\, H^{l}_{\mu}(X_2).$$
Hence, the result follows from 2. □
Definition 12.
Let X = ( X 1 , X 2 , X 3 , ) be an information source. Define the logical mutual information of X 1 ,…, X m by
$$I^{l}_{\mu}(X_1,\ldots,X_m) := \sum_{i=1}^{m} H^{l}_{\mu}(X_i) - H^{l}_{\mu}(X_1,\ldots,X_m).$$
Lemma 10.
Let X 1 and X 2 be two random variables on Γ. Then
$$0 \le H^{l}_{\mu}(X_2\,|\,X_1) \le H^{l}_{\mu}(X_2).$$
Proof. 
It follows from Theorem 8 that
$$H^{l}_{\mu}(X_1,X_2) = H^{l}_{\mu}(X_1) + H^{l}_{\mu}(X_2\,|\,X_1)$$
and from Theorem 3 that $H^{l}_{\mu}(X_1,X_2) \le H^{l}_{\mu}(X_1) + H^{l}_{\mu}(X_2)$. Hence,
$$H^{l}_{\mu}(X_1) + H^{l}_{\mu}(X_2\,|\,X_1) \le H^{l}_{\mu}(X_1) + H^{l}_{\mu}(X_2).$$
This means that $H^{l}_{\mu}(X_2\,|\,X_1) \le H^{l}_{\mu}(X_2)$. Moreover, Theorem 3 also gives $H^{l}_{\mu}(X_1,X_2) \ge H^{l}_{\mu}(X_1)$, so $H^{l}_{\mu}(X_2\,|\,X_1) = H^{l}_{\mu}(X_1,X_2) - H^{l}_{\mu}(X_1) \ge 0$. □
Theorem 14.
Let X 1 and X 2 be two random variables on Γ. Then the following holds:
$$0 \le I^{l}_{\mu}(X_1,X_2) \le \min\{H^{l}_{\mu}(X_1),\, H^{l}_{\mu}(X_2)\}.$$
Proof. 
From Lemma 9, it follows that
$$I^{l}_{\mu}(X_1,X_2) = H^{l}_{\mu}(X_1) + H^{l}_{\mu}(X_2) - H^{l}_{\mu}(X_1,X_2).$$
Furthermore, Theorem 3 yields $H^{l}_{\mu}(X_2) \le H^{l}_{\mu}(X_1,X_2)$. Hence,
$$I^{l}_{\mu}(X_1,X_2) = H^{l}_{\mu}(X_1) + H^{l}_{\mu}(X_2) - H^{l}_{\mu}(X_1,X_2) \le H^{l}_{\mu}(X_1) + H^{l}_{\mu}(X_1,X_2) - H^{l}_{\mu}(X_1,X_2) = H^{l}_{\mu}(X_1).$$
Similarly, $I^{l}_{\mu}(X_1,X_2) \le H^{l}_{\mu}(X_2)$; therefore,
$$I^{l}_{\mu}(X_1,X_2) \le \min\{H^{l}_{\mu}(X_1),\, H^{l}_{\mu}(X_2)\}.$$
On the other hand, the right-hand inequality of Theorem 3 yields
$$H^{l}_{\mu}(X_1) + H^{l}_{\mu}(X_2) - H^{l}_{\mu}(X_1,X_2) \ge 0.$$
Therefore, $I^{l}_{\mu}(X_1,X_2) \ge 0$ and, thus,
$$0 \le I^{l}_{\mu}(X_1,X_2) \le \min\{H^{l}_{\mu}(X_1),\, H^{l}_{\mu}(X_2)\}. \qquad\square$$
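The logical mutual information of Definition 11 and the bounds of Theorem 14 (as well as Lemma 9, part 5) can be verified numerically; both joint tables below are illustrative assumptions.

```python
import numpy as np

def H_l(q):
    return 1.0 - np.sum(np.asarray(q, dtype=float) ** 2)

rng = np.random.default_rng(2)
P = rng.random((4, 6))
P /= P.sum()                              # an arbitrary joint distribution p(x1, x2)

H1, H2, H12 = H_l(P.sum(axis=1)), H_l(P.sum(axis=0)), H_l(P)
I = H1 + H2 - H12                         # I^l(X1, X2) by Lemma 9, part 2
print(0.0 <= I <= min(H1, H2))            # True (Theorem 14)

# For independent variables, I^l(X1, X2) = H^l(X1) * H^l(X2)  (Lemma 9, part 5).
p1, p2 = np.array([0.6, 0.4]), np.array([0.2, 0.5, 0.3])
Q = np.outer(p1, p2)
print(np.isclose(H_l(p1) + H_l(p2) - H_l(Q), H_l(p1) * H_l(p2)))   # True
```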

3. Logical Entropy of Maps

Definition 13.
Let $f:\Gamma\to\Gamma$ be a measurable function and $\alpha = \{\alpha_1,\ldots,\alpha_n\}$ be a partition of $\Gamma$. The logical metric entropy of order $m$ of $f$ with respect to the partition $\alpha$ is defined by
$$h^{l,m}_{\mu}(f,\alpha) = 1 - \sum_{1\le x_0,\ldots,x_m\le n}\Big(\mu\big(\alpha_{x_0}\cap f^{-1}(\alpha_{x_1})\cap\cdots\cap f^{-m}(\alpha_{x_m})\big)\Big)^2,$$
and the logical metric entropy of $f$ with respect to the partition $\alpha$ is defined by
$$h^{l}_{\mu}(f,\alpha) = \lim_{m\to\infty} h^{l,m}_{\mu}(f,\alpha).$$
The limits in (7) and (8) exist (see Theorem 15). The logical metric entropy of $f$ is defined by $h^{l}_{\mu}(f) = \sup_{\alpha} h^{l}_{\mu}(f,\alpha)$.
Remark 2.
$0 \le h^{l}_{\mu}(f) \le 1$.
Let $I$ be an interval, $h : I\to I$ be a function and $x\in I$. For the finite orbit $\{h^{n}(x) : 0\le n\le L-1\}$, we say that $x$ is of type (ordinal $L$-pattern) $\pi = \pi(x) = (\pi_0,\ldots,\pi_{L-1})$ if
$$h^{\pi_0}(x) < h^{\pi_1}(x) < \cdots < h^{\pi_{L-1}}(x).$$
We denote by $P_\pi$ the set of $x\in I$ that are of type $\pi$.
Definition 14.
The logical metric permutation entropy of order $m$ of $f$ is defined by
$$H^{l}_{\mu,m}(f) := 1 - \sum_{\pi\in S_m}\big(\mu(P_\pi)\big)^2,$$
and the logical metric permutation entropy of $f$ is defined by
$$h^{l*}_{\mu}(f) := \lim_{m\to\infty} H^{l}_{\mu,m}(f) = 1 - \lim_{m\to\infty}\sum_{\pi\in S_m}\big(\mu(P_\pi)\big)^2.$$
Theorem 15.
Let $A = \{0,1,\ldots,n-1\}$ and define $X_m : [0,1]\to A$ by
$$X_m(x) = i \iff f^{m}(x)\in\alpha_i.$$
Then $h^{l}_{\mu}(f,\alpha) = h^{l}_{\mu}(X)$, where $X$ is the stationary process $(X_0, X_1,\ldots)$.
Proof. 
Since
$$H^{l}_{\mu}(X_m) = 1 - \sum_{i=0}^{n-1}\Big(\mu\big(\{x : f^{m}(x)\in\alpha_i\}\big)\Big)^2 = 1 - \sum_{i=0}^{n-1}\big(\mu(f^{-m}\alpha_i)\big)^2,$$
we have
$$p(x_0,\ldots,x_m) = \mu\big(\{x : X_0(x)=x_0,\ldots,X_m(x)=x_m\}\big) = \mu\big(\{x : x\in\alpha_{x_0},\ldots,f^{m}(x)\in\alpha_{x_m}\}\big) = \mu\big(\alpha_{x_0}\cap f^{-1}(\alpha_{x_1})\cap\cdots\cap f^{-m}(\alpha_{x_m})\big).$$
Hence,
$$H^{l}_{\mu}(X_0^{m}) = 1 - \sum_{x_0^{m}}\Big(\mu\big(\alpha_{x_0}\cap f^{-1}(\alpha_{x_1})\cap\cdots\cap f^{-m}(\alpha_{x_m})\big)\Big)^2,$$
and so (9) implies that $h^{l}_{\mu}(f,\alpha) = h^{l}_{\mu}(X)$. □

4. Examples and Applications in Logistic and Tent Maps

Example 1.
Let $g(x) = 4x(1-x) : [0,1]\to[0,1]$ be the logistic map (see Figure 1 and Figure 2 and Table 1). Then
$$P_{(0,1)} = \Big(0,\tfrac{3}{4}\Big),\quad P_{(1,0)} = \Big(\tfrac{3}{4},1\Big),\quad P_{(0,1,2)} = \Big(0,\tfrac{1}{4}\Big),\quad P_{(0,2,1)} = \Big(\tfrac{1}{4},\tfrac{5-\sqrt{5}}{8}\Big),$$
$$P_{(2,0,1)} = \Big(\tfrac{5-\sqrt{5}}{8},\tfrac{3}{4}\Big),\quad P_{(1,0,2)} = \Big(\tfrac{3}{4},\tfrac{5+\sqrt{5}}{8}\Big),\quad P_{(1,2,0)} = \Big(\tfrac{5+\sqrt{5}}{8},1\Big),\quad P_{(2,1,0)} = \emptyset.$$
Therefore,
$$\sum_{\pi\in S_2}\big(\mu(P_\pi)\big)^2 = \Big(\tfrac{3}{4}\Big)^2 + \Big(\tfrac{1}{4}\Big)^2 = \tfrac{5}{8},\qquad H^{l}_{\mu,2}(g) = 1 - \tfrac{5}{8} = 0.375,$$
$$\sum_{\pi\in S_3}\big(\mu(P_\pi)\big)^2 = \Big(\tfrac{1}{4}\Big)^2 + \Big(\tfrac{3-\sqrt{5}}{8}\Big)^2 + \Big(\tfrac{1+\sqrt{5}}{8}\Big)^2 + \Big(\tfrac{\sqrt{5}-1}{8}\Big)^2 + \Big(\tfrac{3-\sqrt{5}}{8}\Big)^2 = \tfrac{22-6\sqrt{5}}{32},$$
and
$$H^{l}_{\mu,3}(g) = 1 - \tfrac{22-6\sqrt{5}}{32} = \tfrac{10+6\sqrt{5}}{32} \approx 0.732.$$
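The pattern probabilities above can be reproduced numerically by counting ordinal m-patterns on a fine uniform grid, assuming (as the interval lengths in this example indicate) that μ is the Lebesgue measure on [0,1]; this sketch is only an illustration, not the method of the paper.

```python
import numpy as np
from collections import Counter

def logistic(x):
    return 4.0 * x * (1.0 - x)

def perm_logical_entropy(f, m, N=500_000):
    """Estimate H^l_{mu,m}(f) = 1 - sum_pi mu(P_pi)^2 with mu = Lebesgue on [0, 1]."""
    x = (np.arange(N) + 0.5) / N                 # uniform grid approximating Lebesgue
    orbit = np.empty((m, N))
    orbit[0] = x
    for k in range(1, m):
        orbit[k] = f(orbit[k - 1])
    patterns = Counter(map(tuple, np.argsort(orbit, axis=0).T))   # ordinal m-patterns
    mu = np.array([c / N for c in patterns.values()])
    return 1.0 - np.sum(mu ** 2)

print(perm_logical_entropy(logistic, 2))   # about 0.375
print(perm_logical_entropy(logistic, 3))   # about 0.73
```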
Example 2.
Reference [1]. Let $\alpha = \{\alpha_0 = [0,\tfrac12],\ \alpha_1 = (\tfrac12,1]\}$. We consider the tent map (see Figure 3 and Figure 4 and Table 2) $\Lambda : [0,1]\to[0,1]$ given by
$$\Lambda(x) = \begin{cases} 2x & 0\le x\le \tfrac12, \\ 2-2x & \tfrac12\le x\le 1.\end{cases}$$
Define X m : [ 0 , 1 ] { 0 , 1 } via
$$X_m(x) = \begin{cases} 0 & \text{if } \Lambda^{m}(x)\in\alpha_0, \\ 1 & \text{if } \Lambda^{m}(x)\in\alpha_1,\end{cases}$$
for every m 0 . Let
$$\alpha_{00} = \Big[0,\tfrac14\Big] = \{x\in\alpha_0 : \Lambda(x)\in\alpha_0\},\qquad \alpha_{01} = \Big(\tfrac14,\tfrac12\Big] = \{x\in\alpha_0 : \Lambda(x)\in\alpha_1\},$$
$$\alpha_{10} = \Big[\tfrac34,1\Big] = \{x\in\alpha_1 : \Lambda(x)\in\alpha_0\},\qquad \alpha_{11} = \Big(\tfrac12,\tfrac34\Big) = \{x\in\alpha_1 : \Lambda(x)\in\alpha_1\}.$$
Given $\alpha_{i_1\ldots i_m}$, where $m\in\mathbb{N}$, set
$$\alpha_{i_1\ldots i_m 0} = \alpha_{i_1\ldots i_m}\cap\{x\in[0,1] : \Lambda^{m}(x)\in\alpha_0\},$$
$$\alpha_{i_1\ldots i_m 1} = \alpha_{i_1\ldots i_m}\cap\{x\in[0,1] : \Lambda^{m}(x)\in\alpha_1\},$$
and
$$\alpha_{i_0 i_1\ldots i_m} = \bigcap_{k=0}^{m}\Lambda^{-k}\alpha_{i_k}.$$
Therefore,
$$\alpha_{000} = \Big[0,\tfrac18\Big],\ \alpha_{001} = \Big(\tfrac18,\tfrac14\Big],\ \alpha_{010} = \Big[\tfrac38,\tfrac12\Big],\ \alpha_{011} = \Big(\tfrac14,\tfrac38\Big),\ \alpha_{100} = \Big[\tfrac78,1\Big],\ \alpha_{101} = \Big[\tfrac34,\tfrac78\Big),\ \alpha_{110} = \Big(\tfrac12,\tfrac58\Big],\ \alpha_{111} = \Big(\tfrac58,\tfrac34\Big),$$
$$\alpha_{0000} = \Big[0,\tfrac1{16}\Big],\ \alpha_{0001} = \Big(\tfrac1{16},\tfrac18\Big],\ \alpha_{0010} = \Big[\tfrac3{16},\tfrac14\Big],\ \alpha_{0011} = \Big(\tfrac18,\tfrac3{16}\Big),\ \alpha_{0100} = \Big[\tfrac7{16},\tfrac12\Big],\ \alpha_{0101} = \Big[\tfrac38,\tfrac7{16}\Big),\ \alpha_{0110} = \Big(\tfrac14,\tfrac5{16}\Big],\ \alpha_{0111} = \Big(\tfrac5{16},\tfrac38\Big),$$
$$\alpha_{1000} = \Big[\tfrac{15}{16},1\Big],\ \alpha_{1001} = \Big[\tfrac78,\tfrac{15}{16}\Big),\ \alpha_{1010} = \Big[\tfrac34,\tfrac{13}{16}\Big],\ \alpha_{1011} = \Big(\tfrac{13}{16},\tfrac78\Big),\ \alpha_{1100} = \Big(\tfrac12,\tfrac9{16}\Big],\ \alpha_{1101} = \Big(\tfrac9{16},\tfrac58\Big],\ \alpha_{1110} = \Big[\tfrac{11}{16},\tfrac34\Big),\ \alpha_{1111} = \Big(\tfrac58,\tfrac{11}{16}\Big).$$
The sets $\alpha_{i_0\ldots i_{m-1}}$ are in one-to-one correspondence with the binary sequences of 0s and 1s of length $m$.
Hence, $\mu(\alpha_{i_0\ldots i_{m-1}}) = \frac{1}{2^m}$ and, thus,
$$H^{l}_{\mu}(X_0^{m-1}) = 1 - \sum_{i_0\ldots i_{m-1}}\big(\mu(\alpha_{i_0\ldots i_{m-1}})\big)^2 = 1 - \sum_{i_0^{m-1}}\Big(\frac{1}{2^m}\Big)^2 = 1 - 2^m\times\frac{1}{2^{2m}} = \frac{2^m - 1}{2^m}.$$
So, $h^{l}_{\mu}(X) = 1$, $h^{l}_{\mu}(\Lambda,\alpha) = 1$ and $h^{l}_{\mu}(\Lambda) = 1$. Furthermore, if
$$X = (X_1, X_1, X_1,\ldots),$$
then $H^{l}_{\mu}(X_1,\ldots,X_1) = H^{l}_{\mu}(X_1)$ and $h^{l}_{\mu}(X) = \frac12$.
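The cylinder-set computation of this example can likewise be reproduced numerically, again assuming that μ is the Lebesgue measure on [0,1]; the grid-based estimate below is a sketch of that check.

```python
import numpy as np
from collections import Counter

def tent(x):
    return np.where(x <= 0.5, 2.0 * x, 2.0 - 2.0 * x)

def cylinder_logical_entropy(f, m, N=500_000):
    """Estimate H^l_mu(X_0^{m-1}) = 1 - sum over itineraries of mu(cylinder)^2."""
    x = (np.arange(N) + 0.5) / N
    symbols = np.empty((m, N), dtype=int)
    for k in range(m):
        symbols[k] = (x > 0.5).astype(int)       # 0 on alpha_0 = [0, 1/2], 1 on alpha_1
        x = f(x)
    counts = Counter(map(tuple, symbols.T))      # itineraries (i_0, ..., i_{m-1})
    mu = np.array([c / N for c in counts.values()])
    return 1.0 - np.sum(mu ** 2)

for m in (1, 2, 3, 4):
    print(m, cylinder_logical_entropy(tent, m))  # approaches (2^m - 1) / 2^m
```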
Example 3.
Reference [1]. Consider the symmetric tent map of Example 2. We obtain (see Figure 5 and Figure 6 and Table 3)
$$P_{(0,1)} = \Big(0,\tfrac23\Big),\ P_{(1,0)} = \Big(\tfrac23,1\Big),\ P_{(0,1,2)} = \Big(0,\tfrac13\Big),\ P_{(0,2,1)} = \Big(\tfrac13,\tfrac25\Big),\ P_{(2,0,1)} = \Big(\tfrac25,\tfrac23\Big),\ P_{(1,0,2)} = \Big(\tfrac23,\tfrac45\Big),\ P_{(1,2,0)} = \Big(\tfrac45,1\Big),\ P_{(2,1,0)} = \emptyset,$$
$$P_{(0,1,2,3)} = \Big(0,\tfrac16\Big),\ P_{(0,1,3,2)} = \Big(\tfrac16,\tfrac15\Big),\ P_{(0,3,1,2)} = \Big(\tfrac15,\tfrac29\Big)\cup\Big(\tfrac27,\tfrac13\Big),\ P_{(3,0,1,2)} = \Big(\tfrac29,\tfrac27\Big),\ P_{(0,2,1,3)} = \Big(\tfrac13,\tfrac25\Big),\ P_{(2,0,3,1)} = \Big(\tfrac25,\tfrac49\Big)\cup\Big(\tfrac47,\tfrac35\Big),$$
$$P_{(2,3,0,1)} = \Big(\tfrac49,\tfrac47\Big),\ P_{(2,0,1,3)} = \Big(\tfrac35,\tfrac23\Big),\ P_{(3,1,0,2)} = \Big(\tfrac23,\tfrac45\Big),\ P_{(1,3,2,0)} = \Big(\tfrac45,\tfrac56\Big),\ P_{(1,2,0,3)} = \Big(\tfrac67,\tfrac89\Big),\ P_{(1,2,3,0)} = \Big(\tfrac56,\tfrac67\Big)\cup\Big(\tfrac89,1\Big).$$
Therefore,
$$\sum_{\pi\in S_2}\big(\mu(P_\pi)\big)^2 = \Big(\tfrac23\Big)^2 + \Big(\tfrac13\Big)^2 = \tfrac59 \approx 0.556,\qquad H^{l}_{\mu,2}(\Lambda) = 1 - \tfrac59 = \tfrac49 \approx 0.444,$$
$$\sum_{\pi\in S_3}\big(\mu(P_\pi)\big)^2 = \Big(\tfrac13\Big)^2 + \Big(\tfrac1{15}\Big)^2 + \Big(\tfrac4{15}\Big)^2 + \Big(\tfrac2{15}\Big)^2 + \Big(\tfrac15\Big)^2 = \tfrac{11}{45} \approx 0.244,\qquad H^{l}_{\mu,3}(\Lambda) = 1 - \tfrac{11}{45} = \tfrac{34}{45} \approx 0.756,$$
$$\sum_{\pi\in S_4}\big(\mu(P_\pi)\big)^2 \approx 0.106,\qquad H^{l}_{\mu,4}(\Lambda) \approx 0.894.$$
Furthermore,
$$h^{l*}_{\mu}(\Lambda) = \lim_{m\to\infty} H^{l}_{\mu,m}(\Lambda) = 1 = h^{l}_{\mu}(\Lambda).$$
Example 4.
Let $I = [0,1]$ be endowed with the measure $\nu$,
$$\nu(A) = \chi_A\Big(\tfrac12\Big) = \begin{cases} 1 & \text{if } \tfrac12\in A, \\ 0 & \text{if } \tfrac12\notin A,\end{cases}$$
and let $f : [0,1]\to[0,1]$ be a measurable function. Then $h^{l}_{\nu}(f,\alpha) = 0$ for every partition $\alpha$. Hence, $h^{l}_{\nu}(f) = 0$.

5. Concluding Remarks

We introduced the concept of the logical entropy of random variables and found bounds for the logical entropy of a random variable. We also extended the Shannon and permutation entropies to information sources. Finally, we used these results to estimate the logical entropy of maps. In this article, we only introduced the concept of logical entropy for information sources. In future studies, researchers can develop methods to calculate or estimate the numerical value of this type of entropy. It is pertinent to mention that, in the future, Rényi's metric entropy and Rényi's permutation entropy can be generalized to information sources. Another important problem is to extend this idea to quantum logical entropy, which is a promising direction for investigating the existence of analogous results.

Author Contributions

Conceptualization, Y.S.; Formal analysis, Y.S.; Funding acquisition, P.X.; Investigation, Y.S.; Methodology, Y.S. and S.I.B.; Supervision, P.X. and S.I.B.; Validation, P.X.; Writing—original draft, Y.S.; Writing—review & editing, P.X. and S.I.B. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded in part by the National Natural Science Foundation of China (grant no. 62002079).

Institutional Review Board Statement

Not Applicable.

Informed Consent Statement

Not Applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

In this appendix, we prove the following results that we need in the paper.
Lemma A1.
Let $M = [\theta_{ij}]_{n\times n}$ be a matrix such that $0\le\theta_{ij}\le 1$ for every $1\le i,j\le n$ and $\sum_{i,j}\theta_{ij} = 1$. Then
$$\sum_{i=1}^{n}\Big(\sum_{j=1}^{n}\theta_{ij}\Big)^2 + \sum_{j=1}^{n}\Big(\sum_{i=1}^{n}\theta_{ij}\Big)^2 \le 1 + \sum_{i,j}\theta_{ij}^2.$$
Proof. 
Since
$$\sum_{i=1}^{n}\Big(\sum_{j=1}^{n}\theta_{ij}\Big)^2 + \sum_{j=1}^{n}\Big(\sum_{i=1}^{n}\theta_{ij}\Big)^2 = \sum_{i,j}\theta_{ij}^2 + \Big(\sum_{i,j}\theta_{ij}\Big)^2 - 2\sum_{i\neq k,\, j<l}\theta_{ij}\theta_{kl} = \sum_{i,j}\theta_{ij}^2 + 1 - 2\sum_{i\neq k,\, j<l}\theta_{ij}\theta_{kl} \le 1 + \sum_{i,j}\theta_{ij}^2,$$
the assertion is proved. □
Theorem A1.
Let $a_{ijk}$ be real numbers with $a_{ijk}\ge 0$ for every $1\le i\le n_1$, $1\le j\le n_2$ and $1\le k\le n_3$. Then
$$2\sum_{i,j}\Big(\sum_{k} a_{ijk}\Big)^2 \le \sum_{i,j,k} a_{ijk}^2 + \sum_{i}\Big(\sum_{j,k} a_{ijk}\Big)^2.$$
Proof. 
Since
$$2\sum_{i,j}\Big(\sum_{k} a_{ijk}\Big)^2 = 2\sum_{i,j}\Big(\sum_{k,r} a_{ijk}a_{ijr}\Big) = 2\sum_{i,j,k,r} a_{ijk}a_{ijr} \le \sum_{i,j,t,k,r} a_{ijk}a_{itr} = \sum_{i}\Big(\sum_{j,k} a_{ijk}\Big)^2 \le \sum_{i,j,k} a_{ijk}^2 + \sum_{i}\Big(\sum_{j,k} a_{ijk}\Big)^2,$$
which completes the proof of the theorem. □
Lemma A2.
For an information source $X = (X_1, X_2, X_3,\ldots)$,
$$\lim_{n\to\infty} H^{l}_{\mu}(X_n\,|\,X_{n-1},\ldots,X_1) = 0.$$
Proof. 
According to Theorem 9, the series $\sum_{n=1}^{\infty} H^{l}_{\mu}(X_n\,|\,X_{n-1},\ldots,X_1)$ converges (its sum equals $h^{l}_{\mu}(X)\le 1$) and, thus, $\lim_{n\to\infty} H^{l}_{\mu}(X_n\,|\,X_{n-1},\ldots,X_1) = 0$. □

References

  1. Amigó, J.M. Permutation Complexity in Dynamical Systems: Ordinal Patterns, Permutation Entropy, and All That; Springer: Berlin/Heidelberg, Germany, 2010.
  2. Mehrpooya, A.; Sayyari, Y.; Molaei, M.R. Algebraic and Shannon entropies of commutative hypergroups and their connection with information and permutation entropies and with calculation of entropy for chemical algebras. Soft Comput. 2019, 23, 13035–13053.
  3. Corda, C.; FatehiNia, M.; Molaei, M.R.; Sayyari, Y. Entropy of iterated function systems and their relations with black holes and Bohr-like black holes entropies. Entropy 2018, 20, 56.
  4. Walters, P. An Introduction to Ergodic Theory; Springer: New York, NY, USA, 1982.
  5. Ellerman, D. Logical Entropy: Introduction to Classical and Quantum Logical Information Theory. Entropy 2018, 20, 679.
  6. Natal, J.; Avila, I.; Tsukahara, V.B.; Pinheiro, M.; Maciel, C.D. Entropy: From Thermodynamics to Information Processing. Entropy 2021, 23, 1340.
  7. Markechová, D.; Riečan, B. Logical entropy of fuzzy dynamical systems. Entropy 2016, 18, 157.
  8. Ellerman, D. The logic of partitions: Introduction to the dual of the logic of subsets. Rev. Symb. Log. 2010, 3, 287–350.
  9. Sayyari, Y. An improvement of the upper bound on the entropy of information sources. J. Math. Ext. 2021, 15, 1–12.
  10. Butt, S.I.; Mehmood, N.; Pečarić, D.; Pečarić, J. New bounds for Shannon, relative and Mandelbrot entropies via Abel-Gontscharoff interpolating polynomial. Math. Inequal. Appl. 2019, 22, 1283–1301.
  11. Mehmood, N.; Butt, S.I.; Pečarić, D.; Pečarić, J. New bounds for Shannon, relative and Mandelbrot entropies via Hermite interpolating polynomial. Demonstr. Math. 2018, 51, 112–130.
  12. Amigó, J.M.; Kennel, M.B. Topological permutation entropy. Phys. D Nonlinear Phenom. 2007, 231, 137–142.
  13. Ellerman, D. An introduction to logical entropy and its relation to Shannon entropy. Int. J. Semant. Comput. 2013, 7, 121–145.
  14. Markechová, D.; Riečan, B. Logical Entropy and Logical Mutual Information of Experiments in the Intuitionistic Fuzzy Case. Entropy 2017, 19, 429.
  15. Ebrahimzadeh, A. Logical entropy of quantum dynamical systems. Open Phys. 2016, 14, 1–5.
  16. Markechová, D.; Ebrahimzadeh, A.; Giski, Z.E. Logical entropy of dynamical systems. Adv. Differ. Equ. 2018, 2018, 70.
  17. Tamir, B.; Cohen, E. Logical Entropy for Quantum States. arXiv 2014, arXiv:1412.0616v2.
  18. Ellerman, D. Information as Distinction: New Foundation for Information Theory. arXiv 2013, arXiv:1301.5607.
  19. Brukner, C.; Zeilinger, A. Conceptual Inadequacy of the Shannon Information in Quantum Measurements. Phys. Rev. A 2001, 63, 022113.
  20. Sayyari, Y. New entropy bounds via uniformly convex functions. Chaos Solitons Fractals 2020, 141, 110360.
  21. Simic, S. Jensen's inequality and new entropy bounds. Appl. Math. Lett. 2009, 22, 1262–1265.
Figure 1. $x$, $g(x)$, and $g^2(x)$.
Figure 2. $H^{l}_{\mu,m}(g)$ and $h_{\mu}(g,m)$.
Figure 3. $\Lambda$, $\Lambda^2$ and $\Lambda^3$.
Figure 4. $H^{l}_{\mu,m}(\Lambda)$ and $h_{\mu}(\Lambda,m)$.
Figure 5. $x$, $\Lambda(x)$, $\Lambda^2(x)$ and $\Lambda^3(x)$.
Figure 6. $H^{l*}_{\mu,m}(\Lambda)$ and $h^{*}_{\mu}(\Lambda,m)$.
Table 1. Logical metric permutation entropy and metric permutation entropy [1] for the logistic map up to order m = 3.

m: 1, 2, 3
$H^{l}_{\mu,m}(g)$: 0, 0.375, 0.732
$h_{\mu}(g,m)$: 0, 0.28, 0.39
Table 2. Logical metric entropy and metric entropy [1] for the tent map up to order m = 3.

m: 1, 2, 3, m
$H^{l}_{\mu,m}(\Lambda)$: 0, 0.75, 0.875, $1 - \frac{1}{2^m}$
$h_{\mu}(\Lambda,m)$: 0, ln 2, ln 2, ln 2

Table 3. Logical metric permutation entropy and metric permutation entropy [1] for the tent map up to order m = 4.

m: 1, 2, 3, 4
$H^{l}_{\mu,m}(\Lambda)$: 0, 0.444, 0.756, 0.894
$h_{\mu}(\Lambda,m)$: 0, 0.32, 0.41, 0.59
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
