Article

On the Exact Variance of Tsallis Entanglement Entropy in a Random Pure State

Department of Electrical and Computer Engineering, University of Michigan, Dearborn, MI 48128, USA
Entropy 2019, 21(5), 539; https://doi.org/10.3390/e21050539
Submission received: 26 April 2019 / Revised: 15 May 2019 / Accepted: 25 May 2019 / Published: 27 May 2019
(This article belongs to the Special Issue Entropy in Foundations of Quantum Physics)

Abstract
The Tsallis entropy is a useful one-parameter generalization to the standard von Neumann entropy in quantum information theory. In this work, we study the variance of the Tsallis entropy of bipartite quantum systems in a random pure state. The main result is an exact variance formula of the Tsallis entropy that involves finite sums of some terminating hypergeometric functions. In the special cases of quadratic entropy and small subsystem dimensions, the main result is further simplified to explicit variance expressions. As a byproduct, we find an independent proof of the recently proven variance formula of the von Neumann entropy based on the derived moment relation to the Tsallis entropy.

1. Introduction

Classical information theory is the theory behind the modern development of computing, communication, data compression, and other fields. Like its classical counterpart, quantum information theory aims at understanding the theoretical underpinnings of quantum science that will enable future quantum technologies. One of the most fundamental features of quantum science is the phenomenon of quantum entanglement. Quantum states that are highly entangled contain more information about the different parts of the composite system.
As a step to understand quantum entanglement, we choose to study the entanglement property of quantum bipartite systems. The quantum bipartite model, proposed in the seminal work of Page [1], is a standard model for describing the interaction of a physical object with its environment for various quantum systems. In particular, we wish to understand the degree of entanglement as measured by the entanglement entropies of such systems. The statistical behavior of entanglement entropies can be understood from their moments. In principle, the knowledge of all integer moments determines uniquely the distribution of the considered entropy as it is supported in a finite interval (cf. (5) below). This is also known as Hausdorff’s moment problem [2,3]. In practice, a finite number of moments can be utilized to construct approximations to the distribution of the entropy, where the higher moments describe the tail distribution that provides crucial information such as whether the mean entropy is a typical value [4]. Of particular importance is the second moment (variance) that governs the fluctuation of the entropy around the mean value. With the first two moments, one could already construct an upper bound to the probability of finding a state with entropy lower than the mean entropy by using the concentration of measure techniques [4].
The existing knowledge in the literature is mostly focused on the von Neumann entropy [1,4,5,6,7,8,9,10], where its first three exact moments are known. In this work, we consider the Tsallis entropy [11], which is a one-parameter generalization of the von Neumann entropy. The Tsallis entropy enjoys certain advantages in describing quantum entanglement. For example, it overcomes the inability of the von Neumann entropy to model systems with long-range interactions [12]. The Tsallis entropy also has the unique nonadditivity (also known as nonextensivity) property, whose physical relevance to quantum systems has been increasingly identified [13]. In the literature, the mean value of the Tsallis entropy was derived by Malacarne–Mendes–Lenzi [12]. The focus of this work is to study its variance.
The paper is organized as follows. In Section 2, we introduce the quantum bipartite model and the entanglement entropies. In Section 3, an exact variance formula of the Tsallis entropy in terms of finite sums of terminating hypergeometric functions is derived, which is the main result of this paper. As a byproduct, we provide in Appendix A another proof to the recently proven [4,10] Vivo–Pato–Oshanin’s conjecture [9] on the variance of the von Neumann entropy. In Section 4, the derived variance formula of the Tsallis entropy is further simplified to explicit expressions in the special cases of quadratic entropy and small subsystem dimensions. We summarize the main results and point out a possible approach to study the higher moments in Section 5.

2. Bipartite System and Entanglement Entropy

We consider a composite quantum system consisting of two subsystems $A$ and $B$ of Hilbert space dimensions $m$ and $n$, respectively. The Hilbert space $\mathcal{H}_{A+B}$ of the composite system is given by the tensor product of the Hilbert spaces of the subsystems, $\mathcal{H}_{A+B}=\mathcal{H}_{A}\otimes\mathcal{H}_{B}$. The random pure state (as opposed to the mixed state) of the composite system is written as a linear combination of the random coefficients $x_{i,j}$ and the complete bases $\{|i^{A}\rangle\}$ and $\{|j^{B}\rangle\}$ of $\mathcal{H}_{A}$ and $\mathcal{H}_{B}$, $|\psi\rangle=\sum_{i=1}^{m}\sum_{j=1}^{n}x_{i,j}\,|i^{A}\rangle\otimes|j^{B}\rangle$. The corresponding density matrix $\rho=|\psi\rangle\langle\psi|$ has the natural constraint $\operatorname{tr}(\rho)=1$. This implies that the $m\times n$ random coefficient matrix $X=(x_{i,j})$ satisfies:

$$\operatorname{tr}\!\left(XX^{\dagger}\right)=1. \qquad (1)$$
Without loss of generality, it is assumed that $m\le n$. The reduced density matrix $\rho_A$ of the smaller subsystem $A$ admits the Schmidt decomposition $\rho_A=\sum_{i=1}^{m}\lambda_i\,|\phi_i^{A}\rangle\langle\phi_i^{A}|$, where $\lambda_i$ is the $i$th largest eigenvalue of $XX^{\dagger}$. The conservation of probability (1) now implies the constraint $\sum_{i=1}^{m}\lambda_i=1$. The probability measure of the random coefficient matrix $X$ is the Haar measure, where the entries are uniformly distributed over all the possible values satisfying the constraint (1). The resulting eigenvalue density of $XX^{\dagger}$ is (see, e.g., [1]),

$$f(\boldsymbol{\lambda})=\frac{\Gamma(mn)}{c}\,\delta\!\left(1-\sum_{i=1}^{m}\lambda_i\right)\prod_{1\le i<j\le m}\left(\lambda_i-\lambda_j\right)^{2}\prod_{i=1}^{m}\lambda_i^{\,n-m}, \qquad (2)$$

where $\delta(\cdot)$ is the Dirac delta function and the constant:

$$c=\prod_{i=1}^{m}\Gamma(n-i+1)\,\Gamma(i). \qquad (3)$$
The random matrix ensemble (2) is also known as the (unitary) fixed-trace ensemble. The above-described quantum bipartite model is useful in modeling various quantum systems. For example, in [1], the subsystem A is a black hole, and the subsystem B is the associated radiation field. In another example [14], the subsystem A is a set of spins, and the subsystem B represents the environment of a heat bath.
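For intuition, the ensemble (2) is straightforward to sample numerically: draw a complex Ginibre matrix $Y$ and normalize by its trace, anticipating the Wishart relation used in Section 3. The following minimal Python sketch (added here for illustration; the helper name `sample_schmidt_m2` is ours, not from the paper) specializes to $m=2$, where the eigenvalues of the $2\times2$ Hermitian matrix $YY^{\dagger}$ follow from the quadratic formula:

```python
import math
import random

def sample_schmidt_m2(n, rng):
    """Sample the Schmidt coefficients (lambda_1, lambda_2) of a random pure
    state with subsystem dimensions m = 2 and n, by normalizing a complex
    Ginibre matrix Y, i.e., XX^dagger = YY^dagger / tr(YY^dagger)."""
    # 2 x n matrix of i.i.d. complex Gaussian entries
    Y = [[complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(n)]
         for _ in range(2)]
    # W = Y Y^dagger is 2 x 2 Hermitian; only three entries are needed
    w11 = sum(abs(Y[0][j]) ** 2 for j in range(n))
    w22 = sum(abs(Y[1][j]) ** 2 for j in range(n))
    w12 = sum(Y[0][j] * Y[1][j].conjugate() for j in range(n))
    tr, det = w11 + w22, w11 * w22 - abs(w12) ** 2
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    theta1, theta2 = (tr + disc) / 2, (tr - disc) / 2  # eigenvalues of W
    return theta1 / tr, theta2 / tr                    # lambda_i = theta_i / r

rng = random.Random(0)
l1, l2 = sample_schmidt_m2(3, rng)
assert abs(l1 + l2 - 1.0) < 1e-12   # unit-trace constraint (1)
```

The assertion at the end simply confirms the constraint $\sum_i\lambda_i=1$ that defines the fixed-trace ensemble.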
The degree of entanglement of quantum systems can be measured by the entanglement entropy, which is a function of the eigenvalues of $XX^{\dagger}$. The function should monotonically increase from the separable state ($\lambda_1=1$, $\lambda_2=\dots=\lambda_m=0$) to the maximally-entangled state ($\lambda_1=\lambda_2=\dots=\lambda_m=1/m$). The most well-known entanglement entropy is the von Neumann entropy:

$$S=-\sum_{i=1}^{m}\lambda_i\ln\lambda_i, \qquad (4)$$

which attains the separable state and the maximally-entangled state at $S=0$ and $S=\ln m$, respectively. A one-parameter generalization of the von Neumann entropy is the Tsallis entropy [11]:

$$T=\frac{1}{q-1}\left(1-\sum_{i=1}^{m}\lambda_i^{q}\right),\qquad q\in\mathbb{R}\setminus\{0\}, \qquad (5)$$

which, by l'Hôpital's rule, reduces to the von Neumann entropy (4) when the non-zero real parameter $q$ approaches one. The Tsallis entropy (5) attains the separable state and the maximally-entangled state at $T=0$ and $T=\left(m^{q-1}-1\right)/\!\left((q-1)\,m^{q-1}\right)$, respectively. In some aspects, the Tsallis entropy provides a better description of the entanglement. For example, it overcomes the inability of the von Neumann entropy to model systems with long-range interactions [12]. The Tsallis entropy also has a definite concavity for any $q$, i.e., being convex for $q<0$ and concave for $q>0$. We also point out that by studying the moments of the Tsallis entropy (5) first, one may recover the moments of the von Neumann entropy (4) in a relatively simpler manner, as opposed to directly working with the von Neumann entropy. The advantage of this indirect approach has been demonstrated very recently in the works [4,15]. In the same spirit, we will also provide in Appendix A another proof of the variance of the von Neumann entropy starting from the relation to the Tsallis entropy.
In the literature, the first moment of the von Neumann entropy E f S (the subscript f emphasizes that the expectation is taken over the fixed-trace ensemble (2)) was conjectured by Page [1]. Page’s conjecture was proven independently by Foong and Kanno [5], Sánchez-Ruiz [6], Sen [7], and Adachi–Toda–Kubotani [8]. Recently, an expression for the variance of the von Neumann entropy V f S was conjectured by Vivo–Pato–Oshanin (VPO) [9], which was subsequently proven by the author [10]. Bianchi and Donà [4] provided an independent proof to VPO’s conjecture very recently, where they also derived the third moment. For the Tsallis entropy, the first moment E f T was derived by Malacarne–Mendes–Lenzi [12]. The task of the present work is to study the variance of the Tsallis entropy V f T .

3. Exact Variance of the Tsallis Entropy

Similar to the case of the von Neumann entropy [1,10], the starting point of the calculation is to convert the moments defined over the fixed-trace ensemble (2) to the well-studied Laguerre ensemble, whose correlation functions are explicitly known. Before discussing the moments conversion approach, we first set up the necessary definitions relevant to the Laguerre ensemble. By construction (1), the random coefficient matrix $X$ is naturally related to a Wishart matrix $YY^{\dagger}$ as:

$$XX^{\dagger}=\frac{YY^{\dagger}}{\operatorname{tr}\!\left(YY^{\dagger}\right)}, \qquad (6)$$

where $Y$ is an $m\times n$ ($m\le n$) matrix of independently and identically distributed complex Gaussian entries (complex Ginibre matrix). The density of the eigenvalues $0<\theta_m<\dots<\theta_1<\infty$ of $YY^{\dagger}$ equals [16]:

$$g(\boldsymbol{\theta})=\frac{1}{c}\prod_{1\le i<j\le m}\left(\theta_i-\theta_j\right)^{2}\prod_{i=1}^{m}\theta_i^{\,n-m}\,\mathrm{e}^{-\theta_i}, \qquad (7)$$

where $c$ is the same as in (3), and the above ensemble is known as the Laguerre ensemble. The trace of the Wishart matrix:

$$r=\operatorname{tr}\!\left(YY^{\dagger}\right)=\sum_{i=1}^{m}\theta_i \qquad (8)$$

follows a gamma distribution with the density [9]:

$$h_{mn}(r)=\frac{1}{\Gamma(mn)}\,\mathrm{e}^{-r}r^{mn-1},\qquad r\in[0,\infty). \qquad (9)$$

The relation (6) induces the change of variables:

$$\lambda_i=\frac{\theta_i}{r},\qquad i=1,\dots,m, \qquad (10)$$

that leads to a well-known relation (see, e.g., [1]) among the densities (2), (7), and (9) as:

$$f(\boldsymbol{\lambda})\,h_{mn}(r)\,\mathrm{d}r\prod_{i=1}^{m}\mathrm{d}\lambda_i=g(\boldsymbol{\theta})\prod_{i=1}^{m}\mathrm{d}\theta_i. \qquad (11)$$
This implies that r is independent of each λ i , i = 1 , , m , since their densities factorize.
For the von Neumann entropy (4), the relation (11) has been exploited to convert the first two moments [1,10] from the fixed-trace ensemble (2) to the Laguerre ensemble (7). The moments conversion was an essential starting point in proving the conjectures of Page [1,6] and Vivo–Pato–Oshanin [10]. We now show that the moments conversion approach can also be applied to study the Tsallis entropy. We first define:

$$L=\sum_{i=1}^{m}\theta_i^{q} \qquad (12)$$

as the induced Tsallis entropy of the Laguerre ensemble (7). Here, for the convenience of the discussion, we have defined the induced entropy, which may not have the physical meaning of an entropy. Using the change of variables (10), the $k$th power of the Tsallis entropy (5) can be written as:

$$T^{k}=\frac{1}{(q-1)^{k}}\left(1-\frac{L}{r^{q}}\right)^{k}=\frac{1}{(q-1)^{k}}\sum_{i=0}^{k}(-1)^{i}\binom{k}{i}\frac{L^{i}}{r^{qi}}, \qquad (13)$$

and thus, we have:

$$\mathbb{E}_{f}\!\left[T^{k}\right]=\frac{1}{(q-1)^{k}}\sum_{i=0}^{k}(-1)^{i}\binom{k}{i}\,\mathbb{E}_{f}\!\left[\frac{L^{i}}{r^{qi}}\right]. \qquad (14)$$
The expectation on the right-hand side of (14) is computed as:

$$\mathbb{E}_{f}\!\left[\frac{L^{i}}{r^{qi}}\right]=\int_{\boldsymbol{\lambda}}\frac{L^{i}}{r^{qi}}\,f(\boldsymbol{\lambda})\prod_{i=1}^{m}\mathrm{d}\lambda_i \qquad (15)$$

$$=\int_{\boldsymbol{\lambda}}\frac{L^{i}}{r^{qi}}\,f(\boldsymbol{\lambda})\prod_{i=1}^{m}\mathrm{d}\lambda_i\int_{r}h_{mn+qi}(r)\,\mathrm{d}r \qquad (16)$$

$$=\frac{\Gamma(mn)}{\Gamma(mn+qi)}\int_{\boldsymbol{\lambda}}\int_{r}L^{i}\,f(\boldsymbol{\lambda})\,h_{mn}(r)\,\mathrm{d}r\prod_{i=1}^{m}\mathrm{d}\lambda_i \qquad (17)$$

$$=\frac{\Gamma(mn)}{\Gamma(mn+qi)}\,\mathbb{E}_{g}\!\left[L^{i}\right], \qquad (18)$$

where the multiplication by an appropriate constant $1=\int_{r}h_{mn+qi}(r)\,\mathrm{d}r$ in (16), along with the fact that $r^{-qi}\,h_{mn+qi}(r)=\Gamma(mn)\,h_{mn}(r)/\Gamma(mn+qi)$, leads to (17), and the last equality (18) is established by the change of measures (11). Inserting (18) into (14), the $k$th moment of the Tsallis entropy (5) is written as a sum involving the first $k$ moments of the induced Tsallis entropy (12) as:

$$\mathbb{E}_{f}\!\left[T^{k}\right]=\frac{\Gamma(mn)}{(q-1)^{k}}\sum_{i=0}^{k}\binom{k}{i}\frac{(-1)^{i}}{\Gamma(mn+qi)}\,\mathbb{E}_{g}\!\left[L^{i}\right]. \qquad (19)$$
With the above relation (19), the computation of moments over the less tractable correlation functions of the fixed-trace ensemble (2) is now converted to one over the Laguerre ensemble (7), which will be calculated explicitly. In particular, computing the variance $\mathbb{V}_{f}[T]=\mathbb{E}_{f}\!\left[T^{2}\right]-\mathbb{E}_{f}^{2}[T]$ requires the moments relation (19) for $k=1$,

$$\mathbb{E}_{f}[T]=\frac{1}{q-1}\left(1-\frac{\Gamma(mn)}{\Gamma(mn+q)}\,\mathbb{E}_{g}[L]\right) \qquad (20)$$

and $k=2$,

$$\mathbb{E}_{f}\!\left[T^{2}\right]=\frac{1}{(q-1)^{2}}\left(1-\frac{2\,\Gamma(mn)}{\Gamma(mn+q)}\,\mathbb{E}_{g}[L]+\frac{\Gamma(mn)}{\Gamma(mn+2q)}\,\mathbb{E}_{g}\!\left[L^{2}\right]\right), \qquad (21)$$

where the first moment relation (20) has also appeared in [12]. It is seen from (21) that the essential task now is to compute $\mathbb{E}_{g}[L]$ and $\mathbb{E}_{g}\!\left[L^{2}\right]$. Before proceeding to the calculation, we point out that in the limit $q\to1$, the derived second moment relation (21) leads to a new proof of the recently proven variance formula of the von Neumann entropy [10], with details provided in Appendix A.
The computation of $\mathbb{E}_{g}[L]$ and $\mathbb{E}_{g}\!\left[L^{2}\right]$ involves the densities of one and two arbitrary eigenvalues, denoted respectively by $g_1(x_1)$ and $g_2(x_1,x_2)$, of the Laguerre ensemble as:

$$\mathbb{E}_{g}[L]=m\int_{0}^{\infty}x_1^{q}\,g_1(x_1)\,\mathrm{d}x_1, \qquad (22)$$

$$\mathbb{E}_{g}\!\left[L^{2}\right]=m\int_{0}^{\infty}x_1^{2q}\,g_1(x_1)\,\mathrm{d}x_1+m(m-1)\int_{0}^{\infty}\!\!\int_{0}^{\infty}x_1^{q}\,x_2^{q}\,g_2(x_1,x_2)\,\mathrm{d}x_1\,\mathrm{d}x_2. \qquad (23)$$
In general, the joint density of $N$ arbitrary eigenvalues $g_N(x_1,\dots,x_N)$ is related to the $N$-point correlation function:

$$X_N\!\left(x_1,\dots,x_N\right)=\det\!\left(K(x_i,x_j)\right)_{i,j=1}^{N} \qquad (24)$$

as [16] $g_N(x_1,\dots,x_N)=X_N\!\left(x_1,\dots,x_N\right)(m-N)!/m!$, where $\det(\cdot)$ is the matrix determinant and the symmetric function $K(x_i,x_j)$ is the correlation kernel. In particular, we have:

$$g_1(x_1)=\frac{1}{m}\,K(x_1,x_1), \qquad (25)$$

$$g_2(x_1,x_2)=\frac{1}{m(m-1)}\left(K(x_1,x_1)\,K(x_2,x_2)-K^{2}(x_1,x_2)\right), \qquad (26)$$
and the correlation kernel $K(x_i,x_j)$ of the Laguerre ensemble can be explicitly written as [16]:

$$K(x_i,x_j)=\mathrm{e}^{-\frac{x_i+x_j}{2}}\left(x_ix_j\right)^{\frac{n-m}{2}}\sum_{k=0}^{m-1}\frac{C_k(x_i)\,C_k(x_j)}{k!\,(n-m+k)!}, \qquad (27)$$

where:

$$C_k(x)=(-1)^{k}\,k!\,L_k^{(n-m)}(x) \qquad (28)$$

with:

$$L_k^{(n-m)}(x)=\sum_{i=0}^{k}(-1)^{i}\binom{n-m+k}{k-i}\frac{x^{i}}{i!} \qquad (29)$$

the (generalized) Laguerre polynomial of degree $k$. The Laguerre polynomials satisfy the orthogonality relation [16]:

$$\int_{0}^{\infty}x^{n-m}\,\mathrm{e}^{-x}\,L_k^{(n-m)}(x)\,L_l^{(n-m)}(x)\,\mathrm{d}x=\frac{(n-m+k)!}{k!}\,\delta_{kl}, \qquad (30)$$

where $\delta_{kl}$ is the Kronecker delta function. It is known that the one-point correlation function admits a more convenient representation as [6,16]:

$$X_1(x)=K(x,x)=\frac{m!}{(n-1)!}\,x^{n-m}\,\mathrm{e}^{-x}\left(\left(L_{m-1}^{(n-m+1)}(x)\right)^{2}-L_{m-2}^{(n-m+1)}(x)\,L_{m}^{(n-m+1)}(x)\right). \qquad (31)$$

We also need the following integral identity, due to Schrödinger [17], that generalizes the integral (30) to:

$$A_{s,t}^{(\alpha,\beta)}(q)=\int_{0}^{\infty}x^{q}\,\mathrm{e}^{-x}\,L_s^{(\alpha)}(x)\,L_t^{(\beta)}(x)\,\mathrm{d}x=(-1)^{s+t}\sum_{k=0}^{\min(s,t)}\binom{q-\alpha}{s-k}\binom{q-\beta}{t-k}\frac{\Gamma(k+q+1)}{k!},\qquad q>-1. \qquad (32)$$
With the above preparation, we now proceed to the calculation of $\mathbb{E}_{g}[L]$ and $\mathbb{E}_{g}\!\left[L^{2}\right]$. Inserting (25) and (31) into (22) and defining further:

$$A_{s,t}=A_{s,t}^{(n-m+1,\,n-m+1)}(n-m+q), \qquad (33)$$

one obtains by using (32) that:

$$\mathbb{E}_{g}[L]=\frac{m!}{(n-1)!}\left(A_{m-1,m-1}-A_{m-2,m}\right)=\frac{m!}{(n-1)!}\left(\sum_{k=0}^{m-1}\binom{q-1}{m-k-1}^{2}\frac{\Gamma(n-m+q+k+1)}{k!}-\sum_{k=0}^{m-2}\binom{q-1}{m-k-2}\binom{q-1}{m-k}\frac{\Gamma(n-m+q+k+1)}{k!}\right), \qquad (34)$$
which is valid for $q>-1$. The first moment expression in the above form has been obtained in [12], and we continue to show that it can be compactly written as a terminating hypergeometric function of unit argument. Indeed, since:

$$\binom{q-1}{-1}\binom{q-1}{1}\frac{\Gamma(n+q)}{(m-1)!}=0,\qquad q>-1,\ q\neq0, \qquad (35)$$

we have:

$$\mathbb{E}_{g}[L]=\frac{m!}{(n-1)!}\sum_{k=0}^{m-1}\frac{\Gamma(n-m+q+k+1)}{k!}\left(\binom{q-1}{m-k-1}^{2}-\binom{q-1}{m-k-2}\binom{q-1}{m-k}\right)=\frac{m!\,\Gamma^{2}(q)}{(n-1)!}\sum_{k=0}^{m-1}\frac{\Gamma(n+q-k)}{(m-k-1)!}\left(\frac{1}{k!^{2}\,\Gamma^{2}(q-k)}-\frac{1}{(k-1)!\,(k+1)!\,\Gamma(q-k+1)\,\Gamma(q-k-1)}\right) \qquad (36)$$

$$=\frac{m!\,\Gamma(q+1)\,\Gamma(q)}{(n-1)!}\sum_{k=0}^{m-1}\frac{\Gamma(n+q-k)}{(m-k-1)!\,\Gamma(q-k+1)\,\Gamma(q-k)\,k!\,(k+1)!} \qquad (37)$$

$$=\frac{m!\,\Gamma(q+1)\,\Gamma(q)}{(n-1)!}\cdot\frac{\Gamma(n+q)}{(m-1)!\,\Gamma(q+1)\,\Gamma(q)}\sum_{k=0}^{m-1}\frac{(1-m)_k\,(-q)_k\,(1-q)_k}{(1-n-q)_k\,(2)_k}\frac{1}{k!} \qquad (38)$$

$$=m\,\frac{\Gamma(n+q)}{(n-1)!}\;{}_{3}F_{2}\!\left(\begin{matrix}1-m,\,-q,\,1-q\\1-n-q,\,2\end{matrix};1\right),\qquad q>-1,\ q\neq0, \qquad (39)$$
where the second equality follows from the change of variable $k\to m-1-k$, and (38) is obtained by repeated use of the identity:

$$\Gamma(m-k)=\frac{(-1)^{k}\,\Gamma(m)}{(1-m)_k} \qquad (40)$$

with $(a)_n=\Gamma(a+n)/\Gamma(a)$ being the Pochhammer symbol; and (39) is obtained by the series definition of the hypergeometric function:

$${}_{p}F_{q}\!\left(\begin{matrix}a_1,\dots,a_p\\b_1,\dots,b_q\end{matrix};z\right)=\sum_{k=0}^{\infty}\frac{(a_1)_k\cdots(a_p)_k}{(b_1)_k\cdots(b_q)_k}\frac{z^{k}}{k!}, \qquad (41)$$

which reduces to a finite sum if one of the parameters $a_i$ is a negative integer. Inserting (39) into (20), we arrive at a compact expression for the first moment of the Tsallis entropy as:

$$\mathbb{E}_{f}[T]=\frac{1}{q-1}\left(1-\frac{m\,(mn-1)!\,\Gamma(n+q)}{(n-1)!\,\Gamma(mn+q)}\;{}_{3}F_{2}\!\left(\begin{matrix}1-m,\,-q,\,1-q\\1-n-q,\,2\end{matrix};1\right)\right),\qquad q>-1,\ q\neq0. \qquad (42)$$
We now calculate $\mathbb{E}_{g}\!\left[L^{2}\right]$. Inserting (25) and (26) into (23), one has:

$$\mathbb{E}_{g}\!\left[L^{2}\right]=I_1-I_2+\frac{m^{2}\,\Gamma^{2}(n+q)}{\left((n-1)!\right)^{2}}\left({}_{3}F_{2}\!\left(\begin{matrix}1-m,\,-q,\,1-q\\1-n-q,\,2\end{matrix};1\right)\right)^{2}, \qquad (43)$$

where:

$$I_1=\int_{0}^{\infty}x_1^{2q}\,K(x_1,x_1)\,\mathrm{d}x_1, \qquad (44)$$

$$I_2=\int_{0}^{\infty}\!\!\int_{0}^{\infty}x_1^{q}\,x_2^{q}\,K^{2}(x_1,x_2)\,\mathrm{d}x_1\,\mathrm{d}x_2, \qquad (45)$$

and we have used the result (39) with the fact that:

$$\int_{0}^{\infty}x^{q}\,K(x,x)\,\mathrm{d}x=\mathbb{E}_{g}[L]. \qquad (46)$$

The integral $I_1$ can be read off from the steps that led to (39) by replacing $q$ with $2q$ as:

$$I_1=m\,\frac{\Gamma(n+2q)}{(n-1)!}\;{}_{3}F_{2}\!\left(\begin{matrix}1-m,\,-2q,\,1-2q\\1-n-2q,\,2\end{matrix};1\right). \qquad (47)$$
Inserting (27) into (45) and defining further (cf. (32)):

$$A_{s,t}=A_{s,t}^{(n-m,\,n-m)}(n-m+q), \qquad (48)$$

the integral $I_2$ is written as:

$$I_2=\sum_{k=0}^{m-1}\frac{k!^{2}\,A_{k,k}^{2}}{\left((n-m+k)!\right)^{2}}+2\sum_{j=1}^{m-1}\sum_{i=0}^{j-1}\frac{i!\,j!\,A_{i,j}^{2}}{(n-m+i)!\,(n-m+j)!}, \qquad (49)$$
where by using (32) and (40), we obtain:

$$A_{i,j}=\Gamma^{2}(q+1)\sum_{k=0}^{i}\frac{\Gamma(n-m+q+k+1)}{\Gamma(q-i+k+1)\,\Gamma(q-j+k+1)\,(i-k)!\,(j-k)!\,k!} \qquad (50)$$

$$=\frac{\Gamma(n-m+q+1)\,\Gamma^{2}(q+1)}{\Gamma(q-i+1)\,\Gamma(q-j+1)\,i!\,j!}\sum_{k=0}^{i}\frac{(-i)_k\,(-j)_k\,(n-m+q+1)_k}{(q-i+1)_k\,(q-j+1)_k}\frac{1}{k!} \qquad (51)$$

$$=\frac{\Gamma(n-m+q+1)\,\Gamma^{2}(q+1)}{\Gamma(q-i+1)\,\Gamma(q-j+1)\,i!\,j!}\;{}_{3}F_{2}\!\left(\begin{matrix}-i,\,-j,\,n-m+q+1\\q-i+1,\,q-j+1\end{matrix};1\right), \qquad (52)$$

and similarly:

$$A_{k,k}=\frac{\Gamma(n-m+q+1)\,\Gamma^{2}(q+1)}{\Gamma^{2}(q-k+1)\,k!^{2}}\;{}_{3}F_{2}\!\left(\begin{matrix}-k,\,-k,\,n-m+q+1\\q-k+1,\,q-k+1\end{matrix};1\right). \qquad (53)$$

Finally, by inserting (47), (49), (52), and (53) into (43), we arrive at:

$$\mathbb{E}_{g}\!\left[L^{2}\right]=\frac{m^{2}\,\Gamma^{2}(n+q)}{\left((n-1)!\right)^{2}}\left({}_{3}F_{2}\!\left(\begin{matrix}1-m,\,-q,\,1-q\\1-n-q,\,2\end{matrix};1\right)\right)^{2}+m\,\frac{\Gamma(n+2q)}{(n-1)!}\;{}_{3}F_{2}\!\left(\begin{matrix}1-m,\,-2q,\,1-2q\\1-n-2q,\,2\end{matrix};1\right)-\Gamma^{4}(q+1)\,\Gamma^{2}(n-m+q+1)\left(\sum_{i=0}^{m-1}L^{2}(i,i)+2\sum_{j=1}^{m-1}\sum_{i=0}^{j-1}L^{2}(i,j)\right),\qquad q>-1,\ q\neq0, \qquad (54)$$

where the symmetric function $L(i,j)=L(j,i)$ is:

$$L(i,j)=\frac{{}_{3}F_{2}\!\left(\begin{matrix}-i,\,-j,\,n-m+q+1\\q-i+1,\,q-j+1\end{matrix};1\right)}{\Gamma(q-i+1)\,\Gamma(q-j+1)\,\sqrt{i!\,j!\,(n-m+i)!\,(n-m+j)!}}. \qquad (55)$$
With the derived first two moments (39) and (54) and the relations (20) and (21), an exact variance formula of the Tsallis entropy is obtained.

4. Special Cases

Though the derived results (39) and (54) may not be further simplified for an arbitrary m, n, and q, we will show that explicit variance expressions can be obtained in some special cases of practical relevance.

4.1. Quadratic Entropy q = 2

In the special case $q=2$, the Tsallis entropy (5) reduces to the quadratic entropy:

$$T=1-\sum_{i=1}^{m}\lambda_i^{2}, \qquad (56)$$
which was first considered in physics by Fermi [12]. The quadratic entropy (56) is the only entropy among all possible q values that satisfies the information invariance and continuity criterion [18].
By the series representations (38) and (51), the first two moments in the case $q=2$ are directly computed as:

$$\mathbb{E}_{g}[L]=mn(m+n), \qquad (57)$$

$$\mathbb{E}_{g}\!\left[L^{2}\right]=mn\left(mn^{3}+2m^{2}n^{2}+4n^{2}+m^{3}n+10mn+4m^{2}+2\right). \qquad (58)$$

By (20) and (21), we immediately have:

$$\mathbb{E}_{f}[T]=\frac{mn-m-n+1}{mn+1}, \qquad (59)$$

$$\mathbb{E}_{f}\!\left[T^{2}\right]=\frac{(m-1)(n-1)\left(m^{2}n^{2}-mn^{2}-m^{2}n+5mn-4n-4m+8\right)}{(mn+1)(mn+2)(mn+3)}, \qquad (60)$$

which lead to the variance of the Tsallis entropy for $q=2$ as:

$$\mathbb{V}_{f}[T]=\frac{2\left(m^{2}-1\right)\left(n^{2}-1\right)}{(mn+1)^{2}(mn+2)(mn+3)}. \qquad (61)$$
Finally, we note that explicit variance expressions for other positive integer values of q can be similarly obtained.
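The internal consistency of (59)-(61) can be checked mechanically: the variance (61) must equal (60) minus the square of (59). A small exact-arithmetic check we add (function names are ours):

```python
from fractions import Fraction

def mean_T(m, n):
    """E_f[T] for q = 2, eq. (59)."""
    return Fraction(m * n - m - n + 1, m * n + 1)

def mean_T2(m, n):
    """E_f[T^2] for q = 2, eq. (60)."""
    num = (m - 1) * (n - 1) * (m * m * n * n - m * n * n - m * m * n
                               + 5 * m * n - 4 * n - 4 * m + 8)
    return Fraction(num, (m * n + 1) * (m * n + 2) * (m * n + 3))

def var_T(m, n):
    """V_f[T] for q = 2, eq. (61)."""
    return Fraction(2 * (m * m - 1) * (n * n - 1),
                    (m * n + 1) ** 2 * (m * n + 2) * (m * n + 3))

# (61) == (60) - (59)^2 for a range of dimensions m <= n
for m in range(2, 6):
    for n in range(m, 8):
        assert mean_T2(m, n) - mean_T(m, n) ** 2 == var_T(m, n)
```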

4.2. Subsystems of Dimensions m = 2 and m = 3

We now consider the cases when dimensions m of the smaller subsystems are small. This is a relevant scenario for subsystems consisting of, for example, only a few entangled particles [14]. For m = 2 with any n and q, the series representations (38) and (51) directly lead to the results:
$$\mathbb{E}_{g}[L]=\frac{\left(q^{2}+q+2n-2\right)\Gamma(q+n-1)}{(n-1)!}, \qquad (62)$$

$$\mathbb{E}_{g}\!\left[L^{2}\right]=\frac{2}{(n-1)!}\left(\frac{\Gamma(q+n-1)\,\Gamma(q+n)}{(n-2)!}+\left(2q^{2}+q+n-1\right)\Gamma(2q+n-1)\right). \qquad (63)$$

In the same manner, for $m=3$ with any $n$ and $q$, we obtain:

$$\mathbb{E}_{g}[L]=\frac{\left(6n\left(q^{2}+q-3\right)+(q-2)(q-1)(q+2)(q+3)+6n^{2}\right)\Gamma(q+n-2)}{2\,(n-1)!}, \qquad (64)$$

$$\mathbb{E}_{g}\!\left[L^{2}\right]=\frac{\left(6n\left(q^{2}+q-3\right)+q^{4}+4q^{3}-7q^{2}-10q+12+6n^{2}\right)\Gamma(q+n-2)\,\Gamma(q+n-1)}{(n-1)!\,(n-2)!}+\frac{\left(3n\left(4q^{2}+2q-3\right)+8q^{4}+8q^{3}-14q^{2}-8q+6+3n^{2}\right)\Gamma(2q+n-2)}{(n-1)!}. \qquad (65)$$
The corresponding variances are obtained by keeping in mind the relations (20) and (21). For $m\ge4$, explicit variance expressions can be similarly calculated. However, it does not seem promising to find an explicit variance formula valid for any $m$, $n$, and $q$.
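As a consistency check we add (not in the paper), the small-dimension expressions (62)-(65) can be evaluated at $q=2$ and compared with the general quadratic-entropy moments (57) and (58):

```python
from fractions import Fraction
from math import factorial

def g(x):
    """Gamma(x) at a positive integer x."""
    return factorial(x - 1)

def Eg_L_m2(n, q):        # eq. (62)
    return Fraction((q * q + q + 2 * n - 2) * g(q + n - 1), factorial(n - 1))

def Eg_L2_m2(n, q):       # eq. (63)
    return Fraction(2, factorial(n - 1)) * (
        Fraction(g(q + n - 1) * g(q + n), factorial(n - 2))
        + (2 * q * q + q + n - 1) * g(2 * q + n - 1))

def Eg_L_m3(n, q):        # eq. (64)
    return Fraction((6 * n * (q * q + q - 3)
                     + (q - 2) * (q - 1) * (q + 2) * (q + 3)
                     + 6 * n * n) * g(q + n - 2), 2 * factorial(n - 1))

def Eg_L2_m3(n, q):       # eq. (65)
    a = 6 * n * (q * q + q - 3) + q**4 + 4 * q**3 - 7 * q * q - 10 * q + 12 + 6 * n * n
    b = 3 * n * (4 * q * q + 2 * q - 3) + 8 * q**4 + 8 * q**3 - 14 * q * q - 8 * q + 6 + 3 * n * n
    return (Fraction(a * g(q + n - 2) * g(q + n - 1),
                     factorial(n - 1) * factorial(n - 2))
            + Fraction(b * g(2 * q + n - 2), factorial(n - 1)))

def Eg_L_q2(m, n):        # eq. (57): q = 2 for any m, n
    return m * n * (m + n)

def Eg_L2_q2(m, n):       # eq. (58): q = 2 for any m, n
    return m * n * (m * n**3 + 2 * m * m * n * n + 4 * n * n
                    + m**3 * n + 10 * m * n + 4 * m * m + 2)

for n in range(2, 8):
    assert Eg_L_m2(n, 2) == Eg_L_q2(2, n)
    assert Eg_L2_m2(n, 2) == Eg_L2_q2(2, n)
for n in range(3, 8):
    assert Eg_L_m3(n, 2) == Eg_L_q2(3, n)
    assert Eg_L2_m3(n, 2) == Eg_L2_q2(3, n)
```

The function names are ours; the exact rational arithmetic makes each comparison an identity check at $q=2$ rather than a numerical approximation.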

5. Summary and Perspectives on Higher Moments

We studied the exact variance of the Tsallis entropy, which is a one-parameter ($q$) generalization of the von Neumann entropy. The main result is an exact variance expression, cf. (54), valid for $q>-1$, $q\neq0$, as finite sums of terminating hypergeometric functions. For $q=1$, we find a short proof of the variance formula of the degenerate case of the von Neumann entropy in Appendix A. For other special cases of practical importance, $q=2$, $m=2$, and $m=3$, explicit expressions have been obtained in (61), (63), and (65), respectively.
We end this paper with some perspectives on the higher moments of the Tsallis entropy. In principle, the higher moments can be calculated by integrating over the correlation kernel (27) as demonstrated for the first two moments. In practice, the calculation becomes progressively complicated as the order of moments increases. Here, we outline an alternative path that may systematically lead to the moments of any order in a recursive manner.
We focus on the induced Tsallis entropy $L$ as defined in (12), since the moments conversion (19) is available. The starting point is the generating function of $L$:

$$\tau_m(t,q)=\mathbb{E}_{g}\!\left[\mathrm{e}^{tL}\right]=\frac{1}{c}\int_{0}^{\infty}\!\!\cdots\!\int_{0}^{\infty}\prod_{1\le i<j\le m}\left(\theta_i-\theta_j\right)^{2}\prod_{i=1}^{m}\theta_i^{\,n-m}\,\mathrm{e}^{-\theta_i+t\theta_i^{q}}\,\mathrm{d}\theta_i \qquad (66)$$

$$=\frac{1}{c}\det\!\left(\int_{0}^{\infty}x^{\,i+j+n-m-2}\,\mathrm{e}^{-x+tx^{q}}\,\mathrm{d}x\right)_{i,j=1}^{m}, \qquad (67)$$

which is a two-parameter ($t$ and $q$) deformation of the Laguerre ensemble (7). Compared to the weight function $w(x)=x^{n-m}\mathrm{e}^{-x}$ of the Laguerre ensemble, the deformation induces a new weight function:

$$w(x)=x^{n-m}\,\mathrm{e}^{-x+tx^{q}}, \qquad (68)$$

which generalizes the Toda deformation [19] $w(x)=x^{n-m}\mathrm{e}^{-x+tx}$ with the parameter $q$. The basic idea to produce the moments systematically is to find differential and difference equations satisfied by the generating function $\tau_m(t,q)$. The theory of integrable systems [16] may provide the possibility to obtain differential equations for the Hankel determinant (67) with respect to the continuous variables $t$ and $q$, as well as difference equations with respect to the discrete variable $m$. In particular, when $q$ is a positive integer, the deformation (68) is known as a multi-time Toda deformation [19], where much of the integrable structure is known.

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A. A New Proof to the Variance Formula of the von Neumann Entropy

Vivo, Pato, and Oshanin recently conjectured that the variance of the von Neumann entropy (4) in a random pure state (2) is [9]:

$$\mathbb{V}_{f}[S]=-\psi_1(mn+1)+\frac{m+n}{mn+1}\,\psi_1(n)-\frac{(m+1)(m+2n+1)}{4n^{2}(mn+1)}, \qquad (A1)$$

where:

$$\psi_1(x)=\frac{\mathrm{d}^{2}\ln\Gamma(x)}{\mathrm{d}x^{2}} \qquad (A2)$$
is the trigamma function. The conjecture was proven in [4,10], and here, we provide another proof starting from the relation (21),

$$\mathbb{E}_{f}\!\left[T^{2}\right]=\frac{1}{(q-1)^{2}}\left(1-\frac{2\,\Gamma(mn)}{\Gamma(mn+q)}\,\mathbb{E}_{g}[L]+\frac{\Gamma(mn)}{\Gamma(mn+2q)}\,\mathbb{E}_{g}\!\left[L^{2}\right]\right). \qquad (A3)$$

To resolve the indeterminacy in the limit $q\to1$, we apply l'Hôpital's rule twice on both sides of the above equation:

$$\mathbb{E}_{f}\!\left[S^{2}\right]=\lim_{q\to1}\mathbb{E}_{f}\!\left[T^{2}\right]=\frac{\Gamma(mn)}{2}\left(-2\left(\frac{\mathbb{E}_{g}[L]}{\Gamma(mn+q)}\right)''+\left(\frac{\mathbb{E}_{g}\!\left[L^{2}\right]}{\Gamma(mn+2q)}\right)''\right)\Bigg|_{q=1}, \qquad (A4)$$

where $f'=\mathrm{d}f/\mathrm{d}q$. Define an induced von Neumann entropy of the Laguerre ensemble (7):

$$R_a=\sum_{i=1}^{m}\theta_i\ln^{a}\theta_i \qquad (A5)$$
with the shorthand $R=R_1$; the right-hand side of (A4) can be evaluated by using the following facts:

$$\mathbb{E}_{g}[L]\big|_{q=1}=\mathbb{E}_{r}[r], \qquad (A6)$$

$$\mathbb{E}_{g}\!\left[L^{2}\right]\big|_{q=1}=\mathbb{E}_{r}\!\left[r^{2}\right], \qquad (A7)$$

$$\left(\mathbb{E}_{g}[L]\right)'\big|_{q=1}=\mathbb{E}_{g}[R], \qquad (A8)$$

$$\left(\mathbb{E}_{g}[L]\right)''\big|_{q=1}=\mathbb{E}_{g}[R_2], \qquad (A9)$$

$$\left(\mathbb{E}_{g}\!\left[L^{2}\right]\right)'\big|_{q=1}=2\,\mathbb{E}_{g}[rR], \qquad (A10)$$

$$\left(\mathbb{E}_{g}\!\left[L^{2}\right]\right)''\big|_{q=1}=2\,\mathbb{E}_{g}\!\left[R^{2}\right]+2\,\mathbb{E}_{g}[rR_2], \qquad (A11)$$

and the definitions of the digamma function $\psi_0(x)=\mathrm{d}\ln\Gamma(x)/\mathrm{d}x$ and the trigamma function (A2) that give:

$$\Gamma'(q)=\Gamma(q)\,\psi_0(q), \qquad (A12)$$

$$\Gamma''(q)=\Gamma(q)\left(\psi_1(q)+\psi_0^{2}(q)\right), \qquad (A13)$$

as:

$$\mathbb{E}_{f}\!\left[S^{2}\right]=\frac{1}{mn}\left(2\,\mathbb{E}_{g}[R]\,\psi_0(mn+1)+\mathbb{E}_{r}[r]\left(\psi_1(mn+1)-\psi_0^{2}(mn+1)\right)-\mathbb{E}_{g}[R_2]\right)+\frac{1}{mn(mn+1)}\left(\mathbb{E}_{g}\!\left[R^{2}\right]+\mathbb{E}_{g}[rR_2]-4\,\mathbb{E}_{g}[rR]\,\psi_0(mn+2)-2\,\mathbb{E}_{r}\!\left[r^{2}\right]\left(\psi_1(mn+2)-\psi_0^{2}(mn+2)\right)\right). \qquad (A14)$$
In (A14), the first two moments of $r$ are given by:

$$\mathbb{E}_{r}[r]=mn,\qquad \mathbb{E}_{r}\!\left[r^{2}\right]=mn(mn+1), \qquad (A15)$$

which are obtained from the $k$th moment expression (cf. (9)):

$$\mathbb{E}_{r}\!\left[r^{k}\right]=\frac{\Gamma(mn+k)}{\Gamma(mn)}. \qquad (A16)$$
The first two moments of the induced von Neumann entropy over the Laguerre ensemble, $\mathbb{E}_{g}[R]$ and $\mathbb{E}_{g}\!\left[R^{2}\right]$, in (A14) have been computed in [6,7] as:

$$\mathbb{E}_{g}[R]=mn\,\psi_0(n)+\frac{1}{2}m(m+1) \qquad (A17)$$

and in [10] as:

$$\mathbb{E}_{g}\!\left[R^{2}\right]=mn(m+n)\,\psi_1(n)+mn(mn+1)\,\psi_0^{2}(n)+m\left(m^{2}n+mn+m+2n+1\right)\psi_0(n)+\frac{1}{4}m(m+1)\left(m^{2}+m+2\right), \qquad (A18)$$
respectively. The remaining task is to calculate $\mathbb{E}_{g}[rR]$, $\mathbb{E}_{g}[R_2]$, and $\mathbb{E}_{g}[rR_2]$ in (A14). This relies on the repeated use of the change of variables (10) and measures (11), which exploit the independence between $r$ and $\boldsymbol{\lambda}$. Indeed, we have:

$$\mathbb{E}_{g}[rR]=\mathbb{E}_{g}\!\left[r\sum_{i=1}^{m}r\lambda_i\ln(r\lambda_i)\right] \qquad (A19)$$

$$=\mathbb{E}_{r}\!\left[r^{2}\ln r\right]-\mathbb{E}_{g}\!\left[r^{2}S\right] \qquad (A20)$$

$$=\frac{\Gamma(mn+2)}{\Gamma(mn)}\,\psi_0(mn+2)-\mathbb{E}_{r}\!\left[r^{2}\right]\mathbb{E}_{f}[S] \qquad (A21)$$

$$=mn(mn+1)\left(\psi_0(n)+\frac{1}{mn+1}+\frac{m+1}{2n}\right), \qquad (A22)$$

where (A21) is obtained by (11) and the identity:

$$\int_{0}^{\infty}\mathrm{e}^{-r}r^{a-1}\ln r\,\mathrm{d}r=\Gamma(a)\,\psi_0(a),\qquad \operatorname{Re}(a)>0, \qquad (A23)$$

and (A22) is obtained by (A16) and the mean formula of the von Neumann entropy [1,5,6,7,8]:

$$\mathbb{E}_{f}[S]=\psi_0(mn+1)-\psi_0(n)-\frac{m+1}{2n}. \qquad (A24)$$
Define a generalization of the von Neumann entropy (4) as:

$$S_a=-\sum_{i=1}^{m}\lambda_i\ln^{a}\lambda_i \qquad (A25)$$

with $S=S_1$; we similarly have:

$$\mathbb{E}_{g}[rR_2]=\mathbb{E}_{r}\!\left[r^{2}\ln^{2}r\right]-2\,\mathbb{E}_{g}\!\left[r^{2}\ln r\,S\right]-\mathbb{E}_{g}\!\left[r^{2}S_2\right] \qquad (A26)$$

$$=\mathbb{E}_{r}\!\left[r^{2}\ln^{2}r\right]-2\,\mathbb{E}_{r}\!\left[r^{2}\ln r\right]\mathbb{E}_{f}[S]-\mathbb{E}_{r}\!\left[r^{2}\right]\mathbb{E}_{f}[S_2], \qquad (A27)$$

where the term:

$$\mathbb{E}_{r}\!\left[r^{2}\ln^{2}r\right]=mn(mn+1)\left(\psi_1(mn+2)+\psi_0^{2}(mn+2)\right) \qquad (A28)$$

is obtained by the identity:

$$\int_{0}^{\infty}\mathrm{e}^{-r}r^{a-1}\ln^{2}r\,\mathrm{d}r=\Gamma(a)\left(\psi_1(a)+\psi_0^{2}(a)\right),\qquad \operatorname{Re}(a)>0, \qquad (A29)$$
and it remains to calculate the term $\mathbb{E}_{f}[S_2]$ in (A27),

$$\mathbb{E}_{f}[S_2]=\int_{\boldsymbol{\lambda}}\int_{r}\left(-\frac{R_2}{r}-2S\ln r+\ln^{2}r\right)f(\boldsymbol{\lambda})\,h_{mn+1}(r)\,\mathrm{d}r\prod_{i=1}^{m}\mathrm{d}\lambda_i \qquad (A30)$$

$$=-\frac{1}{mn}\,\mathbb{E}_{g}[R_2]-2\,\psi_0(mn+1)\,\mathbb{E}_{f}[S]+\psi_1(mn+1)+\psi_0^{2}(mn+1) \qquad (A31)$$

$$=-\frac{1}{mn}\,\mathbb{E}_{g}[R_2]+\psi_1(mn+1)-\psi_0^{2}(mn+1)+2\,\psi_0(mn+1)\left(\psi_0(n)+\frac{m+1}{2n}\right). \qquad (A32)$$

It is seen that the term involving $\mathbb{E}_{g}[R_2]$ in the above cancels the one in (A14). Finally, inserting (A15), (A17), (A18), (A22), (A27), and (A32) into (A14), and keeping in mind the mean formula (A24), we prove the variance formula (A1) after some necessary simplification by the identities:

$$\psi_0(l+n)=\psi_0(l)+\sum_{k=0}^{n-1}\frac{1}{l+k},\qquad \psi_1(l+n)=\psi_1(l)-\sum_{k=0}^{n-1}\frac{1}{(l+k)^{2}}. \qquad (A33)$$

References

  1. Page, D.N. Average entropy of a subsystem. Phys. Rev. Lett. 1993, 71, 1291–1294. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Hausdorff, F. Summationsmethoden und Momentfolgen. I. Math. Z. 1921, 9, 74–109. [Google Scholar] [CrossRef] [Green Version]
  3. Hausdorff, F. Summationsmethoden und Momentfolgen. II. Math. Z. 1921, 9, 280–299. [Google Scholar] [CrossRef] [Green Version]
  4. Bianchi, E.; Donà, P. Typical entropy of a subsystem: Page curve and its variance. arXiv 2019, arXiv:1904.08370. [Google Scholar]
  5. Foong, S.K.; Kanno, S. Proof of Page’s conjecture on the average entropy of a subsystem. Phys. Rev. Lett. 1994, 72, 1148–1151. [Google Scholar] [CrossRef] [PubMed]
  6. Sánchez-Ruiz, J. Simple proof of Page’s conjecture on the average entropy of a subsystem. Phys. Rev. E 1995, 52, 5653–5655. [Google Scholar] [CrossRef]
  7. Sen, S. Average entropy of a quantum subsystem. Phys. Rev. Lett. 1996, 77, 1–3. [Google Scholar] [CrossRef] [PubMed]
  8. Adachi, S.; Toda, M.; Kubotani, H. Random matrix theory of singular values of rectangular complex matrices I: Exact formula of one-body distribution function in fixed-trace ensemble. Ann. Phys. 2009, 324, 2278–2358. [Google Scholar] [CrossRef]
  9. Vivo, P.; Pato, M.P.; Oshanin, G. Random pure states: Quantifying bipartite entanglement beyond the linear statistics. Phys. Rev. E 2016, 93, 052106. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  10. Wei, L. Proof of Vivo-Pato-Oshanin’s conjecture on the fluctuation of von Neumann entropy. Phys. Rev. E 2017, 96, 022106. [Google Scholar] [CrossRef] [PubMed]
  11. Tsallis, C. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487. [CrossRef]
  12. Malacarne, L.C.; Mendes, R.S.; Lenzi, E.K. Average entropy of a subsystem from its average Tsallis entropy. Phys. Rev. E 2002, 65, 046131. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Gell-Mann, M.; Tsallis, C. Nonextensive Entropy: Interdisciplinary Applications; Oxford University Press: New York, NY, USA, 2004. [Google Scholar]
  14. Majumdar, S.N. Extreme eigenvalues of Wishart matrices: application to entangled bipartite system. In The Oxford Handbook of Random Matrix Theory; Akemann, G., Baik, J., Di Francesco, P., Eds.; Oxford University Press: Oxford, UK, 2007. [Google Scholar]
  15. Sarkar, A.; Kumar, S. Bures-Hall ensemble: Spectral densities and average entropies. arXiv 2019, arXiv:1901.09587. [Google Scholar]
  16. Forrester, P. Log-gases and Random Matrices; Princeton University Press: Princeton, NJ, USA, 2010. [Google Scholar]
  17. Schrödinger, E. Quantisierung als Eigenwertproblem. Ann. Phys. (Leipzig) 1926, 80, 437–490. [CrossRef]
  18. Brukner, Č.; Zeilinger, A. Information invariance and quantum probabilities. Found. Phys. 2009, 39, 677–689. [Google Scholar] [CrossRef]
  19. Ismail, M.E.H. Classical and Quantum Orthogonal Polynomials in One Variable; Cambridge University Press: Cambridge, UK, 2005. [Google Scholar]
