Article

A Note on Cumulant Technique in Random Matrix Theory

by Alexander Soshnikov *,† and Chutong Wu †
Department of Mathematics, University of California at Davis, Davis, CA 95616, USA
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Entropy 2023, 25(5), 725; https://doi.org/10.3390/e25050725
Submission received: 3 April 2023 / Revised: 20 April 2023 / Accepted: 21 April 2023 / Published: 27 April 2023
(This article belongs to the Special Issue Random Matrices: Theory and Applications)

Abstract: We discuss the cumulant approach to spectral properties of large random matrices. In particular, we study in detail the joint cumulants of high traces of large unitary random matrices and prove Gaussian fluctuation for pair-counting statistics with non-smooth test functions.

1. Introduction

Random Matrix Theory has its origins in the works of statisticians in the 1920s and nuclear physicists in the 1950s. In the pioneering papers [1,2,3], Eugene Wigner introduced an ensemble of random matrices that now bears his name and computed the limiting spectral distribution. The main ingredient of the proof was the method of moments, which allowed Wigner to study the asymptotics of the traces of powers of a random symmetric (Hermitian) matrix with independent identically distributed (i.i.d.) components. Since then, the method of moments has been successfully used to study spectral properties of large random matrices on many occasions. It works particularly well when a random matrix has many independent components. We refer the reader to [4,5,6,7,8,9,10,11,12,13] and references therein.
The purpose of this paper is to discuss several applications of the cumulant technique in Random Matrix Theory. The cumulant approach is especially useful if point correlation functions are given by the determinantal or Pfaffian formulas.
The paper is organized as follows. In the Methods section, we give a brief overview of the known results. Several novel results related to the joint cumulants of traces of high powers and pair counting statistics of eigenvalues of a large random unitary matrix are formulated and proved in the Results section. The Discussion section is devoted to brief comments on the future lines of research.
Throughout the paper, the notation $a_N = O(b_N)$ means that the ratio $a_N/b_N$ is bounded from above in absolute value. The notation $a_N = o(b_N)$ means that $a_N/b_N \to 0$ as $N \to \infty$. The notation $a_N = \Omega(b_N)$ means that the ratio $a_N/b_N$ is bounded from above in absolute value by a power of $\log N$. Finally, we sometimes use the notation $a \vee b$ for the maximum of $a$ and $b$.

2. Methods

2.1. Determinantal Point Process and Cluster Functions

The ideas of the cumulant approach in Random Matrix Theory go back at least to the 1995 paper [14] by Costin and Lebowitz, which studied a so-called sine random point process, namely, a determinantal random point process with the correlation kernel
$$K(x,y) = \frac{\sin(\pi(x-y))}{\pi(x-y)}.$$
In other words, the k-point correlation functions of the random point process are given by
$$\rho_k(x_1,\ldots,x_k) = \det\big[K(x_i,x_j)\big]_{1 \le i,j \le k}, \qquad k \ge 1.$$
We refer the reader to [15,16] for an introduction to determinantal random point processes. The sine random point process appears as a scaling limit of many ensembles of random matrices, including the Circular Unitary Ensemble defined below in (11).
The main result of [14] states that $\#[0,L]$, the number of particles in an interval $[0,L]$, has Gaussian fluctuation in the limit $L \to \infty$ with logarithmically growing variance. Namely,
$$\frac{\#[0,L] - L}{\sqrt{\log L}} \ \xrightarrow{\ D\ }\ N\!\left(0, \tfrac{1}{\pi^2}\right).$$
To study the limiting distribution of the counting random variable  # [ 0 , L ] ,  Costin and Lebowitz suggested using the so-called cluster (Ursell) k-point functions. For  k = 1 , 2 , 3 , the cluster functions are given by
$$r_1(x_1) = \rho_1(x_1),$$
$$r_2(x_1,x_2) = \rho_2(x_1,x_2) - \rho_1(x_1)\rho_1(x_2),$$
$$r_3(x_1,x_2,x_3) = \rho_3(x_1,x_2,x_3) - \rho_2(x_1,x_2)\rho_1(x_3) - \rho_2(x_1,x_3)\rho_1(x_2) - \rho_2(x_2,x_3)\rho_1(x_1) + 2\rho_1(x_1)\rho_1(x_2)\rho_1(x_3).$$
For arbitrary  k 1 , one has
$$r_k(x_1,\ldots,x_k) = \sum_{\pi} (-1)^{|\pi|-1}\, (|\pi|-1)! \prod_{j=1}^{|\pi|} \rho_{|B_j|}(x_i : i \in B_j),$$
where the sum at the r.h.s. of (4) is over all partitions  π  of the set  { 1 , 2 , , k }  into blocks  B 1 , , B m , | π | = m  stands for the number of blocks in a partition  π ,  and  | B i |  denotes the cardinality of a block  B i .
For determinantal random point processes, (2) and (4) imply
$$r_k(x_1,\ldots,x_k) = \frac{(-1)^{k-1}}{k} \sum_{\sigma \in S_k} K(x_{\sigma(1)},x_{\sigma(2)})\, K(x_{\sigma(2)},x_{\sigma(3)}) \cdots K(x_{\sigma(k)},x_{\sigma(1)}) = (-1)^{k-1}\Big( K(x_1,x_2)\, K(x_2,x_3) \cdots K(x_k,x_1) + \cdots \Big),$$
where the sum in the first line is over all permutations in the symmetric group $S_k$, and the sum in the second line is over all $(k-1)!$ cyclic permutations in $S_k$, the first of which is $1 \to 2 \to 3 \to \cdots \to k \to 1$.
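The cyclic-product representation (5) is easy to evaluate directly for small $k$. The following Python sketch (our own illustration, not code from the paper) computes $r_k$ for the sine kernel by summing the kernel products over the $(k-1)!$ cyclic orders; for $k = 2$ it reproduces $r_2(x,y) = -K(x,y)^2$.

import numpy as np
from itertools import permutations

def sine_kernel(x, y):
    d = np.pi * (x - y)
    return 1.0 if np.isclose(d, 0.0) else np.sin(d) / d

def cluster_function(K, xs):
    """r_k(x_1,...,x_k) = (-1)^(k-1) * sum over cyclic orders of
    K(x_1, x_i2) K(x_i2, x_i3) ... K(x_ik, x_1)."""
    k = len(xs)
    total = 0.0
    for perm in permutations(range(1, k)):      # cyclic orders starting at x_1
        order = (0,) + perm
        prod = 1.0
        for a, b in zip(order, order[1:] + (order[0],)):
            prod *= K(xs[a], xs[b])
        total += prod
    return (-1) ** (k - 1) * total

# Example: r_2(x, y) = -K(x, y)^2 for the (real symmetric) sine kernel.
print(cluster_function(sine_kernel, [0.0, 0.3]))
print(-sine_kernel(0.0, 0.3) ** 2)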
Cluster point functions are closely related to the cumulants of  # [ 0 , L ] .  Denote by  I k ( L )  the integral of the k-point cluster function over the k-dimensional cube  [ 0 , L ] k :
$$I_k(L) = \int_{[0,L]^k} r_k(x_1,\ldots,x_k)\, dx_1 \cdots dx_k, \qquad k \ge 1.$$
Furthermore, denote by  κ j ( L ) , j = 1 , 2 , 3 , ,  the cumulants of the counting random variable  # [ 0 , L ] .  Recall that the moment and cumulant generating functions are related by
$$\sum_{j=1}^{\infty} \frac{\kappa_j}{j!}\, z^j = \log\left( \sum_{j=0}^{\infty} \frac{m_j}{j!}\, z^j \right).$$
It follows from (4) and (6) that the j-th cumulant of  # [ 0 , L ]  can be written as a linear combination of the integrals  I k ( L ) , 1 k j .  Namely, the following relation holds for the generating functions:
$$\sum_{j=1}^{\infty} \frac{\kappa_j(L)}{j!}\, z^j = \sum_{j=1}^{\infty} \frac{I_j(L)}{j!}\, (e^z - 1)^j.$$
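Relation (7) expresses each cumulant as an integer combination of the cluster integrals (the coefficients are Stirling numbers of the second kind). A small symbolic check — our own sketch, with I1, ..., I4 as placeholder symbols for the cluster integrals — makes this explicit.

import sympy as sp

z = sp.symbols('z')
n_max = 4
I = sp.symbols('I1:%d' % (n_max + 1))           # placeholder symbols I1, ..., I4

# right-hand side of (7), expanded in powers of z
rhs = sum(I[j - 1] * (sp.exp(z) - 1) ** j / sp.factorial(j)
          for j in range(1, n_max + 1))
series = sp.series(rhs, z, 0, n_max + 1).removeO()

for n in range(1, n_max + 1):
    kappa_n = sp.factorial(n) * series.coeff(z, n)
    print(f"kappa_{n} =", sp.expand(kappa_n))
# e.g. kappa_2 = I1 + I2, kappa_3 = I1 + 3*I2 + I3 (Stirling numbers of the 2nd kind)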
To prove the Central Limit Theorem for the normalized random variable $\frac{\#[0,L] - \mathbb{E}\#[0,L]}{\sqrt{\operatorname{Var}\#[0,L]}}$, it is sufficient to show that
$$\operatorname{Var}\#[0,L] \to \infty \ \text{ as } \ L \to \infty \qquad \text{and} \qquad \kappa_j(L) = o\big((\operatorname{Var}\#[0,L])^{j/2}\big) \ \text{ for } \ j > 2,$$
since (8) would imply that all cumulants of order three and higher of the normalized counting random variable $\frac{\#[0,L] - \mathbb{E}\#[0,L]}{\sqrt{\operatorname{Var}\#[0,L]}}$ go to zero as $L \to \infty$.
It is not hard to see that if a random point process is such that  Var # [ 0 , L ]  grows linearly in L and the cluster function integrals satisfy
$$I_j(L) = o\big(L^{j/2}\big), \qquad j > 2,$$
a routine application of (7) and (8) finishes the proof of the Central Limit Theorem for the counting function. However, the sine random point process exhibits a more delicate behavior—the variance of the number of particles grows only logarithmically, namely:
$$\operatorname{Var}\#[0,L] = \frac{1}{\pi^2}\log L + O(1).$$
Thus, a more subtle analysis of the asymptotics of  I j ( L )  is required for  j > 2 .  The proof ([14]) of the Central Limit Theorem (3) follows from the bounds
$$\int_0^L \int_0^L \frac{\sin(\pi(x_1-x_2))}{\pi(x_1-x_2)} \cdot \frac{\sin(\pi(x_2-x_1))}{\pi(x_2-x_1)}\, dx_1\, dx_2 = L - \frac{1}{\pi^2}\log L + O(1),$$
$$\int_{[0,L]^k} \frac{\sin(\pi(x_1-x_2))}{\pi(x_1-x_2)} \cdot \frac{\sin(\pi(x_2-x_3))}{\pi(x_2-x_3)} \cdots \frac{\sin(\pi(x_k-x_1))}{\pi(x_k-x_1)}\, dx_1 \cdots dx_k = L + O(\log L).$$
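As a quick numerical sanity check (our own sketch, not part of [14]), one can evaluate the double integral in the first line above and compare $\operatorname{Var}\#[0,L] = L - \int\!\int K^2$ with $\frac{1}{\pi^2}\log L$; the difference stays bounded as $L$ grows.

import numpy as np
from scipy.integrate import quad

def sinc2(t):
    # K(x, y)^2 for the sine kernel, as a function of t = x - y
    return 1.0 if t == 0 else (np.sin(np.pi * t) / (np.pi * t)) ** 2

for L in (10.0, 100.0, 1000.0):
    # reduce the double integral over [0, L]^2 to a 1-D integral against (L - t);
    # integrate over unit intervals to keep quad accurate for the oscillatory sinc^2
    dbl = sum(quad(lambda t: 2.0 * (L - t) * sinc2(t), a, a + 1)[0]
              for a in range(int(L)))
    print(L, L - dbl, np.log(L) / np.pi ** 2)   # the two values differ by O(1)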
Remarkably, the result can be generalized to the case of any determinantal random point field on a locally compact Hausdorff phase space E, equipped with a  σ -finite measure  μ  on the Borel  σ -algebra provided the correlation kernel is Hermitian and locally trace class and the variance goes to infinity. Namely, the following holds:
Theorem 1 
([17,18]). Let $(X, \mathcal{F}, P_n)$ be a family of determinantal random point fields with Hermitian, locally trace class correlation kernels $K_n$, and let $I_n$ be a family of Borel subsets of $E$ with compact closure. Then, if $\operatorname{Var}_n \#(I_n) \to \infty$ as $n \to \infty$, the normalized counting random variable $\frac{\#(I_n) - \mathbb{E}\#(I_n)}{\sqrt{\operatorname{Var}\#(I_n)}}$ converges in distribution to a standard normal $N(0,1)$.
Remark 1. 
The multivariate version of the theorem also holds (see [17,18]). It is worth noting that the univariate result allows a very nice probabilistic interpretation ([19]). Namely, one can show that the counting random variable $\#(I)$ is equal in distribution to a sum of independent, non-identically distributed Bernoulli random variables with the probabilities of success given by the eigenvalues of the integral operator $K_I f(x) = \int_I K(x,y) f(y)\, \mu(dy)$ on $L^2(I,\mu)$. However, we are not aware of a simple probabilistic interpretation of the multivariate CLT result.
Analogous results hold for linear statistics $L_f = \sum_i f(x_i)$, where $f$ is a bounded measurable test function, e.g., with compact support, and the summation is over all points of a random point field.
Theorem 2 
([17,18]). Let $(X, \mathcal{F}, P_n)$ be a family of determinantal random point fields with Hermitian, locally trace class correlation kernels $K_n$, and let $f_n$ be a bounded, measurable, compactly supported function on the phase space $E$ such that
$$\operatorname{Var} L_{f_n} \to \infty, \qquad \sup_{x} |f_n(x)| = o\big((\operatorname{Var} L_{f_n})^{\varepsilon}\big), \qquad \mathbb{E}\, L_{|f_n|} = O\big((\operatorname{Var} L_{f_n})^{\delta}\big),$$
for any $\varepsilon > 0$ and some $\delta > 0$ as $n \to \infty$. Then, the normalized linear statistic $\frac{L_{f_n} - \mathbb{E}L_{f_n}}{\sqrt{\operatorname{Var}L_{f_n}}}$ converges in distribution to $N(0,1)$ as $n \to \infty$.
For additional results in this direction, we refer the reader to [20,21,22]. Moreover, the connection between integrals of cluster point functions and cumulants can be exploited to study significantly more challenging problems regarding empirical spectral distribution of spacings and extreme spacings in determinantal random point processes (see, e.g., [23,24]).

2.2. Linear Statistics in Classical Compact Groups

For many ensembles of large random matrices, the variance of a linear statistic either stays finite or grows slower than any power of the dimension, provided the test function is sufficiently smooth. Therefore, Theorem 2 is not typically applicable even if the point correlation functions are determinantal. It is instructive to consider classical compact groups. In this paper, we focus our attention on random unitary matrices. However, many results can be extended to the orthogonal and symplectic matrices without much difficulty.
Let V be a unitary matrix chosen at random with respect to the Haar measure on the unitary group $U(N)$. We are interested in studying statistical properties of the eigenvalues of V, which we denote by $e^{i\theta_j}$, $1 \le j \le N$, $-\pi \le \theta_1, \ldots, \theta_N < \pi$. The joint probability density of the eigenvalues is given by the formula ([25]):
$$p_N(\theta_1,\ldots,\theta_N) = \frac{1}{(2\pi)^N N!} \prod_{1 \le j < k \le N} \big| e^{i\theta_j} - e^{i\theta_k} \big|^2.$$
The joint distribution of the eigenvalues is known as the Weyl measure, and the ensemble is known in Random Matrices as the Circular Unitary Ensemble ([26,27,28]). Remarkably, if a test function f on the unit circle is sufficiently smooth, the variance of the linear statistic $\sum_{j=1}^{N} f(\theta_j)$ stays finite as $N \to \infty$. Moreover,
Theorem 3 
([29,30]). Let f be a real-valued function on the unit circle satisfying
$$\sum_{k} |\hat{f}_k|^2\, |k| < \infty,$$
where
$$\hat{f}_k = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x)\, e^{-ikx}\, dx, \qquad k \in \mathbb{Z},$$
are the Fourier coefficients of  f . Then,
$$\sum_{j=1}^{N} f(\theta_j) - \hat{f}_0\, N \ \xrightarrow{\ D\ }\ N\Big(0, \sum_{k} |\hat{f}_k|^2\, |k|\Big).$$
Remark 2. 
One can show that the result of Theorem 3 follows from the Strong Szego Theorem for Toeplitz determinants ([30]).
Remark 3. 
It follows from Theorem 3 that the real and imaginary parts of the traces of the powers of a random unitary matrix, $\operatorname{Tr} V^k = \sum_{j=1}^{N} e^{ik\theta_j}$, $k \ge 1$, converge to independent normal random variables with mean zero and variance $k/2$ as $N \to \infty$. In [31], Johansson proved that the rate of convergence to the normal law is $O(N^{-\delta N})$ for some $\delta > 0$. Recently, the results have been further improved in [32,33].
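Remark 3 is easy to check by simulation. The sketch below (an illustration of ours; it assumes SciPy's unitary_group sampler for the Haar measure on $U(N)$) estimates the variance of the real and imaginary parts of $\operatorname{Tr} V^k$, which should be close to $k/2$ already for moderate $N$.

import numpy as np
from scipy.stats import unitary_group

N, k, samples = 30, 3, 2000
traces = np.empty(samples, dtype=complex)
for s in range(samples):
    V = unitary_group.rvs(N)                        # Haar-distributed unitary matrix
    traces[s] = np.sum(np.linalg.eigvals(V) ** k)   # Tr V^k

# Re Tr V^k and Im Tr V^k are asymptotically independent N(0, k/2)
print(np.var(traces.real), np.var(traces.imag), k / 2)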
Many joint moments of the traces of powers of a random unitary matrix V coincide with the corresponding joint moments of the limiting normal random variables. Namely, let $Z_j$, $j \ge 1$, be a sequence of independent standard complex normal random variables. Then, for any $k \ge 1$ and non-negative integers $a_j, b_j$, $1 \le j \le k$, satisfying
$$N \ \ge\ \Big(\sum_{j=1}^{k} j\, a_j\Big) \vee \Big(\sum_{j=1}^{k} j\, b_j\Big),$$
one has ([29]):
$$\mathbb{E} \prod_{j=1}^{k} \operatorname{Tr}(V^j)^{a_j}\, \overline{\operatorname{Tr}(V^j)^{b_j}} = \delta_{ab} \prod_{j=1}^{k} j^{a_j}\, a_j! = \mathbb{E} \prod_{j=1}^{k} \big(\sqrt{j}\, Z_j\big)^{a_j}\, \overline{\big(\sqrt{j}\, Z_j\big)^{b_j}},$$
where the delta symbol  δ a b  is one if  a = ( a 1 , , a k )  coincides with  b = ( b 1 , , b k )  and is zero otherwise. For a beautiful survey of this and related results for random matrices from classical compact groups, we refer the reader to [34].
To better understand (16) and related phenomena, one can study the joint cumulants of the traces of powers of a random matrix. Let $\{X_\alpha,\ \alpha \in A\}$ be a family of random variables. The joint cumulants are defined as (see, e.g., [35]):
$$\kappa_{i_1,\ldots,i_n} := \kappa(X_{i_1},\ldots,X_{i_n}) = \sum_{\pi} (|\pi|-1)!\, (-1)^{|\pi|-1} \prod_{B \in \pi} \mathbb{E} \prod_{i \in B} X_i,$$
where the sum is over all partitions  π  of the set  { i 1 , , i n } ,  B goes over the list of all blocks of the partition  π ,  and  | π |  denotes the number of blocks in the partition.
We recall that the joint moments are expressed in terms of the cumulants as
$$\mathbb{E}(X_1 \cdots X_n) := \mathbb{E} \prod_{1 \le i \le n} X_i = \sum_{\pi} \prod_{B \in \pi} \kappa(X_i : i \in B).$$
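Definitions (17) and (18) are straightforward to implement. The following sketch (our own, for illustration only) enumerates set partitions recursively and estimates a joint cumulant from Monte Carlo samples via (17); for independent variables any genuinely mixed cumulant is close to zero.

import numpy as np
from math import factorial

def set_partitions(items):
    """Yield all partitions of the list `items` into blocks."""
    if len(items) == 1:
        yield [items]
        return
    first, rest = items[0], items[1:]
    for smaller in set_partitions(rest):
        for i, block in enumerate(smaller):
            yield smaller[:i] + [[first] + block] + smaller[i + 1:]
        yield [[first]] + smaller

def joint_cumulant(samples):
    """samples: array of shape (n_vars, n_draws); returns a Monte Carlo estimate
    of kappa(X_1, ..., X_n) via the Moebius formula (17)."""
    n = samples.shape[0]
    total = 0.0
    for part in set_partitions(list(range(n))):
        m = len(part)
        prod = 1.0
        for block in part:
            prod *= np.mean(np.prod(samples[block, :], axis=0))
        total += (-1) ** (m - 1) * factorial(m - 1) * prod
    return total

# Example: for independent X, Y every mixed cumulant involving both vanishes.
rng = np.random.default_rng(1)
X, Y = rng.standard_normal(10 ** 5), rng.standard_normal(10 ** 5)
print(joint_cumulant(np.vstack([X, X, Y])))   # ~ 0 up to Monte Carlo error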
We will use the notation  T N , k : = Tr ( V k )  for the traces of powers of V and denote by  κ p ( N ) ( k 1 , , k p )  the joint cumulants of  T N , k 1 , , T N , k p ,  i.e.,
$$\kappa_p^{(N)}(k_1,\ldots,k_p) := \kappa\big(T_{N,k_1},\ldots,T_{N,k_p}\big).$$
For a determinantal random point process with a correlation kernel $K(x,y)$, the cumulants of a linear statistic $L_f = \sum_i f(x_i)$ can be computed as ([18]):
$$\kappa_p(L_f) = \sum_{m=1}^{p} \frac{(-1)^{m-1}}{m} \sum_{\substack{(p_1,\ldots,p_m):\ p_1+\cdots+p_m=p \\ p_1,\ldots,p_m \ge 1}} \frac{p!}{p_1!\cdots p_m!} \times \int f^{p_1}(x_1) K(x_1,x_2)\, f^{p_2}(x_2) K(x_2,x_3) \cdots f^{p_m}(x_m) K(x_m,x_1)\, dx_1 \cdots dx_m.$$
For the CUE, the point correlation functions are given by the determinantal Formula (2) with the correlation kernel
$$K_N(x,y) = \frac{1}{2\pi} \sum_{j=0}^{N-1} e^{ij(x-y)}.$$
This allows one to obtain the following formula in the CUE case, provided the test function f is sufficiently smooth ([36], see also [37,38,39]):
$$\kappa_p(L_f) = \sum_{\substack{k_1,\ldots,k_p \in \mathbb{Z} \\ k_1+\cdots+k_p=0}} \prod_{j=1}^{p} \hat{f}_{k_j} \sum_{m=1}^{p} \frac{(-1)^{m-1}}{m} \sum_{\substack{(p_1,\ldots,p_m):\ p_1+\cdots+p_m=p \\ p_1,\ldots,p_m \ge 1}} \frac{p!}{p_1!\cdots p_m!} \times \max\Big(0,\ N - \max\big(0, \textstyle\sum_{i=1}^{p_1}k_i, \ldots, \sum_{i=1}^{p_1+\cdots+p_{m-1}}k_i\big) - \max\big(0, -\sum_{i=1}^{p_1}k_i, \ldots, -\sum_{i=1}^{p_1+\cdots+p_{m-1}}k_i\big)\Big),$$
that can be rewritten as
$$\kappa_p(L_f) = \sum_{\substack{k_1,\ldots,k_p \in \mathbb{Z} \\ k_1+\cdots+k_p=0}} \prod_{j=1}^{p} \hat{f}_{k_j} \sum_{m=1}^{p} \frac{(-1)^{m}}{m} \sum_{\substack{(p_1,\ldots,p_m):\ p_1+\cdots+p_m=p \\ p_1,\ldots,p_m \ge 1}} \frac{1}{p_1!\cdots p_m!} \times \sum_{\sigma \in S_p} J_N\big(p_1,\ldots,p_m;\, k_{\sigma(1)},\ldots,k_{\sigma(p)}\big),$$
where  J N  is a piece-wise linear function defined by the following formula
$$J_N(p_1,\ldots,p_m;\, k_1,\ldots,k_p) := \min\Big(N,\ \max\big(0, \textstyle\sum_{i=1}^{p_1}k_i, \sum_{i=1}^{p_1+p_2}k_i, \ldots, \sum_{i=1}^{p_1+\cdots+p_{m-1}}k_i\big) + \max\big(0, -\sum_{i=1}^{p_1}k_i, \ldots, -\sum_{i=1}^{p_1+\cdots+p_{m-1}}k_i\big)\Big)$$
for positive integers $p_1,\ldots,p_m \ge 1$ with $p_1+\cdots+p_m = p$, and integers $k_1,\ldots,k_p$ satisfying $\sum_{i=1}^{p} k_i = 0$. Since the joint cumulants are symmetric and multilinear, it follows from (22) and (23) that $\kappa_p^{(N)}(k_1,\ldots,k_p)$ can be computed as follows.
Lemma 1 
([40]).
(i) If $p > 1$ and either $k_1 + \cdots + k_p \ne 0$ or $\prod_{i=1}^{p} k_i = 0$, or both, then
$$\kappa_p^{(N)}(k_1,\ldots,k_p) = 0.$$
(ii) If $p > 1$, $\prod_{i=1}^{p} k_i \ne 0$, and $\sum_{i=1}^{p} k_i = 0$, then
$$\kappa_p^{(N)}(k_1,\ldots,k_p) = \sum_{m=1}^{p} \frac{(-1)^m}{m} \sum_{\substack{(p_1,\ldots,p_m):\ p_1+\cdots+p_m=p \\ p_1,\ldots,p_m \ge 1}} \frac{1}{p_1!\cdots p_m!} \times \sum_{\sigma \in S_p} J_N\big(p_1,\ldots,p_m;\, k_{\sigma(1)},\ldots,k_{\sigma(p)}\big).$$
Remark 4. 
Alternatively, one can rewrite the last formula as
$$\kappa_p^{(N)}(k_1,\ldots,k_p) = \sum_{m=1}^{p} \frac{(-1)^m}{m} \sum_{(C_1,\ldots,C_m):\ C_1\cup\cdots\cup C_m = \{1,\ldots,p\}} \min\Big(N,\ \max\big(0, \textstyle\sum_{i \in C_1}k_i, \ldots, \sum_{i \in C_1\cup\cdots\cup C_{m-1}}k_i\big) + \max\big(0, -\sum_{i \in C_1}k_i, \ldots, -\sum_{i \in C_1\cup\cdots\cup C_{m-1}}k_i\big)\Big),$$
where the summation is over all ordered collections of non-empty disjoint subsets $C_1,\ldots,C_m$, $m \ge 1$, of the set $\{1,\ldots,p\}$ such that $\bigcup_j C_j = \{1,\ldots,p\}$.
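The ordered-subset formula (26) is simple enough to implement verbatim. The sketch below (our own; exponential in $p$ and intended only for tiny $p$) reproduces $\operatorname{Var}\operatorname{Tr}V^k = \min(k,N)$ and the vanishing of higher-order cumulants when $\sum |k_i| \le 2N$.

from itertools import permutations

def _set_partitions(items):
    if len(items) == 1:
        yield [items]
        return
    first, rest = items[0], items[1:]
    for smaller in _set_partitions(rest):
        for i, block in enumerate(smaller):
            yield smaller[:i] + [[first] + block] + smaller[i + 1:]
        yield [[first]] + smaller

def cue_joint_cumulant(ks, N):
    """kappa_p^{(N)}(k_1,...,k_p) via the ordered set-partition formula (26)."""
    p = len(ks)
    total = 0.0
    for part in _set_partitions(list(range(p))):
        m = len(part)
        for ordered in permutations(part):      # ordered collections (C_1,...,C_m)
            partial, sums = 0, []
            for block in ordered[:-1]:          # cumulative sums over C_1,...,C_{m-1}
                partial += sum(ks[i] for i in block)
                sums.append(partial)
            term = min(N, max([0] + sums) + max([0] + [-s for s in sums]))
            total += (-1) ** m / m * term
    return total

print(cue_joint_cumulant([3, -3], N=10))    # expect 3.0  (Var Tr V^3 = min(3, N))
print(cue_joint_cumulant([12, -12], N=10))  # expect 10.0 (Var Tr V^12 = min(12, N))
print(cue_joint_cumulant([2, 3, -5], N=10)) # expect ~0.0 (third cumulants vanish when sum|k_i| <= 2N)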
It should be emphasized that the exact combinatorial structure manifested in Lemma 1 is specific to the classical compact groups and the sine determinantal point process.
To finish the proof of (14) and (16), using the cumulants, one observes that for
$$\sum_{i=1}^{p} |k_i| \le 2N \qquad \text{and} \qquad \sum_{i=1}^{p} k_i = 0,$$
the expression for  J N ( p 1 , , p m ; k 1 , , k p )  simplifies, namely:
$$J_N(p_1,\ldots,p_m;\, k_1,\ldots,k_p) = \max\Big(0, \sum_{i=1}^{p_1}k_i, \sum_{i=1}^{p_1+p_2}k_i, \ldots, \sum_{i=1}^{p_1+\cdots+p_{m-1}}k_i\Big) + \max\Big(0, \sum_{i=1}^{p_1}(-k_i), \ldots, \sum_{i=1}^{p_1+\cdots+p_{m-1}}(-k_i)\Big).$$
One then uses a combinatorial lemma from [36] to show that all joint cumulants $\kappa_p^{(N)}(k_1,\ldots,k_p)$ of order three and higher ($p > 2$) are identically zero, provided $\sum_{i=1}^{p}|k_i| \le 2N$:
Lemma 2 
([36]). Let $k_1,\ldots,k_p$, $p \ge 2$, be arbitrary real numbers such that $\sum_i k_i = 0$. Then, the sum
$$\sum_{\sigma \in S_p} \sum_{m=1}^{p} \frac{(-1)^m}{m} \sum_{\substack{(p_1,\ldots,p_m):\ p_1+\cdots+p_m=p \\ p_1,\ldots,p_m \ge 1}} \frac{1}{p_1!\cdots p_m!}\, \max\Big(0, \sum_{i=1}^{p_1}k_{\sigma(i)}, \ldots, \sum_{i=1}^{p_1+\cdots+p_{m-1}}k_{\sigma(i)}\Big)$$
equals  | k 1 | = | k 2 |  if  p = 2  and zero if  p > 2 .
Remark 5. 
The lemma is related to the combinatorial proof of the Strong Szego Theorem. We refer the reader to [41,42,43,44] for recent applications of the cumulant method for determinantal processes to study linear statistics of Hermitian unitary invariant random matrices and free fermions processes.

2.3. Multivariate Linear Statistics and Number Theory Connections

This subsection is devoted to the discussion of multivariate linear statistics of the form
$$S_N(f) = \sum_{1 \le j_1, j_2, \ldots, j_{n+1} \le N} f\big(L_N(\theta_{j_2} - \theta_{j_1}), \ldots, L_N(\theta_{j_{n+1}} - \theta_{j_1})\big),$$
where $\theta = (\theta_1,\ldots,\theta_N) \in \mathbb{T}^N$ comes from the CUE (or, in general, the Circular Beta Ensemble with arbitrary $\beta > 0$), the scaling factor $L_N$ satisfies $1 \le L_N \le N$, and f is a sufficiently smooth test function defined on the unit circle (when $L_N = 1$) or the real line (when $L_N \to \infty$).
The study of such multivariate linear statistics in [40,45,46,47] was motivated by connections between Random Matrices and Number Theory (see, e.g., [48,49,50,51] and references therein). The local case ( L N = N ) is of a particular interest for the CUE since the multivariate linear statistic
$$S_N(f) = \sum_{1 \le j_1, j_2, \ldots, j_{n+1} \le N} f\big(N(\theta_{j_2} - \theta_{j_1})_c, \ldots, N(\theta_{j_{n+1}} - \theta_{j_1})_c\big),$$
where $f \in C_c(\mathbb{R}^n)$ and $(\theta - \phi)_c$ is the phase difference on the unit circle (i.e., the difference modulo $2\pi$), corresponds to multivariate linear statistics of the (rescaled) zeros of the Riemann zeta function studied by Montgomery [52,53], Hejhal [54], and Rudnick and Sarnak [55] (see, e.g., [40]). We also point out a recent related preprint [56], where multivariate linear statistics have been studied for the determinantal random process with the projection correlation kernel on the unit sphere $S^d$, $d > 1$.
To study the limiting distribution of (30), we write using the Fourier transform
$$S_N(f) = \frac{1}{(2\pi)^{n/2} N^n} \sum_{k \in \mathbb{Z}^n} \hat{f}\Big(\frac{k_1}{N}, \ldots, \frac{k_n}{N}\Big) \prod_{j=1}^{n+1} T_{N, k_j},$$
where $k_{n+1} = -\sum_{j=1}^{n} k_j$, and $\hat{f}(t) = \frac{1}{(2\pi)^{n/2}} \int_{\mathbb{R}^n} f(x)\, e^{-i t \cdot x}\, dx$, $t = (t_1,\ldots,t_n)$.
The proof of the Gaussian fluctuation for $(S_N(f) - \mathbb{E}S_N(f))/\sqrt{N}$ presented in [40] relies on a detailed study of the joint cumulants $\kappa_p^{(N)}(k_1,\ldots,k_p)$, $p > 1$, for arguments $k_i = O(N)$, $1 \le i \le p$. It is combinatorial in nature. One of the key ingredients of the proof is the fact that the joint cumulants scale with $N$. Namely,
$$c_p(t_1,\ldots,t_p) := \frac{1}{N}\, \kappa_p^{(N)}(t_1 N, \ldots, t_p N)$$
does not depend on $N$. Moreover, it is a piecewise linear, bounded function of $t_1,\ldots,t_p$ on the hyperplane $t_1 + \cdots + t_p = 0$ (it is identically zero when $t_1 + \cdots + t_p \ne 0$).
Lemma 3 
([40]). The rescaled joint cumulants $c_p$ defined in (32) can be written for $p > 1$ and $\sum_{i=1}^{p} t_i = 0$ as
$$c_p(t_1,\ldots,t_p) = \sum_{m=1}^{p} \frac{(-1)^m}{m} \sum_{\substack{(p_1,\ldots,p_m):\ p_1+\cdots+p_m=p \\ p_1,\ldots,p_m \ge 1}} \frac{1}{p_1!\cdots p_m!} \sum_{\sigma \in S_p} j\big(p_1,\ldots,p_m;\, t_{\sigma(1)},\ldots,t_{\sigma(p)}\big),$$
where  j ( p 1 , , p m ; t 1 , , t p )  is defined for positive integers  p 1 , , p m , p 1 + + p m = p ,  and real numbers  t 1 , , t p ,  as
$$j(p_1,\ldots,p_m;\, t_1,\ldots,t_p) := \min\Big(1,\ \max\big(0, \textstyle\sum_{i=1}^{p_1}t_i, \sum_{i=1}^{p_1+p_2}t_i, \ldots, \sum_{i=1}^{p_1+\cdots+p_{m-1}}t_i\big) + \max\big(0, \sum_{i=1}^{p_1}(-t_i), \ldots, \sum_{i=1}^{p_1+\cdots+p_{m-1}}(-t_i)\big)\Big).$$
Moreover, the following holds:
(i) $c_p(t_1,\ldots,t_p)$, $p > 1$, is a bounded, symmetric, piecewise linear function on $\sum_{i=1}^{p} t_i = 0$.
(ii) $c_p(t_1,\ldots,t_p) = 0$ if $p > 1$ and $\sum_{i=1}^{p} t_i \ne 0$.
(iii) $c_1(0) = 1$ and $c_1(t) = 0$ for $t \ne 0$.
We finish this section with a discussion of the multivariate linear statistics for non-smooth functions. For simplicity, we assume that  n = 1  and restrict our attention to the global regime ( L N = 1 ) .  Then,
$$S_N(f) = \sum_{i,j=1}^{N} f(\theta_i - \theta_j).$$
In [45], we proved that if f is an even real-valued function on the unit circle such that both f and its derivative $f'$ belong to $L^2(\mathbb{T})$, then
$$S_N(f) - \mathbb{E}S_N(f) \ \xrightarrow{\ D\ }\ 2 \sum_{m=1}^{\infty} \hat{f}_m\, m\, (\varphi_m - 1),$$
where $\varphi_m$ are i.i.d. exponential random variables with $\mathbb{E}(\varphi_m) = 1$. We note that the condition $\sum_k |\hat{f}_k|^2 k^2 < \infty$ is necessary and sufficient for (36) to hold. If the series $\sum_k |\hat{f}_k|^2 k^2$ is slowly diverging, so that the sequence of partial sums $V_N = \sum_{|k| \le N} |\hat{f}_k|^2 k^2$ is slowly varying in the sense of Karamata ([57]), then the Central Limit Theorem holds ([46]):
$$\frac{S_N(f) - \mathbb{E}S_N(f)}{\sqrt{V_N}} \ \xrightarrow{\ D\ }\ N(0, 2).$$
The case of linearly growing variance will be studied in the next section, where we will consider a class of even test functions f for which $\hat{f}_k\, k \to C \ne 0$ as $k \to \infty$. The values of the cumulant function $\kappa_p^{(N)}(k_1,\ldots,k_p)$ for $k_i = O(N)$, $1 \le i \le p$, play an important role in the analysis.

3. Results

We study the Circular Unitary Ensemble of random matrices (11). In particular, we are interested in the joint distribution of the traces of high powers of a unitary matrix V. As before, we denote by $\kappa_p^{(N)}(k_1,\ldots,k_p)$ the joint cumulants of $\operatorname{Tr}V^{k_1},\ldots,\operatorname{Tr}V^{k_p}$. In what follows, $k_i = O(N)$, $1 \le i \le p$, and we are looking for scenarios when the joint cumulants vanish.
We start by considering the joint cumulants of $\operatorname{Tr}V^{k}$ and $\operatorname{Tr}V^{-k}$ for $k \ge N$. While the distribution in this case is well understood (see, e.g., [58]), it is instructive to consider the cumulant approach that will be further expanded later in this section. We are interested in the values of the joint cumulants $\kappa_p^{(N)}(k_1,\ldots,k_p)$, $p > 1$, where
$$k_1 = \cdots = k_q = k \quad \text{and} \quad k_{q+1} = \cdots = k_p = -k, \qquad 0 \le q \le p, \quad k \ge N.$$
It follows from Lemma 1 that $\kappa_p^{(N)}(k,\ldots,k,-k,\ldots,-k) = 0$ unless p is even and $p = 2q$. We then have $k_1 = \cdots = k_q = k$ and $k_{q+1} = \cdots = k_{2q} = -k$. We will use Formula (26). Since $k \ge N$, one has
$$\min\Big(N,\ \max\big(0, \textstyle\sum_{i \in B_1}k_i, \ldots, \sum_{i \in B_1\cup\cdots\cup B_{m-1}}k_i\big) + \max\big(0, -\sum_{i \in B_1}k_i, \ldots, -\sum_{i \in B_1\cup\cdots\cup B_{m-1}}k_i\big)\Big) = N$$
unless each subset $B_j$ contains an equal number of elements equal to $k$ and $-k$, in which case the l.h.s. of (37) is zero. A routine combinatorial analysis produces
$$\kappa_{2q}^{(N)}(k,\ldots,k,-k,\ldots,-k) = N \sum_{m=1}^{q} \frac{(-1)^{m-1}}{m} \sum_{\substack{(q_1,\ldots,q_m):\ q_1+\cdots+q_m=q \\ q_1,\ldots,q_m \ge 1}} \left(\frac{q!}{q_1!\cdots q_m!}\right)^2.$$
Therefore, the cumulant generating function  C N ( x , y )  can be written as
$$\log \mathbb{E} \exp\big(x \operatorname{Tr}V^{k} + y \operatorname{Tr}V^{-k}\big) = N \log\left(\sum_{m=0}^{\infty} \frac{(xy)^m}{(m!)^2}\right) = N \log I_0\big(\sqrt{4xy}\big),$$
where
$$I_0(u) = \frac{1}{2\pi}\int_0^{2\pi} e^{u\cos\theta}\, d\theta = \sum_{m=0}^{\infty} \frac{1}{(m!)^2\, 2^{2m}}\, u^{2m}$$
is the modified Bessel function of the first kind.
A more elementary way to derive (39) relies on the observation by Rains ([58]) that the kth powers of the eigenvalues of V are i.i.d. random variables uniformly distributed on the unit circle provided $k \ge N$. Thus, $\operatorname{Tr}V^{k}$, for $k \ge N$, is given by the sum of N i.i.d. uniform random variables on the unit circle. The traces of different powers are still correlated for finite N. However, a significant portion of the joint cumulants vanishes.
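Rains' observation can also be verified numerically. The following Monte Carlo sketch (our own illustration; it again assumes SciPy's unitary_group sampler) compares $\mathbb{E}|\operatorname{Tr}V^k|^4$ for $k \ge N$ with the same moment for a sum of $N$ independent uniform points on the unit circle; both should be close to $2N^2 - N$.

import numpy as np
from scipy.stats import unitary_group

N, k, samples = 8, 11, 5000                      # k >= N
rng = np.random.default_rng(2)

tr4 = np.empty(samples)
for s in range(samples):
    eigs = np.linalg.eigvals(unitary_group.rvs(N))
    tr4[s] = np.abs(np.sum(eigs ** k)) ** 4      # |Tr V^k|^4

phases = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(samples, N)))
iid4 = np.abs(phases.sum(axis=1)) ** 4           # |sum of N i.i.d. uniform phases|^4

print(tr4.mean(), iid4.mean(), 2 * N ** 2 - N)   # all three ~ 120 for N = 8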
Proposition 1. 
Let $s_i$, $1 \le i \le l$, be $l > 1$ distinct positive integers and $a_i, b_i$ be non-negative integers such that $a_i + b_i \ge 1$ for all $i$. Suppose that
$$\Big| \sum_{i=1}^{l} n_i s_i \Big| \ge N \quad \text{for all} \quad (n_1,\ldots,n_l) \in \mathbb{Z}^l \ \text{ such that } \ -b_i \le n_i \le a_i, \ 1 \le i \le l, \quad \sum_{i=1}^{l} |n_i| > 0.$$
Let $p = \sum_{i=1}^{l}(a_i + b_i)$ and let $(k_1,\ldots,k_p) \in \mathbb{Z}^p$ be such that the first $a_1$ coordinates of the vector are equal to $s_1$, the next $b_1$ coordinates are equal to $-s_1$, the next $a_2$ coordinates are equal to $s_2$, the following $b_2$ coordinates are equal to $-s_2$, etc., and the last $b_l$ coordinates are equal to $-s_l$.
Then,
κ p ( N ) ( k 1 , , k p ) = 0 .
Moreover, for $0 \le n_i \le a_i$, $0 \le m_i \le b_i$, $1 \le i \le l$, one has
$$\mathbb{E} \prod_{i=1}^{l} \big(\operatorname{Tr}V^{s_i}\big)^{n_i} \big(\operatorname{Tr}V^{-s_i}\big)^{m_i} = \delta_{nm} \prod_{i=1}^{l} \mathbb{E}\Big| \sum_{j=1}^{N} \varphi_j \Big|^{2 n_i},$$
where  { φ i }  are i.i.d. uniform random variables on the unit circle, and  δ n m  is the delta symbol.
Proof of Proposition 1. 
While the proof does not depend on the value of  l ,  we will assume that  l = 2  in order to simplify the notations. It follows from (41) that the joint cumulant is zero unless
$$a_i = b_i, \qquad 1 \le i \le 2,$$
since otherwise $\sum_{1}^{p} k_i \ne 0$. Therefore, from now on, we can assume that (44) holds. As before, we will use (26) to compute the joint cumulants. We call a subset $B_j$ balanced if the number of times $s_1$ appears in it is equal to the number of times $-s_1$ appears in it, and the same holds for $s_2$ and $-s_2$. It follows from (41) that unless each subset $B_j$ is balanced, the l.h.s. of (37) is $N$. Otherwise, the l.h.s. of (37) is zero. We obtain that for $a_1 = b_1 = c$ and $a_2 = b_2 = d$,
$$\kappa_p^{(N)}(s_1,\ldots,s_1,-s_1,\ldots,-s_1,\, s_2,\ldots,s_2,-s_2,\ldots,-s_2) = N \sum_{m \ge 1} \frac{(-1)^{m-1}}{m} \sum_{\substack{c_1+\cdots+c_m=c,\ c_i \ge 0 \\ d_1+\cdots+d_m=d,\ d_i \ge 0 \\ c_i + d_i \ge 1,\ 1 \le i \le m}} \left(\frac{c!}{c_1!\cdots c_m!}\right)^2 \left(\frac{d!}{d_1!\cdots d_m!}\right)^2, \qquad p = 2(c+d).$$
Therefore, the coefficient in front of  ( x y ) c ( z w ) d  in the cumulant generating function
$$C_N(x,y,z,w) := \log \mathbb{E} \exp\big(x\operatorname{Tr}V^{s_1} + y\operatorname{Tr}V^{-s_1} + z\operatorname{Tr}V^{s_2} + w\operatorname{Tr}V^{-s_2}\big)$$
coincides with the corresponding coefficient in the power series expansion of
$$N \log\left( \sum_{m=0}^{\infty} \frac{(xy)^m}{(m!)^2} \cdot \sum_{n=0}^{\infty} \frac{(zw)^n}{(n!)^2} \right) = N \log\Big( I_0\big(\sqrt{4xy}\big)\, I_0\big(\sqrt{4zw}\big) \Big) = N \log I_0\big(\sqrt{4xy}\big) + N \log I_0\big(\sqrt{4zw}\big) = \log \mathbb{E} \exp\Big( x\sum_{j=1}^{N}\phi_j + y\sum_{j=1}^{N}\overline{\phi_j} + z\sum_{j=1}^{N}\eta_j + w\sum_{j=1}^{N}\overline{\eta_j} \Big),$$
where  { ϕ i } , { η i }  are i.i.d. uniform random variables on the unit circle, and  ϕ ¯  denotes the complex conjugate. The identity (43) follows from the derived identity for the cumulants and the Formula (18), expressing moments in terms of cumulants. □
To illustrate the utility of the cumulant approach, we formulate and prove below the Central Limit Theorem for pair-counting statistics for a class of non-smooth test functions. We consider
$$S_N(f) = \sum_{i,j=1}^{N} f(\theta_i - \theta_j),$$
where  ( θ 1 , , θ N )  is a random CUE point configuration, and f is an even real-valued function on the unit circle such that
$$\hat{f}_k\, k \to C \ne 0 \quad \text{as} \quad k \to \infty.$$
Remark 6. 
f ( θ ) = log | sin ( θ / 2 ) |  satisfies the above test function requirements.
Theorem 4. 
Let f be a real even function on the unit circle, satisfying (46). Then,
$$\frac{S_N(f) - \mathbb{E}S_N(f)}{\sqrt{N}} \ \xrightarrow{\ D\ }\ N\big(0, \sigma^2(C)\big),$$
where the limiting variance can be computed as
$$\sigma^2(C) = 4C^2\left( 3 - 2\int_0^1 \frac{1}{y}\Big[\big(1 + \tfrac{1}{y}\big)\log(1+y) - 1\Big]\, dy \right)$$
$$\qquad\qquad - 4C^2\left( 2\int_0^1 \frac{1+y}{y}\log(1+y)\, dy + \int_0^1 \frac{1-y}{y}\log(1-y)\, dy \right).$$
Proof of Theorem 4. 
We start with the following formula for the variance of  S N ( f ) :
Proposition 2 
([45]). Let f be a real even function on the unit circle such that $f \in L^2(\mathbb{T})$. Then,
$$\operatorname{Var}[S_N(f)] = 4\left( \sum_{1 \le s \le N-1} s^2 (\hat{f}(s))^2 + N^2 \sum_{N \le s} (\hat{f}(s))^2 - N \sum_{N \le s} (\hat{f}(s))^2 \right)$$
$$\qquad\qquad - 4\left( \sum_{\substack{s,t \ge 1:\ 1 \le |s-t| \le N-1 \\ N \le \max(s,t)}} \big(N - |s-t|\big)\, \hat{f}(s)\hat{f}(t) + \sum_{\substack{1 \le s,t \le N-1 \\ N+1 \le s+t}} \big((s+t) - N\big)\, \hat{f}(s)\hat{f}(t) \right).$$
Since the test function f satisfies (46), we have
$$\sum_{1 \le s \le N-1} s^2 (\hat{f}(s))^2 = C^2 N (1 + o(1)), \qquad \sum_{N \le s} (\hat{f}(s))^2 = \frac{C^2}{N}(1 + o(1)),$$
and the two double sums in (51), up to a negligible error, are equal to the Riemann sums of converging integrals. As a result,
$$\operatorname{Var}[S_N(f)] = 4C^2 N \left( 2 - \int\!\!\int_{D_1} (1 - |x-y|)\, \frac{1}{xy}\, dx\, dy - \int\!\!\int_{D_2} (x + y - 1)\, \frac{1}{xy}\, dx\, dy \right)(1 + o(1)),$$
where
$$D_1 = \{(x,y):\ x, y > 0,\ |x-y| \le 1,\ \max(x,y) \ge 1\}, \qquad D_2 = \{(x,y):\ 0 < x, y \le 1,\ x + y \ge 1\}.$$
The integrals in (52) can be trivially simplified as
$$\int\!\!\int_{D_1} (1 - |x-y|)\, \frac{1}{xy}\, dx\, dy = -2 + 2\int_0^1 \frac{1+y}{y}\log(1+y)\, dy + 2\int_1^{\infty} \frac{1}{y}\Big[(1+y)\log\big(1+\tfrac{1}{y}\big) - 1\Big]\, dy,$$
$$\int\!\!\int_{D_2} (x + y - 1)\, \frac{1}{xy}\, dx\, dy = 1 + \int_0^1 \frac{1-y}{y}\log(1-y)\, dy,$$
which leads to (48) and (49).
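The limiting variance (48)-(49) involves only elementary one-dimensional integrals, so it is easy to evaluate numerically. The sketch below is our own; the value C = -1/2 is our computation of $\lim \hat{f}_k k$ for the test function of Remark 6 under the Fourier convention used above.

import numpy as np
from scipy.integrate import quad

# the three one-dimensional integrals appearing in (48)-(49)
B, _ = quad(lambda y: ((1 + 1 / y) * np.log(1 + y) - 1) / y, 0, 1)
A, _ = quad(lambda y: (1 + y) / y * np.log(1 + y), 0, 1)
D, _ = quad(lambda y: (1 - y) / y * np.log(1 - y), 0, 1)

C = -0.5                       # e.g. f(theta) = log|sin(theta/2)| has hat{f}_k * k -> -1/2
sigma2 = 4 * C ** 2 * (3 - 2 * B) - 4 * C ** 2 * (2 * A + D)
print(sigma2)                  # limiting variance sigma^2(C); ~0.355 for C = -1/2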
To prove the theorem, we will study the asymptotics of the higher moments of $S_N(f)$. We start by truncating f. Namely, we replace it by $\tilde{f}(x) = \sum_{|k| \le N\log N} \hat{f}_k\, e^{ikx}$. It follows from (50) and (51) that $\operatorname{Var}[S_N(f) - S_N(\tilde{f})] = \operatorname{Var}[S_N(f - \tilde{f})] = o(N)$. Therefore, without a loss of generality, we can assume that the Fourier coefficients $\hat{f}_k$ are zero for $|k| > N\log N$. Whenever it does not lead to ambiguity, we will use the notation f for the truncated version of the test function. Recall that
$$S_N(f) = \sum_{k} \hat{f}_k\, |T_{N,k}|^2 = \sum_{k} \hat{f}_k\, T_{N,k}\, T_{N,-k},$$
where we use the notation  T N , k = Tr V k .  Let us fix a positive integer  m > 2 .  We have
$$\mathbb{E}\big(S_N(f) - \mathbb{E}S_N(f)\big)^m = \sum_{k_1,\ldots,k_m \in \mathbb{Z}} \hat{f}_{k_1}\cdots\hat{f}_{k_m}\, \mathbb{E} \prod_{i=1}^{m} \Big( T_{N,k_i} T_{N,-k_i} - \mathbb{E}\, T_{N,k_i} T_{N,-k_i} \Big).$$
Let us denote $k_{m+i} = -k_i$, $1 \le i \le m$. Applying Lemma 9.2 from [45] (see also Lemma 4.1 in [40]), one rewrites
$$\mathbb{E} \prod_{i=1}^{m} \Big( T_{N,k_i} T_{N,-k_i} - \mathbb{E}\, T_{N,k_i} T_{N,-k_i} \Big) = \mathbb{E} \prod_{i=1}^{m} \Big( T_{N,k_i} T_{N,k_{m+i}} - \mathbb{E}\, T_{N,k_i} T_{N,k_{m+i}} \Big)$$
as $\sum^{*}_{\pi} \prod_{B \in \pi} \kappa(T_{N,k_i} : i \in B)$, where the sum is over all partitions π of $\{1,\ldots,2m\}$ that contain neither atoms nor two-element subsets of the form $\{i, i+m\}$, $i = 1,\ldots,m$. Therefore,
$$\mathbb{E}\big(S_N(f) - \mathbb{E}S_N(f)\big)^m = \sum^{*}_{\pi} \sum_{k_1,\ldots,k_m \ne 0} \hat{f}_{k_1}\cdots\hat{f}_{k_m} \prod_{B \in \pi} \kappa(T_{N,k_i} : i \in B).$$
For a given partition π, denote
$$\Sigma_{\pi} = \sum_{k_1,\ldots,k_m \ne 0} \hat{f}_{k_1}\cdots\hat{f}_{k_m} \prod_{B \in \pi} \kappa(T_{N,k_i} : i \in B)$$
and
$$S_{\pi} = \sum_{\substack{k_1,\ldots,k_m \ne 0 \\ |k_i| \le N\log N}} \frac{1}{|k_1|}\cdots\frac{1}{|k_m|} \prod_{B \in \pi} \min\Big(N, \sum_{i \in B} |k_i|\Big)\, \mathbf{1}_{\sum_{i \in B} k_i = 0}.$$
Clearly,
Σ π = O ( S π ) .
Following [40,45], we denote
$$[i] := \{i, i+m\}, \qquad 1 \le i \le m,$$
and introduce the equivalence relation $\sim_{\pi}$ on the set $\{[1],\ldots,[m]\}$:
$$[i] \sim_{\pi} [j]$$
if and only if there is a sequence of blocks  B 1 , , B q , q 1 ,  of the partition π such that
$$B_1 \cap [i] \ne \emptyset,\ B_1 \cap [i_1] \ne \emptyset;\quad B_2 \cap [i_1] \ne \emptyset,\ B_2 \cap [i_2] \ne \emptyset;\quad \ldots;\quad B_q \cap [i_{q-1}] \ne \emptyset,\ B_q \cap [j] \ne \emptyset,$$
for some $1 \le i_1, \ldots, i_{q-1} \le m$. We call a partition π optimal if the cardinality of every equivalence class is 2. We claim that for each optimal partition π, the corresponding sum $\Sigma_{\pi} = \sigma^m N^{m/2}(1 + o(1))$, and for each suboptimal partition, $\Sigma_{\pi} = o(N^{m/2})$.
The first part of the claim immediately follows from the variance computations above, as the $m$-dimensional sum (57) then factorizes into the product of $m/2$ two-dimensional sums, each equal to $\sigma^2 N(1 + o(1))$.
Next, we will show that the suboptimal partitions give negligible contributions  o ( N m / 2 ) .  One of the key ingredients is the following upper bound on  S π .
Lemma 5. 
Let π be a partition of  { 1 , , 2 m }  that does not contain atoms and two-element subsets of the form  { i , i + m } , i = 1 , , m .  Then,
$$S_{\pi} = \Omega\big(N^{m/2}\big),$$
i.e., $S_{\pi} \le N^{m/2}(\log N)^d$ for some d and all sufficiently large N. Moreover, if π contains at least one block of size less than 4, then
$$S_{\pi} = \Omega\big(N^{\frac{m-1}{2}}\big).$$
Proof of Lemma 5. 
The proof uses mathematical induction on m. Without a loss of generality, we can assume that the equivalence relation $\sim_{\pi}$ has only one equivalence class. Otherwise, the sum $S_{\pi}$ factorizes over the equivalence classes, and the argument is applicable to each equivalence class separately.
If π does not contain an equivalence class of size less than 4, the bound (63) follows since the number of terms in the product on the r.h.s. of (58) is not bigger than  m / 2 ,  each cumulant is  O ( N ) ,  and the partial sums of the harmonic series grow logarithmically.
Suppose now that π contains a block of size 2, say $B_1 = \{i, j+m\}$, $1 \le i \ne j \le m$. Then, $\{i+m, j\}$ cannot be a block of the partition. Consider the block of the partition that contains $i+m$. Without a loss of generality, we can assume that this block is $B_2$, i.e., $i+m \in B_2$. Construct a partition $\pi'$ of the set $\{1,\ldots,2m\} \setminus \{i, i+m\}$, where we discard $B_1$ and replace $i+m$ in $B_2$ by $j+m$. It follows from the construction that $S_{\pi} \le S_{\pi'}$, and by the inductive hypothesis, $S_{\pi'} = \Omega\big(N^{\frac{m-1}{2}}\big)$.
Now, suppose that π has a block of size 3, say $B_1 = \{i, j, l+m\}$, $1 \le i, j, l \le m$, $l \ne i, j$. Consider the block of π that contains $l$, say $l \in B_2$. Construct a partition $\pi'$ of the set $\{1,\ldots,2m\} \setminus \{l, l+m\}$, where we discard $B_1$ and replace $l$ in $B_2$ by $i$ and $j$. Then, $S_{\pi} \le S_{\pi'}$ and, again, applying the inductive hypothesis, $S_{\pi'} = \Omega\big(N^{\frac{m-1}{2}}\big)$. □
Remark 7. 
A similar argument shows that if π contains a block of size 4 of the form $\{i, j, l, l+m\}$, $1 \le i, j, l \le m$, then $S_{\pi} = \Omega\big(N^{\frac{m-1}{2}}\big)$.
Finally, if π contains a block of size 4 of the form $B_1 = \{i, j, n+m, l+m\}$, $1 \le i, j, n, l \le m$, such that $\{i,j\} \cap \{n,l\} = \emptyset$, we consider two cases, namely, $|k_n| > |k_l|$ and $|k_n| \le |k_l|$. In the first case, we construct a partition $\pi'$ of the set $\{1,\ldots,2m\} \setminus \{n, n+m\}$, where we discard $B_1$ and replace $n$ in its block by $i$, $j$, and $l+m$. In the second case, we do the same procedure with $l$ instead of $n$. In both cases, the sum is bounded from above by a sum of dimension one less, and we again obtain $S_{\pi} = \Omega\big(N^{\frac{m-1}{2}}\big)$ by the inductive assumption.
Now we are ready to finish the proof of the theorem. Recall that we could assume, without a loss of generality, that the equivalence relation $\sim_{\pi}$ has only one equivalence class. Note that (59) and (63), Lemma 5 and Remark 7 prove $\Sigma_{\pi} = o(N^{m/2})$ for all suboptimal partitions π, except the ones that comprise blocks of size 4 of the form $\{i, j, j+m, l+m\}$, $1 \le i, j, l \le m$, $i \ne l$. Since $k_i + k_j + k_{j+m} + k_{l+m} = 0$ and $k_{j+m} = -k_j$, we obtain $k_{l+m} = -k_i$. It follows from direct computations (see, e.g., [40]) that
$$\kappa_4^{(N)}(k_i, k_j, -k_j, -k_i) = \begin{cases} 0, & \text{if } 1 \le |k_i| = |k_j| \le N/2, \\ N - 2|k_i|, & \text{if } N/2 < |k_i| = |k_j| \le N, \\ -N, & \text{if } |k_i| = |k_j| \ge N, \\ \big||k_i| - |k_j|\big| - N, & \text{if } 1 \le \big||k_i| - |k_j|\big| \le N-1,\ N \le \max(|k_i|, |k_j|), \\ N - |k_i| - |k_j|, & \text{if } 1 \le |k_i|, |k_j| \le N-1,\ N+1 \le |k_i| + |k_j|, \\ 0, & \text{else.} \end{cases}$$
In particular, $\big|\kappa_4^{(N)}(k_i, k_j, -k_j, -k_i)\big| \le \min(|k_i|, |k_j|)$. Without a loss of generality, we can assume that $\pi = \{B_1, \ldots, B_{m/2}\}$, where
$$B_1 = \{1, 2, 2+m, 3+m\}, \quad B_2 = \{3, 4, 4+m, 5+m\}, \quad \ldots, \quad B_{m/2-1} = \{m-3, m-2, 2m-2, 2m-1\}, \quad B_{m/2} = \{m-1, m, 2m, m+1\}.$$
Then,
$$\Sigma_{\pi} = \sum_{k_1,\ldots,k_m \ne 0} \hat{f}_{k_1}\cdots\hat{f}_{k_m} \prod_{i=1}^{m/2} \kappa_4^{(N)}\big(k_{2i-1}, k_{2i}, -k_{2i}, -k_{2i+1}\big).$$
Since the fourth cumulant function vanishes unless the sum of its arguments is zero, one has $k_1 = k_3 = k_5 = \cdots = k_{m-1}$. Moreover, the fourth cumulant function vanishes unless the sum of the absolute values of its arguments is at least $2N$. Therefore, $|k_1| + |k_{2i}| \ge N$, $1 \le i \le m/2$, and
$$|\Sigma_{\pi}| \le \mathrm{Const} \sum_{\substack{k_1 \ne 0 \\ |k_1| \le N\log N}}\ \sum_{\substack{k_2, k_4, k_6, \ldots, k_m \ne 0 \\ |k_{2i}| \le N\log N}} \frac{1}{|k_1|^{m/2}} \prod_{i=1}^{m/2} \frac{1}{|k_{2i}|} \prod_{i=1}^{m/2} \min\big(|k_1|, |k_{2i}|\big)\, \mathbf{1}_{|k_1| + |k_{2i}| \ge N}.$$
Splitting the summation with respect to $k_1$ into two parts, corresponding to $|k_1| \le N$ and $|k_1| > N$, respectively, one arrives at the bound:
$$|\Sigma_{\pi}| \le 2 \times \mathrm{Const}\left( \sum_{0 < k_1 \le N}\ \sum_{\substack{k_2, k_4, k_6, \ldots, k_m \ne 0 \\ |k_{2i}| \le N\log N}} \prod_{i=1}^{m/2} \frac{1}{|k_{2i}|} \ +\ \sum_{|k_1| > N} \frac{1}{|k_1|^{m/2}} \sum_{\substack{k_2, k_4, k_6, \ldots, k_m \ne 0 \\ |k_{2i}| \le N\log N}} 1 \right).$$
The right-hand side is $o(N^{m/2})$ for $m > 2$. Therefore, we have shown that $\Sigma_{\pi} = o(N^{m/2})$ for all suboptimal partitions.
If $m = 2q+1$ is odd, then clearly there are no optimal partitions, which implies
$$\mathbb{E}\big(S_N(f) - \mathbb{E}S_N(f)\big)^{2q+1} = o\big(N^{q+1/2}\big).$$
If $m = 2q$ is even, then there are exactly $(2q-1)!!$ optimal partitions, and for each such partition π, one has $\Sigma_{\pi} = \sigma^{2q}(C)\, N^{q}(1 + o(1))$. Combining these results, we obtain
$$\mathbb{E}\big(S_N(f) - \mathbb{E}S_N(f)\big)^{2q} = (2q-1)!!\, \sigma^{2q}(C)\, N^{q}(1 + o(1)).$$
The bounds (70) and (71) finish the proof. □

4. Discussion

We have discussed several applications of the cumulant technique in Random Matrix Theory, specifically for the ensembles with determinantal k-point correlation functions. The suggested approach to the fluctuations of multivariate linear statistics of the eigenvalues of random unitary matrices can be extended to other classes of test functions and other classical groups.

Author Contributions

Methodology, A.S.; formal analysis, A.S. and C.W.; investigation, A.S. and C.W.; writing—original draft preparation, A.S. and C.W.; writing—review and editing, A.S. and C.W. All authors have read and agreed to the published version of the manuscript.

Funding

This material is based upon work partially supported by the National Science Foundation under Grant No. 1440140, while the first author was in residence at the Mathematical Sciences Research Institute in Berkeley, California, during a part of the Fall semester of 2021.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data sharing not applicable to this article, as no datasets were generated or analyzed during the current study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wigner, E. Characteristic Vectors of Bordered Matrices with Infinite Dimension. I. Ann. Math. 1955, 62, 548–564. [Google Scholar] [CrossRef]
  2. Wigner, E. Characteristic Vectors of Bordered Matrices with Infinite Dimension. II. Ann. Math. 1957, 65, 203–207. [Google Scholar] [CrossRef]
  3. Wigner, E. On the Distribution of the Roots of Certain Symmetric Matrices. Ann. Math. 1958, 67, 325–327. [Google Scholar] [CrossRef]
  4. Bai, Z.D. Methodologies in Spectral Analysis of Large Dimensional Random Matrices, a Review. Stat. Sin. 1999, 9, 611–677. [Google Scholar]
  5. Bose, A.; Hazra, R.S.; Saha, K. Patterned Random Matrices and Method of Moments. In Proceedings of the International Congress of Mathematicians, Hyderabad, India, 19–27 August 2010; pp. 2203–2231. [Google Scholar]
  6. Diaconu, S. More Limiting Distributions for Eigenvalues of Wigner Matrices. arXiv 2022, arXiv:2203.08712. [Google Scholar] [CrossRef]
  7. Diaconu, S. Finite Rank Perturbations of Heavy-Tailed Wigner Matrices. arXiv 2022, arXiv:2208.02756. [Google Scholar]
  8. Feldheim, O.; Sodin, S. A Universality Result for the Smallest Eigenvalues of Certain Sample Covariance Matrices. Geom. Funct. Anal. 2010, 20–21, 88–123. [Google Scholar] [CrossRef]
  9. Peche, S.; Soshnikov, A. Wigner Random Matrices with Non-Symmetrically Distributed Entries. J. Stat. Phys. 2007, 129, 857–884. [Google Scholar] [CrossRef]
  10. Sinai, Y.; Soshnikov, A. A Refinement of Wigner’s Semicircle Law in a Neighborhood of the Spectrum Edge for Random Symmetric Matrices. Funct. Anal. Its Appl. 1998, 32, 114–131. [Google Scholar] [CrossRef]
  11. Sodin, S. Several Applications of the Moment Method in Random Matrix Theory. In Proceedings of the International Congress of Mathematicians, Seoul, Republic of Korea, 13–21 August 2014; pp. 451–475. [Google Scholar]
  12. Soshnikov, A. Universality at the Edge of the Spectrum in Wigner Random Matrices. Commun. Math. Phys. 1999, 207, 697–733. [Google Scholar] [CrossRef]
  13. Yin, Y.Q.; Bai, Z.D.; Krishnaiah, P.R. On the Limit of the Largest Eigenvalue of the Large Dimensional Sample Covariance Matrix. Probab. Theory Relat. Fields 1988, 78, 509–521. [Google Scholar] [CrossRef]
  14. Costin, O.; Lebowitz, J. Gaussian Fluctuations in Random Matrices. Phys. Rev. Lett. 1995, 75, 69–72. [Google Scholar] [CrossRef] [PubMed]
  15. Borodin, A. Determinantal Point Processes. In The Oxford Handbook of Random Matrix Theory; Akemann, B., Baik, J., Di Francesco, P., Eds.; Oxford University Press: Oxford, UK, 2011; pp. 231–249. [Google Scholar]
  16. Soshnikov, A. Determinantal Random Point Fields. Russ. Math. Surv. 2000, 55, 923–975. [Google Scholar] [CrossRef]
  17. Soshnikov, A. Gaussian Fluctuation in Airy, Bessel, Sine and Other Determinantal Random Point Fields. J. Stat. Phys. 2000, 100, 491–522. [Google Scholar] [CrossRef]
  18. Soshnikov, A. Gaussian Limit for Determinantal Random Point Fields. Ann. Probab. 2002, 30, 171–187. [Google Scholar] [CrossRef]
  19. Hough, J.B.; Krishnapur, M.; Peres, Y.; Virag, B. Determinantal Processes and Independence. Probab. Surv. 2006, 3, 206–229. [Google Scholar] [CrossRef]
  20. Ameur, Y.; Hedenmalm, H.; Makarov, N. Fluctuations of Eigenvalues of Random Normal Matrices. Duke Math. J. 2011, 159, 31–81. [Google Scholar] [CrossRef]
  21. Haimi, A.; Romero, J.L. Normality of Smooth Statistics for Planar Determinantal Point Processes. arXiv 2022, arXiv:2210.10588. [Google Scholar]
  22. Rider, B.; Virag, B. Complex Determinantal Processes and H1 Noise. Electron. J. Probab. 2007, 12, 1238–1257. [Google Scholar] [CrossRef]
  23. Soshnikov, A. Level Spacings Distribution for Large Random Matrices: Gaussian Fluctuations. Ann. Math. 1998, 148, 573–617. [Google Scholar] [CrossRef]
  24. Soshnikov, A. Statistics of Extreme Spacings in Determinantal Random Point Processes. Mosc. Math. J. 2005, 5, 705–719. [Google Scholar] [CrossRef]
  25. Weyl, H. The Classical Groups: Their Invariants and Representations, 2nd ed.; Princeton University Press: Princeton, NJ, USA, 1997. [Google Scholar]
  26. Dyson, F.J. Statistical Theory of the Energy Levels of Complex Systems. III. J. Math. Phys. 1962, 3, 1191–1198. [Google Scholar] [CrossRef]
  27. Meckes, E.S. The Random Matrix Theory of the Classical Compact Groups; Cambridge University Press: Cambridge, UK, 2019. [Google Scholar]
  28. Mehta, M.L. Random Matrices, 3rd ed.; Elsevier Ltd.: New York, NY, USA, 2004. [Google Scholar]
  29. Diaconis, P.; Shahshahani, M. On Eigenvalues of Random Matrices. J. Appl. Probab. 1994, 31A, 49–62. [Google Scholar] [CrossRef]
  30. Johansson, K. On Szego’s Asymptotic Formula for Toeplitz Determinants and Generalizations. Bull. Des Sci. Math. 1988, 112, 257–304. [Google Scholar]
  31. Johansson, K. On Random Matrices from the Compact Classical Groups. Ann. Math. 1997, 145, 519–545. [Google Scholar] [CrossRef]
  32. Courteaut, K.; Johansson, K.; Lambert, G. From Berry-Esseen to Super-exponential. arXiv 2022, arXiv:2204.03282. [Google Scholar]
  33. Johansson, K.; Lambert, G. Multivariate Normal Approximation for Traces of Random Unitary Matrices. Ann. Probab. 2021, 49, 2961–3010. [Google Scholar] [CrossRef]
  34. Diaconis, P. Patterns in Eigenvalues. Bull. New Ser. Am. Math. Soc. 2003, 40, 155–178. [Google Scholar] [CrossRef]
  35. Malyshev, V.A.; Minlos, R.A. Gibbs Random Fields. Cluster Expansions; Springer: Dordrecht, The Netherlands, 1991. [Google Scholar]
  36. Soshnikov, A. Central Limit Theorem for Local Linear Statistics in Classical Compact Groups and Related Combinatorial Identities. Ann. Probab. 2000, 28, 1353–1370. [Google Scholar] [CrossRef]
  37. Haake, F.; Kus, M.; Sommers, H.J.; Schomerus, H.; Zyczkowski, K. Secular Determinants of Random Unitary Matrices. J. Phys. A 1996, 29, 3641–3658. [Google Scholar] [CrossRef]
  38. Haake, F.; Sommers, H.-J.; Weber, J. Fluctuations and Ergodicity of the Form Factor of Quantum Propagators and Random Unitary Matrices. J. Phys. A 1999, 32, 6903–6913. [Google Scholar] [CrossRef]
  39. Spohn, H. Interacting Brownian Particles: A Study of Dyson Model. In Hydrodynamic Behavior and Interacting Particle Systems; Papanicolau, G., Ed.; Springer: Dordrecht, The Netherlands, 1987; pp. 151–179. [Google Scholar]
  40. Soshnikov, A. Gaussian Fluctuation for Smoothed Local Correlations in CUE. J. Stat. Phys. 2023, 190, 13. [Google Scholar] [CrossRef]
  41. Breuer, J.; Duits, M. Central Limit Theorems for Biorthogonal Ensembles and Asymptotics of Recurrence Coefficients. J. Am. Math. Soc. 2017, 30, 27–66. [Google Scholar] [CrossRef]
  42. Deleporte, A.; Lambert, G. Universality for Free Fermions and the Local Weyl Law for Semiclassical Schrödinger Operators. arXiv 2021, arXiv:2109.02121. [Google Scholar]
  43. Lambert, G. CLT for Biorthogonal Ensembles and Related Combinatorial Identities. Adv. Math. 2018, 329, 590–648. [Google Scholar] [CrossRef]
  44. Lambert, G. Mesoscopic Fluctuations for Unitary Invariant Ensembles. Electron. J. Probab. 2018, 23, 1–33. [Google Scholar] [CrossRef]
  45. Aguirre, A.; Soshnikov, A.; Sumpter, J. Pair Dependent Linear Statistics for CβE. Random Matrices Theory Appl. 2021, 10, 2150035. [Google Scholar] [CrossRef]
  46. Aguirre, A.; Soshnikov, A. A Note on Pair Dependent Linear Statistics with Slowly Growing Variance. Theor. Math. Phys. 2020, 205, 502–512. [Google Scholar] [CrossRef]
  47. Sumpter, J.C. Pair Dependent Linear Statistics for Circular Random Matrices. Ph.D. Thesis, University of California at Davis, Davis, CA, USA, 2021. [Google Scholar]
  48. Chhaibi, R.; Najnudel, J.; Nikeghbali, A. The Circular Unitary Ensemble and the Riemann Zeta Function: The Microscopic Landscape and a New Approach to Ratios. Invent. Math. 2017, 207, 23–113. [Google Scholar] [CrossRef]
  49. Conrey, J.B. L-Functions and Random Matrices. In Mathematics Unlimited—2001 and Beyond; Engquist, B., Schmid, W., Eds.; Springer: Berlin/Heidelberg, Germany, 2000; pp. 331–352. [Google Scholar]
  50. Keating, J. Random Matrices and Number Theory: Some Recent Themes. In Stochastic Processes and Random Matrices. Lecture Notes of the Les Houches Summer School: Volume 104; Schehr, G., Altland, A., Fyodorov, Y., O’Connell, N., Cugliandolo, L., Eds.; Oxford University Press: Oxford, UK, 2017; pp. 348–381. [Google Scholar]
  51. Mezzadri, F.; Snaith, N.C. (Eds.) Recent Perspectives in Random Matrix Theory and Number Theory; London Mathematical Society Lecture Note Series 322; Cambridge University Press: Cambridge, UK, 2005. [Google Scholar]
  52. Montgomery, H.L. On Pair Correlation of Zeros of the Zeta Function. Proc. Symp. Pure Math. 1973, 24, 181–193. [Google Scholar]
  53. Montgomery, H.L. Distribution of the Zeros of the Riemann Zeta Function. In Proceedings of the International Congress of Mathematicians, Vancouver, DC, Canada, 21–29 August 1974; pp. 379–381. [Google Scholar]
  54. Hejhal, D. On the Triple Correlation of the Zeros of the Zeta Function. Int. Math. Res. Notes 1994, 7, 293–302. [Google Scholar] [CrossRef]
  55. Rudnick, Z.; Sarnak, P. Zeros of Principal L-functions and Random Matrix Theory. Duke Math. J. 1996, 81, 269–322. [Google Scholar] [CrossRef]
  56. Feng, R.; Goetze, F.; Yao, D. Determinantal Point Processes on Spheres: Multivariate Linear Statistics. arXiv 2023, arXiv:2301.09216. [Google Scholar]
  57. Bingham, N.H.; Goldie, C.M.; Teugels, J.L. Regular Variation; Cambridge University Press: Cambridge, UK, 1987. [Google Scholar]
  58. Rains, E. High Powers of Random Elements of Compact Lie Groups. Probab. Theory Relat. Fields 1997, 107, 219–241. [Google Scholar] [CrossRef]