Article

Limit Theorem for Spectra of Laplace Matrix of Random Graphs

by
Alexander N. Tikhomirov
Institute of Physics and of Mathematics, Komi Science Center of Ural Branch of RAS, 167982 Syktyvkar, Russia
Mathematics 2023, 11(3), 764; https://doi.org/10.3390/math11030764
Submission received: 6 December 2022 / Revised: 31 January 2023 / Accepted: 1 February 2023 / Published: 2 February 2023
(This article belongs to the Special Issue Limit Theorems of Probability Theory)

Abstract:
We consider the limit of the empirical spectral distribution of Laplace matrices of generalized random graphs. Applying the Stieltjes transform method, we prove, under general conditions, that the empirical spectral distributions of Laplace matrices converge to the free convolution of the semicircular law and the normal law.

1. Introduction and Summary

The spectral theory of random graphs is a branch of mathematics that has been studied intensively in recent decades. It investigates the asymptotic behavior of eigenvalues and eigenvectors of matrices associated with graphs, in particular adjacency matrices and Laplace matrices (see the definitions below), as the number of vertices of the graph tends to infinity; see, for instance, [1,2,3,4,5,6,7,8]. The adjacency matrix of the generalized Erdős–Rényi random graph is a special case of the generalized Wigner matrix (a matrix with elements that are independent up to symmetry, with zero means and different variances). Many deep results have been obtained recently for such matrices. The methods of studying the spectrum asymptotics of adjacency matrices are the same as for Wigner matrices: the method of moments and the Stieltjes transform method. It should be noted that the most profound results for the spectrum of Wigner random matrices were obtained by methods related to the Stieltjes transform; see [3,9,10].
Laplace matrices have one significant difference: the diagonal elements depend on the remaining elements of the matrix. This significantly complicates the study. For instance, the limit distribution of the empirical spectral function of the Laplace matrix of a complete (non-random) graph was first found in 2006; see [11]. In most of the works devoted to the spectrum asymptotics of Laplace matrices of random graphs, the method of moments is used; see [2,4,12]. In this paper, we consider the empirical spectral distribution function of the Laplace matrices of both weighted and unweighted generalized Erdős–Rényi random graphs. We obtain simple sufficient conditions for the convergence of the empirical spectral distribution function of the Laplace matrices of random graphs to a distribution function that is the free convolution of the semicircular law and the standard normal law. The conditions are expressed in terms of the properties of the matrix of graph edge probabilities and of the matrix of weight variances (for weighted graphs). To prove the convergence, we use exclusively the Stieltjes transform method.
We consider a non-oriented simple graph $\{V, E\}$ (without loops or multiple edges) with vertex set $V$, $|V| = n$, and edge set $E$ such that the edges $e \in E$ are independent and appear with probability $p_e$. Consider the $n \times n$ adjacency matrix
$$\mathbf{A} = \big(A_{jk}\big),$$
where
$$A_{jk} = \begin{cases} 0, & \text{if } (j,k) \notin E, \\ 1, & \text{if } (j,k) \in E. \end{cases}$$
Define the degree of a vertex $j \in V$ as
$$d_j := \sum_{k:\,(j,k) \in E} A_{jk}.$$
We shall assume that the $A_{jk}$ for $1 \le j \le k \le n$ are independent and $\mathbf{E}\,A_{jk} = p^{(n)}_{jk}$. Note that $\mathbf{E}\,d_j = \sum_{k:\,k \ne j} p^{(n)}_{jk}$. The matrix $\mathbf{A}$ is symmetric, i.e., $A_{jk} = A_{kj}$, and the r.v.'s $A_{jk}$ for $1 \le j \le k \le n$ are independent. We introduce the quantity
$$\widehat{a}_n = \frac{1}{n} \sum_{j,k=1}^n p^{(n)}_{jk}\big(1 - p^{(n)}_{jk}\big).$$
We introduce the diagonal matrix
$$\mathbf{D} = \operatorname{diag}(d_1, \dots, d_n).$$
The normalized and centered Laplace matrix of the unweighted graph $G$ is defined as
$$\widehat{\mathbf{L}} = \frac{1}{\sqrt{\widehat{a}_n}}\big((\mathbf{D} - \mathbf{A}) - \mathbf{E}(\mathbf{D} - \mathbf{A})\big).$$
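As a minimal illustration (not from the original paper), $\widehat{\mathbf{L}}$ can be sampled in a few lines; the helper name `centered_laplacian` and the zero-diagonal assumption on the matrix $\mathbf{P} = (p_{jk})$ are ours.

```python
# A minimal sketch (ours): one realization of the centered, normalized
# Laplace matrix L-hat defined above.
import numpy as np

def centered_laplacian(P, rng=None):
    """Return L-hat = ((D - A) - E(D - A)) / sqrt(a-hat_n) for a symmetric
    matrix P = (p_jk) of edge probabilities with zero diagonal."""
    rng = np.random.default_rng(rng)
    n = P.shape[0]
    A = np.triu(rng.random((n, n)) < P, 1).astype(float)
    A = A + A.T                               # symmetric 0/1 adjacency matrix
    a_hat = (P * (1.0 - P)).sum() / n         # a-hat_n from the display above
    M = np.diag(A.sum(axis=1)) - A            # D - A
    EM = np.diag(P.sum(axis=1)) - P           # E(D - A), since E A = P
    return (M - EM) / np.sqrt(a_hat)
```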
We shall also consider weighted graphs $\widetilde{G} = (V, E, w)$ with weight function $w_{jk} = w_{kj} = X_{jk}$, where the $X_{jk}$, $1 \le j \le k \le n$, are independent random variables s.t.
$$\mathbf{E}\,X_{jk} = 0, \qquad \mathbf{E}\,X^2_{jk} = \sigma^2_{jk}.$$
The distribution of $X_{jk}$ may depend on $n$, but for brevity, we shall omit the index $n$ in the notation. We introduce the quantity
$$a_n = \frac{1}{n} \sum_{i,j=1}^n p^{(n)}_{ij} \sigma^2_{ij}.$$
The quantity $a_n$ may be interpreted as the expected mean degree of the graph $\widetilde{G}$. With the graph $\widetilde{G}$, we associate the weighted adjacency matrix
$$\widetilde{\mathbf{A}} = \big(A_{ij} X_{ij}\big)$$
and the normalized Laplace (or Markov) matrix
$$\widetilde{\mathbf{L}} = \frac{1}{\sqrt{a_n}}\big(\widetilde{\mathbf{D}} - \widetilde{\mathbf{A}}\big),$$
where
$$\widetilde{\mathbf{D}} = \operatorname{diag}(\widetilde{d}_1, \dots, \widetilde{d}_n) \quad \text{with} \quad \widetilde{d}_i = \sum_{j:\,j \ne i} A_{ij} X_{ij}.$$
We shall denote by $\lambda_1(\mathbf{B}) \ge \lambda_2(\mathbf{B}) \ge \cdots \ge \lambda_n(\mathbf{B})$ the ordered eigenvalues of a symmetric $n \times n$ matrix $\mathbf{B}$. We shall consider the spectra of the matrices $\widetilde{\mathbf{L}}$ and $\widehat{\mathbf{L}}$. For brevity of notation, we shall write $\widetilde{\mu}_j = \lambda_j(\widetilde{\mathbf{L}})$ and $\widehat{\mu}_j = \lambda_j(\widehat{\mathbf{L}})$. We introduce the corresponding empirical spectral distributions (ESDs)
$$\widehat{G}_n(x) := \frac{1}{n} \sum_{j=1}^n \mathbb{I}\{\widehat{\mu}_j \le x\}, \qquad \widetilde{G}_n(x) := \frac{1}{n} \sum_{j=1}^n \mathbb{I}\{\widetilde{\mu}_j \le x\}.$$
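For readers who wish to experiment, here is a short simulation sketch (ours; the Gaussian weights are chosen purely for illustration, since the theorems below do not require Gaussianity) that samples $\widetilde{\mathbf{L}}$ and evaluates the ESD $\widetilde{G}_n$.

```python
# Simulation sketch (ours): sample L-tilde and evaluate its ESD.
import numpy as np

def weighted_laplace_eigs(P, Sigma2, rng=None):
    """Eigenvalues of L-tilde = (D-tilde - A-tilde)/sqrt(a_n) for one sample
    with edge probabilities P and weight variances Sigma2."""
    rng = np.random.default_rng(rng)
    n = P.shape[0]
    A = np.triu(rng.random((n, n)) < P, 1).astype(float)
    X = np.triu(rng.normal(0.0, np.sqrt(Sigma2)), 1)
    W = A * X
    W = W + W.T                               # weighted adjacency A-tilde
    a_n = (P * Sigma2).sum() / n              # a_n = (1/n) sum_ij p_ij sigma_ij^2
    L = (np.diag(W.sum(axis=1)) - W) / np.sqrt(a_n)
    return np.linalg.eigvalsh(L)

# ESD: G-tilde_n(x) is the fraction of eigenvalues not exceeding x
n, p = 2000, 0.05
mu = weighted_laplace_eigs(np.full((n, n), p), np.ones((n, n)), rng=0)
G_tilde_n = lambda x: np.mean(mu <= x)
```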
In the paper [11], in 2006, it was shown, under the conditions $p^{(n)}_{ij} \equiv 1$ and $\sigma^2_{ij} \equiv 1$ for any $1 \le i,j \le n$, that the ESD $\widetilde{G}_n(x)$ weakly converges in probability to the non-random distribution function $G(x)$, which is defined as the free convolution of the Gaussian distribution function and the semicircular distribution function (for the definition of free convolution see, for instance, [13]).
In [4], in 2010, the authors considered the limit of $\widetilde{G}_n(x)$ for weighted Erdős–Rényi graphs ($p^{(n)}_{ij} \equiv p_n$) with equal-variance weights ($\sigma^2_{ij} \equiv \sigma^2$). Assuming that $p_n$ is bounded away from zero and one, and that the random variables $X_{ij}$ have finite fourth moments, they proved that $\widetilde{G}_n(x)$ weakly converges to the same function $G(x)$.
In [14], in 2020, Yizhe Zhu considered the so-called graphon approach to the limiting spectral distribution of Wigner-type matrices. The author described the moments of the limit spectral measure in terms of the graphon of the variance profile matrix $\Sigma = (\sigma^2_{ij})$ and the number of trees with a fixed number of vertices. Recently, Chatterjee and Hazra published the paper [12], in which the approach of Zhu was developed.
In [15], in 2021, the author stated simple conditions on the probabilities $p_{ij}$ for the convergence of the ESD of adjacency matrices to the semicircular law. In the present paper, we consider the convergence of the ESDs $\widehat{G}_n(x)$ and $\widetilde{G}_n(x)$ to the function $G(x)$ under similar conditions.
First, we formulate some conditions which we shall use in the present paper.
  • Condition $C_P(0)$:
    $$a_n \to \infty \quad \text{as} \quad n \to \infty.$$
  • Condition $C_P(0a)$: There exists a constant $C_0$ s.t.
    $$\sup_{n \ge 1} \max_{1 \le j,k \le n} \frac{1}{a_n}\, p^{(n)}_{jk} \sigma^2_{jk} \le C_0 < \infty.$$
  • Condition $C_P(1)$:
    $$\lim_{n \to \infty} \frac{1}{n a_n} \sum_{j=1}^n \sum_{k=1}^n \Big| p^{(n)}_{jk} \sigma^2_{jk} - \frac{a_n}{n} \Big| = 0.$$
  • Condition $C_X(1)$: For any $\tau > 0$,
    $$L_n(\tau) := \frac{1}{n a_n} \sum_{i,j=1}^n p^{(n)}_{ij}\, \mathbf{E}\,X^2_{ij} \mathbb{I}\{|X_{ij}| > \tau \sqrt{a_n}\} \to 0 \quad \text{as} \quad n \to \infty.$$
Remark 1.
Condition $C_P(1)$ is equivalent to the following two conditions together:
  • Condition $C_P(1a)$:
    $$\lim_{n\to\infty} \frac{1}{n} \sum_{j=1}^n \Big| \frac{1}{a_n} \sum_{k=1}^n p^{(n)}_{jk}\sigma^2_{jk} - 1 \Big| = 0.$$
  • Condition $C_P(1b)$:
    $$\lim_{n\to\infty} \frac{1}{n a_n} \sum_{j=1}^n \sum_{k=1}^n \Big| p^{(n)}_{jk}\sigma^2_{jk} - \frac{1}{n}\sum_{l=1}^n p^{(n)}_{jl}\sigma^2_{jl} \Big| = 0.$$
The main result of the present paper is the following theorem.
Theorem 1.
Let conditions $C_P(0)$, $C_P(0a)$, $C_P(1)$, $C_X(1)$ hold. Then, the ESDs $\widetilde{G}_n(x)$ converge in probability to the distribution function $G(x)$, which is the additive free convolution of the standard normal distribution function and the semicircular distribution function:
$$\lim_{n\to\infty} \widetilde{G}_n(x) = G(x).$$
Corollary 1.
Assume that $\sigma^2_{jk} \equiv \sigma^2$ and $p^{(n)}_{jk} \equiv p_n$ for any $1 \le j,k \le n$ and any $n \ge 1$. Assume that $n p_n \to \infty$ as $n \to \infty$ and that condition $C_X(1)$ holds. Then, the ESDs $\widetilde{G}_n(x)$ converge in probability to the distribution function $G(x)$, which is the additive free convolution of the standard normal distribution function and the semicircular distribution function:
$$\lim_{n\to\infty} \widetilde{G}_n(x) = G(x).$$
Proof of Corollary.
Note that in the case $p^{(n)}_{jk} \equiv p_n$ and $\sigma^2_{jk} = \sigma^2$, we have
$$a_n = n p_n \sigma^2.$$
Condition $C_P(0)$ is therefore fulfilled. Moreover, it is simple to see that all the conditions of Theorem 1 are fulfilled. □
Theorem 2.
Let the conditions
$$\widehat{a}_n \to \infty \quad \text{as} \quad n \to \infty$$
and
$$\lim_{n\to\infty} \frac{1}{n \widehat{a}_n} \sum_{j=1}^n \sum_{k=1}^n \Big| p^{(n)}_{jk}\big(1 - p^{(n)}_{jk}\big) - \frac{\widehat{a}_n}{n} \Big| = 0$$
hold. Then, the ESDs $\widehat{G}_n(x)$ converge in probability to the distribution function $G(x)$, which is the additive free convolution of the standard normal distribution function and the semicircular distribution function,
$$\lim_{n\to\infty} \widehat{G}_n(x) = G(x).$$
In what follows, we shall omit the superscript $(n)$ in the notation $p^{(n)}_{ij}$, writing $p_{ij}$ instead.

2. Toy Example

Consider a graph $\{V, E\}$ with $|V| = n$ and clique number $d = d(n)$. The clique number of a graph $G$ is the size of its largest clique. Let $M$ denote a maximum clique of the graph. Define the weights of the vertices as follows:
$$W_i = \begin{cases} d, & \text{if } i \in M, \\ 1, & \text{otherwise.} \end{cases}$$
We introduce the edge probabilities as follows:
$$p_{ij} = W_i W_j / d^2 = \begin{cases} \dfrac{1}{d^2}, & \text{if } i \notin M,\ j \notin M, \\[4pt] \dfrac{1}{d}, & \text{if } i \in M,\ j \notin M, \text{ or } i \notin M,\ j \in M, \\[4pt] 1, & \text{if } i, j \in M. \end{cases}$$
We assume that $\sigma^2_{jk} \equiv \sigma^2 = 1$ for $1 \le j,k \le n$. In this case, we have
$$\sum_{j,k=1}^n p_{jk} = \Big( \frac{n-d}{d} + d \Big)^2$$
and
$$a_n = \frac{n}{d^2}(1 + \alpha_n)^2, \quad \text{where} \quad \alpha_n = \frac{d(d-1)}{n}.$$
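These two identities are easy to confirm numerically; the following small check (ours) takes the first $d$ vertices as the clique $M$.

```python
# Quick numerical check (ours) of the two identities above.
import numpy as np

n, d = 1000, 20
W = np.ones(n); W[:d] = d                  # vertex weights W_i
P = np.outer(W, W) / d**2                  # p_ij = W_i W_j / d^2

alpha = d * (d - 1) / n
assert np.isclose(P.sum(), ((n - d) / d + d)**2)
assert np.isclose(P.sum() / n, (n / d**2) * (1 + alpha)**2)   # a_n, sigma^2 = 1
```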
Proposition 1.
Under the condition
$$\lim_{n\to\infty} \frac{d^2(n)}{n} = 0,$$
the conditions $C_P(0)$, $C_P(0a)$ and $C_P(1)$ hold.
Proof. 
We have
$$\frac{1}{n a_n} \sum_{j,k=1}^n \Big| p_{jk} - \frac{a_n}{n} \Big| = \frac{1}{n a_n}\Big( \frac{2\alpha_n + \alpha_n^2}{d^2}(n-d)^2 + 2\Big|\frac{1}{d} - \frac{(1+\alpha_n)^2}{d^2}\Big|\, d(n-d) + d^2\Big(1 - \frac{(1+\alpha_n)^2}{d^2}\Big) \Big)$$
$$= \frac{\alpha_n(2 + \alpha_n)(n-d)^2}{n^2(1+\alpha_n)^2} + 2\Big|1 - \frac{(1+\alpha_n)^2}{d}\Big|\, \frac{d^2(n-d)}{n^2(1+\alpha_n)^2} + \frac{d^4}{n^2(1+\alpha_n)^2}\Big(1 - \frac{(1+\alpha_n)^2}{d^2}\Big).$$
It is straightforward to check that for $d = d(n)$ satisfying condition (13), we have $\alpha_n = o(1)$, $a_n \to \infty$ as $n \to \infty$, and
$$\lim_{n\to\infty} \frac{1}{n a_n} \sum_{j,k=1}^n \Big| p_{jk} - \frac{a_n}{n} \Big| = 0.$$
This means that the conditions $C_P(0)$ and $C_P(1)$ hold. Furthermore,
$$\max_{1 \le k \le n} \sum_{l=1}^n p_{kl} \le \frac{n-d}{d} + d.$$
It is straightforward to check as well that
$$\sup_{n \ge 1} \max_{1 \le k,l \le n} \frac{p_{kl}}{a_n} \le C_0.$$
Thus, Proposition 1 is proved. □
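As a numerical illustration of Proposition 1 (ours, not from the paper): with $d = n^{1/3}$, the quantity $\frac{1}{n a_n}\sum_{j,k}|p_{jk} - a_n/n|$ from the proof indeed decays.

```python
# Numerical illustration (ours) of Proposition 1 with d = n^{1/3}.
import numpy as np

for n in [10**3, 10**4, 10**5]:
    d = max(2, round(n ** (1 / 3)))
    a_n = ((n - d) / d + d) ** 2 / n
    # the three blocks of index pairs: (count, common value of p_jk)
    blocks = [((n - d)**2, 1 / d**2), (2 * d * (n - d), 1 / d), (d**2, 1.0)]
    cp1 = sum(cnt * abs(p - a_n / n) for cnt, p in blocks) / (n * a_n)
    print(n, cp1)    # decreases toward 0
```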

3. Proof of Theorem 1

We shall use the Stieltjes transform method for the proof of Theorem 1. Introduce the resolvent matrix of the matrix $\widetilde{\mathbf{L}}$,
$$\mathbf{R} := \mathbf{R}_{\widetilde{\mathbf{L}}}(z) = \big(\widetilde{\mathbf{L}} - z\mathbf{I}\big)^{-1},$$
where $\mathbf{I} := \mathbf{I}_n$ denotes the $n \times n$ identity matrix. Let $m_n(z)$ denote the Stieltjes transform of the empirical spectral distribution function of the matrix $\widetilde{\mathbf{L}}$,
$$m_n(z) = \int \frac{1}{x - z}\, d\widetilde{G}_n(x) = \frac{1}{n} \operatorname{Tr} \mathbf{R}.$$
For the proof of Theorem 1, it is enough to prove the convergence of the Stieltjes transforms for any fixed $z = u + iv$ with $v > 0$; moreover, it is enough to prove that $m_n(z)$ converges to some function, say $s(z)$, on some set with a non-empty interior. According to Lemma A2, it is enough to prove the convergence of the expected Stieltjes transform $s_n(z) = \mathbf{E}\,m_n(z) = \mathbf{E}\frac{1}{n}\operatorname{Tr}\mathbf{R}$ only. Using Lemma A1, the result of Theorem 1 follows from the relation
$$s_n(z) - s_g(z + s_n(z)) \to 0 \quad \text{as} \quad n \to \infty,$$
where $s_g(z)$ denotes the Stieltjes transform of the standard Gaussian distribution,
$$s_g(z) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \frac{1}{x - z} \exp\Big\{-\frac{x^2}{2}\Big\}\, dx.$$
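The self-consistency relation just displayed can be checked numerically. The sketch below (ours) evaluates $m_n(z)$ from the eigenvalues of one sample and compares it with $s_g(z + m_n(z))$; it reuses `weighted_laplace_eigs` from the sketch in Section 1, and $s_g$ is computed by a plain Riemann sum, which is adequate for $\operatorname{Im} z$ not too small.

```python
# Numerical illustration (ours) of the relation s_n(z) ~ s_g(z + s_n(z)).
import numpy as np

def s_gauss(z, grid=np.linspace(-12.0, 12.0, 100001)):
    """Stieltjes transform of N(0,1): s_g(z) = int phi(x)/(x - z) dx."""
    phi = np.exp(-grid**2 / 2.0) / np.sqrt(2.0 * np.pi)
    return np.sum(phi / (grid - z)) * (grid[1] - grid[0])

z = 0.5 + 1.0j
mu = weighted_laplace_eigs(np.full((2000, 2000), 0.05), np.ones((2000, 2000)), rng=1)
m_n = np.mean(1.0 / (mu - z))           # m_n(z) = (1/n) Tr R
print(abs(m_n - s_gauss(z + m_n)))      # small for large n
```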
First, we need some additional notation. By $\widetilde{\mathbf{L}}^{(j)}$, we denote the matrix obtained from $\widetilde{\mathbf{L}}$ by replacing the diagonal entries $\widetilde{L}_{ll}$, $l = 1, \dots, n$, with $\widetilde{L}^{(j)}_{ll} = \frac{1}{\sqrt{a_n}} \sum_{r \ne j} A_{lr} X_{lr}$. Note that the diagonal entries of the matrix $\widetilde{\mathbf{L}}^{(j)}$ (except $\widetilde{L}^{(j)}_{jj}$) do not depend on the r.v.'s $X_{jk}$, $A_{jk}$ for $k = 1, \dots, n$. We denote by $\widetilde{\mathbf{D}}^{(j)}$ the diagonal matrix with diagonal entries $\widetilde{D}^{(j)}_{ll} = \frac{1}{\sqrt{a_n}} A_{jl} X_{jl}$. Denote by $\widetilde{\mathbf{R}}^{(j)}$ the resolvent matrix corresponding to the matrix $\widetilde{\mathbf{L}}^{(j)}$,
$$\widetilde{\mathbf{R}}^{(j)} = \big(\widetilde{\mathbf{L}}^{(j)} - z\mathbf{I}\big)^{-1}.$$
We have
$$\mathbf{R} = \widetilde{\mathbf{R}}^{(j)} - \mathbf{R}\,\widetilde{\mathbf{D}}^{(j)} \widetilde{\mathbf{R}}^{(j)}.$$
Using this formula, we may write
$$R_{jj} = \widetilde{R}^{(j)}_{jj} - \frac{1}{\sqrt{a_n}} \sum_{r=1}^n A_{jr} X_{jr} R_{jr} \widetilde{R}^{(j)}_{rj}.$$
According to Lemma A5, we obtain
$$\lim_{n\to\infty} \mathbf{E}\Big| \frac{1}{n} \operatorname{Tr} \mathbf{R} - \frac{1}{n} \sum_{j=1}^n \widetilde{R}^{(j)}_{jj} \Big| = 0.$$
Furthermore, let us denote by $\widetilde{\mathbf{L}}^{(j,0)}$ the matrix obtained from $\widetilde{\mathbf{L}}^{(j)}$ by deleting both the $j$-th column and the $j$-th row, and by $\widetilde{\mathbf{R}}^{(j,0)}$ the resolvent matrix corresponding to the matrix $\widetilde{\mathbf{L}}^{(j,0)}$. Using the Schur complement formula, we may write
$$\widetilde{R}^{(j)}_{jj} = \frac{1}{\widetilde{L}^{(j)}_{jj} - z - \sum_{l,k:\, l \ne j,\, k \ne j} [\widetilde{\mathbf{R}}^{(j,0)}(z)]_{kl} \widetilde{L}_{jl} \widetilde{L}_{jk}}.$$
Introduce the following notation:
$$\varepsilon_{j1} := \sum_{l \ne k:\, l \ne j,\, k \ne j} [\widetilde{\mathbf{R}}^{(j,0)}]_{kl} \widetilde{L}_{jl} \widetilde{L}_{jk}, \qquad \varepsilon_{j2} = \frac{1}{a_n} \sum_{k:\, k \ne j} [\widetilde{\mathbf{R}}^{(j,0)}]_{kk} (A_{jk} - p_{jk}) X^2_{jk},$$
$$\varepsilon_{j3} = \frac{1}{a_n} \sum_{k:\, k \ne j} [\widetilde{\mathbf{R}}^{(j,0)}]_{kk}\, p_{jk} \big(X^2_{jk} - \sigma^2_{jk}\big), \qquad \varepsilon_{j4} = \frac{1}{a_n} \sum_{k:\, k \ne j} [\widetilde{\mathbf{R}}^{(j,0)}]_{kk} \Big( p_{jk}\sigma^2_{jk} - \frac{1}{n} \sum_{l=1}^n p_{jl}\sigma^2_{jl} \Big),$$
$$\varepsilon_{j5} = \frac{1}{n} \sum_{k:\, k \ne j} \widetilde{R}^{(j,0)}_{kk} \Big( \frac{1}{a_n} \sum_{l=1}^n p_{jl}\sigma^2_{jl} - 1 \Big), \qquad \varepsilon_{j6} = \frac{1}{n} \sum_{k:\, k \ne j} \widetilde{R}^{(j,0)}_{kk} - \frac{1}{n} \sum_{k=1}^n R_{kk},$$
$$\varepsilon_{j7} = \frac{1}{n} \sum_{k=1}^n [\mathbf{R}]_{kk} - \mathbf{E}\, \frac{1}{n} \sum_{k=1}^n [\mathbf{R}(z)]_{kk}.$$
Put $\varepsilon_j = \sum_{\nu=1}^7 \varepsilon_{j\nu}$. Let
$$\zeta_j := \widetilde{L}^{(j)}_{jj} = \frac{1}{\sqrt{a_n}} \sum_{k \ne j} A_{jk} X_{jk}.$$
In this notation, we may write
$$\mathbf{E}\,[\widetilde{\mathbf{R}}^{(j)}]_{jj} = \mathbf{E}\, \frac{1}{\zeta_j - z - s_n(z) - \varepsilon_j}.$$
We continue as follows:
$$\mathbf{E}\,\widetilde{R}^{(j)}_{jj} = \mathbf{E}\, \frac{1}{\zeta_j - z - s_n(z)} + \mathbf{E}\, \frac{\varepsilon_j}{\zeta_j - z - s_n(z)}\, \widetilde{R}^{(j)}_{jj}.$$
Summing the last equality over $j = 1, \dots, n$, we obtain
$$s_n(z) = \mathbf{E}\, \frac{1}{\zeta_J - z - s_n(z)} + \mathbf{E}\, \frac{\varepsilon_J}{\zeta_J - z - s_n(z)}\, \widetilde{R}^{(J)}_{JJ} + \mathbf{E}\big( R_{JJ} - \widetilde{R}^{(J)}_{JJ} \big),$$
where $J$ denotes a random variable which is uniformly distributed on the set $\{1, \dots, n\}$ and independent of all other random variables. Denote by $F_n(x)$ the distribution function of $\zeta_J$ and let
$$\Delta_n = \sup_x |F_n(x) - \Phi(x)|,$$
where $\Phi(x)$ denotes the distribution function of the standard normal law. Denote the Stieltjes transform of the standard normal law by $s_g(z)$,
$$s_g(z) = \int \frac{1}{x - z}\, d\Phi(x).$$
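Lemma A4 in Appendix B shows that $\Delta_n \to 0$. As a quick plausibility check, the following Monte Carlo sketch (ours, homogeneous case $p_{jk} \equiv p$, $\sigma^2 = 1$) confirms that $\zeta_j$ has approximately standard normal moments.

```python
# Monte Carlo plausibility check (ours) that zeta_j is close to N(0,1).
import numpy as np

rng = np.random.default_rng(2)
n, p = 5000, 0.02
a_n = n * p
samples = np.empty(2000)
for i in range(samples.size):
    A = rng.random(n) < p                # one row of the adjacency matrix
    X = rng.normal(size=n)               # the corresponding weights
    samples[i] = (A * X).sum() / np.sqrt(a_n)
print(samples.mean(), samples.var())     # approximately 0 and 1
```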
Note that
$$\mathbf{E}\, \frac{1}{\zeta_J - z - s_n(z)} - s_g(z + s_n(z)) = \int \frac{1}{x - z - s_n(z)}\, d\big(F_n(x) - \Phi(x)\big).$$
Integrating by parts, we obtain
$$\Big| \mathbf{E}\, \frac{1}{\zeta_J - z - s_n(z)} - s_g(z + s_n(z)) \Big| \le \frac{2}{v^2} \Delta_n.$$
According to Lemma A4,
$$\Big| \mathbf{E}\, \frac{1}{\zeta_J - z - s_n(z)} - s_g(z + s_n(z)) \Big| \to 0 \quad \text{as} \quad n \to \infty.$$
Note that
$$\Big| \mathbf{E}\, \frac{\varepsilon_J}{\zeta_J - z - s_n(z)}\, R_{JJ} \Big| \le v^{-2}\, \mathbf{E}\,|\varepsilon_J|.$$
It remains to prove that $\mathbf{E}\,|\varepsilon_J| \to 0$ and $\mathbf{E}\big(R_{JJ} - \widetilde{R}^{(J)}_{JJ}\big) \to 0$ as $n \to \infty$. The last claim follows from Lemmas A6–A11, Lemma A2 and equality (20).
Thus, Theorem 1 is proved.

4. The Proof of Theorem 2

Similarly to the previous section, we may write the diagonal entries of the matrix $\widehat{\mathbf{L}}$ as
$$\widehat{\zeta}_j = \frac{1}{\sqrt{\widehat{a}_n}} \sum_{k \ne j} (A_{jk} - p_{jk}).$$
Let $\widehat{\mathbf{R}} = (\widehat{\mathbf{L}} - z\mathbf{I})^{-1}$ denote the resolvent matrix of the matrix $\widehat{\mathbf{L}}$. Let $j \in \{1, \dots, n\}$ be fixed. We denote by $\widehat{\mathbf{L}}^{(j)}$ the matrix obtained from $\widehat{\mathbf{L}}$ by replacing the diagonal entries $\widehat{L}_{ll}$, $l = 1, \dots, n$, with $\widehat{L}^{(j)}_{ll} = \frac{1}{\sqrt{\widehat{a}_n}} \sum_{r \ne j} (A_{lr} - p_{lr})$. Let $\widehat{\mathbf{D}}^{(j)} = \widehat{\mathbf{L}} - \widehat{\mathbf{L}}^{(j)}$. By definition, $\widehat{\mathbf{D}}^{(j)} = \operatorname{diag}(\widehat{d}^{(j)}_1, \dots, \widehat{d}^{(j)}_n)$ is a diagonal matrix with $\widehat{d}^{(j)}_l = \frac{1}{\sqrt{\widehat{a}_n}} (A_{jl} - p_{jl})$ for $l = 1, \dots, n$. Note that the diagonal entries of the matrix $\widehat{\mathbf{L}}^{(j)}$ (except $\widehat{L}^{(j)}_{jj}$) do not depend on the r.v.'s $A_{jk}$ for $k = 1, \dots, n$. By $\widehat{\mathbf{L}}^{(j,0)}$, we denote the matrix obtained from $\widehat{\mathbf{L}}^{(j)}$ by deleting both the $j$-th column and the $j$-th row; $\widehat{\mathbf{R}}^{(j,0)}$ denotes the resolvent matrix corresponding to the matrix $\widehat{\mathbf{L}}^{(j,0)}$. Analogously to (21), we represent the diagonal entries of the resolvent matrix $\widehat{\mathbf{R}}^{(j)} = (\widehat{\mathbf{L}}^{(j)} - z\mathbf{I})^{-1}$ in the form
$$\widehat{R}^{(j)}_{jj} = \frac{1}{\widehat{L}^{(j)}_{jj} - z - \sum_{l,k:\, l \ne j,\, k \ne j} \widehat{R}^{(j,0)}_{kl} \widehat{L}_{jl} \widehat{L}_{jk}}.$$
Introduce the following notation:
$$\widehat\varepsilon_{j1} := \sum_{l \ne k:\, l \ne j,\, k \ne j} [\widehat{\mathbf{R}}^{(j,0)}]_{kl} \widehat{L}_{jl} \widehat{L}_{jk}, \qquad \widehat\varepsilon_{j2} = \frac{1}{\widehat{a}_n} \sum_{k:\, k \ne j} [\widehat{\mathbf{R}}^{(j,0)}]_{kk} \big( (A_{jk} - p_{jk})^2 - p_{jk}(1 - p_{jk}) \big),$$
$$\widehat\varepsilon_{j3} = \frac{1}{\widehat{a}_n} \sum_{k:\, k \ne j} \widehat{R}^{(j,0)}_{kk} \Big( p_{jk}(1 - p_{jk}) - \frac{\widehat{a}_n}{n} \Big), \qquad \widehat\varepsilon_{j4} = \frac{1}{n} \sum_{k:\, k \ne j} \widehat{R}^{(j,0)}_{kk} - \frac{1}{n} \sum_{k=1}^n \widehat{R}_{kk},$$
$$\widehat\varepsilon_{j5} = \frac{1}{n} \sum_{k=1}^n \widehat{R}_{kk} - \mathbf{E}\, \frac{1}{n} \sum_{k=1}^n \widehat{R}_{kk}.$$
Put $\widehat\varepsilon_j = \sum_{\nu=1}^5 \widehat\varepsilon_{j\nu}$. Let
$$\widehat\zeta_j := \widehat{L}^{(j)}_{jj} = \frac{1}{\sqrt{\widehat{a}_n}} \sum_{k \ne j} (A_{jk} - p_{jk}).$$
In this notation, we may write
$$\mathbf{E}\,[\widehat{\mathbf{R}}^{(j)}]_{jj} = \mathbf{E}\, \frac{1}{\widehat\zeta_j - z - \widehat{s}_n(z) - \widehat\varepsilon_j},$$
where $\widehat{s}_n(z) = \mathbf{E}\frac{1}{n}\operatorname{Tr}\widehat{\mathbf{R}}$. We continue as follows:
$$\mathbf{E}\,[\widehat{\mathbf{R}}^{(j)}]_{jj} = \mathbf{E}\, \frac{1}{\widehat\zeta_j - z - \widehat{s}_n(z)} + \mathbf{E}\, \frac{\widehat\varepsilon_j}{\widehat\zeta_j - z - \widehat{s}_n(z)}\, \widehat{R}^{(j)}_{jj}(z).$$
Summing the last equality over $j = 1, \dots, n$, we obtain
$$\widehat{s}_n(z) = \mathbf{E}\, \frac{1}{\widehat\zeta_J - z - \widehat{s}_n(z)} + \mathbf{E}\, \frac{\widehat\varepsilon_J}{\widehat\zeta_J - z - \widehat{s}_n(z)}\, \widehat{R}^{(J)}_{JJ} + \mathbf{E}\big( \widehat{R}_{JJ} - \widehat{R}^{(J)}_{JJ} \big),$$
where $J$ denotes a random variable which is uniformly distributed on the set $\{1, \dots, n\}$ and independent of all other random variables. Similarly to inequality (25), we have
$$\Big| \mathbf{E}\, \frac{1}{\widehat\zeta_J - z - \widehat{s}_n(z)} - s_g(z + \widehat{s}_n(z)) \Big| \le \frac{1}{v^2}\, \widehat\Delta_n.$$
According to Lemma A12,
$$\mathbf{E}\, \frac{1}{\widehat\zeta_J - z - \widehat{s}_n(z)} - s_g(z + \widehat{s}_n(z)) \to 0 \quad \text{as} \quad n \to \infty.$$
Furthermore, since $\operatorname{Im} z + \operatorname{Im} \widehat{s}_n(z) \ge v$ and $|\widehat{R}^{(J)}_{JJ}| \le v^{-1}$, we have
$$\Big| \mathbf{E}\, \frac{\widehat\varepsilon_J}{\widehat\zeta_J - z - \widehat{s}_n(z)}\, \widehat{R}^{(J)}_{JJ} \Big| \le v^{-2}\, \mathbf{E}\,|\widehat\varepsilon_J|.$$
By Lemmas A13–A17,
$$\lim_{n\to\infty} \mathbf{E}\,|\widehat\varepsilon_J| = 0.$$
Furthermore, we note that
$$\widehat{\mathbf{R}} = \widehat{\mathbf{R}}^{(J)} - \widehat{\mathbf{R}}^{(J)} \widehat{\mathbf{D}}^{(J)} \widehat{\mathbf{R}}.$$
This relation implies that
$$\big| \mathbf{E}\big( \widehat{R}_{JJ} - \widehat{R}^{(J)}_{JJ} \big) \big| \le \max_{1 \le j \le n} \mathbf{E}\,\big\| \widehat{\mathbf{R}} - \widehat{\mathbf{R}}^{(j)} \big\| \le v^{-2} \max_{1 \le j \le n} \mathbf{E}\,\big\| \widehat{\mathbf{D}}^{(j)} \big\|.$$
It is straightforward to check that
$$\mathbf{E}\,\big\| \widehat{\mathbf{D}}^{(j)} \big\| \le \frac{1}{\sqrt{\widehat{a}_n}}\, \mathbf{E} \max_{1 \le l \le n} |A_{jl} - p_{jl}| \le \frac{1}{\sqrt{\widehat{a}_n}} \to 0 \quad \text{as} \quad n \to \infty.$$
Combining relations (33), (35) and (38), we obtain
$$\varkappa_n(z) := \widehat{s}_n(z) - s_g(z + \widehat{s}_n(z)) \to 0 \quad \text{as} \quad n \to \infty.$$
The last relation and Lemma A1 complete the proof of Theorem 2. □

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A

Definition of Additive Free Convolution

We give the definition of the additive free convolution of distribution functions following the paper [16] (Section 5).
Definition A1.
A pair $(\mathcal{A}, \varphi)$ consisting of a unital algebra $\mathcal{A}$ and a linear functional $\varphi: \mathcal{A} \to \mathbb{C}$ with $\varphi(1) = 1$ is called a free probability space. Elements of $\mathcal{A}$ are called random variables, the numbers $\varphi(a_{i(1)} \cdots a_{i(n)})$ for random variables $a_1, \dots, a_k \in \mathcal{A}$ are called moments, and the collection of all moments is called the joint distribution of $a_1, \dots, a_k$. Equivalently, we may say that the joint distribution of $a_1, \dots, a_k$ is given by the linear functional $\mu_{a_1, \dots, a_k}: \mathbb{C}\langle X_1, \dots, X_k \rangle \to \mathbb{C}$ with $\mu_{a_1, \dots, a_k}(P(X_1, \dots, X_k)) = \varphi(P(a_1, \dots, a_k))$, where $\mathbb{C}\langle X_1, \dots, X_k \rangle$ denotes the algebra of all polynomials in $k$ non-commuting indeterminates $X_1, \dots, X_k$.
If for a given element $a \in \mathcal{A}$ there exists a unique probability measure $\mu_a$ on $\mathbb{R}$ such that $\int t^k\, d\mu_a(t) = \varphi(a^k)$ for all $k \in \mathbb{N}$, we identify the distribution of $a$ with the probability measure $\mu_a$.
Definition A2.
Let ( A , φ ) be a non-commutative probability space.
(1) 
Let $(\mathcal{A}_i)_{i \in I}$ be a family of unital sub-algebras of $\mathcal{A}$. The sub-algebras $\mathcal{A}_i$ are called freely independent if, for any positive integer $k$, $\varphi(a_1 \cdots a_k) = 0$ whenever the following set of conditions holds: $a_j \in \mathcal{A}_{i(j)}$ (with $i(j) \in I$) for $j = 1, \dots, k$; $\varphi(a_j) = 0$ for all $j = 1, \dots, k$; and neighboring elements are taken from different sub-algebras, i.e., $i(1) \ne i(2),\ i(2) \ne i(3),\ \dots,\ i(k-1) \ne i(k)$.
(2) 
Let $(A_i)_{i \in I}$ be a family of subsets of $\mathcal{A}$. The subsets $A_i$ are called free or freely independent if the sub-algebras they generate are free, i.e., if $(\mathcal{A}_i)_{i \in I}$ are free, where, for each $i \in I$, $\mathcal{A}_i$ is the smallest unital sub-algebra of $\mathcal{A}$ which contains $A_i$.
(3) 
Let $(a_i)_{i \in I}$ be a family of elements from $\mathcal{A}$. The elements $a_i$ are called freely independent if the subsets $(\{a_i\})_{i \in I}$ are free.
Consider two random variables $a$ and $b$ which are free. Then, the distribution of $a + b$ (in the sense of linear functionals) depends only on the distributions of $a$ and $b$.
Definition A3.
For free random variables $a$ and $b$, the distribution of $a + b$ is called the free additive convolution of $\mu_a$ and $\mu_b$ and is denoted by
$$\mu_{a+b} = \mu_a \boxplus \mu_b.$$
To compute the free convolution of concrete distributions, we may use the so-called R-transform introduced by Voiculescu [17]. Let $s(z)$ be the Stieltjes transform of some distribution function $F(x)$. Denote by $s^{-1}(z)$ the inverse function of $s(z)$ in the sense of composition. Define the R-transform as follows:
$$R(z) = s^{-1}(-z) - \frac{1}{z}.$$
Let $F(x)$ be the semicircular distribution function. Its Stieltjes transform satisfies the equation
$$s^2(z) + z s(z) + 1 = 0.$$
Denote by $R_{sc}(z)$ the R-transform of the semicircular law. Simple calculations show that
$$R_{sc}(z) = z.$$
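For completeness, here is a sketch of the short computation behind this claim, written under the sign convention for the R-transform fixed above (conventions vary between sources):
$$s^2(z) + z\,s(z) + 1 = 0 \;\Longrightarrow\; z = -s(z) - \frac{1}{s(z)} \;\Longrightarrow\; s^{-1}(w) = -w - \frac{1}{w},$$
$$R_{sc}(z) = s^{-1}(-z) - \frac{1}{z} = \Big(z + \frac{1}{z}\Big) - \frac{1}{z} = z.$$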
We denote by $R_{fc}(z)$ the R-transform of the free convolution of the semicircular law and the Gaussian law, and let $R_g$ denote the R-transform of the standard normal law. Then,
$$R_{fc}(z) = R_{sc}(z) + R_g(z);$$
see, for instance, refs. [18,19]. Using the definition of the R-transform via the Stieltjes transform, we obtain
$$s_{fc}^{-1}(z) = -z + s_g^{-1}(z).$$
It is straightforward to show that this equality implies
$$s_{fc}(z) = s_g(z + s_{fc}(z)).$$
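Numerically, this fixed-point equation is easy to solve by direct iteration; the sketch below (ours) does so and recovers the density of $G$ by Stieltjes inversion. The map $w \mapsto s_g(z + w)$ sends the upper half-plane into itself, so the iteration settles on the unique fixed point, although convergence can be slow near the real axis.

```python
# Sketch (ours): solve s = s_g(z + s) by iteration; density via inversion.
import numpy as np

def s_gauss(z, grid=np.linspace(-12.0, 12.0, 100001)):
    phi = np.exp(-grid**2 / 2.0) / np.sqrt(2.0 * np.pi)
    return np.sum(phi / (grid - z)) * (grid[1] - grid[0])

def s_fc(z, iters=500):
    s = -1.0 / z        # behavior of any Stieltjes transform at infinity
    for _ in range(iters):
        s = s_gauss(z + s)
    return s

# approximate density of G on a grid, evaluated slightly above the real axis
u = np.linspace(-5.0, 5.0, 11)
density = [s_fc(x + 0.05j).imag / np.pi for x in u]
```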
We prove the following simple but important lemma.
Lemma A1.
Let a sequence of Stieltjes transforms $s_n(z)$ of distribution functions $F_n(x)$ satisfy the equations
$$s_n(z) = s_g(z + s_n(z)) + \varkappa_n(z),$$
where
$$\varkappa_n(z) \to 0 \quad \text{as} \quad n \to \infty.$$
Then, the distribution functions $F_n(x)$ weakly converge to the distribution function $F_{fc}(x)$, which is the free convolution of the semicircular law and the standard normal law.
Proof. 
It is enough to prove that the Stieltjes transforms $s_n(z)$ converge, in some region with non-empty interior, to the Stieltjes transform $s_{fc}(z)$, which satisfies equation (A1). We shall consider the region of $z = u + iv$ with $v > 2$. Since the derivative of $s_g(z)$ does not exceed $1/v^2$ in absolute value, we may write
$$|s_n(z) - s_m(z)| \le \frac{1}{2} |s_n(z) - s_m(z)| + |\varkappa_n(z)| + |\varkappa_m(z)|,$$
whence
$$|s_n(z) - s_m(z)| \le 2|\varkappa_n(z)| + 2|\varkappa_m(z)| \to 0 \quad \text{as} \quad n, m \to \infty.$$
The sequence of Stieltjes transforms $s_n(z)$ is therefore Cauchy; consequently, there exists a limit, say $s_{fc}(z)$, of this sequence,
$$\lim_{n\to\infty} s_n(z) = s_{fc}(z).$$
Taking the limit in equation (A2), we obtain
$$s_{fc}(z) = s_g(z + s_{fc}(z)).$$
The last equality implies that $s_{fc}(z)$ is the Stieltjes transform of the free convolution of the semicircular law and the standard Gaussian law. Thus, the lemma is proved. □

Appendix B. Weighted Graphs

Appendix B.1. Variance of Stieltjes Transform of Empirical Measure

In this section, we estimate the variance of $m_n(z) = \frac{1}{n}\operatorname{Tr}\mathbf{R}$, where $\mathbf{R} := \mathbf{R}_{\widetilde{\mathbf{L}}}(z) = (\widetilde{\mathbf{L}} - z\mathbf{I})^{-1}$. We prove the following lemma.
Lemma A2.
For any $z = u + iv$ with $v > 0$, the following relation holds:
$$\lim_{n\to\infty} \mathbf{E}\Big| \frac{1}{n} \operatorname{Tr} \mathbf{R} - \frac{1}{n}\, \mathbf{E} \operatorname{Tr} \mathbf{R} \Big| = 0.$$
Proof. 
The proof of this lemma uses the martingale representation of $\xi - \mathbf{E}\xi$. In random matrix theory, this method was first used by Girko; see, for instance, [20]. We introduce the sequence of $\sigma$-algebras $\mathcal{M}_k$ generated by the random variables $X_{j,l}$ for $1 \le j, l \le k$. It is easy to see that $\mathcal{M}_k \subset \mathcal{M}_{k+1}$. Denote by $\mathbf{E}_k$ the conditional expectation with respect to the $\sigma$-algebra $\mathcal{M}_k$; for $k = 0$, $\mathbf{E}_0 = \mathbf{E}$. Introduce the random variables
$$\gamma_k := \mathbf{E}_k \frac{1}{n} \operatorname{Tr} \mathbf{R} - \mathbf{E}_{k-1} \frac{1}{n} \operatorname{Tr} \mathbf{R}.$$
The sequence $\gamma_k$, $k = 1, \dots, n$, is a martingale difference and
$$\frac{1}{n} \operatorname{Tr} \mathbf{R} - \mathbf{E}\, \frac{1}{n} \operatorname{Tr} \mathbf{R} = \sum_{k=1}^n \gamma_k.$$
Introduce the sub-matrices $\widetilde{\mathbf{L}}^{(k)}$ obtained from $\widetilde{\mathbf{L}}$ by deleting both the $k$-th row and the $k$-th column. Denote by $\mathbf{R}^{(k)} = \mathbf{R}^{(k)}(z)$ the corresponding resolvent matrix, $\mathbf{R}^{(k)}(z) = (\widetilde{\mathbf{L}}^{(k)} - z\mathbf{I})^{-1}$. Note that the matrix $\widetilde{\mathbf{L}}^{(k)}$ depends on the random variables $X_{kl}$, $l = 1, \dots, n$, via its diagonal entries. To overcome this difficulty, we introduce the matrix $\widetilde{\mathbf{L}}^{(k,0)}$ obtained from $\widetilde{\mathbf{L}}^{(k)}$ by replacing its diagonal entries with $\widetilde{L}^{(k,0)}_{jj} := \frac{1}{\sqrt{a_n}} \sum_{l:\, l \ne k,\, l \ne j} A_{jl} X_{jl}$. The corresponding resolvent matrix is denoted by $\mathbf{R}^{(k,0)}$. We now have
$$\mathbf{E}_k \operatorname{Tr} \mathbf{R}^{(k,0)} = \mathbf{E}_{k-1} \operatorname{Tr} \mathbf{R}^{(k,0)}.$$
This allows us to write
$$\gamma_k = \mathbf{E}_k \frac{1}{n}\big( \operatorname{Tr} \mathbf{R} - \operatorname{Tr} \mathbf{R}^{(k)} \big) - \mathbf{E}_{k-1} \frac{1}{n}\big( \operatorname{Tr} \mathbf{R} - \operatorname{Tr} \mathbf{R}^{(k)} \big) + \mathbf{E}_k \frac{1}{n}\big( \operatorname{Tr} \mathbf{R}^{(k)} - \operatorname{Tr} \mathbf{R}^{(k,0)} \big) - \mathbf{E}_{k-1} \frac{1}{n}\big( \operatorname{Tr} \mathbf{R}^{(k)} - \operatorname{Tr} \mathbf{R}^{(k,0)} \big) =: \gamma^{(1)}_k + \gamma^{(2)}_k.$$
By the interlacing theorem, for $z = u + iv$,
$$\Big| \frac{1}{n} \operatorname{Tr} \mathbf{R}_{\widetilde{\mathbf{L}}}(z) - \frac{1}{n} \operatorname{Tr} \mathbf{R}^{(k)}(z) \Big| \le \frac{1}{nv}.$$
From here, we immediately obtain
$$|\gamma^{(1)}_k| \le \frac{2}{nv}$$
and
$$\sum_{k=1}^n \mathbf{E}\,|\gamma^{(1)}_k|^2 \le \frac{4}{n v^2}.$$
To complete the proof, it remains to show that
$$\lim_{n\to\infty} \sum_{k=1}^n \mathbf{E}\,|\gamma^{(2)}_k|^2 = 0.$$
Note that
$$\mathbf{E}\,|\gamma^{(2)}_k|^2 \le 2\, \mathbf{E}\Big| \frac{1}{n} \operatorname{Tr} \mathbf{R}^{(k)} - \frac{1}{n} \operatorname{Tr} \mathbf{R}^{(k,0)} \Big|^2.$$
Introduce the diagonal matrix $\mathbf{D}^{(k)}$ with diagonal entries
$$D^{(k)}_{ll} = \frac{1}{\sqrt{a_n}} A_{kl} X_{kl}, \quad l \ne k.$$
In this notation, we have
$$\frac{1}{n} \operatorname{Tr} \mathbf{R}^{(k)} - \frac{1}{n} \operatorname{Tr} \mathbf{R}^{(k,0)} = \frac{1}{n} \operatorname{Tr} \mathbf{R}^{(k)} \mathbf{D}^{(k)} \mathbf{R}^{(k,0)} = \frac{1}{n\sqrt{a_n}} \sum_{l \ne k,\, j \ne k} R^{(k)}_{lj} A_{kj} X_{kj} R^{(k,0)}_{jl}.$$
This implies that
$$\sum_{k=1}^n \mathbf{E}\,|\gamma^{(2)}_k|^2 \le \frac{4}{n^2 a_n} \sum_{k=1}^n \mathbf{E}\Big| \sum_{j \ne k} A_{kj} X_{kj} \sum_{l \ne k} R^{(k)}_{lj} R^{(k,0)}_{jl} \Big|^2.$$
We continue this inequality as follows:
$$\sum_{k=1}^n \mathbf{E}\,|\gamma^{(2)}_k|^2 \le \frac{8}{n^2 a_n} \sum_{k=1}^n \mathbf{E}\Big| \sum_{j \ne k} A_{kj} X_{kj} \sum_{l \ne k} R^{(k)}_{lj} R^{(k,0)}_{jl}\, \mathbb{I}\{A_{kj}|X_{kj}| \le \tau\sqrt{a_n}\} \Big|^2 + \frac{8}{n^2 a_n} \sum_{k=1}^n \mathbf{E}\Big| \sum_{j \ne k} A_{kj} X_{kj} \sum_{l \ne k} R^{(k)}_{lj} R^{(k,0)}_{jl}\, \mathbb{I}\{A_{kj}|X_{kj}| > \tau\sqrt{a_n}\} \Big|^2.$$
Applying Cauchy's inequality to the second term on the right-hand side of the last inequality, we obtain
$$\frac{8}{n^2 a_n} \sum_{k=1}^n \mathbf{E}\Big| \sum_{j \ne k} A_{kj} X_{kj} \sum_{l \ne k} R^{(k,0)}_{lj} R^{(k)}_{jl}\, \mathbb{I}\{A_{kj}|X_{kj}| > \tau\sqrt{a_n}\} \Big|^2 \le \frac{8}{n a_n} \sum_{k=1}^n \sum_{j \ne k} \mathbf{E}\, A_{jk} X^2_{kj} \Big| \sum_{l \ne k} R^{(k)}_{lj} R^{(k,0)}_{jl} \Big|^2 \mathbb{I}\{A_{kj}|X_{kj}| > \tau\sqrt{a_n}\}.$$
It is straightforward to check that
$$\Big| \sum_{l \ne k} R^{(k)}_{lj} R^{(k,0)}_{jl} \Big|^2 \le v^{-4}.$$
Using this bound, we obtain
$$\frac{8}{n^2 a_n} \sum_{k=1}^n \mathbf{E}\Big| \sum_{j \ne k} A_{kj} X_{kj} \sum_{l \ne k} R^{(k)}_{lj} R^{(k,0)}_{jl}\, \mathbb{I}\{A_{kj}|X_{kj}| > \tau\sqrt{a_n}\} \Big|^2 \le \frac{8}{v^4}\, L_n(\tau).$$
We now estimate the first term on the r.h.s. of (A12). Using that
$$\mathbf{R}^{(k)} = \mathbf{R}^{(k,0)} + \mathbf{R}^{(k,0)} \mathbf{D}^{(k)} \mathbf{R}^{(k)},$$
we may write
$$\frac{8}{n^2 a_n} \sum_{k=1}^n \mathbf{E}\Big| \sum_{j \ne k} A_{kj} X_{kj} \sum_{l \ne k} R^{(k)}_{lj} R^{(k,0)}_{jl}\, \mathbb{I}\{A_{kj}|X_{kj}| \le \tau\sqrt{a_n}\} \Big|^2 \le \frac{8}{n^2 a_n} \sum_{k=1}^n \mathbf{E}\Big| \sum_{j \ne k} A_{kj} X_{kj} \sum_{l \ne k} R^{(k,0)}_{lj} R^{(k,0)}_{jl}\, \mathbb{I}\{A_{kj}|X_{kj}| \le \tau\sqrt{a_n}\} \Big|^2 + \frac{8}{n^2 a_n^2} \sum_{k=1}^n \mathbf{E}\Big| \sum_{j \ne k} A_{kj} X_{kj} \sum_{l \ne k} \sum_{s=1}^n X_{ks} A_{ks} R^{(k,0)}_{ls} R^{(k)}_{sj} R^{(k,0)}_{jl}\, \mathbb{I}\{A_{kj}|X_{kj}| \le \tau\sqrt{a_n}\} \Big|^2.$$
By the independence of the random variables $A_{jk} X_{jk}$, $j = 1, \dots, n$, and the matrix $\mathbf{R}^{(k,0)}$, we have
$$\frac{8}{n^2 a_n} \sum_{k=1}^n \mathbf{E}\Big| \sum_{j \ne k} A_{kj} X_{kj} \sum_{l \ne k} R^{(k,0)}_{lj} R^{(k,0)}_{jl}\, \mathbb{I}\{A_{kj}|X_{kj}| \le \tau\sqrt{a_n}\} \Big|^2 \le \frac{8}{n^2 a_n v^4} \sum_{k=1}^n \sum_{j \ne k} p_{jk} \sigma^2_{jk} + \frac{1}{n^2 a_n^2 \tau^2 v^4} \sum_{k=1}^n \Big( \sum_{j=1}^n p_{jk}\, \mathbf{E}\,X^2_{jk} \mathbb{I}\{|X_{jk}| > \tau\sqrt{a_n}\} \Big)^2 \le \frac{8}{n v^4} + \Big( \frac{L_n(\tau)}{\tau v^2} \Big)^2.$$
For the second term on the r.h.s. of (A17), we have
$$\frac{8}{n^2 a_n^2} \sum_{k=1}^n \mathbf{E}\Big| \sum_{j \ne k} A_{kj} X_{kj} \sum_{l \ne k} \sum_{s=1}^n X_{ks} A_{ks} R^{(k,0)}_{ls} R^{(k)}_{sj} R^{(k,0)}_{jl}\, \mathbb{I}\{A_{kj}|X_{kj}| \le \tau\sqrt{a_n}\} \Big|^2 = \frac{8}{n^2 a_n^2} \sum_{k=1}^n \mathbf{E}\Big| \sum_{s \ne k} A_{ks} X_{ks} \sum_{j \ne k} X_{kj} A_{kj} \sum_{l \ne k} R^{(k,0)}_{ls} R^{(k)}_{sj} R^{(k,0)}_{jl}\, \mathbb{I}\{A_{kj}|X_{kj}| \le \tau\sqrt{a_n}\} \Big|^2 \le \frac{8}{n a_n^2} \sum_{k=1}^n \mathbf{E} \sum_{s \ne k} A_{ks} |X_{ks}|^2 \Big| \sum_{j \ne k} X_{kj} A_{kj} \sum_{l \ne k} R^{(k,0)}_{ls} R^{(k)}_{sj} R^{(k,0)}_{jl}\, \mathbb{I}\{A_{kj}|X_{kj}| \le \tau\sqrt{a_n}\} \Big|^2.$$
Note that
$$\sum_{r=1}^n |R^{(k)}_{rj}| \Big| \sum_{l \ne k} R^{(k,0)}_{lr} R^{(k,0)}_{jl} \Big| \le \Big( \sum_{r=1}^n |R^{(k)}_{jr}|^2 \Big)^{1/2} \Big( \sum_{r=1}^n \big| [(\mathbf{R}^{(k,0)})^2]_{jr} \big|^2 \Big)^{1/2} \le v^{-3}.$$
Using this inequality, we obtain
$$\frac{8}{n^2 a_n^2} \sum_{k=1}^n \mathbf{E}\Big| \sum_{j \ne k} A_{kj} X_{kj} \sum_{l \ne k} \sum_{r=1}^n X_{kr} A_{kr} R^{(k,0)}_{lr} R^{(k)}_{rj} R^{(k,0)}_{jl}\, \mathbb{I}\{A_{kr}|X_{kr}| \le \tau\sqrt{a_n}\} \Big|^2 \le \frac{8\tau^2}{n a_n v^6} \sum_{k=1}^n \sum_{j \ne k} p_{jk} \sigma^2_{jk} = \frac{8\tau^2}{v^6}.$$
Combining inequalities (A7), (A12) and (A20), we obtain
$$\mathbf{E}\Big| \frac{1}{n}\big( \operatorname{Tr} \mathbf{R} - \mathbf{E} \operatorname{Tr} \mathbf{R} \big) \Big|^2 \le \frac{C}{n v^2} + \frac{C\tau^2}{v^6} + \frac{C L_n(\tau)}{v^4}.$$
Passing to the limit, first as $n \to \infty$ and then as $\tau \to 0$, we obtain
$$\lim_{n\to\infty} \mathbf{E}\Big| \frac{1}{n}\big( \operatorname{Tr} \mathbf{R} - \mathbf{E} \operatorname{Tr} \mathbf{R} \big) \Big|^2 = 0.$$
Thus, the lemma is proved. □
In what follows, we shall assume that z = u + i v is fixed.

Appendix B.2. Convergence of the Distribution Functions of the Diagonal Entries of Laplace Matrices to the Normal Law

Lemma A3.
Under conditions $C_P(0)$ and $C_X(1)$, we have
$$\lim_{n\to\infty} \frac{1}{n} \sum_{j=1}^n \max_{1 \le k \le n} \frac{p_{jk}\sigma^2_{jk}}{a_n} = 0.$$
Proof. 
We fix an arbitrary $\tau > 0$. We may write
$$\frac{1}{n} \sum_{j=1}^n \max_{1 \le k \le n} \frac{p_{jk}\sigma^2_{jk}}{a_n} \le \tau^2 + \frac{1}{n a_n} \sum_{j=1}^n \sum_{k=1}^n p_{jk}\, \mathbf{E}\,|X_{jk}|^2 \mathbb{I}\{|X_{jk}| > \tau\sqrt{a_n}\}.$$
By condition $C_X(1)$, we obtain
$$\limsup_{n\to\infty} \frac{1}{n} \sum_{j=1}^n \max_{1 \le k \le n} \frac{p_{jk}\sigma^2_{jk}}{a_n} \le \tau^2.$$
Because τ is arbitrary, we obtain the claim. □
Lemma A4.
Under conditions $C_P(0)$, $C_P(1)$ and $C_X(1)$, we have
$$\lim_{n\to\infty} \sup_x |F_n(x) - \Phi(x)| = 0.$$
Proof. 
Let $J$ be a random variable, independent of the $A_{jk}$ and $X_{jk}$, uniformly distributed on the set $\{1, \dots, n\}$. We consider the characteristic function of $\zeta_J = \frac{1}{\sqrt{a_n}} \sum_{k=1}^n A_{J,k} X_{J,k}$,
$$f_n(t) = \mathbf{E} \exp\{it\zeta_J\} = \frac{1}{n} \sum_{j=1}^n \mathbf{E} \exp\{it\zeta_j\}.$$
Introduce the following set of indices:
$$\mathbb{M} = \mathbb{M}_1 \cap \mathbb{M}_2 \cap \mathbb{M}_3,$$
where
$$\mathbb{M}_1 := \Big\{ j \in \{1, \dots, n\}: \Big| \frac{1}{a_n} \sum_{k=1}^n p_{jk}\sigma^2_{jk} - 1 \Big| \le \frac{1}{16} \Big\}, \qquad \mathbb{M}_2 := \Big\{ j \in \{1, \dots, n\}: \frac{1}{a_n} \sum_{k=1}^n p_{jk}\, \mathbf{E}\,X^2_{jk} \mathbb{I}\{|X_{jk}| > \tau\sqrt{a_n}\} \le \frac{1}{16} \Big\},$$
$$\mathbb{M}_3 := \Big\{ j \in \{1, \dots, n\}: \frac{1}{a_n} \max_{1 \le k \le n} p_{jk}\sigma^2_{jk} \le \frac{1}{16 t^2} \Big\}.$$
We denote by $\mathbb{A}^c$ the complement of a set $\mathbb{A}$ and by $|\mathbb{A}|$ its cardinality. Note that, by condition $C_P(1)$,
$$\frac{|\mathbb{M}^c_1|}{n} \le 16\, \frac{1}{n a_n} \sum_{j=1}^n \sum_{k=1}^n \Big| p_{jk}\sigma^2_{jk} - \frac{a_n}{n} \Big| \to 0 \quad \text{as} \quad n \to \infty.$$
Analogously, by $C_X(1)$,
$$\frac{|\mathbb{M}^c_2|}{n} \le 16 L_n(\tau) \to 0 \quad \text{as} \quad n \to \infty.$$
Finally, by Lemma A3,
$$\frac{|\mathbb{M}^c_3|}{n} \le 16 t^2\, \frac{1}{n a_n} \sum_{j=1}^n \max_{1 \le k \le n} p_{jk}\sigma^2_{jk} \to 0 \quad \text{as} \quad n \to \infty.$$
Combining the last three relations, we obtain
$$\lim_{n\to\infty} \frac{|\mathbb{M}^c|}{n} = 0.$$
Note that, by the independence of $A_{jk}$ and $X_{jk}$,
$$f_{nj}(t) := \mathbf{E} \exp\{it\zeta_j\} = \prod_{k=1}^n \mathbf{E} \exp\Big\{ \frac{it}{\sqrt{a_n}}\, A_{jk} X_{jk} \Big\} =: \prod_{k=1}^n f_{njk}(t).$$
Furthermore,
$$f_{njk}(t) = 1 + p_{jk}\Big( \mathbf{E} \exp\Big\{ \frac{it}{\sqrt{a_n}}\, X_{jk} \Big\} - 1 \Big),$$
and by condition $C_P(0)$,
$$|f_{njk}(t) - 1| \le \frac{t^2}{2 a_n}\, p_{jk}\sigma^2_{jk} \le \frac{t^2}{2} \max_{1 \le j,k \le n} \frac{p_{jk}\sigma^2_{jk}}{a_n} \to 0 \quad \text{as} \quad n \to \infty.$$
Without loss of generality, we may assume that
$$\max_{1 \le j,k \le n} |f_{njk}(t) - 1| \le \frac{1}{4},$$
and applying Taylor's formula, we write
$$\ln f_{njk}(t) = p_{jk}\Big( \mathbf{E} \exp\Big\{ \frac{it}{\sqrt{a_n}}\, X_{jk} \Big\} - 1 \Big) + 2\theta(t)\, p^2_{jk} \Big| \mathbf{E} \exp\Big\{ \frac{it}{\sqrt{a_n}}\, X_{jk} \Big\} - 1 \Big|^2,$$
where $\theta(t)$ denotes some function such that $|\theta(t)| \le 1$. Furthermore, by Taylor's formula,
$$\mathbf{E} \exp\Big\{ \frac{it}{\sqrt{a_n}}\, X_{jk} \Big\} - 1 = -\frac{t^2}{2 a_n}\,\sigma^2_{jk} + \theta_1(t)\, \frac{|t|^3}{6 a_n^{3/2}}\, \mathbf{E}\,|X_{jk}|^3 \mathbb{I}\{|X_{jk}| \le \tau\sqrt{a_n}\} + \theta_2(t) \Big| \mathbf{E}\Big( \exp\Big\{ \frac{it}{\sqrt{a_n}}\, X_{jk} \Big\} - 1 - \frac{it}{\sqrt{a_n}}\, X_{jk} + \frac{t^2}{2 a_n}\, X^2_{jk} \Big) \mathbb{I}\{|X_{jk}| > \tau\sqrt{a_n}\} \Big|,$$
where $\theta_i(t)$, $i = 1, 2$, denote some functions such that $|\theta_i(t)| \le 1$. Using this equality, we may write
$$\ln f_{njk}(t) = -\frac{t^2}{2 a_n}\, p_{jk}\sigma^2_{jk} + \theta_1(t)\, \frac{\tau |t|^3}{6 a_n}\, p_{jk}\sigma^2_{jk} + \theta_2(t)\, \frac{t^2}{a_n}\, p_{jk}\, \mathbf{E}\,|X_{jk}|^2 \mathbb{I}\{|X_{jk}| > \tau\sqrt{a_n}\} + \theta_3(t)\, \frac{t^4}{4 a_n^2}\, p^2_{jk}\sigma^4_{jk}.$$
Summing this equality over $k = 1, \dots, n$, we obtain
$$\ln f_{nj}(t) = -\frac{t^2}{2}\, \frac{1}{a_n} \sum_{k=1}^n p_{jk}\sigma^2_{jk} + \theta_1(t)\, \frac{\tau |t|^3}{6 a_n} \sum_{k=1}^n p_{jk}\sigma^2_{jk} + \theta_2(t)\, \frac{t^2}{a_n} \sum_{k=1}^n p_{jk}\, \mathbf{E}\,|X_{jk}|^2 \mathbb{I}\{|X_{jk}| > \tau\sqrt{a_n}\} + \theta_3(t)\, \frac{t^4}{4} \max_{1 \le j,k \le n} \frac{p_{jk}\sigma^2_{jk}}{a_n}\, \frac{1}{a_n} \sum_{k=1}^n p_{jk}\sigma^2_{jk}.$$
For $0 < \tau < \frac{8}{17|t|}$ and $j \in \mathbb{M}$, we have
$$\Big| \ln f_{nj}(t) + \frac{t^2}{2} \Big| \le \frac{t^2}{3}.$$
This implies that, for $j \in \mathbb{M}$,
$$\Big| f_{nj}(t) - \exp\Big\{-\frac{t^2}{2}\Big\} \Big| \le C\Big( t^2 \Big| \frac{1}{a_n} \sum_{k=1}^n p_{jk}\sigma^2_{jk} - 1 \Big| + \frac{1}{a_n} \sum_{k=1}^n p_{jk}\, \mathbf{E}\,|X_{jk}|^2 \mathbb{I}\{|X_{jk}| > \tau\sqrt{a_n}\} + \tau |t|^3 + t^4 \max_{1 \le j,k \le n} \frac{p_{jk}\sigma^2_{jk}}{a_n} \Big).$$
From this inequality, it follows that
$$\Big| f_n(t) - \exp\Big\{-\frac{t^2}{2}\Big\} \Big| \le \frac{2|\mathbb{M}^c|}{n} + \frac{1}{n} \sum_{j=1}^n \Big( t^2 \Big| \frac{1}{a_n} \sum_{k=1}^n p_{jk}\sigma^2_{jk} - 1 \Big| + \frac{1}{a_n} \sum_{k=1}^n p_{jk}\, \mathbf{E}\,|X_{jk}|^2 \mathbb{I}\{|X_{jk}| > \tau\sqrt{a_n}\} + \tau |t|^3 + t^4 \max_{1 \le j,k \le n} \frac{p_{jk}\sigma^2_{jk}}{a_n} \Big).$$
By conditions $C_P(1)$ and $C_X(1)$, relation (A32) and Lemma A3, we obtain
$$\lim_{n\to\infty} f_n(t) = \exp\Big\{-\frac{t^2}{2}\Big\}.$$
Thus, the lemma is proved. □
Lemma A5.
Under the conditions of Theorem 1, we have
$$\lim_{n\to\infty} \frac{1}{n} \sum_{j=1}^n \mathbf{E}\,\big| R_{jj} - \widetilde{R}^{(j)}_{jj} \big| = 0.$$
Proof. 
By $\|\mathbf{V}\|$, we shall denote the operator norm of a matrix $\mathbf{V}$. The matrices $\widetilde{\mathbf{R}}^{(j)}$ and $\widetilde{\mathbf{D}}^{(j)}$ are defined at the beginning of Section 3, before relation (18). Note that
$$\big\| \mathbf{R}\,\widetilde{\mathbf{D}}^{(j)} \widetilde{\mathbf{R}}^{(j)} \big\| \le v^{-2} \big\| \widetilde{\mathbf{D}}^{(j)} \big\|.$$
It is easy to check that
$$\frac{1}{n} \sum_{j=1}^n \mathbf{E}\,\big| R_{jj} - \widetilde{R}^{(j)}_{jj} \big| \le \frac{1}{n} \sum_{j=1}^n \mathbf{E}\,\big\| \mathbf{R} - \widetilde{\mathbf{R}}^{(j)} \big\|.$$
Using that
$$\mathbf{R} = \widetilde{\mathbf{R}}^{(j)} - \mathbf{R}\,\widetilde{\mathbf{D}}^{(j)} \widetilde{\mathbf{R}}^{(j)},$$
we obtain
$$\big\| \mathbf{R} - \widetilde{\mathbf{R}}^{(j)} \big\| \le v^{-2} \big\| \widetilde{\mathbf{D}}^{(j)} \big\|.$$
Furthermore, for any $\tau > 0$, we have
$$\mathbf{E}\,\big\| \widetilde{\mathbf{D}}^{(j)} \big\| \le \frac{1}{\sqrt{a_n}}\, \mathbf{E} \max_{1 \le l \le n,\, l \ne j} \big\{ |X_{jl}| A_{jl} \big\} \le \tau + \frac{1}{\tau a_n} \sum_{l=1}^n p_{jl}\, \mathbf{E}\,X^2_{jl} \mathbb{I}\{|X_{jl}| > \tau\sqrt{a_n}\}.$$
Summing this inequality over $j = 1, \dots, n$, we obtain
$$\frac{1}{n} \sum_{j=1}^n \mathbf{E}\,\big| R_{jj} - \widetilde{R}^{(j)}_{jj} \big| \le v^{-2}\Big( \tau + \frac{1}{\tau}\, L_n(\tau) \Big).$$
Since $\tau$ is arbitrary, this inequality together with condition $C_X(1)$ implies (A44). Thus, Lemma A5 is proved. □

Appendix B.3. The Bounds of $\frac{1}{n}\sum_{j=1}^n \mathbf{E}\,|\varepsilon_{j\nu}|$, for $\nu = 1, \dots, 7$

Lemma A6.
Under the conditions of Theorem 1, we have
$$\frac{1}{n} \sum_{j=1}^n \mathbf{E}\,|\varepsilon_{j1}| \le \frac{\tau}{v} + \frac{1}{v} \Big( \max_{1 \le j,k \le n} \frac{p_{jk}\sigma^2_{jk}}{a_n} \Big)^{1/2} L_n(\tau)^{1/2}.$$
Proof. 
By the definition of $\varepsilon_{j1}$, we may write
$$\varepsilon_{j1} := \frac{1}{a_n} \sum_{l \ne k:\, l \ne j,\, k \ne j} [\widetilde{\mathbf{R}}^{(j,0)}]_{kl}\, A_{jk} A_{jl} X_{jk} X_{jl}.$$
Applying Cauchy's inequality, we obtain
$$\frac{1}{n} \sum_{j=1}^n \mathbf{E}\,|\varepsilon_{j1}| \le \Big( \frac{1}{n} \sum_{j=1}^n \mathbf{E}\,|\varepsilon_{j1}|^2 \Big)^{1/2}.$$
Simple calculations show that
$$\frac{1}{n} \sum_{j=1}^n \mathbf{E}\,|\varepsilon_{j1}| \le \Big( \frac{1}{n a_n^2} \sum_{j=1}^n \sum_{k \ne j} \sum_{l \ne j} \mathbf{E}\,\big| \widetilde{R}^{(j,0)}_{kl} \big|^2 p_{jk} p_{jl} \sigma^2_{jk} \sigma^2_{jl} \Big)^{1/2}.$$
We introduce the following notation:
$$\mathbf{W}^{(j)} = \big( |\widetilde{R}^{(j,0)}_{kl}|^2 \big)_{k,l=1}^n, \qquad \mathbf{H}^{(j)} = \big( p_{j1}\sigma^2_{j1}, \dots, p_{jn}\sigma^2_{jn} \big)^T.$$
In this notation, we write
$$\frac{1}{n} \sum_{j=1}^n \mathbf{E}\,|\varepsilon_{j1}| \le \Big( \frac{1}{n a_n^2} \sum_{j=1}^n \mathbf{E}\,(\mathbf{H}^{(j)})^T \mathbf{W}^{(j)} \mathbf{H}^{(j)} \Big)^{1/2}.$$
Using that
$$\sum_{l=1}^n |\widetilde{R}^{(j,0)}_{kl}|^2 \le \frac{1}{v^2},$$
we obtain that the spectral norm of the matrix $\mathbf{W}^{(j)}$ satisfies the inequality
$$\|\mathbf{W}^{(j)}\| \le \frac{1}{v^2},$$
and
$$(\mathbf{H}^{(j)})^T \mathbf{W}^{(j)} \mathbf{H}^{(j)} \le \|\mathbf{W}^{(j)}\|\, \|\mathbf{H}^{(j)}\|^2 \le \frac{1}{v^2} \sum_{k=1}^n p^2_{jk}\sigma^4_{jk}.$$
Using the last bound, we obtain
$$\frac{1}{n} \sum_{j=1}^n \mathbf{E}\,|\varepsilon_{j1}| \le \frac{1}{v} \Big( \frac{1}{n a_n^2} \sum_{j=1}^n \sum_{k=1}^n p^2_{jk}\sigma^4_{jk} \Big)^{1/2}.$$
Furthermore, we apply the bound
$$\sigma^2_{jk} \le \tau^2 a_n + \mathbf{E}\,X^2_{jk} \mathbb{I}\{|X_{jk}| > \tau\sqrt{a_n}\}.$$
We obtain
$$\frac{1}{n} \sum_{j=1}^n \mathbf{E}\,|\varepsilon_{j1}| \le \frac{1}{v} \Big( \tau^2 + \frac{1}{n a_n^2} \sum_{j=1}^n \sum_{k=1}^n p^2_{jk}\sigma^2_{jk}\, \mathbf{E}\,|X_{jk}|^2 \mathbb{I}\{|X_{jk}| > \tau\sqrt{a_n}\} \Big)^{1/2}.$$
We continue as follows:
$$\frac{1}{n} \sum_{j=1}^n \mathbf{E}\,|\varepsilon_{j1}| \le \frac{\tau}{v} + \frac{1}{v} \Big( \max_{1 \le j,k \le n} \frac{p_{jk}\sigma^2_{jk}}{a_n} \Big)^{1/2} L_n(\tau)^{1/2}.$$
Thus, the lemma is proved. □
Lemma A7.
Under the conditions of Theorem 1, we have
$$\frac{1}{n} \sum_{j=1}^n \mathbf{E}\,|\varepsilon_{j2}| \le \frac{1}{v}\, L_n(\tau) + \frac{\tau}{v}.$$
Proof. 
We recall the definition of $\varepsilon_{j2}$,
$$\varepsilon_{j2} = \frac{1}{a_n} \sum_{k:\, k \ne j} [\widetilde{\mathbf{R}}^{(j,0)}]_{kk} (A_{jk} - p_{jk}) X^2_{jk}.$$
Using the triangle inequality and Cauchy's inequality, we may write
$$\frac{1}{n} \sum_{j=1}^n \mathbf{E}\,|\varepsilon_{j2}| \le \frac{1}{n a_n v} \sum_{j=1}^n \sum_{k=1}^n p_{jk}\, \mathbf{E}\,X^2_{jk} \mathbb{I}\{|X_{jk}| > \tau\sqrt{a_n}\} + \Big( \frac{1}{n a_n^2} \sum_{j=1}^n \mathbf{E}\Big| \sum_{k:\, k \ne j} [\widetilde{\mathbf{R}}^{(j,0)}]_{kk} (A_{jk} - p_{jk}) X^2_{jk} \mathbb{I}\{|X_{jk}| \le \tau\sqrt{a_n}\} \Big|^2 \Big)^{1/2}.$$
Since $\mathbf{E}\,[\widetilde{\mathbf{R}}^{(j,0)}]_{kk} (A_{jk} - p_{jk}) X^2_{jk} \mathbb{I}\{|X_{jk}| \le \tau\sqrt{a_n}\} = 0$ and the random variables $A_{jk}$, $X_{jk}$ are independent for $k = 1, \dots, n$ and independent of $[\widetilde{\mathbf{R}}^{(j,0)}]_{kk}$, we obtain
$$\frac{1}{n} \sum_{j=1}^n \mathbf{E}\,|\varepsilon_{j2}| \le \frac{1}{v}\, L_n(\tau) + \frac{\tau}{v} \Big( \frac{1}{n a_n} \sum_{j=1}^n \sum_{k:\, k \ne j} p_{jk}\sigma^2_{jk} \Big)^{1/2} = \frac{1}{v}\, L_n(\tau) + \frac{\tau}{v}.$$
Thus, the lemma is proved. □
Lemma A8.
Under the conditions of Theorem 1, we have
$$\frac{1}{n} \sum_{j=1}^n \mathbf{E}\,|\varepsilon_{j3}| \le \frac{3}{v}\, L_n(\tau) + \frac{\tau}{v}.$$
Proof. 
By the definition of $\varepsilon_{j3}$, we have
$$\varepsilon_{j3} = \frac{1}{a_n} \sum_{k:\, k \ne j} [\widetilde{\mathbf{R}}^{(j,0)}(z)]_{kk}\, p_{jk} \big( X^2_{jk} - \sigma^2_{jk} \big).$$
We may write
$$\frac{1}{n} \sum_{j=1}^n \mathbf{E}\,|\varepsilon_{j3}| \le \frac{1}{v}\, \frac{1}{n a_n} \sum_{j=1}^n \sum_{k=1}^n p_{jk}\, \mathbf{E}\,\big| X^2_{jk} - \sigma^2_{jk} \big| \mathbb{I}\{|X_{jk}| > \tau\sqrt{a_n}\} + \frac{1}{n} \sum_{j=1}^n \mathbf{E}\Big| \frac{1}{a_n} \sum_{k=1}^n p_{jk} \widetilde{R}^{(j,0)}_{kk} \big( X^2_{jk} - \sigma^2_{jk} \big) \mathbb{I}\{|X_{jk}| \le \tau\sqrt{a_n}\} \Big|.$$
Furthermore,
$$\frac{1}{n a_n} \sum_{j=1}^n \sum_{k=1}^n p_{jk}\, \mathbf{E}\,\big| X^2_{jk} - \sigma^2_{jk} \big| \mathbb{I}\{|X_{jk}| > \tau\sqrt{a_n}\} \le L_n(\tau) + \frac{1}{n a_n} \sum_{j=1}^n \sum_{k=1}^n p_{jk}\sigma^2_{jk}\, \mathbf{E}\,\mathbb{I}\{|X_{jk}| > \tau\sqrt{a_n}\}.$$
Using inequality (A60), we obtain
$$\frac{1}{n a_n} \sum_{j=1}^n \sum_{k=1}^n p_{jk}\sigma^2_{jk}\, \mathbf{E}\,\mathbb{I}\{|X_{jk}| > \tau\sqrt{a_n}\} \le L_n(\tau) + \frac{1}{n a_n} \sum_{j=1}^n \sum_{k=1}^n p_{jk}\, \mathbf{E}\,|X_{jk}|^2 \mathbb{I}\{|X_{jk}| > \tau\sqrt{a_n}\}\, \mathbf{E}\,\mathbb{I}\{|X_{jk}| > \tau\sqrt{a_n}\} \le 2 L_n(\tau).$$
We now estimate the second term on the right-hand side of (A68). Applying the triangle inequality, we obtain
$$\frac{1}{n} \sum_{j=1}^n \mathbf{E}\Big| \frac{1}{a_n} \sum_{k=1}^n p_{jk} \widetilde{R}^{(j,0)}_{kk} \big( X^2_{jk} - \sigma^2_{jk} \big) \mathbb{I}\{|X_{jk}| \le \tau\sqrt{a_n}\} \Big| \le \frac{1}{n} \sum_{j=1}^n \Big| \frac{1}{a_n} \sum_{k=1}^n p_{jk}\, \mathbf{E}\,\widetilde{R}^{(j,0)}_{kk}\, \mathbf{E}\big( X^2_{jk} - \sigma^2_{jk} \big) \mathbb{I}\{|X_{jk}| \le \tau\sqrt{a_n}\} \Big| + \frac{1}{n} \sum_{j=1}^n \Big( \mathbf{E}\Big| \frac{1}{a_n} \sum_{k=1}^n p_{jk} \widetilde{R}^{(j,0)}_{kk} \big( X^2_{jk} \mathbb{I}\{|X_{jk}| \le \tau\sqrt{a_n}\} - \mathbf{E}\,X^2_{jk} \mathbb{I}\{|X_{jk}| \le \tau\sqrt{a_n}\} \big) \Big|^2 \Big)^{1/2}.$$
Simple calculations show that
$$\frac{1}{n} \sum_{j=1}^n \mathbf{E}\Big| \frac{1}{a_n} \sum_{k=1}^n p_{jk} \widetilde{R}^{(j,0)}_{kk} \big( X^2_{jk} \mathbb{I}\{|X_{jk}| \le \tau\sqrt{a_n}\} - \mathbf{E}\,X^2_{jk} \mathbb{I}\{|X_{jk}| \le \tau\sqrt{a_n}\} \big) \Big|^2 \le \frac{1}{v^2 n a_n^2} \sum_{j=1}^n \sum_{k=1}^n p^2_{jk}\, \mathbf{E}\,|X_{jk}|^4 \mathbb{I}\{|X_{jk}| \le \tau\sqrt{a_n}\} \le \frac{\tau^2}{v^2}\, \frac{1}{n a_n} \sum_{j=1}^n \sum_{k=1}^n p_{jk}\sigma^2_{jk} = \frac{\tau^2}{v^2}.$$
Finally, we note that
$$\mathbf{E}\big( X^2_{jk} - \sigma^2_{jk} \big) \mathbb{I}\{|X_{jk}| \le \tau\sqrt{a_n}\} = -\mathbf{E}\big( X^2_{jk} - \sigma^2_{jk} \big) \mathbb{I}\{|X_{jk}| > \tau\sqrt{a_n}\}.$$
Combining inequalities (A68), (A70) and (A71), we obtain the result. Thus, the lemma is proved. □
Lemma A9.
Under the conditions of Theorem 1, we have
$$\frac{1}{n} \sum_{j=1}^n \mathbf{E}\,|\varepsilon_{j4}| \le \frac{1}{v n a_n} \sum_{j=1}^n \sum_{k=1}^n \Big| p_{jk}\sigma^2_{jk} - \frac{1}{n} \sum_{l=1}^n p_{jl}\sigma^2_{jl} \Big|.$$
Proof. 
By the definition of $\varepsilon_{j4}$, we have
$$\varepsilon_{j4} = \frac{1}{a_n} \sum_{k:\, k \ne j} \widetilde{R}^{(j,0)}_{kk} \Big( p_{jk}\sigma^2_{jk} - \frac{1}{n} \sum_{l=1}^n p_{jl}\sigma^2_{jl} \Big).$$
Using that $|\widetilde{R}^{(j,0)}_{kk}| \le \frac{1}{v}$, we obtain
$$\frac{1}{n} \sum_{j=1}^n \mathbf{E}\,|\varepsilon_{j4}| \le \frac{1}{v n a_n} \sum_{j=1}^n \sum_{k=1}^n \Big| p_{jk}\sigma^2_{jk} - \frac{1}{n} \sum_{l=1}^n p_{jl}\sigma^2_{jl} \Big|.$$
Thus, the lemma is proved. □
Lemma A10.
Under the conditions of Theorem 1, we have
$$\frac{1}{n} \sum_{j=1}^n \mathbf{E}\,|\varepsilon_{j5}| \le \frac{1}{v n} \sum_{j=1}^n \Big| \frac{1}{a_n} \sum_{l=1}^n p_{jl}\sigma^2_{jl} - 1 \Big|.$$
Proof. 
Recall that
$$\varepsilon_{j5} = \frac{1}{n} \sum_{k:\, k \ne j} \widetilde{R}^{(j,0)}_{kk} \Big( \frac{1}{a_n} \sum_{l=1}^n p_{jl}\sigma^2_{jl} - 1 \Big).$$
Using that $|\widetilde{R}^{(j,0)}_{kk}| \le v^{-1}$, we obtain
$$\frac{1}{n} \sum_{j=1}^n \mathbf{E}\,|\varepsilon_{j5}| \le \frac{1}{v n} \sum_{j=1}^n \Big| \frac{1}{a_n} \sum_{l=1}^n p_{jl}\sigma^2_{jl} - 1 \Big|.$$
Thus, the lemma is proved. □
Lemma A11.
Under the conditions of Theorem 1, we have
$$\frac{1}{n} \sum_{j=1}^n \mathbf{E}\,|\varepsilon_{j6}| \le \frac{1}{nv} + \frac{\tau}{v^2} + \frac{1}{n v^2 \tau}\, L_n(\tau).$$
Proof. 
By the definition of $\varepsilon_{j6}$, we have
$$\varepsilon_{j6} = \frac{1}{n} \sum_{k:\, k \ne j} [\widetilde{\mathbf{R}}^{(j,0)}]_{kk} - \frac{1}{n} \sum_{k=1}^n [\mathbf{R}]_{kk}.$$
By the triangle inequality, we obtain
$$\frac{1}{n} \sum_{j=1}^n \mathbf{E}\,|\varepsilon_{j6}| \le \frac{1}{n} \sum_{j=1}^n \mathbf{E}\Big| \frac{1}{n} \operatorname{Tr} \widetilde{\mathbf{R}}^{(j,0)} - \frac{1}{n} \operatorname{Tr} \widetilde{\mathbf{R}}^{(j)} \Big| + \frac{1}{n} \sum_{j=1}^n \mathbf{E}\Big| \frac{1}{n} \operatorname{Tr} \widetilde{\mathbf{R}}^{(j)} - \frac{1}{n} \operatorname{Tr} \mathbf{R} \Big|.$$
By the interlacing theorem, we have
$$\Big| \frac{1}{n} \operatorname{Tr} \widetilde{\mathbf{R}}^{(j,0)} - \frac{1}{n} \operatorname{Tr} \widetilde{\mathbf{R}}^{(j)} \Big| \le \frac{1}{nv}.$$
It remains to estimate the second term on the r.h.s. of (A81). Note that
$$\widetilde{\mathbf{R}}^{(j)} - \mathbf{R} = \widetilde{\mathbf{R}}^{(j)} \widetilde{\mathbf{D}}^{(j)} \mathbf{R}.$$
This equality implies that
$$\operatorname{Tr} \widetilde{\mathbf{R}}^{(j)} - \operatorname{Tr} \mathbf{R} = \frac{1}{\sqrt{a_n}} \sum_{l=1}^n \sum_{k=1}^n R_{kl} A_{jk} X_{jk} \widetilde{R}^{(j)}_{lk}.$$
Summing this equality over $j$, we obtain
$$\frac{1}{n} \sum_{j=1}^n \mathbf{E}\Big| \frac{1}{n} \operatorname{Tr} \widetilde{\mathbf{R}}^{(j)} - \frac{1}{n} \operatorname{Tr} \mathbf{R} \Big| \le \frac{1}{n^2 \sqrt{a_n}} \sum_{j=1}^n \mathbf{E}\Big| \sum_{l=1}^n \sum_{k=1}^n R_{kl} A_{jk} X_{jk} \widetilde{R}^{(j)}_{lk} \Big|.$$
Using that
$$\sum_{l=1}^n \big| R_{kl} \widetilde{R}^{(j)}_{lk} \big| \le \frac{1}{v^2},$$
we obtain
$$\frac{1}{n} \sum_{j=1}^n \mathbf{E}\Big| \frac{1}{n} \operatorname{Tr} \widetilde{\mathbf{R}}^{(j)} - \frac{1}{n} \operatorname{Tr} \mathbf{R} \Big| \le \frac{1}{v^2 n^2 \sqrt{a_n}} \sum_{j=1}^n \sum_{k=1}^n p_{jk}\, \mathbf{E}\,|X_{jk}| \mathbb{I}\{|X_{jk}| \le \tau\sqrt{a_n}\} + \frac{1}{n^2 v^2 a_n \tau} \sum_{j=1}^n \sum_{k=1}^n p_{jk}\, \mathbf{E}\,X^2_{jk} \mathbb{I}\{|X_{jk}| > \tau\sqrt{a_n}\} \le \frac{\tau}{v^2} + \frac{1}{n v^2 \tau}\, L_n(\tau).$$
Thus, the lemma is proved. □

Appendix C. Unweighted Graphs

Appendix C.1. Convergence of the Distribution Functions of the Diagonal Entries of Laplace Matrices to the Normal Law

We denote by $\widehat{F}_n(x)$ the distribution function of the random variable $\widehat{\zeta}_J$ and
$$\widehat{\Delta}_n := \sup_x \big| \widehat{F}_n(x) - \Phi(x) \big|.$$
Lemma A12.
Under the conditions of Theorem 2, we have
$$\lim_{n\to\infty} \sup_x \big| \widehat{F}_n(x) - \Phi(x) \big| = 0.$$
Proof. 
We consider the characteristic function of $\widehat{\zeta}_J$, $\widehat{f}_n(t) = \frac{1}{n} \sum_{j=1}^n \mathbf{E} \exp\{it\widehat{\zeta}_j\}$. Introduce the following set of indices:
$$\widehat{\mathbb{M}} := \Big\{ j \in \{1, \dots, n\}: \frac{1}{\widehat{a}_n} \sum_{k=1}^n \Big| p_{jk}(1 - p_{jk}) - \frac{\widehat{a}_n}{n} \Big| \le \frac{1}{16} \Big\}.$$
As before, $\mathbb{A}^c$ denotes the complement of a set $\mathbb{A}$ and $|\mathbb{A}|$ its cardinality. Note that, by the second condition of Theorem 2,
$$\frac{|\widehat{\mathbb{M}}^c|}{n} \le 16\, \frac{1}{n \widehat{a}_n} \sum_{j=1}^n \sum_{k=1}^n \Big| p_{jk}(1 - p_{jk}) - \frac{\widehat{a}_n}{n} \Big| \to 0 \quad \text{as} \quad n \to \infty.$$
Note that, by the independence of the $A_{jk}$,
$$\widehat{f}_{nj}(t) := \mathbf{E} \exp\{it\widehat{\zeta}_j\} = \prod_{k=1}^n \mathbf{E} \exp\Big\{ \frac{it}{\sqrt{\widehat{a}_n}}\, (A_{jk} - p_{jk}) \Big\} =: \prod_{k=1}^n \widehat{f}_{njk}(t).$$
Applying Taylor's formula, we may write
$$\widehat{f}_{njk}(t) = 1 - \frac{t^2\, p_{jk}(1 - p_{jk})}{2 \widehat{a}_n} + \theta(t)\, \frac{|t|^3}{6 \widehat{a}_n^{3/2}}\, p_{jk}(1 - p_{jk}),$$
where $\theta(t)$ denotes some function such that $|\theta(t)| \le 1$.
Using this equality, we may write
$$\ln \widehat{f}_{njk}(t) = -\frac{t^2}{2 \widehat{a}_n}\, p_{jk}(1 - p_{jk}) + \theta_1(t)\, \frac{|t|^3}{6 \widehat{a}_n^{3/2}}\, p_{jk}(1 - p_{jk}) + \theta_2(t)\, \frac{t^4\, p^2_{jk}(1 - p_{jk})^2}{\widehat{a}_n^2} + \theta_3(t)\, \frac{t^6\, p^2_{jk}(1 - p_{jk})^2}{\widehat{a}_n^3}.$$
Summing this equality over $k = 1, \dots, n$, we obtain
$$\ln \widehat{f}_{nj}(t) = -\frac{t^2}{2} - \frac{t^2}{2}\, \frac{1}{\widehat{a}_n} \sum_{k=1}^n \Big( p_{jk}(1 - p_{jk}) - \frac{\widehat{a}_n}{n} \Big) + \theta_1(t)\, \frac{|t|^3}{6 \widehat{a}_n^{3/2}} \sum_{k=1}^n p_{jk}(1 - p_{jk}) + \theta_2(t)\, \frac{t^4}{\widehat{a}_n^2} \sum_{k=1}^n p^2_{jk}(1 - p_{jk})^2 + \theta_3(t)\, \frac{t^6}{\widehat{a}_n^3} \sum_{k=1}^n p^2_{jk}(1 - p_{jk})^2.$$
Note that, for $j \in \widehat{\mathbb{M}}$,
$$\frac{1}{\widehat{a}_n} \sum_{k=1}^n p_{jk}(1 - p_{jk}) \le \frac{17}{16},$$
and
$$\lim_{n\to\infty} \frac{|\widehat{\mathbb{M}}^c|}{n} = 0.$$
Similarly to (A42), we may write
$$\Big| \widehat{f}_n(t) - \exp\Big\{-\frac{t^2}{2}\Big\} \Big| \le \frac{2|\widehat{\mathbb{M}}^c|}{n} + \frac{t^2}{2}\, \frac{1}{n \widehat{a}_n} \sum_{j=1}^n \sum_{k=1}^n \Big| p_{jk}(1 - p_{jk}) - \frac{\widehat{a}_n}{n} \Big| + \frac{C|t|^3}{\sqrt{\widehat{a}_n}} + \frac{C t^4}{\widehat{a}_n} + \frac{C|t|^6}{\widehat{a}_n^2}.$$
This inequality implies that
$$\lim_{n\to\infty} \widehat{f}_n(t) = \exp\Big\{-\frac{t^2}{2}\Big\}.$$
Thus, Lemma A12 is proved. □
In what follows, we shall assume that z = u + i v is fixed.

Appendix C.2. The Bounds of $\frac{1}{n}\sum_{j=1}^n \mathbf{E}\,|\widehat{\varepsilon}_{j\nu}|$, for $\nu = 1, \dots, 5$

Lemma A13.
Under the conditions of Theorem 2, we have
$$\frac{1}{n} \sum_{j=1}^n \mathbf{E}\,|\widehat{\varepsilon}_{j1}| \le \Big( \frac{1}{4 \widehat{a}_n v^2} \Big)^{1/2}.$$
Proof. 
By the definition of $\widehat{\varepsilon}_{j1}$, we may write
$$\widehat{\varepsilon}_{j1} := \frac{1}{\widehat{a}_n} \sum_{l \ne k:\, l \ne j,\, k \ne j} [\widehat{\mathbf{R}}^{(j,0)}]_{kl} (A_{jk} - p_{jk})(A_{jl} - p_{jl}).$$
Applying Cauchy's inequality, we obtain
$$\frac{1}{n} \sum_{j=1}^n \mathbf{E}\,|\widehat{\varepsilon}_{j1}| \le \Big( \frac{1}{n} \sum_{j=1}^n \mathbf{E}\,|\widehat{\varepsilon}_{j1}|^2 \Big)^{1/2}.$$
Simple calculations show that
$$\frac{1}{n} \sum_{j=1}^n \mathbf{E}\,|\widehat{\varepsilon}_{j1}| \le \Big( \frac{1}{n \widehat{a}_n^2} \sum_{j=1}^n \sum_{k \ne j} \sum_{l \ne j} \mathbf{E}\,\big| \widehat{R}^{(j,0)}_{kl} \big|^2 p_{jk} p_{jl} (1 - p_{jk})(1 - p_{jl}) \Big)^{1/2} \le \Big( \frac{1}{4 n \widehat{a}_n^2} \sum_{j=1}^n \sum_{k \ne j} \sum_{l \ne j} \mathbf{E}\,\big| \widehat{R}^{(j,0)}_{kl} \big|^2 p_{jk}(1 - p_{jk}) \Big)^{1/2} \le \Big( \frac{1}{4 n \widehat{a}_n^2 v^2} \sum_{j=1}^n \sum_{k \ne j} p_{jk}(1 - p_{jk}) \Big)^{1/2} \le \Big( \frac{1}{4 \widehat{a}_n v^2} \Big)^{1/2}.$$
Thus, Lemma A13 is proved. □
Lemma A14.
Under the conditions of Theorem 2, we have
$$\frac{1}{n} \sum_{j=1}^n \mathbf{E}\,|\widehat{\varepsilon}_{j2}| \le \frac{1}{v \sqrt{\widehat{a}_n}}.$$
Proof. 
We recall the definition of $\widehat{\varepsilon}_{j2}$,
$$\widehat{\varepsilon}_{j2} = \frac{1}{\widehat{a}_n} \sum_{k:\, k \ne j} [\widehat{\mathbf{R}}^{(j,0)}]_{kk} \big( (A_{jk} - p_{jk})^2 - p_{jk}(1 - p_{jk}) \big).$$
Using the triangle inequality and Cauchy's inequality, we may write
$$\frac{1}{n} \sum_{j=1}^n \mathbf{E}\,|\widehat{\varepsilon}_{j2}| \le \Big( \frac{1}{n \widehat{a}_n^2} \sum_{j=1}^n \sum_{k=1}^n \mathbf{E}\,\big| \widehat{R}^{(j,0)}_{kk} \big|^2 p_{jk}(1 - p_{jk})(1 - 2p_{jk})^2 \Big)^{1/2} \le \Big( \frac{1}{\widehat{a}_n v^2}\, \frac{1}{n \widehat{a}_n} \sum_{j=1}^n \sum_{k=1}^n p_{jk}(1 - p_{jk}) \Big)^{1/2} = \Big( \frac{1}{\widehat{a}_n v^2} \Big)^{1/2}.$$
Thus, Lemma A14 is proved. □
Lemma A15.
Under the conditions of Theorem 2, we have
$$\frac{1}{n} \sum_{j=1}^n \mathbf{E}\,|\widehat{\varepsilon}_{j3}| \le \frac{1}{v}\, \frac{1}{n \widehat{a}_n} \sum_{j=1}^n \sum_{k=1}^n \Big| p_{jk}(1 - p_{jk}) - \frac{\widehat{a}_n}{n} \Big|.$$
Proof. 
By the definition of $\widehat{\varepsilon}_{j3}$, we have
$$\widehat{\varepsilon}_{j3} = \frac{1}{\widehat{a}_n} \sum_{k:\, k \ne j} [\widehat{\mathbf{R}}^{(j,0)}]_{kk} \Big( p_{jk}(1 - p_{jk}) - \frac{\widehat{a}_n}{n} \Big).$$
We may write
$$\frac{1}{n} \sum_{j=1}^n \mathbf{E}\,|\widehat{\varepsilon}_{j3}| \le \frac{1}{v}\, \frac{1}{n \widehat{a}_n} \sum_{j=1}^n \sum_{k=1}^n \Big| p_{jk}(1 - p_{jk}) - \frac{\widehat{a}_n}{n} \Big|.$$
Thus, Lemma A15 is proved. □
Lemma A16.
Under the conditions of Theorem 2, we have
$$\frac{1}{n} \sum_{j=1}^n \mathbf{E}\,|\widehat{\varepsilon}_{j4}| \le \frac{1}{nv} + \frac{1}{v^2 \sqrt{\widehat{a}_n}}.$$
Proof. 
Recall that
$$\widehat{\varepsilon}_{j4} = \frac{1}{n} \sum_{k:\, k \ne j} \widehat{R}^{(j,0)}_{kk} - \frac{1}{n} \sum_{k=1}^n \widehat{R}_{kk}.$$
Note that
$$\Big| \frac{1}{n} \operatorname{Tr} \widehat{\mathbf{R}}^{(j)} - \frac{1}{n} \operatorname{Tr} \widehat{\mathbf{R}}^{(j,0)} \Big| \le \frac{1}{nv}.$$
Furthermore,
$$\widehat{\mathbf{R}} - \widehat{\mathbf{R}}^{(j)} = -\widehat{\mathbf{R}}\,\widehat{\mathbf{D}}^{(j)} \widehat{\mathbf{R}}^{(j)}.$$
Recall that $\|\mathbf{A}\|$ denotes the operator norm of a matrix $\mathbf{A}$. The last equality and the inequality $\max\{\|\widehat{\mathbf{R}}\|, \|\widehat{\mathbf{R}}^{(j)}\|\} \le v^{-1}$ imply that
$$\Big| \frac{1}{n} \operatorname{Tr}\big( \widehat{\mathbf{R}} - \widehat{\mathbf{R}}^{(j)} \big) \Big| \le \big\| \widehat{\mathbf{R}} - \widehat{\mathbf{R}}^{(j)} \big\| \le \big\| \widehat{\mathbf{R}}\,\widehat{\mathbf{D}}^{(j)} \widehat{\mathbf{R}}^{(j)} \big\| \le v^{-2} \big\| \widehat{\mathbf{D}}^{(j)} \big\|.$$
Note that
$$\mathbf{E}\,\big\| \widehat{\mathbf{D}}^{(j)} \big\| \le \frac{1}{\sqrt{\widehat{a}_n}}\, \mathbf{E} \max_{1 \le k \le n} |A_{jk} - p_{jk}| \le \frac{1}{\sqrt{\widehat{a}_n}}.$$
Combining the last two inequalities, we obtain the claim. Thus, Lemma A16 is proved. □

Appendix C.3. Variance of $\frac{1}{n}\operatorname{Tr}\widehat{\mathbf{R}}$

In this section, we estimate the variance of $\widehat{m}_n(z) = \frac{1}{n}\operatorname{Tr}\widehat{\mathbf{R}}$, where $\widehat{\mathbf{R}} = \widehat{\mathbf{R}}(z) = (\widehat{\mathbf{L}} - z\mathbf{I})^{-1}$. We prove the following lemma.
Lemma A17.
For any $z = u + iv$ with $v > 0$, the following relation holds:
$$\lim_{n\to\infty} \mathbf{E}\Big| \frac{1}{n} \operatorname{Tr} \widehat{\mathbf{R}} - \mathbf{E}\, \frac{1}{n} \operatorname{Tr} \widehat{\mathbf{R}} \Big| = 0.$$
Proof. 
The proof of this lemma is similar to the proof of Lemma A2. We introduce the sequence of $\sigma$-algebras $\mathcal{M}_k$ generated by the random variables $A_{j,l}$ for $1 \le j, l \le k$. It is easy to see that $\mathcal{M}_k \subset \mathcal{M}_{k+1}$. Denote by $\mathbf{E}_k$ the conditional expectation with respect to the $\sigma$-algebra $\mathcal{M}_k$; for $k = 0$, $\mathbf{E}_0 = \mathbf{E}$. Introduce the random variables
$$\widehat{\gamma}_k := \mathbf{E}_k\Big( \frac{1}{n} \operatorname{Tr} \widehat{\mathbf{R}} \Big) - \mathbf{E}_{k-1}\Big( \frac{1}{n} \operatorname{Tr} \widehat{\mathbf{R}} \Big).$$
The sequence $\widehat{\gamma}_k$, $k = 1, \dots, n$, is a martingale difference and
$$\frac{1}{n} \operatorname{Tr} \widehat{\mathbf{R}} - \mathbf{E}\, \frac{1}{n} \operatorname{Tr} \widehat{\mathbf{R}} = \sum_{k=1}^n \widehat{\gamma}_k.$$
Furthermore, introduce the matrices $\widehat{\mathbf{L}}^{(k)}$ obtained from $\widehat{\mathbf{L}}$ by replacing the diagonal entries with $\widehat{L}^{(k)}_{ll} := \frac{1}{\sqrt{\widehat{a}_n}} \sum_{r:\, r \ne k,\, r \ne l} (A_{lr} - p_{lr})$. Denote by $\widehat{\mathbf{R}}^{(k)}(z)$ the corresponding resolvent matrix, $\widehat{\mathbf{R}}^{(k)}(z) = (\widehat{\mathbf{L}}^{(k)} - z\mathbf{I})^{-1}$. We introduce the matrix $\widehat{\mathbf{L}}^{(k,0)}$ obtained from $\widehat{\mathbf{L}}^{(k)}$ by deleting both the $k$-th row and the $k$-th column; the corresponding resolvent matrix is denoted by $\widehat{\mathbf{R}}^{(k,0)}$. We now have
$$\mathbf{E}_k \operatorname{Tr} \widehat{\mathbf{R}}^{(k,0)} = \mathbf{E}_{k-1} \operatorname{Tr} \widehat{\mathbf{R}}^{(k,0)}.$$
This allows us to write
$$\widehat{\gamma}_k = \mathbf{E}_k \frac{1}{n}\big( \operatorname{Tr} \widehat{\mathbf{R}} - \operatorname{Tr} \widehat{\mathbf{R}}^{(k)} \big) - \mathbf{E}_{k-1} \frac{1}{n}\big( \operatorname{Tr} \widehat{\mathbf{R}} - \operatorname{Tr} \widehat{\mathbf{R}}^{(k)} \big) + \mathbf{E}_k \frac{1}{n}\big( \operatorname{Tr} \widehat{\mathbf{R}}^{(k)} - \operatorname{Tr} \widehat{\mathbf{R}}^{(k,0)} \big) - \mathbf{E}_{k-1} \frac{1}{n}\big( \operatorname{Tr} \widehat{\mathbf{R}}^{(k)} - \operatorname{Tr} \widehat{\mathbf{R}}^{(k,0)} \big) =: \widehat{\gamma}^{(1)}_k + \widehat{\gamma}^{(2)}_k.$$
By the interlacing theorem,
$$\Big| \frac{1}{n} \operatorname{Tr} \widehat{\mathbf{R}}^{(k)} - \frac{1}{n} \operatorname{Tr} \widehat{\mathbf{R}}^{(k,0)} \Big| \le \frac{1}{nv}.$$
From here, we immediately obtain
$$|\widehat{\gamma}^{(2)}_k| \le \frac{2}{nv}$$
and
$$\sum_{k=1}^n \mathbf{E}\,|\widehat{\gamma}^{(2)}_k|^2 \le \frac{4}{n v^2}.$$
To complete the proof, it remains to show that
$$\lim_{n\to\infty} \sum_{k=1}^n \mathbf{E}\,|\widehat{\gamma}^{(1)}_k|^2 = 0.$$
Note that
$$\mathbf{E}\,|\widehat{\gamma}^{(1)}_k|^2 \le 2\, \mathbf{E}\Big| \frac{1}{n} \operatorname{Tr} \widehat{\mathbf{R}} - \frac{1}{n} \operatorname{Tr} \widehat{\mathbf{R}}^{(k)} \Big|^2.$$
Introduce the diagonal matrix $\widehat{\mathbf{D}}^{(k)}$ with diagonal entries
$$\widehat{D}^{(k)}_{ll} = \frac{1}{\sqrt{\widehat{a}_n}} (A_{kl} - p_{kl}), \quad l \ne k.$$
In this notation, we have
$$\frac{1}{n} \operatorname{Tr} \widehat{\mathbf{R}} - \frac{1}{n} \operatorname{Tr} \widehat{\mathbf{R}}^{(k)} = \frac{1}{n} \operatorname{Tr} \widehat{\mathbf{R}}\,\widehat{\mathbf{D}}^{(k)} \widehat{\mathbf{R}}^{(k)} = \frac{1}{n \sqrt{\widehat{a}_n}} \sum_{l \ne k,\, j \ne k} \widehat{R}_{lj} (A_{kj} - p_{kj}) \widehat{R}^{(k)}_{jl}.$$
This implies that
$$\sum_{k=1}^n \mathbf{E}\,|\widehat{\gamma}^{(1)}_k|^2 \le \frac{4}{n^2 \widehat{a}_n} \sum_{k=1}^n \mathbf{E}\Big| \sum_{j \ne k} (A_{kj} - p_{kj}) \sum_{l \ne k} \widehat{R}_{lj} \widehat{R}^{(k)}_{jl} \Big|^2.$$
We continue this inequality as follows:
$$\sum_{k=1}^n \mathbf{E}\,|\widehat{\gamma}^{(1)}_k|^2 \le \frac{8}{n^2 v^4 \widehat{a}_n} \sum_{k=1}^n \sum_{j \ne k} p_{jk}(1 - p_{jk}) \le \frac{8}{n v^4}.$$
Inequalities (A118) and (A123) complete the proof. Thus, Lemma A17 is proved. □

References

  1. Bordenave, C.; Caputo, P.; Chafaï, D. Spectrum of Markov Generators on Sparse Random Graphs. Commun. Pure Appl. Math. 2015, 67, 621–669.
  2. Bordenave, C.; Lelarge, M.; Massoulié, L. Non-backtracking spectrum of random graphs. Ann. Probab. 2018, 46, 1–71.
  3. Dumitriu, I.; Pal, S. Sparse regular random graphs: Spectral density and eigenvectors. Ann. Probab. 2012, 40, 2197–2235.
  4. Ding, X.; Jiang, T. Spectral Distributions of Adjacency and Laplacian Matrices of Random Graphs. Ann. Appl. Probab. 2010, 20, 2086–2117.
  5. Tran, L.V.; Vu, V.H.; Wang, K. Sparse random graphs: Eigenvalues and eigenvectors. Random Struct. Algorithms 2013, 42, 110–134.
  6. Brito, G.; Dumitriu, I.; Harris, K.D. Spectral gap in random bipartite biregular graphs and applications. Comb. Probab. Comput. 2022, 31, 229–267.
  7. Metz, F.L.; Silva, J.D. Spectral density of dense random networks and the breakdown of the Wigner semicircle law. Phys. Rev. Res. 2020, 2, 043116.
  8. Liang, S.; Obata, N.; Takanashi, S. Asymptotic spectral analysis of general Erdős–Rényi random graphs. In Noncommutative Harmonic Analysis with Applications to Probability; Banach Center Publications; Inst. Math., Polish Acad. Sci.: Warsaw, Poland, 2007; Volume 78, pp. 211–229.
  9. Erdős, L.; Knowles, A.; Yau, H.-T.; Yin, J. Spectral statistics of Erdős–Rényi graphs I: Local semicircle law. Ann. Probab. 2013, 41, 2279–2375.
  10. Erdős, L.; Knowles, A.; Yau, H.-T.; Yin, J. Spectral statistics of Erdős–Rényi graphs II: Eigenvalue spacing and the extreme eigenvalues. Commun. Math. Phys. 2012, 314, 587–640.
  11. Bryc, W.; Dembo, A.; Jiang, T. Spectral Measure of Large Random Hankel, Markov and Toeplitz Matrices. Ann. Probab. 2006, 34, 1–38.
  12. Chatterjee, A.; Hazra, R.S. Spectral properties for the Laplacian of a generalized Wigner matrix. arXiv 2021, arXiv:2011.07912v2.
  13. Biane, P. On the Free Convolution with a Semi-circular Distribution. Indiana Univ. Math. J. 1997, 46, 705–718.
  14. Zhu, Y. A Graphon Approach to Limiting Spectral Distribution of Wigner-Type Matrices. Random Struct. Algorithms 2020, 56, 251–279.
  15. Tikhomirov, A.N. On the Wigner law for Generalized Erdős–Rényi Random Graphs. Sib. Adv. Math. 2021, 31, 229–236.
  16. Götze, F.; Kösters, H.; Tikhomirov, A. Asymptotic Spectra of Matrix-Valued Functions of Independent Random Matrices and Free Probability. Random Matrices Theory Appl. 2014, 4, 1–85.
  17. Voiculescu, D. Symmetries of some reduced free product C*-algebras. In Operator Algebras and Their Connections with Topology and Ergodic Theory; Lecture Notes in Mathematics; Springer: Berlin, Germany, 1985; Volume 1132, pp. 556–588.
  18. Bercovici, H.; Voiculescu, D. Free convolution of measures with unbounded support. Indiana Univ. Math. J. 1993, 42, 733–773.
  19. Voiculescu, D. Lectures on free probability theory. In Lectures on Probability Theory and Statistics; Lecture Notes in Mathematics; Springer: Berlin, Germany, 2000; Volume 1738, pp. 279–349.
  20. Girko, V.L. Spectral theory of random matrices. Russ. Math. Surv. 1985, 40, 77–120.