Article

Percolation Problems on N-Ary Trees

Tianxiang Ren and Jinwen Wu *
Institute of Management, University of Science and Technology of China, Hefei 230026, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(11), 2571; https://doi.org/10.3390/math11112571
Submission received: 14 April 2023 / Revised: 24 May 2023 / Accepted: 1 June 2023 / Published: 4 June 2023
(This article belongs to the Special Issue Probability Distributions and Their Applications)

Abstract: Percolation theory has been flourishing in recent decades. Because of its simple formulation and rich structure, it is widely used in chemistry, ecology, physics, materials science, epidemiology, and the study of complex networks. Consider an infinite rooted $N$-ary tree in which each vertex is assigned an i.i.d. random variable. When the random variables follow a Bernoulli distribution, a path is called a head run if every random variable assigned along the path equals 1. We obtain a weak law of large numbers for the length of the longest head run. When the random variables follow a continuous distribution, a path is called an increasing path if the sequence of random variables along the path is increasing. Using Stein's method and other probabilistic tools, we prove that the length of the longest increasing path concentrates on three values with probability tending to one. We also consider the limiting behaviour of the longest increasing path in a special tree.

1. Introduction and Main Results

Since Broadbent and Hammersley introduced percolation models in [1], percolation has become a cornerstone of probability theory and statistical physics, with applications ranging from molecular dynamics to star formation. Standard percolation theory is concerned with the loss of global connectivity in a graph when vertices or bonds are randomly removed, as quantified by the probability that an infinite cluster of contiguous vertices exists.
Percolation models have applications in many fields, such as the prevention of infectious diseases via the SIR model, network robustness and fragility, the effective electrical resistance of disordered mixtures of materials, and the prediction of security price fluctuations in the stock market.
Because of these extensive applications, percolation theory has given rise to a variety of related models, such as first-passage percolation, invasion percolation, accessibility percolation, etc. (see, e.g., [2,3,4,5,6]). Here, we consider a site percolation problem and an accessibility percolation problem on $N$-ary trees.
Let $T(N)$ be an infinite rooted $N$-ary tree in which each vertex has exactly $N$ children; we denote the root by $o$. Each vertex $\sigma\in T(N)$ is assigned a random variable $X_{\sigma}$, called its fitness, and the fitness values are independent and identically distributed (i.i.d.). We say that $P = \sigma_1\sigma_2\cdots\sigma_{k+1}$ with length $l(P)=k$ is a path down the tree if $P$ starts at some vertex and descends into children until it stops at some node, i.e., $|\sigma_1| = |\sigma_2|-1 = \cdots = |\sigma_{k+1}|-k$, where $|\sigma|$ denotes the distance from $o$ to $\sigma$.
The accessibility percolation model is inspired by evolutionary biology. Imagine a population of some lifeform sharing the same genetic type. A particular genotype gives rise to $N$ new genotypes through mutations, which either replace the original wild genotype or disappear. Provided that natural selection is sufficiently strong, the former happens only if the new genotype has larger fitness. Thus, a surviving mutation path is one with increasing fitness values.
When the fitness values are Bernoulli random variables with $P(X_{\sigma}=0) = 1 - P(X_{\sigma}=1) = 1-p$, this is a site percolation model. We say that the path $P$ is a head run if $X_{\sigma_1} = X_{\sigma_2} = \cdots = X_{\sigma_{k+1}} = 1$. Harris [7] proved that the extinction probability of a branching process is one when the average number of offspring produced by a vertex is no more than one, which means that there is no head run from the root to the leaves when $p \le 1/N$ on the $N$-ary tree. A natural idea is therefore to study the longest head run starting from an arbitrary vertex. Let $T_n(N)$ be the subgraph of $T(N)$ induced by the set of nodes whose level does not exceed $n$, and let $\mathcal{P}_n$ be the set of paths down $T_n(N)$; then, the length of the longest head run from any vertex can be defined as
$$L_{N,n} := \max\{\, l(P) : P\in\mathcal{P}_n,\ P \text{ is a head run down } T_n(N) \,\}.$$
For $N=1$, $L_{1,n}$ has important applications in biology, reliability theory, finance, and nonparametric statistics (see, e.g., [8,9]). One of the most important results on the asymptotic behaviour of $L_{1,n}$ is the following Erdős–Rényi law (see [10]):
$$\frac{L_{1,n}}{\log n} \xrightarrow{\ a.s.\ } -\frac{1}{\log p}.$$
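The Erdős–Rényi rate is easy to see numerically. The following minimal Python sketch is our own illustration (the estimator, the sample sizes, and the choice $p=0.3$ are not from the paper); it counts the longest run of 1s, i.e., the number of consecutive successes, whose $O(1)$ difference from the edge-counting convention used later does not affect the limit:

```python
import math
import random

def longest_head_run(n, p, rng):
    """Length (number of terms) of the longest run of 1s in n i.i.d. Bernoulli(p) trials."""
    best = cur = 0
    for _ in range(n):
        if rng.random() < p:
            cur += 1
            best = max(best, cur)
        else:
            cur = 0
    return best

if __name__ == "__main__":
    rng = random.Random(0)
    p = 0.3
    for n in (10**3, 10**4, 10**5, 10**6):
        avg = sum(longest_head_run(n, p, rng) for _ in range(20)) / 20
        print(n, avg, -math.log(n) / math.log(p))   # the two columns should be close
```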
In addition to the law of large numbers, the possible limit distribution of $L_{1,n}$ has also been discussed. Goncharov [11] showed that $L_{1,n} + \log n/\log p$ possesses no limit distribution. In [12], Földes derived the distribution of the longest head run and obtained that, for $k\in\mathbb{Z}$,
$$P(L_{1,n} - [\log n] < k) = \exp\{-2^{-(k+1-\{\log n\})}\} + o(1),$$
where $\{\log n\} = \log n - [\log n]$ and $[\log n]$ is the integer part of $\log n$. A more accurate result on the limit distribution of $L_{1,n}$ was obtained in [13]. Later, Mao et al. [14] gave a large deviation theorem for $L_{1,n}$. In [15], an estimate of the accuracy of approximation in terms of the total variation distance was established for the first time.
For $N \ge 2$, we obtain the following law of large numbers for $L_{N,n}$.
Theorem 1.
For any $N\ge 2$ and $0<p<1$, as $n\to\infty$:
(a) 
if $0<p<1/N$, then
$$\frac{L_{N,n}}{n} \xrightarrow{\ P\ } -\log_p N,$$
where $\xrightarrow{\ P\ }$ denotes convergence in probability;
(b) 
if $1/N \le p < 1$, then
$$\frac{L_{N,n}}{n} \xrightarrow{\ a.s.\ } 1.$$
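To illustrate Theorem 1(a), the following Python sketch (our own, not part of the paper; the parameters $N=2$, $p=0.25$, $n=16$ are arbitrary) simulates the Bernoulli fitnesses on $T_n(N)$ level by level and computes $L_{N,n}$ by tracking, for each vertex, the number of consecutive 1s ending at it:

```python
import math
import random

def longest_head_run_tree(N, n, p, rng):
    """Simulate T_n(N) with i.i.d. Bernoulli(p) fitnesses and return L_{N,n}.
    runs[v] = number of consecutive 1s ending at vertex v along its ancestor chain."""
    runs = [1 if rng.random() < p else 0]      # level 0 (the root)
    best_vertices = max(runs)                   # longest all-ones chain seen so far
    for _ in range(n):                          # levels 1, ..., n
        nxt = []
        for r in runs:
            for _ in range(N):
                nr = r + 1 if rng.random() < p else 0
                nxt.append(nr)
                if nr > best_vertices:
                    best_vertices = nr
        runs = nxt
    return max(best_vertices - 1, 0)            # k+1 vertices <-> length k

if __name__ == "__main__":
    rng = random.Random(2023)
    N, p, n = 2, 0.25, 16                       # p < 1/N: case (a) of Theorem 1
    print("L_{N,n}/n =", longest_head_run_tree(N, n, p, rng) / n,
          "   limit -log(N)/log(p) =", -math.log(N) / math.log(p))
```

For these parameters the limiting constant is $-\log_p N = 0.5$; single samples at moderate $n$ typically land near it, although the convergence in Theorem 1(a) is only in probability.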
Next, we take the fitness values to be i.i.d. continuous random variables. We say that the path $P = \sigma_1\sigma_2\cdots\sigma_{k+1}$ is an increasing path if $X_{\sigma_1} < X_{\sigma_2} < \cdots < X_{\sigma_{k+1}}$. In [5], Nowak and Krug called this accessibility percolation and derived an asymptotically exact expression for the probability that there is at least one accessible path from the root to the leaves. In the evolutionary biology literature, such increasing paths are known as selectively accessible. The probability that there exists an increasing path from the root to the leaves is discussed in [5,16,17].
The number of increasing paths in other graphs has also been studied extensively. It has been considered on the hypercube in [18,19,20]. De Silva et al. [21] studied increasing paths in edge-ordered graphs. Increasing paths on infinite spherically symmetric trees were studied in [22], and Arman et al. [23] studied increasing paths in countable graphs. As for the number of accessible vertices, Hu et al. [24] studied it on random rooted labelled trees.
Because we are only concerned with the relative order of the fitness values along a path, changing the exact distribution does not affect the results as long as the distribution is continuous. Without loss of generality, we assume that all the random variables are mutually independent and uniformly distributed on $[0,1]$. We can then consider the length of the longest increasing path, defined as
$$\tilde{L}_{N,n} = \max\{\, l(P) : P \text{ is an increasing path down } T_n(N) \,\}.$$
It was shown in [25] that $\tilde{L}_{1,n}$ may take three to five values with a probability close to one for large $n$, by techniques based on the Borel–Cantelli lemma. Chryssaphinou and Vaggelatou [26] improved the result of [25] and obtained that
$$\lim_{n\to\infty} P\left([f_n]-1 \le \tilde{L}_{1,n} \le [f_n]+1\right) = 1,$$
where $f_n = \frac{\log n}{b_n} - \frac{1}{2}$ and $b_n$ is the solution of the equation $b_n e^{b_n} = e^{-1}\log n$. Later, Hu et al. [27] proved the weak law of large numbers for $\tilde{L}_{N,n}$ when $N\ge 2$. In this paper, we extend this and obtain the limit distribution of $\tilde{L}_{N,n}$ for $N\ge 2$.
Theorem 2.
For any $N\ge 2$,
$$\lim_{n\to\infty} P\left([f_{N,n}]-1 \le \tilde{L}_{N,n} \le [f_{N,n}]+1\right) = 1,$$
where $f_{N,n} = \frac{n\log N}{b_{N,n}} - \frac{1}{2}$ and $b_{N,n}$ is the solution of the equation $b_{N,n}e^{b_{N,n}} = e^{-1}n\log N$.
Remark 1.
From the definition of $b_{N,n}$, it is easy to see that
$$b_{N,n} = \log n - \log\log n + \log\log N - 1 + o(1).$$
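The quantities $b_{N,n}$ and $f_{N,n}$ are easy to compute numerically, and $\tilde{L}_{N,n}$ can be simulated by the same level-by-level recursion as above (the increasing run ending at a vertex restarts whenever its fitness is not larger than its parent's). The sketch below is our own illustration under these assumptions; the Newton solver and the parameter choices are not from the paper:

```python
import math
import random

def solve_b(z, iters=60):
    """Solve b * exp(b) = z (z > 0) by Newton's method; b_{N,n} uses z = n*log(N)/e."""
    b = math.log(z) if z > math.e else 1.0
    for _ in range(iters):
        b -= (b * math.exp(b) - z) / ((b + 1.0) * math.exp(b))
    return b

def f_threshold(N, n):
    """f_{N,n} = n*log(N)/b_{N,n} - 1/2."""
    b = solve_b(n * math.log(N) / math.e)
    return n * math.log(N) / b - 0.5

def longest_increasing_path(N, n, rng):
    """Longest increasing downward path in T_n(N) with i.i.d. U(0,1) fitnesses.
    runs[v] = number of vertices of the longest increasing run ending at v."""
    vals, runs, best = [rng.random()], [1], 1
    for _ in range(n):
        nvals, nruns = [], []
        for x, r in zip(vals, runs):
            for _ in range(N):
                y = rng.random()
                nr = r + 1 if y > x else 1
                nvals.append(y)
                nruns.append(nr)
                best = max(best, nr)
        vals, runs = nvals, nruns
    return best - 1          # length in edges = number of vertices - 1

if __name__ == "__main__":
    rng = random.Random(2023)
    N, n = 2, 16
    print("f_{N,n} =", round(f_threshold(N, n), 3),
          "   simulated length =", longest_increasing_path(N, n, rng))
```

By Theorem 2, the simulated value should fall within one unit of $[f_{N,n}]$ with probability close to one for large $n$; at $n=16$ the asymptotics are only indicative.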
Corollary 1.
Let $f_{N,n}$ be defined as in Theorem 2 and let $\{f_{N,n}\} = f_{N,n} - [f_{N,n}]$ be the fractional part of $f_{N,n}$. Assume that $(n_k, k\ge 1)$ and $(n_k', k\ge 1)$ are subsequences satisfying
$$\limsup_{k\to\infty}\{f_{N,n_k}\} < 1, \qquad \liminf_{k\to\infty}\{f_{N,n_k'}\} > 0.$$
Then, we have
$$\lim_{k\to\infty} P\left([f_{N,n_k}]-1 \le \tilde{L}_{N,n_k} \le [f_{N,n_k}]\right) = 1 \qquad (1)$$
and
$$\lim_{k\to\infty} P\left([f_{N,n_k'}] \le \tilde{L}_{N,n_k'} \le [f_{N,n_k'}]+1\right) = 1. \qquad (2)$$
Furthermore, if $0 < \liminf_{k\to\infty}\{f_{N,n_k''}\} \le \limsup_{k\to\infty}\{f_{N,n_k''}\} < 1$ holds for a subsequence $(n_k'', k\ge 1)$, then
$$\lim_{k\to\infty} P\left(\tilde{L}_{N,n_k''} = [f_{N,n_k''}]\right) = 1. \qquad (3)$$
Corollary 2.
Let $f_{N,n}$ and $\{f_{N,n}\}$ be defined as in Corollary 1. Assume that $(n_k, k\ge 1)$ and $(n_k', k\ge 1)$ are subsequences satisfying
$$\{f_{N,n_k}\}\log f_{N,n_k} \to a \in [0,\infty], \qquad \{f_{N,n_k}\} \to 0,$$
and
$$\left(1-\{f_{N,n_k'}\}\right)\log f_{N,n_k'} \to a' \in [0,\infty], \qquad \{f_{N,n_k'}\} \to 1.$$
Then, we have
$$\lim_{k\to\infty} P\left(\tilde{L}_{N,n_k} = [f_{N,n_k}]-1\right) = 1 - \lim_{k\to\infty} P\left(\tilde{L}_{N,n_k} = [f_{N,n_k}]\right) = \exp\left\{-\frac{N e^{a}}{\sqrt{2\pi}(N-1)}\right\};$$
$$\lim_{k\to\infty} P\left(\tilde{L}_{N,n_k'} = [f_{N,n_k'}]\right) = 1 - \lim_{k\to\infty} P\left(\tilde{L}_{N,n_k'} = [f_{N,n_k'}]+1\right) = \exp\left\{-\frac{N e^{-a'}}{\sqrt{2\pi}(N-1)}\right\}.$$
We also consider a deterministic rooted tree $T'(n)$ with arity decreasing from $n$ to 1, as in [18]: the root is connected to $n$ first-level nodes, each first-level node is connected to $n-1$ second-level nodes, and so on. Each vertex $\sigma\in T'(n)$ is assigned a continuous random variable $Y_{\sigma}$, and the variables are independent and identically distributed. The length of the longest increasing path in $T'(n)$ is then defined as
$$L_n' = \max\{\, l(P) : P \text{ is an increasing path down } T'(n) \,\}.$$
Theorem 3.
When $k$ is such that $n/(k!(k-1)!) \to 0$, we have
$$P\left(n-k \le L_n' < n\right) \to 1, \qquad n\to\infty.$$
Remark 2.
When each vertex $\sigma\in T'(n)$ is assigned a Bernoulli random variable, one can also consider the longest head run, as in Theorem 1. By analogy, one can define $a_m'$ and $\mathrm{Var}(T_k')$ in $T'(n)$; we find that $a_m'$ is more complicated than $a_m$, and we could not obtain a useful upper bound on $\mathrm{Var}(T_k')$. Interested scholars may try to solve this problem.
In the following Section 2, Section 3 and Section 4, we prove our main results stated above.

2. The Longest Head Run in $T_n(N)$

In this section, $N\ge 2$ is a fixed positive integer.
Let $\mathcal{P}_{n,k}$ be the set of paths down $T_n(N)$ with length $k$; it is clear that
$$M_{n,k} := \#\mathcal{P}_{n,k} = \sum_{j=0}^{n-k} N^{k+j} = \frac{N^{n+1}-N^{k}}{N-1}. \qquad (4)$$
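For completeness, the closed form in (4) is just a geometric series: there are $N^{j}$ possible starting vertices at level $j$ and $N^{k}$ descending paths of length $k$ from each of them, so
$$M_{n,k} = \sum_{j=0}^{n-k}N^{j}\cdot N^{k} = N^{k}\,\frac{N^{n-k+1}-1}{N-1} = \frac{N^{n+1}-N^{k}}{N-1}.$$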
Lemma 1.
For any path $P\in\mathcal{P}_{n,k}$, denote by $V(P)$ the vertex set of $P$ and define
$$a_m := \#\left\{(P,\tilde{P}) : \#\big(V(P)\cap V(\tilde{P})\big) = m,\ P,\tilde{P}\in\mathcal{P}_{n,k}\right\};$$
then we have
$$a_m \le 2M_{n,k}N^{k-m+1}, \qquad m = 1,2,\ldots,k,k+1.$$
Remark 3.
Here, $(P,\tilde{P})$ is an ordered pair; that is, $(P,\tilde{P})$ is counted as different from $(\tilde{P},P)$.
Proof of Lemma 1.
Given a path $P = \sigma_1\sigma_2\cdots\sigma_{k+1}\in\mathcal{P}_{n,k}$ with $|\sigma_1| = j$, let $B(m,j)$ denote the number of paths $\tilde{P}\in\mathcal{P}_{n,k}$ that intersect $P$ in $m$ $(1\le m\le k+1)$ vertices; then $a_m$ is the sum of $B(m,j)$ over all $P\in\mathcal{P}_{n,k}$ with $|\sigma_1| = j$ and $j\in[0,n-k]$, and hence,
$$a_m = \sum_{j=0}^{n-k} N^{k+j}B(m,j). \qquad (5)$$
For $m = k+1$, it is clear that $B(m,j) = 1$ and $a_{k+1} = M_{n,k}$. For $1\le m\le k$: when $k-m+1\le j\le n-2k+m-1$, we have
$$B(m,j) = (k-m+1)(N-1)N^{k-m} + N^{k-m+1} + \sum_{i=1}^{k-m}(N-1)N^{k-m-i} + 1 = (k-m+1)(N-1)N^{k-m} + N^{k-m+1} + N^{k-m} = (k-m)(N-1)N^{k-m} + 2N^{k-m+1};$$
when $0\le j\le k-m$,
$$B(m,j) = (k-m+1)(N-1)N^{k-m} + N^{k-m+1} + \sum_{i=1}^{j}(N-1)N^{k-m-i} = (k-m+1)(N-1)N^{k-m} + N^{k-m+1} + N^{k-m} - N^{k-m-j} = (k-m)(N-1)N^{k-m} + 2N^{k-m+1} - N^{k-m-j};$$
similarly, when $n-2k+m\le j\le n-k$,
$$B(m,j) = (n-k-j)(N-1)N^{k-m} + N^{k-m+1}.$$
These, together with (5), imply that
$$a_m = \sum_{j=0}^{k-m} N^{k+j}B(m,j) + \sum_{j=k-m+1}^{n-2k+m-1} N^{k+j}B(m,j) + \sum_{j=n-2k+m}^{n-k} N^{k+j}B(m,j) := A_1 + A_2 + A_3 - A_4,$$
where
$$A_1 = \sum_{j=0}^{n-k} N^{2k-m+j+1} = \frac{N^{n+k-m+2}-N^{2k-m+1}}{N-1},$$
$$A_2 = \sum_{j=0}^{n-2k+m-1}\left[(k-m)(N-1)N^{2k-m+j} + N^{2k-m+1+j}\right] = (k-m)\left(N^{n}-N^{2k-m}\right) + \frac{N^{n+1}-N^{2k-m+1}}{N-1},$$
$$A_3 = \sum_{j=n-2k+m}^{n-k}(n-k-j)(N-1)N^{2k-m+j} = \sum_{j=0}^{k-m}(N-1)\,j\,N^{n+k-m-j} = \frac{N^{n+k-m+1}-N^{n+1}}{N-1} - (k-m)N^{n},$$
$$A_4 = \sum_{j=0}^{k-m} N^{2k-m} = (k-m+1)N^{2k-m}.$$
In the above calculation, we have used the identity $\sum_{i=1}^{n} i q^{i} = \frac{q(1-q^{n})}{(1-q)^{2}} - \frac{n q^{n+1}}{1-q}$. Thus, we have
$$a_m = (N-1)^{-1}\left(N^{n+k-m+2} + N^{n+k-m+1} - 2N^{2k-m+1}\right) - (2k-2m+1)N^{2k-m} \le 2M_{n,k}N^{k+1-m},$$
and the proof of Lemma 1 is completed. □
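The finite-sum identity quoted above follows by differentiating the geometric series; for completeness,
$$\sum_{i=1}^{n} i q^{i} = q\,\frac{d}{dq}\sum_{i=0}^{n}q^{i} = q\,\frac{d}{dq}\,\frac{1-q^{n+1}}{1-q} = \frac{q(1-q^{n})}{(1-q)^{2}} - \frac{n\,q^{n+1}}{1-q}, \qquad q\ne 1.$$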
Define $T_{n,k}$ to be the number of head runs in $\mathcal{P}_{n,k}$:
$$T_{n,k} = \sum_{P\in\mathcal{P}_{n,k}} I(P \text{ is a head run}),$$
where $I(\cdot)$ denotes the indicator function. It is clear that
$$E(T_{n,k}) = \sum_{P\in\mathcal{P}_{n,k}} P(P \text{ is a head run}) = M_{n,k}\,p^{k+1} = \frac{N^{n+1}-N^{k}}{N-1}\,p^{k+1}.$$
Next, we estimate $\mathrm{Var}(T_{n,k})$.
Lemma 2.
For any $N\ge 2$,
$$\mathrm{Var}(T_{n,k}) \le \begin{cases} 2(1-Np)^{-1}E(T_{n,k}), & \text{if } 0<p<1/N;\\ 2(k+1)E(T_{n,k}), & \text{if } p = 1/N;\\ 2N^{k+1}p^{k+1}(Np-1)^{-1}E(T_{n,k}), & \text{if } 1/N<p<1.\end{cases}$$
Proof. 
We say that two paths $P$ and $\tilde{P}$ are vertex disjoint if $V(P)\cap V(\tilde{P}) = \emptyset$. Note that $I(P \text{ is a head run})$ and $I(\tilde{P} \text{ is a head run})$ are independent if $P$ and $\tilde{P}$ are vertex disjoint. Then, it follows from Lemma 1 that
$$\mathrm{Var}(T_{n,k}) = \sum_{P,\tilde{P}\in\mathcal{P}_{n,k}}\left(P(P \text{ and } \tilde{P} \text{ are head runs}) - P(P \text{ is a head run})^{2}\right) \le \sum_{\substack{P,\tilde{P}\in\mathcal{P}_{n,k}\\ V(P)\cap V(\tilde{P})\ne\emptyset}} P(P \text{ and } \tilde{P} \text{ are head runs}) = \sum_{m=1}^{k+1}\;\sum_{\substack{P,\tilde{P}\in\mathcal{P}_{n,k}\\ \#(V(P)\cap V(\tilde{P})) = m}} P(P \text{ and } \tilde{P} \text{ are head runs}) = \sum_{m=1}^{k+1} a_m\,p^{2k+2-m} \le \sum_{m=1}^{k+1} 2M_{n,k}N^{k-m+1}p^{2k+2-m},$$
which yields Lemma 2. □
Proof of Theorem 1.
When $0<p<1/N$, for any sequence $k_n$,
$$E(T_{n,k_n}) = \frac{N^{n+1}-N^{k_n}}{N-1}\,p^{k_n+1} = \exp\{k_n\log p + n\log N + O(1)\}. \qquad (6)$$
For any $\epsilon>0$, we take $k_n = \left[\left(-\frac{\log N}{\log p}+\epsilon\right)n\right]$ and $\tilde{k}_n = \left[\left(-\frac{\log N}{\log p}-\epsilon\right)n\right]$. Then, by applying (6), we have
$$P(L_{N,n}\ge k_n) = P(T_{n,k_n}\ge 1) \le E(T_{n,k_n}) \to 0, \qquad n\to\infty, \qquad (7)$$
and furthermore, by Lemma 2 and Chebyshev's inequality,
$$P(L_{N,n}<\tilde{k}_n) = P(T_{n,\tilde{k}_n}=0) \le P\left(\big|T_{n,\tilde{k}_n}-E(T_{n,\tilde{k}_n})\big|\ge E(T_{n,\tilde{k}_n})\right) \le \frac{\mathrm{Var}(T_{n,\tilde{k}_n})}{\big(E(T_{n,\tilde{k}_n})\big)^{2}} \le \frac{2}{(1-Np)\,E(T_{n,\tilde{k}_n})} \to 0, \qquad n\to\infty. \qquad (8)$$
When $p\ge 1/N$, let $c_n = [(1-\epsilon)n]+1$; by the same method, as $n\to\infty$, we have
$$P(L_{N,n}<c_n) = P(T_{n,c_n}=0) \le \frac{\mathrm{Var}(T_{n,c_n})}{\big(E(T_{n,c_n})\big)^{2}} \le \begin{cases} \dfrac{2(c_n+1)}{E(T_{n,c_n})}, & \text{if } p = 1/N;\\[1ex] \dfrac{2N^{c_n+1}p^{c_n+1}}{(Np-1)E(T_{n,c_n})}, & \text{if } p > 1/N.\end{cases}$$
Furthermore, we have
$$\sum_{n=1}^{\infty}P(L_{N,n}<c_n) \le \begin{cases}\displaystyle \sum_{n=1}^{\infty}\frac{2N^{2}\big((1-\epsilon)n+2\big)}{N^{\epsilon n}} < \infty, & \text{if } p = 1/N;\\[1ex] \displaystyle \sum_{n=1}^{\infty}\frac{2N^{(1-\epsilon)n+2}}{(Np-1)N^{n}} < \infty, & \text{if } p > 1/N.\end{cases}$$
Thus, when $p\ge 1/N$, for any $\epsilon>0$, we have
$$\sum_{n=1}^{\infty}P\left(\left|\frac{L_{N,n}}{n}-1\right|>\epsilon\right) \le \sum_{n=1}^{\infty}P(L_{N,n}<c_n) < \infty. \qquad (9)$$
Hence, from (7)–(9) and the Borel–Cantelli lemma, we obtain Theorem 1. □

3. The Longest Increasing Path in $T_n(N)$

Lemma 3.
Let $X_1,\ldots,X_n$ be indicator variables with $P(X_i=1)=p_i$, $W = \sum_{i=1}^{n}X_i$, and $\lambda = EW = \sum_i p_i$. For each $i$, let $N_i \subseteq \{1,\ldots,n\}$ be such that $X_i$ is independent of $\{X_j : j\notin N_i\}$. If $p_{ij} := E[X_iX_j]$, then
$$d_{TV}\big(W,\mathrm{Po}(\lambda)\big) \le \min\{1,\lambda^{-1}\}\left(\sum_{i=1}^{n}\sum_{j\in N_i}p_ip_j + \sum_{i=1}^{n}\sum_{j\in N_i\setminus\{i\}}p_{ij}\right),$$
where $\mathrm{Po}(\lambda)$ denotes a random variable having the Poisson distribution with parameter $\lambda$.
Proof. 
See Theorem 4.7 in [28]. □
Lemma 4.
Let $x\in\mathbb{R}$ and $f_{N,n} = \frac{n\log N}{b_{N,n}} - \frac{1}{2}$, where $b_{N,n}$ is the solution of the equation $b_{N,n}e^{b_{N,n}} = e^{-1}n\log N$. Then,
$$\lim_{n\to+\infty}\frac{N^{n}}{\Gamma(f_{N,n}+x+1)} = \lim_{n\to+\infty}\frac{1}{\sqrt{2\pi}}\,e^{-x\log(f_{N,n}+x)} = \begin{cases}+\infty, & x<0,\\ 0, & x>0,\end{cases}$$
where $\Gamma(y) = \int_{0}^{\infty}e^{-t}t^{y-1}\,dt$ denotes the Gamma function at the point $y$.
Proof. 
Stirling's formula shows that
$$\Gamma(f_{N,n}+x+1) \sim \sqrt{2\pi}\,(f_{N,n}+x)^{f_{N,n}+x+1/2}\,e^{-(f_{N,n}+x)}.$$
Hence, it suffices to show that
$$\lim_{n\to\infty}\frac{N^{n}e^{f_{N,n}+x}}{\sqrt{2\pi}\,(f_{N,n}+x)^{f_{N,n}+x+1/2}} = \begin{cases}+\infty, & x<0,\\ 0, & x>0.\end{cases}$$
We have
$$\frac{N^{n}e^{f_{N,n}+x}}{\sqrt{2\pi}\,(f_{N,n}+x)^{f_{N,n}+x+1/2}} = \frac{N^{n}\exp\left\{-f_{N,n}\{\log(f_{N,n}+x)-1\}\right\}}{\sqrt{2\pi(f_{N,n}+x)}}\;e^{-x\{\log(f_{N,n}+x)-1\}}.$$
By the proof of Lemma 1 in [26], we have
$$\lim_{n\to\infty}\frac{N^{n}\exp\left\{-f_{N,n}\{\log(f_{N,n}+x)-1\}\right\}}{\sqrt{2\pi(f_{N,n}+x)}} = \frac{e^{-x}}{\sqrt{2\pi}}.$$
Since $\log(f_{N,n}+x)\to+\infty$ as $n\to\infty$, we obtain
$$\lim_{n\to\infty}\frac{N^{n}e^{f_{N,n}+x}}{\sqrt{2\pi}\,(f_{N,n}+x)^{f_{N,n}+x+1/2}} = \begin{cases}+\infty, & x<0,\\ 0, & x>0.\end{cases}$$
The proof is completed. □
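The dichotomy in Lemma 4 can be checked numerically. The following short Python sketch is our own illustration; it evaluates the logarithm of $N^{n}/\Gamma(f_{N,n}+x+1)$ via `math.lgamma` to avoid overflow, at $x=\pm 0.5$:

```python
import math

def solve_b(z, iters=60):
    """Solve b * exp(b) = z by Newton's method."""
    b = math.log(z) if z > math.e else 1.0
    for _ in range(iters):
        b -= (b * math.exp(b) - z) / ((b + 1.0) * math.exp(b))
    return b

def log_ratio(N, n, x):
    """log( N^n / Gamma(f_{N,n} + x + 1) )."""
    b = solve_b(n * math.log(N) / math.e)
    f = n * math.log(N) / b - 0.5
    return n * math.log(N) - math.lgamma(f + x + 1.0)

if __name__ == "__main__":
    N = 2
    for n in (10**2, 10**3, 10**4):
        print(n, "x=-0.5:", round(log_ratio(N, n, -0.5), 2),
              "  x=+0.5:", round(log_ratio(N, n, +0.5), 2))
    # the first column drifts to +infinity and the second to -infinity,
    # so the ratio itself tends to +infinity for x < 0 and to 0 for x > 0
```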
Similarly to the proof of Theorem 1, we define $\tilde{T}_{n,k}$ to be the number of increasing paths in $\mathcal{P}_{n,k}$:
$$\tilde{T}_{n,k} = \sum_{P\in\mathcal{P}_{n,k}} I(P \text{ is increasing}).$$
With $M_{n,k}$ defined as in (4), it is clear that
$$E(\tilde{T}_{n,k}) = \sum_{P\in\mathcal{P}_{n,k}} P(P \text{ is increasing}) = \frac{M_{n,k}}{(k+1)!}. \qquad (10)$$
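The factor $1/(k+1)!$ is simply exchangeability along a fixed path: the $k+1$ i.i.d. continuous fitness values are almost surely distinct, and each of their $(k+1)!$ relative orderings is equally likely, so
$$P(P \text{ is increasing}) = P\left(X_{\sigma_1} < X_{\sigma_2} < \cdots < X_{\sigma_{k+1}}\right) = \frac{1}{(k+1)!}.$$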
Lemma 5.
Let $\lambda_1 = E(\tilde{T}_{n,k})$. Then, for any $N\ge 2$ and $q = 8N/(2k+3)\in(0,1)$, we have
$$d_{TV}\left(\tilde{T}_{n,k},\mathrm{Po}(\lambda_1)\right) \le \frac{(k+2)N^{k}}{(k+1)!} + \frac{4N}{(k+2)(1-q)}.$$
Proof. 
For each path $P\in\mathcal{P}_{n,k}$, define $N_P = \{P' : P'\in\mathcal{P}_{n,k},\ P' \text{ depends on } P\}$ (i.e., $P'$ and $P$ are not vertex disjoint). Then, by applying Lemma 3, we have
$$d_{TV}\left(\tilde{T}_{n,k},\mathrm{Po}(\lambda_1)\right) \le \min\{1,\lambda_1^{-1}\}(H_1+H_2), \qquad (11)$$
where
$$H_1 = \sum_{P\in\mathcal{P}_{n,k}}\sum_{P'\in N_P}\frac{1}{\big((k+1)!\big)^{2}}, \qquad H_2 = \sum_{P\in\mathcal{P}_{n,k}}\sum_{\tilde{P}\in N_P\setminus\{P\}} P(P \text{ and } \tilde{P} \text{ are increasing}).$$
For $H_1$: for $1\le m\le k$, recall that, by the proof of Lemma 1,
$$\max_{j}B(m,j) = (k+1-m)(N-1)N^{k-m} + N^{k-m+1} + N^{k-m},$$
which, together with $B(k+1,j) = 1$, implies that
$$\max_{P\in\mathcal{P}_{n,k}}|N_P| = \max_{j}\sum_{m=1}^{k+1}B(m,j) \le \sum_{m=1}^{k+1}\max_{j}B(m,j) = \sum_{m=1}^{k}\left[(k-m)(N-1)N^{k-m}+2N^{k-m+1}\right] + 1 = (k+1)N^{k} + \sum_{i=1}^{k-1}N^{i} + 1 \le (k+2)N^{k}.$$
Together with (10), this yields
$$H_1 \le \frac{M_{n,k}}{\big((k+1)!\big)^{2}}\max_{P\in\mathcal{P}_{n,k}}|N_P| \le \frac{(k+2)\lambda_1 N^{k}}{(k+1)!}. \qquad (12)$$
To bound $H_2$, notice that for any two paths $P = x_1\cdots x_{k+1}$, $\tilde{P} = \tilde{x}_1\cdots\tilde{x}_{k+1}\in\mathcal{P}_{n,k}$ with $|x_1|\le|\tilde{x}_1|$ that are not vertex disjoint, there exist integers $s,t$ such that $1\le s\le t\le k+1$ and $x_{t-s+i} = \tilde{x}_i$ if and only if $1\le i\le s$. By Lemma 3.1 in [27], conditionally on $X_{x_1} = x$,
$$P(P \text{ and } \tilde{P} \text{ are increasing} \mid X_{x_1}=x) = \frac{(1-x)^{2k-s+1}\,(2k+2-s-t)!}{(2k-s+1)!\,(k+1-s)!\,(k+1-t)!}.$$
Then we have
$$P(P \text{ and } \tilde{P} \text{ are increasing}) = \int_{0}^{1}\frac{(1-x)^{2k-s+1}\,(2k+2-s-t)!}{(2k-s+1)!\,(k+1-s)!\,(k+1-t)!}\,dx = \frac{(2k+2-s-t)!}{(2k+2-s)!\,(k+1-s)!\,(k+1-t)!}. \qquad (13)$$
Because $\frac{(2k+2-s-t)!}{(2k+2-s)!(k+1-s)!(k+1-t)!}$ $(1\le s\le k+1)$ is nonincreasing with respect to $t$, the bound
$$\frac{(2k+2-s-t)!}{(2k+2-s)!\,(k+1-s)!\,(k+1-t)!} \le \frac{(2k+2-2s)!}{(2k+2-s)!\,(k+1-s)!\,(k+1-s)!} \qquad (14)$$
holds for every $t\in[s,k+1]$. Thus, it follows from Lemma 1 and (14) that
$$H_2 = \sum_{P\in\mathcal{P}_{n,k}}\sum_{\tilde{P}\in N_P\setminus\{P\}} P(P \text{ and } \tilde{P} \text{ are increasing}) \le \sum_{s=1}^{k} a_s\,\frac{(2k+2-2s)!}{(2k+2-s)!\,(k+1-s)!\,(k+1-s)!} \le 2\lambda_1\sum_{s=1}^{k}\frac{(2k+2-2s)!\,(k+1)!\,N^{k+1-s}}{(2k+2-s)!\,(k+1-s)!\,(k+1-s)!} := 2\lambda_1\sum_{s=1}^{k} b_s.$$
Notice that
$$\frac{b_{s-1}}{b_s} = \frac{(4k-4s+6)N}{(k-s+2)(2k-s+3)},$$
which attains its maximum at $s = \frac{2k+3-\sqrt{2k+3}}{2}$. Hence,
$$\frac{b_{s-1}}{b_s} \le \frac{8\sqrt{2k+3}\,N}{\left(1+\sqrt{2k+3}\right)\left(2k+3+\sqrt{2k+3}\right)} \le q.$$
Then,
$$H_2 \le 2\lambda_1\sum_{i=0}^{k-1}q^{i}\,b_k \le \frac{4N\lambda_1}{(k+2)(1-q)},$$
which, together with (11) and (12), completes the proof. □
Proof of Theorem 2.
For simplicity, we write $D(N,k) = \frac{(k+2)N^{k}}{(k+1)!} + \frac{4N}{(k+2)(1-q)}$. It is obvious that
$$P(\tilde{L}_{N,n}<k) = P(\tilde{T}_{n,k}=0).$$
Thus, using Lemma 5, we can derive upper and lower bounds for the distribution of $\tilde{L}_{N,n}$:
$$e^{-\lambda_1} - D(N,k) \le P(\tilde{L}_{N,n}<k) \le e^{-\lambda_1} + D(N,k). \qquad (15)$$
Notice that
$$P\left([f_{N,n}]-1 \le \tilde{L}_{N,n} \le [f_{N,n}]+1\right) = P\left(\tilde{L}_{N,n} < [f_{N,n}]+2\right) - P\left(\tilde{L}_{N,n} < [f_{N,n}]-1\right);$$
then, by the definition of $f_{N,n}$, the condition of Lemma 5 is satisfied for $k = [f_{N,n}]+2$ and $k = [f_{N,n}]-1$. Combined with (15), we obtain
$$P\left(\tilde{L}_{N,n} < [f_{N,n}]+2\right) \ge \exp\left\{-\frac{N^{n+1}-N^{[f_{N,n}]+1}}{(N-1)\,([f_{N,n}]+2)!}\right\} - D\big(N,[f_{N,n}]+2\big);$$
$$P\left(\tilde{L}_{N,n} < [f_{N,n}]-1\right) \le \exp\left\{-\frac{N^{n+1}-N^{[f_{N,n}]-2}}{(N-1)\,([f_{N,n}]-1)!}\right\} + D\big(N,[f_{N,n}]-1\big). \qquad (16)$$
Because $f_{N,n}\to+\infty$, the error terms $D(N,[f_{N,n}]+2)$ and $D(N,[f_{N,n}]-1)$ tend to 0 as $n\to\infty$. By Lemma 4, we can verify that
$$\lim_{n\to\infty}\frac{N^{n+1}-N^{[f_{N,n}]+1}}{(N-1)\,([f_{N,n}]+2)!} = \frac{N}{N-1}\lim_{n\to\infty}\frac{N^{n}}{([f_{N,n}]+2)!} = \frac{N}{N-1}\lim_{n\to\infty}\frac{N^{n}}{\Gamma([f_{N,n}]+3)} = 0$$
and
$$\lim_{n\to\infty}\frac{N^{n+1}-N^{[f_{N,n}]-2}}{(N-1)\,([f_{N,n}]-1)!} = \frac{N}{N-1}\lim_{n\to\infty}\frac{N^{n}}{([f_{N,n}]-1)!} = \frac{N}{N-1}\lim_{n\to\infty}\frac{N^{n}}{\Gamma([f_{N,n}])} = +\infty,$$
which, together with (16), completes the proof of the theorem. □
Proof of Corollary 1.
We only prove (1); the proof of (2) is along the same lines, and assertion (3) is an immediate consequence of (1) and (2). Using (15), we get
$$P\left(\tilde{L}_{N,n_k} < [f_{N,n_k}]+1\right) \ge \exp\left\{-\frac{N^{n_k+1}-N^{[f_{N,n_k}]}}{(N-1)\,([f_{N,n_k}]+1)!}\right\} - D\big(N,[f_{N,n_k}]+1\big). \qquad (17)$$
Because $(n_k,k\ge 1)$ is a strictly increasing sequence such that $\limsup_{k\to\infty}\{f_{N,n_k}\}<1$, we have
$$\lim_{k\to\infty}\frac{N^{n_k+1}-N^{[f_{N,n_k}]}}{(N-1)\,([f_{N,n_k}]+1)!} = \frac{N}{N-1}\lim_{k\to\infty}\frac{N^{n_k}}{([f_{N,n_k}]+1)!} = \frac{N}{N-1}\lim_{k\to\infty}\frac{N^{n_k}}{\Gamma([f_{N,n_k}]+2)} = 0,$$
which, combined with Theorem 2 and (17), completes the proof of Corollary 1. □
Proof of Corollary 2.
If $\{f_{N,n_k}\}\log f_{N,n_k}\to a\in[0,\infty]$ and $\{f_{N,n_k}\}\to 0$, then
$$\lim_{k\to\infty}\frac{N^{n_k+1}-N^{[f_{N,n_k}]-1}}{(N-1)\,[f_{N,n_k}]!} = \frac{N}{N-1}\lim_{k\to\infty}\frac{N^{n_k}}{[f_{N,n_k}]!} = \frac{N}{N-1}\lim_{k\to\infty}\frac{N^{n_k}}{\Gamma([f_{N,n_k}]+1)} = \frac{N}{\sqrt{2\pi}(N-1)}\,e^{a}.$$
From (15), we have
$$P\left(\tilde{L}_{N,n_k} < [f_{N,n_k}]\right) \le \exp\left\{-\frac{N^{n_k+1}-N^{[f_{N,n_k}]-1}}{(N-1)\,[f_{N,n_k}]!}\right\} + D\big(N,[f_{N,n_k}]\big),$$
which means
$$\lim_{k\to\infty}P\left(\tilde{L}_{N,n_k} < [f_{N,n_k}]\right) = \exp\left\{-\frac{N e^{a}}{\sqrt{2\pi}(N-1)}\right\},$$
and hence, (1) implies that
$$\lim_{k\to\infty}P\left(\tilde{L}_{N,n_k} = [f_{N,n_k}]-1\right) = 1 - \lim_{k\to\infty}P\left(\tilde{L}_{N,n_k} = [f_{N,n_k}]\right) = \exp\left\{-\frac{N e^{a}}{\sqrt{2\pi}(N-1)}\right\}.$$
If $\left(1-\{f_{N,n_k'}\}\right)\log f_{N,n_k'}\to a'\in[0,\infty]$ and $\{f_{N,n_k'}\}\to 1$, then
$$\lim_{k\to\infty}\frac{N^{n_k'+1}-N^{[f_{N,n_k'}]}}{(N-1)\,([f_{N,n_k'}]+1)!} = \frac{N}{N-1}\lim_{k\to\infty}\frac{N^{n_k'}}{([f_{N,n_k'}]+1)!} = \frac{N}{N-1}\lim_{k\to\infty}\frac{N^{n_k'}}{\Gamma([f_{N,n_k'}]+2)} = \frac{N}{\sqrt{2\pi}(N-1)}\,e^{-a'},$$
and from (17), we have
$$\lim_{k\to\infty}P\left(\tilde{L}_{N,n_k'} < [f_{N,n_k'}]+1\right) = \exp\left\{-\frac{N e^{-a'}}{\sqrt{2\pi}(N-1)}\right\}.$$
Hence, (2) implies that
$$\lim_{k\to\infty}P\left(\tilde{L}_{N,n_k'} = [f_{N,n_k'}]\right) = 1 - \lim_{k\to\infty}P\left(\tilde{L}_{N,n_k'} = [f_{N,n_k'}]+1\right) = \exp\left\{-\frac{N e^{-a'}}{\sqrt{2\pi}(N-1)}\right\}.$$
The proof of Corollary 2 is completed. □

4. The Longest Increasing Path in $T'(n)$

Proof of Theorem 3.
By Stirling's formula,
$$\frac{n}{k!\,(k-1)!} \sim \frac{n\,e^{2k-1}}{2\pi\,k^{k}(k-1)^{k}} = \exp\{\log n + 2k - k\log k - k\log(k-1) + O(1)\}.$$
When $k = \log n$, we have
$$\lim_{n\to\infty}\frac{n}{k!\,(k-1)!} = 0.$$
Thus, it suffices to show that Theorem 3 holds for $k\le\log n$ with $n/(k!(k-1)!)\to 0$.
In the following proof, we assume $k\le\log n$.
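As a quick sanity check of this reduction (our own, with the particular choice $k=\lceil\log n\rceil$), one can evaluate $\log\big(n/(k!(k-1)!)\big)$ with `math.lgamma`:

```python
import math

def log_ratio(n):
    """log( n / (k! (k-1)!) ) evaluated at k = ceil(log n)."""
    k = max(2, math.ceil(math.log(n)))
    return math.log(n) - math.lgamma(k + 1) - math.lgamma(k)

if __name__ == "__main__":
    for n in (10**2, 10**4, 10**6, 10**8):
        print(n, round(log_ratio(n), 1))   # increasingly negative, so n/(k!(k-1)!) -> 0
```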
Let $\mathcal{P}_k'$ be the set of paths down $T'(n)$ with length $k$; then
$$M_{n,k}' := \#\mathcal{P}_k' = \sum_{j=0}^{n-k}\frac{n!}{(n-k-j)!} = \sum_{j=0}^{n-k}\frac{n!}{j!};$$
because $\sum_{j=0}^{\infty}\frac{1}{j!} = e$, we obtain $M_{n,k}' = O(n!)$.
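The count can be read off level by level: a path of length $k$ starting at level $j$ is determined by its starting vertex ($n(n-1)\cdots(n-j+1)$ choices, by the decreasing arities) and by its $k$ descending steps ($(n-j)(n-j-1)\cdots(n-j-k+1)$ choices), so
$$\#\{P\in\mathcal{P}_k' : |\sigma_1(P)| = j\} = \frac{n!}{(n-j)!}\cdot\frac{(n-j)!}{(n-j-k)!} = \frac{n!}{(n-k-j)!},$$
and summing over $j = 0,\ldots,n-k$ (equivalently, substituting $j\mapsto n-k-j$) gives $M_{n,k}' = \sum_{j=0}^{n-k}n!/j!$.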
Defining $T_k'$ to be the number of increasing paths in $\mathcal{P}_k'$:
$$T_k' = \sum_{P\in\mathcal{P}_k'} I(P \text{ is increasing}),$$
it is easy to obtain
$$E(T_k') = \sum_{P\in\mathcal{P}_k'} P(P \text{ is increasing}) = \frac{M_{n,k}'}{(k+1)!},$$
which yields that
$$P(L_n'\ge n) = P(T_n'\ge 1) \le E(T_n') \to 0, \qquad n\to\infty.$$
Next, we estimate $\mathrm{Var}(T_{n-k}')$.
Let $A$ denote the sum of $P(P \text{ and } \tilde{P} \text{ are increasing})$ over pairs $P,\tilde{P}\in\mathcal{P}_{n-k}'$ such that $V(\tilde{P})\cap V(P)\ne\emptyset$ and $\tilde{P}$ does not start above $P$; then
$$\mathrm{Var}(T_{n-k}') = \sum_{P,\tilde{P}\in\mathcal{P}_{n-k}'}\left(P(P \text{ and } \tilde{P} \text{ are increasing}) - P^{2}(P \text{ is increasing})\right) \le \sum_{\substack{P,\tilde{P}\in\mathcal{P}_{n-k}'\\ V(P)\cap V(\tilde{P})\ne\emptyset}} P(P \text{ and } \tilde{P} \text{ are increasing}) \le 2A := 2\big(A(n,0)+A(n,1)+\cdots+A(n,k)\big),$$
where $A(n,i)$ $(0\le i\le k)$ denotes the part of $A$ with $|\sigma_1(P)| = i$, $\sigma_1(P)$ being the starting vertex of $P$. Let $A_j(n,i)$ denote the part of $A(n,i)$ in which the starting vertex of $\tilde{P}$ lies $j$ levels below the starting vertex of $P$; then
$$A(n,0) = A_0(n,0) + A_1(n,0) + \cdots + A_k(n,0). \qquad (18)$$
For $A_0(n,0)$: there are $n!/k!$ paths $P$ satisfying $P\in\mathcal{P}_{n-k}'$ and $|\sigma_1(P)| = 0$. Given such a $P$, there are $(n-s)\,(n-s)!/k!$ paths $\tilde{P}$ satisfying $\tilde{P}\in\mathcal{P}_{n-k}'$, $\sigma_1(\tilde{P}) = \sigma_1(P)$, and $\#\big(V(P)\cap V(\tilde{P})\big) = s$ $(1\le s\le n-k)$. Then, combined with (13), we have
$$A_0(n,0) = \frac{n!}{k!}\left[\sum_{s=1}^{n-k}\frac{(2n-2k+2-2s)!\,(n-s)\,(n-s)!}{(2n-2k+2-s)!\,\big((n-k+1-s)!\big)^{2}\,k!} + \frac{1}{(n-k+1)!}\right] = \frac{n!}{k!}\left[\frac{1}{(n-k+1)!} + \sum_{s=1}^{n-k}\frac{(2s)!\,(k+s-1)\,(k+s-1)!}{(n-k+1+s)!\,(s!)^{2}\,k!}\right] \le \frac{C\,n!\,n^{k}}{k!\,(n-k+1)!},$$
where $C$ is a constant whose value may change from line to line, and the term $1/(n-k+1)!$ corresponds to $\tilde{P} = P$. In the second equality, we replaced $s$ with $n-k+1-s$. The last inequality uses the fact that $\frac{(n-k+1)!}{n^{k}}\sum_{s=1}^{n-k}\frac{(2s)!\,(k+s-1)\,(k+s-1)!}{(n-k+1+s)!\,(s!)^{2}\,k!}$ remains bounded as $n\to\infty$. Similarly, we can obtain
$$A_i(n,0) \le \frac{C\,n!\,n^{k}}{k!\,(n-k+1)!},$$
which, together with (18), yields
$$A(n,0) \le \frac{C\,n!\,n^{k}}{(k-1)!\,(n-k+1)!}.$$
In the same way, we can prove that
$$A(n,m) \le \frac{C\,n!\,n^{k-m}}{(k-1)!\,(n-k+1)!} \qquad (0\le m\le k),$$
implying that
$$\mathrm{Var}(T_{n-k}') \le \frac{C\,n!\,\left(n^{0}+n^{1}+\cdots+n^{k}\right)}{(k-1)!\,(n-k+1)!} \le \frac{C\,n!\,n^{k}\left(1+\frac{k(k-1)}{n-k+1}\right)}{(k-1)!\,(n-k+1)!} \le \frac{C\,n!\,n^{k}}{(k-1)!\,(n-k+1)!},$$
where the last inequality uses $k\le\log n$. Furthermore, by Chebyshev's inequality,
$$P(L_n' < n-k) = P(T_{n-k}' = 0) \le \frac{\mathrm{Var}(T_{n-k}')}{\big(E(T_{n-k}')\big)^{2}} \le \frac{C\,(n-k+1)}{k!\,(k-1)!} \to 0, \qquad n\to\infty.$$
Thus, we obtain Theorem 3.

5. Conclusions

Two percolation models on $N$-ary trees are considered in this paper. For the site percolation model, we establish the law of large numbers for the longest head run; this result can also be interpreted through Galton–Watson branching processes as follows: when the average number of offspring produced by each individual is greater than one, the process survives with positive probability; otherwise, it dies out. However, this paper does not obtain the limiting distribution of $L_{N,n}$, because of the complicated dependence between head runs. Researchers might make use of Stein's method for sums of dependent random variables to prove further results in the future. For the accessibility percolation model, $\tilde{L}_{1,N^{n}}$ and $\tilde{L}_{N,n}$ asymptotically concentrate on the same three values with probability tending to one, which suggests a connection between these two random variables. In addition, related asymptotic properties of the hypercube could be derived by exploiting its connection to the deterministic tree $T'(n)$, so that our results on $T'(n)$ may be used to establish its asymptotic behaviour.

Author Contributions

Conceptualization, T.R.; Methodology, T.R.; Validation, J.W.; Formal analysis, T.R.; Writing—original draft, T.R.; Writing—review & editing, J.W.; Visualization, J.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Broadbent, S.R.; Hammersley, J.M. Percolation processes. I. Crystals and mazes. Proc. Camb. Philos. Soc. 1957, 53, 629–641. [Google Scholar] [CrossRef]
  2. Angel, O.; Goodman, J.; Merle, M. Scaling limit of the invasion percolation cluster on a regular tree. Ann. Probab. 2013, 41, 229–261. [Google Scholar] [CrossRef] [Green Version]
  3. Bertacchi, D.; Rodriguez, P.M.; Zucca, F. Galton-Watson processes in varying environment and accessibility percolation. Braz. J. Probab. Stat. 2020, 34, 613–618. [Google Scholar] [CrossRef]
  4. Finn, T.; Stauffer, A. Coexistence in competing first passage percolation with conversion. Ann. Appl. Probab. 2022, 32, 4459–4480. [Google Scholar] [CrossRef]
  5. Nowak, S.; Krug, J. Accessibility percolation on n-trees. Europhys. Lett. EPL 2013, 101, 66004. [Google Scholar] [CrossRef] [Green Version]
  6. Schmiegelt, B.; Krug, J. Accessibility percolation on Cartesian power graphs. J. Math. Biol. 2023, 86, 46. [Google Scholar] [CrossRef]
  7. Harris, T.E. The Theory of Branching Processes; Springer: Berlin, Germany, 1963. [Google Scholar]
  8. Balakrishnan, N.; Koutras, M.V. Runs and Scans with Applications; Wiley: New York, NY, USA, 2002. [Google Scholar]
  9. Schwager, S.J. Run probabilities in sequences of Markov-dependent trials. J. Amer. Statist. Assoc. 1983, 78, 168–175. [Google Scholar] [CrossRef]
  10. Erdős, P.; Rényi, A. On a new law of large numbers. J. Analyse Math. 1970, 23, 103–111. [Google Scholar] [CrossRef]
  11. Goncharov, V.L. On the field of combinatory analysis. Am. Math. Soc. Transl. 1962, 19, 1–46. [Google Scholar]
  12. Földes, A. On the limit distribution of the longest head run. Mat. Lapok 1975, 26, 105–116. [Google Scholar]
  13. Székely, G.; Tusnády, G. Generalized fibonacci numbers, and the number of “pure heads”. Mat. Lapok 1976, 27, 147–151. [Google Scholar]
  14. Mao, Y.; Wang, F.; Wu, X. Large deviation behavior for the longest head run in an IID Bernoulli sequence. J. Theoret. Probab. 2015, 28, 259–268. [Google Scholar] [CrossRef]
  15. Novak, S.Y. On the length of the longest head run. Statist. Probab. Lett. 2017, 130, 111–114. [Google Scholar] [CrossRef] [Green Version]
  16. Chen, X. Increasing paths on N-ary trees. arXiv 2014, arXiv:1403.0843. [Google Scholar]
  17. Roberts, M.I.; Zhao, L.Z. Increasing paths in regular trees. Electron. Commun. Probab. 2013, 18, 87. [Google Scholar] [CrossRef] [Green Version]
  18. Berestycki, J.; Brunet, E.; Shi, Z. The number of accessible paths in the hypercube. Bernoulli 2016, 22, 653–680. [Google Scholar] [CrossRef]
  19. Berestycki, J.; Brunet, E.; Shi, Z. Accessibility percolation with backsteps. ALEA Lat. Am. J. Probab. Math. Stat. 2017, 14, 45–62. [Google Scholar] [CrossRef]
  20. Hegarty, P.; Martinsson, A. On the existence of accessible paths in various models of fitness landscapes. Ann. Appl. Probab. 2014, 24, 1375–1395. [Google Scholar] [CrossRef]
  21. De Silva, J.; Molla, T.; Pfender, F.; Retter, T.; Tait, M. Increasing paths in edge-ordered graphs: The hypercube and random graph. Electron. J. Combin. 2016, 23, 2.15. [Google Scholar]
  22. Coletti, C.F.; Gava, R.J.; Rodríguez, P.M. On the existence of accessibility in a tree-indexed percolation model. Physica A 2018, 492, 382–388. [Google Scholar] [CrossRef] [Green Version]
  23. Arman, A.; Elliott, B.; Rödl, V. Increasing paths in countable graphs. J. Combin. Theory Ser. A 2021, 183, 105491. [Google Scholar] [CrossRef]
  24. Hu, Z.S.; Li, Z.; Feng, Q.Q. Accessibility percolation on random rooted labeled trees. J. Appl. Probab. 2019, 56, 533–545. [Google Scholar] [CrossRef]
  25. Révész, P. Three problems on the lengths of increasing runs. Stochastic Process. Appl. 1983, 15, 169–179. [Google Scholar] [CrossRef] [Green Version]
  26. Chryssaphinou, O.; Vaggelatou, E. Compound Poisson approximation for long increasing sequences. J. Appl. Probab. 2001, 38, 449–463. [Google Scholar] [CrossRef]
  27. Hu, Z.; Wu, J.; Dong, L. Accessibility percolation on N-ary trees. J. Univ. Sci. Technol. China 2022, 52, 2. [Google Scholar] [CrossRef]
  28. Ross, N. Fundamentals of Stein's method. Probab. Surv. 2011, 8, 210–293. [Google Scholar]