Article

The Limit Properties of Maxima of Stationary Gaussian Sequences Subject to Random Replacing

Yuwei Li 1 and Zhongquan Tan 2,*
1 Department of Mathematics, Zhejiang Normal University, Jinhua 321004, China
2 College of Data Science, Jiaxing University, Jiaxing 314001, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(14), 3155; https://doi.org/10.3390/math11143155
Submission received: 26 June 2023 / Revised: 12 July 2023 / Accepted: 17 July 2023 / Published: 18 July 2023
(This article belongs to the Special Issue New Advances and Applications of Extreme Value Theory)

Abstract

In applications, missing data may occur at random, and related data are often used to replace the missing values. This article explores the influence of the degree of dependence of a stationary Gaussian sequence on the joint asymptotic distribution of the maximum of the sequence and the maximum of the sequence subject to random replacement.

1. Introduction

Missing data are a common phenomenon in applied fields. When data are missing, the most common approach is to treat the available sample as an incomplete sample with a random sample size, which makes it necessary to study the properties of incomplete samples with random sample sizes. In the field of extreme value theory, refs. [1,2] first studied the effect of missing data on the extremes of the original sequences. Let $\{X_n, n \geq 1\}$ be a sequence of stationary random variables with marginal distribution function $F(x)$, and suppose that some of the random variables in the sequence are missing at random. Let $\varepsilon_k$ be the indicator of the event that the random variable $X_k$ is observed. For the random sequence $\{X_n, n \geq 1\}$, define its random missing sequence as
$\tilde{X}_n(\varepsilon) = \varepsilon_n X_n + (1 - \varepsilon_n) x_F, \quad n \geq 1,$ (1)
where $x_F = \inf\{x : F(x) > 0\}$. Suppose that the indicator sequence $\varepsilon = \{\varepsilon_n, n \geq 1\}$ is independent of $\{X_n, n \geq 1\}$, and let $S_n = \sum_{k \leq n} \varepsilon_k$ be the number of observed variables, satisfying
$S_n/n \xrightarrow{P} \lambda, \quad \text{as } n \to \infty,$ (2)
where $\lambda$ is a random or nonrandom variable.
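For concreteness, the following minimal Python sketch (our own illustration; the Bernoulli mechanism and all variable names are assumptions, chosen as the simplest model satisfying (2)) builds the random missing sequence (1) and checks that $S_n/n$ is close to $\lambda$:
```python
import numpy as np

rng = np.random.default_rng(0)

n = 100_000
lam = rng.uniform()                # a random lambda in [0, 1]
eps = rng.random(n) < lam          # given lambda, eps_k i.i.d. Bernoulli(lambda)

X = rng.standard_normal(n)         # a stand-in stationary sequence
x_F = -np.inf                      # x_F = inf{x : F(x) > 0} for the normal law
X_tilde = np.where(eps, X, x_F)    # random missing sequence, as in (1)

print(lam, eps.sum() / n)          # S_n/n should be close to lambda, as in (2)
```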
When $\lambda \in [0, 1]$ is a constant, under a global dependence condition $D(u_n, v_n)$ (see [2]) and the well-known local dependence condition $D'(u_n)$ (see, e.g., [3]), ref. [2] derived the joint asymptotic distribution of the maximum of a stationary sequence and the maximum of its random missing sequence and proved, for any $x < y \in \mathbb{R}$,
$\lim_{n \to \infty} P\big(M_n(\tilde{X}(\varepsilon)) \leq \tilde{a}_n^{-1} x + \tilde{b}_n, \ M_n(X) \leq \tilde{a}_n^{-1} y + \tilde{b}_n\big) = G^{\lambda}(x) G^{1-\lambda}(y),$ (3)
with $\tilde{a}_n > 0$ and $\tilde{b}_n \in \mathbb{R}$, where $G$ is one of the three types of extreme value distributions (see, e.g., [3]), $M_n(\tilde{X}(\varepsilon)) = \max\{\tilde{X}_k(\varepsilon), k = 1, 2, \ldots, n\}$, and $M_n(X) = \max\{X_k, k = 1, 2, \ldots, n\}$.
The result in (3) has been extended to many other cases; we refer to [4,5] for Gaussian cases, [6,7] for almost sure limit theorems, [8,9] for autoregressive processes, [10] for non-stationary random fields, [11] for linear processes, and [12,13] for point processes.
When $\lambda \in [0, 1]$ is a random variable, ref. [14] proved a similar result: for any $x < y \in \mathbb{R}$,
$\lim_{n \to \infty} P\big(M_n(\tilde{X}(\varepsilon)) \leq \tilde{a}_n^{-1} x + \tilde{b}_n, \ M_n(X) \leq \tilde{a}_n^{-1} y + \tilde{b}_n\big) = E\big[G^{\lambda}(x) G^{1-\lambda}(y)\big].$ (4)
Ref. [15] extended the result (4) to weakly and strongly dependent Gaussian sequences. Let $\{X_n, n \geq 1\}$ be a sequence of stationary Gaussian variables with correlation function $r_n = E(X_1 X_{n+1})$. If $r_n$ satisfies
$\lim_{n \to \infty} r_n \log n = \gamma \in [0, \infty),$ (5)
then, for any $x < y$, ref. [15] proved that
$\lim_{n \to \infty} P\big(M_n(\tilde{X}(\varepsilon)) \leq a_n^{-1} x + b_n, \ M_n(X) \leq a_n^{-1} y + b_n\big) = E \int_{-\infty}^{+\infty} \exp\big(-\lambda g(x, z, \gamma) - (1 - \lambda) g(y, z, \gamma)\big)\, d\Phi(z),$ (6)
where $\Phi(x)$ denotes the distribution function of a standard (mean 0 and variance 1) normal random variable, $g(x, z, \gamma) = e^{-x - \gamma + \sqrt{2\gamma}\, z}$, and the normalizing constants $a_n$ and $b_n$ are defined as
$a_n = (2 \log n)^{1/2}, \qquad b_n = (2 \log n)^{1/2} - \frac{\log\log n + \log 4\pi}{2 (2 \log n)^{1/2}}.$ (7)
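The constants in (7) can be checked numerically in the simplest case $\gamma = 0$: for i.i.d. standard Gaussian variables, $P(M_n \leq a_n^{-1} x + b_n) = \Phi(u_n(x))^n$ should approach the Gumbel value $\exp(-e^{-x})$. A small sketch (helper name is ours; note that the convergence is only logarithmic in $n$):
```python
import numpy as np
from scipy.stats import norm

def norming_constants(n):
    """a_n and b_n from (7)."""
    L = 2.0 * np.log(n)
    a_n = np.sqrt(L)
    b_n = np.sqrt(L) - (np.log(np.log(n)) + np.log(4.0 * np.pi)) / (2.0 * np.sqrt(L))
    return a_n, b_n

n, x = 10**6, 1.0
a_n, b_n = norming_constants(n)
u_n = x / a_n + b_n
print(norm.cdf(u_n) ** n)       # exact P(M_n <= u_n(x)) for i.i.d. N(0,1)
print(np.exp(-np.exp(-x)))      # limiting Gumbel value exp(-e^{-x})
```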
If $r_n$ satisfies:
  • (A1) $r_n$ is convex with $r_n = o(1)$;
  • (A2) $(r_n \log n)^{-1}$ is monotone with $(r_n \log n)^{-1} = o(1)$,
then, for any $x, y \in \mathbb{R}$, ref. [15] proved that
$\lim_{n \to \infty} P\big(M_n(\tilde{X}(\varepsilon)) \leq (r_n)^{1/2} x + (1 - r_n)^{1/2} b_n, \ M_n(X) \leq (r_n)^{1/2} y + (1 - r_n)^{1/2} b_n\big) = \Phi(\min\{x, y\}).$ (8)
For more related studies of this situation, we refer to [16,17,18,19].
In applications, besides treating the available sample as an incomplete sample with a random sample size, one often uses another set of samples to replace the randomly missing ones, so as to obtain a relatively complete sample. However, this raises the question of to what extent the sample subject to random replacement can stand in for the original sample. To answer this question, we must study the relationship between the original sample and the sample subject to random replacement. In the field of extreme value theory, this means studying the asymptotic relationship between the maximum of the original sample and the maximum of the sample subject to random replacement.
For the random sequence $\{X_n, n \geq 1\}$, define the sequence subject to random replacement as
$X_n(\varepsilon) = \varepsilon_n X_n + (1 - \varepsilon_n) \hat{X}_n,$ (9)
where the sequence $\{\hat{X}_n, n \geq 1\}$ is an independent copy of $\{X_n, n \geq 1\}$. When the sequence $\{X_n, n \geq 1\}$ is strongly mixing, ref. [20] proved that the maximum of the sequence and the maximum of the sequence subject to random replacement are asymptotically dependent. Under the dependence conditions $D(u_n, v_n)$ and $D'(u_n)$, ref. [21] studied the joint asymptotic distribution of the maximum of a stationary sequence and its maximum subject to random replacement and proved, for $x, y \in \mathbb{R}$,
$\lim_{n \to \infty} P\big(M_n(X(\varepsilon)) \leq \tilde{a}_n^{-1} x + \tilde{b}_n, \ M_n(X) \leq \tilde{a}_n^{-1} y + \tilde{b}_n\big) = G(\min\{x, y\})\, E\, G^{1-\lambda}(\max\{x, y\}),$
where $M_n(X(\varepsilon)) = \max\{X_k(\varepsilon), 1 \leq k \leq n\}$.
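In simulation, the sequence (9) is obtained by overwriting the unobserved positions with the corresponding entries of an independent copy; a minimal sketch of the construction (names and the Bernoulli model for $\varepsilon$ are our assumptions):
```python
import numpy as np

rng = np.random.default_rng(1)
n, lam = 10_000, 0.7

eps = rng.random(n) < lam
X = rng.standard_normal(n)        # original sequence (i.i.d. here for simplicity)
X_hat = rng.standard_normal(n)    # independent copy
X_eps = np.where(eps, X, X_hat)   # sequence subject to random replacement, as in (9)

print(X.max(), X_eps.max())       # the two maxima whose joint law is studied below
```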
It is worth noting that, in [21], the random replacement sequence was an independent copy of the original sequence, so the two sequences had the same dependence structure. However, in practical applications, we may not know the dependence structure of the original sequence, so it is necessary to explore the impact of the dependence structure of the original sequence itself and that of the random replacement sequence itself on their maxima.
The main purpose of this article is to explore the influence of the dependence structures of an original sequence and its random replacement sequence on the joint asymptotic distribution of their maxima in a Gaussian setting. The advantages of the Gaussian setting are as follows: the dependence structure of a Gaussian sequence is characterized by its correlation function; in the field of extreme value theory, the strength of dependence of a Gaussian sequence can be characterized by the speed at which its correlation function converges to 0; and the conclusions for Gaussian sequences can be readily generalized, for example, to chi-square sequences, Gaussian order statistics sequences, and so on.
The rest of this paper is organized as follows: The main results of the paper are given in Section 2, and their proofs are collected in Section 3. Some conclusions are presented in Section 4.

2. Main Results

In the remainder of this paper, let $\{X_n, n \geq 1\}$ and $\{\hat{X}_n, n \geq 1\}$ be stationary standard Gaussian sequences with correlation functions $r_n$ and $\hat{r}_n$, respectively. Let $\varepsilon = \{\varepsilon_n, n \geq 1\}$ be a sequence of indicators and $S_n = \sum_{k \leq n} \varepsilon_k$. Suppose that (2) holds for some random variable $\lambda \in [0, 1]$ a.s. In addition, suppose that $\{X_n, n \geq 1\}$, $\{\hat{X}_n, n \geq 1\}$, and $\{\varepsilon_n, n \geq 1\}$ are independent of each other, and suppose that $U$ and $V$ are independent standard Gaussian random variables, which are independent of $\lambda$. Let $a_n$ and $b_n$ be defined as in (7).
Theorem 1.
Suppose that $r_n$ and $\hat{r}_n$ satisfy $\lim_{n \to \infty} r_n \log n = \gamma_1 \in [0, \infty)$ and $\lim_{n \to \infty} \hat{r}_n \log n = \gamma_2 \in [0, \infty)$, respectively. For any $x, y \in \mathbb{R}$, we have
$\lim_{n \to \infty} P\big(M_n(X(\varepsilon)) \leq a_n^{-1} x + b_n, \ M_n(X) \leq a_n^{-1} y + b_n\big) = E\Big[\exp\big(-(1 - \lambda) g(x, V, \gamma_2)\big) \exp\big(-\lambda g(\min\{x, y\}, U, \gamma_1) - (1 - \lambda) g(y, U, \gamma_1)\big)\Big].$
Corollary 1.
(i). Suppose that $r_n$ and $\hat{r}_n$ satisfy $\lim_{n \to \infty} r_n \log n = 0$ and $\lim_{n \to \infty} \hat{r}_n \log n = 0$, respectively. For any $x, y \in \mathbb{R}$, we have
$\lim_{n \to \infty} P\big(M_n(X(\varepsilon)) \leq a_n^{-1} x + b_n, \ M_n(X) \leq a_n^{-1} y + b_n\big) = \exp\big(-e^{-\min\{x, y\}}\big)\, E \exp\big(-(1 - \lambda) e^{-\max\{x, y\}}\big).$
(ii). Suppose that $r_n$ and $\hat{r}_n$ satisfy $\lim_{n \to \infty} r_n \log n = 0$ and $\lim_{n \to \infty} \hat{r}_n \log n = \gamma \in (0, \infty)$, respectively. For any $x, y \in \mathbb{R}$, we have
$\lim_{n \to \infty} P\big(M_n(X(\varepsilon)) \leq a_n^{-1} x + b_n, \ M_n(X) \leq a_n^{-1} y + b_n\big) = E\Big[\exp\big(-(1 - \lambda) g(x, U, \gamma)\big) \exp\big(-\lambda e^{-\min\{x, y\}} - (1 - \lambda) e^{-y}\big)\Big].$
(iii). Suppose that $r_n$ and $\hat{r}_n$ satisfy $\lim_{n \to \infty} r_n \log n = \gamma \in (0, \infty)$ and $\lim_{n \to \infty} \hat{r}_n \log n = 0$, respectively. For any $x, y \in \mathbb{R}$, we have
$\lim_{n \to \infty} P\big(M_n(X(\varepsilon)) \leq a_n^{-1} x + b_n, \ M_n(X) \leq a_n^{-1} y + b_n\big) = E\Big[\exp\big(-(1 - \lambda) e^{-x}\big) \exp\big(-\lambda g(\min\{x, y\}, U, \gamma) - (1 - \lambda) g(y, U, \gamma)\big)\Big].$
Remark 1.
The first assertion of Corollary 1 indicates that, when both the original sequence and the random replacement sequence are weakly dependent, the result is consistent with that of [21]. The second and third assertions indicate that, when the strengths of dependence of the original sequence and the random replacement sequence differ, the joint asymptotic distribution of the maximum of the original sequence and the maximum of the sequence subject to random replacement depends heavily on the strength of dependence.
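Assertion (i) can be probed by a rough Monte Carlo experiment in the simplest setting: both sequences i.i.d. (so $r_n = \hat{r}_n = 0$) and $\lambda$ uniform on $[0, 1]$, for which $E \exp(-(1-\lambda)c) = (1 - e^{-c})/c$. The sketch below is ours, under these assumptions; agreement is only approximate at moderate $n$, since the convergence of Gaussian maxima is logarithmic:
```python
import numpy as np

rng = np.random.default_rng(2)

def norming_constants(n):
    L = 2.0 * np.log(n)
    return np.sqrt(L), np.sqrt(L) - (np.log(np.log(n)) + np.log(4.0 * np.pi)) / (2.0 * np.sqrt(L))

n, reps, x, y = 20_000, 4_000, 0.5, 1.5
a_n, b_n = norming_constants(n)
ux, uy = x / a_n + b_n, y / a_n + b_n

hits = 0
for _ in range(reps):
    lam = rng.uniform()                          # random lambda, drawn per replication
    eps = rng.random(n) < lam
    X = rng.standard_normal(n)
    X_eps = np.where(eps, X, rng.standard_normal(n))
    hits += (X_eps.max() <= ux) and (X.max() <= uy)

c = np.exp(-max(x, y))
limit = np.exp(-np.exp(-min(x, y))) * (1.0 - np.exp(-c)) / c   # Corollary 1(i)
print(hits / reps, limit)
```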
Corollary 2.
Under the conditions of Theorem 1, for any $x \in \mathbb{R}$, we have
$\lim_{n \to \infty} P\big(M_n(X) \leq a_n^{-1} x + b_n\big) = E \exp\big(-g(x, U, \gamma_1)\big)$
and
$\lim_{n \to \infty} P\big(M_n(X(\varepsilon)) \leq a_n^{-1} x + b_n\big) = E\Big[\exp\big(-(1 - \lambda) g(x, V, \gamma_2)\big) \exp\big(-\lambda g(x, U, \gamma_1)\big)\Big].$
Theorem 2.
Suppose that both $r_n$ and $\hat{r}_n$ satisfy conditions (A1) and (A2). For any $x, y \in \mathbb{R}$, we have
$\lim_{n \to \infty} P\big(M_n(X(\varepsilon)) \leq (r_n)^{1/2} x + (1 - r_n)^{1/2} b_n, \ M_n(X) \leq (r_n)^{1/2} y + (1 - r_n)^{1/2} b_n\big) = \Phi(x) \Phi(\min\{x, y\}).$
Corollary 3.
Under the conditions of Theorem 2, for any $x \in \mathbb{R}$, we have
$\lim_{n \to \infty} P\big(M_n(X) \leq (r_n)^{1/2} x + (1 - r_n)^{1/2} b_n\big) = \Phi(x)$
and
$\lim_{n \to \infty} P\big(M_n(X(\varepsilon)) \leq (r_n)^{1/2} x + (1 - r_n)^{1/2} b_n\big) = \Phi^2(x).$
Remark 2.
Corollaries 2 and 3 indicate that, when both the original sequence and the random replacement sequence are weakly dependent, the limit distribution of the maximum of the original sequence coincides with that of the maximum of the sequence subject to random replacement; in this case, the sequence subject to random replacement can be used in place of the original sequence. When both the original sequence and the sequence subject to random replacement are strongly dependent, the two limit distributions do not coincide; in this case, the sequence subject to random replacement cannot be directly used in place of the original sequence.
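The $\Phi(x)$-versus-$\Phi^2(x)$ discrepancy has a simple heuristic: under strong dependence, the maximum is driven by a single common Gaussian factor, and the replaced positions carry a second, independent factor. The toy surrogate model below (our illustration with a fixed common-factor weight $\rho$, not the construction used in the proofs) makes this visible; the numerical agreement is rough, again because of slow convergence:
```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n, rho, lam, reps, x = 10_000, 0.5, 0.5, 2_000, 0.3

b = np.sqrt(2 * np.log(n)) - (np.log(np.log(n)) + np.log(4 * np.pi)) / (2 * np.sqrt(2 * np.log(n)))
level = np.sqrt(rho) * x + np.sqrt(1 - rho) * b

p_orig = p_repl = 0
for _ in range(reps):
    W, W_hat = rng.standard_normal(2)                 # common factors of X and its copy
    X = np.sqrt(1 - rho) * rng.standard_normal(n) + np.sqrt(rho) * W
    X_hat = np.sqrt(1 - rho) * rng.standard_normal(n) + np.sqrt(rho) * W_hat
    X_eps = np.where(rng.random(n) < lam, X, X_hat)
    p_orig += X.max() <= level
    p_repl += X_eps.max() <= level

print(p_orig / reps, norm.cdf(x))        # roughly Phi(x): one common factor
print(p_repl / reps, norm.cdf(x) ** 2)   # roughly Phi(x)^2: two independent factors
```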
Theorem 3.
(i). Suppose that $r_n$ and $\hat{r}_n$ satisfy $\lim_{n \to \infty} r_n \log n = \gamma \in (0, \infty)$ and conditions (A1) and (A2), respectively. For any $x, y \in \mathbb{R}$, we have
$\lim_{n \to \infty} P\big(M_n(X(\varepsilon)) \leq a_n^{-1} x + b_n, \ M_n(X) \leq a_n^{-1} y + b_n\big) = E \exp\big(-\lambda g(\min\{x, y\}, U, \gamma) - (1 - \lambda) g(y, U, \gamma)\big).$
(ii). Suppose that $r_n$ and $\hat{r}_n$ satisfy conditions (A1) and (A2) and $\lim_{n \to \infty} \hat{r}_n \log n = \gamma \in (0, \infty)$, respectively. For any $x, y \in \mathbb{R}$, we have
$\lim_{n \to \infty} P\big(M_n(X(\varepsilon)) \leq a_n^{-1} x + b_n, \ M_n(X) \leq a_n^{-1} y + b_n\big) = E \exp\big(-(1 - \lambda) g(x, U, \gamma)\big).$
Remark 3.
Note that, for Gaussian random sequences whose correlation function satisfies conditions (A1) and (A2), the maxima have a non-degenerate limit under the normalizing level $(r_n)^{1/2} x + (1 - r_n)^{1/2} b_n$ and a degenerate limit 1 under the normalizing level $a_n^{-1} x + b_n$; for Gaussian random sequences whose correlation function satisfies $\lim_{n \to \infty} r_n \log n = \gamma \in (0, \infty)$, the maxima have a non-degenerate limit under the normalizing level $a_n^{-1} x + b_n$ and a degenerate limit 0 under the normalizing level $(r_n)^{1/2} x + (1 - r_n)^{1/2} b_n$. Thus, in order to obtain non-degenerate limits, we choose the normalizing level $a_n^{-1} x + b_n$ in Theorem 3.

3. Proofs

Let $\alpha = \{\alpha_n, n \geq 1\}$ be a sequence of 0s and 1s ($\alpha \in \{0, 1\}^{\mathbb{N}}$). For an arbitrary random or nonrandom sequence $\beta = \{\beta_n, n \geq 1\}$ of 0s and 1s and a subset $I \subset \mathbb{N}$, put
$M(X(\beta), I) = \max\{X_i(\beta), i \in I\}, \qquad M(X, I) = \max\{X_i, i \in I\}.$
For any $I \subset \mathbb{N}$, put $I^1(\beta) = \{i : i \in I, \beta_i = 1\}$ and $I^0(\beta) = \{i : i \in I, \beta_i = 0\}$. Set $N_n = \{1, 2, \ldots, n\}$. For simplicity, in the following, denote $u_n(x) = a_n^{-1} x + b_n$ and $\rho_n(s) = \gamma_s / \log n$, $s = 1, 2$.
Lemma 1.
Let $\{X_n, n \geq 1\}$ be a standard Gaussian sequence with mutually independent elements, which is independent of $\{\varepsilon_n, n \geq 1\}$, and let $\{\hat{X}_n, n \geq 1\}$ be an independent copy of it. Define $X_n(\varepsilon) = \varepsilon_n X_n + (1 - \varepsilon_n) \hat{X}_n$. Under the conditions of Theorem 1, we have, as $n \to \infty$,
$\Big| P\big(M_n(X(\alpha)) \leq u_n(x), M_n(X) \leq u_n(y)\big) - \int_{-\infty}^{+\infty} P\big(M(\hat{X}, N_n^0(\alpha)) \leq v_n(x, z, \gamma_2)\big)\, d\Phi(z) \times \int_{-\infty}^{+\infty} P\big(M(X, N_n^1(\alpha)) \leq v_n(x, z, \gamma_1), M_n(X) \leq v_n(y, z, \gamma_1)\big)\, d\Phi(z) \Big| \to 0,$
where $v_n(x, z, \gamma_s) = (1 - \rho_n(s))^{-1/2} \big(u_n(x) - (\rho_n(s))^{1/2} z\big)$, $s = 1, 2$.
Proof. 
Note that $N_n^1(\alpha) = \{i : i \in N_n, \alpha_i = 1\}$ and $N_n^0(\alpha) = \{i : i \in N_n, \alpha_i = 0\}$. Let $\eta_n = (1 - \rho_n(1))^{1/2} X_n + (\rho_n(1))^{1/2} U$ and $\hat{\eta}_n = (1 - \rho_n(2))^{1/2} \hat{X}_n + (\rho_n(2))^{1/2} V$, where $U, V$ are independent standard Gaussian random variables, independent of $\{X_n, n \geq 1\}$ and $\{\hat{X}_n, n \geq 1\}$. Let $\eta_n(\varepsilon) = \varepsilon_n \eta_n + (1 - \varepsilon_n) \hat{\eta}_n$. It is easy to see that $\eta_n$, $\hat{\eta}_n$, and $\eta_n(\alpha)$ are all standard Gaussian sequences. Using the normal comparison lemma (see, e.g., [3]),
$\big| P\big(M_n(X(\alpha)) \leq u_n(x), M_n(X) \leq u_n(y)\big) - P\big(M_n(\eta(\alpha)) \leq u_n(x), M_n(\eta) \leq u_n(y)\big) \big|$
$= \big| P\big(M(\hat{X}, N_n^0(\alpha)) \leq u_n(x), M(X, N_n^1(\alpha)) \leq u_n(x), M_n(X) \leq u_n(y)\big) - P\big(M(\hat{\eta}, N_n^0(\alpha)) \leq u_n(x), M(\eta, N_n^1(\alpha)) \leq u_n(x), M_n(\eta) \leq u_n(y)\big) \big|$
$\leq \big| P\big(M(\hat{X}, N_n^0(\alpha)) \leq u_n(x)\big) - P\big(M(\hat{\eta}, N_n^0(\alpha)) \leq u_n(x)\big) \big| + \big| P\big(M(X, N_n^1(\alpha)) \leq u_n(x), M_n(X) \leq u_n(y)\big) - P\big(M(\eta, N_n^1(\alpha)) \leq u_n(x), M_n(\eta) \leq u_n(y)\big) \big|$
$\leq C n \sum_{k=1}^{n} |\hat{r}_k - \rho_n(2)| \exp\Big(-\frac{u_n^2(x)}{1 + w_k(2)}\Big) + C n \sum_{k=1}^{n} |r_k - \rho_n(1)| \exp\Big(-\frac{u_n^2(\min\{x, y\})}{1 + w_k(1)}\Big),$
where $w_k(1) = \max\{|r_k|, \rho_n(1)\}$, $w_k(2) = \max\{|\hat{r}_k|, \rho_n(2)\}$, and $C$ is a constant. Using Lemma 6.4.1 of [3], we know that the above sums tend to 0 as $n \to \infty$. By the definition of $\eta(\alpha)$, we have
$P\big(M_n(\eta(\alpha)) \leq u_n(x), M_n(\eta) \leq u_n(y)\big) = P\big(M(\hat{\eta}, N_n^0(\alpha)) \leq u_n(x), M(\eta, N_n^1(\alpha)) \leq u_n(x), M_n(\eta) \leq u_n(y)\big)$
$= P\big(M(\hat{\eta}, N_n^0(\alpha)) \leq u_n(x)\big)\, P\big(M(\eta, N_n^1(\alpha)) \leq u_n(x), M_n(\eta) \leq u_n(y)\big)$
$= \int_{-\infty}^{+\infty} P\big((1 - \rho_n(2))^{1/2} M(\hat{X}, N_n^0(\alpha)) + (\rho_n(2))^{1/2} V \leq u_n(x) \mid V = z\big)\, d\Phi(z) \times \int_{-\infty}^{+\infty} P\big((1 - \rho_n(1))^{1/2} M(X, N_n^1(\alpha)) + (\rho_n(1))^{1/2} U \leq u_n(x), (1 - \rho_n(1))^{1/2} M_n(X) + (\rho_n(1))^{1/2} U \leq u_n(y) \mid U = z\big)\, d\Phi(z)$
$= \int_{-\infty}^{+\infty} P\big(M(\hat{X}, N_n^0(\alpha)) \leq v_n(x, z, \gamma_2)\big)\, d\Phi(z) \times \int_{-\infty}^{+\infty} P\big(M(X, N_n^1(\alpha)) \leq v_n(x, z, \gamma_1), M_n(X) \leq v_n(y, z, \gamma_1)\big)\, d\Phi(z).$
The proof of Lemma 1 is complete. □
For some fixed $k$, define $K_s = \{j \in \mathbb{N} : (s-1)t + 1 \leq j \leq st\}$, $1 \leq s \leq k$, where $t = \lfloor n/k \rfloor$ and $\lfloor x \rfloor$ denotes the integer part of $x$.
Lemma 2.
Under the conditions of Theorem 1, for any $x, y \in \mathbb{R}$,
$\Big| P\big(M(\hat{X}, N_n^0(\alpha)) \leq v_n(x, z_1, \gamma_2)\big)\, P\big(M(X, N_n^1(\alpha)) \leq v_n(x, z_2, \gamma_1), M(X, N_n) \leq v_n(y, z_2, \gamma_1)\big) - \prod_{s=1}^{k} P\big(M(\hat{X}, K_s^0(\alpha)) \leq v_n(x, z_1, \gamma_2)\big)\, P\big(M(X, K_s^1(\alpha)) \leq v_n(x, z_2, \gamma_1), M(X, K_s) \leq v_n(y, z_2, \gamma_1)\big) \Big| \leq 2t \bar{\Phi}(v_n(x, z_1, \gamma_2)) + 4t \bar{\Phi}(v_n(\min\{x, y\}, z_2, \gamma_1)),$
where $\bar{\Phi}(x) = 1 - \Phi(x)$.
Proof. 
This proof is the same as that of Lemma 3.2 of [21], so we omit the details. □
Lemma 3.
Under the conditions of Theorem 1, for any $0 \leq r \leq 2^k - 1$,
$1 - \frac{r}{2^k} t \bar{\Phi}(v_n(\min\{x, y\}, z, \gamma_1)) - \Big(1 - \frac{r}{2^k}\Big) t \bar{\Phi}(v_n(y, z, \gamma_1)) + \Big(\sum_{j \in K_s} \alpha_j - \frac{r}{2^k} t\Big) \big(\bar{\Phi}(v_n(y, z, \gamma_1)) - \bar{\Phi}(v_n(\min\{x, y\}, z, \gamma_1))\big)$
$\leq P\big(M(X, K_s^1(\alpha)) \leq v_n(x, z, \gamma_1), M(X, K_s) \leq v_n(y, z, \gamma_1)\big)$
$\leq 1 - \frac{r}{2^k} t \bar{\Phi}(v_n(\min\{x, y\}, z, \gamma_1)) - \Big(1 - \frac{r}{2^k}\Big) t \bar{\Phi}(v_n(y, z, \gamma_1)) + \Big(\sum_{j \in K_s} \alpha_j - \frac{r}{2^k} t\Big) \big(\bar{\Phi}(v_n(y, z, \gamma_1)) - \bar{\Phi}(v_n(\min\{x, y\}, z, \gamma_1))\big) + 3 t^2 \bar{\Phi}^2(v_n(\min\{x, y\}, z, \gamma_1))$
and
$1 - \Big(1 - \frac{r}{2^k}\Big) t \bar{\Phi}(v_n(x, z, \gamma_2)) + \Big(\sum_{j \in K_s} \alpha_j - \frac{r}{2^k} t\Big) \bar{\Phi}(v_n(x, z, \gamma_2)) \leq P\big(M(\hat{X}, K_s^0(\alpha)) \leq v_n(x, z, \gamma_2)\big) \leq 1 - \Big(1 - \frac{r}{2^k}\Big) t \bar{\Phi}(v_n(x, z, \gamma_2)) + \Big(\sum_{j \in K_s} \alpha_j - \frac{r}{2^k} t\Big) \bar{\Phi}(v_n(x, z, \gamma_2)) + t^2 \bar{\Phi}^2(v_n(x, z, \gamma_2)).$
Proof. 
Recall that $K_s^1(\alpha) = \{i : i \in K_s, \alpha_i = 1\}$ and $K_s^0(\alpha) = \{i : i \in K_s, \alpha_i = 0\}$. Noting that $\{X_n, n \geq 1\}$ is a Gaussian random sequence with mutually independent elements, we have
$P\big(M(X, K_s^1(\alpha)) \leq v_n(x, z, \gamma_1), M(X, K_s) \leq v_n(y, z, \gamma_1)\big) = P\big(M(X, K_s^1(\alpha)) \leq v_n(\min\{x, y\}, z, \gamma_1), M(X, K_s^0(\alpha)) \leq v_n(y, z, \gamma_1)\big)$
$= 1 - P\big(M(X, K_s^1(\alpha)) > v_n(\min\{x, y\}, z, \gamma_1)\big) - P\big(M(X, K_s^0(\alpha)) > v_n(y, z, \gamma_1)\big) + P\big(M(X, K_s^1(\alpha)) > v_n(\min\{x, y\}, z, \gamma_1), M(X, K_s^0(\alpha)) > v_n(y, z, \gamma_1)\big)$
$\leq 1 - \sharp(K_s^1(\alpha)) \bar{\Phi}(v_n(\min\{x, y\}, z, \gamma_1)) - \sharp(K_s^0(\alpha)) \bar{\Phi}(v_n(y, z, \gamma_1)) + 3 t^2 \bar{\Phi}^2(v_n(\min\{x, y\}, z, \gamma_1))$
$= 1 - \frac{r}{2^k} t \bar{\Phi}(v_n(\min\{x, y\}, z, \gamma_1)) - \Big(1 - \frac{r}{2^k}\Big) t \bar{\Phi}(v_n(y, z, \gamma_1)) + \Big(\sum_{j \in K_s} \alpha_j - \frac{r}{2^k} t\Big) \big(\bar{\Phi}(v_n(y, z, \gamma_1)) - \bar{\Phi}(v_n(\min\{x, y\}, z, \gamma_1))\big) + 3 t^2 \bar{\Phi}^2(v_n(\min\{x, y\}, z, \gamma_1)),$
where $\sharp(A)$ denotes the cardinality of the set $A$. Similarly,
$P\big(M(X, K_s^1(\alpha)) \leq v_n(x, z, \gamma_1), M(X, K_s) \leq v_n(y, z, \gamma_1)\big) \geq 1 - \frac{r}{2^k} t \bar{\Phi}(v_n(\min\{x, y\}, z, \gamma_1)) - \Big(1 - \frac{r}{2^k}\Big) t \bar{\Phi}(v_n(y, z, \gamma_1)) + \Big(\sum_{j \in K_s} \alpha_j - \frac{r}{2^k} t\Big) \big(\bar{\Phi}(v_n(y, z, \gamma_1)) - \bar{\Phi}(v_n(\min\{x, y\}, z, \gamma_1))\big),$
which completes the proof of the first result. The proof of the second result is similar, so we omit it. □
Now, for the random variable $\lambda \in [0, 1]$ a.s., define
$B_{r,l} = \begin{cases} \big\{w : \lambda(w) \in [0, \frac{1}{2^l}]\big\}, & r = 0, \\ \big\{w : \lambda(w) \in (\frac{r}{2^l}, \frac{r+1}{2^l}]\big\}, & 0 < r \leq 2^l - 1, \end{cases}$
and
$\tilde{B}_{\alpha, n} = \{w : \varepsilon_j(w) = \alpha_j, 1 \leq j \leq n\}.$
Put
$B_{r,l,\alpha,n} = B_{r,l} \cap \tilde{B}_{\alpha,n}.$
Proof of Theorem 1.
Note that
$P\big(M_n(X(\varepsilon)) \leq u_n(x), M_n(X) \leq u_n(y)\big) = \sum_{r=0}^{2^k - 1} \sum_{\alpha \in \{0,1\}^n} E\Big[P\big(M_n(X(\alpha)) \leq u_n(x), M_n(X) \leq u_n(y)\big) I[B_{r,k,\alpha,n}]\Big].$
We split the proof into six steps. In the first step, using Lemma 1, we have, as $n \to \infty$,
$\Sigma_n^{(1)} := \sum_{r=0}^{2^k - 1} \sum_{\alpha \in \{0,1\}^n} E\Big[\Big| P\big(M_n(X(\alpha)) \leq u_n(x), M_n(X) \leq u_n(y)\big) - E\big[P\big(M(\hat{X}, N_n^0(\alpha)) \leq v_n(x, U, \gamma_2)\big) \mid U\big] \times E\big[P\big(M(X, N_n^1(\alpha)) \leq v_n(x, U, \gamma_1), M(X, N_n) \leq v_n(y, U, \gamma_1)\big) \mid U\big] \Big| I[B_{r,k,\alpha,n}]\Big] \to 0.$
In the second step, we prove that, as $n \to \infty$ and $k \to \infty$,
$\Sigma_n^{(2)} := \sum_{r=0}^{2^k - 1} \sum_{\alpha \in \{0,1\}^n} E\Big[\Big| E\big[P\big(M(\hat{X}, N_n^0(\alpha)) \leq v_n(x, U, \gamma_2)\big) \mid U\big] \times E\big[P\big(M(X, N_n^1(\alpha)) \leq v_n(x, U, \gamma_1), M(X, N_n) \leq v_n(y, U, \gamma_1)\big) \mid U\big] - E\Big[\prod_{s=1}^{k} P\big(M(\hat{X}, K_s^0(\alpha)) \leq v_n(x, U, \gamma_2)\big) \,\Big|\, U\Big] \times E\Big[\prod_{s=1}^{k} P\big(M(X, K_s^1(\alpha)) \leq v_n(x, U, \gamma_1), M(X, K_s) \leq v_n(y, U, \gamma_1)\big) \,\Big|\, U\Big] \Big| I[B_{r,k,\alpha,n}]\Big] \to 0.$
Using Lemma 2, we have
$\Sigma_n^{(2)} \leq 2t\, E \bar{\Phi}(v_n(x, U, \gamma_2)) + 4t\, E \bar{\Phi}(v_n(\min\{x, y\}, U, \gamma_1)) \leq \frac{2}{k}\, n E \bar{\Phi}(v_n(x, U, \gamma_2)) + \frac{4}{k}\, n E \bar{\Phi}(v_n(\min\{x, y\}, U, \gamma_1)).$
It follows from the proof of Theorem 6.5.1 of [3] that
$v_n(x, z, \gamma) = u_n\big(x + \gamma - \sqrt{2\gamma}\, z\big) + o(a_n^{-1}).$
Then, as $n \to \infty$,
$E\big(n \bar{\Phi}(v_n(x, U, \gamma))\big) = E g(x, U, \gamma) (1 + o(1)).$ (10)
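For completeness, the short calculation behind (10) (a sketch of the standard argument; the choice of $a_n, b_n$ in (7) gives $n\bar{\Phi}(u_n(w)) \to e^{-w}$) reads:
```latex
% Why n * \bar\Phi(v_n(x,z,\gamma)) converges to g(x,z,\gamma):
\begin{align*}
n\bar{\Phi}(u_n(w)) &\to e^{-w}, && n \to \infty \text{ (by the choice of } a_n, b_n \text{ in (7))},\\
v_n(x,z,\gamma) &= u_n\big(x + \gamma - \sqrt{2\gamma}\,z\big) + o(a_n^{-1}), && \text{the expansion above},\\
n\bar{\Phi}(v_n(x,z,\gamma)) &\to e^{-(x + \gamma - \sqrt{2\gamma}\,z)} = g(x,z,\gamma), && \text{pointwise in } z.
\end{align*}
% Dominated convergence in z (g(x, . ,\gamma) is d\Phi-integrable) then yields (10).
```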
Thus, as $n \to \infty$ and $k \to \infty$, $\Sigma_n^{(2)}$ tends to 0.
In the third step, we prove that, as $n \to \infty$ and $k \to \infty$,
$\Sigma_n^{(3)} := \sum_{r=0}^{2^k - 1} \sum_{\alpha \in \{0,1\}^n} E\Big[\Big| E\Big[\prod_{s=1}^{k} P\big(M(\hat{X}, K_s^0(\alpha)) \leq v_n(x, U, \gamma_2)\big) \,\Big|\, U\Big] \times E\Big[\prod_{s=1}^{k} P\big(M(X, K_s^1(\alpha)) \leq v_n(x, U, \gamma_1), M(X, K_s) \leq v_n(y, U, \gamma_1)\big) \,\Big|\, U\Big] - E\Big[\Big(1 - \Big(1 - \frac{r}{2^k}\Big) \frac{n \bar{\Phi}(v_n(x, U, \gamma_2))}{k}\Big)^k \,\Big|\, U\Big] \times E\Big[\Big(1 - \frac{r}{2^k} \frac{n \bar{\Phi}(v_n(\min\{x, y\}, U, \gamma_1))}{k} - \Big(1 - \frac{r}{2^k}\Big) \frac{n \bar{\Phi}(v_n(y, U, \gamma_1))}{k}\Big)^k \,\Big|\, U\Big] \Big| I[B_{r,k,\alpha,n}]\Big] \to 0.$
By the following basic inequality:
$\Big| \prod_{s=1}^{k} a_s - \prod_{s=1}^{k} b_s \Big| \leq \sum_{s=1}^{k} |a_s - b_s|, \qquad a_s, b_s \in (0, 1],$ (11)
we obtain
$\Sigma_n^{(3)} \leq \Sigma_n^{(31)} + \Sigma_n^{(32)},$
where, using Lemma 3, we have
$\Sigma_n^{(31)} := \sum_{r=0}^{2^k - 1} \sum_{\alpha \in \{0,1\}^n} E\Big[E\Big[\sum_{s=1}^{k} \Big| P\big(M(\hat{X}, K_s^0(\alpha)) \leq v_n(x, U, \gamma_2)\big) - \Big(1 - \Big(1 - \frac{r}{2^k}\Big) \frac{n \bar{\Phi}(v_n(x, U, \gamma_2))}{k}\Big) \Big| \,\Big|\, U\Big] I[B_{r,k,\alpha,n}]\Big]$
$\leq \sum_{r=0}^{2^k - 1} E\Big[E\Big[\sum_{s=1}^{k} \Big| \Big(\sum_{j \in K_s} \varepsilon_j - \frac{r}{2^k} t\Big) \bar{\Phi}(v_n(x, U, \gamma_2)) + t^2 \bar{\Phi}^2(v_n(x, U, \gamma_2)) \Big| \,\Big|\, U\Big] I[B_{r,k}]\Big]$
and
$\Sigma_n^{(32)} := \sum_{r=0}^{2^k - 1} \sum_{\alpha \in \{0,1\}^n} E\Big[E\Big[\sum_{s=1}^{k} \Big| P\big(M(X, K_s^1(\alpha)) \leq v_n(x, U, \gamma_1), M(X, K_s) \leq v_n(y, U, \gamma_1)\big) - \Big(1 - \frac{r}{2^k} \frac{n \bar{\Phi}(v_n(\min\{x, y\}, U, \gamma_1))}{k} - \Big(1 - \frac{r}{2^k}\Big) \frac{n \bar{\Phi}(v_n(y, U, \gamma_1))}{k}\Big) \Big| \,\Big|\, U\Big] I[B_{r,k,\alpha,n}]\Big]$
$\leq \sum_{r=0}^{2^k - 1} E\Big[E\Big[\sum_{s=1}^{k} \Big| \Big(\sum_{j \in K_s} \varepsilon_j - \frac{r}{2^k} t\Big) \big(\bar{\Phi}(v_n(y, U, \gamma_1)) - \bar{\Phi}(v_n(\min\{x, y\}, U, \gamma_1))\big) + 3 t^2 \bar{\Phi}^2(v_n(\min\{x, y\}, U, \gamma_1)) \Big| \,\Big|\, U\Big] I[B_{r,k}]\Big].$
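The basic inequality (11) invoked above follows from a one-line telescoping identity (a standard proof sketch):
```latex
% Telescoping proof of (11), for a_s, b_s in (0,1]:
\prod_{s=1}^{k} a_s - \prod_{s=1}^{k} b_s
  = \sum_{s=1}^{k} \Big(\prod_{j<s} b_j\Big)(a_s - b_s)\Big(\prod_{j>s} a_j\Big),
% and since every partial product lies in (0,1], taking absolute values term by term
% gives |\prod a_s - \prod b_s| <= \sum |a_s - b_s|.
```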
Taking (2) into account, we have, as $t \to \infty$,
$\frac{S_{st}}{st} \xrightarrow{P} \lambda, \qquad \frac{S_{(s-1)t}}{(s-1)t} \xrightarrow{P} \lambda;$
furthermore, using the dominated convergence theorem, we have, as $t \to \infty$,
$E\Big|\frac{S_{st}}{st} - \lambda\Big| \to 0, \qquad E\Big|\frac{S_{(s-1)t}}{(s-1)t} - \lambda\Big| \to 0.$
Hence, we obtain, as $t \to \infty$,
$\sum_{r=0}^{2^k - 1} E\Big[\Big|\frac{\sum_{j \in K_s} \varepsilon_j}{t} - \frac{r}{2^k}\Big| I[B_{r,k}]\Big] \leq E\Big|\frac{\sum_{j \in K_s} \varepsilon_j}{t} - \lambda\Big| + \sum_{r=0}^{2^k - 1} E\Big[\Big|\lambda - \frac{r}{2^k}\Big| I[B_{r,k}]\Big] \leq E\Big|\frac{S_{st} - S_{(s-1)t}}{t} - \lambda\Big| + \frac{1}{2^k}$
$= E\Big|s\Big(\frac{S_{st}}{st} - \lambda\Big) - (s-1)\Big(\frac{S_{(s-1)t}}{(s-1)t} - \lambda\Big)\Big| + \frac{1}{2^k} \leq s E\Big|\frac{S_{st}}{st} - \lambda\Big| + (s-1) E\Big|\frac{S_{(s-1)t}}{(s-1)t} - \lambda\Big| + \frac{1}{2^k} = o(1) + \frac{1}{2^k}.$ (12)
Combining (10) with (12) and letting $t \to \infty$, we have
$\lim_{n \to \infty} \Sigma_n^{(3)} \leq \frac{E g(x, U, \gamma_2)}{2^k} + \frac{(E g(x, U, \gamma_2))^2}{k} + \frac{E|g(y, U, \gamma_1) - g(\min\{x, y\}, U, \gamma_1)|}{2^k} + \frac{3 (E g(\min\{x, y\}, U, \gamma_1))^2}{k}.$
Thus, letting $k \to \infty$, $\Sigma_n^{(3)}$ tends to 0.
In the fourth step, we prove that, as $n \to \infty$ and $k \to \infty$,
$\Sigma_n^{(4)} := \sum_{r=0}^{2^k - 1} \sum_{\alpha \in \{0,1\}^n} E\Big[\Big| E\Big[\Big(1 - \Big(1 - \frac{r}{2^k}\Big) \frac{n \bar{\Phi}(v_n(x, U, \gamma_2))}{k}\Big)^k \,\Big|\, U\Big] \times E\Big[\Big(1 - \frac{r}{2^k} \frac{n \bar{\Phi}(v_n(\min\{x, y\}, U, \gamma_1))}{k} - \Big(1 - \frac{r}{2^k}\Big) \frac{n \bar{\Phi}(v_n(y, U, \gamma_1))}{k}\Big)^k \,\Big|\, U\Big] - E\Big[\Big(1 - (1 - \lambda) \frac{n \bar{\Phi}(v_n(x, U, \gamma_2))}{k}\Big)^k \,\Big|\, U\Big] \times E\Big[\Big(1 - \lambda \frac{n \bar{\Phi}(v_n(\min\{x, y\}, U, \gamma_1))}{k} - (1 - \lambda) \frac{n \bar{\Phi}(v_n(y, U, \gamma_1))}{k}\Big)^k \,\Big|\, U\Big] \Big| I[B_{r,k,\alpha,n}]\Big] \to 0.$
Using (11) and (12) again, we obtain
$\Sigma_n^{(4)} \leq \frac{E g(x, U, \gamma_2)}{2^k} + \frac{E|g(y, U, \gamma_1) - g(\min\{x, y\}, U, \gamma_1)|}{2^k}.$
Thus, letting $k \to \infty$, $\Sigma_n^{(4)}$ tends to 0.
In the fifth step, using (10) again, it is easy to show that, as $n \to \infty$ and $k \to \infty$,
$\Sigma_n^{(5)} := \sum_{r=0}^{2^k - 1} \sum_{\alpha \in \{0,1\}^n} E\Big[\Big| E\Big[\Big(1 - (1 - \lambda) \frac{n \bar{\Phi}(v_n(x, U, \gamma_2))}{k}\Big)^k \,\Big|\, U\Big] \times E\Big[\Big(1 - \lambda \frac{n \bar{\Phi}(v_n(\min\{x, y\}, U, \gamma_1))}{k} - (1 - \lambda) \frac{n \bar{\Phi}(v_n(y, U, \gamma_1))}{k}\Big)^k \,\Big|\, U\Big] - E\Big[\Big(1 - \frac{(1 - \lambda) g(x, U, \gamma_2)}{k}\Big)^k \,\Big|\, U\Big] \times E\Big[\Big(1 - \frac{\lambda g(\min\{x, y\}, U, \gamma_1) + (1 - \lambda) g(y, U, \gamma_1)}{k}\Big)^k \,\Big|\, U\Big] \Big| I[B_{r,k,\alpha,n}]\Big] \to 0.$
In the last step, letting $k \to \infty$, we have
$\Sigma_n^{(6)} := \sum_{r=0}^{2^k - 1} \sum_{\alpha \in \{0,1\}^n} E\Big[\Big| E\Big[\Big(1 - \frac{(1 - \lambda) g(x, U, \gamma_2)}{k}\Big)^k \,\Big|\, U\Big] \times E\Big[\Big(1 - \frac{\lambda g(\min\{x, y\}, U, \gamma_1) + (1 - \lambda) g(y, U, \gamma_1)}{k}\Big)^k \,\Big|\, U\Big] - E\Big[\exp\big(-(1 - \lambda) g(x, U, \gamma_2)\big) \,\Big|\, U\Big] \times E\Big[\exp\big(-\lambda g(\min\{x, y\}, U, \gamma_1) - (1 - \lambda) g(y, U, \gamma_1)\big) \,\Big|\, U\Big] \Big| I[B_{r,k,\alpha,n}]\Big] \to 0.$
The proof of Theorem 1 is complete. □
Proof of Theorem 2.
First, note that
$P\big(M_n(X(\varepsilon)) \leq (r_n)^{1/2} x + (1 - r_n)^{1/2} b_n, \ M_n(X) \leq (r_n)^{1/2} y + (1 - r_n)^{1/2} b_n\big) = \sum_{\alpha \in \{0,1\}^n} P(n, \alpha) P(\tilde{B}_{\alpha,n}),$ (13)
where
$P(n, \alpha) := P\big(M_n(X(\alpha)) \leq (r_n)^{1/2} x + (1 - r_n)^{1/2} b_n, \ M_n(X) \leq (r_n)^{1/2} y + (1 - r_n)^{1/2} b_n\big)$
$= P\big(M(\hat{X}, N_n^0(\alpha)) \leq (r_n)^{1/2} x + (1 - r_n)^{1/2} b_n\big) \times P\big(M(X, N_n^1(\alpha)) \leq (r_n)^{1/2} x + (1 - r_n)^{1/2} b_n, \ M_n(X) \leq (r_n)^{1/2} y + (1 - r_n)^{1/2} b_n\big).$
It follows from (3.5) of [15] that
$\lim_{n \to \infty} P\big(M(X, N_n^1(\alpha)) \leq (r_n)^{1/2} x + (1 - r_n)^{1/2} b_n\big) = \Phi(x).$
Since $\{X_n, n \geq 1\}$ and $\{\hat{X}_n, n \geq 1\}$ have the same distribution function, using a similar proof, we have
$\lim_{n \to \infty} P\big(M(\hat{X}, N_n^0(\alpha)) \leq (r_n)^{1/2} x + (1 - r_n)^{1/2} b_n\big) = \Phi(x).$ (16)
Hence, combining (8) and (16), we have
$\lim_{n \to \infty} P(n, \alpha) = \Phi(x) \Phi(\min\{x, y\}).$
Now, we can finish the proof of Theorem 2 by plugging the last equality into (13) and applying the dominated convergence theorem. □
Proof of Theorem 3.
We only give the proof of case (i), since the proof of case (ii) is similar. First, note that
$P\big(M_n(X(\varepsilon)) \leq a_n^{-1} x + b_n, \ M_n(X) \leq a_n^{-1} y + b_n\big) = \sum_{r=0}^{2^k - 1} \sum_{\alpha \in \{0,1\}^n} \tilde{P}(n, \alpha) P(B_{r,k,\alpha,n}),$
where
$\tilde{P}(n, \alpha) := P\big(M_n(X(\alpha)) \leq a_n^{-1} x + b_n, \ M_n(X) \leq a_n^{-1} y + b_n\big) = P\big(M(\hat{X}, N_n^0(\alpha)) \leq a_n^{-1} x + b_n\big)\, P\big(M(X, N_n^1(\alpha)) \leq a_n^{-1} x + b_n, \ M_n(X) \leq a_n^{-1} y + b_n\big).$
Obviously,
$P\big(M(\hat{X}, N_n^0(\alpha)) \leq a_n^{-1} x + b_n\big) = P\Big(\hat{r}_n^{-1/2}\big(M(\hat{X}, N_n^0(\alpha)) - (1 - \hat{r}_n)^{1/2} b_n\big) \leq \hat{r}_n^{-1/2}\big(a_n^{-1} x + b_n - (1 - \hat{r}_n)^{1/2} b_n\big)\Big).$
Since the correlation function $\hat{r}_n$ of $\{\hat{X}_n, n \geq 1\}$ satisfies conditions (A1) and (A2), we have, as $n \to \infty$,
$\hat{r}_n^{-1/2}\big(a_n^{-1} x + b_n - (1 - \hat{r}_n)^{1/2} b_n\big) = \hat{r}_n^{-1/2}\Big(a_n^{-1} x + \frac{1}{2} \hat{r}_n b_n + o(\hat{r}_n b_n)\Big) = \frac{1}{2} (2 \hat{r}_n \log n)^{1/2} + o\big((2 \hat{r}_n \log n)^{1/2}\big) \to \infty.$
Furthermore, using (16), as $n \to \infty$,
$P\big(M(\hat{X}, N_n^0(\alpha)) \leq a_n^{-1} x + b_n\big) \to 1.$
Hence, for any $\delta > 0$ and sufficiently large $n$,
$1 - \delta \leq P\big(M(\hat{X}, N_n^0(\alpha)) \leq a_n^{-1} x + b_n\big) \leq 1 + \delta.$
Thus, for sufficiently large $n$,
$\sum_{r=0}^{2^k - 1} \sum_{\alpha \in \{0,1\}^n} (1 - \delta) P\big(M(X, N_n^1(\alpha)) \leq a_n^{-1} x + b_n, \ M_n(X) \leq a_n^{-1} y + b_n\big) P(B_{r,k,\alpha,n}) \leq P\big(M_n(X(\varepsilon)) \leq a_n^{-1} x + b_n, \ M_n(X) \leq a_n^{-1} y + b_n\big) \leq \sum_{r=0}^{2^k - 1} \sum_{\alpha \in \{0,1\}^n} (1 + \delta) P\big(M(X, N_n^1(\alpha)) \leq a_n^{-1} x + b_n, \ M_n(X) \leq a_n^{-1} y + b_n\big) P(B_{r,k,\alpha,n}).$
Now, using the dominated convergence theorem, in order to finish the proof, we only need to show that, as $n \to \infty$ and $k \to \infty$,
$\sum_{r=0}^{2^k - 1} \sum_{\alpha \in \{0,1\}^n} P\big(M(X, N_n^1(\alpha)) \leq a_n^{-1} x + b_n, \ M_n(X) \leq a_n^{-1} y + b_n\big) P(B_{r,k,\alpha,n}) \to E \int_{-\infty}^{+\infty} \exp\big(-\lambda g(\min\{x, y\}, z, \gamma) - (1 - \lambda) g(y, z, \gamma)\big)\, d\Phi(z).$ (20)
Noting that the correlation function $r_n$ of $\{X_n, n \geq 1\}$ satisfies $\lim_{n \to \infty} r_n \log n = \gamma \in (0, \infty)$ and repeating the proof of Theorem 1, we can prove that (20) holds. □

4. Conclusions

The joint asymptotic distribution of the maximum of a stationary Gaussian sequence and the maximum of the sequence subject to random replacement depends strongly on the dependence structures of both the original sequence and the replacement sequence.

Author Contributions

Y.L. was a major contributor in writing the manuscript; Z.T. provided some helpful discussions in writing the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Innovation of Jiaxing City program to support talented persons and by the Project of the New Economy Research Center of Jiaxing City (No. WYZB202254).

Data Availability Statement

Not Applicable.

Acknowledgments

The authors would like to thank the referees and the Editor for the thorough reading and valuable suggestions.

Conflicts of Interest

The authors declare no competing interests.

References

  1. Hall, A.; Hüsler, J. Extremes of stationary sequences with failures. Stoch. Model. 2006, 22, 537–557. [Google Scholar] [CrossRef]
  2. Mladenović, P.; Piterbarg, V. On asymptotic distribution of maxima of complete and incomplete samples from stationary sequences. Stoch. Process. Their Appl. 2006, 116, 1977–1991. [Google Scholar] [CrossRef] [Green Version]
  3. Leadbetter, M.R.; Lindgren, G.; Rootzén, H. Extremes and Related Properties of Random Sequences and Processes; Springer: New York, NY, USA, 1983. [Google Scholar]
  4. Peng, Z.; Cao, L.; Nadarajah, S. Asymptotic distributions of maxima of complete and incomplete samples from multivariate stationary Gaussian sequences. J. Multivar. Anal. 2010, 101, 2641–2647. [Google Scholar] [CrossRef] [Green Version]
  5. Cao, L.; Peng, Z. Asymptotic distributions of maxima of complete and incomplete samples from strongly dependent stationary Gaussian sequences. Appl. Math. Lett. 2011, 24, 243–247. [Google Scholar] [CrossRef]
  6. Tong, B.; Peng, Z. On almost sure max-limit theorems of complete and incomplete samples from stationary sequences. Acta Math. Sin. 2011, 27, 1323–1332. [Google Scholar] [CrossRef]
  7. Tan, Z.; Wang, Y. Some asymptotic results on extremes of incomplete samples. Extremes 2012, 15, 319–332. [Google Scholar] [CrossRef]
  8. Mladenović, P.; Zivadinovic, L. Uniform AR(1) Processes and maxima on partial samples. Commun. Stat.-Theory Methods 2015, 44, 2546–2563. [Google Scholar] [CrossRef]
  9. Glavaš, L.; Mladenović, P.; Samorodnitsky, G. Extreme values of the uniform order 1 autoregressive processes and missing observations. Extremes 2017, 20, 671–690. [Google Scholar] [CrossRef]
  10. Panga, Z.; Pereira, L. On the maxima and minima of complete and incomplete samples from nonstationary random fields. Stat. Probab. Lett. 2018, 137, 124–134. [Google Scholar] [CrossRef]
  11. Glavaš, L.; Mladenović, P. Extreme values of linear processes with heavy-tailed innovations and missing observations. Extremes 2020, 23, 547–567. [Google Scholar] [CrossRef]
  12. Tong, J.; Peng, Z. Asymptotic distributions of the exceedances point processes for a class of dependent normal sequence. Acta Math. Appl. Sin. 2010, 33, 1–9. [Google Scholar]
  13. Peng, Z.; Tong, J.; Weng, Z. Exceedances point processes in the plane of stationary Gaussian sequences with data missing. Stat. Probab. Lett. 2019, 149, 73–79. [Google Scholar] [CrossRef]
  14. Krajka, T. The asymptotic behaviour of maxima of complete and incomplete samples from stationary sequences. Stoch. Process. Their Appl. 2011, 121, 1705–1719. [Google Scholar] [CrossRef] [Green Version]
  15. Hashorva, E.; Peng, Z.; Weng, Z. On Piterbarg theorem for maxima of stationary Gaussian sequences. Lith. Math. J. 2013, 53, 280–292. [Google Scholar] [CrossRef] [Green Version]
  16. Hashorva, E.; Weng, Z. Maxima and minima of complete and incomplete stationary sequences. Stochastics 2014, 86, 707–720. [Google Scholar] [CrossRef] [Green Version]
  17. Krajka, T.; Rychlik, Z. The limiting behaviour of sums and maximums of iid random variables from the viewpoint of different observers. Probab. Math. Stat. 2014, 34, 237–252. [Google Scholar]
  18. Zheng, S.; Tan, Z. On the maxima of nonstationary random fields subject to missing observations. arXiv 2023, arXiv:2306.13857. [Google Scholar]
  19. Fang, Y.; Tan, Z. On the extreme order statistics for stationary Gaussian sequences subject to random missing observations. arXiv 2023, arXiv:2306.13861. [Google Scholar]
  20. Robert, C.Y. On asymptotic distribution of maxima of stationary sequences subject to random failure or censoring. Stat. Probab. Lett. 2010, 80, 134–142. [Google Scholar] [CrossRef]
  21. Li, Y.; Tan, Z. The asymptotic distribution of maxima of stationary random sequences under random replacing. arXiv 2023. [Google Scholar] [CrossRef]