
Weak Convergence of the Conditional Set-Indexed Empirical Process for Missing at Random Functional Ergodic Data

1 Laboratoire de Mathématiques Appliquées de Compiègne (L.M.A.C.), Université de Technologie de Compiègne, 60200 Compiègne, France
2 Laboratory of Stochastic Models, Statistics and Applications, University of Saida-Dr. Moulay Tahar, P.O. Box 138 EN-NASR, Saïda 20000, Algeria
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(3), 448; https://doi.org/10.3390/math12030448
Submission received: 18 January 2024 / Revised: 25 January 2024 / Accepted: 29 January 2024 / Published: 30 January 2024

Abstract

This work examines the asymptotic characteristics of a conditional set-indexed empirical process composed of functional ergodic random variables with responses missing at random (MAR). This paper's findings enlarge the previous advancements in functional data analysis through the use of empirical process methodologies. These results are established under specific structural hypotheses regarding entropy and under appealing conditions regarding the model. As an application, the regression operator's asymptotic $(1-\alpha)$-confidence interval is provided for $0<\alpha<1$. Additionally, we offer a classification example to demonstrate the practical importance of the methodology.

1. Introduction

Empirical process techniques are among the most powerful strategies for solving problems in statistics. Historically, many limit theorems for the empirical process were established in finite-dimensional frameworks (see, e.g., Refs. [1,2,3] for exhaustive, self-contained texts with a variety of statistical applications), both under mixing conditions and in the independent identically distributed framework. In the setting of independent variables, Ref. [4] characterized, modulo measurability, the classes $\mathcal{C}$ of sets for which the Glivenko–Cantelli theorem holds; we may also cite Refs. [5,6,7,8,9,10,11,12,13,14,15]. Under various mixing conditions, empirical processes based on dependent data have been investigated; for instance, the author of Ref. [16] established the asymptotic normality of $\phi$-mixing sequences. Regarding these areas of investigation concerning alternative forms of mixing, one may refer to Refs. [17,18,19,20]. The authors of [21] identified a bracketing condition adapted to strong mixing. The function-indexed empirical process for $\beta$-mixing sequences was investigated in Ref. [22]. Uniform convergence and asymptotic normality of a set-indexed conditional empirical process within a strictly stationary and strong mixing framework were established in Ref. [23]. Over the past few decades, there has been a growing interest in the statistical literature regarding functional random variables, which are variables with values that exist in an infinite-dimensional space. As is the case, for example, in meteorology, medicine, satellite imagery, and numerous other scientific disciplines, the proliferation of data collected on an ever more precise temporal and spatial grid has inspired the development of this research topic. The statistical modeling of these data, perceived as stochastic functions, has thus engendered numerous complex theoretical and numerical questions. The monographs of Refs. [24,25] provide comprehensive surveys of functional data analysis, encompassing both theoretical and practical aspects; these monographs discuss linear models for random variables that take values in a Hilbert space, scalar-on-function and function-on-function linear models, parametric discriminant analysis, and functional principal component analysis, respectively. For the most recent findings on FDA and related subjects, we may consult the bibliographic reviews provided by sources such as Refs. [26,27,28,29,30,31], among others. For scalar-on-function nonlinear regression models, the authors of [32] emphasized nonparametric techniques, particularly kernel-type estimation. Such tools were subsequently expanded to include discrimination and classification analysis. Intriguing statistical concepts that were extended to the functional data framework were examined in Ref. [33], including the portmanteau test, change detection, and goodness-of-fit tests. Good overviews of this literature can be found in Refs. [20,34,35,36,37,38,39,40,41]; more recently, Ref. [42] gave the first results on the conditional set-indexed empirical process for functional data. Considerable effort has been devoted to developing a convergence theory for empirical processes involving functional random variables, although these topics lie well beyond the purview of Ref. [23].
A theoretical framework of this nature is imperative for contemporary statistical analysis. For over six decades, functional data analysis has been acknowledged in the statistical literature and has since become the focus of numerous works. The outcomes produced by empirical processes in functional frameworks nevertheless remain extremely limited. For recent references, we may refer to Refs. [43,44,45,46,47], which achieved numerous valuable outcomes regarding set-indexed conditional empirical processes inside the functional setting of the ergodic framework. One should not overlook the possibility that some pairings of observations may be incomplete in numerous practical applications, including sampling surveys, pharmaceutical tracing tests, and reliability tests. Such instances are commonly referred to as “missing data”, and practitioners in the fields of data science and analytics will attest that missing data is a common issue. MAR (missing at random) indicates that, while there may be systematic differences between the missing and observed values, these discrepancies can be fully accounted for by other observed variables. The situation changes significantly when predictors are present; for instance, the authors of [48,49,50,51,52,53,54,55,56,57,58] provide some examples of this in finite dimensions, and Refs. [59,60] are recent references. In a recent study, the authors of [61] examined the linear quantile regression model in the presence of response data missing at random, utilizing the inverse probability weight method. They developed an estimating equation for the unknown parameters based on quantile regression and introduced a standard estimator for quantile regression; simultaneously, they formulated the empirical likelihood (EL) ratio function for the unknown parameter and established a maximum EL estimator for it. Work examining the statistical characteristics of functional nonparametric models for missing data remains scarce. The kernel estimator of the conditional quantile was introduced in Ref. [62] under the assumptions of ergodicity and random censorship, where strong consistency (with rate) was demonstrated and the asymptotic distribution of the estimator was defined; the estimator was also applied to forecast the peak electricity demand interval using smart meter data. In their study, the authors of [63] developed an estimator of the regression operator in the context of functional stationary ergodic data with responses missing at random (MAR) and established its asymptotic properties, including the convergence rate in probability and asymptotic normality. For further references, we suggest consulting Refs. [64,65].
Our findings extend a prior study [44] by establishing more precise limits under less stringent restrictions. This offers a new perspective on the theory of empirical processes for random variables with general dependencies. This work addresses a problem that has not been thoroughly examined thus far. The framework of ergodic functional data was introduced in Ref. [66], where consistencies with rates and the asymptotic normality of the regression function estimate were established, together with some examples. For recent papers on the subject, we refer to Ref. [43], where the authors extended Ref. [66] to a more general framework. Some motivations for considering an ergodic dependence structure in the data, rather than a mixing one, are discussed in Refs. [67,68].
The objective of this study is to enhance the development of a practical methodology for addressing MAR samples in functional nonparametric situations. We want to examine the estimation of conditional set-indexed empirical processes in the presence of both missing at random (MAR) data and ergodicity.
The structure of this paper is outlined as follows. In Section 2, we introduce the notation and definitions, along with the conditional empirical process. Our main results are presented in Section 3. Section 3.1 is dedicated to discussing the procedure for selecting the bandwidth. In Section 4, we apply our main result to classification. Concluding remarks and potential future developments are discussed in Section 5. To maintain a smooth presentation flow, all proofs are consolidated in Appendix A.

2. The Set-Indexed Conditional Empirical Process

To enhance clarity, let us delve into the definition of the ergodic property for processes. Consider a measurable space $(S,\mathcal{J})$, and denote by $S^{\mathbb{N}}$ the space of all functions $s:\mathbb{N}\to S$. If $s_j$ represents the value of the function $s$ at $j\in\mathbb{N}$, define $H_j$ as the $j$-th coordinate map, i.e., $H_j(s)=s_j$, and consider $H_j^{-1}(\mathcal{J})$ for $j\in\mathbb{N}$. A random process $Z=(Z_j:j\in\mathbb{N})$ can thus be viewed as a random variable defined on the probability space $(\Omega,\mathcal{A},\mathbb{P})$, taking values in $(S^{\mathbb{N}},\mathcal{J}^{\mathbb{N}})$. A set $B\in\mathcal{A}$ is termed invariant if there exists a set $A\in\mathcal{J}^{\mathbb{N}}$ such that $B=\{(Z_n,Z_{n+1},\ldots)\in A\}$ holds for every $n\geq1$. The process $Z$ is then considered ergodic when, for any invariant set $B$, we have $\mathbb{P}(B)=0$ or $\mathbb{P}(\Omega\setminus B)=0$. As per the ergodic theorem, it is well known that for a stationary ergodic process $Z$, the following convergence holds almost surely:
$$\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}Z_i=\mathbb{E}(Z_1),\quad\text{almost surely}.\qquad(1)$$
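As a quick numerical illustration of convergence (1), the following minimal sketch (a toy example of ours, not part of the original development) simulates a stationary ergodic Gaussian AR(1) process, which satisfies (1), and checks that the sample mean approaches $\mathbb{E}(Z_1)=0$:

```python
import numpy as np

# Toy check of the ergodic theorem: a stationary Gaussian AR(1) process
# Z_t = a Z_{t-1} + e_t with |a| < 1 is stationary and ergodic, so its
# sample mean converges almost surely to E(Z_1) = 0.
rng = np.random.default_rng(0)
a, n = 0.5, 100_000
z = np.empty(n)
z[0] = rng.normal(scale=np.sqrt(1.0 / (1.0 - a**2)))  # draw from the stationary law
for t in range(1, n):
    z[t] = a * z[t - 1] + rng.normal()
print(z.mean())  # close to E(Z_1) = 0 for large n
```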
Therefore, the ergodic property in our setting is formulated based on statement (1). We consider a sample of random elements $(X_1,Y_1),\ldots,(X_n,Y_n)$, each drawn from the joint distribution of $(X,Y)$, where $X$ takes values in a space $\mathcal{E}$ and $Y$ in $\mathbb{R}^d$. The functional space $\mathcal{E}$ is endowed with a semi-metric $d_{\mathcal{E}}(\cdot,\cdot)$. Our goal is to investigate the relationships between $X$ and $Y$ by estimating functional operators associated with the conditional distribution of $Y$ given $X$. One such operator is the regression operator, defined for a measurable set $C$ in a class of sets $\mathcal{C}$ by
$$\mu(C\mid x)=\mathbb{E}\big(\mathbf{1}_{\{Y\in C\}}\mid X=x\big).$$
To address this, we employ a Nadaraya–Watson-type conditional empirical distribution, as proposed in Refs. [42,44,69,70]. We now introduce the missing at random (MAR) mechanism for the response variable. In an available incomplete sample of size $n$ from $(X,Y,\delta)$, denoted $(X_i,Y_i,\delta_i)$, $1\leq i\leq n$, $X_i$ is fully observed, $\delta_i=1$ if $Y_i$ is observed, and $\delta_i=0$ otherwise. The Bernoulli random variable $\delta$ satisfies:
$$\mathbb{P}(\delta=1\mid X=x,\,Y=y)=\mathbb{P}(\delta=1\mid X=x)=P(x),$$
where $P(x)$ is a functional operator, termed the conditional probability of observing the response given the predictor, which is often unknown. This mechanism implies that $\delta$ and $Y$ are conditionally independent given $X$, akin to the finite-dimensional case in Ref. [48].
The Nadaraya–Watson-type conditional empirical distribution function is given by:
$$\mu_n(C,x)=\frac{\displaystyle\sum_{i=1}^{n}\delta_i\,\mathbf{1}_{\{Y_i\in C\}}\,K\!\left(\frac{d_{\mathcal{E}}(x,X_i)}{h_n}\right)}{\displaystyle\sum_{i=1}^{n}\delta_i\,K\!\left(\frac{d_{\mathcal{E}}(x,X_i)}{h_n}\right)},$$
where $K(\cdot)$ is a real-valued kernel function from $[0,\infty)$ into $[0,\infty)$, $h_n$ is a smoothing parameter satisfying $h_n\to0$ as $n\to\infty$, $C$ is a measurable set, and $x\in\mathcal{E}$. When choosing $C=(-\infty,z]$, where $z\in\mathbb{R}^d$, it reduces to the conditional empirical distribution function $F_n(z\mid x)=\mu_n((-\infty,z],x)$, as referenced in Refs. [71,72,73]; in this case, the corresponding class is $\mathcal{C}=\{(-\infty,z]:z\in\mathbb{R}^d\}$. Regarding the semi-metric topology on $\mathcal{E}$, we introduce the notation
$$B(x,t)=\{x_1\in\mathcal{E}:\ d_{\mathcal{E}}(x_1,x)\leq t\},$$
which denotes the ball in $\mathcal{E}$ with center $x$ and radius $t$. The probability $\mathbb{P}(X\in B(x,t))$ is commonly referred to as the small ball probability in the literature, especially when $t$ tends to zero. The significance of this notion is both theoretically and practically profound, as the concept of a ball is intricately connected with the semi-metric $d_{\mathcal{E}}(\cdot,\cdot)$. The selection of this semi-metric becomes pivotal when dealing with data in infinite-dimensional spaces.
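For concreteness, the following minimal sketch (our own illustration, with hypothetical helper names; a generic semi-metric `d` and a kernel `K` supported on $[0,1)$ are assumed) computes the estimator $\mu_n(C,x)$ directly from the definition above:

```python
import numpy as np

def mu_n(C_ind, x, X, Y, delta, h, d, K):
    # C_ind : callable with C_ind(y) = 1 if y lies in the set C, else 0
    # x     : evaluation point in the functional space E
    # X, Y  : covariates X_1..X_n and responses Y_1..Y_n (Y_i arbitrary when missing)
    # delta : 0/1 missingness indicators (delta_i = 1 iff Y_i is observed)
    # h, d, K : bandwidth h_n, semi-metric d_E, kernel supported on [0, 1)
    w = np.array([dl * K(d(x, Xi) / h) for Xi, dl in zip(X, delta)], dtype=float)
    if w.sum() == 0.0:  # no observed response with X_i in the ball B(x, h)
        return np.nan
    ind = np.array([C_ind(y) for y in Y], dtype=float)
    return float(np.sum(w * ind) / w.sum())

# Possible choices for curves discretized on a common grid `grid`:
# d = lambda u, v: np.sqrt(np.trapz((u - v) ** 2, grid))   # L2 semi-metric
# K = lambda t: (1.0 - t) if 0.0 <= t < 1.0 else 0.0       # decreasing kernel
```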
In many cases, the small ball probability can be approximated as the product of two functions, one of the point $x$ and one of the radius $h$, as in assumption (H1)(i) below. This insight is illustrated by several examples found in Proposition 1 of [74]:
  • $\phi(h_n)=Ch_n^{\upsilon}$ for some $\upsilon>0$, with $\tau_0(s)=s^{\upsilon}$;
  • $\phi(h_n)=Ch_n^{\upsilon}\exp\big(-Ch_n^{-p}\big)$ for some $\upsilon>0$ and $p>0$, with $\tau_0(s)$ the Dirac function;
  • $\phi(h_n)=C\big|\ln(h_n)\big|^{-1}$, with $\tau_0(s)=\mathbf{1}_{]0,1]}(s)$ the indicator function of $]0,1]$.
Define the following $\sigma$-fields $\mathcal{F}_i$ and $\mathcal{G}_i$. Let
$$\mathcal{F}_i=\sigma\big((X_1,Y_1,\delta_1),\ldots,(X_i,Y_i,\delta_i)\big),$$
$$\mathcal{G}_i=\sigma\big((X_1,Y_1,\delta_1),\ldots,(X_i,Y_i,\delta_i),X_{i+1}\big);$$
that is, $\mathcal{F}_i$ is the $\sigma$-field generated by $((X_1,Y_1,\delta_1),\ldots,(X_i,Y_i,\delta_i))$ and $\mathcal{G}_i$ the one generated by $((X_1,Y_1,\delta_1),\ldots,(X_i,Y_i,\delta_i),X_{i+1})$. Let $B(x,u)$ be a ball centered at $x\in\mathcal{E}$ with radius $u$, and let $D_i=d_{\mathcal{E}}(x,X_i)$, so that $D_i$ is a nonnegative real-valued random variable. Operating within the probability space $(\Omega,\mathcal{A},\mathbb{P})$, consider
$$F_x(u)=\mathbb{P}(D_i\leq u)=\mathbb{P}\big(X_i\in B(x,u)\big),$$
and $F_x^{\mathcal{F}_{i-1}}(u)=\mathbb{P}\big(X_i\in B(x,u)\mid\mathcal{F}_{i-1}\big)$ to be the distribution function and the conditional distribution function, given the $\sigma$-field $\mathcal{F}_{i-1}$, of $(D_i)_{i\geq1}$, respectively. Let $o_{a.s.}(u)$ represent a real random function $l(\cdot)$ such that $l(u)/u$ converges to zero almost surely as $u\to0$. In a similar vein, define $O_{a.s.}(u)$ as a real random function $l(\cdot)$ such that $l(u)/u$ is almost surely bounded. In what follows, we implicitly assume the ergodicity of the sequence of random elements $(X_i,Y_i)$, $i=1,\ldots,n$.

2.1. Assumptions and Notation

In this paper, the point $x$ is a fixed element of the functional space $\mathcal{E}$. We present the metric entropy with inclusion as a means to quantify the richness, or complexity, of the class of sets $\mathcal{C}$. For any given $\varepsilon>0$, the covering number is defined as:
$$N\big(\varepsilon,\mathcal{C},\mu(\cdot\mid x)\big)=\inf\Big\{n\in\mathbb{N}:\ \exists\,C_1,\ldots,C_n\in\mathcal{C}\ \text{such that}\ \forall C\in\mathcal{C},\ \exists\,1\leq i,j\leq n\ \text{with}\ C_i\subseteq C\subseteq C_j\ \text{and}\ \mu\big(C_j\setminus C_i\mid x\big)<\varepsilon\Big\}.$$
The term $\log N\big(\varepsilon,\mathcal{C},\mu(\cdot\mid x)\big)$ is referred to as the metric entropy with inclusion of $\mathcal{C}$ with respect to $\mu(\cdot\mid x)$. For numerous classes, estimates for these covering numbers are well documented; refer, for instance, to Ref. [75]. Below, we frequently make the assumption that either $\log N\big(\varepsilon,\mathcal{C},\mu(\cdot\mid x)\big)$ or $N\big(\varepsilon,\mathcal{C},\mu(\cdot\mid x)\big)$ exhibits behavior reminiscent of powers of $\varepsilon^{-1}$. We affirm that condition $(R_\gamma)$ is satisfied when
$$\log N\big(\varepsilon,\mathcal{C},\mu(\cdot\mid x)\big)\leq H_\gamma(\varepsilon),\quad\text{for all }\varepsilon>0,\qquad(3)$$
where
$$H_\gamma(\varepsilon)=\begin{cases}\log(A/\varepsilon)&\text{if }\gamma=0,\\[2pt] A\,\varepsilon^{-\gamma}&\text{if }\gamma>0,\end{cases}$$
for some constants $A,r>0$. As emphasized in Ref. [23], it is notable that condition (3) with $\gamma=0$ is fulfilled by intervals, rectangles, balls, ellipsoids, and by classes derived from these through finite set operations of union, intersection, and complement. The class of convex sets in $\mathbb{R}^d$ ($d\geq2$) satisfies condition (3) with $\gamma=(d-1)/2$. Various other sets that satisfy (3) with $\gamma>0$ are elaborated upon in Ref. [75]. We now give some further notation. For $j\geq1$, set
$$M_j=K^j(1)-\int_0^1\big(K^j\big)'(u)\,\tau_0(u)\,du.$$
In this section, we establish the weak convergence of the process $\{\nu_n(C,x):C\in\mathcal{C}\}$ defined by
$$\nu_n(C,x):=\sqrt{n\phi(h_n)}\,\big(\mu_n(C,x)-\mathbb{E}\,\mu_n(C,x)\big).$$
In the course of our analysis, we will rely on the following assumptions.
(H1) 
For every $x\in\mathcal{E}$, there exists a sequence of nonnegative bounded random functionals $(f_{i,1})_{i\geq1}$, a sequence of random functions $(g_{i,x})_{i\geq1}$, a deterministic nonnegative bounded functional $f_1$, and a nonnegative real function $\phi$ with $\phi(h)\to0$ as $h\to0$, such that
(i)
$$F_x(u)=\phi(u)f_1(x)+o(\phi(u))\quad\text{as }u\to0.$$
(ii)
For any $i\in\mathbb{N}$, $F_x^{\mathcal{F}_{i-1}}(u)=\phi(u)f_{i,1}(x)+g_{i,x}(u)$, with $g_{i,x}(u)=o_{a.s.}(\phi(u))$ as $u\to0$, $g_{i,x}(u)/\phi(u)$ almost surely bounded, and $n^{-1}\sum_{i=1}^{n}g_{i,x}^{j}(u)=o_{a.s.}\big(\phi^{j}(u)\big)$ as $n\to\infty$, for $j=1,2$.
(iii)
$n^{-1}\sum_{i=1}^{n}f_{i,1}^{j}(x)\to f_1^{j}(x)$ almost surely as $n\to\infty$, for $j=1,2$.
(iv)
There exists a nondecreasing bounded function $\tau_0$ such that, uniformly in $u\in(0,1)$,
$$\frac{\phi(ru)}{\phi(r)}=\tau_0(u)+o(1)$$
as $r\to0$, and, for $1\leq j\leq2+\delta$ with $\delta>0$, $\int_0^1\big(K^j(u)\big)'\tau_0(u)\,du<\infty$.
(H2) 
There exist positive constants β > 0 and η 1 > 0 such that for all x 1 , x 2 N x , a neighborhood of x, the following holds
$$\big|\mu(C\mid x_1)-\mu(C\mid x_2)\big|\leq\eta_1\,d_{\mathcal{E}}^{\beta}(x_1,x_2).$$
(H3) 
(i)
The conditional mean of $\mathbf{1}_{\{Y_i\in C\}}$ given the $\sigma$-field $\mathcal{G}_{i-1}$ depends solely on $X_i$; that is, for any $i\geq1$, $\mathbb{E}\big(\mathbf{1}_{\{Y_i\in C\}}\mid\mathcal{G}_{i-1}\big)=\mu(X_i)$ almost surely. Likewise, the conditional second moment given $\mathcal{G}_{i-1}$ also depends only on $X_i$; that is, for any $i\geq1$,
$$\mathbb{E}\Big(\big(\mathbf{1}_{\{Y_i\in C\}}-\mu(X_i)\big)^2\mid\mathcal{G}_{i-1}\Big)=W(X_i)$$
almost surely.
(ii)
Furthermore, the functions W ( · ) and P ( · ) are continuous in a neighborhood of x, namely,
$$\sup_{\{u:\,d(x,u)\leq h\}}\big|W(u)-W(x)\big|=o(1)\quad\text{as }h\to0,$$
$$\sup_{\{u:\,d(x,u)\leq h\}}\big|P(u)-P(x)\big|=o(1)\quad\text{as }h\to0.$$
(iii)
There exists $\delta>0$ such that the function
$$\overline{W}_{2+\delta}(u)=\mathbb{E}\Big(\big|\mathbf{1}_{\{Y_1\in C\}}-\mu(x)\big|^{2+\delta}\mid X_1=u\Big)$$
is continuous in a neighborhood of $x$, for $u\in\mathcal{E}$.
(H4) 
There exist positive constants $b_3$ and $\eta_4$ such that, for any $(y_1,y_2)\in\mathbb{R}^{2d}$, the conditional density $f(\cdot)$ of $Y$ given $X=x$ satisfies
$$\big|f(y_1)-f(y_2)\big|\leq\eta_4\,\|y_1-y_2\|^{b_3}.$$
(H5) 
The kernel function $K(\cdot)$ has support within the interval $(0,1)$ and possesses a continuous first derivative on $(0,1)$, satisfying $K'(t)<0$ for all $t\in(0,1)$. Moreover,
$$\int_0^1\big(K^j\big)'(u)\,du<\infty,\quad\text{for }j=1,2.$$
(H6) 
Suppose that the set class $\mathcal{C}$ adheres to condition (3).
(H7) 
The smoothing parameter $h_n$ fulfills the following criterion: $h_n\to0$ and $n\phi(h_n)\to\infty$ as $n\to\infty$.

2.2. Comments on the Assumptions

The significance of condition (H1) extends to both the ergodic and functional aspects addressed in this paper. The condition utilized here shares similarities with that employed in Ref. [66]. The functions $f_{i,1}(\cdot)$ and $f_1(\cdot)$ play roles analogous to the conditional and unconditional densities in the finite-dimensional setting, while $\phi(u)$ describes the influence of the radius $u$ on the small ball probability as $u$ tends to zero, as illustrated in Ref. [66]. Condition (H2) is standard in nonparametric regression estimation. (H3)(i) is essential for establishing consistency, reflecting the Markovian nature of the functionally stationary ergodic data; this condition aligns with that used in Ref. [63]. (H3)(ii,iii) serve as continuous local conditions, necessary for the main results and for conciseness in this paper. Condition (H4) on the density $f(\cdot)$ conforms to a classical Lipschitz-type nonparametric functional model. Assumption (H5) relates to the choice of the kernel $K(\cdot)$, a common practice in nonparametric functional estimation. It is worth noting that the Parzen symmetric kernel is unsuitable in this context due to the positivity of the random process $d_{\mathcal{E}}(x,X)$. Hence, we consider $K(\cdot)$ with support $[0,1]$, a natural generalization of the assumption usually made in the multivariate case, where $K(\cdot)$ is expected to be a spherically symmetric density function. The conditions $K(1)>0$ and $K'(\cdot)<0$ ensure that $M_1>0$ for all limit functions $\tau_0$. The condition $K(1)>0$ is also necessary for defining the moment $M_2$, which, in this case, is determined by the value $K(1)$. (H7) provides a condition on the bandwidths, acknowledging that consistency cannot be guaranteed without it.

3. Main Results

Below, we write $Z\overset{\mathcal{D}}{=}\mathcal{N}(\mu,\sigma^2)$ when the random variable $Z$ is distributed according to a normal distribution with mean $\mu$ and variance $\sigma^2$. The symbol $\overset{\mathcal{D}}{\longrightarrow}$ represents convergence in distribution, while $\overset{\mathbb{P}}{\longrightarrow}$ indicates convergence in probability.
Theorem 1 
(Uniform Consistency). Assume that the conditions (H1)–(H7) are satisfied. Consider a class of measurable sets $\mathcal{C}$ for which
$$N\big(\varepsilon,\mathcal{C},\mu(\cdot\mid x)\big)<\infty,$$
for any $\varepsilon>0$. Moreover, assume that for every $C\in\mathcal{C}$,
$$\big|\mu(C\mid y)f(y)-\mu(C\mid x)f(x)\big|\longrightarrow0,\quad\text{as }y\to x.$$
If $n\phi(h_n)\to\infty$ and $h_n\to0$ as $n\to\infty$, then
$$\sup_{C\in\mathcal{C}}\big|\mu_n(C,x)-\mathbb{E}\,\mu_n(C,x)\big|\overset{\mathbb{P}}{\longrightarrow}0.$$
Note that the proof of Theorem 1 follows directly from the decomposition
$$\mu_n(C,x)-\mathbb{E}\,\mu_n(C,x)=\frac{1}{\mathbb{E}(\widehat{f_n}(x))}\Big[\big(\widehat{\varphi_n}(C,x)-\mathbb{E}\,\widehat{\varphi_n}(C,x)\big)-\mu_n(C,x)\big(\widehat{f_n}(x)-\mathbb{E}(\widehat{f_n}(x))\big)\Big]=\frac{Q_n(x)}{\mathbb{E}(\widehat{f_n}(x))},$$
where
$$Q_n(x)=\big(\widehat{\varphi_n}(C,x)-\mathbb{E}\,\widehat{\varphi_n}(C,x)\big)-\mu_n(C,x)\big(\widehat{f_n}(x)-\mathbb{E}(\widehat{f_n}(x))\big),$$
and
$$\widehat{\varphi_n}(C,x)=\frac{1}{n\phi(h_n)}\sum_{i=1}^{n}\delta_i\,\mathbf{1}_{\{Y_i\in C\}}\,K\!\left(\frac{d_{\mathcal{E}}(x,X_i)}{h_n}\right),\qquad\widehat{f_n}(x)=\frac{1}{n\phi(h_n)}\sum_{i=1}^{n}\delta_i\,K\!\left(\frac{d_{\mathcal{E}}(x,X_i)}{h_n}\right).$$
Let $\Delta_i(x)=K\big(d_{\mathcal{E}}(x,X_i)/h_n\big)$, so that
$$\widehat{\varphi_n}(C,x)=\frac{1}{n\phi(h_n)}\sum_{i=1}^{n}\mathbf{1}_{\{Y_i\in C\}}\,\delta_i\,\Delta_i(x),\qquad\widehat{f_n}(x)=\frac{1}{n\phi(h_n)}\sum_{i=1}^{n}\delta_i\,\Delta_i(x).$$
Henceforth, for $x\in\mathcal{E}$, let us denote
$$\mathbb{E}\big(\widehat{\varphi_n}(C,x)\big)=\frac{1}{n\,\mathbb{E}(\Delta_1(x))}\sum_{i=1}^{n}\mathbb{E}\big(\delta_i\,\mathbf{1}_{\{Y_i\in C\}}\,\Delta_i(x)\mid\mathcal{F}_{i-1}\big),$$
and
$$\mathbb{E}\big(\widehat{f_n}(x)\big)=\frac{1}{n\,\mathbb{E}(\Delta_1(x))}\sum_{i=1}^{n}\mathbb{E}\big(\delta_i\,\Delta_i(x)\mid\mathcal{F}_{i-1}\big);$$
here, $\mathbb{E}(X\mid\mathcal{F})$ represents the conditional expectation of the random variable $X$ given the $\sigma$-field $\mathcal{F}$.
To establish asymptotic normality, define the “bias” term as
$$B_n(x)=\frac{\mathbb{E}\big(\widehat{f_n}(x)\big)\,\mu_n(C,x)-\mathbb{E}\big(\widehat{\varphi_n}(C,x)\big)}{\mathbb{E}\big(\widehat{\varphi_n}(C,x)\big)}.$$
The subsequent result presents the weak convergence. It is important to note that $f_1(x)$ is specified in (H1).
Theorem 2 
(Asymptotic normality). Assuming (H1)–(H7), as $n\to\infty$, for $m\geq1$ and $C_1,\ldots,C_m\in\mathcal{C}$, we have
$$\big(\nu_n(C_1,x),\ldots,\nu_n(C_m,x)\big)\overset{\mathcal{D}}{\longrightarrow}\mathcal{N}(0,\Sigma),$$
where $\Sigma=(\sigma_{ij}(x))_{i,j=1,\ldots,m}$ and
$$\sigma_{ij}(x)=\frac{M_2}{P(x)\,M_1^2\,f_1(x)}\Big[\mathbb{E}\big(\mathbf{1}_{\{Y\in C_i\cap C_j\}}\mid X=x\big)-\mathbb{E}\big(\mathbf{1}_{\{Y\in C_i\}}\mid X=x\big)\,\mathbb{E}\big(\mathbf{1}_{\{Y\in C_j\}}\mid X=x\big)\Big],$$
whenever $f_1(x)>0$, and
$$M_1=K(1)-\int_0^1 K'(u)\,\tau_0(u)\,du,\qquad M_2=K^2(1)-\int_0^1\big(K^2\big)'(u)\,\tau_0(u)\,du.$$
To obtain the density of the process, it is essential to introduce the following function, which provides insights into the asymptotic behavior of the modulus of continuity:
$$\Lambda_\gamma(\sigma^2,n)=\begin{cases}\sigma^2\log\big(1/\sigma^2\big),&\text{if }\gamma=0;\\[4pt]\max\Big\{(\sigma^2)^{(1-\gamma)/2},\ \big(n\phi(h_n)\big)^{-(3\gamma-1)/(2(3\gamma+1))}\Big\},&\text{if }\gamma>0.\end{cases}$$
Theorem 3. 
Assume that (H1)–(H7) are satisfied. For every $\sigma^2>0$, consider $\mathcal{C}_\sigma\subset\mathcal{C}$ a class of measurable sets with
$$\sup_{C\in\mathcal{C}_\sigma}\mu(C\mid x)\leq\sigma^2\leq1,$$
and suppose that $\mathcal{C}$ fulfils (3) with $\gamma\geq0$. Additionally, we assume that $\phi(h_n)\to0$ and $n\phi(h_n)\to+\infty$ as $n\to+\infty$, such that
$$n\phi(h_n)\,\Lambda_\gamma^2(\sigma^2,n)\longrightarrow\infty,$$
and, as $n\to+\infty$, we have
$$n\phi(h_n)\,\sigma^2\geq\Big(\log\big(1/\sigma^2\big)\Big)^{1+\gamma}\log(n).$$
Furthermore, we assume that $\sigma^2\geq h^2$; for $\gamma>0$ and $d=1,2$, the latter has to be replaced by $\sigma^2\geq\phi(h_n)\log\big(1/\phi(h_n)\big)$. Under the conditions of Theorem 2, the process converges in law to a Gaussian process $\{\nu(C,x):C\in\mathcal{C}\}$, which possesses a version with uniformly bounded and uniformly continuous paths with respect to the $\|\cdot\|_2$ norm. The covariance is given by $\sigma_{ij}(x)$ as specified in Theorem 2.
Remark 1. 
The distance of two measures $\mu_1,\mu_2$ in the Prokhorov metric is defined as (see, e.g., Refs. [76,77,78,79])
$$\rho_P(\mu_1,\mu_2):=\inf\big\{\varepsilon>0:\ \mu_1(B)\leq\mu_2(B^\varepsilon)+\varepsilon\ \text{for all Borel sets }B\subseteq\Omega\big\}.$$
Here, $B^\varepsilon=\{x:\ d(x,B)<\varepsilon\}$, where $d(x,B)$ is the distance of $x$ to $B$, i.e., $d(x,B)=\inf_{z\in B}\|x-z\|$. The distance of two random variables $\xi_1,\xi_2$ in the Ky Fan metric is defined as [80]
$$\rho_K(\xi_1,\xi_2):=\inf\big\{\varepsilon>0:\ \mu\big(\{\omega\in\Omega:\ d(\xi_1(\omega),\xi_2(\omega))>\varepsilon\}\big)<\varepsilon\big\}.$$
It is worthwhile to establish an adequate link of our findings to these distances in the conditional setting.
Remark 2. 
Central limit theorems are frequently utilized to establish confidence intervals for the target being estimated. In the realm of non-parametric estimation, the asymptotic variance $\Sigma(x):=(\sigma_{i,j}(x))$ in the central limit theorem depends on certain unknown functions. Consequently, in practical scenarios, only approximate confidence intervals can be derived, even when $\Sigma(x)$ is functionally specified. Notably, according to Theorem 2, the limiting variance incorporates the unknown function $f_1(\cdot)$, and the normalization is contingent on the function $\phi(\cdot)$, which is not explicitly identifiable in practice. Furthermore, the quantities $W(\cdot)$ and $\tau_0$ need to be estimated. The corollary below, a slight modification of Theorem 2, permits a practical form of the results to be used; typically, the conditional variance $W(x)$ is estimated similarly to what is obtained in Ref. [63].
Let
$$W_n(x)=\frac{\displaystyle\sum_{i=1}^{n}\delta_i\big(\mathbf{1}_{\{Y_i\in C\}}-\mu_n(x)\big)^2 K\!\left(\frac{d_{\mathcal{E}}(x,X_i)}{h}\right)}{\displaystyle\sum_{i=1}^{n}\delta_i K\!\left(\frac{d_{\mathcal{E}}(x,X_i)}{h}\right)}=\frac{\displaystyle\sum_{i=1}^{n}\delta_i\,\mathbf{1}_{\{Y_i\in C\}}\,K\!\left(\frac{d_{\mathcal{E}}(x,X_i)}{h}\right)}{\displaystyle\sum_{i=1}^{n}\delta_i K\!\left(\frac{d_{\mathcal{E}}(x,X_i)}{h}\right)}-\big(\mu_n(x)\big)^2=\widehat{g_n}(x)-\big(\mu_n(x)\big)^2.$$
Let us introduce the following estimator:
$$F_{x,n}(t)=\frac{1}{n}\sum_{i=1}^{n}\mathbf{1}_{\{d(x,X_i)\leq t\}}.$$
By employing the decomposition of $\tau_0(\cdot)$ in (H1)(i,iv), one can estimate $\tau_0(\cdot)$ as
$$\tau_n(t)=\frac{F_{x,n}(th)}{F_{x,n}(h)}.$$
Subsequently, for a given kernel $K(\cdot)$, the quantities $M_1$ and $M_2$ can be estimated as follows:
$$M_{1,n}=K(1)-\int_0^1 K'(s)\,\tau_n(s)\,ds,\qquad M_{2,n}=K^2(1)-\int_0^1\big(K^2\big)'(s)\,\tau_n(s)\,ds.$$
Finally, the estimator of $P(x)$ is given by
$$P_n(x)=\frac{\displaystyle\sum_{i=1}^{n}\delta_i K\!\left(\frac{d_{\mathcal{E}}(x,X_i)}{h_n}\right)}{\displaystyle\sum_{i=1}^{n}K\!\left(\frac{d_{\mathcal{E}}(x,X_i)}{h_n}\right)}.$$
Corollary 1. 
Suppose that conditions (H1)–(H7) are satisfied, where $K'$ and $(K^2)'$ are integrable functions. Additionally, assume that $nF_x(h)\to\infty$ and $h^{\beta}\big(nF_x(h)\big)^{1/2}\to0$ as $n\to\infty$. Then, for any $x\in\mathcal{E}$ such that $f_1(x)>0$, we have
$$\frac{M_{1,n}}{\sqrt{M_{2,n}}}\sqrt{\frac{n\,F_{x,n}(h_n)\,P_n(x)}{W_n(x)}}\,\big(\mu_n(C,x)-\mu(C\mid x)\big)\overset{\mathcal{D}}{\longrightarrow}\mathcal{N}(0,1).$$
Using Corollary 1, the asymptotic $100(1-\alpha)\%$ confidence band is given by
$$\left[\mu_n(C,x)-c_\alpha\,\frac{\sqrt{M_{2,n}\,W_n(x)}}{M_{1,n}\sqrt{n\,F_{x,n}(h)\,P_n(x)}},\ \ \mu_n(C,x)+c_\alpha\,\frac{\sqrt{M_{2,n}\,W_n(x)}}{M_{1,n}\sqrt{n\,F_{x,n}(h)\,P_n(x)}}\right],$$
where $c_\alpha$ is the upper $\alpha/2$ quantile of the standard normal distribution $\mathcal{N}(0,1)$.
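To make Corollary 1 concrete, the sketch below is a hypothetical implementation of ours (finite-difference derivatives stand in for $K'$ and $(K^2)'$, and a trapezoidal rule approximates the integrals defining $M_{1,n}$ and $M_{2,n}$); it assembles the plug-in quantities $\mu_n$, $W_n$, $P_n$, $F_{x,n}$, $\tau_n$, $M_{1,n}$, and $M_{2,n}$ into the asymptotic confidence band:

```python
import numpy as np
from scipy.stats import norm

def confidence_band(C_ind, x, X, Y, delta, h, d, K, alpha=0.05):
    # Plug-in (1 - alpha) confidence band for mu(C | x); sketch of Corollary 1.
    dist = np.array([d(x, Xi) for Xi in X])
    Kh = np.array([K(t) for t in dist / h])
    delta = np.asarray(delta, dtype=float)
    ind = np.array([C_ind(y) for y in Y], dtype=float)
    n = len(dist)

    mu = np.sum(delta * ind * Kh) / np.sum(delta * Kh)             # mu_n(C, x)
    W = np.sum(delta * (ind - mu) ** 2 * Kh) / np.sum(delta * Kh)  # W_n(x)
    P = np.sum(delta * Kh) / np.sum(Kh)                            # P_n(x)
    Fxn = np.mean(dist <= h)                                       # F_{x,n}(h), assumed > 0

    # tau_n(t) = F_{x,n}(t h) / F_{x,n}(h), evaluated on a grid of (0, 1]
    s = np.linspace(1e-3, 1.0, 400)
    tau = np.array([np.mean(dist <= si * h) for si in s]) / Fxn

    # M_{j,n} = K^j(1) - int_0^1 (K^j)'(s) tau_n(s) ds
    def M_jn(j):
        dKj = np.gradient(np.array([K(si) for si in s]) ** j, s)   # (K^j)' by differences
        return K(1.0) ** j - np.trapz(dKj * tau, s)

    M1, M2 = M_jn(1), M_jn(2)
    c = norm.ppf(1.0 - alpha / 2.0)                                # upper alpha/2 quantile
    half = c * np.sqrt(M2 * W / (M1 ** 2 * n * Fxn * P))
    return mu - half, mu + half
```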

3.1. The Bandwidth Selection Criterion

Several approaches have been devised and refined to formulate asymptotically optimal bandwidth selection rules for nonparametric kernel estimators, particularly for the Nadaraya–Watson regression estimator. Some noteworthy contributions include [81,82,83,84,85,86,87]. Choosing this parameter appropriately is essential, whether in the conventional finite-dimensional case or within the infinite-dimensional framework, to guarantee favorable practical performance. Let us define the leave-one-out $(X_j,Y_j,\delta_j)$ estimator of the regression operator:
$$\mu_{n,j}(C,x)=\frac{\displaystyle\sum_{i=1,i\neq j}^{n}\delta_i\,\mathbf{1}_{\{Y_i\in C\}}\,K\!\left(\frac{d_{\mathcal{E}}(x,X_i)}{h_n}\right)}{\displaystyle\sum_{i=1,i\neq j}^{n}\delta_i\,K\!\left(\frac{d_{\mathcal{E}}(x,X_i)}{h_n}\right)}.$$
To minimize the quadratic loss function, we introduce the following criterion, where $W(\cdot)$ is a (known) nonnegative weight function:
$$CV(C,h):=\frac{1}{n}\sum_{j=1}^{n}\delta_j\big(\mathbf{1}_{\{Y_j\in C\}}-\mu_{n,j}(C,X_j)\big)^2\,W(X_j).$$
Building upon the concepts developed in Ref. [83], a natural approach for selecting the bandwidth is to minimize the preceding criterion. Thus, let us choose $\widehat{h}_n$ as the minimizer over $h$ of
$$\sup_{C\in\mathcal{C}}CV(C,h).$$
One can replace (6) by
$$CV(C,h_n):=\frac{1}{n}\sum_{j=1}^{n}\delta_j\big(\mathbf{1}_{\{Y_j\in C\}}-\mu_{n,j}(C,X_j)\big)^2\,\widehat{W}(X_j,x).$$
In practice, one takes, for $j=1,\ldots,n$, the uniform global weights $W(X_j)=1$, and the local weights
$$\widehat{W}(X_j,x)=\begin{cases}1&\text{if }d(X_j,x)\leq h_n,\\ 0&\text{otherwise.}\end{cases}$$
For brevity, we have concentrated on the most popular method, namely, the cross-validated selected bandwidth. This approach can be extended to any other bandwidth selector, such as the bandwidth based on Bayesian ideas [88].
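As an illustration of this criterion, the following sketch of ours (reusing the hypothetical `mu_n` helper from Section 2 together with the local weights $\widehat{W}(X_j,x)$ above) selects $\widehat{h}_n$ over a finite grid by leave-one-out cross-validation:

```python
import numpy as np

def cv_bandwidth(C_ind, x, X, Y, delta, d, K, h_grid):
    # Leave-one-out cross-validated bandwidth for mu_n (sketch of the criterion).
    n = len(X)
    best_h, best_score = None, np.inf
    for h in h_grid:
        score = 0.0
        for j in range(n):
            if delta[j] == 0:
                continue                              # delta_j weights the loss
            if d(X[j], x) > h:
                continue                              # local weight W_hat(X_j, x) = 0
            keep = [i for i in range(n) if i != j]    # leave (X_j, Y_j, delta_j) out
            pred = mu_n(C_ind, X[j],
                        [X[i] for i in keep], [Y[i] for i in keep],
                        [delta[i] for i in keep], h, d, K)
            if not np.isnan(pred):
                score += (C_ind(Y[j]) - pred) ** 2
        score /= n
        if 0.0 < score < best_score:                  # skip degenerate empty windows
            best_score, best_h = score, h
    return best_h
```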

4. Applications to Classification with Partially Labeled Data

In this section, we apply the results developed in the previous sections to the problem of statistical classification. We consider a sample of random elements $(X_1,Y_1),\ldots,(X_n,Y_n)$ drawn from the joint distribution of $(X,Y)$, where $X$ takes values in a space $\mathcal{E}$ and $Y$ in $\mathbb{R}^d$. In classification, the objective is to predict the integer-valued label $Y$ based on the covariate $X$. More formally, we aim to find a function (classifier) $\theta:\mathcal{E}\to\mathbb{R}^d$ for which the probability of misclassification (incorrect prediction), i.e., $\mathbb{P}(\theta(X)\neq Y)$, is minimized. Let
$$P_k(x)=\mathbb{P}(Y=k\mid X=x),\quad x\in\mathcal{E},\ 1\leq k\leq n.$$
It can be demonstrated that the optimal classifier, i.e., the one with the minimum probability of error, is given by
$$\theta_B(x)=\arg\max_{1\leq k\leq n}P_k(x);$$
i.e., the best classifier $\theta_B$ satisfies
$$\max_{1\leq k\leq n}P_k(x)=P_{\theta_B(x)}(x).$$
As $\theta_B$ is unknown, the data are utilized to construct estimates of $\theta_B$. Specifically, let $\mathcal{D}_n=\{(X_1,Y_1),\ldots,(X_n,Y_n)\}$ represent a random sample from the distribution of $(X,Y)$, where each $(X_i,Y_i)$ is fully observable. Let $\widehat{\theta}_n$ be any sample-based classifier; in other words, $\widehat{\theta}_n(X)$ is the predicted value of $Y$ based on $\mathcal{D}_n$ and $X$. Let
$$L_n(\widehat{\theta}_n)=\mathbb{P}\big(\widehat{\theta}_n(X)\neq Y\mid\mathcal{D}_n\big)$$
be the conditional probability of error of the sample-based classifier $\widehat{\theta}_n$. Then $\widehat{\theta}_n$ is said to be consistent if $L_n(\widehat{\theta}_n)\to L(\theta_B)=\mathbb{P}(\theta_B(X)\neq Y)$ as $n\to\infty$. For $k=1,\ldots,n$, let $\widehat{P}_k(x)$ be any sample-based estimator of $P_k(x)=\mathbb{P}(Y=k\mid X=x)$, and define the classification rule $\widehat{\theta}_n$ by
$$\widehat{\theta}_n(x)=\arg\max_{1\leq k\leq n}\widehat{P}_k(x).$$
In other words, $\widehat{\theta}_n$ satisfies
$$\max_{1\leq k\leq n}\widehat{P}_k(x)=\widehat{P}_{\widehat{\theta}_n(x)}(x).$$
To show that $L_n(\widehat{\theta}_n)-L_n(\theta_B)\to0$, it is sufficient to show that $\widehat{P}_k(x)-P_k(x)\to0$. Setting $\delta_i=\widehat{P}_k(x)$, we have
$$\mu_n(C,x)=\frac{\displaystyle\sum_{i=1}^{n}\widehat{P}_k(x)\,\mathbf{1}_{\{Y_i\in C\}}\,K\!\left(\frac{d_{\mathcal{E}}(x,X_i)}{h_n}\right)}{\displaystyle\sum_{i=1}^{n}\widehat{P}_k(x)\,K\!\left(\frac{d_{\mathcal{E}}(x,X_i)}{h_n}\right)}.$$
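A minimal sketch of the resulting plug-in rule (our illustration with hypothetical helper names; labels are coded $1,\ldots,K$ and $\widehat{P}_k(x)$ is any sample-based estimate, e.g., built from the hypothetical `mu_n` helper with $C=\{k\}$):

```python
import numpy as np

def plug_in_classifier(P_hat, x, n_labels):
    # theta_hat(x) = argmax_k P_hat(k, x); ties resolved by the smallest label.
    scores = [P_hat(k, x) for k in range(1, n_labels + 1)]
    return int(np.argmax(scores)) + 1  # convert back to 1-based labels

# Example: estimate P_k(x) with the kernel estimator and C = {k}:
# P_hat = lambda k, x: mu_n(lambda y: 1.0 if y == k else 0.0,
#                           x, X, Y, delta, h, d, K)
```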
Theorem 4. 
Under the conditions of Theorem 3, we have the convergence
$$L_n(\widehat{\theta}_n)-L_n(\theta_B)\longrightarrow0.$$

5. Concluding Remarks

In this investigation, we have examined the asymptotic properties of the conditional set-indexed empirical process involving ergodic functional data that are missing at random (MAR). Our findings are obtained under assumptions pertaining to the richness of the index class C of sets in terms of metric entropy with bracketing. Our contribution is two-fold: first, we have developed a functional methodology for addressing MAR samples in non-parametric problems, and second, we have extended our non-parametric conditional methodology by incorporating the ergodicity concepts introduced in Ref. [44]. Several challenging open questions remain in this context, including potential extensions to other types of non-parametric predictors such as functional local linear predictors, functional kNN predictors, and others. Furthermore, exploring extensions to problems beyond prediction, such as the estimation of variance error, is an interesting avenue for future research. Another direction for future exploration is the consideration of reducing the predictor’s dimensionality by employing a Single Functional Index Model (SFIM) to estimate the regression, as discussed in Refs. [89,90]. SFIM has shown its effectiveness in improving the consistency of the regression operator estimator.

Author Contributions

Conceptualization, S.B.; methodology, S.B.; validation, S.B., Y.S. and F.M.; formal analysis, S.B. and Y.S.; investigation, S.B. and Y.S.; original draft preparation, S.B. and Y.S.; writing—review and editing, S.B. and Y.S.; supervision, S.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data is contained within the article.

Acknowledgments

The authors would like to thank the Editor-in-Chief, an Associate-Editor, and the three referees for their extremely helpful remarks, which resulted in a substantial improvement of the original form of the work and a presentation that was more sharply focused.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

The proofs of our results are presented in this section. The notation introduced earlier is also utilized in the subsequent sections.
Lemma A1. 
Assume that conditions (H1)(i), (ii), (iv) and (H5) hold true. For any real numbers $1\leq j\leq2+\delta$ and $1\leq k\leq2+\delta$ with $\delta>0$, as $n\to\infty$, we have:
(i) 
$$\frac{1}{\phi(h)}\,\mathbb{E}\big(\Delta_i^j(x)\mid\mathcal{F}_{i-1}\big)=M_j\,f_{i,1}(x)+O_{a.s.}\!\left(\frac{g_{i,x}(h)}{\phi(h)}\right);$$
(ii) 
$$\frac{1}{\phi(h)}\,\mathbb{E}\big(\Delta_1^j(x)\big)=M_j\,f_1(x)+o(1);$$
(iii) 
$$\frac{1}{\phi^k(h)}\big(\mathbb{E}(\Delta_1(x))\big)^k=M_1^k\,f_1^k(x)+o(1).$$
Proof of Lemma A1. 
For the proof of Lemma A1, the reader is directed to Ref. [66]. □
Lemma A2. 
Assume that the hypotheses (H1) and (H5), along with condition (H7), are satisfied. Then, as $n\to\infty$, for every fixed neighborhood $N_{\mathcal{E}}$ of $x$ in the functional space $\mathcal{E}$, we have:
$$\lim_{n\to\infty}\widehat{f_n}(x)=\lim_{n\to\infty}\mathbb{E}\big(\widehat{f_n}(x)\big)=P(x),\quad\text{in probability}.$$
Proof of Lemma A2. 
We shall prove that
$$\widehat{f_n}(x)\overset{\mathbb{P}}{\longrightarrow}P(x).$$
We employ the same proof as presented in Ref. [63]. Observe that
$$\widehat{f_n}(x)=R_{1,n}(x)+R_{2,n}(x),$$
where
$$R_{1,n}(x)=\frac{1}{n\,\mathbb{E}(\Delta_1(x))}\sum_{i=1}^{n}\big(\delta_i\Delta_i(x)-\mathbb{E}(\delta_i\Delta_i(x)\mid\mathcal{F}_{i-1})\big),\qquad R_{2,n}(x)=\frac{1}{n\,\mathbb{E}(\Delta_1(x))}\sum_{i=1}^{n}\mathbb{E}\big(\delta_i\Delta_i(x)\mid\mathcal{F}_{i-1}\big).$$
First, we need to establish that, under assumptions (H1)(i–iii) and (H3)(i), and since $n\phi(h)\to\infty$, we have
$$R_{2,n}(x)\overset{\mathbb{P}}{\longrightarrow}P(x),$$
as $n\to\infty$. Using the properties of conditional expectation and the missing at random (MAR) mechanism, and combining assumptions (H1)(ii,iii) and (H3)(i) with the continuity property of $P(x)$ along with Lemma A1, we derive:
$$\begin{aligned}R_{2,n}(x)&=\frac{1}{n\,\mathbb{E}(\Delta_1(x))}\sum_{i=1}^{n}\mathbb{E}\Big(\mathbb{E}\big[\delta_i\Delta_i(x)\mid\mathcal{G}_{i-1}\big]\mid\mathcal{F}_{i-1}\Big)\\&=\frac{1}{n\,\mathbb{E}(\Delta_1(x))}\sum_{i=1}^{n}\mathbb{E}\big[\big(P(x)+o(1)\big)\Delta_i(x)\mid\mathcal{F}_{i-1}\big]\\&=\big(P(x)+o(1)\big)\frac{1}{n\,\mathbb{E}(\Delta_1(x))}\sum_{i=1}^{n}\big(\phi(h)M_1f_{i,1}(x)+O(g_{i,x}(h))\big)\\&=\big(P(x)+o(1)\big)\frac{\phi(h)}{\mathbb{E}(\Delta_1(x))}\left(\frac{1}{n}\sum_{i=1}^{n}M_1f_{i,1}(x)+\frac{1}{n}\sum_{i=1}^{n}O_{a.s.}\Big(\frac{g_{i,x}(h)}{\phi(h)}\Big)\right)\\&=\big(P(x)+o(1)\big)\frac{1}{M_1f_1(x)+o(1)}\Big(M_1\big(f_1(x)+o(1)\big)+o_{a.s.}(1)\Big)\longrightarrow P(x).\end{aligned}$$
Second, we will prove that, as $n\to\infty$,
$$R_{1,n}(x)\overset{\mathbb{P}}{\longrightarrow}0.$$
On the one hand, we define $\eta_{n,i}=\delta_i\Delta_i(x)-\mathbb{E}\big(\delta_i\Delta_i(x)\mid\mathcal{F}_{i-1}\big)$ for $i=1,\ldots,n$. Thus, $(\eta_{n,i})_{1\leq i\leq n}$ forms a triangular array of martingale differences with respect to the $\sigma$-field $\mathcal{F}_{i-1}$, and
$$R_{1,n}(x)=\frac{1}{n\,\mathbb{E}(\Delta_1(x))}\sum_{i=1}^{n}\eta_{n,i}(x).$$
By combining Burkholder's inequality [91] and Jensen's inequality, we establish that for any $\epsilon>0$, there exists a constant $C_0$ such that:
$$\mathbb{P}\big(|R_{1,n}(x)|>\epsilon\big)=\mathbb{P}\left(\Big|\sum_{i=1}^{n}\eta_{n,i}(x)\Big|>\epsilon\,n\,\mathbb{E}(\Delta_1(x))\right)\leq\frac{C_0\,\mathbb{E}\big(\eta_{n,1}^2(x)\big)}{\epsilon^2\,n\,\big(\mathbb{E}(\Delta_1(x))\big)^2}\leq\frac{C_0\,\mathbb{E}\big(\delta_1\Delta_1^2(x)\big)}{\epsilon^2\,n\,\big(\mathbb{E}(\Delta_1(x))\big)^2}\longrightarrow0,$$
as $n\to\infty$, where we used the results of Lemma A1. Since $n\phi(h)\to\infty$ as $n\to\infty$, we conclude that
$$R_{1,n}(x)=o_{\mathbb{P}}(1).$$
Thus, the proof is complete. □
We will utilize arguments akin to those employed in the work of Ref. [63] to establish the asymptotic normality of the process Q n ( x ) defined as:
$$Q_n(x)=\big(\widehat{\varphi_n}(C,x)-\mathbb{E}\,\widehat{\varphi_n}(C,x)\big)-\mu_n(C,x)\big(\widehat{f_n}(x)-\mathbb{E}(\widehat{f_n}(x))\big).$$
Lemma A3. 
Assuming that the hypotheses (H1)–(H7) are fulfilled, we can state that, for any $x\in\mathcal{E}$ such that $f_1(x)>0$, we have:
$$\sqrt{n\phi(h_n)}\,Q_n(x)\overset{\mathcal{D}}{\longrightarrow}\mathcal{N}\big(0,\sigma_0^2(x)\big),\quad\text{as }n\to\infty,$$
where
$$\sigma_0^2(x)=\frac{M_2\,W(x)\,P(x)}{M_1^2\,f_1(x)},$$
whenever $f_1(x)>0$.
Proof of Lemma A3. 
Let us introduce some notation. We put
$$\eta_{ni}=\left(\frac{\phi(h)}{n}\right)^{1/2}\delta_i\big(\mathbf{1}_{\{Y_i\in C\}}-\mu(x)\big)\frac{\Delta_i(x)}{\mathbb{E}(\Delta_1(x))},$$
and define $\xi_{ni}=\eta_{ni}-\mathbb{E}\big(\eta_{ni}\mid\mathcal{F}_{i-1}\big)$. It is easily seen that
$$\big(n\phi(h)\big)^{1/2}\,Q_n(x)=\sum_{i=1}^{n}\xi_{ni}.\qquad(A4)$$
Here, for any fixed $x\in\mathcal{E}$, the terms in (A4) form a triangular array of stationary martingale differences with respect to the $\sigma$-field $\mathcal{F}_{i-1}$. This allows us to apply the central limit theorem for discrete-time arrays of real-valued martingales (refer to Ref. [92], page 23) to establish the asymptotic normality of $Q_n(x)$. This can be accomplished by verifying the following statements:
(a) 
$$\sum_{i=1}^{n}\mathbb{E}\big(\xi_{ni}^2\mid\mathcal{F}_{i-1}\big)\longrightarrow\sigma_0^2(x),$$
(b) 
$$n\,\mathbb{E}\big(\xi_{ni}^2\,\mathbf{1}_{\{|\xi_{ni}|>\epsilon\}}\big)=o(1)$$
holds for any ϵ > 0 (Lindeberg condition).
Proof of Part (a). 
Observe first that
$$\sum_{i=1}^{n}\mathbb{E}\big(\eta_{ni}^2\mid\mathcal{F}_{i-1}\big)-\sum_{i=1}^{n}\mathbb{E}\big(\xi_{ni}^2\mid\mathcal{F}_{i-1}\big)=\sum_{i=1}^{n}\big(\mathbb{E}(\eta_{ni}\mid\mathcal{F}_{i-1})\big)^2.$$
Making use of the condition (H2) and Lemma A1, one has
$$\begin{aligned}\big|\mathbb{E}\big(\eta_{ni}\mid\mathcal{F}_{i-1}\big)\big|&=\frac{1}{\mathbb{E}(\Delta_1(x))}\left(\frac{\phi(h)}{n}\right)^{1/2}\Big|\mathbb{E}\big(\big(\mu(X_i)-\mu(x)\big)\Delta_i(x)P(X_i)\mid\mathcal{F}_{i-1}\big)\Big|\\&\leq\frac{1}{\mathbb{E}(\Delta_1(x))}\left(\frac{\phi(h)}{n}\right)^{1/2}\sup_{u\in B(x,h)}\big|\mu(u)-\mu(x)\big|\,\mathbb{E}\big(\Delta_i(x)\mid\mathcal{F}_{i-1}\big)\,\big(o(1)+P(x)\big)\\&\leq O(h^{\beta})\left(\frac{\phi(h)}{n}\right)^{1/2}\left(\frac{f_{i,1}(x)}{f_1(x)}+O_{a.s.}\Big(\frac{g_{i,x}(h)}{\phi(h)}\Big)\right)\big(o(1)+P(x)\big).\end{aligned}$$
Thus, by (H1)(ii,iii), we have
$$\sum_{i=1}^{n}\big(\mathbb{E}(\eta_{ni}\mid\mathcal{F}_{i-1})\big)^2=O_{a.s.}\big(h^{2\beta}\phi(h)\big)\left(\frac{1}{f_1^2(x)}\,\frac{1}{n}\sum_{i=1}^{n}f_{i,1}^2(x)+o_{a.s.}(1)\right)\big(o(1)+P(x)\big)^2=O_{a.s.}\big(\phi(h)h^{2\beta}\big).$$
The statement (a) follows then if we show that
$$\lim_{n\to\infty}\sum_{i=1}^{n}\mathbb{E}\big(\eta_{ni}^2\mid\mathcal{F}_{i-1}\big)=\sigma_0^2(x).\qquad(A6)$$
To prove (A6), observe that
$$\sum_{i=1}^{n}\mathbb{E}\big(\eta_{ni}^2\mid\mathcal{F}_{i-1}\big)=\frac{\phi(h)}{n\big(\mathbb{E}(\Delta_1(x))\big)^2}\sum_{i=1}^{n}\mathbb{E}\Big(\big(\mathbf{1}_{\{Y_i\in C\}}-\mu(x)\big)^2\delta_i\Delta_i^2(x)\mid\mathcal{F}_{i-1}\Big)=J_{1n}+J_{2n},$$
where
$$J_{1n}=\frac{\phi(h)}{n\big(\mathbb{E}(\Delta_1(x))\big)^2}\sum_{i=1}^{n}\mathbb{E}\Big(\big(\mathbf{1}_{\{Y_i\in C\}}-\mu(X_i)\big)^2\delta_i\Delta_i^2(x)\mid\mathcal{F}_{i-1}\Big),$$
and
$$J_{2n}=\frac{\phi(h)}{n\big(\mathbb{E}(\Delta_1(x))\big)^2}\sum_{i=1}^{n}\mathbb{E}\Big(\big(\mu(X_i)-\mu(x)\big)^2\delta_i\Delta_i^2(x)\mid\mathcal{F}_{i-1}\Big).$$
Hence, leveraging the properties of conditional expectation, we derive:
$$\begin{aligned}J_{1n}&=\frac{\phi(h)}{n\big(\mathbb{E}(\Delta_1(x))\big)^2}\sum_{i=1}^{n}\mathbb{E}\Big(\mathbb{E}\big[\big(\mathbf{1}_{\{Y_i\in C\}}-\mu(X_i)\big)^2\delta_i\Delta_i^2(x)\mid\mathcal{G}_{i-1}\big]\mid\mathcal{F}_{i-1}\Big)\\&=\frac{\phi(h)}{n\big(\mathbb{E}(\Delta_1(x))\big)^2}\sum_{i=1}^{n}\mathbb{E}\Big(\Delta_i^2(x)\,\mathbb{E}\big[\big(\mathbf{1}_{\{Y_i\in C\}}-\mu(X_i)\big)^2\delta_i\mid X_i\big]\mid\mathcal{F}_{i-1}\Big)\\&=\frac{\phi(h)}{n\big(\mathbb{E}(\Delta_1(x))\big)^2}\sum_{i=1}^{n}\mathbb{E}\big(W(X_i)P(X_i)\Delta_i^2(x)\mid\mathcal{F}_{i-1}\big).\end{aligned}$$
Likewise, under assumptions (H3)(ii,iii) and (H1)(ii), and with the aid of Lemma A1 once more, it follows that, as $n\to\infty$:
$$\begin{aligned}J_{1n}&=\frac{\phi(h)}{n\big(\mathbb{E}(\Delta_1(x))\big)^2}\sum_{i=1}^{n}\mathbb{E}\big(\big(o(1)+W(x)\big)\big(o(1)+P(x)\big)\Delta_i^2(x)\mid\mathcal{F}_{i-1}\big)\\&=\frac{\phi^2(h)}{\big(\mathbb{E}(\Delta_1(x))\big)^2}\,\big(o(1)+W(x)\big)\big(o(1)+P(x)\big)\,\frac{1}{n}\sum_{i=1}^{n}\frac{1}{\phi(h)}\big(M_2\phi(h)f_{i,1}(x)+O_{a.s.}(g_{i,x}(h))\big)\\&\longrightarrow\frac{M_2\,W(x)\,P(x)}{M_1^2\,f_1(x)}=\sigma_0^2(x).\end{aligned}$$
Again, combining Lemma A1 with conditions (H1)(ii), and (H3)(ii,iii), it is evident that:
$$\lim_{n\to\infty}J_{1n}=\frac{M_2\,W(x)\,P(x)}{M_1^2\,f_1(x)},$$
almost surely, whenever $f_1(x)>0$. Consider now the term $J_{2n}$. Utilizing conditions (H1)(ii,iii) and (H2) alongside Lemma A1, we obtain, as $n\to\infty$:
$$|J_{2n}|=O(h^{2\beta})\,\frac{\phi(h)}{n\big(\mathbb{E}(\Delta_1(x))\big)^2}\sum_{i=1}^{n}\mathbb{E}\big(\delta_i\Delta_i^2(x)\mid\mathcal{F}_{i-1}\big)=O(h^{2\beta})\,\frac{M_2}{M_1^2}\left(\frac{1}{f_1(x)}+o_{a.s.}(1)\right)\longrightarrow0,\quad\text{almost surely},$$
whenever $f_1(x)>0$. This completes the proof of Part (a).
Proof of Part (b). 
The Lindeberg condition results from Corollary 9.5.2 in Ref. [93], which implies that
$$n\,\mathbb{E}\big(\xi_{ni}^2\,\mathbf{1}_{(|\xi_{ni}|>\varepsilon)}\big)\leq4n\,\mathbb{E}\big(\eta_{ni}^2\,\mathbf{1}_{(|\eta_{ni}|>\varepsilon/2)}\big).$$
Let $a>1$ and $b>1$ be such that $\frac{1}{a}+\frac{1}{b}=1$. Applying the Hölder and Markov inequalities, one can write, for all $\varepsilon>0$:
$$\mathbb{E}\big(\eta_{ni}^2\,\mathbf{1}_{(|\eta_{ni}|>\varepsilon/2)}\big)\leq\frac{\mathbb{E}\,|\eta_{ni}|^{2a}}{(\varepsilon/2)^{2a/b}},$$
where $C_0$ is a positive constant and $2a=2+\delta$. Utilizing $\delta$ from the conditional-moment condition (H3)(iii), we obtain:
$$\begin{aligned}4n\,\mathbb{E}\big(\eta_{ni}^2\,\mathbf{1}_{(|\eta_{ni}|>\varepsilon/2)}\big)&\leq C_0\,n\left(\frac{\phi(h)}{n}\right)^{(2+\delta)/2}\frac{\mathbb{E}\Big(\big[\big|\mathbf{1}_{\{Y_i\in C\}}-\mu(x)\big|\,\delta_i\Delta_i(x)\big]^{2+\delta}\Big)}{\big(\mathbb{E}(\Delta_1(x))\big)^{2+\delta}}\\&\leq C_0\,n\left(\frac{\phi(h)}{n}\right)^{(2+\delta)/2}\frac{\mathbb{E}\Big(\mathbb{E}\big[\big|\mathbf{1}_{\{Y_i\in C\}}-\mu(x)\big|^{2+\delta}\delta_i\mid X_i\big]\big(\Delta_i(x)\big)^{2+\delta}\Big)}{\big(\mathbb{E}(\Delta_1(x))\big)^{2+\delta}}\\&\leq C_0\,n\left(\frac{\phi(h)}{n}\right)^{(2+\delta)/2}\frac{\mathbb{E}\Big(\big(\Delta_i(x)\big)^{2+\delta}P(X_i)\,\overline{W}_{2+\delta}(X_i)\Big)}{\big(\mathbb{E}(\Delta_1(x))\big)^{2+\delta}}\\&\leq C_0\,n\left(\frac{\phi(h)}{n}\right)^{(2+\delta)/2}\frac{\mathbb{E}\big((\Delta_1(x))^{2+\delta}\big)}{\big(\mathbb{E}(\Delta_1(x))\big)^{2+\delta}}\,\big(P(x)+o(1)\big)\big(\overline{W}_{2+\delta}(x)+o(1)\big)\\&\leq C_0\big(n\phi(h)\big)^{-\delta/2}\,\frac{M_{2+\delta}f_1(x)+o(1)}{M_1^{2+\delta}f_1^{2+\delta}(x)+o(1)}\,\Big(P(x)\,\overline{W}_{2+\delta}(x)+o(1)\Big)=O\big((n\phi(h))^{-\delta/2}\big),\end{aligned}$$
where the last equality follows from Lemma A1. Since $n\phi(h)\to\infty$ as $n\to\infty$, this concludes the proof of part (b). Thus, the proof is complete. □
Proof of Theorem 1. 
By Lemma A3 it follows that
$$\sqrt{n\phi(h_n)}\,Q_n(x)=O_{\mathbb{P}}(1),$$
and hence $Q_n(x)=o_{\mathbb{P}}(1)$. Thus, combining this with Lemma A2 completes the proof. □
Proof of Theorem 2. 
The proof follows from Lemmas A2 and A3 combined with Slutsky's theorem. □
Proof of Theorem 3. 
Let us recall some facts. Let $f(\cdot)=\delta_i\,\mathbf{1}_{\{\cdot\in C_1\}}$ and $g(\cdot)=\delta_i\,\mathbf{1}_{\{\cdot\in C_2\}}$. Given random measures $\lambda$ on $(\mathcal{X},\mathcal{X})$, we define
$$d_\lambda^{(2)}(f,g):=\big[\lambda(f-g)^2\big]^{1/2}.$$
Say that a class of functions $\mathcal{F}$ has uniformly integrable entropy with respect to the $L_2$-norm if
$$\int_0^\infty\sup_{\gamma\in M(\mathcal{X},F)}\Big[\ln N\big(\epsilon\big[\gamma(F^2)\big]^{1/2},\mathcal{F},d_\gamma^{(2)}\big)\Big]^{1/2}\,d\epsilon<\infty,$$
where
$$d_\gamma^{(2)}(f,g):=\Big[\int_{\mathcal{X}}(f-g)^2\,d\gamma\Big]^{1/2}.$$
If the class $\mathcal{F}$ possesses uniformly integrable entropy, $\big(\mathcal{F},d_\gamma^{(2)}\big)$ is totally bounded for any measure $\gamma$. Let $\kappa$ be an envelope of $\mathcal{F}$, i.e., a measurable function mapping $\mathbb{R}$ to $[0,\infty)$ such that
$$\sup_{f\in\mathcal{F}}|f(t)|\leq\kappa(t),\quad\text{for all }t\in\mathbb{R}.$$
Let $M(\mathbb{R},\kappa)$ be the set of all measures $\gamma$ on $(\mathbb{R},\mathcal{F})$ with
$$\gamma(\kappa):=\int_{\mathbb{R}}\kappa^2\,d\gamma<\infty,$$
and
$$d_\gamma^{(r)}(f,g):=\Big[\int_{\mathbb{R}}(f-g)^r\,d\gamma\Big]^{1/r}.$$
Given random measures $\lambda$ on $(\mathbb{R},\mathcal{F})$, we define
$$d_\lambda^{(2)}(f,g):=\big[\lambda(f-g)^2\big]^{1/2}.$$
Let us introduce the uniform entropy integral
$$J\big(\delta,\mathcal{F},d_\gamma^{(2)}\big)=\int_0^\delta\sup_{\gamma\in M(\mathbb{R},\kappa)}\Big[\log N\big(\epsilon\big[\gamma(\kappa^2)\big]^{1/2},\mathcal{F},d_\gamma^{(2)}\big)\Big]^{1/2}\,d\epsilon.$$
We say that $\mathcal{F}$ has uniformly integrable entropy with respect to the $L_2$-norm if
$$J\big(\infty,\mathcal{F},d_\gamma^{(2)}\big)<\infty.$$
Let $\{B(\varphi):\varphi\in\mathcal{F}\}$ be a Gaussian process whose sample paths are contained in
$$U_b\big(\mathcal{F},d_\gamma^{(2)}\big):=\big\{f\in\ell^\infty(\mathcal{F}):\ f\ \text{is uniformly continuous with respect to}\ d_\gamma^{(2)}\big\}.$$
Let $\mathcal{L}(\bullet)$ denote the law of $\bullet$. Notice that obtaining a uniform CLT essentially means that we show the convergence
$$\mathcal{L}\big(\{A_n(\varphi):\varphi\in\mathcal{F}\}\big)\longrightarrow\mathcal{L}\big(\{B(\varphi):\varphi\in\mathcal{F}\}\big),$$
where the processes are indexed by $\mathcal{F}$ and considered as random elements of the space of bounded real-valued functions on $\mathcal{F}$ defined by
$$\ell^\infty(\mathcal{F}):=\Big\{f:\mathcal{F}\to\mathbb{R}:\ \|f\|_{\mathcal{F}}:=\sup_{\varphi\in\mathcal{F}}|f(\varphi)|<\infty\Big\},$$
which is a Banach space equipped with the sup norm. In the following, we employ the weak convergence in the sense of Ref. [94], which we recap in the following definition. Throughout the paper, $\mathbb{E}^*$ denotes the upper expectation with respect to the outer probability $\mathbb{P}^*$; for further details and discussion, refer to Ref. [1] (p. 6) and Ref. [95] (§6.2, p. 88). □
Definition A1. 
A sequence of $\ell^\infty(\mathcal{F})$-valued random functions $\{T_n:n\geq1\}$ converges in law to an $\ell^\infty(\mathcal{F})$-valued Borel measurable random function $T$ whose law concentrates on a separable subset of $\ell^\infty(\mathcal{F})$, denoted $T_n\rightsquigarrow T$, if
$$\mathbb{E}\,g(T)=\lim_{n\to\infty}\mathbb{E}^*g(T_n),\quad\forall g\in C\big(\ell^\infty(\mathcal{F}),\|\cdot\|_{\mathcal{F}}\big),$$
where $C\big(\ell^\infty(\mathcal{F}),\|\cdot\|_{\mathcal{F}}\big)$ is the set of all bounded $\|\cdot\|_{\mathcal{F}}$-continuous functions from $\big(\ell^\infty(\mathcal{F}),\|\cdot\|_{\mathcal{F}}\big)$ into $\mathbb{R}$.
We set
$$\eta_{n;i}(f,x):=\eta_{n;i}(C_1,x):=\left(\frac{\phi(h)}{n}\right)^{1/2}\delta_i\big(\mathbf{1}_{\{Y_i\in C_1\}}-\mu(C_1,x)\big)\frac{\Delta_i(x)}{\mathbb{E}(\Delta_i(x))},$$
with $\Delta_i(x)=K\big(h^{-1}d(x,X_i)\big)$, and define $\eta_{n;i}(g,x)$ in a similar way. Let
$$\xi_{n;i}(f,x):=\eta_{n;i}(f,x)-\mathbb{E}\big(\eta_{n;i}(f,x)\mid\mathcal{F}_{i-1}\big).$$
Let us define
$$\sigma_n^2(f,g)=\sum_{i=1}^{n}\big(\xi_{n;i}(f,x)-\xi_{n;i}(g,x)\big)^2.$$
To establish Theorem 3, we can rely on Theorem 2 of [96] (see also Refs. [10,13,15]). It is sufficient to demonstrate that, for every constant $L>0$, as $n$ tends to infinity:
$$\mathbb{P}^*\left(\sup_{f,g\in\mathcal{F}}\frac{\sigma_n^2(f,g)}{\big(d_{\mu_n}^{(2)}(f,g)\big)^2}>L\right)\longrightarrow0,$$
which is implied by the following,
$$\mathbb{E}^*\left(\sup_{d^{(2)}(f,g)\leq\delta_n}\frac{\sum_{i=1}^{n}\mathbb{E}\big(\big(\xi_{n;i}(f,x)-\xi_{n;i}(g,x)\big)^2\mid\mathcal{F}_{i-1}\big)}{\big(d^{(2)}(f,g)\big)^2}\right)\longrightarrow0,\quad\text{as }\delta_n\to0,$$
where we recall
$$d^{(2)}(f,g):=\Big[\int_{\mathbb{R}}(f-g)^2\,d\mathbb{P}\Big]^{1/2}.$$
In the rest of the proof, denote $\beta_n(x)=\dfrac{\sqrt{\phi(h)}}{\mathbb{E}[\Delta_1(x)]}$, and
$$\zeta(f,x)=\zeta(C_1,x):=\delta_i\big(\mathbf{1}_{\{Y_i\in C_1\}}-\mu(C_1,x)\big)\Delta_i(x).$$
Therefore, we have the following:
$$\begin{aligned}\frac{\sum_{i=1}^{n}\mathbb{E}\big(\big(\xi_{n;i}(f,x)-\xi_{n;i}(g,x)\big)^2\mid\mathcal{F}_{i-1}\big)}{d^{(2)}(f,g)}&=\frac{\beta_n^2(x)}{n\,d^{(2)}(f,g)}\sum_{i=1}^{n}\mathbb{E}\Big(\big[\zeta(f,x)-\zeta(g,x)-\mathbb{E}\big(\zeta(f,x)-\zeta(g,x)\mid\mathcal{F}_{i-1}\big)\big]^2\mid\mathcal{F}_{i-1}\Big)\\&\leq\frac{2\beta_n^2(x)}{n\,d^{(2)}(f,g)}\sum_{i=1}^{n}\Big[\mathbb{E}\big(\big(\zeta(f,x)-\zeta(g,x)\big)^2\mid\mathcal{F}_{i-1}\big)+\mathbb{E}\Big(\big(\mathbb{E}\big(\zeta(f,x)-\zeta(g,x)\mid\mathcal{F}_{i-1}\big)\big)^2\Big)\Big]\\&:=T_{1,n}+T_{2,n}.\end{aligned}$$
We first evaluate $T_{1,n}$. We have
$$T_{1,n}\leq\frac{2\beta_n^2(x)}{n\,d^{(2)}(f,g)}\sum_{i=1}^{n}\Big[2\,\mathbb{E}\big(\Delta_i^2(x)\big(\delta_if(Y_i)-\delta_ig(Y_i)\big)^2\mid\mathcal{F}_{i-1}\big)+2\,\mathbb{E}\big(\delta_i\Delta_i^2(x)\big(\mu(C_1,x)-\mu(C_2,x)\big)^2\mid\mathcal{F}_{i-1}\big)\Big]:=T_{1,n,1}+T_{1,n,2}.$$
Using the fact that $\mathbb{E}(\Delta_1^2(x))=O(\phi(h))$ (as indicated in Lemma A1), and taking into account that the class of functions $\mathcal{F}$ has a constant envelope and $K(\cdot)$ is both bounded and bounded away from zero, one can obtain the following upper bound, where $C$ is a positive constant:
$$T_{1,n,1}\leq\frac{C\,\phi(h)}{d^{(2)}(f,g)}\,\mathbb{E}\big(\Delta_1(x)\big|f(Y_1)-g(Y_1)\big|\big)\leq\frac{C\,\phi(h)}{d^{(2)}(f,g)}\big[\mathbb{E}\,\Delta_1^2(x)\big]^{1/2}\big[\mathbb{E}\big(f(Y_1)-g(Y_1)\big)^2\big]^{1/2}=C\,\phi(h)\,\overline{G}_2(\zeta)\big[\mathbb{E}\,\Delta_1^2(x)\big]^{1/2}=O(\phi(h))=o(1).$$
Making use of similar arguments, we infer that
$$T_{1,n,2}=\frac{C\,\phi(h)^{3/2}}{d^{(2)}(f,g)}\Big[\mathbb{E}\big(\delta\big(f(Y)-g(Y)\big)\mid X=x\big)\Big]^2=O\big(\phi(h)^{3/2}\big)=o(1).$$
We readily obtain that,
$$T_{1,n}=o(1).$$
By employing arguments akin to those utilized in the proof of the previous statement, we can establish that
$$T_{2,n}=o(1).$$
Using the Lindeberg conditions from the preceding proof and (A11), along with Theorem 1 of [96], we deduce that, for given $\varepsilon>0$ and $\gamma>0$, there exists $\eta>0$ such that:
$$\limsup_{n\to\infty}\mathbb{P}^*\left(\sup_{d(C_1,C_2)\leq\eta}\big|\nu_n(C_1,x)-\nu_n(C_2,x)\big|\geq5\gamma\right)\leq3\varepsilon.$$
Now, the proof of the theorem is completed by combining this last equation with Theorem 2. □

References

  1. van der Vaart, A.W.; Wellner, J.A. Weak Convergence and Empirical Processes; Springer Series in Statistics; With applications to statistics; Springer: New York, NY, USA, 1996; pp. xvi+508. [Google Scholar] [CrossRef]
  2. Shorack, G.R.; Wellner, J.A. Empirical Processes with Applications to Statistics; Classics in Applied Mathematics; Society for Industrial and Applied Mathematics (SIAM): Philadelphia, PA, USA, 2009; Volume 59, pp. xli+956. [Google Scholar] [CrossRef]
  3. Dudley, R.M. Uniform Central Limit Theorems; Cambridge Studies in Advanced Mathematics; Cambridge University Press: Cambridge, UK, 1999; Volume 63, pp. xiv+436. [Google Scholar] [CrossRef]
  4. Vapnik, V.N.; Červonenkis, A.J. The uniform convergence of frequencies of the appearance of events to their probabilities. Teor. Verojatnost. i Primenen. 1971, 16, 264–279. [Google Scholar]
  5. Dudley, R.M. Central limit theorems for empirical measures. Ann. Probab. 1978, 6, 899–929. [Google Scholar] [CrossRef]
  6. Giné, E.; Zinn, J. Some limit theorems for empirical processes. Ann. Probab. 1984, 12, 929–998. [Google Scholar] [CrossRef]
  7. Le Cam, L. A remark on empirical measures. In A Festschrift for Erich Lehmann in Honor of His Sixty-Fifth Birthday; Wadsworth Statist./Probab. Ser.; UC Berkeley Statistics: Wadsworth, OH, USA; Belmont, CA, USA, 1983; pp. 305–327. [Google Scholar]
  8. Pollard, D. A central limit theorem for empirical processes. J. Aust. Math. Soc. Ser. A 1982, 33, 235–248. [Google Scholar] [CrossRef]
  9. Bass, R.F.; Pyke, R. A strong law of large numbers for partial-sum processes indexed by sets. Ann. Probab. 1984, 12, 268–271. [Google Scholar] [CrossRef]
  10. Bouzebda, S.; Soukarieh, I. Renewal type bootstrap for U-process Markov chains. Markov Process. Relat. Fields 2022, 28, 673–735. [Google Scholar]
  11. Alvarez-Andrade, S.; Bouzebda, S.; Lachal, A. Strong approximations for the p-fold integrated empirical process with applications to statistical tests. Test 2018, 27, 826–849. [Google Scholar] [CrossRef]
  12. Bouzebda, S. Some applications of the strong approximation of the integrated empirical copula processes. Math. Methods Stat. 2016, 25, 281–303. [Google Scholar] [CrossRef]
  13. Soukarieh, I.; Bouzebda, S. Renewal type bootstrap for increasing degree U-process of a Markov chain. J. Multivar. Anal. 2023, 195, 105143. [Google Scholar] [CrossRef]
  14. Bouzebda, S.; Soukarieh, I. Limit theorems for a class of processes generalizing the U-empirical process. Stochastics 2024, 1–36. [Google Scholar]
  15. Soukarieh, I.; Bouzebda, S. Exchangeably Weighted Bootstraps of General Markov U-Process. Mathematics 2022, 10, 3745. [Google Scholar] [CrossRef]
  16. Yoshihara, K.I. Conditional empirical processes defined by ϕ-mixing sequences. Comput. Math. Appl. 1990, 19, 149–158. [Google Scholar] [CrossRef]
  17. Eberlein, E. Weak convergence of partial sums of absolutely regular sequences. Stat. Probab. Lett. 1984, 2, 291–293. [Google Scholar] [CrossRef]
  18. Nobel, A.; Dembo, A. A note on uniform laws of averages for dependent processes. Stat. Probab. Lett. 1993, 17, 169–172. [Google Scholar] [CrossRef]
  19. Yu, B. Rates of convergence for empirical processes of stationary mixing sequences. Ann. Probab. 1994, 22, 94–116. [Google Scholar] [CrossRef]
  20. Bouzebda, S.; Nemouchi, B. Central limit theorems for conditional empirical and conditional U-processes of stationary mixing sequences. Math. Methods Stat. 2019, 28, 169–207. [Google Scholar] [CrossRef]
  21. Andrews, D.W.K.; Pollard, D. An Introduction to Functional Central Limit Theorems for Dependent Stochastic Processes. Int. Stat. Rev. Rev. Int. Stat. 1994, 62, 119–132. [Google Scholar] [CrossRef]
  22. Doukhan, P.; Massart, P.; Rio, E. Invariance principles for absolutely regular empirical processes. Ann. Inst. H. Poincaré Probab. Stat. 1995, 31, 393–427. [Google Scholar]
  23. Polonik, W.; Yao, Q. Set-indexed conditional empirical and quantile processes based on dependent data. J. Multivar. Anal. 2002, 80, 234–255. [Google Scholar] [CrossRef]
  24. Bosq, D. Linear Processes in Function Spaces; Lecture Notes in Statistics; Theory and Applications; Springer: New York, NY, USA, 2000; Volume 149, pp. xiv+283. [Google Scholar] [CrossRef]
  25. Ramsay, J.O.; Silverman, B.W. Functional Data Analysis, 2nd ed.; Springer Series in Statistics; Springer: New York, NY, USA, 2005; pp. xx+426. [Google Scholar]
  26. Cuevas, A. A partial overview of the theory of statistics with functional data. J. Stat. Plan. Inference 2014, 147, 1–23. [Google Scholar] [CrossRef]
  27. Goia, A.; Vieu, P. An introduction to recent advances in high/infinite dimensional statistics [Editorial]. J. Multivar. Anal. 2016, 146, 1–6. [Google Scholar] [CrossRef]
  28. Aneiros, G.; Cao, R.; Fraiman, R.; Genest, C.; Vieu, P. Recent advances in functional data analysis and high-dimensional statistics. J. Multivar. Anal. 2019, 170, 3–9. [Google Scholar] [CrossRef]
  29. Ling, N.; Vieu, P. Nonparametric modelling for functional data: Selected survey and tracks for future. Statistics 2018, 52, 934–949. [Google Scholar] [CrossRef]
  30. Chowdhury, J.; Chaudhuri, P. Multi-sample comparison using spatial signs for infinite dimensional data. Electron. J. Stat. 2022, 16, 4636–4678. [Google Scholar] [CrossRef]
  31. Chowdhury, J.; Chaudhuri, P. Convergence rates for kernel regression in infinite-dimensional spaces. Ann. Inst. Stat. Math. 2020, 72, 471–509. [Google Scholar] [CrossRef]
  32. Ferraty, F.; Vieu, P. Nonparametric Functional Data Analysis; Springer Series in Statistics; Theory and Practice; Springer: New York, NY, USA, 2006; pp. xx+258. [Google Scholar]
  33. Horváth, L.; Kokoszka, P. Inference for Functional Data with Applications; Springer Series in Statistics; Springer: New York, NY, USA, 2012; pp. xiv+422. [Google Scholar] [CrossRef]
  34. Bosq, D.; Blanke, D. Inference and Prediction in Large Dimensions; Wiley Series in Probability and Statistics; John Wiley & Sons, Ltd.: Chichester, UK; Dunod, Scotland; Paris, France, 2007; pp. x+316. [Google Scholar] [CrossRef]
  35. Shi, J.Q.; Choi, T. Gaussian Process Regression Analysis for Functional Data; CRC Press: Boca Raton, FL, USA, 2011; pp. xx+196. [Google Scholar]
  36. Zhang, J.T. Analysis of Variance for Functional Data; Monographs on Statistics and Applied Probability; CRC Press: Boca Raton, FL, USA, 2014; Volume 127, pp. xxiv+386. [Google Scholar]
  37. Bongiorno, E.G.; Goia, A.; Salinelli, E.; Vieu, P. An overview of IWFOS’2014. In Contributions in Infinite-Dimensional Statistics and Related Topics; Esculapio: Bologna, Italy, 2014; pp. 1–5. [Google Scholar]
  38. Hsing, T.; Eubank, R. Theoretical Foundations of Functional Data Analysis, with an Introduction to Linear Operators; Wiley Series in Probability and Statistics; John Wiley & Sons, Ltd.: Chichester, UK, 2015; pp. xiv+334. [Google Scholar] [CrossRef]
  39. Aneiros, G.; Bongiorno, E.G.; Cao, R.; Vieu, P. (Eds.) Functional statistics and related fields. In Proceedings of the 4th International Workshop on Functional and Operational Statistics, IWFOS, Corunna, Spain, 15–17 June 2017; Springer: Cham, Switzerland, 2017; pp. xxiv+288. [Google Scholar]
  40. Berrahou, N.; Bouzebda, S.; Douge, L. Functional uniform-in-bandwidth moderate deviation principle for the local empirical processes involving functional data. Math. Methods Stat. 2024, 33, 1–43.
  41. Poryvaĭ, D.V. An invariance principle for conditional empirical processes formed by dependent random variables. Izv. Ross. Akad. Nauk Ser. Mat. 2005, 69, 129–148.
  42. Bouzebda, S.; Madani, F.; Souddi, Y. Some Asymptotic Properties of the Conditional Set-Indexed Empirical Process Based on Dependent Functional Data. Int. J. Math. Stat. 2022, 22, 77–105.
  43. Bouzebda, S.; Chaouch, M. Uniform limit theorems for a class of conditional Z-estimators when covariates are functions. J. Multivar. Anal. 2022, 189, 104872.
  44. Souddi, Y.; Madani, F.; Bouzebda, S. Some characteristics of the conditional set-indexed empirical process involving functional ergodic data. Bull. Inst. Math. Acad. Sin. (New Ser.) 2021, 16, 367–399.
  45. Bouzebda, S.; Soukarieh, I. Nonparametric conditional U-processes for locally stationary functional random fields under stochastic sampling design. Mathematics 2022, 10, 16.
  46. Soukarieh, I.; Bouzebda, S. Weak Convergence of the Conditional U-statistics for Locally Stationary Functional Time Series. Stat. Inference Stoch. Process. 2024, 16, 1–78.
  47. Bouzebda, S.; Nezzal, A. Uniform in number of neighbors consistency and weak convergence of kNN empirical conditional processes and kNN conditional U-processes involving functional mixing data. AIMS Math. 2024, 9, 4427–4550.
  48. Cheng, P.E. Nonparametric estimation of mean functionals with data missing at random. J. Am. Stat. Assoc. 1994, 89, 81–87.
  49. Cheng, P.E.; Chu, C.K. Kernel estimation of distribution functions and quantiles with missing data. Stat. Sin. 1996, 6, 63–78.
  50. Little, R.J.A.; Rubin, D.B. Statistical Analysis with Missing Data; Wiley Series in Probability and Mathematical Statistics: Applied Probability and Statistics; John Wiley & Sons, Inc.: New York, NY, USA, 1987; pp. xvi+278.
  51. Nittner, T. Missing at random (MAR) in nonparametric regression—A simulation experiment. Stat. Methods Appl. 2003, 12, 195–210.
  52. Tsiatis, A.A. Semiparametric Theory and Missing Data; Springer Series in Statistics; Springer: New York, NY, USA, 2006; pp. xvi+383.
  53. Wang, Q.; Sun, Z. Estimation in partially linear models with missing responses at random. J. Multivar. Anal. 2007, 98, 1470–1493.
  54. Wang, Q. Probability density estimation with data missing at random when covariables are present. J. Stat. Plan. Inference 2008, 138, 568–587.
  55. Liang, H.; Wang, S.; Carroll, R.J. Partially linear models with missing response variables and error-prone covariates. Biometrika 2007, 94, 185–198.
  56. Efromovich, S. Nonparametric regression with responses missing at random. J. Stat. Plan. Inference 2011, 141, 3744–3752.
  57. Efromovich, S. Nonparametric regression with predictors missing at random. J. Am. Stat. Assoc. 2011, 106, 306–319.
  58. Tang, N.; Zhao, P.; Zhu, H. Empirical likelihood for estimating equations with nonignorably missing data. Stat. Sin. 2014, 24, 723–747.
  59. Müller, U.U.; Schick, A. Efficiency transfer for regression models with responses missing at random. Bernoulli 2017, 23, 2693–2719.
  60. Müller, U.U.; Schick, A. Efficiency for heteroscedastic regression with responses missing at random. J. Stat. Plan. Inference 2018, 196, 132–143.
  61. Shen, Y.; Liang, H.Y. Quantile regression and its empirical likelihood with missing response at random. Stat. Pap. 2018, 59, 685–707.
  62. Ferraty, F.; Sued, M.; Vieu, P. Mean estimation with data missing at random for functional covariables. Statistics 2013, 47, 688–706.
  63. Ling, N.; Liang, L.; Vieu, P. Nonparametric regression estimation for functional stationary ergodic data with missing at random. J. Stat. Plan. Inference 2015, 162, 75–87.
  64. Ling, N.; Liu, Y.; Vieu, P. Conditional mode estimation for functional stationary ergodic data with responses missing at random. Statistics 2016, 50, 991–1013.
  65. Wang, L.; Cao, R.; Du, J.; Zhang, Z. A nonparametric inverse probability weighted estimation for functional data with missing response data at random. J. Korean Stat. Soc. 2019, 48, 537–546.
  66. Laib, N.; Louani, D. Nonparametric kernel regression estimation for functional stationary ergodic data: Asymptotic properties. J. Multivar. Anal. 2010, 101, 2266–2281.
  67. Didi, S.; Bouzebda, S. Wavelet Density and Regression Estimators for Continuous Time Functional Stationary and Ergodic Processes. Mathematics 2022, 10, 4356.
  68. Didi, S.; Al Harby, A.; Bouzebda, S. Wavelet Density and Regression Estimators for Functional Stationary and Ergodic Data: Discrete Time. Mathematics 2022, 10, 3433.
  69. Nadaraja, E.A. On a regression estimate. Teor. Verojatnost. i Primen. 1964, 9, 157–159.
  70. Watson, G.S. Smooth regression analysis. Sankhyā Ser. A 1964, 26, 359–372.
  71. Stute, W. Conditional empirical processes. Ann. Stat. 1986, 14, 638–647.
  72. Stute, W. On almost sure convergence of conditional empirical distribution functions. Ann. Probab. 1986, 14, 891–901.
  73. Horváth, L.; Yandell, B.S. Asymptotics of conditional empirical processes. J. Multivar. Anal. 1988, 26, 184–206.
  74. Ferraty, F.; Mas, A.; Vieu, P. Nonparametric regression on functional data: Inference and practical aspects. Aust. N. Z. J. Stat. 2007, 49, 267–286.
  75. Dudley, R.M. A course on empirical processes. In École d’été de Probabilités de Saint-Flour, XII—1982; Lecture Notes in Mathematics; Springer: Berlin, Germany, 1984; Volume 1097, pp. 1–142.
  76. Billingsley, P. Convergence of Probability Measures, 2nd ed.; Wiley Series in Probability and Statistics; A Wiley-Interscience Publication; John Wiley & Sons, Inc.: New York, NY, USA, 1999; pp. x+277.
  77. Huber, P.J. Robust Statistics; Wiley Series in Probability and Mathematical Statistics; John Wiley & Sons, Inc.: New York, NY, USA, 1981; pp. ix+308.
  78. Parthasarathy, K.R. Probability Measures on Metric Spaces; Reprint of the 1967 original; AMS Chelsea Publishing: Providence, RI, USA, 2005; pp. xii+276.
  79. Hofinger, A. The metrics of Prokhorov and Ky Fan for assessing uncertainty in inverse problems. Österreich. Akad. Wiss. Math.-Natur. Kl. Sitzungsber. II 2006, 215, 107–125.
  80. Fan, K. Entfernung zweier zufälligen Grössen und die Konvergenz nach Wahrscheinlichkeit. Math. Z. 1944, 49, 681–683.
  81. Bouzebda, S.; Nemouchi, B. Uniform consistency and uniform in bandwidth consistency for nonparametric regression estimates and conditional U-statistics involving functional data. J. Nonparametr. Stat. 2020, 32, 452–509.
  82. Hall, P. Asymptotic properties of integrated square error and cross-validation for kernel estimation of a regression function. Z. Wahrsch. Verw. Geb. 1984, 67, 175–196.
  83. Rachdi, M.; Vieu, P. Nonparametric regression for functional data: Automatic smoothing parameter selection. J. Stat. Plan. Inference 2007, 137, 2784–2801.
  84. Dony, J.; Mason, D.M. Uniform in bandwidth consistency of conditional U-statistics. Bernoulli 2008, 14, 1108–1133.
  85. Bouzebda, S. On the weak convergence and the uniform-in-bandwidth consistency of the general conditional U-processes based on the copula representation: Multivariate setting. Hacet. J. Math. Stat. 2023, 52, 1303–1348.
  86. Bouzebda, S.; Taachouche, N. On the variable bandwidth kernel estimation of conditional U-statistics at optimal rates in sup-norm. Phys. A 2023, 625, 129000.
  87. Bouzebda, S. General tests of conditional independence based on empirical processes indexed by functions. Jpn. J. Stat. Data Sci. 2023, 6, 115–177.
  88. Shang, H.L. Bayesian bandwidth estimation for a functional nonparametric regression model with mixed types of regressors and unknown error density. J. Nonparametr. Stat. 2014, 26, 599–615.
  89. Bouzebda, S.; Laksaci, A.; Mohammedi, M. The k-nearest neighbors method in single index regression model for functional quasi-associated time series data. Rev. Mat. Complut. 2023, 36, 361–391.
  90. Bouzebda, S.; Laksaci, A.; Mohammedi, M. Single index regression model for functional quasi-associated time series data. Revstat 2022, 20, 605–631.
  91. Hall, P.; Heyde, C.C. Martingale Limit Theory and Its Application; Probability and Mathematical Statistics; Academic Press, Inc. [Harcourt Brace Jovanovich, Publishers]: New York, NY, USA; London, UK, 1980; pp. xii+308.
  92. Györfi, L.; Morvai, G.; Yakowitz, S.J. Limits to consistent on-line forecasting for ergodic time series. IEEE Trans. Inf. Theory 1998, 44, 886–892.
  93. Chow, Y.S.; Teicher, H. Probability Theory: Independence, Interchangeability, Martingales, 3rd ed.; Springer Texts in Statistics; Springer: New York, NY, USA, 1997; pp. xxii+488.
  94. Hoffmann-Jørgensen, J. Stochastic Processes on Polish Spaces; Various Publications Series (Aarhus); Aarhus Universitet, Matematisk Institut: Aarhus, Denmark, 1991; Volume 39, pp. ii+278.
  95. Kosorok, M.R. Introduction to Empirical Processes and Semiparametric Inference; Springer Series in Statistics; Springer: New York, NY, USA, 2008; pp. xiv+483.
  96. Bae, J.; Jun, D.; Levental, S. The uniform CLT for martingale difference arrays under the uniformly integrable entropy. Bull. Korean Math. Soc. 2010, 47, 39–51.