Article

The Marcinkiewicz–Zygmund-Type Strong Law of Large Numbers with General Normalizing Sequences under Sublinear Expectation

1 School of Mathematics, Shandong University, Jinan 250100, China
2 Zhongtai Securities Institute for Financial Studies, Shandong University, Jinan 250100, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(23), 4734; https://doi.org/10.3390/math11234734
Submission received: 14 October 2023 / Revised: 14 November 2023 / Accepted: 21 November 2023 / Published: 22 November 2023
(This article belongs to the Special Issue Statistical Methods in Mathematical Finance and Economics)

Abstract: In this paper, we study the Marcinkiewicz–Zygmund-type strong law of large numbers with general normalizing sequences under sublinear expectation. Specifically, we establish complete convergence in the Marcinkiewicz–Zygmund-type strong law of large numbers for sequences of negatively dependent and identically distributed random variables under certain moment conditions. We also give results for sequences of independent and identically distributed random variables. The moment conditions in this paper are based on a class of slowly varying functions that satisfy some convergence properties. Moreover, some special examples and comparisons with existing results are also given.

1. Introduction

In recent years, with the development of science and society, more and more uncertain phenomena fail to satisfy the assumption that probability and expectation are additive, so linear expectation can no longer be used to build the corresponding models. Inspired by uncertainty problems in financial mathematics, statistics, and other fields, many scholars have begun to study nonlinear probabilities and nonlinear expectations, for example, the Choquet expectation and the g-expectation; see Chen and Epstein [1], Choquet [2], Schmeidler [3], Wakker [4], and Wasserman and Kadane [5]. Nonlinear expectation theory provides mathematical tools for the analysis of big data with high uncertainty, and it has a wide range of applications in risk measurement, financial mathematics, and financial technology; see Barrieu and Karoui [6], El Karoui et al. [7], Gianin [8], Peng [9], and Peng et al. [10].
Recently, Peng [11,12] presented a general theory of sublinear expectation, which differs from classical linear expectation in that a sublinear expectation is defined directly as a functional satisfying certain properties. Within the framework of sublinear expectation theory, many scholars have generalized the classical law of large numbers (LLN). For example, Chen, Liu, and Zong [13] weakened the independence assumption on the random variables in Peng [14] and obtained moment conditions for the weak LLN to hold; Chen, Wu, and Li [15] proved that the strong LLN holds for independent and identically distributed random variables under the condition that the $(1+\alpha)$-th moment is finite; Zhang [16] studied the strong LLN for sequences of independent and of negatively dependent random variables under the condition that the first moment is finite for the Choquet expectation; Hu [17] proved that the strong LLN remains true under a general moment condition, which is weaker than finiteness of the $(1+\alpha)$-th moment; Zhan and Wu [18] studied the strong LLN for weighted sums of extended negatively dependent random variables; Feng and Lan [19] studied the Marcinkiewicz–Zygmund-type strong LLN for arrays of row-wise independent random variables.
The Marcinkiewicz–Zygmund (M–Z)-type strong LLN is a very important class of strong LLNs. Let $\{X_n, n\geq 1\}$ be a sequence of independent and identically distributed random variables and $S_n=\sum_{i=1}^{n}X_i$; then $\{X_n, n\geq 1\}$ is said to satisfy the M–Z-type strong LLN, i.e.,
$$\frac{S_n-nE[X_1]}{n^{1/p}}\to 0,\quad a.s.\qquad (1)$$
holds if and only if
$$E\big[|X_1|^p\big]<\infty,$$
where $1<p<2$. Anh et al. [20] replaced the normalizing sequence $\{n^{1/p}, n\geq 1\}$ in (1) with the sequence $\{n^{1/\alpha}\tilde L(n^{1/\alpha}), n\geq A^{\alpha}\}$ and proved the M–Z-type strong LLN for sequences of negatively associated and identically distributed random variables, i.e.,
$$n^{-1/\alpha}\tilde L^{-1}(n^{1/\alpha})\max_{1\leq k\leq n}\Big|\sum_{i=1}^{k}X_i\Big|\to 0,\quad a.s.$$
holds if and only if
$$E(X)=0,\quad E\big[|X|^{\alpha}L^{\alpha}(|X|+A)\big]<\infty,$$
where $1\leq\alpha<2$, $L(x)$ is a slowly varying function defined on $[A,\infty)$ with some $A>0$, and $\tilde L(x)$ is the de Bruijn conjugate of $L(x)$. For more results on the M–Z-type strong LLN in the classical framework, see Bai and Cheng [21], Chen and Gan [22], Miao, Mu, and Xu [23], and Sung [24].
Inspired by Anh et al. [20], who worked in the classical framework, in this paper we generalize their results to the framework of sublinear expectation theory. It is worth noting that we consider slowly varying functions satisfying
$$\sum_{n\geq A^{\alpha}}\frac{\tilde L^{2\epsilon}(n^{1/\alpha})}{n}<\infty,\quad \epsilon>0;$$
under this condition we can prove the complete convergence of weighted sums and the M–Z-type strong LLN with general normalizing sequences. Note that in the existing literature, only some special normalizing sequences are considered. For example, Deng and Wang [25] studied complete convergence for extended independent random variables under sublinear expectation in the case $L(x)\equiv 1$, and Feng and Huang [26] studied strong convergence for weighted sums of extended negatively dependent random variables under sublinear expectation in the case $L(x)=\log^{1/\gamma}(x)$, $0<\gamma<2$. A comparison of our conditions with those in existing results shows that the results in this paper generalize the existing ones to some extent.
Furthermore, we also study complete convergence, a notion introduced by Hsu and Robbins [27], under sublinear expectation. Note that there have already been some results on complete convergence under sublinear expectation, such as Deng and Wang [25], Feng and Huang [28], Lin and Feng [29], and Zhong and Wu [30].
The paper is organized as follows. In Section 2, we recall the basic concepts of sublinear expectation and slowly varying functions as well as some lemmas that will be used in the proofs. In Section 3, we give the main results: complete convergence for weighted sums and the M–Z-type strong LLN with general normalizing sequences under sublinear expectation. In Section 4, the results for three specific slowly varying functions are given and compared with existing results.

2. Preliminaries

2.1. Sublinear Expectation

In this paper we use the framework of sublinear expectation introduced by Peng [14]. Given a measurable space $(\Omega,\mathcal F)$, let $\mathcal H$ be a linear space of real functions defined on $\Omega$ satisfying the following: if $X_1,X_2,\ldots,X_n\in\mathcal H$, then $\varphi(X_1,X_2,\ldots,X_n)\in\mathcal H$ for each $\varphi\in C_{l,Lip}(\mathbb R^n)$, where $C_{l,Lip}(\mathbb R^n)$ denotes the linear space of local Lipschitz continuous functions $\varphi$, i.e.,
$$|\varphi(x)-\varphi(y)|\leq C\big(1+|x|^m+|y|^m\big)|x-y|,\quad\forall x,y\in\mathbb R^n,$$
where the constant $C>0$ and the integer $m\in\mathbb N$ depend on $\varphi$. The space $\mathcal H$ serves as the space of random variables.
Let $\mathcal P$ be a nonempty set of probability measures on the measurable space $(\Omega,\mathcal F)$. Define the upper probability $\mathbb V(\cdot)$ and the lower probability $v(\cdot)$ as
$$\mathbb V(A):=\sup_{P\in\mathcal P}P(A),\quad v(A):=\inf_{P\in\mathcal P}P(A),\quad\forall A\in\mathcal F.$$
It is easy to see that $\mathbb V(\cdot)$ and $v(\cdot)$ are conjugate, i.e., $\mathbb V(A)+v(A^c)=1$, where $A^c$ is the complement of $A$. Meanwhile, $\mathbb V(\cdot)$ satisfies the following:
(i)
$\mathbb V(\emptyset)=0$, $\mathbb V(\Omega)=1$.
(ii)
Monotonicity: for $A,B\in\mathcal F$, if $A\subset B$, then $\mathbb V(A)\leq\mathbb V(B)$.
(iii)
Subadditivity: for $A,B,A\cup B\in\mathcal F$, $\mathbb V(A\cup B)\leq\mathbb V(A)+\mathbb V(B)$.
(iv)
Continuity from below: if $A_n,A\in\mathcal F$ and $A_n\uparrow A$, then $\mathbb V(A_n)\uparrow\mathbb V(A)$.
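As a concrete illustration, the upper and lower probabilities over a finite family $\mathcal P$ can be computed directly, and the conjugacy and subadditivity properties above can be checked numerically. The three-point space and the two measures below are our own toy example, not from the paper:

```python
import itertools

# Toy example (not from the paper): Omega = {0, 1, 2} with a family of two
# probability measures. The upper probability is V(A) = sup_P P(A) and the
# lower probability is v(A) = inf_P P(A), the sup/inf taken over the family.
family = [
    {0: 0.5, 1: 0.3, 2: 0.2},
    {0: 0.2, 1: 0.2, 2: 0.6},
]

def V(A):
    return max(sum(P[w] for w in A) for P in family)

def v(A):
    return min(sum(P[w] for w in A) for P in family)

omega = {0, 1, 2}
subsets = [set(s) for r in range(4) for s in itertools.combinations(sorted(omega), r)]

# (i) normalization, conjugacy V(A) + v(A^c) = 1, and (iii) subadditivity.
assert V(set()) == 0 and V(omega) == 1
assert all(abs(V(A) + v(omega - A) - 1) < 1e-12 for A in subsets)
assert all(V(A | B) <= V(A) + V(B) + 1e-12 for A in subsets for B in subsets)
# V is genuinely non-additive: V({0}) + V({1, 2}) exceeds 1.
assert V({0}) + v({0}) < 1 < V({0}) + V({1, 2})
```

The last assertion shows why $\mathbb V$ is only a capacity and not a probability measure: complementary sets can carry total upper mass greater than one.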
Remark 1. 
Let $\mathcal G\subset\mathcal F$; a set function $V:\mathcal G\to[0,1]$ is called a capacity if it satisfies (i) and (ii).
Define the upper expectation $\hat{\mathbb E}(\cdot)$ and the lower expectation $\hat{\mathcal E}(\cdot)$ with respect to $\mathcal P$ by
$$\hat{\mathbb E}[X]:=\sup_{P\in\mathcal P}E_P[X],\quad\hat{\mathcal E}[X]:=\inf_{P\in\mathcal P}E_P[X],$$
where $X$ is an $\mathcal F$-measurable real-valued random variable such that $E_P[X]$ exists for every $P\in\mathcal P$. The quadruple $(\Omega,\mathcal F,\mathcal P,\hat{\mathbb E})$ is called the upper expectation space. Obviously, $\hat{\mathcal E}[X]\leq\hat{\mathbb E}[X]$ and $\hat{\mathcal E}[X]=-\hat{\mathbb E}[-X]$ hold for every $X$.
Definition 1. 
A sublinear expectation $\mathbb E$ is a functional $\mathbb E:\mathcal H\to\mathbb R$ satisfying the following:
(i) 
Monotonicity: if $X\leq Y$, then $\mathbb E(X)\leq\mathbb E(Y)$.
(ii) 
Constant preserving: $\mathbb E(c)=c$, $\forall c\in\mathbb R$.
(iii) 
Subadditivity: $\mathbb E(X+Y)\leq\mathbb E(X)+\mathbb E(Y)$, $\forall X,Y\in\mathcal H$.
(iv) 
Positive homogeneity: $\mathbb E(\lambda X)=\lambda\mathbb E(X)$, $\forall\lambda\geq 0$.
The triplet $(\Omega,\mathcal H,\mathbb E)$ is called a sublinear expectation space.
Remark 2. 
By the definition of the sublinear expectation $\mathbb E$, it is easy to check that
$$\big|\mathbb E(X)-\mathbb E(Y)\big|\leq\mathbb E\big(|X-Y|\big),\quad\mathbb E(X+c)=\mathbb E(X)+c,\ \forall c\in\mathbb R,$$
and it can be verified that the upper expectation $\hat{\mathbb E}$ is a sublinear expectation. Note that all the results obtained in this paper are stated in the context of the upper expectation space.
Definition 2. 
For any capacity $V$, the Choquet expectation is defined by
$$C_V(X):=\int_0^{\infty}V(X\geq t)\,dt+\int_{-\infty}^{0}\big[V(X\geq t)-1\big]\,dt.$$
In particular, if the capacity $V$ satisfies
$$V(A\cup B)+V(A\cap B)\leq V(A)+V(B),\quad\forall A,B\in\mathcal F,$$
then the Choquet expectation induced by this capacity $V$ is a sublinear expectation. Replacing the capacity $V$ in the definition by the upper probability $\mathbb V$ and the lower probability $v$, respectively, we obtain a pair of Choquet expectations $(C_{\mathbb V},C_v)$.
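For intuition, the Choquet integral of a nonnegative random variable on a finite space can be evaluated with the layer-cake formula above. The two-measure family and the random variable below are hypothetical and only for illustration:

```python
# Layer-cake evaluation (finite Omega, nonnegative X) of the Choquet
# expectation C_V(X) = \int_0^infty V(X >= t) dt, where V is the upper
# probability of a hypothetical two-measure family (illustration only).
family = [
    {0: 0.5, 1: 0.3, 2: 0.2},
    {0: 0.2, 1: 0.2, 2: 0.6},
]

def V(A):
    return max(sum(P[w] for w in A) for P in family)

def choquet(X):
    """C_V(X) for a dict X: omega -> nonnegative value."""
    total, prev = 0.0, 0.0
    for x in sorted(set(X.values())):
        level = {w for w in X if X[w] >= x}   # the upper level set {X >= x}
        total += (x - prev) * V(level)        # area of the slab [prev, x)
        prev = x
    return total

X = {0: 0.0, 1: 1.0, 2: 3.0}
# C_V dominates every linear expectation E_P taken over the family.
linear = [sum(P[w] * X[w] for w in X) for P in family]
assert choquet(X) >= max(linear) - 1e-12
print(choquet(X), linear)
```

The domination of every $E_P$ by $C_{\mathbb V}$ reflects the fact that the upper probability sits above each individual measure in the family.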
The definitions of independent, negatively dependent, and identically distributed random vectors in sublinear expectation spaces are given below.
Definition 3 
([31], Definition 3.4). In a sublinear expectation space $(\Omega,\mathcal H,\hat{\mathbb E})$, a random vector $Y=(Y_1,\ldots,Y_n)$, $Y_i\in\mathcal H$, is said to be independent of another random vector $X=(X_1,\ldots,X_m)$, $X_i\in\mathcal H$, under $\hat{\mathbb E}$ if for each test function $\varphi\in C_{l,Lip}(\mathbb R^m\times\mathbb R^n)$ we have
$$\hat{\mathbb E}\big[\varphi(X,Y)\big]=\hat{\mathbb E}\Big[\hat{\mathbb E}\big[\varphi(x,Y)\big]\big|_{x=X}\Big].$$
A sequence of random variables $\{X_n, n\geq 1\}$ is said to be independent if for each $n\in\mathbb N^{+}$, $X_{n+1}$ is independent of $(X_1,X_2,\ldots,X_n)$.
Definition 4 
([16], Definition 1.5). In a sublinear expectation space $(\Omega,\mathcal H,\hat{\mathbb E})$, a random vector $Y=(Y_1,\ldots,Y_n)$, $Y_i\in\mathcal H$, is said to be negatively dependent on another random vector $X=(X_1,\ldots,X_m)$, $X_i\in\mathcal H$, under $\hat{\mathbb E}$ if for each pair of test functions $\varphi_1\in C_{l,Lip}(\mathbb R^m)$ and $\varphi_2\in C_{l,Lip}(\mathbb R^n)$ we have
$$\hat{\mathbb E}\big[\varphi_1(X)\varphi_2(Y)\big]\leq\hat{\mathbb E}\big[\varphi_1(X)\big]\hat{\mathbb E}\big[\varphi_2(Y)\big],$$
whenever $\varphi_1(X)\geq 0$, $\hat{\mathbb E}[\varphi_2(Y)]\geq 0$, $\hat{\mathbb E}\big[|\varphi_1(X)\varphi_2(Y)|\big]<\infty$, $\hat{\mathbb E}\big[|\varphi_1(X)|\big]<\infty$, $\hat{\mathbb E}\big[|\varphi_2(Y)|\big]<\infty$, and either $\varphi_1$ and $\varphi_2$ are both coordinatewise nondecreasing or $\varphi_1$ and $\varphi_2$ are both coordinatewise nonincreasing.
A sequence of random variables $\{X_n, n\geq 1\}$ is said to be negatively dependent if for each $n\geq 1$, $X_{n+1}$ is negatively dependent on $(X_1,X_2,\ldots,X_n)$.
Definition 5 
([32], Definition 2.5). Random variables $X$ and $Y$ are said to be identically distributed, denoted by $X\overset{d}{=}Y$, if for each Borel-measurable function $\varphi$ with $\varphi(X),\varphi(Y)\in\mathcal H$ we have
$$\hat{\mathbb E}\big[\varphi(X)\big]=\hat{\mathbb E}\big[\varphi(Y)\big].$$
A sequence of random variables $\{X_n, n\geq 1\}$ is said to be identically distributed if $X_n\overset{d}{=}X_1$ for each $n\geq 1$.

2.2. Slowly Varying Functions

First, we present the relevant definitions and properties of slowly varying functions.
Definition 6 
([33], Definitions 1.1 and 1.2). A function $L(\cdot)$ is said to be regularly varying at infinity if it is real-valued, positive, and measurable on $[A,\infty)$ with some $A>0$, and if for each $\lambda>0$,
$$\lim_{x\to\infty}\frac{L(\lambda x)}{L(x)}=\lambda^{\rho},$$
where $\rho\in\mathbb R$ ($\rho$ is called the index of regular variation). A regularly varying function with index of regular variation $\rho=0$ is called slowly varying.
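A quick numerical sanity check of Definition 6 (our own illustration): for $L(x)=\ln x$ the ratio $L(\lambda x)/L(x)$ tends to 1, while a genuine power $x^{\rho}$ gives the limit $\lambda^{\rho}$:

```python
import math

# Illustration of Definition 6: L(x) = ln x is slowly varying (rho = 0),
# while x^0.5 is regularly varying with index rho = 0.5.
lam = 10.0
cases = [(math.log, 1.0), (lambda x: x ** 0.5, lam ** 0.5)]
for L, limit in cases:
    ratios = [L(lam * 10.0 ** k) / L(10.0 ** k) for k in (4, 8, 16)]
    # successive ratios get (weakly) closer to the predicted limit
    assert abs(ratios[2] - limit) <= abs(ratios[0] - limit) + 1e-12
    print([round(r, 4) for r in ratios], "->", limit)
```

For $L=\ln$ the three ratios are $5/4$, $9/8$, and $17/16$, visibly shrinking toward 1; for $x^{0.5}$ the ratio equals $\sqrt{10}$ at every scale.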
Definition 7 
([34]). Let $L(\cdot)$ be a slowly varying function; then there exists a slowly varying function $\tilde L(\cdot)$, asymptotically uniquely determined, such that
$$\lim_{x\to\infty}L(x)\tilde L\big(xL(x)\big)=1,\quad\lim_{x\to\infty}\tilde L(x)L\big(x\tilde L(x)\big)=1.$$
The function $\tilde L$ is called the de Bruijn conjugate of $L$, and $(L,\tilde L)$ is called a (slowly varying) conjugate pair.
Lemma 1 
([34], Theorem 1). Let $L(\cdot)$ be a slowly varying function. If
$$\lim_{x\to\infty}\Big[\frac{L(\lambda_0 x)}{L(x)}-1\Big]\log L(x)=0\qquad (2)$$
for a fixed $\lambda_0>1$, then
$$\lim_{x\to\infty}\frac{L\big(xL^{\alpha}(x)\big)}{L(x)}=1\qquad (3)$$
holds for every real number $\alpha$.
Remark 3. 
From (3) and Definition 7, it follows that when $L(x)$ satisfies (2), $\tilde L(x)=1/L(x)$ is the (asymptotically unique) de Bruijn conjugate of the slowly varying function $L(x)$.
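Remark 3 can be checked numerically for $L(x)=\ln x$, which satisfies (2): taking $\tilde L(x)=1/\ln x$, both de Bruijn ratios of Definition 7 drift toward 1, albeit logarithmically slowly (our own numerical illustration):

```python
import math

# For L(x) = ln x and Ltilde(x) = 1/L(x), check the de Bruijn relations
# L(x) * Ltilde(x L(x)) -> 1 and Ltilde(x) * L(x Ltilde(x)) -> 1.
L = math.log

def Lt(x):
    return 1.0 / math.log(x)

def ratios(x):
    return L(x) * Lt(x * L(x)), Lt(x) * L(x * Lt(x))

r_small, r_large = ratios(1e6), ratios(1e24)
print(r_small, r_large)
# both ratios are closer to 1 at the larger argument
assert abs(r_large[0] - 1) < abs(r_small[0] - 1)
assert abs(r_large[1] - 1) < abs(r_small[1] - 1)
```

The error decays only like $\ln\ln x/\ln x$, which is why such statements are asymptotic rather than quantitative.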
Next we present some important properties about slowly varying functions.
Lemma 2 
([20], Lemma 2.1). Let $L(\cdot)$ be a slowly varying function and $\alpha,\beta>0$. Let $f(x)=x^{\alpha\beta}L^{\alpha}(x^{\beta})$ and $g(x)=x^{1/(\alpha\beta)}\tilde L^{1/\beta}(x^{1/\alpha})$. Then,
$$\lim_{x\to\infty}\frac{f\big(g(x)\big)}{x}=\lim_{x\to\infty}\frac{g\big(f(x)\big)}{x}=1.$$
Lemma 3 
([20], Lemma 2.2). For any slowly varying function $L(\cdot)$ defined on $[A,\infty)$ with some $A>0$, there exists a differentiable slowly varying function $L_1(\cdot)$ defined on $[B,\infty)$ with some $B\geq A$ such that
$$\lim_{x\to\infty}\frac{L(x)}{L_1(x)}=1\quad\text{and}\quad\lim_{x\to\infty}\frac{xL_1'(x)}{L_1(x)}=0.$$
Conversely, if $L(\cdot)$ is a positive differentiable function satisfying
$$\lim_{x\to\infty}\frac{xL'(x)}{L(x)}=0,\qquad (4)$$
then $L(\cdot)$ is a slowly varying function.
Lemma 3 shows that for any slowly varying function $L(\cdot)$ we can always find an asymptotically equivalent differentiable slowly varying function $L_1(\cdot)$. Therefore, without loss of generality, in what follows we may assume that the slowly varying function $L(x)$ is differentiable and satisfies Equation (4). Moreover, Anh et al. [20] proved that if $L(\cdot)$ is a slowly varying function defined on $[A,\infty)$ with some $A>0$, then there exists $B\geq A$ such that $L(\cdot)$ is bounded on every finite closed interval $[a,b]\subset[B,\infty)$. Thus, in addition to differentiability, we also assume that $L(x)$ ($x\geq A$, $A>0$) is bounded on every finite closed interval.
Lemma 4 
([20], Lemma 2.3). Let $p>0$ and let $L(\cdot)$ be a slowly varying function defined on $[A,\infty)$ with some $A>0$ satisfying (4); then the following statements hold.
(i) 
There exists $B\geq A$ such that $x^pL(x)$ is increasing on $[B,\infty)$, $x^{-p}L(x)$ is decreasing on $[B,\infty)$, and
$$\lim_{x\to\infty}x^pL(x)=\infty,\quad\lim_{x\to\infty}x^{-p}L(x)=0.$$
(ii) 
For all $\lambda>0$,
$$\lim_{x\to\infty}\frac{L(x)}{L(x+\lambda)}=1.$$
If $L(x)$ does not satisfy (4), we still have $\lim_{x\to\infty}x^pL(x)=\infty$ and $\lim_{x\to\infty}x^{-p}L(x)=0$ ([33], p. 18), but the monotonicity in Lemma 4 (i) no longer holds.
The following lemma describes the convergence of a special class of slowly varying function series, and the conclusion will be used in the proofs.
Lemma 5 
([20], Lemma 2.5). Let $p>1$, $q\in\mathbb R$, and let $L(\cdot)$ be a differentiable slowly varying function defined on $[A,\infty)$ with some $A>0$; then
$$\sum_{k=n}^{\infty}\frac{L^q(k)}{k^p}\sim\frac{L^q(n)}{(p-1)n^{p-1}},\quad n\to\infty.$$
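The asymptotic in Lemma 5 can be observed numerically (our own check, with $L(x)=\ln x$, $q=2$, $p=3$): the ratio of the tail sum to $L^q(n)/\big((p-1)n^{p-1}\big)$ approaches 1 from above, roughly like $1+1/\ln n$:

```python
import math

# Numeric illustration of Lemma 5 with L(x) = ln x, q = 2, p = 3:
# sum_{k >= n} ln^2(k)/k^3  ~  ln^2(n) / (2 n^2)  as n -> infinity.
def tail(n, N=10**6):
    # truncating at N is harmless here: the neglected tail is O(ln^2(N)/N^2)
    return sum(math.log(k) ** 2 / k ** 3 for k in range(n, N))

def predicted(n, p=3, q=2):
    return math.log(n) ** q / ((p - 1) * n ** (p - 1))

r1, r2 = tail(100) / predicted(100), tail(1000) / predicted(1000)
print(r1, r2)
# the ratio exceeds 1 but decreases toward 1 as n grows
assert r1 > r2 > 1.0
```

The slow $1/\ln n$ correction again illustrates why results involving slowly varying functions are stated asymptotically.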

3. The M–Z-Type Strong LLN

3.1. The Strong LLN for Negatively Dependent Random Variables

In this section, we study the M–Z-type strong LLN for negatively dependent and identically distributed sequences of random variables in the upper expectation space $(\Omega,\mathcal F,\mathcal P,\hat{\mathbb E})$. In order to establish the connection between complete convergence and the M–Z-type strong LLN, we first prove the following lemma under sublinear expectation. In all the proofs, $C$ denotes a positive constant that may vary from line to line.
Lemma 6. 
Let $\{X_n, n\geq 1\}$ be a sequence of random variables and $S_n=\sum_{i=1}^{n}X_i$. Let $\{b_n, n\geq 1\}$ be a sequence of positive constants such that $0<b_n\uparrow\infty$ and $b_{2n}/b_n=O(1)$. If
$$\sum_{n=1}^{\infty}\frac{1}{n}\mathbb V\Big(\max_{1\leq i\leq n}|S_i|>\varepsilon b_n\Big)<\infty,\quad\forall\varepsilon>0,$$
then
$$\frac{\max_{1\leq i\leq n}|S_i|}{b_n}\to 0,\quad a.s.\ \mathbb V.$$
Proof of Lemma 6. 
For $0<b_n\uparrow\infty$, we have
$$\infty>\sum_{n=1}^{\infty}\frac{1}{n}\mathbb V\Big(\max_{1\leq i\leq n}|S_i|>\varepsilon b_n\Big)=\sum_{k=0}^{\infty}\sum_{n=2^k}^{2^{k+1}-1}\frac{1}{n}\mathbb V\Big(\max_{1\leq i\leq n}|S_i|>\varepsilon b_n\Big)\geq\frac{1}{2}\sum_{k=0}^{\infty}\mathbb V\Big(\max_{1\leq i\leq 2^k}|S_i|>\varepsilon b_{2^{k+1}}\Big).$$
By the Borel–Cantelli lemma and $b_{2n}/b_n=O(1)$, we have
$$\frac{1}{b_{2^k}}\max_{1\leq i\leq 2^{k+1}}|S_i|\to 0,\quad a.s.\ \mathbb V,\qquad (5)$$
and for $2^k\leq n<2^{k+1}$,
$$\frac{\max_{1\leq i\leq n}|S_i|}{b_n}\leq\frac{\max_{1\leq i\leq 2^{k+1}}|S_i|}{b_{2^k}};$$
therefore, by (5), we have
$$\frac{\max_{1\leq i\leq n}|S_i|}{b_n}\to 0,\quad a.s.\ \mathbb V.\qquad\square$$
The next proposition relates the existence of a Choquet expectation to the convergence of a certain series, providing an equivalent characterization of the moment condition $C_{\mathbb V}\big[|X|^{\alpha}L^{\alpha}(|X|+A)\big]<\infty$ and generalizing the classical result, Proposition 2.6 of Anh et al. [20].
Proposition 1. 
Let $X$ be a random variable and $\alpha\geq 1$. Let $L(x)$ be a slowly varying function defined on $[A,\infty)$ with some $A>0$. Assume that $A^{\alpha}$ is an integer; otherwise, replace it by $[A^{\alpha}]+1$. Then,
$$C_{\mathbb V}\big[|X|^{\alpha}L^{\alpha}(|X|+A)\big]<\infty\iff\sum_{n=A^{\alpha}}^{\infty}\mathbb V(|X|\geq b_n)<\infty,$$
where $b_n=n^{1/\alpha}\tilde L(n^{1/\alpha})$, $n\geq A^{\alpha}$.
Proof of Proposition 1. 
Define $f(x)=x^{\alpha}L^{\alpha}(x)$ and $g(x)=x^{1/\alpha}\tilde L(x^{1/\alpha})$. By Lemma 4, there exists $B\geq A^{\alpha}$ such that $f(x)$ and $g(x)$ are increasing on $[B,\infty)$.
It is obvious that $C_{\mathbb V}\big[f(|X|+A)\big]<\infty$ implies $C_{\mathbb V}\big[|X|^{\alpha}L^{\alpha}(|X|+A)\big]<\infty$. Next, we prove the converse: if $C_{\mathbb V}\big[|X|^{\alpha}L^{\alpha}(|X|+A)\big]<\infty$, then $C_{\mathbb V}\big[f(|X|+A)\big]<\infty$.
If the random variables $X,Y$ and the constant $a$ are nonnegative, then by the definition of the Choquet expectation, the subadditivity of $\mathbb V$, and the inclusion $\{X+Y\geq t\}\subset\{X\geq t/2\}\cup\{Y\geq t/2\}$, we have
$$C_{\mathbb V}(X+Y)=\int_0^{\infty}\mathbb V(X+Y\geq t)\,dt\leq\int_0^{\infty}\mathbb V\Big(\Big\{X\geq\frac{t}{2}\Big\}\cup\Big\{Y\geq\frac{t}{2}\Big\}\Big)\,dt\leq\int_0^{\infty}\mathbb V\Big(X\geq\frac{t}{2}\Big)\,dt+\int_0^{\infty}\mathbb V\Big(Y\geq\frac{t}{2}\Big)\,dt=2C_{\mathbb V}(X)+2C_{\mathbb V}(Y),$$
$$C_{\mathbb V}(aX)=\int_0^{\infty}\mathbb V(aX\geq t)\,dt=a\int_0^{\infty}\mathbb V(X\geq x)\,dx=aC_{\mathbb V}(X).$$
Moreover, for any random variables $X,Y$ with $X\leq Y$, the monotonicity of $\mathbb V$ gives $C_{\mathbb V}(X)\leq C_{\mathbb V}(Y)$.
From the above properties of the Choquet expectation and the $C_r$ inequality, we have
$$C_{\mathbb V}\big[f(|X|+A)\big]=C_{\mathbb V}\big[(|X|+A)^{\alpha}L^{\alpha}(|X|+A)\big]\leq C_{\mathbb V}\big[2^{\alpha-1}|X|^{\alpha}L^{\alpha}(|X|+A)+2^{\alpha-1}A^{\alpha}L^{\alpha}(|X|+A)\big]\leq 2C_{\mathbb V}\big[2^{\alpha-1}|X|^{\alpha}L^{\alpha}(|X|+A)\big]+2C_{\mathbb V}\big[2^{\alpha-1}A^{\alpha}L^{\alpha}(|X|+A)\big]=2^{\alpha}C_{\mathbb V}\big[|X|^{\alpha}L^{\alpha}(|X|+A)\big]+2^{\alpha}A^{\alpha}C_{\mathbb V}\big[L^{\alpha}(|X|+A)\big]=I_1+CI_2,$$
where $I_1=2^{\alpha}C_{\mathbb V}\big[|X|^{\alpha}L^{\alpha}(|X|+A)\big]$ and $I_2=C_{\mathbb V}\big[L^{\alpha}(|X|+A)\big]$.
We already have $I_1<\infty$; we now focus on $I_2$:
$$I_2=C_{\mathbb V}\big[L^{\alpha}(|X|+A)I(|X|<A)+L^{\alpha}(|X|+A)I(|X|\geq A)\big]\leq C+2C_{\mathbb V}\big[|X|^{\alpha}L^{\alpha}(|X|+A)I(|X|\geq A)\big]<\infty;$$
therefore, we obtain $C_{\mathbb V}\big[f(|X|+A)\big]<\infty$. By the definition of the Choquet expectation, we have
$$C_{\mathbb V}(|X|)<\infty\iff\int_0^{\infty}\mathbb V(|X|\geq t)\,dt<\infty\iff\sum_{n=0}^{\infty}\mathbb V(|X|\geq n)<\infty;$$
then, since $g$ is asymptotically inverse to $f$ by Lemma 2, it follows that
$$C_{\mathbb V}\big[f(|X|+A)\big]<\infty\iff\sum_{n=1}^{\infty}\mathbb V\big(f(|X|+A)\geq n\big)<\infty\iff\sum_{n=B}^{\infty}\mathbb V\big(g\big(f(|X|+A)\big)\geq g(n)\big)<\infty\iff\sum_{n=n_0}^{\infty}\mathbb V(|X|+A\geq b_n)<\infty\ (n_0\geq B)\iff\sum_{n=n_1}^{\infty}\mathbb V(|X|\geq b_n)<\infty\ (n_1\geq n_0).\qquad\square$$
Next, we give the main result of this section.
Theorem 1. 
Let $1\leq\alpha<2$ and $\varepsilon>0$. For a slowly varying function $L(x)$ defined on $[A,\infty)$ with some $A>0$, assume that $L(x)$ is increasing when $\alpha=1$ and satisfies
$$\sum_{n\geq A^{\alpha}}\frac{\tilde L^{2\varepsilon}(n^{1/\alpha})}{n}<\infty,\qquad (6)$$
where $\tilde L(x)$ is the de Bruijn conjugate of $L(x)$. Let $\{X,X_n,n\geq 1\}$ be a sequence of negatively dependent and identically distributed random variables in the upper expectation space $(\Omega,\mathcal F,\mathcal P,\hat{\mathbb E})$ that satisfies
$$\hat{\mathbb E}[X]=\hat{\mathbb E}[-X]=0,\quad C_{\mathbb V}\big[|X|^{\alpha}L^{\alpha+\varepsilon}(|X|+A)\big]<\infty.\qquad (7)$$
Define $b_n=n^{1/\alpha}\tilde L(n^{1/\alpha})$, $n\geq A^{\alpha}$; then we have the following:
(i)
For any array of nonnegative constants $\{a_{ni}, n\geq 1, 1\leq i\leq n\}$ satisfying
$$\sum_{i=1}^{n}a_{ni}^2\leq Cn,\quad\forall n\geq 1,\qquad (8)$$
we have
$$\sum_{n\geq A^{\alpha}}n^{-1}\mathbb V\Big(\max_{1\leq k\leq n}\Big|\sum_{i=1}^{k}a_{ni}X_i\Big|>\varepsilon b_n\Big)<\infty,\quad\forall\varepsilon>0.\qquad (9)$$
Specifically,
$$\sum_{n\geq A^{\alpha}}n^{-1}\mathbb V\Big(\max_{1\leq k\leq n}\Big|\sum_{i=1}^{k}X_i\Big|>\varepsilon b_n\Big)<\infty,\quad\forall\varepsilon>0.\qquad (10)$$
(ii)
The M–Z-type strong LLN holds, i.e.,
$$\lim_{n\to\infty}\frac{\max_{1\leq k\leq n}\big|\sum_{i=1}^{k}X_i\big|}{b_n}=0,\quad a.s.\ \mathbb V.\qquad (11)$$
Proof of Theorem 1. 
For simplicity, we assume that $A^{\alpha}$ is an integer; otherwise we can take $[A^{\alpha}]+1$. For $n\geq A^{\alpha}$, let
$$X_{ni}=X_iI(|X_i|\leq b_n)+b_nI(X_i>b_n)-b_nI(X_i<-b_n),\quad 1\leq i\leq n,$$
$$S_{nk}=\sum_{i=1}^{k}\big(a_{ni}X_{ni}-\hat{\mathbb E}[a_{ni}X_{ni}]\big),\quad 1\leq k\leq n.$$
(i)
For any $\varepsilon>0$ and $n\geq A^{\alpha}$, we have
$$\mathbb V\Big(\max_{1\leq k\leq n}\Big|\sum_{i=1}^{k}a_{ni}X_i\Big|>\varepsilon b_n\Big)\leq\mathbb V\Big(\max_{1\leq k\leq n}|X_k|>b_n\Big)+\mathbb V\Big(\max_{1\leq k\leq n}|S_{nk}|>\varepsilon b_n-\sum_{i=1}^{n}\big|\hat{\mathbb E}[a_{ni}X_{ni}]\big|\Big).\qquad (12)$$
First, since $C_{\mathbb V}\big[|X|^{\alpha}L^{\alpha+\varepsilon}(|X|+A)\big]<\infty$, by the subadditivity of $\mathbb V$ and Proposition 1, we have
$$\sum_{n\geq A^{\alpha}}n^{-1}\mathbb V\Big(\max_{1\leq k\leq n}|X_k|>b_n\Big)\leq\sum_{n\geq A^{\alpha}}\mathbb V(|X|>b_n)<\infty.\qquad (13)$$
Next, we focus on the second term on the right-hand side of (12). Define
$$f_{b_n}(X)=XI(|X|\leq b_n)+b_nI(X>b_n)-b_nI(X<-b_n),$$
$$\hat f_{b_n}(X)=X-f_{b_n}(X).$$
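The truncation operator used here is elementary; the following direct transcription (ours, for illustration) checks the identities $f_{b}(x)+\hat f_{b}(x)=x$ and $|\hat f_{b}(x)|=(|x|-b)^{+}$ that the proof relies on:

```python
# Truncation at level b > 0: f_b caps x into [-b, b]; fhat_b is the excess.
def f_b(x, b):
    return max(-b, min(b, x))

def fhat_b(x, b):
    return x - f_b(x, b)

b = 3.0
for x in [-5.0, -1.0, 0.0, 2.5, 7.0]:
    assert f_b(x, b) + fhat_b(x, b) == x            # decomposition of x
    assert abs(fhat_b(x, b)) == max(abs(x) - b, 0)  # |fhat_b(x)| = (|x|-b)^+
```

The second identity is what lets the proof bound $\hat{\mathbb E}\big[|\hat f_{b_n}(X)|\big]$ by the Choquet tail $C_{\mathbb V}\big[(|X|-b_n)^{+}\big]$.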
By the Cauchy–Schwarz inequality and (8), we have
$$\Big(\sum_{i=1}^{n}a_{ni}\Big)^2\leq n\sum_{i=1}^{n}a_{ni}^2\leq Cn^2.$$
Thus, recalling $\hat{\mathbb E}[X]=0$, we have
$$\frac{\sum_{i=1}^{n}\big|\hat{\mathbb E}[a_{ni}X_{ni}]\big|}{b_n}=\frac{\sum_{i=1}^{n}\big|\hat{\mathbb E}\big[a_{ni}f_{b_n}(X_i)\big]\big|}{b_n}=\frac{\sum_{i=1}^{n}a_{ni}\big|\hat{\mathbb E}\big[f_{b_n}(X)\big]-\hat{\mathbb E}[X]\big|}{b_n}\leq\frac{\sum_{i=1}^{n}a_{ni}\hat{\mathbb E}\big[|\hat f_{b_n}(X)|\big]}{b_n}\leq\frac{Cn\,C_{\mathbb V}\big[(|X|-b_n)^{+}\big]}{b_n}=Cn^{1-1/\alpha}\tilde L^{-1}(n^{1/\alpha})\int_{n^{1/\alpha}\tilde L(n^{1/\alpha})}^{\infty}\mathbb V(|X|\geq t)\,dt.$$
By Lemmas 3 and 4, we can find $B\geq A$ such that $x^{1/\alpha}\tilde L(x^{1/\alpha})$ and $x^{\alpha-1}L^{\alpha}(x)$ are increasing on $[B,\infty)$. Without loss of generality, we may assume that $x^{1/\alpha}\tilde L(x^{1/\alpha})$ and $x^{\alpha-1}L^{\alpha}(x)$ are increasing on $[A,\infty)$.
For $|X|\geq t\geq n^{1/\alpha}\tilde L(n^{1/\alpha})$, we have
$$|X|^{\alpha-1}L^{\alpha}(|X|+A)\geq t^{\alpha-1}L^{\alpha}(t)\geq n^{(\alpha-1)/\alpha}\tilde L^{\alpha-1}(n^{1/\alpha})L^{\alpha}\big(n^{1/\alpha}\tilde L(n^{1/\alpha})\big);$$
therefore,
$$\frac{\sum_{i=1}^{n}\big|\hat{\mathbb E}[a_{ni}X_{ni}]\big|}{b_n}\leq Cn^{1-1/\alpha}\tilde L^{-1}(n^{1/\alpha})\int_{n^{1/\alpha}\tilde L(n^{1/\alpha})}^{\infty}\mathbb V(|X|\geq t)\,dt\leq\frac{C}{\tilde L^{\alpha}(n^{1/\alpha})L^{\alpha}\big(n^{1/\alpha}\tilde L(n^{1/\alpha})\big)}\int_{n\tilde L^{\alpha}(n^{1/\alpha})L^{\alpha}\big(n^{1/\alpha}\tilde L(n^{1/\alpha})\big)}^{\infty}\mathbb V\big(|X|^{\alpha}L^{\alpha}(|X|+A)\geq y\big)\,dy.$$
By Definition 7 and (7), we have
$$\lim_{n\to\infty}\frac{1}{\tilde L^{\alpha}(n^{1/\alpha})L^{\alpha}\big(n^{1/\alpha}\tilde L(n^{1/\alpha})\big)}\int_{n\tilde L^{\alpha}(n^{1/\alpha})L^{\alpha}\big(n^{1/\alpha}\tilde L(n^{1/\alpha})\big)}^{\infty}\mathbb V\big(|X|^{\alpha}L^{\alpha}(|X|+A)\geq y\big)\,dy=0.$$
Therefore,
$$\lim_{n\to\infty}\frac{\sum_{i=1}^{n}\big|\hat{\mathbb E}[a_{ni}X_{ni}]\big|}{b_n}=0.\qquad (14)$$
From (12)–(14), to obtain (9), it remains to show that
$$\sum_{n\geq A^{\alpha}}n^{-1}\mathbb V\Big(\max_{1\leq i\leq n}|S_{ni}|>\frac{\varepsilon b_n}{2}\Big)<\infty.\qquad (15)$$
By the Chebyshev inequality and the maximal inequality under sublinear expectation (see Proposition 2.1 in [15] and Theorem 2.1 in [16]), we have
$$\sum_{n\geq A^{\alpha}}\frac{1}{n}\mathbb V\Big(\max_{1\leq i\leq n}|S_{ni}|>\frac{\varepsilon b_n}{2}\Big)\leq\sum_{n\geq A^{\alpha}}\frac{4}{n}\cdot\frac{\hat{\mathbb E}\big[\max_{1\leq i\leq n}|S_{ni}|^2\big]}{\varepsilon^2b_n^2}\leq\sum_{n\geq A^{\alpha}}\frac{C\sum_{i=1}^{n}\hat{\mathbb E}\big[\big|a_{ni}X_{ni}-\hat{\mathbb E}[a_{ni}X_{ni}]\big|^2\big]}{n\varepsilon^2b_n^2}+\sum_{n\geq A^{\alpha}}\frac{C\Big(\sum_{i=1}^{n}\Big[\big(\hat{\mathbb E}\big[a_{ni}X_{ni}-\hat{\mathbb E}[a_{ni}X_{ni}]\big]\big)^{+}+\big(\hat{\mathbb E}\big[\hat{\mathbb E}[a_{ni}X_{ni}]-a_{ni}X_{ni}\big]\big)^{+}\Big]\Big)^2}{n\varepsilon^2b_n^2}=M_1+M_2,\qquad (16)$$
where
$$M_1:=\sum_{n\geq A^{\alpha}}\frac{C\sum_{i=1}^{n}\hat{\mathbb E}\big[\big|a_{ni}X_{ni}-\hat{\mathbb E}[a_{ni}X_{ni}]\big|^2\big]}{n\varepsilon^2b_n^2},$$
and
$$M_2:=\sum_{n\geq A^{\alpha}}\frac{C\Big(\sum_{i=1}^{n}\Big[\big(\hat{\mathbb E}\big[a_{ni}X_{ni}-\hat{\mathbb E}[a_{ni}X_{ni}]\big]\big)^{+}+\big(\hat{\mathbb E}\big[\hat{\mathbb E}[a_{ni}X_{ni}]-a_{ni}X_{ni}\big]\big)^{+}\Big]\Big)^2}{n\varepsilon^2b_n^2}.$$
For $M_2$, by $\hat{\mathbb E}\big[a_{ni}X_{ni}-\hat{\mathbb E}[a_{ni}X_{ni}]\big]=0$ and the Cauchy–Schwarz inequality, we have
$$\Big(\sum_{i=1}^{n}\Big[\big(\hat{\mathbb E}\big[a_{ni}X_{ni}-\hat{\mathbb E}[a_{ni}X_{ni}]\big]\big)^{+}+\big(\hat{\mathbb E}\big[\hat{\mathbb E}[a_{ni}X_{ni}]-a_{ni}X_{ni}\big]\big)^{+}\Big]\Big)^2=\Big(\sum_{i=1}^{n}\big(\hat{\mathbb E}[a_{ni}X_{ni}]-\hat{\mathcal E}[a_{ni}X_{ni}]\big)\Big)^2\leq 2n\sum_{i=1}^{n}\big(\hat{\mathbb E}^2[a_{ni}X_{ni}]+\hat{\mathcal E}^2[a_{ni}X_{ni}]\big)=2n\sum_{i=1}^{n}a_{ni}^2\big(\hat{\mathbb E}^2\big[f_{b_n}(X_i)\big]+\hat{\mathcal E}^2\big[f_{b_n}(X_i)\big]\big)\leq Cn^2\Big(\big(\hat{\mathbb E}\big[f_{b_n}(X)\big]-\hat{\mathbb E}[X]\big)^2+\big(\hat{\mathcal E}\big[f_{b_n}(X)\big]-\hat{\mathcal E}[X]\big)^2\Big)\leq Cn^2\hat{\mathbb E}^2\big[(|X|-b_n)^{+}\big];$$
then, from (6), we have
$$M_2\leq C\sum_{n\geq A^{\alpha}}\frac{n\,\hat{\mathbb E}^2\big[(|X|-b_n)^{+}\big]}{b_n^2}\leq C\sum_{n\geq A^{\alpha}}\frac{n\,C_{\mathbb V}^2\big[(|X|-b_n)^{+}\big]}{b_n^2}\leq C\sum_{n\geq A^{\alpha}}\frac{n}{b_n^2}C_{\mathbb V}^2\big[|X|I(|X|\geq b_n)\big]\leq C\sum_{n\geq A^{\alpha}}\frac{n}{b_n^2}\cdot b_n^{2-2\alpha}L^{-2\alpha-2\varepsilon}(b_n)\,C_{\mathbb V}^2\big[|X|^{\alpha}L^{\alpha+\varepsilon}(|X|+A)\big]\leq C\sum_{n\geq A^{\alpha}}\frac{1}{n}\cdot\frac{1}{L^{2\varepsilon}\big(n^{1/\alpha}\tilde L(n^{1/\alpha})\big)}\,C_{\mathbb V}^2\big[|X|^{\alpha}L^{\alpha+\varepsilon}(|X|+A)\big]\leq C\sum_{n\geq A^{\alpha}}\frac{\tilde L^{2\varepsilon}(n^{1/\alpha})}{n}\,C_{\mathbb V}^2\big[|X|^{\alpha}L^{\alpha+\varepsilon}(|X|+A)\big]<\infty.\qquad (17)$$
For $M_1$, we have
$$M_1\leq\sum_{n\geq A^{\alpha}}\frac{C\sum_{i=1}^{n}\hat{\mathbb E}\big[(a_{ni}X_{ni})^2\big]}{n\varepsilon^2b_n^2}=\sum_{n\geq A^{\alpha}}\frac{C\sum_{i=1}^{n}\hat{\mathbb E}\big[\big(a_{ni}f_{b_n}(X_i)\big)^2\big]}{n\varepsilon^2b_n^2}=\sum_{n\geq A^{\alpha}}\frac{C\sum_{i=1}^{n}a_{ni}^2\,\hat{\mathbb E}\big[f_{b_n}^2(X)\big]}{n\varepsilon^2b_n^2}\leq\sum_{n\geq A^{\alpha}}\frac{C}{\varepsilon^2b_n^2}\hat{\mathbb E}\big[X^2I(|X|\leq b_n)\big]+\sum_{n\geq A^{\alpha}}\frac{C}{\varepsilon^2}\mathbb V(|X|>b_n).$$
By Proposition 1 and (7), we have
$$\sum_{n\geq A^{\alpha}}\frac{C}{\varepsilon^2}\mathbb V(|X|>b_n)<\infty.\qquad (18)$$
From (16)–(18), to obtain (15), it remains to show that
$$\sum_{n\geq A^{\alpha}}\frac{C}{\varepsilon^2b_n^2}\hat{\mathbb E}\big[X^2I(|X|\leq b_n)\big]<\infty.\qquad (19)$$
Note that
$$\sum_{n\geq A^{\alpha}}\frac{1}{b_n^2}\hat{\mathbb E}\big[X^2I(|X|\leq b_n)\big]\leq C+\sum_{n\geq A^{\alpha}+1}\frac{1}{b_n^2}\hat{\mathbb E}\big[(|X|\wedge b_n)^2\big]\leq C+\sum_{n\geq A^{\alpha}+1}\frac{1}{b_n^2}\hat{\mathbb E}\Big[\sum_{i=A^{\alpha}}^{n}b_i^2I\big(b_{i-1}<|X|\leq b_i\big)+b_n^2I\big(|X|>b_n\big)\Big]\leq C+\sum_{n\geq A^{\alpha}+1}\frac{1}{b_n^2}\hat{\mathbb E}\Big[\sum_{i=A^{\alpha}}^{n-1}\big(b_{i+1}^2-b_i^2\big)I\big(|X|>b_i\big)+b_{A^{\alpha}}^2I\big(|X|>b_{A^{\alpha}-1}\big)\Big]\leq C+\sum_{n\geq A^{\alpha}+1}\frac{C}{b_n^2}+\sum_{n\geq A^{\alpha}+1}\frac{1}{b_n^2}\sum_{i=A^{\alpha}}^{n-1}\big(b_{i+1}^2-b_i^2\big)\mathbb V\big(|X|>b_i\big)=C+CN_1+N_2,$$
where
$$N_1:=\sum_{n\geq A^{\alpha}+1}\frac{1}{b_n^2},\quad N_2:=\sum_{i=A^{\alpha}}^{\infty}\big(b_{i+1}^2-b_i^2\big)\sum_{n\geq i+1}\frac{1}{b_n^2}\mathbb V\big(|X|>b_i\big).$$
Let $\hat L(x)=\tilde L(x^{1/\alpha})$, $x\geq A^{\alpha}$. Since $\tilde L(x)$ is a slowly varying function defined on $[A,\infty)$ with some $A>0$, by Definition 6, for any $\lambda>0$ we have
$$\frac{\hat L(\lambda x)}{\hat L(x)}=\frac{\tilde L\big(\lambda^{1/\alpha}x^{1/\alpha}\big)}{\tilde L(x^{1/\alpha})}\to 1,\quad x\to\infty;$$
therefore, $\hat L(\cdot)$ is a slowly varying function defined on $[A^{\alpha},\infty)$. By Lemma 5, we have
$$N_1=\sum_{n\geq A^{\alpha}+1}\frac{1}{n^{2/\alpha}\tilde L^2(n^{1/\alpha})}=\sum_{n\geq A^{\alpha}+1}\frac{\hat L^{-2}(n)}{n^{2/\alpha}}\leq\frac{C\hat L^{-2}(A^{\alpha}+1)}{(2/\alpha-1)(A^{\alpha}+1)^{2/\alpha-1}}<\infty.\qquad (20)$$
For $N_2$, by Lemma 5, we have
$$\big(b_{i+1}^2-b_i^2\big)\sum_{n\geq i+1}\frac{1}{b_n^2}\leq C\Big[(i+1)^{2/\alpha}\tilde L^2\big((i+1)^{1/\alpha}\big)-i^{2/\alpha}\tilde L^2(i^{1/\alpha})\Big]\cdot\frac{\tilde L^{-2}\big((i+1)^{1/\alpha}\big)}{(i+1)^{2/\alpha-1}}\leq C\,\frac{(i+1)^{2/\alpha}\tilde L^2\big((i+1)^{1/\alpha}\big)-i^{2/\alpha}\tilde L^2(i^{1/\alpha})}{i^{2/\alpha-1}\tilde L^2(i^{1/\alpha})}\leq C\,\frac{(i+1)^{2/\alpha}-i^{2/\alpha}}{i^{2/\alpha-1}}\leq Ci\Big[\Big(1+\frac{1}{i}\Big)^{2/\alpha}-1\Big]\leq C;$$
therefore, we have
$$N_2\leq C\sum_{i=A^{\alpha}}^{\infty}\mathbb V(|X|>b_i)<\infty.\qquad (21)$$
Combining (20) and (21), we obtain (19).
Letting $a_{ni}\equiv 1$ in (9), we obtain (10).
(ii)
Since
$$0<b_n=n^{1/\alpha}\tilde L(n^{1/\alpha})\uparrow\infty,\quad\frac{b_{2n}}{b_n}=\frac{(2n)^{1/\alpha}\tilde L\big((2n)^{1/\alpha}\big)}{n^{1/\alpha}\tilde L(n^{1/\alpha})}\to 2^{1/\alpha},\quad n\to\infty,$$
by Lemma 6 we have
$$\lim_{n\to\infty}\frac{\max_{1\leq k\leq n}\big|\sum_{i=1}^{k}X_i\big|}{b_n}=0,\quad a.s.\ \mathbb V.\qquad\square$$
The above theorem requires that the series $\sum_{n\geq A^{\alpha}}\tilde L^{2\varepsilon}(n^{1/\alpha})/n$ be finite, where $\tilde L(x)$ is the de Bruijn conjugate of the slowly varying function $L(x)$. By Definition 7, $\tilde L(x)$ is not unique; in fact, it suffices that at least one version of $\tilde L(x)$ satisfies the condition. Indeed, by Remark 3, when $L(x)$ satisfies (2), $\tilde L(x)=1/L(x)$ is the (asymptotically unique) de Bruijn conjugate of $L(x)$. Condition (6) can then be rewritten as $\sum_{n\geq A^{\alpha}}1/\big(nL^{2\varepsilon}(n^{1/\alpha})\big)<\infty$, from which we obtain the following theorem.
Theorem 2. 
Let $1\leq\alpha<2$ and $\varepsilon>0$. For a slowly varying function $L(x)$ defined on $[A,\infty)$ with some $A>0$, assume that $L(x)$ is increasing when $\alpha=1$ and satisfies
$$\lim_{x\to\infty}\Big[\frac{L(\lambda_0 x)}{L(x)}-1\Big]\log L(x)=0$$
for a fixed $\lambda_0>1$, together with
$$\sum_{n\geq A^{\alpha}}\frac{1}{nL^{2\varepsilon}(n^{1/\alpha})}<\infty,$$
where $\tilde L(x)$ is the de Bruijn conjugate of $L(x)$. Let $\{X,X_n,n\geq 1\}$ be a sequence of negatively dependent and identically distributed random variables in the upper expectation space $(\Omega,\mathcal F,\mathcal P,\hat{\mathbb E})$ that satisfies
$$\hat{\mathbb E}[X]=\hat{\mathbb E}[-X]=0,\quad C_{\mathbb V}\big[|X|^{\alpha}L^{\alpha+\varepsilon}(|X|+A)\big]<\infty.$$
Define $b_n=n^{1/\alpha}\tilde L(n^{1/\alpha})$, $n\geq A^{\alpha}$; then we have the following:
(i) 
For any array of nonnegative constants $\{a_{ni}, n\geq 1, 1\leq i\leq n\}$ satisfying
$$\sum_{i=1}^{n}a_{ni}^2\leq Cn,\quad\forall n\geq 1,$$
we have
$$\sum_{n\geq A^{\alpha}}n^{-1}\mathbb V\Big(\max_{1\leq k\leq n}\Big|\sum_{i=1}^{k}a_{ni}X_i\Big|>\varepsilon b_n\Big)<\infty,\quad\forall\varepsilon>0.$$
Specifically,
$$\sum_{n\geq A^{\alpha}}n^{-1}\mathbb V\Big(\max_{1\leq k\leq n}\Big|\sum_{i=1}^{k}X_i\Big|>\varepsilon b_n\Big)<\infty,\quad\forall\varepsilon>0.$$
(ii) 
The M–Z-type strong LLN holds, i.e.,
$$\lim_{n\to\infty}\frac{\max_{1\leq k\leq n}\big|\sum_{i=1}^{k}X_i\big|}{b_n}=0,\quad a.s.\ \mathbb V.$$

3.2. The Strong LLN for Independent Random Variables

In Theorem 1 we considered the strong LLN for a sequence of negatively dependent and identically distributed random variables and, in order to ensure that $\tilde X_i=a_{ni}X_{ni}-\hat{\mathbb E}[a_{ni}X_{ni}]$ remains negatively dependent, we assumed that the $a_{ni}$ are nonnegative constants. For a sequence of independent (Definition 3) and identically distributed random variables, we can extend the condition in Theorem 1 to a general array $\{a_{ni}\}$. The resulting theorem is as follows.
Theorem 3. 
Let $1\leq\alpha<2$ and $\varepsilon>0$. For a slowly varying function $L(x)$ defined on $[A,\infty)$ with some $A>0$, assume that $L(x)$ is increasing when $\alpha=1$ and satisfies
$$\sum_{n\geq A^{\alpha}}\frac{\tilde L^{2\varepsilon}(n^{1/\alpha})}{n}<\infty,$$
where $\tilde L(x)$ is the de Bruijn conjugate of $L(x)$. Let $\{X,X_n,n\geq 1\}$ be a sequence of independent and identically distributed random variables in the upper expectation space $(\Omega,\mathcal F,\mathcal P,\hat{\mathbb E})$ that satisfies
$$\hat{\mathbb E}[X]=\hat{\mathbb E}[-X]=0,\quad C_{\mathbb V}\big[|X|^{\alpha}L^{\alpha+\varepsilon}(|X|+A)\big]<\infty.$$
Define $b_n=n^{1/\alpha}\tilde L(n^{1/\alpha})$, $n\geq A^{\alpha}$; then we have the following:
(i) 
For any array of constants $\{a_{ni}, n\geq 1, 1\leq i\leq n\}$ satisfying
$$\sum_{i=1}^{n}a_{ni}^2\leq Cn,\quad\forall n\geq 1,$$
we have
$$\sum_{n\geq A^{\alpha}}n^{-1}\mathbb V\Big(\max_{1\leq k\leq n}\Big|\sum_{i=1}^{k}a_{ni}X_i\Big|>\varepsilon b_n\Big)<\infty,\quad\forall\varepsilon>0.$$
Specifically,
$$\sum_{n\geq A^{\alpha}}n^{-1}\mathbb V\Big(\max_{1\leq k\leq n}\Big|\sum_{i=1}^{k}X_i\Big|>\varepsilon b_n\Big)<\infty,\quad\forall\varepsilon>0.$$
(ii) 
The M–Z-type strong LLN holds, i.e.,
$$\lim_{n\to\infty}\frac{\max_{1\leq k\leq n}\big|\sum_{i=1}^{k}X_i\big|}{b_n}=0,\quad a.s.\ \mathbb V.$$
Proof of Theorem 3. 
The proof is similar to that of Theorem 1, with the necessary modifications caused by the weaker condition on $a_{ni}$; therefore, we only present the steps involving $a_{ni}$. The rest of the proof is the same as that of Theorem 1 and is omitted here.
For an arbitrary array of constants $\{a_{ni}, n\geq 1, 1\leq i\leq n\}$, define $I_1=\{1\leq i\leq n\,|\,a_{ni}\geq 0\}$ and $I_2=\{1\leq i\leq n\,|\,a_{ni}<0\}$. First, for $\sum_{i=1}^{n}\big|\hat{\mathbb E}[a_{ni}X_{ni}]\big|/b_n$, we have
$$\frac{\sum_{i=1}^{n}\big|\hat{\mathbb E}[a_{ni}X_{ni}]\big|}{b_n}=\frac{\sum_{i=1}^{n}\big|\hat{\mathbb E}\big[a_{ni}f_{b_n}(X)\big]\big|}{b_n}=\frac{\sum_{i\in I_1}a_{ni}\big|\hat{\mathbb E}\big[f_{b_n}(X)\big]-\hat{\mathbb E}[X]\big|}{b_n}+\frac{\sum_{i\in I_2}|a_{ni}|\big|\hat{\mathbb E}\big[-f_{b_n}(X)\big]-\hat{\mathbb E}[-X]\big|}{b_n}\leq\frac{\sum_{i\in I_1}a_{ni}\hat{\mathbb E}\big[|\hat f_{b_n}(X)|\big]}{b_n}+\frac{\sum_{i\in I_2}|a_{ni}|\hat{\mathbb E}\big[|\hat f_{b_n}(X)|\big]}{b_n}=\frac{\sum_{i=1}^{n}|a_{ni}|\hat{\mathbb E}\big[|\hat f_{b_n}(X)|\big]}{b_n}\leq\frac{Cn\,\hat{\mathbb E}\big[(|X|-b_n)^{+}\big]}{b_n}.$$
Next, for $M_2$, we have
$$\Big(\sum_{i=1}^{n}\Big[\big(\hat{\mathbb E}\big[a_{ni}X_{ni}-\hat{\mathbb E}[a_{ni}X_{ni}]\big]\big)^{+}+\big(\hat{\mathbb E}\big[\hat{\mathbb E}[a_{ni}X_{ni}]-a_{ni}X_{ni}\big]\big)^{+}\Big]\Big)^2\leq 2n\sum_{i=1}^{n}\big(\hat{\mathbb E}^2[a_{ni}X_{ni}]+\hat{\mathcal E}^2[a_{ni}X_{ni}]\big)=2n\sum_{i\in I_1}a_{ni}^2\Big(\big(\hat{\mathbb E}\big[f_{b_n}(X)\big]-\hat{\mathbb E}[X]\big)^2+\big(\hat{\mathcal E}\big[f_{b_n}(X)\big]-\hat{\mathcal E}[X]\big)^2\Big)+2n\sum_{i\in I_2}a_{ni}^2\Big(\big(\hat{\mathbb E}\big[-f_{b_n}(X)\big]-\hat{\mathbb E}[-X]\big)^2+\big(\hat{\mathcal E}\big[-f_{b_n}(X)\big]-\hat{\mathcal E}[-X]\big)^2\Big)\leq 4n\sum_{i\in I_1}a_{ni}^2\,\hat{\mathbb E}^2\big[|\hat f_{b_n}(X)|\big]+4n\sum_{i\in I_2}a_{ni}^2\,\hat{\mathbb E}^2\big[|\hat f_{b_n}(X)|\big]=4n\sum_{i=1}^{n}a_{ni}^2\,\hat{\mathbb E}^2\big[(|X|-b_n)^{+}\big].\qquad\square$$
Remark 4. 
All the results obtained in this paper hold in the upper expectation space $(\Omega,\mathcal F,\mathcal P,\hat{\mathbb E})$. If we consider a general sublinear expectation space $(\Omega,\mathcal H,\hat{\mathbb E})$, the corresponding results still hold as long as both the sublinear expectation $\hat{\mathbb E}$ and the capacity $\mathbb V$ are countably subadditive.

4. Further Discussions on the Moment Condition

In this section we consider several special slowly varying functions and compare the corresponding results with existing conclusions. Let $\ln x$ denote the natural logarithm (to base $e$) and $\log x$ the logarithm to base 2.

4.1. $L(x)=\ln x$

Let $L(x)=\ln x$; then we have the following result.
Theorem 4. 
Let $1\leq\alpha<2$, $\varepsilon>0$, and let $\{X,X_n,n\geq 1\}$ be a sequence of negatively dependent and identically distributed random variables in the upper expectation space $(\Omega,\mathcal F,\mathcal P,\hat{\mathbb E})$. Let $b_n=n^{1/\alpha}/\ln(n^{1/\alpha})$, $n\geq[e^{\alpha}]+1$, and suppose the random variable $X$ satisfies
$$\hat{\mathbb E}[X]=\hat{\mathbb E}[-X]=0,\quad C_{\mathbb V}\big[|X|^{\alpha}\ln^{\alpha+1/2+\varepsilon}(|X|+e)\big]<\infty;$$
then we have the following:
(i)
For any array of nonnegative constants $\{a_{ni}, n\geq 1, 1\leq i\leq n\}$ satisfying
$$\sum_{i=1}^{n}a_{ni}^2\leq Cn,\quad\forall n\geq 1,$$
we have
$$\sum_{n\geq[e^{\alpha}]+1}n^{-1}\mathbb V\Big(\max_{1\leq k\leq n}\Big|\sum_{i=1}^{k}a_{ni}X_i\Big|>\varepsilon b_n\Big)<\infty,\quad\forall\varepsilon>0.\qquad (22)$$
Specifically,
$$\sum_{n\geq[e^{\alpha}]+1}n^{-1}\mathbb V\Big(\max_{1\leq k\leq n}\Big|\sum_{i=1}^{k}X_i\Big|>\varepsilon b_n\Big)<\infty,\quad\forall\varepsilon>0.\qquad (23)$$
(ii)
The M–Z-type strong LLN holds, i.e.,
$$\lim_{n\to\infty}\frac{\max_{1\leq k\leq n}\big|\sum_{i=1}^{k}X_i\big|}{b_n}=0,\quad a.s.\ \mathbb V.\qquad (24)$$
Proof of Theorem 4. 
For $\varepsilon>0$ and $n\geq[e^{\alpha}]+1$, we have
$$\mathbb V\Big(\max_{1\leq k\leq n}\Big|\sum_{i=1}^{k}a_{ni}X_i\Big|>\varepsilon b_n\Big)\leq\mathbb V\Big(\max_{1\leq k\leq n}|X_k|>b_n\Big)+\mathbb V\Big(\max_{1\leq k\leq n}|S_{nk}|>\varepsilon b_n-\sum_{i=1}^{n}\big|\hat{\mathbb E}[a_{ni}X_{ni}]\big|\Big).$$
Similar to the proof of Theorem 1, we can show that
$$\sum_{n\geq[e^{\alpha}]+1}n^{-1}\mathbb V\Big(\max_{1\leq k\leq n}|X_k|>b_n\Big)<\infty,$$
$$\lim_{n\to\infty}\frac{\sum_{i=1}^{n}\big|\hat{\mathbb E}[a_{ni}X_{ni}]\big|}{b_n}=0,$$
and
$$\sum_{n\geq[e^{\alpha}]+1}\frac{C\sum_{i=1}^{n}\hat{\mathbb E}\big[\big|a_{ni}X_{ni}-\hat{\mathbb E}[a_{ni}X_{ni}]\big|^2\big]}{n\varepsilon^2b_n^2}<\infty.$$
To obtain (22), it remains to show that
$$\sum_{n\geq[e^{\alpha}]+1}\frac{C\Big(\sum_{i=1}^{n}\Big[\big(\hat{\mathbb E}\big[a_{ni}X_{ni}-\hat{\mathbb E}[a_{ni}X_{ni}]\big]\big)^{+}+\big(\hat{\mathbb E}\big[\hat{\mathbb E}[a_{ni}X_{ni}]-a_{ni}X_{ni}\big]\big)^{+}\Big]\Big)^2}{n\varepsilon^2b_n^2}<\infty,$$
which can be proved as follows:
$$\sum_{n\geq[e^{\alpha}]+1}\frac{C\Big(\sum_{i=1}^{n}\Big[\big(\hat{\mathbb E}\big[a_{ni}X_{ni}-\hat{\mathbb E}[a_{ni}X_{ni}]\big]\big)^{+}+\big(\hat{\mathbb E}\big[\hat{\mathbb E}[a_{ni}X_{ni}]-a_{ni}X_{ni}\big]\big)^{+}\Big]\Big)^2}{n\varepsilon^2b_n^2}\leq\sum_{n\geq[e^{\alpha}]+1}\frac{16n\sum_{i=1}^{n}a_{ni}^2}{n\varepsilon^2b_n^2}\hat{\mathbb E}^2\big[(|X|-b_n)^{+}\big]\leq\sum_{n\geq[e^{\alpha}]+1}\frac{Cn}{b_n^2}C_{\mathbb V}^2\big[(|X|-b_n)^{+}\big]\leq\sum_{n\geq[e^{\alpha}]+1}Cn^{1-2/\alpha}\ln^2(n^{1/\alpha})\cdot\frac{n^{(2-2\alpha)/\alpha}\ln^{2\alpha-2}(n^{1/\alpha})}{\ln^{2\alpha+1+2\varepsilon}(n^{1/\alpha})}\,C_{\mathbb V}^2\big[|X|^{\alpha}\ln^{\alpha+1/2+\varepsilon}(|X|+e)\big]=\sum_{n\geq[e^{\alpha}]+1}\frac{C}{n\ln^{1+2\varepsilon}(n)}\,C_{\mathbb V}^2\big[|X|^{\alpha}\ln^{\alpha+1/2+\varepsilon}(|X|+e)\big]<\infty.$$
Then, similar to the proof of Theorem 1, we can obtain (23) and (24). □
Remark 5. 
We note that the order of the moment condition $C_V\big[|X|^{\alpha}\ln^{\alpha+1/2+\varepsilon}(|X|+e)\big] < \infty$ in Theorem 4 is increased by $1/2+\varepsilon$ compared with the corresponding result in the linear expectation space. The $\varepsilon$ term is commonly required when generalizing the M–Z-type strong LLN to the sublinear expectation space; it plays the same role as the $\alpha$ term in the moment condition $\hat{\mathbb{E}}[|X_1|^{1+\alpha}] < \infty$ of [32]. The $1/2$ term is needed in this paper due to the specific method of proof. We conjecture that the $1/2$ term can be removed and leave this for future work.

4.2. $L(x) \equiv 1$

Let $L(x) \equiv 1$; then we have the following result.
Theorem 5. 
Let $1 \le \alpha < 2$, $\varepsilon > 0$, and let $\{X, X_n, n \ge 1\}$ be a sequence of negatively dependent and identically distributed random variables in the upper expectation space $(\Omega, \mathcal{F}, \mathcal{P}, \hat{\mathbb{E}})$. Suppose the random variable $X$ satisfies
$$\hat{\mathbb{E}}[X] = \hat{\mathbb{E}}[-X] = 0, \qquad C_V\big(|X|^{\alpha+\varepsilon}\big) < \infty;$$
then we have the following:
(i) 
For any array of nonnegative constants $\{a_{ni}, n \ge 1, 1 \le i \le n\}$ satisfying
$$\sum_{i=1}^{n} a_{ni}^{2} \le Cn, \qquad n \ge 1,$$
we have
$$\sum_{n \ge 1} n^{-1}\, V\bigg(\max_{1 \le k \le n}\sum_{i=1}^{k} a_{ni}X_i > \varepsilon n^{1/\alpha}\bigg) < \infty, \qquad \forall\, \varepsilon > 0. \tag{25}$$
Specifically,
$$\sum_{n \ge 1} n^{-1}\, V\bigg(\max_{1 \le k \le n}\sum_{i=1}^{k} X_i > \varepsilon n^{1/\alpha}\bigg) < \infty, \qquad \forall\, \varepsilon > 0. \tag{26}$$
(ii) 
The M–Z-type strong LLN holds, i.e.,
$$\lim_{n \to \infty} \frac{\max_{1 \le k \le n}\big|\sum_{i=1}^{k} X_i\big|}{n^{1/\alpha}} = 0, \quad \text{a.s. } V.$$
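In the linear special case, Theorem 5 is the classical Marcinkiewicz–Zygmund law with normalizer $n^{1/\alpha}$. A brief simulation sketch (illustrative only; the symmetric Pareto-type distribution with tail $P(|X| > t) = t^{-2}$, $t \ge 1$, is our choice so that $C_V(|X|^{\alpha+\varepsilon}) < \infty$ holds for $\alpha = 1.2$ and small $\varepsilon$ while the variance is infinite):

```python
import numpy as np

# Classical special case of Theorem 5 (ii): symmetric Pareto-type variables
# with P(|X| > t) = t^(-2) for t >= 1, so C_V(|X|^(alpha+eps)) < infinity for
# alpha = 1.2 and small eps > 0, while the variance is infinite.
alpha = 1.2
checkpoints = [10**3, 10**4, 10**5]
rng = np.random.default_rng(1)

ratios = {n: [] for n in checkpoints}
for _ in range(20):
    u = rng.random(checkpoints[-1]) + 1e-12          # avoid u == 0
    signs = rng.choice([-1.0, 1.0], size=checkpoints[-1])
    x = signs * u ** (-0.5)                          # |X| is Pareto with index 2
    running_max = np.maximum.accumulate(np.abs(np.cumsum(x)))
    for n in checkpoints:
        ratios[n].append(running_max[n - 1] / n ** (1 / alpha))

means = {n: float(np.mean(ratios[n])) for n in checkpoints}
print(means)
```

Despite the infinite variance, the averaged ratio decreases with $n$, in line with the normalizer $n^{1/\alpha}$ being strictly larger than the $\sqrt{n \ln n}$ scale of these partial sums.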
Deng and Wang [25] studied extended independent (EI, for short) sequences of random variables and also proved complete convergence. First, we recall the definition of an extended independent sequence of random variables.
Definition 8. 
Given a sublinear expectation space $(\Omega, \mathcal{H}, \hat{\mathbb{E}})$, a sequence of random variables $\{X_n, n \ge 1\}$ is said to be extended independent if
$$\hat{\mathbb{E}}\Big[\prod_{i=1}^{n} \psi_i(X_i)\Big] = \prod_{i=1}^{n} \hat{\mathbb{E}}\big[\psi_i(X_i)\big]$$
for any nonnegative functions $\psi_i \in C_{l,\mathrm{Lip}}(\mathbb{R})$ $(i = 1, 2, \ldots, n)$, $n \ge 1$.
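Under a linear expectation, classically independent random variables satisfy the factorization in Definition 8 with equality, which a quick Monte Carlo check makes concrete (the test functions $\psi_1(t) = t^2$ and $\psi_2(t) = |t|$ are arbitrary nonnegative choices of ours):

```python
import numpy as np

# Monte Carlo check of the factorization in Definition 8 for classically
# independent variables under a linear expectation:
# E[psi1(X) psi2(Y)] should match E[psi1(X)] * E[psi2(Y)].
rng = np.random.default_rng(2)
x = rng.standard_normal(10**6)
y = rng.standard_normal(10**6)               # generated independently of x

psi1 = lambda t: t**2                        # arbitrary nonnegative test functions
psi2 = np.abs
lhs = float(np.mean(psi1(x) * psi2(y)))
rhs = float(np.mean(psi1(x)) * np.mean(psi2(y)))
print(lhs, rhs)                              # both close to E[X^2] E[|Y|] = sqrt(2/pi)
```

Extended independence only requires this identity for nonnegative test functions in $C_{l,\mathrm{Lip}}(\mathbb{R})$, so it is a strictly weaker requirement than classical independence.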
For sequences of extended independent random variables, Deng and Wang [25] proved the following proposition.
Proposition 2. 
Let $\alpha p = 1$, $\alpha > 1/2$, and $0 < p < 1$. Assume that $\{X_n, n \ge 1\}$ is a sequence of identically distributed EI random variables with
$$\lim_{c \to \infty} \hat{\mathbb{E}}\big[(|X_1|^{p} - c)^{+}\big] = 0, \qquad C_V(|X_1|) < \infty;$$
then
$$\sum_{n=1}^{\infty} n^{-1}\, V\bigg(\Big|\sum_{i=1}^{n} X_i\Big| > \varepsilon n^{\alpha}\bigg) < \infty, \qquad \forall\, \varepsilon > 0.$$
Remark 6. 
In Theorem 5, we consider a sequence of negatively dependent and identically distributed random variables. Note that (25) describes the complete convergence of “weighted” sums of random variables, which is more general than the complete convergence of “standard” sums described by (26).

4.3. $L(x) = \log^{1/\gamma}(x)$

Let $L(x) = \log^{1/\gamma}(x)$ with $\gamma > 0$ and $x \ge 2$; then we have the following result.
Theorem 6. 
Let $1 < \alpha < 2$, $0 < \gamma < \alpha$, $1 < \delta < \alpha/\gamma$, and let $\{X, X_n, n \ge 1\}$ be a sequence of negatively dependent and identically distributed random variables in the upper expectation space $(\Omega, \mathcal{F}, \mathcal{P}, \hat{\mathbb{E}})$. Suppose the random variable $X$ satisfies
$$\hat{\mathbb{E}}[X] = \hat{\mathbb{E}}[-X] = 0, \qquad C_V\big[|X|^{\alpha}\log^{\alpha/\gamma+\delta/2}(|X|+2)\big] < \infty;$$
then we have the following:
(i) 
For any array of nonnegative constants $\{a_{ni}, n \ge 1, 1 \le i \le n\}$ satisfying
$$\sum_{i=1}^{n} a_{ni}^{2} \le Cn, \qquad n \ge 1,$$
we have
$$\sum_{n \ge 2} n^{-1}\, V\bigg(\max_{1 \le k \le n}\sum_{i=1}^{k} a_{ni}X_i > \varepsilon n^{1/\alpha}\log^{1/\gamma}(n)\bigg) < \infty, \qquad \forall\, \varepsilon > 0.$$
Specifically,
$$\sum_{n \ge 2} n^{-1}\, V\bigg(\max_{1 \le k \le n}\sum_{i=1}^{k} X_i > \varepsilon n^{1/\alpha}\log^{1/\gamma}(n)\bigg) < \infty, \qquad \forall\, \varepsilon > 0.$$
(ii) 
The M–Z-type strong LLN holds, i.e.,
$$\lim_{n \to \infty} \frac{\max_{1 \le k \le n}\big|\sum_{i=1}^{k} X_i\big|}{n^{1/\alpha}\log^{1/\gamma}(n)} = 0, \quad \text{a.s. } V.$$
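The three normalizing sequences obtained in this section differ only by slowly varying factors; a short computation (with the illustrative parameters $\alpha = 1.5$, $\gamma = 1$, which are our choices) makes their ordering explicit:

```python
import numpy as np

# Growth comparison of the normalizing sequences of Section 4 for the
# illustrative parameters alpha = 1.5, gamma = 1 (log is base 2, as in the text):
#   Theorem 4:  n^(1/alpha) / ln(n^(1/alpha))
#   Theorem 5:  n^(1/alpha)
#   Theorem 6:  n^(1/alpha) * log(n)^(1/gamma)
alpha, gamma = 1.5, 1.0
n = np.array([1e2, 1e4, 1e6])

b_thm4 = n ** (1 / alpha) / np.log(n ** (1 / alpha))
b_thm5 = n ** (1 / alpha)
b_thm6 = n ** (1 / alpha) * np.log2(n) ** (1 / gamma)
print(b_thm4, b_thm5, b_thm6)                # b_thm4 < b_thm5 < b_thm6 here
```

A smaller normalizer gives a stronger almost sure statement, which is why Theorem 4 demands the extra logarithmic moment while Theorem 6, with the largest normalizer, tolerates a comparatively mild one.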
Remark 7. 
In Theorem 6, we exclude the case $\alpha = 1$. This is because the method we use requires that $x^{\alpha-1}\log^{-\alpha/\gamma}(x)$ be increasing on $[A, \infty)$, which fails when $\alpha = 1$.
Proof of Theorem 6. 
First, note that Proposition 1 still holds, i.e., $C_V\big[|X|^{\alpha}\log^{\alpha/\gamma+\delta/2}(|X|+2)\big] < \infty$ implies that
$$\sum_{n \ge 2} V\Big(|X| > \alpha^{-1/\gamma} n^{1/\alpha}\log^{1/\gamma}(n)\Big) < \infty.$$
Most of the proof is similar to that of Theorem 1, so we only give the proof of $M_2 < \infty$, with the necessary changes.
$$\begin{aligned}
M_{2} &\le C\sum_{n \ge 2} \frac{n}{\big(\alpha^{-1/\gamma} n^{1/\alpha}\log^{1/\gamma}(n)\big)^{2}}\, C_{V}^{2}\bigg[\frac{|X|^{\alpha}\log^{\alpha/\gamma+\delta/2}(|X|+2)}{|X|^{\alpha-1}\log^{\alpha/\gamma+\delta/2}(|X|+2)}\, I\Big(|X| \le \alpha^{-1/\gamma} n^{1/\alpha}\log^{1/\gamma}(n)\Big)\bigg]\\
&\le C\sum_{n \ge 2} n^{1-2/\alpha}\log^{-2/\gamma}(n)\,\big(\alpha^{-1/\gamma} n^{1/\alpha}\log^{1/\gamma}(n)\big)^{2-2\alpha}\\
&\qquad\times \log^{-\delta+2\alpha/\gamma}\big(\alpha^{-1/\gamma} n^{1/\alpha}\log^{1/\gamma}(n)+2\big)\, C_{V}^{2}\big[|X|^{\alpha}\log^{\alpha/\gamma+\delta/2}(|X|+2)\big]\\
&= C\sum_{n \ge 2} \frac{1}{n\log^{2\alpha/\gamma}(n)}\,\log^{-\delta+2\alpha/\gamma}\big(\alpha^{-1/\gamma} n^{1/\alpha}\log^{1/\gamma}(n)+2\big)\, C_{V}^{2}\big[|X|^{\alpha}\log^{\alpha/\gamma+\delta/2}(|X|+2)\big]\\
&\le C\sum_{n \ge 2} \frac{\log^{-\delta+2\alpha/\gamma}(n)}{n\log^{2\alpha/\gamma}(n)}\, C_{V}^{2}\big[|X|^{\alpha}\log^{\alpha/\gamma+\delta/2}(|X|+2)\big]\\
&= C\sum_{n \ge 2} \frac{1}{n\log^{\delta}(n)}\, C_{V}^{2}\big[|X|^{\alpha}\log^{\alpha/\gamma+\delta/2}(|X|+2)\big] < \infty.
\end{aligned}$$
Next, we recall the definition of an extended negatively dependent sequence of random variables from [26].
Definition 9. 
Given a sublinear expectation space $(\Omega, \mathcal{H}, \hat{\mathbb{E}})$, a sequence of random variables $\{X_n; n \ge 1\}$ is said to be upper (respectively, lower) extended negatively dependent if there is some dominating constant $K \ge 1$ such that
$$\hat{\mathbb{E}}\Big[\prod_{i=1}^{n} \varphi_i(X_i)\Big] \le K \prod_{i=1}^{n} \hat{\mathbb{E}}\big[\varphi_i(X_i)\big], \qquad n \ge 2,$$
where the nonnegative functions $\varphi_i(x) \in C_{b,\mathrm{Lip}}(\mathbb{R})$ $(i = 1, 2, \ldots)$ are all nondecreasing (respectively, all nonincreasing). They are called extended negatively dependent (END) if they are both upper and lower extended negatively dependent.
Feng and Huang [26] studied extended negatively dependent sequences of random variables and proved the following complete convergence result.
Proposition 3. 
Let $1 < \alpha \le 2$, $\alpha > \gamma > 0$, and let $\{X, X_n, n \ge 1\}$ be a sequence of identically distributed END random variables in the sublinear expectation space $(\Omega, \mathcal{H}, \hat{\mathbb{E}})$. If the random variable $X$ satisfies
$$\hat{\mathbb{E}}[X] = \hat{\mathbb{E}}[-X] = 0,$$
$$\hat{\mathbb{E}}\big[|X|^{\alpha}\log^{\alpha/\gamma+\delta}(|X|)\big] \le C_V\big[|X|^{\alpha}\log^{\alpha/\gamma+\delta}(|X|)\big] < \infty, \qquad 1 < \delta < \alpha/\gamma,$$
then for any array of constants $\{a_{ni}, n \ge 1, 1 \le i \le n\}$ satisfying
$$\sum_{i=1}^{n} |a_{ni}|^{\alpha} = O(n),$$
we have
$$\sum_{n=1}^{\infty} n^{-1}\, V\bigg(\Big|\sum_{i=1}^{n} a_{ni}X_i\Big| > \varepsilon n^{1/\alpha}\log^{1/\gamma}(n)\bigg) < \infty, \qquad \forall\, \varepsilon > 0.$$
Remark 8. 
Note that the moment conditions in Theorem 6 and Proposition 3 are, respectively,
$$\hat{\mathbb{E}}[X] = \hat{\mathbb{E}}[-X] = 0, \qquad C_V\big[|X|^{\alpha}\log^{\alpha/\gamma+\delta/2}(|X|+2)\big] < \infty,$$
and
$$\hat{\mathbb{E}}[X] = \hat{\mathbb{E}}[-X] = 0, \qquad \hat{\mathbb{E}}\big[|X|^{\alpha}\log^{\alpha/\gamma+\delta}(|X|)\big] \le C_V\big[|X|^{\alpha}\log^{\alpha/\gamma+\delta}(|X|)\big] < \infty.$$
Hence the order of the moment condition in this paper is lower by $\delta/2$ than that required in [26].
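The comparison in Remark 8 can be checked numerically: the ratio of the two weight functions decays roughly like $\log^{-\delta/2}(x)$, so, up to constants, the condition of Theorem 6 is the weaker one. A small sketch (with the illustrative parameters $\alpha = 1.5$, $\gamma = 1$, $\delta = 1.2$, which are our choices satisfying $1 < \delta < \alpha/\gamma$):

```python
import numpy as np

# Remark 8: the weight |x|^alpha log^(alpha/gamma + delta/2)(|x| + 2) of
# Theorem 6 grows more slowly than the weight |x|^alpha log^(alpha/gamma +
# delta)(|x|) of [26]; their ratio decays roughly like log^(-delta/2)(x).
alpha, gamma, delta = 1.5, 1.0, 1.2          # 1 < delta < alpha / gamma holds
x = np.array([1e2, 1e4, 1e8])

ours = x**alpha * np.log2(x + 2) ** (alpha / gamma + delta / 2)
theirs = x**alpha * np.log2(x) ** (alpha / gamma + delta)
ratio = ours / theirs
print(ratio)                                  # decreasing toward 0
```

Since the ratio tends to zero, any $X$ with a finite moment in the sense of [26] automatically satisfies the moment condition of Theorem 6.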

Author Contributions

Conceptualization, S.G.; formal analysis, S.G.; methodology, S.G.; validation, S.G. and Z.M.; visualization, S.G. and Z.M.; writing—original draft preparation, S.G. and Z.M.; writing—review and editing, S.G. and Z.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China grant numbers 12371148 and 12001317, the Shandong Provincial Natural Science Foundation grant number ZR2020QA019, and the QILU Young Scholars Program of Shandong University.

Data Availability Statement

Data sharing is not applicable to this article as no datasets were generated or analyzed during the current (theoretical) study.

Acknowledgments

We would like to express our sincere gratitude to our supervisor, Xinwei Feng, for his patient guidance during this research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chen, Z.; Epstein, L. Ambiguity, risk, and asset returns in continuous time. Econometrica 2002, 70, 1403–1443. [Google Scholar] [CrossRef]
  2. Choquet, G. Theory of capacities. Ann. Inst. Fourier 1954, 5, 131–295. [Google Scholar] [CrossRef]
  3. Schmeidler, D. Subjective probability and expected utility without additivity. Econom. J. Econom. Soc. 1989, 57, 571–587. [Google Scholar] [CrossRef]
  4. Wakker, P.P. Testing and characterizing properties of nonadditive measures through violations of the sure-thing principle. Econometrica 2001, 69, 1039–1059. [Google Scholar] [CrossRef]
  5. Wasserman, L.A.; Kadane, J.B. Bayes’ theorem for Choquet capacities. Ann. Stat. 1990, 18, 1328–1339. [Google Scholar]
  6. Barrieu, P.; Karoui, N.E. Pricing, hedging and optimally designing derivatives via minimization of risk measures. arXiv 2007, arXiv:0708.0948. [Google Scholar]
  7. El Karoui, N.; Peng, S.; Quenez, M.C. Backward stochastic differential equations in finance. Math. Financ. 1997, 7, 1–71. [Google Scholar] [CrossRef]
  8. Gianin, E.R. Risk measures via g-expectations. Insur. Math. Econ. 2006, 39, 19–34. [Google Scholar] [CrossRef]
  9. Peng, S. Dynamical evaluations. Comptes Rendus Math. 2004, 339, 585–589. [Google Scholar] [CrossRef]
  10. Peng, S.; Yang, S.; Yao, J. Improving value-at-risk prediction under model uncertainty. J. Financ. Econom. 2023, 21, 228–259. [Google Scholar] [CrossRef]
  11. Peng, S. Backward SDE and related g-expectation. Pitman Res. Notes Math. Ser. 1997, 364, 141–160. [Google Scholar]
  12. Peng, S. Monotonic limit theorem of BSDE and nonlinear decomposition theorem of Doob–Meyers type. Probab. Theory Relat. Fields 1999, 113, 473–499. [Google Scholar] [CrossRef]
  13. Chen, Z.; Liu, Q.; Zong, G. Weak laws of large numbers for sublinear expectation. Math. Control Relat. Fields 2018, 8, 637–651. [Google Scholar] [CrossRef]
  14. Peng, S. Nonlinear Expectations and Stochastic Calculus under Uncertainty: With Robust CLT and G-Brownian Motion; Springer Nature: Berlin/Heidelberg, Germany, 2019; Volume 95. [Google Scholar]
  15. Chen, Z.; Wu, P.; Li, B. A strong law of large numbers for non-additive probabilities. Int. J. Approx. Reason. 2013, 54, 365–377. [Google Scholar] [CrossRef]
  16. Zhang, L. Rosenthal’s inequalities for independent and negatively dependent random variables under sub-linear expectations with applications. Sci. China Math. 2016, 59, 751–768. [Google Scholar] [CrossRef]
  17. Hu, C. A strong law of large numbers for sub-linear expectation under a general moment condition. Stat. Probab. Lett. 2016, 119, 248–258. [Google Scholar] [CrossRef]
  18. Zhan, Z.; Wu, Q. Strong laws of large numbers for weighted sums of extended negatively dependent random variables under sub-linear expectations. Commun.-Stat.-Theory Methods 2022, 51, 1197–1216. [Google Scholar] [CrossRef]
  19. Feng, X.; Lan, Y. Strong limit theorems for arrays of rowwise independent random variables under sublinear expectation. Acta Math. Hung. 2019, 159, 299–322. [Google Scholar] [CrossRef]
  20. Anh, V.T.; Hien, N.T.; Thanh, L.V.; Van, V.T. The Marcinkiewicz–Zygmund-type strong law of large numbers with general normalizing sequences. J. Theor. Probab. 2021, 34, 331–348. [Google Scholar] [CrossRef]
  21. Bai, Z.; Cheng, P.E. Marcinkiewicz strong laws for linear statistics. Stat. Probab. Lett. 2000, 46, 105–112. [Google Scholar] [CrossRef]
  22. Chen, P.; Gan, S. Limiting behavior of weighted sums of iid random variables. Stat. Probab. Lett. 2007, 77, 1589–1599. [Google Scholar]
  23. Miao, Y.; Mu, J.; Xu, J. An analogue for Marcinkiewicz–Zygmund strong law of negatively associated random variables. Rev. Real Acad. Cienc. Exactas Físic. Nat. Ser. Mat. 2017, 111, 697–705. [Google Scholar] [CrossRef]
  24. Sung, S.H. Marcinkiewicz–Zygmund type strong law of large numbers for pairwise iid random variables. J. Theor. Probab. 2014, 27, 96–106. [Google Scholar] [CrossRef]
  25. Deng, X.; Wang, X. On complete convergence for extended independent random variables under sub-linear expectations. J. Korean Math. Soc. 2020, 57, 553–570. [Google Scholar]
  26. Feng, F.; Huang, H. Strong convergence for weighted sums of END random variables under the sub-linear expectations. Commun.-Stat.-Theory Methods 2021, 51, 7885–7896. [Google Scholar] [CrossRef]
  27. Hsu, P.L.; Robbins, H. Complete convergence and the law of large numbers. Proc. Natl. Acad. Sci. USA 1947, 33, 25–31. [Google Scholar] [CrossRef]
  28. Feng, F.; Huang, H. A complete convergence theorem for weighted sums under the sub-linear expectations. J. Math. Inequal. 2021, 15, 899–910. [Google Scholar] [CrossRef]
  29. Lin, Y.; Feng, X. Complete convergence and strong law of large numbers for arrays of random variables under sublinear expectations. Commun.-Stat.-Theory Methods 2020, 49, 5866–5882. [Google Scholar] [CrossRef]
  30. Zhong, H.; Wu, Q. Complete convergence and complete moment convergence for weighted sums of extended negatively dependent random variables under sub-linear expectation. J. Inequal. Appl. 2017, 2017, 1–14. [Google Scholar] [CrossRef]
  31. Peng, S. A new central limit theorem under sublinear expectations. arXiv 2008, arXiv:0803.2656. [Google Scholar]
  32. Chen, Z. Strong laws of large numbers for sub-linear expectations. Sci. China Math. 2016, 59, 945–954. [Google Scholar] [CrossRef]
  33. Seneta, E. Regularly Varying Functions; Springer: Berlin/Heidelberg, Germany, 1976; Volume 508. [Google Scholar]
  34. Bojanic, R.; Seneta, E. Slowly varying functions and asymptotic relations. J. Math. Anal. Appl. 1971, 34, 302–315. [Google Scholar] [CrossRef]

Guo, S.; Meng, Z. The Marcinkiewicz–Zygmund-Type Strong Law of Large Numbers with General Normalizing Sequences under Sublinear Expectation. Mathematics 2023, 11, 4734. https://doi.org/10.3390/math11234734

