Article

g-Expectation for Conformable Backward Stochastic Differential Equations

1 Department of Mathematics, Guizhou University, Guiyang 550025, China
2 Department of Mathematical Analysis and Numerical Mathematics, Comenius University in Bratislava, Mlynská dolina, 842 48 Bratislava, Slovakia
3 Mathematical Institute of Slovak Academy of Sciences, Štefánikova 49, 814 73 Bratislava, Slovakia
4 School of Mathematical and Statistical Sciences, National University of Ireland, 999014 Galway, Ireland
* Author to whom correspondence should be addressed.
Axioms 2022, 11(2), 75; https://doi.org/10.3390/axioms11020075
Submission received: 21 December 2021 / Revised: 27 January 2022 / Accepted: 10 February 2022 / Published: 14 February 2022

Abstract
In this paper, we study applications of conformable backward stochastic differential equations driven by Brownian motion and a compensated random measure to nonlinear expectations. Using the comparison theorem, we introduce the concept of the g-expectation and establish its related properties. In addition, we find that properties of conformable backward stochastic differential equations can be deduced from properties of the generator g. Finally, we extend the nonlinear Doob–Meyer decomposition theorem to more general cases.

1. Introduction

The initial research motivation for nonlinear expectations came from risk measurement and option pricing in financial applications. The Allais paradox, the Ellsberg paradox and Simon’s “bounded rationality” theory, among others, all show that real-world decision-making contradicts the hypotheses of expected utility theory. Economists have found that the linearity of the classical mathematical expectation (that is, the additivity of probability measures) is the main reason for this kind of problem, so researchers sought a new tool that not only retains some properties of the classical mathematical expectation but can also handle financial problems with highly dynamic and complex characteristics.
In the 1950s, Choquet [1] extended the Lebesgue integral to non-additive measures and obtained the Choquet expectation. However, this nonlinear expectation is not dynamically consistent and is therefore not suitable for solving practical financial problems. In 1997, Peng [2] introduced a new nonlinear expectation, the g-expectation, based on the backward stochastic differential equation driven by Brownian motion. The g-expectation retains all the basic properties of the classical expectation except linearity [3], and it can be applied to dynamic risk measurement in actuarial and financial valuation. Subsequently, Royer [4] studied the backward stochastic differential equation driven by Brownian motion and Poisson random measure and introduced the corresponding g-expectation; a large number of studies show that this g-expectation can be applied to financial problems (see [5,6,7,8,9]). Recently, Long et al. [10] proposed a multi-step scheme on time-space grids for solving backward stochastic differential equations, and Chen and Ye [11] investigated solutions of backward stochastic differential equations in the framework of Riemannian manifolds. The paper [12] established an averaging principle for backward stochastic differential equations, showing that, under appropriate assumptions, the solutions can be approximated in the mean-square sense by the solutions of averaged stochastic systems. In addition, coupled forward–backward stochastic differential equations driven by G-Brownian motion were studied in [13], while [14] investigated the solvability of fully coupled forward–backward stochastic differential equations with irregular coefficients.
The papers above concern integer-order derivatives, whereas works on conformable-type derivatives are still few ([15,16,17,18,19,20]). The conformable derivative shares some properties with fractional derivatives as well as some with integer-order derivatives. We discussed the motivation for studying conformable backward stochastic differential equations in [21]. In the present paper, we study the g-expectation for conformable backward stochastic differential equations.
This paper is divided into four parts. In the second section, we give some definitions and lemmas. In the third section, we study the relationship between the g-expectation and the filtration-consistent expectation, and we give some properties of the g-expectation; we find that the g-expectation can be considered as a nonlinear extension of the Girsanov transformation. In the final section, we prove the Doob–Meyer decomposition theorem under mild assumptions.

2. Preliminaries

Let B(·) be a standard Brownian motion defined on the complete probability space (Ω, F, P) with a filtration {F_t}_{0≤t≤T} satisfying the usual hypotheses of completeness and right continuity. B(ℝ) denotes the Borel sets of ℝ and E denotes the expected value. A stochastic process V(ω, t) is a real function defined on Ω × [0, T] such that ω ↦ V(ω, t) is F-measurable for any t ∈ [0, T]. A stochastic process V is called F_t-adapted if ω ↦ V(ω, t) is F_t-measurable for any t ∈ [0, T]. The natural filtration is completed with the sets of measure zero. By P we denote the predictable σ-field. A process V: Ω × [0, T] → ℝ is called F-predictable if it is F-adapted and P-measurable. A process is called càdlàg if its trajectories are right-continuous and have left limits. The term a.s. means almost surely with respect to the probability measure. Inspired by [22], we define the spaces that we will use:
L_Q²(ℝ) = { measurable functions φ: ℝ → ℝ ; ∫_ℝ |φ(s)|² Q(ds) < ∞, where Q is a σ-finite measure },
L²(Ω, F_T, P) = { F_T-measurable random variables ξ: Ω → ℝ ; E[|ξ|²] < ∞ },
H²(ℝ) = { predictable processes Y: Ω × [a, T] → ℝ ; E ∫_a^T |Y(t)|² dt < ∞ },
H_N²(ℝ) = { predictable processes Z: Ω × [a, T] × ℝ → ℝ ; E ∫_a^T ∫_ℝ |Z(t, s)|² Q(t, ds) η(t) dt < ∞ },
S²(ℝ) = { adapted càdlàg processes X: Ω × [a, T] → ℝ ; E[ sup_{t∈[a,T]} |X(t)|² ] < ∞ },
L²(Ω, F_t, P) = { F_t-measurable random variables ξ: Ω → ℝ ; E[|ξ|²] < ∞ }.
Furthermore, for any constant σ , we introduce the norms of spaces H 2 , H N 2 and S 2 as:
‖Y‖²_{H²} = E ∫_a^T e^{σt} |Y(t)|² dt,  ‖Z‖²_{H_N²} = E ∫_a^T ∫_ℝ e^{σt} |Z(t, s)|² Q(t, ds) η(t) dt,  ‖X‖²_{S²} = E[ sup_{t∈[a,T]} e^{σt} |X(t)|² ].
Definition 1.
(see [2] (Definition 3.1)) A functional E : L 2 ( Ω , F T , P ) R is called a nonlinear expectation if it satisfies the following properties:
(i) Strict monotonicity: if X_1 ≥ X_2 a.s., then E[X_1] ≥ E[X_2]; moreover, if X_1 ≥ X_2 a.s. and E[X_1] = E[X_2], then X_1 = X_2 a.s.
(ii) Preservation of constants: E[c] = c for any constant c.
Definition 2.
(see [2] (Definition 3.2)) A nonlinear expectation E is a filtration consistent expectation ( F -consistent expectation) if for any ζ L 2 ( Ω , F T , P ) and a t T , there exists a random variable ξ L 2 ( Ω , F t , P ) such that E [ ζ 1 A ] = E [ ξ 1 A ] , A F t , where ξ is uniquely defined. We denote ξ = E [ ζ | F t ] , which is called the conditional expectation of ζ with respect to F t . Therefore, we can write it as E [ ζ 1 A ] = E [ E [ ζ | F t ] 1 A ] , A F t .
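For the classical case g ≡ 0 (which satisfies the assumptions below and reduces the g-expectation to the ordinary conditional expectation), the defining identity E[ζ1_A] = E[E[ζ | F_t]1_A] can be checked numerically. The following sketch is our own illustration, not part of the paper: it takes ζ = B(T)², A = {B(t) > 0}, and uses the classical closed form E[B(T)² | F_t] = B(t)² + (T − t).

```python
import math
import random

def consistency_check(t=0.4, T=1.0, n=200_000, seed=3):
    """Monte Carlo check of E[zeta 1_A] = E[E[zeta | F_t] 1_A] in the classical
    case g == 0, with zeta = B(T)**2, A = {B(t) > 0}, and the closed form
    E[B(T)**2 | F_t] = B(t)**2 + (T - t)."""
    rng = random.Random(seed)
    lhs = rhs = 0.0
    for _ in range(n):
        b_t = rng.gauss(0.0, math.sqrt(t))              # B(t)
        b_T = b_t + rng.gauss(0.0, math.sqrt(T - t))    # B(T) via independent increment
        ind = 1.0 if b_t > 0 else 0.0                   # indicator of A, an F_t-event
        lhs += b_T * b_T * ind                          # E[zeta 1_A]
        rhs += (b_t * b_t + (T - t)) * ind              # E[E[zeta | F_t] 1_A]
    return lhs / n, rhs / n

lhs, rhs = consistency_check()
```

Both estimates agree to within Monte Carlo error, which is exactly the F-consistency identity for this linear special case.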
Lemma 1.
(see [4](Lemma A.1)) Let A ( · ) be an increasing predictable process. We consider its decomposition as a sum of a continuous and a purely discontinuous process: A ( t ) = A 1 ( t ) + A 2 ( t ) . We also consider a càdlàg martingale W ( · ) , bounded in L 2 .
(i) For any stopping time τ such that a τ T ,
E a τ W ( s ) d A 1 ( s ) = 0 .
(ii) For any predictable stopping time τ such that a τ T ,
E a τ W ( s ) d A 2 ( s ) = E a s τ W ( s ) A 2 ( s ) .
Lemma 2.
(see [21] (Theorem 3.5)) Suppose U ( · ) = U ( X ( · ) , · ) C 2 , 1 ( R × R + , R ) . Then, for any a t T , we have
D ρ a U ( t ) d ( t a ) ρ ρ = ( u t ( t a ) ρ 1 g ( t , X ( t ) , Y ( t ) , Z ( t , · ) ) u x + 1 2 ( t a ) 2 ( ρ 1 ) Y 2 ( t ) 2 u x 2 ) d t + ( t a ) ρ 1 Y ( t ) u x d B ( t ) + ( t a ) ρ 1 u x R Z ( t , s ) N ˜ ( d t , d s ) + 1 2 ( t a ) 2 ( ρ 1 ) 2 u x 2 R Z 2 ( t , s ) N ( d t , d s ) , 0 < ρ 1 .
Lemma 3.
(see [22] (Theorem 2.5.1)) Let B be a ( P , F ) -Brownian motion, N be a ( P , F ) -random measure with compensator ϑ ( d τ , d s ) = Q ( τ , d s ) η ( τ ) d τ . Assume an equivalent probability measure Q P with a positive F -martingale:
d M ( t ) M ( t ) = ϕ ( t ) d B ( t ) + R κ ( t , s ) N ˜ ( d t , d s ) , M ( a ) = 1 ,
where ϕ ( · ) and κ ( · , · ) are the F -predictable processes satisfying
a T | ϕ ( t ) | 2 d t < , a T R | κ ( t , s ) | 2 Q ( t , d s ) η ( t ) d t < , κ ( t , s ) > 1 , a t T , s R .
Then,
B Q ( t ) = B ( t ) a t ϕ ( τ ) d τ , a t T , N ˜ Q ( t , A ) = N ( t , A ) a t R ( 1 + κ ( τ , s ) ) Q ( τ , d s ) η ( τ ) d τ , a t T , A B ( R ) ,
are ( Q , F ) -Brownian motion and a ( Q , F ) -random measure.
Lemma 4.
(see [22] (p. 42)) Let γ > 0 and x 1 , x 2 R . Then,
2 | x 1 x 2 | 1 γ | x 1 | 2 + γ | x 2 | 2 .
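Lemma 4 is the weighted Young inequality 2|x_1 x_2| ≤ (1/γ)|x_1|² + γ|x_2|², which follows from expanding (|x_1|/√γ − √γ|x_2|)² ≥ 0. A quick numerical spot-check (purely illustrative):

```python
def young_holds(x1, x2, gamma):
    # weighted Young inequality: 2|x1*x2| <= (1/gamma)|x1|^2 + gamma|x2|^2
    # (the 1e-12 slack only absorbs floating-point rounding)
    return 2 * abs(x1 * x2) <= (1 / gamma) * x1 * x1 + gamma * x2 * x2 + 1e-12

# check over a small grid of values and weights gamma > 0
checks = [young_holds(a, b, g)
          for a in (-3.0, -0.5, 0.0, 1.7)
          for b in (-2.2, 0.0, 0.9, 4.1)
          for g in (0.1, 1.0, 5.0)]
```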
Lemma 5.
Consider the following family of conformable backward stochastic differential equation parameterized by n = 1 , 2 , · · ·
X n ( t ) = ζ + t T ( τ a ) ρ 1 g ( τ , X n ( τ ) , Y n ( τ ) , Z n ( τ ) ) d τ + n t T ( τ a ) ρ 1 ( X ( τ ) X n ( τ ) ) t T ( τ a ) ρ 1 Y n ( τ ) d B ( τ ) t T ( τ a ) ρ 1 R Z n ( τ , s ) N ˜ ( d τ , d s ) , t [ a , T ] , 0 < ρ 1 ,
where ζ ∈ L²(Ω, F_T, P), X is an adapted process, Y and Z are given control processes, g: Ω × [a, T] × ℝ × ℝ × L_Q²(ℝ) → ℝ is predictable, B(·) is a given Brownian motion and Ñ is a compensated random measure. For any n = 1, 2, ⋯, we have X(t) ≥ X_n(t).
Proof. 
Following [23] (Lemma 3.4), suppose to the contrary that X(t) < X_n(t) on a set of positive measure. Then there exists ϖ > 0 such that the measure of { (ω, t) : X_n(t) − X(t) ≥ ϖ } ⊂ Ω × [a, T] is non-zero. Define the following two stopping times:
σ̄ = min[ T, inf{ t : X_n(t) ≥ X(t) + ϖ } ],  τ̄ = inf{ t ≥ σ̄ : X_n(t) ≤ X(t) }.
Then we get σ̄ ≤ τ̄ ≤ T. Since X(t) − X_n(t) is right continuous, we have:
X_n(σ̄) ≥ X(σ̄) + ϖ,
X_n(τ̄) ≤ X(τ̄).
Suppose X̄(t) is the solution with the terminal value X̄(τ̄) = X_n(τ̄) on [a, τ̄]. From (2) and the comparison theorem, we get X_n(σ̄) ≤ X(σ̄). This contradicts the first inequality above. Thus, X(t) ≥ X_n(t). □

3. The Main Results of g-Expectations

Consider the following conformable backward stochastic differential equation
X ( t ) = ζ + t T ( τ a ) ρ 1 g ( τ , X ( τ ) , Y ( τ ) , Z ( τ , s ) ) d τ t T ( τ a ) ρ 1 Y ( τ ) d B ( τ ) t T ( τ a ) ρ 1 R Z ( τ , s ) N ˜ ( d τ , d s ) , 0 < ρ 1 , a t T ,
where ζ L 2 ( Ω , F T , P ) , X is an adapted process, Y and Z are given control processes, g : Ω × [ a , T ] × R × R × L Q 2 ( R ) R is predictable, B ( · ) is a given Brownian motion and N ˜ is a compensated random measure.
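To build intuition for Equation (3), the following sketch (our own illustration, not from the paper) discretizes the Brownian-only case (no jump part, Z ≡ 0) on a recombining binomial tree and steps backward in time. The weight (τ − a)^{ρ−1} is the only difference from a classical BSDE scheme; its regularization at τ = a is an ad hoc choice.

```python
import math

def solve_conformable_bsde(g, zeta, a=0.0, T=1.0, rho=0.7, n=200):
    """Backward scheme on a recombining binomial tree for the Brownian-only case of
    X(t) = zeta + int_t^T (tau-a)^(rho-1) g(tau, X, Y) dtau
                - int_t^T (tau-a)^(rho-1) Y dB(tau).
    `zeta` maps the terminal Brownian value B(T) to the terminal payoff."""
    dt = (T - a) / n
    sq = math.sqrt(dt)
    # terminal layer: B(T) takes the values (2k - n) * sqrt(dt), k = 0..n
    layer = [zeta((2 * k - n) * sq) for k in range(n + 1)]
    for i in range(n - 1, -1, -1):                  # step backward in time
        t = a + i * dt
        w = (t - a) ** (rho - 1) if t > a else dt ** (rho - 1)  # ad hoc weight at t = a
        nxt = []
        for k in range(i + 1):
            up, dn = layer[k + 1], layer[k]
            y = (up - dn) / (2 * sq)                # martingale-representation estimate of Y
            x = 0.5 * (up + dn)                     # E[X(t + dt) | F_t] on the tree
            nxt.append(x + w * g(t, x, y) * dt)     # weighted backward Euler step
        layer = nxt
    return layer[0]                                 # X(a), the candidate E_g[zeta]

# with g == 0 the weight drops out and the scheme returns the plain expectation
val = solve_conformable_bsde(lambda t, x, y: 0.0, lambda b: b)
```

With g ≡ 0 the scheme reduces to iterated averaging, recovering the classical expectation for any ρ; a nonzero g feeds back through the weighted drift term.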
Assumption 1.
(i) The generator g : Ω × [ a , T ] × R × R × R R is predictable and Lipschitz in x and y
| g ( t , x , y , z ) g ( t , x , y , z ) | K ( | x x | + | y y | ) , x , x , y , y R ,
where K is a positive constant.
(ii) For any z, z R , there exist constants 1 < C 1 0 and C 2 0 such that
g ( t , x , y , z ) g ( t , x , y , z ) R ( z ( t , s ) z ( t , s ) ) δ x , y , z , z ( t , s ) Q ( t , d s ) η ( t ) ,
where δ x , y , z , z ( t , s ) : Ω × [ a , T ] × R R is predictable and satisfies C 1 ( 1 | s | ) δ x , y , z , z ( t , s ) C 2 ( 1 | s | ) .
(iii) For any x R , g ( t , x , 0 , 0 ) = 0 .
Notice that the comparison theorems in [21] guarantee the properties in Definition 1. Hence a nonlinear expectation can be defined via conformable backward stochastic differential equations.
Definition 3.
A nonlinear expectation E g [ · ] : L 2 ( Ω , F T , P ) R is called a g-expectation if the generator g of Equation (3) satisfies Assumption 1 and we define the g-expectation as E g [ ζ ] = X ( a ) , where a triple ( X , Y , Z ) is a unique solution of Equation (3) and X ( a ) denotes the initial value of the solution.
Definition 4.
A nonlinear expectation E g [ · | F t ] : L 2 ( Ω , F T , P ) L 2 ( Ω , F t , P ) is called a conditional g-expectation if for any a t T , the generator g of Equation (3) satisfies Assumption 1 and we define the conditional g-expectation as E g [ ζ | F t ] = X ( t ) , where a triple ( X , Y , Z ) is a unique solution of Equation (3) and ζ L 2 ( Ω , F T , P ) denotes the terminal value of the solution.
Proposition 1.
We have the following results:
(i) For a t T , A F t and ζ L 2 ( Ω , F T , P ) , E g [ ζ 1 A | F t ] = E g [ ζ | F t ] 1 A .
(ii) For any a s t T and ζ L 2 ( Ω , F T , P ) , E g [ E g [ ζ | F t ] | F s ] = E g [ ζ | F s ] .
Proof. 
Case (i). Let A F t . For any a t T and 0 < ρ 1 , consider Equation (3) and
X 1 ( t ) = ζ 1 A + t T ( τ a ) ρ 1 g ( τ , X 1 ( τ ) , Y 1 ( τ ) , Z 1 ( τ ) ) d τ t T ( τ a ) ρ 1 Y 1 ( τ ) d B ( τ ) t T ( τ a ) ρ 1 R Z 1 ( τ , s ) N ˜ ( d τ , d s ) ,
where ζ L 2 ( Ω , F T , P ) and the generator g satisfies Assumption 1. Multiplying by 1 A on both sides of (3) we get
X ( t ) 1 A = ζ 1 A + t T ( τ a ) ρ 1 1 A g ( τ , X ( τ ) , Y ( τ ) , Z ( τ ) ) d τ t T ( τ a ) ρ 1 Y ( τ ) 1 A d B ( τ ) t T ( τ a ) ρ 1 R Z ( τ , s ) 1 A N ˜ ( d τ , d s ) ,
where a t T and 0 < ρ 1 . Notice that g ( t , X ( t ) , Y ( t ) , Z ( t ) ) 1 A = g ( t , 1 A X ( t ) , 1 A Y ( t ) , 1 A Z ( t ) ) , and then,
X ( t ) 1 A = ζ 1 A + t T ( τ a ) ρ 1 g ( τ , X ( τ ) 1 A , Y ( τ ) 1 A , Z ( τ ) 1 A ) d τ t T ( τ a ) ρ 1 Y ( τ ) 1 A d B ( τ ) t T ( τ a ) ρ 1 R Z ( τ , s ) 1 A N ˜ ( d τ , d s ) .
Let ( X ¯ ( t ) , Y ¯ ( t ) , Z ¯ ( t ) ) = ( X ( t ) 1 A , Y ( t ) 1 A , Z ( t ) 1 A ) , and (8) can be written as:
X ¯ ( t ) = ζ 1 A + t T ( τ a ) ρ 1 g ( τ , X ¯ ( τ ) , Y ¯ ( τ ) , Z ¯ ( τ ) ) d τ t T ( τ a ) ρ 1 Y ¯ ( τ ) d B ( τ ) t T ( τ a ) ρ 1 R Z ¯ ( τ , s ) N ˜ ( d τ , d s ) .
By the uniqueness of the solution of the conformable backward stochastic differential equation, we get X_1(t) = X̄(t) = X(t)1_A, a ≤ t ≤ T. From Definition 4, we have E_g[ζ1_A | F_t] = X_1(t) = X(t)1_A = E_g[ζ | F_t]1_A.
Case (ii). For any A F s and a s t T , we have A F t . From the result of (i), one has:
E g [ E g [ E g [ ζ | F t ] | F s ] 1 A ] = E g [ E g [ E g [ ζ | F t ] 1 A | F s ] ] = E g [ E g [ E g [ ζ 1 A | F t ] | F s ] ] = E g [ E g [ ζ 1 A | F s ] ] = E g [ E g [ ζ | F s ] 1 A ] ,
where ζ ∈ L²(Ω, F_T, P). Let ζ_1 = E_g[E_g[ζ | F_t] | F_s] and ζ_2 = E_g[ζ | F_s]. If we choose A = {ζ_1 ≥ ζ_2} ∈ F_s, then from Definition 1, ζ_1 1_A ≥ ζ_2 1_A and E_g[ζ_1 1_A] = E_g[ζ_2 1_A], so we get ζ_1 1_A = ζ_2 1_A. Hence ζ_1 ≤ ζ_2. If we set A = {ζ_1 ≤ ζ_2} ∈ F_s, we get ζ_1 ≥ ζ_2 in the same way. Hence, we conclude that ζ_1 = ζ_2, that is, E_g[E_g[ζ | F_t] | F_s] = E_g[ζ | F_s]. □
Theorem 1.
The g-expectation is F -consistent expectation.
Proof. 
Let A F t . For any a t T and 0 < ρ 1 , consider the following equations:
X 1 ( t ) = ζ 1 A + t T ( τ a ) ρ 1 g ( τ , X 1 ( τ ) , Y 1 ( τ ) , Z 1 ( τ ) ) d τ t T ( τ a ) ρ 1 Y 1 ( τ ) d B ( τ ) t T ( τ a ) ρ 1 R Z 1 ( τ , s ) N ˜ ( d τ , d s ) ,
X 2 ( t ) = X 1 ( u ) 1 A + t T ( τ a ) ρ 1 g ( τ , X 2 ( τ ) , Y 2 ( τ ) , Z 2 ( τ ) ) d τ t T ( τ a ) ρ 1 Y 2 ( τ ) d B ( τ ) t T ( τ a ) ρ 1 R Z 2 ( τ , s ) N ˜ ( d τ , d s ) ,
where ζ L 2 ( Ω , F T , P ) , X 1 ( u ) = E g [ ζ 1 A | F t ] and the generator g satisfies Assumption 1. Multiplying by 1 A on both sides of (7) we get:
X 2 ( t ) 1 A = E g [ ζ 1 A | F t ] 1 A + t T ( τ a ) ρ 1 1 A g ( τ , X 2 ( τ ) , Y 2 ( τ ) , Z 2 ( τ ) ) d τ t T ( τ a ) ρ 1 Y 2 ( τ ) 1 A d B ( τ ) t T ( τ a ) ρ 1 R Z 2 ( τ , s ) 1 A N ˜ ( d τ , d s )
where a t T and 0 < ρ 1 . Notice that g ( t , X ( t ) , Y ( t ) , Z ( t ) ) 1 A = g ( t , 1 A X ( t ) , 1 A Y ( t ) , 1 A Z ( t ) ) , and then,
X 2 ( t ) 1 A = E g [ ζ 1 A | F t ] 1 A + t T ( τ a ) ρ 1 g ( τ , X 2 ( τ ) 1 A , Y 2 ( τ ) 1 A , Z 2 ( τ ) 1 A ) d τ t T ( τ a ) ρ 1 Y 2 ( τ ) 1 A d B ( τ ) t T ( τ a ) ρ 1 R Z 2 ( τ , s ) 1 A N ˜ ( d τ , d s ) .
Let ( X 3 ( t ) , Y 3 ( t ) , Z 3 ( t ) ) = ( X 2 ( t ) 1 A , Y 2 ( t ) 1 A , Z 2 ( t ) 1 A ) , and (8) can be written as:
X 3 ( t ) = E g [ ζ 1 A | F t ] 1 A + t T ( τ a ) ρ 1 g ( τ , X 3 ( τ ) , Y 3 ( τ ) , Z 3 ( τ ) ) d τ t T ( τ a ) ρ 1 Y 3 ( τ ) d B ( τ ) t T ( τ a ) ρ 1 R Z 3 ( τ , s ) N ˜ ( d τ , d s ) .
By the uniqueness of the solution of the conformable backward stochastic differential equation, we get X_1(t) = X_3(t) = E_g[ζ1_A | F_t]1_A, a ≤ t ≤ T. From Definition 3 and Proposition 1, we have:
E g [ ζ 1 A ] = X 1 ( a ) = X 3 ( a ) = E g [ E g [ ζ 1 A | F t ] 1 A ] = E g [ E g [ ζ | F t ] 1 A ] .
Hence, there exists ξ = E g [ ζ | F t ] such that E g [ ζ 1 A ] = E g [ ξ 1 A ] .
Next, we prove the uniqueness of ξ. Assume that there exists another random variable ξ′ such that E_g[ξ′1_A] = E_g[ξ1_A] and ξ′ ≠ ξ. First suppose ξ′ > ξ. According to the comparison theorem in [21] and Definition 3, we have E_g[ξ′1_A] > E_g[ξ1_A], which contradicts E_g[ξ′1_A] = E_g[ξ1_A]. If instead ξ′ < ξ, the equality E_g[ξ′1_A] = E_g[ξ1_A] fails in the same way. Hence ξ′ = ξ.
Combining the existence and uniqueness of ξ , we conclude that the g-expectation is F -consistent expectation. The proof is complete. □
Next, we give two kinds of g-expectation with the special generators g 1 and g 2 .
Proposition 2.
Let ϕ ( · ) and κ ( · ) be F -predictable processes. For any μ 1 R + and 1 < C 1 0 , we define the following generators:
g 1 ( t , x , y , z ) = μ 1 | y | + C 1 R ( 1 | s | ) | z ( t , s ) | Q ( t , d s ) η ( t ) , g 2 ( t , x , y , z ) = μ 1 | y | C 1 R ( 1 | s | ) | z ( t , s ) | Q ( t , d s ) η ( t ) .
Then E g 1 [ ζ | F t ] = inf Q D E Q [ ζ | F t ] and E g 2 [ ζ | F t ] = sup Q D E Q [ ζ | F t ] , where
D = { Q P , d Q d P | F t = M ( t ) , d M ( t ) M ( t ) = ϕ ( t ) d B ( t ) + R κ ( t , s ) N ˜ ( d t , d s ) , M ( a ) = 1 , | ϕ ( t ) | μ 1 , 1 < κ ( t , s ) C 1 ( 1 | s | ) , a t T } .
Proof. 
Here we consider the case of g 1 ( t , x , y , z ) = μ 1 | y | + C 1 R ( 1 | s | ) | z ( t , s ) | Q ( t , d s ) η ( t ) . The proof of g 2 ( t , x , y , z ) = μ 1 | y | C 1 R ( 1 | s | ) | z ( t , s ) | Q ( t , d s ) η ( t ) is similar. Consider the following equation
X ( t ) = ζ + t T ( τ a ) ρ 1 g 1 ( τ , X ( τ ) , Y ( τ ) , Z ( τ ) ) d τ t T ( τ a ) ρ 1 Y ( τ ) d B ( τ ) t T ( τ a ) ρ 1 R Z ( τ , s ) N ˜ ( d τ , d s ) = ζ + t T μ 1 ( τ a ) ρ 1 | Y ( τ ) | d τ + t T ( τ a ) ρ 1 R C 1 ( 1 | s | ) | Z ( τ , s ) | Q ( τ , d s ) η ( τ ) d τ t T ( τ a ) ρ 1 Y ( τ ) d B ( τ ) t T ( τ a ) ρ 1 R Z ( τ , s ) N ˜ ( d τ , d s ) ,
where a t T , 0 < ρ 1 , μ 1 R + and 1 < C 1 0 . Define
d Q d P | F t = M ( t ) , M ( a ) = 1 , d M ( t ) M ( t ) = ϕ ( t ) d B ( t ) + R κ ( t , s ) N ˜ ( d t , d s ) ,
where | ϕ ( t ) | μ 1 and 1 < κ ( t , s ) C 1 ( 1 | s | ) . Let D = { Q P , d Q d P | F t = M ( t ) } , and from Lemma 3, we can get
X ( t ) = ζ + t T ( τ a ) ρ 1 [ μ 1 | Y ( τ ) | ϕ ( τ ) Y ( τ ) ] d τ + t T ( τ a ) ρ 1 R [ C 1 ( 1 | s | ) | Z ( τ , s ) | κ ( τ , s ) Z ( τ , s ) ] Q ( τ , d s ) η ( τ ) d τ t T ( τ a ) ρ 1 Y ( τ ) d B Q ( τ ) t T ( τ a ) ρ 1 R Z ( τ , s ) N ˜ Q ( d τ , d s ) ζ t T ( τ a ) ρ 1 Y ( τ ) d B Q ( τ ) t T ( τ a ) ρ 1 R Z ( τ , s ) N ˜ Q ( d τ , d s ) ,
where a t T and 0 < ρ 1 ; note that ϕ ( τ ) Y ( τ ) = μ 1 Y ( τ ) > 0 and κ ( τ , s ) Z ( τ , s ) = C 1 ( 1 | s | ) Z ( τ , s ) > 0 , we have
X ( t ) = ζ t T ( τ a ) ρ 1 Y ( τ ) d B Q ( τ ) t T ( τ a ) ρ 1 R Z ( τ , s ) N ˜ Q ( d τ , d s ) .
Taking the conditional expectation under the probability measure Q , we obtain X ( t ) = inf Q D E Q [ ζ | F t ] . Notice that Assumption 1 is satisfied for the generator g 1 ( t , x , y , z ) . From Definition 4, we get X ( t ) = E g 1 [ ζ | F t ] , that is, E g 1 [ ζ | F t ] = inf Q D E Q [ ζ | F t ] .
The proof is complete. □
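As a concrete illustration of the duality in Proposition 2, consider the simplest hedged special case (our own worked example): a = 0, ρ = 1, no jump part, and generator g(y) = −μ_1|y|, the sign convention that yields the infimum representation. For ζ = B(T), both sides can be computed in closed form:

```latex
\text{Try } Y \equiv 1,\ Z \equiv 0 \text{ in } X(t) = \zeta + \int_t^T g(Y(s))\,ds - \int_t^T Y(s)\,dB(s):
\quad X(t) = B(T) - \mu_1 (T - t) - \bigl(B(T) - B(t)\bigr) = B(t) - \mu_1 (T - t),
\quad \text{so } \mathcal{E}_g[B(T)] = X(0) = -\mu_1 T.
\text{Dual side: under } \mathbb{Q} \in \mathcal{D} \text{ with drift } \phi,\ |\phi| \le \mu_1,
\text{ we have } B(t) = B^{\mathbb{Q}}(t) + \int_0^t \phi(s)\,ds, \text{ hence}
\quad \mathbb{E}^{\mathbb{Q}}[B(T)] = \mathbb{E}^{\mathbb{Q}}\Bigl[\int_0^T \phi(s)\,ds\Bigr] \ge -\mu_1 T,
\text{ with equality for } \phi \equiv -\mu_1,\ \text{so } \inf_{\mathbb{Q} \in \mathcal{D}} \mathbb{E}^{\mathbb{Q}}[B(T)] = -\mu_1 T = \mathcal{E}_g[B(T)].
```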
Proposition 3.
Let E g be a g-expectation and ζ, ζ 1 , ζ 2 L 2 ( Ω , F T , P ) .
(i) Translation invariance: for any constant c R and a t T , we have
E g [ ζ + c | F t ] = E g [ ζ | F t ] + c ,
where the generator g is independent of X ( · ) .
(ii) Homogeneity: for any constant c > 0 and a t T , we have
E g [ c ζ | F t ] = c E g [ ζ | F t ] ,
where the generator g is positively homogeneous.
(iii) Convexity: for any c ( 0 , 1 ) and a t T , the g-expectation E g is convex
E g [ c ζ 1 + ( 1 c ) ζ 2 | F t ] c E g [ ζ 1 | F t ] + ( 1 c ) E g [ ζ 2 | F t ] ,
if the generator g is convex, namely:
g ( t , c x 1 + ( 1 c ) x 2 , c y 1 + ( 1 c ) y 2 , c z 1 + ( 1 c ) z 2 ) c g ( t , x 1 , y 1 , z 1 ) + ( 1 c ) g ( t , x 2 , y 2 , z 2 ) , ( x 1 , y 1 , z 1 ) , ( x 2 , y 2 , z 2 ) ( R , R , L Q 2 ) .
(iv) Sub-linearity and sub-additivity: the g-expectation E_g is sub-linear, sub-additive and positively homogeneous
E g [ ζ 1 + ζ 2 | F t ] E g [ ζ 1 | F t ] + E g [ ζ 2 | F t ] ,
if the generator g is positively homogeneous and satisfies:
g ( t , x 1 + x 2 , y 1 + y 2 , z 1 + z 2 ) g ( t , x 1 , y 1 , z 1 ) + g ( t , x 2 , y 2 , z 2 ) , ( x 1 , y 1 , z 1 ) , ( x 2 , y 2 , z 2 ) ( R , R , L Q 2 ) .
Proof. 
Case (i). Consider the following conformable backward stochastic differential equations:
X ( t ) = ζ + t T ( τ a ) ρ 1 g ( τ , X ( τ ) , Y ( τ ) , Z ( τ ) ) d τ t T ( τ a ) ρ 1 Y ( τ ) d B ( τ ) t T ( τ a ) ρ 1 R Z ( τ , s ) N ˜ ( d τ , d s ) ,
X ( t ) = ζ + c + t T ( τ a ) ρ 1 g ( τ , X ( τ ) , Y ( τ ) , Z ( τ ) ) d τ t T ( τ a ) ρ 1 Y ( τ ) d B ( τ ) t T ( τ a ) ρ 1 R Z ( τ , s ) N ˜ ( d τ , d s ) ,
where ζ L 2 ( Ω , F T , P ) , c R , a t T and 0 < ρ 1 . Let
d Q d P | F t = M ( t ) , M ( a ) = 1 d M ( t ) M ( t ) = α ˜ ( t ) d B ( t ) + R β ˜ ( t ) N ˜ ( d t , d s ) , a t T ,
where α ˜ ( · ) and β ˜ ( · ) are the predictable processes. If we choose a generator g ( t , X ( t ) , Y ( t ) , Z ( t ) ) = α ˜ ( t ) Y ( t ) + R β ˜ ( t ) Z ( t , s ) Q ( t , d s ) η ( t ) which does not depend on X ( · ) , using Lemma 3, one has:
X ( t ) = ζ t T ( τ a ) ρ 1 Y ( τ ) d B Q ( τ ) t T ( τ a ) ρ 1 R Z ( τ , s ) N ˜ Q ( d τ , d s ) , X ( t ) = ζ + c t T ( τ a ) ρ 1 Y ( τ ) d B Q ( τ ) t T ( τ a ) ρ 1 R Z ( τ , s ) N ˜ Q ( d τ , d s ) ,
where ζ L 2 ( Ω , F T , P ) , c R , a t T and 0 < ρ 1 . Hence, we get X ( t ) = E Q [ ζ | F t ] and X ( t ) = E Q [ ζ + c | F t ] = E Q [ ζ | F t ] + c under the probability measure Q , that is, X ( t ) = X ( t ) + c , Y ( t ) = Y ( t ) and Z ( t , s ) = Z ( t , s ) .
On the other hand, since the generator g ( t , X ( t ) , Y ( t ) , Z ( t ) ) = α ˜ ( t ) Y ( t ) + R β ˜ ( t ) Z ( t , s ) Q ( t , d s ) η ( t ) satisfies Assumption 1, we have X ( t ) = E g [ ζ | F t ] and X ( t ) = E g [ ζ + c | F t ] . Hence, we conclude that E g [ ζ + c | F t ] = X ( t ) = X ( t ) + c = E g [ ζ | F t ] + c .
Case (ii). Let c > 0 , and using the same method as in case (i), consider the following equation:
X ( t ) = c ζ + t T ( τ a ) ρ 1 g ( τ , X ( τ ) , Y ( τ ) , Z ( τ ) ) d τ t T ( τ a ) ρ 1 Y ( τ ) d B ( τ ) t T ( τ a ) ρ 1 R Z ( τ , s ) N ˜ ( d τ , d s ) ,
where ζ L 2 ( Ω , F T , P ) , a t T and 0 < ρ 1 . Choose
g ( t , X ( t ) , Y ( t ) , Z ( t ) ) = α ˜ ( t ) Y ( t ) + R β ˜ ( t ) Z ( t , s ) Q ( t , d s ) η ( t ) ,
which is positively homogeneous. Within the framework of (12), we have
X ( t ) = c ζ t T ( τ a ) ρ 1 Y ( τ ) d B Q ( τ ) t T ( τ a ) ρ 1 R Z ( τ , s ) N ˜ Q ( d τ , d s ) ,
where ζ L 2 ( Ω , F T , P ) , a t T and 0 < ρ 1 . Then we get X ( t ) = c E Q [ ζ | F t ] . From Definition 4 and X ( t ) = E Q [ ζ | F t ] = E g [ ζ | F t ] , we have E g [ c ζ | F t ] = X ( t ) = c E Q [ ζ | F t ] = c E g [ ζ | F t ] .
Case (iii). Let 0 < c < 1 , and consider the following conformable backward stochastic differential equations
X ¯ ( t ) = c ζ 1 + ( 1 c ) ζ 2 + t T ( τ a ) ρ 1 g ( τ , X ¯ ( τ ) , Y ¯ ( τ ) , Z ¯ ( τ ) ) d τ t T ( τ a ) ρ 1 Y ¯ ( τ ) d B ( τ ) t T ( τ a ) ρ 1 R Z ¯ ( τ , s ) N ˜ ( d τ , d s ) ,
X i ( t ) = ζ i + t T ( τ a ) ρ 1 g ( τ , X i ( τ ) , Y i ( τ ) , Z i ( τ ) ) d τ t T ( τ a ) ρ 1 Y i ( τ ) d B ( τ ) t T ( τ a ) ρ 1 R Z i ( τ , s ) N ˜ ( d τ , d s ) ,
where i = 1 , 2 , ζ i L 2 ( Ω , F T , P ) , 0 < c < 1 , a t T and 0 < ρ 1 .
Notice that:
g ( t , c X 1 + ( 1 c ) X 2 , c Y 1 + ( 1 c ) Y 2 , c Z 1 + ( 1 c ) Z 2 ) c g ( t , X 1 , Y 1 , Z 1 ) + ( 1 c ) g ( t , X 2 , Y 2 , Z 2 ) ,
and we see that:
g ( t , c X 1 + ( 1 c ) X 2 , c Y 1 + ( 1 c ) Y 2 , c Z 1 + ( 1 c ) Z 2 ) + f ( t ) = c g ( t , X 1 , Y 1 , Z 1 ) + ( 1 c ) g ( t , X 2 , Y 2 , Z 2 ) ,
where the nonnegative function f(t) depends on X_i, Y_i and Z_i, i = 1, 2. Let X_3(t) = cX_1(t) + (1 − c)X_2(t), Y_3(t) = cY_1(t) + (1 − c)Y_2(t) and Z_3(t) = cZ_1(t) + (1 − c)Z_2(t); then we have
X 3 ( t ) = c ζ 1 + ( 1 c ) ζ 2 + t T ( τ a ) ρ 1 [ c g ( τ , X 1 ( τ ) , Y 1 ( τ ) , Z 1 ( τ ) ) + ( 1 c ) g ( τ , X 2 ( τ ) , Y 2 ( τ ) , Z 2 ( τ ) ) ] d τ t T ( τ a ) ρ 1 [ c Y 1 ( τ ) + ( 1 c ) Y 2 ( τ ) ] d B ( τ ) t T ( τ a ) ρ 1 R [ c Z 1 ( τ , s ) + ( 1 c ) Z 2 ( τ , s ) ] N ˜ ( d τ , d s ) = c ζ 1 + ( 1 c ) ζ 2 + t T ( τ a ) ρ 1 [ g ( τ , X 3 ( τ ) , Y 3 ( τ ) , Z 3 ( τ ) ) + f ( τ , X i ( τ ) , Y i ( τ ) , Z i ( τ ) ) ] d τ t T ( τ a ) ρ 1 Y 3 ( τ ) d B ( τ ) t T ( τ a ) ρ 1 R Z 3 ( τ , s ) N ˜ ( d τ , d s ) ,
where i = 1, 2, 0 < c < 1, ζ_i ∈ L²(Ω, F_T, P), a ≤ t ≤ T and 0 < ρ ≤ 1. Applying the comparison theorem to Equations (10) and (11), we see that X̄(t) ≤ X_3(t). From Definition 4, one has
E g [ c ζ 1 + ( 1 c ) ζ 2 | F t ] = X ¯ ( t ) X 3 ( t ) = c X 1 ( t ) + ( 1 c ) X 2 ( t ) c E g [ ζ 1 | F t ] + ( 1 c ) E g [ ζ 2 | F t ] ,
where i = 1 , 2 , 0 < c < 1 , ζ i L 2 ( Ω , F T , P ) and a t T .
Case (iv). The result follows by the same argument as in case (iii).
The proof is complete. □
Theorem 2.
Suppose g ( t , X ( t ) , Y ( t ) , Z ( t ) ) = α ˜ ( t ) Y ( t ) + R β ˜ ( t ) Z ( t , s ) Q ( t , d s ) η ( t ) , a t T . Then the g-expectation is equivalent to the expectation under a probability measure Q .
Proof. 
Consider the following conformable backward stochastic differential equation
X ( t ) = ζ + t T ( τ a ) ρ 1 g ( τ , X ( τ ) , Y ( τ ) , Z ( τ ) ) d τ t T ( τ a ) ρ 1 Y ( τ ) d B ( τ ) t T ( τ a ) ρ 1 R Z ( τ , s ) N ˜ ( d τ , d s ) ,
where ζ L 2 ( Ω , F T , P ) , a t T and 0 < ρ 1 .
It is clear that the generator g ( t , X ( t ) , Y ( t ) , Z ( t ) ) = α ˜ ( t ) Y ( t ) + R β ˜ ( t ) Z ( t , s ) Q ( t , d s ) η ( t ) satisfies Assumption 1. From Definition 4, we have X ( t ) = E g [ ζ | F t ] .
Let
d Q d P | F t = M ( t ) , M ( a ) = 1 d M ( t ) M ( t ) = α ˜ ( t ) d B ( t ) + R β ˜ ( t ) N ˜ ( d t , d s ) , a t T ,
where α ˜ ( · ) and β ˜ ( · ) are the predictable processes. Using Lemma 3, one has:
X ( t ) = ζ t T ( τ a ) ρ 1 Y ( τ ) d B Q ( τ ) t T ( τ a ) ρ 1 R Z ( τ , s ) N ˜ Q ( d τ , d s ) ,
where ζ L 2 ( Ω , F T , P ) , a t T and 0 < ρ 1 . Hence, we get X ( t ) = E Q [ ζ | F t ] under the probability measure Q . From the uniqueness of the solution, we conclude that E g [ ζ | F t ] = X ( t ) = E Q [ ζ | F t ] . The proof is complete. □
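Theorem 2 can be sanity-checked by simulation in its simplest instance (our own illustration: ρ = 1, a = 0, no jumps, constant α̃(t) ≡ α and β̃ ≡ 0). For ζ = B(T) the linear BSDE with g(t, x, y, z) = αy has the explicit solution X(t) = B(t) + α(T − t), so E_g[B(T)] = αT; on the other side, E^Q[B(T)] = E[M(T)B(T)] with the density M(T) = exp(αB(T) − α²T/2), which also equals αT under the Girsanov shift.

```python
import math
import random

def girsanov_check(alpha=0.5, T=1.0, n_paths=200_000, seed=7):
    """Monte Carlo estimate of E[M(T) * B(T)], the Q-expectation of B(T), where
    M(T) = exp(alpha * B(T) - alpha**2 * T / 2) is the Girsanov density for a
    constant drift alpha. The linear g-expectation gives the same value alpha*T."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_paths):
        b_T = rng.gauss(0.0, math.sqrt(T))                       # terminal Brownian value
        acc += math.exp(alpha * b_T - 0.5 * alpha * alpha * T) * b_T
    return acc / n_paths

est = girsanov_check()   # should be close to alpha * T = 0.5
```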

4. Doob–Meyer Decomposition Theorem

We first give some definitions.
Definition 5.
The process X ( · ) is called a g-martingale if for any a s t T , we have E [ | X ( t ) | 2 ] < and
E g [ X ( t ) | F s ] = X ( s ) .
Definition 6.
The process X ( · ) is called a g-supermartingale if for any a s t T , we have E [ | X ( t ) | 2 ] < and
E g [ X ( t ) | F s ] X ( s ) .
Theorem 3.
Assume that the generator g satisfies Assumption 1. If the process X(·) ∈ S²(ℝ) is a g-supermartingale on [a, T], then there exists a unique pair (Y, Z) ∈ H²(ℝ) × H_N²(ℝ) and a continuous increasing process A(·) such that:
X ( t ) = ζ + t T ( τ a ) ρ 1 g ( τ , X ( τ ) , Y ( τ ) , Z ( τ ) ) d τ + A ( T ) A ( t ) t T ( τ a ) ρ 1 Y ( τ ) d B ( τ ) t T ( τ a ) ρ 1 R Z ( τ , s ) N ˜ ( d τ , d s ) ,
where ζ L 2 ( Ω , F T , P ) , a t T , 0 < ρ 1 , A ( a ) = 0 and E [ | A ( T ) | 2 ] < .
Proof. 
Consider the following conformable backward stochastic differential equation:
X n ( t ) = ζ + t T ( τ a ) ρ 1 g n ( τ , X n ( τ ) , Y n ( τ ) , Z n ( τ ) ) d τ t T ( τ a ) ρ 1 Y n ( τ ) d B ( τ ) t T ( τ a ) ρ 1 R Z n ( τ , s ) N ˜ ( d τ , d s ) ,
where ζ L 2 ( Ω , F T , P ) , a t T , 0 < ρ 1 and n = 1 , 2 , 3 , · · · .
Assume that g n ( t , x , y , z ) = g ( t , x , y , z ) + n ( X ( t ) x ) , so Equation (13) can be written as:
X n ( t ) = ζ + t T ( τ a ) ρ 1 g ( τ , X n ( τ ) , Y n ( τ ) , Z n ( τ ) ) d τ + A n ( T ) A n ( t ) t T ( τ a ) ρ 1 Y n ( τ ) d B ( τ ) t T ( τ a ) ρ 1 R Z n ( τ , s ) N ˜ ( d τ , d s ) ,
where A_n(t) = ∫_a^t n(τ − a)^{ρ−1}(X(τ) − X_n(τ)) dτ. From Lemma 5 and the comparison theorem, the sequence (X_n(t))_{n∈ℕ⁺} is increasing and converges monotonically. Hence the sequence (A_n(t))_{n∈ℕ⁺} is continuous and increasing. From Equation (14), we have
A n ( T ) = X n ( a ) ζ a T ( τ a ) ρ 1 g ( τ , X n ( τ ) , Y n ( τ ) , Z n ( τ ) ) d τ + a T ( τ a ) ρ 1 Y n ( τ ) d B ( τ ) + a T ( τ a ) ρ 1 R Z n ( τ , s ) N ˜ ( d τ , d s ) | X n ( a ) | + | ζ | + k a T ( τ a ) ρ 1 [ | X n ( τ ) | + | Y n ( τ ) | + R | Z n ( τ , s ) | Q ( τ , d s ) η ( τ ) ] d τ + a T ( τ a ) ρ 1 | Y n ( τ ) | d B ( τ ) + a T ( τ a ) ρ 1 R | Z n ( τ , s ) | N ˜ ( d τ , d s ) ,
where k depends on K and δ x , y , z , z ( t , s ) . Then,
E [ | A n ( T ) | 2 ] E [ | X n ( a ) | 2 ] + E [ ζ 2 ] + k 1 E [ sup a t T | X n ( t ) | 2 ] + k 2 E a T ( τ a ) 2 ( ρ 1 ) | Y n ( τ ) | 2 + R | Z n ( τ , s ) | 2 Q ( τ , d s ) η ( τ ) d τ ,
where k_1 depends on T, ρ and a, and k_2 is a constant. Hence, there exists a constant l_1 such that:
E [ | A n ( T ) | 2 ] l 1 + k 2 E a T ( τ a ) 2 ( ρ 1 ) | Y n ( τ ) | 2 + R | Z n ( τ , s ) | 2 Q ( τ , d s ) η ( τ ) d τ ,
where a t T , 0 < ρ 1 and n = 1 , 2 , 3 , · · · . Apply Lemma 2 to | X n ( t ) | 2 , and we have:
| X n ( t ) | 2 = ζ 2 + 2 t T ( τ a ) ρ 1 | X n ( τ ) | g ( τ , X n ( τ ) , Y n ( τ ) , Z n ( τ ) ) d τ + 2 t T | X n ( τ ) | d A n ( τ ) 2 t T ( τ a ) ρ 1 | X n ( τ ) | Y n ( τ ) d B ( τ ) 2 t T ( τ a ) ρ 1 | X n ( τ ) | R Z n ( τ , s ) N ˜ ( d τ , d s ) t T ( d X n ( τ ) ) 2 ,
where a t T , 0 < ρ 1 and n = 1 , 2 , 3 , · · · . According to Lemma 4, Assumption 1 and Lemma 1, one has:
E [ | X n ( t ) | 2 ] E [ ζ 2 ] + 2 E t T ( τ a ) ρ 1 | X n ( τ ) | g ( τ , X n ( τ ) , Y n ( τ ) , Z n ( τ ) ) d τ + 2 E t T | X n ( τ ) | d A n ( τ ) E t T ( τ a ) 2 ( ρ 1 ) | Y n ( τ ) | 2 d τ E t T R ( τ a ) 2 ( ρ 1 ) | Z n ( τ , s ) | 2 Q ( τ , d s ) η ( τ ) d τ E [ ζ 2 ] + 2 K E t T ( τ a ) ρ 1 | X n ( τ ) | 2 d τ + 2 K E t T ( τ a ) ρ 1 | X n ( τ ) | Y n ( τ ) d τ + 2 k E t T R ( τ a ) ρ 1 | X n ( τ ) | Z n ( τ , s ) Q ( τ , d s ) η ( τ ) d τ + 2 E t T | X n ( τ ) | d A n ( τ ) E t T ( τ a ) 2 ( ρ 1 ) | Y n ( τ ) | 2 d τ E t T R ( τ a ) 2 ( ρ 1 ) | Z n ( τ , s ) | 2 Q ( τ , d s ) η ( τ ) d τ E [ ζ 2 ] + 2 K E t T ( τ a ) ρ 1 | X n ( τ ) | 2 d τ + ( γ K 1 ) E t T ( τ a ) 2 ( ρ 1 ) | Y n ( τ ) | 2 d τ + 1 γ ( K + k + 1 ) E t T | X n ( τ ) | 2 d τ + γ E [ | A n ( T ) | 2 ] + ( γ K 1 ) E t T R ( τ a ) 2 ( ρ 1 ) | Z n ( τ , s ) | 2 Q ( τ , d s ) η ( τ ) d τ ,
namely,
E a T ( τ a ) 2 ( ρ 1 ) | Y n ( τ ) | 2 + R | Z n ( τ , s ) | 2 Q ( τ , d s ) η ( τ ) d τ l 2 + γ 1 γ K E [ | A n ( T ) | 2 ] ,
where l 2 is a constant and γ 1 K . Combining Equations (15) and (16), we conclude that there exists a constant C, independent of n, such that E [ | A n ( T ) | 2 ] C and
E a T ( τ a ) 2 ( ρ 1 ) | Y n ( τ ) | 2 d τ + E a T R ( τ a ) 2 ( ρ 1 ) | Z n ( τ , s ) | 2 Q ( τ , d s ) η ( τ ) d τ C ,
where a t T , 0 < ρ 1 and n = 1 , 2 , 3 , · · · . In addition, we also get
E a T ( τ a ) 2 ( ρ 1 ) g ( τ , X n ( τ ) , Y n ( τ ) , Z n ( τ ) ) d τ C .
In other words, the sequences $(X_n(t))_{n \in \mathbb{N}^+}$, $(Y_n(t))_{n \in \mathbb{N}^+}$ and $(Z_n(t,\cdot))_{n \in \mathbb{N}^+}$ converge weakly in their respective spaces, and then, for every stopping time $\varsigma$, we have
$$
\lim_{n\to\infty}\int_a^{\varsigma} (\tau-a)^{\rho-1}\,X_n(\tau)\,d\tau = \int_a^{\varsigma} (\tau-a)^{\rho-1}\,X(\tau)\,d\tau,
$$
$$
\lim_{n\to\infty}\int_a^{\varsigma} (\tau-a)^{\rho-1}\,Y_n(\tau)\,dB(\tau) = \int_a^{\varsigma} (\tau-a)^{\rho-1}\,Y(\tau)\,dB(\tau),
$$
and
$$
\lim_{n\to\infty}\int_a^{\varsigma} (\tau-a)^{\rho-1}\int_{\mathbb{R}} Z_n(\tau,s)\,\widetilde{N}(d\tau,ds) = \int_a^{\varsigma} (\tau-a)^{\rho-1}\int_{\mathbb{R}} Z(\tau,s)\,\widetilde{N}(d\tau,ds),
$$
and hence we have:
$$
X(t) = \zeta + \int_t^T (\tau-a)^{\rho-1}\,g\big(\tau,X(\tau),Y(\tau),Z(\tau)\big)\,d\tau + A(T) - A(t) - \int_t^T (\tau-a)^{\rho-1}\,Y(\tau)\,dB(\tau) - \int_t^T (\tau-a)^{\rho-1}\int_{\mathbb{R}} Z(\tau,s)\,\widetilde{N}(d\tau,ds),
$$
where the continuous increasing process $A(\cdot)$ satisfies $A(a) = 0$ and $\mathbb{E}\big[|A(T)|^2\big] < \infty$. □

Author Contributions

The contributions of all authors (M.L., M.F., J.-R.W. and D.O.) are equal. All the main results were developed together. All authors have read and agreed to the published version of the manuscript.

Funding

This work is partially supported by the Training Object of High Level and Innovative Talents of Guizhou Province ((2016)4006), Major Research Project of Innovative Group in Guizhou Education Department ([2018]012), Guizhou Data Driven Modeling Learning and Optimization Innovation Team ([2020]5016), the Slovak Research and Development Agency under the contract No. APVV-18-0308, and the Slovak Grant Agency VEGA No. 1/0358/20 and No. 2/0127/20.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors are grateful to the referees for their careful reading of the manuscript and their valuable comments, and also thank the editor.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Choquet, G. Theory of capacities. Ann. L'Institut Fourier 1954, 5, 131–295.
  2. Coquet, F.; Hu, Y.; Mémin, J.; Peng, S.G. Filtration-consistent nonlinear expectations and related g-expectations. Probab. Theory Relat. Fields 2002, 123, 1–27.
  3. Peng, S.G. Nonlinear Expectations, Nonlinear Evaluations and Risk Measures. In Stochastic Methods in Finance; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 2004; pp. 165–253.
  4. Royer, M. Backward stochastic differential equations with jumps and related non-linear expectations. Stoch. Process. Their Appl. 2006, 116, 1358–1376.
  5. Li, H.W.; Peng, S.G. Reflected backward stochastic differential equation driven by G-Brownian motion with an upper obstacle. Stoch. Process. Their Appl. 2020, 130, 6556–6579.
  6. Zheng, S.Q.; Zhang, L.D.; Feng, L.C. On the backward stochastic differential equation with generator f(y)|z|2. J. Math. Anal. Appl. 2021, 500, 125102.
  7. Nam, K. Locally Lipschitz BSDE driven by a continuous martingale: A path-derivative approach. Stoch. Process. Their Appl. 2021, 141, 376–411.
  8. Moon, J. The risk-sensitive maximum principle for controlled forward-backward stochastic differential equations. Automatica 2020, 120, 10906.
  9. Tangpi, L. Concentration of dynamic risk measures in a Brownian filtration. Stoch. Process. Their Appl. 2018, 129, 1477–1491.
  10. Long, T.; Lapitckii, A.; Guenther, M. A multi-step scheme based on cubic spline for solving backward stochastic differential equations. Appl. Numer. Math. 2020, 150, 117–138.
  11. Chen, X.; Ye, W.J. A study of backward stochastic differential equation on a Riemannian manifold. Electron. J. Probab. 2021, 26, 1–31.
  12. Jing, Y.Y.; Li, Z. Averaging principle for backward stochastic differential equations. Discret. Dyn. Nat. Soc. 2021, 2, 1–10.
  13. Zheng, G.Q. Local wellposedness of coupled backward stochastic differential equations driven by G-Brownian motions. J. Math. Anal. Appl. 2022, 506, 125540.
  14. Luo, P.; Olivier, M.P.; Ludovic, T.P. Strong solutions of forward-backward stochastic differential equations with measurable coefficients. Stoch. Process. Their Appl. 2022, 144, 1–22.
  15. Qiu, W.Z.; Fečkan, M.; O'Regan, D.; Wang, J. Convergence analysis for iterative learning control of conformable impulsive differential equations. Bull. Iran. Math. Soc. 2022, 48, 193–212.
  16. Ding, Y.L.; O'Regan, D.; Wang, J. Stability analysis for conformable non-instantaneous impulsive differential equations. Bull. Iran. Math. Soc. 2021.
  17. Ding, Y.L.; Fečkan, M.; Wang, J. Conformable linear and nonlinear non-instantaneous impulsive differential equations. Electron. J. Differ. Equ. 2020, 2020, 1–19.
  18. Li, M.M.; Wang, J.; O'Regan, D. Existence and Ulam's stability for conformable fractional differential equations with constant coefficients. Bull. Malays. Math. Sci. Soc. 2019, 40, 1791–1812.
  19. Qiu, W.Z.; Wang, J.; O'Regan, D. Existence and Ulam stability of solutions for conformable impulsive differential equations. Bull. Iran. Math. Soc. 2020, 46, 1613–1637.
  20. Xiao, G.L.; Wang, J.; O'Regan, D. Existence and stability of solutions to neutral conformable stochastic functional differential equations. Qual. Theory Dyn. Syst. 2022, 21, 7.
  21. Luo, M.; Wang, J.; O'Regan, D. A class of conformable backward stochastic differential equations with jumps. Miskolc Math. Notes 2021. code: MMN-3766.
  22. Delong, Ł. Backward Stochastic Differential Equations with Jumps and Their Actuarial and Financial Applications; Springer: London, UK, 2013; 288p.
  23. Peng, S.G. Monotonic limit theorem of BSDE and nonlinear decomposition theorem of Doob-Meyer's type. Probab. Theory Relat. Fields 1999, 113, 473–499.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
