Article

Weak Approximations of the Wright–Fisher Process

by Vigirdas Mackevičius and Gabrielė Mongirdaitė *,†
Faculty of Mathematics and Informatics, Institute of Mathematics, Vilnius University, Naugarduko 24, 03225 Vilnius, Lithuania
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2022, 10(1), 125; https://doi.org/10.3390/math10010125
Submission received: 8 December 2021 / Revised: 28 December 2021 / Accepted: 30 December 2021 / Published: 1 January 2022
(This article belongs to the Section Probability and Statistics)

Abstract: In this paper, we construct first- and second-order weak split-step approximations for the solutions of the Wright–Fisher equation. The discretization schemes use the generation of, respectively, two- and three-valued random variables at each discretization step. The accuracy of the constructed approximations is illustrated by several simulation examples.

1. Introduction

We are interested in weak first- and second-order approximations for the Wright–Fisher equation

X_t^x = x + \int_0^t (a - b X_s^x)\,ds + \sigma \int_0^t \sqrt{X_s^x (1 - X_s^x)}\,dB_s, \quad x \in [0,1],   (1)

with parameters 0 ≤ a ≤ b and σ > 0. The Wright–Fisher (WF) process (a solution of Equation (1)) is well defined in [0, 1] and models the gene frequencies in a population. The main problem in developing numerical methods for "square-root" diffusions is that the diffusion coefficient has unbounded derivatives near the "singular" points (in our case, 0 and 1), and therefore standard methods (see, e.g., Milstein and Tretyakov [1]) are not applicable: discretization schemes involving (explicitly or implicitly) the derivatives of the coefficients typically lose their accuracy near the singular points, especially for large σ.
Alfonsi [2] (Chap. 6) constructed a weak second-order approximation of the WF process by using its connection with the Cox–Ingersoll–Ross (CIR) [3] process and the approximations of the latter constructed earlier (Alfonsi [4]). The main result of this paper is a direct construction of first- and second-order weak split-step approximations of the WF process by discrete random variables. We believe that, in comparison with the numerical scheme of Alfonsi [2] (Prop. 6.1.13, Algs. 6.1 and 6.2), our algorithm is much simpler and easier to implement. In our construction, we follow some ideas of Lileika and Mackevičius [5,6]. However, we had to overcome a serious additional challenge (in comparison with the CIR or CKLS processes): the two "singular" points, 0 and 1, of the diffusion coefficient make it essentially more difficult to ensure that the approximations take values in [0, 1] (instead of [0, +∞) as in [5,6]).
The paper is organized as follows. In Section 2, we recall some definitions and results. In Section 3 and Section 4, we construct first- and second-order approximations for the WF equation by two- and three-valued discrete random variables, respectively. The main results of these sections are presented as Theorems 4 and 5. We illustrate the accuracy of our approximations by several simulation examples. In Section 5, we prove an auxiliary result on the smoothness of solutions of the corresponding backward Kolmogorov equation. Tedious technical calculations were performed using Maple and Python.

2. Preliminaries

In this section, we give some known definitions adapted to our context of the WF process defined by Equation (1).
Having a fixed time interval [0, T], consider the equidistant time discretization Δ_h = {ih, i = 0, 1, …, [T/h]}, h ∈ (0, T], where [a] is the integer part of a. By a discretization scheme (or approximation) of Equation (1) we mean any family of discrete-time homogeneous Markov chains X̂^h = {X̂^h(x,t), x ∈ [0,1], t ∈ Δ_h} in [0, 1] with initial values X̂^h(x,0) = x and one-step transition probabilities p_h(x, dz), x ∈ [0,1]. For convenience, we only consider steps h = T/n, n ∈ ℕ. For brevity, we sometimes write X̂_t^x or X̂(x,t) instead of X̂^h(x,t). Note that, because of the Markov property, a one-step approximation X̂_h^x of the scheme completely defines the distribution of the whole discretization scheme X̂_t^x, so we only need to construct the former. Therefore, we abbreviate one-step approximations to approximations. As usual, ℕ and ℝ are the sets of natural and real numbers, ℕ̄ := ℕ ∪ {0}, and ℝ₊ := [0, ∞).
We will write g(x,h) = O(h^n) if, for some C > 0 and h₀ > 0,

|g(x,h)| \le C h^n, \quad x \in [0,1], \quad 0 < h \le h_0.
Definition 1
(cf. [4], Def. 1.3; [6], Def. 1). A discretization scheme X̂^h is a weak νth-order approximation of the solution X_t^x, t ∈ [0,T], of Equation (1) if for every f ∈ C^∞[0,1],

|E f(X_T^x) - E f(\hat X_T^x)| = O(h^\nu).
Definition 2
(cf. [4], Def. 1.8; [6], Defs. 2, 3). The νth-order remainder of a discretization scheme X̂_t^x for X_t^x is the operator R_ν^h : C^∞[0,1] → C[0,1] defined by

R_\nu^h f(x) := E f(\hat X_h^x) - \Big( f(x) + \sum_{k=1}^{\nu} \frac{A^k f(x)}{k!}\, h^k \Big), \quad x \in [0,1],\ h > 0,

where A is the generator of X_t^x,

A f(x) = (a - bx) f'(x) + \tfrac{1}{2} \sigma^2 x(1-x) f''(x).

A discretization scheme X̂_t^x is a potential νth-order weak approximation of Equation (1) if

R_\nu^h f(x) = O(h^{\nu+1})

for all f ∈ C^∞[0,1] and x ∈ [0,1].
The following two theorems ensure that a potential νth-order weak approximation is in fact a νth-order weak approximation (in the sense of Definition 1). Note that the requirement of uniformly bounded moments (see, e.g., [4]) is obviously satisfied by our approximations since they take values in [0, 1].
Theorem 1
(see Theorem 1.19 of [4]). Let X̂^h be a discretization scheme with transition probabilities p_h(x, dz) on [0,1] that starts from X̂_0^x = x ∈ [0,1]. We assume that:
1. the scheme is a potential weak νth-order discretization scheme for the operator A;
2. f ∈ C^∞[0,1] is a function such that u(t,x) = E f(X_{T-t}^x), defined on [0,T] × [0,1], solves ∂_t u(t,x) = -A u(t,x) for (t,x) ∈ [0,T] × [0,1].
Then |E f(X̂_T^x) - E f(X_T^x)| = O(h^ν).
Theorem 2
(see Theorem 6.1.12 of [2]). Let f ∈ C^∞[0,1]. Then

\tilde u(t,x) := E f(X_t^x), \quad (t,x) \in \mathbb{R}_+ \times [0,1],

is a C^∞ function that solves

\partial_t \tilde u(t,x) = A \tilde u(t,x).   (2)
We split Equation (1) into the deterministic part

dD_t^x = (a - b D_t^x)\,dt, \quad D_0^x = x \in [0,1],   (3)

and the stochastic part

dS_t^x = \sigma \sqrt{S_t^x (1 - S_t^x)}\,dB_t, \quad S_0^x = x \in [0,1].   (4)

The solution of the deterministic part is positive for all (x,t) ∈ [0,1] × (0,T], namely:

D_t^x = D(x,t) = \begin{cases} x e^{-bt} + \frac{a}{b}\,(1 - e^{-bt}), & 0 \le a \le b,\ b \ne 0, \\ x, & a = b = 0. \end{cases}   (5)
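For illustration, the deterministic flow (5) is straightforward to implement; the following minimal Python sketch (the function name and signature are ours) evaluates D(x, t) under the standing assumption 0 ≤ a ≤ b:

```python
import math

def D(x: float, t: float, a: float, b: float) -> float:
    """Exact solution at time t of dD = (a - b*D) dt, D_0 = x (Equation (5)).

    Assumes 0 <= a <= b, so the flow maps [0, 1] into itself.
    """
    if b == 0.0:
        # a = b = 0: the drift vanishes and the state stays constant
        return x
    e = math.exp(-b * t)
    return x * e + (a / b) * (1.0 - e)
```

Note that for b ≠ 0 the flow is a contraction toward the equilibrium a/b ∈ [0, 1], which is why it preserves the interval.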
The solution of the stochastic part is not explicitly known. However, suppose that Ŝ_t^x is a discretization scheme for the stochastic part. We define the first-order composition X̂_t^x of the latter with the solution of the deterministic part as a Markov chain whose one-step transition probability equals the distribution of the random variable

\hat X^h(x,h) := D(\hat S(x,h), h).   (6)

Similarly, the second-order composition is defined by

\hat X^h(x,h) := D\big(\hat S\big(D(x, \tfrac{h}{2}), h\big), \tfrac{h}{2}\big).   (7)
Theorem 3
(see [4], Thm. 1.17). Let Ŝ_t^x be a potential first- or second-order approximation of the stochastic part of the WF equation. Then, compositions (6) and (7) define, respectively, a first- or second-order approximation X̂_t^x of the WF Equation (1).
From this theorem, it follows that to construct a first- or second-order weak approximation, we only need to construct a first- or second-order approximation of the stochastic part, respectively.
Remark 1.
For various applications, we may be interested in similar processes with values in [α, β] satisfying the equation

d\tilde X_t = (\tilde a - b \tilde X_t)\,dt + \sigma \sqrt{(\tilde X_t - \alpha)(\beta - \tilde X_t)}\,dB_t, \quad \tilde X_0 \in [\alpha, \beta],   (8)

which is well defined when bα ≤ ã ≤ bβ. A popular choice is the Jacobi process with α = -1 and β = 1. Process (8) can be obtained from the WF process by the affine transformation X̃_t = α + (β - α)X_t (with ã = a(β - α) + αb). Clearly, by the same transformation, we can get weak approximations for (8) from weak approximations for the WF process.

3. First-Order Weak Approximation of Wright–Fisher Equation

3.1. Approximation of the Stochastic Part

Let us construct an approximation for the stochastic part of the WF equation, that is, for the solution S_t^x of Equation (1) with a = b = 0. A two-valued discrete random variable Ŝ_h^x taking values x₁, x₂ ∈ [0,1] with probabilities p₁, p₂ is a first-order weak approximation if (see [5] and the references therein)

p_1 + p_2 = 1,   (9)
E \hat S_h^x = x_1 p_1 + x_2 p_2 = m_1 := E S_h^x = x,   (10)
E (\hat S_h^x)^2 = x_1^2 p_1 + x_2^2 p_2 = m_2 + O(h^2),   (11)
E (\hat S_h^x - x)^3 = (x_1 - x)^3 p_1 + (x_2 - x)^3 p_2 = O(h^2),   (12)
E (\hat S_h^x - x)^4 = (x_1 - x)^4 p_1 + (x_2 - x)^4 p_2 = O(h^2),   (13)

where the second moment m₂ = E(S_h^x)² can be calculated by Lemma 6 with a = b = 0:

m_2 = m_2(x,h) = x^2 e^{-\sigma^2 h} + x (1 - e^{-\sigma^2 h})   (14)
 = x^2 + x(1-x)\sigma^2 h + O(h^2), \quad x \in [0,1].   (15)
One of the solutions of the system of Equations (9)–(11) is (see [5])

x_{1,2} = \frac{m_2}{m_1} \mp \frac{\sqrt{m_2 (m_2 - m_1^2)}}{m_1}, \qquad p_{1,2} = \frac{m_1}{2 x_{1,2}}.   (16)

Therefore, in our case, we get

x_{1,2} = x e^{-\sigma^2 h} + 1 - e^{-\sigma^2 h} \mp \sqrt{\big(x e^{-\sigma^2 h} + 1 - e^{-\sigma^2 h}\big)(1-x)\big(1 - e^{-\sigma^2 h}\big)}.   (17)
Since 1 - e^{-σ²h} = σ²h + O(h²), to simplify the expressions, we may try to replace 1 - e^{-σ²h} by σ²h and, instead of (17), use

x_{1,2} = x_{1,2}(x,h) = x + (1-x)\sigma^2 h \mp \sqrt{\big(x + (1-x)\sigma^2 h\big)(1-x)\sigma^2 h}.   (18)

In Lemma 1, we will check that after this replacement, x_{1,2} and p_{1,2} still satisfy (9)–(13). Unfortunately, for values of x near 1, the values of x₂ are slightly greater than 1 (as are those defined by (17)), which is unacceptable. We overcome this problem by using the symmetry of the solution of the stochastic part with respect to the point 1/2; to be precise, S_t^x =ᵈ 1 - S_t^{1-x} for all x ∈ [0,1] (=ᵈ means equality in distribution). Therefore, in the interval [0, 1/2], we can use the values x_{1,2} defined by (18), whereas in the interval (1/2, 1], we use the values corresponding to the process 1 - S_t^{1-x}, that is,

\tilde x_{1,2} = \tilde x_{1,2}(x,h) := 1 - x_{1,2}(1-x, h) = x - x\sigma^2 h \pm \sqrt{(1 - x + x\sigma^2 h)\, x \sigma^2 h},   (19)

with probabilities p̃_{1,2} = (1-x) / (2 x_{1,2}(1-x, h)). Thus, we obtain the acceptable (i.e., with values in [0, 1]) approximation Ŝ_h^x taking the values

\hat x_{1,2} := \begin{cases} x_{1,2}(x,h) & \text{with probabilities } p_{1,2} = \dfrac{x}{2 x_{1,2}(x,h)}, & x \in [0, 1/2], \\[4pt] 1 - x_{1,2}(1-x,h) & \text{with probabilities } \tilde p_{1,2} = \dfrac{1-x}{2 x_{1,2}(1-x,h)}, & x \in (1/2, 1]. \end{cases}   (20)
Lemma 1.
The values x̂_{1,2} defined by (20) satisfy conditions (9)–(13), and x̂_{1,2} ∈ [0,1].
Proof. 
We first check that x_{1,2} defined by (18) take values in the interval [0, 1] when x ∈ [0, 1/2] and h is sufficiently small (0 < h ≤ h₀ with h₀ > 0 independent of x):

x_1 = x + (1-x)\sigma^2 h - \sqrt{(x + (1-x)\sigma^2 h)(1-x)\sigma^2 h} \ge 0
 \iff x + (1-x)\sigma^2 h \ge \sqrt{(x + (1-x)\sigma^2 h)(1-x)\sigma^2 h}
 \iff x + (1-x)\sigma^2 h \ge (1-x)\sigma^2 h \iff x \ge 0;

x_2 = x + (1-x)\sigma^2 h + \sqrt{(x + (1-x)\sigma^2 h)(1-x)\sigma^2 h} \le 1
 \iff \sqrt{(x + (1-x)\sigma^2 h)(1-x)\sigma^2 h} \le (1-x)(1 - \sigma^2 h)
 \iff x\sigma^2 h + (1-x)(\sigma^2 h)^2 \le (1-x)(1 - \sigma^2 h)^2
 \iff x\sigma^2 h + (1-x)(\sigma^2 h)^2 \le (1-x)\big(1 - 2\sigma^2 h + (\sigma^2 h)^2\big)
 \iff x\sigma^2 h \le (1-x)(1 - 2\sigma^2 h)
 \iff x\sigma^2 h + 1 - x - 2\sigma^2 h \ge 0.

If x ∈ [0, 1/2], then

x\sigma^2 h + 1 - x - 2\sigma^2 h \ge \tfrac{1}{2} - 2\sigma^2 h \ge 0, \quad \text{where } 0 < h \le h_0 := \frac{1}{4\sigma^2}.

Thus 0 ≤ x₁ < x₂ ≤ 1 for x ∈ [0, 1/2] and 0 < h ≤ h₀ = 1/(4σ²). So, if x ∈ (1/2, 1], then 1 - x ∈ [0, 1/2), and according to (19), instead of x_{1,2}, we can take x̃_{1,2} = 1 - x_{1,2}(1-x, h) for 0 < h ≤ h₀. As we have just checked, 0 ≤ x_{1,2}(1-x, h) ≤ 1, that is, 0 ≤ x̃_{1,2} ≤ 1 for x ∈ (1/2, 1] and 0 < h ≤ h₀.
Now we check conditions (9)–(13) for x_{1,2}. Since x₁ + x₂ = 2(x + (1-x)σ²h) and x₁x₂ = (x + (1-x)σ²h)² - (x + (1-x)σ²h)(1-x)σ²h = x² + x(1-x)σ²h, we have

p_1 + p_2 = \frac{x}{2x_1} + \frac{x}{2x_2} = \frac{x(x_1 + x_2)}{2 x_1 x_2} = \frac{2x(x + (1-x)\sigma^2 h)}{2(x^2 + x(1-x)\sigma^2 h)} = 1;
x_1 p_1 + x_2 p_2 = x_1 \frac{x}{2x_1} + x_2 \frac{x}{2x_2} = x;
x_1^2 p_1 + x_2^2 p_2 = \frac{x}{2}(x_1 + x_2) = x(x + (1-x)\sigma^2 h) = x^2 + x(1-x)\sigma^2 h = m_2 + O(h^2);
(x_1 - x)^3 p_1 + (x_2 - x)^3 p_2 = 2x(1-x)^2 (\sigma^2 h)^2 = O(h^2);
(x_1 - x)^4 p_1 + (x_2 - x)^4 p_2 = x(1-x)^2 \big(x + 4(1-x)\sigma^2 h\big)(\sigma^2 h)^2 = O(h^2).

The last two equalities were obtained by using the Python SymPy package. The conditions for x̃_{1,2} follow automatically from the symmetry. □
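The identities used in the proof are easy to re-verify symbolically; a small SymPy sketch of our own check (with z standing for σ²h):

```python
import sympy as sp

x, z = sp.symbols('x z', positive=True)   # z stands for sigma^2 * h

# two-point approximation (18) with probabilities p_{1,2} = x / (2 x_{1,2})
m = x + (1 - x) * z
s = sp.sqrt(m * (1 - x) * z)
x1, x2 = m - s, m + s
p1, p2 = x / (2 * x1), x / (2 * x2)

identities = [
    (p1 + p2, sp.Integer(1)),                                   # (9)
    (x1*p1 + x2*p2, x),                                         # (10)
    (x1**2*p1 + x2**2*p2, x**2 + x*(1 - x)*z),                  # (11)
    ((x1 - x)**3*p1 + (x2 - x)**3*p2, 2*x*(1 - x)**2*z**2),     # (12)
    ((x1 - x)**4*p1 + (x2 - x)**4*p2,
     x*(1 - x)**2*(x + 4*(1 - x)*z)*z**2),                      # (13)
]
residuals = [sp.simplify(lhs - rhs) for lhs, rhs in identities]
```

All residuals simplify to zero, confirming the displayed computations.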
For the initial Equation (1), we obtain an approximation X̂_h^x by the "split-step" procedure defined by (6):

\hat X_h^x := \hat S_h^x e^{-bh} + \frac{a}{b}\,(1 - e^{-bh}).   (22)
Now we can state our first main result.
Theorem 4.
Let X̂_t^x be the discretization scheme defined by the one-step approximation (22). Then X̂_t^x is a first-order weak approximation of Equation (1) for functions f ∈ C^∞[0,1].

3.2. Algorithm

In this section, we provide an algorithm for calculating X̂_{(i+1)h} given X̂_{ih} = x at each simulation step i:
1.
Draw a uniform random variable U on the interval [0, 1].
2.
If x ≤ 1/2, calculate x_{1,2} according to (18) and set p_{1,2} := x/(2x_{1,2}); else, calculate x_{1,2} according to (18) with x := 1 - x, set p_{1,2} := x/(2x_{1,2}) (with the new value of x), and then set x_{1,2} := 1 - x_{1,2}.
3.
If U < p₁, then Ŝ := x₁; else Ŝ := x₂.
4.
Calculate (see (6) and (22))

\hat X_{(i+1)h} = D(\hat S, h) = \hat S e^{-bh} + \frac{a}{b}\,(1 - e^{-bh}).
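The steps above can be sketched in Python as follows (a minimal illustration of our own; the function name is ours, and the step is assumed to satisfy h ≤ 1/(4σ²), as required by Lemma 1):

```python
import math
import random

def wf_step1(x, h, a, b, sigma, rng):
    """One step of the first-order split-step scheme of Section 3.2."""
    z = sigma**2 * h
    # use the symmetry S^x =d 1 - S^{1-x} for x > 1/2 so values stay in [0, 1]
    flip = x > 0.5
    y = 1.0 - x if flip else x
    if y > 0.0:
        m = y + (1.0 - y) * z
        r = math.sqrt(m * (1.0 - y) * z)
        y1, y2 = m - r, m + r                 # the two values (18)
        p1 = y / (2.0 * y1)                   # probability of the smaller value
        y = y1 if rng.random() < p1 else y2
    s = 1.0 - y if flip else y                # S-hat
    if b == 0.0:                              # deterministic part D(., h), see (5)
        return s
    e = math.exp(-b * h)
    return s * e + (a / b) * (1.0 - e)

# Monte Carlo sanity check: the scheme reproduces E X_T exactly in the mean,
# since the two-point variable matches the mean and D is affine
rng = random.Random(12345)
a, b, sigma, x0, T, n = 0.2, 0.5, 0.5, 0.3, 1.0, 10
h = T / n
total = 0.0
for _ in range(4000):
    x = x0
    for _ in range(n):
        x = wf_step1(x, h, a, b, sigma, rng)
    total += x
mean = total / 4000
exact = x0 * math.exp(-b * T) + (a / b) * (1 - math.exp(-b * T))  # m1(x0, T)
```

The simulated mean should agree with the exact first moment up to Monte Carlo noise.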

3.3. Simulation Examples

We illustrate our approximation for the test functions f(x) = x⁵ and f(x) = e^{-x}. Since we do not explicitly know the moments E e^{-X_t^x}, we use the approximate equality e^{-x} ≈ 1 - x + x²/2 - x³/6 + x⁴/24 - x⁵/120. We have chosen the parameters of the WF equation so that the fifth moment of X_t^x is nonmonotonic as a function of t, in order to see how the approximated fifth moment "follows" the bends of the true one as t varies. In Figure 1, Figure 2 and Figure 3, we compare E f(X̂_t^x) and E f(X_t^x) as functions of t (left plots) and as functions of the discretization step h (right plots) in terms of the relative error |1 - E f(X̂_t^x) / E f(X_t^x)|. As expected, the approximations agree with the exact values pretty well. Note the impressive match between the approximated and true values of E e^{-X_t^x} in Figure 3, even for rather large discretization steps h.

4. Second-Order Weak Approximation of Wright–Fisher Equation

4.1. Approximation of the Stochastic Part

Let Ŝ_h^x be any discretization scheme. Applying Taylor's formula to f ∈ C^∞[0,1], we have

E f(\hat S_h^x) = f(x) + f'(x)\,E(\hat S_h^x - x) + \frac{f''(x)}{2}\,E(\hat S_h^x - x)^2 + \frac{f'''(x)}{6}\,E(\hat S_h^x - x)^3 + \frac{f^{(4)}(x)}{4!}\,E(\hat S_h^x - x)^4 + \frac{f^{(5)}(x)}{5!}\,E(\hat S_h^x - x)^5 + \frac{1}{5!}\,E \int_x^{\hat S_h^x} f^{(6)}(s)\,(\hat S_h^x - s)^5\,ds.
The generator A₀ of the stochastic part and its square are

A_0 f(x) = \tfrac{1}{2}\sigma^2 x(1-x) f''(x),
A_0^2 f(x) = -\tfrac{1}{2}\sigma^4 x(1-x) f''(x) + \tfrac{1}{2}\sigma^4 x(1-x)(1-2x) f'''(x) + \tfrac{1}{4}\sigma^4 x^2(1-x)^2 f^{(4)}(x).
Thus, the second-order remainder of the discretization scheme Ŝ_h^x is

R_2^h f(x) = E f(\hat S_h^x) - \Big( f(x) + A_0 f(x)\,h + A_0^2 f(x)\,\frac{h^2}{2} \Big)
 = f'(x)\,E(\hat S_h^x - x) + \frac{f''(x)}{2} \Big( E(\hat S_h^x - x)^2 - \sigma^2 x(1-x) h \big(1 - \tfrac{1}{2}\sigma^2 h\big) \Big) + \frac{f'''(x)}{6} \Big( E(\hat S_h^x - x)^3 - \tfrac{3}{2}\sigma^4 h^2 x(1-x)(1-2x) \Big) + \frac{f^{(4)}(x)}{4!} \Big( E(\hat S_h^x - x)^4 - 3\sigma^4 (x(1-x)h)^2 \Big) + \frac{f^{(5)}(x)}{5!}\,E(\hat S_h^x - x)^5 + r_2(x,h), \quad x \in [0,1],\ h > 0,

where

|r_2(x,h)| = \Big| \frac{1}{5!}\,E \int_x^{\hat S_h^x} f^{(6)}(s)(\hat S_h^x - s)^5\,ds \Big| \le \frac{1}{6!} \max_{s \in [0,1]} |f^{(6)}(s)|\; E(\hat S_h^x - x)^6.
This expression shows that Ŝ_h^x is a potential second-order approximation of the stochastic part (4) if

E(\hat S_h^x - x) = O(h^3),   (23)
E(\hat S_h^x - x)^2 = \sigma^2 x(1-x) h \big(1 - \tfrac{1}{2}\sigma^2 h\big) + O(h^3),   (24)
E(\hat S_h^x - x)^3 = \tfrac{3}{2}\sigma^4 h^2 x(1-x)(1-2x) + O(h^3),   (25)
E(\hat S_h^x - x)^4 = 3\sigma^4 (x(1-x)h)^2 + O(h^3),   (26)
E(\hat S_h^x - x)^5 = O(h^3),   (27)
E(\hat S_h^x - x)^6 = O(h^3).   (28)
Let us denote z = σ²h for brevity. Converting the central moments of Ŝ_h^x to noncentral ones, from (23)–(28) we get

E(\hat S_h^x)^i = \hat m_i + O(h^3), \quad i = 1, \dots, 6,   (29)

where

\hat m_1 = x,
\hat m_2 = x^2 + z x(1-x)\big(1 - \tfrac{1}{2}z\big),
\hat m_3 = x^3 + \tfrac{3}{2} x z^2 (3x^2 - 4x + 1) - 3xz(x^2 - x),
\hat m_4 = x^4 + 9 x^2 z^2 (2x^2 - 3x + 1) - 6 x^2 z (x^2 - x),
\hat m_5 = x^5 + 10 x^3 z^2 (5x^2 - 8x + 3) - 10 x^3 z (x^2 - x),
\hat m_6 = x^6 + \tfrac{75}{2} x^4 z^2 (3x^2 - 5x + 2) - 15 x^4 z (x^2 - x).   (30)
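The conversion from the central conditions (23)–(28) to the noncentral moments (30) can be verified with SymPy (a sketch of our own check; the O(h³) terms are dropped):

```python
import sympy as sp
from math import comb

x, z = sp.symbols('x z', positive=True)   # z = sigma^2 * h

# central moments prescribed by (23)-(28), O(h^3) terms dropped
c = {0: sp.Integer(1), 1: 0,
     2: z*x*(1 - x)*(1 - z/2),
     3: sp.Rational(3, 2)*z**2*x*(1 - x)*(1 - 2*x),
     4: 3*z**2*x**2*(1 - x)**2,
     5: 0, 6: 0}

# noncentral moments (30)
mh = {1: x,
      2: x**2 + z*x*(1 - x)*(1 - z/2),
      3: x**3 + sp.Rational(3, 2)*x*z**2*(3*x**2 - 4*x + 1) - 3*x*z*(x**2 - x),
      4: x**4 + 9*x**2*z**2*(2*x**2 - 3*x + 1) - 6*x**2*z*(x**2 - x),
      5: x**5 + 10*x**3*z**2*(5*x**2 - 8*x + 3) - 10*x**3*z*(x**2 - x),
      6: x**6 + sp.Rational(75, 2)*x**4*z**2*(3*x**2 - 5*x + 2)
         - 15*x**4*z*(x**2 - x)}

diffs = []
for k in range(1, 7):
    # E S^k = sum_j C(k, j) x^{k-j} E (S - x)^j
    noncentral = sum(comb(k, j) * x**(k - j) * c[j] for j in range(k + 1))
    diffs.append(sp.expand(noncentral - mh[k]))
```

Each difference expands to zero, confirming the listed polynomials.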
Our aim is to construct a potential second-order approximation for the WF equation using discrete random variables at each generation step. Therefore, we look for approximations Ŝ_h^x taking three values x₁, x₂, x₃ in the interval [0,1] with probabilities p₁, p₂, p₃ satisfying the following conditions:

p_1 + p_2 + p_3 = 1,   (31)
x_1^i p_1 + x_2^i p_2 + x_3^i p_3 = \hat m_i + O(h^3), \quad i = 1, \dots, 6.   (32)

Looking ahead, note that when solving system (31)–(32), a serious challenge is ensuring the first equality, p₁ + p₂ + p₃ = 1. A simple way out of this situation is relaxing the latter to the inequality

p_1 + p_2 + p_3 \le 1   (33)

and, at the same time, allowing Ŝ_h^x to take the additional trivial value 0 with probability p₀ = 1 - (p₁ + p₂ + p₃). Notice that this does not change Equations (32) in any way.
Solving the system

x_1^i p_1 + x_2^i p_2 + x_3^i p_3 = \hat m_i, \quad i = 1, 2, 3,

with respect to p₁, p₂, p₃, we obtain (cf. [6])

p_1 = \frac{\hat m_1 x_2 x_3 - \hat m_2 (x_2 + x_3) + \hat m_3}{x_1 (x_1 - x_2)(x_1 - x_3)},
p_2 = \frac{\hat m_1 x_1 x_3 - \hat m_2 (x_1 + x_3) + \hat m_3}{x_2 (x_2 - x_1)(x_2 - x_3)},
p_3 = \frac{\hat m_1 x_1 x_2 - \hat m_2 (x_1 + x_2) + \hat m_3}{x_3 (x_3 - x_1)(x_3 - x_2)}.   (34)

Note that, here, differently from [6], we use the approximate "moments" m̂_i instead of the true moments m_i. This eventually allows us to get simpler expressions because the m̂_i are polynomials in x and z.
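Formulas (34) are of Lagrange type and can be checked symbolically for arbitrary nodes and moments (our own verification sketch):

```python
import sympy as sp

x1, x2, x3, m1, m2, m3 = sp.symbols('x1 x2 x3 m1 m2 m3')

def p(xi, xj, xk):
    # probability attached to the node xi, cf. (34)
    return (m1*xj*xk - m2*(xj + xk) + m3) / (xi*(xi - xj)*(xi - xk))

p1, p2, p3 = p(x1, x2, x3), p(x2, x1, x3), p(x3, x1, x2)

# p1, p2, p3 solve x1^i p1 + x2^i p2 + x3^i p3 = m_i for i = 1, 2, 3
residuals = [sp.simplify(x1**i*p1 + x2**i*p2 + x3**i*p3 - mi)
             for i, mi in ((1, m1), (2, m2), (3, m3))]
```

All three residuals vanish identically, for any choice of distinct nodes.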
Now we have to find x_{1,2,3} that, together with p_{1,2,3} defined by Equations (34), satisfy the remaining conditions

x_1^4 p_1 + x_2^4 p_2 + x_3^4 p_3 - \hat m_4 = O(h^3),
x_1^5 p_1 + x_2^5 p_2 + x_3^5 p_3 - \hat m_5 = O(h^3),
x_1^6 p_1 + x_2^6 p_2 + x_3^6 p_3 - \hat m_6 = O(h^3).   (35)
Motivated by the first-order approximation (20) and [6], we look for x_{1,2,3} of the following form:

x_1 = x + A_1(1-x)z + A_2 x z - \sqrt{z(1-x)\big(Bx + Cz(1-x)\big)},   (36)
x_2 = x + A_3 x z,   (37)
x_3 = x + A_1(1-x)z + A_2 x z + \sqrt{z(1-x)\big(Bx + Cz(1-x)\big)},   (38)

with unknown parameters A₁, A₂, A₃, B, C ≥ 0.

4.2. Calculation of the Parameters

Substituting (36)–(38) into the left-hand sides of (35), we have (performing the technical calculations with Maple and Python)

x_1^4 p_1 + x_2^4 p_2 + x_3^4 p_3 - \hat m_4 = \Big[ \big((B-1)A_3 + B + 2A_1 - 2A_2 - 6\big) x^4 + \big((1-B)A_3 - 2B - 4A_1 + 2A_2 + \tfrac{21}{2}\big) x^3 + \big(B + 2A_1 - \tfrac{9}{2}\big) x^2 \Big] z^2 + O(h^3),   (39)
x_1^5 p_1 + x_2^5 p_2 + x_3^5 p_3 - \hat m_5 = \Big[ \big(8A_1 + (4B-4)A_3 + 5B - 8A_2 - 27\big) x^5 + \big((4-4B)A_3 - 10B + 8A_2 - 16A_1 + 48\big) x^4 + \big(8A_1 + 5B - 21\big) x^3 \Big] z^2 + O(h^3),   (40)
x_1^6 p_1 + x_2^6 p_2 + x_3^6 p_3 - \hat m_6 = \Big[ \big(20A_1 + (10B-10)A_3 + 15B - 20A_2 - 75\big) x^6 + \big((10-10B)A_3 - 30B + 20A_2 - 40A_1 + 135\big) x^5 + \big(15B + 20A_1 - 60\big) x^4 \Big] z^2 + O(h^3).   (41)
To ensure equalities (35), we need to choose A₁, A₂, A₃, B such that the expressions at z² are equal to 0. Equating the coefficients at the lowest powers of x to zero, we get the system for the parameters A₁ and B:

B + 2A_1 - \tfrac{9}{2} = 0, \quad 8A_1 + 5B - 21 = 0, \quad 15B + 20A_1 - 60 = 0.

Although the system contains three equations with respect to two unknowns, it has the solution A₁ = 3/4, B = 3. Substituting these values back into Equations (39)–(41), we get the relation A₃ = A₂ + 3/4, which makes all the expressions at z² vanish. Summarizing, we have that x_{1,2,3} of the form (36)–(38) and p_{1,2,3} defined by (34) satisfy all of Equations (32), provided that the parameters satisfy the following relations:

A_1 = \tfrac{3}{4}, \quad A_2 \ge 0, \quad A_3 = A_2 + \tfrac{3}{4}, \quad B = 3, \quad C \ge 0.   (42)
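The overdetermined linear system for A₁ and B is easily checked with SymPy:

```python
import sympy as sp

A1, B = sp.symbols('A1 B')
eqs = [B + 2*A1 - sp.Rational(9, 2),    # coefficient at x^2 in (39)
       8*A1 + 5*B - 21,                 # coefficient at x^3 in (40)
       15*B + 20*A1 - 60]               # coefficient at x^4 in (41)
sol = sp.solve(eqs, [A1, B], dict=True)
# the three equations are consistent and determine A1 and B uniquely
```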

4.3. Positivity of the Solution

Now we would like to choose the values of the free parameters A₂ and C so that all x₁, x₂, x₃, p₁, p₂, p₃ are positive and p₁ + p₂ + p₃ ≤ 1. We first consider the latter restriction.
Lemma 2.
We have p₁ + p₂ + p₃ ≤ 1 if

A_2 \ge \frac{(3 + 2\sqrt{2})^{1/3}}{4} + \frac{1}{4\,(3 + 2\sqrt{2})^{1/3}} \approx 0.58883   (43)

and

0 \le C \le \frac{3\,(32A_2^3 - 6A_2 - 3)}{16A_2^2\,(16A_2^2 + 24A_2 + 9)}.
Proof. 
A direct calculation (performed with Maple) shows that

p_1 + p_2 + p_3 = \frac{N}{D},

where the numerator N and the denominator D are polynomials in x and z (with coefficients depending on A₂ and C) that differ only by the coefficients at z³. Thus, it suffices to show that their difference D - N is nonnegative, which reduces to a quadratic inequality in x,

a_1 x^2 + a_2 x + a_3 \ge 0,   (44)

with coefficients a₁, a₂, a₃ depending polynomially on A₂ and C. Inequality (44) is satisfied for all x ∈ ℝ if

a_1 > 0 \quad \text{and} \quad a_3 \ge \frac{a_2^2}{4a_1}.   (45)

Solving inequality (45), we get

A_2 > 0, \quad C \le \frac{3\,(32A_2^3 - 6A_2 - 3)}{16\,(16A_2^2 + 24A_2 + 9)\,A_2^2}.

Since C must be nonnegative, we obtain condition (43) for A₂. □
Remark 2.
We observe that the possible values of C are rather small (see Figure 4). Therefore, to simplify the expressions for x₁, x₂, x₃, we simply take C = 0 and A₂ = 2/3.
We have now arrived at the following expressions for x₁, x₂, x₃:

x_1 = x + \frac{3(1-x)z}{4} + \frac{2xz}{3} - \sqrt{3x(1-x)z},   (46)
x_2 = x + \frac{17xz}{12},   (47)
x_3 = x + \frac{3(1-x)z}{4} + \frac{2xz}{3} + \sqrt{3x(1-x)z}.   (48)
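With these concrete values, the remaining conditions (35) can be probed numerically: solve the three-moment system for p₁, p₂, p₃ and estimate the decay order of the residuals for i = 4, 5, 6 as z decreases (a rough check of our own; the estimated slopes should be close to 3):

```python
import math
import numpy as np

def mhat(x, z, i):
    """Approximate moments (30)."""
    return [None, x,
            x**2 + z*x*(1 - x)*(1 - z/2),
            x**3 + 1.5*x*z**2*(3*x**2 - 4*x + 1) - 3*x*z*(x**2 - x),
            x**4 + 9*x**2*z**2*(2*x**2 - 3*x + 1) - 6*x**2*z*(x**2 - x),
            x**5 + 10*x**3*z**2*(5*x**2 - 8*x + 3) - 10*x**3*z*(x**2 - x),
            x**6 + 37.5*x**4*z**2*(3*x**2 - 5*x + 2) - 15*x**4*z*(x**2 - x)][i]

def residual(x, z, i):
    """x_i^i-weighted moment of the three-point law minus mhat_i."""
    mid = x + 0.75*(1 - x)*z + (2/3)*x*z
    r = math.sqrt(3*x*(1 - x)*z)
    xs = np.array([mid - r, x + 17*x*z/12, mid + r])   # (46)-(48)
    V = np.vstack([xs, xs**2, xs**3])
    ps = np.linalg.solve(V, [mhat(x, z, k) for k in (1, 2, 3)])
    return float(xs**i @ ps - mhat(x, z, i))

orders = []
for i in (4, 5, 6):
    r1, r2 = residual(0.4, 0.08, i), residual(0.4, 0.04, i)
    if abs(r1) < 1e-10 or abs(r2) < 1e-10:
        orders.append(float('inf'))        # numerically zero: certainly O(z^3)
    else:
        orders.append(math.log(abs(r1) / abs(r2), 2))
```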
However, the game is not over. At this point, we only know that x₁, x₂, x₃ defined by (46)–(48), together with p₁, p₂, p₃ defined by (34), satisfy conditions (32) and (33). From numerical calculations it appears that for "small" x, it may happen that x₁ > x₂ and thus p₁, p₂ < 0. On the other hand, for "not small" x and "large" h, it may happen that x₃ > 1. We can see a typical situation in Figure 5 with z = σ²h = 1/5, where for small x, p₁ and p₂ take values outside the interval [0, 1], whereas x₃ > 1 for x near 1/2.
For these reasons, similarly to [4], for small x below the threshold Kz (with some fixed K > 0), we switch to the first-order approximation (20), which behaves as a second-order one for such x. We also have to consider z ≤ z₀, where z₀ is sufficiently small to ensure that x₃ ≤ 1. To be precise, for 0 ≤ x ≤ Kz, 0 < z ≤ z₀, we use scheme (20), whereas for Kz ≤ x ≤ 1/2, 0 < z ≤ z₀, we use scheme (46)–(48) together with (34); finally, for x ∈ (1/2, 1], we use the symmetry S_t^x =ᵈ 1 - S_t^{1-x} as in the first-order approximation. The following lemmas justify such a switch for K = 1/3 and z₀ = 1/6.
Lemma 3.
The first-order approximation (20) in the region x ≤ Kσ²h (with arbitrary fixed K > 0) satisfies conditions (29). In other words, in this region, it behaves as a second-order approximation.
Proof. 
We prove equalities (29) in the region x ≤ Kz = Kσ²h, where Ŝ_h^x and m̂_i, i = 1, …, 6, are defined by (20) and (30), respectively. First, E Ŝ_h^x = x = m̂₁ exactly. Further,

E(\hat S_h^x)^2 - \hat m_2 = x^2 + x(1-x)z - \Big( x^2 + z x(1-x)\big(1 - \tfrac{1}{2}z\big) \Big) = \tfrac{1}{2} x(1-x) z^2.

Similar calculations (performed with the Python SymPy package) show that

E(\hat S_h^x)^k - \hat m_k = x z^2\, q_k(x, z), \quad k = 3, \dots, 6,

where the q_k are polynomials in x and z. Since 0 ≤ x ≤ Kz in the considered region, all these differences are of order O(z³) = O(h³), which gives (29). □
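The factorization used in the proof, namely that each difference E(Ŝ_h^x)^k - m̂_k carries the factor xz², can be confirmed with SymPy (our own check):

```python
import sympy as sp

x, z = sp.symbols('x z', positive=True)   # z = sigma^2 * h

# two-point scheme (18) with probabilities p_{1,2} = x / (2 x_{1,2})
m = x + (1 - x)*z
s = sp.sqrt(m*(1 - x)*z)
x1, x2 = m - s, m + s
p1, p2 = x/(2*x1), x/(2*x2)

mh = {2: x**2 + z*x*(1 - x)*(1 - z/2),
      3: x**3 + sp.Rational(3, 2)*x*z**2*(3*x**2 - 4*x + 1) - 3*x*z*(x**2 - x),
      4: x**4 + 9*x**2*z**2*(2*x**2 - 3*x + 1) - 6*x**2*z*(x**2 - x),
      5: x**5 + 10*x**3*z**2*(5*x**2 - 8*x + 3) - 10*x**3*z*(x**2 - x),
      6: x**6 + sp.Rational(75, 2)*x**4*z**2*(3*x**2 - 5*x + 2)
         - 15*x**4*z*(x**2 - x)}

quotients = []
for k in range(2, 7):
    # E (S-hat)^k = (x/2)(x1^{k-1} + x2^{k-1}); the square root cancels
    diff = sp.expand(x1**k*p1 + x2**k*p2 - mh[k])
    quotients.append(sp.cancel(diff / (x*z**2)))
# each quotient is a polynomial, i.e., diff is divisible by x*z^2
```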
Lemma 4.
For z ∈ [0, 1/6] and x ∈ [0, 1/2], the values x₁, x₂, x₃ defined by (46)–(48) take values in the interval [0, 1].
Proof. 
Obviously, x₂ ∈ [0, 1]. Thus, we focus on x₁ and x₃. Since x₁ ≤ x₃, it suffices to prove that x₁ ≥ 0 and x₃ ≤ 1.
The condition x₁ ≥ 0 is equivalent to the inequality

x + \frac{3(1-x)z}{4} + \frac{2xz}{3} \ge \sqrt{3x(1-x)z}.

Denoting y = 1 - x and squaring, this becomes

\Big( x + \frac{3yz}{4} + \frac{2xz}{3} \Big)^2 - 3xyz = x^2 + \frac{9}{16} y^2 z^2 + \frac{4}{9} x^2 z^2 - \frac{3}{2} xyz + \frac{4}{3} x^2 z + x y z^2 \ge 0.

We will prove the stronger inequality

x^2 + \frac{9}{16} y^2 z^2 + \frac{4}{3} x^2 z - \frac{3}{2} xyz \ge 0,

which after the substitution y = 1 - x becomes

\Big( 1 + \frac{17}{6} z + \frac{9}{16} z^2 \Big) x^2 - \Big( \frac{3}{2} z + \frac{9}{8} z^2 \Big) x + \frac{9 z^2}{16} \ge 0.   (49)

The discriminant of the quadratic polynomial (49) in x is

D = \Big( \frac{3}{2} z + \frac{9}{8} z^2 \Big)^2 - 4 \Big( 1 + \frac{17}{6} z + \frac{9}{16} z^2 \Big) \cdot \frac{9 z^2}{16} = -3 z^3,

which is negative for all z > 0. This means that the left-hand side of (49) is positive, and thus x₁ > 0, for all x ∈ [0, 1] and z ≥ 0, except for x = z = 0, where x₁ = 0.
Let us now prove that x₃ ≤ 1. For z ∈ [0, 1/6] and x ∈ [0, 1/2], we have

x_3 = x + \frac{3(1-x)z}{4} + \frac{2xz}{3} + \sqrt{3x(1-x)z} \le x + \frac{1-x}{8} + \frac{1}{18} + \sqrt{\frac{x(1-x)}{2}} \le \frac{1}{8} + \frac{7}{8} \cdot \frac{1}{2} + \frac{1}{18} + \sqrt{\frac{1}{8}} \approx 0.972 < 1. \quad \Box
Lemma 5.
For x ∈ (z/3, 1/2] and z ≤ 1/6, we have p₁, p₂, p₃ ∈ [0, 1].
Proof. 
From Lemma 2, we already have p₁ + p₂ + p₃ ≤ 1. Therefore, it suffices to prove that p₁, p₂, p₃ ≥ 0. Because of the complicated expressions of p₁, p₂, p₃, we prefer to show this graphically by using the Maple function plot3d. See Figure 6, where the 3D graphs of p₁, p₂, p₃ as functions of (x, z) are plotted in the domain {(x,z) : z/3 ≤ x ≤ 1/2, 0 ≤ z ≤ 1/6}. □
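The graphical check can be complemented by a numerical one: on a grid over the domain of Lemma 5, solve for p₁, p₂, p₃ and verify 0 ≤ p_i and p₁ + p₂ + p₃ ≤ 1 (a NumPy sketch of our own; helper names are ours):

```python
import math
import numpy as np

def nodes_and_probs(x, z):
    """Values (46)-(48) and probabilities (34) for given x and z = sigma^2*h."""
    mid = x + 0.75*(1 - x)*z + (2/3)*x*z
    r = math.sqrt(3*x*(1 - x)*z)
    xs = np.array([mid - r, x + 17*x*z/12, mid + r])
    mh = [x,
          x**2 + z*x*(1 - x)*(1 - z/2),
          x**3 + 1.5*x*z**2*(3*x**2 - 4*x + 1) - 3*x*z*(x**2 - x)]
    V = np.vstack([xs, xs**2, xs**3])
    return xs, np.linalg.solve(V, mh)

ok = True
for z in np.linspace(0.01, 1/6, 8):
    for x in np.linspace(z/3 + 0.005, 0.5, 40):
        xs, ps = nodes_and_probs(float(x), float(z))
        ok &= bool(np.all(xs >= 0.0) and np.all(xs <= 1.0))    # Lemma 4
        ok &= bool(np.all(ps >= -1e-6) and ps.sum() <= 1 + 1e-9)  # Lemma 5
```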

4.4. The Second Main Result

Now let us summarize the results of this section. For clarity, we recall the main notation:

x_1 = x_1(x,h) = x + \frac{3(1-x)\sigma^2 h}{4} + \frac{2x\sigma^2 h}{3} - \sqrt{3x(1-x)\sigma^2 h},   (50)
x_2 = x_2(x,h) = x + \frac{17 x \sigma^2 h}{12},   (51)
x_3 = x_3(x,h) = x + \frac{3(1-x)\sigma^2 h}{4} + \frac{2x\sigma^2 h}{3} + \sqrt{3x(1-x)\sigma^2 h}.   (52)

To distinguish the functions x_{1,2,3} from the x_{1,2} given by (18), here we denote the latter by

y_{1,2} = y_{1,2}(x,h) = x + (1-x)\sigma^2 h \mp \sqrt{\big(x + (1-x)\sigma^2 h\big)(1-x)\sigma^2 h}.   (53)

Using the symmetry S_t^x =ᵈ 1 - S_t^{1-x}, x ∈ [0,1], we define the approximation of the stochastic part of the WF equation as follows:

\hat S_h^x := \begin{cases} x_{1,2,3}(x,h) \text{ with probabilities } p_{1,2,3} \text{ given by (34), and } 0 \text{ with probability } p_0 = 1 - (p_1 + p_2 + p_3), & x \in \big(\tfrac{\sigma^2 h}{3}, \tfrac{1}{2}\big], \\[4pt] 1 - x_{1,2,3}(1-x,h) \text{ with probabilities } p_{1,2,3}(1-x,h), \text{ and } 1 \text{ with probability } p_0 = 1 - (p_1 + p_2 + p_3), & x \in \big(\tfrac{1}{2}, 1 - \tfrac{\sigma^2 h}{3}\big), \\[4pt] y_{1,2}(x,h) \text{ with probabilities } \tilde p_{1,2}(x,h) := \dfrac{x}{2 y_{1,2}(x,h)}, & x \in \big[0, \tfrac{\sigma^2 h}{3}\big], \\[4pt] 1 - y_{1,2}(1-x,h) \text{ with probabilities } \tilde p_{1,2}(1-x,h), & x \in \big[1 - \tfrac{\sigma^2 h}{3}, 1\big]. \end{cases}   (54)
Now, in view of Theorem 3 and Lemmas 2–5, we can state the main result on the second-order approximation of the WF process.
Theorem 5.
Let X̂_t^x be the discretization scheme defined by the one-step approximation

\hat X_h^x = D\big( \hat S\big( D(x, h/2), h \big), h/2 \big),   (55)

where D(x,h) is defined by (5), and Ŝ(x,h) = Ŝ_h^x is defined by (54). Then X̂_t^x is a second-order weak approximation of Equation (1).

4.5. Algorithm for Second-Order Approximation

In this section, we provide an algorithm for calculating X̂_{(i+1)h} given X̂_{ih} = x at each simulation step i:
1.
Draw a uniform random variable U on the interval [0, 1].
2.
Set x := D(x, h/2) (where D is given by (5)).
3.
If x ≤ 1/2, then do Step 3.1; else do Step 3.2.
3.1.
If x > σ²h/3, then set x₀ := 0, calculate x₁, x₂, x₃ according to (50)–(52) and p₁, p₂, p₃ according to (34), and set Ŝ := x₁ if U < p₁; else Ŝ := x₂ if U < p₁ + p₂; else Ŝ := x₃ if U < p₁ + p₂ + p₃; else Ŝ := x₀.
Otherwise (x ≤ σ²h/3), calculate y₁, y₂ according to (53), set p̃_{1,2} := x/(2y_{1,2}(x,h)), and let Ŝ := y₁ if U < p̃₁, else Ŝ := y₂.
3.2.
Do Step 3.1 with x := 1 - x, x_{0,1,2,3} := 1 - x_{0,1,2,3}, and y_{1,2} := 1 - y_{1,2}.
4.
Calculate X̂_{(i+1)h} := D(Ŝ, h/2).
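The algorithm can be sketched in Python as follows (a minimal implementation of our own; names are ours, and h is assumed to satisfy σ²h ≤ 1/6, as required by Lemmas 4 and 5):

```python
import math
import random
import numpy as np

def wf_step2(x, h, a, b, sigma, rng):
    """One step of the second-order scheme (55); requires sigma^2*h <= 1/6."""
    def D(y, t):                        # deterministic flow (5)
        if b == 0.0:
            return y
        e = math.exp(-b*t)
        return y*e + (a/b)*(1.0 - e)

    x = D(x, h/2)
    z = sigma**2 * h
    u = rng.random()
    flip = x > 0.5                      # symmetry S^x =d 1 - S^{1-x}
    y = 1.0 - x if flip else x
    if y <= z/3:                        # two-point scheme (53) near the boundary
        if y > 0.0:
            m = y + (1.0 - y)*z
            r = math.sqrt(m*(1.0 - y)*z)
            y1, y2 = m - r, m + r
            y = y1 if u < y/(2.0*y1) else y2
        s = y
    else:                               # three-point scheme (50)-(52) with (34)
        mid = y + 0.75*(1.0 - y)*z + (2.0/3.0)*y*z
        r = math.sqrt(3.0*y*(1.0 - y)*z)
        xs = np.array([mid - r, y + 17.0*y*z/12.0, mid + r])
        mh = [y,
              y**2 + z*y*(1 - y)*(1 - z/2),
              y**3 + 1.5*y*z**2*(3*y**2 - 4*y + 1) - 3*y*z*(y**2 - y)]
        ps = np.linalg.solve(np.vstack([xs, xs**2, xs**3]), mh)
        i = int(np.searchsorted(np.cumsum(ps), u))
        s = float(xs[i]) if i < 3 else 0.0   # leftover mass p0 sits at 0
    s = 1.0 - s if flip else s
    return D(s, h/2)
```

As in the first-order case, the scheme propagates the first moment exactly, which gives a convenient Monte Carlo sanity check.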

4.6. Simulation Examples

We illustrate our approximation for the test function f(x) = x⁵. In Figure 7 and Figure 8, we compare the moments E f(X̂_t^x) and E f(X_t^x) as functions of t (left plots, h = 0.01) and as functions of the discretization step h (right plots) in terms of the relative error. We observe that with a rather small number of iterations, the second-order approximation agrees with the exact values pretty well. These specific examples were chosen to illustrate the behavior of the approximations with small (σ² = 0.6) and high (σ² = 2) volatility. In comparison with the simulation results for the first-order approximation (Section 3.3), we see that to get a similar accuracy, we can use the second-order approximation with a significantly smaller number of iterations N and a larger step size h, which in turn requires significantly less computation time.

5. Probabilistic Proof of Regularity of Solutions of the Kolmogorov Backward Equation

Theorem B is in fact Theorem 1.19 of [4], stated based on the results of [7], which are proved by methods of partial differential equation theory. Here we provide a significantly simpler probabilistic proof of the theorem for a rather wide subclass of C^∞[0,1], which practically includes all functions interesting for applications, for example, polynomials and exponentials.
Definition 3.
We denote by C_*^∞[0,1] the class of infinitely differentiable functions on [0, 1] with "not too fast" growing derivatives:

C_*^\infty[0,1] := \Big\{ f \in C^\infty[0,1] : \limsup_{k \to \infty} \frac{1}{k!} \max_{x \in [0,1]} |f^{(k)}(x)| = 0 \Big\}.

Every f ∈ C_*^∞[0,1] is the sum of its (uniformly convergent) Taylor series:

f(x) = \sum_{k=0}^{\infty} c_k x^k, \quad x \in [0,1],   (56)

where c_k = f^{(k)}(0)/k!, k ∈ ℕ̄. This easily follows from the Lagrange error bound for Taylor series.
Remark 3.
Clearly, every f ∈ C_*^∞[0,1] is a real analytic function; see [8].
Denote m_k(x,t) := E(X_t^x)^k, k ∈ ℕ̄. Then, from (56), we formally have

\tilde u(t,x) = E f(X_t^x) = \sum_{k=0}^{\infty} c_k m_k(x,t), \quad x \in [0,1],\ t \ge 0.   (57)

If ũ is infinitely continuously differentiable, then it satisfies Equation (2) (see, e.g., [9] (Thm. 8.1.1)). Therefore, it suffices to show that
(1)
the moments m_k(x,t) are infinitely continuously differentiable, and
(2)
all formal partial derivatives of the series in (57),

\sum_{k=0}^{\infty} c_k\, \partial_t^p \partial_x^q\, m_k(x,t),   (58)

converge uniformly for (x,t) ∈ [0,1] × [0,T] (for any fixed T > 0).
Lemma 6.
The moments of the WF process X_t^x satisfy the following recurrence relation:

m_1(x,t) = \begin{cases} x e^{-bt} + \frac{a}{b}\,(1 - e^{-bt}), & 0 \le a \le b,\ b \ne 0, \\ x, & a = b = 0, \end{cases}   (59)
m_k(x,t) = e^{-b_k t} \Big( x^k + a_k \int_0^t e^{b_k s}\, m_{k-1}(x,s)\,ds \Big), \quad k \ge 2,   (60)

where b_k = kb + \frac{k(k-1)\sigma^2}{2}, a_k = ka + \frac{k(k-1)\sigma^2}{2}.
In particular, by induction on k it follows that the m_k(x,t) are infinitely continuously differentiable with respect to (x,t) ∈ [0,1] × ℝ₊.
Proof. 
Taking the expectations of both sides of Equation (1) and then differentiating with respect to $t$, we get
$$\partial_t m_1(x,t) = a - b\, m_1(x,t), \quad m_1(x,0) = x.$$
Solving the latter, we get (59).
When $k \ge 2$, by Itô's formula, we have
$$\begin{aligned}
(X_t^x)^k &= x^k + k \int_0^t (X_s^x)^{k-1}\, dX_s^x + \tfrac{1}{2} k(k-1) \int_0^t (X_s^x)^{k-2}\, d\langle X^x \rangle_s \\
&= x^k + k \int_0^t (X_s^x)^{k-1} (a - b X_s^x)\, ds + k \sigma \int_0^t (X_s^x)^{k-1} \sqrt{X_s^x (1 - X_s^x)}\, dB_s \\
&\quad + \tfrac{1}{2} k(k-1) \sigma^2 \int_0^t (X_s^x)^{k-2}\, X_s^x (1 - X_s^x)\, ds \\
&= x^k + k \int_0^t \big( a (X_s^x)^{k-1} - b (X_s^x)^k \big)\, ds + k \sigma \int_0^t (X_s^x)^{k-1} \sqrt{X_s^x (1 - X_s^x)}\, dB_s \\
&\quad + \tfrac{1}{2} k(k-1) \sigma^2 \int_0^t \big( (X_s^x)^{k-1} - (X_s^x)^k \big)\, ds.
\end{aligned}$$
By taking expectations, we get
$$m_k(x,t) = x^k + \int_0^t \Big( \Big[ ka + \tfrac{k(k-1)\sigma^2}{2} \Big] m_{k-1}(x,s) - \Big[ kb + \tfrac{k(k-1)\sigma^2}{2} \Big] m_k(x,s) \Big)\, ds = x^k + \int_0^t \big( a_k\, m_{k-1}(x,s) - b_k\, m_k(x,s) \big)\, ds,$$
and thus
$$\partial_t m_k(x,t) = -b_k\, m_k(x,t) + a_k\, m_{k-1}(x,t), \quad m_k(x,0) = x^k.$$
Solving the latter with respect to $m_k$, we arrive at (60). □
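Lemma 6 gives a practical way to compute all moments without simulation: $m_0 \equiv 1$, and each $m_k$ solves the linear ODE $\partial_t m_k = -b_k m_k + a_k m_{k-1}$. Below is a minimal numerical sketch of this recurrence (ours, not from the paper; the exponential-integrator step with trapezoidal quadrature and the parameter values, borrowed from the simulation examples, are our choices):

```python
import math

def wf_moments(x, t, a, b, sigma, kmax, n=2000):
    """Compute m_k(x, t), k = 1..kmax, from the recurrence
    d/dt m_k = -b_k m_k + a_k m_{k-1}, m_k(0) = x^k (Lemma 6),
    using an exponential-integrator step with trapezoidal quadrature."""
    dt = t / n
    prev = [1.0] * (n + 1)  # m_0(x, s) = 1 for all s
    out = []
    for k in range(1, kmax + 1):
        bk = k * b + k * (k - 1) * sigma ** 2 / 2
        ak = k * a + k * (k - 1) * sigma ** 2 / 2
        ek = math.exp(-bk * dt)
        cur = [x ** k]
        for i in range(n):
            # m_k(t_{i+1}) = e^{-b_k dt} m_k(t_i)
            #   + a_k * int_{t_i}^{t_{i+1}} e^{-b_k (t_{i+1} - s)} m_{k-1}(s) ds
            cur.append(ek * cur[i] + ak * dt / 2 * (ek * prev[i] + prev[i + 1]))
        out.append(cur[-1])
        prev = cur
    return out  # [m_1(x, t), ..., m_kmax(x, t)]

a, b, sigma2 = 0.8, 5.0, 0.6  # illustrative values from the simulation examples
x, t = 0.24, 1.0
ms = wf_moments(x, t, a, b, math.sqrt(sigma2), kmax=5)
m1_exact = x * math.exp(-b * t) + a / b * (1 - math.exp(-b * t))  # Equation (59)
```

The computed $m_1$ agrees with the closed form (59), and the moments stay in $[0,1]$ and decrease in $k$, as they must for a $[0,1]$-valued process.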
Lemma 7.
All formal partial derivatives of the series (57),
$$\sum_{k=0}^{\infty} c_k\, \partial_t^p \partial_x^q m_k(x,t), \tag{61}$$
converge uniformly for $(x,t) \in [0,1] \times [0,T]$ (for any fixed $T > 0$).
Proof. 
It is obvious that $0 \le m_k(x,t) \le 1$ for $x \in [0,1]$, $k \in \overline{\mathbb{N}}$. First, consider the derivatives with respect to $x$. Let us prove by induction on $k$ that
$$\partial_x m_k(x,t) \le k, \quad x \in [0,1],\ k \in \mathbb{N}.$$
For $k = 1$, we have $\partial_x m_1(x,t) = e^{-bt} \le 1$. Suppose that
$$\partial_x m_{k-1}(x,t) \le k - 1, \quad x \in [0,1].$$
Then,
$$\partial_x m_k(x,t) = e^{-b_k t} \Big( k x^{k-1} + a_k \int_0^t e^{b_k s}\, \partial_x m_{k-1}(x,s)\, ds \Big) \le e^{-b_k t} k + a_k (k-1)\, e^{-b_k t} \int_0^t e^{b_k s}\, ds = e^{-b_k t} k + \frac{a_k}{b_k}\,(k-1) \big( 1 - e^{-b_k t} \big) \le e^{-b_k t} k + k \big( 1 - e^{-b_k t} \big) = k,$$
where we used the fact that $0 \le a_k \le b_k$, since $0 \le a \le b$.
Similarly, by induction on $k$, we can prove that
$$\partial_x^l m_k(x,t) \le (k)_l := k(k-1)\cdots(k-l+1), \quad x \in [0,1],\ k, l \in \mathbb{N}.$$
Indeed, for $k = 1$, $\partial_x m_1(x,t) = e^{-bt} \le 1 = (1)_1$, and $\partial_x^l m_1(x,t) = 0 = (1)_l$ for $l \ge 2$. Now suppose that, for some $k$,
$$\partial_x^l m_{k-1}(x,t) \le (k-1)_l, \quad x \in [0,1],\ l \in \mathbb{N}.$$
Then,
$$\partial_x^l m_k(x,t) = e^{-b_k t} \Big( k(k-1)\cdots(k-l+1)\, x^{k-l} + a_k \int_0^t e^{b_k s}\, \partial_x^l m_{k-1}(x,s)\, ds \Big) \le e^{-b_k t}\,(k)_l + \frac{a_k}{b_k}\,(k)_l \big( 1 - e^{-b_k t} \big) \le k(k-1)\cdots(k-l+1) = (k)_l.$$
Now let us differentiate the moments with respect to $t$. We have
$$\begin{aligned}
\big| \partial_t m_1(x,t) \big| &= \big| (a - bx)\, e^{-bt} \big| \le b, \quad x \in [0,1]; \\
\big| \partial_t m_k(x,t) \big| &= \Big| -b_k e^{-b_k t} \Big( x^k + a_k \int_0^t e^{b_k s}\, m_{k-1}(x,s)\, ds \Big) + a_k\, m_{k-1}(x,t) \Big| \\
&\le b_k + a_k \big( 1 - e^{-b_k t} \big) + a_k \le 3 b_k; \\
\big| \partial_t^2 m_k(x,t) \big| &= \Big| b_k^2 e^{-b_k t} \Big( x^k + a_k \int_0^t e^{b_k s}\, m_{k-1}(x,s)\, ds \Big) - a_k b_k\, m_{k-1}(x,t) + a_k\, \partial_t m_{k-1}(x,t) \Big| \\
&\le b_k^2 + a_k b_k + a_k b_k + 3 a_k b_k \le 6 b_k^2; \\
\big| \partial_t^3 m_k(x,t) \big| &= \Big| -b_k^3 e^{-b_k t} \Big( x^k + a_k \int_0^t e^{b_k s}\, m_{k-1}(x,s)\, ds \Big) + a_k b_k^2\, m_{k-1}(x,t) - a_k b_k\, \partial_t m_{k-1}(x,t) + a_k\, \partial_t^2 m_{k-1}(x,t) \Big| \\
&\le b_k^3 + a_k b_k^2 + a_k b_k^2 + 3 a_k b_k^2 + 6 a_k b_k^2 \le 12 b_k^3,
\end{aligned}$$
and, by induction,
$$\big| \partial_t^l m_k(x,t) \big| \le 3 \cdot 2^{l-1} b_k^l.$$
Finally, for all mixed partial derivatives, we have
$$\begin{aligned}
\big| \partial_t^p \partial_x^q m_k(x,t) \big| &= \Big| \partial_t^p \partial_x^q \Big[ e^{-b_k t} \Big( x^k + a_k \int_0^t e^{b_k s}\, m_{k-1}(x,s)\, ds \Big) \Big] \Big| \\
&\le \Big| \partial_t^p \Big[ e^{-b_k t} \Big( k(k-1)\cdots(k-q+1) + a_k\, (k-1)(k-2)\cdots(k-q) \int_0^t e^{b_k s}\, ds \Big) \Big] \Big| \\
&\le \Big| \partial_t^p \Big[ e^{-b_k t}\, k(k-1)\cdots(k-q+1)\, a_k \Big( \int_0^t e^{b_k s}\, ds + 1 \Big) \Big] \Big| \\
&\le \big( b_k^p + 1 \big)\, k(k-1)\cdots(k-q+1)\, a_k = O\big( k^{2p+q+2} \big), \quad k \to \infty.
\end{aligned}$$
Since $c_k = o(1/k!)$, we have
$$\sum_{k=1}^{\infty} |c_k|\, k^{2p+q+2} < +\infty,$$
and by the Weierstrass M-test it follows that, indeed, the function series (61) converges uniformly for all $p, q \in \overline{\mathbb{N}}$. □
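The summability used in the last step can be seen concretely (our own illustration, not part of the paper): for $f(x) = e^x$ we have $c_k = 1/k!$, and with, e.g., $p = q = 1$ (so that $2p + q + 2 = 5$) the majorant series $\sum_k k^5/k!$ converges; by Dobinski's formula its sum is $B_5\, e = 52 e$, where $B_5 = 52$ is the fifth Bell number.

```python
import math

def majorant_partial_sum(K, m):
    # Partial sum of the majorant series sum_{k>=1} k^m / k!
    return sum(k ** m / math.factorial(k) for k in range(1, K + 1))

m = 5  # exponent 2p + q + 2 with p = q = 1
s50 = majorant_partial_sum(50, m)
s100 = majorant_partial_sum(100, m)
# The partial sums stabilize; by Dobinski's formula the limit is 52 * e.
```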

6. Conclusions

We have constructed first- and second-order weak split-step approximations of the Wright–Fisher (WF) process. The approximations use the generation of a two- or three-valued random variable at each discretization step. The main difficulty was ensuring that the approximations take values in [0, 1], the state space of the WF process. Illustrative simulations show the high accuracy of the constructed approximations.

Author Contributions

All authors have participated equally in the development of this work, in both theoretical and computational aspects. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations and notations are used in this paper:
WF — Wright–Fisher model
CIR — Cox–Ingersoll–Ross model
PDE — Partial differential equation
$B$ — Brownian motion
$\mathbb{N}$ — the set of positive integers $\{1, 2, \dots\}$
$\overline{\mathbb{N}}$ — the set of nonnegative integers, $\mathbb{N} \cup \{0\}$
$\mathbb{R}$ — the set of real numbers
$\mathbb{R}_+$ — the set of positive real numbers
$C_*^\infty[0,1]$ — a subclass of $C^\infty[0,1]$; see Definition 3
$O(h^n)$ — $g(x,h) = O(h^n)$ if, for some $C > 0$ and $h_0 > 0$, $|g(x,h)| \le C h^n$ for all $x \in [0,1]$ and $0 < h \le h_0$
$\hat{X}^h$ — a discretization scheme of the WF process
$D_t^x$ — the solution of the deterministic part of the WF equation
$S_t^x$ — the solution of the stochastic part of the WF equation
$\hat{S}^h$ — a discretization scheme of $S_t^x$
$\mathbf{E}(X)$ — the mean of a random variable $X$
$R_\nu^h$ — the $\nu$th-order remainder of a discretization scheme $\hat{X}_t^x$
$A$ — the generator of the WF process
$A_0$ — the generator of the stochastic part of the WF process

References

1. Milstein, G.N.; Tretyakov, M.V. Stochastic Numerics for Mathematical Physics; Springer: Berlin/Heidelberg, Germany, 2004.
2. Alfonsi, A. Affine Diffusions and Related Processes: Simulation, Theory and Applications; Bocconi & Springer Series; Springer: Cham, Switzerland, 2015.
3. Cox, J.C.; Ingersoll, J.E.; Ross, S.A. A theory of the term structure of interest rates. Econometrica 1985, 53, 385–407.
4. Alfonsi, A. High order discretization schemes for the CIR process: Application to affine term structure and Heston models. Math. Comput. 2010, 79, 209–237.
5. Lileika, G.; Mackevičius, V. Weak approximation of CKLS and CEV processes by discrete random variables. Lith. Math. J. 2020, 60, 208–224.
6. Lileika, G.; Mackevičius, V. Second-order weak approximations of CKLS and CEV processes by discrete random variables. Mathematics 2021, 9, 1337.
7. Epstein, C.L.; Mazzeo, R. Wright–Fisher diffusion in one dimension. SIAM J. Math. Anal. 2010, 42, 568–608.
8. Krantz, S.G.; Parks, H.R. A Primer of Real Analytic Functions, 2nd ed.; Birkhäuser: Boston, MA, USA, 2002.
9. Øksendal, B. Stochastic Differential Equations: An Introduction with Applications, 5th ed.; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 2000.
Figure 1. Comparison of E f ( X ^ t x ) and E f ( X t x ) as functions of t and h for f ( x ) = x 5 : x = 0.24 , σ 2 = 0.6 , a = 0.8 , b = 5 , the number of iterations N = 1 , 000 , 000 . Left: h = 0.001 ; Right: the relative error at t = 1 .
Figure 2. Comparison of E f ( X ^ t x ) and E f ( X t x ) as functions of t and h for f ( x ) = x 5 : x = 0.83 , σ 2 = 2 , a = 4 , b = 5 , N = 1 , 000 , 000 . Left: h = 0.001 ; Right: the relative error at t = 1 .
Figure 3. Comparison of E f ( X ^ t x ) and E f ( X t x ) as functions of t and h for f ( x ) = e x : x = 0.4 , σ 2 = 1.6 , a = 4 , b = 5 , N = 100,000 . Left: h = 0.1 ; Right: the relative error at t = 1 .
Figure 4. Possible values of A 2 and C .
Figure 5. Graphs of p 1 , 2 , 3 (Left) and x 1 , 2 , 3 (Right) as functions of x with fixed z. Gray area shows the region where first-order approximation is used to avoid negative probabilities. Parameters: K = 1 3 , z = 1 5 .
Figure 6. Graphs of p 1 , p 2 , p 3 as functions of x and z. (a) p 1 , (b) p 2 , (c) p 3 .
Figure 7. Comparison of E f ( X ^ t x ) and E f ( X t x ) as functions of t and h for f ( x ) = x 5 : x = 0.24 , σ 2 = 0.6 , a = 0.8 , b = 5 , the number of iterations N = 100,000 . Left: h = 0.01 ; Right: the relative error at t = 1 .
Figure 8. Comparison of E f ( X ^ t x ) and E f ( X t x ) as functions of t and h for f ( x ) = x 5 : x = 0.83 , σ 2 = 2 , a = 4 , b = 5 , the number of iterations N = 100,000 . Left: h = 0.01 ; Right: the relative error at t = 1 .
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

