Proceeding Paper

# Quasi-Regression Monte-Carlo Method for Semi-Linear PDEs and BSDEs †

1 Centre de Mathématiques Appliquées, École Polytechnique and CNRS, route de Saclay, 91128 Palaiseau CEDEX, France
2 Department of Mathematics, Faculty of Informatics, Universidade da Coruña, Campus de Elviña s/n, 15071 A Coruña, Spain
* Author to whom correspondence should be addressed.
Presented at the 2nd XoveTIC Conference, A Coruña, Spain, 5–6 September 2019.
Proceedings 2019, 21(1), 44; https://doi.org/10.3390/proceedings2019021044
Published: 6 August 2019
(This article belongs to the Proceedings of The 2nd XoveTIC Conference (XoveTIC 2019))

## Abstract

In this work we design a novel and efficient quasi-regression Monte Carlo algorithm to approximate the solution of discrete-time backward stochastic differential equations (BSDEs), and we analyze the convergence of the proposed method. To tackle problems in high dimensions, we propose suitable projections of the solution and efficient parallelizations of the algorithm that take advantage of powerful many-core processors such as graphics processing units (GPUs).

## 1. Introduction

In this work we are interested in numerically approximating the solution $(X, Y, Z)$ of a decoupled forward-backward stochastic differential equation
$Y_t = g(X_T) + \int_t^T f(s, X_s, Y_s)\, \mathrm{d}s - \int_t^T Z_s\, \mathrm{d}W_s,$
$X_t = x + \int_0^t b(s, X_s)\, \mathrm{d}s + \int_0^t \sigma(s, X_s)\, \mathrm{d}W_s.$
The terminal time $T > 0$ is fixed. These equations are considered in a filtered probability space $(\Omega, \mathcal{F}, \mathbb{P}, (\mathcal{F}_t)_{0 \le t \le T})$ supporting a $q \ge 1$ dimensional Brownian motion $W$. In this representation, $X$ is a $d$-dimensional adapted continuous process (called the forward component), $Y$ is a scalar adapted continuous process and $Z$ is a $q$-dimensional progressively measurable process. Regarding terminology, $g(X_T)$ is called the terminal condition and $f$ the driver.

## 2. Results

Our aim is to solve
$Y_i = \mathbb{E}\left[\, g(X_N) + \sum_{j=i}^{N-1} f_j(X_j, Y_{j+1})\, \Delta \;\middle|\; \mathcal{F}_{t_i} \right] \quad \text{for } i \in \{N-1, \dots, 0\},$
where $f_j(x, y) := f(t_j, x, y)$, $f$ being the driver in (1). In other words, our subsequent scheme will approximate the solutions to
$X_t = x + \int_0^t b(s, X_s)\, \mathrm{d}s + \int_0^t \sigma(s, X_s)\, \mathrm{d}W_s, \qquad Y_t = \mathbb{E}\left[\, g(X_T) + \int_t^T f(s, X_s, Y_s)\, \mathrm{d}s \;\middle|\; \mathcal{F}_t \right],$
and
$\partial_t u(t, x) + \mathcal{A} u(t, x) + f(t, x, u(t, x)) = 0 \quad \text{for } t < T, \qquad u(T, \cdot) = g(\cdot).$
One important observation is that, due to the Markov property of the Euler scheme, for every $i$ there exists a measurable deterministic function $y_i : \mathbb{R}^d \to \mathbb{R}$ such that $Y_i = y_i(X_i)$ almost surely. A second crucial observation is that the value functions $y_i(\cdot)$ are independent of how we initialize the forward component, and our subsequent algorithm takes advantage of this. For instance, let $X_i^i$ be a random variable in $\mathbb{R}^d$ with some distribution $\nu$ and let $X_j^i$, $j \ge i$, be the Euler scheme started from $X_i^i$; it reads
$X_{j+1}^i = X_j^i + b(t_j, X_j^i)\, \Delta + \sigma(t_j, X_j^i)\, (W_{t_{j+1}} - W_{t_j}), \quad j \ge i.$
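As an illustration, the Euler step above can be simulated in vectorized form. The helper below is a minimal sketch (the function name and array layout are our own assumptions, not from the paper), generating $M$ paths $X_j^i$, $j = i, \dots, N$, for a generic drift $b$ and diffusion $\sigma$:

```python
import numpy as np

def euler_paths(x_i, b, sigma, t, i, rng):
    """Euler scheme X^i_{j+1} = X^i_j + b(t_j, X^i_j) Delta + sigma(t_j, X^i_j) dW_j.

    x_i   : (M, d) starting points X^i_i, sampled from the distribution nu
    b     : drift, b(t, x) -> (M, d)
    sigma : diffusion, sigma(t, x) -> (M, d, q)
    t     : (N + 1,) grid times t_0 < ... < t_N
    i     : starting index of the paths
    """
    M, d = x_i.shape
    N = len(t) - 1
    q = sigma(t[i], x_i).shape[2]
    X = np.zeros((N + 1, M, d))
    X[i] = x_i
    for j in range(i, N):
        dt = t[j + 1] - t[j]                               # time step Delta
        dW = rng.normal(0.0, np.sqrt(dt), size=(M, q))     # Brownian increments
        X[j + 1] = (X[j] + b(t[j], X[j]) * dt
                    + np.einsum('mdq,mq->md', sigma(t[j], X[j]), dW))
    return X  # entries with index < i are unused placeholders
```

Sanity checks: with zero drift and diffusion the paths stay constant, and with unit drift and zero diffusion they translate by $T$.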
This flexibility property w.r.t. the initialization then reads
$y_i(X_i^i) := \mathbb{E}\left[\, g(X_N^i) + \sum_{j=i}^{N-1} f_j\big(X_j^i, y_{j+1}(X_{j+1}^i)\big)\, \Delta \;\middle|\; X_i^i \right].$
Approximating the solution to (3) is actually achieved by approximating the functions $y_i(\cdot)$; in this way, we are directly approximating the solution to the semi-linear PDE (5). In order to better control the truncation error, we define a weighted modification of $y_i$ by $y_i^{(q)}(x) = y_i(x) / (1 + |x|^2)^{q/2}$ for a damping exponent $q \ge 0$. For $q = 0$, $y_i^{(q)}$ and $y_i$ coincide. The previous DPE (7) becomes
$y_i^{(q)}(X_i^i) := \mathbb{E}\left[\, \frac{g(X_N^i)}{(1 + |X_i^i|^2)^{q/2}} + \sum_{j=i}^{N-1} \frac{f_j\big(X_j^i,\; y_{j+1}^{(q)}(X_{j+1}^i)\, (1 + |X_{j+1}^i|^2)^{q/2}\big)}{(1 + |X_i^i|^2)^{q/2}}\, \Delta \;\middle|\; X_i^i \right].$
The introduction of the polynomial weight $(1 + |x|^2)^{q/2}$ gives higher flexibility in the error analysis: it ensures that $y_i^{(q)}$ decreases faster at infinity, which will provide nicer estimates on the approximation error when dealing with the Fourier basis.
Then we define proper basis functions $\phi_k$ which satisfy orthogonality properties in $\mathbb{R}^d$ and which span some $L^2$ space. It turns out that the measure defining this $L^2$ space has to coincide with the sampling measure of $X_i^i \sim \nu$. Our strategy for defining such basis functions is to start from a trigonometric basis on $[0, 1]^d$ and then to apply appropriate changes of variable; later, this transform will allow us to easily quantify the approximation error when truncating the basis. Using the notation
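As a concrete one-dimensional illustration of this change-of-variable construction (our own sketch; the text does not specify $\nu$, so a standard logistic sampling measure is assumed here): composing the orthonormal cosine basis on $[0, 1]$ with the CDF $F$ of $\nu$ yields functions orthonormal in $L^2_\nu(\mathbb{R})$, since $\int \phi_j \phi_k \, \mathrm{d}\nu = \int_0^1 e_j(u)\, e_k(u)\, \mathrm{d}u$ after the substitution $u = F(x)$.

```python
import numpy as np

def logistic_cdf(x):
    """CDF of the standard logistic distribution (our assumed choice of nu)."""
    return 1.0 / (1.0 + np.exp(-x))

def make_basis(K, cdf=logistic_cdf):
    """phi_k = e_k o F, where e_0 = 1 and e_k(u) = sqrt(2) cos(k pi u) form the
    orthonormal cosine basis on [0, 1]; the phi_k are then orthonormal in L^2_nu."""
    def e(k, u):
        return np.ones_like(u) if k == 0 else np.sqrt(2.0) * np.cos(k * np.pi * u)
    return [lambda x, k=k: e(k, cdf(x)) for k in range(K)]
```

The orthonormality can be verified numerically: the Gram matrix of the $\phi_k$ under samples from $\nu$ is close to the identity.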
$S_i^{(q)}(x_{i:N}^i) := \frac{g(x_N^i)}{(1 + |x_i^i|^2)^{q/2}} + \sum_{j=i}^{N-1} \frac{f_j\big(x_j^i,\; y_{j+1}^{(q)}(x_{j+1}^i)\, (1 + |x_{j+1}^i|^2)^{q/2}\big)}{(1 + |x_i^i|^2)^{q/2}}\, \Delta,$
we can rewrite the exact solution as $y_i^{(q)}(x) = \mathbb{E}\left[ S_i^{(q)}(X_{i:N}^i) \,\middle|\, X_i^i = x \right]$, $x \in \mathbb{R}^d$. Under mild conditions on $f$, $g$ and $\nu$, $S_i^{(q)}(X_{i:N}^i)$ is square-integrable, and therefore $y_i^{(q)}$ is in $L^2_\nu(\mathbb{R}^d)$; thus $y_i^{(q)}(x) = \sum_{k \in \mathbb{N}^d} \alpha_{i,k}^{(q)} \phi_k(x)$ for some coefficients $(\alpha_{i,k}^{(q)} : k \in \mathbb{N}^d)$. Using the orthonormality of the basis functions $\phi_k$,
$\alpha_{i,k}^{(q)} = \big( y_i^{(q)}, \phi_k \big)_{L^2_\nu(\mathbb{R}^d)} = \mathbb{E}\big[ y_i^{(q)}(X_i^i)\, \phi_k(X_i^i) \big] = \mathbb{E}\Big[ \mathbb{E}\big[ S_i^{(q)}(X_{i:N}^i) \,\big|\, X_i^i \big]\, \phi_k(X_i^i) \Big] = \mathbb{E}\big[ S_i^{(q)}(X_{i:N}^i)\, \phi_k(X_i^i) \big],$
thus allowing the use of Monte Carlo simulation to compute the Fourier coefficients. The resulting Algorithm 1 is shown below.
**Algorithm 1.** Global Quasi-Regression Multistep-forward Dynamical Programming (GQRMDP) algorithm.

Initialization. Set $\bar{y}_N^{(q,M)}(x_N) := g(x_N) / (1 + |x_N|^2)^{q/2}$.

Backward iteration for $i = N-1$ down to $i = 0$:
$\bar{y}_i^{(q,M)}(\cdot) := \sum_{k \in \Gamma} \bar{\alpha}_{i,k}^{(q,M)} \phi_k(\cdot),$ (9)
where for all $k \in \Gamma$,
$\bar{\alpha}_{i,k}^{(q,M)} := \frac{1}{M} \sum_{m=1}^M S_i^{(q,M)}(X_{i:N}^{i,m})\, \phi_k(X_i^{i,m}),$ (10)
and
$S_i^{(q,M)}(x_{i:N}^i) := \frac{g(x_N^i)}{(1 + |x_i^i|^2)^{q/2}} + \sum_{j=i}^{N-1} \frac{f_j\big(x_j^i,\; T_{L^\star} \bar{y}_{j+1}^{(q,M)}(x_{j+1}^i)\, (1 + |x_{j+1}^i|^2)^{q/2}\big)}{(1 + |x_i^i|^2)^{q/2}}\, \Delta.$
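To make the backward iteration concrete, here is a minimal one-dimensional sketch of the GQRMDP scheme (our own illustrative code: the sampling measure, basis, truncation level and all names are assumptions, and the repeated forward simulation is done naively rather than in parallel):

```python
import numpy as np

def gqrmdp(g, f, b, sigma, T, N, M, phis, sample_nu, q=0, L=10.0, seed=0):
    """One-dimensional sketch of the GQRMDP backward iteration.

    phis      : basis functions phi_k, orthonormal in L^2_nu
    sample_nu : sample_nu(rng, M) -> (M,) draws from the initial law nu
    L         : level of the truncation operator T_L (clipping to [-L, L])
    """
    rng = np.random.default_rng(seed)
    Delta = T / N
    t = np.linspace(0.0, T, N + 1)
    w = lambda x: (1.0 + x**2) ** (q / 2.0)        # weight (1 + |x|^2)^{q/2}
    alphas = {}                                     # alphas[i] = coefficients (10)

    def y_bar(i, x):
        """bar y_i^{(q,M)}: weighted terminal condition at i = N, else expansion (9)."""
        if i == N:
            return g(x) / w(x)
        val = sum(a * phi(x) for a, phi in zip(alphas[i], phis))
        return np.clip(val, -L, L)                  # truncation T_L

    for i in range(N - 1, -1, -1):                  # backward in time
        Xi = sample_nu(rng, M)                      # X_i^{i,m} ~ nu, m = 1..M
        wXi, S, Xj = w(Xi), np.zeros(M), Xi.copy()
        for j in range(i, N):                       # re-simulate forward from index i
            dW = rng.normal(0.0, np.sqrt(Delta), M)
            Xj1 = Xj + b(t[j], Xj) * Delta + sigma(t[j], Xj) * dW
            # driver term of S_i^{(q,M)}, using the already built bar y_{j+1}
            S += f(t[j], Xj, y_bar(j + 1, Xj1) * w(Xj1)) * Delta / wXi
            Xj = Xj1
        S += g(Xj) / wXi                            # terminal term (Xj is now X_N)
        alphas[i] = [np.mean(S * phi(Xi)) for phi in phis]
    return lambda x: y_bar(0, x) * w(x)             # unweighted approximation of y_0
```

For a constant driver $f \equiv 1$ and $g \equiv 0$, the exact solution is $y_0(x) = T$, which the scheme recovers up to Monte Carlo noise in the higher coefficients.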

## 3. Discussion

An implementation of the GQRMDP algorithm on GPUs is proposed. It includes two kernels: one simulates the paths of the forward process and computes the associated responses; the other computes the regression coefficients $(\alpha_{i,k}^{(q)}, k \in \Gamma)$. In the first kernel, the initial value of each simulated path of the forward process is stored in a device vector in global memory, to be read later by the second kernel. In order to minimize the number of memory transactions and therefore maximize performance, all accesses to global memory have been implemented in a coalesced way. The random numbers needed for the path generation of the forward process were generated on the fly (inline generation), taking advantage of the NVIDIA cuRAND library [1] and the MRG32k3a generator proposed by L'Ecuyer in [2]; inside this kernel the random number generator is called as needed. Another approach would be to pre-generate the random numbers in a separate previous kernel, store them in GPU global memory and read them back from device memory in the next kernel. Both alternatives have advantages and drawbacks; in this work we have chosen inline generation because it is faster and saves global memory. Besides, no register spilling was observed in the implementation, and the quality of the obtained solutions is similar to the accuracy of traditional sequential CPU solutions achieved with more complex random number generators. In the second kernel, in order to compute the regression coefficients, we parallelize not only over the multi-indices $k \in \Gamma$ but also over the simulations $1 \le m \le M$: blocks of threads parallelize the outer loop over $k \in \Gamma$, whilst the threads inside each block carry out in parallel the inner loop traversing the vectors of responses and simulations.
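The two-kernel structure described above can be mirrored on the CPU with NumPy (a sketch under our own naming; the actual implementation is CUDA code, which the paper does not list): the first stage plays the role of the path-simulation kernel, drawing the initial values inline and storing them, while the second stage computes all coefficients at once, with the matrix product standing in for the per-block parallel reduction over the simulations $m$.

```python
import numpy as np

def responses_stage(rng, M, simulate_response):
    """Stand-in for the first kernel: draw the initial values X_i^{i,m} inline
    (as with cuRAND on the fly), simulate the paths and compute the responses."""
    X0 = rng.logistic(size=M)            # assumed sampling measure nu
    S = simulate_response(rng, X0)       # response S_i^{(q,M)} per simulated path
    return X0, S

def coefficients_stage(phis, X0, S):
    """Stand-in for the second kernel: one 'block' per index k computes
    alpha_k = (1/M) sum_m S^m phi_k(X0^m); the sum over m is the reduction
    carried out in parallel by the threads of the block."""
    Phi = np.stack([phi(X0) for phi in phis])   # (K, M) basis evaluations
    return Phi @ S / len(S)                     # (K,) regression coefficients
```

In the GPU version each row of `Phi @ S` corresponds to one thread block accumulating its partial sums in shared memory before a final reduction.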

## Conflicts of Interest

The authors declare no conflict of interest.

## References

1. NVIDIA cuRAND Web Page. Available online: https://developer.nvidia.com/curand (accessed on 5 October 2018).
2. L'Ecuyer, P. Good parameters and implementations for combined multiple recursive random number generators. Oper. Res. 1999, 47, 159–164.

## Share and Cite

MDPI and ACS Style

Gobet, E.; Salas, J.G.L.; Vázquez, C. Quasi-Regression Monte-Carlo Method for Semi-Linear PDEs and BSDEs. Proceedings 2019, 21, 44. https://doi.org/10.3390/proceedings2019021044
