Article

A Generalized Finite Difference Method for Solving Hamilton–Jacobi–Bellman Equations in Optimal Investment

1 Department of Mathematics, Jinan University, Guangzhou 510632, China
2 College of Business, Texas A&M University-Commerce, Commerce, TX 75428, USA
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(10), 2346; https://doi.org/10.3390/math11102346
Submission received: 25 April 2023 / Revised: 15 May 2023 / Accepted: 15 May 2023 / Published: 17 May 2023

Abstract

This paper studies numerical algorithms for stochastic control problems in investment optimization. Investors choose the optimal investment to maximize the expected return under uncertainty. The optimality condition, the Hamilton–Jacobi–Bellman (HJB) equation satisfied by the value function and obtained by the dynamic programming method, is a partial differential equation coupled with optimization. One of the major computational difficulties is the irregular boundary conditions that may be present in the HJB equation. In this paper, two mesh-free algorithms are proposed to solve two different cases of HJB equations with regular and irregular boundary conditions. The model of optimal investment under uncertainty developed by Abel is used to study the efficacy of the proposed algorithms. Extensive numerical studies are conducted to test the impact of the key parameters on the numerical efficacy. By comparing the numerical solution with the exact solution, the proposed numerical algorithms are validated.

1. Introduction

The stochastic optimal control problem of investment optimization, which involves making investment strategies under uncertainty, has always been an important issue in finance and economics. In 1970, Merton [1] conducted a pioneering study on stochastic optimal control in finance, which brought widespread attention to the application of optimal control in finance. Similar theoretical and applied analysis in capital investment includes Abel [2], Karatzas [3], Dixit and Pindyck [4], and Zhou and Li [5].
Stochastic optimal control problems are usually difficult to solve directly in terms of control and state variables. Therefore, most existing studies use indirect methods which are based on the optimality conditions derived from the problem. The indirect methods include dynamic programming [6] and the maximum principle [7,8,9]. The dynamic programming method derives the Hamilton–Jacobi–Bellman (HJB) equation satisfied by the value function that maximizes the objective function. The optimal control problem is then converted into a problem of solving partial differential equations (PDEs) coupled with optimization. The resulting PDEs are often nonlinear, and thus remain challenging to solve. By solving the corresponding HJB equations, the optimal control and value function of the control problems are derived. This approach is the most commonly used solution technique in finance [10,11,12].
The classical numerical algorithms for HJB equations are grid methods (mesh methods) such as the finite difference method (FDM). Wang et al. [13] approximated the viscosity solution of the HJB equation using the upwind finite difference. Peyrl et al. [14] proposed a successive approximation algorithm, where the partial differential equations are solved with the upwind difference scheme. Ma and Ma [15] developed iterative finite difference methods with policy iterations to solve the sequence of decoupled HJB equations. Inoue et al. [16] investigated the convergence properties of the upwind difference scheme for HJB equations. Forsyth et al. [17] studied the optimal control problem in nonlinear option pricing, using Newton’s iteration method to solve the implicit discretized control equations. Wang et al. [12] used a fully implicit time-stepping finite difference method to solve a nonlinear HJB equation derived from an asset allocation problem based on the mean-variance approach. In addition, there are the finite element method [18,19,20,21,22,23] and the finite volume method [24,25,26].
Mesh methods have been widely studied and applied to partial differential equations. However, mesh methods require a regular mesh of grids and are only applicable to regular boundaries, whereas, in dynamic stochastic economic models, the natural modeling domain may have an irregular shape [27,28]. For example, in the option pricing problem, the pricing model for American lookback options can be described as a two-dimensional free boundary problem [29,30,31]. De Angelis et al. [32] considered the problem of optimal entry into an irreversible investment scheme with a cost function that is non-convex in the control variables, where it is necessary to study optimal entry policies with an irregular boundary. If the state space is irregular, special handling of the region interior and boundary is required, which may increase the computational complexity [33].
As a consequence, mesh-free numerical methods have been proposed. Relevant research on meshless or semi-meshless methods for solving partial differential equations includes the global radial basis function method [34], the least-squares collocation radial basis function method [35], the Haar wavelet collocation method [36], the Chebyshev method [37], the generalized finite difference method (hereafter GFDM) (see Benito et al. [38,39]), etc. Among them, the GFDM is one of the most promising methods. Based on Taylor series expansion and moving least squares approximation, the GFDM approximates the spatial derivatives at each discrete node as a linear combination of node values in its neighborhood. It overcomes the dependence of the traditional difference method on a regular grid and allows more flexibility in obtaining approximations for functions. Existing research work, such as nonlinear parabolic and hyperbolic partial differential equations studied by Benito et al. [40] and Ureña et al. [41], and linear elliptic partial differential equations studied by Gavete et al. [39], showed that the GFDM is very effective in solving second-order partial differential equations.
Our major contribution is to solve the HJB equation using a numerical scheme based on a meshless method, the GFDM, which is suitable for handling both regular and irregular boundaries. We propose two numerical algorithms, one for the general case and the other for the case where the optimal control in the HJB equation can be obtained explicitly. For general cases, we propose a successive approximation algorithm which combines the GFDM discretization scheme with an optimization algorithm. For special cases with an explicit expression of the optimal control, we first obtain the discretization scheme using the GFDM and then solve the discretized nonlinear equations using Newton’s iterative method. We carry out experiments under regular and irregular boundaries and verify the validity of the proposed numerical algorithms. We also test the effect of key parameters on the accuracy of the proposed algorithms, including the total number of points, the weighting functions, and the irregularity index.
The remainder of this paper is organized as follows. Section 2 presents a review of the HJB equation and the GFDM. Section 3 develops the numerical methods to solve the HJB equation. In Section 4, an example of the optimal investment problem is applied to demonstrate the efficiency and accuracy of the numerical methods proposed. Moreover, the key parameters of the algorithms are also tested to verify the validity of the algorithms. Section 5 provides concluding remarks.

2. Review of HJB Equation and GFDM

2.1. Review of Stochastic Optimal Control Problem and HJB Equation

In this section, we will give a review of stochastic optimal control problems over an infinite time horizon. Let us consider the state variable x ( s ) which satisfies the following stochastic differential equation
$$dx(s) = g(x, u)\,ds + \sigma(x, u)\,dw, \qquad x(t) = x,$$
together with the infinite horizon objective functional
$$J(t, x, u(\cdot)) = E \int_t^{+\infty} e^{-\alpha (s - t)} f(x, u(\cdot))\,ds.$$
The goal of the optimal control problem is to maximize $J(t, x, u(\cdot))$, that is
$$\sup_{u(\cdot)} J(t, x, u(\cdot)),$$
where we assume that u is the control and the set of admissible controls has the form
$$U_{ad} = \{ u \in U \mid u(t) \in U_{ad}, \; t \geq 0 \},$$
with $U = L^2(0, \infty; \mathbb{R}^m)$, and $U_{ad} \subset \mathbb{R}^m$ denotes a compact convex subset. The known constant $\alpha$ is a discount rate. $w(t)$ is a $k$-dimensional standard Brownian motion on a probability space $(\Omega, \mathcal{F}, P)$, and $g(x, u): G \times U \to \mathbb{R}^n$ and $\sigma(x, u): G \times U \to \mathbb{R}^{n \times k}$ denote the drift and diffusion terms, respectively, which satisfy the linear growth condition and the Lipschitz condition so that the stochastic differential equation for the state variable $x(t) \in G$ has a unique strong solution [42], where $G$ is an open and bounded set in $\mathbb{R}^n$.
We define the value function of the problem as follows:
$$V(t, x) = \sup_{u(\cdot)} E \int_t^{+\infty} e^{-\alpha (s - t)} f(x, u(\cdot))\,ds,$$
and set $V(x) := V(0, x)$. This value function gives the best value for every initial condition, given the set of admissible controls. Define the Hamiltonian as
$$H(x, DV, D^2 V) = \sup_{u(\cdot)} \left[ f(x, u) + A(x, u) V(x) \right].$$
The value function is characterized as the solution of the Hamilton–Jacobi–Bellman (HJB) equation
$$\alpha V(x) = \sup_{u(\cdot)} \left[ f(x, u) + A(x, u) V(x) \right], \qquad x \in G,$$
where the operator A is defined as
$$A(x, u) = \frac{1}{2} \sum_{i,j} a_{ij} \frac{\partial^2}{\partial x_i \partial x_j} + \sum_i g_i \frac{\partial}{\partial x_i},$$
and $a = \sigma(x, u) \sigma(x, u)^T$ is a symmetric matrix.
The HJB Equation (6) involves two subproblems to be solved: the optimization of the Hamiltonian and the solution of the partial differential equation.

2.2. Review of Generalized Finite Difference Method

In this paper, the GFDM, mainly based on the Taylor series expansion and the weighted least squares method, is used to approximate the spatial derivatives. In order to derive the GFD scheme for the HJB Equation (6), we first review the basic principles of GFDM in Benito et al. [38,39].
Assuming that the solution domain for an unknown function $V$ is $Z \subset \mathbb{R}^2$, we discretize the solution domain $Z$ into $N$ points, namely $Z = \{x_1, x_2, \ldots, x_N\}$. For each $x_c = (y_c, K_c) \in Z$, we define the subdomain $M = \{x_c; x_1, x_2, \ldots, x_p\} \subset Z$, where the center point is $x_c \in Z$ and $x_i = (y_i, K_i) \in Z$, $i = 1, \ldots, p$, is a set of points located in the neighborhood of $x_c$, selected based on certain criteria such as a four-quadrant method or a distance criterion. Define $V_c = V(x_c)$ and $V_i = V(x_i)$. By Taylor series expansion, $V_i$ expanded at the center point $x_c$ is
$$V_i = V_c + h_i \frac{\partial V_c}{\partial y} + l_i \frac{\partial V_c}{\partial K} + \frac{1}{2}\left( h_i^2 \frac{\partial^2 V_c}{\partial y^2} + l_i^2 \frac{\partial^2 V_c}{\partial K^2} + 2 h_i l_i \frac{\partial^2 V_c}{\partial y \partial K} \right) + \cdots,$$
where $h_i = y_i - y_c$ and $l_i = K_i - K_c$, with $i = 1, 2, \ldots, p$.
We retain the Taylor expansion only up to the second-order terms, so that a second-order accurate approximation of $V_i$ is obtained; this truncated approximation of $V_i$ is denoted by $v_i$. Based on formula (8), the weighted residual function can be defined as
$$F(v) = \sum_{i=1}^{p} w_i^2 \left[ -v_i + v_c + h_i \frac{\partial v_c}{\partial y} + l_i \frac{\partial v_c}{\partial K} + \frac{1}{2}\left( h_i^2 \frac{\partial^2 v_c}{\partial y^2} + l_i^2 \frac{\partial^2 v_c}{\partial K^2} + 2 h_i l_i \frac{\partial^2 v_c}{\partial y \partial K} \right) \right]^2,$$
where $w_i = w(x_i)$ denotes the weighting function at point $x_i$. The weighting functions are usually of the form $e^{-n (dist)^2}$ or $\frac{1}{dist^n}$, where $dist$ is the distance between $x_i$ and $x_c$ and $n \in \mathbb{N}$.
We define
$$\mathbf{H} = \left[ \frac{\partial v_c}{\partial y}, \frac{\partial v_c}{\partial K}, \frac{\partial^2 v_c}{\partial y^2}, \frac{\partial^2 v_c}{\partial K^2}, \frac{\partial^2 v_c}{\partial y \partial K} \right]^T.$$
The goal is to minimize the weighted residual function (9) with respect to the partial derivatives. Taking the partial derivatives of function (9) with respect to the components of (10) and setting them equal to 0, i.e., $\frac{\partial F}{\partial \mathbf{H}} = 0$, we have
$$\begin{aligned}
\frac{\partial F}{\partial \left( \frac{\partial v_c}{\partial y} \right)} &= 2 \sum_{i=1}^{p} w_i^2 h_i \Phi_i = 0, &
\frac{\partial F}{\partial \left( \frac{\partial v_c}{\partial K} \right)} &= 2 \sum_{i=1}^{p} w_i^2 l_i \Phi_i = 0, \\
\frac{\partial F}{\partial \left( \frac{\partial^2 v_c}{\partial y^2} \right)} &= 2 \sum_{i=1}^{p} w_i^2 \frac{h_i^2}{2} \Phi_i = 0, &
\frac{\partial F}{\partial \left( \frac{\partial^2 v_c}{\partial K^2} \right)} &= 2 \sum_{i=1}^{p} w_i^2 \frac{l_i^2}{2} \Phi_i = 0, \\
\frac{\partial F}{\partial \left( \frac{\partial^2 v_c}{\partial y \partial K} \right)} &= 2 \sum_{i=1}^{p} w_i^2 h_i l_i \Phi_i = 0,
\end{aligned}$$
where
$$\Phi_i = -v_i + v_c + h_i \frac{\partial v_c}{\partial y} + l_i \frac{\partial v_c}{\partial K} + \frac{1}{2}\left( h_i^2 \frac{\partial^2 v_c}{\partial y^2} + l_i^2 \frac{\partial^2 v_c}{\partial K^2} + 2 h_i l_i \frac{\partial^2 v_c}{\partial y \partial K} \right).$$
From (11), each equation is linear with respect to the five unknown partial derivatives. In order to facilitate the calculation, the above formulae can be rewritten as the matrix equation $\mathbf{A}\mathbf{H} = \mathbf{b}$, where
$$\mathbf{A} = \begin{bmatrix} h_1 & h_2 & \cdots & h_p \\ l_1 & l_2 & \cdots & l_p \\ \frac{h_1^2}{2} & \frac{h_2^2}{2} & \cdots & \frac{h_p^2}{2} \\ \frac{l_1^2}{2} & \frac{l_2^2}{2} & \cdots & \frac{l_p^2}{2} \\ h_1 l_1 & h_2 l_2 & \cdots & h_p l_p \end{bmatrix}
\begin{bmatrix} w_1^2 & & & \\ & w_2^2 & & \\ & & \ddots & \\ & & & w_p^2 \end{bmatrix}
\begin{bmatrix} h_1 & l_1 & \frac{h_1^2}{2} & \frac{l_1^2}{2} & h_1 l_1 \\ h_2 & l_2 & \frac{h_2^2}{2} & \frac{l_2^2}{2} & h_2 l_2 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ h_p & l_p & \frac{h_p^2}{2} & \frac{l_p^2}{2} & h_p l_p \end{bmatrix},$$
$$\mathbf{b} = \begin{bmatrix} h_1 & h_2 & \cdots & h_p \\ l_1 & l_2 & \cdots & l_p \\ \frac{h_1^2}{2} & \frac{h_2^2}{2} & \cdots & \frac{h_p^2}{2} \\ \frac{l_1^2}{2} & \frac{l_2^2}{2} & \cdots & \frac{l_p^2}{2} \\ h_1 l_1 & h_2 l_2 & \cdots & h_p l_p \end{bmatrix}
\begin{bmatrix} w_1^2 & & & \\ & w_2^2 & & \\ & & \ddots & \\ & & & w_p^2 \end{bmatrix}
\begin{bmatrix} v_1 - v_c \\ v_2 - v_c \\ \vdots \\ v_p - v_c \end{bmatrix}.$$
Let
$$\mathbf{Q} = \begin{bmatrix} h_1 & h_2 & \cdots & h_p \\ l_1 & l_2 & \cdots & l_p \\ \frac{h_1^2}{2} & \frac{h_2^2}{2} & \cdots & \frac{h_p^2}{2} \\ \frac{l_1^2}{2} & \frac{l_2^2}{2} & \cdots & \frac{l_p^2}{2} \\ h_1 l_1 & h_2 l_2 & \cdots & h_p l_p \end{bmatrix}, \qquad
\mathbf{W} = \begin{bmatrix} w_1^2 & & & \\ & w_2^2 & & \\ & & \ddots & \\ & & & w_p^2 \end{bmatrix},$$
and
$$\mathbf{v} = (v_1, v_2, \ldots, v_p)^T, \qquad \mathbf{1} = (1, 1, \ldots, 1)^T \in \mathbb{R}^p,$$
then we have $\mathbf{A} = \mathbf{Q}\mathbf{W}\mathbf{Q}^T$ and $\mathbf{b} = \mathbf{Q}\mathbf{W}(\mathbf{v} - v_c \mathbf{1})$. Assuming that all the points in each support domain are distinct, it is easy to prove that the linear system $\mathbf{A}\mathbf{H} = \mathbf{b}$ has a unique solution, and $\mathbf{H}$ can be explicitly expressed as
$$\mathbf{H} = \mathbf{A}^{-1} \mathbf{b}.$$
By introducing the canonical basis vectors $\mathbf{e}_i$ $(i = 1, 2, \ldots, p)$, the vector $\mathbf{H}$ can be expressed as
$$\mathbf{H} = \mathbf{A}^{-1}\mathbf{Q}\mathbf{W}\mathbf{e}_1 v_1 + \cdots + \mathbf{A}^{-1}\mathbf{Q}\mathbf{W}\mathbf{e}_p v_p - \mathbf{A}^{-1}\mathbf{Q}\mathbf{W}(\mathbf{e}_1 + \cdots + \mathbf{e}_p) v_c = \sum_{i=1}^{p} \mathbf{m}_i v_i - \mathbf{m}_c v_c,$$
where $\mathbf{m}_i = \mathbf{A}^{-1}\mathbf{Q}\mathbf{W}\mathbf{e}_i = \{m_{i1}, m_{i2}, m_{i3}, m_{i4}, m_{i5}\}^T$, $\mathbf{m}_c = \{m_{c1}, m_{c2}, m_{c3}, m_{c4}, m_{c5}\}^T$, and $\mathbf{m}_c = \sum_{i=1}^{p} \mathbf{m}_i$. We can write the explicit expressions for the five unknown partial derivatives as
$$\begin{aligned}
\frac{\partial v(x_c)}{\partial y} &= \sum_{i=1}^{p} m_{i1} v_i - m_{c1} v_c + o(h_i^2, l_i^2), &&\text{with } m_{c1} = \sum_{i=1}^{p} m_{i1},\\
\frac{\partial v(x_c)}{\partial K} &= \sum_{i=1}^{p} m_{i2} v_i - m_{c2} v_c + o(h_i^2, l_i^2), &&\text{with } m_{c2} = \sum_{i=1}^{p} m_{i2},\\
\frac{\partial^2 v(x_c)}{\partial y^2} &= \sum_{i=1}^{p} m_{i3} v_i - m_{c3} v_c + o(h_i^2, l_i^2), &&\text{with } m_{c3} = \sum_{i=1}^{p} m_{i3},\\
\frac{\partial^2 v(x_c)}{\partial K^2} &= \sum_{i=1}^{p} m_{i4} v_i - m_{c4} v_c + o(h_i^2, l_i^2), &&\text{with } m_{c4} = \sum_{i=1}^{p} m_{i4},\\
\frac{\partial^2 v(x_c)}{\partial y \partial K} &= \sum_{i=1}^{p} m_{i5} v_i - m_{c5} v_c + o(h_i^2, l_i^2), &&\text{with } m_{c5} = \sum_{i=1}^{p} m_{i5}.
\end{aligned}$$
From (16), it can be seen that the spatial partial derivatives of each point can be approximated as a linear combination of the function values of each node in its support domain, which is similar to the traditional finite difference method.
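To make this construction concrete, the following Python sketch assembles the local stencil that yields the derivative approximations in formula (16) for one support domain and checks it on a quadratic test function. It is only an illustration under our own assumptions (exponential weighting with $n = 1$, a random node cloud, and our own function names); the paper's implementation was carried out in Matlab.

```python
import numpy as np

def gfdm_stencil(x_c, neighbors, n=1.0):
    """Local GFDM stencil at the centre node x_c = (y_c, K_c).

    neighbors: (p, 2) array of support-domain nodes (y_i, K_i).
    Returns (M, m_c) such that the derivative vector
    [v_y, v_K, v_yy, v_KK, v_yK] at x_c is approximated by
    M @ v_neighbors - m_c * v_c, as in formula (16).
    """
    h = neighbors[:, 0] - x_c[0]               # h_i = y_i - y_c
    l = neighbors[:, 1] - x_c[1]               # l_i = K_i - K_c
    w2 = np.exp(-n * (h**2 + l**2)) ** 2       # squared exponential weights w_i^2
    Q = np.vstack([h, l, 0.5 * h**2, 0.5 * l**2, h * l])   # 5 x p monomial matrix
    A = Q @ np.diag(w2) @ Q.T                  # A = Q W Q^T
    M = np.linalg.solve(A, Q @ np.diag(w2))    # columns are the vectors m_i
    return M, M.sum(axis=1)                    # m_c = sum_i m_i

# Check on v(y, K) = y^2 K at (1, 2): exact derivatives are [4, 1, 4, 0, 2].
rng = np.random.default_rng(0)
x_c = np.array([1.0, 2.0])
nbrs = x_c + 0.05 * rng.standard_normal((15, 2))
M, m_c = gfdm_stencil(x_c, nbrs)
H = M @ (nbrs[:, 0]**2 * nbrs[:, 1]) - m_c * (x_c[0]**2 * x_c[1])
print(np.round(H, 6))                          # approximately [4, 1, 4, 0, 2]
```

Because the test function is a quadratic polynomial, the second-order stencil reproduces its derivatives up to round-off, which is a convenient sanity check for any implementation of the scheme.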

3. Numerical Scheme of HJB Equations

In this section, we present the numerical methods for the HJB equation with a Dirichlet boundary condition arising from the corresponding infinite-horizon optimal control problem
$$\begin{cases} \alpha V(x) = \sup_{u} \left[ f(x, u) + A(x, u) V(x) \right], & x \in Z \subset \mathbb{R}^n, \\ V(x) = B(x), & x \in \partial Z. \end{cases}$$
We propose two numerical algorithms for different cases of HJB equations:
  • For the general case of the HJB equation coupled with optimization, we propose a successive approximation algorithm which combines the GFDM discretization scheme with the optimization algorithm.
  • For the special case where one can explicitly express the optimal control u by maximizing the Hamiltonian, we propose an algorithm combining the GFDM and Newton’s iterative method.

3.1. The General Case

Assuming that the state variable is $x = (y, K)$, we consider the HJB Equation (17) in two-dimensional space, which can be written in the general form
$$\alpha V(x) - \sup_{u \in U} \left[ f(x, u) + \frac{1}{2}\left( a_{11} \frac{\partial^2 V(x)}{\partial y^2} + 2 a_{12} \frac{\partial^2 V(x)}{\partial y \partial K} + a_{22} \frac{\partial^2 V(x)}{\partial K^2} \right) + g_1 \frac{\partial V(x)}{\partial y} + g_2 \frac{\partial V(x)}{\partial K} \right] = 0,$$
for $x \in Z$, where $a_{ij}(x, u) = \left( \sigma(x, u) \sigma(x, u)^T \right)_{ij}$, $i, j = 1, 2$.
We will give a discretization scheme based on the GFDM for the spatial variables in the HJB Equation (18). Assuming that the number of interior points in the solution domain is $M$, then for $c = 1, 2, \ldots, M$, we can obtain the discrete equation at each point $x_c$ as
$$\alpha V_c - \sup_{u_c \in U} \left[ f(x_c, u_c) + \frac{1}{2}\left( a_{11}(x_c, u_c) \frac{\partial^2 V_c}{\partial y^2} + 2 a_{12}(x_c, u_c) \frac{\partial^2 V_c}{\partial y \partial K} + a_{22}(x_c, u_c) \frac{\partial^2 V_c}{\partial K^2} \right) + g_1(x_c, u_c) \frac{\partial V_c}{\partial y} + g_2(x_c, u_c) \frac{\partial V_c}{\partial K} \right] = 0, \quad x_c \in Z,$$
with the boundary value $V_j$ at the boundary points $x_j$ for $j = M+1, M+2, \ldots, N$,
$$V_j = B(x_j), \qquad x_j \in \partial Z.$$
Using the formulae (16), and denoting the approximate value of $V_c$ by $v_c$ $(c = 1, 2, \ldots, M)$, we have
$$\begin{aligned}
\alpha v_c - \sup_{u_c \in U} \Bigg[ f(x_c, u_c) &+ \frac{1}{2}\bigg( a_{11}(x_c, u_c) \Big( \sum_{i=1}^{p} m_{i3} v_i - m_{c3} v_c \Big) + 2 a_{12}(x_c, u_c) \Big( \sum_{i=1}^{p} m_{i5} v_i - m_{c5} v_c \Big) \\
&\quad + a_{22}(x_c, u_c) \Big( \sum_{i=1}^{p} m_{i4} v_i - m_{c4} v_c \Big) \bigg) + g_1(x_c, u_c) \Big( \sum_{i=1}^{p} m_{i1} v_i - m_{c1} v_c \Big) \\
&\quad + g_2(x_c, u_c) \Big( \sum_{i=1}^{p} m_{i2} v_i - m_{c2} v_c \Big) \Bigg] = 0, \qquad x_c \in Z.
\end{aligned}$$
Denote the left-hand side of Equation (21) as $F_c(\mathbf{v})$, where $\mathbf{v} = (v_1, v_2, \ldots, v_M)^T$. Then, each discrete point $x_c$ corresponds to a discrete equation
$$F_c(\mathbf{v}) = 0.$$
The Hamiltonian in Equation (21) can be maximized using standard optimization tools such as “fminbnd” in Matlab, which is based on golden section search and parabolic interpolation. After solving the optimization at every discrete point, the equations depend only on the unknown discrete value functions. Therefore, we need to solve a system consisting of M discrete linear algebraic equations with M unknown variables $v_c$, that is
$$\mathbf{F}(\mathbf{v}) = [F_1(\mathbf{v}), F_2(\mathbf{v}), \ldots, F_M(\mathbf{v})]^T = \mathbf{0}.$$
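For the pointwise optimization inside (21), any bounded scalar search can be used. The sketch below, written in Python rather than the Matlab used in the paper, plays the role of “fminbnd” for a single discrete node; the function names, argument layout, and control bounds are assumptions made purely for illustration.

```python
from scipy.optimize import minimize_scalar

def maximize_hamiltonian_at_node(f, g1, g2, a11, a12, a22,
                                 x_c, dv, d2v, u_lo, u_hi):
    """Maximize the bracketed Hamiltonian in (21) at one node x_c.

    f, g1, g2, a11, a12, a22 are callables of (x, u); dv = (v_y, v_K) and
    d2v = (v_yy, v_KK, v_yK) are the GFDM derivative approximations at x_c.
    """
    def neg_H(u):
        H = (f(x_c, u)
             + 0.5 * (a11(x_c, u) * d2v[0] + a22(x_c, u) * d2v[1]
                      + 2.0 * a12(x_c, u) * d2v[2])
             + g1(x_c, u) * dv[0] + g2(x_c, u) * dv[1])
        return -H                                   # minimize -H to maximize H

    res = minimize_scalar(neg_H, bounds=(u_lo, u_hi), method='bounded')
    return res.x, -res.fun                          # optimal control, Hamiltonian value
```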
Using the GFDM discretization scheme and the optimization method given above, we propose an algorithm alternately updating the control u and the value function v . Algorithm 1 summarizes the general successive approximation algorithm which is an iterative method.
Algorithm 1 Successive Approximation Algorithm
Input: Initial control law $\tilde{u}^0$; given tolerance $\varepsilon$
Output: Approximation of the control law $u^*$ and the value function $\mathbf{v}^*$
1: Initialize $k = 0$; $u^0 = \tilde{u}^0$
2: repeat
3:   Solve the boundary value problem using the GFDM scheme in Section 3.1 for the given control $u_c^k$, i.e., solve the equations $\mathbf{F}(\mathbf{v}^k) = \mathbf{0}$, where $F_c(\mathbf{v}^k) = \alpha v_c^k - \left[ f(x_c, u_c^k) + A(x_c, u_c^k) v_c^k \right]$ for $c = 1, 2, \ldots, M$, $x_c \in Z$, with the boundary condition $v_j = B(x_j)$, $x_j \in \partial Z$.
4:   Compute $u^{k+1}$ by solving the optimization at every discrete point $x_c$: $u_c^{k+1} = \arg\sup_{u_c \in U} \left[ f(x_c, u_c) + A(x_c, u_c) v_c^k \right]$, $c = 1, 2, \ldots, M$
5:   $k = k + 1$
6: until $\|u^{k+1} - u^k\| < \varepsilon$
7: $u^* = u^{k+1}$, $\mathbf{v}^* = \mathbf{v}^{k+1}$
8: return $u^*$, $\mathbf{v}^*$
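A compact Python skeleton of the loop in Algorithm 1 is given below. The two subroutines (the GFDM linear solve for a fixed control and the node-by-node Hamiltonian maximization) are assumed to be supplied, for instance built from the sketches above; the names and default values are illustrative rather than part of the original implementation.

```python
import numpy as np

def successive_approximation(solve_linear_pde, maximize_hamiltonian,
                             u0, tol=1e-4, max_iter=50):
    """Skeleton of Algorithm 1.

    solve_linear_pde(u): solves the GFDM-discretized linear boundary value
        problem for a fixed control vector u and returns the value iterate v.
    maximize_hamiltonian(v): re-optimizes the control at every interior node
        for the current value iterate v and returns the updated control vector.
    """
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        v = solve_linear_pde(u)             # step 3: linear GFDM solve
        u_new = maximize_hamiltonian(v)     # step 4: pointwise optimization
        if np.linalg.norm(u_new - u) < tol:
            return u_new, v                 # step 6: stopping criterion met
        u = u_new
    return u, v
```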

3.2. The Special Case

For some special cases, one can explicitly express the optimal control u by maximizing the Hamiltonian in the form
$$\hat{u} = \hat{u}(x, DV, D^2 V).$$
Substituting (24) into (18), we have
$$\alpha V(x) - \left[ f(x, \hat{u}) + \frac{1}{2}\left( a_{11}(x, \hat{u}) \frac{\partial^2 V(x)}{\partial y^2} + 2 a_{12}(x, \hat{u}) \frac{\partial^2 V(x)}{\partial y \partial K} + a_{22}(x, \hat{u}) \frac{\partial^2 V(x)}{\partial K^2} \right) + g_1(x, \hat{u}) \frac{\partial V(x)}{\partial y} + g_2(x, \hat{u}) \frac{\partial V(x)}{\partial K} \right] = 0,$$
for $x \in Z$. That is, the HJB equation is transformed into a partial differential equation without the optimization. Consequently, we can obtain the optimal control and optimal value by directly solving the partial differential equation. However, quite often, the obtained equations are nonlinear partial differential equations, presenting computational challenges. We will use Newton’s iteration method to solve the system of nonlinear generalized finite difference equations.
Denote by $F_c(\mathbf{v})$ the left-hand side of the GFDM discretization of Equation (25) at the discrete point $x_c$. Similarly to the discretization scheme in the general case, we obtain the system of discrete equations
$$\mathbf{F}(\mathbf{v}) = [F_1(\mathbf{v}), F_2(\mathbf{v}), \ldots, F_M(\mathbf{v})]^T = \mathbf{0}.$$
The idea of the iterative method is to treat the system as finding a zero of the operator $\mathbf{F}: \mathbb{R}^M \to \mathbb{R}^M$,
$$\mathbf{v} \text{ solves } (26) \iff \mathbf{F}(\mathbf{v}) = \mathbf{0}.$$
Define $D$ as the differentiation operator; the derivative of $\mathbf{F}$ is then denoted by $D\mathbf{F}$. Algorithm 2 summarizes how to find the zero of $\mathbf{F}$ using Newton’s iterative method.
Algorithm 2 Newton Iterations
Input: Initial guess $\tilde{\mathbf{v}}^0$; maximum number of iterations $maxIter$; given tolerance $\varepsilon$
Output: Approximation of $\mathbf{v}$
  Initialize $k = 0$; $\mathbf{v}^0 = \tilde{\mathbf{v}}^0$
  while $k < maxIter$ do
    Obtain $F_c(\mathbf{v})$ for every discrete point using Formulas (25) and (16) arising from the GFDM
    Compute $D\mathbf{F}(\mathbf{v}^k)$ and $\mathbf{F}(\mathbf{v}^k)$ based on each discrete equation $F_c(\mathbf{v})$
    Let $\mathbf{y}^k$ solve $D\mathbf{F}(\mathbf{v}^k)\,\mathbf{y}^k = -\mathbf{F}(\mathbf{v}^k)$
    $\mathbf{v}^{k+1} = \mathbf{v}^k + \mathbf{y}^k$
    if $\|\mathbf{v}^{k+1} - \mathbf{v}^k\| < \varepsilon$ then
      $\mathbf{v} = \mathbf{v}^{k+1}$
      break
    end if
    $k = k + 1$
  end while
  return $\mathbf{v}$
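The sketch below implements the Newton loop of Algorithm 2 in Python. For brevity the Jacobian $D\mathbf{F}$ is approximated column by column with forward differences, whereas an analytic Jacobian assembled from (25) and (16) would normally be preferred; all names and default values are illustrative assumptions.

```python
import numpy as np

def newton_system(F, v0, tol=1e-4, max_iter=20, fd_eps=1e-7):
    """Newton's method for the nonlinear GFDM system F(v) = 0 (cf. Algorithm 2)."""
    v = np.asarray(v0, dtype=float)
    for _ in range(max_iter):
        Fv = F(v)
        J = np.empty((v.size, v.size))
        for j in range(v.size):             # forward-difference Jacobian column j
            vp = v.copy()
            vp[j] += fd_eps
            J[:, j] = (F(vp) - Fv) / fd_eps
        y = np.linalg.solve(J, -Fv)         # DF(v^k) y^k = -F(v^k)
        v = v + y
        if np.linalg.norm(y) < tol:
            break
    return v
```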

4. Case Study

4.1. Optimal Investment Problem

The case study presents an optimal investment model under uncertainty in Abel [2]. Consider the problem of a firm making optimal investment decisions under uncertainty. The evolution of the firm’s capital stock K ( t ) over time satisfies the following differential equation
$$dK(t) = (u(t) - \delta K(t))\,dt; \qquad K(0) = K_0,$$
where $u(t)$ is the control variable denoting the amount of capital investment, and $\delta \geq 0$ denotes the depreciation rate of the capital stock. The output price $y(t)$ satisfies the following stochastic process
$$dy(t) = \mu y(t)\,dt + \sigma y(t)\,dw(t); \qquad y(0) = y_0,$$
where $w(t)$ is a standard Brownian motion on a probability space $(\Omega, \mathcal{F}, P)$, and the positive coefficients $\mu$ and $\sigma$ represent the growth rate and the volatility, respectively.
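As an illustration of the capital and price dynamics above, the following Python sketch simulates one path by the Euler–Maruyama scheme under a given feedback investment policy. The parameter values are those used later in Section 4.2, while the horizon, step count, policy, and function names are assumptions of ours.

```python
import numpy as np

def simulate_path(u_policy, K0=1.5, y0=0.5, mu=0.01, sigma=0.2, delta=0.1,
                  T=10.0, n_steps=1000, seed=0):
    """Euler-Maruyama simulation of dK = (u - delta*K) dt and dy = mu*y dt + sigma*y dw."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    y = np.empty(n_steps + 1); K = np.empty(n_steps + 1)
    y[0], K[0] = y0, K0
    for m in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))          # Brownian increment
        u = u_policy(y[m], K[m])                   # feedback investment rate
        K[m + 1] = K[m] + (u - delta * K[m]) * dt
        y[m + 1] = y[m] + mu * y[m] * dt + sigma * y[m] * dw
    return y, K

# e.g. a constant-investment policy:
y_path, K_path = simulate_path(lambda y, K: 0.5)
```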
The output technology is assumed to be of the Cobb–Douglas form. The factors of production considered are the capital stock $K(t)$ and the labor $L(t)$. Assuming constant returns to scale, the Cobb–Douglas production function takes the form $F(K(t), L(t)) = L(t)^{\xi} K(t)^{1-\xi}$, where $0 < \xi < 1$. The profit at time $t$ is
$$\pi(t) = y(t) L(t)^{\xi} K(t)^{1-\xi} - w L(t).$$
The firm pays a fixed wage $w$, and the labor input can be adjusted instantaneously at no cost, so the firm chooses $L(t)$ to maximize the instantaneous operating income at time $t$. Taking the partial derivative of formula (30) with respect to $L(t)$ and setting it to 0 yields
$$L(t) = \left( \frac{\xi\, y(t)}{w} \right)^{\frac{1}{1-\xi}} K(t).$$
Substituting formula (31) into (30), the maximized total profit at time t becomes
$$\pi(t) = c\, y(t)^{\gamma} K(t),$$
where $c$ and $\gamma$ are constants, with $\gamma = \frac{1}{1-\xi}$ and $c = \gamma^{-\gamma} (\gamma - 1)^{\gamma - 1} w^{1-\gamma}$.
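The constants in (32) follow directly from the first-order condition (31). As a quick numerical check, the short Python sketch below uses the parameter values of Section 4.2 ($\xi = 0.2$, $w = 5$) and the initial point $(y_0, K_0) = (0.5, 1.5)$ as a test value; it verifies that maximizing the profit (30) over $L$ numerically reproduces the closed form (32).

```python
from scipy.optimize import minimize_scalar

xi, w = 0.2, 5.0                         # parameter values from Section 4.2
gamma = 1.0 / (1.0 - xi)                 # gamma = 1/(1 - xi) = 1.25
c = gamma**(-gamma) * (gamma - 1.0)**(gamma - 1.0) * w**(1.0 - gamma)   # ~0.358

# At (y, K) = (0.5, 1.5), maximize the profit (30) over the labor input L:
y, K = 0.5, 1.5
res = minimize_scalar(lambda L: -(y * L**xi * K**(1.0 - xi) - w * L),
                      bounds=(1e-9, 10.0), method='bounded')
print(c * y**gamma * K, -res.fun)        # both approximately 0.2256
```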
The firm will incur certain adjustment costs in its operation. Assuming that the convex adjustment cost generated by capital is h ( u ( t ) ) , the actual profit at time t is
$$p(t) = c\, y(t)^{\gamma} K(t) - h(u(t)).$$
The firm’s goal is to choose the capital investment $u(t)$ that maximizes the expected discounted cash flow at a given discount rate $\alpha$ over an infinite time horizon. This problem is a standard stochastic optimal control problem, and the value function is
$$V(y_0, K_0) = \sup_{u(t) \in U} E \int_0^{+\infty} e^{-\alpha t} \left[ c\, y(t)^{\gamma} K(t) - h(u(t)) \right] dt,$$
where U represents the set of all admissible controls.
Assume that the convex adjustment cost is $h(u(t)) = \frac{u(t)^2}{2}$. Using the dynamic programming principle and Itô’s formula, we can obtain that the value function $V$ satisfies the following HJB equation
$$-\frac{1}{2} \sigma^2 y^2 \frac{\partial^2 V}{\partial y^2} + \alpha V = \sup_{u(\cdot)} \left[ V_x \cdot g(x, u) + f(x, u) \right] = \sup_{u(\cdot)} \left[ \mu y \frac{\partial V}{\partial y} + (u - \delta K) \frac{\partial V}{\partial K} + c y^{\gamma} K - \frac{1}{2} u^2 \right].$$
In order to solve Equation (35) using numerical methods, a Dirichlet condition is imposed at the boundary $\partial Z$: $V(y, K) = B(y, K)$. Then, the problem to be solved is
$$\begin{cases} -\dfrac{1}{2} \sigma^2 y^2 \dfrac{\partial^2 V}{\partial y^2} + \alpha V = \sup_{u(\cdot)} \left[ \mu y \dfrac{\partial V}{\partial y} + (u - \delta K) \dfrac{\partial V}{\partial K} + c y^{\gamma} K - \dfrac{1}{2} u^2 \right], & (y, K) \in Z, \\ V(y, K) = B(y, K), & (y, K) \in \partial Z, \end{cases}$$
where
$$B(y, K) = A K y^{\gamma} + \frac{A^2}{2\left[ \alpha - 2\gamma\mu - \gamma(2\gamma - 1)\sigma^2 \right]}\, y^{2\gamma},$$
with
$$A = \frac{c}{\alpha + \delta - \gamma\mu - \frac{1}{2}\gamma(\gamma - 1)\sigma^2}.$$

4.2. Numerical Results

From (35), the optimal control can be obtained explicitly: since the maximand is a concave quadratic in $u$, the supremum is attained at $\hat{u} = \frac{\partial V}{\partial K}$. Therefore, we are able to use both algorithms proposed in Section 3 to obtain the numerical approximations of $V$ and $u$ in problem (36). The effectiveness of the proposed schemes is verified for both regular and irregular node distributions, and the influence of the related parameters on the numerical accuracy is explored.
We first set the basic parameters involved in the problem (34) as follows:
$$\mu = 0.01, \quad \sigma = 0.2, \quad \delta = 0.1, \quad \xi = 0.2, \quad w = 5, \quad \alpha = 0.2, \quad y_0 = 0.5, \quad K_0 = 1.5.$$
The errors are evaluated using the following formulae:
$$\text{global error} = \frac{ \sqrt{ \dfrac{1}{N} \sum_{i=1}^{N} \left[ approx(i) - exact(i) \right]^2 } }{ \left| exact_{max} \right| },$$
$$\text{mean relative error (mre)} = \frac{1}{N} \sum_{i=1}^{N} \left| \frac{approx(i) - exact(i)}{exact(i)} \right|,$$
where $N$ represents the total number of points considered in the solution domain, $approx(i)$ and $exact(i)$ represent the approximate solution and the exact solution at point $x_i$, respectively, and $exact_{max}$ represents the maximum of the exact values over the points of all support domains. The analytical solution of Equation (35) is
$$V(y, K) = \frac{c}{\alpha + \delta - \gamma\mu - \frac{1}{2}\gamma(\gamma - 1)\sigma^2}\, K y^{\gamma} + \frac{ \left[ \dfrac{c}{\alpha + \delta - \gamma\mu - \frac{1}{2}\gamma(\gamma - 1)\sigma^2} \right]^2 }{ 2\left[ \alpha - 2\gamma\mu - \gamma(2\gamma - 1)\sigma^2 \right] }\, y^{2\gamma},$$
and
$$\hat{u} = \frac{c}{\alpha + \delta - \gamma\mu - \frac{1}{2}\gamma(\gamma - 1)\sigma^2}\, y^{\gamma}.$$
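For reference, the analytical solution and the two error measures above can be evaluated as follows. This Python sketch is only a transcription of the formulas with the parameter values of this section; the helper names are ours, and the interpretation of $exact_{max}$ as the maximum absolute exact value over the evaluated points is an assumption.

```python
import numpy as np

mu, sigma, delta, xi, w, alpha = 0.01, 0.2, 0.1, 0.2, 5.0, 0.2
gamma = 1.0 / (1.0 - xi)
c = gamma**(-gamma) * (gamma - 1.0)**(gamma - 1.0) * w**(1.0 - gamma)
A = c / (alpha + delta - gamma * mu - 0.5 * gamma * (gamma - 1.0) * sigma**2)

def V_exact(y, K):
    # value function: A*K*y^gamma + A^2 / (2*[alpha - 2*gamma*mu - gamma*(2*gamma-1)*sigma^2]) * y^(2*gamma)
    return (A * K * y**gamma
            + A**2 / (2.0 * (alpha - 2.0 * gamma * mu
                             - gamma * (2.0 * gamma - 1.0) * sigma**2)) * y**(2.0 * gamma))

def u_exact(y):
    return A * y**gamma                     # optimal control, u_hat = dV/dK

def global_error(approx, exact):
    return np.sqrt(np.mean((approx - exact)**2)) / np.max(np.abs(exact))

def mean_relative_error(approx, exact):
    return np.mean(np.abs((approx - exact) / exact))
```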
The threshold $\varepsilon$ used to terminate the iteration is set as 10⁻⁴, and the maximum number of iterations $maxIter$ is set as 20. All examples in this paper are implemented using Matlab R2017b on a 2.10 GHz desktop computer with 12 GB of memory.
The default weighting function is chosen as $e^{-dist^2}$, and the points that constitute the support domain are selected based on the distance criterion (see Figure 1).
According to [43], the degree of irregularity of the node clouds also has an impact on the accuracy of the results. Therefore, under the irregular node distribution, the influence of the irregularity index of the cloud of nodes on the numerical solution is mainly considered. Following [43], we define the irregularity index of the clouds of nodes (IIC) as follows. The distance from a node $x_i$ $(i = 1, 2, \ldots, p)$ to the center node $x_k$ in each support domain is
$$d_i = \sqrt{h_i^2 + l_i^2}.$$
Taking each point $x_k$ in the solution domain as the central node, the average distance within the support domain where it is located is
$$r_{m_k} = \frac{\sum_{i=1}^{p} d_i}{p},$$
where p represents the number of nodes in each support domain except the central node.
Then, the average distance from all support domains in the entire solution domain is
$$r_{mc} = \frac{\sum_{k=1}^{M} r_{m_k}}{M},$$
where M represents the number of interior points in the solution domain. Thus, the irregularity index of clouds of nodes can be defined as
$$IIC = \sqrt{ \frac{ \sum_{k=1}^{M} \left( r_{m_k} - r_{mc} \right)^2 }{M} }.$$
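A direct transcription of the IIC definition is sketched below in Python. It assumes that the interior nodes are listed first and that the support-domain indices (excluding the centre node) are already available, which is an implementation choice on our part.

```python
import numpy as np

def iic(nodes, support_indices):
    """Irregularity index of a cloud of nodes.

    nodes: (N, 2) array of coordinates; support_indices[k] holds the indices
    of the p neighbours of interior node k (the centre itself excluded).
    """
    r_mk = np.array([
        np.mean(np.linalg.norm(nodes[idx] - nodes[k], axis=1))   # average of d_i
        for k, idx in enumerate(support_indices)
    ])
    r_mc = r_mk.mean()                           # average distance over the domain
    return np.sqrt(np.mean((r_mk - r_mc)**2))    # spread of r_mk around r_mc
```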
To discuss the convergence of the proposed algorithms, we evaluate the order of convergence using the following formula:
$$order(V_k) = \frac{ \log \dfrac{ \| V_{k-1} - V^* \|_2 }{ \| V_k - V^* \|_2 } }{ \log \dfrac{ h_{k-1} }{ h_k } },$$
where $V^*$, $V_k$, $\|V_k - V^*\|_2$, and $h_k$ denote the exact value of the function $V$, the approximate solution of the function $V$, the $L_2$ norm of the error, and the step size of the $k$-th set of points, respectively.
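The observed order can be computed from any two successive refinements; a minimal helper is sketched below, with the usage example taking its values from the Algorithm 1 column of Table 2.

```python
import numpy as np

def observed_order(err_coarse, err_fine, h_coarse, h_fine):
    """Order of convergence estimated from two successive sets of points."""
    return np.log(err_coarse / err_fine) / np.log(h_coarse / h_fine)

# N = 361 -> 529 refinement for Algorithm 1 in Table 2:
print(observed_order(1.9458e-3, 1.3199e-3, 1/36, 1/44))   # approximately 1.93
```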

4.2.1. The Regular Node Distributions

Consider the regular node distribution given in Figure 2. We set the solution domain as a square of side one-half, $[0.5, 1] \times [1.5, 2]$, and the total number of nodes N is set as 64, 225, 361, and 529, respectively. The errors of the value function V and the optimal control u are listed in Table 1. It can be seen that our proposed algorithms perform well over the regular domain, and the greater the number of nodes, the smaller the errors. Since we use the exact form of the optimal control in Algorithm 2, it is not surprising that the errors of u obtained by Algorithm 2 are smaller than those obtained by Algorithm 1. The CPU time in seconds and the order of convergence of v are listed in Table 2. The results show that our algorithms are stable and close to second-order convergence.
In addition, we implement the standard FDM [44] to solve our problem, where central-difference approximations for the first- and second-order derivatives are adopted; the results obtained by the GFDM and by the FDM are then compared in Table 3. It is observed that the results from the GFDM have a greater level of accuracy than those obtained by the FDM.
We then investigate the stability property of the proposed algorithms. Table 4 gives the global error and the mean relative error under different numbers of nodes and weighting functions with the implementation of Algorithm 1. It shows consistent accuracy across various N and weighting functions. The standard deviations of the global errors of the value function V and of the optimal control u are 5.2468 × 10⁻⁶ and 1.2410 × 10⁻³, respectively. Thus, Algorithm 1 is stable in the numerical approximation. Since Algorithm 2 differs from Algorithm 1 only in replacing the optimization search with the Newton iteration, owing to the known expression of the optimal control, the stability of Algorithm 2 can be inferred. Therefore, both algorithms are efficient and robust.
The convergence histories using Algorithm 1 are shown in Figure 3. Figure 4 and Figure 5 show the comparison of the exact solution and numerical solutions of V and u , respectively, with N = 529 . In Figure 3, we see that Algorithm 1 converges to the solution of the problem after a few iterations, which demonstrates the fast convergence of the algorithm.
From Table 1 as well as Figure 4 and Figure 5, we observe that Algorithms 1 and 2 perform equally well, so we can conclude that both algorithms can be used. However, we do find that Algorithm 1 takes a longer computational time than Algorithm 2, because Algorithm 1 requires solving the optimization in each iteration as shown in Table 2.

4.2.2. The Irregular Node Distributions

We conduct experiments under two irregular node distributions with the total number of points N = 361 (we regard the rectangular boundaries used by traditional mesh algorithms as regular boundaries and non-rectangular boundaries as irregular ones). The shapes of the node clouds are shown in Figure 6. The solution domains of the two clouds of nodes are given below; a sketch for generating such node clouds follows the list.
  • Shape a: $\Omega_a = \{ (y, K) \in \mathbb{R}^2 \mid (y - 0.75)^2 + (K - 1.75)^2 \leq 0.125 \}$.
  • Shape b: $\Omega_b = \{ (y, K) \in \mathbb{R}^2 \mid 1.5 \leq K \leq -8(y - 0.75)^2 + 2,\; 0.5 \leq y \leq 1 \}$.
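As referenced above, a simple way to generate such clouds is to mask a regular grid over $[0.5, 1] \times [1.5, 2]$ with the two shape inequalities. The Python sketch below is an assumption about the construction, since the paper does not specify how its irregular node clouds were produced.

```python
import numpy as np

def nodes_in_shape(shape, n_grid=40):
    """Node cloud for the circular (a) or parabolic (b) domain."""
    y, K = np.meshgrid(np.linspace(0.5, 1.0, n_grid),
                       np.linspace(1.5, 2.0, n_grid))
    y, K = y.ravel(), K.ravel()
    if shape == 'a':     # (y - 0.75)^2 + (K - 1.75)^2 <= 0.125
        mask = (y - 0.75)**2 + (K - 1.75)**2 <= 0.125
    else:                # 1.5 <= K <= -8 (y - 0.75)^2 + 2
        mask = K <= -8.0 * (y - 0.75)**2 + 2.0
    return np.column_stack([y[mask], K[mask]])
```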
The accuracy of the numerical solutions under different shapes is shown in Table 5. The $L_2$ difference between two successive iterations obtained by Algorithm 1 is shown in Figure 7. Similarly to the case of regular nodes, Algorithm 1 converges to the solution of the problem after several iterations.
Figure 8, Figure 9, Figure 10 and Figure 11 show the exact solutions and numerical solutions of V and u under the different shapes, respectively. It can be seen that the results are as accurate under irregular node distributions as under regular ones.
We then test the influence of the irregularity index of the clouds of points (IIC) and of a key parameter, the weighting function, on the accuracy of the numerical solution using Algorithm 2.
For each support domain, p = 15 nodes were selected. Table 6 shows the numerical results for different values of IIC. It can be seen that the IIC obtained for the circular region is greater, and the error of the results is larger.
From the basic principle of the GFDM, it is known that the approximate value at the central node of each support domain is a weighted sum of the function values of the other nodes in the domain, and nodes closer to the central node have a greater impact on the approximation; hence, nodes closer to the central node should be given higher weights. Based on this feature, this paper compares two different weighting functions: the exponential function $e^{-n(dist)^2}$ and the potential function $\frac{1}{dist^n}$. Table 7 gives the numerical results under the different weighting functions. Both forms of weighting functions yield small errors. Moreover, the exponential weighting function produces more stable results with respect to changes in n than the potential function.
In sum, our two mesh-free numerical methods are robust in solving HJB equations (both general and some special form) with regular and irregular boundary conditions. The effectiveness and the accuracy of the proposed methods are evidenced by comprehensive numerical analyses.

5. Conclusions

In this paper, we present two algorithms for computing different cases of HJB equations with boundary conditions. The first algorithm, which combines the mesh-free GFD method with the successive approximation method, is proposed to solve the general case of HJB equations. The second algorithm, which combines the mesh-free GFD method with Newton’s iterative method, is proposed to solve a special case of HJB equations. Numerical examples show that both algorithms are efficient and have high accuracy over regular and irregular domains. We also tested the effect of key parameters on the accuracy of the proposed algorithms under regular and irregular node distributions, including the total number of points, the weighting functions, and the irregularity index.
Due to the meshless nature of the generalized finite difference method, the proposed algorithms can easily be applied to solve nonlinear PDEs over complicated and realistic boundaries. They can also be extended to solve HJB–FP systems arising in mean field type control problems, which will be investigated in our future study.

Author Contributions

Methodology, J.L., X.L., S.H. and Z.Y. The authors equally contributed to the research and writing of this article. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Guangdong Basic and Applied Basic Research Foundation (under Grant No. 2022A1515011267).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Merton, R.C. Analytical optimal control theory as applied to stochastic and non-stochastic economics. Doctoral Dissertation, Massachusetts Institute of Technology, Cambridge, MA, USA, 1970. [Google Scholar]
  2. Abel, A.B. Optimal investment under uncertainty. Am. Econ. Rev. 1983, 73, 228–233. [Google Scholar]
  3. Karatzas, I. Optimization problems in the theory of continuous trading. SIAM J. Control Optim. 1989, 27, 1221–1259. [Google Scholar] [CrossRef]
  4. Dixit, A.K.; Pindyck, R.S. Investment under Uncertainty; Princeton University Press: Princeton, NJ, USA, 1994. [Google Scholar]
  5. Zhou, X.Y.; Li, D. Continuous-time mean-variance portfolio selection: A stochastic LQ framework. Appl. Math. Optim. 2000, 42, 19–33. [Google Scholar] [CrossRef]
  6. Bellman, R. The theory of dynamic programming. Bull. Am. Math. Soc. 1954, 60, 503–515. [Google Scholar] [CrossRef]
  7. Ji, S.; Zhou, X.Y. A maximum principle for stochastic optimal control with terminal state constraints, and its applications. Commun. Inf. Syst. 2006, 6, 321–337. [Google Scholar] [CrossRef]
  8. Peng, S. A general stochastic maximum principle for optimal control problems. SIAM J. Control Optim. 1990, 28, 966–979. [Google Scholar] [CrossRef]
  9. Zhou, X.Y. A unified treatment of maximum principle and dynamic programming in stochastic controls. Stoch. Rep. 1991, 36, 137–161. [Google Scholar]
  10. Ma, K.; Forsyth, P.A. Numerical solution of the Hamilton–Jacobi-Bellman formulation for continuous-time mean-variance asset allocation under stochastic volatility. J. Comput. Financ. 2016, 20, 1–37. [Google Scholar] [CrossRef]
  11. Naicker, V.; Andriopoulos, K.; Leach, P.G.L. Symmetry reductions of a Hamilton–Jacobi–Bellman equation arising in financial mathematics. J. Nonlinear Math. Phys. 2005, 12, 268–283. [Google Scholar] [CrossRef]
  12. Wang, J.; Forsyth, P.A. Numerical solution of the Hamilton–Jacobi-Bellman formulation for continuous time mean variance asset allocation. J. Econ. Dyn. Control 2010, 34, 207–230. [Google Scholar] [CrossRef]
  13. Wang, S.; Gao, F.; Teo, K.L. An upwind finite-difference method for the approximation of viscosity solutions to Hamilton–Jacobi-Bellman equations. IMA J. Math. Control Inf. 2000, 17, 167–178. [Google Scholar] [CrossRef]
  14. Peyrl, H.; Herzog, F.; Geering, H.P. Numerical solution of the Hamilton–Jacobi-Bellman equation for stochastic optimal control problems. In Proceedings of the 2005 WSEAS International Conference on Dynamical Systems and Control, VCE, Venice, Italy, 2–4 November 2005; pp. 489–497. [Google Scholar]
  15. Ma, J.; Ma, J. Finite difference methods for the Hamilton–Jacobi-Bellman equations arising in regime switching utility maximization. J. Sci. Comput. 2020, 85, 1–27. [Google Scholar] [CrossRef]
  16. Inoue, D.; Ito, Y.; Kashiwabara, T.; Saito, N.; Yoshida, H. Convergence Analysis of the Upwind Difference Methods for Hamilton–Jacobi-Bellman Equations. arXiv 2023, arXiv:2301.06415. [Google Scholar]
  17. Forsyth, P.A.; Labahn, G. Numerical methods for controlled Hamilton–Jacobi-Bellman PDEs in finance. J. Comput. Financ. 2007, 11, 1–44. [Google Scholar] [CrossRef]
  18. Boulbrachene, M.; Haiour, M. The finite element approximation of Hamilton–Jacobi-Bellman equations. Comput. Math. Appl. 2001, 41, 993–1007. [Google Scholar] [CrossRef]
  19. Jensen, M.; Smears, I. On the convergence of finite element methods for Hamilton–Jacobi–Bellman equations. SIAM J. Numer. Anal. 2013, 51, 137–162. [Google Scholar] [CrossRef]
  20. Jaroszkowski, B.; Jensen, M. Finite Element Approximation of Hamilton–Jacobi-Bellman equations with nonlinear mixed boundary conditions. arXiv 2021, arXiv:2105.09585. [Google Scholar] [CrossRef]
  21. Smears, I. Hamilton-Jacobi-Bellman Equations, Analysis and Numerical Analysis. Univ. Durham, Durham, UK. Available online: http://fourier.dur.ac.uk/Ug/projects/highlights/PR4/SmearsHJBreport.pdf (accessed on 2 February 2018).
  22. Wu, S. C0 finite element approximations of linear elliptic equations in non-divergence form and Hamilton–Jacobi-Bellman equations with Cordes coefficients. Calcolo 2021, 58, 1–26. [Google Scholar] [CrossRef]
  23. Mousavi, A.; Lakkis, O.; Mokhtari, R. A least-squares Galerkin approach to gradient recovery for Hamilton–Jacobi-Bellman equation with Cordes coefficients. arXiv 2022, arXiv:205.07583. [Google Scholar]
  24. Dleuna Nyoumbi, C.; Tambue, A. A fitted finite volume method for stochastic optimal control problems in finance. AIMS Math. 2021, 6, 3053–3079. [Google Scholar] [CrossRef]
  25. Gooran Orimi, A.; Effati, S.; Farahi, M.H. Approximate solution of the Hamilton–Jacobi-Bellman equation. J. Math. Model. 2022, 10, 71–91. [Google Scholar]
  26. Richardson, S.; Wang, S. Numerical solution of Hamilton–Jacobi-Bellman equations by an exponentially fitted finite volume method. Optimization 2006, 55, 121–140. [Google Scholar]
  27. Cui, L.J.; Lin, C.D. Lattice-Gas-Automaton Modeling of Income Distribution. Entropy 2020, 22, 778. [Google Scholar] [CrossRef] [PubMed]
  28. Scheidegger, S.; Bilionis, I. Machine learning for high-dimensional dynamic stochastic economies. J. Comput. Sci. 2019, 33, 68–82. [Google Scholar] [CrossRef]
  29. Song, H.M.; Qi, Z.R. A fast numerical method for the valuation of American lookback put options. Newsl. Nonlinear Sci. Numer. Simul. 2015, 27, 302–313. [Google Scholar] [CrossRef]
  30. Song, H.M.; Wang, X.S.; Zhang, K.; Zhang, Q. Primal-Dual Active Set Method for American Lookback Put Option Pricing. East Asian J. Appl. Math. 2017, 7, 603–614. [Google Scholar] [CrossRef]
  31. Zhang, Q.; Song, H.M.; Yang, C.B.; Wu, F.F. An efficient numerical method for the valuation of American multi-asset options. J. Comput. Appl. Math. 2020, 39, 1–12. [Google Scholar] [CrossRef]
  32. De Angelis, T.; Ferrari, G.; Martyr, R.; Moriarty, J. Optimal entry to an irreversible investment plan with non convex costs. Math. Financ. Econ. 2017, 11, 423–454. [Google Scholar] [CrossRef]
  33. Lagaris, I.E.; Likas, A.C.; Papageorgiou, D.G. Neural-network methods for boundary value problems with irregular boundaries. IEEE Trans. Neural Netw. 2000, 11, 1041–1049. [Google Scholar] [CrossRef]
  34. Huang, C.S.; Wang, S.; Chen, C.S. A radial basis collocation method for Hamilton–Jacobi-Bellman equation. Automatica 2006, 42, 2201–2207. [Google Scholar] [CrossRef]
  35. Alwardi, H.; Wang, S.; Richardon, S. An adaptive least-squares collocation radial basis function method for the HJB equation. J. Glob. Optim. 2012, 52, 305–322. [Google Scholar] [CrossRef]
  36. Swaidan, W.; Hussin, A. Feedback control method using Haar wavelet operational matrices for solving optimal control problems. Abstr. Appl. Anal. 2013, 2013, 240352. [Google Scholar] [CrossRef]
  37. Mehrali-Varjani, M.; Shamsi, M.; Malek, A. Solving a class of Hamilton–Jacobi-Bellman equations using pseudospectral methods. Kybernetika 2018, 54, 629–647. [Google Scholar] [CrossRef]
  38. Benito, J.J.; Ureña, F.; Gavete, L. Influence of several factors in the generalized finite difference method. Appl. Math. Model. 2001, 25, 1039–1053. [Google Scholar] [CrossRef]
  39. Gavete, L.; Ureña, F.; Benito, J.J.; García, A.; Ureña, M.; Salete, E. Solving second order non-linear elliptic partial differential equations using generalized finite difference method. J. Comput. Appl. Math. 2017, 318, 378–387. [Google Scholar] [CrossRef]
  40. Benito, J.J.; Ureña, F.; Gavete, L. Solving parabolic and hyperbolic equations by the generalized finite difference method. J. Comput. Appl. Math. 2007, 209, 208–233. [Google Scholar] [CrossRef]
  41. Ureña, F.; Gavete, L.; García, A.; Benito, J.J.; Vargas, A.M. Solving second order non-linear parabolic PDEs using generalized finite difference method (GFDM). J. Comput. Appl. Math. 2019, 354, 221–241. [Google Scholar] [CrossRef]
  42. Kloeden, P.E.; Platen, E. Stochastic Differential Equations; Springer: Berlin/Heidelberg, Germany, 1992. [Google Scholar]
  43. Gavete, L.; Benito, J.J.; Ureña, F. Generalized finite differences for solving 3D elliptic and parabolic equations. Appl. Math. Model. 2016, 40, 955–965. [Google Scholar] [CrossRef]
  44. Smith, G.D. Numerical Solution of Partial Differential Equations: Finite Difference Methods; Oxford University Press: New York, NY, USA, 1985. [Google Scholar]
Figure 1. Criterion for selecting points in the support domain: the distance criterion.
Figure 2. Regular node distribution with N = 225.
Figure 3. $L_2$ difference between two successive iterations with 361 and 529 regular points using Algorithm 1.
Figure 4. The exact solution and numerical solution of the value function V (N = 529).
Figure 5. The exact solution and numerical solution of the optimal control u (N = 529).
Figure 6. Irregular node distributions: (a) circular domain, (b) parabolic domain.
Figure 7. $L_2$ difference between two successive iterations with irregular points using Algorithm 1.
Figure 8. The exact solution and numerical solution of the value function V under shape a.
Figure 9. The exact solution and numerical solution of the optimal control u under shape a.
Figure 10. The exact solution and numerical solution of the value function V under shape b.
Figure 11. The exact solution and numerical solution of the optimal control u under shape b.
Table 1. Numerical results of both algorithms under regular node distributions.

| N | Algorithm | Global Error (V) | mre (V) | Global Error (u) | mre (u) |
|---|---|---|---|---|---|
| 64 | 1 | 2.0781 × 10⁻⁵ | 2.9544 × 10⁻⁵ | 5.6422 × 10⁻³ | 4.0917 × 10⁻³ |
| 64 | 2 | 1.8414 × 10⁻⁵ | 4.0311 × 10⁻⁵ | 4.7054 × 10⁻³ | 3.4200 × 10⁻³ |
| 225 | 1 | 1.9742 × 10⁻⁵ | 2.5480 × 10⁻⁵ | 4.9443 × 10⁻³ | 2.5629 × 10⁻³ |
| 225 | 2 | 1.0119 × 10⁻⁵ | 1.4615 × 10⁻⁵ | 1.1014 × 10⁻³ | 7.8821 × 10⁻⁴ |
| 361 | 1 | 1.2136 × 10⁻⁵ | 1.5951 × 10⁻⁵ | 3.4399 × 10⁻³ | 1.5857 × 10⁻³ |
| 361 | 2 | 7.1307 × 10⁻⁶ | 1.2235 × 10⁻⁵ | 6.0667 × 10⁻⁴ | 7.0508 × 10⁻⁴ |
| 529 | 1 | 8.2071 × 10⁻⁶ | 1.0909 × 10⁻⁵ | 2.5735 × 10⁻³ | 1.0818 × 10⁻³ |
| 529 | 2 | 2.6321 × 10⁻⁶ | 4.7869 × 10⁻⁶ | 4.0350 × 10⁻⁴ | 2.6199 × 10⁻⁴ |
Table 2. Comparison of both algorithms on the order of convergence and computation time (s).

| N | $h_k$ | Algorithm 1: $\|V_k - V^*\|_2$ | Order ($V_k$) | Time (s) | Algorithm 2: $\|V_k - V^*\|_2$ | Order ($V_k$) | Time (s) |
|---|---|---|---|---|---|---|---|
| 64 | 1/14 | 1.1794 × 10⁻² | 1.906 | 0.46 | 5.4618 × 10⁻³ | 1.703 | 0.32 |
| 225 | 1/28 | 3.1493 × 10⁻³ | 1.905 | 1.86 | 1.7476 × 10⁻³ | 1.644 | 1.10 |
| 361 | 1/36 | 1.9458 × 10⁻³ | 1.916 | 3.09 | 1.0931 × 10⁻³ | 1.867 | 1.57 |
| 529 | 1/44 | 1.3199 × 10⁻³ | 1.934 | 4.36 | 7.4195 × 10⁻⁴ | 1.931 | 2.73 |
Table 3. Comparison of the numerical errors with the FDM and the GFDM using Algorithm 1.

| N | Method | Global Error (V) | mre (V) | Global Error (u) | mre (u) |
|---|---|---|---|---|---|
| 64 | GFDM | 2.0781 × 10⁻⁵ | 2.9544 × 10⁻⁵ | 5.6422 × 10⁻³ | 4.0917 × 10⁻³ |
| 64 | FDM | 2.1321 × 10⁻⁵ | 4.4319 × 10⁻⁵ | 7.1449 × 10⁻³ | 9.1369 × 10⁻³ |
| 225 | GFDM | 1.9742 × 10⁻⁵ | 2.5480 × 10⁻⁵ | 4.9443 × 10⁻³ | 2.5629 × 10⁻³ |
| 225 | FDM | 2.0754 × 10⁻⁵ | 3.0998 × 10⁻⁵ | 5.1921 × 10⁻³ | 6.3504 × 10⁻³ |
| 361 | GFDM | 1.2136 × 10⁻⁵ | 1.5951 × 10⁻⁵ | 3.4399 × 10⁻³ | 1.5857 × 10⁻³ |
| 361 | FDM | 1.8986 × 10⁻⁵ | 2.4111 × 10⁻⁵ | 4.1956 × 10⁻³ | 5.2431 × 10⁻³ |
| 529 | GFDM | 8.2071 × 10⁻⁶ | 1.0909 × 10⁻⁵ | 2.5735 × 10⁻³ | 1.0818 × 10⁻³ |
| 529 | FDM | 9.5459 × 10⁻⁶ | 1.2736 × 10⁻⁵ | 3.7906 × 10⁻³ | 4.5419 × 10⁻³ |
Table 4. Robustness analysis of Algorithm 1 for the number of nodes and weighting functions.

| N | Weighting Function | Global Error (V) | mre (V) | Global Error (u) | mre (u) |
|---|---|---|---|---|---|
| 64 | $e^{-0.5(dist)^2}$ | 2.0816 × 10⁻⁵ | 2.9594 × 10⁻⁵ | 5.6588 × 10⁻³ | 4.1064 × 10⁻³ |
| 64 | $e^{-1.5(dist)^2}$ | 2.0746 × 10⁻⁵ | 2.9494 × 10⁻⁵ | 5.6256 × 10⁻³ | 4.0771 × 10⁻³ |
| 64 | $1/dist^{0.5}$ | 1.8371 × 10⁻⁵ | 2.6113 × 10⁻⁵ | 4.7302 × 10⁻³ | 3.3365 × 10⁻³ |
| 64 | $1/dist$ | 1.5793 × 10⁻⁵ | 2.2421 × 10⁻⁵ | 3.8283 × 10⁻³ | 2.6050 × 10⁻³ |
| 225 | $e^{-0.5(dist)^2}$ | 1.9751 × 10⁻⁵ | 2.5492 × 10⁻⁵ | 4.9469 × 10⁻³ | 2.5643 × 10⁻³ |
| 225 | $e^{-1.5(dist)^2}$ | 1.9732 × 10⁻⁵ | 2.5468 × 10⁻⁵ | 4.9419 × 10⁻³ | 2.5614 × 10⁻³ |
| 225 | $1/dist^{0.5}$ | 9.9952 × 10⁻⁶ | 1.1013 × 10⁻⁵ | 2.4839 × 10⁻³ | 1.1695 × 10⁻³ |
| 225 | $1/dist$ | 1.4409 × 10⁻⁵ | 1.8664 × 10⁻⁵ | 3.6017 × 10⁻³ | 1.8337 × 10⁻³ |
| 361 | $e^{-0.5(dist)^2}$ | 1.2139 × 10⁻⁵ | 1.5955 × 10⁻⁵ | 3.4409 × 10⁻³ | 1.5862 × 10⁻³ |
| 361 | $e^{-1.5(dist)^2}$ | 1.2132 × 10⁻⁵ | 1.5946 × 10⁻⁵ | 3.4389 × 10⁻³ | 1.5851 × 10⁻³ |
| 361 | $1/dist^{0.5}$ | 1.0514 × 10⁻⁵ | 1.3844 × 10⁻⁵ | 2.9795 × 10⁻³ | 1.3648 × 10⁻³ |
| 361 | $1/dist$ | 8.8539 × 10⁻⁶ | 1.1680 × 10⁻⁵ | 2.5104 × 10⁻³ | 1.1440 × 10⁻³ |
| 529 | $e^{-0.5(dist)^2}$ | 8.2087 × 10⁻⁶ | 1.0911 × 10⁻⁵ | 2.5740 × 10⁻³ | 1.0820 × 10⁻³ |
| 529 | $e^{-1.5(dist)^2}$ | 8.2055 × 10⁻⁶ | 1.0907 × 10⁻⁵ | 2.5730 × 10⁻³ | 1.0815 × 10⁻³ |
| 529 | $1/dist^{0.5}$ | 7.1091 × 10⁻⁶ | 9.4666 × 10⁻⁶ | 2.2295 × 10⁻³ | 9.3215 × 10⁻⁴ |
| 529 | $1/dist$ | 5.9865 × 10⁻⁶ | 7.9877 × 10⁻⁶ | 1.8787 × 10⁻³ | 7.8164 × 10⁻⁴ |
Table 5. Numerical errors of both algorithms under irregular node distributions.

| Shape | Algorithm | Global Error (V) | mre (V) | Global Error (u) | mre (u) |
|---|---|---|---|---|---|
| a | 1 | 1.4845 × 10⁻⁵ | 1.6205 × 10⁻⁵ | 2.3411 × 10⁻³ | 1.7330 × 10⁻³ |
| a | 2 | 1.7672 × 10⁻⁵ | 2.0889 × 10⁻⁵ | 7.7989 × 10⁻⁴ | 7.8551 × 10⁻⁴ |
| b | 1 | 4.6104 × 10⁻⁵ | 4.6373 × 10⁻⁵ | 4.5303 × 10⁻³ | 4.0952 × 10⁻³ |
| b | 2 | 8.0699 × 10⁻⁶ | 1.0453 × 10⁻⁵ | 1.5865 × 10⁻³ | 1.0809 × 10⁻³ |
Table 6. Numerical errors of Algorithm 2 under irregular nodes versus IIC.

| Shape | IIC | Global Error (V) | mre (V) | Global Error (u) | mre (u) |
|---|---|---|---|---|---|
| a | 2.4362 × 10⁻² | 1.8241 × 10⁻⁵ | 3.1043 × 10⁻⁵ | 1.6932 × 10⁻³ | 1.8460 × 10⁻³ |
| b | 2.1311 × 10⁻² | 8.0699 × 10⁻⁶ | 1.0453 × 10⁻⁵ | 1.5865 × 10⁻³ | 1.0809 × 10⁻³ |
Table 7. Numerical errors of Algorithm 2 under irregular nodes versus weighting functions.

| Shape | Weighting Function | n | Global Error (V) | mre (V) | Global Error (u) | mre (u) |
|---|---|---|---|---|---|---|
| a | $e^{-n(dist)^2}$ | 0.5 | 1.7631 × 10⁻⁵ | 2.0844 × 10⁻⁵ | 7.7985 × 10⁻⁴ | 7.8864 × 10⁻⁴ |
| a | $e^{-n(dist)^2}$ | 1 | 1.7672 × 10⁻⁵ | 2.0889 × 10⁻⁵ | 7.7989 × 10⁻⁴ | 7.8551 × 10⁻⁴ |
| a | $e^{-n(dist)^2}$ | 1.5 | 1.7718 × 10⁻⁵ | 2.0936 × 10⁻⁵ | 7.7993 × 10⁻⁴ | 7.8838 × 10⁻⁴ |
| a | $1/dist^n$ | 0.5 | 2.5221 × 10⁻⁵ | 2.3074 × 10⁻⁵ | 7.4626 × 10⁻⁴ | 7.2931 × 10⁻⁴ |
| a | $1/dist^n$ | 1 | 1.8287 × 10⁻⁵ | 1.6347 × 10⁻⁵ | 7.3280 × 10⁻⁴ | 7.1948 × 10⁻⁴ |
| a | $1/dist^n$ | 1.5 | 1.3395 × 10⁻⁵ | 1.4910 × 10⁻⁵ | 7.3636 × 10⁻⁴ | 7.5369 × 10⁻⁴ |
| b | $e^{-n(dist)^2}$ | 0.5 | 8.0773 × 10⁻⁶ | 1.0463 × 10⁻⁵ | 1.5875 × 10⁻³ | 1.0816 × 10⁻³ |
| b | $e^{-n(dist)^2}$ | 1 | 8.0699 × 10⁻⁶ | 1.0453 × 10⁻⁵ | 1.5865 × 10⁻³ | 1.0809 × 10⁻³ |
| b | $e^{-n(dist)^2}$ | 1.5 | 8.0625 × 10⁻⁶ | 1.0443 × 10⁻⁵ | 1.5855 × 10⁻³ | 1.0801 × 10⁻³ |
| b | $1/dist^n$ | 0.5 | 5.7602 × 10⁻⁶ | 7.7763 × 10⁻⁶ | 1.4208 × 10⁻³ | 8.9286 × 10⁻⁴ |
| b | $1/dist^n$ | 1 | 1.3814 × 10⁻⁵ | 9.3381 × 10⁻⁶ | 3.4880 × 10⁻³ | 1.5311 × 10⁻³ |
| b | $1/dist^n$ | 1.5 | 4.9254 × 10⁻⁶ | 5.8692 × 10⁻⁶ | 1.7206 × 10⁻³ | 1.0926 × 10⁻³ |