Article

Two Approaches for a Dividend Maximization Problem under an Ornstein-Uhlenbeck Interest Rate

by Julia Eisenberg 1,*, Stefan Kremsner 2 and Alexander Steinicke 3
1 Department of Financial and Actuarial Mathematics, TU Wien, Wiedner Hauptstraße 8–10/E105-1, 1040 Vienna, Austria
2 Department of Mathematics, University of Graz, Heinrichstraße 36, 8010 Graz, Austria
3 Department of Mathematics and Information Technology, Montanuniversitaet Leoben, Peter Tunner-Straße 25/I, 8700 Leoben, Austria
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(18), 2257; https://doi.org/10.3390/math9182257
Submission received: 31 July 2021 / Revised: 8 September 2021 / Accepted: 10 September 2021 / Published: 14 September 2021

Abstract: We investigate a dividend maximization problem under stochastic interest rates with Ornstein-Uhlenbeck dynamics, a setup that also takes negative rates into account. First, a deterministic time horizon is considered, for which an explicit separating curve $\alpha(t)$ can be found that determines the optimal strategy at time t. In a second setting, we introduce a strategy-independent stopping time. The optimal control problems arising in both settings are analyzed via an analytical approach driven by the Hamilton–Jacobi–Bellman (HJB) equation and, in parallel, via backward stochastic differential equations (BSDEs).

1. Introduction

Maximizing expected discounted dividends is a popular and well-investigated topic in insurance mathematics, with roots going back to the work of Bruno De Finetti in 1957 [1]. The value of the future dividend cash flow discounted to time zero can serve as a risk measure reflecting the well-being of a company. The higher the present value of the future dividends, the more stable and profitable the company appears to potential partners, investors and creditors. The question arises of how to choose a dividend stream for the proposed risk measure. In this context, it is natural to look at the optimal choice of a dividend stream, leading to the maximal possible value of expected discounted dividends. Then, the company is evaluated more objectively, independently of any possibly unlucky future management decisions. The time horizon can be chosen to be infinite, indicating that the company intends to stay in business forever. In addition, one can stop paying dividends at the time of ruin and stress, in this way, the importance of a positive surplus for the company's rating. Alternatively, the time horizon can be given by a finite deterministic or random but strategy-independent time. The latter choice targets the company's short-term soundness.
These problems have been studied, for instance, in [2,3,4,5], with the surplus of the company modeled by a Brownian motion with drift, a Cramér-Lundberg model or a general Lévy process, and with constraints in the form of value at risk, time-inconsistent preferences or the incorporation of random funding. A general overview of the existing literature can be found in [6,7,8].
In several cases, e.g., in [2,3,9], the optimal dividend strategy turns out to be a constant barrier or a band strategy, meaning that the dividends are paid only if the surplus is above a certain barrier or inside a certain band. However, in some settings, the value function cannot be shown to be smooth enough, and therefore the optimal strategy cannot be determined analytically. Then, typically, one applies a viscosity solution approach, which allows one to find the optimal strategy numerically, for instance by the finite difference method.
An important feature of almost every setting is that one considers the value of expected discounted dividends up to some time of ruin due to the chosen dividend strategy. The dependence of the time horizon on the strategy makes the solution of the problem quite complicated, particularly in the presence of a stochastic interest rate. Several papers consider the setting with a stochastic interest rate, e.g., [9,10,11].
Taking into account the protracted negative interest rate environment (the ECB set its main refinancing rate to zero in March 2016), it seems reasonable to describe interest rates by an Ornstein-Uhlenbeck process. This class of processes is also widely used beyond the classical problems in finance and insurance, for instance, in quantitative finance for pairs trading strategies [12], in physics [13] and in biology [14].
Letting the interest rate be given by a mean-reverting Ornstein-Uhlenbeck process (the Vasicek model) leads to a two-dimensional control problem, which cannot be easily solved via the corresponding Hamilton–Jacobi–Bellman equation.
The optimal solution seems to be of a barrier type, where the barrier is given by a highly non-linear function depending on the interest rate. Therefore, it is hard to solve such a problem explicitly or even to calculate the return function corresponding to a non-linear barrier. An attempt to tackle this problem was made, for instance, in [15]. However, the value function could not be shown to be sufficiently smooth, and a viscosity solution approach was applied.
In the present paper, we close this gap in the literature by considering the dividend maximization problem in a different setup. We modify the usual setting and consider two different model scenarios. Under the assumption of an Ornstein-Uhlenbeck interest rate and a Brownian surplus, the ruin time of the ex-dividend surplus process is neglected. Instead, we introduce a finite deterministic time horizon in the first model, and a strategy-independent but surplus-dependent stochastic time horizon in the second model. Thus, we want to measure the healthiness of the insurance company within a given time period, as is usual for risk measures such as value at risk (VaR). However, the deterministic time periods we have in mind should be longer than is common for VaR, for instance, one to five years. In the case of a random time horizon, a "healthy" dividend rate and the corresponding ruin time serve as a lighthouse helping to navigate the dividend process safely.
In both models (with a deterministic and a random time horizon), one faces a three-dimensional control problem. We demonstrate two solution approaches: solving the corresponding Hamilton–Jacobi–Bellman (HJB) equation and a backward stochastic differential equation (BSDE) approach. Whilst in the first model, we are able to calculate the value function and the optimal strategy explicitly via the HJB approach, in the second model only the BSDE approach allows us to show that the value function is smooth enough.
Both approaches, applied in parallel, demonstrate their respective advantages and disadvantages. If one is able to calculate a return function that is sufficiently smooth and solves the HJB equation, the HJB approach leads to stronger results than the BSDE approach. However, the calculation of a candidate return function can be extremely time- and space-consuming. The BSDE approach, in contrast, allows one to calculate the value function and the optimal strategy numerically.
BSDEs were first introduced by Bismut [16] and later extensively studied in non-linear form by Pardoux and Peng [17]. The concept of BSDEs has proved itself very useful, especially in the context of finance and stochastic optimal control, see for example [18,19,20,21,22]. Moreover, for high-dimensional problems, there exist numerical algorithms for BSDEs which do not suffer from a curse of dimensionality (see for example [23] or the survey on BSDE numerics [24]), whereas classical HJB-related methods like finite differences to solve the PDE associated to the stochastic control problem may be very inefficient.
The remainder of the paper is structured as follows. In Section 2, we solve the problem via an HJB and a BSDE approach in a setting with a finite deterministic time horizon $T > 0$. In Section 3, we consider the same problem with a stochastic, surplus-dependent time horizon. Both sections are illustrated with numerical examples.

2. Dividend Maximization with a Deterministic Time Horizon

In the following, we consider an insurance company whose surplus is given by a Brownian motion with drift $X_t = x + \mu t + \sigma W_t$, where $\{W_t\}$ is a standard Brownian motion on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$. The considered insurance company is allowed to pay out dividends, where the accumulated dividends until $t$ are given by $C_t := \int_0^t c_s\,ds$, yielding for the post-dividend surplus $X^c$:
$X_t^c = x + \mu t + \sigma W_t - C_t.$
Let further $\{B_t\}$ be a standard Brownian motion independent of $\{W_t\}$, generating the filtration $\{\mathcal{F}_t^B\}$, augmented with the $\mathbb{P}$-null sets $\mathcal{N}$. We let the underlying filtration $\{\mathcal{F}_t\}$ be the filtration generated by $\{W_t, B_t\}$, also augmented with the probability space's $\mathbb{P}$-null sets $\mathcal{N}$. Moreover, we assume that $\mathcal{F}$ is the completed sigma-algebra generated by $\{W_t, B_t\}$. In the following, we will use the common convention $E_{t,y}[\,\cdot\,] = E[\,\cdot\,|\,Y_t = y]$ for some process $\{Y_t\}$.
We let the dividends be discounted by an Ornstein-Uhlenbeck process ($a, \delta > 0$, $b \in \mathbb{R}$):
$r_t = r e^{-at} + b\big(1 - e^{-at}\big) + \delta e^{-at} \int_0^t e^{au}\,dB_u,$
or, as an SDE,
$dr_t = a(b - r_t)\,dt + \delta\,dB_t.$
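The dynamics above can be simulated without discretization bias, because the OU transition over a step of length $h$ is Gaussian with mean $r_t e^{-ah} + b(1-e^{-ah})$ and variance $\delta^2(1-e^{-2ah})/(2a)$. The following sketch (our own illustration, not part of the paper; parameter values below are hypothetical) does exactly that:

```python
import math
import random

def simulate_ou_path(r0, a, b, delta, T, n_steps, rng):
    """Exact simulation of the Vasicek/Ornstein-Uhlenbeck rate
    dr_t = a(b - r_t) dt + delta dB_t on a uniform grid: the one-step
    transition is normal with mean r e^{-ah} + b(1 - e^{-ah}) and
    variance delta^2 (1 - e^{-2ah}) / (2a)."""
    h = T / n_steps
    decay = math.exp(-a * h)
    step_sd = delta * math.sqrt((1.0 - math.exp(-2.0 * a * h)) / (2.0 * a))
    path = [r0]
    for _ in range(n_steps):
        path.append(path[-1] * decay + b * (1.0 - decay)
                    + step_sd * rng.gauss(0.0, 1.0))
    return path
```

For $aT$ large, the simulated $r_T$ should be close to the stationary distribution, a normal law with mean $b$ and variance $\delta^2/(2a)$, which gives a quick sanity check of the scheme.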
Further, we let
$U_t^s := \int_t^s r_u\,du, \quad t \le s.$
We allow only strategies $c = \{c_t\}$ with $c_t \in [0, \xi]$ for some given, fixed real number $\xi > 0$. Such a strategy is called admissible if, moreover, $c$ is adapted to $\{\mathcal{F}_t\}$. The set of admissible strategies will be denoted by $\mathcal{A}$.
In this section, we consider a deterministic time horizon $T \in (0, \infty)$. Differently than in the classical setting, we do not stop our considerations once the surplus process ruins. As a risk measure, we consider the value of expected discounted dividends and define the return function corresponding to some admissible strategy $c = \{c_s\}$ to be
$V^c(t,r,x) = E_{t,r,x}\Big[ \int_t^T e^{-U_t^s} c_s\,ds + e^{-U_t^T} X_T^c \Big].$
The value function is then given by
$V(t,r,x) = \sup_{c \in \mathcal{A}} V^c(t,r,x), \quad (t,r,x) \in [0,T] \times \mathbb{R} \times \mathbb{R}, \qquad V(T,r,x) = x, \quad (r,x) \in \mathbb{R} \times \mathbb{R}.$

2.1. HJB Approach

The heuristically derived HJB equation corresponding to the problem (for details, see, for instance, [8]) is
$V_t + \mu V_x + \frac{\sigma^2}{2} V_{xx} + a(b-r) V_r + \frac{\delta^2}{2} V_{rr} - r V + \sup_{0 \le c \le \xi} c\,\{1 - V_x\} = 0.$

2.1.1. Payout on the Maximal Rate

In order to get a feeling for how the optimal strategy might look, we first consider the return function corresponding to the strategy "always pay out on the maximal rate $\xi$", that is, $c_s \equiv \xi$ for all $s$.
A simple calculation yields
$U_t^s = \int_t^s r_u\,du = \frac{r_t - r_s}{a} + b(s-t) + \frac{\delta}{a}(B_s - B_t).$
In order to investigate some further properties of the value function, we consider the moment generating function of $U_t^s$. Using an elementary change of measure technique (see, for instance, Schmidli ([8] p. 216)) or using the formula from Brigo and Mercurio ([25] p. 59), and letting $\tilde b := b - \frac{\delta^2}{2a^2}$, one obtains
$M(s-t, r) := E_{t,r}\big[e^{-U_t^s}\big] = e^{-\tilde b (s-t)} \exp\Big( \frac{\tilde b - r}{a}\big(1 - e^{-a(s-t)}\big) - \frac{\delta^2}{4a^3}\big(1 - e^{-a(s-t)}\big)^2 \Big).$
Then, one immediately obtains
$V^\xi(t,r,x) = \xi \int_t^T M(s-t, r)\,ds + M(T-t, r)\,E_{t,x}\big[X_T^\xi\big] = \xi \int_t^T M(s-t, r)\,ds + M(T-t, r)\big(x + (\mu - \xi)(T-t)\big),$
where $X_t^\xi = x + (\mu - \xi)t + \sigma W_t$.
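The return function in (3) can be evaluated by straightforward quadrature of the closed-form $M$. The sketch below is our own illustration (the parameter values in the checks are hypothetical), using the composite trapezoidal rule for the time integral:

```python
import math

def M(u, r, a, b, delta):
    """Closed form of E_{t,r}[exp(-U_t^{t+u})] from (2)."""
    b_tilde = b - delta ** 2 / (2.0 * a ** 2)
    B = (1.0 - math.exp(-a * u)) / a
    return math.exp(-b_tilde * u + (b_tilde - r) * B - delta ** 2 * B ** 2 / (4.0 * a))

def V_xi(t, r, x, T, mu, xi, a, b, delta, n=2000):
    """Return function of the strategy 'always pay out on the maximal rate':
    V = xi * int_t^T M(s-t, r) ds + M(T-t, r) * (x + (mu - xi)(T - t)),
    with the integral approximated by the composite trapezoidal rule."""
    h = (T - t) / n
    integral = 0.5 * h * (M(0.0, r, a, b, delta) + M(T - t, r, a, b, delta))
    for k in range(1, n):
        integral += h * M(k * h, r, a, b, delta)
    return xi * integral + M(T - t, r, a, b, delta) * (x + (mu - xi) * (T - t))
```

At $t = T$ the integral vanishes and $M(0,r) = 1$, so the value reduces to the terminal surplus $x$, which makes a convenient consistency check.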
Remark 1.
Note that the exponent of the function M defined in (2) can be split into a linear and an exponential part. For the exponential part, it obviously holds that
$\exp\Big( \frac{\tilde b - r}{a}\big(1 - e^{-a(s-t)}\big) - \frac{\delta^2}{4a^3}\big(1 - e^{-a(s-t)}\big)^2 \Big) \le \begin{cases} 1 &: \tilde b \le r, \\ e^{(\tilde b - r)/a} &: \tilde b > r. \end{cases}$
Thus, if $\tilde b \le 0$ and T is big enough, the function M will increase exponentially with growing time, reflecting a strong and long-lasting negative interest rate environment.
Assumption 1.
In the following, we assume $\tilde b > 0$.
The question arises whether the strategy “pay out on the maximal rate” might be optimal on some time interval independent of the values ( r , x ) . This question will be answered in the next section where a candidate for the value function will be inserted into HJB Equation (1).

2.1.2. Derivation of the Value Function

The form of the return function $V^\xi$ corresponding to the strategy "always pay out on the maximal rate" hints that the value function may also be linear in x. Therefore, we assume that there is a function $G(t,r)$ such that a candidate for the value function is given by
$v(t,r,x) := G(t,r) + x\,E_{t,r}\big[e^{-U_t^T}\big] = G(t,r) + x\,M(T-t, r).$
We need then to check whether v solves HJB Equation (1). In representation (4), the factor determining the optimal strategy, that is, to pay or to wait, is given by $M(T-t, r)$ and depends just on r and $T-t$. If $M(T-t, r) > 1$, it is optimal to wait, and if $M(T-t, r) < 1$, then it is optimal to pay at the maximal possible rate $\xi$.
For the convenience of explanations, we will use the following notation for sufficiently smooth functions $f(t,r)$ on $[0,T] \times \mathbb{R}$:
$\mathcal{L}(f)(t,r) := f_t(t,r) + a(b-r) f_r(t,r) + \frac{\delta^2}{2} f_{rr}(t,r) - r f(t,r).$
Lemma 1.
$M(u,r)$ solves the following differential equation
$-M_u(u,r) + a(b-r)\,M_r(u,r) + \frac{\delta^2}{2} M_{rr}(u,r) - r\,M(u,r) = 0$
with the boundary conditions $M(0,r) = 1$, $\lim_{r \to \infty} M(u,r) = 0$ and $\lim_{r \to -\infty} M(u,r) = \infty$ for $u > 0$.
Proof. 
The proof is straightforward. □
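The straightforward verification can also be sketched numerically: plugging the closed form of $M$ into the differential equation of Lemma 1 and replacing the derivatives by central finite differences leaves a residual of the order of the discretization error. This is our own sanity check, not part of the paper, with hypothetical parameter values:

```python
import math

def M(u, r, a, b, delta):
    """Closed form of the Laplace-type functional from (2)."""
    b_tilde = b - delta ** 2 / (2.0 * a ** 2)
    B = (1.0 - math.exp(-a * u)) / a
    return math.exp(-b_tilde * u + (b_tilde - r) * B - delta ** 2 * B ** 2 / (4.0 * a))

def lemma1_residual(u, r, a, b, delta, h=1e-3):
    """Residual of -M_u + a(b-r) M_r + (delta^2/2) M_rr - r M = 0,
    with all derivatives approximated by central finite differences."""
    M_u = (M(u + h, r, a, b, delta) - M(u - h, r, a, b, delta)) / (2.0 * h)
    M_r = (M(u, r + h, a, b, delta) - M(u, r - h, a, b, delta)) / (2.0 * h)
    M_rr = (M(u, r + h, a, b, delta) - 2.0 * M(u, r, a, b, delta)
            + M(u, r - h, a, b, delta)) / h ** 2
    return -M_u + a * (b - r) * M_r + 0.5 * delta ** 2 * M_rr - r * M(u, r, a, b, delta)
```

The residual should be close to zero for any $(u, r)$ at which the closed form is evaluated, while $M(0, r) = 1$ holds exactly.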
If v in (4) is indeed the value function, then the function G should fulfil
$\mathcal{L}(G)(t,r) + \mu M(T-t, r) + \sup_{0 \le c \le \xi} c\,\{1 - M(T-t, r)\} = 0.$
Our next step is to find the function G solving the above differential equation. For this purpose, we have to investigate the properties of the function M in order to get rid of the supremum expression in the differential equation above.

2.1.3. Properties of the Function M

In this section, we investigate the properties of the function M. Recall from (2) that it holds
$M(T-t, r) = E_{t,r}\big[e^{-U_t^T}\big] = \exp\Big( -\tilde b (T-t) + \frac{\tilde b - r - \frac{\delta^2}{2a^2}}{a}\big(1 - e^{-a(T-t)}\big) + \frac{\delta^2}{4a^3}\big(1 - e^{-2a(T-t)}\big) \Big).$
It is immediately clear that M is strictly decreasing in r on $[0,T) \times \mathbb{R}$.
We are searching for a curve $\alpha(t)$ such that for $(t,r)$ with $r > \alpha(t)$ it holds that $M(T-t, r) < 1$, and for $r < \alpha(t)$ it holds that $M(T-t, r) > 1$. Consider the exponent of M:
$\ln M(T-t, r) = -\tilde b (T-t) + \frac{\tilde b - r - \frac{\delta^2}{2a^2}}{a}\big(1 - e^{-a(T-t)}\big) + \frac{\delta^2}{4a^3}\big(1 - e^{-2a(T-t)}\big).$
Obviously, $\ln M(T-t, r)$ is strictly decreasing in r for $t \in [0,T]$. Solving the equation $\ln M(T-t, r) = 0$ for r guides us to define
$\alpha(t) := -\frac{a \tilde b\,(T-t)}{1 - e^{-a(T-t)}} + \frac{\delta^2}{4a^2}\big(1 + e^{-a(T-t)}\big) + \tilde b - \frac{\delta^2}{2a^2} = \tilde b \Big(1 - \frac{a(T-t)}{1 - e^{-a(T-t)}}\Big) - \frac{\delta^2}{4a^2}\big(1 - e^{-a(T-t)}\big).$
The curve α is uniquely defined. Due to $M_r < 0$ on $[0,T) \times \mathbb{R}$, α separates the sets
$S_1 := \{(t,r) : M(T-t, r) > 1\} \quad \text{and} \quad S_2 := \{(t,r) : M(T-t, r) < 1\}.$
Remark 2.
The following properties hold true:
  • $\alpha(T) = 0$ and $\alpha'(T) = \frac{a \tilde b}{2} + \frac{\delta^2}{4a} > 0$.
  • $\frac{T-t}{1 - e^{-a(T-t)}}$ is decreasing in t.
  • If $\tilde b > 0$, then, because $1 - \frac{a(T-t)}{1 - e^{-a(T-t)}} \le 0$, it holds that $\alpha(t) < 0$ for all $t \in [0,T)$.
  • Since $\tilde b > 0$, the function α is strictly increasing in t with $\alpha(t) \in [\alpha(0), 0]$.
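As a numerical illustration (our own, with hypothetical parameter values), the separating curve can be implemented directly from (6) and checked against its defining property $M(T-t, \alpha(t)) = 1$, together with the monotonicity and sign statements above:

```python
import math

def M(u, r, a, b, delta):
    """Closed form of E[exp(-U_t^{t+u})] from (2)."""
    b_tilde = b - delta ** 2 / (2.0 * a ** 2)
    B = (1.0 - math.exp(-a * u)) / a
    return math.exp(-b_tilde * u + (b_tilde - r) * B - delta ** 2 * B ** 2 / (4.0 * a))

def alpha(t, T, a, b, delta):
    """Separating curve from (6); alpha(T) = 0 by a Taylor expansion
    of the first term as T - t -> 0."""
    b_tilde = b - delta ** 2 / (2.0 * a ** 2)
    tau = T - t
    if tau == 0.0:
        return 0.0
    q = 1.0 - math.exp(-a * tau)
    return b_tilde * (1.0 - a * tau / q) - delta ** 2 * q / (4.0 * a ** 2)
```

With $\tilde b > 0$ (Assumption 1), the computed curve is negative on $[0, T)$, strictly increasing, and vanishes at $t = T$, matching Remark 2.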
Example 1.
In Figure 1, we see the function α for different parameters. In the right picture, with $\tilde b > 0$, the curve is increasing. The left picture, with $\tilde b < 0$, shows a curve that first decreases and then increases close to the time horizon $T = 5$.
As we assume $\tilde b > 0$, see Assumption 1, the curve given on the left side of Figure 1 is impossible in our setting.
Now, knowing the function α , we can investigate the properties of the remaining function G from (4).

2.1.4. Properties of the Function G

In order to find a function G such that v in (4) solves HJB Equation (1), we define an auxiliary function $v(t,r,x) = x\,M(T-t, r) + \tilde G(t,r)$ with
$\tilde G(t,r) := E_{t,r}\Big[ \int_t^T e^{-U_t^s}\,\xi\,\mathbb{1}_{[r_s > \alpha(s)]}\,ds + e^{-U_t^T} \int_t^T \big(\mu - \xi\,\mathbb{1}_{[r_s > \alpha(s)]}\big)\,ds \Big],$
that is, v is the return function corresponding to the strategy $c_s = \xi\,\mathbb{1}_{[r_s > \alpha(s)]}$. Our target is to show that the function $\tilde G(t,r)$ solves the differential equation
$\mathcal{L}(\tilde G)(t,r) + \mu M(T-t, r) + \xi\,\mathbb{1}_{[r > \alpha(t)]}\big(1 - M(T-t, r)\big) = 0$
with boundary conditions $\tilde G(T,r) = 0$, $\lim_{r \to \infty} \tilde G(t,r) = 0$ and $\lim_{r \to -\infty} \tilde G(t,r) = \infty$.
Letting
$\gamma(t,r) := E_{t,r}\Big[ \int_t^T \big(e^{-U_t^s} - e^{-U_t^T}\big)\,\mathbb{1}_{[r_s > \alpha(s)]}\,ds \Big],$
we can rewrite the function $\tilde G$ as follows:
$\tilde G(t,r) = \mu (T-t)\,M(T-t, r) + \xi\,E_{t,r}\Big[ \int_t^T \big(e^{-U_t^s} - e^{-U_t^T}\big)\,\mathbb{1}_{[r_s > \alpha(s)]}\,ds \Big] = \mu (T-t)\,M(T-t, r) + \xi\,\gamma(t,r).$
Remark 3.
Note that $M(u,r)$ solves Differential Equation (5). Therefore, for $\mu(T-t)\,M(T-t, r)$, one obtains
$\mathcal{L}\big(\mu (T-t)\,M(T-t, r)\big) = -\mu\,M(T-t, r).$
The function $\mu(T-t)\,M(T-t, r)$ attains zero at $t = T$, $\lim_{r \to \infty} \mu(T-t)\,M(T-t, r) = 0$ and $\lim_{r \to -\infty} \mu(T-t)\,M(T-t, r) = \infty$. Moreover, it holds that $\mu(T-t)\,M(T-t, \alpha(t)) = \mu(T-t)$.
Remark 4.
Consider the return function $V^\xi$ given in (3), corresponding to the strategy "always pay out on the maximal rate". In the same way as in Remark 3, one can show that $\mathcal{L}\big( \xi \int_t^T M(s-t, r)\,ds \big) = -\xi$. Together with Remark 3 (applied with μ replaced by $\mu - \xi$), it means that $V^\xi$ solves
$\mathcal{L}(V^\xi)(t,r) + \mu M(T-t, r) + \xi\big(1 - M(T-t, r)\big) = 0$
on $[0,T] \times \mathbb{R}$.
We now see that for r < α ( t ) , the function V ξ does not solve HJB Equation (1).
In the following, we concentrate on the function γ given in (10). Due to Remark 3, we need to show that γ fulfils
$\mathcal{L}(\gamma)(t,r) + \mathbb{1}_{[r > \alpha(t)]}\big(1 - M(T-t, r)\big) = 0.$
Lemma 2.
The function γ defined in (10) can be written as
$\gamma(t,r) = \int_0^{T-t} \int_{\alpha(u+t)}^\infty M(u,r)\,\big(1 - M(T-u-t, z)\big)\,\varphi(z,u,r)\,dz\,du,$
where M is given in (2) and
$\varphi(z,u,r) := \frac{1}{\sqrt{2\pi\,\frac{\delta^2}{2a}\big(1 - e^{-2au}\big)}} \exp\Bigg( -\frac{\Big(z - r e^{-au} - b\big(1 - e^{-au}\big) + \frac{\delta^2}{2a^2}\big(1 - e^{-au}\big)^2\Big)^2}{2\,\frac{\delta^2}{2a}\big(1 - e^{-2au}\big)} \Bigg).$
Proof. 
• Using Tonelli's theorem and the law of total probability, we get
$\gamma(t,r) = E_{t,r}\Big[\int_t^T \big(e^{-U_t^s} - e^{-U_t^T}\big)\mathbb{1}_{[r_s > \alpha(s)]}\,ds\Big] = \int_t^T E_{t,r}\Big[\big(e^{-U_t^s} - e^{-U_t^T}\big)\mathbb{1}_{[r_s > \alpha(s)]}\Big]\,ds = \int_t^T \int_{\alpha(s)}^\infty E_{t,r}\Big[e^{-U_t^s} - e^{-U_t^s - U_s^T}\,\Big|\,r_s = z\Big]\,\mathbb{P}_{t,r}[r_s \in dz]\,ds = \int_t^T \int_{\alpha(s)}^\infty E_{t,r}\big[e^{-U_t^s}\,\big|\,r_s = z\big]\Big(1 - E\big[e^{-U_s^T}\,\big|\,r_s = z\big]\Big)\,\mathbb{P}_{t,r}[r_s \in dz]\,ds = \int_t^T \int_{\alpha(s)}^\infty E_{t,r}\big[e^{-U_t^s}\,\big|\,r_s = z\big]\big(1 - M(T-s, z)\big)\,\mathbb{P}_{t,r}[r_s \in dz]\,ds.$
• The term $\mathbb{P}_{t,r}[r_s \in dz]$, inside the above integrals, is the density of the random variable $r_{s-t}$. Since $r_{s-t}$ is normally distributed with mean $\beta(s-t, r) := r e^{-a(s-t)} + b\big(1 - e^{-a(s-t)}\big)$ and variance $\frac{\delta^2}{2a}\big(1 - e^{-2a(s-t)}\big)$, see [26] (p. 522), it holds that
$\mathbb{P}_{t,r}[r_s \in dz] = \frac{1}{\sqrt{2\pi\,\frac{\delta^2}{2a}\big(1 - e^{-2a(s-t)}\big)}} \exp\Bigg( -\frac{\big(z - \beta(s-t, r)\big)^2}{2\,\frac{\delta^2}{2a}\big(1 - e^{-2a(s-t)}\big)} \Bigg)\,dz.$
• Consider now the first factor inside the integrals. Formula 1.8.7(1) in [26] (p. 525), along with $\big(1 - e^{-2a(s-t)}\big) = \big(1 - e^{-a(s-t)}\big)\big(1 + e^{-a(s-t)}\big)$, yields
$E_{t,r}\big[e^{-U_t^s}\,\big|\,r_s = z\big] = e^{-\tilde b (s-t)} \exp\Big( -\Big(z + r - 2b + \frac{\delta^2}{a^2}\Big)\,\frac{1 - e^{-a(s-t)}}{a\,\big(1 + e^{-a(s-t)}\big)} \Big) = M(s-t, r)\,\exp\Big( -\frac{z - \beta(s-t, r)}{\frac{\delta^2}{2a}\big(1 - e^{-2a(s-t)}\big)}\cdot\frac{\delta^2}{2a^2}\big(1 - e^{-a(s-t)}\big)^2 \Big) \times \exp\Bigg( \frac{\Big(\frac{\delta^2}{2a^2}\Big)^2 \big(1 - e^{-a(s-t)}\big)^4}{2\,\frac{\delta^2}{2a}\big(1 - e^{-2a(s-t)}\big)} \Bigg).$
• Then, completing the square gives
$E_{t,r}\big[e^{-U_t^s}\,\big|\,r_s = z\big]\,\mathbb{P}_{t,r}[r_s \in dz] = M(s-t, r)\,\varphi(z, s-t, r)\,dz.$
• Changing the variable u = s t in (13) yields the desired result. □
Lemma 3.
The function γ defined in (10) fulfils $\gamma \in C^{1,2}\big([0,T) \times \mathbb{R}\big)$.
Proof. 
Recall that φ is the density of a normal distribution, see (12). Let further
$\Delta(u,r) := r e^{-au} + b\big(1 - e^{-au}\big) - \frac{\delta^2}{2a^2}\big(1 - e^{-au}\big)^2.$
Then,
$\lim_{u \to 0} \frac{z - \Delta(u,r)}{\sqrt{\frac{\delta^2}{2a}\big(1 - e^{-2au}\big)}} = \begin{cases} \infty &: z > r, \\ -\infty &: z < r, \\ 0 &: z = r. \end{cases}$
Changing the variable by letting $y = \frac{z - \Delta(u,r)}{\sqrt{\frac{\delta^2}{2a}(1 - e^{-2au})}}$ yields
$\lim_{u \to 0} \int_{\alpha(t+u)}^\infty \varphi(z,u,r)\,\big(1 - M(T-t-u, z)\big)\,dz = \lim_{u \to 0} \int_{\frac{\alpha(t+u) - \Delta(u,r)}{\sqrt{\delta^2(1 - e^{-2au})/(2a)}}}^\infty \frac{e^{-y^2/2}}{\sqrt{2\pi}}\,\Big(1 - M\Big(T-t-u,\; y\,\sqrt{\tfrac{\delta^2}{2a}\big(1 - e^{-2au}\big)} + \Delta(u,r)\Big)\Big)\,dy = \begin{cases} 1 - M(T-t, r) &: r > \alpha(t), \\ 0 &: r \le \alpha(t). \end{cases}$
Using similar arguments, the representation of γ given in Lemma 2 and the Leibniz integral rule yields the claim. □
Lemma 4.
The function γ defined in (10) solves Differential Equation (11).
Proof. 
• Recall from Lemma 2 that γ can be written as
$\gamma(t,r) = \int_0^{T-t} \int_{\alpha(u+t)}^\infty M(u,r)\,\big(1 - M(T-u-t, z)\big)\,\varphi(z,u,r)\,dz\,du$
with M given in (2) and φ given in (12). Lemma 3 yields $\gamma \in C^{1,2}\big([0,T) \times \mathbb{R}\big)$.
• It is straightforward to build the derivatives of φ and to show that
$\varphi_t(z,t,r) - a\Big(b - r - \frac{\delta^2}{a^2}\big(1 - e^{-at}\big)\Big)\varphi_r(z,t,r) - \frac{\delta^2}{2}\varphi_{rr}(z,t,r) = 0.$
• Recall that M ( u , r ) solves Differential Equation (5).
• Using $M(0,r) = M(T-t, \alpha(t)) = 1$ and $M_r(u,r) = -\frac{1 - e^{-au}}{a}\,M(u,r)$, we conclude
$\gamma_t(t,r) = \int_0^{T-t} M(u,r) \int_{\alpha(u+t)}^\infty \varphi(z,u,r)\,M_u(T-u-t, z)\,dz\,du,$
$\gamma_r(t,r) = \int_0^{T-t} M_r(u,r) \int_{\alpha(u+t)}^\infty \varphi(z,u,r)\,\big(1 - M(T-u-t, z)\big)\,dz\,du + \int_0^{T-t} M(u,r) \int_{\alpha(u+t)}^\infty \varphi_r(z,u,r)\,\big(1 - M(T-u-t, z)\big)\,dz\,du,$
$\gamma_{rr}(t,r) = \int_0^{T-t} M_{rr}(u,r) \int_{\alpha(u+t)}^\infty \varphi(z,u,r)\,\big(1 - M(T-u-t, z)\big)\,dz\,du - 2 \int_0^{T-t} M(u,r) \int_{\alpha(u+t)}^\infty \frac{1 - e^{-au}}{a}\,\varphi_r(z,u,r)\,\big(1 - M(T-u-t, z)\big)\,dz\,du + \int_0^{T-t} M(u,r) \int_{\alpha(u+t)}^\infty \varphi_{rr}(z,u,r)\,\big(1 - M(T-u-t, z)\big)\,dz\,du.$
• Note that $M_u(T-u-t, z) = \partial_u \big(1 - M(T-u-t, z)\big)$ and that, by Lemma 1, $a(b-r)\,M_r(u,r) + \frac{\delta^2}{2} M_{rr}(u,r) - r\,M(u,r) = M_u(u,r)$. Therefore, using the differential equation for φ from the previous bullet (with t replaced by u), we can conclude that
$\mathcal{L}(\gamma)(t,r) = \int_0^{T-t} M(u,r) \int_{\alpha(u+t)}^\infty \varphi(z,u,r)\,M_u(T-u-t, z)\,dz\,du + \int_0^{T-t} M_u(u,r) \int_{\alpha(u+t)}^\infty \varphi(z,u,r)\,\big(1 - M(T-u-t, z)\big)\,dz\,du + \int_0^{T-t} M(u,r) \int_{\alpha(u+t)}^\infty \Big( a\Big(b - r - \frac{\delta^2}{a^2}\big(1 - e^{-au}\big)\Big)\varphi_r(z,u,r) + \frac{\delta^2}{2}\varphi_{rr}(z,u,r) \Big)\big(1 - M(T-u-t, z)\big)\,dz\,du = \int_0^{T-t} \frac{\partial}{\partial u}\bigg( M(u,r) \int_{\alpha(u+t)}^\infty \varphi(z,u,r)\,\big(1 - M(T-u-t, z)\big)\,dz \bigg)\,du,$
where the boundary term at $z = \alpha(u+t)$ vanishes because $M\big(T-(u+t), \alpha(u+t)\big) = 1$.
Consequently, we can get rid of the du-integral and get
$\mathcal{L}(\gamma)(t,r) = M(T-t, r) \int_{\alpha(T)}^\infty \varphi(z, T-t, r)\,\big(1 - M(0, z)\big)\,dz - M(0, r)\,\lim_{u \to 0} \int_{\alpha(t+u)}^\infty \varphi(z,u,r)\,\big(1 - M(T-t-u, z)\big)\,dz.$
The proof of Lemma 3 gives
$\mathcal{L}(\gamma)(t,r) = -\big(1 - M(T-t, r)\big)\,\mathbb{1}_{[r > \alpha(t)]},$
which corresponds to Differential Equation (11). □
Now, we are ready to prove the verification theorem.
Theorem 1
(Verification Theorem). The function $v(t,r,x) = x\,M(T-t, r) + \tilde G(t,r)$, with M given in (2) and
$\tilde G(t,r) = \mu (T-t)\,M(T-t, r) + \xi \int_0^{T-t} \int_{\alpha(u+t)}^\infty M(u,r)\,\big(1 - M(T-u-t, z)\big)\,\varphi(z,u,r)\,dz\,du,$
with φ in (12) and α in (6), solves HJB Equation (1) and is the value function. The optimal strategy is given by $c^* = \{c_s^*\}$ with $c_s^* = \xi\,\mathbb{1}_{[r_s > \alpha(s)]}$.
Proof. 
Remark 3 and Lemma 4 prove that v solves HJB Equation (1).
Let c be an arbitrary admissible strategy. Then, using Itô's formula, one has
$e^{-U_s^t} v(t, r_t, X_t^c) = v(s, r, x) + \int_s^t e^{-U_s^y} \Big( \mathcal{L}(v)(y, r_y) + (\mu - c_y)\,M(T-y, r_y) \Big)\,dy + \delta \int_s^t e^{-U_s^y} v_r\,dB_y + \sigma \int_s^t e^{-U_s^y} v_x\,dW_y.$
Since the stochastic integrals are martingales (due to Itô isometry) with expectation zero, and v solves HJB Equation (1), we can conclude
$E_{s,r,x}\big[ e^{-U_s^t} v(t, r_t, X_t^c) \big] - v(s, r, x) \le -E_{s,r,x}\Big[ \int_s^t e^{-U_s^y} c_y\,dy \Big].$
Letting $t \to T$ then yields
$E_{s,r,x}\big[ e^{-U_s^T} X_T^c \big] + E_{s,r,x}\Big[ \int_s^T e^{-U_s^y} c_y\,dy \Big] \le v(s, r, x),$
which proves our claim. □
We conclude that the optimal strategy does not depend on the surplus. This is due to the fact that we do not stop our considerations at the time of ruin, that is, at the time when the surplus hits zero. The penalty for having a negative surplus is reflected only in the expected lump sum payment at T, which is given by $x + \mu T - \int_0^T c_s\,ds$.
The decision to pay or to wait is a feedback decision based on the current interest rate. Given the curve α defined in (6), if the interest rate at time t lies above $\alpha(t)$, it is optimal to pay at the maximal possible rate, and not to pay otherwise.
The economic interpretation is as follows. The assumption $b > \frac{\delta^2}{2a^2}$, that is, $\tilde b > 0$, implies that $\alpha(t) \le 0$, and the mean-reverting OU process, when below zero, will be pushed up. If the interest rate is below $\alpha(t)$, that is, negative, then the expected discounting factor is increasing in time.
Thus, if the interest rate stays under $\alpha(t)$, the discounting factor attains its maximum at T. Since $x + \mu T \ge x + \mu T - \int_0^T c_s\,ds$ for any strategy $c = \{c_s\}$, it is not surprising that for $r_s < \alpha(s)$, one should not pay dividends until T.
On the other hand, if the interest rate $r_t$ lies above the curve $\alpha(t)$, excursions of $r_t$ into the positive half-line will lead to a decreasing discounting factor. Therefore, one is rather willing to pay immediately at the maximal rate than to wait until T.
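The feedback character of the optimal strategy can be illustrated by Monte Carlo: on common interest rate paths, compare the barrier strategy $c_s = \xi\,\mathbb{1}_{[r_s > \alpha(s)]}$ with the two extreme strategies "always pay" and "never pay". Since $E[\sigma W_T] = 0$, the surplus noise drops out of the expected value and only the rate paths matter. This is our own experiment with hypothetical parameter values, not a computation from the paper:

```python
import math
import random

def M(u, r, a, b, delta):
    """Closed form of E[exp(-U^u)] from (2)."""
    b_tilde = b - delta ** 2 / (2.0 * a ** 2)
    B = (1.0 - math.exp(-a * u)) / a
    return math.exp(-b_tilde * u + (b_tilde - r) * B - delta ** 2 * B ** 2 / (4.0 * a))

def alpha(t, T, a, b, delta):
    """Separating curve from (6)."""
    b_tilde = b - delta ** 2 / (2.0 * a ** 2)
    tau = T - t
    if tau == 0.0:
        return 0.0
    q = 1.0 - math.exp(-a * tau)
    return b_tilde * (1.0 - a * tau / q) - delta ** 2 * q / (4.0 * a ** 2)

def mc_value(strategy, r0, x, T, mu, xi, a, b, delta,
             n_paths=2000, n_steps=200, seed=42):
    """Monte Carlo estimate of E[int_0^T e^{-U_s} c_s ds + e^{-U_T} X_T^c],
    with the zero-mean sigma*W_T term omitted.  The rate is simulated via
    the exact OU transition; U is a left-point Riemann sum."""
    rng = random.Random(seed)   # same seed => common random numbers
    h = T / n_steps
    decay = math.exp(-a * h)
    step_sd = delta * math.sqrt((1.0 - math.exp(-2.0 * a * h)) / (2.0 * a))
    total = 0.0
    for _ in range(n_paths):
        r, U, discounted, paid = r0, 0.0, 0.0, 0.0
        for k in range(n_steps):
            c = strategy(k * h, r)
            discounted += math.exp(-U) * c * h   # discounted dividend stream
            paid += c * h                        # undiscounted dividends paid
            U += r * h
            r = r * decay + b * (1.0 - decay) + step_sd * rng.gauss(0.0, 1.0)
        total += discounted + math.exp(-U) * (x + mu * T - paid)
    return total / n_paths
```

On common random numbers, the barrier strategy should dominate both extremes up to Monte Carlo error, and the "never pay" value should match the closed form $M(T, r_0)(x + \mu T)$.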

2.2. BSDE Approach

In this section, we will tackle the problem of finding the optimal strategy c for
$V^c(t,r,x) = E_{t,r,x}\Big[ \int_t^T e^{-U_t^s} c_s\,ds + e^{-U_t^T} X_T^c \Big]$
by an ansatz using a generalized Hamiltonian and the resulting coupled forward-backward SDEs (FBSDEs), elaborated, for example, in [27] (Section 4.2), [21] (Section 6.4.2) and [28] (Section 10.1.1). Note that the dynamics of the process $X^c$ is
$dX_s^c = (\mu - c_s)\,ds + \sigma\,dW_s + 0\,dB_s \quad \text{for } s \ge t, \qquad X_t^c = x.$
Therefore, the generalized Hamiltonian $H : \Omega \times [0,T] \times \mathbb{R} \times \mathbb{R} \times \mathbb{R}^{1 \times 2} \to \mathbb{R}$ takes the form
$H(s, x, c, y, z) = (\mu - c)\,y + \mathrm{tr}\big( (\sigma, 0)^{\top} z \big) + e^{-U_t^s} c$
and actually does not depend on x. The function $g : \Omega \times \mathbb{R} \to \mathbb{R}$ is given by
$g(x) = e^{-\int_t^T r_u\,du}\,x = e^{-U_t^T} x.$
Both H and g are affine in $(x, c)$ and x, respectively; hence, they are concave. The maximum can be achieved by setting c to
$\hat c = \arg\max_c H(s, x, c, y, z) = \begin{cases} \xi, & e^{-U_t^s} - y > 0, \\ 0, & e^{-U_t^s} - y \le 0. \end{cases}$
With the above expressions, we are able to put up the corresponding BSDE,
$Y_s = \partial_x g\big(X_T^{\hat c}\big) + \int_s^T \partial_x H\big(u, X_u^{\hat c}, \hat c_u, Y_u, Z_u\big)\,du - \int_s^T Z_u\,d(W_u, B_u)^{\top}, \quad t \le s \le T,$
taking a particularly easy form without a generator here:
$Y_s = e^{-U_t^T} - \int_s^T Z_{u,1}\,dW_u - \int_s^T Z_{u,2}\,dB_u.$
As this BSDE’s terminal condition only depends on B, it follows that Z u , 1 = 0 and that the solution Y is given by Y s = E t , r , x e t T r u d u | F s B = E t , r , x e U s T | F s B , which we will calculate in the following. Note that
Y s = E t , r , x e t T r u d u | F s B = e t s r u d u E t , r , x e s T r u d u | F s B = e U t s E t , r , x e U s T | F s B .
By the SDE for the r process, we may write
$\int_s^T r_v\,dv = \frac{r_s - r_T}{a} + \frac{\delta}{a}(B_T - B_s) + b(T - s),$
and can hence compute
$E_{t,r,x}\big[ e^{-U_s^T}\,\big|\,\mathcal{F}_s^B \big] = E_{t,r,x}\big[ e^{-\int_s^T r_u\,du}\,\big|\,\mathcal{F}_s^B \big] = E_{t,r,x}\big[ e^{-\frac{r_s - r_T}{a} - \frac{\delta}{a}(B_T - B_s) - b(T-s)}\,\big|\,\mathcal{F}_s^B \big] = e^{-\frac{r_s}{a} - b(T-s)}\,E_{t,r,x}\big[ e^{\frac{r_T}{a} - \frac{\delta}{a}(B_T - B_s)}\,\big|\,\mathcal{F}_s^B \big] = e^{-\frac{r_s}{a} - b(T-s)}\,E_{t,r,x}\Big[ e^{\frac{1}{a}\big(r_s e^{-a(T-s)} + b(1 - e^{-a(T-s)}) + \delta e^{-aT}\int_s^T e^{au}\,dB_u\big) - \frac{\delta}{a}(B_T - B_s)}\,\Big|\,\mathcal{F}_s^B \Big] = e^{-\frac{r_s}{a}\big(1 - e^{-a(T-s)}\big) - b(T-s) + \frac{b}{a}\big(1 - e^{-a(T-s)}\big)}\,E_{t,r,x}\Big[ e^{\frac{\delta}{a}\int_s^T \big(e^{a(u-T)} - 1\big)\,dB_u}\,\Big|\,\mathcal{F}_s^B \Big] = e^{-\frac{r_s}{a}\big(1 - e^{-a(T-s)}\big) - b(T-s) + \frac{b}{a}\big(1 - e^{-a(T-s)}\big)}\,E\Big[ e^{\frac{\delta}{a}\int_s^T \big(e^{a(u-T)} - 1\big)\,dB_u} \Big] = e^{-\frac{r_s}{a}\big(1 - e^{-a(T-s)}\big) - b(T-s) + \frac{b}{a}\big(1 - e^{-a(T-s)}\big) + \frac{\delta^2}{2a^2}\int_s^T \big(1 - e^{a(u-T)}\big)^2\,du},$
where we used the explicit form of $r_T$, the independent increments of Wiener integrals and the expression for the mean of a log-normal distribution. We conclude
$Y_s = e^{-\int_t^s r_u\,du}\,e^{-\frac{r_s}{a}\big(1 - e^{-a(T-s)}\big) - b(T-s) + \frac{b}{a}\big(1 - e^{-a(T-s)}\big) + \frac{\delta^2}{2a^2}\int_s^T \big(1 - e^{a(u-T)}\big)^2\,du} = e^{-\int_t^s r_u\,du}\,\exp\Big( -\tilde b(T-s) + \frac{\tilde b - r_s - \frac{\delta^2}{2a^2}}{a}\big(1 - e^{-a(T-s)}\big) + \frac{\delta^2}{4a^3}\big(1 - e^{-2a(T-s)}\big) \Big) = e^{-U_t^s}\,M(T-s, r_s),$
with M from (2). Using this explicit expression and (14), we obtain an optimal strategy $\hat c$ by setting, for $s \ge t$,
$\hat c_s = \begin{cases} \xi, & e^{-U_t^s} - Y_s > 0, \\ 0, & e^{-U_t^s} - Y_s \le 0. \end{cases}$
Inserting the expression for $Y_s$, we see that the barrier condition $e^{-U_t^s} - Y_s = 0$ can be reduced to
$e^{-U_t^s} - Y_s = 0 \iff e^{-\int_t^s r_u\,du} - e^{-\int_t^s r_u\,du}\,E_{t,r,x}\big[ e^{-\int_s^T r_u\,du}\,\big|\,\mathcal{F}_s^B \big] = 0 \iff E_{t,r,x}\big[ e^{-\int_s^T r_u\,du}\,\big|\,\mathcal{F}_s^B \big] = 1 \iff E_{t,r,x}\big[ e^{-U_s^T}\,\big|\,\mathcal{F}_s^B \big] = 1,$
coinciding with the results from Section 2.1. Using the explicit form of the conditional expectation and taking logarithms, the above equality yields
$0 = -\tilde b(T-s) + \frac{\tilde b - r_s - \frac{\delta^2}{2a^2}}{a}\big(1 - e^{-a(T-s)}\big) + \frac{\delta^2}{4a^3}\big(1 - e^{-2a(T-s)}\big).$
From there, it is easy to infer the same barrier curve α from (6).
A numerical illustration of the interest rate compared to the curve α , separating the strategy areas, is provided in Figure 2.
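The reduction above says that paying is optimal exactly when $M(T-s, r_s) < 1$, which by Section 2.1 is exactly $r_s > \alpha(s)$. This equivalence can be checked deterministically on a grid; the snippet below is our own illustration with hypothetical parameter values:

```python
import math

def M(u, r, a, b, delta):
    """Closed form of E[exp(-U^u)] from (2)."""
    b_tilde = b - delta ** 2 / (2.0 * a ** 2)
    B = (1.0 - math.exp(-a * u)) / a
    return math.exp(-b_tilde * u + (b_tilde - r) * B - delta ** 2 * B ** 2 / (4.0 * a))

def alpha(t, T, a, b, delta):
    """Separating curve from (6)."""
    b_tilde = b - delta ** 2 / (2.0 * a ** 2)
    tau = T - t
    if tau == 0.0:
        return 0.0
    q = 1.0 - math.exp(-a * tau)
    return b_tilde * (1.0 - a * tau / q) - delta ** 2 * q / (4.0 * a ** 2)

a, b, delta, T = 1.0, 0.05, 0.2, 5.0
for i in range(50):              # times s in [0, T)
    s = T * i / 50.0
    for j in range(41):          # rates r in [-0.4, 0.4]
        r = -0.4 + 0.02 * j
        if abs(r - alpha(s, T, a, b, delta)) > 1e-9:
            # pay (M < 1) if and only if r lies above the barrier
            assert (M(T - s, r, a, b, delta) < 1.0) == (r > alpha(s, T, a, b, delta))
```

The grid check passing silently means that the HJB barrier of Section 2.1 and the BSDE barrier derived here describe the same strategy regions.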

3. Dividend Maximization with an Exogenous Stochastic Time Horizon

In this section, we consider again an insurance company whose surplus is given by a Brownian motion with drift $X_t = x + \mu t + \sigma W_t$, where $\{W_t\}$ is a standard Brownian motion on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$. The insurance company is paying out dividends, where the accumulated dividends until t are given by $C_t = \int_0^t c_s\,ds$.
The discounting rate is given by an Ornstein-Uhlenbeck process with the dynamics ($a, \delta > 0$, $b \in \mathbb{R}$):
$r_t = r e^{-at} + b\big(1 - e^{-at}\big) + \delta e^{-at} \int_0^t e^{au}\,dB_u.$
The problem considered in Section 2 is now modified by changing the time horizon. Usually, one assumes that the time horizon depends on the chosen dividend strategy. However, this approach does not seem to be entirely realistic. The analysts of the insurance company under consideration may fix the lifetime of the company as the ruin time corresponding to the dividend payout strategy with a fixed constant rate, say $\zeta \le \mu$. We allow just for the strategies $c = \{c_s\}$ with accumulated dividends given by $C_t = \int_0^t c_s\,ds$ and $c_s \in [0, \xi]$ for some given and fixed $\xi > 0$. Such a strategy is called admissible if, moreover, c is adapted to $\{\mathcal{F}_t\}$. The set of admissible strategies will be denoted by $\mathcal{A}$.
As in Section 2, we will use $\tilde b = b - \frac{\delta^2}{2a^2}$,
$U_s := U_0^s = \int_0^s r_u\,du,$
and recall
$M(s, r) = E_r\big[ e^{-U_s} \big] = e^{-\tilde b s} \exp\Big( \frac{\tilde b - r}{a}\big(1 - e^{-as}\big) - \frac{\delta^2}{4a^3}\big(1 - e^{-as}\big)^2 \Big).$
In the following, we let $dX_t^{H,\zeta} := (\mu - \zeta)\,dt + \sigma\,dW_t$, where H indicates that a company whose surplus drift exceeds $\mu - \zeta$ is considered financially healthy. Further, $\zeta \in [0, \mu]$ is fixed and
$\tau_\zeta := \inf\big\{ t \ge 0 : X_t^{H,\zeta} = 0 \big\}.$

3.1. HJB Approach

For the dynamic programming principle to work, we need to introduce an auxiliary process
$L_t = l + \zeta t \quad \text{and} \quad L_t^c = l + \zeta t - \int_0^t c_s\,ds.$
The process L describes the difference between $X^{H,\zeta}$ and an ex-dividend process $X^c$. The initial value l describes the historical difference existing at time 0, that is, $X_0^{H,\zeta} = x - l$, $X_0^c = x$. In particular, for an admissible strategy $c = \{c_s\}$, it holds that
$X_{\tau_\zeta}^c = x + \int_0^{\tau_\zeta} (\mu - c_s)\,ds + \sigma W_{\tau_\zeta} = \int_0^{\tau_\zeta} (\zeta - c_s)\,ds + l = L_{\tau_\zeta}^c.$
We define the target functional as
$V^c(r,x,l) := E_{r,x,l}\Big[ \int_0^{\tau_\zeta} e^{-U_s} c_s\,ds + e^{-U_{\tau_\zeta}} X_{\tau_\zeta}^c \Big] = E_{r,x,l}\Big[ \int_0^{\tau_\zeta} e^{-U_s} c_s\,ds + e^{-U_{\tau_\zeta}} \Big( l + \int_0^{\tau_\zeta} (\zeta - c_s)\,ds \Big) \Big] = E_{r,x,l}\Big[ \int_0^{\tau_\zeta} e^{-U_s} c_s\,ds + e^{-U_{\tau_\zeta}} L_{\tau_\zeta}^c \Big].$
To ensure that the value function is well-defined, we again require Assumption 1, that is, $\tilde b > 0$. Otherwise, by using, for instance, the constant strategy $c_s \equiv \zeta \wedge \xi$ and noting that $\mathbb{P}_x[\tau_\zeta = \infty] > 0$ as $\zeta \le \mu$, one would get for $x > 0$:
$V^{\zeta \wedge \xi}(r,x,l) = E_{r,x,l}\Big[ \int_0^{\tau_\zeta} e^{-U_s} (\zeta \wedge \xi)\,ds + e^{-U_{\tau_\zeta}} X_{\tau_\zeta}^{\zeta \wedge \xi} \Big] \ge (\zeta \wedge \xi)\,\mathbb{P}_x[\tau_\zeta = \infty] \int_0^\infty M(s, r)\,ds = \infty,$
where we used the independence of $\{r_t\}$ and $\tau_\zeta$.
We are searching for the value function
$V(r,x,l) := \sup_{c \in \mathcal{A}} V^c(r,x,l).$
Using standard dynamic programming arguments, see, for example, [8] (p. 98), one can heuristically derive the following HJB equation:
$\mu V_x + \frac{\sigma^2}{2} V_{xx} + a(b-r) V_r + \frac{\delta^2}{2} V_{rr} - r V + \zeta V_l + \sup_{0 \le c \le \xi} c\,\big\{ 1 - (V_x + V_l) \big\} = 0.$
We conjecture again that the optimal strategy is of a barrier type. In order to get an idea about the desired barrier, we consider the differential quotient of the value function with respect to x. For this purpose, let $h > 0$ be very small and c be an ε-admissible $(r, x+h, l+h)$-strategy, that is,
$V^c(r, x+h, l+h) + \varepsilon \ge V(r, x+h, l+h).$
Then, c is also an admissible strategy for ( r , x , l ) . With a slight abuse of notation, we write τ ζ x l to indicate the starting value of the underlying process X H , ζ , one gets τ ζ x l = τ ζ x + h l h a.s. Therefore, we can conclude
$$V(r,x+h,l+h) - V(r,x,l) \le V^c(r,x+h,l+h) + \varepsilon - V^c(r,x,l) = h\,\mathbb{E}_r\bigl[\exp\bigl(-U_{\tau_\zeta^{x-l}}\bigr)\bigr] + \varepsilon.$$
On the other hand, if $\tilde c$ is an $\varepsilon$-optimal strategy for $(r,x,l)$, then it is also admissible for $(r,x+h,l+h)$, meaning that
$$V(r,x+h,l+h) - V(r,x,l) \ge V^{\tilde c}(r,x+h,l+h) - \varepsilon - V^{\tilde c}(r,x,l) = h\,\mathbb{E}_r\bigl[\exp\bigl(-U_{\tau_\zeta^{x-l}}\bigr)\bigr] - \varepsilon.$$
Since ε was arbitrary, we can conclude that
$$\lim_{h \downarrow 0} \frac{V(r,x+h,l+h) - V(r,x,l)}{h} = \mathbb{E}_r\bigl[\exp\bigl(-U_{\tau_\zeta^{x-l}}\bigr)\bigr].$$
In particular, if the value function V is differentiable with respect to x and l, then one gets
$$V_x(r,x,l) + V_l(r,x,l) = \mathbb{E}_r\bigl[\exp\bigl(-U_{\tau_\zeta^{x-l}}\bigr)\bigr].$$
The stopping time $\tau_\zeta^{x-l}$ is independent of $B$, and the distribution function of $\tau_\zeta^{x-l}$ is well known, see [26] (p. 295). This means the expression $\mathbb{E}_r[\exp(-U_{\tau_\zeta^{x-l}})]$ can be calculated explicitly, at least as a power series.
Remark 5.
Following the path of Section 2, one should first consider $\mathbb{E}_r[\exp(-U_{\tau_\zeta^{x-l}})]$ and find, if it exists, a curve $\theta(r,l) \ge 0$ such that $\mathbb{E}_r[\exp(-U_{\tau_\zeta^{\theta(r,l)-l}})] = 1$ for all $(r,l) \in \mathbb{R}^2$. In the second step, one calculates the return function corresponding to the strategy $c_s = \xi\,\mathbb{1}_{\{X_s > \theta(r_s, L_s)\}}$. Then, if the regularity conditions are fulfilled, one can check whether this function solves the HJB equation.

3.1.1. Properties of $\mathbb{E}_r\bigl[\exp\bigl(-U_{\tau_\zeta^{x}}\bigr)\bigr]$

Because W and B are independent, we can define
ϕ ( r , z ) : = E r exp U τ ζ x = E r [ M ( τ ζ z , r ) ] = E e b ˜ s exp b ˜ r a ( 1 e a τ ζ z ) δ 2 4 a 3 ( 1 e a τ ζ z ) 2 .
Note that for $r \ge 0$, the function $M(t,r)$ is strictly decreasing in $t$. Since $\tau_\zeta^z$ is strictly increasing in $z$, we conclude that $\phi(r,z)$ is strictly decreasing in $z$, yielding
$$\phi(r,z) < \phi(r,0) = 1$$
for $(r,z) \in \mathbb{R}_+ \times \mathbb{R}_{>0}$.
This means, in particular, that the desired curve lies in { r < 0 } . In the next section, we consider the case r < 0 .

3.1.2. The Case $r < 0$

For the sake of clarity, we itemize below the properties of the function $\phi$ for $r < 0$.
• It is easy to see that
$$\phi(r,0) = 1 \qquad\text{and}\qquad \lim_{z \to \infty} \phi(r,z) = 0.$$
• It is hard to derive the properties of the function $\phi$ directly from its representation as an expectation. To calculate the expectation, one can first consider the function $M(s,r)$ and write it as the power series
$$M(s,r) = e^{-\tilde b s} \exp\left\{\frac{\tilde b - r}{a}\bigl(1 - e^{-as}\bigr) - \frac{\delta^2}{4a^3}\bigl(1 - e^{-as}\bigr)^2\right\} = e^{\frac{1}{a}\left(\tilde b - r - \frac{\delta^2}{4a^2}\right)} \sum_{n=0}^\infty e^{-(an + \tilde b)s} \sum_{k=0}^{\lfloor n/2 \rfloor} \frac{(-1)^{n-k}}{k!\,(n-2k)!} \left(\frac{\tilde b - r - \frac{\delta^2}{2a^2}}{a}\right)^{n-2k} \left(\frac{\delta^2}{4a^3}\right)^k.$$
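The expansion above can be checked numerically. The following sketch (illustrative parameters; the truncation level `N` is an ad hoc choice, not from the paper) compares the closed form of $M(s,r)$ with the truncated double series:

```python
import math

def M_closed(s, r, a, b, delta):
    # Vasicek discount factor E_r[exp(-int_0^s r_u du)] with b~ = b - delta^2/(2a^2).
    bt = b - delta**2 / (2 * a**2)
    B = (1 - math.exp(-a * s)) / a
    return math.exp(-bt * s + (bt - r) * B - delta**2 * B**2 / (4 * a))

def M_series(s, r, a, b, delta, N=40):
    bt = b - delta**2 / (2 * a**2)
    alpha = (bt - r - delta**2 / (2 * a**2)) / a   # base of the (n - 2k)-th power
    beta = delta**2 / (4 * a**3)                   # base of the k-th power
    pref = math.exp((bt - r - delta**2 / (4 * a**2)) / a)
    total = 0.0
    for n in range(N):
        inner = sum(
            (-1)**(n - k) * alpha**(n - 2 * k) * beta**k
            / (math.factorial(k) * math.factorial(n - 2 * k))
            for k in range(n // 2 + 1)
        )
        total += math.exp(-(a * n + bt) * s) * inner
    return pref * total
```

For moderate truncation levels the two agree to high precision, also for negative rates $r$.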
From [26] (p. 295), we know that the density of $\tau_\zeta^z$ is given by
$$f_{\tau_\zeta}(t) := \frac{\mathbb{P}_z[\tau_\zeta \in dt]}{dt} = \frac{z}{\sqrt{2\pi}\,\sigma\, t^{3/2}} \exp\left\{-\frac{\bigl(z - (\mu - \zeta)t\bigr)^2}{2\sigma^2 t}\right\}.$$
Moreover,
$$\mathbb{P}_z[\tau_\zeta = \infty] = 1 - \exp\left\{-\frac{(\mu-\zeta)z + |\mu-\zeta|\,z}{\sigma^2}\right\}, \qquad z \ge 0.$$
Letting
$$\theta_n := \frac{-(\mu-\zeta) + \sqrt{(\mu-\zeta)^2 + 2\sigma^2(an + \tilde b)}}{\sigma^2},$$
the power series representation of $\phi$ becomes
$$\phi(r,z) = e^{\frac{1}{a}\left(\tilde b - r - \frac{\delta^2}{4a^2}\right)} \sum_{n=0}^\infty \mathbb{E}_z\bigl[e^{-(an+\tilde b)\tau_\zeta^z}\bigr] \sum_{k=0}^{\lfloor n/2\rfloor} \frac{(-1)^{n-k}}{k!\,(n-2k)!}\left(\frac{\tilde b - r - \frac{\delta^2}{2a^2}}{a}\right)^{n-2k}\left(\frac{\delta^2}{4a^3}\right)^k = e^{\frac{1}{a}\left(\tilde b - r - \frac{\delta^2}{4a^2}\right)} \sum_{n=0}^\infty e^{-\theta_n z} \sum_{k=0}^{\lfloor n/2\rfloor} \frac{(-1)^{n-k}}{k!\,(n-2k)!}\left(\frac{\tilde b - r - \frac{\delta^2}{2a^2}}{a}\right)^{n-2k}\left(\frac{\delta^2}{4a^3}\right)^k.$$
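The identity $\mathbb{E}_z[e^{-(an+\tilde b)\tau_\zeta^z}] = e^{-\theta_n z}$ used above is the standard Laplace transform of the first passage time; it can be cross-checked against the density (17) by numerical integration (illustrative parameters and quadrature grid):

```python
import math

def fpt_density(t, z, mu, zeta, sigma):
    nu = mu - zeta
    return z / (math.sqrt(2 * math.pi) * sigma * t**1.5) \
        * math.exp(-(z - nu * t)**2 / (2 * sigma**2 * t))

def laplace_fpt(lam, z, mu, zeta, sigma):
    """E_z[exp(-lam * tau_zeta)] = exp(-theta z), theta as in the text."""
    nu = mu - zeta
    theta = (-nu + math.sqrt(nu**2 + 2 * sigma**2 * lam)) / sigma**2
    return math.exp(-theta * z)

z, mu, zeta, sigma, lam = 1.0, 0.5, 0.3, 1.0, 0.5
T, n = 400.0, 80000
h = T / n
quad = h * sum(math.exp(-lam * k * h) * fpt_density(k * h, z, mu, zeta, sigma)
               for k in range(1, n + 1))
```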
Inserting $x - l$ instead of $z$ in the above expression, we get the condition specifying the curve $\theta$ (if such a curve exists):
$$\phi(r, x-l) = e^{\frac{1}{a}\left(\tilde b - r - \frac{\delta^2}{4a^2}\right)} \sum_{n=0}^\infty e^{-\theta_n (x-l)} \sum_{k=0}^{\lfloor n/2\rfloor} \frac{(-1)^{n-k}}{k!\,(n-2k)!}\left(\frac{\tilde b - r - \frac{\delta^2}{2a^2}}{a}\right)^{n-2k}\left(\frac{\delta^2}{4a^3}\right)^k = 1.$$
The power series representation does not allow us to show the existence and uniqueness of a curve $\theta \ge 0$ such that $\phi(r, x-l) > 1$ if $x < \theta(r,l)$ and $\phi(r, x-l) < 1$ if $x > \theta(r,l)$. We conclude that the approach of finding a candidate return function and showing that this function solves the corresponding HJB equation cannot be applied here. We will address this question in future research; in the present paper, we now tackle the problem using BSDEs.

3.2. BSDE Approach

Similarly to the case of a deterministic time horizon, for an arbitrary strategy-independent $\{\mathcal{F}_t\}_{t\ge 0}$-stopping time $\tau$, we define the return function corresponding to an admissible strategy $c = \{c_s\}$ to be
$$V^c(t,r,x) = \mathbb{E}\left[\int_{t\wedge\tau}^{\tau} e^{-U_{t\wedge\tau,s}} c_s\,ds + e^{-U_{t\wedge\tau,\tau}} X_\tau^c\,\mathbb{1}_{\{\tau<\infty\}} \,\Big|\, r_{t\wedge\tau} = r,\; X_{t\wedge\tau}^c = x\right] + \mathbb{E}\left[\int_t^\infty e^{-U_{t,s}} c_s\,ds\,\mathbb{1}_{\{\tau=\infty\}} \,\Big|\, r_t = r,\; X_t^c = x\right],$$
since in the event $\{\tau = \infty\}$, the terminal value $X_\tau^c$ is never attained. This is also consistent with the condition $\tilde b > 0$ from Assumption 1, because then
$$\lim_{T\to\infty} e^{-U_{t\wedge\tau, T}}\, X_T^c = 0.$$
Note that in the event $\{\tau < t\}$, any strategy $c$ comes too late, since then $V^c(t,r,x) = x$ is already determined (and any $c$ is optimal, thus not interesting). As $\tau$ does not depend on the strategy $c$, the problem above reduces to the case $\tau < \infty$: for the event $\{\tau = \infty\}$ it is clear that $c \equiv \xi$ yields the best strategy. Hence, we have to find a strategy optimizing only
$$\sup_{c\in\mathcal{A}} \mathbb{E}\left[\left(\int_{t\wedge\tau}^{\tau} e^{-U_{t\wedge\tau,s}} c_s\,ds + e^{-U_{t\wedge\tau,\tau}} X_\tau^c\right) \mathbb{1}_{\{\tau<\infty\}} \,\Big|\, r_{t\wedge\tau} = r,\; X_{t\wedge\tau}^c = x\right],$$
which we will tackle in the sequel using a BSDE approach.
Within this section, for readability, we write $\mathbb{E}[\,\cdot\,]$ instead of $\mathbb{E}[\,\cdot \mid r_{t\wedge\tau} = r,\, X_{t\wedge\tau}^c = x]$.
The arguments of [27] (Section 4.2), [21] (Section 6.4.2) and [28] (Section 10.1.1) may all be modified in an obvious way to use stopping times instead of deterministic terminal times $T$. As in Section 2.2, the optimal strategy can again be found by maximizing the generalized Hamiltonian
$$H(s,x,c,y,z) = (\mu - c)\,y + \operatorname{tr}\bigl(\sigma_0^\top z\bigr) + e^{-U_{t\wedge\tau,s}}\,c,$$
which one achieves by
$$\hat c = \arg\max_c H(s,x,c,y,z) = \begin{cases} \xi, & e^{-U_{t\wedge\tau,s}} - y > 0, \\ 0, & e^{-U_{t\wedge\tau,s}} - y \le 0. \end{cases}$$
We have to solve the corresponding BSDE for Y,
$$Y_s = e^{-U_{t\wedge\tau,\tau}} - \int_s^\tau Z_{u,1}\,dW_u - \int_s^\tau Z_{u,2}\,dB_u, \qquad t\wedge\tau \le s \le \tau < \infty.$$
For BSDEs with stopping times as time horizon see, for example, [29] (Section 3), [30] or [31] (Section 5). Again, in this BSDE no generator appears, and hence we end up with the conditional expectation $Y_s = \mathbb{E}\bigl[\exp\bigl(-\int_{t\wedge\tau}^\tau r_u\,du\bigr) \,\big|\, \mathcal{F}_s\bigr] = \mathbb{E}\bigl[\exp\bigl(-U_{t\wedge\tau,\tau}\bigr) \,\big|\, \mathcal{F}_s\bigr]$, here with respect to the filtration $\{\mathcal{F}_s\}$ instead of $\{\mathcal{F}_s^B\}$. The reason is that, because of the presence of $\tau$, the terminal condition $e^{-U_{t\wedge\tau,\tau}} = e^{-\int_{t\wedge\tau}^\tau r_u\,du}$ is not necessarily $\mathcal{F}^B$-measurable. This is in particular the case for the stopping times $\tau_\zeta$ treated in the next section.
With this solution for $Y$, it follows that for $t\wedge\tau \le s \le \tau < \infty$,
$$\hat c_s = \begin{cases} \xi, & e^{-U_{t\wedge\tau,s}} - Y_s > 0, \\ 0, & e^{-U_{t\wedge\tau,s}} - Y_s \le 0. \end{cases}$$
As before, to determine the barrier, we see that for $t\wedge\tau \le s \le \tau$,
$$Y_s = \mathbb{E}\bigl[e^{-U_{t\wedge\tau,\tau}} \,\big|\, \mathcal{F}_s\bigr] = \mathbb{E}\Bigl[e^{-\int_{t\wedge\tau}^\tau r_u\,du} \,\Big|\, \mathcal{F}_s\Bigr] = e^{-\int_{t\wedge\tau}^s r_u\,du}\, \mathbb{E}\Bigl[e^{-\int_s^\tau r_u\,du} \,\Big|\, \mathcal{F}_s\Bigr],$$
and hence the barrier is attained if
$$e^{-\int_{t\wedge\tau}^s r_u\,du} - e^{-\int_{t\wedge\tau}^s r_u\,du}\,\mathbb{E}\Bigl[e^{-\int_s^\tau r_u\,du} \,\Big|\, \mathcal{F}_s\Bigr] = 0 \quad\Longleftrightarrow\quad \mathbb{E}\Bigl[e^{-\int_s^\tau r_u\,du} \,\Big|\, \mathcal{F}_s\Bigr] = 1 \quad\Longleftrightarrow\quad \mathbb{E}\bigl[e^{-U_{s,\tau}} \,\big|\, \mathcal{F}_s\bigr] = 1.$$

Evaluating the Strategy for the Stopping Times τ ζ

In the above subsection, we found that the optimal strategy can be determined by calculating the conditional expectation $\mathbb{E}[\exp(-U_{s,\tau}) \mid \mathcal{F}_s] = \mathbb{E}[\exp(-\int_s^\tau r_u\,du) \mid \mathcal{F}_s]$ on $\{t \le s \le \tau\}$. We will now do this explicitly for the stopping time $\tau_\zeta = \inf\{t \ge 0 : X_t^{H,\zeta} = 0\}$. Note that $X^{H,\zeta}$ is independent of $B$; therefore, $r$ and $X^{H,\zeta}$ are independent processes, and $\tau_\zeta$ is independent of $r$. We will use this independence to show that, with the density $f_{\tau_\zeta}$ from (17),
$$\mathbb{E}\Bigl[e^{-\int_s^{\tau_\zeta} r_u\,du} \,\Big|\, \mathcal{F}_s\Bigr]\, \mathbb{1}_{\{\tau_\zeta > s\}} = \int_s^\infty \mathbb{E}\Bigl[e^{-\int_s^{\tilde t} r_u\,du} \,\Big|\, \mathcal{F}_s^B\Bigr]\, f_{\tau_\zeta}(\tilde t)\,d\tilde t\;\, \mathbb{1}_{\{\tau_\zeta > s\}}.$$
To that end, first consider that $\mathcal{F}_s$ is generated by elements $A$ of the form
$$A = \bigl(B_{\cdot\wedge s}^{-1}(L_1) \cap W_{\cdot\wedge s}^{-1}(L_2)\bigr) \cup N,$$
where $L_1, L_2$ are Borel subsets of $C[0,\infty)$, the space of continuous functions on $[0,\infty)$ endowed with the topology of uniform convergence on compacts, and $N \in \mathcal{N}$ is a null set. Our first observation is that, for such $A$,
$$\mathbb{E}\Bigl[\mathbb{E}\Bigl[e^{-\int_s^{\tau_\zeta} r_u\,du} \,\Big|\, \mathcal{F}_s\Bigr]\mathbb{1}_{\{\tau_\zeta>s\}}\mathbb{1}_A\Bigr] = \mathbb{E}\Bigl[\mathbb{E}\Bigl[e^{-\int_s^{\tau_\zeta} r_u\,du} \,\Big|\, \mathcal{F}_s\Bigr]\mathbb{1}_{\{\tau_\zeta>s\}}\mathbb{1}_{B_{\cdot\wedge s}^{-1}(L_1)}\mathbb{1}_{W_{\cdot\wedge s}^{-1}(L_2)}\Bigr] = \mathbb{E}\Bigl[e^{-\int_s^{\tau_\zeta} r_u\,du}\,\mathbb{1}_{\{\tau_\zeta>s\}}\mathbb{1}_{B_{\cdot\wedge s}^{-1}(L_1)}\mathbb{1}_{W_{\cdot\wedge s}^{-1}(L_2)}\Bigr],$$
where we used that N is a null set as well as the defining property of conditional expectation. We continue with
$$\mathbb{E}\Bigl[e^{-\int_s^{\tau_\zeta} r_u\,du}\,\mathbb{1}_{\{\tau_\zeta>s\}}\mathbb{1}_{B_{\cdot\wedge s}^{-1}(L_1)}\mathbb{1}_{W_{\cdot\wedge s}^{-1}(L_2)}\Bigr] = \int_s^\infty \mathbb{E}\Bigl[e^{-\int_s^{\tilde t} r_u\,du}\,\mathbb{1}_{B_{\cdot\wedge s}^{-1}(L_1)}\mathbb{1}_{W_{\cdot\wedge s}^{-1}(L_2)} \,\Big|\, \tau_\zeta = \tilde t\Bigr] f_{\tau_\zeta}(\tilde t)\,d\tilde t,$$
where we used regular conditional probabilities for the integrand, conditioning on τ ζ . Now, using independence of B and r from τ ζ , we get
$$\int_s^\infty \mathbb{E}\Bigl[e^{-\int_s^{\tilde t} r_u\,du}\,\mathbb{1}_{B_{\cdot\wedge s}^{-1}(L_1)}\mathbb{1}_{W_{\cdot\wedge s}^{-1}(L_2)} \,\Big|\, \tau_\zeta = \tilde t\Bigr] f_{\tau_\zeta}(\tilde t)\,d\tilde t = \int_s^\infty \mathbb{E}\Bigl[e^{-\int_s^{\tilde t} r_u\,du}\,\mathbb{1}_{B_{\cdot\wedge s}^{-1}(L_1)}\Bigr]\, \mathbb{E}\Bigl[\mathbb{1}_{W_{\cdot\wedge s}^{-1}(L_2)} \,\Big|\, \tau_\zeta = \tilde t\Bigr] f_{\tau_\zeta}(\tilde t)\,d\tilde t.$$
By definition of the conditional expectation with respect to F s B , the last expression equals
$$\int_s^\infty \mathbb{E}\Bigl[\mathbb{E}\Bigl[e^{-\int_s^{\tilde t} r_u\,du} \,\Big|\, \mathcal{F}_s^B\Bigr]\mathbb{1}_{B_{\cdot\wedge s}^{-1}(L_1)}\Bigr]\, \mathbb{E}\Bigl[\mathbb{1}_{W_{\cdot\wedge s}^{-1}(L_2)} \,\Big|\, \tau_\zeta = \tilde t\Bigr] f_{\tau_\zeta}(\tilde t)\,d\tilde t.$$
From here, taking the same steps as before, and using that $\mathcal{F}_s^B \subseteq \mathcal{F}_s$, we find that the first term of (20) is given by
$$\mathbb{E}\Bigl[\mathbb{E}\Bigl[e^{-\int_s^{\tau_\zeta} r_u\,du} \,\Big|\, \mathcal{F}_s^B\Bigr]\mathbb{1}_{\{\tau_\zeta>s\}}\mathbb{1}_A\Bigr],$$
from which we conclude, as A was an arbitrary generator of F s , that
$$\mathbb{E}\Bigl[e^{-\int_s^{\tau_\zeta} r_u\,du} \,\Big|\, \mathcal{F}_s\Bigr]\mathbb{1}_{\{\tau_\zeta>s\}} = \mathbb{E}\Bigl[e^{-\int_s^{\tau_\zeta} r_u\,du} \,\Big|\, \mathcal{F}_s^B\Bigr]\mathbb{1}_{\{\tau_\zeta>s\}}.$$
Performing now similar steps for some Borel set $L \subseteq C([0,\infty))$ with the expectation
$$\mathbb{E}\Bigl[\mathbb{E}\Bigl[e^{-\int_s^{\tau_\zeta} r_u\,du} \,\Big|\, \mathcal{F}_s^B\Bigr]\mathbb{1}_{\{\tau_\zeta>s\}}\mathbb{1}_{B_{\cdot\wedge s}^{-1}(L)}\Bigr],$$
we get
$$\mathbb{E}\Bigl[\mathbb{E}\Bigl[e^{-\int_s^{\tau_\zeta} r_u\,du} \,\Big|\, \mathcal{F}_s^B\Bigr]\mathbb{1}_{\{\tau_\zeta>s\}}\mathbb{1}_{B_{\cdot\wedge s}^{-1}(L)}\Bigr] = \int_s^\infty \mathbb{E}\Bigl[\mathbb{E}\Bigl[e^{-\int_s^{\tilde t} r_u\,du} \,\Big|\, \mathcal{F}_s^B\Bigr]\mathbb{1}_{B_{\cdot\wedge s}^{-1}(L)}\Bigr] f_{\tau_\zeta}(\tilde t)\,d\tilde t,$$
where we may exchange the integrals to end up with
$$\mathbb{E}\Bigl[\int_s^\infty \mathbb{E}\Bigl[e^{-\int_s^{\tilde t} r_u\,du} \,\Big|\, \mathcal{F}_s^B\Bigr] f_{\tau_\zeta}(\tilde t)\,d\tilde t\;\, \mathbb{1}_{B_{\cdot\wedge s}^{-1}(L)}\Bigr],$$
from which (19), that is,
$$\mathbb{E}\Bigl[e^{-\int_s^{\tau_\zeta} r_u\,du} \,\Big|\, \mathcal{F}_s\Bigr]\mathbb{1}_{\{\tau_\zeta>s\}} = \int_s^\infty \mathbb{E}\Bigl[e^{-\int_s^{\tilde t} r_u\,du} \,\Big|\, \mathcal{F}_s^B\Bigr] f_{\tau_\zeta}(\tilde t)\,d\tilde t\;\, \mathbb{1}_{\{\tau_\zeta>s\}}$$
follows. Applying (16) to evaluate the conditional expectation in the last term, we get that
$$\mathbb{E}\Bigl[e^{-\int_s^{\tau_\zeta} r_u\,du} \,\Big|\, \mathcal{F}_s\Bigr]\mathbb{1}_{\{\tau_\zeta>s\}} = \int_s^\infty \exp\left\{-\frac{r_s}{a}\bigl(1-e^{-a(\tilde t-s)}\bigr) - b(\tilde t-s) + \frac{b}{a}\bigl(1-e^{-a(\tilde t-s)}\bigr) + \frac{\delta^2}{2a^2}\int_s^{\tilde t}\bigl(1-e^{-a(\tilde t-u)}\bigr)^2\,du\right\} f_{\tau_\zeta}(\tilde t)\,d\tilde t\;\, \mathbb{1}_{\{\tau_\zeta>s\}} = \int_s^\infty M(\tilde t - s,\, r_s)\, f_{\tau_\zeta}(\tilde t)\,d\tilde t\;\, \mathbb{1}_{\{\tau_\zeta>s\}}.$$
Expression (21) again defines the barrier when it equals 1. In the special case $s = t$, we get
$$\int_t^\infty \exp\left\{-\frac{r}{a}\bigl(1-e^{-a(\tilde t-t)}\bigr) - b(\tilde t-t) + \frac{b}{a}\bigl(1-e^{-a(\tilde t-t)}\bigr) + \frac{\delta^2}{2a^2}\int_t^{\tilde t}\bigl(1-e^{-a(\tilde t-u)}\bigr)^2\,du\right\} f_{\tau_\zeta}(\tilde t)\,d\tilde t\;\, \mathbb{1}_{\{\tau_\zeta>t\}} = \int_t^\infty M(\tilde t - t,\, r)\, f_{\tau_\zeta}(\tilde t)\,d\tilde t\;\, \mathbb{1}_{\{\tau_\zeta>t\}} = \Bigl(M(\cdot,r) * \bigl(f_{\tau_\zeta}\,\mathbb{1}_{[t,\infty)}\bigr)\Bigr)\,\mathbb{1}_{\{\tau_\zeta>t\}}.$$
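A sketch of how this final expression can be evaluated numerically (the quadrature grid and all parameter values are illustrative assumptions, not taken from the paper): since the integral is strictly decreasing in $r$, the critical rate level at which it crosses 1 can be located by bisection.

```python
import math

def M(t, r, a, b, delta):
    # Closed-form Vasicek discount factor E_r[exp(-int_0^t r_u du)].
    bt = b - delta**2 / (2 * a**2)
    B = (1 - math.exp(-a * t)) / a
    return math.exp(-bt * t + (bt - r) * B - delta**2 * B**2 / (4 * a))

def fpt_density(t, z, mu, zeta, sigma):
    nu = mu - zeta
    return z / (math.sqrt(2 * math.pi) * sigma * t**1.5) \
        * math.exp(-(z - nu * t)**2 / (2 * sigma**2 * t))

def barrier_integral(r, z, t=0.0, a=1.0, b=0.6, delta=0.5,
                     mu=0.4, zeta=0.3, sigma=1.0, T=200.0, n=4000):
    """Quadrature for int_t^infty M(t~ - t, r) f_{tau_zeta}(t~) dt~."""
    h = (T - t) / n
    total = 0.0
    for k in range(1, n + 1):
        tt = t + k * h                      # grid point t~
        total += M(tt - t, r, a, b, delta) * fpt_density(tt, z, mu, zeta, sigma)
    return total * h

def critical_rate(z, lo=-5.0, hi=0.5, tol=1e-4):
    """Bisection for the rate level r* with barrier_integral(r*, z) = 1."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if barrier_integral(mid, z) > 1.0:
            lo = mid                        # integral too large: r* is higher
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Consistent with the discussion of the HJB approach, the resulting critical rate is negative: for $r \ge 0$ the integral stays below 1, so payouts at the maximal rate are triggered there.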
With the BSDE approach, we were able to find a solution to this control problem in the form of the integral expression above. The exogenous stopping time enters the solution in the form of the convolution $M(\cdot,r) * (f_{\tau_\zeta}\,\mathbb{1}_{[t,\infty)})$. A numerical illustration can be found in Figure 3 and Figure 4.

Author Contributions

All authors contributed equally to conceptualization, methodology, formal analysis, draft preparation and editing. All authors have read and agreed to the published version of the manuscript.

Funding

The research of Julia Eisenberg was funded by the Austrian Science Fund (FWF), Project number V 603-N35. Stefan Kremsner was supported by the Austrian Science Fund (FWF): Project F5508-N26, which is part of the Special Research Program “Quasi-Monte Carlo Methods: Theory and Applications”.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. De Finetti, B. Su un’impostazione alternativa della teoria collettiva del rischio. Trans. XVth Int. Congr. Actuar. 1957, 2, 433–443.
  2. Asmussen, S.; Taksar, M. Controlled diffusion models for optimal dividend pay-out. Insur. Math. Econ. 1997, 20, 1–15.
  3. Azcue, P.; Muler, N. Optimal reinsurance and dividend distribution policies in the Cramér-Lundberg model. Math. Financ. Int. J. Math. Stat. Financ. Econ. 2005, 15, 261–308.
  4. Strini, J.A.; Thonhauser, S. On a dividend problem with random funding. Eur. Actuar. J. 2019, 9, 607–633.
  5. Zhu, J.; Siu, T.K.; Yang, H. Singular dividend optimization for a linear diffusion model with time-inconsistent preferences. Eur. J. Oper. Res. 2020, 285, 66–80.
  6. Albrecher, H.; Thonhauser, S. Optimality results for dividend problems in insurance. RACSAM-Rev. Real Acad. Cienc. Exactas Fis. Nat. Ser. A Mat. 2009, 103, 295–320.
  7. Avanzi, B. Strategies for dividend distribution: A review. N. Am. Actuar. J. 2009, 13, 217–251.
  8. Schmidli, H. Stochastic Control in Insurance; Springer: London, UK, 2008.
  9. Jiang, Z.; Pistorius, M. Optimal dividend distribution under Markov regime switching. Financ. Stoch. 2012, 16, 449–476.
  10. Akyildirim, E.; Güney, I.E.; Rochet, J.C.; Soner, H.M. Optimal dividend policy with random interest rates. J. Math. Econ. 2014, 51, 93–101.
  11. Bandini, E.; de Angelis, T.; Ferrari, G.; Gozzi, F. Optimal dividend payout under stochastic discounting. arXiv 2021, arXiv:2005.11538.
  12. Stübinger, J.; Endres, S. Pairs trading with a mean-reverting jump-diffusion model on high-frequency data. Quant. Financ. 2018, 18, 1735–1751.
  13. Yatabe, Z.; Asubar, J.T. Ornstein–Uhlenbeck process in a human body weight fluctuation. Phys. A Stat. Mech. Appl. 2021, 582, 126286.
  14. Blomberg, S.P.; Rathnayake, S.I.; Moreau, C.M. Beyond Brownian motion and the Ornstein-Uhlenbeck process: Stochastic diffusion models for the evolution of quantitative characters. Am. Nat. 2020, 195, 145–165.
  15. Eisenberg, J. Optimal dividends under a stochastic interest rate. Insur. Math. Econ. 2015, 65, 259–266.
  16. Bismut, J.-M. An introductory approach to duality in optimal stochastic control. SIAM Rev. 1978, 20, 62–78.
  17. Pardoux, E.; Peng, S. Adapted solution of a backward stochastic differential equation. Syst. Control Lett. 1990, 14, 55–61.
  18. El Karoui, N.; Peng, S.; Quenez, M.C. Backward stochastic differential equations in finance. Math. Financ. 1997, 7, 1–71.
  19. Ma, J.; Morel, J.; Yong, J. Forward-Backward Stochastic Differential Equations and Their Applications; Number 1702 in Lecture Notes in Mathematics; Springer Science & Business Media: Berlin/Heidelberg, Germany, 1999.
  20. Kohlmann, M.; Zhou, X.Y. Relationship between backward stochastic differential equations and stochastic controls: A linear-quadratic approach. SIAM J. Control Optim. 2000, 38, 1392–1407.
  21. Pham, H. Continuous-Time Stochastic Control and Optimization with Financial Applications; Series SMAP; Springer: Berlin/Heidelberg, Germany, 2009.
  22. Kremsner, S.; Steinicke, A.; Szölgyenyi, M. A deep neural network algorithm for semilinear elliptic PDEs with applications in insurance mathematics. Risks 2020, 8, 136.
  23. Han, J.; Jentzen, A.; Weinan, E. Solving high-dimensional partial differential equations using deep learning. Proc. Natl. Acad. Sci. USA 2018, 115, 8505–8510.
  24. Chessari, J.; Kawai, R. Numerical methods for backward stochastic differential equations: A survey. arXiv 2021, arXiv:2101.08936.
  25. Brigo, D.; Mercurio, F. Interest Rate Models-Theory and Practice: With Smile, Inflation and Credit; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2007.
  26. Borodin, A.N.; Salminen, P. Handbook of Brownian Motion-Facts and Formulae; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2015.
  27. Carmona, R. Lectures on BSDEs, Stochastic Control, and Stochastic Differential Games with Financial Applications; SIAM: Philadelphia, PA, USA, 2016.
  28. Touzi, N. Optimal Stochastic Control, Stochastic Target Problems, and Backward SDE; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012; Volume 29.
  29. Pardoux, É. Backward stochastic differential equations and viscosity solutions of systems of semilinear parabolic and elliptic PDEs of second order. In Stochastic Analysis and Related Topics VI; Springer: Berlin/Heidelberg, Germany, 1998; pp. 79–127.
  30. Pardoux, É. BSDEs, weak convergence and homogenization of semilinear PDEs. In Nonlinear Analysis, Differential Equations and Control; Springer: Berlin/Heidelberg, Germany, 1999; pp. 503–549.
  31. Briand, P.; Delyon, B.; Hu, Y.; Pardoux, E.; Stoica, L. Lp solutions of backward stochastic differential equations. Stoch. Process. Appl. 2003, 108, 109–129.
Figure 1. The curve $\alpha(t)$ for $a = \delta = 1$, $T = 5$, $b = 0.2$ (left picture) and $b = 0.51$ (right picture).
Figure 2. Two paths of $r_t$ in view of $\alpha(t)$.
Figure 3. Interest rate (left picture) and company wealth (right picture), where the blue path hits the stopping boundary.
Figure 4. The corresponding payout strategy $c_t$ for the paths of the surplus process in Figure 3.
