Article

Mixed Periodic-Classical Barrier Strategies for Lévy Risk Processes

by
José-Luis Pérez
1 and
Kazutoshi Yamazaki
2,*
1
Department of Probability and Statistics, Centro de Investigación en Matemáticas A.C. Calle Jalisco s/n. C.P. 36240, Guanajuato, Mexico
2
Department of Mathematics, Faculty of Engineering Science, Kansai University, 3-3-35 Yamate-cho, Suita-shi, Osaka 564-8680, Japan
*
Author to whom correspondence should be addressed.
Risks 2018, 6(2), 33; https://doi.org/10.3390/risks6020033
Submission received: 25 February 2018 / Revised: 25 March 2018 / Accepted: 26 March 2018 / Published: 5 April 2018

Abstract

Given a spectrally-negative Lévy process and independent Poisson observation times, we consider a periodic barrier strategy that pushes the process down to a certain level whenever the observed value is above it. We also consider the versions with additional classical reflection above and/or below. Using scale functions and excursion theory, various fluctuation identities are computed in terms of the scale functions. Applications in de Finetti’s dividend problems are also discussed.

1. Introduction

In actuarial risk theory, the surplus of an insurance company is typically modeled by a compound Poisson process with a positive drift and negative jumps (Cramér–Lundberg model) or more generally by a spectrally-negative Lévy process. Thanks to the recent developments of the fluctuation theory of Lévy processes, there now exists a variety of tools available to compute various quantities that are useful in insurance mathematics.
By the existing fluctuation theory, it is relatively easy to deal with (classical) reflected Lévy processes that can be written as the differences between the underlying and running supremum/infimum processes.
The known results on these processes can be conveniently and efficiently applied in modeling the surplus of a dividend-paying company: under a barrier strategy, the resulting controlled surplus process becomes the process reflected from above. The work in Avram et al. (2007) obtained the expected net present value (NPV) of dividends until ruin; a sufficient condition for the optimality of a barrier strategy is given in Loeffen (2008). Similarly, capital injection is modeled by reflections from below. In the bail-out case with a requirement that ruin must be avoided, Avram et al. (2007) obtained the expected NPV of dividends and capital injections under a double barrier strategy. They also showed that it is optimal to reflect the process at zero and at some upper boundary, with the resulting surplus process being a doubly-reflected Lévy process.
These seminal works give concise expressions for various fluctuation identities in terms of the scale function. In general, conciseness is still maintained when the underlying spectrally one-sided Lévy process is replaced with its reflected process. This is typically done by using the derivative or the integral of the scale function depending on whether the reflection barrier is higher or lower. For the results on a variant called refracted Lévy processes, see, e.g., Kyprianou (2010), Kyprianou et al. (2014).
In this paper, we consider a different version of reflection, which we call the Parisian reflection. Motivated by the fact that, in reality, dividend/capital injection decisions can only be made at some intervals, several recent papers consider periodic barrier strategies that reflect the process only at discrete observation times. In particular, Avram et al. (2018) consider, for a general spectrally-negative Lévy process, the case where capital injections can be made at the jump times of an independent Poisson process (the reflection barrier is lower). The current paper considers the case where dividends are paid at these Poisson observation times (the reflection barrier is upper). Other related papers in the compound Poisson case include Albrecher et al. (2011); Avanzi et al. (2013), where in the former, several identities are obtained when the solvency is also observed periodically, whereas the latter studies the case where observation intervals are Erlang-distributed.
This work is also motivated by its applications in de Finetti’s dividend problems under Poisson observation times. In the dual (spectrally positive) model, Avanzi et al. (2014) solved the case where the jump size is hyper-exponentially distributed; Pérez and Yamazaki (2017) generalized the results to a general spectrally-positive Lévy case and also solved the bail-out version using the results in Avram et al. (2018). An extension with a combination of periodic and continuous dividend payments (with different transaction costs) was recently solved by Avanzi et al. (2016) when the underlying process is a Brownian motion with a drift. In these papers, optimal strategies are of a periodic barrier-type. On the other hand, this paper provides tools to study the spectrally-negative case. Recently, our results have been used to show the optimality of periodic barrier strategies in (Noba et al. 2017, 2018); see Remarks 2 and 8.
In this paper, we study the following four processes that are constructed from a given spectrally-negative Lévy process $X$ and the jump times of an independent Poisson process with rate $r > 0$:
  • The process with Parisian reflection from above $X_r$: The process $X_r$ is constructed by modifying $X$ so that it is pushed down to zero at the Poisson observation times at which it is above zero. Note that the barrier level zero can be changed to any real value by the spatial homogeneity of $X$. This process models the controlled surplus process under a periodic barrier dividend strategy.
  • The process with Parisian and classical reflection from above $\tilde{X}_r^b$: Suppose $\overline{Y}^b$ is the reflected process of $X$ with the classical upper barrier $b > 0$. The process $\tilde{X}_r^b$ is constructed in the same way as $X_r$ in (1), with the underlying process $X$ replaced with $\overline{Y}^b$. This process models the controlled surplus process under a combination of the classical and periodic barrier dividend strategies. It is a generalization of the Brownian motion case as studied in Avanzi et al. (2016).
  • The process with Parisian reflection from above and classical reflection from below $Y_r^a$: Suppose $\underline{Y}_a$ is the reflected process of $X$ with the classical lower barrier $a < 0$. The process $Y_r^a$ is constructed in the same way as $X_r$ in (1), with the underlying process $X$ replaced with $\underline{Y}_a$. By shifting the process (by $-a$), it models the surplus under a periodic barrier dividend strategy with classical capital injections (so that the surplus does not go below zero).
  • The process with Parisian and classical reflection from above and classical reflection from below $\tilde{Y}_r^{a,b}$: Suppose $Y^{a,b}$ is the doubly-reflected process of $X$ with a classical lower barrier $a < 0$ and a classical upper barrier $b > 0$. The process $\tilde{Y}_r^{a,b}$ is constructed in the same way as $X_r$ in (1), with the underlying process $X$ replaced with $Y^{a,b}$. By shifting the process (by $-a$), it models the controlled surplus process under a combination of the classical and periodic barrier dividend strategies as in (2), with additional classical capital injections.
For these four processes, we compute various fluctuation identities that include:
(a)
the expected NPV of dividends (both corresponding to Parisian and classical reflections) with the horizon given by the first exit time from an interval and those with the infinite horizon,
(b)
the expected NPV of capital injections with the horizon given by the first exit time from an interval and those with the infinite horizon,
(c)
the two-sided (one-sided) exit identities.
In order to compute these for the four processes defined above, we first obtain the identities for the process (1) killed upon exiting $[a,b]$. Using the observation that the paths of the processes (2)–(4) are identical to those of (1) before the first exit time from $[a,b]$, the results for (2)–(4) can be obtained as corollaries, via the strong Markov property and the existing known identities for classical reflected processes.
The identities for (1) are obtained separately for the case that X has paths of bounded variation and for the case that it has paths of unbounded variation. The former is done by a relatively well-known technique via the strong Markov property combined with the existing known identities for the spectrally-negative Lévy process. The case of unbounded variation is done via excursion theory (in particular excursions away from zero as in Pardo et al. (2018)). Thanks to the simplifying formulae obtained in Avram et al. (2018) and Loeffen et al. (2014), concise expressions can be achieved.
The rest of the paper is organized as follows. In Section 2, we review the spectrally-negative Lévy process and construct more formally the four processes described above. In addition, scale functions and some existing fluctuation identities are briefly reviewed. In Section 3, we state the main results for the process (1) and, then, in Section 4, those for the processes (2)–(4). In Section 5 and Section 6, we give proofs for the main results for (1) for the case of bounded variation and unbounded variation, respectively.
Throughout the paper, for any function $f$ of two variables, let $f'(\cdot,\cdot)$ denote its partial derivative with respect to the first argument.

2. Spectrally-Negative Lévy Processes with Parisian Reflection Above

Let $X = (X(t); t \ge 0)$ be a Lévy process defined on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$. For $x \in \mathbb{R}$, we denote by $\mathbb{P}_x$ the law of $X$ when it starts at $x$ and write, for convenience, $\mathbb{P}$ in place of $\mathbb{P}_0$. Accordingly, we shall write $\mathbb{E}_x$ and $\mathbb{E}$ for the associated expectation operators. In this paper, we shall assume throughout that $X$ is spectrally negative, meaning here that it has no positive jumps and that it is not the negative of a subordinator. It is a well-known fact that its Laplace exponent $\psi: [0,\infty) \to \mathbb{R}$, i.e.,
$\mathbb{E}\big[e^{\theta X(t)}\big] =: e^{\psi(\theta)t}, \qquad t, \theta \ge 0,$
is given by the Lévy–Khintchine formula:
$\psi(\theta) := \gamma\theta + \frac{\sigma^2}{2}\theta^2 + \int_{(-\infty,0)} \big( e^{\theta x} - 1 - \theta x \mathbf{1}_{\{x > -1\}} \big)\,\Pi(\mathrm{d}x), \qquad \theta \ge 0,$
where $\gamma \in \mathbb{R}$, $\sigma \ge 0$, and $\Pi$ is a measure on $(-\infty,0)$ called the Lévy measure of $X$ that satisfies:
$\int_{(-\infty,0)} (1 \wedge x^2)\,\Pi(\mathrm{d}x) < \infty.$
It is well known that $X$ has paths of bounded variation if and only if $\sigma = 0$ and $\int_{(-1,0)} |x|\,\Pi(\mathrm{d}x) < \infty$; in this case, $X$ can be written as:
$X(t) = ct - S(t), \qquad t \ge 0,$
where:
$c := \gamma - \int_{(-1,0)} x\,\Pi(\mathrm{d}x),$
and $(S(t); t \ge 0)$ is a driftless subordinator. Note that necessarily $c > 0$, since we have ruled out the case that $X$ has monotone paths; in this case, its Laplace exponent is given by:
$\psi(\theta) = c\theta + \int_{(-\infty,0)} \big( e^{\theta x} - 1 \big)\,\Pi(\mathrm{d}x), \qquad \theta \ge 0.$
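As a concrete illustration (not taken from the paper; all parameters are hypothetical), the Laplace exponent of a Cramér–Lundberg model with exponential claims is available in closed form. The sketch below checks $\psi(0) = 0$ and the net drift $\psi'(0+) = c - \lambda/\alpha$:

```python
# Illustrative sketch (assumed parameters): X(t) = c*t - S(t), where S is a
# compound Poisson process with claim rate lam and Exp(alpha) claim sizes,
# i.e., Pi(dx) = lam * alpha * e^{alpha x} dx on (-inf, 0).  Then
#   psi(theta) = c*theta - lam*theta/(alpha + theta).

def psi(theta, c=1.5, lam=1.0, alpha=1.0):
    """Laplace exponent psi(theta) = c*theta + int (e^{theta x} - 1) Pi(dx)."""
    return c * theta - lam * theta / (alpha + theta)

# psi(0) = 0 always, and psi'(0+) = c - lam/alpha is the net drift
# (positive safety loading if and only if c > lam/alpha).
print(psi(0.0))   # 0.0
print(psi(1.0))   # 1.5 - 0.5 = 1.0
```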
Let us define the running infimum and supremum processes:
$\underline{X}(t) := \inf_{0 \le t' \le t} X(t') \quad \text{and} \quad \overline{X}(t) := \sup_{0 \le t' \le t} X(t'), \qquad t \ge 0.$
Then, the processes reflected from above at $b$ and from below at $a$ are given, respectively, by:
$\overline{Y}^b(t) := X(t) - L^b(t) \quad \text{and} \quad \underline{Y}_a(t) := X(t) + R_a(t), \qquad t \ge 0,$
where:
$L^b(t) := \big(\overline{X}(t) - b\big) \vee 0 \quad \text{and} \quad R_a(t) := \big(a - \underline{X}(t)\big) \vee 0, \qquad t \ge 0,$
are the cumulative amounts of reflections that push the processes downward and upward, respectively.

2.1. Lévy Processes with Parisian Reflection above

Let $\mathcal{T}_r = \{T(i); i \ge 1\}$ be the increasing sequence of jump times of an independent Poisson process with rate $r > 0$. We construct the Lévy process with Parisian reflection above, $X_r = (X_r(t); t \ge 0)$, as follows: the process is only observed at the times in $\mathcal{T}_r$ and is pushed down to zero if and only if it is above zero.
More specifically, we have:
$X_r(t) = X(t), \qquad 0 \le t < T_0^+(1),$
where:
$T_0^+(1) := \inf\{T(i) : X(T(i)) > 0\};$
here and throughout, let $\inf \varnothing = \infty$. The process then jumps downward by $X(T_0^+(1))$ so that $X_r(T_0^+(1)) = 0$. For $T_0^+(1) \le t < T_0^+(2) := \inf\{T(i) > T_0^+(1) : X_r(T(i)) > 0\}$, we have $X_r(t) = X(t) - X(T_0^+(1))$, and $X_r(T_0^+(2)) = 0$. The process can be constructed by repeating this procedure.
Suppose $L_r(t)$ is the cumulative amount of (Parisian) reflection until time $t \ge 0$. Then, we have:
$X_r(t) = X(t) - L_r(t), \qquad t \ge 0,$
with
$L_r(t) := \sum_{T_0^+(i) \le t} X_r(T_0^+(i)-), \qquad t \ge 0,$
where $(T_0^+(n); n \ge 1)$ can be constructed inductively by (4) and:
$T_0^+(n+1) := \inf\{T(i) > T_0^+(n) : X_r(T(i)) > 0\}, \qquad n \ge 1.$
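The inductive construction above can be sketched in a few lines of Python. The following is an illustrative simulation (a Cramér–Lundberg path with assumed parameters, not code from the paper), tracking the uncontrolled path $X$ and the cumulative Parisian reflection $L_r$, so that $X_r = X - L_r$:

```python
import random

random.seed(1)

# Illustrative simulation (assumed parameters): X is a Cramér–Lundberg path
# with premium rate c, claim rate lam and Exp(1) claims, observed at the jump
# times of an independent Poisson(r) process.  When an observation finds
# X_r = X - L_r above zero, L_r jumps so that X_r is pushed down to zero.
def simulate_parisian(x0=1.0, c=1.5, lam=1.0, r=2.0, horizon=50.0):
    t, x_free, cum_L = 0.0, x0, 0.0   # time, uncontrolled X(t), L_r(t)
    rows = []
    while t < horizon:
        # by memorylessness, both exponential clocks may be re-drawn per event
        w_claim = random.expovariate(lam)
        w_obs = random.expovariate(r)
        dt = min(w_claim, w_obs)
        t += dt
        x_free += c * dt                       # premium income between events
        if w_claim < w_obs:
            x_free -= random.expovariate(1.0)  # claim: downward jump of X
        elif x_free - cum_L > 0:               # observation with X_r above 0:
            cum_L = x_free                     # push X_r down to zero
        rows.append((t, x_free, cum_L))
    return rows

rows = simulate_parisian()
# L_r is nondecreasing, and X_r = X - L_r equals zero right after each push.
```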

2.2. Lévy Processes with Parisian and Classical Reflection above

Fix $b > 0$. Consider an extension of the above with additional classical reflection from above at $b$, which we denote by $\tilde{X}_r^b$. More specifically, we have:
$\tilde{X}_r^b(t) = \overline{Y}^b(t), \qquad 0 \le t < \tilde{T}_0^+(1),$
where $\tilde{T}_0^+(1) := \inf\{T(i) : \overline{Y}^b(T(i)) > 0\}$. The process then jumps downward by $\overline{Y}^b(\tilde{T}_0^+(1))$ so that $\tilde{X}_r^b(\tilde{T}_0^+(1)) = 0$. For $\tilde{T}_0^+(1) \le t < \tilde{T}_0^+(2) := \inf\{T(i) > \tilde{T}_0^+(1) : \tilde{X}_r^b(T(i)) > 0\}$, it is the reflected process of $X(t) - X(\tilde{T}_0^+(1))$ (with classical reflection above at $b$ as in (2)), and $\tilde{X}_r^b(\tilde{T}_0^+(2)) = 0$. The process can be constructed by repeating this procedure.
Suppose $\tilde{L}_{r,P}^b(t)$ and $\tilde{L}_{r,S}^b(t)$ are the cumulative amounts of Parisian reflection (with upper barrier zero) and classical reflection (with upper barrier $b$) until time $t \ge 0$. Then, we have:
$\tilde{X}_r^b(t) = X(t) - \tilde{L}_{r,P}^b(t) - \tilde{L}_{r,S}^b(t), \qquad t \ge 0.$

2.3. Lévy Processes with Parisian Reflection above and Classical Reflection below

Fix $a < 0$. The process $Y_r^a$ with additional (classical) reflection below can be defined analogously. We have:
$Y_r^a(t) = \underline{Y}_a(t), \qquad 0 \le t < \hat{T}_0^+(1),$
where $\hat{T}_0^+(1) := \inf\{T(i) : \underline{Y}_a(T(i)) > 0\}$. The process then jumps downward by $\underline{Y}_a(\hat{T}_0^+(1))$ so that $Y_r^a(\hat{T}_0^+(1)) = 0$. For $\hat{T}_0^+(1) \le t < \hat{T}_0^+(2) := \inf\{T(i) > \hat{T}_0^+(1) : Y_r^a(T(i)) > 0\}$, $Y_r^a(t)$ is the reflected process of $X(t) - X(\hat{T}_0^+(1))$ (with classical reflection below at $a$ as in (2)), and $Y_r^a(\hat{T}_0^+(2)) = 0$. The process can be constructed by repeating this procedure. It is clear that it admits the decomposition:
$Y_r^a(t) = X(t) - L_r^a(t) + R_r^a(t), \qquad t \ge 0,$
where $L_r^a(t)$ and $R_r^a(t)$ are, respectively, the cumulative amounts of Parisian reflection (with upper barrier zero) and classical reflection (with lower barrier $a$) until time $t$.

2.4. Lévy Processes with Parisian and Classical Reflection above and Classical Reflection below

Fix $a < 0 < b$. Consider a version of $Y_r^a$ with additional classical reflection from above at $b$. More specifically, we have:
$\tilde{Y}_r^{a,b}(t) = Y^{a,b}(t), \qquad 0 \le t < \check{T}_0^+(1),$
where $Y^{a,b}$ is the classical doubly-reflected process of $X$ with lower barrier $a$ and upper barrier $b$ (see Pistorius (2003)) and:
$\check{T}_0^+(1) := \inf\{T(i) : Y^{a,b}(T(i)) > 0\}.$
The process then jumps downward by $Y^{a,b}(\check{T}_0^+(1))$ so that $\tilde{Y}_r^{a,b}(\check{T}_0^+(1)) = 0$. For $\check{T}_0^+(1) \le t < \check{T}_0^+(2) := \inf\{T(i) > \check{T}_0^+(1) : \tilde{Y}_r^{a,b}(T(i)) > 0\}$, it is the doubly-reflected process of $X(t) - X(\check{T}_0^+(1))$ (with classical reflections at $a$ and $b$), and $\tilde{Y}_r^{a,b}(\check{T}_0^+(2)) = 0$. The process can be constructed by repeating this procedure.
Suppose $\tilde{L}_{r,P}^{a,b}(t)$ and $\tilde{L}_{r,S}^{a,b}(t)$ are the cumulative amounts of Parisian reflection (with upper barrier zero) and classical reflection (with upper barrier $b$) until time $t \ge 0$, and $\tilde{R}_r^{a,b}(t)$ is that of the classical reflection (with lower barrier $a$). Then, we have:
$\tilde{Y}_r^{a,b}(t) = X(t) - \tilde{L}_{r,P}^{a,b}(t) - \tilde{L}_{r,S}^{a,b}(t) + \tilde{R}_r^{a,b}(t), \qquad t \ge 0.$

2.5. Review of Scale Functions

Fix $q \ge 0$. We use $W^{(q)}$ for the scale function of the spectrally-negative Lévy process $X$. This is the mapping from $\mathbb{R}$ to $[0,\infty)$ that takes the value zero on the negative half-line, while on the positive half-line it is the strictly increasing function defined by its Laplace transform:
$\int_0^\infty e^{-\theta x} W^{(q)}(x)\,\mathrm{d}x = \frac{1}{\psi(\theta) - q}, \qquad \theta > \Phi(q),$
where $\psi$ is as defined in (1) and:
$\Phi(q) := \sup\{\lambda \ge 0 : \psi(\lambda) = q\}.$
We also define, for $x \in \mathbb{R}$,
$\overline{W}^{(q)}(x) := \int_0^x W^{(q)}(y)\,\mathrm{d}y, \qquad \overline{\overline{W}}^{(q)}(x) := \int_0^x \int_0^z W^{(q)}(w)\,\mathrm{d}w\,\mathrm{d}z,$
$Z^{(q)}(x) := 1 + q\overline{W}^{(q)}(x), \qquad \overline{Z}^{(q)}(x) := \int_0^x Z^{(q)}(z)\,\mathrm{d}z = x + q\overline{\overline{W}}^{(q)}(x).$
Noting that $W^{(q)}(x) = 0$ for $-\infty < x < 0$, we have:
$\overline{W}^{(q)}(x) = 0, \quad \overline{\overline{W}}^{(q)}(x) = 0, \quad Z^{(q)}(x) = 1, \quad \text{and} \quad \overline{Z}^{(q)}(x) = x, \qquad x \le 0.$
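For a Brownian motion with drift (paths of unbounded variation), $W^{(q)}$ is available in closed form, which makes the definitions above easy to sanity-check numerically. The sketch below (illustrative parameters, not from the paper) verifies the defining Laplace transform (6) by quadrature:

```python
import math

# Illustrative check (assumed parameters) for X a Brownian motion with drift:
#   psi(theta) = mu*theta + sigma^2*theta^2/2, and
#   W^(q)(x) = (e^{Phi(q) x} - e^{theta_- x}) / sqrt(mu^2 + 2 q sigma^2),
# where Phi(q) > 0 > theta_- are the two roots of psi(theta) = q.
mu, sigma, q = 0.5, 1.0, 0.1

def psi(theta):
    return mu * theta + 0.5 * sigma**2 * theta**2

disc = math.sqrt(mu**2 + 2 * q * sigma**2)
Phi_q = (-mu + disc) / sigma**2
theta_minus = (-mu - disc) / sigma**2

def W(x):
    return (math.exp(Phi_q * x) - math.exp(theta_minus * x)) / disc if x >= 0 else 0.0

# verify the defining transform: int_0^inf e^{-theta x} W(x) dx = 1/(psi(theta)-q)
theta = Phi_q + 1.0
n, h = 200000, 0.0005   # trapezoidal rule, truncated at x = 100
integral = h * (0.5 * W(0.0) + sum(math.exp(-theta * i * h) * W(i * h) for i in range(1, n)))
print(abs(integral - 1.0 / (psi(theta) - q)))  # small (quadrature error only)
```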
Define also:
$Z^{(q)}(x,\theta) := e^{\theta x}\Big( 1 + (q - \psi(\theta)) \int_0^x e^{-\theta z} W^{(q)}(z)\,\mathrm{d}z \Big), \qquad x \in \mathbb{R},\ \theta \ge 0,$
and its partial derivative with respect to the first argument:
$Z^{(q)\prime}(x,\theta) = \theta Z^{(q)}(x,\theta) + (q - \psi(\theta)) W^{(q)}(x), \qquad x \in \mathbb{R},\ \theta \ge 0.$
In particular, for $x \in \mathbb{R}$, $Z^{(q)}(x,0) = Z^{(q)}(x)$ and, for $r > 0$,
$Z^{(q)}(x,\Phi(q+r)) = e^{\Phi(q+r)x}\Big( 1 - r \int_0^x e^{-\Phi(q+r)z} W^{(q)}(z)\,\mathrm{d}z \Big),$
$Z^{(q+r)}(x,\Phi(q)) = e^{\Phi(q)x}\Big( 1 + r \int_0^x e^{-\Phi(q)z} W^{(q+r)}(z)\,\mathrm{d}z \Big).$
Remark 1.
1. If $X$ has paths of unbounded variation or the Lévy measure is atomless, it is known that $W^{(q)}$ is $C^1(\mathbb{R}\setminus\{0\})$; see, e.g., (Chan et al. 2011, Theorem 3). In particular, if $\sigma > 0$, then $W^{(q)}$ is $C^2(\mathbb{R}\setminus\{0\})$; see, e.g., (Chan et al. 2011, Theorem 1).
2. Regarding the asymptotic behavior near zero, as in Lemmas 3.1 and 3.2 of Kuznetsov et al. (2013),
$W^{(q)}(0) = \begin{cases} 0 & \text{if } X \text{ has paths of unbounded variation}, \\ 1/c & \text{if } X \text{ has paths of bounded variation}, \end{cases}$
$W^{(q)\prime}(0+) := \lim_{x \downarrow 0} W^{(q)\prime}(x) = \begin{cases} 2/\sigma^2 & \text{if } \sigma > 0, \\ \infty & \text{if } \sigma = 0 \text{ and } \Pi(-\infty,0) = \infty, \\ \big(q + \Pi(-\infty,0)\big)/c^2 & \text{if } \sigma = 0 \text{ and } \Pi(-\infty,0) < \infty. \end{cases}$
On the other hand, as in Lemma 3.3 of Kuznetsov et al. (2013),
$e^{-\Phi(q)x} W^{(q)}(x) \nearrow \psi'(\Phi(q))^{-1} \qquad \text{as } x \to \infty,$
where, in the case $\psi'(0+) = 0$ and $q = 0$, the right-hand side is understood to be infinity.
Below, we list the fluctuation identities that will be used later in the paper.

2.6. Fluctuation Identities for X

Let:
$\tau_a^- := \inf\{t \ge 0 : X(t) < a\} \quad \text{and} \quad \tau_b^+ := \inf\{t \ge 0 : X(t) > b\}, \qquad a, b \in \mathbb{R}.$
Then, for $b > a$ and $x \le b$,
$\mathbb{E}_x\big[e^{-q\tau_b^+}; \tau_b^+ < \tau_a^-\big] = \frac{W^{(q)}(x-a)}{W^{(q)}(b-a)},$
$\mathbb{E}_x\big[e^{-q\tau_a^- - \theta[a - X(\tau_a^-)]}; \tau_b^+ > \tau_a^-\big] = Z^{(q)}(x-a,\theta) - Z^{(q)}(b-a,\theta)\frac{W^{(q)}(x-a)}{W^{(q)}(b-a)}, \qquad \theta \ge 0.$
By taking $b \uparrow \infty$ in the latter, as in (Albrecher et al. 2016, (7)) (see also the identity (3.19) in Avram et al. (2007)),
$\mathbb{E}_x\big[e^{-q\tau_a^- - \theta[a - X(\tau_a^-)]}; \tau_a^- < \infty\big] = Z^{(q)}(x-a,\theta) - W^{(q)}(x-a)\frac{\psi(\theta) - q}{\theta - \Phi(q)},$
where, for the case $\theta = \Phi(q)$, it is understood as the limiting case. In addition, it is known that a spectrally-negative Lévy process creeps downward if and only if $\sigma > 0$; by Theorem 2.6 (ii) of Kuznetsov et al. (2013),
$\mathbb{E}_x\big[e^{-q\tau_a^-}; X(\tau_a^-) = a, \tau_a^- < \infty\big] = \frac{\sigma^2}{2}\Big( W^{(q)\prime}(x-a) - \Phi(q) W^{(q)}(x-a) \Big), \qquad x > a,$
where we recall that $W^{(q)}$ is differentiable when $\sigma > 0$, as in Remark 1 (1). By this, the strong Markov property and (10), we have, for $a < b$ and $x \le b$,
$\mathbb{E}_x\big(e^{-q\tau_a^-}; X(\tau_a^-) = a, \tau_a^- < \tau_b^+\big) = \mathbb{E}_x\big(e^{-q\tau_a^-}; X(\tau_a^-) = a, \tau_a^- < \infty\big) - \mathbb{E}_x\big(e^{-q\tau_b^+}; \tau_b^+ < \tau_a^-\big)\,\mathbb{E}_b\big(e^{-q\tau_a^-}; X(\tau_a^-) = a, \tau_a^- < \infty\big) = C_{b-a}^{(q)}(x-a),$
where:
$C_\beta^{(q)}(y) := \frac{\sigma^2}{2}\Big( W^{(q)\prime}(y) - W^{(q)}(y)\frac{W^{(q)\prime}(\beta)}{W^{(q)}(\beta)} \Big), \qquad y \in \mathbb{R}\setminus\{0\},\ \beta > 0.$

2.7. Fluctuation Identities for $\overline{Y}^b$

Fix $a < b$. Define the first downcrossing time of $\overline{Y}^b$ of (2):
$\tilde{\tau}_{a,b}^- := \inf\{t > 0 : \overline{Y}^b(t) < a\}.$
The Laplace transform of $\tilde{\tau}_{a,b}^-$ is given, as in Proposition 2 (ii) of Pistorius (2004), by:
$\mathbb{E}_x\big(e^{-q\tilde{\tau}_{a,b}^-}\big) = Z^{(q)}(x-a) - qW^{(q)}(b-a)\frac{W^{(q)}(x-a)}{W^{(q)\prime}((b-a)+)}, \qquad q \ge 0,\ x \le b.$
As in Proposition 1 of Avram et al. (2007), the expected discounted cumulative amount of reflection from above as in (3) is:
$\mathbb{E}_x\Big[\int_{[0,\tilde{\tau}_{a,b}^-]} e^{-qt}\,\mathrm{d}L^b(t)\Big] = \frac{W^{(q)}(x-a)}{W^{(q)\prime}((b-a)+)}, \qquad q \ge 0,\ x \le b.$

2.8. Fluctuation Identities for $\underline{Y}_a$

Fix $a < b$. Define the first upcrossing time of $\underline{Y}_a$ of (2):
$\eta_{a,b}^+ := \inf\{t > 0 : \underline{Y}_a(t) > b\}.$
First, as on page 228 of Kyprianou (2006), its Laplace transform is concisely given by:
$\mathbb{E}_x\big(e^{-q\eta_{a,b}^+}\big) = \frac{Z^{(q)}(x-a)}{Z^{(q)}(b-a)}, \qquad q \ge 0,\ x \le b.$
Second, as in the proof of Theorem 1 of Avram et al. (2007), the expected discounted cumulative amount of reflection from below as in (3) is, given $\psi'(0+) > -\infty$,
$\mathbb{E}_x\Big[\int_{[0,\eta_{a,b}^+]} e^{-qt}\,\mathrm{d}R_a(t)\Big] = -l^{(q)}(x-a) + \frac{Z^{(q)}(x-a)}{Z^{(q)}(b-a)}\,l^{(q)}(b-a), \qquad q \ge 0,\ x \le b,$
where:
$l^{(q)}(x) := \overline{Z}^{(q)}(x) - \psi'(0+)\,\overline{W}^{(q)}(x), \qquad q \ge 0,\ x \in \mathbb{R}.$

2.9. Some More Notations

For the rest of the paper, we fix $r > 0$ and write $\mathbf{e}_r$ for the first observation time, an independent exponential random variable with parameter $r$.
Let, for $q \ge 0$ and $x \in \mathbb{R}$,
$\tilde{Z}^{(q,r)}(x,\theta) := \frac{r Z^{(q)}(x,\theta) + (q - \psi(\theta)) Z^{(q)}(x,\Phi(q+r))}{\Phi(q+r) - \theta}, \qquad \theta \ge 0,$
$\tilde{Z}^{(q,r)}(x) := \tilde{Z}^{(q,r)}(x,0) = \frac{r Z^{(q)}(x) + q Z^{(q)}(x,\Phi(q+r))}{\Phi(q+r)},$
where the case $\theta = \Phi(q+r)$ is understood as the limiting case.
We define, for any measurable function $f: \mathbb{R} \to \mathbb{R}$,
$\mathcal{M}_a^{(q,r)} f(x) := f(x-a) + r \int_0^x W^{(q+r)}(x-y) f(y-a)\,\mathrm{d}y, \qquad x \in \mathbb{R},\ a < 0.$
In particular, we let, for $a < 0$, $q \ge 0$ and $x \in \mathbb{R}$,
$W_a^{(q,r)}(x) := \mathcal{M}_a^{(q,r)} W^{(q)}(x), \qquad \overline{W}_a^{(q,r)}(x) := \mathcal{M}_a^{(q,r)} \overline{W}^{(q)}(x),$
$Z_a^{(q,r)}(x,\theta) := \mathcal{M}_a^{(q,r)} Z^{(q)}(\cdot,\theta)(x), \ \theta \ge 0, \qquad \overline{Z}_a^{(q,r)}(x) := \mathcal{M}_a^{(q,r)} \overline{Z}^{(q)}(x),$
with $Z_a^{(q,r)}(\cdot) := Z_a^{(q,r)}(\cdot, 0)$.
Thanks to these functionals, the following expectations admit concise expressions. By Lemma 2.1 in Loeffen et al. (2014) and Theorem 6.1 in Avram et al. (2018), for all $q \ge 0$, $a < 0 < b$ and $x \le b$,
$\mathbb{E}_x\big[e^{-(q+r)\tau_0^-} W^{(q)}(X(\tau_0^-)-a); \tau_0^- < \tau_b^+\big] = W_a^{(q,r)}(x) - \frac{W^{(q+r)}(x)}{W^{(q+r)}(b)} W_a^{(q,r)}(b), \qquad (21)$
$\mathbb{E}_x\big[e^{-(q+r)\tilde{\tau}_{0,b}^-} W^{(q)}(\overline{Y}^b(\tilde{\tau}_{0,b}^-)-a)\big] = W_a^{(q,r)}(x) - \frac{W^{(q+r)}(x)}{W^{(q+r)\prime}(b+)} (W_a^{(q,r)})'(b+). \qquad (22)$
In addition, we give a slight generalization of Lemma 2.1 of Loeffen et al. (2014) and Theorem 6.1 in Avram et al. (2018). The proofs are given in Appendix A.1.
Lemma 1.
For $q \ge 0$, $\theta \ge 0$, $a < 0 < b$, and $x \le b$,
$\mathbb{E}_x\big[e^{-(q+r)\tau_0^-} Z^{(q)}(X(\tau_0^-)-a,\theta); \tau_0^- < \tau_b^+\big] = Z_a^{(q,r)}(x,\theta) - \frac{W^{(q+r)}(x)}{W^{(q+r)}(b)} Z_a^{(q,r)}(b,\theta), \qquad (23)$
$\mathbb{E}_x\big[e^{-(q+r)\tilde{\tau}_{0,b}^-} Z^{(q)}(\overline{Y}^b(\tilde{\tau}_{0,b}^-)-a,\theta)\big] = Z_a^{(q,r)}(x,\theta) - \frac{W^{(q+r)}(x)}{W^{(q+r)\prime}(b+)} (Z_a^{(q,r)})'(b,\theta). \qquad (24)$

3. Main Results for $X_r$

In this section, we obtain the fluctuation identities for the process $X_r$ as constructed in Section 2.1. The main theorems are obtained for the case killed upon exiting an interval $[a,b]$ for $a < 0 < b$. As their corollaries, we also obtain the limiting cases as $a \downarrow -\infty$ and $b \uparrow \infty$. The proofs for the theorems are given in Section 5 and Section 6 for the bounded and unbounded variation cases, respectively. The proofs for the corollaries are given in Appendix B.
Define the first down-/up-crossing times for X r ,
$\tau_a^-(r) := \inf\{t > 0 : X_r(t) < a\} \quad \text{and} \quad \tau_b^+(r) := \inf\{t > 0 : X_r(t) > b\}, \qquad a, b \in \mathbb{R}.$
Define also, for $q \ge 0$, $a < 0$ and $x \in \mathbb{R}$,
$I_a^{(q,r)}(x) := \frac{W_a^{(q,r)}(x)}{W^{(q)}(-a)} - r\overline{W}^{(q+r)}(x),$
$J_a^{(q,r)}(x,\theta) := Z_a^{(q,r)}(x,\theta) - rZ^{(q)}(-a,\theta)\,\overline{W}^{(q+r)}(x),$
$J_a^{(q,r)}(x) := J_a^{(q,r)}(x,0) = Z_a^{(q,r)}(x) - rZ^{(q)}(-a)\,\overline{W}^{(q+r)}(x).$
Note in particular that:
$I_a^{(q,r)}(0) = 1 \quad \text{and} \quad J_a^{(q,r)}(0,\theta) = Z^{(q)}(-a,\theta),$
and that:
$J_a^{(0,r)}(x) = 1 \quad \text{and} \quad (J_a^{(0,r)})'(x) = 0, \qquad x \in \mathbb{R}.$
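These normalizations can be checked numerically. The following sketch (a Brownian-motion example with assumed parameters, not from the paper) implements the operator $\mathcal{M}_a^{(q,r)}$ by trapezoidal quadrature and verifies $I_a^{(q,r)}(0) = 1$ and $J_a^{(0,r)}(x) = 1$:

```python
import math

# Numerical sanity check (Brownian-motion example, assumed parameters) of
# I_a^{(q,r)}(0) = 1 and J_a^{(0,r)}(x) = 1, implementing
#   M_a^{(q,r)} f(x) = f(x-a) + r * int_0^x W^{(q+r)}(x-y) f(y-a) dy.
mu, sigma = 0.5, 1.0

def W(x, p):   # closed-form scale function W^{(p)} for BM with drift mu
    if x < 0:
        return 0.0
    d = math.sqrt(mu**2 + 2 * p * sigma**2)
    return (math.exp((-mu + d) / sigma**2 * x) - math.exp((-mu - d) / sigma**2 * x)) / d

def M(f, x, a, q, r, n=4000):
    h = x / n
    tot = 0.5 * (W(x, q + r) * f(-a) + W(0.0, q + r) * f(x - a))
    tot += sum(W(x - i * h, q + r) * f(i * h - a) for i in range(1, n))
    return f(x - a) + r * h * tot

q, r, a, x = 0.0, 2.0, -1.0, 1.5

I0 = M(lambda y: W(y, q), 0.0, a, q, r) / W(-a, q)   # = I_a^{(q,r)}(0)
print(I0)   # 1.0

# J_a^{(0,r)}(x) = Z_a^{(0,r)}(x) - r * Wbar^{(r)}(x), with Z^{(0)} == 1
h = x / 4000
Wbar = h * (0.5 * W(x, r) + sum(W(i * h, r) for i in range(1, 4000)))
J = M(lambda y: 1.0, x, a, q, r) - r * Wbar
print(J)    # 1.0 (up to rounding)
```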
We shall first obtain the expected NPV of dividends (see the decomposition (5)) killed upon exiting $[a,b]$.
Theorem 1 
(Periodic control of dividends). For $q \ge 0$, $a < 0 < b$ and $x \le b$, we have:
$f(x,a,b) := \mathbb{E}_x\Big[\int_0^{\tau_b^+(r) \wedge \tau_a^-(r)} e^{-qt}\,\mathrm{d}L_r(t)\Big] = r\Big( \overline{\overline{W}}^{(q+r)}(b)\,\frac{I_a^{(q,r)}(x)}{I_a^{(q,r)}(b)} - \overline{\overline{W}}^{(q+r)}(x) \Big).$
By taking $a \downarrow -\infty$ and $b \uparrow \infty$ in Theorem 1, we have the following.
Corollary 1.
(i) For $q \ge 0$, $b > 0$ and $x \le b$, we have:
$\mathbb{E}_x\Big[\int_0^{\tau_b^+(r)} e^{-qt}\,\mathrm{d}L_r(t)\Big] = r\Big( \overline{\overline{W}}^{(q+r)}(b)\,\frac{I^{(q,r)}(x)}{I^{(q,r)}(b)} - \overline{\overline{W}}^{(q+r)}(x) \Big),$
where:
$I^{(q,r)}(x) := \lim_{a \downarrow -\infty} I_a^{(q,r)}(x) = Z^{(q+r)}(x,\Phi(q)) - r\overline{W}^{(q+r)}(x), \qquad q \ge 0,\ x \in \mathbb{R}.$
(ii) For $q \ge 0$, $a < 0$ and $x \in \mathbb{R}$, we have:
$\mathbb{E}_x\Big[\int_0^{\tau_a^-(r)} e^{-qt}\,\mathrm{d}L_r(t)\Big] = r\Big( I_a^{(q,r)}(x)\,\frac{W^{(q)}(-a)}{\Phi(q+r)\,Z^{(q)\prime}(-a,\Phi(q+r))} - \overline{\overline{W}}^{(q+r)}(x) \Big),$
where, by (7),
$Z^{(q)\prime}(x,\Phi(q+r)) = \Phi(q+r)\,Z^{(q)}(x,\Phi(q+r)) - rW^{(q)}(x), \qquad x \in \mathbb{R}.$
(iii) Suppose $q > 0$, or $q = 0$ and $\psi'(0+) < 0$. Then, for $x \in \mathbb{R}$,
$\mathbb{E}_x\Big[\int_0^\infty e^{-qt}\,\mathrm{d}L_r(t)\Big] = \frac{\Phi(q+r) - \Phi(q)}{\Phi(q+r)\,\Phi(q)}\,I^{(q,r)}(x) - r\overline{\overline{W}}^{(q+r)}(x).$
Otherwise, it is infinite for $x \in \mathbb{R}$.
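The identity (27) used above follows from (7) with $\theta = \Phi(q+r)$, since then $q - \psi(\theta) = -r$. A quick finite-difference check (Brownian-motion example with assumed parameters, not from the paper):

```python
import math

# Finite-difference check (assumed parameters) of the identity
#   Z^{(q)'}(x, Phi(q+r)) = Phi(q+r) Z^{(q)}(x, Phi(q+r)) - r W^{(q)}(x)
# for X a Brownian motion with drift mu and volatility sigma.
mu, sigma, q, r = 0.5, 1.0, 0.1, 2.0

def psi(t):
    return mu * t + 0.5 * sigma**2 * t**2

def Phi(p):
    return (-mu + math.sqrt(mu**2 + 2 * p * sigma**2)) / sigma**2

def W(x, p):
    if x < 0:
        return 0.0
    d = math.sqrt(mu**2 + 2 * p * sigma**2)
    return (math.exp((-mu + d) / sigma**2 * x) - math.exp((-mu - d) / sigma**2 * x)) / d

def Z(x, theta, n=20000):   # Z^{(q)}(x, theta) by trapezoidal quadrature
    if x <= 0:
        return math.exp(theta * x)
    h = x / n
    quad = h * (0.5 * math.exp(-theta * x) * W(x, q)    # W(0) = 0 for BM
                + sum(math.exp(-theta * i * h) * W(i * h, q) for i in range(1, n)))
    return math.exp(theta * x) * (1.0 + (q - psi(theta)) * quad)

x, th, eps = 1.3, Phi(q + r), 1e-5
lhs = (Z(x + eps, th) - Z(x - eps, th)) / (2 * eps)   # numerical derivative
rhs = th * Z(x, th) - r * W(x, q)
print(abs(lhs - rhs))   # small
```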
Remark 2.
Recently, in Noba et al. (2018), Corollary 1 (ii) was used to show the optimality of a periodic barrier strategy in de Finetti’s dividend problem under the assumption that the Lévy measure has a completely monotone density. Thanks to the semi-analytic expression in terms of the scale function, the selection of a candidate optimal barrier, as well as the verification of optimality are conducted efficiently, without focusing on a particular class of Lévy processes.
We shall now study the two-sided exit identities. The main results are given in Theorems 2 and 3, and their corollaries are obtained by taking limits. We first obtain the Laplace transform of the upcrossing time $\tau_b^+(r)$ on the event $\{\tau_a^-(r) > \tau_b^+(r)\}$.
Theorem 2 (Upcrossing time).
For $q \ge 0$, $a < 0 < b$, and $x \le b$, we have:
$g(x,a,b) := \mathbb{E}_x\big[e^{-q\tau_b^+(r)}; \tau_a^-(r) > \tau_b^+(r)\big] = \frac{I_a^{(q,r)}(x)}{I_a^{(q,r)}(b)}.$
The following remark describes the behavior of the Laplace transform of the upcrossing time $\tau_b^+(r)$, obtained in Theorem 2, as the observation rate $r$ increases.
Remark 3.
Fix $b > 0$ and $x < b$. By Lemma 3 below, we see that
$\frac{I_a^{(q,r)}(x)/W^{(q+r)}(x)}{I_a^{(q,r)}(b)/W^{(q+r)}(b)} \xrightarrow[r \uparrow \infty]{} 1$. Because $I_a^{(q,r)}(b) \xrightarrow[r \uparrow \infty]{} \infty$ and by (9),
$\lim_{r \to \infty} \frac{I_a^{(q,r)}(x)}{I_a^{(q,r)}(b)} = \lim_{r \to \infty} \frac{W^{(q+r)}(x)}{W^{(q+r)}(b)} = 0.$
Hence, we see that $g(x,a,b)$ vanishes in the limit as $r \to \infty$.
By taking $a \downarrow -\infty$ in Theorem 2, we have the following.
Corollary 2.
(i) For $q \ge 0$, $b > 0$ and $x \le b$, we have $\mathbb{E}_x\big(e^{-q\tau_b^+(r)}\big) = I^{(q,r)}(x)/I^{(q,r)}(b)$, where $I^{(q,r)}$ is given as in (28). (ii) In particular, when $\psi'(0+) \ge 0$, then $\tau_b^+(r) < \infty$, $\mathbb{P}_x$-a.s. for any $x \in \mathbb{R}$.
For $\theta \ge 0$, $q \ge 0$, $a < 0$ and $x \in \mathbb{R}$, let:
$\hat{J}_a^{(q,r)}(x,\theta) := Z_a^{(q,r)}(x,\theta) - \frac{Z^{(q)}(-a,\theta)}{W^{(q)}(-a)} W_a^{(q,r)}(x) = \mathcal{M}_a^{(q,r)}\Big( Z^{(q)}(\cdot,\theta) - \frac{Z^{(q)}(-a,\theta)}{W^{(q)}(-a)} W^{(q)} \Big)(x),$
which satisfies:
$\hat{J}_a^{(q,r)}(x,\theta) = J_a^{(q,r)}(x,\theta) - Z^{(q)}(-a,\theta)\,I_a^{(q,r)}(x),$
and, by (26),
$\hat{J}_a^{(q,r)}(0,\theta) = 0.$
Using these, we express the Laplace transform of the downcrossing time τ a ( r ) on the event { τ a ( r ) < τ b + ( r ) } .
Theorem 3 (Downcrossing time and overshoot).
For $q \ge 0$, $a < 0 < b$, $\theta \ge 0$, and $x \le b$, we have:
$h(x,a,b,\theta) := \mathbb{E}_x\big[e^{-q\tau_a^-(r) - \theta[a - X_r(\tau_a^-(r))]}; \tau_a^-(r) < \tau_b^+(r)\big] = \hat{J}_a^{(q,r)}(x,\theta) - \frac{I_a^{(q,r)}(x)}{I_a^{(q,r)}(b)}\,\hat{J}_a^{(q,r)}(b,\theta) = J_a^{(q,r)}(x,\theta) - \frac{I_a^{(q,r)}(x)}{I_a^{(q,r)}(b)}\,J_a^{(q,r)}(b,\theta).$
By taking $b \uparrow \infty$ in Theorem 3, we obtain the following.
Corollary 3.
(i) For $q \ge 0$, $a < 0$, $\theta \ge 0$ and $x \in \mathbb{R}$,
$\mathbb{E}_x\big[e^{-q\tau_a^-(r) - \theta[a - X_r(\tau_a^-(r))]}\big] = J_a^{(q,r)}(x,\theta) - I_a^{(q,r)}(x)\Big( \tilde{Z}^{(q,r)}(-a,\theta) - \frac{rZ^{(q)}(-a,\theta)}{\Phi(q+r)} \Big)\frac{\Phi(q+r)\,W^{(q)}(-a)}{Z^{(q)\prime}(-a,\Phi(q+r))},$
where in particular:
$\mathbb{E}_x\big[e^{-q\tau_a^-(r)}\big] = J_a^{(q,r)}(x) - q\,I_a^{(q,r)}(x)\,\frac{Z^{(q)}(-a,\Phi(q+r))\,W^{(q)}(-a)}{Z^{(q)\prime}(-a,\Phi(q+r))}.$
(ii) For $a < 0$ and $x \in \mathbb{R}$, $\tau_a^-(r) < \infty$, $\mathbb{P}_x$-a.s.
By taking $\theta \uparrow \infty$ in Theorem 3 and Corollary 3, we have the following identities related to the event that the process goes continuously below a level.
Corollary 4 (Creeping).
(i) For $q \ge 0$, $a < 0 < b$ and $x \le b$, we have:
$w(x,a,b) := \mathbb{E}_x\big[e^{-q\tau_a^-(r)}; X_r(\tau_a^-(r)) = a, \tau_a^-(r) < \tau_b^+(r)\big] = C_a^{(q,r)}(x) - \frac{I_a^{(q,r)}(x)}{I_a^{(q,r)}(b)}\,C_a^{(q,r)}(b),$
where (recall that $W^{(q)}$ is differentiable when $\sigma > 0$, as in Remark 1 (1)):
$C_a^{(q,r)}(y) := \frac{\sigma^2}{2}\Big( \mathcal{M}_a^{(q,r)} W^{(q)\prime}(y) - r\overline{W}^{(q+r)}(y)\,W^{(q)\prime}(-a) \Big), \qquad y \in \mathbb{R}.$
(ii) For $q \ge 0$, $a < 0$ and $x \in \mathbb{R}$, we have:
$\mathbb{E}_x\big[e^{-q\tau_a^-(r)}; X_r(\tau_a^-(r)) = a\big] = C_a^{(q,r)}(x) - I_a^{(q,r)}(x)\,W^{(q)}(-a)\,\frac{\sigma^2}{2}\Big( \Phi(q+r) - \frac{rW^{(q)\prime}(-a)}{Z^{(q)\prime}(-a,\Phi(q+r))} \Big).$
In Theorem 3, by taking the derivative with respect to $\theta$ and letting $\theta \downarrow 0$, we obtain the following. This will later be used to compute the identities for capital injection in Proposition 5.
Corollary 5.
Suppose $\psi'(0+) > -\infty$. For $q \ge 0$, $a < 0 < b$ and $x \le b$, we have:
$j(x,a,b) := \mathbb{E}_x\big[e^{-q\tau_a^-(r)}\,[a - X_r(\tau_a^-(r))]; \tau_a^-(r) < \tau_b^+(r)\big] = \frac{I_a^{(q,r)}(x)}{I_a^{(q,r)}(b)}\,K_a^{(q,r)}(b) - K_a^{(q,r)}(x)$
with
$K_a^{(q,r)}(y) := l_a^{(q,r)}(y) - r\,l^{(q)}(-a)\,\overline{W}^{(q+r)}(y), \qquad y \in \mathbb{R},$
where $l_a^{(q,r)}(y) := \mathcal{M}_a^{(q,r)} l^{(q)}(y)$, $y \in \mathbb{R}$.
By taking $b \uparrow \infty$ in Corollary 5, we have the following.
Corollary 6.
Suppose $\psi'(0+) > -\infty$. For $q \ge 0$, $a < 0$ and $x \in \mathbb{R}$, we have:
$\mathbb{E}_x\big[e^{-q\tau_a^-(r)}\,[a - X_r(\tau_a^-(r))]\big] = I_a^{(q,r)}(x)\,\frac{W^{(q)}(-a)}{Z^{(q)\prime}(-a,\Phi(q+r))}\Big( \tilde{Z}^{(q,r)}(-a) - \psi'(0+)\,Z^{(q)}(-a,\Phi(q+r)) \Big) - K_a^{(q,r)}(x).$
The following remark states that as the rate r of the Poisson process associated with the Parisian reflection goes to zero, we recover classical fluctuation identities.
Remark 4.
Note that, for $q \ge 0$, $a < 0$ and $x \in \mathbb{R}$,
$\lim_{r \downarrow 0} I_a^{(q,r)}(x) = \frac{W^{(q)}(x-a)}{W^{(q)}(-a)} \quad \text{and} \quad \lim_{r \downarrow 0} J_a^{(q,r)}(x,\theta) = Z^{(q)}(x-a,\theta).$
Hence, as $r \downarrow 0$, we have the following.
1. 
By Theorem 1, f ( x , a , b ) vanishes in the limit.
2. 
By Theorems 2 and 3, g ( x , a , b ) and h ( x , a , b , θ ) converge to the right-hand sides of (10).
3. 
By Corollary 4 (i), w ( x , a , b ) converges to the right-hand side of (12).
The convergence for the limiting cases $a = -\infty$ and/or $b = \infty$ holds in the same way.
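The convergence of $I_a^{(q,r)}$ as $r \downarrow 0$ can also be observed numerically. In the sketch below (a Brownian-motion example with assumed parameters, not from the paper), the error against the limit $W^{(q)}(x-a)/W^{(q)}(-a)$ scales linearly in $r$:

```python
import math

# Numerical illustration (assumed parameters) of
#   lim_{r -> 0} I_a^{(q,r)}(x) = W^{(q)}(x-a) / W^{(q)}(-a)
# for X a Brownian motion with drift; the error scales linearly in r.
mu, sigma, q = 0.5, 1.0, 0.1

def W(x, p):
    if x < 0:
        return 0.0
    d = math.sqrt(mu**2 + 2 * p * sigma**2)
    return (math.exp((-mu + d) / sigma**2 * x) - math.exp((-mu - d) / sigma**2 * x)) / d

def I(x, a, r, n=4000):
    # I_a^{(q,r)}(x) = W_a^{(q,r)}(x)/W^{(q)}(-a) - r * Wbar^{(q+r)}(x)
    h = x / n
    conv = h * (0.5 * (W(x, q + r) * W(-a, q) + W(0.0, q + r) * W(x - a, q))
                + sum(W(x - i * h, q + r) * W(i * h - a, q) for i in range(1, n)))
    Wbar = h * (0.5 * W(x, q + r) + sum(W(i * h, q + r) for i in range(1, n)))
    return (W(x - a, q) + r * conv) / W(-a, q) - r * Wbar

x, a = 1.0, -1.0
limit = W(x - a, q) / W(-a, q)
print(abs(I(x, a, 1e-2) - limit))   # shrinks roughly linearly with r
print(abs(I(x, a, 1e-4) - limit))
```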

4. Main Results for the Cases with Additional Classical Reflections

In this section, we shall extend the results in Section 3 and obtain similar identities for the processes $\tilde{X}_r^b$, $Y_r^a$ and $\tilde{Y}_r^{a,b}$ as defined in Section 2.2, Section 2.3 and Section 2.4, respectively. Again, the proofs for the corollaries are deferred to Appendix B.

4.1. Results for $\tilde{X}_r^b$

We shall first study the process $\tilde{X}_r^b$ as constructed in Section 2.2. Let:
$\tilde{\tau}_{a,b}^-(r) := \inf\{t > 0 : \tilde{X}_r^b(t) < a\}, \qquad a < 0 < b,$
and let $(I_a^{(q,r)})'(x+)$ be the right-hand derivative of (25) with respect to $x$, given by:
$(I_a^{(q,r)})'(x+) := \frac{(W_a^{(q,r)})'(x+)}{W^{(q)}(-a)} - rW^{(q+r)}(x), \qquad q \ge 0,\ a < 0,\ x \in \mathbb{R}.$
Recall the classical reflected process $\overline{Y}^b$ and $\tilde{\tau}_{0,b}^-$ as in (13). We shall first compute the following.
Lemma 2.
For $q \ge 0$ and $a < 0 < b$,
$\mathbb{E}_b\big[e^{-q\mathbf{e}_r}; \mathbf{e}_r < \tilde{\tau}_{0,b}^-\big] + \mathbb{E}_b\Big[e^{-(q+r)\tilde{\tau}_{0,b}^-}\,\frac{W^{(q)}(\overline{Y}^b(\tilde{\tau}_{0,b}^-)-a)}{W^{(q)}(-a)}\Big] = I_a^{(q,r)}(b) - W^{(q+r)}(b)\,\frac{(I_a^{(q,r)})'(b+)}{W^{(q+r)\prime}(b+)}.$
Proof. 
We first note that, by (14),
$\mathbb{E}_b\big[e^{-q\mathbf{e}_r}; \mathbf{e}_r < \tilde{\tau}_{0,b}^-\big] = \frac{r}{r+q}\,\mathbb{E}_b\big[1 - e^{-(q+r)\tilde{\tau}_{0,b}^-}\big] = r\Big( \frac{\big(W^{(q+r)}(b)\big)^2}{W^{(q+r)\prime}(b+)} - \overline{W}^{(q+r)}(b) \Big).$
Summing this and (22) (divided by $W^{(q)}(-a)$), the result follows. ☐
In order to obtain the results for $\tilde{X}_r^b$, we shall use the following observation and the strong Markov property.
Remark 5.
(i) For $0 \le t < \tilde{\tau}_{0,b} \wedge \mathbf{e}_r$, $\tilde{X}_r^b(t) = \bar{Y}^b(t)$ and $\tilde{L}_{r,P}^b(t) = 0$. (ii) For $0 \le t \le \tau_0^+$, $\tilde{X}_r^b(t) = X(t)$ and $\tilde{L}_{r,P}^b(t) = \tilde{L}_{r,S}^b(t) = 0$. (iii) For $0 \le t \le \tau_b^+(r)$, $\tilde{X}_r^b(t) = X_r(t)$.
We shall first compute the expected NPV of the periodic part of dividends using Lemma 2 and Remark 5. It attains a concise expression in terms of the function I a ( q , r ) and its derivative.
Proposition 1 (Periodic part of dividends).
For $q \ge 0$, $a < 0 < b$ and $x \le b$, we have:
$$\tilde{f}_P(x,a,b) := \mathbb{E}_x\!\left[\int_0^{\tilde{\tau}_{a,b}^-(r)} e^{-qt}\, d\tilde{L}_{r,P}^b(t)\right] = r\left(\bar{W}^{(q+r)}(b)\, \frac{I_a^{(q,r)}(x)}{(I_a^{(q,r)})'(b+)} - \bar{\bar{W}}^{(q+r)}(x)\right).$$
Proof. 
By Remark 5 (i) and the strong Markov property, we can write:
$$\tilde{f}_P(b,a,b) = \mathbb{E}_b\!\left[e^{-q\tilde{\tau}_{0,b}}\, \tilde{f}_P(\bar{Y}^b(\tilde{\tau}_{0,b}),a,b);\, \tilde{\tau}_{0,b} < \mathbf{e}_r\right] + \mathbb{E}_b\!\left[e^{-q\mathbf{e}_r}\, [\bar{Y}^b(\mathbf{e}_r) + \tilde{f}_P(0,a,b)];\, \mathbf{e}_r < \tilde{\tau}_{0,b}\right].$$
For $x \le 0$, by Remark 5 (ii) and the strong Markov property, $\tilde{f}_P(x,a,b) = \mathbb{E}_x[e^{-q\tau_0^+};\, \tau_0^+ < \tau_a^-]\, \tilde{f}_P(0,a,b)$. This together with (10) gives:
$$\mathbb{E}_b\!\left[e^{-q\tilde{\tau}_{0,b}}\, \tilde{f}_P(\bar{Y}^b(\tilde{\tau}_{0,b}),a,b);\, \tilde{\tau}_{0,b} < \mathbf{e}_r\right] = \frac{\tilde{f}_P(0,a,b)}{W^{(q)}(-a)}\, \mathbb{E}_b\!\left[e^{-q\tilde{\tau}_{0,b}}\, W^{(q)}(\bar{Y}^b(\tilde{\tau}_{0,b}) - a);\, \tilde{\tau}_{0,b} < \mathbf{e}_r\right].$$
On the other hand, by the resolvent given in Theorem 1 (ii) of Pistorius (2004),
$$\begin{aligned} \mathbb{E}_b\!\left[e^{-q\mathbf{e}_r}\, \bar{Y}^b(\mathbf{e}_r);\, \mathbf{e}_r < \tilde{\tau}_{0,b}\right] &= r\, \mathbb{E}_b\!\left[\int_0^{\tilde{\tau}_{0,b}} e^{-(q+r)s}\, \bar{Y}^b(s)\, ds\right] \\ &= r \int_0^b (b-y) \left(\frac{W^{(q+r)}(b)\, W^{(q+r)\prime}(y)}{W^{(q+r)\prime}(b+)} - W^{(q+r)}(y)\right) dy + b r\, \frac{W^{(q+r)}(b)\, W^{(q+r)}(0)}{W^{(q+r)\prime}(b+)} \\ &= r \left(\frac{W^{(q+r)}(b)}{W^{(q+r)\prime}(b+)}\, \bar{W}^{(q+r)}(b) - \bar{\bar{W}}^{(q+r)}(b)\right). \end{aligned}$$
Substituting (34) and (35) in (33) and applying Lemma 2,
$$\tilde{f}_P(b,a,b) = \left(I_a^{(q,r)}(b) - \frac{W^{(q+r)}(b)}{W^{(q+r)\prime}(b+)}\, (I_a^{(q,r)})'(b+)\right) \tilde{f}_P(0,a,b) + r\left(\frac{W^{(q+r)}(b)}{W^{(q+r)\prime}(b+)}\, \bar{W}^{(q+r)}(b) - \bar{\bar{W}}^{(q+r)}(b)\right).$$
Now, by Remark 5 (iii), the strong Markov property and Theorems 1 and 2, for all $x \le b$,
$$\begin{aligned} \tilde{f}_P(x,a,b) &= f(x,a,b) + g(x,a,b)\, \tilde{f}_P(b,a,b) = -r \bar{\bar{W}}^{(q+r)}(x) + \frac{I_a^{(q,r)}(x)}{I_a^{(q,r)}(b)} \left(r \bar{\bar{W}}^{(q+r)}(b) + \tilde{f}_P(b,a,b)\right) \\ &= -r \bar{\bar{W}}^{(q+r)}(x) + \frac{I_a^{(q,r)}(x)}{I_a^{(q,r)}(b)} \left[\left(I_a^{(q,r)}(b) - \frac{W^{(q+r)}(b)}{W^{(q+r)\prime}(b+)}\, (I_a^{(q,r)})'(b+)\right) \tilde{f}_P(0,a,b) + r\, \frac{W^{(q+r)}(b)}{W^{(q+r)\prime}(b+)}\, \bar{W}^{(q+r)}(b)\right]. \end{aligned}$$
Setting $x = 0$ and solving for $\tilde{f}_P(0,a,b)$ (using (26)), we have $\tilde{f}_P(0,a,b) = r \bar{W}^{(q+r)}(b) / (I_a^{(q,r)})'(b+)$. Substituting this back in (36), the proof is complete. ☐
By taking $a \to -\infty$ in Proposition 1, we have the following.
Corollary 7.
(i) For $q > 0$, or $q = 0$ with $\psi'(0+) < 0$, we have, for $b > 0$ and $x \le b$,
$$\mathbb{E}_x\!\left[\int_0^\infty e^{-qt}\, d\tilde{L}_{r,P}^b(t)\right] = r\left(\bar{W}^{(q+r)}(b)\, \frac{I^{(q,r)}(x)}{(I^{(q,r)})'(b)} - \bar{\bar{W}}^{(q+r)}(x)\right),$$
where $(I^{(q,r)})'$ is the derivative of $I^{(q,r)}$ of (28), given by:
$$(I^{(q,r)})'(x) = \frac{\partial}{\partial x} Z^{(q+r)}(x,\Phi(q)) - r W^{(q+r)}(x) = \Phi(q)\, Z^{(q+r)}(x,\Phi(q)), \qquad q \ge 0,\ x \in \mathbb{R}.$$
(ii) If $q = 0$ with $\psi'(0+) \ge 0$, the expectation is infinite.
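The derivative identity in Corollary 7 can be sanity-checked numerically. The sketch below again uses Brownian motion with drift and assumes the representation $I^{(q,r)}(x) = Z^{(q+r)}(x,\Phi(q)) - r\bar{W}^{(q+r)}(x)$ (inferred from (28) and the limit in Remark 4; an assumption of this sketch, not a quotation). Under that assumption, differentiating with $\frac{\partial}{\partial x}Z^{(p)}(x,\theta) = \theta Z^{(p)}(x,\theta) + (p - \psi(\theta))W^{(p)}(x)$ and $\psi(\Phi(q)) = q$ gives $(I^{(q,r)})'(x) = \Phi(q)\,Z^{(q+r)}(x,\Phi(q))$, which the code confirms by a central finite difference:

```python
import math

MU = 1.0  # X_t = MU*t + sqrt(2)*B_t, so psi(th) = th^2 + MU*th

def Phi(p):
    # right inverse of psi: the positive root of th^2 + MU*th - p = 0
    return (-MU + math.sqrt(MU * MU + 4.0 * p)) / 2.0

def W(p, x):
    if x < 0.0:
        return 0.0
    ph = Phi(p)
    ng = -MU - ph  # negative root (the roots sum to -MU)
    return (math.exp(ph * x) - math.exp(ng * x)) / (ph - ng)

def trap(f, lo, hi, n=4000):
    h = (hi - lo) / n
    return h * (0.5 * f(lo) + sum(f(lo + i * h) for i in range(1, n)) + 0.5 * f(hi))

def Z(p, x, th):
    # Z^{(p)}(x,th) = e^{th x}(1 + (p - psi(th)) \int_0^x e^{-th y} W^{(p)}(y) dy)
    psi_th = th * th + MU * th
    integ = trap(lambda y: math.exp(-th * y) * W(p, y), 0.0, x) if x > 0 else 0.0
    return math.exp(th * x) * (1.0 + (p - psi_th) * integ)

def I(q, r, x):
    # assumed representation: I^{(q,r)}(x) = Z^{(q+r)}(x, Phi(q)) - r \bar{W}^{(q+r)}(x)
    wbar = trap(lambda y: W(q + r, y), 0.0, x) if x > 0 else 0.0
    return Z(q + r, x, Phi(q)) - r * wbar

q, r, x, h = 0.05, 0.4, 1.2, 1e-4
num_deriv = (I(q, r, x + h) - I(q, r, x - h)) / (2.0 * h)  # central difference
exact = Phi(q) * Z(q + r, x, Phi(q))
print(num_deriv, exact)
```

The middle term $rW^{(q+r)}(x)$ cancels exactly against the contribution of $\frac{\partial}{\partial x}Z^{(q+r)}(x,\Phi(q))$, which is why the final expression is so compact.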
Now, consider the singular part of dividends. We see that the related identities can again be written in terms of I a ( q , r ) and its derivative.
Proposition 2 (Singular part of dividends).
For $q \ge 0$, $a < 0 < b$ and $x \le b$, we have:
$$\tilde{f}_S(x,a,b) := \mathbb{E}_x\!\left[\int_{[0,\tilde{\tau}_{a,b}^-(r)]} e^{-qt}\, d\tilde{L}_{r,S}^b(t)\right] = \frac{I_a^{(q,r)}(x)}{(I_a^{(q,r)})'(b+)}.$$
Proof. 
By Remark 5 (i) and the strong Markov property,
$$\tilde{f}_S(b,a,b) = \mathbb{E}_b\!\left[\int_{[0,\, \tilde{\tau}_{0,b} \wedge \mathbf{e}_r]} e^{-qt}\, dL^b(t)\right] + \mathbb{E}_b\!\left[e^{-q\mathbf{e}_r};\, \mathbf{e}_r < \tilde{\tau}_{0,b}\right] \tilde{f}_S(0,a,b) + \mathbb{E}_b\!\left[e^{-q\tilde{\tau}_{0,b}}\, \tilde{f}_S(\bar{Y}^b(\tilde{\tau}_{0,b}),a,b);\, \tilde{\tau}_{0,b} < \mathbf{e}_r\right].$$
By (15) and a computation similar to (34) (thanks to Remark 5 (ii)),
$$\tilde{f}_S(b,a,b) = \frac{W^{(q+r)}(b)}{W^{(q+r)\prime}(b+)} + \tilde{f}_S(0,a,b) \left(I_a^{(q,r)}(b) - \frac{W^{(q+r)}(b)}{W^{(q+r)\prime}(b+)}\, (I_a^{(q,r)})'(b+)\right).$$
For $x \le b$, because Remark 5 (iii) and the strong Markov property give $\tilde{f}_S(x,a,b) = g(x,a,b)\, \tilde{f}_S(b,a,b)$, Theorem 2 and (37) give:
$$\tilde{f}_S(x,a,b) = \frac{I_a^{(q,r)}(x)}{I_a^{(q,r)}(b)} \left[\frac{W^{(q+r)}(b)}{W^{(q+r)\prime}(b+)} + \tilde{f}_S(0,a,b) \left(I_a^{(q,r)}(b) - \frac{W^{(q+r)}(b)}{W^{(q+r)\prime}(b+)}\, (I_a^{(q,r)})'(b+)\right)\right].$$
Setting $x = 0$ and solving for $\tilde{f}_S(0,a,b)$ (using (26)), we have $\tilde{f}_S(0,a,b) = [(I_a^{(q,r)})'(b+)]^{-1}$. Substituting this in (38), we have the result. ☐
By taking $a \to -\infty$ in Proposition 2, we have the following.
Corollary 8.
Fix $b > 0$ and $x \le b$. (i) For $q > 0$, or $q = 0$ with $\psi'(0+) < 0$, we have $\mathbb{E}_x\big[\int_{[0,\infty)} e^{-qt}\, d\tilde{L}_{r,S}^b(t)\big] = I^{(q,r)}(x) / (I^{(q,r)})'(b+)$. (ii) If $q = 0$ with $\psi'(0+) \ge 0$, the expectation is infinite.
Finally, we obtain the (joint) identities related to τ ˜ a , b ( r ) and the position of the process at this stopping time. We first compute their Laplace transform.
Proposition 3 (Downcrossing time and overshoot).
Fix $a < 0 < b$ and $x \le b$. (i) For $q \ge 0$ and $\theta \ge 0$,
$$\tilde{h}(x,a,b,\theta) := \mathbb{E}_x\!\left[e^{-q\tilde{\tau}_{a,b}^-(r) - \theta[a - \tilde{X}_r^b(\tilde{\tau}_{a,b}^-(r))]}\right] = J_a^{(q,r)}(x,\theta) - (J_a^{(q,r)})'(b,\theta)\, \frac{I_a^{(q,r)}(x)}{(I_a^{(q,r)})'(b+)}.$$
(ii) We have $\tilde{\tau}_{a,b}^-(r) < \infty$, $\mathbb{P}_x$-a.s.
Proof. 
(i) By Remark 5 (i) and the strong Markov property, we can write:
$$\tilde{h}(b,a,b,\theta) = \mathbb{E}_b\!\left[e^{-q\tilde{\tau}_{0,b}}\, \tilde{h}(\bar{Y}^b(\tilde{\tau}_{0,b}),a,b,\theta);\, \tilde{\tau}_{0,b} < \mathbf{e}_r\right] + \mathbb{E}_b\!\left[e^{-q\mathbf{e}_r};\, \mathbf{e}_r < \tilde{\tau}_{0,b}\right] \tilde{h}(0,a,b,\theta).$$
For $x \le 0$, by Remark 5 (ii), the strong Markov property and (10),
$$\begin{aligned} \tilde{h}(x,a,b,\theta) &= \mathbb{E}_x\!\left[e^{-q\tau_a^- - \theta[a - X(\tau_a^-)]};\, \tau_0^+ > \tau_a^-\right] + \mathbb{E}_x\!\left[e^{-q\tau_0^+};\, \tau_0^+ < \tau_a^-\right] \tilde{h}(0,a,b,\theta) \\ &= Z^{(q)}(x-a,\theta) - Z^{(q)}(-a,\theta)\, \frac{W^{(q)}(x-a)}{W^{(q)}(-a)} + \tilde{h}(0,a,b,\theta)\, \frac{W^{(q)}(x-a)}{W^{(q)}(-a)}, \end{aligned}$$
and hence, together with (22) and Lemmas 1 and 2,
$$\begin{aligned} \tilde{h}(b,a,b,\theta) &= \mathbb{E}_b\!\left[e^{-(q+r)\tilde{\tau}_{0,b}} \left(Z^{(q)}(\bar{Y}^b(\tilde{\tau}_{0,b})-a,\theta) - Z^{(q)}(-a,\theta)\, \frac{W^{(q)}(\bar{Y}^b(\tilde{\tau}_{0,b})-a)}{W^{(q)}(-a)}\right)\right] \\ &\quad + \tilde{h}(0,a,b,\theta) \left(\mathbb{E}_b\!\left[e^{-q\mathbf{e}_r};\, \mathbf{e}_r < \tilde{\tau}_{0,b}\right] + \frac{1}{W^{(q)}(-a)}\, \mathbb{E}_b\!\left[e^{-(q+r)\tilde{\tau}_{0,b}}\, W^{(q)}(\bar{Y}^b(\tilde{\tau}_{0,b})-a)\right]\right) \\ &= Z_a^{(q,r)}(b,\theta) - \frac{W^{(q+r)}(b)}{W^{(q+r)\prime}(b+)}\, (Z_a^{(q,r)})'(b,\theta) - \frac{Z^{(q)}(-a,\theta)}{W^{(q)}(-a)} \left(W_a^{(q,r)}(b) - \frac{W^{(q+r)}(b)}{W^{(q+r)\prime}(b+)}\, (W_a^{(q,r)})'(b)\right) \\ &\quad + \tilde{h}(0,a,b,\theta) \left(I_a^{(q,r)}(b) - \frac{W^{(q+r)}(b)}{W^{(q+r)\prime}(b+)}\, (I_a^{(q,r)})'(b+)\right) \\ &= \hat{J}_a^{(q,r)}(b,\theta) - \frac{W^{(q+r)}(b)}{W^{(q+r)\prime}(b+)}\, (\hat{J}_a^{(q,r)})'(b,\theta) + \tilde{h}(0,a,b,\theta) \left(I_a^{(q,r)}(b) - \frac{W^{(q+r)}(b)}{W^{(q+r)\prime}(b+)}\, (I_a^{(q,r)})'(b+)\right). \end{aligned}$$
On the other hand, by Remark 5 (iii), the strong Markov property and Theorems 2 and 3, we have that, for all $x \le b$,
$$\begin{aligned} \tilde{h}(x,a,b,\theta) &= h(x,a,b,\theta) + g(x,a,b)\, \tilde{h}(b,a,b,\theta) \\ &= \hat{J}_a^{(q,r)}(x,\theta) - \frac{I_a^{(q,r)}(x)}{I_a^{(q,r)}(b)}\, \hat{J}_a^{(q,r)}(b,\theta) + \frac{I_a^{(q,r)}(x)}{I_a^{(q,r)}(b)} \Bigg[\hat{J}_a^{(q,r)}(b,\theta) - \frac{W^{(q+r)}(b)}{W^{(q+r)\prime}(b+)}\, (\hat{J}_a^{(q,r)})'(b,\theta) \\ &\qquad + \tilde{h}(0,a,b,\theta) \left(I_a^{(q,r)}(b) - \frac{W^{(q+r)}(b)}{W^{(q+r)\prime}(b+)}\, (I_a^{(q,r)})'(b+)\right)\Bigg] \\ &= \hat{J}_a^{(q,r)}(x,\theta) + \frac{I_a^{(q,r)}(x)}{I_a^{(q,r)}(b)} \Bigg[-\frac{W^{(q+r)}(b)}{W^{(q+r)\prime}(b+)}\, (\hat{J}_a^{(q,r)})'(b,\theta) + \tilde{h}(0,a,b,\theta) \left(I_a^{(q,r)}(b) - \frac{W^{(q+r)}(b)}{W^{(q+r)\prime}(b+)}\, (I_a^{(q,r)})'(b+)\right)\Bigg]. \end{aligned}$$
Setting $x = 0$ and solving for $\tilde{h}(0,a,b,\theta)$ (via (26) and (30)), $\tilde{h}(0,a,b,\theta) = -(\hat{J}_a^{(q,r)})'(b,\theta) / (I_a^{(q,r)})'(b+)$. Substituting this back in (40), we have:
$$\tilde{h}(x,a,b,\theta) = \hat{J}_a^{(q,r)}(x,\theta) - (\hat{J}_a^{(q,r)})'(b,\theta)\, \frac{I_a^{(q,r)}(x)}{(I_a^{(q,r)})'(b+)}.$$
Using (29), this equals the right-hand side of (39).
(ii) In view of (i), this is immediate from (27) upon setting $q = \theta = 0$. ☐
Similar to Corollary 5, we obtain the following by Proposition 3.
Corollary 9.
Suppose $\psi'(0+) > -\infty$. For $q \ge 0$, $a < 0 < b$ and $x \le b$, we have:
$$\tilde{j}(x,a,b) := \mathbb{E}_x\!\left[e^{-q\tilde{\tau}_{a,b}^-(r)}\, [a - \tilde{X}_r^b(\tilde{\tau}_{a,b}^-(r))]\right] = \frac{I_a^{(q,r)}(x)}{(I_a^{(q,r)})'(b+)}\, (K_a^{(q,r)})'(b) - K_a^{(q,r)}(x).$$
Similar to Remark 4, the following result shows how classical fluctuation identities are recovered by taking the rate r of Parisian reflection to zero.
Remark 6.
Recall (32). As $r \downarrow 0$, we have the following.
1. By Proposition 1, $\tilde{f}_P(x,a,b)$ vanishes in the limit.
2. By Proposition 2, $\tilde{f}_S(x,a,b)$ converges to the right-hand side of (15).
3. By Proposition 3, $\tilde{h}(x,a,b,\theta)$ converges to:
$$\mathbb{E}_x\!\left[e^{-q\tilde{\tau}_{a,b} - \theta[a - \bar{Y}^b(\tilde{\tau}_{a,b})]}\right] = Z^{(q)}(x-a,\theta) - Z^{(q)\prime}(b-a,\theta)\, \frac{W^{(q)}(x-a)}{W^{(q)\prime}((b-a)+)},$$
which is given in Theorem 1 of Avram et al. (2004).
4. By Corollary 9, $\tilde{j}(x,a,b)$ converges to:
$$\mathbb{E}_x\!\left[e^{-q\tilde{\tau}_{a,b}}\, [a - \bar{Y}^b(\tilde{\tau}_{a,b})]\right] = \frac{W^{(q)}(x-a)}{W^{(q)\prime}((b-a)+)}\, l^{(q)\prime}(b-a) - l^{(q)}(x-a),$$
which is given in (3.16) of Avram et al. (2007).
The convergence for the limiting case $a = -\infty$ holds in the same way.

4.2. Results for Y r a

We shall now study the process $Y_r^a$ as defined in Section 2.3. We let:
$$\eta_{a,b}^+(r) := \inf\{t > 0 : Y_r^a(t) > b\}, \qquad a < 0 < b.$$
Remark 7.
Recall the classical reflected process $\underline{Y}_a = X + R^a$ and $\eta_{a,0}^+$ as in (16). (i) For $0 \le t \le \eta_{a,0}^+$, we have $Y_r^a(t) = \underline{Y}_a(t)$ and $R_r^a(t) = R^a(t)$. (ii) For $0 \le t < \tau_a^-(r)$, we have $Y_r^a(t) = X_r(t)$.
Using this remark, we obtain the following identity related to Parisian reflection (periodic dividends).
Proposition 4 (Periodic part of dividends).
For $q \ge 0$, $a < 0 < b$ and $x \le b$,
$$\hat{f}(x,a,b) := \mathbb{E}_x\!\left[\int_0^{\eta_{a,b}^+(r)} e^{-qt}\, dL_r^a(t)\right] = r\left(\bar{\bar{W}}^{(q+r)}(b)\, \frac{J_a^{(q,r)}(x)}{J_a^{(q,r)}(b)} - \bar{\bar{W}}^{(q+r)}(x)\right).$$
Proof. 
By an application of Remark 7 (i), (17) and the strong Markov property,
$$\hat{f}(a,a,b) = \mathbb{E}_a\!\left[e^{-q\eta_{a,0}^+}\right] \hat{f}(0,a,b) = \hat{f}(0,a,b) / Z^{(q)}(-a).$$
By this, Remark 7 (ii) and the strong Markov property, together with Theorems 1 and 3, we have, for $x \le b$:
$$\hat{f}(x,a,b) = f(x,a,b) + h(x,a,b,0)\, \hat{f}(a,a,b) = r\left(\bar{\bar{W}}^{(q+r)}(b)\, \frac{I_a^{(q,r)}(x)}{I_a^{(q,r)}(b)} - \bar{\bar{W}}^{(q+r)}(x)\right) + \left(J_a^{(q,r)}(x) - J_a^{(q,r)}(b)\, \frac{I_a^{(q,r)}(x)}{I_a^{(q,r)}(b)}\right) \frac{\hat{f}(0,a,b)}{Z^{(q)}(-a)}.$$
Setting $x = 0$ and solving for $\hat{f}(0,a,b)$ (using (26)), we get $\hat{f}(0,a,b) = r \bar{\bar{W}}^{(q+r)}(b)\, Z^{(q)}(-a) / J_a^{(q,r)}(b)$. Substituting this in (41), we have the claim. ☐
By taking $b \to \infty$ in Proposition 4, we have the following.
Corollary 10.
Fix $a < 0$ and $x \in \mathbb{R}$. (i) For $q > 0$, we have:
$$\mathbb{E}_x\!\left[\int_0^\infty e^{-qt}\, dL_r^a(t)\right] = r\left(\frac{1}{q\,\Phi(q+r)}\, \frac{J_a^{(q,r)}(x)}{Z^{(q)}(-a,\Phi(q+r))} - \bar{\bar{W}}^{(q+r)}(x)\right).$$
(ii) For $q = 0$, the expectation is infinite.
For $q \ge 0$ and $a < 0$, let:
$$H_a^{(q,r)}(y) := l_a^{(q,r)}(y) - \frac{l^{(q)}(-a)}{Z^{(q)}(-a)}\, Z_a^{(q,r)}(y) = K_a^{(q,r)}(y) - \frac{J_a^{(q,r)}(y)}{Z^{(q)}(-a)}\, l^{(q)}(-a), \qquad y \in \mathbb{R}.$$
In particular,
$$H_a^{(q,r)}(0) = 0.$$
For the identities related to classical reflection from below (capital injections), we write them in terms of the functions $H_a^{(q,r)}$ and $J_a^{(q,r)}$.
Proposition 5 (Capital injections).
For $q \ge 0$, $a < 0 < b$ and $x \le b$,
$$\hat{j}(x,a,b) := \mathbb{E}_x\!\left[\int_{[0,\eta_{a,b}^+(r)]} e^{-qt}\, dR_r^a(t)\right] = H_a^{(q,r)}(b)\, \frac{J_a^{(q,r)}(x)}{J_a^{(q,r)}(b)} - H_a^{(q,r)}(x).$$
Proof. 
First, by Remark 7 (i), (17), (18) and an application of the strong Markov property,
$$\hat{j}(a,a,b) = \mathbb{E}_a\!\left[\int_{[0,\eta_{a,0}^+]} e^{-qt}\, dR^a(t)\right] + \mathbb{E}_a\!\left[e^{-q\eta_{a,0}^+}\right] \hat{j}(0,a,b) = \frac{l^{(q)}(-a) + \hat{j}(0,a,b)}{Z^{(q)}(-a)}.$$
This, together with Remark 7 (ii), Corollary 5, Theorem 3 and the strong Markov property, gives, for $x \le b$,
$$\begin{aligned} \hat{j}(x,a,b) &= j(x,a,b) + h(x,a,b,0)\, \hat{j}(a,a,b) \\ &= \frac{I_a^{(q,r)}(x)}{I_a^{(q,r)}(b)}\, K_a^{(q,r)}(b) - K_a^{(q,r)}(x) + \left(J_a^{(q,r)}(x) - \frac{I_a^{(q,r)}(x)}{I_a^{(q,r)}(b)}\, J_a^{(q,r)}(b)\right) \frac{l^{(q)}(-a) + \hat{j}(0,a,b)}{Z^{(q)}(-a)} \\ &= \frac{I_a^{(q,r)}(x)}{I_a^{(q,r)}(b)} \left(K_a^{(q,r)}(b) - \frac{J_a^{(q,r)}(b)}{Z^{(q)}(-a)}\, [l^{(q)}(-a) + \hat{j}(0,a,b)]\right) - K_a^{(q,r)}(x) + \frac{J_a^{(q,r)}(x)}{Z^{(q)}(-a)}\, [l^{(q)}(-a) + \hat{j}(0,a,b)] \\ &= \frac{I_a^{(q,r)}(x)}{I_a^{(q,r)}(b)} \left(H_a^{(q,r)}(b) - \frac{J_a^{(q,r)}(b)}{Z^{(q)}(-a)}\, \hat{j}(0,a,b)\right) - H_a^{(q,r)}(x) + \frac{J_a^{(q,r)}(x)}{Z^{(q)}(-a)}\, \hat{j}(0,a,b). \end{aligned}$$
Setting $x = 0$ and solving for $\hat{j}(0,a,b)$ (using (26) and (42)), $\hat{j}(0,a,b) = H_a^{(q,r)}(b)\, Z^{(q)}(-a) / J_a^{(q,r)}(b)$. Substituting this back in (44), we have the claim. ☐
By taking $b \to \infty$ in Proposition 5, we have the following.
Corollary 11.
For $q > 0$, $a < 0$ and $x \in \mathbb{R}$, we have:
E x [ 0 , ) e q t d R r a ( t ) = r Z ( q ) ( a ) q Φ ( q + r ) Z ( q ) ( a , Φ ( q + r ) ) + 1 Φ ( q + r ) Z a ( q , r ) ( x ) r Z ( q ) ( a ) W ¯ ( q + r ) ( x ) + r Z ¯ ( q ) ( a ) W ¯ ( q + r ) ( x ) Z ¯ a ( q , r ) ( x ) + ψ ( 0 + ) q .
Remark 8.
Recently, in Noba et al. (2017), Corollaries 10 and 11 were used to show the optimality of a mixed periodic-classical barrier strategy in de Finetti’s dividend problem with periodic dividends and classical capital injections. The candidate optimal barrier is chosen so that the slope at the barrier becomes one. The optimality is shown to hold for a general spectrally-negative Lévy process by the observation that the slope of the candidate value function is proportional to the Laplace transform of the stopping time given in Corollary 3.
Finally, we compute the Laplace transform of the upcrossing time