Article

Current Trends in Random Walks on Random Lattices

by
Jewgeni H. Dshalalow
* and
Ryan T. White
Department of Mathematical Sciences, Florida Institute of Technology, Melbourne, FL 32940, USA
*
Author to whom correspondence should be addressed.
Mathematics 2021, 9(10), 1148; https://doi.org/10.3390/math9101148
Submission received: 5 April 2021 / Revised: 5 May 2021 / Accepted: 13 May 2021 / Published: 19 May 2021
(This article belongs to the Special Issue Latest Advances in Random Walks Dating Back to One Hundred Years)

Abstract: In a classical random walk model, a walker moves through a deterministic d-dimensional integer lattice one step at a time, without drifting in any direction. In a more advanced setting, a walker randomly moves over a randomly configured (non-equidistant) lattice, jumping a random number of steps. In some further variants, there is only limited access to the walker's moves. That is, the walker's movements are not available in real time. Instead, the observations are limited to some random epochs, resulting in delayed information about the real-time position of the walker, its escape time, and its location outside a bounded subset of the real space. In this case we target the virtual first passage (or escape) time. Thus, unlike standard random walk problems, rather than crossing the boundary, we deal with the walker's escape location, which can be arbitrarily distant from the boundary. In this paper, we give a short historical background on random walks, discuss various directions in the development of random walk theory, and survey most of our results obtained in the last 25–30 years, including the very recent ones dated 2020–21. Among different applications of such random walks, we discuss stock markets, stochastic networks, games, and queueing.

In a classical random walk model, a particle or walker moves through a deterministic d-dimensional integer lattice. The walk is random, without drifting in any direction. The particle's steps are also associated with time units, as in the setting that leads to Brownian motion. Of interest is the first passage time, that is, the time when the particle escapes from a bounded set.
There have been many variants of the random walk in the literature. The one we introduce is a walker randomly moving over a lattice with a random real-valued (and thus non-equidistant) configuration, formed at random times. The first passage time of such a walker and its location upon escape are our focus.
In a further embellishment, we allow the particle to move through a random lattice not one step at a time as in the general setting, but jumping a random number of steps. In some other variants, we also allow only limited access to the moves of the walker. That is, the walker's movements are not available in real time. Instead, the observations are limited to some random epochs τ_1, τ_2, τ_3, …. Consequently, we deal with delayed information on the real-time position A_t of the particle and, upon its escape at τ_ν (ν is the escape index), with the virtual first passage time and the virtual escape location A_{τ_ν}, which may end up arbitrarily distant from the boundary of an underlying set. Obviously, the virtual first passage time τ_ν is delayed compared to the real first passage time.
On the other hand, in most of our settings, we restrict the walker's moves to positive directions only. Additionally, the set from which the particle is to escape is a d-dimensional rectangle, rather than an arbitrary manifold.
We note that our work on random walk models pertains to two distinct problems. In the first one, we work on the joint distribution of the first passage time t_ν and the first escape location A_{t_ν}, where t_1, t_2, … are the real-time epochs of the walker's jumps, with no relationship between the time t_ν and a deterministic time interval [0, t]; this setting is referred to as time insensitive. In the second problem, the first passage time t_ν (or the virtual first passage time τ_ν) can be placed inside [0, t] or outside the interval and considered along with the real-time position A_t of the particle at time t. The second problem is more complex, and it is called time sensitive.
We give a short historical background on random walks, discuss various directions in the development of random walk theory, and survey most of our results obtained in the last 25–30 years, including the very recent ones dated 2020–21. Among different applications of such random walks, we discuss stock markets, stochastic networks, games, and queueing.

1. Introduction

The term “random walk” was first introduced in [1] by Karl Pearson in 1905. It is generally understood as a recurrent process S_n = X_1 + ⋯ + X_n made of a sequence of iid Z^d-valued r.v.'s X = {X_1, X_2, …}. In its simple form, X_i ~ X and X is uniformly distributed on the set {±e_1, …, ±e_d} of signed unit coordinate vectors, so that if S_n = x ∈ Z^d, the particle or walker moves from state x to a state y within the integer hypercube x + [−1, 1]^d equally likely in any of the 2d directions, i.e., with probability 1/(2d). So here the walker moves randomly within the d-dimensional integer lattice. One of the key objectives is to find the probability distribution of the r.v.'s ν = inf{n : S_n ∈ A^c} and S_ν, where A is a bounded subset of Z^d; that is, the position of the walker when it escapes from set A.
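As a quick illustration of these escape parameters, the following minimal sketch (not from the paper; the dimension and the box A are assumed purely for the example) simulates a simple symmetric walk on Z^2 and records the exit index ν and the exit position S_ν.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2
steps = np.vstack([np.eye(d, dtype=int), -np.eye(d, dtype=int)])  # the 2d unit directions

def first_exit(radius=5):
    """Simple symmetric walk started at the origin; returns (nu, S_nu) for A = {-radius,...,radius}^d."""
    S = np.zeros(d, dtype=int)
    n = 0
    while np.all(np.abs(S) <= radius):       # stay until S leaves the box A
        S += steps[rng.integers(2 * d)]      # each direction with probability 1/(2d)
        n += 1
    return n, S

samples = [first_exit() for _ in range(10_000)]
print("mean exit index:", np.mean([n for n, _ in samples]))
print("a sample exit position S_nu:", samples[0][1])
```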
If we allow X to be R^d-valued and arbitrarily distributed, with the entries X^1, …, X^d of X not independent as assumed above, then such a random walk is largely embellished. Here we can think of a randomly generated grid replacing Z^d, so that if S_n = x ∈ R^d, the walker moves from state x to a state y according to a random increment X_n ~ X (the equivalence class of all r.v.'s a.s. equal to X), and it turns out that the grid along which the particle moves is randomly generated each time the walker lands at some x. Furthermore, we take into account the time that the walker takes to move from x to y in one step, thereby forming a point process T = {t_1, t_2, …} and the associated marked point process S = Σ_{n=0}^∞ X_n ε_{t_n} (ε_a is the unit measure at point a). The escape parameters of such a random walk now require more specifications. If A is a bounded subset of R^d, then ν = inf{n : S_n ∈ A^c} (exit index), t_ν is the first passage time or escape time, and S_ν is the position of the walker on its escape (escape position).
It is a challenge to find the distributions (joint or marginal) of the above random entities in a closed form. The assumption that X is R_+^d-valued is helpful and still practical enough. The random walk terminology is appropriate to describe the physical motion of a walker regardless of where the walker moves, although other descriptive terms like marked point process, marked random measure, recurrent process, or renewal process are also common in the literature. Furthermore, the additive components or jumps X_i of S need not be iid and can form Markov or semi-Markov processes, although these classes of S are outside the scope and interest of this paper, which targets only cases that lead to analytically closed forms.
Besides the terms walker or particle applied to the moving object, some authors also speak of the random walk itself as the object that walks. We note that X can also be integer-valued, while we retain all other assumptions on S. Then if the walker at step n at time t_n lands at some S_n = X_0 + ⋯ + X_n, then at time t_{n+1} the walker moves to state S_{n+1} = S_n + X_{n+1}, where X_{n+1} is a Z_+^d-valued r.v. that generates a path on Z_+^d running in all non-negative directions from S_n.
There is a way to at least partially circumvent the obstacle of mixed jumps (or increments), rather than just non-negative ones, which we intend to discuss a little later in this paper, but for now we stay with the above assumptions. With non-negative increments, the representation S = Σ_{n=0}^∞ X_n ε_{t_n} of an underlying random walk is an atomic random measure, which is often a convenient alternative interpretation.

Related Literature

There are myriads of papers on random walks and applications that would make a very long list. As the result of such wealth and of the efforts by many researchers from different branches of mathematics and other disciplines, there is no unity about similar notions and notations. First of all, there is an ambiguity about what a random walk is as opposed to what is not. This is because of various embellishments to the original notion of a random walk as a recurrent process, that is, a sequence S_n of partial sums of a sequence X_0, X_1, … of iid r.v.'s. Note that if the X_i's are non-negative, then S_n is referred to as a renewal process. If the X_i's are real-valued, S_n is recurrent (cf. Takács [2]).
Now and then we read about constructions like semi-Markov random walks, that is, S_n is a semi-Markov process (cf. Unver et al. [3]). Another embellishment, which we believe is fully legitimate, is when the walker's jumps occur at random points t_n, which makes the analysis of escape parameters more challenging. In addition, the jump lengths X_n can be position dependent, that is, when X_n depends on t_n − t_{n−1} only, n = 1, 2, …, i.e., on the time since the previous jump. Now, because one is very often concerned about the escape of a random walk from a bounded set, the underlying analysis of the escape is referred to as fluctuations of sums of random variables. However, the “sums” are not always a traditional random walk with independent jumps (cf. Andersen [4,5]). Besides, fluctuations are also mentioned in reference to processes with continuous paths like Brownian motion, and there the escape from a set A means crossing its boundary with a location next to A. The latter differs from leaving A and landing at a point distant from A, as takes place under a non-simple random walk. Hence, in deciding which work in the literature to include, we will be guided by common sense and space constraints.
The first mention of a random walk was made by Pearson [1] in 1905, characterizing the distribution of the distance traveled in an N-step random walk in the plane. The walk starts at (0, 0) and involves N steps of unit length, each taken in an equally likely random direction. In response to Pearson, the problem was addressed by Rayleigh [6], also in 1905, who claimed that the random walk problem proposed by Pearson was the same as that of the composition of N isoperiodic vibrations of unit amplitude and of phases distributed at random, studied in his earlier papers.
There are two seminal articles by Andersen [4,5] that belong to the literature on fluctuations, but they deal with sums of r.v.'s that are not independent. Yet it is worth including them in the reference list. Takács [2,7,8] has been a key and prolific contributor to the fluctuations of sums of random variables, some of which are traditional random walks and some are embellished variants; cf. Dshalalow and Syski [9] about Takács' work. Some random walk problems pertain to the exit from and return to a fixed set. Van den Berg [10] obtained estimates for the “average probability” that a simple random walk in Z^d starting at a point x ∈ V exits V and then returns to x. The average is taken over all points x ∈ V. Paper [11] studied the asymptotic behavior of the probability P(ν = n), as n → ∞, where ν is the first index k > 0 at which S_k = Σ_{j=1}^k X_j, a recurrent process, crosses a fixed level y ≥ 0.
Becker and König [12] studied a random walk in Z^d targeting local times, defined as ℓ(n, x) = Σ_{k=0}^n 1{S_k = x}, n ∈ N, giving the number of visits to x ∈ Z^d by step n, and the large-n asymptotics of the functional
L_n(α) = Σ_{x ∈ Z^d} ℓ(n, x)^α,  α ≥ 0.
Csáki, Földes, and Révész [13] studied the maximal local time ℓ(n) = max{ℓ(n, x) : x ∈ Z^d} in a simple symmetric walk in Z^d, i.e., with P(X_1 = e_i) = P(X_1 = −e_i) = (2d)^{−1}. Gluck [14] studied a random walk on a finite group G based on a generating set that is a union of conjugacy classes. Let the non-negative integer-valued random variable T denote the first time the walk arrives at the identity element 1 of G, if the starting point of the walk is uniformly distributed on G. Under suitable hypotheses, the author shows that the distribution function F of T is almost exponential. Other work on random walks on groups is by Fayolle, Iasnogorodski, and Malyshev [15], Gluck [14], Hildebrand [16], and Takács [7].
A continuous time random walk (CTRW) process was introduced in a 1965 paper by the physicists Montroll and Weiss [17]. A CTRW can be defined as follows. Let S = Σ_{n=0}^∞ X_n ε_{t_n} (ε_a is the unit measure at point a) be a marked signed random measure. Suppose X_0, X_1, X_2, … are independent and, for n = 1, 2, …, identically distributed r.v.'s valued in R^d, while T = Σ_{n=0}^∞ ε_{t_n} is the associated support counting measure. Thus, N_t = T([0, t]) is the associated counting process. Then, the CTRW is S_t = S([0, t]). The inter-renewal times t_1 − t_0, t_2 − t_1, … are referred to as waiting times. If S is with position independent marking, then S_t is called decoupled. A coupled CTRW is S_t such that S is with position dependent marking, that is, X_n depends on t_n − t_{n−1}. The marks X_n are called jumps and, in physics, they represent instantaneous jumps of a diffusing walker. (A so-called CTRW characteristic function pertains to fractional diffusion equations.) CTRWs find applications in physics, insurance, and finance. The literature on CTRWs has developed its own terminology, distinct from that on random walks, and it takes some scrutiny to recognize that the notions are one and the same. See the interesting surveys in Kutner and Masoliver [18] and Scalas [19]. See the paper by Balakrishnan and Khantha [20] about the first passage time in a CTRW.
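A minimal simulation sketch of a decoupled CTRW (not from the cited papers; the exponential waiting times and Gaussian jumps are assumed purely for illustration): the position at time t is the sum of the jumps whose renewal epochs fall in [0, t].

```python
import numpy as np

rng = np.random.default_rng(3)

def ctrw_position(t_horizon, lam=1.0, jump_std=1.0):
    """Decoupled CTRW: exponential(lam) waiting times and independent N(0, jump_std^2) jumps.
    Returns S_t, the position at time t_horizon."""
    t, S = 0.0, 0.0
    while True:
        t += rng.exponential(1.0 / lam)      # waiting time t_n - t_{n-1}
        if t > t_horizon:
            return S
        S += rng.normal(0.0, jump_std)       # jump X_n, independent of the waiting times (decoupled)

positions = np.array([ctrw_position(50.0) for _ in range(5_000)])
# For this compound-Poisson special case, Var[S_t] = lam * t * jump_std^2
print(positions.var(), "vs", 1.0 * 50.0 * 1.0**2)
```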
We only briefly mention random walks on graphs. In their basic form, finite Markov chains are random walks on weighted directed graphs with possible loops. An electrical network is a multigraph G = (V, E) with a weight function r : E → R_+ representing the resistance of the edges. Other notable applications of random walks on graphs are random walks on social graphs. See related work in Blanchard and Volchenkov [21], Brémaud [22], Fujie and Zhang [23], Sarkar and Moore [24], Shi [25], Takács [8], and Telcs [26].
Random walks in queueing form a very notable subset of the entire literature on random walks. They picked up their notions very early on and found very close connections to various processes in queueing systems, including queue length, waiting times, departures, and other processes. The interest in random walks in queueing surged in the 1980s and 1990s and has ever since led to independent developments. Back then, very popular queues were those with N-, D-, and T-policies that carried a random walk structure, later joined by maintenance and vacation disciplines. Most of them required closed form expressions, which led to novel analysis of random walks. For example, in queues under the N-policy, when the queue is exhausted, the server rests until the number of new customers joining the waiting room crosses a positive integer N. The problem becomes less trivial if the input stream is bulk (that is, when it is a marked point process). If the server goes on maintenance (also referred to as vacations), he is absent from the system and cannot resume his service immediately once the queue crosses N, because he cannot interrupt any individual vacation segment. So he does it at the first opportune time. The problem of finding the first passage time (in this case, the time when the server resumes his service) and the queue level accumulated by then became the target of numerous works, including Abolnikov and Dshalalow [27,28,29,30], Abolnikov, Dshalalow, and Agarwal [31], Abolnikov, Dshalalow, and Dukhovny [32,33,34], Dshalalow [35,36,37,38,39,40], Dshalalow and Motir [41], Dshalalow and Russell [42], and Dshalalow and Yellen [43]. With the D-policy, the server resumes his service when the total service time needed to process the accumulated jobs exceeds a positive real number D. See Agarwal and Dshalalow [44]. In all these papers, explicit joint functionals of the first passage time and the position of the queue upon the passage were obtained. The cited closed form results were made possible through the introduction and implementation of the so-called D-operator (see Section 2), specifically designed to deal with escape parameters of random walks.
A further embellishment of the above-named queues are those with hysteretic control. This is when the server suspends his service upon a service completion at which the queue level drops below some r, and resumes his service when the queue accumulates to N or more customers. Here r ≤ N. During his primary inactivity, the server may rest or go on multiple vacations, or combine a single vacation with a rest that follows if the queue has not reached N on his return. A closed form expression for the joint distribution of the first passage time and the queue level at the passage was obtained in Abolnikov, Dshalalow, and Treerattrakoon [45], Dshalalow [46], Dshalalow and Dikong [47,48], and Dshalalow, Kim, and Tadj [49] for different variants of the hysteretic control policy. In some cases, batch (group) service took place, which added to the existing complexity of the underlying random walk problems.
Bacot and Dshalalow [50] in 2001 considered a further embellishment of the hysteretic control random walks by including so-called gated service. This was a bulk-input, batch-service queue with a multiple vacation policy and hysteretic control. Gated service refers to the policy in which the service consists of two phases. The server takes a batch of customers during the first phase, and if all available customers joined a batch of a lesser size than the server's capacity, then newly arriving customers can join the first batch (not in excess of the server's capacity); during the second phase, however, such an option is no longer honored even if the server's capacity has not been filled. All related random walk problems pertaining to the joint distribution of all key escape parameters in the context of the queueing process were solved. In this particular problem, the authors chained the results obtained during the vacations and the first phase.
The utility and versatility of the D -operator enabled us to enlarge classes of random walk problems that could be identified in stochastic games, stochastic networks, queueing, and economics. In queueing, we take advantage of multidimensional versions of the D -operator to analyze queues with parallel queueing stations or servicing facilities where one server can perform simultaneous and yet asynchronous work on more than one task at the same time, as per studies in Abolnikov, Dshalalow, and Agarwal [31], Dshalalow and Merie [51], and Dshalalow, Merie, and White [52].
Further utility of the D -operator in random walk is found in its chaining property between different modes that may include multistage vacations, followed by rests, and several service phases as in Abolnikov, Dshalalow, and Treerattrakoon [45], Dshalalow [46], Dshalalow, Kim, and Tadj [49], Dshalalow and Merie [51] in the context of queueing and Dshalalow and Huang [53,54,55], in the context of stochastic games.
Multidimensional Lévy walks with competing components were found to model games of several players under hostile action. The game is over when one of the players is ruined, that is, when one or more competing components cross their respective thresholds. Here again we see the escape of the walk from a set. The model is definitely not a stochastic game in the traditional sense, but it serves the purposes of a game-related setting and, as such, it works very well to model wars, economic competition, and stock and stock option trading, to name a few. Dshalalow [56,57] and Dshalalow and Liew [58,59,60] studied applications of random walk fluctuations to finance, while Dshalalow and Huang [53,54,55], Dshalalow and Iwezulu [61], Dshalalow and Ke [62,63], and Dshalalow and Treerattrakoon [64] studied exclusively antagonistic games, in the latter case with three players, two of whom can team up against the third. Dshalalow and White [65,66] focused on random walk applications to stochastic networks. Additionally, Dshalalow and Iwezulu [61] considered applications to cancer research.
Most of the work mentioned above is about random walks in R_+^d. Random walks that move in all directions are analytically more challenging, and unfortunately their principal functionals do not end up in closed forms. There has been a way to circumvent this obstacle by introducing so-called auxiliary active components. For example, if the underlying random walk's components are not monotone increasing but fluctuate, the true escape scenario is difficult to model without a cost to analyticity, but appending auxiliary active components can alleviate the predicament, because they can point to the moments when the nonmonotone components rise, dive, or spike, once or more times (see Section 4). Thus the traditional notion of escape is modified, but it still gives us an ample amount of explicit information about, say, the behavior of financial instruments. There are various alternatives that can predict the future of a stock portfolio when it comes to options trading strategies, such as going long or short on underlying contracts. (See a discussion in Section 4 and in Dshalalow and Liew [59].) One simple example of an antagonistic game in finance concerns the best time to exercise a stock option just before the underlying stock plunges or prior to the option's maturity, whichever comes first.
Note that since the escape of random walks offers predictive tools for the outcomes of games such as those occurring in finance, it stands to reason to refine the information that leads to ruin. One such effort was undertaken in Dshalalow and Ke [62,63] by introducing a smaller subset of A from which the walk will escape before it escapes from A, which should give us an extra layer of security. Another way to refine the information is to allow access to the underlying walk at any epoch of time, thereby making it time dependent. Until now, we meant only walks whose escape parameters were not related to any deterministic times. Such a refinement allows us to have the first passage time and the position of the walker upon its escape team up with the continuous time parameter version S(t) of the walk, and to attempt to still yield tame analytical expressions. We call this approach time sensitive analysis. It was introduced in Dshalalow [37], further refined in [67], and then picked up in a series of papers. We just mention some: Agarwal, Dshalalow, and O'Regan [68], Al-Matar and Dshalalow [69], Dshalalow [70,71], Dshalalow and Bacot [72], Dshalalow and Nandyose [73,74], Dshalalow, Nandyose, and White [75], Dshalalow and White [76]. See further discussions in Section 5 and Section 6.
Among other random walk applications, Antal and Redner [77] studied first passage time properties of a discrete time random walk in which the length of each step is uniformly distributed on the interval [−a, a]. The walker starts initially at an arbitrary point x ∈ (0, 1), with the end points absorbing. The idea comes from the problem of DNA sequence recognition by a mobile protein.
Hughes, in his book [78], discusses various modifications of random walks, such as random walks on triangular lattices and on fractals as well as “self-avoiding walks” in which the walker does not visit the same point more than once. Among various applications, self-avoiding walks can model long-chain polymers in dilute solutions.
We would also like to mention applications of random walks and fluctuations to physics in Redner [79], finance in Kyprianou and Pistorius [80], Muzy, Delour, and Bacry [81], and Scalas [19], astronomy in Uchaikin and Gusarov [82] and Zhou, Sun, and Zhou [83], biology and medicine in Odagaki and Kasuya [84], electrical networks in Telcs [26], social networks in Sarkar and Moore [24], wireless communications in Jabbari, Zhou, and Hillier [85], and queueing in Asmussen [86], Bayer and Boxma [87], Bladt and Nielsen [97], Cohen [88], Gannon, Pechersky, Suhov, and Yambartsev [89], Guillemin and van Leeuwaarden [90], Janssen and van Leeuwaarden [91], Lemoine [92], Stadje [93], Takács [2], and Zorine [94] published by other authors.
Besides, there is a variety of excellent books and survey articles on random walks, or directly related to random walks, by Bingham [95,96], Bladt and Nielsen [97], Blanchard and Volchenkov [21], Brémaud [22], Fayolle, Iasnogorodski, and Malyshev [15], Foss, Korshunov, and Zachary [98], Fujie and Zhang [23], Gut [99], Hildebrand [16], Iksanov [100], Lawler [101], Redner [79], Shi [25], Slade [102], Takács [2], Telcs [103], and Wijesundera, Halgamuge, and Nanayakkara [104].

2. The Operational Calculus of One-Dimensional Random Walks

All processes are defined on a probability space (Ω, F, P). Our work on random walks with non-negative integer jumps dates back to the 1990s [35,37,38,39,40,105], about S = Σ_{n=0}^∞ X_n ε_{t_n} with position dependent marking, that is, when X_n depends on t_n − t_{n−1}, but not on any other components of the support counting measure T = Σ_{n=0}^∞ ε_{t_n}. More specifically, the sequence (X_0, t_0), (X_1, t_1), … is a delayed renewal process.
We further assume that S is a Lévy process which, in particular, warrants against clustering of the t_n's. With A = [0, M), assuming M ∈ N, we are interested in the time and position of S upon its escape from A. Thus we have: ν = inf{m : S_m = X_0 + ⋯ + X_m ∈ A^c} as the exit or escape index, t_ν — the exit time or first passage time, and S_ν — the position of the walker at t_ν (or the excess value over M).
The transform
Φ_ν = Φ_ν(ξ, u, v, ϑ, θ) = E[ξ^ν u^{S_{ν−1}} v^{S_ν} e^{−ϑ t_{ν−1} − θ t_ν}]    (1)
for ξ, u, v ∈ B̄(0, 1), Re ϑ ≥ 0, Re θ ≥ 0, where B̄(0, 1) is the compact unit ball in C, is that of the joint distribution of ν, S_ν, t_ν, and of two more useful pre-escape quantities, S_{ν−1} and t_{ν−1} (pre-exit time), representing the position of the walker and the time it was last seen in set A before its escape. Note that because the jumps X_n are valued in N, the walker at time t_ν can be positioned at S_ν arbitrarily far from A on its escape.
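Before turning to the closed form, here is a minimal Monte Carlo sketch (not from the paper; the geometric jumps, exponential inter-jump times, and the evaluation point are assumptions for illustration) that generates the escape quantities ν, S_{ν−1}, S_ν, t_{ν−1}, t_ν and estimates Φ_ν at one point.

```python
import numpy as np

rng = np.random.default_rng(7)

M = 10                  # A = [0, M)
p_jump = 0.5            # assumed: jumps X_n ~ Geometric(p_jump) on {1, 2, ...}
lam_t = 1.0             # assumed: inter-jump times t_n - t_{n-1} ~ Exp(lam_t); X_0 = 0, t_0 = 0

def one_path():
    """Run the walk until it leaves A = [0, M); return (nu, S_{nu-1}, S_nu, t_{nu-1}, t_nu)."""
    S, t, n = 0, 0.0, 0
    S_prev, t_prev = 0, 0.0
    while S < M:
        S_prev, t_prev = S, t
        n += 1
        S += rng.geometric(p_jump)
        t += rng.exponential(1.0 / lam_t)
    return n, S_prev, S, t_prev, t

xi, u, v, vartheta, theta = 0.9, 0.8, 0.7, 0.1, 0.2   # a sample point of the transform
vals = [xi**n * u**S1 * v**S2 * np.exp(-vartheta * t1 - theta * t2)
        for n, S1, S2, t1, t2 in (one_path() for _ in range(50_000))]
print("empirical Phi_nu at the sample point:", np.mean(vals))
```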
We claim that Φ_ν can be expressed in a closed form. First we define the D-operator as follows:
D_x^k φ(x, y) = lim_{x→0} (1/k!) ∂^k/∂x^k [ (1/(1 − x)) φ(x, y) ]  for k ≥ 0,  and  D_x^k φ(x, y) = 0  for k < 0,    (2)
for x ∈ B̄(0, 1) and Re y ≥ 0. Here φ is a function analytic at zero in the first variable. Suppose the joint transforms
γ_0(z, θ) = E[z^{X_0} e^{−θ t_0}],   γ(z, θ) = E[z^{X_1} e^{−θ (t_1 − t_0)}]    (3)
with z ∈ B̄(0, 1) and Re θ ≥ 0 are known. Then, the following theorem holds.
Theorem 1.
The following formula holds.
Φ_ν = D_x^{M−1}[ γ_0(v, θ) − γ_0(xv, θ) + (ξ γ_0(xuv, ϑ+θ) / (1 − ξ γ(xuv, ϑ+θ))) (γ(v, θ) − γ(xv, θ)) ],    (4)
where x ∈ B(0, 1) and the rest of the domains are as specified in (1).
Proof. 
(i) Introduce the transformation applied to a bounded function f : N_0 → C:
D_p{f(p)}(x) := (1 − x) Σ_{p=0}^∞ x^p f(p),  x ∈ B(0, 1).
It can be readily shown that the inverse operator (2) restores f if we apply it for every k:
D_x^k ∘ D_p{f(p)}(x) = f(k),  k = 0, 1, …
Further, introduce the auxiliary family of exit indices
ν(p) := inf{m : S_m = X_0 + ⋯ + X_m > p}
for p = 0, 1, …, along with the family of functionals
Φ_{ν(p)} = E[ξ^{ν(p)} u^{A_{ν(p)−1}} v^{A_{ν(p)}} e^{−ϑ τ_{ν(p)−1} − θ τ_{ν(p)}}]
for p = 0, 1, … (writing, for short, A_j := X_0 + ⋯ + X_j = S_j and τ_j := t_j), noticing that ν = ν(M − 1). So, if we apply D_p to Φ_{ν(p)}, we can then restore Φ_{ν(M−1)} = Φ_ν by applying the inverse operator D_x^{M−1} to D_p{Φ_{ν(p)}}.
(ii) Before we apply D_p to Φ_{ν(p)}, we notice that
D_p(1_{ν(p)=j})(x) = x^{A_{j−1}} − x^{A_j},  j = 0, 1, …,
with A_{−1} := 0. Indeed, observe that
{ν(p) = j} = {A_{j−1} ≤ p} ∩ {A_j > p},  j = 0, 1, …,
since {A_j} is a monotone, nondecreasing sequence of partial sums. Hence,
1_{ν(p)=j} = 1_{A_{j−1} ≤ p} 1_{A_j > p} = 1_{A_{j−1} ≤ p} − 1_{A_j ≤ p}.
Then,
D_p(1_{ν(p)=j})(x) = (1 − x) Σ_{p=0}^∞ x^p [1_{A_{j−1} ≤ p} − 1_{A_j ≤ p}] = (1 − x) Σ_{p=A_{j−1}}^{A_j − 1} x^p = (1 − x) [ Σ_{p ≥ A_{j−1}} x^p − Σ_{p ≥ A_j} x^p ].
The rest is obvious.
(iii) We first break Φ_{ν(p)} into
Φ_{ν(p)} = E[ξ^{ν(p)} u^{A_{ν(p)−1}} v^{A_{ν(p)}} e^{−ϑ τ_{ν(p)−1} − θ τ_{ν(p)}}] = Σ_{j=0}^∞ E[ξ^j u^{A_{j−1}} v^{A_j} e^{−ϑ τ_{j−1} − θ τ_j} 1_{ν(p)=j}].
Then, applying D_p to Φ_{ν(p)} and using Fubini's theorem, we have
Φ*(x) = D_p(Φ_{ν(p)})(x) = Σ_{j=0}^∞ E[ξ^j u^{A_{j−1}} v^{A_j} e^{−ϑ τ_{j−1} − θ τ_j} D_p(1_{ν(p)=j})(x)] = Σ_{j=0}^∞ ξ^j E[(xuv)^{A_{j−1}} e^{−(ϑ+θ) τ_{j−1}}] E[v^{X_j} (1 − x^{X_j}) e^{−θ Δ_j}].
Denote
E_j := E[(xuv)^{A_{j−1}} e^{−(ϑ+θ) τ_{j−1}}],   F_j := E[v^{X_j} (1 − x^{X_j}) e^{−θ Δ_j}]
for j = 0, 1, … Taking τ_{−1} = A_{−1} = 0, we have E_0 = 1 and F_0 = γ_0(v, θ) − γ_0(xv, θ).
For j ≥ 1,
E_j = γ_0(xuv, ϑ+θ) γ^{j−1}(xuv, ϑ+θ),   F_j = γ(v, θ) − γ(xv, θ).
It can be shown that |γ(xuv, ϑ+θ)| < 1 if |x| < 1 due to the above assumptions (proven below in part (iv)), which warrants the convergence of the geometric series
Σ_{j≥1} ξ^j γ^{j−1}(xuv, ϑ+θ) = ξ / (1 − ξ γ(xuv, ϑ+θ)).
The rest is obvious.
(iv) With xuv = z, we now show that for γ(z, θ) = E[z^{X_1} e^{−Δ_1 θ}], |γ(z, θ)| < 1 if |z| < 1. We have
E[z^{X_1} e^{−Δ_1 θ}] = Σ_{k=0}^∞ z^k ∫_{t=0}^∞ e^{−tθ} P_{X_1 Δ_1}(k, dt),
|E[z^{X_1} e^{−Δ_1 θ}]| ≤ Σ_{k=0}^∞ |z|^k ∫_{t=0}^∞ e^{−t Re(θ)} P_{X_1 Δ_1}(k, dt) = ∫_{t=0}^∞ e^{−t Re(θ)} P_{X_1 Δ_1}(0, dt) + Σ_{k=1}^∞ |z|^k ∫_{t=0}^∞ e^{−t Re(θ)} P_{X_1 Δ_1}(k, dt) < a + e^{−Re(θ)} b + |z| c + |z| e^{−Re(θ)} d,
as |z|^k ≤ |z| for k ≥ 1 and |z| < 1, and where
a := ∫_{t=0}^{1} P_{X_1 Δ_1}(0, dt),  b := ∫_{t=1}^∞ P_{X_1 Δ_1}(0, dt),  c := Σ_{k≥1} ∫_{t=0}^{1} P_{X_1 Δ_1}(k, dt),  d := Σ_{k≥1} ∫_{t=1}^∞ P_{X_1 Δ_1}(k, dt).
Then, a + b + c + d = 1, while
a + e^{−Re(θ)} b + |z| c + |z| e^{−Re(θ)} d < 1
if |z| < 1 and e^{−Re(θ)} ≤ 1. The latter inequality holds because Re(θ) ≥ 0. The former inequality |z| < 1 holds because z = xuv and |x| < 1.
Furthermore, e^{−Re(ϑ+θ)} = e^{−Re(ϑ)} e^{−Re(θ)} ≤ 1 is satisfied with Re(ϑ) ≥ 0 and Re(θ) ≥ 0 in the LST anyway. This shows that |γ(xuv, θ+ϑ)| < 1. □
The properties of D listed below support our claim that the expression in (4) is tractable.
(i)
D^k is a linear functional on the space of all functions analytic at zero.
(ii)
D_x^k(1(x)) = 1, where 1(x) = 1 for all x ∈ R.
(iii)
Let g be a function analytic at zero. Then,
D_x^k[x^j g(x)] = D_x^{k−j}[g(x)].
Proof. 
With the use of the Leibniz formula
d^k/dx^k [F(x) G(x)] = Σ_{s=0}^k C(k, s) F^{(s)}(x) G^{(k−s)}(x)
with F(x) = x^j and G(x) = g(x)/(1 − x). Hence, applying D^k, we have
D_x^k[x^j g(x)] = (1/k!) Σ_{s=0}^k C(k, s) [d^s/dx^s x^j]|_{x=0} (k − s)! D_x^{k−s}[g(x)] = D_x^{k−j}[g(x)].
(iv)
In particular, if j = k, we have D_x^k[x^k g(x)] = g(0).
(v)
If x^{−s} f(x) is analytic at zero, then
D_x^k[x^{−s} f(x)] = D_x^{k+s}(f(x)).
(vi)
If a(x) = Σ_{i=0}^∞ a_i x^i, then
D_x^k[a(x)] = Σ_{i=0}^k a_i    (7)
and
D_x^k[a(xy)] = Σ_{i=0}^k a_i y^i.    (8)
Remark 1.
Formulas (7) and (8) enable one to calculate partial sums of a power series (see the numerical sketch following this list of properties).
(vii)
For any real number a and for a positive integer n, except for a = n = 1, it holds true that
D_x^k[1/(1 − ax)^n] = k + 1 if a = n = 1, and D_x^k[1/(1 − ax)^n] = Σ_{j=0}^k C(n+j−1, j) a^j otherwise.
(viii)
For any two real numbers a, b and two positive integers n and r (except for a = n = 1 and b = r = 1), it holds that
D_x^k[ 1/((1 − ax)^n (1 − bx)^r) ] = Σ_{j=0}^k C(n+j−1, j) a^j Σ_{i=0}^{k−j} C(r+i−1, i) b^i.
Proof. 
By the proof of property (iii),
D_x^k[ 1/((1 − ax)^n (1 − bx)^r) ] = D_x^k[ Σ_{j=0}^∞ C(n+j−1, j) (ax)^j · 1/(1 − bx)^r ].
Then, interchanging the operator and the series and using properties (iii) and (vii), this simplifies to
D_x^k[ 1/((1 − ax)^n (1 − bx)^r) ] = Σ_{j=0}^k C(n+j−1, j) a^j D_x^{k−j}[ 1/(1 − bx)^r ] = Σ_{j=0}^k C(n+j−1, j) a^j Σ_{i=0}^{k−j} C(r+i−1, i) b^i.
(ix)
If X is an integer-valued non-negative r.v. with h(z) = E[z^X] and k is a positive integer, then
E[z^{X∧k}] = D_x^k[ h(xz) + z^k (1 − h(x)) ].
(x)
If X is an integer-valued non-negative r.v. with h(z) = E[z^X] and k is a positive integer, then
E[z^{(X−k)^+}] = D_x^k[ h(x) + z^{−k} (h(z) − h(xz)) ].
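The partial-sum properties (7) and (8) are easy to check numerically. The sketch below (an illustration, not taken from the paper) implements the D-operator of (2) symbolically and verifies property (vii) for a geometric series.

```python
import sympy as sp

x = sp.symbols('x')

def D(phi, k):
    """D-operator of (2): (1/k!) * d^k/dx^k [ phi(x)/(1 - x) ], evaluated at x = 0."""
    if k < 0:
        return sp.Integer(0)
    return sp.limit(sp.diff(phi / (1 - x), x, k) / sp.factorial(k), x, 0)

a, k = sp.Rational(1, 3), 4
phi = 1 / (1 - a * x)                      # generating series sum_i a^i x^i
print(D(phi, k))                           # partial sum 1 + a + ... + a^k, per (7) / property (vii)
print(sum(a**i for i in range(k + 1)))     # direct check
```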
Formula (4) of Theorem 1 is largely simplified when the marginal transform Φ_ν(1, 1, 1, v, θ) = E[v^{S_ν} e^{−θ t_ν}] is sought. That is, with ξ = u = 1 and ϑ = 0, we have
Φ_ν(1, 1, 1, v, θ) = γ_0(v, θ) − [1 − γ(v, θ)] D_x^{M−1}[ γ_0(xv, θ) / (1 − γ(xv, θ)) ].    (13)
In some applications t_0 = 0 and X_0 = i (≥ 0); then γ_0(xv, θ) = (xv)^i. Then, using property (iii) of D, this reduces to
Φ_ν(1, 1, 1, v, θ) = v^i − v^i [1 − γ(v, θ)] D_x^{M−1−i}[ 1 / (1 − γ(xv, θ)) ].    (14)
Example 1.
To see the utility of the above expressions, consider the classical queueing system M^X/G^N/MV/1/∞, that is, with marked Poisson input, N-policy general service, and multiple vacations. The server goes on maintenance (known as vacations) that starts when the queue is exhausted and consists of a series of random segments, none of which can be interrupted. The primary service resumes when the queue accumulates to at least N units by the end of one of the maintenance segments. Other than the usual routine with the Pollaczek–Khintchine formula for the pgf of the equilibrium distribution of the queue length upon departures, there is a need for the contents of the queue at the end of maintenance, when the server returns to the system to find a line of units most likely in excess of N. In other words, there is a need to find the distribution of the number of units entering the queue during the maintenance period, whose length is implicitly controlled by N.
Here we are under the following specifications: t_0 = X_0 = 0, that is, the server starts its maintenance immediately after the queue drops to zero, implying that γ_0(z, θ) = 1. Then,
γ(z, θ) = E[z^{X_1} e^{−t_1 θ}] = γ(θ + λ − λ a(z)),    (15)
where γ(θ) = E[e^{−t_1 θ}] is the LST of a maintenance segment and a(z) is the pgf of a batch size of the input (which is marked Poisson with rate λ of its support counting measure). It is thus obvious that the queueing process during the maintenance sequence is a random walk S observed upon the successive ends t_1, t_2, … of the maintenance segments. (We are not concerned about the status of the process upon each arrival.) So S = Σ_{n=0}^∞ X_n ε_{t_n} is with position dependent marking, characterized by the functional γ(z, θ) in (15). Combining the special case of (14) with i = 0 and (15) gives the joint transform
Φ_ν(1, 1, 1, z, θ) = E[z^{S_ν} e^{−θ t_ν}] = 1 − [1 − γ(θ + λ − λ a(z))] D_x^{N−1}[ 1 / (1 − γ(θ + λ − λ a(xz))) ]    (16)
of the maintenance period length and the number of units accumulated during maintenance, upon the server's return.
To further illustrate the use of the D-operator, consider the special case of the system under the assumptions that the input is ordinary, i.e., a(z) = z, and the maintenance segments are a.s. of constant length c. Hence, γ(θ) = e^{−cθ} and
1 / (1 − γ(λ − λ a(xz))) = 1 / (1 − e^{−cλ(1 − zx)}),  |xz| < 1.    (17)
The latter condition is met when |x| < 1, which is sufficient when using the D-operator in (16). It is readily seen that 1/(1 − e^{−cλ(1 − zx)}) is analytic in x for |xz| < 1, which we can easily satisfy with |x| small without restricting z ∈ B̄(0, 1), implying that
1 / (1 − e^{−cλ(1 − zx)}) = 1 + Σ_{n=1}^∞ e^{−cλn} e^{cλnzx} = 1 + Σ_{k=0}^∞ ((cλz)^k / k!) x^k Σ_{n=1}^∞ e^{−cλn} n^k    (18)
and
D_x^{N−1}[ 1 / (1 − γ(λ − λ a(xz))) ] = 1 + Σ_{k=0}^{N−1} ((cλz)^k / k!) Σ_{n=1}^∞ e^{−cλn} n^k    (19)
after using formula (7). Finally, the marginal pgf of S_ν (the number of units in the system accumulated upon the server's return) reads
Φ_ν(1, 1, 1, z, 0) = E[z^{S_ν}] = 1 − (1 − e^{−cλ(1 − z)}) [ 1 + Σ_{k=0}^{N−1} ((cλz)^k / k!) Σ_{n=1}^∞ e^{−cλn} n^k ].    (20)
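A quick numerical check of (20) (not part of the paper; λ, c, N, and the evaluation point z are assumed values): the closed form is compared against a direct Monte Carlo simulation of the number of units accumulated by the end of the maintenance period.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
lam, c, N = 1.2, 1.0, 5          # assumed arrival rate, segment length, N-policy level
z = 0.6                          # point at which the pgf is evaluated

def pgf_formula(z, n_max=500):
    """Closed form (20), with the inner series over n truncated at n_max."""
    n = np.arange(1, n_max + 1, dtype=float)
    tail = sum((c * lam * z) ** k / math.factorial(k) * np.sum(np.exp(-c * lam * n) * n ** k)
               for k in range(N))
    return 1.0 - (1.0 - math.exp(-c * lam * (1.0 - z))) * (1.0 + tail)

def simulate_S_nu():
    """Poisson(c*lam) arrivals per maintenance segment, accumulated until the total reaches N."""
    total = 0
    while total < N:
        total += rng.poisson(c * lam)
    return total

samples = np.array([simulate_S_nu() for _ in range(100_000)])
print("empirical pgf :", np.mean(z ** samples))
print("formula (20)  :", pgf_formula(z))
```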

3. Random Walks on Infinite Graphs and Cybersecurity: A Bivariate Model

Consider an infinite weighted graph in which weights are associated with nodes rather than edges. There are no infinite graphs in the real world, rather very large graphs (representing large-scale networks); see Dshalalow and White [65,66]. Assume that during a series t_1, t_2, … of cyberattacks, successive batches of nodes are incapacitated at random time increments. Associated with each node is a random weight representing its value to the health of the network. We assume the network enters a critical state, wherein it may become dysfunctional, if the number of nodes incapacitated by hostile attacks exceeds a fixed integer threshold M or the magnitude of the weights associated with the compromised nodes exceeds a fixed real threshold V. We proceed with more formalism of the model.
Let Ω , F Ω , P be a probability space and let
η = (N, W) = Σ_{k≥1} (n_k, w_k) ε_{t_k},
where ε_a is a Dirac point measure, be a marked Poisson random measure on this probability space describing the evolution of the damage inflicted on the network, where:
n_k nodes are destroyed at time t_k, k = 1, 2, …;
w_k = Σ_{j=1}^{n_k} w_{jk} is the non-negative real weight associated with the n_k nodes;
and the underlying support counting measure Σ_{k=1}^∞ ε_{t_k} is Poisson of rate λ, directed by λ|·|, where |·| is the Borel–Lebesgue measure on B(R_+).
We assume that n k ’s are iid (and independent of w j k ’s) with common marginal pgf g z , and w j k ’s are iid with common LST l u for j , k N .
Obviously, η is a bivariate Poisson random walk on a two-dimensional random grid generated at the times t_k, in such a way that if the underlying walker is located at the point (Σ_{k=0}^m n_k, Σ_{k=0}^m w_k) at time t_m, it moves to (Σ_{k=0}^{m+1} n_k, Σ_{k=0}^{m+1} w_k) by time t_{m+1}, driven by the jump (n_{m+1}, w_{m+1}) that goes to the right or upward.
We have the following representation for η as the transform of its dependent components N and W .
E[z^{N(T)} e^{−u W(T)}] = e^{λ|T| (g(z l(u)) − 1)}
with Re(u) ≥ 0, where T is a Borel subset of R_+.
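The compound structure behind this transform is easy to verify by simulation. The following sketch is illustrative only (geometric batch sizes and exponential node weights are assumed; they are not the paper's choices):

```python
import numpy as np

rng = np.random.default_rng(1)

lam, T = 2.0, 3.0       # assumed attack rate and window length |T|
pg = 0.4                # assumed: n_k ~ Geometric(pg) on {1, 2, ...}, pgf g
wr = 1.5                # assumed: node weights ~ Exp(wr), LST l(u) = wr / (u + wr)
z, u = 0.7, 0.5         # evaluation point of the transform

g = lambda s: pg * s / (1 - (1 - pg) * s)     # pgf of the batch size
l = lambda u: wr / (u + wr)                   # LST of a single node weight

vals = []
for _ in range(100_000):
    K = rng.poisson(lam * T)                           # number of attacks in the window
    n = rng.geometric(pg, size=K)                      # nodes lost per attack
    W = rng.exponential(1.0 / wr, size=n.sum()).sum()  # total weight lost
    vals.append(z ** n.sum() * np.exp(-u * W))

print("empirical transform:", np.mean(vals))
print("closed form        :", np.exp(lam * T * (g(z * l(u)) - 1)))
```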
Now, suppose the random walk η is observed by a delayed renewal process
T = Σ_{n=0}^∞ ε_{τ_n}
such that
Δ_n = τ_n − τ_{n−1},  n ∈ N,
are iid and independent of Δ_0 = τ_0, and
L_0(θ) = E[e^{−θ τ_0}],
L(θ) = E[e^{−θ Δ_1}],
each with Re θ ≥ 0, are the LST of Δ_0 = τ_0 and the common LST of the Δ_n for n ∈ N, respectively.
Then,
E[z^{N(τ_0)} e^{−u W(τ_0) − θ τ_0}] = L_0(θ + λ − λ g(z l(u))) = γ_0(z, u, θ),
E[z^{N(Δ_1)} e^{−u W(Δ_1) − θ Δ_1}] = L(θ + λ − λ g(z l(u))) = γ(z, u, θ)
are the functionals describing the total number of lost nodes and their associated weights observed within the time intervals [0, τ_0] and (τ_0, τ_1], respectively, which we assume are known or readily obtainable.
X Y T = Σ_{n=0}^∞ (X_n, Y_n) ε_{τ_n}    (26)
is the bivariate random walk with mutually dependent marks
(X_n, Y_n) : Ω → N × R_+
that emerged from embedding η upon the times T = {τ_0, τ_1, τ_2, …}. The random walk X Y T describes the path of the walker that moves on the associated random grid, updated upon the walker's moves at T.
Let S_n = (Σ_{i=0}^n X_i, Σ_{i=0}^n Y_i). Given the rectangle A = [0, M] × [0, V], where M ∈ N and V ∈ R_+, we are interested in the escape parameters upon the walker's exit from set A. Namely,
μ := inf{m ≥ 0 : N_m = Σ_{i=0}^m X_i > M},
ν := inf{n ≥ 0 : W_n = Σ_{i=0}^n Y_i > V}
are the exit indices.
We would say that the component X of the random walk X Y T is terminated at time τ_μ and the component Y is terminated at time τ_ν if X and Y acted alone, but we seek the time when the first of them terminates. If the original marked Poisson walk η is observed by T, then the embedded process exhibits the (mutually dependent) increments X_n and Y_n as the marks of the process X Y T. The motion of the walker represented by the walk X Y T is observed upon T and gives us the escape time from set A, which takes place at τ_ρ = τ_μ ∧ τ_ν, where ρ = μ ∧ ν; this is the first observed passage time, which carries delayed information regarding the actual real-time crossing (that occurred earlier).
The target functional is
Φ = Φ(α_0, α, β_0, β, h_0, h) = E[α_0^{N_{ρ−1}} α^{N_ρ} e^{−β_0 W_{ρ−1} − β W_ρ} e^{−h_0 τ_{ρ−1} − h τ_ρ}],    (29)
where α_0, α ∈ B̄(0, 1), Re β_0 ≥ 0, Re β ≥ 0, Re h_0 ≥ 0, and Re h ≥ 0. This functional includes all relevant virtual escape and pre-escape parameters, including the first passage time, the pre-first passage time, and the locations of the walker upon its virtual escape from set A and in set A as last seen prior to the escape.
The primary tool we use is the composition of the familiar D-operator of (2) and the inverse LC^{−1} of the Laplace–Carson transform, defined as
LC_q(·)(y) = y ∫_{q=0}^∞ e^{−yq} (·) dq,
with Re y > 0, with the inverse
LC_y^{−1}(·)(q) = L_y^{−1}((·) (1/y))(q),
where L_y^{−1} is the inverse of the Laplace transform. Then, the composition reads
D_{xy}(·)(p, q) = LC_y^{−1}( D_x^p(·) )(q).
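A small symbolic sketch of the Laplace–Carson transform and its inverse (illustrative, with an assumed test function; sympy's Laplace routines are used):

```python
import sympy as sp

q, w = sp.symbols('q w', positive=True)

def laplace_carson(f, q, w):
    # LC_q{f}(w) = w * (ordinary Laplace transform of f)
    return w * sp.laplace_transform(f, q, w, noconds=True)

def inverse_laplace_carson(F, w, q):
    # LC_w^{-1}{F}(q) = L_w^{-1}{ F(w)/w }(q)
    return sp.inverse_laplace_transform(F / w, w, q)

f = q**2                                   # assumed test function
F = laplace_carson(f, q, w)                # -> 2/w**2
print(F)
print(inverse_laplace_carson(F, w, q))     # recovers q**2 (times Heaviside(q), i.e., for q > 0)
```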
The key result lies in the following theorem [66] (see the results combined in Theorem 2.1 through Corollary 2.6 in that paper).
Theorem 2.
The functional Φ satisfies the following formula.
Φ = D_{xy}[ γ_0(α, β, h) − γ_0(αx, β + y, h) + (γ_0 / (1 − γ)) (γ(α, β, h) − γ(αx, β + y, h)) ](M, V),    (33)
where
γ = γ(α_0 α x, β_0 + β + y, h_0 + h),
γ_0 = γ_0(α_0 α x, β_0 + β + y, h_0 + h),
under notation (22)–(25), where x, y ∈ B(0, 1) and the domains of the rest are as in (29).
Note that formula (33) resembles that of (4) in Theorem 1 and for a good reason. (We explain the similarity in forthcoming results.)
We skip the discussion of the claim of analytical tractability of formula (33), which by all means is verifiable, and instead move to a noteworthy embellishment of Theorem 3.1 in Section 3 of the paper [66]. Namely, to refine the information on the nature of the random walk's escape from set A, or equivalently, on a severe loss of nodes in the network under attack, we would like to add one more control level, referred to as an auxiliary threshold. The latter has the objective of offsetting the inevitable crudeness due to the delay caused by the restricted observations of the walk around the first passage time.
Let M_1 < M. Introduce the auxiliary index
μ_1 = inf{j : N_j > M_1}.
Our attention is now on the confined sub-σ-algebra F(Ω) ∩ {μ_1 < μ ∧ ν} and the associated functional
Φ_{μ_1<(μ∧ν)} = Φ_{μ_1<(μ∧ν)}(u_0, u, α_0, α, v_0, v, β_0, β, θ_0, θ, h_0, h) = E[ u_0^{N_{μ_1−1}} u^{N_{μ_1}} α_0^{N_{(μ∧ν)−1}} α^{N_{μ∧ν}} e^{−v_0 W_{μ_1−1} − v W_{μ_1} − β_0 W_{(μ∧ν)−1} − β W_{μ∧ν}} × e^{−θ_0 τ_{μ_1−1} − θ τ_{μ_1} − h_0 τ_{(μ∧ν)−1} − h τ_{μ∧ν}} 1_{μ_1 < (μ∧ν)} ] = Φ_{μ_1<μ<ν} + Φ_{μ_1<μ=ν} + Φ_{μ_1<ν<μ}.
A realization of the process X Y T of losses (defined in (26) above) in Figure 1 illustrates how it operates with respect to the introduced main and auxiliary thresholds. We can regard X Y T as a two-dimensional random walk on a random grid (rather than a traditional lattice). We have a rectangular region formed of rectangular sectors in white, green, and red. In real time, the walker attempts to escape the white-green area at the first opportune time, when the cumulative loss of nodes exceeds M or the cumulative weight loss exceeds V, whichever comes first. It leaves a polygonal path in blue and a cruder, observed path in green. The walker enters the green area indicating that the lower threshold M_1 is crossed, while neither M nor V has been. In reality, the green area can be empty with a positive probability.
In Figure 1, the underlying real-time process (the blue dots) represents the real-time incoming damage, which is observed only upon the τ_k's (depicted by the green dots), where the M_1-observed crossing occurs before the first observed passage time (FOPT) of M or V (i.e., there is an observation in the green area), at which the components of the process may or may not coincide with their values at the real-time FPT (first passage time).
The following assertion on functional Φ μ 1 < ( μ ν ) about the escape parameters of random walk X Y T defined in (26) is an embellishment of the random walk model of Theorem 2 verbalized in the context of cyberattacks on a network, and can be found in [66] (Theorem 3.5).
Theorem 3.
The functional Φ_{μ_1<(μ∧ν)} of the network damage on the confined sub-σ-algebra F(Ω) ∩ {μ_1 < (μ∧ν)} satisfies the formula
Φ μ 1 < ( μ ν ) = D x y w ϕ 0 1 ϕ 0 + φ 0 1 φ ( ϕ 1 ϕ ) ξ 1 χ 1 ψ M 1 , M , V
under the abbreviated notation in (39)–(44).
φ = γ(u_0 u α_0 α x y, v_0 + v + β_0 + β + w, θ_0 + θ + h_0 + h)    (39)
ϕ = γ(u α_0 α x y, v + β_0 + β + w, θ + h_0 + h)    (40)
ϕ_1 = γ(u α_0 α y, v + β_0 + β + w, θ + h_0 + h)    (41)
ψ = γ(α_0 α y, β_0 + β + w, h_0 + h)    (42)
χ = γ(α y, β + w, h)    (43)
ξ_1 = γ(α, β, h)    (44)
and expressions with 0 subscripts involve initial-jump functionals, e.g., φ 0 is like φ except it uses γ 0 instead of γ.
Example 2.
In this example we present fully explicit probabilistic results for a special case of the process with the M_1-auxiliary threshold, under the following five assumptions, made in the context of network security.
  • As previously, the attack times t_1, t_2, … form a Poisson point process of rate λ.
  • Inter-observation times are constant, i.e.,
    Δ_k = τ_k − τ_{k−1} = c a.s., so that L(θ) = e^{−θc}.
  • The nodes lost per strike have an arbitrary finite discrete distribution, i.e., P{n_k = j} = p_j, j = 1, …, R, with pgf g(z) = Σ_{s=1}^R p_s z^s and p = (p_1, …, p_R).
  • The weight per node is w_{jk} ~ Gamma(α, β), so we have the LST l(z) = (β/(z + β))^α.
  • The initial functional is γ_0 = 1 (i.e., zero initial damage).
We note that deterministic observations present more of a challenge than many random observations. The assumption that the number of nodes destroyed in a single strike is bounded by R is analytically convenient but not too restrictive, because R can be made arbitrarily large. The gamma distribution of the weight of a single node is also quite general.
Let E[u^{N_{μ_1}} e^{−v W_{μ_1}} e^{−θ τ_{μ_1}} 1_{μ_1<(μ∧ν)}] be the Φ_{μ_1<(μ∧ν)}-marginal functional of the walk's position upon the passage of threshold M_1 (see the green area in the figure above), with the main escape from set A not yet having occurred. Then, the following holds, formulated in the context of the cyberattack.
Under Assumptions 1–5, the joint transform of the number of lost nodes, their cumulative weight, and the first passage time of the crossing of M_1 preceding the first crossing of M or V (i.e., on the sub-σ-algebra F(Ω) ∩ {μ_1 < (μ∧ν)}) satisfies the following formula [66]:
Φ μ 1 < ( μ ν ) ( 1 , u , 1 , 1 , 0 , v , 0 , 0 , θ , 0 , 0 ) = E u N μ 1 e v W μ 1 e θ τ μ 1 1 μ 1 < ( μ ν ) = e c θ + λ { k = 0 M 1 1 u k F k θ , p m = 0 M 1 k u m E m p β v + β α k + m P α k + m , v + β V k = 0 M 1 1 u k β β + v α k P α k , v + β V n = 0 k E n p F k n θ , p } ,
where
F j θ , p = r = 0 R 1 R j ( c λ ) j r Li ( j r ) e c ( θ + λ ) δ 1 = j [ R ] · δ = r + j p 1 δ 1 p R δ R δ 1 ! δ R ! ,
R = 1 , , R , δ = δ 1 , , δ R N 0 R ( δ j R for each j),
E j p = r = 0 R 1 R j c λ j r δ 1 = j [ R ] · δ = r + j p 1 δ 1 p R δ R δ 1 ! δ R ! ,
Li_s(z) = Σ_{k=1}^∞ z^k / k^s is the polylogarithm, which is numerically tractable on our domain {e^{−w} : Re w > 0} with s ∈ Z_{≤0}; P(x, y) = 1 − Γ(x, y)/Γ(x) is the lower regularized gamma function, Γ(x, y) is the (upper) incomplete gamma function, and Γ(x) is the gamma function.
Remark 2.
Through simulation of the process, we were able to produce some verification of the results via the numerical examples presented in Figure 2 and Figure 3. For two sets of parameters of the process with R = 3, (λ, [p_1, p_2, p_3], [α, β], c, M_1, M, V), we generated 100 realizations of the process for each of a range of M_1 values and calculated the empirical probabilities P(μ_1 < (μ ∧ ν)) for each.
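For readers wishing to reproduce such an experiment, here is a minimal Monte Carlo sketch in the spirit of Remark 2 (the parameter values below are assumptions, not the ones behind Figures 2 and 3):

```python
import numpy as np

rng = np.random.default_rng(42)

lam, c = 0.5, 1.0                    # attack rate and constant inter-observation time (assumed)
p = np.array([0.4, 0.35, 0.25])      # distribution of nodes lost per strike, R = 3 (assumed)
alpha, beta = 2.0, 1.0               # Gamma(shape=alpha, rate=beta) weight per node (assumed)
M1, M, V = 6, 12, 40.0               # auxiliary and main thresholds (assumed)

def m1_observed_first():
    """Step through observation epochs; True iff M1 is observed crossed strictly before
    either main threshold (M for nodes, V for weight) is observed crossed."""
    N, W = 0, 0.0
    while True:
        for _ in range(rng.poisson(lam * c)):                 # attacks in one observation interval
            n = rng.choice([1, 2, 3], p=p)                    # nodes destroyed by this attack
            N += n
            W += rng.gamma(alpha, 1.0 / beta, size=n).sum()   # their total weight
        if N > M or W > V:        # main escape observed at this epoch (possibly jointly with M1)
            return False
        if N > M1:                # only the auxiliary threshold has been observed crossed
            return True

print("P(mu_1 < mu ^ nu) ~", np.mean([m1_observed_first() for _ in range(20_000)]))
```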
Some current work by White [106] goes further and considers large networks with nodes connected by edges, where the edges, rather than the nodes, have weights. Here, successive attacks take out a random number of nodes, which removes a random number of connected edges, each with a weight indicating its value to the network. The network enters a critical state if the losses of any of these three types accumulate beyond some pre-specified thresholds.
This is modeled through a Poisson random measure
η = (N, E, W) = Σ_{k≥1} (n_k, e_k, w_k) ε_{t_k},
where n_k nodes are incapacitated in the kth attack at time t_k, the jth node lost in the kth attack has e_{jk} incident edges that go down, and the ith edge from the jth node lost in the kth attack has weight w_{ijk}, so we have
e_k = Σ_{j=1}^{n_k} e_{jk},
w_k = Σ_{j=1}^{n_k} Σ_{i=1}^{e_{jk}} w_{ijk}.
This process is studied as above under delayed observation, so we consider the following random measure, similar to (26):
X Y Z T = Σ_{n=0}^∞ (X_n, Y_n, Z_n) ε_{τ_n},
which is a three-dimensional random walk with mutually dependent marks (X_n, Y_n, Z_n) : Ω → N × N × R_+, and we define
γ_0(u, v, w, h) = E[u^{N(τ_0)} v^{E(τ_0)} e^{−w W(τ_0) − h τ_0}],
γ(u, v, w, h) = E[u^{N(Δ_1)} v^{E(Δ_1)} e^{−w W(Δ_1) − h Δ_1}].
We will define
S_n = (N_n, E_n, W_n) = (Σ_{i=0}^n X_i, Σ_{i=0}^n Y_i, Σ_{i=0}^n Z_i).
Here, there are three thresholds, M_n, M_e, and M_w, for the losses of nodes, edges, and weights, respectively, and three corresponding exit indices ρ_n, ρ_e, ρ_w, with ρ = ρ_n ∧ ρ_e ∧ ρ_w. We seek the functional
Φ = Φ(ξ, u_0, u, v_0, v, w_0, w, h_0, h) = E[ξ^ρ u_0^{N_{ρ−1}} u^{N_ρ} v_0^{E_{ρ−1}} v^{E_ρ} e^{−w_0 W_{ρ−1} − w W_ρ} e^{−h_0 τ_{ρ−1} − h τ_ρ}],
which is much like the functional (29) from Theorem 2, except that it includes extra terms for the edge losses upon the exit, E_{ρ−1} and E_ρ, and also includes a term ξ^ρ corresponding to the probability-generating function of the number of observations before the network enters its critical state.
The functional has been derived through a procedure similar to Theorem 2 above, although it is a bit more difficult, since this problem is three-dimensional and the weight lost per attack is more complex. To accomplish this, we can use the operator
D_{xyz}(·)(p, q, r) = LC_z^{−1}( D_y^q( D_x^p(·) ) )(r).
Theorem 4.
The functional Φ satisfies the formula
Φ = D_{xyz}[ γ_0(u, v, w, h) − γ_0(ux, vy, w + z, h) + (ξ γ_0 / (1 − ξ γ)) (γ(u, v, w, h) − γ(ux, vy, w + z, h)) ](M_n, M_e, M_w),
where
γ_0 = γ_0(u_0 u x, v_0 v y, w_0 + w + z, h_0 + h),
γ = γ(u_0 u x, v_0 v y, w_0 + w + z, h_0 + h).
It is easy to draw many parallels between the functionals in Theorem 2 [65] and Theorem 4 [106], which is suggestive of some structure that can extend to higher dimensional results, as has recently been shown and will be outlined in Section 5 below.

4. Time Insensitive Random Walk and Applications

In summary, the random walk analysis surveyed in Section 2 and Section 3 is referred to as time insensitive analysis, and the associated random walk is time insensitive. See Agarwal, Dshalalow, and O'Regan [107], Agarwal and Dshalalow [108], Dshalalow [56,57], and Dshalalow and Liew [58,59,60]. It pertains to the fact that the random walk process we have analyzed so far cannot be associated with continuous time information, say S_t, giving us the status of the walk at any time t simultaneously with S_ν, t_ν, and other escape parameters within the interval [0, t]. Of course, we can pull out some probabilistic information about the location of t_ν, such as P(t_ν ≤ t) or P(t_ν ∈ B), and likewise P(S_ν ∈ R) or, for that matter, the finite-dimensional distributions of S_ν. However, this still falls short of the time-continuous information S_t on the walk, which is very important in control theory. It was very obvious that with our efforts to embed an auxiliary control level M_1 prior to the main escape, we make up for the lack of S_t. In the forthcoming sections we will address this issue and present time sensitive walks, which provide us with a bulk of additional information, but at a cost, because the insensitive analysis is tamer.
We return to this topic later, but for now we would like to present some applications of the insensitive walks beyond the cyberattack models in Section 3.
Consider the random walk
S = (A, P) = Σ_{k=0}^∞ (A_k, P_k) ε_{t_k},
valued in N^d × R^l, such that
(A_n, P_n) = (Σ_{k=0}^n A_k, Σ_{k=0}^n P_k)
is the position of the walker on the associated random grid. From this position, the walker jumps to position (A_{n+1}, P_{n+1}). Here,
A_k = (A_k^1, …, A_k^d),
P_k = (P_k^1, …, P_k^l),
while
A_n = (α_n^1, …, α_n^d) = (Σ_{k=0}^n A_k^1, …, Σ_{k=0}^n A_k^d),
P_n = (π_n^1, …, π_n^l) = (Σ_{k=0}^n P_k^1, …, Σ_{k=0}^n P_k^l).
The objective is to investigate the time and the position of the walker when its component A escapes from the rectangle A = ∏_{i=1}^d [0, R_i) ⊂ N_0^d. Often of main interest is the position of the walker in R^l. Component A of S is called active, while P is passive, implying that in our case only A is confined, while P is unrestricted. Note that, unlike under the previous assumptions in Section 2 and Section 3, the walk in R^l runs in all directions along a randomly generated grid. However, the escape is determined by the projection π : N^d × R^l → N^d of the position of the walker relative to set A.
In a nutshell, the escape coordinates in N^d × R^l are determined by the time and location of the active components A of the walker upon their first crossing out of A. Obviously, the exit from A occurs when at least one of the active entries α_n^j crosses the respective threshold R_j.
The exit index is defined as
ρ = inf{n : A_n ∈ A^c},
implying that t_ρ is the first passage time and S_ρ = (A_ρ, P_ρ) is the global location of the walker in N^d × R^l upon A's escape from set A.
Before we continue with more formalism, let us bring up a situation that led to the above model.
Example 3.
Suppose an agent decides whether or not to short an option on some stock S_1 he does not own. In the event he decides to short the option, he wonders whether he will have to acquire the stock, depending on the stock's chance of hitting the strike price. In this case, if the agent does not own the stock while it hits the strike price and the option holder exercises the contract, he will have to deliver the stock and thus buy it at the market price. This particular example does not demonstrate how to find the probability that the stock described by a random walk process will cross the threshold determined by the strike price, but it will give the functional of the first passage time when the stock drops for the first time or when its increment rises above some level M_1. This can be used as initial information for the stock to run its further path. This information can also be used whenever one decides whether or not to buy a volatile stock. Now, the prediction about the first drop or sharp increase can be refined by adjoining to stock S_1 yet another stock, say S_2, which has proved to be well correlated with S_1. Then, instead of scrutiny of stock S_1 alone, we can mix S_1 and S_2 to see whichever of the two will be the first to drop or rise. The prediction can be more accurate.
In a similar situation, suppose an owner of a stock portfolio is interested in extending or updating it with more stocks. Diversification is a common strategy. Another strategy is to mix a portfolio with longs and shorts. Suppose the agent wants to know whether to long or short just two stocks depending on what direction they may take. For instance, if a sharp drop occurs, it could be a signal of price decline; if a significant rise takes place without any economic reasons, it could be an indication of overpricing that would present another risk to the stock owner. In this case, the owner would also like to predict the moment either the first drop or a significant rise is to happen (which we will associate with the first passage or exit time). Then, if necessary, by shorting the two stocks, the agent would be able to minimize risk, optimize the respective portfolio's performance, and thus attain a higher return on his investment.
In the context of the above notation, π_n^1 and π_n^2 give the prices of the two stocks at the reference times t_n, n = 1, 2, …, so that P_n^1 and P_n^2 are the increments of the stock price changes over their respective subintervals (t_{n−1}, t_n], where t_0 = 0. We would like to emphasize that because the stock prices periodically change their directions, the named increments are real-valued r.v.'s, rather than positive, and thus the stock prices are not monotone, in contrast with the monotone components of Section 2 and Section 3.
Introduce four auxiliary active components
A_n^1 = 0 if P_n^1 ≥ 0, and A_n^1 = 1 if P_n^1 < 0;    (64)
A_n^2 = 0 if P_n^2 ≥ 0, and A_n^2 = 1 if P_n^2 < 0;    (65)
A_n^3 = 0 if P_n^1 < M_1, and A_n^3 = 1 if P_n^1 ≥ M_1;    (66)
A_n^4 = 0 if P_n^2 < M_2, and A_n^4 = 1 if P_n^2 ≥ M_2    (67)
that follow the evolution of the two questionable stocks' prices. Now, since the stock price changes are not monotone sequences, we associate them with passive components, whereas the auxiliary components of (64)–(67) are monotone.
Now, while stock S_i's price appreciates, A_k^i = 0, k = 1, 2, …, resulting in (α_n^1, α_n^2) = (0, 0), n = 1, 2, …, i = 1, 2. When at some t_n at least one of the two stock prices drops for the first time, we will see (α_n^1, α_n^2) = (0, 1), (1, 0), or (1, 1). The other two active components of A similarly watch the rising trends of the two stocks, as per (66) and (67).
Thus, setting the rectangle A = [0, R_1) × [0, R_2) × [0, R_3) × [0, R_4), we can predict the trend of the two stocks to appreciate or dive and suggest a longing or shorting strategy for acquiring a stock portfolio (see the simulation sketch below). For example, crossing R_1 or R_2 at t_ρ points to a change of direction of the respective stock prices from rising to dropping. On the other hand, crossing R_3 or R_4 points to a solid gain in prices, which suggests longing rather than shorting the stocks. A mixed trend speaks of unwanted volatility.
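A minimal sketch of how the auxiliary active components (64)–(67) are built from simulated price increments and how they trigger the exit (the Gaussian increments and the spike levels are assumptions for illustration only):

```python
import numpy as np

rng = np.random.default_rng(5)
M1, M2 = 3.0, 3.0      # assumed "spike" levels for the two stocks' increments

def exit_index(n_max=1000):
    """Accumulate the auxiliary components (64)-(67) over simulated increments and return
    the exit index rho together with the pattern of thresholds crossed at t_rho."""
    alpha = np.zeros(4, dtype=int)                 # cumulative active components alpha_n^1..4
    for n in range(1, n_max + 1):
        P1, P2 = rng.normal(0.5, 1.5, size=2)      # assumed increments of the two stock prices
        alpha += np.array([P1 < 0, P2 < 0, P1 >= M1, P2 >= M2], dtype=int)
        if alpha.any():                            # escape from [0,1)^4 as soon as any entry hits 1
            return n, alpha.copy()
    return n_max, alpha

rho, pattern = exit_index()
print("exit index rho =", rho, "; thresholds crossed (R1..R4):", pattern)
```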
Back to the general case, we introduce the functional
$\Phi_\rho = \Phi_\rho(\xi, u, v, \eta, \xi, \vartheta, \theta) = E\left[\xi^\rho\, u_1^{A_{\rho-1,1}} \cdots u_d^{A_{\rho-1,d}}\, v_1^{A_{\rho,1}} \cdots v_d^{A_{\rho,d}}\, e^{i \eta \cdot P_{\rho-1} + i \xi \cdot P_\rho - \vartheta t_{\rho-1} - \theta t_\rho}\right]$
such that $u = (u_1, \ldots, u_d)$, $v = (v_1, \ldots, v_d) \in \mathbb{C}^d$ with $u_i, v_i \in \bar{B}(0, 1)$ for $i = 1, \ldots, d$, $\eta, \xi \in \mathbb{R}^l$, and $\vartheta, \theta \in \mathbb{C}_+$ (i.e., $\operatorname{Re}(\vartheta) \ge 0$ and $\operatorname{Re}(\theta) \ge 0$).
This functional includes the familiar escape and pre-escape parameters. One of the utilities of the pre-escape parameters, at least in the context of stock prices or option trading, is to predict the highest stock price attained, say, before a drop, a second drop, or a sharp drop.
Next we have the multivariate version of the D -operator defined as
$\mathcal{D}_x^R = \mathcal{D}_{x_1}^{R_1} \circ \cdots \circ \mathcal{D}_{x_d}^{R_d}$
such that $\mathcal{D}_{x_i}^{R_i}\varphi(x) = \lim_{x_i \to 0} \frac{1}{R_i!}\, \frac{\partial^{R_i}}{\partial x_i^{R_i}}\left[\frac{1}{1 - x_i}\,\varphi(x)\right]$, where $R = (R_1, \ldots, R_d)$, $x = (x_1, \ldots, x_d) \in \mathbb{C}^d$, $x_i \in B(0, 1)$ for $i = 1, \ldots, d$, and $\varphi$ is analytic at $0$ with respect to each of $x_1, \ldots, x_d$.
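To make the $\mathcal{D}$ operator more tangible, here is a minimal symbolic sketch (our own illustration, not taken from the cited works) of the one-dimensional operator $\mathcal{D}_{x}^{R}$; the geometric pgf used as a test function is a hypothetical choice.

```python
# A minimal sketch (our illustration): evaluating the one-dimensional operator
#   D_x^R phi(x) = lim_{x->0} (1/R!) d^R/dx^R [ phi(x) / (1 - x) ]
# symbolically with sympy.  For a pgf phi(x) = E[x^X], D_x^R phi returns the
# partial sum of the first R+1 coefficients, i.e. P(X <= R).
import sympy as sp

x = sp.symbols('x')

def D_operator(phi, R):
    """Apply D_x^R to the expression phi(x)."""
    expr = sp.diff(phi / (1 - x), x, R) / sp.factorial(R)
    return sp.limit(expr, x, 0)

# Hypothetical test function: pgf of a geometric(p) random variable on {1, 2, ...}
p = sp.Rational(1, 3)
phi = p * x / (1 - (1 - p) * x)

R = 4
print(D_operator(phi, R))                                   # symbolic value of D_x^R phi
print(sum(p * (1 - p)**(k - 1) for k in range(1, R + 1)))   # P(X <= R) for comparison
```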
Theorem 5.
The functional Φ ρ satisfies the following formula.
$\Phi_\rho = \mathcal{D}_x^{R - \mathbf{1}}\left[\gamma_0(v; \xi, \theta) - \gamma_0(xv; \xi, \theta) + \gamma_0(xuv; \eta + \xi, \vartheta + \theta)\, \frac{\xi\left(\gamma(v; \xi, \theta) - \gamma(xv; \xi, \theta)\right)}{1 - \xi\, \gamma(xuv; \eta + \xi, \vartheta + \theta)}\right]$
where
$\gamma_0(u; \eta, \vartheta) = E\left[u_1^{A_{01}} \cdots u_d^{A_{0d}}\, e^{i \eta \cdot P_0 - \vartheta t_0}\right]$
$\gamma(u; \eta, \vartheta) = E\left[u_1^{A_{11}} \cdots u_d^{A_{1d}}\, e^{i \eta \cdot P_1 - \vartheta (t_1 - t_0)}\right]$
and $uv$ denotes the Hadamard (componentwise) product of the vectors $u$ and $v$.
We note that the functionals $\gamma$ and $\gamma_0$ are assumed to be known or obtainable. We now revisit Example 3 on stock trading.
Example 4.
In the context of Example 3, consider a special case where an agent observes two stocks with constant initial prices $P_{01}, P_{02}$, under the assumption that $t_0 = 0$. Because $P_{01}$ and $P_{02}$ are positive, $A_{01} = A_{02} = 0$. It also makes sense to assume $P_{01} < M_1$ and $P_{02} < M_2$, thereby setting $A_{03} = A_{04} = 0$ as well. Thus the factor $u_1^{A_{01}} \cdots u_4^{A_{04}} = 1$, so
$\gamma_0(1; \eta, 0) = e^{i(\eta_1 P_{01} + \eta_2 P_{02})} = g_0(\eta)$
is a fixed constant.
Next, the agent wants to predict the first instant $t_\rho$ at which at least one of four events takes place: the price of $S_1$ or $S_2$ drops for the first time after appreciating, or one of the increments $P = (P_1, P_2)$ spikes above $M_1$ or $M_2$, respectively. If any of these events occurs at time $t_\rho$, it turns $P_{\rho 1}$ or $P_{\rho 2}$ negative, and thus $A_{\rho 1}$ or $A_{\rho 2} = 1$, or $P_{\rho 1} \ge M_1$ or $P_{\rho 2} \ge M_2$, and thus $A_{\rho 3}$ or $A_{\rho 4} = 1$. Therefore, $A = [0, 1)^4$ and $R = (1, 1, 1, 1)$.
We need to figure out
$\gamma(u; \eta, \vartheta) = E\left[u_1^{A_{11}} \cdots u_4^{A_{14}}\, e^{i \eta \cdot P_1 - \vartheta t_1}\right] = E\left[e^{i \eta \cdot P_1 - \vartheta t_1}\,\mathbf{1}_{\{0 \le P_1 < M\}}\right] + E\left[u_1\, e^{i \eta \cdot P_1 - \vartheta t_1}\,\mathbf{1}_{\{P_{11} < 0,\; 0 \le P_{12} < M_2\}}\right] + E\left[u_2\, e^{i \eta \cdot P_1 - \vartheta t_1}\,\mathbf{1}_{\{0 \le P_{11} < M_1,\; P_{12} < 0\}}\right] + E\left[u_3 u_4\, e^{i \eta \cdot P_1 - \vartheta t_1}\,\mathbf{1}_{\{P_1 \ge M\}}\right] + E\left[u_4\, e^{i \eta \cdot P_1 - \vartheta t_1}\,\mathbf{1}_{\{0 \le P_{11} < M_1,\; P_{12} \ge M_2\}}\right] + E\left[u_1 u_4\, e^{i \eta \cdot P_1 - \vartheta t_1}\,\mathbf{1}_{\{P_{11} < 0,\; P_{12} \ge M_2\}}\right] + E\left[u_2 u_3\, e^{i \eta \cdot P_1 - \vartheta t_1}\,\mathbf{1}_{\{P_{11} \ge M_1,\; P_{12} < 0\}}\right] + E\left[u_1 u_2\, e^{i \eta \cdot P_1 - \vartheta t_1}\,\mathbf{1}_{\{P_1 < 0\}}\right] + E\left[u_3\, e^{i \eta \cdot P_1 - \vartheta t_1}\,\mathbf{1}_{\{P_{11} \ge M_1,\; 0 \le P_{12} < M_2\}}\right]$
where $M = (M_1, M_2)$. This can be calculated depending on the choice of distributions of $P$ and $t_1$. (Under position independent marking, that is, assuming $P$ and $t_1$ are independent, the computation is straightforward.) Now, applying Theorem 5, we have
$\Phi_\rho = \mathcal{D}_x^{0}\left[\gamma_0(v; \xi, \theta) - \gamma_0(xv; \xi, \theta) + \gamma_0(xuv; \eta + \xi, \vartheta + \theta)\, \frac{\xi\left(\gamma(v; \xi, \theta) - \gamma(xv; \xi, \theta)\right)}{1 - \xi\, \gamma(xuv; \eta + \xi, \vartheta + \theta)}\right] = g_0(\eta + \xi)\, \frac{\xi\left(\gamma(v; \xi, \theta) - \gamma(0; \xi, \theta)\right)}{1 - \xi\, \gamma(0; \eta + \xi, \vartheta + \theta)}$
For example, the most explicit functional is the marginal distribution of the exit index $\rho$ (the predicted observation number from $1, 2, \ldots$ at which the above-mentioned events take place). Thus, the pgf of $\rho$ reads
$E\left[\xi^\rho\right] = \Phi_\rho(\xi, 1, 1, 0, 0, 0, 0) = \frac{\xi\left(\gamma(1; 0, 0) - \gamma(0; 0, 0)\right)}{1 - \xi\, \gamma(0; 0, 0)} = \frac{\xi(1 - a)}{1 - \xi a},$
where
$a = \gamma(0; 0, 0) = P\{0 \le P_1 < M\}.$
In particular, the mean of ρ is
$E[\rho] = \frac{1}{1 - a}.$
Next, the marginal LST of t ρ 1 is
$\Phi_\rho(1, 1, 1, 0, 0, \vartheta, 0) = E\left[e^{-\vartheta t_{\rho-1}}\right] = \frac{1 - \gamma(0; 0, 0)}{1 - \gamma(0; 0, \vartheta)} = \frac{1 - a}{1 - a_\vartheta},$
where
$a_\vartheta = E\left[e^{-\vartheta t_1}\,\mathbf{1}_{\{0 \le P_1 < M\}}\right] = a\, E\left[e^{-\vartheta t_1}\right] = a\, q(\vartheta)$
(the last equality holding under position independent marking).
Thus, under the position independent marking assumption (that is, the price variations are independent of the time increments), the marginal LST of the last reference time before at least one of the two stocks drops or spikes is
$E\left[e^{-\vartheta t_{\rho-1}}\right] = \frac{1 - a}{1 - a\, q(\vartheta)},$
implying that the mean time prior to one of these events is
$E\left[t_{\rho-1}\right] = \frac{a\, E[t_1]}{1 - a}.$
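As a quick sanity check of the last two formulas, the following Monte Carlo sketch (our own illustration) uses hypothetical normal increments and exponential reference-time spacings; the distributions, independence of the two stocks, and parameter values are assumptions made only for this illustration.

```python
# A minimal Monte Carlo sketch (our illustration) of Example 4: the increment
# and reference-time distributions below are hypothetical choices, used only to
# check E[rho] = 1/(1-a) and E[t_{rho-1}] = a E[t_1] / (1-a) numerically.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
M1, M2 = 3.0, 3.0          # spike thresholds for the two price increments
mu, sigma = 1.0, 1.0       # increments ~ N(mu, sigma^2), independent for the two stocks
lam = 2.0                  # reference times: iid exponential(lam) spacings, t_0 = 0

# a = P(0 <= P_11 < M1, 0 <= P_12 < M2) under independence of the two stocks
a = (norm.cdf(M1, mu, sigma) - norm.cdf(0, mu, sigma)) * \
    (norm.cdf(M2, mu, sigma) - norm.cdf(0, mu, sigma))

n_paths, rhos, pre_exit_times = 100_000, [], []
for _ in range(n_paths):
    t, n = 0.0, 0
    while True:
        n += 1
        t_prev, t = t, t + rng.exponential(1 / lam)
        P = rng.normal(mu, sigma, size=2)
        if P[0] < 0 or P[1] < 0 or P[0] >= M1 or P[1] >= M2:   # a drop or a spike
            rhos.append(n)
            pre_exit_times.append(t_prev)
            break

print("E[rho]       sim:", np.mean(rhos),           "formula:", 1 / (1 - a))
print("E[t_{rho-1}] sim:", np.mean(pre_exit_times), "formula:", a * (1 / lam) / (1 - a))
```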

5. Higher Dimensional Random Walks

Consider a random measure $S = \sum_{n=0}^{\infty} x_n\, \varepsilon_{\tau_n}$, where $x_n = (X_n, Y_n) \in \mathbb{R}_+^k \times \mathbb{R}^m$, and the corresponding delayed renewal process
$S_n = (A_n, P_n) = \left(\sum_{i=0}^{n} X_i,\; \sum_{i=0}^{n} Y_i\right),$
where $A_n = (A_{n1}, \ldots, A_{nk})$ and $P_n = (P_{n1}, \ldots, P_{nm})$. Unlike any model considered previously, some components of the jumps, the passive components $Y_n$, are permitted to be negative.
Given the rectangle $A = [0, L_1] \times \cdots \times [0, L_k] \times \mathbb{R}^m$, where $L = (L_1, \ldots, L_k) \in \mathbb{R}_+^k$, we are interested in the escape parameters upon the walker's exit from the set $A$. Namely,
$\nu_i = \inf\{n \ge 0 : A_{ni} > L_i\}$
are the exit indices, and we focus on the first exit index
$\rho = \inf\{\nu_i : i = 1, \ldots, k\} = \inf\{n \ge 0 : S_n \notin A\}.$
As seen in a prior section, Dshalalow and Liew [59,60] derived a functional containing the pre-exit and post-exit non-negative active components A ρ 1 and A ρ as well as real-valued passive components P ρ 1 and P ρ ,
$\Phi_\rho = \Phi_\rho(\xi, \alpha, \phi, \beta, \psi) = E\left[\xi^\rho\, e^{-\alpha \cdot A_{\rho-1}}\, e^{i \phi \cdot P_{\rho-1}}\, e^{-\beta \cdot A_\rho}\, e^{i \psi \cdot P_\rho}\right],$
where $\xi \in \bar{B}(0, 1)$, each component of $\alpha, \beta \in \mathbb{C}^k$ has a non-negative real part, and $\phi, \psi \in \mathbb{C}^m$.
A new work by White [109] departs from the works above to derive a general formula for the probability of an arbitrary weak ordering of threshold crossings, a question of practical interest in numerous applications outlined above related to stochastic network defense, queueing theory, finance, and actuarial sciences.
In particular, a weak ordering of the exit indices ν 1 , ..., ν k is a member of the set
$\mathcal{W} = \left\{\nu_{p(1)} \preceq \nu_{p(2)} \preceq \cdots \preceq \nu_{p(k)} : p \text{ is a permutation of } 1, \ldots, k\right\},$
where each ⪯ is fixed to be either = or <. Without loss of generality, each W W may be represented as
$W = \left\{\nu_1 = \cdots = \nu_{s_1} < \nu_{s_1 + 1} = \cdots = \nu_{s_2} < \cdots < \nu_{s_{n-1} + 1} = \cdots = \nu_{s_n}\right\},$
keeping in mind that some permutation may be applied to the indices of the $\nu$'s. The proofs in Dshalalow [40] and Dshalalow and Liew [59,60], among others, partition the sample space according to the weak orderings in $\mathcal{W}$ and derive functionals of the form
$\Phi_W = \Phi_W(\xi, \alpha, \phi, \beta, \psi) = E\left[\xi^\rho\, e^{-\alpha \cdot A_{\rho-1}}\, e^{i \phi \cdot P_{\rho-1}}\, e^{-\beta \cdot A_\rho}\, e^{i \psi \cdot P_\rho}\,\mathbf{1}_W\right]$
for each weak ordering W W separately for k 4 before adding them to find Φ ρ . Note that a special case of Φ W is
$\Phi_W(1, 0, 0, 0, 0) = E\left[\mathbf{1}_W\right] = P(W),$
the probability of the weak ordering W occurring.
This could be done manually for k 5 in principle, but it is impractical. W contains all permutations of { ν 1 , , ν k } with strict inequalities, i.e., where threshold crossings occur at distinct times, which is already k ! elements. Further, W also contains many other weak orderings because arbitrary subsets of the threshold crossings may occur upon the same jump of the process. It turns out the cardinalities of W for dimension k correspond to the Fubini numbers (or ordered Bell numbers): 1, 3, 13, 75, 541, 4683, 47,293, ... Deriving 75 Φ W functionals for dimension four was feasible but took quite a lot of effort. Some experimentation has shown that moving up to seven dimensions on a similar problem is time-consuming but feasible with an automated procedure with a consumer-grade computer, and a few more dimensions should work on more substantial hardware, but it soon becomes infeasible regardless of computational resources.
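For the record, the Fubini numbers quoted above are easy to generate; the following small sketch (ours, for illustration only) counts the weak orderings via the standard ordered Bell recurrence.

```python
# A quick sketch (our illustration) counting the weak orderings in W for
# dimension k via the ordered Bell (Fubini) recurrence
#   a(0) = 1,  a(n) = sum_{j=1}^{n} C(n, j) * a(n - j),
# which reproduces the cardinalities 1, 3, 13, 75, 541, ... quoted above.
from math import comb

def fubini(k):
    a = [1] + [0] * k
    for n in range(1, k + 1):
        a[n] = sum(comb(n, j) * a[n - j] for j in range(1, n + 1))
    return a[k]

print([fubini(k) for k in range(1, 8)])   # [1, 3, 13, 75, 541, 4683, 47293]
```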
The main result of White [109] takes an alternate path and generalizes the derivation of an arbitrary P W in any finite k dimensions in its proof. Before we formulate the result, let’s define the composition of k operators as
$\mathcal{H}_\chi(\cdot) = \mathcal{H}_{\chi_1}^{L_1} \circ \cdots \circ \mathcal{H}_{\chi_k}^{L_k}(\cdot),$
where
$\mathcal{H}_{x_k}(\cdot) = \begin{cases} \mathcal{D}_{x_k}(\cdot), & \text{if component } k \text{ is discrete in } \mathbb{N} \\ \mathcal{LC}_{x_k}^{-1}(\cdot), & \text{if component } k \text{ is continuous in } \mathbb{R}_+ \end{cases}$
This allows us to have a unified notation for the operator while permitting components of the process to be discrete, continuous, or mixed.
Given this, we formulate the result.
Theorem 6.
If each component of $S_n$ is continuous and each vector $\chi_{T_j}$ contains at least one component with a positive real part, then for each $W \in \mathcal{W}$,
$P(W) = \mathcal{H}_\chi\!\left(\prod_{j=1}^{n} \frac{1}{1 - \Gamma_{T_j}} \sum_{l=0}^{r_j} (-1)^l \sum_{J \subseteq S_j,\; |J| - 1 = l} \Gamma_{T_j \setminus J}\right),$
where $\chi_B = (\chi_{1B}, \ldots, \chi_{kB})$ with $\chi_{jB} = \mathbf{1}_B(j)\,\chi_j$ for $B \subseteq \{1, \ldots, k\}$,
$\gamma(\chi) = E\left[e^{-\chi \cdot X_1}\right],$
$\Gamma_B = \gamma(\chi_B),$
$r_j = s_j - s_{j-1} + 1$, and $S_j = \{n \in \mathbb{N} : s_{j-1} < n \le s_j\}$.
For convenience, the result above was formulated under the assumption that the components of $S_n$ are continuous-valued. However, if any component of the process is discrete, the definition of $\mathcal{H}_\chi$ implies the appropriate change of the corresponding individual components to $\mathcal{D}$ operators. The only other necessary change is to replace $x_m$ with $-\ln(x_m)$ in the input to the appropriate $\gamma$ terms in order to convert the Laplace–Stieltjes transform $E\left[e^{-x_m X}\right]$ into a probability-generating function $E\left[x_m^X\right]$.
In all, this result gives the probability of each weak ordering of threshold crossings, whether the components are continuous, discrete, or mixed. The expression sits under a composition of $k$ operators, which at first glance may seem merely to push the problem to another impasse, but the examples below demonstrate that it is a practical result that agrees with empirical experiments in special cases.
Example 5.
Recall the stochastic network problems in two dimensions addressed in Theorem 2 above. In the context of this section’s models, suppose
$S_n = A_n = \left(\sum_{i=0}^{n} X_i,\; \sum_{i=0}^{n} \sum_{j=1}^{X_i} Y_{ij}\right),$
where each $X_i$ represents the i.i.d. size of the batch of nodes incapacitated by the $i$th attack. The nodes lost in the $i$th attack have i.i.d. random weights $Y_{i1}, \ldots, Y_{iX_i}$ representing their value to the overall health of the network.
An interesting question is the probability that the node loss crosses its critical threshold before, after, or simultaneously with the critical weight loss. If one ordering is clearly more likely, it provides a path for decisions that improve the reliability of the network, for example, whether efforts should be made to shield nodes to reduce node losses or to decentralize the value within the network to reduce weight losses.
If we assume the node batches $X_i$ are geometrically distributed with parameter $p$ and the node weights $Y_{ij}$ are exponential with parameter $\mu$, the following probabilities can be computed explicitly by simplifying Theorem 6 and applying the appropriate $\mathcal{H}_\chi$ operator.
$P(\nu_1 < \nu_2) = P(M_1 - 1,\, \mu M_2) - e^{-p \mu M_2}\, (1 - p)^{M_1 - 1}\, P\!\left(M_1 - 1,\, (1 - p)\mu M_2\right)$
$P(\nu_1 > \nu_2) = Q(M_1 - 1,\, \mu M_2) - (1 - p)^{M_1 - 1}\, e^{-\frac{p \mu M_2}{1 - p}}\, Q\!\left(M_1 - 1,\, \frac{\mu M_2}{1 - p}\right),$
with $P(\nu_1 = \nu_2) = 1 - P(\nu_1 < \nu_2) - P(\nu_1 > \nu_2)$, where $Q(M, y) = \frac{\Gamma(M, y)}{\Gamma(M)}$ is the upper regularized gamma function and $P(M, y) = 1 - Q(M, y)$ is the lower regularized gamma function.
These probabilities are compared to empirical probabilities computed from a simulated special case with $\mu = 1$, $p = \frac{1}{2}$, and varying values of the thresholds $M_1$ and $M_2$ in the diagrams below.
In Figure 4, predicted results above are plotted as solid curves and empirical probabilities from simulations of 10,000 paths of the process are computed and plotted as dots, which show strong agreement.
We see a sigmoid pattern for the probability of $\nu_1 < \nu_2$ when $M_1$ is fixed and $M_2$ grows. This is very intuitive once we note that $\nu_1 < \nu_2$ means $M_1$ is crossed before $M_2$ and that the means of the node and weight jumps from an attack are equal in this special case. A small $M_2$ is crossed first with high probability, resulting in a low probability of the converse event $\nu_1 < \nu_2$; at $M_1 = M_2$, the probability is approximately 0.5; and a large $M_2$ is rarely crossed first, giving a large probability that $\nu_1 < \nu_2$.
For P ( ν 1 = ν 2 ) , we see similarly intuitive results.
Here, in Figure 5 again, simulated and predicted results strongly agree, as does intuition: note that the peak occurs when M 1 = M 2 , which should be when a simultaneous crossing is most common in this case where the mean of the jumps in each dimension is equal.
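A simulation of the kind used for these comparisons can be organized along the following lines; this is a minimal sketch of our own, in which the strict crossing convention and the parameter values are assumptions made only for illustration.

```python
# A minimal simulation sketch (our illustration) of Example 5: geometric(p)
# batches of incapacitated nodes with exponential(mu) weights, estimating the
# probabilities of the three orderings of the two threshold crossings.  The
# strict crossing convention A_n > M is an assumption of this sketch.
import numpy as np

rng = np.random.default_rng(1)
p, mu = 0.5, 1.0
M1, M2 = 10, 10
n_paths = 10_000

counts = {"<": 0, "=": 0, ">": 0}
for _ in range(n_paths):
    nodes = weight = 0.0
    nu1 = nu2 = None
    n = 0
    while nu1 is None or nu2 is None:
        n += 1
        X = rng.geometric(p)                           # batch size of the n-th attack
        nodes += X
        weight += rng.exponential(1 / mu, size=X).sum()
        if nu1 is None and nodes > M1:
            nu1 = n
        if nu2 is None and weight > M2:
            nu2 = n
    counts["<" if nu1 < nu2 else "=" if nu1 == nu2 else ">"] += 1

for key, c in counts.items():
    print(f"P(nu1 {key} nu2) ~ {c / n_paths:.3f}")
```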
While we were able to simply compute the probabilities to arbitrary precision with numerical approximations in the example above, this is not always possible, especially at higher dimensions, but the next example demonstrates an alternate path to practical results.
Example 6.
Suppose the jumps of the process X i are made up of three independent exponential random variables, X i 1 , X i 2 , X i 3 , with parameters μ 1 , μ 2 , and μ 3 , respectively. In a three-dimensional problem, W is made up of 13 weak orders of four types:
$\nu_1 < \nu_2 < \nu_3, \quad \nu_1 < \nu_3 < \nu_2, \quad \nu_2 < \nu_1 < \nu_3, \quad \nu_2 < \nu_3 < \nu_1, \quad \nu_3 < \nu_1 < \nu_2, \quad \nu_3 < \nu_2 < \nu_1$
$\nu_1 = \nu_2 < \nu_3, \quad \nu_1 = \nu_3 < \nu_2, \quad \nu_2 = \nu_3 < \nu_1$
$\nu_1 < \nu_2 = \nu_3, \quad \nu_2 < \nu_1 = \nu_3, \quad \nu_3 < \nu_1 = \nu_2$
$\nu_1 = \nu_2 = \nu_3$
Since this example has jumps with independent components, it is enough to compute the four probabilities in the first column and simply apply a permutation to the results and adjust the parameters accordingly to get the others in the same line. We refer the reader to [109] to see the full results, but we reproduce the first one for the sake of discussion. We find
$P(\nu_1 < \nu_2 < \nu_3) = \left[1 - e^{-\mu_2 M_2}\left(1 + \sqrt{\mu_1 \mu_2 M_2}\int_0^{M_1}\frac{e^{-\mu_1 \tau}}{\sqrt{\tau}}\, I_1\!\left(2\sqrt{\mu_1 \mu_2 M_2 \tau}\right)d\tau\right)\right] e^{-\mu_3 M_3}\left(1 + \sqrt{\mu_2 \mu_3 M_3}\int_0^{M_2}\frac{e^{-\mu_2 \tau}}{\sqrt{\tau}}\, I_1\!\left(2\sqrt{\mu_2 \mu_3 M_3 \tau}\right)d\tau\right) + e^{-\mu_2 M_2 - \mu_3 M_3}\,\mathcal{L}_{x_1}^{-1}\!\left[\frac{1}{x_1}\int_0^{M_2}\frac{e^{-\frac{\mu_1 \mu_2 M_2 \tau}{\mu_1 + x_1}}}{\sqrt{\tau}}\, I_1\!\left(2\sqrt{\frac{\mu_1 \mu_2 \mu_3 M_3 \tau}{\mu_1 + x_1}}\right)d\tau\right]\!(M_1)$
where $I_1$ is the modified Bessel function of the first kind. The formulas derived for the remaining probabilities have similar expressions, each made up of a term involving an inverse Laplace transform of an integral involving Bessel functions, plus terms that are less difficult to compute.
The expression above is not quite explicit since some expressions are under integrals and one portion remains under a Laplace transform. The Bessel functions can be computed numerically to high precision quickly and the integrals turn out to be quite easy to approximate to high accuracy with standard numerical integration techniques.
The inverse Laplace transform poses some less widely understood challenges, but it turns out that it can reliably be inverted numerically in this instance using the fixed Talbot algorithm [110], which applies trapezoidal numerical integration along a specially deformed contour, following the framework and best practices developed by Abate and Whitt [111].
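For the reader who wishes to experiment, a bare-bones version of the fixed Talbot scheme can be written as follows; this is a sketch assuming the standard formulation of [110] (as summarized in the framework of [111]), not the exact implementation used in [109].

```python
# A minimal sketch (our illustration) of the fixed Talbot algorithm:
# with r = 2M/(5t) and theta_k = k*pi/M,
#   f(t) ~ (r/M) [ (1/2) e^{r t} F(r)
#                  + sum_{k=1}^{M-1} Re( e^{t s_k} F(s_k) (1 + i sigma_k) ) ],
# where s_k = r*theta_k*(cot(theta_k) + i) and
# sigma_k = theta_k + (theta_k*cot(theta_k) - 1)*cot(theta_k).
import numpy as np

def talbot_inverse(F, t, M=32):
    """Numerically invert the Laplace transform F(s) at time t > 0."""
    r = 2 * M / (5 * t)
    total = 0.5 * np.exp(r * t) * F(r)
    for k in range(1, M):
        theta = k * np.pi / M
        cot = np.cos(theta) / np.sin(theta)
        s = r * theta * (cot + 1j)
        sigma = theta + (theta * cot - 1) * cot
        total += (np.exp(t * s) * F(s) * (1 + 1j * sigma)).real
    return (r / M) * total

# Sanity check on a known pair: F(s) = 1/(s + 1)  <-->  f(t) = exp(-t)
for t in (0.5, 1.0, 2.0):
    print(t, talbot_inverse(lambda s: 1 / (s + 1), t), np.exp(-t))
```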
A total of 500 choices of parameters $(\mu_1, \mu_2, \mu_3, M_1, M_2, M_3)$ were sampled uniformly from the region $[0.5, 3]^3 \times [10, 40]^3$. For each vector of parameters, 100,000 realizations of the corresponding processes were simulated and empirical probabilities were computed. The numerical scheme for at least one of the 14 probabilities failed to converge properly in 11 of the 500 cases, but the maximum error on any of the 14 probabilities in the remaining 489 cases was 0.004, indicating success with numerical inversion.
These two examples demonstrate the result of Theorem 6 is versatile and can be computed explicitly or at least in a form that can be numerically approximated to high precision in numerous interesting special cases.
The probabilities of [109] are unique in this area of study, but what about the full functional Φ ρ in m + k dimensions? This is the focus of some current work by White [112], which has recently confirmed the conjecture made by Dshalalow and Liew [59,60] that their result applies for arbitrarily many active components k for a functional that may be considered in the simplest sense as
$\Phi_\rho = \Phi_\rho(u) = E\left[e^{-u \cdot A_\rho}\right],$
where the continuous jumps have a common joint LST $\gamma(u) = E\left[e^{-u \cdot X_1}\right]$. In the spirit of Theorem 6, it has been shown that the following holds.
Theorem 7.
If at least one component of $u$ has a positive real part, then for each $W \in \mathcal{W}$ for which the permutation $p$ is the identity,
$\Phi_W(u) = E\left[e^{-u \cdot A_\rho}\,\mathbf{1}_W\right] = \mathcal{H}_\chi\!\left(\frac{1}{1 - \gamma} \sum_{l=0}^{r_1} (-1)^l \sum_{J \subseteq S_1,\; |J| - 1 = l} \gamma_{T_2 \setminus J}\; \prod_{j=2}^{n} \frac{1}{1 - \Gamma_{T_j}} \sum_{l=0}^{r_j} (-1)^l \sum_{J \subseteq S_j,\; |J| - 1 = l} \Gamma_{T_j \setminus J}\right)$
where $\gamma = \gamma(u + \chi)$, $\gamma_B = \gamma(u + \chi_B)$, and $\Gamma_B = \gamma(\chi_B)$ for $B \subseteq \{1, \ldots, k\}$.
The result readily extends to the situation where p is any permutation, hence giving us an expression for Φ W for any weak ordering W W .
The proof is a somewhat straightforward extension of the proof of Theorem 6, but the much larger challenge addressed in [112] is to find $\Phi_\rho$ by summing the $\Phi_W$ terms over all weak orderings $W \in \mathcal{W}$,
$\Phi_\rho(u) = \sum_{W \in \mathcal{W}} \Phi_W(u).$
Very recently, this problem has been solved in [112] by exploiting an interesting recursive pattern in the way the expressions $\Phi_W$ simplify when added together. The result is formulated below.
Theorem 8.
If at least one component of u has a positive real part, then
$\Phi_\rho(u) = E\left[e^{-u \cdot A_\rho}\right] = \mathcal{H}_\chi\!\left(\frac{\gamma(u) - \gamma}{1 - \gamma}\right).$
This result has a remarkably simple form, and an expression analogous to $\frac{\gamma(u) - \gamma}{1 - \gamma}$ is common to many of the other results herein once the pre-exit terms, passive components, and $\xi^\rho$ terms are omitted from the functional. As such, this theorem unifies most of the time insensitive functionals above and confirms the conjecture of Dshalalow and Liew [59,60] about a model with $k$ active components.
Of course, many embellishments are possible, such as adding the pre-exit terms, passive components, and ξ ρ terms to the functional to seek a fuller functional
$\Phi_\rho(\xi, \alpha, \phi, \beta, \psi) = E\left[\xi^\rho\, e^{-\alpha \cdot A_{\rho-1}}\, e^{i \phi \cdot P_{\rho-1}}\, e^{-\beta \cdot A_\rho}\, e^{i \psi \cdot P_\rho}\right].$
This is a simple extension of Theorem 8, which will appear in [112].

6. Time Sensitive Analysis of Random Walks

Several models outlined above, particularly those studied by the authors in [65,66] and described in Section 3, consider a process running in real time with jumps at times $t_1, t_2, \ldots$, which can be observed only upon an independent delayed renewal process $\tau_0, \tau_1, \ldots$ rather than in real time. In this case, the exit of the process from a $k$-dimensional rectangular region was studied through the pre-exit and post-exit observation times, but access to the real exit was unavailable. This approach introduces some insurmountable uncertainty depending on the crudeness of the observation process $\{\tau_n\}$.
In a sequence of papers, Dshalalow and his collaborators [50,68,69,73,74,75,76,113] pursue methods referred to as time sensitive analysis, which offer a more precise look into the intermediate period between the pre-exit and post-exit observation times, during which the real exit actually occurs, in order to glean further insights about the process upon its exit.
The simplest case of this approach is the study of a one-dimensional discrete random walk by the authors in 2016 [113], where we have a random measure $S = \sum_{n=0}^{\infty} a_n\, \varepsilon_{t_n}$, in which $a_n : \Omega \to \mathbb{Z}_+$ are independent and identically distributed non-negative random variables, and we study the associated continuous-time marked Poisson process with rate $\lambda$,
$S(t) = \sum_{n=0}^{\infty} a_n\, \varepsilon_{t_n}([0, t]),$
where each a n has a common probability-generating function g ( z ) = E z a n . S ( t ) is referred to as the real time stochastic process. The time insensitive methods rely upon study of the process S ( t ) through its observed values
S n = S ( τ n ) ,
where the point process $\tau_0, \tau_1, \ldots$ is a delayed renewal process representing the observation times of $S(t)$. As a delayed renewal process, the inter-observation times $\Delta_0 = \tau_0$ and $\Delta_n = \tau_n - \tau_{n-1}$, $n \in \mathbb{N}$, are independent, and those for $n \ge 1$ are identically distributed. Denote their Laplace–Stieltjes transforms by $L_0(\theta) = E\left[e^{-\theta \tau_0}\right]$ and $L(\theta) = E\left[e^{-\theta \Delta_1}\right]$, each with $\operatorname{Re}(\theta) \ge 0$.
Then, the increments of the Poisson process between observations satisfy
$\gamma_0(z, \theta) = E\left[z^{S(\tau_0)}\, e^{-\theta \tau_0}\right] = L_0\!\left(\theta + \lambda - \lambda g(z)\right),$
$\gamma(z, \theta) = E\left[z^{S(\Delta_1)}\, e^{-\theta \Delta_1}\right] = L\!\left(\theta + \lambda - \lambda g(z)\right),$
which we assume to be known or readily obtainable.
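The identity $\gamma(z, \theta) = L(\theta + \lambda - \lambda g(z))$ is easy to confirm numerically; the following Monte Carlo sketch (our own illustration) uses hypothetical exponential inter-observation times and geometric marks, which are assumptions made only for this check.

```python
# A small Monte Carlo sketch (our illustration) checking the identity
#   gamma(z, theta) = E[ z^{S(Delta_1)} e^{-theta Delta_1} ] = L(theta + lam - lam*g(z))
# for a marked Poisson process observed over an independent exponential(mu)
# inter-observation time; the geometric marks are a hypothetical choice.
import numpy as np

rng = np.random.default_rng(2)
lam, mu, a_geo = 3.0, 1.5, 0.4        # jump rate, observation rate, mark parameter
z, theta = 0.7, 0.9

def g(z):                              # pgf of geometric(a_geo) marks on {1, 2, ...}
    return a_geo * z / (1 - (1 - a_geo) * z)

def L(s):                              # LST of exponential(mu) inter-observation times
    return mu / (mu + s)

n = 100_000
Delta = rng.exponential(1 / mu, n)                     # inter-observation times
N = rng.poisson(lam * Delta)                           # number of jumps in each window
S = np.array([rng.geometric(a_geo, k).sum() if k else 0 for k in N])
print("simulated:", np.mean(z**S * np.exp(-theta * Delta)))
print("formula  :", L(theta + lam - lam * g(z)))
```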
Given the interval $A = [0, M)$, where $M \in \mathbb{Z}_+$, we are interested in the index of the first observed exit of the process from the set $A$, $\nu = \inf\{n \ge 0 : S_n \notin A\}$. With the one-dimensional time insensitive analysis outlined in Section 2, a functional of the form
$\Phi_\nu(u, v, \vartheta, \theta) = E\left[u^{S_{\nu-1}}\, v^{S_\nu}\, e^{-\vartheta t_{\nu-1} - \theta t_\nu}\right]$
was derived. In contrast, one-dimensional time sensitive analysis focuses on targets of the form
$\Phi_{\nu 1}(t, u, v, \vartheta, \theta, y) = E\left[u^{S_{\nu-1}}\, v^{S_\nu}\, e^{-\vartheta \tau_{\nu-1} - \theta \Delta_\nu}\, y^{S(t)}\,\mathbf{1}_{\{t < \tau_{\nu-1}\}}\right],$
$\Phi_{\nu 2}(t, u, v, \vartheta, \theta, y) = E\left[u^{S_{\nu-1}}\, v^{S_\nu}\, e^{-\vartheta \tau_{\nu-1} - \theta \Delta_\nu}\, y^{S(t)}\,\mathbf{1}_{\{\tau_{\nu-1} \le t < \tau_\nu\}}\right]$
of the value of S upon the observations immediately before and after the real time crossing S ( τ ν 1 ) and S ( τ ν ) , the real time value of the process S ( t ) , and the times of the observations immediately before and after the crossing τ ν 1 and τ ν themselves. Notice that each functional deals with t placed within a particular random time interval, either before τ ν 1 or between τ ν 1 and τ ν .
In [113], the authors derived formulas for each of these two functionals under a Laplace transform, which are reproduced below.
Theorem 9.
The joint functional Φ ν 1 ( t , u , v , ϑ , θ , y ) of the process S ( t ) on the interval [ 0 , τ ν 1 ) satisfies
$\Phi_{\nu 1}(t, u, v, \vartheta, \theta, y) = \mathcal{L}_x^{-1}\!\left[\mathcal{D}_s^{M-1}\!\left(\frac{\gamma(v, \theta) - \gamma(vs, \theta)}{x + \lambda g(uvs) - \lambda g(uvys)} \times \left(\frac{\gamma_0(uvs, \vartheta)}{1 - \gamma(uvs, \vartheta)} - \frac{\gamma_0(uvys, x + \vartheta)}{1 - \gamma(uvys, x + \vartheta)}\right)\right)\right](t)$
Theorem 10.
The joint functional $\Phi_{\nu 2}(t, u, v, \vartheta, \theta, y)$ of the process $S(t)$ on the interval $[\tau_{\nu-1}, \tau_\nu)$ satisfies
$\Phi_{\nu 2}(t, u, v, \vartheta, \theta, y) = \mathcal{L}_x^{-1}\!\Bigg[\mathcal{D}_s^{M-1}\!\Bigg(\frac{\gamma_0(v, \theta) - \gamma_0(vy, x + \theta)}{x + \lambda g(v) - \lambda g(vy)} - \frac{\gamma_0(vs, \theta) - \gamma_0(vys, x + \theta)}{x + \lambda g(vs) - \lambda g(vys)} + \frac{\gamma_0(uvys, x + \theta)}{1 - \gamma(uvys, x + \theta)}\left[\frac{\gamma(v, \theta) - \gamma(vy, x + \theta)}{x + \lambda g(v) - \lambda g(vy)} - \frac{\gamma(vs, \theta) - \gamma(vys, x + \theta)}{x + \lambda g(vs) - \lambda g(vys)}\right]\Bigg)\Bigg](t)$
The results are each under a Laplace transform, so it is necessary to evaluate an additional inverse operator to extract probabilistic results from these expressions, but they provide a path to deeper insights than the time insensitive analysis, although they are a bit more challenging to derive, as we see in the following example.
Example 7.
To derive practical results, we need merely to specify some details about the real time process and the delayed renewal process of observation times and then apply the transforms. We will make the following assumptions.
  • The jump times t 1 , t 2 , form a Poisson point process of rate λ.
  • Inter-observation times $\Delta_n$ are exponentially distributed with parameter $\mu$, so their LST is $L(z) = \frac{\mu}{\mu + z}$.
  • The marks of the real time process are geometrically distributed with parameter $a$, so their PGF is $g(z) = \frac{az}{1 - bz}$, where $b = 1 - a$.
  • The initial functional γ 0 = 1 (i.e., zero initial state and time).
It turns out that in such special cases, time sensitive analysis can be used to derive explicit formulas for joint distributions of random quantities associated with the exit. For example, to find the joint probability mass and distribution function of the exit position of the process and pre-exit observation time, P { S ν = r , τ ν 1 > t } , one can find
$\Phi_{\nu 1}(t, 1, v, 0, 0, 1) = E\left[v^{S_\nu}\,\mathbf{1}_{\{t < \tau_{\nu-1}\}}\right]$
explicitly by applying the inverse Laplace transform and D operator before using properties of probability generating functions to find the function in question as follows.
Proposition 1.
Under Assumptions 1–4,
$P\{S_\nu = r,\, \tau_{\nu-1} > t\} = \frac{\mu}{\lambda}\, R_0(r) + \frac{a \mu}{\mu + \lambda} \sum_{j=1}^{M-1} R_j(r) - \frac{\mu}{\lambda + \mu}\left(G_0 R_0(r) + \sum_{j=1}^{M-1}\left(G_j - H_{j-1}\right) R_j(r)\right) - \frac{\mu}{\lambda}\left[\sum_{j=0}^{M-1}\sum_{i=0}^{M-1-j} c^i\,\mathbf{1}_{\{r = i + j\}} - (b + c)\sum_{j=0}^{M-2}\sum_{i=0}^{M-2-j} c^i\,\mathbf{1}_{\{r = i + j + 1\}} + b c \sum_{j=0}^{M-3}\sum_{i=0}^{M-3-j} c^i\,\mathbf{1}_{\{r = i + j + 2\}}\right] + \frac{\mu}{\lambda + \mu}\left[\sum_{j=0}^{M-1} G_j \sum_{i=0}^{M-1-j} c^i\,\mathbf{1}_{\{r = i + j\}} - \sum_{j=0}^{M-2}\left(b G_j + H_j\right)\sum_{i=0}^{M-2-j} c^i\,\mathbf{1}_{\{r = i + j + 1\}} + b \sum_{j=0}^{M-3} H_j \sum_{i=0}^{M-3-j} c^i\,\mathbf{1}_{\{r = i + j + 2\}}\right],$
where $c = \frac{b\mu + \lambda}{\mu + \lambda}$,
$R_j(r) = \begin{cases} 0, & r < j \\ 1, & r = j \\ (c - b)\, c^{r - j - 1}, & r > j \end{cases}$
$G_j = b^j \sum_{k=0}^{j} \binom{j}{k}\left(\frac{a}{b}\right)^k\left(P(k, \lambda t) + \frac{\mu}{\lambda}\, P(k + 1, \lambda t)\right)$
$H_j = b^{j+1} \sum_{k=0}^{j} \binom{j}{k}\left(\frac{a}{b}\right)^k\left(P(k, \lambda t) + \left(\frac{\mu}{\lambda} + \frac{a}{b}\right) P(k + 1, \lambda t)\right)$
and $P(k, \lambda t) = 1 - \frac{\Gamma(k, \lambda t)}{\Gamma(k)}$ is the lower regularized gamma function.
While this expression may seem rather large, it is simply a linear combination of terms of the form $G_j$, $H_j$, $R_j(r)$, and some constants associated with the process, and it is easy to compute efficiently, as the lower regularized gamma function can be evaluated quickly to arbitrary precision with common numerical computing tools.
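Under Assumptions 1–4, the joint probability in Proposition 1 can also be estimated directly by simulation, which offers a convenient cross-check; the sketch below is our own illustration, with the exit set taken to be $A = [0, M)$ and arbitrary parameter values as assumptions.

```python
# A simulation sketch (our illustration) of Example 7 under Assumptions 1-4:
# Poisson(lam) jump epochs, geometric(a) marks, exponential(mu) observation
# spacings with tau_0 = 0, and exit from A = [0, M).  It estimates
# P{S_nu = r, tau_{nu-1} > t} empirically; the parameter values are arbitrary.
import numpy as np

rng = np.random.default_rng(3)
lam, mu, a, M = 1.0, 2.0, 0.5, 5
t_fixed, r_target = 1.0, 6
n_paths, hits = 100_000, 0

for _ in range(n_paths):
    tau, tau_prev, S = 0.0, 0.0, 0
    while S < M:                          # observe until the walk is first seen outside A = [0, M)
        tau_prev = tau
        tau += rng.exponential(1 / mu)    # next observation epoch (Assumption 2)
        k = rng.poisson(lam * (tau - tau_prev))          # jumps since the last observation
        S += rng.geometric(a, k).sum() if k else 0       # geometric marks (Assumption 3)
    if S == r_target and tau_prev > t_fixed:
        hits += 1

print("P{S_nu = r, tau_{nu-1} > t} ~", hits / n_paths)
```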
This was just one example of a result that time sensitive analysis permits. It clearly could work for any pair of the time and position random variables upon the exit represented in Φ ν 1 and Φ ν 2 , noting in particular,
$\Phi_{\nu 1} + \Phi_{\nu 2} = E\left[u^{S_{\nu-1}}\, v^{S_\nu}\, e^{-\vartheta \tau_{\nu-1} - \theta \Delta_\nu}\, y^{S(t)}\,\mathbf{1}_{\{t < \tau_\nu\}}\right]$
allows one to use the post-exit observation τ ν instead of the pre-exit observation, if preferred.
Some later work by the authors in 2019 [76] pursued time sensitive analysis for a problem extended in several directions: (1) some interesting results are derived for general processes with independent and stationary increments (ISI); (2) instead of one dimension, it assumes $a$ active components and $b$ passive components for a process in $\mathbb{R}_+^a \times \mathbb{R}^b$; and (3) instead of just two times of interest (previously, the pre-exit and post-exit observations), the position of the process and the time at any finite number of such random times are included in the functional.
The general results are worth mentioning as they have some interesting implications beyond the scope of this work. Suppose $\{S(t) : t \ge 0\}$ is a continuous-time ISI stochastic process defined on a filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t), P)$. Let $T = \{T_0, T_1, \ldots\}$ be a point process in $\mathbb{R}_+$ with $T_n = T_{n-1} + \delta_n$, where each $\delta_n$ is non-negative and independent of the prior time increments $\delta_0, \delta_1, \ldots, \delta_{n-1}$. In this setting, the following interesting result is established regarding the functional
$F_n(t, v_0, v_1, \ldots, v_m, y, \theta) = E\left[e^{\sum_{j=0}^{m}\left(i v_j \cdot S(T_j) - \theta_j \delta_j\right)}\, e^{i y \cdot S(t)}\,\mathbf{1}_{[T_{n-1}, T_n)}(t)\right]$
for each $n = 1, \ldots, m$, assuming $v_j, y \in \mathbb{R}^{a+b}$ and $\theta = (\theta_0, \ldots, \theta_m) \in \mathbb{C}_+^{m+1}$, where we denote $\mathbb{C}_+ = \{z \in \mathbb{C} : \operatorname{Re}(z) \ge 0\}$. This is a joint characteristic function of the ISI process $S(T_j)$ at each time in $T$ with $j \le m$, each corresponding random time increment $\delta_n$, and the real time value of the process $S(t)$, restricted to times in $[T_{n-1}, T_n)$.
Denote the Laplace transform of F n ( t , v 0 , v 1 , , v m , y , θ ) as
$F_n^*(x, v_0, v_1, \ldots, v_m, y, \theta) = \mathcal{L}_t\left[F_n(t, v_0, v_1, \ldots, v_m, y, \theta)\right](x).$
Theorem 11.
For an independent and stationary increments process $S(t)$ on the trace σ-algebra $\mathcal{F} \cap \{T_{n-1} \le t \le T_n\}$, where $T$ is independent of $\mathcal{F}_t$, the functional $F_n$ satisfies
$F_n^*(x, v_0, v_1, \ldots, v_m, y, \theta) = \prod_{j=0}^{n-1} \phi_j(b_j + y,\, \theta_j + x)\; E\left[e^{-\theta_n \delta_n}\, \psi(b_n + y, b_n, \delta_n, x)\right] \prod_{j=n+1}^{m} \phi_j(b_j, \theta_j)$
under the notation $b_j = \sum_{i=j}^{m} v_i$,
$\phi_j(b, \theta) = E\left[e^{i b \cdot S(\delta_j) - \theta \delta_j}\right],$
$\varphi(b, t) = E\left[e^{i b \cdot S(t)}\right],$
$\psi(b + y, b, r, x) = \left(e^{-x(\cdot)}\,\varphi(b + y, \cdot)\right) * \varphi(b, \cdot)\,(r),$ where $*$ denotes convolution.
This result gives a formula for a joint functional at some $m$ independent random times of interest of a very general stochastic process $S(t)$. If the process is assumed to be a collection of marked Poisson processes, it simplifies considerably, as follows.
Corollary 1.
If $S(t)$ is made up of $d = a + b$ parallel marked Poisson processes with rates $\lambda_1, \ldots, \lambda_d$ and $T$ is independent of $\mathcal{F}_t$, then on the trace σ-algebra $\mathcal{F} \cap \{T_{n-1} \le t \le T_n\}$, the functional $F_n$ satisfies
$F_n^*(x, v_0, v_1, \ldots, v_m, y, \theta) = \prod_{j=0}^{n-1} \phi_j\!\left(x + \theta_j + \lambda \cdot G(b_j + y)\right)\; \frac{\phi_n\!\left(\theta_n + \lambda \cdot G(b_n)\right) - \phi_n\!\left(x + \theta_n + \lambda \cdot G(b_n + y)\right)}{x + \lambda \cdot\left(G(b_n + y) - G(b_n)\right)} \times \prod_{j=n+1}^{m} \phi_j\!\left(\theta_j + \lambda \cdot G(b_j)\right),$
where $\phi_j(\theta) = E\left[e^{-\theta \delta_j}\right]$, $g_j(b) = E\left[e^{i b X_{mj}}\right]$ is the common characteristic function of the marks of the $j$th component, and $G(b) = \left(1 - g_1(b_1), \ldots, 1 - g_d(b_d)\right)$.
Suppose next the process has two active components, a = 2 , and we will make some additional assumptions to turn the process S ( t ) into a random walk and review the related results from [76].
Consider the random measure
$S = \sum_{n=0}^{\infty}\left(a_{n1}, a_{n2}, p_n\right)\varepsilon_{t_n},$
where the jumps $(a_{n1}, a_{n2}, p_n) : \Omega \to \mathbb{R}_+ \times \mathbb{R}_+ \times \mathbb{R}^b$ are independent and identically distributed random vectors (with non-negative active components $a_{n1}, a_{n2}$), and we study the stochastic process
$S(t) = \sum_{n=0}^{\infty}\left(a_{n1}, a_{n2}, p_n\right)\varepsilon_{t_n}([0, t]),$
where each jump has a common joint transform G ( z ) . The time insensitive methods rely upon study of the process S ( t ) through its observed values
$S_n = S(\tau_n) = \sum_{j=0}^{n} x_j,$
where the point process $\tau_0, \tau_1, \ldots$ is a delayed renewal process representing the observation times of $S(t)$, with initial LST $L_0(\theta) = E\left[e^{-\theta \tau_0}\right]$ and common LST $L(\theta) = E\left[e^{-\theta \Delta_1}\right]$ for the inter-observation times. We can represent the joint transforms of $x_n$ as
$\gamma_0(v, \vartheta) = E\left[e^{i v \cdot x_0 - \vartheta \tau_0}\right]$
$\gamma(v, \vartheta) = E\left[e^{i v \cdot x_1 - \vartheta \Delta_1}\right]$
for n 1 , which we assume to be known or readily obtainable.
Given the rectangular cylinder $A = [0, M_1] \times [0, M_2] \times \mathbb{R}^b$, where $M_1, M_2 \in \mathbb{R}_+$, we are interested in the index of the first observed exit of the process from the set $A$, $\nu = \inf\{n \ge 0 : S_n \notin A\}$, and we will target the time sensitive functionals
$\Phi_{\nu 1}(t, u, v, \theta_0, \theta, y) = E\left[e^{i u \cdot S_{\nu-1} + i v \cdot S_\nu - \theta_0 \tau_{\nu-1} - \theta \Delta_\nu + i y \cdot S(t)}\,\mathbf{1}_{[0, \tau_{\nu-1})}(t)\right]$
$\Phi_{\nu 2}(t, u, v, \theta_0, \theta, y) = E\left[e^{i u \cdot S_{\nu-1} + i v \cdot S_\nu - \theta_0 \tau_{\nu-1} - \theta \Delta_\nu + i y \cdot S(t)}\,\mathbf{1}_{[\tau_{\nu-1}, \tau_\nu)}(t)\right]$
of the positions of the process upon the pre-exit and post-exit observations, the position at the real time t, and the pre-exit and post-exit times themselves, restricted to the random time intervals [ 0 , τ ν 1 ) before the pre-passage times and [ τ ν 1 , τ ν ) between the pre-exit and post-exit times.
Through a stochastic summation over a conveniently chosen partition of the sample space, Corollary 1 can be used to derive these functionals in the case where the components of the process are marked Poisson processes.
Theorem 12.
Let $S(t)$ be the constant interpolation of the process embedded in a process made up of $2 + b$ parallel marked Poisson processes of rates $\lambda_1, \ldots, \lambda_{2+b}$, where the two active components are discrete, continuous, or mixed. For the process on the trace σ-algebra $\mathcal{F} \cap \{t < \tau_{\nu-1}\}$, the joint functional $\Phi_{\nu 1}(t, u, v, \theta_0, \theta, y)$ satisfies
$\mathcal{L}_t\!\left[\Phi_{\nu 1}(t, u, v, \theta_0, \theta, y)\right]\!(x) = \mathcal{H}_s\Bigg(\frac{\gamma_0(u + v + s + y,\, x + \theta_0)}{x + \lambda \cdot\left(G(u + v + s) - G(u + v + s + y)\right)}\left(\gamma(v, \theta) - \gamma(v + s, \theta)\right) \times \left(\frac{1}{1 - \gamma(u + v + s, \theta_0)} - \frac{1}{1 - \gamma_0(u + v + s + y,\, x + \theta_0)}\right) + \frac{\gamma_0(u + v + s, \theta_0) - \gamma_0(u + v + s + y,\, x + \theta_0)}{x + \lambda \cdot\left(G(u + v + s) - G(u + v + s + y)\right)}\cdot\frac{\gamma(v, \theta) - \gamma(v + s, \theta)}{1 - \gamma(u + v + s, \theta_0)}\Bigg)(M).$
Theorem 13.
Let $S(t)$ be the constant interpolation of the process embedded in a process made up of $2 + b$ parallel marked Poisson processes of rates $\lambda_1, \ldots, \lambda_{2+b}$, where the two active components are discrete, continuous, or mixed. For the process on the trace σ-algebra $\mathcal{F} \cap \{\tau_{\nu-1} \le t < \tau_\nu\}$, the joint functional $\Phi_{\nu 2}(t, u, v, \theta_0, \theta, y)$ satisfies
$\mathcal{L}_t\!\left[\Phi_{\nu 2}(t, u, v, \theta_0, \theta, y)\right]\!(x) = \mathcal{H}_s\Bigg(\frac{\gamma_0(v + y,\, x + \theta) - \gamma_0(v, \theta)}{x + \lambda \cdot\left(G(v) - G(v + y)\right)} - \frac{\gamma_0(v + s + y,\, x + \theta) - \gamma_0(v + s, \theta)}{x + \lambda \cdot\left(G(v + s) - G(v + s + y)\right)} + \frac{\gamma_0(u + v + s + y,\, x + \theta_0)}{1 - \gamma(u + v + s + y,\, x + \theta_0)} \times \left[\frac{\gamma(v + y,\, x + \theta) - \gamma(v, \theta)}{x + \lambda \cdot\left(G(v) - G(v + y)\right)} - \frac{\gamma(v + s + y,\, x + \theta) - \gamma(v + s, \theta)}{x + \lambda \cdot\left(G(v + s) - G(v + s + y)\right)}\right]\Bigg)(M).$
The expression of Theorem 13 [76] is clearly very similar to the one-dimensional time sensitive result of Theorem 10 [113], but this one happens to be for a continuous problem. Indeed, this expression is actually of a similar form to the time insensitive functionals of Theorem 1, Theorem 2 [65], and Theorem 4 [106], a common thread running throughout much of the work discussed in this article.

Author Contributions

The authors contributed equally to the work, but the nature of the contributions is summarized next. Conceptualization, J.H.D. and R.T.W.; methodology, J.H.D. and R.T.W.; software, R.T.W.; validation, R.T.W.; formal analysis, J.H.D. and R.T.W.; writing—original draft preparation, J.H.D. and R.T.W.; writing—review and editing, J.H.D. and R.T.W.; visualization, R.T.W.; supervision, J.H.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors want to give many thanks to anonymous referees whose insightful remarks and suggestions led to a largely improved version of our paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pearson, K. The Problem of the Random Walk. Nature 1905, 72, 294. [Google Scholar] [CrossRef]
  2. Takács, L. On fluctuations of sums of random variables. In Studies in Probability and Ergodic Theory. Advances in Mathematics. Supplementary Studies; Rota, G.C., Ed.; Academic Press: New York, NY, USA, 1978; Volume 2, pp. 45–93. [Google Scholar]
  3. Unver, I.; Tundzh, Y.S.; Ibaev, E. Laplace-Stieltjes transform of the distribution of the first moment of crossing the level a (a > 0) by a semi-Markovian random walk with positive drift and negative jumps. Autom. Control. Comput. Sci. 2014, 48, 144–149. [Google Scholar] [CrossRef]
  4. Andersen, E.S. On the fluctuations of sums of random variables. Math. Scand. 1953, 1, 263. [Google Scholar] [CrossRef] [Green Version]
  5. Andersen, E.S. On the fluctuations of sums of random variables II. Math. Scand. 1954, 2, 194. [Google Scholar] [CrossRef] [Green Version]
  6. Rayleigh, J.W.S. The Problem of the Random Walk. Nature 1905, 72, 318. [Google Scholar] [CrossRef] [Green Version]
  7. Takács, L. Random walk on a finite group. Acta Sci. Math. 1983, 45, 395–408. [Google Scholar]
  8. Takacs, C. Biased random walks on directed trees. Probab. Theory Relat. Fields 1983, 111, 123–139. [Google Scholar] [CrossRef]
  9. Dshalalow, J.H.; Syski, R. Lajos Takács and his work. J. Appl. Math. Stoch. Anal. 1994, 7, 215–237. [Google Scholar] [CrossRef] [Green Version]
  10. Van den Berg, M. Exit and Return of a Simple Random Walk. Potential Anal. 2005, 23, 45–53. [Google Scholar] [CrossRef]
  11. Mogul’skiĭ, A.A. Local limit theorem for the first crossing time of a fixed level by a random walk. Sib. Adv. Math. 2010, 20, 191–200. [Google Scholar] [CrossRef]
  12. Becker, M.; König, W. Moments and Distribution of the Local Times of a Transient Random Walk on ℤd. J. Theor. Probab. 2008. [Google Scholar] [CrossRef]
  13. Csáki, E.; Földes, A.; Révész, P. Maximal Local Time of a d-dimensional Simple Random Walk on Subsets. J. Theor. Probab. 2005, 18, 687–717. [Google Scholar] [CrossRef] [Green Version]
  14. Gluck, D. First hitting times for some random walks on finite groups. J. Theor. Probab. 1999, 12, 739–755. [Google Scholar] [CrossRef]
  15. Fayolle, G.; Iasnogorodski, R.; Malyshev, V. Random Walks in the Quarter Plane; Springer: Berlin/Heidelberg, Germany, 2017. [Google Scholar] [CrossRef]
  16. Hildebrand, M. A survey of results on random random walks on finite groups. Probab. Surv. 2005, 2. [Google Scholar] [CrossRef]
  17. Montroll, E.W.; Weiss, G.H. Random Walks on Lattices. II. J. Math. Phys. 1965, 6, 167–181. [Google Scholar] [CrossRef]
  18. Kutner, R.; Masoliver, J. The continuous time random walk, still trendy: Fifty-year history, state of art and outlook. Eur. Phys. J. B 2017, 90. [Google Scholar] [CrossRef] [Green Version]
  19. Scalas, E. The application of continuous-time random walks in finance and economics. Phys. A Stat. Mech. Its Appl. 2006, 362, 225–239. [Google Scholar] [CrossRef]
  20. Balakrishnan, V.; Khantha, M. First passage time and escape time distributions for continuous time random walks. Pramana 1983, 21, 187–200. [Google Scholar] [CrossRef]
  21. Blanchard, P.; Volchenkov, D. Random Walks and Diffusions on Graphs and Databases; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar] [CrossRef] [Green Version]
  22. Brémaud, P. Discrete Probability Models and Methods; Springer: Berlin/Heidelberg, Germany, 2017. [Google Scholar] [CrossRef]
  23. Fujie, F.; Zhang, P. Covering Walks in Graphs; Springer: New York, NY, USA, 2014. [Google Scholar] [CrossRef]
  24. Sarkar, P.; Moore, A.W. Random Walks in Social Networks and their Applications: A Survey. In Social Network Data Analytics; Springer: New York, NY, USA, 2011; Chapter 3. [Google Scholar] [CrossRef]
  25. Shi, Z. Branching Random Walks; Springer: Berlin/Heidelberg, Germany, 2015. [Google Scholar] [CrossRef]
  26. Telcs, A. Random Walks on graphs, electric networks and fractals. Probab. Theory Relat. Fields 1989, 82, 435–449. [Google Scholar] [CrossRef]
  27. Abolnikov, L.; Dshalalow, J.H. Ergodicity conditions and invariant probability measure for an embedded Markov chain in a controlled bulk queueing system with a bilevel service delay discipline part I. Appl. Math. Lett. 1992, 5, 25–27. [Google Scholar] [CrossRef] [Green Version]
  28. Abolnikov, L.; Dshalalow, J.H. A first passage problem and its applications to the analysis of a class of stochastic models. J. Appl. Math. Stoch. Anal. 1992, 5, 83–97. [Google Scholar] [CrossRef] [Green Version]
  29. Abolnikov, L.; Dshalalow, J.H. On a multilevel controlled bulk queueing system MX/Gr,R/1. J. Appl. Math. Stoch. Anal. 1992, 5, 237–260. [Google Scholar] [CrossRef] [Green Version]
  30. Abolnikov, L.M.; Dshalalow, J.H. Semi-regenerative analysis of controlled bulk queueing systems with a bilevel service delay discipline and some ergodic theorems. Comput. Math. Appl. 1993, 25, 107–116. [Google Scholar] [CrossRef] [Green Version]
  31. Abolnikov, L.; Agarwal, R.P.; Dshalalow, J.H. Random walk analysis of parallel queueing stations. Math. Comput. Model. 2008, 47, 452–468. [Google Scholar] [CrossRef]
  32. Abolnikov, L.M.; Dshalalow, J.H.; Dukhovny, A.M. On stochastic processes in a multilevel control bulk queueing system. Stoch. Anal. Appl. 1992, 10, 155–179. [Google Scholar] [CrossRef]
  33. Abolnikov, L.M.; Dshalalow, J.H.; Dukhovny, A.M. Stochastic analysis of a controlled bulk queueing system with continuously operating server: Continuous time parameter queueing process. Stat. Probab. Lett. 1993, 16, 121–128. [Google Scholar] [CrossRef]
  34. Abolnikov, L.M.; Dshalalow, J.H.; Dukhovny, A.M. A multilevel control bulk queueing system with vacationing server. Oper. Res. Lett. 1993, 13, 183–188. [Google Scholar] [CrossRef]
  35. Dshalalow, J.H. On a first passage problem in general queueing systems with multiple vacations. J. Appl. Math. Stoch. Anal. 1992, 5, 177–192. [Google Scholar] [CrossRef]
  36. Dshalalow, J.H.; Tadj, L. On applications of first excess level random processes to queueing systems with random server capacity and capacity dependent service time. Stochastics Stoch. Rep. 1993, 45, 45–60. [Google Scholar] [CrossRef]
  37. Dshalalow, J.H. First excess levels of vector processes. J. Appl. Math. Stoch. Anal. 1994, 7, 457–464. [Google Scholar] [CrossRef] [Green Version]
  38. Dshalalow, J. First excess level analysis of random processes in a class of stochastic servicing systems with global control. Stoch. Anal. Appl. 1994, 12, 75–101. [Google Scholar] [CrossRef]
  39. Dshalalow, J. Excess level processes in queueing. In Advances in Queueing; Dshalalow, J., Ed.; CRC Press: Boca Raton, FL, USA, 1995; pp. 243–262. [Google Scholar]
  40. Dshalalow, J. On the level crossing of multi-dimensional delayed renewal processes. J. Appl. Math. Stoch. Anal. 1997, 10, 355–361. [Google Scholar] [CrossRef] [Green Version]
  41. Dshalalow, J.H.; Motir, R. Random Walk Processes in a Bilevel (M-N)-Policy Queue with Multiple Vacations. Qual. Technol. Quant. Manag. 2011, 8, 303–332. [Google Scholar] [CrossRef]
  42. Dshalalow, J.H.; Russell, G. On a single-server queue with fixed accumulation level, state dependent service, and semi-Markov modulated input flow. Int. J. Math. Math. Sci. 1992, 15, 593–600. [Google Scholar] [CrossRef] [Green Version]
  43. Dshalalow, J.H.; Yellen, J. Bulk input queues with quorum and multiple vacations. Math. Probl. Eng. 1996, 2, 95–106. [Google Scholar] [CrossRef]
  44. Agarwal, R.; Dshalalow, J. New fluctuation analysis of D-policy bulk queues with multiple vacations. Math. Comput. Model. 2005, 41, 253–269. [Google Scholar] [CrossRef]
  45. Abolnikov, L.; Dshalalow, J.H.; Treerattrakoon, A. On a dual hybrid queueing system. Nonlinear Anal. Hybrid Syst. 2008, 2, 96–109. [Google Scholar] [CrossRef]
  46. Dshalalow, J.H. Queues with hysteretic control by vacation and post-vacation periods. Queueing Syst. 1998, 29, 231–268. [Google Scholar] [CrossRef]
  47. Dshalalow, J.; Dikong, E. On generalized hysteretic control queues with modulated input and state dependent service. Stoch. Anal. Appl. 1999, 17, 937–961. [Google Scholar] [CrossRef]
  48. Dikong, E.E.; Dshalalow, J.H. Bulk input queues with hysteretic control. Queueing Syst. 1999, 32, 287–304. [Google Scholar] [CrossRef]
  49. Dshalalow, J.; Kim, S.; Tadj, L. Hybrid queueing systems with hysteretic bilevel control policies. Nonlinear Anal. Theory Methods Appl. 2006, 65, 2153–2168. [Google Scholar] [CrossRef]
  50. Bacot, J.B.; Dshalalow, J. A bulk input queueing system with batch gated service and multiple vacation policy. Math. Comput. Model. 2001, 34, 873–886. [Google Scholar] [CrossRef]
  51. Dshalalow, J.H.; Merie, A. Fluctuation analysis in queues with several operational modes and priority customers. Top 2018, 26, 309–333. [Google Scholar] [CrossRef]
  52. Dshalalow, J.H.; Merie, A.; White, R.T. Fluctuation Analysis in Parallel Queues with Hysteretic Control. Methodol. Comput. Appl. Probab. 2019, 22, 295–327. [Google Scholar] [CrossRef]
  53. Dshalalow, J.H.; Huang, W. A stochastic games with a two-phase conflict. In Jubilee Volume: Legacy of the Legend, Professor V. Lakshmikantham; Cambridge Scientific Publishers: Cottenham, UK, 2009; pp. 201–209. [Google Scholar]
  54. Dshalalow, J.H.; Huang, W. Tandem antagonistic games. Nonlinear Anal. Ser. Theory Methods 2009, 71, 259–270. [Google Scholar]
  55. Dshalalow, J.H.; Huang, W. Sequential antagonistic games with an auxiliary initial phase. In Functional Equations, Difference Inequalities, and Ulam Stability Notions (F.U.N.); Nova Science Publishers: New York, NY, USA, 2010; Chapter 2; pp. 15–36. [Google Scholar]
  56. Dshalalow, J.H. Fluctuations of Recurrent Processes and Their Applications to the Stock Market. Stoch. Anal. Appl. 2004, 22, 67–79. [Google Scholar] [CrossRef]
  57. Dshalalow, J.H. On exit times of a multivariate random walk with some applications to finance. Nonlinear Anal. 2005, 63, 569–577. [Google Scholar] [CrossRef]
  58. Dshalalow, J.H.; Liew, A. Level crossings of an oscillating marked random walk. Comput. Math. Appl. 2006, 52, 917–932. [Google Scholar] [CrossRef] [Green Version]
  59. Dshalalow, J.; Liew, A. On fluctuations of a multivariate random walk with some applications to stock options trading and hedging. Math. Comput. Model. 2006, 44, 931–944. [Google Scholar] [CrossRef]
  60. Dshalalow, J.H.; Liew, A. On exit times of a multivariate random walk and its embedding in a quasi Poisson process. Stoch. Anal. Appl. 2006, 24, 451–474. [Google Scholar] [CrossRef]
  61. Dshalalow, J.H.; Iwezulu, K. Discrete versus continuous operational calculus in antagonistic stochastic games. São Paulo J. Math. Sci. 2017, 11, 471–489. [Google Scholar] [CrossRef]
  62. Dshalalow, J.H.; Ke, H.J. Layers of noncooperative games. Nonlinear Anal. Ser. Theory Methods 2009, 71, 283–291. [Google Scholar] [CrossRef]
  63. Dshalalow, J.H.; Ke, H.J. Multilayers in a modulated stochastic game. J. Math. Anal. Appl. 2009, 353, 553–565. [Google Scholar] [CrossRef] [Green Version]
  64. Dshalalow, J.H.; Treerattrakoon, A. Set-theoretic inequalities in stochastic noncooperative games with coalition. J. Inequalities Appl. 2008, 2008, 1–14. [Google Scholar] [CrossRef]
  65. Dshalalow, J.H.; White, R. On Reliability of Stochastic Networks. Neural Parallel Sci. Comput. 2013, 21, 141–160. [Google Scholar]
  66. Dshalalow, J.H.; White, R. On Strategic Defense in Stochastic Networks. Stoch. Anal. Appl. 2014, 32, 365–396. [Google Scholar] [CrossRef]
  67. Dshalalow, J.H. Time dependent analysis of multivariate marked renewal processes. J. Appl. Probab. 2001, 38, 707–721. [Google Scholar] [CrossRef]
  68. Agarwal, R.P.; Dshalalow, J.H.; O’Regan, D. Time sensitive functionals of marked Cox processes. J. Math. Anal. Appl. 2004, 293, 14–27. [Google Scholar] [CrossRef] [Green Version]
  69. Al-Matar, N.; Dshalalow, J.H. Time sensitive functionals in a queue with sequential maintenance. Stoch. Model. 2011, 27, 687–704. [Google Scholar] [CrossRef]
  70. Dshalalow, J.H. Random Walk Analysis in Antagonistic Stochastic Games. Stoch. Anal. Appl. 2008, 26, 738–783. [Google Scholar] [CrossRef]
  71. Dshalalow, J.H. On multivariate antagonistic marked point processes. Math. Comput. Model. 2009, 49, 432–452. [Google Scholar] [CrossRef]
  72. Dshalalow, J.H.; Bacot, J.B. On functionals of a marked Poisson process observed by a renewal process. Int. J. Math. Math. Sci. 2001, 26, 427–436. [Google Scholar] [CrossRef] [Green Version]
  73. Dshalalow, J.H.; Nandyose, K.M. Continous time interpolation of monotone marked random measures with applications. Neural Parallel Sci. Comput. 2018, 26, 119–141. [Google Scholar] [CrossRef]
  74. Dshalalow, J.H.; Nandyose, K.M. Real time analysis of signed marked random measures with applications to finance and insurance. Nonlinear Dyn. Syst. Theory 2018, 19, 36–54. [Google Scholar]
  75. Dshalalow, J.H.; Nandyose, K.M.; White, R.T. Time dependent analysis of stochastic games of three players with applications. Math. Stat. 2021. pending minor revision. [Google Scholar]
  76. White, R.T.; Dshalalow, J.H. Characterizations of random walks on random lattices and their ramifications. Stoch. Anal. Appl. 2019, 38, 307–342. [Google Scholar] [CrossRef]
  77. Antal, T.; Redner, S. Escape of a Uniform Random Walk from an Interval. J. Stat. Phys. 2006, 123, 1129–1144. [Google Scholar] [CrossRef] [Green Version]
  78. Hughes, B.D. Random Walks and Random Environments; Clarendon Press Oxford University Press: Oxford, NY, USA, 1995. [Google Scholar]
  79. Redner, S. A Guide to First-Passage Processes; Cambridge University Press: Cambridge, UK, 2001. [Google Scholar] [CrossRef]
  80. Kyprianou, A.E.; Pistorius, M.R. Perpetual options and Canadization through fluctuation theory. Ann. Appl. Probab. 2003, 13, 1077–1098. [Google Scholar] [CrossRef] [Green Version]
  81. Muzy, J.; Delour, J.; Bacry, E. Modelling fluctuations of financial time series: From cascade process to stochastic volatility model. Eur. Phys. J. B 2000, 17, 537–548. [Google Scholar] [CrossRef] [Green Version]
  82. Uchaikin, V.V.; Gusarov, G.G. Analysis of the structure function for the spatial distribution of galaxies in the random-walk model. Russ. Phys. J. 1997, 40, 707–710. [Google Scholar] [CrossRef]
  83. Zhou, J.L.; Sun, Y.S.; Zhou, L.Y. Evidence for Lévy Random Walks in the Evolution of Comets from the Oort Cloud. Celest. Mech. Dyn. Astron. 2002, 84, 409–427. [Google Scholar] [CrossRef]
  84. Odagaki, T.; Kasuya, K. Alzheimer random walk. Eur. Phys. J. B 2017, 90. [Google Scholar] [CrossRef]
  85. Jabbari, B.; Zhou, Y.; Hillier, F.S. A decomposable random walk model for mobility in wireless communications. Telecommun. Syst. 2001, 16, 523–537. [Google Scholar] [CrossRef]
  86. Asmussen, S. Phase-Type Representations in Random Walk and Queueing Problems. Ann. Probab. 1992, 20, 772–789. [Google Scholar] [CrossRef]
  87. Bayer, N.; Boxma, O.J. Wiener-Hopf analysis of an M/G/1 queue with negative customers and of a related class of random walks. Queueing Syst. 1996, 23, 301–316. [Google Scholar] [CrossRef] [Green Version]
  88. Cohen, J. Random Walk with a Heavy-Tailed Jump Distribution. Queueing Syst. 2002, 40, 35–73. [Google Scholar] [CrossRef]
  89. Gannon, M.; Pechersky, E.; Suhov, Y.; Yambartsev, A. Random walks in a queueing network environment. J. Appl. Probab. 2016, 53, 448–462. [Google Scholar] [CrossRef] [Green Version]
  90. Guillemin, F.; van Leeuwaarden, J.S.H. Rare event asymptotics for a random walk in the quarter plane. Queueing Syst. 2010, 67, 1–32. [Google Scholar] [CrossRef] [Green Version]
  91. Janssen, A.; van Leeuwaarden, J. Spitzer’s identity for discrete random walks. Oper. Res. Lett. 2018, 46, 168–172. [Google Scholar] [CrossRef] [Green Version]
  92. Lemoine, A.J. On Random Walks and Stable GI/G/1 Queues. Math. Oper. Res. 1976, 1, 159–164. [Google Scholar] [CrossRef]
  93. Stadje, W. The embedded random walk in the stationary M/M/1 queue. Methodol. Comput. Appl. Probab. 2002, 4, 143–151. [Google Scholar] [CrossRef]
  94. Zorine, A.V. Study of a Service Process by a Loop Algorithm by Means of a Stopped Random Walk. In Information Technologies and Mathematical Modelling. Queueing Theory and Applications; Springer: Berlin/Heidelberg, Germany, 2019; pp. 121–135. [Google Scholar] [CrossRef]
  95. Bingham, N.H. Fluctuation theory in continuous time. Adv. Appl. Probab. 1975, 7, 705–766. [Google Scholar] [CrossRef]
  96. Bingham, N.H. Random walk and fluctuation theory. In Handbook of Statistics; Shanbhag, D.N., Rao, C.R., Eds.; Elsevier: Amsterdam, The Netherlands, 2001; Volume 19, pp. 171–213. [Google Scholar]
  97. Bladt, M.; Nielsen, B.F. Matrix-Exponential Distributions in Applied Probability; Springer: New York, NY, USA, 2017. [Google Scholar] [CrossRef]
  98. Foss, S.; Korshunov, D.; Zachary, S. An Introduction to Heavy-Tailed and Subexponential Distributions; Springer: New York, NY, USA, 2013. [Google Scholar] [CrossRef]
  99. Gut, A. Stopped Random Walks; Springer: New York, NY, USA, 2009. [Google Scholar] [CrossRef]
  100. Iksanov, A. Renewal Theory for Perturbed Random Walks and Similar Processes; Springer: Berlin/Heidelberg, Germany, 2016. [Google Scholar] [CrossRef]
  101. Lawler, G.F. Intersections of Random Walks; Springer: New York, NY, USA, 2013. [Google Scholar] [CrossRef]
  102. Slade, G. The Lace Expansion and Its Applications; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar] [CrossRef] [Green Version]
  103. Telcs, A. The Art of Random Walks; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar] [CrossRef]
  104. Wijesundera, I.; Halgamuge, M.N.; Nanayakkara, T.; Nirmalathas, T. Natural Disasters, When Will They Reach Me? Springer: Singapore, 2016. [Google Scholar] [CrossRef]
  105. Dshalalow, J.H. Single-server queues with controlled bulk service, random accumulation level, and modulated input. Stoch. Anal. Appl. 1993, 11, 29–41. [Google Scholar] [CrossRef]
  106. White, R.T. Reliability of networks under stochastic attacks. manuscript in progress.
  107. Agarwal, R.P.; Dshalalow, J.H.; O’Regan, D. Random observations of marked Cox processes. Time insensitive functionals. J. Math. Anal. Appl. 2004, 293, 1–13. [Google Scholar] [CrossRef] [Green Version]
  108. Agarwal, R.P.; Dshalalow, J.H. On multivariate delayed recurrent processes. Pan Am. Math. J. 2005, 15, 35–49. [Google Scholar]
  109. White, R.T. On the exiting patterns of sums of independent random vectors with an application to stochastic networks. 2021. submitted. [Google Scholar]
  110. Talbot, A. The Accurate Numerical Inversion of Laplace Transforms. IMA J. Appl. Math. 1979, 23, 97–120. [Google Scholar] [CrossRef]
  111. Abate, J.; Whitt, W. A Unified Framework for Numerically Inverting Laplace Transforms. Informs J. Comput. 2006, 18, 408–421. [Google Scholar] [CrossRef]
  112. White, R.T. On exits and overshoots of dependent jump processes. manuscript in progress.
  113. Dshalalow, J.H.; White, R.T. Time sensitive analysis of independent and stationary increment processes. J. Math. Anal. Appl. 2016, 443, 817–833. [Google Scholar] [CrossRef] [Green Version]
Figure 1. The process confined to $\{\mu_1 < \min\{\mu, \nu\}\}$.
Figure 2. Predicted and empirical (simulated) probabilities for parameters $(1, [0.25, 0.5, 0.25], [1.5, 1.5], 1, M_1, 1000, 1000)$.
Figure 3. Predicted and empirical (simulated) probabilities for parameters $(1, [0.25, 0.5, 0.25], [1.5, 1.5], 1, M_1, 100, 50)$.
Figure 4. Predicted and empirical (simulated) probabilities $P(\nu_1 < \nu_2)$ with parameters $\mu = 1$, $p = 0.5$, and $0 \le M_2 \le 20$ for various $M_1$ values.
Figure 5. Predicted and empirical (simulated) probabilities $P(\nu_1 = \nu_2)$ with parameters $\mu = 1$, $p = 0.5$, and $0 \le M_2 \le 20$ for various $M_1$ values.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
