
Limiting Distributions of a Non-Homogeneous Markov System in a Stochastic Environment in Continuous Time

Department of Statistical Sciences, University College London, Gower St., London WC1E 6BT, UK
Mathematics 2022, 10(8), 1214; https://doi.org/10.3390/math10081214
Submission received: 10 March 2022 / Revised: 2 April 2022 / Accepted: 5 April 2022 / Published: 7 April 2022

Abstract

The stochastic process of a non-homogeneous Markov system in a stochastic environment in continuous time (S-NHMSC) is introduced in the present paper. The ordinary non-homogeneous Markov process is a very special case of an S-NHMSC. I study the expected population structure of the S-NHMSC and two central problems: the first classical problem of finding the conditions under which the asymptotic behavior of the expected population structure exists, and the second central problem of finding which expected relative population structures are possible limiting ones, provided that the limiting vector of input probabilities into the population is controlled. Finally, the rate of convergence is studied.

1. Introductory Notes

The stochastic process of a non-homogeneous Markov system in a stochastic environment (S-NHMS) in discrete time was introduced in [1]. The main goal was to satisfy the need for a more realistic stochastic model for populations of diverse entities that can be classified into a finite number of exhaustive and mutually exclusive states. The object of study is the expected population structure, that is, the distribution of the expected number of memberships in each state. Note that in the population, apart from the transitions of memberships among the states, there are transitions to the external environment, often called wastage, and flows of new memberships into the various states of the population (system), often called recruitment.
The S-NHMS in discrete time is a generalization of the stochastic concept of an NHMS in discrete time; it incorporates the idea of a pool of transition probability matrices to choose from at each time $t$, the roots of which lie in [2,3], for the special case where the transition matrices are Leslie matrices.
The stochastic process of an NHMS was first introduced in [4]. This new concept provided a more general framework for a number of Markov chain models in manpower systems, which was actually the initial motive. For examples, see [5,6,7,8,9,10].
There are also a large number and a great diversity of applied probability models that could be accommodated in this general framework. A simple fact that shows the dynamics of the concept of an NHMS is, as we will show later, that the well known simple Markov chain is a very special case of an NHMS.
In the present paper, we develop a continuous time version of an S-NHMS. The choice in practice between a stochastic process in discrete and continuous time is partly a matter of realism and partly one of convenience. With regard to realism, one would usually want to model the transitions of the members of the population between states in continuous time. In practice, however, the computational advantages of discrete time, as well as the mental process of the researcher, all too often lead to the choice of a discrete time process. On the other hand, continuous time models are often more amenable to mathematical analysis, and this often counts in their favor. Having developed both versions of the theory of the S-NHMS, more choices are at our disposal, and hence a more complete version of the entire theory.
A first concise and complete presentation of the theory of non-homogeneous Markov processes is given in [11], Section 8.9. There, apart from a rigorous foundation of the subject, the respective references also identify the initial founders of the subject. Reference [12] started a period of intense study of non-homogeneous Markov processes. Strong ergodicity for continuous time non-homogeneous Markov processes, using mean visit times, was studied in [13]. Important results on strong ergodicity for continuous time non-homogeneous Markov processes, using criteria on the intensity transition matrices as functions of time, were provided by [14,15,16]. I will make extensive use of these results in the present paper.
Estimates of the rate of convergence for non-homogeneous Markov processes were studied in a series of papers [17,18,19,20,21]. Results for Markov systems in continuous time can be found in [22,23,24,25].
Estimations of the transition intensities in NHMS in continuous time were provided by [26] for various cases of missing data. In [27], transition intensities were studied for homogeneous Markov systems (HMS) in continuous time, as well as the relation between the volume of the attainable expected population structures at time t and the trace and rank of the intensity matrix.
In [28], the authors studied, for closed HMS in continuous time, the stability of the size order of the elements of the expected population structure as $t \to \infty$. The state sizes of the elements of the expected population structures and their distributions for an HMS in continuous time were studied in [29] with the use of factorial moments. In [30], the author discussed the case of a closed HMS with finite capacities of the states. In [31], the close relation between $M/M/k/T/T$ queues and closed HMS in continuous time is presented. More recent results on NHMS in continuous time can be found in [32], while a more recent review of the subject was given by [33].
The paper is organized as follows: In Section 2, I define in detail for the first time the stochastic process S-NHMSC. I also show that the ordinary non-homogeneous Markov process is a special case of an S-NHMSC. Furthermore, I clarify that the open homogeneous Markov models and the ordinary NHMS in continuous time are special cases of the S-NHMSC.
In Section 3, I evaluate the expected population structure of the S-NHMSC at any time $t$ as a function of the basic parameters of the population, by establishing the appropriate differential and integral equations that it satisfies.
In Section 4, I study the first central classical problem, that of finding the conditions under which the asymptotic behavior of the expected population structure $E[N(t)]$ as $t \to \infty$ exists, and of finding its limit in closed analytic form as a function of the limits of the basic parameters of the system. The second central problem is finding which expected relative population structures are possible limiting ones, provided that we control the limiting vector of input probabilities into the population. We prove that the set $\mathcal{A}$ of asymptotically expected relative population structures $E[q(t)]$, under asymptotic input control of the S-NHMSC, is the convex hull of points which are functions of the left eigenvector of a certain limiting transition probability matrix and of the limiting transition intensity matrices of the inherent non-homogeneous Markov process.
I conclude this section by studying an important question, which logically arises, that is, what is the rate of convergence to asymptotically attainable structures in an S-NHMSC. In fact, I am interested in finding conditions under which the rate is exponential, because then, the practical value of the asymptotic result is greater.
Finally, in Section 5, I present an illustrative example from manpower planning.

2. The S-NHMS in Continuous Time

I will start by presenting the concept of a non-homogeneous Markov system in a stochastic environment in continuous time (S-NHMSC). Let $T(t)$, $t \geq 0$, be a known continuous function of time, or a realization of a known stochastic process, denoting the total number of members in the system. Let $S = \{1, 2, \dots, k\}$ be the set of states, which are assumed to be exclusive and exhaustive. The state of the system at any time $t$ is represented by the expected population structure
$$E[N(t)] = \big(E[N_1(t)], E[N_2(t)], \dots, E[N_k(t)]\big),$$
where $E[N_i(t)]$ is the expected number of members of the population in state $i$ at time $t$. Another representation of the state of the system is provided by the relative expected population structure
$$E[q(t)] = \frac{E[N(t)]}{T(t)} = \big(E[q_1(t)], E[q_2(t)], \dots, E[q_k(t)]\big).$$
Furthermore, among the states of the system, as in the case of a non-homogeneous Markov process ([11]), in the infinitesimal time interval $(t, t+\delta t]$, the probabilities of members of the system moving from state $i$ to state $j$ are generated by the transition intensities $r_{ij}(t)$:
$$p_{ij}(t, t+\delta t) = r_{ij}(t)\,\delta t + o(\delta t), \quad \text{for } i \neq j \in S. \qquad (1)$$
It is important to note at this point that (1) is valid as long as the transition intensities $r_{ij}(t)$ operate during the interval $(t, t+\delta t]$. Taking a step up the ladder towards reality, I will assume a stochastic mechanism for selecting the values of $r_{ij}(t)$, and the equation will be altered accordingly.
Furthermore, let state $k+1$ represent members leaving the population, and assume that $r_{i,k+1}(t)$ is the transition intensity for a member of the population in state $i$ to leave in the time interval $(t, t+\delta t]$:
$$p_{i,k+1}(t, t+\delta t) = r_{i,k+1}(t)\,\delta t + o(\delta t), \quad \text{for } i \in S.$$
The transition intensities $r_{jj}(t)$ are defined by
$$r_{jj}(t) = -\sum_{\substack{i=1 \\ i \neq j}}^{k+1} r_{ji}(t), \quad \text{for } j \in S.$$
Let $R(t) = \{r_{ij}(t)\}_{i,j \in S}$ be the matrix of transition intensities at time $t$, and let $r_{k+1}(t) = (r_{1,k+1}(t), r_{2,k+1}(t), \dots, r_{k,k+1}(t))$ be the vector of leaving intensities at time $t$. Now, let $p_{0i}(t, t+\delta t)$ be the probability that a new member enters the population in state $i$, given that it enters the population in the time interval $(t, t+\delta t]$, and let $p_0(t, t+\delta t) = (p_{01}(t, t+\delta t), p_{02}(t, t+\delta t), \dots, p_{0k}(t, t+\delta t))$. Define the following probabilities:
$$\hat{p}_{ij}(t, t+\delta t) = p_{ij}(t, t+\delta t) + p_{i,k+1}(t, t+\delta t)\,p_{0j}(t, t+\delta t)$$
$$= r_{ij}(t)\,\delta t + r_{i,k+1}(t)\,\delta t\,p_{0j}(t, t+\delta t) + o(\delta t).$$
Now, let
$$q_{ij}(t) = \lim_{\delta t \to 0} \frac{\hat{p}_{ij}(t, t+\delta t)}{\delta t} = r_{ij}(t) + r_{i,k+1}(t)\,p_{0j}(t), \quad \text{for } i \neq j,$$
be the transition intensity for a membership to move to state $j$ in the time interval $(t, t+\delta t]$, given that it was in state $i$ at time $t$. To visualize this more deeply, let there be $T(t)$ memberships at the beginning of the interval $(t, t+\delta t]$, each held by one member of the population. During the interval $(t, t+\delta t]$, members leave the population, and at the exit they hand their memberships to their replacements, who are distributed among the states with probabilities $p_0(t, t+\delta t)$ at the end of the interval. Furthermore, let $Q(t) = \{q_{ij}(t)\}_{i,j \in S}$ be the matrix of transition intensities of the memberships. Assume that $Q(t)$ is measurable and that $\sup_{i,j \in S} |q_{ij}(t)|$ is integrable on every finite interval of $t$. We call the Markov process defined by the matrices of intensities $\{Q(t)\}_{t \geq 0}$ the embedded or inherent Markov process of the S-NHMSC.
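The membership intensity matrix can be assembled directly from the internal intensities, the leaving intensities, and the recruitment probabilities. The following minimal sketch (Python with NumPy; the numerical values are invented for illustration, loosely echoing the example of Section 5) builds $Q = R + r'_{k+1} p_0$ and checks that its rows sum to zero, as they must for an intensity matrix:

```python
import numpy as np

# Hypothetical 3-state illustration (k = 3): R holds the internal
# transition intensities r_ij, r_leave the leaving intensities
# r_{i,k+1}, and p0 the recruitment probabilities p_0.
R = np.array([[-4.0,  3.0, 0.0],
              [ 0.0, -5.0, 3.0],
              [ 0.0,  0.0, -7.0]])
r_leave = np.array([1.0, 2.0, 7.0])   # r_{i,k+1}
p0 = np.array([0.2, 0.3, 0.5])        # entry distribution of replacements

# Membership intensities: q_ij = r_ij + r_{i,k+1} * p_0j
Q = R + np.outer(r_leave, p0)

# Each member's total outflow (including leaving) is balanced by the
# replacement inflow, so every row of Q sums to zero.
assert np.allclose(Q.sum(axis=1), 0.0)
```

The row-sum check mirrors the calculation carried out formally for $E[Q(t)]$ in Section 3.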
Assume now that in the infinitesimal time interval $(t, t+\delta t]$, the system has the choice of selecting a transition intensity matrix from the pool
$$\mathcal{R}_I(t) = \{R_1(t), R_2(t), \dots, R_\nu(t)\},$$
such that $R_i(t)\mathbf{1}' + r'_{k+1}(t) = \mathbf{0}'$ for $i = 1, 2, \dots, \nu$ and for every $t$. Furthermore, assume that it makes its choice in a stochastic way; more specifically, in the infinitesimal time interval $(t, t+\delta t]$, the probability of switching between intensity matrices of the pool $\mathcal{R}_I(t)$ is given by
$$c_{ij}(t, t+\delta t) = P\big(R(t+\delta t) = R_j(t+\delta t) \mid R(t) = R_i(t)\big) = z_{ij}(t)\,\delta t + o(\delta t), \quad \text{for } i \neq j,\ t \geq 0,$$
where $z_{ii}(t)$ is defined by
$$z_{ii}(t) = -\sum_{j \neq i} z_{ij}(t), \quad i, j \in I,\ t \geq 0,$$
and let $c_i(0)$ for $i = 1, 2, \dots, \nu$ be the probabilities of the initial states.
Let $Z(t) = \{z_{ij}(t)\}_{i,j \in I}$ be the above intensity matrix, and assume that $Z(t)$ is measurable for every $t \geq 0$ and that $\sup_{i \in I} |z_{ii}(t)|$ is integrable on every finite interval of time. Then, the intensity matrices $\{Z(t)\}_{t \geq 0}$ define a non-homogeneous Markov process, which we call the compromise non-homogeneous Markov process of the S-NHMSC. The word 'compromise' is selected in the sense that the choice is the outcome of a strategy under the various pressures of the environment. We call a process like the one described above a non-homogeneous Markov system in a stochastic environment in continuous time (S-NHMSC).
We defined the S-NHMSC in the most general way, in order to provide an inclusive framework that could accommodate a large variety of applied probability models. Furthermore, in the following, some basic questions will be answered within this general framework. However, it is of great importance, in order to increase our intuition about the potential power of applicability of the present theory and in order to place it at the right position in the pyramid of progress towards reality, to make the following comments. Firstly, when:
$$T(t) = 1, \quad r_{k+1}(t) = \mathbf{0}, \quad p_0(t) = \mathbf{0} \quad \text{for every } t > 0, \quad \text{and} \quad \mathcal{R}_I(t) = \{R(t)\} \quad \text{for every } t > 0,$$
then the S-NHMSC is the ordinary non-homogeneous Markov process, which has found applications in almost all areas.
Secondly, when:
$$r_{k+1}(t) = r_{k+1}, \quad p_0(t) = p_0, \quad \mathcal{R}_I(t) = \{R\} \quad \text{for every } t > 0,$$
then the S-NHMSC is the open homogeneous Markov model applied extensively in manpower systems (see [5,34]).
Thirdly, when:
$$\mathcal{R}_I(t) = \{R(t)\} \quad \text{for every } t > 0,$$
then the S-NHMSC is the ordinary NHMS in continuous time, which is a general framework for many applied probability models (see [35,36]).

3. The Expected Population Structure of the S-NHMSC

We will now study the problem of finding the expected population structure $E[N(t)]$ in terms of the basic functions of the parameters of the system. We call basic functions of the parameters the least number of parameters that uniquely determine an S-NHMSC. These are the functions $\{\mathcal{R}_I(t)\}_{t \geq 0}$, $\{Z(t)\}_{t \geq 0}$, $\{T(t)\}_{t \geq 0}$, $\{r_{k+1}(t)\}_{t \geq 0}$, $\{p_0(t)\}_{t \geq 0}$, the initial population structure $N(0)$, and the initial probabilities $c_j(0)$. The growth of the total population is governed by
$$D(t) = \frac{dT(t)}{dt}, \quad \text{or equivalently} \quad T(t+\delta t) - T(t) = D(t)\,\delta t + o(\delta t).$$
Let $N_0(t, t+\delta t)$ be the random variable representing the number of new members entering the population in the infinitesimal time interval $(t, t+\delta t]$. Then, since the number of losses from each state $i \in S$ is a random variable with binomial distribution $B\big(N_i(t), r_{i,k+1}(t)\,\delta t\big)$ conditional on $N_i(t)$, we have:
$$E[N_0(t, t+\delta t)] = \sum_{i=1}^{k} E[N_i(t)]\,r_{i,k+1}(t)\,\delta t + D(t)\,\delta t.$$
Furthermore, let $N_{ij}(t, t+\delta t)$ be the random variable representing the number of members of the system moving from state $i$ to state $j$ in the time interval $(t, t+\delta t]$. These flows from $i$ to $j \in S$ are multinomial random variables, in the sense that:
$$E[N_{ij}(t, t+\delta t)] = E\big[E[N_{ij}(t, t+\delta t) \mid N_i(t), \mathcal{R}_I(t)]\big] = E[N_i(t)]\,E[p_{ij}(t, t+\delta t)] = E[N_i(t)]\,E[r_{ij}(t)]\,\delta t + o(\delta t), \quad \text{for } i \neq j \in S,$$
and:
$$E[N_{ii}(t, t+\delta t)] = E\big[E[N_{ii}(t, t+\delta t) \mid N_i(t), \mathcal{R}_I(t)]\big] = E[N_i(t)]\,E[p_{ii}(t, t+\delta t)] = E[N_i(t)] + E[N_i(t)]\,E[r_{ii}(t)]\,\delta t + o(\delta t), \quad \text{for } i \in S.$$
Consequently, we have:
$$E[N_j(t+\delta t)] = \sum_{i \neq j} E[N_i(t)]\big(E[r_{ij}(t)]\,\delta t + r_{i,k+1}(t)\,\delta t\,p_{0j}(t, t+\delta t)\big) + E[N_j(t)]\big(1 + E[r_{jj}(t)]\,\delta t + r_{j,k+1}(t)\,\delta t\,p_{0j}(t, t+\delta t)\big) + D(t)\,\delta t\,p_{0j}(t, t+\delta t) + o(\delta t). \qquad (12)$$
Equation (12), for all $j \in S$, can be written in matrix notation as:
$$\frac{dE[N(t)]}{dt} = E[N(t)]\,E[Q(t)] + D(t)\,p_0(t), \qquad (13)$$
where:
$$E[Q(t)] = E[R(t)] + r'_{k+1}(t)\,p_0(t).$$
We will now prove that the row sums of the matrix $E[Q(t)]$ are equal to zero. We have:
$$E[Q(t)]\,\mathbf{1}' = E[R(t)]\,\mathbf{1}' + r'_{k+1}(t)\,p_0(t)\,\mathbf{1}' = \sum_{j=1}^{\nu} P\big(R(t) = R_j(t)\big)\,R_j(t)\,\mathbf{1}' + r'_{k+1}(t) = -\sum_{j=1}^{\nu} P\big(R(t) = R_j(t)\big)\,r'_{k+1}(t) + r'_{k+1}(t) = -r'_{k+1}(t) + r'_{k+1}(t) = \mathbf{0}'.$$
Hence, the matrix $E[Q(t)]$ is an intensity matrix and defines a non-homogeneous Markov process which, by analogy with the ordinary NHMS in discrete time [5,35], we call the expected embedded or inherent non-homogeneous Markov process of the S-NHMSC. Assume that $\int_0^t \|E[Q(u)]\|\,du < \infty$ for all $t \geq 0$; then there exists a unique transition function (see [36], paragraph 8.9) $E[P_q(\cdot,\cdot)]$ such that:
$$\lim_{h, h' \to 0^+} \frac{E[P_q(t-h, t+h')] - I}{h + h'} = E[Q(t)],$$
for all $t \notin \mathcal{E}$, where $\mathcal{E} \subset [0, \infty)$ is a set of Lebesgue measure zero. Moreover, $E[P_q(\cdot,\cdot)]$ satisfies the integral matrix equations:
$$E[P_q(s,t)] = I + \int_s^t E[Q(u)]\,E[P_q(u,t)]\,du, \qquad (17)$$
and:
$$E[P_q(s,t)] = I + \int_s^t E[P_q(s,u)]\,E[Q(u)]\,du. \qquad (18)$$
A detailed solution of (17) and (18) can be found in [36], paragraph 8.9, where $E[Q(t)]$ is evidently a function of $\{Z(t)\}_{t \geq 0}$ and $\{\mathcal{R}_I(t)\}_{t \geq 0}$, due to the selection of $R(t)$ by the compromise non-homogeneous Markov process. However, we are not interested in a closed analytic formula for $E[P_q(s,t)]$; it is sufficient to know that it exists and is unique.
In what follows, I will use a probabilistic argument to find $E[N(t)]$, which will also be the solution of the differential Equation (13). The initial number of memberships $T(0) = N(0)\,\mathbf{1}'$ will, at time $t$, be distributed over the various states with probabilities $E[P_q(0,t)]$, which are the transition probabilities of the expected embedded non-homogeneous Markov process generated by the intensity matrices $E[Q(t)]$. Thus, the expected distribution across the states of the initial memberships will be:
$$N(0)\,E[P_q(0,t)].$$
Now, consider the time interval $(x, x+\delta x]$; the new memberships entering in that interval number $D(x)\,\delta x$, and their expected allocation over the states at the end of the interval is $p_0(x, x+\delta x)\,D(x)\,\delta x$. After time $t - x$, these new memberships will have been redistributed over the states of the population with expected values $p_0(x, x+\delta x)\,D(x)\,\delta x\,E[P_q(x,t)]$; therefore, integrating $x$ from $0$ to $t$, we get:
$$E[N(t)] = N(0)\,E[P_q(0,t)] + \int_0^t p_0(x)\,D(x)\,E[P_q(x,t)]\,dx. \qquad (20)$$
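As a numerical sanity check of these equations, the following sketch (Python/NumPy) integrates the differential Equation (13) in a deliberately simplified setting: time-homogeneous intensities (the matrix used is the limiting $E[Q]$ computed in the example of Section 5) and constant total size, so $D(t) = 0$; the initial structure is invented. The relative structure then converges to the left null vector of the intensity matrix, as Theorem 4 will assert:

```python
import numpy as np

# Simplified homogeneous case of dE[N]/dt = E[N] E[Q] + D(t) p_0 with
# D(t) = 0 and constant intensity matrix Q, integrated by forward Euler.
Q = np.array([[-3.8,  3.3,  0.5],
              [ 0.4, -5.3,  4.9],
              [ 1.4,  2.1, -3.5]])
N = np.array([100.0, 50.0, 50.0])      # invented N(0), total T = 200

dt = 1e-4
for _ in range(int(5.0 / dt)):         # integrate up to t = 5
    N = N + dt * (N @ Q)

# Stationary relative structure: left null vector of Q, normalised.
w, V = np.linalg.eig(Q.T)
pi = np.real(V[:, np.argmin(np.abs(w))])
pi = pi / pi.sum()

assert np.isclose(N.sum(), 200.0)      # memberships are conserved
assert np.allclose(N / 200.0, pi, atol=1e-6)
```

Conservation of the total follows from the zero row sums of $Q$; the convergence of $E[q(t)]$ to the null vector illustrates, in this special case, the asymptotic results of Section 4.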

4. The Asymptotic Behavior of the S-NHMSC

It is evident from previous studies, for example [1,4,35,37,38,39,40], that the central problems in the theory of NHMS and S-NHMS in discrete time, which will be studied in the present paper for the S-NHMSC, are basically of two natures. The first classical problem is that of finding the conditions under which the asymptotic behavior of the expected population structure $E[N(t)]$ as $t \to \infty$ exists, and of finding its limit in closed analytic form as a function of the limits of the basic parameters of the system. The second classical problem is finding which expected relative population structures are possible limiting ones, provided that we control the limiting vector of input probabilities into the population.
In what follows, I will use as the norm of a matrix $A \in \mathcal{M}_{k \times k}(\mathbb{R})$:
$$\|A\| = \sup_i \sum_j |a_{ij}|.$$
I will start by refreshing concepts and borrowing some important results from the theory of non-homogeneous Markov processes, starting with the following definitions for non-homogeneous Markov processes with countable state spaces.
Definition 1.
A Markov process $\{X(t)\}_{t \geq 0}$ is weakly ergodic if for every $s \geq 0$, $\lim_{t \to \infty} \delta(P(s,t)) = 0$, where $\delta(\cdot)$ is the ergodicity coefficient measuring the maximum distance between the rows of a stochastic matrix.
In the case of weak ergodicity, the probability of occupying any of the states at time $t$ tends to be independent of the initial probability distribution, but in general remains dependent on $t$.
Definition 2.
A Markov process $\{X(t)\}_{t \geq 0}$ is ergodic if for every $s \geq 0$, there exists a probability vector $\Pi = (\pi_1, \pi_2, \dots)$ such that:
$$\lim_{t \to \infty} |p_{ij}(s,t) - \pi_j| = 0 \quad \text{for every } i, j \in S.$$
Definition 3.
A Markov process $\{X(t)\}_{t \geq 0}$ is strongly ergodic if there exists a row-constant stochastic matrix $\Pi$ such that, for all $s \geq 0$:
$$\lim_{t \to \infty} \|P(s,t) - \Pi\| = 0.$$
Remark 1.
When the state space S is finite, then the concepts of ergodic and strongly ergodic coincide.
As the reader may by now have recognized, the generator of a non-homogeneous Markov process is the family of intensity matrices $\{Q(t)\}_{t \geq 0}$. This is so in the sense that the transition probability matrix $P$ can be seen as the generator of a homogeneous Markov chain, and the sequence of transition probability matrices $\{P(t)\}_{t=1}^{\infty}$ as the generator of a non-homogeneous Markov chain. Hence, our goal will now be to find conditions for strong ergodicity of a non-homogeneous Markov process based on the convergence of its family of intensity matrices $\{Q(t)\}_{t \geq 0}$.
I will now borrow a basic theorem concerning strong ergodicity for a non-homogeneous Markov process based on its family of intensity matrices.
Theorem 1
([14,15]). Let $(\Omega, \mathcal{F}, P)$ be a complete probability space and $\{X(t)\}_{t \geq 0}$ a non-homogeneous Markov process with family of intensity matrices $\{Q(t)\}_{t \geq 0}$ such that $\sup_{t \geq 0} \|Q(t)\| \leq c$. Let also $\{\hat{X}(t)\}_{t \geq 0}$ be a homogeneous Markov process with intensity matrix $Q$, such that $\|Q\| \leq c$, which is strongly ergodic with limit the stable stochastic matrix $\Pi$. If $\lim_{t \to \infty} \|Q(t) - Q\| = 0$, then $\{X(t)\}_{t \geq 0}$ is also strongly ergodic with limit $\Pi$.
Remark 2.
At this point, let us recall that for finite homogeneous Markov chains, in discrete or continuous time, the concepts of ergodicity, strong ergodicity, and weak ergodicity coincide. For an infinite chain, the notions of ergodicity and strong ergodicity separate.
I will now present an important result from [16]. Let $Q$ be the intensity matrix of a homogeneous Markov process $\{X(t)\}_{t \geq 0}$ with $\sup_{i \in S} |q_{ii}| < c < \infty$, and for $b > c$ define:
$$\hat{P} = I + \frac{Q}{b};$$
then $\hat{P}$ generates a discrete time Markov chain $\{\hat{X}(t)\}_{t=0}^{\infty}$.
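This discretization is easy to carry out numerically. The sketch below (Python/NumPy) applies it to the intensity matrix of the compromise process from the example of Section 5 (with the third row taken as $(3, 5, -8)$ so that rows sum to zero), and verifies that $\hat{P}$ is a stochastic matrix whose powers converge to a stable matrix, i.e. the generated chain is ergodic:

```python
import numpy as np

# The device behind Theorem 2: discretise a homogeneous intensity
# matrix Z via P_hat = I + Z/b with b exceeding the largest exit rate
# sup |z_ii|; ergodicity of the discrete chain generated by P_hat is
# then equivalent to ergodicity of the continuous time process.
Z = np.array([[-5.0,  3.0,  2.0],
              [ 4.0, -9.0,  5.0],
              [ 3.0,  5.0, -8.0]])
b = 10.0                               # b > sup |z_ii| = 9
P_hat = np.eye(3) + Z / b

# P_hat is a genuine stochastic matrix...
assert np.all(P_hat >= 0) and np.allclose(P_hat.sum(axis=1), 1.0)

# ...and since it is irreducible and aperiodic, rows of its high
# powers coalesce into the common stationary row.
P_inf = np.linalg.matrix_power(P_hat, 200)
assert np.allclose(P_inf, P_inf[0], atol=1e-10)
```

The common row of `P_inf` is the stationary vector $\Pi_Z$ used later in Theorem 3.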
Theorem 2
([16]). Let $(\Omega, \mathcal{F}, P)$ be a complete probability space and $\{X(t)\}_{t \geq 0}$ a finite homogeneous Markov process; then it is ergodic if and only if the Markov chain $\{\hat{X}(t)\}_{t=0}^{\infty}$ generated by $\hat{P} = I + Q/b$ is ergodic.
I will now prove the following basic theorem:
Theorem 3.
Let $(\Omega, \mathcal{F}, P)$ be a complete probability space and a finite S-NHMSC as defined in Section 2. Assume that the following conditions hold:
(1) $\lim_{t \to \infty} \|R_j(t) - R_j\| = 0$ for $j = 1, 2, \dots, \nu$;
(2) $\lim_{t \to \infty} \|r_{k+1}(t) - r_{k+1}\| = 0$;
(3) $\lim_{t \to \infty} p_0(t) = p_0$;
(4) $\lim_{t \to \infty} \|Z(t) - Z\| = 0$, with $\sup_{t \geq 0} \|Z(t)\| < \infty$ and $\sup_{i \in I} |z_{ii}| < z < \infty$, and let $P_Z = I + Z/c_1$ with $c_1 > z$ be an irreducible, aperiodic matrix;
(5) $\sup_{i \in S} |r_{j,ii}| < a < \infty$ and $\sup_i r_{i,k+1} < b < \infty$, where $r_{j,ii}$ is the $(i,i)$ element of $R_j$.
Then, as $t \to \infty$, $E[Q(t)]$ converges in norm to the intensity matrix:
$$E[Q] = \sum_{j=1}^{\nu} \pi_{zj} R_j + r'_{k+1}\,p_0,$$
where $\Pi_Z = (\pi_{z1}, \pi_{z2}, \dots, \pi_{z\nu})$ is the left eigenvector, for the eigenvalue 1, of the matrix $P_Z$.
Proof. 
From condition (4), since $P_Z$ is an irreducible, aperiodic stochastic matrix, there exists a stable stochastic matrix $\Pi_Z$ with common row $\Pi_Z = (\pi_{z1}, \pi_{z2}, \dots, \pi_{z\nu})$, the left eigenvector for the eigenvalue 1 of the matrix $P_Z$, that is:
$$\lim_{t \to \infty} \|P_Z^t - \Pi_Z\| = 0.$$
Furthermore, from condition (4), the intensity matrices $\{Z(t)\}_{t \geq 0}$ converge to the intensity matrix $Z$, and since $P_Z = I + Z/c_1$ is ergodic, Theorem 2 yields that $Z$ generates an ergodic Markov process. Therefore, by Theorem 1, $\{Z(t)\}_{t \geq 0}$ generates an ergodic non-homogeneous Markov process, and, denoting by $C(s,t)$ its transition probability matrix, we have:
$$\lim_{t \to \infty} \|C(s,t) - \Pi_Z\| = 0.$$
We have that:
$$E[R(t)] = \sum_{i=1}^{\nu} \sum_{j=1}^{\nu} c_{ij}(0,t)\,c_i(0)\,R_j(t). \qquad (25)$$
Now, consider:
$$\Big\|E[Q(t)] - \sum_{j=1}^{\nu} \pi_{zj} R_j - r'_{k+1}\,p_0\Big\|$$
$$\leq \Big\|E[R(t)] - \sum_{j=1}^{\nu} \pi_{zj} R_j\Big\| + \big\|r'_{k+1}(t)\,p_0(t) - r'_{k+1}\,p_0\big\|$$
$$\leq \Big\|\sum_{i=1}^{\nu}\sum_{j=1}^{\nu} c_{ij}(0,t)\,c_i(0)\,R_j(t) - \sum_{i=1}^{\nu}\sum_{j=1}^{\nu} \pi_{zj}\,c_i(0)\,R_j\Big\| + \|r'_{k+1}(t) - r'_{k+1}\| + \|r'_{k+1}\|\,\|p_0(t) - p_0\|$$
$$\leq \sum_{i=1}^{\nu}\sum_{j=1}^{\nu} c_i(0)\Big(|c_{ij}(0,t) - \pi_{zj}|\,\|R_j(t)\| + \pi_{zj}\,\|R_j(t) - R_j\|\Big) + \|r'_{k+1}(t) - r'_{k+1}\| + \|r'_{k+1}\|\,\|p_0(t) - p_0\|.$$
We have that:
$$\|R_j(t)\| \leq \|R_j(t) - R_j\| + \|R_j\|,$$
and since $r_{j,ii} = -\sum_{l \neq i} r_{j,il} - r_{j,i,k+1}$, by condition (5) we have $\sup_{i \in S} |r_{j,ii}| < a < \infty$, from which one easily obtains:
$$\|R_j\| \leq 2\sum_{l \neq i} |r_{j,il}| + \sup_{i \in S} r_{j,i,k+1} < b' < \infty. \qquad (27)$$
By condition (1), one can choose $t^*$ such that for $t > t^*$, $\|R_j(t) - R_j\| < 1$. Let $M^* = \sup_{0 \leq t \leq t^*} \|R_j(t) - R_j\|$ and set $M = M^* + 1 + b'$. Then:
$$\|R_j(t)\| < M < \infty.$$
From (25), (27), and the conditions of the Theorem, we get that for every $\epsilon > 0$ there exists $t_0$ such that for $t > t_0$:
$$\Big\|E[Q(t)] - \sum_{j=1}^{\nu} \pi_{zj} R_j - r'_{k+1}\,p_0\Big\| \leq \epsilon.$$
Furthermore, using the conditions of the Theorem, it is not difficult to see that:
$$E[Q]\,\mathbf{1}' = \sum_{j=1}^{\nu} \pi_{zj} R_j\,\mathbf{1}' + r'_{k+1}\,p_0\,\mathbf{1}' = -\sum_{j=1}^{\nu} \pi_{zj}\,r'_{k+1} + r'_{k+1} = \mathbf{0}'. \quad \square$$
In analogy with the discrete case for an S-NHMS, we provide the following definition:
Definition 4.
We say that an S-NHMSC has an asymptotically attainable expected relative population structure $E[q]$ under asymptotic input control if there exists a $p_0 = \lim_{t \to \infty} p_0(t)$ such that $\lim_{t \to \infty} E[q(t)] = E[q]$. We denote by $\mathcal{A}$ the set of asymptotically attainable expected relative population structures under asymptotic input control of the S-NHMSC.
We now provide the following basic theorem concerning the asymptotic behavior of the S-NHMSC.
Theorem 4.
Let $(\Omega, \mathcal{F}, P)$ be a complete probability space and a finite S-NHMSC as defined in Section 2. Assume that conditions (1)-(5) of Theorem 3 hold and, in addition, that the following conditions are true:
(6) $\lim_{t \to \infty} T(t) = T$, where $T(t)$ is a non-decreasing continuous function;
(7) the matrix
$$P_q = I + \frac{E[Q]}{c_2},$$
with $c_2 > \sup_{i \in S} |E[q_{ii}]|$, is an irreducible, aperiodic stochastic matrix.
Then: (i) as $t \to \infty$, $E[q(t)]$ converges to $\Pi_q = (\pi_{q1}, \pi_{q2}, \dots, \pi_{qk})$, the left eigenvector, for the eigenvalue 1, of the matrix $P_q$; (ii) the set $\mathcal{A}$ is the convex hull of the points:
$$\mu_i^{-1}\,e_i \Big[\sum_{j=1}^{\nu} \pi_{zj} R_j\Big]^{-1}, \quad \text{where } \mu_i = e_i \Big[\sum_{j=1}^{\nu} \pi_{zj} R_j\Big]^{-1} \mathbf{1}'.$$
Proof. 
Since $\Pi_Z$ is the left eigenvector for the eigenvalue 1 of the irreducible, aperiodic matrix $P_Z$, we have $0 \leq \pi_{zj} \leq 1$ for $j = 1, 2, \dots, \nu$. Furthermore, condition (5) of Theorem 3 is also valid in the present setting; hence $\sup_{i \in S} |r_{j,ii}| < \infty$ and $\sup_i r_{i,k+1} < b < \infty$. Consequently, from the expression for $E[Q]$ in Theorem 3, we get that $\sup_{i \in S} |E[q_{ii}]| < \infty$, so a finite $c_2 > \sup_{i \in S} |E[q_{ii}]|$ exists.
Now, since $P_q$ is an irreducible, aperiodic stochastic matrix, we have that:
$$\lim_{t \to \infty} \|P_q^t - \Pi_q\| = 0,$$
where $\Pi_q$ is a stable stochastic matrix with common row $\Pi_q = (\pi_{q1}, \pi_{q2}, \dots, \pi_{qk})$, the left eigenvector for the eigenvalue 1 of the matrix $P_q$. From this, together with Theorems 1 and 2, if we denote by $E[P_q(s,t)]$ the probability transition matrix of the non-homogeneous Markov process defined by the intensities $\{E[Q(t)]\}_{t \geq 0}$, then:
$$\lim_{t \to \infty} \|E[P_q(s,t)] - \Pi_q\| = 0 \quad \text{for every } s \geq 0. \qquad (29)$$
Therefore, for the first part of the right-hand side of Equation (20), as $t \to \infty$:
$$\lim_{t \to \infty} N(0)\,E[P_q(0,t)] = N(0)\,\Pi_q = T(0)\,\Pi_q. \qquad (30)$$
Now, consider:
$$U(t) = \Big\|\int_0^t p_0(x)\,D(x)\,E[P_q(x,t)]\,dx - \int_0^t p_0\,D(x)\,\Pi_q\,dx\Big\| \qquad (31)$$
$$\leq \int_0^t \big\|p_0(x)\,E[P_q(x,t)] - p_0\,\Pi_q\big\|\,D(x)\,dx$$
$$\leq \int_0^t \|E[P_q(x,t)] - \Pi_q\|\,D(x)\,dx + \int_0^t \|p_0(x) - p_0\|\,D(x)\,dx = A(t) + B(t).$$
From (29), we have that there exists a $t_0 > 0$ such that for $t \geq x > t_0$:
$$\|E[P_q(x,t)] - \Pi_q\| < \epsilon.$$
Thus:
$$A(t) \leq \int_0^{t-t_0} \|E[P_q(x,t)] - \Pi_q\|\,D(x)\,dx + \int_{t-t_0}^{t} \|E[P_q(x,t)] - \Pi_q\|\,D(x)\,dx$$
$$\leq \epsilon\,\big(T(t-t_0) - T(0)\big) + 2\,\big(T(t) - T(t-t_0)\big).$$
Now, from condition (3), there exists a $t_1 > 0$ such that for $t > t_1$, $\|p_0(t) - p_0\| < \epsilon$. Thus:
$$B(t) = \int_0^t \|p_0(x) - p_0\|\,D(x)\,dx \leq \int_0^{t_1} \|p_0(x) - p_0\|\,D(x)\,dx + \int_{t_1}^{t} \|p_0(x) - p_0\|\,D(x)\,dx$$
$$\leq 2\,\big(T(t_1) - T(0)\big) + \epsilon\,\big(T(t) - T(t_1)\big).$$
Hence, we have that $\lim_{t \to \infty} A(t) = 0$ and $\lim_{t \to \infty} B(t) < \infty$; thus, $U(t)$ is bounded from above and $\lim_{t \to \infty} U(t) = 0$. Therefore, from (31), we have that:
$$\lim_{t \to \infty} \int_0^t p_0(x)\,D(x)\,E[P_q(x,t)]\,dx = \int_0^{\infty} p_0\,D(x)\,\Pi_q\,dx = p_0\,\Pi_q\,\big(T - T(0)\big) = \Pi_q\,\big(T - T(0)\big). \qquad (32)$$
Hence, from (20), (30) and (32), we get that:
$$\lim_{t \to \infty} E[N(t)] = T\,\Pi_q. \qquad (33)$$
Since $E[Q]$ is finite, bounded, and defines an ergodic Markov process, it is known that:
$$\Pi_q\,E[Q] = \mathbf{0}. \qquad (34)$$
From Theorem 3 and Equation (34), we get that:
$$\Pi_q \sum_{j=1}^{\nu} \pi_{zj} R_j = -\Pi_q\,r'_{k+1}\,p_0 = -\big(\Pi_q\,r'_{k+1}\big)\,p_0.$$
The matrix $\sum_{j=1}^{\nu} \pi_{zj} R_j$ is, due to condition (7), irreducible, and it is the internal part of the intensity matrix $E[Q]$. Hence ([41]), $\big[\sum_{j=1}^{\nu} \pi_{zj} R_j\big]^{-1}$ exists and is nonpositive. Therefore:
$$\Pi_q = -\big(\Pi_q\,r'_{k+1}\big)\,p_0 \Big[\sum_{j=1}^{\nu} \pi_{zj} R_j\Big]^{-1},$$
and:
$$\Pi_q = -\big(\Pi_q\,r'_{k+1}\big)\,\sum_{i=1}^{k} p_{0i}\,e_i \Big[\sum_{j=1}^{\nu} \pi_{zj} R_j\Big]^{-1}. \qquad (37)$$
Multiplying both sides of (37) by $\mathbf{1}'$, we obtain:
$$1 = -\big(\Pi_q\,r'_{k+1}\big)\,\sum_{i=1}^{k} p_{0i}\,e_i \Big[\sum_{j=1}^{\nu} \pi_{zj} R_j\Big]^{-1} \mathbf{1}'.$$
Let:
$$\mu_i = e_i \Big[\sum_{j=1}^{\nu} \pi_{zj} R_j\Big]^{-1} \mathbf{1}'.$$
Then:
$$1 = -\big(\Pi_q\,r'_{k+1}\big)\,\sum_{i=1}^{k} p_{0i}\,\mu_i.$$
Therefore, from (33) and the above, we get that:
$$\lim_{t \to \infty} E[q(t)] = \Pi_q = \sum_{i=1}^{k} \frac{p_{0i}\,\mu_i}{\sum_{j=1}^{k} p_{0j}\,\mu_j}\;\mu_i^{-1}\,e_i \Big[\sum_{j=1}^{\nu} \pi_{zj} R_j\Big]^{-1}.$$
Hence, $E[q]$ is a convex combination of the vertices:
$$\mu_i^{-1}\,e_i \Big[\sum_{j=1}^{\nu} \pi_{zj} R_j\Big]^{-1}. \quad \square$$
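These quantities are straightforward to compute numerically. The sketch below (Python/NumPy) uses the data of the Section 5 example, taking the stationary vector $(47, 34, 33)/114$ of $P_Z$ as given; it forms the vertices of $\mathcal{A}$ as the normalised rows of $-\big[\sum_j \pi_{zj} R_j\big]^{-1}$ and checks that the limit realised by a given $p_0$ annihilates the limiting intensity matrix $E[Q]$:

```python
import numpy as np

# Data from the Section 5 example.
pi_z = np.array([47.0, 34.0, 33.0]) / 114.0    # stationary vector of P_Z
R = [np.array([[-4.0, 3.0, 0.0], [0.0, -5.0, 3.0], [0.0, 0.0, -7.0]]),
     np.array([[-5.0, 4.0, 0.0], [0.0, -6.0, 4.0], [0.0, 0.0, -7.0]]),
     np.array([[-3.0, 2.0, 0.0], [0.0, -7.0, 5.0], [0.0, 0.0, -7.0]])]
r_leave = np.array([1.0, 2.0, 7.0])
p0 = np.array([0.2, 0.3, 0.5])

S = sum(p * Rj for p, Rj in zip(pi_z, R))      # sum_j pi_zj R_j
M = -np.linalg.inv(S)                          # nonnegative matrix
vertices = M / M.sum(axis=1, keepdims=True)    # rows = vertices of A

# The limit realised by p_0 is the corresponding convex combination,
# i.e. p_0 M normalised to a probability vector...
pi_q = p0 @ M
pi_q = pi_q / pi_q.sum()

# ...and it is the left null vector of E[Q] = S + r'_{k+1} p_0.
EQ = S + np.outer(r_leave, p0)
assert np.allclose(pi_q @ EQ, 0.0)
assert np.all(vertices >= 0)
```

The normalisation by row sums is exactly the $\mu_i^{-1}$ factor of the theorem, with the minus signs cancelling.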
It is well known that for a homogeneous Markov process with intensity matrix $Q$ and transition matrix $P(t)$ which is strongly ergodic, the rate at which $P(t)$ converges to a stable stochastic matrix is exponential. Logically, this fact creates the intuition that for a non-homogeneous Markov process with a family of intensity matrices $\{Q(t)\}$, the rate at which the transition probability matrices converge to a stable stochastic matrix might also be exponential. The answer is negative in general, since one more condition is needed for this to be true, namely that $\lim_{t \to \infty} \|Q(t) - Q\| = 0$ with an exponential rate of convergence. This result is stated formally in the following theorem, the proof of which can be found in [14].
Theorem 5.
Let $(\Omega, \mathcal{F}, P)$ be a complete probability space and $\{X(t)\}_{t \geq 0}$ a non-homogeneous Markov process with family of intensity matrices $\{Q(t)\}_{t \geq 0}$, which is strongly ergodic. Let also $\{\hat{X}(t)\}_{t \geq 0}$ be a homogeneous Markov process with intensity matrix $Q$, which is strongly ergodic. Let $g: \mathbb{R}_+ \to \mathbb{R}_+$ be a monotonically increasing function. If $\lim_{t \to \infty} g(2t)\,\|Q(t) - Q\| = 0$, then:
$$\lim_{t \to \infty} \sup_{s \geq 0}\, \min\{\exp(\lambda t),\, g(t)\}\,\|P(s,t) - \Pi\| = 0,$$
where $0 < \lambda < \beta/2$ and $\beta > 0$ is the constant parameter of the exponential rate of convergence at which $\{\hat{X}(t)\}_{t \geq 0}$ converges.
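The exponential rate in the homogeneous case is easy to observe numerically. The following sketch (Python/NumPy) takes the limiting intensity matrix $E[Q]$ of the Section 5 example, whose eigenvalues are $0, -6.2, -6.4$, computes $P(t) = \exp(Qt)$ by eigendecomposition, and checks that $\|P(t) - \Pi\|$ shrinks geometrically in $t$:

```python
import numpy as np

# P(t) = exp(Qt) via eigendecomposition (Q has distinct eigenvalues,
# so Q = V diag(w) V^{-1} and exp(Qt) = V diag(e^{wt}) V^{-1}).
Q = np.array([[-3.8,  3.3,  0.5],
              [ 0.4, -5.3,  4.9],
              [ 1.4,  2.1, -3.5]])
w, V = np.linalg.eig(Q)
Vinv = np.linalg.inv(V)
expmQ = lambda t: np.real(V @ np.diag(np.exp(w * t)) @ Vinv)

Pi = expmQ(50.0)                       # numerically the stable matrix

# Max-row-sum distance ||P(t) - Pi|| at t = 1, 2, 3.
dist = [np.abs(expmQ(t) - Pi).sum(axis=1).max() for t in (1.0, 2.0, 3.0)]

# Each extra unit of time shrinks the distance by roughly e^{-6.2},
# governed by the slowest non-zero eigenvalue of Q.
assert dist[0] > dist[1] > dist[2] > 0.0
assert dist[1] < 0.01 * dist[0] and dist[2] < 0.01 * dist[1]
```

This is precisely the exponential behavior that Theorem 5 extends, under an extra condition, to the non-homogeneous case.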
An important question which logically arises is: what is the rate of convergence to asymptotically attainable structures in an S-NHMSC? In fact, I am interested in finding conditions under which the rate is exponential, because then the practical value of the asymptotic result is greater (see [42,43]). Furthermore, as in [20], the problem of constructing sharp bounds for the rate of convergence of the characteristics of Markov chains to their limiting vectors is very important. That is, it is all too often easier to calculate the limiting characteristics of a process than to find the exact distribution of state probabilities. Therefore, it is very important to be able to use the limiting characteristics as asymptotic approximations to the exact distribution. The following Theorem answers the question of the rate of convergence of the expected structure of an S-NHMSC.
Theorem 6.
Let $(\Omega, \mathcal{F}, P)$ be a complete probability space and a finite S-NHMSC as defined in Section 2. Furthermore, let conditions (1)-(7) of Theorem 4 hold, and in addition assume that the convergences in conditions (1)-(4) and (6) are exponentially fast. Then, the convergence of $E[N(t)]$ as $t \to \infty$ is exponentially fast.
Proof. 
Since $\lim_{t \to \infty} \|Z(t) - Z\| = 0$ exponentially fast, and in addition $Z$ is strongly ergodic, by Theorem 5 there are constants $c_3 > 0$ and $\lambda_1 > 0$ such that:
$$\|C(s, s+t) - \Pi_Z\| \leq c_3\,e^{-\lambda_1 t} \quad \text{for every } s, t > 0. \qquad (42)$$
Since the convergences in conditions (1)-(3) are exponentially fast, we have that:
$$\exists\, c_0 > 0,\ a_0 > 0 \text{ such that } \|p_0(t) - p_0\| \leq c_0\,e^{-a_0 t} \text{ for every } t; \qquad (43)$$
$$\exists\, c_1 > 0,\ a_1 > 0 \text{ such that } \|r_{k+1}(t) - r_{k+1}\| \leq c_1\,e^{-a_1 t} \text{ for every } t; \qquad (44)$$
$$\exists\, c_2 > 0,\ a_2 > 0 \text{ such that } \|R_j(t) - R_j\| \leq c_2\,e^{-a_2 t} \text{ for every } t. \qquad (45)$$
From (25) and (42)-(45), we arrive at:
$$\|E[Q(t)] - E[Q]\| \leq c\,e^{-a t}, \quad \text{with } c > 0,\ a > 0. \qquad (46)$$
Now, from (46), condition (7), and Theorems 4 and 5, we get that:
$$\|E[P_q(s, s+t)] - \Pi_q\| \leq c_q\,e^{-\lambda_2 t}, \quad c_q, \lambda_2 > 0, \text{ for every } t > 0. \qquad (47)$$
We now have the following:
$$\|E[N(t)] - T\,\Pi_q\| = \Big\|N(0)\,E[P_q(0,t)] + \int_0^t p_0(x)\,D(x)\,E[P_q(x,t)]\,dx - T\,\Pi_q\Big\|$$
$$\leq \big\|N(0)\big(E[P_q(0,t)] - \Pi_q\big)\big\| + \Big\|\int_0^t p_0(x)\,D(x)\,E[P_q(x,t)]\,dx - \big(T - T(0)\big)\,\Pi_q\Big\|$$
$$\leq \big\|N(0)\big(E[P_q(0,t)] - \Pi_q\big)\big\| + \big(T - T(t)\big)\,\|\Pi_q\| + \Big\|\int_0^t p_0(x)\,D(x)\,E[P_q(x,t)]\,dx - \big(T(t) - T(0)\big)\,\Pi_q\Big\|$$
$$\leq \big\|N(0)\big(E[P_q(0,t)] - \Pi_q\big)\big\| + \big(T - T(t)\big) + \int_0^t \|E[P_q(x,t)] - \Pi_q\|\,D(x)\,dx + \int_0^t \|p_0(x) - p_0\|\,D(x)\,dx. \qquad (49)$$
From (47), conditions (3) and (6) with their exponentially fast convergences, and (49), we arrive at the relation:
$$\|E[N(t)] - T\,\Pi_q\| \leq c\,e^{-\lambda t}, \quad \text{with } c, \lambda > 0 \text{ and for every } t > 0,$$
which proves the Theorem. □

5. An Illustrative Example from Manpower Planning

In the present section, the previous results are illustrated through an example from manpower planning. Interesting examples of such systems can be found in [44]. Suppose that the intensities were estimated from the historical records of a firm with three grades, and that three intensity matrices were found to be repeatedly exercised; thus, the pool $\mathcal{R}_I(t)$ has the elements:
$$
\mathbf{R}_1(t) = \begin{pmatrix} 4 - 2e^{-3t} & 3 + e^{-3t} & 0 \\ 0 & 5 - 3e^{-t} & 3 + 2e^{-t} \\ 0 & 0 & 7 - e^{-5t} \end{pmatrix}, \qquad
\mathbf{R}_2(t) = \begin{pmatrix} 5 - 10e^{-3t} & 4 + 9e^{-3t} & 0 \\ 0 & 6 - 9e^{-t} & 4 + 7e^{-t} \\ 0 & 0 & 7 - e^{-5t} \end{pmatrix},
$$
$$
\mathbf{R}_3(t) = \begin{pmatrix} 3 - 4e^{-3t} & 2 + 3e^{-3t} & 0 \\ 0 & 7 - 3e^{-t} & 5 + e^{-t} \\ 0 & 0 & 7 - e^{-5t} \end{pmatrix}.
$$
Let also:
$$\mathbf{r}_{k+1}(t) = \left(1 + e^{-3t},\; 2 + e^{-7t},\; 7 + e^{-5t}\right), \qquad \mathbf{p}_0(t) = \left(0.2,\; 0.3,\; 0.5\right).$$
In addition, let us utilize the well-known maximum likelihood estimates for transition intensities ([44]); the matrix of the transition intensities of the compromise non-homogeneous Markov process $\{Z(t)\}_{t \ge 0}$, under the assumption that they are time independent, was found to be:
$$\hat{\mathbf{Z}} = \begin{pmatrix} -5 & 3 & 2 \\ 4 & -9 & 5 \\ 3 & 5 & -8 \end{pmatrix}.$$
Applying Theorem 3 to the above data, we have that conditions 1–3 are satisfied with:
$$
\mathbf{R}_1 = \begin{pmatrix} 4 & 3 & 0 \\ 0 & 5 & 3 \\ 0 & 0 & 7 \end{pmatrix}, \quad
\mathbf{R}_2 = \begin{pmatrix} 5 & 4 & 0 \\ 0 & 6 & 4 \\ 0 & 0 & 7 \end{pmatrix}, \quad
\mathbf{R}_3 = \begin{pmatrix} 3 & 2 & 0 \\ 0 & 7 & 5 \\ 0 & 0 & 7 \end{pmatrix} \quad \text{and} \quad
\mathbf{r}_{k+1} = \left(1,\; 2,\; 7\right).
$$
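As a quick sanity check (an added illustration, not part of the original text), one can verify numerically that $\mathbf{R}_1(t)$, with the entries as written above, converges to the stated limit $\mathbf{R}_1$ at an exponential rate; the other two matrices behave identically.

```python
import numpy as np

# R_1(t) from the example; every entry approaches its limit exponentially
# fast (the slowest terms decay like e^{-t}).
def R1(t):
    return np.array([
        [4 - 2*np.exp(-3*t), 3 + np.exp(-3*t),  0.0],
        [0.0,                5 - 3*np.exp(-t),  3 + 2*np.exp(-t)],
        [0.0,                0.0,               7 - np.exp(-5*t)],
    ])

R1_limit = np.array([[4., 3., 0.],
                     [0., 5., 3.],
                     [0., 0., 7.]])

# Maximum entrywise deviation from the limit at a few increasing times.
dev = [np.max(np.abs(R1(t) - R1_limit)) for t in (1.0, 5.0, 50.0)]
```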
Obviously, $\sup_{t \ge 0} \|\mathbf{Z}(t)\| < \infty$, and with $c_1 = 10$, we get:
$$\mathbf{P}_Z = \begin{pmatrix} 0.5 & 0.3 & 0.2 \\ 0.4 & 0.1 & 0.5 \\ 0.3 & 0.5 & 0.2 \end{pmatrix},$$
which is obviously an irreducible regular stochastic matrix, and thus, condition 4 and condition 5 of Theorem 3 are satisfied. Now, the asymptotic expected intensity matrix is found to be:
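The regularity claim can be checked mechanically. The sketch below (an added illustration using the values above) verifies that $\mathbf{P}_Z$ is stochastic and entrywise positive; a positive stochastic matrix is automatically irreducible and aperiodic, hence regular.

```python
import numpy as np

# Limiting matrix P_Z from the example.
PZ = np.array([[0.5, 0.3, 0.2],
               [0.4, 0.1, 0.5],
               [0.3, 0.5, 0.2]])

row_sums = PZ.sum(axis=1)   # each row of a stochastic matrix sums to 1
min_entry = PZ.min()        # strict positivity implies irreducible + aperiodic
```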
$$\mathbb{E}[\mathbf{Q}] = \begin{pmatrix} -3.8 & 3.3 & 0.5 \\ 0.4 & -5.3 & 4.9 \\ 1.4 & 2.1 & -3.5 \end{pmatrix},$$
which is indeed a matrix of transition intensities: its off-diagonal entries are non-negative and each of its rows sums to zero.
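These two defining properties can be confirmed directly; the check below (an added illustration, with the diagonal signs of $\mathbb{E}[\mathbf{Q}]$ restored as above) verifies non-negative off-diagonal entries and zero row sums.

```python
import numpy as np

# Asymptotic expected intensity matrix from the example.
EQ = np.array([[-3.8,  3.3,  0.5],
               [ 0.4, -5.3,  4.9],
               [ 1.4,  2.1, -3.5]])

row_sums = EQ.sum(axis=1)               # must all be zero
off_diag = EQ[~np.eye(3, dtype=bool)]   # must all be non-negative
```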
Theorems 4 and 5 are straightforwardly applicable with the above data. The present example could be used as a guide for applying the theoretical results in many potential areas of application; see, for example, [44,45,46,47,48].

6. Conclusions

The concept of a non-homogeneous Markov system in a stochastic environment and in continuous time was introduced. Conditions on the basic parameters were found under which the limiting population structure and the related relative population structure exist, and these limits were evaluated in elegant closed analytic forms. The set of all possible relative population structures was characterized under all possible input probability vectors. Finally, an illustrative example from manpower planning was presented, which could be used as a guide for applications in other areas.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Tsantas, N.; Vassiliou, P.-C.G. The non-homogeneous Markov system in a stochastic environment. J. Appl. Probab. 1993, 30, 285–301. [Google Scholar] [CrossRef]
  2. Cohen, J.E. Ergodicity of age structure in populations with Markovian vital rates, I: Countable states. J. Am. Stat. Assoc. 1976, 71, 335–339. [Google Scholar]
  3. Cohen, J.E. Ergodicity of age structure in populations with Markovian vital rates, II: General states. Adv. Appl. Probab. 1977, 9, 18–37. [Google Scholar]
  4. Vassiliou, P.-C.G. Asymptotic behavior of Markov systems. J. Appl. Probab. 1982, 19, 815–817. [Google Scholar] [CrossRef]
  5. Bartholomew, D.J. Stochastic Models for Social Processes, 3rd ed.; Wiley: New York, NY, USA, 1982. [Google Scholar]
  6. Gani, J. Formulae for projecting enrolments and degrees awarded in universities. J. R. Stat. Soc. 1963, 126, 400–409. [Google Scholar] [CrossRef]
  7. Conlisk, J. Interactive Markov chains. J. Math. Sociol. 1976, 6, 163–168. [Google Scholar]
  8. Young, A.; Vassiliou, P.-C.G. A non-linear model for the promotion of staff. J. R. Stat. Soc. 1974, 137, 584–595. [Google Scholar] [CrossRef]
  9. Vassiliou, P.-C.G. A Markov chain model for wastage in manpower systems. Oper. Res. Q. 1976, 27, 57–70. [Google Scholar] [CrossRef]
  10. Vassiliou, P.-C.G. A high order non-linear Markovian model for promotion in Manpower systems. J. R. Stat. Soc. 1978, 141, 86–94. [Google Scholar] [CrossRef]
  11. Iosifescu, M. Finite Markov Processes and Applications; John Wiley: New York, NY, USA, 1980. [Google Scholar]
  12. Goodman, G.S. An intrinsic time for non-stationary Markov chains. Z. Wahrscheinlichkeitsth. 1970, 16, 165–180. [Google Scholar] [CrossRef]
  13. Scott, M.; Arnold, B.C.; Isaacson, D.L. Strong ergodicity for continuous time non-homogeneous Markov chains. J. Appl. Probab. 1982, 19, 692–694. [Google Scholar] [CrossRef]
  14. Johnson, J.T. Ergodic Properties of Non-Homogeneous continuous Markov Chains. Ph.D. Thesis, Iowa State University, Ames, IA, USA, 1984. [Google Scholar]
  15. Johnson, J.T.; Isaacson, D. Conditions for strong ergodicity using intensity matrices. J. Appl. Probab. 1988, 25, 34–42. [Google Scholar] [CrossRef]
  16. Yong, P.L. Some results related to q-bounded Markov processes. Nanta Math. 1976, 8, 34–41. [Google Scholar]
  17. Zeifman, A.I. Quasi-ergodicity for non-homogeneous continuous time Markov chains. J. Appl. Probab. 1989, 26, 643–648. [Google Scholar] [CrossRef]
  18. Zeifman, A.I.; Isaacson, D.L. On strong ergodicity for non-homogeneous continuous-time Markov chains. Stoch. Process. Their Appl. 1994, 50, 263–273. [Google Scholar] [CrossRef] [Green Version]
  19. Zeifman, A.I.; Korolev, V.Y. Two-sided bounds on the rate of convergence for continuous-time finite inhomogeneous Markov chains. Stat. Probab. Lett. 2015, 103, 30–36. [Google Scholar] [CrossRef]
  20. Zeifman, A.I.; Korolev, V.Y.; Satin, Y.A.; Kiseleva, K.M. Lower bounds for the rate of convergence for continuous-time inhomogeneous Markov chains with a finite state space. Stat. Probab. Lett. 2018, 137, 84–90. [Google Scholar] [CrossRef] [Green Version]
  21. Mitrophanov, A.Y. Stability and exponential convergence of continuous time Markov chains. J. Appl. Probab. 2003, 40, 970–979. [Google Scholar] [CrossRef]
  22. Bartholomew, D.J. Stochastic Models for Social Processes, 2nd ed.; Wiley: New York, NY, USA, 1973. [Google Scholar]
  23. McClean, S.I. A continuous time population model with Poisson recruitment. J. Appl. Probab. 1976, 13, 348–354. [Google Scholar] [CrossRef]
  24. McClean, S.I. Continuous time stochastic models of a multigrade population. J. Appl. Probab. 1978, 15, 26–37. [Google Scholar] [CrossRef]
  25. Gerontidis, I. On certain aspects of non-homogeneous Markov systems in continuous time. J. Appl. Probab. 1990, 27, 530–544. [Google Scholar] [CrossRef]
  26. McClean, S.I.; Montgomery, E.; Ugwuowo, F. Non-homogeneous continuous-time Markov and semi-Markov manpower models. Appl. Stoch. Models Data Anal. 1998, 13, 191–198. [Google Scholar] [CrossRef]
  27. Tsaklidis, G. The evolution of the attainable structures of a continuous time homogeneous Markov system with fixed size. J. Appl. Probab. 1996, 33, 34–47. [Google Scholar] [CrossRef]
  28. Kipouridis, I.; Tsaklidis, G. The size order of the state vector of continuous-time homogeneous Markov system with fixed size. J. Appl. Probab. 2001, 38, 635–646. [Google Scholar] [CrossRef]
  29. Vasiliadis, G.; Tsaklidis, G. On the distribution of the state sizes of closed continuous time homogeneous Markov systems. Method. Comput. Appl. Probab. 2009, 11, 561–582. [Google Scholar] [CrossRef]
  30. Vasiliadis, G. On the distributions of the state sizes of the continuous time homogeneous Markov system with finite state capacities. Methodol. Comput. Appl. Probab. 2012, 14, 863–882. [Google Scholar] [CrossRef]
  31. Vasiliadis, G. Transient analysis of the M/M/k/N/N queue using a continuous time homogeneous Markov system with finite state size capacity. Commun. Stat. Theory Methods 2014, 43, 1548–1562. [Google Scholar] [CrossRef]
  32. Dimitriou, V.A.; Georgiou, A.C. Introduction, analysis and asymptotic behavior of a multi-level manpower planning model in a continuous time setting under potential department contraction. Commun. Stat. Theory Methods 2020, 50, 1173–1199. [Google Scholar] [CrossRef]
  33. Esquível, M.L.; Krasii, N.P.; Guerreiro, G.R. Open Markov type population models: From discrete to continuous time. Mathematics 2021, 9, 1496. [Google Scholar] [CrossRef]
  34. Bartholomew, D.J. Maintaining a grade or age structure in a stochastic environment. Adv. Appl. Prob. 1977, 11, 603–615. [Google Scholar] [CrossRef]
  35. Vassiliou, P.-C.G. The evolution of the theory of non-homogeneous Markov systems. Appl. Stoch. Models Data Anal. 1997, 13, 159–176. [Google Scholar] [CrossRef]
  36. Iosifescu, M. Finite Markov Processes and Applications; Dover Publications: New York, NY, USA, 2007. [Google Scholar]
  37. Georgiou, A.C.; Vassiliou, P.-C.G. Periodicity of asymptotically attainable structures in Non-homogeneous Markov systems. Linear Algebra Its Appl. 1992, 176, 137–174. [Google Scholar] [CrossRef] [Green Version]
  38. Tsantas, N.; Georgiou, A.C. Periodicity of equilibrium structures in a time dependent Markov model under stochastic environment. Appl. Stoch. Models Data Anal. 1994, 10, 269–277. [Google Scholar] [CrossRef]
  39. Tsantas, N. Ergodic behavior of a Markov chain model in a stochastic environment. Math. Methods Oper. Res. 2001, 54, 101–117. [Google Scholar] [CrossRef]
  40. Vassiliou, P.-C.G.; Tsantas, N. Stochastic Control in Non-Homogeneous Markov Systems. Int. J. Comput. Math. 1984, 16, 139–155. [Google Scholar] [CrossRef]
  41. Darroch, J.N.; Seneta, E. On quasi-stationary distributions in absorbing continuous-time finite Markov chains. J. Appl. Probab. 1967, 4, 192–196. [Google Scholar]
  42. Vassiliou, P.-C.G.; Tsaklidis, G. The rate of convergence of the vector of variances and covariances in non-homogeneous Markov set systems. J. Appl. Probab. 1989, 27, 776–783. [Google Scholar] [CrossRef]
  43. Vassiliou, P.-C.G. On the periodicity of non-homogeneous Markov chains and systems. Linear Algebra Its Appl. 2015, 471, 654–684. [Google Scholar] [CrossRef]
  44. Bartholomew, D.J.; Forbes, A.; McClean, S.I. Statistical Techniques in Manpower Planning; Wiley: Chichester, UK, 1991. [Google Scholar]
  45. McClean, S.I. Using Markov models to characterize and predict process target compliance. Mathematics 2021, 9, 1187. [Google Scholar] [CrossRef]
  46. McClean, S.I.; Gillespie, J.; Garg, L.; Barton, M.; Scotney, B.; Fullerton, K. Using phase-type models to cost stroke patient care across health, social and community services. Eur. J. Oper. Res. 2014, 236, 190–199. [Google Scholar] [CrossRef]
  47. Patoucheas, P.D.; Stamou, G. Non-homogeneous Markovian models in ecological modeling a study of zoobenthos in Thermaikos Gulf, Greece. Ecol. Modell. 1993, 66, 197–215. [Google Scholar] [CrossRef]
  48. Gao, K.; Yan, X.; Peng, R.; Xing, L. Economic design of a linear consecutive connected system considering cost and signal loss. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 5116–5128. [Google Scholar] [CrossRef]
Vassiliou, P.-C.G. Limiting Distributions of a Non-Homogeneous Markov System in a Stochastic Environment in Continuous Time. Mathematics 2022, 10, 1214. https://doi.org/10.3390/math10081214
