Article

A Wavelet-Based Computational Framework for a Block-Structured Markov Chain with a Continuous Phase Variable

1 School of Traffic and Logistics, Central South University of Forestry and Technology, Changsha 410004, China
2 Department of Statistics and Probability, Michigan State University, East Lansing, MI 48824, USA
3 School of Mathematics and Statistics, HNP-LAMA, New Campus, Central South University, Changsha 410083, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(7), 1587; https://doi.org/10.3390/math11071587
Submission received: 25 February 2023 / Revised: 19 March 2023 / Accepted: 20 March 2023 / Published: 24 March 2023
(This article belongs to the Special Issue Probability Theory and Stochastic Modeling with Applications)

Abstract: We consider the computational issues of the stationary probabilities for block-structured discrete-time Markov chains that have upper-Hessenberg or lower-Hessenberg transition kernels with a continuous phase set. An effective computational framework is proposed based on the wavelet transform, which extends and modifies the arguments in the literature for quasi-birth-death (QBD) processes. A numerical procedure is developed for computing the stationary probabilities based on the fast discrete wavelet transform, and several examples are presented to illustrate its effectiveness.

1. Introduction

Consider a two-dimensional block-structured discrete-time Markov chain (DTMC) $\{(L_n, X_n): n \in \mathbb{N}\}$ on the state space $\mathbb{N} \times \mathbb{R}$, where $\mathbb{N}$ and $\mathbb{R}$ are the sets of non-negative integers and real numbers, respectively. Denote by $\mathcal{B}(\mathbb{R})$ the Borel $\sigma$-field of $\mathbb{R}$. The transition probability law is time homogeneous and is characterized by the following transition kernel
$$P_{ij}(x, A) = \mathbb{P}\{(L_{n+1}, X_{n+1}) \in \{j\} \times A \mid (L_n, X_n) = (i, x)\}, \tag{1}$$
where $i, j \in \mathbb{N}$, $x \in \mathbb{R}$ and $A \in \mathcal{B}(\mathbb{R})$. Recall that a two-dimensional function $F(x, A)$ is called a kernel if it is a measurable function in $x$ for each fixed $A \in \mathcal{B}(\mathbb{R})$, and a non-negative measure on $\mathcal{B}(\mathbb{R})$ for each fixed $x \in \mathbb{R}$. When $A = (-\infty, y]$, we write $F(x, A)$ as $F(x, y)$ for simplicity. Note that the kernel function $P_{ij}(x, y)$ is stochastic in the sense that $\sum_{j} P_{ij}(x, \infty) := \lim_{y \to \infty} \sum_{j} P_{ij}(x, y) = 1$ for all $i$ and all $x$. The level and phase of each state $(i, x)$ are represented by the first component $i$ and the second component $x$, respectively. For any $i \geq 0$, define $\ell_i = \{(i, x): x \in \mathbb{R}\}$ to be the level-$i$ set. Then, the state space $E$ can be decomposed as $E = \bigcup_{i=0}^{\infty} \ell_i$. For $n \geq 1$, the corresponding $n$-step transition kernel is given by
$$P^{n}(i, x; j, A) = \sum_{k \in \mathbb{N}} \int_{\mathbb{R}} P^{n-1}(i, x; k, dz)\, P(k, z; j, A) = \mathbb{P}\{(L_n, X_n) \in \{j\} \times A \mid (L_0, X_0) = (i, x)\}.$$
Two different types of block-structured discrete-time Markov chains are the focus of this paper. The first one is the discrete-time GI/M/1-type Markov chain, whose transition kernel matrix P G I ( x , y ) : = ( P G I ( i , x ; j , y ) ) i , j N is level independent and has the following lower-Hessenberg block form:
$$P^{GI}(x, y) = \begin{pmatrix} B_0(x, y) & A_0(x, y) & 0 & 0 & \cdots \\ B_1(x, y) & A_1(x, y) & A_0(x, y) & 0 & \cdots \\ B_2(x, y) & A_2(x, y) & A_1(x, y) & A_0(x, y) & \cdots \\ B_3(x, y) & A_3(x, y) & A_2(x, y) & A_1(x, y) & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$$
The second one is the discrete-time M/G/1-type Markov chain, whose transition kernel matrix P M ( x , y ) : = ( P M ( i , x ; j , y ) ) i , j N is level independent and has the following upper-Hessenberg block form:
$$P^{M}(x, y) = \begin{pmatrix} B_0(x, y) & B_1(x, y) & B_2(x, y) & B_3(x, y) & \cdots \\ A_0(x, y) & A_1(x, y) & A_2(x, y) & A_3(x, y) & \cdots \\ 0 & A_0(x, y) & A_1(x, y) & A_2(x, y) & \cdots \\ 0 & 0 & A_0(x, y) & A_1(x, y) & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$$
These block-structured Markov chains share the special feature that transitions of the level are skip-free to the right (GI/M/1-type) or skip-free to the left (M/G/1-type), respectively.
Tweedie [1] proposed the GI/M/1-type Markov chain with a continuous phase set and demonstrated that the positive recurrent GI/M/1-type Markov chain has an operator-geometric stationary distribution. Thus, Tweedie [1] extended the well-known results for the GI/M/1-type Markov chain with a finite phase set derived by Neuts [2]. Tweedie's finding was later applied by Breuer [3] to investigate the stationary distribution of the embedded GI/G/k queue with a Lebesgue-dominated inter-arrival time distribution. A positive recurrent tridiagonal block-structured quasi-birth-death (QBD) process with a continuous phase set, as well as a computational framework for its stationary distribution, were investigated by Nielsen and Ramaswami [4], who also motivated the study of models with a continuous phase set. The computational framework was recently extended and improved by Jiang et al. [5] by incorporating the wavelet transform approach.
The GI/M/1-type and M/G/1-type Markov chains with a finite phase set were investigated systematically by Neuts in 1981 [2] and 1989 [6], respectively. Effective solver tools for solving the stationary distribution for these chains were developed by Bini et al. in [7], based on the algorithms collected in [8]. It is known that the matrices R and G are key matrices for solving stationary distributions for GI/M/1-type and M/G/1-type Markov chains, respectively. Since R and G are closely connected by Ramaswami dual and Bright dual, the computation of matrix R for GI/M/1-type chains can be reduced to the computation of matrix G for M/G/1-type Markov chains ([9,10,11,12]). Several effective algorithms have been developed to compute the matrix G, such as functional iteration, Newton iteration, invariant subspace method, cyclic reduction and Ramaswami Reduction. Please refer to [13] for a detailed description of the algorithms.
As far as we know, the following two issues are still not well addressed in the literature:
(i) For a positive recurrent GI/M/1-type Markov chain with a continuous phase set, numerical algorithms for computing the stationary distribution are missing, although the theoretical framework has been established in [1];
(ii) M/G/1-type Markov chains are of the same importance as GI/M/1-type Markov chains. However, both the theoretical and the computational frameworks are missing for M/G/1-type Markov chains with a continuous phase set.
The current research is motivated to investigate the above two issues. This paper is organized into six sections. We provide an overview of DTMCs on a general state space and the wavelet series expansion in two dimensions in Section 2. The GI/M/1-type Markov chains are introduced in Section 3, most of which are well known in the literature [1], except for the computational analysis. The analysis of stationary distributions for M/G/1-type Markov chains is performed in Section 4. Numerical experiments, including a brief description of numerical algorithms and two illustrative examples, are presented in Section 5. Comparisons among different algorithms are executed with respect to the accuracy and speed of calculation. Conclusions are presented in Section 6. Please refer to Table A1 for a summary of frequently used notations.

2. Preliminaries

2.1. Basics about DTMCs on a General State Space

We present some basic concepts for DTMCs on a general state space. Please refer to [14] for more details.
Let $\Phi_n$ be a DTMC on a general state space $E$ endowed with the countably generated $\sigma$-field $\mathcal{B}(E)$. Define $\tau_A = \inf\{n \geq 1: \Phi_n \in A\}$ to be the first return time to $A$. For a non-negative nontrivial measure $\varphi$, the chain $\Phi_n$ is called $\varphi$-irreducible if
$$L(x, A) := \mathbb{P}\{\tau_A < \infty \mid \Phi_0 = x\} > 0$$
for any $A \in \mathcal{B}(E)$ with $\varphi(A) > 0$ and any $x \in E$. A $\varphi$-irreducible chain admits a maximal irreducibility measure $\psi$, and we then also call the chain $\psi$-irreducible. A set $A \in \mathcal{B}(E)$ is called Harris recurrent if $L(x, A) = 1$ for all $x \in A$. The chain $\Phi_n$ is called Harris recurrent if it is $\psi$-irreducible and every set in $\mathcal{B}^+(E) := \{A \in \mathcal{B}(E): \psi(A) > 0\}$ is Harris recurrent. A Harris recurrent chain has a unique (up to a constant multiple) invariant measure $\Pi$ such that
$$\Pi(A) = \int_E \Pi(dx)\, P(x, A).$$
A Harris recurrent chain with finite $\Pi(E)$ is said to be Harris positive recurrent. If $\Phi_n$ is Harris positive recurrent and aperiodic, then
$$\Pi(A) = \lim_{n \to \infty} P^n(x, A),$$
which implies that the limit of the $n$-step transition kernel exists and is independent of the initial state $x$. In this case, $\Pi$ (normalized so that $\Pi(E) = 1$) is called the invariant probability measure or the stationary probability distribution.
We now introduce the censored Markov chain, which will be used later to deal with the invariant probability distributions for block-structured Harris positive recurrent chains. Let $A$ be a non-empty set in $\mathcal{B}(E)$. Let $\theta_k$ be the $k$-th time that $\Phi_n$ successively visits a state in $A$, i.e., $\theta_0 = \inf\{m \geq 0: \Phi_m \in A\}$ and $\theta_{k+1} = \inf\{m \geq \theta_k + 1: \Phi_m \in A\}$. The censored Markov chain $\Phi^A = \{\Phi_k^A, k \geq 0\}$ on $A$ is defined by $\Phi_k^A = \Phi_{\theta_k}$, $k \geq 0$, whose one-step transition kernel is denoted by $P^A(x, B)$, $x \in E$, $B \in \mathcal{B}(E)$. Define
$${}_A P^n(x, B) = \mathbb{P}\{\Phi_n \in B,\ \Phi_m \notin A,\ 1 \leq m \leq n - 1 \mid \Phi_0 = x\},$$
and
$$U_A(x, B) = \sum_{n=1}^{\infty} {}_A P^n(x, B).$$
When starting with $\Phi_0 = x \in A$, for $B \subseteq A$ the censored chain $\Phi^A$ evolves according to the transition law
$$P^A(x, B) = U_A(x, B) = \mathbb{P}\{\Phi_{\tau_A} \in B \mid \Phi_0 = x\}.$$
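In the finite-state case, the kernels above reduce to familiar matrix formulas, which may help fix ideas. The following is a minimal sketch (with a hypothetical 4-state chain, not taken from this paper): partitioning the transition matrix over the censoring set $A$ and its complement $B$, the censored kernel is $P^A = P_{AA} + P_{AB}(I - P_{BB})^{-1}P_{BA}$, which sums the contributions of all excursions through $B$, and the censored chain inherits the stationary distribution restricted to $A$.

```python
import numpy as np

# Finite-state sketch of the censoring construction (hypothetical 4-state
# chain). The censored kernel on A is P^A = P_AA + P_AB (I - P_BB)^{-1} P_BA,
# the matrix form of U_A restricted to A.

P = np.array([[0.2, 0.3, 0.4, 0.1],
              [0.1, 0.4, 0.2, 0.3],
              [0.3, 0.3, 0.2, 0.2],
              [0.25, 0.25, 0.25, 0.25]])

A = [0, 1]          # censor on the first two states
B = [2, 3]
PAA, PAB = P[np.ix_(A, A)], P[np.ix_(A, B)]
PBA, PBB = P[np.ix_(B, A)], P[np.ix_(B, B)]
PA = PAA + PAB @ np.linalg.inv(np.eye(len(B)) - PBB) @ PBA

assert np.allclose(PA.sum(axis=1), 1.0)   # censored kernel is stochastic

def stationary(M):
    # left Perron eigenvector of a stochastic matrix, normalized to sum 1
    w, v = np.linalg.eig(M.T)
    vec = np.real(v[:, np.argmin(np.abs(w - 1))])
    return vec / vec.sum()

pi = stationary(P)
piA = stationary(PA)
print(np.allclose(piA, pi[A] / pi[A].sum()))  # True
```

The last check is the finite-state analogue of the invariance argument used later in Section 4: censoring preserves the stationary proportions on the retained set.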

2.2. Basics about Wavelet in Two Dimensions

This section is concerned with some basics about wavelets, most of which are taken directly from [5]. Please refer to [15,16] for more details about wavelet analysis.
Denote the scaling function and the wavelet function by $\phi$ and $\psi$, respectively. For all $j, n \in \mathbb{Z}$, let $\phi_{j,n}(x) = 2^{-j/2}\, \phi(2^{-j}(x - 2^j n))$ and $\psi_{j,n}(x) = 2^{-j/2}\, \psi(2^{-j}(x - 2^j n))$. Define three wavelets
$$W^{(1)}(x_1, x_2) = \phi(x_1)\psi(x_2), \quad W^{(2)}(x_1, x_2) = \psi(x_1)\phi(x_2), \quad W^{(3)}(x_1, x_2) = \psi(x_1)\psi(x_2),$$
and for all $j \in \mathbb{Z}$,
$$W^{(k)}_{j,n_1,n_2}(x_1, x_2) = \frac{1}{2^j}\, W^{(k)}\!\left(\frac{x_1 - 2^j n_1}{2^j}, \frac{x_2 - 2^j n_2}{2^j}\right), \quad n_1, n_2 \in \mathbb{Z},\ 1 \leq k \leq 3.$$
Now, we consider the wavelet series expansion of a two-dimensional function. For each $i \in \mathbb{Z}$, define the column vectors $\phi_i(x) = [\phi_{i,n}(x): n \in \mathbb{Z}]$, $\psi_i(x) = [\psi_{i,n}(x): n \in \mathbb{Z}]$, and $\zeta_i(x) = [\phi_i^T(x), \psi_i^T(x)]^T$. By Lemma 3.1 in [5], any function $u(x, y) \in L^2(\mathbb{R}^2)$ can be expanded as
$$u(x, y) = \zeta^T(x)\, \bar{U}\, \zeta(y), \tag{3}$$
where $\zeta(x) = [\zeta_i(x): i \in \mathbb{Z}]$ is a column vector and the diagonal blocks of $\bar{U}$ are given by
$$\bar{U}_i = \begin{pmatrix} 0 & \bar{U}_i^{(1)} \\ \bar{U}_i^{(2)} & \bar{U}_i^{(3)} \end{pmatrix}$$
with $(\bar{U}_i^{(k)})_{m,n} = \langle u, W^{(k)}_{i,m,n} \rangle$.
Let $U(x, y)$ be a kernel function whose density function is assumed to exist and is denoted by $u(x, y) := \frac{\partial U(x, y)}{\partial y}$. On the one hand, applying (3) to the density of the kernel function $U(x, y)$, which is referred to as the wavelet transform (WT), yields the matrix $\bar{U}$, also known as the associated matrix of $U(x, y)$. On the other hand, for a given associated matrix $\bar{U}$, we can recover the density function $u(x, y)$ by applying (3) in the reverse direction, which is called the inverse wavelet transform (IWT).
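As an illustration of the WT/IWT pair, the following sketch implements a one-level 2D Haar transform on a sampled density and checks that the map is exactly invertible. This is only a toy version of the fast discrete wavelet transform used later (see Algorithms 5.1 and 5.2 in [5]); the grid size and the averaging normalization are assumptions of this sketch.

```python
import numpy as np

# Minimal sketch (not the authors' code) of the WT/IWT idea with a one-level
# 2D Haar transform: a sampled density u(x, y) is mapped to four coefficient
# blocks (playing the role of the "associated matrix"), and the map is
# exactly invertible.

def haar2(u):
    a = (u[0::2] + u[1::2]) / 2.0          # row averages
    d = (u[0::2] - u[1::2]) / 2.0          # row differences
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    u = np.empty((2 * a.shape[0], a.shape[1]))
    u[0::2], u[1::2] = a + d, a - d
    return u

rng = np.random.default_rng(0)
u = rng.random((8, 8))                      # sampled "density" on an 8x8 grid
assert np.allclose(ihaar2(*haar2(u)), u)    # perfect reconstruction
```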
As will be seen in the following sections, handling the convolution operations of the transition kernels is crucial for investigating the stationary distributions. The wavelet transform is introduced to turn these convolution operations into matrix operations by expanding the kernels in the wavelet series. For any $A \in \mathcal{B}(\mathbb{R})$, define the convolution $C_1 * C_2$ of two kernel functions $C_1(x, y)$ and $C_2(x, y)$ by
$$C_1 * C_2(x, A) = \int_{\mathbb{R}} C_1(x, dz)\, C_2(z, A),$$
and define $C_1^{(k)}(x, y)$ recursively by
$$C_1^{(k)}(x, y) = \int_{\mathbb{R}} C_1^{(k-1)}(x, dz)\, C_1(z, y) = \int_{\mathbb{R}} C_1(x, dz)\, C_1^{(k-1)}(z, y),$$
where $C_1^{(0)}(x, \cdot)$ is the point mass at $x$, i.e., $C_1^{(0)}(x, y) = 1$ for $y \geq x$ and $C_1^{(0)}(x, y) = 0$ for $y < x$. If $\nu$ is a signed measure on $E$, we write
$$\nu * C_1(A) = \int_E \nu(dx)\, C_1(x, A).$$
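Numerically, the reason convolutions become matrix products is already visible at the level of plain discretization: sampling the densities on a grid with spacing $h$, the convolution integral is approximated by a matrix product scaled by $h$. A small sketch with hypothetical densities $c_1(x, z) = 2z$ and $c_2(z, y) = 3y^2$ on $[0, 1]$:

```python
import numpy as np

# Sketch of why kernel convolution becomes matrix multiplication after
# discretization (the wavelet-domain analogue is Theorem 1(i)). With
# densities c1, c2 sampled on a grid of spacing h, the density of C1*C2 is
# the integral of c1(x, z) c2(z, y) over z, approximated by (C1 @ C2) * h.

n, h = 400, 1.0 / 400
z = (np.arange(n) + 0.5) * h                # midpoints of [0, 1]

c1 = np.tile(2.0 * z, (n, 1))               # hypothetical density c1(x, z) = 2z
c2 = np.tile(3.0 * z**2, (n, 1))            # hypothetical density c2(z, y) = 3y^2

conv = (c1 @ c2) * h                        # discretized convolution density
# Analytically, the convolution density is 3y^2 (independent of x), since
# the integral of 2z over [0, 1] equals 1.
exact = np.tile(3.0 * z**2, (n, 1))
print(np.max(np.abs(conv - exact)))         # small quadrature error
```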
In order to expand the kernel functions through the wavelet transform, we need the following assumption and theorem, which are both taken from [5].
Assumption 1.
All the kernel functions $B_i(x, y)$ and $A_i(x, y)$, $i \geq 0$, belong to $\mathcal{H}$, where $\mathcal{H}$ is the set of kernel functions $U(x, y)$ having a density function $u(x, y)$ equal to zero outside $H \times H$, and $H \subset \mathbb{R}$ has finite Lebesgue measure.
Theorem 1
([5]). Let $\{F_k(x, y), k \geq 1\}$ be a sequence of kernel functions in $\mathcal{H}$. Denote their density functions by $\{f_k(x, y), k \geq 1\}$ and their associated matrices by $\{\bar{F}_k, k \geq 1\}$.
(i) For any fixed $n$, the convolution kernel function $F_1 * F_2 * \cdots * F_n(x, y)$ is also in $\mathcal{H}$, and its associated matrix is $\prod_{k=1}^{n} \bar{F}_k$;
(ii) For any fixed $n$, the additive kernel function $(F_1 + F_2 + \cdots + F_n)(x, y)$ is also in $\mathcal{H}$, and its associated matrix is $\sum_{k=1}^{n} \bar{F}_k$;
(iii) If $f_n(x, y)$ converges to $f(x, y)$, then the kernel function $F(x, y) := \int_{-\infty}^{y} f(x, z)\, dz$ is also in $\mathcal{H}$, and its associated matrix is $\bar{F} = \lim_{n \to \infty} \bar{F}_n$.

3. GI/M/1-Type Markov Chains

Consider a GI/M/1-type Markov chain $(L_n, X_n)$ whose transition law $P$, given by (1), satisfies, for any $C \in \mathcal{B}(\mathbb{R})$,
$$P(i, x; j, \mathbb{R}) = 0, \quad j > i + 1,$$
$$P(i, x; j, C) = A_{i-j+1}(x, C), \quad 1 \leq j \leq i + 1,$$
$$P(i, x; 0, C) = B_i(x, C), \quad i \geq 0.$$
Define the kernel $R(x, C)$ to be the expected number of visits to $\{i+1\} \times C$, starting from $(i, x)$, before the chain returns to the taboo set $\bigcup_{k=0}^{i} \ell_k$. From [1], we know that the censored Markov chain $(L_n, X_n)^{\ell_0}$ of the GI/M/1-type Markov chain on the zero level set $\ell_0$ has the following transition kernel
$$P_0^{GI}(x, C) = \sum_{k=0}^{\infty} R^{(k)} * B_k(x, C).$$
The following theorem is taken from [1], which characterizes the invariant probability measure for GI/M/1-type Markov chains with a continuous phase set.
Theorem 2
([1]). Suppose that the GI/M/1-type Markov chain $(L_n, X_n)$ with a continuous phase set is $\psi$-irreducible and Harris positive recurrent. Then, its unique stationary probability measure $\Pi$, decomposed as $\Pi = (\Pi_0, \Pi_1, \Pi_2, \ldots)$, satisfies the following recursive formula
$$\Pi_k(A) = \Pi_0 * R^{(k)}(A),$$
where the kernel $R(x, A)$ is the minimal non-negative solution of the equation
$$R(x, A) = \sum_{i=0}^{\infty} R^{(i)} * A_i(x, A),$$
and $\Pi_0(A)$ is uniquely determined by
$$\Pi_0 * P_0^{GI}(A) = \Pi_0(A), \qquad \sum_{k=0}^{\infty} \Pi_0 * R^{(k)}(\mathbb{R}) = 1.$$
Applying Theorem 2 and Theorem 1, we can obtain the following theorem directly.
Theorem 3.
Suppose that the GI/M/1-type Markov Chain ( L n , X n ) with a continuous phase set is ψ-irreducible and Harris positive recurrent and that Assumption 1 holds.
(i) The kernels $P_0^{GI}(x, y)$, $R(x, y)$ and $B_k(x, y)$ are in $\mathcal{H}$; their associated matrices are denoted by $\bar{P}_0^{GI}$, $\bar{R}$ and $\bar{B}_k$, respectively.
(ii) The invariant probability measure $\Pi_k$ is in $\mathcal{H}$. Let $\bar{\Pi}_k$ be the associated row vector of $\Pi_k(y)$, i.e., $\pi_k(y) = \bar{\Pi}_k \zeta(y)$, where $\pi_k(y)$ is the density of $\Pi_k(y)$ and $\zeta(y)$ is defined in Section 2.2. Then, we have
$$\bar{\Pi}_k = \bar{\Pi}_0 \bar{R}^{k},$$
and
$$\bar{\Pi}_0 \bar{P}_0^{GI} = \bar{\Pi}_0, \qquad \bar{\Pi}_0 (I - \bar{R})^{-1} \mathbf{1} = 1,$$
where $\bar{P}_0^{GI} = \sum_{k=0}^{\infty} \bar{R}^{k} \bar{B}_k$ and $\mathbf{1}$ is the column vector of ones with appropriate dimension.
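To make the structure of Theorem 3 concrete, the following sketch works through a toy GI/M/1-type chain with a single phase, so that every "associated matrix" is 1×1; this is an assumption for illustration and not the wavelet setting itself. The kernel equation for $R$ becomes the scalar fixed-point equation $R = A_0 + R A_1 + R^2 A_2$, whose minimal non-negative root can be checked by hand.

```python
import numpy as np

# Hedged illustration of the fixed-point structure behind Theorem 3, using a
# hypothetical single-phase GI/M/1-type chain (all matrices 1x1). With
# A0 = 0.3 (up one level), A1 = 0.3 (same level), A2 = 0.4 (down one level),
# R solves R = A0 + R*A1 + R^2*A2; the roots are 0.75 and 1, and the
# iteration R_{n+1} = sum_i R_n^i A_i started from 0 converges to the
# minimal non-negative solution 0.75.

A = [np.array([[0.3]]), np.array([[0.3]]), np.array([[0.4]])]

R = np.zeros((1, 1))
for _ in range(500):
    R = sum(np.linalg.matrix_power(R, i) @ A[i] for i in range(len(A)))

assert abs(R[0, 0] - 0.75) < 1e-8

# In one dimension the boundary eigen-equation is trivial, so only the
# normalization pi_0 (I - R)^{-1} 1 = 1 remains, giving pi_0 = 0.25 here.
pi0 = 1.0 / np.linalg.inv(np.eye(1) - R).sum()
print(pi0)  # ≈ 0.25
```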

4. M/G/1-Type Markov Chains

In this section, we consider an M/G/1-type Markov chain $(L_n, X_n)$ whose transition law $P$, given by (1), satisfies, for any $C \in \mathcal{B}(\mathbb{R})$,
$$P(i, x; j, C) = 0, \quad \text{for } j < i - 1,\ i \geq 1,$$
$$P(0, x; j, C) = B_j(x, C), \quad j \geq 0,$$
$$P(i, x; j, C) = A_{j-i+1}(x, C), \quad \text{for } j \geq i - 1,\ i \geq 1.$$
Define $\tau_i = \inf\{n \geq 1: L_n \leq i\}$ to be the first return time to the level set $\bigcup_{k=0}^{i} \ell_k$ for any $i \geq 0$; since the chain is skip-free to the left, the level at time $\tau_i$ equals $i$. For any $x \in \mathbb{R}$ and any $A \in \mathcal{B}(\mathbb{R})$, define the kernel function
$$G(x, A) = \mathbb{P}\{\tau_i < \infty,\ X_{\tau_i} \in A \mid L_0 = i + 1,\ X_0 = x\},$$
which is independent of $i$ due to the level-independent structure of the chain. The first result concerns the kernel $G(x, A)$, which plays a key role in analyzing M/G/1-type Markov chains.
Theorem 4.
Suppose that the M/G/1-type Markov chain $(L_n, X_n)$ is $\psi$-irreducible. For any $A \in \mathcal{B}(\mathbb{R})$, the kernel $G(x, A)$ is the minimal non-negative solution of the equation
$$G(x, A) = \sum_{i=0}^{\infty} A_i * G^{(i)}(x, A), \tag{4}$$
where $G^{(i)}(x, A)$ is the $i$-fold convolution of the kernel $G(x, A)$ with itself.
Proof. 
We first show that the kernel G ( x , A ) is a solution of Equation (4).
By conditioning on the state after the first transition, the kernel $G(x, A)$ can be decomposed as follows
$$G(x, A) = \int_{\mathbb{R}} \sum_{i \in \mathbb{N}} \mathbb{P}\{L_1 = i,\ X_1 \in dy \mid L_0 = 1,\ X_0 = x\} \cdot \mathbb{P}\{\tau_0 < \infty,\ X_{\tau_0} \in A \mid L_0 = i,\ X_0 = y\} = \int_{\mathbb{R}} \sum_{i \in \mathbb{N}} A_i(x, dy)\, \mathbb{P}\{\tau_0 < \infty,\ X_{\tau_0} \in A \mid L_0 = i,\ X_0 = y\}. \tag{5}$$
We use an inductive argument to show that
$$G^{(i)}(y, A) = \mathbb{P}\{\tau_0 < \infty,\ X_{\tau_0} \in A \mid L_0 = i,\ X_0 = y\}, \quad i \geq 1. \tag{6}$$
Since the chain is level independent, when $i = 1$ we have
$$G^{(1)}(y, A) = G(y, A) = \mathbb{P}\{\tau_k < \infty,\ X_{\tau_k} \in A \mid L_0 = k + 1,\ X_0 = y\}$$
for any $k \geq 0$. Suppose that $G^{(n)}$ satisfies
$$G^{(n)}(y, A) = \mathbb{P}\{\tau_0 < \infty,\ X_{\tau_0} \in A \mid L_0 = n,\ X_0 = y\}.$$
By conditioning on the phase at the first hitting of level $n$ and using the strong Markov property, we have
$$\begin{aligned} \mathbb{P}\{\tau_0 < \infty,\ X_{\tau_0} \in A \mid L_0 = n+1,\ X_0 = y\} &= \int_{\mathbb{R}} \mathbb{P}\{\tau_n < \infty,\ X_{\tau_n} \in dx \mid L_0 = n+1,\ X_0 = y\}\, \mathbb{P}\{\tau_0 < \infty,\ X_{\tau_0} \in A \mid L_{\tau_n} = n,\ X_{\tau_n} = x\} \\ &= \int_{\mathbb{R}} \mathbb{P}\{\tau_n < \infty,\ X_{\tau_n} \in dx \mid L_0 = n+1,\ X_0 = y\}\, \mathbb{P}\{\tau_0 < \infty,\ X_{\tau_0} \in A \mid L_0 = n,\ X_0 = x\} \\ &= \int_{\mathbb{R}} G^{(1)}(y, dx)\, G^{(n)}(x, A) = G^{(n+1)}(y, A). \end{aligned}$$
Substituting (6) into (5), we have
$$G(x, A) = \int_{\mathbb{R}} \sum_{i=0}^{\infty} A_i(x, dy)\, G^{(i)}(y, A) = \sum_{i=0}^{\infty} A_i * G^{(i)}(x, A),$$
where the order of integration and summation is exchanged by Fubini's theorem.
Next we demonstrate that G ( x , A ) is the minimal non-negative solution of (4). We divide the proof into two steps.
We first define a sequence of kernels $\{T_N(x, A), N \geq 0\}$ by setting $T_0(x, A) = 0$ and
$$T_{N+1}(x, A) = \sum_{i=0}^{\infty} A_i * T_N^{(i)}(x, A), \quad N \geq 0.$$
Let $\hat{G}(x, A)$ be any non-negative solution of Equation (4). Obviously, $\hat{G}(x, A) \geq 0 = T_0(x, A)$. Suppose that $T_{N-1}(x, A) \leq \hat{G}(x, A)$; then $T_{N-1}^{(i)}(x, A) \leq \hat{G}^{(i)}(x, A)$ for $i \geq 1$. Moreover, we have
$$T_N(x, A) = \sum_{i=0}^{\infty} A_i * T_{N-1}^{(i)}(x, A) \leq \sum_{i=0}^{\infty} A_i * \hat{G}^{(i)}(x, A) = \hat{G}(x, A).$$
Similarly, if we assume inductively that $T_{N-1}(x, A) \leq T_N(x, A)$, we have
$$T_N(x, A) = \sum_{i=0}^{\infty} A_i * T_{N-1}^{(i)}(x, A) \leq \sum_{i=0}^{\infty} A_i * T_N^{(i)}(x, A) = T_{N+1}(x, A),$$
and so $T_N(x, A)$ is monotonically increasing in $N$. Hence, the limit $T^*(x, A) := \lim_{N \to \infty} T_N(x, A)$ exists. Further, we have
$$T_N^{(k)}(x, A) \uparrow T^{*(k)}(x, A), \quad k \geq 1.$$
Taking the limit on both sides of the recursion defining $T_N$ and using the monotone convergence theorem, we see that the kernel $T^*(x, A)$ is a solution of (4), i.e.,
$$T^*(x, A) = \sum_{i=0}^{\infty} A_i * T^{*(i)}(x, A).$$
Since $T^*(x, A) \leq \hat{G}(x, A)$ for every non-negative solution $\hat{G}$, the kernel $T^*(x, A)$ is the minimal solution.
Next, we prove that $T^*(x, A) = G(x, A)$. Define
$$G_N(x, A) = \mathbb{P}\{\tau_0 \leq N,\ X_{\tau_0} \in A \mid L_0 = 1,\ X_0 = x\}, \quad N \geq 1.$$
Obviously, $G_N(x, A) \uparrow G(x, A)$ as $N \to \infty$. By conditioning on the state after the first transition, we have
$$G_{N+1}(x, A) = \int_{\mathbb{R}} \sum_{i=0}^{N} \mathbb{P}\{\tau_0 \leq N,\ X_{\tau_0} \in A \mid L_0 = i,\ X_0 = y\}\, \mathbb{P}\{L_1 = i,\ X_1 \in dy \mid L_0 = 1,\ X_0 = x\}. \tag{8}$$
Denote
$$M_N^{(i)}(y, A) = \mathbb{P}\{\tau_0 \leq N,\ X_{\tau_0} \in A \mid L_0 = i,\ X_0 = y\}.$$
By conditioning on the state at the first visit to level $i - 1$ and repeating the same argument, we have
$$\begin{aligned} M_N^{(i)}(y, A) &\leq \int_{\mathbb{R}} \mathbb{P}\{\tau_{i-1} \leq N,\ X_{\tau_{i-1}} \in dx \mid L_0 = i,\ X_0 = y\}\, M_N^{(i-1)}(x, A) = G_N^{(1)} * M_N^{(i-1)}(y, A) \\ &\leq G_N^{(2)} * M_N^{(i-2)}(y, A) \leq \cdots \leq G_N^{(i-1)} * M_N^{(1)}(y, A) = G_N^{(i)}(y, A). \end{aligned} \tag{9}$$
By (8) and (9), we can deduce that
$$G_{N+1}(x, A) \leq \int_{\mathbb{R}} \sum_{i=0}^{N} A_i(x, dy)\, G_N^{(i)}(y, A) \leq \int_{\mathbb{R}} \sum_{i=0}^{\infty} A_i(x, dy)\, G_N^{(i)}(y, A) = \sum_{i=0}^{\infty} A_i * G_N^{(i)}(x, A).$$
Finally, note that $G_1(x, A) = A_0(x, A) = T_1(x, A)$, so comparing the last display with the recursion defining $T_N$ gives $G_N(x, A) \leq T_N(x, A)$ by induction. Taking the limit as $N \to \infty$ gives $G(x, A) \leq T^*(x, A)$, as required. □
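The monotone iteration used in the proof is directly implementable. The following sketch runs it for a hypothetical single-phase chain (illustrative numbers only, not from the paper), where Equation (4) is a scalar quadratic whose minimal non-negative solution is known in advance:

```python
# Single-phase (scalar) illustration of the proof's iteration: with
# A0 = 0.4 (down), A1 = 0.3 (stay), A2 = 0.3 (up), Equation (4) reads
# G = 0.4 + 0.3*G + 0.3*G**2, whose roots are 1 and 4/3. The iteration
# T_{N+1} = sum_i A_i T_N^i started from T_0 = 0 increases to the minimal
# non-negative solution G = 1 (the chain has negative drift, so it is
# recurrent and G is stochastic).

A = [0.4, 0.3, 0.3]

T = 0.0
history = [T]
for _ in range(400):
    T = sum(a * T**i for i, a in enumerate(A))
    history.append(T)

assert all(t2 >= t1 for t1, t2 in zip(history, history[1:]))  # monotone in N
assert abs(T - 1.0) < 1e-6                                    # minimal solution
```

The linear convergence rate here is the derivative of the fixed-point map at the solution (0.9), which is why functional iteration can be slow near the recurrent/transient boundary; the faster schemes cited in Remark 1 address exactly this.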
In the following, we investigate the numerical computation of the invariant probability distribution for M/G/1-type chains. The key point is to set up the Ramaswami algorithm, a well-known result for M/G/1-type chains with finitely many phases, for M/G/1-type chains with continuous phases.
Theorem 5.
Suppose that the M/G/1-type Markov chain $(L_n, X_n)$ is $\psi$-irreducible and Harris positive recurrent. Let the unique invariant probability measure be $\Pi$ with $\Pi(C) = (\Pi_0(C), \Pi_1(C), \Pi_2(C), \ldots)$, $C \in \mathcal{B}(\mathbb{R})$. Then, the measure $\Pi$ satisfies the following recursive formula
$$\Pi_k(C) = \Pi_0 * \hat{B}_k(C) + \sum_{i=1}^{k} \Pi_i * \hat{A}_{k+1-i}(C), \quad k \geq 1, \tag{10}$$
where
$$\hat{B}_k(x, C) = \sum_{i=k}^{\infty} B_i * G^{(i-k)}(x, C), \quad k \in \mathbb{N},$$
$$\hat{A}_m(x, C) = \sum_{i=m}^{\infty} A_i * G^{(i-m)}(x, C), \quad m \geq 1, \tag{11}$$
and $\Pi_0$ is the unique solution of the equation $\Pi_0(C) = \Pi_0 * \hat{B}_0(C)$.
Proof. 
By (6), we know that for any $v \geq 1$ and $i \geq 0$, the kernel function $G^{(v)}(x, C)$ is the probability that the Markov chain, starting from the state $(i + v, x)$, first returns to level $i$ by hitting a state in $\{i\} \times C$. The transition kernel function of the Markov chain embedded at the epochs of visits to the set $A = \bigcup_{m=0}^{k} \ell_m$ is given by
$$P^A(x, C) = \begin{pmatrix} B_0(x, C) & B_1(x, C) & \cdots & B_{k-1}(x, C) & \hat{B}_k(x, C) \\ A_0(x, C) & A_1(x, C) & \cdots & A_{k-1}(x, C) & \hat{A}_k(x, C) \\ 0 & A_0(x, C) & \cdots & A_{k-2}(x, C) & \hat{A}_{k-1}(x, C) \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & A_0(x, C) & \hat{A}_1(x, C) \end{pmatrix}.$$
We now explain how the transition kernel $P^A(x, C)$ is determined. The first $k$ block columns of $P^A(x, C)$ are the same as those of $P(x, C)$, since the chain $(L_n, X_n)$ can only move down by one level at a time. As for the $(k+1)$-th (i.e., last) block column of $P^A(x, C)$, its first entry is as follows:
$$\begin{aligned} \hat{B}_k(x, C) &= \mathbb{P}\{(L_1^A, X_1^A) \in \{k\} \times C \mid (L_0^A, X_0^A) = (0, x)\} \\ &= \mathbb{P}\{L_1 = k,\ X_1 \in C \mid L_0 = 0,\ X_0 = x\} + \sum_{i=k+1}^{\infty} \int_{\mathbb{R}} \mathbb{P}\{L_1 = i,\ X_1 \in dy \mid L_0 = 0,\ X_0 = x\}\, \mathbb{P}\{\tau_k < \infty,\ X_{\tau_k} \in C \mid L_1 = i,\ X_1 = y\} \\ &= B_k(x, C) + \sum_{i=k+1}^{\infty} B_i * G^{(i-k)}(x, C) = \sum_{i=k}^{\infty} B_i * G^{(i-k)}(x, C). \end{aligned}$$
The equality (11) can be proved in a similar way.
Since the chain $(L_n, X_n)$ is $\psi$-irreducible and Harris positive recurrent, every set $M \in \mathcal{B}^+(E)$ is, starting from any $x \in E$, visited infinitely often almost surely, and the same is true along the visits to $A$; thus, the censored chain $(L_n, X_n)^A$ is also $\psi$-irreducible and Harris positive recurrent. Let $\Pi^A(C) = (\Pi_0^A(C), \Pi_1^A(C), \ldots, \Pi_k^A(C))$ be the unique invariant probability measure of $(L_n, X_n)^A$.
Next, we demonstrate that $(\Pi_0(C), \Pi_1(C), \ldots, \Pi_k(C))$ is, up to a constant, an invariant measure of the censored chain. Define the measure $\Pi'$ by
$$\Pi_i'(C) := \int_{A} \Pi^A(dw)\, U_A(w, \{i\} \times C), \quad i \in \mathbb{N}.$$
By Proposition 10.4.8 in [14], we know that
$$\Pi_i'(C) = \Pi_i^A(C), \quad 0 \leq i \leq k,$$
and that $\Pi'$ is an invariant measure for $(L_n, X_n)$. Since $(L_n, X_n)$ is assumed to be Harris positive recurrent, its invariant measure is unique up to a constant multiple. Hence, $\Pi'(C) = c\, \Pi(C)$ for some constant $c > 0$, which together with the previous display gives
$$\Pi_i^A(C) = c\, \Pi_i(C), \quad 0 \leq i \leq k.$$
Since $\sum_{i=0}^{k} \Pi_i^A(\mathbb{R}) = 1$, we obtain
$$c = \frac{1}{\sum_{i=0}^{k} \Pi_i(\mathbb{R})}.$$
Thus, we have proved that $(\Pi_0, \Pi_1, \ldots, \Pi_k)$ is, up to the constant $c$, an invariant measure of $\Phi^A$. Taking into account the last block equation of $\Pi^A(C) = \Pi^A * P^A(C)$, we have
$$\Pi_k(C) = \Pi_0 * \hat{B}_k(C) + \sum_{i=1}^{k} \Pi_i * \hat{A}_{k+1-i}(C), \quad k \geq 1.$$
This proves (10).
To determine $\Pi_0(C)$, we reset $A = \ell_0$ and consider the censored chain $(L_n, X_n)^{\ell_0}$, whose transition kernel is given by
$$P^{\ell_0}(x, C) = \hat{B}_0(x, C).$$
By (13), we know that Π 0 ( C ) = Π 0 * B ^ 0 ( C ) . □
Applying Theorem 5 and performing the wavelet series expansion, we can obtain the following theorem directly.
Theorem 6.
Suppose that the M/G/1-type Markov chain ( L n , X n ) is ψ-irreducible and Harris positive recurrent and that Assumption 1 holds.
(i) The kernels $G(x, y)$, $\hat{A}_k(x, y)$ and $\hat{B}_k(x, y)$ are in $\mathcal{H}$, and their associated matrices satisfy
$$\bar{\hat{B}}_k = \sum_{i=k}^{\infty} \bar{B}_i \bar{G}^{i-k}, \qquad \bar{\hat{A}}_k = \sum_{i=k}^{\infty} \bar{A}_i \bar{G}^{i-k}.$$
(ii) The invariant probability measure $\Pi_k$ is in $\mathcal{H}$. Let $\bar{\Pi}_k$ be the associated row vector of $\Pi_k(y)$. Then, the associated vectors satisfy
$$\bar{\Pi}_k = \left( \bar{\Pi}_0 \bar{\hat{B}}_k + \sum_{i=1}^{k-1} \bar{\Pi}_i \bar{\hat{A}}_{k+1-i} \right) \left( I - \bar{\hat{A}}_1 \right)^{-1}, \quad k \geq 1,$$
where $\bar{\Pi}_0 = \bar{\Pi}_0 \bar{\hat{B}}_0$.
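The recursion in Theorem 6 can be sanity-checked on a single-phase toy chain whose stationary distribution is known in closed form (a hypothetical birth-death example, not from the paper; all quantities are scalars):

```python
# Scalar sanity check of the Ramaswami-type recursion in Theorem 6, outside
# the wavelet setting: a hypothetical single-phase M/G/1-type chain with
# A0 = 0.6 (down), A1 = 0.2 (stay), A2 = 0.2 (up) and boundary B0 = 0.8,
# B1 = 0.2. Here G = 1, the chain is a birth-death chain, and its stationary
# distribution is pi_k = (2/3) * 3**(-k).

A = [0.6, 0.2, 0.2]
B = [0.8, 0.2]

# G from the fixed point G = sum_i A_i G^i (minimal solution; here G = 1).
G = 0.0
for _ in range(500):
    G = sum(a * G**i for i, a in enumerate(A))

def B_hat(k):   # sum_{i >= k} B_i G^(i-k)
    return sum(B[i] * G ** (i - k) for i in range(k, len(B)))

def A_hat(m):   # sum_{i >= m} A_i G^(i-m)
    return sum(A[i] * G ** (i - m) for i in range(m, len(A)))

K = 25
pi = [1.0]      # unnormalized; note B_hat(0) = 1, so pi_0 = pi_0 * B_hat(0)
for k in range(1, K + 1):
    s = pi[0] * B_hat(k) + sum(pi[i] * A_hat(k + 1 - i) for i in range(1, k))
    pi.append(s / (1.0 - A_hat(1)))

total = sum(pi)
pi = [p / total for p in pi]
print(abs(pi[0] - 2/3), abs(pi[1] - 2/9))  # both ≈ 0
```

The truncation level K is an assumption of this sketch; the geometric tail makes the truncation error negligible here.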
Remark 1.
(i) We note that the entries of the associated matrix of a kernel function may be negative. Hence, the associated matrices $\bar{A}_k$ and $\bar{B}_k$ do not, in general, form a stochastic transition matrix.
(ii) We now consider numerical algorithms for computing the associated matrix $\bar{G}$. In the literature, several algorithms, including functional iteration ([17,18]), Newton iteration ([19]), the invariant subspace method ([20]), cyclic reduction ([21]) and Ramaswami reduction ([22]), have been developed to solve the $G$-matrix for M/G/1-type chains with finitely many phases. For a collection of these algorithms, please refer to [13]. Similar to what we did in Theorems 4 and 5, the corresponding algorithms for $\bar{G}$ can be set up by adapting these algorithms from the finite-phase to the general-phase setting. We omit the details to avoid a tedious presentation.

5. Numerical Experiments

5.1. Discrete Wavelet Transforms

We need to perform discrete wavelet transforms for the numerical experiments. Without loss of generality, we assume that the phase space is $\mathbb{R}$. In the following, we only give a brief presentation of the computational framework; please refer to Section 5 in [5] for more details.
We first consider numerical issues of the M/G/1-type Markov chain with a continuous phase, which are divided into the following steps:
Step 1: Choose appropriate real numbers $\underline{y}, \bar{y}$ and a positive integer $N$. Then, evenly sample $N$ points from the truncated phase space $[\underline{y}, \bar{y}]$. Applying the DWT in Algorithm 5.1 of [5] to the kernels $A_i(x, y)$ and $B_i(x, y)$ produces the associated sample matrices $(A_i)_{asm}$ and $(B_i)_{asm}$.
Step 2: Solve for the associated sample matrix $G_{asm}$ using one of the algorithms listed in (ii) of Remark 1, such as functional iteration, Newton iteration, the invariant subspace method, cyclic reduction or Ramaswami reduction.
Step 3: Solve the associated sample invariant probability vector Π a s m using Theorem 6.
Step 4: Performing the IDWT in Algorithm 5.2 in [5] to the matrix of G a s m and the vector Π a s m produces the kernels G ( x , y ) and Π ( x ) .
Now, we consider numerical issues of GI/M/1-type Markov chains with a continuous phase, which are also divided into four steps. The first and last steps are the same as those for the M/G/1-type Markov chains. In Step 3, we solve the associated sample invariant probability vector $\Pi_{asm}$ based on Theorem 3. For Step 2, we use the Ramaswami dual to solve the associated sample matrix $R_{asm}$. It is known that the Ramaswami dual [11] enables us to compute the matrix $R$ for a GI/M/1-type chain with finitely many phases in terms of the matrix $G$ of a dual M/G/1-type Markov chain. The Ramaswami dual can be modified and extended to the case of M/G/1-type and GI/M/1-type chains with a continuous phase.

5.2. Illustration with Examples

5.2.1. Example 1: An M/G/1-Type Chain

The Markov chain in this example is modified from Example 2 in [5] by extending the tri-diagonal structure to the more general upper-Hessenberg setting.
Denote by $S_i$, $i \geq 0$, the arrival times of a Poisson process with rate $\lambda$, and let $S_0 = 0$. Define a sequence of i.i.d. random variables $V_{S_n}$, $n \geq 0$, distributed as
$$\mathbb{P}\{V_{S_n} = j\} = p_j, \quad j \geq -1,$$
where the $p_j$'s are non-negative constants such that $\sum_{j=-1}^{\infty} p_j = 1$. We define
$$L(t) = \begin{cases} L(S_0) = L(0), & \text{if } 0 = S_0 \leq t < S_1, \\ \max\{0, L(S_{n-1}) + j\}, & \text{if } V_{S_n} = j,\ S_n \leq t < S_{n+1}, \end{cases}$$
$$Y(t) = \begin{cases} Y(S_k) + t - S_k, & \text{if } V_{S_k} = \ell,\ S_k \leq t < S_{k+1}, \\ t - S_k, & \text{if } V_{S_k} = -1,\ S_k \leq t < S_{k+1}, \end{cases}$$
where $n \geq 1$, $j \geq -1$, $k \geq 0$, $\ell \geq 0$.
Let $L_n = L(S_n)$ and $Y_n = Y(S_{n+1} - 0)$; then $(L_n, Y_n)$ is an M/G/1-type chain whose phase space is $\mathbb{R}_+$. Its transition kernels are derived as
$$A_0(x, y) = p_{-1}\left(1 - e^{-\lambda y}\right) \ \text{for all } x, y, \qquad A_{j+1}(x, y) = \begin{cases} 0, & \text{if } y < x, \\ p_j\left(1 - e^{-\lambda(y - x)}\right), & \text{if } y \geq x, \end{cases} \quad j \geq 0,$$
and finally
$$B_0(x, y) = A_0(x, y) + A_1(x, y), \qquad B_i(x, y) = A_{i+1}(x, y), \quad i \geq 1.$$
The marginal invariant probability measures for the M/G/1-type chain $(L_n, Y_n)$ have the following analytical expressions:
$${}^{L}\Pi_0 = \frac{p_{-1} - \sum_{n=0}^{\infty} n p_n}{p_{-1}}, \qquad {}^{L}\Pi_k = \frac{1}{p_{-1}} \left( {}^{L}\Pi_0 \sum_{n=k}^{\infty} p_n + \sum_{i=1}^{k-1} {}^{L}\Pi_i \sum_{j=0}^{\infty} p_{k-i+j} \right), \qquad {}^{P}\Pi(y) = \begin{cases} 0, & \text{if } y < 0, \\ 1 - e^{-\lambda p_{-1} y}, & \text{if } y \geq 0, \end{cases} \tag{19}$$
where ${}^{L}\Pi_k$ and ${}^{P}\Pi(y)$ are the level and phase marginal invariant probability measures, respectively.
Take $p_{-1} = \frac{1}{2}$ and $p_k = \left(\frac{1}{3}\right)^{k+1}$, $k \geq 0$. From (19), we obtain the following exact values of the marginal level stationary probabilities
$${}^{L}\Pi_0 = \frac{1}{2}, \qquad {}^{L}\Pi_k = \frac{1}{4}\left(\frac{2}{3}\right)^k, \quad k \geq 1.$$
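These exact values can be double-checked by evaluating the recursion in (19) directly; the truncation level NMAX for the infinite sums is an assumption of this sketch (harmless here because the tails are geometric):

```python
# Numerical check of the closed-form level probabilities in Example 1.
# We evaluate the recursion (19) for p_{-1} = 1/2, p_k = (1/3)^(k+1) and
# compare with the stated closed form LPi_k = (1/4)(2/3)^k.

NMAX = 200

p_minus1 = 0.5
p = [(1/3) ** (n + 1) for n in range(NMAX)]     # p_0, p_1, ...

mean_up = sum(n * p[n] for n in range(NMAX))    # sum_n n p_n = 1/4
assert mean_up < p_minus1                        # negative drift: stable chain

def tail(m):
    """sum_{n >= m} p_n, truncated at NMAX."""
    return sum(p[n] for n in range(m, NMAX))

K = 30
pi = [0.0] * (K + 1)
pi[0] = (p_minus1 - mean_up) / p_minus1          # = 1/2
for k in range(1, K + 1):
    s = pi[0] * tail(k)
    for i in range(1, k):
        s += pi[i] * tail(k - i)
    pi[k] = s / p_minus1

exact = [0.5] + [0.25 * (2/3) ** k for k in range(1, K + 1)]
err = max(abs(a - b) for a, b in zip(pi, exact))
print(err)  # tiny
```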
Evenly take 256 samples on $[0, 45]$ as values of $x$ and 256 samples on $[0, 50]$ as values of $y$, and choose the Haar wavelet for the wavelet transform. Figure 1 presents the numerical solution of the kernel function $G(x, y)$. The marginal distributions are obtained numerically based on the $G_{asm}$ solved by functional iteration. The numerical solutions for the level and phase marginal distributions are shown in Figure 2 and Figure 3, respectively, together with the corresponding analytical solutions. For each method used to derive $G_{asm}$, we calculate its mean absolute error, defined as $\frac{1}{K+1} \sum_{k=0}^{K} |{}^{L}\Pi_k - {}^{L}\hat{\Pi}_k|$, where $K = 500$ and ${}^{L}\hat{\Pi}_k$ is the numerical solution of the marginal level stationary probability at level $k$. (We take $K = 500$ because the values of ${}^{L}\Pi_k$ for $k > 500$ are small enough to be negligible.) In this example, the different methods of solving the matrix $G$ lead to the same numerical solutions of the level and phase marginal distributions. According to Figure 4, the performances of the various methods are similar in terms of accuracy and computational time.

5.2.2. Example 2: A GI/M/1-Type Chain

Consider a first-come first-served single-server GI/G/1 queueing system, which was considered in [1] for the theoretical analysis of the invariant probability distribution. Here, we consider the computational issue. In this GI/G/1 queue, the service times and interarrival times have general distribution functions $S(x)$ and $F(x)$, respectively. We assume that both the mean interarrival time $\lambda = \int_0^{\infty} t\, dF(t)$ and the mean service time $\mu_0 := \int_0^{\infty} t\, dS(t)$ are finite.
Let L n be the number of customers right before the arrival time of the nth customer and let X n be the remaining service time just after the nth arrival. Let Z n be the departure time of the nth customer, and let X ( t ) be the remaining service time at time t of a customer who is receiving the service. Write
$$D_n^t(x, y) = \mathbb{P}\{Z_n \leq t < Z_{n+1},\ X(t) \leq y \mid X(0) = x\}.$$
Then, $(L_n, X_n)$, $n \geq 1$, is a GI/M/1-type Markov chain with discrete levels and continuous phases, whose transition probabilities are given by (1) with (see [1])
$$A_n(x, y) = \int_0^{\infty} D_n^t(x, y)\, dF(t), \tag{20}$$
$$B_n(x, y) = \left[ \sum_{j=n+1}^{\infty} A_j(x, \infty) \right] S(y). \tag{21}$$
From [1], we know that if $\lambda > \mu_0$, then $(L_n, X_n)$ has an invariant probability measure $\Pi = (\Pi_0, \Pi_1, \ldots)$ with
$$\Pi_k(\cdot) = d \int_0^{\infty} R^{(k)}(x, \cdot)\, dS(x),$$
where the constant $d$ is given by
$$d = \left[ 1 + \int_0^{\infty} \sum_{n=0}^{\infty} F^{n*}(x)\, dS(x) \right] \exp\left\{ - \sum_{n=1}^{\infty} \frac{1}{n} \int_0^{\infty} \left[ 1 - F^{n*}(x) \right]\, dS^{n*}(x) \right\}.$$
To illustrate our algorithm, we would like to compare numerical solutions for the level of marginal distribution with its analytical value. For numerical calculation, let F ( t ) be uniformly distributed in the interval ( 0 , 1 ] , i.e., F ( t ) U ( 0 , 1 ) , and let the service time be exponentially distributed with parameter μ , i.e., S ( t ) exp ( μ ) . Then we have, for n 1
$$D_n^t(x,y) = \begin{cases} \dfrac{\mu^{n-1}(t-x)^{n-1}}{(n-1)!}\left(e^{-\mu(t-x)} - e^{-\mu(t-x+y)}\right), & t \ge x, \\ 0, & t < x, \end{cases}$$
and, for n = 0
$$D_0^t(x,y) = I_{[0,y]}(x - t).$$
The kernels $A_n(x,y)$ and $B_n(x,y)$ are calculated as follows. For $n \ge 1$, by (20),
$$A_n(x,y) = \int_x^1 \frac{\mu^{n-1}(t-x)^{n-1}}{(n-1)!}\left(e^{-\mu(t-x)} - e^{-\mu(t-x+y)}\right) dt = \frac{1 - e^{-\mu y}}{\mu} \int_0^{1-x} \frac{\mu^n}{(n-1)!}\, t^{n-1} e^{-\mu t}\, dt.$$
For $0 \le x \le 1$ and $y \ge x$,
$$A_0(x,y) = \int_{\max(x-y,\,0)}^{x} dt = \int_0^x dt = x,$$
and for $0 \le x \le 1$ and $y < x$,
$$A_0(x,y) = \int_{x-y}^{x} dt = y.$$
For $n \ge 0$, by (21) we have
$$B_n(x,y) = \left(1 - e^{-\mu y}\right) \int_0^{1-x} \left[1 - \sum_{j=0}^{n-1} \frac{e^{-\mu t}(\mu t)^j}{j!}\right] dt.$$
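For this example, the kernels can be evaluated with elementary functions only: for integer $n$, the incomplete-gamma integral appearing in $A_n$ equals $1 - \sum_{j=0}^{n-1} e^{-z} z^j / j!$ at $z = \mu(1-x)$. A short Python sketch, which also checks the row normalization $\sum_{n \ge 0} A_n(x, \infty) = x + (1 - x) = 1$:

```python
import math

mu = 4.7  # service rate used in the numerical experiment below

def A_inf(n, x):
    """A_n(x, infinity) for U(0,1] interarrivals and Exp(mu) services.
    For n >= 1 this is P(n, mu*(1-x)) / mu, where P is the regularized
    lower incomplete gamma function, expanded via the Poisson-sum identity."""
    if n == 0:
        return x  # A_0(x, y) = x whenever y >= x
    z = mu * (1.0 - x)
    poisson_cdf = sum(math.exp(-z) * z ** j / math.factorial(j)
                      for j in range(n))
    return (1.0 - poisson_cdf) / mu

# Normalization check: summing A_n(x, inf) over n should give
# x + (1 - x) = 1 for every x in [0, 1] (the tail beyond n = 60 is negligible).
row_sums = [sum(A_inf(n, x) for n in range(60))
            for x in (0.0, 0.25, 0.5, 0.9)]
```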
Since $\lambda = 1/2$ and $E[S] = \mu_0 = 1/\mu$, the queueing system is stable if
$$\rho = \frac{1/\lambda}{1/E[S]} = \frac{E[S]}{\lambda} = \frac{2}{\mu} < 1.$$
It is well known that the level marginal invariant probability distribution is
$$L\Pi_j = c^j (1 - c),$$
where c is the solution of $\int_0^{\infty} e^{-\mu t (1-c)}\, dF(t) = c$ on the interval $(0, 1)$. Since $F \sim U(0,1]$, this equation reduces to
$$e^{(c-1)\mu} - 1 = c(c-1)\mu.$$
For the numerical experiments, we take $\mu = 4.7$. The constant c is then approximately 0.2885, which yields the exact level marginal distribution $L\Pi$. On the other hand, we can run the numerical algorithm of the previous section to approximate $L\Pi$, compare the numerical results with the analytical ones, and thereby verify the algorithm. Here, we do not consider the phase marginal distribution, since no closed form is available for it.
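The scalar equation above is convenient to solve by fixed-point iteration on the equivalent form $c = (1 - e^{-\mu(1-c)})/(\mu(1-c))$. A sketch that also recovers the analytical mean and variance of the geometric level marginal reported in Table 1:

```python
import math

mu = 4.7

# Fixed-point iteration for c = (1 - exp(-mu*(1-c))) / (mu*(1-c)),
# the U(0,1] specialization of Integral exp(-mu*t*(1-c)) dF(t) = c.
c = 0.5
for _ in range(10_000):
    c_new = (1.0 - math.exp(-mu * (1.0 - c))) / (mu * (1.0 - c))
    if abs(c_new - c) < 1e-14:
        c = c_new
        break
    c = c_new

# Mean and variance of the geometric level marginal LPi_j = c^j (1 - c).
mean_level = c / (1.0 - c)        # approx. 0.4054 for mu = 4.7
var_level = c / (1.0 - c) ** 2    # approx. 0.5698 for mu = 4.7
```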
Take 256 evenly spaced samples on $[0, 1]$ as values of x and 256 evenly spaced samples on $[0, 1.5]$ as values of y, and choose the Haar wavelet for the wavelet transform. The numerical solutions of the kernel functions $G(x,y)$ and $R(x,y)$ are shown in Figure 5. The level marginal distribution of this queueing system can be computed by the algorithm of the previous section. Figure 6 shows the numerical solutions using the $G_{asm}$ solved by functional iteration, together with the analytical solutions. Among the five numerical methods mentioned in (ii) of Remark 1, the invariant subspace method fails during the run of the algorithm, which may be caused by some matrices being singular. With the numerical solutions of the level and phase marginal invariant probability distributions, we can further estimate the mean and variance of the queue length and of the remaining service time, listed in Table 1; this illustrates a practical use of invariant probability distributions.
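The Haar transform underlying these computations needs only pairwise sums and differences. The following single-level sketch conveys the idea (the actual algorithm applies the full multi-level fast transform to the 256 × 256 kernel samples):

```python
import math

SQRT2 = math.sqrt(2.0)

def haar_step(v):
    """One level of the orthonormal Haar transform of an even-length list;
    returns (approximation coefficients, detail coefficients)."""
    assert len(v) % 2 == 0
    approx = [(v[2 * i] + v[2 * i + 1]) / SQRT2 for i in range(len(v) // 2)]
    detail = [(v[2 * i] - v[2 * i + 1]) / SQRT2 for i in range(len(v) // 2)]
    return approx, detail

def haar_step_inverse(approx, detail):
    """Invert one Haar level exactly."""
    v = []
    for a, d in zip(approx, detail):
        v.extend(((a + d) / SQRT2, (a - d) / SQRT2))
    return v

# Round trip on 256 samples of a smooth function on [0, 1].
signal = [math.exp(-k / 255.0) for k in range(256)]
approx, detail = haar_step(signal)
recon = haar_step_inverse(approx, detail)
```

Because the transform is orthonormal, the round trip is exact up to floating-point rounding, which is what makes it a safe change of basis for the kernel matrices.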
Since we are unable to obtain analytical solutions for the phase marginal invariant probability distribution, only the analytical mean and variance for the level are presented in Table 1. With the Ramaswami reduction, the mean queue length is the most accurate among the four methods, but the variance of the queue length is far from the analytical variance. The mean and variance obtained by functional iteration, in contrast, are both relatively accurate.
We compare the remaining four methods by their mean absolute errors and computation speeds. From Figure 7 and Table 1, the functional iteration performs best among the four methods, being both the fastest and the most accurate. When we raise the sample size from 256 to 512, solving $G_{asm}$ by functional iteration takes 20.98 s, while the cyclic reduction, Newton iteration and Ramaswami reduction take 116.35 s, 1997.53 s and 2191.45 s, respectively. The resulting changes in the mean absolute errors relative to a sample size of 256, however, are only around $10^{-4}$.
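As an independent sanity check of the geometric level marginal, one can simulate the queue itself and compare the empirical probability that an arrival finds the system empty with $1 - c \approx 0.7115$. A rough Monte Carlo sketch with $\mu = 4.7$ and $U(0,1]$ interarrival times (the comparison tolerance is purely statistical):

```python
import random
from collections import deque

random.seed(42)
mu = 4.7
N = 200_000           # number of simulated arrivals

in_system = deque()   # departure times of customers still in the system
t = 0.0               # current arrival epoch
d_prev = 0.0          # departure time of the previously arrived customer
empty = 0             # arrivals that find the system empty

for _ in range(N):
    t += random.random()                     # U(0,1) interarrival time
    while in_system and in_system[0] <= t:   # purge departed customers
        in_system.popleft()
    if not in_system:
        empty += 1
    d_prev = max(t, d_prev) + random.expovariate(mu)  # FCFS recursion
    in_system.append(d_prev)

p_empty = empty / N   # estimates LPi_0 = 1 - c
```

The recursion $D_n = \max(A_n, D_{n-1}) + S_n$ is the standard FCFS single-server departure recursion, and $L_n$ is recovered as the number of departure times still exceeding the nth arrival epoch.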

6. Conclusions

For invariant probability measures of Harris positive recurrent $GI/M/1$-type or $M/G/1$-type Markov chains with discrete levels and a general phase set, we have established wavelet-based computational frameworks in this paper. A theoretical analysis framework has also been established for $M/G/1$-type Markov chains. These results extend the findings in [4,5] for QBD processes to the present, more general block-structured Markov chains. Numerical experiments support the effectiveness of our numerical algorithms based on DTWC. An interesting observation in Example 2 is that, among the five algorithms adopted for the G-matrix, the functional iteration performs best, while the invariant subspace method may fail.
For future research, it would be interesting to consider block-structured continuous-time Markov processes with discrete levels and continuous phases. In this case, the processes should be described in terms of their extended generators. We expect the research to be more challenging, both in setting up these models and in performing the theoretical and numerical analysis of their invariant probability measures.

Author Contributions

Methodology, Y.L.; Software, S.J.; Writing—original draft, S.J., Y.L. and N.L.; Writing—review and editing, Y.L. and N.L.; Visualization, S.J. and N.L.; Funding acquisition, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the National Natural Science Foundation of China (Grant No. 11971486).

Data Availability Statement

No data is used in this study.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Summary of frequently used notations.
Notation    Description
$E$    State space of a Markov chain
$\ell_i$    The level-$i$ set of the Markov chain $\{(L_n, X_n): n \in \mathbb{N}\}$
$\tau_{\ell_i}$    The first return time to the level set $\ell_i$
$P_{GI}(\cdot,\cdot)$    Transition kernel matrix of the $GI/M/1$-type Markov chain
$P_{M}(\cdot,\cdot)$    Transition kernel matrix of the $M/G/1$-type Markov chain
$P_{A}(\cdot,\cdot)$    One-step transition kernel of the Markov chain censored on the set $A$
$\Pi$    Invariant probability measure
$\bar{U}$    Matrix associated with the kernel function $U(x,y)$
$\mathcal{H}$    Set of kernel functions whose density function equals zero outside of $H \times H$

References

1. Tweedie, R.L. Operator-geometric stationary distributions for Markov chains, with application to queueing models. Adv. Appl. Probab. 1982, 14, 368–391.
2. Neuts, M.F. Matrix-Geometric Solutions in Stochastic Models: An Algorithmic Approach; Johns Hopkins University Press: Baltimore, MD, USA, 1981.
3. Breuer, L. Transient and stationary distributions for the GI/G/k queue with Lebesgue-dominated inter-arrival time distribution. Queueing Syst. 2003, 45, 47–57.
4. Nielsen, B.F.; Ramaswami, V. A computational framework for a quasi birth and death process with a continuous phase variable. In ITC 15; Ramaswami, V., Wirth, P., Eds.; Elsevier: Amsterdam, The Netherlands, 1997; pp. 477–486.
5. Jiang, S.; Latouche, G.; Liu, Y. Wavelet transform for quasi-birth-death process with a continuous phase set. Appl. Math. Comput. 2015, 252, 354–376.
6. Neuts, M.F. Structured Stochastic Matrices of M/G/1 Type and Their Applications; Marcel Dekker: New York, NY, USA, 1989.
7. Bini, D.A.; Meini, B.; Steffé, S.; Van Houdt, B. Structured Markov chains solver: Software tools. In Proceedings of the SMCTOOLS, Pisa, Italy, 10 October 2006.
8. Bini, D.A.; Meini, B.; Steffé, S.; Van Houdt, B. Structured Markov chains solver: Algorithms. In Proceedings of the SMCTOOLS, Pisa, Italy, 10 October 2006.
9. Bright, L.W. Matrix-Analytic Methods in Applied Probability; The University of Adelaide: Adelaide, Australia, 1996.
10. Ramaswami, V. Nonlinear matrix equations in applied probability: Solution techniques and open problems. SIAM Rev. 1988, 30, 256–263.
11. Ramaswami, V. A duality theorem for the matrix paradigms in queueing theory. Commun. Stat. Stoch. Model. 1990, 6, 151–161.
12. Taylor, P.G.; Van Houdt, B. On the dual relationship between Markov chains of GI/M/1 and M/G/1 type. Adv. Appl. Probab. 2010, 42, 210–225.
13. Bini, D.A.; Latouche, G.; Meini, B. Numerical Methods for Structured Markov Chains; Oxford University Press: New York, NY, USA, 2005.
14. Meyn, S.P.; Tweedie, R.L. Markov Chains and Stochastic Stability, 2nd ed.; Cambridge University Press: Cambridge, UK, 2009.
15. Mallat, S. A Wavelet Tour of Signal Processing; Academic Press: New York, NY, USA, 2009.
16. Daubechies, I. Orthonormal bases of compactly supported wavelets. Commun. Pure Appl. Math. 1988, 41, 909–996.
17. Favati, P.; Meini, B. Relaxed functional iteration techniques for the numerical solution of M/G/1 type Markov chains. BIT Numer. Math. 1998, 38, 510–526.
18. Favati, P.; Meini, B. On functional iteration methods for solving nonlinear matrix equations arising in queueing problems. IMA J. Numer. Anal. 1999, 19, 39–49.
19. Latouche, G. Newton's iteration for non-linear equations in Markov chains. IMA J. Numer. Anal. 1994, 14, 583–598.
20. Akar, N.; Sohraby, K. An invariant subspace approach in M/G/1 and G/M/1 type Markov chains. Stoch. Model. 1997, 13, 381–416.
21. Bini, D.A.; Meini, B. On cyclic reduction applied to a class of Toeplitz-like matrices arising in queueing problems. In Computations with Markov Chains; Springer: Boston, MA, USA, 1995.
22. Ramaswami, V. The generality of quasi birth-and-death processes. In Advances in Matrix Analytic Methods for Stochastic Models; Alfa, A.S., Chakravarthy, S.R., Eds.; Notable Publications: Branchburg, NJ, USA, 1998; pp. 93–113.
Figure 1. Numerical solution of the kernel function $G(x,y)$.
Figure 2. Level marginal invariant probability distribution $L\Pi_k$.
Figure 3. Phase marginal invariant probability distribution $P\Pi(y)$.
Figure 4. Differences among methods on the level marginal invariant probability distribution $L\Pi_k$. The legend of the first plot includes mean absolute errors, and the legend of the second plot includes computational times. (FI: functional iteration, CR: cyclic reduction, NI: Newton iteration, RR: Ramaswami reduction, IS: invariant subspace.)
Figure 5. Numerical solutions of the kernel functions. The right picture shows the kernel $R(x,y)$; the left shows its dual kernel $G(x,y)$.
Figure 6. Level marginal invariant probability distribution $L\Pi_k$.
Figure 7. Differences among methods on the level marginal invariant probability distribution $L\Pi_k$. The legend of the first plot includes mean absolute errors, and the legend of the second plot includes computational times. (FI: functional iteration, CR: cyclic reduction, NI: Newton iteration, RR: Ramaswami reduction.)
Table 1. Mean and variance of level and phase.

Method        Queue Length (Level)          Remaining Service Time (Phase)
              Mean        Variance          Mean        Variance
FI            0.3415      0.3680            0.3125      0.1127
CR            0.6964      1.3918            0.3345      0.1255
NI            0.6964      1.3918            0.3345      0.1255
RR            0.3911      0.9596            0.4385      0.1641
Analytical    0.4054      0.5698            -           -

Jiang, S.; Liu, N.; Liu, Y. A Wavelet-Based Computational Framework for a Block-Structured Markov Chain with a Continuous Phase Variable. Mathematics 2023, 11, 1587. https://doi.org/10.3390/math11071587