Article

Stochastic Order for a Multivariate Uniform Distributions Family

by Luigi-Ionut Catana 1,*,† and Anisoara Raducan 2,†
1 Faculty of Mathematics and Computer Science, Mathematical Doctoral School, University of Bucharest, Str. Academiei nr. 14, Sector 1, C.P. 010014 Bucharest, Romania
2 “Gheorghe Mihoc-Caius Iacob” Institute of Mathematical Statistics and Applied Mathematics of the Romanian Academy, 13 Calea 13 Septembrie Street, 050711 Bucharest, Romania
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2020, 8(9), 1410; https://doi.org/10.3390/math8091410
Submission received: 30 July 2020 / Revised: 14 August 2020 / Accepted: 16 August 2020 / Published: 23 August 2020
(This article belongs to the Special Issue Probability, Statistics and Their Applications)

Abstract

In this article we give sufficient conditions for the stochastic order of multivariate uniform distributions on closed convex sets.

1. Introduction

Stochastic dominance has become a topic of great interest, widely studied due to its various applications. The univariate case has been carefully studied since the first papers appeared (see Dudley [1] and Hadar [2]), which introduced the basic criteria and definitions. For an introduction to the field, we recommend the more recent books (see, for instance, Levy [3], Shaked [4] and Zbaganu [5]) that address different types of stochastic dominance and the links between them, while at the same time offering a broad perspective on the multiple implications of stochastic dominance in different domains: economics (see Kim et al. [6]), finance, banking, statistics, risk theory, medicine and others.
The generalization of these results to the multivariate case is justified by practical aspects, for instance in finance: an investor wants a portfolio whose return rate dominates that of another given benchmark portfolio. Post et al. [7] developed an optimization method for constructing investment portfolios that dominate a given benchmark portfolio in terms of third-degree stochastic dominance, and Petrova [8] introduced multivariate stochastic dominance constraints in a multistage portfolio optimization problem.
Stochastic comparison is also strongly related to insurance and risk theory (see Tarp [9] and Xu [10]). Denuit et al. [11] and Raducan et al. [12] present a mirror analysis of risk-seeking versus risk-averse behavior, while Jamali et al. [13] provide a comparison between different types of stochastic ordering.
Several comparison criteria have been defined, some of which require very strong assumptions on the utility functions. Our analysis relies only on the comparison of the cumulative distribution functions. There is, however, a significant difference from the univariate case: more precisely, if X is a multivariate r.v., then it is not true that P(X > x) = 1 − P(X ≤ x), as one can see in Catana [14].
Di Crescenzo et al. [15] investigated the past lifetime of a system, given that at a larger time t the system is found to be failed. They performed some stochastic comparisons between the random lifetimes of the single items and the doubly truncated random variable that describes the system lifetime. Bello et al. [16] introduced a family of stochastic orders, studied its main properties and compared it with other families of stochastic orders that have been proposed in the literature to compare tail risks. Toomaj et al. [17] proposed a new measure and showed that it is equivalent to the generalized cumulative residual entropy of the cumulative weighted random variable. Wang et al. [18] showed that the system performance is better (worse) with stronger component heterogeneity in the parallel (series) system under the usual stochastic order and the (reversed) hazard rate order, under the conditions of interdependency and independency. Also, Di Crescenzo et al. [19] gave results for stochastic comparisons of random lifetimes in a replacement model.
We now present the structure of this article. In Section 2 we introduce some notation and definitions (see, for instance, the dual stochastic domination) and recall known facts. In Section 3 we give sufficient conditions for the stochastic order using affine transforms. In Section 4 we give sufficient conditions for the stochastic order using decompositions. In the last section we discuss the conclusions.

2. Preliminaries

Let (Ω, F, P) be a probability space. Let X : Ω → R^d be a random vector, d ≥ 2.
For x, y ∈ R^d, we denote x ≤ y (x < y) iff x_i ≤ y_i, 1 ≤ i ≤ d (x_i < y_i, 1 ≤ i ≤ d). The minimum and maximum are defined componentwise: min(x, y) = (min(x_i, y_i))_i and max(x, y) = (max(x_i, y_i))_i.
For a random vector X we consider μ(B) = P(X ∈ B), B ∈ B(R^d), its distribution on R^d, F(x) = P(X ≤ x) its distribution function, F̄(x) = 1 − F(x) and F*(x) = P(X ≥ x, X ≠ x).
In this article, for the random vectors X and Y we will denote by μ and ν their distributions and by F and G their distribution functions; λ_d is the Lebesgue measure on (R^d, B(R^d)). The support of X (or, in terms of distribution, the support of μ) is defined as the smallest closed set K having the property that P(X ∈ K) = 1. It will be denoted by Supp X or Supp μ. Thus
Supp X = Supp μ = ∩ {F : μ(F) = 1, F closed}.
For a function f : R → R, we denote f(x)_+ = max(f(x), 0) and f(x)_− = −min(f(x), 0).
For A ⊆ R^d we denote by cl(A) the closure of A.
If A ⊆ R^d is bounded we denote a_* = inf A and a* = sup A.
We also denote by b(A) = proj_1(A) × ⋯ × proj_d(A) the smallest box containing A. Here proj_i(a_1, a_2, …, a_d) = a_i are the canonical projections. It is obvious that
Fact 1.
inf A = inf b(A) = (inf proj_i(A))_{1 ≤ i ≤ d},  sup A = sup b(A) = (sup proj_i(A))_{1 ≤ i ≤ d}.
Indeed, if x ≤ a for all a ∈ A then x_j ≤ t for all t ∈ proj_j(A), hence x_j ≤ inf proj_j(A) for all j, therefore x ≤ inf b(A). It follows that inf A ≤ inf b(A). But A ⊆ b(A), hence inf A ≥ inf b(A), a.s.o.
For instance, if A = {(x, 1 − x) : 0 < x < 1} then b(A) = [0, 1] × [0, 1] and inf A = (0, 0), sup A = (1, 1).
We call a set A ⊆ R^d increasing iff x ∈ A, y ≥ x ⟹ y ∈ A.
We denote the sets
L_c = {x ∈ R^n : x ≤ c},  U_c = {x ∈ R^n : x > c}
for each c ∈ R^n.
Also, for A, B ⊆ R^d we denote A ⪯ B iff for every x ∈ A there is y ∈ B such that x ≤ y, and for every y ∈ B there is x ∈ A such that x ≤ y. The following well-known fact is more or less obvious:
Proposition 1.
Let A, B ⊆ R^d be compact. Then
(1) If d = 1 then A ⪯ B iff inf A ≤ inf B and sup A ≤ sup B.
(2) In general, if A ⪯ B then proj_i(A) ⪯ proj_i(B) for all 1 ≤ i ≤ d.
(3) If A_j, B_j ⊆ R^{d_j}, j ∈ {1, …, n}, and A_j ⪯ B_j for all j, then ∏_{j=1}^n A_j ⪯ ∏_{j=1}^n B_j.
(4) If A ⪯ B then b(A) ⪯ b(B).
(5) If A ⪯ B then inf A ≤ inf B and sup A ≤ sup B.
Proof. 
(1) “⇒” We know that for all a ∈ A there exists b ∈ B such that a ≤ b. Let (a_n)_n be a sequence in A such that a_n → sup A, and let b_n ∈ B be such that a_n ≤ b_n. Then
sup A = lim a_n ≤ lim sup b_n ≤ sup B.
In the same way, let (b_n)_n be a sequence in B such that b_n → inf B, and let a_n ∈ A be such that a_n ≤ b_n. Then inf A ≤ lim inf a_n ≤ lim b_n = inf B.
“⇐” Now we know that a_* ≤ b_* and a* ≤ b* and, as A and B are compact, that a_* = inf A ∈ A, a* = sup A ∈ A, b_* = inf B ∈ B, b* = sup B ∈ B. Let a ∈ A. We want to find a b ∈ B such that a ≤ b. This is pretty obvious: a ≤ a* ≤ b* ∈ B. Conversely, if b ∈ B then b ≥ b_* ≥ a_* ∈ A.
(2) Let s ∈ proj_i(A) and a ∈ A such that a_i = s. There exists b ∈ B such that a ≤ b, hence s ≤ b_i ∈ proj_i(B). Conversely, if t ∈ proj_i(B), let b ∈ B be such that b_i = t and a ∈ A such that a ≤ b. Then a_i ≤ t and a_i ∈ proj_i(A).
(3) Let a = (a_1, …, a_n) ∈ ∏_{j=1}^n A_j. We know that A_j ⪯ B_j for all j. Therefore there exists b_j ∈ B_j such that a_j ≤ b_j, hence a = (a_1, …, a_n) ≤ b = (b_1, …, b_n). The other half of the claim is proved in the same way.
(4) Apply (2) and (3).
(5) Apply Fact 1, (1) and (4). ☐
Remarks 1.
(a) Notice that (1) fails to be true if one of the sets A or B is not closed. For instance, if A = (0, 1) and B = [0, 1] then a_* = b_* = 0, a* = b* = 1, but it is true neither that A ⪯ B nor that B ⪯ A.
(b) Even in the compact case, (1) fails to be true in the multidimensional case. For instance, if A = {(x, x) : 0 ≤ x ≤ 1} and B = {(x, 1 − x) : 0 ≤ x ≤ 1} then inf A = inf B = (0, 0) and sup A = sup B = (1, 1), but it is true neither that A ⪯ B nor that B ⪯ A.
Definition 1.
Let ‖·‖ : R^d → [0, ∞) be a norm. A ball is a set of the form B(a, r) = {x ∈ R^d : ‖x − a‖ ≤ r}. The unit ball is B = B(0_{R^d}, 1).
Definition 2.
Let X, Y : Ω → R^d be two random vectors. We say that
X is stochastically dominated by Y, and we denote X ⪯_st Y, iff P(X ∈ C) ≤ P(Y ∈ C) for every increasing set C ⊆ R^d;
X is weakly stochastically dominated by Y, and we denote X ⪯_stw Y, iff F* ≤ G*;
X is dually stochastically dominated by Y, and we denote X ⪯_std Y, iff F ≥ G.
Obviously, in the unidimensional case (when d = 1) all these three relations coincide.
We shall use the same notation for the distributions corresponding to our variables. More precisely, X ⪯_st (⪯_stw, ⪯_std) Y ⟺ μ ⪯_st (⪯_stw, ⪯_std) ν.
It is well known that (see Shaked et al. [4]):
Definition 3.
Let X, Y : Ω → R^d be random vectors. Then X ⪯_st Y iff E u(X) ≤ E u(Y) for every increasing u : R^d → R for which the expectations exist.
The first interesting case is the stochastic order between uniform distributions defined on compact sets having positive Lebesgue measure or on finite sets. We shall be interested in the connection between the assertions “Unif(A) ⪯_st Unif(B)” and “A ⪯ B”.
In the unidimensional case it is true that if X ⪯_st Y and X, Y ∈ L^∞(Ω, F, P), then Supp X ⪯ Supp Y.
Indeed, let A = Supp X, B = Supp Y. Then A, B are compact sets. Let a_* = Ess inf X, b_* = Ess inf Y, a* = Ess sup X, b* = Ess sup Y. According to Proposition 1, in order to prove that A ⪯ B it is enough to check that a_* ≤ b_* and a* ≤ b*. We know that X ⪯_st Y, hence P(X > x) ≤ P(Y > x) for all x. As P(Y > b*) = 0, it follows that P(X > b*) = 0 too, meaning that a* ≤ b*. In the same way, P(X ≥ a_*) = 1, hence P(Y ≥ a_*) = 1, thus b_* ≥ a_*.
Of course, the converse cannot be true even for uniform distributions. For instance, if A = [0, 1] and B = [0, 1/4] ∪ [1, 5/4] and X ~ Unif(A), Y ~ Unif(B), then A ⪯ B but it is not true that X ⪯_st Y: F(x) = min(x_+, 1) is not everywhere greater than G(x) = min((2x)_+, 1/2) + min((2(x − 1))_+, 1/2).
Or, in the discrete case: A = {0, 3}, B = {1, 2, 4}. Clearly A ⪯ B, but F(2) = 1/2 < G(2) = 2/3, hence it is not true that F ≥ G.
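Both one-dimensional counterexamples can be verified numerically. The following Python snippet (not part of the paper; the function names are chosen here for illustration) evaluates the distribution functions at the points where domination fails:

```python
# Numerical check of the two one-dimensional counterexamples:
# A ⪯ B does not imply Unif(A) ⪯_st Unif(B).

def F_cont(x):
    # CDF of Unif([0, 1])
    return min(max(x, 0.0), 1.0)

def G_cont(x):
    # CDF of Unif([0, 1/4] ∪ [1, 5/4]): density 2 on each piece
    return min(max(2 * x, 0.0), 0.5) + min(max(2 * (x - 1), 0.0), 0.5)

# Continuous case: F(0.2) = 0.2 < G(0.2) = 0.4, so F ≥ G fails
assert F_cont(0.2) < G_cont(0.2)

def F_disc(x, support):
    # CDF of the uniform distribution on a finite set
    return sum(1 for s in support if s <= x) / len(support)

# Discrete case: A = {0, 3}, B = {1, 2, 4}; F(2) = 1/2 < G(2) = 2/3
assert F_disc(2, [0, 3]) < F_disc(2, [1, 2, 4])
```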
But if A , B are convex sets (meaning intervals) the situation changes.
Proposition 2.
If A, B are compact intervals or finite intervals of integers, then Unif(A) ⪯_st Unif(B) iff A ⪯ B.
Proof. 
The continuous case is easy and well known. For the arithmetical case: now the “intervals” are of the form A = {a_*, a_* + 1, …, a_* + m = a*}, B = {b_*, b_* + 1, …, b_* + n = b*}. And we claim that, as in the continuous case, the inequalities a_* ≤ b_* and a* ≤ b* imply F ≥ G.
It is enough to check that F(a_* + k) ≥ G(a_* + k) for b_* − a_* ≤ k ≤ m. But this is equivalent to (k + 1)/(m + 1) ≥ (k + 1 − (b_* − a_*))/(n + 1), i.e., to b_* − a_* ≥ (k + 1)(m − n)/(m + 1), which is obvious for m ≤ n; for m > n it is also true, since k ≤ m gives (k + 1)(m − n)/(m + 1) ≤ m − n, and a* = a_* + m ≤ b_* + n = b* gives b_* − a_* ≥ m − n. ☐
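The arithmetical case of the proposition can also be checked by brute force on small instances. The sketch below (not from the paper; helper names are ours) enumerates integer intervals and confirms that stochastic dominance holds exactly when the endpoints are ordered:

```python
from itertools import product

def cdf(x, support):
    # CDF of the uniform distribution on a finite set
    return sum(1 for s in support if s <= x) / len(support)

def dominates(A, B):
    # Unif(A) ⪯_st Unif(B) in one dimension: F ≥ G at every jump point
    pts = sorted(set(A) | set(B))
    return all(cdf(x, A) >= cdf(x, B) for x in pts)

# Brute-force check of Proposition 2 on small integer intervals:
# dominance holds exactly when a_* ≤ b_* and a* ≤ b*.
for a0, m, b0, n in product(range(4), range(1, 5), range(4), range(1, 5)):
    A = list(range(a0, a0 + m + 1))
    B = list(range(b0, b0 + n + 1))
    assert dominates(A, B) == (a0 <= b0 and a0 + m <= b0 + n)
```

Checking F ≥ G only at the jump points of either CDF is enough, because both functions are right-continuous step functions that are constant in between.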
Remark 1.
If A = {a, a + 2, a + 3} and B = {a, a + 1, a + 3}, then a_* = b_* = a, a* = b* = a + 3 and A ⪯ B, but it is not true that Unif(A) ⪯_st Unif(B), as one can easily see: F(α) = 1/3 < G(α) = 2/3 for any α ∈ [a + 1, a + 2).
What remains true from these facts in the multidimensional case?
Anyway, the implication X ⪯_st Y ⟹ Supp X ⪯ Supp Y remains true due to Strassen’s Theorem (see, for instance, Zbaganu [5]):
If X ⪯_st Y, then there exist versions X′ of X and Y′ of Y (on another probability space (Ω′, F′, P′)) such that X′ ≤ Y′ and cl(X′(Ω′)) = Supp X, cl(Y′(Ω′)) = Supp Y.
As a method which sometimes works, we have:
Lemma 1.
If X ≤ Y a.s., then Supp X ⪯ Supp Y.
Proof. 
Let Ω′ = {ω ∈ Ω : X(ω) ≤ Y(ω)}. As P(Ω′) = 1, we may as well assume that X ≤ Y everywhere. Let A = X(Ω′), B = Y(Ω′). Obviously A ⪯ B (if a = X(ω) ∈ A then a ≤ b = Y(ω) ∈ B, and if b = Y(ω) ∈ B then b ≥ a = X(ω) ∈ A). We can further modify the random variables X and Y on null sets such that cl(A) = Supp X and cl(B) = Supp Y, and of course if A ⪯ B then cl(A) ⪯ cl(B). ☐
As a particular case, if A, B are compact and Unif(A) ⪯_st Unif(B), then A ⪯ B.
If A, B are convex, we can prove a stronger assertion:
Proposition 3.
Let A, B ⊆ R^d be two compact convex sets of positive Lebesgue measure. Suppose that Unif(A) ⪯_stw Unif(B) and Unif(A) ⪯_std Unif(B). Then A ⪯ B.
Proof. 
Let Int A denote the intrinsic interior of A. If a ∈ Int A with P(X > a) > 0, then P(Y > a) > 0 and there exists b = Y(ω) ∈ Int B, ω ∈ {ω : Y(ω) > a}, such that a ≤ b. Let b ∈ Int B with P(Y ≤ b) > 0. Then P(X ≤ b) > 0, thus there exists a = X(ω) ∈ Int A, ω ∈ {ω : X(ω) ≤ b}, such that a ≤ b. But A, B are compact. Then, using a standard argument, for all a ∈ A there exists b ∈ B such that a ≤ b and for all b ∈ B there exists a ∈ A such that a ≤ b; in other words, A ⪯ B. ☐
As for the converse implication, A ⪯ B ⟹ Unif(A) ⪯_st Unif(B), we know that, in general, it fails to be true even in the unidimensional case. But it is verified if A, B are intervals. The analogs of the intervals are either the boxes I_1 × I_2 × ⋯ × I_d or the convex sets.
In the case of boxes the implication holds (see Zbaganu [5]):
Proposition 4.
Let A = I_1 × I_2 × ⋯ × I_d and B = J_1 × J_2 × ⋯ × J_d be two closed boxes. Here I_i, J_i are compact intervals. Then Unif(A) ⪯_st Unif(B) if and only if A ⪯ B.
Proof. 
Suppose that A ⪯ B. According to Proposition 1, proj_i(A) ⪯ proj_i(B), hence I_i ⪯ J_i for all i ∈ {1, …, d}. Let X = (X_j)_{1 ≤ j ≤ d} ~ Unif(A), Y = (Y_j)_{1 ≤ j ≤ d} ~ Unif(B). Then X_i ~ Unif(I_i) and Y_i ~ Unif(J_i). It follows that X_i ⪯_st Y_i for all i ∈ {1, …, d}. Moreover, the components (X_i)_i are independent and (Y_i)_i are independent, too. But it is known that if X_i ⪯_st Y_i for all i ∈ {1, …, d} and the components are independent, then (X_i)_i ⪯_st (Y_i)_i. ☐
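The componentwise argument can be illustrated with the standard quantile coupling: driving both vectors with the same uniforms forces the ordering in every coordinate. This Python sketch (not from the paper; the intervals are chosen here only for illustration) demonstrates the construction:

```python
import random

random.seed(0)

# Boxes A = I1 × I2, B = J1 × J2 with Ii ⪯ Ji (both endpoints ordered)
I = [(0.0, 1.0), (0.0, 2.0)]
J = [(0.5, 1.5), (1.0, 3.0)]

def quantile(u, iv):
    # Quantile function of Unif([lo, hi])
    lo, hi = iv
    return lo + u * (hi - lo)

# Quantile coupling: the same uniforms drive both vectors, so
# Ii ⪯ Ji forces Xi ≤ Yi in every component, hence X ≤ Y a.s.
for _ in range(1000):
    u = [random.random() for _ in I]
    X = [quantile(ui, iv) for ui, iv in zip(u, I)]
    Y = [quantile(ui, iv) for ui, iv in zip(u, J)]
    assert all(xi <= yi for xi, yi in zip(X, Y))
```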
In general, the converse is not true: A ⪯ B does not imply even the weak stochastic order.
Counterexample 1.
Let A = {(x, y) ∈ [0, 1]² : x² + y² ≤ 1} and B = {(x, y) ∈ [0, 2]² : y ≤ 2 − x}. Then A, B are closed convex sets and A ⪯ B, but it is not true that Unif(A) ⪯_stw Unif(B).
Proof. 
It is obvious that the sets verify the properties. Let C = [a, ∞) × [b, ∞) with (a, b) ∈ A. Let f(a, b) = Unif(A)(C) − Unif(B)(C), more precisely
f(a, b) = (2/π)[arcsin √(1 − b²) − arcsin a − a√(1 − a²) − b√(1 − b²) + 2ab] − (2 − a − b)²/4.
Then one can notice that f(a, a) > 0 for some a, for instance f(1/2, 1/2) = 0.02, thus Unif(A) is not weakly stochastically dominated by Unif(B). Even worse, it is not even true that X_j ⪯_stw Y_j, j = 1, 2. ☐

3. Stochastic Orders between Multivariate Uniform Distributions via Affine Transforms

We give sufficient conditions such that Unif(A) ⪯_st Unif(φ(A)). An obvious candidate for φ is a smooth mapping with φ(x) ≥ x for all x ∈ A.
Lemma 2.
If X ~ Unif(A) and φ : R^d → R^d is a smooth diffeomorphism onto its image, then φ(X) is distributed Unif(φ(A)) iff the Jacobian is constant; more precisely, |J_{φ⁻¹}| ≡ λ_d(A)/λ_d(φ(A)), with λ_d : B(R^d) → [0, ∞) the Lebesgue measure.
Proof. 
Let f_X, f_{φ(X)} be the density functions of X and φ(X); thus f_{φ(X)}(y) = f_X(φ⁻¹(y)) · |J_{φ⁻¹}(y)|, y ∈ R^d. Then φ(X) ~ Unif(φ(A)) ⟺ f_{φ(X)}(y) = 1_{φ(A)}(y)/λ_d(φ(A)) for all y ⟺ 1_{φ(A)}(y)/λ_d(φ(A)) = (1_A(φ⁻¹(y))/λ_d(A)) · |J_{φ⁻¹}(y)| for all y ∈ R^d. It is obvious that |J_{φ⁻¹}|(R^d) ⊆ {0, λ_d(A)/λ_d(φ(A))}. If y ∈ φ(A), we obtain |J_{φ⁻¹}(y)| = λ_d(A)/λ_d(φ(A)). But J_{φ⁻¹} is continuous, thus |J_{φ⁻¹}| ≡ λ_d(A)/λ_d(φ(A)). ☐
Remark 2.
Let us consider pairs of sets A ⪯ B having the property that B = φ(A) with φ affine and φ(x) ≥ x for all x ∈ A. This is the case of balls.
Obviously B(a, r) = a + rB, and all the balls are convex and compact. Moreover, B is symmetric, meaning that x ∈ B ⟺ −x ∈ B. It follows that proj_j(B) is of the form [−b_j, b_j] and the box b(B) = ∏_{j=1}^d [−b_j, b_j]. According to Fact 1, inf B = inf b(B) = −b, sup B = sup b(B) = b.
Proposition 5.
1. If B ⪯ B(a, r), then a ≥ 0_{R^d} and |1 − r| ≤ min{a_j/b_j : 1 ≤ j ≤ d}, where b = sup B.
2. The function φ(x) = a + rx has the property that φ(x) ≥ x for all x ∈ B iff the conditions from 1 hold.
3. Unif(B) ⪯_st Unif(B(a, r)) if and only if a ≥ 0_{R^d} and |1 − r| ≤ min{a_j/b_j : 1 ≤ j ≤ d}.
4. Unif(B(a, r)) ⪯_st Unif(B(α, ρ)) iff B(a, r) ⪯ B(α, ρ), and that happens precisely if and only if a ≤ α and |r − ρ| ≤ min{(α_j − a_j)/b_j : 1 ≤ j ≤ d}.
5. If the norm is the usual L^p norm on R^d, defined by ‖x‖_p = (∑_{j=1}^d |x_j|^p)^{1/p} for p ∈ [1, ∞) and ‖x‖_∞ = max_j |x_j| if p = ∞, we know who b is: it is (1, 1, …, 1). Therefore Unif(B(a, r)) ⪯_st Unif(B(α, ρ)) ⟺ a ≤ α and |r − ρ| ≤ min{α_j − a_j : 1 ≤ j ≤ d}.
Proof. 
1. Apply Proposition 1.
Suppose that B ⪯ B(a, r) = a + rB. It follows that inf B ≤ inf(a + rB) = a + r inf B = a − rb and sup B ≤ a + rb, hence −b ≤ a − rb and b ≤ a + rb. Adding these, we get a ≥ 0_{R^d}.
Moreover, −b ≤ a − rb and b ≤ a + rb can be written further as −a ≤ (1 − r)b ≤ a or, componentwise, −a_j/b_j ≤ 1 − r ≤ a_j/b_j.
2. We know that a + rx ≥ x for all x ∈ B. If x = 0_{R^d} we get a ≥ 0_{R^d}. For x = t e_j it follows that a_j + rt ≥ t; the inequality must hold for t ∈ [−b_j, b_j]. Therefore a_j ± rb_j ≥ ±b_j, and we find the same conditions as in 1. Conversely, if a ≥ 0 and −a ≤ (1 − r)b ≤ a, we want to prove that φ(x) ≥ x for all x ∈ B. But that is true even for x_j ∈ [−b_j, b_j], because the affine functions φ_j(t) = a_j + rt − t have the property that φ_j(±b_j) ≥ 0.
3. Let X ~ Unif(B) and Y ~ Unif(B(a, r)). The random vector Y′ = a + rX has the same distribution as Y and X ≤ Y′, hence Unif(B) ⪯_st Unif(B(a, r)). For the converse implication apply Lemma 1 and then 1.
4. Suppose that B(a, r) ⪯ B(α, ρ), i.e., a + rB ⪯ α + ρB. Using the same tricks as before we get a − rb ≤ α − ρb, a + rb ≤ α + ρb and a ≤ α. It follows that |r − ρ| ≤ (α_j − a_j)/b_j for all 1 ≤ j ≤ d, and this is the condition that a + rx ≤ α + ρx for all x ∈ B.
5. It is similar. ☐
A slight generalization for L^p norms is the following result. Here B is the unit ball of L^p.
Proposition 6.
Let f : R^d → R^d be defined by f(x) = Ax + b, where A = (a_{i,j})_{1 ≤ i,j ≤ d} has the form a_{i,j} = λ_i δ_{i,j} and b ∈ R^d. Then f(x) ≥ x for all x ∈ B iff b ≥ 0_{R^d} and 1 − b ≤ λ ≤ 1 + b.
Proof. 
Let M = {x ∈ B : there is 1 ≤ i ≤ d with x_i = ±1 and x_j = 0 for j ≠ i}. According to the definition, f(x) = (λ_i x_i + b_i)_{1 ≤ i ≤ d}. Then f(x) ≥ x for all x ∈ M implies b ≥ 0_{R^d} and λ_i ∈ [1 − b_i, 1 + b_i] for all 1 ≤ i ≤ d.
For the converse we need to verify that (λ_i − 1)x_i + b_i ≥ 0, 1 ≤ i ≤ d. But this is obvious: x ∈ B ⟹ x_i ∈ [−1, 1], and 1 − b ≤ λ ≤ 1 + b ⟹ |λ_i − 1| ≤ b_i, thus −b_i ≤ (λ_i − 1)x_i ≤ b_i and (λ_i − 1)x_i + b_i ≥ 0. ☐
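The condition of the proposition can again be checked numerically on the L^∞ unit ball. A minimal sketch (not from the paper; the values of b and λ are sample data chosen here to satisfy 1 − b ≤ λ ≤ 1 + b):

```python
import random

random.seed(2)

d = 3
b = [0.2, 0.3, 0.1]        # translation, b ≥ 0
lam = [0.9, 1.25, 1.05]    # diagonal entries, 1 - b_i ≤ λ_i ≤ 1 + b_i
assert all(1 - bi <= li <= 1 + bi for bi, li in zip(b, lam))

# f(x) = Ax + b with A = diag(lam) satisfies f(x) ≥ x on the unit cube
for _ in range(1000):
    x = [random.uniform(-1, 1) for _ in range(d)]
    fx = [li * xi + bi for li, xi, bi in zip(lam, x, b)]
    assert all(fi >= xi for fi, xi in zip(fx, x))
```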
Another idea runs as follows: suppose that F = Unif(C) is the distribution of the stochastic vector X. Let Y = ∑_{j=1}^n f_j(X) 1_{Ω_j}, where (Ω_j)_{j=1,…,n} is a partition of Ω which is independent of X.
The distribution of Y is a mixture. If f_j(x) ≥ x, then X ≤ Y, hence we have stochastic domination.
If X is uniformly distributed on C, the f_j are affine and, moreover, the sets f_j(C) are disjoint, then one may hope to be able to choose the weights p_j = P(Ω_j) in such a way that Y is again uniformly distributed. Even if they are not affine we could try. Precisely:
Proposition 7.
Let C ⊆ R^d be a Borel set having positive finite Lebesgue measure. Let X be a random vector uniformly distributed on C, let I be an at most countable set, and let (f_j)_{j∈I}, f_j : R^d → R^d, have the property that f_j(x) ≥ x for almost all x ∈ C and, moreover, that f_j(X) ~ Unif(f_j(C)).
Suppose that the sets f_j(C) are almost disjoint, meaning that i ≠ j ⟹ λ_d(f_i(C) ∩ f_j(C)) = 0.
Suppose that ∑_{j∈I} λ_d(f_j(C)) < ∞.
Then Unif(C) ⪯_st Unif(∪_{j∈I} f_j(C)).
Proof. 
We know that f_j(X) ~ Unif(C_j), where C_j = f_j(C). Let π_j = λ_d(f_j(C)). Then the density of f_j(X) is (1/π_j) 1_{C_j}. Let (Ω_j)_{j∈I} be a partition of Ω which is independent of X, and let Y = ∑_{j∈I} f_j(X) 1_{Ω_j}. Then the distribution of Y is G = ∑_{j∈I} P(Ω_j) Unif(C_j) and its density is ∑_{j∈I} (P(Ω_j)/π_j) 1_{C_j} =: ρ. If we choose P(Ω_j) = π_j/π with π = ∑_{j∈I} π_j = λ_d(∪_{j∈I} C_j), we get ρ = (1/π) ∑_{j∈I} 1_{C_j} = (1/π) 1_{∪_{j∈I} C_j} (a.e.), hence Y is uniformly distributed and, as X ≤ Y, it follows that Unif(C) ⪯_st Unif(∪_{j∈I} f_j(C)). ☐
In terms of transition operators, Q(x) = ∑_{j∈I} p_j δ_{f_j(x)} with p_j = π_j/π.
The real problem is how to construct the functions f_j. An idea is to split the set C into almost disjoint subsets (Δ_{j,k})_{k∈K}, K at most countable, and to define f_j(x) = a_{j,k} + A_{j,k} x on the sets Δ_{j,k}, taking care that |det A_{j,k}| does not depend on k. To understand that, let us look at
Example 1.
Let X ~ Unif(C) and Y ~ Unif(Δ), where C = [0, 1]² and Δ is the triangle with the vertices O(0, 0), A(2, 0), B(0, 2). Clearly C ⪯ Δ, but we already know that this is not enough to imply that Unif(C) ⪯_st Unif(Δ).
Let f_1(x, y) = (x, y) and f_2(x, y) = (x, 2 − y) if x ≤ y, f_2(x, y) = (2 − x, y) if x > y. Then f_1(C) = C, f_2(C) = Δ \ C, f_j(x) ≥ x and π_1 := λ_2(f_1(C)) = 1, π_2 := λ_2(f_2(C)) = λ_2(Δ \ C) = 1, π := λ_2(Δ) = π_1 + π_2 = 2. If we put P(Ω_i) = π_i/π = 1/2, we have indeed Y = f_1(X) 1_{Ω_1} + f_2(X) 1_{Ω_2} distributed Unif(Δ) = P(Ω_1) Unif(f_1(C)) + P(Ω_2) Unif(f_2(C)).
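The mixture construction of the example can be simulated directly. The following Python sketch (not from the paper) draws X uniformly on the unit square, applies f_1 or f_2 with probability 1/2 each, and checks that the result dominates X and lands inside the triangle:

```python
import random

random.seed(3)

def f2(x, y):
    # Maps the unit square onto the triangle piece Delta \ C
    return (x, 2 - y) if x <= y else (2 - x, y)

# Y = f1(X) or f2(X) with probability 1/2 each is uniform on the
# triangle Delta = co{(0,0), (2,0), (0,2)} and dominates X pointwise.
for _ in range(2000):
    X = (random.random(), random.random())
    Y = X if random.random() < 0.5 else f2(*X)
    assert Y[0] >= X[0] and Y[1] >= X[1]   # X ≤ Y componentwise
    assert Y[0] + Y[1] <= 2                # Y lands inside Delta
```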
More generally, we do not need C to be the unit square.
Let C = C_a = co{(0, 0), (0, 1 + a), (1 + a, 0), (1, 1)} with a > 0.
Notice that C_1 = Δ, i.e., the same as in Example 1.
Let f_1(x, y) = (x, y) and
f_2(x, y) = (x, 2 − x − (y − x)(1 − a)/(1 + a)) if x ≤ y,  f_2(x, y) = (2 − y − (x − y)(1 − a)/(1 + a), y) if x > y.
The idea is that if the point (x, y) is above the diagonal, it is moved up to (x, y′), with y′ = 2 − x − (y − x)(1 − a)/(1 + a). It is a kind of “symmetry” with respect to the line which joins (0, 1 + a) with (1, 1). Thus we have split C into C₁ = co{(0, 0), (0, 1 + a), (1, 1)} and C₂ = co{(0, 0), (1 + a, 0), (1, 1)}.
Both branches of f_2 are of the form f_2(z) = α + Az for z = (x, y)^T.
On C₁: α = (0, 2)^T, A = [1, 0; −2a/(1 + a), −(1 − a)/(1 + a)], and on C₂: α = (2, 0)^T, A = [−(1 − a)/(1 + a), −2a/(1 + a); 0, 1].
The absolute value of det A is the same on both branches: it is (1 − a)/(1 + a).
As f_2(C₁) = co{(0, 1 + a), (1, 1), (0, 2)} and f_2(C₂) = co{(1 + a, 0), (1, 1), (2, 0)}, we see that f_2(C) = Δ \ C and f_1(C) = C.
Apply Proposition 7: it follows that Unif(C) ⪯_st Unif(C ∪ (Δ \ C)) = Unif(Δ).
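The two affine branches, as reconstructed here from the example's data, can be sanity-checked in Python: both have the same |det A| = (1−a)/(1+a), and each branch fixes the shared edge vertices while sending (0, 0) to the far corner:

```python
def f2(x, y, a):
    # Two affine branches, as reconstructed from the matrices in the text
    s = (1 - a) / (1 + a)
    if x <= y:
        return (x, 2 - x - (y - x) * s)
    return (2 - y - (x - y) * s, y)

def close(p, q, tol=1e-9):
    return abs(p[0] - q[0]) < tol and abs(p[1] - q[1]) < tol

a = 0.5
s = (1 - a) / (1 + a)

# Vertex images: C_1 = co{(0,0), (0,1+a), (1,1)} is mapped onto
# co{(0,2), (0,1+a), (1,1)}, with (0,1+a) and (1,1) fixed.
assert close(f2(0, 0, a), (0, 2))
assert close(f2(0, 1 + a, a), (0, 1 + a))
assert close(f2(1, 1, a), (1, 1))
# Symmetric branch: (1+a,0) is fixed
assert close(f2(1 + a, 0, a), (1 + a, 0))

# Both branches have the same |det A| = (1-a)/(1+a)
A1 = [[1, 0], [-2 * a / (1 + a), -s]]
A2 = [[-s, -2 * a / (1 + a)], [0, 1]]
det = lambda M: M[0][0] * M[1][1] - M[0][1] * M[1][0]
assert abs(abs(det(A1)) - s) < 1e-12
assert abs(abs(det(A2)) - s) < 1e-12
```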

4. Stochastic Orders between Multivariate Uniform Distributions via Decomposition

Another criterion to decide the stochastic order could be the total probability formula. Here is its simplest variant.
Theorem 1.
Let F be a probability distribution on R². Then there exist a probability μ on R and a transition probability Q from R to R such that F = μ ⊗ Q.
Or, even more precisely, in random vector terms:
Let Z = (X, Y) be a stochastic vector in the plane. Let F be its distribution, μ the distribution of X and Q(x) the conditional distribution of Y given that X = x.
Then F = μ ⊗ Q.
This is a notation meaning that ∫ u dF = ∫ (∫ u(x, y) Q(x, dy)) dμ(x).
Now suppose that we have two random vectors in the plane, Z_j = (X_j, Y_j), with their distributions written as F_j = μ_j ⊗ Q_j.
It seems plausible that if μ_1 ⪯_st μ_2 and Q_1(x) ⪯_st Q_2(x) for all x, then F_1 ⪯_st F_2.
We were able to prove a weaker result. Call a transition probability Q monotone if x ≤ x′ ⟹ Q(x) ⪯_st Q(x′).
Proposition 8.
Let μ_1, μ_2 be two probability distributions and Q_1, Q_2 be two transition probabilities. Suppose that at least one of them is monotone.
Then μ_1 ⪯_st μ_2 and Q_1(x) ⪯_st Q_2(x) for all x imply μ_1 ⊗ Q_1 ⪯_st μ_2 ⊗ Q_2.
Proof. 
Let u : R² → R be measurable, bounded and increasing.
Let v_j(x) = ∫ u(x, y) Q_j(x, dy).
Suppose that Q_1 is monotone. Then v_1 is non-decreasing.
Indeed, let x < x′. Then v_1(x) = ∫ u(x, y) Q_1(x, dy) ≤ ∫ u(x, y) Q_1(x′, dy) (as Q_1(x) ⪯_st Q_1(x′) and the mapping y ↦ u(x, y) is nondecreasing). As u(x, y) ≤ u(x′, y), it follows that ∫ u(x, y) Q_1(x′, dy) ≤ ∫ u(x′, y) Q_1(x′, dy), hence v_1(x) ≤ v_1(x′).
Now ∫ u d(μ_1 ⊗ Q_1) = ∫ (∫ u(x, y) Q_1(x, dy)) dμ_1(x) = ∫ v_1 dμ_1 ≤ ∫ v_1 dμ_2 (as μ_1 ⪯_st μ_2 and v_1 is non-decreasing) = ∫ (∫ u(x, y) Q_1(x, dy)) dμ_2(x) ≤ ∫ (∫ u(x, y) Q_2(x, dy)) dμ_2(x) (since Q_1(x) ⪯_st Q_2(x)), and the last term is ∫ u d(μ_2 ⊗ Q_2).
If Q_2 is monotone we write
∫ u d(μ_1 ⊗ Q_1) = ∫ (∫ u(x, y) Q_1(x, dy)) dμ_1(x) ≤ ∫ (∫ u(x, y) Q_2(x, dy)) dμ_1(x) =
= ∫ v_2 dμ_1 ≤ ∫ v_2 dμ_2 = ∫ u d(μ_2 ⊗ Q_2). ☐
Remark 3.
In general there is no reason why v_j should be increasing.
Suppose for instance that Q(x) = Unif([0, 1 − x]), x ∈ (0, 1). Then v(x) = (1/(1 − x)) ∫_0^{1−x} u(x, y) dy. If u(x, y) = xy, then v(x) = (1/(1 − x)) ∫_0^{1−x} xy dy = x(1 − x)²/(2(1 − x)) = x(1 − x)/2, which is not increasing.
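A quick numerical confirmation of the remark (not from the paper; the helper names are ours): the closed form x(1−x)/2 agrees with a quadrature of the integral, and the function rises and then falls:

```python
# The function v(x) = x(1-x)/2 from the remark, obtained for
# u(x, y) = x*y and Q(x) = Unif([0, 1-x]).

def v(x):
    return x * (1 - x) / 2

def v_quad(x, n=10000):
    # Midpoint-rule version of (1/(1-x)) * integral_0^{1-x} x*y dy
    h = (1 - x) / n
    return sum(x * (k + 0.5) * h * h for k in range(n)) / (1 - x)

assert abs(v_quad(0.3) - v(0.3)) < 1e-9
# v rises and then falls, so it is not monotone
assert v(0.2) < v(0.5) and v(0.8) < v(0.5)
```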
A slight generalization of Proposition 8 having the same proof is:
Proposition 9.
Let μ_1, μ_2 be two probability distributions and Q_1, Q_2 be two transition probabilities. Suppose that v_1 or v_2 is nondecreasing for every nondecreasing u, where v_j(x) = ∫ u(x, y) Q_j(x, dy).
Then μ_1 ⪯_st μ_2 and Q_1(x) ⪯_st Q_2(x) for all x imply μ_1 ⊗ Q_1 ⪯_st μ_2 ⊗ Q_2.
Example 2.
The same as before, let X ~ Unif(C) and Y ~ Unif(Δ).
But Unif(C) = μ_1 ⊗ Q_1 and Unif(Δ) = μ_2 ⊗ Q_2, with F_{μ_1}(x) = min(x_+, 1), F_{μ_2}(x) = (x − x²/4) 1_{[0,2)}(x) + 1_{[2,∞)}(x),
Q_1(x) = Unif([0, 1]), Q_2(x) = Unif([0, 2 − x]).
Now v_1(x) = ∫_0^1 u(x, y) dy is obviously non-decreasing, even if
v_2(x) = (1/(2 − x)) ∫_0^{2−x} u(x, y) dy is not. As μ_1 ⪯_st μ_2, Q_1(x) ⪯_st Q_2(x) for x ∈ [0, 1] and v_1 is non-decreasing, it follows that Unif(C) ⪯_st Unif(Δ), or 2 ∫_0^1 ∫_0^1 u(x, y) dy dx ≤ ∫_0^2 ∫_0^{2−x} u(x, y) dy dx for any increasing u.
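The final inequality can be illustrated by Monte Carlo for a particular increasing u; the sketch below (not from the paper; u(x, y) = x + y is a test function chosen here) compares the two expectations, which are 1 for Unif(C) and 4/3 for Unif(Δ) (centroid (2/3, 2/3)):

```python
import random

random.seed(4)
u = lambda x, y: x + y   # an increasing test function, chosen here

n = 20000
# E[u] under Unif(C), C = [0, 1]^2
mean_C = sum(u(random.random(), random.random()) for _ in range(n)) / n

# E[u] under Unif(Delta) via rejection sampling in [0, 2]^2
tri = []
while len(tri) < n:
    x, y = 2 * random.random(), 2 * random.random()
    if x + y <= 2:
        tri.append(u(x, y))
mean_T = sum(tri) / n

assert mean_C < mean_T   # consistent with Unif(C) ⪯_st Unif(Delta)
```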

5. Conclusions

This study started from a well-known result in the univariate case. We have followed two different approaches, transforms and decompositions, in order to prove similar results in a more general framework. Examples, remarks and counterexamples highlight interesting cases which might themselves generate new questions. For instance, different types of order might be considered, together with the links between them. Therefore we consider that this study merits further investigation.

Author Contributions

These authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dudley, R. Weak convergence of probabilities on non-separable metric spaces and empirical measures on Euclidean spaces. Ill. J. Math. 1966, 10, 109–126.
  2. Hadar, J.; Russell, W.R. Rules for Ordering Uncertain Prospects. Am. Econ. Rev. 1969, 59, 25–34.
  3. Levy, H. Stochastic Dominance: Investment Decision Making under Uncertainty, 3rd ed.; Springer: Berlin/Heidelberg, Germany, 2015.
  4. Shaked, M.; Shanthikumar, J.G. Stochastic Orders; Springer Series in Statistics: Berlin/Heidelberg, Germany, 2006.
  5. Zbaganu, G. Metode Matematice in Teoria Riscului si Actuariat [Mathematical Methods in Risk Theory and Actuarial Science]; Editura Universitatii Bucuresti: Bucharest, Romania, 2004.
  6. Kim, B.; Kim, J. Stochastic ordering of Gini indexes for multivariate elliptical risks. Insur. Math. Econ. 2019, 88, 151–158.
  7. Post, T.; Kopa, M. Portfolio Choice Based on Third-Degree Stochastic Dominance. Manag. Sci. 2017, 63, 3147–3529.
  8. Petrova, B. Multistage portfolio optimization with multivariate dominance constraints. Comput. Manag. Sci. 2019, 16, 17–46.
  9. Tarp, F.; Osterdal, L.P. Multivariate Discrete First Order Stochastic Dominance; Discussion Paper No. 07–23; Department of Economics, University of Copenhagen: Copenhagen, Denmark, 2007. Available online: http://www.econ.ku.dk/research/Publications/pink/2007/pink2007.asp (accessed on 23 July 2007).
  10. Xu, G.; Wong, W.K. Multivariate Stochastic Dominance for Risk Averters and Risk Seekers. RAIRO Oper. Res. 2016, 50, 575–586.
  11. Denuit, M.; Eeckhoudt, L.; Tsetlin, I.; Winkler, R.L. Multivariate Concave and Convex Stochastic Dominance; CORE Discussion Paper; Center for Operations Research and Econometrics: Ottignies-Louvain-la-Neuve, Belgium, 2010; Volume 44.
  12. Raducan, A.M.; Vernic, R.; Zbaganu, G. On the ruin probability for nonhomogeneous claims and arbitrary inter-claim revenues. J. Comput. Appl. Math. 2015, 290, 319–333.
  13. Jamali, D.; Amiri, M.; Jamalizadeh, A. Comparison of the Multivariate Skew-Normal Random Vectors Based on the Integral Stochastic Ordering. Commun. Stat. Theory Methods. Available online: https://www.tandfonline.com/doi/abs/10.1080/03610926.2020.1740934?journalCode=lsta20 (accessed on 23 March 2020).
  14. Catana, L.I. A property of unidimensional distributions which is lost in multidimensional case. Gaz. Mat. Ser. A 2016, 3–4, 39–41.
  15. Di Crescenzo, A.; Di Gironimo, P.; Kayal, S. Analysis of the Past Lifetime in a Replacement Model through Stochastic Comparisons and Differential Entropy. Mathematics 2020, 8, 1203.
  16. Bello, A.J.; Mulero, J.; Sordo, M.A.; Suárez-Llorens, A. On Partial Stochastic Comparisons Based on Tail Values at Risk. Mathematics 2020, 8, 1181.
  17. Toomaj, A.; Di Crescenzo, A. Connections between Weighted Generalized Cumulative Residual Entropy and Variance. Mathematics 2020, 8, 1072.
  18. Wang, J.; Yan, R.; Lu, B. Stochastic Comparisons of Parallel and Series Systems with Type II Half Logistic-Resilience Scale Components. Mathematics 2020, 8, 470.
  19. Di Crescenzo, A.; Di Gironimo, P. Stochastic Comparisons and Dynamic Information of Random Lifetimes in a Replacement Model. Mathematics 2018, 6, 204.
