Article

Stochastic Order and Generalized Weighted Mean Invariance

Mateu Sbert, Jordi Poch, Shuning Chen and Víctor Elvira
1 Graphics and Imaging Laboratory, University of Girona, 17003 Girona, Spain
2 Department of Civil Engineering, Urban Institute, School of Engineering, Kyushu University, Fukuoka 819-0395, Japan
3 School of Mathematics, University of Edinburgh, Edinburgh EH9 3FD, UK
* Author to whom correspondence should be addressed.
Entropy 2021, 23(6), 662; https://doi.org/10.3390/e23060662
Submission received: 6 May 2021 / Revised: 15 May 2021 / Accepted: 17 May 2021 / Published: 25 May 2021
(This article belongs to the Special Issue Measures of Information)

Abstract: In this paper, we present order invariance theoretical results for weighted quasi-arithmetic means of a monotonic series of numbers. The quasi-arithmetic mean, or Kolmogorov–Nagumo mean, generalizes the classical mean and appears in many disciplines, from information theory to physics, from economics to traffic flow. Stochastic orders are defined on weights (or, equivalently, on discrete probability distributions). They were introduced to study risk in economics and decision theory, and have recently found utility in Monte Carlo techniques and in image processing. We show in this paper that, if two distributions of weights are ordered under first stochastic order, then for any monotonic series of numbers their weighted quasi-arithmetic means share the same order. This means, for instance, that the arithmetic and harmonic means for two different distributions of weights always have to be aligned if the weights are stochastically ordered; that is, either both means increase or both decrease. We explore the invariance properties when convex (concave) functions define both the quasi-arithmetic mean and the series of numbers, we show their relationship with the increasing concave order and the increasing convex order, and we observe the important role played by a newly defined mirror property of stochastic orders. We also give some applications to entropy and cross-entropy, and present an example of the multiple importance sampling Monte Carlo technique that illustrates the usefulness and transversality of our approach. Invariance theorems are useful when a system is represented by a set of quasi-arithmetic means and we want to change the distribution of weights so that all the means evolve in the same direction.

1. Introduction and Motivation

Stochastic orders [1,2] are orders defined in probability theory and statistics to quantify the concept of one random variable being bigger or smaller than another. Discrete probability distributions, also called probability mass functions (pmf), are n-tuples of non-negative values that add up to 1, and can thus be interpreted in several ways: for instance, as weights in the computation of moments of the discrete random variable described by the pmf, or as equivalence classes of compositional data [3]. Stochastic orders have found application in decision and risk theory [4], and in economics in general, among many other fields [2]. Some stochastic orders have been defined based on order invariance: two pmf's are ordered when the arithmetic means of any increasing sequence of real numbers, weighted with the corresponding pmf's, are ordered in the same direction. This raises the question of whether this invariance might also hold for other kinds of means beyond the arithmetic mean.
The quasi-arithmetic means, also called Kolmogorov or Kolmogorov–Nagumo means, are ubiquitous in many branches of science [5]. They have the expression $g^{-1}\left(\sum_{k=1}^M \alpha_k\, g(b_k)\right)$, where $g(x)$ is a real-valued strictly monotonic function, $\{b_k\}_{k=1}^M$ a sequence of reals, and $\{\alpha_k\}_{k=1}^M$ a set of weights with $\sum_{k=1}^M \alpha_k = 1$. This family of means comprises the usual means: arithmetic ($g(x) = x$), $\sum_{k=1}^M \alpha_k b_k$; harmonic ($g(x) = 1/x$), $\left(\sum_{k=1}^M \alpha_k b_k^{-1}\right)^{-1}$; and power mean ($g(x) = x^p$), $\left(\sum_{k=1}^M \alpha_k b_k^p\right)^{1/p}$. For a long time, economists have discussed the best mean for a problem [6]. The harmonic mean is used for the price-earnings ratio, and power means are used to represent the aggregate labor demand and its corresponding wage [7], and the constant elasticity of substitution (CES) [8]. Yoshida [9,10] has studied the invariance under quasi-arithmetic means with increasing function $g(x)$ and for utility functions. In information theory, Alfréd Rényi [11] defined axiomatically the entropy of a probability distribution as a Kolmogorov mean of the information $\log\frac{1}{p_k}$ conveyed by result $k$ with probability $p_k$, and recently, Américo et al. [12] defined conditional entropy based on quasi-arithmetic means. In physics, the equivalent spring constant of springs combined in series is obtained as the harmonic mean of the individual spring constants, and in parallel, as their arithmetic mean [13], while the equivalent resistance of resistors combined in parallel is obtained as the harmonic mean of the individual resistances, and in series as their arithmetic mean [14]. In traffic flow [15], the arithmetic and harmonic means of the speed distribution are used. In [16], both the geometric and harmonic means are used in addition to the arithmetic mean to improve noise source maps.
In our recent work on inequalities for generalized, quasi-arithmetic weighted means [17], we found some invariance properties that depended on the particular relationship considered between the sequences of weights. These relationships between weights define the first stochastic order and the likelihood ratio order. Their application to multiple importance sampling (MIS), a Monte Carlo technique, has been presented in [18], their application to cross-entropy in [19], and in [20] applications to image processing, traffic flow, and income distribution have been shown. In [21], the invariance results on products of distributions of independent scalar random variables [22] were generalized to any joint distribution of a two-dimensional random variable.
In this paper, we show that the order invariance is a necessary and sufficient condition for first stochastic order, and that it holds under any quasi-arithmetic mean. We also study invariance under the second stochastic order, and under the likelihood ratio, hazard-rate, and increasing convex stochastic orders. The fact that the invariance results hold for both increasing and decreasing monotonic functions allows us to use both utilities and liabilities, represented by efficiencies and expected errors, respectively, to look for an optimal solution: for liabilities we look for the minimum expected error, while for utilities we look for the maximum efficiency.
The rest of the paper is organized as follows. In Section 2, we introduce stochastic orders; in Section 3, the quasi-arithmetic mean and its relationship with stochastic order. In Section 4, we present the invariance theorems; in Section 5, we discuss the invariance for concave (convex) functions; in Section 6, its application to stochastic orders; and in Section 7, we present an example based on the linear combination of Monte Carlo estimators. Finally, conclusions and future work are given in Section 8.

2. Stochastic Orders

Stochastic orders are pre-orders (i.e., binary relations holding the reflexive and transitive properties) defined on probability distributions with finite support. Note that, equivalently, one can think of sequences (i.e., ordered sets) of non-negative weights/values that sum up to one. Observe that any sequence $\{\alpha_k\}$ of M positive numbers such that $\sum_{k=1}^M \alpha_k = 1$ can be considered a probability distribution. It can also be seen as an element of the (M-1)-simplex, i.e., $\{\alpha_k\} \in \Delta^{M-1}$. While several interpretations hold, and hence increase the range of applicability, in the remainder of this paper we will talk of sequences without any loss of generality.
Notation. 
We use the symbols $\preceq, \succeq$ to represent orders between two sequences $\{\alpha_k\}$ and $\{\alpha'_k\}$ of size M; e.g., we write $\{\alpha_k\} \preceq \{\alpha'_k\}$ or, equivalently, $\{\alpha'_k\} \succeq \{\alpha_k\}$. We will denote the elements of the sequences without the curly brackets; e.g., the first element of the sequence $\{\alpha_k\}$ is denoted as $\alpha_1$, while the last one is $\alpha_M$. Moreover, the first and last elements of the sequences receive special attention: when $\alpha_1 \leq \alpha'_1$, we write this order as $\{\alpha_k\} \succeq_F \{\alpha'_k\}$ (where F stands for first), and whenever $\alpha_M \geq \alpha'_M$, we write $\{\alpha_k\} \succeq_L \{\alpha'_k\}$ (where L stands for last). The case with both $\alpha_1 \leq \alpha'_1$ and $\alpha_M \geq \alpha'_M$ we denote as $\{\alpha_k\} \succeq_{FL} \{\alpha'_k\}$. The orders $\succeq_F, \succeq_L, \succeq_{FL}$ are superorders of the first stochastic dominance order, $\{\alpha_k\} \succeq_{FSD} \{\alpha'_k\}$, which will be studied in Section 6. We denote by $\{\alpha_{M-k+1}\}$ the sequence with the same elements as $\{\alpha_k\}$ but in reversed order.
Example 1
(Toy example). Given the sequences $\{\alpha_k\} = \{0.5, 0.1, 0.2, 0.2\}$ and $\{\alpha'_k\} = \{0.4, 0.1, 0.2, 0.3\}$, we have both $\{\alpha'_k\} \succeq_F \{\alpha_k\}$ and $\{\alpha'_k\} \succeq_L \{\alpha_k\}$, and thus $\{\alpha'_k\} \succeq_{FL} \{\alpha_k\}$. On the other hand, for the sequences $\{\alpha_k\}$ and $\{\alpha''_k\} = \{0.4, 0.3, 0.2, 0.1\}$, we have $\{\alpha''_k\} \succeq_F \{\alpha_k\}$ but $\{\alpha_k\} \succeq_L \{\alpha''_k\}$, and thus neither sequence dominates the other under $\succeq_{FL}$.
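These comparisons reduce to inspecting the first and last elements of each sequence; the following minimal Python sketch (helper names are illustrative, not from the paper) reproduces the example:

```python
def f_order(a, b):    # {a} >=_F {b} when a_1 <= b_1
    return a[0] <= b[0]

def l_order(a, b):    # {a} >=_L {b} when a_M >= b_M
    return a[-1] >= b[-1]

def fl_order(a, b):   # {a} >=_FL {b}: both conditions at once
    return f_order(a, b) and l_order(a, b)

alpha   = [0.5, 0.1, 0.2, 0.2]
alpha_1 = [0.4, 0.1, 0.2, 0.3]   # {alpha'_k} in the example
alpha_2 = [0.4, 0.3, 0.2, 0.1]   # {alpha''_k} in the example

print(fl_order(alpha_1, alpha))                            # True
print(fl_order(alpha_2, alpha), fl_order(alpha, alpha_2))  # False False
```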
Property 1
(Mirror property). One desirable property of a stochastic order is that, when reversing the ordering of the sequence elements, the stochastic order is reversed as well, i.e., if $\{\alpha_k\} \succeq \{\alpha'_k\}$, then $\{\alpha_{M-k+1}\} \preceq \{\alpha'_{M-k+1}\}$. This is similar to the invariance of physical laws under exchange of the right and left hand. We call this property the mirror property.
Definition 1
(Mirror property). We say that a stochastic order has the mirror property if
$$\{\alpha_k\} \succeq \{\alpha'_k\} \Leftrightarrow \{\alpha_{M-k+1}\} \preceq \{\alpha'_{M-k+1}\}. \quad (1)$$
Observe that the simple orders defined before, $\succeq_F$ and $\succeq_L$, do not hold this property, but $\succeq_{FL}$ does. We will see in Section 6 that the usual stochastic orders do hold the mirror property. However, an order that is insensitive to permutations of the elements of a sequence, like majorization or the Lorenz order, does not hold the mirror property.

3. Quasi-Arithmetic Mean

Stochastic orders are usually defined by invariance of the arithmetic mean [1,2] (see Section 6), and in this paper we want to investigate invariance for more general means. We define here the kind of means we are interested in.
Definition 2
(Quasi-arithmetic or Kolmogorov or Kolmogorov–Nagumo mean). A quasi-arithmetic weighted mean (or Kolmogorov mean) $\mathcal{M}(\{b_k\},\{\alpha_k\})$ of a sequence of real numbers $\{b_k\}_{k=1}^M$ is of the form $g^{-1}\left(\sum_{k=1}^M \alpha_k\, g(b_k)\right)$, where $g(x)$ is a real-valued, invertible, strictly monotonic function with inverse function $g^{-1}(x)$, and $\{\alpha_k\}_{k=1}^M$ are positive weights such that $\sum_{k=1}^M \alpha_k = 1$.
Examples of such means are the arithmetic weighted mean ($g(x) = x$), the harmonic weighted mean ($g(x) = 1/x$), the geometric weighted mean ($g(x) = \log x$) and, in general, the weighted power means ($g(x) = x^r$).
Given a distribution $\{p_k\}$, Shannon entropy, $H(\{p_k\}) = -\sum_k p_k \log p_k$, and Rényi entropy, $R_\beta(\{p_k\}) = \frac{1}{1-\beta}\log \sum_k p_k^\beta$, can be considered as quasi-arithmetic means of the sequence $\{\log\frac{1}{p_k}\}$ with weights $\{p_k\}$: Shannon entropy with $g(x) = x$ (arithmetic mean or expected value), and Rényi entropy with $g(x) = 2^{(1-\beta)x}$ [11]. Tsallis entropy, $S_q(\{p_k\}) = \frac{1 - \sum_{i=1}^M p_i^q}{q-1}$, can be considered the weighted arithmetic mean of the sequence $\{\ln_q \frac{1}{p_k}\}$ with weights $\{p_k\}$, where $\ln_q x = \frac{x^{1-q}-1}{1-q}$ is the q-logarithm function [23]. Without loss of generality, we consider from now on that $\{b_k = f(k)\}$, where $f(x)$ is a real-valued function.
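The entropy expressions above are easy to check numerically; a minimal Python sketch (illustrative code, not from the paper, using base-2 logarithms) computes the quasi-arithmetic mean and recovers Shannon and Rényi entropies as Kolmogorov means of $\{\log\frac{1}{p_k}\}$ with weights $\{p_k\}$:

```python
import math

def kolmogorov_mean(b, alpha, g, g_inv):
    # g^{-1}( sum_k alpha_k * g(b_k) ), with the alpha_k summing to one
    return g_inv(sum(a * g(x) for a, x in zip(alpha, b)))

p = [0.1, 0.2, 0.3, 0.4]
info = [math.log2(1 / pk) for pk in p]           # the sequence {log(1/p_k)}

# Shannon entropy: arithmetic mean (g(x) = x) of {log(1/p_k)} with weights {p_k}
H = kolmogorov_mean(info, p, lambda x: x, lambda y: y)
assert abs(H - (-sum(pk * math.log2(pk) for pk in p))) < 1e-12

# Renyi entropy of order beta: g(x) = 2^{(1-beta)x}
beta = 2.0
R = kolmogorov_mean(info, p,
                    lambda x: 2 ** ((1 - beta) * x),
                    lambda y: math.log2(y) / (1 - beta))
assert abs(R - math.log2(sum(pk ** beta for pk in p)) / (1 - beta)) < 1e-12
```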
Lemma 1.
Consider two sequences of M positive weights $\{\alpha_k\}_{k=1}^M$ and $\{\alpha'_k\}_{k=1}^M$, with $\sum_{k=1}^M \alpha_k = 1$ and $\sum_{k=1}^M \alpha'_k = 1$, and $g(x)$ a strictly monotonic function. The following conditions (a), (b), (a'), (b'), (c) and (d) are equivalent.
(a) for $g(x)$ increasing, for any increasing function $f(x)$ the following inequality holds:
$$\sum_{k=1}^M \alpha_k\, g(f(k)) \geq \sum_{k=1}^M \alpha'_k\, g(f(k));$$
(b) for $g(x)$ increasing, for any increasing function $f(x)$ the following inequality holds:
$$\sum_{k=1}^M \alpha_{M-k+1}\, g(f(k)) \leq \sum_{k=1}^M \alpha'_{M-k+1}\, g(f(k));$$
(a') for $g(x)$ decreasing, for any increasing function $f(x)$ the following inequality holds:
$$\sum_{k=1}^M \alpha_k\, g(f(k)) \leq \sum_{k=1}^M \alpha'_k\, g(f(k));$$
(b') for $g(x)$ decreasing, for any increasing function $f(x)$ the following inequality holds:
$$\sum_{k=1}^M \alpha_{M-k+1}\, g(f(k)) \geq \sum_{k=1}^M \alpha'_{M-k+1}\, g(f(k));$$
(c) the following inequalities hold:
$$\alpha_1 \leq \alpha'_1,\quad \alpha_1 + \alpha_2 \leq \alpha'_1 + \alpha'_2,\quad \ldots,\quad \alpha_1 + \cdots + \alpha_{M-1} \leq \alpha'_1 + \cdots + \alpha'_{M-1},\quad \alpha_1 + \cdots + \alpha_{M-1} + \alpha_M = \alpha'_1 + \cdots + \alpha'_{M-1} + \alpha'_M; \quad (6)$$
(d) the following inequalities hold:
$$\alpha_M \geq \alpha'_M,\quad \alpha_M + \alpha_{M-1} \geq \alpha'_M + \alpha'_{M-1},\quad \ldots,\quad \alpha_M + \cdots + \alpha_2 \geq \alpha'_M + \cdots + \alpha'_2,\quad \alpha_M + \alpha_{M-1} + \cdots + \alpha_1 = \alpha'_M + \alpha'_{M-1} + \cdots + \alpha'_1. \quad (7)$$
If $f(x)$ is decreasing, the inequalities in (a) through (b') are reversed.
Proof. 
Indirect or partial proofs can be found in [17,20]. We provide a complete proof in Appendix A. □
Note. 
Observe that in Lemma 1 it is sufficient to consider the monotonicity of the sequence $\{f(k)\}_{k=1}^M$. Furthermore, Lemma 1 can be extended to any real sequences $\{y_k\}$ and $\{x_k\}$ such that $\sum_{k=1}^M y_k = \sum_{k=1}^M x_k$. It is enough to observe that the order of all the inequalities is unchanged by adding a positive constant, so that $\{y_k\}$ and $\{x_k\}$ can be made positive, and is also unchanged by multiplication by a positive constant, so that the resulting sequences can be normalized.
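Conditions (a) and (c) of Lemma 1 are straightforward to test numerically; the following Python sketch (illustrative code, with condition (c) checked through prefix sums) verifies the claimed implication on a sample pair of weight sequences:

```python
import math
from itertools import accumulate

def condition_c(alpha, alpha_p, tol=1e-12):
    # condition (c): prefix sums of {alpha_k} never exceed those of {alpha'_k}
    A, Ap = list(accumulate(alpha)), list(accumulate(alpha_p))
    return all(a <= ap + tol for a, ap in zip(A, Ap)) and abs(A[-1] - Ap[-1]) < tol

alpha   = [0.1, 0.2, 0.3, 0.4]
alpha_p = [0.4, 0.3, 0.2, 0.1]
assert condition_c(alpha, alpha_p)

f = [1.0, 2.0, 5.0, 7.0]                              # any increasing {f(k)}
for g in (lambda x: x, math.log, lambda x: x ** 3):   # increasing g's
    s  = sum(a * g(x) for a, x in zip(alpha,   f))
    sp = sum(a * g(x) for a, x in zip(alpha_p, f))
    assert s >= sp                                    # condition (a) of Lemma 1
```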
Theorem 1.
Given a mean $\mathcal{M}$ with strictly monotonic function $g(x)$ and two distributions $\{\alpha_k\}$, $\{\alpha'_k\}$, the following propositions are equivalent:
(a) for all increasing functions $f(x)$,
$$\mathcal{M}(\{f(k)\},\{\alpha_k\}) \geq \mathcal{M}(\{f(k)\},\{\alpha'_k\});$$
(b) for all increasing functions $f(x)$,
$$\mathcal{M}(\{f(k)\},\{\alpha_{M-k+1}\}) \leq \mathcal{M}(\{f(k)\},\{\alpha'_{M-k+1}\});$$
(a') for all decreasing functions $f(x)$,
$$\mathcal{M}(\{f(k)\},\{\alpha_k\}) \leq \mathcal{M}(\{f(k)\},\{\alpha'_k\});$$
(b') for all decreasing functions $f(x)$,
$$\mathcal{M}(\{f(k)\},\{\alpha_{M-k+1}\}) \geq \mathcal{M}(\{f(k)\},\{\alpha'_{M-k+1}\});$$
(c) Condition (c) of Lemma 1 holds.
Proof. 
It is a direct consequence of Lemma 1 and of the definition of the quasi-arithmetic mean, observing that the inverse $g^{-1}(x)$ of a strictly monotonic increasing (respectively, decreasing) function $g(x)$ is also increasing (respectively, decreasing). □

4. Invariance

Theorem 2
(Invariance). Given two distributions $\{\alpha_k\}$, $\{\alpha'_k\}$ and two quasi-arithmetic means $\mathcal{M}$, $\mathcal{M}^\star$, the following propositions are equivalent:
(a) for all increasing functions $f(x)$,
$$\mathcal{M}(\{f(k)\},\{\alpha_k\}) \geq \mathcal{M}(\{f(k)\},\{\alpha'_k\});$$
(b) for all increasing functions $f(x)$,
$$\mathcal{M}^\star(\{f(k)\},\{\alpha_k\}) \geq \mathcal{M}^\star(\{f(k)\},\{\alpha'_k\});$$
(a') for all decreasing functions $f(x)$,
$$\mathcal{M}(\{f(k)\},\{\alpha_k\}) \leq \mathcal{M}(\{f(k)\},\{\alpha'_k\});$$
(b') for all decreasing functions $f(x)$,
$$\mathcal{M}^\star(\{f(k)\},\{\alpha_k\}) \leq \mathcal{M}^\star(\{f(k)\},\{\alpha'_k\}).$$
Proof. 
It is a direct consequence of the observation that conditions (c) and (d) in Lemma 1 do not depend on the particular function $g(x)$ considered, and thus the direction of the inequalities does not change with the mean considered, as long as $\{\alpha_k\}$ and $\{\alpha'_k\}$ are kept fixed. □
The following properties relate stochastic orders with the quasi-arithmetic mean. Let $I$ be the set of monotonic functions, $I^<$ the set of increasing functions, and $I^>$ the set of decreasing functions.
Definition 3
(Preserve mean order property). We say that a stochastic order $\succeq$ preserves mean order for a given mean $\mathcal{M}$ and a given set $I^<_0 \subseteq I^<$ of increasing functions when, for all functions $f(x) \in I^<_0$ and any distributions $\{\alpha_k\}$, $\{\alpha'_k\}$,
$$\{\alpha_k\} \succeq \{\alpha'_k\} \Rightarrow \mathcal{M}(\{f(k)\},\{\alpha_k\}) \geq \mathcal{M}(\{f(k)\},\{\alpha'_k\}). \quad (16)$$
Definition 4
(Preserve inverse mean order property). We say that a stochastic order $\succeq$ preserves inverse mean order for a given mean $\mathcal{M}$ and a given set $I^>_0 \subseteq I^>$ of decreasing functions when, for all functions $f(x) \in I^>_0$ and any distributions $\{\alpha_k\}$, $\{\alpha'_k\}$,
$$\{\alpha_k\} \succeq \{\alpha'_k\} \Rightarrow \mathcal{M}(\{f(k)\},\{\alpha_k\}) \leq \mathcal{M}(\{f(k)\},\{\alpha'_k\}).$$
Theorem 2 together with the preserve mean order properties allows us to state the following invariance property:
Theorem 3
(Preserve mean order invariance). Given a stochastic order that preserves mean order (respectively, preserves inverse mean order) for a given mean and for $I^<$ (respectively, for $I^>$), then for any mean it preserves both mean order for $I^<$ and inverse mean order for $I^>$. In other words, the preserve mean order properties are invariant with respect to the mean considered.
Observe that, from Lemma 1 and Theorems 1 and 3, a necessary and sufficient condition for an order to preserve mean order for $I^<$ (or to preserve inverse mean order for $I^>$) is that Equation (6) holds, independently of the mean considered. We will see in Section 6 that this corresponds to the first stochastic dominance order.

5. Concavity and Convexity

Let us now consider $I_v^< \subset I^<$, the set of all increasing concave functions; $I_x^< \subset I^<$, the set of all increasing convex functions; $I_v^> \subset I^>$, the set of all decreasing concave functions; and $I_x^> \subset I^>$, the set of all decreasing convex functions. The following theorem relates the preserve mean order properties with the mirror property.
Theorem 4.
If an order holds the mirror property, then preserving mean order for $I_v^<$ (respectively, $I_x^<$) implies preserving inverse mean order for $I_v^>$ (respectively, $I_x^>$), and vice versa.
Proof. 
Suppose $\{\alpha_k\} \succeq \{\alpha'_k\}$ and $f(x)$ decreasing and concave (respectively, convex). Then, by the mirror property, $\{\alpha'_{M-k+1}\} \succeq \{\alpha_{M-k+1}\}$, and by the hypothesis of the theorem,
$$\mathcal{M}(\{f(k)\},\{\alpha_k\}) = \mathcal{M}(\{f(M-k+1)\},\{\alpha_{M-k+1}\}) = \mathcal{M}(\{f(m(k))\},\{\alpha_{M-k+1}\}) \leq \mathcal{M}(\{f(m(k))\},\{\alpha'_{M-k+1}\}) = \mathcal{M}(\{f(k)\},\{\alpha'_k\}),$$
where $m(x) = M - x + 1$, and because if $f(x)$ is decreasing and concave (respectively, convex) then $f(m(x))$ is increasing and concave (respectively, convex). □
The following result is necessary to prove Lemma 2.
Theorem 5.
Given $f(x)$ and weights $\{\alpha_k\}$, $\{\alpha'_k\}$, if the inequality $\mathcal{M}(\{f(k)\},\{\alpha_k\}) \geq \mathcal{M}(\{f(k)\},\{\alpha'_k\})$ holds for any $g(x)$ strictly increasing and convex (respectively, concave), then it holds for any $g(x)$ strictly decreasing and concave (respectively, convex). If it holds for any $g(x)$ strictly decreasing and convex (respectively, concave), then it holds for any $g(x)$ strictly increasing and concave (respectively, convex).
Proof. 
Consider the quasi-arithmetic mean with Kolmogorov function $g^\star(x) = -g(x)$ (with inverse $g^{\star -1}(x) = g^{-1}(-x)$). When $g(x)$ is increasing, $g^\star(x)$ is decreasing, and vice versa; when $g(x)$ is convex, $g^\star(x)$ is concave, and vice versa. We have
$$g^{\star -1}\Big(\sum_i \alpha_i\, g^\star(f(i))\Big) = g^{-1}\Big(-\sum_i \alpha_i\,(-g(f(i)))\Big) = g^{-1}\Big(\sum_i \alpha_i\, g(f(i))\Big) \geq g^{-1}\Big(\sum_i \alpha'_i\, g(f(i))\Big) = g^{-1}\Big(-\sum_i \alpha'_i\,(-g(f(i)))\Big) = g^{\star -1}\Big(\sum_i \alpha'_i\, g^\star(f(i))\Big). \quad \square$$
The following lemma shows how the invariance properties for one mean extend to other means.
Lemma 2.
Given two distributions $\{\alpha_k\}$, $\{\alpha'_k\}$ and a quasi-arithmetic mean $\mathcal{M}(\{f(k)\},\{\alpha_k\})$ with function $g(x)$, consider the following equations:
$$\mathcal{M}(\{f(k)\},\{\alpha_k\}) \geq \mathcal{M}(\{f(k)\},\{\alpha'_k\}), \quad (20)$$
and
$$\mathcal{M}(\{f(k)\},\{\alpha_{M-k+1}\}) \leq \mathcal{M}(\{f(k)\},\{\alpha'_{M-k+1}\}). \quad (21)$$
Then, for each line of Table 1, if $g(x)$, $f(x)$ fulfill the conditions in the first and second columns of Table 1 and Equations (20) and (21) hold, then Equations (20) and (21) hold too for $g^\star(x)$, $f^\star(x)$ fulfilling the conditions in columns three and four.
Additionally, for each line of Table 1, changing $f^\star(x)$ in column four from increasing to decreasing, Equations (20) and (21) hold with the inequalities reversed.
Proof. 
The proof of Lines 1–8 of Table 1 is in Appendix B. Lines 1'–8' of Table 1 are a direct consequence of Theorem 5 applied to Lines 1–8. □
Theorem 6.
Given a quasi-arithmetic mean $\mathcal{M}(\{f(k)\},\{\alpha_k\})$ with function $g(x)$, if an order holds the mirror property, then, for each line of Table 1, if $g(x)$ fulfills the condition in the first column of Table 1 and the order is preserved for $f(x)$ in the second column, then the mean with $g^\star(x)$ fulfilling the condition in column three preserves the order for $f^\star(x)$ fulfilling the condition in column four.
Proof. 
It is enough to apply Lemma 2, taking into account that if Equation (20) holds then Equation (21) holds too by the mirror property. □
Corollary 1.
Consider the weighted arithmetic mean $\mathcal{A}(\{f(k)\},\{\alpha_k\})$. Given an order that holds the mirror property and preserves the order for the mean $\mathcal{A}$, then:
(a) If the order is preserved for the mean $\mathcal{A}$ and for $I_v^<$, then the order is preserved for any mean with concave-increasing/convex-decreasing (respectively, concave-decreasing/convex-increasing) function $g(x)$ and for $I_v^<$ (respectively, for $I_x^<$).
(b) If the order is preserved for the mean $\mathcal{A}$ and for $I_x^<$, then the order is preserved for any mean with convex-increasing/concave-decreasing (respectively, convex-decreasing/concave-increasing) function $g(x)$ and for $I_x^<$ (respectively, for $I_v^<$).
Proof. 
The arithmetic mean is a quasi-arithmetic mean with the increasing function $g(x) = x$, which is both concave-increasing and convex-increasing, and thus Table 1 collapses to Table 2. □
The functions $x^p$ with $p \geq 1$ are convex-increasing, and with $p < 0$ convex-decreasing, over $\mathbb{R}^{++}$; $e^x$ is convex-increasing over $\mathbb{R}$; while $\log x$ and $x^p$ with $0 < p \leq 1$ are concave-increasing over $\mathbb{R}^{++}$. Affine functions are both concave and convex over $\mathbb{R}$. If $g(x)$ is convex, then $-g(x)$ is concave, and vice versa; the composition of a concave-increasing function with a concave function is concave, and the composition of a convex-increasing function with a convex function is convex. We will see in the next section that preserving the order for the mean $\mathcal{A}$ and for $I_v^<$ defines second-order stochastic dominance, or increasing concave order, while preserving the order for the mean $\mathcal{A}$ and for $I_x^<$ defines the increasing convex order stochastic dominance. Both orders hold the mirror property, and thus Corollary 1 applies to both.
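As a numerical illustration of Corollary 1(a), the following Python sketch (illustrative code; the sample pair is SSD- but not FSD-ordered, which the code itself checks) shows that both the geometric and the harmonic weighted means preserve the order for a concave-increasing $f$:

```python
import math
from itertools import accumulate

def weighted_mean(f, w, g, g_inv):
    return g_inv(sum(wk * g(x) for wk, x in zip(w, f)))

alpha   = [0.2, 0.6, 0.2]         # a mean-preserving contraction of alpha'
alpha_p = [0.3, 0.4, 0.3]
F,  Fp  = list(accumulate(alpha)), list(accumulate(alpha_p))
assert not all(a <= b for a, b in zip(F, Fp))                              # no FSD
assert all(a <= b + 1e-12 for a, b in zip(accumulate(F), accumulate(Fp)))  # SSD

f = [math.sqrt(k) for k in (1, 2, 3)]            # a concave-increasing f(k)
geo  = lambda w: weighted_mean(f, w, math.log, math.exp)
harm = lambda w: weighted_mean(f, w, lambda x: 1 / x, lambda y: 1 / y)
assert geo(alpha) >= geo(alpha_p) and harm(alpha) >= harm(alpha_p)
```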

6. Application to Stochastic Orders and Cross-Entropy

6.1. First-Order Stochastic Dominance

Originating in the economics risk literature [24], first-order stochastic dominance (FSD) [1,2] between two probability distributions $\{\alpha_k\}$, $\{\alpha'_k\}$, written $\{\alpha_k\} \succeq_{FSD} \{\alpha'_k\}$, is defined as follows:
Definition 5.
$\{\alpha_k\} \succeq_{FSD} \{\alpha'_k\}$ ⇔ for any increasing function $f(x)$,
$$E_{\{\alpha_k\}}[f(k)] \geq E_{\{\alpha'_k\}}[f(k)].$$
Remember that the expected value is the arithmetic weighted mean.
A necessary and sufficient condition for first-order stochastic dominance is given by condition (c) of Lemma 1, which is equivalent to condition (d) of Lemma 1; thus $\{\alpha_k\} \succeq_{FSD} \{\alpha'_k\} \Leftrightarrow \{\alpha_{M-k+1}\} \preceq_{FSD} \{\alpha'_{M-k+1}\}$, i.e., the mirror property holds. From the definition of FSD and from Theorem 3, we can redefine FSD, $\{\alpha_k\} \succeq_{FSD} \{\alpha'_k\}$, as: there exists a mean $\mathcal{M}$ such that, for any increasing function $f(x)$, Equation (16) holds. That is, the definition of FSD is independent of the mean considered, while the original definition relies on the expected value (arithmetic mean). The mean considered can be arithmetic, harmonic, geometric, or any other quasi-arithmetic mean.
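This mean-independence is easy to observe numerically; in the following Python sketch (illustrative code), an FSD-ordered pair of distributions yields the same ordering for the arithmetic, geometric, and harmonic means of an increasing sequence:

```python
import math
from itertools import accumulate

alpha   = [0.1, 0.2, 0.3, 0.4]
alpha_p = [0.25, 0.25, 0.25, 0.25]
# {alpha} >=_FSD {alpha'}: prefix sums of alpha stay below those of alpha'
assert all(a <= b + 1e-12 for a, b in zip(accumulate(alpha), accumulate(alpha_p)))

f = [2.0, 3.0, 5.0, 9.0]                              # an increasing sequence {f(k)}

def mean(w, g, g_inv):
    return g_inv(sum(wk * g(x) for wk, x in zip(w, f)))

for g, g_inv in ((lambda x: x,     lambda y: y),      # arithmetic
                 (math.log,        math.exp),         # geometric
                 (lambda x: 1 / x, lambda y: 1 / y)): # harmonic
    assert mean(alpha, g, g_inv) >= mean(alpha_p, g, g_inv)
```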
Let us now consider a strictly monotonic function $g(x)$, and define a generalized cross-entropy $GCE(\{p_i\},\{q_i\}) = g^{-1}\left(\sum_i p_i\, g(-\log q_i)\right)$. Observe that it is a quasi-arithmetic mean, and for $g(x) = x$ we recover the cross-entropy $CE(\{p_i\},\{q_i\}) = -\sum_i p_i \log q_i$. Other functions that generalize cross-entropy have been defined in the context of training deep neural networks [25]. We can state the following theorem:
Theorem 7.
Given distributions $\{p_i\}$, $\{p'_i\}$, $\{q_i\}$, with $\{q_i\}$ increasing and $\{p_k\} \succeq_{FSD} \{p'_k\}$, then, for any mean $GCE$ with function $g(x)$, $GCE(\{p_i\},\{q_i\}) \leq GCE(\{p'_i\},\{q_i\})$.
Proof. 
Observe first that $f(k) = -\log q_k$ is a decreasing sequence. From the hypothesis, condition (c) of Lemma 1 holds; we can then apply Theorem 1 to the mean with function $g(x)$ and $\{f(k)\}$ decreasing. □
The following result relates the entropies of two distributions with the first stochastic order.
Theorem 8.
Given $\{p_k\}$, $\{q_k\}$ increasing, and $\{q_k\} \preceq_{FSD} \{p_k\}$, then $H(\{p_k\}) \leq H(\{q_k\})$, where H stands for Shannon entropy.
Proof. 
The Kullback–Leibler distance is always non-negative [26], $\sum_i p_i \log\frac{p_i}{q_i} \geq 0$, and thus we have that $H(\{p_k\}) = -\sum_i p_i \log p_i \leq -\sum_i p_i \log q_i$. Applying Theorem 7 for $g(x) = x$, $-\sum_i p_i \log q_i \leq -\sum_i q_i \log q_i = H(\{q_k\})$. □

6.2. Second-Order Stochastic Dominance and Increasing Convex Ordering

Definition 6.
Second-order stochastic dominance (SSD) between two probability distributions $\{\alpha_k\}$, $\{\alpha'_k\}$, written $\{\alpha_k\} \succeq_{SSD} \{\alpha'_k\}$, occurs when, for any increasing concave function $f(x)$,
$$E_{\{\alpha_k\}}[f(k)] \geq E_{\{\alpha'_k\}}[f(k)].$$
Trivially
$$\{\alpha_k\} \succeq_{FSD} \{\alpha'_k\} \Rightarrow \{\alpha_k\} \succeq_{SSD} \{\alpha'_k\}.$$
Let us consider the cumulative distribution function, $F_i = \alpha_1 + \alpha_2 + \cdots + \alpha_i$, and the survival function, $\bar F_i = 1 - F_i = \alpha_M + \alpha_{M-1} + \cdots + \alpha_{i+1}$. A necessary and sufficient condition for second-order stochastic dominance [1,2] is the following:
$$F_1 \leq F'_1,\quad F_1 + F_2 \leq F'_1 + F'_2,\quad \ldots,\quad F_1 + \cdots + F_{M-1} \leq F'_1 + \cdots + F'_{M-1},\quad F_1 + \cdots + F_{M-1} + F_M \leq F'_1 + \cdots + F'_{M-1} + F'_M, \quad (22)$$
or, equivalently,
$$\bar F_1 \geq \bar F'_1,\quad \bar F_1 + \bar F_2 \geq \bar F'_1 + \bar F'_2,\quad \ldots,\quad \bar F_1 + \cdots + \bar F_{M-1} \geq \bar F'_1 + \cdots + \bar F'_{M-1},\quad \bar F_1 + \cdots + \bar F_{M-1} + \bar F_M \geq \bar F'_1 + \cdots + \bar F'_{M-1} + \bar F'_M. \quad (23)$$
Let us see first that $\{\alpha_k\} \succeq_{SSD} \{\alpha'_k\} \Leftrightarrow \{\alpha_{M-k+1}\} \preceq_{SSD} \{\alpha'_{M-k+1}\}$, i.e., that SSD holds the mirror property. Effectively, define $G_i = \alpha_M + \alpha_{M-1} + \cdots + \alpha_{i+1}$ and the survival function $\bar G_i = 1 - G_i = \alpha_1 + \alpha_2 + \cdots + \alpha_i$. However, $G_i = \bar F_i$, $\bar G_i = F_i$, $G'_i = \bar F'_i$, $\bar G'_i = F'_i$, and substituting in Equation (22) or Equation (23), we obtain the desired result.
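Conditions (22) and (23) are both simple cumulated-sum tests; a minimal Python sketch (illustrative code) checks them, and their agreement, on a pair that is SSD- but not FSD-ordered:

```python
from itertools import accumulate

def cumsums(x):
    return list(accumulate(x))

def ssd_cdf(alpha, alpha_p, tol=1e-12):
    # Equation (22): cumulated sums of the cdf F stay below those of F'
    F, Fp = cumsums(alpha), cumsums(alpha_p)
    return all(a <= b + tol for a, b in zip(cumsums(F), cumsums(Fp)))

def ssd_survival(alpha, alpha_p, tol=1e-12):
    # Equation (23): cumulated sums of the survival function, with >= instead
    Fbar  = [1 - c for c in cumsums(alpha)]
    Fbarp = [1 - c for c in cumsums(alpha_p)]
    return all(a + tol >= b for a, b in zip(cumsums(Fbar), cumsums(Fbarp)))

alpha, alpha_p = [0.2, 0.6, 0.2], [0.3, 0.4, 0.3]   # SSD-ordered, not FSD-ordered
assert ssd_cdf(alpha, alpha_p) and ssd_survival(alpha, alpha_p)
```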
From Corollary 1, second-order stochastic dominance preserves mean order for all means $\mathcal{M}$ defined by a concave-increasing/convex-decreasing (respectively, concave-decreasing/convex-increasing) function $g(x)$, and for the set of all concave-increasing functions $I_v^<$ (respectively, convex-increasing functions $I_x^<$). For instance, the order is preserved for the geometric mean, or for any mean with $g(x) = x^p$, $0 < p \leq 1$, and for $I_v^<$; and, since $g(x) = x^p$ with $p < 0$ is convex-decreasing, also for the harmonic mean and for $I_v^<$. In particular, we can state the following theorem about the cross-entropy of two distributions.
Theorem 9.
Given distributions $\{p_i\}$, $\{p'_i\}$, $\{q_i = f(i)\}$, with $f(x)$ concave-increasing and $\{p_i\} \succeq_{SSD} \{p'_i\}$, then $CE(\{p_i\},\{q_i\}) \leq CE(\{p'_i\},\{q_i\})$.
Proof. 
We have that $CE(\{p_i\},\{q_i\}) = -\log \mathcal{G}(\{q_i\},\{p_i\}) = -\log\left(\prod_i q_i^{p_i}\right) = -\log\left(\exp\left(\sum_i p_i \log q_i\right)\right)$, where $\mathcal{G}$ stands for the geometric mean. The geometric mean is a quasi-arithmetic mean with function $g(x) = \log x$, concave-increasing. Using Definition 6 and applying Corollary 1(a), we obtain the inequality $\mathcal{G}(\{q_i\},\{p_i\}) \geq \mathcal{G}(\{q_i\},\{p'_i\})$, and we apply the decreasing function $-\log x$ to both members of this inequality. □
When we consider $f(x)$ convex instead of concave, we talk of the increasing convex order, ICX. That is:
Definition 7.
Increasing convex order between two probability distributions $\{\alpha_k\}$, $\{\alpha'_k\}$, written $\{\alpha_k\} \succeq_{ICX} \{\alpha'_k\}$, occurs when, for any increasing convex function $f(x)$,
$$E_{\{\alpha_k\}}[f(k)] \geq E_{\{\alpha'_k\}}[f(k)].$$
$\{\alpha_k\}$ is greater in the increasing convex order than $\{\alpha'_k\}$, $\{\alpha_k\} \succeq_{ICX} \{\alpha'_k\}$, if and only if the following inequalities hold:
$$F_M \leq F'_M,\quad F_M + F_{M-1} \leq F'_M + F'_{M-1},\quad \ldots,\quad F_M + \cdots + F_2 \leq F'_M + \cdots + F'_2,\quad F_M + \cdots + F_2 + F_1 \leq F'_M + \cdots + F'_2 + F'_1, \quad (24)$$
or, equivalently,
$$\bar F_M \geq \bar F'_M,\quad \bar F_M + \bar F_{M-1} \geq \bar F'_M + \bar F'_{M-1},\quad \ldots,\quad \bar F_M + \cdots + \bar F_2 \geq \bar F'_M + \cdots + \bar F'_2,\quad \bar F_M + \cdots + \bar F_2 + \bar F_1 \geq \bar F'_M + \cdots + \bar F'_2 + \bar F'_1. \quad (25)$$
Trivially
$$\{\alpha_k\} \succeq_{FSD} \{\alpha'_k\} \Rightarrow \{\alpha_k\} \succeq_{ICX} \{\alpha'_k\}.$$
The mirror property is also immediate. From Corollary 1, the increasing convex order preserves mean order for all means $\mathcal{M}$ defined by a convex-increasing/concave-decreasing (respectively, convex-decreasing/concave-increasing) function $g(x)$, and for the set of all convex-increasing functions $I_x^<$ (respectively, concave-increasing functions $I_v^<$). For instance, any mean with $g(x) = x^p$, $p \geq 1$, such as the weighted quadratic mean, with $I_x^<$; or $g(x) = x^p$, $p < 0$, such as the harmonic mean, with $I_v^<$; or the mean with $g(x) = \log x$ with $I_v^<$. In particular, we can state the following result.
Theorem 10.
Given distributions $\{p_i\}$, $\{p'_i\}$, $\{q_i = f(i)\}$, with $f(x)$ concave-increasing and $\{p_i\} \succeq_{ICX} \{p'_i\}$, then $CE(\{p_i\},\{q_i\}) \leq CE(\{p'_i\},\{q_i\})$.
Proof. 
Consider the quasi-arithmetic mean $\mathcal{G}(\{q_i\},\{p_i\})$ with $g(x) = -\log x$, convex-decreasing. We have that $CE(\{p_i\},\{q_i\}) = -\log \mathcal{G}(\{q_i\},\{p_i\})$. Using Definition 7 and applying Corollary 1(b), we have that $\mathcal{G}(\{q_i\},\{p_i\}) \geq \mathcal{G}(\{q_i\},\{p'_i\})$, and we apply to each member of the inequality the decreasing function $-\log x$. □

6.3. Likelihood Ratio Dominance

Definition 8.
Likelihood ratio dominance (LR), $\{\alpha_k\} \succeq_{LR} \{\alpha'_k\}$, is defined as
$$\{\alpha_k\} \succeq_{LR} \{\alpha'_k\} \Leftrightarrow \forall (i,j),\ 1 \leq i \leq j \leq M:\ \frac{\alpha_i}{\alpha_j} \leq \frac{\alpha'_i}{\alpha'_j}. \quad (26)$$
As $\alpha_i/\alpha_j \leq \alpha'_i/\alpha'_j \Leftrightarrow \alpha'_j/\alpha'_i \leq \alpha_j/\alpha_i$, we have that the LR order holds the mirror property.
It can be shown that the LR order implies the FSD order, and then the LR order satisfies Theorems 3 and 7, and Equation (16) holds for any mean $\mathcal{M}$:
Theorem 11.
The likelihood-ratio order implies the first stochastic order, i.e.,
$$\{\alpha_k\} \succeq_{LR} \{\alpha'_k\} \Rightarrow \{\alpha_k\} \succeq_{FSD} \{\alpha'_k\}.$$
Proof. 
There are indirect proofs in the literature [1,17]. A direct proof is given in Appendix C. □
As the condition for the LR order, Equation (26), is easy to check, this order comes in very handy for proving a sufficient condition for the FSD order. Additionally, for the uniform distribution $\{1/M\}$ and any increasing distribution $\{\alpha_k\}$, we have that $\{1/M\} \preceq_{LR} \{\alpha_k\}$, while for any decreasing distribution $\{\alpha_k\}$ we have that $\{\alpha_k\} \preceq_{LR} \{1/M\}$.
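Equation (26) and condition (c) of Lemma 1 can be tested in a few lines; the following Python sketch (illustrative code) checks that an increasing distribution dominates the uniform one in the LR order and, per Theorem 11, in the FSD order:

```python
from itertools import accumulate, combinations

def lr(alpha, alpha_p, tol=1e-12):
    # Equation (26) in cross-product form: alpha_i * alpha'_j <= alpha_j * alpha'_i, i <= j
    return all(alpha[i] * alpha_p[j] <= alpha[j] * alpha_p[i] + tol
               for i, j in combinations(range(len(alpha)), 2))

def fsd(alpha, alpha_p, tol=1e-12):
    return all(a <= b + tol for a, b in zip(accumulate(alpha), accumulate(alpha_p)))

M = 4
increasing = [0.1, 0.2, 0.3, 0.4]
uniform = [1 / M] * M
assert lr(increasing, uniform)     # {1/M} <=_LR any increasing distribution
assert fsd(increasing, uniform)    # and thus, by Theorem 11, {1/M} <=_FSD it
```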
Consider the Shannon entropy $H(\{p_k\})$ and Rényi entropy $R_\beta(\{p_k\})$ of a distribution $\{p_k\}$, which, as seen in Section 3, are quasi-arithmetic means of the sequence $\{\log\frac{1}{p_k}\}$ with weights $\{p_k\}$, with $g(x) = x$ and $g(x) = 2^{(1-\beta)x}$, respectively. Without loss of generality, we can consider $\{p_k\}$ in increasing order; then the sequence $\{\log\frac{1}{p_k}\}$ is decreasing. We have that $\{1/M\} \preceq \{p_k\}$, and then we can apply Theorem 3 and obtain, for Shannon entropy,
$$H(\{p_k\}) \leq \mathcal{A}(\{\log \tfrac{1}{p_k}\}),$$
where $\mathcal{A}(\{\log \frac{1}{p_k}\})$ is the (uniformly weighted) arithmetic mean of $\{\log\frac{1}{p_k}\}$, and, for Rényi entropy, observing that $g^{-1}(x) = \frac{1}{1-\beta}\log x$,
$$R_\beta(\{p_k\}) \leq \frac{1}{1-\beta}\log\Big(\sum_{k=1}^M \frac{1}{M}\, p_k^{\beta-1}\Big) = \frac{1}{1-\beta}\log \mathcal{A}(\{p_k^{\beta-1}\}).$$
For Tsallis entropy, which can be considered the weighted arithmetic mean of the sequence $\{\ln_q \frac{1}{p_k}\}$ with weights $\{p_k\}$, since for $\{p_k\}$ in increasing order the sequence $\{\ln_q \frac{1}{p_k}\}$ is decreasing (the derivative of $\ln_q x$ is positive), we have, similarly to Shannon entropy,
$$S_q(\{p_k\}) \leq \mathcal{A}(\{\ln_q \tfrac{1}{p_k}\}).$$
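These three entropy bounds are straightforward to verify numerically; a minimal Python sketch (illustrative code, base-2 logarithms for Shannon and Rényi, arbitrary sample values for $\beta$ and $q$):

```python
import math

p = sorted([0.4, 0.1, 0.2, 0.3])                 # {p_k} in increasing order
M = len(p)
info = [math.log2(1 / pk) for pk in p]           # {log(1/p_k)}, decreasing

H = sum(pk * ik for pk, ik in zip(p, info))      # Shannon entropy
assert H <= sum(info) / M                        # H <= A({log(1/p_k)})

beta = 2.0                                       # Renyi entropy bound
R = math.log2(sum(pk ** beta for pk in p)) / (1 - beta)
assert R <= math.log2(sum(pk ** (beta - 1) for pk in p) / M) / (1 - beta)

q = 1.5                                          # Tsallis entropy bound
lnq = lambda x: (x ** (1 - q) - 1) / (1 - q)     # q-logarithm
S = sum(pk * lnq(1 / pk) for pk in p)
assert S <= sum(lnq(1 / pk) for pk in p) / M
```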
We can also state the following result for the generalized cross-entropy $GCE$. We say that $\{p_i\}$ is comonotonic [27,28] with $\{q_i\}$ when, for all $1 \leq i, j \leq M$, $q_i < q_j \Rightarrow p_i \leq p_j$.
Theorem 12.
Given comonotonic distributions $\{p_i\}$, $\{q_i\}$, then $GCE(\{p_i\},\{q_i\}) \leq GCE(\{1/M\},\{q_i\})$.
Proof. 
Without loss of generality, we can reorder $\{p_i\}$, $\{q_i\}$ so that both are increasing. We know that $\{1/M\} \preceq_{LR} \{p_k\}$, and thus $\{1/M\} \preceq_{FSD} \{p_k\}$. From the definition of FSD, the preserve mean order property holds for increasing functions and for the arithmetic mean; we can then apply Theorem 3 for $\{-\log q_i\}$ decreasing. □
From Theorem 12, when $\{p_i\}$, $\{q_i\}$ are comonotonic, then
$$CE(\{p_i\},\{q_i\}) \leq -\frac{1}{M}\sum_{k=1}^M \log q_k.$$

6.4. Hazard Rate

Definition 9.
A probability distribution $\{\alpha_k\}$ is greater in hazard rate (HR) order than $\{\alpha'_k\}$, $\{\alpha_k\} \succeq_{HR} \{\alpha'_k\}$, if and only if, for all $j \leq i$, the following condition is fulfilled:
$$\frac{\bar F_i}{\bar F'_i} \geq \frac{\bar F_j}{\bar F'_j}.$$
Observe that this can be written as
$$\bar F_j\, \bar F'_i \leq \bar F_i\, \bar F'_j,$$
or
$$\frac{1 - F_j}{1 - F'_j} \leq \frac{1 - F_i}{1 - F'_i}.$$
If we now consider the sequences written in inverse order, i.e., $\{\alpha_{M-i+1}\}$ and $\{\alpha'_{M-i+1}\}$, it is clear that if $i \leq j$ then $M - j + 1 \leq M - i + 1$, and, denoting by $\bar G$ the survival function associated with the reversed sequences and using $1 - \bar F_j = F_j$, we get
$$\frac{\bar G_{M-j+1}}{\bar G'_{M-j+1}} \leq \frac{\bar G_{M-i+1}}{\bar G'_{M-i+1}},$$
which means $\{\alpha_{M-k+1}\} \preceq_{HR} \{\alpha'_{M-k+1}\}$, i.e., the HR order holds the mirror property.
It can be shown that $\{\alpha_k\} \succeq_{HR} \{\alpha'_k\} \Rightarrow \{\alpha_k\} \succeq_{FSD} \{\alpha'_k\}$, and then the HR order satisfies Theorems 3 and 7, and Equation (16) holds for any mean $\mathcal{M}$.
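The HR condition is again a pairwise test on survival functions; the following Python sketch (illustrative code, following Definition 9 in cross-product form) checks it, together with the implied FSD ordering, on a sample pair:

```python
from itertools import accumulate, combinations

def hr(alpha, alpha_p, tol=1e-12):
    # Definition 9 in cross-product form: Fbar_i * Fbar'_j >= Fbar_j * Fbar'_i, j <= i
    Fbar  = [1 - c for c in accumulate(alpha)]
    Fbarp = [1 - c for c in accumulate(alpha_p)]
    return all(Fbar[i] * Fbarp[j] + tol >= Fbar[j] * Fbarp[i]
               for j, i in combinations(range(len(alpha)), 2))

alpha, alpha_p = [0.1, 0.3, 0.6], [0.3, 0.3, 0.4]
assert hr(alpha, alpha_p)
# HR implies FSD: prefix sums of alpha stay below those of alpha'
assert all(a <= b + 1e-12 for a, b in zip(accumulate(alpha), accumulate(alpha_p)))
```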

7. Example: Linear Combination of Monte Carlo Techniques

When we want to estimate an integral $\mu = \int h(x)\,dx$ using MIS (multiple importance sampling) Monte Carlo methods, we have several choices of techniques, each of them with a given pdf $p_k(x)$, $1 \leq k \leq M$, which provide the primary estimators $I_{k,1} = h(x)/p_k(x)$, with $x$ sampled from $p_k(x)$. If $p_k(x) > 0$ whenever $h(x) > 0$, then the technique is unbiased. We are interested in optimal ways of combining the techniques. One option is to linearly combine the different estimators $\{I_{i,n_i}\}$ as $\sum_{i=1}^M w_i I_{i,n_i}$, with weights $\{w_i\}$, $\sum_{i=1}^M w_i = 1$, and sampling proportions $n_i = \alpha_i N$, with $N = \sum_{i=1}^M n_i$. If all the techniques are unbiased, the resulting combination is also unbiased. The variance is given by
$$V_N = \sum_{i=1}^M \frac{w_i^2 v_i}{n_i} = \frac{1}{N}\sum_{i=1}^M \frac{w_i^2 v_i}{\alpha_i} = \frac{1}{N}\,V,$$
where the $v_i$ are the variances of the primary estimators of each technique and $V$ is the variance of the primary MIS estimator.
The optimal combination of weights, i.e., the one that leads to minimum variance, has been studied in [18,29,30].
The variance will depend on the two sets of weights, $\{w_i\}$ and $\{\alpha_i\}$, but there are cases where we can reduce it to a single set of weights. We present the following examples, where variances are taken for $N = 1$:
  • when $\alpha_i = w_i$, then $V = \sum_{i=1}^M \alpha_i v_i$, the weighted arithmetic mean ($g(x) = x$) of $\{v_i\}$;
  • when the sampling proportions are fixed, the optimal variance is given by $\mathcal{H}(\{v_k\},\{\alpha_k\})$, the weighted harmonic mean ($g(x) = 1/x$) of $\{v_i\}$;
  • when the weights $\{w_i\}$ are fixed, the optimal variance is given by $\left(\sum_{k=1}^M w_k v_k^{1/2}\right)^2$, which is the weighted power mean of $\{v_i\}$ with exponent $r = 1/2$ ($g(x) = x^{1/2}$).
Observe that the variance in these three cases is a quasi-arithmetic mean.
Let us now order $\{v_k\}$ in increasing order, and let us take $\alpha_k = \phi(v_k)$, with $\phi(x)$ a decreasing function (normalized so that the $\alpha_k$ sum to one), and $\alpha'_k = 1/M$. We have that $\{\alpha_k = \phi(v_k)\} \preceq_{LR} \{\alpha'_k = 1/M\}$ and thus, by Theorem 11, $\{\alpha_k = \phi(v_k)\} \preceq_{FSD} \{\alpha'_k = 1/M\}$, and we can apply Theorem 2 with $f(k) = v_k$ to the quasi-arithmetic means above. Thus, the variance is smaller when taking sampling proportions or coefficients decreasing in $v_k$ than for equal sampling or equal weighting.
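A minimal Python sketch (illustrative code; the variances $v_i$ and the choice $\phi(x) = 1/x$ are arbitrary assumptions) checks the three variance expressions above and the claimed improvement over equal sampling or equal weighting:

```python
import math

v = [1.0, 2.0, 4.0, 8.0]          # primary-estimator variances, in increasing order
M = len(v)

def V_equal_w_alpha(alpha):       # w_i = alpha_i: weighted arithmetic mean of {v_i}
    return sum(a * vi for a, vi in zip(alpha, v))

def V_opt_w(alpha):               # fixed proportions, optimal w: weighted harmonic mean
    return 1 / sum(a / vi for a, vi in zip(alpha, v))

def V_opt_alpha(w):               # fixed weights, optimal alpha: power mean, r = 1/2
    return sum(wi * math.sqrt(vi) for wi, vi in zip(w, v)) ** 2

uniform = [1 / M] * M
raw = [1 / vi for vi in v]                       # phi(x) = 1/x, a decreasing function
decreasing = [r / sum(raw) for r in raw]         # alpha_k = phi(v_k), normalized

for V in (V_equal_w_alpha, V_opt_w, V_opt_alpha):
    assert V(decreasing) <= V(uniform)           # Theorem 2: every mean improves
```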
The same would be the case when considering a different measure of error, whenever the error of the combined techniques can also be expressed as a weighted quasi-arithmetic mean of the values of this measure for all the techniques. For instance, suppose we use as measure of error the standard deviation, $\sigma_i^2 = v_i$, $\sigma^2 = V$; the above cases become:
  • when $\alpha_i = w_i$, then $\sigma = \sqrt{V} = \sqrt{\sum_{i=1}^M \alpha_i \sigma_i^2}$, the weighted root mean square or quadratic mean of $\{\sigma_i\}$, i.e., the weighted power mean with $r = 2$ ($g(x) = x^2$);
  • when the sampling proportions are fixed, the optimal standard deviation is given by $\sigma = \sqrt{V} = \sqrt{\mathcal{H}(\{\sigma_k^2\},\{\alpha_k\})}$, the weighted power mean of $\{\sigma_i\}$ with $r = -2$ ($g(x) = x^{-2}$);
  • when the weights $\{w_i\}$ are fixed, the optimal standard deviation is given by $\sigma = \sqrt{V} = \sum_{k=1}^M w_k \sigma_k$, the weighted arithmetic mean of $\{\sigma_i\}$ ($g(x) = x$).
For more details see [18].

Liabilities vs. Utilities

From the above example, we can study the relationship between utilities, which we try to maximize, and liabilities, which we try to minimize. As a liability can always be considered as the inverse of a utility, we can see from Theorem 1 that establishing invariance properties on the order between means of utilities is equivalent to doing so with liabilities, except that the order is inverted. We can define, for instance, the efficiency $E = 1/V$ as the inverse of the variance, and obtain it as a weighted mean of the individual techniques' efficiencies, $e_i = 1/v_i$. Consider, for instance, the first case in the example above, where $V = \sum_{i=1}^M \alpha_i v_i$. We have
$$E = \frac{1}{V} = \frac{1}{\sum_{i=1}^M \alpha_i v_i} = \frac{1}{\sum_{i=1}^M \alpha_i e_i^{-1}} = \mathcal{H}(\{e_i\},\{\alpha_i\}),$$
where $\mathcal{H}$ denotes the weighted harmonic mean.
Considering now the second case,
$$E = \frac{1}{V} = \frac{1}{\mathcal{H}(\{v_k\},\{\alpha_k\})} = \sum_{i=1}^M \alpha_i v_i^{-1} = \sum_{i=1}^M \alpha_i e_i,$$
i.e., the weighted arithmetic mean. For the third case,
$$E = \frac{1}{V} = \frac{1}{\left(\sum_{k=1}^M w_k v_k^{1/2}\right)^2} = \Big(\sum_{k=1}^M w_k e_k^{-1/2}\Big)^{-2},$$
which is the weighted power mean with $r = -1/2$ ($g(x) = x^{-1/2}$).

8. Conclusions and Future Work

We have presented in this paper the relationship between stochastic orders and quasi-arithmetic means. We have proved several ordering invariance theorems, which show that, given two distributions under a certain stochastic order, the ordering of the means is preserved for any quasi-arithmetic mean we might consider; that is, not only for the arithmetic mean (or expected value). We have shown how the results apply to the first-order, second-order, likelihood ratio, hazard-rate, and increasing convex stochastic orders, and their application to cross-entropy. We have also presented an application example based on the linear combination of Monte Carlo estimators, and shown that the invariance allows costs or liabilities to be considered as the symmetric case of utilities.
In the future, we want to generalize our results to spatial weight matrices [31]. The rows in a spatial weight matrix are weights that give the influence of n entities over each other. Different weighted means, such as the arithmetic, harmonic, or geometric mean [32], can be used to compute this influence. We can thus apply our invariance results to each row. We will also investigate which of our results for Shannon entropy extend to Tsallis entropy too. Both Shannon and Tsallis entropy are weighted arithmetic means [23], and, given $\{p_k\}$ monotonic, both $\{\log\frac{1}{p_k}\}$ and $\{\ln_q\frac{1}{p_k}\}$ are monotonic too. Finally, we will investigate the invariance of the different stochastic orders under the operations of compositional data [3].

Author Contributions

Conceptualization, M.S. and S.C.; methodology, M.S. and J.P.; validation, J.P. and V.E.; writing—original draft preparation, M.S.; writing—review and editing, all authors. All authors have read and agreed to the published version of the manuscript.

Funding

Mateu Sbert and Jordi Poch are funded in part by grant PID2019-106426RB-C31 from the Spanish Government. Víctor Elvira was partially supported by Agence Nationale de la Recherche of France under PISCES project (ANR-17-CE40-0031-01).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of Lemma 1

Proof. 
Subtracting each member of the inequalities from 1 proves (c) ⇔ (d). To prove (a) ⇒ (c), we proceed in the following way. Consider the increasing sequence $\{g(f(1)), \ldots, g(f(1)), g(f(M)), \ldots, g(f(M))\}$, $f(1) < f(M)$ (and thus $g(f(1)) < g(f(M))$ by the strict monotonicity of $g(x)$), where $g(f(1))$ is written $l$ times, and denote $L = \alpha_1 + \cdots + \alpha_l$, $L' = \alpha'_1 + \cdots + \alpha'_l$. Since $\alpha_{l+1} + \cdots + \alpha_M = 1 - L$ and $\alpha'_{l+1} + \cdots + \alpha'_M = 1 - L'$, (a) gives
$$L\, g(f(1)) + (1-L)\, g(f(M)) - L'\, g(f(1)) - (1-L')\, g(f(M)) \geq 0,$$
i.e.,
$$(L - L')\,(g(f(1)) - g(f(M))) \geq 0 \;\Rightarrow\; L \leq L'.$$
Letting $l$ run from 1 to $M-1$, this proves the first $M-1$ inequalities in (c), thus (a) ⇒ (c). Observe now that
$$L \leq L' \Leftrightarrow 1 - L \geq 1 - L',$$
and thus (a) ⇒ (d) (although this can also be seen directly from (a) ⇒ (c) and (c) ⇔ (d)). To prove that (b) implies (c) and (d), consider the sequence
$$\{g(f(1)), \ldots, g(f(1)), g(f(M)), \ldots, g(f(M))\},$$
$f(1) < f(M)$, where $g(f(M))$ is now written $l$ times, with the same definitions as before for $L$, $L'$; then (b) gives
$$(1-L)\, g(f(1)) + L\, g(f(M)) - (1-L')\, g(f(1)) - L'\, g(f(M)) \leq 0,$$
and we proceed as above. Thus, (b) ⇒ (c).
Let us see now that (c) implies (a).
Define, for $1 \leq k \leq M$, $A_k = \sum_{j=1}^k \alpha_j$, $A'_k = \sum_{j=1}^k \alpha'_j$, and $A_0 = A'_0 = 0$. Then,
$$\sum_{k=1}^M \alpha_k\, g(f(k)) - \sum_{k=1}^M \alpha'_k\, g(f(k)) = \sum_{k=1}^M (\alpha_k - \alpha'_k)\, g(f(k)) = \sum_{k=1}^M (A_k - A_{k-1} - A'_k + A'_{k-1})\, g(f(k)) = \sum_{k=1}^M (A_k - A'_k)\, g(f(k)) - \sum_{k=1}^M (A_{k-1} - A'_{k-1})\, g(f(k)) = \sum_{k=1}^{M-1} (A_k - A'_k)\, g(f(k)) - \sum_{k=0}^{M-1} (A_k - A'_k)\, g(f(k+1)) = \sum_{k=1}^{M-1} (A_k - A'_k)\, g(f(k)) - \sum_{k=1}^{M-1} (A_k - A'_k)\, g(f(k+1)) = \sum_{k=1}^{M-1} (A_k - A'_k)\,(g(f(k)) - g(f(k+1))) \geq 0,$$
as (c) implies that, for all $k$, $A_k - A'_k \leq 0$, and $\{g(f(k))\}$ is an increasing sequence. Thus, (c) ⇒ (a).
Repeating the proof for the sequences $\{\alpha_{M-k+1}\}$ and $\{\alpha'_{M-k+1}\}$, we obtain that (d) ⇒ (b).
Let $g(x)$ now be decreasing. Observe that
$$\sum_{k=1}^M \alpha_{M-k+1}\, g(f(k)) = \sum_{k=1}^M \alpha_{M-k+1}\, g(f(m(M-k+1))) = \sum_{k=1}^M \alpha_k\, g(f(m(k))),$$
where $m(k) = M - k + 1$; thus, (b') is equivalent to an inequality of type (a) for the increasing sequence $\{g(f(m(k)))\}$, and therefore it is equivalent to (c). Now, from
$$\sum_{k=1}^M \alpha_k\, g(f(k)) = \sum_{k=1}^M \alpha_{M-k+1}\, g(f(M-k+1)) = \sum_{k=1}^M \alpha_{M-k+1}\, g(f(m(k))),$$
we have that (a') is equivalent to an inequality of type (b) for the increasing sequence $\{g(f(m(k)))\}$, and therefore it is also equivalent to (c).
Consider now $f(x)$ decreasing. Reversed (a) is
$$\sum_{k=1}^M \alpha_k\, g(f(k)) \leq \sum_{k=1}^M \alpha'_k\, g(f(k)),$$
but it can be written as
$$\sum_{k=1}^M \alpha_{M-k+1}\, g(f(M-k+1)) \leq \sum_{k=1}^M \alpha'_{M-k+1}\, g(f(M-k+1)),\qquad\text{i.e.,}\qquad \sum_{k=1}^M \alpha_{M-k+1}\, g(f(m(k))) \leq \sum_{k=1}^M \alpha'_{M-k+1}\, g(f(m(k))),$$
where $\{g(f(m(k)))\}$ is an increasing sequence, and thus (c) is equivalent to (a) with the order of the inequality reversed when $f(x)$ is decreasing. The other cases for $f(x)$ decreasing can be proven analogously. □

Appendix B. Proof of Lines 1–8 from Table 1 in Lemma 2

Proof. 
Let us prove first Lines 1–4 from Table 1.
Let $g(x)$ be the strictly monotonic function associated with the mean $\mathcal{M}$, and consider the following inequalities. For $g(x)$ increasing and $g^\star(x)$ increasing:
$$\sum_{k=1}^M \alpha_k\, g^\star(f^\star(k)) = \sum_{k=1}^M \alpha_k\, g(g^{-1}(g^\star(f^\star(k)))) \geq \sum_{k=1}^M \alpha'_k\, g(g^{-1}(g^\star(f^\star(k)))) = \sum_{k=1}^M \alpha'_k\, g^\star(f^\star(k)). \quad (A4)$$
For $g(x)$ decreasing and $g^\star(x)$ decreasing:
$$\sum_{k=1}^M \alpha_k\, g^\star(f^\star(k)) = \sum_{k=1}^M \alpha_k\, g(g^{-1}(g^\star(f^\star(k)))) \leq \sum_{k=1}^M \alpha'_k\, g(g^{-1}(g^\star(f^\star(k)))) = \sum_{k=1}^M \alpha'_k\, g^\star(f^\star(k)). \quad (A5)$$
Observe that the central inequalities in Equations (A4) and (A5) are true whenever Equation (20) holds, and vice versa. We are interested in considering all the possibilities for this to hold when $f^\star(x)$ and $f(x) = g^{-1}(h(x))$ are increasing, with $h(x) = g^\star(f^\star(x))$. We first need to study the convexity/concavity of the function $g^{-1}(h(x))$. Let us suppose all functions are in $C^2$; then $h'(x) = g^{\star\prime}(f^\star(x))\, f^{\star\prime}(x)$ and $h''(x) = g^{\star\prime\prime}(f^\star(x))\,(f^{\star\prime}(x))^2 + g^{\star\prime}(f^\star(x))\, f^{\star\prime\prime}(x)$. Dropping the arguments for convenience, we have $h'' = g^{\star\prime\prime}(f^{\star\prime})^2 + g^{\star\prime} f^{\star\prime\prime}$. For $h''$ to be positive (h convex), $g^{\star\prime\prime}$ has to be positive ($g^\star$ convex) and $g^{\star\prime}$, $f^{\star\prime\prime}$ have to have the same sign (thus $g^\star$ convex-increasing and $f^\star$ convex, or $g^\star$ convex-decreasing and $f^\star$ concave). For $h''$ to be negative (h concave), $g^{\star\prime\prime}$ has to be negative ($g^\star$ concave) and $g^{\star\prime}$, $f^{\star\prime\prime}$ have to have different signs (thus $g^\star$ concave-increasing and $f^\star$ concave, or $g^\star$ concave-decreasing and $f^\star$ convex). For $g^{-1}(h(x))$, we can apply the same rule. Observe also that the composition of two increasing or two decreasing functions is increasing, while the composition of an increasing and a decreasing function (in either order) is decreasing. For the inverse function, as $g^{-1}(g(x)) = x$, we have $(g^{-1})'(g(x))\, g'(x) = 1$ and $(g^{-1})''(g(x))\,(g'(x))^2 + (g^{-1})'(g(x))\, g''(x) = 0$, which, isolating $(g^{-1})''$ and dropping arguments, becomes $(g^{-1})'' = -g''/(g')^3$; thus, for $g$ convex-increasing (concave-increasing), the inverse is concave-increasing (convex-increasing), while for $g$ convex-decreasing (concave-decreasing), the inverse is convex-decreasing (concave-decreasing). In Table A1, we show the different possibilities for $f(x)$ to be increasing when $f^\star(x)$ is increasing.
Table A1. Different possible combinations where the concavity/convexity of $g^{-1}(h(x))$ can be predicted, for $g(x)$ and $f^\star(x)$, $f(x)$ increasing. ICX: convex and increasing; ICV: concave and increasing; DCX: convex and decreasing; DCV: concave and decreasing.

|   | $g$ | $g^{-1}$ | $g^\star$ | $f^\star$ | $h = g^\star \circ f^\star$ | $f = g^{-1} \circ h$ |
| 1 | ICX | ICV | ICV | ICV | ICV | ICV |
| 2 | DCX | DCX | DCV | ICX | DCV | ICX |
| 3 | ICV | ICX | ICX | ICX | ICX | ICX |
| 4 | DCV | DCV | DCX | ICV | DCX | ICV |
Let us now prove Lines 5–8 of Table 1.
Consider the following inequalities, where $m(x) = M - x + 1$. For $g(x)$ increasing and $g^\star(x)$ decreasing:
$$\sum_{k=1}^M \alpha_k\, g^\star(f^\star(k)) = \sum_{k=1}^M \alpha_{M-k+1}\, g^\star(f^\star(M-k+1)) = \sum_{k=1}^M \alpha_{M-k+1}\, g(g^{-1}(g^\star(f^\star(m(k))))) \leq \sum_{k=1}^M \alpha'_{M-k+1}\, g(g^{-1}(g^\star(f^\star(m(k))))) = \sum_{k=1}^M \alpha'_{M-k+1}\, g^\star(f^\star(M-k+1)) = \sum_{k=1}^M \alpha'_k\, g^\star(f^\star(k)), \quad (A6)$$
and for $g(x)$ decreasing and $g^\star(x)$ increasing:
$$\sum_{k=1}^M \alpha_k\, g^\star(f^\star(k)) = \sum_{k=1}^M \alpha_{M-k+1}\, g^\star(f^\star(M-k+1)) = \sum_{k=1}^M \alpha_{M-k+1}\, g(g^{-1}(g^\star(f^\star(m(k))))) \geq \sum_{k=1}^M \alpha'_{M-k+1}\, g(g^{-1}(g^\star(f^\star(m(k))))) = \sum_{k=1}^M \alpha'_{M-k+1}\, g^\star(f^\star(M-k+1)) = \sum_{k=1}^M \alpha'_k\, g^\star(f^\star(k)). \quad (A7)$$
Observe that the central inequalities in Equations (A6) and (A7) are true whenever Equation (21) holds, and vice versa. We are interested in considering all the possibilities for this to hold when $f^\star(x)$ and $f(x) = g^{-1}(h(x))$ are increasing, with $h(x) = g^\star(f^\star(m(x)))$. As before, we first need to study the convexity/concavity of the function $g^{-1}(h(x))$.
Consider the function $h(x) = g^\star(f^\star(m(x)))$. The second derivative is, as $m'(x) = -1$ and $m''(x) = 0$, $h''(x) = g^{\star\prime\prime}(f^\star(m(x)))\,(f^{\star\prime}(m(x)))^2 + g^{\star\prime}(f^\star(m(x)))\, f^{\star\prime\prime}(m(x))$. Dropping the arguments for convenience, $h'' = g^{\star\prime\prime}(f^{\star\prime})^2 + g^{\star\prime} f^{\star\prime\prime}$. Observe that this is the same result obtained above. In Table A2, we show the possible combinations for $f(x)$ to be increasing when $f^\star(x)$ is increasing; the difference with Table A1 is that we have to consider here $m(x)$.
Table A2. Different possible combinations where the concavity/convexity of $g^{-1}(h(x))$ can be predicted and $f(x)$ is increasing, when $g(x)$ is increasing and $g^\star(x)$ decreasing (or vice versa), with $f^\star(x)$ increasing. ICX: convex and increasing; ICV: concave and increasing; DCX: convex and decreasing; DCV: concave and decreasing.

|   | $g$ | $g^{-1}$ | $g^\star$ | $f^\star$ | $f^\star \circ m$ | $h = g^\star \circ f^\star \circ m$ | $f = g^{-1} \circ h$ |
| 5 | ICX | ICV | DCV | ICX | DCX | ICV | ICV |
| 6 | DCX | DCX | ICV | ICV | DCV | DCV | ICX |
| 7 | ICV | ICX | DCX | ICV | DCV | ICX | ICX |
| 8 | DCV | DCV | ICX | ICX | DCX | DCX | ICV |
We can then summarize the results from Table A1 and Table A2 in Table 1. This proves the statement for Equation (20). The results for Equation (21) are obtained as above by substituting $\{\alpha_k\}$, $\{\alpha'_k\}$ with $\{\alpha_{M-k+1}\}$, $\{\alpha'_{M-k+1}\}$.
Suppose now that $f^\star(x)$ is decreasing and that Equations (20) and (21) hold for $g^\star(x)$ and $f^\star(x)$ increasing. Then, the following relations, for $g^\star(x)$ increasing,
$$\sum_k \alpha_k\, g^\star(f^\star(k)) = \sum_k \alpha_{M-k+1}\, g^\star(f^\star(m(k))) \leq \sum_k \alpha'_{M-k+1}\, g^\star(f^\star(m(k))) = \sum_k \alpha'_k\, g^\star(f^\star(k)), \quad (A8)$$
$$\sum_k \alpha_{M-k+1}\, g^\star(f^\star(k)) = \sum_k \alpha_k\, g^\star(f^\star(m(k))) \geq \sum_k \alpha'_k\, g^\star(f^\star(m(k))) = \sum_k \alpha'_{M-k+1}\, g^\star(f^\star(k)), \quad (A9)$$
and, for $g^\star(x)$ decreasing,
$$\sum_k \alpha_k\, g^\star(f^\star(k)) = \sum_k \alpha_{M-k+1}\, g^\star(f^\star(m(k))) \geq \sum_k \alpha'_{M-k+1}\, g^\star(f^\star(m(k))) = \sum_k \alpha'_k\, g^\star(f^\star(k)), \quad (A10)$$
$$\sum_k \alpha_{M-k+1}\, g^\star(f^\star(k)) = \sum_k \alpha_k\, g^\star(f^\star(m(k))) \leq \sum_k \alpha'_k\, g^\star(f^\star(m(k))) = \sum_k \alpha'_{M-k+1}\, g^\star(f^\star(k)), \quad (A11)$$
hold because $f^\star(m(x))$ is an increasing function. Thus, Equations (20) and (21) hold with the direction of the inequalities reversed. Observe also that $f^\star(m(x))$ has the same convexity/concavity character as $f^\star(x)$. Applying Equations (A8)–(A11), according to whether $g^\star(x)$ is increasing or decreasing, to each line of Table 1, we obtain the desired result. □

Appendix C. Proof of Theorem 11

Proof. 
First, note that if $\alpha_i \alpha'_j \leq \alpha_j \alpha'_i$ for $i \leq j$, then $\alpha_1 \leq \alpha'_1$ and $\alpha_M \geq \alpha'_M$; see [17]. Let us proceed now by induction. For $M = 2$, condition (c) of Lemma 1 is true, because $\alpha_1 \leq \alpha'_1$ and $\alpha_1 + \alpha_2 = \alpha'_1 + \alpha'_2$. Let us suppose that it is true for $M - 1$. If $\{\alpha_k\}_{k=1}^M \succeq_{LR} \{\alpha'_k\}_{k=1}^M$, then it is immediate that $\{\alpha_k / \sum_{j=1}^{M-1}\alpha_j\}_{k=1}^{M-1} \succeq_{LR} \{\alpha'_k / \sum_{j=1}^{M-1}\alpha'_j\}_{k=1}^{M-1}$, and, by the induction hypothesis, condition (c) of Lemma 1 holds, and thus
$$\frac{\alpha_1}{\sum_{k=1}^{M-1}\alpha_k} \leq \frac{\alpha'_1}{\sum_{k=1}^{M-1}\alpha'_k},\quad \frac{\alpha_1+\alpha_2}{\sum_{k=1}^{M-1}\alpha_k} \leq \frac{\alpha'_1+\alpha'_2}{\sum_{k=1}^{M-1}\alpha'_k},\quad \ldots,\quad \frac{\alpha_1+\alpha_2+\cdots+\alpha_{M-1}}{\sum_{k=1}^{M-1}\alpha_k} = \frac{\alpha'_1+\alpha'_2+\cdots+\alpha'_{M-1}}{\sum_{k=1}^{M-1}\alpha'_k}.$$
However, we know that $\alpha_M \geq \alpha'_M$, and thus $\sum_{k=1}^{M-1}\alpha_k \leq \sum_{k=1}^{M-1}\alpha'_k$, because $\sum_{k=1}^M \alpha_k = \sum_{k=1}^M \alpha'_k = 1$; multiplying each inequality above by these sums, we obtain condition (c) of Lemma 1. □

References

  1. Belzunce, F.; Martínez-Riquelme, C.; Mulero, J. An Introduction to Stochastic Orders; Academic Press: Cambridge, MA, USA, 2016.
  2. Shaked, M.; Shanthikumar, G. Stochastic Orders; Springer: Berlin/Heidelberg, Germany, 2007.
  3. Pawlowsky-Glahn, V.; Egozcue, J.; Tolosana-Delgado, R. Modeling and Analysis of Compositional Data; J. Wiley & Sons: Hoboken, NJ, USA, 2015.
  4. Levy, H. Stochastic Dominance and Expected Utility: Survey and Analysis. Manag. Sci. 1992, 38, 555–593.
  5. Bullen, P. Handbook of Means and Their Inequalities; Springer Science+Business Media: Berlin/Heidelberg, Germany, 2003.
  6. Coggeshall, F. The Arithmetic, Geometric, and Harmonic Means. Q. J. Econ. 1886, 1, 83–86.
  7. Fernández-Villaverde, J. The Econometrics of DSGE Models. SERIEs 2010, 1, 3–49.
  8. Wikipedia Contributors. Constant Elasticity of Substitution. Available online: https://en.wikipedia.org/wiki/Constant_elasticity_of_substitution (accessed on 14 April 2021).
  9. Yoshida, Y. Weighted Quasi-Arithmetic Means and a Risk Index for Stochastic Environments. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 2011, 19, 1–16.
  10. Yoshida, Y. Weighted Quasi-Arithmetic Means: Utility Functions and Weighting Functions. In Modeling Decisions for Artificial Intelligence, MDAI 2013; Springer: Berlin/Heidelberg, Germany, 2013; pp. 25–36.
  11. Rényi, A. On Measures of Entropy and Information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability; University of California Press: Berkeley, CA, USA, 1961; Volume 1, pp. 547–561.
  12. Américo, A.; Khouzani, M.; Malacaria, P. Conditional Entropy and Data Processing: An Axiomatic Approach Based on Core-Concavity. IEEE Trans. Inf. Theory 2020, 66, 5537–5547.
  13. Wikipedia Contributors. Series and Parallel Springs. Available online: https://en.wikipedia.org/wiki/Series_and_parallel_springs (accessed on 2 April 2021).
  14. Wikipedia Contributors. Resistor. Available online: https://en.wikipedia.org/wiki/Resistor (accessed on 2 April 2021).
  15. Wikibooks Contributors. Fundamentals of Transportation/Traffic Flow. 2017. Available online: https://en.wikibooks.org/wiki/Fundamentals_of_Transportation/Traffic_Flow (accessed on 18 January 2018).
  16. Padois, T.; Doutres, O.; Sgard, F.; Berry, A. On the use of geometric and harmonic means with the generalized cross-correlation in the time domain to improve noise source maps. J. Acoust. Soc. Am. 2016, 140.
  17. Sbert, M.; Poch, J. A necessary and sufficient condition for the inequality of generalized weighted means. J. Inequal. Appl. 2016, 2016, 292.
  18. Sbert, M.; Havran, V.; Szirmay-Kalos, L.; Elvira, V. Multiple importance sampling characterization by weighted mean invariance. Vis. Comput. 2018, 34, 843–852.
  19. Sbert, M.; Poch, J.; Chen, M.; Bardera, A. Some Order Preserving Inequalities for Cross Entropy and Kullback–Leibler Divergence. Entropy 2018, 20, 959.
  20. Sbert, M.; Ancuti, C.; Ancuti, C.O.; Poch, J.; Chen, S.; Vila, M. Histogram Ordering. IEEE Access 2021, 9, 28785–28796.
  21. Sbert, M.; Yoshida, Y. Stochastic orders on two-dimensional space: Application to cross entropy. In Modeling Decisions for Artificial Intelligence—MDAI 2020; Torra, V., Narukawa, Y., Nin, J., Agell, N., Eds.; Springer: Berlin/Heidelberg, Germany, 2020.
  22. Yoshida, Y. Weighted quasi-arithmetic means on two-dimensional regions: An independent case. In Modeling Decisions for Artificial Intelligence—MDAI 2016; Torra, V., Narukawa, Y., Eds.; Springer: Berlin/Heidelberg, Germany, 2016; pp. 82–93.
  23. Tsallis, C.; Baldovin, F.; Cerbino, R.; Pierobon, P. Introduction to Nonextensive Statistical Mechanics and Thermodynamics. arXiv 2003, arXiv:cond-mat/0309093.
  24. Hadar, J.; Russell, W. Rules for Ordering Uncertain Prospects. Am. Econ. Rev. 1969, 59, 25–34.
  25. Zhang, Z.; Sabuncu, M.R. Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels. arXiv 2018, arXiv:1805.07836.
  26. Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley & Sons: New York, NY, USA, 2006.
  27. Dellacherie, C. Quelques commentaires sur les prolongements de capacités. In Séminaire de Probabilités V; Springer: Berlin/Heidelberg, Germany, 1971. Available online: http://www.numdam.org/article/SPS_1971__5__77_0.pdf (accessed on 22 May 2021).
  28. Denneberg, D. Non-Additive Measure and Integral; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1994.
  29. Havran, V.; Sbert, M. Optimal Combination of Techniques in Multiple Importance Sampling. In Proceedings of the 13th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and Its Applications in Industry (VRCAI '14); ACM: New York, NY, USA, 2014; pp. 141–150.
  30. Sbert, M.; Havran, V. Adaptive multiple importance sampling for general functions. Vis. Comput. 2017, 33, 845–855.
  31. Zhou, X.; Lin, H. Spatial Weights Matrix. In Encyclopedia of GIS; Springer: Boston, MA, USA, 2008; p. 1113.
  32. Smith, M.J.D.; Goodchild, M.F.; Longley, P. Geospatial Analysis: A Comprehensive Guide to Principles, Techniques and Software Tools; Troubador Publishing Ltd.: Leicester, UK, 2015.
Table 1. For each line, for $g(x)$, $f(x)$ fulfilling the conditions in the first and second columns, Equations (20) and (21) hold for $g^\star$, $f^\star$ fulfilling the conditions in columns three and four. By changing $f^\star$ from increasing to decreasing, the reverse of Equations (20) and (21) holds. ICX: convex and increasing; ICV: concave and increasing; DCX: convex and decreasing; DCV: concave and decreasing.

|    | $g$ | $f$ | $g^\star$ | $f^\star$ |
| 1  | ICX | ICV | ICV | ICV |
| 1' | "   | "   | DCX | ICV |
| 2  | DCX | ICX | DCV | ICX |
| 2' | "   | "   | ICX | ICX |
| 3  | ICV | ICX | ICX | ICX |
| 3' | "   | "   | DCV | ICX |
| 4  | DCV | ICV | DCX | ICV |
| 4' | "   | "   | ICV | ICV |
| 5  | ICX | ICV | DCV | ICX |
| 5' | "   | "   | ICX | ICX |
| 6  | DCX | ICX | ICV | ICV |
| 6' | "   | "   | DCX | ICV |
| 7  | ICV | ICX | DCX | ICV |
| 7' | "   | "   | ICV | ICV |
| 8  | DCV | ICV | ICX | ICX |
| 8' | "   | "   | DCV | ICX |
Table 2. For each line, with $g(x) = x$, $f(x)$ fulfilling the condition in the first column and Equations (20) and (21) holding, Equations (20) and (21) hold too for $g^\star$, $f^\star$ fulfilling the conditions in columns two and three. ICX: convex and increasing; ICV: concave and increasing; DCX: convex and decreasing; DCV: concave and decreasing.

| $f$ | $g^\star$ | $f^\star$ |
| ICV | ICV | ICV |
| ICV | DCX | ICV |
| ICX | ICX | ICX |
| ICX | DCV | ICX |
| ICV | DCV | ICX |
| ICV | ICX | ICX |
| ICX | DCX | ICV |
| ICX | ICV | ICV |