# Results on Varextropy Measure of Random Variables


## Abstract


## 1. Introduction

## 2. Varextropy Measure

**Definition 1.**

**Example 1.**

- (i) If X is uniformly distributed on $[0,a]$, then $VJ\left(X\right)=VH\left(X\right)=0$. Conceptually, the varextropy is thus compatible with the varentropy: both take values greater than or equal to zero, and when both are zero the information content is certain, that is, free of variability.
- (ii) If X follows the Weibull distribution with cdf$$F\left(x\right)=1-{e}^{-\lambda {x}^{\alpha}},\phantom{\rule{4pt}{0ex}}x>0,$$then$$\begin{array}{ccc}\hfill VJ\left(X\right)& =& \frac{{\alpha}^{2}{\lambda}^{2}}{4{\lambda}^{\frac{2(\alpha -1)}{\alpha}}}\left[\frac{\mathrm{\Gamma}(\frac{2(\alpha -1)}{\alpha}+1)}{{3}^{\frac{2(\alpha -1)}{\alpha}+1}}-\frac{{\mathrm{\Gamma}}^{2}(\frac{(\alpha -1)}{\alpha}+1)}{{2}^{\frac{2(\alpha -1)}{\alpha}+2}}\right],\hfill \\ \hfill VH\left(X\right)& =& \frac{{\pi}^{2}}{6}{\left(1-\frac{1}{\alpha}\right)}^{2}+\frac{2}{\alpha}-1.\hfill \end{array}$$
- (iii) If X follows a power distribution with parameter $\alpha >1$, i.e., $f\left(x\right)=\alpha {x}^{\alpha -1}$, $x\in (0,1)$, then we have$$\begin{array}{ccc}\hfill VJ\left(X\right)& =& \frac{{\alpha}^{3}{(\alpha -1)}^{2}}{4(3\alpha -2){(2\alpha -1)}^{2}},\hfill \\ \hfill VH\left(X\right)& =& {(\alpha -1)}^{2}\dot{\psi}\left(\alpha \right)-{(\alpha -1)}^{2}\dot{\psi}(\alpha +1),\hfill \end{array}$$
- (iv) If X follows a two-parameter exponential distribution with density function$$f\left(x\right)=\lambda {e}^{-\lambda (x-\mu )},\phantom{\rule{4pt}{0ex}}x>\mu ,$$
- (v) If X follows the Laplace distribution with density function$$f\left(x\right)=\frac{1}{2\beta}{e}^{-\left|x\right|/\beta},\phantom{\rule{4pt}{0ex}}x\in \mathbb{R},$$
- (vi) If X is beta-distributed with parameters α and β, then$$\begin{array}{ccc}\hfill VJ\left(X\right)& =& \frac{B(3(\alpha -1)+1,3(\beta -1)+1)}{4{B}^{3}(\alpha ,\beta )}-\frac{{B}^{2}(2(\alpha -1)+1,2(\beta -1)+1)}{4{B}^{4}(\alpha ,\beta )},\hfill \\ \hfill VH\left(X\right)& =& {(\alpha -1)}^{2}\dot{\psi}\left(\alpha \right)+{(\beta -1)}^{2}\dot{\psi}\left(\beta \right)-{(\alpha -1+\beta -1)}^{2}\dot{\psi}(\alpha +\beta ),\hfill \end{array}$$
- (vii) If $X\sim N(\mu ,{\sigma}^{2})$, then $VJ\left(X\right)=\frac{2-\sqrt{3}}{16\pi {\sigma}^{2}\sqrt{3}}$ and $VH\left(X\right)=\frac{1}{2}$. In this case, the varextropy depends on the scale parameter ${\sigma}^{2}$, whereas it is independent of the location parameter μ. The above examples show that the varextropy measure is more flexible than the varentropy, in the sense that the latter is free of the model parameters in some cases.
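All of the closed forms above follow from the representation $VJ\left(X\right)=\frac{1}{4}\left(\mathbb{E}\left[{f}^{2}\left(X\right)\right]-{\mathbb{E}}^{2}\left[f\left(X\right)\right]\right)$, i.e., $VJ\left(X\right)=\mathrm{Var}\left(-\frac{1}{2}f\left(X\right)\right)$, which also makes them easy to check by simulation. A minimal Monte Carlo sketch for the normal case in (vii); the helper names are ours, not from the paper:

```python
import math
import random

def varextropy_mc(pdf, sampler, n=200_000, seed=1):
    """Monte Carlo estimate of VJ(X) = Var(-f(X)/2)."""
    rng = random.Random(seed)
    vals = [-0.5 * pdf(sampler(rng)) for _ in range(n)]
    mean = sum(vals) / n
    return sum((v - mean) ** 2 for v in vals) / n

sigma = 2.0
normal_pdf = lambda x: math.exp(-x * x / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

vj_mc = varextropy_mc(normal_pdf, lambda rng: rng.gauss(0.0, sigma))
vj_exact = (2 - math.sqrt(3)) / (16 * math.pi * sigma**2 * math.sqrt(3))
```

The same helper can be pointed at any of the densities in this example; only the pdf and the sampler change.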

**Proposition 1.**

**Proposition 2.**

**Proposition 3.**

**Remark 1.**

**Proposition 4.**

**Remark 2.**

**Theorem 1.**

**Proof.**

**Definition 2.**

**Remark 3.**

#### 2.1. Residual and Past Varextropies

**Example 2.**

- (i) If X has an exponential distribution, then$$VJ\left({X}_{t}\right)=VJ\left({X}_{\left[t\right]}\right)=VJ\left(X\right),$$
- (ii) If X follows a power distribution with parameter $\alpha >1$, then$$\begin{array}{cc}\hfill \phantom{\rule{1.em}{0ex}}& VJ\left({X}_{t}\right)=\frac{{\alpha}^{3}}{4{(1-{t}^{\alpha})}^{4}}\left[\frac{(1-{t}^{3\alpha -2})(1-{t}^{\alpha})}{3\alpha -2}-\frac{\alpha {(1-{t}^{2\alpha -1})}^{2}}{{(2\alpha -1)}^{2}}\right],\hfill \\ \hfill \phantom{\rule{1.em}{0ex}}& VJ\left({X}_{\left[t\right]}\right)=\frac{{\alpha}^{3}{(\alpha -1)}^{2}}{4{t}^{2}(3\alpha -2){(2\alpha -1)}^{2}}.\hfill \end{array}$$
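The residual varextropy is simply the varextropy of the residual density ${f}_{t}\left(x\right)=f\left(x\right)/\left(1-F\left(t\right)\right)$, so the closed form in (ii) can be cross-checked by quadrature. A sketch for the power distribution; the function names are ours:

```python
def vj_residual_power(alpha, t, m=20_000):
    """VJ(X_t) for f(x) = alpha*x^(alpha-1) on (0,1), by midpoint quadrature of
    VJ = (1/4) * (int f_t^3 - (int f_t^2)^2) over (t, 1)."""
    h = (1.0 - t) / m
    s2 = s3 = 0.0
    surv = 1.0 - t**alpha                    # P(X > t)
    for i in range(m):
        x = t + (i + 0.5) * h
        ft = alpha * x**(alpha - 1) / surv   # residual density at x
        s2 += ft**2 * h
        s3 += ft**3 * h
    return 0.25 * (s3 - s2**2)

def vj_residual_power_exact(alpha, t):
    """Closed form for VJ(X_t) of the power distribution."""
    a = alpha
    return a**3 / (4 * (1 - t**a)**4) * (
        (1 - t**(3*a - 2)) * (1 - t**a) / (3*a - 2)
        - a * (1 - t**(2*a - 1))**2 / (2*a - 1)**2)

vj_num = vj_residual_power(2.0, 0.3)
vj_cf = vj_residual_power_exact(2.0, 0.3)
```

At $t=0$ the closed form reduces to the unconditional value $\frac{{\alpha}^{3}{(\alpha -1)}^{2}}{4(3\alpha -2){(2\alpha -1)}^{2}}$ of the power distribution.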

**Proposition 5.**

#### 2.2. Reliability Theory

**Proposition 6.**

**Proposition 7.**

**Remark 4.**

- (i) $VJ\left({X}_{1:n}\right)=\frac{{n}^{3}}{4}\mathbb{E}\left[{f}^{2}\left({F}^{-1}(1-U)\right){U}^{3(n-1)}\right]-\frac{{n}^{4}}{4}{\mathbb{E}}^{2}\left[f\left({F}^{-1}(1-U)\right){U}^{2(n-1)}\right];$
- (ii) $VJ\left({X}_{n:n}\right)=\frac{{n}^{3}}{4}\mathbb{E}\left[{f}^{2}\left({F}^{-1}\left(U\right)\right){U}^{3(n-1)}\right]-\frac{{n}^{4}}{4}{\mathbb{E}}^{2}\left[f\left({F}^{-1}\left(U\right)\right){U}^{2(n-1)}\right];$
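For a uniform parent on $(0,1)$ we have $f\left({F}^{-1}\left(u\right)\right)=1$, so (i) reduces to $VJ\left({X}_{1:n}\right)=\frac{{n}^{3}}{4(3n-2)}-\frac{{n}^{4}}{4{(2n-1)}^{2}}$, which gives a convenient sanity check of the representation by simulating U. A sketch, with names of our choosing:

```python
import random

def vj_min_order_stat_mc(n, f_quantile_comp, m=400_000, seed=7):
    """MC estimate of VJ(X_{1:n}) from the representation
    (n^3/4) E[f(F^{-1}(1-U)) U^{3(n-1)}] - (n^4/4) E^2[f(F^{-1}(1-U)) U^{2(n-1)}]."""
    rng = random.Random(seed)
    e3 = e2 = 0.0
    for _ in range(m):
        u = rng.random()
        fu = f_quantile_comp(u)          # f(F^{-1}(1-u)) for the chosen parent
        e3 += fu**2 * u**(3 * (n - 1))
        e2 += fu * u**(2 * (n - 1))
    e3 /= m
    e2 /= m
    return (n**3 / 4) * e3 - (n**4 / 4) * e2**2

n = 5
vj_mc = vj_min_order_stat_mc(n, lambda u: 1.0)   # uniform parent: f(F^{-1}) = 1
vj_exact = n**3 / (4 * (3 * n - 2)) - n**4 / (4 * (2 * n - 1)**2)
```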

**Proposition 8.**

**Example 3.**

- (i) If X is uniformly distributed on $[a,b]$, then$$VJ\left({X}_{k:n}\right)={\left(\frac{1}{b-a}\right)}^{2}\left(\frac{B(3k-2,3(n-k)+1)}{4{B}^{3}(k,n-k+1)}-\frac{{B}^{2}(2k-1,2(n-k)+1)}{4{B}^{4}(k,n-k+1)}\right);$$
- (ii) If X has an exponential distribution with parameter $\theta$, then$$VJ\left({X}_{k:n}\right)={\theta}^{2}\left(\frac{B(3k-2,3(n-k)+3)}{4{B}^{3}(k,n-k+1)}-\frac{{B}^{2}(2k-1,2(n-k)+2)}{4{B}^{4}(k,n-k+1)}\right);$$
- (iii) If X has a Pareto distribution with shape and scale parameters λ and β, respectively, then$$VJ\left({X}_{k:n}\right)=\frac{{\lambda}^{2}}{{\beta}^{2}}\left(\frac{B(3k-2,3(n-k)+\frac{2}{\lambda}+3)}{4{B}^{3}(k,n-k+1)}-\frac{{B}^{2}(2k-1,2(n-k)+\frac{1}{\lambda}+2)}{4{B}^{4}(k,n-k+1)}\right),$$where B denotes the complete beta function.
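Writing B for the complete beta function, the uniform case can be verified against direct quadrature of the density of ${X}_{k:n}$; note that the second term carries ${B}^{4}(k,n-k+1)$ in its denominator, exactly as in the beta-distribution formula of Example 1. A sketch on $(0,1)$, with names of our choosing:

```python
import math

def beta_fn(a, b):
    """Complete beta function via math.gamma."""
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def vj_uniform_order_stat(k, n):
    """Closed form for VJ(X_{k:n}) with a uniform parent on (0,1)."""
    B = beta_fn(k, n - k + 1)
    return (beta_fn(3*k - 2, 3*(n - k) + 1) / (4 * B**3)
            - beta_fn(2*k - 1, 2*(n - k) + 1)**2 / (4 * B**4))

def vj_order_stat_quad(k, n, m=20_000):
    """Same quantity by midpoint quadrature: VJ = (1/4)(int f_{k:n}^3 - (int f_{k:n}^2)^2)."""
    B = beta_fn(k, n - k + 1)
    h = 1.0 / m
    s2 = s3 = 0.0
    for i in range(m):
        x = (i + 0.5) * h
        fk = x**(k - 1) * (1 - x)**(n - k) / B   # density of X_{k:n}
        s2 += fk**2 * h
        s3 += fk**3 * h
    return 0.25 * (s3 - s2**2)

vj_closed = vj_uniform_order_stat(2, 5)
vj_quad = vj_order_stat_quad(2, 5)
```

The symmetry $VJ\left({X}_{k:n}\right)=VJ\left({X}_{n-k+1:n}\right)$ for symmetric parents is visible directly in the closed form.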

**Proposition 9.**

#### 2.3. The Discrete Case

**Example 4.**

**Example 5.**

Using the function `fzero` of MATLAB, it is found that if $p=q=0.1508$, then $J\left(X\right)=0.639032$ and $VJ\left(X\right)=0.0158$. Hence, with this choice of parameters, the considered random variable has the same extropy as the one considered in Example 4 with $\theta ={\theta}^{\ast}$, but the varextropy of X is larger. This implies that the coding procedure is more reliable for sequences generated by the random variable Y considered in Example 4.
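The `fzero` computation can be mirrored with a simple bisection. The full statement of Example 5 is not reproduced above, so the sketch below assumes a three-point pmf $(p,q,1-p-q)$ with $p=q$, together with the discrete extropy $J\left(X\right)=-{\sum}_{i}(1-{p}_{i})\mathrm{ln}(1-{p}_{i})$ and the discrete varextropy $VJ\left(X\right)=\frac{1}{4}\left({\sum}_{i}{p}_{i}^{3}-{\left({\sum}_{i}{p}_{i}^{2}\right)}^{2}\right)$; under these assumptions it recovers the reported values:

```python
import math

def extropy(p):
    """Discrete extropy J = -sum_i (1 - p_i) ln(1 - p_i)."""
    return -sum((1 - pi) * math.log(1 - pi) for pi in p)

def varextropy(p):
    """Discrete varextropy VJ = (1/4)(sum p_i^3 - (sum p_i^2)^2)."""
    s2 = sum(pi**2 for pi in p)
    s3 = sum(pi**3 for pi in p)
    return 0.25 * (s3 - s2**2)

def solve_p(target, lo=1e-6, hi=0.3333, tol=1e-10):
    """Bisection analogue of MATLAB's fzero: find p with J(p, p, 1-2p) = target.
    J is strictly increasing in p on (0, 1/3), so the root is unique."""
    f = lambda p: extropy([p, p, 1 - 2 * p]) - target
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

p = solve_p(0.639032)
vj = varextropy([p, p, 1 - 2 * p])
```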

## 3. General Results on Conditional Varextropy

**Definition 3.**

**Proposition 10.**

**Lemma 1.**

**Proposition 11.**

**Theorem 2.**

**Proof.**

**Lemma 2.**

- (i) $VJ\left(Z|Y,X\right)=VJ\left(Z|Y\right).$
- (ii) $\mathbb{E}\left[VJ\left(Z|Y\right)\right]\le \mathbb{E}\left[VJ\left(Z|X\right)\right].$

**Proof.**

- (i) By using the Markov property and the definition of $VJ\left(Z|Y,X\right)$, the result follows.
- (ii) Let $\mathcal{G}=\sigma \left(X\right)$ and $\mathcal{F}=\sigma (X,Y)$; then, from (5), we have$$\mathbb{E}\left[VJ\left(Z|X\right)\right]\ge \mathbb{E}\left(\mathbb{E}\left[VJ\left(Z|X,Y\right)|X\right]\right)=\mathbb{E}\left[VJ\left(Z|X,Y\right)\right]=\mathbb{E}\left[VJ\left(Z|Y\right)\right],$$
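Part (i) can be made concrete on a two-state chain: by the Markov property, the conditional law of Z given $(X,Y)$ coincides with the law given Y alone, so any functional of that law agrees. The sketch below uses the discrete varextropy $\frac{1}{4}\left({\sum}_{i}{p}_{i}^{3}-{\left({\sum}_{i}{p}_{i}^{2}\right)}^{2}\right)$ as that functional; all transition matrices are illustrative, not from the paper:

```python
def vj_pmf(p):
    """Discrete varextropy of a pmf: (1/4)(sum p_i^3 - (sum p_i^2)^2)."""
    s2 = sum(q * q for q in p)
    s3 = sum(q**3 for q in p)
    return 0.25 * (s3 - s2**2)

# Markov chain X -> Y -> Z on {0, 1}:
# P(X = x) = 1/2, P(Y = y | X = x) = A[x][y], P(Z = z | Y = y) = B[y][z]
A = [[0.7, 0.3], [0.2, 0.8]]
B = [[0.6, 0.4], [0.1, 0.9]]

def cond_z_given_xy(x, y):
    """P(Z = . | X = x, Y = y) from the joint P(x, y, z) = 0.5 * A[x][y] * B[y][z]."""
    joint = [0.5 * A[x][y] * B[y][z] for z in (0, 1)]
    total = sum(joint)
    return [j / total for j in joint]

checks = []
for x in (0, 1):
    for y in (0, 1):
        pz = cond_z_given_xy(x, y)
        # the conditional law given (x, y) is exactly B[y], independently of x ...
        checks.append(max(abs(pz[z] - B[y][z]) for z in (0, 1)) < 1e-12)
        # ... hence VJ(Z | Y = y, X = x) = VJ(Z | Y = y)
        checks.append(abs(vj_pmf(pz) - vj_pmf(B[y])) < 1e-12)
```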

**Remark 5.**

## 4. Stochastic Comparisons

**Definition 4.**

- 1. X is smaller than Y in the stochastic ordering, denoted by $X\stackrel{st}{\le}Y$, if $\overline{F}\left(t\right)\le \overline{G}\left(t\right)$ for all t;
- 2. X is smaller than Y in the likelihood ratio ordering, denoted by $X\stackrel{lr}{\le}Y$, if $\frac{g\left(t\right)}{f\left(t\right)}$ is increasing in $t>0$;
- 3. X is smaller than Y in the hazard rate order, denoted by $X\stackrel{hr}{\le}Y$, if ${\lambda}_{X}\left(x\right)\ge {\lambda}_{Y}\left(x\right)$ for all x;
- 4. X is smaller than Y in the dispersive order, denoted by $X\stackrel{disp}{\le}Y$, if $f\left({F}^{-1}\left(u\right)\right)\ge g\left({G}^{-1}\left(u\right)\right)$ for all $u\in (0,1)$, where ${F}^{-1}$ and ${G}^{-1}$ are the right-continuous inverses of F and G, respectively;
- 5. X is said to have decreasing failure rate (DFR) if ${\lambda}_{X}\left(x\right)$ is decreasing in x;
- 6. X is smaller than Y in the convex transform order, denoted by $X\stackrel{c}{\le}Y$, if ${G}^{-1}\left(F\left(x\right)\right)$ is a convex function on the support of X;
- 7. X is smaller than Y in the star order, denoted by $X\stackrel{\ast}{\le}Y$, if $\frac{{G}^{-1}F\left(x\right)}{x}$ is increasing in $x\ge 0$;
- 8. X is smaller than Y in the superadditive order, denoted by $X\stackrel{su}{\le}Y$, if ${G}^{-1}\left(F(t+u)\right)\ge {G}^{-1}\left(F\left(t\right)\right)+{G}^{-1}\left(F\left(u\right)\right)$ for $t\ge 0,u\ge 0$.
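As a concrete instance of orders 1–4, take $X\sim Exp({\lambda}_{1})$ and $Y\sim Exp({\lambda}_{2})$ with ${\lambda}_{1}\ge {\lambda}_{2}$: the hazard rates are the constants ${\lambda}_{1}\ge {\lambda}_{2}$, and $f\left({F}^{-1}\left(u\right)\right)={\lambda}_{1}(1-u)\ge {\lambda}_{2}(1-u)=g\left({G}^{-1}\left(u\right)\right)$. A grid-check sketch, with names of our choosing:

```python
import math

l1, l2 = 2.0, 0.5   # X ~ Exp(l1), Y ~ Exp(l2), with l1 >= l2

def quantile(u, lam):
    """Quantile function of Exp(lam)."""
    return -math.log(1.0 - u) / lam

def pdf(x, lam):
    return lam * math.exp(-lam * x)

ok_st = ok_hr = ok_disp = True
for i in range(1, 200):
    u = i / 200
    x = quantile(u, l1)
    ok_hr = ok_hr and (l1 >= l2)                                       # constant hazards
    ok_disp = ok_disp and pdf(quantile(u, l1), l1) >= pdf(quantile(u, l2), l2)
    ok_st = ok_st and math.exp(-l1 * x) <= math.exp(-l2 * x)           # survival functions
```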

**Definition 5.**

**Example 6.**

- (i) If $X\sim Laplace(0,1)$ and $Y\sim Exp\left(1\right)$, then we have $X\stackrel{VJ}{\le}Y$, since $VJ\left(X\right)=\frac{1}{192}$ and $VJ\left(Y\right)=\frac{1}{48}$;
- (ii) If $X\sim Weibull(1,2)$ and $Y\sim Weibull(1,1)$, then we have $X\stackrel{VJ}{\le}Y$, since $VJ\left(X\right)=0.0129$ and $VJ\left(Y\right)=0.02$;
- (iii) If $X\sim Exp\left({\lambda}_{1}\right)$ and $Y\sim Exp\left({\lambda}_{2}\right)$ with ${\lambda}_{1}\le {\lambda}_{2}$, then $X\stackrel{VJ}{\le}Y$;
- (iv) If $X\sim N({\mu}_{1},{\sigma}_{1}^{2})$ and $Y\sim N({\mu}_{2},{\sigma}_{2}^{2})$ with ${\sigma}_{2}\le {\sigma}_{1}$, then $X\stackrel{VJ}{\le}Y$.
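Items such as (i) can be checked numerically from $VJ\left(X\right)=\frac{1}{4}\left(\int {f}^{3}-{\left(\int {f}^{2}\right)}^{2}\right)$. A quadrature sketch, with names of our choosing:

```python
import math

def vj_from_pdf(pdf, lo, hi, m=200_000):
    """VJ(X) = (1/4)(int f^3 - (int f^2)^2) by midpoint quadrature on [lo, hi]."""
    h = (hi - lo) / m
    s2 = s3 = 0.0
    for i in range(m):
        x = lo + (i + 0.5) * h
        f = pdf(x)
        s2 += f * f * h
        s3 += f**3 * h
    return 0.25 * (s3 - s2**2)

# Laplace(0, 1) and Exp(1); the integration windows make truncation error negligible
vj_laplace = vj_from_pdf(lambda x: 0.5 * math.exp(-abs(x)), -40.0, 40.0)
vj_exp = vj_from_pdf(lambda x: math.exp(-x), 0.0, 40.0)
```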

**Remark 6.**

**Proposition 12.**

**Proposition 13.**

- (i) ${X}_{k:n}\stackrel{VJ}{\le}{X}_{1:n}$ and ${X}_{k:n}\stackrel{VJ}{\le}{X}_{n:n}$ for all $1\le k\le n$;
- (ii) when n is even, we have ${X}_{\frac{n}{2}:n}\stackrel{VJ}{\le}{X}_{k:n}$ for all $1\le k\le n$;
- (iii) when n is odd, we have ${X}_{\frac{n+1}{2}:n}\stackrel{VJ}{\le}{X}_{k:n}$ for all $1\le k\le n.$

**Remark 7.**

**Corollary 1.**

**Corollary 2.**

**Proposition 14.**

**Example 7.**

**Corollary 3.**

**Proof.**

**Corollary 4.**

**Proof.**

**Corollary 5.**

**Proof.**

**Corollary 6.**

**Corollary 7.**

**Corollary 8.**

## 5. Conclusions

- (1) Both the $VJ$ and the $VH$ of a uniform random variable are equal to zero; see Example 1.
- (2) The newly introduced varextropy measure is more flexible than the varentropy, in the sense that the latter is free of the model parameters in some cases; see Example 1.
- (3) For the normal distribution, the varextropy depends only on the scale parameter ${\sigma}^{2}$; see Example 1.
- (4) For symmetric distributions, the $VJ$ is unchanged under the symmetry; see Proposition 3.
- (5) The $VJ$ of the half-normal distribution is easily obtained from the $VJ$ of the normal distribution; see Remark 1.
- (6) $VJ$ can be approximated using a Taylor series; for further details, see Theorem 1.
- (7) $VJ$ is invariant under translations; for further details, see Proposition 4.
- (8) The residual $VJ$ of the exponential distribution does not depend on the age t; see Example 2.
- (9) The $VJ$ of the PHRM can be obtained from the properties of the original model; see Proposition 6.
- (10) For symmetric distributions, the $VJ$ of the k-th order statistic equals the $VJ$ of the $(n-k+1)$-th order statistic from a sample of size n; for further details, see Proposition 8.
- (11) Among the order statistics, the sample median has the minimum $VJ$; see Section 2.2.
- (12) The $VJ$ of a random variable X is larger than the expected value of the conditional $VJ$ of X; see Proposition 10.
- (13) If $X\to Y\to Z$ is a Markov chain, then $VJ\left(Z|Y,X\right)=VJ\left(Z|Y\right)$; for further details, see Lemma 2.
- (14) For the one-parameter exponential distribution, the distribution increases in the varextropy order as the parameter increases; see Example 6.
- (15) For the normal distribution, the distribution decreases in the varextropy order as the scale parameter increases, independently of the location parameter; see Example 6.
- (16) X is smaller than Y in the varextropy order if and only if $|X|$ is smaller than $|Y|$ in the varextropy order; see Remark 6.
- (17) In the varextropy order, every continuous random variable is larger than the uniform distribution; for further details, see Proposition 12.

## Author Contributions

## Funding

## Conflicts of Interest

## Abbreviations

| Abbreviation | Meaning |
| --- | --- |
| cdf | Cumulative distribution function |
| pdf | Probability density function |
| VJ | Varextropy |


| k | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $VJ\left({X}_{k:n}\right)$ | $8.859319$ | $2.225035$ | $1.407279$ | $1.129121$ | $1.027418$ | $1.027418$ | $1.129121$ | $1.407279$ | $2.225035$ | $8.859319$ |


© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Vaselabadi, N.M.; Tahmasebi, S.; Kazemi, M.R.; Buono, F.
Results on Varextropy Measure of Random Variables. *Entropy* **2021**, *23*, 356.
https://doi.org/10.3390/e23030356
