Article

Explicit Expressions for Most Common Entropies

Department of Mathematics, Howard University, Washington, DC 20059, USA
*
Author to whom correspondence should be addressed.
Entropy 2023, 25(3), 534; https://doi.org/10.3390/e25030534
Submission received: 10 February 2023 / Revised: 14 March 2023 / Accepted: 15 March 2023 / Published: 20 March 2023

Abstract

Entropies are useful measures of variation. However, explicit expressions for entropies available in the literature are limited. In this paper, we provide a comprehensive collection of explicit expressions for four of the most common entropies for over sixty continuous univariate distributions. Most of the derived expressions are new. The explicit expressions involve known special functions.

1. Introduction

Let X denote a continuous random variable with probability density and cumulative distribution functions specified by $f_X(\cdot)$ and $F_X(\cdot)$, respectively. Four of the most popular entropies are the geometric mean [1,2], Shannon entropy ([3], pp. 379–423; [3], pp. 623–656), Rényi entropy [4] and the cumulative residual entropy [5], defined by
$$GM(X) = \int \log x \, f_X(x) \, dx, \tag{1}$$
$$S(X) = -\int f_X(x) \log f_X(x) \, dx, \tag{2}$$
$$R(X) = \frac{1}{1-\gamma} \log \int f_X(x)^\gamma \, dx \tag{3}$$
and
$$CE(X) = -\int \left[ 1 - F_X(x) \right] \log \left[ 1 - F_X(x) \right] dx, \tag{4}$$
respectively, for $\gamma > 0$ and $\gamma \neq 1$.
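As an illustrative sanity check (ours, not part of the paper), the four definitions can be evaluated by direct quadrature and compared against the closed forms tabulated in Section 3. The sketch below, assuming only the Python standard library, uses the exponential distribution (item 19 of Section 3); the rate, the Rényi order and the grid sizes are arbitrary choices.

```python
# Midpoint-rule evaluation of the four entropies (1)-(4) for the
# exponential density f(x) = a*exp(-a*x); the closed forms are
# GM = Gamma'(1) - log a, S = 1 - log a,
# R = -log a - log(gamma)/(1 - gamma) and CE = 1/a (item 19).
import math

def midpoint(g, lo, hi, n=200_000):
    """Composite midpoint rule; tolerates integrable endpoint singularities."""
    h = (hi - lo) / n
    return h * sum(g(lo + (i + 0.5) * h) for i in range(n))

a, gam = 2.0, 2.0                      # arbitrary rate and Renyi order
f = lambda x: a * math.exp(-a * x)     # density
sf = lambda x: math.exp(-a * x)        # survival function 1 - F(x)
hi = 40.0 / a                          # effective upper integration limit

GM = midpoint(lambda x: math.log(x) * f(x), 0.0, hi)
S = midpoint(lambda x: -f(x) * math.log(f(x)), 0.0, hi)
R = math.log(midpoint(lambda x: f(x) ** gam, 0.0, hi)) / (1.0 - gam)
CE = midpoint(lambda x: -sf(x) * math.log(sf(x)), 0.0, hi)

euler = 0.57721566490153286            # Euler-Mascheroni constant, -Gamma'(1)
assert abs(GM - (-euler - math.log(a))) < 1e-3
assert abs(S - (1.0 - math.log(a))) < 1e-3
assert abs(R - (-math.log(a) - math.log(gam) / (1.0 - gam))) < 1e-3
assert abs(CE - 1.0 / a) < 1e-3
```

The same four integrands, with $f_X$ and $F_X$ swapped in, reproduce the tabulated expressions for the other distributions in Section 3.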
There have been several papers giving explicit expressions for entropies. Ref. [6] derived expressions for S ( X ) for twenty univariate distributions. Ref. [7] derived expressions for S ( X ) for five multivariate distributions. Ref. [8] derived expressions for S ( X ) and mutual information for eight multivariate distributions. Ref. [9] derived expressions for S ( X ) and R ( X ) for fifteen bivariate distributions. Ref. [10] derived expressions for S ( X ) and R ( X ) for fifteen multivariate distributions. Ref. [11] derived expressions for S ( X ) , R ( X ) and the q-entropy for the Dagum distribution. Ref. [12] derived expressions for S ( X ) for certain binomial type distributions. Ref. [13] derived expressions for G M ( X ) and C E ( X ) for three Lindley type distributions.
All of these and other papers are restrictive in terms of the entropies considered and the number of distributions considered. In this paper, we derive expressions for (1)–(4) for more than sixty continuous univariate distributions; see Section 3. Most of the derived expressions are new. Some technicalities used in the derivations are given in Section 2. The derivations themselves are not given and can be obtained from the corresponding author. Some conclusions and future work are noted in Section 4.
The calculations of this paper involve several special functions, including the exponential integral defined by
$$\operatorname{Ei}(a) = \int_{-\infty}^{a} \frac{\exp(t)}{t} \, dt;$$
the gamma function defined by
$$\Gamma(a) = \int_0^\infty t^{a-1} \exp(-t) \, dt;$$
the upper incomplete gamma function defined by
$$\Gamma(a, x) = \int_x^\infty t^{a-1} \exp(-t) \, dt;$$
the lower incomplete gamma function defined by
$$\gamma(a, x) = \int_0^x t^{a-1} \exp(-t) \, dt;$$
the digamma function defined by
$$\psi(a) = \frac{d \log \Gamma(a)}{da};$$
the standard normal distribution function defined by
$$\Phi(a) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{a} \exp\left(-\frac{t^2}{2}\right) dt;$$
the error function defined by
$$\operatorname{erf}(a) = \frac{2}{\sqrt{\pi}} \int_0^a \exp\left(-t^2\right) dt;$$
the complementary error function defined by
$$\operatorname{erfc}(a) = \frac{2}{\sqrt{\pi}} \int_a^\infty \exp\left(-t^2\right) dt;$$
the beta function defined by
$$B(a, b) = \int_0^1 t^{a-1} (1-t)^{b-1} \, dt;$$
the incomplete beta function defined by
$$B_x(a, b) = \int_0^x t^{a-1} (1-t)^{b-1} \, dt;$$
the incomplete beta function ratio defined by
$$I_x(a, b) = \frac{B_x(a, b)}{B(a, b)};$$
the modified Bessel function of the first kind of order $\nu$ defined by
$$I_\nu(x) = \sum_{k=0}^\infty \frac{1}{\Gamma(k+\nu+1) \, k!} \left(\frac{x}{2}\right)^{2k+\nu};$$
the modified Bessel function of the second kind defined by
$$K_\nu(x) = \begin{cases} \dfrac{\pi}{2} \, \dfrac{I_{-\nu}(x) - I_\nu(x)}{\sin(\pi\nu)}, & \text{if } \nu \notin \mathbb{Z}, \\[1ex] \lim\limits_{\mu \to \nu} K_\mu(x), & \text{if } \nu \in \mathbb{Z}; \end{cases}$$
the confluent hypergeometric function defined by
$${}_1F_1\left(a; b; x\right) = \sum_{k=0}^\infty \frac{(a)_k}{(b)_k} \frac{x^k}{k!},$$
where $(a)_k = a(a+1)\cdots(a+k-1)$ denotes the ascending factorial; the Kummer function defined by
$$\Psi\left(a; b; x\right) = \frac{\Gamma(1-b)}{\Gamma(1+a-b)} \, {}_1F_1\left(a; b; x\right) + \frac{\Gamma(b-1)}{\Gamma(a)} \, x^{1-b} \, {}_1F_1\left(1+a-b; 2-b; x\right);$$
the Gauss hypergeometric function defined by
$${}_2F_1\left(a, b; c; x\right) = \sum_{k=0}^\infty \frac{(a)_k (b)_k}{(c)_k} \frac{x^k}{k!};$$
the degenerate hypergeometric series of two variables defined by
$$\Phi_1\left(a, b, c, x, y\right) = \sum_{m=0}^\infty \sum_{n=0}^\infty \frac{(a)_{m+n} (b)_n \, x^m y^n}{(c)_{m+n} \, m! \, n!};$$
and the Appell hypergeometric function of two variables defined by
$$F_1\left(a, b, c; d; x, y\right) = \sum_{m=0}^\infty \sum_{n=0}^\infty \frac{(a)_{m+n} (b)_m (c)_n \, x^m y^n}{(d)_{m+n} \, m! \, n!}.$$
The properties of these special functions can be found in [14,15].

2. Technical Lemmas

The derivations in Section 3 use the following two lemmas.
Lemma 1.
The geometric mean defined by (1) can be calculated using
$$GM(X) = \left. \frac{d}{d\alpha} E\left(X^\alpha\right) \right|_{\alpha=0},$$
where $E(\cdot)$ denotes the expectation defined by
$$E\left(X^\alpha\right) = \int x^\alpha f_X(x) \, dx.$$
Proof. 
Note that
$$GM(X) = \int \left[ \left. \frac{d}{d\alpha} x^\alpha \right|_{\alpha=0} \right] f_X(x) \, dx = \left. \frac{d}{d\alpha} \int x^\alpha f_X(x) \, dx \right|_{\alpha=0}.$$
Hence, the result. □
Lemma 2.
The cumulative residual entropy defined by (4) can be calculated using
$$CE(X) = \sum_{k=1}^\infty \frac{1}{k} \int F_X(x)^k \, dx - \sum_{k=1}^\infty \frac{1}{k} \int F_X(x)^{k+1} \, dx.$$
Proof. 
Using the Taylor series expansion $-\log(1-z) = \sum_{k=1}^\infty z^k / k$, we can write
$$CE(X) = \int \left[ 1 - F_X(x) \right] \sum_{k=1}^\infty \frac{1}{k} F_X(x)^k \, dx = \sum_{k=1}^\infty \frac{1}{k} \int F_X(x)^k \, dx - \sum_{k=1}^\infty \frac{1}{k} \int F_X(x)^{k+1} \, dx.$$
Hence, the result. □
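As a quick check of Lemma 2 (ours, not part of the paper), take the uniform distribution on $(0,1)$, where $F_X(x) = x$, so $\int F_X(x)^k\,dx = 1/(k+1)$ and $\int F_X(x)^{k+1}\,dx = 1/(k+2)$; the two series combine to $1/4$, matching $CE(X) = (b-a)/4$ for the uniform distribution in Section 3. The truncation point below is an arbitrary choice.

```python
# Lemma 2 for F(x) = x on (0, 1): CE = sum_k (1/k)*[1/(k+1) - 1/(k+2)];
# the combined terms decay like 1/k^3, so the tail beyond N is negligible.
N = 100_000
ce_series = sum(1.0 / k * (1.0 / (k + 1) - 1.0 / (k + 2)) for k in range(1, N + 1))
assert abs(ce_series - 0.25) < 1e-6
```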

3. The Tabulation

In this section, we give expressions for f X ( x ) (the probability density function), F X ( x ) (the cumulative distribution function), G M ( X ) (the geometric mean), S ( X ) (Shannon entropy), R ( X ) (Rényi entropy), and C E ( X ) (the cumulative residual entropy) for over sixty continuous univariate distributions.
1. Gauss hypergeometric beta distribution [16]: for this distribution,
f X ( x ) = K x a 1 ( 1 x ) b 1 ( 1 + d x ) c ,
F X ( x ) = K x a a F 1 a , c , 1 b , a + 1 ; d x , x ,
G M ( X ) = exp Γ ( a ) Γ ( a ) + 1 2 F 1 c , a ; a + b ; d α 2 F 1 c , α + a ; α + a + b ; d α = 0 Γ ( a + b ) Γ ( a + b ) ,
S ( X ) = Γ ( a ) Γ ( a ) 1 2 F 1 c , a ; a + b ; d α 2 F 1 c , α + a ; α + a + b ; d α = 0 + 2 Γ ( a + b ) Γ ( a + b ) Γ ( b ) Γ ( b ) 1 2 F 1 c , a ; a + b ; d α 2 F 1 c , a ; α + a + b ; d α = 0 1 2 F 1 c , a ; a + b ; d d d α 2 F 1 c α , a ; a + b ; d log B ( a , b ) log 2 F 1 c α , a ; a + b ; d ,
R ( X ) = 1 1 γ log B a γ γ + 1 , b γ γ + 1 + 1 1 γ log 2 F 1 c γ , a γ γ + 1 ; a γ + b γ 2 γ + 2 ; d γ 1 γ log B ( a , b ) γ 1 γ log 2 F 1 c , a ; a + b ; d
and
C E ( X ) = 0 1 1 K x a a F 1 a , c , 1 b , a + 1 ; d x , x log 1 K x a a F 1 a , c , 1 b , a + 1 ; d x , x d x
for 0 < x < 1 , a > 0 , b > 0 , < c < and d > 1 , where Γ ( x ) = d Γ ( x ) d x and 1 K = B ( a , b ) 2 F 1 c , a ; a + b ; d .
2. q Weibull distribution [17]: for this distribution,
f X ( x ) = ( 2 q ) a b x a 1 1 ( 1 q ) b x a 1 1 q ,
F X ( x ) = 1 1 ( 1 q ) b x a 2 q 1 q ,
G M ( X ) = 2 q a ( q 1 ) log ( q 1 ) b Γ 2 q q 1 Γ 1 q 1 2 q a ( q 1 ) Γ 2 q q 1 Γ 1 q 1 + 2 q a ( q 1 ) Γ ( 1 ) Γ 2 q q 1 Γ 1 q 1 , if   1 < q < 2 , 2 q a ( q 1 ) log ( 1 q ) b Γ 2 q 1 q Γ 3 2 q 1 q + 2 q a ( q 1 ) Γ 3 2 q 1 q Γ 3 2 q 1 q 2 2 q a ( q 1 ) Γ ( 1 ) Γ 2 q 1 q Γ 3 2 q 1 q , if   q < 1 ,
S ( X ) = log ( 2 q ) a b + 1 2 q ( 1 a ) ( 2 q ) a ( q 1 ) log ( q 1 ) b Γ 2 q q 1 Γ 1 q 1 ( 1 a ) 2 q a ( q 1 ) Γ 2 q q 1 Γ 1 q 1 + ( 1 a ) ( 2 q ) a ( q 1 ) Γ ( 1 ) Γ 2 q q 1 Γ 1 q 1 , if   1 < q < 2 , log ( 2 q ) a b + 1 2 q + ( 1 a ) ( 2 q ) a ( q 1 ) log ( 1 q ) b Γ 2 q 1 q Γ 3 2 q 1 q + ( 1 a ) ( 2 q ) a ( q 1 ) Γ 3 2 q 1 q Γ 3 2 q 1 q 2 ( 1 a ) ( 2 q ) a ( q 1 ) Γ ( 1 ) Γ 2 q 1 q Γ 3 2 q 1 q , if   q < 1 ,
R ( X ) = 1 1 γ log ( 2 q ) γ a γ 1 b γ 1 a ( q 1 ) γ + 1 γ a B γ + 1 γ a , γ 1 q γ 1 γ a , if   1 < q < 2 , log ( 2 q ) γ a γ 1 b γ 1 a ( q 1 ) γ + 1 γ a B γ + 1 γ a , 1 + γ 1 q , if   q < 1
and
C E ( X ) = 2 q a ( q 1 ) Γ 1 a ( q 1 ) b 1 a Γ 2 q q 1 1 a Γ 2 q q 1 + 2 q a ( q 1 ) Γ 1 a ( q 1 ) b 1 a Γ 2 q q 1 1 a Γ 2 q q 1 Γ 2 q q 1 2 , if   1 < q < 2 , 2 q a ( q 1 ) Γ 1 a ( 1 q ) b 1 a Γ 2 q 1 q + 1 Γ 2 q 1 q + 1 + 1 a 2 q a ( q 1 ) Γ 1 a ( 1 q ) b 1 a Γ 2 q 1 q + 1 Γ 2 q 1 q + 1 + 1 a Γ 2 q 1 q + 1 + 1 a 2 , if   q < 1
for a > 0 , b > 0 , 0 < x < if 1 < q < 2 and 0 < x < ( 1 q ) b 1 a if q < 1 .
3. q exponential distribution [17]: for this distribution,
f X ( x ) = ( 2 q ) b 1 ( 1 q ) b x 1 1 q ,
F X ( x ) = 1 1 ( 1 q ) b x 2 q 1 q ,
G M ( X ) = 2 q q 1 Γ q q 1 Γ 2 q 1 q 1 + 2 q q Γ 1 2 q q log ( q 1 ) b , if   1 < q < 2 , Γ 3 2 q 1 q Γ 3 2 q 1 q + Γ 1 log ( 1 q ) b , if   q < 1 ,
S ( X ) = log ( 2 q ) b 2 q 1 q 1 1 + ( α + 1 ) ( 1 q ) ,
R ( X ) = log b + 1 1 γ log ( 2 q ) γ γ + 1 q
and
C E ( X ) = 2 q b 3 2 q 2
for b > 0 , 0 < x < if 1 < q < 2 and 0 < x < ( 1 q ) b if q < 1 .
4. Weighted exponential distribution: for this distribution,
f X ( x ) = a + 1 a b exp ( b x ) 1 exp ( a b x ) ,
F X ( x ) = 1 a exp ( b x ) exp ( a b x ) a 1 ,
G M ( X ) = a Γ ( 1 ) a log b + ( a + 1 ) log ( a + 1 ) ,
S ( X ) = log ( a + 1 ) b a + a + 1 1 a + 1 Γ ( 2 ) + Γ 1 a + 2 Γ 1 a + 2 ,
R ( X ) = log b 1 + γ 1 γ log a + γ 1 γ log ( a + 1 ) + 1 1 γ log B γ a , γ + 1
and
C E ( X ) = a + 1 a b 1 1 ( 1 + a ) 3 + ( a + 1 ) log a a b 1 1 ( 1 + a ) 2
for x > 0 , a > 0 and b > 0 .
5. Teissier distribution [18]: for this distribution,
f X ( x ) = exp ( a x ) 1 exp a x exp ( a x ) + 1 ,
F X ( x ) = 1 exp a x exp ( a x ) + 1 ,
G M ( X ) = e α a α 1 ( log y ) α ( y 1 ) exp ( y ) d y α = 0 ,
S ( X ) = log a a e α 1 y ( y 1 ) α + 1 exp ( y ) d y α = 0 e α Γ ( α + 2 , 1 ) Γ ( α + 1 , 1 ) α = 0 + 2 ,
R ( X ) = a γ Γ ( γ + 1 ) Ψ γ + 2 , 2 γ + 1 ; γ
and
C E ( X ) = e a Ei ( 1 ) exp ( 1 )
for x > 0 and a > 0 .
6. Maxwell distribution [19,20]: for this distribution,
f X ( x ) = 4 a 3 2 π x 2 exp a x 2 ,
F X ( x ) = 2 π γ 3 2 , a x 2 ,
G M ( X ) = 1 log a 2 ,
S ( X ) = Γ 3 2 log a + Γ 3 2 π ,
R ( X ) = log 2 a + log 2 1 γ γ log π 2 ( 1 γ ) γ + 1 2 log γ 1 γ + 1 1 γ log Γ γ + 1 2
and
C E ( X ) = 2 π 0 log 2 log π + log Γ 3 2 , a x 2 Γ 3 2 , a x 2 d x
for x > 0 and a > 0 .
7. Inverse Maxwell distribution: for this distribution,
f X ( x ) = 4 a 3 2 π x 4 exp a x 2 ,
F X ( x ) = 2 π Γ 3 2 , a x 2 ,
G M ( X ) = Γ 3 2 log a Γ 3 2 π ,
S ( X ) = log 4 π + 1 2 log a 4 π Γ 3 2 ,
R ( X ) = 1 1 γ log 2 a 5 2 π + γ 1 γ log 4 a π + 1 1 γ log Γ 2 γ 1 2
and
C E ( X ) = 2 a π log 2 π 2 π 0 γ 3 2 , a x 2 log γ 3 2 , a x 2 d x
for x > 0 and a > 0 .
8. Power Maxwell distribution [21]: for this distribution,
f X ( x ) = 4 a b 3 2 π x 3 a 1 exp b x 2 a ,
F X ( x ) = 2 π γ 3 2 , b x 2 a ,
G M ( X ) = Γ 3 2 log b + Γ 3 2 a π ,
S ( X ) = 3 2 log b 2 a + 1 3 a a π Γ 3 2 log 4 a π ,
R ( X ) = log 2 a b 1 2 a + log 2 1 γ γ log π 2 ( 1 γ ) 3 γ 2 + 1 γ 2 a log γ 1 γ + 1 1 γ log Γ 3 γ 2 + 1 γ 2 a
and
C E ( X ) = 2 π b 1 2 a log 2 π Γ 1 2 a + 3 2 2 π 0 γ 3 2 , b x 2 a log Γ 3 2 , b x 2 a d x
for x > 0 , a > 0 and b > 0 .
9. Inverse power Maxwell distribution [22]: for this distribution,
f X ( x ) = 4 a b 3 2 π x 3 a 1 exp b x 2 a ,
F X ( x ) = 2 π Γ 3 2 , b x 2 a ,
G M ( X ) = Γ 3 2 log b Γ 3 2 a π ,
S ( X ) = log 4 a π + 1 2 a log b 3 a + 1 a π Γ 3 2 ,
R ( X ) = 1 1 γ log 2 b 5 2 π + γ 1 γ log 4 a b π + 1 1 γ log Γ 3 γ 2 + γ 1 2 a
and
C E ( X ) = 2 π b 1 2 a log 2 π Γ 3 2 1 2 a 2 π 0 γ 3 2 , b x 2 a log γ 3 2 , b x 2 a d x
for x > 0 , a > 0 and b > 0 .
10. Omega distribution [23]: for this distribution,
f X ( x ) = a b x b 1 1 x 2 b 1 + x b 1 x b a 2 ,
F X ( x ) = 1 1 + x b 1 x b a 2 ,
G M ( X ) = a α B α b + 1 , a 2 2 F 1 α b + 1 , a 2 + 1 ; α b + 1 + a 2 ; 1 α = 0 ,
S ( X ) = log ( a b ) + a ( 1 b ) α B α b + 1 , a 2 2 F 1 α b + 1 , a 2 + 1 ; α b + 1 + a 2 ; 1 α = 0 + ( a + 2 ) α 2 F 1 1 , a 2 + 1 α ; α 2 + 1 ; 1 α = 0 a ( a 2 ) α 1 a + 2 α 2 F 1 1 , a 2 + 1 ; α + 1 + a 2 ; 1 α = 0 ,
R ( X ) = 1 1 γ log a γ 1 b γ B γ + 1 γ b , 1 γ + a γ 2 2 F 1 γ + 1 γ b , γ a 2 + γ ; a γ 2 + 1 γ b + 1 ; 1
and
C E ( X ) = a 2 b α B b , a 2 + 1 2 F 1 b , a 2 α ; b + a 2 + 1 ; 1 α = 0 a 2 b α B b , α + a 2 + 1 2 F 1 b , a 2 ; b + α + a 2 + 1 ; 1 α = 0
for x > 0 , a > 0 and b > 0 .
11. Colak et al.’s distribution [24]: for this distribution,
f X ( x ) = a ( b + 1 ) ( 1 x ) a 1 ( 1 + b x ) a + 1 ,
F X ( x ) = 1 x 1 + b x a ,
G M ( X ) = a ( b + 1 ) α B ( α + 1 , a ) 2 F 1 α + 1 , a + 1 ; α + 1 + a ; b α = 0 ,
S ( X ) = log a ( b + 1 ) + a ( 1 a ) ( b + 1 ) α 1 α + a 2 F 1 1 , a + 1 ; α + a + 1 ; b α = 0 + ( a + 1 ) ( b + 1 ) α 2 F 1 a , a + 1 α ; a + 1 ; b α = 0 ,
R ( X ) = 1 1 γ log a ( b + 1 ) γ γ a γ + 1 2 F 1 1 , γ a + γ ; γ a γ + 2 ; b
and
C E ( X ) = a a + 1 α 2 F 1 1 , a α ; a + 2 ; b α = 0 α a a + α + 1 2 F 1 1 , a ; a + α + 2 ; b α = 0
for x > 0 , a > 0 and b > 0 .
12. Bimodal beta distribution [25]: for this distribution,
f X ( x ) = ρ + ( 1 δ x ) 2 C B ( α , β ) x α 1 ( 1 x ) β 1 ,
F X ( x ) = 1 C ( 1 + ρ ) I x ( α , β ) 2 δ B x ( α + 1 , β ) B x ( α , β ) + δ 2 B x ( α + 2 , β ) B x ( α , β ) ,
G M ( X ) = Γ ( α + β ) C Γ ( α ) i = 0 2 c i Γ ( α + i ) Γ ( α + β + i ) Γ ( α + i ) Γ ( α + β + i ) Γ ( α + β + i ) 2 ,
S ( X ) = log C B ( α , β ) + ( 1 α ) Γ ( α + β ) C Γ ( α ) i = 0 2 c i Γ ( α + i ) Γ ( α + β + i ) Γ ( α + i ) Γ ( α + β + i ) Γ ( α + β + i ) 2 + ( 1 β ) Γ ( α + β ) C Γ ( α ) i = 0 2 c i Γ ( α + i ) Γ ( β ) Γ ( α + β + i ) Γ ( α + i ) Γ ( β ) Γ ( α + β + i ) Γ ( α + β + i ) 2 a 1 + ρ a + 1 C F 1 α , a 1 , a 1 ; α + β ; δ 1 + i ρ , δ 1 i ρ a = 0 ,
R ( X ) = B α γ γ + 1 , β γ γ + 1 ( 1 + ρ ) γ C γ B ( α , β ) γ F 1 α γ γ + 1 , γ , γ , α γ + β γ 2 γ + 2 ; δ 1 + i ρ , δ 1 i ρ
and
C E ( X ) = 0 1 1 1 C ( 1 + ρ ) I x ( α , β ) 2 δ B x ( α + 1 , β ) B x ( α , β ) + δ 2 B x ( α + 2 , β ) B x ( α , β ) · log 1 1 C ( 1 + ρ ) I x ( α , β ) 2 δ B x ( α + 1 , β ) B x ( α , β ) + δ 2 B x ( α + 2 , β ) B x ( α , β ) d x
for 0 < x < 1 , α > 0 , β > 0 , ρ 0 and < δ < , where i = 1 , c 0 = 1 + ρ , c 1 = 2 δ , c 2 = δ 2 and C = 1 + ρ 2 δ α α + β + δ 2 α ( α + 1 ) ( α + β ) ( α + β + 1 ) .
13. Confluent hypergeometric beta distribution [26]: for this distribution,
f X ( x ) = x a 1 ( 1 x ) b 1 exp ( c x ) B ( a , b ) 1 F 1 a ; a + b ; c ,
F X ( x ) = x a Φ 1 a , 1 b , a + 1 ; x , c x a B ( a , b ) 1 F 1 ( a ; a + b ; c ) ,
G M ( X ) = Γ ( a + b ) Γ ( a ) 1 F 1 a ; a + b ; c α Γ ( a + α ) Γ ( a + b + α ) 1 F 1 a + α ; a + b + α ; c α = 0 ,
S ( X ) = ( 1 a ) Γ ( a + b ) Γ ( a ) 1 F 1 a ; a + b ; c α Γ ( a + α ) Γ ( a + b + α ) 1 F 1 a + α ; a + b + α ; c α = 0 + ( 1 b ) Γ ( a + b ) Γ ( b ) 1 F 1 a ; a + b ; c α Γ ( b + α ) Γ ( a + b + α ) 1 F 1 a ; a + b + α ; c α = 0 + c a a + b 1 F 1 a + 1 ; a + b + 1 ; c 1 F 1 a ; a + b ; c + log B ( a , b ) + log 1 F 1 a ; a + b ; c ,
R ( X ) = 1 1 γ log B a γ γ + 1 , b γ γ + 1 1 F 1 a γ γ + 1 ; a γ + b γ 2 γ + 2 ; c γ B a , b γ 1 F 1 a ; a + b ; c γ
and
C E ( X ) = 0 1 1 x a Φ 1 a , 1 b , a + 1 ; x , c x a B ( a , b ) 1 F 1 ( a ; a + b ; c ) log 1 x a Φ 1 a , 1 b , a + 1 ; x , c x a B ( a , b ) 1 F 1 ( a ; a + b ; c ) d x
for 0 < x < 1 , a > 0 , b > 0 and c > 0 .
14. Libby and Novick’s beta distribution [27]: for this distribution,
f X ( x ) = c a x a 1 ( 1 x ) b 1 B ( a , b ) 1 ( 1 c ) x a + b ,
F X ( x ) = I 1 x 1 + c x x ( b , a ) ,
G M ( X ) = c a B ( a , b ) α B ( α + a , b ) 2 F 1 α + a , a + b ; α + a + b ; 1 c α = 0 ,
S ( X ) = a log c + ( 1 a ) c a B ( a , b ) α B ( α + a , b ) 2 F 1 α + a , a + b ; α + a + b ; 1 c α = 0 + ( 1 b ) c a B ( a , b ) α B ( a , b + α ) 2 F 1 a , a + b ; α + a + b ; 1 c α = 0 + c a ( a + b ) α 2 F 1 a , a + b α ; a + b ; 1 c α = 0 ,
R ( X ) = 1 1 γ log c a γ B a γ γ + 1 , b γ γ + 1 B ( a , b ) γ 2 F 1 a γ γ + 1 , a + b ; a γ + b γ 2 γ + 2 ; 1 c
and
C E ( X ) = 0 1 I c x 1 + c x x ( a , b ) log I c x 1 + c x x ( a , b ) d x
for 0 < x < 1 , a > 0 , b > 0 , and c > 0 .
15. Generalized beta distribution [28]: for this distribution,
f X ( x ) = a x a p 1 1 ( 1 c ) x b a q 1 b a p B ( p , q ) 1 + c x b a p + q ,
F X ( x ) = x a p p B ( p , q ) b a p F 1 p , 1 q , p + q , p + 1 ; ( 1 c ) x b a , c x b a ,
G M ( X ) = α b α B p + α a , q B ( p , q ) 2 F 1 p + α a , α a ; p + q + α a ; c α = 0 ,
S ( X ) = log a + a p log b + log B ( p , q ) + ( 1 a p ) α b α B p + α a , q B ( p , q ) 2 F 1 p + α a , α a ; p + q + α a ; c α = 0 ( 1 q ) α B p , q + α B ( p , q ) 2 F 1 p , α ; p + q + α ; c α = 0 + ( p + q ) α 2 F 1 p , α ; p + q ; c α = 0 ,
R ( X ) = 1 1 γ log { b 1 γ B p γ + 1 γ a , q γ γ + 1 B ( p , q ) γ · 2 F 1 p γ + 1 γ a , ( a + 1 ) ( 1 γ ) a ; p γ + q γ + ( 1 γ ) 1 a + 1 ; c }
and
C E ( X ) = 0 1 1 x a p p B ( p , q ) b a p F 1 p , 1 q , p + q , p + 1 ; ( 1 c ) x b a , c x b a · log 1 x a p p B ( p , q ) b a p F 1 p , 1 q , p + q , p + 1 ; ( 1 c ) x b a , c x b a d x
for 0 < x a < b a 1 c , b > 0 , 0 < c < 1 , p > 0 and q > 0 .
16. Log-logistic distribution: for this distribution,
$$f_X(x) = \frac{b a^b x^{b-1}}{\left(a^b + x^b\right)^2},$$
$$F_X(x) = \frac{x^b}{a^b + x^b},$$
$$GM(X) = \log a,$$
$$S(X) = \log a - \log b + 2,$$
$$R(X) = \log a - \log b + \frac{1}{1-\gamma} \log B\left(\gamma + \frac{\gamma-1}{b}, \gamma + \frac{1-\gamma}{b}\right)$$
and
$$CE(X) = \frac{a}{b} \, \Gamma\left(\frac{1}{b}\right) \left[ \Gamma'(1) \, \Gamma\left(1 - \frac{1}{b}\right) - \Gamma'\left(1 - \frac{1}{b}\right) \right]$$
for $x > 0$, $a > 0$ and $b > 0$.
17. Inverse Gaussian distribution [29]: for this distribution,
f X ( x ) = a 2 π x 3 exp a ( x b ) 2 2 b 2 x ,
F X ( x ) = Φ a x x b 1 Φ a x x b + 1 exp 2 a b ,
G M ( X ) = 2 a log b π b exp a b K 1 2 exp a b + 2 a π b exp a b α K α 1 2 exp a b α = 0 ,
S ( X ) = 1 2 1 2 log a 2 π + 3 a log b 2 π b exp a b K 1 2 exp a b + 3 a 2 π b exp a b α K α 1 2 exp a b α = 0 ,
R ( X ) = γ 2 ( 1 γ ) log a 2 π b 3 + 1 1 γ log 2 b γ + a ( 1 γ ) b + 1 1 γ K 1 3 γ 2 a b
and
C E ( X ) = 0 Φ a x 1 x b Φ a x x b + 1 exp 2 a b · log Φ a x 1 x b Φ a x x b + 1 exp 2 a b d x
for x > 0 , a > 0 and b > 0 .
18. Gompertz distribution [30]: for this distribution,
$$f_X(x) = a b \exp\left[ a + b x - a \exp(b x) \right],$$
$$F_X(x) = 1 - \exp\left[ a - a \exp(b x) \right],$$
$$GM(X) = a \exp(a) \int_1^\infty \log(\log y) \exp(-a y) \, dy - \log b,$$
$$S(X) = 1 - \log(a b) + \exp(a) \operatorname{Ei}(-a),$$
$$R(X) = -\log b + \frac{a \gamma}{1-\gamma} - \frac{\gamma \log \gamma}{1-\gamma} + \frac{\log \Gamma(\gamma, a \gamma)}{1-\gamma}$$
and
$$CE(X) = \frac{1 + a \exp(a) \operatorname{Ei}(-a)}{b}$$
for $x > 0$, $a > 0$ and $b > 0$.
19. Exponential distribution: for this distribution,
$$f_X(x) = a \exp(-a x),$$
$$F_X(x) = 1 - \exp(-a x),$$
$$GM(X) = \Gamma'(1) - \log a,$$
$$S(X) = 1 - \log a,$$
$$R(X) = -\log a - \frac{\log \gamma}{1-\gamma}$$
and
$$CE(X) = \frac{1}{a}$$
for $x > 0$ and $a > 0$.
20. Inverse exponential distribution: for this distribution,
$$f_X(x) = \frac{b}{x^2} \exp\left(-\frac{b}{x}\right),$$
$$F_X(x) = \exp\left(-\frac{b}{x}\right),$$
$$GM(X) = \log b - \Gamma'(1),$$
$$S(X) = \log b - 2\Gamma'(1) + 1,$$
$$R(X) = \log b + \frac{(1 - 2\gamma) \log \gamma}{1-\gamma} + \frac{\log \Gamma(2\gamma - 1)}{1-\gamma}$$
and
$$CE(X) = -\int_0^\infty \left[ 1 - \exp\left(-\frac{b}{x}\right) \right] \log \left[ 1 - \exp\left(-\frac{b}{x}\right) \right] dx$$
for $x > 0$ and $b > 0$.
21. Exponentiated exponential distribution [31]: for this distribution,
$$f_X(x) = a b \exp(-b x) \left[ 1 - \exp(-b x) \right]^{a-1},$$
$$F_X(x) = \left[ 1 - \exp(-b x) \right]^a,$$
$$GM(X) = a \int_0^1 (1-y)^{a-1} \log(-\log y) \, dy - \log b,$$
$$S(X) = -\log(a b) - \Gamma'(1) + \frac{\Gamma'(a+1)}{\Gamma(a+1)} + \frac{a-1}{a},$$
$$R(X) = -\log b + \frac{\gamma \log a}{1-\gamma} + \frac{1}{1-\gamma} \log B\left(\gamma, a\gamma - \gamma + 1\right)$$
and
$$CE(X) = \frac{1}{b} \sum_{k=1}^\infty \frac{1}{k} B(0, a k + 1) - \frac{1}{b} \sum_{k=1}^\infty \frac{1}{k} B(0, a k + a + 1)$$
for $x > 0$, $a > 0$ and $b > 0$.
22. Gamma distribution: for this distribution,
$$f_X(x) = \frac{b^a x^{a-1} \exp(-b x)}{\Gamma(a)},$$
$$F_X(x) = \frac{\gamma(a, b x)}{\Gamma(a)},$$
$$GM(X) = \frac{\Gamma'(a)}{\Gamma(a)} - \log b,$$
$$S(X) = a - \log b + \log \Gamma(a) + (1-a) \frac{\Gamma'(a)}{\Gamma(a)},$$
$$R(X) = -\log b - \frac{(a\gamma - \gamma + 1) \log \gamma}{1-\gamma} + \frac{\log \Gamma(a\gamma - \gamma + 1)}{1-\gamma} - \frac{\gamma \log \Gamma(a)}{1-\gamma}$$
and
$$CE(X) = -\int_0^\infty \frac{\Gamma(a, b x)}{\Gamma(a)} \log \frac{\Gamma(a, b x)}{\Gamma(a)} \, dx$$
for $x > 0$, $a > 0$ and $b > 0$.
23. Chisquare distribution: for this distribution,
$$f_X(x) = \frac{x^{k/2 - 1} \exp(-x/2)}{2^{k/2} \, \Gamma(k/2)},$$
$$F_X(x) = \frac{\gamma(k/2, x/2)}{\Gamma(k/2)},$$
$$GM(X) = \log 2 + \frac{\Gamma'(k/2)}{\Gamma(k/2)},$$
$$S(X) = \frac{k}{2} + \log 2 + \log \Gamma\left(\frac{k}{2}\right) + \left(1 - \frac{k}{2}\right) \frac{\Gamma'(k/2)}{\Gamma(k/2)},$$
$$R(X) = \log 2 - \frac{\left(\frac{k\gamma}{2} - \gamma + 1\right) \log \gamma}{1-\gamma} + \frac{\log \Gamma\left(\frac{k\gamma}{2} - \gamma + 1\right)}{1-\gamma} - \frac{\gamma \log \Gamma(k/2)}{1-\gamma}$$
and
$$CE(X) = -\int_0^\infty \frac{\Gamma\left(\frac{k}{2}, \frac{x}{2}\right)}{\Gamma(k/2)} \log \frac{\Gamma\left(\frac{k}{2}, \frac{x}{2}\right)}{\Gamma(k/2)} \, dx$$
for $x > 0$ and $k > 0$.
24. Chi distribution: for this distribution,
$$f_X(x) = \frac{x^{k-1} \exp\left(-x^2/2\right)}{2^{k/2 - 1} \, \Gamma(k/2)},$$
$$F_X(x) = \frac{\gamma\left(\frac{k}{2}, \frac{x^2}{2}\right)}{\Gamma(k/2)},$$
$$GM(X) = \frac{\log 2}{2} + \frac{\Gamma'(k/2)}{2 \, \Gamma(k/2)},$$
$$S(X) = \frac{k}{2} - \frac{\log 2}{2} + \frac{1-k}{2} \cdot \frac{\Gamma'(k/2)}{\Gamma(k/2)} + \log \Gamma\left(\frac{k}{2}\right),$$
$$R(X) = \frac{1}{1-\gamma} \log \left[ 2^{\frac{\gamma-1}{2}} \, \gamma^{-\frac{\gamma k - \gamma + 1}{2}} \, \Gamma\left(\frac{k}{2}\right)^{-\gamma} \Gamma\left(\frac{\gamma k - \gamma + 1}{2}\right) \right]$$
and
$$CE(X) = -\int_0^\infty \frac{\Gamma\left(\frac{k}{2}, \frac{x^2}{2}\right)}{\Gamma(k/2)} \log \frac{\Gamma\left(\frac{k}{2}, \frac{x^2}{2}\right)}{\Gamma(k/2)} \, dx$$
for $x > 0$ and $k > 0$.
25. Inverse gamma distribution: for this distribution,
$$f_X(x) = \frac{b^a x^{-a-1} \exp\left(-\frac{b}{x}\right)}{\Gamma(a)},$$
$$F_X(x) = \frac{\Gamma\left(a, \frac{b}{x}\right)}{\Gamma(a)},$$
$$GM(X) = \log b - \frac{\Gamma'(a)}{\Gamma(a)},$$
$$S(X) = a + \log b + \log \Gamma(a) - (a+1) \frac{\Gamma'(a)}{\Gamma(a)},$$
$$R(X) = \log b + \frac{(1 - a\gamma - \gamma) \log \gamma}{1-\gamma} - \frac{\gamma \log \Gamma(a)}{1-\gamma} + \frac{\log \Gamma(a\gamma + \gamma - 1)}{1-\gamma}$$
and
$$CE(X) = -\int_0^\infty \frac{\gamma\left(a, \frac{b}{x}\right)}{\Gamma(a)} \log \frac{\gamma\left(a, \frac{b}{x}\right)}{\Gamma(a)} \, dx$$
for $x > 0$, $a > 0$ and $b > 0$.
26. Inverse chisquare distribution: for this distribution,
$$f_X(x) = \frac{x^{-k/2 - 1} \exp\left(-\frac{1}{2x}\right)}{2^{k/2} \, \Gamma(k/2)},$$
$$F_X(x) = \frac{\Gamma\left(\frac{k}{2}, \frac{1}{2x}\right)}{\Gamma(k/2)},$$
$$GM(X) = -\log 2 - \frac{\Gamma'(k/2)}{\Gamma(k/2)},$$
$$S(X) = \frac{k}{2} - \log 2 - \left(\frac{k}{2} + 1\right) \frac{\Gamma'(k/2)}{\Gamma(k/2)} + \log \Gamma\left(\frac{k}{2}\right),$$
$$R(X) = -\log 2 + \frac{\left(1 - \frac{k\gamma}{2} - \gamma\right) \log \gamma}{1-\gamma} - \frac{\gamma \log \Gamma(k/2)}{1-\gamma} + \frac{\log \Gamma\left(\frac{k\gamma}{2} + \gamma - 1\right)}{1-\gamma}$$
and
$$CE(X) = -\int_0^\infty \frac{\gamma\left(\frac{k}{2}, \frac{1}{2x}\right)}{\Gamma(k/2)} \log \frac{\gamma\left(\frac{k}{2}, \frac{1}{2x}\right)}{\Gamma(k/2)} \, dx$$
for $x > 0$ and $k > 0$.
27. Inverse chi distribution: for this distribution,
$$f_X(x) = \frac{x^{-k-1} \exp\left(-\frac{1}{2x^2}\right)}{2^{k/2 - 1} \, \Gamma(k/2)},$$
$$F_X(x) = \frac{\Gamma\left(\frac{k}{2}, \frac{1}{2x^2}\right)}{\Gamma(k/2)},$$
$$GM(X) = -\frac{\log 2}{2} - \frac{\Gamma'(k/2)}{2 \, \Gamma(k/2)},$$
$$S(X) = \frac{k}{2} - \frac{3 \log 2}{2} - \frac{1+k}{2} \cdot \frac{\Gamma'(k/2)}{\Gamma(k/2)} + \log \Gamma\left(\frac{k}{2}\right),$$
$$R(X) = \frac{1}{1-\gamma} \log \left[ 2^{\frac{3(\gamma-1)}{2}} \, \gamma^{\frac{1 - \gamma - \gamma k}{2}} \, \Gamma\left(\frac{k}{2}\right)^{-\gamma} \Gamma\left(\frac{\gamma + \gamma k - 1}{2}\right) \right]$$
and
$$CE(X) = -\int_0^\infty \frac{\gamma\left(\frac{k}{2}, \frac{1}{2x^2}\right)}{\Gamma(k/2)} \log \frac{\gamma\left(\frac{k}{2}, \frac{1}{2x^2}\right)}{\Gamma(k/2)} \, dx$$
for $x > 0$ and $k > 0$.
28. Rayleigh distribution: for this distribution,
$$f_X(x) = 2 b^2 x \exp\left[ -(b x)^2 \right],$$
$$F_X(x) = 1 - \exp\left[ -(b x)^2 \right],$$
$$GM(X) = \frac{\Gamma'(1)}{2} - \log b,$$
$$S(X) = 1 - \log(2b) - \frac{\Gamma'(1)}{2},$$
$$R(X) = -\log(2b) - \frac{\log \gamma}{2} - \frac{\gamma \log \gamma}{1-\gamma} + \frac{1}{1-\gamma} \log \Gamma\left(\frac{1+\gamma}{2}\right)$$
and
$$CE(X) = \frac{\sqrt{\pi}}{4b}$$
for $x > 0$ and $b > 0$.
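As a numerical sanity check (ours): for the Rayleigh distribution the survival function is $\exp[-(bx)^2]$, so the $CE$ integrand reduces to $(bx)^2 \exp[-(bx)^2]$, and quadrature should reproduce $\sqrt{\pi}/(4b)$. The value of $b$ and the grid are arbitrary choices.

```python
# Midpoint quadrature of -(1-F) log(1-F) = (b x)^2 exp(-(b x)^2)
# for the Rayleigh distribution; expected value sqrt(pi)/(4 b).
import math
b, n, hi = 1.5, 200_000, 10.0
h = hi / n
ce = h * sum((b * x) ** 2 * math.exp(-((b * x) ** 2))
             for i in range(n) for x in [(i + 0.5) * h])
assert abs(ce - math.sqrt(math.pi) / (4.0 * b)) < 1e-6
```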
29. Weibull distribution [32]: for this distribution,
$$f_X(x) = a b^a x^{a-1} \exp\left[ -(b x)^a \right],$$
$$F_X(x) = 1 - \exp\left[ -(b x)^a \right],$$
$$GM(X) = \frac{\Gamma'(1)}{a} - \log b,$$
$$S(X) = 1 - \log(a b) + \frac{(1-a) \Gamma'(1)}{a},$$
$$R(X) = -\log(a b) - \frac{\log \gamma}{a} - \frac{\gamma \log \gamma}{1-\gamma} + \frac{1}{1-\gamma} \log \Gamma\left(\frac{1-\gamma}{a} + \gamma\right)$$
and
$$CE(X) = \frac{1}{a b} \Gamma\left(1 + \frac{1}{a}\right)$$
for $x > 0$, $a > 0$ and $b > 0$.
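The Weibull expressions can be verified the same way (a check of ours, with arbitrary parameter values): $S(X)$ and $CE(X)$ computed by quadrature should match $1 - \log(ab) + \frac{1-a}{a}\Gamma'(1)$ and $\frac{1}{ab}\Gamma(1 + \frac{1}{a})$, using $\Gamma'(1) = -0.5772\ldots$ and `math.gamma` for $\Gamma(1 + 1/a)$.

```python
# Quadrature check of the Weibull Shannon entropy and cumulative
# residual entropy; the survival function is exp(-(b x)^a), so the
# CE integrand is (b x)^a * exp(-(b x)^a).
import math
a, b = 2.5, 0.8
n, hi = 200_000, 8.0
h = hi / n
xs = [(i + 0.5) * h for i in range(n)]
f = lambda x: a * b ** a * x ** (a - 1.0) * math.exp(-((b * x) ** a))
S_num = h * sum(-f(x) * math.log(f(x)) for x in xs)
CE_num = h * sum((b * x) ** a * math.exp(-((b * x) ** a)) for x in xs)
euler = 0.57721566490153286
assert abs(S_num - (1.0 - math.log(a * b) + (1.0 - a) / a * (-euler))) < 1e-4
assert abs(CE_num - math.gamma(1.0 + 1.0 / a) / (a * b)) < 1e-4
```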
30. Inverse Rayleigh distribution: for this distribution,
$$f_X(x) = \frac{2a}{x^3} \exp\left(-\frac{a}{x^2}\right),$$
$$F_X(x) = \exp\left(-\frac{a}{x^2}\right),$$
$$GM(X) = \frac{\log a - \Gamma'(1)}{2},$$
$$S(X) = 1 + \frac{\log a}{2} - \log 2 - \frac{3 \Gamma'(1)}{2},$$
$$R(X) = \frac{\log a}{2} - \log 2 - \frac{(3\gamma - 1) \log \gamma}{2(1-\gamma)} + \frac{1}{1-\gamma} \log \Gamma\left(\frac{3\gamma - 1}{2}\right)$$
and
$$CE(X) = \sqrt{a} \, \Gamma\left(\frac{1}{2}\right) \left[ \sum_{k=1}^\infty \frac{\sqrt{k+1}}{k} - \sum_{k=1}^\infty k^{-1/2} \right]$$
for $x > 0$ and $a > 0$.
31. Inverse Weibull distribution: for this distribution,
$$f_X(x) = a b x^{-b-1} \exp\left(-a x^{-b}\right),$$
$$F_X(x) = \exp\left(-a x^{-b}\right),$$
$$GM(X) = \frac{\log a - \Gamma'(1)}{b},$$
$$S(X) = 1 + \frac{\log a}{b} - \log b - \frac{(b+1) \Gamma'(1)}{b},$$
$$R(X) = \frac{\log a}{b} - \log b - \left(\gamma + \frac{\gamma - 1}{b}\right) \frac{\log \gamma}{1-\gamma} + \frac{1}{1-\gamma} \log \Gamma\left(\gamma + \frac{\gamma - 1}{b}\right)$$
and
$$CE(X) = a^{1/b} \, \Gamma\left(1 - \frac{1}{b}\right) \left[ \sum_{k=1}^\infty \frac{(k+1)^{1/b}}{k} - \sum_{k=1}^\infty k^{\frac{1}{b} - 1} \right]$$
for $x > 0$, $a > 0$ and $b > 0$.
32. Gumbel distribution [33]: for this distribution,
$$f_X(x) = \frac{1}{a} \exp\left(-\frac{x-b}{a}\right) \exp\left[ -\exp\left(-\frac{x-b}{a}\right) \right],$$
$$F_X(x) = \exp\left[ -\exp\left(-\frac{x-b}{a}\right) \right],$$
$$GM(X) = \frac{1}{a} \int_{-\infty}^{\infty} \log x \, \exp\left(-\frac{x-b}{a}\right) \exp\left[ -\exp\left(-\frac{x-b}{a}\right) \right] dx,$$
$$S(X) = 1 + \log a - \Gamma'(1),$$
$$R(X) = \frac{1}{1-\gamma} \log \left[ \frac{\Gamma(\gamma)}{a^{\gamma-1} \gamma^\gamma} \right]$$
and
$$CE(X) = -\int_{-\infty}^{\infty} \left\{ 1 - \exp\left[ -\exp\left(-\frac{x-b}{a}\right) \right] \right\} \log \left\{ 1 - \exp\left[ -\exp\left(-\frac{x-b}{a}\right) \right] \right\} dx$$
for $-\infty < x < \infty$, $a > 0$ and $-\infty < b < \infty$.
33. Generalized extreme value distribution [34]: for this distribution,
$$f_X(x) = \frac{1}{a} \left[ 1 + \xi \left(\frac{x-b}{a}\right) \right]^{-\frac{\xi+1}{\xi}} \exp\left\{ -\left[ 1 + \xi \left(\frac{x-b}{a}\right) \right]^{-\frac{1}{\xi}} \right\},$$
$$F_X(x) = \exp\left\{ -\left[ 1 + \xi \left(\frac{x-b}{a}\right) \right]^{-\frac{1}{\xi}} \right\},$$
$$GM(X) = \frac{1}{a} \int \log x \left[ 1 + \xi \left(\frac{x-b}{a}\right) \right]^{-\frac{\xi+1}{\xi}} \exp\left\{ -\left[ 1 + \xi \left(\frac{x-b}{a}\right) \right]^{-\frac{1}{\xi}} \right\} dx,$$
$$S(X) = 1 + \log a - (\xi + 1) \Gamma'(1),$$
$$R(X) = \frac{1}{1-\gamma} \log \left[ \frac{\Gamma(\gamma\xi - \xi + \gamma)}{a^{\gamma-1} \, \gamma^{\gamma\xi - \xi + \gamma}} \right]$$
and
$$CE(X) = a \, \Gamma(-\xi) \sum_{k=1}^\infty k^{\xi - 1} - a \, \Gamma(-\xi) \sum_{k=1}^\infty \frac{(k+1)^\xi}{k}$$
for $b - \frac{a}{\xi} < x < \infty$ if $\xi > 0$, $-\infty < x < b - \frac{a}{\xi}$ if $\xi < 0$, $-\infty < b < \infty$ and $a > 0$.
34. Generalized gamma distribution [35]: for this distribution,
$$f_X(x) = \frac{p \, a^d x^{d-1} \exp\left[ -(a x)^p \right]}{\Gamma(d/p)},$$
$$F_X(x) = \frac{\gamma\left(\frac{d}{p}, (a x)^p\right)}{\Gamma(d/p)},$$
$$GM(X) = \frac{\Gamma'(d/p)}{p \, \Gamma(d/p)} - \log a,$$
$$S(X) = -\log p - \log a + \log \Gamma\left(\frac{d}{p}\right) + \frac{d}{p} + \frac{1-d}{p} \cdot \frac{\Gamma'(d/p)}{\Gamma(d/p)},$$
$$R(X) = -\log(a p) - \frac{(d\gamma - \gamma + 1) \log \gamma}{p (1-\gamma)} - \frac{\gamma \log \Gamma(d/p)}{1-\gamma} + \frac{1}{1-\gamma} \log \Gamma\left(\frac{d\gamma - \gamma + 1}{p}\right)$$
and
$$CE(X) = -\int_0^\infty \frac{\Gamma\left(\frac{d}{p}, (a x)^p\right)}{\Gamma(d/p)} \log \frac{\Gamma\left(\frac{d}{p}, (a x)^p\right)}{\Gamma(d/p)} \, dx$$
for $x > 0$, $a > 0$, $d > 0$ and $p > 0$.
35. Pareto distribution of type I [36]: for this distribution,
$$f_X(x) = \frac{a K^a}{x^{a+1}},$$
$$F_X(x) = 1 - \left(\frac{K}{x}\right)^a,$$
$$GM(X) = \log K + \frac{1}{a},$$
$$S(X) = \frac{1}{a} + 1 - \log a + \log K,$$
$$R(X) = \log K + \frac{\gamma \log a}{1-\gamma} - \frac{\log(a\gamma + \gamma - 1)}{1-\gamma}$$
and
$$CE(X) = \frac{K a}{(a-1)^2}$$
for $x \geq K$, $K > 0$ and $a > 0$.
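A quick quadrature check of the Pareto CE expression (ours; $K$, $a$ and the truncation point are arbitrary, and $a > 1$ is needed for the integral to converge): with survival function $(K/x)^a$, the $CE$ integrand is $a (K/x)^a \log(x/K)$, which should integrate to $Ka/(a-1)^2$.

```python
# Quadrature check of CE = K*a/(a-1)^2 for the Pareto type I
# distribution with survival (K/x)^a on x >= K.
import math
K, a = 2.0, 3.0
n, hi = 400_000, 2000.0
h = (hi - K) / n
ce = h * sum((K / x) ** a * a * math.log(x / K)
             for i in range(n) for x in [K + (i + 0.5) * h])
assert abs(ce - K * a / (a - 1.0) ** 2) < 1e-3
```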
36. Pareto distribution of type II [37]: for this distribution,
$$f_X(x) = \frac{a b^a}{(x+b)^{a+1}},$$
$$F_X(x) = 1 - \frac{b^a}{(x+b)^a},$$
$$GM(X) = \log b + \Gamma'(1) - \frac{\Gamma'(a)}{\Gamma(a)},$$
$$S(X) = \log b - \log a + 1 + \frac{1}{a},$$
$$R(X) = \log b + \frac{\gamma \log a}{1-\gamma} - \frac{\log(a\gamma + \gamma - 1)}{1-\gamma}$$
and
$$CE(X) = \frac{a b}{(a-1)^2}$$
for $x > 0$, $a > 0$ and $b > 0$.
37. Generalized Pareto distribution [38]: for this distribution,
$$f_X(x) = \left(1 + \xi x\right)^{-\frac{\xi+1}{\xi}},$$
$$F_X(x) = 1 - \left(1 + \xi x\right)^{-\frac{1}{\xi}},$$
$$GM(X) = \begin{cases} \xi - \log \xi + \Gamma'(1) - \dfrac{\Gamma'\left(1 + \frac{1}{\xi}\right)}{\Gamma\left(1 + \frac{1}{\xi}\right)}, & \text{if } \xi > 0, \\[2ex] -\log(-\xi) + \Gamma'(1) - \dfrac{\Gamma'\left(1 - \frac{1}{\xi}\right)}{\Gamma\left(1 - \frac{1}{\xi}\right)}, & \text{if } \xi < 0, \end{cases}$$
$$S(X) = \xi + 1,$$
$$R(X) = \frac{\log\left[\gamma(\xi+1) - \xi\right]}{\gamma - 1}$$
and
$$CE(X) = \frac{1}{(1-\xi)^2}$$
for $0 < x < \infty$ if $\xi > 0$ and $0 < x < -\frac{1}{\xi}$ if $\xi < 0$.
38. Uniform distribution: for this distribution,
$$f_X(x) = \frac{1}{b-a},$$
$$F_X(x) = \frac{x-a}{b-a},$$
$$GM(X) = \frac{b \log b - a \log a}{b-a} - 1,$$
$$S(X) = \log(b-a),$$
$$R(X) = \log(b-a)$$
and
$$CE(X) = \frac{b-a}{4}$$
for $a < x < b$ and $\infty > b > a > -\infty$.
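As a direct check of the uniform-distribution expressions (ours; the endpoints are arbitrary, with $a > 0$ so that $\log x$ is defined on the support):

```python
# Check of GM = (b log b - a log a)/(b - a) - 1 and CE = (b - a)/4
# for the uniform distribution on (a, b), via midpoint quadrature.
import math
a, b = 1.0, 4.0
n = 200_000
h = (b - a) / n
xs = [a + (i + 0.5) * h for i in range(n)]
gm = h * sum(math.log(x) / (b - a) for x in xs)
ce = h * sum(-(b - x) / (b - a) * math.log((b - x) / (b - a)) for x in xs)
assert abs(gm - ((b * math.log(b) - a * math.log(a)) / (b - a) - 1.0)) < 1e-5
assert abs(ce - (b - a) / 4.0) < 1e-5
```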
39. Power function distribution of type I: for this distribution,
$$f_X(x) = a x^{a-1},$$
$$F_X(x) = x^a,$$
$$GM(X) = -\frac{1}{a},$$
$$S(X) = 1 - \frac{1}{a} - \log a,$$
$$R(X) = \frac{\gamma \log a}{1-\gamma} - \frac{\log(a\gamma - \gamma + 1)}{1-\gamma}$$
and
$$CE(X) = \frac{1}{a} B\left(\frac{1}{a}, 2\right) \left[ \psi\left(\frac{1}{a} + 2\right) - \psi(2) \right]$$
for $0 < x < 1$ and $a > 0$.
40. Power function distribution of type II: for this distribution,
$$f_X(x) = a (1-x)^{a-1},$$
$$F_X(x) = 1 - (1-x)^a,$$
$$GM(X) = \Gamma'(1) - \frac{\Gamma'(a+1)}{\Gamma(a+1)},$$
$$S(X) = 1 - \frac{1}{a} - \log a,$$
$$R(X) = \frac{\gamma \log a}{1-\gamma} - \frac{\log(a\gamma - \gamma + 1)}{1-\gamma}$$
and
$$CE(X) = \frac{a}{(a+1)^2}$$
for $0 < x < 1$ and $a > 0$.
41. Arcsine distribution: for this distribution,
$$f_X(x) = \frac{1}{\pi \sqrt{x(1-x)}},$$
$$F_X(x) = \frac{2}{\pi} \arcsin \sqrt{x},$$
$$GM(X) = \frac{\Gamma'(1/2)}{\sqrt{\pi}} - \Gamma'(1),$$
$$S(X) = \log \pi + \frac{\Gamma'(1/2)}{\sqrt{\pi}} - \Gamma'(1),$$
$$R(X) = \frac{1}{1-\gamma} \log \left[ \frac{B\left(1 - \frac{\gamma}{2}, 1 - \frac{\gamma}{2}\right)}{\pi^\gamma} \right]$$
and
$$CE(X) = -\int_0^1 \left( 1 - \frac{2}{\pi} \arcsin \sqrt{x} \right) \log \left( 1 - \frac{2}{\pi} \arcsin \sqrt{x} \right) dx$$
for $0 < x < 1$.
42. Beta distribution: for this distribution,
$$f_X(x) = \frac{x^{a-1} (1-x)^{b-1}}{B(a, b)},$$
$$F_X(x) = I_x(a, b),$$
$$GM(X) = \frac{\Gamma'(a)}{\Gamma(a)} - \frac{\Gamma'(a+b)}{\Gamma(a+b)},$$
$$S(X) = \log B(a, b) + (1-a) \frac{\Gamma'(a)}{\Gamma(a)} + (1-b) \frac{\Gamma'(b)}{\Gamma(b)} - (2-a-b) \frac{\Gamma'(a+b)}{\Gamma(a+b)},$$
$$R(X) = \frac{1}{1-\gamma} \log \left[ \frac{B(a\gamma - \gamma + 1, b\gamma - \gamma + 1)}{B(a, b)^\gamma} \right]$$
and
$$CE(X) = -\int_0^1 I_{1-x}(b, a) \log I_{1-x}(b, a) \, dx$$
for $0 < x < 1$, $a > 0$ and $b > 0$.
43. Inverted beta distribution: for this distribution,
$$f_X(x) = \frac{x^{a-1} (1+x)^{-a-b}}{B(a, b)},$$
$$F_X(x) = I_{\frac{x}{1+x}}(a, b),$$
$$GM(X) = \frac{\Gamma'(a)}{\Gamma(a)} - \frac{\Gamma'(b)}{\Gamma(b)},$$
$$S(X) = \log B(a, b) + (1-a) \frac{\Gamma'(a)}{\Gamma(a)} + (a+b) \frac{\Gamma'(a+b)}{\Gamma(a+b)} - (1+b) \frac{\Gamma'(b)}{\Gamma(b)},$$
$$R(X) = \frac{1}{1-\gamma} \log \left[ \frac{B(a\gamma - \gamma + 1, b\gamma + \gamma - 1)}{B(a, b)^\gamma} \right]$$
and
$$CE(X) = -\int_0^\infty I_{\frac{1}{1+x}}(b, a) \log I_{\frac{1}{1+x}}(b, a) \, dx$$
for $x > 0$, $a > 0$ and $b > 0$.
44. Kumaraswamy distribution [39]: for this distribution,
$$f_X(x) = a b x^{a-1} \left(1 - x^a\right)^{b-1},$$
$$F_X(x) = 1 - \left(1 - x^a\right)^b,$$
$$GM(X) = \frac{1}{a} \left[ \Gamma'(1) - \frac{\Gamma'(b+1)}{\Gamma(b+1)} \right],$$
$$S(X) = 1 - \frac{1}{b} - \log(a b) + \frac{1-a}{a} \left[ \Gamma'(1) - \frac{\Gamma'(b+1)}{\Gamma(b+1)} \right],$$
$$R(X) = -\log a + \frac{\gamma \log b}{1-\gamma} + \frac{1}{1-\gamma} \log B\left(\gamma + \frac{1-\gamma}{a}, b\gamma - \gamma + 1\right)$$
and
$$CE(X) = \frac{b}{a} B\left(\frac{1}{a}, b+1\right) \left[ \psi\left(\frac{1}{a} + b + 1\right) - \psi(b+1) \right]$$
for $0 < x < 1$, $a > 0$ and $b > 0$.
45. Inverted Kumaraswamy distribution [40]: for this distribution,
$$f_X(x) = a b (1+x)^{-a-1} \left[ 1 - (1+x)^{-a} \right]^{b-1},$$
$$F_X(x) = \left[ 1 - (1+x)^{-a} \right]^b,$$
$$GM(X) = b \int_0^1 (1-y)^{b-1} \log\left(y^{-1/a} - 1\right) dy,$$
$$S(X) = -\log(a b) + \left(1 + \frac{1}{a}\right) \left[ \frac{\Gamma'(b+1)}{\Gamma(b+1)} - \Gamma'(1) \right] + 1 - \frac{1}{b},$$
$$R(X) = -\log a + \frac{\gamma \log b}{1-\gamma} + \frac{1}{1-\gamma} \log B\left(\gamma + \frac{\gamma-1}{a}, b\gamma - \gamma + 1\right)$$
and
$$CE(X) = \frac{1}{a} \sum_{k=1}^\infty \frac{1}{k} B\left(-\frac{1}{a}, k b + 1\right) - \frac{1}{a} \sum_{k=1}^\infty \frac{1}{k} B\left(-\frac{1}{a}, k b + b + 1\right)$$
for $x > 0$, $a > 0$ and $b > 0$.
46. Normal distribution: for this distribution,
$$f_X(x) = \frac{1}{\sqrt{2\pi}\,a} \exp\left[ -\frac{(x-b)^2}{2a^2} \right],$$
$$F_X(x) = \Phi\left(\frac{x-b}{a}\right),$$
$$GM(X) = \frac{1}{\sqrt{2\pi}\,a} \int_{-\infty}^{\infty} \log x \, \exp\left[ -\frac{(x-b)^2}{2a^2} \right] dx,$$
$$S(X) = \log\left(\sqrt{2\pi}\,a\right) + \frac{1}{2},$$
$$R(X) = \log\left(\sqrt{2\pi}\,a\right) - \frac{\log \gamma}{2(1-\gamma)}$$
and
$$CE(X) = -\frac{\sqrt{2}\,a}{2} \int_{-\infty}^{\infty} \operatorname{erfc}(t) \log \frac{\operatorname{erfc}(t)}{2} \, dt$$
for $-\infty < x < \infty$, $a > 0$ and $-\infty < b < \infty$.
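The normal-distribution entropies are classical and easy to confirm numerically (a check of ours; the standard deviation $a$, mean $b$ and Rényi order are arbitrary choices):

```python
# Quadrature check of S = log(sqrt(2 pi) a) + 1/2 and
# R = log(sqrt(2 pi) a) - log(gamma)/(2 (1 - gamma)) for the
# normal distribution, truncating the integrals at +/- 12 sigma.
import math
a, b, gam = 1.7, 0.3, 3.0
n = 200_000
lo, hi = b - 12.0 * a, b + 12.0 * a
h = (hi - lo) / n
xs = [lo + (i + 0.5) * h for i in range(n)]
f = lambda x: math.exp(-((x - b) ** 2) / (2.0 * a * a)) / (math.sqrt(2.0 * math.pi) * a)
S_num = h * sum(-f(x) * math.log(f(x)) for x in xs)
R_num = math.log(h * sum(f(x) ** gam for x in xs)) / (1.0 - gam)
assert abs(S_num - (math.log(math.sqrt(2.0 * math.pi) * a) + 0.5)) < 1e-6
assert abs(R_num - (math.log(math.sqrt(2.0 * math.pi) * a)
                    - math.log(gam) / (2.0 * (1.0 - gam)))) < 1e-6
```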
47. Lognormal distribution: for this distribution,
$$f_X(x) = \frac{1}{\sqrt{2\pi}\,a\,x} \exp\left[ -\frac{(\log x - b)^2}{2a^2} \right],$$
$$F_X(x) = \Phi\left(\frac{\log x - b}{a}\right),$$
$$GM(X) = b,$$
$$S(X) = \log\left(\sqrt{2\pi}\,a\right) + b + \frac{1}{2},$$
$$R(X) = \log\left[\sqrt{2\pi}\,a\,\exp(b)\right] + \frac{(1-\gamma) a^2}{2\gamma} - \frac{\log \gamma}{2(1-\gamma)}$$
and
$$CE(X) = -\int_0^\infty \Phi\left(\frac{b - \log x}{a}\right) \log \Phi\left(\frac{b - \log x}{a}\right) dx$$
for $x > 0$, $a > 0$ and $-\infty < b < \infty$.
48. Half normal distribution: for this distribution,
$$f_X(x) = \sqrt{\frac{2}{\pi}}\,\frac{1}{a} \exp\left(-\frac{x^2}{2a^2}\right),$$
$$F_X(x) = \operatorname{erf}\left(\frac{x}{\sqrt{2}\,a}\right),$$
$$GM(X) = \frac{\log 2}{2} + \log a + \frac{\Gamma'(1/2)}{2\sqrt{\pi}},$$
$$S(X) = \frac{1}{2} - \frac{1}{2} \log \frac{2}{\pi} + \log a,$$
$$R(X) = \log\left(a \sqrt{\frac{\pi}{2}}\right) - \frac{\log \gamma}{2(1-\gamma)}$$
and
$$CE(X) = -\sqrt{2}\,a \int_0^\infty \operatorname{erfc}(t) \log \operatorname{erfc}(t) \, dt$$
for $x > 0$ and $a > 0$.
49. Student’s t distribution [41]: for this distribution,
$$f_X(x) = \frac{\Gamma\left(\frac{a+1}{2}\right)}{\sqrt{a\pi}\,\Gamma\left(\frac{a}{2}\right)} \left(1 + \frac{x^2}{a}\right)^{-\frac{a+1}{2}},$$
$$F_X(x) = \frac{1}{2} + \frac{x\,\Gamma\left(\frac{a+1}{2}\right)}{\sqrt{a\pi}\,\Gamma\left(\frac{a}{2}\right)}\,{}_2F_1\left(\frac{1}{2}, \frac{a+1}{2}; \frac{3}{2}; -\frac{x^2}{a}\right),$$
$$GM(X) = \frac{\Gamma\left(\frac{a+1}{2}\right)}{\sqrt{a\pi}\,\Gamma\left(\frac{a}{2}\right)} \int_{-\infty}^{\infty} \log x \left(1 + \frac{x^2}{a}\right)^{-\frac{a+1}{2}} dx,$$
$$S(X) = \log\left[\frac{\sqrt{a\pi}\,\Gamma\left(\frac{a}{2}\right)}{\Gamma\left(\frac{a+1}{2}\right)}\right] + \frac{a+1}{2} \left[ \frac{\Gamma'\left(\frac{a+1}{2}\right)}{\Gamma\left(\frac{a+1}{2}\right)} - \frac{\Gamma'\left(\frac{a}{2}\right)}{\Gamma\left(\frac{a}{2}\right)} \right],$$
$$R(X) = \frac{1}{1-\gamma} \log \left[ \left(\frac{\Gamma\left(\frac{a+1}{2}\right)}{\sqrt{a\pi}\,\Gamma\left(\frac{a}{2}\right)}\right)^\gamma \sqrt{a}\, B\left(\frac{a\gamma + \gamma - 1}{2}, \frac{1}{2}\right) \right]$$
and
$$CE(X) = -\int_{-\infty}^{\infty} \left[ \frac{1}{2} - \frac{x\,\Gamma\left(\frac{a+1}{2}\right)}{\sqrt{a\pi}\,\Gamma\left(\frac{a}{2}\right)}\,{}_2F_1\left(\frac{1}{2}, \frac{a+1}{2}; \frac{3}{2}; -\frac{x^2}{a}\right) \right] \log \left[ \frac{1}{2} - \frac{x\,\Gamma\left(\frac{a+1}{2}\right)}{\sqrt{a\pi}\,\Gamma\left(\frac{a}{2}\right)}\,{}_2F_1\left(\frac{1}{2}, \frac{a+1}{2}; \frac{3}{2}; -\frac{x^2}{a}\right) \right] dx$$
for $-\infty < x < \infty$ and $a > 0$.
50. Cauchy distribution: for this distribution,
$$f_X(x) = \frac{1}{\pi \left(1 + x^2\right)},$$
$$F_X(x) = \frac{1}{2} + \frac{\arctan x}{\pi},$$
$$GM(X) = \frac{1}{\pi} \int_{-\infty}^{\infty} \frac{\log x}{1 + x^2} \, dx,$$
$$S(X) = \log \pi + \Gamma'(1) - \frac{\Gamma'(1/2)}{\sqrt{\pi}},$$
$$R(X) = \frac{1}{1-\gamma} \log \left[ \frac{B\left(\gamma - \frac{1}{2}, \frac{1}{2}\right)}{\pi^\gamma} \right]$$
and
$$CE(X) = -\int_{-\infty}^{\infty} \left[ \frac{1}{2} - \frac{\arctan x}{\pi} \right] \log \left[ \frac{1}{2} - \frac{\arctan x}{\pi} \right] dx$$
for $-\infty < x < \infty$.
51. Laplace distribution [42]: for this distribution,
$$f_X(x) = \frac{1}{2a} \exp\left(-\frac{|x-b|}{a}\right),$$
$$F_X(x) = \begin{cases} \dfrac{1}{2} \exp\left(\dfrac{x-b}{a}\right), & \text{if } x \leq b, \\[1.5ex] 1 - \dfrac{1}{2} \exp\left(-\dfrac{x-b}{a}\right), & \text{if } x \geq b, \end{cases}$$
$$GM(X) = \frac{1}{2a} \int_{-\infty}^{\infty} \log x \, \exp\left(-\frac{|x-b|}{a}\right) dx,$$
$$S(X) = 1 + \log(2a),$$
$$R(X) = \log(2a) - \frac{\log \gamma}{1-\gamma}$$
and
$$CE(X) = \frac{a}{2} + \frac{a}{2} \log 2 - a \left. \frac{\partial}{\partial \alpha} B_{\frac{1}{2}}(0, 2 + \alpha) \right|_{\alpha = 0}$$
for $-\infty < x < \infty$, $a > 0$ and $-\infty < b < \infty$.
52. Logistic distribution of type I: for this distribution,
$$f_X(x) = \frac{a c \exp(-c x)}{\left[1 + \exp(-c x)\right]^{a+1}},$$
$$F_X(x) = \left[1 + \exp(-c x)\right]^{-a},$$
$$GM(X) = a c \int_{-\infty}^{\infty} \frac{\log x \, \exp(-c x)}{\left[1 + \exp(-c x)\right]^{a+1}} \, dx,$$
$$S(X) = -\log a - \log c + (a+1) \psi(a+1) - a \psi(a) - \Gamma'(1),$$
$$R(X) = -\log c + \frac{1}{1-\gamma} \log \left[ a^\gamma B(a\gamma, \gamma) \right]$$
and
$$CE(X) = -\int_{-\infty}^{\infty} \left\{ 1 - \left[1 + \exp(-c x)\right]^{-a} \right\} \log \left\{ 1 - \left[1 + \exp(-c x)\right]^{-a} \right\} dx$$
for $-\infty < x < \infty$, $a > 0$ and $c > 0$.
53. Logistic distribution of type II: for this distribution,
$$f_X(x) = \frac{a c \exp(-a c x)}{\left[1 + \exp(-c x)\right]^{a+1}},$$
$$F_X(x) = 1 - \left[1 + \exp(c x)\right]^{-a},$$
$$GM(X) = a c \int_{-\infty}^{\infty} \frac{\log x \, \exp(-a c x)}{\left[1 + \exp(-c x)\right]^{a+1}} \, dx,$$
$$S(X) = -\log a - \log c + (a+1) \psi(a+1) - a \psi(a) - \Gamma'(1),$$
$$R(X) = -\log c + \frac{1}{1-\gamma} \log \left[ a^\gamma B(\gamma, a\gamma) \right]$$
and
$$CE(X) = a \int_{-\infty}^{\infty} \left[1 + \exp(c x)\right]^{-a} \log \left[1 + \exp(c x)\right] dx$$
for $-\infty < x < \infty$, $a > 0$ and $c > 0$.
54. Logistic distribution of type III: for this distribution,
$$f_X(x) = \frac{c \exp(-a c x)}{B(a, a) \left[1 + \exp(-c x)\right]^{2a}},$$
$$F_X(x) = I_{\frac{1}{1 + \exp(-c x)}}(a, a),$$
$$GM(X) = \frac{c}{B(a, a)} \int_{-\infty}^{\infty} \frac{\log x \, \exp(-a c x)}{\left[1 + \exp(-c x)\right]^{2a}} \, dx,$$
$$S(X) = \log B(a, a) - \log c + 2a \psi(2a) - 2a \psi(a),$$
$$R(X) = -\log c + \frac{1}{1-\gamma} \log \left[ \frac{B(a\gamma, a\gamma)}{B(a, a)^\gamma} \right]$$
and
$$CE(X) = -\int_{-\infty}^{\infty} I_{\frac{\exp(-c x)}{1 + \exp(-c x)}}(a, a) \log I_{\frac{\exp(-c x)}{1 + \exp(-c x)}}(a, a) \, dx$$
for $-\infty < x < \infty$, $a > 0$ and $c > 0$.
55. Logistic distribution of type IV [43]: for this distribution,
$$f_X(x) = \frac{c \exp(-b c x)}{B(a, b) \left[1 + \exp(-c x)\right]^{a+b}},$$
$$F_X(x) = I_{\frac{1}{1 + \exp(-c x)}}(a, b),$$
$$GM(X) = \frac{c}{B(a, b)} \int_{-\infty}^{\infty} \frac{\log x \, \exp(-b c x)}{\left[1 + \exp(-c x)\right]^{a+b}} \, dx,$$
$$S(X) = \log B(a, b) - \log c + (a+b) \psi(a+b) - a \psi(a) - b \psi(b),$$
$$R(X) = -\log c + \frac{1}{1-\gamma} \log \left[ \frac{B(a\gamma, b\gamma)}{B(a, b)^\gamma} \right]$$
and
$$CE(X) = -\int_{-\infty}^{\infty} I_{\frac{\exp(-c x)}{1 + \exp(-c x)}}(b, a) \log I_{\frac{\exp(-c x)}{1 + \exp(-c x)}}(b, a) \, dx$$
for $-\infty < x < \infty$, $a > 0$, $b > 0$ and $c > 0$.
56. Burr distribution [44]: for this distribution,
f_X(x) = c k x^{c - 1} \left(1 + x^{c}\right)^{-k - 1},
F_X(x) = 1 - \left(1 + x^{c}\right)^{-k},
GM(X) = \frac{\Gamma'(1) \Gamma(k) - \Gamma'(k)}{c \Gamma(k)},
S(X) = 1 + \frac{1}{k} - \log(c k) + \frac{1 - c}{c} \cdot \frac{\Gamma'(1) \Gamma(k) - \Gamma'(k)}{\Gamma(k)},
R(X) = \frac{\gamma \log(c k) - \log c}{1 - \gamma} + \frac{1}{1 - \gamma} \log B\left(k \gamma + \frac{\gamma - 1}{c}, \gamma + \frac{1 - \gamma}{c}\right)
and
CE(X) = \frac{k}{c} B\left(\frac{1}{c}, k - \frac{1}{c}\right) \left[\psi(k) - \psi\left(k - \frac{1}{c}\right)\right]
for x > 0, k > 0 and c > 0.
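The Burr closed forms can also be checked by quadrature. In the sketch below the parameter values c = 3, k = 2 and γ = 2 are arbitrary (they satisfy k > 1/c, as the CE(X) expression requires):

```python
import math

def digamma(x):
    h = 1e-6
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

def beta_fn(x, y):
    return math.exp(math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y))

c, k, g = 3.0, 2.0, 2.0  # illustrative parameters

def f(x):
    # Burr XII density
    return c * k * x ** (c - 1.0) * (1.0 + x ** c) ** (-k - 1.0)

# midpoint quadrature on (0, 50); tails beyond are negligible
h = 0.001
xs = [(i + 0.5) * h for i in range(50000)]
S_num = -sum(f(x) * math.log(f(x)) for x in xs) * h
R_num = math.log(sum(f(x) ** g for x in xs) * h) / (1.0 - g)
CE_num = -sum((1.0 + x ** c) ** (-k)
              * math.log((1.0 + x ** c) ** (-k)) for x in xs) * h

S_closed = (1.0 + 1.0 / k - math.log(c * k)
            + ((1.0 - c) / c) * (digamma(1.0) - digamma(k)))
R_closed = (g * math.log(c * k) - math.log(c)
            + math.log(beta_fn(k * g + (g - 1.0) / c,
                               g + (1.0 - g) / c))) / (1.0 - g)
CE_closed = (k / c) * beta_fn(1.0 / c, k - 1.0 / c) \
            * (digamma(k) - digamma(k - 1.0 / c))
```

For c = 1, k = 1 the Shannon formula reduces to S(X) = 2, the known value for the density (1 + x)^{-2}.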
57. Dagum distribution [45]: for this distribution,
f_X(x) = a p x^{a p - 1} \left(1 + x^{a}\right)^{-p - 1},
F_X(x) = \left(1 + x^{-a}\right)^{-p},
GM(X) = -\frac{\Gamma'(1)}{a} + \frac{\Gamma'(p)}{a \Gamma(p)},
S(X) = -\log(a p) + (p + 1) \left[\frac{\Gamma'(p + 1)}{\Gamma(p + 1)} - \Gamma'(1)\right] + \left(\frac{1}{a} - p\right) \left[\frac{\Gamma'(p)}{\Gamma(p)} - \Gamma'(1)\right],
R(X) = \frac{1}{1 - \gamma} \log \left[a^{\gamma - 1} p^{\gamma} B\left(\gamma + \frac{\gamma - 1}{a}, p \gamma - \frac{\gamma - 1}{a}\right)\right]
and
CE(X) = p \sum_{k = 1}^{\infty} \frac{1}{k} \left[(k + 1) B\left(p k + p + \frac{1}{a}, 1 - \frac{1}{a}\right) - k B\left(p k + \frac{1}{a}, 1 - \frac{1}{a}\right)\right]
for x > 0, a > 0 and p > 0 (the series expression for CE(X) converges for a > 1).
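The Dagum expressions above can be spot-checked in the same way. The sketch below uses the arbitrary values a = 3, p = 2 and γ = 2 (with a > 1 so that CE(X) is finite), and sums the CE series to a finite truncation:

```python
import math

def digamma(x):
    h = 1e-6
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

def beta_fn(x, y):
    return math.exp(math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y))

a, p, g = 3.0, 2.0, 2.0  # illustrative parameters

def f(x):
    # Dagum density
    return a * p * x ** (a * p - 1.0) * (1.0 + x ** a) ** (-p - 1.0)

h = 0.001
xs = [(i + 0.5) * h for i in range(60000)]  # midpoints on (0, 60)
S_num = -sum(f(x) * math.log(f(x)) for x in xs) * h
R_num = math.log(sum(f(x) ** g for x in xs) * h) / (1.0 - g)

S_closed = (-math.log(a * p)
            + (p + 1.0) * (digamma(p + 1.0) - digamma(1.0))
            + (1.0 / a - p) * (digamma(p) - digamma(1.0)))
R_closed = math.log(a ** (g - 1.0) * p ** g
                    * beta_fn(g + (g - 1.0) / a,
                              p * g - (g - 1.0) / a)) / (1.0 - g)

# cumulative residual entropy: quadrature versus truncated series
h2 = 0.002
CE_num = 0.0
for i in range(100000):  # midpoints on (0, 200)
    x = (i + 0.5) * h2
    sf = 1.0 - (1.0 + x ** (-a)) ** (-p)
    if sf > 0.0:
        CE_num -= sf * math.log(sf) * h2
CE_series = p * sum(((k + 1) * beta_fn(p * k + p + 1.0 / a, 1.0 - 1.0 / a)
                     - k * beta_fn(p * k + 1.0 / a, 1.0 - 1.0 / a)) / k
                    for k in range(1, 20001))
```

For a = 1, p = 1 the Shannon formula reduces to S(X) = 2, again the known value for (1 + x)^{-2}.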
58. J shaped distribution [46]: for this distribution,
f_X(x) = 2 a (1 - x) \left[x (2 - x)\right]^{a - 1},
F_X(x) = \left[x (2 - x)\right]^{a},
GM(X) = a 2^{a} \frac{\partial}{\partial \alpha} \left\{\frac{1}{(a + \alpha)(a + \alpha + 1)} \, {}_2F_1\left(1 - a, a + \alpha; a + \alpha + 2; \frac{1}{2}\right)\right\} \Bigg|_{\alpha = 0},
S(X) = -\log(2 a) + (1 - a) a 2^{a} \frac{\partial}{\partial \alpha} \left\{\frac{1}{(a + \alpha)(a + \alpha + 1)} \, {}_2F_1\left(1 - a, a + \alpha; a + \alpha + 2; \frac{1}{2}\right)\right\} \Bigg|_{\alpha = 0} - a 2^{a} \frac{\partial}{\partial \alpha} \left\{B(a, \alpha + 2) \, {}_2F_1\left(1 - a, a; a + \alpha + 2; \frac{1}{2}\right)\right\} \Bigg|_{\alpha = 0} + \frac{(1 - a) 2^{a}}{a + 1} \frac{\partial}{\partial \alpha} \left\{2^{\alpha} \, {}_2F_1\left(1 - a - \alpha, a; a + 2; \frac{1}{2}\right)\right\} \Bigg|_{\alpha = 0},
R(X) = \frac{1}{1 - \gamma} \log \left[a^{\gamma} 2^{a \gamma} B(a \gamma - \gamma + 1, \gamma + 1) \, {}_2F_1\left(a \gamma - \gamma + 1, \gamma - a \gamma; a \gamma + 2; \frac{1}{2}\right)\right]
and
CE(X) = \sum_{k = 1}^{\infty} \frac{2^{a k}}{k (a k + 1)} \, {}_2F_1\left(a k + 1, -a k; a k + 2; \frac{1}{2}\right) - \sum_{k = 1}^{\infty} \frac{2^{a (k + 1)}}{k (a k + a + 1)} \, {}_2F_1\left(a k + a + 1, -a k - a; a k + a + 2; \frac{1}{2}\right)
for 0 < x < 1 and a > 0 .
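The J shaped expressions involve the Gauss hypergeometric function, which at argument 1/2 can be evaluated by its defining series. The sketch below (arbitrary values a = 1.5, γ = 2; the derivative in the GM formula is taken by a central difference in α) checks GM(X) and R(X) against quadrature:

```python
import math

def beta_fn(x, y):
    return math.exp(math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y))

def hyp2f1(p, q, r, z):
    # truncated Gauss series; adequate for |z| <= 1/2
    term, total = 1.0, 1.0
    for n in range(200):
        term *= (p + n) * (q + n) / ((r + n) * (n + 1.0)) * z
        total += term
    return total

a, g = 1.5, 2.0  # illustrative parameters

def f(x):
    # J shaped (Topp-Leone) density
    return 2.0 * a * (1.0 - x) * (x * (2.0 - x)) ** (a - 1.0)

h = 1e-5
xs = [(i + 0.5) * h for i in range(100000)]  # midpoints on (0, 1)
GM_num = sum(math.log(x) * f(x) for x in xs) * h
R_num = math.log(sum(f(x) ** g for x in xs) * h) / (1.0 - g)

def M(al):
    # E[X^alpha] in closed form, as used in the GM expression
    return (a * 2.0 ** a / ((a + al) * (a + al + 1.0))
            * hyp2f1(1.0 - a, a + al, a + al + 2.0, 0.5))

d = 1e-4
GM_closed = (M(d) - M(-d)) / (2.0 * d)
R_closed = math.log(a ** g * 2.0 ** (a * g)
                    * beta_fn(a * g - g + 1.0, g + 1.0)
                    * hyp2f1(a * g - g + 1.0, g - a * g,
                             a * g + 2.0, 0.5)) / (1.0 - g)
```

For a = 1.5 and γ = 2 the integral of f² equals 6/5 exactly, so R(X) = −log 1.2, which the closed form reproduces.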
59. Nadarajah–Haghighi distribution [47]: for this distribution,
f_X(x) = a b (1 + b x)^{a - 1} \exp\left[1 - (1 + b x)^{a}\right],
F_X(x) = 1 - \exp\left[1 - (1 + b x)^{a}\right],
GM(X) = -\log b + e \int_{1}^{\infty} \log\left(y^{1 / a} - 1\right) \exp(-y) \, d y,
S(X) = -\log(a b) + (1 - a) e \frac{\partial}{\partial \alpha} \left[\Gamma\left(\frac{\alpha}{a} + 1, 1\right)\right] \Bigg|_{\alpha = 0} + 1,
R(X) = -\log(a b) + \frac{\gamma}{1 - \gamma} - \left(\gamma + \frac{1 - \gamma}{a}\right) \frac{\log \gamma}{1 - \gamma} + \frac{1}{1 - \gamma} \log \Gamma\left(\gamma + \frac{1 - \gamma}{a}, \gamma\right)
and
CE(X) = \frac{1}{b} + \frac{e}{b} \left(\frac{1}{a} - 1\right) \Gamma\left(\frac{1}{a} + 1, 1\right)
for x > 0 , a > 0 and b > 0 .
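The Nadarajah–Haghighi formulas involve the upper incomplete gamma function, which is easy to evaluate by quadrature, so all four entropies can be spot-checked. The values a = 2, b = 1.5 and γ = 2 below are arbitrary illustrations:

```python
import math

def uig(s, x):
    # upper incomplete gamma Gamma(s, x) by midpoint quadrature
    h = 1e-3
    return sum((x + (i + 0.5) * h) ** (s - 1.0)
               * math.exp(-(x + (i + 0.5) * h)) for i in range(60000)) * h

a, b, g = 2.0, 1.5, 2.0  # illustrative parameters

def f(x):
    return a * b * (1.0 + b * x) ** (a - 1.0) \
           * math.exp(1.0 - (1.0 + b * x) ** a)

h = 1e-4
xs = [(i + 0.5) * h for i in range(80000)]  # midpoints on (0, 8)
S_num = -sum(f(x) * math.log(f(x)) for x in xs) * h
R_num = math.log(sum(f(x) ** g for x in xs) * h) / (1.0 - g)
CE_num = sum(((1.0 + b * x) ** a - 1.0)
             * math.exp(1.0 - (1.0 + b * x) ** a) for x in xs) * h
GM_num = sum(math.log(x) * f(x) for x in xs) * h

d = 1e-4
dG = (uig(1.0 + d / a, 1.0) - uig(1.0 - d / a, 1.0)) / (2.0 * d)
S_closed = -math.log(a * b) + (1.0 - a) * math.e * dG + 1.0
s = g + (1.0 - g) / a
R_closed = (-math.log(a * b) + g / (1.0 - g)
            - s * math.log(g) / (1.0 - g)
            + math.log(uig(s, g)) / (1.0 - g))
CE_closed = 1.0 / b + (math.e / b) * (1.0 / a - 1.0) * uig(1.0 / a + 1.0, 1.0)
hy = 1e-4
GM_closed = -math.log(b) + math.e * sum(
    math.log((1.0 + (i + 0.5) * hy) ** (1.0 / a) - 1.0)
    * math.exp(-(1.0 + (i + 0.5) * hy)) for i in range(100000)) * hy
```

Setting a = 1 recovers the exponential distribution, for which S(X) = 1 − log b as expected.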
60. Two-sided power distribution [48]: for this distribution,
f_X(x) = a \left(\frac{x}{\theta}\right)^{a - 1}, if 0 < x \leq \theta; a \left(\frac{1 - x}{1 - \theta}\right)^{a - 1}, if \theta \leq x < 1,
F_X(x) = \theta \left(\frac{x}{\theta}\right)^{a}, if 0 < x \leq \theta; 1 - (1 - \theta) \left(\frac{1 - x}{1 - \theta}\right)^{a}, if \theta \leq x < 1,
GM(X) = \theta \log \theta - \frac{\theta}{a} + \frac{a}{(1 - \theta)^{a - 1}} \frac{\partial}{\partial \alpha} \left[B_{1 - \theta}(a, \alpha + 1)\right] \Bigg|_{\alpha = 0},
S(X) = -\log a - \frac{1 - a}{a},
R(X) = \frac{1}{1 - \gamma} \log \frac{a^{\gamma}}{a \gamma - \gamma + 1}
and
CE(X) = \sum_{k = 1}^{\infty} \frac{\theta^{k + 1}}{k (a k + 1)} + \sum_{k = 1}^{\infty} \frac{1}{k} \sum_{m = 0}^{k} \binom{k}{m} \frac{(-1)^{m} (1 - \theta)^{m + 1}}{a m + 1} - \sum_{k = 1}^{\infty} \frac{\theta^{k + 2}}{k (a k + a + 1)} - \sum_{k = 1}^{\infty} \frac{1}{k} \sum_{m = 0}^{k + 1} \binom{k + 1}{m} \frac{(-1)^{m} (1 - \theta)^{m + 1}}{a m + 1}
for 0 < \theta < 1 and a > 0.
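For the two-sided power distribution, GM(X), S(X) and R(X) are simple enough to verify directly; the sketch below uses the arbitrary values a = 3, θ = 0.4 and γ = 2, with the incomplete-beta derivative in GM(X) taken by a central difference:

```python
import math

a, th, g = 3.0, 0.4, 2.0  # illustrative parameters

def f(x):
    # two-sided power density
    if x <= th:
        return a * (x / th) ** (a - 1.0)
    return a * ((1.0 - x) / (1.0 - th)) ** (a - 1.0)

h = 1e-5
xs = [(i + 0.5) * h for i in range(100000)]  # midpoints on (0, 1)
GM_num = sum(math.log(x) * f(x) for x in xs) * h
S_num = -sum(f(x) * math.log(f(x)) for x in xs if f(x) > 0.0) * h
R_num = math.log(sum(f(x) ** g for x in xs) * h) / (1.0 - g)

def binc(al):
    # incomplete beta B_{1-th}(a, al + 1) by midpoint quadrature
    hb = 1e-5
    n = int((1.0 - th) / hb)
    return sum(((i + 0.5) * hb) ** (a - 1.0)
               * (1.0 - (i + 0.5) * hb) ** al for i in range(n)) * hb

d = 1e-4
GM_closed = (th * math.log(th) - th / a
             + a / ((1.0 - th) ** (a - 1.0)) * (binc(d) - binc(-d)) / (2.0 * d))
S_closed = -math.log(a) - (1.0 - a) / a
R_closed = math.log(a ** g / (a * g - g + 1.0)) / (1.0 - g)
```

For a = 1 the distribution is uniform on (0, 1) and the Shannon formula correctly gives S(X) = 0.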
61. Power Lindley distribution [49]: for this distribution,
f_X(x) = \frac{a b^{2}}{b + 1} \left(1 + x^{a}\right) x^{a - 1} \exp\left(-b x^{a}\right),
F_X(x) = 1 - \left(1 + \frac{b x^{a}}{b + 1}\right) \exp\left(-b x^{a}\right),
GM(X) = \frac{b \Gamma'(1) + \Gamma'(2)}{a (b + 1)} - \frac{\log b}{a},
S(X) = -\log \frac{a b}{b + 1} - \frac{\log b}{a} + \frac{(1 - a) \left[b \Gamma'(1) + \Gamma'(2)\right]}{a (b + 1)} + \frac{b + 2}{b + 1} - \frac{\exp(b)}{b + 1} \frac{\partial}{\partial \alpha} \left[b^{-\alpha} \Gamma(\alpha + 2, b)\right] \Bigg|_{\alpha = 0},
R(X) = -\log a + \frac{\gamma}{1 - \gamma} \log \frac{b^{2}}{b + 1} + \frac{1}{1 - \gamma} \log \left[\Gamma\left(\gamma + \frac{1 - \gamma}{a}\right) \Psi\left(\gamma + \frac{1 - \gamma}{a}, 2 \gamma + \frac{1 - \gamma}{a} + 1; b \gamma\right)\right]
and
CE(X) = \frac{1}{a b^{1 / a}} \left[\Gamma\left(\frac{1}{a} + 1\right) + \frac{1}{b + 1} \Gamma\left(\frac{1}{a} + 2\right)\right] - \frac{(1 + b)^{1 / a}}{a b^{1 / a}} \Gamma\left(\frac{1}{a}\right) \frac{\partial}{\partial \alpha} \left[\Psi\left(\frac{1}{a}, \frac{1}{a} + \alpha + 2; b + 1\right)\right] \Bigg|_{\alpha = 0}
for x > 0 , a > 0 and b > 0 .
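The power Lindley expressions for GM(X) and S(X) can be spot-checked numerically as well; the values a = 2 and b = 1.5 below are arbitrary, the digamma values supply Γ′(1) and Γ′(2) (since Γ(1) = Γ(2) = 1), and the incomplete-gamma derivative is taken by a central difference:

```python
import math

def digamma(x):
    h = 1e-6
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

def uig(s, x):
    # upper incomplete gamma Gamma(s, x) by midpoint quadrature
    h = 1e-3
    return sum((x + (i + 0.5) * h) ** (s - 1.0)
               * math.exp(-(x + (i + 0.5) * h)) for i in range(60000)) * h

a, b = 2.0, 1.5  # illustrative parameters

def f(x):
    # power Lindley density
    return (a * b * b / (b + 1.0) * (1.0 + x ** a) * x ** (a - 1.0)
            * math.exp(-b * x ** a))

h = 1e-4
xs = [(i + 0.5) * h for i in range(70000)]  # midpoints on (0, 7)
GM_num = sum(math.log(x) * f(x) for x in xs) * h
S_num = -sum(f(x) * math.log(f(x)) for x in xs) * h

dg1, dg2 = digamma(1.0), digamma(2.0)  # Gamma'(1) and Gamma'(2)
GM_closed = (b * dg1 + dg2) / (a * (b + 1.0)) - math.log(b) / a

d = 1e-4
dd = (b ** (-d) * uig(2.0 + d, b) - b ** d * uig(2.0 - d, b)) / (2.0 * d)
S_closed = (-math.log(a * b / (b + 1.0)) - math.log(b) / a
            + (1.0 - a) * (b * dg1 + dg2) / (a * (b + 1.0))
            + (b + 2.0) / (b + 1.0)
            - math.exp(b) / (b + 1.0) * dd)
```

Setting a = 1 reduces both formulas to those of the ordinary Lindley distribution.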
62. Modified slash Lindley–Weibull distribution [50]: for this distribution,
f X ( x ) = 2 a 3 b 2 x a 1 a + 1 ( a + 2 ) x a + 2 b a a x a + 2 b a 3 ,
F X ( x ) = a 2 x a a + 1 ( a + 1 ) x a + 2 b a a x a + 2 b a 2 ,
G M ( X ) = ( a + 2 ) log 2 4 ( a + 1 ) ( a + 2 ) log a 4 ( a + 1 ) + ( a + 2 ) log b 2 ( a + 1 ) Γ ( 2 ) a + 2 a ( a + 1 ) Γ ( 1 ) + b 1 a a 2 1 a log 2 2 5 2 ( a + 1 ) Γ 1 a Γ 3 1 a + b 1 a a 2 1 a log b 2 a 1 2 ( a + 1 ) Γ 1 a Γ 3 1 a b 1 a a log 2 2 3 2 ( a + 1 ) Γ 1 a Γ 3 1 a + b 1 a a 1 1 a 2 3 2 ( a + 1 ) Γ 1 a Γ 3 1 a b 1 a a 1 1 a 2 3 2 ( a + 1 ) Γ 1 a Γ 3 1 a ,
S ( X ) = log 2 a 3 b a a + 1 + ( 1 a ) ( a + 2 ) log 2 4 ( a + 1 ) ( 1 a ) ( a + 2 ) log a 4 ( a + 1 ) + ( 1 a ) ( a + 2 ) log b 2 ( a + 1 ) Γ ( 2 ) ( 1 a ) ( a + 2 ) a ( a + 1 ) Γ ( 1 ) + ( 1 a ) b 1 a a 2 1 a log 2 2 5 2 ( a + 1 ) Γ 1 a Γ 3 1 a + ( 1 a ) b 1 a a 2 1 a log b 2 a 1 2 ( a + 1 ) Γ 1 a Γ 3 1 a ( 1 a ) b 1 a a log 2 2 3 2 ( a + 1 ) Γ 1 a Γ 3 1 a + ( 1 a ) b 1 a a 1 1 a 2 3 2 ( a + 1 ) Γ 1 a Γ 3 1 a ( 1 a ) b 1 a a 1 1 a 2 3 2 ( a + 1 ) Γ 1 a Γ 3 1 a 2 a b a a + 1 α a + 2 a α + 1 0 y + 2 a b a a + 2 α + 1 y + 2 b a 3 d y α = 0 + 3 a a + 1 a + 2 a 1 + log 2 b 2 a 2 4 1 + 2 log 2 b 2 ,
R ( X ) = 1 1 γ log 2 a 2 b a a + 1 γ a γ 1 a 1 0 y γ + 1 γ a a + 2 a y + 2 b a γ y + 2 b a 3 γ d y
and
C E ( X ) = log 2 b a a + 1 2 1 a b a 1 a + 2 a + 1 a Γ 1 a Γ 1 1 a ( a + 1 ) 1 a 2 b a 1 a a 1 a + 1 ( a + 2 ) 1 a α 2 ( a + 1 ) b a α B 1 a , 1 α 1 a 2 F 1 1 a , 2 ; 1 α ; 1 a + 2 α = 0 2 1 a + 1 b log 2 a 1 a + 1 Γ 1 a Γ 2 1 a 2 1 a + 1 b log b a 1 a Γ 1 a Γ 2 1 a + 2 1 a + 1 b a 1 a Γ 1 a Γ 2 1 a 2 1 a + 1 b a 1 a + 1 Γ 1 a Γ 2 1 a Γ ( 2 ) 2 1 a + 1 b ( a + 2 ) log 2 a 1 a + 1 ( a + 1 ) Γ 1 + 1 a Γ 1 1 a 2 1 a + 1 b ( a + 2 ) log b a 1 a ( a + 1 ) Γ 1 + 1 a Γ 1 1 a + 2 1 a + 1 b ( a + 2 ) a 1 a + 1 ( a + 1 ) Γ 1 + 1 a Γ 1 1 a 2 1 a + 1 b ( a + 2 ) log 2 a 1 a + 1 ( a + 1 ) Γ 1 a Γ 2 1 a Γ ( 2 )
for x > 0 , a > 0 and b > 0 .
63. Reciprocal distribution: for this distribution,
f_X(x) = \frac{1}{x (\log b - \log a)},
F_X(x) = \frac{\log x - \log a}{\log b - \log a},
GM(X) = \frac{\log b + \log a}{2},
S(X) = \log(\log b - \log a) + \frac{\log b + \log a}{2},
R(X) = \frac{1}{1 - \gamma} \log \frac{b^{1 - \gamma} - a^{1 - \gamma}}{(1 - \gamma) (\log b - \log a)^{\gamma}}
and
CE(X) = \frac{\left[b - a - a (\log b - \log a)\right] \log(\log b - \log a)}{\log b - \log a} - \frac{b}{\log b - \log a} \frac{\partial}{\partial \alpha} \left[\gamma(\alpha + 2, \log b - \log a)\right] \Bigg|_{\alpha = 0}
for 0 < a \leq x \leq b < \infty.
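Finally, the reciprocal distribution is simple enough that all the expressions above can be confirmed by quadrature. The endpoints a = 1 and b = 5 and order γ = 2 below are arbitrary, and the lower-incomplete-gamma derivative in CE(X) is again taken by a central difference:

```python
import math

aa, bb, g = 1.0, 5.0, 2.0  # illustrative endpoints and Renyi order
L = math.log(bb) - math.log(aa)

h = 1e-4
xs = [aa + (i + 0.5) * h for i in range(int((bb - aa) / h))]
S_num = -sum(math.log(1.0 / (x * L)) / (x * L) for x in xs) * h
R_num = math.log(sum((1.0 / (x * L)) ** g for x in xs) * h) / (1.0 - g)
CE_num = 0.0
for x in xs:
    sf = (math.log(bb) - math.log(x)) / L  # survival function 1 - F(x)
    if sf > 0.0:
        CE_num -= sf * math.log(sf) * h

S_closed = math.log(L) + (math.log(aa) + math.log(bb)) / 2.0
R_closed = math.log((bb ** (1.0 - g) - aa ** (1.0 - g))
                    / ((1.0 - g) * L ** g)) / (1.0 - g)

def lig(s, x):
    # lower incomplete gamma by midpoint quadrature
    hl = 1e-5
    return sum(((i + 0.5) * hl) ** (s - 1.0)
               * math.exp(-(i + 0.5) * hl) for i in range(int(x / hl))) * hl

d = 1e-4
dd = (lig(2.0 + d, L) - lig(2.0 - d, L)) / (2.0 * d)
CE_closed = (bb - aa - aa * L) * math.log(L) / L - bb / L * dd
```

For these values the integral of f² equals 0.8/L² exactly, which the Rényi formula reproduces.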

4. Conclusions

We have derived the most comprehensive collection of explicit expressions for the geometric mean, Shannon entropy, Rényi entropy and the cumulative residual entropy for the following continuous univariate distributions: 1. Gauss hypergeometric beta distribution, 2. q Weibull distribution, 3. q exponential distribution, 4. Weighted exponential distribution, 5. Teissier distribution, 6. Maxwell distribution, 7. Inverse Maxwell distribution, 8. Power Maxwell distribution, 9. Inverse power Maxwell distribution, 10. Omega distribution, 11. Colak et al.’s distribution, 12. Bimodal beta distribution, 13. Confluent hypergeometric beta distribution, 14. Libby and Novick’s beta distribution, 15. Generalized beta distribution, 16. Log-logistic distribution, 17. Inverse Gaussian distribution, 18. Gompertz distribution, 19. Exponential distribution, 20. Inverse exponential distribution, 21. Exponentiated exponential distribution, 22. Gamma distribution, 23. Chisquare distribution, 24. Chi distribution, 25. Inverse gamma distribution, 26. Inverse chisquare distribution, 27. Inverse chi distribution, 28. Rayleigh distribution, 29. Weibull distribution, 30. Inverse Rayleigh distribution, 31. Inverse Weibull distribution, 32. Gumbel distribution, 33. Generalized extreme value distribution, 34. Generalized gamma distribution, 35. Pareto distribution of type I, 36. Pareto distribution of type II, 37. Generalized Pareto distribution, 38. Uniform distribution, 39. Power function distribution of type I, 40. Power function distribution of type II, 41. Arcsine distribution, 42. Beta distribution, 43. Inverted beta distribution, 44. Kumaraswamy distribution, 45. Inverted Kumaraswamy distribution, 46. Normal distribution, 47. Lognormal distribution, 48. Half normal distribution, 49. Student’s t distribution, 50. Cauchy distribution, 51. Laplace distribution, 52. Logistic distribution of type I, 53. Logistic distribution of type II, 54. Logistic distribution of type III, 55. 
Logistic distribution of type IV, 56. Burr distribution, 57. Dagum distribution, 58. J shaped distribution, 59. Nadarajah–Haghighi distribution, 60. Two-sided power distribution, 61. Power Lindley distribution, 62. Modified slash Lindley–Weibull distribution, 63. Reciprocal distribution. This collection could be a useful reference for both theoreticians and practitioners of entropies. Future work will be to derive similar collections of explicit expressions for entropies of discrete univariate distributions, continuous bivariate distributions, discrete bivariate distributions, continuous multivariate distributions, discrete multivariate distributions, continuous matrix variate distributions, discrete matrix variate distributions, continuous complex variate distributions, and discrete complex variate distributions.

Author Contributions

Conceptualization, S.N. and M.K.; methodology, S.N. and M.K.; writing—original draft preparation, S.N.; writing—review and editing, S.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the Editor and the two referees for careful reading and comments which improved the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Feng, C.; Wang, H.; Tu, X.M. Geometric mean of nonnegative random variable. Commun. Stat. Theory Methods 2013, 42, 2714–2717. [Google Scholar] [CrossRef]
  2. Vogel, R.M. The geometric mean? Commun. Stat. Theory Methods 2022, 51, 82–94. [Google Scholar] [CrossRef]
  3. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef] [Green Version]
  4. Rényi, A. On measures of information and entropy. In Proceedings of the 4th Berkeley Symposium on Mathematics, Statistics and Probability, Berkeley, CA, USA, 20 June–30 July 1960; Neyman, J., Ed.; University of California Press: Berkeley, CA, USA, 1960; Volume 1, pp. 547–561. [Google Scholar]
  5. Rao, M.; Chen, Y.; Vemuri, B.C.; Wang, F. Cumulative residual entropy: A new measure of information. IEEE Trans. Inf. Theory 2004, 50, 1220–1228. [Google Scholar] [CrossRef]
  6. Lazo, A.V.; Rathie, P. On the entropy of continuous probability distributions. IEEE Trans. Inf. Theory 1978, 24, 120–122. [Google Scholar] [CrossRef]
  7. Ahmed, N.A.; Gokhale, D.V. Entropy expressions and their estimators for multivariate distributions. IEEE Trans. Inf. Theory 1989, 35, 688–692. [Google Scholar] [CrossRef]
  8. Darbellay, G.A.; Vajda, I. Entropy expressions for multivariate continuous distributions. IEEE Trans. Inf. Theory 2000, 46, 709–712. [Google Scholar] [CrossRef]
  9. Nadarajah, S.; Zografos, K. Expressions for Rényi and Shannon entropies for bivariate distributions. Inf. Sci. 2005, 170, 173–189. [Google Scholar] [CrossRef]
  10. Zografos, K.; Nadarajah, S. Expressions for Rényi and Shannon entropies for multivariate distributions. Stat. Probab. Lett. 2005, 71, 71–84. [Google Scholar] [CrossRef]
  11. Khan, M.Z.; Khan, M.A. Explicit expressions for three entropies of Dagum distribution. In Proceedings of the 13th International Conference on Statistical Sciences, Peshawar, Pakistan, 16–18 March 2015; Volume 28, pp. 193–196. [Google Scholar]
  12. Cheraghchi, M. Expressions for the entropy of binomial-type distributions. In Proceedings of the 2018 IEEE International Symposium on Information Theory, Vali, CO, USA, 17–22 June 2018; pp. 2520–2524. [Google Scholar]
  13. Giuclea, M.; Popescu, C.-C. On geometric mean and cumulative residual entropy for two random variables with Lindley type distribution. Mathematics 2022, 10, 1499. [Google Scholar] [CrossRef]
  14. Prudnikov, A.P.; Brychkov, Y.A.; Marichev, O.I. Integrals and Series; Gordon and Breach Science Publishers: Amsterdam, The Netherlands, 1986; Volume 1–3. [Google Scholar]
  15. Gradshteyn, I.S.; Ryzhik, I.M. Table of Integrals, Series, and Products, 6th ed.; Academic Press: San Diego, CA, USA, 2000. [Google Scholar]
  16. Armero, C.; Bayarri, M.J. Prior assessments for prediction in queues. Statistician 1994, 43, 139–153. [Google Scholar] [CrossRef]
  17. Picoli, S., Jr.; Mendes, R.S.; Malacarne, L.C. q-exponential, Weibull, and q-Weibull distributions: An empirical analysis. Phys. Stat. Mech. Its Appl. 2003, 324, 678–688. [Google Scholar] [CrossRef] [Green Version]
  18. Teissier, G. Recherches sur le vieillissement et sur les lois de mortalite. Ann. Physiol. Phys. Chim. Biol. 1934, 10, 237–284. [Google Scholar]
  19. Maxwell, J.C. Illustrations of the dynamical theory of gases. Part I. On the motions and collisions of perfectly elastic spheres. Lond. Edinb. Dublin Philos. Mag. J. Sci. 1860, 19, 19–32. [Google Scholar] [CrossRef]
  20. Maxwell, J.C. Illustrations of the dynamical theory of gases. Part II. On the process of diffusion of two or more kinds of moving particles among one another. Lond. Edinb. Dublin Philos. Mag. J. Sci. 1860, 20, 21–37. [Google Scholar] [CrossRef]
  21. Yadav, A.S.; Bakouch, H.S.; Singh, S.K.; Singh, U. Power Maxwell distribution: Statistical properties, estimation and application. arXiv 2022, arXiv:1807.01200. [Google Scholar]
  22. Al-Kzzaz, H.S.; Abd El-Monsef, M.M.E. Inverse power Maxwell distribution: Statistical properties, estimation and application. J. Appl. Stat. 2022, 49, 2287–2306. [Google Scholar] [CrossRef]
  23. Dombi, J.; Jonas, T.; Toth, Z.E.; Arva, G. The omega probability distribution and its applications in reliability theory. Qual. Reliab. Eng. Int. 2019, 35, 600–626. [Google Scholar] [CrossRef] [Green Version]
  24. Colak, A.B.; Sindhu, T.N.; Lone, S.A.; Akhtar, M.T.; Shafiq, A. A comparative analysis of maximum likelihood estimation and artificial neural network modeling to assess electrical component reliability. Qual. Reliab. Eng. Int. 2022. accepted. [Google Scholar] [CrossRef]
  25. Vila, R.; Alfaia, L.; Meneze, A.F.B.; Cankaya, M.N.; Bourguignon, M. A model for bimodal rates and proportions. J. Appl. Stat. 2022. accepted. [Google Scholar] [CrossRef]
  26. Gordy, M.B. Computationally convenient distributional assumptions for common-value auctions. Comput. Econ. 1998, 12, 61–78. [Google Scholar] [CrossRef]
  27. Libby, D.L.; Novick, M.R. Multivariate generalized beta-distributions with applications to utility assessment. J. Educ. Stat. 1982, 7, 271–294. [Google Scholar] [CrossRef]
  28. McDonald, J.B.; Xu, Y.J. A generalization of the beta distribution with applications. J. Econom. 1995, 66, 133–152. [Google Scholar] [CrossRef]
  29. Wald, A. On cumulative sums of random variables. Ann. Math. Stat. 1944, 15, 283–296. [Google Scholar] [CrossRef]
  30. Gompertz, B. On the nature of the function expressive of the law of human mortality, and on a new mode of determining the value of life contingencies. Philos. Trans. R. Soc. Lond. 1825, 115, 513–583. [Google Scholar]
  31. Gupta, R.D.; Kundu, D. Exponentiated exponential family: An alternative to gamma and Weibull distributions. Biom. J. 2001, 43, 117–130. [Google Scholar] [CrossRef]
  32. Weibull, W. A statistical distribution function of wide applicability. J. Appl. Mech. 1951, 18, 293–297. [Google Scholar] [CrossRef]
  33. Gumbel, E.J. Les valeurs extremes des distributions statistiques. Ann. L’Institut Henri Poincare 1935, 5, 115–158. [Google Scholar]
  34. Jenkinson, A.F. The frequency distribution of the annual maximum (or minimum) values of meteorological elements. Q. J. R. Meteorol. Soc. 1955, 81, 158–171. [Google Scholar] [CrossRef]
  35. Stacy, E.W. A generalization of the gamma distribution. Ann. Math. Stat. 1962, 33, 1187–1192. [Google Scholar] [CrossRef]
  36. Pareto, V. La legge della domanda. G. Degli Econ. 1895, 10, 59–68. [Google Scholar]
  37. Lomax, K.S. Business failures; Another example of the analysis of failure data. J. Am. Stat. Assoc. 1954, 49, 847–852. [Google Scholar] [CrossRef]
  38. Pickands, J. Statistical inference using extreme order statistics. Ann. Stat. 1975, 3, 119–131. [Google Scholar]
  39. Kumaraswamy, P. A generalized probability density function for double bounded random processes. J. Hydrol. 1980, 46, 79–88. [Google Scholar] [CrossRef]
  40. Abd Al-Fattah, A.M.; El-Helbawy, A.A.; Al-Dayian, G.R. Inverted Kumaraswamy distribution: Properties and estimation. Pak. J. Stat. 2017, 33, 37–61. [Google Scholar]
  41. Gosset, W.S. The probable error of a mean. Biometrika 1908, 6, 1–25. [Google Scholar]
  42. Laplace, P.S. Memoire sur la probabilite des causes par les evenements. Mem. L’Academie R. Des Sci. Present. Par Divers. Savan 1774, 6, 621–656. [Google Scholar]
  43. Prentice, R.L. Discrimination among some parametric models. Biometrika 1975, 62, 607–614. [Google Scholar] [CrossRef]
  44. Burr, I.W. Cumulative frequency functions. Ann. Math. Stat. 1942, 13, 215–232. [Google Scholar] [CrossRef]
  45. Dagum, C. A model of income distribution and the conditions of existence of moments of finite order. Bull. Int. Stat. Inst. 1975, 46, 199–205. [Google Scholar]
  46. Topp, C.W.; Leone, F.C. A family of J-shaped frequency functions. J. Am. Stat. Assoc. 1955, 50, 209–219. [Google Scholar] [CrossRef]
  47. Nadarajah, S.; Haghighi, F. An extension of the exponential distribution. Statistics 2011, 45, 543–558. [Google Scholar] [CrossRef]
  48. van Dorp, J.R.; Kotz, S. The standard two-sided power distribution and its properties: With applications in financial engineering. Am. Stat. 2002, 56, 90–99. [Google Scholar] [CrossRef]
  49. Ghitany, M.E.; Al-Mutairi, D.K.; Balakrishnan, N.; Al-Enezi, L.J. Power Lindley distribution and associated inference. Comput. Stat. Data Anal. 2013, 64, 20–33. [Google Scholar] [CrossRef]
  50. Reyes, J.; Arrue, J.; Venegas, O.; Gomez, H.W. The modified slash Lindley-Weibull distribution with applications to nutrition data. J. Appl. Stat. 2022, 49, 4206–4224. [Google Scholar] [CrossRef] [PubMed]