Article

Stochastic Orderings between Two Finite Mixtures with Inverted-Kumaraswamy Distributed Components

Raju Bhakta, Pradip Kundu, Suchandan Kayal and Morad Alizadeh
1 Department of Mathematics, National Institute of Technology Rourkela, Rourkela 769008, Odisha, India
2 School of Computer Science and Engineering, XIM University, Bhubaneswar 752050, Odisha, India
3 Department of Statistics, Faculty of Intelligent Systems Engineering and Data Science, Persian Gulf University, Bushehr 75169, Iran
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(6), 852; https://doi.org/10.3390/math12060852
Submission received: 11 February 2024 / Revised: 5 March 2024 / Accepted: 11 March 2024 / Published: 14 March 2024
(This article belongs to the Section Probability and Statistics)

Abstract: In this paper, we consider two finite mixture models (FMMs) whose components have inverted-Kumaraswamy distributed lifetimes. Several stochastic ordering results between the FMMs are obtained. Mainly, we focus on three different cases in terms of the heterogeneity of parameters. The usual stochastic order between the FMMs is established when heterogeneity is present in one parameter as well as in two parameters. In addition, we also study the ageing faster order in terms of the reversed hazard rate between two FMMs when the heterogeneity is in two parameters. For the case of heterogeneity in three parameters, we obtain comparison results based on the reversed hazard rate and likelihood ratio orders. The theoretical developments are illustrated using several examples and counterexamples.

1. Introduction

FMMs have been widely used in many study areas, including biology, reliability, and survival analysis. As a result, both theorists and practitioners have shown a great deal of interest in these models. The mixture model (MM) owes much of its appeal to its ability to model heterogeneous data whose pattern cannot be reproduced by a single parametric distribution. To capture this heterogeneity, an overall model is built by mixing a collection of (infinitely large) homogeneous subpopulations. The mixing is carried out over a latent parameter, which is regarded as a random variable (RV) following an unknown mixing distribution; the associated weights are referred to as mixing proportions throughout this paper. There are many circumstances in which FMMs arise naturally. For more information on the different applications of FMMs, see [1,2,3,4]. For instance,
  • An FMM can be used in reliability theory to model the “failure time” of a system. The model assumes that the “failure time” is a mixture of two or more distributions, as usually there is more than one reason causing the failures of a component or system [3].
  • To study the distribution of time to death after a major cardiovascular surgery, FMMs are useful. Here, one may consider that the lifetime of such patients after surgery contains three phases of time. In the first phase, that is, immediately after surgery, the death risk is relatively high. In the next phase, the hazard rate remains constant up to some certain time. Then, in the final phase, the risk of death of the patient increases. The convenient way to model this situation is to adopt an MM with three components. In each phase, a separate parametric model can be assigned to each (here, three) component [2].
  • An FMM can be used in biological sciences to model the distribution of gene expression levels across different cell types. The model assumes that the gene expression levels are a mixture of two or more distributions, such as a normal distribution and a gamma distribution [2,5].
Also, FMMs are used in a variety of real-life applications, such as clustering, image segmentation, anomaly detection, and speech recognition. They are also used in medical diagnosis, market segmentation, customer segmentation, etc. In this paper, we consider two FMMs with inverted-Kumaraswamy ($IK$) distributed components.
The topic of stochastic comparisons between two FMMs has been extensively studied. For more details, see [3,6,7,8] and so on. Ref. [9] studied stochastic comparisons of finite mixtures (FMs) where the subpopulations are from semiparametric models, that is, the scale model, proportional hazard rate model, and proportional reversed hazard rate model. Ref. [10] focused on two finite α -MMs and established sufficient conditions for comparing two α -MRVs. Please see Ref. [11] for some properties of the α -MM. Ref. [12] investigated MMs with generalized Lehmann distributed components and presented several ordering outcomes. Ref. [13] studied two FMMs with location-scale family distributed components and established some stochastic comparison results between them in terms of the usual stochastic order and the reversed hazard rate order. By utilizing the majorization idea, Ref. [14] studied a stochastic comparison for two FMs in terms of the usual stochastic order, hazard rate order, and reversed hazard rate order. Ref. [15] established ordering results between two finite mixture random variables (FMRVs), where the mixing components are based on proportional odds, proportional hazards, and proportional reversed hazards models. Ref. [16] obtained some ordering results between two FMMs considering general parametric families of distributions. Mainly, the authors established sufficient conditions for the usual stochastic order based on p-larger order and reciprocally majorization order. Very recently, Ref. [17] considered similar general parametric families of distributions as in [16] and then examined various ordering results with respect to the usual stochastic order, hazard rate order, and reversed hazard rate order between two FMMs.
Inverted distributions have several applications in various fields, including econometrics, life testing, biology, engineering sciences, and medicine. Additionally, they are used in reliability theory, survival analysis, the financial literature, and environmental research. For more details on inverted distributions and their applications, see [18]. Ref. [19] developed the $IK$ distribution by applying the transformation $X = T^{-1} - 1$ to the Kumaraswamy ($K$) distribution, that is, $T \sim K(\alpha, \beta)$, where $\alpha$ and $\beta$ are the shape parameters. Then $X$ has an $IK$ distribution with cdf and pdf given by
$F_X(x) \equiv F(x;\alpha,\beta) = \left(1-(1+x)^{-\alpha}\right)^{\beta}, \quad x>0,\ \alpha>0,\ \beta>0, \qquad (1)$
and
$f_X(x) \equiv f(x;\alpha,\beta) = \alpha\beta\,(1+x)^{-(\alpha+1)}\left(1-(1+x)^{-\alpha}\right)^{\beta-1}, \quad x>0,\ \alpha>0,\ \beta>0, \qquad (2)$
respectively, where $\alpha$ and $\beta$ are both shape parameters. We note that the curves of the pdf and hazard function show that the $IK$ distribution exhibits a long right tail compared with other commonly used distributions. As a result, it affects long-term reliability predictions, producing optimistic predictions of rare events occurring in the right tail of the distribution compared with other well-known distributions. Here, we use the notation $X \sim IK(\alpha, \beta)$ for convenience. Many well-known distributions arise from the $IK$ distribution as special cases, e.g., the Lomax distribution (for $\beta = 1$), the beta Type II distribution (for $\alpha = 1$), and the log-logistic distribution (for $\alpha = \beta = 1$) [19]. Also, using appropriate transformations, the $IK$ distribution can be transformed to many well-known distributions, such as the exponentiated exponential and Weibull, generalized uniform, generalized Lomax, beta Type II and F-distribution, Burr Type III, and log-logistic distributions [19]. Owing to this flexibility, the $IK$ distribution provides a good fit to many real data sets exhibiting such behaviour.
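The following short Python sketch (not part of the original article; function names such as `ik_cdf` are ours) implements the cdf and pdf in (1) and (2) and draws samples via the Kumaraswamy transformation $X = T^{-1} - 1$ described above.
```python
import numpy as np

def ik_cdf(x, alpha, beta):
    # cdf of IK(alpha, beta): (1 - (1 + x)^(-alpha))^beta, x > 0
    return (1.0 - (1.0 + x) ** (-alpha)) ** beta

def ik_pdf(x, alpha, beta):
    # pdf of IK(alpha, beta): alpha*beta*(1+x)^(-(alpha+1)) * (1 - (1+x)^(-alpha))^(beta-1)
    return alpha * beta * (1.0 + x) ** (-(alpha + 1.0)) * (1.0 - (1.0 + x) ** (-alpha)) ** (beta - 1.0)

def ik_sample(alpha, beta, size, rng=np.random.default_rng(0)):
    # inverse-cdf sampling of T ~ Kumaraswamy(alpha, beta), then X = 1/T - 1 ~ IK(alpha, beta)
    u = rng.uniform(size=size)
    t = (1.0 - (1.0 - u) ** (1.0 / beta)) ** (1.0 / alpha)
    return 1.0 / t - 1.0

# quick consistency check: empirical cdf of samples vs. ik_cdf
x0, a, b = 2.0, 1.5, 0.8
samples = ik_sample(a, b, 200_000)
print(np.mean(samples <= x0), ik_cdf(x0, a, b))   # the two values should be close
```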
This paper focuses on the stochastic comparison results between two finite mixture models following IK distributed components. The goal of this paper is to obtain sufficient conditions for which two finite mixture random variables with IK distributed component lifetimes are comparable in the sense of the usual stochastic order, reversed hazard rate order, likelihood ratio order, and ageing faster order in terms of reversed hazard rate order.
The main contributions and organization of the paper are as follows. In the next section, we present several definitions and lemmas that are essential to obtain our main results. Section 3 contains three subsections, together with a description of the proposed model. In Section 3.1, we establish the usual stochastic order between two FMMs based on the concepts of the weak supermajorization and weak submajorization orders. Section 3.2 deals with the ordering results when there is heterogeneity in two parameters. Here, we obtain comparison results with respect to the usual stochastic order and the ageing faster order in terms of the reversed hazard rate. In Section 3.3, we examine ordering results between the FMMs with respect to the reversed hazard rate and likelihood ratio orders. Besides the theoretical contributions, we present many examples and counterexamples for validation and justification.
Throughout the paper, the term “increasing” refers to “nondecreasing”, while the term “decreasing” will mean “nonincreasing”. For any differentiable function $\eta(\cdot)$, we write $\eta'(t)$ to represent the first-order derivative of $\eta(t)$ with respect to $t$. The partial derivative of $\xi(\mathbf{x})$ with respect to its $k$th component is denoted as $\xi_{(k)}(\mathbf{x}) = \partial\xi(\mathbf{x})/\partial x_k$ for $k = 1, \ldots, n$. Also, “$\overset{\mathrm{sign}}{=}$” is used to indicate that the signs on both sides of an equality are the same. We use the notation $\mathbb{R} = (-\infty, +\infty)$, $\mathbb{R}_+ = [0, +\infty)$, $\mathbb{R}^n = (-\infty, +\infty)^n$, and $\mathbb{R}_+^n = [0, +\infty)^n$, respectively.

2. The Basic Definitions and Some Prerequisites

This section presents some preliminary definitions and results, which are essential to establish our main results in the subsequent section. Let $U$ and $V$ be two continuous and non-negative independent random variables with pdfs $f_U(\cdot)$ and $g_V(\cdot)$, cdfs $F_U(\cdot)$ and $G_V(\cdot)$, sfs $\bar F_U(\cdot) = 1 - F_U(\cdot)$ and $\bar G_V(\cdot) = 1 - G_V(\cdot)$, and rhs (reversed hazard rate functions) $\tilde r_U(\cdot) = f_U(\cdot)/F_U(\cdot)$ and $\tilde r_V(\cdot) = g_V(\cdot)/G_V(\cdot)$, respectively.
Definition 1.
The random variable U is smaller than V in the sense of
(i)
usual stochastic order (abbreviated as $U \leq_{\mathrm{st}} V$), if $\bar F_U(x) \leq \bar G_V(x)$ for all $x \in \mathbb{R}_+$;
(ii)
reversed hazard rate order (abbreviated as $U \leq_{\mathrm{rh}} V$), if $G_V(x)/F_U(x)$ is increasing in $x$ for all $x \in \mathbb{R}_+$; or, equivalently, if $\tilde r_U(x) \leq \tilde r_V(x)$ for all $x \in \mathbb{R}_+$;
(iii)
likelihood ratio order (abbreviated as $U \leq_{\mathrm{lr}} V$), if $g_V(x)/f_U(x)$ is increasing in $x$ for all $x \in \mathbb{R}_+$;
(iv)
ageing faster order with respect to the reversed hazard rate order (abbreviated as $U \preceq_{R\text{-}\mathrm{rh}} V$), if $\tilde r_U(x)/\tilde r_V(x)$ is decreasing in $x$ for all $x \in \mathbb{R}_+$.
It is important to note that $U \leq_{\mathrm{lr}} V \Rightarrow U \leq_{\mathrm{rh}} V \Rightarrow U \leq_{\mathrm{st}} V$. Ref. [6] provides a comprehensive overview of stochastic orders and their applications. The idea of majorization and related orders is discussed next, which is highly helpful in establishing the main results in the subsequent sections. Let $\mathbb{R}^n$ be the $n$-dimensional Euclidean space. Let $\boldsymbol\varsigma = (\varsigma_1, \ldots, \varsigma_n)$ and $\boldsymbol\varepsilon = (\varepsilon_1, \ldots, \varepsilon_n)$ be two real vectors. Further, let $\varsigma_{(1)} \leq \cdots \leq \varsigma_{(n)}$ and $\varepsilon_{(1)} \leq \cdots \leq \varepsilon_{(n)}$ denote the increasing arrangements of the components of the vectors $\boldsymbol\varsigma$ and $\boldsymbol\varepsilon$, respectively.
Definition 2
([20]). The vector ς is said to
  • majorize the vector $\boldsymbol\varepsilon$ (abbreviated as $\boldsymbol\varsigma \succeq^{\mathrm{m}} \boldsymbol\varepsilon$), if $\sum_{j=1}^{i} \varsigma_{(j)} \leq \sum_{j=1}^{i} \varepsilon_{(j)}$ for $i = 1, \ldots, n-1$, and $\sum_{j=1}^{n} \varsigma_{(j)} = \sum_{j=1}^{n} \varepsilon_{(j)}$;
  • weakly supermajorize the vector $\boldsymbol\varepsilon$ (abbreviated as $\boldsymbol\varsigma \succeq^{\mathrm{w}} \boldsymbol\varepsilon$), if $\sum_{j=1}^{i} \varsigma_{(j)} \leq \sum_{j=1}^{i} \varepsilon_{(j)}$, for $i = 1, \ldots, n$;
  • weakly submajorize the vector $\boldsymbol\varepsilon$ (abbreviated as $\boldsymbol\varsigma \succeq_{\mathrm{w}} \boldsymbol\varepsilon$), if $\sum_{j=i}^{n} \varsigma_{(j)} \geq \sum_{j=i}^{n} \varepsilon_{(j)}$, for $i = 1, \ldots, n$.
It is obvious that the majorization order implies both the weak supermajorization and weak submajorization orders. In the following, we present a definition that shows how Schur-convex functions preserve the majorization order.
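As an illustration (not from the paper), the three relations in Definition 2 can be checked numerically with partial sums of the sorted components; the helper names below are ours.
```python
import numpy as np

def majorizes(s, e, tol=1e-12):
    # s majorizes e: partial sums of the i smallest components of s are <= those of e, equal totals
    cs, ce = np.cumsum(np.sort(s)), np.cumsum(np.sort(e))
    return bool(np.all(cs[:-1] <= ce[:-1] + tol) and abs(cs[-1] - ce[-1]) <= tol)

def weakly_supermajorizes(s, e, tol=1e-12):
    # sums of the i smallest components of s are <= those of e, for every i
    cs, ce = np.cumsum(np.sort(s)), np.cumsum(np.sort(e))
    return bool(np.all(cs <= ce + tol))

def weakly_submajorizes(s, e, tol=1e-12):
    # sums of the i largest components of s are >= those of e, for every i
    cs, ce = np.cumsum(np.sort(s)[::-1]), np.cumsum(np.sort(e)[::-1])
    return bool(np.all(cs >= ce - tol))

# (0.2, 0.2, 0.6) majorizes (0.3, 0.3, 0.4); these vectors reappear in Example 1 below
print(majorizes([0.2, 0.2, 0.6], [0.3, 0.3, 0.4]))   # True
```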
Definition 3
([20]). A real-valued function $\Xi$ defined on a set $\mathcal{S} \subseteq \mathbb{R}^n$ is said to be Schur-convex (Schur-concave) on $\mathcal{S}$ if $\boldsymbol\varsigma \succeq^{\mathrm{m}} \boldsymbol\varepsilon$ implies $\Xi(\boldsymbol\varsigma) \geq (\leq)\ \Xi(\boldsymbol\varepsilon)$ for any $\boldsymbol\varsigma, \boldsymbol\varepsilon \in \mathcal{S}$.
Definition 4.
Let $P = \{p_{ij}\}$ and $Q = \{q_{ij}\}$ be two $m \times n$ matrices. Further, let $p_1^R, \ldots, p_m^R$ and $q_1^R, \ldots, q_m^R$ be the rows of $P$ and $Q$, respectively, each being a row vector of length $n$. Then, $P$ is said to chain majorize $Q$ (abbreviated as $P \gg Q$) if there exists a finite number of $n \times n$ T-transform matrices $T_{\omega_1}, \ldots, T_{\omega_k}$ such that $Q = P\, T_{\omega_1} \cdots T_{\omega_k}$.
A T-transform matrix has the form $T = \omega I + (1-\omega)\Pi$, where $0 \leq \omega \leq 1$ and $\Pi$ is a permutation matrix that interchanges exactly two coordinates, that is, one pair of rows and columns. Define the set of $2 \times n$ matrices
$\mathcal{L}_n = \left\{ (\boldsymbol\varrho, \boldsymbol\tau) = \begin{pmatrix} \varrho_1 & \cdots & \varrho_n \\ \tau_1 & \cdots & \tau_n \end{pmatrix} : \varrho_i, \tau_j > 0,\ (\varrho_i - \varrho_j)(\tau_i - \tau_j) \leq 0,\ i, j = 1, \ldots, n \right\}.$
Lemma 1.
A differentiable function $\Upsilon : \mathbb{R}_+^{4} \to \mathbb{R}_+$ satisfies $\Upsilon(P) \geq \Upsilon(Q)$ for all $P$, $Q$ such that $P \in \mathcal{L}_2$ and $P \gg Q$, if and only if
(i)
$\Upsilon(P) = \Upsilon(P\Pi)$ for all permutation matrices $\Pi$ and for all $P \in \mathcal{L}_2$; and
(ii)
$\sum_{i=1}^{2} (p_{ik} - p_{ij})\left[\Upsilon_{ik}(P) - \Upsilon_{ij}(P)\right] \geq 0$, for all $j, k = 1, 2$ and for all $P \in \mathcal{L}_2$, where $\Upsilon_{ij}(P) = \partial\Upsilon(P)/\partial p_{ij}$.

3. Model Description and Results

Consider $n$ homogeneous, independent subpopulations of items, denoted by $\pi_1, \ldots, \pi_n$, each of which is infinitely large. Let $X_i$ be the lifetime of an item of $\pi_i$, $i = 1, \ldots, n$. Further, let $R(\mathbf{X}, \mathbf{p})$ denote the RV representing the mixture of items taken from $\pi_1, \ldots, \pi_n$, where $\mathbf{X} = (X_1, \ldots, X_n)$ and $\mathbf{p} = (p_1, \ldots, p_n)$. Here, the $p_i$'s $(>0)$ are known as the mixing proportions, with $\sum_{i=1}^{n} p_i = 1$. Thus, the survival function of the mixture random variable (MRV) $R(\mathbf{X}, \mathbf{p})$ is given by (see [2])
$\bar F_{R(\mathbf{X}, \mathbf{p})}(x) = \sum_{i=1}^{n} p_i\, \bar F_{X_i}(x), \qquad (3)$
where $\bar F_{X_i}(x)$ is the survival function of the items of $\pi_i$. For more details on the formulation of the mixture distribution, readers are referred to [2,21,22]. Because the failure rate is a conditional characteristic, the corresponding mixture failure rate is defined using modified conditional weights (conditioned on survival up to time $x \geq 0$). For a detailed discussion of the mixture failure rate, one may refer to [23,24,25], and so on. In this paper, we consider that the lifetime of a unit of the $i$-th subpopulation, denoted by $X_{\alpha_i,\beta_i}$, follows the $IK(\alpha_i,\beta_i)$ distribution, $i = 1, \ldots, n$. Denote by $R_n(X_{\boldsymbol\alpha,\boldsymbol\beta};\mathbf{p})$ the MRV constructed from $\pi_1, \ldots, \pi_n$, with the $i$-th subpopulation $\pi_i$ following the $IK(\alpha_i,\beta_i)$ distribution, where $X_{\boldsymbol\alpha,\boldsymbol\beta} = (X_{\alpha_1,\beta_1}, \ldots, X_{\alpha_n,\beta_n})$. Thus, from (3), we write
$\bar F_{R_n(X_{\boldsymbol\alpha,\boldsymbol\beta};\mathbf{p})}(x) = \sum_{i=1}^{n} p_i\left[1 - \left(1-(1+x)^{-\alpha_i}\right)^{\beta_i}\right], \quad x > 0, \qquad (4)$
where α = ( α 1 , , α n ) and β = ( β 1 , , β n ) .
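A minimal numerical sketch of the mixture survival function in (4), assuming IK components; the parameter values below are placeholders and the function name `mixture_sf` is ours, not the authors'.
```python
import numpy as np

def mixture_sf(x, p, alpha, beta):
    # \bar F_{R_n}(x) = sum_i p_i [1 - (1 - (1 + x)^(-alpha_i))^(beta_i)]
    p, alpha, beta = map(np.asarray, (p, alpha, beta))
    x = np.asarray(x, dtype=float)[..., None]
    return np.sum(p * (1.0 - (1.0 - (1.0 + x) ** (-alpha)) ** beta), axis=-1)

print(mixture_sf([0.5, 1.0, 2.0], p=[0.2, 0.2, 0.6], alpha=[0.5, 0.4, 0.3], beta=[1.0, 1.0, 1.0]))
```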
In this section, we obtain ordering results between two FMMs under three different scenarios. In particular, we consider heterogeneity in one parameter, two parameters, and three parameters.

3.1. Ordering Results for MMs When Heterogeneity Is Present in One Parameter

In this subsection, we obtain stochastic ordering results by considering heterogeneity in one parameter. The first result studies the usual stochastic ordering between two FMMs when heterogeneity is present in mixing proportions. Furthermore, in this model, we consider α as a common shape parameter vector and β as a fixed shape parameter. The following lemma is useful in proving the upcoming theorem.
Lemma 2.
The function $\eta(x;\alpha,\beta) = 1 - \left(1-(1+x)^{-\alpha}\right)^{\beta}$, $x > 0$, $\alpha > 0$, $\beta > 0$,
(i)
is decreasing and convex with respect to α > 0 for fixed x > 0 , β > 0 ;
(ii)
is increasing with respect to β > 0 for fixed x > 0 , α > 0 .
Proof. 
The proof is straightforward, and hence, it is omitted. □
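Although the proof is omitted, the monotonicity and convexity claims of Lemma 2 are easy to probe numerically for a particular choice of x and β; the snippet below is only such a spot check (with β < 1), not a proof.
```python
import numpy as np

def eta(x, alpha, beta):
    return 1.0 - (1.0 - (1.0 + x) ** (-alpha)) ** beta

alphas = np.linspace(0.1, 10, 500)
vals = eta(2.0, alphas, 0.7)
print(np.all(np.diff(vals) < 0))          # decreasing in alpha
print(np.all(np.diff(vals, n=2) > 0))     # convex in alpha (positive second differences)

betas = np.linspace(0.1, 10, 500)
print(np.all(np.diff(eta(2.0, 0.7, betas)) > 0))   # increasing in beta
```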
Assumption 1.
Let π 1 , , π n be n homogeneous and independent (infinite) subpopulations, with lifetime (denoted by X i ) of the items in the i-th subpopulation π i following an IK ( α i , β i ) distribution, i = 1 , , n .
Theorem 1.
Under the setup in Assumption 1, with $\beta_1 = \cdots = \beta_n = \beta$, for $(\boldsymbol\alpha, \mathbf{p}), (\boldsymbol\alpha, \mathbf{p}^*) \in \mathcal{L}_n$ and $\beta > 0$, we have
$\mathbf{p}^* \preceq_{\mathrm{w}} \mathbf{p} \;\Longrightarrow\; R_n(X_{\boldsymbol\alpha,\beta};\mathbf{p}^*) \leq_{\mathrm{st}} R_n(X_{\boldsymbol\alpha,\beta};\mathbf{p}),$
where $X_{\boldsymbol\alpha,\beta} = (X_{\alpha_1,\beta}, \ldots, X_{\alpha_n,\beta})$.
Proof. 
See Appendix A. □
The following corollary is an immediate consequence of Theorem 1, using the well-known result that the majorization order implies the weak submajorization order ($\preceq^{\mathrm{m}} \Rightarrow \preceq_{\mathrm{w}}$).
Corollary 1.
Based on the assumptions and conditions in Theorem 1, we have
p * m p R n ( X α , β ; p * ) s t R n ( X α , β ; p ) .
To validate Theorem 1, we next provide an example.
Example 1.
For n = 3 , consider α = ( 0.5 , 0.4 , 0.3 ) , p = ( 0.2 , 0.2 , 0.6 ) , p * = ( 0.3 , 0.3 , 0.4 ) , and β = 1 . Clearly, the sufficient conditions in Theorem 1 are satisfied. Based on these numerical values, the sfs of R 3 ( X α , β ; p * ) and R 3 ( X α , β ; p ) are plotted in Figure 1a, validating the result established in Theorem 1.
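A quick numerical cross-check of Example 1 (an illustration we add here, not part of the paper): on a grid of x values, the survival function under p dominates the one under p*.
```python
import numpy as np

def mix_sf(x, p, alpha, beta):
    x = np.asarray(x, dtype=float)[..., None]
    return np.sum(np.asarray(p) * (1 - (1 - (1 + x) ** (-np.asarray(alpha, float))) ** np.asarray(beta, float)), axis=-1)

x = np.linspace(0.01, 50, 2000)
alpha, p, p_star, beta = [0.5, 0.4, 0.3], [0.2, 0.2, 0.6], [0.3, 0.3, 0.4], [1.0] * 3
print(np.all(mix_sf(x, p_star, alpha, beta) <= mix_sf(x, p, alpha, beta)))   # expected: True
```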
Below, we consider a counterexample to reveal that the usual stochastic order between two MRVs may not hold if ( α , p ) and ( α , p * ) do not belong to L 3 .
Counterexample 1.
With $n = 3$, let $\mathbf{p} = (0.1, 0.7, 0.2)$, $\mathbf{p}^* = (0.2, 0.5, 0.3)$, $\boldsymbol\alpha = (3.5, 4.8, 5.6)$, and $\beta = 20$. Here, $(0.2, 0.5, 0.3) \preceq_{\mathrm{w}} (0.1, 0.7, 0.2)$; however, the other two conditions are not satisfied, that is, $(\boldsymbol\alpha, \mathbf{p}), (\boldsymbol\alpha, \mathbf{p}^*) \notin \mathcal{L}_3$. Now, we plot $\bar F_{R_3(X_{\boldsymbol\alpha,\beta};\mathbf{p}^*)}(x) - \bar F_{R_3(X_{\boldsymbol\alpha,\beta};\mathbf{p})}(x)$ in Figure 1b. From the graph, it is clear that the desired usual stochastic order between $R_3(X_{\boldsymbol\alpha,\beta};\mathbf{p}^*)$ and $R_3(X_{\boldsymbol\alpha,\beta};\mathbf{p})$ does not hold.
Denote $X_{\alpha,\boldsymbol\beta} = (X_{\alpha,\beta_1}, \ldots, X_{\alpha,\beta_n})$. In the next result, we obtain the usual stochastic ordering between $R_n(X_{\alpha,\boldsymbol\beta};\mathbf{p}^*)$ and $R_n(X_{\alpha,\boldsymbol\beta};\mathbf{p})$, taking heterogeneity in the mixing proportions. Additionally, we consider a common shape parameter vector $\boldsymbol\beta$ and a fixed $\alpha$.
Theorem 2.
Under the setup in Assumption 1, with $\alpha_1 = \cdots = \alpha_n = \alpha$, for $(\boldsymbol\beta, \mathbf{p}), (\boldsymbol\beta, \mathbf{p}^*) \in \mathcal{L}_n$ and $\alpha > 0$, we have
$\mathbf{p}^* \preceq^{\mathrm{w}} \mathbf{p} \;\Longrightarrow\; R_n(X_{\alpha,\boldsymbol\beta};\mathbf{p}^*) \geq_{\mathrm{st}} R_n(X_{\alpha,\boldsymbol\beta};\mathbf{p}).$
Proof. 
See Appendix A. □
The following example illustrates Theorem 2 for n = 3 .
Example 2.
Assume that p = ( 0.1 , 0.3 , 0.6 ) , p * = ( 0.2 , 0.3 , 0.5 ) , β = ( 0.5 , 0.4 , 0.3 ) , and α = 0.5 . Clearly, the assumptions made in Theorem 2 are satisfied. Using the numerical values, we plot the graphs of R 3 ( X α , β ; p * ) and R 3 ( X α , β ; p ) in Figure 2a, validating the usual stochastic order in Theorem 2.
Next, we consider a counterexample to establish that the result in Theorem 2 does not hold if ( β , p ) and ( β , p * ) do not belong to L 3 .
Counterexample 2.
For $n = 3$, set $\mathbf{p} = (0.2, 0.6, 0.2)$, $\mathbf{p}^* = (0.2, 0.5, 0.3)$, $\boldsymbol\beta = (5.2, 15.8, 5.6)$, and $\alpha = 1$. Here, though $\mathbf{p}^* \preceq^{\mathrm{w}} \mathbf{p}$ holds, the conditions $(\boldsymbol\beta, \mathbf{p}) \in \mathcal{L}_3$ and $(\boldsymbol\beta, \mathbf{p}^*) \in \mathcal{L}_3$ do not hold. Write
$K_1(x) = \bar F_{R_3(X_{\alpha,\boldsymbol\beta};\mathbf{p}^*)}(x) - \bar F_{R_3(X_{\alpha,\boldsymbol\beta};\mathbf{p})}(x) = \sum_{i=1}^{3} p_i^*\left[1 - \left(1-(1+x)^{-1}\right)^{\beta_i}\right] - \sum_{i=1}^{3} p_i\left[1 - \left(1-(1+x)^{-1}\right)^{\beta_i}\right].$
Now, for $x = 10$, $K_1(10) = 0.00262105\ (>0)$, and for $x = 100$, $K_1(100) = -0.00408561\ (<0)$, establishing that $K_1(x)$ changes its sign. Thus, clearly the usual stochastic order between $R_3(X_{\alpha,\boldsymbol\beta};\mathbf{p}^*)$ and $R_3(X_{\alpha,\boldsymbol\beta};\mathbf{p})$ does not hold.
In the upcoming theorem, we consider heterogeneity in the shape parameter α , while β is fixed. In addition, we assume the mixing proportion vector p to be common. Denote X α * , β = ( X α 1 * , β , , X α n * , β ) .
Theorem 3.
Under the setup as in Assumption 1, with $\beta_1 = \cdots = \beta_n = \beta$ $(0 < \beta < 1)$, for $(\boldsymbol\alpha, \mathbf{p}), (\boldsymbol\alpha^*, \mathbf{p}) \in \mathcal{L}_n$, we have
$\boldsymbol\alpha^* \preceq^{\mathrm{w}} \boldsymbol\alpha \;\Longrightarrow\; R_n(X_{\boldsymbol\alpha^*,\beta};\mathbf{p}) \leq_{\mathrm{st}} R_n(X_{\boldsymbol\alpha,\beta};\mathbf{p}).$
Proof. 
See Appendix A. □
The following example illustrates Theorem 3.
Example 3.
Consider n = 3 , α = ( 1.1 , 0.9 , 0.4 ) , α * = ( 1.2 , 0.9 , 0.8 ) , p = ( 0.1 , 0.2 , 0.7 ) , and β = 0.5 . Clearly, the assumptions made in Theorem 3 are satisfied. Using the numerical values, we plot the graphs of F ¯ R 3 ( X α * , β ; p ) ( x ) and F ¯ R 3 ( X α , β ; p ) ( x ) in Figure 2b, validating the usual stochastic order in Theorem 3.
Next, we present a counterexample to show that the condition α * w α in Theorem 3 cannot be dropped to get the usual stochastic ordering between the MRVs.
Counterexample 3.
Assume that $n = 3$, $\boldsymbol\alpha = (1.2, 0.8, 0.7)$, $\boldsymbol\alpha^* = (1.5, 0.9, 0.55)$, $\mathbf{p} = (0.20, 0.35, 0.45)$, and $\beta = 0.5$. Here, $(1.5, 0.9, 0.55) \not\preceq^{\mathrm{w}} (1.2, 0.8, 0.7)$, while the other conditions are satisfied. We plot the difference $\bar F_{R_3(X_{\boldsymbol\alpha^*,\beta};\mathbf{p})}(x) - \bar F_{R_3(X_{\boldsymbol\alpha,\beta};\mathbf{p})}(x)$ in Figure 3a, which suggests that the usual stochastic order between $R_3(X_{\boldsymbol\alpha^*,\beta};\mathbf{p})$ and $R_3(X_{\boldsymbol\alpha,\beta};\mathbf{p})$ does not hold.

3.2. Ordering Results for MMs When Heterogeneity Is Present in Two Parameters

In the previous subsection, we have assumed heterogeneity in one parameter. There are various situations where more than one parameter is heterogeneous. In this subsection, we consider heterogeneity in two parameters and obtain some ordering results. The results are established using the concept of chain majorization between the parameter-matrices of two MMs. The following lemma is useful in proving the upcoming theorem.
Lemma 3.
The function $\zeta(x;\alpha,\beta) = (1+x)^{-\alpha}\left(1-(1+x)^{-\alpha}\right)^{\beta-1}$, for fixed $x > 0$ and $0 < \beta < 1$, is decreasing with respect to $\alpha > 0$.
Proof. 
The proof is straightforward, and thus, it is omitted. □
Theorem 4.
Let $\bar F_{R_2(X_{\boldsymbol\alpha,\beta};\mathbf{p})}(x) = \sum_{i=1}^{2} p_i\left[1 - \left(1-(1+x)^{-\alpha_i}\right)^{\beta}\right]$ and $\bar F_{R_2(X_{\boldsymbol\alpha^*,\beta};\mathbf{p}^*)}(x) = \sum_{i=1}^{2} p_i^*\left[1 - \left(1-(1+x)^{-\alpha_i^*}\right)^{\beta}\right]$ be the sfs of the MRVs $R_2(X_{\boldsymbol\alpha,\beta};\mathbf{p})$ and $R_2(X_{\boldsymbol\alpha^*,\beta};\mathbf{p}^*)$, respectively. For $(\mathbf{p}, \boldsymbol\alpha) \in \mathcal{L}_2$ and $\beta \in (0, 1)$, we have
$\begin{pmatrix} p_1 & p_2 \\ \alpha_1 & \alpha_2 \end{pmatrix} \gg \begin{pmatrix} p_1^* & p_2^* \\ \alpha_1^* & \alpha_2^* \end{pmatrix} \;\Longrightarrow\; R_2(X_{\boldsymbol\alpha,\beta};\mathbf{p}) \geq_{\mathrm{st}} R_2(X_{\boldsymbol\alpha^*,\beta};\mathbf{p}^*).$
Proof. 
See Appendix A. □
To validate Theorem 4, we consider the following example.
Example 4.
For $n = 2$, consider $\mathbf{p} = (0.6, 0.4)$, $\mathbf{p}^* = (0.46, 0.54)$, $\boldsymbol\alpha = (1, 9)$, $\boldsymbol\alpha^* = (6.6, 3.4)$, and $\beta = 0.5$. Clearly, we observe that $\begin{pmatrix} 0.6 & 0.4 \\ 1 & 9 \end{pmatrix} \in \mathcal{L}_2$. Take a T-transform matrix $T_{0.3} = \begin{pmatrix} 0.3 & 0.7 \\ 0.7 & 0.3 \end{pmatrix}$. It can then be seen that
$\begin{pmatrix} 0.46 & 0.54 \\ 6.6 & 3.4 \end{pmatrix} = \begin{pmatrix} 0.6 & 0.4 \\ 1 & 9 \end{pmatrix} \times \begin{pmatrix} 0.3 & 0.7 \\ 0.7 & 0.3 \end{pmatrix}.$
Thus,
$\begin{pmatrix} 0.6 & 0.4 \\ 1 & 9 \end{pmatrix} \gg \begin{pmatrix} 0.46 & 0.54 \\ 6.6 & 3.4 \end{pmatrix},$
and from Theorem 4, we obtain $R_2(X_{\boldsymbol\alpha,\beta};\mathbf{p}) \geq_{\mathrm{st}} R_2(X_{\boldsymbol\alpha^*,\beta};\mathbf{p}^*)$, as observed in Figure 3b.
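For readers who want to reproduce the check, the following sketch (ours, with illustrative names) verifies the T-transform identity of Example 4 and compares the two survival functions on a grid.
```python
import numpy as np

P = np.array([[0.6, 0.4],     # row 1: mixing proportions p
              [1.0, 9.0]])    # row 2: shape parameters alpha
T = 0.3 * np.eye(2) + 0.7 * np.array([[0, 1], [1, 0]])   # T-transform with omega = 0.3
Q = P @ T
print(Q)                      # [[0.46, 0.54], [6.6, 3.4]]

def mix_sf(x, p, alpha, beta):
    x = np.asarray(x, dtype=float)[..., None]
    return np.sum(p * (1 - (1 - (1 + x) ** (-alpha)) ** beta), axis=-1)

x = np.linspace(0.01, 100, 5000)
print(np.all(mix_sf(x, P[0], P[1], 0.5) >= mix_sf(x, Q[0], Q[1], 0.5)))   # expected: True
```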
Note that Theorem 4 has been established for β ( 0 , 1 ) with other sufficient conditions. Thus, one may be curious to know whether the usual stochastic order in Theorem 4 holds if β is not in ( 0 , 1 ) . In this regard, we consider the next counterexample.
Counterexample 4.
For $n = 2$, set $\boldsymbol\alpha = (2, 3)$, $\mathbf{p} = (0.7, 0.3)$, $\boldsymbol\alpha^* = (2.6, 2.4)$, $\mathbf{p}^* = (0.46, 0.54)$, and $\beta = 100$. Clearly, $\beta \notin (0, 1)$. Further, for a T-transform matrix $T_{0.4} = \begin{pmatrix} 0.4 & 0.6 \\ 0.6 & 0.4 \end{pmatrix}$, we obtain
$\begin{pmatrix} 0.46 & 0.54 \\ 2.6 & 2.4 \end{pmatrix} = \begin{pmatrix} 0.7 & 0.3 \\ 2 & 3 \end{pmatrix} \times \begin{pmatrix} 0.4 & 0.6 \\ 0.6 & 0.4 \end{pmatrix},$
implying that
$\begin{pmatrix} 0.7 & 0.3 \\ 2 & 3 \end{pmatrix} \gg \begin{pmatrix} 0.46 & 0.54 \\ 2.6 & 2.4 \end{pmatrix}.$
Now, we plot the graph of the difference of the sfs of R 2 ( X α , β ; p ) and R 2 ( X α * , β ; p * ) in Figure 4a. From this figure, the usual stochastic order between R 2 ( X α , β ; p ) and R 2 ( X α * , β ; p * ) does not hold, since the graph crosses the x-axis.
Next, we extend the result in Theorem 4 for n number of subpopulations.
Theorem 5.
Let $\bar F_{R_n(X_{\boldsymbol\alpha,\beta};\mathbf{p})}(x) = \sum_{i=1}^{n} p_i\left[1 - \left(1-(1+x)^{-\alpha_i}\right)^{\beta}\right]$ and $\bar F_{R_n(X_{\boldsymbol\alpha^*,\beta};\mathbf{p}^*)}(x) = \sum_{i=1}^{n} p_i^*\left[1 - \left(1-(1+x)^{-\alpha_i^*}\right)^{\beta}\right]$ be the sfs of the MRVs $R_n(X_{\boldsymbol\alpha,\beta};\mathbf{p})$ and $R_n(X_{\boldsymbol\alpha^*,\beta};\mathbf{p}^*)$, respectively. For $(\mathbf{p}, \boldsymbol\alpha) \in \mathcal{L}_n$ and $\beta \in (0, 1)$,
$\begin{pmatrix} p_1 & \cdots & p_n \\ \alpha_1 & \cdots & \alpha_n \end{pmatrix} \gg \begin{pmatrix} p_1^* & \cdots & p_n^* \\ \alpha_1^* & \cdots & \alpha_n^* \end{pmatrix} \;\Longrightarrow\; R_n(X_{\boldsymbol\alpha,\beta};\mathbf{p}) \geq_{\mathrm{st}} R_n(X_{\boldsymbol\alpha^*,\beta};\mathbf{p}^*).$
Proof. 
The proof of this theorem is similar to that of Theorem 4.2 of [17]. Thus, it is omitted. □
It is a well-known fact that a finite product of T-transform matrices with a common structure yields a T-transform matrix. Using this, the following corollary is an immediate consequence of Theorem 5.
Corollary 2.
Consider $k$ T-transform matrices with a common structure, denoted by $T_{\omega_1}, \ldots, T_{\omega_k}$. Then, under the setup in Theorem 5, with $(\mathbf{p}, \boldsymbol\alpha) \in \mathcal{L}_n$ and $\beta \in (0, 1)$,
$\begin{pmatrix} p_1 & \cdots & p_n \\ \alpha_1 & \cdots & \alpha_n \end{pmatrix} \gg \begin{pmatrix} p_1^* & \cdots & p_n^* \\ \alpha_1^* & \cdots & \alpha_n^* \end{pmatrix} \;\Longrightarrow\; R_n(X_{\boldsymbol\alpha,\beta};\mathbf{p}) \geq_{\mathrm{st}} R_n(X_{\boldsymbol\alpha^*,\beta};\mathbf{p}^*).$
From the result in Corollary 2, an obvious question is “does the result in Corollary 2 hold if the T-transform matrices have different structures?” In the next theorem we discuss this issue. We observe that a similar result to Corollary 2 holds with an additional assumption.
Theorem 6.
Consider k number of T-transformed matrices with different structures, denoted by T ω 1 , , T ω k . Then, under the similar setup as in Theorem 5, with ( p , α ) L n , ( p , α ) T ω 1 T ω i L n , i = 1 , , k 1 , ( k 2 ) , and β ( 0 , 1 ) ,
p 1 * p n * α 1 * α n * = p 1 p n α 1 α n T ω 1 T ω k R n ( X α , β ; p ) s t R n ( X α * , β ; p * ) .
Proof. 
The proof is similar to that of the proof of Theorem 4.3 of [17]. Hence, it is not presented. □
The preceding results of this subsection deal with the stochastic comparison of two MRVs when the matrix ( p , α ) changes to another matrix ( p * , α * ) with fixed β . In the upcoming results, we assume that the matrix ( p , β ) changes to ( p * , β * ) with fixed α . First, we consider two subpopulations. The following lemma is useful to establish the next theorem.
Lemma 4.
The function $\varkappa(x; p, \alpha, \beta) = p\left(1-(1+x)^{-\alpha}\right)^{\beta}$
(i)
is decreasing with respect to β for fixed p > 0 , α > 0 ;
(ii)
is increasing with respect to p for fixed β > 0 , α > 0 .
Proof. 
The proof is straightforward, and thus, it is omitted. □
Theorem 7.
Let $\bar F_{R_2(X_{\alpha,\boldsymbol\beta};\mathbf{p})}(x) = \sum_{i=1}^{2} p_i\left[1 - \left(1-(1+x)^{-\alpha}\right)^{\beta_i}\right]$ and $\bar F_{R_2(X_{\alpha,\boldsymbol\beta^*};\mathbf{p}^*)}(x) = \sum_{i=1}^{2} p_i^*\left[1 - \left(1-(1+x)^{-\alpha}\right)^{\beta_i^*}\right]$ be the sfs of the MRVs $R_2(X_{\alpha,\boldsymbol\beta};\mathbf{p})$ and $R_2(X_{\alpha,\boldsymbol\beta^*};\mathbf{p}^*)$, respectively. For $(\mathbf{p}, \boldsymbol\beta) \in \mathcal{L}_2$ and fixed $\alpha > 0$,
$\begin{pmatrix} p_1 & p_2 \\ \beta_1 & \beta_2 \end{pmatrix} \gg \begin{pmatrix} p_1^* & p_2^* \\ \beta_1^* & \beta_2^* \end{pmatrix} \;\Longrightarrow\; R_2(X_{\alpha,\boldsymbol\beta};\mathbf{p}) \leq_{\mathrm{st}} R_2(X_{\alpha,\boldsymbol\beta^*};\mathbf{p}^*).$
Proof. 
See Appendix A. □
In order to justify Theorem 7, an example is provided.
Example 5.
Let $\mathbf{p} = (0.2, 0.8)$, $\mathbf{p}^* = (0.74, 0.26)$, $\boldsymbol\beta = (6, 2)$, $\boldsymbol\beta^* = (2.4, 5.6)$, and $\alpha = 0.6$. It is not hard to check that $(\mathbf{p}, \boldsymbol\beta) \in \mathcal{L}_2$. Further, let $T_{0.1} = \begin{pmatrix} 0.1 & 0.9 \\ 0.9 & 0.1 \end{pmatrix}$. It can then be seen that
$\begin{pmatrix} 0.74 & 0.26 \\ 2.4 & 5.6 \end{pmatrix} = \begin{pmatrix} 0.2 & 0.8 \\ 6 & 2 \end{pmatrix} \times \begin{pmatrix} 0.1 & 0.9 \\ 0.9 & 0.1 \end{pmatrix},$
which implies that
$\begin{pmatrix} 0.2 & 0.8 \\ 6 & 2 \end{pmatrix} \gg \begin{pmatrix} 0.74 & 0.26 \\ 2.4 & 5.6 \end{pmatrix}.$
Thus, from Theorem 7, we obtain $R_2(X_{\alpha,\boldsymbol\beta};\mathbf{p}) \leq_{\mathrm{st}} R_2(X_{\alpha,\boldsymbol\beta^*};\mathbf{p}^*)$, which can be verified from Figure 4b.
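A similar do-it-yourself check for Example 5 (fixed α, heterogeneity in (p, β)); this is an added sketch, not the authors' code.
```python
import numpy as np

P = np.array([[0.2, 0.8],     # mixing proportions p
              [6.0, 2.0]])    # shape parameters beta
T = 0.1 * np.eye(2) + 0.9 * np.array([[0, 1], [1, 0]])   # T-transform with omega = 0.1
Q = P @ T                     # [[0.74, 0.26], [2.4, 5.6]]

def mix_sf(x, p, beta, alpha=0.6):
    x = np.asarray(x, dtype=float)[..., None]
    return np.sum(p * (1 - (1 - (1 + x) ** (-alpha)) ** beta), axis=-1)

x = np.linspace(0.01, 200, 5000)
print(np.all(mix_sf(x, P[0], P[1]) <= mix_sf(x, Q[0], Q[1])))   # expected: True
```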
The following counterexample shows that the desired ordering result in Theorem 7 does not hold if ( p , β ) L 2 .
Counterexample 5.
Let us assume
$\begin{pmatrix} p_1 & p_2 \\ \beta_1 & \beta_2 \end{pmatrix} = \begin{pmatrix} 0.6 & 0.4 \\ 7 & 1 \end{pmatrix} \notin \mathcal{L}_2 \quad\text{and}\quad \begin{pmatrix} p_1^* & p_2^* \\ \beta_1^* & \beta_2^* \end{pmatrix} = \begin{pmatrix} 0.48 & 0.52 \\ 3.4 & 4.6 \end{pmatrix} \notin \mathcal{L}_2.$
It is then easy to check that
$\begin{pmatrix} 0.48 & 0.52 \\ 3.4 & 4.6 \end{pmatrix} = \begin{pmatrix} 0.6 & 0.4 \\ 7 & 1 \end{pmatrix} \times \begin{pmatrix} 0.4 & 0.6 \\ 0.6 & 0.4 \end{pmatrix},$
where $T_{0.4} = \begin{pmatrix} 0.4 & 0.6 \\ 0.6 & 0.4 \end{pmatrix}$, implying that
$\begin{pmatrix} 0.6 & 0.4 \\ 7 & 1 \end{pmatrix} \gg \begin{pmatrix} 0.48 & 0.52 \\ 3.4 & 4.6 \end{pmatrix}.$
Now, the difference between the sfs of the MRVs R 2 ( X α , β ; p ) and R 2 ( X α , β * ; p * ) is plotted in Figure 5a. Clearly, the difference takes negative as well as positive values, which means that the desired usual stochastic order in Theorem 7 does not hold.
Next, we present a result dealing with n number of subpopulations.
Theorem 8.
Let $\bar F_{R_n(X_{\alpha,\boldsymbol\beta};\mathbf{p})}(x) = \sum_{i=1}^{n} p_i\left[1 - \left(1-(1+x)^{-\alpha}\right)^{\beta_i}\right]$ and $\bar F_{R_n(X_{\alpha,\boldsymbol\beta^*};\mathbf{p}^*)}(x) = \sum_{i=1}^{n} p_i^*\left[1 - \left(1-(1+x)^{-\alpha}\right)^{\beta_i^*}\right]$ be the sfs of the MRVs $R_n(X_{\alpha,\boldsymbol\beta};\mathbf{p})$ and $R_n(X_{\alpha,\boldsymbol\beta^*};\mathbf{p}^*)$, respectively. For $(\mathbf{p}, \boldsymbol\beta) \in \mathcal{L}_n$ and fixed $\alpha > 0$,
$\begin{pmatrix} p_1 & \cdots & p_n \\ \beta_1 & \cdots & \beta_n \end{pmatrix} \gg \begin{pmatrix} p_1^* & \cdots & p_n^* \\ \beta_1^* & \cdots & \beta_n^* \end{pmatrix} \;\Longrightarrow\; R_n(X_{\alpha,\boldsymbol\beta};\mathbf{p}) \leq_{\mathrm{st}} R_n(X_{\alpha,\boldsymbol\beta^*};\mathbf{p}^*).$
Proof. 
The proof is similar to that of Theorem 5, and thus, it is omitted. □
The following corollary can be established from Theorem 8 using arguments similar to Corollary 2.
Corollary 3.
Consider k number of T-transform matrices denoted by T ω 1 , , T ω k having a common structure. Then, under the setup in Theorem 8, for ( p , β ) L n and fixed α > 0 ,
$\begin{pmatrix} p_1 & \cdots & p_n \\ \beta_1 & \cdots & \beta_n \end{pmatrix} \gg \begin{pmatrix} p_1^* & \cdots & p_n^* \\ \beta_1^* & \cdots & \beta_n^* \end{pmatrix} \;\Longrightarrow\; R_n(X_{\alpha,\boldsymbol\beta};\mathbf{p}) \leq_{\mathrm{st}} R_n(X_{\alpha,\boldsymbol\beta^*};\mathbf{p}^*).$
The next theorem proves a result associated with k ( 2 ) number of T-transformed matrices having different structures.
Theorem 9.
Let $T_{\omega_1}, \ldots, T_{\omega_k}$ $(k \geq 2)$ be T-transform matrices with different structures. Then, under the setup as in Theorem 8, for $(\mathbf{p}, \boldsymbol\beta) \in \mathcal{L}_n$, $\begin{pmatrix} p_1 & \cdots & p_n \\ \beta_1 & \cdots & \beta_n \end{pmatrix} T_{\omega_1} \cdots T_{\omega_i} \in \mathcal{L}_n$ for $i = 1, \ldots, k-1$, and fixed $\alpha > 0$,
$\begin{pmatrix} p_1^* & \cdots & p_n^* \\ \beta_1^* & \cdots & \beta_n^* \end{pmatrix} = \begin{pmatrix} p_1 & \cdots & p_n \\ \beta_1 & \cdots & \beta_n \end{pmatrix} T_{\omega_1} \cdots T_{\omega_k} \;\Longrightarrow\; R_n(X_{\alpha,\boldsymbol\beta};\mathbf{p}) \leq_{\mathrm{st}} R_n(X_{\alpha,\boldsymbol\beta^*};\mathbf{p}^*).$
We end this subsection with ageing faster order between two MRVs. Here, we assume that the mixing proportions and one of the shape parameter vectors are varying. We recall that using ageing faster order one is able to compare the relative ageings of two engineering systems. In the following theorem, we study the ageing faster order in terms of the reversed hazard rate function.
Theorem 10.
Let $R_n(X_{\alpha,\boldsymbol\beta};\mathbf{p})$ and $R_n(X_{\alpha,\boldsymbol\beta^*};\mathbf{p}^*)$ be the MRVs with reversed hazard rate functions $\tilde r_{R_n(X_{\alpha,\boldsymbol\beta};\mathbf{p})}(x)$ and $\tilde r_{R_n(X_{\alpha,\boldsymbol\beta^*};\mathbf{p}^*)}(x)$, respectively. Then,
$R_n(X_{\alpha,\boldsymbol\beta};\mathbf{p}) \preceq_{R\text{-}\mathrm{rh}} R_n(X_{\alpha,\boldsymbol\beta^*};\mathbf{p}^*),$
provided $\max_{\{i,j:\, i \neq j\}} |\beta_i - \beta_j| \leq \min_{\{i,j:\, i \neq j\}} |\beta_i^* - \beta_j^*|$, for all $i, j = 1, \ldots, n$.
Proof. 
See Appendix A. □
The following example illustrates Theorem 10 for n = 3 .
Example 6.
Assume that $\boldsymbol\beta = (0.1, 0.2, 0.3)$, $\boldsymbol\beta^* = (0.5, 1, 2)$, $\mathbf{p} = (0.1, 0.3, 0.6)$, $\mathbf{p}^* = (0.2, 0.3, 0.5)$, and $\alpha = 2$. Clearly, the condition $\max_{i \neq j}|\beta_i - \beta_j| \leq \min_{i \neq j}|\beta_i^* - \beta_j^*|$ is satisfied for $i \neq j \in \{1, 2, 3\}$. Taking these numerical values of the parameters, $\tilde r_{R_3(X_{\alpha,\boldsymbol\beta};\mathbf{p})}(x)/\tilde r_{R_3(X_{\alpha,\boldsymbol\beta^*};\mathbf{p}^*)}(x)$ is plotted in Figure 5b, validating the result in Theorem 10.
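The monotonicity underlying Example 6 can also be inspected numerically; the sketch below (added by us) evaluates the ratio of the two mixture reversed hazard rates on a grid and checks that it is non-increasing.
```python
import numpy as np

def mix_pdf(x, p, alpha, beta):
    x = np.asarray(x, dtype=float)[..., None]
    p, alpha, beta = map(np.asarray, (p, alpha, beta))
    return np.sum(p * alpha * beta * (1 + x) ** (-(alpha + 1)) * (1 - (1 + x) ** (-alpha)) ** (beta - 1), axis=-1)

def mix_cdf(x, p, alpha, beta):
    x = np.asarray(x, dtype=float)[..., None]
    p, alpha, beta = map(np.asarray, (p, alpha, beta))
    return np.sum(p * (1 - (1 + x) ** (-alpha)) ** beta, axis=-1)

x = np.linspace(0.01, 20, 2000)
alpha = [2.0, 2.0, 2.0]
num = mix_pdf(x, [0.1, 0.3, 0.6], alpha, [0.1, 0.2, 0.3]) / mix_cdf(x, [0.1, 0.3, 0.6], alpha, [0.1, 0.2, 0.3])
den = mix_pdf(x, [0.2, 0.3, 0.5], alpha, [0.5, 1.0, 2.0]) / mix_cdf(x, [0.2, 0.3, 0.5], alpha, [0.5, 1.0, 2.0])
print(np.all(np.diff(num / den) <= 1e-9))   # expected: True (non-increasing ratio)
```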

3.3. Ordering Results for MMs When Heterogeneity Is Present in Three Parameters

In the previous subsection, we assume that two parameters are heterogeneous. In this subsection, we present the ordering results considering heterogeneity in three parameters. We mainly obtain the stochastic comparison results between two MRVs with respect to the reversed hazard rate and likelihood ratio orders. First, we provide the reversed hazard rate order between R n ( X α , β ; p ) and R n ( X α * , β * ; p * ) .
Theorem 11.
Consider two MRVs $R_n(X_{\boldsymbol\alpha,\boldsymbol\beta};\mathbf{p})$ and $R_n(X_{\boldsymbol\alpha^*,\boldsymbol\beta^*};\mathbf{p}^*)$ with sfs $\bar F_{R_n(X_{\boldsymbol\alpha,\boldsymbol\beta};\mathbf{p})}(x)$ and $\bar F_{R_n(X_{\boldsymbol\alpha^*,\boldsymbol\beta^*};\mathbf{p}^*)}(x)$, respectively. Then,
$R_n(X_{\boldsymbol\alpha,\boldsymbol\beta};\mathbf{p}) \leq_{\mathrm{rh}} R_n(X_{\boldsymbol\alpha^*,\boldsymbol\beta^*};\mathbf{p}^*),$
provided $\max\{\alpha_1^*, \ldots, \alpha_n^*\} \leq \min\{\alpha_1, \ldots, \alpha_n\}$ and $\max\{\alpha_1\beta_1, \ldots, \alpha_n\beta_n\} \leq \min\{\alpha_1^*\beta_1^*, \ldots, \alpha_n^*\beta_n^*\}$.
Proof. 
See Appendix A. □
An illustrative example is provided below for n = 3 .
Example 7.
Assume that $\mathbf{p} = (0.1, 0.7, 0.2)$, $\mathbf{p}^* = (0.2, 0.5, 0.3)$, $\boldsymbol\alpha = (5, 8, 6)$, $\boldsymbol\beta = (2, 1, 1)$, $\boldsymbol\alpha^* = (3, 4, 2)$, and $\boldsymbol\beta^* = (5, 3, 6)$. Clearly, $\max\{3, 4, 2\} \leq \min\{5, 8, 6\}$ and $\max\{10, 8, 6\} \leq \min\{15, 12, 12\}$. Thus, the conditions of Theorem 11 hold. Now, the ratio of the cdfs of $R_3(X_{\boldsymbol\alpha,\boldsymbol\beta};\mathbf{p})$ and $R_3(X_{\boldsymbol\alpha^*,\boldsymbol\beta^*};\mathbf{p}^*)$ is plotted in Figure 6a, illustrating the result $R_3(X_{\boldsymbol\alpha,\boldsymbol\beta};\mathbf{p}) \leq_{\mathrm{rh}} R_3(X_{\boldsymbol\alpha^*,\boldsymbol\beta^*};\mathbf{p}^*)$ of Theorem 11.
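As an added numerical illustration of Example 7, the ratio of the two mixture cdfs should be non-increasing in x, which is exactly the reversed hazard rate order of Theorem 11.
```python
import numpy as np

def mix_cdf(x, p, alpha, beta):
    x = np.asarray(x, dtype=float)[..., None]
    p, alpha, beta = map(np.asarray, (p, alpha, beta))
    return np.sum(p * (1 - (1 + x) ** (-alpha)) ** beta, axis=-1)

x = np.linspace(0.05, 50, 4000)
ratio = mix_cdf(x, [0.1, 0.7, 0.2], [5, 8, 6], [2, 1, 1]) / mix_cdf(x, [0.2, 0.5, 0.3], [3, 4, 2], [5, 3, 6])
print(np.all(np.diff(ratio) <= 1e-9))   # expected: True
```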
The following counterexample illustrates that the result in Theorem 11 does not hold if $\max\{\alpha_1^*, \alpha_2^*, \alpha_3^*\} \nleq \min\{\alpha_1, \alpha_2, \alpha_3\}$.
Counterexample 6.
Set $\mathbf{p} = (0.60, 0.25, 0.15)$, $\mathbf{p}^* = (0.45, 0.30, 0.25)$, $\boldsymbol\alpha = (1, 3, 5)$, $\boldsymbol\beta = (3, 6, 9)$, $\boldsymbol\alpha^* = (2, 4, 6)$, and $\boldsymbol\beta^* = (25, 30, 35)$. Obviously, $\max\{2, 4, 6\} \nleq \min\{1, 3, 5\}$ and $\max\{3, 18, 45\} \leq \min\{50, 120, 210\}$. It can be seen that all the conditions of Theorem 11 are satisfied except $\max\{2, 4, 6\} \leq \min\{1, 3, 5\}$. Now, the ratio of the cdfs of the MRVs $R_3(X_{\boldsymbol\alpha,\boldsymbol\beta};\mathbf{p})$ and $R_3(X_{\boldsymbol\alpha^*,\boldsymbol\beta^*};\mathbf{p}^*)$ is presented in Figure 6b, from which we see that the ratio is nonmonotone in $x > 0$. As a conclusion, $R_3(X_{\boldsymbol\alpha,\boldsymbol\beta};\mathbf{p}) \not\leq_{\mathrm{rh}} R_3(X_{\boldsymbol\alpha^*,\boldsymbol\beta^*};\mathbf{p}^*)$. In other words, the reversed hazard rate order between $R_3(X_{\boldsymbol\alpha,\boldsymbol\beta};\mathbf{p})$ and $R_3(X_{\boldsymbol\alpha^*,\boldsymbol\beta^*};\mathbf{p}^*)$ does not hold.
In the next theorem, under some certain parameter restrictions, the likelihood ratio ordering between two MRVs R n ( X α , β ; p ) and R n ( X α * , β * ; p * ) is presented.
Theorem 12.
Let $f_{R_n(X_{\boldsymbol\alpha,\boldsymbol\beta};\mathbf{p})}(x) = \sum_{i=1}^{n} p_i\alpha_i\beta_i(1+x)^{-(\alpha_i+1)}\left(1-(1+x)^{-\alpha_i}\right)^{\beta_i-1}$ and $f_{R_n(X_{\boldsymbol\alpha^*,\boldsymbol\beta^*};\mathbf{p}^*)}(x) = \sum_{i=1}^{n} p_i^*\alpha_i^*\beta_i^*(1+x)^{-(\alpha_i^*+1)}\left(1-(1+x)^{-\alpha_i^*}\right)^{\beta_i^*-1}$ be the pdfs of the MRVs $R_n(X_{\boldsymbol\alpha,\boldsymbol\beta};\mathbf{p})$ and $R_n(X_{\boldsymbol\alpha^*,\boldsymbol\beta^*};\mathbf{p}^*)$, respectively. Then,
$R_n(X_{\boldsymbol\alpha,\boldsymbol\beta};\mathbf{p}) \geq_{\mathrm{lr}} R_n(X_{\boldsymbol\alpha^*,\boldsymbol\beta^*};\mathbf{p}^*),$
provided $\max\{\alpha_1, \ldots, \alpha_n\} \leq \min\{\alpha_1^*, \ldots, \alpha_n^*\}$ and $\min\{\alpha_1\beta_1, \ldots, \alpha_n\beta_n\} \geq \max\{\alpha_1^*\beta_1^*, \ldots, \alpha_n^*\beta_n^*\}$.
Proof. 
See Appendix A. □
The following example illustrates Theorem 12 for n = 3 .
Example 8.
Set $\mathbf{p} = (0.2, 0.4, 0.4)$, $\mathbf{p}^* = (0.3, 0.5, 0.2)$, $\boldsymbol\alpha = (2, 4, 6)$, $\boldsymbol\beta = (25, 13, 9)$, $\boldsymbol\alpha^* = (8, 10, 12)$, and $\boldsymbol\beta^* = (3, 4, 1)$. Observe that $\max\{2, 4, 6\} \leq \min\{8, 10, 12\}$ and $\max\{24, 40, 12\} \leq \min\{50, 52, 54\}$. Thus, all the conditions of Theorem 12 are satisfied. Based on the numerical values of the parameters, the ratio of the pdfs of the MRVs $R_3(X_{\boldsymbol\alpha,\boldsymbol\beta};\mathbf{p})$ and $R_3(X_{\boldsymbol\alpha^*,\boldsymbol\beta^*};\mathbf{p}^*)$ is provided in Figure 7a, which readily establishes that $R_3(X_{\boldsymbol\alpha,\boldsymbol\beta};\mathbf{p}) \geq_{\mathrm{lr}} R_3(X_{\boldsymbol\alpha^*,\boldsymbol\beta^*};\mathbf{p}^*)$, validating the result in Theorem 12.
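Analogously, an added spot check for Example 8: the ratio of the two mixture pdfs should be non-decreasing in x, which corresponds to the likelihood ratio comparison of Theorem 12.
```python
import numpy as np

def mix_pdf(x, p, alpha, beta):
    x = np.asarray(x, dtype=float)[..., None]
    p, alpha, beta = map(np.asarray, (p, alpha, beta))
    return np.sum(p * alpha * beta * (1 + x) ** (-(alpha + 1)) * (1 - (1 + x) ** (-alpha)) ** (beta - 1), axis=-1)

x = np.linspace(0.05, 30, 3000)
ratio = mix_pdf(x, [0.2, 0.4, 0.4], [2, 4, 6], [25, 13, 9]) / mix_pdf(x, [0.3, 0.5, 0.2], [8, 10, 12], [3, 4, 1])
print(np.all(np.diff(ratio) >= -1e-9))   # expected: True
```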
Next, we present a counterexample to show that the condition $\max\{\alpha_1, \ldots, \alpha_n\} \leq \min\{\alpha_1^*, \ldots, \alpha_n^*\}$ is necessary for establishing the likelihood ratio order in Theorem 12.
Counterexample 7.
Let $\mathbf{p} = (0.1, 0.2, 0.7)$, $\mathbf{p}^* = (0.6, 0.3, 0.1)$, $\boldsymbol\alpha = (3, 6, 9)$, $\boldsymbol\beta = (14, 15, 11)$, $\boldsymbol\alpha^* = (5, 8, 12)$, and $\boldsymbol\beta^* = (4, 2, 3)$. Clearly, $\max\{3, 6, 9\} \nleq \min\{5, 8, 12\}$ and $\max\{20, 16, 36\} \leq \min\{42, 90, 99\}$. Thus, all the conditions of Theorem 12 are satisfied except $\max\{3, 6, 9\} \leq \min\{5, 8, 12\}$. Now, the ratio of the pdfs of the MRVs $R_3(X_{\boldsymbol\alpha,\boldsymbol\beta};\mathbf{p})$ and $R_3(X_{\boldsymbol\alpha^*,\boldsymbol\beta^*};\mathbf{p}^*)$ is depicted in Figure 7b, and we see that the ratio is a nonmonotone function in $x > 0$. Thus, $R_3(X_{\boldsymbol\alpha,\boldsymbol\beta};\mathbf{p}) \not\geq_{\mathrm{lr}} R_3(X_{\boldsymbol\alpha^*,\boldsymbol\beta^*};\mathbf{p}^*)$. In other words, the likelihood ratio order between $R_3(X_{\boldsymbol\alpha,\boldsymbol\beta};\mathbf{p})$ and $R_3(X_{\boldsymbol\alpha^*,\boldsymbol\beta^*};\mathbf{p}^*)$ in Theorem 12 does not hold.

4. Concluding Remarks

In this paper, MMs are considered as suitable tools for analyzing population heterogeneity. We are interested in heterogeneous populations whose components have distinct characteristics, such as their lifetime distributions. We have derived sufficient conditions for the comparison of two FMMs with IK distributed components with respect to the usual stochastic order, reversed hazard rate order, likelihood ratio order, and ageing faster order in terms of the reversed hazard rate, corresponding to heterogeneity in the model parameters in the sense of some majorization orders, namely, the weak supermajorization and weak submajorization orders. Here, we have considered heterogeneity in one parameter, two parameters, and three parameters. We have presented several numerical examples and counterexamples to illustrate the established results.
The presented results of this paper are mostly theoretical. However, one may find some applications of the established results. Below, we consider an example.
Assume two engineering systems, with components produced by different companies. It is further reasonable to assume that the components have different reliability characteristics. Each of the components can operate in $n$ operational regimes with corresponding probabilities $p_i$ and $p_i^*$, respectively. Let the lifetimes in the $i$th regime have different distributions, say, $IK(\alpha_i, \beta)$ and $IK(\alpha_i^*, \beta)$, respectively. Then, the important question is which of these two systems performs better in some stochastic sense. By using Theorem 1, we conclude that, under the condition $\mathbf{p}^* \preceq_{\mathrm{w}} \mathbf{p}$, the first system performs better than the second system. Similar applications can be found for the other established results.
It would naturally be of interest to extend this work with respect to some stronger stochastic orders, such as the hazard rate order, or with respect to variability orders, like the star order, dispersive order, Lorenz order, right-spread order, convex transform order, increasing convex order, etc. Future research on generalizations of these findings may be considered.

Author Contributions

Conceptualization, R.B.; Investigation, R.B.; Writing—review & editing, R.B. and P.K.; Supervision, S.K.; Funding acquisition, M.A. All authors have read and agreed to the published version of the manuscript.

Funding

The financial support (vide D.O.No. F. 14-34/2011 (CPP-II) dated 11.01.2013, F.No. 16-9 (June 2019)/2019(NET/CSIR), UGC-Ref.No.:1238/(CSIR-UGC NET JUNE 2019)) from the University Grants Commission (UGC), Government of India, is sincerely acknowledged with thanks by Raju Bhakta.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors have no conflicts of interest/competing interests to declare.

Appendix A

Proof of Theorem 1.
Denote ξ ( p ) = F ¯ R n ( X α , β ; p ) ( x ) . Differentiating ξ ( p ) partially with respect to p i , we obtain
$\dfrac{\partial \xi(\mathbf{p})}{\partial p_i} = 1 - \left(1-(1+x)^{-\alpha_i}\right)^{\beta} \geq 0. \qquad (A1)$
Under the assumptions made, we have $(\alpha_i - \alpha_j)(p_i - p_j) \leq 0$, for $1 \leq i \leq j \leq n$. This implies two possibilities: $\{p_i \geq p_j \text{ and } \alpha_i \leq \alpha_j\}$ and $\{p_i \leq p_j \text{ and } \alpha_i \geq \alpha_j\}$. Here, the proof for $\{p_i \geq p_j \text{ and } \alpha_i \leq \alpha_j\}$ is provided, since the proof under the other case follows similarly. Using (A1),
$\dfrac{\partial \xi(\mathbf{p})}{\partial p_i} - \dfrac{\partial \xi(\mathbf{p})}{\partial p_j} = \left\{1 - \left(1-(1+x)^{-\alpha_i}\right)^{\beta}\right\} - \left\{1 - \left(1-(1+x)^{-\alpha_j}\right)^{\beta}\right\}, \qquad (A2)$
which is clearly non-negative using Lemma 2. Thus, from (A1) and (A2), we have
$\dfrac{\partial \xi(\mathbf{p})}{\partial p_i} \geq \dfrac{\partial \xi(\mathbf{p})}{\partial p_j} \geq 0. \qquad (A3)$
Now, the proof is completed using Theorem 3.A.7 of [20]. This completes the proof of the theorem. □
Proof of Theorem 2.
Without loss of generality, we consider $0 < \beta_1 \leq \cdots \leq \beta_n$. Since $(\boldsymbol\beta, \mathbf{p}), (\boldsymbol\beta, \mathbf{p}^*) \in \mathcal{L}_n$, we have $p_1 \geq \cdots \geq p_n > 0$ and $p_1^* \geq \cdots \geq p_n^* > 0$. Now, define
$\phi(\mathbf{p}) = \bar F_{R_n(X_{\alpha,\boldsymbol\beta};\mathbf{p})}(x). \qquad (A4)$
Furthermore, for $1 \leq i \leq j \leq n$, clearly, $\beta_i \leq \beta_j$ and $p_i \geq p_j$. Now, after differentiating (A4) partially with respect to $p_j$ and $p_i$, we obtain
$\dfrac{\partial \phi(\mathbf{p})}{\partial p_j} - \dfrac{\partial \phi(\mathbf{p})}{\partial p_i} = \left\{1 - \left(1-(1+x)^{-\alpha}\right)^{\beta_j}\right\} - \left\{1 - \left(1-(1+x)^{-\alpha}\right)^{\beta_i}\right\}. \qquad (A5)$
Using Lemma 2 in (A5), it is easy to see that $\partial\phi(\mathbf{p})/\partial p_j - \partial\phi(\mathbf{p})/\partial p_i$ is non-negative. Thus, for $1 \leq i \leq j \leq n$, we obtain
$0 \leq \dfrac{\partial \phi(\mathbf{p})}{\partial p_i} \leq \dfrac{\partial \phi(\mathbf{p})}{\partial p_j}, \qquad (A6)$
since $\partial\phi(\mathbf{p})/\partial p_i \geq 0$. Now, using Theorem 3.A.7 of [20], the proof can be completed. □
Proof of Theorem 3.
Without loss of generality, we assume $p_1 \geq \cdots \geq p_n > 0$. Thus, from the assumptions made, we have $\alpha_1 \leq \cdots \leq \alpha_n$ and $\alpha_1^* \leq \cdots \leq \alpha_n^*$. Define
$\psi(\boldsymbol\alpha) = \bar F_{R_n(X_{\boldsymbol\alpha,\beta};\mathbf{p})}(x). \qquad (A7)$
From (A7), we obtain the partial derivative of $\psi(\boldsymbol\alpha)$ with respect to $\alpha_i$ as
$\dfrac{\partial \psi(\boldsymbol\alpha)}{\partial \alpha_i} = -\beta \log(1+x)\, p_i (1+x)^{-\alpha_i}\left(1-(1+x)^{-\alpha_i}\right)^{\beta-1}, \qquad (A8)$
which is clearly non-positive. Thus, $\psi(\boldsymbol\alpha)$ is decreasing with respect to $\alpha_i$. Further, using Lemma 2, for $1 \leq i \leq j \leq n$, we obtain, after some calculations,
$\dfrac{\partial \psi(\boldsymbol\alpha)}{\partial \alpha_i} - \dfrac{\partial \psi(\boldsymbol\alpha)}{\partial \alpha_j} \leq 0. \qquad (A9)$
Now, from Lemma 2.4 of [17], it can be shown that ψ ( α ) is Schur-convex with respect to α . Thus, the remaining proof of the theorem follows from Theorem 3.A.8 of [20], p. 87. This completes the proof of the Theorem. □
Proof of Theorem 4.
The theorem will be proved if the conditions of Lemma 1 are satisfied. Here, F ¯ R 2 ( X α , β ; p ) ( x ) is clearly permutation invariant on L 2 . Further,
$\Delta_1(\mathbf{p}, \boldsymbol\alpha) = (p_1 - p_2)\left[\dfrac{\partial \bar F_{R_2(X_{\boldsymbol\alpha,\beta};\mathbf{p})}(x)}{\partial p_1} - \dfrac{\partial \bar F_{R_2(X_{\boldsymbol\alpha,\beta};\mathbf{p})}(x)}{\partial p_2}\right] + (\alpha_1 - \alpha_2)\left[\dfrac{\partial \bar F_{R_2(X_{\boldsymbol\alpha,\beta};\mathbf{p})}(x)}{\partial \alpha_1} - \dfrac{\partial \bar F_{R_2(X_{\boldsymbol\alpha,\beta};\mathbf{p})}(x)}{\partial \alpha_2}\right] = (p_1 - p_2)\left\{\left[1 - \left(1-(1+x)^{-\alpha_1}\right)^{\beta}\right] - \left[1 - \left(1-(1+x)^{-\alpha_2}\right)^{\beta}\right]\right\} + \beta\log(1+x)(\alpha_1 - \alpha_2)\left\{p_2(1+x)^{-\alpha_2}\left(1-(1+x)^{-\alpha_2}\right)^{\beta-1} - p_1(1+x)^{-\alpha_1}\left(1-(1+x)^{-\alpha_1}\right)^{\beta-1}\right\}. \qquad (A10)$
Under the assumption made, we have $(\mathbf{p}, \boldsymbol\alpha) \in \mathcal{L}_2$; that is, either $\{p_1 \geq p_2 \text{ and } \alpha_1 \leq \alpha_2\}$ holds or $\{p_1 \leq p_2 \text{ and } \alpha_1 \geq \alpha_2\}$ holds. We present the proof for the case $\{p_1 \geq p_2 \text{ and } \alpha_1 \leq \alpha_2\}$; the proof for the other case is similar. Using Lemma 2, the first term on the right-hand side of (A10) is clearly non-negative, since $\alpha_1 \leq \alpha_2$. Further, using Lemma 3, the second term in (A10) can be shown to be non-negative. Thus,
$\Delta_1(\mathbf{p}, \boldsymbol\alpha) \geq 0, \qquad (A11)$
satisfying the second condition of Lemma 1. This completes the proof of the Theorem. □
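A numerical spot check (ours, not part of the proof) that $\Delta_1(\mathbf{p}, \boldsymbol\alpha)$ in (A10) is non-negative for randomly drawn parameters satisfying $(\mathbf{p}, \boldsymbol\alpha) \in \mathcal{L}_2$ and $\beta \in (0,1)$.
```python
import numpy as np

def delta1(x, p1, p2, a1, a2, beta):
    term1 = (p1 - p2) * ((1 - (1 - (1 + x) ** (-a1)) ** beta) - (1 - (1 - (1 + x) ** (-a2)) ** beta))
    term2 = beta * np.log(1 + x) * (a1 - a2) * (
        p2 * (1 + x) ** (-a2) * (1 - (1 + x) ** (-a2)) ** (beta - 1)
        - p1 * (1 + x) ** (-a1) * (1 - (1 + x) ** (-a1)) ** (beta - 1))
    return term1 + term2

rng = np.random.default_rng(1)
ok = True
for _ in range(1000):
    p1 = rng.uniform(0.5, 1.0); p2 = 1 - p1            # p1 >= p2
    a1, a2 = np.sort(rng.uniform(0.1, 10.0, 2))        # a1 <= a2, so (p, alpha) is in L_2
    beta, x = rng.uniform(0.05, 0.95), rng.uniform(0.01, 50.0)
    ok = ok and (delta1(x, p1, p2, a1, a2, beta) >= -1e-12)
print(ok)   # expected: True
```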
Proof of Theorem 7.
We note that the proof of this theorem follows using Lemma 1. It is easy to notice that R 2 ( X α , β ; p ) is permutation invariant on L 2 , confirming condition ( i ) of Lemma 1. To check condition ( i i ) of Lemma 1, we consider
$\Delta_2(\mathbf{p}, \boldsymbol\beta) = (p_1 - p_2)\left[\dfrac{\partial \bar F_{R_2(X_{\alpha,\boldsymbol\beta};\mathbf{p})}(x)}{\partial p_1} - \dfrac{\partial \bar F_{R_2(X_{\alpha,\boldsymbol\beta};\mathbf{p})}(x)}{\partial p_2}\right] + (\beta_1 - \beta_2)\left[\dfrac{\partial \bar F_{R_2(X_{\alpha,\boldsymbol\beta};\mathbf{p})}(x)}{\partial \beta_1} - \dfrac{\partial \bar F_{R_2(X_{\alpha,\boldsymbol\beta};\mathbf{p})}(x)}{\partial \beta_2}\right] = (p_1 - p_2)\left\{\left[1 - \left(1-(1+x)^{-\alpha}\right)^{\beta_1}\right] - \left[1 - \left(1-(1+x)^{-\alpha}\right)^{\beta_2}\right]\right\} + (\beta_1 - \beta_2)\log\!\left(1-(1+x)^{-\alpha}\right)\left\{p_2\left(1-(1+x)^{-\alpha}\right)^{\beta_2} - p_1\left(1-(1+x)^{-\alpha}\right)^{\beta_1}\right\}. \qquad (A12)$
Here, we have assumed that $(\mathbf{p}, \boldsymbol\beta) \in \mathcal{L}_2$, implying either $\{p_1 \geq p_2 \text{ and } \beta_1 \leq \beta_2\}$ or $\{p_1 \leq p_2 \text{ and } \beta_1 \geq \beta_2\}$. The proof will be presented for $\{p_1 \geq p_2 \text{ and } \beta_1 \leq \beta_2\}$; for the other case, it is similar. For $p_1 \geq p_2$ and $\beta_1 \leq \beta_2$, using Lemma 2, it can be shown after some calculations that the first term on the right-hand side of (A12) is non-positive. Further, from Lemma 4, the second term on the right-hand side of (A12) is non-positive when $p_1 \geq p_2$ and $\beta_1 \leq \beta_2$. Combining these, we obtain
$\Delta_2(\mathbf{p}, \boldsymbol\beta) \leq 0, \qquad (A13)$
so that $-\bar F_{R_2(X_{\alpha,\boldsymbol\beta};\mathbf{p})}(x)$ satisfies the second condition of Lemma 1, and hence $\bar F_{R_2(X_{\alpha,\boldsymbol\beta};\mathbf{p})}(x) \leq \bar F_{R_2(X_{\alpha,\boldsymbol\beta^*};\mathbf{p}^*)}(x)$. Thus, the proof is completed. □
Proof of Theorem 10.
Here, it is sufficient to establish that the ratio $\varphi_1(x)/\varphi_2(x) = \tilde r_{R_n(X_{\alpha,\boldsymbol\beta};\mathbf{p})}(x)/\tilde r_{R_n(X_{\alpha,\boldsymbol\beta^*};\mathbf{p}^*)}(x)$ is non-increasing with respect to $x$, where
$\dfrac{\varphi_1(x)}{\varphi_2(x)} = \dfrac{f_{R_n(X_{\alpha,\boldsymbol\beta};\mathbf{p})}(x)\, F_{R_n(X_{\alpha,\boldsymbol\beta^*};\mathbf{p}^*)}(x)}{f_{R_n(X_{\alpha,\boldsymbol\beta^*};\mathbf{p}^*)}(x)\, F_{R_n(X_{\alpha,\boldsymbol\beta};\mathbf{p})}(x)}. \qquad (A14)$
On differentiating (A14) with respect to x, we obtain
$\dfrac{\partial}{\partial x}\left[\dfrac{\tilde r_{R_n(X_{\alpha,\boldsymbol\beta};\mathbf{p})}(x)}{\tilde r_{R_n(X_{\alpha,\boldsymbol\beta^*};\mathbf{p}^*)}(x)}\right] \overset{\mathrm{sign}}{=} \varphi_1'(x)\varphi_2(x) - \varphi_1(x)\varphi_2'(x) = f_{R_n(X_{\alpha,\boldsymbol\beta};\mathbf{p})}'(x)\, F_{R_n(X_{\alpha,\boldsymbol\beta};\mathbf{p})}(x)\, f_{R_n(X_{\alpha,\boldsymbol\beta^*};\mathbf{p}^*)}(x)\, F_{R_n(X_{\alpha,\boldsymbol\beta^*};\mathbf{p}^*)}(x) + f_{R_n(X_{\alpha,\boldsymbol\beta};\mathbf{p})}(x)\, f_{R_n(X_{\alpha,\boldsymbol\beta^*};\mathbf{p}^*)}^2(x)\, F_{R_n(X_{\alpha,\boldsymbol\beta};\mathbf{p})}(x) - f_{R_n(X_{\alpha,\boldsymbol\beta};\mathbf{p})}^2(x)\, f_{R_n(X_{\alpha,\boldsymbol\beta^*};\mathbf{p}^*)}(x)\, F_{R_n(X_{\alpha,\boldsymbol\beta^*};\mathbf{p}^*)}(x) - f_{R_n(X_{\alpha,\boldsymbol\beta};\mathbf{p})}(x)\, F_{R_n(X_{\alpha,\boldsymbol\beta};\mathbf{p})}(x)\, f_{R_n(X_{\alpha,\boldsymbol\beta^*};\mathbf{p}^*)}'(x)\, F_{R_n(X_{\alpha,\boldsymbol\beta^*};\mathbf{p}^*)}(x) = \xi(x), \text{ (say)}. \qquad (A15)$
Our goal is to show that ξ ( x ) is non-positive. Now, using the model assumptions, from (A15), we obtain
Substituting the expressions for the mixture pdfs, cdfs, and their derivatives into (A15), and then combining the resulting four products of sums into a single fourfold sum, we obtain
$\xi(x) = \sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{k=1}^{n}\sum_{l=1}^{n} p_i\, p_j\, p_k^*\, p_l^*\, \alpha^2 \beta_i \beta_k^*\, (1+x)^{-(2\alpha+3)}\left(1-(1+x)^{-\alpha}\right)^{\beta_i+\beta_j+\beta_k^*+\beta_l^*-2} \Delta_{i,j,k,l}(x),$
where
$\Delta_{i,j,k,l}(x) = \left[\alpha(\beta_i-1)(1+x)^{-\alpha}\left(1-(1+x)^{-\alpha}\right)^{-1} - (\alpha+1)\right] - \left[\alpha(\beta_k^*-1)(1+x)^{-\alpha}\left(1-(1+x)^{-\alpha}\right)^{-1} - (\alpha+1)\right] + \alpha\beta_l^*(1+x)^{-\alpha}\left(1-(1+x)^{-\alpha}\right)^{-1} - \alpha\beta_j(1+x)^{-\alpha}\left(1-(1+x)^{-\alpha}\right)^{-1}. \qquad (A16)$
Consider $1 \leq i, j, k, l \leq n$. Under the assumptions made, from (A16), we obtain $\Delta_{i,j,k,l}(x) \leq 0$, and hence, $\xi(x) \leq 0$. This completes the proof of the theorem. □
Proof of Theorem 11.
The proof will be completed if we show that
$\dfrac{F_{R_n(X_{\boldsymbol\alpha,\boldsymbol\beta};\mathbf{p})}(x)}{F_{R_n(X_{\boldsymbol\alpha^*,\boldsymbol\beta^*};\mathbf{p}^*)}(x)} = \dfrac{\sum_{i=1}^{n} p_i\left(1-(1+x)^{-\alpha_i}\right)^{\beta_i}}{\sum_{i=1}^{n} p_i^*\left(1-(1+x)^{-\alpha_i^*}\right)^{\beta_i^*}} = \dfrac{\phi_1(x)}{\phi_2(x)}, \text{ (say)}, \qquad (A17)$
is non-increasing with respect to $x > 0$. The partial derivative of (A17) with respect to $x$ satisfies
$\dfrac{\partial}{\partial x}\left[\dfrac{F_{R_n(X_{\boldsymbol\alpha,\boldsymbol\beta};\mathbf{p})}(x)}{F_{R_n(X_{\boldsymbol\alpha^*,\boldsymbol\beta^*};\mathbf{p}^*)}(x)}\right] \overset{\mathrm{sign}}{=} \phi_1'(x)\phi_2(x) - \phi_1(x)\phi_2'(x) = \xi(x), \qquad (A18)$
where
$\xi(x) = \sum_{i=1}^{n} p_i\alpha_i\beta_i(1+x)^{-(\alpha_i+1)}\left(1-(1+x)^{-\alpha_i}\right)^{\beta_i-1} \sum_{j=1}^{n} p_j^*\left(1-(1+x)^{-\alpha_j^*}\right)^{\beta_j^*} - \sum_{i=1}^{n} p_i\left(1-(1+x)^{-\alpha_i}\right)^{\beta_i} \sum_{j=1}^{n} p_j^*\alpha_j^*\beta_j^*(1+x)^{-(\alpha_j^*+1)}\left(1-(1+x)^{-\alpha_j^*}\right)^{\beta_j^*-1} = \sum_{i=1}^{n}\sum_{j=1}^{n} p_i p_j^*\left(1-(1+x)^{-\alpha_i}\right)^{\beta_i-1}\left(1-(1+x)^{-\alpha_j^*}\right)^{\beta_j^*-1}\left[\alpha_i\beta_i(1+x)^{-(\alpha_i+1)}\left(1-(1+x)^{-\alpha_j^*}\right) - \alpha_j^*\beta_j^*(1+x)^{-(\alpha_j^*+1)}\left(1-(1+x)^{-\alpha_i}\right)\right]. \qquad (A19)$
For $1 \leq i, j \leq n$, under the assumptions made, it can be shown that $\xi(x) \leq 0$, as desired. □
Proof of Theorem 12.
To prove the result, it is required to establish that
$\dfrac{f_{R_n(X_{\boldsymbol\alpha,\boldsymbol\beta};\mathbf{p})}(x)}{f_{R_n(X_{\boldsymbol\alpha^*,\boldsymbol\beta^*};\mathbf{p}^*)}(x)} = \dfrac{\sum_{i=1}^{n} p_i\alpha_i\beta_i(1+x)^{-(\alpha_i+1)}\left(1-(1+x)^{-\alpha_i}\right)^{\beta_i-1}}{\sum_{i=1}^{n} p_i^*\alpha_i^*\beta_i^*(1+x)^{-(\alpha_i^*+1)}\left(1-(1+x)^{-\alpha_i^*}\right)^{\beta_i^*-1}} = \xi(x), \text{ (say)}, \qquad (A20)$
is non-decreasing with respect to $x > 0$. The derivative of $\xi(x)$ with respect to $x$ satisfies
$\xi'(x) \overset{\mathrm{sign}}{=} \sum_{i=1}^{n}\sum_{j=1}^{n} p_i p_j^* \alpha_i\beta_i\alpha_j^*\beta_j^*\, (1+x)^{-(\alpha_j^*+1)}\left(1-(1+x)^{-\alpha_j^*}\right)^{\beta_j^*-1}\left[\alpha_i(\beta_i-1)(1+x)^{-2(\alpha_i+1)}\left(1-(1+x)^{-\alpha_i}\right)^{\beta_i-2} - (\alpha_i+1)(1+x)^{-(\alpha_i+2)}\left(1-(1+x)^{-\alpha_i}\right)^{\beta_i-1}\right] - \sum_{i=1}^{n}\sum_{j=1}^{n} p_i p_j^* \alpha_i\beta_i\alpha_j^*\beta_j^*\, (1+x)^{-(\alpha_i+1)}\left(1-(1+x)^{-\alpha_i}\right)^{\beta_i-1}\left[\alpha_j^*(\beta_j^*-1)(1+x)^{-2(\alpha_j^*+1)}\left(1-(1+x)^{-\alpha_j^*}\right)^{\beta_j^*-2} - (\alpha_j^*+1)(1+x)^{-(\alpha_j^*+2)}\left(1-(1+x)^{-\alpha_j^*}\right)^{\beta_j^*-1}\right]. \qquad (A21)$
Consider $1 \leq i, j \leq n$. Under the assumptions made, from (A21), we obtain $\xi'(x) \geq 0$, implying that $\xi(x)$ is non-decreasing with respect to $x > 0$. Hence, the result follows. □

References

  1. Lindsay, B.G. Mixture Models: Theory, Geometry, and Applications; NSF-CBMS Regional Conference Series in Probability and Statistics; Institute of Mathematical Statistics: Hayward, CA, USA, 1995. [Google Scholar]
  2. McLachlan, G.J.; Peel, D. Finite Mixture Models; Wiley: New York, NY, USA, 2000. [Google Scholar]
  3. Amini-Seresht, E.; Zhang, Y. Stochastic comparisons on two finite mixture models. Oper. Res. Lett. 2017, 45, 475–480. [Google Scholar] [CrossRef]
  4. Wu, J.W. Characterizations of generalized mixtures. Stat. Pap. 2001, 42, 123–133. [Google Scholar] [CrossRef]
  5. Schork, N.J.; Allison, D.B.; Thiel, B. Mixture distributions in human genetics research. Stat. Methods Med. Res. 1996, 5, 155–178. [Google Scholar] [CrossRef] [PubMed]
  6. Shaked, M.; Shanthikumar, J.G. Stochastic Orders; Springer: Berlin/Heidelberg, Germany, 2007. [Google Scholar]
  7. Navarro, J. Likelihood ratio ordering of order statistics, mixtures and systems. J. Stat. Plan. Inference 2008, 138, 1242–1257. [Google Scholar] [CrossRef]
  8. Navarro, J. Stochastic comparisons of generalized mixtures and coherent systems. Test 2016, 25, 150–169. [Google Scholar] [CrossRef]
  9. Hazra, N.K.; Finkelstein, M. On stochastic comparisons of finite mixtures for some semiparametric families of distributions. Test 2018, 27, 988–1006. [Google Scholar] [CrossRef]
  10. Barmalzan, G.; Kosari, S.; Zhang, Y. On stochastic comparisons of finite α-mixture models. Stat. Probab. Lett. 2021, 173, 109083. [Google Scholar] [CrossRef]
  11. Asadi, M.; Ebrahimi, N.; Kharazmi, O.; Soofi, E.S. Mixture models, Bayes Fisher information, and divergence measures. IEEE Trans. Inf. Theory 2018, 65, 2316–2321. [Google Scholar] [CrossRef]
  12. Sattari, M.; Barmalzan, G.; Balakrishnan, N. Stochastic comparisons of finite mixture models with generalized Lehmann distributed components. Commun. Stat.-Theory Methods 2021, 51, 7767–7782. [Google Scholar] [CrossRef]
  13. Barmalzan, G.; Kosari, S.; Balakrishnan, N. Orderings of finite mixture models with location-scale distributed components. Probab. Eng. Inform. Sci. 2022, 36, 461–481. [Google Scholar] [CrossRef]
  14. Nadeb, H.; Torabi, H. New results on stochastic comparisons of finite mixtures for some families of distributions. Commun. Stat.-Theory Methods 2022, 51, 3104–3119. [Google Scholar] [CrossRef]
  15. Panja, A.; Kundu, P.; Pradhan, B. On stochastic comparisons of finite mixture models. Stoch. Model. 2022, 38, 190–213. [Google Scholar] [CrossRef]
  16. Kayal, S.; Bhakta, R.; Balakrishnan, N. Some results on stochastic comparisons of two finite mixture models with general components. Stoch. Model. 2023, 39, 363–382. [Google Scholar] [CrossRef]
  17. Bhakta, R.; Majumder, P.; Kayal, S.; Balakrishnan, N. Stochastic comparisons of two finite mixtures of general family of distributions. Metrika 2023, 1–32. [Google Scholar] [CrossRef]
  18. Abd EL-Kader, R. A General Class of Some Inverted Distributions. Ph.D. Thesis, AL-Azhar University, Girls’ Branch, Egypt, Cairo, 2013. [Google Scholar]
  19. Abd AL-Fattah, A.; El-Helbawy, A.; Al-Dayian, G. Inverted Kumaraswamy Distribution: Properties and Estimation. Pak. J. Stat. 2017, 33. [Google Scholar]
  20. Marshall, A.W.; Olkin, I.; Arnold, B.C. Inequalities: Theory of Majorization and Its Applications; Springer: Berlin/Heidelberg, Germany, 2011; Volume 143. [Google Scholar]
  21. Chen, J. On finite mixture models. Stat. Theory Relat. Fields 2017, 1, 15–27. [Google Scholar] [CrossRef]
  22. McLachlan, G.J.; Lee, S.X.; Rathnayake, S.I. Finite mixture models. Annu. Rev. Stat. Its Appl. 2019, 6, 355–378. [Google Scholar] [CrossRef]
  23. Navarro, J.; Hernandez, P.J. How to obtain bathtub-shaped failure rate models from normal mixtures. Probab. Eng. Informational Sci. 2004, 18, 511–531. [Google Scholar] [CrossRef]
  24. Finkelstein, M. Failure Rate Modelling for Reliability and Risk; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
  25. Cha, J.H.; Finkelstein, M. The failure rate dynamics in heterogeneous populations. Reliab. Eng. Syst. Saf. 2013, 112, 120–128. [Google Scholar] [CrossRef]
Figure 1. (a) Plots of the sfs of R 3 ( X α , β ; p * ) (blue curve) and R 3 ( X α , β ; p ) (red curve) in Example 1. (b) Plot of the difference between the sfs of R 3 ( X α , β ; p * ) and R 3 ( X α , β ; p ) in Counterexample 1.
Figure 2. (a) Plots of the sfs of R 3 ( X α , β ; p * ) (purple curve) and R 3 ( X α , β ; p ) (orange curve) in Example 2. (b) Plots of the sfs of R 3 ( X α * , β ; p ) (green curve) and R 3 ( X α , β ; p ) (pink curve) in Example 3.
Figure 3. (a) Plot of the difference between the sfs of R 3 ( X α * , β ; p ) and R 3 ( X α , β ; p ) in Counterexample 3. (b) Plots of the sfs of R 2 ( X α , β ; p ) (brown curve) and R 2 ( X α * , β ; p * ) (cyan curve) in Example 4.
Figure 4. (a) Plot of the difference between the sfs of R 2 ( X α , β ; p ) and R 2 ( X α * , β ; p * ) in Counterexample 4. (b) Plots of the sfs of R 2 ( X α , β ; p ) (purple curve) and R 2 ( X α , β * ; p * ) (black curve) in Example 5.
Figure 5. (a) Plot of the difference between the sfs of R 2 ( X α , β ; p ) and R 2 ( X α , β * ; p * ) in Counterexample 5. (b) Plot of the ratio of the rhs of R 3 ( X α , β ; p ) and R 3 ( X α , β * ; p * ) in Example 6.
Figure 6. (a) Plot of the ratio of the cdfs of R 3 ( X α , β ; p ) and R 3 ( X α * , β * ; p * ) in Example 7. (b) Plot of the ratio of the cdfs of R 3 ( X α , β ; p ) and R 3 ( X α * , β * ; p * ) in Counterexample 6.
Figure 7. (a) Plot of the ratio of the pdfs of R 3 ( X α , β ; p ) and R 3 ( X α * , β * ; p * ) in Example 8. (b) Plot of the ratio of the pdfs of R 3 ( X α , β ; p ) and R 3 ( X α * , β * ; p * ) in Counterexample 7.