Article

Tsallis Entropy for the Past Lifetime Distribution with Application

by Mohamed Kayid 1,* and Mashael A. Alshehri 2
1 Department of Statistics and Operations Research, College of Science, King Saud University, Riyadh 11451, Saudi Arabia
2 Department of Quantitative Analysis, College of Business Administration, King Saud University, Riyadh 11362, Saudi Arabia
* Author to whom correspondence should be addressed.
Axioms 2023, 12(8), 731; https://doi.org/10.3390/axioms12080731
Submission received: 9 May 2023 / Revised: 20 June 2023 / Accepted: 25 July 2023 / Published: 27 July 2023
(This article belongs to the Special Issue Stochastic Modeling and Analysis with Multiple Applications)

Abstract: A fundamental factor in relevant applications is the predictability of the life cycle of a coherent system consisting of more than one component. In this context, we examine how entropy can be applied to evaluate the degree of predictability. In particular, we consider a scenario in which all components of the system have failed by a given time t and use the system signature to calculate the Tsallis entropy of the system's past lifetime. We derive a number of analytical results, e.g., expressions, bounds, and orders for the measure at issue. The results may provide insights into the predictability of a coherent system's life cycle.

1. Introduction

The concept of entropy was originally developed by physicists in the context of equilibrium thermodynamics and later extended to information theory and statistical mechanics. The most widely used entropy measure is attributed to Shannon [1], who used it to quantify the average uncertainty of a random variable (rv). Many generalized entropy measures can be found in the literature, as these measures are more flexible in some situations. The Tsallis entropy of order α, obtained as a generalization of Boltzmann–Gibbs entropy, is a well-known generalization introduced by Tsallis [2]. Let X be a non-negative continuous rv with probability density function (pdf) f. The Tsallis entropy of order α, denoted by $H_\alpha(X)$, is defined as
$$H_\alpha(X) = \frac{1}{1-\alpha}\left(\int_0^\infty f^\alpha(x)\,dx - 1\right) = \frac{1}{1-\alpha}\left[E\!\left(f^{\alpha-1}(F^{-1}(U))\right) - 1\right], \tag{1}$$
for all α > 0, where $E(\cdot)$ denotes the expectation, $F^{-1}(u) = \inf\{x : F(x) \geq u\}$, for $u \in [0,1]$, denotes the quantile function, and the rv U is uniformly distributed on [0, 1]. Tsallis entropy is also referred to as Tsallis–Havrda–Charvát entropy in the literature; the concept was first introduced by Havrda and Charvát [3] and later popularized by Tsallis [2]. Tsallis entropy is a powerful tool that plays a fundamental role in information theory, physics, chemistry, and technology. One of its notable properties is that it can be negative or non-negative depending on the chosen value of α; with a suitable choice of α, the Tsallis entropy becomes non-negative. Moreover, the Tsallis entropy reduces to the well-known Shannon differential entropy as α approaches 1, i.e., $H(X) = \lim_{\alpha \to 1} H_\alpha(X)$.

Unlike Shannon entropy, which is additive in the sense that $H(X,Y) = H(X) + H(Y)$ for independent rvs X and Y, Tsallis entropy is non-additive. Specifically, the Tsallis entropy of the joint distribution of X and Y is given by $H_\alpha(X,Y) = H_\alpha(X) + H_\alpha(Y) + (1-\alpha)\,H_\alpha(X)\,H_\alpha(Y)$. This non-additivity provides more flexibility and makes Tsallis entropy an indispensable tool in research areas such as thermodynamics, statistical mechanics, and quantum mechanics, where systems are inherently non-additive; in the physics literature, non-additive entropy is referred to as non-extensive. One may object that Tsallis entropy is a monotonic function of Rényi entropy, which is additive: if additivity or non-additivity is the issue, why not use Rényi entropy instead of Tsallis entropy? And if Rényi entropy can be used similarly to Tsallis entropy, the additivity argument collapses, leaving it unclear why Shannon entropy should not be used instead of either. However, since the year 2000, a wide range of organic, synthetic, and social complex systems have been discovered that support the hypotheses and conclusions drawn from this non-additive entropy, such as non-extensive statistical mechanics, a generalization of the Boltzmann–Gibbs theory (see, for instance, Tsallis [4]). As a monotonic function of Rényi entropy, Tsallis entropy is maximized at the same value, but the concavity of $H_\alpha(X)$ is a property of Tsallis entropy that Rényi entropy does not share. Additionally, $H_\alpha(X)$ is stable under minor perturbations when discrete probabilities are considered, a property not shared by other entropies (cf. Nair et al. [5]). A further motivation for the use of Tsallis entropy is the family of α-distributions. A related difficulty is that there is no indication of why one value of α would be a better choice than another; this is a genuine problem for statistical analysis when an additional parameter must be chosen with no guidance on how to choose it, and it can lead to simply selecting the value that yields the preferred conclusion. This problem has also been pointed out by Tsallis: if we cannot argue for a particular value of the parameter, we should not use this generalized entropy.
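As a quick illustration, the following minimal sketch (assuming NumPy and SciPy are available; the helper name tsallis_entropy is ours) numerically verifies the α → 1 limit and the non-additivity identity for a standard exponential pdf, for which $H_\alpha(X) = 1/\alpha$ and the Shannon entropy equals 1:

```python
import numpy as np
from scipy import integrate

def tsallis_entropy(pdf, support, alpha):
    """H_alpha(X) = (1/(1-alpha)) * (integral of f^alpha over the support - 1)."""
    val, _ = integrate.quad(lambda x: pdf(x) ** alpha, *support)
    return (val - 1.0) / (1.0 - alpha)

exp_pdf = lambda x: np.exp(-x)  # standard exponential; exact H_alpha = 1/alpha

# As alpha -> 1, the Tsallis entropy approaches the Shannon entropy (= 1 here).
for alpha in (0.5, 0.99, 1.01, 2.0):
    print(f"alpha={alpha}: H={tsallis_entropy(exp_pdf, (0, np.inf), alpha):.4f}")

# Non-additivity: H_a(X,Y) = H_a(X) + H_a(Y) + (1-a)*H_a(X)*H_a(Y) for independent X, Y.
a = 2.0
hx = tsallis_entropy(exp_pdf, (0, np.inf), a)
joint, _ = integrate.dblquad(lambda y, x: np.exp(-(x + y)) ** a, 0, np.inf, 0, np.inf)
h_joint = (joint - 1.0) / (1.0 - a)
print(np.isclose(h_joint, 2 * hx + (1 - a) * hx * hx))  # expect True
```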
The α-distributions, also known as Tsallis distributions, are obtained by maximizing the Tsallis entropy under a set of constraints. These distributions are more adaptable, with a continuous real parameter α, and they produce models with heavy tails (see, e.g., Nair et al. [5]). In the context of statistical inference, when fitting one of the α-distributions to data, a particular value of α is usually chosen via statistical inference strategies such as the popular maximum likelihood estimation method. Therefore, the idea of using Tsallis entropy for further analysis and distributional properties is quite well founded.
Quantifying the uncertainty in a system's lifetime is a necessary task for engineers working in survival analysis. Reducing uncertainty and extending the lifetimes of systems are widely recognized as critical factors in improving system reliability (e.g., Ebrahimi et al. [6]). When the lifetime of a new system is modeled by the rv X, the Tsallis entropy $H_\alpha(X)$ may serve as a tool for calculating the uncertainty related to the lifetime of the system. In some cases, operators have additional information about the current age of the system. For example, suppose that the system is known to be in service at time t and we are interested in measuring the uncertainty of its remaining lifetime, denoted by $X_t = [X - t \mid X > t]$. In such scenarios, the traditional Tsallis entropy may not be sufficient to accurately capture the uncertainty of the system. To address this limitation, the residual Tsallis entropy is introduced as
$$H_\alpha(X_t) = \frac{1}{1-\alpha}\left(\int_t^\infty \left(\frac{f(x)}{S(t)}\right)^{\!\alpha} dx - 1\right),$$
where $S(t) = P(X > t)$ is the survival function of X. Several aspects, some generalizations, and a number of applications of $H_\alpha(X_t)$ have been investigated by Rajesh and Sunoj [7], Baratpour and Khammar [8,9], Misagh and Yari [10], and Chakraborty and Pradhan [11], and the references therein.
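As a short worked example, consider an exponential lifetime with rate λ. Since $f(x)/S(t) = \lambda e^{-\lambda(x-t)}$ for $x > t$, the residual Tsallis entropy does not depend on t, reflecting the memoryless property:
$$H_\alpha(X_t) = \frac{1}{1-\alpha}\left(\int_t^\infty \lambda^\alpha e^{-\alpha\lambda(x-t)}\,dx - 1\right) = \frac{1}{1-\alpha}\left(\frac{\lambda^{\alpha-1}}{\alpha} - 1\right).$$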
Alomani and Kayid [12] have investigated the Tsallis entropy properties of a coherent system with a potentially mixed structure. For more applications and research on the uncertainty aspects of systems in reliability, readers are referred to [9,13,14,15] and the references therein. In the current study, we consider a coherent system with n components with the additional feature that, by time t, all of the system's components have failed. We then utilize the concept of system signatures to find the Tsallis entropy of the past lifetime of the coherent system, for all α > 0. It should be kept in mind that $H_\alpha(X_t)$ is an intriguing concept that has captured the interest of academics in several science and engineering sectors. It has been demonstrated that this entropy measure, which generalizes the traditional Shannon entropy, has a number of useful features and applications. In this field of research, Asadi et al. [16], Gupta and Nanda [17], Nanda and Paul [18], Mesfioui et al. [19], and numerous other researchers have investigated the characteristics and uses of $H_\alpha(X_t)$.
Uncertainty is pervasive in many real systems, and it concerns not only the future but also the past. As a result, a complementary concept of entropy emerges, which distinguishes itself from residual entropy and characterizes uncertainty about past occurrences. The concept of past entropy has been studied extensively in the literature on information theory and reliability analysis, where it serves as a tool to measure the amount of information that can be extracted from past observations to improve the prediction of future events. The investigation of past entropy and its various applications has received a great deal of attention, as seen in publications such as Di Crescenzo and Longobardi [20], Nair and Sunoj [21], and Gupta et al. [22]. By researching the characteristics and uses of past entropy in the framework of order statistics, these authors have significantly advanced the area; in particular, they have examined the residual and past entropies of order statistics and conducted stochastic comparisons between them.
In addition to the papers mentioned, several other studies have investigated the concept of past entropy and related measures. In particular, Krishnan et al. [23] and Kamari and Buono [24] have studied past extropy, a complementary dual of entropy that measures the amount of information that can be recovered from past observations. Recently, Vaselabadi et al. [25] investigated the varextropy measure of residual and past lifetimes of order statistics, record values, and proportional hazard rate models; varextropy generalizes extropy and provides a measure of the amount of uncertainty in a system. Overall, the study of past entropy and related measures has important implications in various fields, such as reliability analysis, machine learning, and information theory.
In this research, we give a thorough investigation of Tsallis entropy applied to the distribution of past lifetimes, which results in the generalized formulation in Equation (2). The proposed measure enables a nuanced comparison of the shapes of various past lifetime distributions by introducing the parameter α, which permits varied weighting of the conditional probabilities. Our findings highlight this measure's potential to shed light on the mechanisms underlying these distributions, with uses extending beyond the purview of the current study.
To further explore the practical utility of the proposed measure, we consider a coherent system of n components that have all failed by time t. We compute the Tsallis entropy of the past lifetime distribution of the coherent system using the system signature technique. The results have important implications for the interpretation and modeling of complex systems, with potential applications in areas such as reliability engineering, industrial systems, and network science.

2. Findings on Past Tsallis Entropy

We consider an rv X, which denotes the lifetime of a system. The pdf of $X_t = [X \mid X < t]$ is given by $f_t(x) = f(x)/F(t)$ for $x < t$ and $f_t(x) = 0$ for $x \geq t$. In this context, we define the past Tsallis entropy at time t of X as
$$\bar H_\alpha(X_t) = \frac{1}{1-\alpha}\left(\int_0^t f_t^\alpha(x)\,dx - 1\right) = \frac{1}{1-\alpha}\left(\int_0^t \left(\frac{f(x)}{F(t)}\right)^{\!\alpha} dx - 1\right), \tag{2}$$
for all α > 0. It is important to remember that the past Tsallis entropy $\bar H_\alpha(X_t)$ can take values between $-\infty$ and $+\infty$. In the context of the lifetime of the underlying coherent system, given that the system failed at time t, $\bar H_\alpha(X_t)$ quantifies the uncertainty regarding the system's past lifetime. Consider the following example as a scenario demonstrating the significance of past entropy when comparing random lifetimes; it emphasizes the ability of the proposed measure to identify subtle variations in the shapes of different past lifetime distributions and its potential to illuminate the principal mechanisms driving these phenomena.
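Because the measure is defined by a one-dimensional integral, it is straightforward to evaluate numerically. The following minimal sketch (assuming SciPy; the helper name past_tsallis is ours) implements Equation (2) directly; the printed value agrees with the closed form derived in Example 1 below:

```python
from scipy import integrate

def past_tsallis(pdf, cdf, t, alpha):
    """Past Tsallis entropy (2): (1/(1-alpha)) * (int_0^t (f(x)/F(t))^alpha dx - 1)."""
    Ft = cdf(t)
    val, _ = integrate.quad(lambda x: (pdf(x) / Ft) ** alpha, 0.0, t)
    return (val - 1.0) / (1.0 - alpha)

# Beta(2,1) lifetime: f(x) = 2x, F(x) = x^2 on (0,1); exact value is -5/3 here.
print(past_tsallis(lambda x: 2 * x, lambda x: x ** 2, t=0.5, alpha=2.0))
```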
Example 1.
Let us assume that the system components' lifetimes X and Y follow the $Beta(2,1)$ and $Beta(1,2)$ distributions, respectively. The Tsallis entropy of both X and Y is given by
$$H_\alpha(X) = H_\alpha(Y) = \frac{1}{1-\alpha}\left(\frac{2^\alpha}{\alpha+1} - 1\right). \tag{3}$$
This shows that the average uncertainty in predicting the outcomes of X and Y, in terms of Tsallis entropy, is identical for the two pdfs. Suppose that both components fail at a time t between 0 and 1 during inspection. In such a scenario, the uncertainty related to the respective failure times can be measured using the concept of past entropy. More specifically, we can use Equation (2) to calculate the past Tsallis entropy as follows:
$$\bar H_\alpha(X_t) = \frac{1}{1-\alpha}\left(\frac{2^\alpha\, t^{1-\alpha}}{\alpha+1} - 1\right), \tag{4}$$
$$\bar H_\alpha(Y_t) = \frac{1}{1-\alpha}\left(\frac{2^\alpha\left(1-(1-t)^{\alpha+1}\right)}{(\alpha+1)\left(2t-t^2\right)^\alpha} - 1\right), \tag{5}$$
for all $t \in (0,1)$. The results are depicted in Figure 1. Specifically, for $\alpha = 0.2$, the Tsallis entropy of $X_t$ is dominated by that of $Y_t$, whereas for $\alpha = 2$ and $t \in (0,1)$, the opposite inequality holds, despite the fact that $H_\alpha(X) = H_\alpha(Y)$.
A noteworthy observation is that Equation (2) can be interpreted as the Tsallis entropy of the inactivity time $[t - X \mid X \leq t]$. This alternative identification sheds new light on the underlying dynamics of the system. Moreover, Equation (2) admits the alternative expression
$$\bar H_\alpha(X_t) = \frac{1}{1-\alpha}\left(\frac{1}{\alpha}\,E\!\left[\tau^{\alpha-1}(X_{\alpha,t})\right] - 1\right), \tag{6}$$
where $\tau(x) = f(x)/F(x)$ denotes the reversed hazard rate function of X and $X_{\alpha,t}$ has the pdf
$$f_{\alpha,t}(x) = \alpha\, f_t(x)\, F_t^{\alpha-1}(x), \tag{7}$$
for all α > 0, where $F_t(x) = F(x)/F(t)$ for all $0 < x < t$. The following theorem establishes a basic result about the monotonicity of the past Tsallis entropy of an rv X under the assumption that X has the decreasing reversed hazard rate (DRHR) property. Recall that X is DRHR if its reversed hazard rate function $\tau(x)$ is decreasing in $x > 0$. This clarifies the behavior of the past Tsallis entropy within the DRHR class.
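Before the theorem, a quick numerical sanity check of the representation (6) against the definition (2) may be useful; the following sketch (assuming NumPy and SciPy) compares the two for a uniform(0,1) lifetime, for which $\tau(x) = 1/x$:

```python
import numpy as np
from scipy import integrate

t, alpha = 0.7, 2.5
# Definition (2): f_t(x) = 1/t on (0, t) for a uniform(0,1) lifetime.
val, _ = integrate.quad(lambda x: (1.0 / t) ** alpha, 0, t)
h_def = (val - 1.0) / (1.0 - alpha)
# Representation (6): tau(x) = 1/x and f_{alpha,t}(x) = alpha * x**(alpha-1) / t**alpha.
e_tau, _ = integrate.quad(lambda x: x ** (1 - alpha) * alpha * x ** (alpha - 1) / t ** alpha, 0, t)
h_alt = (e_tau / alpha - 1.0) / (1.0 - alpha)
print(np.isclose(h_def, h_alt))  # expect True
```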
Theorem 1.
If X has a distribution belonging to the DRHR class, then $\bar H_\alpha(X_t)$ is increasing in t for all α > 0.
Proof. 
Let us differentiate (6) with respect to t and observe that
$$(1-\alpha)\,\bar H'_\alpha(X_t) = \tau^\alpha(t) - \alpha\,\tau(t)\int_0^t \frac{f^\alpha(x)}{F^\alpha(t)}\,dx = \tau^\alpha(t) - \tau(t)\int_0^t \tau^{\alpha-1}(x)\,f_{\alpha,t}(x)\,dx, \tag{8}$$
where $f_{\alpha,t}(x)$ is given in (7). Given that X is DRHR, its reversed hazard rate function $\tau(x)$ decreases with increasing x. As a consequence, for any value of $\alpha > 1$ ($\alpha < 1$), we have $\tau^{\alpha-1}(x) \geq (\leq)\ \tau^{\alpha-1}(t)$ when $x \leq t$. Inserting this inequality into Equation (8), we obtain
$$\tau^\alpha(t) - \tau(t)\int_0^t \tau^{\alpha-1}(x)\,f_{\alpha,t}(x)\,dx \;\leq\; (\geq)\; 0,$$
which can be rearranged as
$$(1-\alpha)\,\bar H'_\alpha(X_t) \;\leq\; (\geq)\; 0,$$
where $\bar H'_\alpha(X_t)$ denotes the derivative of the past Tsallis entropy with respect to time t. This inequality implies that $\bar H_\alpha(X_t)$ is increasing in t for all α > 0. Hence, the proof is complete. □
The following theorem reveals a connection between the past Tsallis entropy and the reversed hazard rate ordering. Suppose that X and Y have absolutely continuous cdfs $F_X$ and $F_Y$ with pdfs $f_X$ and $f_Y$, respectively. It is said that $X \leq_{rh} Y$ whenever $\tau_X(x) \leq \tau_Y(x)$ for all $x > 0$, where $\tau_X(x) = f_X(x)/F_X(x)$ and $\tau_Y(x) = f_Y(x)/F_Y(x)$ are the reversed hazard rate functions of X and Y, respectively.
Theorem 2.
If $X \leq_{rh} Y$ and either X or Y is DRHR, then, for all α > 0 and all t > 0, one has $\bar H_\alpha(X_t) \geq \bar H_\alpha(Y_t)$.
Proof. 
Let $X_t = [X \mid X < t]$ and $Y_t = [Y \mid Y < t]$. We observe that when $\tau_X(x) \leq \tau_Y(x)$, then, for all $0 \leq x \leq t$,
$$\frac{F_X(x)}{F_X(t)} \;\geq\; \frac{F_Y(x)}{F_Y(t)}.$$
Thus, for any α > 0 , the following relationship
$$\frac{F_X^\alpha(x)}{F_X^\alpha(t)} \;\geq\; \frac{F_Y^\alpha(x)}{F_Y^\alpha(t)},$$
is satisfied, concluding that $X_{\alpha,t} \leq_{st} Y_{\alpha,t}$, where $X_{\alpha,t}$ and $Y_{\alpha,t}$ have cdfs $\left(F_X(x)/F_X(t)\right)^\alpha$ and $\left(F_Y(x)/F_Y(t)\right)^\alpha$, respectively. Here, we suppose that X has a cdf belonging to the DRHR class. For α > 1 (similarly for $\alpha \in (0,1)$), the following relation is obtained:
$$E\!\left[\tau_X^{\alpha-1}(X_{\alpha,t})\right] \;\leq\; E\!\left[\tau_X^{\alpha-1}(Y_{\alpha,t})\right] \;\leq\; E\!\left[\tau_Y^{\alpha-1}(Y_{\alpha,t})\right].$$
Keeping Equation (6) in mind, one obtains
$$\frac{1}{1-\alpha}\left(\frac{1}{\alpha}\,E\!\left[\tau_X^{\alpha-1}(X_{\alpha,t})\right] - 1\right) \;\geq\; \frac{1}{1-\alpha}\left(\frac{1}{\alpha}\,E\!\left[\tau_Y^{\alpha-1}(Y_{\alpha,t})\right] - 1\right),$$
which finalizes the proof of the theorem. A similar conclusion is reached if we instead suppose that the rv Y has a distribution with the DRHR property. □
The next result provides an upper bound for $\bar H_\alpha(X_t)$ involving the reversed hazard rate function.
Theorem 3.
Assume that $\tau(x) < \infty$ for all $x > 0$. If X is DRHR, then, for all α > 0, it holds that
$$\bar H_\alpha(X_t) \;\leq\; \frac{1}{1-\alpha}\left(\frac{\tau^{\alpha-1}(t)}{\alpha} - 1\right), \quad t > 0.$$
Proof. 
If X is DRHR, then $\tau(t)$ is decreasing in t, and so, recalling (6), for all $\alpha - 1 > 0$ ($\alpha - 1 < 0$) we have
$$\bar H_\alpha(X_t) = \frac{1}{1-\alpha}\left(\frac{1}{\alpha}\int_0^t \tau^{\alpha-1}(x)\,f_{\alpha,t}(x)\,dx - 1\right) \leq \frac{1}{1-\alpha}\left(\frac{\tau^{\alpha-1}(t)}{\alpha}\int_0^t f_{\alpha,t}(x)\,dx - 1\right) = \frac{1}{1-\alpha}\left(\frac{\tau^{\alpha-1}(t)}{\alpha} - 1\right),$$
and this completes the proof. □

3. Findings on Inactive Coherent Systems and Their Past Lifetimes

In this section, we demonstrate how the system signature approach may be used to calculate the past lifetime entropy of a coherent system with an arbitrary structure, presuming that all of the system's components have failed by time t. A coherent system is one that has no irrelevant components and whose structure function is monotone. The signature of the system is an n-dimensional vector $\mathbf{p} = (p_1, \ldots, p_n)$ whose i-th element $p_i = P(T = X_{i:n})$ is the probability that the system fails upon the i-th ordered component failure (see [26]).
We contemplate a coherent system composed of components with independent and identically distributed (i.i.d.) random lifetimes $X_1, \ldots, X_n$, and we suppose that the system is characterized by the signature vector $\mathbf{p} = (p_1, \ldots, p_n)$. We consider $T_t = [t - T \mid X_{n:n} \leq t]$, the inactivity time of the system at time t, given that all components of the system have failed by time t. From Khaledi and Shaked [27], the reliability function of $T_t$ is obtained as follows:
$$P(T_t > x) = \sum_{i=1}^{n} p_i\, P(t - X_{i:n} > x \mid X_{n:n} \leq t), \tag{9}$$
where
$$P(t - X_{i:n} > x \mid X_{n:n} \leq t) = \sum_{k=i}^{n} \binom{n}{k} \left[\frac{F(t-x)}{F(t)}\right]^{k} \left[1 - \frac{F(t-x)}{F(t)}\right]^{n-k}, \quad 0 < x < t,$$
designates the reliability function of the past lifetime of an i-out-of-n system, presupposing that all of the components have stopped working by time t. It follows from (9) that
$$f_{T_t}(x) = \sum_{i=1}^{n} p_i\, f_{T_t^i}(x), \tag{10}$$
in which
$$f_{T_t^i}(x) = \frac{\Gamma(n+1)}{\Gamma(i)\,\Gamma(n-i+1)} \left[\frac{F(t-x)}{F(t)}\right]^{i-1} \left[1 - \frac{F(t-x)}{F(t)}\right]^{n-i} \frac{f(t-x)}{F(t)}, \quad 0 < x < t, \tag{11}$$
where $\Gamma(\cdot)$ is the (complete) gamma function. Given that the system failed at or before time t, $T_t^i = [t - X_{i:n} \mid X_{n:n} \leq t]$, $i = 1, 2, \ldots, n$, is the time elapsed since the failure of the component with lifetime $X_{i:n}$. It should be kept in mind that, by (9), $t - T_t^i$ is distributed as the i-th order statistic of n i.i.d. lifetimes with the cdf $F_t(x) = F(x)/F(t)$, $0 < x < t$. Next, we give an expression for the entropy of $T_t$. The change of variable $V = F_t(t - T_t)$ is useful to impose; we observe that $U_{i:n} = F_t(t - T_t^i)$ is distributed according to the beta distribution with parameters i and $n - i + 1$ for all $i = 1, \ldots, n$. Using these transformation techniques, we give a formula for the Tsallis entropy of $T_t$ in the following result.
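Before stating the result, note that the mixture density (10) and (11) can be evaluated directly from a signature. The following sketch (assuming SciPy; the helper name past_lifetime_pdf is ours) does so and checks that the density integrates to one:

```python
from math import gamma
from scipy import integrate

def past_lifetime_pdf(x, t, p, pdf, cdf):
    """Mixture pdf (10)-(11) of the inactivity time T_t for a coherent system
    with signature p = (p_1, ..., p_n) and i.i.d. component pdf/cdf."""
    n = len(p)
    G = cdf(t - x) / cdf(t)  # conditional component cdf evaluated at t - x
    g = pdf(t - x) / cdf(t)  # conditional component pdf evaluated at t - x
    total = 0.0
    for i, pi in enumerate(p, start=1):
        const = gamma(n + 1) / (gamma(i) * gamma(n - i + 1))
        total += pi * const * G ** (i - 1) * (1 - G) ** (n - i) * g
    return total

# Example: a system failing at the second of three uniform(0,1) component
# failures has signature (0, 1, 0); its past-lifetime pdf integrates to one.
p, t = (0.0, 1.0, 0.0), 0.5
total, _ = integrate.quad(past_lifetime_pdf, 0, t, args=(t, p, lambda u: 1.0, lambda u: u))
print(round(total, 6))  # 1.0
```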
Theorem 4.
Suppose that $T_t$ is the inactivity time of a coherent system, given that the components of the system have all failed by time t. The Tsallis entropy of $T_t$ is
$$\bar H_\alpha(T_t) = \frac{1}{\bar\alpha}\left(\int_0^1 g_V^\alpha(u)\, f_t^{\alpha-1}\!\left(F_t^{-1}(u)\right) du - 1\right), \quad t > 0, \tag{12}$$
where $\bar\alpha = 1 - \alpha$ for α > 0, $g_V(v) = \sum_{i=1}^{n} p_i\, g_i(v)$ is the pdf of $V = F_t(t - T_t)$, with $g_i$ the pdf of the beta distribution with parameters i and $n-i+1$, and $F_t^{-1}(u) = \inf\{x \mid F_t(x) \geq u\}$ is the right-continuous inverse of $F_t(x) = F(x)/F(t)$, $0 < x \leq t$.
Proof. 
In the context of (2) and (10), and with the change of variable $z = t - x$, one obtains
$$\begin{aligned} \bar H_\alpha(T_t) &= \frac{1}{1-\alpha}\left(\int_0^t f_{T_t}^\alpha(x)\,dx - 1\right) = \frac{1}{1-\alpha}\left(\int_0^t \left(\sum_{i=1}^n p_i\, f_{T_t^i}(x)\right)^{\!\alpha} dx - 1\right) \\ &= \frac{1}{1-\alpha}\left(\int_0^t \left(\sum_{i=1}^n p_i\, \frac{\Gamma(n+1)}{\Gamma(i)\Gamma(n-i+1)} \left[\frac{F(t-x)}{F(t)}\right]^{i-1}\left[1-\frac{F(t-x)}{F(t)}\right]^{n-i}\frac{f(t-x)}{F(t)}\right)^{\!\alpha} dx - 1\right) \\ &= \frac{1}{1-\alpha}\left(\int_0^t \left(\sum_{i=1}^n p_i\, \frac{\Gamma(n+1)}{\Gamma(i)\Gamma(n-i+1)}\, F_t^{i-1}(z)\left[1-F_t(z)\right]^{n-i} f_t(z)\right)^{\!\alpha} dz - 1\right) \\ &= \frac{1}{\bar\alpha}\left(\int_0^1 g_V^\alpha(u)\, f_t^{\alpha-1}\!\left(F_t^{-1}(u)\right) du - 1\right). \end{aligned}$$
The last identity follows from the substitution $u = F_t(z)$. Hence, the proof is complete. □
Given that the components have all failed by time t, $\bar H_\alpha(T_t)$ evaluates the average uncertainty contained in the conditional distribution of $t - T$ given $X_{n:n} \leq t$, which is useful for predicting the past lifetime. For the particular case of an i-out-of-n system, whose signature is $\mathbf{p} = (0, \ldots, 0, \underbrace{1}_{i}, 0, \ldots, 0)$, $i = 1, 2, \ldots, n$, Equation (12) reduces to
$$\bar H_\alpha(T_t) = \frac{1}{1-\alpha}\left(\int_0^1 g_i^\alpha(u)\, f_t^{\alpha-1}\!\left(F_t^{-1}(u)\right) du - 1\right), \quad t > 0.$$
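A direct numerical implementation of Equation (12) is straightforward once $f_t(F_t^{-1}(u))$ is available. The following sketch (assuming SciPy; the helper name past_tsallis_system is ours) illustrates it for uniform components, where $F_t(x) = x/t$ and hence $f_t(F_t^{-1}(u)) = 1/t$ for every u:

```python
from scipy import integrate
from scipy.stats import beta

def past_tsallis_system(p, alpha, ft_of_Ftinv):
    """Eq. (12); ft_of_Ftinv(u) must return f_t(F_t^{-1}(u))."""
    n = len(p)
    gV = lambda u: sum(pi * beta.pdf(u, i, n - i + 1)
                       for i, pi in enumerate(p, start=1))
    val, _ = integrate.quad(lambda u: gV(u) ** alpha * ft_of_Ftinv(u) ** (alpha - 1), 0, 1)
    return (val - 1.0) / (1.0 - alpha)

# Signature of the system in Example 2 below, with uniform(0,1) components.
p, t = (0.0, 1 / 6, 7 / 12, 1 / 4), 0.5
print(past_tsallis_system(p, alpha=2.0, ft_of_Ftinv=lambda u: 1.0 / t))
```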
The next theorem, which builds directly on Theorem 4, shows that $\bar H_\alpha(T_t)$ is monotone in t when the reversed hazard rate of X is a non-increasing function.
Theorem 5.
If X has a decreasing reversed hazard rate, then $\bar H_\alpha(T_t)$ is non-decreasing in t, for all α > 0.
Proof. 
Using the identity $f_t(F_t^{-1}(u)) = u\,\tau_t(F_t^{-1}(u))$, Equation (12) can be written as
$$1 + (1-\alpha)\,\bar H_\alpha(T_t) = \int_0^1 g_V^\alpha(u)\, u^{\alpha-1}\, \tau_t^{\alpha-1}\!\left(F_t^{-1}(u)\right) du, \tag{13}$$
for all α > 0. It is easy to see that $F_t^{-1}(u) = F^{-1}(u\,F(t))$ for all $0 < u < 1$, and, therefore, one obtains
$$\tau_t\!\left(F_t^{-1}(u)\right) = \tau\!\left(F^{-1}(u\,F(t))\right), \quad 0 < u < 1.$$
If $t_1 \leq t_2$, then $F^{-1}(u\,F(t_1)) \leq F^{-1}(u\,F(t_2))$. Thus, if X has a distribution belonging to the DRHR class, then, for all $\alpha > 1$ ($0 < \alpha \leq 1$), one has
$$\begin{aligned} \int_0^1 g_V^\alpha(u)\,u^{\alpha-1}\,\tau_{t_1}^{\alpha-1}\!\left(F_{t_1}^{-1}(u)\right) du &= \int_0^1 g_V^\alpha(u)\,u^{\alpha-1}\,\tau^{\alpha-1}\!\left(F^{-1}(u\,F(t_1))\right) du \\ &\geq\, (\leq)\ \int_0^1 g_V^\alpha(u)\,u^{\alpha-1}\,\tau^{\alpha-1}\!\left(F^{-1}(u\,F(t_2))\right) du \\ &= \int_0^1 g_V^\alpha(u)\,u^{\alpha-1}\,\tau_{t_2}^{\alpha-1}\!\left(F_{t_2}^{-1}(u)\right) du, \end{aligned}$$
for all $t_1 \leq t_2$. Using (13), we derive
$$1 + (1-\alpha)\,\bar H_\alpha(T_{t_1}) \;\geq\; (\leq)\; 1 + (1-\alpha)\,\bar H_\alpha(T_{t_2}),$$
for all $\alpha > 1$ ($0 < \alpha \leq 1$). In either case, it yields $\bar H_\alpha(T_{t_1}) \leq \bar H_\alpha(T_{t_2})$ for all α > 0, which ends the proof. □
To make use of Theorems 4 and 5, we provide the following example.
Example 2.
Let us consider a coherent system with four components, as depicted in Figure 2, with i.i.d. component lifetimes having the common cdf $F(x) = e^{-x^{-k}}$, $x > 0$, where $k > 0$. The system's signature is $\mathbf{p} = (0, \frac{1}{6}, \frac{7}{12}, \frac{1}{4})$. To compute the exact value of $\bar H_\alpha(T_t)$, we use relation (12), which, after algebraic manipulation, yields
$$\bar H_\alpha(T_t) = \frac{1}{1-\alpha}\left(k^{\alpha-1}\int_0^1 \left(t^{-k} - \log u\right)^{\left(\frac{1}{k}+1\right)(\alpha-1)} u^{\alpha-1}\, g_V^\alpha(u)\,du - 1\right), \quad t > 0.$$
A closed-form expression for this integral is not available, so we proceed numerically to obtain meaningful results. Figure 3 displays the entropy of $T_t$ with respect to t for $\alpha = 0.2$ and $\alpha = 2$, with $k > 0$. It is worth noting that X is DRHR for all $k > 0$; in accordance with Theorem 5, $\bar H_\alpha(T_t)$ increases with t for all $k > 0$. See Figure 3 for the results.
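The numerical evaluation just described can be sketched as follows (assuming NumPy and SciPy; the cdf $F(x) = e^{-x^{-k}}$ from the example is hard-coded into the integrand, and the helper name H_system is ours). The printed values increase with t, in line with Theorem 5:

```python
import numpy as np
from scipy import integrate
from scipy.stats import beta

p = (0.0, 1 / 6, 7 / 12, 1 / 4)  # signature from Example 2
n = len(p)
gV = lambda u: sum(pi * beta.pdf(u, i, n - i + 1) for i, pi in enumerate(p, 1))

def H_system(t, alpha, k):
    # integrand of the display above
    f = lambda u: ((t ** -k - np.log(u)) ** ((1 / k + 1) * (alpha - 1))
                   * u ** (alpha - 1) * gV(u) ** alpha)
    val, _ = integrate.quad(f, 0, 1)
    return (k ** (alpha - 1) * val - 1.0) / (1.0 - alpha)

for t in (0.5, 1.0, 2.0):  # monotone increasing in t, as Theorem 5 predicts
    print(t, round(H_system(t, alpha=2.0, k=1.0), 4))
```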
The above illustration clarifies the relationship between time and the Tsallis entropy of an rv and emphasizes the importance of taking the DRHR property into account when examining such systems. The temporal behavior of the Tsallis entropy of $T_t$ is thus strongly influenced by the DRHR property of X. This finding could have significant implications for a number of applications, such as the analysis of complex systems and the design of effective data compression methods.
Understanding a system's dual is beneficial in reliability engineering, since it can cut the computational cost of identifying the signatures of all coherent systems of a given size roughly in half. Kochar et al. [28] established a duality relation between a system's signature and that of its dual. If $\mathbf{p} = (p_1, \ldots, p_n)$ is the signature of the underlying coherent system with lifetime T, then the signature of its dual system, with lifetime $T^D$, is $\mathbf{p}^D = (p_n, \ldots, p_1)$. The duality relation is used in the next theorem to simplify the calculation of the past entropy of coherent systems. We first require the following lemma.
Lemma 1.
Let φ be a continuous function on [0, 1] such that $\int_0^1 x^n \varphi(x)\,dx = 0$ for all $n \geq 0$. Then $\varphi(x) = 0$ for all $x \in [0,1]$.
Theorem 6.
Suppose that $T_t$ is the random lifetime of an inactive coherent system (in which all components have failed by time t) with signature $\mathbf{p}$. If $f_t(F_t^{-1}(u)) = f_t(F_t^{-1}(1-u))$ holds for all $0 < u < 1$, then $\bar H_\alpha(T_t) = \bar H_\alpha(T_t^D)$ for all $\mathbf{p}$ and all n.
Proof. 
Let us suppose that $f_t(F_t^{-1}(u)) = f_t(F_t^{-1}(1-u))$ for all $0 < u < 1$. We remark that $g_i(1-u) = g_{n-i+1}(u)$ for all $i = 1, \ldots, n$ and all $0 < u < 1$. Consequently, utilizing (12), we obtain
$$\begin{aligned} \int_0^1 g_{V^D}^\alpha(u)\, f_t^{\alpha-1}\!\left(F_t^{-1}(u)\right) du &= \int_0^1 \left(\sum_{i=1}^n p_{n-i+1}\, g_i(u)\right)^{\!\alpha} f_t^{\alpha-1}\!\left(F_t^{-1}(u)\right) du \\ &= \int_0^1 \left(\sum_{r=1}^n p_r\, g_{n-r+1}(u)\right)^{\!\alpha} f_t^{\alpha-1}\!\left(F_t^{-1}(u)\right) du \\ &= \int_0^1 \left(\sum_{r=1}^n p_r\, g_r(1-u)\right)^{\!\alpha} f_t^{\alpha-1}\!\left(F_t^{-1}(u)\right) du \\ &= \int_0^1 \left(\sum_{r=1}^n p_r\, g_r(u)\right)^{\!\alpha} f_t^{\alpha-1}\!\left(F_t^{-1}(u)\right) du = \int_0^1 g_V^\alpha(u)\, f_t^{\alpha-1}\!\left(F_t^{-1}(u)\right) du, \end{aligned}$$
where the last step uses the substitution $u \mapsto 1-u$ together with the assumed symmetry $f_t(F_t^{-1}(u)) = f_t(F_t^{-1}(1-u))$; this, together with Equation (12), finalizes the proof. □
For i-out-of-n systems, an immediate consequence of the above theorem is the following.
Corollary 1.
Suppose that $T_t^i$ is the lifetime of an i-out-of-n system with n components having i.i.d. lifetimes. Provided that $f_t(F_t^{-1}(u)) = f_t(F_t^{-1}(1-u))$ holds for all $0 < u < 1$ and the given t, then $\bar H_\alpha(T_t^i) = \bar H_\alpha(T_t^{n-i+1})$ for all n, with $i = 1, 2, \ldots, n/2$ when n is even and $i = 1, 2, \ldots, (n-1)/2$ when n is odd.

4. Some Bounds Involving the Past Tsallis Entropy Measure

It can be challenging to determine the past Tsallis entropy $\bar H_\alpha(T_t)$ of a coherent system exactly when the system is complicated and the component lifetime distributions are uncertain. Since this situation occurs frequently in practice, there is an increasing demand for accurate estimates of the system's behavior. One viable strategy is to use bounds on the past Tsallis entropy, which have been demonstrated to approximate the lifetime uncertainty of coherent systems well under such conditions.
Such bounds were first developed by Toomaj and Doostparast [13] and Toomaj [14] for new systems, and, more recently, Toomaj et al. [29] expanded on this work by determining bounds for the entropy of a coherent system with all of its components operating; see also Mesfioui et al. [19]. The theorem that follows gives new bounds for the past Tsallis entropy of the coherent system's lifetime in terms of the past Tsallis entropy of the parent distribution, i.e., $\bar H_\alpha(X_t)$. Even with limited knowledge of the component lifetimes, we can characterize complex systems more accurately and effectively by incorporating these bounds into our analysis.
Theorem 7.
Suppose that the inactivity time $T_t = [t - T \mid X_{n:n} \leq t]$ is related to an inactive coherent system with n components having i.i.d. lifetimes with the common cdf F and with signature $\mathbf{p} = (p_1, \ldots, p_n)$. Let $\bar H_\alpha(T_t) < \infty$ for all α > 0. Then one has
$$\bar H_\alpha(T_t) \;\geq\; B_n^\alpha(\mathbf{p})\, \bar H_\alpha(X_t) + \frac{B_n^\alpha(\mathbf{p}) - 1}{1-\alpha}, \tag{14}$$
for all α > 1 and
$$\bar H_\alpha(T_t) \;\leq\; B_n^\alpha(\mathbf{p})\, \bar H_\alpha(X_t) + \frac{B_n^\alpha(\mathbf{p}) - 1}{1-\alpha}, \tag{15}$$
for $0 < \alpha < 1$, where $B_n(\mathbf{p}) = \sum_{i=1}^{n} p_i\, g_i(m_i)$ and $m_i = \frac{i-1}{n-1}$.
Proof. 
The mode of the beta distribution $Beta(i, n-i+1)$ is $m_i = \frac{i-1}{n-1}$. Therefore, we can write
$$g_V(v) \;\leq\; \sum_{i=1}^{n} p_i\, g_i(m_i) = B_n(\mathbf{p}), \quad 0 < v < 1.$$
Therefore, for α > 1 ( 0 < α < 1 ) , one obtains
$$1 + (1-\alpha)\,\bar H_\alpha(T_t) = \int_0^1 g_V^\alpha(v)\, f_t^{\alpha-1}\!\left(F_t^{-1}(v)\right) dv \;\leq\; B_n^\alpha(\mathbf{p}) \int_0^1 f_t^{\alpha-1}\!\left(F_t^{-1}(v)\right) dv = B_n^\alpha(\mathbf{p})\left[(1-\alpha)\,\bar H_\alpha(X_t) + 1\right].$$
Here, the first identity follows from (12) and the last from Equation (2) after the substitution $v = F_t(x)$; dividing by $1 - \alpha$, which is negative for α > 1 and positive for $0 < \alpha < 1$, yields (14) and (15). The proof of the theorem is thus complete. □
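For concreteness, $B_n(\mathbf{p})$ is easy to compute. The following sketch writes out the integer-parameter beta pdf explicitly (the helper names beta_pdf and B_n are ours) and reproduces the constant used in Example 3 below:

```python
from math import comb

def beta_pdf(x, a, b):
    """Beta(a, b) pdf for integer a, b >= 1; 1/B(a,b) = (a+b-1) * C(a+b-2, a-1)."""
    return (a + b - 1) * comb(a + b - 2, a - 1) * x ** (a - 1) * (1 - x) ** (b - 1)

def B_n(p):
    """B_n(p) = sum_i p_i * g_i(m_i), each Beta(i, n-i+1) pdf at its mode (i-1)/(n-1)."""
    n = len(p)
    return sum(pi * beta_pdf((i - 1) / (n - 1), i, n - i + 1)
               for i, pi in enumerate(p, start=1))

print(round(B_n((0, 1 / 5, 2 / 5, 1 / 5, 1 / 5)), 4))  # ~2.5938, rounded to 2.6 below
```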
The lower and upper bounds in (14) and (15) are a useful tool for the study of systems with numerous components or intricate configurations. In cases where these bounds are not informative, we can use the Tsallis information measure and standard inequalities to obtain a more general lower bound. The following theorem uses this approach to offer a fresh perspective on the behavior of complex systems.
Theorem 8.
By adopting the assumptions imposed in Theorem 7, one obtains
$$\bar H_\alpha(T_t) \;\geq\; \bar H_\alpha^L(T_t), \tag{16}$$
where $\bar H_\alpha^L(T_t) = \sum_{i=1}^{n} p_i\, \bar H_\alpha(T_t^i)$ for all α > 0.
Proof. 
Jensen's inequality states that for a convex function h and an rv Z, we have $h(E[Z]) \leq E[h(Z)]$. The function $x^\alpha$ is concave for $0 < \alpha < 1$ and convex for α > 1. Therefore, we have
$$\left(\sum_{i=1}^{n} p_i\, f_{T_t^i}(x)\right)^{\!\alpha} \;\geq\; (\leq)\; \sum_{i=1}^{n} p_i\, f_{T_t^i}^\alpha(x), \quad t > 0,$$
and thus one derives
$$\int_0^t f_{T_t}^\alpha(x)\,dx \;\geq\; (\leq)\; \sum_{i=1}^{n} p_i \int_0^t f_{T_t^i}^\alpha(x)\,dx. \tag{17}$$
Since $1 - \alpha > 0$ ($1 - \alpha < 0$), multiplying both sides of (17) by $1/(1-\alpha)$ yields
$$\bar H_\alpha(T_t) \;\geq\; \frac{1}{1-\alpha}\left(\sum_{i=1}^{n} p_i \int_0^t f_{T_t^i}^\alpha(x)\,dx - 1\right) = \frac{1}{1-\alpha}\left(\sum_{i=1}^{n} p_i \int_0^t f_{T_t^i}^\alpha(x)\,dx - \sum_{i=1}^{n} p_i\right) = \sum_{i=1}^{n} p_i\, \frac{1}{1-\alpha}\left(\int_0^t f_{T_t^i}^\alpha(x)\,dx - 1\right) = \sum_{i=1}^{n} p_i\, \bar H_\alpha(T_t^i),$$
which ends the proof of the theorem. □
It is interesting to note that equality in (16) is attained for i-out-of-n systems, for which $p_j = 0$ for $j \neq i$ and $p_i = 1$. In this case, the past entropy of the system, $\bar H_\alpha(T_t)$, equals that of the i-th component lifetime, $\bar H_\alpha(T_t^i)$. When both lower bounds from Theorems 7 and 8 are available, i.e., for α > 1, the larger of the two may be used.
Example 3.
Let us contemplate a coherent system of size n = 5, as depicted in Figure 4, with i.i.d. component lifetimes, each following the standard uniform distribution with cdf F(t) = t, 0 < t < 1, and with signature $\mathbf{p} = (0, \frac{1}{5}, \frac{2}{5}, \frac{1}{5}, \frac{1}{5})$. Let $T_t = [t - T \mid X_{5:5} \leq t]$ denote the past lifetime of this system. It is straightforward to show that the Tsallis entropy of $X_t$ is given by
$$\bar H_\alpha(X_t) = \frac{1}{1-\alpha}\left(\frac{1}{t^{\alpha-1}} - 1\right), \quad 0 < t < 1.$$
Furthermore, we have $B_5(\mathbf{p}) \approx 2.6$. Therefore, by Theorem 7, the Tsallis entropy of $T_t$ is bounded for $0 < \alpha < 1$ (α > 1) as follows:
$$\bar H_\alpha(T_t) \;\leq\; (\geq)\; \frac{1}{1-\alpha}\left(\frac{2.6^\alpha}{t^{\alpha-1}} - 1\right), \quad 0 < t < 1. \tag{18}$$
It is straightforward to show that $f_t(F_t^{-1}(u)) = 1/t$, $0 < t < 1$, for all $0 < u < 1$. Therefore, the lower bound given in (16) becomes
$$\bar H_\alpha(T_t) \;\geq\; \frac{1}{1-\alpha}\left(t^{1-\alpha} \sum_{i=1}^{n} p_i \int_0^1 g_i^\alpha(u)\,du - 1\right), \quad 0 < t < 1, \tag{19}$$
for all α > 0. Figure 5 illustrates the behavior of the past Tsallis entropy over time for this system with standard uniform component lifetimes. The solid line shows the exact value of $\bar H_\alpha(T_t)$, while the dashed and dotted lines represent the bounds determined from Equations (18) and (19), respectively. The figure demonstrates close agreement between the bounds and the exact value of $\bar H_\alpha(T_t)$. It is noticeable that, for α > 1, the lower bound from Equation (19) (dotted line) exceeds the lower bound provided by Equation (18).
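The comparison in Figure 5 can be reproduced numerically. The following sketch (assuming SciPy) evaluates the exact value from Equation (12) together with the bounds (18) and (19) at a sample point with α > 1, where both bounds are lower bounds:

```python
from scipy import integrate
from scipy.stats import beta

p = (0.0, 0.2, 0.4, 0.2, 0.2)  # signature from Example 3
n, alpha, t = len(p), 2.0, 0.8
gV = lambda u: sum(pi * beta.pdf(u, i, n - i + 1) for i, pi in enumerate(p, 1))

# Exact value via Eq. (12), using f_t(F_t^{-1}(u)) = 1/t for uniform components.
IV, _ = integrate.quad(lambda u: gV(u) ** alpha, 0, 1)
exact = (t ** (1 - alpha) * IV - 1) / (1 - alpha)

bound18 = (2.6 ** alpha * t ** (1 - alpha) - 1) / (1 - alpha)  # Theorem 7, Eq. (18)
Ig = [integrate.quad(lambda u: beta.pdf(u, i, n - i + 1) ** alpha, 0, 1)[0]
      for i in range(1, n + 1)]
bound19 = (t ** (1 - alpha) * sum(pi * Ig[i] for i, pi in enumerate(p)) - 1) / (1 - alpha)

print(f"exact={exact:.4f}  bound18={bound18:.4f}  bound19={bound19:.4f}")
```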

5. Concluding Remarks

The ability to quantify uncertainty is a critical factor in determining how predictable a component or system will be during its lifetime. Tsallis entropy has proven to be a useful metric for expressing the degree of uncertainty related to the lifetimes of systems. In this paper, we explored some basic properties of the dynamic Tsallis entropy. Then, assuming that all system components failed by time t, we derived an expression for the Tsallis entropy of the past lifetime of a coherent system. Furthermore, using the idea of the system signature, we explored various properties of the proposed measure, including bounds and partial orderings between the past lifetimes of two coherent systems based on their Tsallis entropy. The method presented in this paper offers a constructive way to assess the extent to which the lifetime of a system is predictable, and it can be considered a useful approach for engineering applications; several application examples illustrate its use. The results not only demonstrate the potential for future research in this area, but also the clear value of Tsallis entropy for engineering reliability analysis. We emphasize that the results of this paper are based on the Tsallis entropy, a measure of uncertainty that depends on the pdf f. In contrast, some other studies, such as [11], utilize the cumulative residual and past Tsallis entropies, which are based on the survival function and the cumulative distribution function F. By employing different measures of uncertainty, the present paper and the literature based on the survival function contribute to a deeper understanding of the properties of coherent systems and offer complementary insights into their behavior.

Author Contributions

Conceptualization, M.K.; methodology, M.K.; software, M.A.A.; validation, M.K.; formal analysis, M.A.A.; investigation, M.A.A.; resources, M.A.A.; writing—original draft preparation, M.K.; writing—review and editing, M.A.A.; visualization, M.A.A.; supervision, M.K.; project administration, M.K.; funding acquisition, M.K. All authors have read and agreed to the published version of the manuscript.

Funding

Researchers Supporting Project Number RSP2023R392, King Saud University, Riyadh, Saudi Arabia.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors are grateful to the two anonymous reviewers for their constructive comments and suggestions. The authors acknowledge the financial support of the Researchers Supporting Project Number RSP2023R392, King Saud University, Riyadh, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423.
  2. Tsallis, C. Possible generalization of Boltzmann–Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487.
  3. Havrda, J.; Charvát, F. Quantification method of classification processes. Concept of structural a-entropy. Kybernetika 1967, 3, 30–35.
  4. Tsallis, C. Introduction to Nonextensive Statistical Mechanics: Approaching a Complex World; Springer: Berlin/Heidelberg, Germany, 2009; Volume 1.
  5. Nair, N.U.; Sunoj, S.; Rajesh, G. Reliability Modelling with Information Measures; CRC Press: Boca Raton, FL, USA, 2022.
  6. Ebrahimi, N.; Pellerey, F. New partial ordering of survival functions based on the notion of uncertainty. J. Appl. Probab. 1995, 32, 202–211.
  7. Rajesh, G.; Sunoj, S. Some properties of cumulative Tsallis entropy of order α. Stat. Pap. 2019, 60, 933–943.
  8. Baratpour, S.; Khammar, A. Results on Tsallis entropy of order statistics and record values. Istat. J. Turk. Stat. Assoc. 2016, 8, 60–73.
  9. Baratpour, S.; Khammar, A. Tsallis entropy properties of order statistics and some stochastic comparisons. J. Stat. Res. Iran JSRI 2016, 13, 25–41.
  10. Misagh, F.; Yari, G. Interval entropy and informative distance. Entropy 2012, 14, 480–490.
  11. Chakraborty, S.; Pradhan, B. Generalized weighted survival and failure entropies and their dynamic versions. Commun. Stat. Theory Methods 2023, 52, 730–750.
  12. Alomani, G.; Kayid, M. Further properties of Tsallis entropy and its application. Entropy 2023, 25, 199.
  13. Toomaj, A.; Doostparast, M. A note on signature-based expressions for the entropy of mixed r-out-of-n systems. Nav. Res. Logist. 2014, 61, 202–206.
  14. Toomaj, A. Rényi entropy properties of mixed systems. Commun. Stat. Theory Methods 2017, 46, 906–916.
  15. Toomaj, A.; Di Crescenzo, A.; Doostparast, M. Some results on information properties of coherent systems. Appl. Stoch. Model. Bus. Ind. 2018, 34, 128–143.
  16. Asadi, M.; Ebrahimi, N.; Soofi, E.S. Dynamic generalized information measures. Stat. Probab. Lett. 2005, 71, 85–98.
  17. Gupta, R.; Nanda, A. α- and β-entropies and relative entropies of distributions. J. Stat. Theory Appl. 2002, 1, 177–190.
  18. Nanda, A.K.; Paul, P. Some results on generalized residual entropy. Inf. Sci. 2006, 176, 27–47.
  19. Mesfioui, M.; Kayid, M.; Shrahili, M. Rényi entropy of the residual lifetime of a reliability system at the system level. Axioms 2023, 12, 320.
  20. Di Crescenzo, A.; Longobardi, M. Entropy-based measure of uncertainty in past lifetime distributions. J. Appl. Probab. 2002, 39, 434–440.
  21. Nair, N.U.; Sunoj, S. Some aspects of reversed hazard rate and past entropy. Commun. Stat. Theory Methods 2021, 32, 2106–2116.
  22. Gupta, R.C.; Taneja, H.; Thapliyal, R. Stochastic comparisons of residual entropy of order statistics and some characterization results. J. Stat. Theory Appl. 2014, 13, 27–37.
  23. Krishnan, A.S.; Sunoj, S.; Unnikrishnan Nair, N. Some reliability properties of extropy for residual and past lifetime random variables. J. Korean Stat. Soc. 2020, 49, 457–474.
  24. Kamari, O.; Buono, F. On extropy of past lifetime distribution. Ric. Mat. 2021, 70, 505–515.
  25. Vaselabadi, N.M.; Tahmasebi, S.; Kazemi, M.R.; Buono, F. Results on varextropy measure of random variables. Entropy 2021, 23, 356.
  26. Samaniego, F.J. System Signatures and Their Applications in Engineering Reliability; Springer Science & Business Media: Berlin, Germany, 2007; Volume 110.
  27. Khaledi, B.E.; Shaked, M. Ordering conditional lifetimes of coherent systems. J. Stat. Plan. Inference 2007, 137, 1173–1184.
  28. Kochar, S.; Mukerjee, H.; Samaniego, F.J. The "signature" of a coherent system and its application to comparisons among systems. Nav. Res. Logist. 1999, 46, 507–523.
  29. Toomaj, A.; Chahkandi, M.; Balakrishnan, N. On the information properties of working used systems using dynamic signature. Appl. Stoch. Model. Bus. Ind. 2021, 37, 318–341.
Figure 1. The past Tsallis entropies $\bar H_\alpha(X_t)$ (solid line) and $\bar H_\alpha(Y_t)$ (dashed line) in Example 1, for α = 0.2 and α = 2.
Figure 2. A coherent system with the system signature $\mathbf{p} = (0, \frac{1}{6}, \frac{7}{12}, \frac{1}{4})$, as considered in Example 2.
Figure 3. The values of $\bar H_\alpha(T_t)$ in Example 2 for different choices of k.
Figure 4. A coherent system with the system signature $\mathbf{p} = (0, \frac{1}{5}, \frac{2}{5}, \frac{1}{5}, \frac{1}{5})$, as considered in Example 3.
Figure 5. The exact value of $\bar H_\alpha(T_t)$ (solid line) together with the corresponding lower bounds (18) (dashed line) and (19) (dotted line) for the standard uniform distribution of Example 3, as functions of time t.

