Article

A New Lomax Extension: Properties, Risk Analysis, Censored and Complete Goodness-of-Fit Validation Testing under Left-Skewed Insurance, Reliability and Medical Data

1 Department of Applied Statistics, Damanhour University, Damanhour 22511, Egypt
2 Department of Statistics and Operations Research, Faculty of Science, King Saud University, P.O. Box 2455, Riyadh 11451, Saudi Arabia
3 Department of Applied, Mathematical and Actuarial Statistics, Faculty of Commerce, Damietta University, Damietta 34517, Egypt
4 Department of Mathematical Sciences, Ball State University, Muncie, IN 47306, USA
5 Laboratory of Probability and Statistics LaPS, Department of Mathematics, Badji Mokhtar Annaba University, Annaba 23000, Algeria
6 Department of Statistics, Mathematics and Insurance, Benha University, Benha 13511, Egypt
* Author to whom correspondence should be addressed.
Symmetry 2023, 15(7), 1356; https://doi.org/10.3390/sym15071356
Submission received: 31 May 2023 / Revised: 22 June 2023 / Accepted: 28 June 2023 / Published: 3 July 2023
(This article belongs to the Special Issue Symmetry in Statistics and Data Science, Volume 2)

Abstract: The idea of symmetry, which is used to describe the shape of a probability distribution, is a key concept in the theory of probability. The use of symmetric and asymmetric distributions is common in statistical inference, decision-making, and probability calculations. This article introduces a novel asymmetric model for assessing risks under a skewed claims dataset. The new distribution is also employed for both censored and uncensored validation testing. Four estimation methods (maximum likelihood, ordinary least squares, L-Moment, and Anderson–Darling) were used for the risk assessment and analysis. To explain the exposure to risk within actuarial claims data, we introduced five crucial indicators, namely value-at-risk, tail-value-at-risk, tail variance, tail mean-variance, and mean excess losses. A numerical and graphical analysis is presented to assess the actuarial risk. Furthermore, the article discusses a newly developed Rao–Robson Nikulin statistic for censored and uncensored validation testing. The validation testing also involved the insurance claims dataset.

1. Introduction

Skewed distributions can have important implications for statistical analysis. For example, if the data are positively skewed, the mean may not be a good measure of central tendency, as it will be pulled upward by the few extreme values in the right tail; in this case, the median may be a better measure. Similarly, if the data are negatively skewed, the mean may be lower than expected, and again the median may be a better measure of central tendency. In a negatively skewed distribution, the mean is less than the median, and the tail of the distribution is longer on the left-hand side; this means that more of the data points lie on the right-hand side of the distribution than on the left-hand side. An example of a negatively skewed distribution is the distribution of scores on an easy exam, where most candidates score near the maximum but a few score very low (see Aboraya et al. [1], Acerbi and Tasche [2]).
Following Adcock et al. [3], Eling [4] and Ali et al. [5,6], the concept of symmetry refers to the shape of a probability distribution. A distribution is said to be symmetric when its left and right sides mirror each other exactly; in other words, the two sides would be identical if a vertical line were drawn down the center of the distribution. The most well-known example of a symmetric distribution is probably the normal distribution, which is commonly used in statistics to describe a wide range of real-world phenomena. The mean and the median are both located at the center of the bell-shaped curve that represents the normal distribution, and because of the distribution's symmetry, the mean, median, and mode coincide. As a result, symmetry plays a crucial role in defining the shape of a probability distribution in probability theory; this shape can be anything from a circle to a bell curve. Symmetric distributions are commonly used in statistical inference, decision-making, and probability computation. However, not all distributions are symmetric, and probability theory also makes heavy use of asymmetric distributions, also referred to as skewed distributions.
This study investigates a new model, called the generalized exponential Lomax (GELX), for negatively skewed insurance data. The new model is a new generalization of the well-known Lomax (LX) model. A random variable (RV) $Z$ has the LX distribution if it has cumulative distribution function (CDF) (for $z>0$) given by

$$H_\beta(z)\big|_{(\beta>0)}=1-\varphi_\beta(z),\tag{1}$$

where $\varphi_\beta(z)=\left(1+\tfrac{1}{2}z\right)^{-\beta}$ and $\beta$ refers to the shape parameter. The scale parameter is fixed at 2 to reduce the number of parameters. Then, the corresponding probability density function (PDF) of (1) is

$$h_\beta(z)=\tfrac{1}{2}\beta\left(1+\tfrac{1}{2}z\right)^{-(\beta+1)}.\tag{2}$$
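As a quick numerical sanity check on the LX baseline in (1) and (2), the CDF and PDF can be coded directly; this is a minimal Python sketch (the function names are ours, not the paper's), with the scale fixed at 2 as in the text:

```python
import math

def lx_cdf(z, beta):
    """Eq. (1): H_beta(z) = 1 - (1 + z/2)^(-beta), for z > 0, beta > 0."""
    return 1.0 - (1.0 + 0.5 * z) ** (-beta)

def lx_pdf(z, beta):
    """Eq. (2): h_beta(z) = (beta/2) * (1 + z/2)^(-(beta + 1))."""
    return 0.5 * beta * (1.0 + 0.5 * z) ** (-(beta + 1.0))

# The PDF should match the numerical derivative of the CDF.
beta, z, eps = 1.7, 2.3, 1e-6
numeric = (lx_cdf(z + eps, beta) - lx_cdf(z - eps, beta)) / (2.0 * eps)
assert abs(numeric - lx_pdf(z, beta)) < 1e-6
```

The check passes because (2) is the exact derivative of (1); with the scale fixed at 2, H₁(2) = 0.5 is a convenient spot check.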
Based on Alizadeh et al. [7], the CDF and PDF of the GELX model are given, respectively, by

$$F_{\underline{V}}(z)=\left[1-\exp\left\{-\left[\psi_{\delta_2,\beta}(z)^{-1}-1\right]^{-1}\right\}\right]^{\delta_1}\Big|_{\underline{V}=(\delta_1,\delta_2,\beta)},\tag{3}$$

where $\underline{V}=(\delta_1,\delta_2,\beta)$, $\psi_{\delta_2,\beta}(z)=\left[1-\varphi_\beta(z)\right]^{\delta_2}$, and

$$f_{\underline{V}}(z)=\frac{\delta_1\delta_2\beta}{2}\,\frac{\left[1-\varphi_\beta(z)\right]^{\delta_2-1}\left(1+\tfrac{1}{2}z\right)^{-(\beta+1)}}{\left\{1-\left[1-\varphi_\beta(z)\right]^{\delta_2}\right\}^{2}}\,\exp\left\{-\left[\psi_{\delta_2,\beta}(z)^{-1}-1\right]^{-1}\right\}\,\underbrace{\left[1-\exp\left\{-\left[\psi_{\delta_2,\beta}(z)^{-1}-1\right]^{-1}\right\}\right]^{\delta_1-1}}_{A_{\delta_1,\delta_2,\beta}(z)},\tag{4}$$

where $\delta_1,\delta_2>0$ are two additional shape parameters. Henceforth, $Z\sim$ GELX$(\underline{V})$ denotes an RV having density function (4). The reliability function (rf) of $Z$ is $R_{\underline{V}}(z)=1-F_{\underline{V}}(z)$, and the hazard rate function (HRF) is $h_{\underline{V}}(z)=f_{\underline{V}}(z)/R_{\underline{V}}(z)$.
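Taking the GELX CDF to be $F(z)=[1-\exp\{-[\psi(z)^{-1}-1]^{-1}\}]^{\delta_1}$ with $\psi(z)=[1-\varphi_\beta(z)]^{\delta_2}$ (our reading of the source formula; note $[\psi^{-1}-1]^{-1}=\psi/(1-\psi)$), both functions can be sketched in Python; the final check confirms that this PDF really is the derivative of this CDF, so the two readings are at least internally consistent:

```python
import math

def gelx_cdf(z, d1, d2, beta):
    """Assumed reading of Eq. (3); psi/(1 - psi) equals [psi^(-1) - 1]^(-1)."""
    psi = (1.0 - (1.0 + 0.5 * z) ** (-beta)) ** d2
    t = psi / (1.0 - psi)
    return (1.0 - math.exp(-t)) ** d1

def gelx_pdf(z, d1, d2, beta):
    """Assumed reading of Eq. (4)."""
    phi = (1.0 + 0.5 * z) ** (-beta)
    G = 1.0 - phi                      # baseline LX CDF, 1 - phi_beta(z)
    psi = G ** d2
    t = psi / (1.0 - psi)
    return (d1 * d2 * beta / 2.0) * G ** (d2 - 1.0) \
        * (1.0 + 0.5 * z) ** (-(beta + 1.0)) / (1.0 - psi) ** 2 \
        * math.exp(-t) * (1.0 - math.exp(-t)) ** (d1 - 1.0)

# Internal consistency: the PDF matches the numerical derivative of the CDF.
d1, d2, beta, z, eps = 1.5, 0.8, 2.0, 1.2, 1e-6
numeric = (gelx_cdf(z + eps, d1, d2, beta) - gelx_cdf(z - eps, d1, d2, beta)) / (2.0 * eps)
assert abs(numeric - gelx_pdf(z, d1, d2, beta)) < 1e-5
```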
In this paper, we employ the new model in (4) in two significant directions which are the risk analysis and distributional validity. The use of probability-based distributions to describe risk exposure is an important new statistical direction. Typically, a single number or a small set of numbers, known as key risk indicators (KRIs), are used to represent the level of risk exposure. The KRIs serve as a valuable tool for actuaries and risk managers by providing information about the extent to which a company is exposed to specific risks. Various KRIs, such as value-at-risk (VaRK), tail-value-at-risk (TVaRK), conditional-value-at-risk (CVaRK), tail variance (TV), and tail mean-variance (TMV), can be analyzed. For the risk validation, the Rao–Robson Nikulin (RRNIK) test statistic (see Nikulin [8], Nikulin [9], Nikulin [10], and Rao and Robson [11]) is used for uncensored validation. On the other hand, a new modified version of the RRNIK test statistic is used for the censored validation. Finally, the actuarial datasets are examined for validation under the RRNIK test statistic. These risk indicators help investors and risk managers make informed decisions by quantifying and assessing the potential downside risk of investments. They contribute to a more comprehensive understanding of risk and support the development of robust risk management strategies.
The following are the primary motivations for introducing this new distribution:
I.
We created a new probability distribution with only three parameters for validation and risk analysis. For applied modeling, estimation, simulation trials, etc., the fewer the distribution parameters, the better.
II.
A new probability distribution is presented with simple mathematical features that make it straightforward to compute and use. The quantile function is the only mathematical or statistical property of the present distribution that cannot be obtained in precise formulas, as will be demonstrated later. Modern statistical programs such as R and Mathcad, however, greatly aid in overcoming this issue with numerical approaches. In these circumstances, it is essential to understand numerical methods (and the numerical solutions they provide) in order to get around some of the more difficult formulations that researchers may run up against.
III.
We examined numerous traditional estimation techniques in light of the new distribution, either through simulative experiments or through real-world applications.
IV.
Most practical statistical work must incorporate new distributions into its modeling procedures. In this work, we did this by employing an old, well-known goodness-of-fit test and a new, modified goodness-of-fit test, and we gave evidence and justifications in support of the validity of the new modified test as well as the significance of the new distribution.
V.
We examined the new distribution from a number of perspectives in order to better understand it, including the mathematical side, the modeling side, the estimation and simulation processes in various ways, and the statistical hypothesis tests.
In fact, the specialized literature has a good number of extensions of the LX distribution, and these extensions were mostly used in mathematical and statistical modeling processes. The Lomax distribution is often used to model extreme events or rare occurrences that fall in the upper tail of a distribution. In actuarial sciences, extreme events are of particular interest since they often represent catastrophic or high-impact events such as insurance claims, natural disasters, or financial losses. The Lomax distribution provides a flexible framework for modeling the tail behavior of these events and estimating their probabilities. Actuaries play a crucial role in the insurance industry by assessing and managing risks. The Lomax distribution is often employed in insurance modeling, particularly in areas such as property and casualty insurance. It helps actuaries estimate the likelihood and severity of extreme events, which is essential for determining appropriate insurance premiums, policy limits, and reserves (see Aboraya et al. [1]).
Actuarial risk analysis involves modeling the frequency and severity of losses or claims. The Lomax distribution can be used to model the severity component, representing the distribution of individual claim sizes or losses. By fitting the Lomax distribution to historical data, actuaries can gain insights into the statistical properties of losses and make informed decisions regarding risk management and pricing. Actuarial risk analysis also involves estimating extreme quantiles, such as value-at-risk or the conditional tail expectation, which are used to assess the potential losses or liabilities associated with extreme events. The Lomax distribution can be fitted to historical data or used in combination with other models to estimate these tail quantiles accurately. Its tail heaviness allows the extreme values and tail behavior of the data to be captured better (see Wirch [12] and Tasche [13]). The LX distribution serves as a useful tool for actuarial science researchers. It provides a flexible framework for studying extreme value theory, analyzing heavy-tailed distributions, and developing new statistical models for actuarial risk. Researchers can explore various aspects of risk analysis, such as dependence modeling, copulas, multivariate extensions, and time series modeling, using the Lomax distribution as a building block. For more details, regression models, applications, and real datasets, see Butt and Khalil [14] for a novel skewed bimodal model for modeling asymmetric heavy-tail bimodal datasets, Reyes et al. [15] for a new bimodal exponential extension with some applications in risk theory, and Gómez et al. [16] for asymmetric bimodal double-regression modeling. A few plots of the GELX PDF and HRF are shown in Figure 1. We conclude from the left panel of Figure 1 that the PDF of the GELX distribution displays a variety of significant forms with varied kurtosis values.
Based on Figure 1a, it is seen that the new PDF can be symmetric with one peak, asymmetric with one peak, and right-skewed with no peak. Based on Figure 1b, it is seen that the new HRF can be constant, decreasing, increasing, and increasing–constant.
Section 2 presents some mathematical properties of the new model. Section 3 presents the KRIs. The risk assessment using different estimation methods is given in Section 4. A case study is illustrated in Section 5. The construction of the RRNIK statistic for the GELX model is given in Section 6. Section 7 and Section 8 give the uncensored and censored distributional validations, respectively. Section 9 offers some concluding remarks.

2. Mathematical Properties

First, we give a simple formula for (4). Using the following series expansion, which holds for $|\zeta_1\zeta_2|<1$ and $\zeta_3>0$ real non-integer,

$$\left(1-\zeta_1\zeta_2\right)^{\zeta_3-1}=\sum_{\zeta_4=0}^{\infty}\left(-\zeta_1\right)^{\zeta_4}\binom{\zeta_3-1}{\zeta_4}\zeta_2^{\,\zeta_4},$$

to expand the quantity $A_{\delta_1,\delta_2,\beta}(z)$, we obtain

$$A_{\delta_1,\delta_2,\beta}(z)=\sum_{i=0}^{\infty}(-1)^{i}\binom{\delta_1-1}{i}\exp\left(-i\left[\psi_{\delta_2,\beta}(z)^{-1}-1\right]^{-1}\right).$$
Then, the novel PDF in (4) can be derived as

$$f_{\underline{V}}(z)=\frac{\delta_1\delta_2\beta}{2}\,\frac{\left[1-\varphi_\beta(z)\right]^{\delta_2-1}}{\left\{1-\left[1-\varphi_\beta(z)\right]^{\delta_2}\right\}^{2}}\sum_{i=0}^{\infty}(-1)^{i}\binom{\delta_1-1}{i}\left(1+\tfrac{1}{2}z\right)^{-(\beta+1)}\,\underbrace{\exp\left(-(1+i)\left[\psi_{\delta_2,\beta}(z)^{-1}-1\right]^{-1}\right)}_{B_{\delta_2,\beta}(z)}.$$
Expanding the quantity $B_{\delta_2,\beta}(z)$ using the power series of the exponential function, we obtain

$$B_{\delta_2,\beta}(z)=\sum_{d_1=0}^{\infty}\frac{(-1)^{d_1}(1+i)^{d_1}}{d_1!}\left(\frac{\left[1-\varphi_\beta(z)\right]^{\delta_2 d_1}}{\left\{1-\left[1-\varphi_\beta(z)\right]^{\delta_2}\right\}^{d_1}}\right).$$
Then, the PDF in (4) can be formulated as

$$f_{\underline{V}}(z)=\frac{\delta_1\delta_2\beta}{2}\sum_{i,d_1=0}^{\infty}\frac{(-1)^{i+d_1}(1+i)^{d_1}}{d_1!}\binom{\delta_1-1}{i}\left[1-\varphi_\beta(z)\right]^{\delta_2(d_1+1)-1}\,\underbrace{\left\{1-\left[1-\varphi_\beta(z)\right]^{\delta_2}\right\}^{-(2+d_1)}}_{C_{d_1,\delta_2,\beta}(z)}\left(1+\tfrac{1}{2}z\right)^{-(\beta+1)}.$$

We apply the series expansion again to the quantity $C_{d_1,\delta_2,\beta}(z)$. Then, we can obtain

$$f_{\underline{V}}(z)=\sum_{d_1,d_2=0}^{\infty}w_{d_1,d_2}\,h_{\delta,\beta}(z)\Big|_{\delta=\delta_2(d_1+d_2+1)},$$

where $h_{\delta,\beta}(z)=\tfrac{1}{2}\delta\beta\left[1-\varphi_\beta(z)\right]^{\delta-1}\left(1+\tfrac{1}{2}z\right)^{-(\beta+1)}$ represents the PDF of the exp-LX model with power parameter $\delta$ and

$$w_{d_1,d_2}=\frac{\delta_1\delta_2}{d_1!\,\delta}\,(-1)^{d_1+d_2}\binom{-(d_1+2)}{d_2}\sum_{i=0}^{\infty}(-1)^{i}(i+1)^{d_1}\binom{\delta_1-1}{i}.$$
The CDF of the GELX model can also be expressed as a mixture of exponentiated Lomax (ELX) CDFs. By integrating (4), we obtain the same mixture representation
$$F_{\underline{V}}(z)=\sum_{d_1,d_2=0}^{\infty}w_{d_1,d_2}\,H_{\delta,\beta}(z)\Big|_{\delta=\delta_2(d_1+d_2+1)},$$

where $H_{\delta,\beta}(z)=\left[1-\varphi_\beta(z)\right]^{\delta}$ is the CDF of the ELX model with power parameter $\delta$. The stochastic properties of probability distributions are important because they describe the behavior of random variables and stochastic processes. These properties are used to model and analyze a wide range of real-world phenomena, from stock prices to weather patterns to the behavior of subatomic particles.
Stochastic properties are used to model the uncertainty of and variability in financial assets and to analyze the risk associated with investments. The mean and variance of probability distributions are used to calculate the expected return and risk of a portfolio of assets.

3. The KRIs

The VaRK is a widely used risk indicator that estimates the maximum potential loss within a specified confidence level over a given time horizon. It provides a single number that represents the worst-case loss a portfolio or investment is expected to experience. VaRK helps investors assess and compare the riskiness of different assets or portfolios and set risk management limits. It is particularly useful in risk measurement, risk monitoring, and regulatory compliance; see Wirch [12], Tasche [13], and Acerbi and Tasche [2] for more details and applications. We can simply obtain the quantity VaR ( z )  for the GELX distribution from the following probability:
$$\Pr\left\{Z>2\left[\left(1-\left\{\frac{q(\delta_1)}{1+q(\delta_1)}\right\}^{\frac{1}{\delta_2}}\right)^{-\frac{1}{\beta}}-1\right]\right\}=\begin{cases}1\%, & q=99\%,\\ 5\%, & q=95\%,\end{cases}$$

where $q(\delta_1)=-\log\left(1-q^{\frac{1}{\delta_1}}\right)$.
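The VaRK expression can be verified by a round trip: plugging the quantile back into the CDF must return the level $q$ (1% exceedance at $q=99\%$, 5% at $q=95\%$, and so on). A sketch under the same assumed reading of the GELX CDF as elsewhere in our snippets; the helper names are ours:

```python
import math

def gelx_cdf(z, d1, d2, beta):
    # assumed reading of the GELX CDF
    psi = (1.0 - (1.0 + 0.5 * z) ** (-beta)) ** d2
    return (1.0 - math.exp(-psi / (1.0 - psi))) ** d1

def gelx_var(q, d1, d2, beta):
    """VaR_q = 2 * [(1 - {t/(1 + t)}^(1/d2))^(-1/beta) - 1], t = -log(1 - q^(1/d1))."""
    t = -math.log(1.0 - q ** (1.0 / d1))
    psi = t / (1.0 + t)
    return 2.0 * ((1.0 - psi ** (1.0 / d2)) ** (-1.0 / beta) - 1.0)

d1, d2, beta = 1.5, 0.8, 2.0
for q in (0.70, 0.90, 0.95, 0.99):
    v = gelx_var(q, d1, d2, beta)
    assert abs(gelx_cdf(v, d1, d2, beta) - q) < 1e-10   # Pr{Z > v} = 1 - q
```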
The TVaRK, which is also known as conditional value-at-risk (CVaR), is an extension of VaR by considering the expected loss beyond the VaRK threshold. Unlike VaR, which only measures the loss at a specific confidence level, TVaRK estimates the average loss in the tail of the distribution (see Hogg and Klugman [17], Klugman et al. [18], Lane [19], McNeil et al. [20], Vernic [21], Artzner [22], and Charpentier [23]). This risk indicator provides additional information about extreme losses and tail risks. TVaRK is often used to evaluate downside risk and is popular in risk measurement, risk budgeting, and portfolio optimization. The  TVaRK ( Z )  can also be derived due to the main result below as
$$\text{TVaRK}(Z)=E\left(Z\mid Z>\pi(q)\right)=\frac{1}{1-F_{\underline{V}}(\pi(q))}\int_{\pi(q)}^{\infty}z\,f_{\underline{V}}(z)\,dz,$$

which can be simplified to

$$\text{TVaRK}(Z)=\frac{1}{1-q}\int_{\pi(q)}^{\infty}z\,f_{\underline{V}}(z)\,dz.$$

Then, using (5), we reach

$$\text{TVaRK}(Z)=\frac{1}{1-q}\sum_{d_1,d_2=0}^{\infty}\sum_{d_3=0}^{1}\Upsilon_{d_1,d_2,d_3}^{(1,\delta)}\left[B\left(\delta,\,1+d_3-\frac{1}{\beta}\right)-B_{\pi(q)}\left(\delta,\,1+d_3-\frac{1}{\beta}\right)\right]\Big|_{(\beta>1)},$$

where

$$\Upsilon_{d_1,d_2,d_3}^{(1,\delta)}=2\,w_{d_1,d_2}\,\delta\,(-1)^{d_3}\binom{1}{d_3},$$

$$B\left(1+\Upsilon_1,1+\Upsilon_2\right)=\int_{0}^{1}z^{\Upsilon_1}\left(1-z\right)^{\Upsilon_2}dz,\qquad B_{\pi(q)}\left(1+\Upsilon_1,1+\Upsilon_2\right)=\int_{0}^{\pi(q)}z^{\Upsilon_1}\left(1-z\right)^{\Upsilon_2}dz.$$
Thus, the quantity TVaRK(Z) is an average of all VaRK values above the confidence level q, which provides more information about the tail of the GELX distribution. Further, it can also be expressed as

$$\text{TVaRK}(Z)=\text{VaRK}(Z)+e\left(\text{VaRK}(Z)\right),$$

where $e(\text{VaRK}(Z))$ is the mean excess loss function evaluated at the $100q\%$th quantile. So, TVaRK(Z) exceeds its corresponding VaRK(Z) by the average excess (MELS(Z)) of all losses that exceed VaRK(Z) (see Acerbi and Tasche [2], Wirch [12], and Tasche [13] for more details and applications). Tail variance measures the variability in or dispersion of returns in the extreme tails of the distribution. It focuses on the higher moments of the distribution beyond the mean and standard deviation. The TV provides information about the thickness and shape of the tails, helping investors gauge the potential losses in extreme market conditions. It is particularly useful in assessing tail risk, designing risk management strategies, and constructing tail risk hedging instruments. Following Furman and Landsman [24], the TV risk indicator can be derived from the following formula:
$$\text{TV}_q(Z)=E\left(Z^{2}\mid Z>\pi(q)\right)-\left[\text{TVaRK}(Z)\right]^{2};$$

then, using (6), we reach

$$E\left(Z^{2}\mid Z>\pi(q)\right)=\frac{1}{1-q}\sum_{d_1,d_2=0}^{\infty}\sum_{d_3=0}^{2}\Upsilon_{d_1,d_2,d_3}^{(2,\delta)}\left[B\left(\delta,\,1+d_3-\frac{2}{\beta}\right)-B_{\pi(q)}\left(\delta,\,1+d_3-\frac{2}{\beta}\right)\right]\Big|_{(\beta>2)},$$

and $\Upsilon_{d_1,d_2,d_3}^{(2,\delta)}=4\,w_{d_1,d_2}\,\delta\,(-1)^{d_3}\binom{2}{d_3}$. Inserting (9) and (12) in (11), the $\text{TV}_q(Z;\underline{V})$ can be expressed as

$$\text{TV}_q(Z;\underline{V})=\frac{1}{1-q}\left(\sum_{d_1,d_2=0}^{\infty}\sum_{d_3=0}^{2}\Upsilon_{d_1,d_2,d_3}^{(2,\delta)}\left[B\left(\delta,\,1+d_3-\frac{2}{\beta}\right)-B_{\pi(q)}\left(\delta,\,1+d_3-\frac{2}{\beta}\right)\right]-\frac{1}{1-q}\left\{\sum_{d_1,d_2=0}^{\infty}\sum_{d_3=0}^{1}\Upsilon_{d_1,d_2,d_3}^{(1,\delta)}\left[B\left(\delta,\,1+d_3-\frac{1}{\beta}\right)-B_{\pi(q)}\left(\delta,\,1+d_3-\frac{1}{\beta}\right)\right]\right\}^{2}\right).$$
The TMVK analysis combines elements of mean-variance analysis with a focus on the tail of the distribution. It considers the trade-off between expected return and risk, specifically targeting the downside risk associated with extreme losses. By incorporating tail risk measures, such as TVaRK or tail variance, into the traditional mean-variance framework, investors can construct portfolios that optimize risk-adjusted returns, giving more weight to the tail behavior of the returns distribution. Following Landsman [25], the TMVK risk can be expressed as
$$\text{TMVK}(Z,\varsigma)=\text{TVaRK}(Z)+\varsigma\,\text{TV}_q(Z)\,\Big|_{\,0<\varsigma<1}.$$

Then, for any loss random variable (LRV), $\text{TMVK}(Z,\varsigma)>\text{TV}_q(Z)$ and, for $\varsigma=0$, $\text{TMVK}(Z)=\text{TVaRK}(Z)$. For more applications, see Punzo [26] and Punzo et al. [27,28].
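As a cross-check on the series expressions for the tail indicators, the same quantities can be approximated by brute force via the identity $E(Z^m\mid Z>\pi(q))=\frac{1}{1-q}\int_q^1[\text{VaR}_u]^m\,du$, integrating the quantile function numerically instead of summing the expansions. A sketch under the same assumed CDF reading as in our earlier snippets; all names and parameter values are illustrative:

```python
import math

def gelx_var(q, d1, d2, beta):
    # closed-form quantile under the assumed GELX CDF reading
    t = -math.log(1.0 - q ** (1.0 / d1))
    psi = t / (1.0 + t)
    return 2.0 * ((1.0 - psi ** (1.0 / d2)) ** (-1.0 / beta) - 1.0)

def tail_moment(m, q, d1, d2, beta, n=100000):
    """E(Z^m | Z > pi(q)) via midpoint integration of VaR_u^m over u in (q, 1)."""
    s = 0.0
    for k in range(n):
        u = q + (1.0 - q) * (k + 0.5) / n
        s += gelx_var(u, d1, d2, beta) ** m
    return s / n

d1, d2, beta, q, varsigma = 1.5, 0.8, 4.0, 0.95, 0.5   # beta > 2, matching the TV side condition
var_q  = gelx_var(q, d1, d2, beta)                     # VaRK
tvar_q = tail_moment(1, q, d1, d2, beta)               # TVaRK
tv_q   = tail_moment(2, q, d1, d2, beta) - tvar_q ** 2 # TVq
tmv_q  = tvar_q + varsigma * tv_q                      # TMVK, 0 < varsigma < 1
mels_q = tvar_q - var_q                                # MELS = TVaRK - VaRK
assert tvar_q > var_q and tv_q > 0.0
```

With these (arbitrary) parameters the ordering VaRK < TVaRK and TVq > 0 holds, consistent with TVaRK(Z) = VaRK(Z) + e(VaRK(Z)).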

4. Risk Assessment Using Different Estimation Methods

In this section, we consider the following estimation methods for calculating the KRIs: maximum likelihood estimation (MAXLE), ordinary least squares (ORLSE), L-Moment (L-MO), and Anderson–Darling estimation (ADE). These quantities are estimated using N = 1000 replications with different sample sizes (n = 50, 150, 300, 500) and three confidence levels (CLs) (q = 70%, 90%, 99%). All results are calculated and checked using the Mathcad program and reported in Table 1 (n = 50), Table 2 (n = 150), Table 3 (n = 300), and Table 4 (n = 500), from which we conclude:
  • VaRK(Z), TVaRK(Z), TVq(Z), TMVK(Z), and MELS(Z) increase when q increases for all selected methods. For example,
For   n = 50 : VaRK   ( Z ) M A X L E = 1.61281 | q = 70 % ,   2.09861 | q = 90 % ,   2.92329 | q = 99 % ,
For   n = 500 : VaRK   ( Z ) M A X L E = 1.62956 | q = 70 % ,   2.12570 | q = 90 % ,   2.96993 | q = 99 % ,
For   n = 50 : VaRK   ( Z ) O R L S E = 1.61918 | q = 70 % , 2.10724 | q = 90 % , 2.93591 | q = 99 % ,
For   n = 500 : VaRK   ( Z ) O R L S E = 1.631067 | q = 70 % , 2.12570 | q = 90 % , 2.97316 | q = 99 % ,
For   n = 50 : VaRK   ( Z ) L - M O = 1.62969 | q = 70 % , 2.119556 | q = 90 % , 2.119556 | q = 99 % ,
For   n = 500 : VaRK   ( Z ) L - M O = 1.639371 | q = 70 % , 2.139063 | q = 90 % , 2.98971 | q = 99 % .
  • VaRK ( Z ) M A X L E <  VaRK ( Z ) O R L S E <  VaRK ( Z ) A D E <  VaRK ( Z ) L - M O  for most  q . For example,
For n = 50: VaRK(Z)MAXLE = 1.61281|q=70% < ⋯ < VaRK(Z)L-MO = 1.62969|q=70%,
For n = 150: VaRK(Z)MAXLE = 1.62028|q=70% < ⋯ < VaRK(Z)L-MO = 1.62885|q=70%,
For n = 300: VaRK(Z)MAXLE = 1.62484|q=70% < ⋯ < VaRK(Z)L-MO = 1.638691|q=70%,
For n = 500: VaRK(Z)MAXLE = 1.62956|q=70% < ⋯ < VaRK(Z)L-MO = 1.639371|q=70%,
For n = 50: TVaRK(Z)MAXLE = 2.81002|q=70% < ⋯ < TVaRK(Z)L-MO = 2.837329|q=70%,
For n = 150: TVaRK(Z)MAXLE = 2.83521|q=70% < ⋯ < TVaRK(Z)L-MO = 2.854852|q=70%,
For n = 300: TVaRK(Z)MAXLE = 2.84929|q=70% < ⋯ < TVaRK(Z)L-MO = 2.872898|q=70%,
For n = 500: TVaRK(Z)MAXLE = 2.85558|q=70% < ⋯ < TVaRK(Z)L-MO = 2.874868|q=70%.
  • For the same method, we have the following results:
VaRK(Z)MAXLE = 1.61281|q=70%, n=50 < ⋯ < VaRK(Z)MAXLE = 1.62956|q=70%, n=500,
VaRK(Z)ORLSE = 1.61918|q=70%, n=50 < ⋯ < VaRK(Z)ORLSE = 1.631067|q=70%, n=500,
TVaRK(Z)MAXLE = 2.81002|q=70%, n=50 < ⋯ < TVaRK(Z)MAXLE = 2.85558|q=70%, n=500,
TVaRK(Z)ORLSE = 2.82220|q=70%, n=50 < ⋯ < TVaRK(Z)ORLSE = 2.85872|q=70%, n=500,
TV(Z)MAXLE = 1.45159|q=70%, n=50 < ⋯ < TV(Z)MAXLE = 1.53073|q=70%, n=500,
TV(Z)ORLSE = 1.46627|q=70%, n=50 < ⋯ < TV(Z)ORLSE = 1.5351|q=70%, n=500,
VaRK(Z)MAXLE = 2.92329|q=99%, n=50 < ⋯ < VaRK(Z)MAXLE = 2.96993|q=99%, n=500,
VaRK(Z)ORLSE = 2.93591|q=99%, n=50 < ⋯ < VaRK(Z)ORLSE = 2.97316|q=99%, n=500,
TVaRK(Z)MAXLE = 4.12641|q=99%, n=50 < ⋯ < TVaRK(Z)MAXLE = 4.20603|q=99%, n=500,
TVaRK(Z)ORLSE = 2.93591|q=99%, n=50 < ⋯ < TVaRK(Z)ORLSE = 4.211052|q=99%, n=500,
TV(Z)MAXLE = 1.48586|q=99%, n=50 < ⋯ < TV(Z)MAXLE = 1.57468|q=99%, n=500,
and
TV(Z)ORLSE = 1.50138|q=99%, n=50 < ⋯ < TV(Z)ORLSE = 1.57946|q=99%, n=500.
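Tables 1–4 rest on repeated sampling from the model. Since, under the CDF reading assumed in our earlier snippets, the VaR expression doubles as the u-th quantile, samples can be drawn by inverse-transform sampling; this toy check (our seed, sample size, and parameter values, not the paper's simulation design) confirms that empirical quantiles of a simulated sample track the theoretical VaR levels:

```python
import math
import random

def gelx_var(q, d1, d2, beta):
    # closed-form quantile under the assumed GELX CDF reading
    t = -math.log(1.0 - q ** (1.0 / d1))
    psi = t / (1.0 + t)
    return 2.0 * ((1.0 - psi ** (1.0 / d2)) ** (-1.0 / beta) - 1.0)

random.seed(7)
d1, d2, beta, n = 1.5, 0.8, 2.0, 5000
sample = sorted(gelx_var(random.random(), d1, d2, beta) for _ in range(n))

# Empirical q-quantiles should roughly match the theoretical VaR at level q.
for q in (0.70, 0.90, 0.99):
    emp = sample[int(q * n)]
    theo = gelx_var(q, d1, d2, beta)
    assert abs(emp - theo) / theo < 0.15
```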

5. Risk Analysis under the Actuarial Negatively Skewed Claims Data: A Case Study

The applications of these risk indicators are varied, but some common applications include:
I.
Investors and portfolio managers use these risk indicators to assess and manage the risk exposure of their portfolios. They aid in setting risk limits, designing optimal asset allocations, and monitoring portfolio performance (see Furman and Landsman [24]).
II.
Financial institutions, regulatory bodies, and risk management professionals use these indicators to quantify and report risks associated with different investment strategies. They provide insights into potential losses and help ensure compliance with regulatory requirements.
III.
Risk indicators play a crucial role in stress testing and scenario analysis. By simulating extreme market conditions, these indicators help assess the resilience of portfolios and investment strategies in adverse scenarios.
IV.
Risk indicators facilitate the effective communication of risk to stakeholders, including investors, clients, and regulators. They provide a standardized and concise measure of risk that can be easily understood and compared across different investments. Skewed distributions are used in risk management to model the potential losses from different types of risks, such as credit risk, market risk, and operational risk. For example, the skewed-t distribution is commonly used to model credit risk in banking. Skewed distributions are widely used in insurance to model the frequency and severity of losses (see Shrahili et al. [20]). For example, the Lomax distribution is often used to model the distribution of losses in insurance claims.
We examine the actuarial claims payment triangle from a U.K. Motor Non-Comprehensive account in this section as a practical illustration of case studies. We choose the 2007–2013 origin period for practical reasons (Charpentier [23]). The actuarial claims payment data frame displays the claims data similarly to how a database would normally keep it. The development year, incremental payments, and origin year are all listed in the first column and range from 2007 to 2013. You should be aware that these actuarial claims data are initially assessed using a probability-based distribution (for relevant applications, see Ali et al. [5]). Table 5 and Table 6 present the results obtained from the analysis. The key risk indicators (KRIs) for the GELX model under the four different techniques are listed in four sections of Table 5: MAXLE, ORLSE, L-MO, and AD. Table 6 also has four sections, each of which lists the KRIs for the ELX model using the same procedures.
In addition to the numerical analysis, graphical methods are employed to examine how theoretical distributions initially fit the data and how the densities of the actuarial claims take shape. The Cullen and Frey plot helps identify whether a dataset deviates from a normal distribution. Understanding the distributional characteristics of data is crucial in many statistical analyses, as assumptions about normality often underlie statistical tests. Skewness, represented on the x-axis of the plot, measures the extent to which the data distribution is asymmetric: positive skewness indicates a longer right tail, while negative skewness indicates a longer left tail. Kurtosis, shown on the y-axis, measures the degree of peakedness or flatness of a distribution compared to the normal distribution: high positive kurtosis implies heavy tails, while low kurtosis suggests lighter tails. Analyzing kurtosis helps us detect outliers or extreme values in the dataset. Figure 2 gives the Cullen and Frey (skewness–kurtosis) plot for the actuarial claims data; it reveals that the data are left-skewed with a kurtosis of less than 3. The large blue dot represents our insurance data. The data lie away from the points representing the normal and exponential distributions, and away from the lines representing the gamma and lognormal distributions. By projecting vertically onto those lines, we can see that the data have lower kurtosis than the lognormal and gamma models of the same skewness.
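The Cullen and Frey coordinates (skewness on the x-axis, kurtosis on the y-axis) and the dispersion index reported for the claims data are all central-moment summaries; here is a minimal sketch of how they are computed, on illustrative toy data rather than the actual claims triangle:

```python
def moments_summary(x):
    """Return (skewness, kurtosis, dispersion index) from central moments."""
    n = len(x)
    m = sum(x) / n
    m2 = sum((v - m) ** 2 for v in x) / n
    m3 = sum((v - m) ** 3 for v in x) / n
    m4 = sum((v - m) ** 4 for v in x) / n
    skew = m3 / m2 ** 1.5   # negative => longer left tail
    kurt = m4 / m2 ** 2     # < 3 => lighter tails than the normal
    dix = m2 / m            # variance/mean; < 1 => under-dispersion
    return skew, kurt, dix

# Toy left-skewed data: most values are high, a few are very low.
toy = [1.0, 6.0, 7.0, 7.5, 8.0, 8.2, 8.4, 8.5, 8.6, 8.7]
skew, kurt, dix = moments_summary(toy)
assert skew < 0.0 and dix < 1.0
```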
The nonparametric kernel density estimation (NKDE) method (top left), the quantile-quantile (Q-Q) plot (top right), the total time on test (TTT) plot (bottom left), and the “box plot” (bottom right) are only a few of the graphical techniques shown in Figure 3 below.
Figure 3 (top left panel) shows that the initial density is an asymmetric function with a left tail and that there are no extreme observations. Figure 3 (top right panel) confirms that the data do not contain any extreme values. According to the TTT plot (Figure 3, bottom left panel), the hazard rate function of any model used to explain the current data should be monotonically increasing. Figure 3 (bottom right panel, the box plot) likewise shows that the data do not contain any extreme values. The scattergrams for the actuarial claim size data are shown in Figure 4 (first row panels), where Figure 4 (top left) refers to the initial scattergram and Figure 4 (top right) refers to the fitted scattergram. For the actuarial claim size data, the autocorrelation function (ACF) and partial autocorrelation function (partial-ACF) are shown in the second row of Figure 4. The ACF can be used to show how the correlation between any two signal values changes as the distance between them changes.
The theoretical ACF is a time-domain measure of the memory of the stochastic process and offers no insight into the frequency content of the process. The theoretical partial ACF with lag k = 1 is also included; see Figure 4 (bottom right panel). According to that panel, the first lag is statistically significant, while none of the partial autocorrelations at other lags are. This suggests an autoregressive AR(1) model as a potential fit to these data. For our data, skewness = −0.74828 (left-skewed data), kurtosis = 2.78846 < 3, and the dispersion index (DIx) = 0.070835 (under-dispersed data). Based on Table 5 and Table 6, the following results can be highlighted:
1.
For all risk assessment methods:
For GELX: VaRK(Z|1 − q = 0.3) < VaRK(Z|1 − q = 0.25) < ⋯ < VaRK(Z|1 − q = 0.1) < VaRK(Z|1 − q = 0.01),
For ELX: VaRK(Z|1 − q = 0.3) < VaRK(Z|1 − q = 0.25) < ⋯ < VaRK(Z|1 − q = 0.1) < VaRK(Z|1 − q = 0.01).
2.
For all risk assessment methods:
For GELX: TVaRK(Z|1 − q = 0.3) < TVaRK(Z|1 − q = 0.25) < ⋯ < TVaRK(Z|1 − q = 0.1) < TVaRK(Z|1 − q = 0.01),
For ELX: TVaRK(Z|1 − q = 0.3) < TVaRK(Z|1 − q = 0.25) < ⋯ < TVaRK(Z|1 − q = 0.1) < TVaRK(Z|1 − q = 0.01).
3.
For all risk assessment methods:
For GELX: TV(Z|1 − q = 0.3) < TV(Z|1 − q = 0.25) < ⋯ < TV(Z|1 − q = 0.1) < TV(Z|1 − q = 0.01),
For ELX: TV(Z|1 − q = 0.3) > TV(Z|1 − q = 0.25) > ⋯ > TV(Z|1 − q = 0.1) > TV(Z|1 − q = 0.01).
4.
For all risk assessment methods:
For GELX: TMVK(Z|1 − q = 0.3) < TMVK(Z|1 − q = 0.25) < ⋯ < TMVK(Z|1 − q = 0.1) < TMVK(Z|1 − q = 0.01),
For ELX: TMVK(Z|1 − q = 0.3) > TMVK(Z|1 − q = 0.25) > ⋯ > TMVK(Z|1 − q = 0.1) > TMVK(Z|1 − q = 0.01).
5.
For all risk assessment methods:
For GELX: MELS(Z|1 − q = 0.3) < MELS(Z|1 − q = 0.25) < ⋯ < MELS(Z|1 − q = 0.1) < MELS(Z|1 − q = 0.01),
For ELX: MELS(Z|1 − q = 0.3) > MELS(Z|1 − q = 0.25) > ⋯ > MELS(Z|1 − q = 0.1) > MELS(Z|1 − q = 0.01).
6.
Under the MAXLE method and the GELX model: The VaRK ( Z )  is consistently growing, starting with 3104.32244 and ending with 12,791.47826; the TVaRK ( Z )  is consistently growing, starting with 5606.27701 and ending with 17,177.67141.
7. Under the ORLSE method and the GELX model: the VaRK(Z) is consistently growing, starting with 3559.79706 and ending with 18,274.02938; the TVaRK(Z) is consistently growing, starting with 7134.33969 and ending with 26,462.60076.
8. Under the L-MO method and the GELX model: the VaRK(Z) is consistently growing, starting with 2684.87356 and ending with 7251.04539; the TVaRK(Z) is consistently growing, starting with 3955.54267 and ending with 8938.81397.
9. Under the ADE method and the GELX model: the VaRK(Z) is consistently growing, starting with 3428.17317 and ending with 14,714.50356; the TVaRK(Z) is consistently growing, starting with 6329.36348 and ending with 19,871.73902. Similarly, the TVq(Z), the TMVK(Z), and the MELS(Z) are consistently growing. Under the ADE method and the ELX model: the VaRK(Z) is consistently growing, starting with 3440.462517 and ending with 11,119.778633; the TVaRK(Z) is consistently growing, starting with 5686.884025 and ending with 13,422.696625. However, the TVq(Z), the TMVK(Z), and the MELS(Z) are monotonically decreasing.
10. For the GELX model: for nearly all q values, the ORLSE method is recommended since it provides the most acceptable risk exposure analysis, followed by the MAXLE method, then the ADE method, and then the L-MO method; the latter methods nevertheless perform well. For the ELX model: for nearly all q values, the ORLSE method is again recommended, with the MAXLE method as a second choice; the other two methods also perform well.
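The five indicators compared in these items can also be estimated empirically from a loss sample. The sketch below uses simple order statistics on hypothetical claim amounts; the crude quantile rule is an assumption of this sketch, and the weight λ = 0.5 in TMV = TVaR + λ·TV matches the convention implied by the simulated tables:

```python
def key_risk_indicators(losses, q, lam=0.5):
    """Empirical VaR, TVaR, TV, TMV, and MEL at confidence level q.

    lam is the tail-variance weight in TMV = TVaR + lam * TV; the value
    0.5 matches the convention implied by the simulated tables.
    """
    z = sorted(losses)
    n = len(z)
    idx = min(n - 1, int(q * n))          # crude order-statistic quantile
    var_q = z[idx]                        # VaRK(Z)
    tail = z[idx:]                        # losses at or beyond VaR
    tvar_q = sum(tail) / len(tail)        # TVaRK(Z)
    tv_q = sum((x - tvar_q) ** 2 for x in tail) / len(tail)   # TVq(Z)
    tmv_q = tvar_q + lam * tv_q           # TMVK(Z)
    mel_q = tvar_q - var_q                # MELS(Z)
    return {"VaR": var_q, "TVaR": tvar_q, "TV": tv_q, "TMV": tmv_q, "MEL": mel_q}

# hypothetical claim amounts
claims = [120, 340, 95, 780, 210, 1500, 60, 430, 900, 275, 640, 1100]
for q in (0.70, 0.90):
    k = key_risk_indicators(claims, q)
    print(q, k["VaR"], round(k["TVaR"], 1), round(k["MEL"], 1))
```

As in the items above, VaR and TVaR grow with the confidence level q, and MEL is simply the gap TVaR − VaR.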

6. Construction of RRNIK Statistic for the GELX Model

Many statistical tests assume that the data are normally distributed. If the data are skewed, these tests may not be appropriate and can lead to incorrect conclusions. By using a skewed distribution to model the data, more robust statistical tests can be used to test hypotheses and make inferences. Skewed distributions can be used to make better decisions in various fields, such as finance, insurance, and engineering. By accurately modeling the data and estimating the likelihood of extreme events, decisions can be made that are more informed and better account for risk.
The RRNIK statistic is a well-known substitute for the traditional chi-squared tests in the case of complete data (for additional information, see Nikulin [8], Nikulin [9], Nikulin [10], and Rao and Robson [11]). The most popular test for checking whether a mathematical model is suitable for observed data is the Pearson chi-square statistic; however, when the model's parameters are unknown or the data are censored, this test is no longer valid. For complete data, Nikulin [8,9,10] and Rao and Robson [11] introduced a natural modification of the Pearson statistic, referred to as the RRNIK statistic, which retains a limiting chi-square distribution.
When censoring is combined with unknown parameters, the standard test is not adequate for assessing the null hypothesis. Nikulin [8,9,10] and Rao and Robson [11] suggest that the RRNIK statistic be modified to take random right censoring into consideration. In the current study, we provide a modified chi-square test for the GELX model (see also Bagdonavičius et al. [29] and Bagdonavičius and Nikulin [30,31] for more details). To test the hypothesis, Nikulin [8,9,10] and Rao and Robson [11] established the RRNIK statistic $Y^2(\hat{\underline{V}}_m)$. Let
$$H_0 : \Pr\{ Z \le z \} = F_{\underline{V}}(z), \quad z \in \mathbb{R}.$$
Then, according to Nikulin [8], Nikulin [9], Nikulin [10] and Rao and Robson [11], we have
$$Y^2(\hat{\underline{V}}_m) = Z_m^2(\hat{\underline{V}}_m) + \frac{1}{m}\, L^T(\hat{\underline{V}}_m)\left( I(\hat{\underline{V}}_m) - J(\hat{\underline{V}}_m) \right)^{-1} L(\hat{\underline{V}}_m),$$

where $Z_m^2(\hat{\underline{V}}_m) = Z_m^T Z_m$ is the Pearson term built from the standardized cell frequencies

$$Z_m(\hat{\underline{V}}_m) = \left( \frac{\varsigma_1 - m p_1(\hat{\underline{V}}_m)}{\sqrt{m p_1(\hat{\underline{V}}_m)}},\ \frac{\varsigma_2 - m p_2(\hat{\underline{V}}_m)}{\sqrt{m p_2(\hat{\underline{V}}_m)}},\ \ldots,\ \frac{\varsigma_b - m p_b(\hat{\underline{V}}_m)}{\sqrt{m p_b(\hat{\underline{V}}_m)}} \right)^T,$$

and $J(\hat{\underline{V}}_m)$ is the information matrix for the grouped data,

$$J(\hat{\underline{V}}_m) = W(\hat{\underline{V}}_m)^T\, W(\hat{\underline{V}}_m),$$

with

$$W(\hat{\underline{V}}_m) = \left[ \frac{1}{\sqrt{p_i}}\, \frac{\partial p_i(\hat{\underline{V}}_m)}{\partial V_k} \right]_{b \times s}, \quad i = 1, 2, \ldots, b \ \text{and} \ k = 1, 2, \ldots, s,$$

and then

$$L(\hat{\underline{V}}_m) = \left( L_1(\hat{\underline{V}}_m), \ldots, L_s(\hat{\underline{V}}_m) \right)^T \quad \text{with} \quad L_k(\hat{\underline{V}}_m) = \sum_{i=1}^{b} \frac{\varsigma_i}{p_i(\hat{\underline{V}}_m)}\, \frac{\partial p_i(\hat{\underline{V}}_m)}{\partial V_k},$$

where $\hat{I}$ refers to the estimated Fisher INFMX. The $Y^2$ statistic then has $(b-1)$ degrees of freedom (DOF) and follows the chi-square distribution. Consider a set of observations divided into the intervals $I_1, I_2, \ldots, I_b$, where

$$I_j = \left( A_{j-1,b}(z);\ A_{j,b}(z) \right], \qquad p_j(\underline{V}) = \int_{A_{j-1,b}(z)}^{A_{j,b}(z)} f_{\underline{V}}(z)\, dz, \quad j = 1, 2, \ldots, b,$$

and

$$A_{j,b}(z) = F^{-1}\!\left( \frac{j}{b} \right), \quad j = 1, \ldots, b-1.$$
In this research, we apply a modified goodness-of-fit test, the RRNIK statistic, to examine whether the data follow the GELX distribution when its parameters are unknown. The statistic relies on the estimated Fisher INFMX, which we employ after computing the maximum likelihood estimates of the unknown GELX parameters from the dataset.
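As a computational sketch of the construction above, the statistic can be assembled from equiprobable cells, numerical derivatives of the cell probabilities, and the Fisher information. The example below uses a one-parameter exponential null as a hypothetical stand-in, since closed-form GELX quantities are not reproduced here:

```python
import math
import random

def rrn_statistic(z, rate, b=6, eps=1e-6):
    """Rao-Robson-Nikulin Y^2 for an Exp(rate) null with b equiprobable cells.

    Scalar-parameter case of Y^2 = Pearson term + (1/m) L (I - J)^{-1} L.
    """
    m = len(z)

    def cdf(x, lam):
        return 1.0 - math.exp(-lam * x)

    # cell boundaries A_{j,b} = F^{-1}(j / b) evaluated at the estimated rate
    cuts = [0.0] + [-math.log(1.0 - j / b) / rate for j in range(1, b)] + [math.inf]

    def probs(lam):
        out = []
        for i in range(b):
            hi = 1.0 if cuts[i + 1] == math.inf else cdf(cuts[i + 1], lam)
            out.append(hi - cdf(cuts[i], lam))
        return out

    p = probs(rate)   # equals 1/b in each cell by construction
    counts = [sum(1 for x in z if cuts[i] <= x < cuts[i + 1]) for i in range(b)]
    pearson = sum((counts[i] - m * p[i]) ** 2 / (m * p[i]) for i in range(b))

    # numerical derivatives of the cell probabilities w.r.t. the rate
    p_hi, p_lo = probs(rate + eps), probs(rate - eps)
    dp = [(p_hi[i] - p_lo[i]) / (2.0 * eps) for i in range(b)]

    L = sum(counts[i] / p[i] * dp[i] for i in range(b))
    J = sum(dp[i] ** 2 / p[i] for i in range(b))  # grouped information
    fisher = 1.0 / rate ** 2                      # per-observation Fisher info of Exp
    return pearson + (L ** 2 / (fisher - J)) / m

random.seed(7)
data = [random.expovariate(1.0) for _ in range(300)]
mle_rate = len(data) / sum(data)   # exponential MLE
y2 = rrn_statistic(data, mle_rate)
print(y2)  # compare against a chi-square(b - 1) critical value
```

Because the grouped information J never exceeds the full Fisher information for this model, the correction term is non-negative and Y² dominates the plain Pearson term.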

7. Uncensored Distributional Validation

To test a distributional hypothesis when the model parameters are unknown, we utilize a statistical test based on the RRNIK statistic variation, as suggested by Nikulin [8,9,10] and Rao and Robson [11]. Our adaptation of this test is tailored for the GELX model, where the observed lifetimes z_i follow a GELX distribution. The following null hypothesis is considered:
$$H_0 : F(z) \in \mathcal{F}_0 = \left\{ F_{0,\underline{V}}(z) \right\}, \quad z \in \mathbb{R}.$$
Below are the expressions for the survival function (SrF) and cumulative hazard function of the GELX distribution:
$$S_{\underline{V}}(z) = 1 - F_{\underline{V}}(z) = 1 - \left[ 1 - \exp\left\{ -\left[ \psi_{\delta_2,\beta}(z) - 1 \right]^{-1} \right\} \right]^{\delta_1}, \quad \underline{V} = (\delta_1, \delta_2, \beta),$$

and

$$V_{\underline{V}}(z) = -\ln\left[ S_{\underline{V}}(z) \right].$$

For all $j$, we have a constant value $e_{j,Z} = E_k / k$, where

$$E_k = E_k(z) = \sum_{l=1}^{i-1} V_{\underline{V}}(z_l), \qquad 0 = A_{0,b} < A_{1,b} < \cdots < A_{k-1,b} < A_{k,b} = +\infty,$$

and the $A_{j,b}(z)$ are random data functions. Then, the test statistic can be formulated as

$$Y^2_{m,r-1,q}(\hat{\underline{V}}) = Z^T \hat{S} Z, \qquad Z = (Z_1, Z_2, \ldots, Z_k)^T,$$

where

$$Z_j = \frac{1}{\sqrt{m}} \left( Q_{j,Z} - e_{j,Z} \right), \quad j = 1, 2, \ldots, k.$$

The test statistic can also be presented in the following form:

$$Y^2_{m,r-1,q}(\hat{\underline{V}}) = \sum_{j=1}^{k} \frac{1}{Q_{j,Z}} \left( Q_{j,Z} - e_{j,Z} \right)^2 + V_{W,G},$$
where  V W , G  and many other details are given in Nikulin [8], Nikulin [9], Nikulin [10], and Rao and Robson [11].
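Given grouped counts Q_{j,Z} and expected values e_{j,Z}, the leading sum in the last formula is immediate to compute. In the sketch below the counts are hypothetical and the correction term V_{W,G} is omitted (set to zero) for illustration:

```python
def bn_first_term(Q, e):
    """Leading sum of Y^2_{m,r-1,q}: sum_j (Q_j - e_j)^2 / Q_j."""
    return sum((qj - ej) ** 2 / qj for qj, ej in zip(Q, e))

# hypothetical grouped counts and equal expected cell values
Q = [12.0, 9.0, 11.0, 8.0, 10.0]
e = [10.0] * 5
print(round(bn_first_term(Q, e), 4))  # -> 1.0354
```

In practice the full statistic adds the V_{W,G} quadratic-form correction described by Bagdonavičius and Nikulin before comparing against the chi-square critical value.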

7.1. Uncensored Simulation Study and Assessment under the RRNIK Statistics  Y 2 ( q ; V _ ^ )

Here are some general steps to conduct a simulation study using the Rao–Robson Nikulin test statistic:
I. Define the GELX distribution by specifying its parameters, such as the location, scale, and shape parameters.
II. Set the sample size and the censoring mechanism. For instance, you can choose a fixed sample size and right-censor the data at a certain time point.
III. Generate multiple sets of simulated data from the GELX distribution with the specified parameters and censoring mechanism. The number of simulated datasets should be sufficiently large to obtain reliable results.
IV. For each simulated dataset, estimate the GELX parameters using maximum likelihood estimation or another appropriate method.
V. Compute the Rao–Robson Nikulin test statistic for each simulated dataset using the estimated parameters and the censoring information.
VI. Calculate the empirical distribution of the test statistic across all simulated datasets.
VII. Compare the observed test statistic from the actual data with the empirical distribution obtained from the simulation study, and evaluate whether it falls within the expected range under the null hypothesis.
Repeat steps III–VII for different sets of GELX parameters, sample sizes, and censoring mechanisms to assess the performance of the test statistic under various scenarios. Table 7 shows the relevant empirical and theoretical levels. The computed empirical levels are close to their corresponding theoretical levels, so we conclude that the proposed test is well suited to the GELX distribution.
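Steps I–VII compress into a Monte Carlo loop. The sketch below checks the empirical level of a plain Pearson chi-square test on a hypothetical exponential stand-in with known parameters and complete data; the GELX model, the censoring mechanism, and the RRNIK correction are omitted here for brevity:

```python
import math
import random

def pearson_gof(z, rate, b=5):
    """Pearson chi-square with b equiprobable cells under an Exp(rate) null."""
    m = len(z)
    cuts = [0.0] + [-math.log(1.0 - j / b) / rate for j in range(1, b)] + [math.inf]
    counts = [sum(1 for x in z if cuts[i] <= x < cuts[i + 1]) for i in range(b)]
    expected = m / b
    return sum((c - expected) ** 2 / expected for c in counts)

random.seed(3)
CRIT_005_DF4 = 9.4877   # chi-square upper 5% point with 4 degrees of freedom
n_sim, n, rejections = 2000, 100, 0
for _ in range(n_sim):
    sample = [random.expovariate(1.0) for _ in range(n)]
    if pearson_gof(sample, 1.0) > CRIT_005_DF4:
        rejections += 1
print(rejections / n_sim)  # empirical level; should sit near the nominal 0.05
```

The same loop, with the GELX generator, estimator, and RRNIK statistic plugged in, yields the empirical-versus-theoretical level comparison reported in Table 7.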

7.2. Uncensored Reliability Data Modeling and Testing under the RRNIK Statistics  Y 2 ( q ; V _ ^ )

7.2.1. Uncensored Reliability Strengths Dataset

To model the uncensored reliability strengths dataset, one commonly used distribution is the Weibull distribution, which is frequently employed in reliability analyses due to its flexibility in capturing a wide range of failure patterns. In this application, we instead test the GELX distribution on the uncensored reliability strengths dataset. Using the BB algorithm (see Ravi (2009)) and the reliability data of Nichols and Padgett [32], and assuming that our GELX model can fit the strength data of 1.5 cm glass fibers, we obtain the MAXLE values of the parameters: δ1^ = 1.52493, δ2^ = 1.99834, β^ = 0.73854. Using these estimates, the Fisher INFMX is computed as:
I ( V _ ^ ) = [ 0.953428 1.26758 2.139537 2.10348 4.801254 0.996598 ]
Then, Y²(q; V̂) = 12.004875, which is below the critical value χ²_{0.05}(6) = 12.5916; hence, the GELX distribution can effectively express, simulate, and model the uncensored reliability strengths dataset. However, selecting the appropriate distribution necessitates evaluating the data's fit to each candidate, usually through statistical tests or a visual examination of the data plotted against the selected distribution. Domain expertise and theoretical considerations may also aid the selection procedure.

7.2.2. Uncensored Heat Exchanger Tube Crack

To model the uncensored heat exchanger tube crack dataset using a probability distribution, you need to determine the distribution that best fits the data; the choice depends on the characteristics of the dataset and the underlying assumptions you wish to make. Following the BB algorithm and using the data of Meeker and Escobar [33], we test the null hypothesis that the heat exchanger tube crack data follow our GELX distribution under the RRNIK statistic test. First, the MAXLE values of the three parameters are δ1^ = 3.12542, δ2^ = 2.73154, β^ = 0.83496. Second, the INFMX can be obtained as
I ( V _ ^ ) = [ 0.437812 3.76831 1.00703 1.76948 5.11147 1.354706 ]
Then, we also have the following main results: Y²(q; V̂) = 19.60143 and χ²_{0.01}(12) = 21.02607; since the statistic is below the critical value, the GELX distribution can effectively express, simulate, and model the uncensored heat exchanger tube crack dataset. However, choosing the best distribution involves assessing the fit of the data to each candidate, typically by using statistical tests or visual inspection of the data plotted against the chosen distribution. Additionally, domain knowledge and theoretical considerations can help guide the selection process.

8. Censored Distributional Validation

8.1. Censored Simulation Study and Assessment under the RRNIK Statistics  Y n , r 1 , q 2 ( V _ ^ )

A censored simulation study under the Rao–Robson Nikulin (RRNIK) statistic is a method for testing the goodness of fit of a distribution to censored data using simulated samples. It is based on the RRNIK test statistic, a chi-squared-type statistic that modifies the classical Pearson statistic to account for the censoring of the data. To perform such a study, the first step is to generate simulated data with known distribution parameters, containing a mixture of censored and uncensored observations; this can be achieved by randomly censoring a portion of the data or by using a censoring mechanism appropriate for the application. Once the simulated data are generated, the RRNIK test statistic is calculated by comparing the empirical distribution function of the censored data to the hypothesized distribution function, and the statistic is compared to a critical value from a chi-square table to decide whether the null hypothesis (that the censored data come from the hypothesized distribution) can be rejected. The study can be repeated multiple times with different sets of simulated data to obtain a more accurate assessment of the goodness of fit, which helps determine the robustness of the RRNIK test statistic and its ability to detect deviations from the hypothesized distribution. Overall, censored simulation studies using the RRNIK statistic are a useful tool in a variety of applications, including reliability analysis, survival analysis, and the modeling of extreme events.
Under the censored simulation studies, the RRNIK statistic Y²_{m,r−1,q}(V̂) can be evaluated through several experiments. For N = 17,000 samples, 25% censoring, and DOF = 5, we calculate the average number of non-rejections of the null hypothesis at q = 0.01, 0.02, 0.05, 0.1, i.e., the proportion of samples with Y²_{m,r−1,q}(V̂) ≤ χ²_q(r − 1). The simulation results are presented in Table 8.
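A right-censoring mechanism of the kind used in these experiments can be generated as follows; exponential lifetimes and a fixed censoring threshold are assumptions of this sketch, with the threshold chosen so that roughly 25% of observations are censored:

```python
import math
import random

def right_censor(lifetimes, c):
    """Observe min(T, c) with status 1 for an event, 0 for censored."""
    return [(min(t, c), 1 if t <= c else 0) for t in lifetimes]

random.seed(11)
n = 400
lifetimes = [random.expovariate(1.0) for _ in range(n)]
c = math.log(4.0)   # P(T > c) = 0.25 under Exp(1): about 25% censoring
obs = right_censor(lifetimes, c)
censored_share = sum(1 for _, d in obs if d == 0) / n
print(round(censored_share, 3))
```

The resulting (time, status) pairs are exactly the inputs the censored RRNIK statistic consumes in the simulation loop.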

8.2. Right Censored Medical and Reliability Datasets under the RRNIK Statistics  Y n , r 1 , q 2 ( V _ ^ )

8.2.1. Right Censored Medical Lung Cancer Dataset

The censored lung cancer dataset (see Loprinzi et al. [34]) is a valuable resource for researchers interested in developing and testing statistical models for survival data, as well as for clinicians and policymakers interested in understanding the factors that contribute to survival outcomes for patients with lung cancer; here m = 228 with 63 right-censored items. First, we have the following estimates: δ1^ = 1.93548, δ2^ = 1.44046, β^ = 0.69421. As the number of classes, we employ DOF = 8. The elements of the test statistic Y²_{m,r−1,q}(V̂) are as follows:
Â_{j,b}(Z) | 92.14 | 171.69 | 216.04 | 283.03 | 355.10 | 456.30 | 685.30 | 1022.32
Q̂_{j,Z} | 29.0 | 30.0 | 35.0 | 31.0 | 32.0 | 25.0 | 28.0 | 18.0
e_{j,Z} | 8.27173 | 8.27173 | 8.27173 | 8.27173 | 8.27173 | 8.27173 | 8.27173 | 8.27173
Then, the estimated matrix P̂_{ij}(Z) is as follows:
P̂_{1j}(Z) | −0.5674 | 0.8437 | −0.7391 | 0.1637 | 0.9000 | 0.7064 | −0.2061 | 0.2836
P̂_{2j}(Z) | 0.7168 | −0.1834 | 0.8245 | 0.7601 | 0.3761 | 0.3066 | 0.7004 | 0.2994
P̂_{3j}(Z) | 0.6974 | −0.8312 | 0.6498 | 0.8526 | 0.5084 | 0.0942 | 0.6474 | 0.1991
and
I ( V _ ^ ) = [ 0.86451 1.62348 2.49312 0.75550 5.11147 0.93784 ] .
The critical value is χ²_{0.05}(DOF = 8) = 15.50731, whereas Y²_{m,r−1,q}(V̂) = 12.95384; since the statistic is below the critical value, the GELX distribution can effectively express, simulate, and model the censored medical lung cancer dataset.

8.2.2. Right Censored Reliability Dataset

The censored reliability dataset is an excellent resource for researchers interested in developing and testing statistical models for time-to-failure data, and it is available on the Internet. It is also very helpful for engineers and manufacturers seeking to improve the dependability of the goods or systems they produce. The data, obtained from a multi-factor experiment designed for basic reliability assessments, record the lifetime of glass capacitors as a function of voltage and operating temperature (see Meeker and Escobar [33]), with m = 64 and 32 right-censored items. We then have δ1^ = 2.1473, δ2^ = 1.3614, β^ = 0.83760. Considering DOF = 8, we obtain the following results:
Â_{j,b}(Z) | 346.16 | 469.50 | 587.11 | 679.02 | 1078.83 | 1089.11 | 1102.17 | 1106.44
Q̂_{j,Z} | 11.0 | 15.0 | 6.0 | 10.0 | 6.0 | 5.0 | 6.0 | 5.0
e_{j,Z} | 3.6762 | 3.6762 | 3.6762 | 3.6762 | 3.6762 | 3.6762 | 3.6762 | 3.6762
Then, the estimated matrix P̂_{ij}(Z) is as follows:
P̂_{1j}(Z) | 0.3674 | 0.9005 | −0.6118 | 0.1667 | 0.7100 | −0.6008 | 0.7487 | 0.6791
P̂_{2j}(Z) | 0.3914 | 0.4351 | 0.7003 | −0.3332 | 0.3846 | 0.3917 | 0.7066 | 0.2994
P̂_{3j}(Z) | 0.6947 | 0.8668 | 0.1979 | 0.2852 | 0.4937 | 0.0971 | −0.4444 | 0.1985
and
I ( V _ ^ ) = [ 0.49382 4.0006 2.35623 1.67940 0.48163 0.77185 ] .
Then, since Y²_{m,r−1,q}(V̂) = 14.882811 < χ²_{0.05}(8) = 15.50731, the GELX distribution can effectively express, simulate, and model the censored reliability dataset.

9. Validation for the Actuarial Data

The RRNIK test statistic is a critical instrument for validating actuarial data, and it can help researchers design improved models and generate more accurate forecasts. In this section, the validation findings under the modified RRNIK statistical test are presented, this time using the actuarial claims data. Assuming that our GELX model provides a satisfactory match to the actuarial claims data, we use the BB approach to obtain the MAXLE values of the parameters: δ1^ = 2.19046, δ2^ = 0.93875, β^ = 0.84755. Using these estimates, the Fisher INFMX is computed as:
I ( V _ ^ ) = [ 0.89317 2.30159 4.12578 3.95148 5.601474 1.689457 ] .
The RRNIK statistic is Y²(q; V̂) = 9.843197, which is below the critical value χ²_{0.05}(5) = 11.070, so the GELX distribution can effectively simulate and model the actuarial claims data.

10. Concluding Remarks

In this paper, we introduce a new model for actuarial claims analysis and actuarial risk assessment. The risk assessment process was carried out with four classical methods: the maximum likelihood estimation (MAXLE), ordinary least squares (ORLSE), L-Moment (L-MO), and Anderson-Darling estimation (ADE) methods. The risk exposure under the actuarial claims data was described using five important risk indicators: value-at-risk (VaRK(Z)), tail-value-at-risk (TVaRK(Z)), tail variance (TVq(Z)), tail mean-variance (TMVK(Z)), and mean excess loss (MELS(Z)), all developed for the proposed model. Because the claims data have a distinct peak and an unambiguous left tail, analyzing them through these five risk indicators is particularly informative. Since the new distribution proved flexible enough to model the claims data under the various risk indicators, we provide both a numerical and a graphical risk evaluation and analysis.
The following results can be highlighted:
  • Under the artificial claims data: VaRK(Z), TVaRK(Z), TVq(Z), TMVK(Z), and MELS(Z) increase when q increases for all estimation methods.
  • Under the artificial claims data: VaRK(Z)[MAXLE] < VaRK(Z)[ORLSE] < VaRK(Z)[ADE] < VaRK(Z)[L-MO] for most q.
  • Under the actuarial claims data and the proposed model, the VaRK(Z), TVaRK(Z), TVq(Z), TMVK(Z), and MELS(Z) are consistently growing under each of the MAXLE, ORLSE, L-MO, and ADE methods.
  • For the GELX model, the ORLSE technique is strongly recommended as it provides the most dependable risk exposure analysis, followed by the MAXLE method, then the ADE method, and finally the L-MO method; the remaining methods nevertheless perform well in their own right. For the ELX model, the ORLSE methodology is likewise recommended first, since it delivers the most satisfactory risk exposure analysis, with the MAXLE approach as a backup; the other methods also perform well.
  • For the testing and validation of the uncensored strengths of glass fibers data, the RRNIK statistic was Y²(q; V̂) = 12.004875, below the critical value χ²_{0.05}(6) = 12.5916; therefore, the new GELX distribution can effectively simulate and model the uncensored 1.5 cm glass fiber data.
  • For the testing and validation of the uncensored heat exchanger tube crack data, the RRNIK statistic was Y²(q; V̂) = 19.60143, below the critical value χ²_{0.01}(12) = 21.02607. As a result, the uncensored heat exchanger tube fracture data can be efficiently simulated and modelled using the new GELX distribution.
  • For the testing and validation of the censored lung cancer dataset, the value of the statistical test is Y²_{m,r−1,q}(V̂) = 12.95384 < χ²_{0.05}(8) = 15.50731. Consequently, the censored lung cancer dataset may be modeled using the new GELX model.
  • For the testing and validation of the censored reliability dataset, the value of the statistical test is Y²_{m,r−1,q}(V̂) = 14.882811 < χ²_{0.05}(8) = 15.50731. Therefore, the censored capacitor reliability dataset may be modelled using the new GELX model.

Author Contributions

M.S.: review and editing, software, validation, writing the original draft preparation, and conceptualization. W.E.: review and editing, validation, writing the original draft preparation, conceptualization, data curation, formal analysis, project administration and software; Y.T.: review and editing, methodology, conceptualization, and software; M.I.: review and editing, software, validation, writing the original draft preparation, conceptualization, and supervision. M.M.A.: review and editing, writing the original draft preparation, conceptualization, and supervision; H.G.: validation, writing the original draft preparation, and conceptualization; H.M.Y.: review and editing, software, validation, writing the original draft preparation, conceptualization, and supervision. All authors have read and agreed to the published version of the manuscript.

Funding

The study was funded by Researchers Supporting Project number (RSP2023R488), King Saud University, Riyadh, Saudi Arabia.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The dataset can be provided upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Aboraya, M.; Ali, M.M.; Yousof, H.M.; Ibrahim, M. A novel Lomax extension with statistical properties, copulas, different estimation methods and applications. Bull. Malays. Math. Sci. Soc. 2022, 45, 85–120. [Google Scholar] [CrossRef]
  2. Acerbi, C.; Tasche, D. On the coherence of expected shortfall. J. Bank. Financ. 2002, 26, 1487–1503. [Google Scholar] [CrossRef] [Green Version]
  3. Adcock, C.; Eling, M.; Loperfido, N. Skewed distributions in finance and actuarial science: A review. Eur. J. Financ. 2015, 21, 1253–1281. [Google Scholar] [CrossRef]
  4. Eling, M. Fitting insurance claims to skewed distributions: Are the skew-normal and skew-student good models? Insur. Math. Econ. 2012, 51, 239–248. [Google Scholar] [CrossRef]
  5. Ali, M.M.; Korkmaz, M.Ç.; Yousof, H.M.; Butt, N.S. Odd Lindley-Lomax model: Statistical properties and applications. Pak. J. Stat. Oper. Res. 2019, 15, 419–430. [Google Scholar] [CrossRef]
  6. Ali, M.M.; Yousof, H.M.; Ibrahim, M. A new Lomax type distribution: Properties, copulas, applications, Bayesian and non-Bayesian estimation methods. Int. J. Stat. Sci. 2021, 21, 61–104. [Google Scholar]
  7. Alizadeh, M.; Ghosh, I.; Yousof, H.M.; Rasekhi, M.; Hamedani, G.G. The generalized odd-generalized exponential family of distributions: Properties, characterizations and applications. J. Data Sci. 2017, 15, 443–466. [Google Scholar] [CrossRef]
  8. Nikulin, M.S. Chi-squared test for normality. In Proceedings of the International Vilnius Conference on Probability Theory and Mathematical Statistics, Vilnius, Lithuania, 25–30 June 1973; Volume 2, pp. 119–122. [Google Scholar]
  9. Nikulin, M.S. Chi-squared test for continuous distributions with shift and scale parameter. Theory Probab. Appl. 1973, 18, 559–568. [Google Scholar] [CrossRef]
  10. Nikulin, M.S. On a Chi-squared test for continuous distributions. Theory Probab. Appl. 1973, 19, 638–639. [Google Scholar]
  11. Rao, K.C.; Robson, D.S. A chi-square statistic for goodness-of-fit tests within the exponential family. Commun. Stat.-Simul. Comput. 1974, 3, 1139–1153. [Google Scholar] [CrossRef] [Green Version]
  12. Wirch, J. Raising Value at Risk. N. Am. Actuar. J. 1999, 3, 106–115. [Google Scholar] [CrossRef]
  13. Tasche, D. Expected shortfall and beyond. J. Bank. Financ. 2002, 26, 1519–1533. [Google Scholar] [CrossRef] [Green Version]
  14. Butt, N.S.; Khalil, M.G. A new bimodal distribution for modeling asymmetric bimodal heavy-tail real lifetime data. Symmetry 2020, 12, 2058. [Google Scholar] [CrossRef]
  15. Reyes, J.; Gómez-Déniz, E.; Gómez, H.W.; Calderín-Ojeda, E. A bimodal extension of the exponential distribution with applications in risk theory. Symmetry 2021, 13, 679. [Google Scholar] [CrossRef]
  16. Gómez, Y.M.; Gallardo, D.I.; Venegas, O.; Magalhães, T.M. An asymmetric bimodal double regression model. Symmetry 2021, 13, 2279. [Google Scholar] [CrossRef]
  17. Hogg, R.U.; Klugman, S.A. Loss Distributions; John Wiley and Sons: Hoboken, NJ, USA, 1984. [Google Scholar]
  18. Klugman, S.A.; Panjer, H.H.; Willmot, G.E. Loss Models: From Data to Decisions; John Wiley & Sons: Hoboken, NJ, USA, 2012; Volume 715. [Google Scholar]
  19. Lane, M.N. Pricing Risk Transfer Transactions 1. ASTIN Bull. J. IAA 2000, 30, 259–293. [Google Scholar] [CrossRef]
  20. McNeil, A.J.; Frey, R.; Embrechts, P. Quantitative Risk Management: Concepts, Techniques and Tools; Princeton University Press: Princeton, NJ, USA, 2005; Volume 3. [Google Scholar]
  21. Vernic, R. Multivariate skew-normal distributions with applications in insurance. Insur. Math. Econ. 2006, 38, 413–426. [Google Scholar] [CrossRef]
  22. Artzner, P. Application of coherent risk measures to capital requirements in insurance. N. Am. Actuar. J. 1999, 3, 11–25. [Google Scholar] [CrossRef]
  23. Charpentier, A. Computational Actuarial Science with R; CRC Press: Boca Raton, FL, USA, 2014. [Google Scholar]
  24. Furman, E.; Landsman, Z. Tail variance premium with applications for elliptical portfolio of risks. ASTIN Bull. 2006, 36, 433–462. [Google Scholar] [CrossRef] [Green Version]
  25. Landsman, Z. On the tail mean-variance optimal portfolio selection. Insur. Math. Econ. 2010, 46, 547–553. [Google Scholar] [CrossRef]
  26. Punzo, A. A new look at the inverse Gaussian distribution with applications to insurance and economic data. J. Appl. Stat. 2019, 46, 1260–1287. [Google Scholar] [CrossRef]
  27. Punzo, A.; Bagnato, L.; Maruotti, A. Compound unimodal distributions for insurance losses. Insur. Math. Econ. 2018, 81, 95–107. [Google Scholar] [CrossRef]
  28. Punzo, A.; Mazza, A.; Maruotti, A. Fitting insurance and economic data with outliers: A flexible approach based on finite mixtures of contaminated gamma distributions. J. Appl. Stat. 2018, 45, 2563–2584. [Google Scholar] [CrossRef]
  29. Bagdonavičius, V.B.; Levuliene, R.J.; Nikulin, M.S. Chi-squared goodness-of-fit tests for parametric accelerated failure time models. Commun. Stat.-Theory Methods 2013, 42, 2768–2785. [Google Scholar] [CrossRef]
  30. Bagdonavičius, V.; Nikulin, M. Chi-squared goodness-of-fit test for right censored data. Int. J. Appl. Math. Stat. 2011, 24, 30–50. [Google Scholar]
  31. Bagdonavičius, V.; Nikulin, M. Chi-squared tests for general composite hypotheses from censored samples. Comptes Rendus Math. 2011, 349, 219–223. [Google Scholar] [CrossRef]
  32. Nichols, M.D.; Padgett, W.J. A bootstrap control chart for Weibull percentiles. Qual. Reliab. Eng. Int. 2006, 22, 141–151. [Google Scholar] [CrossRef]
  33. Meeker, W.Q.; Escobar, L.A.; Lu, C.J. Accelerated degradation tests: Modeling and analysis. Technometrics 1998, 40, 89–99. [Google Scholar] [CrossRef]
  34. Loprinzi, C.L.; Laurie, J.A.; Wieand, H.S.; Krook, J.E.; Novotny, P.J.; Kugler, J.W.; Bartel, J.; Law, M.; Bateman, M.; Klatt, N.E. Prospective evaluation of prognostic variables from patient-completed questionnaires. North Central Cancer Treatment Group. J. Clin. Oncol. 1994, 12, 601–607. [Google Scholar] [CrossRef]
Figure 1. Some plots for the new PDF (b) and its corresponding HRF (a).
Figure 2. Cullen and Frey plot for actuarial claims data.
Figure 3. NKDE plot (top left), Q-Q plot (top right), TTT plot (bottom left), and box plot (bottom right) for actuarial claims data.
Figure 4. Scattergrams (top left and top right), ACF (bottom left), and practical ACF (bottom right) for the actuarial claims data.
Table 1. Simulated KRIs for assessing the artificial data when n = 50.

Method | VaRK(Z; V̂) | TVaRK(Z; V̂) | TVq(Z; V̂) | TMVK(Z; V̂) | MELS(Z; V̂)
MAXLE: V̂ = (δ1^, δ2^, β^) = (2.04918, 0.49665, 0.91060)
70% | 1.61281 | 2.81002 | 1.45159 | 3.53581 | 1.19721
90% | 2.09861 | 3.29513 | 1.46162 | 4.02594 | 1.19652
99% | 2.92329 | 4.12641 | 1.48586 | 4.86934 | 1.20313
ORLSE: V̂ = (2.04195, 0.49835, 0.910004)
70% | 1.61918 | 2.82220 | 1.46627 | 3.55533 | 1.20302
90% | 2.10724 | 3.30969 | 1.47661 | 4.04799 | 1.20245
99% | 2.93591 | 4.14515 | 1.50138 | 4.89584 | 1.20924
L-MO: V̂ = (2.0585, 0.49779, 0.90782)
70% | 1.62969 | 2.837329 | 1.478363 | 3.57651 | 1.20764
90% | 2.119556 | 3.32671 | 1.489147 | 4.071284 | 1.207154
99% | 2.951297 | 4.16552 | 1.514822 | 4.922931 | 1.214223
ADE: V̂ = (2.03741, 0.49876, 0.90752)
70% | 1.624843 | 2.834897 | 1.485409 | 3.577601 | 1.210054
90% | 2.11546 | 3.325319 | 1.496623 | 4.073631 | 1.209859
99% | 2.948882 | 4.166121 | 1.522854 | 4.927548 | 1.21724
Table 2. Simulated KRIs for assessing the artificial data when n = 150.
| Method (V̂ = (δ̂1, δ̂2, β̂)) | q | VaRK(Z; V̂) | TVaRK(Z; V̂) | TVq(Z; V̂) | TMVK(Z; V̂) | MELS(Z; V̂) |
|---|---|---|---|---|---|---|
| MAXLE (2.0195, 0.49829, 0.90467) | 70% | 1.62028 | 2.83521 | 1.50046 | 3.58544 | 1.21493 |
| | 90% | 2.11236 | 3.32775 | 1.51290 | 4.08420 | 1.21539 |
| | 99% | 2.94906 | 4.17270 | 1.54098 | 4.94318 | 1.22364 |
| ORLSE (2.0113, 0.49932, 0.903658) | 70% | 1.623738 | 2.843695 | 1.513935 | 3.600662 | 1.219957 |
| | 90% | 2.117681 | 3.338316 | 1.526869 | 4.101751 | 1.220635 |
| | 99% | 2.957812 | 4.187016 | 1.555751 | 4.964891 | 1.229203 |
| L-MO (2.00845, 0.49962, 0.90153) | 70% | 1.62885 | 2.854852 | 1.530688 | 3.620195 | 1.226002 |
| | 90% | 2.124982 | 3.351991 | 1.544416 | 4.1242 | 1.22701 |
| | 99% | 2.969195 | 4.205292 | 1.574624 | 4.992604 | 1.236097 |
| ADE (2.00961, 0.49939, 0.90306) | 70% | 1.62463 | 2.846179 | 1.518412 | 3.605385 | 1.221549 |
| | 90% | 2.119135 | 3.341468 | 1.531581 | 4.107258 | 1.222332 |
| | 99% | 2.960342 | 4.191399 | 1.562079 | 4.972438 | 1.231056 |
Table 3. Simulated KRIs for assessing the artificial data when n = 300.
| Method (V̂ = (δ̂1, δ̂2, β̂)) | q | VaRK(Z; V̂) | TVaRK(Z; V̂) | TVq(Z; V̂) | TMVK(Z; V̂) | MELS(Z; V̂) |
|---|---|---|---|---|---|---|
| MAXLE (2.00366, 0.49948, 0.90189) | 70% | 1.62484 | 2.84929 | 1.52681 | 3.61269 | 1.22445 |
| | 90% | 2.12033 | 3.34580 | 1.54048 | 4.11605 | 1.22548 |
| | 99% | 2.96349 | 4.19804 | 1.57054 | 4.98331 | 1.23455 |
| ORLSE (2.00291, 0.49977, 0.90154) | 70% | 1.626766 | 2.852887 | 1.53125 | 3.618512 | 1.22612 |
| | 90% | 2.12289 | 3.35009 | 1.54507 | 4.122625 | 1.227201 |
| | 99% | 2.967193 | 4.20355 | 1.57538 | 4.991238 | 1.236357 |
| L-MO (2.00794, 0.5006, 0.89939) | 70% | 1.638691 | 2.872898 | 1.552791 | 3.649294 | 1.234208 |
| | 90% | 2.137925 | 3.373422 | 1.567315 | 4.15708 | 1.235497 |
| | 99% | 2.9877 | 4.232777 | 1.59892 | 5.032237 | 1.245077 |
| ADE (2.00315, 0.49981, 0.90111) | 70% | 1.628126 | 2.855469 | 1.534616 | 3.622777 | 1.227343 |
| | 90% | 2.12470 | 3.353181 | 1.548585 | 4.127473 | 1.22848 |
| | 99% | 2.969828 | 4.20756 | 1.57915 | 4.997135 | 1.237733 |
Table 4. Simulated KRIs for assessing the artificial data when n = 500.
| Method (V̂ = (δ̂1, δ̂2, β̂)) | q | VaRK(Z; V̂) | TVaRK(Z; V̂) | TVq(Z; V̂) | TMVK(Z; V̂) | MELS(Z; V̂) |
|---|---|---|---|---|---|---|
| MAXLE (2.01061, 0.49948, 0.90142) | 70% | 1.62956 | 2.85558 | 1.53073 | 3.62094 | 1.22602 |
| | 90% | 2.12570 | 3.35272 | 1.54446 | 4.12495 | 1.22702 |
| | 99% | 2.96993 | 4.20603 | 1.57468 | 4.99337 | 1.23611 |
| ORLSE (2.00896, 0.49979, 0.901087) | 70% | 1.631067 | 2.85872 | 1.5351 | 3.62627 | 1.227653 |
| | 90% | 2.127829 | 3.35654 | 1.54898 | 4.13103 | 1.228711 |
| | 99% | 2.97316 | 4.211052 | 1.57946 | 5.00078 | 1.237892 |
| L-MO (2.0063, 0.50069, 0.89895) | 70% | 1.639371 | 2.874868 | 1.556445 | 3.65309 | 1.235496 |
| | 90% | 2.139063 | 3.375931 | 1.571156 | 4.16151 | 1.236869 |
| | 99% | 2.98971 | 4.23628 | 1.603063 | 5.037811 | 1.24657 |
| ADE (2.00866, 0.4998, 0.90094) | 70% | 1.631294 | 2.859325 | 1.536172 | 3.62741 | 1.22803 |
| | 90% | 2.128189 | 3.357302 | 1.550111 | 4.132357 | 1.229113 |
| | 99% | 2.973774 | 4.212106 | 1.580686 | 5.00245 | 1.238333 |
Table 5. KRIs under the actuarial claims data for the GELX model.
| Method | q | VaRK(Z; V̂) | TVaRK(Z; V̂) | TVq(Z; V̂) | TMVK(Z; V̂) | MELS(Z; V̂) |
|---|---|---|---|---|---|---|
| MAXLE | 70% | 3104.32244 | 5606.27701 | 9,440,936.45213 | 4,726,074.50308 | 2501.95457 |
| | 75% | 3484.41321 | 6070.0002 | 10,036,485.93869 | 5,024,312.96954 | 2585.58699 |
| | 80% | 3966.00977 | 6658.83201 | 10,807,176.41369 | 5,410,247.03886 | 2692.82224 |
| | 85% | 4616.76526 | 7454.13658 | 11,867,945.16302 | 5,941,426.71809 | 2837.37132 |
| | 90% | 5596.91224 | 8647.48402 | 13,490,055.41992 | 6,753,675.19398 | 3050.57178 |
| | 95% | 7459.55288 | 10,893.75394 | 16,607,628.92341 | 8,314,708.21564 | 3434.20106 |
| | 99% | 12,791.47826 | 17,177.67141 | 25,556,711.36924 | 12,795,533.3560 | 4386.19315 |
| ORLSE | 70% | 3559.79706 | 7134.33969 | 24,911,253.94439 | 12,462,761.31189 | 3574.54263 |
| | 75% | 4045.1113 | 7802.59529 | 27,210,192.2259 | 13,612,898.70824 | 3757.48399 |
| | 80% | 4673.91066 | 8667.11009 | 30,267,699.58567 | 15,142,516.90292 | 3993.19942 |
| | 85% | 5546.81613 | 9861.68726 | 34,627,832.78095 | 17,323,778.07774 | 4314.87113 |
| | 90% | 6908.36821 | 11,708.34462 | 41,634,963.92469 | 20,829,190.30696 | 4799.97641 |
| | 95% | 9635.19398 | 15,345.05825 | 56,219,134.95016 | 28,124,912.53333 | 5709.86427 |
| | 99% | 18,274.02938 | 26,462.60076 | 105,590,441.51078 | 52,821,683.35616 | 8188.57138 |
| L-Moment | 70% | 2684.87356 | 3955.54267 | 1,918,151.1573 | 963,031.12132 | 1270.66911 |
| | 75% | 2903.62873 | 4188.46069 | 1,975,480.70746 | 991,928.81442 | 1284.83196 |
| | 80% | 3171.9493 | 4477.40671 | 2,050,406.71371 | 1,029,680.76357 | 1305.45741 |
| | 85% | 3521.44703 | 4857.24341 | 2,153,396.37504 | 1,081,555.43093 | 1335.79639 |
| | 90% | 4025.0856 | 5408.33534 | 2,308,485.33826 | 1,159,651.00447 | 1383.24974 |
| | 95% | 4925.8752 | 6397.33797 | 2,594,824.31143 | 1,303,809.49368 | 1471.46277 |
| | 99% | 7251.04539 | 8938.81397 | 3,335,108.33788 | 1,676,492.98291 | 1687.76858 |
| ADE | 70% | 3428.17317 | 6329.36348 | 12,876,115.36083 | 6,444,387.0439 | 2901.19032 |
| | 75% | 3864.86096 | 6867.49358 | 13,710,663.08286 | 6,862,199.03501 | 3002.63262 |
| | 80% | 4419.70382 | 7551.86416 | 14,790,120.57833 | 7,402,612.15333 | 3132.16035 |
| | 85% | 5171.6275 | 8477.78875 | 16,275,193.88372 | 8,146,074.73061 | 3306.16126 |
| | 90% | 6307.81287 | 9869.81569 | 18,546,348.35369 | 9,283,043.99254 | 3562.00282 |
| | 95% | 8475.47761 | 12,496.5283 | 22,913,550.8921 | 11,469,271.97435 | 4021.0507 |
| | 99% | 14,714.50356 | 19,871.73902 | 35,461,472.29616 | 17,750,607.88709 | 5157.23545 |
Table 6. KRIs under the actuarial claims data for the LX model.
| Method | q | VaRK(Z; V̂) | TVaRK(Z; V̂) | TVq(Z; V̂) | TMVK(Z; V̂) | MELS(Z; V̂) |
|---|---|---|---|---|---|---|
| MAXLE | 70% | 3111.07381 | 5021.8463 | 3,603,201.42134 | 1,806,622.55697 | 1910.77249 |
| | 75% | 3463.60349 | 5369.79592 | 3,595,282.61482 | 1,803,011.10333 | 1906.19243 |
| | 80% | 3892.70894 | 5794.71975 | 3,586,988.89312 | 1,799,289.16632 | 1902.01082 |
| | 85% | 4443.26608 | 6341.23782 | 3,579,959.55802 | 1,796,321.01684 | 1897.97175 |
| | 90% | 5215.88511 | 7110.03085 | 3,571,410.79727 | 1,792,815.42948 | 1894.14574 |
| | 95% | 6531.31508 | 8421.41502 | 3,562,554.54166 | 1,789,698.68585 | 1890.09994 |
| | 99% | 9573.57721 | 11,459.39595 | 3,552,197.8018 | 1,787,558.29686 | 1885.81874 |
| ORLSE | 70% | 3542.73806 | 5874.90639 | 5,536,193.62065 | 2,773,971.71671 | 2332.16832 |
| | 75% | 3965.20737 | 6300.37248 | 5,554,334.38704 | 2,783,467.566 | 2335.16511 |
| | 80% | 4482.319 | 6821.93431 | 5,577,229.5614 | 2,795,436.71501 | 2339.61531 |
| | 85% | 5149.91825 | 7495.90521 | 5,607,030.11784 | 2,811,010.96413 | 2345.98696 |
| | 90% | 6093.65222 | 8449.01733 | 5,648,474.19665 | 2,832,686.11566 | 2355.36512 |
| | 95% | 7716.03184 | 10,087.1357 | 5,715,983.27761 | 2,868,078.7745 | 2371.10386 |
| | 99% | 11,526.75745 | 13,930.0466 | 5,853,915.35108 | 2,940,887.72214 | 2403.28915 |
| L-Moment | 70% | 1780.469963 | 3924.982702 | 6,530,455.061545 | 3,269,152.513475 | 2144.512739 |
| | 75% | 2099.875261 | 4323.192881 | 6,883,421.403626 | 3,446,033.894694 | 2223.31762 |
| | 80% | 2513.05595 | 4829.794349 | 7,317,506.345204 | 3,663,582.96695 | 2316.738398 |
| | 85% | 3080.346568 | 5513.076649 | 7,880,283.644864 | 3,945,654.899081 | 2432.730082 |
| | 90% | 3942.633913 | 6531.366731 | 8,679,013.576416 | 4,346,038.154939 | 2588.732817 |
| | 95% | 5574.930807 | 8413.421022 | 10,058,024.987684 | 5,037,425.914864 | 2838.490216 |
| | 99% | 10,051.866809 | 13,408.968091 | 13,316,139.798268 | 6,671,478.867225 | 3357.101282 |
| ADE | 70% | 3440.462517 | 5686.884025 | 5,117,423.474536 | 2,564,398.621294 | 2246.421509 |
| | 75% | 3848.293402 | 6096.615947 | 5,130,859.099228 | 2,571,526.165561 | 2248.322545 |
| | 80% | 4347.15518 | 6598.656147 | 5,148,180.281415 | 2,580,688.796855 | 2251.500967 |
| | 85% | 4990.716911 | 7247.054666 | 5,171,100.864762 | 2,592,797.487047 | 2256.337756 |
| | 90% | 5899.678155 | 8163.408015 | 5,203,399.79205 | 2,609,863.30404 | 2263.729861 |
| | 95% | 7460.475938 | 9736.833061 | 5,258,762.770151 | 2,639,118.218136 | 2276.357122 |
| | 99% | 11,119.778633 | 13,422.696625 | 5,369,579.581477 | 2,698,212.487364 | 2302.917992 |
Table 7. Empirical levels and the corresponding theoretical levels ( q  = 0.01, 0.02, 0.05, and 0.1), with N = 16,000.
| m \ q | q1 = 0.01 | q2 = 0.02 | q3 = 0.05 | q4 = 0.1 |
|---|---|---|---|---|
| m1 = 25 | 0.9935 | 0.9819 | 0.9522 | 0.9033 |
| m2 = 40 | 0.9929 | 0.9817 | 0.9515 | 0.9025 |
| m3 = 150 | 0.9922 | 0.9811 | 0.9509 | 0.9017 |
| m4 = 300 | 0.9911 | 0.9807 | 0.9506 | 0.9010 |
| m5 = 700 | 0.9906 | 0.9805 | 0.9503 | 0.9003 |
Table 8. Censored simulation study and assessment under the RRNIK statistic for  q = 0.01 ,   0.02 ,   0.05 ,   and   0.1 , with N = 17,000.
| m \ q | q1 = 0.01 | q2 = 0.02 | q3 = 0.05 | q4 = 0.1 |
|---|---|---|---|---|
| m1 = 25 | 0.9924 | 0.9831 | 0.9522 | 0.9029 |
| m2 = 40 | 0.9920 | 0.9822 | 0.9515 | 0.9019 |
| m3 = 150 | 0.9911 | 0.9818 | 0.9510 | 0.9009 |
| m4 = 300 | 0.9908 | 0.9809 | 0.9505 | 0.9003 |
| m5 = 700 | 0.9905 | 0.9804 | 0.9502 | 0.9002 |
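The empirical levels in Tables 7 and 8 are Monte Carlo acceptance rates of the test at each nominal level q, and they approach 1 − q as m grows. The scheme can be sketched as follows; this is a simplified stand-in that uses a plain Pearson chi-square statistic with known parameters and an Exp(1) model, since the Rao–Robson–Nikulin statistic additionally requires the estimated Fisher information, and all names here are illustrative:

```python
import numpy as np

# Upper-tail chi-square(4) critical values for the levels used in Tables 7 and 8
# (5 equiprobable cells, known parameters => 4 degrees of freedom).
CHI2_CRIT_DF4 = {0.01: 13.2767, 0.02: 11.6678, 0.05: 9.4877, 0.10: 7.7794}

def empirical_level(n=150, q=0.05, n_rep=2000, seed=7):
    """Monte Carlo estimate of the empirical level of a chi-square-type
    goodness-of-fit test: draw samples from the hypothesized model, compute
    the statistic, and count how often it stays below the critical value."""
    n_cells = 5
    rng = np.random.default_rng(seed)
    # Equiprobable cell boundaries under the hypothesized Exp(1) model,
    # using the quantile function F^{-1}(p) = -ln(1 - p).
    inner = -np.log(1.0 - np.arange(1, n_cells) / n_cells)
    expected = n / n_cells
    accepted = 0
    for _ in range(n_rep):
        x = rng.exponential(1.0, size=n)
        counts = np.bincount(np.searchsorted(inner, x), minlength=n_cells)
        stat = ((counts - expected) ** 2 / expected).sum()
        accepted += stat <= CHI2_CRIT_DF4[q]
    return accepted / n_rep
```

For q = 0.05, the returned proportion should hover near 0.95, mirroring the 0.95xx entries in the tables; the entries' drift toward 1 − q with growing m reflects the improving chi-square approximation.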
Salem, M.; Emam, W.; Tashkandy, Y.; Ibrahim, M.; Ali, M.M.; Goual, H.; Yousof, H.M. A New Lomax Extension: Properties, Risk Analysis, Censored and Complete Goodness-of-Fit Validation Testing under Left-Skewed Insurance, Reliability and Medical Data. Symmetry 2023, 15, 1356. https://doi.org/10.3390/sym15071356
