Article

Bayesian and Non-Bayesian Risk Analysis and Assessment under Left-Skewed Insurance Data and a Novel Compound Reciprocal Rayleigh Extension

1 Department of Applied, Mathematical and Actuarial Statistics, Faculty of Commerce, Damietta University, Damietta 34517, Egypt
2 Department of Statistics and Operations Research, Faculty of Science, King Saud University, P.O. Box 2455, Riyadh 11451, Saudi Arabia
3 Department of Mathematical Sciences, Ball State University, Muncie, IN 47306, USA
4 Department of Statistics, Mathematics and Insurance, Benha University, Benha 13518, Egypt
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(7), 1593; https://doi.org/10.3390/math11071593
Submission received: 28 January 2023 / Revised: 21 March 2023 / Accepted: 23 March 2023 / Published: 25 March 2023
(This article belongs to the Special Issue Application of the Bayesian Method in Statistical Modeling)

Abstract: Continuous probability distributions can handle and express different kinds of data within the modeling process, and they can be used in the disclosure and evaluation of risks through a set of well-known basic risk indicators. In this work, a new compound continuous probability extension of the reciprocal Rayleigh distribution is introduced for data modeling and risk analysis, and some of its properties are derived. The estimation of the parameters is carried out via different techniques; the Bayesian estimates are computed under gamma and normal priors. The performance of all techniques is studied and assessed through Monte Carlo simulation experiments and two real-life datasets. Two applications to real datasets are provided for comparing the new model with other competitive models and for illustrating the importance of the proposed model via the maximum likelihood technique. A numerical analysis of the expected value, variance, skewness, and kurtosis is given. Five key risk indicators are defined and analyzed under Bayesian and non-Bayesian estimation. An extensive analytical study investigating the capacity to reveal actuarial hazards compares the proposed model with a wide range of well-known actuarial disclosure models, and the actuarial hazards are evaluated and rated using actuarial data.

1. Introduction

Actuarial science is a mathematical branch that deals with the financial consequences of uncertain future events. It employs statistical and mathematical methods to evaluate and manage risks in the finance and insurance industries. Actuaries use probability distributions to model and measure the likelihood of different outcomes and determine anticipated future losses. A probability distribution is a function that describes the probability of different outcomes for a random variable. Actuaries utilize various probability distributions, such as the Poisson, normal, exponential, and log-normal, to model diverse types of risks, including morbidity, mortality, and property damage, depending on the nature of the risk being modeled and the data available for the modeling. Actuaries use probability distributions to compute expected future losses, which are used to set insurance premiums, design insurance products, and assess investment strategies. Actuaries also use simulation techniques to test their models and evaluate the financial results of insurance policies and investments under various scenarios.
Assessing the adequacy of probability-based distributions in describing risk exposure is common practice in the field of risk management. Typically, risk exposure is described by one or a small group of numbers that are functions of a specific model, commonly referred to as key risk indicators (KRIs) (Lane [1]; Klugman et al. [2]). These KRIs provide actuaries and risk managers with valuable information about a company’s exposure to specific types of risk. Several KRIs, including value-at-risk (VARK), tail-value-at-risk (TVARK) or conditional tail expectation (CTE), conditional-value-at-risk (CVARK), tail variance (TV), mean excess loss (MEL), and tail mean-variance (TMV), have been developed and can be analyzed (Shrahili et al. [3]; Mohamed [4]). In particular, VARK is commonly used to estimate the quantile of the distribution of aggregate losses. Actuaries and risk managers focus on calculating the probability of a bad outcome, measured by the VARK indicator at a specific probability or confidence level. This indicator is used to estimate the amount of capital required to manage potential unfavorable events. The ability of an insurance company to handle such situations is highly valued by actuaries, authorities, investors, and rating agencies (Wirch [5]; Artzner [6]; Tasche [7]; Acerbi [8]; Landsman [9]; Furman and Landsman [10]). In summary, probability-based distributions and KRIs are essential tools for evaluating risk exposure in companies. The use of such indicators is widespread, and the ability to manage risk effectively is highly valued in the field of risk management.
For the left-skewed insurance-claims data, this work suggests certain KRI variables, such as VARK, TVARK, TV, MEL, and TMV, using a new model termed the exponentiated generalized reciprocal Rayleigh Poisson (EGRRP) distribution. Statistical methods frequently employed in actuarial risk analysis include:
  • Actuaries employ probability distributions to simulate the possibility of a variety of events, including claims, fatalities, and policy cancellations. The Poisson distribution, the exponential distribution, and the Weibull distribution are frequently used distributions in actuarial science.
  • Modeling the time until a specific event, such as a death or policy cancellation, is conducted using survival analysis. This method is employed to compute life expectancy and assess the likelihood of survival for a specific time period.
  • Modeling with stochastic processes: stochastic modeling is used to simulate unpredictable events such as insurance-claims and policy cancellations. This method is employed to compute the variability of these estimations and to estimate the expected value of upcoming claims.
  • Loss distributions: Loss distributions are used to simulate how losses are distributed as a result of things such as claims and insurance cancellations. The estimated value of potential losses is calculated using this method, and the risk involved with such losses is also identified.
  • Actuaries employ statistical approaches, such as portfolio optimization and hedging strategies, to evaluate and manage financial risks.
The reciprocal Rayleigh (RR) distribution, also known as the inverse Rayleigh distribution, is an important probability distribution that is widely used in many fields, including reliability engineering, signal processing, and wireless communications. The RR distribution is a flexible model that can describe a wide range of phenomena. It is a continuous distribution with support on the positive real line, and its scale parameter controls the spread of the distribution. The RR distribution can be used to model data that are positively skewed and have long tails, which are common in many real-world applications. The RR distribution is commonly used in reliability engineering to model the lifetime of a system or component; in this context, it is used to model the time until failure, and it has been shown to provide a good fit to many types of failure data. In signal processing, the RR distribution is used to model the amplitude of a random signal, for example, the probability density function of the envelope of a narrowband Gaussian noise signal, which is commonly encountered in wireless communications. In statistical inference, the RR distribution is used to model the distribution of the inverse of a random variable, such as the ratio of two independent Rayleigh-distributed random variables, which arises in wireless communications and signal processing.
The RR distribution has also been used in finance to model the distribution of returns on investment portfolios; in this context, it can be used to model the distribution of the inverse of returns, which can be useful in portfolio risk management. Overall, the RR distribution has a wide range of applications in many fields, including reliability engineering, wireless communications, signal processing, statistical inference, and finance. Its flexibility, reliability modeling capabilities, and usefulness in modeling the distribution of the inverse of a random variable make it a valuable tool for researchers and practitioners in many different areas.
The RR distribution can be used to model the distribution of losses in insurance-claims, which is useful in estimating the risk of losses and setting premiums. It can also be used to model the risk of many events in insurance, such as natural disasters or accidents, which can help insurance companies to better understand and manage their risk exposure. Moreover, the RR distribution can be used to model the behavior of policyholders in insurance, such as the frequency and severity of claims, which can help insurance companies to design policies that are better suited to their customers’ needs. Finally, the RR distribution can be used in actuarial modeling to estimate the probability of future events based on historical data, which is useful in predicting the likelihood of future claims and setting reserves. Overall, the RR distribution, also known as the inverse Rayleigh (IR) distribution, has several applications in actuarial sciences and insurance; its flexibility and usefulness in modeling the distribution of losses and risk make it a valuable tool for insurance companies and actuaries. The RR distribution is used as a model for a lifetime random variable (r.v.). The probability density function (PDF) and cumulative distribution function (CDF) of the RR model are given by
g(x) = g(x; θ) = 2θ²x⁻³ exp[−(θ/x)²],   (1)
and
G(x) = G(x; θ) = exp[−(θ/x)²],   (2)
respectively, where θ > 0 is a scale parameter and x > 0. The exponentiated generalized-G Poisson (EGP) family of distributions is a flexible compound family of distributions that Aryal and Yousof [11] introduced and explored. The CDF and PDF of the EGP family are given by
F(x; α, β) = (1/c(λ)) {1 − exp(−λ{1 − [1 − G(x; θ)]^α}^β)},   (3)
and
f(x; α, β) = (αβλ/c(λ)) g(x; θ) [1 − G(x; θ)]^{α−1} {1 − [1 − G(x; θ)]^α}^{β−1} exp(−λ{1 − [1 − G(x; θ)]^α}^β),   (4)
respectively, where α, β > 0, λ ∈ ℝ∖{0}, x > 0, and c(λ) = 1 − exp(−λ). For β = 1, we obtain the exponentiated-G Poisson class of distributions, and for α = 1 we obtain the generalized-G Poisson class of distributions, both of which are embedded in the EGP family. Since (2) is the CDF of the RR baseline model and (3) is the CDF of the EGP family, substituting (2) into (3) yields a new compound RR distribution, called the EGRRP distribution, whose CDF can be expressed as
F(x; Φ) = (1/c(λ)) {1 − exp(−λ{1 − [1 − Δ_θ(x)]^α}^β)},   (5)
where Φ = (α, β, λ, θ), α, β, θ > 0, λ ∈ ℝ∖{0}, x > 0, and Δ_θ(x) = exp[−(θ/x)²]. The corresponding PDF can be written as
f(x; Φ) = (2αβλθ²/c(λ)) x⁻³ Δ_θ(x) [1 − Δ_θ(x)]^{α−1} (1 − [1 − Δ_θ(x)]^α)^{β−1} exp(−λ{1 − [1 − Δ_θ(x)]^α}^β).   (6)
Figure 1 illustrates that the PDF of the EGRRP model may exhibit various shapes, such as right-skewed, left-skewed, and unimodal shapes. On the other hand, Figure 2 shows that the hazard rate function (HRF) of the EGRRP model may be decreasing or upside-down (unimodal). Moreover, there are several notable extensions of the RR distribution, including Voda [12], Mukerjee and Saran [13], Nadarajah and Kotz [14], Nadarajah and Gupta [15], Barreto-Souza et al. [16], Krishna et al. [17], Mahmoud and Mandouh [18], Mead and Abd-Eltawab [19], Chakraborty et al. [20], and Cordeiro et al. [21], among others.
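For readers who wish to experiment with the model, the following R sketch implements the CDF and PDF in (5) and (6); the function names pegrrp() and degrrp() are illustrative choices, not code from the authors, and the final call is only a quick numerical sanity check that the density integrates to approximately one.

```r
# EGRRP CDF and PDF following (5) and (6); pegrrp()/degrrp() are illustrative names.
pegrrp <- function(x, alpha, beta, lambda, theta) {
  clam  <- 1 - exp(-lambda)                 # c(lambda)
  Delta <- exp(-(theta / x)^2)              # baseline RR CDF, Delta_theta(x)
  (1 - exp(-lambda * (1 - (1 - Delta)^alpha)^beta)) / clam
}
degrrp <- function(x, alpha, beta, lambda, theta) {
  clam  <- 1 - exp(-lambda)
  Delta <- exp(-(theta / x)^2)
  2 * alpha * beta * lambda * theta^2 / clam * x^(-3) * Delta *
    (1 - Delta)^(alpha - 1) * (1 - (1 - Delta)^alpha)^(beta - 1) *
    exp(-lambda * (1 - (1 - Delta)^alpha)^beta)
}
# quick sanity check: the density should integrate to (approximately) one
integrate(degrrp, lower = 1e-8, upper = Inf,
          alpha = 2, beta = 1.5, lambda = 1.5, theta = 1.2)
```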

2. Properties and Numerical Analysis

Using the power series expansion of exp(−x), the PDF in (6) can be expressed as
f(x; Φ) = (2αβθ²/c(λ)) x⁻³ Δ_θ(x) [1 − Δ_θ(x)]^{α−1} Σ_{i=0}^{∞} [(−1)^i λ^{i+1}/i!] {1 − [1 − Δ_θ(x)]^α}^{β(i+1)−1}.
Using the generalized binomial expansion
(1 − δ₁^{δ₂})^{δ₄} = Σ_{δ₃=0}^{∞} (−δ₁^{δ₂})^{δ₃} Γ(1 + δ₄)/[δ₃! Γ(1 + δ₄ − δ₃)],
the last equation for f(x; Φ) can be expressed as
f(x; Φ) = Σ_{k=0}^{∞} ξ_k g_k(x; θ)|_{k=k+1},   (7)
where C(a, b) denotes the generalized binomial coefficient and
ξ_k = [αβ(−1)^k/(c(λ) k)] Σ_{i,j=0}^{∞} [(−1)^{i+j} λ^{i+1}/i!] C(β(i+1)−1, j) C(α(j+1)−1, k),
and g_k(x; θ) is the RR density with scale parameter θ√k. By integrating (7), we obtain a similar simple result for F(x), namely F(x) = Σ_{k=0}^{∞} ξ_k G_k(x; θ), where G_k(x; θ) is the CDF of the RR distribution with scale parameter θ√k. The rth ordinary moment of X is given by μ′_{r,X} = E(X^r) = ∫ x^r f(x) dx. Using (7), we obtain
μ′_{r,X} = Γ(1 − r/2) Σ_{k=0}^{∞} ξ_k [θ√k]^r,   r < 2,   (8)
where Γ(1 + η)|_{η∈ℝ⁺} = η! = Π_{w=0}^{η−1} (η − w) (for integer η) and Γ(η) = ∫₀^{∞} x^{η−1} exp(−x) dx. Setting r = 1 in (8), we obtain the mean of X as E(X) = μ′_{1,X} = θ Γ(1/2) Σ_{k=0}^{∞} ξ_k √k. We can find the MGF, say M_X(t) = E(e^{tX}), from
M_X(t) = Σ_{r=0}^{∞} (t^r/r!) μ′_{r,X} = Σ_{k,r=0}^{∞} (t^r/r!) ξ_k Γ(1 − r/2) [θ√k]^r,   r < 2.
The sth incomplete moment, say I_{s,X}(t), is given by I_{s,X}(t) = ∫_t^{∞} x^s f(x) dx. Using (7), we obtain
I_{s,X}(t) = Σ_{k=0}^{∞} ξ_k [θ√k]^s γ(1 − s/2, (θ/t)² k),   s < 2,
where
γ(η, q)|_{(η ≠ 0, −1, −2, …)} = ∫₀^q t^{η−1} exp(−t) dt = Σ_{k=0}^{∞} (−1)^k q^{η+k}/[k! (η + k)],
and Γ(η, q)|_{(q > 0)} = ∫_q^{∞} t^{η−1} exp(−t) dt. The (s, r)th probability weighted moment (PWM) of X following the EGRRP model, say ρ_{s,r,X}, is formally defined by ρ_{s,r,X} = E{X^s F(X)^r}. Using Equations (5) and (6), we can write f(x; Φ) F(x; Φ)^r = Σ_{k=0}^{∞} w_k g_k(x; θ), where
w_k = [αβ/(c(λ)^{r+1} k)] Σ_{w,i,j=0}^{∞} [(−1)^{w+i+j+k} λ^{i+1} (1 + w)^i/i!] C(r, w) C(β(i+1)−1, j) C(α(1+j)−1, k).
Then, the (s, r)th PWM of X can be obtained and summarized as
ρ_{s,r,X} = Γ(1 − s/2) Σ_{k=0}^{∞} w_k [θ√k]^s,   s < 2.
The nth moment of the residual life, say τ_{n,X}(t) = E[(X − t)^n | X > t], n = 1, 2, …, uniquely determines F(x). Therefore,
τ_{n,X}(t) = [1/(1 − F(t))] Σ_{k=0}^{∞} ξ_k^* [θ√k]^n Γ(1 − n/2, (θ/t)² k),   n < 2,
where ξ_k^* = ξ_k Σ_{r=0}^{n} C(n, r) (−t)^{n−r} and Γ(η, q) = Γ(η) − γ(η, q). The nth moment of the reversed residual life is ω_{n,X}(t) = E[(t − X)^n | X ≤ t], for t > 0 and n = 1, 2, …. Therefore,
ω_{n,X}(t) = [1/F(t)] Σ_{k=0}^{∞} ξ_k^{**} [θ√k]^n γ(1 − n/2, (θ/t)² k),   n < 2,
where ξ_k^{**} = ξ_k Σ_{r=0}^{n} (−1)^r C(n, r) t^{n−r}. Table 1 lists a few sub-models of the EGRRP model. A numerical investigation of the mean E(X), variance V(X), skewness Ske(X), and kurtosis Ku(X) is shown in Table 2. According to Table 2, the skewness of the proposed model can take both positive and negative values, and its kurtosis can be either greater than or less than three.
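The numerical values in Table 2 can be reproduced approximately by direct numerical integration of the PDF. The following R sketch (assuming the degrrp() function from the earlier snippet, with illustrative parameter values) computes the mean, variance, skewness, and kurtosis; note that the rth moment is finite only when the right tail is light enough (roughly r < 2α), so the variance and kurtosis require a sufficiently large α.

```r
# Numerical check of E(X), V(X), Ske(X) and Ku(X), cf. Table 2 (a sketch).
raw_moment <- function(r, alpha, beta, lambda, theta) {
  integrate(function(x) x^r * degrrp(x, alpha, beta, lambda, theta),
            lower = 1e-8, upper = Inf)$value
}
moment_summary <- function(alpha, beta, lambda, theta) {
  m <- sapply(1:4, raw_moment, alpha = alpha, beta = beta,
              lambda = lambda, theta = theta)
  mu   <- m[1]
  var  <- m[2] - mu^2
  skew <- (m[3] - 3 * mu * var - mu^3) / var^1.5          # third central moment / sd^3
  kurt <- (m[4] - 4 * mu * m[3] + 6 * mu^2 * m[2] - 3 * mu^4) / var^2
  c(mean = mu, variance = var, skewness = skew, kurtosis = kurt)
}
moment_summary(alpha = 3, beta = 2, lambda = 1.5, theta = 1.2)  # illustrative values
```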

3. Actuarial Indicators for Risk Analysis and Management

A specific insurance policy or the insurance sector as a whole may be affected by future occurrences, and risk analysis in insurance data refers to the process of assessing and estimating the chance of such events. Identification, evaluation, and development of solutions to control or mitigate potential risks are the objectives of risk analysis. To ascertain the degree of risk associated with a certain policy or portfolio of policies, this procedure entails gathering, evaluating, and interpreting data regarding a variety of elements, including demographic data, insurance-claims history, and economic indicators. Insurance firms use the findings of risk analysis to determine rates, make underwriting judgements, and create loss mitigation plans. KRIs are an essential tool for risk management as they provide a clear, quantifiable, and actionable measure of an organization’s key risks, allowing organizations to take proactive steps to manage these risks and avoid negative consequences. The selection of KRIs is important and should be based on the specific risks faced by the organization and its overall risk management strategy.

3.1. VARK Indicator

The VARK, a widely utilized financial term, is a measure of the maximum expected loss that a portfolio or investment may incur over a given time period and is commonly employed as a risk management tool by financial institutions to assess market risk. This single-number indicator provides a concise summary of the potential loss of a portfolio or investment. For example, a portfolio with a VARK of USD 1 million at a 95% confidence level implies a 5% probability that the portfolio will experience a loss exceeding USD 1 million during the specified time frame. Risk exposure is an inevitable aspect of the operations of insurance organizations, and actuaries have developed various risk indicators to evaluate it statistically. VARK is used to determine the most probable maximum amount of capital that might be lost over a specified duration. However, knowing only the maximum possible loss, which may be unbounded or at least equal to the value of the portfolio, is not necessarily informative, and the risk profiles of different portfolios with the same maximum loss can vary substantially. Therefore, the VARK typically depends on the probability distribution of the loss random variable, which is influenced by the overall distribution of the risk factors that affect the portfolio. Then, for EGRRP distributions, we can simply write
Pr(X > Q(X)) = 1 − q = 0.01 for q = 99% and 0.05 for q = 95%.
Here, Q(X) = F⁻¹(q; Φ) refers to the quantile function of the EGRRP model. The VARK is a practical instrument for risk management since it gives a clear and succinct assessment of the potential loss of an investment or portfolio. It has drawbacks, too, as it simply offers a point estimate of the probable loss and ignores tail risks or extreme events. Financial organizations frequently combine VARK with other risk management tools, such as stress testing and scenario analysis, to address these constraints.
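A simple way to evaluate the VARK indicator in practice is to invert the EGRRP CDF numerically at the chosen confidence level. The sketch below does this with uniroot(); pegrrp() is the CDF function assumed from the earlier snippet, the function name vark_egrrp() is illustrative, and the parameter values are only examples.

```r
# VARK of the EGRRP model by numerical inversion of the CDF in (5) (a sketch).
vark_egrrp <- function(q, alpha, beta, lambda, theta, upper = 1e9) {
  uniroot(function(x) pegrrp(x, alpha, beta, lambda, theta) - q,
          lower = 1e-8, upper = upper, tol = 1e-10)$root
}
# e.g. the 95% and 99% confidence levels
vark_egrrp(0.95, alpha = 3, beta = 2, lambda = 1.5, theta = 1.2)
vark_egrrp(0.99, alpha = 3, beta = 2, lambda = 1.5, theta = 1.2)
```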

3.2. TVARK Risk Indicator

The TVARK(X; q, Φ) at the 100q% confidence level can be defined as the expected loss given that the loss exceeds the 100q% quantile of the distribution of X. Then, the TVARK(X; q, Φ) can be calculated as
TVARK(X; q, Φ) = E(X | X > π(q)) = [1/(1 − q)] ∫_{π(q)}^{+∞} x f(x; Φ) dx,   (12)
where π(q) = VARK(X; q, Φ).
Then, we have
TVARK(X; q, Φ) = [θ/(1 − q)] Σ_{k=0}^{∞} ξ_k^* √k Γ(1/2, (θ/t)² k)|_{t=π(q)}.   (13)
Thus, TVARK(X; q, Φ) can be viewed as the average of all the VARK(X; q, Φ) values above the confidence level q, which means that the TVARK(X; q, Φ) indicator gives us much more information about the tail of the EGRRP distribution and its properties. Generally, the TVARK(X; q, Φ) is also linked to the MEL function through
e(X; q, Φ) = TVARK(X; q, Φ) − VARK(X; q, Φ),
where e(X; q, Φ) is the MEL function evaluated at the 100qth quantile.
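The TVARK and MEL indicators can be checked numerically by integrating the tail of the density beyond the VARK level, as in the definition in (12). The following sketch assumes degrrp() and vark_egrrp() from the earlier snippets; the parameter values are illustrative.

```r
# TVARK and MEL of the EGRRP model by direct numerical integration (a sketch).
tvark_egrrp <- function(q, alpha, beta, lambda, theta) {
  v <- vark_egrrp(q, alpha, beta, lambda, theta)          # pi(q) = VARK level
  tail_mean <- integrate(function(x) x * degrrp(x, alpha, beta, lambda, theta),
                         lower = v, upper = Inf)$value / (1 - q)
  c(VARK = v, TVARK = tail_mean, MEL = tail_mean - v)
}
tvark_egrrp(0.95, alpha = 3, beta = 2, lambda = 1.5, theta = 1.2)
```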

3.3. The TV Indicator

The TV indicator (TV(X; q, Φ)) can be expressed as
TV(X; q, Φ) = E(X² | X > π(q)) − [TVARK(X; q, Φ)]².
For the EGRRP model, the quantity E(X² | X > π(q)) is not available in closed form; however, we dealt with this quantity using numerical techniques to find the closest possible value for it. Numerical techniques often represent the best solution in many estimation and modeling problems. Here, TVARK(X; q, Φ) is given in (13).

3.4. TMV Risk Indicator

The TMV risk indicator (TMV(X; q, Φ; π)) for the EGRRP model can then be derived as
TMV(X; q, Φ; π)|_{0<π<1} = TVARK(X; q, Φ) + π TV(X; q, Φ).
Then, for any loss random variable, TMV(X; q, Φ; π) ≥ TVARK(X; q, Φ), and for π = 1 we have TMV(X; q, Φ; 1) = TVARK(X; q, Φ) + TV(X; q, Φ). A short numerical sketch of the TV and TMV indicators is given after the following list. Some other common examples of KRIs can be mentioned, such as:
  • Loss frequency and severity indicators: These KRIs measure the frequency and size of losses resulting from different risks, including accidents, losses from fraud, or losses from natural catastrophes.
  • Volatility indicators: These KRIs measure the level of volatility in various financial markets, such as the stock market, currency market, or commodities market.
  • Credit risk indicators: These KRIs measure the credit risk of various borrowers, such as individuals or organizations, based on their credit history and financial information.
  • Operational risk indicators: These KRIs measure the level of operational risk associated with various processes, such as supply chain disruptions, IT failures, or human errors.
  • Market risk indicators: These KRIs measure the level of market risk associated with investments, such as stocks, bonds, or commodities.
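As mentioned above, the TV and TMV indicators can be evaluated by the same numerical route used for VARK and TVARK. The sketch below assumes the degrrp(), vark_egrrp(), and tvark_egrrp() helpers defined earlier and uses an illustrative weight π = 0.5; the conditional second moment E(X² | X > π(q)) is finite only when the EGRRP tail is light enough, so the chosen parameter values matter here.

```r
# TV and TMV of the EGRRP model by numerical integration (a sketch).
tv_tmv_egrrp <- function(q, alpha, beta, lambda, theta, pi_w = 0.5) {
  v   <- vark_egrrp(q, alpha, beta, lambda, theta)
  tva <- tvark_egrrp(q, alpha, beta, lambda, theta)["TVARK"]
  m2  <- integrate(function(x) x^2 * degrrp(x, alpha, beta, lambda, theta),
                   lower = v, upper = Inf)$value / (1 - q)   # E(X^2 | X > pi(q))
  tv  <- m2 - tva^2
  c(TV = unname(tv), TMV = unname(tva + pi_w * tv))
}
tv_tmv_egrrp(0.95, alpha = 3, beta = 2, lambda = 1.5, theta = 1.2)
```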

4. Estimation

4.1. Classical Estimation

4.1.1. Maximum Likelihood Technique

To determine the maximum likelihood estimates (MLEs) of Φ, we formulate the log-likelihood function as follows:
ℓ(Φ) = n log 2 + n log α + n log β + n log λ − n log[1 − exp(−λ)] + 2n log θ − 3 Σ_{i=1}^{n} log x_i − Σ_{i=1}^{n} (θ/x_i)² + (α − 1) Σ_{i=1}^{n} log[1 − Δ_θ(x_i)] + (β − 1) Σ_{i=1}^{n} log{1 − [1 − Δ_θ(x_i)]^α} − λ Σ_{i=1}^{n} {1 − [1 − Δ_θ(x_i)]^α}^β.
The components of the score vector are easily derived and then solved. To solve these equations, it is usually more convenient to use nonlinear optimization techniques, such as the quasi-Newton algorithm, to numerically maximize ℓ(Φ). A popular numerical optimization approach for maximizing functions is the quasi-Newton algorithm. It is an iterative approach that computes the search direction at each iteration using approximations of the Hessian matrix. The fundamental goal of the quasi-Newton approach is to update a Hessian matrix approximation based on gradient data gathered from evaluating the objective function at various locations. Using the updated Hessian approximation, the technique determines a search direction at each iteration and moves in that direction to find a new point at which to evaluate the objective function. The Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm is the most widely used quasi-Newton algorithm. The rank-two update formula used by the BFGS algorithm to update the Hessian approximation is designed to preserve the positive definiteness of the approximation. The BFGS algorithm is frequently used when there are many variables in an optimization problem.
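A minimal R sketch of this step is given below, assuming the degrrp() density defined earlier; it maximizes the log-likelihood with optim() and the BFGS quasi-Newton method, working on the log scale for α, β, and θ so that positivity is enforced automatically (the function names and starting values are illustrative, not the authors' code).

```r
# Maximum likelihood estimation of the EGRRP parameters via BFGS (a sketch).
negloglik <- function(par, x) {
  alpha <- exp(par[1]); beta <- exp(par[2]); lambda <- par[3]; theta <- exp(par[4])
  if (lambda == 0) return(1e10)                      # lambda must be nonzero
  dens <- degrrp(x, alpha, beta, lambda, theta)
  if (any(!is.finite(dens)) || any(dens <= 0)) return(1e10)
  -sum(log(dens))
}
fit_egrrp_mle <- function(x, start = c(0, 0, 1, 0)) {
  out <- optim(start, negloglik, x = x, method = "BFGS", hessian = TRUE)
  c(alpha = exp(out$par[1]), beta = exp(out$par[2]),
    lambda = out$par[3],     theta = exp(out$par[4]))
}
```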

4.1.2. Bootstrapping Technique

The bootstrapping technique is a powerful statistical technique, particularly useful when dealing with small sample sizes. In traditional scenarios, assuming a normal or t-distribution is not feasible when working with fewer than 40 samples. However, bootstrap techniques are well suited for sample sizes of fewer than 40 because they involve resampling and do not make any assumptions about the data distribution. With the increasing availability of computing resources, bootstrapping has gained popularity as a practical approach that requires the use of a computer. There are several different types of bootstrapping techniques, including:
  • Non-parametric bootstrapping: In non-parametric bootstrapping, the statistic of interest is calculated directly from the resampled data without making any assumptions about the underlying probability distribution. This is the most commonly used type of bootstrapping and can be used for a wide range of estimators.
  • Parametric bootstrapping: In parametric bootstrapping, the resampled data are generated from a specific parametric distribution that is assumed to describe the data. This can be useful when the underlying distribution is known or can be reasonably assumed and can lead to more accurate estimates than non-parametric bootstrapping.
  • Bootstrap aggregating (bagging): In bagging, multiple copies of the original dataset are created by resampling, and then a separate model is trained on each of these new datasets. The final estimate is then obtained by averaging the estimates of the individual models. Bagging is commonly used in machine learning and can improve the accuracy and stability of models that are prone to overfitting.
  • Cross-validation bootstrapping: In cross-validation bootstrapping, the original dataset is divided into several subsets, and then a separate model is trained on each subset while using the remaining data for validation. The final estimate is then obtained by averaging the estimates of the individual models. Cross-validation bootstrapping is commonly used in machine learning and can help to prevent overfitting by reducing the variance of the estimate.
Bootstrapping has become a widely used technique for statistical inference and estimation, and it has been applied to a wide range of fields, including finance, engineering, social sciences, and natural sciences. Bootstrapping can be implemented using various statistical software packages, including R, Python, MATLAB, and SAS; a minimal non-parametric bootstrap sketch in R is given below.
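The sketch assumes the fit_egrrp_mle() helper defined above; the number of replicates B and the reported summaries are illustrative choices.

```r
# Non-parametric bootstrap for the EGRRP parameters (a sketch):
# resample the data with replacement, refit by maximum likelihood,
# and summarize the replicates.
boot_egrrp <- function(x, B = 500) {
  reps <- replicate(B, fit_egrrp_mle(sample(x, replace = TRUE)))
  list(estimate = rowMeans(reps),
       se       = apply(reps, 1, sd),
       ci_95    = apply(reps, 1, quantile, probs = c(0.025, 0.975)))
}
```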

4.1.3. Technique of Cramér–von Mises

The Cramér–von Mises estimation (CVME) technique is based on the theory of minimum distance estimation. The CVMEs of the parameters α, β, λ, and θ are obtained by minimizing the following expression with respect to these parameters:
CVM(Φ) = 1/(12n) + Σ_{i=1}^{n} [F(x_{i:n}; Φ) − k(i, n)]²,
where k(i, n) = (2i − 1)/(2n), i.e.,
CVM(Φ) = 1/(12n) + Σ_{i=1}^{n} {c(λ)⁻¹ [1 − exp(−λ{1 − [1 − Δ_θ(x_{i:n})]^α}^β)] − k(i, n)}².
Then, the CVMEs of the parameters are obtained by solving the following non-linear equations:
Σ_{i=1}^{n} ξ_α(x_{i:n} | Φ) {c(λ)⁻¹ [1 − exp(−λ{1 − [1 − Δ_θ(x_{i:n})]^α}^β)] − k(i, n)} = 0,
Σ_{i=1}^{n} ξ_β(x_{i:n} | Φ) {c(λ)⁻¹ [1 − exp(−λ{1 − [1 − Δ_θ(x_{i:n})]^α}^β)] − k(i, n)} = 0,
Σ_{i=1}^{n} ξ_λ(x_{i:n} | Φ) {c(λ)⁻¹ [1 − exp(−λ{1 − [1 − Δ_θ(x_{i:n})]^α}^β)] − k(i, n)} = 0,
and
Σ_{i=1}^{n} ξ_θ(x_{i:n} | Φ) {c(λ)⁻¹ [1 − exp(−λ{1 − [1 − Δ_θ(x_{i:n})]^α}^β)] − k(i, n)} = 0,
where ξ_α(x_{i:n} | Φ), ξ_β(x_{i:n} | Φ), ξ_λ(x_{i:n} | Φ), and ξ_θ(x_{i:n} | Φ) are the first partial derivatives of the CDF of the EGRRP distribution with respect to α, β, λ, and θ, respectively.
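In practice, the CVME computation can be organized as a direct numerical minimization of CVM(Φ) rather than by solving the four score-type equations. A sketch, assuming pegrrp() from the earlier snippet and using Nelder–Mead on log-transformed positive parameters (function names are illustrative):

```r
# Cramer-von Mises estimation of the EGRRP parameters (a sketch).
cvm_objective <- function(par, x) {
  alpha <- exp(par[1]); beta <- exp(par[2]); lambda <- par[3]; theta <- exp(par[4])
  xs <- sort(x); n <- length(xs); i <- seq_len(n)
  Fx <- pegrrp(xs, alpha, beta, lambda, theta)
  if (!all(is.finite(Fx))) return(1e10)
  1 / (12 * n) + sum((Fx - (2 * i - 1) / (2 * n))^2)
}
fit_egrrp_cvm <- function(x, start = c(0, 0, 1, 0)) {
  out <- optim(start, cvm_objective, x = x, method = "Nelder-Mead")
  c(alpha = exp(out$par[1]), beta = exp(out$par[2]),
    lambda = out$par[3],     theta = exp(out$par[4]))
}
```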

4.2. Bayesian Estimation

In this part, we construct estimators for the unknown parameters of the EGRRP distribution using Bayesian techniques. The maximum likelihood estimator frequently fails to converge, particularly in models with higher dimensions, and in these situations Bayesian approaches are sought after. Bayesian approaches initially appear to be highly complicated because the estimators involve intractable integrals. Here, we assume gamma priors for α, β, and θ and a normal prior for λ, of the following forms:
π₁(α) ∼ Gamma(ξ₁, d₁), π₂(β) ∼ Gamma(ξ₂, d₂), π₃(θ) ∼ Gamma(ξ₃, d₃), π₄(λ) ∼ Normal(ξ₄, d₄²),
where Gamma(ξ_i, d_i)|(i = 1, 2, 3) stands for the gamma distribution with shape parameter ξ_i and scale parameter d_i, and Normal(ξ₄, d₄²) stands for the normal distribution with mean ξ₄ and variance d₄².
The Gamma distribution is a conjugate prior for several common likelihood functions, including the Poisson, exponential, and normal distributions. This means that if we choose a Gamma prior, the resulting posterior distribution will also be a Gamma distribution, which makes the Bayesian inference computationally efficient and enables us to obtain the posterior distribution in closed form. The Gamma distribution is a distribution over positive values only, which makes it a natural choice for modeling quantities that are inherently positive, such as rates, counts, or durations. The Gamma distribution is a flexible distribution that can take on a wide range of shapes, from highly skewed to approximately symmetric, which makes it a good choice for modeling a wide range of different data types. The parameters of the Gamma distribution have clear and intuitive interpretations, which makes it easy to incorporate prior knowledge into the model; for example, the shape parameter of the Gamma distribution can be interpreted as the number of prior observations, and the scale parameter can be interpreted as the prior sum of the observations. The Gamma distribution is relatively robust to deviations from the assumed model, which makes it a good choice when the data are noisy or when there is uncertainty about the model specification.
It is further assumed that the parameters are independently distributed. The joint prior distribution is then given by
π(α, β, λ, θ) = [d₁^{ξ₁}/Γ(ξ₁)] [d₂^{ξ₂}/Γ(ξ₂)] [d₃^{ξ₃}/Γ(ξ₃)] (2π d₄²)^{−1/2} α^{ξ₁−1} β^{ξ₂−1} θ^{ξ₃−1} exp[−(α d₁ + β d₂ + θ d₃) − (λ − ξ₄)²/(2 d₄²)].
The posterior distribution of the parameters is defined by π(α, β, λ, θ | x) ∝ likelihood(α, β, λ, θ | x) × π(α, β, λ, θ). As a consequence, we recommend employing Markov chain Monte Carlo (MCMC) methods, particularly the Gibbs sampler and the Metropolis–Hastings (MH) technique. We implemented a hybrid MCMC approach to draw samples from the joint posterior of the parameters because the conditional posteriors of the parameters cannot be obtained in any standard form. To implement the Gibbs algorithm, the full conditional posteriors of α, β, θ, and λ are given by
π₁(α | β, λ, θ, x) ∝ α^{n+ξ₁−1} exp(−α d₁) Π_{i=1}^{n} Λ_i,  π₂(β | α, λ, θ, x) ∝ β^{n+ξ₂−1} exp(−β d₂) Π_{i=1}^{n} Λ_i,
π₃(θ | α, β, λ, x) ∝ θ^{n+ξ₃−1} exp(−θ d₃) Π_{i=1}^{n} Λ_i,  π₄(λ | α, β, θ, x) ∝ λ^{n} exp[−(λ − ξ₄)²/(2 d₄²)] Π_{i=1}^{n} Λ_i,
where
Λ_i = x_i^{−3} Δ_θ(x_i) [c(λ)]^{−1} [1 − Δ_θ(x_i)]^{α−1} (1 − [1 − Δ_θ(x_i)]^α)^{β−1} exp[−λ(1 − [1 − Δ_θ(x_i)]^α)^β].
The simulation algorithm we followed is given by:
1. Provide the initial values, say α^(0), β^(0), λ^(0), and θ^(0), where α^(0) > 0, β^(0) > 0, λ^(0) ∈ ℝ∖{0}, and θ^(0) > 0 (the initial values are chosen by the researcher, provided that they lie within the specified ranges); then, at the ith stage:
2. Using the MH algorithm, generate α^(i) ∼ π₁(α | β^(i−1), λ^(i−1), θ^(i−1), x);
3. Then, using the MH algorithm, generate β^(i) ∼ π₂(β | α^(i), λ^(i−1), θ^(i−1), x);
4. Then, using the MH algorithm, generate θ^(i) ∼ π₃(θ | α^(i), β^(i), λ^(i−1), x);
5. Then, using the MH algorithm, generate λ^(i) ∼ π₄(λ | α^(i), β^(i), θ^(i), x);
6. Repeat steps 2–5 M = 100,000 times to obtain samples of size M from the corresponding posteriors of interest.
Finally, obtain the Bayesian estimates of α, β, λ, and θ using the formula ĥ_Bayesian = [1/(M − M₀)] Σ_{j=M₀+1}^{M} h_j, for h = α, β, λ, and θ, where M₀ (= 50,000) is the burn-in period of the generated Markov chains.
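A compressed R sketch of this MH-within-Gibbs scheme is given below; it uses Gaussian random-walk proposals for each parameter in turn, targeting the unnormalized log-posterior built from degrrp() and the priors above. The hyperparameters, proposal scale, chain length, and burn-in in the sketch are illustrative defaults, not the authors' settings (the study itself uses M = 100,000 draws with a burn-in of 50,000).

```r
# MH-within-Gibbs sampler for the EGRRP posterior (a sketch).
log_post <- function(p, x, xi = c(1, 1, 1, 1), d = c(1, 1, 1, 1)) {
  alpha <- p[1]; beta <- p[2]; lambda <- p[3]; theta <- p[4]
  if (alpha <= 0 || beta <= 0 || theta <= 0 || lambda == 0) return(-Inf)
  dens <- degrrp(x, alpha, beta, lambda, theta)
  if (any(!is.finite(dens)) || any(dens <= 0)) return(-Inf)
  sum(log(dens)) +
    dgamma(alpha, shape = xi[1], scale = d[1], log = TRUE) +   # gamma priors
    dgamma(beta,  shape = xi[2], scale = d[2], log = TRUE) +
    dgamma(theta, shape = xi[3], scale = d[3], log = TRUE) +
    dnorm(lambda, mean = xi[4], sd = d[4], log = TRUE)         # normal prior on lambda
}
mh_gibbs_egrrp <- function(x, init, M = 10000, burn = 5000, step = 0.1) {
  # init must have positive alpha, beta, theta and nonzero lambda
  draws <- matrix(NA, M, 4); cur <- init; lp <- log_post(cur, x)
  for (m in 1:M) {
    for (j in 1:4) {                         # one-parameter-at-a-time MH updates
      prop <- cur; prop[j] <- cur[j] + rnorm(1, 0, step)
      lp_prop <- log_post(prop, x)
      if (log(runif(1)) < lp_prop - lp) { cur <- prop; lp <- lp_prop }
    }
    draws[m, ] <- cur
  }
  # posterior means (Bayes estimates under squared error loss)
  setNames(colMeans(draws[(burn + 1):M, ]), c("alpha", "beta", "lambda", "theta"))
}
```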

5. Simulations for Comparing Bayesian and Classical Approaches

Simulation studies are a crucial tool for assessing and contrasting various statistical techniques, including traditional estimation techniques. In simulation studies, data are generated based on a given model, and the efficacy of various estimating techniques is evaluated using the generated data. We discuss the value of simulation studies for contrasting estimation techniques in this section. Simulations are an important tool for comparing Bayesian and classical estimation techniques because they allow us to systematically evaluate the performance of different techniques under a wide range of conditions. Simulations allow us to examine the performance of Bayesian and classical techniques under different sample sizes, which can be particularly useful when the sample size is small; by simulating data with different sample sizes, we can assess how well the different techniques perform under conditions of low data availability. Simulations allow us to evaluate the performance of Bayesian and classical techniques under different parameter values, which can be important when the parameters of interest are not known a priori; by simulating data with different parameter values, we can assess how well the different techniques perform under conditions of parameter uncertainty. Simulations allow us to evaluate the robustness of Bayesian and classical techniques to deviations from the assumed model; by simulating data that deviate from the assumed model, we can assess how well the different techniques perform under conditions of model misspecification. Simulations allow us to compare the accuracy and precision of Bayesian and classical techniques under different conditions; by simulating data with known parameter values, we can assess how accurately and precisely the different techniques estimate the true parameters. Finally, simulations allow us to assess the computational efficiency of Bayesian and classical techniques under different conditions; by simulating data with different sample sizes and parameter values, we can assess how well the different techniques scale to larger and more complex datasets.
The mean squared error (MSE) is a performance indicator that is frequently used in simulation studies to assess the precision of a statistical model or estimator. The average of the squared discrepancies between the estimated values and the actual values of the parameter being estimated is known as the MSE. In simulation research, MSE is chosen over measures of dispersion and biases for a number of reasons. It is a thorough evaluation. The MSE accounts for both the estimator’s bias and variability. Measurements of bias simply reflect the discrepancy between the estimator and the true value, while measures of dispersion, such as variance or standard deviation, only record the estimator’s variability. Because it takes into account both types of error, the MSE offers a more complete evaluation of the estimator’s performance. It is simple to understand. The MSE is simple to read because it uses the same units as the parameter being estimated.
An MCMC simulation study is conducted in this section to assess and compare the performance of the different estimators of the unknown parameters of the EGRRP distribution. This performance is assessed using the average values (AVs) of the estimates and the MSEs. First, we generated 1000 samples of size n = 20, 50, 100, and 200 from the EGRRP distribution under two parameter designs: design I with α = 2, β = 1.5, λ = 1.5, θ = 1.2, and design II with α = 0.6, β = 2, λ = 1.5, θ = 0.5.
The tables present the AVs and MSEs of various parameter estimators, namely the MLEs, bootstrap estimators, CVMEs, and Bayesian estimators. To evaluate the Bayesian estimators, the MCMC technique is used with a flexible gamma prior under the squared error loss function (SELF) for all parameters, except for parameter λ, which uses a normal prior. The hyperparameters are assumed to be known and are selected to give a prior mean equal to the initial value and a prior variance of one. The results in Table 3, Table 4, Table 5 and Table 6 demonstrate that all estimators exhibit consistency, as evidenced by the decreasing MSEs as the sample size increases. Moreover, the Bayesian estimators have lower MSEs than the other estimators, and in some cases the MSEs of the Bayesian estimators and the MLEs are very similar. The computations in this section were performed using the Mathcad program, version 15.0.
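A skeleton of such a simulation study in R is sketched below (the paper's own computations used Mathcad 15.0); it generates EGRRP samples by numerically inverting the CDF with the vark_egrrp() helper, refits each sample by maximum likelihood with fit_egrrp_mle(), and reports the AVs and MSEs. The bootstrap, CVM, and Bayesian estimators can be plugged into the same skeleton.

```r
# Monte Carlo skeleton for the simulation study of Section 5 (a sketch).
regrrp <- function(n, alpha, beta, lambda, theta)     # random generation by inversion
  sapply(runif(n), vark_egrrp, alpha = alpha, beta = beta,
         lambda = lambda, theta = theta)

sim_study <- function(n, true, nrep = 1000) {
  est <- replicate(nrep, fit_egrrp_mle(
    regrrp(n, true[1], true[2], true[3], true[4])))
  list(AV = rowMeans(est), MSE = rowMeans((est - true)^2))
}
# e.g. design I of the paper: alpha = 2, beta = 1.5, lambda = 1.5, theta = 1.2
# sim_study(n = 50, true = c(2, 1.5, 1.5, 1.2), nrep = 1000)
```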

6. Applications for Comparing Bayesian and Classical Estimations

Two real-life datasets are introduced and analyzed for several purposes, including comparing the Bayesian and classical estimation techniques. In these applications, we consider the Cramér–von Mises (W*), the Anderson–Darling (A*), and the Kolmogorov–Smirnov (KS) test statistics for comparing the methods. The 1st dataset consists of 100 observations of breaking stress of carbon fibers (see Nichols and Padgett [22]). Table 7 gives the values of the estimators of α, β, λ, and θ, the KS test statistic and its p-value, and W* and A* for all techniques using the 1st dataset. From Table 7, we conclude that the Bayesian technique is the best technique, with KS = 0.067, p-value = 0.766, W* = 0.066, and A* = 0.52; however, all other techniques also performed well. For the 2nd dataset (see Smith and Naylor [23]), the data were originally obtained by workers at the UK National Physical Laboratory. Table 8 gives the values of the estimators of α, β, λ, and θ, the KS test statistic and its p-value, and W* and A* for all techniques using the 2nd dataset. From Table 8, we conclude that all techniques performed well, and according to these results we cannot select a single technique as the best one.

7. Applications for Comparing Competitive Distributions

This section presents two real-life applications demonstrating the EGRRP distribution using real datasets. We compare the results of the new EGRRP distribution with the Weibull-inverse-Weibull (WIW), exponentiated-inverse-Weibull (EIW) (see Nadarajah and Kotz [14]), Kumaraswamy-inverse-Weibull (KumIW) (see Mead and Abd-Eltawab [19]), beta-inverse-Weibull (BIW) (Barreto-Souza et al. [16]), transmuted-inverse-Weibull (TIW) (see Mahmoud and Mandouh [18]), gamma extended-inverse-Weibull (GEIW) (see Silva et al. [24]), Marshall–Olkin-inverse-Weibull (MOIW) (see Krishna et al. [17]), and reciprocal Weibull (RW) distributions.
The unknown parameters of these PDFs are all positive real numbers, except for the transmuted parameter of the TIW distribution. To compare the distributions, we use various criteria, such as the maximized log-likelihood, the AIC (Akaike information criterion), CAIC (consistent Akaike information criterion), BIC (Bayesian information criterion), and HQIC (Hannan–Quinn information criterion). All computations are conducted using the R program. Additionally, the TTT (total time on test) graph is used to verify graphically whether the data can be fit to a specific distribution or not, which is an important graphical approach (Aarset [25]). A straight diagonal TTT graph indicates a constant HRF, a concave TTT graph indicates an increasing HRF, a convex TTT graph indicates a decreasing HRF, and a TTT graph that is first convex and then concave indicates a bathtub (U-shaped) HRF, while a graph that is first concave and then convex indicates a unimodal HRF. The TTT graphs for the two real datasets are presented in Figure 3, where we conclude that the empirical HRFs of the two datasets are increasing.
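The TTT transform used for Figure 3 can be reproduced with a few lines of R; the sketch below plots the scaled total time on test against i/n, so that a concave curve above the diagonal points to an increasing empirical HRF.

```r
# TTT (total time on test) plot, as used in Figure 3 (a sketch).
ttt_plot <- function(x) {
  y <- sort(x); n <- length(y)
  ttt <- cumsum(y) + (n - seq_len(n)) * y        # total time on test at each order statistic
  plot(seq_len(n) / n, ttt / ttt[n], type = "l",
       xlab = "i/n", ylab = "T(i/n)", main = "TTT plot")
  abline(0, 1, lty = 2)                          # reference diagonal (constant HRF)
}
```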
We contrast the EGRRP model with strong competitive distributions in Table 9, Table 10, Table 11 and Table 12. Table 9 gives −2ℓ, AIC, BIC, HQIC, and CAIC for the 1st dataset. Table 10 lists the MLEs and their standard errors (in parentheses) for the 1st dataset. Table 11 gives −2ℓ, AIC, BIC, HQIC, and CAIC for the 2nd dataset. Table 12 lists the MLEs and their standard errors (in parentheses) for the 2nd dataset. Of all the models fitted to the two real-life datasets, the EGRRP model provides the best values of the AIC, BIC, HQIC, and CAIC statistics, so it may be picked as the best option. For the first dataset, Figure 4 shows the estimated CDFs, estimated PDFs, P-P graph, and Kaplan–Meier graph. The estimated CDFs, estimated PDFs, P-P graph, and Kaplan–Meier survival graph for the second dataset are shown in Figure 5. These graphs suggest that, for both datasets, the proposed distribution provides a better fit than the other non-nested and nested distributions.

8. Risk Analysis for Insurance-Claims Data

In insurance data analysis, the temporal growth of claims over time for each relevant exposure period is often presented in a triangle format. The exposure period can refer to the year the insurance policy was purchased or the time frame in which the loss occurred. It should be noted that the origin period need not be annual and can be monthly or quarterly. The claim age, or claim lag, is the duration between the origin period and when the claim is made. To identify consistent trends, division levels, or risks, data from various insurance policies are often combined. In this study, we utilize a U.K. Motor Non-Comprehensive account as an example of an insurance-claims payment triangle. For convenience, we set the origin period between 2007 and 2013 (see Shrahili et al. [3]; Mohamed [4]). The claims data are presented in an insurance-claims payment data frame in a standard database format, with columns for the origin year (spanning from 2007 to 2013), the development year, and the incremental payments. It is crucial to note that a probability-based distribution was initially used to analyze these insurance-claims data. The data are analyzed using numerical and graphical techniques. The numerical approach involves fitting theoretical distributions such as the normal, uniform, exponential, logistic, beta, lognormal, and Weibull distributions. These distributions are then examined using graphical tools such as the skewness–kurtosis graph (or the Cullen and Frey graph) (see Figure 6). Figure 6 shows that our data are left-skewed and have a kurtosis of less than three.
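One common way to reproduce a Cullen and Frey (skewness–kurtosis) graph such as Figure 6 in R is via the descdist() function of the fitdistrplus package; in the sketch below, claims is a placeholder name for the vector of incremental claim payments, and the package is assumed to be installed.

```r
# Cullen and Frey (skewness-kurtosis) graph, as in Figure 6 (a sketch).
library(fitdistrplus)
descdist(claims, boot = 1000)   # 'claims' = vector of incremental claim payments
```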
Different approaches are utilized to examine multiple aspects of the insurance-claims data, which are presented in Figure 7. The NKDE method is implemented to analyze the initial shape of the insurance-claims density, while the Q-Q graph is used to evaluate the normality of the data. The TTT graph is employed to assess the initial shape of the empirical HRF, and the “box graph” is utilized to identify explanatory variables.
Figure 7 displays the results of the various graphs. The initial density is demonstrated to be an asymmetric function with a left tail in the top left graph. The bottom right graph indicates that there are no extreme claims. The bottom left graph shows that the HRF for models explaining the data should have a monotonically increasing trend. Scattergrams for the insurance-claims data are presented in Figure 8. The autocorrelation function (ACF) and partial autocorrelation function (partial ACF) for the data are depicted in Figure 9.
To assess risk for the insurance-claims data, measures such as VARK, TVARK, TV, TMV, and MEL are employed at various confidence levels. Table 13 lists the KRIs for the EGRRP model under the insurance-claims data for all estimation methods, and Table 14 gives the corresponding estimated parameters and their ranks for the EGRRP model. Table 15 lists the KRIs for the RR distribution under the insurance-claims data for all estimation methods, and Table 16 gives the corresponding estimated parameters for the RR model. The RR distribution is chosen as the baseline distribution for comparison with the new EGRRP distribution. Based on these tables, the following results can be highlighted:
  • For all risk assessment Bayesian and non-Bayesian techniques | q = 0.6 ,   0.7 ,   0.8 ,   0.9 ,   0.95 ,   0.99 ,   and   0.999 :
    VARK ( X ; q , _ ) | q = 0.6 < VARK ( X ; q , _ ) | q = 0.7 < VARK ( X ; q , _ ) | q = 0.999 .
  • For all risk assessment Bayesian and non-Bayesian techniques | q = 0.6 ,   0.7 ,   0.8 ,   0.9 ,   0.95 ,   0.99 ,   and   0.999 :
    TVARK ( X ; q , _ ) | q = 0.6 < TVARK ( X ; q , _ ) | q = 0.7 < TVARK ( X ; q , _ ) | q = 0.999 .
  • For most risk assessment techniques | q = 0.6 ,   0.7 ,   0.8 ,   0.9 ,   0.95 ,   0.99 ,   and   0.999 :
    TV ( X ; q , _ ) | q = 0.6 < TV ( X ; q , _ ) | q = 0.7 < TV ( X ; q , _ ) | q = 0.999 .
  • For all risk assessment techniques | q = 0.6 ,   0.7 ,   0.8 ,   0.9 ,   0.95 ,   0.99 ,   and   0.999 :
    TMV ( X ; q , _ ) | q = 0.6 < TMV ( X ; q , _ ) | q = 0.7 < TMV ( X ; q , _ ) | q = 0.999 .
  • For all risk assessment Bayesian and non-Bayesian techniques | q = 0.6 ,   0.7 ,   0.8 ,   0.9 ,   0.95 ,   0.99 ,   and   0.999 :
    MEL ( X ; q , _ ) | q = 0.6 > MEL ( X ; q , _ ) | q = 0.7 > MEL ( X ; q , _ ) | q = 0.999 .
  • Under the EGRRP model and the MLE technique: The VARK( X ; _ ^ ) is a consistently growing indicator which starts with 2602.272196 | q = 0.6 and terminates with 145,993.327739 | q = 0.999 ; the TVARK( X ; q , _ ^ ) is a consistently growing indicator which starts with 7534.159674 | q = 0.6 and terminates with 65,808.922847 | q = 0.999 . However, the TV( X ; q , _ ^ ), the TMV( X ; q , _ ^ ), and the MEL( X ; q , _ ^ ) are monotonously reducing.
  • Under the RR distribution and the MLE technique: The VARK( X ; q , _ ^ ) is a consistently growing indicator which starts with 1628.234966 | q = 0.6 and terminates with 36,791.27132 | q = 0.999 ; the TVARK( X ; q , _ ^ ) is a consistently growing indicator which starts with 3547.122526 | q = 0.6 and terminates with 73,594.813549 | q = 0.999 ; the TV( X ; q , _ ^ ), the TMV( X ; q , _ ^ ), and the MEL( X ; q , _ ^ ) are monotonously reducing indicators.
  • Under the EGRRP model and the bootstrapping technique: The VARK( X ; q , _ ^ ) is a consistently growing indicator which starts with 2832.283001 | q = 0.6 and terminates with 123,576.386617 | q = 0.999 ; the TVARK( X ; q , _ ^ ) is a consistently growing indicator which starts with 7560.430006 | q = 0.6 and terminates with 84,874.267755 | q = 0.999 . However, the TV( X ; q , _ ^ ), the TMV( X ; q , _ ^ ), and the MEL( X ; q , _ ^ ) are monotonously reducing.
  • Under the RR distribution and the OLSE technique: The VARK( X ; q , _ ^ ) is a consistently growing indicator which starts with 1692.496862 | q = 0.6 and terminates with 38,243.320243 | q = 0.999 ; the TVARK( X ; q , _ ^ ) is a consistently growing indicator which starts with 3687.117754 | q = 0.6 and terminates with 76,499.395591 | q = 0.999 . Additionally, the TV ( X ; q , _ ^ ), the TMV ( X ; q , _ ^ ), and the MEL( X ; q , _ ^ ) are monotonously reducing indicators.
  • Under the EGRRP model and the CVM technique: The VARK( X ; q , _ ^ ) is a consistently growing indicator which starts with 2770.998013 | q = 0.6 and terminates with 123,130.65901 | q = 0.999 ; the TVARK( X ; q , _ ^ ) is a consistently growing indicator which starts with 7453.230383 | q = 0.6 and terminates with 84,870.02498 | q = 0.999 . However, the TV( X ; q , _ ^ ), the TMV( X ; q , _ ^ ), and the MEL( X ; q , _ ^ ) are monotonously reducing.
  • Under the RR distribution and the CVM technique: The VARK( X ; q , _ ^ ) is a consistently growing indicator which starts with 2635.229755 | q = 0.6 and terminates with 59545.123954 | q = 0.999 ; the TVARK( X ; q , _ ^ ) is a consistently growing indicator which starts with 5740.868751 | q = 0.6 and terminates with 119,110.108205 | q = 0.999 . However, the TV( X ; q , _ ^ ), the TMV( X ; q , _ ^ ), and the MEL( X ; q , _ ^ ) are monotonously reducing indicators.
  • Under the EGRRP model and the Bayesian technique: The VARK( X ; q , _ ^ ) is a consistently growing indicator which starts with 2463.713921 | q = 0.6 and terminates with 129,788.635 | q = 0.999 ; the TVARK( X ; q , _ ^ ) is a consistently growing indicator which starts with 6896.600374 | q = 0.6 and terminates with 28,585.213639 | q = 0.999 . However, the TV( X ; q , _ ^ ), the TMV( X ; q , _ ^ ), and the MEL( X ; q , _ ^ ) are monotonously reducing.
  • Under the RR distribution and the Bayesian technique: The VARK( X ; q , _ ^ ) is a consistently growing indicator which starts with 1628.247409 | q = 0.6 and terminates with 36,791.552475 | q = 0.999 ; the TVARK( X ; q , _ ^ ) is a consistently growing indicator which starts with 3547.149632 | q = 0.6 and terminates with 73,595.375934 | q = 0.999 ; the TV( X ; q , _ ^ ), the TMV( X ; q , _ ^ ), and the MEL( X ; q , _ ^ ) are monotonously reducing indicators.
  • For the EGRRP model and its corresponding RR base line model, the Bayesian approach is recommended since it offers the most acceptable risk exposure analysis.

9. Conclusions

Probability-based distributions are used by actuaries to determine the expected values of unexpected risk, which are then used to determine insurance premiums, create insurance products, and assess investment plans. Actuaries also employ simulation tools to test their algorithms and assess how different scenarios may affect the financial results of investments and insurance policies. Probability-based distributions can be used to explain risk exposure, often expressed as one or a few key risk indicators derived from a specific probability model. These indicators provide valuable insights for actuaries and risk managers to understand a company’s exposure to various types of risks, such as value-at-risk, tail-value-at-risk, conditional value-at-risk, tail variance, mean excess loss, and tail mean-variance. Different types of data can be analyzed using probability distributions in the modeling process. A new extension of the Reciprocal Rayleigh distribution is introduced and analyzed, including properties such as moments, incomplete moments, probability-weighted moments, moment generating function, residual life, and reversed residual life functions. Parameter estimation is performed through various techniques, including Bayesian estimators under gamma and normal priors using the squared error loss function. The performance of all estimation techniques is evaluated through Monte Carlo simulations and two real data applications. These applications compare the new model with other competitive models and demonstrate the importance of the proposed model via the maximum likelihood technique. Numerical analysis for expected value, variance, skewness, and kurtosis is provided. An extensive analytical study is conducted to evaluate and rate actuarial hazards using a wide range of well-known models for actuarial disclosure. Actuarial data are used to reveal and assess these hazards.
Based on risk analysis, the following results can be highlighted:
  • For all risk assessment Bayesian and non-Bayesian techniques | q = 0.6 ,   0.7 ,   0.8 ,   0.9 ,   0.95 ,   0.99 ,   and   0.999 :
    VARK ( X ; q , _ ) | q = 0.6 < < VARK ( X ; q , _ ) | q = 0.999 .
  • For all risk assessment Bayesian and non-Bayesian techniques | q = 0.6 ,   0.7 ,   0.8 ,   0.9 ,   0.95 ,   0.99 ,   and   0.999 :
    TVARK ( X ; q , _ ) | q = 0.6 < < TVARK ( X ; q , _ ) | q = 0.999 .
  • For most risk assessment techniques | q = 0.6 ,   0.7 ,   0.8 ,   0.9 ,   0.95 ,   0.99 ,   and   0.999 :
    TV ( X ; q , _ ) | q = 0.6 < < TV ( X ; q , _ ) | q = 0.999 .
  • For all risk assessment techniques | q = 0.6 ,   0.7 ,   0.8 ,   0.9 ,   0.95 ,   0.99 ,   and   0.999 :
    TMV ( X ; q , _ ) | q = 0.6 > > TMV ( X ; q , _ ) | q = 0.999 .
  • For all risk assessment Bayesian and non-Bayesian techniques | q = 0.6 ,   0.7 ,   0.8 ,   0.9 ,   0.95 ,   0.99 ,   and   0.999 :
    MEL ( X ; q , _ ) | q = 0.6 > > MEL ( X ; q , _ ) | q = 0.999 .
  • Under the EGRRP model and the MLE technique: The VARK( X ; _ ^ ) is a consistently growing indicator which starts with 2602.272196 | q = 0.6 and terminates with 145,993.327739 | q = 0.999 ; the TVARK( X ; q , _ ^ ) is a consistently growing indicator which starts with 7534.159674 | q = 0.6 and terminates with 65,808.922847 | q = 0.999 . However, the TV( X ; q , _ ^ ), the TMV( X ; q , _ ^ ), and the MEL( X ; q , _ ^ ) are monotonously reducing.
  • Under the EGRRP model and the bootstrapping technique: The VARK( X ; q , _ ^ ) is a consistently growing indicator which starts with 2832.283001 | q = 0.6 and terminates with 123,576.386617 | q = 0.999 ; the TVARK( X ; q , _ ^ ) is a consistently growing indicator which starts with 7560.430006 | q = 0.6 and terminates with 84,874.267755 | q = 0.999 . However, the TV( X ; q , _ ^ ), the TMV( X ; q , _ ^ ), and the MEL( X ; q , _ ^ ) are monotonously reducing.
  • Under the EGRRP model and the CVM technique: The VARK( X ; q , _ ^ ) is a consistently growing indicator which starts with 2770.998013 | q = 0.6 and terminates with 123,130.65901 | q = 0.999 ; the TVARK( X ; q , _ ^ ) is a consistently growing indicator which starts with 7453.230383 | q = 0.6 and terminates with 84,870.02498 | q = 0.999 . However, the TV( X ; q , _ ^ ), the TMV( X ; q , _ ^ ), and the MEL( X ; q , _ ^ ) are monotonously reducing.
  • Under the EGRRP model and the Bayesian technique: The VARK( | q = 0.6 ) is a consistently growing indicator which starts with 2463.713921 | q = 0.6 and terminates with 129,788.635 | q = 0.999 ; the TVARK( X ; q , _ ^ ) is a consistently growing indicator which starts with 6896.600374 | q = 0.6 and terminates with 28,585.213639 | q = 0.999 . However, the TV( X ; q , _ ^ ), the TMV( X ; q , _ ^ ), and the MEL( X ; q , _ ^ ) are monotonously reducing.
  • For the EGRRP model and its corresponding RR base line model, the Bayesian approach is recommended since it offers the most acceptable risk exposure analysis.
  • For all q values and risk approaches, the EGRRP model outperforms the RR distribution in terms of performance. Despite the probability distributions having the same number of parameters, the new distribution performs the best when modeling insurance-claims reimbursement data and calculating actuarial risk.

Author Contributions

M.I.: review and editing, software, validation, writing the original draft preparation, conceptualization. W.E.: validation, writing the original draft preparation, conceptualization, data curation, formal analysis, software. Y.T.: conceptualization, software. M.M.A.: review and editing, conceptualization, supervision. H.M.Y.: review and editing, software, validation, writing the original draft preparation, conceptualization, supervision. All authors have read and agreed to the published version of the manuscript.

Funding

The study was funded by Researchers Supporting Project number (RSP2023R488), King Saud University, Riyadh, Saudi Arabia.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The dataset can be provided upon request.

Acknowledgments

The study was funded by Researchers Supporting Project number (RSP2023R488), King Saud University, Riyadh, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lane, M.N. Pricing risk transfer transactions. ASTIN Bull. J. IAA 2000, 30, 259–293. [Google Scholar] [CrossRef] [Green Version]
  2. Klugman, S.A.; Panjer, H.H.; Willmot, G.E. Loss Models: From Data to Decisions, 3rd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2012; Volume 715. [Google Scholar]
  3. Shrahili, M.; Elbatal, I.; Yousof, H.M. Asymmetric Density for Risk Claim-Size Data: Prediction and Bimodal Data Applications. Symmetry 2021, 13, 2357. [Google Scholar] [CrossRef]
  4. Mohamed, H.S.; Cordeiro, G.M.; Minkah, R.; Yousof, H.M.; Ibrahim, M. A size-of-loss model for the negatively skewed insurance-claims data: Applications, risk analysis using different techniques and statistical forecasting. J. Appl. Stat. Forthcom. 2022. [Google Scholar] [CrossRef]
  5. Wirch, J.L. Raising value at risk. N. Am. Actuar. J. 1999, 3, 106–115. [Google Scholar] [CrossRef]
  6. Artzner, P. Application of Coherent Risk Measures to Capital Requirements in Insurance. N. Am. Actuar. J. 1999, 3, 11–25. [Google Scholar] [CrossRef]
  7. Tasche, D. Expected Shortfall and Beyond. J. Bank. Financ. 2002, 26, 1519–1533. [Google Scholar] [CrossRef] [Green Version]
  8. Acerbi, C.; Tasche, D. On the coherence of expected shortfall. J. Bank. Finance 2002, 26, 1487–1503. [Google Scholar] [CrossRef] [Green Version]
  9. Landsman, Z. On the Tail Mean–Variance optimal portfolio selection. Insur. Math. Econ. 2010, 46, 547–553. [Google Scholar] [CrossRef]
  10. Furman, E.; Landsman, Z. Tail Variance premium with applications for elliptical portfolio of risks. ASTIN Bull. J. IAA 2006, 36, 433–462. [Google Scholar] [CrossRef] [Green Version]
  11. Aryal, G.R.; Yousof, H.M. The Exponentiated Generalized-G Poisson Family of Distributions. Stoch. Qual. Control. 2017, 32, 7–23. [Google Scholar] [CrossRef]
  12. Voda, V.G.H. On the Reciprocal Rayleigh Distributed Random Variable. Rep. Statis. App. Res. JUSE. 1972, 19, 13–21. [Google Scholar]
  13. Mukerjee, S.P.; Saran, L.K. BiVARKiate Reciprocal Rayleigh distributions in reliability studies. J. Ind. Statist. Assoc. 1984, 22, 23–31. [Google Scholar]
  14. Nadarajah, S.; Kotz, S. The exponentiated Fréchet distribution. Interstat Electron. J. 2003, 14, 1–7. [Google Scholar]
  15. Nadarajah, S.; Gupta, A.K. The Beta Fréchet Distribution. Far East J. Theor. Stat. 2004, 14, 15–24. [Google Scholar]
  16. Barreto-Souza, W.; Cordeiro, G.M.; Simas, A.B. Some results for beta Fréchet distribution. Commun. Stat. Theory Methods 2011, 40, 798–811. [Google Scholar] [CrossRef]
  17. Krishna, E.; Jose, K.K.; Alice, T.; Ristic, M.M. The Marshall-Olkin Fréchet Distribution. Commun. Stat. Theory Tech. 2013, 42, 4091–4107. [Google Scholar] [CrossRef]
  18. Mahmoud, M.R.; Mandouh, R.M. On the Transmuted Fréchet Distribution. J. Appl. Sci-Ences Res. 2013, 9, 5553–5561. [Google Scholar]
  19. Mead, M.E.; Abd-Eltawab, A.R. A note on Kumaraswamy-Fréchet Distribution. Aust. J. Basic Appl. Sci. 2014, 8, 294–300. [Google Scholar]
  20. Chakraborty, S.; Handique, L.; Altun, E.; Yousof, H.M. A new statistical model for extreme values: Mathematical properties and applications. Int. J. Open Probl. Comput. Sci. Math. 2018, 12, 1–18. [Google Scholar]
  21. Cordeiro, G.M.; Yousof, H.M.; Ramires, T.G.; Ortega, E.M.M. The Burr XII system of densities: Properties, regression model and applications. J. Stat. Comput. Simul. 2018, 88, 432–456. [Google Scholar] [CrossRef]
  22. Nichols, M.D.; Padgett, W.J. A Bootstrap Control Chart for Weibull Percentiles. Qual. Reliab. Eng. Int. 2005, 22, 141–151. [Google Scholar] [CrossRef]
  23. Smith, R.L.; Naylor, J.C. A Comparison of Maximum Likelihood and Bayesian Estimators for the Three- Parameter Weibull Distribution. J. R. Stat. Soc. Ser. C Appl. Stat. 1987, 36, 358. [Google Scholar] [CrossRef]
  24. Silva, R.V.; de Andrade, T.A.; Maciel, D.B.; Campos, R.P.S.; Cordeiro, G.M. A New Lifetime Model: The Gamma Extended Frechet Distribution. J. Stat. Theory Appl. 2013, 12, 39–54. [Google Scholar] [CrossRef] [Green Version]
  25. Aarset, M.V. How to Identify a Bathtub Hazard Rate. IEEE Trans. Reliab. 1987, R-36, 106–108. [Google Scholar] [CrossRef]
Figure 1. Graphs of the EGRRP PDF for selected parameter values.
Figure 2. Graphs of the EGRRP HRF for selected parameter values.
Figure 3. TTT plot for breaking stress of carbon fibers (left) and TTT plot for the strengths data of the glass fibers (right).
Figure 4. Estimated CDF (top left), estimated PDF (top right), P-P graph (bottom left), and Kaplan–Meier survival (bottom right) for the 1st dataset.
Figure 5. Estimated CDF (top left), estimated PDF (top right), P-P graph (bottom left), and Kaplan–Meier survival (bottom right) for the 2nd dataset.
Figure 6. Cullen-Frey graph for the actuarial claims data.
Figure 7. The NKDE graph (top left graph), the Q-Q graph (top right graph), the TTT graph (bottom left graph), and the box graph (bottom right graph) for the claims data.
Figure 8. The initial scattergram (left graph), the fitted scattergram (middle graph), and the smoothed scattergram (right graph).
Figure 9. The ACF (left graph) and the partial ACF (right graph) for the insurance-claims data.
Table 1. Some sub models from the EGRRP model.
N | α | β | θ | λ | Reduced Model
1 | – | 1 | – | – | EIRP
2 | – | 1 | – | λ→0 | EIR
3 | – | 1 | 1 | – | EIRP
4 | – | 1 | 1 | λ→0 | EIR
5 | 1 | – | – | – | GIRP
6 | 1 | – | – | λ→0 | GIR
7 | 1 | – | 1 | – | GIRP
8 | 1 | – | 1 | λ→0 | GIR
9 | 1 | 1 | – | – | IRP
10 | 1 | 1 | – | λ→0 | IR
11 | 1 | 1 | 1 | – | IRP
Table 2. E(X), VARK(X), Ske(X), and Ku(X) of the EGRRP distribution.
αβλθE(X)VARK(X)Ske(X)Ku(X)
51.5−132.5705930.454841−70.642231102.708
3 3.1633451.331691−29.92022368.7397
3110.50.394040.021403165,229.7−1,780,082
1.5 0.440280.023666479,669.1−5,491,216
2.5 0.501860.0268751,835,150−2,247,195
3.5 0.544500.0292674,431,099−5,641,370
5 0.591610.03209511,249,343−1,485,963
31.5−301.52.9885850.711030−55.15369840.369
−20 2.7611010.634484−52.04927778.093
−10 2.3956710.530971−45.4008648.572
−5 2.0561470.458989−37.0731494.613
1 1.320840.2129917,734.68−202,978
2 1.2149510.155364258.9211−3135.18
5 1.0204180.054401−69.31331288.51
3210.50.4744980.025416102,1606−12,162,466
10.9489970.101664127,669.8−1,519,896
54.7449832.541611986.3462−11,691.72
109.48996710.1664492.41038−1049.147
2018.9799340.66578−19.3316281.174
3028.469991.49799−30.56492414.910
5047.44983254.1611−34.2731459.057
10094.899671016.644−35.16704469.699
200189.79934066.578−35.27878471.030
500474.498325,416.11−35.29372471.208
1000948.9967101,664.4−35.29461471.218
Table 3. The results of the AVs and their corresponding MSEs (in parentheses) for n = 50 .
Initials | Bayesian | MLE | Bootstrap | CVM
α = 2 | 1.97517 (0.04134) | 2.01461 (0.05516) | 2.04059 (0.06786) | 2.01152 (0.05536)
β = 1.5 | 1.53674 (0.06150) | 1.53098 (0.06659) | 1.50903 (0.07169) | 1.54462 (0.08980)
λ = −1.5 | −1.26544 (0.27332) | −1.52255 (0.27681) | −1.45553 (0.34521) | −1.54183 (0.29966)
θ = 1.2 | 1.20801 (0.004192) | 1.20402 (0.00425) | 1.19769 (0.00475) | 1.20571 (0.00508)
α = 0.6 | 0.58200 (0.00444) | 0.60577 (0.00697) | 0.61474 (0.00968) | 0.60646 (0.00595)
β = 2 | 1.95015 (0.05002) | 2.02998 (0.05832) | 1.97490 (0.05927) | 2.02591 (0.07796)
λ = 1.5 | 1.09199 (0.36095) | 1.51026 (0.38682) | 1.59306 (0.54474) | 1.52918 (0.31561)
θ = 0.5 | 0.49998 (0.00140) | 0.50463 (0.00161) | 0.49484 (0.00166) | 0.50403 (0.00238)
Table 4. The results of the AVs and their corresponding MSEs (in parentheses) for n = 100 .
Initials | Bayesian | MLE | Bootstrap | CVM
α = 2 | 2.01445 (0.01989) | 2.01479 (0.02742) | 2.08685 (0.03198) | 2.00714 (0.02675)
β = 1.5 | 1.43578 (0.03121) | 1.50725 (0.03105) | 1.43233 (0.02795) | 1.51994 (0.04102)
λ = −1.5 | −1.67548 (0.08359) | −1.49394 (0.13440) | −1.35285 (0.11286) | −1.51710 (0.14247)
θ = 1.2 | 1.19235 (0.00192) | 1.19991 (0.00207) | 1.18056 (0.00201) | 1.20238 (0.00246)
α = 0.6 | 0.62188 (0.00259) | 0.60024 (0.00321) | 0.60884 (0.00530) | 0.60335 (0.00298)
β = 2 | 1.88655 (0.02794) | 2.02283 (0.02865) | 1.99553 (0.03892) | 2.01315 (0.03870)
λ = 1.5 | 1.72490 (0.07370) | 1.48553 (0.18452) | 1.52840 (0.25946) | 1.51536 (0.15856)
θ = 0.5 | 0.48597 (0.00088) | 0.50362 (0.00079) | 0.49892 (0.00106) | 0.50205 (0.00119)
Table 5. The results of the AVs and their corresponding MSEs (in parentheses) for n = 200 .
Initials | Bayesian | MLE | Bootstrap | CVM
α = 2 | 1.95326 (0.01153) | 2.00543 (0.01489) | 2.08507 (0.02350) | 2.00823 (0.01327)
β = 1.5 | 1.50538 (0.01469) | 1.50683 (0.016659) | 1.42706 (0.01938) | 1.50400 (0.01954)
λ = −1.5 | −1.33473 (0.04993) | −1.502215 (0.07365) | −1.33426 (0.09376) | −1.49740 (0.06982)
θ = 1.2 | 1.19859 (0.00112) | 1.20070 (0.00113) | 1.17954 (0.00146) | 1.19973 (0.00121)
α = 0.6 | 0.60456 (0.00128) | 0.60010 (0.00169) | 0.60245 (0.00165) | 0.59973 (0.00141)
β = 2 | 2.02997 (0.01397) | 2.01185 (0.01465) | 2.00300 (0.01382) | 2.01313 (0.01806)
λ = 1.5 | 1.45423 (0.05122) | 1.49191 (0.09747) | 1.51023 (0.09543) | 1.49367 (0.07502)
θ = 0.5 | 0.50809 (0.00041) | 0.50188 (0.00041) | 0.50038 (0.00038) | 0.50219 (0.00056)
Table 6. The results of the AVs and their corresponding MSEs (in parentheses) for n = 500 .
Initials | Bayesian | MLE | Bootstrap | CVM
α = 2 | 1.95330 (0.00520) | 2.00201 (0.00555) | 2.04971 (0.00827) | 1.99759 (0.00487)
β = 1.5 | 1.48986 (0.00489) | 1.50247 (0.00629) | 1.45521 (0.00784) | 1.50826 (0.00750)
λ = −1.5 | −1.44498 (0.02654) | −1.50082 (0.02772) | −1.40077 (0.03621) | −1.51180 (0.02653)
θ = 1.2 | 1.20043 (0.00042) | 1.20025 (0.00043) | 1.18770 (0.00057) | 1.20158 (0.00046)
α = 0.6 | 0.58994 (0.00052) | 0.60066 (0.00062) | 0.57935 (0.00105) | 0.60102 (0.00056)
β = 2 | 1.99303 (0.00509) | 2.00223 (0.00531) | 2.06060 (0.01004) | 2.00119 (0.00716)
λ = 1.5 | 1.38815 (0.02737) | 1.50180 (0.03596) | 1.33988 (0.06247) | 1.50565 (0.02962)
θ = 0.5 | 0.49877 (0.00015) | 0.50033 (0.00015) | 0.50989 (0.00027) | 0.50016 (0.00022)
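Tables 3–6 summarize, for each estimation technique, the average value (AV) of the estimates and the mean squared error (MSE), computed over repeated Monte Carlo samples. The Python sketch below illustrates only the assessment loop itself; as a deliberately simple stand-in for the EGRRP parameters and their numerically obtained estimators, it uses the exponential scale parameter, whose maximum likelihood estimator is the sample mean.

import numpy as np

# Monte Carlo assessment of an estimator: AV and MSE over replications,
# in the spirit of Tables 3-6 (exponential scale used as a stand-in).
rng = np.random.default_rng(7)
true_scale = 1.5
replications = 1000
for n in (50, 100, 200, 500):
    estimates = np.array([rng.exponential(true_scale, n).mean()
                          for _ in range(replications)])
    av = estimates.mean()                          # average value of the estimates
    mse = np.mean((estimates - true_scale) ** 2)   # mean squared error
    print(f"n={n}: AV={av:.5f}, MSE={mse:.5f}")

For the EGRRP itself, the same loop applies with samples drawn from the new model and with estimates obtained by the maximum likelihood, Bayesian, bootstrap, or CVM procedures described earlier.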
Table 7. The estimated parameters, KS, p-values, W * , and A * for all estimation techniques using the 1st dataset.
Technique | α̂ | β̂ | λ̂ | θ̂ | KS | p-value | W* | A*
ML | 2.714 | 4.804 | −9.936 | 0.749 | 0.072 | 0.683 | 0.073 | 0.55
Bayesian | 2.899 | 5.119 | −7.038 | 0.843 | 0.067 | 0.766 | 0.066 | 0.52
Bootstrap | 2.899 | 5.163 | −7.088 | 0.844 | 0.070 | 0.717 | 0.066 | 0.52
CVM | 2.660 | 6.705 | −17.703 | 0.607 | 0.067 | 0.756 | 0.087 | 0.623
Table 8. The estimated parameters, KS, p-values, W * , and A * for all estimation techniques using the 2nd dataset.
Technique | α̂ | β̂ | λ̂ | θ̂ | KS | p-value | W* | A*
ML | 3.206 | 5.73 | −19.87 | 0.726 | 0.069 | 0.926 | 0.059 | 0.47
Bayesian | 4.804 | 3.528 | 0.270 | 1.705 | 0.067 | 0.943 | 0.067 | 0.57
Bootstrap | 4.780 | 3.602 | 0.235 | 1.710 | 0.08 | 0.798 | 0.067 | 0.56
CVM | 2.954 | 15.506 | 1.755 | 1.107 | 0.078 | 0.84 | 0.058 | 0.43
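The KS statistic and its p-value in Tables 7 and 8 compare the empirical CDF of the data with the fitted CDF, and the Cramér–von Mises statistic is obtained analogously. The SciPy sketch below is an illustration only: because the EGRRP CDF is defined earlier in the paper and not repeated here, it uses the inverse Weibull (IW) model, which SciPy provides, as a stand-in, and the data are simulated placeholders rather than the carbon-fiber or glass-fiber measurements.

import numpy as np
from scipy import stats

# Placeholder sample; the real analyses use the 1st and 2nd datasets.
data = stats.invweibull.rvs(c=1.8, scale=1.7, size=100, random_state=11)

# Fit the stand-in IW model by maximum likelihood with the location fixed at zero.
c_hat, _, scale_hat = stats.invweibull.fit(data, floc=0)
fitted_cdf = lambda x: stats.invweibull.cdf(x, c_hat, loc=0, scale=scale_hat)

ks_stat, ks_pvalue = stats.kstest(data, fitted_cdf)   # Kolmogorov-Smirnov
cvm = stats.cramervonmises(data, fitted_cdf)          # Cramer-von Mises statistic
print(ks_stat, ks_pvalue, cvm.statistic)

Substituting the EGRRP CDF evaluated at the estimates of Table 7 or Table 8 for fitted_cdf, and the corresponding dataset for data, gives the tabulated quantities.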
Table 9. −2ℓ, AIC, BIC, HQIC, and CAIC for the 1st dataset.
Model | −2ℓ | AIC | BIC | HQIC | CAIC
EGRRP | 105.26 | 113.26 | 123.68 | 117.48 | 113.68
WIW | 287.53 | 295.50 | 305.92 | 299.71 | 295.90
EIW | 284.77 | 296.74 | 304.51 | 299.92 | 297.00
KumIW | 286.13 | 298.22 | 308.50 | 302.35 | 298.50
BIW | 304.12 | 312.10 | 322.60 | 316.40 | 313.60
GEIW | 305.12 | 313.03 | 333.46 | 318.22 | 314.40
IW | 345.33 | 348.30 | 354.50 | 352.40 | 349.44
TIW | 346.55 | 353.55 | 359.36 | 354.61 | 352.74
MOIW | 344.33 | 352.30 | 358.13 | 352.50 | 353.68
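Each criterion in Table 9 (and in Table 11 below) is a simple function of the maximized log-likelihood ℓ, the number of model parameters k, and the sample size n: AIC = −2ℓ + 2k, BIC = −2ℓ + k ln n, HQIC = −2ℓ + 2k ln(ln n), and CAIC (the corrected AIC) = AIC + 2k(k + 1)/(n − k − 1). The short sketch below, an illustrative helper rather than part of the original analysis, reproduces the EGRRP row of Table 9 with k = 4 and n = 100.

import math

def information_criteria(neg2_loglik, k, n):
    """AIC, BIC, HQIC and corrected AIC from -2*log-likelihood, k parameters, n observations."""
    aic = neg2_loglik + 2 * k
    bic = neg2_loglik + k * math.log(n)
    hqic = neg2_loglik + 2 * k * math.log(math.log(n))
    caic = aic + 2 * k * (k + 1) / (n - k - 1)
    return aic, bic, hqic, caic

# EGRRP row of Table 9: -2*log-likelihood = 105.26, four parameters, one hundred observations
print(information_criteria(105.26, 4, 100))   # approximately (113.26, 123.68, 117.48, 113.68)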
Table 10. MLEs and their standard errors (in parentheses) for the 1st dataset.
Model | Estimates (standard errors in parentheses)
EGRRP (α, β, λ, θ) | 2.7144 (0.0784) | 4.8045 (0.537) | −9.9361 (0.996) | 0.7499 (0.018)
WIW (α, β, a, b) | 2.22313 (11.409) | 0.3552 (0.411) | 6.97213 (113.83) | 4.91791 (3.7562)
KumIW (α, β, a, b) | 2.05562 (0.0716) | 0.4654 (0.007) | 6.28153 (0.0603) | 224.188 (0.1646)
BIW (α, β, a, b) | 1.60973 (2.4982) | 0.4046 (0.108) | 22.0143 (21.432) | 29.7624 (17.488)
GEIW (α, β, a, b) | 1.36962 (2.0178) | 0.4776 (0.133) | 27.6452 (14.136) | 17.4584 (14.823)
EIW (α, β, a) | 69.1489 (57.349) | 0.5019 (0.086) | 145.328 (122.924)
TIW (α, β, a) | 1.93156 (0.0975) | 1.7435 (0.076) | 0.08195 (0.1984)
MOIW (α, β, a) | 2.30666 (0.4982) | 1.5796 (0.1666) | 0.5995 (0.3093)
IW (α, β) | 1.87054 (0.1126) | 1.7779 (0.11346)
Table 11. −2ℓ, AIC, BIC, HQIC, and CAIC for the 2nd dataset.
Model | −2ℓ | AIC | BIC | HQIC | CAIC
EGRRP | 39.611 | 47.661 | 56.231 | 51.031 | 48.351
EGIWP | 39.635 | 49.166 | 58.077 | 52.373 | 50.291
PBXIW | 41.403 | 49.588 | 58.105 | 52.934 | 50.366
BIW | 60.601 | 68.603 | 77.201 | 72.029 | 69.324
GEIW | 61.605 | 69.603 | 78.101 | 72.991 | 70.333
IW | 93.773 | 97.722 | 102.01 | 99.404 | 97.903
TIW | 94.144 | 100.12 | 106.54 | 102.666 | 100.500
MOIW | 95.722 | 101.78 | 108.28 | 104.299 | 102.132
Table 12. MLEs and their standard errors for the 2nd dataset.
Model | Estimates (standard errors in parentheses)
EGIWP (α, β, λ, θ) | 3.206 (0.58) | 5.73431 (9.22421) | −19.8321 (26.6332) | 0.72639 (0.32141)
PBXIW (λ, θ, α, β) | 4.4921 (1.7783) | 19.9982 (9.2431) | 0.3833 (0.1438) | 0.5063 (0.1094)
BIW (α, β, a, b) | 2.05181 (0.9861) | 0.6464 (0.1633) | 15.078 (12.061) | 36.944 (22.77)
GEIW (α, β, a, b) | 1.66275 (0.9521) | 0.74213 (0.1979) | 32.1121 (17.393) | 13.324 (9.974)
TIW (α, β, a) | 1.30629 (0.0334) | 2.7844 (0.165) | 0.12982 (0.2082)
MOIW (α, β, a) | 1.54413 (0.226) | 2.38767 (0.253) | 0.48163 (0.2525)
IW (α, β) | 1.26443 (0.0592) | 2.88873 (0.23476)
Table 13. The results of the EGRRP under insurance-claims data.
q | VARK(X; q) | TVARK(X; q) | TV(X; q) | TMV(X; q) | MEL(X; q)
MLE 0.6 2602.2721967534.159674121,153,998.8375560,584,533.578454931.887478
0.7 3314.0993099070.251654152,485,547.0159476,251,843.759625756.152345
0.8 4519.42650211,679.71693209,525,473.41096104,774,416.42247160.290435
0.9 7368.46688217,637.56762338,212,668.58184169,123,971.858510,269.10074
0.95 11,723.90043926,230.44642544,732,714.27725272,392,587.585014,506.54598
0.99 33,355.28626458,883.989741,269,594,815.6707634,856,291.825125,528.70348
0.999 145,993.32773965,808.922846,908,619,382.48083,454,375,500.163−80,184.40489
Bayesian 0.6 2463.7139216896.60037486,857,109.25953943,435,451.2301444432.886453
0.7 3125.833708271.666098108,234,173.1492154,125,358.2407035145.832398
0.8 4242.1014410,594.482279146,113,725.3516573,067,457.1581066352.380839
0.9 6863.7024415,873.911604235,935,854.18517117,983,801.004199010.209163
0.95 10,841.787923,261.286766361,466,587.16925180,756,554.8713912,419.49881
0.99 30,345.469749,970.512506816,811,981.90482408,455,961.4649119,625.04280
0.999 129,788.63528,585.2136393,170,744,914.93861,585,401,042.6829−101,203.422
Bootstrap 0.6 2832.2830017560.43000693,823,029.75244846,919,075.306234728.147005
0.7 3555.9453189027.338287116,455,425.5637458,236,740.1201625471.392969
0.8 4759.23123911,489.206663156,469,913.8258178,246,446.1195696729.975424
0.9 7529.01848517,084.073298249,678,104.66656124,856,136.406589555.054813
0.95 11,636.6312724,950.966796374,329,321.09034187,189,611.5119613,314.33552
0.99 30,996.1422655,050.720255643,890,093.87870322,000,097.6596024,054.57799
0.999 123,576.386684,874.2677551,367,037,379.7192683,603,564.12738−38,702.11886
CVM 0.6 2770.9980137453.23038392,883,930.9489646,449,418.7048634682.232369
0.7 3483.9132338903.322882115,420,157.121757,718,981.8837495419.409649
0.8 4670.53188511,346.47232155,165,515.490177,594,104.2174136675.940442
0.9 7406.18351016,897.92006248,097,708.6971124,065,752.268629491.736551
0.95 11,470.7819424,724.77277372,360,756.4232186,205,102.9844013,253.990839
0.99 30,692.3594454,850.39454630,810,155.0580315,459,927.9235524,158.035102
0.999 123,130.659084,870.024981,379,680,759.512689,925,249.78127−38,260.63403
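The columns of Table 13 (and of Table 15 below) are tied together by simple identities, assuming the standard definitions of the indicators and a tail mean-variance weight of 1/2, with which the tabulated values are consistent: MEL(X; q) = TVARK(X; q) − VARK(X; q) and TMV(X; q) = TVARK(X; q) + 0.5 · TV(X; q). For instance, in the MLE row at q = 0.6, 7534.159674 − 2602.272196 = 4931.887478 recovers the reported MEL, and 7534.159674 + 121,153,998.83755/2 ≈ 60,584,533.578 recovers the reported TMV.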
Table 14. The estimated parameters for the EGRRP under insurance-claims data.
Techniques | α̂ | β̂ | θ̂ | λ̂
MLE | 0.78171 | 33.01598 | 79.08656 | −3.78474
Bayesian | 0.79412 | 33.01607 | 79.08662 | −3.78437
Bootstrap | 0.83451 | 60.04148 | 80.82928 | −3.31509
CVM | 0.83077 | 42.24841 | 100.49062 | −3.05291
Table 15. The results of the KRIs for the IR under insurance-claims data.
q | VARK(X; θ̂) | TVARK(X; θ̂) | TV(X; θ̂) | TMV(X; θ̂) | MEL(X; θ̂)
MLE 0.6 1628.2349663547.12252676,135,623.63070938,071,358.937881918.887559
0.7 1948.5749414136.781558100,120,217.3989650,064,245.481042188.206617
0.8 2463.5492215114.338228147,302,529.0797273,656,378.878082650.789007
0.9 3585.2089367297.644053284,971,253.77263142,492,924.53033712.435117
0.95 5,138.34330810,364.98915550,930,629.98247275,475,679.98035226.645843
0.99 11,608.15307823,255.233812,535,785,437.55151,267,915,974.00911,647.08073
0.999 36,791.2713273,594.8135422,237,318,310.63111,118,732,750.1236,803.54223
Bayesian 0.6 1628.2474093547.14963291,723,332.275395045,865,213.2873301918.902223
0.7 1948.5898324136.813171120,902,095.19890960,455,184.4126252188.223339
0.8 2463.5680475114.377311178,487,896.63326689,249,062.6939442650.809265
0.9 3585.2363347297.699821347,361,234.231741173,687,914.815693712.463487
0.95 5138.38257510,365.068359675,631,280.252160337,826,005.194435226.685784
0.99 11,608.24178623,255.411533,159,263,800.179681,579,655,155.501311,647.16974
0.999 36,791.55247573,595.37593428,472,994,153.306014,236,570,672.02836,803.82345
Bootstrap 0.6 1692.4968623687.11775498,832,130.627645049,419,752.4315761994.620892
0.7 2025.4797624300.048794130,269,755.14028865,139,177.6189382274.569032
0.8 2560.7786415316.186898192,293,925.06067896,152,278.7172372755.408257
0.9 3726.7071397585.661722374,182,772.362154187,098,971.842793858.954583
0.95 5341.13939510,774.066441727,822,354.386544363,921,951.259715432.927046
0.99 12,066.29452224,173.053073,402,703,365.706891,701,375,855.906512,106.758548
0.999 38,243.32024376,499.39559130,654,515,544.679215,327,334,271.73538,256.075348
CVM 0.6 2635.2297555740.868751190,889,214.5058995,450,348.1217003105.638997
0.7 3153.6865206695.206821250,869,428.56578125,441,409.489713541.520301
0.8 3987.1507148277.341152368,764,811.87516184,390,683.278734290.190438
0.9 5802.50974911,810.92971712,295,720.907023,561,59671.383226008.419968
0.95 8316.19235316,775.298731,374,796,679.7789687,415,115.188208459.106384
0.99 18,787.3071237,637.616906,300,638,164.11973,150,356,719.676718,850.309781
0.999 59,545.12395119,110.108254,832,802,342.86427,416,520,281.54059,564.984251
Table 16. The estimated parameters for the IR under insurance-claims data.
Techniques | θ̂
MLE | 1163.73317
Bayesian | 1163.74207
Bootstrap | 1209.66248
CVM | 1883.45315
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
