Article

Comparison of the Meta-Heuristic Algorithms for Maximum Likelihood Estimation of the Exponentially Modified Logistic Distribution

Department of Statistics, University of Ondokuz Mayis, Samsun 55139, Turkey
*
Author to whom correspondence should be addressed.
Symmetry 2024, 16(3), 259; https://doi.org/10.3390/sym16030259
Submission received: 29 December 2023 / Revised: 5 February 2024 / Accepted: 13 February 2024 / Published: 20 February 2024

Abstract

Generalized distributions have received considerable attention in recent years because of their flexibility and reliability in modeling lifetime data. The two-parameter Exponentially Modified Logistic distribution is a flexible modified distribution introduced in 2018. It is regarded as a strong competitor to widely used classical symmetrical and non-symmetrical distributions such as the normal, logistic, lognormal, and log-logistic distributions. In this study, the unknown parameters of the Exponentially Modified Logistic distribution are estimated using the maximum likelihood method. Five meta-heuristic algorithms, namely the genetic algorithm, particle swarm optimization, grey wolf optimization, the whale optimization algorithm, and the sine cosine algorithm, are applied to solve the nonlinear likelihood equations of the study model. The efficiencies of the maximum likelihood estimates produced by these algorithms are compared via an extensive Monte Carlo simulation study. According to the simulation findings, the maximum likelihood estimates of the location and scale parameters obtained with the genetic algorithm and grey wolf optimization are the most efficient. However, the genetic algorithm is twice as fast as grey wolf optimization and can therefore be considered preferable under the computation time criterion. Six real datasets are analyzed to show the flexibility of this distribution.

1. Introduction

The two-parameter exponentially modified logistic (EMLOG) distribution is one of the recently proposed generalized distributions. It is more flexible than similar distributions for fitting data in many scientific fields, such as biological and psychological evolution, energy resource prediction, and technological and economic diffusion. In practice, it can be a better alternative to symmetrical distributions such as the logistic and normal distributions, as well as to many non-symmetrical statistical distributions, especially when mild skewness is present. The distribution is therefore valuable for applied studies in terms of data fit, and estimating its unknown parameters contributes significantly to the literature.
The EMLOG distribution combines the logistic and exponential distributions into a new, more reliable distribution called the two-parameter exponentially modified logistic distribution. Reyes et al. first presented this distribution in 2018 [1]. The logistic distribution is similar to the normal distribution; both are members of the location-scale family, but the logistic distribution has heavier tails. The logistic distribution is used in many scientific areas, such as physical science and finance, and in many applications in reliability and survival analysis; its distribution function is also central to logistic regression, logit models, and neural networks. More properties and details of the logistic distribution are available in [2]. The exponential distribution, which models the time until the occurrence of a specific event, was long the basis of reliability and life-expectancy evaluation for lifetime data. Further research in reliability theory revealed, however, that the exponential model is only useful as a first approximation and is insufficient for many problems. More details about the exponential distribution can be found in [3].
In the last two decades, many generalizations and modified extensions of the exponential distribution have been proposed to model real-world lifetime data more flexibly, especially in situations where the characteristics of classical distributions are limited and they cannot provide a good fit [4]. For instance, the exponentiated exponential (EE) distribution, the first extension of the exponential distribution family, was introduced by Gupta and Kundu in 1998 [5]. The exponentiated Weibull (EW) distribution was presented in 2006 by Pal et al. [6], the exponentiated Gamma (EG) distribution was generalized in 2007 by Nadarajah and Gupta [7], another extension was proposed in 2011 by Nadarajah and Haghighi [8], and the two-parameter exponentiated log-logistic (ELL) distribution was introduced in 2019 by Chaudhary [9]. Many other well-known distributions have been extended and modified in the same spirit.
In general, there are many statistical methodologies for estimating the parameters of a distribution, such as the maximum likelihood (ML) method, the method of moments (MOM), the least squares (LS) method, the Bayes method, and so forth. ML is the most widely used among them because of its high performance and the well-known asymptotic properties of its estimators, such as consistency and efficiency [10]. The basic principle of the ML methodology is to find the parameter values that maximize the likelihood function of the model; in most cases, however, an explicit solution is unavailable because of the presence of nonlinear functions, so iterative algorithms to maximize the likelihood function are needed [11].
The primary goal of this research is to choose the best algorithm for computing the maximum likelihood estimates of the location α and scale β parameters of the EMLOG distribution and to show the applicability of this distribution in many areas; this is also the primary contribution of this study. For the ML estimation of the EMLOG distribution, explicit solutions to the likelihood equations are not available, and this is the main problem addressed by this study. To solve it, ML estimates are obtained through iterative numerical techniques based on traditional or non-traditional algorithms.
The Newton method is a common traditional iterative technique for solving the equation system generated by the partial derivatives of the likelihood function to find the estimates of the parameters of interest. Its major drawback is that it is a gradient-based search relying on the inverse of the Hessian matrix, which makes it applicable only to functions that are at least twice differentiable. Within the same category, traditional direct-search techniques that require no gradient information of the likelihood function, such as the Nelder-Mead (NM) algorithm, avoid this limitation. However, all traditional numerical algorithms start from a randomly selected initial guess and move toward the optimum iteratively, with no guarantee that the final solution is globally optimal. In addition, gradient-based methods cannot be used for discontinuous functions, and the search may become stuck at local optima so that the global optimum is never reached. To avoid such drawbacks, population-based meta-heuristic algorithms are preferable and recommended for solving complex problems, especially when traditional algorithms fail; they offer strong global search capability together with more simplicity, flexibility, and a derivative-free mechanism [12].
In this study, five meta-heuristic algorithms are applied in the ML method for estimating the parameters of the EMLOG distribution: the genetic algorithm (GA), particle swarm optimization (PSO), grey wolf optimization (GWO), the whale optimization algorithm (WOA), and the sine cosine algorithm (SCA). GA and PSO are well-known algorithms used in many fields [13,14], while GWO, WOA, and SCA, which are newer, are distinguished by their simplicity, flexibility, ease of implementation, and smaller number of required parameters [15,16,17]. These algorithms have demonstrated their reliability in solving real optimization problems with nonlinear objective functions and their high efficiency in estimating the parameters of many models and distributions, and they have been implemented in the ML method in various studies. For instance, GA was applied to find the ML estimators of the skew normal distribution parameters and showed the best performance when compared with other numerical algorithms [18]. GA was also used to find the ML estimators of the Weibull distribution and of various regression model parameters [19,20], employed in a first stage as a starting point for obtaining final posterior ML estimates in logit models [21], and used to estimate the parameters of a cosmological model by maximizing its likelihood function [22]. Other studies [23,24,25,26] considered the PSO algorithm for estimating the parameters of the Weibull distribution, the logistic model, the three-parameter Gamma distribution, the Nakagami distribution, and a mixture of two Weibull distributions. PSO has also been employed in statistical population reconstruction using ML estimation, where it showed the best performance compared with alternative numerical algorithms [27]. GWO has been applied to estimate the three parameters of the Marshall-Olkin Topp-Leone exponential distribution [28].
The ML estimation method is also used for solving localization problems; to improve localization accuracy at high noise levels, a slightly improved GWO algorithm was implemented to enhance the results [29]. In [30], WOA was implemented for estimating the parameters of the log-logistic distribution and showed better performance than another numerical algorithm. Furthermore, WOA has been used within the ML method to obtain estimates for the parameters of distributions such as the Weibull, Rayleigh, and Gamma when modeling wind speed data [31]. Wang et al. applied three meta-heuristic algorithms, including GWO and PSO, and four numerical methods for estimating the parameters of the Weibull distribution [32]; the experimental results showed that GWO gave the most accurate and efficient estimates. Wadi [33] estimated the parameters of five statistical distributions, namely the Rayleigh, Weibull, Gamma, Burr Type XII, and generalized extreme value distributions, using the ML method based on two meta-heuristic algorithms, GWO and WOA. The results showed that the Gamma distribution based on GWO and WOA outperformed the other distributions in modeling wind speed data and that GWO was more robust and faster than WOA. Furthermore, Wadi and Elmasry [34] used GWO, WOA, and three other meta-heuristic algorithms for estimating the parameters of the Rayleigh, Weibull, inverse Gaussian, Burr Type XII, and generalized Pareto distributions to describe different wind speed data; according to the performance criteria, the Weibull distribution based on GWO and WOA had the best goodness of fit. Al-Mhairat and Al-Quraan [35] estimated the parameters of the Weibull, Rayleigh, and Gamma distributions using the ML method based on three meta-heuristic algorithms, PSO, GWO, and WOA, applied to wind speed data.
SCA has been used for solving many optimization problems in various scientific areas; furthermore, it has been successfully applied to a range of problems involving the estimation of the parameters of several models and signals [36]. GWO and SCA were implemented in the ML estimation method for optimizing the likelihood function when estimating the signal angle for a uniform linear array; the simulation results show that GWO performs better than SCA [37].
The originality of this paper lies in employing and examining five meta-heuristic optimization algorithms (GA, PSO, GWO, WOA, and SCA) within the ML method to obtain estimates of the location and scale parameters of the two-parameter EMLOG distribution, and in comparing and evaluating their performances through an extensive Monte Carlo simulation study. To the best of our knowledge, this is the first study to obtain the ML estimators of the location α and scale β parameters of the EMLOG distribution using various meta-heuristic algorithms.
The rest of the article is organized as follows: Section 2 presents the EMLOG distribution and its basic properties, along with the ML estimation methodology based on the meta-heuristic algorithms used in this study. Section 3 compares the efficiencies of the parameter estimators via a comprehensive Monte Carlo simulation study. Section 4 presents six applications to real datasets. The last section ends the study with some conclusions.

2. Materials and Methods

2.1. Two-Parameter Exponentially-Modified Logistic Distribution

The EMLOG distribution is generated by combining a logistic distribution with location parameter α and scale parameter β with an exponential distribution having the same scale parameter. The resulting two-parameter exponentially modified logistic distribution has a left tail influenced by the exponential component and a right tail influenced by the logistic component.
If X is a random variable with location parameter α and scale parameter β that follows an EMLOG distribution, X ∼ EMLOG(α, β), then the probability density function (pdf) of X is:
$$f(x;\alpha,\beta)=\frac{1}{\beta}\left(e^{\frac{x-\alpha}{\beta}}+1\right)^{-1}\left[\left(e^{-\frac{x-\alpha}{\beta}}+1\right)\log\left(e^{\frac{x-\alpha}{\beta}}+1\right)-1\right],\qquad x\in\mathbb{R},\ \alpha\in\mathbb{R},\ \beta>0$$
where log(·) denotes the natural logarithm. The cumulative distribution function (cdf) of X is:
$$F(x;\alpha,\beta)=1-e^{-\frac{x-\alpha}{\beta}}\log\left(e^{\frac{x-\alpha}{\beta}}+1\right)$$
Plots of the EMLOG distribution for selected values of α and β are illustrated in Figure 1. The general formula for the kth moment (μk) of the EMLOG distribution about the location parameter α (for k = 1, 2, 3, ⋯) is:
$$E\left[(X-\alpha)^{k}\right]=\sum_{i=0}^{k}\binom{k}{i}\left|2^{i}-2\right|\pi^{i}\left|B_{i}\right|\beta^{k}\,\Gamma(k-i+1)$$
where Bi and Γ(.) refer to Bernoulli numbers and the Gamma function, respectively. The first four values of Bi and Γ(.) that are useful for calculating the mean E(X), variance Var(X), skewness (γ1), and kurtosis (γ2) are given in Table 1 below.
The mean, variance, skewness (γ1), and kurtosis (γ2) values of the random variable X are computed using Equation (3); see [1].
$$E(X)=\alpha+\beta$$
$$\mathrm{Var}(X)=\left(1+\frac{\pi^{2}}{3}\right)\beta^{2}$$
$$\gamma_{1}=\frac{2}{\left(1+\frac{\pi^{2}}{3}\right)^{3/2}}=0.2251$$
$$\gamma_{2}=\frac{\frac{7\pi^{4}}{15}+2\pi^{2}+9}{\left(1+\frac{\pi^{2}}{3}\right)^{2}}=4.0318$$
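These properties can be checked numerically. The sketch below (function names are ours, not from the paper) implements the pdf and cdf given above in an overflow-safe way and verifies the stated mean and variance by numerical integration:

```python
import numpy as np

def emlog_pdf(x, alpha, beta):
    # pdf of EMLOG(alpha, beta); log(e^z + 1) is computed with logaddexp
    # so that it stays finite for large |z|
    z = (x - alpha) / beta
    log1pez = np.logaddexp(0.0, z)                 # log(e^z + 1)
    return (np.exp(-z) * log1pez - np.exp(-log1pez)) / beta

def emlog_cdf(x, alpha, beta):
    # cdf of EMLOG(alpha, beta): F = 1 - e^{-z} log(e^z + 1)
    z = (x - alpha) / beta
    return 1.0 - np.exp(-z) * np.logaddexp(0.0, z)

# numerical check of the moment expressions for one parameter setting
alpha, beta = 2.0, 1.5
x = np.linspace(alpha - 30 * beta, alpha + 60 * beta, 200001)
dx = x[1] - x[0]
f = emlog_pdf(x, alpha, beta)
total = f.sum() * dx                                # ~1
mean = (x * f).sum() * dx                           # ~alpha + beta
var = ((x - mean) ** 2 * f).sum() * dx              # ~(1 + pi^2/3) * beta^2
```

Within numerical tolerance, `total`, `mean`, and `var` reproduce 1, α + β, and (1 + π²/3)β², in agreement with the expressions above.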

2.2. Maximum Likelihood Estimation

The ML estimates of the parameters of interest are the values in the parameter space that maximize the likelihood function; in most cases, the logarithm of the likelihood function is used for computational simplicity. The log-likelihood (ln L) function used for estimating the unknown parameters α and β of the EMLOG distribution is given below.
$$\ln L(\alpha,\beta)=-n\log\beta-\sum_{i=1}^{n}\log\left(e^{z_{i}}+1\right)+\sum_{i=1}^{n}\log\left[\left(e^{-z_{i}}+1\right)\log\left(e^{z_{i}}+1\right)-1\right]$$
where zi = (xi − α)/β. To estimate the parameters of the EMLOG distribution, the partial derivatives of the ln L function with respect to the parameters of interest are taken and equated to zero. The likelihood equations are shown below.
$$\frac{\partial\ln L(\alpha,\beta)}{\partial\alpha}=\frac{1}{\beta}\sum_{i=1}^{n}\frac{e^{z_{i}}}{e^{z_{i}}+1}-\frac{1}{\beta}\sum_{i=1}^{n}\frac{1-e^{-z_{i}}\log\left(e^{z_{i}}+1\right)}{\left(e^{-z_{i}}+1\right)\log\left(e^{z_{i}}+1\right)-1}=0$$
and
$$\frac{\partial\ln L(\alpha,\beta)}{\partial\beta}=-\frac{n}{\beta}+\frac{1}{\beta^{2}}\sum_{i=1}^{n}\left(x_{i}-\alpha\right)\frac{e^{z_{i}}}{e^{z_{i}}+1}-\frac{1}{\beta^{2}}\sum_{i=1}^{n}\left(x_{i}-\alpha\right)\frac{1-e^{-z_{i}}\log\left(e^{z_{i}}+1\right)}{\left(e^{-z_{i}}+1\right)\log\left(e^{z_{i}}+1\right)-1}=0$$
As Equations (9) and (10) show, the likelihood equations are nonlinear, and an explicit solution cannot be obtained. Therefore, iterative algorithms are needed to solve these equations and obtain the ML estimates of the location and scale parameters. In this study, five effective and powerful meta-heuristic algorithms, GA, PSO, GWO, WOA, and SCA, are considered as the numerical techniques for obtaining the ML estimates for the EMLOG distribution; they are briefly introduced in the following subsections.
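In practice, the meta-heuristic algorithms do not solve Equations (9) and (10) directly: they only evaluate ln L itself as a fitness function. A minimal sketch of that objective (our naming; the overflow guards are implementation choices, not part of the paper):

```python
import numpy as np

def emlog_neg_log_lik(params, x):
    # negative log-likelihood of an EMLOG(alpha, beta) sample x;
    # minimizing this maximizes ln L(alpha, beta)
    alpha, beta = params
    if beta <= 0.0:
        return np.inf                               # outside the parameter space
    z = (x - alpha) / beta
    log1pez = np.logaddexp(0.0, z)                  # log(e^z + 1), overflow-safe
    inner = (np.exp(-z) + 1.0) * log1pez - 1.0
    if np.any(inner <= 0.0):                        # guard against underflow
        return np.inf
    return x.size * np.log(beta) + np.sum(log1pez) - np.sum(np.log(inner))
```

Since the EMLOG variable arises from combining a logistic and an exponential component, test data can be simulated as the sum of a Logistic(α, β) draw and an independent exponential draw with mean β; that construction reproduces the mean α + β and variance (1 + π²/3)β² stated above.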

2.2.1. Genetic Algorithm

The GA is an evolutionary meta-heuristic search algorithm for optimization problems that follows a procedure of natural selection imitating Darwin’s theory of biological evolution. It was introduced by John Holland [13] and significantly enhanced by Goldberg [38]. In GA, each candidate solution is represented by a chromosome, and a set of chromosomes forms a population. The fitness value of every chromosome is evaluated by the main objective function of this study, which is the log-likelihood function with a negative sign. The chromosomes with the highest fitness values, called elite chromosomes, are selected and transmitted to the new generation without any alteration; their number is set by the predefined elite number parameter (EN). Since GA is motivated by natural selection and genetic mechanisms, it includes the genetic operators crossover and mutation. Crossover operators produce new chromosomes that retain good features from the previous generation, so that parents pass segments of their own chromosomes to their offspring; in this way, the potential of the current desirable chromosomes is exploited. Mutation operators explore new solutions and maintain diversity in order to prevent the search from being trapped in a local optimum; mutation is usually applied with a very low probability so that good chromosomes obtained from crossover are not lost. In accordance with the literature, the crossover probability (CP) and mutation probability (MP) are assumed to be 0.8 and 0.01, respectively. Figure 2 depicts a flow diagram of the GA.
The solution values are called the GA estimates of the parameters. The GA algorithm steps are given in Algorithm 1.
Algorithm 1. GA Algorithm
Input:
   Population Size (n), Maximum Number of Iteration ( T m a x ), Search Space, Mutation
    Probability (MP), Crossover Probability (CP), and Elite Number (EN)
Output:
   Global best solution, gbest
Begin
   Generate initial population of n chromosomes X i   ( i = 1 ,   2 ,     ,   n )
   Calculate the fitness value of each X i
   Set iteration counter t = 0
   while (t < T m a x ) do
     Assign elite chromosomes to be directly transmitted to the new generation
     Select a pair of chromosomes from the current population, apart from the elite
      chromosomes, based on the fitness function
     Apply the crossover operation to the selected pair with crossover probability
     Apply mutation to the offspring with mutation probability
     Replace the old population with a newly generated population
     Increment the current iteration t by 1
   end while
   return gbest
end
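The steps of Algorithm 1 can be sketched as a real-coded GA. The operator choices below (tournament selection, arithmetic crossover, per-gene Gaussian mutation) are common defaults and are our assumptions; the text fixes only the crossover and mutation probabilities (0.8 and 0.01, in line with the literature) and elitism. A toy quadratic stands in for the negative log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(42)

def fitness(p):
    # toy stand-in for the negative log-likelihood; minimum at (3, -1)
    return (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2

def ga(fitness, lower, upper, n=50, t_max=200, cp=0.8, mp=0.01, en=2):
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size
    pop = lower + rng.random((n, dim)) * (upper - lower)   # random initial population
    for _ in range(t_max):
        f = np.array([fitness(ind) for ind in pop])
        order = np.argsort(f)
        new_pop = [pop[i].copy() for i in order[:en]]      # elitism: EN best pass unchanged
        while len(new_pop) < n:
            # tournament selection of two parents
            a, b = rng.integers(0, n, 2)
            p1 = pop[a] if f[a] < f[b] else pop[b]
            a, b = rng.integers(0, n, 2)
            p2 = pop[a] if f[a] < f[b] else pop[b]
            c1, c2 = p1.copy(), p2.copy()
            if rng.random() < cp:                          # arithmetic crossover
                w = rng.random()
                c1, c2 = w * p1 + (1 - w) * p2, w * p2 + (1 - w) * p1
            for c in (c1, c2):                             # per-gene Gaussian mutation
                mask = rng.random(dim) < mp
                c[mask] += rng.normal(0.0, 0.1 * (upper - lower)[mask])
                new_pop.append(np.clip(c, lower, upper))
        pop = np.array(new_pop[:n])
    f = np.array([fitness(ind) for ind in pop])
    return pop[np.argmin(f)]

best = ga(fitness, [-10.0, -10.0], [10.0, 10.0])
```

With the seeded generator the run is reproducible, and the returned `best` lies close to the minimizer (3, −1).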

2.2.2. Particle Swarm Optimization

The PSO is one of the best-known population-based meta-heuristic algorithms relying on swarm intelligence. It was proposed by Eberhart and Kennedy in 1995 [14]. PSO simulates the continuous movements of particles in a swarm within a specific search area, mimicking the movement behavior of bird flocks in nature through certain update formulas until the optimal solution is finally reached. It can be used to solve various constrained or unconstrained optimization problems, multi-objective optimization, nonlinear programming, probabilistic programming, and combinatorial optimization problems. The PSO algorithm has fixed parameters: c1 and c2 are acceleration coefficients, r1 and r2 are random numbers uniformly distributed between 0 and 1, and ω is the inertia weight. The fitness value of each generated solution (particle) in the population is evaluated at its position. The best fitness value attained by each particle is compared with its own history and saved as the personal best (pbest) value. At the same time, the best fitness value over the entire swarm is compared with the previous global best and saved as the gbest value. Each particle’s position and velocity are updated in every iteration by the following equations:
$$V_{i}(t+1)=\omega V_{i}(t)+c_{1}r_{1}\left[pbest_{i}(t)-X_{i}(t)\right]+c_{2}r_{2}\left[gbest(t)-X_{i}(t)\right]$$
$$X_{i}(t+1)=X_{i}(t)+V_{i}(t+1)$$
The solution values are called the PSO estimates of the parameters. The PSO algorithm steps are given in Algorithm 2.
Algorithm 2. PSO Algorithm
Input:
   Population Size (n), Maximum Number of Iteration ( T m a x ), Search Space
Output:
   Global best solution, gbest
Begin
   Generate the initial position and velocity of n particles X i   ( i = 1 ,   2 ,     ,   n )
   Calculate the fitness value of each Xi
   Set iteration counter t = 0
   while (t < T m a x ) do
     for each X i
        If the fitness value of X i is better than its pbest in history then
          set the current position as the new pbest
        end if
     end for
     choose the particle with the best fitness value among all particles as the gbest
     for each X i
       update the particle velocity according to Equation (11)
       update the particle position according to Equation (12)
     end for
     Increment the current iteration t by 1
   end while
   return gbest
end
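Algorithm 2 and Equations (11) and (12) translate almost line for line into code. The inertia and acceleration values below are common defaults, not taken from the paper, and a toy quadratic stands in for the negative log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(7)

def pso(fitness, lower, upper, n=30, t_max=200, w=0.7, c1=1.5, c2=1.5):
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size
    x = lower + rng.random((n, dim)) * (upper - lower)     # initial positions
    v = np.zeros((n, dim))                                 # initial velocities
    pbest = x.copy()
    pbest_f = np.array([fitness(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    gbest_f = pbest_f.min()
    for _ in range(t_max):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # Equation (11)
        x = np.clip(x + v, lower, upper)                            # Equation (12)
        f = np.array([fitness(p) for p in x])
        better = f < pbest_f                               # update personal bests
        pbest[better], pbest_f[better] = x[better], f[better]
        if pbest_f.min() < gbest_f:                        # update the global best
            gbest_f = pbest_f.min()
            gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, gbest_f

gbest, gbest_f = pso(lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2,
                     [-10.0, -10.0], [10.0, 10.0])
```

On this two-dimensional test problem the swarm settles near the minimizer (3, −1) well before the iteration budget is exhausted.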

2.2.3. Grey Wolf Optimization

The GWO is a swarm-based meta-heuristic algorithm modeled on the leadership hierarchy of grey wolves and their behavior of trapping and hunting prey in nature. It was first proposed by Mirjalili et al. in 2014 [15]. GWO has become a widely known tool in swarm intelligence for optimization in almost all areas, such as engineering, physics, and many other scientific fields. It is a straightforward population-based probabilistic algorithm motivated by the hunting and socialization of grey wolves. According to the swarm intelligence categorization, it is classified as the only algorithm for solving continuous real-life optimization problems that relies on a leadership hierarchy [39]. For the hunting process, the grey wolves in a pack are categorized into four types that compose hierarchical commands: alpha (α), beta (β), delta (δ), and omega (ω), in that order. Figure 3 shows this hierarchy.
At the first level of the hierarchy, the alpha (α) is the dominant grey wolf that makes decisions and gives orders to the other wolves in the pack. The beta (β) helps the alpha in making decisions and observes the movements of the other wolves at the next level of the hierarchical chain; a beta replaces the alpha when the alpha dies or grows old. The delta (δ) and omega (ω) are the third and fourth types of grey wolves, respectively. The delta dominates the omega, and together they occupy the lowest levels of the hierarchy; they are allowed to eat only after the alpha and beta have finished eating. Figure 4 depicts a flow diagram of the GWO.
The GWO algorithm consists of three main phases, which are: (1) searching; (2) encircling; and (3) hunting the prey. The solution starts by initializing wolves’ positions randomly within the search space. Generally, the formula in Equation (13), given below, is recommended for initializing diverse solutions within the search space in all meta-heuristic algorithms.
$$x=L+rand\times(U-L)$$
where L and U are the lower and upper limits of the search space, and rand is a random number between 0 and 1. The main GWO algorithm parameters for the mathematical modelling are assigned as follows:
  • Control parameter (a), an important parameter that declines linearly from 2 to 0 over the iterations. It can be determined using the formula:
$$a=2\left(1-\frac{t}{T_{max}}\right)$$
where t and Tmax denote the current iteration and the total number of iterations, respectively.
  • The coefficient vectors, A and C, can be found using the following formulas:
$$A=2a\cdot r_{1}-a$$
$$C=2\cdot r_{2}$$
where r1 and r2 are random vectors with components in [0, 1]. The fitness value of each wolf is given by the objective function, represented by the ln L function in this study, and this fitness value refers to each wolf’s site in the pack. The highest fitness value is considered the best position and is assigned to the alpha (α) wolf. The second and third highest fitness values are assigned to the beta (β) and delta (δ) wolves, respectively. The position of each wolf in the pack surrounding the prey is updated by calculating the distance between the current location (denoted by D) and the next location using the equations below.
$$D=\left|C\cdot X_{p}(t)-X(t)\right|$$
$$X(t+1)=X_{p}(t)-A\cdot D$$
where X(t) is the current position vector at iteration t and Xp(t) is the position vector of the best (optimal) solution at iteration t. The average of the first three best solutions, corresponding to the alpha (α), beta (β), and delta (δ) wolves, is calculated because they occupy the best positions in the population. Furthermore, they have the best knowledge of the prey’s potential location, which obliges all the other wolves, including the omegas (ω), to move toward the best position determined by the following equations:
$$X(t+1)=\frac{X_{1}(t)+X_{2}(t)+X_{3}(t)}{3}$$
where
$$X_{1}(t)=X_{\alpha}(t)-A_{1}\cdot D_{\alpha},\qquad X_{2}(t)=X_{\beta}(t)-A_{2}\cdot D_{\beta},\qquad X_{3}(t)=X_{\delta}(t)-A_{3}\cdot D_{\delta}$$
and
$$D_{\alpha}=\left|C_{1}\cdot X_{\alpha}(t)-X(t)\right|,\qquad D_{\beta}=\left|C_{2}\cdot X_{\beta}(t)-X(t)\right|,\qquad D_{\delta}=\left|C_{3}\cdot X_{\delta}(t)-X(t)\right|$$
The fitness value of the new position is calculated, and the alpha (α), beta (β), and delta (δ) wolves are updated accordingly. The attack happens according to the changing value of the coefficient vector A, which depends on the parameter (a) decreasing from 2 to 0 over the iterations; A is therefore a value generated randomly within the interval [−2a, 2a]. If |A| < 1, the wolves are ready to attack the prey. Otherwise, the wolves are forced to diverge and explore a better location. The solution values are called the GWO estimates of the parameters. The GWO algorithm steps are given in Algorithm 3.
Algorithm 3. GWO Algorithm
Input:
   Population Size (n), Maximum Number of Iteration ( T m a x ), Search Space.
Output:
    X α
Begin
   Generate initial population of n wolves X i   ( i = 1 ,   2 ,     ,   n )
   Initialize a, A, and C
   Calculate the fitness value of each X i
    X α = the best wolf
    X β = the second best wolf
    X δ = the third best wolf
   Set iteration counter t = 0
   while (t < T m a x ) do
     for each X i do
       Update the position of the current wolf by Equations (19)–(21)
     end for
     Update a, A, and C
     Calculate the fitness of all wolves
     Update X α , X β and X δ
     Increment the current iteration t by 1
    end while
    return X α
end
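Algorithm 3, together with Equations (13)–(21), can be sketched as follows; drawing a fresh random A and C for each leader follows the standard GWO description, and a toy quadratic stands in for the negative log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(3)

def gwo(fitness, lower, upper, n=30, t_max=200):
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size
    wolves = lower + rng.random((n, dim)) * (upper - lower)    # Equation (13)
    for t in range(t_max):
        f = np.array([fitness(w) for w in wolves])
        order = np.argsort(f)
        x_alpha, x_beta, x_delta = wolves[order[:3]]           # three best wolves
        a = 2.0 * (1.0 - t / t_max)                            # Equation (14)
        new = np.empty_like(wolves)
        for i, x in enumerate(wolves):
            cand = []
            for leader in (x_alpha, x_beta, x_delta):
                A = 2.0 * a * rng.random(dim) - a              # Equation (15)
                C = 2.0 * rng.random(dim)                      # Equation (16)
                D = np.abs(C * leader - x)                     # distance to the leader
                cand.append(leader - A * D)
            new[i] = np.clip(np.mean(cand, axis=0), lower, upper)  # average of X1, X2, X3
        wolves = new
    f = np.array([fitness(w) for w in wolves])
    return wolves[np.argmin(f)], f.min()

best, best_f = gwo(lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2,
                   [-10.0, -10.0], [10.0, 10.0])
```

As a shrinks, |A| < 1 becomes more likely and the pack contracts around the three leaders, which is the attack phase described above.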

2.2.4. Whale Optimization Algorithm

The WOA is another swarm-based meta-heuristic method, proposed in 2016 by Mirjalili and Lewis for continuous optimization problems [16]. The algorithm is inspired by the hunting behavior of the humpback whale, which applies a hunting strategy called the bubble-net feeding technique: the whale creates bubbles along a circle around the prey, then slowly shrinks the circle, encircling and approaching the prey in a spiral shape through a random search around each search agent’s location until the hunt is complete.
The algorithm has three main phases: (1) encircling the prey; (2) attacking by the bubble-net method, which includes the shrinking-encircling and spiral position-updating mechanisms; and (3) searching for prey. The positions of the whale population are randomly initialized in the search space using Equation (13) for the first iteration. The WOA parameters (a), A, and C, which are the same as the GWO parameters calculated by Equations (14), (15), and (16), respectively, are also generated, together with the parameter (b), a fixed value that defines the shape of the logarithmic spiral, and (l), a number drawn at random from the interval [−1, 1]. Finally, the probability parameter (p) is set to 0.5 to give an equal chance of simulating both the shrinking-encircling and spiral movements of the whales. Each whale’s fitness value is evaluated by the objective function, the ln L function in this study, and the best whale position in the initialized population is found and saved. If p < 0.5 and |A| < 1, the current whale’s location is updated using Equations (17) and (18), as in GWO. Otherwise, if |A| ≥ 1, one of the whales is chosen at random, and the position is updated using the following formulas:
$$D=\left|C\cdot X_{rand}(t)-X(t)\right|$$
$$X(t+1)=X_{rand}(t)-A\cdot D$$
where Xrand is the position vector of a whale chosen at random from the current whale population. If p ≥ 0.5, the current whale’s location is updated by the following formulas:
$$D'=\left|X_{p}(t)-X(t)\right|$$
$$X(t+1)=D'\,e^{bl}\cos(2\pi l)+X_{p}(t)$$
where D′ refers to the distance between the ith whale and the best solution (prey) obtained so far. The solution values are called the WOA estimates of the parameters. The WOA algorithm steps are given in Algorithm 4.
Algorithm 4. WOA Algorithm
Input:
    Population Size (n), Maximum Number of Iteration ( T m a x ), Search Space.
Output:
    X
Begin
    Generate initial population of n whales X i   ( i = 1 ,   2 ,     ,   n )
    Initialize a, A, and C
    Calculate the fitness value of each X i and determine the best whale X
    Set iteration counter t = 0
    while (t < T m a x ) do
      for each X i do
       If p < 0.5 then
         If |A| < 1 then update D by Equaiton (17) and X by Equaiton
          (18).
         else if |A| ≥ 1 then update D by Equaiton (22) and X by Equaiton (23).
         end if
       else if p ≥ 0.5 then the whale’s location is updated by Equations (24) and (25)
       end if
     end for
     Calculate the fitness of each whale and update X
     Update a, A, and C
     Increment the current iteration t by 1
    end while
    return X
end
It should be noted that if p < 0.5, the value of A is examined: if |A| < 1, Equations (17) and (18) are used in step 4; otherwise, if |A| ≥ 1, Equations (22) and (23) update the position of the current whale relative to a randomly chosen whale from the population (all other whales keep their positions unchanged). In step 5, if p ≥ 0.5, Equations (24) and (25) are used to update the position of the current whale.
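The WOA update rules above can be sketched as a single-iteration step. The following is an illustrative Python fragment, not the authors' MATLAB code: the function name `woa_update`, the per-component coefficient vectors, and the default spiral shape b = 1 are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def woa_update(X, X_best, t, T_max, b=1.0):
    """One WOA position update for the whole population (a sketch).

    X: (n, d) current whale positions; X_best: (d,) best solution so far.
    """
    n, d = X.shape
    a = 2.0 - 2.0 * t / T_max          # a decreases linearly from 2 to 0
    X_new = X.copy()
    for i in range(n):
        r = rng.random(d)
        A = 2.0 * a * r - a            # coefficient vector A
        C = 2.0 * rng.random(d)        # coefficient vector C
        p = rng.random()               # switch: encircling vs. spiral
        if p < 0.5:
            if np.all(np.abs(A) < 1):  # exploit: move toward the best whale
                D = np.abs(C * X_best - X[i])
                X_new[i] = X_best - A * D
            else:                      # explore: move toward a random whale
                X_rand = X[rng.integers(n)]
                D = np.abs(C * X_rand - X[i])
                X_new[i] = X_rand - A * D
        else:                          # spiral (bubble-net) update
            l = rng.uniform(-1, 1)
            D_prime = np.abs(X_best - X[i])
            X_new[i] = D_prime * np.exp(b * l) * np.cos(2 * np.pi * l) + X_best
    return X_new
```

In the full algorithm this step would be repeated for t = 0, …, T_max − 1, re-evaluating ln L and the best whale after each update.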

2.2.5. Sine Cosine Algorithm

The SCA is a population-based meta-heuristic technique proposed by Mirjalili in 2016, motivated by the trigonometric sine and cosine functions [17]. It has been utilized to solve a wide range of optimization problems in several areas by initializing, within the search space, a population of candidate solutions that are iteratively assessed against the objective function under the control of a set of optimization parameters. The algorithm keeps the better solution and continuously updates it until convergence is reached at the maximum number of iterations; this updated best position represents the best solution.
The two main phases of the SCA algorithm are (1) exploration (diversification), a global search, and (2) exploitation (intensification), a local search. The positions of the solution population are initialized randomly within the search space for the first iteration using Equation (13), as are the random parameters r1, r2, r3, and r4 of this algorithm, which are incorporated to strike a balance between the exploration and exploitation capabilities and thus avoid settling for local optima. The parameter r1 helps determine whether the movement direction of the next position is towards the best solution in the search space (r1 < 1) or outwards from it (r1 > 1). The r1 parameter falls linearly from a constant (a) to 0, as seen in Equation (26) below.
$$r_1 = a - t\,\frac{a}{T_{max}} \tag{26}$$
The parameter $r_2$, defined in the interval [0, 2π], helps determine how far the solution moves towards or away from the intended target. It is found by Equation (27) below.
$$r_2 = 2\pi \times rand(1, d), \quad d: \text{dimension} \tag{27}$$
The r3 parameter is a random weight used to emphasize (r3 > 1) or de-emphasize (r3 < 1) the effect of the intended target on the distance calculation. It can be found by Equation (28) below.
$$r_3 = 2 \times rand(1, d), \quad d: \text{dimension} \tag{28}$$
The final random parameter $r_4$, a random value defined in [0, 1], acts as a switch between the sine and cosine components. It can be found by Equation (29) below.
$$r_4 = rand(1, d), \quad d: \text{dimension} \tag{29}$$
The fitness value of each solution is evaluated by the objective function in this study; each fitness value corresponds to the position of a solution. The best (highest) value in the population is found and saved. The main parameters r1, r2, r3, and r4 are updated randomly, while the positions of all solutions are updated using Equation (30) below.
$$X_i^{t+1} = \begin{cases} X_i^t + r_1 \times \sin(r_2) \times \left| r_3 P_i^t - X_i^t \right|, & r_4 < 0.5 \\ X_i^t + r_1 \times \cos(r_2) \times \left| r_3 P_i^t - X_i^t \right|, & r_4 \ge 0.5 \end{cases} \tag{30}$$
where X i t denotes the position of the current solution in the ith dimension at the tth iteration and P i t denotes the position of the target destination point in the dimension. The solution values are called the SCA parameter estimates. The SCA algorithm steps are given in Algorithm 5.
Algorithm 5. SCA Algorithm
Input:
    Population Size (n), Maximum Number of Iterations ( T m a x ), Search Space.
Output:
    best solution, P
Begin
    Generate initial population of n solutions X i   ( i = 1 ,   2 ,     ,   n )
    Set iteration counter t = 0
    while (t < T m a x ) do
     for each solution do
      Calculate the fitness value of each X i
      Update best solution found thus far P
      Calculate r 1 , r 2 , r 3 , and r 4 by Equations (26)–(29), respectively.
      Update position of each solution using Equation (30)
     end for
     Increment the current iteration t by 1
   end while
   return best solution, P
end
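The SCA position update in Equations (26)–(30) lends itself to a vectorized form. The following Python fragment is an illustrative sketch (the function name `sca_update` and the default a = 2 are assumptions; the paper's MATLAB implementation may differ).

```python
import numpy as np

rng = np.random.default_rng(1)

def sca_update(X, P, t, T_max, a=2.0):
    """One SCA position update over the whole population (a sketch).

    X: (n, d) current solutions; P: (d,) best solution found so far.
    """
    n, d = X.shape
    r1 = a - t * (a / T_max)              # Eq. (26): linear decay from a to 0
    r2 = 2 * np.pi * rng.random((n, d))   # Eq. (27): angle in [0, 2*pi]
    r3 = 2 * rng.random((n, d))           # Eq. (28): random weight in [0, 2]
    r4 = rng.random((n, d))               # Eq. (29): sine/cosine switch
    step = np.abs(r3 * P - X)             # weighted distance to the target
    sine = X + r1 * np.sin(r2) * step
    cosine = X + r1 * np.cos(r2) * step
    return np.where(r4 < 0.5, sine, cosine)   # Eq. (30)
```

Iterating this step while re-evaluating the objective and updating P reproduces the loop of Algorithm 5.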

3. Simulation Results and Discussion

In this section, an extensive Monte Carlo (MC) simulation study is carried out to compare the efficiencies of the ML estimators of the model parameters for varying sample sizes, utilizing the meta-heuristic algorithms GA, PSO, GWO, WOA, and SCA. All computations for the simulation study are performed in Matlab R2021a. The main parameters for GA are taken as EN = 5, CP = 0.8, MP = 0.01, while for PSO, c1 = c2 = 1.49 and ω = max {0.1, 1.1}, in accordance with other studies in the literature. The GWO, WOA, and SCA are coded in accordance with Section 2. Each Monte Carlo simulation run is replicated 1000 times. The location parameter α and scale parameter β are taken as α = 0, 1, 2, 3 and β = 1, 2, respectively, for sample sizes n = 30, 50, 100, 150, and 200. The initial population size is N = 100. The search space (SS) for both the α and β parameters is [−20, 20]. Thus, 8 × 5 × 1000 = 40,000 different samples are generated. The resulting estimates for the location and scale parameters are denoted by α ^ and β ^ , respectively. To evaluate the estimators’ performance, the simulated mean, bias, variance, mean square error (MSE), deficiency (Def), and average computation time (CT) values given by Equations (31)–(36) below are used.
$$\mathrm{Mean}(\hat{\theta}) = \frac{1}{s}\sum_{i=1}^{s} \hat{\theta}_i \tag{31}$$
$$\mathrm{Bias}(\hat{\theta}) = E(\hat{\theta}) - \theta \tag{32}$$
$$\mathrm{Var}(\hat{\theta}) = \frac{1}{s-1}\sum_{i=1}^{s} \left( \hat{\theta}_i - \mathrm{Mean}(\hat{\theta}) \right)^2 \tag{33}$$
$$\mathrm{MSE}(\hat{\theta}) = \mathrm{Var}(\hat{\theta}) + \left[ \mathrm{Bias}(\hat{\theta}) \right]^2 \tag{34}$$
$$\mathrm{Def}(\hat{\alpha}, \hat{\beta}) = \mathrm{MSE}(\hat{\alpha}) + \mathrm{MSE}(\hat{\beta}) \tag{35}$$
$$\overline{CT}(\hat{\theta}) = \frac{1}{s}\sum_{i=1}^{s} CT(\hat{\theta}_i) \tag{36}$$
where θ = (α, β) ∈ R × R+ and s is the total number of MC simulation runs. The resulting simulated values of mean, bias, MSE, Def, and C T ¯ for α ^ and β ^ are given in Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8 and Table 9.
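The criteria in Equations (31)–(34) can be computed directly from the s replicate estimates. The helper below is an illustrative Python sketch, not the study's code; `mc_criteria` is a hypothetical name.

```python
import numpy as np

def mc_criteria(est, true_val):
    """Simulated mean, bias, variance, and MSE over s MC replicates."""
    est = np.asarray(est, dtype=float)
    s = est.size
    mean = est.sum() / s                       # Eq. (31)
    bias = mean - true_val                     # Eq. (32), E(theta_hat) ~ mean
    var = ((est - mean) ** 2).sum() / (s - 1)  # Eq. (33)
    mse = var + bias ** 2                      # Eq. (34)
    return mean, bias, var, mse

# Def (Eq. (35)) then combines the two parameter estimators:
#   Def = MSE(alpha_hat) + MSE(beta_hat)
```

Applying the function to the 1000 replicate estimates of each parameter, and summing the two MSEs, yields the Def values tabulated in the study.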
The simulated values show that GA and GWO provide the best results in comparison with the other algorithms. Based on the simulated bias results, the GA and GWO estimators of both parameters α and β give the smallest bias values for almost all sample sizes when the true value of β = 1. When β = 2, the smallest bias values in most cases belong to the PSO algorithm. WOA shows the lowest bias values (matching the GA and GWO values) in some cases and higher values in others, which indicates that it is not a stable algorithm for the EMLOG model. In all cases, SCA demonstrates the worst performance, with the largest bias values.
Concerning MSE values, the GA and GWO estimators clearly outperform the other algorithms for all n values. It can also be noticed from Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8 and Table 9 that the MSE values of α and β for the WOA estimators nearly match those of the GA and GWO estimators in particular cases, but not for all n values. Specifically, these cases fall into two categories. Firstly, when β = 1: (1) α = 0 and n = 30, 100; (2) α = 1 and n = 150; (3) α = 2 and n = 30, 100, 150, 200; (4) α = 3 and n = 50. Secondly, when β = 2: (1) α = 1 and n = 50, 100, 200; (2) α = 2 and n = 30, 50, 100, 150; (3) α = 0 or 3, for all n values. In these two categories, the WOA is as efficient as GA and GWO for estimating both the location α and scale β parameters with respect to MSE. Outside these two categories, PSO is the second-best method after GA and GWO with respect to MSE for estimating the location parameter α only. It is also clear that, for all n values, SCA gives the lowest performance, with the largest MSE values for both the α and β parameters.
In terms of the Def criterion, the strongest performance with the lowest deficiency for all n values is demonstrated by GA and GWO. Regarding the CT criterion, which is the average time needed (in seconds) to complete the required iterations for estimating the parameters in one MC run, GA is the fastest algorithm for estimating the α and β parameters for all n values; the time GA needs is lower than that of the other algorithms by a factor of at least two. For example, the CT needed for executing 1000 MC runs in the case of n = 50 in Table 1 for GA is estimated as 1000 × 0.0581 ≅ 58.1 s, while the CT of the same case for PSO, GWO, WOA, and SCA is 375.7, 169.1, 171.1, and 171 s, respectively. It can also be noticed that GWO, WOA, and SCA have almost the same CT, with a slight preference for WOA in nearly all cases, while PSO is the slowest of all the algorithms used.
It can be concluded that the ML estimates of GA outperform those of GWO when the CT criterion is considered. Concerning only the deficiency values, the ML estimates of the α and β parameters using GA and GWO are the most efficient overall, but within the two categories mentioned above, WOA is as efficient as GA and GWO. Outside these two categories, PSO is the second-best algorithm after GA and GWO for estimating both the α and β parameters. The weakest performance, with the highest deficiency values, is provided by SCA in all cases.
Based on these findings, we can conclude that GA and GWO are highly recommended and preferred over any other meta-heuristic algorithm utilized in this study for calculating ML estimators for unknown EMLOG distribution parameters (α, β). However, GA performs better and is more powerful, desirable, and preferred to be employed if a rapid estimating process is required. The simulation results also indicate that the SCA algorithm is unsuitable for estimating the location and scale parameters of this distribution due to the high values of its deficiency criterion.
The Def and CT criteria for Table 2 are plotted against n in Figure 5 and Figure 6, respectively, to show the performance of each algorithm.
Figure 5 illustrates that the GA and GWO algorithms have the best performance with the lowest Def values and SCA has the worst, while Figure 6 shows that the GA algorithm has the best performance according to the CT values among the other algorithms.

4. Applications

Six real datasets are modeled using the EMLOG distribution in this section to demonstrate the utility of this model in practice for the applications. In the first two datasets, which are used in various engineering areas, the modeling performance of the EMLOG distribution is compared with the other asymmetrical well-known and commonly used distributions in the statistical literature, such as Gamma, log-normal, log-logistic, and others. In the middle two datasets, which are used in medical and banking fields, the EMLOG distribution’s performance is compared to the normal and logistic distributions, which are symmetrical and belong to the location-scale family, to examine the flexibility and better goodness of fit of the EMLOG distribution among them. In the last two datasets, which are further used in medical and engineering fields, the modeling performance of the EMLOG distribution is compared with symmetrical and asymmetrical distributions such as normal and/or logistic, Weibull, gamma, and others.
The modeling performance of the EMLOG distribution has been compared using well-known criteria, including the log-likelihood value, the Akaike Information Criterion (AIC), the corrected AIC (AICc), the consistent AIC (CAIC), the Bayesian Information Criterion (BIC), and the Hannan-Quinn Information Criterion (HQIC). For more information on these criteria and their implementation, see [40,41,42]. The mathematical expressions of these criteria are given by
$$AIC = 2P - 2\ln L$$
$$AICc = AIC + \frac{2P(P+1)}{n - P - 1}$$
$$CAIC = P\,[\ln(n) + 1] - 2\ln L$$
$$BIC = P\ln(n) - 2\ln L$$
$$HQIC = 2P\ln[\ln(n)] - 2\ln L$$
where ln L is the maximized log-likelihood, n is the number of observations, and P is the total number of model parameters. A probability model is considered the best-fit model when it attains lower values of these criteria than the competing probability distributions.
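The five criteria follow directly from the maximized log-likelihood, n, and P. A minimal Python sketch (the function name `info_criteria` is illustrative):

```python
import numpy as np

def info_criteria(lnL, n, P):
    """AIC, AICc, CAIC, BIC, and HQIC from the maximized log-likelihood lnL,
    sample size n, and number of model parameters P."""
    aic = 2 * P - 2 * lnL
    aicc = aic + 2 * P * (P + 1) / (n - P - 1)
    caic = P * (np.log(n) + 1) - 2 * lnL
    bic = P * np.log(n) - 2 * lnL
    hqic = 2 * P * np.log(np.log(n)) - 2 * lnL
    return {"AIC": aic, "AICc": aicc, "CAIC": caic, "BIC": bic, "HQIC": hqic}
```

For the two-parameter EMLOG model, P = 2, so the rankings in the tables amount to comparing these values across candidate distributions at each dataset's maximized ln L.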

4.1. Dataset 1: Tensile Strength of 69 Carbon Fibers

This dataset contains the tensile strength (in GPa) of 69 carbon fibers evaluated under stress at 20 mm gauge lengths. This dataset was originally used for the first time in 1982 by Bader and Priest [43]. The data are given as: 1.312, 1.314, 1.479, 1.552, 1.700, 1.803, 1.861, 1.865, 1.944, 1.958, 1.966, 1.997, 2.006, 2.021, 2.027, 2.055, 2.063, 2.098, 2.140, 2.179, 2.224, 2.240, 2.253, 2.270, 2.272, 2.274, 2.301, 2.301, 2.359, 2.382, 2.382, 2.426, 2.434, 2.435, 2.478, 2.490, 2.511, 2.514, 2.535, 2.554, 2.566, 2.570, 2.586, 2.629, 2.633, 2.642, 2.648, 2.684, 2.697, 2.726, 2.770, 2.773, 2.800, 2.809, 2.818, 2.821, 2.848, 2.880, 2.954, 3.012, 3.067, 3.084, 3.090, 3.096, 3.128, 3.233, 3.433, 3.585, 3.858. Descriptive statistics, including the values of sample size (n), minimum (min), first quartile (1st Qu.), mean, mode, median, third quartile (3rd Qu.), maximum (max), variance (S2), skewness (γ1), and kurtosis (γ2) coefficients, respectively, are given by Table 10.
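The descriptive statistics reported for each dataset can be reproduced with a short script. The sketch below uses moment-based skewness and kurtosis coefficients (γ1 = m3/m2^{3/2}, γ2 = m4/m2^2), since the paper does not state which convention it uses, and omits the mode; the function name `describe` is illustrative.

```python
import numpy as np

def describe(x):
    """Descriptive statistics for a univariate sample (a sketch)."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    m2 = ((x - m) ** 2).mean()   # central moments (biased, population form)
    m3 = ((x - m) ** 3).mean()
    m4 = ((x - m) ** 4).mean()
    return {
        "n": x.size,
        "min": x.min(),
        "1st Qu.": np.quantile(x, 0.25),
        "mean": m,
        "median": np.median(x),
        "3rd Qu.": np.quantile(x, 0.75),
        "max": x.max(),
        "S2": x.var(ddof=1),          # sample variance
        "skewness": m3 / m2 ** 1.5,   # gamma_1 (convention assumed)
        "kurtosis": m4 / m2 ** 2,     # gamma_2, non-excess (assumed)
    }
```

Running `describe` on the 69 tensile-strength values would give the entries of Table 10 under these conventions.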
Various well-known distributions such as Gamma, log-normal, Log-Logistic, Rayleigh, Weibull, and exponential are used for modeling this data. A comparison between EMLOG distribution modeling performance and the mentioned commonly used distributions according to the lnL, AIC, AICc, CAIC, BIC, and HQIC criteria is given in Table 11.
The results in Table 11 show that the EMLOG distribution performance gives a better fit than its rivals in terms of the considered criteria.

4.2. Dataset 2: Strengths of Glass Fibers

The Strength of Glass Fibers dataset was introduced for the first time by Smith and Naylor [44]. It consists of 63 observations of the strengths of 1.5 cm glass fibers and is widely used in the statistical literature. The data are as follows: 0.55, 0.93, 1.25, 1.36, 1.49, 1.52, 1.58, 1.61, 1.64, 1.68, 1.73, 1.81, 2.00, 0.74, 1.04, 1.27, 1.39, 1.49, 1.53, 1.59, 1.61, 1.66, 1.68, 1.76, 1.82, 2.01, 0.77, 1.11, 1.28, 1.42, 1.5, 1.54, 1.6, 1.62, 1.66, 1.69, 1.76, 1.84, 2.24, 0.81, 1.13, 1.29, 1.48, 1.5, 1.55, 1.61, 1.62, 1.66, 1.7, 1.77, 1.84, 0.84, 1.24, 1.3, 1.48, 1.51, 1.55, 1.61, 1.63, 1.67, 1.7, 1.78, 1.89. The descriptive statistics for this dataset are given in Table 12.
The resulting values of fitting this data to the EMLOG distribution model in comparison with Gamma, lognormal, log-logistic, Rayleigh, and Exponential distributions in terms of the criteria that were chosen are obtained in Table 13.
The results show that the EMLOG model outperforms any competitor distribution in terms of modeling performance.

4.3. Dataset 3: Bladder Cancer Patients

This dataset is a biologically uncensored univariate dataset that reflects the remission periods (in months) of 128 bladder cancer patients who were randomly selected by Lee and Wang in 2003 [45]. The data are given as: 0.08, 2.09, 3.48, 4.87, 6.94, 8.66, 13.11, 23.63, 0.20, 2.23, 3.52, 4.98, 6.97, 9.02, 13.29, 0.40, 2.26, 3.57, 5.06, 7.09, 9.22, 13.80, 25.74, 0.50, 2.46, 3.64, 5.09, 7.26, 9.47, 14.24, 25.82, 0.51, 2.54, 3.70, 5.17, 7.28, 9.74, 14.76, 26.31, 0.81, 2.62, 3.82, 5.32, 7.32, 10.06, 14.77, 32.15, 2.64, 3.88, 5.32, 7.39, 10.34, 14.83, 34.26, 0.90, 2.69, 4.18, 5.34, 7.59, 10.66, 15.96, 36.66, 1.05, 2.69, 4.23, 5.41, 7.62, 10.75, 16.62, 43.01, 1.19, 2.75, 4.26, 5.41, 7.63, 17.12, 46.12, 1.26, 2.83, 4.33, 5.49, 7.66, 11.25, 17.14, 79.05, 1.35, 2.87, 5.62, 7.87, 11.64, 17.36, 1.40, 3.02, 4.34, 5.71, 7.93, 11.79, 18.10, 1.46, 4.40, 5.85, 8.26, 11.98, 19.13, 1.76, 3.25, 4.50, 6.25, 8.37, 12.02, 2.02, 3.31, 4.51, 6.54, 8.53, 12.03, 20.28, 2.02, 3.36, 6.76, 12.07, 21.73, 2.07, 3.36, 6.93, 8.65, 12.63, 22.69. The descriptive statistics values of this dataset are given in Table 14 below.
The EMLOG distribution’s modeling performance is compared to that of the normal and logistic distributions. The results given in Table 15 show that the EMLOG distribution has a better fit in comparison with the normal and logistic distributions in terms of the considered criteria.

4.4. Dataset 4: Waiting Times (in Minutes) of 100 Bank Customers

This dataset contains 100 observations that indicate the waiting periods (in minutes) before service for 100 bank customers, as examined and described by Ghitany et al. [46]. The data are given as: 0.8, 0.8, 1.3, 1.5, 1.8, 1.9, 1.9, 2.1, 2.6, 2.7, 2.9, 3.1, 3.2, 3.3, 3.5, 3.6, 4.0, 4.1, 4.2, 4.2, 4.3, 4.3, 4.4, 4.4, 4.6, 4.7, 4.7, 4.8, 4.9, 4.9, 5.0, 5.3, 5.5, 5.7, 5.7, 6.1, 6.2, 6.2, 6.2, 6.3, 6.7, 6.9, 7.1, 7.1, 7.1, 7.1, 7.4, 7.6, 7.7, 8.0, 8.2, 8.6, 8.6, 8.6, 8.8, 8.8, 8.9, 8.9, 9.5, 9.6, 9.7, 9.8, 10.7, 10.9, 11.0, 11.0, 11.1, 11.2, 11.2, 11.5, 11.9, 12.4, 12.5, 12.9, 13.0, 13.1, 13.3, 13.6, 13.7, 13.9, 14.1, 15.4, 15.4, 17.3, 17.3, 18.1, 18.2, 18.4, 18.9, 19.0, 19.9, 20.6, 21.3, 21.4, 21.9, 23.0, 27.0, 31.6, 33.1, 38.5. Descriptive statistics for this dataset are given in Table 16.
From Table 17, we can see that the EMLOG distribution seems to provide a better fit than normal and logistic distributions according to the considered criteria.

4.5. Dataset 5: Patients Relief Times Dataset

This dataset relates to the observations of relief times (in minutes) for 20 patients receiving an analgesic. It was first used by [47]. The data are given as: 1.1, 1.4, 1.3, 1.7, 1.9, 1.8, 1.6, 2.2, 1.7, 2.7, 4.1, 1.8, 1.5, 1.2, 1.4, 3, 1.7, 2.3, 1.6, 2. Descriptive statistics for this dataset are given in Table 18.
The modeling performance of the EMLOG distribution is compared with symmetrical distributions such as Normal and Logistic as well as asymmetrical distributions such as Weibull, Nakagami, and others. The EMLOG distribution was found to have the best fit based on the criteria considered, as shown below in Table 19.

4.6. Dataset 6: Windshield Failure Time Dataset

This dataset contains observations of the failure time (the unit for measurement is 1000 h) of 84 windshields for a given aircraft model [48]. The data are given as: 0.040, 1.866, 2.385, 3.443, 0.301, 1.876, 2.481, 3.467, 0.309, 1.899, 2.610, 3.478, 0.557, 1.911, 2.625, 3.578, 0.943, 1.912, 2.632, 3.595, 1.070, 1.914, 2.646, 3.699, 1.124, 1.981, 2.661, 3.779, 1.248, 2.010, 2.688, 3.924, 1.281, 2.038, 2.823, 4.035, 1.281, 2.085, 2.890, 4.121, 1.303, 2.089, 2.902, 4.167, 1.432, 2.097, 2.934, 4.240, 1.480, 2.135, 2.962, 4.255, 1.505, 2.154, 2.964, 4.278, 1.506, 2.190, 3.000, 4.305, 1.568, 2.194, 3.103, 4.376, 1.615, 2.223, 3.114, 4.449, 1.619, 2.224, 3.117, 4.485, 1.652, 2.229, 3.166, 4.570, 1.652, 2.300, 3.344, 4.602, 1.757, 2.324, 3.376, 4.663. Descriptive statistics for this dataset are given in Table 20.
When the modeling performance of the EMLOG distribution is compared with a symmetrical distribution (logistic) and asymmetrical distributions (gamma, log-logistic, and others), as shown in Table 21 below, the EMLOG distribution shows the best fit to the data with regard to the considered criteria.
It can be noticed that in the first and second datasets, the EMLOG distribution outperforms asymmetrical distributions such as the Weibull, Gamma, and others. In the third and fourth datasets, the EMLOG distribution provides a better fit than symmetrical distributions such as the normal and logistic distributions. In the fifth and sixth datasets, the EMLOG distribution outperforms both symmetrical and asymmetrical distributions. These results indicate that the EMLOG distribution can provide a better fit than many popular distributions, which demonstrates its distinction and its suitability for many practical cases and applications.

5. Conclusions

The EMLOG distribution is obtained by combining a logistic distribution with an exponential distribution. It can be used as an alternative to symmetrical and asymmetrical distributions, often with a better fit in applications. Because the EMLOG distribution is more flexible than alternative distributions, it has been applied in various fields, including technology, energy, marketing, biology, and psychology. As a result, parameter estimation is essential for this distribution. In this study, the ML estimates of the EMLOG distribution’s location α and scale β parameters are investigated; they cannot be obtained explicitly because the nonlinear likelihood equations have no closed-form solution, so iterative techniques are needed. In such cases, ML estimation based on meta-heuristic algorithms such as GA, PSO, GWO, WOA, and SCA is a convenient alternative to traditional statistical methods. A Monte Carlo simulation study was conducted to compare the performance of the meta-heuristic algorithms used in this study with respect to the bias, MSE, Def, and CT criteria. According to the simulation results, the ML estimates of GA and GWO show the best performance in comparison with the PSO, WOA, and SCA algorithms. However, GA is the fastest for the estimation process and is therefore considered the best algorithm when computation time is taken into account. GWO has the advantages of high performance with few parameters, simple principles, and easy implementation; on the other hand, it converges more slowly and may fall into a local optimum. The genetic algorithm, in contrast, has two major operators that are robust in generating efficient, diverse solutions that help avoid being trapped in local optima, and it can process large amounts of data with a fast convergence speed, which saves time.
However, genetic algorithms rely on fixed parameters such as mutation probability (MP) and crossover probability (CP), so defining these parameters improperly can lead to lower performance.

Author Contributions

Conceptualization, P.K.; methodology, P.K. and A.O.F.; software, A.O.F.; validation, P.K. and A.O.F.; formal analysis, P.K. and A.O.F.; investigation, P.K. and A.O.F.; resources, P.K. and A.O.F.; data curation, P.K. and A.O.F.; writing—original draft preparation, P.K. and A.O.F.; writing—review and editing, P.K. and A.O.F.; visualization, P.K. and A.O.F.; supervision, P.K.; project administration P.K. and A.O.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The study’s data are both publicly available in the references [43,44,45,46,47,48] and contained in the article itself.

Acknowledgments

We appreciate the editor’s and referees’ thoughtful, constructive feedback, which helped enhance the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Reyes, J.; Venegas, O.; Gómez, H.W. Exponentially-Modified Logistic Distribution with Application to Mining and Nutrition Data. Appl. Math. Inf. Sci. 2018, 12, 1109–1116. [Google Scholar] [CrossRef]
  2. Kissell, R.L.; Poserina, J. Advanced Math and Statistics. In Optimal Sports Math, Statistics, and Fantasy, 1st ed; Elsevier: Amsterdam, The Netherlands, 2017; pp. 103–135. [Google Scholar] [CrossRef]
  3. Gupta, A.K.; Zeng, W.B.; Wu, Y. Exponential Distribution. In Probability and Statistical Models; Birkhäuser: Boston, MA, USA, 2017; pp. 23–43. [Google Scholar] [CrossRef]
  4. Hussein, M.; Elsayed, H.; Cordeiro, G.M. A New Family of Continuous Distributions: Properties and Estimation. Symmetry 2022, 14, 276. [Google Scholar] [CrossRef]
  5. Gupta, R.D.; Kundu, D. Exponentiated exponential family: An alternative to gamma and Weibull distributions. Biom. J. J. Math. Methods Biosci. 2001, 43, 117–130. [Google Scholar] [CrossRef]
  6. Pal, M.; Ali, M.M.; Woo, J. Exponentiated weibull distribution. Statistica 2006, 66, 139–147. [Google Scholar] [CrossRef]
  7. Nadarajah, S.; Gupta, A.K. The Exponentiated Gamma Distribution with Application to Drought Data. Calcutta Stat. Assoc. Bull. 2007, 59, 29–54. [Google Scholar] [CrossRef]
  8. Nadarajah, S.; Haghighi, F. An extension of the exponential distribution. Statistics 2011, 45, 543–558. [Google Scholar] [CrossRef]
  9. Chaudhary, A.K. Frequentist Parameter Estimation of Two-Parameter Exponentiated Log-logistic Distribution BB. NCC J. 2019, 4, 1–8. [Google Scholar] [CrossRef]
  10. Sorensen, D. Likelihood. In Statistical Learning in Genetics: An Introduction Using R; Springer International Publishing: Cham, Switzerland, 2023; pp. 51–75. [Google Scholar] [CrossRef]
  11. Bartolucci, F.; Scrucca, L. Point Estimation Methods with Applications to Item Response Theory Models. In International Encyclopedia of Education; Elsevier: Amsterdam, The Netherlands, 2010. [Google Scholar] [CrossRef]
  12. Sreenivas, P.; Kumar, S.V. A review on non-traditional optimization algorithm for simultaneous scheduling problems. J. Mech. Civ. Eng. 2015, 12, 50–53. [Google Scholar]
  13. Holland, J.H. Adaptation in Natural and Artificial System: An Introduction with Application to Biology, Control and Artificial Intelligence; University of Michigan Press: Ann Arbor, MI, USA, 1975. [Google Scholar]
  14. Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar] [CrossRef]
  15. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  16. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  17. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl. Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  18. Yalçınkaya, A.; Şenoğlu, B.; Yolcu, U. Maximum likelihood estimation for the parameters of skew normal distribution using genetic algorithm. Swarm Evol. Comput. 2018, 38, 127–138. [Google Scholar] [CrossRef]
  19. Karakoca, A.; Pekgör, A. Maximum Likelihood Estimation of the Parameters of Progressively Type-2 Censored Samples from Weibull Distribution Using Genetic Algorithm. Acad. Platf. J. Eng. Sci. 2019, 7, 189–199. [Google Scholar]
  20. Yalçınkaya, A.; Balay, İ.G.; Şenoǧlu, B. A new approach using the genetic algorithm for parameter estimation in multiple linear regression with long-tailed symmetric distributed error terms: An application to the COVID-19 data. Chemom. Intell. Lab. Syst. 2021, 216, 104372. [Google Scholar] [CrossRef] [PubMed]
  21. Swait, J. Distribution-free estimation of individual parameter logit (IPL) models using combined evolutionary and optimization algorithms. J. Choice Model. 2023, 47, 100396. [Google Scholar] [CrossRef]
  22. Medel-Esquivel, R.; Gómez-Vargas, I.; Morales Sánchez, A.A.; García-Salcedo, R.; Alberto Vázquez, J. Cosmological Parameter Estimation with Genetic Algorithms. Universe 2023, 10, 11. [Google Scholar] [CrossRef]
  23. Wang, F.-K.; Huang, P.-R. Implementing particle swarm optimization algorithm to estimate the mixture of two Weibull parameters with censored data. J. Stat. Comput. Simul. 2014, 84, 1975–1989. [Google Scholar] [CrossRef]
  24. Acitas, S.; Aladag, C.H.; Senoglu, B. A new approach for estimating the parameters of Weibull distribution via particle swarm optimization: An application to the strengths of glass fibre data. Reliab. Eng. Syst. Saf. 2019, 183, 116–127. [Google Scholar] [CrossRef]
  25. Mahmood, S.W.; Algamal, Z.Y. Reliability Estimation of Three Parameters Gamma Distribution via Particle Swarm Optimization. Thail. Stat. 2021, 19, 308–316. [Google Scholar]
  26. Faouri, A.O.; Kasap, P. Maximum Likelihood Estimation for the Nakagami Distribution Using Particle Swarm Optimization Algorithm with Applications. Necmettin Erbakan Üniv. Fen Mühendis. Bilim. Derg. 2023, 5, 209–218. [Google Scholar] [CrossRef]
  27. Berg, S.S. Utility of Particle Swarm Optimization in Statistical Population Reconstruction. Mathematics 2023, 11, 827. [Google Scholar] [CrossRef]
  28. Abdullah, Z.M.; Hussain, N.K.; Fawzi, F.A.; Abdal-Hammed, M.K.; Khaleel, M.A. Estimating parameters of Marshall Olkin Topp Leon exponential distribution via grey wolf optimization and conjugate gradient with application. Int. J. Nonlinear Anal. Appl. 2022, 13, 3491–3503. [Google Scholar] [CrossRef]
  29. Li, Z.; Xia, X.; Yan, Y. A Novel Semidefinite Programming-based UAV 3D Localization Algorithm with Gray Wolf Optimization. Drones 2023, 7, 113. [Google Scholar] [CrossRef]
  30. Faouri, A.O.; Kasap, P. Maximum Likelihood Estimation for the Log-Logistic Distribution Using Whale Optimization Algorithm with Applications. Black Sea J. Eng. Sci. 2023, 6, 639–647. [Google Scholar] [CrossRef]
  31. Al-Quraan, A.; Al-Mhairat, B.; Malkawi, A.M.A.; Radaideh, A.; Al-Masri, H.M.K. Optimal Prediction of Wind Energy Resources Based on WOA—A Case Study in Jordan. Sustainability 2023, 15, 3927. [Google Scholar] [CrossRef]
  32. Wang, J.; Huang, X.; Li, Q.; Ma, X. Comparison of seven methods for determining the optimal statistical distribution parameters: A case study of wind energy assessment in the large-scale wind farms of China. Energy 2018, 164, 432–448. [Google Scholar] [CrossRef]
  33. Mohammed, W.A.D.I. Five different distributions and metaheuristics to model wind speed distribution. J. Therm. Eng. 2021, 7 (Suppl. S14), 1898–1920. [Google Scholar] [CrossRef]
  34. Wadi, M.; Elmasry, W. A Comparative Assessment of Five Different Distributions Based on Five Different Optimization Methods for Modeling Wind Speed Distribution. GAZI Univ. J. Sci. 2023, 36, 1096–1120. [Google Scholar] [CrossRef]
  35. Al-Mhairat, B.; Al-Quraan, A. Assessment of Wind Energy Resources in Jordan Using Different Optimization Techniques. Processes 2022, 10, 105. [Google Scholar] [CrossRef]
  36. Abualigah, L.; Diabat, A. Advances in Sine Cosine Algorithm: A comprehensive survey. Artif. Intell. Rev. 2021, 54, 2567–2608. [Google Scholar] [CrossRef]
  37. Raghuvanshi, A.; Sharma, A.; Gupta, M.K. Maximum Likelihood Direction of Arrival Estimation using GWO Algorithm. In Proceedings of the 2022 International Conference on Advances in Computing, Communication and Materials (ICACCM), Dehradun, India, 10–11 November 2022; pp. 1–5. [Google Scholar] [CrossRef]
  38. Goldberg, D.E. Genetic Algorithms in Search, Optimization, and Machine Learning; Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA, 1989. [Google Scholar]
  39. Gupta, S.; Deep, K. A novel Random Walk Grey Wolf Optimizer. Swarm Evol. Comput. 2019, 44, 101–112. [Google Scholar] [CrossRef]
  40. Anderson, D.R.; Burnham, K.P.; White, G.C. Comparison of Akaike information criterion and consistent Akaike information criterion for model selection and statistical inference from capture-recapture studies. J. Appl. Stat. 1998, 25, 263–282. [Google Scholar] [CrossRef]
  41. Neath, A.A.; Cavanaugh, J.E. The Bayesian information criterion: Background, derivation, and applications. WIREs Comput. Stat. 2012, 4, 199–203. [Google Scholar] [CrossRef]
  42. Ayalew, S.; Babu, M.C.; Rao, L.M. Comparison of new approach criteria for estimating the order of autoregressive process. IOSR J. Math. 2012, 1, 10–20. [Google Scholar] [CrossRef]
  43. Bader, M.G.; Priest, A.M. Statistical aspects of fibre and bundle strength in hybrid composites. In Progress in Science and Engineering of Composites, Proceedings of the Fourth International Conference on Composite Materials, ICCM-IV, 25–28 October 1982, Tokyo, Japan; Japan Society for Composite Materials: Tokyo, Japan, 1982; pp. 1129–1136. [Google Scholar]
  44. Smith, R.L.; Naylor, J.C. A Comparison of Maximum Likelihood and Bayesian Estimators for the Three- Parameter Weibull Distribution. J. R. Stat. Soc. Ser. C (Appl. Stat.) 1987, 36, 358. [Google Scholar] [CrossRef]
  45. Lee, E.T.; Wang, J. Statistical Methods for Survival Data Analysis; John Wiley & Sons: New York, NY, USA, 2003. [Google Scholar]
  46. Ghitany, M.E.; Atieh, B.; Nadarajah, S. Lindley distribution and its application. Math. Comput. Simul. 2008, 78, 493–506. [Google Scholar] [CrossRef]
  47. Gross, A.J.; Clark, V. Survival Distributions: Reliability Applications in the Biomedical Sciences; John Wiley & Sons: New York, NY, USA, 1975. [Google Scholar]
  48. Blischke, W.R.; Murthy, D.P. Reliability: Modeling, Prediction, and Optimization; John Wiley & Sons: New York, NY, USA, 2000. [Google Scholar]
Figure 1. EMLOG pdf for different scale values.
Figure 2. Flow diagram of the GA.
Figure 3. Hierarchy of the grey wolf population.
Figure 4. Flow diagram of the GWO.
Figure 5. Performance of GA, PSO, GWO, WOA, and SCA according to the Def values in Table 2.
Figure 6. Performance of GA, PSO, GWO, WOA, and SCA according to the CT values in Table 2.
Table 1. The first four values of Bi and Γ(i).
i | Bi | Γ(i)
1 | ±1/2 | 1
2 | 1/6 | 1
3 | 0 | 2
4 | −1/30 | 6
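The entries of Table 1 can be checked directly: Γ(i) = (i − 1)! at positive integers, and the Bi are the first Bernoulli numbers (the sign of B1 depends on the convention, hence ±1/2). A quick sketch in Python (variable names are ours):

```python
import math
from fractions import Fraction

# Gamma function at positive integers: Γ(i) = (i - 1)!
gammas = [math.gamma(i) for i in (1, 2, 3, 4)]      # 1, 1, 2, 6

# First Bernoulli numbers, matching the Bi column of Table 1
bernoulli = {1: Fraction(1, 2),     # ±1/2, sign depends on the convention
             2: Fraction(1, 6),
             3: Fraction(0, 1),     # odd-index Bi beyond B1 vanish
             4: Fraction(-1, 30)}
```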
Table 2. Simulated Mean, Bias, Variance, MSE, and Def values for the ML estimators α̂ and β̂ when α = 0, β = 1.
n | Algorithm | α̂: Mean, Variance, Bias, MSE | β̂: Mean, Variance, Bias, MSE | Def | Mean CT
30GA0.01530.14630.01530.14650.99090.02190.16850.0220.16850.0522
PSO−0.05410.2880−0.05410.29091.03760.08200.37430.08340.37430.3771
GWO0.01530.14630.01530.14650.99090.02190.16850.0220.16850.1307
WOA0.01550.14630.01550.14650.99080.02190.16850.0220.16850.1302
SCA0.994118.77100.994119.75920.94350.063319.82570.066519.82570.1312
50GA0.01980.08320.01980.08360.98170.01270.09660.01300.09660.0581
PSO−0.06390.2861−0.06390.29021.02870.07620.36720.07700.36720.3757
GWO0.01980.08320.01980.08360.98170.01270.09660.01300.09660.1691
WOA0.03930.48220.03930.48370.98110.01330.49740.01370.49740.1711
SCA0.714813.60000.714814.11090.94820.043314.15690.046014.15690.1710
100GA0.00610.04090.00610.04090.99070.00680.04780.00690.04780.0708
PSO−0.06950.1923−0.06950.19711.03300.06150.25970.06260.25970.4127
GWO0.00610.04090.00610.04090.99070.00680.04780.00690.04780.2691
WOA0.00610.04090.00610.04090.99070.00680.04780.00690.04780.2702
SCA0.886516.87000.886517.65590.94940.044517.70290.047117.70290.2711
150GA0.00690.02780.00690.02780.99650.00440.03230.00440.03230.0896
PSO−0.07840.2568−0.07840.26291.04910.08150.34690.08390.34690.4952
GWO0.00690.02780.00690.02780.99650.00440.03230.00440.03230.3805
WOA0.02660.42750.02660.42820.99550.00540.43360.00540.43360.3834
SCA0.705313.54300.705314.04040.96400.035014.07680.036314.07680.3829
200GA0.00780.02110.00780.02120.99470.00350.02470.00350.02470.1125
PSO−0.06180.1450−0.06180.14881.04460.06560.21640.06760.21640.5934
GWO0.00780.02110.00780.02120.99470.00350.02470.00350.02470.5244
WOA0.02770.42080.02770.42160.99380.00430.42590.00430.42590.5267
SCA0.705713.53600.705714.03400.96110.034114.06960.035614.06960.5266
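The columns of Tables 2–9 are linked by the standard Monte Carlo identities MSE = Variance + Bias² and Def = MSE(α̂) + MSE(β̂), with the last column reporting the average computation time. A minimal sketch of how one row is summarized (function names are ours, not the authors' code):

```python
def performance_metrics(estimates, true_value):
    """Summarize Monte Carlo ML estimates of a single parameter.
    Hypothetical helper illustrating the table columns."""
    n = len(estimates)
    mean = sum(estimates) / n
    bias = mean - true_value
    variance = sum((e - mean) ** 2 for e in estimates) / n
    mse = variance + bias ** 2          # MSE = Variance + Bias^2
    return {"Mean": mean, "Variance": variance, "Bias": bias, "MSE": mse}

def deficiency(mse_alpha, mse_beta):
    """Def criterion: joint efficiency of the pair (alpha-hat, beta-hat)."""
    return mse_alpha + mse_beta
```

For example, the GA row at n = 30 in Table 2 gives Def = 0.1465 + 0.0220 = 0.1685, matching the tabulated value.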
Table 3. Simulated Mean, Bias, Variance, MSE, and Def values for the ML estimators α̂ and β̂ when α = 0, β = 2.
n | Algorithm | α̂: Mean, Variance, Bias, MSE | β̂: Mean, Variance, Bias, MSE | Def | Mean CT
30GA0.03580.56680.03580.56811.9550.0867−0.0450.08870.65680.0482
PSO0.02810.64430.02810.64511.97370.1036−0.02630.10430.74940.6155
GWO0.03580.56680.03580.56811.9550.0867−0.0450.08870.65680.1168
WOA0.03590.56700.03590.56831.9550.0867−0.0450.08870.65700.1181
SCA0.633912.16200.633912.56381.90090.1909−0.09910.200712.76460.1184
50GA0.01500.33270.01500.33291.97360.0533−0.02640.05400.38690.0531
PSO−0.01200.3795−0.01200.37961.98910.0623−0.01090.06240.44210.5314
GWO0.01500.33270.01500.33291.97360.0533−0.02640.05400.38690.1590
WOA0.01500.33270.01500.33291.97360.0533−0.02640.05400.38690.1601
SCA0.48129.77520.481210.00681.92770.1431−0.07230.148310.15510.1606
100GA0.00280.17940.00280.17941.98700.0266−0.01300.02680.20620.0714
PSO−0.02010.2167−0.02010.21711.99810.0343−0.00190.03430.25140.5279
GWO0.00280.17940.00280.17941.98700.0266−0.01300.02680.20620.2728
WOA0.00290.17950.00290.17951.98700.0266−0.01300.02680.20630.2738
SCA0.38998.11990.38998.27191.94730.1040−0.05270.10688.37870.2751
150GA−0.00130.1119−0.00130.11191.98710.0164−0.01290.01660.12850.0869
PSO−0.01940.1379−0.01940.13832.00540.03440.00540.03440.17270.56322
GWO−0.00130.1119−0.00130.11191.98710.0164−0.01290.01660.12850.3769
WOA−0.00110.1120−0.00110.11201.98710.0164−0.01290.01660.12860.37859
SCA0.26045.24780.26045.31561.96150.0645−0.03850.06605.38160.38043
200GA0.00880.08480.00880.08491.98550.0136−0.01450.01380.09870.1052
PSO−0.00240.0988−0.00240.09881.99590.0238−0.00410.02380.12260.61728
GWO0.00880.08480.00880.08491.98550.0136−0.01450.01380.09870.49681
WOA0.00890.08480.00890.08491.98550.0136−0.01450.01380.09870.4991
SCA0.18853.65250.18853.68801.96870.0470−0.03130.04803.73600.50185
Table 4. Simulated Mean, Bias, Variance, MSE, and Def values for the ML estimators α̂ and β̂ when α = 1, β = 1.
n | Algorithm | α̂: Mean, Variance, Bias, MSE | β̂: Mean, Variance, Bias, MSE | Def | Mean CT
30GA1.02890.1366 0.02890.13740.97680.0218−0.02320.02230.15980.0622
PSO0.93250.3457−0.06750.35031.03870.1069 0.03870.10840.45870.4518
GWO1.02890.1366 0.02890.13740.97680.0218−0.02320.02230.15980.1497
WOA1.04770.4966 0.04770.49890.97600.0226−0.02400.02320.52210.1475
SCA2.023717.8320 1.023718.88000.92700.0652−0.07300.070518.95050.1506
50GA1.01000.0913 0.01000.09141.00020.0132 0.00020.01320.10460.0548
PSO0.96350.1723−0.03650.17361.02850.0419 0.02850.04270.21630.3729
GWO1.01000.0913 0.01000.09141.00020.0132 0.00020.01320.10460.1651
WOA1.06801.1704 0.06801.17500.99760.0158−0.00240.01581.19080.1644
SCA2.201721.4220 1.201722.86610.94080.0688−0.05920.072322.93840.1661
100GA1.01030.0401 0.01030.04020.99260.0070−0.00740.00710.04730.0786
PSO0.92860.2145−0.07140.21961.04080.0784 0.04080.08010.29970.4563
GWO1.01030.0401 0.01030.04020.99260.0070−0.00740.00710.04730.2968
WOA1.02920.4007 0.02920.40160.99150.0079−0.00850.00800.40950.2937
SCA2.074519.1190 1.074520.27360.93940.0546−0.06060.058320.33180.2987
150GA1.00350.0277 0.00350.02770.99880.0040−0.00120.00400.03170.0929
PSO0.93310.2009−0.06690.20541.04340.0665 0.04340.06840.27380.5273
GWO1.00350.0277 0.00350.02770.99880.0040−0.00120.00400.03170.4014
WOA1.00370.0277 0.00370.02770.99870.0040−0.00130.00400.03170.3990
SCA1.781714.2310 0.781814.84220.95980.0400−0.04020.041614.88380.4052
200GA1.00380.0222 0.00380.02220.99710.0033−0.00290.00330.02550.1256
PSO0.93560.1518−0.06440.15591.04070.0550 0.04070.05670.21260.6650
GWO1.00380.0222 0.00380.02220.99710.0033−0.00290.00330.02550.5799
WOA1.02260.3830 0.02260.38350.99660.0037−0.00340.00370.38720.5768
SCA1.895316.2030 0.895317.00460.95190.0445−0.04810.046817.05140.5838
Table 5. Simulated Mean, Bias, Variance, MSE, and Def values for the ML estimators α̂ and β̂ when α = 1, β = 2.
n | Algorithm | α̂: Mean, Variance, Bias, MSE | β̂: Mean, Variance, Bias, MSE | Def | Mean CT
30GA1.04270.56600.04270.56781.95870.0905−0.04130.09220.66000.0554
PSO1.01850.60510.01850.60541.97510.1007−0.02490.10130.70680.7952
GWO1.04270.56600.04270.56781.95870.0905−0.04130.09220.66000.1353
WOA1.06110.92510.06110.92881.95660.0938−0.04340.09571.02450.1352
SCA1.756813.71400.756914.28691.88890.2220−0.11110.234314.52120.1381
50GA1.01200.32410.01200.32421.96850.0504−0.03150.05140.37560.0520
PSO0.99320.3727−0.00680.37271.98820.0671−0.01180.06720.44000.5752
GWO1.01200.32410.01200.32421.96850.0504−0.03150.05140.37560.1575
WOA1.01210.32420.01210.32431.96850.0504−0.03150.05140.37570.1572
SCA1.48229.17860.48229.41111.91970.1437−0.08030.15019.56130.1592
100GA0.99930.1657−0.00070.16571.98150.0271−0.01850.02740.19310.0905
PSO0.96280.2391−0.03720.24052.00280.0449 0.00280.04490.28540.6579
GWO0.99930.1657−0.00070.16571.98150.0271−0.01850.02740.19310.3577
WOA0.99890.1658−0.00110.16581.98150.0271−0.01850.02740.19320.3564
SCA1.30185.85440.30185.94551.95170.0862−0.04830.08856.03400.3631
150GA1.01000.11280.01000.11291.98310.0173−0.01690.01760.13050.1236
PSO0.99130.1655−0.00870.16562.00340.03690.00340.03690.20250.8276
GWO1.01000.11280.01000.11291.98310.0173−0.01690.01760.13050.5436
WOA1.02910.47320.02910.47401.98120.0210−0.01880.02140.49540.5463
SCA1.18003.33220.18003.36461.96670.0506−0.03330.05173.41630.5512
200GA1.02040.09070.02040.09111.97880.0120−0.02120.01240.10360.1326
PSO0.99550.1422−0.00450.14221.99720.0294−0.00280.02940.17160.8674
GWO1.02040.09070.02040.09111.97880.0120−0.02120.01240.10360.6412
WOA1.02000.09090.02000.09131.97880.0120−0.02120.01240.10370.6443
SCA1.24824.36580.24834.42751.95620.0565−0.04380.05844.48590.6538
Table 6. Simulated Mean, Bias, Variance, MSE, and Def values for the ML estimators α̂ and β̂ when α = 2, β = 1.
n | Algorithm | α̂: Mean, Variance, Bias, MSE | β̂: Mean, Variance, Bias, MSE | Def | Mean CT
30GA2.04040.1378 0.04040.13940.97200.0211−0.02800.02190.16130.0531
PSO1.97950.2660−0.02050.26641.02450.1029 0.02450.10350.36990.3777
GWO2.04040.1378 0.04040.13940.97200.0211−0.02800.02190.16130.1322
WOA2.04090.1378 0.04090.13950.97190.0211−0.02810.02190.16140.1318
SCA3.347021.9970 1.347023.81140.90440.0802−0.09560.089323.90070.1337
50GA2.01380.0865 0.01380.08670.98340.0127−0.01660.01300.09970.0576
PSO1.94250.2365−0.05750.23981.04140.1003 0.04140.10200.34180.3659
GWO2.01380.0865 0.01380.08670.98340.0127−0.01660.01300.09970.1700
WOA2.03110.4098 0.03110.41080.98300.0134−0.01710.01370.42450.1690
SCA3.372722.9320 1.372724.81630.90960.0756−0.09040.083824.90010.1721
100GA2.01580.0455 0.01580.04570.99610.0069−0.00390.00690.05270.1000
PSO1.93360.2321−0.06640.23651.05110.0886 0.05110.09120.32770.6127
GWO2.01580.0455 0.01580.04570.99610.0069−0.00390.00690.05270.3760
WOA2.01610.0455 0.01610.04580.99600.0069−0.00400.00690.05270.3776
SCA3.110618.5920 1.110619.82540.93890.0594−0.06110.063119.88860.3831
150GA1.99840.0264−0.00160.02640.99400.0043−0.00600.00430.03070.1196
PSO1.93110.1683−0.06890.17301.03910.0699 0.03910.07140.24450.6363
GWO1.99840.0264−0.00160.02640.99400.0043−0.00600.00430.03070.5154
WOA1.99820.0264−0.00180.02640.99390.0043−0.00610.00430.03070.5155
SCA2.845014.5540 0.845015.26800.94850.0451−0.05150.047815.31580.5180
200GA2.00160.0207 0.00160.02071.00010.0033 0.00010.00330.02400.1107
PSO1.92990.2492−0.07010.25411.05010.0766 0.05010.07910.33320.5733
GWO2.00160.0207 0.00160.02071.00010.0033 0.00010.00330.02400.5225
WOA2.00190.0207 0.00190.02071.00020.0033 0.00020.00330.02400.5193
SCA2.733612.8290 0.733613.36720.96010.0408−0.03990.042413.40960.5225
Table 7. Simulated Mean, Bias, Variance, MSE, and Def values for the ML estimators α̂ and β̂ when α = 2, β = 2.
n | Algorithm | α̂: Mean, Variance, Bias, MSE | β̂: Mean, Variance, Bias, MSE | Def | Mean CT
30GA2.01780.59610.01780.59641.95160.0887−0.04840.09100.68750.0533
PSO1.99180.6671−0.00820.66721.97340.1048−0.02660.10550.77270.9082
GWO2.01780.59610.01780.59641.95160.0887−0.04840.09100.68750.1354
WOA2.01750.59620.01750.59651.95160.0887−0.04840.09100.68750.1356
SCA2.608210.91200.608211.28191.89010.2033−0.11000.215411.49730.1375
50GA1.99740.3451−0.00260.34511.96310.0523−0.03690.05370.39880.0554
PSO1.96390.4095−0.03610.41081.98860.0764−0.01140.07650.48730.5359
GWO1.99740.3451−0.00260.34511.96310.0523−0.03690.05370.39880.1724
WOA1.99710.3451−0.00290.34511.96300.0523−0.03700.05370.39880.1723
SCA2.54019.7643 0.540110.05601.90780.1589−0.09220.167410.22340.1749
100GA2.03920.1603 0.03920.16181.97070.0239−0.02930.02480.18660.0912
PSO2.00460.2233 0.00460.22331.99170.0425−0.00830.04260.26590.6931
GWO2.03920.1603 0.03920.16181.97070.0239−0.02930.02480.18660.3520
WOA2.03950.1603 0.03950.16191.97070.0239−0.02930.02480.18660.3533
SCA2.18512.7227 0.18512.75701.95550.0534−0.04450.05542.81230.3633
150GA1.99350.1175−0.00650.11751.97450.0178−0.02550.01850.13600.1943
PSO1.97120.1582−0.02880.15901.99310.0329−0.00690.03290.19201.2466
GWO1.99350.1175−0.00650.11751.97450.0178−0.02550.01850.13600.8421
WOA1.99380.1176−0.00620.11761.97440.0178−0.02560.01850.13610.8386
SCA2.08291.7308 0.08291.73771.96580.0364−0.03420.03761.77520.8539
200GA1.99270.0850−0.00730.08511.98590.0130−0.01410.01320.09830.1060
PSO1.97310.1099−0.02690.11061.99890.0230−0.00110.02300.13360.6544
GWO1.99270.0850−0.00730.08511.98590.0130−0.01410.01320.09830.5218
WOA2.01030.4093 0.01030.40941.98410.0168−0.01590.01710.42650.5206
SCA2.13412.6630 0.13412.68101.97160.0429−0.02840.04372.72470.5306
Table 8. Simulated Mean, Bias, Variance, MSE, and Def values for the ML estimators α̂ and β̂ when α = 3, β = 1.
n | Algorithm | α̂: Mean, Variance, Bias, MSE | β̂: Mean, Variance, Bias, MSE | Def | Mean CT
30GA3.01240.1393 0.01240.13950.98170.0219−0.01830.02220.16170.0767
PSO2.93480.2984−0.06520.30271.02950.0889 0.02950.08980.39240.5446
GWO3.01240.1393 0.01240.13950.98170.0219−0.01830.02220.16170.1825
WOA3.02860.4277 0.02860.42850.98120.0227−0.01880.02310.45160.1855
SCA4.422822.4900 1.422824.51440.90250.0906−0.09760.100124.61450.1889
50GA3.01510.0839 0.01510.08410.99330.0135−0.00670.01350.09770.0662
PSO2.96080.2002−0.03920.20171.03020.0621 0.03020.06300.26470.4253
GWO3.01510.0839 0.01510.08410.99330.0135−0.00670.01350.09770.1948
WOA3.01460.0839 0.01460.08410.99310.0135−0.00690.01350.09770.1934
SCA4.354921.0940 1.354922.92980.91920.0805−0.08080.087023.01680.1958
100GA3.00140.0441 0.00140.04410.99430.0064−0.00570.00640.05050.0951
PSO2.92910.1967−0.07090.20171.04220.0639 0.04220.06570.26740.5439
GWO3.00140.0441 0.00140.04410.99430.0064−0.00570.00640.05050.3592
WOA3.03490.6214 0.03490.62260.99260.0082−0.00740.00830.63090.3592
SCA4.277920.1040 1.277921.73700.92340.0699−0.07670.075821.81280.3625
150GA3.00660.0299 0.00660.02990.99870.0043−0.00130.00430.03420.1358
PSO2.92080.2204−0.07920.22671.05120.0725 0.05120.07510.30180.7339
GWO3.00660.0299 0.00660.02990.99870.0043−0.00130.00430.03420.5695
WOA3.02340.3187 0.02340.31920.99780.0052−0.00220.00520.32450.5686
SCA4.167818.3840 1.167819.74780.93320.0639−0.06680.068419.81610.5708
200GA3.00320.0221 0.00320.02210.99580.0034−0.00420.00340.02550.1161
PSO2.93890.1504−0.06110.15411.05110.0865 0.05110.08910.24320.6130
GWO3.00320.0221 0.00320.02210.99580.0034−0.00420.00340.02550.5469
WOA3.03620.5993 0.03620.60060.99410.0051−0.00590.00510.60570.5417
SCA3.783912.7150 0.783913.32950.95210.0441−0.04790.046413.37590.5441
Table 9. Simulated Mean, Bias, Variance, MSE, and Def values for the ML estimators α̂ and β̂ when α = 3, β = 2.
n | Algorithm | α̂: Mean, Variance, Bias, MSE | β̂: Mean, Variance, Bias, MSE | Def | Mean CT
30GA3.02250.6012 0.02250.60171.94680.0841−0.05320.08690.68860.0576
PSO2.99040.6933−0.00960.69341.97250.1022−0.02750.10300.79630.9065
GWO3.02250.6012 0.02250.60171.94680.0841−0.05320.08690.68860.1570
WOA3.02240.6012 0.02240.60171.94680.0842−0.05320.08700.68870.1464
SCA3.674311.9780 0.674312.43271.86900.2285−0.13100.245712.67830.1486
50GA3.01070.3429 0.01070.34301.96760.0537−0.03240.05470.39780.0718
PSO2.99270.3700−0.00730.37011.98430.0655−0.01570.06570.43580.5668
GWO3.01070.3429 0.01070.34301.96760.0537−0.03240.05470.39780.1890
WOA3.01100.3430 0.01100.34311.96750.0537−0.03250.05480.39790.1890
SCA3.25054.3296 0.25054.39241.94300.1048−0.05700.10804.50040.1919
100GA2.99490.1723−0.00510.17231.97300.0268−0.02700.02750.19990.1183
PSO2.96360.2362−0.03640.23751.99340.0475−0.00660.04750.28510.9177
GWO2.99490.1723−0.00510.17231.97300.0268−0.02700.02750.19990.4767
WOA2.99520.1726−0.00480.17261.97280.0268−0.02720.02750.20020.4748
SCA3.29885.2872 0.29885.37651.94010.0927−0.05990.09635.47280.4843
150GA2.99390.1153−0.00610.11531.97350.0167−0.02650.01740.13270.0886
PSO2.96860.1893−0.03140.19031.99770.0426−0.00230.04260.23290.6120
GWO2.99390.1153−0.00610.11531.97350.0167−0.02650.01740.13270.4065
WOA2.99410.1153−0.00590.11531.97340.0167−0.02660.01740.13270.4053
SCA3.25574.8181 0.25574.88351.94070.0793−0.05930.08284.96630.4130
200GA3.01120.0830 0.01120.08311.97640.0141−0.02360.01470.09780.1310
PSO2.98900.1246−0.01100.12471.98880.0276−0.01120.02770.15240.8417
GWO3.01120.0830 0.01120.08311.97640.0141−0.02360.01470.09780.6391
WOA3.01120.0830 0.01120.08311.97640.0141−0.02360.01470.09780.6333
SCA3.22503.8130 0.22503.86361.94890.0657−0.05110.06833.93190.6508
Table 10. The descriptive statistics for the tensile strength data.
n | Min | 1st Qu. | Mean | Mode | Median | 3rd Qu. | Max | S² | γ1 | γ2
69 | 1.3120 | 2.0892 | 2.4553 | 2.3010 | 2.4780 | 2.7797 | 3.8580 | 0.2554 | 0.1021 | 3.2253
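The S², γ1, and γ2 columns of the descriptive-statistics tables are the sample variance and the moment coefficients of skewness and kurtosis (γ2 = 3 for a normal law). A sketch assuming the usual moment definitions (the helper name is ours):

```python
def describe(x):
    """Sample variance S^2 plus the moment coefficients of skewness
    (gamma1) and kurtosis (gamma2). Hypothetical helper."""
    n = len(x)
    mean = sum(x) / n
    s2 = sum((v - mean) ** 2 for v in x) / (n - 1)    # unbiased sample variance
    m2 = sum((v - mean) ** 2 for v in x) / n          # central moments
    m3 = sum((v - mean) ** 3 for v in x) / n
    m4 = sum((v - mean) ** 4 for v in x) / n
    gamma1 = m3 / m2 ** 1.5          # skewness (0 for a symmetric sample)
    gamma2 = m4 / m2 ** 2            # kurtosis (3 for a normal distribution)
    return {"n": n, "Mean": mean, "S2": s2, "gamma1": gamma1, "gamma2": gamma2}
```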
Table 11. Parameter estimates, -ln L, AIC, AICc, CAIC, BIC, and HQIC values for the tensile strength dataset.
Model | λ̂ | α̂ | β̂ | -ln L | AIC | AICc | CAIC | BIC | HQIC
EMLOG | - | 2.2141 | 0.2472 | 50.4143 | 104.8286 | 105.0104 | 111.2968 | 109.2968 | 106.6013
Gamma | 22.8047 | - | 0.1077 | 50.9856 | 105.9712 | 106.1530 | 112.4394 | 110.4394 | 107.7439
Lognormal | - | 0.8762 | 0.2161 | 52.1663 | 108.3326 | 108.5144 | 114.8008 | 112.8008 | 110.1053
Log-logistic | - | 0.8883 | 0.1187 | 51.4346 | 106.8692 | 107.0510 | 113.3374 | 111.3374 | 108.6419
Weibull | 2.6585 | - | 5.2702 | 51.7165 | 107.4330 | 107.6148 | 113.9012 | 111.9012 | 109.2057
Rayleigh | - | - | 1.7720 | 87.4975 | 176.9950 | 177.0547 | 180.2291 | 179.2291 | 177.8813
Exponential | - | 2.4553 | - | 130.979 | 263.9580 | 264.0177 | 267.1921 | 266.1921 | 264.8443
Table 12. The descriptive statistics for the strengths of glass fibers data.
n | Min | 1st Qu. | Mean | Mode | Median | 3rd Qu. | Max | S² | γ1 | γ2
63 | 0.55 | 1.3675 | 1.5068 | 1.61 | 1.59 | 1.6875 | 2.24 | 0.1051 | −0.8999 | 3.9238
Table 13. Parameter estimates, -ln L, AIC, AICc, CAIC, BIC, and HQIC values for the strengths of glass fibers dataset.
Model | λ̂ | α̂ | β̂ | -ln L | AIC | AICc | CAIC | BIC | HQIC
EMLOG | - | 1.3923 | 0.1550 | 18.1280 | 40.2560 | 40.4560 | 46.5423 | 44.5423 | 41.9418
Gamma | 17.4396 | - | 0.0864 | 23.9515 | 51.9030 | 52.1030 | 58.1893 | 56.1893 | 53.5888
Lognormal | - | 0.3811 | 0.2599 | 28.0089 | 60.0178 | 60.2178 | 66.3041 | 64.3041 | 61.7036
Log-logistic | - | 0.4228 | 0.1262 | 22.7900 | 49.5800 | 49.7800 | 55.8663 | 53.8663 | 51.2658
Rayleigh | - | - | 1.0895 | 49.7909 | 101.5818 | 101.6474 | 104.7249 | 103.7249 | 102.4247
Exponential | - | 1.5068 | - | 88.8303 | 179.6606 | 179.7262 | 182.8037 | 181.8037 | 180.5035
Table 14. The descriptive statistics for the bladder cancer patients' data.
n | Min | 1st Qu. | Mean | Mode | Median | 3rd Qu. | Max | S² | γ1 | γ2
128 | 0.08 | 3.3350 | 9.2094 | 2.02 | 6.28 | 11.7150 | 79.05 | 108.2132 | 3.3987 | 19.3942
Table 15. Parameter estimates, -ln L, AIC, AICc, CAIC, BIC, and HQIC values for the bladder cancer patients' dataset.
Model | α̂ | β̂ | -ln L | AIC | AICc | CAIC | BIC | HQIC
EMLOG | 4.1273 | 3.7637 | 450.4944 | 904.9888 | 905.0848 | 912.6929 | 910.6929 | 907.3064
Normal | 9.2094 | 10.4026 | 480.9070 | 965.8140 | 965.9100 | 973.5181 | 971.5181 | 968.1316
Logistic | 7.4546 | 4.3693 | 453.7950 | 911.5900 | 911.6860 | 919.2941 | 917.2941 | 913.9076
Table 16. The descriptive statistics for the waiting times data.
n | Min | 1st Qu. | Mean | Mode | Median | 3rd Qu. | Max | S² | γ1 | γ2
100 | 0.80 | 4.65 | 9.8770 | 7.10 | 8.10 | 13.05 | 38.50 | 52.3741 | 1.4728 | 5.5403
Table 17. Parameter estimates, -ln L, AIC, AICc, CAIC, BIC, and HQIC values for the waiting times dataset.
Model | α̂ | β̂ | -ln L | AIC | AICc | CAIC | BIC | HQIC
EMLOG | 5.9090 | 3.2337 | 331.9065 | 667.8130 | 667.9367 | 675.0233 | 673.0233 | 669.9217
Normal | 9.8770 | 7.2370 | 339.3140 | 682.6280 | 682.7517 | 689.8383 | 687.8383 | 684.7367
Logistic | 8.9296 | 3.7895 | 334.6980 | 673.3960 | 673.5197 | 680.6063 | 678.6063 | 675.5047
Table 18. The descriptive statistics for the patients' relief times data.
n | Min | 1st Qu. | Mean | Mode | Median | 3rd Qu. | Max | S² | γ1 | γ2
20 | 1.100 | 1.4500 | 1.9000 | 1.7000 | 1.7000 | 2.1000 | 4.100 | 0.4958 | 1.7197 | 5.9241
Table 19. Parameter estimates, -ln L, AIC, AICc, CAIC, BIC, and HQIC values for the patients' relief times data.
Model | μ̂ | σ̂ | α̂ | -ln L | AIC | AICc | CAIC | BIC | HQIC
EMLOG | 1.5292 | 0.2907 | - | 18.6385 | 41.2770 | 41.9829 | 45.2685 | 43.2685 | 41.6658
Nakagami | - | 4.0810 | 2.3478 | 19.1701 | 42.3402 | 43.0461 | 46.3317 | 44.3317 | 42.7290
Logistic | 1.7905 | 0.3390 | - | 19.2433 | 42.4866 | 43.1925 | 46.4781 | 44.4781 | 42.8754
Weibull | - | 2.1300 | 2.7870 | 20.5864 | 45.1728 | 45.8787 | 49.1643 | 47.1643 | 45.5616
Normal | 1.9000 | 0.7041 | - | 20.8627 | 45.7254 | 46.4313 | 49.7169 | 47.7169 | 46.1142
Rayleigh | - | 1.4285 | - | 22.4788 | 46.9576 | 47.1798 | 48.9533 | 47.9533 | 47.1520
Extreme Value | 2.2913 | 0.9163 | - | 26.7927 | 57.5854 | 58.2913 | 61.5769 | 59.5769 | 57.9742
Table 20. The descriptive statistics for the windshield failure data.
n | Min | 1st Qu. | Mean | Mode | Median | 3rd Qu. | Max | S² | γ1 | γ2
84 | 0.040 | 1.8115 | 2.5575 | 1.2810 | 2.3545 | 3.4095 | 4.6630 | 1.2518 | 0.0995 | 2.3477
Table 21. Parameter estimates, -ln L, AIC, AICc, CAIC, BIC, and HQIC values for the windshield failure data.
Model | μ̂ | σ̂ | α̂ | -ln L | AIC | AICc | CAIC | BIC | HQIC
EMLOG | 1.9877 | 0.5581 | - | 130.5797 | 265.1594 | 265.3075 | 272.0210 | 270.0210 | 267.8467
Logistic | 2.5344 | 0.6485 | - | 131.359 | 266.7180 | 266.8661 | 273.5796 | 271.5796 | 268.6723
Extreme Value | 3.1180 | 1.0633 | - | 133.498 | 270.9960 | 271.1441 | 277.8576 | 275.8576 | 272.9503
Gamma | - | 0.7323 | 3.4922 | 136.937 | 277.8740 | 278.0221 | 284.7356 | 282.7356 | 279.8283
Log-logistic | 0.8718 | 0.31019 | - | 139.581 | 283.1620 | 283.3101 | 290.0236 | 288.0236 | 285.1163
Lognormal | 0.7891 | 0.691 | - | 153.923 | 309.8460 | 309.8948 | 313.2768 | 312.2768 | 310.8232
Inverse Gaussian | - | 2.5575 | 2.3595 | 182.557 | 369.1140 | 369.2621 | 375.9756 | 373.9756 | 371.0683
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Kasap, P.; Faouri, A.O. Comparison of the Meta-Heuristic Algorithms for Maximum Likelihood Estimation of the Exponentially Modified Logistic Distribution. Symmetry 2024, 16, 259. https://doi.org/10.3390/sym16030259