Bayesian Approximation Techniques for the Generalized Inverted Exponential Distribution

In this article, Bayesian techniques are adopted to estimate the shape parameter of the generalized inverted exponential distribution (GIED) in the case of complete samples. Normal approximation, Lindley's approximation, and Tierney and Kadane's approximation are used to derive the Bayesian estimators. Different non-informative priors are considered, namely Jeffrey's prior, the Quasi prior, the modified Jeffrey's prior, and the extension of Jeffrey's prior. Informative priors are also used, including the Gamma, Pareto, and inverse Levy priors. The Bayesian estimators are derived under the squared error (quadratic) loss function. Monte Carlo simulations are carried out to compare the estimators based on the mean square error (MSE) of the estimates. All estimators using the normal, Lindley's, and Tierney and Kadane's approximation techniques perform consistently, since the MSE decreases as the sample size increases. For large samples, estimators based on non-informative priors using the normal approximation are usually better than the ones using Lindley's approximation. Two real data sets from reliability and medicine are fitted to the GIED to assess its flexibility. By comparing the estimation results with other generalized models, we show that estimating this model using Bayesian approximation techniques gives good results for investigating estimation problems. The models compared in this research are the generalized inverse Weibull distribution (GIWD), the inverse Weibull distribution (IWD), and the inverse exponential distribution (IED).


Introduction
Lifetime models are widely used in the statistical inference field. These models are very important in many areas such as engineering, medicine, zoology, and forecasting. The generalized inverted exponential distribution (GIED) is one of the important lifetime models. It was first proposed by Bakoban et al. [1]. GIED is a flexible model because its hazard function can take various shapes.
The probability density function (PDF) of the two-parameter GIED is
$$f(x; \alpha, \lambda) = \frac{\alpha \lambda}{x^2}\, e^{-\lambda/x} \left(1 - e^{-\lambda/x}\right)^{\alpha - 1}, \quad x > 0,\ \alpha > 0,\ \lambda > 0, \qquad (1)$$
and the cumulative distribution function (CDF) is
$$F(x; \alpha, \lambda) = 1 - \left(1 - e^{-\lambda/x}\right)^{\alpha}, \quad x > 0,$$
where $\alpha$ is the shape parameter and $\lambda$ is the scale parameter.
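As a quick sanity check, the PDF and CDF above can be coded directly. The sketch below is our own illustration (not the paper's code); the numerical check simply confirms that the integral of the density matches the distribution function.

```python
import math

def gied_pdf(x, alpha, lam):
    """GIED density: (alpha*lam/x^2) * exp(-lam/x) * (1 - exp(-lam/x))^(alpha-1)."""
    return (alpha * lam / x**2) * math.exp(-lam / x) * (1 - math.exp(-lam / x))**(alpha - 1)

def gied_cdf(x, alpha, lam):
    """GIED distribution function: F(x) = 1 - (1 - exp(-lam/x))^alpha."""
    return 1 - (1 - math.exp(-lam / x))**alpha

def integrate_pdf(t, alpha, lam, steps=100000):
    """Midpoint-rule integral of the density over (0, t]; should match gied_cdf(t)."""
    h = t / steps
    return sum(gied_pdf((i + 0.5) * h, alpha, lam) for i in range(steps)) * h
```

With the paper's simulation values $\alpha = \lambda = 2$, `integrate_pdf(5, 2, 2)` agrees with `gied_cdf(5, 2, 2)` to several decimal places.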
The GIED distribution has attracted the recent attention of statisticians but has not been discussed in detail in the Bayesian approach. Some authors are interested in this distribution or its generalization [2][3][4][5][6][7][8].
On the other hand, others study GIED using Bayesian methods. Ahmed [9] obtains the Bayesian estimators of GIED based on Type II progressive censored samples by applying Lindley's approximation and an importance sampling technique. In addition, Oguntunde et al. [10] discuss Bayesian predictors based on progressive Type-II censoring. Further, Hassan et al. [11] derive the Bayesian estimators based on the Markov Chain Monte Carlo method. Abouammoh et al. [12] study the exponentiated generalized inverse exponential distribution; they derive statistical properties and study applications to real-life data as compared with some other generalized models. Moreover, Singh et al. [13] study Bayesian estimators of the reliability function based on upper record values and upper record ranked set samples using Lindley's approximation. Eraikhuemen et al. [14] discuss Bayesian and maximum likelihood estimation of the shape parameter of the exponential inverse exponential distribution using a comparative approach, with Bayesian estimation derived under informative and non-informative priors. In Bayesian analysis, it is well known that Bayesian estimators are usually expressed in an implicit form. Therefore, many approximation procedures are used to evaluate Bayesian estimators. Shawky et al. [15], Singh et al. [16], Sultan et al. [17,18], and Fatima et al. [19] discuss some approximation approaches, such as Lindley's, Tierney and Kadane's (T-K), and normal approximation methods, to compute the Bayesian estimators of the exponentiated Gamma, Marshall-Olkin extended exponential, Kumaraswamy, Topp-Leone, and inverse exponential distributions, respectively. So, in this article, we use the normal, Lindley's, and Tierney and Kadane's approximation methods to derive Bayesian estimators for the shape parameter of GIED in Sections 2 and 3. The rest of the article is organized as follows. Section 4 studies the simulation and presents numerical results. Section 5 applies the model to real data sets.
Finally, Sections 6 and 7 discuss the results and present the conclusion of the study.

Bayesian Approximation Methods
The Bayesian estimate of any function of $m$, say $u(m)$, under the squared error loss function is
$$\hat{u}(m) = E[u(m)\mid x] = \frac{\int u(m)\,P(m\mid x)\,dm}{\int P(m\mid x)\,dm},$$
where $P(m\mid x) \propto L(x\mid m)\,p(m)$ is the posterior distribution; $L(x\mid m)$ and $p(m)$ are the likelihood function and the prior distribution of $m$, respectively. The estimator $\hat{u}(m)$ is also called the posterior mean. The Bayesian method is one of the important estimation methods, but sometimes the posterior distribution contains complicated functions and requires further computation, so an approximation of the posterior distribution is needed. Thus, in this article, we use the following approximation techniques.
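To make the definition concrete, the posterior mean can also be evaluated by direct numerical integration. The sketch below is our own illustration: for the GIED shape parameter under a Jeffrey's-type prior $p(\alpha) \propto 1/\alpha$ (an assumed choice), the posterior kernel is $\alpha^{n-1} e^{-\alpha T}$, a Gamma$(n, T)$ kernel whose exact mean is $n/T$.

```python
import math

def posterior_mean(n, T, upper=50.0, steps=200000):
    """E[alpha|x] = (∫ alpha * P dalpha) / (∫ P dalpha) for the kernel
    alpha^(n-1) * exp(-alpha*T), computed by a midpoint rule on (0, upper]."""
    h = upper / steps
    num = den = 0.0
    for i in range(steps):
        a = (i + 0.5) * h
        w = a**(n - 1) * math.exp(-a * T)  # unnormalized posterior density
        num += a * w
        den += w
    return num / den
```

For $n = 20$ and $T = 10$, the quadrature reproduces the closed-form posterior mean $n/T = 2.0$, which is the kind of answer the approximation methods below try to reach without integration.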

Normal Approximation
When the posterior distribution $P(m\mid x)$ has a unimodal, symmetric curve, it can be approximated by a normal distribution centered at the posterior mode $\hat{m}$, so we can write the approximation as
$$P(m\mid x) \sim N\left[\hat{m},\, I^{-1}(\hat{m})\right], \qquad I(\hat{m}) = -\left.\frac{\partial^2 \log L(m\mid x)}{\partial m^2}\right|_{m=\hat{m}}, \qquad (3)$$
where $I(\hat{m})$ is the negative Hessian and $L(x\mid m)$ is the likelihood function.

Lindley's Approximation
Lindley [20] gives an approximation to the integration ratio
$$I = \frac{\int u(m)\, e^{L(m) + U(m)}\, dm}{\int e^{L(m) + U(m)}\, dm},$$
where $L(m)$ is the log-likelihood function, and $u(m)$ and $U(m)$ are arbitrary functions of $m$; the integrals run over the value range of $m$. Thus the posterior mean, $I = E[u(m)\mid x]$, can be evaluated as
$$E[u(m)\mid x] \approx u(\hat{m}) + \frac{1}{2}\left[u''(\hat{m}) + 2\,u'(\hat{m})\,U'(\hat{m})\right]\sigma^2 + \frac{1}{2}\, l^{(3)}(\hat{m})\, u'(\hat{m})\, \sigma^4,$$
where $l^{(i)} = \partial^i \log L/\partial m^i$, $i = 1, 2, 3$; $\sigma^2 = [-l^{(2)}(\hat{m})]^{-1}$; $U(m) = \log p(m)$; $U'(m) = \partial U(m)/\partial m$; $\hat{m}$ is the maximum likelihood estimate of $m$; $L(x\mid m)$ is the likelihood function and $p(m)$ is the prior distribution.
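The one-parameter Lindley formula translates directly into code. Below is an illustrative sketch (helper names are our own): the caller supplies $u$ and its derivatives, the second and third log-likelihood derivatives, and the derivative of the log-prior $U$, all as callables evaluated at the MLE.

```python
def lindley_posterior_mean(m_hat, u, u1, u2, l2, l3, U1):
    """Lindley approximation of E[u(m)|x] at the MLE m_hat:
    u + 0.5*(u'' + 2*u'*U') * sigma^2 + 0.5*l''' * u' * sigma^4,
    with sigma^2 = -1/l''(m_hat) and U = log prior."""
    s2 = -1.0 / l2(m_hat)
    return (u(m_hat)
            + 0.5 * (u2(m_hat) + 2.0 * u1(m_hat) * U1(m_hat)) * s2
            + 0.5 * l3(m_hat) * u1(m_hat) * s2**2)
```

As a check against the GIED results derived later: with log-likelihood $n\log\alpha - \alpha T$ (so $\hat{\alpha} = n/T$, $l^{(2)} = -n/\alpha^2$, $l^{(3)} = 2n/\alpha^3$), $u(\alpha) = \alpha$, and the extension of Jeffrey's prior ($U' = -2s/\alpha$), the routine reproduces the closed form $(n - 2s + 1)/T$ exactly.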

Tierney and Kadane's (T-K) Approximation
For any arbitrary function $u(m)$, Tierney and Kadane [21] propose a Laplace form to compute the posterior mean $E[u(m)\mid x]$ as
$$E[u(m)\mid x] = \frac{\int e^{n g^{*}(m)}\, dm}{\int e^{n g_{0}(m)}\, dm} \approx \frac{\sigma^{*}}{\sigma_{0}}\, e^{\,n\left[g^{*}(\hat{m}^{*}) - g_{0}(\hat{m}_{0})\right]}, \qquad (6)$$
where $n g^{*}(m) = n g_{0}(m) + \log u(m)$, $n g_{0}(m) = \log P(m\mid x)$, $\hat{m}_{0}$ and $\hat{m}^{*}$ maximize $g_{0}$ and $g^{*}$, respectively, $\sigma_{0}^{2} = [-n g_{0}''(\hat{m}_{0})]^{-1}$, $\sigma^{*2} = [-n g^{*\prime\prime}(\hat{m}^{*})]^{-1}$, and $P(m\mid x)$ is the posterior distribution.
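Eq. (6) is easy to implement generically. The sketch below (our own code, with ternary search for the maxima and finite differences for the curvatures) takes the unnormalized log-posterior and $u$ as callables:

```python
import math

def tk_posterior_mean(log_post, u, lo=1e-6, hi=100.0, iters=300, h=1e-4):
    """Tierney-Kadane approximation of E[u(m)|x]:
    (sigma*/sigma0) * exp(g*(m*) - g0(m0)), where g0 is the (unnormalized)
    log posterior and g* = g0 + log u. log_post must be unimodal on [lo, hi]."""
    def maximize(f):
        a, b = lo, hi
        for _ in range(iters):
            m1, m2 = a + (b - a) / 3, b - (b - a) / 3
            if f(m1) < f(m2):
                a = m1
            else:
                b = m2
        m = (a + b) / 2
        curv = (f(m - h) - 2 * f(m) + f(m + h)) / h**2
        return m, math.sqrt(-1.0 / curv)  # maximizer and sigma
    g_star = lambda m: log_post(m) + math.log(u(m))
    m0, s0 = maximize(log_post)
    ms, ss = maximize(g_star)
    return (ss / s0) * math.exp(g_star(ms) - log_post(m0))
```

For the GIED-type posterior kernel $\alpha^{20} e^{-10\alpha}$ (a Gamma(21, 10) density with exact mean 2.1) and $u(\alpha) = \alpha$, the routine returns approximately 2.10, illustrating the accuracy of the Laplace ratio even at moderate sample sizes.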
In the following section, we derive Bayesian estimators for GIED using these approximation methods with different priors.

Bayesian Estimators of GIED
In this section, Bayesian approximation techniques are studied for estimating the shape parameter of the generalized inverted exponential distribution based on complete samples. The normal, Lindley's, and Tierney and Kadane's approximation methods are used to compute the Bayesian estimators, which are derived under the squared error loss function for different prior distributions. Let $x = (x_1, x_2, \ldots, x_n)$ be a random sample of size $n$ drawn from a GIED. Then, by using Eq. (1), the log-likelihood function can be written as
$$\log L(x\mid \alpha) = n \log \alpha + n \log \lambda - 2\sum_{i=1}^{n} \log x_i - \lambda \sum_{i=1}^{n} \frac{1}{x_i} - (\alpha - 1)\, T, \qquad T = -\sum_{i=1}^{n} \log\left(1 - e^{-\lambda/x_i}\right), \qquad (12)$$
so that, as a function of $\alpha$ alone, the likelihood satisfies $l \propto \alpha^{n} e^{-\alpha T}$. The posterior distribution is obtained by multiplying the likelihood from Eq. (12) with the prior distribution $p(\alpha)$ as $P(\alpha\mid x) \propto l\, p(\alpha)$. In this article, we choose some informative and non-informative priors, as shown in Tab. 1. In the following subsections, the Bayesian estimators are obtained based on the squared error loss function, i.e., as the posterior mean $E(\alpha\mid x)$.
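All of the data enters the posterior through the statistic $T$. A small helper (our own illustration) computes it, using `log1p` to keep the sum accurate when $e^{-\lambda/x_i}$ is small:

```python
import math

def gied_T(xs, lam):
    """T = -sum(log(1 - exp(-lam/x_i))). As a function of the shape alpha alone,
    the GIED log-likelihood is then n*log(alpha) - alpha*T plus alpha-free terms."""
    return -sum(math.log1p(-math.exp(-lam / x)) for x in xs)
```

For a single observation $x = 2$ with $\lambda = 2$, this gives $-\log(1 - e^{-1}) \approx 0.4587$.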

Bayesian Estimators Using Normal Approximation
In this subsection, we derive Bayesian estimators for GIED using the priors in Tab. 1.

ii. Quasi Prior
The posterior distribution of $\alpha$ is
$$P(\alpha\mid x) \propto \alpha^{n-d}\, e^{-\alpha T}, \quad d \geq 0,\ \alpha > 0,$$
where $T$ is defined in Eq. (12). Then the logarithmic posterior distribution is
$$\log P(\alpha\mid x) = (n - d)\log \alpha - \alpha T + \text{const}. \qquad (16)$$
By taking the first derivative of Eq. (16) with respect to $\alpha$, we get the posterior mode $\hat{\alpha} = (n - d)/T$. Therefore, the negative Hessian is $I(\hat{\alpha}) = T^2/(n - d)$, and the posterior distribution can be approximated as in Eq. (3). Thus, for GIED,
$$P(\alpha\mid x) \sim N\left[(n - d)/T,\ (n - d)/T^2\right].$$
Hence, the Bayesian estimator of $\alpha$ with the Quasi prior using the normal approximation is $\hat{\alpha} = (n - d)/T$.

iii. Modified Jeffrey's Prior
The posterior distribution of $\alpha$ is
$$P(\alpha\mid x) \propto \alpha^{n - 3/2}\, e^{-\alpha T}, \quad \alpha > 0,$$
where $T$ is defined in Eq. (12). Then the logarithmic posterior distribution is
$$\log P(\alpha\mid x) = (n - 3/2)\log \alpha - \alpha T + \text{const}. \qquad (17)$$
By taking the first derivative of Eq. (17) with respect to $\alpha$, we get the posterior mode $\hat{\alpha} = (n - 3/2)/T$. Therefore, the negative Hessian is $I(\hat{\alpha}) = T^2/(n - 3/2)$, and the posterior distribution can be approximated as in Eq. (3). Thus, for GIED,
$$P(\alpha\mid x) \sim N\left[(n - 1.5)/T,\ (n - 1.5)/T^2\right].$$
Hence, the Bayesian estimator of $\alpha$ with the modified Jeffrey's prior using the normal approximation is $\hat{\alpha} = (n - 1.5)/T$.
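The closed forms just derived are one-liners in code. The sketch below (our own helper names) returns both the mean and the variance of the approximating normal for each prior:

```python
def quasi_normal_estimate(n, T, d):
    """Quasi prior: posterior approx N[(n-d)/T, (n-d)/T^2]; returns (mean, variance)."""
    return (n - d) / T, (n - d) / T**2

def modified_jeffreys_normal_estimate(n, T):
    """Modified Jeffrey's prior: posterior approx N[(n-1.5)/T, (n-1.5)/T^2]."""
    return (n - 1.5) / T, (n - 1.5) / T**2
```

For example, with $n = 20$ and $T = 10$, the quasi prior with $d = 1$ gives the estimate 1.9 and the modified Jeffrey's prior gives 1.85.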

iv. Extension of Jeffrey's Prior
The posterior distribution of $\alpha$ is
$$P(\alpha\mid x) \propto \alpha^{n - 2s}\, e^{-\alpha T}, \quad s > 0,\ \alpha > 0,$$
where $T$ is defined in Eq. (12).

iv. Extension of Jeffrey's Prior
By taking the logarithm of the extension of Jeffrey's prior, we get $U(\alpha) = -2s \log \alpha$, $U'(\alpha) = -2s/\alpha$, and, at the maximum likelihood estimate $\hat{\alpha} = n/T$, $U'(\hat{\alpha}) = -2sT/n$. Then, by using Eqs. (22)-(25), we obtain the posterior mean for GIED with the extension of Jeffrey's prior as
$$E[\alpha\mid x] \approx (n - 2s + 1)/T.$$

Bayesian Estimators Using Tierney and Kadane's Approximation
To obtain the Bayesian estimator for GIED, we evaluate the posterior mean $E[u(\alpha)\mid x]$ using Tierney and Kadane's approximation as in Eq. (6). By setting $u(\alpha) = \alpha$ in Eq. (6), recalling $T$ as defined in Eq. (12), and using the priors in Tab. 1, the Bayesian estimators are derived as follows.

Simulation and Numerical Results
In this section, a simulation study is conducted to assess the performance of the estimators derived in the previous section. Mathematica V. 11.0 is used to run a Monte Carlo simulation with 10,000 iterations. Samples of sizes n = 20, 50, 100, and 500 are generated from GIED using the quantile formula
$$x = -\lambda / \log\left[1 - (1 - U)^{1/\alpha}\right], \quad U \sim \mathrm{Uniform}(0, 1),$$
with true values $\alpha = 2$ and $\lambda = 2$. Bayesian estimates are computed using the normal, Lindley's, and Tierney and Kadane's approximation methods. All estimates are evaluated and tabulated in Tabs. 2-4. The mean square error (MSE) is computed to assess the performance of the Bayesian estimates with informative and non-informative priors.
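The simulation loop can be sketched as follows. This is a Python illustration of the Mathematica experiment, not the paper's code; the estimator shown is the Jeffrey's-prior posterior mean $n/T$, an assumed representative choice among the estimators above.

```python
import math
import random

def gied_sample(n, alpha, lam, rng):
    """Inverse-CDF sampling: x = -lam / log(1 - (1 - U)^(1/alpha))."""
    return [-lam / math.log(1 - (1 - rng.random())**(1 / alpha)) for _ in range(n)]

def mse(n, alpha, lam, reps=2000, seed=0):
    """Monte Carlo mean square error of the estimate alpha_hat = n/T."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        xs = gied_sample(n, alpha, lam, rng)
        T = -sum(math.log1p(-math.exp(-lam / x)) for x in xs)
        total += (n / T - alpha)**2
    return total / reps
```

Consistent with the pattern in Tabs. 2-4, the MSE shrinks as n grows: with $\alpha = \lambda = 2$, `mse(100, 2, 2)` is well below `mse(20, 2, 2)`.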
From Tabs. 2-4, we note that all estimators using the normal, Lindley's, and Tierney and Kadane's approximation techniques perform consistently, in that the MSE decreases as the sample size increases. In addition, we conclude that the estimator with the Gamma prior has the lowest MSE for all techniques. Tierney and Kadane's technique works more effectively with large samples (n = 100 and 500) than the normal and Lindley's approximations. However, the normal approximation is better than Lindley's and Tierney and Kadane's approximations with non-informative priors for n = 20 and 50, and Tierney and Kadane's approximation is better than Lindley's approximation in this case. Moreover, estimators under informative priors for n = 20 and 50 using Lindley's approximation are usually better than those using the normal and Tierney and Kadane's approximations. For n = 100 and 500, estimators based on non-informative priors using the normal approximation are usually better than the ones using Lindley's approximation. The normal approximation works as well as Lindley's approximation with informative priors for n = 100, but Lindley's works better than the normal for n = 500 with informative priors.

Applications to Real Data
In this section, the GIED distribution is applied to real-life data sets to assess its flexibility over its baseline distribution and some other generalized models. The baseline models are the generalized inverse Weibull distribution [22, GIWD], the inverse Weibull distribution [23, IWD], and the inverse exponential distribution [24, IED]. The fitting performance is evaluated by the Kolmogorov-Smirnov (K-S) statistic and several information criteria. A model fits best if it has the lowest Akaike information criterion (AIC), log-likelihood statistic (LL), Bayesian information criterion (BIC), consistent Akaike information criterion (CAIC), and Hannan-Quinn information criterion (HQIC) [25]. The formulas of these criteria are
$$\mathrm{BIC} = -2\,l(\hat{m}) + k \log n, \qquad \mathrm{HQIC} = -2\,l(\hat{m}) + 2k \log(\log n), \qquad \mathrm{LL} = -2\,l(\hat{m}),$$
where $l(\hat{m})$ denotes the log-likelihood function evaluated at the maximum likelihood estimates $\hat{m}$, $k$ is the number of parameters, and $n$ is the sample size. The following two case studies illustrate the estimators' validity in applications.
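These criteria are straightforward to compute from the maximized log-likelihood. A sketch (our own code): AIC uses its standard form $-2l + 2k$, which the text lists but does not display; CAIC is omitted here because several variants of its formula exist.

```python
import math

def fit_criteria(loglik, k, n):
    """Model-selection criteria from the maximized log-likelihood l(m_hat);
    smaller values indicate a better fit."""
    return {
        "LL":   -2.0 * loglik,                         # -2 l(m_hat)
        "AIC":  -2.0 * loglik + 2.0 * k,               # standard Akaike criterion
        "BIC":  -2.0 * loglik + k * math.log(n),
        "HQIC": -2.0 * loglik + 2.0 * k * math.log(math.log(n)),
    }
```

For instance, a two-parameter model with maximized log-likelihood $-50$ on $n = 30$ observations has LL = 100, AIC = 104, BIC ≈ 106.80, and HQIC ≈ 104.90.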

I. Animal Vaccination Data Set
The first data set is the numbers (in thousands) of animals vaccinated against the most widespread epidemic diseases in the 13 regions of Saudi Arabia from 1/1/2020 to 6/30/2020, as published on the electronic platform (Anaam). This data is downloaded from (https://data.gov.sa/Data/en/dataset/thenumbers-of-animals-vaccinated-against-the-most-widespread-epidemic-diseases).
The statistics of the data set and the performance measures of the models are presented in Tabs. 5 and 6, respectively.

II. Medical Data Set
The second data below shows the survival time (in months) of patients with Hodgkin's disease and heavy therapy (nitrogen mustards) [26].
The statistics of the data set and the performance measures of the models are presented in Tabs. 7 and 8, respectively. According to the experiments, GIED shows flexibility to real data sets since it has the lowest AIC, BIC, CAIC, HQIC, LL, and K-S values, as shown in Tabs. 6 and 8. Our model fits better than the competing IED, IWD, and GIWD models. We choose IWD and GIWD as competing models because GIED is a special case of them, so researchers can use GIED instead of those models; this reduces the computation required for estimation while yielding better results. On the other hand, GIED shows better fitting performance than its own special case, IED. The plots in Figs. 1 and 2 show that GIED has the best approximate fitting performance, especially for the survival times of patients with Hodgkin's disease.

Discussion
It is well known that Bayesian estimators usually have implicit (non-closed) forms, which are hard to code, program, and compute. Therefore, it is very important to study and compare Bayesian approximation techniques with different priors in statistical inference, and especially in Bayesian analysis. These approximations are useful for computing non-closed-form estimators, which are very important for reliability analysis. This article presents a comparison among the normal, Lindley's, and Tierney and Kadane's approximations for Bayesian estimators using seven informative and non-informative priors.
Singh et al. [16] compare Lindley's, Tierney and Kadane's, and Markov Chain Monte Carlo (MCMC) methods for the Marshall-Olkin extended exponential distribution. Their results show that for n = 20 and 50, Tierney and Kadane's approximation outperforms Lindley's. Fatima et al. [19] compare two techniques, the normal and Tierney and Kadane's, for IED. Their results show that the normal approximation with the extension of Jeffrey's prior performs better.
Our results on simulated data show that estimators under informative priors for n = 20 and 50 using Lindley's approximation are usually better than normal and Tierney and Kadane's approximations. But with non-informative priors for n = 20 and 50, Tierney and Kadane's approximation is better than Lindley's approximation, which agrees with the results of Singh et al. [16].
Moreover, our results in the simulation study show that normal approximation is better than Tierney and Kadane's approximation with non-informative priors for n = 20 and 50, which agrees with the results of Fatima et al. [19].
Furthermore, in this article, a flexible model, the generalized inverted exponential distribution, is used as a lifetime model and applied to two real data sets from reliability and medicine. Thus, estimating this model using Bayesian approximation techniques gives good results for investigating estimation problems.

Conclusion
In this article, we estimate the shape parameter of GIED using three Bayesian approximation techniques: the normal, Lindley's, and Tierney and Kadane's approximations. Tierney and Kadane's method works better than the other methods for large samples. Estimates with informative priors are better than those with non-informative priors, and estimates with the Gamma prior are the best among all estimators for all three techniques. This work generalizes the study of the inverse exponential distribution by Fatima et al. [19].