Bayesian Estimation for Two Parameters of Exponential Distribution under Different Loss Functions

In this paper, the two parameters of the Exponential distribution are estimated using the Bayesian method under three different loss functions: the Squared error loss function, the Precautionary loss function, and the Entropy loss function. An Exponential distribution and a Gamma distribution are assumed as the priors of the scale parameter γ and the location parameter δ, respectively. In the Bayesian estimation, the Maximum likelihood estimators are used as initial estimators, and the Tierney-Kadane approximation is applied. The estimators are compared by their mean squared errors (MSEs) in a Monte-Carlo simulation study. The results show that the Bayesian estimator under the Entropy loss function, with the Exponential and Gamma priors on the scale and location parameters, respectively, is the best estimator of the scale parameter. For the location parameter, the Bayesian estimator under the Entropy loss function is best when the scale γ is small (say γ < 1), while the Bayesian estimator under the Precautionary loss function is best when γ is relatively large (say γ > 1).

The performance of these estimators is assessed through a mean squared error (MSE)-based Monte-Carlo simulation study.
The probability density function (p.d.f.) of the two-parameter Exponential distribution is given by [1]:

f(x; γ, δ) = (1/γ) exp(−(x − δ)/γ), x ≥ δ, γ > 0,

where γ and δ are the scale and location parameters, respectively. The Exponential distribution is a special case of the Weibull distribution, obtained when the Weibull shape parameter equals 1 [2].
The cumulative distribution function (CDF) of the Exponential distribution is:

F(x; γ, δ) = 1 − exp(−(x − δ)/γ), x ≥ δ.
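As a quick numerical check, the p.d.f. and CDF above can be coded directly (a minimal Python sketch; the function names are ours, not from the paper):

```python
import math

def exp2_pdf(x, gamma, delta):
    """p.d.f. of the two-parameter Exponential distribution:
    f(x; γ, δ) = (1/γ)·exp(−(x − δ)/γ) for x ≥ δ, else 0."""
    if x < delta:
        return 0.0
    return math.exp(-(x - delta) / gamma) / gamma

def exp2_cdf(x, gamma, delta):
    """CDF: F(x; γ, δ) = 1 − exp(−(x − δ)/γ) for x ≥ δ, else 0."""
    if x < delta:
        return 0.0
    return 1.0 - math.exp(-(x - delta) / gamma)
```

At x = δ the density equals 1/γ and the CDF equals 0, matching the formulas above.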

Estimation Methods
This section presents some Bayesian estimators of the two unknown parameters of the Exponential distribution, using the Maximum likelihood estimates (MLE) as initial values for the Bayesian estimators.

Maximum Likelihood Estimator (MLE)
The Maximum likelihood (ML) method was developed by R. A. Fisher (1912) and has been widely used since then [3]. The method maximizes the likelihood function. Assume that X₁, X₂, …, Xₙ is a random sample of size n drawn from an Exponential distribution with scale parameter γ and location parameter δ. The maximum likelihood estimators are obtained by differentiating the logarithm of the likelihood function and equating the derivatives to zero. The likelihood function of the two-parameter Exponential distribution is:

L(x₁, x₂, …, xₙ; γ, δ) = γ⁻ⁿ exp(−(1/γ) Σᵢ₌₁ⁿ (xᵢ − δ)).

The logarithm of the likelihood function is:

ℓ = ln L = −n ln γ − (1/γ) Σᵢ₌₁ⁿ (xᵢ − δ).
Differentiating ℓ partially with respect to γ and equating to zero yields:

∂ℓ/∂γ = −n/γ + (1/γ²) Σᵢ₌₁ⁿ (xᵢ − δ) = 0.

Since ℓ is increasing in δ over its admissible range (δ ≤ min xᵢ), the δ̂ that maximizes the likelihood is obtained from the order statistics: δ̂ = min(X₁, X₂, …, Xₙ) = X₍₁₎.
After some simplification, we get:

γ̂ = (1/n) Σᵢ₌₁ⁿ (xᵢ − δ̂).
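The closed-form ML estimators can be sketched as follows (a hypothetical Python helper, not from the paper):

```python
def mle_exp2(sample):
    """ML estimates for the two-parameter Exponential distribution:
    δ̂ = min(xᵢ) (the first order statistic) and γ̂ = (1/n)·Σ(xᵢ − δ̂)."""
    delta_hat = min(sample)
    gamma_hat = sum(x - delta_hat for x in sample) / len(sample)
    return gamma_hat, delta_hat
```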

Bayesian Estimation

Posterior Density Function Using Exponential and Gamma Priors
To estimate the two unknown parameters γ and δ of the Exponential distribution, the prior J₁(·) for γ is assumed to be an Exponential distribution with parameter c, i.e. [4]:

J₁(γ) = c e^(−cγ), γ > 0.

On the other hand, the prior distribution J₂(·) for δ is assumed to be a Gamma distribution with parameters (a, b) as scale and shape parameters, respectively, i.e. [4]:

J₂(δ) = δ^(b−1) e^(−δ/a) / (a^b Γ(b)), δ > 0.

Furthermore, γ and δ are assumed to be independent random variables. Therefore, the joint prior distribution of γ and δ is:

J(γ, δ) = J₁(γ) J₂(δ).

Hence, the joint posterior density function of γ and δ is given by:

h(γ, δ | x₁, x₂, …, xₙ) = L(x₁, x₂, …, xₙ; γ, δ) J(γ, δ) / ∬ L(x₁, x₂, …, xₙ; γ, δ) J(γ, δ) dγ dδ.
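Under the independence assumption the joint prior factorizes, which can be sketched in Python as follows (the hyperparameter values c = 0.4, a = 0.3, b = 0.6 are those used later in the simulation study; the parameterization follows the roles stated above, a as scale and b as shape of the Gamma prior):

```python
import math

def prior_gamma_param(g, c=0.4):
    """Exponential(c) prior on the scale parameter γ: J₁(γ) = c·exp(−cγ), γ > 0."""
    return c * math.exp(-c * g)

def prior_delta_param(d, a=0.3, b=0.6):
    """Gamma prior on the location parameter δ with scale a and shape b:
    J₂(δ) = δ^(b−1)·exp(−δ/a) / (a^b·Γ(b)), δ > 0."""
    return d ** (b - 1) * math.exp(-d / a) / (a ** b * math.gamma(b))

def joint_prior(g, d):
    """Joint prior under independence: J(γ, δ) = J₁(γ)·J₂(δ)."""
    return prior_gamma_param(g) * prior_delta_param(d)
```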

Tierney-Kadane Approximation
Let u(γ, δ) be any function of γ and δ. Its Bayes estimate under squared error loss is the posterior expectation

E[u(γ, δ) | x] = ∬ u(γ, δ) L(x; γ, δ) J(γ, δ) dγ dδ / ∬ L(x; γ, δ) J(γ, δ) dγ dδ.

This ratio of two integrals is difficult to obtain in closed form, so approximation methods are used. One of them is the Tierney-Kadane approximation, which is applied as follows. Define the functions ℓ(γ, δ) and ℓ*(γ, δ) by

ℓ(γ, δ) = (1/n) [ln L(x; γ, δ) + ln J(γ, δ)],
ℓ*(γ, δ) = ℓ(γ, δ) + (1/n) ln u(γ, δ).

Furthermore, let (γ̂, δ̂) and (γ̂*, δ̂*) maximize ℓ(γ, δ) and ℓ*(γ, δ), respectively. Then E[u(γ, δ) | x] can be approximated as

E[u(γ, δ) | x] ≈ (|Σ*| / |Σ|)^(1/2) exp{n [ℓ*(γ̂*, δ̂*) − ℓ(γ̂, δ̂)]},

where Σ and Σ* are the negatives of the inverse Hessians of ℓ and ℓ* at (γ̂, δ̂) and (γ̂*, δ̂*), respectively.
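To make the approximation concrete, here is a simplified one-parameter Python illustration (not the paper's two-parameter derivation): δ is treated as known, u(γ) = γ, the prior on γ is Exponential(c), and the maximizers and second derivatives are found numerically.

```python
import math

def tk_posterior_mean(sample, delta, c):
    """Tierney-Kadane approximation to E[γ | x] with δ known and an
    Exponential(c) prior on γ (a one-parameter sketch of the method)."""
    n = len(sample)
    s = sum(x - delta for x in sample)

    def ell(g):       # ℓ(γ) = (1/n)·[ln L(γ) + ln J₁(γ)]
        return (-n * math.log(g) - s / g + math.log(c) - c * g) / n

    def ell_star(g):  # ℓ*(γ) = ℓ(γ) + (1/n)·ln u(γ), with u(γ) = γ
        return ell(g) + math.log(g) / n

    def maximize(f, lo=1e-6, hi=100.0, iters=200):
        # golden-section search for the maximizer of a unimodal f on [lo, hi]
        phi = (math.sqrt(5.0) - 1.0) / 2.0
        a, b = lo, hi
        for _ in range(iters):
            c1, c2 = b - phi * (b - a), a + phi * (b - a)
            if f(c1) > f(c2):
                b = c2
            else:
                a = c1
        return (a + b) / 2.0

    def second_deriv(f, x, h=1e-4):
        # central finite-difference second derivative
        return (f(x + h) - 2.0 * f(x) + f(x - h)) / h ** 2

    g_hat = maximize(ell)
    g_star = maximize(ell_star)
    sigma = math.sqrt(-1.0 / (n * second_deriv(ell, g_hat)))
    sigma_star = math.sqrt(-1.0 / (n * second_deriv(ell_star, g_star)))
    return (sigma_star / sigma) * math.exp(n * (ell_star(g_star) - ell(g_hat)))
```

The same scheme extends to the two-parameter case, with 2×2 Hessians in place of the scalar second derivatives.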

Bayes Estimator under Squared Error Loss Function
The Squared error loss function is a symmetric loss function given by Mood, Graybill, and Boes (1974). It is widely used for most estimation problems and is defined as [5]:

L(γ̂, γ) = (γ̂ − γ)².

The risk function R_S(γ̂, γ) is the posterior expectation of the loss function L(γ̂, γ) with respect to h(γ | x). That is:

R_S(γ̂, γ) = E[(γ̂ − γ)² | x] = γ̂² − 2γ̂ E(γ | x) + E(γ² | x). (9)

The value of γ̂ that minimizes the posterior risk (9) is obtained by setting its first derivative with respect to γ̂ equal to zero:

2γ̂ − 2E(γ | x) = 0, hence γ̂_S = E(γ | x),

where γ̂_S denotes the Bayesian estimator of γ under the Squared error loss function, i.e. the posterior mean.
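That the squared-error risk is minimized at the posterior mean can be verified numerically on a toy discrete posterior (hypothetical numbers, Python):

```python
gammas = [0.5, 1.0, 1.5, 2.0]   # toy support of a discrete posterior for γ
probs = [0.1, 0.4, 0.3, 0.2]    # hypothetical posterior probabilities

def risk_sel(g_hat):
    """Posterior expected squared error loss E[(γ̂ − γ)² | x]."""
    return sum(p * (g_hat - g) ** 2 for g, p in zip(gammas, probs))

post_mean = sum(p * g for g, p in zip(gammas, probs))   # E(γ | x)
# the risk at the posterior mean is below the risk at nearby candidates
candidates = [post_mean - 0.1, post_mean, post_mean + 0.1]
best = min(candidates, key=risk_sel)
```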

ii) Bayesian Estimation for δ under Squared Error Loss Function
Similarly, δ can be estimated by taking u(γ, δ) = δ in the Tierney-Kadane approximation, so that δ̂_S = E(δ | x). In order to compute |Σ*|, the second-order partial derivatives of ℓ* are first obtained.

Bayes Estimator under Precautionary Loss Function
Norstrom (1996) introduced an asymmetric precautionary loss function, with a general class of precautionary loss functions as a special case. These loss functions approach infinity near the origin to prevent underestimation, thus giving conservative estimators; this is especially useful when low failure rates are being estimated, where underestimation may have disastrous consequences. A very useful and simple asymmetric precautionary loss function is [6]:

L(γ̂, γ) = (γ̂ − γ)² / γ̂.

Based on the Precautionary loss function, the risk function R_P(γ̂, γ) can be derived as:

R_P(γ̂, γ) = E[(γ̂ − γ)² / γ̂ | x] = γ̂ − 2E(γ | x) + E(γ² | x)/γ̂.

Taking the partial derivative of R_P(γ̂, γ) with respect to γ̂ and setting it equal to zero yields:

−E(γ² | x) γ̂⁻² + 1 = 0, hence γ̂_P = [E(γ² | x)]^(1/2).
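The minimizer γ̂_P = √E(γ² | x) can likewise be checked on a toy discrete posterior (hypothetical numbers, Python):

```python
import math

gammas = [0.5, 1.0, 1.5, 2.0]   # toy support of a discrete posterior for γ
probs = [0.1, 0.4, 0.3, 0.2]    # hypothetical posterior probabilities

def risk_plf(g_hat):
    """Posterior expected precautionary loss E[(γ̂ − γ)²/γ̂ | x]."""
    return sum(p * (g_hat - g) ** 2 / g_hat for g, p in zip(gammas, probs))

# closed-form minimizer derived above: γ̂_P = √E(γ² | x)
g_p = math.sqrt(sum(p * g * g for g, p in zip(gammas, probs)))
```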

i) Bayesian Estimation for γ under Precautionary Loss Function
To obtain the Bayesian estimator of γ under the Precautionary loss function, take u(γ, δ) = γ², so that γ̂_P = [E(γ² | x)]^(1/2) is computed with the Tierney-Kadane approximation; the estimator of δ follows in the same way with u(γ, δ) = δ². In each case, computing |Σ*| requires the second-order partial derivatives of the corresponding ℓ*.

Bayes Estimator under Entropy Loss Function
The Entropy loss function was originally proposed by Calabria and Pulcini (1994). It is related to the Linex loss function and is defined as [7]:

L_E(γ̂, γ) = b [(γ̂/γ) − ln(γ̂/γ) − 1],

where we take b = 1. Based on the Entropy loss function, the risk function R_E(γ̂, γ) can be derived as:

R_E(γ̂, γ) = γ̂ E(γ⁻¹ | x) − E[ln(γ̂/γ) | x] − 1.

Taking the partial derivative of R_E(γ̂, γ) with respect to γ̂ and setting it equal to zero yields

E(γ⁻¹ | x) − 1/γ̂ = 0.

Hence, the Bayesian estimator under the Entropy loss function is:

γ̂_E = [E(γ⁻¹ | x)]⁻¹.
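The minimizer γ̂_E = [E(γ⁻¹ | x)]⁻¹ can again be checked on a toy discrete posterior (hypothetical numbers, Python):

```python
import math

gammas = [0.5, 1.0, 1.5, 2.0]   # toy support of a discrete posterior for γ
probs = [0.1, 0.4, 0.3, 0.2]    # hypothetical posterior probabilities

def risk_elf(g_hat):
    """Posterior expected entropy loss (b = 1): E[γ̂/γ − ln(γ̂/γ) − 1 | x]."""
    return sum(p * (g_hat / g - math.log(g_hat / g) - 1.0)
               for g, p in zip(gammas, probs))

# closed-form minimizer derived above: γ̂_E = [E(γ⁻¹ | x)]⁻¹
g_e = 1.0 / sum(p / g for g, p in zip(gammas, probs))
```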

i) Bayesian Estimation for γ under Entropy Loss Function
Bayesian estimation of γ under the Entropy loss function is obtained by taking u(γ, δ) = 1/γ in the Tierney-Kadane approximation, so that γ̂_E = [E(γ⁻¹ | x)]⁻¹. In order to compute |Σ*|, the second-order partial derivatives of ℓ* are first obtained.

ii) Bayesian Estimation for δ under Entropy Loss Function
Similarly, the estimator of δ under the Entropy loss function is obtained by taking u(γ, δ) = 1/δ. In order to compute |Σ*|, the second-order partial derivatives of ℓ* are first obtained.

Simulation Study
In this section, Monte-Carlo simulation is employed to compare the performance of four different estimators (the Maximum likelihood estimator and the Bayes estimators under the Squared error, Precautionary, and Entropy loss functions) of the unknown scale and location parameters. The comparison is made on the basis of the mean squared error (MSE), defined as:

MSE(θ̂) = (1/R) Σᵣ₌₁ᴿ (θ̂ᵣ − θ)²,

where R is the number of replications (generated samples). In this paper, R = 5000 samples of sizes n = 10, 30, 50, and 100 are generated, representing small, moderate, and large sample sizes, from an Exponential distribution with γ = 0.5, 1.5 and δ = 0.8, 2. The parameter of the prior distribution of γ is chosen as c = 0.4, and the two parameters of the Gamma prior of δ are taken as a = 0.3 and b = 0.6.
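The simulation loop can be sketched for the ML estimators as follows (a Python sketch, not the paper's code; the Bayes estimators, which need the Tierney-Kadane machinery above, are omitted here):

```python
import random

def mse_mle(R=5000, n=30, gamma=0.5, delta=0.8, seed=1):
    """Monte-Carlo MSEs of the ML estimators γ̂ and δ̂ over R replications."""
    rng = random.Random(seed)
    se_g = se_d = 0.0
    for _ in range(R):
        # draw a sample of size n from the two-parameter Exponential(γ, δ)
        x = [delta + rng.expovariate(1.0 / gamma) for _ in range(n)]
        d_hat = min(x)                             # δ̂ = X₍₁₎
        g_hat = sum(xi - d_hat for xi in x) / n    # γ̂ = mean of (xᵢ − δ̂)
        se_g += (g_hat - gamma) ** 2
        se_d += (d_hat - delta) ** 2
    return se_g / R, se_d / R
```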

Discussion
The results are summarized and tabulated in Tables (1-8), which contain the expected values and MSEs of the estimators of γ and δ. We observe that: 1. The MSE values of the different estimation methods increase with increasing values of γ or δ. 2. The Bayesian estimator under the Entropy loss function, assuming the Exponential and Gamma priors on the scale and location parameters, respectively, is the best estimator of γ. 3. The best estimation method for δ is the Bayesian estimation under the Entropy loss function for small values of γ (say γ < 1), while the Bayesian estimation under the Precautionary loss function is best for relatively large values of γ (say γ > 1). 4. Generally, the Bayes estimates of both γ and δ are better than the Maximum likelihood estimates.