A Comparison Between the MLE and Standard Bayes Estimators of the Reliability Function of the Exponential Distribution

In this paper, a Monte Carlo simulation technique is used to compare the performance of the MLE and the standard Bayes estimators of the reliability function of the one-parameter exponential distribution. Two types of loss functions are adopted, namely, the squared error loss function (SELF) and the modified squared error loss function (MSELF), with informative and noninformative priors. The integrated mean squared error (IMSE) criterion is employed to assess the performance of these estimators.


Introduction
Reliability theory is concerned with the random occurrence of unwanted events or failures during the life of a physical or biological system [1]. Reliability is an essential feature of a system. The basic concepts associated with reliability have been known for many years; however, they have gained their greatest significance during the past decade as a consequence of the use of highly complex systems. In reliability theory, the exponential distribution plays an important role in life-testing experiments. Historically, it was the first lifetime model for which statistical procedures were widely developed. Many researchers have given numerous results for, and generalizations of, the exponential distribution as a lifetime distribution, particularly in the field of industrial life testing. The exponential distribution is attractive because of its simplicity and its characteristic features, such as the lack of memory and the self-reproducing property. The probability density, cumulative distribution, and reliability functions of the one-parameter exponential distribution are defined, respectively, as [2]:

f(t; \theta) = \theta e^{-\theta t}, \quad t \ge 0, \ \theta > 0    (1)

The cumulative distribution function is given by

F(t) = \Pr(T \le t) = 1 - e^{-\theta t}    (2)

and the reliability function by

R(t) = 1 - F(t) = e^{-\theta t}    (3)
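As a quick numerical sketch of Eqs. (1)-(3), the three functions can be coded directly; the rate value θ = 0.5 below is an illustrative assumption, not one fixed by the paper.

```python
import math

# Illustrative rate parameter (an assumed value, not one fixed by the paper).
THETA = 0.5

def pdf(t, theta=THETA):
    """Probability density f(t; theta) = theta * exp(-theta * t), Eq. (1)."""
    return theta * math.exp(-theta * t)

def cdf(t, theta=THETA):
    """Cumulative distribution F(t) = 1 - exp(-theta * t), Eq. (2)."""
    return 1.0 - math.exp(-theta * t)

def reliability(t, theta=THETA):
    """Reliability R(t) = 1 - F(t) = exp(-theta * t), Eq. (3)."""
    return math.exp(-theta * t)

# R(t) and F(t) are complementary at every t.
for t in (0.0, 0.5, 1.0, 2.0):
    assert abs(reliability(t) + cdf(t) - 1.0) < 1e-12
```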

Maximum Likelihood Estimator
Let t_1, t_2, \dots, t_n be a set of n random lifetimes from the one-parameter exponential distribution. Then the likelihood function is

L(\theta; t_1, t_2, \dots, t_n) = \theta^{n} e^{-\theta \sum_{i=1}^{n} t_i}    (4)

The value of \theta that maximizes L also maximizes \ln L. Hence, taking the natural logarithm of the likelihood function gives [3]

\ln L(\theta; t_1, t_2, \dots, t_n) = n \ln \theta - \theta \sum_{i=1}^{n} t_i

Differentiating the log-likelihood function with respect to \theta and setting the resulting derivative equal to zero yields

\frac{\partial \ln L(\theta; t_1, t_2, \dots, t_n)}{\partial \theta} = \frac{n}{\theta} - \sum_{i=1}^{n} t_i = 0

Therefore the MLE of \theta, denoted \hat{\theta}, is

\hat{\theta} = \frac{n}{T}    (5)

where we set T = \sum_{i=1}^{n} t_i. Hence, by the invariance property, the MLE of the reliability function is

\hat{R}(t) = e^{-\hat{\theta} t}    (6)
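The two estimators in Eqs. (5) and (6) amount to one division and one exponential; a minimal sketch with a hypothetical sample:

```python
import math

def mle_theta(sample):
    """MLE of theta: theta_hat = n / T with T = sum of lifetimes, Eq. (5)."""
    return len(sample) / sum(sample)

def mle_reliability(t, sample):
    """MLE of R(t) by the invariance property: exp(-theta_hat * t), Eq. (6)."""
    return math.exp(-mle_theta(sample) * t)

# A small hypothetical sample of lifetimes (illustrative values only).
sample = [0.4, 1.1, 2.3, 0.7, 1.5]
theta_hat = mle_theta(sample)        # n / T = 5 / 6.0
print(theta_hat, mle_reliability(1.0, sample))
```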

Standard Bayes Estimators
Two types of loss functions are employed, namely, the squared error loss function (SELF) and the modified squared error loss function (MSELF). The Bayes estimator of the parameter \theta is the value \hat{\theta} that minimizes the risk function R(\hat{\theta}, \theta) = E[L(\hat{\theta}, \theta)]. The squared error loss function is defined as

L(\hat{\theta}, \theta) = (\hat{\theta} - \theta)^{2}

and its Bayes estimator is the posterior mean, \hat{\theta} = E(\theta \mid t). The modified squared error loss function is defined as [5]

L(\hat{\theta}, \theta) = \theta^{r} (\hat{\theta} - \theta)^{2}    (10)

where r is a positive integer. If the MSELF is adopted, it can be shown in the same manner that the Bayes estimator of \theta is

\hat{\theta} = \frac{E(\theta^{r+1} \mid t)}{E(\theta^{r} \mid t)}    (11)
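A small numerical check of the ratio-of-moments rule: under a loss of the form theta^r (d - theta)^2, the posterior expected loss is quadratic in the decision d, so the ratio of posterior moments minimizes it. The Gamma posterior used below (shape 5, rate 4) is an illustrative assumption.

```python
import random

random.seed(0)

r = 3
# Draws from an assumed Gamma posterior: shape 5, scale 1/4 (i.e., rate 4).
draws = [random.gammavariate(5, 1 / 4.0) for _ in range(50_000)]

def posterior_risk(d):
    """Monte Carlo estimate of E[theta**r * (d - theta)**2 | data]."""
    return sum(th**r * (d - th) ** 2 for th in draws) / len(draws)

m_r = sum(th**r for th in draws) / len(draws)            # E[theta^r | data]
m_r1 = sum(th ** (r + 1) for th in draws) / len(draws)   # E[theta^(r+1) | data]
d_star = m_r1 / m_r                                      # ratio-of-moments rule

# The ratio of moments beats nearby perturbed decisions.
assert posterior_risk(d_star) <= posterior_risk(d_star * 1.05)
assert posterior_risk(d_star) <= posterior_risk(d_star * 0.95)
```

Because the estimated risk is exactly quadratic in d with the same sample moments, the minimizing decision coincides with the empirical moment ratio, so the assertions hold deterministically.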

Posterior Density Based on Jeffrey's prior information
Let us assume that \theta has a noninformative prior density. Jeffreys (1961) developed a general rule for obtaining the prior distribution of \theta [6]. He established that the single unknown parameter \theta, regarded as a random variable, follows a distribution proportional to the square root of the Fisher information on \theta; that is [5],

g(\theta) \propto \sqrt{I(\theta)} \propto \frac{1}{\theta}

From Bayes' theorem, the posterior density function of \theta, denoted h(\theta \mid t), can be derived as [4]

h(\theta \mid t) = \frac{g(\theta) \, L(\theta; t_1, \dots, t_n)}{\int_0^{\infty} g(\theta) \, L(\theta; t_1, \dots, t_n) \, d\theta}, \quad T = \sum_{i=1}^{n} t_i

Hence, the posterior density function of \theta based on Jeffreys' prior information is

h(\theta \mid t) = \frac{T^{n} \theta^{n-1} e^{-\theta T}}{\Gamma(n)}    (14)

The posterior density in equation (14) is identified as the density of a Gamma distribution; that is, \theta \mid t_1, \dots, t_n \sim \mathrm{Gamma}(n, T), with E(\theta) = n/T and \mathrm{Var}(\theta) = n/T^{2}.
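Given the Gamma(n, T) posterior of Eq. (14), the SELF Bayes estimator of R(t) has the closed form E(e^{-\theta t} \mid t) = (T/(T+t))^{n}, obtained from the Gamma moment generating function. A sketch with an illustrative sample (function name and data are assumptions of this sketch):

```python
def bayes_reliability_self_jeffreys(t, sample):
    """SELF Bayes estimator of R(t) under Jeffreys' prior:
    E[exp(-theta * t) | data] with theta | data ~ Gamma(n, T), T = sum(t_i),
    which equals (T / (T + t)) ** n."""
    n, T = len(sample), sum(sample)
    return (T / (T + t)) ** n

sample = [0.4, 1.1, 2.3, 0.7, 1.5]          # hypothetical lifetimes
print(bayes_reliability_self_jeffreys(0.5, sample))
```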

Posterior Density Based on Gamma Prior Distribution
Assume that \theta has an informative Gamma prior distribution, which takes the form

g(\theta) = \frac{\beta^{\alpha} \theta^{\alpha - 1} e^{-\beta \theta}}{\Gamma(\alpha)}, \quad \theta > 0, \ \alpha > 0, \ \beta > 0    (15)

where \alpha and \beta are the shape parameter and scale parameter, respectively. Combining this prior with the likelihood (4) through Bayes' theorem yields the posterior \theta \mid t \sim \mathrm{Gamma}(n + \alpha, T + \beta).
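Because the Gamma prior of Eq. (15) is conjugate to the exponential likelihood, the posterior update and the resulting SELF estimator of R(t) are short closed forms; the helper names and the sample below are illustrative assumptions.

```python
def gamma_posterior_params(sample, alpha, beta):
    """Conjugate update: theta | data ~ Gamma(n + alpha, T + beta),
    with T = sum(t_i)."""
    return len(sample) + alpha, sum(sample) + beta

def bayes_reliability_self_gamma(t, sample, alpha, beta):
    """SELF Bayes estimator of R(t) under the Gamma prior:
    E[exp(-theta * t) | data] = ((T + beta) / (T + beta + t)) ** (n + alpha)."""
    a, b = gamma_posterior_params(sample, alpha, beta)
    return (b / (b + t)) ** a

# Illustrative call with one of the paper's prior settings (alpha=0.3, beta=1.2).
print(bayes_reliability_self_gamma(0.5, [0.4, 1.1, 2.3, 0.7, 1.5], 0.3, 1.2))
```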

Simulation Study
The simulation study was conducted in order to compare the performance of the maximum likelihood estimator (MLE) and the Bayes estimators of the reliability function R(t) of the one-parameter exponential distribution. We adopted the integrated mean squared error (IMSE) as the criterion of comparison, where

\mathrm{IMSE}(\hat{R}(t)) = \frac{1}{L} \sum_{l=1}^{L} \left[ \frac{1}{n_t} \sum_{i=1}^{n_t} \left( \hat{R}_{l}(t_i) - R(t_i) \right)^{2} \right]

Here n_t is the number of points in the grid of values of t, taken as t = 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1; L is the number of replications, set to L = 1000 in our study; and \hat{R}_{l}(t_i) is the estimate of R(t_i) at the l-th replication.
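The IMSE criterion translates directly into code; here it is applied to the MLE of R(t), with the t-grid and replication count as in the paper (function names are illustrative):

```python
import math
import random

random.seed(1)

def imse(estimator, theta, n, t_grid, L=1000):
    """IMSE: average over L replications of the mean squared error of
    R_hat over the t-grid, with true R(t) = exp(-theta * t)."""
    total = 0.0
    for _ in range(L):
        sample = [random.expovariate(theta) for _ in range(n)]
        total += sum((estimator(t, sample) - math.exp(-theta * t)) ** 2
                     for t in t_grid) / len(t_grid)
    return total / L

def mle_R(t, sample):
    return math.exp(-(len(sample) / sum(sample)) * t)   # Eq. (6)

t_grid = [0.1 * k for k in range(1, 11)]                # t = 0.1, ..., 1.0
print(imse(mle_R, theta=0.5, n=10, t_grid=t_grid))
```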
The Bayes estimators of R(t) are derived with respect to two loss functions, the squared error loss function (SELF) and the modified squared error loss function (MSELF); moreover, both informative and noninformative priors were postulated. The sample sizes n = 10, 50, 100, and 200 were chosen to represent small, moderate, large, and very large samples from the one-parameter exponential distribution. The postulated values of the single parameter \theta were \theta = 0.5, 1.5, and the values of the parameters of the Gamma prior were \alpha = 0.3, 1 and \beta = 1.2, 3.
The values assumed for the loss parameter r of the modified loss function were r = 1, 3. The results are presented in Tables 1-4.
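One cell of this design (Jeffreys' prior, θ = 0.5, n = 10) can be sketched as follows. The closed-form MSELF estimator of R(t) used here, ((T + r t)/(T + (r+1) t))^n, follows from the ratio-of-moments rule applied to R(t) under a Gamma(n, T) posterior; it is an assumption of this sketch rather than a formula quoted verbatim from the paper.

```python
import math
import random

random.seed(2025)

THETA, N, L = 0.5, 10, 1000
T_GRID = [0.1 * k for k in range(1, 11)]

def mle_R(t, T, n):
    return math.exp(-(n / T) * t)            # Eq. (6)

def bayes_R_self(t, T, n):
    return (T / (T + t)) ** n                # SELF, Jeffreys' prior

def bayes_R_mself(t, T, n, r=3):
    # MSELF: E[R^(r+1) | data] / E[R^r | data] under a Gamma(n, T) posterior.
    return ((T + r * t) / (T + (r + 1) * t)) ** n

estimators = {"MLE": mle_R, "SELF": bayes_R_self, "MSELF r=3": bayes_R_mself}
imse = {name: 0.0 for name in estimators}
for _ in range(L):
    T = sum(random.expovariate(THETA) for _ in range(N))
    for name, est in estimators.items():
        imse[name] += sum((est(t, T, N) - math.exp(-THETA * t)) ** 2
                          for t in T_GRID) / len(T_GRID) / L

for name, value in imse.items():
    print(f"{name:10s} IMSE = {value:.5f}")
```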

Simulation Results and Conclusions
From our simulation study, the following results are clear:

- From Table 1: when \theta = 0.5, the Bayes estimator under the modified squared error loss function with r = 3 and Jeffreys' prior is the best compared with the other estimators for all sample sizes.
- From Table 2: when \theta = 1.5, the Bayes estimator under the MSELF with r = 3 and Jeffreys' prior is the best for sample sizes 50, 100, and 200, while for sample size 10 the best is the MSELF estimator with r = 1.
- From Table 3: when \theta = 0.5, for n = 10 the best is the Bayes estimator under the MSELF with r = 3 and Gamma prior (\alpha = 1, \beta = 3), and for n = 50, 100, 200 the Bayes estimator under the MSELF with r = 3 and Gamma prior (\alpha = 0.3, \beta = 3) is the best.
- From Table 4: when \theta = 1.5, for n = 10 the Bayes estimator under the squared error loss function with Gamma prior (\alpha = 1, \beta = 1.2) is the best; for n = 50 and 200 the Bayes estimator under the MSELF with r = 3 and Gamma prior (\alpha = 0.3, \beta = 3) is the best; and for n = 100 the Bayes estimator under the MSELF with r = 3 and Gamma prior (\alpha = 1, \beta = 3) is the best.

According to the simulation results, we conclude that the Bayes estimator under the modified squared error loss function with r = 3 is superior to the other estimators in almost all of the cases studied in this paper, where the integrated mean squared error (IMSE) is employed as the criterion to assess the performance of the estimators.
Here c is a constant of proportionality and I(\theta) denotes the Fisher information, defined as

I(\theta) = -n E\left[ \frac{\partial^{2} \ln f(t; \theta)}{\partial \theta^{2}} \right] = \frac{n}{\theta^{2}}

Let r = 1, 3, and let \hat{\theta}_{1}, \hat{\theta}_{2} denote the estimators of the parameter \theta corresponding to r = 1 and r = 3, respectively; they are obtained by applying formulas (11) and (21).

Bayes estimator when the SELF is adopted, the case of Jeffreys' prior information: the k-th moment of \theta \mid t can be evaluated as follows:

E(\theta^{k} \mid t) = \int_{0}^{\infty} \theta^{k} h(\theta \mid t) \, d\theta = \frac{\Gamma(n + k)}{\Gamma(n) \, T^{k}}    (21)

For the reliability function, the corresponding posterior expectations E(R(t) \mid t), E(R(t)^{r+1} \mid t), and E(R(t)^{r} \mid t) must then be determined in the same manner.
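The moment formula in (21) can be checked against the known moments of the Gamma(n, T) distribution; the values of n and T below are illustrative.

```python
import math

def posterior_moment(k, n, T):
    """k-th moment of theta | data ~ Gamma(n, T):
    E[theta**k] = Gamma(n + k) / (Gamma(n) * T**k), Eq. (21)."""
    return math.gamma(n + k) / (math.gamma(n) * T ** k)

# Sanity checks against known Gamma(n, T) moments (illustrative n, T):
n, T = 8, 6.0
assert abs(posterior_moment(1, n, T) - n / T) < 1e-9              # mean n/T
assert abs(posterior_moment(2, n, T) - n * (n + 1) / T**2) < 1e-9  # E[theta^2]
```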

Table 1. IMSE values of the reliability function estimators using Jeffreys' prior information at \theta = 0.5

Table 2. IMSE values of the reliability function estimators using Jeffreys' prior information at \theta = 1.5

Table 3. IMSE values of the reliability function estimators using Gamma prior information at \theta = 0.5

Table 4. IMSE values of the reliability function estimators using Gamma prior information at \theta = 1.5