
Article

Inference for Inverse Power Lomax Distribution with Progressive First-Failure Censoring

1 School of Electronics Engineering, Xi’an University of Posts and Telecommunications, Xi’an 710121, China
2 School of Mathematics and Statistics, Northwestern Polytechnical University, Xi’an 710072, China
* Author to whom correspondence should be addressed.
Entropy 2021, 23(9), 1099; https://doi.org/10.3390/e23091099
Submission received: 28 July 2021 / Revised: 21 August 2021 / Accepted: 23 August 2021 / Published: 24 August 2021

Abstract

This paper investigates the statistical inference of inverse power Lomax distribution parameters under progressive first-failure censored samples. The maximum likelihood estimates (MLEs) and the asymptotic confidence intervals are derived based on the iterative procedure and asymptotic normality theory of MLEs, respectively. Bayesian estimates of the parameters under squared error loss and generalized entropy loss function are obtained using independent gamma priors. For Bayesian computation, Tierney–Kadane’s approximation method is used. In addition, the highest posterior credible intervals of the parameters are constructed based on the importance sampling procedure. A Monte Carlo simulation study is carried out to compare the behavior of various estimates developed in this paper. Finally, a real data set is analyzed for illustration purposes.

1. Introduction

In the life test of a product, restrictions on test time, cost and other resources mean that a complete life test is generally not performed. In such cases, experimenters often use censoring schemes to obtain censored lifetime data. Many types of censoring schemes exist; the most popular are Type-I and Type-II censoring. In Type-I censoring, the test ends at a pre-fixed time, while in Type-II censoring, the test ends when the m-th failure occurs (m being fixed in advance). The common disadvantage of these two schemes is that no unit can be removed from the test before it is terminated. Progressive censoring (PC) was therefore proposed, which is more efficient in lifetime experiments: under this scheme, test units can be removed at various stages of the experiment. For more details, refer to Balakrishnan and Aggarwala [1]; an excellent review of progressive censoring schemes can be found in Ref. [2]. Besides PC, there is another scheme, namely first-failure censoring, in which experimenters divide the test units into several groups, run all groups simultaneously, and observe only the first failure in each group. The first-failure censoring scheme was studied by Johnson [3], Balasooriya et al. [4], Wu et al. [5] and Wu and Yu [6]. However, this scheme does not allow the removal of units from the test at points other than the final termination point. Wu and Kus [7] combined the advantages of first-failure censoring and progressive censoring to propose a mixed scheme, the progressive first-failure censoring (PFFC) scheme, and obtained maximum likelihood estimates (MLEs), interval estimates and the expected time on test for the parameters of the Weibull distribution based on PFFC samples.
The PFFC scheme can be described as follows: suppose that n independent groups with k items within each group are put on a life test at time zero, and the progressive censoring scheme $\tilde R=(R_1,R_2,\ldots,R_m)$ is fixed in advance. At the first failure time $X_{1:m:n:k}$, $R_1$ groups and the group in which the first failure is observed are randomly removed from the test. Similarly, at the second failure time $X_{2:m:n:k}$, $R_2$ groups and the group in which the second failure is observed are randomly removed from the remaining $(n-R_1-1)$ groups. This procedure continues until the m-th failure time $X_{m:m:n:k}$ is observed in the remaining groups, at which point all the remaining $R_m$ groups are removed. It is clear that $n=m+R_1+R_2+\cdots+R_m$. The observed failure times $X_{1:m:n:k}<X_{2:m:n:k}<\cdots<X_{m:m:n:k}$ are called the PFFC sample with the progressive censoring scheme $\tilde R=(R_1,R_2,\ldots,R_m)$. Here, $(m,n,k)$ must be pre-specified.
The main advantage of the PFFC scheme is that it shortens the test time while allowing more items to be placed on test, even though only m out of the $n\times k$ items are observed to fail. Several special cases arise: if $R_1=R_2=\cdots=R_m=0$, the PFFC scheme reduces to first-failure censoring; if $k=1$, it becomes progressive Type-II censoring; and if $k=1$, $R_1=R_2=\cdots=R_{m-1}=0$ and $R_m=n-m$, it reduces to the Type-II censoring scheme. Furthermore, the progressively first-failure censored sample $X_{1:m:n:k}<X_{2:m:n:k}<\cdots<X_{m:m:n:k}$ can be viewed as a progressively Type-II censored sample from a population with distribution function $1-(1-F(x))^{k}$, which enables us to extend all results on progressively Type-II censored order statistics to progressive first-failure (PFF) censored order statistics.
Because of the flexibility of the PFFC scheme, many scholars have discussed and applied it in reliability studies. Ref. [8] studied statistical inferences of the unknown parameters, the reliability and failure functions of the inverted exponentiated Half-Logistic distribution using PFFC samples. Ref. [9] investigated a competing risks data model under PFFC from a Gompertz distribution using Bayesian and non-Bayesian methods. Ref. [10] considered the estimates of the unknown parameters and reliability characteristics of generalized inverted exponential distribution using PFFC samples. Ref. [11] established different reliability sampling plans using two criteria from a Lognormal distribution based on the PFFC. Some recent studies on the PFFC scheme can be found in Refs. [12,13,14,15,16].
Inverse distributions have a wide range of applications in econometrics, biological sciences, survey sampling, engineering sciences, medical research and life testing problems. In recent years, several authors have studied statistical inference for inverse distributions. For example, Dube et al. [10] studied the MLEs and Bayesian estimators of the unknown parameters and reliability characteristics of the generalized inverted exponential distribution using progressively first-failure censored samples. Panahi and Moradi [17] discussed the estimation of the inverted exponentiated Rayleigh distribution based on an adaptive Type-II progressive hybrid censored sample. Bantan et al. [18] studied the estimation of the Rényi and q-entropies of the inverse Lomax distribution under multiple censored data, proposing an efficient estimation strategy based on maximum likelihood and plug-in methods. None of these works, however, investigated statistical inference for the three-parameter inverse power Lomax distribution under progressive first-failure censoring. Some other related studies on inverse distributions can be found in Nassar and Abo-Kasem [19], Lee and Cho [20], Xu and Cui [21] and Rashad et al. [22].
In 2019, a new three-parameter lifetime distribution named the inverse power Lomax (IPL) distribution was introduced by Hassan and Abd-Allah [23]. The probability density function (PDF) $f(\cdot)$ and cumulative distribution function (CDF) $F(\cdot)$ of the IPL distribution are given, respectively, by

$$f(t;\alpha,\beta,\lambda)=\alpha\beta\lambda^{-1}t^{-(\beta+1)}\left(1+\lambda^{-1}t^{-\beta}\right)^{-(\alpha+1)},\quad t>0,\tag{1}$$

$$F(t;\alpha,\beta,\lambda)=\left(1+\lambda^{-1}t^{-\beta}\right)^{-\alpha},\quad t>0,\tag{2}$$
where $\alpha>0$ and $\beta>0$ are shape parameters and $\lambda>0$ is a scale parameter. The IPL distribution is very flexible for analyzing data with a non-monotonic failure rate, so the model can be used in many practical data modeling and analysis settings; see Ref. [23]. To facilitate engineering applications, Ref. [23] studied some statistical properties of the IPL distribution and obtained the MLEs of the model parameters based on conventional Type-I and Type-II censored samples. However, they did not discuss the PFFC scheme, which is widely used in survival analysis and life testing.
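For later reference, the density, distribution and quantile functions in Equations (1) and (2) can be sketched in Python (a minimal illustration; the function names are ours, not from the paper):

```python
import math

def ipl_cdf(t, alpha, beta, lam):
    """IPL CDF: F(t) = (1 + t^(-beta)/lam)^(-alpha), t > 0."""
    return (1.0 + t ** (-beta) / lam) ** (-alpha)

def ipl_pdf(t, alpha, beta, lam):
    """IPL PDF: f(t) = (alpha*beta/lam) * t^(-(beta+1)) * (1 + t^(-beta)/lam)^(-(alpha+1))."""
    return (alpha * beta / lam) * t ** (-(beta + 1)) * (1.0 + t ** (-beta) / lam) ** (-(alpha + 1))

def ipl_ppf(u, alpha, beta, lam):
    """Quantile function: invert u = F(t), giving t = [lam*(u^(-1/alpha) - 1)]^(-1/beta)."""
    return (lam * (u ** (-1.0 / alpha) - 1.0)) ** (-1.0 / beta)
```

The quantile function is what the inverse-transform sampling in Section 4 relies on.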
Since the IPL distribution contains three unknown parameters, estimating them under progressive censoring is comparatively complicated, and to date there has been no published work on statistical inference for the IPL distribution under the PFFC scheme. The main aim of this paper is therefore to develop both classical and Bayesian inference for the IPL distribution under the PFFC scheme.
The rest of this paper is organized as follows: In Section 2, the MLEs and asymptotic confidence intervals of the unknown parameters are derived. Based on Tierney–Kadane’s approximation method, Bayesian estimates of the parameters under squared error loss and generalized entropy loss function are obtained in Section 3. In addition, the highest posterior density (HPD) credible intervals of the parameters are constructed by using the importance sampling method. In Section 4, Monte Carlo simulations are carried out to investigate the performances of different point estimates and interval estimates. In Section 5, a real data set has been analyzed for illustrative purposes. The conclusions are given in Section 6.

2. Maximum Likelihood Estimation

In this section, the MLEs of the parameters of the IPL distribution are derived under the PFFC scheme. Let $X_i=X_{i:m:n:k}$, $i=1,2,\ldots,m$, be the progressive first-failure censored order statistics from the IPL distribution with censoring scheme $\tilde R=(R_1,R_2,\ldots,R_m)$. Then, using Equations (1) and (2), the likelihood function is given by

$$L(\alpha,\beta,\lambda\mid x)=Ck^{m}\prod_{i=1}^{m}f(x_i;\alpha,\beta,\lambda)\left[1-F(x_i;\alpha,\beta,\lambda)\right]^{k(R_i+1)-1}=Ck^{m}\alpha^{m}\beta^{m}\lambda^{-m}\prod_{i=1}^{m}x_i^{-\beta-1}\xi_i^{-\alpha-1}\left(1-\xi_i^{-\alpha}\right)^{k(R_i+1)-1},\quad 0<x_1<x_2<\cdots<x_m,\tag{3}$$

where $C=n(n-R_1-1)\cdots(n-m+1-R_1-\cdots-R_{m-1})$, $x=(x_1,x_2,\ldots,x_m)$ and $\xi_i=1+\lambda^{-1}x_i^{-\beta}$. The log-likelihood function is given by

$$\ln L(\alpha,\beta,\lambda\mid x)\propto m\ln(\alpha\beta)-m\ln\lambda-(\beta+1)\sum_{i=1}^{m}\ln x_i-(\alpha+1)\sum_{i=1}^{m}\ln\xi_i+\sum_{i=1}^{m}\left[k(R_i+1)-1\right]\ln\left(1-\xi_i^{-\alpha}\right).\tag{4}$$
Let $l(\alpha,\beta,\lambda\mid x)=\ln L(\alpha,\beta,\lambda\mid x)$. Taking the first partial derivatives of the log-likelihood with respect to $\alpha$, $\beta$ and $\lambda$ and equating them to zero yields

$$\frac{\partial l}{\partial\alpha}=\frac{m}{\alpha}-\sum_{i=1}^{m}\ln\xi_i+\sum_{i=1}^{m}\frac{\left[k(R_i+1)-1\right]\ln\xi_i}{\xi_i^{\alpha}-1}=0,\tag{5}$$

$$\frac{\partial l}{\partial\beta}=\frac{m}{\beta}-\sum_{i=1}^{m}\ln x_i+\frac{\alpha+1}{\lambda}\sum_{i=1}^{m}\frac{x_i^{-\beta}\ln x_i}{\xi_i}-\sum_{i=1}^{m}\frac{\left[k(R_i+1)-1\right]\alpha x_i^{-\beta}\ln x_i}{\lambda\left(\xi_i^{\alpha+1}-\xi_i\right)}=0,\tag{6}$$

$$\frac{\partial l}{\partial\lambda}=-\frac{m}{\lambda}+\frac{\alpha+1}{\lambda^{2}}\sum_{i=1}^{m}\frac{x_i^{-\beta}}{\xi_i}-\sum_{i=1}^{m}\frac{\left[k(R_i+1)-1\right]\alpha x_i^{-\beta}}{\lambda^{2}\left(\xi_i^{\alpha+1}-\xi_i\right)}=0.\tag{7}$$
The MLEs of $\alpha$, $\beta$ and $\lambda$ can be obtained by solving Equations (5)–(7), but these equations do not admit an analytical solution. Therefore, we use the Newton–Raphson iteration method to obtain the MLEs of the parameters. For this purpose, we first compute the entries $I_{ij}=-\partial^{2}l/\partial\theta_i\partial\theta_j$, $i,j=1,2,3$, of the observed information matrix, where $(\theta_1,\theta_2,\theta_3)=(\alpha,\beta,\lambda)$. Here,

$$I_{11}=-\frac{\partial^{2}l}{\partial\alpha^{2}}=\frac{m}{\alpha^{2}}+\sum_{i=1}^{m}\frac{\left[k(R_i+1)-1\right]\xi_i^{\alpha}(\ln\xi_i)^{2}}{\left(\xi_i^{\alpha}-1\right)^{2}},\tag{8}$$

$$I_{12}=I_{21}=-\frac{\partial^{2}l}{\partial\alpha\,\partial\beta}=-\sum_{i=1}^{m}\frac{x_i^{-\beta}\ln x_i}{\lambda\xi_i}+\sum_{i=1}^{m}\left[k(R_i+1)-1\right]\left[\frac{x_i^{-\beta}\ln x_i}{\lambda\left(\xi_i^{\alpha+1}-\xi_i\right)}-\frac{\alpha\xi_i^{\alpha-1}x_i^{-\beta}\ln\xi_i\ln x_i}{\lambda\left(\xi_i^{\alpha}-1\right)^{2}}\right],\tag{9}$$

$$I_{13}=I_{31}=-\frac{\partial^{2}l}{\partial\alpha\,\partial\lambda}=-\sum_{i=1}^{m}\frac{x_i^{-\beta}}{\lambda^{2}\xi_i}+\sum_{i=1}^{m}\left[k(R_i+1)-1\right]\frac{x_i^{-\beta}\left(\xi_i^{\alpha}-1\right)-\alpha\xi_i^{\alpha}x_i^{-\beta}\ln\xi_i}{\lambda^{2}\xi_i\left(\xi_i^{\alpha}-1\right)^{2}},\tag{10}$$

$$I_{22}=-\frac{\partial^{2}l}{\partial\beta^{2}}=\frac{m}{\beta^{2}}+\frac{\alpha+1}{\lambda}\sum_{i=1}^{m}\frac{\lambda x_i^{-\beta}\xi_i(\ln x_i)^{2}-x_i^{-2\beta}(\ln x_i)^{2}}{\lambda\xi_i^{2}}-\sum_{i=1}^{m}\frac{k(R_i+1)-1}{\lambda^{2}\left(\xi_i^{\alpha+1}-\xi_i\right)^{2}}\left[\lambda\alpha x_i^{-\beta}(\ln x_i)^{2}\left(\xi_i^{\alpha+1}-\xi_i\right)-\alpha(\alpha+1)\xi_i^{\alpha}x_i^{-2\beta}(\ln x_i)^{2}+\alpha x_i^{-2\beta}(\ln x_i)^{2}\right],\tag{11}$$

$$I_{23}=I_{32}=-\frac{\partial^{2}l}{\partial\beta\,\partial\lambda}=\frac{\alpha+1}{\lambda^{2}}\sum_{i=1}^{m}\frac{x_i^{-\beta}\xi_i\ln x_i-\lambda^{-1}x_i^{-2\beta}\ln x_i}{\xi_i^{2}}-\sum_{i=1}^{m}\frac{k(R_i+1)-1}{\lambda^{4}\left(\xi_i^{\alpha+1}-\xi_i\right)^{2}}\left[\alpha\lambda^{2}x_i^{-\beta}\ln x_i\left(\xi_i^{\alpha+1}-\xi_i\right)-\lambda\alpha(\alpha+1)\xi_i^{\alpha}x_i^{-2\beta}\ln x_i+\lambda\alpha x_i^{-2\beta}\ln x_i\right],\tag{12}$$

$$I_{33}=-\frac{\partial^{2}l}{\partial\lambda^{2}}=-\frac{m}{\lambda^{2}}+\frac{2(\alpha+1)}{\lambda^{3}}\sum_{i=1}^{m}\frac{x_i^{-\beta}}{\xi_i}-\frac{\alpha+1}{\lambda^{4}}\sum_{i=1}^{m}\frac{x_i^{-2\beta}}{\xi_i^{2}}-\sum_{i=1}^{m}\left[k(R_i+1)-1\right]\left[\frac{2\alpha x_i^{-\beta}}{\lambda^{3}\left(\xi_i^{\alpha+1}-\xi_i\right)}-\frac{\alpha(\alpha+1)x_i^{-2\beta}\xi_i^{\alpha}}{\lambda^{4}\left(\xi_i^{\alpha+1}-\xi_i\right)^{2}}+\frac{\alpha x_i^{-2\beta}}{\lambda^{4}\left(\xi_i^{\alpha+1}-\xi_i\right)^{2}}\right].\tag{13}$$
The Newton–Raphson iteration method can be implemented according to the following steps:
Step 1: Give initial values $\theta^{(0)}=(\alpha^{(0)},\beta^{(0)},\lambda^{(0)})$ for $\theta=(\alpha,\beta,\lambda)$.
Step 2: In the M-th iteration, calculate the gradient $\left(\frac{\partial l}{\partial\alpha},\frac{\partial l}{\partial\beta},\frac{\partial l}{\partial\lambda}\right)\big|_{\theta=\theta^{(M)}}$ and the matrix

$$I(\theta^{(M)})=\begin{bmatrix}I_{11}&I_{12}&I_{13}\\I_{21}&I_{22}&I_{23}\\I_{31}&I_{32}&I_{33}\end{bmatrix}\Bigg|_{\theta=\theta^{(M)}},$$

where $\theta^{(M)}=(\alpha^{(M)},\beta^{(M)},\lambda^{(M)})$ and the entries $I_{ij}$, $i,j=1,2,3$, are given by Equations (8)–(13).
Step 3: Update $(\alpha,\beta,\lambda)^{T}$ by

$$\left(\alpha^{(M+1)},\beta^{(M+1)},\lambda^{(M+1)}\right)^{T}=\left(\alpha^{(M)},\beta^{(M)},\lambda^{(M)}\right)^{T}+I^{-1}(\theta^{(M)})\left(\frac{\partial l}{\partial\alpha},\frac{\partial l}{\partial\beta},\frac{\partial l}{\partial\lambda}\right)^{T}\Big|_{\theta=\theta^{(M)}},\tag{14}$$

where $(\alpha,\beta,\lambda)^{T}$ is the transpose of the vector $(\alpha,\beta,\lambda)$ and $I^{-1}(\theta^{(M)})$ is the inverse of the matrix $I(\theta^{(M)})$.
Step 4: Set $M=M+1$ and repeat Steps 2–3 until $\left\|\left(\alpha^{(M+1)},\beta^{(M+1)},\lambda^{(M+1)}\right)^{T}-\left(\alpha^{(M)},\beta^{(M)},\lambda^{(M)}\right)^{T}\right\|<\varepsilon$, where $\varepsilon$ is a threshold fixed in advance. The resulting value $\hat\theta=(\hat\alpha,\hat\beta,\hat\lambda)$ is the MLE of $\theta$.

Asymptotic Confidence Interval

In this subsection, the asymptotic confidence intervals (ACIs) of the unknown parameters of the IPL distribution are derived. Under standard regularity conditions, the MLEs $(\hat\alpha,\hat\beta,\hat\lambda)$ are approximately normally distributed with mean $(\alpha,\beta,\lambda)$ and covariance matrix $I^{-1}(\alpha,\beta,\lambda)$. In practice, $I^{-1}(\alpha,\beta,\lambda)$ is estimated by $I^{-1}(\hat\alpha,\hat\beta,\hat\lambda)$; that is, we use the approximation $(\hat\alpha,\hat\beta,\hat\lambda)\sim N\left((\alpha,\beta,\lambda),\,I^{-1}(\hat\alpha,\hat\beta,\hat\lambda)\right)$, where $I^{-1}(\hat\alpha,\hat\beta,\hat\lambda)$ denotes the inverse of the observed Fisher information matrix $I(\hat\alpha,\hat\beta,\hat\lambda)$ and

$$I(\hat\alpha,\hat\beta,\hat\lambda)=\begin{bmatrix}I_{11}&I_{12}&I_{13}\\I_{21}&I_{22}&I_{23}\\I_{31}&I_{32}&I_{33}\end{bmatrix}\Bigg|_{(\alpha,\beta,\lambda)=(\hat\alpha,\hat\beta,\hat\lambda)}.$$

Here, the $I_{ij}$, $i,j=1,2,3$, are given by Equations (8)–(13). Thus, approximate $100(1-\gamma)\%$ two-sided confidence intervals for the parameters $\alpha$, $\beta$ and $\lambda$ are, respectively,

$$\left(\hat\alpha\pm z_{\gamma/2}\sqrt{\widehat{\mathrm{Var}}(\hat\alpha)}\right),\qquad\left(\hat\beta\pm z_{\gamma/2}\sqrt{\widehat{\mathrm{Var}}(\hat\beta)}\right),\qquad\left(\hat\lambda\pm z_{\gamma/2}\sqrt{\widehat{\mathrm{Var}}(\hat\lambda)}\right).$$

Here, $\widehat{\mathrm{Var}}(\hat\alpha)$, $\widehat{\mathrm{Var}}(\hat\beta)$ and $\widehat{\mathrm{Var}}(\hat\lambda)$ are the diagonal elements of the observed variance–covariance matrix $I^{-1}(\hat\alpha,\hat\beta,\hat\lambda)$, and $z_{\gamma/2}$ is the upper $\gamma/2$-th percentile of the standard normal distribution.
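The interval construction above can be sketched as follows (a minimal illustration; the cofactor inversion and the numeric matrix in the test are ours, not values from the paper):

```python
from statistics import NormalDist

def invert3(mat):
    """Invert a 3x3 matrix via the adjugate (cofactor) formula."""
    (a, b, c), (d, e, f), (g, h, i) = mat
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [
        [e * i - f * h, -(b * i - c * h), b * f - c * e],
        [-(d * i - f * g), a * i - c * g, -(a * f - c * d)],
        [d * h - e * g, -(a * h - b * g), a * e - b * d],
    ]
    return [[v / det for v in row] for row in adj]

def asymptotic_ci(mle, info, gamma=0.05):
    """100(1-gamma)% ACIs: mle_i +/- z_{gamma/2} * sqrt(diag of I^{-1})."""
    z = NormalDist().inv_cdf(1.0 - gamma / 2.0)   # upper gamma/2 percentile
    cov = invert3(info)
    return [(m - z * cov[i][i] ** 0.5, m + z * cov[i][i] ** 0.5)
            for i, m in enumerate(mle)]
```

Here `info` would be the observed information matrix evaluated at the MLEs from Section 2.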

3. Bayesian Estimation

In this section, we discuss the Bayesian estimates and the corresponding credible intervals of the unknown parameters of the IPL distribution. Selecting an optimal decision in decision theory requires an appropriate loss function. Here, we consider both a symmetric and an asymmetric loss function: the well-known symmetric squared error loss (SEL), and the most commonly used asymmetric loss, the generalized entropy loss (GEL). The SEL and GEL functions are, respectively, defined by

$$L_1(\theta,\hat\theta)=(\hat\theta-\theta)^{2},\qquad L_2(\theta,\hat\theta)\propto\left(\frac{\hat\theta}{\theta}\right)^{q}-q\ln\left(\frac{\hat\theta}{\theta}\right)-1,\quad q\neq0.\tag{15}$$

Here, $\hat\theta$ is an estimator of $\theta$, and the constant $q$ determines how much influence an error has: when $q<0$, negative errors affect the consequences more seriously, while when $q>0$, positive errors cause more serious consequences than negative ones.
Under the SEL and GEL functions, the Bayesian estimators of $\theta$ are, respectively, given by

$$\hat\theta_{BS}=E[\theta\mid x],\tag{16}$$

$$\hat\theta_{BG}=\left[E\left(\theta^{-q}\mid x\right)\right]^{-1/q},\quad q\neq0.\tag{17}$$
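Given draws from the posterior of a parameter, the two Bayesian estimators above reduce to simple sample averages; a small sketch (function names are ours):

```python
def bayes_sel(draws):
    """Bayes estimate under squared error loss: the posterior mean."""
    return sum(draws) / len(draws)

def bayes_gel(draws, q):
    """Bayes estimate under generalized entropy loss: [E(theta^-q)]^(-1/q)."""
    if q == 0:
        raise ValueError("q must be non-zero")
    neg_moment = sum(d ** (-q) for d in draws) / len(draws)
    return neg_moment ** (-1.0 / q)
```

By the power-mean inequality, the GEL estimate with q > 0 never exceeds the posterior mean, reflecting the heavier penalty on overestimation.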
In addition to the experimental data, Bayesian analysis requires the choice of appropriate priors for the unknown parameters. As Arnold and Press [24] correctly pointed out, there is no clear-cut way to choose a prior. Here, we assume the following independent gamma priors for the parameters $\alpha$, $\beta$ and $\lambda$:

$$\pi_1(\alpha)\propto\alpha^{a_1-1}\exp(-b_1\alpha),\ \alpha>0;\qquad\pi_2(\beta)\propto\beta^{a_2-1}\exp(-b_2\beta),\ \beta>0;\qquad\pi_3(\lambda)\propto\lambda^{a_3-1}\exp(-b_3\lambda),\ \lambda>0,$$

with hyper-parameters $a_i,b_i>0$, $i=1,2,3$. Therefore, the joint prior distribution of $\alpha$, $\beta$ and $\lambda$ is given by

$$\pi(\alpha,\beta,\lambda)\propto\alpha^{a_1-1}\beta^{a_2-1}\lambda^{a_3-1}\exp(-b_1\alpha-b_2\beta-b_3\lambda),\quad\alpha,\beta,\lambda>0.\tag{18}$$
The assumption of independent gamma priors is reasonable [10]. The class of gamma prior distributions is quite flexible, as it can model a variety of prior information. It should be noted that non-informative priors on the parameters are special cases of the independent gamma priors, obtained by letting the hyper-parameters tend to zero [10].
Based on Equations (3) and (18), the joint posterior distribution of the parameters $\alpha$, $\beta$ and $\lambda$ can be written as

$$\pi^{*}(\alpha,\beta,\lambda\mid x)=\frac{\pi(\alpha,\beta,\lambda)L(\alpha,\beta,\lambda\mid x)}{\int_0^{\infty}\!\int_0^{\infty}\!\int_0^{\infty}\pi(\alpha,\beta,\lambda)L(\alpha,\beta,\lambda\mid x)\,d\alpha\,d\beta\,d\lambda}\propto\alpha^{m+a_1-1}\beta^{m+a_2-1}\lambda^{m\alpha+a_3-1}\exp(-b_1\alpha-b_2\beta-b_3\lambda)\prod_{i=1}^{m}x_i^{-\beta-1}\prod_{i=1}^{m}\left(\lambda+x_i^{-\beta}\right)^{-\alpha-1}\prod_{i=1}^{m}\left(1-\xi_i^{-\alpha}\right)^{k(R_i+1)-1}.\tag{19}$$
Let $g=g(\alpha,\beta,\lambda)$ be any function of $\alpha$, $\beta$ and $\lambda$. The posterior mean of $g$ is then

$$E[g(\alpha,\beta,\lambda)\mid x]=\frac{\int_0^{\infty}\!\int_0^{\infty}\!\int_0^{\infty}g(\alpha,\beta,\lambda)\pi(\alpha,\beta,\lambda)L(\alpha,\beta,\lambda\mid x)\,d\alpha\,d\beta\,d\lambda}{\int_0^{\infty}\!\int_0^{\infty}\!\int_0^{\infty}\pi(\alpha,\beta,\lambda)L(\alpha,\beta,\lambda\mid x)\,d\alpha\,d\beta\,d\lambda}.\tag{20}$$
From Equation (20), we observe that the posterior mean of $g(\alpha,\beta,\lambda)$ takes the form of a ratio of two integrals for which no closed-form solution is available [10]. Therefore, we use Tierney–Kadane’s approximation method to obtain an approximate solution of Equation (20).

3.1. Tierney–Kadane’s Approximation Method

Tierney and Kadane [25] proposed an alternative method to approximate such a ratio of integrals to derive the Bayesian estimates of unknown parameters. In this subsection, we present the approximate Bayesian estimates of α , β and λ under the SEL and GEL function using Tierney–Kadane’s (T–K) method. Although Lindley’s approximation [26] plays an important role in the Bayesian analysis, this approximation requires the evaluation of third derivatives of the log-likelihood function, which is very tedious in some situations, such as the present one. Moreover, Lindley’s approximation has an error of order O ( n 1 ) , whereas the T–K approximation has an error of order O ( n 2 ) .
To apply the T–K approximation method, we set

$$Q(\alpha,\beta,\lambda)=\frac{1}{n}\left[l(\alpha,\beta,\lambda\mid x)+\ln\pi(\alpha,\beta,\lambda)\right]\tag{21}$$

and

$$Q^{*}(\alpha,\beta,\lambda)=Q(\alpha,\beta,\lambda)+\frac{1}{n}\ln g(\alpha,\beta,\lambda),\tag{22}$$

where $l(\alpha,\beta,\lambda\mid x)$ is the log-likelihood function. The posterior mean of $g(\alpha,\beta,\lambda)$ can then be expressed as

$$E[g(\alpha,\beta,\lambda)\mid x]=\frac{\int_0^{\infty}\!\int_0^{\infty}\!\int_0^{\infty}\exp\{nQ^{*}(\alpha,\beta,\lambda)\}\,d\alpha\,d\beta\,d\lambda}{\int_0^{\infty}\!\int_0^{\infty}\!\int_0^{\infty}\exp\{nQ(\alpha,\beta,\lambda)\}\,d\alpha\,d\beta\,d\lambda},\tag{23}$$

and the T–K method approximates it by

$$\hat E[g(\alpha,\beta,\lambda)\mid x]=\sqrt{\frac{\det\Sigma^{*}}{\det\Sigma}}\exp\left\{nQ^{*}(\hat\alpha_{Q^{*}},\hat\beta_{Q^{*}},\hat\lambda_{Q^{*}})-nQ(\hat\alpha_{Q},\hat\beta_{Q},\hat\lambda_{Q})\right\}.\tag{24}$$
Here, $(\hat\alpha_{Q^{*}},\hat\beta_{Q^{*}},\hat\lambda_{Q^{*}})$ and $(\hat\alpha_{Q},\hat\beta_{Q},\hat\lambda_{Q})$ maximize $Q^{*}(\alpha,\beta,\lambda)$ and $Q(\alpha,\beta,\lambda)$, respectively, and $\Sigma^{*}$ and $\Sigma$ are the inverses of the negative Hessians of $Q^{*}(\alpha,\beta,\lambda)$ and $Q(\alpha,\beta,\lambda)$ at $(\hat\alpha_{Q^{*}},\hat\beta_{Q^{*}},\hat\lambda_{Q^{*}})$ and $(\hat\alpha_{Q},\hat\beta_{Q},\hat\lambda_{Q})$, respectively. For the IPL distribution, we have

$$Q(\alpha,\beta,\lambda)=\frac{1}{n}\Big\{\ln(Ck^{m})+m\ln(\alpha\beta)-m\ln\lambda-(\beta+1)\sum_{i=1}^{m}\ln x_i-(\alpha+1)\sum_{i=1}^{m}\ln\xi_i+\sum_{i=1}^{m}\left[k(R_i+1)-1\right]\ln\left(1-\xi_i^{-\alpha}\right)+(a_1-1)\ln\alpha-b_1\alpha+(a_2-1)\ln\beta-b_2\beta+(a_3-1)\ln\lambda-b_3\lambda\Big\},\quad\alpha,\beta,\lambda>0.$$
Then, $(\hat\alpha_Q,\hat\beta_Q,\hat\lambda_Q)$ is computed by solving the following non-linear equations:

$$\frac{\partial Q}{\partial\alpha}=\frac{1}{n}\left\{\frac{m+a_1-1}{\alpha}-\sum_{i=1}^{m}\ln\xi_i+\sum_{i=1}^{m}\frac{\left[k(R_i+1)-1\right]\ln\xi_i}{\xi_i^{\alpha}-1}-b_1\right\}=0,$$

$$\frac{\partial Q}{\partial\beta}=\frac{1}{n}\left\{\frac{m+a_2-1}{\beta}-\sum_{i=1}^{m}\ln x_i+\frac{\alpha+1}{\lambda}\sum_{i=1}^{m}\frac{x_i^{-\beta}\ln x_i}{\xi_i}-\sum_{i=1}^{m}\frac{\left[k(R_i+1)-1\right]\alpha x_i^{-\beta}\ln x_i}{\lambda\left(\xi_i^{\alpha+1}-\xi_i\right)}-b_2\right\}=0,$$

$$\frac{\partial Q}{\partial\lambda}=\frac{1}{n}\left\{\frac{a_3-m-1}{\lambda}+\frac{\alpha+1}{\lambda^{2}}\sum_{i=1}^{m}\frac{x_i^{-\beta}}{\xi_i}-\sum_{i=1}^{m}\frac{\left[k(R_i+1)-1\right]\alpha x_i^{-\beta}}{\lambda^{2}\left(\xi_i^{\alpha+1}-\xi_i\right)}-b_3\right\}=0.$$
We obtain $\Sigma$ from

$$\Sigma=\begin{bmatrix}Q_{11}&Q_{12}&Q_{13}\\Q_{21}&Q_{22}&Q_{23}\\Q_{31}&Q_{32}&Q_{33}\end{bmatrix}^{-1}_{(\hat\alpha_Q,\hat\beta_Q,\hat\lambda_Q)},$$

where $Q_{ij}=-\partial^{2}Q/\partial\theta_i\partial\theta_j$, evaluated at $(\hat\alpha_Q,\hat\beta_Q,\hat\lambda_Q)$, with

$$Q_{11}=\frac{1}{n}\left\{\frac{m+a_1-1}{\alpha^{2}}+\sum_{i=1}^{m}\frac{\left[k(R_i+1)-1\right]\xi_i^{\alpha}(\ln\xi_i)^{2}}{\left(\xi_i^{\alpha}-1\right)^{2}}\right\},$$

$$Q_{12}=Q_{21}=\frac{1}{n}\left\{-\sum_{i=1}^{m}\frac{x_i^{-\beta}\ln x_i}{\lambda\xi_i}+\sum_{i=1}^{m}\left[k(R_i+1)-1\right]\left[\frac{x_i^{-\beta}\ln x_i}{\lambda\left(\xi_i^{\alpha+1}-\xi_i\right)}-\frac{\alpha\xi_i^{\alpha-1}x_i^{-\beta}\ln\xi_i\ln x_i}{\lambda\left(\xi_i^{\alpha}-1\right)^{2}}\right]\right\},$$

$$Q_{13}=Q_{31}=\frac{1}{n}\left\{-\sum_{i=1}^{m}\frac{x_i^{-\beta}}{\lambda^{2}\xi_i}+\sum_{i=1}^{m}\left[k(R_i+1)-1\right]\frac{x_i^{-\beta}\left(\xi_i^{\alpha}-1\right)-\alpha\xi_i^{\alpha}x_i^{-\beta}\ln\xi_i}{\lambda^{2}\xi_i\left(\xi_i^{\alpha}-1\right)^{2}}\right\},$$

$$Q_{22}=\frac{1}{n}\left\{\frac{m+a_2-1}{\beta^{2}}+\frac{\alpha+1}{\lambda}\sum_{i=1}^{m}\frac{\lambda x_i^{-\beta}\xi_i(\ln x_i)^{2}-x_i^{-2\beta}(\ln x_i)^{2}}{\lambda\xi_i^{2}}-\sum_{i=1}^{m}\frac{k(R_i+1)-1}{\lambda^{2}\left(\xi_i^{\alpha+1}-\xi_i\right)^{2}}\left[\lambda\alpha x_i^{-\beta}(\ln x_i)^{2}\left(\xi_i^{\alpha+1}-\xi_i\right)-\alpha(\alpha+1)\xi_i^{\alpha}x_i^{-2\beta}(\ln x_i)^{2}+\alpha x_i^{-2\beta}(\ln x_i)^{2}\right]\right\},$$

$$Q_{23}=Q_{32}=\frac{1}{n}\left\{\frac{\alpha+1}{\lambda^{2}}\sum_{i=1}^{m}\frac{x_i^{-\beta}\xi_i\ln x_i-\lambda^{-1}x_i^{-2\beta}\ln x_i}{\xi_i^{2}}-\sum_{i=1}^{m}\frac{k(R_i+1)-1}{\lambda^{4}\left(\xi_i^{\alpha+1}-\xi_i\right)^{2}}\left[\alpha\lambda^{2}x_i^{-\beta}\ln x_i\left(\xi_i^{\alpha+1}-\xi_i\right)-\lambda\alpha(\alpha+1)\xi_i^{\alpha}x_i^{-2\beta}\ln x_i+\lambda\alpha x_i^{-2\beta}\ln x_i\right]\right\},$$

$$Q_{33}=\frac{1}{n}\left\{\frac{a_3-m-1}{\lambda^{2}}+\frac{2(\alpha+1)}{\lambda^{3}}\sum_{i=1}^{m}\frac{x_i^{-\beta}}{\xi_i}-\frac{\alpha+1}{\lambda^{4}}\sum_{i=1}^{m}\frac{x_i^{-2\beta}}{\xi_i^{2}}-\sum_{i=1}^{m}\left[k(R_i+1)-1\right]\left[\frac{2\alpha x_i^{-\beta}}{\lambda^{3}\left(\xi_i^{\alpha+1}-\xi_i\right)}-\frac{\alpha(\alpha+1)x_i^{-2\beta}\xi_i^{\alpha}}{\lambda^{4}\left(\xi_i^{\alpha+1}-\xi_i\right)^{2}}+\frac{\alpha x_i^{-2\beta}}{\lambda^{4}\left(\xi_i^{\alpha+1}-\xi_i\right)^{2}}\right]\right\}.$$
Based on the T–K approximation method, we can derive the Bayesian estimates of the parameters α , β and λ under the different loss functions.
(I) Squared error loss function
In order to compute the Bayesian estimator of $\alpha$ under the squared error loss function (SELF), we take $g(\alpha,\beta,\lambda)=\alpha$, so that the function $Q_{\alpha}^{*}(\alpha,\beta,\lambda)$ becomes

$$Q_{\alpha}^{*}(\alpha,\beta,\lambda)=Q(\alpha,\beta,\lambda)+\frac{1}{n}\ln\alpha.$$

The maximizer $(\hat\alpha_{Q^{*}},\hat\beta_{Q^{*}},\hat\lambda_{Q^{*}})$ of $Q_{\alpha}^{*}$ can be obtained by solving the following system of equations:

$$\frac{\partial Q_{\alpha}^{*}}{\partial\alpha}=\frac{\partial Q}{\partial\alpha}+\frac{1}{n\alpha}=0,\qquad\frac{\partial Q_{\alpha}^{*}}{\partial\beta}=\frac{\partial Q}{\partial\beta}=0,\qquad\frac{\partial Q_{\alpha}^{*}}{\partial\lambda}=\frac{\partial Q}{\partial\lambda}=0.$$
Thus, $\Sigma_{\alpha}^{*}$ can be calculated as

$$\Sigma_{\alpha}^{*}=\begin{bmatrix}Q_{\alpha11}^{*}&Q_{\alpha12}^{*}&Q_{\alpha13}^{*}\\Q_{\alpha21}^{*}&Q_{\alpha22}^{*}&Q_{\alpha23}^{*}\\Q_{\alpha31}^{*}&Q_{\alpha32}^{*}&Q_{\alpha33}^{*}\end{bmatrix}^{-1}_{(\hat\alpha_{Q^{*}},\hat\beta_{Q^{*}},\hat\lambda_{Q^{*}})},$$

where $Q_{\alpha ij}^{*}=-\partial^{2}Q_{\alpha}^{*}/\partial\theta_i\partial\theta_j$, so that

$$Q_{\alpha11}^{*}=Q_{11}+\frac{1}{n\alpha^{2}},\quad Q_{\alpha12}^{*}=Q_{\alpha21}^{*}=Q_{12},\quad Q_{\alpha13}^{*}=Q_{\alpha31}^{*}=Q_{13},\quad Q_{\alpha22}^{*}=Q_{22},\quad Q_{\alpha23}^{*}=Q_{\alpha32}^{*}=Q_{23},\quad Q_{\alpha33}^{*}=Q_{33},$$

with the $Q_{ij}$ as given above, here evaluated at $(\hat\alpha_{Q^{*}},\hat\beta_{Q^{*}},\hat\lambda_{Q^{*}})$.
Under SELF, the Bayesian estimator of $\alpha$ is given by

$$\hat\alpha_{BS}=\sqrt{\frac{\det\Sigma_{\alpha}^{*}}{\det\Sigma}}\exp\left\{nQ_{\alpha}^{*}(\hat\alpha_{Q^{*}},\hat\beta_{Q^{*}},\hat\lambda_{Q^{*}})-nQ(\hat\alpha_{Q},\hat\beta_{Q},\hat\lambda_{Q})\right\}.$$

Similarly, the Bayesian estimators of $\beta$ and $\lambda$ under SELF are given, respectively, by

$$\hat\beta_{BS}=\sqrt{\frac{\det\Sigma_{\beta}^{*}}{\det\Sigma}}\exp\left\{nQ_{\beta}^{*}(\hat\alpha_{Q^{*}},\hat\beta_{Q^{*}},\hat\lambda_{Q^{*}})-nQ(\hat\alpha_{Q},\hat\beta_{Q},\hat\lambda_{Q})\right\},\qquad\hat\lambda_{BS}=\sqrt{\frac{\det\Sigma_{\lambda}^{*}}{\det\Sigma}}\exp\left\{nQ_{\lambda}^{*}(\hat\alpha_{Q^{*}},\hat\beta_{Q^{*}},\hat\lambda_{Q^{*}})-nQ(\hat\alpha_{Q},\hat\beta_{Q},\hat\lambda_{Q})\right\},$$

where $Q_{\beta}^{*}=Q+\frac{1}{n}\ln\beta$ and $Q_{\lambda}^{*}=Q+\frac{1}{n}\ln\lambda$, and $\Sigma_{\beta}^{*}$ and $\Sigma_{\lambda}^{*}$ are defined analogously to $\Sigma_{\alpha}^{*}$.
(II) Generalized entropy loss function
Firstly, we compute the Bayesian estimator of the parameter $\alpha$. In this case, $g(\alpha,\beta,\lambda)=\alpha^{-q}$, and the function $Q_{\alpha}^{*}(\alpha,\beta,\lambda)$ is given by

$$Q_{\alpha}^{*}(\alpha,\beta,\lambda)=Q(\alpha,\beta,\lambda)-\frac{q}{n}\ln\alpha.$$

By solving the following system of equations, we obtain the maximizer $(\hat\alpha_{Q^{*}},\hat\beta_{Q^{*}},\hat\lambda_{Q^{*}})$ of $Q_{\alpha}^{*}$:

$$\frac{\partial Q_{\alpha}^{*}}{\partial\alpha}=\frac{\partial Q}{\partial\alpha}-\frac{q}{n\alpha}=0,\qquad\frac{\partial Q_{\alpha}^{*}}{\partial\beta}=\frac{\partial Q}{\partial\beta}=0,\qquad\frac{\partial Q_{\alpha}^{*}}{\partial\lambda}=\frac{\partial Q}{\partial\lambda}=0.$$
Thus, $\Sigma_{\alpha}^{*}$ can be calculated as

$$\Sigma_{\alpha}^{*}=\begin{bmatrix}Q_{\alpha11}^{*}&Q_{\alpha12}^{*}&Q_{\alpha13}^{*}\\Q_{\alpha21}^{*}&Q_{\alpha22}^{*}&Q_{\alpha23}^{*}\\Q_{\alpha31}^{*}&Q_{\alpha32}^{*}&Q_{\alpha33}^{*}\end{bmatrix}^{-1}_{(\hat\alpha_{Q^{*}},\hat\beta_{Q^{*}},\hat\lambda_{Q^{*}})},$$

where now

$$Q_{\alpha11}^{*}=Q_{11}-\frac{q}{n\alpha^{2}},\quad Q_{\alpha12}^{*}=Q_{\alpha21}^{*}=Q_{12},\quad Q_{\alpha13}^{*}=Q_{\alpha31}^{*}=Q_{13},\quad Q_{\alpha22}^{*}=Q_{22},\quad Q_{\alpha23}^{*}=Q_{\alpha32}^{*}=Q_{23},\quad Q_{\alpha33}^{*}=Q_{33},$$

with the $Q_{ij}$ as given above, here evaluated at $(\hat\alpha_{Q^{*}},\hat\beta_{Q^{*}},\hat\lambda_{Q^{*}})$.
The Bayesian estimator of $\alpha$ under the generalized entropy loss function (GELF) is given by

$$\hat\alpha_{BG}=\left\{\sqrt{\frac{\det\Sigma_{\alpha}^{*}}{\det\Sigma}}\exp\left[nQ_{\alpha}^{*}(\hat\alpha_{Q^{*}},\hat\beta_{Q^{*}},\hat\lambda_{Q^{*}})-nQ(\hat\alpha_{Q},\hat\beta_{Q},\hat\lambda_{Q})\right]\right\}^{-1/q}.$$

Similarly, the Bayesian estimators of $\beta$ and $\lambda$ under GELF are given, respectively, by

$$\hat\beta_{BG}=\left\{\sqrt{\frac{\det\Sigma_{\beta}^{*}}{\det\Sigma}}\exp\left[nQ_{\beta}^{*}(\hat\alpha_{Q^{*}},\hat\beta_{Q^{*}},\hat\lambda_{Q^{*}})-nQ(\hat\alpha_{Q},\hat\beta_{Q},\hat\lambda_{Q})\right]\right\}^{-1/q},\qquad\hat\lambda_{BG}=\left\{\sqrt{\frac{\det\Sigma_{\lambda}^{*}}{\det\Sigma}}\exp\left[nQ_{\lambda}^{*}(\hat\alpha_{Q^{*}},\hat\beta_{Q^{*}},\hat\lambda_{Q^{*}})-nQ(\hat\alpha_{Q},\hat\beta_{Q},\hat\lambda_{Q})\right]\right\}^{-1/q}.$$
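The mechanics of the T–K ratio are easiest to see in one dimension. The sketch below (our construction, not the paper's three-parameter computation) applies it to a Gamma(a = 5, b = 2) log-posterior, whose exact posterior mean is a/b = 2.5; the maximizers and second derivatives are obtained numerically, and the approximation lands within about half a percent of the truth:

```python
import math

def argmax_1d(f, lo, hi, tol=1e-9):
    """Golden-section search for the maximizer of a unimodal f on [lo, hi]."""
    gr = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    while b - a > tol:
        c, d = b - gr * (b - a), a + gr * (b - a)
        if f(c) > f(d):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

def second_deriv(f, t, h=1e-4):
    """Central-difference second derivative."""
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / (h * h)

def tk_mean(log_post, lo, hi):
    """Tierney-Kadane approximation of E[theta | x] for a scalar theta:
    sqrt(h''(t_hat) / h*''(t*_hat)) * exp(h*(t*_hat) - h(t_hat)),
    where h is the log-posterior and h* = h + ln(theta)."""
    h_fun = log_post
    hs_fun = lambda t: log_post(t) + math.log(t)
    t_hat = argmax_1d(h_fun, lo, hi)
    ts_hat = argmax_1d(hs_fun, lo, hi)
    ratio = second_deriv(h_fun, t_hat) / second_deriv(hs_fun, ts_hat)
    return math.sqrt(ratio) * math.exp(hs_fun(ts_hat) - h_fun(t_hat))

# Gamma(5, 2) log-posterior, up to an additive constant
log_post = lambda t: 4.0 * math.log(t) - 2.0 * t
```

The three-parameter case in this section works the same way, with the scalar second derivatives replaced by the determinants of the negative Hessians.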

3.2. The Highest Posterior Density Credible Interval

In the previous subsection, we used the T–K approximation method to obtain Bayesian point estimates of the unknown parameters. However, this approximation method cannot produce Bayesian credible intervals. The importance sampling method is an effective approach for obtaining Bayesian credible intervals of unknown parameters. Kundu [27] considered Bayesian estimation for the Marshall–Olkin bivariate Weibull distribution, constructing the Bayesian estimates and associated credible intervals of the unknown parameters with the importance sampling method. Maurya et al. [28] derived the HPD credible intervals of the unknown parameters of a Burr Type-XII distribution using the importance sampling method. Sultana et al. [29] considered the estimation of the unknown parameters of the two-parameter Kumaraswamy distribution with hybrid censored samples. In this subsection, we use the importance sampling method to obtain the HPD credible intervals of the unknown parameters of the inverse power Lomax distribution.
Based on Equation (19), the joint posterior distribution of the parameters $\alpha$, $\beta$ and $\lambda$ can be rewritten as

$$\pi^{*}(\alpha,\beta,\lambda\mid x)\propto\alpha^{m+a_1-1}\beta^{m+a_2-1}\lambda^{m\alpha+a_3-1}\exp(-b_1\alpha-b_2\beta-b_3\lambda)\prod_{i=1}^{m}x_i^{-\beta-1}\prod_{i=1}^{m}\left(\lambda+x_i^{-\beta}\right)^{-\alpha-1}\prod_{i=1}^{m}\left(1-\xi_i^{-\alpha}\right)^{k(R_i+1)-1}\propto\pi_1^{*}(\alpha\mid\beta,\lambda,x)\,\pi_2^{*}(\beta\mid\alpha,\lambda,x)\,\pi_3^{*}(\lambda\mid\alpha,\beta,x)\,W(\alpha,\beta,\lambda\mid x),$$

where

$$\pi_1^{*}(\alpha\mid\beta,\lambda,x)\propto\alpha^{m+a_1-1}V_1^{m+a_1}\exp(-\alpha V_1),\qquad\pi_2^{*}(\beta\mid\alpha,\lambda,x)\propto\beta^{m+a_2-1}V_2^{m+a_2}\exp(-\beta V_2),\qquad\pi_3^{*}(\lambda\mid\alpha,\beta,x)\propto\lambda^{m\alpha+a_3-1}\exp(-b_3\lambda),$$

$$W(\alpha,\beta,\lambda\mid x)\propto V_1^{-(m+a_1)}V_2^{-(m+a_2)}\prod_{i=1}^{m}\left(\lambda+x_i^{-\beta}\right)^{-1}\prod_{i=1}^{m}\left(1-\xi_i^{-\alpha}\right)^{k(R_i+1)-1},$$

$$V_1=b_1+\sum_{i=1}^{m}\ln\left(\lambda+x_i^{-\beta}\right),\qquad V_2=b_2+\sum_{i=1}^{m}\ln x_i.$$
It is observed that $\pi_1^{*}(\alpha\mid\beta,\lambda,x)$ is the PDF of the gamma distribution $Ga(m+a_1,V_1)$, while $\pi_2^{*}(\beta\mid\alpha,\lambda,x)$ and $\pi_3^{*}(\lambda\mid\alpha,\beta,x)$ are the PDFs of the gamma distributions $Ga(m+a_2,V_2)$ and $Ga(m\alpha+a_3,b_3)$, respectively. To obtain the HPD credible intervals for the unknown parameters, the importance sampling method is used with the following steps.
Step 1: Generate $\alpha_1$ from $\pi_1^{*}(\alpha\mid\beta,\lambda,x)$.
Step 2: Generate $\beta_1$ from $\pi_2^{*}(\beta\mid\alpha,\lambda,x)$.
Step 3: Generate $\lambda_1$ from $\pi_3^{*}(\lambda\mid\alpha,\beta,x)$.
Repeating Steps 1–3 N times yields $(\alpha_1,\beta_1,\lambda_1),(\alpha_2,\beta_2,\lambda_2),\ldots,(\alpha_N,\beta_N,\lambda_N)$.
The $100(1-\gamma)\%$ Bayesian credible intervals for the unknown parameters can be constructed using the method given in Ref. [27], which we briefly describe. Let $g(\alpha,\beta,\lambda)$ be any function of $(\alpha,\beta,\lambda)$. For $0<\gamma<1$, suppose that $g_{\gamma}$ satisfies $P(g(\alpha,\beta,\lambda)\le g_{\gamma}\mid x)=\gamma$. Using the sample $(\alpha_1,\beta_1,\lambda_1),(\alpha_2,\beta_2,\lambda_2),\ldots,(\alpha_N,\beta_N,\lambda_N)$, we can calculate $W(\alpha_i,\beta_i,\lambda_i\mid x)$ and $g(\alpha_i,\beta_i,\lambda_i)$. For simplicity, let $g_i=g(\alpha_i,\beta_i,\lambda_i)$ and

$$u_i=\frac{W(\alpha_i,\beta_i,\lambda_i\mid x)}{\sum_{j=1}^{N}W(\alpha_j,\beta_j,\lambda_j\mid x)},\quad i=1,2,\ldots,N.$$
Given $\gamma$, we can estimate $g_{\gamma}$ and use it to construct the HPD credible interval for $g(\alpha,\beta,\lambda)$.
Rearrange $(g_1,u_1),(g_2,u_2),\ldots,(g_N,u_N)$ as $(g_{(1)},u_{(1)}),(g_{(2)},u_{(2)}),\ldots,(g_{(N)},u_{(N)})$, where $g_{(1)}<g_{(2)}<\cdots<g_{(N)}$ are the ordered values of the $g_i$ and $u_{(i)}$ is the weight associated with $g_{(i)}$ (the $u_{(i)}$ themselves are not ordered). The estimator of $g_{\gamma}$ is $\hat g_{\gamma}=g_{(H_{\gamma})}$, where $H_{\gamma}$ is the integer satisfying

$$\sum_{i=1}^{H_{\gamma}}u_{(i)}\le\gamma<\sum_{i=1}^{H_{\gamma}+1}u_{(i)}.$$
Using the above method, a $100(1-\gamma)\%$ Bayesian credible interval of the function $g(\alpha,\beta,\lambda)$ can be given by $(\hat g_{\delta},\hat g_{\delta+1-\gamma})$ for $\delta=u_{(1)},\,u_{(1)}+u_{(2)},\,\ldots,\,\sum_{i=1}^{H_{1-\gamma}}u_{(i)}$. Therefore, a $100(1-\gamma)\%$ HPD credible interval of $g(\alpha,\beta,\lambda)$ is given by

$$\left(\hat g_{\delta^{*}},\hat g_{\delta^{*}+1-\gamma}\right),$$

where $\delta^{*}$ satisfies $\hat g_{\delta^{*}+1-\gamma}-\hat g_{\delta^{*}}\le\hat g_{\delta+1-\gamma}-\hat g_{\delta}$ for all $\delta$. Taking $g(\alpha,\beta,\lambda)$ to be $\alpha$, $\beta$ and $\lambda$ in turn gives the HPD credible intervals for the unknown parameters.
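The search for $\delta^{*}$ amounts to a scan over the sorted draws with their normalized weights $u_i$, keeping the shortest window whose accumulated weight reaches $1-\gamma$. A sketch (our implementation of the idea, using a two-pointer scan):

```python
def weighted_hpd(g, w, cred=0.95):
    """HPD interval from an importance sample: values g_i with weights w_i.
    Sorts the pairs, normalizes the weights, then scans all windows whose
    total weight is at least `cred` and returns the shortest one."""
    pairs = sorted(zip(g, w))
    total = sum(w)
    vals = [gi for gi, _ in pairs]
    u = [wi / total for _, wi in pairs]
    cum = [0.0]                      # cum[j] = u_1 + ... + u_j
    for ui in u:
        cum.append(cum[-1] + ui)
    best = None
    j = 0
    for i in range(len(vals)):
        j = max(j, i)
        # grow the window [i, j] until it holds mass >= cred
        while j < len(vals) and cum[j + 1] - cum[i] < cred:
            j += 1
        if j == len(vals):
            break
        if best is None or vals[j] - vals[i] < best[1] - best[0]:
            best = (vals[i], vals[j])
    return best
```

With equal weights this reduces to the usual shortest-interval HPD estimate from Monte Carlo draws.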

4. Simulation Study

In this section, we evaluate the performance of the different estimates developed in this paper via a Monte Carlo simulation study. For given true values of the parameters $\alpha$, $\beta$, $\lambda$ and different combinations of $(n,m,k,\tilde R)$, progressive first-failure censored samples are generated from the IPL distribution by modifying the method introduced in Ref. [30]. The following steps describe the generation method.
  • Step 1: Set the group size $k$ and the censoring scheme $\tilde R=(R_1,R_2,\ldots,R_m)$.
  • Step 2: Generate $m$ independent observations $Z_1,Z_2,\ldots,Z_m$ from the uniform distribution $U(0,1)$.
  • Step 3: Let $\zeta_i=Z_i^{1/(i+R_m+R_{m-1}+\cdots+R_{m-i+1})}$, $i=1,2,\ldots,m$.
  • Step 4: Set $U_i=1-\zeta_m\zeta_{m-1}\cdots\zeta_{m-i+1}$, $i=1,2,\ldots,m$.
  • Step 5: For given $\alpha$, $\beta$ and $\lambda$, the inverse transformation $X_i=F^{-1}(U_i)$, $i=1,2,\ldots,m$, yields the PFF censored sample from the IPL distribution, where $F^{-1}(\cdot)$ is the inverse of the CDF in Equation (2).
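Steps 1–5 can be implemented directly. One hedge on our part: for group size k > 1 this sketch inverts the group-minimum distribution $1-(1-F(x))^{k}$ noted in Section 1, rather than $F$ itself; the function names are ours:

```python
import random

def ipl_quantile(u, alpha, beta, lam):
    """Inverse of the IPL CDF: t = [lam*(u^(-1/alpha) - 1)]^(-1/beta)."""
    return (lam * (u ** (-1.0 / alpha) - 1.0)) ** (-1.0 / beta)

def pffc_sample(alpha, beta, lam, k, R, rng):
    """Generate a progressive first-failure censored sample of size m = len(R)."""
    m = len(R)
    Z = [rng.random() for _ in range(m)]                     # Step 2
    zeta = [Z[i - 1] ** (1.0 / (i + sum(R[m - i:])))         # Step 3: exponent i + R_m + ... + R_{m-i+1}
            for i in range(1, m + 1)]
    U = []
    for i in range(1, m + 1):                                # Step 4
        prod = 1.0
        for j in range(m - i, m):                            # zeta_m * ... * zeta_{m-i+1}
            prod *= zeta[j]
        U.append(1.0 - prod)
    # Step 5 with group minima: solve u = 1 - (1 - F(x))^k for x
    return [ipl_quantile(1.0 - (1.0 - u) ** (1.0 / k), alpha, beta, lam) for u in U]
```

For example, `pffc_sample(1.5, 1.0, 0.5, 2, [2, 0, 0, 1], random.Random(42))` returns an increasing sample of size m = 4, corresponding to n = m + ΣR_i = 7 groups of size k = 2.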
In the simulation study, the true values of the parameters of the IPL distribution are taken as $\alpha=1.5$, $\beta=1$, $\lambda=0.5$. For the Bayesian estimates, the prior means are set equal to the true parameter values, that is, $a_1/b_1=\alpha$, $a_2/b_2=\beta$, $a_3/b_3=\lambda$; accordingly, the hyper-parameters are taken as $(a_1,b_1)=(1.5,1)$, $(a_2,b_2)=(1,1)$, $(a_3,b_3)=(1,2)$. For the GELF, we set $q=-0.5,\,0.5,\,1.0$. Two group sizes $k=2,3$ are chosen, and two combinations of $n$ and $m$, namely $n=30$ with $m=15,20,30$ and $n=50$ with $m=25,30,50$, are considered with different $R_i$. For convenience, the censoring schemes (CS) used in this paper are written in short notation; for example, (0*4) denotes (0,0,0,0) and ((2,0)*3) denotes (2,0,2,0,2,0).
In each case, we compute the MLEs and Bayesian estimates of the unknown parameters. In the Newton iterative algorithm and the importance sampling algorithm, the initial values of $\alpha$, $\beta$ and $\lambda$ are chosen as $\alpha^{(0)}=1.4$, $\beta^{(0)}=0.9$, $\lambda^{(0)}=0.4$, and the threshold is taken as $\varepsilon=10^{-5}$. All Bayesian point and interval estimates are computed under the two loss functions, SELF and GELF, using the T–K approximation and importance sampling methods, respectively. In addition, we obtain the average length (AL) of the 95% asymptotic confidence and HPD credible intervals and the corresponding coverage probability (CP) of the parameters based on the simulation. Here, we use N = 2000 for the importance sampling procedure and M = 2000 simulated samples in each case.
The expected values (EV) and mean square errors (MSE) of the different estimates are computed as $\mathrm{EV}=M^{-1}\sum_{j=1}^{M}\hat g_j(\theta)$ and $\mathrm{MSE}=M^{-1}\sum_{j=1}^{M}\left[\hat g_j(\theta)-g(\theta)\right]^{2}$, where $\hat g_j(\theta)$ is the estimate of $g(\theta)$ in the $j$-th replication.
Extensive computations are performed using R statistical programming language software. The results of ML and Bayesian point estimates using the Monte Carlo simulation are presented in Table 1, Table 2, Table 3, Table 4 and Table 5. From these tables, the following observations can be made:
  • When n increases while m and k are fixed, the MSEs of the MLEs and Bayesian estimates of all three parameters decrease; thus, estimation improves as the sample size increases.
  • When m increases while n and k are fixed, the MSEs of the MLEs and Bayesian estimates decrease. Likewise, when k increases while n and m are fixed, the MSEs of all estimates decrease in most cases.
  • For the Bayesian estimates, there is little difference between the MSEs under SELF and GELF, although GELF performs slightly better than SELF in terms of MSE. Under GELF, there is no significant difference in MSEs among the three values of q; the estimation effect seems best when q = 1.
Furthermore, the average lengths of the 95% asymptotic confidence and HPD credible intervals were computed. These results are displayed in Table 6 and Table 7, from which the following conclusions can be drawn:
  • When n increases while m and k are fixed, the average lengths of the asymptotic confidence and HPD credible intervals narrow. The same holds when the group size k increases.
  • When m increases while n and k are fixed, the average lengths of the 95% asymptotic confidence and HPD credible intervals narrow in most cases.
  • The HPD credible intervals are better than the asymptotic confidence intervals in terms of average length.
  • Regarding the coverage probabilities for the unknown parameters, the HPD credible intervals are slightly better than the asymptotic confidence intervals in almost all cases.
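The HPD intervals summarized in Table 7 can be approximated from posterior draws by the usual shortest-window rule. The sketch below is a generic sample-based HPD routine for a unimodal posterior, not the paper's exact importance-sampling implementation:

```python
def hpd_interval(samples, cred=0.95):
    """Shortest interval containing a fraction `cred` of the sorted draws,
    the standard sample-based HPD approximation for a unimodal posterior."""
    s = sorted(samples)
    n = len(s)
    w = int(round(cred * n))  # number of draws the interval must cover
    # Slide a window of w consecutive order statistics; keep the shortest one.
    best = min(range(n - w + 1), key=lambda i: s[i + w - 1] - s[i])
    return s[best], s[best + w - 1]
```

For a right-skewed set of draws the routine correctly discards the long upper tail, which is why HPD intervals tend to be shorter than the symmetric asymptotic intervals reported in Table 6.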

5. Real Data Analysis

In this section, a real data set is considered to illustrate the proposed methods. The data represent the survival times (in days) of 72 guinea pigs infected with virulent tubercle bacilli, observed and reported by Bjerkedal [31]. The data are listed as follows: 0.1, 0.33, 0.44, 0.56, 0.59, 0.59, 0.72, 0.74, 0.92, 0.93, 0.96, 1, 1, 1.02, 1.05, 1.07, 1.07, 1.08, 1.08, 1.08, 1.09, 1.12, 1.13, 1.15, 1.16, 1.2, 1.21, 1.22, 1.22, 1.24, 1.3, 1.34, 1.36, 1.39, 1.44, 1.46, 1.53, 1.59, 1.6, 1.63, 1.63, 1.68, 1.71, 1.72, 1.76, 1.83, 1.95, 1.96, 1.97, 2.02, 2.13, 2.15, 2.16, 2.22, 2.3, 2.31, 2.4, 2.45, 2.51, 2.53, 2.54, 2.54, 2.78, 2.93, 3.27, 3.42, 3.47, 3.61, 4.02, 4.32, 4.58, 5.55.
The above data set was analyzed by Hassan and Abd-Allah [23], who fitted the IPL distribution (IPLD) and compared it with the Lomax (L), exponentiated Lomax (EL), power Lomax (PL), inverse Weibull (IW), generalized inverse Weibull (GIW) and inverse Lomax (IL) distributions. The method of maximum likelihood is used to estimate the unknown parameters of the selected models. The following statistics are used to compare the models: the Akaike information criterion (AIC), the corrected Akaike information criterion (CAIC), the Bayesian information criterion (BIC), the Hannan–Quinn information criterion (HQIC), and the Kolmogorov–Smirnov (K–S) statistic.
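These criteria are standard functions of the maximized log-likelihood. A small illustrative helper (our notation, with p parameters and n observations; the formulas are the usual textbook definitions, not code from the paper) is:

```python
import math

def info_criteria(loglik, p, n):
    """Model-selection statistics from a maximized log-likelihood `loglik`
    with p estimated parameters and n observations."""
    aic = -2.0 * loglik + 2.0 * p
    caic = aic + 2.0 * p * (p + 1) / (n - p - 1)      # small-sample correction
    bic = -2.0 * loglik + p * math.log(n)
    hqic = -2.0 * loglik + 2.0 * p * math.log(math.log(n))
    return {"AIC": aic, "CAIC": caic, "BIC": bic, "HQIC": hqic}
```

Solving Table 8's AIC backwards for the IPLD gives a maximized log-likelihood of about −93.5273 (since AIC = −2ℓ + 2p with p = 3); feeding that value back in with n = 72 reproduces the reported AIC and, up to rounding, the BIC.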
In this section, all computations are performed using the R statistical programming language. Table 8 lists the MLEs of the parameters and the AIC, CAIC, BIC, HQIC and K–S statistics for the considered models. The estimated CDFs of the fitted distributions are displayed in Figure 1.
From the numerical results in Table 8, it can be seen that the IPLD fits these data best, since it attains the lowest values of all the above statistics. Figure 1 likewise shows that the IPLD is the most appropriate model for this data set. Therefore, we proceed to analyze the data under this model.
To analyze this data set under PFF censoring, we randomly divide the data into 36 groups with k = 2 independent items within each group. The following first-failure censored data are then obtained: 0.1, 0.44, 0.59, 0.74, 0.93, 1, 1.05, 1.07, 1.08, 1.12, 1.15, 1.2, 1.22, 1.24, 1.3, 1.34, 1.39, 1.46, 1.59, 1.63, 1.68, 1.72, 1.83, 1.97, 2.02, 2.15, 2.22, 2.31, 2.45, 2.53, 2.54, 2.78, 2.93, 3.42, 3.61, 4.02.
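The random grouping step can be sketched as follows (illustrative Python with a hypothetical helper name; since the paper's grouping was done once at random, any fixed seed gives one admissible grouping, generally not the exact sample above):

```python
import random

def first_failure_sample(data, k, rng):
    """Randomly partition `data` into groups of size k and record only the
    smallest observation (the first failure) in each group, sorted."""
    assert len(data) % k == 0
    shuffled = data[:]
    rng.shuffle(shuffled)
    groups = [shuffled[i:i + k] for i in range(0, len(shuffled), k)]
    return sorted(min(g) for g in groups)
```

Note that the overall minimum of the data always survives this step, since it is the first failure of whichever group contains it.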
Next, from the above first-failure censored sample, we generate progressive first-failure censored samples under three different censoring schemes with m = 26. The censoring schemes and the corresponding censored samples are presented in Table 9. Under each censoring scheme, we compute the ML and Bayesian estimates of the parameters. For the Bayesian estimates, we use non-informative priors, as no prior information about the parameters is available. We also obtain 95% asymptotic confidence and HPD credible intervals for the parameters. All estimation results are listed in Table 10, Table 11 and Table 12.

6. Conclusions

In this paper, statistical inference for the parameters of the inverse power Lomax distribution has been studied based on progressive first-failure censored samples. Both classical and Bayesian estimates of the parameters are provided. Since the MLEs of the parameters cannot be obtained in closed form, an iterative procedure has been used. Using the asymptotic normality of the MLEs, we have developed approximate confidence intervals for the parameters. The Bayesian estimates are derived by Tierney–Kadane's approximation method under the squared error and generalized entropy loss functions. Since Tierney–Kadane's method cannot be used to construct Bayesian credible intervals, we utilize an importance sampling procedure to obtain the HPD credible intervals of the parameters. A Monte Carlo simulation study has been carried out to compare all the estimates. Finally, a real data set has been analyzed to illustrate the model. Although we have used Newton's iterative method to obtain the maximum likelihood estimates of the parameters of the IPL distribution, other methods such as the gradient and conjugate gradient methods can also be considered. These methods were applied by Boumaraf et al. [32], who obtained good results. The application of such methods to parameter inference and reliability analysis will be one of our future research topics.

Author Contributions

Methodology and writing, X.S.; supervision, Y.S. Both authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (71571144, 71401134, 71171164, 11701406) and the Program of International Cooperation and Exchanges in Science and Technology Funded by Shaanxi Province (2016KW-033).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data are available from the corresponding author upon request.

Acknowledgments

The authors would like to thank the Associate Editor, Editor and the anonymous reviewers for carefully reading the paper and for their comments, which greatly improved the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Balakrishnan, N.; Aggarwala, R. Progressive Censoring: Theory, Methods, and Applications; Birkhäuser: Boston, MA, USA, 2000.
  2. Balakrishnan, N. Progressive censoring methodology: An appraisal. Test 2007, 16, 211–259.
  3. Johnson, L.G. Theory and Technique of Variation Research; Elsevier Publishing Company: New York, NY, USA, 1964.
  4. Balasooriya, U.; Saw, S.L.C.; Gadag, V. Progressively censored reliability sampling plans for the Weibull distribution. Technometrics 2000, 42, 160–167.
  5. Wu, J.W.; Hung, W.L.; Tsai, C.H. Estimation of the parameters of the Gompertz distribution under the first failure censored sampling plan. Statistics 2003, 37, 517–525.
  6. Wu, J.W.; Yu, H.Y. Statistical inference about the shape parameter of the Burr type XII distribution under the failure censored sampling plan. Appl. Math. Comput. 2005, 163, 443–482.
  7. Wu, S.J.; Kus, C. On estimation based on progressive first failure censored sampling. Comput. Stat. Data Anal. 2009, 53, 3659–3670.
  8. Zhang, F.; Gui, W. Parameter and reliability inferences of inverted exponentiated half-logistic distribution under the progressive first-failure censoring. Mathematics 2020, 8, 708.
  9. Bakoban, R.A.; Abd-Elmougod, G.A. MCMC in analysis of progressively first failure censored competing risks data for Gompertz model. J. Comput. Theor. Nanosci. 2016, 13, 6662–6670.
  10. Dube, M.; Krishna, H.; Garg, R. Generalized inverted exponential distribution under progressive first-failure censoring. J. Stat. Comput. Simul. 2016, 86, 1095–1114.
  11. Singh, S.; Tripathi, Y.M. Reliability sampling plans for a lognormal distribution under progressive first-failure censoring with cost constraint. Stat. Pap. 2015, 56, 773–817.
  12. Soliman, A.A.; Abou-Elheggag, N.A.; Ellah, A.H.A.; Modhesh, A.A. Bayesian and non-Bayesian inferences of the Burr-XII distribution for progressive first-failure censored data. Metron 2014, 70, 1–25.
  13. Ahmadi, M.V.; Doostparast, M. Pareto analysis for the lifetime performance index of products on the basis of progressively first-failure-censored batches under balanced symmetric and asymmetric loss functions. J. Appl. Stat. 2018, 46, 1–32.
  14. Amal, H.; Hani, S. On estimation of overlapping measures for exponential populations under progressive first failure censoring. Qual. Technol. Quant. Manag. 2019, 16, 560–574.
  15. Abd El-Monsef, M.M.E.; El-Latif Hassanein, W.A.A. Assessing the lifetime performance index for Kumaraswamy distribution under first-failure progressive censoring scheme for ball bearing revolutions. Qual. Reliab. Eng. Int. 2020, 36, 1086–1097.
  16. Yu, J.; Gui, W.H.; Shan, Y.Q. Statistical inference on the Shannon entropy of inverse Weibull distribution under the progressive first-failure censoring. Entropy 2019, 21, 1209.
  17. Panahi, H.; Morad, N. Estimation of the inverted exponentiated Rayleigh distribution based on adaptive Type II progressive hybrid censored sample. J. Comput. Appl. Math. 2020, 364, 112345.
  18. Bantan, R.A.R.; Elgarhy, M.; Chesneau, C.; Jamal, F. Estimation of entropy for inverse Lomax distribution under multiple censored data. Entropy 2020, 22, 601.
  19. Nassar, M.; Abo-Kasem, O.E. Estimation of the inverse Weibull parameters under adaptive type-II progressive hybrid censoring scheme. J. Comput. Appl. Math. 2017, 315, 228–239.
  20. Lee, K.; Cho, Y. Bayesian and maximum likelihood estimations of the inverted exponentiated half logistic distribution under progressive Type II censoring. J. Appl. Stat. 2017, 44, 811–832.
  21. Xu, R.; Gui, W.H. Entropy estimation of inverse Weibull distribution under adaptive Type-II progressive hybrid censoring schemes. Symmetry 2019, 11, 1463.
  22. Bantan, R.A.R.; Jamal, F.; Chesneau, C.; Elgarhy, M. A new power Topp–Leone generated family of distributions with applications. Entropy 2019, 21, 1177.
  23. Hassan, A.S.; Abd-Allah, M. On the inverse power Lomax distribution. Ann. Data Sci. 2019, 6, 259–278.
  24. Arnold, B.C.; Press, S.J. Bayesian inference for Pareto populations. J. Econom. 1983, 21, 287–306.
  25. Tierney, L.; Kadane, J.B. Accurate approximations for posterior moments and marginal densities. J. Am. Stat. Assoc. 1986, 81, 82–86.
  26. Lindley, D.V. Approximate Bayes methods. Trabajos de Estadistica 1980, 31, 223–237.
  27. Kundu, D.; Gupta, A.K. Bayes estimation for the Marshall–Olkin bivariate Weibull distribution. Comput. Stat. Data Anal. 2013, 57, 271–281.
  28. Maurya, R.K.; Tripathi, Y.M.; Rastogi, M.K.; Asgharzadeh, A. Parameter estimation for a Burr XII distribution under progressive censoring. Am. J. Math. Manag. Sci. 2017, 36, 259–276.
  29. Sultana, F.; Tripathi, Y.M.; Rastogi, M.K.; Wu, S.J. Parameter estimation for the Kumaraswamy distribution based on hybrid censoring. Am. J. Math. Manag. Sci. 2018, 37, 243–261.
  30. Balakrishnan, N.; Sandhu, R.A. A simple simulation algorithm for generating progressively type-II censored samples. Am. Stat. 1995, 49, 229–230.
  31. Bjerkedal, T. Acquisition of resistance in guinea pigs infected with different doses of virulent tubercle bacilli. Am. J. Epidemiol. 1960, 72, 130–148.
  32. Boumaraf, B.; Seddik-Ameur, N.; Barbu, V.S. Estimation of beta-Pareto distribution based on several optimization methods. Mathematics 2020, 8, 1055.
Figure 1. Empirical CDF against CDF of IPLD, LD, ELD, PLD, IWD, GIWD and ILD for the given data set.
Table 1. MLEs and MSEs of the parameters when α = 1.5 , β = 1 , λ = 0.5 , M = 2000 .
k  n  m  Censoring scheme  |  α̂_ML: EV, MSE  |  β̂_ML: EV, MSE  |  λ̂_ML: EV, MSE
23015(15, 0*14)1.63920.13270.89250.10840.59290.1061
(0*6, 6, 5, 4, 0*6)1.63420.13590.88930.11060.59670.1146
(0*14, 15)1.64150.13910.88430.11380.59740.1076
20(10, 0 *19)1.62400.11750.91880.09460.58180.0917
(1, 0)*101.63920.12960.88930.10840.60290.1048
(0*19, 10)1.63500.13710.91090.10330.58210.0942
30(0*30)1.54710.07210.95170.06550.55190.0697
5025(25, 0*24)1.58610.09340.92170.08860.57590.0824
(0*8, 1, 3*8, 0*8)1.59320.09560.91590.09080.57630.0831
(0*24, 25)1.59160.09400.92140.08940.56400.0828
30(20, 0*29)1.57350.07140.94970.07960.56310.0772
(2, 0, 0)*101.57420.07560.94590.08750.55150.0810
(0*29, 20)1.57690.07510.93530.08180.55410.0780
50(0*50)1.53280.07040.97380.06370.54680.0638
33015(15, 0*14)1.63870.13160.91620.09280.57250.0955
(0*6, 6, 5, 4, 0*6)1.63060.13530.91010.09340.57480.0994
(0*14, 15)1.63540.13870.90180.09540.57320.1004
20(10, 0*19)1.62710.10470.93030.08320.57210.0829
(1, 0)*101.62300.12120.91090.09440.57340.0904
(0*19, 10)1.62450.13810.92350.09460.57450.0931
30(0*30)1.54660.07130.96360.06470.55160.0690
5025(25, 0*24)1.58500.09360.93230.07820.56410.0784
(0*8, 1, 3*8, 0*8)1.58780.09650.93180.07950.53150.0791
(0*24,25)1.59010.09720.93050.08990.54190.0815
30(20, 0*29)1.57140.07320.95880.06650.55010.0692
(2, 0, 0)*101.57310.07670.95850.06870.55810.0711
(0*29, 20)1.57620.07941.04800.08560.55340.0766
50(0*50)1.52180.06590.97430.06320.54270.0633
Table 2. Bayesian estimates and MSEs of the parameters under SELF, when α = 1.5 , β = 1 , λ = 0.5 , M = 2000 .
k  n  m  Censoring scheme  |  α̂_BS: EV, MSE  |  β̂_BS: EV, MSE  |  λ̂_BS: EV, MSE
23015(15, 0*14)1.59930.11660.91630.09860.57010.1019
(0*6, 6, 5, 4, 0*6)1.59770.11870.92000.09360.56840.0982
(0*14, 15)1.59680.10981.07690.09420.56750.0951
20(10, 0*19)1.58870.09900.92870.08600.57930.0878
(1, 0)*101.58470.09480.93340.08410.56820.0865
(0*19, 10)1.58220.09080.95230.07590.55790.0769
30(0*30)1.54120.07030.95890.07120.54630.0753
5025(25, 0*24)1.55820.09280.93090.08540.55860.0824
(0*8, 1, 3*8, 0*8)1.56470.09101.06420.08190.55790.0821
(0*24, 25)1.56300.09040.93970.08100.55650.0802
30(20, 0*29)1.55820.07930.95180.07460.54900.0774
(2, 0, 0)*101.54510.07580.95460.07280.54460.0727
(0*29, 20)1.54320.07260.95670.07240.54720.0715
50(0*50)1.53070.07010.96430.07040.54370.0738
33015(15, 0*14)1.58710.11320.92740.09320.58930.0906
(0*6, 6, 5, 4, 0*6)1.58340.11240.92860.09150.56480.0874
(0*14,15)1.58230.10520.93510.09130.56580.0857
20(10, 0*19)1.57750.09870.94870.07830.55220.0795
(1, 0)*101.57440.09430.94510.07800.54890.0778
(0*19,10)1.57160.08910.94880.07430.55330.0766
30(0*30)1.54010.07010.95910.07090.54300.0742
5025(25, 0*24)1.54350.09220.94910.08170.55270.0763
(0*8, 1, 3*8, 0*8)1.56820.09060.95240.07450.55180.0751
(0*24,25)1.54790.08970.95690.07370.55760.0744
30(20, 0*29)1.54740.07621.03800.06780.54430.0742
(2,0,0)*101.54460.07540.96800.06650.55630.0620
(0*29,20)1.54190.07190.96820.06890.55410.0687
50(0*50)1.52090.06430.96880.07020.54130.0730
Table 3. Bayesian estimates and MSEs of the parameters under GELF when q = −0.5, α = 1.5 , β = 1 , λ = 0.5 , M = 2000 .
k  n  m  Censoring scheme  |  α̂_BG: EV, MSE  |  β̂_BG: EV, MSE  |  λ̂_BG: EV, MSE
23015(15, 0*14)1.59850.11650.91700.09860.56730.1004
(0*6, 6, 5, 4, 0*6)1.59650.11830.92130.09350.56400.0976
(0*14,15)1.59510.10951.07610.09400.56210.0950
20(10, 0*19)1.58720.09880.92940.08580.57790.0871
(1, 0)*101.58410.09410.93460.08400.56770.0859
(0*19,10)1.58180.09020.95290.07610.55480.0757
30(0*30)1.54070.07010.95930.07100.54580.0751
5025(25, 0*24)1.55730.09230.93270.08520.55800.0822
(0*8, 1, 3*8, 0*8)1.56400.09071.06510.08150.55580.0814
(0*24, 25)1.56250.09010.94270.08050.55490.0801
30(20, 0*29)1.55720.07900.95360.07410.54870.0772
(2, 0, 0)*101.54410.07520.95510.07230.54380.0724
(0*29, 20)1.54280.07230.95740.07200.54690.0711
50(0*50)1.52780.06960.96500.07020.54300.0736
33015(15, 0*14)1.58630.11280.92890.09300.58870.0904
(0*6, 6, 5, 4, 0*6)1.58250.11200.92940.09140.56320.0869
(0*14, 15)1.58120.10480.93780.09110.56520.0857
20(10, 0*19)1.57630.09810.94900.07820.55160.0794
(1, 0)*101.57390.09390.94590.07780.54780.0773
(0*19, 10)1.57100.08820.94920.07400.55270.0764
30(0*30)1.53520.06930.95980.07070.54280.0740
5025(25, 0*24)1.54260.09170.95210.08130.55150.0761
(0*8, 1, 3*8, 0*8)1.56200.09010.95340.07410.55050.0748
(0*24,25)1.54680.08920.95780.07350.55700.0744
30(20, 0*29)1.54610.07591.03870.06770.54370.0740
(2, 0, 0)*101.54380.07510.96890.06630.55560.0619
(0*29, 20)1.54010.07120.96800.06850.55320.0683
50(0*50)1.52090.06400.96940.07010.54100.0729
Table 4. Bayesian estimates and MSEs of the parameters under GELF when q = 0.5 , α = 1.5 , β = 1 , λ = 0.5 , M = 2000 .
k  n  m  Censoring scheme  |  α̂_BG: EV, MSE  |  β̂_BG: EV, MSE  |  λ̂_BG: EV, MSE
23015(15, 0*14)1.59740.11630.92710.09640.56580.1003
(0*6, 6, 5, 4, 0*6)1.59480.11810.92640.09350.56340.0974
(0*14, 15)1.59500.10951.07410.09380.56170.0931
20(10, 0*19)1.42540.09820.93030.08530.57170.0869
(1, 0)*101.42690.09400.93660.08400.56650.0854
(0*19, 10)1.42870.09010.95290.07600.55380.0751
30(0*30)1.54210.07030.96020.07050.54210.0750
5025(25, 0*24)1.55640.09210.93410.08520.55680.0820
(0*8, 1, 3*8, 0*8)1.56360.09051.06490.08120.55610.0814
(0*24, 25)1.56090.08990.94460.08040.55400.0796
30(20, 0*29)1.55720.07900.95370.07400.54820.0771
(2, 0, 0)*101.54380.07500.95420.07210.54340.0724
(0*29, 20)1.54190.07200.95560.07190.54680.0710
50(0*50)1.52670.06950.96570.07010.54320.0736
33015(15, 0*14)1.58560.11240.93130.09270.58490.0902
(0*6, 6, 5, 4, 0*6)1.58160.11180.93450.09120.56210.0863
(0*14,15)1.58030.10420.93970.09100.56380.0853
20(10, 0*19)1.57580.09800.94900.07820.55160.0794
(1, 0)*101.57270.09350.94590.07780.54780.0773
(0*19,10)1.57040.08800.94920.07400.55270.0764
30(0*30)1.53480.06910.95980.07070.54280.0740
5025(25, 0*24)1.54170.09120.95210.08130.55150.0761
(0*8, 1, 3*8, 0*8)1.56070.08970.95340.07410.55050.0748
(0*24, 25)1.54570.08890.95780.07350.55700.0744
30(20, 0*29)1.54570.07541.03870.06770.54370.0740
(2, 0, 0)*101.54270.07490.96890.06630.54560.0615
(0*29, 20)1.53760.07070.96800.06850.54320.0683
50(0*50)1.52030.06360.96630.06970.52490.0721
Table 5. Bayesian estimates and MSEs of the parameters under GELF when q = 1 , α = 1.5 , β = 1 , λ = 0.5 , M = 2000 .
k  n  m  Censoring scheme  |  α̂_BG: EV, MSE  |  β̂_BG: EV, MSE  |  λ̂_BG: EV, MSE
23015(15, 0*14)1.59710.11620.92680.09620.56510.1002
(0*6, 6, 5, 4, 0*6)1.59290.11780.92810.09340.56040.0972
(0*14, 15)1.59380.10901.07320.09290.55970.0934
20(10, 0*19)1.43420.09800.93790.08500.57030.0867
(1, 0)*101.43810.09320.93980.08310.56570.0850
(0*19, 10)1.43790.08950.95630.07590.55160.0750
30(0*30)1.54120.07020.96380.07020.54190.0748
5025(25, 0*24)1.55470.09180.93860.08500.55450.0818
(0*8, 1, 3*8, 0*8)1.56240.09041.06270.08100.55490.0811
(0*24, 25)1.56010.08960.94580.08020.55270.0792
30(20, 0*29)1.55530.07870.95490.07380.54580.0768
(2, 0, 0)*101.54310.07490.95380.07200.54280.0722
(0*29, 20)1.54160.07190.95520.07200.54670.0706
50(0*50)1.52640.06950.96710.07000.54280.0734
33015(15, 0*14)1.58270.11170.93480.09240.58270.0901
(0*6, 6, 5, 4, 0*6)1.58010.11160.94310.09100.56180.0861
(0*14, 15)1.58010.10420.94160.09070.56260.0850
20(10, 0*19)1.57360.09780.95120.07780.55100.0792
(1, 0)*101.57270.09350.94590.07780.54780.0773
(0*19,10)1.57010.08800.95310.07390.55200.0764
30(0*30)1.53270.06900.96320.07050.54230.0739
5025(25, 0*24)1.54100.09100.95360.08110.55010.0760
(0*8, 1, 3*8, 0*8)1.56010.08970.95280.07400.54970.0743
(0*24, 25)1.54260.08830.95900.07330.55370.0742
30(20, 0*29)1.54190.07521.03780.06740.54320.0740
(2, 0, 0)*101.54120.07460.96970.06610.54470.0612
(0*29, 20)1.53540.07050.96940.06830.54200.0631
50(0*50)1.52020.06350.97680.06940.52380.0718
Table 6. The average length (AL) and coverage probability (CP) of 95% asymptotic confidence interval for parameters when α = 1.5 , β = 1 , λ = 0.5 , M = 2000 .
k  n  m  Censoring scheme  |  ACI for α: AL, CP  |  ACI for β: AL, CP  |  ACI for λ: AL, CP
23015(15, 0*14)2.13590.9451.65600.9441.23980.948
(0*6, 6, 5, 4, 0*6)2.09360.9431.68490.9421.23560.947
(0*14, 15)2.05280.9431.79360.9511.21250.945
20(10, 0*19)1.92670.9461.53120.9491.12430.950
(1, 0)*101.95870.9481.72870.9521.11830.951
(0*19, 10)1.89420.9431.52420.9461.11460.949
30(0*30)1.90510.9531.53510.9521.10480.955
5025(25, 0*24)1.87970.9551.56170.9481.11180.953
(0*8, 1, 3*8, 0*8)1.84150.9521.54250.9451.05810.952
(0*24, 25)1.83440.9511.28890.9461.00240.951
30(20, 0*29)1.65770.9581.10180.9510.91890.952
(2, 0, 0)*101.61340.9561.51340.9570.96640.959
(0*29, 20)1.55810.9531.19800.9540.88930.955
50(0*50)1.51280.9571.46510.9590.92460.957
33015(15, 0*14)1.75360.9481.10760.9471.00560.948
(0*6, 6, 5, 4, 0*6)1.76250.9451.04310.9450.98340.947
(0*14, 15)1.75600.9421.65600.9540.90620.945
20(10, 0*19)1.59210.9510.96110.9490.87530.950
(1, 0)*101.63130.9530.96610.9540.87660.951
(0*19, 10)1.54420.9521.34420.9560.80430.949
30(0*30)1.59560.9551.52470.9590.80160.956
5025(25, 0*24)1.50680.9560.94550.9510.76430.953
(0*8, 1, 3*8, 0*8)1.50820.9540.86360.9480.77430.952
(0*24, 25)1.48890.9521.27280.9470.71690.951
30(20, 0*29)1.47860.9600.92450.9510.67310.952
(2, 0, 0)*101.43910.9570.85450.9570.68170.957
(0*29, 20)1.39800.9541.12730.9590.62900.955
50(0*50)1.38790.9611.13480.9580.62030.959
Table 7. The average length (AL) and coverage probability (CP) of 95% HPD credible intervals for parameters when α = 1.5 , β = 1 , λ = 0.5 , M = 2000 .
k  n  m  Censoring scheme  |  HPD interval for α: AL, CP  |  HPD interval for β: AL, CP  |  HPD interval for λ: AL, CP
23015(15, 0*14)1.95070.9461.30700.9511.17720.951
(0*6, 6, 5, 4, 0*6)1.92490.9451.31220.9511.18030.952
(0*14, 15)1.87990.9441.28520.9501.14230.948
20(10, 0*19)1.73470.9511.15150.9521.08890.952
(1, 0)*101.70340.9481.27900.9551.07660.953
(0*19, 10)1.67230.9491.13280.9511.04960.956
30(0*30)1.65490.9581.12580.9541.04670.956
5025(25, 0*24)1.56960.9561.06080.9510.99540.953
(0*8, 1, 3*8, 0*8)1.57260.9541.10230.9490.98310.954
(0*24, 25)1.43190.9581.02810.9470.90680.952
30(20, 0*29)1.35330.9610.98630.9520.84660.954
(2, 0, 0)*101.42840.9621.00470.9590.86290.960
(0*29, 20)1.26570.9560.96780.9550.82230.956
50(0*50)1.26570.9590.97890.9600.83410.960
33015(15, 0*14)1.47180.9510.98020.9480.88650.950
(0*6, 6, 5, 4, 0*6)1.49720.9530.99270.9500.94720.951
(0*14, 15)1.39360.9490.91720.9540.84740.949
20(10, 0*19)1.32150.9530.90640.9510.77530.952
(1, 0)*101.34590.9560.89430.9560.82020.953
(0*19, 10)1.28810.9520.82980.9570.75460.956
30(0*30)1.35520.9570.87620.9610.78130.961
5025(25, 0*24)1.17330.9590.81940.9540.76560.958
(0*8, 1, 3*8, 0*8)1.23390.9570.81660.9500.73880.953
(0*24, 25)1.17560.9610.77110.9530.68230.952
30(20, 0*29)1.01910.9610.62640.9530.66430.954
(2, 0, 0)*101.09890.9580.68450.9590.67430.959
(0*29, 20)0.95350.9560.66200.9600.66190.961
50(0*50)0.96720.9630.67980.9630.65970.959
Table 8. The fitting results for the real data set of survival times of 72 guinea pigs data.
Distribution   MLEs                                       AIC        CAIC       BIC        HQIC       K–S
IPLD           α̂ = 0.6971, λ̂ = 0.1302, β̂ = 3.3638       193.0546   193.3983   199.8854   195.7738   0.0743
LD             α̂ = 182.4212, λ̂ = 103.5113                230.5347   230.7038   235.0892   232.3482   0.6904
ELD            α̂ = 3.7163, λ̂ = 0.0151, θ̂ = 78.3218      194.5692   194.9124   201.3987   197.2882   0.0941
PLD            α̂ = 1.7087, λ̂ = 6.1974, β̂ = 2.5742       193.0753   193.4182   199.9052   195.7943   0.0782
IWD            α̂ = 1.0692, λ̂ = 1.1734                    240.3324   240.5014   244.8854   242.1453   0.1968
GIWD           α̂ = 0.1089, γ̂ = 14.3738, β̂ = 1.1732      242.3318   242.6753   249.1618   245.0512   0.1973
ILD            α̂ = 12.9073, β̂ = 0.0958                   242.8217   242.9958   247.3747   244.6346   0.9986
Table 9. Progressive first-failure censored samples under the given censoring schemes when k = 2, n = 36, m = 26.
Censoring scheme  |  Progressive first-failure censored sample
CS1 = (10, 0*25): 0.1, 1.2, 1.22, 1.24, 1.3, 1.34, 1.39, 1.46, 1.59, 1.63, 1.68, 1.72, 1.83, 1.97, 2.02, 2.15, 2.22, 2.31, 2.45, 2.53, 2.54, 2.78, 2.93, 3.42, 3.61, 4.02.
CS2 = (0*11, 3, 4, 3, 0*12): 0.1, 0.44, 0.59, 0.74, 0.93, 1, 1.05, 1.07, 1.08, 1.12, 1.15, 1.2, 1.22, 1.39, 1.72, 2.15, 2.22, 2.31, 2.45, 2.53, 2.54, 2.78, 2.93, 3.42, 3.61, 4.02.
CS3 = (0*25, 10): 0.1, 0.44, 0.59, 0.74, 0.93, 1, 1.05, 1.07, 1.08, 1.12, 1.15, 1.2, 1.22, 1.24, 1.3, 1.34, 1.39, 1.46, 1.59, 1.63, 1.68, 1.72, 1.83, 1.97, 2.02, 2.15
Table 10. MLEs and Bayesian estimations (BEs) of parameters for the real data sets under different censoring scheme.
            MLEs                            BEs (SELF)
       CS1      CS2      CS3          CS1      CS2      CS3
α̂     0.8245   0.4263   0.5248       0.8168   0.4375   0.5357
β̂     0.1982   0.0721   0.1156       0.1893   0.0786   0.1274
λ̂     4.1089   2.3716   2.4823       4.1025   2.3785   2.4969
Table 11. Bayesian estimations of parameters under GELF.
            q = −0.5                        q = 0.5                         q = 1
       CS1      CS2      CS3          CS1      CS2      CS3          CS1      CS2      CS3
α̂_BG  0.8472   0.4236   0.5318       0.8147   0.4380   0.5366       0.8025   0.4453   0.5354
β̂_BG  0.1927   0.0735   0.1298       0.1894   0.0792   0.1289       0.1823   0.0786   0.1274
λ̂_BG  4.1354   2.3692   2.4987       4.1016   2.3775   2.4977       4.1025   2.3785   2.4969
Table 12. The 95% asymptotic confidence intervals (ACIs) and HPD credible intervals HPDCIs of the parameters.
       ACIs                                                         HPDCIs
       CS1                CS2                CS3                    CS1                CS2                CS3
α      (0.2426, 2.4109)   (0.1917, 1.5328)   (0.1879, 2.1357)       (0.2426, 2.4103)   (0.1931, 1.5319)   (0.1884, 2.1352)
β      (0.0943, 1.8561)   (0.0257, 1.3771)   (0.0876, 1.7457)       (0.0950, 1.8546)   (0.0265, 1.3762)   (0.0882, 1.7451)
λ      (0.8465, 5.8102)   (0.5413, 3.1485)   (0.6874, 3.5438)       (0.8479, 5.8068)   (0.5620, 3.1424)   (0.6892, 3.5416)
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Shi, X.; Shi, Y. Inference for Inverse Power Lomax Distribution with Progressive First-Failure Censoring. Entropy 2021, 23, 1099. https://doi.org/10.3390/e23091099