Article

Estimation of Lifetime Performance Index for Generalized Inverse Lindley Distribution Under Adaptive Progressive Type-II Censored Lifetime Test

1 Department of Mathematics and Data Science, Chengyi College, Jimei University, Xiamen 361021, China
2 Department of Basic Subjects, Nanchang Jiaotong Institute, Nanchang 330100, China
3 Teaching Department of Basic Subjects, Jiangxi University of Science and Technology, Nanchang 330013, China
* Author to whom correspondence should be addressed.
Axioms 2024, 13(10), 727; https://doi.org/10.3390/axioms13100727
Submission received: 10 August 2024 / Revised: 2 October 2024 / Accepted: 11 October 2024 / Published: 18 October 2024

Abstract

The lifetime performance index (LPI) is an important metric for evaluating product quality, and research on the statistical inference of the LPI is of great significance. This paper discusses both the classical and Bayesian estimation of the LPI under an adaptive progressive type-II censored lifetime test, assuming that the product's lifetime follows a generalized inverse Lindley distribution. First, the maximum likelihood estimator of the LPI is derived, and the Newton–Raphson iterative method is adopted to obtain a numerical solution because the log-likelihood equations have no analytical solution. Since the exact distribution of the LPI is not available, the asymptotic confidence interval and bootstrap confidence interval of the LPI are constructed. For the Bayesian estimation, the Bayesian estimators of the LPI are derived under three different loss functions. Because these estimators involve complex multiple integrals, the Markov chain Monte Carlo (MCMC) method is used to draw samples and further construct the highest posterior density (HPD) credible interval of the LPI. Finally, Monte Carlo simulations are used to examine the performance of these estimators in terms of the average bias and mean squared error, and two practical examples are used to illustrate the application of the proposed estimation methods.

1. Introduction

Process capability analysis is a key component of quality management and a core aspect of statistical process control. This analytical method effectively assesses the potential performance and capability of a process, making it widely applicable in quality improvement initiatives within production processes [1]. The process capability index, as the outcome of a process capability analysis, measures a process's ability to meet the specification requirements for product quality characteristics. It provides management with an intuitive basis for judgment, helping to determine whether the process capability meets the standards of technical specifications and customer demands [2]. Therefore, the process capability index holds a crucial position in the science of quality management and is of significant importance in reducing waste, enhancing product quality, and improving management levels. Quality management experts and statisticians have proposed various process capability indices, including C_p, C_pm, C_pk, and C_pmk [3,4]. Each of these indices has its own calculation method and application scenarios, offering diverse choices for process capability assessments across different industries and domains.
With the continuous advancement of manufacturing technologies, consumers are increasingly concerned about the lifetime of products when making purchasing decisions. To enhance their market competitiveness, manufacturers must prioritize the assessment and improvement of product quality and reliability. Although process capability indices hold a significant position in quality management, they are not suitable for measuring product lifetime. Typically, the quality of a product can be assessed by its lifetime, as a longer lifetime often indicates a higher product quality and reliability. Therefore, there is a need for a “bigger is better” metric to evaluate product lifetime.
Montgomery [5] proposed a process capability index suitable for one-sided specifications, named the lifetime performance index δ , which is used to measure the performance of a product’s lifetime. The mathematical expression of δ is defined as follows:
\delta = \frac{\mu - L}{\sigma},
where L is the given lower specification limit, μ is the process mean, and σ is the process standard deviation. This lifetime performance index (LPI) is particularly applicable to products where a longer lifetime represents better performance. It provides a quantitative tool to assist manufacturers in evaluating and improving the lifetime characteristics of their products. Manufacturers can use the value of this index to determine whether the lifetime of a product meets specific quality requirements and take appropriate measures to enhance product reliability and market competitiveness. Shaabani and Jafari [6] conducted a classical estimation on the LPI, assuming that the product lifetime followed a gamma distribution, based on a complete sample. İklim [7] constructed bootstrap confidence intervals of process capability indices for a generalized inverse Lindley distribution, based on a complete sample. However, due to various constraints such as time costs, material costs, and the tools for data collection, it is often not feasible to obtain the lifetimes of all products in a life test. Therefore, scholars have proposed the censored life test as a more efficient method of data collection, which has been widely applied.
The type-I censored life test, type-II censored life test, and hybrid censored life test have been extensively studied by researchers. These censored tests, however, lack flexibility as they do not allow for the removal of test products during the test, which hinders the implementation of related studies. To address these issues, Cohen and Clifford [8] proposed the progressive censored test, which can reduce the testing time and material costs to some extent. Kilany and Lobna [9] considered the maximum likelihood (ML) estimation and Bayesian estimation of the LPI based on the three-parameter Omega distribution, using a progressive type-II censored sample. Mohammad and Mahdi [10] assumed that the product lifetime followed a Weibull distribution and discussed the Bayesian estimation of the LPI based on progressive censored data. Hanan et al. [11] assumed that the sample followed an Ishita distribution and considered the estimation of the LPI based on a progressive type-II censored scheme.
However, with the continuous advancement of science and technology, product lifetimes are becoming longer. Even when using a progressive censored life test, there may still be cases where the testing duration becomes excessively long or no failures are observed during the test. To address these issues, Kundu and Joarder [12] proposed the progressive type-II hybrid censored life test, which combines the progressive type-II censored test with the hybrid censored test. This test ensures that the experiment ends within a predetermined time frame. However, this test may not guarantee that a sufficient number of failures are collected, and there may even be no failures observed. As a result, the statistical inference results and efficiency may be unsatisfactory. Building upon this, Ng et al. [13] proposed the adaptive progressive type-II censored life test. In this test, the censored scheme can be adjusted in a timely manner based on the actual situation to ensure a sufficient number of failure data are collected. This approach can flexibly handle potential changes during the test, improving the statistical inference results and efficiency.
In a censored life test, n products are put into the test, and only m failed products need to be observed. The censored scheme R = (R_1, R_2, \ldots, R_m) satisfies R_1 + R_2 + \cdots + R_m = n - m. When the first failure occurs, the failure time is recorded as X_{1:m:n}, and then R_1 non-failed products are arbitrarily removed. When the second failure occurs, the failure time is recorded as X_{2:m:n}, and then R_2 non-failed products are arbitrarily removed. The process continues until the m-th failure occurs, at which point the remaining R_m non-failed products are removed, bringing the test to a close. The recorded failure times X_{1:m:n}, X_{2:m:n}, \ldots, X_{m:m:n} are termed the progressive type-II censored sample, and the numbers of arbitrarily removed non-failed products R_1, R_2, \ldots, R_m are termed the progressive type-II censored scheme.
The adaptive progressive type-II censored test can be seen as a combination of the type-I censored test and the progressive type-II censored test. Before the test begins, a time T is set to represent the ideal duration of the test. In fact, the actual duration of the test is allowed to exceed T . If the number of failures observed before the time T reaches m , the test ends before T . However, if the test time exceeds T and m failures have not yet been observed, the test should end as soon as possible. The experimenters should make some adjustments to ensure the rapid occurrence of m failures. Therefore, it is necessary to retain as many non-failed products as possible throughout the test period. The two cases of the adaptive progressive type-II censored test are as follows:
Case 1. If m failure times have been recorded before the time T , then the censored scheme R = ( R 1 , R 2 , , R m ) remains consistent with the predetermined one.
Case 2. If only k   ( k < m ) failure times have been recorded before the time T , that is,
X_{k:m:n} < T < X_{k+1:m:n}.
In order to retain as many non-failed products as possible in the experiment, the experimenter stops removing non-failed products once the (k + 1)-th failure time is recorded. All remaining non-failed products are removed only after the m-th failure time has been recorded. Therefore, R_1, R_2, \ldots, R_k remain consistent with the predetermined censored scheme, while R_{k+1} = R_{k+2} = \cdots = R_{m-1} = 0 and R_m = n - m - R_1 - R_2 - \cdots - R_k. In other words, the censored scheme is adjusted to R = (R_1, \ldots, R_k, 0, \ldots, 0, R_m).
The specific experimental process can be seen in Figure 1.
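To make the two cases concrete, the short Python sketch below returns the censored scheme that is actually applied once the failure times are known. It is only an illustration of the adjustment rule described above (the function name and argument names are ours, not the authors'); the paper itself reports using MATLAB for its computations.

```python
def effective_scheme(R, failure_times, T):
    """Censored scheme actually applied in an adaptive progressive
    type-II censored test (Cases 1 and 2 in the text).

    R              : predetermined scheme (R_1, ..., R_m), sum(R) = n - m
    failure_times  : recorded failure times x_{1:m:n} <= ... <= x_{m:m:n}
    T              : ideal test duration
    """
    m = len(R)
    n_minus_m = sum(R)
    # k = number of failures observed before time T
    k = sum(1 for x in failure_times if x < T)
    if k >= m:                      # Case 1: the scheme is unchanged
        return list(R)
    # Case 2: keep R_1, ..., R_k, remove nothing until the m-th failure,
    # then remove all remaining non-failed units at the end.
    adjusted = list(R[:k]) + [0] * (m - k - 1)
    adjusted.append(n_minus_m - sum(R[:k]))
    return adjusted
```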
The process capability index allows for the continuous monitoring of process quality, ensuring that the produced products meet specification requirements and providing a basis for reducing product failure costs. Typically, the process capability index is calculated based on the lifetime of products; thus, a longer lifetime is generally associated with a higher product quality. Considering this, this paper chooses to use the LPI to evaluate product quality. The lifetime of many products may not follow a normal distribution and may instead exhibit characteristics such as an exponential distribution, Weibull distribution, or Burr distribution. Consequently, extensive research has been conducted on the inference of the LPI. Table 1 provides a review of more references on the LPI.
The generalized versions of statistical models offer greater flexibility for modeling and analyzing real-world data. Among these, the generalized inverse Lindley distribution (GILD) is a novel statistical model suitable for analyzing inverted bathtub survival information. More details about the GILD can be found in Section 2. To our knowledge, the inference of the LPI has not been addressed in the case where the shape and scale parameters of the GILD are unknown. Therefore, this paper assumes that the lifetimes of the products follow the GILD and considers the estimation of the LPI under the adaptive progressive type-II censored sample.
The remaining sections are organized as follows: In Section 2, a brief overview of the GILD is provided, and the expressions of the LPI for the GILD are derived. In Section 3, the maximum likelihood (ML) estimator of the LPI is obtained, and the asymptotic confidence interval (ACI) is constructed. The bootstrap confidence interval (BCI) is discussed in Section 4. In Section 5, using the symmetric entropy loss function (SELF), LINEX loss function (LLF), and general entropy loss function (GELF), the Bayesian estimators of the LPI are derived based on the gamma priors, and the highest posterior density (HPD) credible intervals are constructed. In Section 6, the performance of these estimators and confidence intervals are compared through Monte Carlo simulations. In Section 7, two sets of real data are used to illustrate the feasibility of these estimation methods. Finally, Section 8 presents the conclusions of this paper.

2. Generalized Inverse Lindley Distribution

In real life, there is a type of data whose hazard function often exhibits a distinctive shape known as the “upside-down bathtub” shape. This type of data includes, but is not limited to, the failure rates of electronic products or mechanical devices, the mortality rates of biological individuals, the occurrence rates of natural disasters, and product demand rates. Specifically, this “upside-down bathtub”-shaped hazard function provides a more accurate depiction of the entire process from the occurrence and development to the eventual decline of certain phenomena. A profound understanding of this pattern holds significant importance for the prediction and management of risks in relevant domains. Addressing this characteristic, Sharma et al. [20] proposed a statistical model called the generalized inverse Lindley distribution. This distribution is an extension of the inverse Lindley distribution and is particularly suitable for modeling survival information with an “upside-down bathtub” shape. By introducing a new parameter, the GILD can more accurately capture and describe the true characteristics of the data, thus providing a more powerful tool for risk prediction and management.
Some research on the GILD has been conducted. Basu et al. [21] considered the classical estimation and Bayesian estimation for the GILD under the progressive type-I hybrid censoring with binomial removal. Vikas [22] assumed that X and Y represented the survival times of two groups of cancer patients under different treatment plans, respectively, and that they were independent and followed the GILD. Under this premise, the Bayesian estimator of P ( X > Y ) was proposed. Devendra et al. [23] used generalized order statistics to obtain two types of estimators for the parameters of the GILD. Fatma et al. [24] considered the generalized inverse Lindley stress–strength reliability model under different sampling designs and obtained the maximum likelihood estimator of stress–strength reliability. More related literature can be found in the references [25,26,27,28,29].
Let X be a random variable that follows the GILD with the shape parameter λ > 0 and scale parameter β > 0 , denoted by G I L D ( λ , β ) . The probability density function (PDF), cumulative distribution function (CDF), and hazard function (HF) are given as (Sharma et al. [20])
f(x \mid \lambda,\beta) = \frac{\lambda \beta^{2} (1 + x^{\lambda})}{(1+\beta)\, x^{2\lambda+1}} \exp\!\left(-\frac{\beta}{x^{\lambda}}\right), \quad x > 0,
F(x \mid \lambda,\beta) = \left[1 + \frac{\beta}{x^{\lambda}(1+\beta)}\right] \exp\!\left(-\frac{\beta}{x^{\lambda}}\right), \quad x > 0,
and
h(x \mid \lambda,\beta) = \frac{\lambda \beta^{2} (1 + x^{\lambda})}{x^{\lambda+1}\left[x^{\lambda}(1+\beta)\left(e^{\beta/x^{\lambda}} - 1\right) - \beta\right]}, \quad x > 0.
Figure 2 presents the diagrams of the PDF and HF for the GILD with different combinations of the parameters λ and β , respectively.
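For readers who want to evaluate these functions numerically, a direct Python transcription of the PDF and CDF above is sketched below (NumPy-based; the function names are ours). The hazard is computed as f/(1 − F), which is equivalent to the closed form given above.

```python
import numpy as np

def gild_pdf(x, lam, beta):
    """PDF of the generalized inverse Lindley distribution (GILD)."""
    x = np.asarray(x, dtype=float)
    return (lam * beta**2 * (1.0 + x**lam)
            / ((1.0 + beta) * x**(2.0 * lam + 1.0))
            * np.exp(-beta / x**lam))

def gild_cdf(x, lam, beta):
    """CDF of the GILD."""
    x = np.asarray(x, dtype=float)
    return (1.0 + beta / (x**lam * (1.0 + beta))) * np.exp(-beta / x**lam)

def gild_hazard(x, lam, beta):
    """Hazard function of the GILD, computed as f / (1 - F)."""
    return gild_pdf(x, lam, beta) / (1.0 - gild_cdf(x, lam, beta))

# quick sanity check: the CDF should increase from 0 towards 1
xs = np.linspace(0.1, 10.0, 5)
print(gild_cdf(xs, lam=2.5, beta=1.0))
```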
In the following discussion, we always suppose that λ > 2 . Let the product lifetime X follow the GILD with the PDF (2) and CDF (3); the LPI δ can be derived as
\delta = \frac{g_1(\lambda,\beta) - \lambda(1+\beta)L}{g_2(\lambda,\beta)},
where
g_1(\lambda,\beta) = \beta^{1/\lambda}\left[\lambda(1+\beta) - 1\right]\Gamma\!\left(1 - \tfrac{1}{\lambda}\right),
g_2(\lambda,\beta) = \beta^{1/\lambda}\left\{\lambda(1+\beta)\left[\lambda(1+\beta) - 2\right]\left[\Gamma\!\left(1 - \tfrac{2}{\lambda}\right) - \Gamma^{2}\!\left(1 - \tfrac{1}{\lambda}\right)\right] - \Gamma^{2}\!\left(1 - \tfrac{1}{\lambda}\right)\right\}^{1/2}.
If the product lifetime X exceeds the lower specification limit L (i.e., X > L ), then the product is marked as a conforming product. Otherwise, the product is marked as an unconforming product. The probability P ( X > L ) is referred to as the conforming rate of the product, denoted as P c r . Therefore, P c r can be defined as
P_{cr} = P(X > L) = 1 - \left\{1 + \frac{\lambda^{\lambda}\beta(1+\beta)^{\lambda-1}}{\left[g_1(\lambda,\beta) - \delta g_2(\lambda,\beta)\right]^{\lambda}}\right\}\exp\!\left\{-\frac{\lambda^{\lambda}\beta(1+\beta)^{\lambda}}{\left[g_1(\lambda,\beta) - \delta g_2(\lambda,\beta)\right]^{\lambda}}\right\}.
From Equation (8), it is evident that, for given values of λ and β, the conforming rate P_cr is a strictly increasing function of the LPI δ. To illustrate this relationship, Table 2 lists the values of the LPI δ and the corresponding conforming rate P_cr under λ = 3.1956 and β = 7.3937.
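The quantities δ and P_cr can be evaluated directly from the expressions above. The sketch below does this in Python with SciPy's gamma function; the helper names are ours, and P_cr is computed as 1 − F(L), which is equivalent to Equation (8).

```python
import numpy as np
from scipy.special import gamma as gamma_fn

def g1(lam, beta):
    # g_1(lambda, beta); requires lam > 1 so the Gamma argument is positive
    return beta**(1.0 / lam) * (lam * (1.0 + beta) - 1.0) * gamma_fn(1.0 - 1.0 / lam)

def g2(lam, beta):
    # g_2(lambda, beta); requires lam > 2
    G1 = gamma_fn(1.0 - 1.0 / lam)
    G2 = gamma_fn(1.0 - 2.0 / lam)
    inner = lam * (1.0 + beta) * (lam * (1.0 + beta) - 2.0) * (G2 - G1**2) - G1**2
    return beta**(1.0 / lam) * np.sqrt(inner)

def lpi(lam, beta, L):
    # lifetime performance index delta, Equation (5)
    return (g1(lam, beta) - lam * (1.0 + beta) * L) / g2(lam, beta)

def conforming_rate(lam, beta, L):
    # P_cr = P(X > L) = 1 - F(L); equivalent to Equation (8)
    u = beta / L**lam
    return 1.0 - (1.0 + u / (1.0 + beta)) * np.exp(-u)

# delta and P_cr move together, as illustrated in Table 2
lam, beta = 3.1956, 7.3937
for L in (0.5, 1.0, 1.5):
    print(L, lpi(lam, beta, L), conforming_rate(lam, beta, L))
```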

3. Maximum Likelihood Estimation

Let X = (X_{1:m:n}, X_{2:m:n}, \ldots, X_{m:m:n}) be an adaptive progressive type-II censored sample from GILD(λ, β) with the censored scheme R = (R_1, \ldots, R_k, 0, \ldots, 0, R_m), where R_m = n - m - R_1 - \cdots - R_k. Denote x = (x_{1:m:n}, x_{2:m:n}, \ldots, x_{m:m:n}) as the observation of X. The likelihood function of λ and β is
l(\lambda,\beta \mid x) = A\left[1 - F(x_m \mid \lambda,\beta)\right]^{R_m}\left[\prod_{i=1}^{m} f(x_i \mid \lambda,\beta)\right]\prod_{i=1}^{k}\left[1 - F(x_i \mid \lambda,\beta)\right]^{R_i} = A\,\frac{\lambda^{m}\beta^{2m}}{(1+\beta)^{m}}\left[1 - e^{-\beta/x_m^{\lambda}} - \frac{\beta e^{-\beta/x_m^{\lambda}}}{x_m^{\lambda}(1+\beta)}\right]^{R_m}\left[\prod_{i=1}^{m}\frac{(1+x_i^{\lambda})\,e^{-\beta/x_i^{\lambda}}}{x_i^{2\lambda+1}}\right]\prod_{i=1}^{k}\left[1 - e^{-\beta/x_i^{\lambda}} - \frac{\beta e^{-\beta/x_i^{\lambda}}}{x_i^{\lambda}(1+\beta)}\right]^{R_i},
where A = \prod_{i=1}^{m}\left(n - i + 1 - \sum_{j=1}^{i-1} R_j\right) is a constant and x_i is written instead of x_{i:m:n} for simplicity. Then, the log-likelihood function is given by
L(\lambda,\beta \mid x) = \ln A + m\ln\frac{\lambda\beta^{2}}{1+\beta} - \beta\sum_{i=1}^{m} x_i^{-\lambda} + \sum_{i=1}^{m}\left[\ln(1+x_i^{\lambda}) - (2\lambda+1)\ln x_i\right] + \sum_{i=1}^{k} R_i\ln\left[1 - e^{-\beta/x_i^{\lambda}} - \frac{\beta e^{-\beta/x_i^{\lambda}}}{x_i^{\lambda}(1+\beta)}\right] + R_m\ln\left[1 - e^{-\beta/x_m^{\lambda}} - \frac{\beta e^{-\beta/x_m^{\lambda}}}{x_m^{\lambda}(1+\beta)}\right].
The partial derivatives of the log-likelihood function with respect to λ and β are as follows:
\frac{\partial L(\lambda,\beta \mid x)}{\partial \lambda} = \frac{m}{\lambda} - 2\sum_{i=1}^{m}\ln x_i + \sum_{i=1}^{m}\left(\beta x_i^{-\lambda} + \frac{x_i^{\lambda}}{1+x_i^{\lambda}}\right)\ln x_i - \frac{\beta^{2}}{1+\beta}\left[\sum_{i=1}^{k} R_i\,\frac{x_i^{-\lambda}\left(1+x_i^{-\lambda}\right)e^{-\beta/x_i^{\lambda}}\ln x_i}{1 - F(x_i \mid \lambda,\beta)} + R_m\,\frac{x_m^{-\lambda}\left(1+x_m^{-\lambda}\right)e^{-\beta/x_m^{\lambda}}\ln x_m}{1 - F(x_m \mid \lambda,\beta)}\right],
\frac{\partial L(\lambda,\beta \mid x)}{\partial \beta} = \frac{m(\beta+2)}{\beta(1+\beta)} - \sum_{i=1}^{m} x_i^{-\lambda} + \frac{\beta}{(1+\beta)^{2}}\left[\sum_{i=1}^{k} R_i\,\frac{x_i^{-\lambda}e^{-\beta/x_i^{\lambda}}\left(\beta + 2 + (1+\beta)x_i^{-\lambda}\right)}{1 - F(x_i \mid \lambda,\beta)} + R_m\,\frac{x_m^{-\lambda}e^{-\beta/x_m^{\lambda}}\left(\beta + 2 + (1+\beta)x_m^{-\lambda}\right)}{1 - F(x_m \mid \lambda,\beta)}\right].
Therefore, the ML estimators of λ and β , say, λ ^ and β ^ , are the solutions of Equation (13).
\frac{\partial L(\lambda,\beta \mid x)}{\partial \lambda} = 0, \qquad \frac{\partial L(\lambda,\beta \mid x)}{\partial \beta} = 0.
Due to the invariance of the maximum likelihood estimation, the ML estimator δ ^ M L can be obtained by putting λ ^ and β ^ into Equation (5), that is,
\hat{\delta}_{ML} = \frac{g_1(\hat\lambda,\hat\beta) - \hat\lambda(1+\hat\beta)L}{g_2(\hat\lambda,\hat\beta)}.
Equation (13) is nonlinear and does not admit an analytical solution. Therefore, we employ a numerical iterative method to solve it. In this paper, we utilize the Newton–Raphson iteration approach, and the iteration process is provided in Algorithm 1.
Algorithm 1. Newton–Raphson iteration approach used to calculate the ML estimate of δ
(1) Initial Guess: Start with an initial guess ( λ ( 0 ) , β ( 0 ) ) for the root of Equation (13).
(2) Calculate the Derivatives: Form the gradient I_1(λ, β) from Equations (11) and (12), and obtain the Hessian I_2(λ, β) by differentiating them once more.
(3) Iteration: For each iteration j , calculate the next approximation ( λ ( j + 1 ) , β ( j + 1 ) ) by using the following equation:
\left(\lambda^{(j+1)}, \beta^{(j+1)}\right)^{T} = \left(\lambda^{(j)}, \beta^{(j)}\right)^{T} - \left[I_2\!\left(\lambda^{(j)}, \beta^{(j)}\right)\right]^{-1} I_1\!\left(\lambda^{(j)}, \beta^{(j)}\right).
(4) Convergence Check: Check whether the absolute or relative error between (λ^{(j+1)}, β^{(j+1)}) and (λ^{(j)}, β^{(j)}) is less than a predetermined tolerance level. If it is, then (λ^{(j+1)}, β^{(j+1)}) is considered a root of Equation (13), and set λ̂ = λ^{(j+1)} and β̂ = β^{(j+1)}.
(5) Repeat: If the convergence criterion is not met, repeat the process from step (3) with (λ^{(j+1)}, β^{(j+1)}) as the new approximation.
(6) Termination: The process is terminated when the accuracy level is achieved or after a maximum number J of iterations is reached.
(7) The ML estimate δ̂_ML is then obtained according to Equation (14).
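A compact numerical version of Algorithm 1 is sketched below. Rather than coding the analytical derivatives, it approximates I_1 and I_2 by central finite differences of the log-likelihood given earlier; the function names, the finite-difference step, and the stopping rule are our own choices, so this is an illustrative sketch rather than the authors' implementation (which was written in MATLAB).

```python
import numpy as np

def gild_cdf(x, lam, beta):
    return (1.0 + beta / (x**lam * (1.0 + beta))) * np.exp(-beta / x**lam)

def loglik(theta, x, R, k):
    """Log-likelihood of the adaptive progressive type-II censored sample,
    up to the constant ln A. R is the applied scheme (R[-1] = R_m) and k is
    the number of failures before T; for Case 1 pass k = m - 1 with the
    predetermined scheme."""
    lam, beta = theta
    if lam <= 0 or beta <= 0:
        return -np.inf
    x = np.asarray(x, dtype=float)
    ll = (len(x) * np.log(lam * beta**2 / (1.0 + beta))
          - beta * np.sum(x**(-lam))
          + np.sum(np.log1p(x**lam) - (2.0 * lam + 1.0) * np.log(x)))
    surv = 1.0 - gild_cdf(x, lam, beta)
    ll += np.sum(np.asarray(R[:k]) * np.log(surv[:k])) + R[-1] * np.log(surv[-1])
    return ll

def grad_hess(f, theta, h=1e-5):
    """Central-difference gradient (I_1) and Hessian (I_2) of f at theta."""
    theta = np.asarray(theta, dtype=float)
    p = len(theta)
    g = np.zeros(p)
    H = np.zeros((p, p))
    for i in range(p):
        e = np.zeros(p); e[i] = h
        g[i] = (f(theta + e) - f(theta - e)) / (2 * h)
    for i in range(p):
        for j in range(p):
            ei = np.zeros(p); ei[i] = h
            ej = np.zeros(p); ej[j] = h
            H[i, j] = (f(theta + ei + ej) - f(theta + ei - ej)
                       - f(theta - ei + ej) + f(theta - ei - ej)) / (4 * h * h)
    return g, H

def newton_raphson(x, R, k, theta0, tol=1e-6, max_iter=100):
    """Steps (3)-(6) of Algorithm 1 with numerically approximated I_1, I_2."""
    theta = np.asarray(theta0, dtype=float)
    f = lambda t: loglik(t, x, R, k)
    for _ in range(max_iter):
        g, H = grad_hess(f, theta)
        step = np.linalg.solve(H, g)          # I_2^{-1} I_1
        theta_new = theta - step              # update as in Equation (15)
        if np.max(np.abs(theta_new - theta)) < tol:
            return theta_new
        theta = theta_new
    return theta
```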
In the above algorithm, there are
I_1(\lambda,\beta) = \left(\frac{\partial L(\lambda,\beta \mid x)}{\partial \lambda},\; \frac{\partial L(\lambda,\beta \mid x)}{\partial \beta}\right)^{T},
I_2(\lambda,\beta) = \begin{pmatrix} \dfrac{\partial^{2} L(\lambda,\beta \mid x)}{\partial \lambda^{2}} & \dfrac{\partial^{2} L(\lambda,\beta \mid x)}{\partial \lambda\,\partial \beta} \\ \dfrac{\partial^{2} L(\lambda,\beta \mid x)}{\partial \beta\,\partial \lambda} & \dfrac{\partial^{2} L(\lambda,\beta \mid x)}{\partial \beta^{2}} \end{pmatrix}.
The second-order partial derivatives are obtained by differentiating Equations (11) and (12) once more. Writing G(x) = 1 - F(x \mid \lambda,\beta) for the reliability function of the GILD, they can be expressed as
\frac{\partial^{2} L(\lambda,\beta \mid x)}{\partial \lambda^{2}} = -\frac{m}{\lambda^{2}} - \beta\sum_{i=1}^{m} x_i^{-\lambda}(\ln x_i)^{2} + \sum_{i=1}^{m}(\ln x_i)^{2}\left[\frac{x_i^{\lambda}}{1+x_i^{\lambda}} - \frac{x_i^{2\lambda}}{(1+x_i^{\lambda})^{2}}\right] + \sum_{i=1}^{k} R_i\,\frac{\partial^{2}\ln G(x_i)}{\partial \lambda^{2}} + R_m\,\frac{\partial^{2}\ln G(x_m)}{\partial \lambda^{2}},
\frac{\partial^{2} L(\lambda,\beta \mid x)}{\partial \beta^{2}} = -\frac{2m}{\beta^{2}} + \frac{m}{(1+\beta)^{2}} + \sum_{i=1}^{k} R_i\,\frac{\partial^{2}\ln G(x_i)}{\partial \beta^{2}} + R_m\,\frac{\partial^{2}\ln G(x_m)}{\partial \beta^{2}},
and
\frac{\partial^{2} L(\lambda,\beta \mid x)}{\partial \lambda\,\partial \beta} = \frac{\partial^{2} L(\lambda,\beta \mid x)}{\partial \beta\,\partial \lambda} = \sum_{i=1}^{m} x_i^{-\lambda}\ln x_i + \sum_{i=1}^{k} R_i\,\frac{\partial^{2}\ln G(x_i)}{\partial \lambda\,\partial \beta} + R_m\,\frac{\partial^{2}\ln G(x_m)}{\partial \lambda\,\partial \beta},
where the second derivatives of \ln G follow by differentiating the corresponding censored-data terms in Equations (11) and (12).
The preceding discussion has made it evident that Equation (13) is complex, precluding the derivation of an exact expression for the ML estimators of λ and β . Consequently, it is impractical to ascertain precise confidence intervals. Considering the above, we construct an ACI for δ using the delta method in this paper. An ACI is constructed using the principles of asymptotic theory, which deals with the behavior of estimators and statistical tests as the sample size grows indefinitely. The primary idea is that, as the sample size increases, the distribution of the estimator tends to a normal distribution due to the central limit theorem. This allows for the approximation of the sampling distribution of the estimator with a normal distribution, even if the underlying population distribution is not normal.
The Fisher information matrix is
H(\lambda,\beta) = -I_2(\lambda,\beta).
Let \Psi(\lambda,\beta) = \left(\dfrac{\partial \delta}{\partial \lambda},\; \dfrac{\partial \delta}{\partial \beta}\right), where
\frac{\partial \delta}{\partial \lambda} = \frac{1}{g_2(\lambda,\beta)}\left[\frac{\partial g_1(\lambda,\beta)}{\partial \lambda} - (1+\beta)L\right] - \frac{g_1(\lambda,\beta) - \lambda(1+\beta)L}{g_2^{2}(\lambda,\beta)}\,\frac{\partial g_2(\lambda,\beta)}{\partial \lambda},
\frac{\partial \delta}{\partial \beta} = \frac{1}{g_2(\lambda,\beta)}\left[\frac{\partial g_1(\lambda,\beta)}{\partial \beta} - \lambda L\right] - \frac{g_1(\lambda,\beta) - \lambda(1+\beta)L}{g_2^{2}(\lambda,\beta)}\,\frac{\partial g_2(\lambda,\beta)}{\partial \beta}.
Let H^{-1}(\lambda,\beta) be the inverse matrix of H(\lambda,\beta); then, according to the delta method, the estimate of the variance r of \hat\delta_{ML} is obtained as follows:
\hat{r} = \Psi(\hat\lambda,\hat\beta)\,H^{-1}(\hat\lambda,\hat\beta)\,\left[\Psi(\hat\lambda,\hat\beta)\right]^{T}.
The 100(1 - \gamma)\% ACI of the LPI δ is
\left(\hat\delta_{ML} - z_{\gamma/2}\sqrt{\hat{r}},\; \hat\delta_{ML} + z_{\gamma/2}\sqrt{\hat{r}}\right),
where z_{\gamma/2} is the upper \gamma/2 quantile of the standard normal distribution.
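The delta-method interval described above can be assembled from any routine that returns the MLE, the log-likelihood, and δ(λ, β). The generic sketch below again uses central finite differences; the helper name and arguments are ours, and the caller is expected to supply a log-likelihood function such as the one in the sketch after Algorithm 1.

```python
import numpy as np
from scipy.stats import norm

def delta_method_aci(loglik_fn, delta_fn, theta_hat, gamma=0.05, h=1e-5):
    """100(1-gamma)% ACI for delta via the delta method.

    loglik_fn : callable returning the log-likelihood at theta = (lam, beta)
    delta_fn  : callable returning delta(theta)
    theta_hat : ML estimate (lambda_hat, beta_hat)
    """
    theta_hat = np.asarray(theta_hat, dtype=float)
    p = len(theta_hat)
    # observed information H = -I_2 (negative Hessian), by central differences
    H = np.zeros((p, p))
    for i in range(p):
        for j in range(p):
            ei = np.zeros(p); ei[i] = h
            ej = np.zeros(p); ej[j] = h
            H[i, j] = -(loglik_fn(theta_hat + ei + ej) - loglik_fn(theta_hat + ei - ej)
                        - loglik_fn(theta_hat - ei + ej) + loglik_fn(theta_hat - ei - ej)) / (4 * h * h)
    # gradient Psi of delta(theta) at the MLE
    psi = np.zeros(p)
    for i in range(p):
        e = np.zeros(p); e[i] = h
        psi[i] = (delta_fn(theta_hat + e) - delta_fn(theta_hat - e)) / (2 * h)
    r_hat = psi @ np.linalg.inv(H) @ psi          # r_hat = Psi H^{-1} Psi^T
    d_hat = delta_fn(theta_hat)
    half = norm.ppf(1.0 - gamma / 2.0) * np.sqrt(r_hat)
    return d_hat - half, d_hat + half
```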

4. Bootstrap Confidence Interval

The bootstrap method is a resampling-based statistical inference approach that is widely used for constructing confidence intervals for unknown parameters. The core aim of this method is to generate some bootstrap samples through repeated sampling from the original sample and then calculate the required statistics for each bootstrap sample to obtain the empirical distribution of the statistic. Compared to the ACI, the BCI has the following advantages:
(i)
The BCI does not require assumptions such as the population following a normal distribution, and it is applicable to complex statistical models and non-normal data.
(ii)
The BCI can more accurately reflect the actual distribution characteristics of the parameters, especially when the sample size is small or the distribution is skewed.
(iii)
The computation process is relatively flexible and can be applied to various statistical inference problems, such as parameter estimation, hypothesis testing, and model evaluation.
Therefore, the bootstrap method has received widespread attention and been widely applied in contemporary statistical research, and it has become an important statistical inference tool. Under various distributional assumptions, Ouyang et al. [30] constructed three bootstrap confidence intervals for the process capability index C_pc and compared their performance. Saha et al. [31] employed five bootstrap methods to construct the BCIs of the process capability index C_pc for an exponentiated exponential distribution. Based on a progressive type-II right censored sample, Tolba et al. [32] constructed the BCI of the unknown parameter of a one-parameter Akshaya distribution. For more research about the bootstrap method, please refer to [33,34,35,36].
In this section, we use the bootstrap-t method (Hall [37]) to construct the BCI of the LPI δ . The steps are presented in the following Algorithm 2.
Algorithm 2. The calculation process of the bootstrap-t method to construct the BCI of δ
(1) According to Algorithm 1, compute the ML estimates λ ^ and β ^ under the censored sample ( x 1 , x 2 , , x m ) and censored scheme ( R 1 , R 2 , , R m ) . Furthermore, the ML estimate of δ , denoted as δ ^ , is obtained.
(2) Generate adaptive progressive type-II censored samples from G I L D ( λ ^ , β ^ ) with ( R 1 , R 2 , , R m ) , and denote as ( x 1 * , x 2 * , , x m * ) .
(3) As in step (1), calculate the ML estimates of λ and β based on ( x 1 * , x 2 * , , x m * ) , say λ ^ * and β ^ * . Then, obtain the Bootstrap sample estimates δ ^ * by putting λ ^ * and β ^ * into Equation (5).
(4) Compute the statistic \tau = \sqrt{J}\,(\hat\delta^{*} - \hat\delta)\left[\mathrm{var}(\hat\delta^{*})\right]^{-1/2}, where \mathrm{var}(\hat\delta^{*}) is estimated by Equation (22).
(5) Repeat steps (2) to (4) for J times and obtain (\tau_1, \tau_2, \ldots, \tau_J), where \tau_1 < \tau_2 < \cdots < \tau_J.
(6) Let F_{\tau}(t) = P(\tau \le t) be the CDF of \tau. For a given \gamma, define \hat\delta_{Boot\text{-}t}(\gamma) = \hat\delta + J^{-1/2} F_{\tau}^{-1}(\gamma)\sqrt{\mathrm{var}(\hat\delta)}. Thus, the BCI of δ is
\left(\hat\delta_{Boot\text{-}t}(\gamma/2),\; \hat\delta_{Boot\text{-}t}(1-\gamma/2)\right).
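The loop in Algorithm 2 can be organized as below. The routine is deliberately generic: `generate_sample` (step (2)), `fit_ml` (steps (1) and (3)), `delta_fn`, and `var_fn` (the variance estimate used in step (4)) are placeholders for the reader's own simulation, maximum likelihood, and variance code; only the bootstrap-t bookkeeping itself is shown, following the scaling written in the algorithm.

```python
import numpy as np

def bootstrap_t_ci(x, R, k, fit_ml, generate_sample, delta_fn, var_fn,
                   J=1000, gamma=0.05):
    """Bootstrap-t CI for delta following Algorithm 2 (placeholder callables).

    fit_ml(x, R, k)           -> (lambda_hat, beta_hat)
    generate_sample(theta, R) -> bootstrap adaptive progressive censored sample
    delta_fn(theta)           -> delta evaluated at theta
    var_fn(theta, x, R, k)    -> variance estimate of delta_hat
    """
    theta_hat = fit_ml(x, R, k)                              # step (1)
    d_hat = delta_fn(theta_hat)
    v_hat = var_fn(theta_hat, x, R, k)
    taus = []
    for _ in range(J):
        x_star = generate_sample(theta_hat, R)               # step (2)
        theta_star = fit_ml(x_star, R, k)                    # step (3)
        d_star = delta_fn(theta_star)
        v_star = var_fn(theta_star, x_star, R, k)
        taus.append(np.sqrt(J) * (d_star - d_hat) / np.sqrt(v_star))  # step (4)
    taus = np.sort(np.asarray(taus))                         # step (5)
    def boot_t(q):                                           # step (6)
        return d_hat + np.quantile(taus, q) / np.sqrt(J) * np.sqrt(v_hat)
    return boot_t(gamma / 2.0), boot_t(1.0 - gamma / 2.0)
```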

5. Bayesian Estimation

Bayesian statistics is a branch of statistics that applies the Bayesian theorem to update the probability of a hypothesis as more evidence or information becomes available. It provides a powerful framework for making inferences based on prior knowledge combined with new data. Bayesian statistics allows the incorporation of prior knowledge or expert opinions into the analysis, which can be particularly useful when data are scarce or expensive to obtain.
The gamma prior is a commonly used prior distribution in Bayesian statistics, often employed in establishing Bayesian inference models for parameters, and it is particularly suitable for parameters in the positive domain. The gamma distribution itself is concise in form, allowing for flexible modeling of various prior information, and does not lead to overly complex inference and computational issues. Additionally, the use of a gamma prior can yield a more explicit form of the posterior distribution. Thus, we suppose that λ and β are independent and follow the gamma prior Γ ( a , b ) and Γ ( c , d ) , respectively, that is,
\pi(\lambda) = \frac{b^{a}}{\Gamma(a)}\,\lambda^{a-1} e^{-b\lambda}, \quad \lambda > 0,\; a, b > 0,
\pi(\beta) = \frac{d^{c}}{\Gamma(c)}\,\beta^{c-1} e^{-d\beta}, \quad \beta > 0,\; c, d > 0.
The joint prior distribution of λ and β is
\pi(\lambda,\beta) = \frac{b^{a} d^{c}}{\Gamma(a)\Gamma(c)}\,\lambda^{a-1}\beta^{c-1} e^{-b\lambda - d\beta}, \quad \lambda, \beta > 0.
According to the Bayesian theorem, the joint posterior distribution of λ and β under the adaptive progressive type-II censored samples is given by
\pi(\lambda,\beta \mid x) = \frac{l(\lambda,\beta \mid x)\,\pi(\lambda,\beta)}{\int_0^{+\infty}\!\int_0^{+\infty} l(\lambda,\beta \mid x)\,\pi(\lambda,\beta)\,d\lambda\,d\beta} = \frac{\lambda^{m+a-1}\beta^{2m+c-1} e^{-b\lambda - d\beta}}{B\,(1+\beta)^{m}}\left[1 - e^{-\beta/x_m^{\lambda}} - \frac{\beta e^{-\beta/x_m^{\lambda}}}{x_m^{\lambda}(1+\beta)}\right]^{R_m}\left[\prod_{i=1}^{m}\frac{(1+x_i^{\lambda})e^{-\beta/x_i^{\lambda}}}{x_i^{2\lambda+1}}\right]\prod_{i=1}^{k}\left[1 - e^{-\beta/x_i^{\lambda}} - \frac{\beta e^{-\beta/x_i^{\lambda}}}{x_i^{\lambda}(1+\beta)}\right]^{R_i},
where B = \int_0^{+\infty}\int_0^{+\infty} l(\lambda,\beta \mid x)\,\pi(\lambda,\beta)\,d\lambda\,d\beta is a constant.
In this section, we select the symmetric entropy loss function, LINEX loss function, and general entropy loss function to derive the Bayesian estimators of the LPI δ .
Let α ^ be the estimator of a parameter α ; then, the three loss functions are defined as follows:
(i)
SELF (Xu et al. [38]):
S_{SE}(\hat\alpha, \alpha) = \frac{\hat\alpha}{\alpha} + \frac{\alpha}{\hat\alpha} - 2.
(ii)
LLF (Varian [39]):
S_{LL}(\hat\alpha, \alpha) = e^{\rho_1(\hat\alpha - \alpha)} - \rho_1(\hat\alpha - \alpha) - 1, \quad \rho_1 \neq 0.
(iii)
GELF (Calabria and Pulcini [40]):
S_{GE}(\hat\alpha, \alpha) = \left(\frac{\hat\alpha}{\alpha}\right)^{\rho_2} - \rho_2\ln\left(\frac{\hat\alpha}{\alpha}\right) - 1, \quad \rho_2 \neq 0.
Given the observation x , the Bayesian estimators of parameter α under the SELF, LLF, and GELF are expressed by Equation (35), Equation (36), and Equation (37), respectively. In the expressions below, E ( | x ) denotes the posterior expectation.
\hat\alpha_{SE} = \left[\frac{E(\alpha \mid x)}{E(\alpha^{-1} \mid x)}\right]^{1/2},
\hat\alpha_{LL} = -\frac{1}{\rho_1}\ln\left[E\!\left(e^{-\rho_1\alpha} \mid x\right)\right],
\hat\alpha_{GE} = \left[E\!\left(\alpha^{-\rho_2} \mid x\right)\right]^{-1/\rho_2}.
Thus, based on adaptive progressive type-II censored samples, the Bayesian estimator δ ^ S E of δ under the SELF is presented by
\hat\delta_{SE} = \left[\frac{E(\delta \mid x)}{E(\delta^{-1} \mid x)}\right]^{1/2} = \left[\frac{\int_0^{+\infty}\int_0^{+\infty} \delta\,\pi(\lambda,\beta \mid x)\,d\lambda\,d\beta}{\int_0^{+\infty}\int_0^{+\infty} \delta^{-1}\pi(\lambda,\beta \mid x)\,d\lambda\,d\beta}\right]^{1/2}.
The Bayesian estimator δ ^ L L of δ under the LLF is
\hat\delta_{LL} = -\frac{1}{\rho_1}\ln\left[E\!\left(e^{-\rho_1\delta} \mid x\right)\right] = -\frac{1}{\rho_1}\ln\left[\int_0^{+\infty}\int_0^{+\infty} e^{-\rho_1\delta}\,\pi(\lambda,\beta \mid x)\,d\lambda\,d\beta\right].
The Bayesian estimator δ ^ G E of δ under the GELF is
\hat\delta_{GE} = \left[E\!\left(\delta^{-\rho_2} \mid x\right)\right]^{-1/\rho_2} = \left[\int_0^{+\infty}\int_0^{+\infty} \delta^{-\rho_2}\,\pi(\lambda,\beta \mid x)\,d\lambda\,d\beta\right]^{-1/\rho_2}.
However, the inherent complexity of δ results in these expressions involving numerous complex integrals, making analytical solutions difficult to obtain. Commonly used methods for obtaining approximate solutions include the Lindley approximation and the Tierney–Kadane (TK) approximation. The Lindley approximation is relatively convenient as it does not involve complex integrals or iterative processes; it only requires the calculation of the partial derivatives of the posterior density of the parameters. However, its drawbacks include the inability to construct credible intervals for the parameters, and the final approximation depends on the maximum likelihood estimates. In contrast, the TK approximation is less complex in terms of computation but involves solving nonlinear equations. After considering various factors, this paper employs the Markov chain Monte Carlo (MCMC) method to perform iterative computations.
Gibbs sampling and the Metropolis–Hastings (MH) algorithm are two of the most commonly used methods in the MCMC method. Gibbs sampling is designed to draw samples from a multidimensional probability distribution, while the MH algorithm is used to sample from complex probability distributions. The key to Gibbs sampling is identifying the conditional distribution for each variable. Typically, this requires us to compute the conditional probability distribution of each variable given the values of the other variables. If the conditional distribution cannot be directly computed, the MH algorithm can be utilized for sampling. The MH algorithm works by setting acceptance–rejection criteria for different candidate values, ensuring that the final set of generated samples adheres to the target distribution.
According to Equation (31), the full conditional distributions of λ and β are given below.
\pi(\lambda \mid x, \beta) \propto \lambda^{m+a-1} e^{-b\lambda}\left[1 - e^{-\beta/x_m^{\lambda}} - \frac{\beta e^{-\beta/x_m^{\lambda}}}{x_m^{\lambda}(1+\beta)}\right]^{R_m}\left[\prod_{i=1}^{m}\frac{(1+x_i^{\lambda})e^{-\beta/x_i^{\lambda}}}{x_i^{2\lambda+1}}\right]\prod_{i=1}^{k}\left[1 - e^{-\beta/x_i^{\lambda}} - \frac{\beta e^{-\beta/x_i^{\lambda}}}{x_i^{\lambda}(1+\beta)}\right]^{R_i},
\pi(\beta \mid x, \lambda) \propto \frac{\beta^{2m+c-1} e^{-d\beta}}{(1+\beta)^{m}}\left(\prod_{i=1}^{m} e^{-\beta/x_i^{\lambda}}\right)\left[1 - e^{-\beta/x_m^{\lambda}} - \frac{\beta e^{-\beta/x_m^{\lambda}}}{x_m^{\lambda}(1+\beta)}\right]^{R_m}\prod_{i=1}^{k}\left[1 - e^{-\beta/x_i^{\lambda}} - \frac{\beta e^{-\beta/x_i^{\lambda}}}{x_i^{\lambda}(1+\beta)}\right]^{R_i}.
From a formal perspective, both the aforementioned full conditional distributions cannot be expressed as any known distribution. Therefore, this paper combines Gibbs sampling with the MH algorithm to draw samples from Equations (41) and (42). Assuming that the proposal distributions for λ and β are both normal distributions, the calculation steps are as follows:
Step 1. Set the initial values ( λ ( 0 ) , β ( 0 ) ) and start with j = 1 .
Step 2. Generate the candidate values λ * and β * from the proposal distributions N ( λ ( j 1 ) , var ( λ ) ) and N ( β ( j 1 ) , var ( β ) ) , respectively.
Step 3. Generate u 1 from the uniform distribution U ( 0 , 1 ) , and calculate v 1 from the formula below.
v_1 = \min\left\{1,\; \frac{\pi(\lambda^{*} \mid x, \beta^{(j-1)})}{\pi(\lambda^{(j-1)} \mid x, \beta^{(j-1)})}\right\}.
Step 4. Determine λ^{(j)} based on the following acceptance rule.
\lambda^{(j)} = \begin{cases} \lambda^{*}, & u_1 \le v_1, \\ \lambda^{(j-1)}, & u_1 > v_1. \end{cases}
Step 5. Generate u_2 from the uniform distribution U(0, 1), and calculate v_2 from the formula below.
v_2 = \min\left\{1,\; \frac{\pi(\beta^{*} \mid x, \lambda^{(j)})}{\pi(\beta^{(j-1)} \mid x, \lambda^{(j)})}\right\}.
Step 6. Determine β^{(j)} based on the following acceptance rule.
\beta^{(j)} = \begin{cases} \beta^{*}, & u_2 \le v_2, \\ \beta^{(j-1)}, & u_2 > v_2. \end{cases}
Step 7. Set j = j + 1 .
Step 8. Repeat steps 2 to 7 for J times. We can obtain a sequence of samples ( λ ( 1 ) , β ( 1 ) ) , ( λ ( 2 ) , β ( 2 ) ) , , ( λ ( J ) , β ( J ) ) . By substituting these parameter samples into Equation (5), a sequence of samples for δ can be obtained, that is, δ ( 1 ) , δ ( 2 ) , , δ ( J ) . After discarding the first J 0 samples before the Markov chain reaches a stationary state, the Bayesian estimators of δ under the SELF, LLF and GELF are given by
\hat\delta_{SE} = \left[\frac{\sum_{i=J_0+1}^{J} \delta^{(i)}}{\sum_{i=J_0+1}^{J} \left(\delta^{(i)}\right)^{-1}}\right]^{1/2},
\hat\delta_{LL} = -\frac{1}{\rho_1}\ln\left[\frac{1}{J - J_0}\sum_{i=J_0+1}^{J}\exp\left(-\rho_1\delta^{(i)}\right)\right],
and
\hat\delta_{GE} = \left[\frac{1}{J - J_0}\sum_{i=J_0+1}^{J}\left(\delta^{(i)}\right)^{-\rho_2}\right]^{-1/\rho_2}.
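A sketch of the sampler in steps 1–8 and of the estimators computed from the retained draws is given below. The log full conditionals of Equations (41) and (42) are passed in as callables (named log_cond_lambda and log_cond_beta here, our names), since only their ratios matter; the normal proposal standard deviations are tuning choices, not values taken from the paper.

```python
import numpy as np

def mh_within_gibbs(log_cond_lambda, log_cond_beta, lam0, beta0,
                    J=10000, J0=2000, sd_lam=0.1, sd_beta=0.1, seed=None):
    """MH-within-Gibbs sampler for (lambda, beta) following steps 1-8.

    log_cond_lambda(lam, beta): log of the full conditional (41), up to a constant
    log_cond_beta(beta, lam)  : log of the full conditional (42), up to a constant
    """
    rng = np.random.default_rng(seed)
    lam, beta = lam0, beta0
    draws = np.empty((J, 2))
    for j in range(J):
        # update lambda with a normal proposal (steps 2-4)
        lam_prop = rng.normal(lam, sd_lam)
        if lam_prop > 0:  # proposals outside the support are rejected outright
            log_v1 = log_cond_lambda(lam_prop, beta) - log_cond_lambda(lam, beta)
            if np.log(rng.uniform()) <= min(0.0, log_v1):
                lam = lam_prop
        # update beta with a normal proposal (steps 5-6)
        beta_prop = rng.normal(beta, sd_beta)
        if beta_prop > 0:
            log_v2 = log_cond_beta(beta_prop, lam) - log_cond_beta(beta, lam)
            if np.log(rng.uniform()) <= min(0.0, log_v2):
                beta = beta_prop
        draws[j] = (lam, beta)
    return draws[J0:]          # discard the burn-in J_0

def bayes_estimates(delta_draws, rho1=2.0, rho2=0.5):
    """Bayesian estimators of delta under the SELF, LLF and GELF from the
    retained MCMC draws of delta (assumed positive for the SELF and GELF)."""
    d = np.asarray(delta_draws, dtype=float)
    d_se = np.sqrt(d.sum() / (1.0 / d).sum())
    d_ll = -np.log(np.mean(np.exp(-rho1 * d))) / rho1
    d_ge = np.mean(d**(-rho2)) ** (-1.0 / rho2)
    return d_se, d_ll, d_ge
```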
Chen and Shao [41] mentioned that, after sorting the sample sequence obtained by the MCMC method in ascending order, the corresponding HPD credible interval can be constructed. Let the sorted sequence be denoted as \delta_1 \le \delta_2 \le \cdots \le \delta_J. The 100(1 - \gamma)\% HPD credible interval for the lifetime performance index δ is
\left(\delta_{i^{*}},\; \delta_{i^{*} + [J(1-\gamma)]}\right),
where [s] denotes the largest integer not exceeding s and i^{*} satisfies
\delta_{i^{*} + [J(1-\gamma)]} - \delta_{i^{*}} = \min_{1 \le i \le J - [J(1-\gamma)]}\left(\delta_{i + [J(1-\gamma)]} - \delta_{i}\right).
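The Chen–Shao construction can be coded in a few lines; `draws` denotes the retained MCMC samples of δ (our variable name).

```python
import numpy as np

def hpd_interval(draws, gamma=0.05):
    """100(1-gamma)% HPD credible interval from MCMC draws (Chen and Shao [41]):
    among all intervals spanning [J(1-gamma)] ordered draws, pick the shortest."""
    d = np.sort(np.asarray(draws, dtype=float))
    J = len(d)
    width = int(np.floor(J * (1.0 - gamma)))
    lows = d[: J - width]      # candidate lower endpoints delta_i
    highs = d[width:]          # matching upper endpoints delta_{i + [J(1-gamma)]}
    i_star = np.argmin(highs - lows)
    return lows[i_star], highs[i_star]
```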

6. Monte Carlo Simulation

The purpose of this section is to compare the performance of the ML estimator and Bayesian estimators under different censoring schemes (as shown in Table 3) using the mean squared error (MSE) and average bias (AB) for evaluation, which are defined as
MSE = \frac{1}{N}\sum_{i=1}^{N}\left(\hat\delta_i - \delta_{real}\right)^{2} \quad \text{and} \quad AB = \frac{1}{N}\sum_{i=1}^{N}\left(\hat\delta_i - \delta_{real}\right).
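In code, these two criteria, together with the average width and coverage probability used in the next paragraph, reduce to a few array operations; `estimates`, `lowers`, and `uppers` are arrays over the N Monte Carlo repetitions (our names).

```python
import numpy as np

def mse(estimates, delta_real):
    return np.mean((np.asarray(estimates) - delta_real) ** 2)

def average_bias(estimates, delta_real):
    return np.mean(np.asarray(estimates) - delta_real)

def average_width(lowers, uppers):
    return np.mean(np.asarray(uppers) - np.asarray(lowers))

def coverage_probability(lowers, uppers, delta_real):
    l, u = np.asarray(lowers), np.asarray(uppers)
    return np.mean((l <= delta_real) & (delta_real <= u))
```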
Additionally, this section compares the estimation effectiveness of the ACI, BCI, and HPD credible interval in terms of the coverage probability (CP) and average width (AW), defined by AW = \frac{1}{N}\sum_{i=1}^{N}(\hat\delta_{i,up} - \hat\delta_{i,low}). In the Monte Carlo simulation, the number of repetitions is set to N = 1000, with the true parameter values (\lambda_{real}, \beta_{real}) = (2.5, 1.0). The lower specification limit is set to L = 0.5, and the true LPI value is then \delta_{real} = 0.5983. The loss-function parameters are set to \rho_1 = 2.0 and \rho_2 = 0.5. Without loss of generality, T = x_m/2 is set. Next, MATLAB R2021a is used to perform the adaptive progressive type-II censored test and obtain adaptive progressive type-II censored samples. Three priors are considered, namely, Prior 1, Prior 2, and Prior 3. Prior 1 is a non-informative prior, that is, a = b = c = d = 0. The hyper-parameters of Prior 2 are (a, b) = (2.0, 3.0) and (c, d) = (4.0, 1.0). The hyper-parameters of Prior 3 are (a, b) = (0.5, 0.25) and (c, d) = (0.5, 0.5). The algorithm mentioned in reference [20] is used to generate complete samples from GILD(\lambda_{real}, \beta_{real}), with the specific steps detailed as follows:
Step 1. Generate p_1 and p_2 from the standard uniform distribution U(0, 1).
Step 2. Generate w from the gamma distribution \Gamma(2, \beta_{real}).
Step 3. Transform w into y = w^{-1/\lambda_{real}}, and calculate z = (-\beta_{real}/\ln p_1)^{1/\lambda_{real}}.
Step 4. If p_2 \le \beta_{real}(1 + \beta_{real})^{-1}, accept z as a sample from GILD(\lambda_{real}, \beta_{real}), that is, x = z; otherwise, x = y.
Step 5. Repeat steps 1 to 4 n times to obtain a sample of size n from GILD(\lambda_{real}, \beta_{real}).
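The mixture steps above translate directly into code. The sketch below generates a complete GILD sample (the adaptive progressive censoring layer is applied afterwards and is not shown); the function name is ours, and the Γ(2, β_real) draw is taken with rate parameter β_real, which is what the underlying Lindley mixture requires.

```python
import numpy as np

def gild_sample(n, lam_real, beta_real, seed=None):
    """Generate n observations from GILD(lam_real, beta_real) using the
    mixture representation in steps 1-5 above."""
    rng = np.random.default_rng(seed)
    out = np.empty(n)
    for i in range(n):
        p1, p2 = rng.uniform(), rng.uniform()                 # step 1
        w = rng.gamma(shape=2.0, scale=1.0 / beta_real)       # step 2: Gamma(2, rate = beta)
        y = w ** (-1.0 / lam_real)                            # step 3
        z = (-beta_real / np.log(p1)) ** (1.0 / lam_real)
        # step 4: exponential branch chosen with probability beta/(1+beta)
        out[i] = z if p2 <= beta_real / (1.0 + beta_real) else y
    return out
```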
All calculation results are listed in Table 4, Table 5 and Table 6.
Comparing the data in the above tables, the following conclusions can be drawn.
(1)
As can be seen from Table 4, when the effective sample proportion m/n increases, the MSE of each estimator decreases. Comparing the ML estimation with the Bayesian estimation under the non-informative prior, the results indicate that the MSEs under the CS-I and CS-II are significantly higher than those under the CS-III. Furthermore, when comparing the Bayesian estimations under Prior 2 and Prior 3, it is found that, for Prior 2, the Bayesian estimate under the CS-II exhibits the lowest MSE; conversely, under Prior 3, the Bayesian estimate under the CS-III demonstrates the lowest MSE. These findings highlight the critical role that the choice of censored scheme and prior distribution plays in the accuracy of the estimations.
(2)
When using Prior 1 and Prior 2, the ML estimation demonstrates superior performance compared to the Bayesian estimation under the CS-III. In contrast, when using Prior 3, the Bayesian estimation outperforms the ML estimation under the CS-I and CS-II. Additionally, under different loss functions, there is no significant difference in the performance of the Bayesian estimators under Prior 1 and Prior 2; however, the Bayesian estimators under Prior 3 exhibit smaller MSEs compared to those under Prior 1 and Prior 2.
(3)
When the effective samples proportion m / n is large, the MSEs of the different Bayesian estimates are relatively close under the same censored scheme. If we fix the values of n and m then, for a given censored scheme, the Bayesian estimates under the SELF and the GELF have larger MSEs compared to those under the LLF.
(4)
Based on Table 5, we observe that the ABs of all the Bayesian estimators are negative. In contrast, the ABs of the ML estimator exhibit both positive and negative signs. This result indicates that, under the various conditions simulated in this study, the Bayesian estimation tends to underestimate the true LPI values, while the ML estimation exhibits a mixture of overestimation and underestimation across different scenarios.
(5)
From Table 6, as the proportion m/n increases, the CPs of the two confidence intervals and the credible interval increase. Regardless of whether the non-informative prior or the gamma priors are used, the coverage performance of the HPD credible interval is better than that of the ACI and BCI under the CS-II. For a small proportion m/n, the BCI outperforms the other two interval types under the CS-I and CS-III.
(6)
From Table 6, when m / n is small, the AW of the ACI is relatively large. For example, when n = 100 and m = 30 , the AW of the ACI under the CS-II is 1.2137. Under the CS-II, the CPs of the HPD credible intervals consistently exceed 95%. However, under the CS-I, the CPs of the HPD credible intervals are smaller than 95%.

7. Real Data Analysis

In this section, we provide two sets of real data in Table 7 to illustrate the methods proposed in this paper. Wu and Wu [42] provided data on the duration of the remission achieved by four drugs for the treatment of leukemia in a clinical trial. This study selects the duration of the remission for drug 1 as Data 1. Data 2, provided by Blischke and Murthy [43], represents the failure times of aircraft windshields, with the measurement unit expressed in 1000 h. Based on the ML estimates, we employ the Kolmogorov–Smirnov (K–S) test to evaluate the suitability of the GILD for fitting the two datasets. The specific values are also presented in Table 7. According to the K–S statistics and the corresponding p-values, we conclude that the GILD is an appropriate model for both datasets. To visually represent the fit of the GILD to these two datasets, we choose the Bayesian estimators of the parameters under the complete sample and plot the corresponding CDFs. The graphs, along with the empirical distribution functions of the two datasets, are presented in Figure 3. The red lines represent the CDF of the GILD, and the blue lines represent the empirical CDF.
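A K–S check of the kind reported in Table 7 can be reproduced with SciPy once the GILD CDF is available. The helper below is a sketch only: the data array and the parameter values at which the fit is evaluated must be supplied by the reader.

```python
import numpy as np
from scipy.stats import kstest

def gild_cdf(x, lam, beta):
    x = np.asarray(x, dtype=float)
    return (1.0 + beta / (x**lam * (1.0 + beta))) * np.exp(-beta / x**lam)

def ks_check(data, lam_hat, beta_hat):
    """Kolmogorov-Smirnov statistic and p-value for the GILD fit
    evaluated at the ML estimates (lam_hat, beta_hat)."""
    result = kstest(data, lambda t: gild_cdf(t, lam_hat, beta_hat))
    return result.statistic, result.pvalue
```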
Before calculating the estimates and intervals, we need to address the existence and uniqueness of the ML estimates. Due to the nonlinear nature of the problem, proving this directly through Equations (11) and (12) would be quite difficult. We therefore use visual representations to illustrate the existence and uniqueness of the ML estimates. Without loss of generality, we choose Data 1 with the censored scheme (8 × 1, 0 × 10) from Table 8 to plot, as shown in Figure 4.
Let the ideal duration of the test with Data 1 be T 1 = 3 and the ideal duration of the test with Data 2 be T 2 = 4 . Since there is no prior information, we take the hyper-parameters as a = b = c = d = 0 . Under the censored schemes listed in Table 8, the point estimates and confidence intervals for the LPI are presented in Table 9. The censored scheme ( 8 × 1 , 0 × 10 ) denotes R 1 = 8 and R 2 = R 3 = = R 11 = 0 . Other symbols have similar meanings to the aforementioned symbols and will not be elaborated on here.

8. Conclusions

This paper assumed that the product's lifetime follows a generalized inverse Lindley distribution and discussed the maximum likelihood estimation and Bayesian estimation of the LPI δ under an adaptive progressive type-II censored sample. By applying the Newton–Raphson iteration algorithm, we obtained the ML estimator of δ. Based on this, we constructed the ACI using the delta method, as well as the BCI using the bootstrap-t method. In the Bayesian estimation part, the MCMC method was used to draw samples and further obtain the Bayesian estimators and the HPD credible interval of δ based on the SELF, LLF and GELF. Finally, we conducted Monte Carlo simulations under the three censored schemes. The simulation results show that, in terms of the MSE, under a fixed censored scheme, the ML estimation outperforms the Bayesian estimation under Prior 1 and Prior 2; conversely, the Bayesian estimation under Prior 3 outperforms the ML estimation. There is no significant difference in the performance of the Bayesian estimators under the three loss functions. Additionally, under the CS-II, the CP of the HPD credible interval is generally higher. Meanwhile, under the CS-I and CS-III, the coverage performance of the BCI outperforms that of the ACI and HPD credible interval, and it exhibits the best performance when the sample size is relatively small. However, the ABs of all the Bayesian estimators are negative, whereas the ABs of the ML estimator exhibit both positive and negative signs.
In future research, we will study the estimations of the entropy, failure rate, and stress–strength reliability of the GILD. Similar estimation methods can also be used to study other lifetime distributions, such as the generalized exponential distribution and compound Rayleigh distribution.

Author Contributions

Conceptualization, S.X. and X.H.; methodology, X.H. and H.R.; software, X.H.; validation, S.X. and X.H.; writing—original draft preparation, X.H., S.X. and H.R.; writing—review and editing, X.H., S.X. and H.R.; funding acquisition, S.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Science and Technology Research Project of the Jiangxi Provincial Department of Education, grant number GJJ2200814.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Yen, C.H.; Chang, C.H.; Lee, C.C. A new multiple dependent state sampling plan based on one-sided process capability indices. Int. J. Adv. Manuf. Technol. 2023, 126, 3297–3309. [Google Scholar] [CrossRef]
  2. Wang, S.; Chiang, J.Y.; Tsai, T.R.; Qin, Y. Robust process capability indices and statistical inference based on model selection. Comput. Ind. Eng. 2021, 156, 107265. [Google Scholar] [CrossRef]
  3. Yalçın, S.; Kaya, İ. Analyzing of process capability indices based on neutrosophic sets. Comp. Appl. Math. 2022, 41, 287. [Google Scholar] [CrossRef]
  4. Wang, T.C. Developing an adaptive sampling system indexed by Taguchi capability with acceptance-criterion-switching mechanism. Int. J. Adv. Manuf. Technol. 2022, 122, 2329–2342. [Google Scholar] [CrossRef]
  5. Montgomery, D.C. Introduction to Statistical Quality Control; John Wiley & Sons: Hoboken, NJ, USA, 2007. [Google Scholar]
  6. Shaabani, J.; Jafari, A.A. Inference on the lifetime performance index of gamma distribution: Point and interval estimation. Commun. Stat. Simulat. Comput. 2022, 53, 1368–1386. [Google Scholar] [CrossRef]
  7. İklim, G.B. Estimation of the generalized process capability index Cpyk based on bias-corrected maximum-likelihood estimators for the generalized inverse Lindley distribution and bootstrap confidence intervals. J. Stat. Comput. Simulat. 2021, 91, 1960–1979. [Google Scholar] [CrossRef]
  8. Cohen, A.; Clifford, J. Progressively censored samples in life testing. Technometrics 1963, 5, 327–339. [Google Scholar] [CrossRef]
  9. Kilany, N.M.; Lobna, H.E.R. Evaluating the lifetime performance index of omega distribution based on progressive type-II censored samples. Sci. Rep. 2024, 14, 5694–5708. [Google Scholar] [CrossRef]
  10. Mohammad, V.A.; Mahdi, D. Bayesian analysis of the lifetime performance index on the basis of progressively censored Weibull observations. Qual. Technol. Quant. Manag. 2022, 19, 187–214. [Google Scholar] [CrossRef]
  11. Hanan, H.A.; Kariema, E.; Dina, R. Investigating the lifetime performance index under Ishita distribution based on progressive type II censored data with applications. Symmetry 2023, 15, 1779. [Google Scholar] [CrossRef]
  12. Kundu, D.; Joarder, A. Analysis of Type-II progressively hybrid censored data. Comput. Stat. Data Anal. 2006, 50, 2509–2528. [Google Scholar] [CrossRef]
  13. Ng, H.K.T.; Kundu, D.; Chan, P.S. Statistical analysis of exponential lifetimes under an adaptive type II progressively censoring scheme. Nav. Res. Logist. 2009, 56, 687–698. [Google Scholar] [CrossRef]
  14. Hassan, A.S.; Elsherpieny, E.A.; Felifel, A.M. Bayesian and non-Bayesian analysis for the lifetime performance index based on generalized order statistics from Pareto distribution. J. Auton. Intell. 2024, 7. [Google Scholar] [CrossRef]
  15. Wu, S.F.; Cheng, Y.W.; Kang, C.W.; Chang, W.T. Reliability sampling design for the lifetime performance index of Burr XII lifetime distribution under progressive type I interval censoring. Commun. Stat. Simul. Comput. 2023, 52, 5483–5497. [Google Scholar] [CrossRef]
  16. Wu, S.F.; Song, M.Z. Experimental Design for Progressive Type I Interval Censoring on the Lifetime Performance Index of Chen Lifetime Distribution. Mathematics 2023, 11, 1554. [Google Scholar] [CrossRef]
  17. Zhang, Y.; Gui, W.H. Statistical inference for the lifetime performance index of products with Pareto distribution on basis of general progressive type II censored sample. Commun. Stat. Theory Methods 2021, 50, 3790–3808. [Google Scholar] [CrossRef]
  18. Rady, E.A.; Hassanein, W.; Yehia, S. Evaluation of the lifetime performance index on first failure progressive censored data based on Topp Leone Alpha power exponential model applied on HPLC data. J. Biopharm. Stat. 2021, 31, 565–582. [Google Scholar] [CrossRef]
  19. Alharthi, A.S.; Fatimah, A.A. Evaluating the lifetime performance index of the generalized half-logistic population in the generalized Type I hybrid censoring scheme. Alex. Eng. J. 2024, 105, 237–244. [Google Scholar] [CrossRef]
  20. Sharma, V.K.; Singh, S.K.; Singh, U.; Merovci, F. The generalized inverse Lindley distribution: A new inverse statistical model for the study of upside-down bathtub data. Commun. Stat. Theory Methods 2016, 45, 5709–5729. [Google Scholar] [CrossRef]
  21. Basu, S.; Singh, S.K.; Singh, U. Inference on generalized inverse Lindley distribution under progressive hybrid censoring scheme. J. Iran. Stat. Soc. 2022, 21, 21–50. [Google Scholar] [CrossRef]
  22. Vikas, K.S. Bayesian analysis of head and neck cancer data using generalized inverse Lindley stress–strength reliability model. Commun. Stat. Theory Methods 2018, 47, 1155–1180. [Google Scholar] [CrossRef]
  23. Devendra, K.; Mazen, N.; Sanku, D. Inference for generalized inverse Lindley distribution based on generalized order statistics. Afr. Mat. 2020, 31, 1207–1235. [Google Scholar] [CrossRef]
  24. Fatma, G.A.; Keming, Y.; Birdal, S. Estimation of the system reliability for generalized inverse Lindley distribution based on different sampling designs. Commun. Stat. Theory Methods 2021, 50, 1532–1546. [Google Scholar] [CrossRef]
  25. Asgharzadeh, A.; Nadarajah, S.; Sharafi, F. Generalized inverse Lindley distribution with application to Danish fire insurance data. Commun. Stat. Theory Methods 2017, 46, 5001–5021. [Google Scholar] [CrossRef]
  26. Intekhab, A.; Murshid, K.; Mohammad, T.I.; Saqib, S.W.; Imran, A. Statistical analysis from the generalized inverse Lindley distribution with adaptive type-II progressively hybrid censoring scheme. Ann. Data Sci. 2024, 11, 479–506. [Google Scholar] [CrossRef]
  27. Devendra, K.; Mazen, N.; Sanku, D.; Farouq, M.A.A. On estimation procedures of constant stress accelerated life test for generalized inverse Lindley distribution. Qual. Reliab. Eng. Int. 2022, 38, 211–228. [Google Scholar] [CrossRef]
  28. Mojammel, H.S.; Manas, R.T.; Debasis, K. Estimating parameters from the generalized inverse Lindley distribution under hybrid censoring scheme. Commun. Stat. Theory Methods 2019, 48, 5839–5862. [Google Scholar] [CrossRef]
  29. Muhammad, S.; Irum, S.D.; Muhammad, A.H.; Rana, M.U. A sustainable generalization of inverse Lindley distribution for wind speed analysis in certain regions of Pakistan. Model. Earth Syst. Environ. 2022, 8, 625–637. [Google Scholar] [CrossRef]
  30. Ouyang, L.; Dey, S.; Byun, J.H.; Park, C. Confidence intervals of the process capability index Cpc revisited via modified bootstrap technique and ROC curves. Qual. Reliab. Eng. Int. 2023, 39, 2162–2184. [Google Scholar] [CrossRef]
  31. Saha, M.; Sanku, D.; Saralees, N. Parametric inference of the process capability index Cpc for exponentiated exponential distribution. J. Appl. Stat. 2022, 49, 4097–4121. [Google Scholar] [CrossRef]
  32. Tolba, A.H.; Ehab, M.A.; Dina, A.R. Bayesian estimation of a one parameter Akshaya distribution with progressively type ii censord data. J. Stat. Appl. Probab. 2022, 11, 565–579. [Google Scholar] [CrossRef]
  33. Maiti, K.; Suchandan, K. Estimation of stress-strength reliability following extended Chen distribution. Int. J. Reliab. Qual. Saf. Eng. 2022, 29, 2150048–2150075. [Google Scholar] [CrossRef]
  34. Kumar, S.; Yadav, A.S.; Dey, S.; Saha, M. Parametric inference of generalized process capability index Cpyk for the power Lindley distribution. Qual. Technol. Quant. Manag. 2022, 19, 153–186. [Google Scholar] [CrossRef]
  35. Migdadi, H.S.; Al-Olaimat, N.M.; Maryam, M.; Omar, M. Statistical inference for the Power Rayleigh distribution based on adaptive progressive Type-II censored data. AIMS Math. 2023, 8, 22553–22576. [Google Scholar] [CrossRef]
  36. Jana, N.; Samadrita, B. Interval estimation of multicomponent stress–strength reliability based on inverse Weibull distribution. Math. Comput. Simulat. 2022, 191, 95–119. [Google Scholar] [CrossRef]
  37. Hall, P. Theoretical comparison of bootstrap confidence intervals. Ann. Stat. 1988, 16, 927–953. [Google Scholar] [CrossRef]
  38. Xu, B.; Wang, D.; Wang, R. Estimator of scale parameter in a subclass of the exponential family under symmetric entropy loss. Northeast. Math. J. 2008, 24, 447–457. [Google Scholar] [CrossRef]
  39. Varian, H.R. A Bayesian Approach to Real Estate Assessment; Studies in Bayesian Econometrics and Statistics in Honor of Leonard J. Savage; American Elsevier: Amsterdam, NY, USA, 1975. [Google Scholar]
  40. Calabria, R.; Pulcini, G. An engineering approach to bayes estimation for the Weibull distribution. Microelectron. Reliab. 1994, 34, 789–802. [Google Scholar] [CrossRef]
  41. Chen, M.; Shao, Q. Monte Carlo estimation of Bayesian credible and HPD intervals. J. Comput. Graph. Stat. 1999, 8, 69–92. [Google Scholar] [CrossRef]
  42. Wu, S.F.; Wu, C.C. Two stage multiple comparisons with the average for exponential location parameters under heteroscedasticity. J. Stat. Plan. Inference 2005, 134, 392–408. [Google Scholar] [CrossRef]
  43. Blischke, W.R.; Murthy, D.N.P. Reliability: Modeling, Prediction, and Optimization; John Wiley & Sons: Hoboken, NJ, USA, 2011. [Google Scholar]
Figure 1. Schematic representation of the adaptive progressive type-II censored test.
Figure 2. (a) The diagram of the PDF. (b) The diagram of the HF.
Figure 3. (a) Fitting of GILD on duration of remission. (b) Fitting of GILD on failure time.
Figure 4. The partial derivatives of the log-likelihood function.
Table 1. The review of references on the LPI.

References | Sample Type | Lifetime Model | Focus
Kilany and Lobna [9] | Progressive type-II censored sample | Omega distribution | ML and Bayesian estimation
Mohammad and Mahdi [10] | Progressive type-II censored sample | Weibull distribution | Bayesian estimation
Hanan et al. [11] | Progressive type-II censored sample | Ishita distribution | ML and Bayesian estimation
Hassan et al. [14] | Generalized order statistics | Pareto distribution | ML and Bayesian estimation
Wu et al. [15] | Progressive type-I interval censored sample | Burr XII distribution | Sampling design
Wu and Song [16] | Progressive type-I interval censored sample | Chen distribution | Sampling design
Zhang and Gui [17] | General progressive type-II censored sample | Pareto distribution | ML and Bayesian estimation
Rady et al. [18] | First failure progressive censored sample | Topp Leone Alpha power exponential distribution | ML estimation
Alharthi and Fatimah [19] | Generalized type-I hybrid censored sample | Generalized half-logistic distribution | ML and Bayesian estimation
Table 2. The values of δ and P_cr with (λ, β) = (3.1956, 7.3937).

δ | P_cr | δ | P_cr | δ | P_cr
— | 0.0000 | 0.20 | 0.4633 | 0.60 | 0.8259
−3.00 | 0.0150 | 0.25 | 0.5015 | 0.65 | 0.8706
−2.50 | 0.0215 | 0.30 | 0.5425 | 0.70 | 0.9103
−2.00 | 0.0322 | 0.35 | 0.5862 | 0.75 | 0.9432
−1.50 | 0.0511 | 0.40 | 0.6322 | 0.80 | 0.9682
−1.00 | 0.0869 | 0.45 | 0.6801 | 0.85 | 0.9849
0.00 | 0.3377 | 0.50 | 0.7291 | 0.90 | 0.9943
0.10 | 0.3953 | 0.55 | 0.7782 | 0.95 | 0.9984
Table 3. The predetermined censored schemes.
Serial Number | Censored Scheme (CS)
I | R_1 = R_2 = ⋯ = R_{m−1} = 0, R_m = n − m
II | R_1 = n − m/2, R_2 = R_3 = ⋯ = R_{m−1} = 0, R_m = 3m/2
III | R_1 = R_m = (n − m)/2, R_2 = R_3 = ⋯ = R_{m−1} = 0
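As a companion to the censoring schemes above and the simulations summarized in Tables 4, 5 and 6, the following minimal sketch (Python; not the authors' code) generates a single adaptive progressive type-II censored sample from the GILD. It assumes the usual adaptive rule (planned withdrawals R_i are applied only while the observed failure times do not exceed the threshold T; afterwards no withdrawals occur until the experiment stops at the m-th failure) and a commonly used GILD CDF parameterization F(x) = (1 + θ/((1 + θ)x^α)) exp(−θ/x^α). The pairing of (α, θ) with the paper's (λ, β), the threshold T = 2, and the reuse of Table 2's parameter values are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import brentq


def gild_cdf(x, alpha, theta):
    # GILD CDF in a common (shape alpha, scale-type theta) parameterization
    return (1.0 + theta / ((1.0 + theta) * x**alpha)) * np.exp(-theta / x**alpha)


def gild_rvs(size, alpha, theta, rng):
    # draw GILD variates by numerically inverting the CDF
    u = rng.uniform(size=size)
    return np.array([brentq(lambda t: gild_cdf(t, alpha, theta) - ui, 1e-6, 1e6)
                     for ui in u])


def adaptive_progressive_sample(n, m, R, T, alpha, theta, seed=0):
    # one adaptive progressive type-II censored sample: at the i-th failure,
    # withdraw R[i] surviving units at random while the failure time is still
    # below T; once T is exceeded, no further planned withdrawals occur and
    # all survivors are removed at the m-th failure
    rng = np.random.default_rng(seed)
    pool = gild_rvs(n, alpha, theta, rng)          # latent lifetimes of all n units
    failures = []
    for i in range(m):
        j = int(np.argmin(pool))                   # next observed failure
        x = float(pool[j])
        failures.append(x)
        pool = np.delete(pool, j)
        if i == m - 1:
            break                                   # survivors removed at the m-th failure
        r_i = R[i] if x <= T else 0                 # adaptive rule
        r_i = min(r_i, len(pool) - (m - 1 - i))     # always keep enough units to reach m failures
        if r_i > 0:
            drop = rng.choice(len(pool), size=r_i, replace=False)
            pool = np.delete(pool, drop)
    return np.array(failures)


# Example: scheme III of Table 3 with (n, m) = (30, 20); T = 2 is arbitrary, and
# mapping Table 2's (3.1956, 7.3937) onto (alpha, theta) is only for illustration.
n, m = 30, 20
R = [(n - m) // 2] + [0] * (m - 2) + [(n - m) // 2]
sample = adaptive_progressive_sample(n, m, R, T=2.0, alpha=3.1956, theta=7.3937)
```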
Table 4. The estimates (in brackets) and MSEs of δ.
n | m | CS | δ̂_ML | δ̂_SE (Prior 1) | δ̂_LL (Prior 1) | δ̂_GE (Prior 1) | δ̂_SE (Prior 2) | δ̂_LL (Prior 2) | δ̂_GE (Prior 2) | δ̂_SE (Prior 3) | δ̂_LL (Prior 3) | δ̂_GE (Prior 3)
30 | 20 | I | 0.0737 (0.4210) | 0.0718 (0.3504) | 0.0556 (0.3831) | 0.0797 (0.3347) | 0.0916 (0.3021) | 0.0717 (0.3372) | 0.1008 (0.2871) | 0.0654 (0.3394) | 0.0577 (0.3745) | 0.0732 (0.3246)
30 | 20 | II | 0.0799 (0.3228) | 0.0720 (0.3710) | 0.0555 (0.4017) | 0.0797 (0.3560) | 0.1052 (0.3878) | 0.0631 (0.3797) | 0.0628 (0.3674) | 0.0591 (0.3589) | 0.0452 (0.4375) | 0.0603 (0.3277)
30 | 20 | III | 0.0620 (0.6865) | 0.0463 (0.4616) | 0.0348 (0.4898) | 0.0495 (0.4485) | 0.0816 (0.3216) | 0.0634 (0.3564) | 0.0898 (0.3069) | 0.0402 (0.4496) | 0.0300 (0.4778) | 0.0447 (0.4351)
70 | 30 | I | 0.0813 (0.3541) | 0.1181 (0.2603) | 0.0959 (0.2945) | 0.1278 (0.2464) | 0.1216 (0.2542) | 0.0982 (0.2894) | 0.1323 (0.2391) | 0.0767 (0.2928) | 0.0658 (0.3287) | 0.0804 (0.2773)
70 | 30 | II | 0.0998 (0.2485) | 0.0961 (0.2962) | 0.0755 (0.3311) | 0.1053 (0.2815) | 0.0876 (0.3123) | 0.0667 (0.3510) | 0.0978 (0.2955) | 0.0679 (0.3919) | 0.0566 (0.3281) | 0.0782 (0.2770)
70 | 30 | III | 0.0380 (0.6546) | 0.0645 (0.3725) | 0.0508 (0.4003) | 0.0698 (0.3615) | 0.0975 (0.2952) | 0.0788 (0.3270) | 0.1050 (0.2831) | 0.0555 (0.3894) | 0.0425 (0.4192) | 0.0610 (0.3767)
70 | 50 | I | 0.0639 (0.3987) | 0.0966 (0.2939) | 0.0753 (0.3304) | 0.1059 (0.2790) | 0.0957 (0.2985) | 0.0751 (0.3334) | 0.1057 (0.2826) | 0.0501 (0.3572) | 0.0475 (0.3915) | 0.0596 (0.3426)
70 | 50 | II | 0.0677 (0.3865) | 0.0617 (0.4249) | 0.0511 (0.4420) | 0.0665 (0.4138) | 0.0524 (0.4082) | 0.1029 (0.2087) | 0.0710 (0.4047) | 0.0235 (0.4042) | 0.0359 (0.4203) | 0.0355 (0.3906)
70 | 50 | III | 0.0237 (0.6261) | 0.0588 (0.3814) | 0.0451 (0.4121) | 0.0645 (0.3687) | 0.1005 (0.2864) | 0.0802 (0.3209) | 0.1090 (0.2731) | 0.0318 (0.3746) | 0.0286 (0.4023) | 0.0370 (0.3635)
100 | 30 | I | 0.0782 (0.3328) | 0.1391 (0.2283) | 0.1143 (0.2673) | 0.1492 (0.2150) | 0.1332 (0.2388) | 0.1098 (0.2719) | 0.1443 (0.2238) | 0.0617 (0.3021) | 0.0506 (0.3393) | 0.0651 (0.2867)
100 | 30 | II | 0.0798 (0.3159) | 0.0802 (0.3293) | 0.0618 (0.3643) | 0.0874 (0.3163) | 0.1050 (0.2785) | 0.0831 (0.3143) | 0.1140 (0.2646) | 0.0619 (0.3249) | 0.0464 (0.3576) | 0.0793 (0.3117)
100 | 30 | III | 0.0426 (0.6654) | 0.0766 (0.3373) | 0.0616 (0.3656) | 0.0819 (0.3273) | 0.1078 (0.2765) | 0.0880 (0.3082) | 0.1149 (0.2656) | 0.0478 (0.3570) | 0.0327 (0.3882) | 0.0540 (0.3443)
100 | 50 | I | 0.0708 (0.3587) | 0.1145 (0.2653) | 0.0920 (0.3009) | 0.1250 (0.2500) | 0.1033 (0.2892) | 0.0829 (0.3218) | 0.1140 (0.2732) | 0.0616 (0.2631) | 0.0535 (0.2980) | 0.0663 (0.2482)
100 | 50 | II | 0.0846 (0.3100) | 0.0819 (0.3311) | 0.0629 (0.3649) | 0.0919 (0.3142) | 0.0639 (0.3658) | 0.0486 (0.3963) | 0.0726 (0.3496) | 0.0623 (0.3287) | 0.0431 (0.3651) | 0.0726 (0.3121)
100 | 50 | III | 0.0238 (0.6186) | 0.0734 (0.3467) | 0.0588 (0.3748) | 0.0795 (0.3349) | 0.1074 (0.2765) | 0.0871 (0.3095) | 0.1153 (0.2644) | 0.0222 (0.4469) | 0.0172 (0.4765) | 0.0280 (0.4359)
100 | 70 | I | 0.0601 (0.4114) | 0.0902 (0.3052) | 0.0706 (0.3401) | 0.1004 (0.2886) | 0.0787 (0.3309) | 0.0599 (0.3660) | 0.0873 (0.3156) | 0.0601 (0.3420) | 0.0570 (0.3757) | 0.0644 (0.3278)
100 | 70 | II | 0.0381 (0.6054) | 0.0459 (0.3981) | 0.0382 (0.4145) | 0.0513 (0.3866) | 0.0432 (0.4043) | 0.0359 (0.4207) | 0.0486 (0.3926) | 0.0266 (0.4766) | 0.0194 (0.5061) | 0.0311 (0.4769)
100 | 70 | III | 0.0187 (0.6110) | 0.0643 (0.3598) | 0.0492 (0.3919) | 0.0705 (0.3469) | 0.1076 (0.2739) | 0.0861 (0.3087) | 0.1161 (0.2609) | 0.0171 (0.4374) | 0.0113 (0.5062) | 0.0126 (0.4723)
150 | 40 | I | 0.0791 (0.3281) | 0.1237 (0.2494) | 0.1034 (0.2847) | 0.1393 (0.2328) | 0.1033 (0.2896) | 0.0829 (0.3224) | 0.1150 (0.2720) | 0.0691 (0.2905) | 0.0577 (0.3263) | 0.0614 (0.2727)
150 | 40 | II | 0.0867 (0.3040) | 0.0861 (0.3178) | 0.0687 (0.3495) | 0.0931 (0.3056) | 0.1132 (0.2685) | 0.0919 (0.2992) | 0.1223 (0.2523) | 0.0624 (0.3259) | 0.0457 (0.3567) | 0.0691 (0.3139)
150 | 40 | III | 0.0269 (0.6490) | 0.0890 (0.3104) | 0.0730 (0.3385) | 0.0951 (0.3001) | 0.1176 (0.2594) | 0.0973 (0.2909) | 0.1252 (0.2486) | 0.0338 (0.5191) | 0.0260 (0.5519) | 0.0303 (0.5076)
150 | 60 | I | 0.0716 (0.3486) | 0.1108 (0.2725) | 0.0884 (0.3083) | 0.1224 (0.2553) | 0.0932 (0.3072) | 0.0724 (0.3428) | 0.1044 (0.2895) | 0.0613 (0.3479) | 0.0508 (0.3106) | 0.0619 (0.3577)
150 | 60 | II | 0.0639 (0.3736) | 0.1184 (0.2621) | 0.0959 (0.2968) | 0.1298 (0.2458) | 0.1003 (0.2949) | 0.0792 (0.3302) | 0.1117 (0.2774) | 0.0482 (0.4600) | 0.0392 (0.4955) | 0.0437 (0.4439)
150 | 60 | III | 0.0204 (0.6388) | 0.0890 (0.3084) | 0.0713 (0.3397) | 0.0955 (0.2974) | 0.1199 (0.2549) | 0.0993 (0.2865) | 0.1283 (0.2430) | 0.0283 (0.4097) | 0.0120 (0.4401) | 0.0355 (0.3989)
150 | 80 | I | 0.0616 (0.3925) | 0.0939 (0.3013) | 0.0728 (0.3377) | 0.1040 (0.2848) | 0.0772 (0.3355) | 0.0584 (0.3704) | 0.0867 (0.3186) | 0.0526 (0.3565) | 0.0493 (0.3912) | 0.0541 (0.3403)
150 | 80 | II | 0.0306 (0.6394) | 0.0508 (0.3966) | 0.0372 (0.4261) | 0.0579 (0.3819) | 0.0408 (0.4190) | 0.0308 (0.4421) | 0.0469 (0.4053) | 0.0262 (0.4895) | 0.0137 (0.5075) | 0.0329 (0.4953)
150 | 80 | III | 0.0144 (0.6250) | 0.0823 (0.3225) | 0.0652 (0.3545) | 0.0891 (0.3106) | 0.1135 (0.2651) | 0.0915 (0.2988) | 0.1223 (0.2512) | 0.0188 (0.5133) | 0.0124 (0.5417) | 0.0245 (0.4636)
200 | 40 | I | 0.0828 (0.3181) | 0.1207 (0.2607) | 0.0962 (0.2977) | 0.1335 (0.2426) | 0.0986 (0.3018) | 0.0759 (0.3394) | 0.1096 (0.2847) | 0.0741 (0.3013) | 0.0524 (0.3394) | 0.0746 (0.2842)
200 | 40 | II | 0.0880 (0.3019) | 0.0588 (0.3894) | 0.0461 (0.4162) | 0.0637 (0.3783) | 0.0995 (0.2904) | 0.0804 (0.3226) | 0.1070 (0.2784) | 0.0541 (0.3936) | 0.0420 (0.4204) | 0.0585 (0.3832)
200 | 40 | III | 0.0259 (0.6509) | 0.1010 (0.2895) | 0.0832 (0.3183) | 0.1070 (0.2799) | 0.1265 (0.2469) | 0.1059 (0.2772) | 0.1336 (0.2370) | 0.0280 (0.4123) | 0.0194 (0.4459) | 0.0348 (0.4006)
200 | 80 | I | 0.0764 (0.3343) | 0.1076 (0.2854) | 0.0839 (0.3230) | 0.1194 (0.2676) | 0.0768 (0.3452) | 0.0588 (0.3755) | 0.0870 (0.3250) | 0.0679 (0.3789) | 0.0466 (0.3460) | 0.0622 (0.3604)
200 | 80 | II | 0.0678 (0.3574) | 0.1147 (0.2379) | 0.0924 (0.3081) | 0.1268 (0.2564) | 0.0936 (0.3089) | 0.0725 (0.3450) | 0.1054 (0.2903) | 0.0443 (0.3517) | 0.0286 (0.4077) | 0.0367 (0.3537)
200 | 80 | III | 0.0144 (0.6182) | 0.0997 (0.2913) | 0.0817 (0.3212) | 0.1062 (0.2811) | 0.1280 (0.2441) | 0.1063 (0.2760) | 0.1356 (0.2335) | 0.0160 (0.4906) | 0.0097 (0.5201) | 0.0170 (0.4804)
200 | 100 | I | 0.0578 (0.4088) | 0.0857 (0.3170) | 0.0660 (0.3530) | 0.0960 (0.2996) | 0.0661 (0.3637) | 0.0495 (0.3965) | 0.0752 (0.3464) | 0.0413 (0.3758) | 0.0399 (0.3616) | 0.0425 (0.3591)
200 | 100 | II | 0.0464 (0.7406) | 0.0462 (0.4079) | 0.0328 (0.4363) | 0.0536 (0.3922) | 0.0340 (0.4377) | 0.0255 (0.4589) | 0.0396 (0.4239) | 0.0351 (0.4107) | 0.0202 (0.4397) | 0.0394 (0.3958)
200 | 100 | III | 0.0117 (0.6192) | 0.0941 (0.3004) | 0.0747 (0.3339) | 0.1017 (0.2879) | 0.1215 (0.2526) | 0.0977 (0.2886) | 0.1308 (0.2392) | 0.0107 (0.5289) | 0.0082 (0.5524) | 0.0127 (0.4885)
Table 5. The ABs of δ.
n | m | CS | δ̂_ML | δ̂_SE (Prior 1) | δ̂_LL (Prior 1) | δ̂_GE (Prior 1) | δ̂_SE (Prior 2) | δ̂_LL (Prior 2) | δ̂_GE (Prior 2) | δ̂_SE (Prior 3) | δ̂_LL (Prior 3) | δ̂_GE (Prior 3)
30 | 20 | I | −0.1773 | −0.2408 | −0.2453 | −0.2636 | −0.2962 | −0.2611 | −0.3112 | −0.2589 | −0.2238 | −0.2737
30 | 20 | II | −0.2775 | −0.2273 | −0.1966 | −0.2433 | −0.2105 | −0.2186 | −0.2309 | −0.2394 | −0.2608 | −0.2711
30 | 20 | III | 0.0882 | −0.1367 | −0.1085 | −0.1458 | −0.2767 | −0.2419 | −0.2419 | −0.1478 | −0.1205 | −0.1632
70 | 30 | I | −0.2443 | −0.3380 | −0.3038 | −0.3519 | −0.3441 | −0.3089 | −0.3593 | −0.3055 | −0.2969 | −0.3210
70 | 30 | II | −0.3118 | −0.3021 | −0.2672 | −0.3168 | −0.2860 | −0.2473 | −0.3028 | −0.3064 | −0.2702 | −0.3213
70 | 30 | III | 0.0563 | −0.2258 | −0.1980 | −0.2368 | −0.3031 | −0.2713 | −0.3152 | −0.2089 | −0.1792 | −0.2216
70 | 50 | I | −0.1996 | −0.3044 | −0.2679 | −0.3193 | −0.2998 | −0.2625 | −0.3157 | −0.2411 | −0.2068 | −0.2547
70 | 50 | II | −0.2118 | −0.1734 | −0.1563 | −0.1845 | −0.1951 | −0.3117 | −0.1936 | −0.1942 | −0.2951 | −0.2077
70 | 50 | III | 0.0278 | −0.2169 | −0.1862 | −0.2296 | −0.3119 | −0.2775 | −0.3252 | −0.2237 | −0.1660 | −0.2048
100 | 30 | I | −0.2655 | −0.3700 | −0.3347 | −0.3833 | −0.3595 | −0.3264 | −0.3746 | −0.2962 | −0.2590 | −0.3116
100 | 30 | II | −0.2824 | −0.2690 | −0.2340 | −0.2820 | −0.3198 | −0.2840 | −0.3337 | −0.2734 | −0.2407 | −0.2866
100 | 30 | III | 0.0671 | −0.2610 | −0.2327 | −0.2710 | −0.3218 | −0.2901 | −0.3327 | −0.2414 | −0.2101 | −0.2540
100 | 50 | I | −0.2397 | −0.3330 | −0.2974 | −0.3483 | −0.3091 | −0.2765 | −0.3251 | −0.3352 | −0.3003 | −0.3501
100 | 50 | II | −0.2884 | −0.2672 | −0.2334 | −0.2841 | −0.2325 | −0.2020 | −0.2488 | −0.2696 | −0.2348 | −0.2862
100 | 50 | III | 0.0203 | −0.2516 | −0.2235 | −0.2634 | −0.3218 | −0.2889 | −0.3339 | −0.1514 | −0.1218 | −0.1628
100 | 70 | I | −0.1870 | −0.2931 | −0.2582 | −0.3098 | −0.2672 | −0.2323 | −0.2827 | −0.3563 | −0.3226 | −0.3705
100 | 70 | II | 0.0071 | −0.2002 | −0.1838 | −0.2117 | −0.1940 | −0.1776 | −0.2057 | −0.2017 | −0.2922 | −0.2215
100 | 70 | III | 0.0126 | −0.2385 | −0.2064 | −0.2514 | −0.3244 | −0.2869 | −0.3374 | −0.1609 | −0.1332 | −0.1709
150 | 40 | I | −0.2702 | −0.3489 | −0.3136 | −0.3655 | −0.3087 | −0.2759 | −0.3263 | −0.3078 | −0.2720 | −0.3256
150 | 40 | II | −0.2943 | −0.2806 | −0.2488 | −0.2927 | −0.3325 | −0.2991 | −0.3460 | −0.2724 | −0.2461 | −0.2844
150 | 40 | III | 0.0507 | −0.2879 | −0.2598 | −0.2982 | −0.3389 | −0.3074 | −0.3497 | −0.2192 | −0.1464 | −0.1907
150 | 60 | I | −0.2497 | −0.3259 | −0.2900 | −0.3431 | −0.2911 | −0.2555 | −0.3089 | −0.3234 | −0.2877 | −0.3406
150 | 60 | II | −0.2248 | −0.3362 | −0.3015 | −0.3525 | −0.3034 | −0.2681 | −0.3210 | −0.2383 | −0.2028 | −0.2544
150 | 60 | III | 0.0405 | −0.2899 | −0.2586 | −0.3009 | −0.3434 | −0.3118 | −0.3553 | −0.1886 | −0.1582 | −0.1994
150 | 80 | I | −0.2058 | −0.2971 | −0.2606 | −0.3135 | −0.2628 | −0.2279 | −0.2797 | −0.3418 | −0.3072 | −0.3580
150 | 80 | II | 0.0411 | −0.2018 | −0.1722 | −0.2164 | −0.1794 | −0.1562 | −0.1960 | −0.1845 | −0.1608 | −0.1031
150 | 80 | III | 0.0267 | −0.2758 | −0.2438 | −0.2877 | −0.3343 | −0.2995 | −0.3471 | −0.1580 | −0.1566 | −0.1948
200 | 40 | I | −0.2802 | −0.3376 | −0.3006 | −0.3557 | −0.2965 | −0.2589 | −0.3136 | −0.2970 | −0.2589 | −0.3141
200 | 40 | II | −0.2964 | −0.2090 | −0.1821 | −0.2200 | −0.3080 | −0.2758 | −0.3199 | −0.2047 | −0.1779 | −0.2151
200 | 40 | III | 0.0525 | −0.3088 | −0.2800 | −0.3184 | −0.3515 | −0.3212 | −0.3613 | −0.2860 | −0.2524 | −0.2977
200 | 80 | I | −0.2640 | −0.3129 | −0.2573 | −0.3308 | −0.2558 | −0.2228 | −0.2733 | −0.2194 | −0.1823 | −0.2379
200 | 80 | II | −0.2409 | −0.3244 | −0.2902 | −0.3419 | −0.2894 | −0.2533 | −0.3080 | −0.1266 | −0.1906 | −0.1446
200 | 80 | III | 0.0199 | −0.3070 | −0.2771 | −0.3173 | −0.3542 | −0.3223 | −0.3648 | −0.1077 | −0.1781 | −0.1179
200 | 100 | I | −0.1895 | −0.2813 | −0.2453 | −0.2987 | −0.2347 | −0.2021 | −0.2519 | −0.2225 | −0.1867 | −0.2392
200 | 100 | II | 0.1423 | −0.1904 | −0.1620 | −0.2061 | −0.1606 | −0.1394 | −0.1744 | −0.1876 | −0.1586 | −0.2025
200 | 100 | III | 0.0209 | −0.2979 | −0.2644 | −0.3104 | −0.3457 | −0.3097 | −0.3592 | −0.1005 | −0.0729 | −0.1098
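The entries of Tables 4 and 5 are taken here to be the usual Monte Carlo summaries: over R replications with estimates δ̂^(1), …, δ̂^(R) of a true value δ, AB = (1/R) Σ_r δ̂^(r) − δ and MSE = (1/R) Σ_r (δ̂^(r) − δ)². A minimal helper that computes both quantities (Python, for illustration only; these definitions are assumed rather than quoted from the paper):

```python
import numpy as np


def ab_and_mse(estimates, delta_true):
    # average bias and mean squared error over Monte Carlo replications
    est = np.asarray(estimates, dtype=float)
    ab = est.mean() - delta_true                 # average bias
    mse = np.mean((est - delta_true) ** 2)       # mean squared error
    return ab, mse
```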
Table 6. The CPs and AWs of δ with γ = 0.05.
n | m | CS | ACI CP | ACI AW | BCI CP | BCI AW | HPD CP (Prior 1) | HPD AW (Prior 1) | HPD CP (Prior 2) | HPD AW (Prior 2) | HPD CP (Prior 3) | HPD AW (Prior 3)
30 | 20 | I | 0.8321 | 0.6013 | 0.8445 | 0.6906 | 0.8940 | 0.1635 | 0.8660 | 0.1540 | 0.8667 | 0.6529
30 | 20 | II | 0.8474 | 0.6552 | 0.8134 | 0.2487 | 0.9920 | 0.4038 | 0.9960 | 0.3528 | 0.9564 | 0.3156
30 | 20 | III | 0.8744 | 0.4385 | 0.8564 | 0.8160 | 0.9560 | 0.2294 | 0.9280 | 0.1996 | 0.9800 | 0.7481
70 | 30 | I | 0.8117 | 0.2247 | 0.9011 | 0.4620 | 0.8140 | 0.1303 | 0.8280 | 0.1227 | 0.8767 | 0.5936
70 | 30 | II | 0.8267 | 0.8257 | 0.8654 | 0.3161 | 0.9900 | 0.3490 | 0.9960 | 0.3305 | 0.9270 | 0.3964
70 | 30 | III | 0.8217 | 0.7948 | 0.8973 | 0.2450 | 0.8740 | 0.1334 | 0.8520 | 0.1327 | 0.9267 | 0.6505
70 | 50 | I | 0.8623 | 0.5887 | 0.8376 | 0.5036 | 0.8560 | 0.1327 | 0.8600 | 0.1321 | 0.9033 | 0.4311
70 | 50 | II | 0.9000 | 0.3739 | 0.8021 | 0.1274 | 0.9980 | 0.3435 | 1.0000 | 0.3064 | 0.9677 | 0.3967
70 | 50 | III | 0.9100 | 0.3099 | 0.8480 | 1.1489 | 0.9220 | 0.1708 | 0.8920 | 0.1597 | 0.9320 | 0.5949
100 | 30 | I | 0.7993 | 0.3831 | 0.9338 | 0.5442 | 0.8360 | 0.1195 | 0.8680 | 0.1224 | 0.8233 | 0.5062
100 | 30 | II | 0.8278 | 1.2137 | 0.8966 | 0.1809 | 0.9960 | 0.3579 | 1.0000 | 0.3400 | 0.9088 | 0.4951
100 | 30 | III | 0.8200 | 0.2649 | 0.9201 | 0.5107 | 0.8280 | 0.1276 | 0.8000 | 0.1264 | 0.8167 | 0.6223
100 | 50 | I | 0.8181 | 0.2554 | 0.8694 | 0.4747 | 0.8340 | 0.1264 | 0.8020 | 0.1225 | 0.8300 | 0.5383
100 | 50 | II | 0.8500 | 0.4492 | 0.8541 | 2.3060 | 0.9940 | 0.3193 | 0.9980 | 0.3140 | 0.9154 | 0.4446
100 | 50 | III | 0.8300 | 0.5740 | 0.8896 | 0.1546 | 0.8540 | 0.1297 | 0.8340 | 0.1304 | 0.8470 | 0.5760
100 | 70 | I | 0.8268 | 0.5072 | 0.8521 | 0.4480 | 0.8500 | 0.1289 | 0.8620 | 0.1296 | 0.8933 | 0.5039
100 | 70 | II | 0.9108 | 0.3361 | 0.8247 | 0.5436 | 0.9900 | 0.3020 | 1.0000 | 0.2956 | 0.9490 | 0.3036
100 | 70 | III | 0.8933 | 0.3486 | 0.8642 | 0.5161 | 0.8920 | 0.1441 | 0.8720 | 0.1406 | 0.8765 | 0.5381
150 | 40 | I | 0.7882 | 0.3657 | 0.9425 | 0.7191 | 0.7900 | 0.1257 | 0.7900 | 0.0485 | 0.8533 | 0.4955
150 | 40 | II | 0.8110 | 1.3912 | 0.9077 | 0.4710 | 0.9840 | 0.3299 | 0.9940 | 0.3240 | 0.9153 | 0.3727
150 | 40 | III | 0.8012 | 0.2883 | 0.9267 | 0.5642 | 0.8080 | 1.2255 | 0.8280 | 0.1281 | 0.8268 | 0.5742
150 | 60 | I | 0.8117 | 0.2986 | 0.9247 | 0.4427 | 0.8220 | 0.4337 | 0.8340 | 0.0322 | 0.8650 | 0.5605
150 | 60 | II | 0.8438 | 0.4730 | 0.8771 | 0.7233 | 0.9960 | 0.3113 | 0.9980 | 0.3068 | 0.9400 | 0.3554
150 | 60 | III | 0.8201 | 0.3532 | 0.9045 | 0.2224 | 0.8280 | 0.1245 | 0.7680 | 0.1202 | 0.8767 | 0.5389
150 | 80 | I | 0.8335 | 0.3612 | 0.8803 | 0.4375 | 0.8340 | 0.1237 | 0.8280 | 0.1214 | 0.8760 | 0.5210
150 | 80 | II | 0.8551 | 0.2738 | 0.8456 | 0.9327 | 0.9960 | 0.2978 | 1.0000 | 0.2934 | 0.9467 | 0.3366
150 | 80 | III | 0.8321 | 0.4967 | 0.8815 | 0.1676 | 0.8420 | 0.1286 | 0.8380 | 0.1254 | 0.8947 | 0.5161
200 | 40 | I | 0.7735 | 0.2248 | 0.9488 | 1.3135 | 0.7820 | 0.6273 | 0.7840 | 0.5999 | 0.8467 | 0.5919
200 | 40 | II | 0.8143 | 1.6440 | 0.9192 | 0.7622 | 0.9840 | 0.3390 | 0.9900 | 0.3732 | 0.9633 | 0.2906
200 | 40 | III | 0.8117 | 0.2801 | 0.9329 | 0.8906 | 0.8300 | 0.1232 | 0.8080 | 0.0969 | 0.8377 | 0.5707
200 | 80 | I | 0.7993 | 0.2860 | 0.9102 | 0.4652 | 0.7740 | 0.5723 | 0.7860 | 0.6247 | 0.8333 | 0.5644
200 | 80 | II | 0.8400 | 0.3048 | 0.8879 | 0.5705 | 0.9940 | 0.3004 | 0.9960 | 0.2951 | 0.9867 | 0.2553
200 | 80 | III | 0.8255 | 0.2656 | 0.9133 | 0.2379 | 0.8160 | 0.1206 | 0.8260 | 0.1241 | 0.8967 | 0.5059
200 | 100 | I | 0.8119 | 0.2970 | 0.8924 | 0.4143 | 0.8060 | 0.1353 | 0.8260 | 0.2840 | 0.8430 | 0.5283
200 | 100 | II | 0.8900 | 0.2355 | 0.8359 | 1.0363 | 0.9980 | 0.2908 | 1.0000 | 0.2878 | 0.9933 | 0.2066
200 | 100 | III | 0.8281 | 0.4396 | 0.8729 | 0.1190 | 0.8160 | 0.1242 | 0.8180 | 0.1212 | 0.9004 | 0.4892
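The HPD columns of Table 6 summarize intervals built from MCMC draws of δ; reference [41] gives the standard Monte Carlo construction, in which the interval is the shortest one whose endpoints are order statistics of the posterior sample. A minimal sketch of that construction (Python; not the authors' code):

```python
import numpy as np


def hpd_interval(draws, gamma=0.05):
    # empirical HPD interval from MCMC draws (Chen-Shao construction [41]):
    # the narrowest interval with order-statistic endpoints that contains
    # roughly (1 - gamma) of the posterior draws
    x = np.sort(np.asarray(draws, dtype=float))
    n_draws = len(x)
    k = int(np.ceil((1.0 - gamma) * n_draws))    # draws required inside the interval
    widths = x[k - 1:] - x[: n_draws - k + 1]    # widths of all candidate intervals
    j = int(np.argmin(widths))
    return x[j], x[j + k - 1]


# e.g., hpd_interval(posterior_draws_of_delta, gamma=0.05) returns one (lower, upper) pair;
# repeating this over simulated datasets yields HPD coverage probabilities and average widths.
```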
Table 7. Two sets of real data and the K–S test.
Data 1 (K–S statistic = 0.1669, p-value = 0.3468):
1.013, 1.034, 1.169, 1.266, 1.509, 1.533, 1.563, 1.716, 1.929, 1.965,
2.061, 2.344, 2.546, 2.626, 2.778, 2.951, 3.413, 4.118, 5.136
Data 2 (K–S statistic = 0.1029, p-value = 0.1587):
0.301, 0.309, 0.557, 0.943, 1.070, 1.124, 1.248, 1.281, 1.281, 1.303,
1.432, 1.480, 1.505, 1.506, 1.568, 1.615, 1.619, 1.652, 1.652, 1.757,
1.795, 1.866, 1.876, 1.899, 1.911, 1.912, 1.914, 1.981, 2.010, 2.038,
2.085, 2.089, 2.097, 2.135, 2.154, 2.190, 2.194, 2.223, 2.224, 2.229,
2.300, 2.324, 2.349, 2.385, 2.481, 2.610, 2.625, 2.632, 2.646, 2.661,
2.688, 2.823, 2.890, 2.902, 2.934, 2.962, 2.964, 3.000, 3.103, 3.114,
3.117, 3.166, 3.344, 3.376, 3.385, 3.443, 3.467, 3.478, 3.578, 3.595,
3.699, 3.779, 3.924, 4.035, 4.121, 4.167, 4.240, 4.255, 4.278, 4.305,
4.376, 4.449, 4.485, 4.570, 4.602, 4.663, 4.694
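The K–S results in Table 7 can be reproduced in outline by evaluating the fitted GILD CDF at the observed data. The sketch below (Python; not the authors' code) uses scipy.stats.kstest with a commonly used GILD CDF parameterization; the mapping of (alpha, theta) to the paper's (λ, β) and the parameter values alpha_hat and theta_hat are placeholders, since the fitted MLEs are reported elsewhere in the paper.

```python
import numpy as np
from scipy.stats import kstest


def gild_cdf(x, alpha, theta):
    # GILD CDF in a common (shape alpha, scale-type theta) parameterization;
    # the correspondence with the paper's (lambda, beta) is an assumption
    x = np.asarray(x, dtype=float)
    return (1.0 + theta / ((1.0 + theta) * x**alpha)) * np.exp(-theta / x**alpha)


# Data 1 from Table 7
data1 = [1.013, 1.034, 1.169, 1.266, 1.509, 1.533, 1.563, 1.716, 1.929, 1.965,
         2.061, 2.344, 2.546, 2.626, 2.778, 2.951, 3.413, 4.118, 5.136]

# placeholder values standing in for the fitted MLEs (not the paper's estimates)
alpha_hat, theta_hat = 2.0, 2.5
stat, pval = kstest(data1, lambda x: gild_cdf(x, alpha_hat, theta_hat))
print(f"K-S statistic = {stat:.4f}, p-value = {pval:.4f}")
```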
Table 8. The predetermined censored schemes and censored samples.
Data | Serial Number | Censored Scheme | Censored Sample
Data 1 | I | (8 × 1, 0 × 10) | 1.013, 1.965, 2.061, 2.344, 2.546, 2.626, 2.778, 2.951, 3.413, 4.118, 5.136
Data 1 | II | (4 × 1, 0 × 9, 4 × 1) | 1.013, 1.533, 1.563, 1.716, 1.929, 1.965, 2.061, 2.344, 2.546, 2.626, 2.778
Data 2 | I | (47 × 1, 0 × 39) | 0.301, 2.646, 2.661, 2.688, 2.823, 2.890, 2.902, 2.934, 2.962, 2.964, 3.000, 3.103, 3.114, 3.117, 3.166, 3.344, 3.376, 3.385, 3.443, 3.467, 3.478, 3.578, 3.595, 3.699, 3.779, 3.924, 4.035, 4.121, 4.167, 4.240, 4.255, 4.278, 4.305, 4.376, 4.449, 4.485, 4.570, 4.602, 4.663, 4.694
Data 2 | II | (20 × 1, 0 × 38, 27 × 1) | 0.301, 1.866, 1.876, 1.899, 1.911, 1.912, 1.914, 1.981, 2.010, 2.038, 2.085, 2.089, 2.097, 2.135, 2.154, 2.190, 2.194, 2.223, 2.224, 2.229, 2.300, 2.324, 2.349, 2.385, 2.481, 2.610, 2.625, 2.632, 2.646, 2.661, 2.688, 2.823, 2.890, 2.902, 2.934, 2.962, 2.964, 3.000, 3.103, 3.114
Table 9. The point and interval estimation of δ with γ = 0.05.
Data | CS | δ̂_ML | δ̂_SE | δ̂_LL | δ̂_GE | ACI | BCI | HPD
Data 1 | I | 0.3788 | 0.5689 | 0.5733 | 0.5595 | (0.2770, 1.2346) | (0.1437, 0.7382) | (0.1139, 0.8767)
Data 1 | II | 0.5571 | 0.7969 | 0.7909 | 0.7905 | (0.1106, 1.4642) | (0.4201, 1.6411) | (0.3229, 1.0561)
Data 2 | I | 0.5217 | 0.4222 | 0.4237 | 0.4219 | (0.2591, 0.6148) | (0.3114, 0.5995) | (0.2739, 0.4948)
Data 2 | II | 0.2639 | 0.3786 | 0.3819 | 0.3790 | (0.1816, 0.4095) | (0.1004, 0.4150) | (0.1840, 0.4732)
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
