Article

Estimating the Lifetime Parameters of the Odd-Generalized-Exponential–Inverse-Weibull Distribution Using Progressive First-Failure Censoring: A Methodology with an Application

by Mahmoud M. Ramadan 1, Rashad M. EL-Sagheer 2,3,* and Amel Abd-El-Monem 1

1 Department of Mathematics, Faculty of Education, Ain Shams University, Roxy 11341, Cairo, Egypt
2 Mathematics Department, Faculty of Science, Al-Azhar University, Naser City 11884, Cairo, Egypt
3 High Institute of Computer and Management Information System, First Statement, New Cairo 11865, Cairo, Egypt
* Author to whom correspondence should be addressed.
Axioms 2024, 13(12), 822; https://doi.org/10.3390/axioms13120822
Submission received: 12 September 2024 / Revised: 12 November 2024 / Accepted: 21 November 2024 / Published: 25 November 2024

Abstract: This paper investigates statistical methods for estimating unknown lifetime parameters using a progressive first-failure censoring dataset. The failure mode’s lifetime distribution is modeled by the odd-generalized-exponential–inverse-Weibull distribution. Maximum-likelihood estimators for the model parameters, including the survival, hazard rate, and inverse hazard rate functions, are obtained, though they lack closed-form expressions. The Newton–Raphson method is used to compute these estimates. Confidence intervals for the parameters are approximated via the asymptotic normality of the maximum-likelihood estimators. The Fisher information matrix is derived using the missing-information principle, and the delta method is applied to approximate the confidence intervals for the survival, hazard rate, and inverse hazard rate functions. Bayes estimators are calculated under the squared error, linear exponential, and general entropy loss functions, utilizing independent gamma distributions as informative priors. Markov-chain Monte Carlo sampling provides the highest-posterior-density credible intervals and Bayesian point estimates for the parameters and reliability characteristics. This study evaluates these methods through Monte Carlo simulations, comparing Bayes and maximum-likelihood estimates based on mean squared errors for point estimates, and on average interval widths and coverage probabilities for interval estimators. A real dataset is also analyzed to illustrate the proposed methods.

1. Introduction

The odd-generalized-exponential–inverse-Weibull distribution (OGE-IWD) is a flexible statistical model combining features of both the exponential and the Weibull distributions, tailored for various applications. Its probability density function incorporates parameters that control the shape and scale, allowing it to model a wide range of data types, from life testing to reliability analysis. This distribution is particularly useful in fields where data exhibit both exponential decay and Weibull-like tail behavior, such as engineering, survival analysis, and environmental studies. By adapting to different data patterns, the OGE-IWD enables researchers to better capture underlying phenomena and make more accurate predictions. Its versatility makes it a powerful tool for analyzing complex datasets where traditional models fall short.
Hassan et al. [1] introduced the OGE-IWD as a generalization of the inverse Weibull distribution (IWD) and obtained some of its statistical properties. They also estimated the unknown parameters of the OGE-IWD using maximum likelihood (ML), percentiles, and least-squares methods based on complete samples. Moreover, the Bayes and E-Bayes estimators for the unknown parameters of the OGE-IWD based on Type-II censored samples under the balanced squared error and linear exponential loss functions were derived by Mohamed et al. [2]. If a continuous random variable X follows the OGE-IWD, denoted by OGE-IWD$(\alpha,\beta,\lambda)$, then the cumulative distribution function (CDF), probability density function (PDF), survival function (SF), hazard rate function (HRF), and inverse hazard rate function (IHRF) are given, respectively, as
$$F(x;\alpha,\beta,\lambda)=1-\exp\left\{-\lambda\left[\exp(\beta x^{-\alpha})-1\right]^{-1}\right\},\quad x>0,\ \alpha,\beta,\lambda>0,\qquad(1)$$
$$f(x;\alpha,\beta,\lambda)=\alpha\beta\lambda x^{-(\alpha+1)}\exp(-\beta x^{-\alpha})\left[1-\exp(-\beta x^{-\alpha})\right]^{-2}\exp\left\{-\lambda\left[\exp(\beta x^{-\alpha})-1\right]^{-1}\right\},\quad x>0,\ \alpha,\beta,\lambda>0,\qquad(2)$$
$$S(t;\alpha,\beta,\lambda)=\exp\left\{-\lambda\left[\exp(\beta t^{-\alpha})-1\right]^{-1}\right\},\quad t>0,\ \alpha,\beta,\lambda>0,\qquad(3)$$
$$h(t;\alpha,\beta,\lambda)=\alpha\beta\lambda t^{-(\alpha+1)}\exp(-\beta t^{-\alpha})\left[1-\exp(-\beta t^{-\alpha})\right]^{-2},\quad t>0,\ \alpha,\beta,\lambda>0,\qquad(4)$$
and
$$r(t;\alpha,\beta,\lambda)=\frac{\alpha\beta\lambda t^{-(\alpha+1)}\exp(-\beta t^{-\alpha})\left[1-\exp(-\beta t^{-\alpha})\right]^{-2}\exp\left\{-\lambda\left[\exp(\beta t^{-\alpha})-1\right]^{-1}\right\}}{1-\exp\left\{-\lambda\left[\exp(\beta t^{-\alpha})-1\right]^{-1}\right\}},\quad t>0,\ \alpha,\beta,\lambda>0,\qquad(5)$$
where $\alpha,\beta$ are the shape parameters and $\lambda$ is the scale parameter. It is clear that the OGE-IWD reduces to the odd-generalized-exponential–inverse-exponential distribution (OGE-IED) when $\alpha=1$, and to the odd-generalized-exponential–inverse-Rayleigh distribution (OGE-IRD) when $\alpha=2$. Hassan et al. [1] also observed that the PDF of the OGE-IWD over a range of parameter values, as displayed in Figure 1, can be unimodal or decreasing in shape and right-skewed, and they showed that the HRF of the OGE-IWD can be increasing, decreasing, or right-skewed; see Figure 2.
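For concreteness, the distribution functions in Equations (1)–(4) are easy to evaluate numerically. The sketch below is our own illustration (function names and parameter values are ours, not the paper's):

```python
import numpy as np

def oge_iwd_cdf(x, alpha, beta, lam):
    """CDF of Eq. (1): F(x) = 1 - exp(-lam / (exp(beta * x**-alpha) - 1))."""
    return 1.0 - np.exp(-lam / np.expm1(beta * x ** (-alpha)))

def oge_iwd_pdf(x, alpha, beta, lam):
    """PDF of Eq. (2); g is the inverse-Weibull CDF exp(-beta * x**-alpha)."""
    g = np.exp(-beta * x ** (-alpha))
    return (alpha * beta * lam * x ** (-alpha - 1.0) * g / (1.0 - g) ** 2
            * np.exp(-lam * g / (1.0 - g)))   # note g/(1-g) = 1/(exp(beta*x**-alpha)-1)

def oge_iwd_sf(t, alpha, beta, lam):
    """Survival function of Eq. (3): S(t) = 1 - F(t)."""
    return np.exp(-lam / np.expm1(beta * t ** (-alpha)))

def oge_iwd_hrf(t, alpha, beta, lam):
    """Hazard rate of Eq. (4), equal to f(t) / S(t)."""
    g = np.exp(-beta * t ** (-alpha))
    return alpha * beta * lam * t ** (-alpha - 1.0) * g / (1.0 - g) ** 2
```

The reversed hazard rate of Equation (5) follows as `oge_iwd_pdf(t, ...) / oge_iwd_cdf(t, ...)`.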
Discussing failure times for both early and late failures enriches the discussion, as they provide a foundation for understanding why the OGE-IWD is a suitable choice for modeling lifetime data. The following key points clarify this.
i.
The importance of OGE-IWD in capturing failure characteristics: The OGE-IWD distribution offers flexibility in modeling skewed lifetime data and can capture the heavy tails or asymmetry often observed in failure times. This capability makes it particularly useful for datasets with progressive first-failure censoring, where failures may occur early or late in a product’s lifetime, depending on the external stressors or quality variances.
ii.
Modeling both early and late failures: Discussing cases of both early failures and late failures can illustrate how the OGE-IWD accommodates a wide range of failure behaviors. Early failures may be common in systems with a “burn-in” period, while late failures might indicate wear-out phenomena. The OGE-IWD’s structure allows it to adapt to these different modes, improving model accuracy across varying failure patterns.
iii.
OGE-IWD’s flexibility compared to traditional distributions: Traditional lifetime models, like the Weibull or exponential, may fall short in adequately modeling the dual nature of early and late failures due to their more restrictive shapes. The OGE-IWD adds flexibility through additional shape parameters, which enable it to fit a broader spectrum of real-world lifetime data characteristics, particularly for complex failure patterns.
In life testing experiments, obtaining complete lifespan data for every component can be challenging, leading researchers to use two main types of censoring: Type-I and Type-II. Type-I censoring is applied when the duration of the experiment is predetermined, making the number of units that fail be a random variable. In contrast, Type-II censoring is used when the experiment’s duration is a random variable but the number of units that fail is fixed in advance. Several authors have studied these two types on many different statistical distributions, including Noor and Aslam [3], who explored Bayesian inference for the inverse Weibull mixture distribution under Type-I censoring, highlighting challenges and methodologies for parameter estimation; Basu et al. [4], who focused on parameter estimation for the inverse Lindley distribution using Type-I censored data, offering insights into model fitting and estimation techniques; Joarder et al. [5], who provided a comprehensive analysis of Weibull parameters with conventional Type-I censoring, emphasizing conventional inference methods; Kundu and Howlader [6], who shifted to Type-II censoring, discussing Bayesian inference and predictions for the inverse Weibull distribution; Kundu and Raqab [7], who extended this to Bayesian inference on order statistics under Type-II censoring for Weibull distributions; Singh et al. [8], who addressed Bayesian estimation for flexible Weibull models with Type-II censoring, focusing on model flexibility and predictions; Panahi and Sayyareh [9], who investigated parameter estimation for the Burr Type-XII distribution with Type-II censoring, expanding the scope to more complex distributions; Asgharzadeh et al. [10], who studied the Lindley model under Type-II censoring, detailing statistical inference and prediction strategies; Xin et al. 
[11], who conducted reliability inference for the three-parameter Burr Type-XII distribution with Type-II censoring, highlighting its applications in reliability analysis; Goyal et al. [12], who compared classical and Bayesian approaches for a new lifetime model with Type-II censoring, providing a comprehensive view of different inference techniques; and Arabi Belaghi et al. [13], who examined the Poisson-Exponential distribution under Type-II censoring, discussing estimation and prediction methodologies in this context.
Both censoring types ensure that any remaining components cannot be removed until the end of the test. To address this, a progressive Type-II censoring (PT-2C) method, as outlined in Balakrishnan and Aggarwala [14], seeks to optimize resource use and cut costs by incorporating multiple stages of censoring. This approach allows for the possibility of removing some surviving units during the test period according to a predefined scheme, which is especially useful when the experimental units being tested are very expensive. It is also a useful scheme in which a specific fraction of individuals at risk may be removed from the experiment at several ordered failure times; see Cohen [15]. Under this scheme, a set of $n$ independent units is subjected to a life testing experiment, and failure times are observed as a progressive sample $X_{1:m:n}<X_{2:m:n}<\cdots<X_{m:m:n}$, where $m$ (with $m<n$) denotes the number of observed failure units. A censoring plan $R=(R_1,R_2,\ldots,R_m)$ is predetermined. This process unfolds as follows: upon the occurrence of the first failure $X_1$, $R_1$ of the $n-1$ surviving units are randomly selected and removed from the test. At the second failure $X_2$, $R_2$ of the $n-R_1-2$ surviving units are randomly selected and removed, and this pattern continues. At the $m$th failure, the remaining surviving units $R_m=n-m-(R_1+R_2+\cdots+R_{m-1})$ are removed, ending the test.
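The removal mechanism just described can be simulated directly. The following sketch is our own illustration (not code from the paper): it observes the earliest failure among the survivors and then removes the planned number of surviving units at random.

```python
import random

def pt2c_sample(lifetimes, scheme):
    """Simulate one PT-2C experiment on a pool of n = m + sum(scheme) units,
    removing scheme[i] random survivors after the (i+1)th observed failure."""
    pool = sorted(lifetimes)
    observed = []
    for r in scheme:
        observed.append(pool.pop(0))            # earliest failure among survivors
        for _ in range(r):                      # progressive removals
            pool.pop(random.randrange(len(pool)))
    return observed
```

For example, with $n=9$ hypothetical unit lifetimes and plan $R=(2,0,1,2)$, the test ends after $m=4$ observed (and ordered) failures.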
In recent studies, PT-2C has emerged as a crucial censor for obtaining reliable statistical inference across various distributions, including Brito et al. [16]; who explored inference techniques for the very flexible Weibull distribution, focusing on PT-2C to enhance model accuracy and estimation methods; Abo-Kasem et al. [17], who extended this concept to the Kumaraswamy distribution, emphasizing optimal sampling strategies under similar censoring conditions; Dey and Al-Mosawi [18], who delved into both classical and Bayesian inference for the unit Gompertz distribution using PT-2C data, highlighting the versatility and application of different inferential approaches; Kumar et al. [19], who further contributed by investigating inference methods for the Li–Li Rayleigh distribution, showcasing the practical implications of PT-2C in real-world applications; and Choudhary et al. [20], who examined estimation techniques in the generalized uniform distribution under PT-2C samples, demonstrating the robustness of this censoring scheme across diverse distributions. Although the PT-2C improves experimental efficiency, the testing time remains interesting, due to the long life of the units. For this reason, Johnson [21] introduced a life test that divided the test units into many groups with the same number of units in each group and then ran all the test units simultaneously until the occurrence of the first failure in each group. This type of censoring is called first-failure censoring (FFC). For more details on FFC, see Wu et al. [22] and Wu and Yu [23]. The FFC does not enable the experimenter to remove some groups of test units before observing the first failure in these groups. For this reason, Wu and Kus [24] proposed a life-testing scheme, which combines FFC with a PT-2C called a progressive first-failure censoring (PFFC) scheme.
The PFFC scheme extends and improves progressive censoring; its adaptability makes it a common choice in experimental design. In this scheme, consider a life test involving $n$ independent groups, each containing $k$ units. When the first failure occurs (denoted as $X^{R}_{1:m:n:k}$), $R_1$ groups, along with the group in which the failure was observed, are randomly removed from the test. Similarly, when the second failure happens (denoted as $X^{R}_{2:m:n:k}$), $R_2$ groups and the group in which this failure occurred are randomly removed from the test, and so on until the $m$th failure (denoted as $X^{R}_{m:m:n:k}$), at which point the remaining $R_m$ groups and the group with this failure are removed. Consequently, the failure times $X^{R}_{1:m:n:k}<X^{R}_{2:m:n:k}<\cdots<X^{R}_{m:m:n:k}$ represent the PFFC order statistics under the progressive censoring scheme $R$. It follows that $n=m+R_1+R_2+\cdots+R_m$. Figure 3 shows a simple diagram of the PFFC scheme. One benefit of this censoring scheme is that it shortens the test time and reduces the test cost of the experiment; more items can be utilized, yet only $m$ out of $n\times k$ units fail. If the failure times of the $n\times k$ units come from a continuous distribution with CDF $F(x)$ and PDF $f(x)$, then the joint PDF for the order statistics $X^{R}_{1:m:n:k}<X^{R}_{2:m:n:k}<\cdots<X^{R}_{m:m:n:k}$ can be specified as
$$f(x^{R}_{1:m:n:k},x^{R}_{2:m:n:k},\ldots,x^{R}_{m:m:n:k})=C_R\,k^m\prod_{i=1}^{m}f(x^{R}_{i:m:n:k})\left[1-F(x^{R}_{i:m:n:k})\right]^{k(R_i+1)-1},\qquad(6)$$
where $C_R$ is a progressive normalizing constant defined as $C_R=n(n-1-R_1)(n-2-R_1-R_2)\cdots(n-m+1-R_1-R_2-\cdots-R_{m-1})$. It is important to note that various censoring schemes can be represented within this framework. For instance, the complete sample scenario is captured when $k=1$ and $R_i=0$ for all $i=1,2,\ldots,m$. The FFC case applies when $k>1$ and $R_i=0$ for all $i=1,2,\ldots,m$. PT-2C is represented when $k=1$. Additionally, the standard Type-II censored sample can be modeled by setting $k=1$, $R_i=0$ for $i=1,2,\ldots,m-1$, and $R_m=n-m$. It is also evident that $X^{R}_{1:m:n:k}<X^{R}_{2:m:n:k}<\cdots<X^{R}_{m:m:n:k}$ can be considered a PT-2C sample drawn from a population with CDF $1-(1-F(x))^k$. Consequently, findings for PT-2C order statistics can be directly applied to PFFC order statistics in the corresponding cases.
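The reduction of PFFC to PT-2C under the CDF $1-(1-F(x))^k$ gives a practical way to generate PFFC samples. Below is our own sketch (names and parameter values are ours): PT-2C uniform order statistics are drawn with the standard Balakrishnan–Sandhu simulation algorithm and then pushed through the quantile transform implied by the transformed CDF, using the OGE-IWD quantile obtained by inverting Equation (1).

```python
import numpy as np

def pt2c_uniform(scheme, rng):
    """Balakrishnan-Sandhu algorithm: PT-2C order statistics on U(0,1)."""
    m = len(scheme)
    w = rng.uniform(size=m)
    # exponent of V_i is i + R_m + R_{m-1} + ... + R_{m-i+1}
    tail = np.cumsum(np.asarray(scheme)[::-1])
    v = w ** (1.0 / (np.arange(1, m + 1) + tail))
    # U_i = 1 - V_m * V_{m-1} * ... * V_{m-i+1}
    return 1.0 - np.cumprod(v[::-1])

def oge_iwd_quantile(u, alpha, beta, lam):
    """Inverse of the CDF in Equation (1), solved for x."""
    return (beta / np.log1p(-lam / np.log1p(-u))) ** (1.0 / alpha)

def pffc_sample(scheme, k, alpha, beta, lam, rng):
    """PFFC sample from OGE-IWD: PT-2C sample from CDF 1-(1-F)^k."""
    u = pt2c_uniform(scheme, rng)
    return oge_iwd_quantile(1.0 - (1.0 - u) ** (1.0 / k), alpha, beta, lam)
```

Calling `pffc_sample([2, 0, 1, 2], k=3, ...)` yields the $m=4$ ordered PFFC failure times of a test on $n=9$ groups of $k=3$ units.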
Recent advancements in the statistical analysis of PFFC data reveal a broad array of innovative methodologies and theoretical frameworks aimed at enhancing reliability estimation and model accuracy. Kumar et al. [25] delved into reliability estimation using the inverse Pareto distribution, providing a robust approach for dealing with censored life data. Abd-El-Monem et al. [26] introduced a partially accelerated life test model utilizing a power hazard distribution, offering a refined theoretical framework for handling censored data under accelerated conditions. Elshahhat et al. [27] contributed to the field by applying beta-binomial removals to analyze censored data, which enhances the modeling of failure rates. Kumar et al. [28] extended the discussion to Shannon’s entropy within the Maxwell distribution, incorporating entropy measures into the analysis of censored data. Saini [29] addressed the challenge of multi-stress scenarios by estimating strength reliability using a generalized inverted exponential distribution. Shi and Shi [30] focused on stress-strength reliability for the beta log Weibull distribution, providing insights into system performance under stress conditions. Fathi et al. [31] offered a detailed examination of the Weibull inverted exponential distribution, integrating constant-stress partially accelerated life test models. Eliwa et al. [32] proposed a theoretical framework for fitting extreme data using the modified Weibull distribution in a PFFC context, enhancing the analysis of extreme values. Recently, Gong et al. [33] focused on the generalized Rayleigh distribution and applied a PFFC model to enhance the accuracy of reliability estimations.
Additionally, researchers have investigated various Bayesian and maximum-likelihood estimation methods across different censoring scenarios and model assumptions. However, a significant gap exists in the literature concerning the use of the OGE-IWD lifetime distribution with any type of censoring. Hence, the main aim of this paper is to investigate and compare statistical methods for estimating lifetime parameters from PFFC datasets under the OGE-IWD. The study seeks to derive and compute maximum-likelihood estimators for the model parameters, including the survival, hazard rate, and inverse hazard rate functions, despite the lack of closed-form expressions. It employs the Newton–Raphson (NR) method for estimation and approximates confidence intervals using the asymptotic normality of the MLEs for the parameters of the OGE-IWD, and the delta method for its survival, hazard rate, and inverse hazard rate functions. Additionally, the paper aims to develop Bayes estimators under various loss functions with informative gamma priors and to use Markov-chain Monte Carlo sampling to provide credible intervals (CRIs) and Bayesian point estimates. The effectiveness of these methods is evaluated through Monte Carlo simulation and applied to a real dataset to demonstrate their practical utility.
Reliable parameter estimation is crucial in practical scenarios, especially in fields like reliability engineering, medical research, and risk assessment, where understanding the lifespan and failure behavior of components, systems, or biological processes is essential. Accurate estimates of lifetime parameters allow practitioners to make informed decisions on maintenance schedules, warranty policies, safety assessments, and resource allocation, ultimately minimizing risks and optimizing performance.
In the context of this study, the reliable estimation of parameters in the odd-generalized-exponential–inverse-Weibull distribution provides a robust tool for modeling complex lifetime data with flexible hazard shapes, accommodating various failure modes and censoring types. Progressive first-failure censoring schemes are particularly valuable for situations where testing all items to failure is impractical due to time, cost, or ethical constraints, such as in medical trials or high-cost product testing.
There are many potential applications, such as reliability assessment in engineering systems, where failure modes need nuanced modeling, or in biomedical research, where patient survival data often include censored observations. Moreover, follow-up work could explore alternative estimation techniques for scenarios with smaller sample sizes or extreme censoring, which often challenge classical methods. Future research could also examine how these estimation methods perform with other complex lifetime distributions or investigate computational improvements for Bayesian estimation, particularly in high-dimensional parameter spaces. For more details on the different data models, see, for example, He et al. [34], Ran and Bai [35], Zhuang et al. [36], and Xu et al. [37].
The structure of this paper is as follows: Section 2 examines maximum-likelihood estimators (MLEs) and their approximate confidence intervals (ACIs) for the parameters of the OGE-IWD. In Section 3, we utilize the Markov-chain Monte Carlo (MCMC) method to obtain Bayesian estimates for these parameters, as well as for survival, hazard rate, and inverse hazard rate functions, using independent gamma priors. We also provide the highest posterior density credible intervals based on MCMC results. Section 4 conducts a Monte Carlo simulation to evaluate the proposed estimates in terms of mean squared error, average interval width, and coverage probability. Section 5 analyzes real-world datasets and a simulation example for illustration. Finally, Section 6 wraps up with concluding remarks.

2. Maximum-Likelihood Estimation

Maximum-likelihood estimation (MLE) is a statistical method used to estimate the parameters of a probability distribution by maximizing the likelihood function. This approach seeks the parameter values that make the observed data most probable under the given model. The MLE possesses several desirable properties: it is asymptotically unbiased, meaning that as the sample size grows, the estimator converges to the true parameter value. It is also consistent, indicating that with larger samples, the estimates tend to become more accurate. Additionally, the MLE is asymptotically efficient in the sense that it attains the Cramér–Rao lower bound, which gives the minimum variance of unbiased estimators. Under regularity conditions, MLEs are asymptotically normal, facilitating inference using standard statistical techniques. However, MLE can be sensitive to model misspecification and may produce biased estimates if the model does not fit the data well. Despite these limitations, the MLE remains a widely used and powerful tool in statistical analysis. This section focuses on applying MLE to obtain both point and interval estimates for the unknown parameters $\alpha$, $\beta$, and $\lambda$, as well as for the survival $S(t)$, hazard rate $h(t)$, and inverse hazard rate $r(t)$ functions, within the context of PFFC data.
Let $X^{R}_{1:m:n:k}<X^{R}_{2:m:n:k}<\cdots<X^{R}_{m:m:n:k}$ denote the order statistics of the PFFC sample drawn from OGE-IWD$(\alpha,\beta,\lambda)$ with a pre-fixed censoring scheme $(R_1,R_2,\ldots,R_m)$. Henceforth, instead of $(X^{R}_{1:m:n:k},X^{R}_{2:m:n:k},\ldots,X^{R}_{m:m:n:k})$, we write $\underline{x}=(x_1<x_2<\cdots<x_m)$. Equations (1), (2), and (6) are used to construct the likelihood function (LF), which is given as follows:
$$L\propto\alpha^{m}\beta^{m}\lambda^{m}\prod_{i=1}^{m}x_i^{-(\alpha+1)}\exp(-\beta x_i^{-\alpha})\left[1-\exp(-\beta x_i^{-\alpha})\right]^{-2}\left[\exp\left\{-\lambda\left(\exp(\beta x_i^{-\alpha})-1\right)^{-1}\right\}\right]^{k(R_i+1)}.\qquad(7)$$
If the additive constant is excluded, the log-likelihood function, denoted by $\ell$, can be written as
$$\ell\propto m\log(\alpha\beta\lambda)-(\alpha+1)\sum_{i=1}^{m}\log x_i-\beta\sum_{i=1}^{m}x_i^{-\alpha}-2\sum_{i=1}^{m}\log\left[1-\exp(-\beta x_i^{-\alpha})\right]-\lambda k\sum_{i=1}^{m}(R_i+1)\left[\exp(\beta x_i^{-\alpha})-1\right]^{-1}.\qquad(8)$$
Upon computing the first partial derivatives of Equation (8) with respect to $\alpha$, $\beta$, and $\lambda$, the score equations are obtained in the following manner:
$$\frac{\partial\ell}{\partial\alpha}=\frac{m}{\alpha}-\sum_{i=1}^{m}\log x_i+\beta\sum_{i=1}^{m}x_i^{-\alpha}\log x_i+2\beta\sum_{i=1}^{m}\frac{x_i^{-\alpha}\exp(-\beta x_i^{-\alpha})\log x_i}{1-\exp(-\beta x_i^{-\alpha})}-k\beta\lambda\sum_{i=1}^{m}\frac{x_i^{-\alpha}\exp(\beta x_i^{-\alpha})(R_i+1)\log x_i}{\left[\exp(\beta x_i^{-\alpha})-1\right]^{2}}=0,\qquad(9)$$
$$\frac{\partial\ell}{\partial\beta}=\frac{m}{\beta}-\sum_{i=1}^{m}x_i^{-\alpha}-2\sum_{i=1}^{m}\frac{x_i^{-\alpha}\exp(-\beta x_i^{-\alpha})}{1-\exp(-\beta x_i^{-\alpha})}+k\lambda\sum_{i=1}^{m}\frac{x_i^{-\alpha}\exp(\beta x_i^{-\alpha})(R_i+1)}{\left[\exp(\beta x_i^{-\alpha})-1\right]^{2}}=0,\qquad(10)$$
and
$$\frac{\partial\ell}{\partial\lambda}=\frac{m}{\lambda}-k\sum_{i=1}^{m}\frac{R_i+1}{\exp(\beta x_i^{-\alpha})-1}=0.\qquad(11)$$
To improve clarity, we give a brief practical interpretation of Equations (9)–(11) and of why these derivatives are necessary for estimating the model parameters $\alpha$, $\beta$, and $\lambda$. Equation (9) represents how the log-likelihood function changes with respect to $\alpha$. Intuitively, this derivative captures the sensitivity of the likelihood to changes in $\alpha$, considering the dependence on the observed values $x_i$ and their logarithms. The terms involving $\log x_i$ indicate that this parameter controls how the scaling of the data affects the model fit, allowing us to estimate $\alpha$ as a shape parameter of the distribution. Equation (10) shows the rate of change of the log-likelihood with respect to $\beta$, providing insight into how this parameter influences the distribution's shape. It includes terms involving $x_i^{-\alpha}$, which link it closely with $\alpha$. As a result, $\beta$ primarily adjusts the data's spread, and its partial derivative helps determine the optimal value to maximize the likelihood. Equation (11) shows the influence of $\lambda$ on the likelihood; the term involving $R_i$ reflects how the censoring scheme enters the estimation of this scale parameter.
Simultaneously solving the complex nonlinear equations $\partial\ell/\partial\alpha=0$, $\partial\ell/\partial\beta=0$, and $\partial\ell/\partial\lambda=0$ enables the calculation of the MLEs of $\alpha$, $\beta$, and $\lambda$. Deriving closed-form solutions for Equations (9)–(11) is difficult; therefore, numerical methods such as the NR algorithm are used to determine the MLEs of the unknown parameters. The subsequent steps outline the detailed procedure of this iterative method with a set threshold of $10^{-6}$.
S1.
The starting values for $\Omega=(\alpha,\beta,\lambda)$ need to be specified with $l=0$; that is, $\Omega^{(0)}=(\alpha^{(0)},\beta^{(0)},\lambda^{(0)})$.
S2.
In the $l$th iteration, compute $\left(\partial\ell/\partial\alpha,\partial\ell/\partial\beta,\partial\ell/\partial\lambda\right)^{T}\big|_{\alpha=\alpha^{(l)},\,\beta=\beta^{(l)},\,\lambda=\lambda^{(l)}}$ and $I=I(\alpha^{(l)},\beta^{(l)},\lambda^{(l)})$, where
$$I=I(\alpha^{(l)},\beta^{(l)},\lambda^{(l)})=-\begin{pmatrix}\ell_{11}&\ell_{12}&\ell_{13}\\ \ell_{21}&\ell_{22}&\ell_{23}\\ \ell_{31}&\ell_{32}&\ell_{33}\end{pmatrix}.\qquad(12)$$
The matrix representing the observed information for the parameters α , β , and λ is referred to as I. The components of this matrix, I, are detailed as follows:
$$\ell_{11}=\frac{\partial^{2}\ell}{\partial\alpha^{2}}=-\frac{m}{\alpha^{2}}-\beta\sum_{i=1}^{m}x_i^{-\alpha}(\log x_i)^{2}+2\beta\sum_{i=1}^{m}\frac{x_i^{-\alpha}\left[\beta x_i^{-\alpha}+\exp(-\beta x_i^{-\alpha})-1\right]\exp(-\beta x_i^{-\alpha})(\log x_i)^{2}}{\left[1-\exp(-\beta x_i^{-\alpha})\right]^{2}}-k\beta\lambda\sum_{i=1}^{m}\frac{x_i^{-\alpha}\,\varphi(x_i,\alpha,\beta)\exp(\beta x_i^{-\alpha})(R_i+1)(\log x_i)^{2}}{\left[\exp(\beta x_i^{-\alpha})-1\right]^{3}},\qquad(13)$$
$$\ell_{12}=\ell_{21}=\frac{\partial^{2}\ell}{\partial\alpha\,\partial\beta}=\sum_{i=1}^{m}x_i^{-\alpha}\log x_i-2\sum_{i=1}^{m}\frac{x_i^{-\alpha}\left[\beta x_i^{-\alpha}+\exp(-\beta x_i^{-\alpha})-1\right]\exp(-\beta x_i^{-\alpha})\log x_i}{\left[1-\exp(-\beta x_i^{-\alpha})\right]^{2}}+k\lambda\sum_{i=1}^{m}\frac{x_i^{-\alpha}\,\varphi(x_i,\alpha,\beta)\exp(\beta x_i^{-\alpha})(R_i+1)\log x_i}{\left[\exp(\beta x_i^{-\alpha})-1\right]^{3}},\qquad(14)$$
where
$$\varphi(x_i,\alpha,\beta)=\beta x_i^{-\alpha}\exp(\beta x_i^{-\alpha})+\beta x_i^{-\alpha}-\exp(\beta x_i^{-\alpha})+1,$$
$$\ell_{13}=\ell_{31}=\frac{\partial^{2}\ell}{\partial\alpha\,\partial\lambda}=-k\beta\sum_{i=1}^{m}\frac{x_i^{-\alpha}\exp(\beta x_i^{-\alpha})(R_i+1)\log x_i}{\left[\exp(\beta x_i^{-\alpha})-1\right]^{2}},\qquad(15)$$
$$\ell_{22}=\frac{\partial^{2}\ell}{\partial\beta^{2}}=-\frac{m}{\beta^{2}}+2\sum_{i=1}^{m}\frac{x_i^{-2\alpha}\exp(-\beta x_i^{-\alpha})}{\left[1-\exp(-\beta x_i^{-\alpha})\right]^{2}}-k\lambda\sum_{i=1}^{m}\frac{x_i^{-2\alpha}\exp(\beta x_i^{-\alpha})\left[\exp(\beta x_i^{-\alpha})+1\right](R_i+1)}{\left[\exp(\beta x_i^{-\alpha})-1\right]^{3}},\qquad(16)$$
$$\ell_{23}=\ell_{32}=\frac{\partial^{2}\ell}{\partial\beta\,\partial\lambda}=k\sum_{i=1}^{m}\frac{x_i^{-\alpha}\exp(\beta x_i^{-\alpha})(R_i+1)}{\left[\exp(\beta x_i^{-\alpha})-1\right]^{2}},\qquad(17)$$
and
$$\ell_{33}=\frac{\partial^{2}\ell}{\partial\lambda^{2}}=-\frac{m}{\lambda^{2}}.\qquad(18)$$
S3.
Assign
$$\left(\alpha^{(l+1)},\beta^{(l+1)},\lambda^{(l+1)}\right)^{T}=\left(\alpha^{(l)},\beta^{(l)},\lambda^{(l)}\right)^{T}+I^{-1}(\alpha^{(l)},\beta^{(l)},\lambda^{(l)})\left(\frac{\partial\ell}{\partial\alpha},\frac{\partial\ell}{\partial\beta},\frac{\partial\ell}{\partial\lambda}\right)^{T}\Bigg|_{\alpha=\alpha^{(l)},\,\beta=\beta^{(l)},\,\lambda=\lambda^{(l)}},$$
where $(\cdot)^{T}$ denotes the transpose of a vector and $I^{-1}(\alpha^{(l)},\beta^{(l)},\lambda^{(l)})$ symbolizes the inverse of the matrix $I(\alpha^{(l)},\beta^{(l)},\lambda^{(l)})$.
S4.
By setting $l=l+1$, the MLEs of the parameters (denoted by $\hat\alpha$, $\hat\beta$, and $\hat\lambda$) can be obtained by repeatedly executing steps S2 and S3 until
$$\left\|\left(\alpha^{(l+1)},\beta^{(l+1)},\lambda^{(l+1)}\right)^{T}-\left(\alpha^{(l)},\beta^{(l)},\lambda^{(l)}\right)^{T}\right\|<\delta,$$
where $\delta$ is a predetermined threshold value.
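The iteration S1–S4 can be sketched numerically. The snippet below is our own illustration (not the paper's code): it evaluates the log-likelihood of Equation (8) and, for brevity, replaces the analytic score and observed information of Equations (9)–(18) with central finite differences; `x`, `scheme`, and `k` stand for hypothetical PFFC data.

```python
import numpy as np

def loglik(theta, x, scheme, k):
    """Log-likelihood of Equation (8) for PFFC data x with plan scheme."""
    a, b, lam = theta
    if min(a, b, lam) <= 0:
        return -np.inf                      # outside the parameter space
    x = np.asarray(x, float)
    u = b * x ** (-a)
    r1 = np.asarray(scheme) + 1.0
    return (len(x) * np.log(a * b * lam)
            - (a + 1) * np.log(x).sum()
            - u.sum()
            - 2 * np.log1p(-np.exp(-u)).sum()
            - lam * k * (r1 / np.expm1(u)).sum())

def grad_hess(f, theta, h=1e-4):
    """Central-difference gradient and Hessian of a scalar function f."""
    p = len(theta)
    g, H = np.zeros(p), np.zeros((p, p))
    E = np.eye(p) * h
    for i in range(p):
        g[i] = (f(theta + E[i]) - f(theta - E[i])) / (2 * h)
        for j in range(p):
            H[i, j] = (f(theta + E[i] + E[j]) - f(theta + E[i] - E[j])
                       - f(theta - E[i] + E[j]) + f(theta - E[i] - E[j])) / (4 * h * h)
    return g, H

def newton_raphson(x, scheme, k, theta0, tol=1e-6, max_iter=100):
    """Steps S1-S4: iterate theta <- theta - H^{-1} grad until the change
    in the parameter vector falls below tol (criterion S4)."""
    theta = np.asarray(theta0, float)
    f = lambda t: loglik(t, x, scheme, k)
    for _ in range(max_iter):
        g, H = grad_hess(f, theta)
        step = np.linalg.solve(H, g)
        theta_new = theta - step
        while theta_new.min() <= 0:         # keep the iterate in (0, inf)^3
            step = step / 2
            theta_new = theta - step
        if np.linalg.norm(theta_new - theta) < tol:
            return theta_new
        theta = theta_new
    return theta
```

In practice the closed-form derivatives are cheaper and more accurate; the finite-difference version is shown only to keep the sketch short.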
In optimization methods like the NR or Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm, the convergence criteria are critical for deciding when to stop iterations. Two commonly used convergence criteria are as follows.
(a)
Tolerance level for stopping: This is a threshold value that determines how close the estimated parameters need to be to the optimal solution before the algorithm halts. The tolerance can apply to either of the following.
  • Gradient norm tolerance: The algorithm stops when the norm (magnitude) of the gradient of the objective function falls below a specified small value, indicating that the function's slope is almost flat and a minimum has been reached. Typical values for the gradient norm tolerance are in the range of $10^{-5}$ to $10^{-8}$.
  • Parameter change tolerance: The algorithm also stops when consecutive iterations yield parameter estimates that differ by less than a certain tolerance. This indicates that the estimates are no longer changing significantly and have likely converged. Common values for the parameter change tolerance are similar, often set around $10^{-5}$ to $10^{-8}$.
(b)
BFGS method convergence: The BFGS method, a quasi-Newton method, uses an approximation of the Hessian matrix of second derivatives that is updated iteratively, so each iteration brings the approximation closer to the true Hessian. The stopping tolerance for BFGS is typically based on the following.
  • Gradient norm: As with Newton–Raphson, the gradient norm is checked to see if it has fallen below the tolerance.
  • Objective function convergence: The algorithm also stops if changes in the objective function value (e.g., likelihood or error) between iterations fall below a set threshold, such as $10^{-6}$ or smaller. This threshold is often used in conjunction with the gradient tolerance to confirm that the method has reached an optimal point.
Choosing these tolerance levels involves a trade-off between accuracy and computational cost. Stricter (lower) tolerance levels generally yield more accurate solutions but require more iterations and computation time. Conversely, higher tolerance levels yield faster convergence but may stop before reaching a true minimum. For many statistical estimation tasks, tolerance values around $10^{-5}$ to $10^{-8}$ for both the gradient norm and objective function convergence are common in practice.
By using the obtained point estimates and the invariance property of MLEs, we can calculate the estimates for the survival, hazard rate, and inverse hazard rate functions based on Equations (3)–(5), which are detailed as follows:
$$\hat S(t;\hat\alpha,\hat\beta,\hat\lambda)=\exp\left\{-\hat\lambda\left[\exp(\hat\beta t^{-\hat\alpha})-1\right]^{-1}\right\},\quad t>0,$$
$$\hat h(t;\hat\alpha,\hat\beta,\hat\lambda)=\hat\alpha\hat\beta\hat\lambda t^{-(\hat\alpha+1)}\exp(-\hat\beta t^{-\hat\alpha})\left[1-\exp(-\hat\beta t^{-\hat\alpha})\right]^{-2},\quad t>0,$$
and
$$\hat r(t;\hat\alpha,\hat\beta,\hat\lambda)=\frac{\hat\alpha\hat\beta\hat\lambda t^{-(\hat\alpha+1)}\exp(-\hat\beta t^{-\hat\alpha})\left[1-\exp(-\hat\beta t^{-\hat\alpha})\right]^{-2}\exp\left\{-\hat\lambda\left[\exp(\hat\beta t^{-\hat\alpha})-1\right]^{-1}\right\}}{1-\exp\left\{-\hat\lambda\left[\exp(\hat\beta t^{-\hat\alpha})-1\right]^{-1}\right\}},\quad t>0.$$

2.1. Existence and Uniqueness of MLEs

The existence and uniqueness of MLEs are fundamental aspects to consider in statistical inference. In this subsection, we discuss the necessary and sufficient conditions for the existence of the ML estimators for arbitrary PFFC data under OGE-IWD. To this end, we examine the behavior of Equations (9)–(11) on the positive real line ( 0 , ) .
For (9), as $\alpha\to0^{+}$ we have $\lim_{\alpha\to0^{+}}\partial\ell/\partial\alpha=+\infty$, whereas as $\alpha\to+\infty$ we have $\lim_{\alpha\to+\infty}\partial\ell/\partial\alpha=-\sum_{i=1}^{m}\log x_i<0$, provided $x_i>1$, $i=1,2,\ldots,m$.
Similarly, for (10), as $\beta\to0^{+}$ we have $\lim_{\beta\to0^{+}}\partial\ell/\partial\beta=+\infty$, whereas as $\beta\to+\infty$ we have $\lim_{\beta\to+\infty}\partial\ell/\partial\beta=-\sum_{i=1}^{m}x_i^{-\alpha}<0$, for $x_i>0$ and $\alpha>0$, $i=1,2,\ldots,m$.
Similarly, for (11), as $\lambda\to0^{+}$ we have $\lim_{\lambda\to0^{+}}\partial\ell/\partial\lambda=+\infty$, whereas as $\lambda\to+\infty$ we have $\lim_{\lambda\to+\infty}\partial\ell/\partial\lambda=-k\sum_{i=1}^{m}(R_i+1)\left[\exp(\beta x_i^{-\alpha})-1\right]^{-1}<0$, for $x_i>0$ and $\alpha,\beta>0$, $i=1,2,\ldots,m$.
Therefore, on $(0,\infty)$, for each component of $\Omega=(\alpha,\beta,\lambda)$ there exists at least one positive root of $\partial\ell/\partial\Omega=0$. Additionally, the second partial derivatives of $\ell$ with respect to $\alpha$, $\beta$, and $\lambda$ are always negative, so Equations (9)–(11) have a unique solution, and this solution represents the MLEs of $\alpha$, $\beta$, and $\lambda$. Consequently, each score function is continuous on $(0,\infty)$ and decreases monotonically from $+\infty$ to negative values. This demonstrates the existence and uniqueness of the MLEs of $\alpha$, $\beta$, and $\lambda$.

2.2. Approximate Confidence Intervals

Approximate confidence intervals (ACIs) for a parameter can be constructed using the observed Fisher information matrix, which quantifies the amount of information that an observable random variable carries about an unknown parameter. To derive these intervals, one typically calculates the observed Fisher information matrix (FIM), which is the negative of the second-derivative matrix of the log-likelihood function with respect to the parameters. This matrix is then inverted to estimate the variance–covariance matrix of the parameter estimates. Assuming the estimates are approximately normally distributed, the square roots of the diagonal elements of this variance–covariance matrix provide the standard errors of the parameter estimates. These standard errors can be used to construct ACIs, typically by assuming a normal distribution and using a critical value from the standard normal distribution to determine the range within which the true parameter value is likely to lie with a specified level of confidence. This method relies on the asymptotic properties of MLEs, which suggest that for large sample sizes the parameter estimates are approximately normally distributed and the FIM provides a good approximation of their variability. In this subsection, we derive the ACIs for the parameter vector $\Omega=(\alpha,\beta,\lambda)$, as well as for $S(t)$, $h(t)$, and $r(t)$, by using the asymptotic normality of the MLEs. This method relies on the fact that, under regularity conditions, the MLEs $\hat\Omega=(\hat\alpha,\hat\beta,\hat\lambda)$ are approximately normally distributed with the true parameters $\Omega$ as their means and the inverse of the FIM at the MLEs as their variance–covariance matrix; equivalently,
$$(\hat{\alpha}-\alpha,\ \hat{\beta}-\beta,\ \hat{\lambda}-\lambda)\sim N\big(0,\ I^{-1}(\hat{\alpha},\hat{\beta},\hat{\lambda})\big),$$
where, from (12),
$$I^{-1}(\hat{\alpha},\hat{\beta},\hat{\lambda})=\begin{pmatrix}\ell_{11} & \ell_{12} & \ell_{13}\\ \ell_{21} & \ell_{22} & \ell_{23}\\ \ell_{31} & \ell_{32} & \ell_{33}\end{pmatrix}^{-1}_{\Omega=\hat{\Omega}}=\begin{pmatrix}\mathrm{Var}(\hat{\alpha}) & \mathrm{Cov}(\hat{\alpha},\hat{\beta}) & \mathrm{Cov}(\hat{\alpha},\hat{\lambda})\\ \mathrm{Cov}(\hat{\beta},\hat{\alpha}) & \mathrm{Var}(\hat{\beta}) & \mathrm{Cov}(\hat{\beta},\hat{\lambda})\\ \mathrm{Cov}(\hat{\lambda},\hat{\alpha}) & \mathrm{Cov}(\hat{\lambda},\hat{\beta}) & \mathrm{Var}(\hat{\lambda})\end{pmatrix},$$
is the inverse of the observed FIM. Moreover, the quantities $\ell_{ij}$, $i,j=1,2,3$, are specified by Equations (13)–(18). Hence, the $100(1-\gamma)\%$ ACIs for the parameters α, β, and λ are established by
$$\hat{\alpha}\pm Z_{\gamma/2}\sqrt{\mathrm{Var}(\hat{\alpha})},\qquad \hat{\beta}\pm Z_{\gamma/2}\sqrt{\mathrm{Var}(\hat{\beta})},\qquad \hat{\lambda}\pm Z_{\gamma/2}\sqrt{\mathrm{Var}(\hat{\lambda})},$$
where $Z_{\gamma/2}$ is the upper $\gamma/2$ quantile of the standard normal distribution $N(0,1)$.
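As a small illustration, the interval construction above can be sketched in a few lines. The MLE vector and the information-matrix entries below are illustrative placeholders, not values computed from the paper's data:

```python
import numpy as np
from scipy.stats import norm

def approx_cis(mle, observed_info, gamma=0.05):
    """Normal-approximation CIs from the observed Fisher information matrix.

    mle           : array of MLEs (alpha_hat, beta_hat, lambda_hat)
    observed_info : 3x3 observed information (negative Hessian of log-likelihood at the MLE)
    """
    cov = np.linalg.inv(observed_info)   # approximate variance-covariance matrix
    se = np.sqrt(np.diag(cov))           # standard errors of the estimates
    z = norm.ppf(1.0 - gamma / 2.0)      # upper gamma/2 quantile of N(0,1)
    return np.column_stack([mle - z * se, mle + z * se]), cov

# illustrative (made-up) MLEs and information matrix
mle = np.array([1.75, 0.50, 1.00])
info = np.array([[40.0,  5.0,  2.0],
                 [ 5.0, 60.0,  3.0],
                 [ 2.0,  3.0, 25.0]])
cis, cov = approx_cis(mle, info)
```

Each row of `cis` is the interval for one parameter; the off-diagonal entries of `cov` are also needed later for the delta method.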

2.3. Delta Method

The Delta method (DM) is a statistical technique used to approximate confidence intervals for complex functions of parameters, such as the survival, hazard rate, and inverse hazard rate functions. For such functions, direct estimation of confidence intervals can be challenging due to their non-linear nature. The DM simplifies this by utilizing a first-order Taylor expansion around the estimated parameters: it approximates the variance of the function of interest by linearizing the function using the gradient (or Jacobian) of the function with respect to the parameters. This approach relies on the assumption that the parameter estimates are approximately normally distributed; see Greene [38]. In applying this method, the approximate variances of S(t), h(t), and r(t) are computed by the following steps:
S1.
Define Φ 1 , Φ 2 , and Φ 3 as three quantities with the following specific forms:
$$\Phi_1=\left(\frac{\partial S(t)}{\partial\alpha},\frac{\partial S(t)}{\partial\beta},\frac{\partial S(t)}{\partial\lambda}\right),\quad \Phi_2=\left(\frac{\partial h(t)}{\partial\alpha},\frac{\partial h(t)}{\partial\beta},\frac{\partial h(t)}{\partial\lambda}\right),\quad \Phi_3=\left(\frac{\partial r(t)}{\partial\alpha},\frac{\partial r(t)}{\partial\beta},\frac{\partial r(t)}{\partial\lambda}\right),$$
where according to Equations (3), (4), and (5),
$$\frac{\partial S(t)}{\partial\alpha}=-\beta\lambda t^{-\alpha}\log t\,\frac{\exp\!\big(\beta t^{-\alpha}-\lambda[\exp(\beta t^{-\alpha})-1]^{-1}\big)}{[\exp(\beta t^{-\alpha})-1]^{2}},$$
$$\frac{\partial S(t)}{\partial\beta}=\lambda t^{-\alpha}\,\frac{\exp\!\big(\beta t^{-\alpha}-\lambda[\exp(\beta t^{-\alpha})-1]^{-1}\big)}{[\exp(\beta t^{-\alpha})-1]^{2}},$$
$$\frac{\partial S(t)}{\partial\lambda}=-\frac{\exp\!\big(-\lambda[\exp(\beta t^{-\alpha})-1]^{-1}\big)}{\exp(\beta t^{-\alpha})-1},$$
$$\frac{\partial h(t)}{\partial\alpha}=\frac{1}{4}\beta\lambda t^{-2\alpha-1}\operatorname{csch}^{2}\!\Big(\frac{\beta t^{-\alpha}}{2}\Big)\Big[\alpha\beta\coth\!\Big(\frac{\beta t^{-\alpha}}{2}\Big)\log t+t^{\alpha}(1-\alpha\log t)\Big],$$
$$\frac{\partial h(t)}{\partial\beta}=\frac{1}{4}\alpha\lambda t^{-2\alpha-1}\Big[t^{\alpha}-\beta\coth\!\Big(\frac{\beta t^{-\alpha}}{2}\Big)\Big]\operatorname{csch}^{2}\!\Big(\frac{\beta t^{-\alpha}}{2}\Big),$$
$$\frac{\partial h(t)}{\partial\lambda}=\frac{1}{4}\alpha\beta t^{-\alpha-1}\operatorname{csch}^{2}\!\Big(\frac{\beta t^{-\alpha}}{2}\Big),$$
$$\frac{\partial r(t)}{\partial\alpha}=-\beta\lambda t^{-2\alpha-1}\exp(\beta t^{-\alpha})\,\frac{\alpha\beta\,\Lambda(t;\alpha,\beta,\lambda)\log t+t^{\alpha}[\exp(\beta t^{-\alpha})-1]^{2}D\,(\alpha\log t-1)}{[\exp(\beta t^{-\alpha})-1]^{4}D^{2}},$$
$$\frac{\partial r(t)}{\partial\beta}=\alpha\lambda t^{-2\alpha-1}\exp(\beta t^{-\alpha})\left[\frac{t^{\alpha}}{[\exp(\beta t^{-\alpha})-1]^{2}D}+\frac{\beta\,\Lambda(t;\alpha,\beta,\lambda)}{[\exp(\beta t^{-\alpha})-1]^{4}D^{2}}\right],$$
and
$$\frac{\partial r(t)}{\partial\lambda}=\alpha\beta t^{-\alpha-1}\exp(\beta t^{-\alpha})\,\frac{\big[\exp(\beta t^{-\alpha})-\lambda-1\big]\exp\!\big(\lambda[\exp(\beta t^{-\alpha})-1]^{-1}\big)-\exp(\beta t^{-\alpha})+1}{[\exp(\beta t^{-\alpha})-1]^{3}D^{2}},$$
where
$$\Lambda(t;\alpha,\beta,\lambda)=\exp(2\beta t^{-\alpha})-\exp\!\big(\lambda[\exp(\beta t^{-\alpha})-1]^{-1}\big)\big[\exp(2\beta t^{-\alpha})-\lambda\exp(\beta t^{-\alpha})-1\big]-1$$
and
$$D=\exp\!\big(\lambda[\exp(\beta t^{-\alpha})-1]^{-1}\big)-1.$$
S2.
Utilize the formulas provided to calculate the approximate variances of S(t), h(t), and r(t):
$$\mathrm{Var}\big(\hat{S}(t)\big)\simeq\Big[\Phi_1^{T}\,\mathrm{Var}(\Omega)\,\Phi_1\Big]_{\Omega=\hat{\Omega}},\quad \mathrm{Var}\big(\hat{h}(t)\big)\simeq\Big[\Phi_2^{T}\,\mathrm{Var}(\Omega)\,\Phi_2\Big]_{\Omega=\hat{\Omega}},\quad \mathrm{Var}\big(\hat{r}(t)\big)\simeq\Big[\Phi_3^{T}\,\mathrm{Var}(\Omega)\,\Phi_3\Big]_{\Omega=\hat{\Omega}},$$
where $\mathrm{Var}(\hat{\Omega})$ is the variance–covariance matrix derived from Equation (23) for $\hat{\Omega}=(\hat{\alpha},\hat{\beta},\hat{\lambda})$.
S3.
Compute the $100(1-\gamma)\%$ ACIs for S(t), h(t), and r(t) by applying the following formula:
$$\hat{S}(t)\pm Z_{\gamma/2}\sqrt{\mathrm{Var}\big(\hat{S}(t)\big)},\qquad \hat{h}(t)\pm Z_{\gamma/2}\sqrt{\mathrm{Var}\big(\hat{h}(t)\big)},\qquad \hat{r}(t)\pm Z_{\gamma/2}\sqrt{\mathrm{Var}\big(\hat{r}(t)\big)}.$$
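A numerical version of steps S1–S3 for S(t) is sketched below. Rather than coding the closed-form partial derivatives, the gradient Φ₁ is approximated by central differences; the variance–covariance matrix used here is an illustrative placeholder, while the parameter values and the time point t = 0.3 follow the simulation design used later in the paper:

```python
import numpy as np
from scipy.stats import norm

def surv(t, alpha, beta, lam):
    """OGE-IWD survival function: S(t) = exp(-lambda / (exp(beta * t^-alpha) - 1))."""
    E = np.exp(beta * t ** (-alpha)) - 1.0
    return np.exp(-lam / E)

def delta_ci_surv(t, mle, cov, gamma=0.05, eps=1e-6):
    """Delta-method ACI for S(t): Var(S_hat) ~ Phi1 cov Phi1^T, Phi1 by central differences."""
    grad = np.zeros(3)
    for j in range(3):
        up, lo = np.array(mle, float), np.array(mle, float)
        up[j] += eps
        lo[j] -= eps
        grad[j] = (surv(t, *up) - surv(t, *lo)) / (2.0 * eps)
    var = grad @ cov @ grad                  # linearized variance of S_hat(t)
    s_hat = surv(t, *mle)
    z = norm.ppf(1.0 - gamma / 2.0)
    return s_hat - z * np.sqrt(var), s_hat + z * np.sqrt(var)

mle = (1.75, 0.5, 1.0)                        # simulation-design parameter values
cov = np.diag([0.01, 0.004, 0.02])            # made-up variance-covariance matrix
lo, hi = delta_ci_surv(0.3, mle, cov)
```

The same helper applies verbatim to h(t) and r(t) by swapping the function being differentiated.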

2.4. Log-Normal ACIs

In cases where the lower bound of an asymptotic confidence interval falls below 0, which conflicts with the requirement that Ω > 0, the issue can be addressed using a log transformation combined with the DM. Specifically, the log-transformed MLEs satisfy $\ln\hat{\Omega}\sim N\big(\ln\Omega,\ \mathrm{Var}(\ln\hat{\Omega})\big)$, where $\mathrm{Var}(\ln\hat{\Omega})=\mathrm{Var}(\hat{\Omega})/\hat{\Omega}^{2}$; see Meeker and Escobar [39]. Consequently, the asymptotic confidence intervals at the $100(1-\gamma)\%$ level for the log-transformed vector of parameters Ω = (α, β, λ) are computed as follows:
$$\left(\hat{\Omega}\exp\!\Big[-Z_{\gamma/2}\sqrt{\mathrm{Var}(\hat{\Omega})}/\hat{\Omega}\Big],\ \hat{\Omega}\exp\!\Big[Z_{\gamma/2}\sqrt{\mathrm{Var}(\hat{\Omega})}/\hat{\Omega}\Big]\right).$$
In the same manner, the $100(1-\gamma)\%$ log-transformed approximate confidence intervals (LACIs) for S(t), h(t), or r(t) can be expressed as follows:
$$\left(\hat{S}(t)\exp\!\Big[-Z_{\gamma/2}\sqrt{\mathrm{Var}(\hat{S}(t))}/\hat{S}(t)\Big],\ \hat{S}(t)\exp\!\Big[Z_{\gamma/2}\sqrt{\mathrm{Var}(\hat{S}(t))}/\hat{S}(t)\Big]\right),$$
$$\left(\hat{h}(t)\exp\!\Big[-Z_{\gamma/2}\sqrt{\mathrm{Var}(\hat{h}(t))}/\hat{h}(t)\Big],\ \hat{h}(t)\exp\!\Big[Z_{\gamma/2}\sqrt{\mathrm{Var}(\hat{h}(t))}/\hat{h}(t)\Big]\right),$$
and
$$\left(\hat{r}(t)\exp\!\Big[-Z_{\gamma/2}\sqrt{\mathrm{Var}(\hat{r}(t))}/\hat{r}(t)\Big],\ \hat{r}(t)\exp\!\Big[Z_{\gamma/2}\sqrt{\mathrm{Var}(\hat{r}(t))}/\hat{r}(t)\Big]\right).$$
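The log transformation is easy to layer on top of any estimate/variance pair. A small helper, under the same normality assumption, with made-up numbers chosen so that the untransformed interval would dip below zero:

```python
import numpy as np
from scipy.stats import norm

def log_aci(est, var, gamma=0.05):
    """100(1-gamma)% log-transformed ACI for a positive quantity.
    The multiplicative form guarantees a strictly positive lower bound."""
    z = norm.ppf(1.0 - gamma / 2.0)
    factor = np.exp(z * np.sqrt(var) / est)
    return est / factor, est * factor

# example: est = 0.3 with Var = 0.09 gives a plain ACI of 0.3 +/- 1.96*0.3,
# whose lower bound is negative; the log-transformed interval stays positive
lo, hi = log_aci(0.3, 0.09)
```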

3. Bayesian Estimation

Bayes estimation, grounded in Bayes’ theorem, is a powerful statistical approach that updates the probability estimate for a hypothesis as additional evidence is acquired. At its core, Bayes’ theorem combines prior distributions with observed data to produce a posterior distribution, which represents a revised probability of the hypothesis given the new evidence. The prior distribution reflects the initial beliefs or knowledge about the parameters before observing the data, while the likelihood function represents the probability of the observed data given the parameters. By integrating these components, Bayes estimation allows for a dynamic adjustment of parameter estimates based on both prior knowledge and current data, thus providing a coherent framework for incorporating uncertainty and updating beliefs as more information becomes available. This methodology is particularly useful in scenarios where data are limited or uncertain, as it enables the incorporation of prior information to make more informed and probabilistic estimates.
The literature on Bayesian parameters estimation is rich and diverse, reflecting advancements across various fields and methodologies. Santos et al. [40] provided a comprehensive exploration of Bayesian estimation techniques applied to decay parameters within Hawkes processes, showcasing innovative approaches in modeling temporal events. In a complementary vein, Han [41] offered a multifaceted examination of hierarchical Bayesian estimation and presented a range of perspectives that deepen the understanding of parameter estimation complexities in different contexts. Vaglio et al. [42] extended Bayesian techniques to astrophysical phenomena, specifically in the parameter estimation of boson–star binary signals, incorporating advanced models like coherent inspiral templates and spin-dependent quadrupolar corrections. Meanwhile, Bangsgaard et al. [43] addressed Bayesian parameter estimation in the realm of medical science, focusing on phosphate dynamics during hemodialysis. Their work underscores the versatility of Bayesian methods in handling complex biological systems and highlights the method’s applicability beyond traditional statistical fields. Collectively, these studies illustrate the broad applicability and continuous evolution of Bayesian estimation methods across diverse scientific disciplines. In this section, we introduce the posterior density function for the OGE-IWD using samples from the PFFC model. We then employ Bayesian inference techniques to derive estimates for parameters, Ω = α , β , λ as well as S t , h t , and r t , considering both symmetric and asymmetric loss functions.

3.1. Prior and Posterior Distributions

In Bayesian estimation, the initial step involves assessing how the prior distribution affects the estimation. To do this, we use a gamma prior distribution in our analysis. The gamma distribution is often used as a prior for parameters that are positive and continuous, particularly when modeling rate parameters in Poisson processes or the precision of normal distributions. It is characterized by two parameters, shape and scale, which together control the distribution's form. It is particularly convenient because it is conjugate to exponential-family likelihoods: when used as a prior in such models, the posterior distribution is again gamma-distributed. This conjugacy simplifies the process of updating the prior with new data, leading to straightforward analytical solutions. Additionally, the gamma distribution's flexibility in adjusting its shape and scale makes it a versatile tool in Bayesian inference, allowing practitioners to encode various degrees of prior belief about a parameter's likely values. This combination of computational convenience and flexibility in modeling prior knowledge makes the gamma distribution a preferred choice in Bayesian analysis. In this scenario, it is assumed that the parameters α, β, and λ are mutually independent and that each follows a gamma distribution:
$$\pi_1(\alpha)\propto\alpha^{a_1-1}\exp(-b_1\alpha),\quad \pi_2(\beta)\propto\beta^{a_2-1}\exp(-b_2\beta),\quad \pi_3(\lambda)\propto\lambda^{a_3-1}\exp(-b_3\lambda);\qquad \alpha,\beta,\lambda>0.$$
In this context, each hyper-parameter $a_i, b_i > 0$ for $i = 1, 2, 3$ is assumed to be known and positive. Based on this assumption, a specific prior distribution is proposed. As a result, the joint prior distribution of (α, β, λ) is defined as follows:
$$\pi(\alpha,\beta,\lambda)\propto\alpha^{a_1-1}\beta^{a_2-1}\lambda^{a_3-1}\exp(-b_1\alpha-b_2\beta-b_3\lambda).$$
By integrating the prior information detailed in Equation (32) with the likelihood function outlined in Equation (7), Bayes’ theorem enables us to formulate the joint posterior distribution as follows:
$$\begin{aligned}\pi^{*}(\alpha,\beta,\lambda\,|\,\underline{x})&=\frac{L\times\pi(\alpha,\beta,\lambda)}{\int_{0}^{\infty}\!\int_{0}^{\infty}\!\int_{0}^{\infty}L\times\pi(\alpha,\beta,\lambda)\,d\alpha\,d\beta\,d\lambda}\\[2pt]&\propto\alpha^{m+a_1-1}\beta^{m+a_2-1}\lambda^{m+a_3-1}\exp(-b_1\alpha-b_2\beta-b_3\lambda)\\[2pt]&\quad\times\prod_{i=1}^{m}x_i^{-(\alpha+1)}\exp\!\Big(\beta x_i^{-\alpha}-k\lambda(R_i+1)\big[\exp(\beta x_i^{-\alpha})-1\big]^{-1}\Big)\big[\exp(\beta x_i^{-\alpha})-1\big]^{-2}.\end{aligned}$$
The Bayes estimate for any function of α , β , and λ , denoted as η ( α , β , λ ) , under the squared error (SE), linear exponential (LINEX), and general entropy (GE) loss functions, is given by the following expressions (see Varian [44] Calabria and Pulcini [45]):
$$\hat{\eta}_{SE}=\frac{\int_{0}^{\infty}\!\int_{0}^{\infty}\!\int_{0}^{\infty}\eta(\alpha,\beta,\lambda)\,\pi^{*}(\alpha,\beta,\lambda\,|\,\underline{x})\,d\alpha\,d\beta\,d\lambda}{\int_{0}^{\infty}\!\int_{0}^{\infty}\!\int_{0}^{\infty}\pi^{*}(\alpha,\beta,\lambda\,|\,\underline{x})\,d\alpha\,d\beta\,d\lambda},$$
$$\hat{\eta}_{LINEX}=-\frac{1}{\omega}\log\!\left[\frac{\int_{0}^{\infty}\!\int_{0}^{\infty}\!\int_{0}^{\infty}\exp\!\big(-\omega\,\eta(\alpha,\beta,\lambda)\big)\,\pi^{*}(\alpha,\beta,\lambda\,|\,\underline{x})\,d\alpha\,d\beta\,d\lambda}{\int_{0}^{\infty}\!\int_{0}^{\infty}\!\int_{0}^{\infty}\pi^{*}(\alpha,\beta,\lambda\,|\,\underline{x})\,d\alpha\,d\beta\,d\lambda}\right],$$
and
$$\hat{\eta}_{GE}=\left[\frac{\int_{0}^{\infty}\!\int_{0}^{\infty}\!\int_{0}^{\infty}\eta(\alpha,\beta,\lambda)^{-\nu}\,\pi^{*}(\alpha,\beta,\lambda\,|\,\underline{x})\,d\alpha\,d\beta\,d\lambda}{\int_{0}^{\infty}\!\int_{0}^{\infty}\!\int_{0}^{\infty}\pi^{*}(\alpha,\beta,\lambda\,|\,\underline{x})\,d\alpha\,d\beta\,d\lambda}\right]^{-1/\nu}.$$
Since the Bayes estimators cannot be explicitly derived for the SE, LINEX, or GE loss functions, it is necessary to solve them numerically. Therefore, we suggest using the Markov-chain Monte Carlo (MCMC) technique to compute the Bayes estimators for α , β , and λ .
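Once posterior draws are available (Section 3.2), the three estimators reduce to Monte Carlo averages over the sampled values of η. A minimal sketch follows; the stand-in "posterior" sample and the choices ω = 2, ν = 0.5 are purely illustrative:

```python
import numpy as np

def bayes_point_estimates(draws, omega=2.0, nu=0.5):
    """SE, LINEX and GE point estimates of eta from posterior draws
    (Monte Carlo versions of the ratio-of-integrals expressions above)."""
    draws = np.asarray(draws, dtype=float)
    eta_se = draws.mean()                                         # posterior mean
    eta_linex = -np.log(np.mean(np.exp(-omega * draws))) / omega  # LINEX loss
    eta_ge = np.mean(draws ** (-nu)) ** (-1.0 / nu)               # general entropy loss
    return eta_se, eta_linex, eta_ge

rng = np.random.default_rng(0)
draws = rng.gamma(shape=4.0, scale=0.5, size=5000)  # stand-in posterior sample
se, linex, ge = bayes_point_estimates(draws)
```

For ω > 0 and ν > 0 both asymmetric estimates sit below the posterior mean (Jensen's inequality), which is exactly the asymmetry these losses are designed to express.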

3.2. MCMC Techniques

MCMC techniques are essential in Bayesian estimation for approximating complex posterior distributions. At the heart of these techniques is the idea of constructing a Markov chain that converges to the desired distribution, allowing us to estimate it by sampling. Key types include Metropolis–Hastings (MH), which proposes new states from a proposal distribution and accepts them with a probability that ensures convergence, and Gibbs sampling (GS), which updates one variable at a time while keeping the others fixed and is suitable for models with tractable conditional distributions. Both techniques showcase MCMC's flexibility in handling various problem structures and data types, as detailed in the studies by Geman and Geman [46], Metropolis et al. [47], and Hastings [48]. Additional advancements encompass Hamiltonian Monte Carlo (HMC), which uses gradient information to propose samples and is effective in high-dimensional spaces. Lastly, importance sampling (IMS) is a technique used to estimate the properties of a target distribution while sampling from a different one: samples are drawn from a proposal distribution, and each sample receives a weight equal to the ratio of the target density to the proposal density. These methods collectively provide powerful tools for exploring Bayesian models and gaining insight into parameter estimates. To begin, the full conditional posterior distributions of the parameters α, β, and λ can be expressed in the following way:
$$\pi_1^{*}(\alpha\,|\,\beta,\lambda,\underline{x})\propto\alpha^{m+a_1-1}\exp(-b_1\alpha)\prod_{i=1}^{m}x_i^{-\alpha}\exp\!\Big(\beta x_i^{-\alpha}-k\lambda(R_i+1)\big[\exp(\beta x_i^{-\alpha})-1\big]^{-1}\Big)\big[\exp(\beta x_i^{-\alpha})-1\big]^{-2},$$
$$\pi_2^{*}(\beta\,|\,\alpha,\lambda,\underline{x})\propto\beta^{m+a_2-1}\exp(-b_2\beta)\prod_{i=1}^{m}\exp\!\Big(\beta x_i^{-\alpha}-k\lambda(R_i+1)\big[\exp(\beta x_i^{-\alpha})-1\big]^{-1}\Big)\big[\exp(\beta x_i^{-\alpha})-1\big]^{-2},$$
and
$$\pi_3^{*}(\lambda\,|\,\alpha,\beta,\underline{x})\propto\lambda^{m+a_3-1}\exp\!\left(-\lambda\Big[b_3+k\sum_{i=1}^{m}(R_i+1)\big[\exp(\beta x_i^{-\alpha})-1\big]^{-1}\Big]\right).$$
Equation (39) conforms to a gamma distribution, allowing for the easy sampling of λ with any gamma-generating routine. In contrast, Equations (37) and (38) do not fit standard distributions, requiring MCMC methods for sampling. The algorithm will utilize GS and the MH algorithm in sequence to produce samples from the posterior distribution by using a proposal density. The algorithm advances through the following phases:
1.
Establish the starting values $(\alpha^{(0)},\beta^{(0)},\lambda^{(0)})=(\hat{\alpha},\hat{\beta},\hat{\lambda})$ and initialize $i=1$.
2.
Generate $\lambda^{(i)}$ from the gamma distribution $\pi_3^{*}(\lambda\,|\,\alpha^{(i-1)},\beta^{(i-1)},\underline{x})$.
3.
Employ the following MH algorithm to generate $\alpha^{(i)}$ from $\pi_1^{*}(\alpha\,|\,\beta^{(i-1)},\lambda^{(i)},\underline{x})$ and $\beta^{(i)}$ from $\pi_2^{*}(\beta\,|\,\alpha^{(i)},\lambda^{(i)},\underline{x})$, using the normal proposal distributions $N(\alpha^{(i-1)},\mathrm{Var}(\hat{\alpha}))$ and $N(\beta^{(i-1)},\mathrm{Var}(\hat{\beta}))$, respectively. Execute the following tasks:
(a)
Generate a proposal α * from N ( α ( i 1 ) , V a r ( α ^ ) ) and β * from N ( β ( i 1 ) , V a r ( β ^ ) ) .
(b)
Assess the acceptance probabilities:
$$\zeta_{\alpha}=\min\!\left[1,\ \frac{\pi_1^{*}(\alpha^{*}\,|\,\beta^{(i-1)},\lambda^{(i)},\underline{x})}{\pi_1^{*}(\alpha^{(i-1)}\,|\,\beta^{(i-1)},\lambda^{(i)},\underline{x})}\right],\qquad \zeta_{\beta}=\min\!\left[1,\ \frac{\pi_2^{*}(\beta^{*}\,|\,\alpha^{(i)},\lambda^{(i)},\underline{x})}{\pi_2^{*}(\beta^{(i-1)}\,|\,\alpha^{(i)},\lambda^{(i)},\underline{x})}\right].$$
(c)
Draw values $\rho_1$ and $\rho_2$ from a uniform $(0,1)$ distribution.
(d)
If ρ 1 ζ α , accept the proposal and set α ( i ) = α * . Otherwise, retain the previous value by setting α ( i ) = α ( i 1 ) .
(e)
If ρ 2 ζ β , accept the proposal and set β ( i ) = β * . Otherwise, retain the previous value by setting β ( i ) = β ( i 1 ) .
4.
For a given t, compute the survival, hazard rate, and the inverse hazard rate functions:
$$S^{(i)}(t)=\exp\!\Big(-\lambda^{(i)}\big[\exp(\beta^{(i)}t^{-\alpha^{(i)}})-1\big]^{-1}\Big),\quad t>0,$$
$$h^{(i)}(t)=\alpha^{(i)}\beta^{(i)}\lambda^{(i)}t^{-(\alpha^{(i)}+1)}\exp(\beta^{(i)}t^{-\alpha^{(i)}})\big[\exp(\beta^{(i)}t^{-\alpha^{(i)}})-1\big]^{-2},\quad t>0,$$
and
$$r^{(i)}(t)=\alpha^{(i)}\beta^{(i)}\lambda^{(i)}t^{-(\alpha^{(i)}+1)}\exp(\beta^{(i)}t^{-\alpha^{(i)}})\big[\exp(\beta^{(i)}t^{-\alpha^{(i)}})-1\big]^{-2}\Big[\exp\!\Big(\lambda^{(i)}\big[\exp(\beta^{(i)}t^{-\alpha^{(i)}})-1\big]^{-1}\Big)-1\Big]^{-1},\quad t>0.$$
5.
Set i = i + 1 .
6.
Repeat steps (2)–(4) $M$ times to collect the required number of samples. After removing the initial $M_0$ burn-in samples, use the remaining $M-M_0$ samples to compute the Bayesian estimates.
7.
It is now feasible to compute the Bayes estimate of η = η ( α , β , λ ) for the SE, LINEX, and GE loss functions as follows:
$$\hat{\eta}_{SE}=\frac{1}{M-M_0}\sum_{i=M_0+1}^{M}\eta(\alpha^{(i)},\beta^{(i)},\lambda^{(i)}),$$
$$\hat{\eta}_{LINEX}=-\frac{1}{\omega}\log\!\left[\frac{1}{M-M_0}\sum_{i=M_0+1}^{M}\exp\!\big(-\omega\,\eta(\alpha^{(i)},\beta^{(i)},\lambda^{(i)})\big)\right],$$
and
$$\hat{\eta}_{GE}=\left[\frac{1}{M-M_0}\sum_{i=M_0+1}^{M}\eta(\alpha^{(i)},\beta^{(i)},\lambda^{(i)})^{-\nu}\right]^{-1/\nu},$$
where $\eta=\eta(\alpha,\beta,\lambda)$ represents any of α, β, λ, S(t), h(t) and r(t).
8.
Arrange $\eta^{(M_0+1)},\eta^{(M_0+2)},\ldots,\eta^{(M)}$ in ascending order as $\eta_{(1)}<\eta_{(2)}<\cdots<\eta_{(M-M_0)}$. Accordingly,
$$\Big(\eta_{\left((M-M_0)\gamma/2\right)},\ \eta_{\left((M-M_0)(1-\gamma/2)\right)}\Big)$$
provides the $100(1-\gamma)\%$ Bayesian CRI for η.
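The sampler described in steps 1–6 can be sketched compactly as follows, working on the log scale for numerical stability. The toy data, group size k, proposal standard deviations, and starting values below are illustrative only; in practice the proposal variances would be set to Var(α̂) and Var(β̂) as in step 3:

```python
import numpy as np

rng = np.random.default_rng(42)

def log_cond_alpha(a, b, lam, x, R, k, a1, b1):
    """log pi_1*(alpha | beta, lambda, data), up to an additive constant."""
    if a <= 0:
        return -np.inf
    E = np.exp(b * x ** (-a)) - 1.0
    return ((len(x) + a1 - 1) * np.log(a) - b1 * a
            + np.sum(-a * np.log(x) + b * x ** (-a)
                     - k * lam * (R + 1) / E - 2.0 * np.log(E)))

def log_cond_beta(b, a, lam, x, R, k, a2, b2):
    """log pi_2*(beta | alpha, lambda, data), up to an additive constant."""
    if b <= 0:
        return -np.inf
    E = np.exp(b * x ** (-a)) - 1.0
    return ((len(x) + a2 - 1) * np.log(b) - b2 * b
            + np.sum(b * x ** (-a) - k * lam * (R + 1) / E - 2.0 * np.log(E)))

def mh_within_gibbs(x, R, k, M=3000, M0=500, hyper=(1, 2, 2, 3, 2, 2),
                    start=(1.0, 1.0, 1.0), sd=(0.1, 0.1)):
    a1, b1, a2, b2, a3, b3 = hyper
    a, b, lam = start
    chain = []
    for _ in range(M):
        # step 2: lambda has a gamma full conditional
        E = np.exp(b * x ** (-a)) - 1.0
        lam = rng.gamma(len(x) + a3, 1.0 / (b3 + k * np.sum((R + 1) / E)))
        # step 3: normal random-walk MH update for alpha, then beta
        prop = rng.normal(a, sd[0])
        if np.log(rng.uniform()) < (log_cond_alpha(prop, b, lam, x, R, k, a1, b1)
                                    - log_cond_alpha(a, b, lam, x, R, k, a1, b1)):
            a = prop
        prop = rng.normal(b, sd[1])
        if np.log(rng.uniform()) < (log_cond_beta(prop, a, lam, x, R, k, a2, b2)
                                    - log_cond_beta(b, a, lam, x, R, k, a2, b2)):
            b = prop
        chain.append((a, b, lam))
    return np.array(chain[M0:])  # steps 5-6: drop the burn-in

# toy first-failure times and removals, purely for illustration
x = np.array([0.4, 0.7, 0.9, 1.3, 1.8, 2.5, 3.1])
R = np.array([1, 0, 0, 1, 0, 0, 1])
chain = mh_within_gibbs(x, R, k=2, M=1000, M0=200)
```

The retained draws in `chain` feed directly into the loss-function averages of step 7 and, after sorting, the CRI of step 8.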

4. Simulation Study

Simulation studies are essential for comparing and evaluating different estimation methods because they provide a versatile and controlled framework for analyzing their performance across a variety of scenarios. By simulating a wide range of conditions and generating synthetic data, researchers can test how well different estimation techniques perform when faced with known benchmarks and varying complexities. This approach allows for a detailed assessment of each method's accuracy, reliability, and computational efficiency, offering insights into their strengths and weaknesses. Consequently, simulations help identify the most effective estimation strategies, refine existing methods, and ensure that the chosen techniques are robust and applicable to real-world problems. In this section, we assess the performance of the proposed Bayes estimators in comparison with the MLEs through a Monte Carlo simulation study. The study involves testing various combinations of sample sizes (n), numbers of groups (m), and censoring schemes (R) with different values of $R_i$. We use the algorithm developed by Balakrishnan and Sandhu [49], applied to the CDF $1-(1-F(x))^{k}$, where k is the number of units within each group, to generate PFFC samples from the OGE-IWD with parameters (α, β, λ) set to (1.75, 0.5, 1). We then compute the true values of S(t), h(t) and r(t) at time t = 0.3, obtaining 0.9835, 0.4061 and 24.1811, respectively. The performance of the point estimators is evaluated using the mean square error (MSE), calculated as $\mathrm{MSE}=\frac{1}{N}\sum_{i=1}^{N}\big(\hat{\mu}_j^{(i)}-\mu_j\big)^{2}$, where N = 1000 and $\mu_j$, $j = 1, 2, \ldots, 6$, denotes α, β, λ, S(t), h(t) or r(t).
Additionally, we examine the average confidence interval widths (ACIWs)/credible interval widths (CRIWs), along with their coverage probabilities (CPs), for both the asymptotic and the highest-posterior-density interval estimates. The Bayes estimates and CRIs are derived from M = 12,000 MCMC samples, discarding the initial $M_0 = 2000$ as burn-in. Informative gamma priors are used for α, β, and λ, with hyper-parameters $a_1 = 1$, $b_1 = 2$, $a_2 = 2$, $b_2 = 3$, $a_3 = 2$ and $b_3 = 2$. Furthermore, 95% CRIs are calculated for each simulation, considering two group sizes, k = 3 and k = 5, and the following censoring schemes (CS). CSI: $R_1 = n - m$ and $R_i = 0$ for $i \neq 1$. CSII: if m is even, $R_{m/2} = R_{m/2+1} = (n-m)/2$ and $R_i = 0$ for $i \notin \{m/2,\ m/2+1\}$; if m is odd, $R_{(m+1)/2} = n - m$ and $R_i = 0$ for $i \neq (m+1)/2$. CSIII: $R_m = n - m$ and $R_i = 0$ for $i \neq m$. Thus, if (n, m) = (30, 15), then CSI = $(15, 0^{*14})$, CSII = $(0^{*7}, 15, 0^{*7})$ and CSIII = $(0^{*14}, 15)$; if (n, m) = (30, 20), then CSI = $(10, 0^{*19})$, CSII = $(0^{*9}, 5, 5, 0^{*9})$ and CSIII = $(0^{*19}, 10)$, where $0^{*5}$ denotes (0, 0, 0, 0, 0). The outcomes are detailed in Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8 and Table 9. Based on these results, several notable observations can be made:
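The sampling step of the study can be sketched as follows: progressively type-II censored uniform variates are produced with the Balakrishnan–Sandhu transformation [49] and then mapped through the quantile function of $1-(1-F(x))^{k}$, where F is the OGE-IWD CDF. Parameter values follow the simulation design above; the seed is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2024)

def ogeiw_cdf(t, alpha, beta, lam):
    """OGE-IWD CDF: F(t) = 1 - exp(-lambda / (exp(beta * t^-alpha) - 1))."""
    return 1.0 - np.exp(-lam / (np.exp(beta * t ** (-alpha)) - 1.0))

def ogeiw_quantile(u, alpha, beta, lam):
    """Inverse of the CDF above."""
    return (beta / np.log1p(lam / (-np.log1p(-u)))) ** (1.0 / alpha)

def pffc_sample(R, k, alpha, beta, lam):
    """Progressive first-failure censored sample: a PFFC sample from F is a
    progressive type-II sample from F*(x) = 1 - (1 - F(x))^k (Balakrishnan-Sandhu)."""
    m = len(R)
    W = rng.uniform(size=m)
    # exponents i + R_m + ... + R_{m-i+1} of the Balakrishnan-Sandhu transformation
    V = np.array([W[i - 1] ** (1.0 / (i + R[m - i:].sum())) for i in range(1, m + 1)])
    U = 1.0 - np.cumprod(V[::-1])  # progressive type-II uniform sample (ascending)
    return ogeiw_quantile(1.0 - (1.0 - U) ** (1.0 / k), alpha, beta, lam)

# scheme CSI with (n, m) = (30, 15): R = (15, 0, ..., 0), group size k = 3
R = np.array([15] + [0] * 14)
x = pffc_sample(R, k=3, alpha=1.75, beta=0.5, lam=1.0)
```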
1.
As anticipated from Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8 and Table 9, increasing the sample sizes ( n , m ) results in a reduction in MSEs and ACIWs/CRIWs.
2.
Among the various methods, Bayes estimates exhibit the smallest MSEs and AWs for the parameters α , β , λ , S ( t ) , h t and r t , indicating superior performance compared to MLEs.
3.
The Bayes estimate utilizing the GE loss function with ω = 2 delivers more accurate estimates for α , β , λ , S ( t ) , h t and r t due to its smaller MSEs.
4.
When using the LINEX and GE loss function with ω = 2 , Bayes estimates outperform those obtained with ω = 2 , as they yield smaller MSEs.
5.
For the SE loss function, Bayes estimates for α , β and λ are superior to those derived under the LINEX loss function with ω = 2 , demonstrated by their smaller MSEs.
6.
Bayes estimates for S ( t ) , h t and r t under the LINEX loss function with ω = 2 show better performance compared to those under the SE loss function, as evidenced by their smaller MSEs.
7.
Analysis of all tables reveals that increasing the group size k leads to higher MSEs and AWs for α , β , λ , S ( t ) , h t and r t .
8.
Given fixed sample sizes and observed failures, the CSI scheme proves to be the most effective, offering the smallest MSEs and AWs.
9.
In general, all the point estimates perform well, since the average biases are very small; the average bias tends to zero as n and m increase.
10.
The MLE and Bayesian methods provide very similar estimates, with both having ACIs with high CPs of around 0.95 . However, Bayesian CRIs achieve the highest CPs.

5. Application to Real-Life Data

The practical application of theoretical studies using real data is essential for ensuring that abstract concepts are grounded in reality and truly effective. Theories provide the foundational principles and frameworks, but real data reveal how these theories function in actual situations. This application allows for the testing and refinement of theoretical models, making them more accurate and relevant. In this section, to illustrate the inference methods covered earlier, we use real-life data on bladder cancer patients. Specifically, we analyze the dataset provided by Lee and Wang [50], which was also utilized in Hassan et al. [1]. This dataset includes remission times, measured in months, for a random sample of 128 bladder cancer patients.
To evaluate the goodness of fit, we calculated the K-S distance between the empirical distribution and the fitted distribution functions, which turned out to be 0.05914955 , with a corresponding p-value of 0.7617021 in the presence of the estimated parameters ( α ^ , β ^ , λ ^ ) = ( 0.8699 , 0.3355 , 0.05653 ) . This p-value is the highest among the values obtained for these data, indicating that the OGE-IWD model provides the best fit. The empirical, Q-Q, TTT, SF, PDF and P-P plots displayed in Figure 4 further confirm that the OGE-IWD model aligns closely with the data. The data are randomly partitioned into 64 groups, each containing 2 items. The groups are organized as follows: {(0.08, 0.31), (0.19, 0.39), (0.2, 0.4), (0.51, 0.52), (0.73, 0.81), (0.22, 0.5), (1.76, 2.02), (2.26, 2.64), (0.26, 0.4), (0.62, 0.66), (0.82, 1.35), (0.96, 2.02), (0.9, 1.46), (1.05, 2.07), (2.69, 2.75), (2.69, 2.83), (2.87, 3.02), (1.26, 2.09), (3.88, 4.18), (2.33, 3.25), (3.7, 4.23), (3.48, 3.57), (2.46, 3.46), (5.41, 7.28), (5.41, 8.66), (5.49, 7.93), (5.62, 12.63), (2.54, 4.33), (3.28, 4.26), (26.31, 32.15), (3.36, 4.34), (3.36, 4.4), (4.5, 5.32), (4.87, 5.34), (4.98, 5.32), (4.51, 5.17), (5.09, 23.63), (5.71, 11.98), (7.32, 9.74), (5.06, 7.59), (12.07, 13.11), (5.85, 7.62), (6.25, 7.63), (6.39, 7.87), (9.02, 9.47), (6.54, 10.06), (14.77, 16.62), (6.76, 10.34), (17.36, 18.1), (6.94, 10.66), (10.75, 11.25), ( 6.97, 11.64), (7.09, 11.79), (14.76, 17.12), (7.26, 12.03), (8.37, 13.29), (8.53, 13.8), (19.13, 22.69), (8.65, 14.38), (12.02, 17.14), (34.26, 43.01), (14.24, 46.12), (20.28, 25.74), (36.66, 79.05)}. A PFFC sample, denoted as x ( 1 ) , is generated as follows: the original data are divided into n = 64 groups, with each group containing k = 2 items. 
The resulting groups are organized according to R = (2, 1, 2, 1, 1, 1, 0, 2, 3, 0, 2, 0, 2, 0, 2, 0, 0, 3, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 3), where 34 first-failure times are recorded ( m = 34 ) and 30 groups are excluded ( n m = 30 ) . Consequently, the resulting PFFC sample is x ( 1 ) = {0.08, 0.19, 0.2, 0.22, 0.26, 0.96, 1.05, 1.26, 2.33, 2.46, 2.54, 3.28, 3.36, 3.36, 4.87, 4.98, 5.06, 5.09, 5.85, 6.25, 6.39, 6.54, 6.76, 6.94, 6.97, 7.09, 7.26, 8.37, 8.53, 8.65, 12.02, 14.24, 20.28, 36.66}. Based on the data x ( 1 ) , the MLEs and 95 % ACIs for α , β , λ , S t , h t and r t are listed in Table 10 and Table 11. To obtain Bayesian estimates, we need to define prior distributions for these parameters. In the absence of prior information, we use non-informative gamma priors for α , β and λ , with hyper-parameters a i = 0.0001 and b i = 0.0001 , i = 1 , 2 , 3 . The posterior analysis was carried out using the MCMC technique, specifically combining the MH algorithm with the GS. As outlined in Section 3, the MLEs were used as initial values for α , β and λ , and 12,000 MCMC samples were generated, with the first 2000 samples discarded as “burn-in” to mitigate initial value effects. Table 10 and Table 12 present the Bayesian estimates and 95 % CRIs for α , β , λ , S t , h t and r t . From the results presented in Table 10 and Table 11, we can make the following observations: The model performs exceptionally well in fitting the data, as evident from the figures. The estimates from the two methods are quite similar, with only minor differences, which provides a favorable impression. The approximate confidence intervals are appropriate, with all point estimates falling within them. Additionally, there are minor variations in the interval widths, as anticipated.
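The goodness-of-fit check can be reproduced along the following lines; for brevity only a handful of the remission times are typed in here, so the statistic and p-value will not match the full-sample values reported above, which use all 128 observations:

```python
import numpy as np
from scipy.stats import kstest

def ogeiw_cdf(t, alpha=0.8699, beta=0.3355, lam=0.05653):
    """OGE-IWD CDF evaluated at the MLEs reported for the bladder-cancer data."""
    t = np.asarray(t, dtype=float)
    return 1.0 - np.exp(-lam / (np.exp(beta * t ** (-alpha)) - 1.0))

# a small subset of the 128 remission times, for illustration only
times = np.array([0.08, 0.51, 0.81, 1.35, 2.02, 2.69, 3.25, 4.18, 5.32,
                  6.94, 7.59, 9.02, 10.34, 12.03, 14.77, 17.12, 22.69, 32.15])

# one-sample Kolmogorov-Smirnov test against the fitted CDF
stat, pval = kstest(times, ogeiw_cdf)
```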

6. Conclusions

In this study, we developed two distinct methods using a PFFC scheme to estimate the unknown parameters of the OGE-IWD. We utilized the Fisher information matrix to create approximate confidence intervals (ACIs) for α , β , and λ , while employing the delta method to compute ACIs for SF, HRF, and IHRF. The complexity of the posterior distribution equations for these parameters, particularly in the context of Bayesian estimates, made analytical reductions challenging. To address this, we used MCMC techniques to calculate Bayesian estimators for the SE, LINEX, and GE loss functions. The study began by comparing various methodologies in a simulated environment, concluding that the Bayes method is effective for estimating and constructing ACIs for unknown parameters with PFFC data from the OGE-IWD. The MCMC algorithm showed better performance than the MLE method. Applying the OGE-IWD to real-world medical data demonstrated its ability to model current data accurately, suggesting its potential for similar applications in the medical field. The study also identifies future research directions, including optimizing censoring schemes and expanding statistical inference methods for accelerated life testing models with multiple failure factors.
One critical aspect that requires further examination is the sample size necessary to achieve desirable statistical properties, such as unbiasedness, consistency, and asymptotic normality of the estimators. In MLE, these properties generally hold asymptotically as the sample size approaches infinity. However, for practical applications, particularly with progressive first-failure censoring, finite-sample performance may vary significantly depending on the sample size and censoring scheme. Smaller samples can lead to biased or inefficient estimators, and confidence intervals may exhibit lower-than-expected coverage probabilities.
For the proposed model, the non-closed-form nature of MLEs and the use of iterative methods like Newton–Raphson introduce further dependency on sample size for convergence and stability. To ensure unbiasedness and consistency, it is crucial to determine an appropriate minimum sample size, particularly as the censoring level increases. In progressive censoring schemes, fewer observed failures can reduce the effective sample size, impacting the asymptotic properties and potentially leading to biased estimates.
Additionally, for Bayesian estimation, small sample sizes can lead to posterior distributions that are overly influenced by the prior distributions. When informative priors are employed, as with the gamma priors used in this study, the posterior estimates might exhibit significant shrinkage towards the prior mean. Monte Carlo simulations could be utilized to investigate how sample size affects Bayesian point estimates and credible interval coverage rates under different censoring levels, providing practical guidance on sample size requirements for reliable inference.

Author Contributions

Conceptualization, R.M.E.-S. and M.M.R.; methodology, A.A.-E.-M. and R.M.E.-S.; software, M.M.R., A.A.-E.-M. and R.M.E.-S.; validation, R.M.E.-S., M.M.R. and A.A.-E.-M.; formal analysis, A.A.-E.-M.; investigation, M.M.R. and R.M.E.-S.; resources, M.M.R.; data curation, R.M.E.-S.; writing-original draft preparation, A.A.-E.-M.; writing-review and editing, R.M.E.-S.; visualization, M.M.R.; supervision, R.M.E.-S. and M.M.R.; project administration, R.M.E.-S.; funding acquisition, A.A.-E.-M., M.M.R. and R.M.E.-S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data that support the findings of this study are available within the article.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Hassan, A.S.; Elsherpieny, E.A.; Mohamed, R.E. Odds generalized exponential-inverse Weibull distribution: Properties estimation. Pak. J. Stat. Oper. Res. 2018, 14, 1–22. [Google Scholar] [CrossRef]
  2. Mohamed, A.A.; Refaey, R.M.; AL-Dayian, G.R. Bayesian and E-Bayesian estimation for odd generalized exponential inverted Weibull distribution. J. Bus. Environ. Sci. 2024, 3, 275–301. [Google Scholar] [CrossRef]
  3. Noor, F.; Aslam, M. Bayesian inference of the inverse Weibull mixture distribution using type-I censoring. J. Appl. Stat. 2013, 40, 1076–1089. [Google Scholar] [CrossRef]
  4. Basu, S.; Singh, S.K.; Singh, U. Parameter estimation of inverse Lindley distribution for Type-I censored data. Comput. Stat. 2017, 32, 367–385. [Google Scholar] [CrossRef]
  5. Joarder, A.; Krishna, H.; Kundu, D. Inferences on Weibull parameters with conventional type-I censoring. Comput. Stat. Data Anal. 2011, 55, 1–11. [Google Scholar] [CrossRef]
  6. Kundu, D.; Howlader, H. Bayesian inference and prediction of the inverse Weibull distribution for Type-II censored data. Comput. Stat. Data Anal. 2010, 54, 1547–1558. [Google Scholar] [CrossRef]
  7. Kundu, D.; Raqab, M.Z. Bayesian inference and prediction of order statistics for a Type-II censored Weibull distribution. J. Stat. Plan. Inference 2012, 142, 41–47. [Google Scholar] [CrossRef]
  8. Singh, S.K.; Singh, U.; Sharma, V.K. Bayesian Estimation and Prediction for Flexible Weibull Model under Type-II Censoring Scheme. J. Probab. Stat. 2013, 2013, 146140. [Google Scholar] [CrossRef]
  9. Panahi, H.; Sayyareh, A. Parameter estimation and prediction of order statistics for the Burr Type XII distribution with Type II censoring. J. Appl. Stat. 2014, 41, 215–232. [Google Scholar] [CrossRef]
  10. Asgharzadeh, A.; Ng, H.K.; Valiollahi, R.; Azizpour, M. Statistical inference for Lindley model based on type-II censored data. J. Stat. Theory Appl. 2017, 16, 178–197. [Google Scholar] [CrossRef]
  11. Xin, H.; Zhu, J.; Sun, J.; Zheng, C.; Tsai, T.R. Reliability inference based on the three-parameter Burr type XII distribution with type-II censoring. Int. J. Reliab. Qual. Saf. Eng. 2018, 25, 1850010. [Google Scholar] [CrossRef]
  12. Goyal, T.; Rai, P.K.; Maury, S.K. Classical and Bayesian studies for a new lifetime model in presence of type-II censoring. Commun. Stat. Appl. Methods 2019, 26, 385–410. [Google Scholar] [CrossRef]
  13. Arabi Belaghi, R.; Noori Asl, M.; Gurunlu Alma, O.; Singh, S.; Vasfi, M. Estimation and prediction for the Poisson-Exponential distribution based on type-II censored data. Am. J. Math. Manag. Sci. 2019, 38, 96–115. [Google Scholar] [CrossRef]
  14. Balakrishnan, N.; Aggarwala, R. Progressive Censoring: Theory, Methods, and Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2000. [Google Scholar]
  15. Cohen, A.C. Progressively censored samples in life testing. Technometrics 1963, 5, 327–339. [Google Scholar] [CrossRef]
  16. Brito, E.S.; Ferreira, P.H.; Tomazella, V.L.; Martins Neto, D.S.; Ehlers, R.S. Inference methods for the Very Flexible Weibull distribution based on progressive type-II censoring. Commun. Stat.-Simul. Comput. 2024, 53, 5342–5366. [Google Scholar]
  17. Abo-Kasem, O.E.; El Saeed, A.R.; El Sayed, A.I. Optimal sampling and statistical inferences for Kumaraswamy distribution under progressive Type-II censoring schemes. Sci. Rep. 2023, 13, 12063. [Google Scholar] [CrossRef]
  18. Dey, S.; Al-Mosawi, R. Classical and Bayesian Inference of Unit Gompertz Distribution Based on Progressively Type II Censored Data. Am. J. Math. Manag. Sci. 2024, 43, 61–89. [Google Scholar] [CrossRef]
  19. Kumar, D.; Nassar, M.; Dey, S. Progressive Type-II Censored Data and Associated Inference with Application Based on Li—Li Rayleigh Distribution. Ann. Data Sci. 2023, 10, 43–71. [Google Scholar] [CrossRef]
  20. Choudhary, H.; Krishna, H.; Nagar, K.; Kumar, K. Estimation in generalized uniform distribution with progressively type-II censored sample. Life Cycle Reliab. Saf. Eng. 2024, 13, 309–323. [Google Scholar] [CrossRef]
  21. Johnson, L.G. Theory and Technique of Variation Research; Elsevier: Amsterdam, The Netherlands, 1964. [Google Scholar]
  22. Wu, J.W.; Hung, W.L.; Tsai, C.H. Estimation of the parameters of the Gompertz distribution under the first-failure-censored sampling plan. Statistics 2003, 37, 517–527. [Google Scholar] [CrossRef]
  23. Wu, J.W.; Yu, H.Y. Statistical inference about the shape parameter of the Burr type XII distribution under the first-failure-censored sampling plan. Appl. Math. Comput. 2005, 163, 443–482. [Google Scholar]
  24. Wu, S.J.; Kuş, C. On estimation based on progressive first-failure censored sampling. Comput. Stat. Data Anal. 2009, 53, 3659–3670. [Google Scholar] [CrossRef]
  25. Kumar, I.; Kumar, K.; Ghosh, I. Reliability estimation in inverse Pareto distribution using progressively first failure censored data. Am. J. Math. Manag. Sci. 2023, 42, 126–147. [Google Scholar] [CrossRef]
  26. Abd-El-Monem, A.; Eliwa, M.S.; El-Morshedy, M.; Al-Bossly, A.; EL-Sagheer, R.M. Statistical Analysis and Theoretical Framework for a Partially Accelerated Life Test Model with Progressive First Failure Censoring Utilizing a Power Hazard Distribution. Mathematics 2023, 11, 4323. [Google Scholar] [CrossRef]
  27. Elshahhat, A.; Sharma, V.K.; Mohammed, H.S. Statistical analysis of progressively first-failure-censored data via beta-binomial removals. AIMS Math. 2023, 8, 22419–22446. [Google Scholar] [CrossRef]
  28. Kumar, K.; Kumar, I.; Ng, H.K.T. On Estimation of Shannon’s Entropy of Maxwell Distribution Based on Progressively First-Failure Censored Data. Stats 2024, 7, 138–159. [Google Scholar] [CrossRef]
  29. Saini, S. Estimation of multi-stress strength reliability under progressive first failure censoring using generalized inverted exponential distribution. J. Stat. Comput. Simul. 2024, 94, 3177–3209. [Google Scholar] [CrossRef]
  30. Shi, X.; Shi, Y. Estimation of stress-strength reliability for beta log Weibull distribution using progressive first failure censored samples. Qual. Reliab. Eng. Int. 2023, 39, 1352–1375. [Google Scholar] [CrossRef]
  31. Fathi, A.; Farghal, A.W.A.; Soliman, A.A. Inference on Weibull inverted exponential distribution under progressive first-failure censoring with constant-stress partially accelerated life test. Stat. Pap. 2024, 1–33. [Google Scholar] [CrossRef]
  32. Eliwa, M.S.; Al-Essa, L.A.; Abou-Senna, A.M.; El-Morshedy, M.; EL-Sagheer, R.M. Theoretical framework and inference for fitting extreme data through the modified Weibull distribution in a first-failure censored progressive approach. Heliyon 2024, 10, e34418. [Google Scholar] [CrossRef]
  33. Gong, Q.; Chen, R.; Ren, H.; Zhang, F. Estimation of the reliability function of the generalized Rayleigh distribution under progressive first-failure censoring model. Axioms 2024, 13, 580. [Google Scholar] [CrossRef]
  34. He, D.; Sun, D.; Zhu, Q. Bayesian analysis for the Lomax model using noninformative priors. Stat. Theory Relat. Fields 2023, 7, 61–68. [Google Scholar] [CrossRef]
  35. Ran, H.; Bai, Y. Partially fixed bayesian additive regression trees. Stat. Theory Relat. Fields 2024, 1–11. [Google Scholar] [CrossRef]
  36. Zhuang, L.; Xu, A.; Wang, Y.; Tang, Y. Remaining useful life prediction for two-phase degradation model based on reparameterized inverse Gaussian process. Eur. J. Oper. Res. 2024, 319, 877–890. [Google Scholar] [CrossRef]
  37. Xu, A.; Fang, G.; Zhuang, L.; Gu, C. A multivariate student-t process model for dependent tail-weighted degradation data. IISE Trans. 2024, 1–17. [Google Scholar] [CrossRef]
  38. Greene, W.H. Econometric Analysis, 4th ed.; Prentice-Hall: New York, NY, USA, 2000. [Google Scholar]
  39. Meeker, W.Q.; Escobar, L.A. Statistical Methods for Reliability Data; Wiley: New York, NY, USA, 1998. [Google Scholar]
  40. Santos, T.; Lemmerich, F.; Helic, D. Bayesian estimation of decay parameters in Hawkes processes. Intell. Data Anal. 2023, 27, 223–240. [Google Scholar] [CrossRef]
  41. Han, M. Take a look at the hierarchical Bayesian estimation of parameters from several different angles. Commun. Stat.-Theory Methods 2023, 52, 7718–7730. [Google Scholar] [CrossRef]
  42. Vaglio, M.; Pacilio, C.; Maselli, A.; Pani, P. Bayesian parameter estimation on boson-star binary signals with a coherent inspiral template and spin-dependent quadrupolar corrections. Phys. Rev. D 2023, 108, 023021. [Google Scholar] [CrossRef]
  43. Bangsgaard, K.O.; Andersen, M.; Heaf, J.G.; Ottesen, J.T. Bayesian parameter estimation for phosphate dynamics during hemodialysis. Math. Biosci. Eng. 2023, 20, 4455–4492. [Google Scholar] [CrossRef]
  44. Varian, H.R. Bayesian Approach to Real Estate Assessment. In Studies in Bayesian Economics and Statistics in Honor of Savage; Fienberg, S.E., Zellner, A., Savage, L.J., Eds.; North Holland: Amsterdam, The Netherlands, 1975; pp. 195–208. [Google Scholar]
  45. Calabria, R.; Pulcini, G. Point estimation under asymmetric loss functions for left-truncated exponential samples. Communications in Statistics-Theory and Methods 1996, 25, 585–600. [Google Scholar] [CrossRef]
  46. Geman, S.; Geman, D. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. Pattern Anal. Mach. Intell. 1984, 6, 721–741. [Google Scholar] [CrossRef]
  47. Metropolis, N.; Rosenbluth, A.W.; Rosenbluth, M.N.; Teller, A.H.; Teller, E. Equation of state calculations by fast computing machines. J. Chem. Phys. 1953, 21, 1087–1092. [Google Scholar] [CrossRef]
  48. Hastings, W.K. Monte Carlo sampling methods using Markov chains and their applications. Biometrika 1970, 57, 97–109. [Google Scholar] [CrossRef]
  49. Balakrishnan, N.; Sandhu, R.A. A simple simulation algorithm for generating progressively type-II censored samples. Am. J. Stat. 1995, 49, 229–230. [Google Scholar] [CrossRef]
  50. Lee, E.T.; Wang, J.W. Statistical Methods for Survival Data Analysis, 3rd ed.; Wiley: New York, NY, USA, 2003. [Google Scholar] [CrossRef]
Figure 1. PDF for OGE-IWD.
Figure 2. HRF for OGE-IWD.
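Figures 1 and 2 illustrate the density and hazard shapes of the OGE-IWD. As a rough numerical sketch of how such curves can be traced, the odd-generalized-exponential construction F(x) = [1 − exp(−λ G(x)/(1 − G(x)))]^α can be evaluated at a baseline CDF G; the one-parameter inverse-Weibull-type baseline G(x) = exp(−β/x) below is an illustrative assumption, and the paper's exact OGE-IWD parameterization may differ:

```python
import math

def oge_cdf(x, alpha, lam, G):
    # Odd-generalized-exponential family applied to a baseline CDF G:
    # F(x) = [1 - exp(-lam * G(x) / (1 - G(x)))]^alpha
    g = G(x)
    return (1.0 - math.exp(-lam * g / (1.0 - g))) ** alpha

def pdf_num(x, alpha, lam, G, h=1e-6):
    # Numerical density by central differences (sketch only)
    return (oge_cdf(x + h, alpha, lam, G) - oge_cdf(x - h, alpha, lam, G)) / (2.0 * h)

def hrf_num(x, alpha, lam, G):
    # Hazard rate function h(x) = f(x) / (1 - F(x))
    return pdf_num(x, alpha, lam, G) / (1.0 - oge_cdf(x, alpha, lam, G))

beta = 1.0
G = lambda x: math.exp(-beta / x)   # assumed illustrative baseline, not the paper's exact form
F1 = oge_cdf(1.0, 0.5, 1.0, G)
assert 0.0 < F1 < 1.0 and hrf_num(1.0, 0.5, 1.0, G) > 0.0
```

Sweeping `x` over a grid with different (α, λ, β) values reproduces density and hazard curves of the kind shown in the figures.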
Figure 3. Description of the PFFC scheme.
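The PFFC scheme of Figure 3 can be simulated with a standard device: a progressive first-failure censored sample from a CDF F, with n groups of k items and removal scheme R, has the same distribution as a progressive type-II censored sample from 1 − (1 − F)^k, which the Balakrishnan–Sandhu uniform algorithm generates directly. A minimal sketch, with the unit-exponential quantile function standing in for the OGE-IWD one:

```python
import math
import random

def pffc_sample(n, m, k, R, inv_cdf, seed=2024):
    """Simulate a PFFC sample: m first-failure times from n groups of k
    items under removal scheme R (len(R) == m, sum(R) == n - m), via a
    progressive type-II sample from 1 - (1 - F)^k."""
    assert len(R) == m and sum(R) == n - m
    rng = random.Random(seed)
    w = [rng.random() for _ in range(m)]
    # V_i = W_i^{1/(i + R_m + ... + R_{m-i+1})}, i = 1..m
    v = [w[i] ** (1.0 / ((i + 1) + sum(R[m - i - 1:]))) for i in range(m)]
    # U_i = 1 - V_m V_{m-1} ... V_{m-i+1}: progressive type-II uniform order statistics
    u, prod = [], 1.0
    for i in range(m):
        prod *= v[m - 1 - i]
        u.append(1.0 - prod)
    # Map back through F via the group-minimum transform
    return [inv_cdf(1.0 - (1.0 - ui) ** (1.0 / k)) for ui in u]

# Illustration with a unit-exponential baseline, F^{-1}(u) = -ln(1 - u);
# the OGE-IWD quantile function would be plugged in as inv_cdf.
R = [2, 0, 0, 0, 1]                      # n = 8, m = 5, k = 3
x = pffc_sample(8, 5, 3, R, lambda u: -math.log(1.0 - u))
assert len(x) == 5 and all(a < b for a, b in zip(x, x[1:]))
```

The censoring schemes CSI–CSIII in the tables below correspond to different choices of the removal vector R.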
Figure 4. The KD, box, TTT, Q-Q, P-P, SF, PDF, and violin plots for the data set.
Table 1. RMSE (first row of each pair) and bias (second row) of estimates for the parameter α.

| (k, n, m) | CS | MLE | Bayes (SEL) | Bayes (LINEX, ω = 2) | Bayes (LINEX, ω = −2) | Bayes (GE, ν = 2) | Bayes (GE, ν = −2) |
|---|---|---|---|---|---|---|---|
| (3, 30, 15) | CSI | 0.5956 | 0.5809 | 0.5906 | 0.5616 | 0.5748 | 0.5277 |
| | | 0.0250 | 0.0134 | 0.0146 | 0.0129 | 0.0132 | 0.0114 |
| | CSII | 0.6129 | 0.5925 | 0.6020 | 0.5752 | 0.5993 | 0.5384 |
| | | −0.0199 | 0.0174 | 0.0186 | 0.0156 | 0.0163 | 0.0142 |
| | CSIII | 0.6353 | 0.6136 | 0.6209 | 0.6001 | 0.6160 | 0.5544 |
| | | −0.0176 | 0.0187 | 0.0203 | 0.0165 | 0.0177 | 0.0158 |
| (3, 30, 20) | CSI | 0.5446 | 0.5239 | 0.5332 | 0.5047 | 0.5152 | 0.4860 |
| | | −0.0021 | 0.0022 | 0.0023 | 0.0020 | 0.0023 | 0.0019 |
| | CSII | 0.5705 | 0.5501 | 0.5716 | 0.5351 | 0.5616 | 0.5047 |
| | | 0.0029 | 0.0027 | 0.0029 | 0.0024 | 0.0028 | 0.0021 |
| | CSIII | 0.6039 | 0.5880 | 0.6045 | 0.5476 | 0.5967 | 0.5288 |
| | | 0.0041 | 0.0039 | 0.0040 | 0.0036 | 0.0038 | 0.0026 |
| (3, 50, 35) | CSI | 0.4742 | 0.4470 | 0.4595 | 0.4174 | 0.4532 | 0.3919 |
| | | 0.0016 | 0.0015 | 0.0016 | 0.0013 | 0.0014 | 0.0009 |
| | CSII | 0.5057 | 0.4715 | 0.4999 | 0.4422 | 0.4851 | 0.4096 |
| | | −0.0253 | 0.0022 | 0.0022 | 0.0023 | 0.0022 | 0.0018 |
| | CSIII | 0.5354 | 0.5063 | 0.5147 | 0.4691 | 0.5261 | 0.4423 |
| | | −0.0278 | 0.0025 | 0.0024 | 0.0021 | 0.0023 | 0.0014 |
| (5, 30, 15) | CSI | 0.6652 | 0.6508 | 0.6605 | 0.6304 | 0.6554 | 0.5997 |
| | | −0.0283 | 0.0236 | 0.0265 | 0.0225 | 0.0246 | 0.0174 |
| | CSII | 0.6966 | 0.6811 | 0.6905 | 0.6627 | 0.6855 | 0.6322 |
| | | −0.0235 | 0.0250 | 0.0271 | 0.0242 | 0.0259 | 0.0189 |
| | CSIII | 0.7174 | 0.7105 | 0.7244 | 0.6864 | 0.7164 | 0.6495 |
| | | −0.0286 | 0.0263 | 0.0295 | 0.0255 | 0.0271 | 0.0194 |
| (5, 30, 20) | CSI | 0.6353 | 0.6170 | 0.6296 | 0.5956 | 0.6208 | 0.5471 |
| | | −0.0334 | 0.0321 | 0.0354 | 0.0314 | 0.0325 | 0.0264 |
| | CSII | 0.6724 | 0.6608 | 0.6707 | 0.6263 | 0.6660 | 0.6065 |
| | | −0.0314 | 0.0387 | 0.0390 | 0.0368 | 0.0354 | 0.0297 |
| | CSIII | 0.6978 | 0.6810 | 0.6893 | 0.6643 | 0.6855 | 0.6244 |
| | | 0.0456 | 0.0442 | 0.0453 | 0.0412 | 0.0428 | 0.0355 |
| (5, 50, 35) | CSI | 0.5610 | 0.5454 | 0.5744 | 0.5222 | 0.5610 | 0.4616 |
| | | −0.0019 | 0.0018 | 0.0019 | 0.0014 | 0.0013 | 0.0007 |
| | CSII | 0.5956 | 0.5803 | 0.5915 | 0.5445 | 0.5897 | 0.4911 |
| | | −0.0017 | 0.0021 | 0.0023 | 0.0020 | 0.0021 | 0.0008 |
| | CSIII | 0.6279 | 0.6093 | 0.6217 | 0.5784 | 0.6163 | 0.5471 |
| | | 0.0013 | 0.0012 | 0.0120 | 0.0010 | 0.0011 | 0.0009 |
Table 2. MSE of estimates for the parameter α.

| (k, n, m) | CS | MLE | Bayes (SEL) | Bayes (LINEX, ω = 2) | Bayes (LINEX, ω = −2) | Bayes (GE, ν = 2) | Bayes (GE, ν = −2) |
|---|---|---|---|---|---|---|---|
| (3, 30, 15) | CSI | 0.3547 | 0.3375 | 0.3488 | 0.3154 | 0.3304 | 0.2785 |
| | CSII | 0.3756 | 0.3511 | 0.3624 | 0.3308 | 0.3592 | 0.2899 |
| | CSIII | 0.4036 | 0.3765 | 0.3855 | 0.3601 | 0.3795 | 0.3074 |
| (3, 30, 20) | CSI | 0.2966 | 0.2745 | 0.2843 | 0.2547 | 0.2654 | 0.2362 |
| | CSII | 0.3255 | 0.3026 | 0.3267 | 0.2863 | 0.3154 | 0.2547 |
| | CSIII | 0.3647 | 0.3457 | 0.3654 | 0.2999 | 0.3561 | 0.2796 |
| (3, 50, 35) | CSI | 0.2249 | 0.1998 | 0.2111 | 0.1742 | 0.2054 | 0.1536 |
| | CSII | 0.2557 | 0.2223 | 0.2499 | 0.1955 | 0.2353 | 0.1678 |
| | CSIII | 0.2866 | 0.2563 | 0.2649 | 0.2201 | 0.2768 | 0.1956 |
| (5, 30, 15) | CSI | 0.4425 | 0.4236 | 0.4362 | 0.3974 | 0.4295 | 0.3596 |
| | CSII | 0.4852 | 0.4639 | 0.4768 | 0.4392 | 0.4699 | 0.3997 |
| | CSIII | 0.5147 | 0.5048 | 0.5247 | 0.4712 | 0.5133 | 0.4219 |
| (5, 30, 20) | CSI | 0.4036 | 0.3807 | 0.3964 | 0.3547 | 0.3854 | 0.2993 |
| | CSII | 0.4521 | 0.4366 | 0.4499 | 0.3922 | 0.4436 | 0.3678 |
| | CSIII | 0.4869 | 0.4637 | 0.4752 | 0.4413 | 0.4699 | 0.3899 |
| (5, 50, 35) | CSI | 0.3147 | 0.2975 | 0.3299 | 0.2727 | 0.3147 | 0.2131 |
| | CSII | 0.3547 | 0.3368 | 0.3499 | 0.2965 | 0.3478 | 0.2412 |
| | CSIII | 0.3942 | 0.3713 | 0.3865 | 0.3345 | 0.3798 | 0.2993 |
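The entries of Tables 1–7 are Monte Carlo summaries of the point estimates over the simulation replicates. For completeness, a sketch of the standard definitions of bias, RMSE, and MSE (the replicate values below are illustrative, not taken from the study):

```python
import math

def bias_rmse_mse(estimates, true_value):
    """Bias = mean(est) - true; MSE = mean((est - true)^2); RMSE = sqrt(MSE)."""
    n = len(estimates)
    bias = sum(estimates) / n - true_value
    mse = sum((e - true_value) ** 2 for e in estimates) / n
    return bias, math.sqrt(mse), mse

# Toy replicates around a "true" alpha of 0.5 (illustrative only)
est = [0.48, 0.55, 0.51, 0.46, 0.52]
b, rmse, mse = bias_rmse_mse(est, 0.5)
assert abs(b - 0.004) < 1e-9 and abs(mse - 0.001) < 1e-9
```

In the study these summaries are taken over the full set of simulated PFFC samples for each (k, n, m) and censoring scheme.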
Table 3. MSE of estimates for the parameter β.

| (k, n, m) | CS | MLE | Bayes (SEL) | Bayes (LINEX, ω = 2) | Bayes (LINEX, ω = −2) | Bayes (GE, ν = 2) | Bayes (GE, ν = −2) |
|---|---|---|---|---|---|---|---|
| (3, 30, 15) | CSI | 0.0293 | 0.0274 | 0.0289 | 0.0235 | 0.0282 | 0.0199 |
| | CSII | 0.0346 | 0.0325 | 0.0338 | 0.0264 | 0.0331 | 0.0225 |
| | CSIII | 0.0415 | 0.0388 | 0.0401 | 0.0296 | 0.0395 | 0.0257 |
| (3, 30, 20) | CSI | 0.0223 | 0.0204 | 0.0217 | 0.0187 | 0.0214 | 0.0154 |
| | CSII | 0.0264 | 0.0245 | 0.0260 | 0.0222 | 0.0253 | 0.0186 |
| | CSIII | 0.0313 | 0.0296 | 0.0311 | 0.0254 | 0.0302 | 0.0213 |
| (3, 50, 35) | CSI | 0.0164 | 0.0145 | 0.0160 | 0.0129 | 0.0159 | 0.0099 |
| | CSII | 0.0202 | 0.0183 | 0.0195 | 0.0155 | 0.0191 | 0.0127 |
| | CSIII | 0.0251 | 0.0233 | 0.0249 | 0.0185 | 0.0242 | 0.0149 |
| (5, 30, 15) | CSI | 0.0353 | 0.0332 | 0.0341 | 0.0277 | 0.0339 | 0.0241 |
| | CSII | 0.0391 | 0.0375 | 0.0387 | 0.0335 | 0.0381 | 0.0296 |
| | CSIII | 0.0442 | 0.0425 | 0.0432 | 0.0376 | 0.0429 | 0.0332 |
| (5, 30, 20) | CSI | 0.0292 | 0.0273 | 0.0283 | 0.0225 | 0.0278 | 0.0189 |
| | CSII | 0.0331 | 0.0315 | 0.0326 | 0.0257 | 0.0321 | 0.0231 |
| | CSIII | 0.0367 | 0.0351 | 0.0362 | 0.0285 | 0.0357 | 0.0259 |
| (5, 50, 35) | CSI | 0.0234 | 0.0215 | 0.0234 | 0.0176 | 0.0225 | 0.0139 |
| | CSII | 0.0285 | 0.0264 | 0.0279 | 0.0223 | 0.0271 | 0.0187 |
| | CSIII | 0.0334 | 0.0321 | 0.0331 | 0.0267 | 0.0328 | 0.0235 |
Table 4. MSE of estimates for the parameter λ.

| (k, n, m) | CS | MLE | Bayes (SEL) | Bayes (LINEX, ω = 2) | Bayes (LINEX, ω = −2) | Bayes (GE, ν = 2) | Bayes (GE, ν = −2) |
|---|---|---|---|---|---|---|---|
| (3, 30, 15) | CSI | 0.1825 | 0.1597 | 0.1693 | 0.1355 | 0.1622 | 0.1176 |
| | CSII | 0.2354 | 0.2143 | 0.2261 | 0.1524 | 0.2199 | 0.1369 |
| | CSIII | 0.2677 | 0.2581 | 0.2652 | 0.1744 | 0.2607 | 0.1532 |
| (3, 30, 20) | CSI | 0.1163 | 0.0956 | 0.1123 | 0.0846 | 0.1069 | 0.0723 |
| | CSII | 0.1456 | 0.1178 | 0.1357 | 0.0936 | 0.1243 | 0.0855 |
| | CSIII | 0.1932 | 0.1699 | 0.1756 | 0.1235 | 0.1722 | 0.1062 |
| (3, 50, 35) | CSI | 0.0986 | 0.0775 | 0.0869 | 0.0734 | 0.0824 | 0.0593 |
| | CSII | 0.1235 | 0.0992 | 0.0111 | 0.0846 | 0.0105 | 0.0692 |
| | CSIII | 0.1577 | 0.1347 | 0.1453 | 0.1067 | 0.1422 | 0.0899 |
| (5, 30, 15) | CSI | 0.2546 | 0.2348 | 0.2453 | 0.1911 | 0.2398 | 0.1637 |
| | CSII | 0.2865 | 0.2645 | 0.2733 | 0.2299 | 0.2675 | 0.1894 |
| | CSIII | 0.3155 | 0.2943 | 0.3057 | 0.2654 | 0.2956 | 0.2278 |
| (5, 30, 20) | CSI | 0.1736 | 0.1564 | 0.1635 | 0.1157 | 0.1598 | 0.0969 |
| | CSII | 0.2239 | 0.2077 | 0.1911 | 0.1356 | 0.1863 | 0.1175 |
| | CSIII | 0.2536 | 0.2394 | 0.2456 | 0.1752 | 0.2431 | 0.1406 |
| (5, 50, 35) | CSI | 0.1235 | 0.1155 | 0.1237 | 0.0975 | 0.1196 | 0.0786 |
| | CSII | 0.1536 | 0.1463 | 0.1501 | 0.1237 | 0.1498 | 0.0991 |
| | CSIII | 0.1837 | 0.1694 | 0.1732 | 0.1423 | 0.1702 | 0.1195 |
Table 5. MSE of estimates for S(t).

| (k, n, m) | CS | MLE | Bayes (SEL) | Bayes (LINEX, ω = 2) | Bayes (LINEX, ω = −2) | Bayes (GE, ν = 2) | Bayes (GE, ν = −2) |
|---|---|---|---|---|---|---|---|
| (3, 30, 15) | CSI | 0.0225 | 0.0205 | 0.0209 | 0.0097 | 0.0198 | 0.0092 |
| | CSII | 0.0268 | 0.0243 | 0.0257 | 0.0112 | 0.0235 | 0.0099 |
| | CSIII | 0.0312 | 0.0276 | 0.0286 | 0.0145 | 0.2647 | 0.0115 |
| (3, 30, 20) | CSI | 0.0175 | 0.0156 | 0.0167 | 0.0079 | 0.0145 | 0.0068 |
| | CSII | 0.0213 | 0.0204 | 0.0211 | 0.0085 | 0.0194 | 0.0074 |
| | CSIII | 0.0254 | 0.0239 | 0.0246 | 0.0102 | 0.0231 | 0.0087 |
| (3, 50, 35) | CSI | 0.0116 | 0.0107 | 0.0111 | 0.0068 | 0.0099 | 0.0058 |
| | CSII | 0.0147 | 0.0129 | 0.0132 | 0.0076 | 0.0125 | 0.0066 |
| | CSIII | 0.0205 | 0.0195 | 0.0199 | 0.0083 | 0.0187 | 0.0075 |
| (5, 30, 15) | CSI | 0.0273 | 0.0255 | 0.0258 | 0.0117 | 0.0245 | 0.0109 |
| | CSII | 0.0324 | 0.0305 | 0.0309 | 0.0159 | 0.0295 | 0.0134 |
| | CSIII | 0.0367 | 0.0351 | 0.0367 | 0.0178 | 0.0346 | 0.0156 |
| (5, 30, 20) | CSI | 0.0213 | 0.0202 | 0.0207 | 0.0093 | 0.0195 | 0.0082 |
| | CSII | 0.0276 | 0.0259 | 0.0261 | 0.0099 | 0.0256 | 0.0089 |
| | CSIII | 0.0347 | 0.0327 | 0.0332 | 0.0117 | 0.0325 | 0.0096 |
| (5, 50, 35) | CSI | 0.0169 | 0.0148 | 0.0156 | 0.0088 | 0.0145 | 0.0077 |
| | CSII | 0.0198 | 0.0182 | 0.0189 | 0.0094 | 0.0175 | 0.0083 |
| | CSIII | 0.0235 | 0.0214 | 0.0227 | 0.0105 | 0.0205 | 0.0095 |
Table 6. MSE of estimates for h(t).

| (k, n, m) | CS | MLE | Bayes (SEL) | Bayes (LINEX, ω = 2) | Bayes (LINEX, ω = −2) | Bayes (GE, ν = 2) | Bayes (GE, ν = −2) |
|---|---|---|---|---|---|---|---|
| (3, 30, 15) | CSI | 0.0321 | 0.0097 | 0.0107 | 0.0088 | 0.0091 | 0.0073 |
| | CSII | 0.0435 | 0.0109 | 0.0135 | 0.0092 | 0.0098 | 0.0079 |
| | CSIII | 0.0512 | 0.0123 | 0.0154 | 0.0106 | 0.0110 | 0.0084 |
| (3, 30, 20) | CSI | 0.0275 | 0.0091 | 0.0094 | 0.0081 | 0.0084 | 0.0067 |
| | CSII | 0.0346 | 0.0096 | 0.0103 | 0.0086 | 0.0093 | 0.0072 |
| | CSIII | 0.0429 | 0.0111 | 0.0124 | 0.0093 | 0.0098 | 0.0079 |
| (3, 50, 35) | CSI | 0.0198 | 0.0083 | 0.0086 | 0.0065 | 0.0075 | 0.0058 |
| | CSII | 0.0236 | 0.0087 | 0.0092 | 0.0073 | 0.0081 | 0.0064 |
| | CSIII | 0.0319 | 0.0094 | 0.0105 | 0.0084 | 0.0089 | 0.0071 |
| (5, 30, 15) | CSI | 0.0521 | 0.0124 | 0.0137 | 0.0096 | 0.0112 | 0.0085 |
| | CSII | 0.0583 | 0.0147 | 0.0153 | 0.0111 | 0.0135 | 0.0094 |
| | CSIII | 0.0622 | 0.0179 | 0.0185 | 0.0138 | 0.0165 | 0.0109 |
| (5, 30, 20) | CSI | 0.0446 | 0.0097 | 0.0101 | 0.0087 | 0.0092 | 0.0078 |
| | CSII | 0.0492 | 0.0113 | 0.0125 | 0.0093 | 0.0099 | 0.0084 |
| | CSIII | 0.0536 | 0.0128 | 0.0139 | 0.0098 | 0.0117 | 0.0089 |
| (5, 50, 35) | CSI | 0.0255 | 0.0085 | 0.0089 | 0.0075 | 0.0081 | 0.0068 |
| | CSII | 0.0376 | 0.0093 | 0.0097 | 0.0081 | 0.0087 | 0.0076 |
| | CSIII | 0.0427 | 0.0105 | 0.0119 | 0.0094 | 0.0093 | 0.0082 |
Table 7. MSE of estimates for r(t).

| (k, n, m) | CS | MLE | Bayes (SEL) | Bayes (LINEX, ω = 2) | Bayes (LINEX, ω = −2) | Bayes (GE, ν = 2) | Bayes (GE, ν = −2) |
|---|---|---|---|---|---|---|---|
| (3, 30, 15) | CSI | 0.9145 | 0.8841 | 0.9023 | 0.7992 | 0.8755 | 0.7254 |
| | CSII | 0.9542 | 0.9253 | 0.9344 | 0.8257 | 0.9128 | 0.7623 |
| | CSIII | 0.9993 | 0.9547 | 0.9637 | 0.8644 | 0.9456 | 0.7952 |
| (3, 30, 20) | CSI | 0.8347 | 0.7962 | 0.8027 | 0.7442 | 0.7834 | 0.6791 |
| | CSII | 0.8725 | 0.8355 | 0.8462 | 0.7725 | 0.8253 | 0.7248 |
| | CSIII | 0.9166 | 0.8723 | 0.8923 | 0.7999 | 0.8641 | 0.7634 |
| (3, 50, 35) | CSI | 0.7825 | 0.7481 | 0.7533 | 0.6992 | 0.7354 | 0.6223 |
| | CSII | 0.8264 | 0.7835 | 0.7964 | 0.7296 | 0.7753 | 0.6545 |
| | CSIII | 0.8736 | 0.8364 | 0.8432 | 0.7542 | 0.8255 | 0.7039 |
| (5, 30, 15) | CSI | 1.2147 | 1.0849 | 1.0009 | 0.8999 | 1.1556 | 0.8556 |
| | CSII | 1.2965 | 1.2667 | 1.2795 | 0.9536 | 1.2534 | 0.9145 |
| | CSIII | 1.3462 | 1.3148 | 1.3254 | 0.9978 | 1.3015 | 0.9652 |
| (5, 30, 20) | CSI | 1.0984 | 0.9782 | 0.9965 | 0.8546 | 0.9547 | 0.7966 |
| | CSII | 1.1568 | 1.0999 | 1.1174 | 0.9147 | 1.0987 | 0.8469 |
| | CSIII | 1.2547 | 1.1799 | 1.1867 | 0.9648 | 1.1365 | 0.8893 |
| (5, 50, 35) | CSI | 0.9278 | 0.8865 | 0.8964 | 0.7967 | 0.8795 | 0.7361 |
| | CSII | 0.9645 | 0.9257 | 0.9345 | 0.8342 | 0.9144 | 0.7564 |
| | CSIII | 1.1326 | 0.9724 | 0.9831 | 0.8824 | 0.9637 | 0.7992 |
Table 8. The AWs and CPs of 95% ACIs and CRIs for the parameters α and β.

| (k, n, m) | CS | α: ACI Width | α: ACI CP | α: CRI Width | α: CRI CP | β: ACI Width | β: ACI CP | β: CRI Width | β: CRI CP |
|---|---|---|---|---|---|---|---|---|---|
| (3, 30, 15) | CSI | 1.2654 | 0.912 | 0.7936 | 0.941 | 0.6635 | 0.918 | 0.3523 | 0.939 |
| | CSII | 1.3145 | 0.925 | 0.8234 | 0.942 | 0.7456 | 0.915 | 0.4536 | 0.941 |
| | CSIII | 1.3869 | 0.919 | 0.8736 | 0.939 | 0.8215 | 0.922 | 0.4962 | 0.938 |
| (3, 30, 20) | CSI | 1.1652 | 0.925 | 0.6645 | 0.949 | 0.5863 | 0.941 | 0.2987 | 0.951 |
| | CSII | 1.2235 | 0.918 | 0.7144 | 0.951 | 0.6241 | 0.939 | 0.3649 | 0.952 |
| | CSIII | 1.2967 | 0.922 | 0.7869 | 0.947 | 0.6863 | 0.938 | 0.4325 | 0.949 |
| (3, 50, 35) | CSI | 0.9984 | 0.936 | 0.5763 | 0.955 | 0.4496 | 0.954 | 0.2147 | 0.954 |
| | CSII | 1.1362 | 0.941 | 0.6347 | 0.949 | 0.5147 | 0.947 | 0.2869 | 0.957 |
| | CSIII | 1.2147 | 0.939 | 0.7264 | 0.951 | 0.5636 | 0.952 | 0.3468 | 0.959 |
| (5, 30, 15) | CSI | 1.4666 | 0.923 | 0.8563 | 0.941 | 0.8345 | 0.927 | 0.5524 | 0.945 |
| | CSII | 1.5123 | 0.921 | 0.9364 | 0.950 | 0.8994 | 0.925 | 0.6347 | 0.937 |
| | CSIII | 1.6325 | 0.931 | 1.0567 | 0.939 | 0.9253 | 0.919 | 0.7221 | 0.941 |
| (5, 30, 20) | CSI | 1.2954 | 0.936 | 0.7236 | 0.952 | 0.7236 | 0.954 | 0.4362 | 0.961 |
| | CSII | 1.3358 | 0.942 | 0.8147 | 0.947 | 0.7893 | 0.942 | 0.5124 | 0.942 |
| | CSIII | 1.4125 | 0.941 | 0.9362 | 0.955 | 0.8475 | 0.938 | 0.5766 | 0.948 |
| (5, 50, 35) | CSI | 1.1647 | 0.955 | 0.6472 | 0.961 | 0.6625 | 0.950 | 0.3654 | 0.963 |
| | CSII | 1.2113 | 0.949 | 0.7554 | 0.952 | 0.7334 | 0.949 | 0.4355 | 0.954 |
| | CSIII | 1.2954 | 0.951 | 0.8233 | 0.955 | 0.7993 | 0.947 | 0.5067 | 0.955 |
Table 9. The AWs and CPs of 95% ACIs and CRIs for λ and S(t).

| (k, n, m) | CS | λ: ACI Width | λ: ACI CP | λ: CRI Width | λ: CRI CP | S(t): ACI Width | S(t): ACI CP | S(t): CRI Width | S(t): CRI CP |
|---|---|---|---|---|---|---|---|---|---|
| (3, 30, 15) | CSI | 1.8932 | 0.912 | 0.8495 | 0.948 | 0.4231 | 0.939 | 0.2341 | 0.956 |
| | CSII | 1.9968 | 0.923 | 0.9635 | 0.946 | 0.4763 | 0.928 | 0.2536 | 0.951 |
| | CSIII | 2.1345 | 0.919 | 1.2145 | 0.939 | 0.4963 | 0.937 | 0.2847 | 0.955 |
| (3, 30, 20) | CSI | 1.6456 | 0.925 | 0.7145 | 0.951 | 0.3765 | 0.947 | 0.1963 | 0.959 |
| | CSII | 1.7473 | 0.934 | 0.8566 | 0.947 | 0.3954 | 0.951 | 0.2245 | 0.949 |
| | CSIII | 1.9745 | 0.939 | 0.9932 | 0.949 | 0.4165 | 0.946 | 0.2568 | 0.957 |
| (3, 50, 35) | CSI | 1.5536 | 0.951 | 0.6389 | 0.952 | 0.2965 | 0.955 | 0.1565 | 0.962 |
| | CSII | 1.6477 | 0.938 | 0.7456 | 0.955 | 0.3215 | 0.951 | 0.1848 | 0.957 |
| | CSIII | 1.8632 | 0.946 | 0.8763 | 0.949 | 0.3647 | 0.947 | 0.2269 | 0.958 |
| (5, 30, 15) | CSI | 2.3937 | 0.925 | 1.3496 | 0.951 | 0.6155 | 0.923 | 0.3997 | 0.946 |
| | CSII | 2.4568 | 0.931 | 1.4658 | 0.955 | 0.6745 | 0.933 | 0.4421 | 0.951 |
| | CSIII | 2.5661 | 0.929 | 1.6377 | 0.949 | 0.7231 | 0.941 | 0.4936 | 0.948 |
| (5, 30, 20) | CSI | 1.9998 | 0.945 | 1.0965 | 0.953 | 0.4236 | 0.939 | 0.2863 | 0.951 |
| | CSII | 2.2357 | 0.944 | 1.1354 | 0.954 | 0.4968 | 0.941 | 0.3367 | 0.944 |
| | CSIII | 2.3699 | 0.939 | 1.3659 | 0.948 | 0.5543 | 0.938 | 0.3875 | 0.941 |
| (5, 50, 35) | CSI | 1.7586 | 0.951 | 0.7963 | 0.961 | 0.3767 | 0.952 | 0.2365 | 0.961 |
| | CSII | 1.8743 | 0.948 | 0.8672 | 0.957 | 0.4362 | 0.945 | 0.2966 | 0.957 |
| | CSIII | 1.9521 | 0.952 | 0.9935 | 0.959 | 0.4839 | 0.949 | 0.3452 | 0.952 |
Table 10. The AWs and CPs of 95% ACIs and CRIs for h(t) and r(t).

| (k, n, m) | CS | h(t): ACI Width | h(t): ACI CP | h(t): CRI Width | h(t): CRI CP | r(t): ACI Width | r(t): ACI CP | r(t): CRI Width | r(t): CRI CP |
|---|---|---|---|---|---|---|---|---|---|
| (3, 30, 15) | CSI | 0.5172 | 0.925 | 0.3481 | 0.939 | 5.1654 | 0.942 | 3.6473 | 0.952 |
| | CSII | 0.5563 | 0.928 | 0.3952 | 0.932 | 5.9268 | 0.931 | 4.1287 | 0.955 |
| | CSIII | 0.6247 | 0.921 | 0.4328 | 0.934 | 6.7475 | 0.935 | 5.2473 | 0.951 |
| (3, 30, 20) | CSI | 0.4436 | 0.933 | 0.2868 | 0.945 | 4.2587 | 0.942 | 2.7849 | 0.949 |
| | CSII | 0.4863 | 0.929 | 0.3314 | 0.942 | 5.0625 | 0.951 | 3.6147 | 0.951 |
| | CSIII | 0.5361 | 0.918 | 0.3799 | 0.939 | 5.8697 | 0.947 | 4.4693 | 0.948 |
| (3, 50, 35) | CSI | 0.3961 | 0.941 | 0.2358 | 0.952 | 3.6544 | 0.949 | 2.0247 | 0.955 |
| | CSII | 0.4315 | 0.939 | 0.2863 | 0.955 | 4.5152 | 0.950 | 2.7965 | 0.961 |
| | CSIII | 0.4799 | 0.942 | 0.3157 | 0.961 | 5.1337 | 0.946 | 3.6473 | 0.957 |
| (5, 30, 15) | CSI | 0.6344 | 0.939 | 0.4583 | 0.955 | 6.2653 | 0.939 | 4.5472 | 0.941 |
| | CSII | 0.6894 | 0.928 | 0.4892 | 0.949 | 6.9112 | 0.933 | 5.1237 | 0.939 |
| | CSIII | 0.7155 | 0.941 | 0.5213 | 0.957 | 7.5334 | 0.921 | 5.8624 | 0.951 |
| (5, 30, 20) | CSI | 0.5467 | 0.955 | 0.3661 | 0.961 | 5.5632 | 0.938 | 3.4251 | 0.949 |
| | CSII | 0.5993 | 0.947 | 0.4055 | 0.958 | 6.2754 | 0.941 | 4.6557 | 0.951 |
| | CSIII | 0.6342 | 0.948 | 0.4512 | 0.953 | 6.9651 | 0.937 | 5.1125 | 0.944 |
| (5, 50, 35) | CSI | 0.4463 | 0.951 | 0.2994 | 0.962 | 4.3322 | 0.948 | 2.5757 | 0.961 |
| | CSII | 0.5083 | 0.946 | 0.3466 | 0.953 | 4.8693 | 0.945 | 3.3145 | 0.954 |
| | CSIII | 0.5637 | 0.949 | 0.3937 | 0.955 | 5.6624 | 0.949 | 4.1956 | 0.957 |
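The AW and CP columns of Tables 8–10 summarize the interval estimates over the simulation replicates: AW is the mean interval length and CP the proportion of intervals that cover the true value. A minimal sketch with made-up intervals:

```python
def aw_cp(intervals, true_value):
    """Average width (AW) and coverage probability (CP) of interval estimates."""
    m = len(intervals)
    aw = sum(u - l for l, u in intervals) / m
    cp = sum(1 for l, u in intervals if l <= true_value <= u) / m
    return aw, cp

# Four illustrative 95% intervals for a parameter whose true value is 0.5
ints = [(0.1, 0.9), (0.3, 0.7), (0.6, 1.2), (0.2, 0.8)]
aw, cp = aw_cp(ints, 0.5)
assert abs(aw - 0.6) < 1e-9 and cp == 0.75
```

A well-calibrated 95% procedure should show CP near 0.95, with smaller AW indicating a more informative interval.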
Table 11. ML and Bayes estimates for α, β, λ, S(t), h(t), and r(t).

| Parameter | MLE | Bayes (SEL) | Bayes (LINEX, ω = 2) | Bayes (LINEX, ω = −2) | Bayes (GE, ν = 2) | Bayes (GE, ν = −2) |
|---|---|---|---|---|---|---|
| α | 0.2835 | 0.4866 | 0.4723 | 0.4548 | 0.4437 | 0.4266 |
| β | 1.0837 | 1.0337 | 1.2134 | 1.0145 | 1.1965 | 0.9976 |
| λ | 0.9474 | 0.9599 | 0.9644 | 0.9343 | 0.9475 | 0.9291 |
| S(0.3) | 0.7338 | 0.9783 | 0.9455 | 0.9341 | 0.9552 | 0.9369 |
| h(0.3) | 0.4095 | 0.2944 | 0.3541 | 0.3225 | 0.3478 | 0.2899 |
| r(0.3) | 1.1288 | 2.5387 | 2.5541 | 1.7644 | 1.9658 | 1.6634 |
Table 12. 95% ACIs and CRIs of α, β, λ, S(t), h(t), and r(t).

| Parameter | ACI [Lower, Upper] | ACI Width | CRI [Lower, Upper] | CRI Width |
|---|---|---|---|---|
| α | [0.1652, 0.3758] | 0.2106 | [0.4131, 0.8026] | 0.3895 |
| β | [1.0651, 1.1026] | 0.0375 | [0.9120, 1.5972] | 0.6852 |
| λ | [0.9270, 0.9683] | 0.0414 | [0.9181, 0.9735] | 0.0554 |
| S(0.3) | [0.6915, 0.7760] | 0.0845 | [0.9299, 0.9917] | 0.0618 |
| h(0.3) | [0.3382, 0.4809] | 0.1428 | [0.2517, 0.4052] | 0.1535 |
| r(0.3) | [0.7646, 1.4930] | 0.7284 | [1.5219, 2.6354] | 1.1064 |
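The CRIs in Table 12 are highest-posterior-density credible intervals computed from the MCMC output. A common way to extract such an interval from posterior draws is the shortest-interval (Chen–Shao) device; the sketch below, with toy draws, assumes this approach rather than the authors' exact implementation:

```python
def hpd_interval(draws, cred=0.95):
    """HPD interval from MCMC draws: the shortest interval containing a
    fraction `cred` of the sorted draws (Chen-Shao style sketch)."""
    s = sorted(draws)
    n = len(s)
    j = max(1, int(cred * n))
    i = min(range(n - j), key=lambda i: s[i + j] - s[i])
    return s[i], s[i + j]

# Toy right-skewed posterior sample: the HPD interval hugs the bulk of the draws
draws = [0.1, 0.2, 0.2, 0.3, 0.3, 0.3, 0.4, 0.5, 0.9, 5.0]
lo, hi = hpd_interval(draws, cred=0.80)
assert (lo, hi) == (0.1, 0.9)
```

For a skewed posterior the HPD interval is shorter than the equal-tailed interval, which is one reason the CRI and ACI widths in Table 12 need not agree.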
Ramadan, M.M.; EL-Sagheer, R.M.; Abd-El-Monem, A. Estimating the Lifetime Parameters of the Odd-Generalized-Exponential–Inverse-Weibull Distribution Using Progressive First-Failure Censoring: A Methodology with an Application. Axioms 2024, 13, 822. https://doi.org/10.3390/axioms13120822