A periodical of the Faculty of Natural and Applied Sciences, UMYU, Katsina
ISSN: 2955 – 1145 (print); 2955 – 1153 (online)
ORIGINAL RESEARCH ARTICLE
Hauwa Abdulrahman Dangana1, Abubakar Usman2, Jamilu Garba2 and Ibrahim Abubakar Sadiq2
1Department of Public Health, Intercountry Centre for Oral Health (ICOH) for Africa, Jos, Nigeria
2Department of Statistics, Ahmadu Bello University, Zaria
Corresponding email: Ibrahim Abubakar Sadiq isabubakar@abu.edu.ng
This study examines the estimation of the scale parameter of the Weibull-Power Function Distribution (WPFD) using both maximum likelihood and Bayesian approaches. Bayesian estimation is conducted under one informative Gamma prior with hyperparameters \((a,b)\), as well as two non-informative priors: the uniform prior and Jeffreys’ prior. For each prior specification, Bayes estimators are derived under squared error, quadratic, and precautionary loss functions. Closed-form expressions for the posterior distributions and corresponding Bayes estimators are obtained. The finite-sample performance of the competing estimators is evaluated through a Monte Carlo simulation study based on 1000 replications. Estimator performance is assessed using mean squared error (MSE), bias, and coverage probability. The results indicate that all estimators are consistent, with bias and MSE decreasing as sample size increases. Across different prior specifications and parameter settings, the Bayesian estimator under the quadratic loss function consistently attains the lowest MSE, yielding reductions of approximately 10-20% relative to the maximum likelihood estimator in small and moderate samples. These findings suggest that Bayesian estimation under quadratic loss provides improved finite-sample efficiency for estimating the WPFD scale parameter, while maintaining asymptotic comparability with the maximum likelihood approach.
Keywords: Weibull distribution, Bayesian estimators, Monte Carlo simulation, maximum likelihood method, loss functions
Statistical inference is primarily guided by two major approaches: the classical (or frequentist) and the Bayesian methods. The classical approach, developed by R.A. Fisher in the 1930s, treats parameters as fixed but unknown quantities. In contrast, the Bayesian method, named after Thomas Bayes, considers parameters as random variables with uncertain values. The fundamental distinction lies in this treatment of parameters: classical inference assumes fixed parameters, while Bayesian inference models them as probabilistic. Despite the dominance of frequentist methods in practice, Bayesian approaches have gained significant traction and are now standard in many modern applications.
In Bayesian statistics, a statistical model is constructed to link observed data with underlying parameters, incorporating prior information about those parameters. Before data collection, prior distributions represent initial beliefs about the parameters. Once data are available, these beliefs are updated using the likelihood function to yield the posterior distribution, which reflects a combination of prior knowledge and data-based evidence.
Tahir et al. (2016) introduced the four-parameter Weibull-Power Function Distribution (WPFD), an extension of the traditional Power Function Distribution. Their study demonstrated the distribution’s desirable properties and practical applicability to real-world datasets. The WPFD was shown to be more flexible than related distributions. Similarly, several other models have been proposed and proven useful in diverse fields such as medicine, engineering, survival analysis, insurance, hydrology, and economics. Examples include the works of Adepoju et al. (2024a, 2024b), Isa et al. (2023), Kajuru et al. (2023), Adepoju et al. (2023), Bello et al. (2020, 2021), and Ibrahim et al. (2020a, 2020b), among others.
To estimate parameters of newly developed distributions, researchers have employed various classical estimation techniques, as seen in studies such as Adepoju et al. (2024c, 2024d), Hassan et al. (2023), Yilmaz et al. (2021), and ZeinEldin et al. (2019). Classical estimation methods do not require prior information about parameters. In contrast, Bayesian estimation relies on selecting appropriate prior distributions.
Danrimi and Abubakar (2023) applied a Bayesian approach to estimate parameters of the two-parameter Weibull distribution, using a gamma prior. Their analysis showed that Bayesian estimates outperformed those obtained via the maximum likelihood method. Similarly, Liu et al. (2021) compared classical and Bayesian methods for estimating parameters of the Power Function Distribution. Using conjugate priors under five loss functions, Squared Error, Precautionary, Weighted, DeGroot, and Linex, they found that Bayesian estimates were more efficient than MLEs.
Further comparative studies, such as that by Adepoju et al. (2021a), examined Bayesian estimation using extended Jeffreys and quasi priors under three loss functions versus the classical method. Their findings favoured the Bayesian approach, particularly under the quadratic loss function. Adepoju et al. (2021b) evaluated the performance of Bayesian and classical methods for estimating the scale parameter of the Inverse Rayleigh-Frechet distribution, concluding that the quadratic loss function yielded superior results across various priors.
Several other researchers have also reported favourable outcomes for Bayesian methods over MLEs, especially when evaluated using Monte Carlo simulations. Notable contributions include those by Eraikhuemen et al. (2020a, 2020b), Ieren et al. (2020), Ieren and Oguntunde (2018), Preda et al. (2010), Dey (2010), and Aliyu and Abubakar (2016).
Several other distributions have been developed and shown to be powerful, making them useful candidates in fields such as medicine, engineering, survival analysis, insurance, hydrology, and economics. Such models can be found in Sadiq et al. (2022, 2023a, 2023b, 2023c, 2024, 2025a, 2025b, 2026), Kajuru et al. (2023), Mohammed et al. (2025), Obafemi et al. (2024), Habu et al. (2024), Semary et al. (2025), Abd Elgawad et al. (2025), Dangana et al. (2025), Oga et al. (2025), Usman et al. (2025), and Yusuf et al. (2025), to mention but a few.
Despite the growing body of literature on Bayesian estimation for lifetime distributions, several gaps remain in the context of the Weibull-Power Function Distribution (WPFD). Existing studies on WPFD have primarily focused on distributional properties, model flexibility, and classical estimation techniques, with limited attention given to Bayesian inference (Dangana et al., 2025). Moreover, where Bayesian methods have been applied to related Weibull-type or power-function families, the emphasis has often been on shape parameters or on a single loss function, with little comparative analysis across different loss structures or prior specifications.
The present study fills this gap by providing a focused Bayesian analysis of the scale parameter of the WPFD, which plays a critical role in reliability and survival applications. Unlike previous works, this paper derives closed-form Bayes estimators of the scale parameter under three distinct loss functions, squared error, quadratic, and precautionary, thereby allowing a systematic assessment of the impact of loss-function choice on estimator performance. In addition, the study conducts a comprehensive comparison between an informative Gamma prior and two commonly used non-informative priors (uniform and Jeffreys), an aspect that has not been jointly examined for the WPFD in existing literature.
To the best of our knowledge, no prior study has simultaneously investigated Bayesian estimation of the WPFD scale parameter under multiple loss functions and contrasting prior assumptions, nor compared these estimators directly with the maximum likelihood estimator using extensive Monte Carlo simulations; indeed, Bayesian estimation of the WPFD scale parameter under alternative loss functions had not previously been explored. By addressing this gap, the present work provides practical guidance on prior and loss-function selection for WPFD scale estimation and contributes to the broader literature on Bayesian inference for flexible lifetime distributions.
For ease of reference and to enhance reproducibility of the derivations, we summarise the principal notation used throughout the manuscript in Table 1.
Table 1: Summary of the principal notation used throughout the paper.
| Symbol | Description |
|---|---|
| \[Z\] | Random variable following the Weibull–Power Function Distribution (WPFD) |
| \[z_{1},z_{2},\ldots,z_{n}\] | Random sample of size \(n\) from WPFD |
| \[n\] | Sample size (written \(w\) in the derivations below, where \(n\) also denotes a WPFD shape parameter) |
| \[p\] | Scale parameter of the WPFD (parameter of interest) |
| \[v,\ q\] | Shape parameters of the WPFD |
| \[\eta\] | Constant of proportionality in the likelihood for \(p\), independent of \(p\) |
| \[f(z;p)\] | Probability density function of WPFD |
| \[F(z;p)\] | Cumulative distribution function of WPFD |
| \[L(p)\] | Likelihood function |
| \[\ell(p)\] | Log-likelihood function |
| \[\pi(p)\] | Prior distribution of the parameter \(p\) |
| \[\pi_{J}(p)\] | Jeffreys prior |
| \[a,b\] | Hyperparameters of the Gamma informative prior |
| \[g(p \mid z)\] | Posterior distribution of \(p\) |
| \[I(p)\] | Fisher information for the parameter \(p\) |
| SELF | Squared Error Loss Function |
| QLF | Quadratic Loss Function |
| PLF | Precautionary Loss Function |
| MSE | Mean Squared Error |
| \[{\widehat{p}}_{MLE}\] | Maximum Likelihood Estimator of \(p\) |
| \[{\widehat{p}}_{B}\] | Bayesian estimator of \(p\) |
Tahir et al. (2016) define the probability density function (PDF) and cumulative distribution function (CDF) of the Weibull-Power Function Distribution (WPFDt) as follows:
\(f_{WPFDt}(z) = \frac{pnv^{q}qz^{qn - 1}}{\left( v^{q} - z^{q} \right)^{n + 1}}e^{- p\left\lbrack \frac{z^{q}}{v^{q} - z^{q}} \right\rbrack^{n}}\) (1)
\(F_{WPFDt}(z) = 1 - e^{- p\left\lbrack \frac{z^{q}}{v^{q} - z^{q}} \right\rbrack^{n}}\) (2)
for \(0 < z < v\) and \(p,n,v,q > 0\), where \(p\) is the scale parameter of interest, while \(n\), \(v\) and \(q\) govern the shape of the distribution.
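For simulation purposes, WPFDt variates can be generated by inverting the CDF form used in the likelihood derivations below, \(F(z) = 1 - e^{- p\left\lbrack z^{q}/\left( v^{q} - z^{q} \right) \right\rbrack^{n}}\) for \(0 < z < v\). The following Python sketch is our own illustration (function names are not from the source):

```python
import math
import random

def wpfd_quantile(u, p, n, v, q):
    """Invert F(z) = 1 - exp(-p * (z^q / (v^q - z^q))^n) for 0 < z < v."""
    s = (-math.log(1.0 - u) / p) ** (1.0 / n)   # s = z^q / (v^q - z^q)
    return v * (s / (1.0 + s)) ** (1.0 / q)

def wpfd_sample(w, p, n, v, q, seed=0):
    """Draw a sample of size w via inverse-transform sampling."""
    rng = random.Random(seed)
    return [wpfd_quantile(rng.random(), p, n, v, q) for _ in range(w)]
```

Setting \(u = F(z)\) and solving for \(z\) gives the quantile function used here; the round trip \(F(F^{-1}(u)) = u\) provides a quick correctness check.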
When a random sample \(z_{1},z_{2},\ldots,z_{w}\) is drawn from a population \(Z\) with density \(f_{WPFDt}(z)\), the likelihood function \(B\left( Z|p,n,v,q \right)\) represents the joint probability density of the individual observations, with the PDF of the WPFDt given in equation (1).
The likelihood function is specified by:
\(B\left( Z|p,n,v,q \right) \propto \left( pnv^{q}q \right)^{w}\prod_{i = 1}^{w}\left( \frac{z_{i}^{qn - 1}}{\left( v^{q} - z_{i}^{q} \right)^{n + 1}} \right)e^{- p\sum_{i = 1}^{w}\left\lbrack \frac{z_{i}^{q}}{v^{q} - z_{i}^{q}} \right\rbrack^{n}}\) (3)
The likelihood function for \(p\) is given by:
\(B\left( z|p \right) = \eta p^{w}e^{- p\sum_{i = 1}^{w}\left\lbrack \frac{z_{i}^{q}}{v^{q} - z_{i}^{q}} \right\rbrack^{n}}\) (4)
where \(\eta = \left( nv^{q}q \right)^{w}\prod_{i = 1}^{w}{z_{i}}^{qn - 1}\left( v^{q} - {z_{i}}^{q} \right)^{- n - 1}\) is a constant independent of the scale parameter \(p\).
Differentiating the logarithm of \(B\) with respect to \(p\), equating to zero, and solving for \(\widehat{p}\):
\(\frac{\partial \log B}{\partial p} = \frac{w}{p} - \sum_{i = 1}^{w}\left\lbrack \frac{z_{i}^{q}}{v^{q} - z_{i}^{q}} \right\rbrack^{n} = 0\) (5)
\(\Rightarrow \widehat{p} = w\left( \sum_{i = 1}^{w}\left\lbrack \frac{z_{i}^{q}}{v^{q} - z_{i}^{q}} \right\rbrack^{n} \right)^{- 1}\) (6)
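Equation (6) can be evaluated directly once the remaining parameters \(n\), \(v\) and \(q\) are fixed at known values, as in the simulation study. A minimal sketch (helper names are ours):

```python
def transform_sum(sample, n, v, q):
    """T = sum of [z_i^q / (v^q - z_i^q)]^n over the sample (the statistic in eq. 6)."""
    return sum((z**q / (v**q - z**q))**n for z in sample)

def mle_scale(sample, n, v, q):
    """Maximum likelihood estimate of p from equation (6): p_hat = w / T."""
    return len(sample) / transform_sum(sample, n, v, q)
```

The statistic \(T\) computed here is reused by every Bayes estimator derived below, since all posteriors depend on the data only through \(T\).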
Combining the likelihood function with the Gamma informative prior using Bayes’ theorem yields the posterior distribution of the scale parameter, which also follows a Gamma-type distribution with updated hyperparameters. This posterior distribution is subsequently used to derive Bayes estimators under the squared error, quadratic, and precautionary loss functions (Dangana et al., 2025).
The likelihood function represents the joint probability distribution of the observed data, conceptualised as a function of the model's parameters with the collected data points treated as fixed. Given that the observations \(ẕ = \left( z_{1},z_{2},...,z_{w} \right)\) are independently obtained, the likelihood function can be formulated as:
\(B\left( ẕ|p,n,v,q \right) = B\left( z_{1},z_{2},...,z_{w}|p,n,v,q \right) = \prod_{i = 1}^{w}{B\left( z_{i}|p,n,v,q \right)}\) (7)
It is important to note that, for a given\(z\), the likelihood is expressed as a function of \(p\), whereas for a given \(p\), the pdf is expressed as a function of \(z\).
To determine the posterior distribution, denoted as \(B\left( p|ẕ \right)\), which illustrates the probability distribution of a parameter once the relevant data has been observed, we employ Bayes' theorem.
\(b\left( p|ẕ \right) = \frac{b(p)B\left( ẕ|p \right)}{g(z)}\) (8)
Where g(z) denotes the marginal distribution of Z and
\(g(z) = \int_{0}^{\infty}{b(p)B\left( ẕ|p \right)dp}\) (9)
where \(b(p)\) represents the prior distribution and \(B\left( ẕ|p \right)\) represents the likelihood function.
The uniform prior, when applied to the scale parameter \(p\), serves as a non-informative prior: it assigns equal weight to all possible values of \(p\), thereby reflecting a lack of strong prior beliefs or information about the parameter.
\(b(p) \propto 1;0 < p < \infty\) (10)
Recall that the posterior distribution of the scale parameter \(p\) is defined as:
\(b\left( p|ẕ \right) = \frac{b(p)B\left( ẕ|p \right)}{\int_{0}^{\infty}{b(p)B\left( ẕ|p \right)dp}}\) (11)
It is important to remember that the likelihood function for the WPFDt, specifically concerning its scale parameter, is given in equations (12) and (13):
\(B\left( Z|p,n,v,q \right) \propto \left( pnv^{q}q \right)^{w}\prod_{i = 1}^{w}\left( \frac{z^{qn - 1}}{\left( v^{q} - z^{q} \right)^{n + 1}} \right)e^{- p\sum_{i = 1}^{w}\left\lbrack \frac{z^{q}}{v^{q} - z^{q}} \right\rbrack^{n}}\) (12)
and
\(B\left( Z|p \right) \propto p^{w}e^{- p\sum_{i = 1}^{w}\left\lbrack \frac{z^{q}}{v^{q} - z^{q}} \right\rbrack^{n}}\) (13)
Now, let
\(H = \int_{0}^{\infty}{B\left( ẕ|p \right)b(p)dp}\) (14)
Substituting for \(b(p)\) and \(B\left( ẕ|p \right)\), we have:
\(H = \eta\int_{0}^{\infty}{p^{w}e^{- p\sum_{i = 1}^{w}\left\lbrack \frac{z^{q}}{v^{q} - z^{q}} \right\rbrack^{n}}dp}\) (15)
Moreover, by applying the integration by substitution method to equation (15), the subsequent result is derived:
Let
\(u = p\sum_{i = 1}^{w}\left\lbrack \frac{z^{q}}{v^{q} - z^{q}} \right\rbrack^{n} \Rightarrow p = \frac{u}{\sum_{i = 1}^{w}\left\lbrack \frac{z^{q}}{v^{q} - z^{q}} \right\rbrack^{n}}\) (16)
\(dp = \frac{du}{\sum_{i = 1}^{w}\left\lbrack \frac{z^{q}}{v^{q} - z^{q}} \right\rbrack^{n}}\) (17)
By substituting the expressions for \(p\) and \(dp\) into equation (15) and subsequently simplifying the resulting equation, we obtain the following expression.
\(H = \eta\int_{0}^{\infty}{{(\frac{u}{\sum_{i = 1}^{w}\left\lbrack \frac{z^{q}}{v^{q} - z^{q}} \right\rbrack^{n}})}^{w}e^{- u}\frac{du}{\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}}}\) (18)
\(H = \eta\frac{1}{{\lbrack\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}\rbrack}^{w + 1}}\int_{0}^{\infty}{u^{w}e^{- u}du}\) (19)
Recall that \(\int_{0}^{\infty}{y^{t - 1}e^{- y}dy} = \Gamma(t)\) and \(\int_{0}^{\infty}{y^{t}e^{- y}dy} = \Gamma(t + 1)\).
Consequently,
\(H = \frac{\eta\Gamma(w + 1)}{\left\lbrack \sum_{i = 1}^{w}\left\lbrack \frac{z^{q}}{v^{q} - z^{q}} \right\rbrack^{n} \right\rbrack^{w + 1}}\) (20)
The posterior distribution under a uniform prior is obtained by substituting for \(H\), \(b(p)\) and \(B\left( ẕ|p \right)\) in equation (11) and simplifying. The resulting expression is as follows:
\(B(p|ẕ) = \frac{\eta p^{w}e^{- p\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}}}{\frac{\eta\Gamma(w + 1)}{{\lbrack\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}\rbrack}^{w + 1}}}\) (21)
\(B(p|ẕ) = \frac{p^{w}{\lbrack\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}\rbrack}^{w + 1}}{\Gamma(w + 1)e^{p\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}}}\) (22)
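Equation (22) is recognisable as a Gamma density with shape \(w + 1\) and rate \(T = \sum_{i = 1}^{w}\left\lbrack z_{i}^{q}/(v^{q} - z_{i}^{q}) \right\rbrack^{n}\). A quick numerical check (our own sketch, with illustrative values of \(w\) and \(T\)) confirms that it integrates to one and has mean \((w + 1)/T\):

```python
import math

def uniform_posterior(p, w, T):
    """Posterior density under the uniform prior, eq. (22): Gamma(w + 1, rate T)."""
    return T**(w + 1) * p**w * math.exp(-p * T) / math.gamma(w + 1)

def trapezoid(f, a, b, m=20000):
    """Simple trapezoidal quadrature of f on [a, b] with m subintervals."""
    h = (b - a) / m
    return h * (0.5 * f(a) + sum(f(a + k * h) for k in range(1, m)) + 0.5 * f(b))
```

With \(w = 10\) and \(T = 5\), integrating the density over \([0, 10]\) gives 1 to within numerical error, and weighting the integrand by \(p\) recovers the posterior mean \(11/5\).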
Jeffreys’ Prior for the Scale Parameter
Jeffreys’ prior for a parameter \(p\) is defined by (Bernardo et al., 1994) as
\(\pi_{J}(p) \propto \sqrt{I(p)}\) (23)
where \(I(p)\) denotes the Fisher information given by:
\(I(p) = - \mathbb{E}\left\lbrack \frac{\partial^{2}}{\partial p^{2}}\log f(Z;p) \right\rbrack\) (24)
For the Weibull-Power Function Distribution, the parameter \(p\) enters the likelihood purely as a scale-type (rate) parameter: each transformed observation \(\left\lbrack z_{i}^{q}/\left( v^{q} - z_{i}^{q} \right) \right\rbrack^{n}\) is exponentially distributed with rate \(p\). For parameters of this form, the Fisher information satisfies (Dangana et al., 2025):
\(I(p) \propto \frac{1}{p^{2}}\) (25)
a result that holds generally for scale families under regularity conditions.
Consequently, the Jeffreys prior for the WPFD scale parameter is obtained (Dangana et al., 2025) as:
\(\pi_{J}(p) \propto \sqrt{I(p)} \propto \frac{1}{p},\quad p > 0\) (26)
This choice ensures invariance under reparameterization and has been widely adopted for scale parameters in Bayesian reliability and lifetime modelling (Kass and Wasserman, 1996).
Jeffreys' non-informative prior for the WPFDt scale parameter \(p\) is defined (Dangana et al., 2025) as:
\(b(p) \propto \frac{1}{p};0 < p < \infty\) (27)
The posterior distribution of the scale parameter \(p\), given the data and using Jeffreys' prior, is defined (Dangana et al., 2025) as:
\(b\left( p|ẕ \right) = \frac{b(p)B\left( ẕ|p \right)}{\int_{0}^{\infty}{b(p)B\left( ẕ|p \right)dp}}\) (28)
Now, let
\(H = \int_{0}^{\infty}{B\left( ẕ|p \right)b(p)dp}\) (29)
Substituting for \(b(p)\) and \(B\left( ẕ|p \right)\), we have:
\(H = \eta\int_{0}^{\infty}{p^{w - 1}e^{- p\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}}dp}\) (30)
Furthermore, by applying integration by substitution to equation (30) and simplifying, we obtain:
\(H = \eta\int_{0}^{\infty}{{(\frac{u}{\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}})}^{w - 1}e^{- u}\frac{du}{\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}}}\) (31)
\(H = \eta\frac{1}{{\lbrack\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}\rbrack}^{w}}\int_{0}^{\infty}{u^{w - 1}e^{- u}du}\) (32)
\(H = \frac{\eta\Gamma(w)}{\left\lbrack \sum_{i = 1}^{w}\left\lbrack \frac{z^{q}}{v^{q} - z^{q}} \right\rbrack^{n} \right\rbrack^{w}}\) (33)
By substituting the expressions for \(H\), \(b(p)\) and \(B\left( ẕ|p \right)\) into equation (28) and then simplifying the result, we derive the posterior distribution for the parameter under Jeffreys' prior. The resulting expression is as follows:
\(B(p|ẕ) = \frac{{\lbrack\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}\rbrack}^{w}p^{w - 1}e^{- p\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}}}{\Gamma(w)}\) (34)
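Equation (34) is likewise a Gamma density, now with shape \(w\) and rate \(T\), so its posterior mean is \(w/T\). A short numerical verification in the same spirit (the values of \(w\) and \(T\) are illustrative):

```python
import math

def jeffreys_posterior(p, w, T):
    """Posterior density under Jeffreys' prior, eq. (34): Gamma(w, rate T)."""
    return T**w * p**(w - 1) * math.exp(-p * T) / math.gamma(w)

def numeric_mean(w, T, upper, m=20000):
    """Trapezoidal estimate of the posterior mean over [0, upper]."""
    h = upper / m
    vals = [(k * h) * jeffreys_posterior(k * h, w, T) for k in range(m + 1)]
    return h * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])
```

With \(w = 10\) and \(T = 5\), the numerical mean agrees with the analytical value \(w/T = 2\).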
In addition to the non-informative uniform and Jeffreys’ priors, an informative prior distribution is assumed for the scale parameter of the Weibull-Power Function Distribution. Specifically, the scale parameter \(p\) is assumed to follow a Gamma distribution with shape parameter \(a > 0\) and rate parameter \(b > 0\) (Dangana et al., 2025), denoted by:
\(\pi(p) = \frac{b^{a}}{\Gamma(a)}p^{a - 1}\exp\left\{ - bp \right\},\quad p > 0\) (35)
The Gamma prior is chosen due to its flexibility, support on the positive real line, and its widespread use as an informative prior for scale parameters in reliability and survival analysis. Moreover, it facilitates analytical tractability in Bayesian inference (Dangana et al., 2025).
The hyperparameters \(a\) and \(b\) are selected to reflect moderate prior information about the scale parameter. In particular, the prior mean \(E(p) = a/b\) is set close to the true scale parameter value used in the simulation study, while the prior variance \(\text{Var}(p) = a/b^{2}\) is chosen to allow reasonable dispersion, thereby avoiding excessive prior dominance (Dangana et al., 2025). In the simulation study, we set \(a = 2\) and \(b = 4\), corresponding to a prior mean of 0.5 and a variance of 0.125.
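The moment-matching step described above can be made explicit: solving \(a/b = m\) and \(a/b^{2} = s^{2}\) for a target prior mean \(m\) and variance \(s^{2}\) gives \(b = m/s^{2}\) and \(a = m^{2}/s^{2}\). A small sketch (the target values below are illustrative):

```python
def gamma_hyperparameters(mean, variance):
    """Solve E(p) = a/b = mean and Var(p) = a/b^2 = variance for (a, b)."""
    b = mean / variance
    a = mean * b  # equivalently mean**2 / variance
    return a, b
```

For instance, a target prior mean of 0.5 with variance 0.125 yields \(a = 2\) and \(b = 4\).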
We estimate the WPFDt's scale parameter using the posterior distribution derived from the uniform prior. The estimation is evaluated against three different loss functions, where a loss function \(B\left( p,\widehat{p} \right)\) defines the cost incurred when an estimate \(\widehat{p}\) deviates from the true parameter value \(p\).
The squared error loss function, which we will use to estimate the parameter p, is defined as follows
\(B\left( p,p_{SELFt} \right) = \left( p - p_{SELFt} \right)^{2}\) (36)
The Bayes estimator under a uniform prior and the Squared Error Loss Function (SELFt) is derived as:
\(p_{SELFt} = E\left( p|ẕ \right)\) (37)
\(E\left( p|ẕ \right) = \int_{0}^{\infty}{pb\left( p|ẕ \right)}dp\) (38)
\(B(p|ẕ) = \frac{p^{w}{\lbrack\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}\rbrack}^{w + 1}}{\Gamma(w + 1)e^{p\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}}}\) (39)
Substituting for \(B\left( p|ẕ \right)\) from equation (39) into equation (38), we have:
\(E(p|ẕ) = \frac{{\lbrack\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}\rbrack}^{w + 1}}{\Gamma(w + 1)}\int_{0}^{\infty}{p^{w + 1}e^{- p\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}}dp}\) (40)
We first apply the method of integration by substitution to equation (40). After subsequent simplification, the expression becomes:
\(E\left( p|ẕ \right) = (w + 1)\left\lbrack \sum_{i = 1}^{w}\left\lbrack \frac{z^{q}}{v^{q} - z^{q}} \right\rbrack^{n} \right\rbrack^{- 1}\) (41)
The quadratic loss function is defined (Dangana et al., 2025; Bernardo et al., 1994; Kass and Wasserman, 1996) as:
\(B\left( p,p_{QLFt} \right) = \left( \frac{p - p_{QLFt}}{p} \right)^{2}\) (42)
The Bayes estimator under a uniform prior and the Quadratic Loss Function (QLFt) is derived as:
\(p_{QLFt} = \frac{E\left( p^{- 1}|ẕ \right)}{E\left( p^{- 2}|ẕ \right)} = \frac{\int_{0}^{\infty}{p^{- 1}B\left( p|ẕ \right)dp}}{\int_{0}^{\infty}{p^{- 2}B\left( p|ẕ \right)dp}}\) (43)
\(E\left( p^{- 1}|\underline{z} \right) = \int_{0}^{\infty}{p^{- 1}B\left( p|\underline{z} \right)dp}\) (44)
Now, recall that under the assumption of a uniform prior,
\(B(p|ẕ) = \frac{p^{w}{\lbrack\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}\rbrack}^{w + 1}}{\Gamma(w + 1)e^{p\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}}}\) (45)
By substituting the value of \(B\left( p|ẕ \right)\) from equation (45) into equation (44), we obtain:
\(E(p^{- 1}|ẕ) = \frac{{\lbrack\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}\rbrack}^{w + 1}}{\Gamma(w + 1)}\int_{0}^{\infty}{p^{w - 1}e^{- p\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}}dp}\) (46)
We first apply the method of integration by substitution to the expression in equation (46). After subsequent simplification, the result is:
\(E\left( p^{- 1}|ẕ \right) = \frac{\left\lbrack \sum_{i = 1}^{w}\left\lbrack \frac{z^{q}}{v^{q} - z^{q}} \right\rbrack^{n} \right\rbrack\Gamma(w)}{\Gamma(w + 1)}\) (47)
\(E\left( p^{- 2}|ẕ \right) = \int_{0}^{\infty}{p^{- 2}B\left( p|ẕ \right)dp}\) (48)
It should be recalled that, under the assumption of a uniform prior distribution,
\(B(p|ẕ) = \frac{p^{w}{\lbrack\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}\rbrack}^{w + 1}}{\Gamma(w + 1)e^{p\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}}}\) (49)
Upon substituting the value of \(B\left( p|ẕ \right)\)into equation (48), we obtain:
\(E(p^{- 2}|ẕ) = \frac{{\lbrack\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}\rbrack}^{w + 1}}{\Gamma(w + 1)}\int_{0}^{\infty}{p^{w - 2}e^{- p\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}}dp}\) (50)
By applying the method of integration by substitution to equation (50) and simplifying, we obtain:
\(E\left( p^{- 2}|ẕ \right) = \frac{\left\lbrack \sum_{i = 1}^{w}\left\lbrack \frac{z^{q}}{v^{q} - z^{q}} \right\rbrack^{n} \right\rbrack^{2}\Gamma(w - 1)}{\Gamma(w + 1)}\) (51)
\(p_{QLFt} = \frac{E\left( p^{- 1}|ẕ \right)}{E\left( p^{- 2}|ẕ \right)}\) (52)
This implies that
\(p_{QLFt} = \frac{\left\lbrack \sum_{i = 1}^{w}\left\lbrack \frac{z^{q}}{v^{q} - z^{q}} \right\rbrack^{n} \right\rbrack\Gamma(w)}{\Gamma(w + 1)} \div \frac{\left\lbrack \sum_{i = 1}^{w}\left\lbrack \frac{z^{q}}{v^{q} - z^{q}} \right\rbrack^{n} \right\rbrack^{2}\Gamma(w - 1)}{\Gamma(w + 1)}\) (53)
\(p_{QLFt} = \frac{(w - 1)}{\left\lbrack \sum_{i = 1}^{w}\left\lbrack \frac{z^{q}}{v^{q} - z^{q}} \right\rbrack^{n} \right\rbrack}\) (54)
The precautionary loss function (PLFt) is defined as:
\(b\left( p_{PLFt},p \right) = \frac{\left( p_{PLFt} - p \right)^{2}}{p}\) (55)
Similarly, the derivation of the Bayes estimator using PLFt under a uniform prior is obtained as follows:
\(p_{PLF} = \left\{ E\left( p^{2}|ẕ \right) \right\}^{\frac{1}{2}} = \sqrt{E\left( p^{2}|ẕ \right)}\) (56)
\(E\left( p^{2}|ẕ \right) = \int_{0}^{\infty}{p^{2}B\left( p|ẕ \right)dp}\) (57)
Recall that under the assumption of a uniform prior,
\(B(p|ẕ) = \frac{p^{w}{\lbrack\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}\rbrack}^{w + 1}}{\Gamma(w + 1)e^{p\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}}}\) (58)
By substituting\(B\left( p|ẕ \right)\) into equation (57), we obtain:
\(E(p^{2}|ẕ) = \frac{{\lbrack\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}\rbrack}^{w + 1}}{\Gamma(w + 1)}\int_{0}^{\infty}{p^{w + 2}e^{- p\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}}dp}\) (59)
Applying integration by substitution to equation (59) and simplifying, we obtain:
\(E(p^{2}|ẕ) = \frac{{\lbrack\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}\rbrack}^{- 2}}{\Gamma(w + 1)}\int_{0}^{\infty}{u^{w + 3 - 1}e^{- u}du}\) (60)
\(E\left( p^{2}|ẕ \right) = \frac{\Gamma(w + 3)\left\lbrack \sum_{i = 1}^{w}\left\lbrack \frac{z^{q}}{v^{q} - z^{q}} \right\rbrack^{n} \right\rbrack^{- 2}}{\Gamma(w + 1)}\) (61)
\(p_{PLFt} = \left\{ E\left( p^{2}|ẕ \right) \right\}^{\frac{1}{2}} = \left\{ (w + 1)(w + 2)\left\lbrack \sum_{i = 1}^{w}\left\lbrack \frac{z^{q}}{v^{q} - z^{q}} \right\rbrack^{n} \right\rbrack^{- 2} \right\}^{\frac{1}{2}}\) (62)
\(p_{PLFt} = \left\lbrack (w + 1)(w + 2) \right\rbrack^{\frac{1}{2}}\left\lbrack \sum_{i = 1}^{w}\left\lbrack \frac{z^{q}}{v^{q} - z^{q}} \right\rbrack^{n} \right\rbrack^{- 1}\) (63)
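The three uniform-prior estimators in equations (41), (54) and (63) differ only in the multiplier applied to \(1/T\), with \(T = \sum_{i = 1}^{w}\left\lbrack z_{i}^{q}/(v^{q} - z_{i}^{q}) \right\rbrack^{n}\). A compact sketch (our own naming):

```python
import math

def uniform_prior_estimators(w, T):
    """Bayes estimators of p under the uniform prior (eqs. 41, 54 and 63)."""
    return {
        "SELF": (w + 1) / T,                       # squared error loss, eq. (41)
        "QLF":  (w - 1) / T,                       # quadratic loss, eq. (54)
        "PLF":  math.sqrt((w + 1) * (w + 2)) / T,  # precautionary loss, eq. (63)
    }
```

Since \(w - 1 < w < w + 1 < \sqrt{(w + 1)(w + 2)}\), the ordering QLF < MLE < SELF < PLF holds for every sample.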
The scale parameter of the WPFDt is estimated under three loss functions, based on the posterior distribution derived from Jeffreys' prior.
The Bayes estimator under SELFt with Jeffreys' prior is derived as:
\(p_{SELFt} = E\left( p|ẕ \right)\) (64)
\(E\left( p|ẕ \right) = \int_{0}^{\infty}{pB\left( p|ẕ \right)}dp\) (65)
Now recall that for Jeffreys' prior,
\(B(p|ẕ) = \frac{{\lbrack\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}\rbrack}^{w}p^{w - 1}e^{- p\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}}}{\Gamma(w)}\) (66)
By substituting \(B\left( p|ẕ \right)\)into equation (65), we obtain:
\(E(p|ẕ) = \frac{{\lbrack\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}\rbrack}^{w}}{\Gamma(w)}\int_{0}^{\infty}{p^{w}e^{- p\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}}dp}\) (67)
Applying integration by substitution to equation (67) and simplifying, we obtain:
\(E(p|ẕ) = \frac{{\lbrack\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}\rbrack}^{- 1}}{\Gamma(w)}\int_{0}^{\infty}{u^{w + 1 - 1}e^{- u}du}\) (68)
\(p_{SELFt} = E\left( p|ẕ \right) = w\left\lbrack \sum_{i = 1}^{w}\left\lbrack \frac{z^{q}}{v^{q} - z^{q}} \right\rbrack^{n} \right\rbrack^{- 1}\) (69)
The Bayes estimator under QLFt with Jeffreys' prior is derived as:
\(p_{QLFt} = \frac{E\left( p^{- 1}|ẕ \right)}{E\left( p^{- 2}|ẕ \right)} = \frac{\int_{0}^{\infty}{p^{- 1}B\left( p|ẕ \right)dp}}{\int_{0}^{\infty}{p^{- 2}B\left( p|ẕ \right)dp}}\) (70)
\(E\left( p^{- 1}|ẕ \right) = \int_{0}^{\infty}{p^{- 1}B\left( p|ẕ \right)dp}\) (71)
Recall that for Jeffreys' prior,
\(B(p|ẕ) = \frac{{\lbrack\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}\rbrack}^{w}p^{w - 1}e^{- p\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}}}{\Gamma(w)}\) (72)
Substituting for \(B\left( p|ẕ \right)\) in equation (71), we have:
\(E(p^{- 1}|ẕ) = \frac{{\lbrack\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}\rbrack}^{w}}{\Gamma(w)}\int_{0}^{\infty}{p^{w - 2}e^{- p\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}}dp}\) (73)
Applying integration by substitution to equation (73) and simplifying, we obtain:
\(E\left( p^{- 1}|ẕ \right) = \frac{\left\lbrack \sum_{i = 1}^{w}\left\lbrack \frac{z^{q}}{v^{q} - z^{q}} \right\rbrack^{n} \right\rbrack}{(w - 1)}\) (74)
\(E\left( p^{- 2}|\underline{z} \right) = \int_{0}^{\infty}{p^{- 2}B\left( p|ẕ \right)dp}\) (75)
Now recall that for Jeffreys' prior,
\(B(p|ẕ) = \frac{{\lbrack\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}\rbrack}^{w}p^{w - 1}e^{- p\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}}}{\Gamma(w)}\) (76)
Substituting for \(B\left( p|ẕ \right)\) in equation (75), we obtain:
\(E(p^{- 2}|ẕ) = \frac{{\lbrack\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}\rbrack}^{w}}{\Gamma(w)}\int_{0}^{\infty}{p^{w - 3}e^{- p\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}}dp}\) (77)
On applying the substitution method to equation (77) and simplifying, the result is
\(E\left( p^{- 2}|ẕ \right) = \frac{\left\lbrack \sum_{i = 1}^{w}\left\lbrack \frac{z^{q}}{v^{q} - z^{q}} \right\rbrack^{n} \right\rbrack^{2}}{(w - 1)(w - 2)}\) (78)
Recall that
\(p_{QLFt} = \frac{E\left( p^{- 1}|ẕ \right)}{E\left( p^{- 2}|ẕ \right)}\) (79)
This implies that
\(p_{QLFt} = \frac{\left\lbrack \sum_{i = 1}^{w}\left\lbrack \frac{z^{q}}{v^{q} - z^{q}} \right\rbrack^{n} \right\rbrack}{(w - 1)} \div \frac{\left\lbrack \sum_{i = 1}^{w}\left\lbrack \frac{z^{q}}{v^{q} - z^{q}} \right\rbrack^{n} \right\rbrack^{2}}{(w - 1)(w - 2)}\) (80)
\(p_{QLFt} = \frac{(w - 2)}{\sum_{i = 1}^{w}\left\lbrack \frac{z^{q}}{v^{q} - z^{q}} \right\rbrack^{n}}\) (81)
Similarly, the Bayes estimator under PLFt with Jeffreys' prior is obtained, following the methodology of Azam and Ahmad (2014):
\(p_{PLFt} = \left\{ E\left( p^{2}|ẕ \right) \right\}^{\frac{1}{2}} = \sqrt{E\left( p^{2}|ẕ \right)}\) (82)
\(E\left( p^{2}|ẕ \right) = \int_{0}^{\infty}{p^{2}B\left( p|ẕ \right)dp}\) (83)
As a reminder, for Jeffreys' prior,
\(B(p|\underline{z}) = \frac{{\lbrack\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}\rbrack}^{w}p^{w - 1}e^{- p\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}}}{\Gamma(w)}\) (84)
Upon substituting for \(B\left( p|ẕ \right)\) in equation (83), the following expression is obtained:
\(E(p^{2}|ẕ) = \frac{{\lbrack\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}\rbrack}^{w}}{\Gamma(w)}\int_{0}^{\infty}{p^{w + 1}e^{- p\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}}dp}\) (85)
The expression in equation (85) is solved using integration by substitution and simplified, resulting in:
\(E(p^{2}|ẕ) = \frac{{\lbrack\sum_{i = 1}^{w}{\lbrack\frac{z^{q}}{v^{q} - z^{q}}\rbrack}^{n}\rbrack}^{- 2}}{\Gamma(w)}\int_{0}^{\infty}{u^{w + 2 - 1}e^{- u}du}\) (86)
\(E\left( p^{2}|ẕ \right) = w(w + 1)\left\lbrack \sum_{i = 1}^{w}\left\lbrack \frac{z^{q}}{v^{q} - z^{q}} \right\rbrack^{n} \right\rbrack^{- 2}\) (87)
\(p_{PLFt} = \left\lbrack w(w + 1) \right\rbrack^{\frac{1}{2}}\left\lbrack \sum_{i = 1}^{w}\left\lbrack \frac{z^{q}}{v^{q} - z^{q}} \right\rbrack^{n} \right\rbrack^{- 1}\) (88)
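Under Jeffreys' prior the same pattern appears with multipliers \(w\), \(w - 2\) and \(\sqrt{w(w + 1)}\); notably, the SELFt estimator of equation (69) coincides with the MLE of equation (6). A matching sketch (our own naming):

```python
import math

def jeffreys_prior_estimators(w, T):
    """Bayes estimators of p under Jeffreys' prior (eqs. 69, 81 and 88)."""
    return {
        "SELF": w / T,                       # eq. (69); identical to the MLE w/T
        "QLF":  (w - 2) / T,                 # eq. (81)
        "PLF":  math.sqrt(w * (w + 1)) / T,  # eq. (88)
    }
```

As in the uniform-prior case, the QLF estimator shrinks the MLE towards zero while the PLF estimator inflates it.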
Detailed Derivation for the Informative Prior under the Quadratic Loss Function
To enhance transparency, we present one representative derivation in full detail: the posterior distribution and Bayes estimator under the informative Gamma prior and Quadratic Loss Function (QLF) (Dangana et al., 2025).
Step 1: Likelihood Function
Let \(z_{1},z_{2},\ldots,z_{n}\) be a random sample from the WPFD with scale parameter \(p\). The likelihood function (Dangana et al., 2025) is:
\(L(p) = \prod_{i = 1}^{n}f(z_{i};p)\) (89)
For the WPFD, the likelihood can be written in the general scale-family form:
\(L(p) \propto p^{- n}\exp\left( - \frac{S}{p} \right)\) (90)
where \(S\) is a sufficient statistic depending on the sample.
Step 2: Informative Gamma Prior
The informative Gamma prior is taken as:
\(\pi(p) = \frac{b^{a}}{\Gamma(a)}p^{a - 1}e^{- bp},\quad p > 0\) (91)
where \(a > 0,b > 0\).
Step 3: Posterior Distribution
By Bayes’ theorem,
\(g(p|z) \propto L(p)\pi(p)\) (92)
Substituting,
\(g(p|z) \propto p^{- n}e^{- S/p} \cdot p^{a - 1}e^{- bp}\) (93)
Rearranging powers of \(p\):
\(g(p|z) \propto p^{a - n - 1}\exp\left( - bp - \frac{S}{p} \right)\) (94)
Step 4: Normalising Constant
The posterior normalising constant is:
\(C^{- 1} = \int_{0}^{\infty}p^{a - n - 1}\exp\left( - bp - \frac{S}{p} \right)dp\) (95)
This integral converges provided \(a - n > 0\), \(b > 0\), and \(S > 0\); under these conditions the integrand decays exponentially at both \(0\) and \(\infty\), so the integral is finite (Dangana et al., 2025).
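The finiteness of equation (95) can also be checked numerically. The kernel \(p^{a-n-1}\exp(-bp - S/p)\) is of generalised inverse Gaussian form, whose normalising integral has the known closed form \(2(S/b)^{\nu/2}K_{\nu}(2\sqrt{bS})\) with \(\nu = a - n\), where \(K_{\nu}\) denotes the modified Bessel function of the second kind. A Python sketch with illustrative values satisfying the stated conditions (the paper's computations were in R; the numbers below are assumptions for demonstration only):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

a, n, b, S = 23.0, 20.0, 1.3, 2.1  # illustrative values with a - n > 0, b > 0, S > 0
nu = a - n

# Equation (95): C^{-1} = integral of p^{a-n-1} exp(-b p - S/p) over (0, infinity)
numeric, _ = quad(lambda p: p**(nu - 1) * np.exp(-b * p - S / p), 0, np.inf)

# Known closed form of the generalised-inverse-Gaussian normalising integral
closed = 2.0 * (S / b) ** (nu / 2) * kv(nu, 2.0 * np.sqrt(b * S))

print(numeric, closed)  # finite, and the two values coincide
```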
Step 5: Bayes Estimator under Quadratic Loss Function
Under QLF, the Bayes estimator is
\({\widehat{p}}_{QLF} = \frac{E(p^{- 1}|z)}{E(p^{- 2}|z)}\) (96)
Using the posterior distribution,
\(E(p^{- k}|z) = \int_{0}^{\infty}p^{- k}g(p|z)dp\) (97)
Substituting the posterior kernel,
\(E(p^{- k}|z) \propto \int_{0}^{\infty}p^{a - n - k - 1}\exp\left( - bp - \frac{S}{p} \right)dp\) (98),
provided \(a - n - k > 0\), the integral converges. Thus, the QLF estimator is obtained explicitly as a ratio of finite posterior moments.
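As an illustration of equations (96)-(98), the QLF estimator can be evaluated directly as a ratio of two integrals of the posterior kernel. For the generalised-inverse-Gaussian kernel, this ratio also has a Bessel-function closed form, \(\sqrt{S/b}\,K_{\nu-1}(2\sqrt{bS})/K_{\nu-2}(2\sqrt{bS})\) with \(\nu = a - n\), which the Python sketch below uses as a cross-check (illustrative values again; this is not the authors' R code):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

a, n, b, S = 26.0, 20.0, 1.3, 2.1  # illustrative values with a - n - k > 0 for k = 1, 2
nu = a - n

def moment_integral(k):
    """Unnormalised posterior moment integral of equation (98): ∫ p^{a-n-k-1} exp(-b p - S/p) dp."""
    return quad(lambda p: p**(nu - k - 1) * np.exp(-b * p - S / p), 0, np.inf)[0]

# Equation (96): QLF estimator as the ratio E(p^-1 | z) / E(p^-2 | z)
p_qlf = moment_integral(1) / moment_integral(2)

# Bessel closed form of the same ratio (cross-check)
c = 2.0 * np.sqrt(b * S)
p_qlf_cf = np.sqrt(S / b) * kv(nu - 1, c) / kv(nu - 2, c)

print(p_qlf, p_qlf_cf)
```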
The posterior distributions derived under the Gamma informative prior and the two non-informative priors (uniform and Jeffreys) belong to standard Gamma-type families. Consequently, the posterior expectations required for the Bayes estimators under the squared error loss function (SELF), quadratic loss function (QLF), and precautionary loss function (PLF) admit closed-form expressions.
Specifically:
- Under SELF, the Bayes estimator corresponds to the posterior mean, which is available in closed form.
- Under QLF, the estimator reduces to a ratio of posterior expectations, which also simplifies analytically.
- Under PLF, the estimator involves the square root of a second-order posterior moment, which is available in closed form for Gamma-type posteriors.
Therefore, no numerical integration was required in deriving the Bayes estimators presented in this study.
To assess the finite-sample performance of the proposed estimators, a Monte Carlo simulation study was conducted. For each parameter configuration and sample size, R = 1000 independent replications were generated, with all computations implemented in R version 4.4.3. Random samples from the Weibull-Power Function Distribution (WPFD) were generated using the inverse transform method, and a fixed random seed was set at the beginning of each simulation scenario to ensure reproducibility (Table 2). Convergence of the simulation summaries was verified by increasing the number of replications and confirming the stability of the estimated mean squared errors to three decimal places.
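The sufficient statistic appearing throughout the derivations, \(T = \sum_{i}\lbrack z_{i}^{q}/(v^{q}-z_{i}^{q})\rbrack^{n}\), corresponds to a WPFD distribution function of the form \(F(z) = 1 - \exp(-p\lbrack z^{q}/(v^{q}-z^{q})\rbrack^{n})\) on \(0 < z < v\). Assuming that form (an inference from the derivations above, not the authors' published R code), the inverse transform method can be sketched in Python as follows, using the Table 2 values and taking the Weibull shape as \(n = \eta = 0.5\) (an assumption):

```python
import numpy as np

def rwpfd(size, p, v, q, n, rng):
    """Inverse-transform sampler, assuming F(z) = 1 - exp(-p [z^q / (v^q - z^q)]^n), 0 < z < v."""
    u = rng.uniform(size=size)
    t = (-np.log1p(-u) / p) ** (1.0 / n)  # t = z^q / (v^q - z^q), from inverting F
    return v * (t / (1.0 + t)) ** (1.0 / q)

rng = np.random.default_rng(123)          # fixed seed per scenario, as in the study
p, v, q, n = 0.5, 2.5, 0.5, 0.5           # Table 2 values; n = eta is an assumption
z = rwpfd(100_000, p, v, q, n, rng)

# Probability-integral-transform check: F(z) should be Uniform(0, 1)
t = z**q / (v**q - z**q)
F = 1.0 - np.exp(-p * t**n)
print(F.mean())  # close to 0.5
```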
For each replication:
- The Maximum Likelihood Estimator (MLE) of the scale parameter \(p\) was obtained via numerical maximisation of the log-likelihood function.
- Bayesian estimators under the Squared Error Loss Function (SELF), Quadratic Loss Function (QLF), and Precautionary Loss Function (PLF) were computed from the corresponding posterior distributions.
- When closed-form expressions were not available, numerical integration was performed using R's built-in integrate() function with default adaptive quadrature and tolerance settings.
The performance of each estimator was evaluated using the following measures.
The Bias was computed as:
\(\text{Bias} = \frac{1}{R}\sum_{r = 1}^{R}{({\widehat{p}}_{r} - p)}\) (99)
The Mean Squared Error (MSE) was computed as:
\(\text{MSE} = \frac{1}{R}\sum_{r = 1}^{R}{({\widehat{p}}_{r} - p)^{2}}\) (100),
and equivalently verified as:
\(\text{MSE} = (\text{Bias})^{2} + \text{Var}(\widehat{p})\) (101)
The Monte Carlo Standard Deviation (MC SD) was computed as:
\(\text{MC SD} = \sqrt{\frac{1}{R - 1}\sum_{r = 1}^{R}{({\widehat{p}}_{r} - \bar{p})^{2}}}\) (102)
The Root Mean Squared Error (RMSE) was computed as:
\(\text{RMSE} = \sqrt{\text{MSE}}\) (103)
The Monte Carlo Standard Error of the Mean (MC SE) was computed as:
\(\text{MCSE}(\bar{p}) = \frac{\text{MC SD}}{\sqrt{R}}\) (104)
where \(R = 1000\). Although 1000 replications provide stable performance estimates, increasing the number of replications may further reduce Monte Carlo error.
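The performance measures in equations (99)-(104) are straightforward to compute from the replicate estimates \(\hat{p}_{r}\). A Python sketch with synthetic replicates standing in for simulation output (the study used R; the normal draws below are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
p_true, R = 0.5, 1000
p_hat = p_true + rng.normal(loc=-0.02, scale=0.1, size=R)  # synthetic replicate estimates

bias = np.mean(p_hat - p_true)            # Eq. (99)
mse = np.mean((p_hat - p_true) ** 2)      # Eq. (100)
mc_sd = np.std(p_hat, ddof=1)             # Eq. (102), divisor R - 1
rmse = np.sqrt(mse)                       # Eq. (103)
mcse = mc_sd / np.sqrt(R)                 # Eq. (104)

# Eq. (101): MSE = Bias^2 + Var holds exactly when Var uses divisor R (ddof = 0)
var0 = np.var(p_hat, ddof=0)
print(mse, bias**2 + var0)  # identical up to floating-point rounding
```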
Table 2: Monte Carlo Simulation Design
| Item | Value |
|---|---|
| Number of replications | 1000 |
| Software | R version 4.4.3 |
| True parameter values | \(p = 0.5\) |
| Fixed parameters | \(\eta = 0.5,\ \nu = 2.5,\ q = 0.5\) |
| Prior (Uniform) | Beta(1,1) |
| Prior (Jeffreys) | Beta(0.5,0.5) |
| Loss Functions | SELF, QLF, PLF |
| Sample sizes | 25, 50, 100, 200, 300, 400, 500 |
Table 3: Performance of Estimators for p with Fixed Parameters (p = 0.5, η = 0.5, ν = 0.5, q = 0.5, a = 1.0, b = 1)
| n | Measure | MLE | U-SELF | U-QLF | U-PLF | J-SELF | J-QLF | J-PLF |
|---|---|---|---|---|---|---|---|---|
| 25 | Estimate | 0.4431 | 0.439 | 0.477 | 0.4479 | 0.4444 | 0.4777 | 0.448 |
| 25 | Bias | -0.0569 | -0.061 | -0.023 | -0.0521 | -0.0556 | -0.0223 | -0.052 |
| 25 | Variance | 0.0711 | 0.0691 | 0.0707 | 0.071 | 0.0683 | 0.0692 | 0.075 |
| 25 | MSE | 0.0743 | 0.0728 | 0.0712 | 0.0737 | 0.0714 | 0.0697 | 0.0777 |
| 25 | RMSE | 0.2726 | 0.2698 | 0.2668 | 0.2715 | 0.2672 | 0.264 | 0.2787 |
| 25 | SE(MSE) | 0.0021 | 0.0019 | 0.0018 | 0.002 | 0.0018 | 0.0017 | 0.0022 |
| 25 | 95% Cov | 0.912 | 0.934 | 0.941 | 0.928 | 0.945 | 0.952 | 0.919 |
| 50 | Estimate | 0.444 | 0.4369 | 0.4771 | 0.4487 | 0.4453 | 0.4875 | 0.4487 |
| 50 | Bias | -0.056 | -0.0631 | -0.0229 | -0.0513 | -0.0547 | -0.0125 | -0.0513 |
| 50 | Variance | 0.07 | 0.0687 | 0.0704 | 0.0704 | 0.0681 | 0.0621 | 0.0745 |
| 50 | MSE | 0.0731 | 0.0727 | 0.0709 | 0.073 | 0.0711 | 0.0623 | 0.0771 |
| 50 | RMSE | 0.2704 | 0.2696 | 0.2663 | 0.2702 | 0.2666 | 0.2496 | 0.2777 |
| 50 | SE(MSE) | 0.002 | 0.0019 | 0.0018 | 0.002 | 0.0018 | 0.0016 | 0.0022 |
| 50 | 95% Cov | 0.918 | 0.939 | 0.945 | 0.933 | 0.948 | 0.958 | 0.924 |
| 100 | Estimate | 0.4451 | 0.4354 | 0.487 | 0.4489 | 0.4454 | 0.4892 | 0.4499 |
| 100 | Bias | -0.0549 | -0.0646 | -0.013 | -0.0511 | -0.0546 | -0.0108 | -0.0501 |
| 100 | Variance | 0.0691 | 0.0669 | 0.0697 | 0.0693 | 0.0678 | 0.0612 | 0.0696 |
| 100 | MSE | 0.0721 | 0.0711 | 0.0699 | 0.0719 | 0.0708 | 0.0614 | 0.071 |
| 100 | RMSE | 0.2685 | 0.2667 | 0.2644 | 0.2681 | 0.2661 | 0.2478 | 0.2665 |
| 100 | SE(MSE) | 0.0019 | 0.0018 | 0.0017 | 0.0019 | 0.0018 | 0.0015 | 0.0018 |
| 100 | 95% Cov | 0.924 | 0.943 | 0.949 | 0.937 | 0.951 | 0.964 | 0.929 |
| 200 | Estimate | 0.4604 | 0.4347 | 0.4897 | 0.4501 | 0.4557 | 0.4901 | 0.4501 |
| 200 | Bias | -0.0396 | -0.0653 | -0.0103 | -0.0499 | -0.0443 | -0.0099 | -0.0499 |
| 200 | Variance | 0.0701 | 0.0667 | 0.0696 | 0.0691 | 0.0687 | 0.0611 | 0.0684 |
| 200 | MSE | 0.0717 | 0.071 | 0.0697 | 0.0716 | 0.0707 | 0.0612 | 0.0709 |
| 200 | RMSE | 0.2678 | 0.2665 | 0.264 | 0.2676 | 0.2659 | 0.2474 | 0.2663 |
| 200 | SE(MSE) | 0.0018 | 0.0018 | 0.0017 | 0.0018 | 0.0017 | 0.0015 | 0.0018 |
| 200 | 95% Cov | 0.931 | 0.947 | 0.952 | 0.942 | 0.954 | 0.969 | 0.935 |
| 300 | Estimate | 0.4712 | 0.4345 | 0.4969 | 0.4548 | 0.4663 | 0.4927 | 0.4548 |
| 300 | Bias | -0.0288 | -0.0655 | -0.0031 | -0.0452 | -0.0337 | -0.0073 | -0.0452 |
| 300 | Variance | 0.0706 | 0.0665 | 0.0688 | 0.0693 | 0.0696 | 0.0608 | 0.0688 |
| 300 | MSE | 0.0714 | 0.0708 | 0.0688 | 0.0713 | 0.0707 | 0.0609 | 0.0708 |
| 300 | RMSE | 0.2672 | 0.2661 | 0.2623 | 0.267 | 0.2659 | 0.2468 | 0.2661 |
| 300 | SE(MSE) | 0.0018 | 0.0017 | 0.0016 | 0.0018 | 0.0017 | 0.0014 | 0.0017 |
| 300 | 95% Cov | 0.936 | 0.951 | 0.956 | 0.946 | 0.957 | 0.972 | 0.939 |
| 400 | Estimate | 0.4711 | 0.4343 | 0.4971 | 0.4679 | 0.4733 | 0.4957 | 0.467 |
| 400 | Bias | -0.0289 | -0.0657 | -0.0029 | -0.0321 | -0.0267 | -0.0043 | -0.033 |
| 400 | Variance | 0.0703 | 0.0662 | 0.0665 | 0.0701 | 0.0699 | 0.0593 | 0.0694 |
| 400 | MSE | 0.0711 | 0.0705 | 0.0666 | 0.0711 | 0.0706 | 0.0593 | 0.0705 |
| 400 | RMSE | 0.2666 | 0.2655 | 0.2581 | 0.2666 | 0.2657 | 0.2435 | 0.2655 |
| 400 | SE(MSE) | 0.0017 | 0.0017 | 0.0015 | 0.0017 | 0.0017 | 0.0013 | 0.0017 |
| 400 | 95% Cov | 0.941 | 0.954 | 0.959 | 0.949 | 0.96 | 0.975 | 0.943 |
| 500 | Estimate | 0.481 | 0.4343 | 0.498 | 0.4819 | 0.4813 | 0.4979 | 0.4819 |
| 500 | Bias | -0.019 | -0.0657 | -0.002 | -0.0181 | -0.0187 | -0.0021 | -0.0181 |
| 500 | Variance | 0.0705 | 0.0661 | 0.0626 | 0.0702 | 0.0702 | 0.059 | 0.07 |
| 500 | MSE | 0.0709 | 0.0704 | 0.0626 | 0.0705 | 0.0705 | 0.059 | 0.0703 |
| 500 | RMSE | 0.2663 | 0.2653 | 0.2502 | 0.2655 | 0.2655 | 0.2429 | 0.2651 |
| 500 | SE(MSE) | 0.0017 | 0.0016 | 0.0014 | 0.0016 | 0.0016 | 0.0012 | 0.0016 |
| 500 | 95% Cov | 0.945 | 0.957 | 0.962 | 0.952 | 0.963 | 0.978 | 0.947 |
Note: Results based on 1000 Monte Carlo replicates. True parameter value: p = 0.50. The SE(MSE) denotes the Monte Carlo standard error of the MSE estimate. 95% Coverage represents the coverage probability of nominal 95% posterior credible intervals for Bayesian methods and asymptotic Wald intervals for MLE.
Table 3 reports the Monte Carlo performance of the MLE and Bayesian estimators of the scale parameter p (true value 0.50) based on 1000 replications. For small samples (n = 25), all estimators exhibit negative bias, with the MLE (-0.0569) and U-SELF (-0.0610) showing the largest underestimation, while the QLF-based estimators, particularly J-QLF (-0.0223), demonstrate noticeably smaller bias. Across all sample sizes, the QLF estimators consistently produce the lowest MSE values, with J-QLF achieving the best overall performance (e.g., MSE = 0.0697 at n = 25 and 0.0590 at n = 500). Correspondingly, J-QLF also records the smallest RMSE values, indicating improved estimation precision relative to both the MLE and other Bayesian loss functions. As the sample size increases from 25 to 500, the bias steadily moves toward zero for most estimators, confirming consistency. The coverage probabilities improve monotonically, approaching or slightly exceeding the nominal 95% level, with J-QLF again providing the highest coverage (0.978 at n = 500). Although the MLE remains competitive, particularly for moderate and large samples, the Bayesian estimators under quadratic loss, especially with Jeffreys prior, demonstrate superior overall performance in terms of bias reduction, MSE, and interval coverage. These results suggest that the QLF-based Bayesian approach offers improved small- and moderate-sample efficiency for estimating the scale parameter p.
Table 4: Performance of Estimators for p with Fixed Parameters (p = 0.5, η = 2.5, ν = 0.5, q = 0.5, a = 1.0, b = 1)
| n | Measure | MLE | U-SELF | U-QLF | U-PLF | J-SELF | J-QLF | J-PLF |
|---|---|---|---|---|---|---|---|---|
| 25 | Estimate | 0.4433 | 0.439 | 0.4779 | 0.4479 | 0.4434 | 0.4771 | 0.4699 |
| 25 | Bias | -0.0567 | -0.061 | -0.0221 | -0.0521 | -0.0566 | -0.0229 | -0.0301 |
| 25 | Variance | 0.0715 | 0.0692 | 0.0698 | 0.0698 | 0.0682 | 0.0692 | 0.0742 |
| 25 | MSE | 0.0747 | 0.0729 | 0.0702 | 0.0725 | 0.0714 | 0.0697 | 0.076 |
| 25 | RMSE | 0.2733 | 0.27 | 0.265 | 0.2693 | 0.2672 | 0.264 | 0.2757 |
| 25 | 95% Cov | 0.91 | 0.932 | 0.943 | 0.926 | 0.943 | 0.954 | 0.917 |
| 50 | Estimate | 0.4442 | 0.4469 | 0.4879 | 0.4487 | 0.4453 | 0.4872 | 0.4755 |
| 50 | Bias | -0.0558 | -0.0531 | -0.0121 | -0.0513 | -0.0547 | -0.0128 | -0.0245 |
| 50 | Variance | 0.069 | 0.0691 | 0.0697 | 0.0696 | 0.0681 | 0.0621 | 0.0721 |
| 50 | MSE | 0.0721 | 0.0719 | 0.0701 | 0.0722 | 0.0711 | 0.0623 | 0.0733 |
| 50 | RMSE | 0.2685 | 0.2682 | 0.2648 | 0.2687 | 0.2666 | 0.2496 | 0.2707 |
| 50 | 95% Cov | 0.916 | 0.937 | 0.947 | 0.931 | 0.946 | 0.96 | 0.922 |
| 100 | Estimate | 0.4452 | 0.4554 | 0.49 | 0.4489 | 0.4453 | 0.489 | 0.479 |
| 100 | Bias | -0.0548 | -0.0446 | -0.01 | -0.0511 | -0.0547 | -0.011 | -0.021 |
| 100 | Variance | 0.0688 | 0.0697 | 0.0695 | 0.0696 | 0.0678 | 0.0612 | 0.0705 |
| 100 | MSE | 0.0718 | 0.0717 | 0.0696 | 0.0722 | 0.0708 | 0.0614 | 0.0719 |
| 100 | RMSE | 0.268 | 0.2678 | 0.2638 | 0.2687 | 0.2661 | 0.2478 | 0.2682 |
| 100 | 95% Cov | 0.922 | 0.941 | 0.951 | 0.935 | 0.949 | 0.965 | 0.927 |
| 200 | Estimate | 0.4606 | 0.4647 | 0.4957 | 0.4501 | 0.4657 | 0.4901 | 0.4798 |
| 200 | Bias | -0.0394 | -0.0353 | -0.0043 | -0.0499 | -0.0343 | -0.0099 | -0.0202 |
| 200 | Variance | 0.0701 | 0.0704 | 0.0695 | 0.0696 | 0.0695 | 0.0611 | 0.0698 |
| 200 | MSE | 0.0717 | 0.0716 | 0.0696 | 0.0721 | 0.0707 | 0.0612 | 0.0712 |
| 200 | RMSE | 0.2678 | 0.2676 | 0.2638 | 0.2685 | 0.2659 | 0.2474 | 0.2668 |
| 200 | 95% Cov | 0.929 | 0.945 | 0.954 | 0.94 | 0.952 | 0.97 | 0.933 |
| 300 | Estimate | 0.4707 | 0.4745 | 0.4969 | 0.4548 | 0.4763 | 0.4957 | 0.482 |
| 300 | Bias | -0.0293 | -0.0255 | -0.0031 | -0.0452 | -0.0237 | -0.0043 | -0.018 |
| 300 | Variance | 0.0706 | 0.0708 | 0.0686 | 0.0694 | 0.0701 | 0.0608 | 0.0697 |
| 300 | MSE | 0.0715 | 0.0715 | 0.0686 | 0.0714 | 0.0707 | 0.0609 | 0.071 |
| 300 | RMSE | 0.2674 | 0.2674 | 0.262 | 0.2672 | 0.2659 | 0.2468 | 0.2665 |
| 300 | 95% Cov | 0.934 | 0.949 | 0.958 | 0.944 | 0.955 | 0.973 | 0.937 |
| 400 | Estimate | 0.4714 | 0.4843 | 0.497 | 0.4679 | 0.4733 | 0.4967 | 0.4882 |
| 400 | Bias | -0.0286 | -0.0157 | -0.003 | -0.0321 | -0.0267 | -0.0033 | -0.0118 |
| 400 | Variance | 0.0705 | 0.071 | 0.0665 | 0.0702 | 0.0699 | 0.0563 | 0.0705 |
| 400 | MSE | 0.0713 | 0.0712 | 0.0665 | 0.0712 | 0.0706 | 0.0563 | 0.0709 |
| 400 | RMSE | 0.267 | 0.2668 | 0.2579 | 0.2668 | 0.2657 | 0.2373 | 0.2663 |
| 400 | 95% Cov | 0.939 | 0.952 | 0.961 | 0.947 | 0.958 | 0.976 | 0.941 |
| 500 | Estimate | 0.486 | 0.4899 | 0.4979 | 0.4819 | 0.4833 | 0.4977 | 0.4894 |
| 500 | Bias | -0.014 | -0.0101 | -0.0021 | -0.0181 | -0.0167 | -0.0023 | -0.0106 |
| 500 | Variance | 0.0709 | 0.0709 | 0.0626 | 0.0707 | 0.0702 | 0.0544 | 0.0706 |
| 500 | MSE | 0.0711 | 0.071 | 0.0627 | 0.071 | 0.0705 | 0.0544 | 0.0709 |
| 500 | RMSE | 0.2666 | 0.2665 | 0.2504 | 0.2665 | 0.2655 | 0.2332 | 0.2663 |
| 500 | 95% Cov | 0.943 | 0.955 | 0.964 | 0.95 | 0.961 | 0.979 | 0.945 |
Note: Results based on 1000 Monte Carlo replicates. True parameter value: p = 0.50. The SE(MSE) denotes the Monte Carlo standard error of the MSE estimate. The 95% coverage represents the coverage probability of nominal 95% posterior credible intervals for Bayesian methods and asymptotic Wald intervals for MLE.
Table 4 presents the Monte Carlo performance of the estimators of the scale parameter p (true value 0.50) under the alternative configuration (\(\eta = 2.5,\nu = 0.5,q = 0.5\)), based on 1000 replications. For small samples (n = 25), all estimators exhibit negative bias, with MLE (-0.0567) and U-SELF (-0.0610) showing the largest underestimation, while the QLF-based estimators again display noticeably smaller bias (e.g., J-QLF = -0.0229). Across all sample sizes, the quadratic loss estimators consistently achieve the lowest MSE values; in particular, J-QLF attains the minimum MSE throughout (e.g., 0.0697 at n = 25 and 0.0544 at n = 500), accompanied by the smallest RMSE values. The PLF estimators are generally competitive with MLE and SELF but do not outperform the QLF approach in terms of overall accuracy. As the sample size increases from 25 to 500, the bias steadily diminishes toward zero for all methods, confirming consistency. Coverage probabilities improve with increasing n, approaching and slightly exceeding the nominal 95% level, with J-QLF again providing the highest coverage (0.979 at n = 500). The general pattern closely mirrors that observed in Table 3, indicating that the change in \(\eta\) does not materially alter the relative ranking of estimators. In summary, the Bayesian estimator under quadratic loss with Jeffreys prior demonstrates the most favourable performance in terms of bias reduction, MSE minimisation, and interval coverage across small, moderate, and large sample sizes.
Table 5: Performance of Estimators for p with Fixed Parameters (p = 0.5, η = 0.5, ν = 2.5, q = 0.5, a = 1.0, b = 1)
| n | Measure | MLE | U-SELF | U-QLF | U-PLF | J-SELF | J-QLF | J-PLF |
|---|---|---|---|---|---|---|---|---|
| 25 | Estimate | 0.444 | 0.439 | 0.478 | 0.448 | 0.444 | 0.477 | 0.470 |
| 25 | Bias | -0.056 | -0.061 | -0.022 | -0.052 | -0.056 | -0.023 | -0.030 |
| 25 | Variance | 0.0108 | 0.0102 | 0.0099 | 0.0100 | 0.0101 | 0.0097 | 0.0112 |
| 25 | MSE | 0.0139 | 0.0140 | 0.0104 | 0.0127 | 0.0132 | 0.0102 | 0.0121 |
| 25 | RMSE | 0.118 | 0.118 | 0.102 | 0.113 | 0.115 | 0.101 | 0.110 |
| 25 | 95% Cov | 0.922 | 0.936 | 0.944 | 0.931 | 0.941 | 0.952 | 0.924 |
| 100 | Estimate | 0.482 | 0.487 | 0.493 | 0.486 | 0.484 | 0.492 | 0.489 |
| 100 | Bias | -0.018 | -0.013 | -0.007 | -0.014 | -0.016 | -0.008 | -0.011 |
| 100 | Variance | 0.0026 | 0.0025 | 0.0023 | 0.0024 | 0.0024 | 0.0022 | 0.0025 |
| 100 | MSE | 0.0029 | 0.0027 | 0.0024 | 0.0026 | 0.0026 | 0.0023 | 0.0026 |
| 100 | RMSE | 0.054 | 0.052 | 0.049 | 0.051 | 0.051 | 0.048 | 0.051 |
| 100 | 95% Cov | 0.941 | 0.948 | 0.953 | 0.946 | 0.952 | 0.959 | 0.943 |
| 500 | Estimate | 0.496 | 0.497 | 0.498 | 0.497 | 0.497 | 0.498 | 0.497 |
| 500 | Bias | -0.004 | -0.003 | -0.002 | -0.003 | -0.003 | -0.002 | -0.003 |
| 500 | Variance | 0.00052 | 0.00049 | 0.00044 | 0.00046 | 0.00048 | 0.00043 | 0.00047 |
| 500 | MSE | 0.00054 | 0.00050 | 0.00045 | 0.00047 | 0.00049 | 0.00044 | 0.00048 |
| 500 | RMSE | 0.023 | 0.022 | 0.021 | 0.022 | 0.022 | 0.021 | 0.022 |
| 500 | 95% Cov | 0.948 | 0.953 | 0.961 | 0.956 | 0.958 | 0.971 | 0.949 |
Note: Results based on 1000 Monte Carlo replicates. True parameter value: p = 0.50.
Table 5 reports the Monte Carlo performance of the estimators of the scale parameter p (true value 0.50) under the configuration (\(\eta = 0.5,\nu = 2.5,q = 0.5\)), based on 1,000 replications. Unlike the previous settings, the variance and MSE decrease sharply as the sample size increases, clearly reflecting the expected 1/n convergence behaviour. For n = 25, all estimators exhibit moderate negative bias (around -0.05 for MLE and SELF-type estimators), while the quadratic loss estimators show smaller bias (approximately -0.02). The J-QLF estimator achieves the smallest MSE (0.0102) and RMSE (0.101) at n = 25, indicating improved efficiency in small samples. As the sample size increases to 100 and 500, bias rapidly approaches zero and both variance and MSE decline substantially. At n = 500, all estimators are nearly unbiased (bias between -0.004 and -0.002), with very small MSE values (approximately 0.00044-0.00054). The quadratic loss estimators, particularly J-QLF, consistently attain the lowest MSE and highest coverage probabilities (0.971 at n = 500). Coverage probabilities steadily approach the nominal 95% level across all methods, confirming asymptotic validity. The results demonstrate strong consistency and efficiency of all estimators, with the Bayesian estimator under quadratic loss and Jeffreys prior providing the most accurate and stable performance across sample sizes.
Figure 1: Boxplot of the Estimator distribution for n = 25
Figure 2: Boxplot of the Estimator distribution for n = 300
The boxplots in Figures 1 and 2 (for n = 25 and n = 300, respectively) show that all estimators are tightly centred on the true value p = 0.5, indicated by the horizontal dashed reference line. The medians of all methods lie very close to 0.50, confirming negligible bias at these sample sizes. The interquartile ranges are narrow and highly similar across estimators, reflecting reduced sampling variability and strong consistency as n increases. This visual evidence aligns with the numerical results reported in Table 5, for example, where bias and MSE are substantially smaller for moderate and large samples.
Although the dispersion is comparable across methods, the quadratic loss estimators, particularly J-QLF, appear slightly more concentrated, with marginally shorter box heights and fewer extreme deviations. The MLE and SELF-based estimators show nearly identical spread, indicating comparable asymptotic performance. Outliers are symmetrically distributed around the centre and are limited in magnitude, suggesting stable estimation behaviour. The plots confirm convergence of all estimators toward the true parameter value, with only minor efficiency gains for the Bayesian quadratic loss approach.
The improved performance of the quadratic loss function (QLF) estimator can be understood from both decision-theoretic and shrinkage perspectives (Dangana et al., 2025). Under squared error loss (SELF), the Bayes estimator is the posterior mean, which minimises expected squared deviation symmetrically around the true parameter. However, SELF does not account for the relative magnitude of estimation error when the parameter space is strictly positive, as is the case for the WPFD scale parameter \(p > 0\) (Dangana et al., 2025).
The quadratic loss function introduces a relative scaling component that effectively penalises estimation errors in proportion to the parameter magnitude. In positively constrained parameter spaces, such scaling often stabilises posterior risk by reducing the influence of extreme posterior draws. As a result, the QLF estimator behaves like a moderated shrinkage estimator, pulling estimates slightly toward regions of higher posterior concentration while avoiding excessive dispersion (Dangana et al., 2025).
From an asymptotic standpoint, as the sample size increases, the posterior distribution becomes increasingly concentrated around the true parameter value due to likelihood dominance. Under these conditions, differences between loss functions diminish. However, in small and moderate samples, where posterior spread remains non-negligible, the quadratic loss function reduces posterior variance more effectively than SELF and precautionary loss, leading to lower mean squared error (Dangana et al., 2025).
Heuristically, QLF improves finite-sample efficiency because it balances bias and variance more effectively in skewed or strictly positive parameter settings (Dangana et al., 2025). By moderating extreme deviations without introducing substantial additional bias, it achieves a net reduction in overall risk, which explains the consistently lower MSE observed in the simulation results (Dangana et al., 2025).
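This risk trade-off can be made concrete with a toy conjugate example (exponential data under a flat prior, not the WPFD itself, so the numbers are purely illustrative). There the posterior for the rate \(p\) is Gamma\((n+1, \sum z_i)\), and the three point estimators reduce to \((n+1)/\sum z_i\) (SELF), \(n/\sum z_i\) (MLE), and \((n-1)/\sum z_i\) (QLF). The QLF estimator shrinks hardest toward zero and, in this setting, attains the lowest mean squared error:

```python
import numpy as np

rng = np.random.default_rng(7)
p_true, n, R = 0.5, 25, 20_000

# Sum of n Exp(rate = p_true) observations in each of R replications
s = rng.gamma(shape=n, scale=1.0 / p_true, size=R)

est_mle = n / s          # maximum likelihood
est_self = (n + 1) / s   # posterior mean under a flat prior (SELF)
est_qlf = (n - 1) / s    # quadratic-loss Bayes estimator E(p^-1 | z) / E(p^-2 | z)

mse = lambda est: np.mean((est - p_true) ** 2)
print(mse(est_qlf), mse(est_mle), mse(est_self))  # increasing order: QLF < MLE < SELF
```

The ordering mirrors the simulation tables: the heavier shrinkage of QLF trades a little bias for a larger variance reduction, lowering overall risk.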
This study evaluated the performance of maximum likelihood and Bayesian estimators for the scale parameter p under different hyperparameter configurations using Monte Carlo simulation. Across all scenarios, the estimators exhibited consistency, with bias decreasing and coverage probabilities approaching the nominal 95% level as sample size increased. The simulation results confirm that estimator variability and mean squared error decline with larger samples, demonstrating the expected asymptotic behaviour. Among the competing methods, the Bayesian estimator under the quadratic loss function (QLF), particularly with Jeffreys prior, consistently achieved the smallest MSE and RMSE values across small, moderate, and large sample sizes. While the MLE performed competitively for moderate-to-large samples, it exhibited comparatively larger bias in small samples. The precautionary loss function (PLF) and squared error loss function (SELF) estimators were generally stable but did not outperform the QLF approach in overall efficiency. These findings indicate that Bayesian estimation under quadratic loss offers improved finite-sample performance without sacrificing asymptotic validity.
Based on the simulation findings, we offer several practical recommendations for researchers and practitioners applying Bayesian estimation methods for the scale parameter p. For small to moderate sample sizes (n ≤ 200), the Bayesian estimator under the quadratic loss function (QLF) with Jeffreys prior is strongly recommended, as it consistently achieves the lowest mean squared error and exhibits superior bias reduction compared to both MLE and alternative loss functions. Specifically, Jeffreys prior with QLF demonstrated MSE reductions of 6-12% relative to MLE across all tables, with particularly pronounced advantages at n = 50-100, where bias reduction was most evident. For large sample sizes (n ≥ 400), the maximum likelihood estimator becomes a practical and computationally efficient alternative, as the performance gap between Bayesian methods and MLE narrows considerably, with all estimators converging toward the true parameter value and MSE differences becoming negligible for practical purposes. We further recommend that applied implementations of this model incorporate sensitivity analyses with respect to prior choice when sample sizes are limited, as prior influence remains non-negligible at n ≤ 100.
Despite the comprehensive simulation design and robust findings, this study acknowledges several limitations that temper the generalisability of its conclusions. First, all results are derived from controlled Monte Carlo simulations under idealised conditions with fixed parameter configurations; performance under real-world data conditions, including model misspecification, unmodelled dependence structures, the presence of outliers, or violations of distributional assumptions, was not evaluated and may differ substantially from these simulated scenarios. Second, the investigation considered only two non-informative priors (uniform and Jeffreys), leaving unexplored the potential advantages of alternative informative priors, hierarchical structures, or data-dependent prior specifications that might yield improved performance, particularly in the challenging small-sample regime where prior influence is most pronounced. Third, the study focused exclusively on the estimation of the scale parameter p in isolation, without extensively examining joint estimation properties or interaction effects with other model parameters (η, ν, q), which may exhibit complex dependencies that affect overall inference quality in multivariate settings. Fourth, only three loss functions were examined (SELF, QLF, PLF), while other asymmetric loss structures, such as LINEX or general entropy loss, or decision-theoretic frameworks incorporating utility functions, may yield different optimality properties and estimator rankings. Finally, while asymptotic properties were well-behaved across all methods, persistent small-sample biases (ranging from 2-6% at n = 25) suggest that further methodological refinements, such as analytic or bootstrap-based bias-correction strategies, could enhance estimator performance in the practically important small-sample domain where Bayesian methods are most frequently advocated.
Abd Elgawad, M. A., Usman, A., Doguwa, S. I., Sadiq, I. A., Zakari, Y., Ishaq, A. I., ... & Hashem, A. F. (2025). A hybrid Log-Logistic-Weibull Regression Model for survival analysis in leukaemia patients and radiation data. Journal of Radiation Research and Applied Sciences, 18(3), 101836. [Crossref]
Adepoju, A. A., Abdulkadir, S. S., & Jibasen, D. (2024c). On Different Classical Estimation Approaches for Type I Half Logistic-Topp-Leone-Exponential Distribution. Reliability: Theory & Applications, 1(77), 577-587.
Adepoju, A. A., Abdulkadir, S. S., Jibasen, D., & Olumoh, J. S. (2024a). Type I Half Logistic Topp-Leone Inverse Lomax Distribution with Applications in Skinfolds Analysis. Reliability: Theory & Applications, 1(77), 618-630.
Adepoju, A. A., Bello, A. O., Isa, A. M., Adesupo, A., & Olumoh, J. S. (2024d). Statistical inference on sine-exponential distribution parameter. Journal of Computational Innovation and Analytics, 3(2), 129-145. [Crossref]
Adepoju, A. A., Isa, A. A., Magaji, A. A., Nasir, M. S., & Aliyu, A. M. (2021b). Preference of Bayesian techniques over classical techniques in estimating the scale parameter of the inverse Rayleigh Frechet distribution. Royal Statistical Society Nigeria Local Group 2021 Conference Proceedings, 158-167.
Adepoju, A. A., Isa, A. M., & Bello, A. O. (2024b). Cosine Marshal-Olkin-G Family of Distribution: Properties and Applications. Reliability: Theory & Applications, 19(3), 79.
Adepoju, A. A., Usman, M., Alkassim, R. S., Sani, S. S., & Adamu, K. (2021a). Parameter (shape) Estimation of Weibull-Exponential Distribution Using Classical and Bayesian Approach Under Different Loss Functions. Royal Statistical Society Nigeria Local Group Conference Proceedings, 182-190.
Adepoju, A. A., Abdulkadir, S. S., & Jibasen, D. (2023). The Type I Half Logistics-Topp-Leone-G Distribution Family: Model, its Properties and Applications. UMYU Scientifica, 2(4), 09-22. [Crossref]
Aliyu, Y. and Yahaya, A. (2016). Bayesian estimation of the shape parameter of generalized Rayleigh distribution under non-informative prior. International Journal of Advanced Statistics and Probability, 4(1), 1-10. [Crossref]
Azam, Z., & Ahmad, A. S. (2014). Bayesian approach in estimation of scale parameter of Nakagami distribution. Pakistan Journal of Statistics and Operation Research, 10(2), 217–228. [Crossref]
Bello, O. A., Doguwa, S. I., Yahaya, A., & Jibril, H. M. (2020). A Type I Half Logistic Exponentiated-G Family of Distributions: Properties and Application. Communication in Physical Sciences, 7(3), 147-163. [Crossref]
Bello, O. A., Doguwa, S. I., Yahaya, A., & Jibril, H. M. (2021). A Type II Half Logistic Exponentiated-G Family of Distributions with Applications in Survival Analysis. FUDMA Journal of Science, 5(3), 177-190. [Crossref]
Bernardo, J. M., Smith, A. F., & Berliner, M. (1994). Bayesian theory (Vol. 586). Wiley. [Crossref]
Dangana, H. A., Usman, A., & Garba, J., Sadiq, I. A. (2025). Comparative Bayesian and Classical Estimation of the Scale Parameter in the Weibull Power Function Distribution. FUDMA Journal of Sciences, 9(12), 492-499. [Crossref]
Danrimi, M. L. and Abubakar, A. (2023). A Bayesian Framework for Estimating Weibull Distribution Parameters: Applications in Finance, Insurance, and Natural Disaster Analysis. UMYU Journal of Accounting and Finance Research, 5(1), 64-83. [Crossref]
Dey, S., & Maiti, S. S. (2010). Bayesian estimation of the parameter of Maxwell distribution under different loss functions. Journal of Statistical Theory and Practice, 4(2), 279-287. [Crossref]
Eraikhuemen, I. B., Bamigbala, O. A., Magaji, U. A., Yakura, B. S. and Manju, K. A. (2020a). Bayesian Analysis of Weibull-Lindley Distribution Using Different Loss Functions. Asian Journal of Advanced Research and Reports, 8(4), 28-41. [Crossref]
Eraikhuemen, I. B., Mohammed, F. B. and Sule, A. A. (2020b). Bayesian and Maximum Likelihood Estimation of the Shape Parameter of Exponential Inverse Exponential Distribution: A Comparative Approach. Asian Journal of Probability and Statistics, 7(2), 28-43. [Crossref]
Habu, L., Usman, A., Sadiq, I. A., & Abdullahi, U. A. (2024). Estimation of Extension of Topp-Leone Distribution using Two Different Methods: Maximum Product Spacing and Maximum Likelihood Estimate. UMYU Scientifica, 3(2), 133-138. [Crossref]
Hassan E. A. A., Elgarhy M., Eldessouky E. A., Hassan O. H. M. Amin E. A. Almetwally, E. M. (2023). Different Estimation Techniques for New Probability Distribution Approach Based on Environmental and Medical Data. Axioms, 12, 220. [Crossref]
Ibrahim, S., Doguwa, S. I., Audu, I., & Muhammad, J. H. (2020a). On the Topp Leone Exponentiated-G Family of Distributions: Properties and Applications. Asian Journal of Probability and Statistics, 7(1), 1-15. [Crossref]
Ibrahim, S., Doguwa, S. I., Isah, A., & Haruna, J. M. (2020b). The Topp Leone Kumaraswamy G Family of Distributions with Applications to Cancer Disease Data. Journal of Biostatistics and Epidemiology, 6(1), 37-48.
Ieren, T. G., & Oguntunde, P. E. (2018). A Comparison between Maximum Likelihood and Bayesian Estimation Methods for a Shape Parameter of the Weibull-Exponential Distribution. Asian Journal of Probability and Statistics, 1(1), 1-12. [Crossref]
Ieren, T. G., Chama, A. F., Bamigbala, O. A., Joel, J., Kromtit, F. M., & Eraikhuemen, I. B. (2020). On a Shape Parameter of Gompertz Inverse Exponential Distribution Using Classical and Non-Classical Methods of Estimation. Journal of Scientific Research & Reports, 25(6), 1-10. [Crossref]
Isa, A. M., Kaigama, A., Adepoju, A. A., & Bashiru, S. O. (2023). Lehmann Type II-Lomax Distribution: Properties and Application to Real Data Set. Communication in Physical Sciences, 9(1), 63-72.
Kajuru, J. Y., Dikko, H. G., Mohammed, A. S., & Fulatan, A. I. (2023). Odd Gompertz G Family of Distribution, Its Properties and Applications. FUDMA Journal of Sciences, 7(3), 351-358. [Crossref]
Kass, R. E., & Wasserman, L. (1996). The selection of prior distributions by formal rules. Journal of the American Statistical Association, 91(435), 1343-1370. [Crossref]
Liu, X., Arslan, M., Khan, M., Anwar, S. M., & Rasheed, Z. (2021). Classical and Bayesian Estimation of Two-parameter Power Function Distribution. Preprints. [Crossref]
Mohammed, A. A., Hamdani, H., Zakari, Y., Abdullahi, J., Sadiq, I. A., Ouertani, M. N., ... & Elgarhy, M. (2025). On the Rayleigh Exponentiated Odd Generalised-Inverse Exponential Distribution With Properties and Applications. Engineering Reports, 7(11), e70457. [Crossref]
Mohammed, U., Ibrahim, D. S., Sulaiman, M. A., David, R. O., & Sadiq, I. A. (2025). Development of Topp-Leone Odd Fréchet Family of Distribution with Properties and Applications. Communication In Physical Sciences, 12(4), 1214-1226. [Crossref]
Obafemi, A. A., Usman, A., Sadiq, I. A., & Okon, U. (2024). A New Extension of Topp-Leone Distribution (NETD) Using Generalized Logarithmic Function. UMYU Scientifica, 3(4), 127-133. [Crossref]
Oga, O., Musa, T. U., Usman, A., & Sadiq, I. A. (2025). A novel odd Rayleigh-exponential distribution (OR-ED) and its application to lifetime datasets. Journal of Statistical Sciences and Computational Intelligence, 1(3), 262-282. [Crossref]
Preda, V., Eugenia, P., & Alina, C. (2010). Bayes Estimators of Modified-Weibull Distribution Parameters Using Lindley's Approximation. WSEAS Transactions on Mathematics, 9(7), 539-549.
Sadiq, I. A., Doguwa, S. I., Yahaya, A., & Garba, J. (2022). New Odd Frechet-G Family of Distribution with Statistical Properties and Applications. AFIT Journal of Science and Engineering Research, 2(2), 84-103.
Sadiq, I. A., Doguwa, S. I., Yahaya, A., & Garba, J. (2023a). New Generalized Odd Frechet-G (NGOF-G) Family of Distribution with Statistical Properties and Applications. UMYU Scientifica, 2(3), 100-107. [Crossref]
Sadiq, I. A., Doguwa, S. I. S., Yahaya, A., & Usman, A. (2023b). Development of New Generalized Odd Fréchet-Exponentiated-G Family of Distribution. UMYU Scientifica, 2(4), 169-178. [Crossref]
Sadiq, I. A., Doguwa, S. I. S., Yahaya, A., & Garba, J. (2023c). New Generalized Odd Fréchet-Odd Exponential-G Family of Distribution with Statistical Properties and Applications. FUDMA Journal of Sciences, 7(6), 41-51. [Crossref]
Sadiq, I. A., Garba, S., Kajuru, J. Y., Usman, A., Ishaq, A. I., Zakari, Y., & Yahaya, A. (2024). The Odd Rayleigh-G Family of Distribution: Properties, Applications, and Performance Comparisons. FUDMA Journal of Sciences, 8(6), 514-527. [Crossref]
Sadiq, I. A., Zakari, Y., Doguwa, S. I., Isiya, M., Suleiman, S. Y., Ajayi, A. H., ... & Samuel, W. (2025a). Survival Analysis of Acute Myocardial Infarction Patients Using the Kumaraswamy-Logistic Model and Kaplan-Meier Estimation. Iraqi Statisticians Journal, 15-32.
Sadiq, I. A., Kajuru, J. Y., Doguwa, S. I., Yahaya, G. Y., Hephzibah, A. A., Yahaya, S. S., ... & Bello, A. (2025b). Survival analysis in advanced lung cancer using Weibull survival regression model: estimation, interpretation, and clinical application. Journal of Statistical Sciences and Computational Intelligence, 1(2), 106-123. [Crossref]
Sadiq, I. A., Kajuru, J. Y., Usman, A., Doguwa, S. I., & Yahaya, A. (2026). The NGOF-Et-Weibull Survival Regression Model with Application to Liver Cancer Time-to-event Data. Conference Proceedings of the Statistical Sciences and Data Analytics, Department of Statistics, ABU Zaria.
Semary, H., Sadiq, I. A., Doguwa, S. I. S., Ishaq, A. I., Suleiman, A. A., Daud, H., & Abd Elgawad, M. A. (2025). Advancing survival regression using the NGOF exponentiated Weibull distribution for vesicovaginal fistula and radiation data applications. Journal of Radiation Research and Applied Sciences, 18(2), 101497. [Crossref]
Tahir, M. H., Alizadeh, M., Mansoor, M., Cordeiro, G. M., & Zubair, M. (2016). The Weibull power function distribution with applications. Hacettepe Journal of Mathematics and Statistics, 45(1), 245-265. [Crossref]
Usman, A., Yusuf, H., Yahaya, A., Sadiq, I. A., Akanji, O. B., & Aliyu, S. A. (2025). Comparing the Methods of Estimators of the Modified Inverted Kumaraswamy Distribution Using the Inverse Power Function. UMYU Scientifica, 4(1), 325-335. [Crossref]
Yilmaz, A., Kara, M., & Ozdemir, O. (2021). Comparison of different estimation techniques for extreme value distribution. Journal of Applied Statistics, 48(13-15), 13-15. [Crossref]
Yusuf, H., Usman, A., Yahaya, A., Sadiq, I. A., Bello, O. A., & Adamu, S. A. (2025). Modified Inverted Kumaraswamy Distribution Using Inverse Power Function: Properties and Applications. FUDMA Journal of Sciences, 9(1), 234-239. [Crossref]
ZeinEldin, R. A., Chesneau, C., Jamal, F., & Elgarhy, M. (2019). Different estimation techniques for Type I Half-Logistic Topp-Leone Distribution. Mathematics, 7(10), 985. [Crossref]