Open Access

ARTICLE


Type-I Heavy-Tailed Burr XII Distribution with Applications to Quality Control, Skewed Reliability Engineering Systems and Lifetime Data

Okechukwu J. Obulezi1,*, Hatem E. Semary2, Sadia Nadir3, Chinyere P. Igbokwe4, Gabriel O. Orji1, A. S. Al-Moisheer2, Mohammed Elgarhy5

1 Department of Statistics, Faculty of Physical Sciences, Nnamdi Azikiwe University, Awka, P.O. Box 5025, Nigeria
2 Department of Mathematics and Statistics, Faculty of Science, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, 11432, Saudi Arabia
3 Department of Mathematics and Statistics, Faculty of Engineering and Applied Sciences, Riphah International University, Islamabad, 44000, Pakistan
4 Department of Statistics, School of Computer Science and Engineering, Lovely Professional University, Phagwara, 144411, Punjab, India
5 Department of Basic Sciences, Higher Institute of Administrative Sciences, Belbeis, AlSharkia, 44621, Egypt

* Corresponding Author: Okechukwu J. Obulezi. Email: email

Computer Modeling in Engineering & Sciences 2025, 144(3), 2991-3027. https://doi.org/10.32604/cmes.2025.069553

Abstract

This study introduces the type-I heavy-tailed Burr XII (TIHTBXII) distribution, a highly flexible and robust statistical model designed to address the limitations of conventional distributions in analyzing data characterized by skewness, heavy tails, and diverse hazard behaviors. We meticulously develop the TIHTBXII’s mathematical foundations, including its probability density function (PDF), cumulative distribution function (CDF), and essential statistical properties, crucial for theoretical understanding and practical application. A comprehensive Monte Carlo simulation evaluates four parameter estimation methods: maximum likelihood (MLE), maximum product spacing (MPS), least squares (LS), and weighted least squares (WLS). The simulation results consistently show that as sample sizes increase, the Bias and RMSE of all estimators decrease, with WLS and LS often demonstrating superior and more stable performance. Beyond theoretical development, we present a practical application of the TIHTBXII distribution in constructing a group acceptance sampling plan (GASP) for truncated life tests. This application highlights how the TIHTBXII model can optimize quality control decisions by minimizing the average sample number (ASN) while effectively managing consumer and producer risks. Empirical validation using real-world datasets, including “Active Repair Duration,” “Groundwater Contaminant Measurements,” and “Dominica COVID-19 Mortality,” further demonstrates the TIHTBXII’s superior fit compared to existing models. Our findings confirm the TIHTBXII distribution as a powerful and reliable alternative for accurately modeling complex data in fields such as reliability engineering and quality assessment, leading to more informed and robust decision-making.

Keywords

Acceptance sampling; heavy-tailed models; parameter estimation; reliability engineering

1  Introduction

In the analysis of real-world phenomena, rare events and complex hazard behaviors are common, necessitating the use of flexible statistical distributions. Classical models such as the normal, exponential, Weibull, and Rayleigh often prove inadequate in modeling data characterized by significant skewness, heavy tails, or non-monotonic hazard functions. Consequently, the development of more versatile distributions, particularly those with heavy-tailed behavior, has been a central focus of modern applied statistics.

Heavy-tailed distributions, which allocate a higher probability mass to extreme outcomes than Gaussian models, have been observed across a wide range of disciplines. In finance, for example, asset prices and stock returns frequently exhibit fat-tailed tendencies, with the probability of large gains or losses exceeding what would be predicted by a normal distribution [1]. This has led to the adoption of stable laws to better explain high market volatility. Similarly, in actuarial science, catastrophic claims in insurance are a classic example of heavy-tailed behavior, particularly when modeling loss severity [2]. Environmental and natural systems also display these characteristics; hydrological records, rainfall intensities, and seismic events often follow distributions with slowly decaying tails [3,4]. In medicine and biology, skewed and extreme-value data, such as tumor sizes and survival times, similarly require heavy-tailed models for accurate analysis [5–7].

Formally, a distribution with distribution function F(x) is defined as heavy-tailed if its survival function \bar{F}(x) = 1 - F(x) converges to zero more slowly than any exponential rate, that is,

\lim_{x \to \infty} e^{\lambda x}\,\bar{F}(x) = \infty \quad \text{for all } \lambda > 0.

This implies that heavy-tailed distributions place significantly more probability mass in the tails than exponential-type distributions. Classic examples include the Cauchy distribution by Cauchy [8], Student’s t distribution by Student [9], Fréchet distribution by Fréchet [10], and the Pareto family of distributions by Pareto [11], and Arnold [12], which includes the Pareto II (Lomax) distribution by Lomax [13]. A specific and widely-studied type of heavy-tailed distribution is the power-law distribution, characterized by a survival function that decays as a power of x for large x:

\bar{F}(x) \sim x^{-\alpha} \quad \text{as } x \to \infty,

where \alpha > 0 is the tail index. This slow, polynomial decay of the tail is a defining feature that distinguishes power-law distributions from those with exponential tails. Given that empirical data frequently violate the assumptions of standard models like the exponential, gamma, and Weibull distributions, researchers have increasingly turned to advanced generalization strategies such as transformation techniques, compounding, and mixture models to construct distributions with superior tail behavior and greater structural flexibility; see [14–16] as well as [17–20].
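To make the tail criterion concrete, the following minimal Python sketch (ours, not from the paper) multiplies two survival functions by e^{λx} for an arbitrary λ = 0.1: the product vanishes for an exponential tail but diverges for a Pareto-type power law with tail index α = 2.

```python
import math

# Survival functions: exponential (rate 1) vs. a Pareto-type power law (alpha = 2).
exp_sf = lambda x: math.exp(-x)
pareto_sf = lambda x: x ** -2.0 if x > 1.0 else 1.0

# Heavy-tail criterion: e^(lam*x) * sf(x) should diverge for every lam > 0.
lam = 0.1
ratios_exp = [math.exp(lam * x) * exp_sf(x) for x in (10.0, 50.0, 100.0)]
ratios_pareto = [math.exp(lam * x) * pareto_sf(x) for x in (10.0, 50.0, 100.0)]
```

The exponential sequence shrinks toward zero while the Pareto sequence grows without bound, matching the definition above.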

Among these flexible models, the Burr Type XII (BXII) distribution has been particularly influential. Introduced by Burr [21] as part of a system of distributions designed to fit a wide range of empirical data, it is a continuous probability distribution for a non-negative random variable x ≥ 0. Singh [22] demonstrated the applicability of Burr-type distributions in different data scenarios. The conventional Burr Type XII distribution is characterized by two positive shape parameters, commonly represented as c and k, and its high degree of flexibility is well known. By altering the shape parameters, the BXII distribution can take on many forms and approximate numerous other common distributions, such as the log-logistic, Lomax (Pareto Type II), and Weibull, and can even approximate the normal distribution under certain conditions. It offers a parsimonious yet powerful framework for analyzing data across diverse fields, including reliability, survival analysis, and econometrics.

The ongoing demand for models capable of capturing complex data characteristics has led to a proliferation of BXII generalizations. These extensions often enhance the original BXII’s flexibility by adding new parameters or by embedding it within a larger family of distributions. For example, Gad et al. [23] introduced the Burr XII-Burr XII (BXII-BXII) distribution, a compound model that demonstrated a superior fit to certain datasets compared to the baseline BXII. The literature is rich with such constructions, which can be broadly categorized by their methodological approach.

One common strategy is the use of general families of distributions. The T-X family by Alzaatreh et al. [24], for instance, has been used to derive the Burr XII-moment exponential (BXII-ME) distribution, which offers greater flexibility for modeling lifetime data [25]. Similarly, the Kumaraswamy-G family has led to the Kumaraswamy Burr XII (KBXII) distribution, a four-parameter model with a remarkable ability to accommodate diverse functional shapes and hazard rate behaviors [26]. Another important approach involves the Marshall-Olkin extended family, which produced the Marshall-Olkin extended Burr Type XII (MOEBXII) distribution to provide enhanced flexibility for reliability and survival analysis [27]. For more details see [28–30] as well as [31–33].

Other specialized extensions of the BXII distribution have been developed to address specific challenges. Ocloo [34] introduced a novel extension of the Burr XII distribution, demonstrating its enhanced capability to model data with complex characteristics such as bimodality and heavy tails, and providing a better fit than many existing BXII extensions. Recent innovations also include the Sine-Burr XII distribution, which utilizes a trigonometric transformation to add flexibility without introducing new parameters [35], and the Odd Burr XII Gompertz (OBXIIGo) distribution, which provides a more adaptable model for lifetime data by integrating the BXII family with the Gompertz distribution [36].

The work of Gad et al. [23] and other similar studies underscore a critical motivation in modern statistical modeling: the development of new distributions that can effectively capture complex data structures not adequately described by classical models. The literature reviewed herein indicates a clear trend toward creating higher-order BXII-based models that offer greater flexibility in their distributional shapes and hazard rate functions. These models consistently serve as a better alternative to their classical counterparts, a finding supported by statistical goodness-of-fit metrics.

One such area where the choice of distribution is paramount is in acceptance sampling, particularly within truncated life-testing experiments. In specialized manufacturing or medical diagnosis, using simple, stratified, or cluster sampling may be insufficient. Group acceptance sampling plans (GASPs) provide an efficient alternative by examining several units in batches, conserving resources. The efficacy of a GASP, however, hinges on the choice of the underlying product lifetime distribution. An ill-suited model can lead to misclassifying good or bad lots, resulting in either excessive consumer risk or uneconomical producer rejection. This has prompted researchers to propose GASPs under more generalized distributions to overcome the limitations of standard models. For example, GASPs have been developed for truncated life tests based on the inverse Rayleigh and log-logistic distributions by Aslam & Jun [37], the Marshall-Olkin extended Lomax distribution by Rao [38], and the inverse Weibull distribution with median lifetime as a quality measure by Singh & Tripathi [39]. Other recent studies have employed generalized distributions like the AG-transmuted exponential by Almarashi & Khan [40], transmuted exponential by Owoloko et al. [41], transmuted Rayleigh by Saha et al. [42], and exponential-logarithmic distributions by Ameeq et al. [43] in the GASP framework. This body of research demonstrates that superior modeling of the lifetime distribution directly leads to more efficient and risk-balanced sampling decisions. Most recently, Ekemezie et al. [44] presented the odd Perks-Lomax (OPL) distribution and used it to construct a GASP, showcasing how a novel, versatile distribution can enhance both statistical theory and practical application simultaneously.

Building on this demonstrated need for more intricate models, this research proposes and thoroughly investigates a novel statistical distribution: the Type-I Heavy-Tailed Burr Type XII (TIHTBXII) distribution. The primary motivation is to address the persistent challenges in adequately modeling real-world data with extreme skewness, heavy tails, and complex, non-monotonic hazard functions, for which existing generalized distributions still exhibit limitations. We not only meticulously derive the mathematical properties of the TIHTBXII distribution and examine various parameter estimation procedures but, more significantly, we demonstrate its practical utility by constructing a group acceptance sampling plan (GASP) specifically tailored to truncated life tests based on this new model. This new tool leverages the TIHTBXII distribution’s excellent capability to provide more accurate and reliable data, thereby enabling better decision-making in fields like quality control, reliability engineering, and lifetime data analysis, where current models may fail to capture the true underlying risk mechanisms.

The rest of this paper is organized as follows: In Section 2, we define the basic functions of the proposed TIHTBXII model, including its probability density function (PDF), cumulative distribution function (CDF), survival function, and hazard function. Section 3 is dedicated to the derivation of several key characteristics of the TIHTBXII model, such as its quantile function, moments, incomplete moments, moment-generating function, mean residual life function, entropy, extropy, and order statistics. In Section 4, we discuss the estimation of the model parameters using maximum likelihood, maximum product spacing, least squares, and weighted least squares methods. Section 5 presents a Monte Carlo simulation study conducted with four different parameter settings, with the bias and root mean square error (RMSE) also plotted to support the numerical results. In Section 6, we design a group acceptance sampling plan (GASP) to illustrate how the TIHTBXII model can optimize quality control decisions by minimizing the average sample number. Finally, Section 7 reports the applications of the proposed model to real datasets, including the duration of active repairs of airborne communication transceivers, groundwater contaminant measurements, and Dominica COVID-19 mortality rate data, while Section 8 provides the conclusion of this study.

2  Development of the Type-I Heavy-Tailed Burr XII Distribution

The Burr XII distribution has cumulative distribution function (CDF) and probability density function (PDF), respectively, given as

F(x) = 1 - (1 + x^c)^{-k}; \quad x > 0,\ c, k > 0, \qquad (1)

and

f(x) = c\,k\,x^{c-1}(1 + x^c)^{-(k+1)}. \qquad (2)

Reference [45] proposed the Type-I heavy-tailed (TI-HT) family of distributions with CDF and PDF, respectively, given as

G(x;\theta,\xi) = 1 - \left(\frac{1 - F(x;\xi)}{1 - (1-\theta)F(x;\xi)}\right)^{\theta}; \quad x \in \mathbb{R},\ \theta > 0, \qquad (3)

and

g(x;\theta,\xi) = \frac{\theta^2 f(x;\xi)\,\{1 - F(x;\xi)\}^{\theta-1}}{\{1 - (1-\theta)F(x;\xi)\}^{\theta+1}}, \qquad (4)

where F(x;ξ) is the baseline distribution which depends on ξ. Introducing Eqs. (1) and (2) into (3) and (4) produces a special distribution referred to as type-I heavy-tailed Burr XII (TIHTBXII) distribution with CDF and PDF respectively defined as

G(x;\theta,c,k) = 1 - \left[1 - \theta\{1 - (1 + x^c)^{k}\}\right]^{-\theta}; \quad x > 0,\ \theta, c, k > 0, \qquad (5)

and

g(x;\theta,c,k) = \theta^2 c\,k\,x^{c-1}(1 + x^c)^{k-1}\left[1 - \theta\{1 - (1 + x^c)^{k}\}\right]^{-(\theta+1)}. \qquad (6)

The survival function is

s(x;\theta,c,k) = \left[1 - \theta\{1 - (1 + x^c)^{k}\}\right]^{-\theta}; \quad x > 0,\ \theta, c, k > 0. \qquad (7)

The hazard function is

h(x;\theta,c,k) = \theta^2 c\,k\,x^{c-1}(1 + x^c)^{k-1}\left[1 - \theta\{1 - (1 + x^c)^{k}\}\right]^{-1}; \quad x > 0,\ \theta, c, k > 0. \qquad (8)

The proposed TIHTBXII distribution aims to improve model flexibility, provide the best fit to real-world data, and accommodate heavy-tailed data in fields such as reliability engineering, medical and financial sciences, among others.
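For concreteness, Eqs. (5)–(8) translate directly into code. The following minimal Python sketch (function names are ours, not from the paper) implements the CDF, PDF, survival, and hazard functions and can be used to reproduce plots such as those in Fig. 1:

```python
import math

def kernel(x, theta, c, k):
    # Common term A(x) = 1 - theta*{1 - (1 + x^c)^k}, shared by Eqs. (5)-(8)
    return 1.0 - theta * (1.0 - (1.0 + x ** c) ** k)

def cdf(x, theta, c, k):
    # Eq. (5)
    return 1.0 - kernel(x, theta, c, k) ** (-theta)

def pdf(x, theta, c, k):
    # Eq. (6)
    return (theta ** 2 * c * k * x ** (c - 1.0) * (1.0 + x ** c) ** (k - 1.0)
            * kernel(x, theta, c, k) ** (-(theta + 1.0)))

def survival(x, theta, c, k):
    # Eq. (7)
    return kernel(x, theta, c, k) ** (-theta)

def hazard(x, theta, c, k):
    # Eq. (8); algebraically equal to pdf(x)/survival(x)
    return (theta ** 2 * c * k * x ** (c - 1.0) * (1.0 + x ** c) ** (k - 1.0)
            / kernel(x, theta, c, k))
```

Since h(x) = g(x)/s(x), the last function can be cross-checked against the ratio of the PDF and survival functions.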

Fig. 1a and b shows the PDF and hazard function, respectively, for various combinations of the parameter values. The density plots display left-skewed behaviour, both low and high peaks, and bump shapes, demonstrating that the TIHTBXII distribution can be utilized to model different lifetime events. The hazard rate plots depict bump, bathtub, L, and J shapes as well as a strictly non-decreasing shape, which again indicates that the TIHTBXII model is suitable for modeling a variety of lifetime events.


Figure 1: Plots of (a) PDF, (b) Hazard function of the TIHTBXII distribution

3  Characterization of the TIHTBXII Model

In this section, we study some basic properties of the new distribution, including the quantile function, moments and incomplete moments, the moment generating function, order statistics, entropy and extropy, and the mean residual life function.

3.1 Quantile Function

The quantile function Q(u) = G^{-1}(u), for 0 < u < 1, is derived as:

Q(u) = \left\{\left[1 - \frac{1 - (1-u)^{-1/\theta}}{\theta}\right]^{1/k} - 1\right\}^{1/c}. \qquad (9)

In inference, the advantage of Q(u) lies in generating random samples and in simulation studies generally. Further, various quantile-based measures, such as the mean, variance, skewness, and kurtosis, can be obtained.
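Random variates follow by inverse transform: X = Q(U) with U ~ Uniform(0, 1). A minimal Python sketch (helper names are ours; the parameter values are those of Scenario I of the simulation study):

```python
import math
import random

def quantile(u, theta, c, k):
    # Eq. (9): Q(u) = {[1 - (1 - (1-u)^(-1/theta))/theta]^(1/k) - 1}^(1/c)
    inner = (1.0 - (1.0 - (1.0 - u) ** (-1.0 / theta)) / theta) ** (1.0 / k) - 1.0
    return inner ** (1.0 / c)

def rtihtbxii(n, theta, c, k, rng):
    # Inverse-transform sampling: feed uniform draws through Q(u)
    return [quantile(rng.random(), theta, c, k) for _ in range(n)]

theta, c, k = 0.5, 1.75, 2.75
median = quantile(0.5, theta, c, k)
sample = rtihtbxii(1000, theta, c, k, random.Random(42))
```

Because Q is strictly increasing, about half of any large sample should fall below the median value Q(0.5).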

3.2 Moment

The r-th crude moment of a continuous random variable X \sim \text{TIHTBXII}(\theta, c, k) can be expressed as follows:

\mu_r' = E(X^r) = \int_{-\infty}^{\infty} x^r g(x)\,dx = \theta^2 ck \int_0^{\infty} x^{r+c-1}(1+x^c)^{k-1}\left[1 - \theta\{1 - (1+x^c)^k\}\right]^{-(\theta+1)} dx.

Using series expansion

(1 + x^c)^{k-1} = \sum_{i=0}^{\infty}\binom{k-1}{i}x^{ic}; \quad \text{for } |x| < 1.

For \theta \in \mathbb{R} (or \mathbb{C}), the generalized binomial theorem provides that

\left[1 - \theta\{1 - (1+x^c)^k\}\right]^{-(\theta+1)} = \sum_{j=0}^{\infty}\binom{\theta+j}{j}\theta^j\left[1 - (1+x^c)^k\right]^j.

Also, for j \in \mathbb{R}\setminus\mathbb{N}_0 and |x| < 1, the standard binomial expansion provides that

\left[1 - (1+x^c)^k\right]^j = \sum_{l=0}^{\infty}(-1)^l\binom{j}{l}(1+x^c)^{lk}.

Similarly,

(1+x^c)^{lk} = \sum_{m=0}^{\infty}\binom{lk}{m}x^{mc}.

By back substitution,

\mu_r' = \theta^2 ck \sum_{i=0}^{\infty}\sum_{j=0}^{\infty}\sum_{l=0}^{\infty}\sum_{m=0}^{\infty}(-1)^l\binom{k-1}{i}\binom{\theta+j}{j}\binom{j}{l}\binom{lk}{m}\theta^j\int_0^{\infty}x^{r+c+ic+mc-1}\,dx.

Notice that

\binom{j}{l} = \frac{j!}{l!\,(j-l)!} = \frac{P(j,l)}{l!}; \quad \text{recall } e^{-x} = \sum_{l=0}^{\infty}\frac{(-1)^l x^l}{l!},

so that

\mu_r' = \theta^2 ck \sum_{i=0}^{\infty}\sum_{j=0}^{\infty}\sum_{m=0}^{\infty}\binom{k-1}{i}\binom{\theta+j}{j}\binom{lk}{m}\theta^j P(j,l)\int_0^{\infty}\left[\sum_{l=0}^{\infty}\frac{(-1)^l x^l}{l!}\right]\frac{1}{x^l}\,x^{r+c+ic+mc-1}\,dx = \theta^2 ck \sum_{i=0}^{\infty}\sum_{j=0}^{\infty}\sum_{m=0}^{\infty}\binom{k-1}{i}\binom{\theta+j}{j}\binom{lk}{m}\theta^j P(j,l)\int_0^{\infty}x^{r+c+ic+mc-l-1}e^{-x}\,dx.

Therefore,

\mu_r' = \theta^2 ck \sum_{i=0}^{\infty}\sum_{j=0}^{\infty}\sum_{m=0}^{\infty}\binom{k-1}{i}\binom{\theta+j}{j}\binom{lk}{m}\theta^j P(j,l)\,\Gamma(r+c+ic+mc-l); \quad r = 1, 2, \ldots \qquad (10)
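The multiple series in Eq. (10) is awkward to truncate in practice. Because X = Q(U) for uniform U, the r-th moment can instead be cross-checked numerically as E[X^r] = ∫₀¹ Q(u)^r du, which is finite whenever r < ckθ (the tail index). A hedged sketch (midpoint rule; the grid size is our arbitrary choice):

```python
import math

def quantile(u, theta, c, k):
    # Eq. (9): inverse CDF of the TIHTBXII distribution
    inner = (1.0 - (1.0 - (1.0 - u) ** (-1.0 / theta)) / theta) ** (1.0 / k) - 1.0
    return inner ** (1.0 / c)

def moment(r, theta, c, k, grid=20000):
    # E[X^r] = integral over (0,1) of Q(u)^r du; finite only when r < c*k*theta
    h = 1.0 / grid
    return sum(quantile((i + 0.5) * h, theta, c, k) ** r * h for i in range(grid))
```

At θ = 1 the model reduces to the baseline Burr XII of Eq. (1), whose mean at c = k = 2 equals π/4; the quadrature reproduces this to about three decimals, a convenient sanity check.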

3.3 Incomplete Moment

For a continuous random variable X \sim \text{TIHTBXII}(\theta, c, k), the r-th incomplete moment about the origin is given by:

\mu_r'(x_0) = \int_0^{x_0} x^r g(x;\theta,c,k)\,dx.

Substitute into the incomplete moment:

\mu_r'(x_0) = \theta^2 ck \int_0^{x_0} x^{r+c-1}(1+x^c)^{k-1}\left[1 - \theta\left(1 - (1+x^c)^k\right)\right]^{-(\theta+1)} dx.

Using the generalized binomial expansion:

(1 + x^c)^{k-1} = \sum_{i=0}^{\infty}\binom{k-1}{i}x^{ic}.

Expanding the second term yields

\left[1 - \theta\left(1 - (1+x^c)^k\right)\right]^{-(\theta+1)} = \sum_{j=0}^{\infty}\binom{\theta+j}{j}\theta^j\left[1 - (1+x^c)^k\right]^j.

and

\left[1 - (1+x^c)^k\right]^j = \sum_{l=0}^{\infty}(-1)^l\binom{j}{l}(1+x^c)^{lk}.

Also,

(1+x^c)^{lk} = \sum_{m=0}^{\infty}\binom{lk}{m}x^{mc}.

The integrand becomes a power series in x:

x^{r+c-1}\sum_{i=0}^{\infty}\binom{k-1}{i}x^{ic}\sum_{j=0}^{\infty}\binom{\theta+j}{j}\theta^j\sum_{l=0}^{\infty}(-1)^l\binom{j}{l}\sum_{m=0}^{\infty}\binom{lk}{m}x^{mc}.

Now collect all powers of x, and integrate:

\mu_r'(x_0) = \theta^2 ck \sum_{i,j,l,m}(-1)^l\binom{k-1}{i}\binom{\theta+j}{j}\theta^j\binom{j}{l}\binom{lk}{m}\int_0^{x_0}x^{r+c-1+ic+mc}\,dx, \qquad \int_0^{x_0}x^{r+c-1+ic+mc}\,dx = \frac{x_0^{\,r+c+ic+mc}}{r+c+ic+mc}.

Hence,

\mu_r'(x_0) = \theta^2 ck \sum_{i=0}^{\infty}\sum_{j=0}^{\infty}\sum_{l=0}^{\infty}\sum_{m=0}^{\infty}(-1)^l\binom{k-1}{i}\binom{\theta+j}{j}\theta^j\binom{j}{l}\binom{lk}{m}\frac{x_0^{\,r+c+ic+mc}}{r+c+ic+mc}. \qquad (11)

3.4 Moment Generating Function

Let t \in \mathbb{R}. The moment generating function (MGF) M_X(t) of a continuous random variable X \sim \text{TIHTBXII}(\theta, c, k) is defined as

M_X(t) = E(e^{tX}) = \int_{-\infty}^{\infty} e^{tx} g(x)\,dx = \theta^2 ck \int_0^{\infty} x^{c-1}(1+x^c)^{k-1}e^{tx}\left[1 - \theta\{1 - (1+x^c)^k\}\right]^{-(\theta+1)} dx.

Using series expansion

(1 + x^c)^{k-1} = \sum_{i=0}^{\infty}\binom{k-1}{i}x^{ic}; \quad \text{for } |x| < 1.

For θR or C, a generalized binomial theorem provides that

\left[1 - \theta\{1 - (1+x^c)^k\}\right]^{-(\theta+1)} = \sum_{j=0}^{\infty}\binom{\theta+j}{j}\theta^j\left[1 - (1+x^c)^k\right]^j.

Also, for jR/N0 and |x|<1, standard binomial expansion provides that

\left[1 - (1+x^c)^k\right]^j = \sum_{l=0}^{\infty}(-1)^l\binom{j}{l}(1+x^c)^{lk}.

Similarly,

(1+x^c)^{lk} = \sum_{m=0}^{\infty}\binom{lk}{m}x^{mc}; \qquad e^{tx} = \sum_{n=0}^{\infty}\frac{(tx)^n}{n!}.

By back substitution,

M_X(t) = \theta^2 ck \sum_{i=0}^{\infty}\sum_{j=0}^{\infty}\sum_{l=0}^{\infty}\sum_{m=0}^{\infty}\sum_{n=0}^{\infty}\frac{(-1)^l t^n}{n!}\binom{k-1}{i}\binom{\theta+j}{j}\binom{j}{l}\binom{lk}{m}\theta^j\int_0^{\infty}x^{n+c+ic+mc-1}\,dx.

Notice that

\binom{j}{l} = \frac{j!}{l!\,(j-l)!} = \frac{P(j,l)}{l!}; \quad \text{recall } e^{-x} = \sum_{l=0}^{\infty}\frac{(-1)^l x^l}{l!},

so that

M_X(t) = \theta^2 ck \sum_{i=0}^{\infty}\sum_{j=0}^{\infty}\sum_{m=0}^{\infty}\sum_{n=0}^{\infty}\binom{k-1}{i}\binom{\theta+j}{j}\binom{lk}{m}\frac{\theta^j P(j,l)\,t^n}{n!}\int_0^{\infty}\left[\sum_{l=0}^{\infty}\frac{(-1)^l x^l}{l!}\right]\frac{1}{x^l}\,x^{n+c+ic+mc-1}\,dx = \theta^2 ck \sum_{i=0}^{\infty}\sum_{j=0}^{\infty}\sum_{m=0}^{\infty}\sum_{n=0}^{\infty}\binom{k-1}{i}\binom{\theta+j}{j}\binom{lk}{m}\frac{\theta^j P(j,l)\,t^n}{n!}\int_0^{\infty}x^{n+c+ic+mc-l-1}e^{-x}\,dx.

Therefore,

M_X(t) = \theta^2 ck \sum_{i=0}^{\infty}\sum_{j=0}^{\infty}\sum_{m=0}^{\infty}\sum_{n=0}^{\infty}\binom{k-1}{i}\binom{\theta+j}{j}\binom{lk}{m}\frac{\theta^j P(j,l)\,t^n}{n!}\,\Gamma(n+c+ic+mc-l). \qquad (12)

3.5 Mean Residual Life Function

The mean residual life (MRL) function of a non-negative continuous random variable X is defined as

m(t) = E[X - t \mid X > t] = \frac{1}{\bar{G}(t)}\int_t^{\infty}\bar{G}(x)\,dx, \quad t \ge 0, \qquad (13)

where \bar{G}(t) = 1 - G(t) is the survival function. This function quantifies the expected remaining lifetime given survival up to time t. Suppose X \sim \text{TIHTBXII}(\theta, c, k); then the MRL is obtained as follows:

m(t) = \left[1 - \theta + \theta(1+t^c)^k\right]^{\theta}\int_t^{\infty}\left[1 - \theta + \theta(1+x^c)^k\right]^{-\theta}dx.

Recall that for n > 0 and |x| < 1, the generalized binomial expansion provides that

(1+x)^{-n} = \sum_{r=0}^{\infty}(-1)^r\binom{n+r-1}{r}x^{r},

hence

m(t) = \left[1 - \theta + \theta(1+t^c)^k\right]^{\theta}\int_t^{\infty}\sum_{r=0}^{\infty}(-1)^r\binom{\theta+r-1}{r}\theta^r\left[1 - (1+x^c)^k\right]^r dx.

Similarly, by expanding \left[1 - (1+x^c)^k\right]^r, we have

m(t) = \left[1 - \theta + \theta(1+t^c)^k\right]^{\theta}\int_t^{\infty}\sum_{r=0}^{\infty}\sum_{s=0}^{\infty}(-1)^{r+s}\binom{\theta+r-1}{r}\binom{r}{s}\theta^r(1+x^c)^{ks}\,dx.

Also, by expanding (1+x^c)^{ks}, we obtain

m(t) = \left[1 - \theta + \theta(1+t^c)^k\right]^{\theta}\int_t^{\infty}\sum_{r=0}^{\infty}\sum_{s=0}^{\infty}(-1)^{r+s}\binom{\theta+r-1}{r}\binom{r}{s}\theta^r\sum_{y=0}^{\infty}\binom{ks}{y}x^{yc}\,dx = \left[1 - \theta + \theta(1+t^c)^k\right]^{\theta}\sum_{r=0}^{\infty}\sum_{s=0}^{\infty}\sum_{y=0}^{\infty}(-1)^{r+s}\binom{\theta+r-1}{r}\binom{r}{s}\binom{ks}{y}\theta^r\int_t^{\infty}x^{yc}\,dx.

Recall

\binom{r}{s} = \frac{r!}{s!\,(r-s)!} = \frac{P(r,s)}{s!}; \quad \text{and } e^{-x} = \sum_{s=0}^{\infty}\frac{(-1)^s x^s}{s!}, \text{ so that} \quad m(t) = \left[1 - \theta + \theta(1+t^c)^k\right]^{\theta}\int_t^{\infty}\sum_{r=0}^{\infty}\sum_{y=0}^{\infty}\left[\sum_{s=0}^{\infty}\frac{(-1)^s x^s}{s!}\right]\frac{1}{x^s}(-1)^r\binom{\theta+r-1}{r}\binom{ks}{y}P(r,s)\,\theta^r x^{yc}\,dx = \left[1 - \theta + \theta(1+t^c)^k\right]^{\theta}\sum_{r=0}^{\infty}\sum_{y=0}^{\infty}(-1)^r\binom{\theta+r-1}{r}\binom{ks}{y}P(r,s)\,\theta^r\int_t^{\infty}x^{yc-s}e^{-x}\,dx.

Therefore, the MRL function is

m(t) = \left[1 - \theta + \theta(1+t^c)^k\right]^{\theta}\sum_{r=0}^{\infty}\sum_{y=0}^{\infty}(-1)^r\binom{\theta+r-1}{r}\binom{ks}{y}P(r,s)\,\theta^r\,\Gamma(yc-s+1,\,t), \qquad (14)
where \Gamma(\cdot,\,t) denotes the upper incomplete gamma function.
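Eq. (14) can be validated against the defining integral in Eq. (13) by numerically integrating the survival function of Eq. (7). A sketch (ours; the tail truncation point and step count are arbitrary choices):

```python
import math

def survival(x, theta, c, k):
    # Eq. (7): S(x) = [1 - theta*{1 - (1 + x^c)^k}]^(-theta)
    return (1.0 - theta * (1.0 - (1.0 + x ** c) ** k)) ** (-theta)

def mrl(t, theta, c, k, upper=200.0, steps=100000):
    # Eq. (13): m(t) = (1/S(t)) * integral of S(x) from t to infinity,
    # with the tail truncated at `upper` (midpoint rule)
    h = (upper - t) / steps
    integral = sum(survival(t + (i + 0.5) * h, theta, c, k) * h for i in range(steps))
    return integral / survival(t, theta, c, k)
```

At θ = 1 the model reduces to Burr XII; with c = k = 2 the mean residual life at t = 0 equals the Burr XII mean π/4, which the quadrature reproduces.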

3.6 Entropy

Entropy is a fundamental concept in information theory and statistical mechanics, used to quantify the uncertainty or randomness associated with a probability distribution. For a given probability density function g(x), the Rényi entropy of order q>0, q1, is defined as:

H_q(x) = \frac{1}{1-q}\log\left\{\int g^q(x)\,dx\right\}.

In this section, we derive the Rényi entropy of the proposed distribution. Due to the complexity of the density function, the integral involved is analytically intractable in closed form. However, by applying the generalized binomial expansion and power series techniques, we express the entropy in terms of infinite series and known special functions, particularly the Gamma function.

H_q(x) = \frac{1}{1-q}\log\left\{\int g^q(x)\,dx\right\} = \frac{1}{1-q}\log\left\{(\theta^2 ck)^q\int_0^{\infty}x^{q(c-1)}(1+x^c)^{q(k-1)}\left[1 - \theta\left(1 - (1+x^c)^k\right)\right]^{-q(\theta+1)}dx\right\}.

Using the generalized binomial expansion when q(\theta+1) \in \mathbb{R} and |x| < 1, it can be shown that

\left[1 - \theta\left(1 - (1+x^c)^k\right)\right]^{-q(\theta+1)} = \sum_{i=0}^{\infty}\sum_{j=0}^{\infty}\sum_{l=0}^{\infty}(-1)^j\binom{q(\theta+1)+i-1}{i}\binom{i}{j}\binom{jk}{l}\theta^i x^{lc}.

Similarly,

(1+x^c)^{q(k-1)} = \sum_{h=0}^{\infty}\binom{q(k-1)}{h}x^{hc}.

Therefore,

H_q(x) = \frac{1}{1-q}\log\left\{(\theta^2 ck)^q\sum_{i=0}^{\infty}\sum_{j=0}^{\infty}\sum_{l=0}^{\infty}\sum_{h=0}^{\infty}(-1)^j\binom{q(\theta+1)+i-1}{i}\binom{i}{j}\binom{jk}{l}\binom{q(k-1)}{h}\theta^i\int_0^{\infty}x^{qc-q+hc+lc}\,dx\right\}.

Recall that

\binom{i}{j} = \frac{i!}{j!\,(i-j)!} = \frac{P(i,j)}{j!}; \quad \text{and } e^{-x} = \sum_{j=0}^{\infty}\frac{(-1)^j x^j}{j!},

H_q(x) = \frac{1}{1-q}\log\left\{(\theta^2 ck)^q\sum_{i=0}^{\infty}\sum_{l=0}^{\infty}\sum_{h=0}^{\infty}P(i,j)\,\theta^i\binom{q(\theta+1)+i-1}{i}\binom{jk}{l}\binom{q(k-1)}{h}\int_0^{\infty}x^{qc-q+hc+lc-j}e^{-x}\,dx\right\} = \frac{1}{1-q}\log\left\{(\theta^2 ck)^q\sum_{i=0}^{\infty}\sum_{l=0}^{\infty}\sum_{h=0}^{\infty}P(i,j)\,\theta^i\binom{q(\theta+1)+i-1}{i}\binom{jk}{l}\binom{q(k-1)}{h}\Gamma(qc-q+hc+lc-j+1)\right\}.

3.7 Extropy

This subsection defines extropy, a measure of order or information content within a system, serving as a dual to the more commonly known concept of entropy. We consider a specific mathematical formulation of extropy, denoted by the functional J(g). The initial representation of J(g) is an infinite series in which each term involves an integral of a power of the density g(x). To evaluate this integral analytically, we systematically apply the generalized binomial expansion to several nested terms, including \left[1 - \theta\{1 - (1+x^c)^k\}\right]^{-n(\theta+1)}, \left[1 - (1+x^c)^k\right]^{i}, and (1+x^c)^{jk}. This decomposition transforms the original integral into a convergent multifold summation. The final integration step then yields an expression for J(g) in terms of the Gamma function, \Gamma(z), providing a complete analytical series expansion.

J(g) = \sum_{n=2}^{\infty}\frac{1}{n(n-1)}\int_0^{\infty}g^n(x)\,dx = \sum_{n=2}^{\infty}\frac{(\theta^2 ck)^n}{n(n-1)}\int_0^{\infty}x^{n(c-1)}(1+x^c)^{n(k-1)}\left[1 - \theta\{1 - (1+x^c)^k\}\right]^{-n(\theta+1)}dx.

Using the generalized binomial expansion, for n(\theta+1) \in \mathbb{R} and |x| < 1,

\left[1 - \theta\{1 - (1+x^c)^k\}\right]^{-n(\theta+1)} = \sum_{i=0}^{\infty}\binom{n(\theta+1)+i-1}{i}\theta^i\left[1 - (1+x^c)^k\right]^i.

Also,

\left[1 - (1+x^c)^k\right]^i = \sum_{j=0}^{\infty}(-1)^j\binom{i}{j}(1+x^c)^{jk}.

Similarly,

(1+x^c)^{jk} = \sum_{h=0}^{\infty}\binom{jk}{h}x^{hc}; \quad \text{and } (1+x^c)^{n(k-1)} = \sum_{l=0}^{\infty}\binom{n(k-1)}{l}x^{lc}.

So that,

J(g) = \sum_{n=2}^{\infty}\frac{(\theta^2 ck)^n}{n(n-1)}\sum_{i=0}^{\infty}\sum_{j=0}^{\infty}\sum_{h=0}^{\infty}\sum_{l=0}^{\infty}(-1)^j\binom{i}{j}\binom{n(k-1)}{l}\binom{jk}{h}\binom{n(\theta+1)+i-1}{i}\theta^i\int_0^{\infty}x^{nc-n+lc+hc}\,dx.

Since,

\binom{i}{j} = \frac{i!}{j!\,(i-j)!} = \frac{P(i,j)}{j!}; \quad e^{-x} = \sum_{j=0}^{\infty}\frac{(-1)^j x^j}{j!},

J(g) = \sum_{n=2}^{\infty}\frac{(\theta^2 ck)^n}{n(n-1)}\sum_{i=0}^{\infty}\sum_{l=0}^{\infty}\sum_{h=0}^{\infty}\binom{n(k-1)}{l}\binom{jk}{h}\binom{n(\theta+1)+i-1}{i}\theta^i P(i,j)\int_0^{\infty}\left[\sum_{j=0}^{\infty}\frac{(-1)^j x^j}{j!}\right]\frac{1}{x^j}\,x^{nc-n+lc+hc}\,dx = \sum_{n=2}^{\infty}\frac{(\theta^2 ck)^n}{n(n-1)}\sum_{i=0}^{\infty}\sum_{l=0}^{\infty}\sum_{h=0}^{\infty}\binom{n(k-1)}{l}\binom{jk}{h}\binom{n(\theta+1)+i-1}{i}\theta^i P(i,j)\,\Gamma(nc-n+lc+hc-j+1).

3.8 Order Statistic

In the realm of probability theory and statistics, an order statistic is a fundamental concept representing the r-th smallest value among a set of n random variables. Given a set of n independent and identically distributed (i.i.d.) random variables, say X_1, X_2, \ldots, X_n, arranged in ascending order as X_{(1)} \le X_{(2)} \le \cdots \le X_{(n)}, X_{(r)} is the r-th order statistic. The PDF of the r-th order statistic, f_{X_{r:n}}(x), is derived from the PDF of the individual random variables, g(x), and their CDF, G(x). For the proposed TIHTBXII model, it is given as

f_{X_{r:n}}(x) = \frac{n!}{(r-1)!\,(n-r)!}\left[G(x)\right]^{r-1}\left[1 - G(x)\right]^{n-r}g(x) = \frac{n!}{(r-1)!\,(n-r)!}\left\{1 - \left[1 - \theta\left(1 - (1+x^c)^k\right)\right]^{-\theta}\right\}^{r-1}\left[1 - \theta\left(1 - (1+x^c)^k\right)\right]^{-\theta(n-r)} \times \theta^2 ck\,x^{c-1}(1+x^c)^{k-1}\left[1 - \theta\{1 - (1+x^c)^k\}\right]^{-(\theta+1)} = \frac{n!\,\theta^2 ck\,x^{c-1}}{(r-1)!\,(n-r)!}\sum_{i,j=0}^{\infty}\sum_{l,h=0}^{\infty}\sum_{m,p=0}^{\infty}\sum_{q,s=0}^{\infty}\sum_{t,w=0}^{\infty}(-1)^{i+p+m+s}\theta^{h+j+l}\binom{r-1}{i}\binom{i\theta+h-1}{h}\binom{h}{p}\binom{pk}{q}\binom{\theta(n-r)+j-1}{j}\binom{j}{m}\binom{km}{w}\binom{\theta+l}{l}\binom{l}{s}\binom{ks}{t}x^{qc+wc+tc}.

4  Estimation

We proceed with the estimation of the parameters of the TIHTBXII distribution using maximum likelihood, maximum product spacing, least squares and weighted least squares procedures.

4.1 Maximum Likelihood Estimation

Maximum Likelihood Estimation (MLE) involves finding the values of the parameters that maximize the likelihood of observing the given data. First, we formulate the likelihood function. Let X1,X2,,Xn be a random sample of size n from the distribution with the given PDF g(x;θ,c,k). Assuming the observations are independent and identically distributed (i.i.d.), the likelihood function L(θ,c,k|x) is the product of the individual PDFs:

L(\theta,c,k \mid x) = \prod_{i=1}^{n}g(x_i;\theta,c,k) = \prod_{i=1}^{n}\theta^2 ck\,x_i^{c-1}(1+x_i^c)^{k-1}\left[1 - \theta\{1 - (1+x_i^c)^k\}\right]^{-(\theta+1)}.

It is more convenient to work with the natural logarithm of the likelihood, called the log-likelihood function and denoted \ell(\theta,c,k \mid x) = \ln L. Because the logarithm is monotonically increasing, maximizing \ln L is equivalent to maximizing L, and products become sums, which are much simpler to differentiate. The log-likelihood function is:

\ell(\theta,c,k \mid x) = n(2\ln\theta + \ln c + \ln k) + (c-1)\sum_{i=1}^{n}\ln x_i + (k-1)\sum_{i=1}^{n}\ln(1+x_i^c) - (\theta+1)\sum_{i=1}^{n}\ln\left[1 - \theta\{1 - (1+x_i^c)^k\}\right].

Differentiating \ell with respect to each parameter and equating to zero yields the score equations:
\frac{\partial \ell}{\partial \theta} = \frac{2n}{\theta} - \sum_{i=1}^{n}\ln\left[1 - \theta\{1 - (1+x_i^c)^k\}\right] + (\theta+1)\sum_{i=1}^{n}\frac{1 - (1+x_i^c)^k}{1 - \theta\{1 - (1+x_i^c)^k\}} = 0, \qquad (15)

\frac{\partial \ell}{\partial c} = \frac{n}{c} + \sum_{i=1}^{n}\ln x_i + (k-1)\sum_{i=1}^{n}\frac{x_i^c\ln x_i}{1+x_i^c} - (\theta+1)\sum_{i=1}^{n}\frac{\theta k(1+x_i^c)^{k-1}x_i^c\ln x_i}{1 - \theta\{1 - (1+x_i^c)^k\}} = 0,

and

\frac{\partial \ell}{\partial k} = \frac{n}{k} + \sum_{i=1}^{n}\ln(1+x_i^c) - (\theta+1)\sum_{i=1}^{n}\frac{\theta(1+x_i^c)^k\ln(1+x_i^c)}{1 - \theta\{1 - (1+x_i^c)^k\}} = 0. \qquad (16)

Next, solve the system of score equations. The MLEs for \theta, c, and k (denoted \hat{\theta}, \hat{c}, and \hat{k}) are the solutions to the system of these three non-linear equations:

\frac{2n}{\hat{\theta}} - \sum_{i=1}^{n}\ln\left[1 - \hat{\theta}\{1 - (1+x_i^{\hat{c}})^{\hat{k}}\}\right] + (\hat{\theta}+1)\sum_{i=1}^{n}\frac{1 - (1+x_i^{\hat{c}})^{\hat{k}}}{1 - \hat{\theta}\{1 - (1+x_i^{\hat{c}})^{\hat{k}}\}} = 0,
\frac{n}{\hat{c}} + \sum_{i=1}^{n}\ln x_i + (\hat{k}-1)\sum_{i=1}^{n}\frac{x_i^{\hat{c}}\ln x_i}{1+x_i^{\hat{c}}} - (\hat{\theta}+1)\sum_{i=1}^{n}\frac{\hat{\theta}\hat{k}(1+x_i^{\hat{c}})^{\hat{k}-1}x_i^{\hat{c}}\ln x_i}{1 - \hat{\theta}\{1 - (1+x_i^{\hat{c}})^{\hat{k}}\}} = 0,
\frac{n}{\hat{k}} + \sum_{i=1}^{n}\ln(1+x_i^{\hat{c}}) - (\hat{\theta}+1)\sum_{i=1}^{n}\frac{\hat{\theta}(1+x_i^{\hat{c}})^{\hat{k}}\ln(1+x_i^{\hat{c}})}{1 - \hat{\theta}\{1 - (1+x_i^{\hat{c}})^{\hat{k}}\}} = 0.

The solutions can be obtained numerically.
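As a sketch of that numerical step, the negative log-likelihood below (helper names are ours) can be handed to any general-purpose optimizer, such as optim in R or scipy.optimize.minimize in Python. Here we only verify on simulated data that the generating parameters are favored over clearly wrong ones:

```python
import math
import random

def quantile(u, theta, c, k):
    # Eq. (9), used here only to simulate TIHTBXII data
    inner = (1.0 - (1.0 - (1.0 - u) ** (-1.0 / theta)) / theta) ** (1.0 / k) - 1.0
    return inner ** (1.0 / c)

def nll(params, data):
    # Negative of the log-likelihood l(theta, c, k | x) above
    theta, c, k = params
    if min(theta, c, k) <= 0.0:
        return float("inf")  # outside the parameter space
    ll = len(data) * (2.0 * math.log(theta) + math.log(c) + math.log(k))
    for x in data:
        a = 1.0 - theta * (1.0 - (1.0 + x ** c) ** k)
        ll += ((c - 1.0) * math.log(x) + (k - 1.0) * math.log1p(x ** c)
               - (theta + 1.0) * math.log(a))
    return -ll

rng = random.Random(1)
true = (0.5, 1.75, 2.75)
data = [quantile(rng.random(), *true) for _ in range(500)]
```

Minimizing nll over (θ, c, k) yields the MLEs; with 500 simulated observations the objective at the true parameters is already smaller than at badly misspecified values.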

4.2 Maximum Product Spacing Estimation

The Maximum Product Spacing (MPS) method estimates parameters by maximizing the product of the differences between consecutive values of the CDF, evaluated at ordered observations. Let X_1, X_2, \ldots, X_n be a random sample from the distribution, and let X_{(1)} \le X_{(2)} \le \cdots \le X_{(n)} be the order statistics. Define the uniform spacings D_i(\theta,c,k) as:

D_i(\theta,c,k) = G(X_{(i)};\theta,c,k) - G(X_{(i-1)};\theta,c,k),

for i = 1, \ldots, n+1, where we set G(X_{(0)};\theta,c,k) = 0 and G(X_{(n+1)};\theta,c,k) = 1. The product spacing function to be maximized is:

P(\theta,c,k) = \prod_{i=1}^{n+1}D_i(\theta,c,k).

Alternatively, it is often more convenient to maximize the logarithm of the product spacing function:

S(\theta,c,k) = \sum_{i=1}^{n+1}\ln D_i(\theta,c,k).

The MPS estimates (\hat{\theta}_{MPS}, \hat{c}_{MPS}, \hat{k}_{MPS}) are obtained by solving the system of equations \partial S/\partial\theta = 0, \partial S/\partial c = 0, \partial S/\partial k = 0. These equations are solved using numerical optimization techniques.
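A minimal sketch of the log-spacing objective S(θ, c, k) (our own implementation; the simulation study below uses mpsedist from the R package BMT for this step):

```python
import math
import random

def cdf(x, theta, c, k):
    # Eq. (5)
    return 1.0 - (1.0 - theta * (1.0 - (1.0 + x ** c) ** k)) ** (-theta)

def quantile(u, theta, c, k):
    # Eq. (9), used here only to simulate data
    inner = (1.0 - (1.0 - (1.0 - u) ** (-1.0 / theta)) / theta) ** (1.0 / k) - 1.0
    return inner ** (1.0 / c)

def log_spacings(params, xs_sorted):
    # S(theta, c, k) = sum_i log D_i with D_i = G(X_(i)) - G(X_(i-1)),
    # where G(X_(0)) = 0 and G(X_(n+1)) = 1; maximize over the parameters
    theta, c, k = params
    u = [0.0] + [cdf(x, theta, c, k) for x in xs_sorted] + [1.0]
    eps = 1e-300  # guard against numerically zero spacings
    return sum(math.log(max(u[i + 1] - u[i], eps)) for i in range(len(u) - 1))

rng = random.Random(3)
true = (0.5, 1.75, 2.75)
xs = sorted(quantile(rng.random(), *true) for _ in range(300))
```

On simulated data, the objective evaluated at the generating parameters exceeds its value at clearly misspecified ones, as expected for a criterion to be maximized.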

4.3 Least Squares Estimation

The Least Squares (LS) method estimates parameters by minimizing the sum of squared differences between the empirical CDF and the theoretical CDF. Let X_{(1)} \le X_{(2)} \le \cdots \le X_{(n)} be the order statistics of the observed sample. The empirical CDF at X_{(i)} is approximated by F_n(X_{(i)}) = \frac{i}{n+1}. The LS estimates (\hat{\theta}_{LS}, \hat{c}_{LS}, \hat{k}_{LS}) are found by minimizing the sum of squares function:

L(\theta,c,k) = \sum_{i=1}^{n}\left[G(X_{(i)};\theta,c,k) - \frac{i}{n+1}\right]^2.

To find the estimates, we set the partial derivatives of L(θ,c,k) with respect to each parameter to zero:

\frac{\partial L}{\partial \theta} = 2\sum_{i=1}^{n}\left[G(X_{(i)};\theta,c,k) - \frac{i}{n+1}\right]\frac{\partial G(X_{(i)};\theta,c,k)}{\partial \theta} = 0,
\frac{\partial L}{\partial c} = 2\sum_{i=1}^{n}\left[G(X_{(i)};\theta,c,k) - \frac{i}{n+1}\right]\frac{\partial G(X_{(i)};\theta,c,k)}{\partial c} = 0,
\frac{\partial L}{\partial k} = 2\sum_{i=1}^{n}\left[G(X_{(i)};\theta,c,k) - \frac{i}{n+1}\right]\frac{\partial G(X_{(i)};\theta,c,k)}{\partial k} = 0. \qquad (17)

These equations are typically solved numerically.

4.4 Weighted Least Squares Estimation

Weighted Least Squares (WLS) is an extension of LS that incorporates weights to account for varying precision across observations. In the context of empirical CDF fitting, weights are often chosen to reflect the variance of the ordered statistics. The WLS estimates (θ^WLS,c^WLS,k^WLS) are obtained by minimizing the weighted sum of squares function:

W(\theta,c,k) = \sum_{i=1}^{n}w_i\left[G(X_{(i)};\theta,c,k) - \frac{i}{n+1}\right]^2.

A common choice for weights wi based on the variance of uniform order statistics is:

w_i = \frac{(n+1)^2(n+2)}{i\,(n-i+1)}.

The system of equations to solve for the WLS estimates is:

\frac{\partial W}{\partial \theta} = 2\sum_{i=1}^{n}w_i\left[G(X_{(i)};\theta,c,k) - \frac{i}{n+1}\right]\frac{\partial G(X_{(i)};\theta,c,k)}{\partial \theta} = 0,
\frac{\partial W}{\partial c} = 2\sum_{i=1}^{n}w_i\left[G(X_{(i)};\theta,c,k) - \frac{i}{n+1}\right]\frac{\partial G(X_{(i)};\theta,c,k)}{\partial c} = 0,
\frac{\partial W}{\partial k} = 2\sum_{i=1}^{n}w_i\left[G(X_{(i)};\theta,c,k) - \frac{i}{n+1}\right]\frac{\partial G(X_{(i)};\theta,c,k)}{\partial k} = 0. \qquad (18)

Numerical optimization is required to solve this system. To implement the LS and WLS methods, the partial derivatives of G(x;θ,c,k) with respect to θ,c, and k are required, which are given as follows

\frac{\partial G}{\partial \theta} = \left[1 - \theta\{1 - (1+x^c)^k\}\right]^{-\theta}\left[\ln\left(1 - \theta\{1 - (1+x^c)^k\}\right) - \frac{\theta\{1 - (1+x^c)^k\}}{1 - \theta\{1 - (1+x^c)^k\}}\right],
\frac{\partial G}{\partial c} = \theta^2 k\,(1+x^c)^{k-1}x^c\ln x\left[1 - \theta\{1 - (1+x^c)^k\}\right]^{-\theta-1},
\frac{\partial G}{\partial k} = \theta^2(1+x^c)^k\ln(1+x^c)\left[1 - \theta\{1 - (1+x^c)^k\}\right]^{-\theta-1}. \qquad (19)
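Both objectives can be sketched in one routine (naming is ours): with weighted=True it is the WLS criterion of Eq. (18) with the weights above, otherwise the LS criterion of Eq. (17). In practice the criterion is minimized numerically rather than by solving the derivative system directly.

```python
import math
import random

def cdf(x, theta, c, k):
    # Eq. (5)
    return 1.0 - (1.0 - theta * (1.0 - (1.0 + x ** c) ** k)) ** (-theta)

def quantile(u, theta, c, k):
    # Eq. (9), used here only to simulate data
    inner = (1.0 - (1.0 - (1.0 - u) ** (-1.0 / theta)) / theta) ** (1.0 / k) - 1.0
    return inner ** (1.0 / c)

def ls_objective(params, xs_sorted, weighted=False):
    # Sum over the ordered sample of w_i * (G(X_(i)) - i/(n+1))^2
    theta, c, k = params
    n = len(xs_sorted)
    total = 0.0
    for i, x in enumerate(xs_sorted, start=1):
        w = (n + 1) ** 2 * (n + 2) / (i * (n - i + 1)) if weighted else 1.0
        total += w * (cdf(x, theta, c, k) - i / (n + 1)) ** 2
    return total

rng = random.Random(5)
true = (0.5, 1.75, 2.75)
xs = sorted(quantile(rng.random(), *true) for _ in range(300))
```

Note that the weights w_i are symmetric in i and n - i + 1 and are largest near the extremes, where uniform order statistics have the smallest variance.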

5  Simulation

To contrast the performance of the estimation methods (MLE, MPS, LS, and WLS), we conduct a Monte Carlo simulation study in which we assess the bias and root mean square error (RMSE) of the parameter estimates for various sample sizes and true parameter values. Monte Carlo simulation is a powerful and widely used approach for evaluating the efficiency of estimators, particularly for advanced statistical models in which analytical solutions are impossible; it is based on the early work of Metropolis et al. [46] and was subsequently broadened by Hastings [47].

For our computations, we performed 1000 Monte Carlo repetitions for each scenario and sample size. In each repetition, we generated data from the TIHTBXII distribution via its quantile function applied to uniformly distributed random numbers. Numerical optimization (in particular, optim in R) was used to obtain the MLE, LS, and WLS estimates, while the MPS estimates were calculated using the function mpsedist from the package BMT.

Our simulation experiment is based on four distinct scenarios, which correspond to different combinations of the distribution parameters θ, c, and k:

Scenario I θ=0.5, c=1.75 and k=2.75

Scenario II θ=0.75, c=2.75 and k=3.5

Scenario III θ=1.5, c=1.70 and k=2.5

Scenario IV θ=0.75, c=2.5 and k=2.75

For each scenario, we generate random samples of varying sizes (n = 25, 100, 200, 500), apply each estimation method to the simulated datasets, and compute the Bias and RMSE of the resulting parameter estimates.
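The replication machinery of the study can be sketched as follows. For brevity, the sample median (an estimator of the true median) stands in for the fitted parameters, which in the actual study are produced by optim and mpsedist; Bias and RMSE are computed exactly as reported in Tables 1 and 2.

```python
import math
import random
import statistics

def quantile(u, theta, c, k):
    # Eq. (9): used for data generation and for the true median
    inner = (1.0 - (1.0 - (1.0 - u) ** (-1.0 / theta)) / theta) ** (1.0 / k) - 1.0
    return inner ** (1.0 / c)

theta, c, k = 0.5, 1.75, 2.75            # Scenario I
true_median = quantile(0.5, theta, c, k)
reps, n = 200, 100                        # smaller than the study's 1000 reps
rng = random.Random(7)
estimates = []
for _ in range(reps):
    sample = [quantile(rng.random(), theta, c, k) for _ in range(n)]
    estimates.append(statistics.median(sample))  # stand-in for a fitted parameter
bias = sum(e - true_median for e in estimates) / reps
rmse = math.sqrt(sum((e - true_median) ** 2 for e in estimates) / reps)
```

By construction RMSE ≥ |Bias|, and both shrink as n grows, mirroring the pattern seen in the tables.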

The simulation results, which are reported in Tables 1 and 2, provide information about the performance of the MLE, MPS, LS, and WLS estimators of the parameters of the TIHTBXII distribution for various sample sizes and settings. The overall pattern in all the methods and settings is that Bias and Root Mean Square Error (RMSE) decrease as the sample size increases. This demonstrates that all estimators are more precise and accurate with larger data. Estimation of θ^ and c^ tends to have lower bias and RMSE than k^, demonstrating that k^ is usually more difficult to estimate. Throughout the methods, WLS tends to perform better with relatively lower bias and RMSE for θ^ and c^ in most scenarios, where the benefits of WLS are more pronounced at larger sample sizes. LS also performs well, with a good balance between accuracy and precision. While MLE improves significantly with large sample sizes, with small sample sizes its performance can sometimes be less stable compared to WLS and LS. MPS performs similarly but sometimes has larger RMSE, particularly for k^ with small sample sizes.



Scenario-dependent results show that the chosen parameter values influence the difficulty of estimation. For instance, Scenario I is “easier” to estimate, with lower overall RMSE values, whereas Scenarios II and IV are more challenging, particularly for θ^. Overall, the simulation confirms that the quality of estimation improves with larger data, and that WLS and LS are robust and stable methods for estimating the parameters of the TIHTBXII distribution under the conditions considered.

Fig. 2a and b plots the Bias and RMSE of the four non-Bayesian methods for Scenarios I and II, while Fig. 3a and b plots the Bias and RMSE for Scenarios III and IV. These plots support the numerical results in Tables 1 and 2: as the sample size increases, the Bias remains positive and the RMSE decreases. These are indicators of good convergence behavior, which suggests that the TIHTBXII model is fit for modeling lifetime datasets.


Figure 2: Estimator performance (a) scenario I (b) scenario II


Figure 3: Estimator performance (a) scenario III (b) scenario IV

6  GASP Based on Truncated Life Tests for TIHTBXII Distribution

This section presents a Group Acceptance Sampling Plan (GASP) based on truncated life tests for items whose lifetimes are TIHTBXII-distributed. Truncated life tests are important for effective quality control, particularly when testing long-lifetime components. The median lifetime of an item with a TIHTBXII-distributed lifetime is obtained from the quantile function, given in Eq. (9). At the median (u=0.5), the quantile function yields the median lifetime μ:

μ = {[1 − θ(1 − (1 − u)^(1/θ))]^(−1/k) − 1}^(1/c). (20)

For ease of calculation under the GASP framework, we decompose μ by introducing an intermediate term, ω¯. At the median (u=0.5), ω¯ is:

ω¯ = [1 − θ(1 − (1 − u)^(1/θ))]^(−1/k) − 1, which at u = 0.5 becomes ω¯ = [1 − θ(1 − 2^(−1/θ))]^(−1/k) − 1.

This split is such that μ = ω¯^(1/c).

For the truncated life tests, the test duration is t0 = a1 × μ0, where a1 is a specified constant and μ0 is the specified median lifetime. The ratio of the actual median lifetime to the specified median lifetime is r2 = μ/μ0.

The failure probability, p, of a single item before time t0 is derived by inserting these relationships into the CDF of the TIHTBXII distribution in Eq. (3). This yields:

p = 1 − [1 − (1/θ){1 − (1 + (a1/r2)^c × ω¯)^(−k)}]^θ. (21)

This expression for p plays a crucial role in calculating the acceptance probabilities in the GASP scheme.
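
The chain of substitutions can be made concrete in code. The sketch below is our own illustrative transcription of Eqs. (20) and (21) (the function names and the example setting a1 = 0.5, r2 = 2 are ours, and the algebraic form follows our reading of the equations), not the authors' implementation.

```python
def omega_bar(theta, k, u=0.5):
    """Intermediate term of Eq. (20): omega_bar = mu^c at quantile level u."""
    s = (1 - u) ** (1 / theta)
    return (1 - theta * (1 - s)) ** (-1 / k) - 1

def median_lifetime(theta, c, k):
    """Median via Eq. (20): mu = omega_bar^(1/c) at u = 0.5."""
    return omega_bar(theta, k) ** (1 / c)

def failure_prob(theta, c, k, a1, r2):
    """Failure probability before t0 = a1 * mu0, Eq. (21)."""
    g = 1 - (1 + (a1 / r2) ** c * omega_bar(theta, k)) ** (-k)  # Burr XII term
    return 1 - (1 - g / theta) ** theta

theta, c, k = 0.75, 2.25, 1.75       # the Table 3 parameter setting
mu = median_lifetime(theta, c, k)
p = failure_prob(theta, c, k, a1=0.5, r2=2)
```

Note that p decreases as the quality ratio r2 grows, which is what drives the shrinking G and n observed in Tables 3 and 4.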

The GASP design parameters, i.e., the number of groups (G), the acceptance number (n), the group size (r), and the test duration (t0), are determined so that consumer and producer risks are both controlled. To determine the optimal design, we formulate an optimization problem: minimize the Average Sample Number (ASN), given by ASN = r × G, subject to the following constraints, which balance the consumer risk (β) and the producer risk (α):

Paccept(p1 | μ/μ0 = r1) = [Σ_{i=0}^{n} C(r, i) p1^i (1 − p1)^(r−i)]^G ≤ β, (22)

Paccept(p2 | μ/μ0 = r2) = [Σ_{i=0}^{n} C(r, i) p2^i (1 − p2)^(r−i)]^G ≥ 1 − α. (23)

Here, r1 and r2 are the mean ratios associated with the consumer and producer risks, respectively. The failure probabilities p1 and p2 appearing in Eqs. (22) and (23) are computed from Eq. (21) at these mean ratios.
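
The search for the smallest ASN can be sketched as follows. For a fixed acceptance number, the per-group acceptance probability L(p) is a binomial tail, so the two constraints bound G from below (consumer side, Eq. (22)) and from above (producer side, Eq. (23)); scanning the acceptance numbers and keeping the smallest feasible r×G yields the plan. The failure probabilities p1 and p2 below are hypothetical inputs for illustration, not values from Table 3.

```python
from math import ceil, comb, log

def per_group_accept(p, r, n_acc):
    """Probability that one group of r items records at most n_acc failures."""
    return sum(comb(r, i) * p**i * (1 - p) ** (r - i) for i in range(n_acc + 1))

def minimal_gasp(p1, p2, r, beta, alpha):
    """Smallest-ASN plan (G, n_acc) with
    L(p1)^G <= beta      (consumer risk, as in Eq. (22)) and
    L(p2)^G >= 1 - alpha (producer risk, as in Eq. (23))."""
    best = None
    for n_acc in range(r + 1):
        L1 = per_group_accept(p1, r, n_acc)
        L2 = per_group_accept(p2, r, n_acc)
        if L1 >= 1.0:                  # consumer constraint can never be met
            continue
        g_min = max(1, ceil(log(beta) / log(L1)))
        g_max = float("inf") if L2 >= 1.0 else log(1 - alpha) / log(L2)
        if g_min <= g_max and (best is None or g_min < best[0]):
            best = (g_min, n_acc)      # ASN = r * G, so minimal G wins for fixed r
    return best

plan = minimal_gasp(p1=0.25, p2=0.02, r=5, beta=0.25, alpha=0.05)
```

Infeasible cells in Tables 3 and 4 correspond to the case where this search returns no (G, n) pair, i.e., no G satisfies both bounds for any acceptance number.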

Table 3 presents an illustrative set of GASP design parameters for the TIHTBXII distribution, based on the parameter values (θ=0.75, c=2.25, k=1.75). The table shows the minimum required number of groups (G) and the acceptance number (n) for varying consumer risks (β), mean ratios (μ/μ0), and test duration constants (a1), for fixed group sizes (r=5 and r=10). The Paccept(p) values indicate the probability of accepting a lot under these conditions.


The factors influencing the optimal plan are:

(a)   β: Consumer's risk, the largest acceptable probability of accepting a lot with true defect level p.

(b)   r2 = μ/μ0: Ratio of the true mean lifetime (μ) to the specified mean lifetime (μ0), a measure of lot quality.

(c)   r: Number of inspection repetitions or rounds.

(d)   a1: A scale factor for the truncation time, such that the test length t0 = a1 × μ0. This parameter affects the failure probability p under truncated testing.

The reported triplet (G, n, Paccept(p)) gives the minimal number of groups to be inspected, the acceptance number, and the corresponding probability of accepting a lot.

(i)   Influence of Mean Ratio (r2): With β, r, a1 held constant, increasing r2 (i.e., better-quality lots) causes a monotonic decrease in G and n. For example, when β=0.25, r=5, a1=0.5, increasing r2 from 2 to 8 decreases G from 44 to 2 and n from 2 to 0, demonstrating that fewer inspection resources are needed for better lot quality. The acceptance probability Paccept is always high, occasionally increasing slightly, since it captures the ease of satisfying the acceptance condition for good lots.

(ii)   Influence of Consumer's Risk (β): Reducing β (i.e., a tighter consumer risk requirement) produces a clear trend of increasing G and/or n. For example, for r2=2, r=5, a1=0.5, decreasing β from 0.25 to 0.05 raises G from 44 to 94 while n remains 2, showing that lowering the consumer's risk demands a greater sampling effort to satisfy the consumer risk constraint. Notably, under β=0.01 and r2=2, r=5, a1=0.5, G=0, n=0 and Paccept is shown as '–', since no valid GASP can be constructed under such stringent conditions for the specified test parameters.

(iii)   Impact of Repetition Factor (r): As the repetition factor r rises from 5 to 10, one often finds a reduction in G and/or n. For example, if β=0.25, r2=2, a1=0.5, raising r from 5 to 10 lowers G from 44 to 7 (with n still 2). This means that more inspection repetitions enable a more efficient sampling plan, with fewer groups or items per run needed to attain the required acceptance probability.

(iv)   Impact of the Truncation Time Scaling Factor (a1): Increasing a1 from 0.5 to 1 tends to reduce G and/or n. To illustrate, at β=0.25, r2=2, r=5, increasing a1 from 0.5 to 1 reduces G from 44 to 7 (while increasing n from 2 to 3). This means that a somewhat longer test duration (t0=a1μ0) provides more information about product quality and can therefore allow a less costly sampling plan. While this trend generally yields more frugal plans, the savings may come at the expense of a larger G or n elsewhere.

(v)   Non-Feasible Plans (G=0 or n=0): Cells with G=0 and/or n=0 generally reflect cases where no feasible sampling plan could achieve the acceptance criteria. A '–' for Paccept likewise signals non-feasibility, particularly under very strict β values, suggesting that such lots would automatically be rejected or that the test conditions are insufficient to distinguish lot quality adequately. In some cases where n=0 but Paccept is available (e.g., β=0.25, r2=6, r=5, a1=0.5), acceptance is based solely on the number of groups G.

The Paccept(p) values always satisfy the applicable constraint and are large (usually ≥ 0.95), attesting to the efficacy of the optimized GASP designs in guaranteeing lot quality under the specified conditions.

Fig. 4a–d shows the operating characteristic (OC) curves for


Figure 4: OC curve (a) case I (b) case II (c) case III (d) case IV

Case I: (a1=0.5,r=5)

Case II: (a1=1,r=5)

Case III: (a1=0.5,r=10)

Case IV: (a1=1,r=10),

respectively, based on θ=0.75,c=2.25,k=1.75. These support the results in Table 3.

Table 4 provides the optimum parameters (G, n, Paccept(p)) of a Group Acceptance Sampling Plan (GASP) designed under truncated life tests for the TIHTBXII distribution. The distribution parameters in these designs are fixed at θ=0.5, c=1.25, and k=2.25. The primary objective of the design is to determine the smallest number of groups (G) and acceptance number (n) that satisfy the consumer's risk requirement in Eq. (22), where p is the failure probability.


Each triplet (G, n, Paccept(p)) gives the optimal configuration: G is the number of groups to sample, n is the acceptance number, and Paccept(p) is the resulting acceptance probability when the true failure probability is p.

(i)   Influence of Mean Ratio (r2): Across all β, r, and a1 regimes, an increase in r2 (reflecting better lot quality) always brings about a substantial decrease in G and/or n. For example, for β=0.25, r=5, a1=0.5, increasing r2 from 2 to 8 reduces G from 903 to 3 and n from 4 to 1. This demonstrates a steep inverse relationship: better lot quality requires far less severe sampling to reach the specified Paccept(p). The Paccept(p) values increase with r2, indicating smoother acceptance of better-quality lots.

(ii)   Influence of Consumer's Risk (β): As β decreases (denoting a stricter consumer risk tolerance), there is an overall trend of increasing G and/or n. To illustrate, at r2=4, r=10, a1=0.5, reducing β from 0.25 (G=2, n=2) to 0.01 (G=14, n=3) necessitates more extensive inspection, confirming that tighter risk control requires greater sampling effort to achieve the desired acceptance probability. A few instances of G=0, n=0 (for example, β=0.10, r2=2, r=5, a1=0.5) appear, particularly at lower levels of β, indicating that an optimal plan cannot be formulated in these extreme cases.

(iii)   Impact of Repetition Factor (r): Increasing the repetition factor r from 5 to 10 tends to decrease G and/or n significantly, making the sampling plan more efficient. For example, for β=0.25, r2=2, a1=0.5, increasing r from 5 to 10 decreases G from 903 to 45 (with n increasing from 4 to 5). This shows that more rounds of inspection can drastically reduce the number of groups or items per round, essentially optimizing resource allocation for a given Paccept(p).

(iv)   Effect of Truncation Time Scaling Factor (a1): Raising the truncation time scaling factor a1 from 0.5 to 1 tends to produce more cost-effective sampling plans (lower G and/or n). Illustratively, when β=0.25, r2=2, r=10, raising a1 from 0.5 (G=45, n=5) to 1 (G=8, n=6) reduces G substantially.

This indicates that a relatively longer test duration (t0=a1μ0) can yield more conclusive information on lot quality and may allow a less resource-intensive sampling plan. A noteworthy exception arises when r=5: increasing a1 for r2=2 and lower β values can produce G=0, n=0 outcomes (for instance, β=0.25, r2=2, r=5, a1=0.5 yields G=903, n=4, while a1=1 yields G=0, n=0). This reveals a complex interaction in which some parameter combinations (with the fixed distribution parameters θ=0.5, c=1.25, k=2.25) render a plan infeasible even with an increased test duration, highlighting specific challenges in designing optimal plans for such situations.

(v)   Non-Feasible Plans (G=0, n=0 with '–'): The frequent occurrence of G=0, n=0 with a '–' for Paccept(p) (especially at lower levels of β and r2=2) indicates cases where no GASP design could meet the acceptance requirements. This implies that for such demanding consumer risk levels or low lot quality (small r2), together with particular values of r and a1, the lot would be rejected outright, or the current testing scheme cannot offer the desired level of confidence. Their greater prevalence here than in the previous analysis suggests that the fixed distribution parameters (θ=0.5, c=1.25, k=2.25) lead to more challenging scenarios for constructing feasible GASP plans.

Whenever an implementable plan exists, the reported Paccept(p) values satisfy the required constraint and remain high (often ≥ 0.95), validating the optimization process in ensuring the stated lot acceptance probabilities.

Fig. 5a–d shows the operating characteristic (OC) curves for Case I, Case II, Case III, and Case IV, respectively, based on θ=0.5, c=1.25, k=2.25. These support the results in Table 4.


Figure 5: OC curve for θ=0.5,c=1.25,k=2.25 (a) case I (b) case II (c) case III (d) case IV

7  Applications

The first dataset used to illustrate the importance of the TIHTBXII model is the duration of active repairs for airborne communication transceivers. These data were studied by El-Saeed et al. [48] and Mead et al. [49] and are presented in Table 5.


The second application concerns groundwater contaminant measurements taken to monitor and assess the effectiveness of environmental cleanup efforts. These data, studied by Bhaumik et al. [50], come from clean-up-gradient monitoring wells, are expressed in milligrams per liter (mg/L), and are contained in Table 6.


The third dataset is the weekly death rate due to COVID-19 in Dominica from 22/3/2020 to 20/12/2020, retrieved from https://data.who.int/dashboards/covid19/data?n=c (accessed on 20 August 2025) and reported in Table 7.


We present a table of summary statistics with important measures such as the quartiles Q1 and Q3, variance, standard deviation, skewness, and kurtosis. These results are contained in Table 8.


Table 8 brings together the three datasets, showing a similar pattern of positive skewness in each. This means each dataset holds mostly small values, with a tail extending toward high values caused by occasional very large observations or outliers. The Active Repair Duration data are the most skewed and dispersed, reflecting many short repairs punctuated by the occasional very long one, as indicated by very high skewness (2.7171) and kurtosis (10.5433). The Groundwater Contaminant Measurements also show clear right-skewness (1.6037), indicating that most measurements are low, with periodic high contaminant levels pulling the average up. Lastly, the Dominica COVID-19 Mortality data, though still positively skewed (1.0777), are the most compact and least spread out, with only a few outliers at higher mortality rates.
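
The Table 8 measures can be reproduced from raw data with standard moment formulas. A minimal sketch follows, using population-moment definitions and the exclusive quantile method; the paper's exact conventions for skewness, kurtosis, and quartiles may differ.

```python
import statistics as st

def summary_stats(data):
    """Quartiles, spread, and moment-based shape measures, as in Table 8."""
    n = len(data)
    mean = st.fmean(data)
    sd = st.pstdev(data)
    skew = sum((x - mean) ** 3 for x in data) / (n * sd ** 3)
    kurt = sum((x - mean) ** 4 for x in data) / (n * sd ** 4)
    q1, _, q3 = st.quantiles(data, n=4)    # exclusive method by default
    return {"Q1": q1, "Q3": q3, "variance": sd ** 2,
            "std": sd, "skewness": skew, "kurtosis": kurt}

# A right-skewed toy sample: a few large values stretch the upper tail.
print(summary_stats([1, 1, 2, 2, 3, 10]))
```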

Fig. 6 contains boxplots superimposed on violin plots for the three datasets. The plots provide evidence that all three datasets contain outliers.


Figure 6: Boxplot superimposed on violin plots

Fig. 7 contains the kernel density plots superimposed on the histograms of the respective datasets. All three show positive skewness.


Figure 7: Kernel density superimposed on histogram

Figs. 8 and 9 compare the fitted TIHTBXII CDF and survival functions with their empirical counterparts for the three datasets. These reflect the degree of fit of the proposed TIHTBXII model to the various datasets.


Figure 8: Empirical with TIHTBXII CDF


Figure 9: Empirical with TIHTBXII survival functions

Fig. 10 shows the total time on test (TTT) plots. The duration of active repairs and the groundwater contaminant measurements data show convex TTT plots, indicating a decreasing hazard rate (DHR): as time goes on, the probability of failure for a surviving item decreases. This is often seen in early-life failures, where "weak" items fail quickly, leaving stronger ones. The Dominica COVID-19 mortality rate, however, shows a concave TTT plot, indicating an increasing hazard rate (IHR): as time goes on, the probability of failure for a surviving item increases.
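
The TTT reading above follows the standard construction: for the ordered sample, the scaled TTT statistic at i/n is (Σ_{j≤i} x_(j) + (n−i) x_(i)) / Σ_j x_j, and a curve above the diagonal (concave) points to IHR while one below (convex) points to DHR. A minimal sketch of the transform (our own, not the authors' code):

```python
def scaled_ttt(data):
    """Scaled total-time-on-test transform of a sample.
    Returns the points (i/n, T_i) that a TTT plot such as Fig. 10
    draws against the 45-degree line."""
    x = sorted(data)
    n, total = len(x), sum(x)
    points, running = [], 0.0
    for i, xi in enumerate(x, start=1):
        running += xi
        points.append((i / n, (running + (n - i) * xi) / total))
    return points

# For [1, 2, 3, 4] every point lies above the diagonal (concave curve),
# suggesting an increasing hazard rate.
print(scaled_ttt([1, 2, 3, 4]))
```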


Figure 10: TTT plots

Fig. 11 shows the P-P plots. For the three datasets, the P-P plots display S-shapes or curved patterns that indicate differences in skewness or kurtosis.


Figure 11: P-P plots

Fig. 12 shows the Q-Q plots. For the groundwater contaminant measurements and the Dominica COVID-19 mortality rate, the Q-Q plots curve upward at both ends (an S-shape), indicating heavier tails (more extreme values) in the data than the TIHTBXII distribution predicts. Hence, these datasets have higher kurtosis.


Figure 12: Q-Q plots

Figs. 13–15 show the contour plots of the likelihood for the three datasets; each panel combines a pair of the parameters. The plots indicate the optima of the likelihood function, which correspond to the MLEs reported in Table 9.


Figure 13: Contour plots for duration of active repairs


Figure 14: Contour plots for groundwater contaminants measurements


Figure 15: Contour plots for Dominica COVID-19 mortality


The competing distributions comprise the baseline distribution being modified, namely the Burr type XII (BXII) distribution of Burr [21], together with the Kumaraswamy bell-Rayleigh (KwBR) distribution of Nadir et al. [51], the Gamma distribution [52], the Weibull distribution [53], the Gumbel distribution [54], and the Pareto I and III distributions of Arnold [12].

Table 10 compares a number of statistical distributions for best fit across the three datasets: "Duration of Active Repairs," "Groundwater Contaminant Measurements," and "Dominica COVID-19 Mortality." Comparison is based on the log-likelihood (higher is better) and information criteria (AIC, CAIC, BIC, HQIC), where lower values are preferable, together with goodness-of-fit statistics (W, A, D, and their p-values), where higher p-values (ideally >0.05) are desirable.


For the Active Repair Duration data, the KwBR distribution, despite seemingly good information criteria, is definitively a poor fit owing to its extremely low p-value (2.2×10^−16). Among the remainder, the BXII and TIHTBXII distributions are the most promising, with the lowest information criteria and high p-values, indicative of a good fit to the data. The Pareto I distribution provides a decent fit, with a competitive AIC and a p-value of 0.2449. The Pareto III distribution also performs well, with a slightly higher AIC but a very good p-value of 0.5193, suggesting it is a suitable model.

For the Groundwater Contaminant Measurements, the KwBR distribution again fits very poorly, with a near-zero p-value. The Gamma, Weibull, and TIHTBXII distributions provide the best fits, reinforced by their competitive information criteria and high p-values, suggesting that these models adequately describe the features of the data. The Pareto I distribution is a very poor fit for these data, as indicated by its high AIC and an extremely low p-value of 0.0017. In contrast, the Pareto III distribution is an excellent fit, with a low AIC and the highest p-value (0.9779) among all tested distributions, making it a very strong candidate for this dataset.

While comparable standard errors might make the additional parameter of the TIHTBXII seem less useful, its real value is that it can provide a better fit for datasets with more complex and highly skewed distributions. The "Duration of Active Repairs" data provide a particular case where, even with the penalty for an extra parameter, the TIHTBXII outperformed the BXII on the key goodness-of-fit measures. This indicates that the extra parameter is not superfluous; it captures valuable information about the data structure that the simpler BXII model cannot capture effectively. The benefit of the TIHTBXII is not that it is always the best model, but its additional flexibility, which makes it a more viable choice for a larger array of highly skewed real-world data, such as a hypothetical time series of volatile stock returns.

Finally, for the Dominica COVID-19 Mortality data, the Gamma distribution is by far the most appropriate model. It has the lowest information criteria and a high p-value (0.9157), strongly indicating its suitability. Other distributions such as the TIHTBXII and BXII also yield good fits. The Pareto I distribution performs very poorly, with a high AIC and a p-value of 0.0007. The Pareto III distribution, however, provides a very good fit, with an excellent p-value of 0.9266 and competitive information criteria, making it a strong contender for this dataset. Again, the TIHTBXII provides the best fit for the duration of active repairs data.

Table 9 provides the Maximum Likelihood Estimates (MLEs) of the parameters of the various statistical distributions fitted to the three datasets: "Duration of Active Repairs," "Groundwater Contaminant Measurements," and "Dominica COVID-19 Mortality." Alongside each MLE, in parentheses, is its standard error, a measure of the precision of the estimate; smaller standard errors indicate more precise estimates.

For the Duration of Active Repairs data, the parameter estimates vary considerably across distributions. The BXII distribution, for instance, estimates c as 2.9725 and k as 0.3383, both with relatively small standard errors, suggesting stable estimates. Similar results were obtained for the proposed TIHTBXII distribution. For the Pareto distributions, Pareto I yields an α^ of 0.6482 with a standard error of 0.1025, while the three-parameter Pareto III distribution gives an α^ of 0.1031 with a larger standard error of 0.1157, indicating less precision for that parameter. The Pareto III's other parameters, γ^ and σ^, also have relatively large standard errors, suggesting potential instability. In contrast, the KwBR distribution shows a parameter t with an extremely small MLE (1.2590×10^−12) and a "NAN" (Not a Number) standard error, which often signifies estimation problems or model instability, supporting the poor fit of the KwBR noted in the previous analysis.

In the Groundwater Contaminant Measurements data, parameter estimates again differ by distribution. For the Gamma distribution, a is estimated as 1.0624 and b as 0.5653, both with reasonable standard errors. Again, the KwBR model for this dataset has unusual estimates, notably for t (5.7×10^−11) with a very small standard error, which remains problematic given its overall lack of fit. For the Pareto distributions, Pareto I estimates α^ at 0.4176 with a small standard error of 0.0716, pointing to a stable estimate. However, for Pareto III, the α^ estimate is 4.2821 with a very large standard error (9.5842), and the σ^ estimate is 5.7208 with an even larger standard error (14.8994), suggesting these parameters are not well determined by the data.

For the Dominica COVID-19 Mortality data, the Gamma distribution, a strong candidate in the goodness-of-fit analysis, has MLEs for a of 2.0449 and b of 107.8358, both stable with relatively small standard errors. By contrast, the BXII distribution for these data has a very large estimate of k (309.4328) with a very large standard error (201.5188), indicating less precision in that estimate. The Pareto I distribution gives an α^ of 0.5713 with a standard error of 0.0903. The Pareto III distribution, on the other hand, shows an α^ of 3.9606 with a very large standard error (6.8800), indicating high uncertainty in this estimate. The KwBR distribution once more has an estimate of s with a "NAN" standard error, reconfirming its unreliability for these data too.

In general, the MLEs give the parameter values that best describe each dataset under the respective distribution, and the standard errors indicate how reliable those estimates are. The systematic presence of "NAN" values or extremely small or large estimates with large standard errors, particularly for the KwBR and in some cases the Pareto III distribution across all datasets, is a clear sign of problems with parameter estimation and general applicability, in line with their poor goodness-of-fit behavior.

8  Conclusion

In this study, we introduced the Type-I Heavy-Tailed Burr Type XII (TIHTBXII) distribution as a highly flexible and robust statistical model. Our primary motivation was to address the limitations of conventional distributions in accurately capturing the complexities of real-world data, particularly those exhibiting skewness, heavy tails, and diverse hazard behaviors, common in fields such as finance, insurance, environmental science, and reliability engineering. We meticulously defined the TIHTBXII distribution, providing its PDF and CDF, and thoroughly investigated its fundamental statistical properties, including its quantile function, moments, moment-generating function, order statistics, and entropy. These foundational elements are crucial for both theoretical understanding and practical application. A significant part of our research focused on parameter estimation, where we rigorously compared four standard methods: MLE, MPS, LS, and WLS. Our extensive Monte Carlo simulation studies consistently demonstrated the consistency of these estimators; as sample size increased, the bias and RMSE of the parameter estimates decreased for all methods. Notably, the WLS and LS procedures generally outperformed MLE and MPS, exhibiting lower bias and RMSE, which underscores their stability and robustness across various scenarios and sample sizes. While MLE and MPS improved with larger sample sizes, LS and WLS consistently proved more stable and accurate in estimation for all sample sizes and parameter values considered.

Beyond theoretical exploration and estimation, we showcased the practical utility of the TIHTBXII distribution by developing a Group Acceptance Sampling Plan (GASP) using truncated life tests. This methodology is particularly valuable for quality control and life-testing procedures involving long-lifespan products and specific failure modes. We provided a comprehensive guide for optimizing GASP design parameters (number of groups, acceptance number, group size, and test duration) by minimizing the Average Sample Number (ASN) while effectively balancing consumer and producer risks. Our research highlighted how factors such as the mean ratio, consumer’s risk, repetition factor, and truncation time scaling factor influence the stringency and efficiency of the sampling plan. The TIHTBXII distribution’s ability to realistically model lifetime data directly translates into more efficient and risk-balanced decision-making in industrial settings. Empirical validation further solidified the TIHTBXII distribution’s applicability and versatility. We successfully applied the distribution to real-world datasets, including Active Repair Duration, Groundwater Contaminant Measurements, and Dominica COVID-19 Mortality. In cases like the “Active Repair Duration” data, the TIHTBXII distribution demonstrated a superior fit compared to other established models, as supported by favorable information criteria and goodness-of-fit test statistics. This empirical evidence underscores the TIHTBXII’s potential as a valuable tool for analysts and engineers working with heavy-tailed, skewed, and complex data in reliability engineering and quality measurement.

While this study effectively demonstrates the significant potential of the TIHTBXII distribution, it is important to acknowledge certain limitations. First, our Monte Carlo simulation studies, though extensive, were conducted under specific assumptions regarding parameter values and sample sizes; the performance of the estimation methods, particularly MLE and MPS, might exhibit different convergence rates or biases in scenarios with extremely small sample sizes or highly unusual parameter combinations not explored in our simulations. Second, the GASP design was developed specifically for truncated life tests; the applicability and optimality of the proposed GASP framework might vary for other life-testing scenarios, such as complete life tests or different truncation mechanisms. Finally, while the empirical applications showcased the TIHTBXII's superior fit for the selected datasets, its generalizability and superiority across all possible heavy-tailed and skewed datasets need further validation through broader comparative studies. Future research could explore its performance with even more diverse and complex real-world data from various domains to fully ascertain its range of applicability.

In conclusion, the Type-I Heavy-Tailed Burr Type XII distribution is a robust and valuable addition to the family of statistical models. It offers enhanced flexibility and strength to accurately identify realistic data patterns, particularly for data characterized by skewness and heavy tails. Its improved parameter estimation capabilities and proven effectiveness in real-world problems like acceptance sampling make it an effective alternative to traditional distributions, ultimately leading to more reliable and accurate decisions across various scientific and engineering disciplines.

Acknowledgement: Not applicable.

Funding Statement: This work was supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (Grant Number IMSIU-DDRSP2501).

Author Contributions: Okechukwu J. Obulezi: Conceptualization, Methodology, Software, Formal Analysis, Writing—Original Draft, Writing—Review & Editing. Hatem E. Semary: Supervision, Validation, Funding Acquisition, Writing—Review & Editing. Sadia Nadir: Formal Analysis, Data Curation, Writing—Review & Editing. Chinyere P. Igbokwe: Investigation, Visualization, Resources, Writing—Original Draft. Gabriel O. Orji: Methodology, Data Curation, Project Administration. A. S. Al-Moisheer: Validation, Data Curation, Writing—Review & Editing. Mohammed Elgarhy: Software, Formal Analysis, Writing—Original Draft. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: Data available within the article.

Ethics Approval: Not applicable.

Conflicts of Interest: The authors declare no conflicts of interest to report regarding the present study.

References

1. Nolan JP. Financial modeling with heavy-tailed stable distributions. Wiley Interdiscip Rev Comput Stat. 2014;6(1):45–55. doi:10.1002/wics.1286. [Google Scholar] [CrossRef]

2. Alsubie A. On modeling the insurance claims data using a new heavy-tailed distribution. In: Intelligent Decision Technologies: Proceedings of the 14th KES-IDT 2022 Conference. Cham, Switzerland: Springer. 2022, pp. 149–58. doi:10.1007/978-981-19-3444-5. [Google Scholar] [CrossRef]

3. Macdonald E, Merz B, Nguyen VD, Vorogushyn S. Heavy-tailed flood peak distributions: what is the effect of the spatial variability of rainfall and runoff generation? Hydrol Earth Syst Sci. 2025;29(2):447–63. doi:10.5194/hess-29-447-2025. [Google Scholar] [CrossRef]

4. Navas-Portella V, González Á, Serra I, Vives E, Corral Á. Universality of power-law exponents by means of maximum-likelihood estimation. Phys Rev E. 2019;100(6):062106. doi:10.1103/PhysRevE.100.062106. [Google Scholar] [PubMed] [CrossRef]

5. Endo A, Murayama H, Abbott S, Ratnayake R, Pearson CB, Edmunds WJ, et al. Heavy-tailed sexual contact networks and monkeypox epidemiology in the global outbreak, 2022. Science. 2022;378(6615):90–4. doi:10.1126/science.add4507. [Google Scholar] [PubMed] [CrossRef]

6. Karlsson M, Wang Y, Ziebarth NR. Getting the right tail right: modeling tails of health expenditure distributions. J Health Econ. 2024;97(3):102912. doi:10.1016/j.jhealeco.2024.102912. [Google Scholar] [PubMed] [CrossRef]

7. Tokutomi N, Nakai K, Sugano S. Extreme value theory as a framework for understanding mutation frequency distribution in cancer genomes. PLoS One. 2021;16(8):e0243595. doi:10.1371/journal.pone.0243595. [Google Scholar] [PubMed] [CrossRef]

8. Cauchy AL. Sur les résultats moyens d’observations de même nature, et sur les résultats les plus probables. CR Acad Sci Paris. 1853;37:198–206. [Google Scholar]

9. Student. The probable error of a mean. Biometrika. 1908;6(1):1–25. doi:10.2307/2331554. [Google Scholar] [CrossRef]

10. Fréchet M. Sur la loi de probabilité de l’écart maximum. Ann De La Soc Polonaise De Math. 1927;6(2):123–49. [Google Scholar]

11. Pareto V. Cours d’économie politique. Lausanne, Switzerland: F. Rouge; 1897. [Google Scholar]

12. Arnold BC. Pareto distributions. Fairland, MD, USA: International Cooperative Publishing House; 1983. doi:10.1007/978-1-4615-6805-4. [Google Scholar] [CrossRef]

13. Lomax KS. Business failures: another example of the analysis of failure data. J Am Stat Assoc. 1954;49(268):847–52. doi:10.1080/01621459.1954.10501239. [Google Scholar] [CrossRef]

14. Beirlant J, Matthys G, Dierckx G. Heavy-tailed distributions and rating. ASTIN Bull J IAA. 2001;31(1):37–58. doi:10.2143/AST.31.1.993. [Google Scholar] [CrossRef]

15. Orji GO, Etaga HO, Almetwally EM, Igbokwe CP, Aguwa OC, Obulezi OJ. A new odd reparameterized exponential transformed-X family of distributions with applications to public health data. Innov Stat and Prob. 2025;1(1):88–118. doi:10.64389/isp.2025.01107. [Google Scholar] [CrossRef]

16. Husain QN, Qaddoori AS, Noori NA, Abdullah KN, Suleiman AA, Balogun OS. New expansion of Chen distribution according to the nitrosophic logic using the Gompertz family. Innov Stat Prob. 2025;1(1):60–75. doi:10.64389/isp.2025.01105. [Google Scholar] [CrossRef]

17. Gemeay AM, Moakofi T, Balogun OS, Ozkan E, Hossain MM. Analyzing real data by a new heavy-tailed statistical model. Modern J Stat. 2025;1(1):1–24. doi:10.64389/mjs.2025.01108.

18. Mazza A, Punzo A. Modeling household income with contaminated unimodal distributions. In: New statistical developments in data science (SIS 2017). Cham, Switzerland: Springer; 2017. p. 373–91. doi:10.1007/978-3-030-21158-5.

19. Punzo A, Mazza A, Maruotti A. Fitting insurance and economic data with outliers: a flexible approach based on finite mixtures of contaminated gamma distributions. J Appl Stat. 2018;45(14):2563–84. doi:10.1080/02664763.2018.1428288.

20. Punzo A, Bagnato L, Maruotti A. Compound unimodal distributions for insurance losses. Insur Math Econ. 2018;81(13–14):95–107. doi:10.1016/j.insmatheco.2017.10.007.

21. Burr IW. Cumulative frequency functions. Ann Math Stat. 1942;13(2):215–32.

22. Singh SK, Maddala GS. A function for size distribution of incomes. In: Modeling income distributions and Lorenz curves. Cham, Switzerland: Springer; 2008. p. 27–35. doi:10.1007/978-0-387-72796-7_2.

23. Gad AM, Hamedani GG, Salehabadi SM, Yousof HM. The Burr XII-Burr XII distribution: mathematical properties and characterizations. Pak J Stat. 2019;35(3):229–48.

24. Alzaatreh A, Lee C, Famoye F. A new method for generating families of continuous distributions. Metron. 2013;71(1):63–79. doi:10.1007/s40300-013-0007-y.

25. Bhatti FA, Hamedani GG, Korkmaz M, Sheng W, Ali A. On the Burr XII-moment exponential distribution. PLoS One. 2021;16(2):e0246935. doi:10.1371/journal.pone.0246935.

26. Alizadeh M, Cordeiro GM, de Castro A, Santos PA. The Kumaraswamy-Burr XII distribution. J Stat Comput Simul. 2015;85(13):2697–715. doi:10.1080/00949655.2012.683003.

27. Al-Saiari AY, Baharith LA, Mousa SA. Marshall-Olkin extended Burr type XII distribution. Int J Stat Probab. 2014;3(1):78–84.

28. Alsadat N, Nagarjuna VBV, Hassan AS, Elgarhy M, Ahmad H, Almetwally EM. Marshall–Olkin Weibull–Burr XII distribution with application to physics data. AIP Adv. 2023;13(9):095325. doi:10.1063/5.0172143.

29. Hassan MHO, Elbatal I, Al-Nefaie AH, Elgarhy M. On the Kavya–Manoharan–Burr X model: estimations under ranked set sampling and applications. J Risk Financ Manag. 2023;16(1):19.

30. Algarni A, Almarashi AM, Elbatal I, Hassan SA, Almetwally EM, Daghistani MA, et al. Type I half logistic Burr X-G family: properties, Bayesian, and non-Bayesian estimation under censored samples and applications to COVID-19 data. Math Probl Eng. 2021;2021(1):5461130.

31. Bantan RA, Chesneau C, Jamal F, Elbatal I, Elgarhy M. The truncated Burr X-G family of distributions: properties and applications to actuarial and financial data. Entropy. 2021;23(8):1088.

32. Ahsanullah M, Shakil M, Elgarhy M, Kibria BM. On a generalized Burr life-testing model: characterization, reliability, simulation, and Akaike information criterion. J Stat Theory Appl. 2019;18(3):259–69.

33. Haq M, Elgarhy M, Hashmi S. The generalized odd Burr III family of distributions: properties, and applications. J Taibah Univ Sci. 2019;13(1):961–71.

34. Ocloo SK, Brew L, Nasiru S, Odoi B. On the extension of the Burr XII distribution: applications and regression. Comput J Math Stat Sci. 2023;2(1):1–30. doi:10.21608/cjmss.2023.181739.1000.

35. Isa AM, Ali BA, Zannah U. Sine Burr XII distribution: properties and application to real data sets. Arid Zone J Basic Appl Res. 2022;1:48–58.

36. Noori NA. Exploring the properties, simulation, and applications of the odd Burr XII Gompertz distribution. Adv Theory Nonlinear Anal Appl. 2023;7(4):60–75. doi:10.17762/atnaa.v7.i4.283.

37. Aslam M, Jun C-H. A group acceptance sampling plan for truncated life tests based on the inverse Rayleigh and log-logistic distributions. Pak J Stat. 2009;25(2):107–19.

38. Rao GS. A group acceptance sampling plans based on truncated life tests for Marshall-Olkin extended Lomax distribution. Electronic J Appl Stat Anal. 2009;3(1):18–27. doi:10.1285/i20705948v3n1p18.

39. Singh S, Tripathi YM. Acceptance sampling plans for inverse Weibull distribution based on truncated life test. Life Cycle Reliab Saf Eng. 2017;6(3):169–78. doi:10.1007/s41872-017-0022-8.

40. Almarashi AM, Khan K. Optimizing group size using percentile based group acceptance sampling plans with application. Contemp Math. 2024;5(4):4763–75. doi:10.37256/cm.5420245193.

41. Owoloko EA, Oguntunde PE, Adejumo AO. Performance rating of the transmuted exponential distribution: an analytical approach. SpringerPlus. 2015;4(1):818. doi:10.1186/s40064-015-1590-6.

42. Saha M, Tripathi H, Dey S. Single and double acceptance sampling plans for truncated life tests based on transmuted Rayleigh distribution. J Ind Prod Eng. 2021;38(5):356–68. doi:10.1080/21681015.2021.1893843.

43. Ameeq M, Naz S, Hassan MM, Fatima L, Shahzadi R, Kargbo A. Group acceptance sampling plan for exponential logarithmic distribution: an application to medical and engineering data. Cogent Eng. 2024;11(1):2328386. doi:10.1080/23311916.2024.2328386.

44. Ekemezie D-FN, Alghamdi FM, Aljohani HM, Riad FH, Abd El-Raouf MM, Obulezi OJ. A more flexible Lomax distribution: characterization, estimation, group acceptance sampling plan and applications. Alex Eng J. 2024;109(7):520–31. doi:10.1016/j.aej.2024.09.005.

45. Zhao W, Khosa SK, Ahmad Z, Aslam M, Afify AZ. Type-I heavy tailed family with applications in medicine, engineering and insurance. PLoS One. 2020;15(8):e0237462. doi:10.1371/journal.pone.0237462.

46. Metropolis N, Rosenbluth AW, Rosenbluth MN, Teller AH, Teller E. Equation of state calculations by fast computing machines. J Chem Phys. 1953;21(6):1087–92. doi:10.1063/1.1699114.

47. Hastings WK. Monte Carlo sampling methods using Markov chains and their applications. Biometrika. 1970;57(1):97–109. doi:10.1093/biomet/57.1.97.

48. El-Saeed AR, Obulezi OJ, Abd El-Raouf MM. Type II heavy tailed family with applications to engineering, radiation biology and aviation data. J Radiat Res Appl Sci. 2025;18(3):101547. doi:10.1016/j.jrras.2025.101547.

49. Mead M, Nassar MM, Dey S. A generalization of generalized gamma distributions. Pak J Stat Oper Res. 2018;14(1):121–38. doi:10.18187/pjsor.v14i1.1692.

50. Bhaumik DK, Kapur K, Gibbons RD. Testing parameters of a gamma distribution for small samples. Technometrics. 2009;51(3):326–34. doi:10.1198/tech.2009.07038.

51. Nadir S, Aslam M, Anyiam KE, Alshawarbeh E, Obulezi OJ. Group acceptance sampling plan based on truncated life tests for the Kumaraswamy Bell–Rayleigh distribution. Sci Afr. 2025;27(6):e02537. doi:10.1016/j.sciaf.2025.e02537.

52. Johnson NL, Kemp AW, Kotz S. Univariate discrete distributions. Hoboken, NJ, USA: John Wiley & Sons; 2005.

53. Weibull W. A statistical theory of strength of materials. Stockholm, Sweden: Generalstabens Litografiska Anstalts Forlag; 1939.

54. Gumbel EJ. Statistics of extremes. New York, NY, USA: Columbia University Press; 1958.


Cite This Article

APA Style
Obulezi, O.J., Semary, H.E., Nadir, S., Igbokwe, C.P., Orji, G.O. et al. (2025). Type-I Heavy-Tailed Burr XII Distribution with Applications to Quality Control, Skewed Reliability Engineering Systems and Lifetime Data. Computer Modeling in Engineering & Sciences, 144(3), 2991–3027. https://doi.org/10.32604/cmes.2025.069553
Vancouver Style
Obulezi OJ, Semary HE, Nadir S, Igbokwe CP, Orji GO, Al-Moisheer AS, et al. Type-I Heavy-Tailed Burr XII Distribution with Applications to Quality Control, Skewed Reliability Engineering Systems and Lifetime Data. Comput Model Eng Sci. 2025;144(3):2991–3027. https://doi.org/10.32604/cmes.2025.069553
IEEE Style
O. J. Obulezi et al., “Type-I Heavy-Tailed Burr XII Distribution with Applications to Quality Control, Skewed Reliability Engineering Systems and Lifetime Data,” Comput. Model. Eng. Sci., vol. 144, no. 3, pp. 2991–3027, 2025. https://doi.org/10.32604/cmes.2025.069553


Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.