
Confidence Intervals for the Reliability of Dependent Systems: Integrating Frailty Models and Copula-Based Methods

Osnamir E. Bru-Cordero1, Cecilia Castro2, Víctor Leiva3,*, Mario C. Jaramillo-Elorza4

1 Dirección Académica, Universidad Nacional de Colombia, Sede De La Paz, La Paz, Cesar, 202010, Colombia
2 Centre of Mathematics, Universidade do Minho, Braga, 4710-057, Portugal
3 Escuela de Ingeniería Industrial, Pontificia Universidad Católica de Valparaíso, Valparaíso, 2362807, Chile
4 Departamento de Estadística, Universidad Nacional de Colombia, Sede Medellín, Medellín, 050023, Colombia

* Corresponding Author: Víctor Leiva.

Computer Modeling in Engineering & Sciences 2025, 143(2), 1401-1431. https://doi.org/10.32604/cmes.2025.064487

Abstract

Most reliability studies assume large samples or independence among components, but these assumptions often fail in practice, leading to imprecise inference. We address this issue by constructing confidence intervals (CIs) for the reliability of two-component systems with Weibull-distributed failure times under a copula-frailty framework. Our construction integrates gamma-distributed frailties to capture unobserved heterogeneity and a copula-based dependence structure for correlated failures. The main contribution of this work is to derive adjusted CIs that explicitly incorporate the copula parameter in the variance-covariance matrix, achieving near-nominal coverage probabilities even in small samples or highly dependent settings. Through simulation studies, we show that, although traditional methods may suffice with moderate dependence and large samples, the proposed CIs offer notable benefits when dependence is strong or data are sparse. We further provide a synthetic example illustrating how penalized estimation can mitigate the issue of a degenerate Hessian matrix under high dependence and limited observations, thus enabling uncertainty quantification despite deviations from nominal assumptions. Overall, our results fill a gap in reliability modeling for systems prone to correlated failures, and contribute to more robust inference in engineering, industrial, and biomedical applications.

Keywords

Censored data; copula methods; dependent failure times; interval estimation; Weibull distribution

1  Introduction

Reliability is essential for the continuous operation of modern engineering and technological systems, particularly in sectors where uninterrupted service is vital to safety and efficiency [1–3]. Critical infrastructures, such as power grids, transportation networks, and healthcare systems, depend on uninterrupted functionality to avoid severe consequences such as power outages, transportation disruptions, and failures of medical devices, which can lead to loss of life, economic damage, or environmental harm. Recent advances in areas such as blockchain technology [4] and robust statistical methods for geostatistical data [5] have contributed to improving the reliability of these essential systems. A key challenge in reliability analysis is to capture the interdependence of system components, often referred to as common mode failures [6].

Although the assumption of independence is commonly made for mathematical convenience, it can produce an issue of imprecise inference when components share factors or conditions that increase the probability of simultaneous or correlated failures. To address this issue, researchers have proposed various models that move beyond strict independence. For instance, composable modeling techniques have been used to describe complex systems [7], and methods like Monte Carlo simulation [8] help to manage structural dependence [9].

Advances on the topic have been proposed. Functional links between components—represented, for example, in fault-tree or block-diagram models—underscore the complexity of accurate reliability estimation under dependence [7,10]. Although redundancy is used to improve reliability, it may exacerbate underestimation when components share covariates [11–14]. Copula methods are suited to modeling correlated structures in competing risks [1,15] and positive dependence among components, which is critical for realistic analysis [16–20].

Despite these advances, there remains a gap in constructing accurate confidence intervals (CIs) for small-scale series systems with strong dependence, as many existing studies focus on larger systems or make simplifying assumptions [21]. In a two-component series system, the overall failure occurs at whichever component fails first. When the two components are positively correlated, one typically observes low reliability because an early failure in one component is more likely to coincide with an early failure in the other, resulting in a shorter overall system failure time. Although copula-based methods and frailty models have been extensively examined [22–26], recent work proposes further generalizations to multivariate or cured-fraction settings [27,28], highlighting the broader potential of combining copula-based methods and frailty models. However, their integration in small-sample series systems remains relatively unexplored.

Building on the likelihood-based frailty-copula framework for competing risks proposed in [24], we extend that framework by incorporating a gamma distributed frailty term [29] into a two-component system, where the failure time of each component follows a Weibull distribution [30,31]. Our key innovation lies in capturing both unobserved heterogeneity and conditional association via a Gumbel copula. By examining varying dependence levels and sample sizes, we show how these two sources of correlation—frailty variance and copula-based tail dependence—affect the accuracy of inference under challenging yet practically relevant conditions. Also, we present a penalized-estimation case study designed to mimic a real-world scenario where unexpectedly high dependence or sparse data can cause the Hessian matrix to degenerate under the maximum likelihood (ML) method. This case study is an illustrative example that underscores how our framework can still yield interval estimates (albeit with potential bias) when real data are not available and small-sample issues are severe.

Throughout the article, we use a consistent set of symbols and acronyms, which are listed in Table 1 in the order of first appearance. Sections are arranged as follows. Section 2 provides background on copula-based methods and frailty models for reliability analysis. In Section 3, we describe the methodology for modeling competing risks data. In Section 4, CIs are constructed for dependent systems under ML estimation and the delta method. Section 5 conducts simulations, showing the roles of dependence and censoring in coverage probabilities (CPs) for Weibull-distributed failure times. In Section 6, we offer a synthetic example illustrating how penalized estimation can handle degenerate Hessian matrices under strong dependence and small samples. Section 7 concludes with findings and ideas for future research.

[Table 1: List of symbols and acronyms, in order of first appearance]

2  Theoretical Background

In this section, we present two main approaches in reliability analysis: copula-based methods for capturing dependence and frailty models for handling unobserved heterogeneity. These approaches form the basis for our proposed competing risks model with dependent failure times.

2.1 Copula-Based Methods for Dependency Structures

Copula-based methods are especially helpful for modeling the system reliability when component failures are dependent. These methods allow analysts to separate the marginal failure-time behavior from the joint dependence structure. Such methods are essential in real-world applications, where the assumption of independence among components is often unrealistic.

In essence, a copula is a bivariate cumulative distribution function (CDF) defined on the unit square $[0,1]^2$, which connects marginal distributions to form a joint distribution [18]. This is helpful in reliability problems where component failure times are correlated in ways that independent models cannot capture.

Definition 1: A bivariate copula is a function $C: [0,1]^2 \to [0,1]$ satisfying:

(i)   Boundary conditions: for all $u \in [0,1]$, $C(0,u) = C(u,0) = 0$ and $C(1,u) = C(u,1) = u$.

(ii)   Two-increasing property: for all $0 \le u_1 < u_2 \le 1$ and $0 \le v_1 < v_2 \le 1$, $C(u_2,v_2) - C(u_1,v_2) - C(u_2,v_1) + C(u_1,v_1) \ge 0$.

Definition 1 ensures that the copula encodes the dependence between two random variables.

Because a copula has uniform marginals on $[0,1]$, it effectively couples the one-dimensional distributions to form the joint distribution, a concept widely applied in reliability, finance, and other fields [32]. A fundamental result in copula theory is the Sklar theorem, given next.

Theorem 1 (Sklar): Let $T_1$ and $T_2$ be random variables with joint CDF $F$ and marginal CDFs $F_1$ and $F_2$, respectively. Then, there exists a copula $C$ such that $F(t_1,t_2) = C(F_1(t_1), F_2(t_2))$. If $F_1$ and $F_2$ are continuous, $C$ is unique. Conversely, for any copula $C$ and continuous marginal CDFs $F_1, F_2$, the function $(t_1,t_2) \mapsto C(F_1(t_1), F_2(t_2))$ is a valid joint CDF.

The Sklar result presented in Theorem 1 underpins much of copula theory by separating marginal distributions from the dependence structure. This remains true in bivariate and multivariate contexts [18].

Corollary 1: Let $F$ be a joint CDF of $(T_1,T_2)$ with continuous marginal CDFs $F_1$ and $F_2$. Then, the associated copula can be expressed as $C(u,v) = F(F_1^{-1}(u), F_2^{-1}(v))$, where $F_j^{-1}$ denotes the quantile (or pseudo-inverse) function of $F_j$. Consequently, $C$ serves as the joint CDF of $(U,V) = (F_1(T_1), F_2(T_2))$, whose components are uniformly distributed on $[0,1]$.

An immediate implication of the Sklar theorem is that the product copula $\Pi(u,v) = uv$ fully characterizes the joint distribution of two independent random variables. Any deviation from this product implies dependence between the variables. In practice, more flexible copulas, such as the Clayton, Frank, or Gumbel families, are often needed to capture complex dependence patterns.
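To make the Sklar construction concrete, the following minimal R sketch (ours, with arbitrary illustrative parameter values, not quantities from this study) builds a joint CDF from two Weibull marginals and a Gumbel copula; setting $\theta = 1$ recovers the product copula $\Pi$.

# Minimal illustration of the Sklar theorem: a joint CDF is obtained by
# plugging marginal CDFs into a copula; all parameter values are illustrative.
gumbel_copula <- function(u, v, theta) {
  # Gumbel copula, theta >= 1; theta = 1 gives the product copula u * v
  exp(-(((-log(u))^theta + (-log(v))^theta)^(1 / theta)))
}
joint_cdf <- function(t1, t2, theta, shape = c(1.5, 2), scale = c(10, 8)) {
  u <- pweibull(t1, shape[1], scale[1])  # marginal CDF F1(t1)
  v <- pweibull(t2, shape[2], scale[2])  # marginal CDF F2(t2)
  gumbel_copula(u, v, theta)
}
joint_cdf(5, 5, theta = 2)  # dependent case (conditional Kendall tau = 0.5)
joint_cdf(5, 5, theta = 1)  # equals pweibull(5, 1.5, 10) * pweibull(5, 2, 8)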

2.2 Frailty Models for Failure Times in Competing Risks

Traditional reliability analysis often assumes that failure times are mutually independent. However, this assumption is frequently violated in systems where dependence exists. Correlated frailty models, as discussed in [33], offer a convenient framework for incorporating such dependence.

Parametric and semiparametric Bayesian frailty models [34] also handle shared risks effectively. Moreover, various estimation and extension approaches (such as the expectation-maximization algorithm for semiparametric hazards [35], and the developments presented in [36,37]) further enhance the modeling of correlated failure times under frailty frameworks.

A typical frailty model consists of three main parts as follows [38]:

•   A frailty term representing latent or unobserved heterogeneity.

•   A baseline risk function, which can be either parametric or nonparametric.

•   An optional fixed-effects term to incorporate observed covariates.

Based on [24], we adopt a gamma distributed frailty to account for unobserved heterogeneity when modeling the failure times of two components using the Weibull distribution under competing risks. This adoption captures additional dependence arising from latent factors that affect both modes of failure in a single unit. In addition, we adapt the CI construction proposed in [39] to incorporate both the covariance structure of our frailty model and the dependence parameter of the copula. This provides a robust framework for inference on the system reliability when failure times are dependent.

In many real scenarios involving multiple competing risks, accurately capturing the interplay between shared risks (through a frailty) and direct dependence (through a copula) is critical. Therefore, we propose a framework merging both frailty and copula components to achieve a comprehensive characterization of the system reliability with dependent competing failure modes.

2.3 Advantages of the Proposed Framework

By combining a gamma-distributed frailty with a specific copula (here, the Gumbel family), our frailty-copula approach offers the following advantages:

•   It captures unobserved heterogeneity through the latent variable $Z_i$, for $i \in \{1,\ldots,n\}$, allowing certain units to be more frail or more robust without explicit covariates.

•   It enables conditional tail dependence beyond what a simple frailty model can account for—an effect introduced through the Gumbel copula structure. This is especially valuable in high-risk scenarios where correlated early failures can occur more often than basic models suggest.

These two sources of dependence, the gamma-distributed frailty $Z_i$ and the Gumbel copula parameter $\theta$, provide complementary ways to model correlation. Specifically, $\theta$ measures the conditional association (such as the Kendall tau, $\tau = 1 - 1/\theta$) given $Z_i = z$, for $i \in \{1,\ldots,n\}$, whereas the variance $\eta$ of $Z_i$ also influences the marginal (overall) correlation observed in the data. Hence, if $\eta > 0$, the total correlation can differ from $1 - 1/\theta$ when viewed unconditionally. Such a dual-layered perspective is particularly relevant when real-world factors create both large heterogeneity across units and direct dependence among components or failure modes.

3  Modeling Competing Risks with Dependent Failure Times

In this section, we first present and adapt the frailty-copula framework introduced in [24], focusing on a two-component system with Weibull distributed failure times under competing risks. Then, we go beyond [24] by deriving CIs for marginal quantiles via a log-transformed delta method, fully incorporating both the frailty parameter η and the copula parameter θ.

3.1 Competing Risks Model with Gamma Distributed Frailty and Gumbel Copula

In reliability terms, two failure time variables, $T_{i1}$ and $T_{i2}$ say, for $i \in \{1,\ldots,n\}$, may be interpreted in two ways: (i) as two possible modes of failure (competing risks) acting on the same item; or (ii) as two physical components arranged in series, that is, the system fails at $\min\{T_{i1}, T_{i2}\}$. Mathematically, both interpretations involve observing whichever failure occurs first, but the practical reading (one item with multiple risks versus a two-component system) depends on the engineering or biomedical context.

We adopt the Weibull distribution for each failure time, due to its well-established versatility in reliability contexts. It is capable of modeling various hazard shapes, such as increasing, constant, or decreasing failure rates. Nevertheless, our frailty-copula construction is not intrinsically limited to the Weibull model. In principle, other parametric forms—such as a generalized linear exponential distribution—could be used if prior knowledge or data analyses suggest a hazard structure not adequately captured by the Weibull model. In practice, implementing such alternatives often requires re-deriving the likelihood contributions for the copula-frailty model, which may lead to integrals without closed-form solutions and additional numerical complexity.

Moreover, with small sample sizes or strong dependence, identifiability of the extra parameters can pose further challenges. Given the broad applicability and familiarity of the Weibull model in engineering, we focus on that baseline here, noting that extensions to other parametric families remain viable future avenues under the same modeling framework. Hence, within this formulation based on the Weibull distribution, each failure time Ti1 or Ti2, for i{1,,n}, is still governed by the same gamma distributed frailty and copula structure described below. In practice, only the earliest failure time is observed (along with its corresponding failure mode), while the remaining mode is censored at that same time.

The proposed model accounts for two distinct sources of dependence: (i) unobserved heterogeneity, captured by a gamma distributed frailty Zi, for i{1,,n}. This allows, as mentioned, certain units to be more frail or more robust, reflecting latent factors not explained by observable covariates; (ii) conditional tail dependence, introduced via a Gumbel copula. Even after controlling for the shared frailty, correlated failure times can occur more frequently than predicted by a simple gamma distributed mixture alone. This is particularly relevant in high-risk scenarios where early failures tend to cluster.

In this study, we mainly adopt the viewpoint of two failure modes competing for the same item, but our formulas also apply to a two-component series arrangement with minimal modifications.

The Weibull distributed baseline and other options are considered next. Following [24] and usual practice in reliability studies, we adopt a Weibull distribution for each failure time. This adoption is motivated by the flexibility of the Weibull model in describing a range of hazard shapes and its common usage in engineering contexts. Although we focus on the Weibull distribution here, the framework can accommodate other parametric baselines (for example, exponential, gamma, generalized linear hazard distributions), or even semiparametric models, if domain knowledge or data characteristics suggest a different modeling strategy. In particular, a generalized linear exponential distribution could also be utilized within the same frailty-copula structure, albeit with different hazard formulations and potentially more challenging estimation details.

Next, we describe the gamma-distributed frailty. We introduce a frailty variable $Z_i$, independently drawn for unit $i$, with $i \in \{1,\ldots,n\}$, to account for unobserved shared risk factors. Specifically, we assume that $Z_i \sim \mathrm{Gamma}(1/\eta, \eta)$, with $\mathrm{E}[Z_i] = 1$ and $\mathrm{Var}[Z_i] = \eta$. Thus, the frailty term $Z_i$ represents the unit-specific unobserved heterogeneity, where values greater than one indicate higher susceptibility (greater frailty), and values below one correspond to units with lower risk (more robust).

Given $Z_i = z$, each failure time $T_{ij}$, for mode $j \in \{1,2\}$, follows a Weibull distribution with shape $\alpha_j > 0$ and scale $\lambda_j > 0$. Thus, the conditional survival function of $T_{ij}$ is given by

$$S_{T_{ij} \mid Z_i = z}(t) = \exp\!\left(-z \left(\frac{t}{\lambda_j}\right)^{\alpha_j}\right), \quad t > 0.$$

To induce dependence between $T_{i1}$ and $T_{i2}$ conditional on $Z_i = z$, we employ the Gumbel copula stated as

$$C_\theta(u,v) = \exp\!\left(-\left[(-\log(u))^{\theta} + (-\log(v))^{\theta}\right]^{1/\theta}\right), \quad (u,v) \in (0,1)^2, \ \theta \ge 1.$$

Hence, the joint conditional survival function of $(T_{i1}, T_{i2})$ given $Z_i = z$ is formulated as $S_{T_{i1}, T_{i2} \mid Z_i = z}(t_1, t_2) = C_\theta(S_{T_{i1} \mid Z_i = z}(t_1), S_{T_{i2} \mid Z_i = z}(t_2))$, for $t_1 > 0, t_2 > 0$. To obtain the unconditional joint survival function of $(T_{i1}, T_{i2})$, we integrate out the frailty $Z_i$ via the expression defined as $S_{T_{i1}, T_{i2}}(t_1, t_2) = \int_0^\infty S_{T_{i1}, T_{i2} \mid Z_i = z}(t_1, t_2)\, f_{Z_i}(z)\, \mathrm{d}z$, for $t_1 > 0, t_2 > 0$, where $f_{Z_i}$ is the gamma density with shape $1/\eta$ and scale $\eta$. Because this integral generally lacks a closed-form expression, numerical or simulation-based techniques are required for practical inference [40,41].
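For illustration, a minimal R sketch of this numerical step (ours; all parameter values are arbitrary) evaluates the unconditional joint survival function with the base routine integrate():

# Unconditional joint survival S(t1, t2): numerically integrate the Gumbel
# copula of the conditional survivals against the gamma frailty density
# (shape 1/eta, scale eta). Parameter values are illustrative only.
joint_surv <- function(t1, t2, alpha, lambda, eta, theta) {
  integrand <- function(z) {
    u <- exp(-z * (t1 / lambda[1])^alpha[1])  # S_{T1 | Z = z}(t1)
    v <- exp(-z * (t2 / lambda[2])^alpha[2])  # S_{T2 | Z = z}(t2)
    Ctheta <- exp(-(((-log(u))^theta + (-log(v))^theta)^(1 / theta)))
    Ctheta * dgamma(z, shape = 1 / eta, scale = eta)
  }
  integrate(integrand, lower = 0, upper = Inf)$value
}
joint_surv(5, 5, alpha = c(1.5, 2), lambda = c(10, 8), eta = 0.3, theta = 2)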

Remark [Model identifiability]: Including both the gamma-frailty variance η and the Gumbel copula parameter θ in the model can raise identifiability issues when sample sizes are small or when covariate information is limited [33,38]. In such scenarios, additional structure (such as repeated observations per unit or covariates) may be needed to separate unobserved heterogeneity from direct dependence.

Some implications and special cases are the following. Our frailty-copula method captures two complementary types of dependence: (i) extra multiplicative risk factors via $Z_i$ (unobserved heterogeneity); and (ii) conditional tail dependence via the copula. A simpler special case arises if we assume conditional independence given $Z_i = z$, that is, $S_{T_{i1}, T_{i2} \mid Z_i = z}(t_1, t_2) = S_{T_{i1} \mid Z_i = z}(t_1)\, S_{T_{i2} \mid Z_i = z}(t_2)$. Under Weibull-distributed marginals plus a gamma-distributed frailty, this leads to a known closed form for the unconditional joint survival function [38] stated as

$$S_{T_{i1}, T_{i2}}(t_1, t_2) = \left(1 + \eta \left(\frac{t_1}{\lambda_1}\right)^{\alpha_1} + \eta \left(\frac{t_2}{\lambda_2}\right)^{\alpha_2}\right)^{-1/\eta}.$$

Even though this independence assumption given $Z_i = z$ eliminates direct copula-based dependence, the shared frailty variance $\eta$ can still induce marginal correlation between $(T_{i1}, T_{i2})$.

3.2 Marginal Survival and Quantiles

Regardless of the specific copula, the marginal survival function for mode j follows from integrating out the frailty as

$$S_{T_j}(t) = \int_0^\infty \exp\!\left(-z \left(\frac{t}{\lambda_j}\right)^{\alpha_j}\right) f_Z(z)\, \mathrm{d}z = \left(1 + \eta \left(\frac{t}{\lambda_j}\right)^{\alpha_j}\right)^{-1/\eta}, \quad t > 0.$$

Hence, the marginal $p$-th quantile $t_{p,j}$ satisfies $S_{T_j}(t_{p,j}) = 1 - p$, which implies

$$t_{p,j} = \lambda_j \left(\frac{(1-p)^{-\eta} - 1}{\eta}\right)^{1/\alpha_j}, \quad 0 < p < 1. \tag{1}$$

Thus, although the joint distribution may be complex, the marginal survival functions and quantiles emerge in closed form. This feature proves convenient for constructing reliability-based CIs and for evaluating system-level performance measures.
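As a small R sketch (ours, with arbitrary parameter values), the closed-form quantile in (1) can be coded and verified directly against the marginal survival function:

# Closed-form marginal p-th quantile from Eq. (1) under the gamma-frailty
# Weibull marginal S(t) = (1 + eta * (t / lambda)^alpha)^(-1 / eta).
quantile_frailty_weibull <- function(p, alpha, lambda, eta) {
  lambda * (((1 - p)^(-eta) - 1) / eta)^(1 / alpha)
}
marg_surv <- function(t, alpha, lambda, eta) {
  (1 + eta * (t / lambda)^alpha)^(-1 / eta)
}
t50 <- quantile_frailty_weibull(0.5, alpha = 1.5, lambda = 10, eta = 0.3)
marg_surv(t50, 1.5, 10, 0.3)  # returns 0.5, confirming S(t_p) = 1 - p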

3.3 Mean and Variance of Marginal Failure Times

In the simpler gamma-Weibull mixture model setting, where each unit experiences a single failure mode and no additional copula structure is introduced, closed-form expressions for the mean and variance of the marginal failure times can be obtained.

Consider the shape and scale parameters $\alpha_j > 0$ and $\lambda_j > 0$ of the Weibull distribution conditional on the gamma-distributed frailty $Z_i$, and let $\eta > 0$ be the variance of the gamma-distributed frailty, $Z_i \sim \mathrm{Gamma}(1/\eta, \eta)$. It is established [33,38] that for integer values $m$ satisfying $m/\alpha_j < 1/\eta$, a general expression for the $m$-th moment, and from it the first and second moments, are given, for $i \in \{1,\ldots,n\}$ and $j \in \{1,2\}$, by

$$\mathrm{E}[T_{ij}^m] = \frac{\lambda_j^m\, \Gamma(1/\eta - m/\alpha_j)\, \Gamma(1 + m/\alpha_j)}{\eta^{m/\alpha_j}\, \Gamma(1/\eta)}, \quad \mathrm{E}[T_{ij}] = \frac{\lambda_j\, \Gamma(1/\eta - 1/\alpha_j)\, \Gamma(1 + 1/\alpha_j)}{\eta^{1/\alpha_j}\, \Gamma(1/\eta)}, \quad \mathrm{E}[T_{ij}^2] = \frac{\lambda_j^2\, \Gamma(1/\eta - 2/\alpha_j)\, \Gamma(1 + 2/\alpha_j)}{\eta^{2/\alpha_j}\, \Gamma(1/\eta)}.$$

Then, the variance follows immediately, for $i \in \{1,\ldots,n\}$ and $j \in \{1,2\}$, as

$$\mathrm{Var}[T_{ij}] = \mathrm{E}[T_{ij}^2] - (\mathrm{E}[T_{ij}])^2 = \frac{\lambda_j^2\, \Gamma(1/\eta - 2/\alpha_j)\, \Gamma(1 + 2/\alpha_j)}{\eta^{2/\alpha_j}\, \Gamma(1/\eta)} - \left(\frac{\lambda_j\, \Gamma(1/\eta - 1/\alpha_j)\, \Gamma(1 + 1/\alpha_j)}{\eta^{1/\alpha_j}\, \Gamma(1/\eta)}\right)^2.$$

Note that, as $\eta$ increases, both $\mathrm{E}[T_{ij}]$ and $\mathrm{Var}[T_{ij}]$ increase, for $i \in \{1,\ldots,n\}$ and $j \in \{1,2\}$, reflecting greater unobserved heterogeneity among the units. High values of $\eta$ also result in increased marginal correlation between the failure times $T_{i1}$ and $T_{i2}$, even without an explicit copula parameter such as $\theta$.
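The following R sketch (ours; it uses the moment expression as reconstructed above, whose $\eta$ powers follow from $\mathrm{E}[Z^{-m/\alpha_j}]$ under $Z \sim \mathrm{Gamma}(1/\eta, \eta)$, and arbitrary parameter values) computes the closed-form mean and variance and cross-checks the mean by Monte Carlo:

# Closed-form moments of a marginal failure time in the gamma-frailty Weibull
# mixture, with a Monte Carlo check of the mean. Requires m / alpha < 1 / eta.
frailty_weibull_moment <- function(m, alpha, lambda, eta) {
  stopifnot(m / alpha < 1 / eta)
  lambda^m * gamma(1 / eta - m / alpha) * gamma(1 + m / alpha) /
    (eta^(m / alpha) * gamma(1 / eta))
}
alpha <- 1.5; lambda <- 10; eta <- 0.3
mu <- frailty_weibull_moment(1, alpha, lambda, eta)
v  <- frailty_weibull_moment(2, alpha, lambda, eta) - mu^2
set.seed(1)
z <- rgamma(1e6, shape = 1 / eta, scale = eta)   # latent frailties
t <- lambda * (rexp(1e6) / z)^(1 / alpha)        # T | Z = z is Weibull with scale lambda * z^(-1/alpha)
c(closed_form = mu, monte_carlo = mean(t))       # the two means should agree closely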

3.4 Dependence Structure and Identifiability in Competing Risks

Introducing a Gumbel copula $C_\theta$ conditional on the gamma-distributed frailty $Z_i$ does not alter the marginal distributions of the failure time associated with each mode $j$. Indeed, the univariate survival functions remain stated as

$$S_{T_{ij}}(t) = \int_0^\infty \exp\!\left(-z \left(\frac{t}{\lambda_j}\right)^{\alpha_j}\right) f_Z(z)\, \mathrm{d}z, \quad t > 0, \ i \in \{1,\ldots,n\}, \ j \in \{1,2\},$$

where $f_Z$, as mentioned, is the gamma frailty density. However, once the copula is introduced, the joint survival function, presented as

$$S_{T_{i1}, T_{i2}}(t_1, t_2) = \int_0^\infty C_\theta\!\left(\exp\!\left(-z \left(\frac{t_1}{\lambda_1}\right)^{\alpha_1}\right), \exp\!\left(-z \left(\frac{t_2}{\lambda_2}\right)^{\alpha_2}\right)\right) f_Z(z)\, \mathrm{d}z, \quad t_1 > 0, \ t_2 > 0, \ i \in \{1,\ldots,n\},$$

generally lacks a closed-form expression. Consequently, joint reliability metrics must be evaluated via numerical quadrature or other approximation methods. This copula-frailty structure is especially valuable for capturing potential tail dependence that goes beyond what is explained by shared frailty alone.

A natural way to measure and interpret the strength of such dependence in reliability studies is through the Kendall tau, denoted by $\tau$, owing to its invariance under monotonic transformations and the ease of deriving closed-form expressions in certain copula families [18]. When dependence arises purely from the gamma-distributed frailty, the failure times $(T_{i1}, T_{i2})$ of the two modes are conditionally independent given $Z_i$, for $i \in \{1,\ldots,n\}$, and the resulting (unconditional) dependence typically increases with the frailty variance $\eta$. By contrast, our bivariate Gumbel copula with parameter $\theta \ge 1$ induces an additional conditional correlation, reflected in $\tau_{\mid Z} = (\theta - 1)/\theta$. Upon integrating out $Z$, the overall Kendall tau depends on both $\eta$ and $\theta$, with larger values of either parameter leading to stronger observed dependence.

However, an important practical challenge arises because only the earliest failure time and its cause are typically observed in competing risks. This partial information can make it difficult to disentangle the roles of shared frailty (governed by $\eta$) and copula-based dependence (governed by $\theta$) [42]. Additional structural assumptions or more detailed data, such as repeated failures per unit, supplementary covariates, or expert knowledge, may be needed to ensure identifiability [24,33]. In their absence, it can be inherently challenging to isolate how much of the dependence is truly due to unobserved heterogeneity rather than to a direct association between the failure times of the two modes.

4  Maximum Likelihood Inference

In this section, we estimate the model parameters using the ML method and construct CIs for the reliability of dependent systems based on the ML estimation and delta methods.

4.1 Estimation

We outline parameter estimation via the ML method. Consider $n$ independent units subject to two competing failure modes. For unit $i$, we observe the earliest failure time $t_i = \min\{T_{i1}, T_{i2}\}$ and indicators $\delta_{ij}$, where $\delta_{ij} = 1$ if failure occurs by mode $j \in \{1,2\}$, and $\delta_{ij} = 0$ otherwise. Consequently, if $\delta_{i1} = \delta_{i2} = 0$, the observation for unit $i$ is right-censored. We estimate the parameter vector $\boldsymbol{\theta} = (\alpha_1, \lambda_1, \alpha_2, \lambda_2, \eta, \theta)$, where $\alpha_j$ and $\lambda_j$ are the Weibull shape and scale parameters for mode $j$, $\eta$ is the variance parameter of the gamma-distributed frailty, and $\theta$ is the Gumbel copula parameter capturing tail dependence.

The likelihood function for observed data $(t_i, \delta_{i1}, \delta_{i2})$, with $i \in \{1,\ldots,n\}$, is defined as

$$L(\boldsymbol{\theta}) = \prod_{i=1}^n f_1(t_i)^{\delta_{i1}}\, f_2(t_i)^{\delta_{i2}}\, S_{T_{i1}, T_{i2}}(t_i, t_i)^{1 - (\delta_{i1} + \delta_{i2})}, \tag{2}$$

where $f_j$ is the density for the failure time of mode $j$, computed as $f_j(t) = -\partial S_{T_{i1}, T_{i2}}(t_1, t_2)/\partial t_j \big|_{t_1 = t_2 = t}$, for $j \in \{1,2\}$.

Under the frailty-copula model, the joint survival function $S_{T_{i1}, T_{i2}}$ involves an integral over the gamma-distributed frailty combined with the copula. Numerical integration and differentiation techniques must be employed to evaluate the likelihood components accurately. Taking the logarithm of the function formulated in (2), we obtain the log-likelihood function $\ell(\boldsymbol{\theta})$ given by

$$\ell(\boldsymbol{\theta}) = \sum_{i=1}^n \left\{\delta_{i1} \log(f_1(t_i)) + \delta_{i2} \log(f_2(t_i)) + (1 - \delta_{i1} - \delta_{i2}) \log(S_{T_{i1}, T_{i2}}(t_i, t_i))\right\}.$$

Since $\delta_{i1} + \delta_{i2} \le 1$, at most one density term is active per observation, and the survival function term appears only in censored observations. ML estimates are obtained numerically by solving $\hat{\boldsymbol{\theta}} = \arg\max_{\boldsymbol{\theta}} \ell(\boldsymbol{\theta})$, typically through quasi-Newton or similar optimization algorithms. In more complex cases—particularly when both the frailty variance $\eta$ and the copula parameter $\theta$ must be estimated—additional constraints or prior information may be required to ensure identifiability.

In practice, several considerations are essential to achieve robust inference. First, to avoid numerical instability or overflow, one can optimize η and θ on a logarithmic scale, guaranteeing their positivity and promoting stable convergence. Second, convergence diagnostics should include verification of gradient norms and examination of the Hessian matrix definiteness, ensuring the algorithm attains a global rather than a local maximum. Then, once the model has converged, the asymptotic covariance matrix Σ can be estimated by inverting the observed Fisher information matrix, that is, the negative Hessian matrix evaluated at the ML estimates. This enables the use of the delta method to construct CIs for derived reliability measures and quantiles, as described next.
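A condensed R sketch of this workflow follows (ours; loglik_model() is a hypothetical user-supplied function implementing the log-likelihood of this section, not code from the article):

# Optimize the log-likelihood on transformed scales (log for alpha, lambda,
# eta; log(theta - 1) so that theta >= 1), then invert the observed Fisher
# information (negative Hessian of the log-likelihood) for the covariance.
fit_frailty_copula <- function(times, delta1, delta2, start) {
  negll <- function(par) {
    theta_vec <- c(alpha1 = exp(par[1]), lambda1 = exp(par[2]),
                   alpha2 = exp(par[3]), lambda2 = exp(par[4]),
                   eta = exp(par[5]), theta = 1 + exp(par[6]))
    -loglik_model(theta_vec, times, delta1, delta2)  # assumed helper
  }
  fit <- optim(start, negll, method = "BFGS", hessian = TRUE)
  Sigma <- solve(fit$hessian)  # asymptotic covariance on the transformed scale
  list(par = fit$par, Sigma = Sigma, convergence = fit$convergence)
}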

4.2 Confidence Intervals for Marginal Quantiles

We now focus on constructing CIs for the marginal quantiles $t_{p,j}$, where $S_{T_j}(t_{p,j}) = 1 - p$, in the presence of a frailty-copula model, for $j \in \{1,2\}$. Because $t_{p,j}$ is a nonlinear function of the parameter vector $\boldsymbol{\theta}$, traditional CI formulas cannot be applied directly. Instead, we use a log-transformation combined with the delta method to derive an asymptotic approximation for the quantile variance. Define the log-quantile by $y_{p,j} = \log(t_{p,j})$, for $0 < p < 1$ and $j \in \{1,2\}$, and let $\hat{y}_{p,j}$ be its estimate obtained by plugging the ML estimate $\hat{\boldsymbol{\theta}}$ into the marginal quantile function defined in (1). By linearizing $y_{p,j}$ around $\hat{\boldsymbol{\theta}}$, the delta method provides an approximate standard error (SE).

Specifically, if $\widehat{\mathrm{SE}}(\hat{y}_{p,j})$ denotes the estimated SE of $\hat{y}_{p,j}$, then an approximate $(1 - \alpha_{\mathrm{CI}}) \times 100\%$ CI for the log-quantile is given by $[\hat{y}_{p,j} - z_{\alpha_{\mathrm{CI}}/2}\, \widehat{\mathrm{SE}}(\hat{y}_{p,j}),\ \hat{y}_{p,j} + z_{\alpha_{\mathrm{CI}}/2}\, \widehat{\mathrm{SE}}(\hat{y}_{p,j})]$, where $z_{\alpha_{\mathrm{CI}}/2}$ is the standard normal quantile corresponding to the chosen confidence level. Exponentiating both endpoints yields the desired interval for $t_{p,j}$, namely $[\exp(\underline{y}_{p,j}), \exp(\bar{y}_{p,j})]$, for $0 < p < 1$ and $j \in \{1,2\}$.

In terms of practical implementation, evaluating $\widehat{\mathrm{SE}}(\hat{y}_{p,j})$ requires computing the gradient of $\log(t_{p,j})$ with respect to the parameter vector $\boldsymbol{\theta}$. This can be done using numerical approximations (such as finite differences) or symbolic differentiation. In particular, recall that $t_{p,j}$ is implicitly defined by the equation $S_{T_j}(t_{p,j}; \boldsymbol{\theta}) = 1 - p$, for $0 < p < 1$ and $j \in \{1,2\}$. Thus, differentiating this equation with respect to each component of $\boldsymbol{\theta}$ allows us to track how changes in $\boldsymbol{\theta}$ propagate to $t_{p,j}$. When the model also includes the copula and frailty parameters $(\theta, \eta)$, the corresponding derivatives must incorporate the full covariance structure $\hat{\Sigma}$, obtained by inverting the Fisher information matrix evaluated at the ML estimate (see Section 4.1).
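A minimal R sketch of this computation (ours; quantile_fun() is a hypothetical helper returning $t_{p,j}$ at a given parameter vector, for example a wrapper around (1), and the numDeriv package is assumed available) is:

# Log-transformed delta method for a marginal quantile: differentiate the
# log-quantile numerically, propagate the covariance, and exponentiate.
ci_quantile <- function(par_hat, Sigma, quantile_fun, level = 0.95) {
  y_hat <- log(quantile_fun(par_hat))
  g <- numDeriv::grad(function(p) log(quantile_fun(p)), par_hat)  # gradient of the log-quantile
  se <- sqrt(drop(t(g) %*% Sigma %*% g))                          # delta-method SE
  z <- qnorm(1 - (1 - level) / 2)
  exp(c(lower = y_hat - z * se, upper = y_hat + z * se))          # back to the time scale
}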

4.3 Asymptotic Validity and Coverage

Under usual regularity conditions ensuring that $\hat{\boldsymbol{\theta}}$ is consistent and asymptotically normal, the log-quantile estimator $\hat{y}_{p,j}$ converges in distribution to a normally distributed random variable centered at the true $y_{p,j} = \log(t_{p,j})$. Consequently, the resulting CIs achieve the nominal CP $1 - \alpha_{\mathrm{CI}}$ as the sample size grows. Sufficient conditions include the smoothness of $t_{p,j}(\boldsymbol{\theta})$ (so that the delta method applies) and having enough observed failures from mode $j \in \{1,2\}$ to estimate its parameters with precision. In practice, one can further validate CP accuracy through simulation studies or bootstrap resampling.

By applying a log-transformation and the delta method, we obtain a theoretically justified procedure for constructing CIs for $t_{p,j}$. Under regularity conditions, these intervals attain the nominal CP $1 - \alpha_{\mathrm{CI}}$ as the sample size increases.

Specifically, let $S_{T_j}$ denote the true survival function for the failure time of mode $j$ with quantile $t_{p,j}$, and let $\hat{S}_{T_j}$ be the fitted counterpart derived from the ML estimate $\hat{\boldsymbol{\theta}}$. Our CI $[\underline{t}_{p,j}, \bar{t}_{p,j}]$ must satisfy the expression formulated as

$$\Pr[t_{p,j} \in (\underline{t}_{p,j}, \bar{t}_{p,j})] = \Pr[\underline{y}_{p,j} \le y_{p,j} \le \bar{y}_{p,j}] \to 1 - \alpha_{\mathrm{CI}}, \quad 0 < p < 1, \ j \in \{1,2\},$$

as $n \to \infty$, where $y_{p,j} = \log(t_{p,j})$ and $\underline{y}_{p,j}, \bar{y}_{p,j}$ are the log-scale bounds obtained via the delta method. For this convergence to hold, the following three main conditions are usually required:

(i)   Consistency and asymptotic normality of the ML estimator $\hat{\boldsymbol{\theta}}$, which ensure a valid linear approximation of $\log(t_{p,j})$ around the true parameter values.

(ii)  Smoothness of the mapping from $\boldsymbol{\theta}$ to $t_{p,j}$, so that partial derivatives exist and the gradient at the true parameter point is nonzero.

(iii) Sufficient effective sample size for each mode j, with the proportion of observed failures bounded away from zero and one, thereby guaranteeing stable estimation of both the marginal distribution and the parameter vector.

When conditions (i), (ii), and (iii) above are met, traditional asymptotic theory [33,38] implies that

$$\Pr[\underline{y}_{p,j} \le y_{p,j} \le \bar{y}_{p,j}] \to 1 - \alpha_{\mathrm{CI}}, \quad 0 < p < 1, \ j \in \{1,2\},$$

as $n \to \infty$.

Thus, exponentiating the bounds restores a suitable CP level on the original time scale. In practice, one can further corroborate these asymptotic results via simulation studies or bootstrap resampling to assess finite-sample performance.

4.4 Implicit Differentiation and Partial Derivatives

Because $y_{p,j} = \log(t_{p,j})$ is implicitly defined as a nonlinear function of the parameter vector $\hat{\boldsymbol{\theta}}$, we apply the delta method in conjunction with implicit differentiation to obtain $\widehat{\mathrm{SE}}(\hat{y}_{p,j})$. Concretely, recall that the fitted marginal survival function $\hat{S}_{T_j}$ satisfies the relationship $\hat{S}_{T_j}(\exp(y_{p,j}); \hat{\boldsymbol{\theta}}) = 1 - p$. Differentiating this relationship with respect to the parameters in $\hat{\boldsymbol{\theta}}$ captures how small changes in $\hat{\boldsymbol{\theta}}$ propagate to $y_{p,j}$. To organize these effects, let

$$\Delta_{1,j} = \frac{\partial \hat{S}_{T_j}(\exp(y_{p,j}))}{\partial y_{p,j}} = -f_j(\exp(y_{p,j}))\, \exp(y_{p,j}), \quad 0 < p < 1, \ j \in \{1,2\}, \tag{3}$$

where $f_j$ is the fitted marginal density of the failure time for mode $j$. The partial derivative presented in (3) tracks the sensitivity of $\hat{S}_{T_j}$ to infinitesimal shifts in $y_{p,j}$. In parallel, let

$$\Delta_{2,j}^2 = \left(\frac{\partial \exp(y_{p,j})}{\partial \hat{\boldsymbol{\theta}}}\right)^{\!\top} \hat{\Sigma}\, \left(\frac{\partial \exp(y_{p,j})}{\partial \hat{\boldsymbol{\theta}}}\right), \quad 0 < p < 1, \ j \in \{1,2\}, \tag{4}$$

where $\hat{\Sigma}$ is the asymptotic covariance matrix (obtained from the inverse of the observed Fisher information at $\hat{\boldsymbol{\theta}}$), and $\partial \exp(y_{p,j})/\partial \hat{\boldsymbol{\theta}}$ is the gradient vector of $\exp(y_{p,j})$ with respect to $\hat{\boldsymbol{\theta}}$. The two quantities presented in (3) and (4) combine to yield

$$\widehat{\mathrm{SE}}(\hat{y}_{p,j}) = \frac{\sqrt{\Delta_{2,j}^2}}{|\Delta_{1,j}|}, \quad 0 < p < 1, \ j \in \{1,2\}.$$

Under usual regularity assumptions, including the consistency and asymptotic normality of $\hat{\boldsymbol{\theta}}$, we have $\widehat{\mathrm{SE}}(\hat{y}_{p,j}) \xrightarrow{\mathcal{P}} 0$ as $n_j \to \infty$, which ensures that the delta-method-based CIs for $t_{p,j}$ converge to the correct nominal CP, where $\xrightarrow{\mathcal{P}}$ denotes convergence in probability.

4.5 Taylor Expansions and Coverage Probability

To further elucidate why the proposed CIs converge to the nominal level, we use a Taylor expansion of the estimated marginal survival function $\hat{S}_{T_j}$ around the true log-quantile $y_{p,j} = \log(t_{p,j})$. Define a small shift by means of $\tilde{y}_{p,j} = y_{p,j} + z_{1-\alpha_{\mathrm{CI}}/2}\, \Delta_{2,j}/|\Delta_{1,j}|$, where $\Delta_{1,j}$ and $\Delta_{2,j}$ are defined in (3) and (4), respectively, for $0 < p < 1$ and $j \in \{1,2\}$.

By expanding $\hat{S}_{T_j}(\exp(\tilde{y}_{p,j}))$ around $y_{p,j}$, we get

$$\hat{S}_{T_j}(\exp(\tilde{y}_{p,j})) \approx \hat{S}_{T_j}(\exp(y_{p,j})) + \frac{\partial \hat{S}_{T_j}(\exp(y_{p,j}))}{\partial y_{p,j}} \left(z_{1-\alpha_{\mathrm{CI}}/2}\, \frac{\Delta_{2,j}}{|\Delta_{1,j}|}\right) + R_{n,j}, \quad 0 < p < 1, \ j \in \{1,2\},$$

where $R_{n,j} = o_p(1)$ is the approximation remainder. Since $\hat{S}_{T_j}(\exp(y_{p,j})) \approx S_{T_j}(t_{p,j}) = 1 - p$, and recalling that $\Delta_{1,j} < 0$, the shifted value of $\hat{S}_{T_j}$ is approximately $(1 - p) - z_{1-\alpha_{\mathrm{CI}}/2}\, \Delta_{2,j} + R_{n,j}$.

An analogous argument applies to the lower bound $\underline{y}_{p,j}$. Hence, the event $\{\hat{S}_{T_j}(\exp(\tilde{y}_{p,j})) \le 1 - p \le \hat{S}_{T_j}(\exp(\underline{y}_{p,j}))\}$ translates into a statement about the standardized difference $(\hat{S}_{T_j}(t_{p,j}) - S_{T_j}(t_{p,j}))/\Delta_{2,j}$, which converges in distribution to a standard normal random variable by the delta method and the asymptotic normality of $\hat{\boldsymbol{\theta}}$. As the sample size $n_j$ (the effective sample size for mode $j$) becomes large, this probability approaches $1 - \alpha_{\mathrm{CI}}$.

4.6 Asymptotic Normality and Slutsky Theorem

Under regularity conditions ensuring that $\hat{\boldsymbol{\theta}}$ is both consistent and asymptotically normally distributed, we have $(\hat{S}_{T_j}(t_{p,j}) - S_{T_j}(t_{p,j}))/\Delta_{2,j} \xrightarrow{\mathcal{D}} \mathrm{N}(0,1)$, for $0 < p < 1$ and $j \in \{1,2\}$, where $\xrightarrow{\mathcal{D}}$ denotes convergence in distribution. Moreover, the remainder terms in the Taylor expansion vanish as $n_j \to \infty$. The Slutsky theorem then guarantees that the true CP converges to $1 - \alpha_{\mathrm{CI}}$. Consequently, the log-transformed delta-method CI for $t_{p,j}$ attains the nominal CP as the sample grows.

In summary, the asymptotic validity of our interval estimators for $t_{p,j}$ follows from a combination of (i) implicit differentiation of the quantile definition, (ii) Taylor expansions around the true log-quantile, and (iii) traditional asymptotic results for ML estimators, culminating in a robust procedure for quantifying uncertainty in marginal quantiles under dependent competing risks.

5  Simulation Study

This section describes a simulation study aimed at evaluating our proposed CIs for the reliability of a two-component series system, where component lifetimes follow Weibull distributions with a gamma-distributed frailty and are linked by a Gumbel copula. We assess the CPs of these CIs under different dependence levels (quantified by Kendall’s tau, τ) and various censoring rates. We further compare our adjusted intervals with two traditional approaches and examine how the sample size affects the mean time to failure (MTTF).

5.1 Setup and Frailty-Dependence Modeling

We adopt a gamma-distributed frailty $Z$ to capture unobserved heterogeneity among units, governed by the variance parameter $\eta$. As $\eta$ approaches zero, the frailty collapses to $Z = 1$ (no extra heterogeneity). Larger values of $\eta$ imply stronger unobserved heterogeneity, typically increasing the marginal correlation of failure times. Conditionally on $Z = z$, each component follows a Weibull distribution with shape $\alpha_j$ and scale $\lambda_j$, for $j \in \{1,2\}$. Beyond the shared frailty, a Gumbel copula (parameterized by $\theta \ge 1$) introduces additional tail dependence. This setup accommodates both latent heterogeneity (via $\eta$) and direct association (via $\theta$).

5.2 Implementation and Comparison of Interval Methods

For each generated dataset of size $n$, we estimate the parameter vector $\boldsymbol{\theta} = (\alpha_1, \lambda_1, \alpha_2, \lambda_2, \eta, \theta)$ by maximizing the full log-likelihood function introduced in Section 3. As there is no closed form for the integral over the gamma frailty or for the partial derivatives of the copula-based survival function, we rely on numerical integration combined with finite-difference approximations as follows:

•   We use integrate() (the base R adaptive quadrature routine) or simple Monte Carlo methods to evaluate the integrals over the gamma frailty distribution.

•   We employ numDeriv for finite-difference approximations of partial derivatives, which are needed to compute fj(t) in the likelihood.

•   We call optim() (BFGS or L-BFGS-B) for quasi-Newton optimization, facilitating parameter constraints (such as positivity of η, θ) and promoting stable convergence.

Following the expression stated in (2), each observation $(t_i, \delta_{i1}, \delta_{i2})$ contributes a density term $f_j(t_i)$ if $\delta_{ij} = 1$, or a survival term $S_{T_{i1}, T_{i2}}(t_i, t_i)$ if the observation is censored. To avoid numerical under-/overflow and to ensure positivity (or boundedness) of parameters, we often reparametrize $\eta$ and $\theta$ on a log-scale. For instance, if $\theta$ must satisfy $\theta \ge 1$, we let $\theta^* \in \mathbb{R}$ and define $\theta = 1 + \exp(\theta^*)$, where $\theta^*$ is the unconstrained parameter manipulated by the optimizer, guaranteeing that $\theta \ge 1$. Convergence is monitored by checking the gradient norm and the definiteness of the Hessian matrix. If the Hessian matrix is nearly singular, for instance due to strong dependence or small sample sizes, we optionally add a mild ridge penalty to stabilize the fit.

5.3 Coverage Probability

We adopt the procedure outlined in Algorithm 1 to obtain empirical CPs for the proposed CI methods. In each simulation replicate, we generate a latent gamma-distributed frailty $Z_i \sim \mathrm{Gamma}(1/\eta, \eta)$, use a Gumbel copula (parameter $\theta$) to introduce dependence among the Weibull-distributed times, and optionally impose right-censoring.

After generating $(t_{\mathrm{obs},i}, \delta_{i1}, \delta_{i2})$ for each unit, we compute the log-likelihood function, substituting $f_j(t) = -\partial S_{T_{i1}, T_{i2}}(t_1, t_2)/\partial t_j \big|_{t_1 = t_2 = t}$ into the expression stated in (2), and maximize it via quasi-Newton routines. Once $\hat{\boldsymbol{\theta}}$ is obtained, we invert the observed Fisher information to estimate the covariance matrix. A log-transformed delta method then provides CIs for the marginal quantiles $t_{p,j}$ (see Section 4 for details).

[Algorithm 1: Simulation procedure for obtaining empirical coverage probabilities]
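A minimal R sketch of the data-generation step of Algorithm 1 (ours; it assumes the copula package for Gumbel sampling, and all parameter values are arbitrary) is the following:

# Generate dependent competing-risks data: gamma frailties, Gumbel-dependent
# uniforms, inversion of the conditional survival, and exponential censoring.
library(copula)
gen_data <- function(n, alpha, lambda, eta, theta, cens_rate) {
  z <- rgamma(n, shape = 1 / eta, scale = eta)          # latent frailties
  uv <- rCopula(n, gumbelCopula(theta, dim = 2))        # dependent uniforms
  t1 <- lambda[1] * (-log(uv[, 1]) / z)^(1 / alpha[1])  # invert S(t | z) = exp(-z (t/lambda)^alpha)
  t2 <- lambda[2] * (-log(uv[, 2]) / z)^(1 / alpha[2])
  cens <- rexp(n, rate = cens_rate)                     # right-censoring times
  data.frame(t_obs  = pmin(t1, t2, cens),
             delta1 = as.integer(t1 <= pmin(t2, cens)),
             delta2 = as.integer(t2 < pmin(t1, cens)))
}
set.seed(123)
head(gen_data(5, alpha = c(1.5, 2), lambda = c(10, 8), eta = 0.3, theta = 2,
              cens_rate = 0.02))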

Following the steps in Algorithm 1, we evaluate the CP for each triple $(n, \theta, \text{censoring rate})$, where $n$ is the sample size, $\theta \ge 1$ is the Gumbel copula parameter controlling the dependence strength, and the censoring rate is the rate parameter of the exponential censoring times $\{C_i\}$. We construct the proposed CIDep, along with the simpler CIML and CILogit, all at the nominal 95% confidence level. The R code in Appendix A focuses on CIDep. We then summarize the resulting CPs for each combination of sample size, dependence level, and censoring scenario in the tables below.

5.4 Dependence Levels and Scenarios without Censoring

We examine three levels of dependence for Weibull-distributed failure times: weak, moderate, and strong. These are measured via the conditional Kendall’s tau, denoted by $\tau_{\theta \mid Z}$, which quantifies the correlation between the two components given the latent frailty $Z$. Concretely, we set $\tau_{\theta \mid Z} = 0.25$ (weak), $\tau_{\theta \mid Z} = 0.50$ (moderate), and $\tau_{\theta \mid Z} = 0.75$ (strong). Throughout this article, we use a Gumbel copula with parameter $\theta \ge 1$. In this copula, the conditional Kendall’s tau is given by $\tau_{\theta \mid Z} = 1 - 1/\theta$. Hence, for example, $\tau_{\theta \mid Z} = 0.25$ implies $\theta \approx 1.33$, $\tau_{\theta \mid Z} = 0.50$ implies $\theta = 2$, and $\tau_{\theta \mid Z} = 0.75$ implies $\theta = 4$. Together with the Weibull parameters $\alpha_1, \alpha_2, \lambda_1, \lambda_2$ and the frailty variance $\eta$, these values define the data-generating processes in all of our simulations.

5.5 Simulations with Complete Data (No Censoring)

To first isolate the effect of dependence on coverage, we consider the no-censoring case (0%). Tables 2–4 show the estimated 95% CPs for various sample sizes and quantile levels $p$. Each table compares the following:

•   CIML—The method from [39], assuming independence,

•   CILogit—Logit-transform CIs without accounting for the copula,

•   CIDep—Our adjusted approach that includes both η and θ in its variance structure.

[Tables 2–4: Estimated 95% coverage probabilities under weak, moderate, and strong dependence with no censoring]

Fig. 1 summarizes the simulation design. Subsequent sections present analogous CP results under 10% and 25% censoring to represent more realistic scenarios with partially observed lifetimes.


Figure 1: Flowchart of the simulation process for evaluating the impact of frailty on system reliability [39]

For the case of weak dependence, $\tau_{\theta \mid Z} = 0.25$, Table 2 shows that all methods produce CPs close to the nominal 95%, even for small sample sizes. At $n = 50$, there are slight deviations below 0.95 for some intervals, but overall, the differences between the traditional CIs [39], the logit-based CIs, and our proposed adjusted intervals are modest. As $n$ increases, the CP across all methods quickly stabilizes near the nominal level, with the adjusted intervals often exhibiting slightly better stability at certain quantiles.

In the case of moderate dependence, $\tau_{\theta \mid Z} = 0.50$, as reported in Table 3, the CPs again remain fairly close to 95%, although small-sample settings ($n = 50$) display mild fluctuations around the nominal target. The traditional CIs [39] and logit-based CIs are generally in line with the adjusted intervals, and any under- or over-coverage tends to be within a few percentage points. As before, increasing the sample size mitigates these minor deviations.

Under strong dependence, $\tau_{\theta \mid Z} = 0.75$ (see Table 4), the CPs continue to cluster around the nominal 95%. Although minor under- or over-coverage can appear for very small $n$, the magnitude of such deviations is not large. The adjusted intervals do show somewhat steadier performance across sample sizes in these high-dependence scenarios, but the traditional and logit-based CIs remain reasonably accurate as well.

Overall, sample size still plays a role in CP performance, with all methods converging toward the nominal 95% as $n$ grows (such as at $n = 500$ or $n = 1000$). The adjusted intervals tend to be slightly more robust under stronger dependence or smaller $n$, although their numerical differences from the traditional and logit-based CIs are generally modest. These results indicate that incorporating both the frailty variance $\eta$ and the copula parameter $\theta$ can offer a small edge in maintaining the nominal CP, particularly in more challenging settings.

5.6 Effect of Censoring on Confidence Interval Coverage

We now evaluate how additional right-censoring affects CPs under the same configurations stated in Section 5.4. Specifically, we apply 10% and 25% censoring and report the results in Tables 5–7 (for 10%) and Tables 8–10 (for 25%).

[Tables 5–10: Estimated 95% coverage probabilities under 10% and 25% censoring across dependence levels]

Under 10% censoring, CPs for smaller samples ($n = 50$) decrease slightly for all methods, particularly at lower quantiles ($p = 0.05$), but remain close to 95%. The traditional CIs [39] and logit-based CIs occasionally dip a bit below the nominal CP with moderate or strong dependence, whereas the corrected intervals tend to stay near 95% even at $n = 50$. As the sample size grows, differences among the methods generally diminish.

Moving to 25% censoring amplifies these differences somewhat, especially with strong dependence ($\tau_{\theta \mid Z} = 0.75$) and small $n$. In such cases, the traditional CIs [39] and logit-based CIs show slightly lower CP, but typically not drastically below 95%. Meanwhile, the corrected CIs maintain CPs closer to the nominal level across the quantiles reported. As $n$ increases beyond 100, the gap again narrows, and all methods converge to near 95%.

In summary, whether censoring is present or not, the adjusted CIs consistently provide CPs near the nominal level, especially in higher-dependence regimes and smaller sample sizes, where even slight improvements can be valuable. Hence, incorporating both the copula parameter θ and the frailty variance η in the variance-covariance structure offers practical benefits for reliability analyses involving censoring and interdependent component failure times.

5.7 Impact of Dependence and Heterogeneity on System Reliability

Next, we investigate how varying the dependence among Weibull-distributed failure times affects both the system reliability and the mean time to failure (MTTF) in a two-component series system. Specifically, we adopt a Gumbel copula parameterized by $\theta \ge 1$, where the Kendall tau is given by $\tau = 1 - 1/\theta$. Thus, as $\theta$ grows, the positive dependence among components becomes stronger. Concretely, we consider dependence levels ranging from $\tau = 0.167$ (weak) up to $\tau = 0.800$ (very strong). For instance, $\tau = 0.167$ corresponds to $\theta \approx 1.2$, whereas $\tau = 0.800$ corresponds to $\theta = 5.0$. We simulate 10,000 independent realizations for each $\tau$ value, aiming to observe how higher correlation influences both the reliability at a specified time horizon and the MTTF under strong dependence scenarios.
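A compact R sketch of this experiment (ours; the frailty is omitted for simplicity, the Weibull marginals and time horizon are illustrative choices, and the copula package is assumed) is:

# Simulate series-system lifetimes under a Gumbel copula for several Kendall
# tau values; estimate reliability at a horizon t0 and the MTTF empirically.
library(copula)
sim_series <- function(tau, n = 10000, alpha = c(1.5, 2), lambda = c(10, 8), t0 = 3) {
  theta <- 1 / (1 - tau)                        # from tau = 1 - 1/theta
  uv <- rCopula(n, gumbelCopula(theta, dim = 2))
  t1 <- qweibull(uv[, 1], alpha[1], lambda[1])  # marginal Weibull lifetimes
  t2 <- qweibull(uv[, 2], alpha[2], lambda[2])
  t_sys <- pmin(t1, t2)                         # a series system fails at the first failure
  c(tau = tau, reliability_t0 = mean(t_sys > t0), MTTF = mean(t_sys))
}
set.seed(42)
t(sapply(c(0.167, 0.5, 0.8), sim_series))       # stronger tau lowers both metrics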

Table 11 summarizes our main results for reliability and MTTF, together with their bootstrap CIs. As expected for a series configuration, increasing dependence leads to lower reliability. Indeed, when τ grows from 0.167 to 0.800, the estimated reliability (at a given time) declines considerably. Likewise, the MTTF decreases, indicating that a higher degree of correlation among component failures makes it more likely that both components fail at earlier times.

[Table 11: Estimated reliability and MTTF, with bootstrap CIs, across dependence levels]

As τ intensifies, both the reliability (at the chosen time) and the MTTF show a marked decrease, which is precisely what one expects in a series system: if one component fails prematurely and the other is highly correlated, both are likely to fail in close succession, thereby shortening the overall system lifetime. The bootstrap CIs also become narrower at higher τ in these simulations, reflecting increased homogeneity when both components tend to fail together.

Overall, these findings highlight the importance of accurately modeling dependence in series systems. Greater positive dependence (higher θ, thus higher τ) substantially lowers reliability and MTTF, consistent with theoretical expectations. We note that, while simpler methods ignoring dependence can underestimate the risk of early failure, our copula-frailty framework more realistically captures correlated failures, underscoring its relevance for reliability studies of high-dependence scenarios.

Future research may consider more complex or higher-dimensional dependence structures (such as multi-component series systems), as well as various censoring schemes, to further assess the stability of reliability estimation in strongly interdependent environments.

6  Illustrative Example with Penalized Estimation

In this section, we illustrate the practical implications of our method by conducting a synthetic case study designed to mimic a challenging real-world scenario. Specifically, we show how strong dependence, small samples, and estimation under a penalized likelihood framework can lead to non-trivial outcomes in practice. Here, we complement our previous discussion (see Section 5.7) on how dependence affects the system reliability and MTTF.

6.1 Rationale and Study Design

We consider a system of two components connected in series, in which each component's time-to-failure follows a Weibull distribution (here taken as an exponential for simplicity), conditional on a gamma-distributed frailty $Z_i$. Dependence between the two failure times is introduced via a Gumbel copula with parameter $\theta$, while the variance parameter of the frailty is $\eta$. To explore a hard-to-estimate case, we nominally set $\lambda_1 = 40$, $\lambda_2 = 20$, $\eta = 0.3$, and $\theta = 2.0$, and hence $\tau = 1 - 1/2 = 0.50$, which would typically generate relatively rare failures (owing to the large $\lambda_j$) and moderately strong dependence. Then, we simulate $n = 3000$ observations, but, in practice, the data can deviate substantially from these nominal parameters; for example, many failures might occur much earlier than anticipated. Also, we introduce a moderate censoring mechanism by drawing censoring times from an exponential distribution with rate 0.001. Although this is intended to be a friendly censoring scheme, the data can still pose considerable challenges if unusually early failures dominate.

6.2 Penalized Likelihood Function

As discussed in Section 4.1, ML estimation may suffer from a degenerate Hessian matrix if the data exhibit strong dependence or the sample contains insufficient information to separate the copula and frailty parameters. To mitigate this degeneracy, we adopt a ridge-type penalization approach [39], adding a term $\alpha \sum_k (\theta_k - m_k)^2$ to the negative log-likelihood function, where $\alpha > 0$ is a small penalty coefficient, and $\theta_k - m_k$ represents the deviation of a log-parameter from some center $m_k$. This penalty injects additional curvature in near-flat directions, thereby helping to produce a non-singular Hessian matrix so that SEs and CIs can be computed.
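A minimal R sketch of this penalty (ours; negll() stands for the unpenalized negative log-likelihood of Section 4.1 on the transformed scale, assumed available) is:

# Ridge-type penalization: shrink each transformed parameter toward a center
# m_k; the quadratic term adds curvature in near-flat likelihood directions.
penalized_negll <- function(par, centers, penalty = 0.1) {
  negll(par) + penalty * sum((par - centers)^2)  # alpha * sum_k (theta_k - m_k)^2
}
# Usage: optim(start, penalized_negll, centers = start, penalty = 0.1,
#              method = "BFGS", hessian = TRUE)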

6.3 Results and Observations

Fig. 2 sketches the simulation and estimation process. In one representative synthetic dataset, consider the following:

•   We nominally set $(\lambda_1, \lambda_2, \eta, \theta) = (40, 20, 0.3, 2)$.

•   The data, however, had a far larger proportion of early failures than expected, with a final cause distribution of roughly $(0.4\%, 35.0\%, 64.6\%)$ for {censored, cause 1, cause 2}. Such a preponderance of short failure times can starkly contradict initial assumptions.

•   Fitting via the ML method without penalization yielded a nearly singular Hessian matrix, preventing the computation of SEs and CIs.

•   After imposing a moderate ridge penalty ($\alpha = 0.1$), the Hessian matrix became invertible. Our final estimates were $\hat{\lambda}_1 \approx 1.03$, $\hat{\lambda}_2 \approx 1.10$, $\hat{\eta} \approx 1.02$, and $\hat{\theta} \approx 2.01$. Although they deviate substantially from the nominal values, we were able to quantify the large uncertainty (SEs of about $\pm 2$ on the log-scale), resulting in feasible albeit broad CIs.


Figure 2: Flowchart illustrating penalized estimation in the challenging-data scenario. Without a penalty, the Hessian matrix may degenerate, preventing SE computation. Adding a ridge penalty $\alpha > 0$ ensures an invertible Hessian matrix but can introduce bias. Either outcome signals that the data deviate substantially from nominal assumptions

6.4 Interpretation, Relevance, and Role of α

Although the final estimates deviate sharply from $(40, 20, 0.3, 2)$, they highlight a realistic challenge: if the observed data differ markedly from initial assumptions, naive ML estimation struggles to fit a multi-parameter frailty-copula model. Penalization enforces added curvature, preventing Hessian matrix degeneracy and enabling valid inference. However, the strength of $\alpha$ strongly influences the result, as follows:

•   Large values of α can stabilize the estimation by shrinking parameter estimates toward nominal centers, often yielding small SEs but potentially larger bias.

•   Small values of α give the data more weight but can cause near-singular Hessian matrices if the information is insufficient to separate frailty and copula effects, leading to extremely large uncertainties or unstable estimates.

In a genuine industrial or engineering scenario, such results might appear if a system nominally expected to exhibit few failures (large $\lambda_j$) instead experiences multiple early failures due to unforeseen stressors. Our penalized approach can at least provide the following:

•   Finite SEs to gauge how severely the data deviate from the nominal scenario.

•   A strong signal that model assumptions may be misaligned (as evidenced by substantial parameter shifts and wide CIs).

Hence, from a reliability engineering viewpoint, even sophisticated models can become unstable if the data are insufficient or unrepresentative. Thus, incorporating penalization or prior knowledge helps to ensure a basic level of inferential robustness.

6.5 Link to Real Applications

Although we do not present a real industrial dataset, this synthetic, real-like example is crafted to reflect high dependence and limited effective sample information. Practitioners commonly face similarly uncooperative data (sparse, heavily censored, or dominated by unanticipated early failures). Our penalized estimation strategy (frailty + copula + ridge) maintains a non-degenerate Hessian matrix, yielding SEs and CIs at the cost of potential bias. In realistic contexts, engineers may combine these estimates with domain expertise, covariate adjustments, or additional reliability testing to ease identifiability concerns. Overall, this synthetic, real-like study underlines the importance of robust or penalized approaches under strong dependence and sparse data. They avert numerical degeneracies and elucidate the degree of uncertainty, even if the ultimate parameter estimates deviate profoundly from nominal values.

7  Conclusions

In this article, we developed confidence intervals for dependent failure times in a two-component system, using Weibull distributed failure times combined with gamma distributed frailty and copula-based dependence. By explicitly including the copula parameter in the variance-covariance structure, our adjusted confidence intervals provide robust coverage probability under small samples and strong dependence—conditions where traditional methods can deviate from the nominal level.

Numerical experiments revealed that, while the traditional confidence intervals often remain close to their nominal coverage probability under moderate dependence or larger samples, they may underestimate variability in more challenging scenarios. In contrast, we found the proposed confidence intervals consistently maintain near-nominal coverage probability, even with up to 25% censoring. These findings underscore the practical gains from fully modeling both observed and unobserved heterogeneity, especially in high-dependence or data-sparse applications.

We further illustrated our analysis in a penalized estimation study with a synthetic real-like example, where we simulated data under nominal parameters $(\lambda_1, \lambda_2, \eta, \theta)$ but observed an unexpected pattern of early failures. This real-like scenario demonstrated how the maximum likelihood method can fail to produce invertible Hessian matrices, whereas a ridge-penalized approach successfully delivered finite standard errors—even though the resulting parameter estimates were substantially biased. Such findings confirm the importance of penalization or other stabilizing strategies whenever strong dependence and small samples are identified.

Although our study focused on two-component systems connected in series, extending it to higher-dimensional settings is feasible but will face additional identifiability and computational complexities. Likewise, while we employed the Weibull distribution for its flexibility and frequent use in reliability contexts, alternative baselines (such as the exponential, gamma, or Birnbaum-Saunders distributions [43]) could be explored if data or expert knowledge suggest different hazard shapes. Moreover, although we illustrated the Gumbel copula for positive dependence, our framework accommodates other families, especially those capturing distinct tail behaviors, to match real-world failure patterns more closely.

Despite the strengths identified in our study, certain limitations remain. Larger systems may require stronger identifiability constraints or Bayesian/regularization strategies to separate frailty from copula effects reliably.

Further research could also examine semiparametric baselines and alternative censoring schemes [44,45]. Nonetheless, we believe that the proposed adjusted confidence intervals, by integrating both frailty and direct copula dependence, constitute a valuable tool for practical reliability analyses where correlated failures and small samples pose challenges, and that penalized variants of our estimation strategy offer a pragmatic fallback in highly demanding scenarios.

Acknowledgement: The authors thank the editors and anonymous reviewers for their insightful and constructive feedback. Their detailed comments and thoughtful suggestions improved the clarity, completeness, and theoretical foundation of this article. We sincerely appreciate the time and effort they dedicated to reviewing our work.

Funding Statement: The research of O. E. Bru-Cordero was supported by the Colombian government through COLCIENCIA scholarships, National Doctoral Program, Call 727 of 2015. C. Castro gratefully acknowledges partial financial support from the Centro de Matemática da Universidade do Minho (CMAT/UM), through UID/00013. V. Leiva acknowledges funding from the Agencia Nacional de Investigación y Desarrollo (ANID) of the Chilean Ministry of Science, Technology, Knowledge and Innovation, through FONDECYT project grant 1200525.

Author Contributions: All authors contributed to the conceptualization, methodology, and investigation of this research. Osnamir E. Bru-Cordero focused on the theoretical framework for frailty models and copula-based dependence structures, under the scientific supervision of Mario C. Jaramillo-Elorza and Víctor Leiva. Cecilia Castro led the overarching review and editing process, performed additional analyses, restructured the manuscript, and coordinated revisions in response to feedback. Víctor Leiva led the writing of the initial draft and subsequent revisions until the final submission. Mario C. Jaramillo-Elorza and Víctor Leiva supervised the overall study design, including the simulation scenarios and the critical review of the manuscript. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: The codes and data are available upon request from the authors.

Ethics Approval: Not applicable. The research does not involve human or animal subjects.

Conflicts of Interest: The authors declare no conflicts of interest to report regarding the present study.

Appendix A R Script for Penalized Estimation and CIDep

This appendix provides an illustrative R script implementing data generation via a gamma-frailty Gumbel-copula model, quasi-Newton parameter estimation with a mild ridge penalty when the Hessian is nearly singular, and the construction of CIDep. For brevity, the code omits the routines for CIML and CILogit, which can be coded analogously.
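A minimal sketch of such a script is given next. It is illustrative rather than a definitive implementation: we assume a common known Weibull shape k, a fixed mission time t0 = 1, and a ridge weight of 0.01, and, as a simplification of the full model, we apply the Gumbel copula directly to the frailty-integrated Weibull margins. All identifiers are ours, and the copula and numDeriv packages are required.

## Minimal sketch: gamma-frailty Gumbel-copula data generation, BFGS
## estimation with a ridge fallback, and the CIDep interval.
library(copula)    # gumbelCopula(), rCopula(), pCopula(), dCopula()
library(numDeriv)  # grad() for the delta method

set.seed(2025)
n  <- 50            # small sample, the regime where CIDep matters most
k  <- 1.5           # assumed common Weibull shape (treated as known)
t0 <- 1.0           # assumed mission time for the system reliability R(t0)
lam1.t <- 0.8; lam2.t <- 1.2; eta.t <- 0.5; theta.t <- 2.0  # nominal values

## Data generation: shared gamma frailty (mean 1, variance eta) plus
## Gumbel-copula dependence between the conditional survival probabilities
z  <- rgamma(n, shape = 1/eta.t, rate = 1/eta.t)
uu <- rCopula(n, gumbelCopula(theta.t, dim = 2))
## Conditional on Z = z, S_j(t|z) = exp(-z lam_j t^k); invert at uu[, j]
t1 <- (-log(uu[, 1]) / (z * lam1.t))^(1/k)
t2 <- (-log(uu[, 2]) / (z * lam2.t))^(1/k)

## Frailty-integrated Weibull margins: S_j(t) = (1 + eta lam_j t^k)^(-1/eta)
Smarg <- function(t, lam, eta) (1 + eta * lam * t^k)^(-1/eta)
fmarg <- function(t, lam, eta) lam * k * t^(k - 1) * (1 + eta * lam * t^k)^(-1/eta - 1)

## Negative log-likelihood on the unconstrained scale:
## par = (log lam1, log lam2, log eta, log(theta - 1))
nll <- function(par) {
  lam1 <- exp(par[1]); lam2 <- exp(par[2])
  eta  <- exp(par[3]); theta <- 1 + exp(par[4])
  u1 <- Smarg(t1, lam1, eta); u2 <- Smarg(t2, lam2, eta)
  ll <- sum(log(dCopula(cbind(u1, u2), gumbelCopula(theta, dim = 2))) +
            log(fmarg(t1, lam1, eta)) + log(fmarg(t2, lam2, eta)))
  if (!is.finite(ll)) return(1e10)   # guard against numerical overflow
  -ll
}

## Quasi-Newton fit; refit with a mild ridge penalty if the Hessian is
## nearly singular, so that a finite variance-covariance matrix exists
start <- rep(0, 4)
fit <- optim(start, nll, method = "BFGS", hessian = TRUE)
H <- fit$hessian
if (rcond(H) < 1e-8) {
  ridge <- 1e-2                      # assumed mild penalty weight
  pnll  <- function(par) nll(par) + 0.5 * ridge * sum((par - start)^2)
  fit <- optim(start, pnll, method = "BFGS", hessian = TRUE)
  H <- fit$hessian
}
V <- solve(H)   # variance-covariance matrix, copula parameter included

## CIDep: delta-method interval for the series reliability
## R(t0) = C_theta(S1(t0), S2(t0)), truncated to [0, 1]
Rsys <- function(par) {
  lam1 <- exp(par[1]); lam2 <- exp(par[2])
  eta  <- exp(par[3]); theta <- 1 + exp(par[4])
  pCopula(c(Smarg(t0, lam1, eta), Smarg(t0, lam2, eta)),
          gumbelCopula(theta, dim = 2))
}
Rhat <- Rsys(fit$par)
g    <- numDeriv::grad(Rsys, fit$par)
se   <- sqrt(drop(t(g) %*% V %*% g))
CIDep <- pmin(pmax(Rhat + c(-1, 1) * qnorm(0.975) * se, 0), 1)
cat(sprintf("R(t0) = %.3f, 95%% CIDep = [%.3f, %.3f]\n", Rhat, CIDep[1], CIDep[2]))

In this sketch, the ridge refit is triggered only when the reciprocal condition number of the Hessian falls below a threshold; larger ridge weights stabilize the Hessian at the cost of additional bias, mirroring the bias-variance trade-off reported above.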


References

1. Li H, Peng W, Adumene S, Yazdi M. Cutting edge research topics on system safety, reliability, maintainability, and resilience of energy-critical infrastructures. In: Intelligent reliability and maintainability of energy infrastructure assets. Cham, Switzerland: Springer; 2023. p. 25–38. doi:10.1007/978-3-031-29962-9_2. [Google Scholar] [CrossRef]

2. Osei-Kyei R, Almeida LM, Ampratwum G, Tam V. Systematic review of critical infrastructure resilience indicators. Constr Innov. 2023;23(5):1210–31. doi:10.1108/ci-03-2021-0047. [Google Scholar] [CrossRef]

3. Li Y, Bai X, Shi S, Wang S. Dynamic fatigue reliability analysis of transmission gear considering failure dependence. Comput Model Eng Sci. 2022;130(2):1077–92. doi:10.32604/cmes.2022.018181. [Google Scholar] [CrossRef]

4. Livingston D, Sivaram V, Freeman M, Fiege M. Applying blockchain technology to electric power systems. New York, NY, USA: Council on Foreign Relations; 2022. [Google Scholar]

5. Giraldo R, Leiva V, Christakos G. Leverage and Cook distance in regression with geostatistical data: methodology, simulation, and applications related to geographical information. Int J Geogr Inf Sci. 2023;37(3):607–33. doi:10.1080/13658816.2022.2131790. [Google Scholar] [CrossRef]

6. Aven T, Jensen U. Stochastic failure models. In: Stochastic models in reliability. New York, NY, USA: Springer; 2013. p. 57–103. [Google Scholar]

7. Wagner N. Comparing the complexity and efficiency of composable modeling techniques for multi-scale and multi-domain complex system modeling and simulation applications: a probabilistic analysis. Systems. 2024;12(3):96. doi:10.3390/systems12030096. [Google Scholar] [CrossRef]

8. Qiu J, Sun H, Wang S. Research progress of reverse Monte Carlo and its application in Josephson junction barrier layer. Comput Model Eng Sci. 2023;137(3):2077–109. doi:10.32604/cmes.2023.027353. [Google Scholar] [CrossRef]

9. Wang C. Structural reliability and time-dependent reliability. Cham, Switzerland: Springer; 2021. [Google Scholar]

10. Guiraud P, Leiva V, Fierro R. A non-central version of the Birnbaum-Saunders distribution for reliability analysis. IEEE Trans Reliab. 2009;58(1):152–60. doi:10.1109/tr.2008.2011869. [Google Scholar] [CrossRef]

11. Netes V, Sharov V. Common cause failures in communication networks. In: Proceedings of the 2024 Conference on Systems of Signals Generating and Processing in the Field of on Board Communications; 2024 Mar 12–14; Moscow, Russian Federation. p. 1–7. [Google Scholar]

12. Meeker WQ, Escobar LA, Pascual FG. Statistical methods for reliability data. New York, NY, USA: Wiley; 2022. [Google Scholar]

13. Octavina S, Lin SW, Yeh RH. Copula-based Bayesian reliability assessment for series systems. Qual Reliab Eng Int. 2024;40(5):2444–55. doi:10.1002/qre.3514. [Google Scholar] [CrossRef]

14. Abuelamayem O. Reliability estimation of a multicomponent stress-strength model based on copula function under progressive first failure censoring. Stat Optim Inf Comput. 2024;12(6):1601–11. doi:10.19139/soic-2310-5070-1894. [Google Scholar] [CrossRef]

15. Zheng M, Klein JP. Estimates of marginal survival for dependent competing risks based on an assumed copula. Biometrika. 1995;82(1):127–38. doi:10.1093/biomet/82.1.127. [Google Scholar] [CrossRef]

16. Barlow R, Proschan F. Statistical theory of reliability and life testing: probability models. New York, NY, USA: Wiley; 1975. [Google Scholar]

17. Navarro J, Durante F. Copula-based representations for the reliability of the residual lifetimes of coherent systems with dependent components. J Multivar Anal. 2017;158:87–102. doi:10.1016/j.jmva.2017.04.003. [Google Scholar] [CrossRef]

18. Nelsen RB. An introduction to copulas. New York, NY, USA: Springer; 2006. [Google Scholar]

19. Leiva V, dos Santos RA, Saulo H, Marchant C, Lio Y. Bootstrap control charts for quantiles based on log-symmetric distributions with applications to the monitoring of reliability data. Qual Reliab Eng Int. 2023;39(1):1–24. doi:10.1002/qre.3072. [Google Scholar] [CrossRef]

20. Aalen O, Borgan O, Gjessing H. Survival and event history analysis: a process point of view. New York, NY, USA: Springer; 2008. [Google Scholar]

21. Mokkink LB, de Vet H, Diemeer S, Eekhout I. Sample size recommendations for studies on reliability and measurement error: an online application based on simulation studies. Health Serv Outcomes Res Methodol. 2023;23(3):241–65. doi:10.1007/s10742-022-00293-9. [Google Scholar] [CrossRef]

22. Brown B, Liu B, McIntyre S, Revie M. Reliability evaluation of repairable systems considering component heterogeneity using frailty model. Proc Inst Mech Eng O J Risk Reliab. 2023;237(4):654–70. doi:10.1177/1748006x221109341. [Google Scholar] [CrossRef]

23. Gorfine M, Zucker DM. Shared frailty methods for complex survival data: a review of recent advances. Annu Rev Stat Appl. 2023;10(1):51–73. doi:10.1146/annurev-statistics-032921-021310. [Google Scholar] [CrossRef]

24. Wang YC, Emura T, Fan TH, Lo SM, Wilke RA. Likelihood-based inference for a frailty-copula model based on competing risks failure time data. Qual Reliab Eng Int. 2020;36(5):1622–38. doi:10.1002/qre.2650. [Google Scholar] [CrossRef]

25. Leao J, Leiva V, Saulo H, Tomazella V. Incorporation of frailties into a cure rate regression model and its diagnostics and application to melanoma data. Stat Med. 2018;37(29):4421–40. doi:10.1002/sim.7929. [Google Scholar] [PubMed] [CrossRef]

26. Calsavara VF, Tomazella V, Fogo JC. The effect of frailty term in the standard mixture model. Chil J Stat. 2013;4(2):95–109. [Google Scholar]

27. Wang YC, Emura T. Multivariate failure time distributions derived from shared frailty and copulas. Jpn J Stat Data Sci. 2021;4(2):1105–31. doi:10.1007/s42081-021-00123-1. [Google Scholar] [CrossRef]

28. Rouzbahani M, Akhoond MR, Chinipardaz R. A new bivariate survival model with a cured fraction: a mixed Poisson frailty-copula approach. Jpn J Stat Data Sci. 2025;11(4):261. doi:10.1007/s42081-023-00240-z. [Google Scholar] [CrossRef]

29. Liu X. Planning of accelerated life tests with dependent failure modes based on a gamma frailty model. Technometrics. 2012;54(4):398–409. doi:10.1080/00401706.2012.707579. [Google Scholar] [CrossRef]

30. Lu JC, Bhattacharyya GK. Some new constructions of bivariate Weibull models. Ann Inst Stat Math. 1990;42(3):543–59. doi:10.1007/bf00049307. [Google Scholar] [CrossRef]

31. Murthy DP, Xie M, Jiang R. Weibull models. New York, NY, USA: Wiley; 2004. [Google Scholar]

32. Escarela G, Carriere J. Fitting competing risks with an assumed copula. Stat Methods Med Res. 2003;12(4):333–49. doi:10.1191/0962280203sm335ra. [Google Scholar] [PubMed] [CrossRef]

33. Duchateau L, Janssen P. The frailty model. New York, NY, USA: Springer; 2007. [Google Scholar]

34. Ibrahim JG, Chen MH, Sinha D. Bayesian survival analysis. Hoboken, NJ, USA: Wiley; 2014. [Google Scholar]

35. Tableman M, Kim JS. Survival analysis using S: analysis of time-to-event data. Boca Raton, FL, USA: CRC Press; 2003. [Google Scholar]

36. Hougaard P. Analysis of multivariate survival data. New York, NY, USA: Springer; 2012. [Google Scholar]

37. Lin H, Zelterman D. Modeling survival data: extending the Cox model. New York, NY, USA: Taylor and Francis; 2002. [Google Scholar]

38. Wienke A. Frailty models in survival analysis. Boca Raton, FL, USA: Chapman & Hall/CRC; 2010. [Google Scholar]

39. Hong Y, Meeker WQ. Confidence intervals for system reliability and applications to competing risks models. Lifetime Data Anal. 2014;20(2):161–84. doi:10.1007/s10985-013-9245-9. [Google Scholar] [PubMed] [CrossRef]

40. Vallejos CA, Steel MF. Incorporating unobserved heterogeneity in Weibull survival models: a Bayesian approach. Econom Stat. 2017;3(2):73–88. doi:10.1016/j.ecosta.2017.01.005. [Google Scholar] [CrossRef]

41. Emura T, Nakatochi M, Murotani K, Rondeau V. A joint frailty-copula model between tumour progression and death for meta-analysis. Stat Methods Med Res. 2017;26(6):2649–66. doi:10.1177/0962280215604510. [Google Scholar] [PubMed] [CrossRef]

42. Tsiatis A. A nonidentifiability aspect of the problem of competing risks. Proc Natl Acad Sci USA. 1975;72(1):20–2. doi:10.1073/pnas.72.1.20. [Google Scholar] [PubMed] [CrossRef]

43. Leiva V, Castro C, Vila R, Saulo H. Unveiling patterns and trends in research on cumulative damage models for statistical and reliability analyses: bibliometric and thematic explorations with data analytics. Chil J Stat. 2024;15:81–109. [Google Scholar]

44. Barros M, Leiva V, Ospina R, Tsuyuguchi A. Goodness-of-fit tests for the Birnbaum-Saunders distribution with censored reliability data. IEEE Trans Reliab. 2014;63(2):543–54. doi:10.1109/tr.2014.2313707. [Google Scholar] [CrossRef]

45. Cárcamo E, Marchant C, Ibacache-Pulgar G, Leiva V. Birnbaum-Saunders semi-parametric additive modeling: estimation, smoothing, diagnostics and application. REVSTAT Stat J. 2024;22:211–37. doi:10.57805/revstat.v22i2.483. [Google Scholar] [CrossRef]




Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.