Similar Literature (20 results)
1.
This paper reports on the robustness of four proportional intensity (PI) models for right-censored recurrent failure event data: the Prentice–Williams–Peterson gap-time (PWP-GT), PWP total-time (PWP-TT), Andersen–Gill (AG), and Wei–Lin–Weissfeld (WLW) models. The results help practitioners anticipate the more favorable engineering application domains and select an appropriate PI model. The PWP-GT and AG prove to be the models of choice over ranges of sample sizes, shape parameters, and censoring severity. At the smaller sample size (U=60, with 30 units per class of a two-level covariate), the PWP-GT performs well under moderate right-censoring (Pc≤0.8, i.e. up to 80% of the units censored) and for moderately decreasing, constant, and moderately increasing rates of occurrence of failures (power-law NHPP shape parameter in the range 0.8≤δ≤1.8). At the larger sample size (U=180), the PWP-GT performs well under severe right-censoring (0.8<Pc≤1.0, i.e. up to 100% of the units censored) over the wider shape-parameter range 0.8≤δ≤2.0. The AG model outperforms the PWP-TT and WLW for stationary processes (homogeneous Poisson process, HPP) across the full range of right-censorship (0.0≤Pc≤1.0) and for sample sizes of 60 or more.
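As a rough illustration of the data layouts these models require, the sketch below sets up recurrent-event data for a PWP-GT fit (gap times, stratified by failure order) and an AG fit (counting-process start/stop intervals) using the lifelines library. The simulated data, column names, and rates are illustrative only, not from the paper.

```python
# Sketch: PWP-GT vs. AG data layouts for recurrent failure events.
# Toy simulated data; column names are illustrative.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, CoxTimeVaryingFitter

rng = np.random.default_rng(0)
rows = []
for unit in range(40):
    x = unit % 2                        # two-level covariate
    t, rate = 0.0, 0.1 * (1.5 ** x)     # higher failure rate when x = 1
    while True:
        gap = rng.exponential(1.0 / rate)
        if t + gap > 30.0:              # administrative censoring at 30
            rows.append((unit, t, 30.0, 0, x))
            break
        t += gap
        rows.append((unit, t - gap, t, 1, x))
df = pd.DataFrame(rows, columns=["unit", "start", "stop", "event", "x"])

# PWP-GT: model gap times, stratified by episode number so each
# failure order gets its own baseline intensity.
df["gap"] = df["stop"] - df["start"]
df["episode"] = df.groupby("unit").cumcount() + 1
pwp_gt = CoxPHFitter()
pwp_gt.fit(df[["gap", "event", "x", "episode"]],
           duration_col="gap", event_col="event", strata="episode")
pwp_gt.print_summary()

# AG: common baseline intensity over counting-process (start, stop] rows.
ag = CoxTimeVaryingFitter()
ag.fit(df[["unit", "start", "stop", "event", "x"]],
       id_col="unit", event_col="event",
       start_col="start", stop_col="stop")
ag.print_summary()
```

The PWP-TT layout uses total time in place of gap time with the same stratification; WLW treats each event order as a marginal model.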

2.
We consider P-spline smoothing in a varying coefficient regression model when the response is subject to random right censoring. We introduce two data transformation approaches to construct a synthetic response vector that is used in a penalized least squares optimization problem. We prove the consistency and asymptotic normality of the P-spline estimators for a diverging number of knots and show by simulation studies and real data examples that the combination of a data transformation for censored observations with P-spline smoothing leads to good estimators of the varying coefficient functions.
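A minimal sketch of this pipeline, assuming a Koul–Susarla–Van Ryzin-type synthetic response (uncensored responses inflated by the inverse Kaplan–Meier estimate of the censoring survival function) followed by a penalized B-spline fit. For brevity it fits a single nonparametric mean curve rather than the full varying-coefficient model, and the paper's two transformations may differ in detail.

```python
# Sketch: KSVR-type synthetic response + P-spline fit (illustrative).
import numpy as np
from scipy.interpolate import BSpline

def km_censoring_survival(z, delta):
    """Kaplan-Meier estimate of P(C > t): censorings treated as events."""
    order = np.argsort(z)
    z, d_cens = z[order], 1 - delta[order]   # censoring indicator
    at_risk = len(z) - np.arange(len(z))
    return z, np.cumprod(1.0 - d_cens / at_risk)

def synthetic_response(z, delta):
    zs, surv = km_censoring_survival(z, delta)
    g = np.interp(z, zs, surv, left=1.0)     # G_hat(z) = P(C > z)
    return delta * z / np.clip(g, 1e-8, None)

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 300)
t = np.sin(2 * np.pi * x) + 2 + rng.normal(0, .2, 300)  # true response
c = rng.exponential(4.0, 300)                           # censoring times
z, delta = np.minimum(t, c), (t <= c).astype(float)
y_star = synthetic_response(z, delta)

# P-spline: cubic B-spline basis with a 2nd-order difference penalty.
k, n_interior = 3, 20
knots = np.r_[[0]*k, np.linspace(0, 1, n_interior), [1]*k]
nb = len(knots) - k - 1
B = BSpline(knots, np.eye(nb), k)(x)      # basis matrix (300, nb)
D = np.diff(np.eye(nb), 2, axis=0)        # difference operator
lam = 1.0
beta = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y_star)
fit = B @ beta                            # estimated mean curve
```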

3.
In this paper, an extension of log-symmetric regression models that allows non-informative left- or right-censored observations is addressed. Under such models, the log-lifetime distribution belongs to the symmetric class, and its location and scale parameters are described by semi-parametric functions of explanatory variables whose nonparametric components are approximated using natural cubic splines or P-splines. An iterative process of parameter estimation by the maximum penalized likelihood method is presented. The large-sample properties of the maximum penalized likelihood estimators are studied analytically and by simulation experiments. Diagnostic methods, such as deviance-type residuals and local influence measures, are derived. The R package ssym, which implements the methodology addressed in this paper, is also discussed. The proposed methodology is illustrated by the analysis of a real data set.

4.
On the basis of a doubly censored sample from an exponential lifetime distribution, the problem of predicting the lifetimes of the unfailed items (one-sample prediction), as well as a second independent future sample from the same distribution (two-sample prediction), is addressed in a Bayesian setting. A class of conjugate prior distributions, which includes Jeffreys' prior as a special case, is considered. Explicit expressions for predictive densities and survivals are derived. Assuming squared-error loss, Bayes predictive estimators are obtained in closed form (in particular, the estimator of the number of failures in a specified future time interval is given analytically). Bayes prediction limits and predictive estimators under absolute-error loss can readily be computed using iterative methods. As applications, the total duration time in a life test and the failure time of a k-out-of-n system may be predicted. A numerical example is included as an illustration.
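The flavor of such closed-form results can be shown with the simplest conjugate case (my notation, not the paper's): a Gamma(a, b) prior on the exponential rate λ and a sample with r observed failures and total time on test T give posterior Gamma(a+r, b+T) and predictive survival ((b+T)/(b+T+t))^(a+r) for a future lifetime t.

```python
# Sketch: conjugate Bayes prediction for an exponential lifetime.
# Prior: rate lambda ~ Gamma(a, b).  Data: r failures, total time on
# test T (failure times plus censored running times).  Posterior:
# Gamma(a + r, b + T); predictive survival is a Lomax tail.
import numpy as np

def predictive_survival(t, a, b, r, T):
    return ((b + T) / (b + T + t)) ** (a + r)

def predictive_median(a, b, r, T):
    # Solve S_pred(t) = 1/2 in closed form.
    return (b + T) * (2 ** (1.0 / (a + r)) - 1.0)

# Toy numbers, purely illustrative:
a, b = 1.0, 1.0          # prior hyperparameters
r, T = 7, 153.2          # observed failures and total time on test
print(predictive_survival(np.array([10.0, 50.0]), a, b, r, T))
print(predictive_median(a, b, r, T))
```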

5.
Fault data for integrated circuits manufactured on silicon wafers are usually presented using wafer maps to indicate the spatial distribution of defects. This paper shows how this type of spatial data can be analyzed under the framework of generalized linear models. This provides a systematic method for monitoring the quality of a manufacturing process, and identifying fault sources with assignable causes that may possibly be eliminated with process improvement as a result. We consider models that account for different spatial patterns and, in particular, the observed phenomenon that the faults are distributed non‐uniformly across the wafer. Furthermore, we demonstrate how designed experiments can be used in optimizing the setting of important process parameters. Copyright © 2000 John Wiley & Sons, Ltd.
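A minimal statsmodels sketch of this kind of model: die-level defect counts on a wafer modeled by a Poisson GLM with log link, using radial and angular position as covariates to capture a non-uniform spatial pattern. The grid, covariates, and simulated intensity are illustrative; the paper's spatial terms may differ.

```python
# Sketch: Poisson GLM for wafer-map defect counts (illustrative).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Die centers on a square grid, keeping those inside the wafer.
xs, ys = np.meshgrid(np.linspace(-1, 1, 15), np.linspace(-1, 1, 15))
mask = xs**2 + ys**2 <= 1.0
df = pd.DataFrame({"x": xs[mask], "y": ys[mask]})
df["r"] = np.hypot(df["x"], df["y"])          # radial distance
df["theta"] = np.arctan2(df["y"], df["x"])    # angle

# Simulated counts: defect intensity rising toward the wafer edge.
mu = np.exp(-0.5 + 1.2 * df["r"])
df["defects"] = rng.poisson(mu)

X = sm.add_constant(df[["r"]].assign(cos_t=np.cos(df["theta"]),
                                     sin_t=np.sin(df["theta"])))
res = sm.GLM(df["defects"], X, family=sm.families.Poisson()).fit()
print(res.summary())
```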

6.
The data‐transformation approach and generalized linear modeling both require specification of a transformation prior to deriving the linear predictor (LP). By contrast, response modeling methodology (RMM) requires no such specifications. Furthermore, RMM effectively decouples modeling of the LP from modeling its relationship to the response. It may therefore be of interest to compare LPs obtained by the three approaches. Based on numerical quality problems that have appeared in the literature, these approaches are compared in terms of both the derived structure of the LPs and goodness‐of‐fit statistics. The relative advantages of RMM are discussed. Copyright © 2007 John Wiley & Sons, Ltd.

7.
We consider marginal generalized partially linear single-index models for longitudinal data. A profile generalized estimating equations (GEE)-based approach is proposed to estimate unknown regression parameters. Within a wide range of bandwidths for estimating the nonparametric function, our profile GEE estimator is consistent and asymptotically normal even if the covariance structure is misspecified. Moreover, if the covariance structure is correctly specified, the semiparametric efficiency can be achieved under heteroscedasticity and without distributional assumptions on the covariates. Simulation studies are conducted to evaluate the finite sample performance of the proposed procedure. The proposed methodology is further illustrated through a data analysis.
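The profile step for the single-index part is involved; as a baseline, here is a plain marginal GEE fit in statsmodels illustrating the robustness property the abstract alludes to: point estimates remain close under different working correlations, with sandwich standard errors. Data and formula are illustrative, and the partially linear single-index component is not shown.

```python
# Sketch: marginal GEE for longitudinal data under two working
# correlation structures (illustrative data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_sub, n_rep = 100, 4
df = pd.DataFrame({
    "id": np.repeat(np.arange(n_sub), n_rep),
    "x": rng.normal(size=n_sub * n_rep),
})
u = np.repeat(rng.normal(scale=0.7, size=n_sub), n_rep)  # within-subject
df["y"] = 1.0 + 0.5 * df["x"] + u + rng.normal(size=len(df))

for cov in (sm.cov_struct.Independence(), sm.cov_struct.Exchangeable()):
    res = smf.gee("y ~ x", groups="id", data=df,
                  family=sm.families.Gaussian(), cov_struct=cov).fit()
    print(type(cov).__name__, res.params.values)
```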

8.
Two new classes of nonparametric hazard estimators for censored data are proposed in this paper. One is based on a formula that expresses the hazard rate of interest as the product of the hazard rate of the observable lifetime and the conditional probability of uncensoring. The second class follows presmoothing ideas already used by Cao et al. (J Nonparametr Stat 17:31–56, 2005) for the cumulative hazard function. Asymptotic representations for some estimators in these classes are obtained and used to prove their limit distributions. Finally, a simulation study illustrates the comparative behavior of the estimators studied throughout the paper.
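The first class rests on the identity λ_T(t) = λ_Z(t)·p(t), where Z = min(T, C), λ_Z is the hazard of the observable lifetime, and p(t) = P(δ = 1 | Z = t) is the conditional probability of uncensoring. A minimal kernel sketch of that product form, using plain Nadaraya–Watson smoothers with ad hoc bandwidths rather than the authors' exact estimators:

```python
# Sketch: hazard estimate as (observable hazard) x P(uncensored | Z=t),
# with simple Gaussian-kernel smoothers (bandwidth h is ad hoc).
import numpy as np

def gauss_k(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

def product_hazard(t_grid, z, delta, h=0.5):
    z = np.asarray(z, float)
    lam, p = np.empty_like(t_grid), np.empty_like(t_grid)
    for i, t in enumerate(t_grid):
        w = gauss_k((t - z) / h) / h
        f_z = w.mean()                        # density of Z at t
        s_z = np.mean(z >= t)                 # empirical survival of Z
        lam[i] = f_z / max(s_z, 1e-8)         # hazard of observable Z
        p[i] = np.sum(w * delta) / max(np.sum(w), 1e-8)  # NW estimate
    return lam * p                            # hazard of T

rng = np.random.default_rng(3)
t_true = rng.weibull(1.5, 500) * 2.0
c = rng.exponential(3.0, 500)
z, delta = np.minimum(t_true, c), (t_true <= c).astype(float)
print(product_hazard(np.linspace(0.2, 3.0, 10), z, delta))
```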

9.
The widely used Weibull distribution can be generalized to the q-Weibull distribution. To fill a gap in the existing literature, reliability is studied for the q-Weibull distribution with multiply Type-I censored data, the general form of Type-I censoring. Point estimates and confidence intervals (CIs) are developed for the q-Weibull parameters and for reliability quantities such as the reliability and the remaining lifetime. The maximum likelihood estimates (MLEs) are obtained by transforming the likelihood maximization into an unconstrained optimization problem. The least-squares estimates (LSEs) are obtained by minimizing a single-variable profile error function derived by reducing the original multivariable error function. These improvements make the computation of the point estimates efficient. Concerning the CIs, the asymptotic normality of the log-transformed MLEs is used to guarantee that the intervals fall within the valid parameter ranges. In particular, a closed form for the Fisher information matrix is derived using the missing information principle and is combined with the delta method to construct CIs for the reliability. The bias-corrected and accelerated (BCa) bootstrap method is also applied. A Monte Carlo simulation study is conducted to compare the different point estimates and CIs, and an illustrative example shows the application of the study.
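Under the common parameterization, the q-Weibull survival for q < 2 is R(t) = [1 − (1−q)(t/η)^β]^((2−q)/(1−q)) on its support, with density f(t) = (2−q)βη^(−β) t^(β−1) [1 − (1−q)(t/η)^β]^(1/(1−q)). A minimal sketch of the multiply Type-I censored log-likelihood maximized numerically; the paper's unconstrained reparameterization, profile LSE, and CI constructions are not reproduced, and the data generator below is an ordinary Weibull stand-in.

```python
# Sketch: q-Weibull MLE with multiply Type-I censored data via direct
# numerical optimization (illustrative, not the paper's exact scheme).
import numpy as np
from scipy.optimize import minimize

def qweibull_loglik(params, t, delta):
    """delta = 1 failure, 0 censored at its own inspection time t."""
    q, beta, eta = params
    if not (q < 2 and beta > 0 and eta > 0) or abs(1.0 - q) < 1e-6:
        return -np.inf          # q = 1 (ordinary Weibull limit) excluded
    u = 1.0 - (1.0 - q) * (t / eta) ** beta
    if np.any(u <= 0):
        return -np.inf          # outside the support
    log_f = (np.log(2 - q) + np.log(beta) - beta * np.log(eta)
             + (beta - 1) * np.log(t) + np.log(u) / (1 - q))
    log_R = (2 - q) / (1 - q) * np.log(u)
    return np.sum(delta * log_f + (1 - delta) * log_R)

rng = np.random.default_rng(4)
t_fail = rng.weibull(1.4, 80) * 10       # stand-in failure times
cens = rng.uniform(5, 20, 80)            # unit-specific Type-I limits
t = np.minimum(t_fail, cens)
delta = (t_fail <= cens).astype(float)

res = minimize(lambda p: -qweibull_loglik(p, t, delta),
               x0=[0.9, 1.0, 8.0], method="Nelder-Mead")
print(res.x)   # (q, beta, eta) estimates
```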

10.
The hazard function, also called the risk function or intensity function, is usually used to model survival data or other waiting times, such as unemployment durations. In contrast to the proportional hazards model, the additive risk model assumes that the hazard function is the sum, rather than the product, of the baseline hazard function and a non-negative function of the covariates. We propose to introduce the covariates into the model through a Gamma hazard function, while the baseline hazard function is left unspecified. Following the Bayesian paradigm, we obtain an approximation to the posterior distribution using Markov chain Monte Carlo techniques. Subject-specific survival estimation is also studied. A real example using unemployment data is provided. This work was partially supported by the Spanish Education and Science Council Project PB96-0776.

11.
Transformation errors in linear electronic systems are analyzed that give rise to natural low-frequency noise in such systems, exhibiting the properties of locally uniform random processes. Translated from Metrologiya, No. 10, pp. 10–14, October, 2007.

12.
A method is presented for estimating dispersion effects (DE) from robust design experiments (RDE) with control and noise factors involving censored response data. The method is developed to discern the significance of DE from RDE and is aimed at analyzing multi‐level/multi‐factor experiments. Censored data are imputed by a regression-based imputation technique, assuming that the distribution of lifetime before and after censoring is identical. The residuals are then modeled to identify important DE, assuming that the distribution of the observed random variables of the model is the same with or without censored response data. Finally, the method is demonstrated through a numerical example. Copyright © 2001 John Wiley & Sons, Ltd.
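A minimal sketch of one generic regression-based imputation for right-censored responses (not necessarily the authors' scheme): fit the location model on the current working responses, then replace each censored value with its conditional expectation above the censoring point under normal errors, and iterate. The inverse-Mills-ratio formula and the toy two-level design are illustrative.

```python
# Sketch: regression-based imputation of right-censored responses
# under a normal-error working model (generic, illustrative).
import numpy as np
from scipy import stats

def impute_censored(X, y, censored, n_iter=10):
    """X: (n, p) design with intercept; y: observed values (censoring
    times where censored); censored: boolean mask.  Iterates an OLS fit
    and conditional-mean imputation E[Y | Y > c, x]."""
    y_work = y.astype(float).copy()
    for _ in range(n_iter):
        beta, *_ = np.linalg.lstsq(X, y_work, rcond=None)
        resid = y_work[~censored] - X[~censored] @ beta
        sigma = resid.std(ddof=X.shape[1])   # from uncensored rows only
        mu_c = X[censored] @ beta
        z = (y[censored] - mu_c) / sigma     # standardized cut points
        mills = stats.norm.pdf(z) / np.clip(stats.norm.sf(z), 1e-12, None)
        y_work[censored] = mu_c + sigma * mills   # E[Y | Y > c, x]
    return y_work, beta

rng = np.random.default_rng(5)
X = np.column_stack([np.ones(60), rng.integers(0, 2, 60)])  # 2-level factor
y_true = X @ np.array([10.0, 2.0]) + rng.normal(0, 1.0, 60)
cpoint = 11.5
censored = y_true > cpoint
y_obs = np.where(censored, cpoint, y_true)
y_imp, beta_hat = impute_censored(X, y_obs, censored)
print(beta_hat)
```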

13.
This paper is intended to make researchers in reliability theory aware of a recently introduced Bayesian model with imprecise prior distributions for statistical inference on failure data, which can also be considered a robust Bayesian model. The model consists of a multinomial distribution with Dirichlet priors, making the approach essentially nonparametric. New results for the model are presented for right-censored observations, where estimation based on this model is closely related to the product-limit estimator, an important statistical method for reliability or survival data that include right-censored observations. Like the product-limit estimator, the model considered in this paper aims at using no information other than that provided by the observed data, but it fits into the robust Bayesian context, which has the advantage that all inferences can be based on probabilities or expectations, or on bounds for them. The model uses a finite partition of the time axis and as such is also related to life tables.

14.
Accelerated life testing (ALT) is frequently used in component reliability assessment and acceptance testing. ALT is carried out by exposing units to higher stress levels in order to observe failures sooner than they would occur under normal operating conditions. The simple step-stress model with Type-II censored Weibull lifetimes is studied here, with the lifetimes assumed to satisfy the Khamis-Higgins model. In this paper, Bayesian approaches are developed for estimating the model parameters of this simple step-stress model and for predicting the failure times of future censored units. The main goal of this work consists of two parts. First, Bayesian estimation of the unknown parameters is considered by adopting Devroye's method for sampling from log-concave densities within a sampling-based algorithm, under different loss functions; the Bayes estimates and highest posterior density credible intervals are then established. Second, estimation of the posterior predictive density of the future lifetimes is discussed to obtain point predictions and prediction intervals with a given coverage probability. A Monte Carlo simulation is performed to check the efficiency of the developed procedures, and a real data set is analyzed for illustrative purposes.
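Under the Khamis-Higgins model for a simple step-stress test with stress change at time τ, the cumulative hazard is continuous in t^β: H(t) = λ₁t^β for t < τ and H(t) = λ₁τ^β + λ₂(t^β − τ^β) for t ≥ τ (my notation, and an assumption about the exact form). The paper is Bayesian; as a minimal frequentist baseline, here is the Type-II censored log-likelihood that a posterior sampler would be built on, with toy data.

```python
# Sketch: Khamis-Higgins log-likelihood for a simple step-stress test
# with Type-II censoring (baseline MLE; toy values are illustrative).
import numpy as np
from scipy.optimize import minimize

def kh_cumhaz(t, beta, lam1, lam2, tau):
    tb, taub = t ** beta, tau ** beta
    return np.where(t < tau, lam1 * tb, lam1 * taub + lam2 * (tb - taub))

def kh_loghaz(t, beta, lam1, lam2, tau):
    lam = np.where(t < tau, lam1, lam2)
    return np.log(lam) + np.log(beta) + (beta - 1) * np.log(t)

def neg_loglik(params, t_fail, n, tau):
    beta, lam1, lam2 = np.exp(params)        # log-parameterized > 0
    r = len(t_fail)                          # observed failures (Type-II)
    ll = np.sum(kh_loghaz(t_fail, beta, lam1, lam2, tau)
                - kh_cumhaz(t_fail, beta, lam1, lam2, tau))
    ll -= (n - r) * kh_cumhaz(t_fail.max(), beta, lam1, lam2, tau)
    return -ll

# Toy test: n units, stress raised at tau, stopped at the r-th failure.
tau, n = 5.0, 30
t_fail = np.sort(np.array([1.2, 2.8, 3.9, 4.6, 5.3, 5.9, 6.4, 7.1]))
res = minimize(neg_loglik, x0=np.log([1.0, 0.05, 0.2]),
               args=(t_fail, n, tau), method="Nelder-Mead")
print(np.exp(res.x))   # (beta, lambda1, lambda2) estimates
```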

15.
The two‐parameter Burr XII distribution has been widely used in various practical applications such as business, chemical engineering, quality control, medical research and reliability engineering. In this paper, we present maximum likelihood estimation (MLE) via the expectation–maximization (EM) algorithm to estimate the Burr XII parameters with multiple censored data. We also provide a method that can be used to construct the confidence intervals of the parameters, a method that computes the asymptotic variance and the covariance of the MLE from the complete and missing information matrices. A simulation study is conducted to compare the performance of the MLE via the EM algorithm and the Newton–Raphson (NR) algorithm. The simulation results show that the EM algorithm outperforms the NR algorithm in most cases in terms of bias and root mean square error. A numerical example is also used to demonstrate the performance of the proposed method. Copyright © 2009 John Wiley & Sons, Ltd.
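The Burr XII survival is S(t) = (1 + t^c)^(−k) with density f(t) = c k t^(c−1)(1 + t^c)^(−(k+1)). As a compact baseline to the paper's EM scheme (not a reproduction of it), here is the multiply censored log-likelihood maximized directly; the EM iterations and information-matrix variance formulas are omitted.

```python
# Sketch: direct MLE for the two-parameter Burr XII with multiply
# censored data (the paper's EM algorithm is not reproduced here).
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, t, delta):
    c, k = np.exp(params)                    # keep c, k > 0
    log1p_tc = np.log1p(t ** c)
    log_f = (np.log(c) + np.log(k) + (c - 1) * np.log(t)
             - (k + 1) * log1p_tc)
    log_S = -k * log1p_tc
    return -np.sum(delta * log_f + (1 - delta) * log_S)

rng = np.random.default_rng(6)
# Burr XII variates via inverse CDF: t = ((1-U)^(-1/k) - 1)^(1/c).
c0, k0 = 2.0, 1.5
u = rng.uniform(size=200)
t_true = ((1 - u) ** (-1 / k0) - 1) ** (1 / c0)
cens = rng.uniform(0.3, 3.0, 200)            # unit-specific censor times
t = np.minimum(t_true, cens)
delta = (t_true <= cens).astype(float)

res = minimize(neg_loglik, x0=np.log([1.0, 1.0]),
               args=(t, delta), method="Nelder-Mead")
print(np.exp(res.x))   # (c, k) estimates
```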

16.
Composite scale modeling in the presence of censored data
A composite scale modeling approach can be used to combine several scales or variables into a single scale or variable. A typical application is to combine age and usage into a composite timescale model. The combined scale is expected to have better failure-prediction capability than the individual scales. Two typical models are the linear and the multiplicative model, whose parameters are determined by minimizing the sample coefficient of variation of the composite scale. This minimum-coefficient-of-variation criterion is hard to apply in the presence of censored data. Another open issue is how to identify the key variables when a number of variables are combined. This paper develops methods to handle these two issues. A numerical example illustrates the proposed methods.
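For the linear model, the composite scale is s = age + c·usage, with c chosen to minimize the sample coefficient of variation of s. A minimal sketch of that complete-data criterion (the censored-data extension, which is the paper's contribution, is not shown; data and units are illustrative):

```python
# Sketch: linear composite timescale s = age + c * usage, with the
# weight c minimizing the sample coefficient of variation of s
# (complete failure data only).
import numpy as np
from scipy.optimize import minimize_scalar

def cv_of_composite(c, age, usage):
    s = age + c * usage
    return s.std(ddof=1) / s.mean()

rng = np.random.default_rng(7)
# Toy failure data: usage rate varies across units.
rate = rng.uniform(0.5, 2.0, 50)
age = rng.weibull(2.0, 50) * 1000.0        # hours at failure
usage = rate * age                         # e.g. km accumulated

res = minimize_scalar(cv_of_composite, bounds=(0.0, 10.0),
                      args=(age, usage), method="bounded")
print("weight c:", res.x, " minimum CV:", res.fun)
```

A smaller coefficient of variation on the composite scale indicates tighter clustering of failures, hence the better prediction capability the abstract mentions.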

17.
A numerical method was developed for estimating the shapes of unknown distributions of analytical data and for estimating the expected values of censored data points. The method is based conceptually on the normal probability plot. Data are ordered and then transformed by using a power function to achieve approximate linearity with respect to a computed normal cumulative probability scale. The exponent used in the power transformation is an index of the distribution shape, which covers a continuum on which normality is defined as d = 1 and log normality is defined as d = 0. Expected transformed values of censored points are computed from a straight line fitted to the transformed, accepted data, and these are then back-transformed to the original distribution. The method gives improved characterization of analytical data distributions, particularly in the distribution extremities. It also avoids the biases from improper handling of censored data arising from measurements near the analytical detection limit. Illustrative applications were computed for atmospheric SO2 data and for mineral concentrations in hamburgers.
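A minimal sketch of the plotting-position idea: order the data, compute normal scores, search the shape exponent d (with d = 0 read as the log transform, d = 1 as no transform) for the best straight-line fit on the accepted values, then read expected values for the censored points off the fitted line and back-transform. The plotting positions, Box-Cox-style transform, and search grid are my assumptions, not the paper's exact formulas.

```python
# Sketch: shape exponent d and expected values of left-censored points
# via a power-transformed normal probability plot (illustrative).
import numpy as np
from scipy import stats

def power_tf(x, d):
    return np.log(x) if d == 0 else (x ** d - 1.0) / d

data = np.sort(np.array([0.5, 0.5, 0.5, 0.8, 1.1, 1.3, 1.9,
                         2.6, 3.4, 4.8, 7.2, 11.0]))  # toy; 0.5 = DL
n_cens = 3                  # lowest 3 points censored at detection limit
pp = (np.arange(1, len(data) + 1) - 0.5) / len(data)  # plotting positions
zscores = stats.norm.ppf(pp)
accepted = slice(n_cens, None)

# Grid-search d for maximum linearity on the accepted points.
grid = np.linspace(0.0, 1.0, 101)
best_d = max(grid, key=lambda d: abs(np.corrcoef(
    zscores[accepted], power_tf(data[accepted], d))[0, 1]))

# Fit a line in the transformed scale and impute the censored points.
slope, intercept = np.polyfit(zscores[accepted],
                              power_tf(data[accepted], best_d), 1)
y_cens = intercept + slope * zscores[:n_cens]
imputed = (np.exp(y_cens) if best_d == 0
           else np.clip(best_d * y_cens + 1.0, 1e-9, None)
                ** (1.0 / best_d))
print("d =", best_d, " imputed values:", imputed)
```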

18.
In this paper we study the strong and weak convergence, with rates, of the estimators of the conditional distribution function and the conditional cumulative hazard rate function for a left-truncated and right-censored model. It is assumed that the lifetime observations with multivariate covariates form a stationary ??-mixing sequence. Almost sure representations and the asymptotic normality of the estimators are also established. The finite sample performance of the estimators is investigated via simulations.

19.
Application of a General Linear Model (GLM, “Analysis of Covariance”) to the statistical interpretation of stability data combines the methods of regression and analysis of variance in one common model. Expanding the well-accepted method of linear regression on time, the GLM permits one to include supportive factors that may be either continuous regressors (temperature, humidity, etc.) or class effects (batch number, formulation type, manufacturer, etc.). Using the GLM procedure of SAS as a convenient software tool, the technique is illustrated by several examples. It is concluded that the GLM provides a most suitable approach for the interpretation of stability data.
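The paper's examples use SAS PROC GLM; an equivalent ANCOVA can be sketched in Python's statsmodels (column names and toy assay values are illustrative). Regression on time is combined with batch as a class effect, and the time-by-batch interaction tests whether slopes can be pooled across batches.

```python
# Sketch: ANCOVA for stability data (regression on time plus batch as
# a class effect), mirroring a SAS PROC GLM analysis. Toy data.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "assay": [100.1, 99.6, 98.9, 98.1, 100.4, 99.9, 99.5, 98.8,
              99.8, 99.2, 98.4, 97.5],
    "month": [0, 3, 6, 9] * 3,
    "batch": ["A"] * 4 + ["B"] * 4 + ["C"] * 4,
})

# Full model with batch-specific intercepts and slopes; the ANOVA
# table tests the batch and month:batch (slope-poolability) effects.
fit = smf.ols("assay ~ month + C(batch) + month:C(batch)", data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))
```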

20.
In this paper we introduce an iterative estimation procedure based on conditional modes, suitable for fitting linear models when the errors are known to be unimodal and the dependent data stem from different sources and consequently may be either non-grouped or grouped under different classification criteria. At each step, the procedure imputes the exact values of the grouped data and runs by means of a process similar to the EM algorithm with normal errors. The expectation step is replaced by a mode step that avoids awkward integration with general errors, and the maximisation step is replaced by a natural one that coincides with it only when the error distribution is normal. Notwithstanding these modifications, we prove that, on the one hand, the iterative estimating algorithm converges to a point that is unique and does not depend on the starting values and, on the other hand, the final estimate, being an M-estimator, enjoys good asymptotic stochastic properties such as consistency, boundedness in L2, and limit normality. Research partially funded by the Ministry of Education and Culture, Spain, Grant SEC99-0402.
