Similar Documents
20 similar documents found.
1.
It is important for educational planners to estimate the likelihood and time-scale of graduation of students enrolled on a curriculum. The particular case we are concerned with emerges when studies are not completed in the prescribed interval of time. Under these circumstances we use a framework of survival analysis, applied to lifetime-type educational data, to examine the distribution of the duration of undergraduate studies for 10,313 students enrolled in a Greek university during ten consecutive academic years. Non-parametric and parametric survival models have been developed for handling this distribution, as well as a modified procedure for testing the goodness of fit of the models. Data censoring was taken into account in the statistical analysis, and the problems of thresholding of graduation and of perpetual students are also addressed. We found that the proposed parametric model adequately describes the empirical distribution provided by non-parametric estimation. We also found a significant difference between the durations of studies of men and women students. The proposed methodology could be useful for analysing data from any other type and level of education, or general lifetime data with similar characteristics.
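As a rough illustration of this type of analysis (not the authors' actual code), the sketch below fits a non-parametric and a simple parametric survival model to censored study-duration data in Python with the lifelines package; the file name and the column names years_enrolled, graduated, and sex are hypothetical.

```python
# Hypothetical sketch: Kaplan-Meier and Weibull fits to censored study-duration
# data, plus a log-rank comparison of men and women. Data layout is assumed.
import pandas as pd
from lifelines import KaplanMeierFitter, WeibullFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("study_durations.csv")    # one row per student (assumed file)

km = KaplanMeierFitter()                   # non-parametric estimate
km.fit(df["years_enrolled"], event_observed=df["graduated"])

wb = WeibullFitter()                       # a simple parametric alternative;
wb.fit(df["years_enrolled"],               # the paper's parametric model may differ
       event_observed=df["graduated"])
wb.print_summary()

men, women = df[df["sex"] == "M"], df[df["sex"] == "F"]
res = logrank_test(men["years_enrolled"], women["years_enrolled"],
                   event_observed_A=men["graduated"],
                   event_observed_B=women["graduated"])
print(res.p_value)                         # sex difference in duration of studies
```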

2.
Cox's (1972) proportional hazards failure time model, already widely used in the analysis of clinical trials, also provides an elegant formalization of the epidemiologic concept of relative risk. When used to compare the disease experience of a study cohort with that of an external control population, it generalizes the notions of the standardized morbidity ratio (SMR) and the proportional morbidity ratio (PMR). For studies in which matched sets of cases and controls are sampled retrospectively from the population at risk, the model provides a flexible tool for the regression analysis of multiple risk factors.
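For reference, the proportional hazards model mentioned here specifies the hazard for covariate vector $x$ as a baseline hazard multiplied by a log-linear term, so that the relative risk between two covariate patterns reduces to a ratio of the exponential factors:

\[ \lambda(t \mid x) = \lambda_0(t)\, e^{\beta^\top x}, \qquad \frac{\lambda(t \mid x_1)}{\lambda(t \mid x_2)} = e^{\beta^\top (x_1 - x_2)}. \]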

3.
In this article, we formulate a semiparametric model for counting processes in which the effect of covariates is to transform the time scale for a baseline rate function. We assume an arbitrary dependence structure for the counting process and propose a class of estimating equations for the regression parameters. Asymptotic results for these estimators are derived. In addition, goodness of fit methods for assessing the adequacy of the accelerated rates model are proposed. The finite-sample behavior of the proposed methods is examined in simulation studies, and data from a chronic granulomatous disease study are used to illustrate the methodology.
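One common way to formalize the idea that covariates transform the time scale of a baseline rate function (the article's exact specification may differ) is an accelerated rates model for the counting process $N(t)$ with covariates $Z$,

\[ E\{N(t) \mid Z\} = \mu_0\!\left(t\, e^{\beta^\top Z}\right), \]

where $\mu_0$ is an unspecified baseline cumulative rate function and $\beta$ is the regression parameter estimated from the proposed estimating equations.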

4.
It has been approximately 30 years since D.R. Cox introduced the proportional hazards method to model the relationship between covariates and survival time. However, the proportional hazards model has limited value when the proportionality assumption is violated. Over the years, there have been many alternative proposals to the proportional hazards regression model for the case of right-censored survival data, but to date none has achieved widespread acceptance. In general, the problems encountered with these methods involve their computational algorithms or the evaluation of their asymptotic properties. In this work, an estimating equation based on a U-statistic of degree 2 is proposed. It is easy to implement, and the U-statistic framework provides a straightforward development of asymptotic inferential theory for the regression parameters.

5.
Previous time series applications of qualitative response models have ignored features of the data, such as conditional heteroscedasticity, that are routinely addressed in the time series econometrics of financial data. This article addresses this issue by adding Markov-switching heteroscedasticity to a dynamic ordered probit model of discrete changes in the bank prime lending rate and estimating it via the Gibbs sampler. The dynamic ordered probit model of Eichengreen, Watson, and Grossman allows for serial autocorrelation in probit analysis of a time series, and this article demonstrates the relative simplicity of estimating a dynamic ordered probit using the Gibbs sampler instead of the Eichengreen et al. maximum likelihood procedure. In addition, the extension to regime-switching parameters and conditional heteroscedasticity is easy to implement under Gibbs sampling. The article compares goodness of fit between dynamic ordered probit models of the prime rate with constant variance and with conditional heteroscedasticity.
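A stylized version of such a model (not necessarily the exact Eichengreen, Watson, and Grossman specification) combines an ordered probit with a latent regime $S_t$ governing the error variance,

\[ y_t^* = x_t^\top \beta + \varepsilon_t, \qquad \varepsilon_t \sim N\big(0, \sigma^2_{S_t}\big), \qquad y_t = j \ \text{ if } \ \tau_{j-1} < y_t^* \le \tau_j, \]

where $S_t$ follows a Markov chain and the dynamic component enters through serially correlated latent errors or lagged terms; the Gibbs sampler alternates between drawing the latent $y_t^*$, the regimes $S_t$, and the model parameters.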

6.
In Bayesian model selection or testing problems one cannot utilize standard or default noninformative priors, since these priors are typically improper and are defined only up to arbitrary constants. Therefore, Bayes factors and posterior probabilities are not well defined under these noninformative priors, making Bayesian model selection and testing problems impossible. We derive the intrinsic Bayes factor (IBF) of Berger and Pericchi (1996a, 1996b) for the commonly used models in reliability and survival analysis using an encompassing model. We also derive proper intrinsic priors for these models, whose Bayes factors are asymptotically equivalent to the respective IBFs. We demonstrate our results in three examples.
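Recall that the arithmetic intrinsic Bayes factor of Berger and Pericchi corrects the ill-defined Bayes factor $B^N_{ji}$ computed under the noninformative priors by averaging over minimal training samples $x(\ell)$,

\[ B^{AI}_{ji} = B^N_{ji}(x) \cdot \frac{1}{L} \sum_{\ell=1}^{L} B^N_{ij}\big(x(\ell)\big), \]

so that the arbitrary constants in the improper priors cancel; the intrinsic priors derived in the article are proper priors whose Bayes factors behave asymptotically like this quantity.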

7.
In this article we consider a problem from bone marrow transplant (BMT) studies where there is interest in assessing the effect of haplotype match for donor and patient on overall survival. The BMT study we consider is based on donors and patients that are genotype matched, and this therefore leads to a missing data problem. We show how Aalen's additive risk model can be applied in this setting, with the benefit that the time-varying haplomatch effect can be easily studied. This problem has not been considered before, and the standard approach, where one would use the expectation-maximization (EM) algorithm, cannot be applied for this model because the likelihood is hard to evaluate without additional assumptions. We suggest an approach based on multivariate estimating equations that are solved using a recursive structure. This approach leads to an estimator whose large sample properties can be developed using product-integration theory. Small sample properties are investigated using simulations in a setting that mimics the motivating haplomatch problem.
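Aalen's additive risk model used here specifies the hazard as a sum of time-varying regression functions,

\[ \lambda(t \mid X) = \beta_0(t) + \beta_1(t) X_1 + \cdots + \beta_p(t) X_p, \]

with inference usually carried out on the cumulative coefficients $B_j(t) = \int_0^t \beta_j(s)\,ds$, which is what makes the time-varying haplomatch effect directly visible.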

8.
The present work demonstrates an application of a random effects model for analyzing birth intervals that are clustered into geographical regions. Observations from the same cluster are assumed to be correlated because they usually share certain unobserved characteristics. Ignoring the correlations among the observations may lead to incorrect standard errors of the estimates of the parameters of interest. Besides comparing Cox's proportional hazards model and the random effects model for analyzing geographically clustered time-to-event data, this paper also reports important demographic and socioeconomic factors that may affect the length of birth intervals of Bangladeshi women.
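A common shared-frailty formulation of such a random effects model (the paper's exact specification may differ) multiplies a Cox-type hazard by a cluster-level random effect,

\[ \lambda_{ij}(t) = u_i\, \lambda_0(t)\, e^{\beta^\top x_{ij}}, \qquad u_i \sim \mathrm{Gamma}(1/\theta,\, 1/\theta), \]

where $u_i$ is shared by all women in region $i$ and induces the within-cluster correlation that the standard Cox model ignores.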

9.
Survival studies often collect information about covariates. If these covariates are believed to contain information about the life-times, they may be considered when estimating the underlying life-time distribution. We propose a non-parametric estimator which uses the recorded information about the covariates. Various forms of incomplete data, e.g. right-censored data, are allowed. The estimator is the conditional mean of the true empirical survival function given the observed history, and it is derived using a general filtering formula. Feng & Kurtz (1994) showed that the estimator is the Kaplan–Meier estimator in the case of right-censoring when using the observed life-times and censoring-times as the observed history. We take the same approach as Feng & Kurtz (1994) but in addition we incorporate the recorded information about the covariates in the observed history. Two models are considered and in both cases the Kaplan–Meier estimator is a special case of the estimator. In a simulation study the estimator is compared with the Kaplan–Meier estimator in small samples.

10.
Many late-onset diseases are caused by what appears to be a combination of a genetic predisposition to disease and environmental factors. The use of existing cohort studies provides an opportunity to infer genetic predisposition to disease in a representative sample of a study population, now that many such studies are gathering genetic information on the participants. One complication of using existing cohorts is that subjects may be censored due to death prior to genetic sampling, thereby adding a layer of complexity to the analysis. We develop a statistical framework to infer the parameters of a latent variables model for disease onset. The latent variables model describes the role of genetic and modifiable risk factors on the onset ages of multiple diseases, and accounts for right-censoring of disease onset ages. The framework also allows for missing genetic information by inferring a subject's unknown genotype through appropriately incorporated covariate information. The model is applied to data gathered in the Framingham Heart Study for measuring the effect of different Apo-E genotypes on the occurrence of various cardiovascular disease events.

11.
A representation of sums and differences of terms of the form 2n log n, the lnn function, is introduced to express likelihood-ratio chi-square test statistics in contingency table analysis. This is a concise explicit form for displaying the partitioning of chi-square statistics in accordance with hierarchical models. The lnn representation gives students insight into the construction of test statistics, and assists in relating identical forms under differing model sets. Hierarchies are presented for independence and equi-probability in two-way tables, for symmetry in correlated square tables, for independence-and-homogeneity of two-way responses across levels of a factor, and for mutual independence in three-way tables, along with the relevant partitions of chi-square.
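For example, writing $\operatorname{lnn}(n) = n \ln n$ (with $\operatorname{lnn}(0) = 0$), the likelihood-ratio statistic for independence in an $I \times J$ table with counts $n_{ij}$, margins $n_{i+}, n_{+j}$ and total $n$ becomes a sum and difference of such terms,

\[ G^2 = 2\Big[\sum_{i,j} \operatorname{lnn}(n_{ij}) - \sum_i \operatorname{lnn}(n_{i+}) - \sum_j \operatorname{lnn}(n_{+j}) + \operatorname{lnn}(n)\Big], \]

which is the kind of concise explicit form that makes the hierarchical partitions transparent.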

12.
Models for monotone trends in hazard rates for grouped survival data in stratified populations are introduced, and simple closed-form score statistics for testing the significance of these trends are presented. The test statistics for some of the models under study are shown to be independent of the assumed form of the function which relates the hazard rates to the sets of monotone scores assigned to the time intervals. The procedure is applied to test monotone trends in the recovery rates of erythematous response among skin cancer patients and controls that have been irradiated with an ultraviolet challenge.

13.
I use longitudinal survey data from commercial fishing deckhands in the Alaskan Bering Sea to provide new insights into empirical methods commonly used to estimate compensating wage differentials and the value of statistical life (VSL). The unique setting exploits intertemporal variation in fatality rates and wages within worker-vessel pairs caused by a combination of weather patterns and policy changes, allowing identification of parameters and biases that it has only been possible to speculate about in more general settings. I show that estimation strategies common in the literature produce biased estimates in this setting, and decompose the bias components due to latent worker, establishment, and job-match heterogeneity. The estimates also remove the confounding effects of endogenous job mobility and dynamic labor market search, narrowing a conceptual gap between search-based hedonic wage theory and its empirical applications. I find that workers' marginal aversion to fatal risk falls as risk levels rise, which suggests complementarities in the benefits of public safety policies. Supplementary materials for this article are available online.

14.
We consider the application of Markov chain Monte Carlo (MCMC) estimation methods to random-effects models, and in particular to the family of discrete time survival models. Survival models can be used in many situations in the medical and social sciences, and we illustrate their use through two examples that differ in terms of both substantive area and data structure. A multilevel discrete time survival analysis involves expanding the data set so that the model can be cast as a standard multilevel binary response model. For such models it has been shown that MCMC methods have advantages in terms of reducing estimate bias. However, the data expansion results in very large data sets for which MCMC estimation is often slow and can produce chains that exhibit poor mixing. Any improvement in mixing both speeds up the methods and gives more confidence in the estimates that are produced. The MCMC methodological literature is full of alternative algorithms designed to improve the mixing of chains, and we describe three reparameterization techniques that are easy to implement in available software. We consider two examples of multilevel survival analysis: the incidence of mastitis in dairy cattle and contraceptive use dynamics in Indonesia. For each application we show where the reparameterization techniques can be used and assess their performance.
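As a rough sketch of the data expansion step described above (the pandas implementation and the column names are illustrative, not taken from the paper), each subject's record is expanded into one binary row per discrete time period at risk:

```python
# Hypothetical sketch of the person-period expansion for discrete-time
# survival analysis; column names are illustrative.
import pandas as pd

subjects = pd.DataFrame({
    "id":      [1, 2, 3],
    "cluster": [10, 10, 20],    # higher-level unit (e.g. herd or community)
    "time":    [3, 2, 4],       # discrete period of event or censoring
    "event":   [1, 0, 1],       # 1 = event observed, 0 = censored
})

rows = []
for _, s in subjects.iterrows():
    for t in range(1, int(s["time"]) + 1):
        rows.append({
            "id": s["id"],
            "cluster": s["cluster"],
            "period": t,
            # binary response: 1 only in the final period if the event occurred
            "y": int(t == s["time"] and s["event"] == 1),
        })
person_period = pd.DataFrame(rows)
# 'person_period' can now be fitted as a multilevel binary-response model
# (e.g. logit or cloglog link with random effects for 'cluster'),
# estimated here by MCMC as in the article.
print(person_period)
```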

15.
Murray and Tsiatis (1996) described a weighted survival estimate that incorporates prognostic time-dependent covariate information to increase the efficiency of estimation. We propose a test statistic based on the statistic of Pepe and Fleming (1989, 1991) that incorporates these weighted survival estimates. As in Pepe and Fleming, the test is an integrated weighted difference of two estimated survival curves. This test has been shown to be effective at detecting survival differences in crossing hazards settings where the logrank test performs poorly. This method uses stratified longitudinal covariate information to get more precise estimates of the underlying survival curves when there is censored information, and this leads to more powerful tests. Another important feature of the test is that it remains valid when informative censoring is captured by the incorporated covariate. In this case, the Pepe-Fleming statistic is known to be biased and should not be used. These methods could be useful in clinical trials with heavy censoring that include collection over time of covariates, such as laboratory measurements, that are prognostic of subsequent survival or capture information related to censoring.
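Schematically, the test statistic is an integrated weighted difference of two estimated survival curves, here built from the covariate-adjusted weighted estimates of Murray and Tsiatis,

\[ T = \int_0^{\tau} \hat{w}(t)\,\big\{\hat{S}_1(t) - \hat{S}_2(t)\big\}\, dt, \]

where $\hat{w}(t)$ is a weight function and $\tau$ is the end of follow-up; the normalizing factor needed for the asymptotic distribution is omitted in this sketch.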

16.
Random events such as a production machine breakdown in a manufacturing plant, an equipment failure within a transportation system, a security failure of an information system, or any number of other problems may cause supply chain disruption. Although several researchers have focused on supply chain disruptions and have discussed the measures that companies should use to design better supply chains, or have studied the different ways that could help firms mitigate the consequences of a supply chain disruption, there remains a clear lack of appropriate methods for predicting the time to disruptive events. Based on this need, this paper introduces statistical flowgraph models (SFGMs) for survival analysis in supply chains. SFGMs provide an innovative approach to analyzing time-to-event data. Time-to-event data analysis focuses on modeling waiting times until events of interest occur. SFGMs are useful for reducing multistate models into an equivalent binary-state model. Analysis from an SFGM gives the entire waiting time distribution as well as the system reliability (survivor) and hazard functions for any total or partial waiting time. The end results from an SFGM help to identify the supply chain's strengths and, more importantly, its weaknesses. The results therefore provide valuable decision support for supply chain managers seeking to predict supply chain behavior. Examples presented in this paper clearly demonstrate the applicability of SFGMs to survival analysis in supply chains.

17.
In biomedical research, profiling is now commonly conducted, generating high-dimensional genomic measurements (without loss of generality, say genes). An important analysis objective is to rank genes according to their marginal associations with a disease outcome/phenotype. Clinical covariates, including for example clinical risk factors and environmental exposures, usually exist and need to be properly accounted for. In this study, we propose conducting marginal ranking of genes using a receiver operating characteristic (ROC) based method. This method can accommodate categorical, censored survival, and continuous outcome variables in a very similar manner. Unlike logistic-model-based methods, it does not make very specific model assumptions, which makes it robust. In ranking genes, we account for both the main effects of clinical covariates and their interactions with genes, and we develop multiple diagnostic accuracy improvement measurements. Using simulation studies, we show that the proposed method is effective in that genes associated with the outcome, or whose interactions with covariates are associated with the outcome, receive high rankings. In the data analysis, we observe some differences between the rankings obtained using the proposed method and the logistic-model-based method.
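A drastically simplified illustration of ranking markers by a marginal ROC criterion is sketched below; it ignores the article's covariate adjustment, gene-covariate interactions, censored outcomes, and diagnostic accuracy improvement measures, and uses simulated data with hypothetical variable names.

```python
# Simplified sketch: rank genes by marginal AUC for a binary outcome.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.normal(size=(n, p))                          # hypothetical gene expressions
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X[:, 0])))  # outcome driven by gene 0

# orientation-free marginal AUC for each gene
auc = np.array([max(roc_auc_score(y, X[:, j]), 1.0 - roc_auc_score(y, X[:, j]))
                for j in range(p)])
ranking = np.argsort(-auc)                           # genes ranked by marginal accuracy
print(ranking[:5])                                   # gene 0 should rank near the top
```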

18.
Using a sample of micro-level data obtained from a survey of a commercial bank in Yantai, this paper empirically studies the factors influencing the prepayment of individual residential mortgage loans in China. The results show that older and more highly educated borrowers are more likely to prepay; the larger the loan amount and the longer the loan term, the higher the probability of prepayment; borrowers with higher down-payment ratios have a higher probability of prepayment; non-local borrowers are more likely to prepay than local borrowers; and the longer the loan has been outstanding, the greater the likelihood of prepayment. The borrower's gender, marital status, household size, industry of employment, and ratio of monthly repayment to household income have no significant effect on prepayment.

19.
The tumor burden (TB) process is postulated to be the primary mechanism through which most anticancer treatments provide benefit. In phase II oncology trials, the biologic effects of a therapeutic agent are often analyzed using conventional endpoints for best response, such as objective response rate and progression-free survival, both of which cause a loss of information. On the other hand, graphical methods including the spider plot and the waterfall plot lack any statistical inference when there is more than one treatment arm. Therefore, longitudinal analysis of TB data is well recognized as a better approach for treatment evaluation. However, the longitudinal TB process suffers from informative missingness because of progression or death. We propose to analyze the treatment effect on tumor growth kinetics using a joint modeling framework that accounts for the informative missingness mechanism. Our approach is illustrated by simulation studies under multiple settings and by an application to a non-small-cell lung cancer data set. The proposed analyses can be performed in early-phase clinical trials to better characterize treatment effect and thereby inform decision-making.
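A generic shared-random-effects joint model of the kind invoked here (the exact tumor-growth-kinetics submodel in the paper may differ) ties the longitudinal tumor burden measurements $Y_i(t)$ and the dropout hazard together through common random effects $b_i$:

\[ Y_i(t) = m_i(t) + \varepsilon_i(t), \qquad m_i(t) = x_i(t)^\top \beta + z_i(t)^\top b_i, \qquad \lambda_i(t) = \lambda_0(t)\, \exp\{\gamma^\top w_i + \alpha\, m_i(t)\}, \]

so that progression or death, the source of informative missingness, is allowed to depend on the underlying tumor burden trajectory $m_i(t)$ through the association parameter $\alpha$.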

20.
Quality of life (QOL) is regarded as a multidimensional entity comprising physical, psychological, social, and medical parameters. QOL is a good prognostic factor for cancer patients. In this article, we want to determine whether QOL is a good surrogate biomarker for the survival time of gastric cancer patients. We conducted a single-institution trial that examines the QOL of gastric cancer patients receiving different surgical procedures; in this trial, QOL is a longitudinal measurement. The accelerated failure time model can be used to deal with survival data when the proportionality assumption fails to capture the relationship between the survival time and covariates. In this article, similarly to Henderson et al. (2000, 2002), a joint likelihood function combines the likelihood functions of the longitudinal biomarkers and the survival times under the accelerated failure time assumption. We introduce a method, based on the accelerated failure time assumption, that employs a frailty model to identify longitudinal biomarkers or surrogates for a time-to-event outcome. We allow random effects to be present in both the longitudinal biomarker and the underlying survival function: the random effects in the biomarker are introduced via an explicit term, while the random effect in the underlying survival function is introduced through the inclusion of frailty in the model.
