Similar Articles
20 similar articles found.
1.
Clinical trials often assess efficacy by comparing treatments on the basis of two or more event‐time outcomes. In the case of cancer clinical trials, progression‐free survival (PFS), which is the minimum of the time from randomization to progression or to death, summarizes the comparison of treatments on the hazards for disease progression and mortality. However, the analysis of PFS does not utilize all the information we have on patients in the trial. First, if both progression and death times are recorded, then information on death time is ignored in the PFS analysis. Second, disease progression is monitored at regular clinic visits, and progression time is recorded as the first visit at which evidence of progression is detected. However, many patients miss or have irregular visits (resulting in interval‐censored data) and sometimes die of the cancer before progression is recorded. In this case, the last progression‐free time could provide additional information on the treatment efficacy. The aim of this paper is to propose a method for comparing treatments that more fully utilizes the data on progression and death. We develop a test for treatment effect based on the joint distribution of progression and survival. The issue of interval censoring is handled using the very simple and intuitive approach of the Conditional Expected Score Test (CEST). We focus on the application of these methods in cancer research. Copyright © 2014 John Wiley & Sons, Ltd.
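As a concrete illustration of the information loss described above, the following minimal sketch derives the usual PFS endpoint from visit-based progression monitoring. The helper `pfs_from_visits` and all values are hypothetical; this is the conventional analysis the paper improves upon, not the proposed CEST.

```python
import numpy as np

def pfs_from_visits(progression_visit, death_time, last_visit):
    """Derive the usual PFS pair (time, event indicator) from visit data.

    progression_visit : time of first visit with detected progression, or None
    death_time        : exact death time, or np.inf if alive at analysis
    last_visit        : last progression-free assessment time
    """
    candidates = []
    if progression_visit is not None:
        candidates.append(progression_visit)
    if np.isfinite(death_time):
        candidates.append(death_time)
    if candidates:
        return min(candidates), 1          # event observed
    return last_visit, 0                   # censored at last assessment

# Example: progression first seen at the month-9 visit, death at month 14.
# PFS keeps only min(9, 14) = 9; the death time itself is discarded, and
# the true progression time is in fact only known to lie in (6, 9].
print(pfs_from_visits(progression_visit=9.0, death_time=14.0, last_visit=6.0))
```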

2.
In many chronic diseases it is important to understand the rate at which patients progress from infection through a series of defined disease states to a clinical outcome, e.g. cirrhosis in hepatitis C virus (HCV)‐infected individuals or AIDS in HIV‐infected individuals. Typically data are obtained from longitudinal studies, which are often observational in nature and in which disease state is observed only at selected examinations throughout follow‐up. Transition times between disease states are therefore interval censored. Multi‐state Markov models are commonly used to analyze such data, but rely on the assumption that the examination times are non‐informative, and hence that the examination process is ignorable in a likelihood‐based analysis. In this paper we develop a Markov model that relaxes this assumption through the premise that the examination process is ignorable only after conditioning on a more regularly observed auxiliary variable. This situation arises in a study of HCV disease progression, where liver biopsies (the examinations) are sparse, irregular, and potentially informative with respect to the transition times. We use additional information on liver function tests (LFTs), commonly collected throughout follow‐up, to inform the current disease state and to justify the assumption of an ignorable examination process. The model developed has a structure similar to that of a hidden Markov model and accommodates both the series of LFT measurements and the partially latent series of disease states. We show through simulation how this model compares with the commonly used ignorable Markov model and with a Markov model that assumes the examination process is non‐ignorable. Copyright © 2010 John Wiley & Sons, Ltd.
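For context, the commonly used ignorable Markov model that the paper relaxes builds the likelihood from transition probabilities between examination times, obtained as a matrix exponential of the intensity matrix. A minimal sketch with illustrative rates (not estimates from the HCV study):

```python
import numpy as np
from scipy.linalg import expm

# Generator of a progressive three-state model (e.g., infected -> cirrhosis
# -> death); the rates q12, q23 are made-up values for illustration.
q12, q23 = 0.10, 0.25
Q = np.array([[-q12,  q12, 0.0],
              [ 0.0, -q23, q23],
              [ 0.0,  0.0, 0.0]])

def transition_matrix(t):
    # Under a time-homogeneous Markov model, P(t) = exp(Qt).
    return expm(Q * t)

# Likelihood contribution for a subject seen in state 1 at t=0, state 1 at
# t=2, and state 2 at t=5: a product of transition probabilities between
# examination times. This is valid only if examinations are non-informative,
# which is exactly the assumption the paper weakens.
contrib = transition_matrix(2.0)[0, 0] * transition_matrix(3.0)[0, 1]
print(contrib)
```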

3.
4.
Interval‐censored data occur naturally in many fields, and their main feature is that the failure time of interest is not observed exactly but is known to fall within some interval. In this paper, we propose a semiparametric probit model for analyzing case 2 interval‐censored data as an alternative to the existing semiparametric models in the literature. Specifically, we propose to approximate the unknown nonparametric nondecreasing function in the probit model with a linear combination of monotone splines, leading to only a finite number of parameters to estimate. Both maximum likelihood and Bayesian estimation methods are proposed. For each method, the regression parameters and the baseline survival function are estimated jointly. The proposed methods make no assumptions about the observation process, are applicable to any interval‐censored data, and are easy to implement. The methods are evaluated by simulation studies and are illustrated by two real‐life interval‐censored data applications. Copyright © 2010 John Wiley & Sons, Ltd.
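A minimal sketch of the case 2 interval-censored probit likelihood: the nondecreasing function alpha(t) is represented by nonnegative weights on a monotone basis (simple ramps below stand in for the paper's monotone splines), and each subject contributes P(L < T <= R) = Phi(alpha(R) + x'beta) - Phi(alpha(L) + x'beta). The data, knots, and fixed intercept are made up.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

# Case 2 interval-censored observations: T is only known to lie in (L, R];
# R = np.inf marks a right-censored subject.
L = np.array([0.5, 1.0, 1.5, 2.0, 0.8, 2.5])
R = np.array([2.0, 3.0, np.inf, 4.0, 1.6, np.inf])
x = np.array([0.0, 1.0, 0.0, 1.0, 1.0, 0.0])

knots = np.array([0.5, 1.5, 2.5])   # arbitrary illustrative knots

def monotone_basis(t):
    # Nondecreasing ramp functions: a crude stand-in for the monotone
    # (I-)splines of the paper; nonnegative weights keep alpha(t) monotone.
    return np.clip(t[:, None] - knots, 0.0, 1.0)

def neg_loglik(params):
    beta, gamma = params[0], np.exp(params[1:])   # exp() => nonnegative
    def Phi(t):
        out = np.ones_like(t)         # alpha(inf) = inf  =>  Phi = 1
        fin = np.isfinite(t)
        eta = -2.0 + monotone_basis(t[fin]) @ gamma + beta * x[fin]
        out[fin] = norm.cdf(eta)      # -2.0: fixed intercept of alpha
        return out
    p = Phi(R) - Phi(L)               # P(L < T <= R | x)
    return -np.sum(np.log(np.maximum(p, 1e-12)))

fit = minimize(neg_loglik, np.zeros(1 + knots.size), method="Nelder-Mead")
print("beta-hat:", fit.x[0])
```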

5.
Methods for analyzing interval‐censored data are well established. Unfortunately, these methods are inappropriate for studies with correlated data. In this paper, we focus on developing a method for analyzing clustered interval‐censored data. Our method is based on Cox's proportional hazards model with a piecewise‐constant baseline hazard function. The correlation structure of the data can be modeled by Clayton's copula or by an independence model with proper adjustment of the covariance estimation. We establish estimating equations for the regression parameters and baseline hazards (and the copula parameter) simultaneously. Simulation results confirm that the point estimators follow a multivariate normal distribution and that our proposed variance estimators are reliable. In particular, we found that the approach with the independence model worked well even when the true correlation model was derived from Clayton's copula. We applied our method to a family‐based cohort study of pandemic H1N1 influenza in Taiwan during 2009–2010. Using the proposed method, we investigated the impact of vaccination and family contacts on the incidence of pH1N1 influenza. Copyright © 2012 John Wiley & Sons, Ltd.
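The key computation behind the copula approach is the joint probability that both cluster members' event times fall in their observed intervals, obtained by inclusion-exclusion on the Clayton joint survival function. A sketch with an illustrative exponential margin standing in for the piecewise-constant-baseline Cox survival function:

```python
import numpy as np

def clayton(u, v, theta):
    # Clayton copula; theta > 0 gives positive within-cluster dependence.
    return (u**(-theta) + v**(-theta) - 1.0)**(-1.0 / theta)

def surv(t, lam=0.2):
    # Illustrative exponential marginal survival function.
    return np.exp(-lam * t)

def pair_interval_prob(L1, R1, L2, R2, theta):
    # P(L1 < T1 <= R1, L2 < T2 <= R2) by inclusion-exclusion on the joint
    # survival function P(T1 > s, T2 > t) = C(S(s), S(t)).
    S = surv
    return (clayton(S(L1), S(L2), theta) - clayton(S(L1), S(R2), theta)
            - clayton(S(R1), S(L2), theta) + clayton(S(R1), S(R2), theta))

print(pair_interval_prob(1.0, 3.0, 2.0, 4.0, theta=1.5))
```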

6.
This paper presents a parametric method of fitting semi‐Markov models with piecewise‐constant hazards in the presence of left, right and interval censoring. We investigate transition intensities in a three‐state illness–death model with no recovery. We relax the Markov assumption by adjusting the intensity for the transition from state 2 (illness) to state 3 (death) for the time spent in state 2, through a time‐varying covariate. This involves the exact time of the transition from state 1 (healthy) to state 2. When the data are subject to left or interval censoring, this time is unknown. In the likelihood, we take interval censoring into account by integrating out all possible times for the transition from state 1 to state 2. For left censoring, we use an Expectation–Maximisation inspired algorithm. A simulation study demonstrates the performance of the method. The proposed combination of statistical procedures provides great flexibility. We illustrate the method in an application using data on stroke onset for the older population from the UK Medical Research Council Cognitive Function and Ageing Study. Copyright © 2012 John Wiley & Sons, Ltd.
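A minimal sketch of the integration step: when the 1-to-2 transition time is interval-censored in (a, b], its possible values are integrated out of the likelihood contribution. Constant intensities below stand in for a single piece of the paper's piecewise-constant model; all rates are illustrative.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative constant intensities (one piece of a piecewise-constant model).
q12, q13, q23 = 0.10, 0.02, 0.30

def density_12(u):
    # Density of leaving state 1 for state 2 at time u
    # (with a competing exit from state 1 directly to state 3).
    return np.exp(-(q12 + q13) * u) * q12

def integrand(u, t_death):
    # Semi-Markov feature: the 2->3 intensity depends on the duration
    # t_death - u spent in state 2; with a constant q23 this reduces to
    # an exponential sojourn in state 2.
    return density_12(u) * np.exp(-q23 * (t_death - u)) * q23

# Subject healthy at the t=2 visit, ill at the t=5 visit, died at t=8:
# the 1->2 transition time is interval-censored in (2, 5], so integrate.
contrib, _ = quad(integrand, 2.0, 5.0, args=(8.0,))
print(contrib)
```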

7.
Interval‐censored data, in which the event time is only known to lie in some time interval, arise commonly in practice, for example, in a medical study in which patients visit clinics or hospitals at prescheduled times and the events of interest occur between visits. Such data are appropriately analyzed using methods that account for this uncertainty in event time measurement. In this paper, we propose a survival tree method for interval‐censored data based on the conditional inference framework. Using Monte Carlo simulations, we find that the tree is effective in uncovering underlying tree structure, performs similarly to an interval‐censored Cox proportional hazards model fit when the true relationship is linear, and performs at least as well as (and in the presence of right‐censoring outperforms) the Cox model when the true relationship is not linear. Further, the interval‐censored tree outperforms survival trees based on imputing the event time as an endpoint or the midpoint of the censoring interval. We illustrate the application of the method on tooth emergence data.
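For reference, the naive competitor that the interval-censored tree is reported to outperform imputes each interval as an exact event time and then applies standard right-censored machinery. A sketch of midpoint imputation followed by a Cox fit, using the lifelines package; the data are made up:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical interval-censored records: event known to lie in (L, R];
# R = np.inf means right-censored at L.
df = pd.DataFrame({
    "L": [1.0, 2.0, 0.5, 3.0, 1.5, 2.5],
    "R": [3.0, 4.0, 2.0, np.inf, 3.5, np.inf],
    "age": [61, 55, 70, 48, 66, 59],
})

# Midpoint imputation: treat the interval midpoint as an exact event time,
# falling back to L with a censoring indicator when R is infinite. This is
# the baseline approach the paper's tree improves upon.
df["time"] = np.where(np.isfinite(df["R"]), (df["L"] + df["R"]) / 2, df["L"])
df["event"] = np.isfinite(df["R"]).astype(int)

cph = CoxPHFitter()
cph.fit(df[["time", "event", "age"]], duration_col="time", event_col="event")
cph.print_summary()
```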

8.
The use of longitudinal measurements to predict a categorical outcome is an increasingly common goal in research studies. Joint models are commonly used to describe two or more models simultaneously by considering the correlated nature of their outcomes and the random error present in the longitudinal measurements. However, there is limited research on joint models with longitudinal predictors and categorical cross‐sectional outcomes. Perhaps the most challenging task is how to model the longitudinal predictor process such that it represents the true biological mechanism that dictates the association with the categorical response. We propose a joint logistic regression and Markov chain model to describe a binary cross‐sectional response, where the unobserved transition rates of a two‐state continuous‐time Markov chain are included as covariates. We use the method of maximum likelihood to estimate the parameters of our model. In a simulation study, coverage probabilities of about 95%, standard deviations close to standard errors, and low biases for the parameter values show that our estimation method is adequate. We apply the proposed joint model to a dataset of patients with traumatic brain injury to describe and predict a 6‐month outcome based on physiological data collected post‐injury and admission characteristics. Our analysis indicates that the information provided by physiological changes over time may help improve prediction of long‐term functional status of these severely ill subjects. Copyright © 2017 John Wiley & Sons, Ltd.
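A minimal sketch of the two building blocks: a two-state continuous-time Markov chain, whose transition matrix over an interval is the matrix exponential of the generator, and a logistic model taking the transition rates (latent and jointly estimated in the paper, fixed here) as covariates. All numbers are illustrative:

```python
import numpy as np
from scipy.linalg import expm

def two_state_generator(lam01, lam10):
    # Generator of a two-state continuous-time Markov chain.
    return np.array([[-lam01,  lam01],
                     [ lam10, -lam10]])

# Subject-level transition rates (illustrative values only).
lam01, lam10 = 0.8, 0.3
P = expm(two_state_generator(lam01, lam10) * 1.0)   # one-unit-time matrix
print(P)

# Logistic regression on the rates: the joint model's idea is that the
# rates, rather than the raw physiological series, drive the binary
# 6-month outcome. Coefficients are made up for illustration.
beta0, beta1, beta2 = -1.0, 1.2, -0.5
logit = beta0 + beta1 * lam01 + beta2 * lam10
prob = 1.0 / (1.0 + np.exp(-logit))
print(f"P(poor outcome) = {prob:.3f}")
```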

9.
Irreversible multi‐state models provide a convenient framework for characterizing disease processes that arise when the states represent the degree of organ or tissue damage incurred by a progressive disease. In many settings, however, individuals are only observed at periodic clinic visits and so the precise times of the transitions are not observed. If the life history and observation processes are not independent, the observation process contains information about the life history process, and more importantly, likelihoods based on the disease process alone are invalid. With interval‐censored failure time data, joint models are nonidentifiable and data analysts must rely on sensitivity analyses to assess the effect of the dependent observation times. This paper is concerned, however, with the analysis of data from progressive multi‐state disease processes in which individuals are scheduled to be seen at periodic pre‐scheduled assessment times. We cast the problem in the framework used for incomplete longitudinal data problems. Maximum likelihood estimation via an EM algorithm is advocated for parameter estimation. Simulation studies demonstrate that the proposed method works well under a variety of situations. Data from a cohort of patients with psoriatic arthritis are analyzed for illustration. Copyright © 2010 John Wiley & Sons, Ltd.

10.
Multivariate interval‐censored failure time data arise commonly in many studies of epidemiology and biomedicine. Analysis of this type of data is more challenging than that of right‐censored data. We propose a simple multiple imputation strategy to recover the order of occurrences of the interval‐censored event times, using a conditional predictive distribution function derived from a parametric gamma random effects model. By imputing the interval‐censored failure times, estimation of the regression and dependence parameters in a gamma frailty proportional hazards model using the well‐developed EM algorithm is made possible. A robust estimator for the covariance matrix is suggested to adjust for possible misspecification of the parametric baseline hazard function. The finite sample properties of the proposed method are investigated via simulation. The performance of the proposed method is highly satisfactory, whereas the computational burden is minimal. The proposed method is also applied to the diabetic retinopathy study (DRS) data for illustration purposes, and the estimates are compared with those based on other existing methods for bivariate grouped survival data. Copyright © 2010 John Wiley & Sons, Ltd.
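A sketch of the imputation step: draw an event time from a parametric distribution conditioned to lie in the observed interval, via the inverse CDF of the truncated distribution. An exponential stands in here for the paper's conditional predictive distribution under the gamma random effects model:

```python
import numpy as np

rng = np.random.default_rng(1)

def impute_in_interval(L, R, hazard):
    # Draw T from an exponential(hazard) distribution conditioned on
    # L < T <= R, using the inverse CDF of the truncated distribution.
    u = rng.uniform()
    FL = 1.0 - np.exp(-hazard * L)
    FR = 1.0 - np.exp(-hazard * R)
    return -np.log(1.0 - (FL + u * (FR - FL))) / hazard

# One imputed dataset: each interval-censored time becomes "exact", after
# which a standard gamma-frailty PH fit (e.g., via EM) can be applied.
intervals = [(1.0, 3.0), (0.5, 2.0), (2.0, 5.0)]
imputed = [impute_in_interval(L, R, hazard=0.4) for L, R in intervals]
print(imputed)
```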

11.
In repeated dose-toxicity studies, many outcomes are repeatedly measured on the same animal to study the toxicity of a compound of interest. This is only one example in which one is confronted with the analysis of many outcomes, possibly of different types. Probably the most common situation is an amalgamation of continuous and categorical outcomes. A possible approach towards the joint analysis of two longitudinal outcomes of a different nature is the use of random-effects models (Models for Discrete Longitudinal Data. Springer Series in Statistics. Springer: New York, 2005). Although a random-effects model can easily be extended to jointly model many outcomes of a different nature, computational problems arise as the number of outcomes increases. To avoid maximization of the full likelihood expression, Fieuws and Verbeke (Biometrics 2006; 62:424-431) proposed a pairwise modeling strategy in which all possible pairs are modeled separately, using a mixed model, yielding several different estimates for the same parameters. These estimates are then combined into a single set of estimates. Inference, based on pseudo-likelihood principles, is also derived indirectly from the separate analyses. In this paper, we extend the approach of Fieuws and Verbeke (Biometrics 2006; 62:424-431) in two ways: the method is applied to different types of outcomes, and the full pseudo-likelihood expression is maximized at once, leading directly to unique estimates as well as direct application of pseudo-likelihood inference. This is very appealing when one is interested in hypothesis testing. The method is applied to data from a repeated dose-toxicity study designed for the evaluation of the neurofunctional effects of a psychotropic drug. The relative merits of both methods are discussed. Copyright © 2008 John Wiley & Sons, Ltd.
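A minimal sketch of the extension: rather than fitting each pair separately and averaging, the full pairwise pseudo-log-likelihood (the sum of bivariate log-likelihoods over all outcome pairs, with parameters shared across pairs) is maximized at once. Everything below is Gaussian with unit variances to keep the sketch short, whereas the paper handles mixed outcome types:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

rng = np.random.default_rng(5)

# Three correlated outcomes per subject (simulated stand-ins).
n = 200
true_cov = np.array([[1.0, 0.5, 0.3],
                     [0.5, 1.0, 0.4],
                     [0.3, 0.4, 1.0]])
Y = rng.multivariate_normal([0.0, 0.5, 1.0], true_cov, size=n)

def neg_pseudo_loglik(params):
    # Full pairwise pseudo-likelihood maximized at once: the sum of
    # bivariate log-likelihoods over all outcome pairs, sharing the mean
    # parameters across pairs, so each parameter gets a unique estimate.
    mu = params[:3]
    rho = np.tanh(params[3:])            # pairwise correlations in (-1, 1)
    total = 0.0
    for idx, (j, k) in enumerate([(0, 1), (0, 2), (1, 2)]):
        cov = np.array([[1.0, rho[idx]], [rho[idx], 1.0]])
        total += multivariate_normal.logpdf(
            Y[:, [j, k]], mean=mu[[j, k]], cov=cov).sum()
    return -total

res = minimize(neg_pseudo_loglik, np.zeros(6), method="BFGS")
print("means:", res.x[:3].round(2),
      "correlations:", np.tanh(res.x[3:]).round(2))
```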

12.
We propose a method for calculating power and sample size for studies involving interval‐censored failure time data that involves only standard software for fitting the appropriate parametric survival model. We use the framework of a longitudinal study in which patients are assessed periodically for a response, and the only resultant information available to the investigators is the failure window: the time between the last negative and the first positive test results. The survival model is fit to an expanded data set using easily computed weights. We illustrate with a Weibull survival model and a two‐group comparison. The investigator can specify a group difference in terms of a hazard ratio. Our simulation results demonstrate the merits of the proposed power calculations. We also explore how the number of assessments (visits), and thus the corresponding lengths of the failure intervals, affects study power. The proposed method can be easily extended to more complex study designs and a variety of survival and censoring distributions. Copyright © 2015 John Wiley & Sons, Ltd.
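The paper's weighted expanded-data trick aside, power for such a design can always be checked by brute-force simulation: generate Weibull event times under a specified hazard ratio, reduce each to its failure window from the visit schedule, and apply a likelihood ratio test based on the interval-censored Weibull likelihood. A self-contained sketch in which all design values are made up:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
visits = np.arange(0.5, 5.0, 0.5)            # assessment schedule
grid = np.concatenate(([0.0], visits, [np.inf]))

def neg_loglik(params, L, R, z):
    # Weibull proportional-hazards likelihood for interval-censored data:
    # each subject contributes S(L)^hr - S(R)^hr with S(t) = exp(-(t/s)^k).
    shape, scale, beta = np.exp(params[0]), np.exp(params[1]), params[2]
    H = lambda t: np.where(np.isfinite(t), (t / scale) ** shape, np.inf)
    hr = np.exp(beta * z)
    p = np.exp(-H(L) * hr) - np.exp(-H(R) * hr)
    return -np.sum(np.log(np.maximum(p, 1e-300)))

n, hits, trials = 150, 0, 100
for _ in range(trials):
    z = np.repeat([0, 1], n)
    # Hazard ratio exp(beta) = 0.6 realized via a scale shift under
    # Weibull PH: s' = s * hr^(-1/shape).
    scale_z = np.where(z == 0, 2.0, 2.0 * 0.6 ** (-1 / 1.5))
    t = scale_z * rng.weibull(1.5, size=2 * n)
    idx = np.searchsorted(grid, t)
    L, R = grid[idx - 1], grid[idx]          # failure window (L, R]
    full = minimize(neg_loglik, [0.4, 0.7, 0.0], args=(L, R, z),
                    method="Nelder-Mead")
    null = minimize(lambda q: neg_loglik(np.r_[q, 0.0], L, R, z),
                    [0.4, 0.7], method="Nelder-Mead")
    hits += 2 * (null.fun - full.fun) > 3.84  # LR test, chi2(1), alpha=.05
print("empirical power ~", hits / trials)
```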

13.
Semi-competing risks data occur frequently in medical research when interest is in the simultaneous modelling of two or more processes, one of which may censor the others. We consider the analysis of semi-competing risks data in the presence of interval censoring and informative loss to follow-up. The work is motivated by a data set from the MRC UK Cognitive Function and Ageing Study, which we use to model two processes, cognitive impairment and death. Analysis is carried out using a multi-state model, which extends that used by Siannis et al. (Statist. Med. 2007; 26:426–442) for semi-competing risks data with exact transition times to data that are interval-censored. Model parameters are estimated using maximum likelihood. The role of a sensitivity parameter k, which influences the nature of the informative censoring, is explored.

14.
A general joint modeling framework is proposed that includes a parametric stratified survival component for continuous-time survival data and a mixture multilevel item response component to model latent developmental trajectories given mixed discrete response data. The joint model is illustrated in a real data setting, where the utility of longitudinally measured cognitive function as a predictor of survival is investigated in a group of elderly persons. The objective is partly to determine whether cognitive impairment is accompanied by a higher mortality rate. Time-dependent cognitive function is measured using the generalized partial credit model given occasion-specific Mini-Mental State Examination response data. A parametric survival model is applied to the survival information, and cognitive function, as a continuous latent variable, is included as a time-dependent explanatory variable along with other explanatory information. A mixture model is defined that incorporates the latent developmental trajectory and the survival component. The mixture model captures the heterogeneity in the developmental trajectories that could not be fully explained by the multilevel item response model and other explanatory variables. A Bayesian modeling approach is pursued, in which a Markov chain Monte Carlo algorithm is developed for simultaneous estimation of the joint model parameters. Practical issues such as model building and assessment are addressed using the DIC and various posterior predictive tests.
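For reference, the generalized partial credit model used for the occasion-specific MMSE responses assigns category probabilities through cumulative sums of discrimination-weighted distances between the latent trait and the step difficulties. A sketch with made-up item parameters:

```python
import numpy as np

def gpcm_probs(theta, a, b):
    # Generalized partial credit model: probability of each response
    # category k = 0..m for one item, given latent trait theta,
    # discrimination a, and step difficulties b[0..m-1].
    steps = np.concatenate(([0.0], np.cumsum(a * (theta - b))))
    expnum = np.exp(steps - steps.max())     # numerically stabilized
    return expnum / expnum.sum()

# Illustrative MMSE-style item with 3 ordered categories; the parameters
# are invented for the example, not estimates from the study.
print(gpcm_probs(theta=0.5, a=1.2, b=np.array([-1.0, 0.8])))
```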

15.
This study proposes a time‐varying effect model for examining group differences in trajectories of zero‐inflated count outcomes. The motivating example demonstrates that this zero‐inflated Poisson model allows investigators to study group differences in different aspects of substance use (e.g., the probability of abstinence and the quantity of alcohol use) simultaneously. The simulation study shows that the accuracy of estimation of the trajectory functions improves as the sample size increases; the accuracy under equal group sizes is higher only when the sample size is small (100). In terms of hypothesis-testing performance, the type I error rates are close to their corresponding significance levels under all settings. Furthermore, the power increases as the alternative hypothesis deviates more from the null hypothesis, and the rate of this increase is higher when the sample size is larger. Moreover, the hypothesis test for the group difference in the zero component tends to be less powerful than the test for the group difference in the Poisson component. Copyright © 2016 John Wiley & Sons, Ltd.
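A minimal sketch of the zero-inflated Poisson building block: a point mass at zero (abstinence) mixed with a Poisson count (quantity of use), with both the mixing probability and the Poisson mean varying over time. The polynomial curves below are placeholders for the paper's time-varying coefficient functions:

```python
import numpy as np
from scipy.stats import poisson

def zip_pmf(y, pi_t, lam_t):
    # Zero-inflated Poisson: a structural zero with probability pi_t
    # (e.g., abstinence) mixed with a Poisson count (quantity of use).
    base = (1.0 - pi_t) * poisson.pmf(y, lam_t)
    return np.where(y == 0, pi_t + base, base)

def pi_curve(t):   # probability of a structural zero over time
    return 1.0 / (1.0 + np.exp(-(0.5 - 0.8 * t + 0.1 * t**2)))

def lam_curve(t):  # Poisson mean over time
    return np.exp(0.2 + 0.3 * t - 0.02 * t**2)

t = 2.0
print([float(zip_pmf(y, pi_curve(t), lam_curve(t))) for y in range(4)])
```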

16.
Interval‐censored failure time data occur in many areas, especially in medical follow‐up studies such as clinical trials, and in consequence, many methods have been developed for the problem. However, most of the existing approaches cannot deal with situations where the hazard functions may cross each other. To address this, we develop a sieve maximum likelihood estimation procedure based on the short‐term and long‐term hazard ratio model. In the method, I‐splines are used to approximate the underlying unknown function. An extensive simulation study was conducted to assess the finite-sample properties of the procedure; the results suggest that the method works well in practical situations. The analysis of a motivating example is also provided.
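For orientation, a common formulation of the two-group short-term and long-term hazard ratio model (in the spirit of Yang and Prentice, 2005; stated here from general knowledge rather than from the abstract) is

```latex
\lambda(t \mid z) \;=\;
  \frac{\lambda_0(t)}{e^{-\beta_1 z}\, S_0(t) + e^{-\beta_2 z}\, F_0(t)},
\qquad F_0(t) = 1 - S_0(t),
```

so that for binary z the hazard ratio equals $e^{\beta_1}$ at $t = 0$ (short term) and tends to $e^{\beta_2}$ as $t \to \infty$ (long term); the hazards can cross when $\beta_1$ and $\beta_2$ have opposite signs. The sieve step replaces the unknown baseline function with an I-spline expansion with nonnegative coefficients, reducing estimation to finitely many parameters.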

17.
Outcome variables that are semicontinuous with clumping at zero are commonly seen in biomedical research. In addition, the outcome measurement is sometimes subject to interval censoring and a lower detection limit (LDL). This gives rise to interval‐censored observations with clumping below the LDL. The level of antibody against influenza virus measured by the hemagglutination inhibition assay is an example: the interval censoring is due to the assay's technical procedure, and the clumping below the LDL is likely a result of the lack of prior exposure in some individuals, such that they either have a zero level of antibodies or do not have a detectable level of antibodies. Given a pair of such measurements from the same subject at two time points, a binary 'fold‐increase' endpoint can be defined according to the ratio of the two measurements, as is often done in vaccine clinical trials. The intervention effect or vaccine immunogenicity can be assessed by comparing the binary endpoint between groups of subjects given different vaccines or placebos. We introduce a two‐part random effects model for the paired interval‐censored data with clumping below the LDL. Based on the estimated model parameters, we propose to use Monte Carlo approximation to estimate the 'fold‐increase' endpoint and the intervention effect. Bootstrapping is used for variance estimation. The performance of the proposed method is demonstrated by simulation. We analyze antibody data from an influenza vaccine trial for illustration. Copyright © 2014 John Wiley & Sons, Ltd.
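A sketch of the Monte Carlo step: simulate paired titers from a two-part random effects model (a responder indicator plus a log-scale measurement sharing a subject-level random effect), censor values below the LDL, and estimate the fold-increase endpoint empirically. All parameters and the LDL/2 convention are illustrative, not taken from the trial:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two-part model sketch: part 1 is a Bernoulli "responder" indicator;
# part 2 is a log-titer with a shared subject random effect.
LDL = 10.0
n = 100_000                                  # Monte Carlo sample size

responder = rng.uniform(size=n) < 0.8        # part 1
b = rng.normal(0.0, 0.5, size=n)             # shared random effect
log_pre = np.where(responder, 2.0 + b + rng.normal(0, 0.4, n), -np.inf)
log_post = np.where(responder, 3.5 + b + rng.normal(0, 0.4, n), -np.inf)

pre, post = np.exp(log_pre), np.exp(log_post)
# Seroconversion-style endpoint: post/pre >= 4, with titers below the LDL
# conventionally set to LDL/2 before taking the ratio.
pre_obs = np.where(pre < LDL, LDL / 2, pre)
post_obs = np.where(post < LDL, LDL / 2, post)
print("P(4-fold increase) ~", np.mean(post_obs / pre_obs >= 4))
```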

18.
In longitudinal studies, it is of interest to investigate how markers measured repeatedly over time are associated with the time to an event of interest; at the same time, the repeated measurements often exhibit the features of a heterogeneous population, non‐normality, and covariates measured with error, owing to their longitudinal nature. Statistical analysis may become dramatically more complicated when longitudinal–survival data with all of these features are analyzed together. Recently, mixtures of skewed distributions have received increasing attention in the treatment of heterogeneous data involving asymmetric behaviors across subclasses, but relatively few studies accommodate heterogeneity, non‐normality, and covariate measurement error simultaneously in the longitudinal–survival setting. Under the umbrella of Bayesian inference, this article explores a finite mixture of semiparametric mixed‐effects joint models with skewed distributions for the longitudinal measures, in an attempt to accommodate heterogeneous characteristics, adjust for departures from normality, and correct for measurement error in covariates, as well as to overcome the lack of confidence in specifying a time‐to‐event model. The Bayesian mixture joint modeling offers an appropriate avenue to estimate not only all parameters of the mixture joint models but also the probabilities of class membership. Simulation studies are conducted to assess the performance of the proposed method, and a real example is analyzed to demonstrate the methodology. The results are reported by comparing potential models under various scenarios. Copyright © 2015 John Wiley & Sons, Ltd.

19.
Palliative medicine is an interdisciplinary specialty focusing on improving quality of life (QOL) for patients with serious illness and their families. Palliative care programs are available or under development at over 80% of large US hospitals (300+ beds). Palliative care clinical trials present unique analytic challenges in evaluating treatment efficacy, where the aim is to improve patients' diminishing QOL as disease progresses towards end of life (EOL). A unique feature of these trials is that patients experience decreasing QOL during the trial despite potentially beneficial treatment. Longitudinal QOL and survival data are often highly correlated, which, in the face of censoring, makes it challenging to properly analyze and interpret the terminal QOL trend. To address these issues, we propose a novel semiparametric statistical approach to jointly model the terminal trend of QOL and survival data. Our approach comprises two sub‐models: a semiparametric mixed effects model for longitudinal QOL and a Cox model for survival. We use regression splines to estimate the nonparametric curves and AIC to select knots. We assess the model performance through simulation to establish a novel modeling approach that could be used in future palliative care research trials. An application of our approach to a recently completed palliative care clinical trial is also presented.
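A minimal sketch of the spline-plus-AIC ingredient: a regression-spline basis for the QOL time trend with the number of knots chosen by AIC. The sketch uses ordinary least squares on simulated scores; in the paper, the spline sits inside a semiparametric mixed effects model fitted jointly with the Cox sub-model:

```python
import numpy as np

def spline_basis(t, knots, degree=3):
    # Truncated power basis for a regression spline in time: global
    # polynomial terms plus one truncated term per interior knot.
    cols = [t**d for d in range(degree + 1)]
    cols += [np.maximum(t - k, 0.0)**degree for k in knots]
    return np.column_stack(cols)

def fit_aic(t, y, knots):
    # OLS fit of the QOL trend; AIC = n*log(RSS/n) + 2p (Gaussian errors).
    X = spline_basis(t, knots)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta)**2)
    n, p = len(y), X.shape[1]
    return n * np.log(rss / n) + 2 * p

rng = np.random.default_rng(11)
t = np.sort(rng.uniform(0, 12, 120))         # months on study (simulated)
y = 60 - 2.5 * t + 4 * np.sin(t / 2) + rng.normal(0, 3, t.size)  # QOL scores

# Knot selection by AIC, comparing a few candidate knot placements.
for knots in ([], [6.0], [4.0, 8.0], [3.0, 6.0, 9.0]):
    print(knots, round(fit_aic(t, y, np.array(knots)), 1))
```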

20.
In many medical problems that collect multiple observations per subject, the time to an event is often of interest. Sometimes, the occurrence of the event can be recorded only at regular intervals, leading to interval‐censored data. It is further desirable to obtain the most parsimonious model in order to increase predictive power and ease of interpretation. Variable selection, and often random effects selection in the case of clustered data, become crucial in such applications. We propose a Bayesian method for random effects selection in mixed effects accelerated failure time (AFT) models. The proposed method relies on the Cholesky decomposition of the random effects covariance matrix and the parameter‐expansion method for the selection of random effects. A Dirichlet prior is used to model the uncertainty in the random effects. The error distribution for the accelerated failure time model is specified using a Gaussian mixture to allow a flexible error density and prediction of the survival and hazard functions. We demonstrate the model using extensive simulations and the Signal Tandmobiel Study®. Copyright © 2013 John Wiley & Sons, Ltd.
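Two ingredients of the proposal can be sketched directly: the Gaussian-mixture error density for the log-scale AFT model, and the Cholesky parameterization that makes the random-effects covariance positive semidefinite by construction and turns random-effects selection into zeroing rows of the factor. All numbers are illustrative:

```python
import numpy as np
from scipy.stats import norm

# AFT with a flexible error: log T = x'beta + z'b + eps, where eps follows
# a finite Gaussian mixture. Weights, means, and SDs below are made up.
weights = np.array([0.6, 0.4])
means = np.array([-0.3, 0.45])
sds = np.array([0.3, 0.6])

def error_density(e):
    # A two-component Gaussian mixture can capture skewed or bimodal AFT
    # error distributions that a single normal cannot.
    return np.sum(weights * norm.pdf(np.asarray(e)[..., None], means, sds),
                  axis=-1)

def cov_from_cholesky(Lmat):
    # Random-effects covariance via its Cholesky factor: Sigma = L L' is
    # positive semidefinite by construction, and zeroing the k-th row of L
    # removes the k-th random effect, which is the hook used for selection.
    return Lmat @ Lmat.T

Lmat = np.array([[0.8, 0.0],
                 [0.0, 0.0]])   # second row zero: random slope selected out
print(cov_from_cholesky(Lmat))
print(error_density([0.0, 0.5, 1.0]))
```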
