Similar Documents
20 similar documents found (search time: 359 ms)
1.
Causal inference for non-censored response variables, such as binary or quantitative outcomes, is often based on either (1) direct standardization ('G-formula') or (2) inverse probability of treatment assignment weights ('propensity score'). To do causal inference in survival analysis, one needs to address right-censoring, and often, special techniques are required for that purpose. We will show how censoring can be dealt with 'once and for all' by means of so-called pseudo-observations when doing causal inference in survival analysis. The pseudo-observations can be used in place of the outcomes, with censoring already accounted for, when applying 'standard' causal inference methods such as (1) or (2) above. We study this idea for estimating the average causal effect of a binary treatment on the survival probability, the restricted mean lifetime, and the cumulative incidence in a competing risks situation. The methods will be illustrated in a small simulation study and via a study of patients with acute myeloid leukemia who received either myeloablative or non-myeloablative conditioning before allogeneic hematopoietic cell transplantation. We will estimate the average causal effect of the conditioning regime on outcomes such as the 3-year overall survival probability and the 3-year risk of chronic graft-versus-host disease. Copyright © 2017 John Wiley & Sons, Ltd.
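A minimal sketch of the core idea, assuming a binary treatment with known or separately estimated propensity scores (all names and the Kaplan-Meier/jackknife details are illustrative, not the authors' implementation): compute jackknife pseudo-observations for the survival probability at a horizon t, then plug them into a standard IPTW estimator as if they were uncensored outcomes.

import numpy as np

def km_surv_at(time, event, t):
    # Kaplan-Meier estimate of S(t); events ordered before censorings at ties
    time, event = np.asarray(time, float), np.asarray(event, int)
    order = np.lexsort((1 - event, time))
    time, event = time[order], event[order]
    s, at_risk = 1.0, len(time)
    for i in range(len(time)):
        if time[i] > t:
            break
        if event[i] == 1:
            s *= 1.0 - 1.0 / at_risk
        at_risk -= 1
    return s

def pseudo_obs(time, event, t):
    # jackknife pseudo-observation: theta_i = n*S_hat(t) - (n-1)*S_hat_(-i)(t)
    time, event = np.asarray(time, float), np.asarray(event, int)
    n = len(time)
    s_full = km_surv_at(time, event, t)
    mask = np.ones(n, dtype=bool)
    po = np.empty(n)
    for i in range(n):
        mask[i] = False
        po[i] = n * s_full - (n - 1) * km_surv_at(time[mask], event[mask], t)
        mask[i] = True
    return po

def iptw_ate(po, a, e):
    # average causal effect on S(t): treat pseudo-observations as the outcome
    po, a, e = map(np.asarray, (po, a, e))
    return np.mean(a * po / e - (1 - a) * po / (1 - e))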

2.
We consider a study-level meta-analysis with a normally distributed outcome variable and possibly unequal study-level variances, where the object of inference is the difference in means between a treatment and control group. A common complication in such an analysis is missing sample variances for some studies. A frequently used approach is to impute the weighted (by sample size) mean of the observed variances (mean imputation). Another approach is to include only those studies with variances reported (complete case analysis). Both mean imputation and complete case analysis are only valid under the missing-completely-at-random assumption, and even then the inverse variance weights produced are not necessarily optimal. We propose a multiple imputation method employing gamma meta-regression to impute the missing sample variances. Our method takes advantage of study-level covariates that may be used to provide information about the missing data. Through simulation studies, we show that multiple imputation, when the imputation model is correctly specified, is superior to competing methods in terms of confidence interval coverage probability and type I error probability when testing a specified group difference. Finally, we describe a similar approach to handling missing variances in cross-over studies. Copyright © 2016 John Wiley & Sons, Ltd.
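For context, a sketch of the mean-imputation baseline that the proposed multiple imputation is compared against (the gamma meta-regression itself requires fitting an imputation model to study-level covariates); missing variances are coded as NaN, and all names are illustrative:

import numpy as np

def pooled_diff_mean_imputation(effects, variances, sample_sizes):
    # replace missing study variances by the sample-size-weighted mean
    # of the observed ones, then pool with inverse-variance weights
    d = np.asarray(effects, float)
    v = np.asarray(variances, float)
    n = np.asarray(sample_sizes, float)
    obs = ~np.isnan(v)
    v_imputed = np.where(obs, v, np.average(v[obs], weights=n[obs]))
    w = 1.0 / v_imputed
    est = np.sum(w * d) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return est, se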

3.
Investigators interested in whether a disease aggregates in families often collect case-control family data, which consist of disease status and covariate information for members of families selected via case or control probands. Here, we focus on the use of case-control family data to investigate the relative contributions to the disease of additive genetic effects (A), shared family environment (C), and unique environment (E). We describe an ACE model for binary family data; this structural equation model, which has been described previously, combines a general-family extension of the classic ACE twin model with a (possibly covariate-specific) liability-threshold model for binary outcomes. We then introduce our contribution, a likelihood-based approach to fitting the model to singly ascertained case-control family data. The approach, which involves conditioning on the proband's disease status and also setting prevalence equal to a prespecified value that can be estimated from the data, makes it possible to obtain valid estimates of the A, C, and E variance components from case-control (rather than only from population-based) family data. In fact, simulation experiments suggest that our approach to fitting yields approximately unbiased estimates of the A, C, and E variance components, provided that certain commonly made assumptions hold. Further, when our approach is used to fit the ACE model to Austrian case-control family data on depression, the resulting estimate of heritability is very similar to those from previous analyses of twin data. Genet. Epidemiol. 34: 238–245, 2010. © 2009 Wiley-Liss, Inc.
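For orientation, the standard variance decomposition behind the ACE liability-threshold model (this is textbook background, not the authors' fitting procedure): each individual's latent liability splits into additive genetic, shared-environment, and unique-environment components,

\[
L = A + C + E, \qquad \operatorname{Var}(L) = \sigma_A^2 + \sigma_C^2 + \sigma_E^2, \qquad
h^2 = \frac{\sigma_A^2}{\sigma_A^2 + \sigma_C^2 + \sigma_E^2},
\]

and the binary outcome is \(Y = \mathbf{1}\{L > \tau\}\) for a (possibly covariate-specific) threshold \(\tau\), with the heritability \(h^2\) computed from the fitted variance components.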

4.
In patients with chronic kidney disease (CKD), clinical interest often centers on determining treatments and exposures that are causally related to renal progression. Analyses of longitudinal clinical data in this population are often complicated by clinical competing events, such as end-stage renal disease (ESRD) and death, and time-dependent confounding, where patient factors that are predictive of later exposures and outcomes are affected by past exposures. We developed multistate marginal structural models (MS-MSMs) to assess the effect of time-varying systolic blood pressure on disease progression in subjects with CKD. The multistate nature of the model allows us to jointly model disease progression characterized by changes in the estimated glomerular filtration rate (eGFR), the onset of ESRD, and death, and thereby avoid unnatural assumptions of death and ESRD as noninformative censoring events for subsequent changes in eGFR. We model the causal effect of systolic blood pressure on the probability of transitioning into 1 of 6 disease states given the current state. We use inverse probability weights with stabilization to account for potential time-varying confounders, including past eGFR, total protein, serum creatinine, and hemoglobin. We apply the model to data from the Chronic Renal Insufficiency Cohort Study, a multisite observational study of patients with CKD.
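A minimal sketch of stabilized inverse probability of treatment weights for a discrete-time binary exposure, assuming a long-format data frame sorted by subject and visit (column names, and the use of plain logistic regressions, are illustrative; the full MS-MSM adds the multistate outcome model on top of these weights):

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def stabilized_weights(df, id_col, a_col, lag_col, confounder_cols):
    # sw_i(t) = prod_{s<=t} P(A_s | A_{s-1}) / P(A_s | A_{s-1}, L_s)
    # assumes the exposure A is coded 0/1
    num = LogisticRegression().fit(df[[lag_col]], df[a_col])
    den = LogisticRegression().fit(df[[lag_col] + confounder_cols], df[a_col])
    p_num = num.predict_proba(df[[lag_col]])
    p_den = den.predict_proba(df[[lag_col] + confounder_cols])
    obs = df[a_col].to_numpy()
    ratio = p_num[np.arange(len(df)), obs] / p_den[np.arange(len(df)), obs]
    # cumulative product over each subject's visit history
    return pd.Series(ratio, index=df.index).groupby(df[id_col]).cumprod()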

5.
Clinical trials with multiple primary time-to-event outcomes are common. Use of multiple endpoints creates challenges in the evaluation of power and the calculation of sample size during trial design, particularly for time-to-event outcomes. We present methods for calculating the power and sample size for randomized superiority clinical trials with two correlated time-to-event outcomes. We do this for independent and dependent censoring for three censoring scenarios: (i) the two events are non-fatal; (ii) one event is fatal (semi-competing risk); and (iii) both are fatal (competing risk). We derive the bivariate log-rank test in all three censoring scenarios and investigate the behavior of power and the required sample sizes. Separate evaluations are conducted for two inferential goals, evaluation of whether the test intervention is superior to the control on: (1) all of the endpoints (multiple co-primary) or (2) at least one endpoint (multiple primary). Copyright © 2017 John Wiley & Sons, Ltd.
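As a single-endpoint reference point (the paper's contribution is the bivariate extension with correlated endpoints and the three censoring scenarios), Schoenfeld's classical formula gives the number of events a two-arm log-rank test needs; a sketch:

import numpy as np
from scipy.stats import norm

def schoenfeld_events(hazard_ratio, alpha=0.05, power=0.80, p_alloc=0.5):
    # required events for a two-sided log-rank test, single endpoint
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z**2 / (p_alloc * (1 - p_alloc) * np.log(hazard_ratio) ** 2)

# e.g. schoenfeld_events(0.7) gives roughly 247 events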

6.
For twin time-to-event data, we consider different concordance probabilities, such as the casewise concordance, that are routinely computed as measures of the lifetime dependence/correlation for specific diseases. The concordance probability here is the probability that both twins have experienced the event of interest. Under the assumption that both twins are censored at the same time, we show how to estimate this probability in the presence of right censoring and, as a consequence, we can then estimate the casewise twin concordance. In addition, we can model the magnitude of within-pair dependence over time, and covariates may further influence the marginal risk and the dependence structure. We establish the estimators' large-sample properties and suggest various tests, for example, for inferring familial influence. The method is demonstrated and motivated by twin data on cancer events with death as a competing risk. We thus aim to quantify the degree of dependence through the casewise concordance function and show a significant genetic component. Copyright © 2013 John Wiley & Sons, Ltd.
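For reference, the two quantities involved are standard (the paper's contribution is their estimation under right censoring): the pairwise and casewise concordance at age \(t\),

\[
\psi(t) = P(T_1 \le t,\, T_2 \le t), \qquad
\text{casewise}(t) = P(T_2 \le t \mid T_1 \le t) = \frac{\psi(t)}{P(T_1 \le t)},
\]

so an estimate of the joint probability \(\psi(t)\) together with the marginal risk yields the casewise concordance.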

7.
Marginal structural Cox models are used for quantifying marginal treatment effects on outcome event hazard function. Such models are estimated using inverse probability of treatment and censoring (IPTC) weighting, which properly accounts for the impact of time-dependent confounders, avoiding conditioning on factors on the causal pathway. To estimate the IPTC weights, the treatment assignment mechanism is conventionally modeled in discrete time. While this is natural in situations where treatment information is recorded at scheduled follow-up visits, in other contexts, the events specifying the treatment history can be modeled in continuous time using the tools of event history analysis. This is particularly the case for treatment procedures, such as surgeries. In this paper, we propose a novel approach for flexible parametric estimation of continuous-time IPTC weights and illustrate it in assessing the relationship between metastasectomy and mortality in metastatic renal cell carcinoma patients. Copyright © 2016 John Wiley & Sons, Ltd.

8.
We propose a prediction model for the cumulative incidence functions of competing risks, based on a logit link. Because of a concern that censoring may depend on time-varying covariates in our motivating human immunodeficiency virus (HIV) application, we describe an approach for estimating the parameters in the prediction models using inverse probability of censoring weighting under a missingness at random assumption. We then illustrate the application of this methodology to identify predictors of the competing outcomes of virologic failure, an efficacy outcome, and treatment-limiting adverse event, a safety outcome, among HIV-infected patients first starting antiretroviral treatment. Copyright © 2016 John Wiley & Sons, Ltd.
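A sketch of the stated model form, with the logit link applied to the cause-\(k\) cumulative incidence function (the exact specification of the time effect \(\alpha_k(t)\) and the covariate terms is a modeling choice we do not reproduce here):

\[
\operatorname{logit} F_k(t \mid X) = \log\frac{F_k(t \mid X)}{1 - F_k(t \mid X)} = \alpha_k(t) + \beta_k^\top X,
\]

with parameters estimated from estimating equations in which uncensored subjects are weighted by the inverse of their estimated probability of remaining uncensored.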

9.
To assess the calibration of a predictive model in a survival analysis setting, several authors have extended the Hosmer–Lemeshow goodness-of-fit test to survival data. Grønnesby and Borgan developed a test under the proportional hazards assumption, and Nam and D'Agostino developed a nonparametric test that is applicable in a more general survival setting for data with limited censoring. We analyze the performance of the two tests and show that the Grønnesby–Borgan test attains appropriate size in a variety of settings, whereas the Nam–D'Agostino method has a higher than nominal Type 1 error when there is more than trivial censoring. Both tests are sensitive to small cell sizes. We develop a modification of the Nam–D'Agostino test to allow for higher censoring rates. We show that this modified Nam–D'Agostino test has appropriate control of Type 1 error and comparable power to the Grønnesby–Borgan test and is applicable to settings other than proportional hazards. We also discuss the application to small cell sizes. Copyright © 2015 John Wiley & Sons, Ltd.

10.
Cai Wu & Liang Li, Statistics in Medicine, 2018, 37(21): 3106–3124
This paper focuses on quantifying and estimating the predictive accuracy of prognostic models for time-to-event outcomes with competing events. We consider the time-dependent discrimination and calibration metrics, including the receiver operating characteristics curve and the Brier score, in the context of competing risks. To address censoring, we propose a unified nonparametric estimation framework for both discrimination and calibration measures, by weighting the censored subjects with the conditional probability of the event of interest given the observed data. The proposed method can be extended to time-dependent predictive accuracy metrics constructed from a general class of loss functions. We apply the methodology to a data set from the African American Study of Kidney Disease and Hypertension to evaluate the predictive accuracy of a prognostic risk score in predicting end-stage renal disease, accounting for the competing risk of pre–end-stage renal disease death, and evaluate its numerical performance in extensive simulation studies.
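A minimal sketch of an IPCW Brier score at horizon t for the event of interest under competing risks (standard inverse-censoring weighting; the paper's estimator conditions on the observed data more generally, and all names here are illustrative). G is a vectorized estimate of the censoring survival function, e.g. a Kaplan-Meier curve fit to the censoring times:

import numpy as np

def ipcw_brier(time, cause, pred_risk, t, G):
    # cause: 0 = censored, 1 = event of interest, 2 = competing event
    # pred_risk: model-predicted P(T <= t, cause = 1 | X)
    time = np.asarray(time, float)
    cause = np.asarray(cause, int)
    pred_risk = np.asarray(pred_risk, float)
    w = np.zeros(len(time))
    had_event = (time <= t) & (cause != 0)   # any event by t: weight 1/G(T_i)
    w[had_event] = 1.0 / G(time[had_event])
    at_risk = time > t                       # event-free at t: weight 1/G(t)
    w[at_risk] = 1.0 / G(t)
    y = ((time <= t) & (cause == 1)).astype(float)   # cause-1 status at t
    return np.mean(w * (y - pred_risk) ** 2)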

11.
The hazard ratios resulting from a Cox proportional hazards regression model are hard to interpret and hard to convert into prolonged survival time. As the main goal is often to study survival functions, there is increasing interest in summary measures based on the survival function that are easier to interpret than the hazard ratio; the residual mean time is an important example of such measures. However, because of the presence of right censoring, the tail of the survival distribution is often difficult to estimate correctly. Therefore, we consider the restricted residual mean time, which represents a partial area under the survival function, given any time horizon τ, and is interpreted as the residual life expectancy up to τ of a subject surviving up to time t. We present a class of regression models for this measure, based on weighted estimating equations and inverse probability of censoring weighted estimators to handle right censoring. Furthermore, we show how to extend the models and the estimators to deal with delayed entries. We demonstrate that the restricted residual mean life estimator is equivalent to integrals of Kaplan–Meier estimates in the case of simple factor variables. Estimation performance is investigated by simulation studies. Using real data from the Danish Monitoring Cardiovascular Risk Factor Surveys, we illustrate an application to additive regression models and discuss the general assumption of right censoring and left truncation being dependent on covariates. Copyright © 2017 John Wiley & Sons, Ltd.
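A sketch of the Kaplan-Meier plug-in version mentioned above: the restricted residual mean life at time t with horizon tau is the area under the survival curve between t and tau, divided by S(t). This assumes a survival step function already evaluated on a time grid containing t; names are illustrative:

import numpy as np

def restricted_residual_mean(grid, surv, t, tau):
    # RRML(t, tau) = (integral from t to tau of S(u) du) / S(t)
    grid, surv = np.asarray(grid, float), np.asarray(surv, float)
    keep = (grid >= t) & (grid <= tau)
    # exact integral of a right-continuous step function (e.g. Kaplan-Meier)
    area = np.sum(np.diff(grid[keep]) * surv[keep][:-1])
    s_t = surv[np.searchsorted(grid, t)]   # S at the first grid point >= t
    return area / s_t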

12.
This paper presents a parametric method of fitting semi-Markov models with piecewise-constant hazards in the presence of left, right and interval censoring. We investigate transition intensities in a three-state illness–death model with no recovery. We relax the Markov assumption by adjusting the intensity for the transition from state 2 (illness) to state 3 (death) for the time spent in state 2 through a time-varying covariate. This involves the exact time of the transition from state 1 (healthy) to state 2. When the data are subject to left or interval censoring, this time is unknown. In the estimation of the likelihood, we take into account interval censoring by integrating out all possible times for the transition from state 1 to state 2. For left censoring, we use an Expectation–Maximisation inspired algorithm. A simulation study reflects the performance of the method. The proposed combination of statistical procedures provides great flexibility. We illustrate the method in an application by using data on stroke onset for the older population from the UK Medical Research Council Cognitive Function and Ageing Study. Copyright © 2012 John Wiley & Sons, Ltd.
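For reference, the piecewise-constant hazard building block is standard (the paper's contribution is the semi-Markov likelihood under left, right, and interval censoring): with cut points \(0 = \tau_0 < \tau_1 < \cdots\) and hazard \(\lambda_j\) on \((\tau_{j-1}, \tau_j]\), survival multiplies one exponential factor per interval traversed,

\[
S(t) = \exp\!\Big( -\sum_j \lambda_j \,\big|(\tau_{j-1}, \tau_j] \cap (0, t]\big| \Big),
\]

where \(|\cdot|\) denotes the length of the overlap between each hazard interval and \((0, t]\).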

13.
Multiple papers have studied the use of gene-environment (GE) independence to enhance power for testing gene-environment interaction in case-control studies. However, studies that evaluate the role of GE independence in a meta-analysis framework are limited. In this paper, we extend the single-study empirical Bayes type shrinkage estimators proposed by Mukherjee and Chatterjee (2008) to a meta-analysis setting that adjusts for uncertainty regarding the assumption of GE independence across studies. We use the retrospective likelihood framework to derive an adaptive combination of estimators obtained under the constrained model (assuming GE independence) and unconstrained model (without assumptions of GE independence), with weights determined by measures of GE association derived from multiple studies. Our simulation studies indicate that this newly proposed estimator has better average performance across different simulation scenarios than the standard alternative of using inverse variance (covariance) weighted estimators that combine study-specific constrained, unconstrained, or empirical Bayes estimators. The results are illustrated by meta-analyzing 6 different studies of type 2 diabetes investigating interactions between genetic markers on the obesity-related FTO gene and the environmental factors body mass index and age.

14.
The case-control study is a common design for assessing the association between genetic exposures and a disease phenotype. Though association with a given (case-control) phenotype is always of primary interest, there is often considerable interest in assessing relationships between genetic exposures and other (secondary) phenotypes. However, the case-control sample represents a biased sample from the general population. As a result, if this sampling framework is not correctly taken into account, analyses estimating the effect of exposures on secondary phenotypes can be biased, leading to incorrect inference. In this paper, we address this problem and propose a general approach for estimating and testing the population effect of a genetic variant on a secondary phenotype. Our approach is based on inverse probability weighted estimating equations, where the weights depend on genotype and the secondary phenotype. We show that, though slightly less efficient than a full likelihood-based analysis when the likelihood is correctly specified, it is substantially more robust to model misspecification, and can outperform likelihood-based analysis, both in terms of validity and power, when the model is misspecified. We illustrate our approach with an application to a case-control study extracted from the Framingham Heart Study. Copyright © 2016 John Wiley & Sons, Ltd.
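A minimal sketch of the weighting idea, reduced to a weighted least-squares regression of the secondary phenotype on genotype with weights equal to the inverse probability of being sampled given disease status (the paper's estimating equations also allow the weights to depend on genotype and the secondary phenotype; all names are illustrative):

import numpy as np

def ipw_secondary_slope(g, y, d, f_case, f_control):
    # d: case (1) / control (0); f_case, f_control: P(sampled | D)
    g, y, d = (np.asarray(a, float) for a in (g, y, d))
    w = np.where(d == 1, 1.0 / f_case, 1.0 / f_control)
    X = np.column_stack([np.ones_like(g), g])
    WX = X * w[:, None]
    # weighted normal equations (X'WX) beta = X'Wy
    beta = np.linalg.solve(X.T @ WX, WX.T @ y)
    return beta[1]   # genotype effect on the secondary phenotype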

15.
In an infectious disease cohort study, individuals who have been infected with a pathogen are often recruited for follow up. The period between infection and the onset of symptomatic disease, referred to as the incubation period, is of interest because of its importance for disease surveillance and control. However, the incubation period is often difficult to ascertain due to the uncertainty associated with asymptomatic infection onset time. An additional complication is that the observed infected subjects are likely to have longer incubation periods due to prevalent sampling. In this article, we demonstrate how to estimate the distribution of the incubation period with uncertain infection onset, subject to left-truncation and right-censoring. We employ a family of sufficiently general parametric models, the generalized odds-rate class of regression models, for the underlying incubation period and its correlation with covariates. In simulation studies, we assess the finite sample performance of the model fitting and hazard function estimation. The proposed method is illustrated on data from the HIV/AIDS study on injection drug users admitted to a detoxification program in Badalona, Spain.
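One common parameterization of the generalized odds-rate class reads (hedged: the paper may use a variant, and here \(\Lambda_0\) is a baseline cumulative hazard-type function):

\[
S(t \mid x) = \big\{ 1 + \rho\, \Lambda_0(t)\, e^{\beta^\top x} \big\}^{-1/\rho}, \qquad \rho \ge 0,
\]

which recovers the proportional hazards model as \(\rho \to 0\) and the proportional odds model at \(\rho = 1\).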

16.
In cluster-randomized trials, intervention effects are often formulated by specifying marginal models, fitting them under a working independence assumption, and using robust variance estimates to address the association in the responses within clusters. We develop sample size criteria within this framework, with analyses based on semiparametric Cox regression models fitted with event times subject to right censoring. At the design stage, copula models are specified to enable derivation of the asymptotic variance of estimators from a marginal Cox regression model and to compute the number of clusters necessary to satisfy power requirements. Simulation studies demonstrate the validity of the sample size formula in finite samples for a range of cluster sizes, censoring rates, and degrees of within-cluster association among event times. The power and relative efficiency implications of copula misspecification are studied, as well as the effect of within-cluster dependence in the censoring times. Sample size criteria and other design issues are also addressed for the setting where the event status is only ascertained at periodic assessments and times are interval censored. Copyright © 2014 John Wiley & Sons, Ltd.
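A minimal sketch of the design-stage ingredient: generating within-cluster-dependent event-time pairs from a Clayton copula with exponential margins via conditional sampling (theta > 0 controls the dependence, with Kendall's tau = theta/(theta + 2); this is an illustration, not the paper's code):

import numpy as np

def clayton_pair_times(n_pairs, theta, rate=1.0, rng=None):
    rng = rng or np.random.default_rng()
    u1 = rng.uniform(size=n_pairs)
    v = rng.uniform(size=n_pairs)
    # conditional inverse of the Clayton copula given U1 = u1
    u2 = (u1 ** (-theta) * (v ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)
    # inverse-CDF transform to exponential margins
    return -np.log(u1) / rate, -np.log(u2) / rate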

17.
Interval-censored data, in which the event time is only known to lie in some time interval, arise commonly in practice, for example, in a medical study in which patients visit clinics or hospitals at prescheduled times and the events of interest occur between visits. Such data are appropriately analyzed using methods that account for this uncertainty in event time measurement. In this paper, we propose a survival tree method for interval-censored data based on the conditional inference framework. Using Monte Carlo simulations, we find that the tree is effective in uncovering underlying tree structure, performs similarly to an interval-censored Cox proportional hazards model fit when the true relationship is linear, and performs at least as well as (and in the presence of right-censoring outperforms) the Cox model when the true relationship is not linear. Further, the interval-censored tree outperforms survival trees based on imputing the event time as an endpoint or the midpoint of the censoring interval. We illustrate the application of the method on tooth emergence data.

18.
Simulation studies to evaluate performance of statistical methods require a well-specified data-generating model. Details of these models are essential to interpret the results and arrive at proper conclusions. A case in point is random-effects meta-analysis of dichotomous outcomes. We reviewed a number of simulation studies that evaluated approximate normal models for meta-analysis of dichotomous outcomes, and we assessed the data-generating models that were used to generate events for a series of (heterogeneous) trials. We demonstrate that the performance of the statistical methods, as assessed by simulation, differs between three alternative data-generating models, with larger differences apparent in the small-population setting. Our findings are relevant to multilevel binomial models in general.
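A minimal sketch of one defensible data-generating model for such simulations: a binomial model with a normal random effect on the log-odds scale. Where the random effect enters (treatment arm only, split across arms, or also in the baseline) is exactly the kind of choice whose consequences the paper documents; parameters and names here are illustrative:

import numpy as np

def simulate_trials(k, mu, delta, tau, n_per_arm, rng=None):
    # k trials; true log-OR delta with between-trial SD tau; baseline log-odds mu
    rng = rng or np.random.default_rng()
    log_or_i = rng.normal(delta, tau, size=k)   # trial-specific log odds ratios
    p_control = 1.0 / (1.0 + np.exp(-mu))
    p_treat = 1.0 / (1.0 + np.exp(-(mu + log_or_i)))
    events_control = rng.binomial(n_per_arm, p_control, size=k)
    events_treat = rng.binomial(n_per_arm, p_treat)
    return events_control, events_treat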

19.
It is often the case that interest lies in the effect of an exposure on each of several distinct event types. For example, we are motivated to investigate the impact of recent injection drug use on deaths due to each of cancer, end-stage liver disease, and overdose in the Canadian Co-infection Cohort (CCC). We develop a marginal structural model that permits estimation of cause-specific hazards in situations where more than one cause of death is of interest. Marginal structural models allow the causal effect of treatment on outcome to be estimated using inverse-probability weighting under the assumption of no unmeasured confounding; these models are particularly useful in the presence of time-varying confounding variables, which may also mediate the effect of exposures. An asymptotic variance estimator is derived, and a cumulative incidence function estimator is given. We compare the performance of the proposed marginal structural model for multiple-outcome data to that of conventional competing risks models in simulated data and demonstrate the use of the proposed approach in the CCC. Copyright © 2013 John Wiley & Sons, Ltd.
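For reference, the estimands are the standard competing-risks quantities: the cause-specific hazard for cause \(k\) and the cumulative incidence function built from it,

\[
\lambda_k(t) = \lim_{h \downarrow 0} \frac{P(t \le T < t + h,\; K = k \mid T \ge t)}{h},
\qquad
F_k(t) = \int_0^t \lambda_k(u)\, S(u)\,\mathrm{d}u,
\]

where \(S(u)\) is all-cause survival; in the marginal structural version, each subject's risk-set contribution is weighted by inverse probability of treatment weights.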

20.
Drug-drug interactions (DDIs) are a common cause of adverse drug events (ADEs). The electronic medical record (EMR) database and the FDA's adverse event reporting system (FAERS) database are the major data sources for mining and testing ADE-associated DDI signals. Most DDI data-mining methods focus on pair-wise drug interactions, and methods to detect high-dimensional DDIs in medical databases are lacking. In this paper, we propose 2 novel mixture drug-count response models for detecting high-dimensional drug combinations that induce myopathy. The "count" indicates the number of drugs in a combination. One model is called the fixed probability mixture drug-count response model with a maximum risk threshold (FMDRM-MRT). The other is called the count-dependent probability mixture drug-count response model with a maximum risk threshold (CMDRM-MRT), in which the mixture probability is count dependent. Compared with the previous mixture drug-count response model (MDRM) developed by our group, these 2 new models show a better likelihood in detecting high-dimensional drug combinatory effects on myopathy. CMDRM-MRT identified and validated 54, 374, 637, 442, and 131 interactions of 2 to 6 drugs, respectively, which induce myopathy in both the EMR and FAERS databases. We further demonstrate that FAERS data capture a much higher maximum myopathy risk than EMR data do. The consistency of the 2 mixture models' parameters and local false discovery rate estimates is evaluated through statistical simulation studies.
