Similar Documents
20 similar documents retrieved.
1.
This article explores Bayesian joint models for a quantile of a longitudinal response, a mismeasured covariate, and an event time outcome, with the aims of (i) characterizing the entire conditional distribution of the response via quantile regression, which may be more robust to outliers and to misspecification of the error distribution; (ii) accounting for measurement error, non‐ignorable missing observations, and departures from normality in the covariate; and (iii) avoiding over‐confidence in specifying a parametric time‐to‐event model. When statistical inference is carried out for a longitudinal data set with non‐central location, non‐linearity, non‐normality, measurement error, and missing values, as well as interval‐censored event times, it is important to treat these data features simultaneously in order to obtain reliable and robust inferential results. Toward this end, we develop a Bayesian joint modeling approach that simultaneously estimates all parameters in three models: a quantile regression‐based nonlinear mixed‐effects model for the response using the asymmetric Laplace distribution, a linear mixed‐effects model with a skew‐t distribution for the mismeasured covariate in the presence of informative missingness, and an accelerated failure time model with an unspecified nonparametric distribution for the event time. We apply the proposed approach to an AIDS clinical data set and conduct simulation studies to assess the performance of the proposed joint models and method. Copyright © 2016 John Wiley & Sons, Ltd.
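The asymmetric‐Laplace connection to quantile regression mentioned above can be sketched in a few lines: maximizing an ALD likelihood in its location parameter is equivalent to minimizing the check (pinball) loss. This is a minimal illustration with made‐up data, not the paper's mixed‐effects model.

```python
# Sketch: maximizing an asymmetric Laplace (ALD) likelihood in the location
# parameter is equivalent to minimizing the check (pinball) loss
# rho_tau(u) = u * (tau - 1{u < 0}); this is what makes ALD-based models
# quantile regressions. Data below are made up.

def check_loss(u, tau):
    """Pinball loss for residual u at quantile level tau."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def quantile_estimate(data, tau):
    """Location minimizing total check loss; a minimizer is always
    attained at a sample point, so we search over the sample."""
    return min(data, key=lambda q: sum(check_loss(y - q, tau) for y in data))

data = [1.0, 2.0, 3.0, 4.0, 100.0]   # outlier-contaminated sample
print(quantile_estimate(data, 0.5))  # → 3.0 (median-type fit, robust to the outlier)
```

For tau = 0.5 the loss is half the sum of absolute residuals, so the fit is the median and the outlier has little influence, which is the robustness property the abstract appeals to.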

2.

Background

Joint modelling of longitudinal and time‐to‐event data is often preferred over separate longitudinal or time‐to‐event analyses as it can account for study dropout, error in longitudinally measured covariates, and correlation between longitudinal and time‐to‐event outcomes. The joint modelling literature focuses mainly on the analysis of single studies with no methods currently available for the meta‐analysis of joint model estimates from multiple studies.

Methods

We propose a 2‐stage method for meta‐analysis of joint model estimates. The method is applied to the INDANA dataset to combine joint model estimates of systolic blood pressure with time to death, time to myocardial infarction, and time to stroke. Results are compared to meta‐analyses of separate longitudinal or time‐to‐event models. A simulation study is conducted to contrast separate and joint analyses over a range of scenarios.

Results

In the real dataset, the separate and joint analyses gave similar results. However, the simulation study indicated a benefit of using joint rather than separate methods in a meta‐analytic setting where association exists between the longitudinal and time‐to‐event outcomes.

Conclusions

Where evidence of association between longitudinal and time‐to‐event outcomes exists, estimates from joint models, rather than from standalone analyses, should be pooled in 2‐stage meta‐analyses.
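The second stage of a 2‐stage meta‐analysis can be sketched as inverse‐variance pooling of the study‐level estimates produced in stage one. The study values below are hypothetical, and this shows only the fixed‐effect case, not the full method.

```python
import math

# Sketch of stage 2 of a 2-stage meta-analysis: study-level estimates
# (e.g. log hazard ratios from per-study joint models) are pooled with
# inverse-variance weights. Study values are illustrative only.

def pool_fixed(estimates, variances):
    """Fixed-effect inverse-variance pooled estimate and its variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    return pooled, 1.0 / sum(weights)

est = [0.10, 0.25, 0.18]   # hypothetical per-study log hazard ratios
var = [0.02, 0.05, 0.04]   # their estimated variances
pooled, pooled_var = pool_fixed(est, var)
print(round(pooled, 4), round(math.sqrt(pooled_var), 4))  # → 0.1526 0.1026
```

A random‐effects version would inflate each variance by an estimated between‐study component before weighting.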

3.
Baseline risk is a proxy for unmeasured but important patient‐level characteristics, which may be modifiers of treatment effect, and is a potential source of heterogeneity in meta‐analysis. Models adjusting for baseline risk have been developed for pairwise meta‐analysis using the observed event rate in the placebo arm and taking into account the measurement error in the covariate to ensure that an unbiased estimate of the relationship is obtained. Our objective is to extend these methods to network meta‐analysis, where adjusting for baseline imbalances in the non‐intervention group event rate may reduce both heterogeneity and, possibly, inconsistency. This extension is complicated in network meta‐analysis because the covariate is sometimes missing: not all studies in a network include a non‐active intervention arm. A random‐effects meta‐regression model allowing for inclusion of multi‐arm trials and trials without a ‘non‐intervention’ arm is developed. Analyses are conducted within a Bayesian framework using the WinBUGS software. The method is illustrated using two examples: (i) interventions to promote functional smoke alarm ownership by households with children and (ii) analgesics to reduce post‐operative morphine consumption following major surgery. The results showed no evidence of a baseline effect in the smoke alarm example, but the analgesics example shows that the adjustment can greatly reduce heterogeneity and improve overall model fit. Copyright © 2012 John Wiley & Sons, Ltd.

4.
In health services research, it is common to encounter semicontinuous data characterized by a point mass at zero followed by a right‐skewed continuous distribution with positive support. Examples include health expenditures, in which the zeros represent a subpopulation of patients who do not use health services, while the continuous distribution describes the level of expenditures among health services users. Semicontinuous data are typically analyzed using two‐part mixture models that separately model the probability of health services use and the distribution of positive expenditures among users. However, because the second part conditions on a non‐zero response, conventional two‐part models do not provide a marginal interpretation of covariate effects on the overall population of health service users and non‐users, even though this is often of greatest interest to investigators. Here, we propose a marginalized two‐part model that yields more interpretable effect estimates by parameterizing the model in terms of the marginal mean. This model maintains many of the important features of conventional two‐part models, such as capturing zero‐inflation and skewness, but allows investigators to examine covariate effects on the overall marginal mean, a target of primary interest in many applications. Using a simulation study, we examine properties of the maximum likelihood estimates from this model. We illustrate the approach by evaluating the effect of a behavioral weight loss intervention on health‐care expenditures in the Veterans Affairs health‐care system. Copyright © 2014 John Wiley & Sons, Ltd.
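The marginal mean that this model targets can be written out for a conventional two‐part specification: the overall mean is the product of the probability of any use and the conditional mean among users. The logistic and log‐normal parts and all coefficients below are illustrative assumptions, not the paper's fitted model.

```python
import math

# Sketch of the marginal mean induced by a conventional two-part model:
# E[Y] = Pr(Y > 0) * E[Y | Y > 0], with a logistic part for any use and a
# log-normal part for positive expenditure. Coefficients are made up.

def marginal_mean(x, alpha, beta, sigma2):
    """Overall expected expenditure at covariate value x."""
    p_use = 1.0 / (1.0 + math.exp(-(alpha[0] + alpha[1] * x)))   # Pr(Y > 0)
    cond_mean = math.exp(beta[0] + beta[1] * x + sigma2 / 2.0)   # log-normal mean
    return p_use * cond_mean
```

The marginalized two‐part model instead parameterizes E[Y] directly, so a single coefficient has a multiplicative effect on this overall mean rather than on one part at a time.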

5.
In this article, we show how Tobit models can address the problem of identifying characteristics of subjects with left‐censored outcomes, in the context of developing a method for jointly analyzing time‐to‐event and longitudinal data. Methods exist for handling these types of data separately, but they may not be appropriate when time to event depends on the longitudinal outcome and a substantial portion of values are reported below the limits of detection. An alternative approach is to develop a joint model for the time‐to‐event outcome and a two‐part longitudinal outcome, linking them through random effects. The proposed approach is used to assess the association between the risk of decline of the CD4/CD8 ratio and rates of change in viral load, and to discriminate patients who are potential progressors to AIDS from those who are not. We develop a fully Bayesian approach for fitting joint two‐part Tobit models and illustrate the proposed methods on simulated and real data from an AIDS clinical study.
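The Tobit idea for values below a detection limit can be sketched with a univariate likelihood: censored observations contribute a normal CDF term, observed ones a density term. This is a simplified stand‐in for the two‐part Tobit component of the joint model, with no random effects.

```python
import math

# Sketch of a Tobit log-likelihood for a response left-censored at a
# detection limit: values at or below the limit contribute
# Phi((limit - mu)/sigma); fully observed values contribute the normal
# density. A univariate simplification, not the full joint model.

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def tobit_loglik(values, limit, mu, sigma):
    ll = 0.0
    for y in values:
        if y <= limit:  # below detection limit: only know y <= limit
            ll += math.log(norm_cdf((limit - mu) / sigma))
        else:           # fully observed
            ll += math.log(norm_pdf((y - mu) / sigma) / sigma)
    return ll
```

Maximizing this over mu and sigma uses the partial information in censored values instead of discarding them or imputing the limit.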

6.
A number of mixture modeling approaches assume both normality and independent observations. However, these two assumptions are at odds with the reality of many data sets, which are often characterized by an abundance of zero‐valued or highly skewed observations as well as observations from biologically related (i.e., non‐independent) subjects. We present here a finite mixture model with a zero‐inflated Poisson regression component that may be applied to both types of data. This flexible approach allows the use of covariates to model both the Poisson mean and rate of zero inflation and can incorporate random effects to accommodate non‐independent observations. We demonstrate the utility of this approach by applying these models to a candidate endophenotype for schizophrenia, but the same methods are applicable to other types of data characterized by zero inflation and non‐independence. Copyright © 2014 John Wiley & Sons, Ltd.
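The zero‐inflated Poisson component described above has a simple probability mass function: a point mass at zero mixed with an ordinary Poisson count. A minimal sketch, without the regression or random‐effects structure:

```python
import math

# Sketch of the zero-inflated Poisson (ZIP) pmf: with probability pi the
# count is a structural zero; otherwise it is Poisson(lam). In the full
# model, pi and lam would be logit- and log-linear in covariates.

def zip_pmf(k, pi, lam):
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    return pi * (k == 0) + (1.0 - pi) * poisson

# the mixture is still a proper distribution
total = sum(zip_pmf(k, 0.3, 2.0) for k in range(100))
print(round(total, 6))  # → 1.0
```

Note the excess-zero probability zip_pmf(0, pi, lam) = pi + (1 - pi)·e^(−lam), which is what lets the model fit data with many more zeros than a Poisson allows.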

7.
We have developed a method, called Meta‐STEPP (subpopulation treatment effect pattern plot for meta‐analysis), to explore treatment effect heterogeneity across covariate values in the meta‐analysis setting for time‐to‐event data when the covariate of interest is continuous. Meta‐STEPP forms overlapping subpopulations from individual patient data containing similar numbers of events with increasing covariate values, estimates subpopulation treatment effects using standard fixed‐effects meta‐analysis methodology, displays the estimated subpopulation treatment effect as a function of the covariate values, and provides a statistical test to detect possibly complex treatment‐covariate interactions. Simulation studies show that this test maintains an adequate type‐I error rate and has adequate power when reasonable window sizes are chosen. When applied to eight breast cancer trials, Meta‐STEPP suggests that chemotherapy is less effective for tumors with high estrogen receptor expression than for those with low expression. Copyright © 2016 John Wiley & Sons, Ltd.
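The overlapping‐subpopulation construction can be sketched as a sliding window over patients sorted by the continuous covariate. Here windows are defined by patient counts rather than by event counts as in Meta‐STEPP proper, so this is only an assumed simplification of the windowing scheme; the window sizes are illustrative.

```python
# Sketch of STEPP-style windowing: sort subjects by the covariate, take
# windows of n2 subjects, and slide forward by n2 - n1 so that consecutive
# windows share n1 subjects. (Meta-STEPP balances *events* per window; this
# simplified sketch balances subject counts instead.)

def overlapping_windows(sorted_values, n1, n2):
    """Return covariate windows of size n2, each overlapping the next in n1 values."""
    step = n2 - n1
    windows, start = [], 0
    while start + n2 <= len(sorted_values):
        windows.append(sorted_values[start:start + n2])
        start += step
    return windows

values = list(range(10))                  # covariate values, already sorted
wins = overlapping_windows(values, 2, 4)  # four windows, each sharing 2 values with the next
print(wins)
```

Within each window, a fixed‐effects meta‐analytic treatment effect would then be estimated and plotted against the window's median covariate value.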

8.
Studies of HIV dynamics in AIDS research are very important for understanding the pathogenesis of HIV‐1 infection and for assessing the effectiveness of antiviral therapies. Nonlinear mixed‐effects (NLME) models have been used for modeling between‐subject and within‐subject variation in viral load measurements. Normality of both the within‐subject random errors and the random effects is a routine assumption for NLME models, but it may be unrealistic, obscuring important features of between‐subject and within‐subject variation, particularly if the data exhibit skewness. In this paper, we develop a Bayesian approach to NLME models and relax the normality assumption by allowing both the model random errors and the random effects to follow a multivariate skew‐normal distribution. The proposed model provides flexibility in capturing a broad range of non‐normal behavior and includes normality as a special case. We use a real data set from an AIDS study to illustrate the proposed approach by comparing various candidate models. We find that the model with skew‐normality provides a better fit to the observed data, and the corresponding parameter estimates differ markedly from those based on the normal model when skewness is present in the data. These findings suggest that a skew‐normal specification is important for achieving robust and reliable results when the data exhibit skewness. Copyright © 2010 John Wiley & Sons, Ltd.
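The skew‐normal family used here, in its standard univariate form, has density f(x) = 2·φ(x)·Φ(αx), which collapses to the normal density when the shape parameter α is zero. A minimal sketch (the paper uses the multivariate version):

```python
import math

# Sketch of the standard skew-normal density f(x) = 2 * phi(x) * Phi(alpha * x).
# alpha = 0 recovers the normal distribution as a special case; alpha > 0
# gives right skew, alpha < 0 left skew.

def phi(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def skew_normal_pdf(x, alpha):
    return 2.0 * phi(x) * Phi(alpha * x)

# it integrates to 1 for any alpha (Riemann sum over [-8, 8])
area = sum(skew_normal_pdf(-8 + i * 0.01, 3.0) * 0.01 for i in range(1601))
print(round(area, 4))  # → 1.0
```

Because normality is nested at α = 0, comparing the skew‐normal and normal fits amounts to asking whether α is needed, which is how the candidate models in the abstract are contrasted.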

9.
Normally distributed measurement error is a widely used assumption in joint models of longitudinal and survival data, but it may lead to unreasonable or even misleading results when the longitudinal data exhibit skewness. This paper proposes a new joint model for multivariate longitudinal and multivariate survival data that incorporates a nonparametric function into the trajectory and hazard functions and assumes that the measurement errors in the longitudinal measurement models follow a skew‐normal distribution. A Monte Carlo expectation‐maximization (EM) algorithm, combined with the penalized‐splines technique and a Metropolis–Hastings step within the Gibbs sampler, is developed to estimate the parameters and nonparametric functions in the considered joint models. Case‐deletion diagnostic measures are proposed to identify potentially influential observations, and an extended local influence method is presented to assess the local influence of minor perturbations. Simulation studies and a real example from a clinical trial illustrate the proposed methodologies. Copyright © 2017 John Wiley & Sons, Ltd.

10.
A limiting feature of previous work on growth mixture modeling is the assumption of normally distributed variables within each latent class. With strongly non‐normal outcomes, this means that several latent classes are required to capture the observed variable distributions. Being able to relax the assumption of within‐class normality has the advantage that a non‐normal observed distribution does not necessitate using more than one class to fit the distribution. It is valuable to add parameters representing the skewness and the thickness of the tails. A new growth mixture model of this kind is proposed drawing on recent work in a series of papers using the skew‐t distribution. The new method is illustrated using the longitudinal development of body mass index in two data sets. The first data set is from the National Longitudinal Survey of Youth covering ages 12–23 years. Here, the development is related to an antecedent measuring socioeconomic background. The second data set is from the Framingham Heart Study covering ages 25–65 years. Here, the development is related to the concurrent event of treatment for hypertension using a joint growth mixture‐survival model. Copyright © 2014 John Wiley & Sons, Ltd.

11.
It is common practice to analyze complex longitudinal data using nonlinear mixed‐effects (NLME) models with a normality assumption. NLME models with normal distributions provide the most popular framework for modeling continuous longitudinal outcomes, assuming individuals come from a homogeneous population and relying on random effects to accommodate inter‐individual variation. However, two issues stand out: (i) the normality assumption for model errors may cause a lack of robustness and subsequently lead to invalid inference and unreasonable estimates, particularly if the data exhibit skewness; and (ii) a homogeneous population assumption may be unrealistic, obscuring important features of between‐subject and within‐subject variation, which may yield unreliable modeling results. Relatively few studies have addressed longitudinal data with both heterogeneity and skewness. In the last two decades, skew distributions have proven beneficial in dealing with asymmetric data in various applications. In this article, we address the simultaneous impact of both features by developing a flexible finite mixture of NLME models with skew distributions in a Bayesian framework, which yields estimates of both model parameters and class membership probabilities for longitudinal data. Simulation studies are conducted to assess the performance of the proposed models and methods, and a real example from an AIDS clinical trial illustrates the methodology by modeling viral dynamics and comparing candidate models with different distributional specifications; the analysis results are reported. Copyright © 2014 John Wiley & Sons, Ltd.

12.
Joint models for longitudinal and time‐to‐event data are particularly relevant to many clinical studies where longitudinal biomarkers could be highly associated with a time‐to‐event outcome. A cutting‐edge research direction in this area is dynamic predictions of patient prognosis (e.g., survival probabilities) given all available biomarker information, recently boosted by the stratified/personalized medicine initiative. As these dynamic predictions are individualized, flexible models are desirable in order to appropriately characterize each individual longitudinal trajectory. In this paper, we propose a new joint model using individual‐level penalized splines (P‐splines) to flexibly characterize the coevolution of the longitudinal and time‐to‐event processes. An important feature of our approach is that dynamic predictions of the survival probabilities are straightforward as the posterior distribution of the random P‐spline coefficients given the observed data is a multivariate skew‐normal distribution. The proposed methods are illustrated with data from the HIV Epidemiology Research Study. Our simulation results demonstrate that our model has better dynamic prediction performance than other existing approaches. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

13.
For time‐to‐event outcomes, a rich literature exists on the bias introduced by covariate measurement error in regression models, such as the Cox model, and methods of analysis to address this bias. By comparison, less attention has been given to understanding the impact or addressing errors in the failure time outcome. For many diseases, the timing of an event of interest (such as progression‐free survival or time to AIDS progression) can be difficult to assess or reliant on self‐report and therefore prone to measurement error. For linear models, it is well known that random errors in the outcome variable do not bias regression estimates. With nonlinear models, however, even random error or misclassification can introduce bias into estimated parameters. We compare the performance of 2 common regression models, the Cox and Weibull models, in the setting of measurement error in the failure time outcome. We introduce an extension of the SIMEX method to correct for bias in hazard ratio estimates from the Cox model and discuss other analysis options to address measurement error in the response. A formula to estimate the bias induced into the hazard ratio by classical measurement error in the event time for a log‐linear survival model is presented. Detailed numerical studies are presented to examine the performance of the proposed SIMEX method under varying levels and parametric forms of the error in the outcome. We further illustrate the method with observational data on HIV outcomes from the Vanderbilt Comprehensive Care Clinic.
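The extrapolation step at the heart of SIMEX can be sketched numerically. In place of actually refitting the model at each added‐noise level, the closed‐form attenuation of a simple linear regression slope is used below, so this is only a stylized illustration of the simulation‐extrapolation idea, with made‐up parameter values, not the paper's extension for Cox hazard ratios.

```python
# Sketch of SIMEX's extrapolation step. Under classical measurement error
# with error variance s_u2 inflated by a factor (1 + lam), a naive linear
# regression slope attenuates to beta * s_x2 / (s_x2 + (1 + lam) * s_u2).
# Re-estimating at several lam >= 0 and extrapolating the fitted curve back
# to lam = -1 (no error) approximately recovers beta. Values illustrative.

def lagrange_at(points, x):
    """Evaluate the interpolating polynomial through `points` at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for xj, _ in (p for j, p in enumerate(points) if j != i):
            term *= (x - xj) / (xi - xj)
        total += term
    return total

beta, s_x2, s_u2 = 2.0, 1.0, 0.5           # true slope, signal and error variance

def naive_slope(lam):                       # attenuated slope at noise level lam
    return beta * s_x2 / (s_x2 + (1.0 + lam) * s_u2)

points = [(lam, naive_slope(lam)) for lam in (0.0, 0.5, 1.0)]
simex = lagrange_at(points, -1.0)           # quadratic extrapolation to lam = -1
print(round(naive_slope(0.0), 4), round(simex, 4))  # → 1.3333 1.8571
```

The quadratic extrapolant does not fully recover beta = 2 because the true attenuation curve is rational, not quadratic; in practice the residual gap is part of SIMEX's approximation error.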

14.
Instrumental variable (IV) analysis has been widely used in economics, epidemiology, and other fields to estimate the causal effects of covariates on outcomes, in the presence of unobserved confounders and/or measurement errors in covariates. However, IV methods for time‐to‐event outcome with censored data remain underdeveloped. This paper proposes a Bayesian approach for IV analysis with censored time‐to‐event outcome by using a two‐stage linear model. A Markov chain Monte Carlo sampling method is developed for parameter estimation for both normal and non‐normal linear models with elliptically contoured error distributions. The performance of our method is examined by simulation studies. Our method largely reduces bias and greatly improves coverage probability of the estimated causal effect, compared with the method that ignores the unobserved confounders and measurement errors. We illustrate our method on the Women's Health Initiative Observational Study and the Atherosclerosis Risk in Communities Study. Copyright © 2014 John Wiley & Sons, Ltd.
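The basic two‐stage logic can be sketched in the uncensored linear case: an instrument Z that shifts X but affects Y only through X yields the Wald‐type estimate cov(Z, Y)/cov(Z, X), which removes the confounding bias in OLS. A simulation sketch with made‐up data (the paper's Bayesian, censored version is far more involved):

```python
import random

# Sketch of IV estimation in the uncensored linear case. U is an unobserved
# confounder, so OLS of Y on X is biased; the instrument Z is independent of
# U, so cov(Z, Y) / cov(Z, X) is consistent for the causal slope beta = 1.
# Simulated data, illustrative only.

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

random.seed(42)
n, beta = 20000, 1.0
U = [random.gauss(0, 1) for _ in range(n)]             # unobserved confounder
Z = [random.gauss(0, 1) for _ in range(n)]             # instrument
X = [z + u + random.gauss(0, 1) for z, u in zip(Z, U)]
Y = [beta * x + u + random.gauss(0, 1) for x, u in zip(X, U)]

ols = cov(X, Y) / cov(X, X)   # biased upward: picks up the X-U association
iv = cov(Z, Y) / cov(Z, X)    # two-stage / Wald-type IV estimate, near 1
print(round(ols, 3), round(iv, 3))
```

Here OLS converges to about 1.33 (bias 1/3 from the confounder) while the IV estimate concentrates near the true slope of 1.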

15.
The Michigan Female Health Study (MFHS) conducted research focusing on reproductive health outcomes among women exposed to polybrominated biphenyls (PBBs). In the work presented here, the available longitudinal serum PBB exposure measurements are used to obtain predictions of PBB exposure for specific time points of interest via random effects models. In a two‐stage approach, a prediction of the PBB exposure is obtained and then used in a second‐stage health outcome model. This paper illustrates how a unified approach, which links the exposure and outcome in a joint model, provides an efficient adjustment for covariate measurement error. We compare the use of empirical Bayes predictions in the two‐stage approach with results from a joint modeling approach, with and without an adjustment for left‐ and interval‐censored data. The unified approach with the adjustment for left‐ and interval‐censored data resulted in little bias and near‐nominal confidence interval coverage in both the logistic and linear model setting. Published in 2010 by John Wiley & Sons, Ltd.

16.
In conventional survival analysis there is an underlying assumption that all study subjects are susceptible to the event. In general, this assumption does not adequately hold when investigating the time to an event other than death. Owing to genetic and/or environmental etiology, study subjects may not be susceptible to the disease. Analyzing nonsusceptibility has become an important topic in biomedical, epidemiological, and sociological research, with recent statistical studies proposing several mixture models for right‐censored data in regression analysis. In longitudinal studies, we often encounter left, interval, and right‐censored data because of incomplete observations of the time endpoint, as well as possibly left‐truncated data arising from the dissimilar entry ages of recruited healthy subjects. To analyze these kinds of incomplete data while accounting for nonsusceptibility and possible crossing hazards in the framework of mixture regression models, we utilize a logistic regression model to specify the probability of susceptibility, and a generalized gamma distribution, or a log‐logistic distribution, in the accelerated failure time location‐scale regression model to formulate the time to the event. Relative times of the conditional event time distribution for susceptible subjects are extended in the accelerated failure time location‐scale submodel. We also construct graphical goodness‐of‐fit procedures on the basis of the Turnbull–Frydman estimator and newly proposed residuals. Simulation studies were conducted to demonstrate the validity of the proposed estimation procedure. The mixture regression models are illustrated with alcohol abuse data from the Taiwan Aboriginal Study Project and hypertriglyceridemia data from the Cardiovascular Disease Risk Factor Two‐township Study in Taiwan. Copyright © 2013 John Wiley & Sons, Ltd.
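The mixture ("cure") structure described above can be sketched as a population survival function: a logistic model gives the probability of being susceptible, and only susceptible subjects draw an event time. The log‐logistic component and all parameter values below are illustrative assumptions.

```python
import math

# Sketch of the mixture (cure) survival structure: with probability p
# (logistic in covariate x) a subject is susceptible and gets a
# log-logistic event time; non-susceptible subjects never experience the
# event. Parameters are made up for illustration.

def susceptible_prob(x, gamma0, gamma1):
    return 1.0 / (1.0 + math.exp(-(gamma0 + gamma1 * x)))

def loglogistic_surv(t, scale, shape):
    """S_u(t) = 1 / (1 + (t / scale)^shape) for susceptible subjects."""
    return 1.0 / (1.0 + (t / scale) ** shape)

def population_surv(t, x, gamma0, gamma1, scale, shape):
    p = susceptible_prob(x, gamma0, gamma1)
    return (1.0 - p) + p * loglogistic_surv(t, scale, shape)
```

As t grows, the population survival curve plateaus at 1 − p rather than dropping to zero, which is the signature of a nonsusceptible fraction in the data.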

17.
Tao Lu, Statistics in Medicine, 2017, 36(16): 2614–2629
In AIDS studies, heterogeneous between‐subject and within‐subject variations are often observed on longitudinal endpoints. To accommodate heteroscedasticity in the longitudinal data, statistical methods have been developed to model the mean and variance jointly. Most of these methods assume (conditionally) normal distributions for the random errors, which is not realistic in practice. In this article, we propose a Bayesian mixed‐effects location scale model with a skew‐t distribution and mismeasured covariates for heterogeneous longitudinal data with skewness. The proposed model captures between‐subject and within‐subject (WS) heterogeneity by modeling the between‐subject and WS variations with covariates, as well as a subject‐level random effect in the WS variance. Further, the proposed model takes covariate measurement errors into account, and the commonly assumed normal distributions for model errors are replaced by a skew‐t distribution to account for skewness. Parameter estimation is carried out in a Bayesian framework. The proposed method is illustrated with the Multicenter AIDS Cohort Study, and simulation studies are performed to assess its performance. Copyright © 2017 John Wiley & Sons, Ltd.

18.
Shared parameter joint models provide a framework under which a longitudinal response and a time to event can be modelled simultaneously. A common assumption in shared parameter joint models has been to assume that the longitudinal response is normally distributed. In this paper, we instead propose a joint model that incorporates a two‐part ‘hurdle’ model for the longitudinal response, motivated in part by longitudinal response data that is subject to a detection limit. The first part of the hurdle model estimates the probability that the longitudinal response is observed above the detection limit, whilst the second part of the hurdle model estimates the mean of the response conditional on having exceeded the detection limit. The time‐to‐event outcome is modelled using a parametric proportional hazards model, assuming a Weibull baseline hazard. We propose a novel association structure whereby the current hazard of the event is assumed to be associated with the current combined (expected) outcome from the two parts of the hurdle model. We estimate our joint model under a Bayesian framework and provide code for fitting the model using the Bayesian software Stan. We use our model to estimate the association between HIV RNA viral load, which is subject to a lower detection limit, and the hazard of stopping or modifying treatment in patients with HIV initiating antiretroviral therapy. Copyright © 2016 John Wiley & Sons, Ltd.

19.
We propose a semiparametric multivariate skew–normal joint model for multivariate longitudinal and multivariate survival data. One main feature of the posited model is that we relax the commonly used normality assumption for random effects and within‐subject error by using a centered Dirichlet process prior to specify the random effects distribution and using a multivariate skew–normal distribution to specify the within‐subject error distribution and model trajectory functions of longitudinal responses semiparametrically. A Bayesian approach is proposed to simultaneously obtain Bayesian estimates of unknown parameters, random effects and nonparametric functions by combining the Gibbs sampler and the Metropolis–Hastings algorithm. Particularly, a Bayesian local influence approach is developed to assess the effect of minor perturbations to within‐subject measurement error and random effects. Several simulation studies and an example are presented to illustrate the proposed methodologies. Copyright © 2014 John Wiley & Sons, Ltd.

20.
In behavioral, biomedical, and social‐psychological sciences, it is common to encounter latent variables and heterogeneous data. Mixture structural equation models (SEMs) are very useful methods to analyze these kinds of data. Moreover, the presence of missing data, including both missing responses and missing covariates, is an important issue in practical research. However, limited work has been done on the analysis of mixture SEMs with non‐ignorable missing responses and covariates. The main objective of this paper is to develop a Bayesian approach for analyzing mixture SEMs with an unknown number of components, in which a multinomial logit model is introduced to assess the influence of some covariates on the component probability. Results of our simulation study show that the Bayesian estimates obtained by the proposed method are accurate, and the model selection procedure via a modified DIC is useful in identifying the correct number of components and in selecting an appropriate missing mechanism in the proposed mixture SEMs. A real data set related to a longitudinal study of polydrug use is employed to illustrate the methodology. Copyright © 2010 John Wiley & Sons, Ltd.
