Similar Articles
1.
We present a local influence analysis of assigned model quantities in the context of a dose-response analysis of cancer mortality in relation to estimated absorbed dose of dioxin. The risk estimation is performed using dioxin dose as a time-dependent explanatory variable in a proportional hazards model. The dioxin dose is computed using a toxicokinetic model that depends on several factors, such as assigned constants and estimated parameters. We present a local influence analysis to assess the effects on the final results of minor perturbations of the toxicokinetic model factors. In the present context, there is no evidence of undue local influence on the risk estimates. It is, however, possible to identify which factors are more influential.
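As background, local influence analyses of this kind typically build on Cook's (1986) framework (an assumption about the authors' exact formulation): for a perturbation vector ω with null value ω₀, the likelihood displacement is examined through its normal curvature in a unit direction h,

LD(\omega) = 2\{\ell(\hat\theta) - \ell(\hat\theta_\omega)\}, \qquad C_h = 2\,\bigl|\,h^\top \Delta^\top \ddot{L}^{-1} \Delta\, h\,\bigr|,

where \Delta = \partial^2 \ell(\theta \mid \omega)/\partial\theta\,\partial\omega^\top is evaluated at (\hat\theta, \omega_0) and \ddot{L} is the Hessian of the log-likelihood. Directions h with large curvature, typically the dominant eigenvectors of \Delta^\top \ddot{L}^{-1} \Delta, point to the most influential perturbed factors.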

2.

Background

In matched-pair cohort studies with censored events, the hazard ratio (HR) may be of main interest. However, it is less well known in the epidemiologic literature that the partial maximum likelihood estimator of a common HR conditional on matched pairs has a simple form, namely, the ratio of the numbers of two pair-types. Moreover, because the HR is a noncollapsible measure and its constancy across matched pairs is a restrictive assumption, the marginal HR, as an “average” HR, may be a more relevant target of analysis than the conditional HR.
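A hedged reconstruction of that simple form from the pairwise partial likelihood (the paper's notation may differ): among informative pairs, those in which one member's event is observed while the other member is still at risk, let n₁₀ be the number of pairs in which the exposed member fails first and n₀₁ the number in which the unexposed member fails first. Each informative pair contributes ψ/(1+ψ) or 1/(1+ψ) to the stratified partial likelihood, so

L(\psi) \propto \frac{\psi^{\,n_{10}}}{(1+\psi)^{\,n_{10}+n_{01}}}, \qquad \hat\psi = \frac{n_{10}}{n_{01}},

which is exactly a ratio of the counts of the two pair-types.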

Methods

Based on its simple expression, we provided an alternative interpretation of the common HR estimator as the odds of the matched-pair analog of the C-statistic for censored time-to-event data. Through simulations assuming proportional hazards within matched pairs, we evaluated the influence of various censoring patterns on the marginal and common HR estimators of unstratified and stratified proportional hazards models, respectively. The methods were applied to a real propensity-score-matched dataset from the Rotterdam tumor bank of primary breast cancer.

Results

We showed that stratified models unbiasedly estimate a common HR under proportional hazards within matched pairs. However, the marginal HR estimator with a robust variance estimator lacks interpretation as an “average” marginal HR, even if censoring is unconditionally independent of the event, unless no censoring occurs or no exposure effect is present. Furthermore, exposure-dependent censoring biased the marginal HR estimator away from both the conditional HR and an “average” marginal HR, irrespective of whether an exposure effect was present. From the matched Rotterdam dataset, we estimated the HR for relapse-free survival comparing absence with presence of chemotherapy; the estimates (95% confidence intervals) were 1.47 (1.18–1.83) for the common HR and 1.33 (1.13–1.57) for the marginal HR.

Conclusion

The simple expression of the common HR estimator provides a useful summary of the exposure effect that is less sensitive to censoring patterns than the marginal HR estimator. The common and marginal HR estimators, each relying on distinct assumptions and interpretations, are complementary alternatives.

3.
For survival data regression, the Cox proportional hazards model is the most popular choice, but in certain situations it is inappropriate. Various authors have proposed the proportional odds model as an alternative. Yang and Prentice recently presented a number of easily implemented estimators for the proportional odds model. Here we show how to extend the methods of Yang and Prentice to a family of survival models that includes the proportional hazards model and the proportional odds model as special cases. The model is defined in terms of a Box-Cox transformation of the survival function, indexed by a transformation parameter rho. This model has been discussed by other authors and is related to the Harrington-Fleming G(rho) family of tests and to frailty models. We discuss inference for the case where rho is known and the case where rho must be estimated. We present a simulation study of a pseudo-likelihood estimator and a martingale residual estimator, and find that the methods perform reasonably well. We apply our model to a real data set.
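A hedged sketch of the family, consistent with the abstract's description (the paper's exact parameterization may differ): apply the Box-Cox transform to the survival function and let covariates act multiplicatively on the transformed scale,

g_\rho(s) = \frac{s^{-\rho} - 1}{\rho}\ (\rho > 0), \qquad g_0(s) = -\log s, \qquad g_\rho\{S(t \mid Z)\} = e^{\beta^\top Z}\, g_\rho\{S_0(t)\}.

As ρ → 0 the transform tends to the cumulative hazard −log S(t), giving proportional hazards; at ρ = 1 it equals {1 − S(t)}/S(t), the odds of failure, giving proportional odds.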

4.

Background

When analysing spatial data, it is important to account for spatial autocorrelation. In Bayesian statistics, spatial autocorrelation is commonly modelled by the intrinsic conditional autoregressive prior distribution. At the heart of this model is a spatial weights matrix which controls the behaviour and degree of spatial smoothing. The purpose of this study is to review the main specifications of the spatial weights matrix found in the literature, and together with some new and less common specifications, compare the effect that they have on smoothing and model performance.

Methods

The popular BYM model is described, and a simple solution for addressing the identifiability issue among the spatial random effects is provided. Seventeen different specifications of the spatial weights matrix are considered, classified into four classes: adjacency-based weights, and weights based on geographic distance, distance between covariate values, and a hybrid of geographic and covariate distances. These last two classes embody the main novelty of this research. Three synthetic data sets are generated, each representing a different underlying spatial structure. These data sets, together with a real spatial data set from the literature, are analysed using the models. The models are evaluated using the deviance information criterion and Moran’s I statistic.

Results

The deviance information criterion indicated that the model which uses binary, first-order adjacency weights to perform spatial smoothing is generally an optimal choice for achieving a good model fit. Distance-based weights also generally perform quite well and offer similar parameter interpretations. The less commonly explored options for performing spatial smoothing generally provided a worse model fit than models with more traditional approaches to smoothing, but usually outperformed the benchmark model which did not conduct spatial smoothing.

Conclusions

The specification of the spatial weights matrix can have a substantial impact on model fit and parameter estimation. The results provide some evidence that a smaller number of neighbours used in defining the spatial weights matrix yields a better model fit, and may provide a more accurate representation of the underlying spatial random field. The commonly used binary, first-order adjacency weights still appear to be a good choice for implementing spatial smoothing.
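To make the two recurring ingredients concrete, the sketch below builds binary first-order (rook) adjacency weights on a regular grid and computes Moran's I. It is a minimal illustration with simulated data, not the BYM/CAR implementation used in the study.

```python
import numpy as np

def rook_adjacency(nrow, ncol):
    """Binary first-order (rook) adjacency matrix for a regular grid."""
    n = nrow * ncol
    W = np.zeros((n, n))
    for r in range(nrow):
        for c in range(ncol):
            i = r * ncol + c
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < nrow and 0 <= cc < ncol:
                    W[i, rr * ncol + cc] = 1.0
    return W

def morans_i(x, W):
    """Moran's I = (n / S0) * (z'Wz) / (z'z), with z the centred values."""
    z = x - x.mean()
    return len(x) / W.sum() * (z @ W @ z) / (z @ z)

rng = np.random.default_rng(1)
W = rook_adjacency(10, 10)
x_trend = np.add.outer(np.arange(10), np.arange(10)).ravel() + rng.normal(0, 1, 100)
x_noise = rng.normal(0, 1, 100)
print(morans_i(x_trend, W))  # strongly positive: spatial trend
print(morans_i(x_noise, W))  # near -1/(n-1): no spatial structure
```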

5.
In applying capture-recapture methods for closed populations to epidemiology, one needs to estimate the total number of people with a certain disease in a certain research area by using several lists with information on patients. List errors often arise through mistyping or misinformation. Adopting the concept of tag-loss methodology from animal populations, Seber et al. (Biometrics 2000; 56:1227-1232) proposed solutions to a two-list problem. This article reports a simulation study in which Bayesian point estimates based on the improper constant prior and the Jeffreys prior for the unknown population size N can have smaller frequentist standard errors and MSEs than the estimates proposed in Seber et al. (2000). The Bayesian credible intervals based on the same priors also achieve superior frequentist coverage probabilities, while some of the frequentist confidence interval procedures have drastically poor coverage. Seber's real data set on gestational diabetes is analysed with the proposed new methods.
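As a baseline illustration of the Bayesian two-list machinery (a minimal sketch that ignores the list errors the paper addresses; the counts are hypothetical, not Seber's diabetes data), the posterior of N under independent lists, uniform priors on the capture probabilities, and a Jeffreys-type prior p(N) ∝ 1/N can be computed on a grid:

```python
import numpy as np
from scipy.special import gammaln, betaln

# Hypothetical two-list counts (NOT Seber's data):
n10, n01, n11 = 40, 25, 15          # list 1 only, list 2 only, both lists
n_obs = n10 + n01 + n11             # distinct individuals observed
n1, n2 = n10 + n11, n01 + n11       # totals captured by each list

# Log-posterior of N: multinomial likelihood with the capture probabilities
# integrated out under uniform priors, times the Jeffreys-type prior 1/N.
N = np.arange(n_obs, 2001)
logpost = (gammaln(N + 1) - gammaln(N - n_obs + 1)
           + betaln(n1 + 1, N - n1 + 1)
           + betaln(n2 + 1, N - n2 + 1)
           - np.log(N))
post = np.exp(logpost - logpost.max())
post /= post.sum()

print("posterior mean of N:", (N * post).sum())
cdf = post.cumsum()
print("95% credible interval:", N[cdf >= 0.025][0], N[cdf <= 0.975][-1])
```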

6.
Huang Y, Dagne G, Wu L. Statistics in Medicine 2011; 30(24):2930-2946
Normality (symmetry) of the model random errors is a routine assumption for mixed-effects models in many longitudinal studies, but it may be unrealistic, obscuring important features of subject variation. Covariates are usually introduced into the models to partially explain inter-subject variation, but some covariates, such as CD4 cell count, are often measured with substantial error. This paper formulates a general class of models that allows the model errors to have skew-normal distributions for the joint behavior of longitudinal dynamic processes and a time-to-event process of interest. For estimating the model parameters, we propose a Bayesian approach that jointly models three components (response, covariate, and time-to-event processes) linked through the random effects that characterize the underlying individual-specific longitudinal processes. We discuss in detail special cases of the model class, which are offered to jointly model HIV dynamic response in the presence of a CD4 covariate process with measurement errors and time to decrease in the CD4/CD8 ratio, providing a tool to assess antiretroviral treatment and to monitor disease progression. We illustrate the proposed methods using data from a clinical trial of HIV treatment. The findings suggest that joint models with a skew-normal distribution may provide more reliable and robust results when the data exhibit skewness; such results may be particularly important for HIV/AIDS studies in providing quantitative guidance to better understand virologic responses to antiretroviral treatment.
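For reference, the skew-normal distribution used for the model errors has density (with location ξ, scale ω, and skewness parameter α; α = 0 recovers the normal distribution)

f(y) = \frac{2}{\omega}\, \phi\!\left(\frac{y - \xi}{\omega}\right) \Phi\!\left(\alpha\, \frac{y - \xi}{\omega}\right),

where φ and Φ are the standard normal density and distribution function.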

7.
In controlled clinical trials, where minimizing treatment failures is crucial, response-adaptive designs are attractive competitors to 1:1 randomized designs for comparing the success rates φ(1) and φ(2) of two treatments. In these designs, each new treatment assignment depends on previous outcomes through some predefined rule. Here, Play-The-Winner (PW), Randomized Play-The-Winner (RPW), Drop-The-Loser, Generalized Drop-The-Loser and Doubly Adaptive Biased Coin designs are considered for treatment assignment. Because frequentist inference relies on complex sampling distributions under these designs, we investigate how Bayesian inference, based on two independent Beta prior distributions, performs from a frequentist point of view. Performance is assessed through coverage probabilities of interval estimation procedures, power, and minimization of the failure count. Bayesian inference is shown to compare favorably with frequentist procedures where the latter are available. The power of response-adaptive designs is generally very close to that of the 1:1 randomized design. However, savings in failure count are generally small, except for the PW and Doubly Adaptive Biased Coin designs in particular ranges of the true success rates. The RPW assignment rule has the worst performance, while the PW, Generalized Drop-The-Loser and Doubly Adaptive Biased Coin designs may outperform the other designs, depending on the particular range of the true success rates.
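The sketch below simulates one such design, the Randomized Play-The-Winner urn, and applies the paper's style of conjugate Bayesian inference. The Beta(1, 1) priors, urn parameters and true success rates are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(7)
phi = np.array([0.7, 0.5])      # assumed true success rates of the two treatments
urn = np.array([1.0, 1.0])      # initial urn composition for the RPW rule
succ = np.zeros(2)
fail = np.zeros(2)

for _ in range(200):            # 200 patients enter sequentially
    arm = rng.choice(2, p=urn / urn.sum())   # draw a ball -> assign treatment
    if rng.random() < phi[arm]:
        succ[arm] += 1; urn[arm] += 1        # success: add a ball of the same type
    else:
        fail[arm] += 1; urn[1 - arm] += 1    # failure: add a ball of the other type

# Conjugate Bayesian inference with independent Beta(1, 1) priors:
# the posterior of phi_k is Beta(1 + successes_k, 1 + failures_k).
draws = rng.beta(1 + succ[:, None], 1 + fail[:, None], size=(2, 100_000))
print("P(phi_1 > phi_2 | data) ~", (draws[0] > draws[1]).mean())
print("failures per arm:", fail, "allocations:", succ + fail)
```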

8.
This paper presents a mixture model that combines features of the usual Cox proportional hazards model with those of a class of models known as mixtures-of-experts. The resulting model is more flexible than the usual Cox model in the sense that the log hazard ratio is allowed to vary non-linearly as a function of the covariates. It thus provides a flexible approach to both modelling survival data and model checking. The method is illustrated with simulated data, as well as with multiple myeloma data.
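One common way to write such a hybrid (a hedged sketch; the paper's exact gating and expert forms may differ) lets softmax gates mix linear experts inside the log hazard ratio:

\lambda(t \mid x) = \lambda_0(t) \exp\Bigl\{ \sum_{j=1}^{J} \pi_j(x)\, \beta_j^\top x \Bigr\}, \qquad \pi_j(x) = \frac{\exp(\gamma_j^\top x)}{\sum_{k=1}^{J} \exp(\gamma_k^\top x)},

so the log hazard ratio \sum_j \pi_j(x) \beta_j^\top x is a smooth, non-linear function of the covariates, and J = 1 recovers the ordinary Cox model.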

9.
Objective: Collinearity can produce inaccurate estimates and tests, and the existing remedies all suffer from shortcomings of varying degrees. The Yunnan Tin Company mine is a world-known area of high occupational lung cancer incidence, and its main risk factors are strongly collinear. This paper aims to overcome this linear dependence among the variables. Methods: Using the basic principles of tree structures, the data are stratified into tree-based layers, and the local effects of the variables and their specific modes of action are analysed within each layer. Results: The results agree closely with published biological experiments and partially explain the long-unexplained fact that the mine's exposure dose to radioactive radon progeny is not the highest among comparable mines internationally, yet its lung cancer incidence is far higher than that of other mines. Conclusion: Tree-based stratification can effectively remove the influence of collinearity on regression.
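A minimal sketch of the idea of tree-based stratification against collinearity (hypothetical simulated data and variable roles; not the authors' implementation): partition the data with a shallow tree grown on one of the collinear exposures, then estimate the local effect of the other exposure within each stratum, where the first is roughly constant.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)
n = 5000
x1 = rng.normal(size=n)                    # e.g., one exposure (hypothetical)
x2 = 0.9 * x1 + rng.normal(0, 0.3, n)      # strongly collinear co-exposure
y = 1.0 * x1 + 0.5 * x2 + rng.normal(size=n)
print("corr(x1, x2) =", np.corrcoef(x1, x2)[0, 1].round(2))

# Tree stratification: partition on x1 so it is roughly constant within each
# stratum, then estimate the local effect of x2 stratum by stratum.
leaf = DecisionTreeRegressor(max_leaf_nodes=25).fit(x1[:, None], y).apply(x1[:, None])
slopes = []
for lf in np.unique(leaf):
    m = leaf == lf
    b, *_ = np.linalg.lstsq(np.column_stack([np.ones(m.sum()), x2[m]]), y[m], rcond=None)
    slopes.append(b[1])
print("median within-stratum x2 slope:", np.median(slopes).round(2))  # near the true 0.5
```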

10.
Objective

Randomized trials generally use “frequentist” statistics based on P-values and 95% confidence intervals. Frequentist methods have limitations that might be overcome, in part, by Bayesian inference. To illustrate these advantages, we re-analyzed randomized trials published in four general medical journals during 2004.

Study Design and Setting

We used Medline to identify randomized superiority trials with two parallel arms, individual-level randomization and dichotomous or time-to-event primary outcomes. Studies with P < 0.05 in favor of the intervention were deemed “positive”; otherwise, they were “negative.” We used several prior distributions and exact conjugate analyses to calculate Bayesian posterior probabilities for clinically relevant effects.

Results

Of 88 included studies, 39 were positive using a frequentist analysis. Although the Bayesian posterior probabilities of any benefit (relative risk or hazard ratio < 1) were high in positive studies, these probabilities were lower and variable for larger benefits. The positive studies had only moderate probabilities for exceeding the effects that were assumed for calculating the sample size. By comparison, there were moderate probabilities of any benefit in negative studies.

Conclusion

Bayesian and frequentist analyses complement each other when interpreting the results of randomized trials. Future reports of randomized trials should include both.
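A minimal sketch of the exact conjugate analysis for a dichotomous outcome (hypothetical trial counts and a flat Beta(1, 1) prior; the paper explored several priors): with Beta priors, binomial arm data give Beta posteriors, and posterior probabilities of relative-risk thresholds follow by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical two-arm trial with a dichotomous outcome (not one of the 88 trials):
events_t, n_t = 30, 200     # treatment arm
events_c, n_c = 45, 200     # control arm

# Exact conjugate analysis: Beta(1, 1) prior + binomial likelihood -> Beta posterior.
p_t = rng.beta(1 + events_t, 1 + n_t - events_t, 200_000)
p_c = rng.beta(1 + events_c, 1 + n_c - events_c, 200_000)
rr = p_t / p_c

print("P(RR < 1    | data) ~", (rr < 1).mean())     # any benefit
print("P(RR < 0.75 | data) ~", (rr < 0.75).mean())  # larger, clinically relevant benefit
```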

11.
The availability of longstanding collections of detailed cancer patient information makes multivariable modelling of the cancer-specific hazard of death appealing. We propose to report the variation in survival explained by each variable in these models. We adapted the ranks explained (RE) measure to the relative survival data setting, i.e., when competing risks of death are accounted for through life tables from the general population. RE is calculated at each event time, and we introduce weights for each death reflecting its probability of being a cancer death. RE varies between −1 and +1 and can be reported at given times in the follow-up and as a time-varying measure from diagnosis onward. We present an application to patients diagnosed with colon or lung cancer in England. The RE measure shows reasonable properties and is comparable in both the relative and cause-specific settings. One year after diagnosis, RE for the most complex excess hazard models reaches 0.56 (95% CI: 0.54–0.58) for men and 0.58 (95% CI: 0.56–0.60) for women with lung cancer, and 0.69 (95% CI: 0.68–0.70) for men and 0.67 (95% CI: 0.66–0.69) for women with colon cancer. Stage at diagnosis accounts for 12.4% (10.8%) of the overall variation in survival among male (female) lung cancer patients, whereas it carries 61.8% (53.5%) of the survival variation in colon cancer patients. Variables other than performance status for lung cancer (10%) contribute very little to the overall explained variation. The proportion of the variation in survival explained by key prognostic factors is crucial information toward understanding the mechanisms underpinning cancer survival. The time-varying RE provides insights into patterns of influence for strong predictors.

12.
The paper describes a reconstruction algorithm for spatial distributions of the optical parameters of biological objects, based on a non-stationary axial model of radiation transport under the assumption that the absorption and scattering coefficients are proportional. The proportionality coefficient used in the reconstruction is shown to influence the precision of the restored images, and recommendations are given on how to diminish the resulting distortions.

13.
14.
Dawson R, Lavori PW. Statistics in Medicine 2002; 21(12):1641-61; discussion 1663-87
We estimate the effects of non-randomized, time-varying treatments on the discrete-time hazard using inverse weighting. We consider the special monotone pattern of treatment that develops over time as subjects permanently discontinue an initial treatment, and assume that treatment selection is sequentially ignorable. We use a propensity score in the hazard model to reduce the potential for finite-sample bias due to inverse weighting. When the number of subjects who discontinue treatment at any given time is small, we impose scientific restrictions on the potentially observable discontinuation hazards to improve efficiency. We use predictive inference to account for the correlation of the potential hazards when comparing outcomes under different durations of initial treatment.
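For orientation, the standard stabilized inverse-probability weight for a monotone, time-varying treatment (the generic form; the paper's construction adds a propensity score in the hazard model and scientific restrictions beyond this) is

w_i(t) = \prod_{k=0}^{t} \frac{\Pr(A_k = a_{ik} \mid \bar A_{i,k-1} = \bar a_{i,k-1},\ V_i)}{\Pr(A_k = a_{ik} \mid \bar A_{i,k-1} = \bar a_{i,k-1},\ \bar L_{ik})},

where A_k is treatment status in interval k, V_i denotes baseline covariates, and \bar L_{ik} is the history of time-varying covariates. Under sequential ignorability, weighting creates a pseudo-population in which discontinuation is unrelated to the measured history.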

15.
When assessing the association between a binary trait and covariates, the binary response may be subject to unidirectional misclassification. Unidirectional misclassification can occur when revealing a particular level of the trait carries a cost, such as a social desirability or financial cost. The feasibility of addressing misclassification is commonly obscured by model identification issues. The current paper studies the efficacy of inference when the binary response variable is subject to unidirectional misclassification. From a theoretical perspective, we demonstrate that the key model parameters are identifiable, except for the case with a single binary covariate. From a practical standpoint, the logistic model with quantitative covariates can be weakly identified, in the sense that the Fisher information matrix may be near-singular. This can make learning some parameters difficult under certain parameter settings, even with quite large samples. In other cases, stronger identification enables the model to provide more effective adjustment for unidirectional misclassification. An extension to the Poisson approximation of the binomial model establishes the identifiability of the Poisson and zero-inflated Poisson models. For fully identified models, the proposed method adjusts for misclassification by learning from the data. For binary models where identification is difficult, the method is useful for sensitivity analyses of the potential impact of unidirectional misclassification.
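A standard way to write the observed-response model under one-directional misclassification (a hedged sketch consistent with the abstract; the paper's parameterization may differ): if a true positive is reported as negative with probability θ and negatives are never misreported, the observed response Y* follows

\Pr(Y^{*} = 1 \mid x) = (1 - \theta)\, \frac{\exp(\beta_0 + \beta^\top x)}{1 + \exp(\beta_0 + \beta^\top x)},

where the attenuation factor (1 − θ) is difficult to separate from the intercept, consistent with the near-singular Fisher information the paper reports for quantitative covariates.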

16.
A discussion on applying mathematical model methods in the pre-evaluation of occupational disease hazards of construction projects
Accidents involving hazardous chemicals occur from time to time. The leakage of large quantities of hazardous chemical substances can lead to major accidents such as fires, explosions and poisonings, causing serious casualties and environmental pollution, and can trigger public health events such as sudden mass poisoning. In the pre-evaluation of occupational disease hazards for construction projects, a scientific, concise, feasible and standardized quantitative evaluation method is therefore needed to quantify the affected area and severity of sudden poisoning accidents, so that the pre-evaluation report can offer concrete guidance to the organization building the project. At present, however, such a method is lacking both in the pre-evaluation standards and in practice. Drawing on the methods, working experience and accident case studies of environmental impact assessment and safety pre-evaluation for construction projects, we therefore explore the application of mathematical model methods in the pre-evaluation of occupational disease hazards to quantitatively evaluate the affected area and severity of occupational disease hazard accidents.
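The abstract does not name a specific model; one widely used candidate for quantifying the affected area of a toxic release is the Gaussian plume dispersion model, sketched below with entirely hypothetical release and dispersion parameters.

```python
import numpy as np

def gaussian_plume(q, u, y, z, h, sy, sz):
    """Steady-state Gaussian plume concentration (g/m^3) with ground reflection:
    q emission rate (g/s), u wind speed (m/s), (y, z) crosswind offset and
    height (m), h effective release height (m), sy/sz dispersion parameters (m)
    evaluated at the downwind distance of interest."""
    return (q / (2 * np.pi * u * sy * sz)
            * np.exp(-y**2 / (2 * sy**2))
            * (np.exp(-(z - h)**2 / (2 * sz**2))
               + np.exp(-(z + h)**2 / (2 * sz**2))))

# Hypothetical continuous leak: 50 g/s at 2 m height, 3 m/s wind; sy and sz are
# illustrative values for roughly 500 m downwind under neutral stability.
c = gaussian_plume(q=50, u=3, y=0, z=1.5, h=2, sy=36, sz=18)
print(f"centre-line concentration at breathing height: {c * 1000:.2f} mg/m^3")
```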

17.
We compare parameter estimates from the proportional hazards model, the cumulative logistic model and a new modified logistic model (referred to as the person-time logistic model), using simulated data sets in which the following quantities were varied: disease incidence, risk factor strength, length of follow-up, the proportion censored, non-proportional hazards, and sample size. Parameter estimates from the person-time logistic regression model closely approximated those from the Cox model when the survival time distribution was close to exponential, but could differ substantially in other situations. Parameter estimates from the cumulative logistic model were similar to those from the Cox and person-time logistic models when the disease was rare, the risk factor moderate, and censoring rates similar across the covariates. We also compare the models in an analysis of a real data set on the relationship of age, race, sex, blood pressure, and smoking to subsequent mortality. In this example, the length of follow-up among survivors varied from 5 to 14 years, and the Cox and person-time logistic approaches gave nearly identical results. The cumulative logistic results had somewhat larger p-values but were substantively similar for all but one coefficient (the age-race interaction); the latter difference reflects differential censoring rates by age, race and sex.
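A hedged sketch of the person-time idea (the paper's exact specification may differ): split each subject's follow-up into short intervals and fit a logistic model to the per-interval event indicator among those still at risk,

\operatorname{logit} \Pr(Y_{ik} = 1 \mid \text{at risk in interval } k,\ Z_i) = \alpha_k + \beta^\top Z_i;

when per-interval event probabilities are small, logit p ≈ log p, so exp(β) approximates the hazard ratio from the Cox model, which is consistent with the two approaches nearly coinciding when the survival distribution is close to exponential.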

18.
19.
Methodology for causal inference based on propensity scores has been developed and popularized over the last two decades. However, the majority of the methodology has concentrated on binary treatments; only recently have these methods been extended to settings with multi-valued treatments. We propose a number of discrete choice models for estimating the propensity scores. The models differ in their flexibility with respect to potential correlation between treatments and, in turn, in the accuracy of the estimated propensity scores. We present the effects of the discrete choice models on the performance of the causal estimators through a Monte Carlo study. We also illustrate the use of discrete choice models to estimate the effect of antipsychotic drug use on the risk of diabetes in a cohort of adults with schizophrenia.
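A minimal sketch using the simplest of the candidate discrete choice models, a multinomial logit, to estimate generalized propensity scores e_k(x) = P(T = k | X = x) and form inverse-probability weights (simulated data and effect sizes are hypothetical; the paper compares more flexible alternatives):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 3000
X = rng.normal(size=(n, 3))                         # hypothetical confounders
coef = np.array([[0.5, -0.3, 0.0],                  # treatment 0
                 [0.2,  0.4, -0.5],                 # treatment 1
                 [0.0,  0.0,  0.0]])                # treatment 2 (reference)
p = np.exp(X @ coef.T)
p /= p.sum(axis=1, keepdims=True)                   # true assignment probabilities
t = np.array([rng.choice(3, p=row) for row in p])   # observed multi-valued treatment
y = 0.5 * (t == 1) + 1.0 * (t == 2) + X @ np.array([1.0, 1.0, 0.5]) + rng.normal(size=n)

# Multinomial logit for the generalized propensity score e_k(x) = P(T = k | X = x)
ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)

# Weight each subject by the inverse score of the treatment actually received
w = 1.0 / ps[np.arange(n), t]
for k in range(3):
    m = t == k
    print(f"treatment {k}: IPW mean outcome = {np.average(y[m], weights=w[m]):.2f}")
    # approximately 0.0, 0.5 and 1.0 under this simulation
```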

20.

