Similar Literature
A total of 20 similar documents were retrieved (search time: 15 ms).
1.
Many hormones and other physiological processes vary in a circadian pattern. Although a sine/cosine function can be used to model these patterns, this functional form is not appropriate when there is asymmetry between the peak and nadir phases. In this paper we describe a semiparametric periodic spline function that can be fit to circadian rhythms. The model includes both phase and amplitude so that the time and the magnitude of the peak or nadir can be estimated. We also describe tests of fit for components in the model. Data from an experiment to study immunological responses in humans are used to demonstrate the methods.
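For readers who want to see how phase and amplitude fall out of the simpler sine/cosine (cosinor) model that the periodic spline generalizes, here is a minimal sketch in Python on simulated hormone-like data; the values and sampling scheme are invented, not the immunology data from the paper. A periodic spline fit would replace the two harmonic columns of the design matrix with a periodic B-spline basis, which is what accommodates asymmetric peak and nadir phases.

import numpy as np

# Hypothetical circadian measurements sampled over 24 h (illustrative only).
rng = np.random.default_rng(0)
t = np.tile(np.arange(0, 24, 2.0), 5)                   # sampling times in hours
true_mesor, true_amp, true_peak = 10.0, 3.0, 8.0        # assumed "truth"
omega = 2 * np.pi / 24
y = (true_mesor + true_amp * np.cos(omega * (t - true_peak))
     + rng.normal(0, 0.5, t.size))

# Cosinor model: y = m + a*cos(omega*t) + b*sin(omega*t), fit by least squares.
X = np.column_stack([np.ones_like(t), np.cos(omega * t), np.sin(omega * t)])
m, a, b = np.linalg.lstsq(X, y, rcond=None)[0]

amplitude = np.hypot(a, b)                              # A = sqrt(a^2 + b^2)
peak_time = (np.arctan2(b, a) / omega) % 24             # acrophase (time of peak) in hours

print(f"mesor={m:.2f}, amplitude={amplitude:.2f}, peak at {peak_time:.1f} h")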

2.
BACKGROUND: Relative risks are the most common statistics used to quantify the risk of mortal or morbid outcomes associated with different patient groups and therapeutic interventions. However, absolute risks are of greater value to both patient and physician in making clinical decisions. METHODS: The relationship between relative and absolute risks is explained using graphical aids. A program to estimate absolute risks from relative risks is available on the internet (see ftp://ftp.vanderbilt.edu/pub/biostat/absrisk+ ++.txt). This program uses a competing hazards model of morbidity and mortality to derive these estimates. RESULTS: When a patient's absolute risk is low, it can be approximated by multiplying her relative risk by the absolute risk in the reference population. This approximation fails for higher absolute risks. The relationship between relative and absolute risk can vary dramatically for different diseases. This is illustrated by breast cancer morbidity and cardiovascular mortality in American women. The accuracy of absolute risk estimates will be affected by the accuracy of relative risk estimates, by the appropriateness of the reference groups used to calculate relative risks, by the stability of cross-sectional, age-specific morbidity and mortality rates over time, by the influence of individual risk factors on multiple causes of mortality, and by the extent to which relative risks may vary over time. CONCLUSIONS: Valid absolute risk estimates are valuable when making treatment decisions. They can often be obtained over time intervals of 10 to 20 years when the corresponding relative risk estimates have been accurately determined.
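A rough numerical illustration of the point about the product approximation, assuming constant competing exponential hazards and made-up rates (this is a simplified sketch, not the authors' internet program):

import numpy as np

def abs_risk(rate_disease, rate_other, years):
    """Cumulative incidence of the disease under constant competing hazards."""
    total = rate_disease + rate_other
    return rate_disease / total * (1.0 - np.exp(-total * years))

# Assumed reference rates (per person-year) -- illustrative only, not real data.
lam_d, lam_o, horizon = 0.002, 0.01, 20.0
baseline = abs_risk(lam_d, lam_o, horizon)

for rr in [1.0, 2.0, 5.0, 20.0]:
    exact = abs_risk(rr * lam_d, lam_o, horizon)        # competing-hazards result
    approx = rr * baseline                              # naive product approximation
    print(f"RR={rr:5.1f}  exact={exact:.3f}  approx={approx:.3f}")

At small relative risks the two columns agree closely; at large relative risks the naive product overstates the absolute risk, which is the failure mode described in the abstract.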

3.
This paper deals with analysis of data from longitudinal studies where the rate of a recurrent event characterizing morbidity is the primary criterion for treatment evaluation. We consider clinical trials which require patients to visit their clinical center at successive scheduled times as part of follow-up. At each visit, the patient reports the number of events that occurred since the previous visit, or an examination reveals the number of accumulated events, such as skin cancers. The exact occurrence times of the events are unavailable and the actual patient visit times typically vary randomly about the scheduled follow-up times. Each patient's record thus consists of a sequence of clinic visit dates, event counts corresponding to the successive time intervals between clinic visits, and baseline covariates. We propose a semiparametric regression model, extending the fully parametric model of Thall (1988, Biometrics 44, 197-209), to estimate and test for covariate effects on the rate of events over time while also accounting for the possibly time-varying nature of the underlying event rate. Covariate effects enter the model parametrically, while the underlying time-varying event rate is modelled nonparametrically. The method of Severini and Wong (1992, Annals of Statistics 20, 1768-1802) is used to construct asymptotically efficient estimators of the parametric component and to specify their asymptotic distribution. A simulation study and application to a data set are provided.
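As a point of reference, a fully parametric cousin of this setup treats each between-visit count as Poisson with the log interval length as an offset. The sketch below is that simpler model, fit with statsmodels on simulated visit records; it is neither Thall's mixed Poisson model nor the authors' semiparametric estimator, and all variable names and rates are invented.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Simulated records: each row is one between-visit interval for one patient.
n_pat, n_visits = 200, 5
treat = rng.integers(0, 2, n_pat)                       # baseline covariate
rows = []
for i in range(n_pat):
    for j in range(n_visits):
        length = rng.uniform(0.5, 1.5)                  # months between visits
        rate = 0.8 * np.exp(-0.5 * treat[i])            # assumed true event rate
        rows.append({"treat": treat[i], "length": length,
                     "count": rng.poisson(rate * length)})
df = pd.DataFrame(rows)

# Poisson model for counts: log E[count] = log(length) + b0 + b1*treat.
X = sm.add_constant(df["treat"])
fit = sm.GLM(df["count"], X, family=sm.families.Poisson(),
             offset=np.log(df["length"])).fit()
print(fit.summary())                                    # b1 estimates the log rate ratio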

4.
In the analysis of survival data using the Cox proportional hazards (PH) model, it is important to verify that the explanatory variables analysed satisfy the proportional hazards assumption of the model. This paper presents results of a simulation study that compares five test statistics for checking the proportional hazards assumption of Cox's model. The test statistics were evaluated under proportional hazards and under the following types of departure from the proportional hazards assumption: increasing relative hazards, decreasing relative hazards, crossing hazards, diverging hazards, and non-monotonic hazards. The test statistics compared include those based on partitioning of failure time and those that do not require partitioning of failure time. The simulation results demonstrate that the time-dependent covariate test, the weighted residuals score test and the linear correlation test have equally good power for detection of non-proportionality in the varieties of non-proportional hazards studied. Using illustrative data from the literature, these test statistics performed similarly.
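One of the better-performing checks, the weighted residuals (scaled Schoenfeld) score test, is available off the shelf. The following sketch uses the lifelines package and its bundled Rossi recidivism data, which are our choices of software and example data rather than those used in the paper.

# Check the proportional hazards assumption via a scaled Schoenfeld residual test.
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi
from lifelines.statistics import proportional_hazard_test

df = load_rossi()                                   # columns: week, arrest, covariates
cph = CoxPHFitter().fit(df, duration_col="week", event_col="arrest")

# A small p-value for a covariate suggests its hazard ratio changes over time,
# i.e. a departure from proportional hazards for that covariate.
result = proportional_hazard_test(cph, df, time_transform="rank")
result.print_summary()

The time_transform argument (rank, log, or identity time) corresponds to different weightings of the residuals; the time-dependent covariate test mentioned in the abstract can alternatively be run by refitting the Cox model with a covariate-by-time interaction.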

5.
Previously we proposed an aggregate data study design for estimation of exposure effects from population-based disease rates and covariate data from risk factor surveys in each population group. A basic relative rate model specified for individuals is aggregated to produce a random effects relative rate model for the disease rates. Relative rate parameter estimates from aggregate data studies target the same parameters as individual-level studies but use between-group information in the data. We distinguish aggregate data studies from ecologic studies. Considerations in the design of aggregate studies are motivated by the need to gain clearer understanding of the role of diet in cancer aetiology. Simulation studies show that increasing the number of populations included in an aggregate data study from about 20 to 30-40 gives greater improvement in power than corresponding increases in the size of the survey sample in each population over an initial size of 100 individuals.
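The design trade-off can be made concrete with a crude simulation. The sketch below compares power when the number of population groups grows versus when the per-group survey sample grows; the effect size, baseline rate, and person-time figures are invented for illustration and are not taken from the paper.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

def power(n_groups, survey_n, n_sim=400, beta=0.1):
    """Crude power estimate for detecting an exposure effect from aggregate data."""
    hits = 0
    for _ in range(n_sim):
        true_mean = rng.normal(0.0, 1.0, n_groups)                 # true group exposure means
        survey = true_mean + rng.normal(0, 1 / np.sqrt(survey_n), n_groups)  # survey estimates
        py = rng.uniform(1e3, 3e3, n_groups)                       # person-years per group
        cases = rng.poisson(0.01 * np.exp(beta * true_mean) * py)  # group disease counts
        X = sm.add_constant(survey)
        fit = sm.GLM(cases, X, family=sm.families.Poisson(),
                     offset=np.log(py)).fit()
        hits += fit.pvalues[1] < 0.05
    return hits / n_sim

for g, n in [(20, 100), (40, 100), (20, 400)]:
    print(f"groups={g:3d}  survey n={n:4d}  power~{power(g, n):.2f}")

Under these assumptions, adding population groups moves the power estimate noticeably more than quadrupling the survey sample within each group, which mirrors the qualitative conclusion of the abstract.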

6.
A special model for dental care in pre-school children was used in a small clinic in the county of Blekinge in southern Sweden. The model is based on screening for caries risk, performed by a dental assistant before the caries attack. Any single risk factor or risk behavior in pre-school children was considered. The aims were to 1) evaluate the dental assistant's selection of caries-risk children up to the age of three years, 2) compare dental health variables in 4 yr olds in the test clinic with those for the whole county in 1994, and 3) compare the time spent by the dentist and the dental assistant per child up to the age of four in the test clinic and in the whole county. 102 children participated. One specially trained dental assistant screened all children using background factors combined with clinical examinations at ages 1, 2 and 3. Eighty-two children participated every year from the age of one. A systematized form for questioning the parents was used. Individual caries prevention was given, including fluoride and antimicrobial treatments as well as fissure sealants in primary molars at caries risk. The proportion of children with caries lesions at four years who had been identified as at caries risk by the age of two was 1.0 (sensitivity). The proportion of children with no caries lesions at four years who had not been identified as at caries risk at age two was 0.7 (specificity). The most frequent risk factors found at 2 yrs were, in order of frequency: lack of oral hygiene (visible plaque), deep fissures in molars, and frequent intakes of sweet drinks. The proportion of children with no caries lesions at 4 yrs of age in the test clinic was 92.9%, compared with a county mean of 76.4%. In the group of children for whom a risk assessment was made every year from the age of one, the proportion of caries-free children was 96.3%. The total time spent per child in the test clinic was 22 minutes more than the county mean; however, the dentist's time, excluding the assistant's, was 28 minutes less in the test clinic. The results suggest that the model used for caries prevention in pre-school children is cost-effective and that dental health can be remarkably improved.

7.
It is common in catastrophic food-contamination events that consumers fail to adjust instantaneously back to a normal consumption level. One explanation is that consumers only gradually accept new positive information as being trustworthy. The gradual establishment of the trustworthiness of the released information depends on both positive and negative media coverage over time. We examine the individual "trust" effects by extending the prospective reference theory (Viscusi, 1989) to include a dynamic adjustment process of risk perception. Conditions that allow aggregation of changes in risk perceptions across individuals are described. The proposed model describes a general updating process of risk perceptions in response to media coverage and can be applied to explain the temporal impact of media coverage on consumption of a broad range of goods (food or nonfood). A case study of milk contamination is conducted to demonstrate the consumer demand adjustment process following a temporary unfavorable shock. The results suggest that the effects of positive and negative information on the adjustment of consumption and risk perception are asymmetric over time.
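A toy version of such a dynamic updating rule, our simplification rather than the published prospective-reference-theory model, treats perceived risk as a precision-weighted blend of the prior perception and each period's media signals, with negative coverage weighted more heavily than positive coverage. All weights, risk levels, and the coverage sequence below are invented.

def update_perceived_risk(prior, pos_articles, neg_articles,
                          w_prior=4.0, w_pos=1.0, w_neg=3.0,
                          risk_if_positive=0.01, risk_if_negative=0.30):
    """One period of a toy weighted update of perceived contamination risk."""
    weight = w_prior + w_pos * pos_articles + w_neg * neg_articles
    signal = (w_pos * pos_articles * risk_if_positive
              + w_neg * neg_articles * risk_if_negative)
    return (w_prior * prior + signal) / weight

# Hypothetical weekly media coverage: (positive, negative) article counts
# after a contamination scare, followed by gradual reassurance.
perceived = 0.02
coverage = [(0, 10), (2, 5), (4, 2), (6, 1), (6, 0), (6, 0)]
for week, (pos, neg) in enumerate(coverage, start=1):
    perceived = update_perceived_risk(perceived, pos, neg)
    print(f"week {week}: perceived risk = {perceived:.3f}")

Because negative articles carry more weight, perceived risk jumps quickly after the scare but decays only slowly as positive coverage accumulates, reproducing the asymmetry described in the abstract in stylized form.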

8.
The set of statistical methods available to developmentalists is continually being expanded, allowing questions about change over time to be addressed in new, informative ways. Indeed, new developments in methods to model change over time create the possibility for new research questions to be posed. Latent transition analysis, a longitudinal extension of latent class analysis, is a method that can be used to model development in discrete latent variables, for example stage processes, over two or more time points. The current article illustrates this approach using a new SAS procedure, PROC LTA, to model change over time in adolescent and young adult dating and sexual risk behavior. Gender differences are examined, and substance use behaviors are included as predictors of initial status in dating and sexual risk behavior and of transitions over time.

9.
We explore the effects of measurement error in a time-varying covariate for a mixed model applied to a longitudinal study of plasma levels and dietary intake of beta-carotene. We derive a simple expression for the bias of large-sample estimates of the variance of random effects in a longitudinal model for plasma levels when dietary intake is treated as a time-varying covariate subject to measurement error. In general, estimates of these variances made without consideration of measurement error are biased positively, unlike estimates of the slope coefficients, which tend to be 'attenuated'. If we can assume that the residuals from a longitudinal fit for the time-varying covariate behave like measurement errors, we can estimate the original parameters without the need for additional validation or reliability studies. We propose a method to test this assumption and show that the assumption is reasonable for the example data. We then use a likelihood-based method of estimation that involves a simple extension of existing methods for fitting mixed models. Simulations illustrate the properties of the estimators.
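The direction of both biases can be reproduced in a small simulation with invented parameters (this is not the beta-carotene data): fitting the same random-intercept model once with the true time-varying covariate and once with an error-prone version shows the attenuated slope and the inflated random-intercept variance.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_sub, n_obs = 300, 4
sub = np.repeat(np.arange(n_sub), n_obs)

# True time-varying covariate (e.g. dietary intake) with between- and within-subject parts.
x_true = rng.normal(0, 1.0, n_sub)[sub] + rng.normal(0, 0.7, n_sub * n_obs)
x_obs = x_true + rng.normal(0, 1.0, n_sub * n_obs)        # error-prone measurement
b = rng.normal(0, 0.5, n_sub)[sub]                        # random intercepts, variance 0.25
y = 2.0 + 1.0 * x_true + b + rng.normal(0, 0.5, n_sub * n_obs)

df = pd.DataFrame({"id": sub, "y": y, "x_true": x_true, "x_obs": x_obs})

# Fit the same random-intercept model with the true and the mismeasured covariate.
for x in ["x_true", "x_obs"]:
    fit = smf.mixedlm(f"y ~ {x}", df, groups=df["id"]).fit()
    print(f"{x}: slope = {fit.fe_params[x]:.2f}, "
          f"random-intercept variance = {float(fit.cov_re.iloc[0, 0]):.2f}")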

10.
11.
Acute rejection episodes are thought to be prognostic of eventual kidney graft failure. The influence of rejection events on the hazard of transplant failure appears to be a complex function of how long after transplantation the rejection event occurs as well as the time elapsed since the rejection event. To examine the nature of this relationship, we propose a penalized likelihood approach to estimate the parameters of a two-dimensional rectanglewise constant hazard model. The approach appears to be fairly successful at modelling time dependency in a time-varying covariate. The approach is equally applicable for modelling fixed covariates that act in a jointly nonproportional (non-log-linear) and time-dependent manner.

12.
A simple method for testing the assumption of independent censoring is developed using a Cox proportional hazards regression model with a time-dependent covariate. This method involves further follow-up of a subset of lost-to-follow-up censored subjects. An adjusted estimator of the survivor function is obtained for the dependent censoring model under a proportional hazards alternative. The proposed procedure is applied to an example of a clinical trial for lung cancer and a simulation study is given for investigating the power of the proposed test.

13.
Objective: We examine the prospective relationship between mastery, where limited mastery is defined as the inability to control negative emotions (and perceiving stressful experiences as beyond personal control), and cardiovascular disease (CVD) mortality, particularly among individuals at apparently low CVD risk. Design: Prospective population-based study of 19,067 men and women, aged 41–80 years, with no previous heart disease or stroke at baseline assessment. Main Outcome Measures: The primary outcome measure was CVD mortality. Results: A total of 791 CVD deaths were recorded up to June 2009 during a median of 11.3 years of follow-up. Limited perceived mastery over life circumstances was associated with an increased risk of CVD mortality, independently of biological, lifestyle, and socioeconomic risk factors (hazard ratio 1.11 per SD decrease in mastery score, 95% confidence interval 1.01–1.21). This association was more pronounced among those participants apparently at low CVD risk (p = .01 for the test of interaction according to the number of CVD risk factors at baseline). Conclusions: Limited perceived control over life circumstances is associated with an increased risk of CVD mortality, independently of classical cardiovascular risk factors, and particularly among those at apparently low risk. Future attention should be given to this potentially modifiable personal characteristic, through the design of preliminary intervention studies, to reduce cardiovascular risk.

14.
In situations in which one cannot specify a single primary outcome, epidemiologic analyses often examine multiple associations between outcomes and explanatory covariates or risk factors. To compare alternative approaches to the analysis of multiple outcomes in regression models, I used generalized estimating equations (GEE) models, a multivariate extension of generalized linear models, to incorporate the dependence among the outcomes from the same subject and to provide robust variance estimates of the regression coefficients. I applied the methods in a hospital-population-based study of complications of surgical anaesthesia, using GEE model fitting and quasi-likelihood score and Wald tests. In one GEE model specification, I allowed the associations between each of the outcomes and a covariate to differ, yielding a regression coefficient for each of the outcome and covariate combinations; I obtained the covariances among the set of outcome-specific regression coefficients for each covariate from the robust 'sandwich' variance estimator. To address the problem of multiple inference, I used simultaneous methods that make adjustments to the test statistic p-values and the confidence interval widths, to control type I error and simultaneous coverage, respectively. In a second model specification, for each of the covariates I assumed a common association between the outcomes and the covariate, which eliminates the problem of multiplicity by use of a global test of association. In an alternative approach to multiplicity, I used empirical Bayes methods to shrink the outcome-specific coefficients toward a pooled mean that is similar to the common effect coefficient. GEE regression models can provide a flexible framework for estimation and testing of multiple outcomes.
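A stripped-down version of the first specification, outcome-specific coefficients with a robust 'sandwich' variance, can be set up in statsmodels by stacking the outcomes into long format and clustering on subject. The outcome names and data below are invented and are not those of the anaesthesia study.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 500
age = rng.normal(60, 10, n)

# Two hypothetical post-anaesthesia complications per patient, both related to age.
p1 = 1 / (1 + np.exp(-(-5.0 + 0.05 * age)))
p2 = 1 / (1 + np.exp(-(-3.0 + 0.02 * age)))
y1, y2 = rng.binomial(1, p1), rng.binomial(1, p2)

# Long format: one row per (patient, outcome); 'outcome' labels which complication.
df = pd.DataFrame({
    "id": np.tile(np.arange(n), 2),
    "outcome": np.repeat(["nausea", "hypotension"], n),
    "y": np.concatenate([y1, y2]),
    "age": np.tile(age, 2),
}).sort_values("id").reset_index(drop=True)

# Outcome-specific intercepts and age effects, with robust (sandwich) standard errors
# from the GEE fit; subjects are the clusters.
result = smf.gee("y ~ 0 + outcome + outcome:age", groups="id", data=df,
                 family=sm.families.Binomial(),
                 cov_struct=sm.cov_struct.Exchangeable()).fit()
print(result.summary())

Replacing the interaction with a single shared age term corresponds to the second, common-effect specification described in the abstract.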

15.
Test statistics for the homogeneity of the risk difference across a series of 2 x 2 tables when the data are sparse are proposed. A weighted least squares statistic is commonly used to test for equality of the risk difference over the tables; however, when the data are sparse, this statistic can have anticonservative Type I error rates. Simulation is used to compare the proposed test statistics to the weighted least squares statistic. The weighted least squares statistic has the most anticonservative Type I error rates of all the statistics compared. We suggest the use of one of our proposed test statistics instead of the weighted least squares statistic.
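The weighted least squares homogeneity statistic that serves as the comparator has a simple closed form; the sketch below implements that comparator (not the authors' proposed statistics) for made-up sparse strata.

import numpy as np
from scipy.stats import chi2

def wls_homogeneity(tables):
    """Weighted least squares test that the risk difference is constant across 2x2 tables.

    Each table is (events1, n1, events2, n2). Returns (Q, degrees of freedom, p-value).
    """
    tables = np.asarray(tables, dtype=float)
    p1 = tables[:, 0] / tables[:, 1]
    p2 = tables[:, 2] / tables[:, 3]
    d = p1 - p2                                            # per-table risk difference
    v = p1 * (1 - p1) / tables[:, 1] + p2 * (1 - p2) / tables[:, 3]
    w = 1.0 / v                                            # inverse-variance weights
    d_bar = np.sum(w * d) / np.sum(w)                      # pooled risk difference
    q = np.sum(w * (d - d_bar) ** 2)                       # ~ chi-square on k-1 df
    df = len(tables) - 1
    return q, df, chi2.sf(q, df)

# Hypothetical sparse strata: (events in arm 1, n1, events in arm 2, n2).
strata = [(3, 12, 1, 11), (2, 9, 2, 10), (4, 15, 1, 14), (1, 8, 2, 9)]
q, df, p = wls_homogeneity(strata)
print(f"Q = {q:.2f} on {df} df, p = {p:.3f}")

Strata in which an arm has all or no events give a zero variance estimate and break the weighting, which is one symptom of the sparse-data problem the paper addresses.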

16.
BACKGROUND: Current guidelines state that the goal of antiretroviral therapy for HIV-infected individuals is to suppress plasma viral load (pVL) to below 400 copies/ml. METHODS: Predictors of achieving and maintaining pVL suppression were examined in a randomized trial of combinations of zidovudine, nevirapine and didanosine in patients with CD4+ T cell counts of between 200 and 600 x 10^6 cells/l who were naive to antiretroviral therapy and AIDS-free at enrolment. RESULTS: One hundred and four patients had pVL > 500 copies/ml at baseline and a pVL nadir below 500 copies/ml. Of these, 77 patients experienced an increase in pVL above 500 copies/ml. The median number of days of pVL suppression below 500 copies/ml was 285 for patients with a pVL nadir ≤ 20 copies/ml and 42 for those with a nadir > 20 copies/ml (P = 0.0001). The relative risk of an increase in pVL above 500 copies/ml associated with a pVL nadir below 20 copies/ml was 0.11 (P = 0.0001). The relative risks of an increase in pVL above 5000 copies/ml associated with a pVL nadir below 20 copies/ml or between 20 and 400 copies/ml were 0.05 [95% confidence interval (CI), 0.02-0.12] and 0.37 (95% CI, 0.23-0.61), respectively, compared with individuals with a pVL nadir > 400 copies/ml. Individuals with a pVL nadir ≤ 20 copies/ml were at a significantly lower risk of virologic failure than individuals with a pVL nadir of between 21 and 400 copies/ml (P = 0.0001). CONCLUSIONS: Our results demonstrate that suppression of pVL below 20 copies/ml is necessary to achieve a long-term antiretroviral response. Our data support the need for a revision of current therapeutic guidelines for the management of HIV infection.

17.
Conditional distributions for bivariate survival can be obtained via a model for the joint distribution, or, as has sometimes been suggested, by modelling the conditioned variable directly, with the conditioning variable included as a covariate. A quantitative comparison of estimated covariate effects and predictive distributions under the two approaches is given. The results are illustrated in a novel frailty application.

18.
Consider a randomized trial in which time to the occurrence of a particular disease, say pneumocystis pneumonia in an AIDS trial or breast cancer in a mammographic screening trial, is the failure time of primary interest. Suppose that time to disease is subject to informative censoring by the minimum of time to death, loss to follow-up, and end of follow-up. In such a trial, the censoring time is observed for all study subjects, including failures. In the presence of informative censoring, it is not possible to consistently estimate the effect of treatment on time to disease without imposing additional non-identifiable assumptions. The goals of this paper are to specify two non-identifiable assumptions that allow one to test for and estimate an effect of treatment on time to disease in the presence of informative censoring. In a companion paper (Robins, 1995), we provide consistent and reasonably efficient semiparametric estimators for the treatment effect under these assumptions. In this paper we largely restrict attention to testing. We propose tests that, like standard weighted log-rank tests, are asymptotically distribution-free alpha-level tests under the null hypothesis of no causal effect of treatment on time to disease whenever the censoring and failure distributions are conditionally independent given treatment arm. However, our tests remain asymptotically distribution-free alpha-level tests in the presence of informative censoring provided either of our assumptions is true. In contrast, a weighted log-rank test will be an alpha-level test in the presence of informative censoring only if (1) one of our two non-identifiable assumptions holds, and (2) the distribution of time to censoring is the same in the two treatment arms. We also extend our methods to studies of the effect of a treatment on the evolution over time of the mean of a repeated-measures outcome, such as CD4 count.

19.
M Roach, Canadian Metallurgical Quarterly, 1996, 10(8): 1143-53; discussion 1154-61
Pretreatment prostate-specific antigen (PSA) level is the single most important prognostic factor for patients undergoing radiotherapy for clinically localized prostate cancer. When combined with Gleason score and T-stage, pretreatment PSA enhances our ability to accurately predict pathologic stage. Patients with pretreatment PSA levels > 10 ng/mL are at high risk for biochemical failure when treated with conventional radiation alone. A PSA nadir of > 1 ng/mL and a post-treatment PSA > 1.5 ng/mL are associated with a high risk of biochemical failure. Postoperative radiotherapy delivered while the tumor burden is low (eg, PSA < 1 ng/mL) predicts a favorable outcome. Many of these conclusions about the usefulness of pretreatment PSA are based on the assumption that PSA can be used as a surrogate end point for disease-free and overall survival from prostate cancer. However, this assumption still remains to be validated by phase III trials.

20.
We present an epidemiological model applicable to insulin-dependent diabetes mellitus (IDDM), based on which prevalence rates are estimated from assumed rates of incidence and mortality of diabetes. The model is illustrated by analysing epidemiological data on IDDM in Fyn County, Denmark for the period 1970-1990, with predictions of prevalence rates during 1990-2020. The epidemiological model assumes known prevalence rates as well as incidence rates and mortality at a given point of time. Under assumed rates of incidence and mortality of IDDM and its complications, the prevalence rate is the dependent variable which is estimated as a function of calendar time. We used epidemiological data on IDDM (operationally defined as insulin-treated diabetes with onset before age 30 years), blindness and nephropathy as well as mortality as reported for the years 1973 and 1987 in Fyn County, Denmark. During 1970-1990 the prevalence of IDDM increased steadily, due to increasing incidence and decreasing risk of complications and mortality. The relative prevalence of patients with nephropathy increased whereas that of blind patients decreased considerably. Under specified assumptions regarding the future levels of incidence of disease, complications and of mortality, it is estimated that the prevalence rate of IDDM in the year 2020 will be 45-60% higher than the level in 1990. The relative prevalence of patients with nephropathy will increase further, whereas the relative prevalence of blind patients will remain constant at a low level. We conclude that IDDM will represent an increasing public health problem in Denmark over the next decades, with increasing overall prevalence rates and a rising proportion of patients with nephropathy. The major determinants of this trend are increasing incidence, combined with declining mortality and declining risk of complications. It is recommended that epidemiological modelling techniques be further developed to provide improved data for the planning of the future diabetes care.
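The bookkeeping behind such a projection is a balance equation: next year's prevalent pool equals this year's pool plus new cases minus deaths among cases. A minimal sketch, assuming a stationary total population, no recovery, and constant illustrative rates (not the Fyn County estimates):

import numpy as np

def project_prevalence(prev0, incidence, mortality, years):
    """Discrete-time projection of a prevalence proportion.

    Assumes a stationary total population (births balance deaths), no recovery,
    incidence per non-diseased person-year, and mortality per prevalent case-year.
    """
    prev = np.empty(years + 1)
    prev[0] = prev0
    for t in range(years):
        prev[t + 1] = prev[t] + incidence * (1 - prev[t]) - mortality * prev[t]
    return prev

# Illustrative rates only -- not the estimates reported in the paper.
traj = project_prevalence(prev0=0.0035, incidence=0.00025, mortality=0.025, years=30)
for year, p in zip(range(1990, 2021, 10), traj[::10]):
    print(f"{year}: prevalence = {p * 1000:.2f} per 1000")

With constant rates the prevalence drifts toward the equilibrium incidence / (incidence + mortality); rising incidence and falling mortality, as described in the abstract, both push that equilibrium upward.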


