Similar documents
Found 20 similar documents (search time: 31 ms)
1.
Discusses the appropriate use of the analysis of covariance for cases in which groups differ substantially on a variable that is entered as a covariate. The erroneous notions that groups must not differ significantly on the covariate and that covariates must be measured without error are rejected. Selective nonrandom assignment of Ss to groups on the basis of an observed variable that is measured with error can result in groups that differ substantially, but it is shown that conventional analysis of covariance provides unbiased estimates of the true treatment effects, in spite of the initial group differences. In other cases, correction for attenuation due to measurement error is required to obtain unbiased estimates of true treatment effects.
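The claim that conventional ANCOVA stays unbiased under selective assignment on an observed, error-prone covariate can be checked with a small simulation (all parameter values below are assumed for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(0.0, 1.0, n)            # true covariate
w = x + rng.normal(0.0, 0.5, n)        # observed covariate, measured with error
g = (w > 0).astype(float)              # selective, nonrandom assignment on the OBSERVED score
tau = 0.3                              # true treatment effect (assumed)
y = 1.0 * x + tau * g + rng.normal(0.0, 1.0, n)

# Conventional ANCOVA: regress y on an intercept, group, and the observed covariate.
X = np.column_stack([np.ones(n), g, w])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
tau_hat = beta[1]   # recovers ~0.3 despite the large initial group difference
```

With assignment made on the observed score w, the group coefficient recovers the true effect even though the covariate slope itself is attenuated (toward var(x)/var(w) = 0.8 here).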

2.
Biologically based markers (biomarkers) are currently used to provide information on exposure, health effects, and individual susceptibility to chemical and radiological wastes. However, the development and validation of biomarkers are expensive and time consuming. To determine whether biomarker development and use offer potential improvements to risk models based on predictive relationships or assumed values, we explore the use of uncertainty analysis applied to exposure models for dietary methyl mercury intake. We compare exposure estimates based on self-reported fish intake and measured fish mercury concentrations with biomarker-based exposure estimates (i.e., hair or blood mercury concentrations) using a published data set covering 1 month of exposure. Such a comparison of exposure model predictions allowed estimation of bias and random error associated with each exposure model. From these analyses, both bias and random error were found to be important components of uncertainty regarding biomarker-based exposure estimates, while the diary-based exposure estimate was susceptible to bias. Application of the proposed methods to a simple case study demonstrates their utility in estimating the contribution of population variability and measurement error in specific applications of biomarkers to environmental exposure and risk assessment. Such analyses can guide risk analysts and managers in the appropriate validation, use, and interpretation of exposure biomarker information.
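A toy illustration of separating bias from random error when comparing exposure models (the intake distribution and error magnitudes below are invented for the sketch, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
true_intake = rng.lognormal(0.0, 0.5, n)                 # hypothetical true intake
diary = 0.8 * true_intake * rng.lognormal(0.0, 0.2, n)   # diary: systematic under-report
biomarker = true_intake * rng.lognormal(0.05, 0.4, n)    # biomarker: small bias, more noise

def bias(est):
    """Systematic component of error."""
    return float(np.mean(est - true_intake))

def rmse(est):
    """Total error, combining bias and random error."""
    return float(np.sqrt(np.mean((est - true_intake) ** 2)))
```

Under these assumed error structures, the diary estimate shows the larger bias while the biomarker estimate shows the larger total (random) error, mirroring the qualitative finding described above.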

3.
BACKGROUND: International correlational analyses have suggested a strong positive association between fat consumption and breast cancer incidence, especially among post-menopausal women. However, case-control studies have been taken to indicate a weaker association, and a recent, pooled cohort analysis reported little evidence of an association. Differences among study results could be due to differences in the populations studied, differences in the control for total energy intake, recall bias in the case-control studies, and dietary measurement error biases. Existing measurement error models assume either that the sample data used to validate dietary self-report instruments are without measurement error or that any such error is independent of both the true dietary exposure and other study subject characteristics. However, growing evidence indicates that total energy and, presumably, both total fat and percent energy from fat are increasingly underreported as percent body fat increases. PURPOSE: A relaxed dietary measurement model is introduced that allows all measurement error parameters to depend on body mass index (weight in kilograms divided by the square of height in meters) and incorporates a random underreporting quantity that applies to each dietary self-report instrument. The model was applied to results from international correlational analyses to determine whether the differing associations between dietary fat and postmenopausal breast cancer can be explained by measurement errors in dietary assessment. METHODS: The relaxed measurement model was developed by use of data on total fat intake and percent energy from fat from 4-day food records (4DFRs) and food-frequency questionnaires (FFQs) from the original Women's Health Trial.
This trial was a randomized, controlled, feasibility study of a low-fat dietary intervention carried out from 1985 through 1988 in Cincinnati (OH), Houston (TX), and Seattle (WA) among 303 women (184 intervention and 119 control) who were 45-69 years of age. The relaxed model was used to project results from the international correlational analyses onto 4DFR and FFQ fat-intake categories. RESULTS AND CONCLUSIONS: If measurement errors in dietary assessment are overlooked entirely, the projected relative risks (RRs) for breast cancer based on the international data vary substantially across percentiles of total fat intake. The projected RR for the 90th versus the 10th fat-intake percentile is 3.08 with the 4DFR and 4.00 with the FFQ. If random (i.e., noise) aspects of measurement error are acknowledged, the projected RR for the same comparison is reduced to 1.54 with the 4DFR and 1.42 with the FFQ. If both systematic and noise aspects of measurement error are acknowledged, the projected RR is reduced to about 1.10 with either instrument. Acknowledgment of measurement error also leads to a projected RR of about 1.10 for the 90th versus the 10th percentile of percent energy from fat with either dietary instrument. IMPLICATIONS: Dietary self-report instruments may be inadequate for analytic epidemiologic studies of dietary fat and disease risk because of measurement error biases.
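Treating these projections as arising from a simple multiplicative attenuation of the log relative risk (an illustrative assumption, not the paper's relaxed model), one can back out the implied attenuation coefficient:

```python
import math

rr_ignored = 3.08   # 4DFR projection with measurement error ignored (from the abstract)
rr_noise = 1.54     # projection acknowledging random error only

# If log(RR_observed) = lam * log(RR_true), the implied attenuation coefficient is:
lam = math.log(rr_noise) / math.log(rr_ignored)
print(round(lam, 2))  # ≈ 0.38
```

That is, under this toy model the noise component alone shrinks the log relative risk to roughly 40% of its error-free value, before any systematic underreporting is considered.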

4.
OBJECTIVES: This paper describes 2 statistical methods designed to correct for bias from exposure measurement error in point and interval estimates of relative risk. METHODS: The first method takes the usual point and interval estimates of the log relative risk obtained from logistic regression and corrects them for nondifferential measurement error using an exposure measurement error model estimated from validation data. The second, likelihood-based method fits an arbitrary measurement error model suitable for the data at hand and then derives the model for the outcome of interest. RESULTS: Data from Valanis and colleagues' study of the health effects of antineoplastics exposure among hospital pharmacists were used to estimate the prevalence ratio of fever in the previous 3 months from this exposure. For an interdecile increase in weekly number of drugs mixed, the prevalence ratio, adjusted for confounding, changed from 1.06 to 1.17 (95% confidence interval [CI] = 1.04, 1.26) after correction for exposure measurement error. CONCLUSIONS: Exposure measurement error is often an important source of bias in public health research. Methods are available to correct such biases.
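The first (correction) method can be sketched as regression calibration for a logistic model: fit the naive model on the error-prone exposure, then rescale by an attenuation factor that in practice would come from validation data (here it is computed from the simulated truth; all parameter values are assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
beta_true = 0.5                          # assumed true log-odds slope
x = rng.normal(0.0, 1.0, n)              # true exposure
w = x + rng.normal(0.0, 0.7, n)          # error-prone exposure measure
p = 1.0 / (1.0 + np.exp(-(-2.0 + beta_true * x)))
y = rng.binomial(1, p).astype(float)

def logistic_fit(X, y, iters=30):
    """Plain Newton-Raphson logistic regression."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-(X @ b)))
        grad = X.T @ (y - mu)
        H = (X * (mu * (1 - mu))[:, None]).T @ X
        b = b + np.linalg.solve(H, grad)
    return b

X = np.column_stack([np.ones(n), w])
b_naive = logistic_fit(X, y)[1]          # attenuated slope on the mismeasured exposure
lam = np.var(x) / np.var(w)              # attenuation factor (validation-data stand-in)
b_corrected = b_naive / lam              # regression-calibration-style correction
```

The naive slope sits well below the true value; dividing out the attenuation factor recovers approximately the true log relative risk, which is the essence of the first method described above.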

5.
A simple form of measurement error model for explanatory variables is studied incorporating classical and Berkson cases as particular forms, and allowing for either additive or multiplicative errors. The work is motivated by epidemiological problems, and therefore consideration is given not only to continuous response variables but also to logistic regression models. The possibility that different individuals in a study have errors of different types is also considered. The relatively simple estimation procedures proposed for use with cohort data and case-control data are checked by simulation, under the assumption of various error structures. The results show that even in situations where conventional analysis yields slope estimates that are on average attenuated by a factor of approximately 50 per cent, estimates obtained using the proposed amended likelihood functions are within 5 per cent of their true values. The work was carried out to provide a method for the analysis of lung cancer risk following residential radon exposure, but it should be applicable to a wide variety of situations.
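The classical/Berkson distinction this abstract relies on can be demonstrated directly (unit variances assumed for the sketch): classical error attenuates the slope by about 50 per cent, matching the magnitude quoted above, while Berkson error leaves it unbiased.

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta = 200_000, 1.0

# Classical error: we observe w = x + u. The naive slope is attenuated.
x = rng.normal(0.0, 1.0, n)
w_classical = x + rng.normal(0.0, 1.0, n)     # error variance equals signal variance
y1 = beta * x + rng.normal(0.0, 0.5, n)
slope_classical = np.cov(w_classical, y1)[0, 1] / np.var(w_classical)

# Berkson error: the true exposure x scatters around an assigned value w
# (e.g. a home assigned the area-average radon level). No attenuation.
w_assigned = rng.normal(0.0, 1.0, n)
x_true = w_assigned + rng.normal(0.0, 1.0, n)
y2 = beta * x_true + rng.normal(0.0, 0.5, n)
slope_berkson = np.cov(w_assigned, y2)[0, 1] / np.var(w_assigned)
```

With equal signal and error variances the classical slope converges to beta/2, while the Berkson slope converges to beta itself.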

6.
Residual error models, traditionally used in population pharmacokinetic analyses, have been developed as if all sources of error have properties similar to those of assay error. Since assay error often is only a minor part of the difference between predicted and observed concentrations, other sources, with potentially other properties, should be considered. We have simulated three complex error structures. The first model acknowledges two separate sources of residual error, replication error plus pure residual (assay) error. Simulation results for this case suggest that ignoring these separate sources of error does not adversely affect parameter estimates. The second model allows serially correlated errors, as may occur with structural model misspecification. Ignoring this error structure leads to biased random-effect parameter estimates. A simple autocorrelation model, where the correlation between two errors is assumed to decrease exponentially with the time between them, provides more accurate estimates of the variability parameters in this case. The third model allows time-dependent error magnitude. This may be caused, for example, by inaccurate sample timing. A time-constant error model fit to time-varying error data can lead to bias in all population parameter estimates. A simple two-step time-dependent error model is sufficient to improve parameter estimates, even when the true time dependence is more complex. Using a real data set, we also illustrate the use of the different error models to facilitate the model building process, to provide information about error sources, and to provide more accurate parameter estimates.
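The second error structure — correlation decaying exponentially with the time between errors — is an AR(1) process. A minimal simulation (correlation time and sampling interval are assumed values):

```python
import numpy as np

rng = np.random.default_rng(4)
n, tau_c, dt = 100_000, 2.0, 1.0          # assumed correlation time, sampling interval
rho = np.exp(-dt / tau_c)                 # corr between errors dt apart decays exponentially
z = rng.normal(0.0, np.sqrt(1.0 - rho ** 2), n)  # innovations chosen to keep unit variance

e = np.empty(n)
e[0] = rng.normal()
for t in range(1, n):
    e[t] = rho * e[t - 1] + z[t]          # serially correlated residual errors

lag1 = np.corrcoef(e[:-1], e[1:])[0, 1]   # estimated serial correlation at one interval
```

The empirical lag-1 correlation matches exp(-dt/tau_c); a residual model that ignores it treats these errors as independent and, as the abstract notes, biases the random-effect estimates.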

7.
We discuss pragmatic clinical trials with survival endpoints in which subjects commonly change treatment during follow-up. Suppose that an intention-to-treat (ITT) analysis shows a significant difference between the randomized groups. We may want to ask questions about the reason for such a difference in outcome between randomized groups: for example, was the difference due to different policies for change to a third more beneficial regime? We address such questions using the semi-parametric accelerated life models of Robins, which exploit the randomization assumption fully and avoid direct comparisons of possibly differently selected subgroups. No assumption is made about the relationship of treatment actually prescribed to prognosis. A sensitivity analysis, using a range of plausible values for the causal effect of a covariate, estimates the contrasts between randomized groups that would have been observed if the covariate had universally been 0. The main technical problem is in dealing with censoring, for the method requires different degrees of recensoring for different values of the causal effect, and this can lead to estimates of low precision. The methods are applied to a randomized comparison of two anti-hypertensive treatments in which approximately half the subjects changed treatment during follow-up. Various time-dependent covariates, representing patterns of side-effects and treatments, are used in the model. We find that the observed difference in cardiovascular deaths between the randomized groups cannot be explained in this way by their different covariate patterns.

8.
Standard methods for the regression analysis of clustered data postulate models relating covariates to the response without regard to between- and within-cluster covariate effects. Implicit in these analyses is the assumption that these effects are identical. Example data show that this is frequently not the case and that analyses that ignore differential between- and within-cluster covariate effects can be misleading. Consideration of between- and within-cluster effects also helps to explain observed and theoretical differences between mixture model analyses and those based on conditional likelihood methods. In particular, we show that conditional likelihood methods estimate purely within-cluster covariate effects, whereas mixture model approaches estimate a weighted average of between- and within-cluster covariate effects.
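The between-/within-cluster distinction can be made concrete with a simulation (effect sizes and variances assumed): pooled OLS returns a weighted average of the two effects, while within-cluster centering recovers the purely within-cluster effect that conditional likelihood methods estimate.

```python
import numpy as np

rng = np.random.default_rng(8)
G, m = 5_000, 10                       # clusters and members per cluster (assumed)
beta_between, beta_within = 2.0, 0.5   # deliberately different covariate effects
mu = rng.normal(0.0, 1.0, (G, 1))      # cluster-level component of the covariate
d = rng.normal(0.0, 1.0, (G, m))       # within-cluster deviations
x = mu + d
y = beta_between * mu + beta_within * d + rng.normal(0.0, 0.5, (G, m))

# Pooled OLS mixes the two effects (a variance-weighted average, here 1.25):
xf, yf = x.ravel(), y.ravel()
slope_pooled = np.cov(xf, yf)[0, 1] / np.var(xf)

# Centering within clusters removes mu and isolates the within-cluster effect:
xc = x - x.mean(axis=1, keepdims=True)
yc = y - y.mean(axis=1, keepdims=True)
slope_within = (xc * yc).sum() / (xc ** 2).sum()
```

With equal between- and within-cluster covariate variances, the pooled slope converges to the simple average (2.0 + 0.5)/2 = 1.25, while the centered slope converges to 0.5.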

9.
We present a new approach to evaluating the effect of a continuous exposure factor when that factor increases with a covariate (for example, age) which is regarded as a potential confounder. The basic idea is to estimate, as functions of the covariate, some selected quantiles of the exposure distribution, under the assumption that the dependence of each quantile on the covariate is monotonic. The resulting estimates are then used to divide the data into different exposure categories. This method of categorizing the data implies that the covariate distribution will be almost the same in each exposure group. We illustrate the approach with a study of blood pressure and cardiovascular mortality.
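A sketch of the categorization idea, with age as the covariate (the functional forms, bin widths, and tertile cut-points below are assumptions for illustration): estimate exposure tertiles within narrow age bins, so each subject is classified relative to peers of similar age and the age distribution ends up nearly identical across exposure groups.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 30_000
age = rng.uniform(30.0, 70.0, n)                 # covariate / potential confounder
exposure = 0.05 * age + rng.normal(0.0, 1.0, n)  # exposure rises monotonically with age

# Age-specific tertile cut-points estimated inside narrow age bins.
edges = np.arange(30.0, 70.0, 2.0)
bins = np.digitize(age, edges)
group = np.empty(n, dtype=int)
for b in np.unique(bins):
    idx = bins == b
    q1, q2 = np.quantile(exposure[idx], [1 / 3, 2 / 3])
    group[idx] = np.digitize(exposure[idx], [q1, q2])

# By construction the age distribution barely differs across categories,
# while the exposure categories remain well separated.
age_means = [age[group == k].mean() for k in range(3)]
```

A smooth quantile-regression fit would replace the crude binning in practice; the binned version is just the simplest monotone-quantile estimator.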

10.
This paper deals with analysis of data from longitudinal studies where the rate of a recurrent event characterizing morbidity is the primary criterion for treatment evaluation. We consider clinical trials which require patients to visit their clinical center at successive scheduled times as part of follow-up. At each visit, the patient reports the number of events that occurred since the previous visit, or an examination reveals the number of accumulated events, such as skin cancers. The exact occurrence times of the events are unavailable and the actual patient visit times typically vary randomly about the scheduled follow-up times. Each patient's record thus consists of a sequence of clinic visit dates, event counts corresponding to the successive time intervals between clinic visits, and baseline covariates. We propose a semiparametric regression model, extending the fully parametric model of Thall (1988, Biometrics 44, 197-209), to estimate and test for covariate effects on the rate of events over time while also accounting for the possibly time-varying nature of the underlying event rate. Covariate effects enter the model parametrically, while the underlying time-varying event rate is modelled nonparametrically. The method of Severini and Wong (1992, Annals of Statistics 20, 1768-1802) is used to construct asymptotically efficient estimators of the parametric component and to specify their asymptotic distribution. A simulation study and application to a data set are provided.

11.
A common assumption in the analysis of immunoassay data is a similar pattern of within-run variation across runs of the assays. One makes this assumption without formal investigation of its validity, despite the widely acknowledged fact that accurate understanding of intra-run variation is critical to reliable calibration inference. We propose a simple procedure for a formal test of the assumption of the homogeneity of parameters that characterize intra-run variation based on representation of standard curve data from multiple assay runs by a non-linear mixed effects model. We examine the performance of the procedure and investigate the robustness of calibration inference to incorrect assumptions about the pattern of intra-run variation.

12.
The exposure of an individual to an air pollutant can be assessed indirectly, with a "microenvironmental" approach, or directly with a personal sampler. Both methods of assessment are subject to measurement error, which can cause considerable bias in estimates of health effects. If the exposure estimates are unbiased and the measurement error is nondifferential, the bias in a linear model can be corrected when the variance of the measurement error is known. Unless the measurement error is quite large, estimates of health effects based on individual exposures appear to be more accurate than those based on ambient levels.

13.
A random-effects probit model is developed for the case in which the outcome of interest is a series of correlated binary responses. These responses can be obtained as the product of a longitudinal response process where an individual is repeatedly classified on a binary outcome variable (e.g., sick or well on occasion t), or in "multilevel" or "clustered" problems in which individuals within groups (e.g., firms, classes, families, or clinics) are considered to share characteristics that produce similar responses. Both examples produce potentially correlated binary responses and modeling these person- or cluster-specific effects is required. The general model permits analysis at both the level of the individual and cluster and at the level at which experimental manipulations are applied (e.g., treatment group). The model provides maximum likelihood estimates for time-varying and time-invariant covariates in the longitudinal case and covariates which vary at the level of the individual and at the cluster level for multilevel problems. Clusters need not contain equal numbers of individuals, nor individuals equal numbers of measurement occasions. Empirical Bayesian estimates of person-specific trends or cluster-specific effects are provided.
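The core of the model — a shared normal random effect under a probit link inducing correlated binary responses — can be simulated in a few lines (cluster sizes and variance components below are assumed values):

```python
import numpy as np

rng = np.random.default_rng(6)
G, m = 50_000, 4                      # clusters and members per cluster (assumed)
sigma_u = 1.0                         # random-effect SD; latent ICC = 1/(1+1) = 0.5
u = rng.normal(0.0, sigma_u, (G, 1))  # shared cluster (or person) effect
e = rng.normal(0.0, 1.0, (G, m))      # independent probit latent errors
y = (u + e > 0).astype(float)         # correlated binary responses

# A latent correlation of 0.5 induces corr = 1/3 between same-cluster binary
# responses: P(both positive) = 1/4 + arcsin(0.5)/(2*pi) = 1/3 with p = 1/2 each.
r = np.corrcoef(y[:, 0], y[:, 1])[0, 1]
```

Ignoring this within-cluster correlation is exactly what the random-effects probit model is designed to avoid; covariates would enter the latent index u + x'b + e.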

14.
To clarify the nutritional status of vitamin D in Japanese subjects, effects of dietary intake of vitamin D on plasma levels of intact and highly sensitive parathyroid hormone (I-PTH and HS-PTH), 25-hydroxyvitamin D (25-OH-D), 1,25-dihydroxyvitamin D (1,25(OH)2D), calcium (Ca) and inorganic phosphorus (Pi) in 79 healthy Japanese subjects were investigated. The plasma levels of 25-OH-D in men were significantly higher than those in women, whereas those of HS-PTH in men were significantly lower than those in women. The levels of 25-OH-D in men were generally higher than those in women. Significant correlations were observed between the dietary vitamin D intake and the plasma 25-OH-D or HS-PTH levels. Correlations between the plasma 25-OH-D levels and the plasma HS-PTH levels were also significant. These results suggest that dietary intake of sufficient amounts of vitamin D is effective for improving the vitamin D nutritional status through normalizing PTH levels.

15.
Population models were developed to analyze processes, described by parametric models, from measurements obtained in a sample of individuals. In order to analyze the sources of interindividual variability, covariates may be incorporated in the population analysis. The exploratory analyses and the two-stage approaches which use standard non-linear regression techniques are simple tools to select meaningful covariates. The global population approaches may be divided into two classes within which the covariates are handled differently: the parametric and the non-parametric methods. The power as well as the limitations of each approach regarding handling of covariates are illustrated and compared using the same data set which concerns the pharmacokinetics of gentamicin in neonates. With parametric approaches a second-stage model between structural parameters and covariates has to be defined. In the non-parametric method the joint distribution of parameters and covariates is estimated without parametric assumptions; however, it is assumed that covariates are observed with some error and parameters involved in functional relationships are not estimated. The important results concerning gentamicin in neonates were found by both methods.

16.
A simple method for testing the assumption of independent censoring is developed using a Cox proportional hazards regression model with a time-dependent covariate. This method involves further follow-up of a subset of lost-to-follow-up censored subjects. An adjusted estimator of the survivor function is obtained for the dependent censoring model under a proportional hazards alternative. The proposed procedure is applied to an example of a clinical trial for lung cancer and a simulation study is given for investigating the power of the proposed test.

17.
It has been stated that energy adjustment can control for recall bias in case-control studies. Simulation of recall bias and cases and controls in a nutritional survey of German adults was conducted to examine its impact on five dietary effects (adding a macronutrient, substituting one macronutrient for another, adding a macronutrient while keeping the other energy sources constant, and changing the macronutrient-to-energy ratio through addition or substitution) using various energy adjustment models. If energy adjustment were an effective means of correcting measurement error, the energy adjusted dietary effects, after a subtraction of energy and fat intake, should equal those in the original data set. Simulation of differential under-reporting of fat and energy intake by cases but not controls showed this to dramatically impact all five considered dietary effects, even after energy adjustment. The influence of the assumed recall bias on the different effects depends on the error type structure, inflating an odds ratio of 1.8 to as much as 12.3 or reducing it to 0.45 when 100 kcal of fat was substituted for 100 kcal of other macronutrients. Although energy adjustment may serve many functions, it cannot correct for differential error. Depending upon the nature of the hypothesized effect and the error type, energy adjustment may also distort risk ratios in the presence of non-differential bias. The concern that cases and controls report their energy intakes with different degrees of error remains a critical consideration that must be addressed through improved measurements, and not energy adjustment under any of the currently used models.

18.
Analysis of covariance (ANCOVA) is used widely in psychological research implementing nonexperimental designs. However, when covariates are fallible (i.e., measured with error), which is the norm, researchers must choose from among 3 inadequate courses of action: (a) know that the assumption that covariates are perfectly reliable is violated but use ANCOVA anyway (and, most likely, report misleading results); (b) attempt to employ 1 of several measurement error models with the understanding that no research has examined their relative performance and with the added practical difficulty that several of these models are not available in commonly used statistical software; or (c) not use ANCOVA at all. First, we discuss analytic evidence to explain why using ANCOVA with fallible covariates produces bias and a systematic inflation of Type I error rates that may lead to the incorrect conclusion that treatment effects exist. Second, to provide a solution for this problem, we conduct 2 Monte Carlo studies to compare 4 existing approaches for adjusting treatment effects in the presence of covariate measurement error: errors-in-variables (EIV; Warren, White, & Fuller, 1974), Lord's (1960) method, Raaijmakers and Pieters's (1987) method (R&P), and structural equation modeling methods proposed by Sörbom (1978) and Hayduk (1996). Results show that EIV models are superior in terms of parameter accuracy, statistical power, and keeping Type I error close to the nominal value. Finally, we offer a program written in R that performs all needed computations for implementing EIV models so that ANCOVA can be used to obtain accurate results even when covariates are measured with error.
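A minimal numpy sketch of why naive ANCOVA with a fallible covariate manufactures a spurious treatment effect, and of a Lord/EIV-style disattenuation fix (a known covariate reliability is assumed; this is not the paper's R program):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
rel = 0.7                                   # assumed (known) covariate reliability
g = rng.integers(0, 2, n).astype(float)
x = rng.normal(0.0, 1.0, n) + 0.8 * g       # groups differ on the TRUE covariate
w = x + rng.normal(0.0, np.sqrt((1 - rel) / rel), n)  # fallible observed covariate
y = 1.0 * x + rng.normal(0.0, 1.0, n)       # NO treatment effect at all

# Naive ANCOVA on the fallible covariate finds a spurious "treatment effect"
# of roughly 0.8 * (1 - rel) = 0.24 -- a pure artifact of measurement error.
X = np.column_stack([np.ones(n), g, w])
b = np.linalg.lstsq(X, y, rcond=None)[0]
effect_naive = b[1]

# Lord/EIV-style fix: disattenuate the within-group slope, then re-adjust
# the group means with the corrected slope.
slope_corrected = b[2] / rel
dy = y[g == 1].mean() - y[g == 0].mean()
dw = w[g == 1].mean() - w[g == 0].mean()
effect_corrected = dy - slope_corrected * dw  # ~0, as it should be
```

This is the mechanism behind the Type I error inflation the abstract describes: the attenuated covariate slope under-adjusts for the group difference, and the leftover confounding lands in the treatment coefficient.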

19.
BACKGROUND: Folate requirements during lactation are not well established. OBJECTIVE: We assessed the effects of dietary and supplemental folate intakes during extended lactation. DESIGN: Lactating women (n = 42) were enrolled in a double-blind, randomized, longitudinal supplementation trial and received either 0 or 1 mg folic acid/d. At 3 and 6 mo postpartum, maternal folate status was assessed by measuring erythrocyte, plasma, milk, and dietary folate concentrations; plasma homocysteine; and hematologic indexes. Infant anthropometric measures of growth, milk intake, and folate intake were also assessed. RESULTS: In supplemented women, values at 6 mo for erythrocyte and milk folate concentrations and for plasma homocysteine were not significantly different from those at 3 mo. In supplemented women compared with unsupplemented women at 6 mo, values for erythrocyte folate (840 compared with 667 nmol/L; P < 0.05), hemoglobin (140 compared with 134 g/L; P < 0.02), and hematocrit (0.41 compared with 0.39; P < 0.02) were higher and values for reticulocytes were lower. In unsupplemented women, milk folate declined from 224 to 187 nmol/L (99 to 82 ng/mL), whereas plasma homocysteine increased from 6.7 to 7.4 micromol/L. Dietary folate intake was not significantly different between groups (380+/-19 microg/d) and at 6 mo was correlated with plasma homocysteine in unsupplemented women (r = -0.53, P < 0.01) and with plasma folate in supplemented women (r = 0.49, P < 0.02). CONCLUSIONS: A dietary folate intake of approximately 380 microg/d may not be sufficient to prevent mobilization of maternal folate stores during lactation.

20.
The analysis of failure time data often involves two strong assumptions. The proportional hazards assumption postulates that hazard rates corresponding to different levels of explanatory variables are proportional. The additive effects assumption specifies that the effect associated with a particular explanatory variable does not depend on the levels of other explanatory variables. A hierarchical Bayes model is presented, under which both assumptions are relaxed. In particular, time-dependent covariate effects are explicitly modelled, and the additivity of effects is relaxed through the use of a modified neural network structure. The hierarchical nature of the model is useful in that it parsimoniously penalizes violations of the two assumptions, with the strength of the penalty being determined by the data.
