Similar Literature
1.
Biologically based markers (biomarkers) are currently used to provide information on exposure, health effects, and individual susceptibility to chemical and radiological wastes. However, the development and validation of biomarkers are expensive and time consuming. To determine whether biomarker development and use offer potential improvements to risk models based on predictive relationships or assumed values, we explore the use of uncertainty analysis applied to exposure models for dietary methyl mercury intake. We compare exposure estimates based on self-reported fish intake and measured fish mercury concentrations with biomarker-based exposure estimates (i.e., hair or blood mercury concentrations) using a published data set covering 1 month of exposure. Such a comparison of exposure model predictions allowed estimation of bias and random error associated with each exposure model. From these analyses, both bias and random error were found to be important components of uncertainty regarding biomarker-based exposure estimates, while the diary-based exposure estimate was susceptible to bias. Application of the proposed methods to a simple case study demonstrates their utility in estimating the contribution of population variability and measurement error in specific applications of biomarkers to environmental exposure and risk assessment. Such analyses can guide risk analysts and managers in the appropriate validation, use, and interpretation of exposure biomarker information.
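As a rough illustration of the kind of uncertainty analysis described above, the sketch below simulates a hypothetical population and quantifies multiplicative bias and random error for a diary-based and a biomarker-based intake estimate against a simulated "true" intake. All distributions and error magnitudes are invented for illustration and are not taken from the cited data set.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Hypothetical "true" monthly methyl mercury intake (ug/kg/day), log-normal across people.
true_intake = rng.lognormal(mean=np.log(0.1), sigma=0.5, size=n)

# Diary-based estimate: assumed to under-report systematically (bias) with modest noise.
diary = true_intake * 0.8 * rng.lognormal(mean=0.0, sigma=0.2, size=n)

# Biomarker-based estimate: assumed unbiased on average but with larger random error
# (inter-individual toxicokinetic variability plus assay error).
biomarker = true_intake * rng.lognormal(mean=0.0, sigma=0.4, size=n)

def bias_and_random_error(estimate, truth):
    """Geometric mean (bias) and geometric SD (random error) of the estimate/truth ratio."""
    log_ratio = np.log(estimate / truth)
    return np.exp(log_ratio.mean()), np.exp(log_ratio.std())

for label, est in [("diary", diary), ("biomarker", biomarker)]:
    b, gsd = bias_and_random_error(est, true_intake)
    print(f"{label:9s} bias = {b:.2f}, random error (GSD) = {gsd:.2f}")
```

Working on the log scale separates the two components cleanly: the geometric mean of the estimate-to-truth ratio captures bias, and its geometric standard deviation captures random error.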

2.
OBJECTIVES: This paper describes 2 statistical methods designed to correct for bias from exposure measurement error in point and interval estimates of relative risk. METHODS: The first method takes the usual point and interval estimates of the log relative risk obtained from logistic regression and corrects them for nondifferential measurement error using an exposure measurement error model estimated from validation data. The second, likelihood-based method fits an arbitrary measurement error model suitable for the data at hand and then derives the model for the outcome of interest. RESULTS: Data from Valanis and colleagues' study of the health effects of antineoplastics exposure among hospital pharmacists were used to estimate the prevalence ratio of fever in the previous 3 months from this exposure. For an interdecile increase in weekly number of drugs mixed, the prevalence ratio, adjusted for confounding, changed from 1.06 to 1.17 (95% confidence interval [CI] = 1.04, 1.26) after correction for exposure measurement error. CONCLUSIONS: Exposure measurement error is often an important source of bias in public health research. Methods are available to correct such biases.
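The first method described here is closely related to regression calibration. The hedged sketch below (simulated data, not the pharmacists study) estimates an attenuation factor from a validation sub-sample in which both the true exposure and its error-prone surrogate are observed, then divides the naive logistic slope by that factor; all variable names and parameter values are assumptions made for the example.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Validation data: true exposure X and error-prone surrogate W observed together.
n_val = 200
x_val = rng.gamma(shape=2.0, scale=10.0, size=n_val)       # e.g. true weekly drugs mixed
w_val = x_val + rng.normal(0, 8.0, size=n_val)              # surrogate with classical error
lam = np.cov(w_val, x_val)[0, 1] / np.var(w_val, ddof=1)    # attenuation factor (E[X|W] slope)

# Main study: only the surrogate W and the binary outcome are observed.
n = 2000
x = rng.gamma(shape=2.0, scale=10.0, size=n)
w = x + rng.normal(0, 8.0, size=n)
p = 1 / (1 + np.exp(-(-2.0 + 0.03 * x)))                    # true log-odds model
y = rng.binomial(1, p)

naive = sm.Logit(y, sm.add_constant(w)).fit(disp=0)
beta_naive = naive.params[1]

# Regression-calibration style correction: divide the naive slope by the attenuation factor.
beta_corrected = beta_naive / lam
print(f"naive beta = {beta_naive:.3f}, corrected beta = {beta_corrected:.3f} (true 0.03)")
```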

3.
Random error (misclassification) in exposure measurements usually biases a relative risk, regression coefficient, or other effect measure towards the null value (no association). The most important exception is Berkson type error, which causes little or no bias. Berkson type error arises, in particular, due to use of group average exposure in place of individual values. Random error in exposure measurements, Berkson or otherwise, reduces the power of a study, making it more likely that real associations are not detected. Random error in confounding variables compromises the control of their effect, leaving residual confounding. Random error in a variable that modifies the effect of exposure on health (for example, an indicator of susceptibility) tends to diminish the observed modification of effect, but error in the exposure can create a spurious appearance of modification. Methods are available to correct for bias (but not generally power loss) due to measurement error, if information on the magnitude and type of error is available. These methods can be complicated to use, however, and should be used cautiously as "correction" can magnify confounding if it is present.
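A short simulation of the two error types discussed above: classical error (noise added to the true exposure) attenuates an ordinary least-squares slope toward the null, whereas Berkson-type error (individuals assigned their group-average exposure) leaves the slope roughly unbiased. Parameter values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
n, true_slope = 50_000, 0.5

x = rng.normal(10, 2, size=n)                       # true individual exposure
y = true_slope * x + rng.normal(0, 1, size=n)

# Classical error: measured value = truth + noise -> slope biased toward zero.
w_classical = x + rng.normal(0, 2, size=n)

# Berkson error: everyone gets the group-average exposure; truth = assigned value + deviation.
group = x > np.median(x)
w_berkson = np.where(group, x[group].mean(), x[~group].mean())

def ols_slope(w, y):
    return np.cov(w, y)[0, 1] / np.var(w, ddof=1)

print("classical error slope:", round(ols_slope(w_classical, y), 3))  # roughly 0.25 here
print("Berkson error slope:  ", round(ols_slope(w_berkson, y), 3))    # roughly 0.5
```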

4.
We combine two major approaches currently used in human air pollution exposure assessment, the direct approach and the indirect approach. The direct approach measures exposures directly using personal monitoring. Despite its simplicity, this approach is costly and is also vulnerable to sample selection bias because it usually imposes a substantial burden on the respondents, making it difficult to recruit a representative sample of respondents. The indirect approach predicts exposures using the activity pattern model to combine activity pattern data with microenvironmental concentrations data. This approach is lower in cost and imposes less respondent burden, and thus is less vulnerable to sample selection bias. However, it is vulnerable to systematic measurement error in the predicted exposures because the microenvironmental concentration data might need to be "grafted" from other data sources. The combined approach combines the two approaches to remedy the problems in each. A dual sample provides both the direct measurements of exposures based on personal monitoring and the indirect estimates based on the activity pattern model. An indirect-only sample provides additional indirect estimates. The dual sample is used to calibrate the indirect estimates to correct the systematic measurement error. If both the dual sample and the indirect-only sample are representative, the indirect estimates from the indirect-only sample are used to improve the precision of the overall estimates. If the dual sample is vulnerable to sample selection bias, the indirect-only sample is used to correct the sample selection bias. We discuss the allocation of resources between the two subsamples and provide algorithms which can be used to determine the optimal sample allocation. The theory is illustrated with applications to the empirical data obtained from the Washington, DC, Carbon Monoxide (CO) Study.
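A minimal sketch of the dual-sample calibration step, assuming a simple linear calibration of the personal-monitor measurement on the activity-model prediction is adequate; variable names and values are hypothetical and not taken from the Washington, DC, CO Study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Dual sample: both a personal-monitor measurement and an activity-model prediction.
n_dual = 150
true_dual = rng.lognormal(np.log(2.0), 0.4, size=n_dual)               # "true" CO exposure (ppm)
monitor = true_dual * rng.lognormal(0, 0.1, size=n_dual)               # direct measurement
indirect_dual = 0.7 * true_dual * rng.lognormal(0, 0.3, size=n_dual)   # systematically low model

# Calibration line: regress the direct measurement on the indirect estimate.
slope, intercept = np.polyfit(indirect_dual, monitor, deg=1)

# Indirect-only sample: only the activity-model prediction is available.
n_ind = 1000
true_ind = rng.lognormal(np.log(2.0), 0.4, size=n_ind)
indirect_only = 0.7 * true_ind * rng.lognormal(0, 0.3, size=n_ind)
calibrated = intercept + slope * indirect_only

print("mean true exposure (indirect-only sample):", round(true_ind.mean(), 2))
print("mean uncalibrated indirect estimate:      ", round(indirect_only.mean(), 2))
print("mean calibrated estimate:                 ", round(calibrated.mean(), 2))
```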

5.
The study of potential racial and gender bias in individual test items is a major research area today. The fact that research has established that total scores on ability and achievement tests are predictively unbiased raises the question of whether there is in fact any real bias at the item level. No theoretical rationale for expecting such bias has been advanced. It appears that findings of item bias (differential item functioning; DIF) can be explained by three factors: failure to control for measurement error in ability estimates, violations of the unidimensionality assumption required by DIF detection methods, and reliance on significance testing (causing tiny artifactual DIF effects to be statistically significant because sample sizes are very large). After taking into account these artifacts, there appears to be no evidence that items on currently used tests function differently in different racial and gender groups. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
We explore the effects of measurement error in a time-varying covariate for a mixed model applied to a longitudinal study of plasma levels and dietary intake of beta-carotene. We derive a simple expression for the bias of large sample estimates of the variance of random effects in a longitudinal model for plasma levels when dietary intake is treated as a time-varying covariate subject to measurement error. In general, estimates for these variances made without consideration of measurement error are biased positively, unlike estimates for the slope coefficients, which tend to be 'attenuated'. If we can assume that the residuals from a longitudinal fit for the time-varying covariate behave like measurement errors, we can estimate the original parameters without the need for additional validation or reliability studies. We propose a method to test this assumption and show that the assumption is reasonable for the example data. We then use a likelihood-based method of estimation that involves a simple extension of existing methods for fitting mixed models. Simulations illustrate the properties of the estimators.

7.
Adverse effects of maternal smoking have been mostly identified through epidemiologic investigations that have used questionnaires to assess active and passive smoking. However, unvalidated self-reports of cigarette smoking may bias true estimates of relative risk of smoking-related health outcomes. This report is based on two separate investigations. First, within a molecular epidemiologic study of the relationship between environmental exposures (smoking, air pollution, diet) and developmental impairment, we have compared self-reported tobacco smoke exposure during pregnancy to plasma cotinine measurements in mothers. One hundred and fifty-eight patients from obstetrical wards in Cracow and in Limanowa, Poland were included in the parent study. Biochemically-identified smokers were defined as persons with plasma cotinine levels greater than 25 ng/mL. The data showed that exposure classification based on self-reported smoking status compared with cotinine values was of low sensitivity (52%) but of high specificity (98%). We assessed the effect of this exposure classification error on the association between low birth weight (LBW) and smoking in pregnancy using data from a related epidemiologic study of children's health in Cracow involving 1115 subjects. The odds ratio (OR) estimates for smoking and LBW after adjustment for exposure misclassification error were significantly higher than before adjustment (crude OR = 2.9, corrected OR = 5.1). The estimated attributable fraction (AF(pop)) based on the crude OR amounted to 22%; however, after adjustment it reached 50%. The corresponding values for the attributable fraction in the exposed group (AF(exp)) were 66% and 80%. These results illustrate the value of validating questionnaire responses on smoking during pregnancy against reliable biologic markers.
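The adjustment reported above can be reproduced in principle with the standard matrix-method correction for nondifferential exposure misclassification, which back-solves the observed exposed/unexposed counts from sensitivity and specificity. The sketch applies the study's sensitivity (0.52) and specificity (0.98) to a hypothetical 2x2 table; the counts, and therefore the resulting odds ratios, are invented for illustration.

```python
def correct_counts(exposed_obs, unexposed_obs, se, sp):
    """Back-correct observed exposed/unexposed counts for nondifferential misclassification."""
    n = exposed_obs + unexposed_obs
    exposed_true = (exposed_obs - (1 - sp) * n) / (se + sp - 1)
    return exposed_true, n - exposed_true

se, sp = 0.52, 0.98       # sensitivity and specificity of self-reported smoking (from the study)

# Hypothetical observed counts (exposed, unexposed); not the Cracow study data.
cases = (60, 140)         # low-birth-weight infants
controls = (150, 850)     # normal-birth-weight infants

a, b = correct_counts(*cases, se, sp)
c, d = correct_counts(*controls, se, sp)

or_crude = (cases[0] * controls[1]) / (cases[1] * controls[0])
or_corrected = (a * d) / (b * c)
print(f"crude OR = {or_crude:.2f}, misclassification-corrected OR = {or_corrected:.2f}")
```

Because specificity is near perfect while sensitivity is low, the correction inflates the exposed counts in both groups, which moves the odds ratio away from the null, in the same direction as reported in the abstract.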

8.
Ecologic regression studies conducted to assess the cancer risk of indoor radon to the general population are subject to methodological limitations, and they have given seemingly contradictory results. The authors use simulations to examine the effects of two major methodological problems that affect these studies: measurement error and misspecification of the risk model. In a simulation study of the effect of measurement error caused by the sampling process used to estimate radon exposure for a geographic unit, both the effect of radon and the standard error of the effect estimate were underestimated, with greater bias for smaller sample sizes. In another simulation study, which addressed the consequences of uncontrolled confounding by cigarette smoking, even small negative correlations between county geometric mean annual radon exposure and the proportion of smokers resulted in negative average estimates of the radon effect. A third study considered consequences of using simple linear ecologic models when the true underlying model relation between lung cancer and radon exposure is nonlinear. These examples quantify potential biases and demonstrate the limitations of estimating risks from ecologic studies of lung cancer and indoor radon.
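A compact simulation of the second problem described above (uncontrolled confounding by smoking at the ecologic level): a modest negative correlation between county mean radon and smoking prevalence is enough to drive the crude ecologic radon coefficient negative even though the true effect built into the simulation is positive. All county-level parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n_counties = 300

# County-level mean radon and smoking prevalence, negatively correlated by construction.
radon = rng.gamma(shape=4.0, scale=25.0, size=n_counties)           # Bq/m^3 (illustrative)
smoking = 0.35 - 0.0005 * radon + rng.normal(0, 0.03, n_counties)   # proportion of smokers

# "True" county lung-cancer rate: small positive radon effect, much larger smoking effect.
rate = 20 + 0.05 * radon + 400 * smoking + rng.normal(0, 5, n_counties)

# Ecologic regression of the rate on radon alone, omitting smoking.
slope = np.cov(radon, rate)[0, 1] / np.var(radon, ddof=1)
print(f"true radon coefficient = 0.05, ecologic estimate ignoring smoking = {slope:.3f}")
```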

9.
OBJECTIVE: The authors examined different ways of measuring unit costs and how methodological assumptions can affect the magnitude of cost estimates and the ratio of treatment costs in comparative studies of mental health interventions. Four methodological choices may bias cost estimates: study perspective, definition of the opportunity cost of resources, cost allocation rules, and measurement of service units. METHOD: Unit costs for outpatient services, individual therapy, and group therapy were calculated under different assumptions for a single community mental health center (CMHC). Using hypothetical service utilization profiles, the authors used the unit costs to calculate the costs of mental health treatments provided by two programs of the CMHC. RESULTS: The unit costs for an hour of outpatient services ranged from $108 to $538. The unit costs for an hour of therapy varied by 156%; unit costs were lowest if the management perspective was assumed and highest if the economist perspective was assumed. The ratio of the outpatient costs in the two treatment programs ranged from 0.6 to 1.8. CONCLUSIONS: The potential errors introduced by methodological choices can bias cost-effectiveness findings based on randomized controlled trials. These errors go undetected because crucial methodological information is not reported.

10.
The population risk, for example the control group mortality rate, is an aggregate measurement of many important attributes of a clinical trial, such as the general health of the patients treated and the experience of the staff performing the trial. Plotting measurements of the population risk against the treatment effect estimates for a group of clinical trials may reveal an apparent association, suggesting that differences in the population risk might explain heterogeneity in the results of clinical trials. In this paper we consider using estimates of population risk to explain treatment effect heterogeneity, and show that using these estimates as fixed covariates will result in bias. This bias depends on the treatment effect and population risk definitions chosen, and the magnitude of measurement errors. To account for the effect of measurement error, we represent clinical trials in a bivariate two-level hierarchical model, and show how to estimate the parameters of the model by both maximum likelihood and Bayes procedures. We use two examples to demonstrate the method.
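The bias from treating an estimated population risk as a fixed covariate can be seen in a few lines of simulation: because the control-arm sampling error enters the observed treatment effect with the opposite sign, regressing one on the other produces a spuriously negative slope even when the true slope is zero. Trial sizes and effect values below are illustrative assumptions, not an implementation of the bivariate hierarchical model proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_per_arm = 200, 100
true_log_or = -0.3                                  # identical treatment effect in every trial

# True control-group risks vary across trials; by construction there is no true relation
# between the control risk and the treatment effect.
logit_ctrl = rng.normal(-1.0, 0.5, n_trials)
p_ctrl = 1 / (1 + np.exp(-logit_ctrl))
p_trt = 1 / (1 + np.exp(-(logit_ctrl + true_log_or)))

# Observed event counts carry sampling (measurement) error.
x_ctrl = rng.binomial(n_per_arm, p_ctrl)
x_trt = rng.binomial(n_per_arm, p_trt)
logit_ctrl_obs = np.log((x_ctrl + 0.5) / (n_per_arm - x_ctrl + 0.5))
logit_trt_obs = np.log((x_trt + 0.5) / (n_per_arm - x_trt + 0.5))
log_or_obs = logit_trt_obs - logit_ctrl_obs

# Regressing the observed effect on the observed control risk, treated as a fixed covariate.
slope = np.cov(logit_ctrl_obs, log_or_obs)[0, 1] / np.var(logit_ctrl_obs, ddof=1)
print(f"true slope = 0, estimated slope = {slope:.2f}")
```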

11.
Studies using the single-aggregate approach (L. R. James, 1982), where assessments made by individual respondents are correlated with other assessments that have been averaged across multiple respondents, can exhibit a systematic bias of 20 to 70% or more if they are used to estimate individual level relationships. Not only may results of such studies be erroneous, but theory development based on such studies may be misguided. A comprehensive solution (nested and crossed designs) to the single-aggregation problem is provided through generalizability theory. Results show that the aggregation bias is a function of both the generalizability (reliability) of individual responses and the number of individuals per group. Conceptual parallels to classical measurement theory are discussed. Factors are presented for converting single-aggregated correlations and standard deviations to estimates of the corresponding values using the individual as the level of analysis. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
Minimal measurement error (reliability) during the collection of interval- and ratio-type data is critically important to sports medicine research. The main components of measurement error are systematic bias (e.g. general learning or fatigue effects on the tests) and random error due to biological or mechanical variation. Both error components should be meaningfully quantified for the sports physician to relate the described error to judgements regarding 'analytical goals' (the requirements of the measurement tool for effective practical use) rather than the statistical significance of any reliability indicators. Methods based on correlation coefficients and regression provide an indication of 'relative reliability'. Since these methods are highly influenced by the range of measured values, researchers should be cautious in: (i) concluding acceptable relative reliability even if a correlation is above 0.9; (ii) extrapolating the results of a test-retest correlation to a new sample of individuals involved in an experiment; and (iii) comparing test-retest correlations between different reliability studies. Methods used to describe 'absolute reliability' include the standard error of measurement (SEM), the coefficient of variation (CV) and limits of agreement (LOA). These statistics are more appropriate for comparing reliability between different measurement tools in different studies. They can be used in multiple retest studies from ANOVA procedures, help predict the magnitude of a 'real' change in individual athletes and be employed to estimate statistical power for a repeated-measures experiment. These methods vary considerably in the way they are calculated and their use also assumes the presence (CV) or absence (SEM) of heteroscedasticity. Most methods of calculating SEM and CV represent approximately 68% of the error that is actually present in the repeated measurements for the 'average' individual in the sample. LOA represent the test-retest differences for 95% of a population. The associated Bland-Altman plot shows the measurement error schematically and helps to identify the presence of heteroscedasticity. If there is evidence of heteroscedasticity or non-normality, one should logarithmically transform the data and quote the bias and random error as ratios. This allows simple comparisons of reliability across different measurement tools. It is recommended that sports clinicians and researchers should cite and interpret a number of statistical methods for assessing reliability. We encourage the inclusion of the LOA method, especially the exploration of heteroscedasticity that is inherent in this analysis. We also stress the importance of relating the results of any reliability statistic to 'analytical goals' in sports medicine.
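One common way to compute the 'absolute reliability' statistics mentioned above from a simple test-retest design is sketched below. The data are hypothetical, and the SEM formula used (standard deviation of the test-retest differences divided by the square root of 2) is only one of the variants the article discusses.

```python
import numpy as np

# Hypothetical test-retest data for one measurement tool (e.g. peak power output, watts).
test = np.array([820, 765, 910, 688, 745, 830, 902, 778, 695, 858], dtype=float)
retest = np.array([805, 780, 895, 700, 760, 815, 915, 770, 710, 845], dtype=float)

diff = retest - test
bias = diff.mean()                  # systematic bias (e.g. learning or fatigue effects)
sd_diff = diff.std(ddof=1)          # random error component

sem = sd_diff / np.sqrt(2)                              # standard error of measurement
cv = 100 * sem / np.concatenate([test, retest]).mean()  # coefficient of variation (%)
loa = (bias - 1.96 * sd_diff, bias + 1.96 * sd_diff)    # 95% limits of agreement

print(f"bias = {bias:.1f}, SEM = {sem:.1f}, CV = {cv:.1f}%, "
      f"LOA = ({loa[0]:.1f}, {loa[1]:.1f})")
```

A Bland-Altman plot of `diff` against the pairwise means would then show whether the error grows with the size of the measured value (heteroscedasticity), in which case the article recommends log-transforming the data and reporting ratio limits of agreement.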

13.
I. B. Tager, N. Künzli, F. Lurmann, L. Ngo, M. Segal, J. Balmes. Canadian Metallurgical Quarterly, 1998, (81): 27-78; discussion 109-121.
An extensive body of data supports a relation between acute exposures to ambient ozone and the occurrence of various acute respiratory symptoms and changes in measures of lung function. In contrast, relatively few data are available on the human health effects that result from long-term exposure to ambient ozone. Current efforts to study long-term ozone-related health effects are limited by the methods available for ascertaining lifetime exposures to ozone. The present feasibility study was undertaken as part of the Health Effects Institute's Environmental Epidemiology Planning Project (Health Effects Institute 1994) to (1) determine whether, in the context of an epidemiologic study, reliable estimates can be obtained for lifetime exposures to ozone by combining estimates from lifetime residential histories, typical activity patterns during life, and residence-specific ambient ozone monitoring data; (2) identify the minimum data required to produce reliable estimates of lifetime exposure; and (3) analyze the relations between various estimates of lifetime ozone exposure and measures of lung function. A convenience sample of 175 first-year students at the University of California, Berkeley, who lived all of their lives in selected areas of California (the Los Angeles Basin or the San Francisco Bay Area), were studied on two occasions (test and retest, or test sessions 1 and 2), five to seven days apart. Residential and lifestyle data were obtained from a questionnaire; residence-based ambient ozone exposure values were assigned by interpolation of ambient ozone monitoring data to residential locations. Estimated lifetime exposure was based on average ozone levels between 10 a.m. and 6 p.m. and hours of exposure to ozone concentrations greater than 60 parts per billion (ppb). "Effective" lifetime exposure to ozone was based on a weighted average of estimated time spent in different ambient ozone environments as determined by different combinations of activity data. Pulmonary function was evaluated with flows and volumes from maximum expiratory flow-volume curves and the slope of phase III of the single-breath nitrogen washout (SBNW) curves. Although the test-retest reliability of the residential history was acceptably high only for first and second residences, most of the unreliability for other residences came from residences occupied for relatively short durations. Therefore, the test-retest reliability of estimated lifetime exposure to ozone was high, with intraclass correlations greater than 0.90 for all approaches evaluated. Multiple linear regression analyses showed a consistently negative relation between estimates of lifetime exposure to ozone and flows that reflect the physiology of pulmonary small airways. No relation was observed between lifetime ozone exposure and forced expiratory volume or the slope of phase III, and the relation between lifetime exposure and forced expiratory volume in one second was inconsistent. The results of the flow measures were unaffected by the method used to estimate lifetime exposure and gave effect estimates that were nearly identical. The data from this study indicate that useful and reproducible estimates of lifetime ozone exposure can be obtained in epidemiologic studies by using a residential history. However, the total burden of ozone to which the subjects were exposed cannot be determined accurately from such data. Nonetheless, the estimates so obtained appear to be associated with alterations in pulmonary function that are consistent with the predicted site of maximum effect of ozone in the human lung.
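For reference, the test-retest statistic quoted above (the intraclass correlation) can be computed from two sessions with a one-way random-effects ANOVA decomposition. The sketch below uses hypothetical exposure estimates, not the Berkeley data.

```python
import numpy as np

# Hypothetical lifetime-ozone exposure estimates for the same subjects at sessions 1 and 2.
s1 = np.array([410, 530, 295, 620, 480, 350, 570, 455, 390, 505], dtype=float)
s2 = np.array([425, 515, 310, 600, 470, 365, 560, 470, 400, 495], dtype=float)

data = np.column_stack([s1, s2])
n, k = data.shape

# One-way random-effects intraclass correlation, ICC(1,1).
grand = data.mean()
ms_between = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)
ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
print(f"ICC(1,1) = {icc:.3f}")   # the study reports values above 0.90 for all approaches
```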

14.
A random-effects probit model is developed for the case in which the outcome of interest is a series of correlated binary responses. These responses can be obtained as the product of a longitudinal response process where an individual is repeatedly classified on a binary outcome variable (e.g., sick or well on occasion t), or in "multilevel" or "clustered" problems in which individuals within groups (e.g., firms, classes, families, or clinics) are considered to share characteristics that produce similar responses. Both examples produce potentially correlated binary responses, and modeling these person- or cluster-specific effects is required. The general model permits analysis both at the level of the individual and cluster and at the level at which experimental manipulations are applied (e.g., treatment group). The model provides maximum likelihood estimates for time-varying and time-invariant covariates in the longitudinal case, and for covariates which vary at the level of the individual and at the cluster level for multilevel problems. Equal numbers of individuals within clusters or of measurement occasions within individuals are not required. Empirical Bayesian estimates of person-specific trends or cluster-specific effects are provided. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
Discusses the appropriate use of the analysis of covariance for cases in which groups differ substantially on a variable that is entered as a covariate. The erroneous notions that groups must not differ significantly on the covariate and that covariates must be measured without error are rejected. Selective nonrandom assignment of Ss to groups on the basis of an observed variable that is measured with error can result in groups that differ substantially, but it is shown that conventional analysis of covariance provides unbiased estimates of the true treatment effects, in spite of the initial group differences. In other cases, correction for attenuation due to measurement error is required to obtain unbiased estimates of true treatment effects. (19 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
OBJECTIVES: Estimates of long-term average exposure to occupational hazards are often imprecise because intraindividual variability in exposure can be large and exposure is usually based on one or a few measurements. One potential result is bias of exposure-response relationships. We studied whether a more valid measure of exposure could be obtained by modeling exposure from simple, measurable exposure surrogates, thereby increasing the number of days with exposure estimates. METHODS: In a group of 198 Dutch pig farmers, exposure to endotoxins was measured on one workday in summer and one day in winter. Farmers recorded activity patterns during one week in both seasons, and farm characteristics were evaluated. Relationships between farm characteristics and activities and log-transformed measured exposure levels were quantified in a multiple regression analysis. Exposure was estimated for 14 days with known activity patterns. RESULTS: The ratio of intraindividual to interindividual variance in log-transformed measured exposure was 4.7. Given this ratio, the true regression coefficient of lung function on exposure would potentially be attenuated by 70%. The variance ratio for predicted exposures was only 1.2, and the potential attenuation by variation in exposure estimates was decreased to 8%. There was no relationship between lung function and measured exposure. Modeled long-term average exposure was inversely related to baseline lung function; it reached statistical significance for asymptomatic farmers. CONCLUSIONS: The results suggest that the presented strategy offers a possibility to minimize measurement effort in occupational epidemiologic studies without apparent loss of statistical power.
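The attenuation figures quoted above are consistent with the standard expression for the slope attenuation expected when exposure is the mean of k error-prone measurements, lambda = 1 / (1 + variance ratio / k), assuming the exposure metric is the mean of the two measured days or of the 14 modeled days. A quick check:

```python
def attenuation(variance_ratio, k):
    """Expected attenuation factor for a regression slope when exposure is the mean of
    k repeated measurements and variance_ratio = within- / between-worker variance."""
    return 1.0 / (1.0 + variance_ratio / k)

# Measured exposure: ratio 4.7, two sampling days -> slope retains ~30% (attenuated ~70%).
print(f"measured exposure: lambda = {attenuation(4.7, 2):.2f}")
# Modeled exposure: ratio 1.2, 14 days of estimates -> slope retains ~92% (attenuated ~8%).
print(f"modeled exposure:  lambda = {attenuation(1.2, 14):.2f}")
```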

17.
The presence of random measurement error in indicators of theoretical constructs biases observed estimates of relations among those constructs. Correcting for this bias is particularly important when random measurement error is substantial or is substantially different for indicators of distinct constructs included in a theoretical model. Validity assessment in the case of thematic apperceptive measures of the achievement motive (TAT n Achievement) has been vulnerable to interpretive errors because these indicators of the achievement motive are typically much less reliable than indicators of other constructs to which the motive may be related, and no correction has been made for the bias introduced by such differential measurement error. A causal modeling approach to validity assessment for TAT n Achievement is presented that incorporates explicit true-score measurement models of theoretical constructs. Data from J. Veroff et al (1981) on 413 adult US males confirm the hypothesis that the achievement motive construct is positively related to work satisfaction. Evidence for the discriminant validity of story content as opposed to story length, an issue raised in the literature on the TAT, is also presented in this nomological network. (56 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
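The correction alluded to here is the classic disattenuation of a correlation for unreliability in both measures, r_true = r_observed / sqrt(rel_x * rel_y). The sketch below uses illustrative reliabilities and an illustrative observed correlation, not values from the Veroff et al. data.

```python
import math

def disattenuate(r_observed, rel_x, rel_y):
    """Classic correction for attenuation: estimated correlation between true scores,
    given the reliabilities of the two observed measures."""
    return r_observed / math.sqrt(rel_x * rel_y)

# Hypothetical values: a low-reliability TAT n Achievement score correlated with a more
# reliable work-satisfaction scale (numbers are illustrative only).
r_obs = 0.15
rel_tat, rel_satisfaction = 0.35, 0.85
print(f"disattenuated r = {disattenuate(r_obs, rel_tat, rel_satisfaction):.2f}")
```

The point made in the abstract is exactly this asymmetry: when one indicator (the TAT) is much less reliable than the others, uncorrected correlations understate its relations to other constructs and invite mistaken conclusions about validity.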

18.
Reports an error in "The measurement of individual change" by Gideon J. Mellenbergh and Wulfert P. van den Brink (Psychological Methods, 1998[Dec], Vol 3[4], 470-485). This article contained errors in a series of equations. The corrected equations are provided in the erratum. (The following abstract of the original article appeared in record 1998-11538-005.) Models of the Wiener simplex type are described for a single participant's change in a multiwave study. Individual test score change models for continuous scores are based on classical test theory and for discrete scores on the binomial error model. Subsequently, these models are generalized to individual item response change models. In addition, specific models are specified for the item change parameters. The emphasis is on single-subject change models, but they can be extended to population models by making assumptions at the population level. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
In an analysis of H. L. Roediger and K. B. McDermott's (see record 1995-42833-001) false-memory paradigm, M. B. Miller and G. L. Wolford (see record 1999-13930-007) argued that falsely recognized items occur because a bias toward calling such items "old" is created by their membership in a studied category. This interpretation was contested by Roediger and McDermott (see record 2000-15248-006). The authors of this article approach this issue as a statistical decision problem and observe that an explanation of false memory based on stored strengths and one based on decision process can have identical implications for data. Problems with equivalent formal models of this type can frequently be resolved by looking at the effects of other variables on the fitted estimates. The authors illustrate this analysis by examining the effects of presentation duration on the parameter estimates produced by models that instantiate the 2 explanations. Although the question remains open, the storage-based interpretation was found to be somewhat more plausible. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
Reports an error in "Twins and the study of rater (dis)agreement" by Meike Bartels, Dorret I. Boomsma, James J. Hudziak, Toos C. E. M. van Beijsterveldt and Edwin J. C. G. van den Oord (Psychological Methods, 2007[Dec], Vol 12[4], 451-466). The DOI for the supplemental materials was printed incorrectly. The correct DOI is as follows: http://dx.doi.org/10.1037/1082-989X.12.4.451.supp. (The following abstract of the original article appeared in record 2007-18729-006.) Genetically informative data can be used to address fundamental questions concerning the measurement of behavior in children. The authors illustrate this with longitudinal multiple-rater data on internalizing problems in twins. Valid information on the behavior of a child is obtained for behavior that multiple raters agree upon and for rater-specific perception of the child's behavior. Rater-disagreement variance, σ²(rd), accounted for 35% of the individual differences in internalizing behavior. Up to 17% of this σ²(rd) was accounted for by rater-specific additive genetic variance, σ²(Au). Thus, the disagreement should not be considered only as bias/error but also as representing the unique feature of the relationship between that parent and the child. The longitudinal extension of this model helps to make a distinction between measurement error and the raters' unique perception of the child's behavior. For internalizing behavior, the results show large stability across time, which is accounted for by common additive genetic and common shared environmental factors. Rater-specific shared environmental factors show substantial influence on stability. This could mean that rater bias may be persistent and affect longitudinal studies. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
