Similar Articles
20 similar articles were retrieved.
1.
Using biomarkers to inform cumulative risk assessment
BACKGROUND: Biomarkers are considered the method of choice for determining exposure to environmental contaminants and relating such exposures to health outcomes. However, the association between many biomarkers and outcome is not direct because of variability in sensitivity and susceptibility in the individual. OBJECTIVES: We explore the relationship between environmental exposures and health outcomes as mitigated by differential susceptibility in individuals or populations and address the question "Can biomarkers enable us to understand and quantify better the population burden of disease and health effects attributable to environmental exposures?" METHODS: We use a case-study approach to develop the thesis that biomarkers offer a pathway to disaggregation of health effects into specific, if multiple, risk factors. We offer the point of view that a series or array of biomarkers, including biomarkers of exposure, biomarkers of susceptibility, and biomarkers of effect, used in concert offer the best means by which to effect this disaggregation. We commence our discussion by developing the characteristics of an ideal biomarker, then give some examples of commonly used biomarkers to show the strengths and weaknesses of current usage. We follow this with a more detailed case-study assessment outlining the state of the science in specific cases. We complete our work with recommendations regarding the future use of biomarkers and areas for continued development. CONCLUSIONS: The case studies provide examples of when and how biomarkers can be used to infer the source and magnitude of exposure among a set of competing sources and pathways. The answer to this question is chemical-specific and relates to how well the biomarker matches the characteristics of an "ideal" biomarker, in particular ease of collection and persistence. The use of biomarkers in combination provides a better opportunity to disaggregate both source and pathway contributions.

2.
Estimating probit models with self-selected treatments
Outcomes research often requires estimating the impact of a binary treatment on a binary outcome in a non-randomized setting, such as the effect of taking a drug on mortality. The data often come from self-selected samples, leading to a spurious correlation between the treatment and outcome when standard binary dependent variable techniques, like logit or probit, are used. Intuition suggests that a two-step procedure (analogous to two-stage least squares) might be sufficient to deal with this problem if variables are available that are correlated with the treatment choice but not the outcome. This paper demonstrates the limitations of such a two-step procedure. We show that such estimators will not generally be consistent. We conduct a Monte Carlo exercise to compare the performance of the two-step probit estimator, the two-stage least squares linear probability model estimator, and the multivariate probit. The results from this exercise argue in favour of using the multivariate probit rather than the two-step or linear probability model estimators, especially when there is more than one treatment, when the average probability of the dependent variable is close to 0 or 1, or when the data generating process is not normal. We demonstrate how these different methods perform in an empirical example examining the effect of private and public insurance coverage on the mortality of HIV+ patients.
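A minimal simulation sketch of the problem this abstract describes, not the authors' code: when the treatment and outcome equations share correlated unobservables, a naive probit of the outcome on a self-selected treatment reports a sizeable "effect" even when the true effect is zero. It assumes numpy and statsmodels as tools; all parameter values are illustrative.

```python
# Illustrative only: naive probit under treatment self-selection.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 20_000

# Instrument: affects treatment choice but not the outcome directly.
z = rng.normal(size=n)

# Correlated latent errors (rho = 0.5) create the self-selection problem.
rho = 0.5
cov = np.array([[1.0, rho], [rho, 1.0]])
u_d, u_y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

d = (0.8 * z + u_d > 0).astype(float)          # self-selected binary treatment
true_effect = 0.0                               # treatment truly has no effect
y = (true_effect * d + u_y > 0).astype(float)   # binary outcome

# Naive probit of y on d: the coefficient on d is biased away from zero.
naive = sm.Probit(y, sm.add_constant(d)).fit(disp=False)
print("naive probit coefficient on treatment:", round(naive.params[1], 3))
# Roughly 0.6-0.7 in this setup even though the true effect is 0 -- the
# spurious correlation the abstract describes; a joint model of the two
# equations (e.g. a multivariate probit) is needed to recover the truth.
```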

3.
Mendelian randomization, the use of genetic variants as instrumental variables (IV), can test for and estimate the causal effect of an exposure on an outcome. Most IV methods assume that the function relating the exposure to the expected value of the outcome (the exposure-outcome relationship) is linear. However, in practice, this assumption may not hold. Indeed, often the primary question of interest is to assess the shape of this relationship. We present two novel IV methods for investigating the shape of the exposure-outcome relationship: a fractional polynomial method and a piecewise linear method. We divide the population into strata using the exposure distribution, and estimate a causal effect, referred to as a localized average causal effect (LACE), in each stratum of the population. The fractional polynomial method performs metaregression on these LACE estimates. The piecewise linear method estimates a continuous piecewise linear function, the gradient of which is the LACE estimate in each stratum. Both methods were demonstrated in a simulation study to estimate the true exposure-outcome relationship well, particularly when the relationship was a fractional polynomial (for the fractional polynomial method) or was piecewise linear (for the piecewise linear method). The methods were used to investigate the shape of the relationship of body mass index with systolic and diastolic blood pressure.
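A hedged toy sketch of the stratified ("localized") causal-effect idea described above. Stratifying on the IV-free residual exposure (exposure minus its genetic contribution) is an assumption of this sketch, not necessarily the authors' exact implementation, and the Wald ratio per stratum is used as a simple LACE estimator; only numpy is assumed.

```python
# Toy LACE sketch: stratified Wald ratios under a quadratic causal effect.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

g = rng.binomial(2, 0.3, size=n)          # genetic instrument (allele count)
u = rng.normal(size=n)                    # unmeasured confounder
x = 0.5 * g + u + rng.normal(size=n)      # exposure
y = 0.2 * x**2 + u + rng.normal(size=n)   # nonlinear (quadratic) causal effect

# IV-free exposure: remove the genetic contribution before stratifying,
# so strata are not defined by the instrument itself.
beta_gx = np.cov(g, x)[0, 1] / np.var(g)
resid = x - beta_gx * g
strata = np.digitize(resid, np.quantile(resid, [0.25, 0.5, 0.75]))

for s in range(4):
    idx = strata == s
    lace = (np.cov(g[idx], y[idx])[0, 1] / np.var(g[idx])) / \
           (np.cov(g[idx], x[idx])[0, 1] / np.var(g[idx]))
    print(f"stratum {s}: LACE ~ {lace:.2f}, mean exposure ~ {x[idx].mean():.2f}")
# The LACE estimates should increase with the stratum's mean exposure,
# tracing the derivative (~0.4 * x) of the quadratic exposure-outcome curve.
```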

4.
BACKGROUND: Recently there has been considerable debate about possible false positive study outcomes. Several well-known epidemiologists have expressed their concern and the possibility that epidemiological research may lose credibility with policy makers as well as the general public. METHODS: We have identified 75 false positive studies and 150 true positive studies, all published reports and all epidemiological studies reporting results on substances or work processes generally recognized as being carcinogenic to humans. All studies were scored on a number of design characteristics and factors relating to the specificity of the research objective. These factors included type of study design, use of cancer registry data, adjustment for smoking and other factors, availability of exposure data, dose- and duration-effect relationship, magnitude of the reported relative risk, whether the study was considered a 'fishing expedition', affiliation and country of the first author. RESULTS: The strongest factor associated with the false positive or true positive study outcome was whether the study had a specific a priori hypothesis. Fishing expeditions had an over threefold odds ratio of being false positive. Factors that decreased the odds ratio of a false positive outcome included observing a dose-effect relationship, adjusting for smoking and not using cancer registry data. CONCLUSION: The results of the analysis reported here clearly indicate that a study with a specific a priori study objective should be valued more highly in establishing a causal link between exposure and effect than a mere fishing expedition.

5.
Sample size requirements for epidemiologic studies are usually determined on the basis of the desired level of statistical power. Suppose, however, that one is planning a study in which the participants' true exposure levels are unobservable. Instead, the analysis will be based on an imprecise surrogate measure that differs from true exposure by some non-negligible amount of measurement error. Sample size estimates for tests of association between the surrogate exposure measure and the outcome of interest may be misleading if they are based solely on the anticipated characteristics of the distribution of surrogate measures in the study population. We examine the accuracy of sample size estimates for cohort studies in which a continuous surrogate exposure measure is subject to either classical or Berkson measurement error. In particular, we evaluate the consequences of not adjusting the sample size estimation procedure for tests based on imprecise exposure measurements to account for anticipated differences between the distributions of the true exposure and the surrogate measure in the study population. As expected, failure to adjust for classical measurement error can lead to underestimation of the required sample size. Disregard of Berkson measurement error, however, can result in sample size estimates that exceed the actual number of participants required for tests of association between the outcome and the surrogate exposure measure. We illustrate this Berkson error effect by estimating sample size for a hypothetical cohort study that examines an association between childhood exposure to radioiodine and the development of thyroid neoplasms. © 1998 John Wiley & Sons, Ltd.
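A back-of-envelope sketch of the classical-error side of this point only (it does not attempt the Berkson case or the paper's cohort design): with a surrogate W = X + U, the observable correlation is attenuated by the square root of the reliability ratio, so sizing the study on the correlation with the true exposure understates the required n. The correlation test, the Fisher-z approximation, and all numerical values are illustrative assumptions.

```python
# Classical measurement error and sample size: illustrative numbers only.
import math

def n_for_correlation(rho, alpha=0.05, power=0.80):
    """Approximate n to detect correlation rho via Fisher's z transform."""
    z_alpha, z_beta = 1.96, 0.84          # two-sided 5%, 80% power
    return math.ceil(((z_alpha + z_beta) / math.atanh(rho)) ** 2) + 3

rho_true = 0.20        # assumed correlation with the *true* exposure
reliability = 0.5      # var(X) / (var(X) + var(U)) for the surrogate
rho_surrogate = rho_true * math.sqrt(reliability)

print("n based on true exposure:     ", n_for_correlation(rho_true))       # ~194
print("n based on surrogate measure: ", n_for_correlation(rho_surrogate))  # ~390
# With reliability 0.5 the required sample size roughly doubles.
```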

6.
Any genome-wide analysis is hampered by reduced statistical power due to multiple comparisons. This is particularly true for interaction analyses, which have lower statistical power than analyses of associations. To assess gene–environment interactions in population settings we have recently proposed a statistical method based on a modified two-step approach, where first genetic loci are selected by their associations with disease and environment, respectively, and subsequently tested for interactions. We have simulated various data sets resembling real world scenarios and compared single-step and two-step approaches with respect to true positive rate (TPR) in 486 scenarios and (study-wide) false positive rate (FPR) in 252 scenarios. Our simulations confirmed that in all two-step methods the two steps are not correlated. In terms of TPR, two-step approaches combining information on gene-disease association and gene–environment association in the first step were superior to all other methods, while preserving a low FPR in over 250 million simulations under the null hypothesis. Our weighted modification yielded the highest power across various degrees of gene–environment association in the controls. An optimal threshold for step 1 depended on the interacting allele frequency and the disease prevalence. In all scenarios, the least powerful method was to proceed directly to an unbiased full interaction model, applying conventional genome-wide significance thresholds. This simulation study confirms the practical advantage of two-step approaches to interaction testing over more conventional one-step designs, at least in the context of dichotomous disease outcomes and other parameters that might apply in real-world settings.
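A hedged sketch of a generic two-step interaction screen in the spirit of the approach described above: screen markers first at a lenient threshold, then test interactions only for those that pass, with a multiplicity correction based on the much smaller carried-forward set. The screening statistic (a marginal gene-disease test only), the thresholds, and the linear-probability interaction test are illustrative simplifications, not the authors' weighted procedure; numpy and scipy are assumed.

```python
# Generic two-step gene-environment interaction screen (illustrative).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_subjects, n_snps = 5_000, 1_000

G = rng.binomial(2, 0.3, size=(n_subjects, n_snps)).astype(float)  # genotypes
E = rng.binomial(1, 0.5, size=n_subjects).astype(float)            # exposure
# Disease depends on a G x E interaction at the first SNP only.
logit = -2.0 + 1.0 * G[:, 0] * E
D = rng.binomial(1, 1 / (1 + np.exp(-logit))).astype(float)

# Step 1: screen SNPs on their marginal association with disease
# (case/control genotype-mean difference) at a lenient threshold.
t_stat, p_screen = stats.ttest_ind(G[D == 1], G[D == 0], axis=0)
passed = np.where(p_screen < 0.05)[0]

# Step 2: test the G x E interaction only for SNPs that passed, with a
# Bonferroni threshold based on the number of SNPs carried forward.
alpha2 = 0.05 / max(len(passed), 1)
for j in passed:
    X = np.column_stack([np.ones(n_subjects), G[:, j], E, G[:, j] * E])
    # crude Wald test for the interaction term via linear-probability OLS
    beta, *_ = np.linalg.lstsq(X, D, rcond=None)
    resid = D - X @ beta
    sigma2 = resid @ resid / (n_subjects - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[3, 3])
    p_int = 2 * stats.norm.sf(abs(beta[3] / se))
    if p_int < alpha2:
        print(f"SNP {j}: interaction p = {p_int:.2e} (threshold {alpha2:.1e})")
```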

7.
Although intra- and interindividual sources of variation in airborne exposures have been extensively studied, similar investigations examining variability in biological measures of exposure have been limited. Following a review of the world's published literature, biological monitoring data were abstracted from 53 studies that examined workers' exposures to metals, solvents, polycyclic aromatic hydrocarbons, and pesticides. Approximately 40% of the studies also reported personal sampling results, which were compiled as well. In this study, the authors evaluated the intra- and interindividual sources of variation in biological measures of exposure collected on workers employed at the same plant. In 60% of the data sets, there was more variation among workers than variation from day to day. Approximately one-fourth of the data were homogeneous with small differences among workers' mean exposure levels. However, an almost equal number of data sets exhibited moderate to extreme levels of heterogeneity in exposures among workers at the same facility. In addition, the relative magnitude of the intra- to interindividual source of variation was larger for biomarkers with short half-lives than for those with long half-lives, which suggests that biomarkers with half-lives of 7 days or longer exhibit physiologic dampening of fluctuations in external levels of the workplace contaminant and thereby may offer advantages when compared to short-lived biomarkers or exposures assessed by air monitoring. The use of biological indices of exposure, however, places an additional burden on the strategy used to evaluate exposures, because data may be serially correlated as evidenced in this study, which could result in biased estimates of the variance components if autocorrelation is undetected or ignored in the statistical analyses.
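A minimal sketch, on simulated data rather than the paper's compiled studies, of the variance decomposition this abstract relies on: repeated log-transformed biomarker measurements split into between-worker and within-worker (day-to-day) components using balanced one-way ANOVA estimators. Worker and day counts and the true variances are illustrative; only numpy is assumed.

```python
# Between- vs within-worker variance components (balanced one-way ANOVA).
import numpy as np

rng = np.random.default_rng(3)
n_workers, n_days = 20, 5

true_between, true_within = 0.40, 0.25                   # log-scale variances
worker_mean = rng.normal(0.0, np.sqrt(true_between), size=n_workers)
logx = worker_mean[:, None] + rng.normal(0.0, np.sqrt(true_within),
                                         size=(n_workers, n_days))

grand = logx.mean()
ms_between = n_days * ((logx.mean(axis=1) - grand) ** 2).sum() / (n_workers - 1)
ms_within = ((logx - logx.mean(axis=1, keepdims=True)) ** 2).sum() \
            / (n_workers * (n_days - 1))

sigma2_within = ms_within
sigma2_between = max((ms_between - ms_within) / n_days, 0.0)
print(f"within-worker (day-to-day) variance: {sigma2_within:.2f}")
print(f"between-worker variance:             {sigma2_between:.2f}")
print(f"intra/inter ratio:                   {sigma2_within / sigma2_between:.2f}")
```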

8.
Measurement error in both the exposure and the outcome is a common problem in epidemiologic studies. Measurement errors in the exposure and the outcome are said to be independent of each other if the measured exposure and the measured outcome are statistically independent conditional on the true exposure and true outcome (and dependent otherwise). Measurement error is said to be nondifferential if measurement of the exposure does not depend on the true outcome conditional on the true exposure and vice versa; otherwise it is said to be differential. Few results on differential and dependent measurement error are available in the literature. Here the authors use formal rules governing associations on signed directed acyclic graphs (DAGs) to draw conclusions about the presence and sign of causal effects under differential and dependent measurement error. The authors apply these rules to 4 forms of measurement error: independent nondifferential, dependent nondifferential, independent differential, and dependent differential. For a binary exposure and outcome, the authors generalize Weinberg et al.'s (Am J Epidemiol. 1994;140(6):565-571) result for nondifferential measurement error on preserving the direction of a trend to settings which also allow measurement error in the outcome and to cases involving dependent and/or differential error.

9.
Continuous measurements are often dichotomized for classification of subjects. This paper evaluates two procedures for determining a best cutpoint for a continuous prognostic factor with right censored outcome data. One procedure selects the cutpoint that minimizes the significance level of a logrank test comparing the two groups defined by the cutpoint. This procedure adjusts the significance level for maximal selection. The other procedure uses a cross-validation approach. The latter easily extends to accommodate multiple other prognostic factors. We compare the methods in terms of statistical power and bias in estimation of the true relative risk associated with the prognostic factor. Both procedures produce approximately the correct type I error rate. Use of a maximally selected cutpoint without adjustment of the significance level, however, results in a substantially elevated type I error rate. The cross-validation procedure unbiasedly estimated the relative risk under the null hypothesis while the procedure based on the maximally selected test resulted in an upward bias. When the relative risk for the two groups defined by the covariate and true changepoint was small, the cross-validation procedure provided greater power than the maximally selected test. The cross-validation-based estimate of relative risk was unbiased while the procedure based on the maximally selected test produced a biased estimate. As the true relative risk increased, the power of the maximally selected test was about 10 per cent greater than the power obtained using cross-validation. The maximally selected test overestimated the relative risk by about 10 per cent. The cross-validation procedure produced at most 5 per cent underestimation of the true relative risk. Finally, we report the effect of dichotomizing a continuous non-linear relationship between covariate and risk. We compare using a linear proportional hazard model to using models based on optimally selected cutpoints. Our simulation study indicates that we can have a substantial loss of statistical power when we use cutpoint models in cases where there is a continuous relationship between covariate and risk.

10.
Comparative analyses of safety/tolerability data from a typical phase III randomized clinical trial generate multiple p-values associated with adverse experiences (AEs) across several body systems. A common approach is to 'flag' any AE with a p-value less than or equal to 0.05, ignoring the multiplicity problem. Despite the fact that this approach can result in excessive false discoveries (false positives), many researchers avoid a multiplicity adjustment to curtail the risk of missing true safety signals. We propose a new flagging mechanism that significantly lowers the false discovery rate (FDR) without materially compromising the power for detecting true signals, relative to the common no-adjustment approach. Our simple two-step procedure is an enhancement of the Mehrotra-Heyse-Tukey approach that leverages the natural grouping of AEs by body systems. We use simulations to show that, on the basis of FDR and power, our procedure is an attractive alternative to the following: (i) the no-adjustment approach; (ii) a one-step FDR approach that ignores the grouping of AEs by body systems; and (iii) a recently proposed two-step FDR approach for much larger-scale settings such as genome-wide association studies. We use three clinical trial examples for illustration.
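A hedged sketch of a generic grouped two-step FDR screen in the spirit of the procedure above: adjust within each body system, carry one representative p-value per system to a between-system FDR step, and only flag AEs inside systems that survive. The choice of representative p-value, thresholds, and the simulated p-values are illustrative and not the authors' exact rules; numpy and statsmodels are assumed.

```python
# Grouped two-step FDR flagging of adverse experiences (illustrative).
import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(4)

# Simulated AE p-values grouped by body system; one system harbors true signals.
systems = {
    "cardiac":      rng.uniform(size=8),
    "hepatic":      np.append(rng.uniform(size=6), [0.0004, 0.002]),  # signals
    "dermatologic": rng.uniform(size=10),
    "neurologic":   rng.uniform(size=12),
}

# Step 1: Benjamini-Hochberg adjustment within each body system.
adjusted = {s: multipletests(p, method="fdr_bh")[1] for s, p in systems.items()}

# Step 2: BH across systems, using each system's smallest adjusted p-value.
names = list(systems)
rep_p = np.array([adjusted[s].min() for s in names])
keep_system = multipletests(rep_p, alpha=0.05, method="fdr_bh")[0]

for s, keep in zip(names, keep_system):
    if keep:
        flagged = np.where(adjusted[s] <= 0.05)[0]
        print(f"{s}: flag AE indices {flagged.tolist()}")
```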

11.
Physiologically based pharmacokinetic (PBPK) modeling is a well-established toxicological tool designed to relate exposure to a target tissue dose. The emergence of federal and state programs for environmental health tracking and the availability of exposure monitoring through biomarkers creates the opportunity to apply PBPK models to estimate exposures to environmental contaminants from urine, blood, and tissue samples. However, reconstructing exposures for large populations is complicated by often having too few biomarker samples, large uncertainties about exposures, and large interindividual variability. In this paper, we use an illustrative case study to identify some of these difficulties and to demonstrate a process for confronting them by reconstructing population-scale exposures using Bayesian inference. The application consists of interpreting biomarker data from eight adult males with controlled exposures to trichloroethylene (TCE) as if the biomarkers were random samples from a large population with unknown exposure conditions. The TCE concentrations in blood from the individuals fell into two distinctly different groups even though the individuals were simultaneously in a single exposure chamber. We successfully reconstructed the exposure scenarios for both subgroups, although the reconstruction for one subgroup differs from what is believed to be the true experimental conditions. We were, however, unable to predict the concentration of TCE in air with high certainty.
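A toy sketch of the inference pattern only: a one-parameter proportional blood/air relation stands in for a full PBPK model, and a grid posterior is computed over the unknown exposure given a handful of biomarker samples. The proportionality constant, error scale, prior, and exposure value are all assumptions of this sketch, not the paper's TCE model; only numpy is assumed.

```python
# Grid-based Bayesian exposure reconstruction with a toy "PBPK" relation.
import numpy as np

rng = np.random.default_rng(5)

k_model = 0.8            # assumed blood/air proportionality (toy model output)
sigma_log = 0.3          # log-scale measurement + interindividual variability
true_exposure = 25.0     # ppm, unknown to the analyst

# A handful of biomarker samples, as in a small population sample.
blood = k_model * true_exposure * np.exp(rng.normal(0, sigma_log, size=8))

# Grid prior over plausible exposures, flat on the log scale.
grid = np.linspace(1.0, 100.0, 2_000)
log_prior = -np.log(grid)

# Log-likelihood of the observed blood levels under each candidate exposure.
log_lik = np.array([
    -0.5 * np.sum(((np.log(blood) - np.log(k_model * c)) / sigma_log) ** 2)
    for c in grid
])
log_post = log_prior + log_lik
post = np.exp(log_post - log_post.max())
post /= post.sum()

mean_post = (grid * post).sum()
print(f"posterior mean exposure ~ {mean_post:.1f} ppm (truth: {true_exposure})")
```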

12.
The burgeoning interest in the field of epigenetics has precipitated the need to develop approaches to strengthen causal inference when considering the role of epigenetic mediators of environmental exposures on disease risk. Epigenetic markers, like any other molecular biomarker, are vulnerable to confounding and reverse causation. Here, we present a strategy, based on the well-established framework of Mendelian randomization, to interrogate the causal relationships between exposure, DNA methylation and outcome. The two-step approach first uses a genetic proxy for the exposure of interest to assess the causal relationship between exposure and methylation. A second step then utilizes a genetic proxy for DNA methylation to interrogate the causal relationship between DNA methylation and outcome. The rationale, origins, methodology, advantages and limitations of this novel strategy are presented.
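A hedged sketch of the two-step idea on simulated data: step 1 uses a genetic proxy for the exposure to estimate the exposure-to-methylation effect, and step 2 uses an independent genetic proxy for methylation to estimate the methylation-to-outcome effect. Simple Wald ratios stand in for the full analysis, and all effect sizes are illustrative; only numpy is assumed.

```python
# Two-step Mendelian randomization via Wald ratios (illustrative).
import numpy as np

rng = np.random.default_rng(6)
n = 200_000

g_exp = rng.binomial(2, 0.3, size=n)    # genetic proxy for the exposure
g_meth = rng.binomial(2, 0.3, size=n)   # genetic proxy (mQTL) for methylation
u = rng.normal(size=n)                  # confounding of the observational links

exposure = 0.4 * g_exp + u + rng.normal(size=n)
methylation = 0.3 * exposure + 0.5 * g_meth + u + rng.normal(size=n)
outcome = 0.2 * methylation + u + rng.normal(size=n)

def slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x)

# Step 1: causal effect of exposure on methylation via the exposure proxy.
step1 = slope(g_exp, methylation) / slope(g_exp, exposure)
# Step 2: causal effect of methylation on outcome via the methylation proxy.
step2 = slope(g_meth, outcome) / slope(g_meth, methylation)

print(f"step 1 (exposure -> methylation): {step1:.2f}  (truth 0.3)")
print(f"step 2 (methylation -> outcome):  {step2:.2f}  (truth 0.2)")
```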

13.
Exposure measurement error is a problem in many epidemiological studies, including those using biomarkers and measures of dietary intake. Measurement error typically results in biased estimates of exposure-disease associations, the severity and nature of the bias depending on the form of the error. To correct for the effects of measurement error, information additional to the main study data is required. Ideally, this is a validation sample in which the true exposure is observed. However, in many situations, it is not feasible to observe the true exposure, but one or more repeated exposure measurements may be available, for example, blood pressure or dietary intake recorded at two time points. The aim of this paper is to provide a toolkit for measurement error correction using repeated measurements. We bring together methods covering classical measurement error and several departures from classical error: systematic, heteroscedastic and differential error. The correction methods considered are regression calibration, which is already widely used in the classical error setting, and moment reconstruction and multiple imputation, which are newer approaches with the ability to handle differential error. We emphasize practical application of the methods in nutritional epidemiology and other fields. We primarily consider continuous exposures in the exposure-outcome model, but we also outline methods for use when continuous exposures are categorized. The methods are illustrated using the data from a study of the association between fibre intake and colorectal cancer, where fibre intake is measured using a diet diary and repeated measures are available for a subset. © 2014 The Authors. Statistics in Medicine Published by John Wiley & Sons, Ltd.
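A minimal regression-calibration sketch with two replicate measurements per subject under classical error, on simulated data rather than the fibre/colorectal example: the within-person error variance is estimated from replicate differences, the replicate mean is shrunk toward its overall mean by the reliability ratio, and the outcome model slope is effectively de-attenuated. Only numpy is assumed and all variances are illustrative.

```python
# Regression calibration with two replicates (classical error assumed).
import numpy as np

rng = np.random.default_rng(7)
n = 50_000
beta_true = 0.5

x = rng.normal(0.0, 1.0, size=n)                  # true long-term exposure
w1 = x + rng.normal(0.0, 1.0, size=n)             # replicate measurements
w2 = x + rng.normal(0.0, 1.0, size=n)
y = beta_true * x + rng.normal(0.0, 1.0, size=n)  # continuous outcome

wbar = (w1 + w2) / 2
var_u = 0.5 * np.var(w1 - w2)                     # within-person error variance
var_x = np.var(wbar) - var_u / 2                  # true-exposure variance
lam = var_x / (var_x + var_u / 2)                 # reliability of the 2-rep mean

# Calibrated exposure: best linear predictor of X given the replicate mean.
x_cal = wbar.mean() + lam * (wbar - wbar.mean())

naive = np.cov(wbar, y)[0, 1] / np.var(wbar)
calibrated = np.cov(x_cal, y)[0, 1] / np.var(x_cal)
print(f"naive slope:      {naive:.2f}   (attenuated)")
print(f"calibrated slope: {calibrated:.2f}   (close to {beta_true})")
```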

14.
The relationship between type of biopsy-mastectomy procedure and four sets of independent variables (physician, patient, hospital and tumor characteristics) was examined for 993 locally-staged breast cancer patients diagnosed between 1974 and 1981 in 13 counties of western Washington State. Time trends were also investigated. Cases were selected from records of a population-based cancer registry. The frequency of the two-step procedure, in which biopsy and surgery occur on different days, increased from 28 to 57% during the 8-year study period. Use of the two-step procedure was associated with younger age of the patient, less suspicious symptoms or mammogram results, younger physician cohorts and hospitals with government and health maintenance organization proprietorship. These relationships remained unchanged after controlling for potentially confounding variables. There were also substantial differences between individual hospitals in the frequency of the two-step procedure, suggesting differing schools of thought in different locales.

15.
Epidemiological studies and clinical data confirm that occupational exposure to carcinogenic agents plays an important role in cancer etiology. Recent tremendous progress in understanding the mechanisms of carcinogenesis, together with the introduction of new tests that recognize changes occurring in the exposed organism, has made it possible for occupational medicine to detect the earliest cancer stages, which occur during the latent phase of the disease. Detecting pre-neoplastic changes that precede an overt form of cancer and identifying measurable indicators of those changes have been among the fundamental aims of molecular biology research. Biomarkers may serve as a research tool that makes it possible to achieve this aim. Suitably selected biomarker sets can provide information on the extent of the exposure to carcinogenic agents (biomarkers of exposure), detect early changes produced by the agents in the exposed organism (biomarkers of effects), and identify people with particularly high cancer risk (biomarkers of susceptibility). It will soon be possible to use molecular biomarkers, capable of detecting increased cancer risk at the molecular level of cell structure, in prophylactic action intended to reduce cancer incidence. Molecular biomarkers are capable of recording very early health effects of exposure to carcinogens, thus making it possible to determine cancer risk at a very early stage of cancer development.

16.
Modern epidemiological studies face opportunities and challenges posed by an ever‐expanding capacity to measure a wide range of environmental exposures, along with sophisticated biomarkers of exposure and response at the individual level. The challenge of deciding what to measure is further complicated for longitudinal studies, where logistical and cost constraints preclude the collection of all possible measurements on all participants at every follow‐up time. This is true for the National Children's Study (NCS), a large‐scale longitudinal study that will enroll women both prior to conception and during pregnancy and collect information on their environment, their pregnancies, and their children's development through early adulthood—with a goal of assessing key exposure/outcome relationships among a cohort of approximately 100 000 children. The success of the NCS will significantly depend on the accurate, yet cost‐effective, characterization of environmental exposures thought to be related to the health outcomes of interest. The purpose of this paper is to explore the use of cost saving, yet valid and adequately powered statistical approaches for gathering exposure information within epidemiological cohort studies. The proposed approach involves the collection of detailed exposure assessment information on a specially selected subset of the study population, and collection of less‐costly, and presumably less‐detailed and less‐burdensome, surrogate measures across the entire cohort. We show that large‐scale efficiency in costs and burden may be achieved without making substantive sacrifices on the ability to draw reliable inferences concerning the relationship between exposure and health outcome. Several detailed scenarios are provided that document how the targeted sub‐sampling design strategy can benefit large cohort studies like the NCS, as well as other more focused environmental epidemiologic studies. Published in 2010 by John Wiley & Sons, Ltd.

17.
BACKGROUND: Assessment of the imprecision of exposure biomarkers usually focuses on laboratory performance only. Unrecognized imprecision leads to underestimation of the true toxicity of the exposure. We have assessed the total imprecision of exposure biomarkers and the implications for calculation of exposure limits. METHODS: In a birth cohort study, mercury concentrations in cord blood, cord tissue, and maternal hair were used as biomarkers of prenatal methylmercury exposure. We determined their mutual correlations and their associations with the child's neurobehavioral outcome variables at age 7 years. With at least three exposure parameters available, factor analysis and structural equation modeling could be applied to determine the total imprecision of each biomarker. The estimated imprecision was then applied to adjust benchmark dose calculations and the derived exposure limits. RESULTS: The exposure biomarkers correlated well with one another, but the cord blood mercury concentration showed the best associations with neurobehavioral deficits. Factor analysis and structural equation models showed a total imprecision of the cord-blood parameter of 25-30%, and almost twice as much for maternal hair. These imprecisions led to inflated benchmark dose levels. Adjusted calculations resulted in an exposure limit 50% below the level recommended by the U.S. National Research Council. CONCLUSIONS: The biomarker imprecisions of 25-50% much exceeded normal laboratory variability. Such imprecision causes underestimation of dose-related toxicity and therefore must be considered in the data analysis and when deriving exposure limits. Future studies should ideally include at least three exposure parameters to allow independent assessment of total imprecision.
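A sketch of the identification trick that having at least three exposure biomarkers allows: under a single-factor model X_i = lambda_i * F + e_i, the pairwise covariances give each biomarker's "signal" variance, and the remainder of its total variance is its total imprecision. The simulated loadings and error scales are arbitrary and the abstract's 25-30% figure is not reproduced; only numpy is assumed.

```python
# Total imprecision of three biomarkers from a one-factor model (illustrative).
import numpy as np

rng = np.random.default_rng(8)
n = 100_000

f = rng.normal(size=n)                       # true (latent) prenatal exposure
loadings = np.array([1.0, 0.9, 0.7])         # e.g. cord blood, cord tissue, hair
err_sd = np.array([0.5, 0.6, 1.0])           # measurement + biological error
X = np.outer(f, loadings) + rng.normal(size=(n, 3)) * err_sd

C = np.cov(X, rowvar=False)
for i in range(3):
    j, k = [m for m in range(3) if m != i]
    signal_var = C[i, j] * C[i, k] / C[j, k]  # lambda_i**2 * var(F)
    imprecision = 1.0 - signal_var / C[i, i]  # error share of total variance
    truth = err_sd[i] ** 2 / (loadings[i] ** 2 + err_sd[i] ** 2)
    print(f"biomarker {i}: imprecision ~ {imprecision:.0%} (truth {truth:.0%})")
```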

18.
Investigators sometimes use information obtained from multiple informants about a given variable. We focus on estimating the effect of a predictor on a continuous outcome, when that (true) predictor cannot be observed directly but is measured by 2 informants. We describe various approaches to using information from 2 informants to estimate a regression or correlation coefficient for the effect of the (true) predictor on the outcome. These approaches include methods we refer to as single informant, simple average, optimal weighted average, principal components analysis, and classical measurement error. Each of these 5 methods effectively uses a weighted average of the informants' reports as a proxy for the true predictor in calculating the correlation or regression coefficient. We compare the performance of these methods in simulation experiments that assume a rounded congeneric measurement model for the relationship between the informants' reports and a true predictor that is a mixture of zeros and positively distributed continuous values. We also compare the methods' performance in a real-data example: the relationship between vigorous physical activity (the predictor) and body mass index (the continuous outcome). The results of the simulations and the example suggest that the simple average is a reasonable choice when there are only 2 informants.

19.
Studies of the effects of environmental exposures on human health typically require estimation of both exposure and outcome. Standard methods for the assessment of the association between exposure and outcome include multiple linear regression analysis, which assumes that the outcome variable is observed with error, while the levels of exposure and other explanatory variables are measured with complete accuracy, so that there is no deviation of the measured from the actual value. The term measurement error in this discussion refers to the difference between the actual or true level and the value that is actually observed. In investigations of the effects of prenatal methylmercury (MeHg) exposure from fish consumption on child development, the only way to obtain a true exposure level (producing the toxic effect) is to ascertain the concentration in fetal brain, which is not possible. As is often the case in studies of environmental exposures, the measured exposure level is a biomarker, such as the average maternal hair level during gestation. Measurement of hair mercury is widely used as a biological indicator for exposure to MeHg and is the only indicator that has been calibrated against the target tissue, the developing brain. Variability between the measured and the true values in explanatory variables in a multiple regression analysis can produce bias, leading to either over- or underestimation of regression parameters (slopes). Fortunately, statistical methods known as measurement error models (MEM) are available to account for measurement errors in explanatory variables in multiple regression analysis, and these methods can provide an unbiased (or bias-corrected) estimate of the unknown outcome/exposure relationship. In this paper, we illustrate MEM analysis by reanalyzing data from the 5.5-year test battery in the Seychelles Child Development Study, a longitudinal study of prenatal exposure to MeHg from maternal consumption of a diet high in fish. The use of the MEM approach was made possible by the existence of independent, calibration data on the magnitude of the variability of the measurement error deviations for the biomarker of prenatal exposure used in this study, the maternal hair level. Our reanalysis indicated that adjustment for measurement errors in explanatory variables had no appreciable effect on the original results.

20.
There are numerous examples in the epidemiologic literature of analyses that relate the change in a risk factor, such as serum cholesterol, to the risk of an adverse outcome, such as heart disease. Many of these analyses fit some type of regression model (such as logistic regression or the Cox model for survival time data) that includes both the change in the risk factor and the baseline value as covariates. We show that this method of adjusting for the baseline level can produce misleading results. The problem occurs when the true value of the risk factor relates to the outcome, and the measured value differs from the true value due to measurement error. We may find the observed change in the risk factor significantly related to the outcome when there is in fact no relationship between the true change and the outcome. If the question of interest is whether a person who lowers his level of the risk factor by means of drugs or lifestyle changes will thereby reduce his risk of disease, then we should consider an association due solely to measurement error as spurious. We present a method that adjusts for the measurement error in a linear regression analysis and show that an analogous adjustment applies asymptotically to logistic regression. As in other errors-in-variables problems, this analysis depends on knowledge of the relative variances of the random variation, the true baseline value, and the true change. Since the magnitudes of these variances are usually unknown and sometimes unknowable (the distinction between true change and measurement error being ambiguous), we recommend a sensitivity analysis that examines how the analysis results depend on the assumptions concerning the variances. The commonly used analysis method corresponds to the extreme case in which there is no measurement error. We use data from the Framingham Study and simulations to illustrate these points.
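A minimal simulation of the pitfall described above, not the paper's Framingham analysis: the outcome depends only on the true baseline level and the true change has no effect, yet the measured change picks up a nonzero coefficient once the error-prone measured baseline is adjusted for. A continuous outcome and a linear model are used for simplicity, and the variances are illustrative; only numpy is assumed.

```python
# Spurious association of measured change after adjusting for measured baseline.
import numpy as np

rng = np.random.default_rng(9)
n = 100_000

true_base = rng.normal(0, 1, size=n)
true_change = rng.normal(0, 1, size=n)          # has no effect on the outcome
y = 1.0 * true_base + rng.normal(0, 1, size=n)

meas_err = 0.7
w_base = true_base + rng.normal(0, meas_err, size=n)
w_follow = true_base + true_change + rng.normal(0, meas_err, size=n)
w_change = w_follow - w_base                    # measured change

X = np.column_stack([np.ones(n), w_base, w_change])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"coefficient on measured change: {beta[2]:.2f} "
      f"(true effect of change is 0; ~0.2 expected here)")
```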
