Similar Documents
1.
Real-world exposure measurements are a necessary ingredient for subsequent detailed study of the risks from an environmental pollutant. For volatile organic compounds, researchers are applying exhaled breath analysis and the time dependence of concentrations as a noninvasive indicator of exposure, dose, and blood levels. To optimize the acquisition of such data, samples must be collected in a time frame suited to the needs of the mathematical model, within physical limitations of the equipment and subjects, and within logistical constraints. Additionally, one must consider the impact of measurement error on the eventual extraction of biologically and physiologically relevant parameters. Given a particular mathematical model for the elimination kinetics (in this case a very simple pharmacokinetic model based upon a multiterm exponential decay function that has been shown to fit real-world data extremely well), we investigated the effects on synthetic data caused by sample timing, random measurement error, and number of terms included in the model. This information generated a series of conditions for collecting samples and performing analyses dependent upon the eventual informational needs, and it provided an estimate of error associated with various choices and compromises. Though the work was geared specifically toward breath sampling, it is equally applicable to direct blood measurements in optimizing sampling strategy and improving the exposure assessment process.
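The multiterm exponential decay model described above can be sketched as a nonlinear least-squares fit to synthetic data. The two-term form, parameter values, sampling times, and noise level below are assumptions chosen for illustration, not values from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_term_decay(t, a1, k1, a2, k2):
    """Two-term exponential decay: C(t) = a1*exp(-k1*t) + a2*exp(-k2*t)."""
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

rng = np.random.default_rng(0)
t = np.linspace(0, 120, 25)                # hypothetical sampling times (min)
true_params = (10.0, 0.15, 2.0, 0.01)      # assumed fast and slow compartments
conc = two_term_decay(t, *true_params) * (1 + 0.02 * rng.standard_normal(t.size))

# Recover the kinetic parameters from the noisy synthetic "breath" data.
popt, pcov = curve_fit(two_term_decay, t, conc, p0=(8.0, 0.1, 1.0, 0.02))
```

Varying the sample times, noise level, and number of model terms in such a simulation reproduces the kind of sensitivity analysis the abstract describes.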

2.
We analyzed the 1980 U.S. vital statistics and available ambient air pollution data bases for sulfates and fine, inhalable, and total suspended particles. Using multiple regression analyses, we conducted a cross-sectional analysis of the association between various particle measures and total mortality. Results from the various analyses indicated the importance of considering particle size, composition, and source information in modeling of particle pollution health effects. Of the independent mortality predictors considered, particle exposure measures related to the respirable and/or toxic fraction of the aerosols, such as fine particles and sulfates, were most consistently and significantly associated with the reported SMSA-specific total annual mortality rates. On the other hand, particle mass measures that included coarse particles (e.g., total suspended particles and inhalable particles) were often found to be nonsignificant predictors of total mortality. Furthermore, based on the application of fine particle source apportionment, particles from industrial sources (e.g., from iron/steel emissions) and from coal combustion were suggested to be more significant contributors to human mortality than soil-derived particles.

3.
Many environmental data sets, such as for air toxic emission factors, contain several values reported only as below detection limit. Such data sets are referred to as "censored." Typical approaches to dealing with the censored data sets include replacing censored values with arbitrary values of zero, one-half of the detection limit, or the detection limit. Here, an approach to quantification of the variability and uncertainty of censored data sets is demonstrated. Empirical bootstrap simulation is used to simulate censored bootstrap samples from the original data. Maximum likelihood estimation (MLE) is used to fit parametric probability distributions to each bootstrap sample, thereby specifying alternative estimates of the unknown population distribution of the censored data sets. Sampling distributions for uncertainty in statistics such as the mean, median, and percentile are calculated. The robustness of the method was tested by application to different degrees of censoring, sample sizes, coefficients of variation, and numbers of detection limits. Lognormal, gamma, and Weibull distributions were evaluated. The reliability of using this method to estimate the mean is evaluated by averaging the best estimated means of 20 cases for a small sample size of 20. The confidence intervals for distribution percentiles estimated with the bootstrap/MLE method compared favorably to results obtained with the nonparametric Kaplan-Meier method. The bootstrap/MLE method is illustrated via an application to an empirical air toxic emission factor data set.
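A minimal sketch of the bootstrap/MLE approach, assuming a single detection limit and a lognormal population. All numbers (detection limit, distribution parameters, sample and bootstrap sizes) are invented for illustration:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)
dl = 1.0                                      # single detection limit (assumed)
sample = rng.lognormal(mean=0.5, sigma=0.8, size=40)
obs = np.where(sample >= dl, sample, np.nan)  # NaN marks a censored value

def neg_log_lik(params, x, dl):
    """Censored-lognormal negative log-likelihood (left-censoring at dl)."""
    mu, sigma = params[0], abs(params[1])
    detected = x[~np.isnan(x)]
    n_cens = int(np.isnan(x).sum())
    ll = np.sum(stats.norm.logpdf(np.log(detected), mu, sigma) - np.log(detected))
    ll += n_cens * stats.norm.logcdf(np.log(dl), mu, sigma)
    return -ll

def fit_censored_lognormal(x):
    res = optimize.minimize(neg_log_lik, x0=(0.0, 1.0), args=(x, dl),
                            method="Nelder-Mead")
    return res.x[0], abs(res.x[1])

# Empirical bootstrap: resample the censored data, refit by MLE each time,
# and collect the sampling distribution of the estimated population mean.
boot_means = []
for _ in range(200):
    boot = rng.choice(obs, size=obs.size, replace=True)
    mu, sigma = fit_censored_lognormal(boot)
    boot_means.append(np.exp(mu + sigma**2 / 2))  # mean of a lognormal
ci_lo, ci_hi = np.percentile(boot_means, [2.5, 97.5])
```

Percentiles of `boot_means` give the uncertainty intervals that the abstract compares against the Kaplan-Meier results.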

4.
Of the 188 hazardous air pollutants (HAPs) listed in the Clean Air Act, only a handful have information on human health effects, derived primarily from animal and occupational studies. Lack of consistent monitoring data on ambient air toxics makes it difficult to assess the extent of low-level, chronic, ambient exposures to HAPs that could affect human health, and limits attempts to prioritize and evaluate policy initiatives for emissions reduction. Modeled outdoor HAP concentration estimates from the U.S. Environmental Protection Agency's Cumulative Exposure Project were used to characterize the extent of the air toxics problem in California for the base year of 1990. These air toxics concentration estimates were used with chronic toxicity data to estimate cancer and noncancer hazards for individual HAPs and the risks posed by multiple pollutants. Although hazardous air pollutants are ubiquitous in the environment, potential cancer and noncancer health hazards posed by ambient exposures are geographically concentrated in three urbanized areas and in a few rural counties. This analysis estimated a median excess individual cancer risk of 2.7E-4 for all air toxics concentrations and 8600 excess lifetime cancer cases, 70% of which were attributable to four pollutants: polycyclic organic matter, 1,3-butadiene, formaldehyde, and benzene. For noncancer effects, the analysis estimated a total hazard index representing the combined effect of all HAPs considered. Each pollutant contributes to the index a ratio of estimated concentration to reference concentration. The median value of the index across census tracts was 17, due primarily to acrolein and chromium concentration estimates. On average, HAP concentrations and cancer and noncancer health risks originate mostly from area and mobile source emissions, although there are several locations in the state where point sources account for a large portion of estimated concentrations and health risks.
Risk estimates from this study can provide guidance for prioritizing research, monitoring, and regulatory intervention activities to reduce potential hazards to the general population. Improved ambient monitoring efforts can help clarify uncertainties inherent in this analysis.
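The hazard index described above is simply a sum of concentration-to-reference-concentration ratios. A sketch with invented concentrations and reference values (not the study's data):

```python
# Hypothetical ambient concentrations and reference concentrations (ug/m3);
# these values are invented for illustration, not taken from the study.
conc = {"acrolein": 0.5, "chromium": 0.01, "benzene": 3.0}
rfc = {"acrolein": 0.02, "chromium": 0.1, "benzene": 30.0}

# Each pollutant contributes the ratio of its estimated concentration to its
# reference concentration; the hazard index is the sum over pollutants.
hazard_index = sum(conc[p] / rfc[p] for p in conc)  # 25.0 + 0.1 + 0.1 = 25.2
```

As in the abstract, one pollutant with a low reference concentration (here the invented acrolein entry) can dominate the index.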

5.
Risk Analysis, 2018, 38(1):194-209
This article presents the findings from a numerical simulation study that was conducted to evaluate the performance of alternative statistical analysis methods for background screening assessments when data sets are generated with incremental sampling methods (ISMs). A wide range of background and site conditions are represented in order to test different ISM sampling designs. Both hypothesis tests and upper tolerance limit (UTL) screening methods were implemented following U.S. Environmental Protection Agency (USEPA) guidance for specifying error rates. The simulations show that hypothesis testing using two-sample t-tests can meet standard performance criteria under a wide range of conditions, even with relatively small sample sizes. Key factors that affect the performance include unequal population variances and small absolute differences in population means. UTL methods are generally not recommended due to conceptual limitations in the technique when applied to ISM data sets from single decision units and due to insufficient power given standard statistical sample sizes from ISM.
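The two-sample comparison underlying such a screening assessment can be sketched with a Welch t-test on simulated ISM replicate means. The sample sizes, means, and variances below are invented:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
background = rng.normal(10.0, 2.0, size=8)  # ISM replicates, background area
site = rng.normal(15.0, 2.0, size=8)        # ISM replicates, decision unit

# One-sided Welch t-test (unequal variances): H0 is that the site mean does
# not exceed the background mean.
t_stat, p_two = stats.ttest_ind(site, background, equal_var=False)
p_one = p_two / 2 if t_stat > 0 else 1 - p_two / 2
exceeds = p_one < 0.05
```

Repeating this over many simulated background/site conditions, and tabulating false-positive and false-negative rates, is the kind of performance evaluation the abstract reports.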

6.
Mark R. Powell. Risk Analysis, 2015, 35(12):2172-2182
Recently, there has been considerable interest in developing risk‐based sampling for food safety and animal and plant health for efficient allocation of inspection and surveillance resources. The problem of risk‐based sampling allocation presents a challenge similar to financial portfolio analysis. Markowitz (1952) laid the foundation for modern portfolio theory based on mean‐variance optimization. However, a persistent challenge in implementing portfolio optimization is the problem of estimation error, leading to false “optimal” portfolios and unstable asset weights. In some cases, portfolio diversification based on simple heuristics (e.g., equal allocation) has better out‐of‐sample performance than complex portfolio optimization methods due to estimation uncertainty. Even for portfolios with a modest number of assets, the estimation window required for true optimization may imply an implausibly long stationary period. The implications for risk‐based sampling are illustrated by a simple simulation model of lot inspection for a small, heterogeneous group of producers.

7.
In the days following the collapse of the World Trade Center (WTC) towers on September 11, 2001 (9/11), the U.S. Environmental Protection Agency (EPA) initiated numerous air monitoring activities to better understand the ongoing impact of emissions from that disaster. Using these data, EPA conducted an inhalation exposure and human health risk assessment to the general population. This assessment does not address exposures and potential impacts that could have occurred to rescue workers, firefighters, and other site workers, nor does it address exposures that could have occurred in the indoor environment. Contaminants evaluated include particulate matter (PM), metals, polychlorinated biphenyls, dioxins, asbestos, volatile organic compounds, particle-bound polycyclic aromatic hydrocarbons, silica, and synthetic vitreous fibers (SVFs). This evaluation yielded three principal findings. (1) Persons exposed to extremely high levels of ambient PM and its components, SVFs, and other contaminants during the collapse of the WTC towers, and for several hours afterward, were likely to be at risk for acute and potentially chronic respiratory effects. (2) Available data suggest that contaminant concentrations within and near ground zero (GZ) remained significantly elevated above background levels for a few days after 9/11. Because only limited data on these critical few days were available, exposures and potential health impacts could not be evaluated with certainty for this time period. (3) Except for inhalation exposures that may have occurred on 9/11 and a few days afterward, the ambient air concentration data suggest that persons in the general population were unlikely to suffer short-term or long-term adverse health effects caused by inhalation exposures. While this analysis by EPA evaluated the potential for health impacts based on measured air concentrations, epidemiological studies conducted by organizations other than EPA have attempted to identify actual impacts. 
Such studies have identified respiratory effects in worker and general populations, and developmental effects in newborns whose mothers were near GZ on 9/11 or shortly thereafter. While researchers cannot identify the specific times of exposure or exactly which contaminants caused these effects, they have nonetheless concluded that exposure to WTC contaminants (and/or maternal stress, in the case of developmental effects) resulted in these effects, and they have identified the period including 9/11 itself and the days and few weeks afterward as the period of most concern, based on the high concentrations of key pollutants in the air and dust.

8.
Human populations are exposed to environmental carcinogens in both indoor and outdoor atmospheres. Recent studies indicate that pollutant concentrations are generally higher in indoor atmospheres than in outdoor. Environmental pollutants that occur in indoor air from a variety of sources include radon, asbestos, organic and inorganic compounds, and certain particles (e.g., tobacco smoke). Some of the gases or vapors are adsorbed on suspended particulate matter, whereas others exist entirely in the gas phase or are distributed between the latter and a particle-bound state. Because of differences in chemical and physical properties, each class of carcinogens generally requires different sampling and analytical methods. In addition, a single indoor environment may contain a wide variety of air pollutants from different sources. Unfortunately, no single best approach currently exists for the quantitative determination of such complex mixtures and, for practical reasons, only the more toxic or the more abundant pollutants are usually measured. This paper summarizes the currently available monitoring methods for selected environmental pollutants found in indoor atmospheres. In addition, some possible sources for those pollutants are identified.

9.
Influenza remains a significant threat to public health, yet there is significant uncertainty about the routes of influenza transmission from an infectious source through the environment to a receptor, and their relative risks. Herein, data pertaining to factors that influence the environmental mediation of influenza transmission are critically reviewed, including: the frequency, magnitude, and size distribution of virus expiration; inactivation rates; environmental and self-contact rates; and viral transfer efficiencies during contacts. Where appropriate, two-stage Monte Carlo uncertainty analysis is used to characterize variability and uncertainty in the reported data. Significant uncertainties are present in most factors, due to: limitations in instrumentation or study realism; lack of documentation of data variability; or lack of study. These analyses, and future experimental work, will improve parameterization of influenza transmission and risk models, facilitating more robust characterization of the magnitude and uncertainty in infection risk.
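Two-stage Monte Carlo separates uncertainty (outer loop) from variability (inner loop). A minimal sketch for a transfer-efficiency-like parameter, with invented distributions and sizes:

```python
import numpy as np

rng = np.random.default_rng(3)

# Outer loop: uncertainty about the mean of a viral transfer efficiency
# (all distribution parameters here are invented for illustration).
outer_means = rng.normal(0.20, 0.05, size=500)
results = np.empty((500, 1000))
for i, m in enumerate(outer_means):
    # Inner loop: contact-to-contact variability around that uncertain mean,
    # clipped to the physically meaningful range [0, 1].
    results[i] = rng.normal(m, 0.08, size=1000).clip(0.0, 1.0)

# Spread across rows reflects uncertainty; spread within a row reflects
# variability. Summaries of summaries separate the two.
median_of_medians = np.median(np.median(results, axis=1))
```

Plotting each row's CDF as one curve in a "spaghetti" family is the standard way to display the two dimensions separately.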

10.
The wide-scale use of methyl tertiary butyl ether (MTBE) in gasoline has resulted in substantial public controversy and action to ban or control its use due to perceived impacts on water quality. Because oxygenates are still required under federal law, considerable research has focused on ethanol as a substitute for MTBE. In this article, we summarize the currently available literature on the air and water quality risks and benefits of MTBE versus ethanol as alternative fuel oxygenates. We find that MTBE-fuel blends are likely to have substantial air quality benefits; ethanol-fuel blends appear to offer similar benefits, but these may be at least partially negated because of ethanol's propensity to increase emissions and ambient concentrations of some air contaminants. Releases of gasoline containing either MTBE or ethanol could have an impact on some drinking water sources, although the impacts associated with MTBE tend to relate to aesthetics (i.e., taste and odor), whereas the impacts associated with ethanol generally relate to health risk (i.e., greater exposure to gasoline constituents such as benzene). It is likely that these water quality impacts will be outweighed by the air quality benefits associated with MTBE and perhaps ethanol use, which affect a much larger population. A lack of data on environmental exposures and associated health impacts hinders the completion of a comprehensive quantitative risk-benefit analysis, and the available air and water quality data should be evaluated in a broader risk-management context, which considers the potential life-cycle impacts, costs, and feasibility associated with alternative fuel oxygenates.

11.
In many cases, human health risk from biological agents is associated with aerosol exposures. Because air concentrations decline rapidly after a release, it may be necessary to use concentrations found in other environmental media to infer future or past aerosol exposures. This article presents an approach for linking environmental concentrations of Bacillus anthracis (B. anthracis) spores on walls, floors, ventilation system filters, and in human nasal passages with human health risk from exposure to B. anthracis spores. This approach is then used to calculate example values of risk‐informed concentration standards for both retrospective risk mitigation (e.g., prophylactic antibiotics) and prospective risk mitigation (e.g., environmental clean up and reoccupancy). A large number of assumptions are required to calculate these values, and the resulting values have large uncertainties associated with them. The values calculated here suggest that documenting compliance with risks in the range of 10^-4 to 10^-6 would be challenging for small diameter (respirable) spore particles. For less stringent risk targets and for releases of larger diameter particles (which are less respirable and hence less hazardous), environmental sampling would be more promising.

12.
We develop acceptance sampling plans assuming that the life test is truncated at a preassigned time. The lifetimes of the test units are assumed to follow the Birnbaum-Saunders distribution. The minimum sample size necessary to ensure the specified average life is obtained, and the operating characteristic values of the sampling plans and producer's risk are presented. An illustrative example is given.
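The minimum-sample-size calculation for a truncated life test can be sketched with the Birnbaum-Saunders distribution, which SciPy exposes as `fatiguelife`. The shape parameter, truncation ratio, acceptance number, and consumer's risk level below are invented, and the plan is simplified to a scale (median) of 1:

```python
from math import comb

from scipy import stats

def min_sample_size(shape, t_ratio, consumer_risk, c=0):
    """Smallest n such that a lot whose true median life equals the specified
    life is accepted with probability <= consumer_risk, when the life test is
    truncated at t_ratio times the specified life and at most c failures are
    allowed. Assumes scale (median) 1, so p below is the probability a unit
    fails by the truncation time; p must be > 0 for the loop to terminate."""
    p = stats.fatiguelife.cdf(t_ratio, shape)
    n = c + 1
    while sum(comb(n, i) * p**i * (1 - p)**(n - i)
              for i in range(c + 1)) > consumer_risk:
        n += 1
    return n

# Tighter consumer's risk requires at least as many test units.
n_10 = min_sample_size(shape=0.5, t_ratio=0.628, consumer_risk=0.10)
n_25 = min_sample_size(shape=0.5, t_ratio=0.628, consumer_risk=0.25)
```

The acceptance probability as a function of true quality (the operating characteristic curve) comes from evaluating the same binomial sum over a range of failure probabilities.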

13.
Assessing Exposures to Environmental Tobacco Smoke
The combustion of tobacco indoors results in the emission of a wide range of air contaminants that are associated with a variety of acute and chronic health and comfort effects. Exposures to environmental tobacco smoke (ETS) are assessed for epidemiologic studies and risk assessment and risk management applications. An individual's or population's exposure to ETS can be assessed by direct methods, which employ personal air monitoring and biomarkers, and indirect methods, which utilize various degrees of microenvironmental measurements of spaces, models, and questionnaires in combination with time-activity information. The major issues related to assessing exposures to ETS are summarized and discussed, including the physical-chemical nature of ETS air contaminants, use of proxy air contaminants to represent ETS, use of biomarkers, models for estimating ETS concentrations indoors, and the application of questionnaires.

14.
Application of Geostatistics to Risk Assessment
Geostatistics offers two fundamental contributions to environmental contaminant exposure assessment: (1) a group of methods to quantitatively describe the spatial distribution of a pollutant and (2) the ability to improve estimates of the exposure point concentration by exploiting the geospatial information present in the data. The second contribution is particularly valuable when exposure estimates must be derived from small data sets, which is often the case in environmental risk assessment. This article addresses two topics related to the use of geostatistics in human and ecological risk assessments performed at hazardous waste sites: (1) the importance of assessing model assumptions when using geostatistics and (2) the use of geostatistics to improve estimates of the exposure point concentration (EPC) in the limited data scenario. The latter topic is approached here by comparing design-based estimators that are familiar to environmental risk assessors (e.g., Land's method) with geostatistics, a model-based estimator. In this report, we summarize the basics of spatial weighting of sample data, kriging, and geostatistical simulation. We then explore the two topics identified above in a case study, using soil lead concentration data from a Superfund site (a skeet and trap range). We also describe several areas where research is needed to advance the use of geostatistics in environmental risk assessment.

15.
Quantitative microbial risk assessment (QMRA) is a valuable tool that can be used to predict the risk associated with human exposure to specific microbial contaminants in water sources. The transparency inherent in the QMRA process benefits discussions between multidisciplinary teams because members of such teams have different expertise and their confidence in the risk assessment output will depend upon whether they regard the selected input data and assumptions as being suitable and/or plausible. Selection of input data requires knowledge of the availability of appropriate data sets, the limitations of using a particular data set, and the logic of using alternative approaches. In performing QMRA modeling and in the absence of directly relevant data, compromises must be made. One such compromise made is to use available Escherichia coli data and apply a ratio of enteric viruses to indicator E. coli in wastewater obtained from prior studies to estimate the concentration of enteric viruses in other wastewater types/sources. In this article, we have provided an argument for why we do not recommend the use of a pathogen to E. coli ratio to estimate virus concentrations in single household graywater and additionally suggested circumstances in which use of such a ratio may be justified.

16.
Count data are pervasive in many areas of risk analysis; deaths, adverse health outcomes, infrastructure system failures, and traffic accidents are all recorded as count events, for example. Risk analysts often wish to estimate the probability distribution for the number of discrete events as part of doing a risk assessment. Traditional count data regression models of the type often used in risk assessment for this problem suffer from limitations due to the assumed variance structure. A more flexible model based on the Conway‐Maxwell Poisson (COM‐Poisson) distribution was recently proposed, a model that has the potential to overcome the limitations of the traditional model. However, the statistical performance of this new model has not yet been fully characterized. This article assesses the performance of a maximum likelihood estimation method for fitting the COM‐Poisson generalized linear model (GLM). The objectives of this article are to (1) characterize the parameter estimation accuracy of the MLE implementation of the COM‐Poisson GLM, and (2) estimate the prediction accuracy of the COM‐Poisson GLM using simulated data sets. The results of the study indicate that the COM‐Poisson GLM is flexible enough to model under‐, equi‐, and overdispersed data sets with different sample mean values. The results also show that the COM‐Poisson GLM yields accurate parameter estimates. The COM‐Poisson GLM provides a promising and flexible approach for performing count data regression.
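The COM-Poisson distribution that generalizes the Poisson is easy to write down directly. A sketch of its pmf, truncating the infinite normalizing series at 100 terms (ample for small rates; the example values are arbitrary):

```python
import math

def com_poisson_pmf(y, lam, nu, jmax=100):
    """COM-Poisson pmf: P(Y = y) = lam**y / (y!)**nu / Z(lam, nu).
    nu = 1 recovers the ordinary Poisson; nu < 1 gives overdispersion and
    nu > 1 underdispersion. The normalizing constant Z is an infinite series,
    truncated here at jmax terms."""
    z = sum(lam**j / math.factorial(j)**nu for j in range(jmax))
    return lam**y / math.factorial(y)**nu / z

# With nu = 1 this matches the ordinary Poisson pmf.
p3 = com_poisson_pmf(3, 2.0, 1.0)
```

A COM-Poisson GLM links covariates to `lam` (and possibly `nu`) and maximizes the sum of these log-pmfs over the observed counts.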

17.
Exposure to chemical contaminants in various media must be estimated when performing ecological risk assessments. Exposure estimates are often based on the 95th-percentile upper confidence limit on the mean concentration of all samples, calculated without regard to critical ecological and spatial information about the relative relationship of receptors, their habitats, and contaminants. This practice produces exposure estimates that are potentially unrepresentative of the ecology of the receptor. This article proposes a habitat area and quality-conditioned exposure estimator, E[HQ], that requires consideration of these relationships. It describes a spatially explicit ecological exposure model to facilitate calculation of E[HQ]. The model provides (1) a flexible platform for investigating the effect of changes in habitat area, habitat quality, foraging area, and population size on exposure estimates, and (2) a tool for calculating E[HQ] for use in actual risk assessments. The inner loop of a Visual Basic program randomly walks a receptor over a multicelled landscape--each cell of which contains values for cell area, habitat area, habitat quality, and concentration--accumulating an exposure estimate until the total area foraged is less than or equal to a given foraging area. An outer loop then steps through foraging areas of increasing size. This program is iterated by Monte Carlo software, with the number of iterations representing the population size. Results indicate that (1) any single estimator may over- or underestimate exposure, depending on foraging strategy and spatial relationships of habitat and contamination, and (2) changes in exposure estimates in response to changes in foraging and habitat area are not linear.

18.
Much of the literature regarding food safety sampling plans implicitly assumes that all lots entering commerce are tested. In practice, however, only a fraction of lots may be tested due to a budget constraint. In such a case, there is a tradeoff between the number of lots tested and the number of samples per lot. To illustrate this tradeoff, a simple model is presented in which the optimal number of samples per lot depends on the prevalence of sample units that do not conform to microbiological specifications and the relative costs of sampling a lot and of drawing and testing a sample unit from a lot. The assumed objective is to maximize the number of nonconforming lots that are rejected subject to a food safety sampling budget constraint. If the ratio of the cost per lot to the cost per sample unit is substantial, the optimal number of samples per lot increases as prevalence decreases. However, if the ratio of the cost per lot to the cost per sample unit is sufficiently small, the optimal number of samples per lot reduces to one (i.e., simple random sampling), regardless of prevalence. In practice, the cost per sample unit may be large relative to the cost per lot due to the expense of laboratory testing and other factors. Designing effective compliance assurance measures depends on economic, legal, and other factors in addition to microbiology and statistics.
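The lots-versus-samples tradeoff can be sketched with a toy budget model. All of the numbers, and the simplification that each sample drawn from a nonconforming lot is independently positive with probability p, are illustrative assumptions rather than the article's model:

```python
def expected_rejections(n, budget, cost_per_lot, cost_per_sample, p):
    """Expected nonconforming lots rejected when a lot is rejected if any of
    its n samples is positive, p is the within-lot prevalence of nonconforming
    units, and the budget fixes how many lots can be tested at all."""
    lots_tested = budget // (cost_per_lot + n * cost_per_sample)
    return lots_tested * (1 - (1 - p) ** n)

def optimal_samples_per_lot(budget, cost_per_lot, cost_per_sample, p, n_max=50):
    return max(range(1, n_max + 1),
               key=lambda n: expected_rejections(n, budget, cost_per_lot,
                                                 cost_per_sample, p))

# High lot cost relative to sample cost favors more samples per lot; the
# reverse collapses the plan to simple random sampling (n = 1).
n_lot_heavy = optimal_samples_per_lot(1000, 50, 1, p=0.1)
n_sample_heavy = optimal_samples_per_lot(1000, 1, 50, p=0.1)
```

Sweeping p in this toy model reproduces the abstract's qualitative finding: the optimum grows as prevalence falls only when the per-lot cost dominates.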

19.
In Japan, environmental standards for contaminants in groundwater and in leachate from soil are set with the assumption that they are used for drinking water over a human lifetime. Where there is neither a well nor groundwater used for drinking, the standard is thus too severe. Therefore, remediation based on these standards incurs excessive effort and cost. In contrast, the environmental-assessment procedure used in the United States and the Netherlands considers the site conditions (land use, existing wells, etc.); however, a risk assessment is required for each site. Therefore, this study proposes a new framework for judging contamination in Japan by considering the merits of the environmental standards used and a method for risk assessment. The framework involves setting risk-based concentrations that are attainable remediation goals for contaminants in soil and groundwater. The framework was then applied to a model contaminated site for risk management, and the results are discussed regarding the effectiveness and applicability of the new methodology.  相似文献   

20.
Adaptive Spatial Sampling of Contaminated Soil
Louis Anthony Cox. Risk Analysis, 1999, 19(6):1059-1069

Suppose that a residential neighborhood may have been contaminated by a nearby abandoned hazardous waste site. The suspected contamination consists of elevated soil concentrations of chemicals that are also found in the absence of site-related contamination. How should a risk manager decide which residential properties to sample and which ones to clean? This paper introduces an adaptive spatial sampling approach which uses initial observations to guide subsequent search. Unlike some recent model-based spatial data analysis methods, it does not require any specific statistical model for the spatial distribution of hazards, but instead constructs an increasingly accurate nonparametric approximation to it as sampling proceeds. Possible cost-effective sampling and cleanup decision rules are described by decision parameters such as the number of randomly selected locations used to initialize the process, the number of highest-concentration locations searched around, the number of samples taken at each location, a stopping rule, and a remediation action threshold. These decision parameters are optimized by simulating the performance of each decision rule. The simulation is performed using the data collected so far to impute multiple probable values of unknown soil concentration distributions during each simulation run. This optimized adaptive spatial sampling technique has been applied to real data using error probabilities for wrongly cleaning or wrongly failing to clean each location (compared to the action that would be taken if perfect information were available) as evaluation criteria. It provides a practical approach for quantifying trade-offs between these different types of errors and expected cost. It also identifies strategies that are undominated with respect to all of these criteria.

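The initialize-then-search-around-maxima idea can be sketched on a synthetic grid. The grid size, hot-spot placement, number of initial samples, rounds, neighborhood size, and action threshold are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical 20x20 field: lognormal background plus one elevated region.
field = rng.lognormal(0.0, 0.5, size=(20, 20))
field[12:16, 4:8] *= 8.0
threshold = 5.0                             # remediation action level (assumed)

def neighbors(cell, shape):
    """Yield the up-to-8 in-bounds neighbors of a grid cell."""
    r, c = cell
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if (dr or dc) and 0 <= rr < shape[0] and 0 <= cc < shape[1]:
                yield rr, cc

def adaptive_sample(field, n_init=15, n_rounds=5, k=3):
    """Random initial sample, then repeated sampling around the k
    highest-concentration locations observed so far."""
    flat = rng.choice(field.size, size=n_init, replace=False)
    visited = {(int(i) // field.shape[1], int(i) % field.shape[1]) for i in flat}
    for _ in range(n_rounds):
        top = sorted(visited, key=lambda cell: field[cell], reverse=True)[:k]
        for cell in top:
            visited.update(neighbors(cell, field.shape))
    return visited

sampled = adaptive_sample(field)
flagged = {cell for cell in sampled if field[cell] > threshold}  # clean these
```

Tuning `n_init`, `k`, the number of rounds, and the threshold against simulated error rates and cost mirrors the decision-parameter optimization the abstract describes.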
