Similar Documents
20 similar documents found.
1.
We analyze the risk of severe fatal accidents causing five or more fatalities in nine different activities covering the entire oil chain. Included are exploration and extraction, transport by different modes, refining, and final end use in power plants, heating, or gas stations. The risks are quantified separately for OECD and non-OECD countries and trends are calculated. Risk is analyzed by employing a Bayesian hierarchical model yielding analytical functions for both frequency (Poisson) and severity distributions (Generalized Pareto) as well as frequency trends. This approach addresses a key problem in risk estimation, namely the scarcity of data, which results in high uncertainties, particularly for the risk of extreme events, where the risk is extrapolated beyond the historically most severe accidents. Bayesian data analysis allows the pooling of information from different data sets covering, for example, the different stages of the energy chains or different modes of transportation. In addition, it inherently delivers a measure of uncertainty. This approach provides a framework that comprehensively covers risk throughout the oil chain, allowing the allocation of risk in sustainability assessments. It also permits the progressive addition of new data to refine the risk estimates. Frequency, severity, and trends show substantial differences between the activities, emphasizing the need for detailed risk analysis.
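A minimal, non-hierarchical sketch of the frequency/severity decomposition described above, in Python with scipy; the accident record, prior, and threshold values are invented for illustration, and the paper's hierarchical pooling across energy-chain stages is not reproduced:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Invented accident record: severe accidents (>= 5 fatalities) per year,
# and the fatality count of each accident.
years = 30
counts = rng.poisson(lam=2.0, size=years)
fatalities = 5 + stats.genpareto.rvs(c=0.3, scale=12.0,
                                     size=counts.sum(), random_state=7)

# Frequency: a conjugate Gamma(a0, b0) prior on the Poisson rate gives a
# Gamma(a0 + total accidents, b0 + years) posterior.
a0, b0 = 1.0, 1.0
rate_mean = (a0 + counts.sum()) / (b0 + years)

# Severity: Generalized Pareto fit to exceedances over the 5-fatality threshold.
c_hat, _, scale_hat = stats.genpareto.fit(fatalities - 5, floc=0.0)

# Extrapolated risk of an extreme event beyond the historical record,
# e.g., an accident with 100 or more fatalities.
p_extreme = stats.genpareto.sf(95, c_hat, loc=0.0, scale=scale_hat)
print(f"posterior mean frequency: {rate_mean:.2f}/yr; "
      f"expected 100+ fatality accidents/yr: {rate_mean * p_extreme:.5f}")
```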

2.
To inform local and regional decisions about protecting short-term and long-term quality of life, the Consortium for Atlantic Regional Assessment (CARA) provides data and tools (for the northeastern United States) that can help decision makers understand how outcomes of their decisions could be affected by potential changes in both climate and land use. On an interactive, user-friendly website, CARA has amassed data on climate (historical records and future projections for seven global climate models), land cover, and socioeconomic and environmental variables, along with tools to help decision makers tailor the data for their own decision types and locations. CARA Advisory Council stakeholders help identify what information and tools stakeholders would find most useful and how to present these; they also provide in-depth feedback for subregion case studies. General lessons include: (1) decision makers want detailed local projections for periods short enough to account for extreme events, in contrast to the broader spatial and temporal observations and projections that are available or consistent at a regional level; (2) stakeholders will not use such a website unless it is visually appealing and easy to find the information they want; (3) some stakeholders need background while others want to go immediately to data, and some want maps while others want text or tables. This article also compares what has been learned across case studies of Cape May County, New Jersey, Cape Cod, Massachusetts, and Hampton Roads, Virginia, relating specifically to sea-level rise. Lessons include: (1) groups can be affected differently by physical dangers compared with economic dangers; (2) decisions will differ according to decision makers' preferences about waiting and risk tolerance; (3) future scenarios and maps can help assess the impacts of dangers to emergency evacuation routes, homes, and infrastructure, and the natural environment; (4) residents' and decision makers' perceptions are affected by information about potential local impacts from global climate change.

3.
A simple and useful characterization of many predictive models is in terms of model structure and model parameters. Accordingly, uncertainties in model predictions arise from uncertainties in the values assumed by the model parameters (parameter uncertainty) and the uncertainties and errors associated with the structure of the model (model uncertainty). When assessing uncertainty, one is interested in identifying, at some level of confidence, the range of possible and then probable values of the unknown of interest. All sources of uncertainty and variability need to be considered. Although parameter uncertainty assessment has been extensively discussed in the literature, model uncertainty is a relatively new topic of discussion by the scientific community, despite often being the major contributor to the overall uncertainty. This article describes a Bayesian methodology for the assessment of model uncertainties, where models are treated as sources of information on the unknown of interest. The general framework is then specialized for the case where models provide point estimates about a single-valued unknown and where information about models is available in the form of homogeneous and nonhomogeneous performance data (pairs of experimental observations and model predictions). Several example applications for physical models used in fire risk analysis are also provided.
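As a concrete (and much simplified) instance of treating a model as an information source: with homogeneous performance data, the observation/prediction pairs can calibrate a multiplicative error model that is then applied to a new prediction. All numbers below are invented:

```python
import numpy as np

# Hypothetical performance data for one fire model: pairs of
# (experimental observation, model prediction) for the same quantity.
obs  = np.array([310., 455., 280., 520., 390.])
pred = np.array([290., 500., 250., 470., 430.])

# Treat model error as multiplicative lognormal: log(obs) = log(pred) + b + e.
resid = np.log(obs) - np.log(pred)
b_hat = resid.mean()          # model bias on the log scale
s_hat = resid.std(ddof=1)     # model noise on the log scale

# Given a new model prediction, the model's "evidence" about the unknown
# is a lognormal centered on the bias-corrected prediction.
new_pred = 400.0
post_median = np.exp(np.log(new_pred) + b_hat)
ci = np.exp(np.log(new_pred) + b_hat + np.array([-1.96, 1.96]) * s_hat)
print(f"bias-corrected estimate: {post_median:.0f}, ~95% range: {ci.round(0)}")
```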

4.
Human health risk assessments use point values to develop risk estimates and thus impart a deterministic character to risk, which, by definition, is a probability phenomenon. The risk estimates are calculated based on individuals and then, using uncertainty factors (UFs), are extrapolated to the population that is characterized by variability. Regulatory agencies have recommended the quantification of the impact of variability in risk assessments through the application of probabilistic methods. In the present study, a framework that deals with the quantitative analysis of uncertainty (U) and variability (V) in target tissue dose in the population was developed by applying probabilistic analysis to physiologically-based toxicokinetic models. The mechanistic parameters that determine kinetics were described with probability density functions (PDFs). Since each PDF depicts the frequency of occurrence of all expected values of each parameter in the population, the combined effects of multiple sources of U/V were accounted for in the estimated distribution of tissue dose in the population, and a unified (adult and child) intraspecies toxicokinetic uncertainty factor UFH-TK was determined. The results show that the proposed framework accounts effectively for U/V in population toxicokinetics. The ratio of the 95th percentile to the 50th percentile of the annual average concentration of the chemical at the target tissue or organ (i.e., the UFH-TK) varies with age. The ratio is equivalent to a unified intraspecies toxicokinetic UF, and it is one of the UFs by which the NOAEL can be divided to obtain the RfC/RfD. The 10-fold intraspecies UF is intended to account for uncertainty and variability in toxicokinetics (3.2x) and toxicodynamics (3.2x). This article deals exclusively with the toxicokinetic component of the UF. The framework provides an alternative to the default methodology and is advantageous in that the evaluation of toxicokinetic variability is based on the distribution of the effective target tissue dose, rather than applied dose. It allows for the replacement of the default adult and child intraspecies UF with toxicokinetic data-derived values and provides accurate chemical-specific estimates for their magnitude. It shows that proper application of probability and toxicokinetic theories can reduce uncertainties when establishing exposure limits for specific compounds and provide better assurance that established limits are adequately protective. It contributes to the development of a probabilistic noncancer risk assessment framework and will ultimately lead to the unification of cancer and noncancer risk assessment methodologies.
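The percentile-ratio idea is easy to demonstrate with a toy Monte Carlo; the paper uses full PBTK models, whereas the one-compartment surrogate and parameter PDFs below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy one-compartment stand-in for a PBTK model: steady-state target-tissue
# concentration C = dose_rate / (clearance * partition). The parameter PDFs
# are illustrative, not taken from the paper.
dose_rate = 1.0                                              # mg/kg/day, fixed
clearance = rng.lognormal(mean=np.log(0.8), sigma=0.35, size=n)
partition = rng.lognormal(mean=np.log(1.5), sigma=0.20, size=n)

tissue_conc = dose_rate / (clearance * partition)

# Data-derived intraspecies toxicokinetic UF: ratio of the 95th to the 50th
# percentile of the population distribution of target-tissue dose.
p50, p95 = np.percentile(tissue_conc, [50, 95])
print(f"UF_H-TK = {p95 / p50:.2f}")   # compare with the default 3.2
```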

5.
Quantitative risk assessments for physical, chemical, biological, occupational, or environmental agents rely on scientific studies to support their conclusions. These studies often include relatively few observations, and, as a result, models used to characterize the risk may include large amounts of uncertainty. The motivation, development, and assessment of new methods for risk assessment are facilitated by the availability of a set of experimental studies that span a range of dose-response patterns that are observed in practice. We describe construction of such a historical database focusing on quantal data in chemical risk assessment, and we employ this database to develop priors in Bayesian analyses. The database is assembled from a variety of existing toxicological data sources and contains 733 separate quantal dose-response data sets. As an illustration of the database's use, prior distributions for individual model parameters in Bayesian dose-response analysis are constructed. Results indicate that including prior information based on curated historical data in quantitative risk assessments may help stabilize eventual point estimates, producing dose-response functions that are more stable and precisely estimated. These in turn produce potency estimates that share the same benefit. We are confident that quantitative risk analysts will find many other applications and issues to explore using this database.
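One way such a database can feed a prior, sketched with simulated stand-ins for the 733 historical slope estimates (the prior construction in the paper itself may differ):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Stand-in for the curated database: per-data-set MLEs of a dose-response
# slope parameter, simulated here rather than taken from the real 733 sets.
historical_slopes = rng.normal(loc=1.2, scale=0.6, size=733)

# Empirical prior: fit a normal distribution to the historical estimates.
mu0, sd0 = stats.norm.fit(historical_slopes)

# Precision-weighted shrinkage of a new study's noisy slope estimate toward
# the prior (known sampling SE assumed for illustration).
slope_hat, se = 2.4, 0.9
w = (1 / se**2) / (1 / se**2 + 1 / sd0**2)
slope_post = w * slope_hat + (1 - w) * mu0
print(f"prior N({mu0:.2f}, {sd0:.2f}), stabilized slope: {slope_post:.2f}")
```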

6.
Risk Analysis, 2018, 38(1): 163-176
The U.S. Environmental Protection Agency (EPA) uses health risk assessment to help inform its decisions in setting national ambient air quality standards (NAAQS). EPA's standard approach is to make epidemiologically-based risk estimates based on a single statistical model selected from the scientific literature, called the "core" model. The uncertainty presented for "core" risk estimates reflects only the statistical uncertainty associated with that one model's concentration-response function parameter estimate(s). However, epidemiologically-based risk estimates are also subject to "model uncertainty," which is a lack of knowledge about which of many plausible model specifications and data sets best reflects the true relationship between health and ambient pollutant concentrations. In 2002, a National Academies of Sciences (NAS) committee recommended that model uncertainty be integrated into EPA's standard risk analysis approach. This article discusses how model uncertainty can be taken into account with an integrated uncertainty analysis (IUA) of health risk estimates. It provides an illustrative numerical example based on risk of premature death from respiratory mortality due to long-term exposures to ambient ozone, which is a health risk considered in the 2015 ozone NAAQS decision. This example demonstrates that use of IUA to quantitatively incorporate key model uncertainties into risk estimates produces a substantially altered understanding of the potential public health gain of a NAAQS policy decision, and that IUA can also produce more helpful insights to guide that decision, such as evidence of decreasing incremental health gains from progressive tightening of a NAAQS.
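A schematic of what an IUA adds over the single-"core"-model approach: sample over plausible model specifications (with weights) as well as each model's statistical uncertainty. The weights, coefficients, and exposure scenario below are all invented:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000

# Plausible concentration-response models: (weight, beta_hat, beta_se)
# for excess respiratory mortality per unit ozone. Values are invented.
models = [
    (0.5, 0.0040, 0.0010),   # "core"-like model
    (0.3, 0.0025, 0.0012),   # alternative data set
    (0.2, 0.0010, 0.0008),   # alternative specification
]
weights = np.array([m[0] for m in models])

delta_ppb = 5.0               # hypothetical ozone reduction from the NAAQS
baseline_deaths = 10_000.0    # hypothetical baseline respiratory deaths

idx = rng.choice(len(models), size=n, p=weights)     # model uncertainty
betas = np.array([rng.normal(models[i][1], models[i][2]) for i in idx])
avoided = baseline_deaths * (1 - np.exp(-betas * delta_ppb))

print(f"avoided deaths: median {np.median(avoided):.0f}, "
      f"90% interval ({np.percentile(avoided, 5):.0f}, "
      f"{np.percentile(avoided, 95):.0f})")
```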

7.
Many environmental data sets, such as for air toxic emission factors, contain several values reported only as below detection limit. Such data sets are referred to as "censored." Typical approaches to dealing with censored data sets include replacing censored values with arbitrary values of zero, one-half of the detection limit, or the detection limit. Here, an approach to quantification of the variability and uncertainty of censored data sets is demonstrated. Empirical bootstrap simulation is used to simulate censored bootstrap samples from the original data. Maximum likelihood estimation (MLE) is used to fit parametric probability distributions to each bootstrap sample, thereby specifying alternative estimates of the unknown population distribution of the censored data sets. Sampling distributions for uncertainty in statistics such as the mean, median, and percentile are calculated. The robustness of the method was tested by application to different degrees of censoring, sample sizes, coefficients of variation, and numbers of detection limits. Lognormal, gamma, and Weibull distributions were evaluated. The reliability of using this method to estimate the mean was evaluated by averaging the best estimated means of 20 cases at a small sample size of 20. The confidence intervals for distribution percentiles estimated with the bootstrap/MLE method compared favorably to results obtained with the nonparametric Kaplan-Meier method. The bootstrap/MLE method is illustrated via an application to an empirical air toxic emission factor data set.
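The bootstrap/MLE procedure is compact enough to sketch end to end. The data here are synthetic with a single detection limit and a lognormal fit, whereas the paper also varies censoring degree, sample size, distribution family, and number of detection limits:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(42)

# Synthetic emission-factor sample with one detection limit (DL).
data = rng.lognormal(mean=0.0, sigma=1.0, size=20)
dl = 0.5

def neg_loglik(theta, detected, dl, n_cens):
    # Lognormal likelihood: exact density for detected values,
    # CDF at the DL for each censored ("<DL") value.
    mu, sigma = theta
    if sigma <= 0:
        return np.inf
    ll = (stats.norm.logpdf(np.log(detected), mu, sigma)
          - np.log(detected)).sum()
    ll += n_cens * stats.norm.logcdf((np.log(dl) - mu) / sigma)
    return -ll

def fit(sample):
    detected, n_cens = sample[sample >= dl], (sample < dl).sum()
    res = optimize.minimize(neg_loglik, x0=[0.0, 1.0],
                            args=(detected, dl, n_cens), method="Nelder-Mead")
    return res.x

# Empirical bootstrap: resample, refit by MLE, collect the implied mean.
means = []
for _ in range(500):
    mu, sg = fit(rng.choice(data, size=data.size, replace=True))
    means.append(np.exp(mu + sg**2 / 2))   # mean of fitted lognormal
lo, hi = np.percentile(means, [2.5, 97.5])
print(f"mean estimate {np.mean(means):.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```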

8.
Scott Janzwood, Risk Analysis, 2023, 43(10): 2004-2016
Outside of the field of risk analysis, an important theoretical conversation on the slippery concept of uncertainty has unfolded over the last 40 years within the adjacent field of environmental risk. This literature has become increasingly standardized behind the tripartite distinction between uncertainty location, the nature of uncertainty, and uncertainty level, popularized by the "W&H framework." This article introduces risk theorists and practitioners to the conceptual literature on uncertainty with the goal of catalyzing further development and clarification of the uncertainty concept within the field of risk analysis. It presents two critiques of the W&H framework's dimension of uncertainty level, the dimension that attempts to define the characteristics separating greater uncertainties from lesser uncertainties. First, I argue the framework's conceptualization of uncertainty level lacks a clear and consistent epistemological position and fails to acknowledge or reconcile the tensions between Bayesian and frequentist perspectives present within the framework. This article reinterprets the dimension of uncertainty level from a Bayesian perspective, which understands uncertainty as a mental phenomenon arising from "confidence deficits" as opposed to the ill-defined notion of "knowledge deficits" present in the framework. And second, I elaborate the undertheorized concept of uncertainty "reducibility." These critiques inform a clarified conceptualization of uncertainty level that can be integrated with risk analysis concepts and usefully applied by modelers and decision makers engaged in model-based decision support.

9.
A recent report by the National Academy of Sciences estimates that the radiation dose to the bronchial epithelium, per working level month (WLM) of radon daughter exposure, is about 30% lower for residential exposures than for exposures received in underground mines. Adjusting the previously published BEIR IV radon risk model accordingly, the unit risk for indoor exposures of the general population is about 2.2 × 10^-4 lung cancer deaths (lcd)/WLM. Using results from EPA's National Residential Radon Survey, the average radon level is estimated to be about 1.25 pCi/L, and the annual average exposure about 0.242 WLM. Based on these estimates, 13,600 radon-induced lcd/yr are projected for the United States. A quantitative uncertainty analysis was performed, which considers: statistical uncertainties in the epidemiological studies of radon-exposed miners; the dependence of risk on age at, and time since, exposure; the extrapolation of risk estimates from mines to homes based on comparative dosimetry; and uncertainties in the radon daughter levels in homes and in the average residential occupancy. Based on this assessment of the uncertainties in the unit risk and exposure estimates, an uncertainty range of 7,000-30,000 lcd/yr is derived.
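The headline projection is a direct product of the three quantities in the abstract; the U.S. population figure below is an assumption (roughly the early-1990s census value) inserted only to make the arithmetic check out:

```latex
\underbrace{2.2\times10^{-4}\,\tfrac{\mathrm{lcd}}{\mathrm{WLM}}}_{\text{unit risk}}
\times
\underbrace{0.242\,\tfrac{\mathrm{WLM}}{\mathrm{person\cdot yr}}}_{\text{avg.\ exposure}}
\times
\underbrace{2.55\times10^{8}\ \mathrm{persons}}_{\text{assumed U.S. population}}
\approx 13{,}600\ \mathrm{lcd/yr}
```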

10.
Risk characterization in a study population relies on cases of disease or death that are causally related to the exposure under study. The number of such cases, so-called "excess" cases, is not just an indicator of the impact of the risk factor in the study population, but also an important determinant of statistical power for assessing aspects of risk such as age-time trends and susceptible subgroups. In determining how large a population to study and/or how long to follow a study population to accumulate sufficient excess cases, it is necessary to predict future risk. In this study, focusing on models involving excess risk with possible effect modification, we describe a method for predicting the expected magnitude of numbers of excess cases and assess the uncertainty in those predictions. We do this by extending Bayesian age-period-cohort (APC) models for rate projection to include exposure-related excess risk with possible effect modification by, e.g., age at exposure and attained age. The method is illustrated using the follow-up study of Japanese Atomic-Bomb Survivors, one of the primary bases for determining long-term health effects of radiation exposure and assessment of risk for radiation protection purposes. Using models selected by a predictive-performance measure obtained on test data reserved for cross-validation, we project excess counts due to radiation exposure and lifetime risk measures (risk of exposure-induced deaths (REID) and loss of life expectancy (LLE)) associated with cancer and noncancer disease deaths in the A-Bomb survivor cohort.
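The lifetime-risk measures mentioned here (REID in particular) reduce to integrating an excess hazard against survival; a toy discrete version follows, with every rate invented rather than taken from the A-Bomb cohort:

```python
import numpy as np

# Toy lifetime-risk calculation in the spirit of REID: accumulate the
# radiation-related excess hazard, weighted by survival, over remaining life.
ages = np.arange(30, 101)                          # exposed at age 30
base_haz = 1e-4 * np.exp(0.09 * (ages - 30))       # baseline cancer mortality
all_cause = 5e-3 * np.exp(0.08 * (ages - 30))      # all-cause mortality hazard

dose = 1.0                                         # Gy
err = 0.5 * np.exp(-0.02 * (ages - 60))            # ERR/Gy with attained-age
excess_haz = base_haz * err * dose                 # effect modification

surv = np.exp(-np.cumsum(all_cause + excess_haz))  # survival to each age
reid = np.sum(excess_haz * surv)                   # risk of exposure-induced death
print(f"REID for 1 Gy at age 30: {reid:.4f}")
```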

11.
Congress is currently considering adopting a mathematical formula to assign shares in cancer causation to specific doses of radiation, for use in establishing liability and compensation awards. The proposed formula, if it were sound, would allow difficult problems in tort law and public policy to be resolved by reference to tabulated "probabilities of causation." This article examines the statistical and conceptual bases of the proposed methodology. We find that the proposed formula is incorrect as an expression for "probability of causation," that it implies hidden, debatable policy judgments in its treatment of factor interactions and uncertainties, and that it cannot in general be quantified with sufficient precision to be useful. Three generic sources of statistical uncertainty are identified (sampling variability, population heterogeneity, and error propagation) that prevent accurate quantification of "assigned shares." These uncertainties arise whenever aggregate epidemiological or risk data are used to draw causal inferences about individual cases.
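For reference, the tabulated "probability of causation" at issue is conventionally computed from the dose-specific relative risk (this is the standard form, not quoted from the article itself):

```latex
\mathrm{PC}(d) \;=\; \frac{\mathrm{RR}(d)-1}{\mathrm{RR}(d)}
\;=\; \frac{\mathrm{ERR}(d)}{1+\mathrm{ERR}(d)}
```

The article's three uncertainty sources all enter through RR(d), which must itself be estimated from aggregate epidemiological data before the ratio can be tabulated.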

12.
The purpose of this article is to discuss the role of quantitative risk assessments for characterizing risk and uncertainty and delineating appropriate risk management options. Our main concern is situations (risk problems) with large potential consequences, large uncertainties, and/or ambiguities (related to the relevance, meaning, and implications of the decision basis, or related to the values to be protected and the priorities to be made), in particular terrorism risk. We look into the scientific basis of the quantitative risk assessments and the boundaries of the assessments in such a context. Based on a risk perspective that defines risk as uncertainty about, and severity of, the consequences (or outcomes) of an activity with respect to something that humans value, we advocate a broad risk assessment approach characterizing uncertainties beyond probabilities and expected values. Key features of this approach are qualitative uncertainty assessment and scenario building instruments.

13.
Treatment of Uncertainty in Performance Assessments for Complex Systems
When viewed at a high level, performance assessments (PAs) for complex systems involve two types of uncertainty: stochastic uncertainty, which arises because the system under study can behave in many different ways, and subjective uncertainty, which arises from a lack of knowledge about quantities required within the computational implementation of the PA. Stochastic uncertainty is typically incorporated into a PA with an experimental design based on importance sampling and leads to the final results of the PA being expressed as a complementary cumulative distribution function (CCDF). Subjective uncertainty is usually treated with Monte Carlo techniques and leads to a distribution of CCDFs. This presentation discusses the use of the Kaplan/Garrick ordered triple representation for risk in maintaining a distinction between stochastic and subjective uncertainty in PAs for complex systems. The topics discussed include (1) the definition of scenarios and the calculation of scenario probabilities and consequences, (2) the separation of subjective and stochastic uncertainties, (3) the construction of CCDFs required in comparisons with regulatory standards (e.g., 40 CFR Part 191, Subpart B for the disposal of radioactive waste), and (4) the performance of uncertainty and sensitivity studies. Results obtained in a preliminary PA for the Waste Isolation Pilot Plant, an uncertainty and sensitivity analysis of the MACCS reactor accident consequence analysis model, and the NUREG-1150 probabilistic risk assessments are used for illustration.
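The stochastic/subjective separation maps onto a two-loop Monte Carlo. A minimal sketch follows, using simple random sampling rather than the importance sampling described above, with all distributions invented:

```python
import numpy as np

rng = np.random.default_rng(11)

# Outer loop: subjective (epistemic) uncertainty over imprecisely known
# inputs. Inner loop: stochastic (aleatory) uncertainty over what happens.
n_epi, n_alea = 50, 2000
levels = np.logspace(-2, 2, 60)            # consequence levels for the CCDF

ccdfs = np.empty((n_epi, levels.size))
for i in range(n_epi):
    rate = rng.uniform(0.05, 0.3)          # uncertain scenario probability
    sev_sigma = rng.uniform(0.8, 1.5)      # uncertain severity spread
    occurs = rng.random(n_alea) < rate
    conseq = np.where(occurs, rng.lognormal(0.0, sev_sigma, n_alea), 0.0)
    ccdfs[i] = (conseq[:, None] > levels).mean(axis=0)   # P(consequence > L)

# The result is a *distribution* of CCDFs; summarize it pointwise, e.g.,
# for comparison against a regulatory limit at a given consequence level.
idx = np.argmin(np.abs(levels - 10.0))
print(f"P(consequence > 10): median {np.percentile(ccdfs[:, idx], 50):.4f}, "
      f"95th pct {np.percentile(ccdfs[:, idx], 95):.4f}")
```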

14.
Probabilistic risk analysis (PRA) can be an effective tool to assess risks and uncertainties and to set priorities among safety policy options. Based on systems analysis and Bayesian probability, PRA has been applied to a wide range of cases, three of which are briefly presented here: the maintenance of the tiles of the space shuttle, the management of patient risk in anesthesia, and the choice of seismic provisions of building codes for the San Francisco Bay Area. In the quantification of a risk, a number of problems arise in the public sector where multiple stakeholders are involved. In this article, I describe different approaches to the treatment of uncertainties in risk analysis, their implications for risk ranking, and the role of risk analysis results in the context of a safety decision process. I also discuss the implications of adopting conservative hypotheses before proceeding to what is, in essence, a conditional uncertainty analysis, and I explore some implications of different levels of "conservatism" for the ranking of risk mitigation measures.

15.
In recent years physiologically based pharmacokinetic models have come to play an increasingly important role in risk assessment for carcinogens. The hope is that they can help open the black box between external exposure and carcinogenic effects to experimental observations, and improve both high-dose to low-dose and interspecies projections of risk. However, to date, there have been only relatively preliminary efforts to assess the uncertainties in current modeling results. In this paper we compare the physiologically based pharmacokinetic models (and model predictions of risk-related overall metabolism) that have been produced by seven different sets of authors for perchloroethylene (tetrachloroethylene). The most striking conclusion from the data is that most of the differences in risk-related model predictions are attributable to the choice of the data sets used for calibrating the metabolic parameters. Second, it is clear that the bottom-line differences among the model predictions are appreciable. Overall, the ratios of low-dose human to bioassay rodent metabolism spanned a 30-fold range for the six available human/rat comparisons, and the seven predicted ratios of low-dose human to bioassay mouse metabolism spanned a 13-fold range. (The greater range for the rat/human comparison is attributable to a structural assumption by one author group of competing linear and saturable pathways, and their conclusion that the dangerous saturable pathway constitutes a minor fraction of metabolism in rats.) It is clear that there are a number of opportunities for modelers to make different choices of model structure, interpretive assumptions, and calibrating data in the process of constructing pharmacokinetic models for use in estimating "delivered" or "biologically effective" dose for carcinogenesis risk assessments. We believe that in presenting the results of such modeling studies, it is important for researchers to explore the results of alternative, reasonably likely approaches for interpreting the available data, and either show that any conclusions they make are relatively insensitive to particular interpretive choices, or acknowledge the differences in conclusions that would result from plausible alternative views of the world.

16.
The tenfold "uncertainty" factor traditionally used to guard against human interindividual differences in susceptibility to toxicity is not based on human observations. To begin to build a basis for quantifying an important component of overall variability in susceptibility to toxicity, a database has been constructed of individual measurements of key pharmacokinetic parameters for specific substances (mostly drugs) in groups of at least five healthy adults. Of the 101 data sets studied, 72 were positively skewed, indicating that the distributions are generally closer to expectations for log-normal distributions than for normal distributions. Measurements of interindividual variability in elimination half-lives, maximal blood concentrations, and AUC (area under the curve of blood concentration by time) have median values of log10 geometric standard deviations in the range of 0.11-0.145. For the median chemical, therefore, a tenfold difference in these pharmacokinetic parameters would correspond to 7-9 standard deviations in populations of normal healthy adults. For one relatively lipophilic chemical, however, interindividual variability in maximal blood concentration and AUC was 0.4, implying that a tenfold difference would correspond to only about 2.5 standard deviations for those parameters in the human population. The parameters studied to date are only components of overall susceptibility to toxic agents, and do not include contributions from variability in exposure- and response-determining parameters. The current study also implicitly excludes most human interindividual variability from age and illness. When these other sources of variability are included in an overall analysis of variability in susceptibility, it is likely that a tenfold difference will correspond to fewer standard deviations in the overall population, and correspondingly greater numbers of people at risk of toxicity.
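The translation from a log10 geometric standard deviation to "how many standard deviations a tenfold difference spans" is one line of arithmetic, since a factor of 10 is one unit on the log10 scale:

```latex
z_{10\times} \;=\; \frac{\log_{10}10}{\log_{10}\mathrm{GSD}}
\;=\; \frac{1}{\log_{10}\mathrm{GSD}};
\qquad
\frac{1}{0.145}\approx 6.9,\quad
\frac{1}{0.11}\approx 9.1,\quad
\frac{1}{0.4}=2.5
```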

17.
The increase in the thyroid cancer incidence in France observed over the last 20 years has raised public concern about its association with the 1986 nuclear power plant accident at Chernobyl. At the request of French authorities, a first study sought to quantify the possible risk of thyroid cancer associated with the Chernobyl fallout in France. This study suffered from two limitations. The first involved the lack of knowledge of spontaneous thyroid cancer incidence rates (in the absence of exposure), which was especially necessary to take their trends into account for projections over time; the second was the failure to consider the uncertainties. The aim of this article is to enhance the initial thyroid cancer risk assessment for the period 1991-2007 in the area of France most exposed to the fallout (i.e., eastern France) and thereby mitigate these limitations. We consider the changes over time in the incidence of spontaneous thyroid cancer and conduct both uncertainty and sensitivity analyses. The number of spontaneous thyroid cancers was estimated from French cancer registries on the basis of two scenarios: one with a constant incidence, the other using the trend observed. Thyroid doses were estimated from all available data about contamination in France from Chernobyl fallout. Results from a 1995 pooled analysis published by Ron et al. were used to determine the dose-response relation. Depending on the scenario, the number of spontaneous thyroid cancer cases ranges from 894 (90% CI: 869-920) to 1,716 (90% CI: 1,691-1,741). The number of excess thyroid cancer cases predicted ranges from 5 (90% UI: 1-15) to 63 (90% UI: 12-180). All of the assumptions underlying the thyroid cancer risk assessment are discussed.

18.
Vulnerability of human beings exposed to a catastrophic disaster is affected by multiple factors that include hazard intensity, environment, and individual characteristics. The traditional approach to vulnerability assessment, based on the aggregate-area method and unsupervised learning, cannot incorporate spatial information; thus, vulnerability can be only roughly assessed. In this article, we propose Bayesian network (BN) and spatial analysis techniques to mine spatial data sets to evaluate the vulnerability of human beings. In our approach, spatial analysis is leveraged to preprocess the data; for example, kernel density analysis (KDA) and accumulative road cost surface modeling (ARCSM) are employed to quantify the influence of geofeatures on vulnerability and relate such influence to spatial distance. The knowledge- and data-based BN provides a consistent platform to integrate a variety of factors, including those extracted by KDA and ARCSM, to model vulnerability uncertainty. We also consider the model's uncertainty and use the Bayesian model average and Occam's Window to average the multiple models obtained by our approach for robust prediction of risk and vulnerability. We compare our approach with other probabilistic models in a case study of seismic risk and conclude that our approach is a good means of mining spatial data sets for evaluating vulnerability.
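The model-averaging step at the end is the most transportable piece. Below is a sketch of Occam's-window-style Bayesian model averaging over candidate BN structures, with invented BIC scores standing in for model posteriors:

```python
import numpy as np

# Approximate model posteriors from BIC (invented values for four
# candidate BN structures), then keep only models within a factor C
# of the best (Occam's Window) and average their predictions.
bic = np.array([1002.3, 1004.1, 1011.8, 1025.0])
post = np.exp(-0.5 * (bic - bic.min()))
post /= post.sum()

C = 20.0                                 # Occam's Window cutoff ratio
keep = post >= post.max() / C
w = post[keep] / post[keep].sum()

preds = np.array([[0.62, 0.55, 0.70, 0.30],   # per-model predicted
                  [0.58, 0.60, 0.66, 0.35],   # vulnerability for four
                  [0.50, 0.52, 0.75, 0.20],   # locations (invented)
                  [0.40, 0.45, 0.80, 0.10]])
bma = w @ preds[keep]
print(f"kept {keep.sum()} models, averaged vulnerability: {bma.round(2)}")
```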

19.
Variability is the heterogeneity of values within a population. Uncertainty refers to lack of knowledge regarding the true value of a quantity. Mixture distributions have the potential to improve the goodness of fit to data sets not adequately described by a single parametric distribution. Uncertainty due to random sampling error in statistics of interest can be estimated based upon bootstrap simulation. In order to evaluate the robustness of using a mixture distribution as a basis for estimating both variability and uncertainty, 108 synthetic data sets generated from selected population mixture log-normal distributions were investigated, and properties of variability and uncertainty estimates were evaluated with respect to variation in sample size, mixing weight, and separation between components of mixtures. Furthermore, mixture distributions were compared with single-component distributions. Findings include: (1) mixing weight influences the stability of variability and uncertainty estimates; (2) bootstrap simulation results tend to be more stable for larger sample sizes; (3) when two components are well separated, the stability of bootstrap simulation is improved; however, a larger degree of uncertainty arises regarding the percentiles coinciding with the separated region; (4) when two components are not well separated, a single distribution may often be a better choice because it has fewer parameters and better numerical stability; and (5) dependencies exist in sampling distributions of parameters of mixtures and are influenced by the amount of separation between the components. An emission factor case study based upon NOx emissions from coal-fired tangential boilers is used to illustrate the application of the approach.
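A compact version of the study design: fit a two-component lognormal mixture (a Gaussian mixture on the log scale) and bootstrap the uncertainty in a variability statistic. scikit-learn is used for the EM fit; the data are synthetic:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)

# Synthetic two-component lognormal mixture (e.g., an emission factor).
data = np.concatenate([rng.lognormal(0.0, 0.3, 70),
                       rng.lognormal(1.5, 0.4, 30)])

def fit_p95(sample):
    # Fit a 2-component Gaussian mixture on the log scale, then estimate
    # the 95th percentile (variability) by sampling the fitted mixture.
    gm = GaussianMixture(n_components=2, n_init=3, random_state=0)
    gm.fit(np.log(sample).reshape(-1, 1))
    draws = gm.sample(20_000)[0].ravel()
    return np.exp(np.percentile(draws, 95))

# Bootstrap the sampling (uncertainty) distribution of that statistic.
boots = [fit_p95(rng.choice(data, size=data.size, replace=True))
         for _ in range(200)]
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"P95 estimate {fit_p95(data):.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```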

20.
A methodology is developed for ranking entry mode alternatives encountered by individual firms considering foreign direct investment (FDI). The methodology deals with the risks and uncertainties related to FDI. The analytic hierarchy process (AHP) is used to solve the multiple criteria decision-making problem using input from a firm's management. A simulation approach is incorporated into the AHP to handle the uncertainty considerations encountered in an FDI environment. The uncertainties include: (1) uncertainty regarding the future characteristics of the FDI decision making environment, (2) uncertainty associated with the decision maker's judgment regarding pairwise comparisons necessitated by the AHP.
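The AHP-plus-simulation idea in miniature: derive priority weights from the principal eigenvector of a pairwise-comparison matrix, then perturb the judgments by Monte Carlo to propagate the decision maker's uncertainty into the ranking. The matrix and perturbation scale are invented, and only one level of the hierarchy is shown:

```python
import numpy as np

rng = np.random.default_rng(9)

# Invented pairwise-comparison matrix (Saaty 1-9 scale) over three
# criteria for ranking FDI entry modes.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

def priorities(M):
    # AHP priority weights: normalized principal eigenvector.
    vals, vecs = np.linalg.eig(M)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

wins = np.zeros(3)
for _ in range(5000):
    # Lognormally perturb each judgment, preserving reciprocity.
    P = A.copy()
    for i in range(3):
        for j in range(i + 1, 3):
            P[i, j] = A[i, j] * rng.lognormal(0.0, 0.15)
            P[j, i] = 1.0 / P[i, j]
    wins[np.argmax(priorities(P))] += 1

print("probability each criterion ranks first:", wins / wins.sum())
```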
