Similar documents
20 similar documents found (search time: 15 ms)
1.
The importance of fitting distributions to data for risk analysis continues to grow as regulatory agencies, like the Environmental Protection Agency (EPA), continue to shift from deterministic to probabilistic risk assessment techniques. The use of Monte Carlo simulation as a tool for propagating variability and uncertainty in risk requires specification of the risk model's inputs in the form of distributions or tables of data. Several software tools exist to support risk assessors in their efforts to develop distributions. However, users must keep in mind that these tools do not replace clear thought about judgments that must be made in characterizing the information from data. This overview introduces risk assessors to the statistical concepts and physical reasons that support important judgments about appropriate types of parametric distributions and goodness-of-fit. In the context of using data to improve risk assessment and ultimately risk management, this paper discusses issues related to the nature of the data (representativeness, quantity and quality, correlation with space and time, and distinguishing between variability and uncertainty for a set of data), and matching data and distributions appropriately. All data analysis (whether “Frequentist” or “Bayesian” or oblivious to the distinction) requires the use of subjective judgment. The paper offers an iterative process for developing distributions using data to characterize variability and uncertainty for inputs to risk models that provides incentives for collecting better information when the value of information exceeds its cost. Risk analysts need to focus attention on characterizing the information appropriately for purposes of the risk assessment (and risk management questions at hand), not on characterization for its own sake.
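As a concrete illustration of the fitting-and-judgment step described above, the following minimal sketch (not from the paper; the data, the lognormal candidate, and all values are assumptions) fits a parametric distribution by maximum likelihood and computes a Kolmogorov-Smirnov goodness-of-fit statistic.

```python
# Minimal sketch (assumed data and distribution choice): fit a candidate
# parametric distribution to exposure data and check goodness-of-fit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.lognormal(mean=1.0, sigma=0.5, size=60)   # stand-in for measured input data

# Fit a lognormal by maximum likelihood (location fixed at zero).
shape, loc, scale = stats.lognorm.fit(data, floc=0)

# Goodness-of-fit: Kolmogorov-Smirnov statistic against the fitted CDF.
# Caveat: the p-value is optimistic when parameters are estimated from the same data.
ks_stat, p_value = stats.kstest(data, "lognorm", args=(shape, loc, scale))
print(f"fitted sigma={shape:.3f}, median={scale:.3f}, KS p-value={p_value:.3f}")
```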

2.
The selection of the most appropriate model for an ecological risk assessment depends on the application, the data and resources available, the knowledge base of the assessor, the relevant endpoints, and the extent to which the model deals with uncertainty. Since ecological systems are highly variable and our knowledge of model input parameters is uncertain, it is important that models include treatments of uncertainty and variability, and that results are reported in this light. In this paper we discuss treatments of variation and uncertainty in a variety of population models. In ecological risk assessments, the risk relates to the probability of an adverse event in the context of environmental variation. Uncertainty relates to ignorance about parameter values, e.g., measurement error and systematic error. An assessment of the full distribution of risks, under variability and parameter uncertainty, will give the most comprehensive and flexible endpoint. In this paper we present the rationale behind probabilistic risk assessment, identify the sources of uncertainty relevant for risk assessment and provide an overview of a range of population models. While all of the models reviewed have some utility in ecology, some have more comprehensive treatments of uncertainty than others. We identify the models that allow probabilistic assessments and sensitivity analyses, and we offer recommendations for further developments that aim towards more comprehensive and reliable ecological risk assessments for populations.

3.
This article reviews some of the current guidance concerning the separation of variability and uncertainty in presenting the results of human health and ecological risk assessments. Such guidance and some of the published examples of its implementation using two-stage Monte Carlo simulation methods have not emphasized the fact that there is considerable judgment involved in determining which input parameters can be modeled as purely variable or purely uncertain, and which require explicit treatment in both dimensions. Failure to discuss these choices leads to confusion and misunderstanding of the proposed methods. We conclude with an example illustrating some of the reasoning and statistical calculations that might be used to inform such choices.
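For readers unfamiliar with the two-stage approach mentioned above, the following is a minimal sketch of a two-dimensional Monte Carlo, not the article's model: the outer loop samples uncertain parameters, the inner loop samples inter-individual variability, and the output is the uncertainty about a chosen variability percentile. The dose equation and all distributions are illustrative assumptions.

```python
# Minimal sketch of a two-dimensional (two-stage) Monte Carlo; all values assumed.
import numpy as np

rng = np.random.default_rng(0)
n_outer, n_inner = 200, 2000

p95s = []
for _ in range(n_outer):                                      # outer loop: uncertainty
    gm_intake = rng.normal(1.0, 0.1)                          # uncertain geometric mean of intake (L/day), assumed
    mean_conc = rng.lognormal(np.log(5.0), 0.2)               # uncertain mean concentration (ug/L), assumed

    intake = rng.lognormal(np.log(gm_intake), 0.4, n_inner)   # inner loop: inter-individual variability
    body_weight = rng.normal(70.0, 12.0, n_inner)             # kg, variable
    dose = intake * mean_conc / body_weight                   # ug/kg-day

    p95s.append(np.percentile(dose, 95))                      # one variability percentile per uncertainty draw

# Uncertainty (5th/50th/95th percentiles) about the 95th percentile of variability
print(np.percentile(p95s, [5, 50, 95]))
```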

4.
While default uncertainty factor (UF) adjustments have been proposed for pharmacokinetic variability in the derivation of Reference Doses (RfDs), few attempts have been made to derive chemical-specific UFs for such variability. In recent epidemiologic data on the neuro-developmental effects of MeHg, Hg concentration in either hair or blood is the point of departure for RfD derivation. The application of a pharmacokinetic model to derive an intake dose from the measured biomarker concentration allows examination of the inter-individual variability in the relationship between intake dose and biomarker concentration through specification of the variability in model parameters. Three independent studies of this variability, using different models and/or different parameter values, are compared. While differences in central tendency estimates give different predictions of the intake dose corresponding to a given biomarker concentration, normalization of the central tendency estimate resulted in strong agreement among the studies. Starting with Hg concentration in hair or blood and dividing a central tendency estimate of the corresponding intake dose by a UF of 2 to 3 accounts for 95 to 99% of the variability in the relationship between intake dose and biomarker concentration. This variability, however, encompasses only a portion of the maternal ingestion-to-fetal brain pathway. It is therefore likely that this UF underestimates the overall pharmacokinetic variability in this pathway.
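The arithmetic behind a data-derived UF of this kind can be sketched as follows, assuming (purely for illustration, not taken from the three studies) that the intake-dose-to-biomarker ratio varies lognormally across individuals with a given geometric standard deviation; the UF covering a chosen fraction of the population is then the ratio of that percentile to the median.

```python
# Minimal sketch (assumed GSD, not the MeHg study values): a chemical-specific UF
# as a percentile-to-median ratio of a lognormal pharmacokinetic variability term.
import numpy as np
from scipy import stats

gsd = 1.5                                   # geometric SD of dose-per-biomarker ratio, assumed
for coverage in (0.95, 0.975, 0.99):
    z = stats.norm.ppf(coverage)
    uf = gsd ** z                           # percentile / median of a lognormal = GSD**z
    print(f"coverage {coverage:.1%}: chemical-specific UF ≈ {uf:.2f}")
```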

5.
Uncertainty and variability affect economic and environmental performance in the production of biotechnology and pharmaceutical products. However, commercial process simulation software typically provides analysis that assumes deterministic rather than stochastic process parameters and thus is not capable of dealing with the complexities created by variance that arise in the decision-making process. Using the production of penicillin V as a case study, this article shows how uncertainty can be quantified and evaluated. The first step is construction of a process model, as well as analysis of its cost structure and environmental impact. The second step is identification of uncertain variables and determination of their probability distributions based on available process and literature data. Finally, Monte Carlo simulations are run to see how these uncertainties propagate through the model and affect key economic and environmental outcomes. Thus, the overall variation of these objective functions is quantified; the technical, supply chain, and market parameters that contribute most to the existing variance are identified; and the differences between economic and ecological evaluation are analyzed. In our case study analysis, we show that final penicillin and biomass concentrations in the fermenter have the highest contribution to variance for both unit production cost and environmental impact. The penicillin selling price dominates return on investment variance as well as the variance for other revenue-dependent parameters.
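A minimal sketch of the propagation and contribution-to-variance step, using a toy cost model and assumed input distributions rather than the article's process simulation: inputs are sampled, pushed through the model, and ranked by squared Spearman rank correlation with each output.

```python
# Minimal sketch (toy model, assumed distributions): Monte Carlo propagation plus
# a crude contribution-to-variance ranking via squared rank correlations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 5000
inputs = {
    "penicillin_titer_g_L": rng.normal(50.0, 5.0, n),             # final penicillin concentration, assumed
    "biomass_g_L":          rng.normal(30.0, 4.0, n),             # final biomass concentration, assumed
    "selling_price_per_kg": rng.triangular(18.0, 22.0, 30.0, n),  # selling price, assumed
}

unit_cost = 1000.0 / inputs["penicillin_titer_g_L"] + 2.0 * inputs["biomass_g_L"]  # toy cost model
roi = inputs["selling_price_per_kg"] / unit_cost                                   # toy ROI proxy

for name, output in [("unit production cost", unit_cost), ("return on investment", roi)]:
    rho2 = {k: stats.spearmanr(v, output)[0] ** 2 for k, v in inputs.items()}
    total = sum(rho2.values())
    print(name, {k: round(r / total, 2) for k, r in rho2.items()})
```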

6.
International harmonization of risk assessment approaches affords a number of opportunities and advantages. Overall, harmonization will lead not only to more efficient use of resources but also to better understanding amongst scientists and regulators worldwide. It is with these goals in mind that in 1994 the International Programme on Chemical Safety (IPCS) initiated its Project on the Harmonization of Approaches to the Assessment of Risk from Exposure to Chemicals (Harmonization Project). An ongoing activity under this project addresses uncertainty and variability in risk assessment. The goal of the overall activity is to promote harmonization of risk assessment methodologies for noncancer endpoints. However, given the common links in uncertainty and variability that apply across a range of endpoint-specific activities, these links are identified wherever possible. This paper provides an overview of the IPCS Harmonization Project and reviews the activity and future plans related to uncertainty and variability.

7.
Four different probabilistic risk assessment methods were compared using the data from the Sangamo Weston/Lake Hartwell Superfund site. These were one-dimensional Monte Carlo, two-dimensional Monte Carlo considering uncertainty in the concentration term, two-dimensional Monte Carlo considering uncertainty in ingestion rate, and microexposure event analysis. Estimated high-end risks ranged from 2.0×10⁻⁴ to 3.3×10⁻³. Microexposure event analysis produced a lower risk estimate than any of the other methods due to incorporation of time-dependent changes in the concentration term.
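The effect of a time-dependent concentration term can be illustrated with a toy calculation (all values assumed, not the site data): a one-dimensional Monte Carlo applies a single concentration over the whole exposure duration, while a microexposure-style variant lets the concentration decline year by year before averaging.

```python
# Minimal sketch (assumed values): one-dimensional Monte Carlo versus a toy
# microexposure-style calculation with a declining concentration term.
import numpy as np

rng = np.random.default_rng(8)
n, years = 10000, 30
slope_factor = 2.0                                    # (mg/kg-day)^-1, assumed
intake = rng.lognormal(np.log(0.02), 0.6, n)          # fish ingestion rate, kg/day (variable)
body_weight = rng.normal(70.0, 12.0, n)               # kg (variable)
conc_0 = rng.lognormal(np.log(1.0), 0.4, n)           # tissue concentration at time zero, mg/kg

# One-dimensional Monte Carlo: a single concentration applied over the full duration
risk_1d = slope_factor * conc_0 * intake * years / (body_weight * 70.0)

# Microexposure-style: concentration declines year by year (5 %/yr decay assumed)
conc_avg = conc_0 * (0.95 ** np.arange(years)).mean()
risk_micro = slope_factor * conc_avg * intake * years / (body_weight * 70.0)

for label, risk in [("1-D Monte Carlo", risk_1d), ("microexposure", risk_micro)]:
    print(f"{label:16s} 95th percentile risk = {np.percentile(risk, 95):.1e}")
```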

8.
9.
Groundwater modeling typically relies on hypotheses and approximations of reality, as real hydrologic systems are far more complex than we can characterize mathematically. Such model errors cannot be neglected in the uncertainty analysis of a model's predictions in practical applications. As the scale and complexity increase, the associated uncertainties grow dramatically. In this study, a Bayesian uncertainty analysis method for a deterministic model's predictions is presented. The geostatistics of hydrogeologic parameters obtained from site characterization are treated as the prior parameter distribution in Bayes' theorem. Then the Markov chain Monte Carlo method is used to generate the posterior statistical distribution of the model's predictions, conditioned on the observed hydrologic system behaviors. Finally, a series of synthetic examples are given by applying this method to a MODFLOW pumping test model, to test its capability and efficiency in assessing various sources of the model's prediction uncertainty. The impacts of the model's parameter sensitivity, simplification, and observation errors on prediction uncertainty are evaluated, respectively. The results are analyzed statistically to provide deterministic predictions with associated prediction errors. Risk analysis is also derived from the Bayesian results to draw tradeoff curves for decision-making about exploitation of groundwater resources.
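A minimal sketch of the Bayesian machinery, with a one-parameter toy forward model standing in for MODFLOW: a Metropolis random walk samples the posterior of log-conductivity given noisy head observations and then propagates it into a posterior predictive distribution. The prior, likelihood, and all values are assumptions.

```python
# Minimal sketch (toy forward model and assumed prior), not the study's MODFLOW setup.
import numpy as np

rng = np.random.default_rng(3)

def forward(logK):            # stand-in for the groundwater model
    return 10.0 - 2.0 * logK  # "head" responds linearly to log-conductivity (assumed)

obs = forward(1.2) + rng.normal(0, 0.1, 5)        # synthetic observations, sigma = 0.1

def log_posterior(logK):
    prior = -0.5 * ((logK - 1.0) / 0.5) ** 2      # Gaussian prior from site geostatistics (assumed)
    like = -0.5 * np.sum(((obs - forward(logK)) / 0.1) ** 2)
    return prior + like

samples, current = [], 1.0
for _ in range(20000):                            # Metropolis random walk
    proposal = current + rng.normal(0, 0.05)
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(current):
        current = proposal
    samples.append(current)

post = np.array(samples[5000:])                   # drop burn-in
prediction = forward(post) + rng.normal(0, 0.1, post.size)   # posterior predictive head
print(np.percentile(prediction, [2.5, 50, 97.5]))
```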

10.
Fu Yu, Lei Yuancai, Zeng Weisheng. Acta Ecologica Sinica (生态学报), 2015, 35(23): 7738-7747
Using continuous observation data and biomass data for Chinese fir from permanent plots of the systematic sampling framework in Jiangxi Province, the total aboveground biomass of Chinese fir in Jiangxi was estimated by repeatedly simulating, with the Monte Carlo method, the process of scaling individual-tree biomass models up to regional aboveground biomass. Based on designs with different modeling sample sizes n and different coefficients of determination R², the effects of the variability of the individual-tree biomass model parameters and of the model residuals on the uncertainty of the regional-scale biomass estimate were studied separately. The results show that the estimated aboveground biomass of Chinese fir in Jiangxi Province in 2009 was (19.84 ± 1.27) t/hm², with the uncertainty accounting for about 6.41% of the biomass estimate. The computation time needed for the biomass estimate and the uncertainty to stabilize decreased as the modeling sample size and the coefficient of determination R² increased; compared with model parameter variability, residual variability had a smaller effect on the uncertainty.
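A minimal sketch of the resampling idea (not the authors' data, model form, or sampling design): the individual-tree model parameters are drawn from an assumed estimated covariance matrix, residual error is added on the log scale, and tree-level predictions are summed to a per-hectare total in each Monte Carlo replicate.

```python
# Minimal sketch (assumed allometric model, covariance, and tree list): Monte Carlo
# over parameter and residual variability when scaling tree biomass to area totals.
import numpy as np

rng = np.random.default_rng(4)

# Assumed model ln(B) = b0 + b1*ln(D), with fitted coefficients and their covariance
beta_hat = np.array([-2.0, 2.4])
cov_beta = np.array([[0.010, -0.002],
                     [-0.002, 0.001]])
sigma_resid = 0.15                              # residual SD on the log scale, assumed

diameters = rng.uniform(8, 30, 300)             # tree diameters (cm) on sample plots, assumed
plot_area_ha = 0.6

totals = []
for _ in range(2000):                           # Monte Carlo over parameter + residual variability
    b0, b1 = rng.multivariate_normal(beta_hat, cov_beta)
    eps = rng.normal(0, sigma_resid, diameters.size)
    biomass_kg = np.exp(b0 + b1 * np.log(diameters) + eps)
    totals.append(biomass_kg.sum() / 1000 / plot_area_ha)   # t/ha

totals = np.array(totals)
print(f"mean = {totals.mean():.2f} t/ha, uncertainty (95% half-width) = {1.96 * totals.std():.2f} t/ha")
```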

11.
The results of quantitative risk assessments are key factors in a risk manager's decision on whether to implement actions to reduce risk. The extent of the uncertainty in the assessment will play a large part in the degree of confidence a risk manager has in the reported significance and probability of a given risk. The two main sources of uncertainty in such risk assessments are variability and incertitude. In this paper we use two methods, a second-order two-dimensional Monte Carlo analysis and probability bounds analysis, to investigate the impact of both types of uncertainty on the results of a food-web exposure model. We demonstrate how the full extent of uncertainty in a risk estimate can be portrayed in a way that is useful to risk managers. We show that probability bounds analysis is a useful tool for identifying the parameters that contribute the most to uncertainty in a risk estimate and how it can be used to complement established practices in risk assessment. We conclude by promoting the use of probability bounds analysis in conjunction with Monte Carlo analyses as a method for checking how plausible Monte Carlo results are in the full context of uncertainty.
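A crude, envelope-style illustration of the probability-bounds idea (interval-valued parameters for a single lognormal input; full p-box arithmetic as implemented in dedicated software is not reproduced, and all intervals are assumptions): the bounds on the CDF turn an exceedance probability into an interval rather than a single number.

```python
# Minimal sketch: an envelope of CDFs over interval-valued parameters as a crude
# stand-in for a p-box; parameter intervals and the threshold are assumptions.
import numpy as np
from scipy import stats

x = np.linspace(0, 10, 500)

# Incertitude: the lognormal's parameters are only known to lie in intervals.
medians = np.linspace(1.5, 2.5, 11)     # interval for the median
sigmas = np.linspace(0.3, 0.6, 7)       # interval for the log-scale SD

cdfs = np.array([stats.lognorm.cdf(x, s, scale=m) for m in medians for s in sigmas])
lower_cdf, upper_cdf = cdfs.min(axis=0), cdfs.max(axis=0)   # p-box edges

# Probability of exceeding a threshold of 5 (assumed) is an interval, not a point:
idx = np.searchsorted(x, 5.0)
print("P(X > 5) lies in", (round(1 - upper_cdf[idx], 3), round(1 - lower_cdf[idx], 3)))
```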

12.
Purpose: To study the impact of shielding elements in the proximity of Intra-Operative Radiation Therapy (IORT) irradiation fields, and to generate graphical and quantitative information to assist radiation oncologists in the design of optimal shielding during pelvic and abdominal IORT. Method: An IORT system was modeled with the BEAMnrc and EGS++ Monte Carlo codes. The model was validated in reference conditions by gamma index analysis against an experimental data set of different beam energies, applicator diameters, and bevel angles. The reliability of the IORT model was further tested considering shielding layers inserted in the radiation beam. Further simulations were performed introducing a bone-like layer embedded in the water phantom. The dose distributions were calculated as 3D dose maps. Results: The analysis of the resulting 2D dose maps parallel to the clinical axis shows that the bevel angle of the applicator and its position relative to the shielding have a major influence on the dose distribution. When insufficient shielding is used, a hotspot appears near the surface close to the shield. At greater depths, lateral scatter limits the dose reduction attainable with shielding, although the presence of bone-like structures in the phantom reduces the impact of this effect. Conclusions: Dose distributions in shielded IORT procedures are affected by distinct contributions when considering the regions near the shielding and deeper in tissue: insufficient shielding may lead to residual dose and hotspots, and the scattering effects may enlarge the beam in depth. These effects must be carefully considered when planning an IORT treatment with shielding.
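For reference, a one-dimensional gamma-index comparison of the kind used for the validation step can be sketched as follows; the 3%/3 mm criteria and the two synthetic depth-dose curves are assumptions, not the study's measured IORT data.

```python
# Minimal sketch: 1-D gamma-index analysis (global normalisation) between a
# "measured" and a "simulated" depth-dose curve; profiles are synthetic stand-ins.
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_tol=0.03, dist_tol_mm=3.0):
    """Return the gamma value at each reference point."""
    d_max = d_ref.max()
    gammas = []
    for xr, dr in zip(x_ref, d_ref):
        dist2 = ((x_eval - xr) / dist_tol_mm) ** 2
        dose2 = ((d_eval - dr) / (dose_tol * d_max)) ** 2
        gammas.append(np.sqrt(np.min(dist2 + dose2)))
    return np.array(gammas)

depth = np.linspace(0, 40, 201)                         # mm
measured = np.exp(-((depth - 12) / 14) ** 2)            # toy measured depth dose
simulated = np.exp(-((depth - 12.4) / 14.2) ** 2)       # toy Monte Carlo depth dose

gamma = gamma_1d(depth, measured, depth, simulated)
print(f"gamma pass rate (gamma <= 1): {100 * np.mean(gamma <= 1):.1f}%")
```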

13.
14.
15.
Study of perturbation experiments is crucial for conservation biology. We carry out Monte Carlo simulations on finite-size lattices composed of n species (n ≤ 4). The value of the mortality rate m of the top predator is altered to a higher or lower level, and a fluctuation enhancement (FE) is explored. Here FE means an uncertainty in population dynamics. It is found for n ≥ 2 that FE is observed when m is decreased. Namely, when we protect the top predator, its population dynamics becomes very difficult to predict.
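A minimal sketch in the same spirit, though not the paper's n ≤ 4 model: a two-species stochastic lattice simulation run in replicate at two mortality rates, using the between-replicate spread of the final predator density as a crude stand-in for the fluctuation measure. All rates and sizes are assumptions.

```python
# Minimal sketch (assumed two-species lattice rules, not the paper's model):
# replicate runs at two predator mortality rates, comparing the spread of outcomes.
import numpy as np

EMPTY, PREY, PRED = 0, 1, 2

def final_predator_density(m, L=25, sweeps=100, seed=0):
    rng = np.random.default_rng(seed)
    grid = rng.integers(0, 3, size=(L, L))
    for _ in range(sweeps * L * L):                  # random sequential updates
        x, y = rng.integers(0, L, size=2)
        d = rng.integers(4)                          # pick one of 4 von Neumann neighbours
        nx = (x + (d == 0) - (d == 1)) % L
        ny = (y + (d == 2) - (d == 3)) % L
        if grid[x, y] == PRED:
            if rng.random() < m:
                grid[x, y] = EMPTY                   # predator mortality
            elif grid[nx, ny] == PREY:
                grid[nx, ny] = PRED                  # predation followed by reproduction
        elif grid[x, y] == PREY and grid[nx, ny] == EMPTY:
            grid[nx, ny] = PREY                      # prey reproduction into an empty site
    return float(np.mean(grid == PRED))

for m in (0.3, 0.05):                                # baseline vs lowered ("protected") mortality
    densities = [final_predator_density(m, seed=s) for s in range(10)]
    print(f"m = {m}: mean predator density = {np.mean(densities):.3f}, "
          f"between-replicate SD = {np.std(densities):.3f}")
```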

16.
Life cycle inventory data have multiple sources of uncertainty. These data uncertainties are often modeled using probability density functions, and in the ecoinvent database the lognormal distribution is used by default to model exchange uncertainty values. The aim of this article is to systematically measure the effect of this default distribution by changing from the lognormal to several other distribution functions and examining how this change affects the uncertainty of life cycle assessment results. Using the ecoinvent 2.2 inventory database, data uncertainty distributions are switched from the lognormal distribution to the normal, triangular, and gamma distributions. The effect of the distribution switching is assessed both for impact assessment results of individual product systems and for comparisons between product systems. Impact assessment results are generated using 5,000 Monte Carlo iterations for each product system, using the Intergovernmental Panel on Climate Change (IPCC) 2001 (100-year time frame) method. When comparing the lognormal distribution to the alternative default distributions, the difference in the resulting median and standard deviation values ranges from slight to significant, depending on the distributions used by default. However, the switch shows practically no effect on product system comparisons. Yet, impact assessment results are sensitive to how the data uncertainties are defined. In this article, we followed what we believe to be ecoinvent standard practice and preserved the “most representative” value. Practitioners should recognize that the most representative value can depart from the average of a probability distribution. Consistent default distribution choices are necessary when performing product system comparisons.
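The switching step can be sketched as follows (assumed values, not the ecoinvent implementation): a lognormal exchange given by its geometric mean and geometric standard deviation is re-expressed as normal, triangular, and gamma distributions with a comparable central value and spread; exactly which statistic is preserved (median, mean, or mode) differs by distribution, which is the representativeness issue the article raises.

```python
# Minimal sketch (assumed exchange values): re-express a lognormal uncertainty as
# normal, gamma, and triangular distributions with a comparable centre and spread.
import numpy as np
from scipy import stats

gm, gsd = 2.0, 1.3                          # geometric mean / geometric SD, assumed
mu, sigma = np.log(gm), np.log(gsd)
mean = np.exp(mu + sigma**2 / 2)
sd = mean * np.sqrt(np.exp(sigma**2) - 1)   # arithmetic moments of the lognormal

alternatives = {
    "lognormal":  stats.lognorm(sigma, scale=gm),                        # median = gm
    "normal":     stats.norm(loc=gm, scale=sd),                          # median = gm, matched SD
    "gamma":      stats.gamma(a=(mean / sd) ** 2, scale=sd**2 / mean),   # matched mean and SD
    "triangular": stats.triang(c=0.5, loc=gm - np.sqrt(6) * sd, scale=2 * np.sqrt(6) * sd),  # mode = gm
}

rng = np.random.default_rng(5)
for name, dist in alternatives.items():
    draws = dist.rvs(size=5000, random_state=rng)
    print(f"{name:11s} median = {np.median(draws):.2f}  sd = {draws.std():.2f}")
```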

17.
The life cycle environmental profile of energy-consuming products is dominated by the products' use stage. Variation in real-world product use can therefore yield large differences in the results of life cycle assessment (LCA). Adequate characterization of input parameters is paramount for uncertainty quantification and has been a challenge to wider adoption of the LCA method. After emphasis in recent years on methodological development, data development has become the primary focus again. Pervasive sensing presents the opportunity to collect rich data sets and improve profiling of use-stage parameters. Illustrating a data-driven approach, we examine energy use in domestic cooling systems, focusing on climate change as the impact category. Specific objectives were to examine (1) how characterizing the use stage with different probability distributions and (2) how characterizing data aggregated at successively higher granularity affect LCA modeling results and the uncertainty in the output. Appliance-level electricity data were sourced from domestic residences for 3 years. Use-stage variables were propagated in a stochastic model and analyses were simulated by a Monte Carlo procedure. Although distribution choice did not necessarily significantly impact the estimated output, there were differences in the estimated uncertainty. Characterization of use-stage power consumption in the model at successively higher data granularity reduced the output uncertainty with diminishing returns. Results therefore justify the collection of high-granularity data sets representing the life cycle use stage of high-energy products. The availability of such data through proliferation of pervasive sensing presents increasing opportunities to better characterize data and increase confidence in results of LCA.
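A minimal sketch of the granularity point with synthetic data (not the study's sensor set): the mean daily energy use is characterized from samples of increasing size, propagated by Monte Carlo together with an assumed emission factor, and the width of the resulting footprint interval shrinks with diminishing returns.

```python
# Minimal sketch (synthetic use-stage data and assumed emission factor): output
# uncertainty narrows, with diminishing returns, as more use-stage data are used.
import numpy as np

rng = np.random.default_rng(9)
daily_kwh = rng.lognormal(np.log(4.0), 0.5, 3 * 365)     # 3 years of daily appliance use, assumed
emission_factor = rng.normal(0.5, 0.05, 5000)            # kg CO2-eq per kWh, assumed

for n_days in (7, 30, 365, 3 * 365):
    sample = rng.choice(daily_kwh, n_days, replace=False)
    # Parameter uncertainty of mean daily use, given only n_days of observations
    mean_daily = rng.normal(sample.mean(), sample.std(ddof=1) / np.sqrt(n_days), 5000)
    annual_co2 = mean_daily * 365 * emission_factor      # kg CO2-eq per year
    p5, p95 = np.percentile(annual_co2, [5, 95])
    print(f"{n_days:5d} days of data -> 90% interval width = {p95 - p5:7.0f} kg CO2-eq")
```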

18.
A finite population consists of kN individuals of N different categories with k individuals each. It is required to estimate the unknown parameter N, the number of different categories in the population. A sequential sampling scheme is considered in which individuals are sampled until a preassigned number of repetitions of already observed categories occurs in the sample. Corresponding fixed sample size schemes were considered by Charalambides (1981). The sequential sampling scheme has the advantage of always allowing unbiased estimation of the size parameter N. It is shown that relative to Charalambides' fixed sample size scheme only minor adjustments are required to account for the sequential scheme. In particular, MVU estimators of parametric functions are expressible in terms of the C-numbers introduced by Charalambides.
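The sampling scheme itself is easy to simulate; the sketch below does only that (the paper's MVU estimator based on C-numbers is not reproduced) and, for illustration, estimates N by a crude simulation-based match on the number of distinct categories observed at stopping. The values of k, r, and the true N are assumptions.

```python
# Minimal sketch: simulate the sequential (inverse) sampling scheme and estimate N
# by matching the expected number of distinct categories; not the paper's MVU estimator.
import numpy as np

rng = np.random.default_rng(6)
k, r = 5, 10                                    # category size and stopping rule, assumed

def sample_distinct(N, rng):
    """Run the sequential scheme once; return #distinct categories when r repetitions occur."""
    pool = np.repeat(np.arange(N), k)
    rng.shuffle(pool)
    seen, repeats = set(), 0
    for label in pool:
        if label in seen:
            repeats += 1
            if repeats == r:
                break
        else:
            seen.add(label)
    return len(seen)

observed = sample_distinct(60, rng)             # pretend the true N = 60 is unknown

# Crude estimator: the candidate N whose expected #distinct best matches the observation.
candidates = np.arange(20, 120)
expected = [np.mean([sample_distinct(N, rng) for _ in range(200)]) for N in candidates]
n_hat = candidates[np.argmin(np.abs(np.array(expected) - observed))]
print(f"distinct categories at stopping = {observed}, estimated N = {n_hat}")
```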

19.
The life cycle environmental profile of energy-consuming products, such as air conditioning, is dominated by the products' use phase. Different user behavior patterns can therefore yield large differences in the results of a cradle-to-grave assessment. Although this variation and uncertainty are increasingly recognized, they often remain poorly characterized in life cycle assessment (LCA) studies. Today, pervasive sensing presents the opportunity to collect rich data sets and improve profiling of use-phase parameters, in turn facilitating quantification and reduction of this uncertainty in LCA. This study examined the case of energy use in building cooling systems, focusing on global warming potential (GWP) as the impact category. In Singapore, building cooling systems or air conditioning consumes up to 37% of national electricity demand. Lack of consideration of variation in use-phase interaction leads to oversized designs, wasted energy, and therefore avoidable GWP. Using a high-resolution data set derived from sensor observations, energy use and behavior patterns of single-office occupants were characterized by probabilistic distributions. The interindividual variability and use-phase variables were propagated in a stochastic model for the life cycle of air-conditioning systems and simulated by way of Monte Carlo analysis. Analysis of the generated uncertainties identified plausible reductions in global warming impact through modifying user interaction. Designers concerned about the environmental profile of their products or systems need better representation of the underlying variability in use-phase data to evaluate the impact. This study suggests that data can be reliably provided and incorporated into the life cycle by proliferation of pervasive sensing, which can only continue to benefit future LCA.

20.
Introduction: A mathematical 3D model of an existing computed tomography (CT) scanner was created and used in the EGSnrc-based BEAMnrc and egs_cbct Monte Carlo codes. Simulated transmission dose profiles of a RMI-465 phantom were analysed to verify Hounsfield numbers against measured data obtained from the CT scanner. Methods and materials: The modelled CT unit is based on the design of a Toshiba Aquilion 16 LB CT scanner. As a first step, BEAMnrc simulated the X-ray tube, filters, and secondary collimation to obtain phase space data of the X-ray beam. A bowtie filter was included to create a more uniform beam intensity and to remove the beam hardening effects. In a second step the Interactive Data Language (IDL) code was used to build an EGSPHANT file that contained the RMI phantom which was used in egs_cbct simulations. After simulation a series of profiles were sampled from the detector model and the Feldkamp-Davis-Kress (FDK) algorithm was used to reconstruct transversal images. The results were tested against measured data obtained from CT scans. Results: The egs_cbct code can be used for the simulation of a fan beam CT unit. The calculated bowtie filter ensured a uniform flux on the detectors. Good correlation between measured and simulated CT numbers was obtained. Conclusions: In principle, Monte Carlo codes such as egs_cbct can model a fan beam CT unit. After reconstruction, the images contained Hounsfield values comparable to measured data.
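For context, the conversion from reconstructed linear attenuation coefficients to Hounsfield numbers used in such comparisons is simply a rescaling against water; the attenuation values below are illustrative placeholders, not the simulated RMI-465 inserts.

```python
# Minimal sketch: converting linear attenuation coefficients to Hounsfield numbers
# for comparison with scanner-measured CT numbers; mu values are assumed placeholders.
import numpy as np

mu_water = 0.0195                       # mm^-1 at the effective beam energy, assumed
mu_recon = np.array([0.0000, 0.0185, 0.0195, 0.0210, 0.0310])   # air ... bone-like, assumed

hounsfield = 1000.0 * (mu_recon - mu_water) / mu_water
print(np.round(hounsfield))             # roughly [-1000, -51, 0, 77, 590]
```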
