Similar Literature
 20 similar documents found (search time: 31 ms)
1.
This article discusses how an analyst's or expert's beliefs about the credibility and quality of models can be assessed and incorporated into the uncertainty assessment of an unknown of interest. The proposed methodology is a specialization of the Bayesian framework for the assessment of model uncertainty presented in an earlier paper. This formalism treats models as sources of information in assessing the uncertainty of an unknown, and it allows the use of predictions from multiple models as well as experimental validation data about the models' performances. In this article, the methodology is extended to incorporate additional types of information about the model, namely, subjective information in terms of the credibility of the model and its applicability when it is used outside its intended domain of application. An example in the context of fire risk modeling is also provided.

2.
The treatment of uncertainties associated with modeling and risk assessment has recently attracted significant attention. The methodology and guidance for dealing with parameter uncertainty have been fairly well developed, and quantitative tools such as Monte Carlo modeling are often recommended. However, the issue of model uncertainty is still rarely addressed in practical applications of risk assessment. The use of several alternative models to derive a range of model outputs or risks is one of a few available techniques. This article addresses the often-overlooked issue of what we call "modeler uncertainty," i.e., differences in problem formulation, model implementation, and parameter selection originating from subjective interpretation of the problem at hand. This study uses results from the Fruit Working Group, which was created under the International Atomic Energy Agency (IAEA) BIOMASS program (BIOsphere Modeling and ASSessment). Model-model and model-data intercomparisons reviewed in this study were conducted by the working group for a total of three different scenarios. The greatest uncertainty was found to result from modelers' interpretation of scenarios and approximations made by modelers. In scenarios that were unclear to modelers, the initial differences in model predictions were as high as seven orders of magnitude. Only after several meetings and discussions about specific assumptions did the predictions of the various models converge. Our study shows that parameter uncertainty (as evaluated by a probabilistic Monte Carlo assessment) may have contributed over one order of magnitude to the overall modeling uncertainty. The final model predictions ranged between one and three orders of magnitude, depending on the specific scenario. This study illustrates the importance of problem formulation and implementation of an analytic-deliberative process in risk characterization.

3.
In any model the values of estimates for various parameters are obtained from different sources, each with its own level of uncertainty. When the probability distributions of the estimates are obtained, as opposed to point values only, the measurement uncertainties in the parameter estimates may be addressed. However, the sources used for obtaining the data and the models used to select appropriate distributions are of differing degrees of uncertainty. A hierarchy of different sources of uncertainty based upon one's ability to validate data and models empirically is presented. When model parameters are aggregated with different levels of the hierarchy represented, this implies distortion or degradation in the utility and validity of the models used. Means to identify and deal with such heterogeneous data sources are explored, and a number of approaches to addressing this problem are presented. One approach, using Range/Confidence Estimates coupled with an Information Value Analysis Process, is presented as an example.

4.
It is sometimes argued that the use of increasingly complex "biologically-based" risk assessment (BBRA) models to capture increasing mechanistic understanding of carcinogenic processes may run into a practical barrier that cannot be overcome in the near term: the need for unrealistically large amounts of data about pharmacokinetic and pharmacodynamic parameters. This paper shows that, for a class of dynamical models widely used in biologically-based risk assessments, it is unnecessary to estimate the values of the individual parameters. Instead, a key input-output property of such a model (specifically, the ratio of the area-under-curve (AUC) for any selected output to the AUC of the input) is determined by a single aggregate "reduced" constant, which can be estimated from measured input and output quantities. Uncertainties about the many individual parameter values of the model, and even uncertainties about its internal structure, are irrelevant for purposes of quantifying and extrapolating its input-output (e.g., dose-response) behavior. We prove that this is the case for the class of linear, constant-coefficient, globally stable compartmental flow systems used in many classical pharmacokinetic and low-dose PBPK models. Examples are cited that suggest that the value of the reduced parameter representing such a system's aggregate behavior may be relatively insensitive to changes in (and hence to uncertainties about) the values of individual parameters. The theory is illustrated with a model of the pharmacokinetics and metabolism of cyclophosphamide (CP), a drug widely used in cancer chemotherapy and as an immunosuppressive agent.
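The reduced-constant idea can be sketched numerically. In this minimal, hypothetical one-compartment linear system (not the paper's cyclophosphamide model), the output-to-input AUC ratio equals 1/k regardless of the input's shape:

```python
import numpy as np

def auc_ratio(input_rate, k=0.5, t_end=100.0, dt=0.001):
    # Hypothetical one-compartment linear system: dC/dt = u(t) - k*C(t).
    n = int(t_end / dt)
    t = np.arange(n) * dt
    C = np.zeros(n)
    for i in range(1, n):  # explicit Euler integration
        C[i] = C[i - 1] + dt * (input_rate(t[i - 1]) - k * C[i - 1])
    u = np.array([input_rate(ti) for ti in t])
    area = lambda y: float(y.sum()) * dt  # rectangle-rule AUC
    return area(C) / area(u)

r_bolus = auc_ratio(lambda t: 1.0 if t < 1.0 else 0.0)      # brief input
r_infusion = auc_ratio(lambda t: 0.1 if t < 10.0 else 0.0)  # slow input
# Both ratios approximate the reduced constant 1/k = 2.0,
# independent of the input's shape.
```

Estimating the single constant 1/k from measured AUCs sidesteps the need to identify each rate constant individually, which is the point the abstract makes for the general multi-compartment case.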

5.
In the quest to model various phenomena, the foundational importance of parameter identifiability to sound statistical modeling may be less well appreciated than goodness of fit. Identifiability concerns the quality of objective information in data to facilitate estimation of a parameter, while nonidentifiability means there are parameters in a model about which the data provide little or no information. In purely empirical models where parsimonious good fit is the chief concern, nonidentifiability (or parameter redundancy) implies overparameterization of the model. In contrast, nonidentifiability implies underinformativeness of available data in mechanistically derived models where parameters are interpreted as having strong practical meaning. This study explores illustrative examples of structural nonidentifiability and its implications using mechanistically derived models (for repeated presence/absence analyses and dose–response of Escherichia coli O157:H7 and norovirus) drawn from quantitative microbial risk assessment. Following algebraic proof of nonidentifiability in these examples, profile likelihood analysis and Bayesian Markov Chain Monte Carlo with uniform priors are illustrated as tools to help detect model parameters that are not strongly identifiable. It is shown that identifiability should be considered during experimental design and ethics approval to ensure generated data can yield strong objective information about all mechanistic parameters of interest. When Bayesian methods are applied to a nonidentifiable model, the subjective prior effectively fabricates information about any parameters about which the data carry no objective information. Finally, structural nonidentifiability can lead to spurious models that fit data well but can yield severely flawed inferences and predictions when they are interpreted or used inappropriately.
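Structural nonidentifiability is easy to exhibit on a toy model (illustrative only, not one of the paper's QMRA models): when the response depends on two parameters only through their product, no data set can separate them.

```python
import numpy as np

# Toy structurally nonidentifiable model: the response depends on
# a and b only through their product a*b.
def response(a, b, dose):
    return 1.0 - np.exp(-a * b * dose)

doses = np.array([0.1, 1.0, 10.0])
p1 = response(2.0, 0.5, doses)   # a*b = 1.0
p2 = response(0.1, 10.0, doses)  # a*b = 1.0 as well
# Identical predictions imply a flat likelihood ridge along a*b = const,
# which a profile likelihood (or a wide, prior-shaped MCMC posterior)
# would reveal.
```

This is the situation the abstract warns about: a Bayesian fit of (a, b) here would return posteriors shaped almost entirely by the prior, since the data constrain only the product.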

6.
In a series of articles and a health-risk assessment report, scientists at the CIIT Hamner Institutes developed a model (CIIT model) for estimating respiratory cancer risk due to inhaled formaldehyde within a conceptual framework incorporating extensive mechanistic information and advanced computational methods at the toxicokinetic and toxicodynamic levels. Several regulatory bodies have utilized predictions from this model; on the other hand, upon detailed evaluation the California EPA has decided against doing so. In this article, we study the CIIT model to identify key biological and statistical uncertainties that need careful evaluation if such two-stage clonal expansion models are to be used for extrapolation of cancer risk from animal bioassays to human exposure. Broadly, these issues pertain to the use and interpretation of experimental labeling index and tumor data, the evaluation and biological interpretation of estimated parameters, and uncertainties in model specification, in particular that of initiated cells. We also identify key uncertainties in the scale-up of the CIIT model to humans, focusing on assumptions underlying model parameters for cell replication rates and formaldehyde-induced mutation. We discuss uncertainties in identifying parameter values in the model used to estimate and extrapolate DNA protein cross-link levels. The authors of the CIIT modeling endeavor characterized their human risk estimates as "conservative in the face of modeling uncertainties." The uncertainties discussed in this article indicate that such a claim is premature.

7.
To address the multimodal transport route selection problem under a carbon-trading policy, where transport times and unit freight rates are uncertain and their probability distributions are unknown, a robust optimization modeling approach is introduced. Box uncertainty sets are first used to characterize the transport times and freight rates with unknown distributions. Then, building on a deterministic model under the carbon-trading policy, a route selection model with adjustable robustness is constructed, and a comparatively tractable robust counterpart is obtained through dualization. A case study shows that the robust model handles the route selection problem with unknown parameter distributions well, allowing decision makers to adjust the uncertainty budget according to their preferences. Uncertainty in transport times and in unit freight rates both affect multimodal routing decisions, but through different mechanisms. Extending the model from the carbon-trading policy to other low-carbon policies shows that a combination of low-carbon policies achieves greater emission reductions in multimodal transport.
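The adjustable-robustness idea can be sketched with a Bertsimas-Sim style uncertainty budget over leg costs. All numbers here are hypothetical, and the carbon-trading cost term of the full model is omitted:

```python
# Budgeted (box) uncertainty: at most gamma leg costs take their
# worst-case deviation simultaneously.
def robust_cost(nominal, deviation, gamma):
    # Worst case within the budget: the gamma largest deviations occur.
    worst = sorted(deviation, reverse=True)[:gamma]
    return sum(nominal) + sum(worst)

# Each candidate route: (nominal leg costs, maximum cost deviations).
routes = {
    "rail-ship": ([10.0, 8.0], [3.0, 4.0]),  # cheap but uncertain
    "road":      ([14.0, 5.0], [1.0, 1.0]),  # dearer but reliable
}
choice = {g: min(routes, key=lambda r: robust_cost(*routes[r], g))
          for g in (0, 1, 2)}  # uncertainty budget gamma
# A larger budget (more conservative decision maker) shifts the
# choice from the cheap route to the reliable one.
```

This mirrors the abstract's point: the decision maker tunes gamma to trade nominal cost against protection from the unknown-distribution uncertainty.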

8.
In pest risk assessment it is frequently necessary to make management decisions regarding emerging threats under severe uncertainty. Although risk maps provide useful decision support for invasive alien species, they rarely address knowledge gaps associated with the underlying risk model or how they may change the risk estimates. Failure to recognize uncertainty leads to risk‐ignorant decisions and miscalculation of expected impacts as well as the costs required to minimize these impacts. Here we use the information gap concept to evaluate the robustness of risk maps to uncertainties in key assumptions about an invading organism. We generate risk maps with a spatial model of invasion that simulates potential entries of an invasive pest via international marine shipments, their spread through a landscape, and establishment on a susceptible host. In particular, we focus on the question of how much uncertainty in risk model assumptions can be tolerated before the risk map loses its value. We outline this approach with an example of a forest pest recently detected in North America, Sirex noctilio Fabricius. The results provide a spatial representation of the robustness of predictions of S. noctilio invasion risk to uncertainty and show major geographic hotspots where the consideration of uncertainty in model parameters may change management decisions about a new invasive pest. We then illustrate how the dependency between the extent of uncertainties and the degree of robustness of a risk map can be used to select a surveillance network design that is most robust to knowledge gaps about the pest.

9.
Probabilistic safety analysis (PSA) has been used in nuclear, chemical, petrochemical, and several other industries. The probability and/or frequency results of most PSAs are based on average component unavailabilities during the mission of interest. While these average results are useful, they provide no indication of the significance of the facility's current status when one or more components are known to be out of service. Recently, several interactive computational models have been developed for nuclear power plants to allow the user to specify the plant's status at a particular time (i.e., to specify equipment known to be out of service) and then to receive updated PSA information. As with conventional PSA results, there are uncertainties associated with the numerical updated results. These uncertainties stem from a number of sources, including parameter uncertainty (uncertainty in equipment failure rates and human error probabilities). This paper presents an analysis of the impact of parameter uncertainty on updated PSA results.

10.
Mitchell J. Small. Risk Analysis, 2011, 31(10): 1561-1575
A methodology is presented for assessing the information value of an additional dosage experiment in existing bioassay studies. The analysis demonstrates the potential reduction in the uncertainty of toxicity metrics derived from expanded studies, providing insights for future studies. Bayesian methods are used to fit alternative dose‐response models using Markov chain Monte Carlo (MCMC) simulation for parameter estimation and Bayesian model averaging (BMA) is used to compare and combine the alternative models. BMA predictions for benchmark dose (BMD) are developed, with uncertainty in these predictions used to derive the lower bound BMDL. The MCMC and BMA results provide a basis for a subsequent Monte Carlo analysis that backcasts the dosage where an additional test group would have been most beneficial in reducing the uncertainty in the BMD prediction, along with the magnitude of the expected uncertainty reduction. Uncertainty reductions are measured in terms of reduced interval widths of predicted BMD values and increases in BMDL values that occur as a result of this reduced uncertainty. The methodology is illustrated using two existing data sets for TCDD carcinogenicity, fitted with two alternative dose‐response models (logistic and quantal‐linear). The example shows that an additional dose at a relatively high value would have been most effective for reducing the uncertainty in BMA BMD estimates, with predicted reductions in the widths of uncertainty intervals of approximately 30%, and expected increases in BMDL values of 5–10%. The results demonstrate that dose selection for studies that subsequently inform dose‐response models can benefit from consideration of how these models will be fit, combined, and interpreted.
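The two dose-response forms named above, and the model-averaging of their BMDs, can be sketched as follows. The fitted parameters and posterior model weights are hypothetical placeholders; the paper derives its weights and the BMDL from MCMC, which is omitted here.

```python
import math

BMR = 0.10  # benchmark response, defined here as extra risk

def bmd_quantal_linear(b):
    # P(d) = g + (1-g)*(1 - exp(-b*d)); the extra-risk equation
    # (P(d)-P(0))/(1-P(0)) = BMR has a closed-form solution.
    return -math.log(1.0 - BMR) / b

def bmd_logistic(a, b):
    # P(d) = 1/(1 + exp(-(a + b*d))); solve the extra-risk equation
    # numerically by bisection.
    p0 = 1.0 / (1.0 + math.exp(-a))
    target = BMR * (1.0 - p0) + p0
    lo, hi = 0.0, 1e6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        p = 1.0 / (1.0 + math.exp(-(a + b * mid)))
        lo, hi = (mid, hi) if p < target else (lo, mid)
    return 0.5 * (lo + hi)

# Hypothetical fitted parameters and posterior model weights:
bmds = {"quantal-linear": bmd_quantal_linear(0.05),
        "logistic": bmd_logistic(-3.0, 0.04)}
w = {"quantal-linear": 0.6, "logistic": 0.4}
bmd_bma = sum(w[m] * bmds[m] for m in bmds)  # model-averaged estimate
```

In the full methodology each MCMC draw from each model yields a BMD, and the BMA mixture of those draws gives the predictive interval whose lower bound is the BMDL.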

11.
In quantitative uncertainty analysis, it is essential to define rigorously the endpoint or target of the assessment. Two distinctly different approaches using Monte Carlo methods are discussed: (1) the endpoint is a fixed but unknown value (e.g., the maximally exposed individual, the average individual, or a specific individual) or (2) the endpoint is an unknown distribution of values (e.g., the variability of exposures among unspecified individuals in the population). In the first case, values are sampled at random from distributions representing various "degrees of belief" about the unknown "fixed" values of the parameters to produce a distribution of model results. The distribution of model results represents a subjective confidence statement about the true but unknown assessment endpoint. The important input parameters are those that contribute most to the spread in the distribution of the model results. In the second case, Monte Carlo calculations are performed in two dimensions, producing numerous alternative representations of the true but unknown distribution. These alternative distributions permit subjective confidence statements to be made from two perspectives: (1) for the individual exposure occurring at a specified fractile of the distribution or (2) for the fractile of the distribution associated with a specified level of individual exposure. The relative importance of input parameters will depend on the fractile or exposure level of interest. The quantification of uncertainty for the simulation of a true but unknown distribution of values represents the state-of-the-art in assessment modeling.
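The two-dimensional Monte Carlo scheme described above separates epistemic uncertainty (outer loop) from inter-individual variability (inner loop). A minimal sketch, with hypothetical distributions and sample sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
n_outer, n_inner = 200, 1000  # uncertainty x variability dimensions

# Outer loop: epistemic uncertainty about the population parameters
# (hypothetical degree-of-belief distributions for the log-mean and
# log-sd of an exposure distribution).
p95_estimates = []
for _ in range(n_outer):
    mu = rng.normal(1.0, 0.2)      # uncertain log-mean
    sigma = rng.uniform(0.4, 0.6)  # uncertain log-sd
    # Inner loop: variability of exposure among individuals.
    exposures = rng.lognormal(mu, sigma, n_inner)
    p95_estimates.append(np.percentile(exposures, 95))

# Subjective 90% confidence interval for the true but unknown
# 95th-percentile exposure (a fixed fractile of the distribution):
lo, hi = np.percentile(p95_estimates, [5, 95])
```

Each outer iteration yields one alternative representation of the unknown variability distribution; collecting a fractile across those representations gives the confidence statement described in the abstract.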

12.
Industrial societies have altered the earth's environment in ways that could have important, long-term ecological, economic, and health implications. In this paper, we examine the extent to which uncertainty about global climate change could impact the precision of predictions of secondary outcomes such as health impacts of pollution. Using a model that links global climate change with predictions of chemical exposure and human health risk in the Western region of the United States of America (U.S.), we define parameter variabilities and uncertainties and we characterize the resulting outcome variance. As a case study, we consider the public health consequences from releases of hexachlorobenzene (HCB), a ubiquitous multimedia pollutant. By constructing a matrix that links global environmental change both directly and indirectly to potential human-health effects attributable to HCB released into air, soil, and water, we define critical parameter variances in the health risk estimation process. We employ a combined uncertainty/sensitivity analysis to investigate how HCB releases are affected by increasing atmospheric temperature and the accompanying climate alterations that are anticipated. We examine how such uncertainty impacts both the expected magnitude and calculational precision of potential human exposures and health effects. This assessment reveals that uncertain temperature increases of up to 5°C have little impact on either the magnitude or precision of the public-health consequences estimated under existing climate variations for HCB released into air and water in the Western region of the U.S.

13.
In this work, we study the effect of epistemic uncertainty on the ranking and categorization of elements of probabilistic safety assessment (PSA) models. We show that, while in a deterministic setting a PSA element belongs to a given category univocally, in the presence of epistemic uncertainty, a PSA element belongs to a given category only with a certain probability. We propose an approach to estimate these probabilities, showing that their knowledge allows one to appreciate "the sensitivity of component categorizations to uncertainties in the parameter values" (U.S. NRC Regulatory Guide 1.174). We investigate the meaning and utilization of an assignment method based on the expected value of importance measures. We discuss the problem of evaluating changes in quality assurance, maintenance activities prioritization, etc. in the presence of epistemic uncertainty. We show that the inclusion of epistemic uncertainty in the evaluation makes it necessary to evaluate changes through their effect on PSA model parameters. We propose a categorization of parameters based on the Fussell-Vesely and differential importance (DIM) measures. In addition, issues arise in the calculation of the expected value of the joint importance measure when evaluating changes affecting groups of components. We illustrate that the problem can be solved using DIM. A numerical application to a case study concludes the work.
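A minimal illustration of category membership becoming probabilistic under epistemic uncertainty, using the Fussell-Vesely measure on a hypothetical three-component system (not the paper's case study):

```python
import numpy as np

rng = np.random.default_rng(1)

def risk(p):
    # Hypothetical system: components 1 and 2 are a redundant pair,
    # in series with component 3.
    p1, p2, p3 = p
    return 1.0 - (1.0 - p1 * p2) * (1.0 - p3)

def fussell_vesely(p):
    # FV_i = fractional risk reduction when component i is perfect.
    r = risk(p)
    fv = []
    for i in range(3):
        q = list(p)
        q[i] = 0.0
        fv.append((r - risk(q)) / r)
    return np.array(fv)

# Epistemic uncertainty: sample failure probabilities from (hypothetical)
# lognormal distributions, then estimate the probability that each
# component's FV exceeds a categorization threshold of 0.5.
samples = rng.lognormal(mean=np.log([0.05, 0.05, 0.001]),
                        sigma=0.5, size=(5000, 3))
above = np.mean([fussell_vesely(p) > 0.5 for p in samples], axis=0)
# `above[i]` is the probability that component i falls in the
# "high importance" category, rather than a yes/no assignment.
```

With point-value parameters each component would land in one category univocally; under the sampled uncertainty the assignment is a probability, which is the paper's central observation.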

14.
This paper addresses the use of data for identifying and characterizing uncertainties in model parameters and predictions. The Bayesian Monte Carlo method is formally presented and elaborated, and applied to the analysis of the uncertainty in a predictive model for global mean sea level change. The method uses observations of output variables, made with an assumed error structure, to determine a posterior distribution of model outputs. This is used to derive a posterior distribution for the model parameters. Results demonstrate the resolution of the uncertainty that is obtained as a result of the Bayesian analysis and also indicate the key contributors to the uncertainty in the sea level rise model. While the technique is illustrated with a simple, preliminary model, the analysis provides an iterative framework for model refinement. The methodology developed in this paper provides a mechanism for the incorporation of ongoing data collection and research in decision-making for problems involving uncertain environmental change.
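The core weighting step of Bayesian Monte Carlo can be sketched on a toy linear model; the sea-level model, error structure, and data below are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model: output y = a * t, with one observation of y at t = 10
# made with Gaussian measurement error (all values hypothetical).
obs_t, obs_y, obs_sd = 10.0, 3.0, 0.5

a_prior = rng.normal(0.5, 0.2, 20000)   # prior samples of the rate a
pred = a_prior * obs_t                  # model output for each sample
like = np.exp(-0.5 * ((pred - obs_y) / obs_sd) ** 2)
w = like / like.sum()                   # Bayesian Monte Carlo weights

# Posterior summaries of the parameter, via the weighted prior samples:
post_mean = np.sum(w * a_prior)
post_sd = np.sqrt(np.sum(w * (a_prior - post_mean) ** 2))
# The observation pulls the rate toward obs_y/obs_t = 0.3 and
# shrinks its uncertainty well below the prior sd of 0.2.
```

Reweighting prior samples by the likelihood of the observed outputs is what turns the output-space comparison into a posterior over the parameters, as the abstract describes.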

15.
The BMD (benchmark dose) method that is used in risk assessment of chemical compounds was introduced by Crump (1984) and is based on dose-response modeling. To take uncertainty in the data and model fitting into account, the lower confidence bound of the BMD estimate (BMDL) is suggested to be used as a point of departure in health risk assessments. In this article, we study how to design optimum experiments for applying the BMD method for continuous data. We exemplify our approach by considering the class of Hill models. The main aim is to study whether an increased number of dose groups and at the same time a decreased number of animals in each dose group improves conditions for estimating the benchmark dose. Since Hill models are nonlinear, the optimum design depends on the values of the unknown parameters. That is why we consider Bayesian designs and assume that the parameter vector has a prior distribution. A natural design criterion is to minimize the expected variance of the BMD estimator. We present an example where we calculate the value of the design criterion for several designs and try to find out how the number of dose groups, the number of animals in the dose groups, and the choice of doses affects this value for different Hill curves. It follows from our calculations that to avoid the risk of unfavorable dose placements, it is good to use designs with more than four dose groups. We can also conclude that any additional information about the expected dose-response curve, e.g., information obtained from studies made in the past, should be taken into account when planning a study because it can improve the design.

16.
The choice of a dose-response model is decisive for the outcome of quantitative risk assessment. Single-hit models have played a prominent role in dose-response assessment for pathogenic microorganisms since their introduction. Hit theory models are based on a few simple concepts that are attractive for their clarity and plausibility. These models, in particular the Beta Poisson model, are used for extrapolation of experimental dose-response data to low doses, as are often present in drinking water or food products. Unfortunately, the Beta Poisson model, as it is used throughout the microbial risk literature, is an approximation whose validity is not widely known. The exact functional relation is numerically complex, especially for use in optimization or uncertainty analysis. Here it is shown that although the discrepancy between the Beta Poisson formula and the exact function is not very large for many data sets, the differences are greatest at low doses--the region of interest for many risk applications. Errors may become very large, however, in the results of uncertainty analysis, or when the data contain little low-dose information. One striking property of the exact single-hit model is that it has a maximum risk curve, limiting the upper confidence level of the dose-response relation. This is due to the fact that the risk cannot exceed the probability of exposure, a property that is not retained in the Beta Poisson approximation. This maximum possible response curve is important for uncertainty analysis, and for risk assessment of pathogens with unknown properties.
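The gap between the common Beta Poisson formula and the exact single-hit expectation can be checked numerically. The parameters below are hypothetical, and the exact form is estimated by Monte Carlo over the beta distribution rather than via the confluent hypergeometric function 1F1:

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, beta = 0.3, 10.0  # hypothetical dose-response parameters

def beta_poisson_approx(dose):
    # The approximation used throughout the literature:
    # P = 1 - (1 + d/beta)^(-alpha)
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

def single_hit_exact(dose, n=200000):
    # Exact single-hit risk with Poisson dose and beta-distributed
    # single-organism infectivity r: P = 1 - E[exp(-r*d)],
    # estimated here by Monte Carlo over r ~ Beta(alpha, beta).
    r = rng.beta(alpha, beta, n)
    return 1.0 - np.exp(-r * dose).mean()

doses = [0.01, 0.1, 1.0]
pairs = [(beta_poisson_approx(d), single_hit_exact(d)) for d in doses]
# For these parameters the approximation overstates the risk, and the
# relative discrepancy is largest in the low-dose region.
```

The exact expectation also respects the bound discussed above (risk cannot exceed the probability that at least one organism is ingested), which the closed-form approximation does not.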

17.
There has been increasing interest in physiologically based pharmacokinetic (PBPK) models in the area of risk assessment. The use of these models raises two important issues: (1) How good are PBPK models for predicting experimental kinetic data? (2) How is the variability in the model output affected by the number of parameters and the structure of the model? To examine these issues, we compared a five-compartment PBPK model, a three-compartment PBPK model, and nonphysiological compartmental models of benzene pharmacokinetics. Monte Carlo simulations were used to take into account the variability of the parameters. The models were fitted to three sets of experimental data and a hypothetical experiment was simulated with each model to provide a uniform basis for comparison. Two main results are presented: (1) the difference is larger between the predictions of the same model fitted to different data sets than between the predictions of different models fitted to the same data; and (2) the type of data used to fit the model has a larger effect on the variability of the predictions than the type of model and the number of parameters.

18.
The Monte Carlo (MC) simulation approach is traditionally used in food safety risk assessment to study quantitative microbial risk assessment (QMRA) models. When experimental data are available, performing Bayesian inference is a good alternative approach that allows backward calculation in a stochastic QMRA model to update the experts’ knowledge about the microbial dynamics of a given food‐borne pathogen. In this article, we propose a complex example where Bayesian inference is applied to a high‐dimensional second‐order QMRA model. The case study is a farm‐to‐fork QMRA model considering genetic diversity of Bacillus cereus in a cooked, pasteurized, and chilled courgette purée. Experimental data are Bacillus cereus concentrations measured in packages of courgette purées stored at different time‐temperature profiles after pasteurization. To perform a Bayesian inference, we first built an augmented Bayesian network by linking a second‐order QMRA model to the available contamination data. We then ran a Markov chain Monte Carlo (MCMC) algorithm to update all the unknown concentrations and unknown quantities of the augmented model. About 25% of the prior beliefs are strongly updated, leading to a reduction in uncertainty. Some updates interestingly question the QMRA model.

19.
Call an economic model incomplete if it does not generate a probabilistic prediction even given knowledge of all parameter values. We propose a method of inference about unknown parameters for such models that is robust to heterogeneity and dependence of unknown form. The key is a Central Limit Theorem for belief functions; robust confidence regions are then constructed in a fashion paralleling the classical approach. Monte Carlo simulations support tractability of the method and demonstrate its enhanced robustness relative to existing methods.

20.
Physiologically-based toxicokinetic (PBTK) models are widely used to quantify whole-body kinetics of various substances. However, since they attempt to reproduce anatomical structures and physiological events, they have a high number of parameters. Their identification from kinetic data alone is often impossible, and other information about the parameters is needed to render the model identifiable. The most commonly used approach consists of independently measuring, or taking from literature sources, some of the parameters, fixing them in the kinetic model, and then performing model identification on a reduced number of less certain parameters. This results in a substantial reduction of the degrees of freedom of the model. In this study, we show that this method results in final estimates of the free parameters whose precision is overestimated. We then compared this approach with an empirical Bayes approach, which takes into account not only the mean value, but also the error associated with the independently determined parameters. Blood and breath 2H8-toluene washout curves, obtained in 17 subjects, were analyzed with a previously presented PBTK model suitable for person-specific dosimetry. Model parameters with the greatest effect on predicted levels were alveolar ventilation rate QPC, fat tissue fraction VFC, blood-air partition coefficient Kb, fraction of cardiac output to fat Qa/co, and rate of extrahepatic metabolism Vmax-p. Differences in the measured and Bayesian-fitted values of QPC, VFC, and Kb were significant (p < 0.05), and the precision of the fitted values Vmax-p and Qa/co went from 11 ± 5% to 75 ± 170% (NS) and from 8 ± 2% to 9 ± 2% (p < 0.05), respectively. The empirical Bayes approach did not result in less reliable parameter estimates: rather, it pointed out that the precision of parameter estimates can be overly optimistic when other parameters in the model, either directly measured or taken from literature sources, are treated as known without error. In conclusion, an empirical Bayes approach to parameter estimation resulted in a better model fit, different final parameter estimates, and more realistic parameter precisions.


Copyright©北京勤云科技发展有限公司    京ICP备09084417号-23

京公网安备 11010802026262号