Similar Articles
20 similar articles found.
1.
We present a web-based probability distribution elicitation tool: the MATCH Uncertainty Elicitation Tool. The Tool is designed to help elicit probability distributions for uncertain model parameters from experts, in situations where suitable data are unavailable or sparse. The Tool is free to use and offers five different techniques for eliciting univariate probability distributions. A key feature of the Tool is that users can log in from different sites and view and interact with the same graphical displays, so that expert elicitation sessions can be conducted remotely (in conjunction with tele- or videoconferencing). This makes probability elicitation easier in situations where it is difficult to interview experts in person. Even when conducting elicitation remotely, interviewers can follow good elicitation practice, advise the experts, and provide instantaneous feedback and assistance.
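As a hedged illustration of the kind of technique such a tool offers (not the MATCH implementation itself), the sketch below fits a beta distribution to quantiles an expert might state; the quantile values, the beta family, and the least-squares fit are all assumptions made for the example.

```python
# A minimal sketch of quantile-based distribution elicitation: find beta
# shape parameters whose quantiles best match the expert's judgements.
import numpy as np
from scipy import stats, optimize

probs = np.array([0.05, 0.50, 0.95])        # elicited percentile levels
elicited = np.array([0.10, 0.30, 0.60])     # hypothetical expert quantiles in [0, 1]

def loss(params):
    a, b = np.exp(params)                   # keep shape parameters positive
    return np.sum((stats.beta.ppf(probs, a, b) - elicited) ** 2)

res = optimize.minimize(loss, x0=np.log([2.0, 2.0]), method="Nelder-Mead")
a, b = np.exp(res.x)
print(f"fitted Beta({a:.2f}, {b:.2f})")     # feed back to the expert for review
```

In a live session the fitted density would be plotted back to the expert for confirmation or revision, which is the feedback loop the Tool supports remotely.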

2.
In this study, we propose a learning algorithm for ordinal regression problems. Most existing learning algorithms assume the threshold or location model as the statistical model. To estimate the conditional probability of labels for a given covariate vector, we extend the location model to location-scale models for ordinal regression. We present a learning algorithm using the squared-loss function with location-scale models for estimating this conditional probability, and we prove that the estimated conditional probability satisfies the monotonicity of the distribution function. Furthermore, we conducted numerical experiments comparing the proposed methods with existing approaches. We found that, in its ability to predict labels, our method may not have an advantage over existing approaches; for estimating conditional probabilities, however, it outperforms the learning algorithm using location models.
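A minimal sketch of the location-scale construction described above, assuming a probit-style link (the paper's exact loss and fitting procedure are not reproduced here); the fitted values of mu(x), sigma(x) and the thresholds are hypothetical. Because the thresholds are ordered, the implied distribution function is monotone, the property the paper proves for its estimator.

```python
# P(Y <= k | x) = F((theta_k - mu(x)) / sigma(x)) with ordered thresholds;
# label probabilities are successive differences of this CDF.
import numpy as np
from scipy.stats import norm

def label_probs(mu, sigma, thresholds):
    """Conditional probabilities of K ordinal labels for one covariate vector."""
    cdf = norm.cdf((np.asarray(thresholds) - mu) / sigma)
    cdf = np.concatenate(([0.0], cdf, [1.0]))   # F(-inf) = 0, F(inf) = 1
    return np.diff(cdf)                          # P(Y = k) for k = 1..K

# Hypothetical fitted values: mu(x) = 0.4, sigma(x) = 1.2, K = 4 labels.
print(label_probs(0.4, 1.2, thresholds=[-1.0, 0.0, 1.0]))
```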

3.
We study the computational aspects of information elicitation mechanisms in which a principal attempts to elicit the private information of other agents using a carefully selected payment scheme based on proper scoring rules. Scoring rules, like many other mechanisms set in a probabilistic environment, assume that all participating agents share some common belief about the underlying probability of events. In real-life situations, however, the underlying distributions are not known precisely, and small differences in agents' beliefs about these distributions may alter their behavior under the prescribed mechanism. We examine two related models for the problem. The first model assumes that agents have a similar notion of the probabilities of events, and we show that this approach leads to efficient design algorithms that produce mechanisms robust to small changes in the beliefs of agents. In the second model, we provide the designer with a more precise, discrete set of alternative beliefs that the seller of information may hold. We show that constructing an optimal mechanism in that case is computationally hard, and indeed hard to approximate to within any constant factor. For this model, we provide two very different exponential-time algorithms for the design problem with different asymptotic running times, each suited to a different set of cases. Finally, we examine elicitation mechanisms that elicit the seller's confidence rating in its information.
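As a hedged illustration of the payment schemes involved (the paper's constructions are more elaborate), the sketch below computes a payment from the quadratic (Brier) proper scoring rule, under which a risk-neutral agent maximizes its expected payment by reporting its true belief; the belief vector and event set are hypothetical.

```python
# Quadratic (Brier) proper scoring rule: pay based on reported probabilities
# and the realized outcome; truthful reporting maximizes expected payment.
import numpy as np

def brier_payment(report, outcome):
    """report: probabilities over n events; outcome: index of realized event."""
    y = np.zeros_like(report)
    y[outcome] = 1.0
    return 1.0 - np.sum((report - y) ** 2)   # affine variant; higher is better

belief = np.array([0.7, 0.2, 0.1])           # hypothetical true belief
print(brier_payment(belief, outcome=0))      # payment if event 0 occurs: 0.86
```

The robustness question the paper studies is what happens to such a scheme when the designer's and the agent's beliefs about the event distribution differ slightly.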

4.
Dealing with the expert inconsistency in probability elicitation
In this paper, we present and discuss our experience with the task of probability elicitation from experts for the purpose of belief network construction. In our study, we applied four techniques. Three of these techniques are available from the literature, whereas the fourth is a technique that we developed by adapting a method for the assessment of preferences to the task of probability elicitation. The new technique is based on the analytic hierarchy process (AHP) proposed by Saaty (1980, 1994), and it allows for the quantitative assessment of expert inconsistency. The method is, in our opinion, very promising and lends itself to being applied more extensively to the task of probability elicitation.
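A minimal sketch, assuming Saaty's standard eigenvector formulation, of the AHP consistency measurement the new technique builds on: the principal eigenvector of a reciprocal pairwise-comparison matrix gives the weights, and the principal eigenvalue quantifies how inconsistent the expert's judgements are. The comparison matrix below is hypothetical.

```python
# AHP priorities and consistency ratio from a pairwise-comparison matrix.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])              # hypothetical expert comparisons

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # normalized priorities
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)          # consistency index
cr = ci / 0.58                                # Saaty's random index RI = 0.58 for n = 3
print(weights, f"CR = {cr:.3f}")              # CR < 0.1 is conventionally acceptable
```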

5.
Expert elicitation is the process of retrieving and quantifying expert knowledge in a particular domain. Such information is of particular value when empirical data are expensive, limited or unreliable. This paper describes a new software tool, called Elicitator, which assists in quantifying expert knowledge in a form suitable for use as a prior model in Bayesian regression. Potential environmental domains for applying this elicitation tool include habitat modelling, assessing detectability or eradication, ecological condition assessments, risk analysis and quantifying inputs to complex models of ecological processes. The tool has been developed to be user-friendly and extensible, and to facilitate consistent and repeatable elicitation of expert knowledge across these various domains. We demonstrate its application to elicitation for logistic regression in a geographically based ecological context. The underlying statistical methodology is also novel, utilizing an indirect elicitation approach to target expert knowledge on a case-by-case basis. For several elicitation sites (or cases), experts are asked simply to quantify their estimated ecological response (e.g. probability of presence), and its range of plausible values, after inspecting (habitat) covariates via GIS.
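A hedged sketch of the indirect-elicitation idea: the expert states a best estimate and a plausible range for the probability of presence at a site, and these are translated to a normal distribution on the logit scale, where logistic regression operates. The exact encoding used by Elicitator may differ; the interval interpretation and all numbers here are assumptions.

```python
# Convert an elicited probability and plausible range into a logit-scale prior.
import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

best, lo, hi = 0.6, 0.3, 0.85                  # hypothetical expert judgements
mu = logit(best)
sigma = (logit(hi) - logit(lo)) / (2 * 1.96)   # range read as a ~95% interval
print(f"logit-scale prior: Normal({mu:.2f}, {sigma:.2f}^2)")
```

Repeating this across several sites with differing covariates yields the case-by-case information from which a prior on the regression coefficients can be assembled.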

6.
Managing uncertainty during the knowledge engineering process, from elicitation to validation and verification, requires a flexible, intuitive, and semantically sound knowledge representation. This is especially important since the process is typically highly interactive, with the human user adding, updating, and maintaining knowledge. In this paper, we present a model of knowledge representation called Bayesian Knowledge-Bases (BKBs), which unifies 'if-then' style rules with probability theory. We also consider the computational efficiency of reasoning over BKBs, and we show that through careful construction of the knowledge base, reasoning is computationally tractable and can in fact be polynomial-time. BKBs are currently fielded in the PESKI intelligent system development environment.

7.
Representation of uncertain knowledge by using a Bayesian network requires the acquisition of a conditional probability table (CPT) for each variable. The CPT can be acquired by data mining or elicitation. When data are insufficient to support mining, causal modeling such as the noisy-OR aids elicitation by reducing the number of probability parameters to be acquired from human experts. Multiple causes can reinforce each other in producing the effect or can undermine each other's impact. Most existing causal models do not consider causal interactions from the perspective of reinforcement or undermining; our analysis shows that none of them can represent both types of interaction. Except for the RNOR, the other models also limit parameters to probabilities of single-cause events. We present the first general causal model, the nonimpeding noisy-AND tree, that allows encoding of both reinforcement and undermining. It supports efficient CPT acquisition by eliciting a partial ordering of causes in the form of a tree topology, plus the necessary numerical parameters. It also allows the incorporation of probabilities for multicause events.
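The following sketch contrasts the two causal interactions discussed above, using the classic noisy-OR for reinforcement and a simple noisy-AND-style combination for undermining; the per-cause probabilities are hypothetical, and the paper's nonimpeding noisy-AND tree is more general than either pattern alone.

```python
# Reinforcement vs. undermining among causes of a common effect.
def noisy_or(probs_active):
    """P(effect | active causes) under reinforcement: more causes never hurt."""
    q = 1.0
    for p in probs_active:
        q *= (1.0 - p)
    return 1.0 - q

def noisy_and(probs_active):
    """One simple undermining pattern: every active cause must 'succeed'."""
    q = 1.0
    for p in probs_active:
        q *= p
    return q

print(noisy_or([0.8, 0.6]))    # 0.92: at least as high as each single cause
print(noisy_and([0.8, 0.6]))   # 0.48: lower than each single cause
```

Either way, the expert supplies only one parameter per cause rather than a full CPT row for every combination of causes, which is the elicitation saving these models provide.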

8.
In general, an information security risk assessment (ISRA) method produces risk estimates, where risk is the product of the probability of occurrence of an event and the associated consequences for the given organization. ISRA practices vary among industries and disciplines, resulting in various approaches and methods for risk assessment. Several methods exist for comparing ISRA methods, but these are scoped to compare the content of the methods against a predefined set of criteria, rather than the process tasks to be carried out and the issues a method is designed to address. It is this lack of an all-inclusive and comprehensive comparison that motivates this work. This paper proposes the Core Unified Risk Framework (CURF) as an all-inclusive approach to comparing different methods; all-inclusive because we grew CURF organically by adding new issues and tasks from each reviewed method. If a task or issue was present in a surveyed ISRA method but not in CURF, it was appended to the model, thus yielding a measure of completeness for the studied methods. The scope of this work is primarily functional risk assessment procedures, that is, the formal ISRA methods that focus on assessments of assets, threats, vulnerabilities, and protections, often with measures of probability and consequence. The proposed approach allowed for a detailed qualitative comparison of the processes and activities in each method and provided a measure of completeness. This study does not address aspects beyond risk identification, estimation, and evaluation; considering all three activities together, we found "ISO/IEC 27005 Information Security Risk Management" to be the most complete approach at present. For risk estimation only, we found the Factor Analysis of Information Risk and ISO/IEC 27005:2011 to be the most complete frameworks. In addition, this study identifies and analyzes several gaps in the surveyed methods.
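As a minimal illustration of the quantitative core shared by these methods, risk as the product of event probability and consequence, here is a toy risk register; the entries and numbers are entirely hypothetical.

```python
# Toy risk register: risk = probability of occurrence x consequence.
events = [
    {"name": "phishing compromise", "p": 0.30, "consequence": 50_000},
    {"name": "ransomware outage",   "p": 0.05, "consequence": 400_000},
]
for e in events:
    e["risk"] = e["p"] * e["consequence"]        # expected loss per period
for e in sorted(events, key=lambda e: -e["risk"]):
    print(f'{e["name"]}: {e["risk"]:,.0f}')      # rank risks for treatment
```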

9.
Two problems may arise when an intelligent (recommender) system elicits users' preferences. First, there may be a mismatch between the quantitative preference representations in most preference models and the users' mental preference models. Giving exact numbers, e.g., "I like 30 days of vacation 2.5 times better than 28 days", is difficult for people. Second, the elicitation process can greatly influence the acquired model (e.g., people may prefer different options based on whether a choice is represented as a loss or gain). We explored these issues in three studies. In the first experiment we presented users with different preference elicitation methods and found that cognitively less demanding methods were perceived low in effort and high in liking. However, for methods enabling users to be more expressive, the perceived effort was not an indicator of how much the methods were liked. We thus hypothesized that users are willing to spend more effort if the feedback mechanism enables them to be more expressive. We examined this hypothesis in two follow-up studies. In the second experiment, we explored the trade-off between giving detailed preference feedback and effort. We found that familiarity with and opinion about an item are important factors mediating this trade-off. Additionally, affective feedback was preferred over a finer-grained one-dimensional rating scale for giving additional detail. In the third study, we explored the influence of the interface on the elicitation process in a participatory set-up. People considered it helpful to be able to explore the link between their interests, preferences and the desirability of outcomes. We also confirmed that people do not want to spend additional effort in cases where it seemed unnecessary. Based on the findings, we propose four design guidelines to foster interface design of preference elicitation from a user view.

10.
We present a new approach for the elicitation and development of security requirements across the entire data warehouse (DW) life cycle, which we have called a Secure Engineering process for DAta WArehouses (SEDAWA). Whilst many methods for the requirements analysis phase of DWs have been proposed, the elicitation of security requirements as non-functional requirements has not received sufficient attention. Hence, in this paper we propose a methodology for DW design based on Model Driven Architecture (MDA) and the standard Software Process Engineering Metamodel Specification (SPEM) from the Object Management Group (OMG). We define four phases comprising several activities and steps, and five disciplines which cover the whole DW design. Our methodology adapts the i* framework for use under the MDA and SPEM approaches in order to elicit and develop security requirements for DWs. The benefits of our proposal are shown through an example related to the management of a pharmacy consortium's business.

11.
Normalized mutual information (NMI) is a widely used measure for comparing community detection methods. Recently, however, the need for adjustment of information theory-based measures has been argued because of the so-called selection bias problem, that is, their tendency to choose clustering solutions with more communities. In this article, an experimental evaluation of these measures is performed to investigate the problem in depth, and an adjustment that scales their values is proposed. Experiments on synthetic networks, for which the ground-truth division is known, highlight that scaled NMI does not exhibit the selection bias behavior. Moreover, a comparison among some well-known community detection methods on synthetically generated networks shows a fairer behavior of scaled NMI, especially when the network topology does not present a clear community structure. Experiments on two real-world networks also reveal that the corrected formula allows one to choose, from a set of methods, the one finding a network division that better reflects the ground-truth structure.
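A small sketch of the unadjusted measure under discussion, using scikit-learn's NMI implementation on hypothetical labelings: a partition that over-splits the ground truth can still score high, which is the kind of bias the paper's scaled NMI corrects (the scaling formula itself is the paper's and is not reproduced here).

```python
# Unadjusted NMI rewards a partition even when it over-splits the truth.
from sklearn.metrics import normalized_mutual_info_score

truth  = [0, 0, 0, 0, 1, 1, 1, 1]
coarse = [0, 0, 0, 0, 1, 1, 1, 1]   # matches the ground truth exactly
fine   = [0, 0, 1, 1, 2, 2, 3, 3]   # splits each true community in two
print(normalized_mutual_info_score(truth, coarse))  # 1.0
print(normalized_mutual_info_score(truth, fine))    # ~0.67 despite over-splitting
```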

12.
In this work we present, apply, and evaluate a novel, interactive visualization model for comparative analysis of structural variants and rearrangements in human and cancer genomes, with emphasis on data integration and uncertainty visualization. To support both global trend analysis and local feature detection, this model enables explorations continuously scaled from the high-level, complete-genome perspective down to the low-level, structural-rearrangement view, while preserving global context at all times. We have implemented these techniques in Gremlin, a genomic rearrangement explorer with multi-scale, linked interactions, which we apply to four human cancer genome data sets for evaluation. Using an insight-based evaluation methodology, we compare Gremlin to Circos, the state of the art in genomic rearrangement visualization, through a small user study with computational biologists working in rearrangement analysis. Results from the user study demonstrate that this visualization model enables more total insights, more insights per minute, and more complex insights than the current state of the art for visual analysis and exploration of genome rearrangements.

13.
We consider soft constraint problems where some of the preferences may be unspecified. This models, for example, settings where agents are distributed and have privacy concerns, or where there is an ongoing preference elicitation process. In this context, we study how to find an optimal solution without having to wait for all the preferences. In particular, we define algorithms that interleave search and preference elicitation to find a solution which is necessarily optimal, that is, optimal no matter what the missing data turn out to be, with the aim of asking the user to reveal as few preferences as possible. We define a combined solving and preference elicitation scheme with a large number of different instantiations, each corresponding to a concrete algorithm, which we compare experimentally. We measure both the number of elicited preferences and the user effort, which may be larger, as it comprises all the preference values the user has to compute in order to respond to the elicitation requests. While the number of elicited preferences matters when the concern is to communicate as little information as possible, the user effort also captures the hidden work the user must do to be able to communicate the elicited preferences. Our experimental results on classical, fuzzy, weighted and temporal incomplete CSPs show that some of our algorithms are very good at finding a necessarily optimal solution while asking the user for only a very small fraction of the missing preferences. The user effort is also very small for the best algorithms.
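A toy sketch of the "necessarily optimal" test driving such algorithms: with unknown preferences bounded (here in [0, 1], an assumption), a candidate is necessarily optimal if its worst-case value is at least every rival's best-case value, and elicitation is triggered only when no candidate passes. The values are hypothetical, and the real algorithms interleave this test with search rather than enumerating assignments.

```python
# Necessarily-optimal check over a tiny table of complete assignments.
LB, UB = 0.0, 1.0
prefs = {                      # value of each assignment; None = not yet elicited
    ("a1", "b1"): 0.9,
    ("a1", "b2"): None,
    ("a2", "b1"): 0.4,
    ("a2", "b2"): 0.7,
}

def worst(s): return prefs[s] if prefs[s] is not None else LB
def best(s):  return prefs[s] if prefs[s] is not None else UB

def necessarily_optimal():
    return [s for s in prefs
            if all(worst(s) >= best(t) for t in prefs if t != s)]

print(necessarily_optimal())   # [] -> the unknown value blocks a decision
prefs[("a1", "b2")] = 0.6      # so we elicit that one preference from the user
print(necessarily_optimal())   # [('a1', 'b1')] -> done; a single query sufficed
```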

14.
Conventional methods for state space exploration are limited to the analysis of small systems because they suffer from excessive memory and computational requirements. We have developed a new dynamic probabilistic state exploration algorithm which addresses this problem for general, structurally unrestricted state spaces.

Our method has a low state omission probability and low memory usage that is independent of the length of the state vector. In addition, the algorithm can be easily parallelised. This combination of probability and parallelism enables us to rapidly explore state spaces that are an order of magnitude larger than those obtainable using conventional exhaustive techniques.

We derive a performance model of this new algorithm in order to quantify its benefits in terms of distributed run-time, speedup and efficiency. We implement our technique on a distributed-memory parallel computer and demonstrate results which compare favourably with the performance model. Finally, we discuss suitable choices for the three hash functions upon which our algorithm is based.
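A hedged sketch of the memory-saving idea behind such probabilistic exploration: mark k hash positions per state in a fixed bit array instead of storing full state vectors, accepting a small probability that a new state is wrongly treated as visited (omission). The paper's actual scheme and its three hash functions differ in detail; the construction below is illustrative.

```python
# Bitstate-style visited set: memory use is independent of state-vector length.
import hashlib

M = 1 << 20                      # bit-array size in bits
bits = bytearray(M // 8)

def hashes(state, k=3):
    """Derive k bit positions per state; illustrative, not the paper's functions."""
    for i in range(k):
        h = hashlib.sha256(f"{i}:{state}".encode()).digest()
        yield int.from_bytes(h[:8], "big") % M

def seen_or_mark(state):
    """Return True if state was (probably) visited already; otherwise mark it."""
    idx = list(hashes(state))
    seen = all(bits[i // 8] & (1 << (i % 8)) for i in idx)
    for i in idx:
        bits[i // 8] |= 1 << (i % 8)
    return seen

print(seen_or_mark("s0"))   # False: new state, now marked
print(seen_or_mark("s0"))   # True: all k bits already set
```

The omission probability falls as the bit array grows and as k is tuned to the expected state count, which is the trade-off the performance model quantifies.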


15.
We experimented with statecharts as a mediation tool between multiple experts and a knowledge engineer. After a short survey of knowledge elicitation methods for multiple experts, we present our method for assessing the quality of the elicited model, and we offer critiques based on our case study in vineyard crop protection management.

16.
Multi-label classification exhibits several challenges not present in the binary case. The labels may be interdependent, so that the presence of a certain label affects the probability of other labels' presence. Thus, exploiting dependencies among the labels could be beneficial for the classifier's predictive performance. Surprisingly, only a few of the existing algorithms address this issue directly by identifying dependent labels explicitly from the dataset. In this paper we propose new approaches for identifying and modeling existing dependencies between labels. One principal contribution of this work is a theoretical confirmation of the reduction in sample complexity that is gained from unconditional dependence. Additionally, we develop methods for identifying conditionally and unconditionally dependent label pairs; clustering them into several mutually exclusive subsets; and finally, performing multi-label classification incorporating the discovered dependencies. We compare these two notions of label dependence (conditional and unconditional) and evaluate their performance on various benchmark and artificial datasets. We also compare and analyze the labels identified as dependent by each of the methods. Moreover, we define an ensemble framework for the new methods and compare it to existing ensemble methods. An empirical comparison of the new approaches to existing baseline and state-of-the-art methods on 12 benchmark datasets demonstrates that in many cases the proposed single-classifier and ensemble methods outperform many multi-label classification algorithms. Perhaps surprisingly, we discover that the weaker notion of unconditional dependence plays the decisive role.
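As a hedged illustration of identifying an unconditionally dependent label pair (the paper's identification procedures may differ), the sketch below applies a chi-squared test to the 2x2 co-occurrence table of two labels; the data are hypothetical.

```python
# Flag an unconditionally dependent label pair via a chi-squared test.
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical dataset: each row is one sample's values for labels A and B.
Y = [(1, 1)] * 8 + [(0, 0)] * 8 + [(1, 0)] * 2 + [(0, 1)] * 2

table = np.zeros((2, 2))
for a, b in Y:
    table[a, b] += 1                      # co-occurrence counts
chi2, p, _, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # small p suggests the pair is dependent
```

Pairs flagged this way could then be clustered into the mutually exclusive subsets the paper describes before classification.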

17.
Due to the rapid development of computer networks, congestion has become a critical issue. Congestion usually occurs when connections' demands on network resources, i.e. buffer space, exceed what is available. In this paper we propose a new discrete-time queueing network analytical model, based on the dynamic random early drop (DRED) algorithm, to control congestion in its early stages. We apply our analytical model to a queueing network of two queue nodes. Furthermore, we compare the proposed analytical model with three known active queue management (AQM) algorithms, namely DRED, random early detection (RED) and adaptive RED, in order to determine which of them offers better quality of service (QoS). We also experimentally compare the queue nodes of the proposed analytical model and the three AQM methods in terms of several performance measures, including average queue length, average queueing delay, throughput and packet loss probability, aiming to determine the queue node that offers the better performance.
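For context, here is a minimal sketch of the RED mechanism the proposed model is compared against: an exponentially weighted average of the queue length sets a drop probability that rises linearly between two thresholds. The parameters are illustrative; DRED additionally adapts the drop probability dynamically.

```python
# Classic RED: EWMA of queue length drives an early packet-drop probability.
def red_drop_prob(avg, min_th=5.0, max_th=15.0, max_p=0.1):
    if avg < min_th:
        return 0.0           # no early drops while the queue is short
    if avg >= max_th:
        return 1.0           # drop everything beyond the upper threshold
    return max_p * (avg - min_th) / (max_th - min_th)

avg = 0.0
for q in [2, 6, 10, 14, 18]:           # instantaneous queue lengths
    avg = 0.9 * avg + 0.1 * q          # EWMA with weight w = 0.1
    print(f"avg = {avg:5.2f}, drop prob = {red_drop_prob(avg):.3f}")
```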

18.
We present a theoretical framework for an asymptotically converging, scaled genetic algorithm which uses an arbitrary-size alphabet and common scaled genetic operators. The alphabet can be interpreted as a set of equidistant real numbers, and multiple-spot mutation performs a scalable compromise between pure random search and neighborhood-based change on the alphabet level. We discuss several versions of the crossover operator and their interplay with mutation. In particular, we consider uniform crossover and gene-lottery crossover, which does not commute with mutation. The Vose–Liepins version of mutation-crossover is also integrated into our approach. In order to achieve convergence to global optima, the mutation rate and the crossover rate have to be annealed to zero in a proper fashion, and unbounded, power-law scaled proportional fitness selection is used with logarithmic growth in the exponent. Our analysis shows that using certain types of crossover operators and a large population size allows for particularly slow annealing schedules for the crossover rate. In our discussion, we focus on the following three major aspects, based upon contraction properties of the mutation and fitness selection operators: (i) the drive towards uniform populations in a genetic algorithm using standard operations, (ii) weak ergodicity of the inhomogeneous Markov chain describing the probabilistic model for the scaled algorithm, and (iii) convergence to globally optimal solutions. In particular, we remove two restrictions imposed in Theorem 8.6 and Remark 8.7 of (Theoret. Comput. Sci. 259 (2001) 1), where a similar type of algorithm is considered: mutation need not commute with crossover, and the fitness function (which may come from a coevolutionary single-species setting) need not have a single maximum.
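A toy sketch of the schedule structure analyzed above: mutation and crossover rates annealed toward zero while selection pressure grows through power-law fitness scaling with a logarithmically growing exponent. The objective, the rates and the constants are illustrative, not the paper's.

```python
# Scaled GA skeleton: annealed mutation/crossover, growing selection pressure.
import math, random

def fitness(x):                        # toy objective: count of 1-genes
    return 1.0 + sum(x)

def step(pop, t):
    mu_t = 0.5 / (t + 1)               # mutation rate annealed to zero
    chi_t = 0.8 / math.sqrt(t + 1)     # crossover rate annealed more slowly
    g_t = 2.0 * math.log(t + 2)        # exponent of the power-law fitness scaling
    w = [fitness(x) ** g_t for x in pop]
    new = [x[:] for x in random.choices(pop, weights=w, k=len(pop))]
    for x in new:
        if random.random() < chi_t:                 # uniform crossover
            mate = random.choice(new)
            for i in range(len(x)):
                if random.random() < 0.5:
                    x[i] = mate[i]
        for i in range(len(x)):                     # multiple-spot mutation
            if random.random() < mu_t:
                x[i] = 1 - x[i]
    return new

pop = [[random.randint(0, 1) for _ in range(12)] for _ in range(20)]
for t in range(200):
    pop = step(pop, t)
print(max(sum(x) for x in pop))        # best individual after annealing
```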

19.
Making Tea (MT) is a design elicitation method developed in eScience specifically to deal with situations in which (1) the designers do not share domain or artifact knowledge with design-domain experts, (2) the processes in the space are semi-structured, and (3) the processes to be modeled can last for periods exceeding the availability of most ethnographers. We have used the method in two distinct eScience contexts, and it may offer an effective, low-cost way to bridge between software design teams and scientists in developing useful and usable eScience artifacts. To that end, we propose a set of criteria for understanding why MT works. Through these criteria we also reflect upon the relation of MT to other design elicitation methods, in order to propose a method framework from which other designers may be assisted in choosing elicitation methods and in developing new methods, both for eScience contexts and beyond.

20.
Context: In large software development projects, a huge number of unstructured text documents from various stakeholders becomes available and needs to be analyzed and transformed into structured requirements. This elicitation process is known to be time-consuming and error-prone when performed manually by a requirements engineer. Consequently, substantial research has been done to automate the process through a plethora of tools and technologies.
Objective: This paper aims to capture the current state of automated requirements elicitation and derive future research directions by identifying gaps in the existing body of knowledge and by relating existing works to each other. More specifically, we investigate the following research question: what is the state of the art in research covering tool support for automated requirements elicitation from natural-language documents?
Method: A systematic review of the literature in automated requirements elicitation is performed. Identified works are categorized using an analysis framework comprising tool categories, technological concepts and evaluation approaches. Furthermore, the identified papers are related to each other through citation analysis to trace the development of the research field.
Results: We identified, categorized and related 36 relevant publications. Summarizing our observations, we propose future research to (1) investigate alternative elicitation paradigms going beyond a pure automation approach, (2) compare the effects of different types of knowledge on elicitation results, (3) apply comparative evaluation methods and multi-dimensional evaluation measures, and (4) strive for a closer integration of research activities across the sub-fields of automated requirements elicitation.
Conclusion: Through the results of our paper, we intend to contribute to the requirements engineering body of knowledge by (1) conceptualizing an analysis framework for works in the area of automated requirements elicitation, going beyond former classifications, (2) providing an extensive overview and categorization of existing works in this area, and (3) formulating concise directions for future research.
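As a hedged illustration of the simplest tool category in this area, the sketch below extracts requirement candidates from natural-language text by matching modal verbs; the tools surveyed in the paper use far richer NLP, and the input text here is hypothetical.

```python
# Naive requirement-candidate extraction: sentences containing modal verbs.
import re

MODALS = re.compile(r"\b(shall|must|should|will)\b", re.IGNORECASE)

doc = ("The system shall encrypt all stored data. Users liked the old UI. "
       "Backups must run nightly. The vendor may be replaced later.")

candidates = [s.strip() for s in re.split(r"(?<=[.!?])\s+", doc)
              if MODALS.search(s)]
for c in candidates:
    print("-", c)      # two candidate requirements; the rest is filtered out
```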
