Similar documents
20 similar documents found (search time: 109 ms)
1.
Logical filtering is the process of updating a belief state (a set of possible world states) after a sequence of executed actions and perceived observations. In general, it is intractable in dynamic domains that include many objects and relationships. Still, potential applications for such domains (e.g., the semantic web, autonomous agents, and partial-knowledge games) encourage research beyond intractability results. In this paper we present polynomial-time algorithms for filtering belief states that are encoded as First-Order Logic (FOL) formulas. Our algorithms are exact in many cases of interest. They accept belief states in FOL without functions, permitting arbitrary arity for predicates, infinite universes of elements, and equality. They enable natural representation with explicit references to unidentified objects and partially known relationships, while maintaining tractable computation. Previous results focus on more general cases that are intractable or permit only imprecise filtering. Our algorithms guarantee that the belief-state representation remains compact for STRIPS actions (among others) over unbounded-size domains, which guarantees tractable exact filtering indefinitely for those domains. The rest of our results apply to expressive modeling languages, such as partial databases and belief revision in FOL.
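The progress-then-filter loop that the paper makes tractable for first-order belief states can be illustrated in a deliberately tiny propositional form (a minimal sketch; the state space, action, and observation here are toy stand-ins, not the paper's FOL encoding):

```python
def filter_step(belief_state, action, observation):
    """One step of logical filtering: progress every possible world
    through the executed action, then keep only the successors that
    are consistent with the perceived observation."""
    progressed = {action(s) for s in belief_state}
    return {s for s in progressed if observation(s)}

# Toy domain: states are integers, the action increments the state,
# and the observation reports that the resulting state is even.
belief = {1, 2, 3, 4}
belief = filter_step(belief, lambda s: s + 1, lambda s: s % 2 == 0)
# belief is now {2, 4}
```

The paper's contribution is keeping the analogue of `belief` compact when it is a first-order formula rather than an enumerated set.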

2.
An agent receives various kinds of knowledge from its environment; merging this knowledge into a single, consistent body of knowledge is a very important problem. Inspired by the "contraction + expansion" scheme in belief revision, we solve this problem in two steps: first, weaken the received pieces of information until they are mutually consistent; second, perform a simple merging operation. This paper focuses on the first step, a model for handling contradictory knowledge based on group-belief negotiation. We discuss the model's axiom system and its procedural realization, and a worked example demonstrates how information merging is carried out under this model.

3.
Abstract

We discuss Zadeh's idea of computing with words and emphasize its perspective that information provides a restriction on the values variables can assume. We describe the role that constraint-based semantics plays in translating natural language statements into formal mathematical objects. One task that arises in this approach is the formulation of joint restrictions on multiple variables from individual information about each of the variables. Our interest here is to extend the capability of the computing-with-words framework in forming joint variables by introducing the idea of perceived relatedness between variables, a concept closely related to correlation. We are particularly interested in the role that knowledge about perceived relatedness between variables can play in restricting the possible values of the joint variable further than the individual constraints alone would. We look at the problem of joining various types of uncertain variables: possibilistic, probabilistic, and Dempster-Shafer belief structures.

4.
The well-known Fuzzy C-Means (FCM) algorithm for data clustering has been extended to the Evidential C-Means (ECM) algorithm in order to work in the belief functions framework with credal partitions of the data. Depending on the clustering problem, some cluster barycenters computed by ECM can become very close to each other, which can seriously degrade ECM's clustering performance. To circumvent this problem, we introduce the notion of an imprecise cluster. The principle of our approach is that objects lying midway between the barycenters of specific classes (clusters) are committed with equal belief to each of those specific clusters, instead of being assigned to an imprecise meta-cluster as done classically in ECM. Outlier objects far from the centers of two (or more) specific clusters that are hard to distinguish are committed to the imprecise cluster (a disjunctive meta-cluster) composed of these specific clusters. The new Belief C-Means (BCM) algorithm proposed in this paper follows this simple principle. In BCM, the mass of belief of a specific cluster for each object is computed from the distance between the object and the center of the cluster it may belong to. Both the distances between the object and the centers of the specific clusters and the distances among those centers are taken into account when determining the mass of belief of the meta-cluster. Unlike ECM, BCM does not use the barycenter of the meta-cluster. We also present several examples to illustrate the interest of BCM and to show its main differences with respect to clustering techniques based on FCM and ECM.
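The distance-based mass assignment for specific clusters can be sketched as follows (a simplified stand-in: BCM's actual rule also weighs the distances among cluster centers when building meta-cluster masses, and the exponent `alpha` is our hypothetical parameter):

```python
import math

def specific_masses(obj, centers, alpha=2.0):
    """Assign a mass of belief to each specific cluster that decreases
    with the distance between the object and the cluster center, then
    normalize so the masses sum to one."""
    dists = [math.dist(obj, c) for c in centers]
    weights = [1.0 / (d ** alpha + 1e-9) for d in dists]  # small epsilon avoids /0
    total = sum(weights)
    return [w / total for w in weights]

centers = [(0.0, 0.0), (10.0, 0.0)]
masses = specific_masses((1.0, 0.0), centers)
# the object is far closer to the first center, so masses[0] dominates
```

An object equidistant from both centers would receive equal masses, which is exactly the situation the imprecise meta-cluster is meant to handle.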

5.
This paper applies the Transferable Belief Model (TBM) interpretation of the Dempster-Shafer theory of evidence to estimate parameter distributions for probabilistic structural reliability assessment based on information from previous analyses, expert opinion, or qualitative assessments (i.e., evidence). Treating model parameters as credal variables, the suggested approach constructs a set of least-committed belief functions for each parameter defined on a continuous frame of real numbers that represent beliefs induced by the evidence in the credal state, discounts them based on the relevance and reliability of the supporting evidence, and combines them to obtain belief functions that represent the aggregate state of belief in the true value of each parameter. Within the TBM framework, beliefs held in the credal state can then be transformed to a pignistic state where they are represented by pignistic probability distributions. The value of this approach lies in its ability to leverage results from previous analyses to estimate distributions for use within a probabilistic reliability and risk assessment framework. The proposed methodology is demonstrated in an example problem that estimates the physical vulnerability of a notional office building to blast loading.
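The credal-to-pignistic step described above is the TBM's standard pignistic transformation, which splits each focal set's mass equally among its elements. A minimal discrete sketch (the paper works on continuous frames; this finite version only illustrates the rule):

```python
def pignistic(masses):
    """Pignistic transformation BetP of the TBM: the mass of each
    focal set is distributed equally among its elements.
    `masses` maps frozensets of outcomes to their mass of belief."""
    betp = {}
    for focal, m in masses.items():
        for x in focal:
            betp[x] = betp.get(x, 0.0) + m / len(focal)
    return betp

m = {frozenset({'a'}): 0.5, frozenset({'a', 'b'}): 0.5}
p = pignistic(m)
# 'a' receives 0.5 + 0.25 = 0.75, 'b' receives 0.25
```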

6.
There is now extensive interest in reasoning about moving objects. A probabilistic spatio-temporal (PST) knowledge base (KB) contains atomic statements of the form “Object o is/was/will be in region r at time t with probability in the interval [ℓ,u]”. In this paper, we study mechanisms for belief revision in PST KBs. We propose multiple methods for revising PST KBs, involving maximally consistent subsets and maximal-cardinality consistent subsets. In addition, there may be applications where the user has doubts about the accuracy of the spatial information, the temporal aspects, or the ability to recognize objects in such statements; we study belief revision mechanisms that allow changes to the KB in each of these three components. Finally, there may be doubts about the assignment of probabilities in the KB; allowing changes to the probabilities of statements in the KB yields another belief revision mechanism. Each of these belief revision methods may be epistemically desirable for some applications but not for others. We show that some of these approaches cannot satisfy AGM-style axioms for belief revision under certain conditions. We also perform a detailed complexity analysis of each approach. Simply put, all of the proposed belief revision methods that satisfy AGM-style axioms turn out to be intractable, with the exception of the method that revises beliefs by (minimally) changing the probabilities in the KB. We also propose two hybrids of these basic approaches and analyze their complexity.

7.
In this paper, we study the Dempster–Shafer theory of evidence in decision-making situations with linguistic information and develop a new aggregation operator: the belief structure generalized linguistic hybrid averaging (BS-GLHA) operator, together with a wide range of its particular cases. We develop a new decision-making model with Dempster–Shafer belief structures that uses linguistic information to manage uncertain situations that cannot be handled probabilistically. All these approaches are useful for representing the problem more completely, selecting for each situation the particular case closest to our interests in the specific problem analyzed. Finally, a numerical example illustrates the applicability and effectiveness of the proposed method. We point out that the results and decisions depend on the linguistic aggregation operator used in the decision-making process.

8.
When conjunctively merging two belief functions concerning a single variable but coming from different sources, Dempster's rule of combination is justified only when the information sources can be considered independent. When dependencies between sources are ill-known, it is usual to require the merging of belief functions to be idempotent, as this property captures the possible redundancy of dependent sources. To study idempotent merging, different strategies can be followed. One strategy is to rely on idempotent rules used in either more general or more specific frameworks and to study, respectively, their particularization or extension to belief functions. In this paper, we study the feasibility of extending the idempotent fusion rule of possibility theory (the minimum) to belief functions. We first investigate how comparisons of information content, in the form of inclusion and least commitment, can be exploited to relate idempotent merging in possibility theory to evidence theory. We reach the conclusion that, unless we accept that the result of the fusion process can be a family of belief functions, such an extension is not always possible. As handling such families seems impractical, we then turn to a more quantitative criterion and consider those combinations that maximize the expected cardinality of the joint belief function, among the least committed ones, taking advantage of the fact that the expected cardinality of a belief function depends only on its contour function.
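The contour function mentioned above is the plausibility restricted to singletons, and the expected cardinality of a belief function is just the sum of its contour values. A minimal sketch of computing contours and taking their pointwise minimum, the idempotent rule of possibility theory (the paper's actual combinations operate on the belief functions themselves, not only on contours):

```python
def contour(masses):
    """Contour function of a mass function: pl({x}) is the sum of the
    masses of the focal sets containing x. `masses` maps frozensets
    of outcomes to their mass."""
    pl = {}
    for focal, m in masses.items():
        for x in focal:
            pl[x] = pl.get(x, 0.0) + m
    return pl

m1 = {frozenset({'a', 'b'}): 0.6, frozenset({'a'}): 0.4}
m2 = {frozenset({'a', 'b'}): 1.0}  # vacuous belief function
pl1, pl2 = contour(m1), contour(m2)
# pointwise minimum of contours, mirroring possibility theory's idempotent rule
merged = {x: min(pl1.get(x, 0.0), pl2.get(x, 0.0)) for x in pl1.keys() | pl2.keys()}
expected_cardinality = sum(merged.values())
```

Merging `m1` with the vacuous `m2` leaves `m1`'s contour unchanged, which is exactly the redundancy-capturing behavior idempotence is meant to provide.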

9.
It is important to predict the future behavior of complex systems, yet there are currently no effective methods for time series forecasting that use both quantitative and qualitative information. This paper therefore develops a new model based on a belief rule base (BRB) to address this problem. Although accurate and complete quantitative information is difficult to obtain, some qualitative information can be collected and represented by a BRB. A new BRB-based forecasting model is thus proposed for the case where quantitative and qualitative information coexist. The performance of the proposed model depends on both the structure and the belief degrees of the BRB, and the structure is determined by the delay step. To choose an appropriate delay step from the available information, a model selection criterion is defined based on Akaike's information criterion (AIC). Building on this criterion and an optimization algorithm for training the belief degrees, an algorithm for constructing the BRB-based forecasting model is developed. Experimental results show that the constructed model not only predicts the time series accurately but also has an appropriate structure.
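An AIC-style criterion for picking the delay step can be sketched as follows (a minimal sketch using the common least-squares form of AIC; the residual sums of squares and parameter counts below are hypothetical, and the paper's criterion is adapted to BRB structures rather than this generic form):

```python
import math

def aic(rss, n, k):
    """Least-squares form of Akaike's information criterion:
    n * ln(RSS / n) + 2k, where n is the number of samples and
    k the number of free parameters; lower is better."""
    return n * math.log(rss / n) + 2 * k

# hypothetical candidates: delay step -> (residual sum of squares, parameter count)
candidates = {1: (12.0, 4), 2: (8.0, 7), 3: (7.9, 12)}
n = 100
best = min(candidates, key=lambda s: aic(candidates[s][0], n, candidates[s][1]))
# step 3 barely improves the fit over step 2 but adds many parameters,
# so AIC selects step 2
```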

10.
Tracking multiple objects is critical to automatic video content analysis and virtual reality. The major problem is how to solve the data association problem when ambiguous measurements are caused by objects in close proximity. To tackle this problem, we propose a multiple-information-fusion-based multiple hypotheses tracking algorithm that integrates an appearance feature, a local motion pattern feature, and a repulsion–inertia model for multi-object tracking. An appearance model based on HSV–local binary pattern histograms and a local motion pattern based on optical flow are adopted to describe objects. A likelihood calculation framework is proposed to incorporate the similarities of appearance, dynamic process, and local motion pattern. To account for changes in appearance and motion pattern over time, an effective template updating strategy is used for each object. In addition, a repulsion–inertia model is adopted to extract more useful information from ambiguous detections. Experimental results show that the proposed approach generates better trajectories with fewer missed objects and identity switches.

11.
On the revision of probabilistic beliefs using uncertain evidence
We revisit the problem of revising probabilistic beliefs using uncertain evidence, and report results on several major issues relating to this problem: how should one specify uncertain evidence? How should one revise a probability distribution? How should one interpret informal evidential statements? Should, and do, iterated belief revisions commute? And what guarantees can be offered on the amount of belief change induced by a particular revision? Our discussion is focused on two main methods for probabilistic revision: Jeffrey's rule of probability kinematics and Pearl's method of virtual evidence, where we analyze and unify these methods from the perspective of the questions posed above.
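Jeffrey's rule of probability kinematics, one of the two revision methods analyzed, can be sketched directly for a discrete distribution (a minimal sketch; the variable names are ours):

```python
def jeffrey_update(prior, partition_probs):
    """Jeffrey's rule: given a prior over worlds and new probabilities
    q_i for the events E_i of a partition, the posterior of world w
    in E_i is prior(w) * q_i / prior(E_i), i.e., conditional
    probabilities within each event are preserved."""
    posterior = {}
    for event, q in partition_probs:
        p_event = sum(prior[w] for w in event)
        for w in event:
            posterior[w] = prior[w] * q / p_event
    return posterior

prior = {'rain': 0.3, 'sun': 0.7}
# uncertain evidence shifts the probability of rain to 0.6
post = jeffrey_update(prior, [({'rain'}, 0.6), ({'sun'}, 0.4)])
```

One of the paper's questions, whether iterated revisions commute, can be probed with this function by applying two updates in both orders.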

12.
Cost-based abduction (CBA) is an important problem in reasoning under uncertainty, and can be considered a generalization of belief revision. CBA is known to be NP-hard and has been a subject of considerable research over the past decade. In this paper, we investigate the fitness landscape for CBA, by looking at fitness–distance correlation for local minima and at landscape ruggedness. Our results indicate that stochastic local search techniques would be promising on this problem. We go on to present an iterated local search algorithm based on hill-climbing, tabu search, and simulated annealing. We compare the performance of our algorithm to simulated annealing, and to Santos' integer linear programming method for CBA.
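The hill-climbing component of such a local search can be sketched on a toy CBA instance: hypotheses are assumed true or false, each true hypothesis incurs a cost, and a problem-specific predicate checks that the assumed hypotheses still support the observation (a minimal sketch; `consistent`, the costs, and the instance are hypothetical, and the paper's algorithm additionally layers tabu search and simulated annealing on top):

```python
def cost(assign, costs):
    """Total cost of the hypotheses currently assumed true."""
    return sum(c for a, c in zip(assign, costs) if a)

def hill_climb(costs, consistent, start):
    """Greedy descent: flip one hypothesis at a time as long as some
    flip lowers the total cost while keeping the proof consistent."""
    current = list(start)
    improved = True
    while improved:
        improved = False
        for i in range(len(current)):
            neighbor = current[:]
            neighbor[i] = not neighbor[i]
            if consistent(neighbor) and cost(neighbor, costs) < cost(current, costs):
                current, improved = neighbor, True
                break
    return current

# toy instance: three hypotheses; the observation needs h0 or h1 assumed
costs = [5.0, 1.0, 3.0]
consistent = lambda a: a[0] or a[1]
solution = hill_climb(costs, consistent, [True, True, True])
# descends to assuming only the cheap hypothesis h1
```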

13.
In this paper we present a new credal classification rule (CCR) based on belief functions to deal with uncertain data. CCR allows objects to belong (with different masses of belief) not only to specific classes, but also to sets of classes called meta-classes, which correspond to the disjunction of several specific classes. Each specific class is characterized by a class center (i.e., a prototype) and consists of all the objects sufficiently close to that center. The belief that a given object belongs to a specific class is determined from the Mahalanobis distance between the object and the center of the corresponding class. Meta-classes are used to capture the imprecision in classifying objects that are difficult to classify correctly because of the poor quality of the available attributes. The selection of meta-classes depends on the application and the context, and a measure of the degree of indistinguishability between classes is introduced. In this new CCR approach, objects assigned to a meta-class should be close to the center of the meta-class, having similar distances to all the involved specific classes' centers, while objects too far from all centers are considered outliers (noise). CCR provides robust credal classification results with a relatively low computational burden. Several experiments using both artificial and real data sets are presented at the end of the paper to evaluate and compare the performance of CCR with respect to other classification methods.
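The Mahalanobis distance that grades the belief of assignment to a specific class can be computed as follows (a minimal sketch using NumPy; the covariance matrix and points are illustrative):

```python
import numpy as np

def mahalanobis(x, center, cov):
    """Mahalanobis distance between an object x and a class center,
    sqrt((x - c)^T Sigma^{-1} (x - c)); it down-weights directions
    in which the class has large variance."""
    diff = np.asarray(x) - np.asarray(center)
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

# a class spread widely along the first axis, narrowly along the second
cov = np.array([[2.0, 0.0], [0.0, 0.5]])
d = mahalanobis([1.0, 1.0], [0.0, 0.0], cov)
# the second coordinate contributes more because its variance is smaller
```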

14.
Fault diagnosis of flight control systems based on a belief rule base
To address the subjective bias introduced by expert knowledge in traditional flight control system fault diagnosis, and the overfitting that data-driven methods suffer when fault data are scarce, this paper proposes a flight control system fault diagnosis method based on belief rule base (BRB) inference. A BRB for flight control system fault diagnosis is constructed from existing fault knowledge, and its parameters are trained on fault data obtained during testing, using a numerical-sample optimization learning model. A case study shows that, after training with a small number of samples, the BRB largely eliminates the subjective bias present in the initial rule-base parameters, and experiments demonstrate that the method achieves efficient and reliable fault diagnosis for flight control systems.

15.
Autonomous agents in computer simulations do not have the usual mechanisms to acquire information as do their human counterparts. In many such simulations, it is not desirable that the agent have access to complete and correct information about its environment. We examine how imperfection in available information may be simulated in the case of autonomous agents. We determine probabilistically what the agent may detect, through hypothetical sensors, in a given situation. These detections are combined with the agent's knowledge base to infer observations and beliefs. Inherent in this task is a degree of uncertainty in choosing the most appropriate observation or belief. We describe and compare two approaches — a numerical approach and one based on defeasible logic — for simulating an appropriate belief in light of conflicting detection values at a given point in time. We discuss the application of this technique to autonomous forces in combat simulation systems.

16.
Numerous belief revision and update semantics have been proposed in the literature in the past few years, but until recently, no work in the belief revision literature has focussed on the problem of implementing these semantics, and little attention has been paid to algorithmic questions. In this paper, we present and analyze our update algorithms built in Immortal, a model-based belief revision system. These algorithms can work for a variety of model-based belief revision semantics proposed to date. We also extend previously proposed semantics to handle updates involving the equality predicate and function symbols and incorporate these extensions in our algorithms. As an example, we discuss the use of belief revision semantics to model the action-augmented envisioning problem in qualitative simulation, and we show the experimental results of running an example simulation in Immortal.

17.
Multi-agent learning (MAL) studies how agents learn to behave optimally and adaptively from their experience when interacting with other agents in dynamic environments. The outcome of a MAL process is jointly determined by all agents' decision-making, so each agent needs to think strategically about others' sequential moves when planning future actions. These strategic interactions make MAL more than a direct extension of single-agent learning to multiple agents. With this strategic thinking, each agent aims to build a subjective model of others' decision-making from its observations. Such modeling is directly influenced by the agents' perception during the learning process, which is called the information structure of the agent's learning. Since it determines the input to MAL processes, the information structure plays a significant role in the agents' learning mechanisms. This review creates a taxonomy of MAL and establishes a unified and systematic way to understand MAL from the perspective of information structures. We define three fundamental components of MAL: the information structure (i.e., what the agent can observe), belief generation (i.e., how the agent forms a belief about others based on its observations), and policy generation (i.e., how the agent generates its policy based on its belief). This taxonomy also enables the classification of a wide range of state-of-the-art algorithms into four categories based on the belief-generation mechanisms attributed to opponents: stationary, conjectured, calibrated, and sophisticated opponents. We introduce the Value of Information (VoI) as a metric to quantify the impact of different information structures on MAL. Finally, we discuss the strengths and limitations of algorithms from the different categories and point to promising avenues for future research.

18.
We give a logical framework for reasoning with observations at different time points. We call belief extrapolation the process of completing initial belief sets stemming from observations by assuming minimal change. We give a general semantics and we propose several extrapolation operators. We study some key properties verified by these operators and we address computational issues. We study in detail the position of belief extrapolation with respect to revision and update: in particular, belief extrapolation is shown to be a specific form of time-stamped belief revision. Several related lines of work are positioned with respect to belief extrapolation.

19.
The compact representation of the incomplete probabilistic knowledge encountered in risk evaluation problems, for instance in environmental studies, is considered. Various kinds of knowledge are covered, such as expert opinions about characteristics of distributions or poor statistical information. The approach is based on probability families encoded by possibility distributions and belief functions. In each case, a technique for faithfully representing the available imprecise probabilistic information is proposed, using different uncertainty frameworks such as possibility theory, probability theory, and belief functions. Moreover, the use of probability-possibility transformations enables confidence intervals to be encompassed by cuts of possibility distributions, making the representation stronger. The respective appropriateness of pairs of cumulative distributions, continuous possibility distributions, or discrete random sets for representing information about the mean value, the mode, the median, and other fractiles of ill-known probability distributions is discussed in detail.
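A probability-possibility transformation of the kind used above can be sketched for a discrete distribution (a minimal sketch of the standard order-preserving transformation, ignoring tie handling; the paper treats continuous distributions and confidence-interval cuts):

```python
def prob_to_poss(p):
    """Probability-to-possibility transformation for a discrete
    distribution: after sorting outcomes by decreasing probability,
    pi(x_i) is the sum of p_j over all j >= i, so the possibility
    distribution dominates the probability distribution."""
    items = sorted(p.items(), key=lambda kv: -kv[1])
    poss, tail = {}, sum(p.values())
    for x, px in items:
        poss[x] = tail
        tail -= px
    return poss

poss = prob_to_poss({'a': 0.5, 'b': 0.3, 'c': 0.2})
# pi(a) = 1.0, pi(b) = 0.5, pi(c) = 0.2
```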

20.
The scalability of data broadcasting has been demonstrated by prior studies built on traditional data management systems, where data objects, mapped to state-value pairs in the database, are independent, persistent, and static with respect to simple queries. However, many modern information applications disseminate dynamic data objects and process complex queries that retrieve multiple data objects. In particular, information servers dynamically generate data objects that are interdependent and can be assembled into a complete response to a complex query. Accordingly, this paper considers the problem of scheduling dynamic broadcast data objects in a clients-providers-servers system from the standpoint of data association, dependency, and dynamics. Since the data broadcast problem is NP-hard, we derive lower and upper bounds on the mean service access time. In light of these theoretical analyses, we further devise a deterministic algorithm with several gain measure functions to approximate the optimal schedule. The experimental results show that the proposed algorithm generates a dynamic broadcast schedule and minimizes the mean service access time to very close to the theoretical optimum.
