Similar Documents
20 similar documents retrieved.
1.
Research on the Application of Evidence Theory in Uncertainty Reasoning   Cited: 3 (self-citations: 2, by others: 1)
The basic probability assignment, belief function, and plausibility function of evidence theory are used to describe and handle the uncertainty of knowledge. A special probability assignment function and a new combination rule are proposed, and an uncertainty reasoning model is built on this basis. Worked examples show that the model can effectively measure the uncertainty of the final conclusion.
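For readers unfamiliar with these constructs, the following Python sketch shows how belief and plausibility are derived from a generic basic probability assignment; it illustrates only the standard definitions, not the paper's special assignment function or combination rule, and the mass values are invented for the example.

```python
def belief(mass, A):
    """Bel(A): total mass of focal elements wholly contained in A."""
    return sum(m for B, m in mass.items() if B <= A)

def plausibility(mass, A):
    """Pl(A): total mass of focal elements that intersect A."""
    return sum(m for B, m in mass.items() if B & A)

# Illustrative BPA over the frame {a, b, c} (values invented for the example).
mass = {
    frozenset({'a'}): 0.5,
    frozenset({'a', 'b'}): 0.3,
    frozenset({'a', 'b', 'c'}): 0.2,   # mass left on the whole frame = ignorance
}
A = frozenset({'a', 'b'})
print(belief(mass, A), plausibility(mass, A))   # 0.8 1.0
```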

2.
赵静  关欣  衣晓  刘海桥 《控制与决策》2020,35(6):1307-1315
Automated data processing, detection and judgment, and inference and decision-making require the effective integration of multiple sensors and different information sources. Owing to environmental disturbances, sensor limitations, and human intervention, however, source information generally exhibits strong uncertainty, incompleteness, and conflict, which manifests itself chiefly as conflicting evidence, so reasoning and decision-making with conflicting evidence deserve study. To address the inability of evidence theory to fuse conflicting evidence effectively, a new uncertainty measure is proposed. First, existing uncertainty measures based on information entropy and interval distance are analyzed in depth, and their shortcomings are summarized and proved. Second, a new interval distance measure is defined via the definite integral, and its soundness is proved. A new uncertainty measure is then given based on the proposed interval distance, and a combination rule and algorithm flow for conflicting evidence are derived from the improved measure. Finally, numerical examples verify the effectiveness and feasibility of the improved algorithm.
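The classical Dempster rule that such work seeks to improve can be sketched as follows; the conflict mass k computed inside the loop is what makes the rule behave counterintuitively on highly conflicting evidence (a Zadeh-style example below). The paper's interval-distance measure and improved rule are not reproduced here, and the example masses are invented.

```python
def dempster_combine(m1, m2):
    """Classical (normalized) Dempster rule of combination.

    m1, m2: dicts mapping frozenset focal elements to mass values.
    The conflict k is the mass the conjunction assigns to the empty set.
    """
    combined, k = {}, 0.0
    for B, mb in m1.items():
        for C, mc in m2.items():
            inter = B & C
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                k += mb * mc
    if k >= 1.0:
        raise ValueError("totally conflicting evidence (k = 1), rule undefined")
    return {A: v / (1.0 - k) for A, v in combined.items()}

# Zadeh-style highly conflicting sources: the rule assigns all mass to {'b'},
# the hypothesis both sources consider unlikely.
m1 = {frozenset({'a'}): 0.9, frozenset({'b'}): 0.1}
m2 = {frozenset({'b'}): 0.1, frozenset({'c'}): 0.9}
print(dempster_combine(m1, m2))   # {frozenset({'b'}): 1.0}
```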

3.
A key problem in D–S evidence theory is measuring the uncertainty of the basic probability assignment (BPA) reported by a sensor; measuring it accurately is crucial for assessing the amount of information conveyed by a belief structure. This paper first proposes an improved normalized projection method (iNP) and then, based on iNP, a new generalized projection uncertainty (PU) measure. Theoretical proofs and simulations show that PU satisfies non-negativity, boundedness, invariance, monotonicity, and non-counter-intuitiveness, with high sensitivity and low computational cost, properties that allow PU to serve as an effective generalized measure of BPA uncertainty. Compared with existing uncertainty measures, the proposed method is more sensitive to changes in the evidence. Finally, a new evidence combination method based on PU is given, and numerical examples and a practical application demonstrate its effectiveness and soundness.

4.
A totally ordered set of symbolic values is adopted in place of the numerical belief set [0,1], and a qualitative Dempster-Shafer theory is proposed for reasoning problems that involve both uncertainty and imprecision. First, qualitative mass functions, qualitative belief functions, and related concepts suitable for qualitatively representing and reasoning about uncertainty are defined, and the basic relations among them are studied. Second, the combination of qualitative evidence is discussed in detail, and a combination rule based on an averaging strategy is proposed. Compared with related theories, this qualitative Dempster-Shafer theory inherits the main features of Dempster-Shafer theory for uncertain reasoning by redefining its basic concepts in the qualitative domain, while also providing rigorously defined yet intuitive qualitative operators for handling imprecision, which makes it better suited to imprecise representation and handling of uncertainty within the Dempster-Shafer framework.

5.
Qualitative probabilistic reasoning is an important method in the field of uncertain reasoning. This work unifies the argument-system and abstract-system approaches to qualitative probabilistic reasoning, and builds a weighted qualitative probabilistic reasoner based on argument systems (WQPR) on top of the qualitative probabilistic reasoner (QPR). First, the definition of weighted qualitative probabilistic networks is extended and the symmetry of weighted qualitative influences is discussed; then weighted qualitative probabilistic reasoning is integrated into the argument system to obtain the WQPR reasoning system, which can perform uncertain reasoning at a finer scale than QPR, and the soundness and completeness of the system are proved.

6.
Bayesian Networks for Causal Reasoning in Situation Assessment   Cited: 4 (self-citations: 0, by others: 0)
1 Introduction. Bayesian networks were proposed by R. Howard and J. Matheson in 1981, mainly to represent uncertain expert knowledge. Their theory and algorithms were subsequently developed substantially through the work of J. Pearl, D. Heckerman, and others. As a framework for knowledge representation and probabilistic reasoning, Bayesian networks have been widely applied to reasoning and decision problems with inherent uncertainty, such as probabilistic expert systems, computer vision, and data mining.

7.
For probability-box sensitivity analysis under mixed aleatory and epistemic uncertainty, a global sensitivity analysis method is proposed that uses the overlapping area of the probability box before and after uncertainty reduction as the uncertainty measure. Mixed uncertainty is pervasive in aerospace simulation systems, and the probability-box method is widely used to characterize mixed aleatory and epistemic uncertainty. First, the uncertainty-reduction theory of traditional probability-box sensitivity analysis is reviewed, and the shift of the probability box in position and shape is additionally taken into account. The influence of each input uncertainty is then characterized by computing the overlapping area of the probability boxes before and after reduction, and the implementation steps are described. Finally, numerical examples compare the proposed method with the traditional uncertainty-reduction method for global sensitivity analysis, and the method is applied to sensitivity ranking in overall engine performance simulation. The results show that the proposed area-overlap method is more widely applicable than the traditional uncertainty-reduction method and gives more accurate results.
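As a rough illustration of the area-overlap idea (not the paper's exact procedure), the sketch below numerically approximates the overlapping area of two probability boxes, each given as lower and upper CDF bounds on a common grid; the normal-CDF bounds used as the "original" and "reduced" boxes are invented for the example.

```python
import numpy as np
from scipy.stats import norm

def pbox_overlap_area(lower1, upper1, lower2, upper2, x):
    """Approximate the overlapping area of two probability boxes.

    Each box is a band between lower and upper CDF bounds sampled on the
    common grid x; the shared vertical extent at each x is integrated with
    the trapezoidal rule.
    """
    overlap = np.maximum(0.0,
                         np.minimum(upper1, upper2) - np.maximum(lower1, lower2))
    return float(np.sum(0.5 * (overlap[1:] + overlap[:-1]) * np.diff(x)))

# Toy p-boxes: a normal CDF with an uncertain mean; the "reduced" box is the
# narrower band obtained after shrinking the epistemic interval on the mean.
x = np.linspace(-4.0, 4.0, 401)
lower1, upper1 = norm.cdf(x, loc=0.5), norm.cdf(x, loc=-0.5)   # original box
lower2, upper2 = norm.cdf(x, loc=0.2), norm.cdf(x, loc=-0.2)   # reduced box
print(pbox_overlap_area(lower1, upper1, lower2, upper2, x))
```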

8.
王杰  周志杰  胡昌华  张朋  赵导 《控制与决策》2023,38(10):2749-2763
In data-based modeling of complex systems, uncertain information of various kinds is pervasive. Broadly speaking, the randomness of the objective system and the fuzziness of human cognition constitute the most basic content of uncertainty. To describe uncertain information formally and promote human understanding of real systems, various uncertainty theories have developed greatly in recent years. Against this background, this paper first presents the sources, classification, and characteristics of uncertainty. It then systematically reviews, from the three aspects of randomness, fuzziness, and mixed uncertainty, the research on representing and reasoning about uncertain information with Bayesian inference, fuzzy reasoning, rough sets, grey theory, and evidence theory, and summarizes typical applications of these theories in reliability engineering, information fusion, and decision support. Finally, after a brief summary of existing work, it puts forward three major challenges facing uncertainty theory and gives potential lines of attack, in the hope of providing a reference for researchers in this field.

9.
Through semantic analysis, a revised axiomatic definition of rough set uncertainty measures is proposed. First, the mathematical characteristics of this definition are analyzed, and two rough set uncertainty measures based on conditional probability are proposed; it is then proved that they satisfy the proposed axiomatic definition, and the corresponding knowledge uncertainty measures are derived, one of which turns out to be the existing conditional information entropy while the other is complementary to a certainty measure. Numerical examples are designed to compare the various uncertainty measures, verifying that the proposed formulas remain consistent with the semantics of uncertainty.
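Since the abstract identifies one of its measures with the existing conditional information entropy, a generic sketch of that quantity, computed from the partitions induced by the condition and decision attributes of a decision table, is given below; the paper's second, complementary measure is not reproduced, and the toy table is invented.

```python
import math
from collections import defaultdict

def partition(objects, key):
    """Equivalence classes of `objects` under the attribute function `key`."""
    blocks = defaultdict(list)
    for obj in objects:
        blocks[key(obj)].append(obj)
    return list(blocks.values())

def conditional_information_entropy(universe, cond_key, dec_key):
    """H(D|C) = -sum_i p(X_i) sum_j p(Y_j|X_i) log2 p(Y_j|X_i),
    where X_i are condition classes and Y_j decision classes."""
    n = len(universe)
    h = 0.0
    for X in partition(universe, cond_key):
        p_x = len(X) / n
        for Y in partition(X, dec_key):        # decision classes inside X_i
            p_y_given_x = len(Y) / len(X)
            h -= p_x * p_y_given_x * math.log2(p_y_given_x)
    return h

# Toy decision table: rows of (condition attribute value, decision value).
table = [('a', 0), ('a', 0), ('a', 1), ('b', 1), ('b', 1), ('c', 0)]
print(conditional_information_entropy(table, lambda r: r[0], lambda r: r[1]))
```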

10.
黄国顺  文翰 《软件学报》2018,29(11):3484-3499
Through semantic analysis, an extended axiomatic definition of rough set uncertainty measures is proposed. By generalizing the Shannon entropy function to strictly concave functions, a class of rough set uncertainty measure formulas is put forward that take conditional probability as the argument and are based on strictly concave functions; each such measure is a weighted average of strictly concave function values. On this basis, a family of rough set uncertainty measures is obtained. Fuzzy-entropy-based uncertainty measures are discussed from the viewpoint of strictly concave functions, and several existing fuzzy entropy functions capable of measuring rough set uncertainty are found to be special cases of the proposed method. The differences and connections among roughness, improved roughness, and the proposed method are compared. Finally, numerical examples are designed to compare the various methods, verifying that the strictly-concave-function-based rough set uncertainty measures are consistent with the semantics of rough set uncertainty.

11.
A Generalized Evidential Reasoning Fusion Structure   Cited: 1 (self-citations: 0, by others: 1)
To address the difficulty that Dempster-Shafer theory (DST) and Dezert-Smarandache theory (DSmT) have in handling uncertain information, an uncertainty factor on the frame of discernment is defined and two adaptive universal assignment rules (AUPR) are proposed. A generalized fusion framework for evidence theory is then presented, on which a generalized evidential reasoning engine is built. Using a Pioneer 2 DXe robot as the experimental platform, belief distribution maps of the experimental scene are plotted. The experimental results verify the effectiveness and practicality of the proposed method and provide strong support for building a unified information fusion framework.

12.
Dempster–Shafer theory (DST) was presented as an effective mathematical tool to represent uncertainty. Its significant innovation is to allow the allocation of belief mass to sets or intervals, and it has become a valuable method in the field of decision making and evaluation when accurate information is not available or when knowledge is expressed subjectively by humans. A crucial research issue in DST is the combination of multiple sources of evidence. In this paper, a novel combination rule for Dempster–Shafer structures is developed based on the ordered weighted average (OWA)-based soft likelihood functions proposed by Yager. First, the belief intervals, comprising the belief and plausibility measures, of all the hypotheses in the frame of discernment (FOD) are calculated. Second, the representative value of each belief interval is defined based on the golden rule introduced by Yager. Third, the soft likelihood value of each hypothesis is calculated with the proposed OWA-based soft likelihood function for belief intervals, and can be regarded as the combined evidence. The final evaluation results can be employed in practical applications such as decision making and evaluation. In addition, an improved evidence combination rule that takes the weight of evidence into account is presented. Several illustrative examples are given to demonstrate the use of the developed methods. Finally, an application to environmental impact assessment demonstrates the usefulness of the developed combination rule in DST.
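A minimal sketch of an OWA-based soft likelihood aggregation in the style of Yager follows; for illustration each belief interval is summarized by its midpoint rather than the golden-rule representative value used in the paper, and the attitudinal parameter alpha and the example intervals are assumptions.

```python
def owa_soft_likelihood(values, alpha=0.7):
    """OWA-based soft likelihood of the support values for one hypothesis.

    values: support from each evidence source, each in [0, 1].
    alpha in (0, 1): attitudinal parameter; values near 0 approach the plain
    product of all supports, values near 1 are dominated by the largest ones.
    """
    n = len(values)
    v = sorted(values, reverse=True)            # most supportive source first
    exp = (1.0 - alpha) / alpha
    weights = [(j / n) ** exp - ((j - 1) / n) ** exp for j in range(1, n + 1)]
    prod, likelihood = 1.0, 0.0
    for w, val in zip(weights, v):
        prod *= val                             # running product of top-j values
        likelihood += w * prod
    return likelihood

# Belief intervals [Bel, Pl] for one hypothesis from three sources (invented),
# each summarized by its midpoint purely for illustration.
intervals = [(0.55, 0.75), (0.60, 0.80), (0.20, 0.90)]
midpoints = [(b + p) / 2.0 for b, p in intervals]
print(owa_soft_likelihood(midpoints, alpha=0.7))
```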

13.
In the reduction of problems with uncertainty, a combination relation called COMB becomes important, as do AND/OR relations. Dempster and Shafer's theory provides a rational inference mechanism for the COMB relation. Knowledge often manifests fuzziness as well as uncertainty. To construct expert systems utilizing such knowledge, Dempster and Shafer's theory is extended in two ways, to include fuzzy subsets and fuzzy certainty.

14.
This paper introduces a novel trust assessment formalism for contradicting evidence in the context of multi-agent ontology mapping. Evidence combination using the Dempster rule tends to ignore contradictory evidence, and contemporary approaches for managing these conflicts introduce additional computational complexity, i.e. increased response time of the system. On the Semantic Web, ontology mapping systems that need to interact with end users in real time cannot afford prolonged computation. In this work, we make a step towards formalising the elimination of contradicting evidence, so that the original Dempster combination rule can be utilised without introducing additional complexity. Our proposed solution incorporates the fuzzy voting model into Dempster–Shafer theory. Finally, we present a case study showing how our approach improves ontology mapping.

15.
Knowledge and Information Systems - Knowledge-based systems developed based on Dempster–Shafer theory and prospect theory enhance decision-making under uncertainty. But at times, the...

16.
The Dempster–Shafer (D–S) theory of evidence is introduced to improve fuzzy inference in complex stochastic environments. The Dempster–Shafer based fuzzy set (DFS) is first proposed, together with its union and intersection operations, to capture the principal stochastic uncertainties. Fuzzy inference is then modified based on an extended Dempster rule of combination. This new approach is able to capture the stochastic disturbance acting on fuzzy membership functions and provides more effective inference under strong stochastic uncertainty. Finally, numerical simulation and experimental prediction of wind speed are conducted to show the potential of the proposed method in stochastic modeling.

17.
In real classification problems, intrinsically vague information often coexists with conditions of "lack of specificity" originating from evidence not strong enough to induce knowledge, but only degrees of belief or credibility regarding class assignments. The problem is addressed here by proposing a fuzzy Dempster–Shafer model (FDS) for multisource classification purposes. The salient aspect of the work is the definition of an empirical learning strategy for the automatic generation of fuzzy Dempster–Shafer classification rules from a set of exemplified training data. Dempster–Shafer measures of uncertainty are semantically related to conditions of ambiguity among the data and are then automatically set during the learning process. Partial reduced beliefs in class assignments are then induced and explicitly represented when generating classification rules. The fuzzy deductive apparatus has been modified and extended to integrate the Dempster–Shafer propagation of evidence. The strategy has been applied to a standard classification problem in order to develop a sensitivity analysis in an easily controlled domain. A second experimental test has been conducted in the field of natural risk assessment, where vagueness and lack-of-specificity conditions are prevalent. These empirical tests show that classification benefits from the combination of the fuzzy and Dempster–Shafer models, especially when conditions of lack of specificity among the data are prevalent.

18.
Uncertainty in service management stems from the incompleteness and vagueness of the conditioning attributes that characterize a service. In particular, location-based services often have complex interaction mechanisms in terms of their neighborhood relationships. Classical location service models require rigorous parameters and conditioning attributes and offer limited flexibility to incorporate imprecise or ambiguous evidence. In this paper we develop a formal model of uncertainty in service management. We develop a rough set and Dempster–Shafer evidence theory based formalism to objectively represent the uncertainty inherent in the process of service discovery, characterization, and classification. Rough set theory is ideally suited for dealing with limited resolution, vague and incomplete information, while Dempster–Shafer evidence theory provides a consistent approach to modelling an expert's belief and ignorance in the classification decision process. Integrating these two formal approaches in the spatial domain provides a way to model an expert's belief and ignorance in service classification. In an application scenario of the model we use a cognitive map of retail site assessment, which reflects the partially subjective assessment process. The uncertainty hidden in the cognitive map can be consistently formalized using the proposed model. Thus we provide a naturalistic means of incorporating both qualitative aspects of intuitive knowledge and hard empirical information for service management within a formal uncertainty framework.

19.
We discuss the Dempster–Shafer belief theory and describe its role in representing imprecise probabilistic information. In particular, we note its use of intervals for representing imprecise probabilities. We note in fuzzy set theory that there are two related approaches used for representing imprecise membership grades: interval-valued fuzzy sets and intuitionistic fuzzy sets. We indicate that the first of these, interval-valued fuzzy sets, is in the same spirit as the Dempster–Shafer representation, since both use intervals. Using a relationship analogous to the one that exists between interval-valued fuzzy sets and intuitionistic fuzzy sets, we obtain from the interval-valued view of the Dempster–Shafer model an intuitionistic view of the Dempster–Shafer model. Central to this view is the use of an intuitionistic statement, a pair of values (Bel(A), Dis(A)), to convey information about the value of a variable lying in the set A. We suggest methods for combining intuitionistic statements and making inferences from propositions of this type.
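A small sketch of the intuitionistic reading follows, under the assumption that Dis(A) is the belief committed against A, i.e. Bel of the complement of A, which equals 1 - Pl(A); the mass function in the example is invented for illustration.

```python
def bel_dis_pair(mass, A):
    """Intuitionistic-style pair (Bel(A), Dis(A)) from a mass function.

    Dis(A) is read here as the belief committed against A, i.e. Bel of the
    complement of A, which equals 1 - Pl(A); the gap 1 - Bel - Dis is the
    hesitation left by the imprecision of the evidence.
    """
    bel = sum(m for B, m in mass.items() if B <= A)
    pl = sum(m for B, m in mass.items() if B & A)
    return bel, 1.0 - pl

# Invented mass function over the frame {a, b, c}.
mass = {
    frozenset({'a'}): 0.4,
    frozenset({'b', 'c'}): 0.3,
    frozenset({'a', 'b', 'c'}): 0.3,
}
print(bel_dis_pair(mass, frozenset({'a'})))   # approximately (0.4, 0.3)
```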

20.
In approximate reasoning, aggregation of multiple measures representing uncertainty, belief, or desirability may be achieved by defining an appropriate combination operator. Formalisms such as probability theory and Dempster–Shafer evidence theory have proposed specific forms for these operators. Ad-hoc approaches to combination have also been put forth, a classical example being the MYCIN calculus of certainty factors. In the present paper we present an analytical theory of combination operators based on the idea that certain combination operators are characterized by special geometric frames of reference or systems of coordinates in which the operators reduce to the canonical arithmetic sum. The cornerstone of our theory is an algorithm that determines whether a given combination operator can be so reduced, and that explicitly constructs a normalizing reference frame directly from the operator whenever such a frame exists. Our approach provides a natural nonlinear scaling mechanism that extends operators to parameterized families, allowing one to adjust the sensitivity of the operators to new information and to control the asymptotic growth rate of the aggregate values produced by the operators in the presence of an unbounded number of information sources. We also give a procedure to reconstruct the normalizing reference frame directly from the group of nonlinear scaling operations associated with it.
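As a concrete instance of such a normalizing frame (an illustration, not the paper's algorithm), the MYCIN rule for combining two positive certainty factors, x + y - xy, reduces to ordinary addition under the coordinate change g(x) = -ln(1 - x), since 1 - (x + y - xy) = (1 - x)(1 - y). The values in the example are arbitrary.

```python
import math

def cf_combine(x, y):
    """MYCIN-style combination of two positive certainty factors."""
    return x + y - x * y

def to_additive_frame(x):
    """Normalizing coordinate change g(x) = -ln(1 - x): in these coordinates
    cf_combine becomes ordinary addition, because 1 - (x + y - xy) = (1-x)(1-y)."""
    return -math.log(1.0 - x)

x, y = 0.6, 0.3
print(to_additive_frame(cf_combine(x, y)))           # ~1.2730
print(to_additive_frame(x) + to_additive_frame(y))   # ~1.2730 as well
```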
