Similar Literature
19 similar documents found
1.
Bayesian networks have become a popular technique for representing and reasoning with probabilistic information. The fuzzy functional dependency is an important kind of data dependency in relational databases with fuzzy values. The purpose of this paper is to set up a connection between these data dependencies and Bayesian networks. The connection is made through a set of methods that enable people to obtain the most information about independence conditions from fuzzy functional dependencies.

2.
Knowledge-based modeling is a trend in complex-system modeling technology. To extract process knowledge from an information system, an approach to knowledge modeling based on interval-valued fuzzy rough sets is presented in this paper, in which attribute reduction is the key to obtaining a simplified knowledge model. By defining dependency and inclusion functions, algorithms for attribute reduction and rule extraction are obtained. Approximate inference plays an important role in the development of fuzzy systems. To improve the inference mechanism, we provide a method of similarity-based inference in an interval-valued fuzzy environment. Combining the conventional compositional rule of inference with similarity-based approximate reasoning, an inference result is deduced via rule translation, similarity matching, relation modification, and projection operations. This approach is applied to the problem of predicting welding distortion in marine structures, and the experimental results validate the effectiveness of the proposed methods of knowledge modeling and similarity-based inference.
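The similarity-matching step can be illustrated with a toy measure. The sketch below compares two interval-valued fuzzy sets by their membership-interval endpoints; the particular measure (one minus the mean absolute endpoint difference) and the function name are illustrative assumptions, not the definitions used in the paper.

```python
def iv_similarity(a, b):
    """Similarity of two interval-valued fuzzy sets over the same finite
    universe, each given as a list of (lower, upper) membership intervals.
    Illustrative measure: 1 minus the mean absolute endpoint difference."""
    assert len(a) == len(b)
    diff = sum(abs(al - bl) + abs(au - bu)
               for (al, au), (bl, bu) in zip(a, b))
    return 1.0 - diff / (2 * len(a))
```

In a similarity-based inference scheme, the rule whose antecedent scores highest against the observed input would be the one selected for firing.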

3.
The information content of rules and rule sets and its application
The information content of rules is categorized into inner mutual information content and outer impartation information content. The conventional objective interestingness measures based on information theory are in fact all inner mutual information: they represent the confidence of rules and the mutual information between the antecedent and consequent. Moreover, almost all of these measures lose sight of the outer impartation information, which is conveyed to the user and helps the user make decisions. We put forward the viewpoint that the outer impartation information content of rules and rule sets can be represented by relations from the input universe to the output universe. With binary relations, the interaction of rules in a rule set can easily be represented by the union and intersection operators. Based on the entropy of relations, the outer impartation information content of rules and rule sets is well measured. Then, the conditional information content of rules and rule sets, the independence of rules and rule sets, and the inconsistent knowledge of rule sets are defined and measured. The properties of these new measures are discussed and some interesting results are proven, for example that the information content of a rule set may be greater than the sum of the information content of the rules in the set, and that the conditional information content of rules may be negative. Finally, the applications of these new measures are discussed: a new method for the appraisal of rule-mining algorithms and two rule-pruning algorithms, λ-choice and RPClC, are put forward. These new methods and algorithms are advantageous in meeting the need for more efficient decision information.
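The "inner mutual information" of a single rule A → B can be estimated from transaction counts. The sketch below computes I(A; B) in bits from a 2×2 contingency table; the function and parameter names are illustrative, not taken from the paper.

```python
from math import log2

def rule_mutual_information(n, n_a, n_b, n_ab):
    """Mutual information I(A;B) in bits for a rule A -> B, given
    n transactions, n_a containing A, n_b containing B, n_ab containing both."""
    cells = {
        (1, 1): n_ab,
        (1, 0): n_a - n_ab,
        (0, 1): n_b - n_ab,
        (0, 0): n - n_a - n_b + n_ab,
    }
    p_a = {1: n_a / n, 0: 1 - n_a / n}
    p_b = {1: n_b / n, 0: 1 - n_b / n}
    mi = 0.0
    for (i, j), c in cells.items():
        p = c / n
        if p > 0:  # empty cells contribute nothing to the sum
            mi += p * log2(p / (p_a[i] * p_b[j]))
    return mi
```

When A and B occur independently the measure is 0; when B occurs exactly when A does, it equals the entropy of A.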

4.
With the highly developed hardware of today's PCs, possibilities arise to implement programming environments on this kind of computer. To reduce the computation time and memory space required by the implemented algorithms, new optimization approaches in algorithm design are demanded. The purpose of this work is to explore and analyse possibilities for reducing the required memory space through the elimination of superfluous grammar rules created during the process of recognition.

5.
A systematic, efficient compilation method for query evaluation in Deductive Databases (DeDB) is proposed in this paper. In order to eliminate redundancy and to minimize the potentially relevant facts, which are two key issues for the efficiency of a DeDB, the compilation process is decomposed into two phases. The first is the pre-compilation phase, which is responsible for the minimization of the potentially relevant facts. The second, which we refer to as the general compilation phase, is responsible for the elimination of redundancy. The rule/goal graph devised by J. D. Ullman is appropriately extended and used as a uniform formalism. Two general algorithms, corresponding to the two phases respectively, are described both intuitively and formally.
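As a minimal illustration of what a deductive database evaluates (not of the paper's compilation method), the sketch below performs naive bottom-up evaluation of the classic ancestor program over a set of parent facts, iterating the rules to a fixpoint.

```python
def eval_ancestor(parent):
    """Naive bottom-up evaluation of the Datalog program:
         ancestor(X, Y) :- parent(X, Y).
         ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
       `parent` is a set of (x, y) pairs; returns the ancestor relation."""
    ancestor = set(parent)          # first rule: every parent is an ancestor
    while True:
        # second rule: join parent with the current ancestor relation
        derived = {(x, z) for (x, y) in parent
                          for (y2, z) in ancestor if y == y2}
        if derived <= ancestor:     # fixpoint reached: nothing new derived
            return ancestor
        ancestor |= derived
```

Real systems avoid recomputing old joins (semi-naive evaluation) and restrict the computation to facts relevant to the query, which is exactly the kind of minimization the pre-compilation phase above targets.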

6.
7.
The effectiveness of many SAT algorithms is mainly reflected in their significant performance on one or several classes of specific SAT problems. Different kinds of SAT algorithms each have their own hard instances. Therefore, to achieve better performance across all kinds of problems, a SAT solver should know how to select different algorithms according to the features of instances. In this paper, the differences among several effective SAT algorithms are analyzed, and two new parameters, φ and δ, are proposed to characterize the features of SAT instances. Experiments are performed to study the relationship between SAT algorithms and some statistical parameters, including φ and δ. Based on this analysis, a strategy is presented for designing a faster SAT tester by carefully combining some existing SAT algorithms. With this strategy, a faster SAT tester that solves many kinds of SAT problems is obtained.
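The selection strategy can be sketched as a simple feature-based dispatcher. The paper's φ and δ are not reproduced here; instead the well-known clause-to-variable ratio serves as an illustrative instance feature, and the solver names and thresholds are hypothetical placeholders.

```python
def pick_solver(num_clauses, num_vars):
    """Choose a SAT algorithm from a simple instance feature.
    The clause-to-variable ratio stands in for the paper's phi/delta;
    solver names and thresholds are hypothetical placeholders."""
    ratio = num_clauses / num_vars
    if ratio < 3.0:
        return "local_search"        # under-constrained: stochastic search
    elif ratio < 5.0:
        return "dpll_with_restarts"  # near the hard phase-transition region
    else:
        return "clause_learning"     # over-constrained: conflict-driven solver
```

A real portfolio tester would combine several such features and fall back to another algorithm when the chosen one times out.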

8.
In this paper, a new dynamical evolutionary algorithm (DEA) is presented based on the theory of statistical mechanics. The novelty of this dynamical evolutionary algorithm is that all individuals in a population (called particles in a dynamical system) run and search with their population's evolution driven by a new selection mechanism. This mechanism simulates the principle of molecular dynamics and is easy to design and implement. A basic theoretical analysis of the dynamical evolutionary algorithm is given, and as a consequence two stopping criteria for the algorithm are derived from the principle of energy minimization and the law of increasing entropy. In order to verify the effectiveness of the scheme, DEA is applied to solving some typical numerical function minimization problems that are poorly solved by traditional evolutionary algorithms. The experimental results show that DEA is fast and reliable.
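A minimal sketch of such particle dynamics, under the simplifying assumption that selection repeatedly replaces the worst particle (highest "energy") with a perturbed copy of the best; the paper's actual DEA update and stopping criteria are not reproduced here.

```python
import random

def dea_sketch(f, dim=2, pop=20, iters=800, seed=0):
    """Toy dynamical-EA sketch: minimize f by replacing the worst particle
    with a Gaussian perturbation of the best one at each step."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(pop)]
    for _ in range(iters):
        xs.sort(key=f)                           # order particles by "energy"
        best = xs[0]
        xs[-1] = [b + rng.gauss(0.0, 0.1) for b in best]  # replace the worst
    return min(xs, key=f)
```

On the sphere function the best particle drifts toward the origin, illustrating selection as energy minimization.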

9.
A new combination rule based on Dezert-Smarandache theory (DSmT) is proposed to deal with the conflict evidence resulting from the non-exhaustivity of the discernment frame. A two-dimensional measure factor in Dempster-Shafer theory (DST) is extended to DSmT to judge the conflict degree between evidence. The original DSmT combination rule or new DSmT combination rule can be selected for fusion according to this degree. Finally, some examples in simultaneous fault diagnosis of motor rotor are given to illustrate the effectiveness of the proposed combination rule.
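For reference, the classical Dempster rule of combination from DST (which DSmT generalizes) can be written compactly; mass functions are dicts mapping frozenset focal elements to masses, and K below is the classical conflict mass that such conflict measures assess.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions, each a dict
    mapping frozenset focal elements to masses; K accumulates conflict."""
    combined, K = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                K += ma * mb  # product mass falls on the empty set: conflict
    if K >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {s: v / (1.0 - K) for s, v in combined.items()}
```

With highly conflicting sources (K close to 1) the normalization step produces counterintuitive results, which is precisely the situation alternative rules such as those of DSmT are designed to handle.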

10.
Optimized compilation schemas in RETE networks and their formal verification in PVS
刘晓建  陈平 《计算机科学》2003,30(6):168-171
In the compilation of a rule program into its intermediate code, the RETE network, optimizing compilation is an important compiler schema and a necessary step in compiler verification. In this paper, we discuss optimization schemas in rule-program compilation and prove the semantic-equivalence theorems of these schemas. First, the structure of the RETE network and its PVS specification are presented. Second, three kinds of optimization schemas are listed. Then, algorithms evaluating the semantics of the target RETE network are given. Finally, we prove the semantic-equivalence theorems with the theorem prover PVS (Prototype Verification System).

11.
Based on rough set theory and relational database theory, this paper systematically studies the relationship between rough relational databases (RRDB) and fuzzy relational databases (FRDB) from the aspects of functional dependencies, normal-form theory, and the Armstrong axioms. The results show that both fuzzy functional dependencies and rough functional dependencies are generalizations of classical functional dependencies, and that fuzzy normal-form theory is an extension of the classical normal forms, while rough normal-form theory forms a system of its own. In terms of inference rules, both conform to the Armstrong axioms to varying degrees.
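The Armstrong axioms mentioned above are usually applied through the standard attribute-closure algorithm, sketched below for classical (crisp) functional dependencies; extending it to the fuzzy and rough cases is the subject of the paper.

```python
def attr_closure(attrs, fds):
    """Closure of an attribute set under a list of functional dependencies,
    each given as a pair (lhs, rhs) of attribute-name sets. Repeatedly fire
    any FD whose left-hand side is already covered, until nothing changes."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result
```

A set X is a key of relation R exactly when the closure of X contains every attribute of R.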

12.
Functional dependencies are an important form of semantic expression for both relational databases and XML documents. This paper analyzes the functional dependencies, partial functional dependencies, and transitive functional dependencies present in XML documents. An XML second normal form is proposed to normalize partial functional dependencies, and an XML third normal form is proposed to normalize partial and transitive functional dependencies. The corresponding algorithms are given, the lossless-join and functional-dependency-preservation properties are proven, and termination and time complexity are analyzed.

13.
The normalization problem for XML documents is studied from the perspective of eliminating data redundancy within a document. First, examples of data redundancy in XML and its elimination are introduced; based on functional dependencies, the concept of a normalized DTD and rules for normalizing XML DTDs are proposed. Next, through the definition of XML multivalued dependencies (MVDs), an algorithm for eliminating redundant schemas is given; finally, an algorithm covering XML schemas and the elimination of redundant schemas is given. Compared with other research on XML schemas, in the hierarchical schemas produced by the algorithm the sets of complete MVDs and embedded MVDs are derived from the given set of MVDs, and the resulting XML schemas both eliminate redundancy and satisfy the lossless-join property.

14.
Normal forms for XML documents
The concepts of XML functional dependencies, partial functional dependencies, and transitive functional dependencies are given, and on this basis three XML normal forms are proposed: 1XNF, 2XNF, and 3XNF. The concept of lossless-join decomposition of DTDs is then proposed, and two algorithms are given for losslessly decomposing a DTD into 2XNF and 3XNF.

15.
Optimization methods for neural networks are generally limited to the learning algorithm and the input attributes. The high-dimensional mappings fitted by neural networks contain complex internal attribute-dependency relationships, which traditional optimization methods do not analyze. Taking functional dependency theory as a foundation, this paper proposes an attribute-dependency theory, presents the relevant definitions of attribute dependency, and proves the related theorems. Combining this with the radial basis function (RBF) neural network, a structure-optimization method for RBF neural networks based on attribute-dependency theory (ADO-RBF) is proposed. Finally, an example demonstrates the feasibility of the method in practical applications.

16.
Rough functional dependencies and their inference mechanism
On the basis of introducing rough functional dependencies, the concepts of antecedent upper/lower redundancy factors and consequent upper/lower redundancy factors are proposed, and the properties and inference rules of rough functional dependencies are studied. Finally, the relationships among rough functional dependencies, classical functional dependencies, and fuzzy functional dependencies are analyzed.

17.
Functional dependencies are an important form of semantic expression in both relational databases and XML documents. By analyzing how the manifestation of functional dependencies differs between XML documents and relational databases, the concept of XML functional dependencies based on path expressions in a DTD is proposed. It can express not only functional dependencies between the attributes and values of elements but also functional dependencies between elements. A complete set of inference rules for XML functional dependencies is given, which is significant for solving the implication problem of XML functional dependencies.
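A path-based XML functional dependency of the form (p1, …, pk) → q within a context element can be checked against a document instance by grouping. The sketch below uses ElementTree relative paths; the function and its semantics are an illustrative check, not the paper's formal definition.

```python
import xml.etree.ElementTree as ET

def holds_xml_fd(root, context_tag, lhs_paths, rhs_path):
    """Check whether, across all `context_tag` elements under `root`, the
    tuple of text values at lhs_paths determines the value at rhs_path."""
    seen = {}
    for node in root.iter(context_tag):
        key = tuple(node.findtext(p) for p in lhs_paths)
        val = node.findtext(rhs_path)
        if key in seen and seen[key] != val:
            return False                 # same key, different value: violated
        seen[key] = val
    return True
```

Here the dependency no → title holds in a document where every course element with the same course number carries the same title, and fails as soon as one number maps to two titles.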

18.
Normalization of DTDs
A well-designed DTD is essential for XML applications, and this problem is studied from the perspective of eliminating data redundancy within documents. Functional dependencies, an important component of data semantics, are introduced into the XML domain. The functional dependencies given here can be absolute or relative, and keys are merely a special case of them. Logical implication and the corresponding inference rules are discussed, and the soundness and completeness of the rule set are proven. Based on functional dependencies, the concept of a normalized DTD is proposed, and an algorithm for transforming a DTD into normalized form is given.

19.
XML weak functional dependencies are the functional dependencies that arise once null-value theory is introduced into XML databases. On the basis of concepts such as null values and incomplete tree tuples, weak functional dependencies and single-dependency sets are defined, and the single-dependency-set decision theorem and the termination of the decision procedure are proven.


Copyright©北京勤云科技发展有限公司    京ICP备09084417号-23

京公网安备 11010802026262号