Similar Documents
20 similar documents found (search time: 218 ms)
1.
Automated theorem proving has long been one of the most important problems in artificial intelligence. Resolution-based methods decide the satisfiability of a clause set by trying to derive the empty clause. Theorem proving based on the extension rule is, in a certain sense, dual to the resolution principle: it decides satisfiability by checking whether the clause set can derive the clause set consisting of all maxterms. Building on a study of the extension rule, this paper introduces the concept of the semi-extension rule and proposes SER, a theorem proving algorithm based on it; the correctness, completeness, and complexity of the algorithm are then analyzed and proved. Experimental results show that SER runs significantly faster than the resolution-based directional resolution algorithm DR and the extension-rule-based algorithms IER and NER.
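To make the duality concrete: a clause C over n variables can be extended to exactly 2^(n-|C|) maxterms, and a clause set is unsatisfiable exactly when its clauses together cover all 2^n maxterms. The sketch below shows only this basic counting check that ER-style algorithms build on (inclusion-exclusion over clause subsets), not the paper's SER algorithm; the integer literal encoding and function names are illustrative assumptions.

```python
from itertools import combinations

def covered_maxterms(clauses, n_vars):
    """Count maxterms reachable from the clause set by the extension rule.

    Clauses are frozensets of non-zero ints: +i / -i for variable i.
    A clause C extends to the 2^(n - |C|) maxterms containing it; a group of
    clauses shares maxterms only if their union has no complementary pair.
    Inclusion-exclusion over all non-empty clause subsets (exponential, so
    only suitable as an illustration on tiny inputs).
    """
    total = 0
    for k in range(1, len(clauses) + 1):
        for group in combinations(clauses, k):
            union = frozenset().union(*group)
            if any(-lit in union for lit in union):   # complementary pair -> no shared maxterm
                continue
            total += (-1) ** (k + 1) * 2 ** (n_vars - len(union))
    return total

def unsat_by_extension_rule(clauses, n_vars):
    """Unsatisfiable iff every one of the 2^n maxterms is covered."""
    return covered_maxterms(clauses, n_vars) == 2 ** n_vars

# (x1) and (not x1) cover all maxterms over one variable -> unsatisfiable
print(unsat_by_extension_rule([frozenset({1}), frozenset({-1})], 1))   # True
# (x1 or x2) alone leaves maxterms uncovered -> satisfiable
print(unsat_by_extension_rule([frozenset({1, 2})], 2))                 # False
```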

2.
曹锋  徐扬  钟建  宁欣然 《计算机科学》2020,47(3):217-221
First-order logic theorem proving is a core foundation of artificial intelligence, and studying the theory and efficient implementation of first-order automated theorem provers is of significant academic value. Current first-order provers first reduce the size of the clause set through preprocessing and then decide the theorem by deduction. The preprocessing methods currently used in provers generally consider only the symbol relevance between clauses and the goal clause, and do not capture the deduction relations between clauses that arise from complementary pairs of literals. To characterize inter-clause relations from the perspective of deduction during preprocessing, this paper defines the notion of goal deduction distance, gives a method for computing it, and proposes a first-order clause-set preprocessing method based on goal deduction distance. The original clause set is first reduced by removing subsumed clauses and applying the pure-literal deletion rule; then the goal deduction distances of the literals and clauses in the remaining set are computed with respect to the goal clause, and the clause set is further reduced by setting a threshold on the clause deduction distance. The method was added to the top-ranked prover Vampire and tested on the problems of the standard first-order division of the 2017 international automated theorem prover competition. Within the standard 300 s, Vampire 4.1 with the proposed preprocessing proved 4 more theorems than the original Vampire 4.1 and proved 10 theorems that Vampire 4.1 could not, i.e., 13.5% of its unproved theorems. Among the proved theorems, the preprocessing method reduced 77.2% of the clause sets, with a maximum reduction of 51.7% in clause-set size. The experimental results show that the proposed preprocessing method is effective: it reduces the size of first-order clause sets and improves the proving power of first-order automated theorem provers.
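The two standard reductions mentioned at the start of the pipeline, subsumption removal and the pure-literal rule, look roughly as follows at the propositional level (an illustrative sketch only; the paper's goal-deduction-distance computation and the first-order details are not reproduced, and all names are my own):

```python
def remove_subsumed(clauses):
    """Drop clauses subsumed by (i.e., supersets of) some other clause."""
    clauses = sorted({frozenset(c) for c in clauses}, key=len)
    kept = []
    for c in clauses:
        if not any(k <= c for k in kept):   # an already-kept shorter clause subsumes c
            kept.append(c)
    return kept

def pure_literal_elimination(clauses):
    """Delete every clause containing a pure literal (one whose complement never occurs)."""
    while True:
        literals = {lit for c in clauses for lit in c}
        pure = {lit for lit in literals if -lit not in literals}
        if not pure:
            return clauses
        clauses = [c for c in clauses if not (c & pure)]

def preprocess(clauses):
    """Iterate both reductions to a fixpoint, as a pre-pass before deduction."""
    clauses = [frozenset(c) for c in clauses]
    while True:
        reduced = pure_literal_elimination(remove_subsumed(clauses))
        if set(reduced) == set(clauses):
            return reduced
        clauses = reduced

# (x1), (x1 or x2), (x3 or x4): the second clause is subsumed and every
# remaining literal is pure, so the whole (trivially satisfiable) set reduces away.
print(preprocess([{1}, {1, 2}, {3, 4}]))   # []
```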

3.
张立明  欧阳丹彤  赵毅 《软件学报》2015,26(9):2250-2261
Theorem proving based on the extension rule is, in a certain sense, dual to the resolution principle: satisfiability is decided by checking whether the clause set can derive all maxterms. The IER (improved extension rule) algorithm is incomplete: when it determines that a subspace of the maxterm space is unsatisfiable, it cannot decide the satisfiability of the clause set and has to fall back on the ER (extension rule) algorithm, which lowers efficiency. Through a study of the maxterm space of a clause set, this paper gives a method for handling the subspaces obtained by decomposing the maxterm space, and, through further study of the extension rule, a method PSER (partial semi-extension rule) for deciding the satisfiability of a partial maxterm space. When IER determines that a subspace is unsatisfiable, PSER can be called to decide the satisfiability of the complement of that subspace, which yields the satisfiability of the clause set and avoids re-running ER when the satisfiability of a maxterm subspace cannot be decided, making IER more nearly complete. On this basis, the DPSER (degree partial semi-extension rule) theorem proving method is also proposed. Experimental results show that the proposed DPSER and IPSER run significantly faster than the resolution-based directional resolution algorithm DR and the IER and NER algorithms.

4.
A New Theorem Proving Algorithm Based on the Extension Rule   (Cited 3 times: 0 self-citations, 3 by others)
Theorem proving based on the extension rule is a new proving method complementary to resolution. Through an in-depth study of the extension rule, this paper first presents an important property of the rule and designs and implements an algorithm for deciding that property; the time and space complexity of the decision algorithm are analyzed and proved. On this basis, a new extension-rule-based theorem proving algorithm, NER, is proposed, which turns the problem of deciding the satisfiability of a clause set into a series of inclusion tests on sets of literals rather than a counting problem. Experimental results show that NER runs significantly faster than the earlier extension-rule algorithm IER and the resolution-based directional resolution algorithm DR, by up to two orders of magnitude on some problems.

5.
《计算机科学与探索》2016,(12):1783-1792
The S5 system is a modal axiom system with strong knowledge representation and reasoning capabilities, and it underlies non-classical logics such as epistemic logic and doxastic logic. Based on the Kripke semantic model and some of the axioms of S5, this paper studies the properties of the propositional modal logic S5 axiom system, analyzes the characteristics of a representative class of standard modal clause sets in S5, and proposes a propositional modal reasoning algorithm based on the extension rule (propositional modal clausal reasoning based on novel extension rule, PMCRNER). To address the high time complexity of the naive algorithm, the algorithm is parallelized at both coarse and fine granularity by exploiting the potential correlations between tasks; a theoretical framework for the parallel algorithm PPMCRNER (parallel PMCRNER) is proposed and compared with the basic algorithm. Experimental results show that PPMCRNER achieves good speedup when reasoning over unsatisfiable clause sets, providing a feasible approach for further research on modal reasoning methods of high time complexity.

6.
This paper proposes a parallel algorithm for mining maximal frequent itemsets based on a Boolean-matrix FP-array. The algorithm uses prefix-based partitioning to divide the transaction data set into smaller subspaces and assigns itemsets with full containment relations to the same processor; each processor site Si mines its local maximal frequent itemsets and sends the results to the master site S, which finally obtains the global maximal frequent itemsets.
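A minimal sequential sketch of the partition-and-merge structure described above (a simplification under my own assumptions: each prefix partition is mined by brute force and could be dispatched to a separate processor site Si, while the Boolean-matrix FP-array representation itself is omitted):

```python
from itertools import combinations

def frequent_itemsets_with_prefix(transactions, prefix_item, items, min_support):
    """Local mining for one partition: frequent itemsets whose smallest item is prefix_item."""
    found = []
    tail = [i for i in items if i > prefix_item]
    for k in range(0, len(tail) + 1):
        for combo in combinations(tail, k):
            itemset = frozenset((prefix_item,) + combo)
            support = sum(1 for t in transactions if itemset <= t)
            if support >= min_support:
                found.append(itemset)
    return found

def maximal_only(itemsets):
    """Keep itemsets not properly contained in any other itemset of the collection."""
    return [s for s in itemsets if not any(s < t for t in itemsets)]

def parallel_max_frequent(transactions, min_support):
    transactions = [frozenset(t) for t in transactions]
    items = sorted({i for t in transactions for i in t})
    local_results = []
    # Each iteration is independent and could run on its own processor site Si;
    # the master site S merges the local results and filters for global maximality.
    for prefix_item in items:
        local = frequent_itemsets_with_prefix(transactions, prefix_item, items, min_support)
        local_results.extend(maximal_only(local))
    return maximal_only(local_results)

txns = [{1, 2, 3}, {1, 2}, {2, 3}, {1, 2, 3}]
print(parallel_max_frequent(txns, min_support=2))   # [frozenset({1, 2, 3})]
```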

7.
A First-Order Clause Search Method   (Cited 1 time: 0 self-citations, 1 by others)
Deciding the satisfiability of clause sets is one of the hot topics in automated theorem proving. This paper proposes a clause search method for deciding the satisfiability of a propositional clause set Φ: the method looks for a common non-extensible clause C of the clauses in Φ, and Φ is satisfiable if and only if such a C is found, in which case the complements of the literals in C form a model. Combined with the partial instantiation method, the clause search method is lifted to first-order logic. The first-order clause search method can decide the M-satisfiability of clause sets and is terminating, sound, and complete, making it an effective method for deciding the satisfiability of clause sets.

8.
Building on extension-rule-based knowledge compilation algorithms, this paper proposes two heuristic strategies, MCN and MO. The MCN and MO strategies use information in the clause set to select the next clause and variable, respectively, reducing the number of applications of the extension rule and thereby the size of the compiled target clause set. On this basis, the MCN_KCER, MO_KCER, and MCN_MO_KCER algorithms are designed and implemented. Experimental results show that both heuristic strategies substantially reduce the size of the compiled clause set, and using them together is even more effective: the compiled clause sets are 1/3 to 1/39 the size produced by the original algorithm, which greatly improves the efficiency of the subsequent online reasoning phase.

9.
On the Completeness of Heuristic Attribute Reduction Algorithms and Rule Discovery Algorithms   (Cited 7 times: 0 self-citations, 7 by others)
唐彬  李龙澍 《计算机工程与应用》2003,39(30):191-194,229
Finding a minimal attribute set has been proved to be an NP-hard problem, so heuristic algorithms are generally used to find approximately optimal solutions, yet the completeness of such heuristics is seldom discussed in depth in the literature. This paper studies the completeness problem from two aspects: the two situations in which redundancy can arise, and the possibility that the attribute set found is not a minimal attribute reduct. For several algorithms, constructive examples are given for each case; Theorems 1 and 2 in particular provide important guidance for constructing these examples. The paper also considers how to obtain an attribute reduct of a required degree according to the demands of the application. Finally, the rule extraction algorithm is improved and its advantages and disadvantages are analyzed in detail.
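For context, a typical heuristic attribute reduction of the kind whose completeness is questioned here adds attributes greedily by dependency gain, so it returns a reduct that is not necessarily minimal. The sketch below is a generic rough-set positive-region version with invented names, not an algorithm from the paper:

```python
def positive_region_size(rows, attrs, decision):
    """Number of rows whose equivalence class (w.r.t. attrs) is consistent on the decision."""
    classes = {}
    for row in rows:
        key = tuple(row[a] for a in attrs)
        classes.setdefault(key, set()).add(row[decision])
    return sum(
        sum(1 for row in rows if tuple(row[a] for a in attrs) == key)
        for key, decisions in classes.items() if len(decisions) == 1
    )

def greedy_reduct(rows, condition_attrs, decision):
    """Add the attribute with the largest dependency gain until full dependency is reached."""
    full = positive_region_size(rows, condition_attrs, decision)
    reduct = []
    while positive_region_size(rows, reduct, decision) < full:
        best = max(
            (a for a in condition_attrs if a not in reduct),
            key=lambda a: positive_region_size(rows, reduct + [a], decision),
        )
        reduct.append(best)
    return reduct

table = [
    {"a": 0, "b": 0, "c": 1, "d": "no"},
    {"a": 0, "b": 1, "c": 1, "d": "yes"},
    {"a": 1, "b": 0, "c": 0, "d": "no"},
    {"a": 1, "b": 1, "c": 0, "d": "yes"},
]
print(greedy_reduct(table, ["a", "b", "c"], "d"))   # ['b']; greedy, so not guaranteed minimal in general
```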

10.
An Improved Association Rule Mining Algorithm Based on Maximal Cliques and FP-Tree   (Cited 16 times: 0 self-citations, 16 by others)
Combining the strengths of the FP-Tree algorithm for association rule mining and of maximal-clique theory from graph theory, the main contributions are: (1) an improved method for generating frequent 2-itemsets using an adjacency matrix; (2) the concept of maximal ordered frequent sets, together with proofs of the equivalence of the Head relation, a partition theorem, a local complexity theorem, and a merge-convergence value-domain theorem; (3) the design and implementation of the MaxCFPTree algorithm based on maximal-clique partitioning, whose scanning time complexity is below O(n²); (4) experiments verifying the correctness of the algorithm. The new method eases the conflict between a huge number of items and limited memory, and improves system efficiency and scalability.
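Contribution (1), generating frequent 2-itemsets from an adjacency matrix, can be illustrated with a plain co-occurrence count (a generic sketch with invented names; the MaxCFPTree algorithm and the maximal-clique machinery are not reproduced):

```python
import numpy as np

def frequent_2_itemsets(transactions, min_support):
    """Count pairwise co-occurrences in an item x item matrix and read off frequent pairs."""
    items = sorted({i for t in transactions for i in t})
    index = {item: k for k, item in enumerate(items)}
    adj = np.zeros((len(items), len(items)), dtype=int)
    for t in transactions:
        idx = sorted(index[i] for i in t)
        for p in range(len(idx)):
            for q in range(p + 1, len(idx)):
                adj[idx[p], idx[q]] += 1        # upper triangle holds co-occurrence counts
    return [
        (items[p], items[q])
        for p in range(len(items))
        for q in range(p + 1, len(items))
        if adj[p, q] >= min_support
    ]

txns = [{"a", "b", "c"}, {"a", "b"}, {"b", "c"}, {"a", "c"}]
print(frequent_2_itemsets(txns, min_support=2))   # [('a', 'b'), ('a', 'c'), ('b', 'c')]
```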

11.
Resolution based theorem proving systems require the conversion of predicate logic formulae into clausal normal form. The multiplication from disjunctive into conjunctive forms in general produces a lot of tautologous and subsumed clauses, which are relatively hard to recognize in later stages of the proof. In this paper an algorithm is presented that avoids the generation of redundant clauses. It is based on the generation of paths through a matrix and produces the set of prime implicants of the original formula.
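A propositional illustration of the path idea (the paper treats general predicate-logic formulae): choosing one literal from each clause of a CNF traces a path through the clause matrix, contradictory paths contribute nothing, and discarding subsumed products leaves exactly the prime implicants. The encoding and function name below are my own.

```python
from itertools import product

def prime_implicants_of_cnf(clauses):
    """Enumerate paths (one literal per clause), drop contradictory ones, keep minimal products.

    Clauses are iterables of non-zero ints (+v / -v). For a propositional CNF the
    surviving minimal products are exactly its prime implicants.
    """
    paths = set()
    for choice in product(*clauses):            # one literal from every clause
        term = frozenset(choice)
        if any(-lit in term for lit in term):   # contradictory product, contributes nothing
            continue
        paths.add(term)
    return [t for t in paths if not any(other < t for other in paths)]   # remove subsumed products

# (a or b) and (not a or c), encoded as a=1, b=2, c=3:
# the prime implicants are {a, c}, {b, not a}, and {b, c}
cnf = [[1, 2], [-1, 3]]
print(prime_implicants_of_cnf(cnf))
```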

12.
This paper introduces some improvements on the intelligent backtracking strategy for forward chaining theorem proving. How to decide a minimal useful consequent atom set for a refutation derived at a node in a proof tree is discussed. In most cases, an unnecessary non-Horn clause used for forward chaining will be split only once. The increase of the search space by invoking unnecessary forward chaining clauses will be nearly linear, not exponential anymore. In this paper, the principle of the proposed method and its correctness are introduced. Moreover, some examples are provided to show that the proposed approach is powerful for forward chaining theorem proving.

13.
Parallelization of deduction strategies: An analytical study   (Cited 1 time: 0 self-citations, 1 by others)
In this paper we present a general analysis of the parallelization of deduction strategies. We classify strategies as subgoal-reduction strategies, expansion-oriented strategies, and contraction-based strategies. For each class we analyze how and what types of parallelism can be utilized. Since the operational semantics of deduction-based programming languages can be construed as subgoal-reduction strategies, our analysis encompasses, at the abstract level, both strategies for deduction-based programming and those for theorem proving. We distinguish different types of parallel deduction based on the granularity of parallelism. These two criteria — the classification of strategies and of types of parallelism — provide us with a framework to treat problems and with a grid to classify approaches to parallel deduction. Within this framework, we analyze many issues, including the dynamicity and size of the database of clauses during the derivation, the possibility of conflicts between parallel inferences, and duplication versus sharing of clauses. We also suggest the type of architectures that may be suitable for each class of strategies. We substantiate our analysis by describing existing methods, emphasizing parallel expansion-oriented strategies and parallel contraction-based strategies for theorem proving. The most interesting and least explored by existing approaches are the contraction-based strategies. The presence of contraction rules — rules that delete clauses — and especially the application of backward contraction, emerges as a key issue for parallelization of these strategies. Backward contraction is the main reason for the impressive experimental success of contraction-based strategies. Our analysis shows that backward contraction makes efficient parallelization much more difficult. In our analysis, coarse-grain parallelism appears to be the best choice for parallelizing contraction-based reasoning. Accordingly, we propose a notion of parallelism at the search level as coarse-grain parallelism for deduction.

14.
Some new approaches to mechanical theorem proving in the first-order predicate calculus are presented. These are based on a natural deduction system which can be used to show that a set of clauses is inconsistent. This natural deduction system distinguishes positive from negative literals and treats clauses having 0, 1, and 2 or more positive literals in three separate ways. Several such systems are presented. The systems are complete and relatively simple and allow a goal to be decomposed into subgoals, and solutions to the subgoals can then be searched for in the same way. Also, the systems permit a natural use of semantic information to delete unachievable subgoals. The goal-subgoal structure of these systems should allow much of the current artificial intelligence methodology to be applied to mechanical theorem proving.

15.
Generating relevant models   (Cited 2 times: 0 self-citations, 2 by others)
Manthey and Bry's model generation approach to theorem proving for FOPC has been greeted with considerable interest. Unfortunately the original presentation of the technique can become arbitrarily inefficient when applied to problems whose statements contain large amounts of irrelevant information. We show how to avoid these problems whilst retaining nearly all the advantages of the basic approach.

16.
A new method for first-order theorem proving based on the Boolean ring approach is proposed. The method is an extension of Hsiang's N-Strategy in two aspects: (1) When the input polynomials are derived from clauses, our method is reduced to a more restricted (but still complete) version of N-Strategy: only maximal atoms in an N-rule are considered for generating new inferences. (2) When the input polynomials are derived from non-clausal formulas, no new inference rules are needed in our method for ensuring completeness. Unlike Kapur and Narendran's method, which considers every pair of polynomials for superposition, our method restricts the pairs to those one of which consists of an odd number of monomials. The completeness proof of our method with the integration of reduction is also provided and is carried out using the technique of semantic trees. The same technique is used to prove the completeness of N-strategy with reduction (using only N-rules and P-rules) for clausal theorem proving, thus settling a long-standing open problem.
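For background, a clause translates into the Boolean ring (AND and exclusive-or over GF(2)) via x ∨ y = x ⊕ y ⊕ xy and ¬x = 1 ⊕ x. The sketch below performs only this translation step, representing a polynomial as a set of monomials combined by XOR; it is an illustration of the input encoding, not of N-Strategy or of the method proposed in the paper.

```python
ONE = frozenset()   # the constant-1 monomial

def xor(p, q):
    """Add polynomials over GF(2): monomials appearing an even number of times cancel."""
    return p ^ q

def mul(p, q):
    """Multiply polynomials: x*x = x (union of monomials), coefficients reduced mod 2."""
    result = set()
    for m1 in p:
        for m2 in q:
            result ^= {m1 | m2}
    return frozenset(result)

def literal_poly(lit):
    """x -> x, not x -> 1 + x (literals are +v / -v integers)."""
    var = frozenset({abs(lit)})
    return frozenset({var}) if lit > 0 else frozenset({ONE, var})

def clause_poly(clause):
    """l1 or ... or lk  ->  1 + (1 + l1)(1 + l2)...(1 + lk) in the Boolean ring."""
    prod = frozenset({ONE})
    for lit in clause:
        prod = mul(prod, xor(frozenset({ONE}), literal_poly(lit)))
    return xor(frozenset({ONE}), prod)

# a or not b  ->  1 + (1 + a)(1 + (1 + b)) = 1 + (1 + a)b = 1 + b + ab
print(sorted(map(sorted, clause_poly([1, -2]))))   # [[], [1, 2], [2]], i.e. 1 + ab + b
```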

17.
This paper presents an improvement of Herbrand's theorem. We propose a method for specifying a sub-universe of the Herbrand universe of a clause set S for each argument of predicate symbols and function symbols in S. We prove that a clause set S is unsatisfiable if and only if there is a finite unsatisfiable set of ground instances of clauses of S that are derived by only instantiating each variable, which appears as an argument of predicate symbols or function symbols in S, over its corresponding argument's sub-universe of the Herbrand universe of S. Because such sub-universes are usually smaller (sometimes considerably) than the Herbrand universe of S, the number of ground instances may decrease considerably in many cases. We present an algorithm for automatically deriving the sub-universes for arguments in a given clause set, and show the correctness of our improvement. Moreover, we introduce an application of our approach to model generation theorem proving for non-range-restricted problems, show the range-restriction transformation algorithm based on our improvement, and provide examples on benchmark problems to demonstrate the power of our approach.
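As background, the unrestricted Herbrand universe that the paper refines argument-by-argument is generated from the constants and function symbols of the clause set. A depth-bounded sketch of that baseline construction follows (illustrative only; the per-argument sub-universe derivation of the paper is not shown):

```python
from itertools import product

def herbrand_universe(constants, functions, depth):
    """Ground terms built from constants and function symbols up to nesting depth.

    functions maps each function symbol to its arity. The full Herbrand universe
    is the limit over all depths (infinite as soon as there is a function symbol).
    """
    terms = set(constants)
    for _ in range(depth):
        new_terms = set(terms)
        for f, arity in functions.items():
            for args in product(terms, repeat=arity):
                new_terms.add(f + "(" + ",".join(args) + ")")
        terms = new_terms
    return terms

# Constants {a}, one unary function f: depth 2 gives {a, f(a), f(f(a))}
print(sorted(herbrand_universe({"a"}, {"f": 1}, depth=2)))
```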

18.
A class of mappings called abstractions is defined, and examples of abstractions are given. These functions map a set S of clauses onto a possibly simpler set T of clauses. Also, resolution proofs from S map onto possibly simpler resolution proofs from T. In order to search for a proof of a clause C from S, it suffices to search for a proof from T and attempt to invert the abstraction mapping to obtain a proof of C from S. Some theorem proving strategies based on this idea are presented. Most of these strategies are complete. A method of using more than one abstraction at the same time is also presented. This requires the use of ‘multiclauses’, which are multisets of literals, and associated ‘m-abstraction mappings’ on multiclauses. Certain abstractions are especially interesting, because they correspond to particular interpretations of the set S of clauses. The use of abstractions permits the advantages of set-of-support strategies to be realized in arbitrary complete non set-of-support resolution strategies.

19.
The satisfiability problem is a well known NP-complete problem. In artificial intelligence, solving the satisfiability problem is called mechanical theorem proving. One of those mechanical theorem proving methods is the resolution principle which was invented by J.R. Robinson. In this paper, we shall show how an algorithm based upon the resolution principle can be analyzed. Let n and r0 denote the numbers of variables and input clauses respectively. Let P0 denote the probability that a variable appears positively, or negatively, in a clause. Our analysis shows that the expected total number of clauses processed by our algorithm is O(n + r0) if P0 is a constant, r0 is polynomially related with n, and n is large.
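The resolution principle analyzed here can be sketched propositionally: resolve clause pairs on complementary literals until the empty clause appears (unsatisfiable) or no new clauses can be derived (satisfiable). A minimal saturation loop of my own, not the specific algorithm whose expected clause count is analyzed in the paper:

```python
def resolvents(c1, c2):
    """All resolvents of two clauses (frozensets of +v / -v ints), skipping tautologies."""
    out = []
    for lit in c1:
        if -lit in c2:
            r = (c1 - {lit}) | (c2 - {-lit})
            if not any(-l in r for l in r):   # drop tautologous resolvents
                out.append(r)
    return out

def resolution_unsat(clauses):
    """Saturate under resolution; True iff the empty clause is derived (set is unsatisfiable)."""
    known = {frozenset(c) for c in clauses}
    while True:
        new = set()
        for a in known:
            for b in known:
                if a == b:
                    continue
                for r in resolvents(a, b):
                    if not r:                 # empty clause derived
                        return True
                    if r not in known:
                        new.add(r)
        if not new:
            return False
        known |= new

# (p or q), (not p or q), (not q) is unsatisfiable
print(resolution_unsat([{1, 2}, {-1, 2}, {-2}]))   # True
print(resolution_unsat([{1, 2}]))                  # False
```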

20.
To manage learned clauses effectively, prevent their number from growing geometrically, and reduce the memory occupied by redundant learned clauses, thereby improving the efficiency of SAT (Boolean satisfiability) solvers, learned clauses must be evaluated and then pruned. The traditional evaluation criterion is clause length, keeping the shorter clauses. Among current mainstream approaches, one evaluates clauses by variable decay and VSIDS activity, another by literal block distance (LBD), and some combine the two as the basis for clause evaluation. By analyzing the relationship between how often learned clauses participate in conflict analysis and problem solving, this paper combines the usage frequency of learned clauses with LBD-based evaluation; this reflects both the role of a learned clause in conflict analysis and the information linking literals to decision levels. Taking the Syrup solver (the parallel version of GLUCOSE 4.1) as the baseline, improvements to the evaluation algorithm and the parallel clause-sharing strategy were tested. Experimental comparison shows that the hybrid evaluation algorithm outperforms the pure LBD evaluation and solves noticeably more problems.
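The two signals being combined are simple to state: the LBD of a learned clause is the number of distinct decision levels among its literals, and usage frequency counts how often the clause takes part in conflict analysis. Below is a hedged sketch of one way such a hybrid reduction policy could look; the data structures, weights, and thresholds are assumptions of mine, not the paper's or GLUCOSE's actual code.

```python
from dataclasses import dataclass

def lbd(clause, level_of):
    """Literal Block Distance: number of distinct decision levels among the clause's literals."""
    return len({level_of[abs(lit)] for lit in clause})

@dataclass
class LearnedClause:
    literals: frozenset
    lbd: int
    used_in_conflicts: int = 0   # bumped each time the clause appears in conflict analysis

def reduce_db(learned, keep_ratio=0.5, lbd_keep=2):
    """Delete roughly half of the learned clauses, keeping low-LBD and frequently used ones."""
    # "Glue" clauses (very low LBD) are always kept; the rest are ranked by a
    # hybrid score: lower LBD first, then more conflict participation first.
    glue = [c for c in learned if c.lbd <= lbd_keep]
    rest = sorted(
        (c for c in learned if c.lbd > lbd_keep),
        key=lambda c: (c.lbd, -c.used_in_conflicts),
    )
    return glue + rest[: int(len(rest) * keep_ratio)]

levels = {1: 1, 2: 3, 3: 3, 4: 5}          # variable -> decision level at which it was assigned
print(lbd(frozenset({1, -2, 3, 4}), levels))   # 3 distinct levels -> LBD 3

db = [
    LearnedClause(frozenset({1, -2}), lbd=2, used_in_conflicts=0),
    LearnedClause(frozenset({1, 3, 4}), lbd=3, used_in_conflicts=9),
    LearnedClause(frozenset({-2, 4}), lbd=4, used_in_conflicts=1),
    LearnedClause(frozenset({2, -3, -4}), lbd=5, used_in_conflicts=0),
]
print([c.lbd for c in reduce_db(db)])       # [2, 3]: the glue clause plus the best-ranked half of the rest
```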
