Similar Documents
20 similar documents found (search time: 31 ms)
1.
Certain distributivity results for Łukasiewicz's infinite-valued logic Łℵ0 are proved axiomatically (for the first time) with the help of the automated reasoning program OTTER. In addition, nondistributivity results are established for a wide variety of positive substructural logics by the use of logical matrices discovered with the automated model-finding programs MACE and MAGIC.
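
To make the notion of distributivity in this logic concrete, the small Python check below verifies one law semantically on a grid of truth values: Łukasiewicz strong conjunction distributes over the lattice disjunction max. This is only a semantic sanity check with invented function names; the abstract's point is that such results were proved axiomatically with OTTER, and the specific laws treated there may differ.

```python
# Semantic grid check of one distributivity law in Lukasiewicz's
# infinite-valued logic: strong conjunction distributes over max-disjunction.
# Illustration only; the paper proves its distributivity results axiomatically.

def strong_and(x, y):          # Lukasiewicz strong conjunction (t-norm)
    return max(0.0, x + y - 1.0)

def lattice_or(x, y):          # lattice disjunction
    return max(x, y)

grid = [i / 20 for i in range(21)]
law_holds = all(
    abs(strong_and(x, lattice_or(y, z))
        - lattice_or(strong_and(x, y), strong_and(x, z))) < 1e-9
    for x in grid for y in grid for z in grid
)
print(law_holds)               # True on the sampled grid
```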

2.
A weak logic of programs is a formal system in which statements that mean "the program halts" cannot be expressed. In order to prove termination, we would usually have to use a stronger logical system. In this paper we show how we can prove termination of both iterative and recursive programs within a weak logic by augmenting the programs with counters and adding bound assertions on the counters as loop invariants and entry conditions. Thus, most of the existing verifiers which are based on a weak logic of programs can be used to prove termination of programs without any modification. We give examples of proofs of termination and of upper bounds on computation time that were obtained using a Pascal program verifier. The use of the method to locate causes of non-termination is also illustrated. This research was supported in part by the Advanced Research Projects Agency of the Office of the Secretary of Defense under contract DAHC 15-73-C-0435.
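
A rough sketch of the counter technique in ordinary code (the paper uses a Pascal verifier; the function, the counter n, and the bound below are invented for illustration): the loop is augmented with a counter, and a bound on the counter is asserted as part of the loop invariant, so a partial-correctness proof of the assertion already yields termination and an upper bound on the running time.

```python
# Sketch: proving termination of a loop by adding a counter and a bound
# assertion (illustrative names and bound; not taken from the paper).

def integer_log2(x):
    assert x >= 1
    n = 0                              # auxiliary counter added for the proof
    y = x
    while y > 1:
        # Bound assertion used as part of the loop invariant: the counter can
        # never exceed bit_length(x), so the loop performs at most that many
        # iterations, which is exactly a termination (and time) bound.
        assert n <= x.bit_length()
        y //= 2
        n += 1
    return n

print(integer_log2(20))                # 4; the counter stayed within its bound
```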

3.
We define a class of function-free rule-based production system (PS) programs that exhibit non-deterministic and/or causal behavior. We develop a fixpoint semantics and an equivalent declarative semantics for these programs. The criterion used to recognize the class of non-deterministic causal (NDC) PS programs is based upon extending and relaxing the concept of stratification to partition the rules of the program. Unlike strict stratification, this relaxed stratification criterion allows a more flexible partitioning of the rules and admits programs whose execution is non-deterministic or causal or both. The fixpoint semantics is based upon a monotonic fixpoint operator which guarantees that the execution of the program will terminate. Each fixpoint corresponds to a minimal database of answers for the NDC PS program. Since the execution of the program is non-deterministic, several fixpoints may be obtained. To obtain a declarative meaning for the PS program, we associate a normal logic program with each NDC PS program. We use the generalized disjunctive well-founded semantics to provide a meaning to the normal logic program. Through these semantics, a well-founded state is associated with the program, and a set of possible extensions, each of which is a minimal model for the well-founded state, is obtained. We show that the fixpoint semantics for NDC PS programs is sound and complete with respect to the declarative semantics for the corresponding normal logic program. This research was partially sponsored by the National Science Foundation under grant IRI-9008208 and by the Institute for Advanced Computer Studies.

4.
Providing a clean procedural semantics of the Negation As Failure rule in logic programming has been an open problem for some time now. This rule has been treated as a technique in nonmonotonic reasoning, not as a rule in classical logic. This paper contains a demonstration of the negation as failure rule as a resolution procedure in first-order logic. We present a sound and complete resolution scheme for the negation as failure rule for the larger class of constraint logic programs. The approach is to consider a canonical partition of the completion of a definite (constraint) program into the IF and the FI programs. We show that a negated goal provable from the completed definite program is provable from just the FI part. The clauses in this program have a structure dual to that of definite Horn clauses. We describe a sound and complete linear resolution rule for this fragment, and show that a resolution proof of the negated goal from the FI part corresponds to a finite failure tree resulting from classical linear resolution applied to the goal on the IF part of the original definite program. Our work shows that the negation as failure rule can be computationally efficient, in the sense that SLD-resolution on the IF part of a definite program combined with the negation as failure rule is more efficient than a direct resolution procedure on the completion of that program.

5.
The language FCP(:,?) is the outcome of attempts to integrate the best of several flat concurrent logic programming languages, including Flat GHC, FCP(↓,|) and Flat Concurrent Prolog, in a single consistent framework. FCP(:) is a subset of FCP(:,?), which is a variant of FCP(↓,|) and employs concepts of the concurrent constraint framework of cc(↓,|). FCP(:,?) is a language which is strong enough to accommodate all useful concurrent logic programming techniques, including those which rely on atomic test unification and read-only variables, yet it incorporates the weaker languages mentioned as subsets. This allows the programmer to remain within a simple subset of the language, such as Flat GHC, when the full power of atomic unification or read-only variables is not needed.

6.
We provide techniques to integrate resolution logic with equality in type theory. The results may be rendered as follows: a clausification procedure in type theory, equipped with a correctness proof, all encoded using higher-order primitive recursion; a novel representation of clauses in minimal logic such that the λ-representation of resolution steps is linear in the size of the premisses; a translation of resolution proofs into lambda terms, yielding a verification procedure for those proofs; and availability of the power of resolution theorem provers in interactive proof construction systems based on type theory.

7.
There has been considerable progress in the domain of software verification over the last few years. This advancement has been driven, to a large extent, by the emergence of powerful yet automated abstraction techniques such as predicate abstraction. However, the state-space explosion problem in model checking remains the chief obstacle to the practical verification of real-world distributed systems. Even in the case of purely sequential programs, a crucial requirement for making predicate abstraction effective is to use as few predicates as possible, because, in the worst case, the state space of the generated abstraction (and consequently the time and memory complexity of the abstraction process) is exponential in the number of predicates involved. In addition, for concurrent programs, the number of reachable states can grow exponentially with the number of components. We attempt to address these issues in the context of verifying concurrent (message-passing) C programs against safety specifications. More specifically, we present a fully automated compositional framework which combines two orthogonal abstraction techniques (predicate abstraction for data and action-guided abstraction for events) within a counterexample-guided abstraction refinement scheme. In this way, our algorithm incrementally increases the granularity of the abstractions until the specification is either established or refuted. Additionally, a key feature of our approach is that if a property can be proved to hold or not hold based on a given finite set of predicates $\mathcal{P}$, the predicate refinement procedure we propose in this article automatically finds a minimal subset of $\mathcal{P}$ that is sufficient for the proof. This, along with our explicit use of compositionality, delays the onset of state-space explosion for as long as possible. We describe our approach in detail, and report on some very encouraging experimental results obtained with our tool MAGIC.
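
To illustrate only the "minimal subset of predicates" idea, here is a deletion-based minimization sketch built around a hypothetical proves oracle (all names and the toy oracle are invented; the article's refinement procedure is integrated with the abstraction-refinement loop rather than being a black-box search like this).

```python
# Deletion-based minimization: repeatedly drop any predicate whose removal
# still lets the (hypothetical) proof go through. The result is minimal in the
# sense that no single remaining predicate can be removed.

def minimize_predicates(predicates, proves):
    current = list(predicates)
    progress = True
    while progress:
        progress = False
        for p in list(current):
            candidate = [q for q in current if q != p]
            if proves(candidate):          # p was not needed for the proof
                current = candidate
                progress = True
    return current

# Toy oracle: the "proof" succeeds as long as both x > 0 and y < 10 are kept.
needed = {"x > 0", "y < 10"}
proves = lambda preds: needed.issubset(preds)
print(minimize_predicates(["x > 0", "y < 10", "z == 0", "x < y"], proves))
# ['x > 0', 'y < 10']
```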

8.
This paper presents in detail how the Unity logic for reasoning about concurrent programs was formalized within the mechanized theorem prover PC-NQTHM-92. Most of Unity's proof rules were formalized in the unquantified logic of NQTHM, and the proof system has been used to mechanically verify several concurrent programs. The mechanized proof system is sound by construction, since Unity's proof rules were proved about an operational semantics of concurrency, also presented here. Skolem functions are used instead of quantifiers, and the paper describes how proof rules containing Skolem functions are used instead of Unity's quantified proof rules when verifying concurrent programs. This formalization includes several natural extensions to Unity, including nondeterministic statements. The paper concludes with a discussion of the cost and value of mechanization.

9.
This paper presents an application of structural modeling and automated reasoning as a software development environment for real-time systems. The application satisfies two major requirements for such an environment: (1) to synthesize an absolutely correct program, and (2) to increase software productivity. The real-time systems, which consist of concurrent programs, are described in a Prolog-based concurrent object-oriented language called MENDEL/87. A typical concurrent program consists of two parts, a functional part and a synchronization part. The functional part of a reusable component to be registered in a library is generated by structural modeling, through the use of structuring functions with respect to data flows. The synchronization part is synthesized from temporal logic specifications by means of an automated reasoning mechanism. This paper also describes MENDELS ZONE, implemented on a Prolog machine, which is the working base for the presented application method.

10.
This work has been motivated by the following general problem: find logics for the specification and proof of programs described by terms of some algebra with a given congruence relation, where this relation is supposed to provide a satisfactory notion of behavioural comparison of programs. We require these logics to be adequate with respect to the term language, in the sense that two behaviourally equivalent programs satisfy the same formulas, and conversely. The term language considered is the subset of controllable, regular terms of CCS, over a vocabulary of actions A, with observational congruence. A term is said to be controllable if it is congruent to some term without occurrence of τ. We obtain an adequate logic whose language of formulas is built from the constants true, false and Nil by using propositional connectives, fixpoint operators, and operators + and a for a ∈ A; the latter can be considered as extensions of the operators + and a (for a ∈ A) of CCS. As a result, controllable CCS terms can themselves be considered as formulas of this logic, and the problem of program verification is reduced to proving the validity of a formula.

11.
Testing concurrent programs is a challenging problem due to interleaving explosion: even for a fixed set of inputs, there is a huge number of concurrent runs that need to be tested to account for scheduler behavior. Testing all possible schedules is not practical. Consequently, most effective testing algorithms only test a select subset of runs. For example, limiting testing to runs that contain data races or atomicity violations has been shown to capture a large proportion of concurrency bugs. In this paper we present a general approach to concurrent program testing that is based on techniques from artificial intelligence (AI) automated planning. We propose a framework for predicting concurrent program runs that violate a collection of generic correctness specifications for concurrent programs, namely runs that contain data races, atomicity violations, or null-pointer dereferences. Our prediction is based on observing an arbitrary run of the program, using the information collected from this run to model the behavior of the program, and predicting new runs that contain bugs matching one of the above violation patterns. We characterize the problem of predicting such new runs as an AI sequential planning problem with the temporally extended goal of achieving a particular violation pattern. In contrast to many state-of-the-art approaches, in our approach the feasibility of the predicted runs is guaranteed and, therefore, all generated runs are fully usable for testing. Moreover, our planning-based approach has the merit that it can easily accommodate a variety of violation patterns, which serve as the selection criteria for guiding search in the state space of concurrent runs; this is achieved by simply modifying the planning goal. We have implemented our approach using state-of-the-art AI planning techniques and tested it within the Penelope concurrent program testing framework [35]. Nevertheless, the approach is general and is amenable to a variety of program testing frameworks. Our experiments with a benchmark suite showed that our approach is very fast and highly effective, finding all known bugs.
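
As a far simpler stand-in for the violation patterns mentioned above, the sketch below applies the classic lockset intuition to a single observed run to flag potential data races (two accesses to the same location from different threads, at least one of them a write, with no lock held in common). The event format and names are invented; this only illustrates what a violation pattern over a run looks like, not the paper's planning-based prediction.

```python
# Flag potential data races in one observed run using the lockset intuition.

def race_candidates(events):
    """events: (thread, kind, location, locks_held) with kind in {'r', 'w'}."""
    candidates = []
    for i, (t1, k1, loc1, l1) in enumerate(events):
        for t2, k2, loc2, l2 in events[i + 1:]:
            if (t1 != t2 and loc1 == loc2
                    and "w" in (k1, k2)
                    and not set(l1) & set(l2)):
                candidates.append((loc1, t1, t2))
    return candidates

observed_run = [
    ("T1", "w", "counter", {"m"}),
    ("T2", "r", "counter", {"m"}),      # protected by the same lock m: no report
    ("T1", "w", "flag", set()),
    ("T2", "w", "flag", set()),         # unprotected write/write: reported
]
print(race_candidates(observed_run))    # [('flag', 'T1', 'T2')]
```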

12.
Most proof methods for reasoning about concurrent programs are based upon the interleaving semantics of concurrent computation: a concurrent program is executed in a stepwise fashion, with only one enabled action being executed at each step. Interleaving semantics, in effect, requires that a concurrent program be executed as a nondeterministic sequential program. This is clearly an abstraction of the way in which concurrent programs are actually executed. To ensure that this is a reasonable abstraction, interleaving semantics should only be used to reason about programs with simple actions; we call such programs atomic. In this paper, we formally characterise the class of atomic programs. We adopt the criterion that a program is atomic if it can be implemented in a wait-free, serialisable manner by a primitive program. A program is primitive if each of its actions has at most one occurrence of a shared bit, and each shared bit is read by at most one process and written by at most one process. It follows from our results that the traditionally accepted atomicity criterion, which allows each action to have at most one occurrence of a shared variable, can be relaxed, allowing programs to have more powerful actions. For example, according to our criterion, an action can read any finite number of shared variables, provided it writes no shared variable. Work supported, in part, at the University of Texas at Austin by Office of Naval Research Contract N00014-89-J-1913, and at the University of Maryland by an award from the University of Maryland General Research Board. Work supported, in part, by Office of Naval Research Contract N00014-89-J-1913.
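
To make the criterion concrete, here are two toy "actions" written as Python snippets purely for illustration (the paper's programs are not Python): the first reads several shared variables but writes none, which the relaxed criterion quoted above admits as a single atomic action; the second reads and writes two shared variables in one step, which already exceeds the traditional one-occurrence-of-a-shared-variable rule.

```python
# Toy illustrations only; variable and function names are invented.
shared = {"a": 1, "b": 2, "c": 3}

def observe():
    # Reads several shared variables but writes only a local result: admitted
    # as one atomic action by the relaxed criterion quoted in the abstract.
    return shared["a"] + shared["b"] + shared["c"]

def transfer(amount):
    # Reads and writes two shared variables in a single step: more than one
    # occurrence of a shared variable, so even the traditional criterion would
    # not treat this as a single atomic action.
    shared["a"] -= amount
    shared["b"] += amount

print(observe())        # 6
transfer(1)
print(shared)           # {'a': 0, 'b': 3, 'c': 3}
```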

13.
We consider the parallel time complexity of logic programs without function symbols, called logical query programs or Datalog programs. We give a PRAM algorithm for computing the minimum model of a logical query program, and show that for programs with the “polynomial fringe property,” this algorithm runs in time that is logarithmic in the input size, assuming that concurrent writes are allowed if they are consistent. As a result, the “linear” and “piecewise linear” classes of logic programs are in NC. We then examine several nonlinear classes in which the program has a single recursive rule that is an “elementary chain.” We show that certain nonlinear programs are related to GSM mappings of a balanced-parentheses language, and that this relationship implies the “polynomial fringe property”; hence such programs are in NC. Finally, we describe an approach for demonstrating that certain logical query programs are log-space complete for P, and apply it to both elementary single-rule programs and nonelementary programs.
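
For readers who want "minimum model" to be concrete, here is a naive sequential bottom-up fixpoint over a tiny Datalog program (transitive closure). The representation is invented for the example, and this is the straightforward sequential loop, not the PRAM algorithm analysed in the abstract.

```python
# Naive bottom-up evaluation of the minimum model of a function-free program.

def minimum_model(facts, rules):
    """rules: (head, body) pairs; atoms are (predicate, tuple of arguments)."""
    model = set(facts)
    changed = True
    while changed:
        new_facts = set()
        for head, body in rules:
            for binding in match_body(body, model, {}):
                fact = substitute(head, binding)
                if fact not in model:
                    new_facts.add(fact)
        changed = bool(new_facts)
        model |= new_facts
    return model

def match_body(body, model, binding):
    """Enumerate variable bindings of the body atoms against the current model."""
    if not body:
        yield binding
        return
    pred, args = body[0]
    for fact_pred, fact_args in model:
        if fact_pred == pred and len(fact_args) == len(args):
            extended = unify(args, fact_args, binding)
            if extended is not None:
                yield from match_body(body[1:], model, extended)

def unify(args, consts, binding):
    extended = dict(binding)
    for a, c in zip(args, consts):
        if a.isupper():                      # upper-case names are variables
            if extended.setdefault(a, c) != c:
                return None
        elif a != c:
            return None
    return extended

def substitute(atom, binding):
    pred, args = atom
    return (pred, tuple(binding.get(a, a) for a in args))

# path(X,Y) :- edge(X,Y).   path(X,Y) :- edge(X,Z), path(Z,Y).
facts = {("edge", ("a", "b")), ("edge", ("b", "c"))}
rules = [
    (("path", ("X", "Y")), [("edge", ("X", "Y"))]),
    (("path", ("X", "Y")), [("edge", ("X", "Z")), ("path", ("Z", "Y"))]),
]
print(sorted(minimum_model(facts, rules)))
# edge(a,b), edge(b,c), path(a,b), path(a,c), path(b,c)
```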

14.
Constraint solving has been applied to many domains of program analysis, including the analysis of concurrent programs. Concurrent programs have become widespread with the rapid development of multi-core processors, but concurrency bugs threaten their security and reliability, so detecting such bugs is of great importance. The explosion of thread interleavings caused by nondeterministic thread scheduling makes this detection challenging. Existing concurrency bug detection algorithms reduce the cost of exploring the state space of a concurrent program by pruning invalid thread interleavings. For example, the maximal causal model algorithm transforms state-space exploration into a constraint solving problem; however, it produces a large number of redundant and conflicting constraints during constraint construction, which greatly prolongs constraint solving, increases the number of constraint solver calls, and reduces the efficiency of state-space exploration. This study therefore proposes a directed graph constraint-guided maximal causality reduction method, called GC-MCR, which aims to speed up constraint solving and state-space exploration by filtering and reducing constraints using directed graphs. The experimental results show that GC-MCR effectively optimizes the constraint formulae, improving the solving speed of the constraint solver and reducing the number of solver calls. Compared with the existing J-MCR method, GC-MCR significantly improves the efficiency of concurrency bug detection without reducing its detection ability, and it reduces the testing time on 38 groups of concurrent test programs widely used in prior work by 34.01% on average.
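
The directed-graph idea can be illustrated at a very small scale: if the graph of ordering constraints (assumed acyclic here) already implies an ordering transitively, the explicit constraint is redundant and need not be passed to the solver. The sketch below is only that pruning intuition, with invented event names; it is not the GC-MCR algorithm.

```python
from collections import defaultdict

def reachable(src, dst, edges):
    """Depth-first reachability in the directed graph given by `edges`."""
    graph = defaultdict(list)
    for u, v in edges:
        graph[u].append(v)
    stack, seen = [src], set()
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(graph[node])
    return False

def prune_implied_orderings(orderings):
    """Drop 'a before b' constraints already implied by the remaining ones."""
    kept = []
    for a, b in orderings:
        rest = [e for e in orderings if e != (a, b)]
        if not reachable(a, b, rest):        # (a, b) carries new information
            kept.append((a, b))
    return kept

# w1 < r1, r1 < w2, and the redundant w1 < w2 (implied transitively).
orderings = [("w1", "r1"), ("r1", "w2"), ("w1", "w2")]
print(prune_implied_orderings(orderings))    # [('w1', 'r1'), ('r1', 'w2')]
```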

15.
This paper presents an approach to specialising logic programs which is based on abstract interpretation. Program specialisation involves two stages: the construction of an abstract computation tree, and a program construction stage. For the tree construction stage, abstract OLDT resolution is defined and used to construct a complete and finite tree corresponding to a given logic program and goal. In the program construction stage, a specialised program is extracted from this tree. We focus on two logic programming languages: sequential Prolog and Flat Concurrent Prolog. Although the procedural reading of concurrent logic programs is very different from that of sequential programs, the techniques presented provide a uniform approach to the specialisation of both languages. We present the results for Prolog rigorously, and extend them less formally to Flat Concurrent Prolog. There are two main advantages of basing program specialisation on abstract interpretation. Firstly, termination can be ensured by using abstract interpretations over finite domains, while performing a complete flow analysis of the program. Secondly, correctness of the specialised program is closely related to well-defined consistency conditions on the concrete and abstract interpretations.

16.
Concurrent programs and concurrent systems can achieve very high execution efficiency and faster response times than sequential systems, and they are widely used in practice. However, it is often difficult to guarantee the correctness of their implementations, and errors in deployed applications can have serious consequences. At the same time, the non-determinism of concurrent execution makes correctness verification extremely difficult. Among formal verification methods, interactive theorem provers allow concurrent programs to be verified rigorously. This paper surveys the verification goals that can be used in interactive theorem proving to describe the correctness of concurrent programs, including Hoare triples, linearizability, contextual refinement, and logical atomicity. Interactive theorem proving commonly verifies programs with program logics; this paper analyses a series of theoretical results and the corresponding formalizations based on concurrent separation logic, rely-guarantee logic, relational Hoare logic, and related theories, and summarizes the program verification tools and verification results that use these methods.
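
To make one of these verification goals, linearizability, concrete outside a theorem prover: the brute-force check below searches for a legal sequential order of a small concurrent history against an atomic fetch-and-increment counter specification. All names and the history are invented for the illustration; interactive theorem proving establishes such properties by proof rather than by enumeration.

```python
from itertools import permutations

# History entries: (thread, operation, argument, result, invoke_time, return_time).
# Specification: an atomic counter whose inc() returns the old value.

def legal_sequential(order):
    value = 0
    for _, op, _, result, _, _ in order:
        if op == "inc":
            if result != value:
                return False
            value += 1
        elif op == "read":
            if result != value:
                return False
    return True

def respects_real_time(order):
    # If operation b returned before operation a was invoked, then b must not
    # be placed after a in the candidate sequential order.
    for i, a in enumerate(order):
        for b in order[i + 1:]:
            if b[5] < a[4]:
                return False
    return True

def is_linearizable(history):
    return any(respects_real_time(order) and legal_sequential(order)
               for order in permutations(history))

history = [
    ("T1", "inc", None, 0, 0.0, 2.0),   # overlaps with T2's increment
    ("T2", "inc", None, 1, 1.0, 3.0),
    ("T1", "read", None, 2, 4.0, 5.0),  # strictly after both increments
]
print(is_linearizable(history))         # True under the atomic-counter spec
```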

17.
张晔  陆余良 《计算机科学》2017,44(Z11):348-352
Symbolic constraints describe the relations among program variables and are widely used in static analysis methods such as model checking and symbolic execution. Applying symbolic constraints to the correctness verification of programmable logic controller (PLC) programs can uncover logic errors in those programs. Computing symbolic constraints by hand is not only tedious but also error-prone. For PLC programs written in statement list form, this paper proposes a correctness verification method based on symbolic constraints: it analyses the control flow and data flow of the PLC source code, constructs the program's control flow graph, converts it into three-address code in static single assignment form, and finally computes the symbolic constraint of each variable by iterative computation.
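
A very small illustration of the final step for straight-line code (invented instruction names; the actual method also builds the control flow graph and iterates for loops): each assigned variable's symbolic constraint is obtained by substituting the symbolic values of its operands in a single forward pass over the three-address code.

```python
def symbolic_constraints(three_address_code, inputs):
    """Forward pass that maps each assigned variable to a symbolic expression
    over the program inputs (straight-line, SSA-style code only)."""
    env = {v: v for v in inputs}            # inputs denote themselves
    for dest, op, lhs, rhs in three_address_code:
        left = env.get(lhs, lhs)            # substitute known symbolic values
        right = env.get(rhs, rhs)
        env[dest] = f"({left} {op} {right})"
    return env

# t1 := I0 AND I1 ; t2 := t1 OR I2 ; Q0 := t2 AND I3
code = [
    ("t1", "AND", "I0", "I1"),
    ("t2", "OR",  "t1", "I2"),
    ("Q0", "AND", "t2", "I3"),
]
print(symbolic_constraints(code, ["I0", "I1", "I2", "I3"])["Q0"])
# (((I0 AND I1) OR I2) AND I3)
```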

18.
19.
We consider the parallel time complexity of logic programs without function symbols, called logical query programs or Datalog programs. We give a PRAM algorithm for computing the minimum model of a logical query program, and show that for programs with the polynomial fringe property, this algorithm runs in time that is logarithmic in the input size, assuming that concurrent writes are allowed if they are consistent. As a result, the linear and piecewise linear classes of logic programs are in NC. We then examine several nonlinear classes in which the program has a single recursive rule that is an elementary chain. We show that certain nonlinear programs are related to GSM mappings of a balanced-parentheses language, and that this relationship implies the polynomial fringe property; hence such programs are in NC. Finally, we describe an approach for demonstrating that certain logical query programs are log-space complete for P, and apply it to both elementary single-rule programs and nonelementary programs. Supported by NSF Grant IST-84-12791, a grant from IBM Corporation, and ONR contract N00014-85-C-0731.

20.
A recent trend in program development is to derive correct implementations from program specifications by the application of a formal calculus, a programming methodology. The application of formal rules lends itself to automation. We investigate the automation of one part of a methodology for programming with concurrency. In this methodology, concurrency is derived by transforming the sequential execution of a program into an equivalent concurrent execution on the basis of formal transformation rules. Such rules can be interpreted as theorems of semantic equivalences. The mechanical certification of these theorems would significantly enhance the reliability of the methodology. The following is an initial exploration of this problem applied to a certain class of programs: sorting networks. We present an implementation of a part of the underlying semantic theory in Boyer and Moore's mechanized logic, and report on the mechanical proof of a transformation that derives concurrency for an insertion sort.
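
A small, invented illustration of the program class involved (not the paper's formalization): insertion sort over a fixed number of elements can be written as a straight-line sequence of compare-exchange steps, i.e. a sorting network, and adjacent compare-exchange steps that touch disjoint positions commute. Equivalences of that kind are the sort of semantic fact such transformation rules can exploit to derive concurrency from a sequential execution.

```python
from itertools import permutations

def compare_exchange(a, i, j):
    """One comparator: order positions i < j of list a in place."""
    if a[i] > a[j]:
        a[i], a[j] = a[j], a[i]

# Insertion sort on 4 elements as a fixed sequence of comparators.
INSERTION_NETWORK_4 = [(0, 1), (1, 2), (0, 1), (2, 3), (1, 2), (0, 1)]

# Swapping the adjacent disjoint comparators (0, 1) and (2, 3) yields an
# equivalent network in which those two steps could execute concurrently.
REORDERED_NETWORK_4 = [(0, 1), (1, 2), (2, 3), (0, 1), (1, 2), (0, 1)]

def run_network(values, network):
    a = list(values)
    for i, j in network:
        compare_exchange(a, i, j)
    return a

ok = all(run_network(p, INSERTION_NETWORK_4) == sorted(p) ==
         run_network(p, REORDERED_NETWORK_4)
         for p in permutations([3, 1, 4, 2]))
print(ok)    # True: both orderings sort every permutation of the input
```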
