Similar Documents
 20 similar documents found (search time: 453 ms)
1.
The refinement calculus provides a methodology for transforming an abstract specification into a concrete implementation by following a succession of refinement rules. These rules have been mechanized in theorem provers, thus providing a formal and rigorous way to prove that a given program refines another. In previous work, we extended this mechanization to object-oriented programs, where the memory is represented as a graph, and we integrated our approach within the rCOS tool, a model-driven software development tool providing a refinement language. Hence, for any refinement step, the tool automatically generates the corresponding proof obligations and the user can manually discharge them using a provided library of refinement lemmas. In this work, we propose an approach to automate the search for possible refinement rules from one program to another, using the rewriting tool Maude. Each refinement rule in Maude is associated with the corresponding lemma in Isabelle, thus allowing the tool to automatically generate the Isabelle proof when a refinement rule can be found automatically. The user can add a new refinement rule by providing the corresponding Maude rule and Isabelle lemma.
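To illustrate the idea of searching for a refinement chain by rewriting, here is a minimal Python sketch (an illustration only, not the Maude/rCOS implementation): named rules rewrite a program term, and a breadth-first search records which rules were applied so that the matching Isabelle lemmas can be replayed. The string term representation and the two rules are hypothetical.

    from collections import deque

    def find_refinement_chain(source, target, rules):
        """rules: list of (name, apply) pairs; apply(term) yields rewritten terms.
        Returns the list of rule names leading from source to target, or None."""
        seen, queue = {source}, deque([(source, [])])
        while queue:
            term, chain = queue.popleft()
            if term == target:
                return chain          # each rule name maps to an Isabelle lemma to replay
            for name, apply_rule in rules:
                for rewritten in apply_rule(term):
                    if rewritten not in seen:
                        seen.add(rewritten)
                        queue.append((rewritten, chain + [name]))
        return None

    # Toy rules over a string representation of programs (hypothetical syntax).
    rules = [
        ("resolve_choice", lambda t: [t.replace("(skip [] x:=1)", "x:=1")] if "(skip [] x:=1)" in t else []),
        ("drop_skip",      lambda t: [t.replace("skip; ", "")] if "skip; " in t else []),
    ]
    print(find_refinement_chain("skip; (skip [] x:=1)", "x:=1", rules))
    # prints ['resolve_choice', 'drop_skip']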

2.
In relational semantics, the input-output semantics of a program is a relation on its set of states. We generalise this by considering elements of Kleene algebras as semantic values. In a nondeterministic context, the demonic semantics is calculated by considering the worst behaviour of the program. In this paper, we concentrate on while loops. Calculating the semantics of a loop is difficult, but showing the correctness of any candidate abstraction is much easier. For deterministic programs, Mills has described a checking method known as the while statement verification rule. A corresponding programming theorem for nondeterministic iterative constructs is proposed, proved, and applied to an example. This theorem can be considered a generalisation of the while statement verification rule to nondeterministic loops. The paper generalises earlier relation-algebraic work to the setting of modal Kleene algebra, an extension of Kozen's Kleene algebra with tests that allows the internalisation of weakest liberal precondition and strongest liberal postcondition operators.
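For reference, a minimal sketch of the standard encoding in Kleene algebra with tests (textbook material, not the paper's exact demonic formulation): with a test t and a program p, the loop is the least solution of its unfolding equation,

\[
\mathbf{while}\ t\ \mathbf{do}\ p \;=\; (t\,p)^{*}\,\neg t, \qquad (t\,p)^{*}\,\neg t \ \text{ is the least solution of } \ x = t\,p\,x + \neg t .
\]

A candidate abstraction r of the loop can therefore be checked by establishing the unfolding equation r = t·p·r + ¬t together with a termination condition, which is the shape of Mills' rule; the paper's theorem supplies the precise side conditions for nondeterministic loops in modal Kleene algebra.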

3.
This paper presents novel definitions of interfaced recursion blocks, interfaced procedures, and interfaced recursive procedures for the Refinement Calculus. These definitions allow step-wise refinement rules to be formally stated and proved for these constructs. An interface is associated with a (recursive) call by preceding the body of the implementation by an assertion statement which says that the interface refines to the implementation. An interface will typically be a specification statement, but in principle can be any command. The theory and rules presented in this paper have been mechanised in the theorem prover Isabelle/ZF.
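As a reminder of the notation involved (standard refinement calculus, not specific to this paper), an interface is typically a specification statement w:[pre, post], and refinement laws turn such statements into code; a representative law is assignment introduction (in the simple form where post does not refer to initial values):

\[
w:[\mathit{pre},\ \mathit{post}] \;\sqsubseteq\; w := E \qquad \text{provided}\quad \mathit{pre} \Rightarrow \mathit{post}[E/w].
\]

Roughly, an interfaced procedure pairs such an interface I with a body B preceded by the assertion that I refines to B, which is what the step-wise rules of the paper exploit.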

4.
We introduce a new algebraic model for program variables, suitable for reasoning about recursive procedures with parameters and local variables in a mechanical verification setting. We give a predicate transformer semantics to recursive procedures and prove refinement rules for introducing recursive procedure calls, procedure parameters, and local variables. We also prove, based on the refinement rules, Hoare total correctness rules for recursive procedures and parameters. We introduce a special form of Hoare specification statement which alone is enough to fully specify a procedure. Moreover, we prove that this Hoare specification statement is equivalent to a refinement specification. We implemented this theory in the PVS theorem prover. This work is based on an earlier paper: Reasoning about recursive procedures with parameters. In Proceedings of the Workshop on Mechanized Reasoning about Languages with Variable Binding, 2003, Uppsala, Sweden, ACM Press.
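For orientation, one textbook shape of a total-correctness rule for a recursive procedure uses a variant t over a well-founded order and a fresh logical variable z (a generic sketch; the paper's refinement-based rules additionally handle parameters and local variables):

\[
\frac{\ \{P \wedge t \prec z\}\ \mathbf{call}\ p\ \{Q\}\ \ \vdash\ \ \{P \wedge t = z\}\ \mathit{body}(p)\ \{Q\}\ }{\{P\}\ \mathbf{call}\ p\ \{Q\}}
\]

That is, assuming the specification for recursive calls on strictly smaller variants, one proves it for the body and may then conclude it for every call.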

5.
The specification language Csp-Casl allows one to model processes as well as data of distributed systems within one framework. In our paper, we describe how a combination of the existing tools Hets and Csp-Prover can solve the challenges that Csp-Casl raises on integrated theorem proving for processes and data. For building this new tool, the automated generation of theorems and their proofs in Isabelle/HOL plays a fundamental role. A case study of industrial strength demonstrates that our approach scales up to complex problems.

6.
This article continues a series of papers that describe an approach to the construction and verification of discrete PLC-programs from an LTL-specification. The approach makes it possible to analyse the correctness of PLC-programs by model checking. The linear-time temporal logic LTL is used to specify the program behaviour. The correctness analysis of an LTL-specification is performed automatically by the symbolic model checker Cadence SMV. It was previously shown how ST-, LD- and IL-programs are constructed from a correct LTL-specification (one whose program properties have been verified). In this article, a technique for constructing CFC-programs from an LTL-specification is described. The language CFC (Continuous Function Chart) is a variant of FBD (Function Block Diagram), a graphical language in the style of circuit diagrams. CFC allows program components and their connections to be placed freely on the screen. The approach to constructing CFC-programs is shown by an example. The CFC representation of a PLC-program within the approach to programming by LTL-specification differs from other representations: it visualizes the data flow from inputs to outputs, and the influence and dependence among variables are shown explicitly during program execution within one PLC working cycle. In fact, a CFC-program is a scheme of the PLC-program's data flow.
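For illustration, properties in such an LTL-specification typically combine safety and eventuality requirements; the signal names below are hypothetical and not taken from the article:

\[
\mathbf{G}\,\neg(\mathit{valve\_open} \wedge \mathit{valve\_close}), \qquad \mathbf{G}\,(\mathit{button} \rightarrow \mathbf{F}\ \mathit{motor\_on}).
\]

The first forbids contradictory outputs within a PLC working cycle; the second demands that pressing the button is eventually answered by switching the motor on.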

7.
We present a framework for the specification and verification of reactive concurrent programs using general-purpose mechanical theorem proving. We define specifications for concurrent programs by formalizing a notion of refinements analogous to stuttering trace containment. The formalization supports the definition of intuitive specifications of the intended behavior of a program. We present a collection of proof rules that can be effectively orchestrated by a theorem prover to reason about complex programs using refinements. The proof rules systematically reduce the correctness proof for a concurrent program to the definition and proof of an invariant. We include automated support for discharging this invariant proof with a predicate abstraction tool that leverages the existing theorems proven about the components of the concurrent programs. The framework is integrated with the ACL2 theorem prover and we demonstrate its use in the verification of several concurrent programs in ACL2.
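Schematically, refinement as stuttering trace containment relates an implementation P to a specification S via a refinement map π on states (a generic rendering; as stated above, the paper's proof rules reduce such obligations to the definition and proof of an invariant rather than reasoning directly over infinite traces):

\[
P \sqsubseteq S \;\iff\; \forall\, \sigma \in \mathit{Traces}(P).\ \exists\, \tau \in \mathit{Traces}(S).\ \pi(\sigma) \,\approx_{\mathrm{st}}\, \tau,
\]

where \(\approx_{\mathrm{st}}\) identifies traces that differ only by finite stuttering (finite repetition of states).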

8.
We show how the formalization and application of schemata for program development can be reduced to the formalization and application of derived rules of inference. We formalize and derive schemata as rules in theories that axiomatize program data and programs themselves. We reduce schema-based program development to ordinary theorem proving, where higher-order unification is used to apply rules. Conceptually, our formalization is simple and unifies divergent views of schemata, program synthesis, and program transformation. Practically, our formalization yields a simple methodology for carrying out development using existing logical frameworks; we illustrate this in the domain of logic program synthesis and transformation using the Isabelle logical framework.

9.
Formal verification of complex algorithms is challenging. Verifying their implementations goes beyond the state of the art of current automatic verification tools and usually involves intricate mathematical theorems. Certifying algorithms compute, in addition to each output, a witness certifying that the output is correct. A checker for such a witness is usually much simpler than the original algorithm—yet it is all the user has to trust. The verification of checkers is feasible with current tools and leads to computations that can be completely trusted. We describe a framework to seamlessly verify certifying computations. We use the automatic verifier VCC for establishing the correctness of the checker and the interactive theorem prover Isabelle/HOL for high-level mathematical properties of algorithms. We demonstrate the effectiveness of our approach by presenting the verification of typical examples from the widely used, industrial-strength algorithms library LEDA.
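The division of labour can be illustrated with a toy certifying algorithm in Python (an illustration only, unrelated to LEDA or to the C and Isabelle artefacts of the paper): the algorithm decides bipartiteness and emits a witness, either a 2-colouring or an odd cycle, and only the much simpler checker has to be verified and trusted.

    from collections import deque

    def certify_bipartite(n, edges):
        """Decide bipartiteness of an undirected graph and return (verdict, witness)."""
        adj = [[] for _ in range(n)]
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        colour = [None] * n
        parent = [None] * n

        def ancestors(x):                     # path from x up to the root of its BFS tree
            path = [x]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path

        for s in range(n):
            if colour[s] is not None:
                continue
            colour[s] = 0
            queue = deque([s])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if colour[v] is None:
                        colour[v] = 1 - colour[u]
                        parent[v] = u
                        queue.append(v)
                    elif colour[v] == colour[u]:      # conflict: assemble an odd-cycle witness
                        pu, pv = ancestors(u), ancestors(v)
                        lca = next(x for x in pu if x in pv)
                        cycle = pu[:pu.index(lca) + 1] + list(reversed(pv[:pv.index(lca)]))
                        return "not bipartite", cycle
        return "bipartite", colour

    def check_witness(edges, verdict, witness):
        """The trusted checker: validates the witness without re-running the algorithm."""
        if verdict == "bipartite":
            return all(witness[u] != witness[v] for u, v in edges)
        edge_set = {frozenset(e) for e in edges}
        cycle = witness
        closed = all(frozenset((cycle[i], cycle[(i + 1) % len(cycle)])) in edge_set
                     for i in range(len(cycle)))
        return closed and len(cycle) % 2 == 1

    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]          # a 5-cycle, not bipartite
    verdict, witness = certify_bipartite(5, edges)
    print(verdict, witness, check_witness(edges, verdict, witness))

The checker accepts any witness that certifies the verdict, regardless of how the untrusted algorithm produced it, which is exactly what makes its verification worthwhile.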

10.
We present a calculus for tracking equality relationships between values through pairs of bytecode programs. The calculus may serve as a certification mechanism for non-interference (a well-known program property in the field of language-based security) and for code transformations. In contrast to previous type systems for non-interference, no restrictions are imposed on the control flow structure of programs. Objects, static methods, and virtual methods are included, and heap-local reasoning is supported by frame rules. In combination with polyvariance, the latter enable the modular verification of programs over heap-allocated data structures, which we illustrate by verifying and comparing different implementations of list copying. The material is based on a complete formalisation in Isabelle/HOL.

11.
Invariant-based programming is an approach in which we start constructing a program by first identifying the basic situations (pre- and postconditions as well as invariants) that could arise during the execution of the algorithm. These situations are identified before any code is written. After that, we identify the transitions between the situations, which give us the flow of control in the program. Data refinement is a technique for building correct programs working on concrete data structures as refinements of more abstract programs working on abstract data types. In this paper we study data refinement for invariant-based programs and apply it to the construction of the classical Deutsch–Schorr–Waite graph marking algorithm. Our results are formalized and mechanically proved in the Isabelle/HOL theorem prover.
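For concreteness, the abstract program being refined can be pictured as marking with an explicit stack, which the Deutsch–Schorr–Waite algorithm then implements by temporarily reversing child pointers so that the stack lives in the structure itself. The Python sketch below shows only the abstract side (node names and the dict representation are illustrative, not taken from the paper).

    def mark(root, left, right, marked):
        """Abstract marking: left/right map each node to a child (or None);
        marked collects every node reachable from root."""
        stack = [root]
        while stack:
            node = stack.pop()
            if node is None or node in marked:
                continue
            marked.add(node)
            stack.append(left.get(node))     # DSW refinement: encode this stack by reversing pointers
            stack.append(right.get(node))
        return marked

    left  = {1: 2, 2: None, 3: 1}
    right = {1: 3, 2: 3, 3: None}
    print(sorted(mark(1, left, right, set())))   # [1, 2, 3]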

12.
Most standard approaches to the static analysis of programs, such as the popular worklist method, are first-order methods that inductively annotate program points with abstract values. In [6] we introduced a second-order approach based on Kleene algebra. In this approach, the primary objects of interest are not the abstract data values, but the transfer functions that manipulate them. These elements form a left-handed Kleene algebra. The dataflow labeling is not achieved by inductively labeling the program with abstract values, but rather by computing the star (Kleene closure) of a matrix of transfer functions. In this paper we show how this general framework applies to the problem of Java bytecode verification. We show how to specify transfer functions arising in Java bytecode verification in such a way that the Kleene algebra operations (join, composition, star) can be computed efficiently. We also give a hybrid dataflow analysis algorithm that computes the closure of a matrix on a cutset of the control flow graph, thereby avoiding the recalculation of dataflow information when there are cycles in the graph. This method could potentially improve the performance over the standard worklist algorithm when a small cutset can be found.
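The star of such a matrix can be computed recursively with the standard block decomposition valid in any Kleene algebra (included here for reference; as described above, the paper's hybrid algorithm restricts the computation to a cutset of the control flow graph):

\[
\begin{pmatrix} a & b \\ c & d \end{pmatrix}^{*}
=
\begin{pmatrix}
(a + b\,d^{*}c)^{*} & (a + b\,d^{*}c)^{*}\,b\,d^{*} \\
(d + c\,a^{*}b)^{*}\,c\,a^{*} & (d + c\,a^{*}b)^{*}
\end{pmatrix}.
\]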

13.

Context

This paper deals with the development and verification of liveness properties of reactive systems using the Event-B method. Since the Event-B method is limited to invariance properties, we propose to apply the language TLA+ to verify liveness properties on Event-B models.

Objective

This paper deals with the use of two verification approaches, theorem proving and model checking, in the construction and verification of safe reactive systems. The theorem prover concerned is part of the Click_n_Prove tool associated with the Event-B method, and the model checker is TLC for TLA+ models.

Method

To verify liveness properties on Event-B systems, we first extend the expressivity and the semantics of a B model (called a temporal B model) to deal with the specification of fairness and eventuality properties. Second, we define the semantics of the extension over traces, in the same spirit as TLA+. Third, we give verification rules in the axiomatic style of the Event-B method. Finally, we give transformation rules from a temporal B model into a TLA+ module. In particular, we present our prototype system B2TLA+, which we have developed to support this transformation; liveness properties can then be verified with the model checker TLC on finite-state systems. For the verification of infinite-state systems, we propose the use of predicate diagrams and their associated tool DIXIT. Since B refinement preserves invariance properties through refinement steps, we propose rules to ensure that liveness properties are also preserved by B refinement.
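For orientation, the fairness and eventuality properties handled by the extension have the shape familiar from TLA+; the state predicates req and ack below are hypothetical.

\[
\mathrm{WF}_{v}(A) \;\triangleq\; \Box\Diamond\neg\,\mathrm{ENABLED}\,\langle A\rangle_{v} \ \vee\ \Box\Diamond\langle A\rangle_{v}, \qquad \Box(\mathit{req} \Rightarrow \Diamond\,\mathit{ack}).
\]

The first is weak fairness of an action A; the second is a typical eventuality property of the kind that TLC can check on the TLA+ module produced by B2TLA+ for finite-state systems.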

Results

The proposed approach is applied to the development of several examples of reactive systems, and our prototype system B2TLA+ is successfully used to transform temporal B models into TLA+ modules.

Conclusion

The paper defines an approach for the specification and verification of safety and liveness properties in the development of reactive systems using the Event-B method, the language TLA+, and predicate diagrams with their associated tools. The approach is illustrated on a case study of a parcel sorting system.

14.
Available statistical data show that the cost of repairing software faults rises dramatically in later development stages. In particular, the new technology of generating implementation code from architectural specifications requires highly reliable designs. Much research at this stage uses verification and validation techniques to prove correctness in terms of certain properties. A prominent approach in this category is model checking (Atlee, J.M., and Gannon, J. 1993. State-based model checking of event-driven systems requirements. IEEE Trans. Software Eng., 19(1): 24–40). Such approaches and software testing are complementary: testing reveals some errors that cannot easily be identified through verification, and vice versa. This paper presents a technology, and the accompanying tool suite, for the testing of software architecture specifications. We apply our state-of-the-art technology in software coverage testing, program diagnosis, and program understanding to software architectural designs. Our technology is based on both the control flow and the data flow of the executable architectural specifications. It first generates a program flow diagram from the specification and then automatically analyses the coverage features of the diagram. It collects the corresponding flow data during design simulation and maps it to the flow diagram. The coverage information for the original specification is then obtained from the coverage information of the flow diagram. This technology has been used for C, C++, and Java, and has proven effective (Agrawal, H., Alberti, J., Li, J.J., et al. 1998. Mining system tests to aid software maintenance. IEEE Computer, July, pp. 64–73).
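A minimal Python sketch of the coverage mapping described above (an illustration only, not the actual tool suite): given the flow diagram generated from the specification and the flow data collected during design simulation, node and branch coverage are obtained by projecting the traces onto the diagram. The node and edge names are made up.

    def coverage(nodes, edges, traces):
        """nodes: set of diagram nodes; edges: set of (src, dst) pairs;
        traces: node sequences collected during design simulation."""
        visited_nodes, visited_edges = set(), set()
        for trace in traces:
            visited_nodes.update(trace)
            visited_edges.update(zip(trace, trace[1:]))
        return (len(visited_nodes & nodes) / len(nodes),
                len(visited_edges & edges) / len(edges))

    nodes = {"start", "check", "sort", "route", "end"}
    edges = {("start", "check"), ("check", "sort"), ("check", "route"),
             ("sort", "end"), ("route", "end")}
    node_cov, branch_cov = coverage(nodes, edges, [["start", "check", "sort", "end"]])
    print(f"node coverage {node_cov:.0%}, branch coverage {branch_cov:.0%}")   # 80%, 60%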

15.
16.
Bytecode subroutines are a major complication for Java bytecode verification: they are difficult to fit into the dataflow analysis that the JVM specification suggests. Hence, subroutines are left out or are restricted in most formalizations of the bytecode verifier. We examine the problems that occur with subroutines and give an overview of the most prominent solutions in the literature. Using the theorem prover Isabelle/HOL, we have extended our substantial formalization of the JVM and the bytecode verifier, together with its proof of correctness, with the most general solution for bytecode subroutines.

17.
徐亮  谭煌 《计算机工程》2013,(12):130-135
The standard 《信息安全技术操作系统安全技术要求》 (Information Security Technology — Security Technical Requirements for Operating Systems) requires that the development of a secure operating system at the access verification protection level be based on a fully formalized security policy model. To meet this requirement, the classical data-confidentiality BLP model is improved accordingly: multi-level security labels and secure transition rules are introduced for the subjects and objects in the system, so that the model satisfies the needs of practical system development. The states, invariants, and transition rules of the improved model are described in a fully formal way, and the Isabelle theorem prover is used to prove that the transition rules preserve the model's invariants, thereby achieving automatic formal verification of the model's correctness and ensuring its reliability.
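For reference, the classical BLP model that is being improved enforces the following two conditions, where λ assigns a security level to each subject s and object o (a textbook rendering; the improved model described above additionally uses multi-level labels and secure transition rules):

\[
\text{simple security:}\quad s \ \mathbf{reads}\ o \ \Rightarrow\ \lambda(s) \ge \lambda(o), \qquad \star\text{-property:}\quad s \ \mathbf{writes}\ o \ \Rightarrow\ \lambda(s) \le \lambda(o).
\]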

18.
孙小祥  陈哲 《计算机科学》2021,48(1):268-272
With the development of software runtime verification techniques, many runtime memory-safety verification tools for the C language have appeared. Most of these tools implement runtime memory-safety checking by instrumenting the source code or intermediate code. However, some of these tools have not been rigorously proved correct and often suffer from two problems: first, the inserted instrumentation may change the behaviour and semantics of the source program; second, the instrumentation does not effectively guarantee memory safety. To address these problems, this paper proposes...

19.
Boogie is a verification condition generator for an imperative core language. It has front-ends for the programming languages C# and C enriched by annotations in first-order logic, i.e. pre- and postconditions, assertions, and loop invariants. Moreover, concepts like ghost fields, ghost variables, ghost code and specification functions have been introduced to support a specific modeling methodology. Boogie’s verification conditions—constructed via a wp calculus from annotated programs—are usually transferred to automated theorem provers such as Simplify or Z3. This also comprises the expansion of language-specific modeling constructs in terms of a theory describing memory and elementary operations on it; this theory is called a machine/memory model. In this paper, we present a proof environment, HOL-Boogie, that combines Boogie with the interactive theorem prover Isabelle/HOL, for a specific C front-end and a machine/memory model. In particular, we present specific techniques combining automated and interactive proof methods for code verification. The main goal of our environment is to help program verification engineers in their task to “debug” annotations and to find combined proofs where purely automatic proof attempts fail.
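As a reminder of what a wp calculus computes, here are the textbook rules for a small core language of assignments, sequencing, assertions, and assumptions (a generic sketch, not Boogie's exact formulation, which applies additional program transformations before handing formulas to the provers):

\[
\begin{aligned}
\mathit{wp}(x := e,\ Q) &= Q[e/x] \\
\mathit{wp}(S_{1};\,S_{2},\ Q) &= \mathit{wp}(S_{1},\ \mathit{wp}(S_{2},\ Q)) \\
\mathit{wp}(\mathbf{assert}\ P,\ Q) &= P \wedge Q \\
\mathit{wp}(\mathbf{assume}\ P,\ Q) &= P \Rightarrow Q
\end{aligned}
\]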

20.