Similar Documents
1.
In this paper, we address the problem of verifying probabilistic and epistemic properties in concurrent probabilistic systems expressed in PCTLK. PCTLK is an extension of Probabilistic Computation Tree Logic (PCTL) augmented with Knowledge (K). PCTLK provides two epistemic modalities: \(K_{i}\) for knowledge and \(Pr_{\triangledown b}K_{i}\) for probabilistic knowledge. The approach presented for verifying PCTLK specifications in such concurrent systems is based on a transformation technique. More precisely, we convert PCTLK model checking into the problem of model checking Probabilistic Branching Time Logic (PBTL), whose path quantifiers range over adversaries. We then prove that model checking a PCTLK formula in concurrent probabilistic programs is PSPACE-complete. Furthermore, we represent the models associated with PCTLK symbolically with Multi-Terminal Binary Decision Diagrams (MTBDDs), which are supported by the probabilistic model checker PRISM. Finally, an application, namely the NetBill online shopping payment protocol, and a synchronization example illustrated by the dining philosophers problem are implemented with the MTBDD engine of this model checker to verify probabilistic epistemic properties and to evaluate the practical complexity of this verification.
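
For orientation, the following grammar is a sketch of how PCTLK state formulas could be structured, combining the standard PCTL probabilistic operator with the two epistemic modalities named above; the exact syntax and semantics are the ones defined in the paper.

\[
\phi ::= p \mid \neg\phi \mid \phi \wedge \phi \mid Pr_{\bowtie b}[\psi] \mid K_{i}\,\phi \mid Pr_{\triangledown b}K_{i}\,\phi,
\qquad
\psi ::= X\,\phi \mid \phi\,U\,\phi
\]

where \(p\) is an atomic proposition, \(b \in [0,1]\), \(\bowtie\ \in \{<, \leq, \geq, >\}\), \(K_{i}\,\phi\) reads "agent \(i\) knows \(\phi\)", and \(Pr_{\triangledown b}K_{i}\,\phi\) constrains the probability with which agent \(i\) knows \(\phi\).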

2.
Static analysis tools, such as resource analyzers, give useful information on software systems, especially in real-time and safety-critical applications, so the question of the reliability of the obtained results is highly important. State-of-the-art static analyzers typically combine a range of complex techniques, make use of external tools, and evolve quickly, so formally verifying the tools themselves is not a realistic option. In this work, we propose a different approach whereby, instead of the tools, we formally verify the results of the tools. The central idea of such a formal verification framework for static analysis is the method-wise translation of the information about a program gathered during its static analysis into specification contracts that contain enough information for them to be verified automatically. We instantiate this framework with costa, a state-of-the-art static analysis system for sequential Java programs that produces resource guarantees, and KeY, a state-of-the-art verification tool, which formally verifies the correctness of such resource guarantees. Resource guarantees make it possible to be certain that programs will run within the indicated amount of resources, which may refer to memory consumption, the number of instructions executed, etc. Our results show that the proposed tool cooperation can be used to automatically produce verified resource guarantees.
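
As an illustration of what a resource guarantee asserts (not costa or KeY syntax; the bound and the instrumentation below are invented), consider a method whose cost is its number of loop steps: the analyzer reports an upper bound as a function of the input size, and the contract is the claim that every run stays within it. KeY discharges such a claim deductively for all inputs; the sketch merely samples small ones.

    # Illustrative sketch only: the bound "2*n + 1" and the step counter
    # are hypothetical, not output of costa or a KeY proof obligation.

    def sum_pairs(xs):
        """Toy method whose 'resource' is the number of loop steps taken."""
        steps = 0
        total = 0
        for x in xs:          # n iterations
            total += x
            steps += 1
        for x in xs[::2]:     # about n/2 iterations
            total -= x
            steps += 1
        return total, steps

    def guaranteed_bound(n):
        # Upper bound a resource analyzer might report for sum_pairs.
        return 2 * n + 1

    # The contract: for every input, steps <= guaranteed_bound(len(input)).
    for n in range(50):
        _, steps = sum_pairs(list(range(n)))
        assert steps <= guaranteed_bound(n), (n, steps)
    print("bound respected on all sampled inputs")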

3.
Many recent implementations of concurrent data structures have relaxed their linearizability requirements for better performance and scalability. Quasi-linearizability, k-linearizability and regular-relaxed linearizability are three quantitative relaxations of linearizability that have been proposed as correctness conditions for relaxed data structures while preserving the intuition of linearizability. Quasi-linearizability has been proved undecidable. In this paper, we first show that k-linearizability is undecidable for a bounded number of processes, by reducing quasi-linearizability to it. We then show that regular-relaxed linearizability is decidable for a bounded number of processes. We also find that the number of states of a relaxed specification is exponential in the number of states of the underlying specification automaton (which represents its relaxation strategy), and polynomial in the number of states of the underlying quantitative sequential specification and in the number of operations.
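
The toy model below (not the formal definitions used in the paper) conveys the flavor of such a quantitative relaxation: a queue whose dequeue may return any of the k oldest elements instead of strictly the oldest, staying within a bounded distance of the sequential FIFO specification.

    import random
    from collections import deque

    class KRelaxedQueue:
        """Toy spec-level model: dequeue may return any of the k front elements."""
        def __init__(self, k):
            self.k = k
            self.items = deque()

        def enqueue(self, x):
            self.items.append(x)

        def dequeue(self):
            if not self.items:
                return None
            # Nondeterministically pick one of the (at most) k oldest elements.
            i = random.randrange(min(self.k, len(self.items)))
            x = self.items[i]
            del self.items[i]
            return x

    q = KRelaxedQueue(k=3)
    for v in range(10):
        q.enqueue(v)
    print([q.dequeue() for _ in range(10)])   # a k-relaxed FIFO permutation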

4.
Inaccuracies, or deviations, in the measurements of monitored variables in a control system are facts of life that control software must accommodate. Deviation analysis can be used to determine how a software specification will behave in the face of such deviations. It is intended to answer questions such as “What is the effect on output O if input I is off by 0 to 100?”, a property best checked with some form of symbolic execution. In this report we propose a new approach to deviation analysis using model checking techniques. The key observation that allows us to use model checkers is that the property can be restated as “Will there be an effect on output O if input I is off by 0 to 100?”; this restatement turns the analysis from an exploratory analysis into a verification task suitable for model checking.
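
One generic way to phrase the restated question as a verification task (a sketch, not necessarily the encoding used in the report) is self-composition: explore a nominal copy and a deviated copy of the specification on the same inputs and ask whether any admissible deviation can change the output.

    # Generic self-composition sketch; the controller and the deviation
    # range are invented for illustration.

    def controller(measured_level):
        """Toy controller: open the valve when the measured level exceeds 80."""
        return "OPEN" if measured_level > 80 else "CLOSED"

    def outputs_can_differ(true_levels, max_deviation):
        """Can a sensor error in [0, max_deviation] change the output?"""
        for level in true_levels:
            for dev in range(max_deviation + 1):
                if controller(level) != controller(level + dev):
                    return True, (level, dev)
        return False, None

    differs, witness = outputs_can_differ(range(101), max_deviation=10)
    print(differs, witness)   # True, (71, 10): a level of 71 read as 81 flips the output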

5.
Model checking is a successful approach for verifying hardware and software systems. Despite its success, the technique suffers from the state explosion problem, which arises due to the large state space of real-life systems. One solution to the state explosion problem is compositional verification, which aims to decompose the verification of a large system into the more manageable verification of its components. To account for dependencies between components, assume-guarantee reasoning defines rules that break up the global verification of a system into local verification of individual components, using assumptions about the rest of the system. In recent years, compositional techniques have achieved significant success following a breakthrough in the ability to automate assume-guarantee reasoning. However, automation has been restricted to simple acyclic assume-guarantee rules. In this work, we focus on automating circular assume-guarantee reasoning, in which the verification of individual components mutually depends on each other. We use a sound and complete circular assume-guarantee rule and describe how to automatically build the assumptions needed to apply the rule. Our algorithm accumulates joint constraints on the assumptions based on (spurious) counterexamples obtained from checking the premises of the rule, and uses a SAT solver to synthesize minimal assumptions that satisfy these constraints. To the best of our knowledge, our work is the first to fully automate circular assume-guarantee reasoning. We implemented our approach and compared it with established non-circular compositional methods that use learning or SAT-based techniques. The experiments show that the assumptions generated for the circular rule are generally smaller, and on the larger examples we obtain a significant speedup.

6.
Research on New Techniques in Model Checking
戎玫, 张广泉. 《计算机科学》 (Computer Science), 2003, 30(5): 102-104
1 Introduction. Whether software can be trusted has become one of the key factors determining whether a nation's economic, defense, and other systems can operate normally, above all in safety-critical domains such as nuclear reactor control, aerospace, and railway scheduling. Such systems must be absolutely safe and reliable and leave no room for oversight, since any flaw can lead to catastrophic consequences. For example, on June 4, 1996, the European Space Agency's Ariane 501 rocket exploded 37 seconds after launch because of specification and design errors in its control software. Similar reports appear again and again, and ensuring the reliability of such systems has become a focal concern shared by the computer science and control theory communities.

7.
Uppaal is a tool for modeling and verifying real-time systems, while PVS (Prototype Verification System) is a prototype proof system for developing and analyzing formal specifications. This paper describes the design and implementation of the Uppaal2PVS translator and presents a method for translating the timed-automaton specifications produced with Uppaal into PVS files, thereby turning the model checking problem into a theorem proving problem and sidestepping the potential state space explosion problem. An example is given at the end.

8.
As a formal verification technique, model checking has been widely applied to verifying the correctness of all kinds of concurrent systems. For concurrent systems that exhibit both nondeterministic choice and generalized possibility distributions, this work introduces generalized possibilistic decision processes as models of such systems, defines generalized possibilistic computation tree logic as the specification language for describing their properties, and studies the corresponding model checking problem. The results show that the time complexity of the model checking algorithm remains polynomial, which extends the range of application of generalized possibility measures in model checking.
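
For intuition (a standard possibilistic formulation; the paper's generalized measures and decision processes refine it), transition weights are possibility degrees in \([0,1]\) and path quantification uses max–min rather than sum–product, e.g. the possibility of eventually reaching a target set \(B\) from state \(s\):

\[
Po_{s}(\Diamond B) \;=\; \sup_{n \ge 0}\ \ \sup_{\substack{s_{0}s_{1}\cdots s_{n} \\ s_{0}=s,\ s_{n}\in B}}\ \ \min_{0 \le i < n} \delta(s_{i}, s_{i+1}),
\]

where \(\delta(s,s') \in [0,1]\) is the possibility degree of the transition from \(s\) to \(s'\) and the empty minimum (\(n=0\)) is taken as 1. Such max–min fixpoints are computable in polynomial time, consistent with the complexity stated above.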

9.
In implementation verification, we check that an implementation is correct with respect to a specification by checking whether the behaviors of a transition system that models the program's implementation correlate with the behaviors of a transition system that models its specification. In this paper, we investigate the effect of concurrency on the complexity of implementation verification. We consider trace-based and tree-based approaches to the verification of concurrent transition systems, with and without fairness. Our results show that in almost all cases the problem is exponentially harder than in the sequential case. Thus, as in the model-checking verification methodology, the state-explosion problem cannot be avoided.

10.
Contextual refinement is a compositional approach to the verification of concurrent objects. There has been much work on designing program logics to prove the contextual refinement between an object implementation and its abstract specification. However, these program logics for contextual refinement verification cannot support objects with resource ownership transfer, which is a common pattern in many concurrent objects, such as the memory management module in OS kernels, which transfers allocated memory blocks between the object and its clients. In this paper, we propose a new approach to giving abstract, implementation-independent specifications to concurrent objects with ownership transfer. We also design a program logic to verify the contextual refinement of concurrent objects with respect to their abstract specifications. We have successfully applied our logic to verifying an implementation of the memory management module, where the implementation is an appropriately simplified version of the one in a real-world preemptive OS kernel.

11.
In a literature review on the last 20 years of automated analysis of feature models, the formalization of analysis operations was identified as the most relevant challenge in the field. Such a formalization could provide very valuable assets for tool developers, such as a precise definition of the analysis operations and, what is more, a reference implementation, i.e., a trustworthy, not necessarily efficient implementation against which to compare the outputs of different tools. In this article, we present the FLAME framework as the result of facing this challenge. FLAME is a formal framework that can be used to formally specify not only feature models, but other variability modeling languages (VMLs) as well. This reusability is achieved by its two-layered architecture. The abstract foundation layer is the bottom layer, in which all VML-independent analysis operations and concepts are specified. On top of the foundation layer, a family of characteristic model layers (one for each VML to be formally specified) can be developed by redefining some abstract types and relations. The verification and validation of FLAME has followed a process in which formal verification was performed traditionally by manual theorem proving, whereas validation was performed by drawing on our experience with metamorphic testing of variability analysis tools, which has proved much more effective than manually designed test cases. To follow this automated, test-based validation approach, the specification of FLAME, written in Z, was translated into Prolog and 20,000 random tests were automatically generated and executed. Test results helped to discover some inconsistencies not only in the formal specification, but also in the previous informal definitions of the analysis operations and in current analysis tools. After this process, the Prolog implementation of FLAME is being used as a reference implementation by some tool developers, some analysis operations have been formally specified for the first time with more generic semantics, and more VMLs are being formally specified using FLAME.
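
To give a flavor of the metamorphic approach, a relation of this kind predicts the result of an analysis on a transformed model from the result on the original one, so randomly generated models can act as their own oracles. The toy enumerator and relation below are purely illustrative (FLAME's reference implementation is in Prolog and covers far richer constructs).

    # Toy feature-model product enumerator (mandatory/optional children only)
    # and one typical metamorphic relation; not FLAME's specification.

    def products(feature):
        """Return all products (frozensets of feature names) of a feature tree."""
        name, children = feature
        result = [frozenset([name])]
        for kind, child in children:
            sub = products(child)
            options = sub if kind == "mandatory" else [frozenset()] + sub
            result = [r | o for r in result for o in options]
        return result

    fm = ("root", [("mandatory", ("engine", [])),
                   ("optional",  ("gps",    []))])
    # Relation: adding an optional child to the root doubles the product count.
    fm2 = ("root", [("mandatory", ("engine",  [])),
                    ("optional",  ("gps",     [])),
                    ("optional",  ("sunroof", []))])
    assert len(products(fm2)) == 2 * len(products(fm))
    print(len(products(fm)), len(products(fm2)))   # 2 4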

12.
Data Flow Diagrams (DFDs) have been widely used in software engineering as a means of requirements analysis and system specification. However, one defect of the DFD approach remains unaddressed: the lack of formal semantics, which has caused many problems. In this paper, we model Data Flow Diagrams as networks of concurrent processes. With the use of the temporal logic language XYZ/E, the formal basis of the semantic specification of DFDs can be ensured, and system properties such as safety and liveness can be easily characterized. The main part of this paper is devoted to the hierarchical decomposition of the semantic specification and its correctness. A verification methodology is proposed and several examples are analyzed. The implementation of tools that support the formal specification, verification and simulation of DFDs is also briefly described.

13.
One of the great benefits of using a stream X-machine to specify a system is its associated testing method. Under certain design-for-test conditions, this method produces a test suite that can determine the correctness of the implementation under test (IUT), provided that the basic components of the stream X-machine model have been correctly implemented. However, such an approach implies that each component can be tested in isolation from the rest of the system. This limitation can, in practice, be resolved by developing stubs and drivers, but doing so adds complexity to the testing process and, furthermore, these new pieces of software can introduce faults that invalidate the theoretical results of the aforementioned testing method. This paper extends the approach by allowing component testing to be performed in parallel with integration testing, while still guaranteeing the correctness of the IUT under the given design-for-test conditions. It also shows how the integration test suite produced in previous publications can be reduced.

14.
We present an algorithm for extracting control flow graphs from Java bytecode that captures normal as well as exceptional control flow. We prove its correctness, in the sense that the behaviour of the extracted control flow graph is a sound over-approximation of the behaviour of the original program. This makes the extracted control flow graphs suitable for various static analyses, such as model checking of temporal safety properties. Analysing exceptional control flow for Java bytecode is difficult because of the stack-based nature of the language. We therefore develop the extraction in two stages. In the first, we abstract away from the complications arising from exceptional flows and relativize the extraction on an oracle that is able to look into the stack and predict the exceptions that can be raised at each instruction. This idealized algorithm provides a specification for concrete extraction algorithms, which have to provide a suitable implementation of the oracle. We prove the correctness of the idealized algorithm by means of behavioural simulation. In the second stage, we develop a concrete extraction algorithm that consists of two phases. In the first phase, the program is transformed into a BIR program, a stack-less intermediate representation of Java bytecode, from which the control flow graph is extracted in the second phase. We use this intermediate format because it provides the information needed to implement the oracle and because it gives rise to more compact graphs. We show that the behaviour of the control flow graph extracted via the intermediate representation is a sound over-approximation of the behaviour of the graph extracted by the direct, idealized algorithm, and thus of the original program. The concrete extraction algorithm is implemented as the ConFlEx tool, and a number of test cases are run to evaluate its efficiency.
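
The separation between the idealized algorithm and its implementations can be pictured as a CFG extractor parameterized by the exception oracle; the sketch below is a toy illustration (invented instruction set and handler table, not ConFlEx or real Java bytecode).

    # Toy CFG extraction relativized on an oracle that predicts which
    # exceptions each instruction may raise; everything here is invented.

    instructions = {
        0: ("load", 1),          # falls through to pc 1
        1: ("div", 2),           # may raise ArithmeticException, else falls through
        2: ("return", None),
        3: ("handler_log", 4),   # exception-handler code
        4: ("return", None),
    }

    # (covered pcs, exception type, handler pc), as in a bytecode exception table
    handlers = [({0, 1, 2}, "ArithmeticException", 3)]

    def oracle(pc, op):
        """Stand-in for the stack-inspecting oracle of the idealized algorithm."""
        return {"ArithmeticException"} if op == "div" else set()

    def extract_cfg(instructions, handlers, oracle):
        edges = set()
        for pc, (op, nxt) in instructions.items():
            if nxt is not None:                      # normal control flow
                edges.add((pc, nxt, "normal"))
            for exc in oracle(pc, op):               # exceptional control flow
                for covered, etype, hpc in handlers:
                    if pc in covered and etype == exc:
                        edges.add((pc, hpc, exc))
                        break
        return edges

    for edge in sorted(extract_cfg(instructions, handlers, oracle)):
        print(edge)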

15.
The state space explosion problem in model checking remains the chief obstacle to the practical verification of real-world distributed systems. We attempt to address this problem in the context of verifying concurrent (message-passing) C programs against safety specifications. More specifically, we present a fully automated compositional framework which combines two orthogonal abstraction techniques (operating respectively on data and events) within a counterexample-guided abstraction refinement (CEGAR) scheme. In this way, our algorithm incrementally increases the granularity of the abstractions until the specification is either established or refuted. Our explicit use of compositionality delays the onset of state space explosion for as long as possible. To our knowledge, this is the first compositional use of CEGAR in the context of model checking concurrent C programs. We describe our approach in detail, and report on some very encouraging preliminary experimental results obtained with our tool MAGIC.
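
The overall loop has the familiar CEGAR shape; the miniature, self-contained instance below (localization-reduction style over two boolean variables) is only meant to show that shape, not MAGIC's predicate- and action-based abstractions for concurrent C.

    from itertools import product

    # Miniature CEGAR: hide variables, search the abstraction, check whether an
    # abstract counterexample is concretely replayable, refine if it is not.

    VARS = ("x", "y")
    def initial():                       # concrete initial states (x, y)
        return {(0, 0)}
    def successors(s):                   # concrete transition: x' = y, y' = y
        x, y = s
        return {(y, y)}
    def is_error(s):
        return s[0] == 1                 # safety property: x never becomes 1

    def abstract(s, vis):                # keep only the "visible" variables
        return tuple(v for v, keep in zip(s, vis) if keep)

    def abstract_cex(vis):
        """Search the existential abstraction; return an abstract error path."""
        concrete = list(product((0, 1), repeat=len(VARS)))
        init_a = {abstract(s, vis) for s in initial()}
        parent, frontier = {a: None for a in init_a}, list(init_a)
        while frontier:
            a = frontier.pop()
            if any(is_error(s) for s in concrete if abstract(s, vis) == a):
                path = []
                while a is not None:
                    path.append(a)
                    a = parent[a]
                return path[::-1]
            for s in concrete:
                if abstract(s, vis) != a:
                    continue
                for t in successors(s):
                    b = abstract(t, vis)
                    if b not in parent:
                        parent[b] = a
                        frontier.append(b)
        return None

    def spurious(path, vis):
        """Replay the abstract path on the concrete system."""
        reach = {s for s in initial() if abstract(s, vis) == path[0]}
        for a in path[1:]:
            reach = {t for s in reach for t in successors(s)
                     if abstract(t, vis) == a}
            if not reach:
                return True
        return not any(is_error(s) for s in reach)

    vis = [True, False]                  # start by hiding y
    while True:
        path = abstract_cex(vis)
        if path is None:
            print("property holds")
            break
        if not spurious(path, vis):
            print("real counterexample:", path)
            break
        vis[vis.index(False)] = True     # refine: expose one more variable
        print("spurious; refined to", [v for v, k in zip(VARS, vis) if k])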

16.
This paper presents a formal framework for verifying distributed embedded systems. An embedded system is described as a set of concurrent real-time functions which communicate through a network of interconnected switches involving message queues and routing services. To allow requirements verification, such a model is then translated into timed automata. However, the complexity inherent in distributed embedded systems often makes it impossible to apply model checking techniques directly. Consequently, the paper presents an abstraction-based verification method which consists in abstracting the communication network by end-to-end timed channels. Proving a given safety property φ then requires (1) discharging a set of proof obligations ensuring the correctness of the abstraction step (i.e., that the end-to-end channels correctly abstract the network), and (2) proving φ at the abstract level. The expected advantage of such a method lies in its ability to overcome the combinatorial explosion frequently met when verifying complex systems. The method is illustrated by an avionics case study.
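
Schematically (a generic formulation, not the paper's exact proof obligations), the method amounts to the rule

\[
\big(N \preceq C\big)\ \wedge\ \big(\mathit{Functions} \parallel C \models \varphi\big)\ \Longrightarrow\ \mathit{Functions} \parallel N \models \varphi,
\]

where \(N\) is the concrete switch network, \(C\) the set of end-to-end timed channels (e.g. each route abstracted by a delay interval \([d_{\min}, d_{\max}]\)), and \(\preceq\) a refinement relation that preserves the safety property \(\varphi\); the proof obligations of step (1) establish \(N \preceq C\), and step (2) model checks the right-hand premise on the much smaller abstract model.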

17.
The construction of large software systems is always achieved through the assembly of independently written components, i.e., program modules. For these software components to work together, they must share a common set of data types and principles for representing structured data such as arrays of values and files. This common set of tools for creating and operating on data objects is provided by the infrastructure of the computer system: the hardware, operating system and runtime code. Because the nature and properties of these tools are crucial for the correct operation of software components and their inter-operation, it is essential to have a precise specification that may be used for verifying the correctness of application software on the one hand, and the correctness of system behavior on the other. We call such a specification a program execution model (PXM). It is evident that the properties of the PXM implemented by a computer system can have a serious impact on the ability of application programmers to practice modular software construction. This paper discusses the concept of program execution models and presents a set of principles that a PXM must satisfy to provide a sound basis for modular software construction. Because parallel program execution on computer systems with many processing units is an essential part of contemporary computing environments, the expression of parallelism and modular software construction using components involving parallel operations is included in this treatment. The conclusion is that it is possible to build computer systems that implement a PXM within which any parallel program may be used, unmodified, as a component for building more substantial parallel programs.

18.
The paper suggests a method for synthesizing adaptive tests with guaranteed coverage for checking the functioning of discrete systems whose behavior is described by nondeterministic finite state machines. In contrast to other known methods, we do not represent the complete test as a tree but list test cases one by one and check the functioning of the finite state machine on each of them. The complete test detects all defective systems that are r-distinguishable from the reference system. In addition, the test detects other defective systems containing traces that are not present in the specification, but detection of all such systems that are r-compatible with the specification is not guaranteed.
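
Checking a single test case against the nondeterministic specification amounts to trace containment: apply the inputs to the implementation, observe its outputs, and accept only if the observed input/output sequence is a trace of the reference machine. A toy sketch (the machine below is invented):

    # spec[state][input] = set of (output, next_state) pairs
    SPEC = {
        "s0": {"a": {("0", "s1"), ("1", "s0")}, "b": {("0", "s0")}},
        "s1": {"a": {("1", "s0")},              "b": {("0", "s1"), ("1", "s0")}},
    }
    INITIAL = "s0"

    def conforms(inputs, outputs, spec=SPEC, initial=INITIAL):
        """True iff the observed I/O sequence is a trace of the specification."""
        current = {initial}                       # track all compatible spec states
        for i, o in zip(inputs, outputs):
            current = {nxt for s in current
                       for out, nxt in spec[s].get(i, set()) if out == o}
            if not current:
                return False                      # observed behaviour not in the spec
        return True

    print(conforms(list("aab"), list("010")))     # True: a trace of the specification
    print(conforms(list("aab"), list("011")))     # False: output 1 on b is not allowed from s0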

19.
A GALS (Globally Asynchronous, Locally Synchronous) system consists of several synchronous components that evolve concurrently and interact with each other asynchronously. The design of GALS systems is tedious and error-prone due to the high degree of synchronous and asynchronous concurrency present in complex architectures. In this paper, we present GRL (GALS Representation Language), a formal language designed to model GALS systems for the purpose of formally verifying their asynchronous aspects. GRL combines the synchronous reactive model underlying dataflow languages and the asynchronous concurrent model underlying process algebras. We propose a translation from GRL to LNT, a value-passing concurrent language with a classical process-algebra flavour. This makes it possible to analyse GRL specifications using all the state-of-the-art simulation and verification functionalities provided by the CADP toolbox.
