21.
This issue contains the Proceedings of the First International Workshop on Parallel and Distributed Model Checking (PDMC 2002), held in Brno, Czech Republic, on August 19, 2002, as a satellite event to the 13th International Conference on Concurrency Theory (CONCUR 2002). The aim of the PDMC 2002 workshop was to cover all aspects of parallel and distributed model checking and supporting techniques. Its mission was twofold: to introduce people to the field of parallel and distributed model checking, and to serve as a working forum for describing new research, thereby building relationships between people working on parallel and distributed approaches to the verification of large-scale systems and encouraging cross-fertilization of ideas. The workshop programme consisted of:
- 2 invited talks, by Moshe Vardi on "Model Checking: A Complexity-Theoretic Perspective" and by Orna Grumberg on "Different Directions in Parallel and Distributed Model Checking",
- 8 regular presentations, and
- 5 short presentations.
Programme committee:
- Lubos Brim (MU Brno, Czech Republic) - Co-chair
- Gianpiero Cabodi (Torino, Italy)
- Hubert Garavel (INRIA, France)
- Orna Grumberg (Technion, Israel) - Co-chair
- Boudewijn R. Haverkort (Aachen, Germany)
- Kim Larsen (BRICS Aalborg, Denmark)
- Rajeev Ranjan (Real Intent, USA)
August 19, 2002
Lubos Brim, Orna Grumberg
22.
In this paper we present two methods that use static analysis of parallel programs to create reduced models for them. Our algorithms examine the control-flow graph of a program (the syntax) and create a smaller transition system than would have been created otherwise. The smaller transition system is equivalent to the original transition system of the program with respect to temporal logic specifications.

The two methods are orthogonal in their approach. The first, called path reduction, reduces the state space by compressing computation paths; it reduces the number of steps each computation takes. The second method, called dead variable reduction, reduces according to the variable domains: it identifies classes of equivalent states which differ only in variable values (and not the program counter) and uses a representative for each class. We also consider a refinement of the dead variable reduction, based on partially dead variables, which may result in a greater reduction.

Our algorithms are based on syntactic manipulation of expressions, thus enabling us to handle programs with variables over finite as well as infinite domains. Both methods can easily be combined with either explicit-state or symbolic methods (and with each other).

We used the Murphi verifier to test the amount of reduction achieved by both methods. We let Murphi perform a DFS search and compared the sizes of the original and reduced transition systems, for several examples and according to both reductions. The results show that path reduction and the reduction based on partially dead variables give significant reductions, while the effect of fully dead variables is less impressive. We discuss the differences between the approaches, and the reasons for these results.
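The dead-variable reduction described above can be illustrated with a small sketch. The `LIVE` table, the `representative` helper, and the toy states below are hypothetical, standing in for the liveness information a static analysis would compute; the point is only that states differing in a dead variable's value collapse to one representative.

```python
# Hypothetical live-variable sets per program location (from liveness
# analysis): at pc 0 only x is live; at pc 1 both x and y are live.
LIVE = {0: {"x"}, 1: {"x", "y"}}

def representative(state):
    """Map a state to its class representative by zeroing dead variables."""
    pc, env = state
    canon = {v: (val if v in LIVE[pc] else 0) for v, val in env.items()}
    return (pc, tuple(sorted(canon.items())))

# At pc 0, y is dead, so states differing only in y merge into one class;
# at pc 1, y is live, so the distinction is kept.
s1 = (0, {"x": 5, "y": 1})
s2 = (0, {"x": 5, "y": 9})
s3 = (1, {"x": 5, "y": 1})
print(representative(s1) == representative(s2))  # merged: y dead at pc 0
print(representative(s1) == representative(s3))  # distinct: different pc
```

A model checker that stores only representatives thus explores one state per equivalence class instead of one per concrete valuation of the dead variables.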
23.
This work presents a collection of methods that integrate symmetry reduction and under-approximation with symbolic model checking in order to reduce space and time. The main objective of these methods is falsification; under certain conditions, however, they can provide verification as well.

We first present algorithms that use symmetry reduction to perform on-the-fly model checking for temporal safety properties. These algorithms avoid building the orbit relation and choose representatives on the fly while computing the reachable states. We then extend these algorithms to check liveness properties as well. In addition, we introduce an iterative on-the-fly algorithm that builds subsets of the orbit relation rather than the full relation.

Our methods are fully automatic once the user supplies some basic information about the symmetry in the verified system. Moreover, the methods are robust and work correctly even if the information supplied by the user is incorrect. Furthermore, the methods return correct results even when the computation of the symmetry reduction has not been completed due to memory or time explosion.

We implemented our methods within the IBM model checker RuleBase and compared their performance to that of RuleBase. In most cases, our algorithms outperformed RuleBase in both time and space.
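On-the-fly representative selection can be sketched for the simplest case of full symmetry among identical processes, where sorting the tuple of local states canonicalizes each global state during the reachability computation. The toy protocol and helper names below are illustrative assumptions, not the paper's algorithms.

```python
from collections import deque

def successors(state):
    """Toy fully symmetric protocol: each process steps 0 -> 1 -> 2."""
    for i, loc in enumerate(state):
        if loc < 2:
            yield state[:i] + (loc + 1,) + state[i + 1:]

def reachable(initial, canonicalize=lambda s: s):
    """BFS that stores only the canonical representative of each state."""
    seen, frontier = {canonicalize(initial)}, deque([initial])
    while frontier:
        for succ in successors(frontier.popleft()):
            rep = canonicalize(succ)
            if rep not in seen:
                seen.add(rep)
                frontier.append(succ)
    return seen

full = reachable((0, 0, 0))
# Under full symmetry, a sorted tuple is a canonical orbit representative.
reduced = reachable((0, 0, 0), canonicalize=lambda s: tuple(sorted(s)))
print(len(full), len(reduced))  # 27 concrete states vs 10 representatives
```

Note that no orbit relation is ever built: each state is canonicalized as it is generated, which is the essence of choosing representatives on the fly.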
24.
The use of Craig interpolants has enabled the development of powerful hardware and software model checking techniques. Efficient algorithms are known for computing interpolants in rational and real linear arithmetic. We focus on subsets of integer linear arithmetic. Our main results are polynomial-time algorithms for obtaining interpolants for conjunctions of linear Diophantine equations, linear modular equations (linear congruences), and linear Diophantine disequations. We also present an interpolation result for conjunctions of mixed integer linear equations. We show the utility of the proposed interpolation algorithms for discovering modular/divisibility predicates in a counterexample-guided abstraction refinement (CEGAR) framework. This has enabled verification of simple programs that cannot be checked using existing CEGAR-based model checkers.

This paper is an extended version of [14]. This research was sponsored by the Gigascale Systems Research Center (GSRC), Semiconductor Research Corporation (SRC), the National Science Foundation (NSF), the Office of Naval Research (ONR), the Naval Research Laboratory (NRL), the Defense Advanced Research Projects Agency (DARPA), the Army Research Office (ARO), and the General Motors Collaborative Research Lab at CMU. The views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies, either expressed or implied, of GSRC, SRC, NSF, ONR, NRL, DARPA, ARO, GM, or the U.S. government.
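As a concrete illustration of the kind of divisibility predicate the abstract mentions (not the paper's polynomial-time algorithm), consider A: x = 2y and B: x = 2z + 1, which share only x and are jointly unsatisfiable; the modular predicate x mod 2 = 0 is then an interpolant. The hypothetical check below verifies the two interpolant conditions exhaustively over a finite window:

```python
# Hypothetical constraints: A and B share only x and are jointly unsat.
A = lambda x, y: x == 2 * y          # A(x, y): x is twice y, hence even
B = lambda x, z: x == 2 * z + 1      # B(x, z): x is odd
I = lambda x: x % 2 == 0             # candidate interpolant: 2 divides x

D = range(-20, 21)                   # finite window for an exhaustive check
assert all(I(x) for x in D for y in D if A(x, y))       # A implies I
assert not any(I(x) and B(x, z) for x in D for z in D)  # I and B unsat
print("x mod 2 == 0 is an interpolant for (A, B) on the checked domain")
```

An interpolant of this shape is exactly what a CEGAR loop can feed back as a new predicate, letting the abstraction track divisibility facts it previously ignored.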
25.
26.
Michael Huth, Orna Grumberg. International Journal on Software Tools for Technology Transfer (STTT), 2009, 11(2): 85-94
Reachability analysis asks whether a system can evolve from legitimate initial states to unsafe states. It is thus a fundamental tool in the validation of computational systems, be they software, hardware, or a combination thereof. We recall a standard approach for reachability analysis, which captures the system in a transition system, forms another transition system as an over-approximation, and performs an incremental fixed-point computation on that over-approximation to determine whether unsafe states can be reached. We show this method to be sound for proving the absence of errors, and discuss its limitations for proving the presence of errors, as well as some means of addressing this limitation. We then sketch how program annotations for data integrity constraints and interface specifications, as in Bertrand Meyer's paradigm of Design by Contract, can facilitate the validation of modular programs, e.g., by obtaining more precise verification conditions for software verification supported by automated theorem proving. Then we recap how the decision problem of satisfiability for formulae of logics with theories, e.g., bit-vector arithmetic, can be used to construct an over-approximating transition system for a program. Programs with data types composed of bit-vectors of finite width require bespoke decision procedures for satisfiability. Finite-width data types challenge the reduction of that decision problem to one that off-the-shelf tools can solve effectively, e.g., SAT solvers for propositional logic. In that context, we recall the Tseitin encoding, which converts formulae from that logic into conjunctive normal form (the standard format for most SAT solvers) with only linear blow-up in the size of the formula, but a linear increase in the number of variables. Finally, we discuss the contributions that the three papers in this special section make in the areas sketched above.
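The Tseitin encoding recalled above can be sketched directly: each connective gets a fresh variable and a handful of defining clauses, so the CNF grows linearly in the size of the formula while remaining equisatisfiable. The formula representation and helper names below are illustrative assumptions.

```python
import itertools

def tseitin(f, clauses, fresh):
    """f is a variable name, ('not', g), or ('and'|'or', g, h).
    Returns the variable that stands for f; its defining clauses are
    appended. A literal is (name, polarity); a clause is a literal list."""
    if isinstance(f, str):
        return f
    v = "t%d" % next(fresh)                   # fresh variable for this node
    if f[0] == "not":
        a = tseitin(f[1], clauses, fresh)     # v <-> not a
        clauses += [[(v, False), (a, False)], [(v, True), (a, True)]]
    elif f[0] == "and":
        a = tseitin(f[1], clauses, fresh)
        b = tseitin(f[2], clauses, fresh)     # v <-> (a and b)
        clauses += [[(v, False), (a, True)], [(v, False), (b, True)],
                    [(v, True), (a, False), (b, False)]]
    elif f[0] == "or":
        a = tseitin(f[1], clauses, fresh)
        b = tseitin(f[2], clauses, fresh)     # v <-> (a or b)
        clauses += [[(v, False), (a, True), (b, True)],
                    [(v, True), (a, False)], [(v, True), (b, False)]]
    return v

def to_cnf(f):
    clauses, fresh = [], itertools.count()
    clauses.append([(tseitin(f, clauses, fresh), True)])  # assert the root
    return clauses

def sat(cnf):
    """Brute-force satisfiability check of the CNF."""
    names = sorted({n for clause in cnf for n, _ in clause})
    for vals in itertools.product([False, True], repeat=len(names)):
        env = dict(zip(names, vals))
        if all(any(env[n] == pol for n, pol in clause) for clause in cnf):
            return True
    return False

phi = ("and", ("or", "x", ("not", "y")), ("not", ("and", "x", "y")))
print(len(to_cnf(phi)), "clauses")              # clause count grows linearly
print(sat(to_cnf(phi)))                         # True, e.g. x=True, y=False
print(sat(to_cnf(("and", "x", ("not", "x")))))  # False: a contradiction
```

A naive distribution of `or` over `and` can blow up exponentially; paying a linear number of fresh variables to keep the clause set linear is exactly the trade-off the Tseitin encoding makes.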
27.
28.
E.M. Clarke, O. Grumberg, M. Minea, D. Peled. International Journal on Software Tools for Technology Transfer (STTT), 1999, 2(3): 279-287
With the advancement of computer technology, highly concurrent systems are being developed. The verification of such systems is a challenging task, as their state space grows exponentially with the number of processes. Partial order reduction is an effective technique to address this problem. It relies on the observation that the effect of executing transitions concurrently is often independent of their ordering. In this paper we present the basic principles behind partial order reduction and its implementation.
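The independence observation underlying partial order reduction can be sketched as follows, assuming a toy state made of two counters; `t1`, `t2`, and `t3` are hypothetical transitions. Two transitions that touch disjoint variables commute from any state, so an explorer need consider only one of their interleavings, while transitions on the same variable generally do not commute.

```python
def t1(s):
    """Process 1 increments x (touches only x)."""
    return {**s, "x": s["x"] + 1}

def t2(s):
    """Process 2 increments y (touches only y)."""
    return {**s, "y": s["y"] + 1}

def t3(s):
    """Another transition on x: doubles it (conflicts with t1)."""
    return {**s, "x": s["x"] * 2}

def independent(a, b, s):
    """a and b commute from s: both execution orders reach the same state."""
    return a(b(s)) == b(a(s))

s0 = {"x": 0, "y": 0}
print(independent(t1, t2, s0))  # True: disjoint variables, one order suffices
print(independent(t1, t3, s0))  # False: both write x, the orders diverge
```

Real implementations check independence statically (e.g., by comparing read/write sets) rather than by executing both orders, but the commuting-diagram condition shown here is the property those checks approximate.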