Similar Literature
20 similar documents found (search time: 31 ms)
1.
We present a framework for model checking concurrent software systems which incorporates both states and events. Contrary to other state/event approaches, our work also integrates two powerful verification techniques, counterexample-guided abstraction refinement and compositional reasoning. Our specification language is a state/event extension of linear temporal logic, and allows us to express many properties of software in a concise and intuitive manner. We show how standard automata-theoretic LTL model checking algorithms can be ported to our framework at no extra cost, enabling us to directly benefit from the large body of research on efficient LTL verification. We also present an algorithm to detect deadlocks in concurrent message-passing programs. Deadlock-freedom is not only an important and desirable property in its own right, but is also a prerequisite for the soundness of our model checking algorithm. Even though deadlock is inherently non-compositional and is not preserved by classical abstractions, our iterative algorithm employs both (non-standard) abstractions and compositional reasoning to alleviate the state-space explosion problem. The resulting framework differs in key respects from other instances of the counterexample-guided abstraction refinement paradigm found in the literature. We have implemented this work in the MAGIC verification tool for concurrent C programs and performed tests on a broad set of benchmarks. Our experiments show that this new approach not only eases the writing of specifications, but also yields important gains both in space and in time during verification. In certain cases, we even encountered specifications that could not be verified using traditional pure event-based or state-based approaches, but became tractable within our state/event framework. We also recorded substantial reductions in time and memory consumption when performing deadlock-freedom checks with our new abstractions. Finally, we report two bugs (including a deadlock) in the source code of Micro-C/OS versions 2.0 and 2.7, which we discovered during our experiments. This research was sponsored by the National Science Foundation (NSF) under grants no. CCR-9803774 and CCR-0121547, the Office of Naval Research (ONR) and the Naval Research Laboratory (NRL) under contract no. N00014-01-1-0796, the Army Research Office (ARO) under contract no. DAAD19-01-1-0485, and was conducted as part of the Predictable Assembly from Certifiable Components (PACC) project at the Software Engineering Institute (SEI). This article combines and builds upon the papers (CCO+04) and (CCOS04). Received December 2004. Revised July 2005. Accepted July 2005 by Eerke A. Boiten, John Derrick, Graeme Smith and Ian Hayes.
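
The deadlock check described here operates on parallel compositions of message-passing components. As a rough, self-contained illustration of what such a check looks for (a toy explicit-state search over the product of two hand-made labelled transition systems, with no relation to MAGIC's abstractions or compositional reasoning; the component names, alphabets, and transitions are all assumptions made for the example), consider the following Python sketch:

from collections import deque

# Two made-up components, each given as an explicit alphabet plus a transition
# table {state: {action: next_state}}.  Actions in both alphabets synchronize;
# all other actions interleave freely.
client = {
    "alphabet": {"init", "req", "ack"},
    "trans": {"c0": {"init": "c1"}, "c1": {"req": "c2"}, "c2": {"ack": "c3"}},
}
server = {
    "alphabet": {"req", "ack"},
    "trans": {"s0": {"ack": "s1"}, "s1": {"req": "s2"}},
}

def find_deadlock(a, b, init):
    """Breadth-first search of the parallel composition for a reachable state
    with no enabled action; returns the action trace leading to it, or None."""
    sync = a["alphabet"] & b["alphabet"]
    seen, queue = {init}, deque([(init, [])])
    while queue:
        (sa, sb), trace = queue.popleft()
        succs = []
        for act, ta in a["trans"].get(sa, {}).items():
            if act in sync:
                tb = b["trans"].get(sb, {}).get(act)
                if tb is not None:                 # both components move together
                    succs.append(((ta, tb), act))
            else:
                succs.append(((ta, sb), act))      # local move of component a
        for act, tb in b["trans"].get(sb, {}).items():
            if act not in sync:
                succs.append(((sa, tb), act))      # local move of component b
        if not succs:
            return trace                           # deadlock: nothing is enabled
        for nxt, act in succs:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, trace + [act]))
    return None

# The client wants to send 'req' before receiving 'ack'; the server insists on
# 'ack' first, so after the client's local 'init' step nothing can move.
print("deadlocking trace:", find_deadlock(client, server, ("c0", "s0")))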

2.
The state space explosion problem in model checking remains the chief obstacle to the practical verification of real-world distributed systems. We attempt to address this problem in the context of verifying concurrent (message-passing) C programs against safety specifications. More specifically, we present a fully automated compositional framework which combines two orthogonal abstraction techniques (operating respectively on data and events) within a counterexample-guided abstraction refinement (CEGAR) scheme. In this way, our algorithm incrementally increases the granularity of the abstractions until the specification is either established or refuted. Our explicit use of compositionality delays the onset of state space explosion for as long as possible. To our knowledge, this is the first compositional use of CEGAR in the context of model checking concurrent C programs. We describe our approach in detail, and report on some very encouraging preliminary experimental results obtained with our tool MAGIC.
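
The CEGAR loop itself can be illustrated on a deliberately tiny explicit-state example. The sketch below is a generic, textbook-style CEGAR over a partition of concrete states, not the data/event abstractions or the compositional scheme described above; the toy transition system and the splitting-based refinement are assumptions made for the illustration.

from collections import deque

# Toy concrete system, assumed for illustration: states 0..7, one transition
# x -> (x + 2) % 8, initial state 0, and a "bad" state 5 that the property
# forbids (it is in fact unreachable, since only even states are reachable).
STATES = set(range(8))
INIT = {0}
BAD = {5}

def post(s):
    return {(s + 2) % 8}

def abstract_counterexample(blocks):
    """BFS over the abstraction induced by the partition `blocks`; returns a
    sequence of block indices from an initial block to a block containing a
    bad state, or None if no such abstract path exists."""
    def block_of(s):
        return next(i for i, b in enumerate(blocks) if s in b)
    start = {block_of(s) for s in INIT}
    queue = deque((i, [i]) for i in start)
    seen = set(start)
    while queue:
        i, path = queue.popleft()
        if blocks[i] & BAD:
            return path
        for j in {block_of(t) for s in blocks[i] for t in post(s)}:
            if j not in seen:
                seen.add(j)
                queue.append((j, path + [j]))
    return None

def simulate(blocks, path):
    """Replay the abstract path on the concrete system.  Returns (True, None)
    if it corresponds to a real counterexample, otherwise (False, (i, reach))
    where block i should be split into `reach` and the rest."""
    reach = INIT & blocks[path[0]]
    for k in range(1, len(path)):
        nxt = {t for s in reach for t in post(s)} & blocks[path[k]]
        if not nxt:
            return False, (path[k - 1], reach)
        reach = nxt
    if reach & BAD:
        return True, None
    return False, (path[-1], reach)

def cegar():
    blocks = [set(STATES)]                       # coarsest abstraction: one block
    while True:
        path = abstract_counterexample(blocks)
        if path is None:
            return "verified: bad state unreachable", blocks
        real, split = simulate(blocks, path)
        if real:
            return "real counterexample found", blocks
        idx, reach = split                       # spurious: refine by splitting
        blocks = blocks[:idx] + [reach, blocks[idx] - reach] + blocks[idx + 1:]

print(cegar())

On this toy system the loop splits the initial one-block abstraction a few times and then proves the bad state unreachable; each refinement is driven by the point at which the spurious abstract counterexample fails to replay concretely.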

3.
Buffer overflow detection using static analysis can provide a powerful tool for software programmers to find difficult bugs in C programs. Sound static analysis based on abstract interpretation, however, often suffers from the false alarm problem. Although more precise abstraction can reduce the number of the false alarms in general, the cost to perform such analysis is often too high to be practical for large software. On the other hand, less precise abstraction is likely to be scalable in exchange for the increased false alarms. In order to attain both precision and scalability, we present a method that first applies less precise abstraction to find buffer overflow alarms fast, and selectively applies a more precise analysis only to the limited areas of code around the potential false alarms. In an attempt to develop the precise analysis of alarm filtering for large C programs, we perform a symbolic execution over the potential alarms found in the previous analysis, which is based on abstract interpretation. Taking advantage of a state-of-the-art SMT solver, our precise analysis efficiently filters out a substantial number of false alarms. Our experiment with the test cases from three open source programs shows that our filtering method can reduce about 68% of the false alarms on average.
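
To make the two-phase idea concrete, here is a small self-contained Python sketch; the buffer size, the toy three-line program, and the use of exhaustive input enumeration as a stand-in for symbolic execution with an SMT solver are all assumptions made for the illustration, not the analysis described above. A coarse, non-relational interval pass raises an alarm, and the precise recheck then filters it out as a false positive.

BUF_SIZE = 8

# Toy program under analysis (assumed for illustration):
#     a = x % 8
#     b = 7 - a
#     buf[a + b] = 0        # the index is in fact always exactly 7

def coarse_interval_pass():
    """Non-relational interval abstraction: a in [0,7], b in [0,7], so the
    index a + b is over-approximated by [0,14].  The correlation between a and
    b is lost, and a potential overflow alarm is raised."""
    a = (0, 7)
    b = (7 - a[1], 7 - a[0])                 # (0, 7)
    idx = (a[0] + b[0], a[1] + b[1])         # (0, 14)
    return idx[1] >= BUF_SIZE                # True -> alarm

def precise_recheck(input_bits=8):
    """Precise filter for the alarm.  A real tool would symbolically execute
    the alarmed statement and discharge the path condition with an SMT solver;
    enumerating the small input space here is just a stand-in for that step."""
    for x in range(2 ** input_bits):
        a = x % 8
        b = 7 - a
        if not (0 <= a + b < BUF_SIZE):
            return True                      # feasible overflow found
    return False                             # no feasible overflow: false alarm

if __name__ == "__main__":
    if coarse_interval_pass():
        print("coarse pass: potential overflow at buf[a + b]")
        if not precise_recheck():
            print("precise pass: alarm filtered as a false positive")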

4.
Debugging is crucial for producing reliable software. One of the effective bug localization techniques is spectral-based fault localization (SBFL). It helps to locate a buggy statement by applying an evaluation metric to program spectra and ranking program components on the basis of the score it computes. SBFL is an example of a dynamic analysis: an analysis of a computer program that is performed by executing it with a sufficient number of test cases. Static analysis, on the other hand, is performed in a non-runtime environment. We introduce a weighting technique by combining these two kinds of program analysis. Static analysis is performed to categorize program statements into different classes and to assign them weights based on the likelihood of their being the buggy statement. Statements are finally ranked on the basis of the weights computed from the statements' categorization (static analysis) and the scores computed by SBFL metrics (dynamic analysis). We evaluate the performance of our technique on the Siemens test suite and on Flex (with bugs seeded by expert developers), Sed (with a mixture of real and seeded bugs), and Space (with real bugs). In our evaluation, the proposed weighting technique improves the performance of a wide variety of fault localization metrics by up to 20% on single-bug datasets and up to 42% on multi-bug datasets. Copyright © 2017 John Wiley & Sons, Ltd.
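
As a small illustration of how a static weight can be folded into a spectrum-based ranking, consider the sketch below; the coverage matrix, the per-statement weights, the Ochiai metric, and the multiplicative combination are assumptions made for the example, not the categorization or datasets used in the paper.

import math

# Program spectra: which statements each test covers, plus the test verdicts.
# All numbers below are made up for illustration.
coverage = {                     # statement -> set of tests that execute it
    "s1": {"t1", "t2", "t3"},
    "s2": {"t1", "t3"},
    "s3": {"t2", "t3", "t4"},
}
failed = {"t3"}                  # failing tests
passed = {"t1", "t2", "t4"}      # passing tests

# Hypothetical static weights, e.g. assignments and branch predicates judged
# more bug-prone than other statements by some static categorization.
static_weight = {"s1": 0.6, "s2": 1.0, "s3": 0.8}

def ochiai(stmt):
    """Ochiai suspiciousness computed from the spectrum (the dynamic part)."""
    ef = len(coverage[stmt] & failed)        # failing tests covering stmt
    ep = len(coverage[stmt] & passed)        # passing tests covering stmt
    nf = len(failed - coverage[stmt])        # failing tests not covering stmt
    denom = math.sqrt((ef + nf) * (ef + ep))
    return ef / denom if denom else 0.0

# Combine: rank statements by static weight times dynamic suspiciousness.
ranking = sorted(coverage, key=lambda s: static_weight[s] * ochiai(s), reverse=True)
for s in ranking:
    print(s, round(static_weight[s] * ochiai(s), 3))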

5.
We present a novel framework for automatic inference of efficient synchronization in concurrent programs, a task known to be difficult and error-prone when done manually. Our framework is based on abstract interpretation and can infer synchronization for infinite state programs. Given a program, a specification, and an abstraction, we infer synchronization that avoids all (abstract) interleavings that may violate the specification, but permits as many valid interleavings as possible. Combined with abstraction refinement, our framework can be viewed as a new approach for verification where both the program and the abstraction can be modified on-the-fly during the verification process. The ability to modify the program, and not only the abstraction, allows us to remove program interleavings not only when they are known to be invalid, but also when they cannot be verified using the given abstraction. We implemented a prototype of our approach using numerical abstractions and applied it to verify several example programs.
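
A drastically simplified rendition of the underlying goal (keep all interleavings that satisfy the specification, forbid the rest) is sketched below; it enumerates concrete rather than abstract interleavings of a made-up two-thread program, and merely reports which interleavings an inferred synchronization would have to forbid.

# Two made-up threads, each a list of named atomic steps over a shared store.
def t1_read(s):
    s["tmp"] = s["x"]          # t1 reads x into a local copy
def t1_write(s):
    s["x"] = s["tmp"] + 1      # ... and writes back x + 1
def t2_reset(s):
    s["x"] = 0                 # t2 resets x

t1 = [("t1:read", t1_read), ("t1:write", t1_write)]
t2 = [("t2:reset", t2_reset)]

def spec(s):
    """Assumed specification on the final state: the increment must be visible."""
    return s["x"] == 1

def interleavings(a, b):
    """All merges of the two step sequences that preserve per-thread order."""
    if not a:
        yield list(b)
        return
    if not b:
        yield list(a)
        return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

permit, forbid = [], []
for run in interleavings(t1, t2):
    state = {"x": 0, "tmp": 0}
    for _, step in run:
        step(state)
    (permit if spec(state) else forbid).append([name for name, _ in run])

print("interleavings to permit:", permit)
print("interleavings the inferred synchronization must forbid:", forbid)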

6.
We present a model-based approach to diagnosability analysis for interacting finite state systems where fault isolation is deferred until the system comes to a standstill. Local abstractions of the system model are used to alleviate the state space explosion. Pairs of closely coupled automata are merged and replaced by a single automaton with equivalent behavior as seen from the rest of the system; interaction between the merged automata is internalized and the new equivalent automaton is subsequently abstracted from internal behavior irrelevant to fault isolation. In moderately concurrent systems these steps can often be iterated until the system consists of a single automaton providing a compact encoding of all possible fault scenarios of the original model. We illustrate how the resulting abstraction can be used as a basis for post-mortem diagnosability analysis.

7.
There is a growing interest in efficient models of text mining and an emergent need for new data structures that address word relationships. Detailed knowledge about the taxonomic environment of keywords that are used in text documents can provide valuable insight into the nature of the subject matter contained therein. Such insight may be used to enhance the data structures used in the text data mining task as relationships become usefully apparent. A popular scalable technique used to infer these relationships, while reducing dimensionality, has been Latent Semantic Analysis. We present a new approach, which uses an ontology of lexical abstractions to create abstraction profiles of documents and uses these profiles to perform text organization based on a process that we call frequent abstraction analysis. We introduce TATOO, the Text Abstraction TOOlkit, which is a full implementation of this new approach. We present our data model via an example of how taxonomically derived abstractions can be used to supplement semantic data structures for the text classification task.
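
A toy rendition of the profile-building step might look as follows; the taxonomy, the documents, and the simple frequency profile are assumptions made for the illustration, not the TATOO data model.

from collections import Counter

# Made-up ontology of lexical abstractions: word -> more abstract concept.
TAXONOMY = {
    "terrier": "dog", "poodle": "dog", "dog": "animal",
    "sparrow": "bird", "bird": "animal",
    "sedan": "car", "car": "vehicle", "truck": "vehicle",
}

def abstractions(word):
    """Walk up the taxonomy and return every abstraction of a word."""
    result = []
    while word in TAXONOMY:
        word = TAXONOMY[word]
        result.append(word)
    return result

def abstraction_profile(text):
    """Frequency profile of the abstractions reachable from a document's words."""
    profile = Counter()
    for word in text.lower().split():
        for a in abstractions(word):
            profile[a] += 1
    return profile

docs = {
    "d1": "the terrier chased a sparrow",
    "d2": "a poodle and a dog played",
    "d3": "the sedan overtook a truck",
}
for name, text in docs.items():
    print(name, dict(abstraction_profile(text)))

Documents can then be organized by comparing their profiles (here d1 and d2 end up with similar 'animal'-dominated profiles), which loosely mirrors the frequent abstraction analysis described above.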

8.
We propose a new technique combining dynamic and static analysis of programs to find linear invariants. We use a statistical tool, called simple component analysis, to analyze partial execution traces of a given program. We get a new coordinate system in the vector space of program variables, which is used to specialize numerical abstract domains. As an application, we instantiate our technique to interval analysis of simple imperative programs and show some experimental evaluations.
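
The statistical step can be mimicked with an ordinary principal component analysis of a partial trace, used here only as a stand-in for simple component analysis; the traced toy loop and the variance threshold are assumptions made for the example, and the specialization of abstract domains to the new coordinate system is not shown.

import numpy as np

# Partial execution trace of a made-up loop in which y = 2*x + 1 always holds.
# Each row is one observed program state (x, y, z).
trace = np.array([[i, 2 * i + 1, (3 * i) % 7] for i in range(50)], dtype=float)

# Center the data and diagonalize its covariance matrix.
centered = trace - trace.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)

# Directions of (near-)zero variance are constant along the trace: the
# corresponding linear combination of variables is a candidate invariant.
names = ["x", "y", "z"]
for val, vec in zip(eigvals, eigvecs.T):
    if val < 1e-8:
        terms = " ".join(f"{c:+.3f}*{n}" for c, n in zip(vec, names))
        const = float(np.dot(trace.mean(axis=0), vec))
        print(f"candidate invariant: {terms} = {const:.3f}")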

9.
10.
We present a new technique for helping developers understand heap referencing properties of object-oriented programs and how the actions of the program affect these properties. Our dynamic analysis uses the aliasing properties of objects to synthesize a set of roles; each role represents an abstract object state intended to be of interest to the developer. We allow the developer to customize the analysis to explore the object states and behavior of the program at multiple different and potentially complementary levels of abstraction. The analysis uses roles as the basis for three abstractions: role transition diagrams, which present the observed transitions between roles and the methods responsible for the transitions; role relationship diagrams, which present the observed referencing relationships between objects playing different roles; and enhanced method interfaces, which present the observed roles of method parameters. Together, these abstractions provide useful information about important object and data structure properties and how the actions of the program affect these properties. We have implemented the role analysis and have used this implementation to explore the behavior of several Java programs. Our experience indicates that, when combined with a powerful graphical user interface, roles are a useful abstraction for helping developers explore and understand the behavior of object-oriented programs.
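
A heavily simplified sketch of the role idea is given below: the trace, the event encoding, and the definition of a role as just the set of incoming (object, field) references are assumptions made for the illustration, not the paper's richer role definition.

from collections import defaultdict

# A made-up trace of heap events: objects are created and fields are mutated.
events = [
    ("new", "n1"), ("new", "n2"), ("new", "list"),
    ("set", "list", "head", "n1"),      # list.head = n1
    ("set", "n1", "next", "n2"),        # n1.next  = n2
    ("set", "list", "head", "n2"),      # list.head = n2 (n1 is dropped)
]

def role_analysis(events):
    """Track each object's role (its set of incoming references) and record
    every role transition caused by a heap mutation."""
    refs = defaultdict(dict)                      # source object -> {field: target}
    role = defaultdict(frozenset)                 # object -> current role
    transitions = []                              # (object, old role, new role)

    def recompute(obj):
        incoming = frozenset((src, f) for src, fields in refs.items()
                             for f, tgt in fields.items() if tgt == obj)
        if incoming != role[obj]:
            transitions.append((obj, role[obj], incoming))
            role[obj] = incoming

    for ev in events:
        if ev[0] == "new":
            role[ev[1]] = frozenset()
        else:
            _, src, field, tgt = ev
            old = refs[src].get(field)
            refs[src][field] = tgt
            recompute(tgt)
            if old is not None:
                recompute(old)                    # the old target may change role too
    return transitions

for obj, old, new in role_analysis(events):
    print(f"{obj}: role {sorted(old)} -> {sorted(new)}")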

11.
Model checking and static analysis are traditionally seen as two separate approaches to software analysis and verification. In this work we define a model checking approach for the static analysis of large C/C++ source code bases to detect potential run-time issues such as program crashes, security vulnerabilities and memory leaks. Working on the intersection of software model checking and automated static bug detection for real-life systems, we address a number of issues: how to scale for real-life systems of 1,000,000 LoC or more, how to quickly write new checks, and most importantly how to distinguish between relevant and irrelevant bugs and fine-tune the analysis accordingly. We define our model checking-based static analysis approach implemented in our tool Goanna, illustrate a number of design and implementation decisions to obtain practical outcomes and relevant results, and present our findings through empirical data obtained from regularly analyzing large industrial and open source code bases such as the Firefox Web browser.
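
The flavour of such a check can be conveyed by a tiny sketch that encodes a function's control-flow graph as a labelled graph and searches it, automaton-style, for a path on which a pointer is used after being freed; the CFG, the event labels, and the single rule are made-up illustrations, not Goanna's actual checks.

# Control-flow graph of a made-up C function.  Each node carries the set of
# events relevant to the check: 'malloc', 'free', or 'use' of a pointer p.
CFG = {
    0: {"events": {"malloc"}, "succ": [1]},
    1: {"events": set(),      "succ": [2, 3]},
    2: {"events": {"free"},   "succ": [4]},
    3: {"events": {"use"},    "succ": [4]},
    4: {"events": {"use"},    "succ": []},     # reached after the free on one path
}

def use_after_free(cfg, entry=0):
    """Depth-first search over (node, freed?) pairs: report a path on which a
    'use' event follows a 'free' event."""
    stack = [(entry, False, [entry])]
    seen = set()
    while stack:
        node, freed, path = stack.pop()
        if (node, freed) in seen:
            continue
        seen.add((node, freed))
        events = cfg[node]["events"]
        if freed and "use" in events:
            return path                        # counterexample path
        freed = freed or "free" in events
        for nxt in cfg[node]["succ"]:
            stack.append((nxt, freed, path + [nxt]))
    return None

print("use-after-free path:", use_after_free(CFG))   # e.g. [0, 1, 2, 4]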

12.
We address the problem of error detection for programs that take recursive data structures and arrays as input. Previously we proposed a combination of symbolic execution and model checking for the analysis of such programs: we put a bound on the size of the program inputs and/or the search depth of the model checker to limit the search state space. Here we look beyond bounded model checking and consider state matching techniques to limit the state space. We describe a method for examining whether a symbolic state that arises during symbolic execution is subsumed by another symbolic state. Since the number of symbolic states may be infinite, subsumption is not enough to ensure termination. Therefore, we also consider abstraction techniques for computing and storing abstract states during symbolic execution. Subsumption checking determines whether an abstract state is being revisited, in which case the model checker backtracks—this enables analysis of an under-approximation of the program behaviors. We illustrate the technique with abstractions for lists and arrays. We also discuss abstractions for more general data structures. The abstractions encode both the shape of the program heap and the constraints on numeric data. We have implemented the techniques in the Java PathFinder tool and we show their effectiveness on Java programs. This paper is an extended version of Anand et al. (Proceedings of SPIN, pp. 163–181, 2006).

13.
Traditional static checking centers around finding bugs in programs by isolating cases where the language has been used incorrectly. These language‐based checkers do not understand the semantics of software libraries, and therefore cannot be used to detect errors in the use of libraries. In this paper, we introduce STLlint, a program analysis we have implemented for the C++ Standard Template Library and similar, generic software libraries, and we present the general approach that underlies STLlint. We show that static checking of library semantics differs greatly from checking of language semantics, requiring new representations of program behavior and new algorithms. Major challenges include checking the use of generic algorithms, loop analysis for interfaces, and organizing behavioral specifications for extensibility. Copyright © 2005 John Wiley & Sons, Ltd.
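
One trivial example of the kind of library-semantics rule such checkers encode is flagging the use of a vector iterator after a capacity-changing operation; the pre-extracted operation sequence and the single invalidation rule below are made-up illustrations (STLlint itself analyzes real C++ code against far richer behavioral specifications).

# A made-up, already-extracted sequence of STL operations in one function,
# each an (operation, object, result-or-none) triple.
ops = [
    ("begin",     "v", "it"),     # it = v.begin()
    ("push_back", "v", None),     # may reallocate and invalidate 'it'
    ("deref",     "it", None),    # *it -> use of a possibly invalidated iterator
]

INVALIDATING = {"push_back", "insert", "erase", "clear"}

def check_iterator_invalidation(ops):
    """Track which container each iterator came from and report a dereference
    that happens after an invalidating operation on that container."""
    source = {}                       # iterator name -> container it points into
    invalidated = set()               # iterators that may be dangling
    warnings = []
    for i, (op, obj, res) in enumerate(ops):
        if op == "begin":
            source[res] = obj
            invalidated.discard(res)
        elif op in INVALIDATING:
            for it, cont in source.items():
                if cont == obj:
                    invalidated.add(it)
        elif op == "deref" and obj in invalidated:
            warnings.append(f"op {i}: '{obj}' may be invalidated by a prior "
                            f"modification of '{source[obj]}'")
    return warnings

for w in check_iterator_invalidation(ops):
    print(w)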

14.
A strength of model checking is its ability to automate the detection of subtle system errors and produce traces that exhibit those errors. Given the high computational cost of model checking, most researchers advocate the use of aggressive property-preserving abstractions. Unfortunately, the more aggressively a system is abstracted the more infeasible behavior it will have. Thus, while abstraction enables efficient model checking it also threatens the usefulness of model checking as a defect detection tool, since it may be difficult to determine whether a counter-example is feasible and hence worth developer time to analyze. We have explored several strategies for addressing this problem by extending an explicit-state model checker, Java PathFinder (JPF), to search for and analyze counter-examples in the presence of abstractions. We demonstrate that these techniques effectively preserve the defect detection ability of model checking in the presence of aggressive abstraction by applying them to check properties of several abstracted multi-threaded Java programs. These new capabilities are not specific to JPF and can be easily adapted to other model checking frameworks; we describe how this was done for the Bandera toolset.

15.
Testing concurrent programs is a challenging problem due to interleaving explosion: even for a fixed set of inputs, there is a huge number of concurrent runs that need to be tested to account for scheduler behavior. Testing all possible schedules is not practical. Consequently, most effective testing algorithms only test a select subset of runs. For example, limiting testing to runs that contain data races or atomicity violations has been shown to capture a large proportion of concurrency bugs. In this paper we present a general approach to concurrent program testing that is based on techniques from artificial intelligence (AI) automated planning. We propose a framework for predicting concurrent program runs that violate a collection of generic correctness specifications for concurrent programs, namely runs that contain data races, atomicity violations, or null-pointer dereferences. Our prediction is based on observing an arbitrary run of the program, and using information collected from this run to model the behavior of the program, and to predict new runs that contain bugs with one of the above noted violation patterns. We characterize the problem of predicting such new runs as an AI sequential planning problem with the temporally extended goal of achieving a particular violation pattern. In contrast to many state-of-the-art approaches, in our approach feasibility of the predicted runs is guaranteed and, therefore, all generated runs are fully usable for testing. Moreover, our planning-based approach has the merit that it can easily accommodate a variety of violation patterns which serve as the selection criteria for guiding search in the state space of concurrent runs. This is achieved by simply modifying the planning goal. We have implemented our approach using state-of-the-art AI planning techniques and tested it within the Penelope concurrent program testing framework [35]. Nevertheless, the approach is general and is amenable to a variety of program testing frameworks. Our experiments with a benchmark suite showed that our approach is very fast and highly effective, finding all known bugs.
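
The prediction step can be illustrated, very roughly, by searching the space of reorderings of an observed run for a schedule in which two conflicting, unprotected accesses become adjacent, i.e. the goal condition of a data-race violation pattern. The sketch below omits the planning machinery and, unlike the approach above, does not guarantee feasibility of the predicted run; the observed events and the lockset-based conflict test are assumptions made for the example.

# An observed run of a made-up two-threaded program: each event records the
# thread, the access, the shared variable, and the locks held at that point.
observed = [
    {"tid": 1, "op": "write", "var": "x", "locks": set()},
    {"tid": 1, "op": "write", "var": "y", "locks": {"L"}},
    {"tid": 2, "op": "read",  "var": "y", "locks": {"L"}},
    {"tid": 2, "op": "read",  "var": "x", "locks": set()},
]

def conflicting(a, b):
    """Two accesses race if they are from different threads, touch the same
    variable, at least one writes, and they hold no common lock."""
    return (a["tid"] != b["tid"] and a["var"] == b["var"]
            and "write" in (a["op"], b["op"]) and not (a["locks"] & b["locks"]))

def interleavings(per_thread):
    """All schedules preserving each thread's program order (the search space)."""
    tids = [t for t, evs in per_thread.items() if evs]
    if not tids:
        yield []
        return
    for t in tids:
        rest = {k: (v[1:] if k == t else v) for k, v in per_thread.items()}
        for tail in interleavings(rest):
            yield [per_thread[t][0]] + tail

def predict_race(run):
    """Goal test of the 'planner': find a predicted schedule in which two
    conflicting, unprotected accesses are adjacent."""
    per_thread = {}
    for e in run:
        per_thread.setdefault(e["tid"], []).append(e)
    for schedule in interleavings(per_thread):
        for a, b in zip(schedule, schedule[1:]):
            if conflicting(a, b):
                return schedule, (a, b)
    return None, None

schedule, race = predict_race(observed)
if race:
    print("predicted racy schedule found; racing accesses:", race[0], race[1])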

16.
Bug busters     
Spinellis, D. IEEE Software, 2006, 23(2): 92-93
One way to deal with bugs is to avoid them entirely. That approach, however, would be wasteful, because we'd be underutilizing the many automated tools and techniques that can catch bugs for us. Most tools for eliminating bugs work by tightening the specifications of what we build. At the program code level, tighter specifications affect the operations allowed on various data types, our program's behavior, and our code's style. Furthermore, we can use many different approaches to verify that our code is on track: the programming language, its compiler, specialized tools, libraries, and embedded tests are our most obvious friends. We can delegate bug busting to code. Many libraries come with hooks or specialized builds that can catch questionable argument values, resource leaks, and wrong ordering of function calls. Bugs may be a fact of life, but they're not inevitable. We have some powerful tools to find them before they mess with our programs, and the good news is that these tools get better every year.

17.
Testability transformation based on equivalence of target statements
The application of genetic algorithms in automatically generating test data has become a research hotspot and produced many results in recent years. However, its applicability is limited in the presence of flag variables. This issue, known as the flag problem, has been studied by many researchers to date. We propose a novel method of testability transformation to tackle the flag problem. Different from traditional transformation methods, in our method the key step is not the transformation of source code, but that of target statements. We search for a new target statement (or a set of target statements) equivalent to the original one and then transform the problem of generating test data that cover the original target statement into that of generating test data that cover the new target statement (or set of target statements). We apply our method to many real-world programs, and the experimental results show its effectiveness.
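
A minimal sketch of the flag problem, and of why targeting an equivalent statement helps a search-based generator, is shown below; the function under test, the branch-distance fitness, and the simple hill climber (standing in for the genetic algorithm) are assumptions made for the illustration.

START = 5000                   # arbitrary starting input for the local search
TARGET_VALUE = 361             # made-up "magic" value the guarded branch depends on

def flag_fitness(x):
    """Fitness w.r.t. the original target: the guard is a boolean flag, so the
    landscape is flat (0 when the branch is covered, 1 otherwise)."""
    flag = (x == TARGET_VALUE)
    return 0 if flag else 1

def transformed_fitness(x):
    """Fitness w.r.t. an equivalent target: the numeric condition that defines
    the flag, measured as a branch distance that does provide a gradient."""
    return abs(x - TARGET_VALUE)

def hill_climb(fitness, start, steps=100000):
    """A tiny stand-in for the search algorithm; it can only make progress if
    the fitness function gives guidance."""
    x, f = start, fitness(start)
    for _ in range(steps):
        moves = [n for n in (x - 1, x + 1) if fitness(n) < f]
        if f == 0 or not moves:
            break
        x = moves[0]
        f = fitness(x)
    return x, f

print("search on flag fitness:       ", hill_climb(flag_fitness, START))        # stuck at the start
print("search on transformed fitness:", hill_climb(transformed_fitness, START)) # reaches 361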

18.
The object-oriented paradigm in software engineering provides support for the construction of modular and reusable program components and is attractive for the design of large and complex distributed systems. Reachability analysis is an important and well-known tool for static analysis of critical properties in concurrent programs, such as deadlock freedom. It involves the systematic enumeration of all possible global states of program execution and provides a level of assurance for properties of the synchronization structure in concurrent programs comparable to that of formal verification. However, direct application of traditional reachability analysis to concurrent object-oriented programs has many problems, such as incomplete analysis for reusable classes (not safe) and increased computational complexity (not efficient). We propose a novel technique called apportioning for safe and efficient reachability analysis of concurrent object-oriented programs, based upon a simple but powerful idea of classifying program analysis points as local (having influence within a class) and global (having possible influence outside a class). We have developed a number of apportioning-based algorithms, having different degrees of safety and efficiency. We present the details of one of these algorithms, formally show its safety for an appropriate class of programs, and present experimental results to demonstrate its efficiency for various examples.

19.
Predicate abstraction refinement is one of the leading approaches to software verification. The key idea is to abstract the input program into a Boolean Program (i.e. a program whose variables range over the Boolean values only and model the truth values of predicates corresponding to properties of the program state), and refinement searches for new predicates in order to build a new, more refined abstraction. Thus Boolean programs are commonly employed as a simple, yet useful abstraction. However, the effectiveness of predicate abstraction refinement on programs that involve a tight interplay between data-flow and control-flow is still to be ascertained. We present a novel counterexample guided abstraction refinement procedure for Linear Programs with arrays, a fragment of the C programming language where variables and array elements range over a numeric domain and expressions involve linear combinations of variables and array elements. In our procedure the input program is abstracted w.r.t. a family of sets of array indices, the abstraction is a Linear Program (without arrays), and refinement searches for new array indices. We use Linear Programs as the target of the abstraction (instead of Boolean programs) as they make it possible to express complex correlations between data and control. Thus, unlike the approaches based on predicate abstraction, our approach treats arrays precisely. This is an important feature as arrays are ubiquitous in programming. We provide a precise account of the abstraction, Model Checking, and refinement processes, discuss their implementation in the EUREKA tool, and present a detailed analysis of the experimental results confirming the effectiveness of our approach on a number of programs of interest.

20.
Strings are widely used in modern programming languages in various scenarios. For instance, strings are used to build up Structured Query Language (SQL) queries that are then executed. Malformed strings may lead to subtle bugs, and non-sanitized strings may raise security issues in an application. For these reasons, the application of static analysis to compute safety properties over string values at compile time is particularly appealing. In this article, we propose a generic approach for the static analysis of string values based on abstract interpretation. In particular, we design a suite of abstract semantics for strings, where each abstract domain tracks a different kind of information. We discuss the trade-off between efficiency and accuracy when using such domains to catch the properties of interest. In this way, the analysis can be tuned at different levels of precision and efficiency, and it can address specific properties. Copyright © 2013 John Wiley & Sons, Ltd.
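
As a concrete miniature of one such abstract semantics, the sketch below implements a simple character-inclusion domain (an assumption made for the illustration, not necessarily one of the paper's domains): each abstract string records the characters it certainly contains and those it may contain, concatenation and join are defined on these sets, and the analysis can then establish, for example, that a built-up query can never contain a single-quote character.

from dataclasses import dataclass

@dataclass(frozen=True)
class AbsString:
    """Character-inclusion abstraction: 'must' characters certainly occur in
    the concrete string, 'may' characters are the only ones that can occur."""
    must: frozenset
    may: frozenset

def alpha(s):
    """Abstraction of a single concrete string."""
    chars = frozenset(s)
    return AbsString(chars, chars)

def concat(a, b):
    """Abstract concatenation: certain and possible characters both accumulate."""
    return AbsString(a.must | b.must, a.may | b.may)

def join(a, b):
    """Least upper bound: only characters common to both branches are certain."""
    return AbsString(a.must & b.must, a.may | b.may)

def may_contain(a, ch):
    return ch in a.may

# A made-up query builder analysed abstractly: the user part comes from either
# of two sanitized literals and is then concatenated into a SQL-like template.
user = join(alpha("alice"), alpha("bob"))
query = concat(concat(alpha("SELECT * FROM t WHERE name = "), alpha('"')),
               concat(user, alpha('"')))

print("query may contain a single quote:", may_contain(query, "'"))   # False -> safe here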
