Similar Documents
20 similar documents found (search time: 904 ms)
1.
Code protection technologies require anti-reverse-engineering transformations that obfuscate programs in such a way that tools and methods for program analysis become ineffective. We introduce the concept of model deformation, which induces an effective code obfuscation against attacks performed by abstract model checking. The idea is to complicate the model so that a large number of spurious traces is generated in any formal verification of the property to be disclosed about the system under attack. We transform the program model so as to make the removal of spurious counterexamples by abstraction refinement maximally inefficient. Because our approach is intended to defeat the fundamental abstraction-refinement strategy, it is independent of the specific attack carried out by abstract model checking. A measure of the quality of the obfuscation obtained by model deformation is given, together with a corresponding best obfuscation strategy for abstract model checking based on partition refinement.
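A minimal sketch (not taken from the paper) of the underlying phenomenon: under a partition-based abstraction, an abstract path may exist even though no concrete execution refines it, and it is exactly such spurious traces that a deformed model multiplies. All states, transitions, and helper names below are hypothetical.

```python
# Sketch: spurious abstract counterexamples under a partition-based abstraction.
# Concrete transition system: states 0..3, transitions as a relation.
concrete_states = {0, 1, 2, 3}
transitions = {(0, 1), (1, 2), (2, 3)}          # a single concrete chain
partition = {0: "A", 1: "A", 2: "B", 3: "B"}    # abstraction: merge {0,1} and {2,3}

def abstract_edges():
    """Existential abstraction: an abstract edge exists if some concrete edge crosses it."""
    return {(partition[s], partition[t]) for (s, t) in transitions}

def concretizable(abstract_path):
    """Check whether some concrete path refines the abstract path (forward image computation)."""
    frontier = {s for s in concrete_states if partition[s] == abstract_path[0]}
    for block in abstract_path[1:]:
        frontier = {t for (s, t) in transitions
                    if s in frontier and partition[t] == block}
        if not frontier:
            return False
    return True

print(abstract_edges())                  # {('A','A'), ('A','B'), ('B','B')}
print(concretizable(["A", "B", "B"]))    # True: refined by 1 -> 2 -> 3
print(concretizable(["A", "A", "A"]))    # False: spurious, no concrete path stays in A
```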

2.
We enhance the narrowing-driven partial evaluation scheme for lazy functional logic programs with the computation of symbolic costs. The enhanced scheme allows us to estimate the effects of the program transformer in a precise framework and, moreover, to quantify these effects. The considered costs are “symbolic” in the sense that they measure the number of basic operations performed during a computation rather than actual execution times. Our scheme may serve as a basis to develop speedup analyses and cost-guided transformers. A cost-augmented partial evaluator, which demonstrates the usefulness of our approach, has been implemented in the multi-paradigm language Curry.
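As a rough illustration of what a symbolic cost measure is, the sketch below counts basic reduction steps performed by a tiny expression evaluator instead of measuring execution time; it is not the paper's Curry implementation, and all names are hypothetical.

```python
# Sketch: a symbolic cost measure that counts basic evaluation steps
# rather than wall-clock time.

def evaluate(expr, steps=None):
    """Evaluate nested tuples like ('+', 1, ('*', 2, 3)); count one step per operation."""
    if steps is None:
        steps = {"ops": 0}
    if isinstance(expr, (int, float)):
        return expr, steps
    op, lhs, rhs = expr
    left, _ = evaluate(lhs, steps)
    right, _ = evaluate(rhs, steps)
    steps["ops"] += 1                      # one basic operation applied
    result = {"+": left + right, "*": left * right}[op]
    return result, steps

original = ("+", ("*", 2, 3), ("*", 2, 3))     # evaluates '*' twice
specialized = ("+", 6, 6)                      # a partial evaluator could precompute 2*3

print(evaluate(original))     # (12, {'ops': 3})
print(evaluate(specialized))  # (12, {'ops': 1})  -> symbolic cost drops from 3 ops to 1
```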

3.
This paper presents algorithms for program abstraction based on the principle of loop summarization, which, unlike traditional program approximation approaches (e.g., abstract interpretation), does not employ iterative fixpoint computation, but instead computes symbolic abstract transformers with respect to a set of abstract domains. This allows for an effective exploitation of problem-specific abstract domains for summarization and, as a consequence, the precision of an abstract model may be tailored to specific verification needs. Furthermore, we extend the concept of loop summarization to incorporate relational abstract domains to enable the discovery of transition invariants, which are subsequently used to prove termination of programs. Well-foundedness of the discovered transition invariants is ensured either by a separate decision procedure call or by using abstract domains that are well-founded by construction. We experimentally evaluate several abstract domains related to memory operations to detect buffer overflow problems. Also, our light-weight termination analysis is demonstrated to be effective on a wide range of benchmarks, including OS device drivers.
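The following toy sketch illustrates the summarization idea in the simplest possible setting, and is not the paper's implementation: candidate invariants drawn from an abstract domain are checked once for inductiveness over the loop body (here by brute force instead of a decision-procedure call), and the surviving candidates form the loop's summary. The loop, candidates, and helper names are hypothetical.

```python
# Sketch: loop summarization by one-shot inductiveness checks of candidate invariants,
# instead of iterative fixpoint computation.

# Loop under analysis (hypothetical):   while i < n: i += 1; x += 2
def body(state):
    i, n, x = state
    return (i + 1, n, x + 2)

def guard(state):
    i, n, _ = state
    return i < n

candidates = {
    "i <= n":     lambda s: s[0] <= s[1],
    "x == 2 * i": lambda s: s[2] == 2 * s[0],
    "x < n":      lambda s: s[2] < s[1],
}

def inductive(candidate, samples):
    """A candidate is inductive if one loop iteration preserves it on every sampled state
    satisfying both the candidate and the loop guard (a brute-force stand-in for a
    decision-procedure call)."""
    return all(candidate(body(s)) for s in samples if candidate(s) and guard(s))

samples = [(i, n, x) for i in range(-3, 6) for n in range(-3, 6) for x in range(-3, 12)]
summary = [name for name, c in candidates.items() if inductive(c, samples)]
print(summary)   # ['i <= n', 'x == 2 * i']  -- 'x < n' is not preserved and is dropped
```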

4.
We present a novel framework for automatic inference of efficient synchronization in concurrent programs, a task known to be difficult and error-prone when done manually. Our framework is based on abstract interpretation and can infer synchronization for infinite state programs. Given a program, a specification, and an abstraction, we infer synchronization that avoids all (abstract) interleavings that may violate the specification, but permits as many valid interleavings as possible. Combined with abstraction refinement, our framework can be viewed as a new approach for verification where both the program and the abstraction can be modified on-the-fly during the verification process. The ability to modify the program, and not only the abstraction, allows us to remove program interleavings not only when they are known to be invalid, but also when they cannot be verified using the given abstraction. We implemented a prototype of our approach using numerical abstractions and applied it to verify several example programs.

5.
Invariant-based programming is an approach in which we start constructing a program by first identifying the basic situations (pre- and post-conditions as well as invariants) that could arise during the execution of the algorithm. These situations are identified before any code is written. After that, we identify the transitions between the situations, which give us the flow of control in the program. Data refinement is a technique for building correct programs working on concrete data structures as refinements of more abstract programs working on abstract data types. In this paper we study data refinement for invariant-based programs and apply it to the construction of the classical Deutsch–Schorr–Waite graph marking algorithm. Our results are formalized and mechanically proved in the Isabelle/HOL theorem prover.
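As a small illustration of the invariant-first workflow (not taken from the paper, and much simpler than graph marking), the sketch below writes down the precondition, loop invariant, and postcondition of a summation routine as executable assertions before and alongside the loop body.

```python
# Sketch: stating pre-/post-conditions and the loop invariant before (and alongside) the code.

def sum_prefix(a):
    """Return a[0] + ... + a[len(a)-1]."""
    assert isinstance(a, list)                      # precondition
    total, k = 0, 0
    while k < len(a):
        assert total == sum(a[:k])                  # loop invariant, identified before coding
        total += a[k]
        k += 1
    assert k == len(a) and total == sum(a)          # postcondition
    return total

print(sum_prefix([3, 1, 4, 1, 5]))   # 14
```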

6.
We present a generic scheme for the declarative debugging of programs written in rewriting-based languages equipped with narrowing. Our aim is to provide an integrated development environment in which it is possible to debug a program and then correct it automatically. Our methodology is based on the combination (in a single framework) of a semantics-based diagnoser that identifies those parts of the code that contain errors and an inductive learner that tries to repair them once the bugs have been located in the program. We develop our methodology in several steps. First, we associate with our programs a semantics that is based on a (continuous) immediate consequence operator, T_R, which models the answers computed by narrowing and is parametric w.r.t. the evaluation strategy, which can be eager or lazy. Then, we show that, given the intended specification of a program R, it is possible to check the correctness of R by a single step of T_R. In order to develop an effective debugging method, we approximate the computed-answers semantics of R and derive a finitely terminating bottom-up abstract diagnosis method, which can be used statically. Finally, a bug-correction program synthesis methodology attempts to correct the erroneous components of the wrong code. We propose a hybrid correction approach, top-down (unfolding-based) as well as bottom-up (induction-based), that is driven by a set of evidence examples which are automatically produced as an outcome by the diagnoser. The resulting program is proven to be correct and complete w.r.t. the considered example sets. Our debugging framework does not require the user to provide error symptoms in advance or to answer difficult questions concerning program correctness. An implementation of our debugging system has been undertaken, which demonstrates the workability of our approach.
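A minimal, purely propositional sketch of diagnosis with an immediate consequence operator, far simpler than the paper's narrowing-based semantics: apply the operator once to the intended model and report every rule whose conclusion falls outside it. The example program, specification, and helper names are hypothetical.

```python
# Sketch: one-step diagnosis with an immediate-consequence operator T_R
# over propositional Horn rules (head <- body1, ..., bodyN).

rules = [
    ("path_ab", {"edge_ab"}),
    ("path_ac", {"edge_ab", "edge_bc"}),
    ("path_ad", {"edge_ab"}),            # buggy rule: concludes path_ad too eagerly
]

def t_r(interpretation):
    """Immediate consequences: heads of rules whose bodies hold in the interpretation."""
    return {head for head, body in rules if body <= interpretation}

# Intended model supplied by the user (facts plus intended conclusions).
intended = {"edge_ab", "edge_bc", "path_ab", "path_ac"}

# Correctness check by a single T_R step: any derived atom outside the intended model
# points at an incorrect rule.
wrong = [(head, body) for head, body in rules
         if body <= intended and head not in intended]
print(t_r(intended))   # {'path_ab', 'path_ac', 'path_ad'}
print(wrong)           # [('path_ad', {'edge_ab'})]  -- the rule to repair
```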

7.
A logical system of inference rules intended to give the foundation of logic programs is presented. The distinguishing point of the approach taken here is the application of the theory of inductive definitions, which allows us to treat various kinds of induction schema uniformly and also to regard negation as failure as a kind of induction schema. This approach corresponds to the so-called least fixpoint semantics. Moreover, in our formalism, logic programs are extended so that a condition of a clause may be any first-order formula. This makes it possible to write a quantified specification as a logic program. It also makes the class of induction schemata much larger, so as to include the usual course-of-values inductions.

8.
This paper presents a new programming methodology for introducing and tuning parallelism in Erlang programs, using source-level code refactoring from sequential source programs to parallel programs written using our skeleton library, Skel. High-level cost models allow us to predict with reasonable accuracy the parallel performance of the refactored program, enabling programmers to make informed decisions about which refactorings to apply. Using our approach, we demonstrate easily obtainable, significant and scalable speedups of up to 21 on a 24-core machine over the sequential code.
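The sketch below shows how such a cost model can be consulted before refactoring; the formula used (task-farm cost = sequential work divided across workers plus a per-task overhead) is a generic illustration, not the Skel library's actual model, and all numbers are hypothetical.

```python
# Sketch: using a simple cost model to predict whether a parallel refactoring pays off.

def farm_runtime(task_cost_ms, n_tasks, workers, overhead_ms_per_task=0.1):
    """Predicted runtime of a task farm with the given number of workers."""
    chunks = -(-n_tasks // workers)                 # ceiling division: tasks per worker
    return chunks * task_cost_ms + n_tasks * overhead_ms_per_task

sequential = 1000 * 5.0                              # 1000 tasks at 5 ms each
for workers in (1, 4, 12, 24):
    parallel = farm_runtime(5.0, 1000, workers)
    print(workers, round(sequential / parallel, 1))  # predicted speedup per worker count
```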

9.
We propose a new technique combining dynamic and static analysis of programs to find linear invariants. We use a statistical tool, called simple component analysis, to analyze partial execution traces of a given program. We get a new coordinate system in the vector space of program variables, which is used to specialize numerical abstract domains. As an application, we instantiate our technique to interval analysis of simple imperative programs and show some experimental evaluations.
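A small sketch of the pipeline under stated assumptions: partial traces are rotated into a new coordinate system (here with an ordinary principal-component decomposition via numpy, standing in for the paper's simple component analysis), and a plain interval abstraction is then applied in the rotated coordinates, where it captures a linear relation much more tightly.

```python
# Sketch: derive a better coordinate system from partial execution traces, then run a
# plain interval analysis in the rotated coordinates.
import numpy as np

traces = np.array([[i, 2 * i + 1] for i in range(20)], dtype=float)  # sampled (x, y) pairs

centered = traces - traces.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)   # rows of vt: new coordinate axes
rotated = centered @ vt.T

# Interval abstraction in each coordinate system: the rotated box is nearly flat in the
# second coordinate, capturing the linear relation y = 2x + 1.
print("original box:", list(zip(traces.min(axis=0), traces.max(axis=0))))
print("rotated box: ", list(zip(rotated.min(axis=0).round(3), rotated.max(axis=0).round(3))))
```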

10.
Relational program reasoning is concerned with formally comparing pairs of executions of programs. Prominent examples of relational reasoning are program equivalence checking (which considers executions from different programs) and detecting illicit information flow (which considers two executions of the same program). The abstract logical foundations of relational reasoning are, by now, sufficiently well understood. In this paper, we address some of the challenges that remain to make the reasoning practicable. Two major ones are dealing with the feature richness of programming languages such as C and with the weakly structured control flow that many real-world programs exhibit. A popular approach to control this complexity is to define the analyses on the level of an intermediate program representation (IR) such as one generated by modern compilers. In this paper we describe the ideas and insights behind IR-based relational verification. We present a program equivalence checker for C programs that operates on LLVM IR. To extend the reach of the approach and to make it more efficient, we show how dynamic analyses can be employed to support and strengthen the static verification. The effectiveness of the approach is demonstrated by automatically verifying equivalence of functions from different implementations of the standard C library.
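The IR-level static verification itself is not reproduced here; the sketch below only illustrates the kind of dynamic analysis that can support it, namely differential testing of two implementations of the same specification on shared random inputs. The two functions are hypothetical Python stand-ins for C library variants.

```python
# Sketch: a dynamic analysis supporting equivalence checking -- differential testing of
# two implementations of the same specification.
import random

def memchr_a(buf, byte):
    for i, b in enumerate(buf):
        if b == byte:
            return i
    return -1

def memchr_b(buf, byte):
    try:
        return buf.index(byte)
    except ValueError:
        return -1

random.seed(0)
for _ in range(10_000):
    buf = bytes(random.randrange(256) for _ in range(random.randrange(16)))
    byte = random.randrange(256)
    assert memchr_a(buf, byte) == memchr_b(buf, byte), (buf, byte)
print("no divergence found on sampled inputs")
```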

11.
To protect valuable assets embedded in software against reverse-engineering attacks, software obfuscations aim at raising the apparent complexity of programs and at removing information that is useful for attackers. In this work, we propose to combine five transformations that obfuscate the type hierarchy of Java applications and eliminate much of the type information that can be inferred from the Java bytecode. We rely on some existing algorithms, present adaptations, and introduce new algorithms for some of the transformations, which are all made available in an open-source prototype implementation ready for take-up. We present an extensive experimental evaluation on benchmarks of real-world complexity, using complementary metrics that cover the protection strength against both human and tool-based reverse-engineering attack methods. The results indicate that the obfuscation is effective as well as much more efficient than the previous state of the art. For the first time, this makes these obfuscations practically viable in real-world deployment scenarios.

12.
Models have been widely used in the information system development process. Models are not just a means for system analysis and documentation; they may also be transformed into a system implementation, primarily program code. Generated program code for screen forms and transaction programs mainly implements generic functionalities that can be expressed by simple retrieval, insertion, update, or deletion operations over database records. Besides the program code of generic functionalities, each application usually includes program code for specific business logic that represents application-specific functionalities, which may include complex calculations as well as a series of database operations. There is a lack of domain-specific and tool-supported techniques for the specification of such application-specific functionalities at the level of platform-independent models (PIMs). In this paper, we propose an approach and a domain-specific language (DSL), named IISCFuncLang, aimed at enabling a complete specification of application-specific functionalities at the PIM level. We have developed algorithms for the transformation of IISCFuncLang specifications into executable program code, such as PL/SQL program code. In order to support the specification of application-specific functionalities using IISCFuncLang, we have also developed appropriate tree-based and textual editors. The language, editors, and transformations are embedded into a Model-Driven Software Development tool, named Integrated Information Systems CASE (IISCase). IISCase supports platform-independent design and automated prototyping of information systems, which allows us to verify and test our approach in practice.
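A toy sketch of the model-to-code idea: a platform-independent functionality specification is instantiated into PL/SQL-like text by a template. The specification format and the generated code are generic illustrations, not the IISCFuncLang language or the IISCase generator.

```python
# Sketch: transforming a platform-independent functionality specification into executable
# code by template instantiation.

spec = {
    "name": "apply_discount",
    "table": "orders",
    "set": {"total": "total * 0.9"},
    "where": "customer_id = p_customer_id",
    "params": ["p_customer_id NUMBER"],
}

def to_plsql(s):
    assignments = ", ".join(f"{col} = {expr}" for col, expr in s["set"].items())
    params = ", ".join(s["params"])
    return (
        f"CREATE OR REPLACE PROCEDURE {s['name']}({params}) IS\n"
        f"BEGIN\n"
        f"  UPDATE {s['table']} SET {assignments} WHERE {s['where']};\n"
        f"END {s['name']};"
    )

print(to_plsql(spec))
```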

13.
Modern obfuscation techniques are intended to discourage reverse engineering and malicious tampering of software programs. We study control-flow obfuscation, which works by modifying the control flow of the program to be obfuscated, and observe that it is difficult to evaluate the robustness of these obfuscation techniques. In this paper, we present a framework for quantitative analysis of control-flow obfuscating transformations. Our framework is based upon the control-flow graph of the program, and we show that many existing control-flow obfuscation techniques can be expressed as a sequence of basic transformations on these graphs. We also propose a new measure of the difficulty of reversing these obfuscated programs, and we show that our framework can be used to easily evaluate the space penalty due to the transformations.
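A minimal sketch of this viewpoint, with hypothetical names: the control-flow graph is an explicit data structure, an obfuscation is a transformation on it (here, inserting a never-taken branch guarded by an opaque predicate), and its effect can be read off a simple graph metric. The cyclomatic number used below is only illustrative, not the paper's proposed difficulty measure.

```python
# Sketch: a control-flow obfuscation expressed as a graph transformation, with a crude
# size/complexity measure.

cfg = {"entry": ["check"], "check": ["work"], "work": ["exit"], "exit": []}

def cyclomatic(graph):
    edges = sum(len(succs) for succs in graph.values())
    return edges - len(graph) + 2

def insert_opaque_branch(graph, node, bogus):
    """Add a never-taken successor guarded by an opaquely false predicate."""
    new = dict(graph)
    new[node] = graph[node] + [bogus]
    new[bogus] = ["exit"]                  # dead-code target, re-joining the real flow
    return new

obfuscated = insert_opaque_branch(cfg, "check", "decoy")
print(cyclomatic(cfg), cyclomatic(obfuscated))   # 1 2 -> apparent complexity increased
print(len(obfuscated) - len(cfg))                # 1   -> space penalty: one extra node
```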

14.
Barak et al. gave a first formalization of obfuscation, describing an obfuscator as an efficient, probabilistic “compiler” that takes as input a program P and produces a new program that has the same functionality as P but is unintelligible. This means that any result an obfuscated program can compute is actually computable given only input/output access (called oracle access) to the program P: we call such results trivial results. On the basis of this informal definition, they suggest a formal definition of obfuscation based on oracle access to programs and show that no obfuscator can exist according to this definition. They also try to relax the definition and show that, even when restricted to some common classes of programs, no obfuscator exists. In this work, we show that their definition is too restrictive and lacks a fundamental property, which we formalize by the notion of oracle programs. Oracle programs are an abstract notion that basically refers to perfectly obfuscated programs. We suggest a new definition of obfuscation based on these oracle programs and show that such obfuscators do not exist either. Considering the actual implementations of “obfuscators”, we define a new kind of obfuscator, the τ-obfuscator: an obfuscator that hides non-trivial results at least for time τ. By restricting the τ-requirement to deobfuscation (that is, outputting an intelligible program when fed an obfuscated program as input), we show that such obfuscators do exist. Practical τ-obfuscation methods are presented at the end of this paper; we focus more specifically on code protection techniques in a malware context. Based on the fact that a malware may fulfill its action in an amount of time far larger than the analysis time of any automated detection program, these obfuscation methods can be considered efficient enough to greatly thwart automated analysis and hold any antivirus software in check. This research was conducted while on stay at the Laboratoire de virologie et de cryptologie.

15.
Program verification is usually done by adding specifications and invariants to the program and then proving that the verification conditions are all true. This makes program verification an alternative, or a complement, to testing. We describe here another approach to program construction, which we refer to as invariant-based programming, where we start by formulating the specifications and the internal loop invariants for the program before we write the program code itself. The correctness of the code is then easy to check at the same time as one is constructing it. In this approach, program verification becomes a complement to coding rather than to testing. The purpose is to produce programs and software that are correct by construction. We present a new kind of diagram, nested invariant diagrams, in which program specifications and invariants (rather than the control) provide the main organizing structure. Nesting of invariants provides an extension hierarchy that allows us to express the invariants in a very compact manner. We have studied the feasibility of formulating specifications and loop invariants before the code itself has been written in a number of case studies. Our experience is that a systematic use of figures, in combination with a rough idea of the intended behavior of the algorithm, makes it rather straightforward to formulate the invariants needed for the program, to construct the code around these invariants, and to check that the resulting program is indeed correct. We describe our experiences from using invariant-based programming in practice, both from teaching programmers how to construct programs that they prove correct themselves, and from teaching invariant-based programming to CS students in class.

16.
Knowledge-based programs
Summary. Reasoning about activities in a distributed computer system at the level of the knowledge of individuals and groups allows us to abstract away from many concrete details of the system we are considering. In this paper, we make use of two notions introduced in our recent book to facilitate designing and reasoning about systems in terms of knowledge. The first notion is that of a knowledge-based program. A knowledge-based program is a syntactic object: a program with tests for knowledge. The second notion is that of a context, which captures the setting in which a program is to be executed. In a given context, a standard program (one without tests for knowledge) is represented by (i.e., corresponds in a precise sense to) a unique system. A knowledge-based program, on the other hand, may be represented by no system, one system, or many systems. In this paper, we provide a sufficient condition for a knowledge-based program to be represented in a unique way in a given context. This condition applies to many cases of interest, and covers many of the knowledge-based programs considered in the literature. We also completely characterize the complexity of determining whether a given knowledge-based program has a unique representation, or any representation at all, in a given finite-state context.
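A toy reading of a test for knowledge (not the book's formal semantics): an agent knows a fact if the fact holds in every global state consistent with the agent's local observation, and the program branches on that test. The states, agents, and fact below are hypothetical.

```python
# Sketch: a program with a test for knowledge. An agent "knows" a fact if the fact holds
# in every global state consistent with its local observation.

# Global states: (bit held by agent 0, bit held by agent 1).
global_states = [(0, 0), (0, 1), (1, 0), (1, 1)]

def knows(agent, observation, fact):
    """True iff `fact` holds in all global states where `agent`'s local component
    equals its observation."""
    consistent = [s for s in global_states if s[agent] == observation]
    return all(fact(s) for s in consistent)

# Knowledge-based program fragment for agent 0:
#   if agent 0 knows "some bit is 1" then send "done" else wait
def step(observation):
    if knows(0, observation, lambda s: s[0] == 1 or s[1] == 1):
        return "send done"
    return "wait"

print(step(1))   # 'send done' -- with its own bit 1, the fact holds in every consistent state
print(step(0))   # 'wait'      -- the other agent's bit might still be 0
```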

17.
SystemJ is a programming language based on the Globally Asynchronous Locally Synchronous (GALS) Model of Computation (MoC) used to design safety-critical hard real-time systems. SystemJ uses the Java programming language as the “host” language for carrying out data computations, because Java provides clearly defined operational semantics and type and memory safety in the form of the Garbage Collector (GC), which help with formal functional verification. The same GC, which helps in functional verification, makes Worst Case Reaction Time (WCRT) analysis challenging. Any WCRT analysis framework for GALS programs needs to consider the operations performed by the host language. It has been shown that worst-case time estimates for garbage collection cycles are in seconds, whereas the program’s WCRT itself is in microseconds. These pessimistic estimates render the WCRT analysis framework ineffective. In order to overcome this problem, we develop a compiler-assisted memory management technique for applications written in SystemJ. The SystemJ MoC plays the central role in the proposed technique: it allows the state boundaries of the program to be clearly demarcated, which in turn allows us to partition the heap, at compile time, into two distinct areas: (1) the memory area called the permanent heap, which holds objects that are alive throughout the lifetime of the application, and (2) the memory area used to hold all other objects, called the transient heap. The sizes of these memory areas are bounded statically. Furthermore, the memory allocation and reclaim procedures are simple load and pointer-reset operations, respectively, which are guaranteed to complete within a bounded number of clock cycles, thereby alleviating the need for the large pessimistic WCRT bounds caused by the GC. Experimental results also show that the proposed approach is approximately three times faster, in terms of memory allocation time, than standard real-time GC approaches.
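A sketch of the allocation discipline the abstract describes, written in Python rather than at the level of the SystemJ compiler: the transient heap is a statically bounded buffer, allocation is a constant-time bump of an offset, and reclamation at the end of a reaction is a single pointer reset. All names and sizes are hypothetical.

```python
# Sketch: a transient heap with bump-pointer allocation and pointer-reset reclamation.

class TransientHeap:
    def __init__(self, size_bytes):
        self.buffer = bytearray(size_bytes)   # bound fixed ahead of time ("statically")
        self.top = 0

    def alloc(self, n):
        """Bump allocation: O(1), no search, no collection pause."""
        if self.top + n > len(self.buffer):
            raise MemoryError("transient heap bound exceeded")
        offset = self.top
        self.top += n
        return offset                          # handle into the buffer

    def reset(self):
        """Reclaim everything allocated in the last tick by resetting one pointer."""
        self.top = 0

heap = TransientHeap(1024)
for _tick in range(3):                         # one reaction ("tick") of the program
    a = heap.alloc(128)
    b = heap.alloc(256)
    # ... use objects at offsets a and b during this tick ...
    heap.reset()                               # all transient objects gone, no GC needed
print("allocations per tick bounded by", len(heap.buffer), "bytes")
```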

18.
Lock-freedom is a property of concurrent programs which states that, from any state of the program, eventually some process will complete its operation. Lock-freedom is a weaker property than the usual expectation that eventually all processes will complete their operations. By weakening their completion guarantees, lock-free programs increase the potential for parallelism, and hence make more efficient use of multiprocessor architectures than lock-based algorithms. However, lock-free algorithms, and reasoning about them, are considerably more complex. In this paper we present a technique for proving that a program is lock-free. The technique is designed to be as general as possible and is guided by heuristics that simplify the proofs. We demonstrate our theory by proving lock-freedom of two non-trivial examples from the literature. The proofs have been machine-checked by the PVS theorem prover, and we have developed proof strategies to minimise user interaction.

19.
Hierarchical control system design using approximate simulation
In this paper, we present a new approach for hierarchical control based on the recent notions of approximate simulation and simulation functions, a quantitative version of simulation relations. Given a complex system that needs to be controlled and a simpler abstraction, we show how the knowledge of a simulation function allows us to synthesize hierarchical control laws by first controlling the abstraction and then lifting the abstract control law to the complex system using an interface. For the class of linear control systems, we give an effective characterization of the simulation functions and of the associated interfaces. This characterization allows us to use algorithmic procedures for their computation. We show how to choose an abstraction for a linear control system such that our hierarchical control approach can be used. Finally, we show the effectiveness of our approach on an example.

20.
We present a generic approach for the analysis of concurrent programs with (unbounded) dynamic creation of threads and recursive procedure calls. We define a model for such programs based on a set of term rewrite rules where terms represent control configurations. The reachability problem for this model is undecidable. Therefore, we propose a method for analyzing such models based on computing abstractions of their sets of computation paths. Our approach allows such abstractions to be computed as least solutions of a system of (path language) constraints. More precisely, given a program and two regular sets of configurations (process terms) T and T′, we provide (1) a construction of a system of constraints that characterizes the set of computation paths leading from T to T′, and (2) a generic framework, based on abstract interpretation, allowing this system to be solved in various abstract domains, leading to abstract analyses with different precision and cost.
