Similar Literature
A total of 20 similar documents were found (search time: 328 ms).
1.
The Java Virtual Machine executes bytecode programs that may have been sent from other, possibly untrusted, locations on the network. Since the transmitted code may be written by a malicious party or corrupted during network transmission, the Java Virtual Machine contains a bytecode verifier to check the code for type errors before it is run. As illustrated by reported attacks on Java run-time systems, the verifier is essential for system security. However, no formal specification of the bytecode verifier exists in the Java Virtual Machine Specification published by Sun. In this paper, we develop such a specification in the form of a type system for a subset of the bytecode language. The subset includes classes, interfaces, constructors, methods, exceptions, and bytecode subroutines. We also present a type checking algorithm and prototype bytecode verifier implementation, and we conclude by discussing other applications of this work. For example, we show how to extend our formal system to check other program properties, such as the correct use of object locks. This revised version was published online in August 2006 with corrections to the Cover Date.
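To make the flavour of such a type system concrete, here is a minimal, hypothetical sketch (not the paper's formal system): the verifier abstractly interprets instructions over a stack of types rather than values and rejects code whose operands do not have the expected types. The instruction names and the two-type "lattice" are simplifications chosen purely for illustration.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Illustrative only: a tiny abstract interpreter that tracks operand *types*
// instead of values for a straight-line fragment of a toy stack language.
public class TinyTypeChecker {
    enum Type { INT, REF }

    // A hypothetical mini instruction set standing in for real JVM opcodes.
    record Insn(String op) {}

    static void check(List<Insn> code) {
        Deque<Type> stack = new ArrayDeque<>();
        for (Insn insn : code) {
            switch (insn.op()) {
                case "iconst" -> stack.push(Type.INT);       // push an int
                case "aconst_null" -> stack.push(Type.REF);  // push a reference
                case "iadd" -> {                             // pop two ints, push one int
                    expect(stack, Type.INT);
                    expect(stack, Type.INT);
                    stack.push(Type.INT);
                }
                case "areturn" -> expect(stack, Type.REF);   // must return a reference
                default -> throw new IllegalArgumentException("unknown op " + insn.op());
            }
        }
    }

    static void expect(Deque<Type> stack, Type t) {
        if (stack.isEmpty() || stack.pop() != t) {
            throw new VerifyError("operand stack does not contain expected " + t);
        }
    }

    public static void main(String[] args) {
        // Well-typed: two ints are pushed and added.
        check(List.of(new Insn("iconst"), new Insn("iconst"), new Insn("iadd")));
        // Ill-typed: iadd applied to a reference is rejected.
        try {
            check(List.of(new Insn("iconst"), new Insn("aconst_null"), new Insn("iadd")));
        } catch (VerifyError expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```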

2.
High-performance just-in-time compilers for Java need to invest considerable effort before actual code generation can commence. This is in part due to the very nature of the Java Virtual Machine, which is not well matched to the requirements of optimizing code generators. Alternative transportation formats based on Static Single Assignment form should theoretically be superior to virtual machines, but this claim has not previously been validated in practice. This paper revisits the topic and attempts to quantify the effect of using an SSA-based mobile code representation (IR) instead of a virtual-machine-based one. To this end, we have integrated full support for a verifiable SSA-based IR into Jikes RVM, an existing Java execution environment. The resulting system is capable of loading and executing Java programs represented in either format, traditional JVM bytecode as well as the SSA-based representation, and it can even execute programs made up of a mixture of the two formats. In our implementation, the two alternative just-in-time compilation pipelines share a common low-level code generator. Performance results are encouraging and show simultaneous improvements in both compilation time and code quality relative to Jikes RVM's standard optimizing compiler for JVM class files. They support the hypothesis that SSA-based intermediate representations offer advantages in the context of just-in-time compilation.

3.
A novel compiler optimization for loops is presented. The optimization uses exceptions to eliminate redundant tests that are performed when code is interpretively executed, as is the case with Java bytecode executed on the Java Virtual Machine. An analysis technique based on abstract interpretation is used to discover when the optimization is applicable. Copyright © 2001 John Wiley & Sons, Ltd.
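The abstract describes the optimization only in outline; the hand-written sketch below conveys the general idea at the source level (the paper works on bytecode and applies the rewrite only where its abstract-interpretation analysis proves it safe): the per-iteration index test is dropped and the virtual machine's own bounds check, surfacing as ArrayIndexOutOfBoundsException, terminates the loop.

```java
// Illustrative only: the same summation written with an explicit per-iteration
// test, and with the test removed in favour of the VM's built-in bounds check.
public class BoundsCheckExample {

    // Baseline: the loop condition re-tests the index on every iteration.
    static int sumChecked(int[] a) {
        int sum = 0;
        for (int i = 0; i < a.length; i++) {
            sum += a[i];
        }
        return sum;
    }

    // Transformed shape: the redundant test is gone; the implicit bounds check
    // that the VM performs on a[i] raises the exception that ends the loop.
    // (Shown only to convey the idea; a real compiler applies this at the
    // bytecode level and only when an analysis proves it safe.)
    static int sumViaException(int[] a) {
        int sum = 0;
        try {
            for (int i = 0; ; i++) {
                sum += a[i];
            }
        } catch (ArrayIndexOutOfBoundsException end) {
            return sum;
        }
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4};
        System.out.println(sumChecked(data));       // 10
        System.out.println(sumViaException(data));  // 10
    }
}
```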

4.
In this paper, we provide a theoretical foundation for and improvements to the existing bytecode verification technology, a critical component of the Java security model, for mobile code used with the Java “micro edition” (J2ME), which is intended for embedded computing devices. In Java, remotely loaded “bytecode” class files are required to be bytecode verified before execution, that is, to undergo a static type analysis that protects the platform's Java run-time system from so-called type confusion attacks such as pointer manipulation. The data flow analysis that performs the verification, however, is beyond the capacity of most embedded devices because of the memory requirements that the typical algorithm will need. We propose to take a proof-carrying code approach to data flow analysis in defining an alternative technique called “lightweight analysis” that uses the notion of a “certificate” to reanalyze a previously analyzed data flow problem, even on poorly resourced platforms. We formally prove that the technique provides the same guarantees as standard bytecode safety verification analysis, in particular that it is “tamper proof” in the sense that the guarantees provided by the analysis cannot be broken by crafting a “false” certificate or by altering the analyzed code. We show how the Java bytecode verifier fits into this framework for an important subset of the Java Virtual Machine; we also show how the resulting “lightweight bytecode verification” technique generalizes and simulates the J2ME verifier (to be expected as Sun's J2ME “K-Virtual machine” verifier was directly based on an early version of this work), as well as Leroy's “on-card bytecode verifier,” which is specifically targeted for Java Cards.

5.
During an attempt to prove that the Java-to-JVM compiler generates code that is accepted by the bytecode verifier, we found examples of legal Java programs that are rejected by the verifier. We therefore propose to restrict the rules of definite assignment for the try-finally statement as well as for the labeled statement so that the example programs are no longer allowed. Then we can prove, using the framework of Abstract State Machines, that each program from the slightly restricted Java language is accepted by the bytecode verifier. In the proof we use a new notion of bytecode type assignment without subroutine call stacks. This revised version was published online in August 2006 with corrections to the Cover Date.

6.
Xavier Leroy, Software, 2002, 32(4): 319-340
This article presents a novel approach to the problem of bytecode verification for Java Card applets. By relying on prior off‐card bytecode transformations, we simplify the bytecode verifier and reduce its memory requirements to the point where it can be embedded on a smart card, thus increasing significantly the security of post‐issuance downloading of applets on Java Cards. This article describes the on‐card verification algorithm and the off‐card code transformations, and evaluates experimentally their impact on applet code size. Copyright © 2002 John Wiley & Sons, Ltd.

7.
This paper describes a new method for code space optimization for interpreted languages called LZW‐CC. The method is based on a well‐known and widely used compression algorithm, LZW, which has been adapted to compress executable program code represented as bytecode. Frequently occurring sequences of bytecode instructions are replaced by shorter encodings for newly generated bytecode instructions. The interpreter for the compressed code is modified to recognize and execute those new instructions. When applied to systems where a copy of the interpreter is supplied with each user program, space is saved not only by compressing the program code but also by automatically removing the unused implementation code from the interpreter. The method's implementation within two compiler systems for the programming languages Haskell and Java is described and implementation issues of interest are presented, notably the recalculation of jump targets and the automated tailoring of the interpreter to program code. Applying LZW‐CC to nhc98 Haskell results in bytecode size reduction by up to 15.23% and executable size reduction by up to 11.9%. Java bytecode is reduced by up to 52%. The impact of compression on execution speed is also discussed; the typical speed penalty for Java programs is between 1.8 and 6.6%, while most compressed Haskell executables run faster than the original. Copyright © 2008 John Wiley & Sons, Ltd.
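As a rough illustration of the dictionary-building step that LZW‐CC adapts, the sketch below runs textbook LZW over a made-up stream of opcode values. LZW‐CC additionally turns the newly assigned codes into real interpreter instructions and patches jump targets, which is not shown here.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative only: classic LZW applied to a stream of (made-up) bytecode
// values. Codes at or above opcodeRange correspond to newly learned sequences,
// which LZW-CC would turn into new interpreter instructions.
public class LzwSketch {
    static List<Integer> compress(int[] code, int opcodeRange) {
        Map<String, Integer> dict = new HashMap<>();
        for (int op = 0; op < opcodeRange; op++) {
            dict.put(String.valueOf(op), op);       // single opcodes are pre-seeded
        }
        int nextCode = opcodeRange;                 // codes above the range stand for sequences
        List<Integer> out = new ArrayList<>();
        String current = "";
        for (int op : code) {
            String extended = current.isEmpty() ? String.valueOf(op) : current + "," + op;
            if (dict.containsKey(extended)) {
                current = extended;                 // keep growing the known sequence
            } else {
                out.add(dict.get(current));         // emit code for the longest known prefix
                dict.put(extended, nextCode++);     // remember the new, longer sequence
                current = String.valueOf(op);
            }
        }
        if (!current.isEmpty()) out.add(dict.get(current));
        return out;
    }

    public static void main(String[] args) {
        // A repetitive "bytecode" stream compresses well: repeated pairs get one code.
        int[] program = {1, 2, 1, 2, 1, 2, 3, 1, 2};
        System.out.println(compress(program, 4));   // e.g. [1, 2, 4, 4, 3, 4]
    }
}
```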

8.
To test the Java Virtual Machine (JVM), developers usually have to hand-craft complex test programs, or generate them with test-generation tools, in order to expose latent defects in the JVM. Complex test programs, however, impose a very high cost on developers when locating and fixing the defects they reveal. Test-program reduction techniques aim to remove as much code that is irrelevant to defect detection as possible while preserving the test program's ability to detect the defect. Existing work based on Delta debugging achieves good reduction on C programs and XML inputs, but in JVM testing scenarios the reduction of Java test programs with complex syntactic and semantic dependencies remains coarse-grained and ineffective, so the reduced programs are still costly to understand. This paper therefore proposes JavaPruner, a fine-grained, program-constraint-based reduction method for Java test programs with complex dependencies. It first defines a fine-grained code metric at the level of statement blocks, and then augments Delta debugging with dependency constraints between statement blocks to reduce the test program. Taking Java bytecode test programs as the experimental subject, 50 test programs with complex dependencies were selected from existing test-program generators for JVM testing to form a benchmark, on which the effectiveness of JavaPruner was evaluated. The experimental results show that JavaPruner effectively removes redundant code from Java bytecode test programs: compared with existing methods, its reduction power improves by 37.7% on average across all benchmarks, and it can shrink a Java bytecode test program to as little as 1.09% of its original size while preserving the program's validity and defect-detecting ability, substantially lowering the cost of analyzing and understanding test programs.
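A minimal sketch of the kind of reduction loop the abstract describes, under simplifying assumptions: "blocks" are plain strings, the dependency relation is given explicitly, and the failure oracle is an arbitrary predicate. It is a greedy variant in the spirit of Delta debugging, not JavaPruner's actual algorithm.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.Predicate;

// Illustrative only: a greedy reduction loop extended with a dependency
// constraint (a block may only be removed if no remaining block depends on it).
// The real tool works on Java statement blocks and real JVM bug triggers.
public class ReducerSketch {

    static List<String> reduce(List<String> blocks,
                               Map<String, Set<String>> dependsOn,
                               Predicate<List<String>> stillFails) {
        List<String> current = new ArrayList<>(blocks);
        boolean changed = true;
        while (changed) {
            changed = false;
            for (int i = 0; i < current.size(); i++) {
                String candidate = current.get(i);
                // Constraint: keep a block if some remaining block depends on it.
                boolean needed = current.stream().anyMatch(b ->
                        dependsOn.getOrDefault(b, Set.of()).contains(candidate) && !b.equals(candidate));
                if (needed) continue;
                List<String> trial = new ArrayList<>(current);
                trial.remove(i);
                if (stillFails.test(trial)) {   // bug still reproduces without the block
                    current = trial;
                    changed = true;
                    break;                      // restart the scan on the smaller program
                }
            }
        }
        return current;
    }

    public static void main(String[] args) {
        List<String> program = List.of("init", "helper", "trigger", "noise");
        Map<String, Set<String>> deps = Map.of("trigger", Set.of("init"));
        // Hypothetical oracle: the crash needs "trigger" (and, via deps, "init").
        List<String> reduced = reduce(program, deps, p -> p.contains("trigger"));
        System.out.println(reduced);            // [init, trigger]
    }
}
```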

9.
Bytecode verification is one of the key security functions of several architectures for mobile and embedded code, including Java, Java Card, and .NET. Over the past few years, its formal correctness has been studied extensively by academia and industry, using general-purpose theorem provers. The objective of our work is to facilitate such endeavors by providing a dedicated environment for establishing the correctness of bytecode verification within a proof assistant. The environment, called Jakarta, exploits a methodology that casts the correctness of bytecode verification relative to a defensive virtual machine that performs checks at run-time and to an offensive one that does not; it can be summarized as stating that the two machines coincide on programs that pass bytecode verification. Such a methodology has been used successfully to prove the correctness of the Java Card bytecode verifier and may potentially be applied to many similar problems. One definite advantage of the methodology is that it is amenable to automation. Indeed, Jakarta automates the construction of an offensive virtual machine and a bytecode verifier from a defensive machine, and the proofs of correctness of the bytecode verifier. We illustrate the principles of Jakarta on a simple low-level language extended with subroutines and discuss its usefulness for proving the correctness of the Java Card platform.

10.
11.
Aspect-oriented programming (AOP) has been successfully applied to application code thanks to techniques such as Java bytecode instrumentation. Unfortunately, with existing AOP frameworks for Java such as AspectJ, aspects cannot be woven into the standard Java class library. This restriction is particularly unfortunate for aspects that would benefit from comprehensive aspect weaving with complete method coverage, such as profiling or debugging aspects. In this article we present MAJOR, a new tool for comprehensive aspect weaving, which ensures that aspects are woven into all classes loaded in a Java Virtual Machine, including those in the standard Java class library. MAJOR includes the pluggable module CARAJillo, which supports efficient access to a complete and customizable calling context representation. We validate our approach with three case studies. Firstly, we use MAJOR to weave existing profiling aspects that would otherwise generate incomplete profiles. Secondly, we introduce an aspect for memory leak detection that also benefits from comprehensive weaving. Thirdly, we present an aspect subsuming the functionality of ReCrash, an existing tool based on low-level bytecode instrumentation techniques that generates unit tests to reproduce program failures. Our aspect-based tools are concisely implemented in a few lines of code, and leverage MAJOR and CARAJillo for comprehensive aspect weaving and for efficient access to calling context information.
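For readers unfamiliar with AOP, a profiling aspect of the kind MAJOR is meant to weave comprehensively might look like the annotation-style AspectJ sketch below. The class name and pointcut are made up, and the advice only runs when the class is woven by an AspectJ weaver; with a standard weaver such advice would not reach the Java class library, which is exactly the gap MAJOR closes.

```java
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

// Illustrative only: a minimal timing aspect. The pointcut intercepts every
// method execution except the aspect's own methods.
@Aspect
public class ProfilingAspect {

    @Around("execution(* *(..)) && !within(ProfilingAspect)")
    public Object time(ProceedingJoinPoint joinPoint) throws Throwable {
        long start = System.nanoTime();
        try {
            return joinPoint.proceed();   // run the intercepted method
        } finally {
            long elapsed = System.nanoTime() - start;
            System.out.println(joinPoint.getSignature() + " took " + elapsed + " ns");
        }
    }
}
```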

12.
Accurate measurement of the execution time of Java bytecode is an important factor in estimating the total execution time of a Java application running on a Java Virtual Machine. In this paper we document the difficulties and solutions for the accurate timing of Java bytecode. We also identify trends across the execution times recorded for all imperative Java bytecodes. These trends would suggest that knowing the execution times of a small subset of the Java bytecode instructions would be sufficient to model the execution times of the remainder. We first review a statistical approach for achieving high precision timing results for Java bytecode using low precision timers and then present a more suitable technique using homogeneous bytecode sequences for recording such information. We finally compare instruction execution times acquired using this platform independent technique against execution times recorded using the read time stamp counter assembly instruction. In particular our results show the existence of a strong linear correlation between both techniques.
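The following sketch (not the paper's harness) shows the basic shape of timing a homogeneous sequence with a low-precision timer: the same operation is repeated many times per iteration, an empty-loop baseline is subtracted, and runs are repeated to expose variance. Real measurements additionally need JIT warm-up control and proper statistics, which is precisely what the paper addresses.

```java
// Illustrative only: the flavour of timing a homogeneous sequence of one
// operation. The loop body repeats the same addition many times so that the
// per-iteration overhead is amortized; the empty-body baseline is subtracted.
public class BytecodeTiming {
    static long sink;   // written to discourage the JIT from removing the work

    static double nanosPerOperation(int iterations) {
        long x = 1;
        long startBase = System.nanoTime();
        for (int i = 0; i < iterations; i++) { /* empty baseline loop */ }
        long baseline = System.nanoTime() - startBase;

        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            // "Homogeneous sequence": the same operation repeated several times.
            x += 1; x += 1; x += 1; x += 1; x += 1;
            x += 1; x += 1; x += 1; x += 1; x += 1;
        }
        long elapsed = System.nanoTime() - start;
        sink = x;
        return (elapsed - baseline) / (iterations * 10.0);
    }

    public static void main(String[] args) {
        for (int run = 0; run < 5; run++) {   // repeat to see the variance
            System.out.printf("~%.3f ns per add%n", nanosPerOperation(10_000_000));
        }
    }
}
```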

13.
Developers using third-party software components need to test them to satisfy quality requirements. In the past, researchers have proposed fault injection testing approaches in which the component state is perturbed and the resulting effects on the rest of the system are observed. Non-availability of source code in third-party components makes it harder to perform source code level fault injection. Even if Java decompilers are used, they do not work well with obfuscated bytecode. We propose a technique that injects faults in Java software by manipulating the bytecode. Existing test suites are assessed according to their ability to detect the injected faults and improved accordingly. We present a case study using an open source Java component that demonstrates the feasibility and effectiveness of our approach. We also evaluate the usability of our approach on obfuscated bytecode.
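One plausible realization of bytecode-level fault injection, assuming the ASM bytecode manipulation library (the abstract does not prescribe a particular library), is sketched below: every integer addition in a class file is replaced by a subtraction, producing a mutant against which the existing test suite can be assessed.

```java
import org.objectweb.asm.ClassReader;
import org.objectweb.asm.ClassVisitor;
import org.objectweb.asm.ClassWriter;
import org.objectweb.asm.MethodVisitor;
import org.objectweb.asm.Opcodes;

// Illustrative only: inject a fault directly into bytecode with ASM.
// A test suite that never notices the change is arguably too weak.
public class FaultInjector {

    static byte[] flipAddToSub(byte[] classFile) {
        ClassReader reader = new ClassReader(classFile);
        ClassWriter writer = new ClassWriter(reader, ClassWriter.COMPUTE_FRAMES);
        ClassVisitor mutator = new ClassVisitor(Opcodes.ASM9, writer) {
            @Override
            public MethodVisitor visitMethod(int access, String name, String desc,
                                             String signature, String[] exceptions) {
                MethodVisitor mv = super.visitMethod(access, name, desc, signature, exceptions);
                return new MethodVisitor(Opcodes.ASM9, mv) {
                    @Override
                    public void visitInsn(int opcode) {
                        // The injected fault: IADD becomes ISUB.
                        super.visitInsn(opcode == Opcodes.IADD ? Opcodes.ISUB : opcode);
                    }
                };
            }
        };
        reader.accept(mutator, 0);
        return writer.toByteArray();
    }

    public static void main(String[] args) throws java.io.IOException {
        // Usage sketch: java FaultInjector In.class Out.class
        byte[] original = java.nio.file.Files.readAllBytes(java.nio.file.Path.of(args[0]));
        java.nio.file.Files.write(java.nio.file.Path.of(args[1]), flipAddToSub(original));
    }
}
```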

14.
Cost analysis statically approximates the cost of programs in terms of their input data size. This paper presents, to the best of our knowledge, the first approach to the automatic cost analysis of object-oriented bytecode programs. In languages such as Java and C#, analyzing bytecode has a much wider application area than analyzing source code since the latter is often not available. Cost analysis in this context has to consider, among other things, dynamic dispatch, jumps, the operand stack, and the heap. Our method takes a bytecode program and a cost model specifying the resource of interest, and generates cost relations which approximate the execution cost of the program with respect to that resource. We report on COSTA, an implementation for Java bytecode which can obtain upper bounds on cost for a large class of programs and complexity classes. Our basic techniques can be directly applied to infer cost relations for other object-oriented imperative languages, not necessarily in bytecode form.
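A worked miniature of the analysis's input and output: for the loop below and an instruction-counting cost model, recurrence-style cost relations of the kind shown in the comment yield an upper bound linear in n, the input size. The constants and notation are illustrative, not COSTA's actual output.

```java
// Illustrative only: the kind of input/output such an analysis works with.
// A cost analyzer would produce cost relations roughly like:
//
//     C_sum(n)  = c0 + C_loop(n)
//     C_loop(n) = c1                      if n <= 0
//     C_loop(n) = c2 + C_loop(n - 1)      if n > 0
//
// from which an upper bound linear in n follows. The constants c0, c1, c2
// stand for the cost of the straight-line code fragments.
public class CostExample {
    static int sum(int n) {
        int total = 0;
        for (int i = 0; i < n; i++) {
            total += i;
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum(10));   // 45
    }
}
```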

15.
Tom Hume, Des Watson, Software, 2015, 45(4): 571-579
The technique of superoptimization attempts to ensure true optimality of code (according to predefined criteria) through an exhaustive search of all potentially viable programs. Implementations have demonstrated that superoptimizers are capable of finding shorter programs than those hand-optimized for size by experts or produced by conventional compilers. Superoptimizers have been developed for many machine architectures and used for diverse purposes including automating peephole optimization and binary translation of instruction sets. The output of superoptimizers is frequently surprising to human experts and often takes advantage of side effects or obscure characteristics of the targeted hardware. Virtual machines (VMs) are increasingly popular in implementations of programming languages, because they can provide a common platform across heterogeneous hardware architectures. This paper examines whether superoptimization could be a viable technique for VMs. A superoptimizer for the Java VM (JVM) has been developed and used to generate demonstrably size‐optimized versions of some simple mathematical functions, implemented in Java bytecode. We have shown that these versions are shorter than both the implementations shipped with the Java software development kit and those generated by the Java compiler. We also offer some useful observations concerning techniques to allow larger programs to be optimized in a reasonable time by this approach. Copyright © 2013 John Wiley & Sons, Ltd.
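The sketch below shows the exhaustive-search core of a superoptimizer for a toy stack machine rather than real JVM bytecode: candidate sequences are enumerated in order of length and tested against the target function on sample inputs. A real superoptimizer must prove equivalence (or verify it far more rigorously) instead of trusting a few test values; the instruction set and target function here are made up for illustration.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative only: brute-force search for the shortest toy-stack-machine
// program computing f(x) = x*x + x, starting with x on the stack.
public class SuperoptSketch {
    enum Op { DUP, ADD, MUL, CONST_1 }

    static final long[] TEST_INPUTS = {0, 1, 2, 3, 7, 11};

    static long target(long x) { return x * x + x; }     // the function to synthesize

    // Interprets a sequence starting with the input on the stack; null on error.
    static Long run(Op[] code, long x) {
        Deque<Long> stack = new ArrayDeque<>();
        stack.push(x);
        for (Op op : code) {
            switch (op) {
                case DUP -> { if (stack.isEmpty()) return null; stack.push(stack.peek()); }
                case CONST_1 -> stack.push(1L);
                case ADD, MUL -> {
                    if (stack.size() < 2) return null;
                    long b = stack.pop(), a = stack.pop();
                    stack.push(op == Op.ADD ? a + b : a * b);
                }
            }
        }
        return stack.size() == 1 ? stack.pop() : null;
    }

    static boolean matchesTarget(Op[] code) {
        for (long x : TEST_INPUTS) {
            Long result = run(code, x);
            if (result == null || result != target(x)) return false;
        }
        return true;
    }

    static Op[] search(int maxLength) {
        for (int len = 1; len <= maxLength; len++) {
            Op[] code = new Op[len];
            if (enumerate(code, 0)) return code;          // shortest sequence found first
        }
        return null;
    }

    static boolean enumerate(Op[] code, int pos) {
        if (pos == code.length) return matchesTarget(code);
        for (Op op : Op.values()) {
            code[pos] = op;
            if (enumerate(code, pos + 1)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        Op[] best = search(5);
        System.out.println(best == null ? "not found" : java.util.Arrays.toString(best));
    }
}
```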

16.
The Java Virtual Machine is primarily designed for transporting Java programs. As a consequence, when JVM bytecodes are used to transport programs in other languages, the result becomes less acceptable the more the source language diverges from Java. Microsoft's .NET transport format fares better in this respect because it has a more flexible type system and instruction set, but it is not extensible, and (for example) has no provision for supporting explicit programmer-specified parallelism. Both platforms have difficulty making transported programs run efficiently. This paper discusses first steps towards mobile code representations that are independent (in the sense that the representation can be appropriately parameterized) of the source language (e.g., Java), intermediate representation (e.g., bytecode), and target architecture (e.g., x86). We call this kind of parameterizable framework language-agnostic. We present two techniques which provide parts of the envisioned language-agnostic functionality. Compressed abstract syntax trees as a wire format provide for a very dense encoding of programs at a high level of abstraction. We show how to parameterize the compression algorithm in a modular fashion with knowledge beyond the purely syntactical level. This leads to the notion of well-formedness by construction. The second technique defines the semantics of programs by mapping from abstract syntax trees to a typed core calculus representation. Based on this representation it becomes possible to use portable definitions of security policies and to execute programs written in different source languages, even if a more efficient trusted native compiler is not available on the target platform.

17.
This paper presents a novel profiling approach, which is entirely based on program transformation techniques in order to enable exact profiling, preserving complete call stacks, method invocation counters, and bytecode instruction counters. We exploit the number of executed bytecode instructions as the profiling metric, which has several advantages, such as making the instrumentation entirely portable and generating reproducible profiles. These ideas have been implemented as the JP tool. It provides a small and flexible API to write portable profiling agents in pure Java, which are periodically activated to process the collected profiling information. Performance measurements point out that JP causes significantly less overhead than a prevailing tool for the exact profiling of Java code.

18.
This paper describes intra‐method control‐flow and data‐flow testing criteria for the Java bytecode language. Six testing criteria are considered for the generation of testing requirements: four control‐flow and two data‐flow based. The main reason to work at a lower level is that, even when there is no source code, structural testing requirements can still be derived and used to assess the quality of a given test set. It can be used, for instance, to perform structural testing on third‐party Java components. In addition, the bytecode can be seen as an intermediate language, so the analysis performed at this level can be mapped back to the original high‐level language that generated the bytecode. To support the application of the testing criteria, we have implemented a tool named JaBUTi (Java Bytecode Understanding and Testing). JaBUTi is used to illustrate the application of the ideas developed in this paper. Copyright © 2006 John Wiley & Sons, Ltd.
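A small example of the kind of requirement a data-flow criterion derives (hand-worked at the source level for readability, not JaBUTi output; the tool itself works on bytecode): the comments list the definition-use pairs of the local variable max and test inputs that cover them.

```java
// Illustrative only: a small method and, in comments, the data-flow testing
// requirements (definition-use pairs of variable max) derived from it.
// Node numbers are made up.
public class DataFlowExample {
    static int max(int a, int b) {
        int max = a;        // def of max          (node 1)
        if (b > a) {        // branch              (node 2)
            max = b;        // redefinition of max (node 3)
        }
        return max;         // use of max          (node 4)
    }
    // An all-uses criterion would require covering both def-use pairs:
    //   (def at node 1, use at node 4)  -- exercised by max(5, 3)
    //   (def at node 3, use at node 4)  -- exercised by max(3, 5)
    public static void main(String[] args) {
        System.out.println(max(5, 3));   // covers pair (1, 4)
        System.out.println(max(3, 5));   // covers pair (3, 4)
    }
}
```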

19.
Java bytecode verification is traditionally performed by using dataflow analysis. We investigate an alternative based on reducing bytecode verification to model checking. First, we analyze the complexity and scalability of this approach. We show experimentally that, despite an exponential worst-case time complexity, model checking type-correct bytecode using an explicit-state on-the-fly model checker is feasible in practice, and we give a theoretical account of why this is the case. Second, we formalize our approach using Isabelle/HOL and prove its correctness. In doing so we build on the formalization of the Java Virtual Machine and dataflow analysis framework of Pusch and Nipkow and extend it to a more general framework for reasoning about model-checking-based analysis. Overall, our work constitutes the first comprehensive investigation of the theory and practice of bytecode verification by model checking. This revised version was published online in August 2006 with corrections to the Cover Date.

20.
Virtual execution environments, such as the Java virtual machine, promote platform‐independent software development. However, when it comes to analyzing algorithm complexity and performance bottlenecks, available tools focus on platform‐specific metrics, such as the CPU time consumption on a particular system. Other drawbacks of many prevailing profiling tools are high overhead, significant measurement perturbation, and reduced portability of the tools themselves, which are often implemented in platform‐dependent native code. This article presents a novel profiling approach, which is entirely based on program transformation techniques, in order to build a profiling data structure that provides calling‐context‐sensitive program execution statistics. We explore the use of platform‐independent profiling metrics in order to make the instrumentation entirely portable and to generate reproducible profiles. We implemented these ideas within a Java‐based profiling tool called JP. A significant novelty is that this tool achieves complete bytecode coverage by statically instrumenting the core runtime libraries and dynamically instrumenting the rest of the code. JP provides a small and flexible API to write customized profiling agents in pure Java, which are periodically activated to process the collected profiling information. Performance measurements point out that, despite the presence of dynamic instrumentation, JP causes significantly less overhead than a prevailing tool for the profiling of Java code. Copyright © 2008 John Wiley & Sons, Ltd.
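A minimal sketch of the calling-context-sensitive data structure such a profiler maintains: a calling context tree whose nodes carry invocation and bytecode-instruction counters. The abstract says the instrumentation inserts the bookkeeping calls; in this standalone sketch the enter/count/leave calls are issued by hand and all names are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only: a calling context tree whose nodes carry invocation and
// bytecode-instruction counters.
public class CallingContextTree {
    static final class Node {
        final String method;
        final Node parent;
        final Map<String, Node> children = new HashMap<>();
        long invocations;
        long bytecodes;
        Node(String method, Node parent) { this.method = method; this.parent = parent; }
    }

    private final Node root = new Node("<root>", null);
    private Node current = root;

    void enter(String method) {                 // called on method entry
        current = current.children.computeIfAbsent(method, m -> new Node(m, current));
        current.invocations++;
    }

    void count(long executedBytecodes) {        // called per executed basic block
        current.bytecodes += executedBytecodes;
    }

    void leave() {                              // called on method exit
        current = current.parent;
    }

    void dump(Node node, String indent) {
        if (node.parent != null) {
            System.out.println(indent + node.method + "  calls=" + node.invocations
                    + "  bytecodes=" + node.bytecodes);
        }
        for (Node child : node.children.values()) {
            dump(child, indent + "  ");
        }
    }

    public static void main(String[] args) {
        CallingContextTree cct = new CallingContextTree();
        cct.enter("main");      cct.count(12);
        cct.enter("work");      cct.count(40);  cct.leave();
        cct.enter("work");      cct.count(40);  cct.leave();   // same context, counters merge
        cct.leave();
        cct.dump(cct.root, "");
    }
}
```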
