Similar Documents
1.
A Precise Worst-Case Execution Time Analysis Method for Programs
The dynamic features of the Java language make worst-case execution time (WCET) analysis of programs pessimistic and hard to predict. This paper proposes a precise WCET analysis method. In the high-level analysis, an annotation scheme is introduced: annotated Java class files are decompiled to extract the control flow, yielding the worst-case execution count of the Java bytecode instructions in each basic block. In the low-level analysis, a timing model that combines pipeline and cache effects is built to obtain the execution time of each instruction. Finally, the results of the high-level and low-level analyses are combined to obtain the program's worst-case execution time. Experiments show that the method makes WCET prediction for real-time Java programs safer and more precise.
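
A minimal sketch of the combination step described above, under assumed data structures (not the paper's implementation): the high-level analysis supplies a worst-case execution count per basic block, the low-level timing model supplies cycles per bytecode, and the WCET is the sum over all blocks of count times block cycles.

```java
import java.util.List;
import java.util.Map;

// Minimal sketch of combining high-level and low-level WCET results.
// BasicBlock, the count field and the timing map are illustrative, not the paper's data structures.
public class WcetCombiner {

    record BasicBlock(List<Integer> opcodes, long worstCaseCount) {}

    /** Cycles per bytecode opcode, as produced by a low-level pipeline/cache timing model. */
    private final Map<Integer, Long> cyclesPerOpcode;

    public WcetCombiner(Map<Integer, Long> cyclesPerOpcode) {
        this.cyclesPerOpcode = cyclesPerOpcode;
    }

    /** WCET = sum over blocks of (worst-case execution count * cycles of the block's instructions). */
    public long wcet(List<BasicBlock> blocks) {
        long total = 0;
        for (BasicBlock b : blocks) {
            long blockCycles = 0;
            for (int opcode : b.opcodes()) {
                blockCycles += cyclesPerOpcode.getOrDefault(opcode, 1L);
            }
            total += b.worstCaseCount() * blockCycles;
        }
        return total;
    }
}
```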

2.
Java has become one of the most popular programming languages. Thanks to features such as platform independence, execution safety, and garbage collection, it is widely used and supported by many IT companies and developers. However, compared with languages such as C/C++, Java's runtime performance still leaves room for improvement in many cases, and optimizing Java application performance has become a pressing problem and an active research topic in industry. This paper approaches the problem from the angle of optimizing Java bytecode, introduces a framework for performing optimizations and transformations on bytecode, and presents examples of related applications.

3.
To execute the CISC-style Java bytecode directly in hardware, the JPOR-32 instruction set, targeted at a 32-bit embedded real-time Java platform, is designed and implemented. The function and implementation principle of each Java bytecode in the Java Virtual Machine specification are analyzed, and the signal and data transitions on the Java processor's datapath are specified for the execution of each instruction. Complex instructions are executed as microinstruction sequences, while simple instructions are executed directly, giving the JPOR-32 instruction set RISC-like characteristics. Experimental results verify the correctness of the instruction set and the predictability of its worst-case execution time (WCET).

4.
Cost analysis statically approximates the cost of programs in terms of their input data size. This paper presents, to the best of our knowledge, the first approach to the automatic cost analysis of object-oriented bytecode programs. In languages such as Java and C#, analyzing bytecode has a much wider application area than analyzing source code since the latter is often not available. Cost analysis in this context has to consider, among others, dynamic dispatch, jumps, the operand stack, and the heap. Our method takes a bytecode program and a cost model specifying the resource of interest, and generates cost relations which approximate the execution cost of the program with respect to such resource. We report on COSTA, an implementation for Java bytecode which can obtain upper bounds on cost for a large class of programs and complexity classes. Our basic techniques can be directly applied to infer cost relations for other object-oriented imperative languages, not necessarily in bytecode form.
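
To illustrate the general shape of such output, consider the hypothetical method below; under a cost model counting executed bytecode instructions, a cost analysis could derive a linear cost relation for it. The relation shown in the comment is illustrative only and is not COSTA's actual output format.

```java
// Hypothetical input program; the cost relation in the comment illustrates the kind of
// result such an analysis produces, not COSTA's concrete syntax or constants.
public class SumList {
    // For a cost model counting executed bytecode instructions, a cost analysis could
    // derive a relation of the form
    //   C(n) = c0 + n * c1
    // where n is the array length, c0 covers the instructions outside the loop and
    // c1 the instructions of one loop iteration, giving a linear upper bound on cost.
    static int sum(int[] xs) {
        int s = 0;
        for (int x : xs) {
            s += x;
        }
        return s;
    }
}
```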

5.
Pattern-based Java bytecode compression techniques rely on the identification of identical instruction sequences that occur more than once. Each occurrence of such a sequence is substituted by a single instruction. The sequence defines a pattern that is used for extending the standard bytecode instruction set with the instruction that substitutes the pattern occurrences in the original bytecode. Alternatively, the pattern may be stored in a dictionary that serves for the bytecode decompression. In this case, the instruction that substitutes the pattern in the original bytecode serves as an index to the dictionary. In this paper, we investigate a bytecode compression technique that considers a more general case of patterns. Specifically, we employ the use of an advanced pattern discovery technique that allows locating patterns of an arbitrary length, which may contain a variable number of wildcards in place of certain instruction opcodes or operands. We evaluate the benefits and the limitations of this technique in various scenarios that aim at compressing the reference implementation of MIDP, a standard Java environment for the development of applications for mobile devices.
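
A much-simplified sketch of the dictionary-based scheme described above, where bytecode is modelled as a plain sequence of integer opcodes; operands, wildcards and verification concerns are ignored, and all names are illustrative.

```java
import java.util.ArrayList;
import java.util.List;

// Much-simplified sketch of dictionary-based bytecode compression: every occurrence of
// a previously discovered pattern is replaced by one "extended" opcode indexing a dictionary entry.
public class PatternCompressor {

    static final int EXT_OPCODE_BASE = 0x100; // outside the standard 0x00-0xFF opcode range

    static List<Integer> compress(List<Integer> code, List<List<Integer>> dictionary) {
        List<Integer> out = new ArrayList<>();
        int i = 0;
        while (i < code.size()) {
            int matched = -1;
            for (int d = 0; d < dictionary.size(); d++) {
                List<Integer> pattern = dictionary.get(d);
                int end = i + pattern.size();
                if (end <= code.size() && code.subList(i, end).equals(pattern)) {
                    matched = d;
                    break;
                }
            }
            if (matched >= 0) {
                out.add(EXT_OPCODE_BASE + matched); // substitute the whole occurrence
                i += dictionary.get(matched).size();
            } else {
                out.add(code.get(i++));
            }
        }
        return out;
    }

    static List<Integer> decompress(List<Integer> code, List<List<Integer>> dictionary) {
        List<Integer> out = new ArrayList<>();
        for (int op : code) {
            if (op >= EXT_OPCODE_BASE) {
                out.addAll(dictionary.get(op - EXT_OPCODE_BASE)); // expand from the dictionary
            } else {
                out.add(op);
            }
        }
        return out;
    }
}
```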

6.
This paper presents a novel profiling approach, which is entirely based on program transformation techniques in order to enable exact profiling, preserving complete call stacks, method invocation counters, and bytecode instruction counters. We exploit the number of executed bytecode instructions as profiling metric, which has several advantages, such as making the instrumentation entirely portable and generating reproducible profiles. These ideas have been implemented as the JP tool. It provides a small and flexible API to write portable profiling agents in pure Java, which are periodically activated to process the collected profiling information. Performance measurements point out that JP causes significantly less overhead than a prevailing tool for the exact profiling of Java code.
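
A minimal sketch of the underlying idea, assuming a hypothetical counter class whose increment call the transformation would insert at the end of each basic block; the names and API shown are not JP's actual interface.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Minimal sketch of profiling by counting executed bytecode instructions. An instrumentation
// would insert a call to BytecodeCounter.count(...) at the end of every basic block;
// names and API are hypothetical, not JP's interface.
public final class BytecodeCounter {

    // Executed-instruction count per method, keyed by e.g. "Class.method(descriptor)".
    private static final ConcurrentHashMap<String, LongAdder> COUNTERS = new ConcurrentHashMap<>();

    private BytecodeCounter() {}

    /** Called from instrumented code: n = number of bytecodes in the basic block just executed. */
    public static void count(String methodId, int n) {
        COUNTERS.computeIfAbsent(methodId, k -> new LongAdder()).add(n);
    }

    /** A periodically activated profiling agent could call this to process collected data. */
    public static void dump() {
        COUNTERS.forEach((m, c) -> System.out.println(m + " -> " + c.sum() + " bytecodes"));
    }
}
```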

7.
Developers using third party software components need to test them to satisfy quality requirements. In the past, researchers have proposed fault injection testing approaches in which the component state is perturbed and the resulting effects on the rest of the system are observed. Non-availability of source code in third-party components makes it harder to perform source code level fault injection. Even if Java decompilers are used, they do not work well with obfuscated bytecode. We propose a technique that injects faults in Java software by manipulating the bytecode. Existing test suites are assessed according to their ability to detect the injected faults and improved accordingly. We present a case study using an open source Java component that demonstrates the feasibility and effectiveness of our approach. We also evaluate the usability of our approach on obfuscated bytecode.
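
A toy sketch of opcode-level mutation on a method's code array, using real JVM opcode values (iadd = 0x60, isub = 0x64) but omitting all class-file bookkeeping; a real tool must decode instruction boundaries rather than scan raw bytes, and keep the constant pool, branch offsets and stack map frames consistent.

```java
// Toy sketch of bytecode fault injection: flip the k-th iadd opcode of a method's code
// array into isub. Opcode values are the real JVM ones, everything else is simplified.
public class OpcodeFaultInjector {

    static final int IADD = 0x60;
    static final int ISUB = 0x64;

    /** Returns a mutated copy of the code array with the k-th iadd replaced by isub. */
    static byte[] injectArithmeticFault(byte[] methodCode, int k) {
        byte[] mutated = methodCode.clone();
        int seen = 0;
        for (int i = 0; i < mutated.length; i++) {
            // Naive byte scan; operand bytes of other instructions could be mistaken for opcodes.
            if ((mutated[i] & 0xFF) == IADD && seen++ == k) {
                mutated[i] = (byte) ISUB;
                return mutated;
            }
        }
        throw new IllegalArgumentException("method has fewer than " + (k + 1) + " iadd opcodes");
    }
}
```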

8.
Java’s annotation mechanism allows us to extend its type system with non-null types. Checking such types cannot be done using the existing bytecode verification algorithm. We extend this algorithm to verify non-null types using a novel technique that identifies aliasing relationships between local variables and stack locations in the JVM. We formalise this for a subset of Java Bytecode and report on experiences using our implementation.

9.
A Java Card Bytecode Optimizer Based on a Hybrid Scheme
A Java Card is a smart card based on the Java language. Because of the smart card's constraints on memory and processor speed, the biggest problems an application faces when running on a Java Card are insufficient storage space and strict limits on execution time. It is therefore essential to optimize the bytecode downloaded onto the card. This paper presents the design of a Java Card bytecode optimizer that combines an extended instruction set with a segmented compression algorithm; by optimizing the bytecode file, it produces a file that occupies less space without reducing execution speed.

10.
Existing information flow analyses for Java require modifying the compiler or the runtime environment, which makes them poorly compatible with existing systems, and they lack formal analysis and security proofs. This paper first proposes a Java information flow analysis method based on finite state automata: the taint value space of all program variables is abstracted as the automaton's state space, and Java bytecode instructions are treated as the automaton's state transition actions. Next, information flow security rules for the automaton transitions are given, and the noninterference security of program execution under these rules is proved. Finally, a prototype system, IF-JVM, is implemented by statically inserting taint-tracking instructions and dynamically tracking and controlling taint; it requires neither the Java application's source code nor modifications to the Java compiler or runtime environment, and it is independent of the guest operating system. Experimental results show that the prototype correctly implements fine-grained information flow tracking and control for Java, with a performance overhead of 53.1%.
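
A highly simplified sketch of the automaton view described above: the taint of the operand stack is the state, each kind of bytecode is a transition action, and a security rule rejects tainted data at a sink. All names are illustrative and unrelated to IF-JVM's implementation.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Highly simplified sketch of taint tracking as a state machine: the state is the taint of
// each operand-stack slot, and each kind of bytecode instruction is a transition action.
public class TaintAutomaton {

    enum Taint { CLEAN, TAINTED }

    private final Deque<Taint> stack = new ArrayDeque<>();

    /** Transition for instructions that push a value (e.g. a load from a tainted source). */
    void push(Taint t) {
        stack.push(t);
    }

    /** Transition for binary operations (e.g. iadd): taint propagates to the result. */
    void binaryOp() {
        Taint a = stack.pop();
        Taint b = stack.pop();
        stack.push(a == Taint.TAINTED || b == Taint.TAINTED ? Taint.TAINTED : Taint.CLEAN);
    }

    /** Security rule at a sink (e.g. a network write): reject tainted data. */
    void sink(String sinkName) {
        if (stack.pop() == Taint.TAINTED) {
            throw new SecurityException("tainted value reaches sink " + sinkName);
        }
    }
}
```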

11.
A bytecode verifier for the Java virtual machine language (JVML) statically checks that bytecode does not cause any fatal error. However, the present verifier does not check correctness of the usage of lock primitives. To solve this problem, we extend Stata and Abadi’s type system for JVML by augmenting types with information about how each object is locked and unlocked. The resulting type system guarantees that when a thread terminates, it has released all the locks it has acquired and that a thread releases a lock only if it has acquired the lock previously. We have implemented a prototype Java bytecode verifier based on the type system. We have tested the verifier for several classes in the Java run time library and confirmed that the verifier runs efficiently and gives correct answers.

12.
In this paper, we provide a theoretical foundation for and improvements to the existing bytecode verification technology, a critical component of the Java security model, for mobile code used with the Java “micro edition” (J2ME), which is intended for embedded computing devices. In Java, remotely loaded “bytecode” class files are required to be bytecode verified before execution, that is, to undergo a static type analysis that protects the platform's Java run-time system from so-called type confusion attacks such as pointer manipulation. The data flow analysis that performs the verification, however, is beyond the capacity of most embedded devices because of the memory requirements that the typical algorithm will need. We propose to take a proof-carrying code approach to data flow analysis in defining an alternative technique called “lightweight analysis” that uses the notion of a “certificate” to reanalyze a previously analyzed data flow problem, even on poorly resourced platforms. We formally prove that the technique provides the same guarantees as standard bytecode safety verification analysis, in particular that it is “tamper proof” in the sense that the guarantees provided by the analysis cannot be broken by crafting a “false” certificate or by altering the analyzed code. We show how the Java bytecode verifier fits into this framework for an important subset of the Java Virtual Machine; we also show how the resulting “lightweight bytecode verification” technique generalizes and simulates the J2ME verifier (to be expected as Sun's J2ME “K-Virtual machine” verifier was directly based on an early version of this work), as well as Leroy's “on-card bytecode verifier,” which is specifically targeted for Java Cards.
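
The essential saving can be sketched as follows, assuming an abstract lattice of verification states and hypothetical names (not the paper's formalization): instead of iterating the data flow analysis to a fixpoint, the device checks in a single linear pass that the shipped certificate is consistent with every instruction's transfer function.

```java
// Sketch of certificate checking: rather than computing the data-flow fixpoint on the device,
// verify in one pass that the shipped certificate (one abstract state per instruction) is
// consistent. The lattice, transfer function and control-flow interface are stand-ins.
public class LightweightChecker {

    interface Lattice<S> {
        boolean leq(S a, S b);                       // partial order on abstract states
        S transfer(int instructionIndex, S before);  // effect of one instruction
        int[] successors(int instructionIndex);      // control-flow successors
    }

    /** Accepts iff for every instruction i and every successor j: transfer(i, cert[i]) <= cert[j]. */
    static <S> boolean check(Lattice<S> lattice, S[] certificate) {
        for (int i = 0; i < certificate.length; i++) {
            S after = lattice.transfer(i, certificate[i]);
            for (int j : lattice.successors(i)) {
                if (!lattice.leq(after, certificate[j])) {
                    return false; // certificate is not a valid fixpoint: reject the code
                }
            }
        }
        return true;
    }
}
```

A forged or stale certificate simply fails this check, which is why the guarantees cannot be weakened by tampering with either the certificate or the code.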

13.
This article presents a type certifying compiler for a subset of Java and proves the type correctness of the bytecode it generates in the proof assistant Isabelle. The proof is performed by defining a type compiler that emits a type certificate and by showing a correspondence between bytecode and the certificate which entails well-typing. The basis for this work is an extensive formalization of the Java bytecode type system, which is first presented in an abstract, lattice-theoretic setting and then instantiated to Java types.

14.
Abstract machines bridge the gap between the high level of programming languages and the low-level mechanisms of a real machine. The paper proposes a general abstract-machine-based framework (AMBF) to build instruction level parallelism processors using the instruction tagging technique. The constructed processor may accept code written in any (abstract or real) machine instruction set, and produce tagged machine code after data conflicts are resolved. This requires the construction of a tagging unit which emulates the sequential execution of the program using tags rather than actual values. The paper presents a Java ILP processor built using the proposed framework. The Java processor takes advantage of the tagging unit to dynamically translate Java bytecode instructions to RISC-like tag-based instructions to facilitate the use of a general-purpose RISC core and enable the exploitation of instruction level parallelism. We detail the Java ILP processor architecture and the design issues. Benchmarking of the Java processor using SpecJVM98 and Linpack shows overall ILP speedups between 78% and 173%.

15.
The creation, transformation and analysis of bytecode is widespread. Nevertheless, several problems related to the reusability and comprehensibility of the results and tools exist. In particular, the results of tools for bytecode analysis are usually represented in proprietary, tool-dependent ways, which makes it hard to build more sophisticated analyses on top of the results generated by tools for lower-level analysis. Furthermore, intermediate results, such as the results of basic control-flow and data-flow analysis, are usually not explicitly represented at all, even though they are required by many more sophisticated analyses. This lack of a common format for the well-structured representation of the (intermediate) results of code analysis makes the creation of new tools, or the integration of the results generated by different tools, costly and ineffective. To solve the highlighted problems, we propose a higher-level XML-based representation of Java bytecode which is designed as a common platform for the creation and transformation of bytecode and explicitly enables the integration of arbitrary information generated by different tools for static code analysis.

16.
Program logics for bytecode languages such as Java bytecode or the .NET CIL can be used to apply Proof-Carrying Code concepts to bytecode programs and to verify correctness properties of bytecode programs. This paper presents a Hoare-style logic for a sequential bytecode kernel language similar to Java bytecode and CIL. The logic handles object-oriented features such as inheritance, dynamic method binding, and object structures with destructive updates, as well as unstructured control flow with jumps. It is sound and complete.

17.
Bytecode instrumentation is a widely used technique to implement aspect weaving and dynamic analyses in virtual machines such as the Java virtual machine. Aspect weavers and other instrumentations are usually developed independently and combining them often requires significant engineering effort, if at all possible. In this article, we present polymorphic bytecode instrumentation (PBI), a simple but effective technique that allows dynamic dispatch amongst several, possibly independent instrumentations. PBI enables complete bytecode coverage, that is, any method with a bytecode representation can be instrumented. We illustrate further benefits of PBI with three case studies. First, we describe how PBI can be used to implement a comprehensive profiler of inter-procedural and intra-procedural control flow. Second, we provide an implementation of execution levels for AspectJ, which avoids infinite regression and unwanted interference between aspects. Third, we present a framework for adaptive dynamic analysis, where the analysis to be performed can be changed at runtime by the user. We assess the overhead introduced by PBI and provide thorough performance evaluations of PBI in all three case studies. We show that pure Java profilers like JP2 can, thanks to PBI, produce accurate execution profiles by covering all code, including the core Java libraries. We then demonstrate that PBI-based execution levels are much faster than control flow pointcuts to avoid interference between aspects and that their efficient integration in a practical aspect language is possible. Finally, we report that PBI enables adaptive dynamic analysis tools that are more reactive to user inputs than existing tools that rely on dynamic aspect-oriented programming with runtime weaving. These experiments position PBI as a widely applicable and practical approach for combining bytecode instrumentations. © 2015 The Authors. Software: Practice and Experience Published by John Wiley & Sons Ltd.

18.
Aspect-oriented programming (AOP) has been successfully applied to application code thanks to techniques such as Java bytecode instrumentation. Unfortunately, with existing AOP frameworks for Java such as AspectJ, aspects cannot be woven into the standard Java class library. This restriction is particularly unfortunate for aspects that would benefit from comprehensive aspect weaving with complete method coverage, such as profiling or debugging aspects. In this article we present MAJOR, a new tool for comprehensive aspect weaving, which ensures that aspects are woven into all classes loaded in a Java Virtual Machine, including those in the standard Java class library. MAJOR includes the pluggable module CARAJillo, which supports efficient access to a complete and customizable calling context representation. We validate our approach with three case studies. Firstly, we weave existing profiling aspects with MAJOR which otherwise would generate incomplete profiles. Secondly, we introduce an aspect for memory leak detection that also benefits from comprehensive weaving. Thirdly, we present an aspect subsuming the functionality of ReCrash, an existing tool based on low-level bytecode instrumentation techniques that generates unit tests to reproduce program failures. Our aspect-based tools are concisely implemented in a few lines of code, and leverage MAJOR and CARAJillo for comprehensive aspect weaving and for efficient access to calling context information.

19.
Ghahramani, B.; Pauley, M. A. Computer, 2003, 36(9): 109-111
Java programs are executed by a Java virtual machine (JVM), which interprets intermediate compiled bytecode that is nominally platform independent. Although early versions of Java interpreted unoptimized bytecode in a relatively unsophisticated manner, recent developments including static analysis, just-in-time compilation, JVM optimization, and instruction-level optimizations have improved execution efficiency. Consequently, Java is now competitive with C and C++ for some applications and on some platforms. Despite Java's increasing popularity, there is a lingering perception that deficiencies in the language make it unsuitable for high-performance computing. In this paper we address some of those deficiencies and discuss the suitability of using Java in a distributed environment.

20.
In this paper, we propose a solution for a worst‐case execution time (WCET) analyzable Java system: a combination of a time‐predictable Java processor and a tool that performs WCET analysis at Java bytecode level. We present a Java processor, called JOP, designed for time‐predictable execution of real‐time tasks. The execution time of bytecodes, the instructions of the Java virtual machine, is known to cycle accuracy for JOP. Therefore, JOP simplifies the low‐level WCET analysis. A method cache, which fills whole Java methods into the cache, simplifies cache analysis. The WCET analysis tool is based on integer linear programming. The tool performs the low‐level analysis at the bytecode level and integrates the method cache analysis. An integrated data‐flow analysis performs receiver‐type analysis for dynamic method dispatches and loop‐bound analysis. Furthermore, a model checking approach to WCET analysis is presented where the method cache can be exactly simulated. The combination of the time‐predictable Java processor and the WCET analysis tool is evaluated with standard WCET benchmarks and three real‐time applications. The WCET friendly architecture of JOP and the integrated method cache analysis yield tight WCET bounds. Comparing the exact, but expensive, model checking‐based analysis of the method cache with the static approach demonstrates that the static approximation of the method cache is sufficiently tight for practical purposes. Copyright © 2010 John Wiley & Sons, Ltd.
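
WCET analysis by integer linear programming is commonly an instance of the implicit path enumeration technique; a generic sketch of such a formulation (not JOP's exact constraint set) is:

```latex
% Generic IPET sketch: x_i = execution count of basic block i, c_i = its worst-case
% contribution in cycles (for JOP this would include the method-cache effects),
% f_e = execution count of control-flow edge e.
\begin{align*}
\text{maximize}\quad & \mathrm{WCET} = \sum_{i \in B} c_i \, x_i \\
\text{subject to}\quad
  & x_{\mathrm{entry}} = 1, \\
  & x_i = \sum_{e \in \mathrm{in}(i)} f_e = \sum_{e \in \mathrm{out}(i)} f_e
    && \text{(flow conservation for each block } i\text{)} \\
  & f_{\mathrm{back}(\ell)} \le n_\ell \cdot f_{\mathrm{entry}(\ell)}
    && \text{(loop bound } n_\ell \text{ for each loop } \ell\text{)}
\end{align*}
```

The maximizing solution selects the most expensive feasible path implicitly, which is why tight loop bounds and a precise per-block cycle cost (including cache effects) translate directly into tight WCET bounds.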
