Similar Documents
20 similar documents found (search time: 31 ms)
1.
In this paper, we provide a theoretical foundation for and improvements to the existing bytecode verification technology, a critical component of the Java security model, for mobile code used with the Java “micro edition” (J2ME), which is intended for embedded computing devices. In Java, remotely loaded “bytecode” class files must be bytecode verified before execution, that is, they must undergo a static type analysis that protects the platform's Java run-time system from so-called type confusion attacks such as pointer manipulation. The data flow analysis that performs the verification, however, is beyond the capacity of most embedded devices because of the memory requirements of the typical algorithm. We propose a proof-carrying code approach to data flow analysis, defining an alternative technique called “lightweight analysis” that uses the notion of a “certificate” to reanalyze a previously analyzed data flow problem, even on poorly resourced platforms. We formally prove that the technique provides the same guarantees as standard bytecode safety verification analysis, in particular that it is “tamper-proof” in the sense that the guarantees provided by the analysis cannot be broken by crafting a “false” certificate or by altering the analyzed code. We show how the Java bytecode verifier fits into this framework for an important subset of the Java Virtual Machine; we also show how the resulting “lightweight bytecode verification” technique generalizes and simulates the J2ME verifier (to be expected, as Sun's J2ME “K-Virtual Machine” verifier was directly based on an early version of this work), as well as Leroy's “on-card bytecode verifier,” which specifically targets Java Cards.
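To illustrate the core idea, here is a minimal sketch in Java of certificate-based verification for a toy stack machine. It is not the real JVM verifier, and the instruction encoding and certificate shape are assumptions for illustration only; the point is that the certificate supplies the type state at every branch target, so a single linear pass with one frame of memory replaces the iterative data flow fixpoint.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

/** Minimal sketch of certificate-based ("lightweight") verification for a toy
 *  stack machine -- not the real JVM verifier. The certificate records the
 *  operand-stack type state at every branch target. */
class LightweightVerifier {
    enum Ty { INT }
    record Insn(String op, int target) {}          // hypothetical encoding, for illustration

    static boolean verify(List<Insn> code, Map<Integer, List<Ty>> cert) {
        List<Ty> stack = new ArrayList<>();
        boolean live = true;                        // false right after an unconditional jump
        for (int pc = 0; pc < code.size(); pc++) {
            if (cert.containsKey(pc)) {
                // A forged certificate or patched code fails this consistency check.
                if (live && !stack.equals(cert.get(pc))) return false;
                stack = new ArrayList<>(cert.get(pc));
                live = true;
            } else if (!live) {
                return false;                       // unreachable code must carry a certificate entry
            }
            Insn i = code.get(pc);
            switch (i.op()) {
                case "ICONST" -> stack.add(Ty.INT);
                case "IADD" -> {                    // pops two ints, pushes one int
                    if (stack.size() < 2) return false;
                    stack.remove(stack.size() - 1);
                }
                case "GOTO" -> {
                    List<Ty> at = cert.get(i.target());
                    if (at == null || !stack.equals(at)) return false;
                    live = false;
                }
                default -> { return false; }
            }
        }
        return true;                                // a real verifier would also check method exits
    }
}
```

Note how tamper-proofness falls out of the design: a wrong certificate can only make verification fail, never make unsafe code pass, because every claimed state is checked against the state actually computed.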

2.
Mobile devices with 3G/4G networking often waste energy in the so-called “tail time,” during which the radio is kept on even though no communication is occurring. Prior work has proposed policies that reduce this energy waste by batching network requests; however, such policies are hard to apply in practice because the necessary mechanisms are missing. In response, we have developed DelayDroid, a framework that allows a developer to add the needed policy to existing, unmodified Android applications (apps) with no human effort and no SDK/OS changes. This allows such prior work (as well as our own policies) to be readily deployed and evaluated. The DelayDroid compile-time component uses static analysis and bytecode refactoring to identify method calls that send network requests and modifies such calls to detour them to the DelayDroid run-time. The run-time then applies a policy to batch them, avoiding tail-time energy waste. DelayDroid also includes a cross-app communication mechanism that supports policies optimizing across multiple apps running together, and we propose a policy that does so. We evaluated the correctness and generality of the DelayDroid mechanisms on 14 popular Android apps chosen from the Google App Store. To evaluate our proposed policy, we studied three DelayDroid-enabled apps (a weather forecaster, an email client, and a news client) running together, finding that the DelayDroid mechanisms combined with our policy can reduce 3G/4G tail-time energy waste by 36%.
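The batching idea can be made concrete with a small sketch. The class below is not DelayDroid's code; all names are assumed. Detoured network calls are queued instead of sent, and a periodic flush sends the whole batch back-to-back so the radio pays one tail-time period per batch instead of one per request.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/** Minimal sketch of a batching run-time for detoured network requests. */
class BatchingRuntime {
    private final List<Runnable> pending = new ArrayList<>();
    private final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();

    BatchingRuntime(long flushEveryMs) {
        timer.scheduleAtFixedRate(this::flush, flushEveryMs, flushEveryMs, TimeUnit.MILLISECONDS);
    }

    /** Refactored call sites detour here instead of sending immediately. */
    synchronized void submit(Runnable networkRequest) {
        pending.add(networkRequest);
    }

    /** All queued requests go out back-to-back while the radio is already on. */
    synchronized void flush() {
        for (Runnable r : pending) r.run();
        pending.clear();
    }
}
```

A fixed flush interval is only the simplest possible policy; the framework described above exists precisely so that smarter policies (including cross-app ones) can be plugged into the same mechanism.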

3.
4.
Today’s security threats, such as malware, are more sophisticated and targeted than ever, and they are growing at an unprecedented rate. Various approaches have been introduced to deal with them. One is signature-based detection, an effective and widely used method for detecting malware; however, it has a substantial problem in detecting new instances — in other words, it is only useful against malware that has already been seen. Given the rapid proliferation of malware and the heavy human effort needed to extract signatures, this approach is a tedious solution; an intelligent malware detection system is therefore required to deal with new malware threats. Most intelligent detection systems utilise data mining techniques in order to distinguish malware from benign programs. One of the pivotal phases of these systems is extracting features from malware samples and benign ones in order to build a learning model. This phase is called “malware analysis,” and it plays a significant role in these systems. Since the API call sequence is an effective feature for recognising unknown malware, this paper focuses on extracting this feature from executable files. There are two major kinds of approach to analysing an executable file. The first, “static analysis,” analyses a program at the source code level; the second, “dynamic analysis,” extracts features by observing the program’s activities, such as system requests, during its execution. Static analysis has to traverse the program’s execution paths in order to find called APIs, but because it lacks sufficient information about the decision-making points in the executable, it cannot extract the real sequence of called APIs. Dynamic analysis does not have this drawback, but it suffers from execution overhead, so the feature extraction phase takes noticeable time. In this paper, a novel hybrid approach, HDM-Analyser, is presented that combines the advantages of dynamic and static analysis to raise speed while preserving accuracy at a reasonable level. HDM-Analyser is able to predict the majority of decision-making points by utilising statistical information gathered by dynamic analysis; therefore, there is no execution overhead at scan time. The main contribution of this paper is taking the accuracy advantage of dynamic analysis and incorporating it into static analysis in order to augment the latter's accuracy. In effect, the execution overhead is paid once in the learning phase, and is not imposed on the feature extraction phase performed during scanning. The experimental results demonstrate that HDM-Analyser attains better overall accuracy and time complexity than static and dynamic analysis methods alone.
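The hybrid step can be sketched as follows (names and structures are hypothetical, not HDM-Analyser's code): walk the statically recovered control-flow graph, but at each decision point follow the branch that dynamic profiling observed most often, recovering an approximate API call sequence without executing the binary.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

/** Sketch of profile-guided static traversal for API call sequence extraction. */
class HybridApiExtractor {
    // apiCall may be null (no call at this node); a successor of -1 means "none".
    record Node(String apiCall, int takenSucc, int notTakenSucc) {}

    static List<String> extract(List<Node> cfg, Map<Integer, Double> takenProb, int maxSteps) {
        List<String> calls = new ArrayList<>();
        int pc = 0;
        for (int step = 0; step < maxSteps && pc >= 0 && pc < cfg.size(); step++) {
            Node n = cfg.get(pc);
            if (n.apiCall() != null) calls.add(n.apiCall());
            if (n.notTakenSucc() < 0) {
                pc = n.takenSucc();                 // straight-line code
            } else {                                 // decision point: use the learned statistics
                pc = takenProb.getOrDefault(pc, 0.5) >= 0.5 ? n.takenSucc() : n.notTakenSucc();
            }
        }
        return calls;
    }
}
```

The `maxSteps` bound stands in for loop handling, which a real extractor would treat more carefully; the essential trade-off is visible, though: profiling cost is paid once when `takenProb` is learned, not at every scan.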

5.
We propose a framework, based on an original generation and use of algorithmic skeletons, dedicated to the speculative parallelization of scientific nested-loop kernels; it applies polyhedral transformations to the target code at run-time in order to exhibit parallelism and data locality. Parallel code generation is achieved at almost no cost by using binary algorithmic skeletons that are generated at compile time and that embed the original code together with operations devoted to instantiating a polyhedral parallelizing transformation and verifying the speculations on dependences. The skeletons are patched at run-time to generate the executable code. The run-time process includes a transformation selection guided by online profiling phases on short samples, using an instrumented version of the code. During this phase, the accessed memory addresses are used to compute dependence distance vectors on the fly, and are also interpolated to build a predictor of the forthcoming accesses. The interpolating functions and distance vectors are then employed for dependence analysis to select a parallelizing transformation that, if the prediction is correct, does not induce any rollback during execution. To keep the rollback overhead low, the code is executed in successive slices of the outermost original loop of the nest. Each slice can be either a parallel version that instantiates a skeleton, the sequential original version, or an instrumented version. Moreover, such slicing of the execution provides the opportunity to transform the code differently, adapting to the observed execution phases by patching one of the pre-built skeletons differently. The framework has been implemented with extensions of the LLVM compiler and an x86-64 runtime system. Significant speed-ups are shown on a set of benchmarks that could not have been handled efficiently by a compiler.
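A toy version of the profiling step is sketched below, under the assumption (made for illustration only) that accesses are interpolated linearly: if the addresses touched by a memory instruction across sampled iterations fit a line base + stride * i, the interpolating function predicts forthcoming accesses and feeds the dependence test that selects a speculative transformation.

```java
/** Sketch of linear interpolation of profiled memory accesses. */
class AccessInterpolator {
    /** Returns the constant stride, or null if the samples are not affine in the iteration counter. */
    static Long stride(long[] sampledAddrs) {
        if (sampledAddrs.length < 2) return null;
        long d = sampledAddrs[1] - sampledAddrs[0];
        for (int i = 2; i < sampledAddrs.length; i++)
            if (sampledAddrs[i] - sampledAddrs[i - 1] != d) return null;
        return d;
    }

    /** Predicted forthcoming access -- valid only while the speculation holds. */
    static long predict(long base, long stride, long iteration) {
        return base + stride * iteration;
    }
}
```

When a later slice violates the prediction, the framework described above rolls back only that slice and re-profiles, which is why the slicing of the outermost loop keeps the overhead bounded.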

6.
Parallel execution of Prolog programs using a network of workstations   (cited 1 time: 0 self-citations, 1 by others)
Tao Jie, Ju Jiubin. Journal of Software, 1994, 5(11): 38–43
This paper describes DC-PROLOG, a distributed C-PROLOG interpretation system implemented on a network of SUN workstations. DC-PROLOG automatically converts the sequential interpretation of its application programs into parallel interpretation, and it can make full use of idle memory resources across the network to solve large problems, enabling tasks that cannot be executed on a single machine because of insufficient memory to be carried out.

7.
8.
A run-time monitor shares computational resources, such as memory and CPU time, with the target program. Furthermore, heavy computation performed by a monitor to check the target program's execution against requirement properties can become a bottleneck for the target program's execution. The computational characteristics of run-time monitoring therefore have a significant impact on the target program. We investigate three computational issues in run-time monitoring. The first is the power of run-time monitoring: that is, the class of properties run-time monitoring can evaluate. The second is the computational complexity of evaluating properties written in a process algebraic language. Third, we discuss sound abstraction of the target program's execution, which does not change the result of property evaluation; this abstraction can be used as a technique to reduce monitoring overhead. The theoretical understanding obtained from these issues informs the implementation of Java-MaC, a toolset for the run-time monitoring and checking of Java programs. Finally, we demonstrate the abstraction-based overhead reduction technique implemented in Java-MaC through a case study.
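The soundness requirement on abstraction can be illustrated with a small sketch (names assumed; this is not the Java-MaC API): only events that can change the monitored property's verdict are forwarded to the monitor, so the evaluation result is unchanged while the volume of monitored events drops.

```java
import java.util.Set;
import java.util.function.Consumer;

/** Sketch of a sound event abstraction: drop updates the property cannot observe. */
class AbstractingEventFilter {
    private final Set<String> relevantVariables;   // the variables the property mentions
    private final Consumer<String> monitor;        // downstream property checker

    AbstractingEventFilter(Set<String> relevantVariables, Consumer<String> monitor) {
        this.relevantVariables = relevantVariables;
        this.monitor = monitor;
    }

    /** Called on every instrumented update; irrelevant updates never reach the monitor. */
    void onUpdate(String variable, Object newValue) {
        if (relevantVariables.contains(variable))
            monitor.accept(variable + "=" + newValue);
    }
}
```

The filter is sound precisely because the dropped events are, by construction, invisible to the property being evaluated.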

9.
To address the problem of managing virtual private networks, a management model based on hierarchical policies is proposed. The model defines two levels of management policy: global group policies and local device execution policies. A group policy specifies communication protection goals in an abstract way, while an execution policy describes the security tunnels and data processing behaviour associated with a specific device. For a given device, the corresponding execution policy is generated automatically by expanding the group policy, and it can be adjusted dynamically at run-time to adapt to network changes. The hierarchical structure of the management policies allows the model to manage all VPN devices in the network uniformly through concise high-level policies, which effectively reduces the management burden of the VPN network and gives the system good scalability.
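The two-level refinement can be sketched as follows (all names and the full-mesh expansion rule are assumptions for illustration): one abstract group policy is expanded into per-device execution policies, one tunnel per peer, so the whole VPN is managed from a single high-level rule.

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of expanding a group policy into device-level execution policies. */
class PolicyRefiner {
    record GroupPolicy(List<String> members, String protection) {}       // e.g. "ESP-AES" (hypothetical)
    record ExecPolicy(String device, String peer, String protection) {}  // one concrete tunnel

    static List<ExecPolicy> refine(GroupPolicy g, String device) {
        List<ExecPolicy> out = new ArrayList<>();
        for (String peer : g.members())
            if (!peer.equals(device))               // one secure tunnel to every other member
                out.add(new ExecPolicy(device, peer, g.protection()));
        return out;                                 // regenerated whenever membership changes
    }
}
```

Because the execution policies are derived rather than hand-written, a membership or protection change in the group policy propagates to every device by re-running the refinement.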

10.
Context: Declarative business processes are commonly used to describe permitted and prohibited actions in a business process. However, most current proposals of declarative languages fail in three respects: (1) they tend to be oriented only towards the execution order of the activities; (2) optimization is aimed only at minimizing the execution time or the resources used in the business process; and (3) commercial Business Process Management Systems lack the capacity to execute declarative models. Objective: This contribution aims to take these three aspects into account by means of: (1) the formalization of a hybrid model oriented towards optimizing the outcome data, combining a data-oriented declarative specification with a control-flow-oriented imperative specification; and (2) the automatic creation from this hybrid model of an imperative model that is executable in a standard Business Process Management System. Method: An approach based on the definition of a hybrid business process, using a constraint programming paradigm, is presented. This approach enables the optimized outcome data to be obtained at run-time for the various instances. Results: A language capable of defining a hybrid model is provided and applied to a case study. Likewise, the automatic creation of an executable constraint satisfaction problem is addressed, whose resolution yields the optimized outcome data. A brief computational study is also shown. Conclusion: A hybrid business process is defined for specifying the relationships between the declarative data components and the imperative control-flow components of a business process. In addition, the way in which this hybrid model automatically creates an entirely imperative model at design time is also defined. The resulting imperative model, executable in any commercial Business Process Management System, can obtain the optimized outcome data of the process at execution time.
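As a toy illustration of the constraint-programming idea (the objective and constraints below are hypothetical, not the paper's language), the declarative part states what an optimal outcome looks like, and a solver searches the space of assignments; the imperative flow then executes with the optimized values.

```java
/** Brute-force sketch of outcome-data optimization over a tiny declarative model. */
class OutcomeOptimizer {
    // Maximize outcome = 3x + 2y  subject to  x + y <= 10  and  x <= 6  (hypothetical model).
    static int[] solve() {
        int bestX = 0, bestY = 0, best = Integer.MIN_VALUE;
        for (int x = 0; x <= 6; x++)
            for (int y = 0; x + y <= 10; y++)
                if (3 * x + 2 * y > best) { best = 3 * x + 2 * y; bestX = x; bestY = y; }
        return new int[] { bestX, bestY, best };    // here: x=6, y=4, outcome=26
    }
}
```

A real constraint solver replaces the nested loops, but the division of labour is the same: declarative constraints select the values, and the generated imperative model consumes them.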

11.
Context: Fault handling is a very important aspect of business process functioning. However, fault handling has thus far been addressed statically, requiring fault handlers and handling logic to be defined at design time, which takes a great deal of effort, is error-prone, and is relatively difficult to maintain and extend. It is sometimes even impossible to define all fault handlers at design time. Objective: To address this issue, we describe a novel context-aware architecture for fault handling in executable business processes, which enables dynamic fault handling during business process execution. Method: We analysed the fault handling shortcomings of WS-BPEL. We designed an artifact that complements statically defined fault handling in such a way that faults can be defined dynamically at business process run-time. We evaluated the artifact by analysing system performance and comparing it against a set of well-known workflow exception handling patterns. Results: We designed an artifact that comprises an Observer component, an Exception Handler Bus, an Exception Knowledge Base, and a Solution Repository. A system performance analysis shows significantly decreased repair times when context-aware activities are used. We show that the designed artifact extends the range of supported workflow exception handling patterns. Conclusion: The artifact presented in this research considerably improves on static fault handling, as it enables dynamic resolution of semantically similar faults with continuous enhancement of fault handling at run-time. It also results in broader support of workflow exception handling patterns.
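A minimal sketch of the run-time side follows; the component names come from the abstract, but the method names and the crude prefix-based "semantic similarity" are assumptions made to keep the example self-contained. A fault reported by the observer is matched against the closest known fault class, and the solution repository supplies a remedy; new solutions can be registered while the process runs.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Sketch of dynamic fault resolution via a run-time solution repository. */
class ExceptionHandlerBus {
    private final Map<String, Runnable> solutionRepository = new ConcurrentHashMap<>();

    /** Solutions may be added during process execution, not only at design time. */
    void registerSolution(String faultClass, Runnable remedy) {
        solutionRepository.put(faultClass, remedy);
    }

    /** Crude similarity stand-in: the longest registered prefix of the fault name wins. */
    boolean handle(String fault) {
        String best = null;
        for (String known : solutionRepository.keySet())
            if (fault.startsWith(known) && (best == null || known.length() > best.length()))
                best = known;
        if (best == null) return false;            // escalate to a static handler or a human
        solutionRepository.get(best).run();
        return true;
    }
}
```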

12.
M. K. Crowe. Software, 1987, 17(7): 455–467
A system for dynamic compilation under the Unix operating system is described. The basis of the system is an incremental assembler that can be used statically or during program execution to insert or replace a module in an executable image. All cross-module references go via offsets into a run-time symbol table, and all generated code is independent of its own location and of the location of the symbol table. The symbol table and all modules reside in memory segments compatible with the memory allocator malloc(). The symbol table origin is maintained in a processor register. Library procedures allow the assembler (or C compiler) to be called to alter the currently executing program, or to place a stub function that acts as a trap, so that when the stub is invoked it causes a file to be dynamically compiled into the executing program, replacing the stub with a bona fide procedure. This facilitates the construction of advanced interactive environments using native code. Some example applications, to Prolog and to incremental compilation, are considered.
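The paper works at the native-code level; the following is only a Java analogy of its stub mechanism, with all names invented for the sketch. The published entry point starts as a trap that "compiles" (here: builds) the real procedure on first call and then replaces itself, so later calls run the bona fide code directly.

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.IntUnaryOperator;

/** Sketch of a self-replacing stub: first call triggers compilation, later calls are direct. */
class StubbedProcedure {
    private final AtomicReference<IntUnaryOperator> impl = new AtomicReference<>();

    StubbedProcedure() {
        impl.set(x -> {
            IntUnaryOperator real = compile();     // stand-in for invoking the incremental assembler
            impl.set(real);                        // patch the entry point in place
            return real.applyAsInt(x);
        });
    }

    int call(int x) { return impl.get().applyAsInt(x); }

    private static IntUnaryOperator compile() { return y -> y * y; }  // hypothetical module body
}
```

In the described system the same effect is achieved by overwriting the stub's slot in the run-time symbol table, which is why all generated code must be position-independent with respect to that table.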

13.
Because of limitations in their system architecture and processor performance, traditional PLC systems often suffer inertial shutdowns after running industrial control tasks for some time, disrupting industrial production. This paper proposes an embedded safety PLC architecture based on a high-performance ARM+FPGA dual-processor model, which can greatly reduce the probability of system failure and improve the reliability of industrial control. The system consists of two parts, a hardware architecture and a software system. The hardware adopts a 1oo2D dual-channel heterogeneous redundant safety architecture: both channels are equipped with safety circuits, and a safety diagnostic circuit between the two processors performs cross-checking to judge whether the system is operating normally. The software consists mainly of a compilation system and an execution system: the compilation system translates the written PLC program into machine-executable code, also called the target code, which the execution system then executes.
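The 1oo2D cross-check can be illustrated with a toy sketch (the logic is simplified and all names are assumed): each channel computes its outputs independently, and the diagnostic compares them, driving the process to a safe state on any disagreement or self-test failure.

```java
/** Toy illustration of a 1oo2D (one-out-of-two with diagnostics) output vote. */
class OneOutOfTwoD {
    interface Channel { int compute(int input); boolean selfTestOk(); }

    static int safeOutput(Channel a, Channel b, int input, int safeState) {
        if (!a.selfTestOk() || !b.selfTestOk()) return safeState;  // a diagnostic trips the channel
        int ra = a.compute(input), rb = b.compute(input);
        return ra == rb ? ra : safeState;                          // cross-comparison between channels
    }
}
```

Heterogeneous redundancy (ARM on one channel, FPGA on the other) matters here because a common design flaw is unlikely to produce the same wrong answer on both channels.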

14.
A search methodology with goal-state optimization under computational resource constraints is proposed. The combination of an extended graph-search methodology with the parallelization of task execution and online planning makes it possible to solve the problem. The uncertainty of task execution times is also considered. The problem can be solved by utilizing random-based and/or greedy-based graph-searching strategies. The proposed method is evaluated on a rearrangement problem with 20 movable objects and uncertain task execution times, and its effectiveness is shown through simulation results.

15.
This article proposes an approach for the real-time monitoring of risks in executable business process models. The approach considers risks in all phases of the business process management lifecycle, from process design, where risks are defined on top of process models, through to process diagnosis, where risks are detected during process execution. The approach has been realized via a distributed, sensor-based architecture. At design-time, sensors are defined to specify risk conditions which, when fulfilled, are a likely indicator that negative process states (faults) will eventuate. Both historical and current process execution data can be used to compose such conditions. At run-time, each sensor independently notifies a sensor manager when a risk is detected. In turn, the sensor manager interacts with the monitoring component of a business process management system to report the risks to process administrators, who may take remedial action. The proposed architecture has been implemented on top of the YAWL system and evaluated through performance measurements and usability tests with students. The results show that risk conditions can be computed efficiently and that the approach is perceived as useful by the test participants.
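A minimal sketch of the sensor idea follows, with assumed names (the real implementation sits on top of YAWL): each sensor evaluates its risk condition over the available execution data and independently notifies the manager when the condition holds.

```java
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

/** A risk sensor: a named condition over process execution data, fixed at design time. */
class RiskSensor {
    final String risk;
    final Predicate<Map<String, Object>> condition;

    RiskSensor(String risk, Predicate<Map<String, Object>> condition) {
        this.risk = risk;
        this.condition = condition;
    }
}

/** Sketch of the manager that sensors report to. */
class SensorManager {
    void poll(List<RiskSensor> sensors, Map<String, Object> executionData) {
        for (RiskSensor s : sensors)
            if (s.condition.test(executionData))    // a likely precursor of a fault
                notifyAdministrator(s.risk);
    }

    void notifyAdministrator(String risk) { System.out.println("RISK: " + risk); }
}
```

Because each sensor is independent, conditions can mix current instance data with historical data without the sensors needing to know about one another.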

16.
Binary translation and dynamic optimization are widely used to provide compatibility between legacy and promising upcoming architectures at the level of executable binary code. Dynamic optimization is one of the key contributors to the performance of a dynamic binary translation system. At the same time, it can be a major source of overhead, both in CPU cycles and in whole-system latency, since optimization time is included in the execution time of the application under translation. One solution that eliminates dynamic optimization overhead is to perform optimization simultaneously with execution, in a separate thread. In this paper we present an implementation of this technique in a full-system dynamic binary translator. For this purpose, an infrastructure for multithreaded execution was added to the binary translation system, allowing dynamic optimization to run in a separate thread independently of, and concurrently with, the main thread executing the binary code under translation. Depending on the computational resources available, this is achieved either by interleaving the two threads on a single processor core or by moving the optimization thread to an underutilized processor core. In the first case, the latency that computationally intensive dynamic optimization introduces into the system is reduced; in the second, overlapping the execution and optimization threads also eliminates optimization time from the total execution time of the original binary code.
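The decoupling can be sketched as follows (all structures assumed; this is not the translator's code): the execution thread keeps running quickly-translated code while a separate thread pulls hot regions off a queue, optimizes them, and atomically publishes the better version for later iterations.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

/** Sketch of overlapping execution with dynamic optimization in a second thread. */
class ConcurrentOptimizer {
    final ConcurrentHashMap<Long, Runnable> codeCache = new ConcurrentHashMap<>();
    final BlockingQueue<Long> hotRegions = new LinkedBlockingQueue<>();

    /** Execution thread: run whatever translation is currently published. */
    void execute(long pc, Runnable quickTranslation) {
        codeCache.putIfAbsent(pc, quickTranslation);
        codeCache.get(pc).run();
        hotRegions.offer(pc);                        // report the profile; never block execution
    }

    /** Optimization thread: overlaps with execution instead of pausing it. */
    void optimizerLoop() throws InterruptedException {
        while (true) {
            long pc = hotRegions.take();
            Runnable optimized = optimize(codeCache.get(pc));
            codeCache.put(pc, optimized);            // the next execution of pc picks this up
        }
    }

    Runnable optimize(Runnable region) { return region; }  // stand-in for the real optimizer
}
```

On a multicore host the optimizer thread lands on an idle core and its cost disappears from the translated program's critical path, which is exactly the second case described above.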

17.
The issue of software partition deals with the process of mapping a given set of logical modules, which reflect the user's point of view, into a set of software tasks, which reflect the software implementor's point of view. It is shown in this paper that the software partitioning problem can be modeled as one of maximizing the efficiency of resource utilization while observing constraints on CPU throughput, available memory space, maximally allowed task execution time, and the order of module execution. The CPU and memory constraints are implementation-dependent; the maximum task execution time constraint stems from considerations of response-time performance; and the constraint on module execution order is a logical one, shown here to have a significant performance impact. It is proven that by employing the module precedence relation, which reflects the sequence of module execution, the order of module execution can be properly maintained during the software partitioning process, and thus the defined tasks can be guaranteed to be completely executable once properly activated. With completely executable tasks, the operating overhead cost and the response-time delay can be minimized. Four module precedence relations are explored: precede, succeed, parallel, and precede-as-well-as-succeed. The validity of the chosen partitioning criterion, maximizing resource utilization efficiency, is also assessed through simulation experiments, whose results show that the criterion's performance is insensitive to the application environment as well as to the application requirements.
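A toy feasibility check for this model is sketched below (all bounds and the encoding of the precede relation as an integer rank are assumptions for illustration): a candidate task, i.e. an ordered group of modules, is kept only if it satisfies the CPU, memory, and execution-time constraints and does not violate the module execution order.

```java
import java.util.List;

/** Sketch of a constraint check used while partitioning modules into tasks. */
class PartitionChecker {
    // "order" encodes the precede relation: a module may not follow one with a higher rank.
    record Module(String name, double cpu, double mem, double time, int order) {}

    static boolean feasibleTask(List<Module> task, double cpuCap, double memCap, double maxTime) {
        double cpu = 0, mem = 0, time = 0;
        int lastOrder = Integer.MIN_VALUE;
        for (Module m : task) {
            cpu += m.cpu(); mem += m.mem(); time += m.time();
            if (m.order() < lastOrder) return false;   // would violate module execution order
            lastOrder = m.order();
        }
        return cpu <= cpuCap && mem <= memCap && time <= maxTime;
    }
}
```

An optimizer would enumerate or search candidate groupings, keep only feasible ones, and pick the grouping that maximizes resource utilization efficiency.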

18.
Current fully automatic model-based test-case generation techniques for GUIs employ a static model. They are therefore unable to leverage certain state-based relationships between GUI events (e.g., one enables the other, or one alters the other's execution) that are revealed only at run-time and are non-trivial to infer statically. We present ALT, a new technique that generates GUI test cases in batches. Because of its “alternating” nature, ALT enhances the next batch using GUI run-time information from the current batch. An empirical study on four fielded GUI-based applications demonstrated that ALT was able to detect new 4- and 5-way GUI interaction faults; in contrast, previous techniques, because they would have required too many test cases, were unable to test 4- and 5-way GUI interactions at all.

19.
A framework for the automatic parallelization of (constraint) logic programs is proposed and proved correct. Intuitively, the parallelization process replaces conjunctions of literals with parallel expressions, which trigger at run-time the exploitation of restricted, goal-level, independent and-parallelism. The parallelization process performs two steps. The first builds a conditional dependency graph (which can be simplified using compile-time analysis information), while the second transforms the resulting graph into linear conditional expressions, the parallel expressions of the &-Prolog language. Several heuristic algorithms for this latter (“annotation”) step are proposed and proved correct. Algorithms are also given that determine whether any parallelism is lost in the linearization process with respect to a proposed notion of maximal parallelism. Finally, a system implementing the proposed approach is presented. The performance of the different annotation algorithms is compared experimentally in this system by studying the time spent in parallelization and the effectiveness of the results in terms of speedups.
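As a toy rendering of the annotation step (strings only, not a real compiler pass), two goals are joined with &-Prolog's parallel operator when compile-time analysis proves independence, and are guarded by a run-time independence check — one of the "linear conditional expressions" mentioned above — when independence cannot be decided statically.

```java
/** Sketch of emitting &-Prolog parallel expressions from an independence verdict. */
class Annotator {
    /** provenIndependent: TRUE / FALSE if analysis decided; null if unknown at compile time. */
    static String annotate(String goalA, String goalB, Boolean provenIndependent, String sharedVars) {
        if (Boolean.TRUE.equals(provenIndependent))
            return goalA + " & " + goalB;                        // unconditional parallelism
        if (Boolean.FALSE.equals(provenIndependent))
            return goalA + ", " + goalB;                         // must stay sequential
        // Unknown: emit a conditional expression checked at run-time.
        return "(indep(" + sharedVars + ") -> " + goalA + " & " + goalB
             + " ; " + goalA + ", " + goalB + ")";
    }
}
```

The heuristics compared in the paper differ in how they order and group goals so that as few of these run-time checks as possible are needed without losing parallelism.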

20.
Product data management is the key to an integrated CAD/CAPP/CAM system. This paper first describes the development of a STEP-based CAD/CAPP/CAM system, then presents the development strategy for its data management software, the system's architecture and features, and several issues arising in its application within the integrated system; finally, an application example is given.
