Similar Documents
20 similar documents found (search time: 562 ms)
1.
This paper presents new approaches to the validation of loop optimizations that compilers use to obtain the highest performance from modern architectures. Rather than verify the compiler, the approach of translation validation performs a validation check after every run of the compiler, producing a formal proof that the produced target code is a correct implementation of the source code. As part of an active and ongoing research project on translation validation, we have previously described approaches for validating optimizations that preserve the loop structure of the code and have presented a simulation-based general technique for validating such optimizations. In this paper, for more aggressive optimizations that alter the loop structure of the code—such as distribution, fusion, tiling, and interchange—we present a set of permutation rules which establish that the transformed code satisfies all the implied data dependencies necessary for the validity of the considered transformation. We describe the extensions to our tool voc-64 which are required to validate these structure-modifying optimizations. This paper also discusses preliminary work on run-time validation of speculative loop optimizations. This involves using run-time tests to ensure the correctness of loop optimizations whose correctness cannot be guaranteed at compile time. Unlike compiler validation, run-time validation must not only determine when an optimization has generated incorrect code, but also recover from the optimization without aborting the program or producing an incorrect result. This technique has been applied to several loop optimizations, including loop interchange and loop tiling, and appears to be quite promising. This research was supported in part by NSF grant CCR-0098299, ONR grant N00014-99-1-0131, and the John von Neumann Minerva Center for Verification of Reactive Systems.
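
A minimal sketch of the kind of structure-modifying transformation these permutation rules cover (illustrative C, not from the paper; the array, bounds, and the equivalence check in main are assumptions made for the example): a loop interchange that is valid because no data dependence constrains the iteration order.

    #include <stdio.h>

    #define N 4
    #define M 6

    /* Original nest: visits (i, j) in row-major order. */
    static void original(double a[N][M]) {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < M; j++)
                a[i][j] = a[i][j] * 2.0 + 1.0;  /* no loop-carried dependence */
    }

    /* Interchanged nest: visits (j, i).  The permutation satisfies the
     * implied data dependencies trivially, since each iteration touches
     * only its own element a[i][j]. */
    static void interchanged(double a[N][M]) {
        for (int j = 0; j < M; j++)
            for (int i = 0; i < N; i++)
                a[i][j] = a[i][j] * 2.0 + 1.0;
    }

    int main(void) {
        double x[N][M] = {{0}}, y[N][M] = {{0}};
        original(x);
        interchanged(y);
        /* Translation validation in miniature: source and target must agree. */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < M; j++)
                if (x[i][j] != y[i][j]) { puts("mismatch"); return 1; }
        puts("equivalent");
        return 0;
    }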

2.
Translation validation is an approach for validating the output of optimizing compilers. Rather than verifying the compiler itself, translation validation mandates that every run of the compiler generate a formal proof that the produced target code is a correct implementation of the source code. Speculative loop optimizations are aggressive optimizations which are only correct under certain conditions that cannot be validated at compile time. We propose using an automatic theorem prover together with the translation validation framework to automatically generate run-time tests for such speculative optimizations. This run-time validation approach must not only detect the conditions under which an optimization generates incorrect code, but also provide a way to recover from the optimization without aborting the program or producing an incorrect result. In this paper, we apply the run-time validation technique to a class of speculative reordering transformations and give some initial results of run-time tests generated by the theorem prover CVC.
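
A hedged sketch of the run-time validation idea (illustrative only; in the paper the run-time tests are generated by CVC from the actual dependence conditions, whereas the disjointness check, function names, and the reversal transformation below are assumptions made for the example):

    #include <stddef.h>
    #include <stdint.h>

    /* Original loop: reads b[idx[i]] and writes a[i].  Whether iterations
     * may be reordered depends on whether a and b overlap, which in
     * general cannot be decided at compile time. */
    static void copy_original(double *a, const double *b,
                              const size_t *idx, size_t n) {
        for (size_t i = 0; i < n; i++)
            a[i] = b[idx[i]];
    }

    /* The speculatively reordered version (here simply reversed). */
    static void copy_reordered(double *a, const double *b,
                               const size_t *idx, size_t n) {
        for (size_t i = n; i-- > 0; )
            a[i] = b[idx[i]];
    }

    void copy_speculative(double *a, const double *b,
                          const size_t *idx, size_t n) {
        /* Run-time test: the reordering is safe when the arrays are
         * disjoint in memory.  On failure, recover by running the
         * original loop rather than aborting or producing a wrong result. */
        uintptr_t alo = (uintptr_t)a, ahi = (uintptr_t)(a + n);
        uintptr_t blo = (uintptr_t)b, bhi = (uintptr_t)(b + n);
        if (ahi <= blo || bhi <= alo)
            copy_reordered(a, b, idx, n);
        else
            copy_original(a, b, idx, n);
    }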

3.
Translation validation is a technique for ensuring that a translator, such as a compiler, produces correct results. Because complete verification of the translator itself is often infeasible, translation validation advocates coupling the verification task with the translation task, so that each run of the translator produces verification conditions which, if valid, prove the correctness of the translation. In previous work, the translation validation approach was used to give a framework for proving the correctness of a variety of compiler optimizations, with a recent focus on loop transformations. However, some of these ideas were preliminary and had not been implemented. Additionally, there were examples of common loop transformations which could not be handled by our previous approaches. This paper addresses these issues. We introduce a new rule, Reduce, for loop reduction transformations, and we generalize our previous rule, Validate, so that it can handle more transformations involving loops. We then describe how all of this (including some previous theoretical work) is implemented in our compiler validation tool TVOC.
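
To make the kind of transformation the Reduce rule targets concrete, here is an illustrative loop reduction (not taken from the paper; the split into two partial sums mimics what a vectorizer does): the additions are reassociated, not merely permuted, so a permutation-style rule alone cannot justify it.

    #include <stdio.h>

    #define N 1000

    /* Source loop: a classic sum reduction. */
    static long sum_source(const long *a) {
        long s = 0;
        for (int i = 0; i < N; i++)
            s += a[i];
        return s;
    }

    /* Transformed loop: the reduction is split into two interleaved
     * partial sums, equivalent only up to associativity and
     * commutativity of +. */
    static long sum_transformed(const long *a) {
        long s0 = 0, s1 = 0;
        int i;
        for (i = 0; i + 1 < N; i += 2) {
            s0 += a[i];
            s1 += a[i + 1];
        }
        if (i < N) s0 += a[i];  /* leftover iteration when N is odd */
        return s0 + s1;
    }

    int main(void) {
        long a[N];
        for (int i = 0; i < N; i++) a[i] = i;
        printf("%ld %ld\n", sum_source(a), sum_transformed(a));
        return 0;
    }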

4.
Irregular access patterns are a major problem for today’s optimizing compilers. In this paper, a novel approach is presented that enables transformations designed for regular loop structures to be applied to linked list data structures. This is achieved by linearizing access to a linked list, after which further data restructuring can be performed. Two subsequent optimization paths are considered, annihilation and sublimation, which are driven by the regular and irregular access patterns occurring in the applications. These intermediate codes are amenable to traditional compiler optimizations targeting regular loops. In the case of sublimation, a run-time step is involved which takes the access pattern into account and thus generates a data-instance-specific optimized code. Both approaches are applied to a sparse matrix multiplication algorithm and an iterative solver: preconditioned conjugate gradient. The resulting transformed code is evaluated using the major compilers for the x86 platform, GCC and the Intel C compiler.
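
The linearization step might look roughly as follows (a C sketch under assumed names; in the paper this is a compiler transformation with run-time support, not a library routine): once the payload is contiguous, the traversal becomes a counted loop that regular-loop optimizations can attack.

    #include <stdlib.h>

    struct node { double val; struct node *next; };

    /* Copy the list payload into a contiguous array. */
    static double *linearize(const struct node *head, size_t *out_n) {
        size_t n = 0;
        for (const struct node *p = head; p; p = p->next) n++;
        double *a = malloc(n * sizeof *a);
        if (!a) { *out_n = 0; return NULL; }
        size_t i = 0;
        for (const struct node *p = head; p; p = p->next) a[i++] = p->val;
        *out_n = n;
        return a;
    }

    /* A pointer-chasing traversal such as
     *     for (p = head; p; p = p->next) s += p->val;
     * then becomes a regular counted loop amenable to unrolling and tiling:
     *     for (i = 0; i < n; i++) s += a[i];
     */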

5.
Reference [1] proposed a data fusion optimization for merging arrays and evaluated its effect on an IA-32 server platform. The results showed that on IA-32 machines, data fusion optimization, guided by a performance cost model, can substantially improve the cache utilization of applications with non-contiguous data access patterns. How, then, does data fusion optimization perform on the new-generation IA-64 architecture? This paper uses an Intel IA-32 server and an HP Itanium server as platforms, compiling and running the programs before and after the data fusion transformation with the Intel Fortran compilers ifc and efc and the free-software compiler g95, and collecting execution times and related performance data on both platforms. The results show that source-level data fusion optimization does not cooperate well with the advanced optimizations of the efc compiler on the IA-64 platform: under the O3 optimization switch, the effect of the optimization is negative. These results further indicate that advanced compiler optimizations such as data prefetching, loop transformations, and data transformations must be considered jointly with the characteristics of the target architecture in order to achieve a good global optimization effect. The paper serves as a starting point for studying the performance portability of compiler optimization algorithms designed for the IA-32 architecture when they are applied to the IA-64 architecture.
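
For readers unfamiliar with the transformation being measured, here is a C rendering of source-level data fusion between arrays (the paper's experiments use Fortran; the names and the dot-product kernel are illustrative assumptions):

    #define N 1024

    /* Before fusion: two separate arrays.  A loop touching a[i] and b[i]
     * together drives two independent cache-line streams. */
    static double a[N], b[N];

    /* After fusion: the arrays are merged so that a[i] and b[i] share a
     * cache line, improving locality for non-contiguous access patterns. */
    static struct { double a, b; } fused[N];

    double dot_unfused(void) {
        double s = 0.0;
        for (int i = 0; i < N; i++) s += a[i] * b[i];
        return s;
    }

    double dot_fused(void) {
        double s = 0.0;
        for (int i = 0; i < N; i++) s += fused[i].a * fused[i].b;
        return s;
    }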

6.
Compiling code for the Icon programming language presents several challenges, particularly in dealing with types and goal-directed expression evaluation. In order to produce optimized code, it is necessary for the compiler to know much more about operations than is necessary for the compilation of most programming languages. This paper describes the organization of the Icon compiler and the way it acquires and maintains information about operations. The Icon compiler generates C code, which makes it portable to a wide variety of platforms and also allows the use of existing C compilers for performing routine optimizations on the final code. A specially designed implementation language, which is a superset of C, is used for writing Icon's run-time system. This language allows the inclusion of information about the abstract semantics of Icon operations and their type-checking and conversion requirements. A translator converts code written in the run-time language to C code to provide an object library for linking with the code produced by the Icon compiler. The translation process also automatically produces a database that contains the information the Icon compiler needs to generate and optimize code. This approach allows easy extension of Icon's computational repertoire, alternate computational extensions, and cross compilation.

7.
Current parallelizing compilers do a reasonable job of extracting parallelism from programs with regular, well-behaved, statically analyzable access patterns. However, they cannot extract a significant fraction of the available parallelism if the program has a complex and/or statically insufficiently defined access pattern, e.g., simulation programs with irregular domains and/or dynamically changing interactions. Since such programs represent a large fraction of all applications, techniques are needed for extracting their inherent parallelism at run-time. In this paper we give a new run-time technique for finding an optimal parallel execution schedule for a partially parallel loop, i.e., a loop whose parallelization requires synchronization to ensure that the iterations are executed in the correct order. Given the original loop, the compiler generates inspector code that performs run-time preprocessing of the loop's access pattern, and scheduler code that schedules (and executes) the loop iterations. The inspector is fully parallel, uses no synchronization, and can be applied to any loop (from which an inspector can be extracted). In addition, it can implement at run-time the two most effective transformations for increasing the amount of parallelism in a loop: array privatization and reduction parallelization (elementwise). The ability to identify privatizable and reduction variables is very powerful, since it eliminates the data dependences involving these variables. An abstract of this paper has been published in Ref. 1. Research supported in part by Army contract #DABT63-92-C-0033. This work is not necessarily representative of the positions or policies of the Army or the Government. Research supported in part by Intel and NASA Graduate Fellowships. Research supported in part by an AT&T Bell Laboratories Graduate Fellowship and by the International Computer Science Institute, Berkeley, California.
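
A sequential miniature of the inspector/executor split (illustrative; the paper's inspector is itself fully parallel and also performs privatization and reduction parallelization, which this sketch omits):

    #include <stdlib.h>

    /* The loop  a[w[i]] += r[i]  may carry dependences through w[].
     * The inspector assigns each iteration to a "wavefront": iteration i
     * must run after every earlier iteration writing the same element. */
    static void inspector(const size_t *w, size_t n, size_t n_elems,
                          size_t *wave, size_t *n_waves) {
        size_t *next = calloc(n_elems, sizeof *next);  /* waves used per element */
        size_t maxw = 0;
        for (size_t i = 0; i < n; i++) {
            wave[i] = next[w[i]]++;
            if (next[w[i]] > maxw) maxw = next[w[i]];
        }
        *n_waves = maxw;
        free(next);
    }

    /* The scheduler/executor runs the wavefronts in order; within one
     * wavefront all iterations touch distinct elements, so they could be
     * executed in parallel.  (The rescan per wave is for brevity only.) */
    static void executor(double *a, const double *r, const size_t *w,
                         const size_t *wave, size_t n, size_t n_waves) {
        for (size_t k = 0; k < n_waves; k++)
            for (size_t i = 0; i < n; i++)
                if (wave[i] == k)
                    a[w[i]] += r[i];
    }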

8.
We describe our efforts to use rule-based programming to produce a model of Jumbo, a run-time program generation (RTPG) system for Java. Jumbo incorporates RTPG following the simple principle that the regular compiler — or, rather, its back-end — can be used both for ordinary, static compilation and for run-time compilation. This tends to produce a run-time compiler that is inefficient but potentially subject to improvement by partial evaluation. However, the complexity of the language and compiler has made it difficult for us to achieve actual optimization. The model, written in Maude, preserves all the essential ingredients of Jumbo, but operates on a simplified language, called Mumbo. The simplification in the language together with Maude's support for code rewriting has allowed us to make rapid progress. We discuss the model in detail, the kinds of optimizations we have obtained, and the impact on the Jumbo project.

9.
Adam is a high-level language for parallel processing. It is intended for programming resource scheduling applications, in particular supervisory packages for run-time scheduling of multiprocessing systems. An important design goal was to provide support for implementation of Ada and its run-time environment. Adam has been used to implement Ada task supervision and also as a high-level target language for compilation of Ada tasking. Adam provides facilities corresponding to the Ada sequential constructs (including subprograms, packages, exceptions, generics). In addition, it provides specialized module constructs for implementation of packages that may be shared between parallel processes, and new predefined types for scheduling. The parallel processing constructs of Adam are more primitive than Ada tasking. Strong restrictions are enforced on the ways in which parallel processes can interact. A compiler for Adam has been implemented in MacLisp on DEC PDP-10 computers. Runtime support packages in Adam for scheduling (on a single CPU) and I/O are also provided. The compiler contains a library manipulation facility for separate compilation. The Adam compiler has been used to build an Ada compiler for most of the July 1980 Ada, including task types and rendezvous constructs. This was achieved by implementing the translation of Ada tasking into Adam parallel processing as a preprocessor to the Adam compiler. This present Ada compiler, which has been operational since December 1980, uses a procedure call implementation of tasking. It can be easily modified to other implementations. Compilation of Ada tasking into a high-level target language such as Adam facilitates studying questions of correctness and efficiency of various compilation algorithms, and code optimizations specific to tasking, e.g. elimination of unnecessary threads of control. This paper gives an overview of Adam and examples of its use. Emphasis is placed on the differences from Ada. Experience using Adam to build the experimental Ada system is evaluated. Design of a run-time supervisor in Adam is discussed in detail.

10.
Previous work on compiler-based multiple instruction retry has utilized a series of compiler transformations, loop protection, node splitting, and loop expansion, to eliminate anti-dependencies of length ≤ N in the pseudo register, the machine register, and the post-pass resolver phases of compilation [1]. The results have provided a means of rapidly recovering from transient processor failures by rolling back N instructions. This paper presents techniques for improving compilation and run-time performance in compiler-based multiple instruction retry. Incremental updating enhances compilation time when new instructions are added to the program. Post-pass code rescheduling and spill register reassignment algorithms improve the run-time performance and decrease the code growth across the application programs studied. Branch hazards are shown to be resolvable by simple modifications to the incremental updating schemes during the pseudo register phase and to the spill register reassignment algorithm during the post-pass phase.

11.
We propose a framework based on an original generation and use of algorithmic skeletons, and dedicated to speculative parallelization of scientific nested loop kernels, able to apply at run-time polyhedral transformations to the target code in order to exhibit parallelism and data locality. Parallel code generation is achieved almost at no cost by using binary algorithmic skeletons that are generated at compile-time, and that embed the original code and operations devoted to instantiate a polyhedral parallelizing transformation and to verify the speculations on dependences. The skeletons are patched at run-time to generate the executable code. The run-time process includes a transformation selection guided by online profiling phases on short samples, using an instrumented version of the code. During this phase, the accessed memory addresses are used to compute on-the-fly dependence distance vectors, and are also interpolated to build a predictor of the forthcoming accesses. Interpolating functions and distance vectors are then employed for dependence analysis to select a parallelizing transformation that, if the prediction is correct, does not induce any rollback during execution. In order to ensure that the rollback time overhead stays low, the code is executed in successive slices of the outermost original loop of the nest. Each slice can be either a parallel version which instantiates a skeleton, a sequential original version, or an instrumented version. Moreover, such slicing of the execution provides the opportunity of transforming differently the code to adapt to the observed execution phases, by patching differently one of the pre-built skeletons. The framework has been implemented with extensions of the LLVM compiler and an x86-64 runtime system. Significant speed-ups are shown on a set of benchmarks that could not have been handled efficiently by a compiler.
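
The online profiling phase can be pictured as follows (a heavily simplified sketch; the framework instruments binary skeletons and also builds dependence distance vectors, which this fragment does not show): sampled addresses are checked against an affine model addr(i) = base + stride * i, and the interpolating function, when it holds, predicts the forthcoming accesses fed to the dependence test.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        uintptr_t base;    /* address at the first sampled iteration */
        intptr_t  stride;  /* difference between consecutive samples */
        bool      affine;  /* does the affine model still hold? */
        size_t    count;   /* samples seen so far */
    } access_model;

    static void observe(access_model *m, uintptr_t addr) {
        if (m->count == 0) {
            m->base = addr;
            m->affine = true;
        } else if (m->count == 1) {
            m->stride = (intptr_t)(addr - m->base);
        } else if (m->affine &&
                   addr != m->base + (uintptr_t)(m->stride * (intptr_t)m->count)) {
            m->affine = false;  /* not linear: stay instrumented or sequential */
        }
        m->count++;
    }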

12.
The NYU Tvoc project applies the method of translation validation to verify that optimized code is semantically equivalent to the unoptimized code, by establishing, for each run of the optimizing compiler, a set of verification conditions (VCs) whose validity implies the correctness of the optimized run. The core of Tvoc is Tvoc-sp, which handles structure-preserving optimizations, i.e., optimizations that do not alter the inner loop structures. The underlying proof rule, Val, on whose soundness Tvoc-sp is based, requires, among other things, generating invariants at each “cutpoint” of the control graph of both source and target codes. The current implementation of Tvoc-sp employs somewhat naïve fix-point computations to obtain the invariants. In this paper, we propose an alternative method to compute invariants which is based on simple data-flow analysis techniques.

13.
CHAU-WEN TSENG, Software, 1997, 27(7): 763-796
Fortran D is a version of Fortran enhanced with data decomposition specifications. Case studies illustrate strengths and weaknesses of the prototype Fortran D compiler when compiling linear algebra codes and whole programs. Statement groups, execution conditions, inter-loop communication optimizations, multi-reductions, and array kills for replicated arrays are identified as new compilation issues. On the Intel iPSC/860, the output of the prototype Fortran D compiler approaches the performance of hand-optimized code for parallel computations, but needs improvement for linear algebra and pipelined codes. The Fortran D compiler outperforms the CM Fortran compiler (2.1 beta) by a factor of four or more on the TMC CM-5 when not using vector units. Its performance is comparable to the DEC and IBM HPF compilers on an Alpha cluster and SP-2. Better analysis, run-time support, and flexibility are required for the prototype compiler to be useful for a wider range of programs. © 1997 John Wiley & Sons, Ltd.

14.
A New Run-Time Method for Eliminating Pointer Alias Ambiguity
This paper proposes SHRTD (software/hardware run-time disambiguation), a new combined software/hardware method for eliminating pointer alias ambiguity at run time. To delay memory accesses that turn out to be incorrect at run time, together with their dependent operations, the functional units in SHRTD execute NOP operations. To guarantee a consistent execution order for all delayed operations, the order of the functional units that execute NOPs and the number of NOP operations are determined at compile time. The SHRTD method is applicable to irreversible code; its code-space overhead is bounded, and it does not suffer from serious code re-entrancy problems. The new method effectively solves the pointer alias problem and makes it possible to exploit the potential speedup of instruction-level parallelism.

15.
Overlapping communication with computation is a well-known approach to improving performance. Previous research has focused on optimizations performed by the programmer. This paper presents a compiler algorithm that automatically determines the appropriate loop indices of a given nested loop and applies loop interchange and tiling in order to overlap communication with computation. The algorithm avoids generating redundant communication by providing a framework for combining information on data dependence, communication, and reuse. It also describes a method of generating messages to exchange data between processors for tiled loops on distributed memory machines. The algorithm has been implemented in our High Performance Fortran (HPF) compiler, and experimental results have shown its effectiveness on distributed memory machines, such as the RISC System/6000 Scalable POWERparallel System. This paper also discusses the architectural problems of efficient optimization.
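
The communication/computation overlap the algorithm arranges can be sketched as follows (illustrative MPI code rather than the paper's HPF-compiler output; the halo buffer, neighbor ranks, and the degenerate one-element boundary exchange are assumptions made to keep the fragment short, and the stencil numerics are simplified):

    #include <mpi.h>

    #define N    4096
    #define TILE 512

    void compute_tiled(double *a, double *halo, int left, int right) {
        for (int t = 0; t < N; t += TILE) {
            MPI_Request rq[2];
            /* Post the boundary exchange for this tile... */
            MPI_Irecv(halo,  1, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &rq[0]);
            MPI_Isend(&a[t], 1, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &rq[1]);

            /* ...and overlap: the tile interior needs no remote data, so
             * it is computed while the messages are in flight. */
            for (int i = t + 1; i < t + TILE - 1; i++)
                a[i] = 0.5 * (a[i - 1] + a[i + 1]);

            /* Only the tile boundary has to wait for the halo. */
            MPI_Waitall(2, rq, MPI_STATUSES_IGNORE);
            a[t] = 0.5 * (halo[0] + a[t + 1]);
        }
    }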

16.
Simultaneous Multithreading (SMT) is a processor architectural technique that promises to significantly improve the utilization and performance of modern wide-issue superscalar processors. An SMT processor is capable of issuing multiple instructions from multiple threads to a processor's functional units each cycle. Unlike shared-memory multiprocessors, SMT provides and benefits from fine-grained sharing of processor and memory system resources; unlike current uniprocessors, SMT exposes and benefits from inter-thread instruction-level parallelism when hiding long-latency operations. Compiler optimizations are often driven by specific assumptions about the underlying architecture and implementation of the target machine, particularly for parallel processors. For example, when targeting shared-memory multiprocessors, parallel programs are compiled to minimize sharing, in order to decrease high-cost inter-processor communication. Therefore, optimizations that are appropriate for these conventional machines may be inappropriate for SMT, which can benefit from fine-grained resource sharing within the processor. This paper reexamines several compiler optimizations in the context of simultaneous multithreading. We revisit three optimizations in this light: loop-iteration scheduling, software speculative execution, and loop tiling. Our results show that all three optimizations should be applied differently in the context of SMT architectures: threads should be parallelized with a cyclic, rather than a blocked, algorithm; non-loop programs should not be software-speculated; and compilers no longer need to be concerned about precisely sizing tiles to match cache sizes. By following these new guidelines, compilers can generate code that improves the performance of programs executing on SMT machines.
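
The loop-iteration scheduling result is easy to state in code (illustrative; thread management is elided, and t/T are the thread id and thread count): on SMT, where threads share the caches, the cyclic assignment below is the recommended default, whereas blocked assignment is the usual choice on shared-memory multiprocessors.

    #define N 1024

    /* Blocked: thread t gets one contiguous chunk of iterations. */
    void work_blocked(double *a, int t, int T) {
        int chunk = (N + T - 1) / T;
        for (int i = t * chunk; i < (t + 1) * chunk && i < N; i++)
            a[i] *= 2.0;
    }

    /* Cyclic: thread t gets iterations t, t+T, t+2T, ... */
    void work_cyclic(double *a, int t, int T) {
        for (int i = t; i < N; i += T)
            a[i] *= 2.0;
    }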

17.
Automatic scheduling for directed acyclic graphs (DAGs) and its applications to coarse-grained irregular problems such as large n-body simulation have been studied in the literature. However, solving irregular problems with mixed granularities, such as sparse matrix factorization, is challenging since it requires efficient run-time support to execute a DAG schedule. In this paper, we investigate run-time optimization techniques for executing general asynchronous DAG schedules on distributed memory machines and discuss an approach for exploiting parallelism from commuting operations in the DAG model. Our solution tightly integrates the run-time scheme with a fast communication mechanism to eliminate unnecessary overhead in message buffering and copying. We present a consistency model incorporating the above optimizations, and take advantage of task dependence properties to ensure the correctness of execution. We demonstrate the applications of this scheme in sparse matrix factorizations and triangular equation solving, for which actual speedups are difficult to obtain. We provide a detailed experimental study on the Meiko CS-2 to show that the automatically scheduled code has achieved good performance for these difficult problems, and that the run-time overhead is small compared to total execution times.
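
Executing a DAG schedule reduces to releasing each task once its predecessors finish; the paper's contribution is doing this asynchronously on distributed memory with minimal buffering, but the core bookkeeping looks roughly like this sequential sketch (the task table, the MAXT bound, and the run callbacks are assumptions of the example):

    /* Each task runs once its predecessor counter drops to zero. */
    #define MAXT 64

    typedef struct {
        void (*run)(void);
        int  npred;        /* unfinished predecessors */
        int  nsucc;
        int  succ[MAXT];   /* successor task indices */
    } task;

    void execute_dag(task *t, int n) {   /* requires n <= MAXT */
        int ready[MAXT], top = 0;
        for (int i = 0; i < n; i++)
            if (t[i].npred == 0) ready[top++] = i;   /* roots */
        while (top > 0) {
            int i = ready[--top];
            t[i].run();
            for (int k = 0; k < t[i].nsucc; k++)     /* release successors */
                if (--t[t[i].succ[k]].npred == 0)
                    ready[top++] = t[i].succ[k];
        }
    }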

18.
Using hammock graphs to structure programs
Advanced computer architectures rely mainly on compiler optimizations for parallelization, vectorization, and pipelining. Efficient code generation is based on a control dependence analysis to find the basic blocks and to determine the regions of control. However, unstructured branch statements, such as jumps and gotos, render the control flow analysis difficult and time-consuming, and result in poor code generation. Branches are part of many programming languages and occur in legacy and maintenance code as well as in assembler, intermediate languages, and byte code. A simple and effective technique is presented to convert unstructured branches into hammock graph control structures. Using three basic transformations, an equivalent program is obtained in which all control statements have a well-defined scope. In the interest of predication and branch prediction, the number of control variables has been minimized, thereby allowing a limited code replication. The correctness of the transformations has been proven using an axiomatic proof rule system. With respect to previous work, the algorithm is simpler and the branch conditions are less complex, making the program more readable and the code generation more efficient. Additionally, hammock graphs define single-entry single-exit regions and therefore allow localized optimizations. The restructuring method has been implemented in the parallelizing compiler FPT and makes it possible to extract parallelism from unstructured programs. The use of hammock graph transformations in other application areas, such as vectorization, decompilation, and assembly program restructuring, is also demonstrated.
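
The flavor of the transformations is easy to see on a tiny example (illustrative C, not the article's algorithm): an unstructured forward branch is replaced by a minimized control variable, yielding a single-entry single-exit region with a well-defined scope.

    /* Unstructured original: a forward goto. */
    int f_unstructured(int x) {
        if (x < 0) goto skip;
        x = x * 2;
    skip:
        return x + 1;
    }

    /* Structured equivalent: the branch condition becomes a control
     * variable guarding the region the goto used to jump over. */
    int f_structured(int x) {
        int take = !(x < 0);
        if (take)
            x = x * 2;
        return x + 1;
    }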

19.
Proving equivalence of programs has several important applications, including algorithm recognition, regression checking, compiler optimization verification and validation, and information flow checking. Despite being a topic with so many important applications, program equivalence checking has seen few advances over the past decades due to its inherently high complexity. In this paper, we propose, to the best of our knowledge, the first semi-algorithm for the automatic verification of partial equivalence of two programs over the combined theory of uninterpreted function symbols and integer arithmetic (UF+IA). The proposed algorithm supports, in particular, programs with nested loops. The crux of the technique is a transformation of uninterpreted function (UF) applications into integer polynomials, which enables the precise summarization of loops with UF applications using recurrences. The equivalence checking algorithm then proceeds on loop-free, integer-only programs. We implemented the proposed technique in CORK, a tool that automatically verifies the correctness of compiler optimizations, and we show that it can prove more optimizations correct than state-of-the-art techniques.
