Similar Literature
20 similar records found.
1.
This paper describes SIMPL-D, a stack-based language with data abstraction features, and some of the details of its implementation. The language allows users to define new types that are parameterized by size and to perform system-defined operations (e.g. assignment) on objects with user-defined types. The use of object-describing templates in the implementation of storage allocation, assignment and returning values from functions is discussed. Finally, the conflicts between automatic initialization and separate compilation are explained.
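The idea of an object-describing template that drives allocation and assignment for size-parameterized types can be sketched in a few lines. This is only an illustration of the concept, not SIMPL-D's actual runtime; the `Template` class and its methods are hypothetical names.

```python
class Template:
    """Object-describing template: records element size and count so a
    runtime could allocate, copy-assign, and return objects of a
    size-parameterized user-defined type (illustrative sketch only)."""

    def __init__(self, elem_size, length):
        self.elem_size, self.length = elem_size, length

    @property
    def size(self):
        # Total byte size is derived from the template parameters.
        return self.elem_size * self.length

    def allocate(self):
        # Storage allocation consults the template for the object size.
        return bytearray(self.size)

    def assign(self, dst, src):
        # System-defined assignment: whole-object copy of template-sized data.
        dst[:] = src
```

A template instance stands in for the per-type metadata a compiler would emit once per instantiated type.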

2.
The size of today’s programs continues to grow, as does the number of bugs they contain. Testing alone is rarely able to flush out all bugs, and many lurk in difficult-to-test corner cases. An important alternative is static analysis, in which correctness properties of a program are checked without running it. While it cannot catch all errors, static analysis can catch many subtle problems that testing would miss. We propose a new space of abstractions for pointer analysis, an important component of static analysis for C and similar languages. We identify two main components of any abstraction: how to model statement order and how to model conditionals. We then present a new model of programs that enables us to explore different abstractions in this space. Our assign-fetch graph represents reads and writes to memory instead of traditional points-to relations and leads to concise function summaries that can be used in any context. Its flexibility supports many new analysis techniques with different trade-offs between precision and speed. We present the details of our abstraction space, explain where existing algorithms fit, describe a variety of new analysis algorithms based on our assign-fetch graphs, and finally present experimental results showing that our flow-aware abstraction for statement ordering both runs faster and produces more precise results than traditional flow-insensitive analysis.
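For context, the traditional flow-insensitive baseline this abstract compares against is Andersen-style inclusion-based points-to analysis, which can be written as a small fixed-point solver over four constraint kinds. This is the well-known textbook algorithm, not the paper's assign-fetch graph; the tuple encoding is our own.

```python
from collections import defaultdict

def andersen(constraints):
    """Flow-insensitive inclusion-based points-to analysis.
    Constraint forms (classic Andersen):
      ("addr", p, x)  : p = &x
      ("copy", p, q)  : p = q
      ("load", p, q)  : p = *q
      ("store", p, q) : *p = q
    Returns a map from variable to its points-to set."""
    pts = defaultdict(set)
    changed = True
    while changed:                      # iterate to a fixed point
        changed = False
        for kind, a, b in constraints:
            if kind == "addr":
                if b not in pts[a]:
                    pts[a].add(b); changed = True
            elif kind == "copy":
                new = pts[b] - pts[a]
                if new:
                    pts[a] |= new; changed = True
            elif kind == "load":
                for t in list(pts[b]):
                    new = pts[t] - pts[a]
                    if new:
                        pts[a] |= new; changed = True
            elif kind == "store":
                for t in list(pts[a]):
                    new = pts[b] - pts[t]
                    if new:
                        pts[t] |= new; changed = True
    return pts
```

Because statement order is ignored entirely, every constraint is treated as always in effect, which is exactly the imprecision the paper's flow-aware abstraction targets.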

3.
4.
This survey reviews the definition and characterization of trustworthy pointer analysis, the role pointer analysis plays in assuring software trustworthiness, the properties of trustworthy pointer analysis, and the main research results in the field. By analyzing and comparing existing trustworthy pointer analysis techniques, it discusses in detail the key techniques of trustworthiness-oriented pointer analysis; in addition, it highlights the methods and theory of flow-sensitive and context-sensitive pointer analysis. Finally, directions for further research are outlined.

5.
夏国恩 《计算机应用》2008,28(1):149-151
Kernel principal component analysis (KPCA) is introduced into customer churn prediction, and a corresponding feature-extraction algorithm is proposed. Combining KPCA with logistic regression, a prediction model is designed. Experiments on churn prediction for a telecom carrier show that the hit rate, coverage, accuracy and lift achieved by this method exceed those obtained with the raw attribute set and with principal component analysis (PCA) feature extraction. This indicates that KPCA can extract nonlinear features from customer data and is an effective approach to the customer churn prediction problem.
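The kernel trick at the heart of KPCA can be sketched without any library: build a pairwise RBF kernel matrix, then double-center it, which is the step that precedes the eigendecomposition producing the nonlinear components. This is a generic sketch of standard KPCA preprocessing (the `gamma` value is an arbitrary assumption), not the paper's specific algorithm.

```python
import math

def rbf_kernel_matrix(X, gamma=0.5):
    """Pairwise RBF kernel: K[i][j] = exp(-gamma * ||xi - xj||^2)."""
    n = len(X)
    K = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            d2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
            K[i][j] = math.exp(-gamma * d2)
    return K

def center_kernel(K):
    """Double-center K in feature space; KPCA then eigendecomposes the
    centered matrix to obtain nonlinear principal components."""
    n = len(K)
    row = [sum(r) / n for r in K]
    tot = sum(row) / n
    return [[K[i][j] - row[i] - row[j] + tot for j in range(n)]
            for i in range(n)]
```

The centered kernel's rows sum to zero, mirroring mean-centering in ordinary PCA; the extracted components would then feed a logistic regression classifier as in the abstract.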

6.
王曙燕  权雅菲  孙家泽 《计算机应用》2017,37(10):2968-2972
To address the false-positive problem of null-pointer-dereference defects reported by static testing, a classification-based false-positive identification method is proposed. Null-pointer-dereference defect knowledge is mined and preprocessed to produce a defect data set; the data set is then classified by an ID3 algorithm based on rough-set attribute significance, separating instances into false-positive and genuine null-pointer-dereference defects; according to the classification result, false positives among the defects reported by static testing are identified and the genuine defects confirmed. Compared with null-pointer-dereference detection based on the mainstream static testing tool FindBugs on ten benchmark programs, the method lowers the false-positive rate by 25% on average and reduces defect-confirmation effort by 24%. Experimental results show that the method effectively reduces confirmation cost in static testing and improves the efficiency and stability of null-pointer-dereference detection.
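The ID3 core used above is attribute selection by information gain. A minimal sketch on a toy defect data set (the attribute name `checked` and labels are invented for illustration; the paper additionally weights attributes by rough-set significance, which is not reproduced here):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label multiset, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    """Information gain of splitting `rows` on attribute `attr`;
    ID3 picks the attribute with the highest gain at each node."""
    base = entropy(labels)
    n = len(rows)
    remainder = 0.0
    for v in set(r[attr] for r in rows):
        sub = [l for r, l in zip(rows, labels) if r[attr] == v]
        remainder += len(sub) / n * entropy(sub)
    return base - remainder
```

An attribute that perfectly separates false positives from genuine defects has gain equal to the full label entropy, so it would be chosen as the root split.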

7.
In this paper, we introduce the concept of an exoskeleton as a new abstraction of arbitrary shapes that succinctly conveys both the perceptual and the geometric structure of a 3D model. We extract exoskeletons via a principled framework that combines segmentation and shape approximation. Our method starts from a segmentation of the shape into perceptually relevant parts and then constructs the exoskeleton using a novel extension of the Variational Shape Approximation method. Benefits of the exoskeleton abstraction to graphics applications such as simplification and chartification are presented.

8.
As aspects extend or replace existing functionality at specific join points in the code, their behavior may raise new exceptions, which can flow through the program execution in unexpected ways. Assuring the reliability of exception handling code in aspect-oriented (AO) systems is therefore a challenging task. Testing exception handling code is inherently difficult, since it is tricky to provoke all exceptions during tests, and the large number of different exceptions that can occur in a system may lead to the test-case explosion problem. Moreover, we have observed that some properties of AO programming (e.g., quantification, obliviousness) may conflict with characteristics of exception handling mechanisms, exacerbating existing problems (e.g., uncaught exceptions). The lack of verification approaches for exception handling code in AO systems motivated the present work. This work presents a verification approach based on a static analysis tool, called SAFE, to check the reliability of exception handling code in AspectJ programs. We evaluated the effectiveness and feasibility of our approach in two complementary ways: (i) by investigating whether the SAFE tool is precise enough to uncover exception flow information, and (ii) by applying the approach to three medium-sized AspectJ systems from different application domains.
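The basic exception-flow question such a tool answers — which exception types can escape a given entry point — can be sketched as propagation over a call graph. This is a generic sketch (function names and the crude recursion cutoff are our assumptions), not SAFE's actual analysis.

```python
def escaping(f, raises, calls, catches, seen=frozenset()):
    """Exception types that may escape function f.
    raises:  f -> exceptions raised directly inside f
    calls:   f -> set of callees
    catches: f -> exceptions f's handlers catch
    Recursion is cut off crudely by ignoring back edges (a sketch
    simplification; a real analysis iterates to a fixed point)."""
    if f in seen:
        return set()
    out = set(raises.get(f, ()))
    for g in calls.get(f, ()):
        out |= escaping(g, raises, calls, catches, seen | {f})
    # Whatever f catches does not propagate to its own callers.
    return out - catches.get(f, set())
```

An aspect that introduces a new `raises` entry at a join point immediately shows up in the escaping set of every transitive caller that lacks a matching handler, which is the class of problem the abstract describes.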

9.
蔄羽佳  尹青  朱晓东 《计算机应用》2016,36(6):1567-1572
To improve the static-analysis precision of traditional data randomization, a fine-grained data randomization technique based on a field-sensitive pointer analysis algorithm is proposed. During static analysis, the intermediate representation is first abstracted syntactically into a formal language; a non-standard type system is then built to describe the points-to relations among variables; finally, types are inferred and solved according to the typing rules, yielding field-sensitive points-to relations. Data are randomized (encrypted) according to these relations, producing a randomized executable. Experimental results show that, compared with traditional data randomization, the analysis precision improves significantly; processing time increases by 2% on average, while run time decreases by 3% on average. By exploiting field-sensitive pointer analysis, the technique imposes less execution overhead on the program and better strengthens its defenses.
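The randomization step can be sketched as: partition variables into points-to equivalence classes (the output of the pointer analysis), give each class its own mask, and XOR data through that mask on every access. The deterministic one-byte keys below are purely illustrative; a real scheme draws keys randomly at load time.

```python
def assign_keys(classes):
    """Map each variable to a mask shared by its points-to equivalence
    class. Variables that may alias must share a key, otherwise a read
    through one name could not unmask a write through another.
    Sketch: key = class index + 1 (deterministic, for demonstration)."""
    return {var: (i + 1) & 0xFF
            for i, cls in enumerate(classes) for var in cls}

def mask(value, key):
    """XOR masking; applying the same key again recovers the value."""
    return value ^ key
```

Finer-grained (field-sensitive) classes mean more distinct keys, so an attacker corrupting one class's data learns nothing about values masked under another key.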

10.

Context

Pointer analysis is an important building block of optimizing compilers and program analyzers for the C language. Various methods with different precision and performance trade-offs have been proposed. Among them, cycle elimination has been successfully used to improve the scalability of context-insensitive pointer analyses without losing any precision.

Objective

In this article, we present a new method on context-sensitive pointer analysis with an effective application of cycle elimination.

Method

To obtain similar benefits of cycle elimination for context-sensitive analysis, we propose a novel constraint-based formulation that uses sets of contexts as annotations. Our method is not based on binary decision diagram (BDD). Instead, we directly use invocation graphs to represent context sets and apply a hash-consing technique to deal with the exponential blow-up of contexts.

Result

Experimental results on C programs ranging from 20,000 to 290,000 lines show that applying cycle elimination to our new formulation results in a 4.5× speedup over the previous BDD-based approach.

Conclusion

We showed that cycle elimination is an effective method for improving the scalability of context-sensitive pointer analysis.
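Cycle elimination rests on one observation: all pointer variables in a cycle of copy constraints end up with identical points-to sets, so the cycle can be collapsed to a single representative. The standard way to find such cycles is Tarjan's strongly connected components algorithm, sketched here on a plain adjacency map (a generic sketch, not the paper's context-annotated formulation):

```python
def collapse_cycles(edges):
    """Tarjan's SCC algorithm over a copy-constraint graph.
    Returns node -> representative; nodes in one cycle share a
    representative and thus one points-to set."""
    index, low, rep = {}, {}, {}
    stack, onstack, counter = [], set(), [0]

    def dfs(v):
        index[v] = low[v] = counter[0]; counter[0] += 1
        stack.append(v); onstack.add(v)
        for w in edges.get(v, ()):
            if w not in index:
                dfs(w)
                low[v] = min(low[v], low[w])
            elif w in onstack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of an SCC
            while True:
                w = stack.pop(); onstack.discard(w); rep[w] = v
                if w == v:
                    break

    for v in list(edges):
        if v not in index:
            dfs(v)
    return rep
```

After collapsing, the constraint solver runs on the (acyclic) condensation, which is where the scalability win comes from; the paper's contribution is making this work when edges carry context-set annotations.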

11.
Work domain analysis (WDA) is used to model the functional structure of sociotechnical systems (STS) through the abstraction hierarchy (AH). By identifying objects, processes, functions and measures that support system purposes, WDA reveals constraints within the system. Traditionally, the AH describes system elements at the lowest level of abstraction as physical objects. Multiple analyses of complex systems reveal that many include objects that exist only at a conceptual level. This paper argues that, by extending the AH to include cognitive objects, the analytical power of WDA is extended, and novel areas of application are enabled. Three case studies are used to demonstrate the role that cognitive objects play within STS. It is concluded that cognitive objects are a valid construct that offer a significant enhancement of WDA and enable its application to some of the world’s most pressing problems. Implications for future applications of WDA and the AH are discussed.

Practitioner summary: Some sociotechnical systems include memes as part of their functional structure. Three case studies were used to evaluate the utility of introducing cognitive objects alongside physical ones in work domain analysis, the first phase of cognitive work analysis. Including cognitive objects increases the scope and accuracy of work domain analysis.


12.
To detect null-pointer-dereference defects in C programs thoroughly using static analysis, a detection method based on sound attribute analysis is proposed. The null-pointer-dereference defect pattern and its characteristics are first introduced. A sound attribute-analysis theory tailored to the detection of this defect is then presented, with the pointing attribute of a pointer described as an attribute lattice. Using the proposed abstract memory model and the transfer functions given for each kind of program statement, the pointing attributes are analyzed soundly; from the resulting attribute of each dereferenced pointer, null-pointer-dereference defects are detected. Analysis of the detection results on five real-world projects shows that the method thoroughly detects the null-pointer-dereference defects of C programs.
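The pointing-attribute lattice idea can be sketched with three abstract values (NULL, NONNULL, and MAYBE as top) plus a join for merging control-flow paths; a dereference is flagged unless the pointer is definitely non-null. The statement encoding below is our own simplification over straight-line code, not the paper's abstract memory model.

```python
NULL, NONNULL, MAYBE = "null", "nonnull", "maybe"

def join(a, b):
    """Least upper bound in the pointing-attribute lattice:
    equal values stay; conflicting values go to MAYBE (top)."""
    return a if a == b else MAYBE

def check(stmts):
    """Abstract interpretation over straight-line statements.
    ("null", p)  : p = NULL        ("addr", p) : p = &obj
    ("deref", p) : *p              Flags derefs not proven NONNULL."""
    state, defects = {}, []
    for i, (op, p) in enumerate(stmts):
        if op == "null":
            state[p] = NULL
        elif op == "addr":
            state[p] = NONNULL
        elif op == "deref":
            if state.get(p, MAYBE) != NONNULL:
                defects.append((i, p))
    return defects
```

Reporting whenever the attribute is not provably NONNULL is what makes the sketch sound (no defect missed) at the cost of possible false positives, the trade-off the abstract discusses.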

13.
Stack filters are operators that commute with the thresholding operation: thresholding a signal, applying the binary filter to each thresholded binary signal, and then summing up (stacking) the results yields the same result as applying the multi-level (gray-scale) filter to the original signal. Several approaches for designing optimal stack filters from training data, where optimality is characterized in terms of costs based on input-output joint observations, have been proposed. This work considers stack filter design from training data under a general statistical framework developed in the context of morphological image operator design. This framework (1) provides a common point of view for distinct design approaches, being useful for comparative analysis or for emphasizing differences, (2) clearly answers the question of why binary signals from different threshold levels, although following distinct distributions, can be pooled together in the cost estimation process, and (3) helps to show that several stack filter design approaches based on lattice-diagram search methods share a common underlying formulation.
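The stacking property is easy to demonstrate concretely with the median filter, the best-known stack filter: filtering each thresholded binary slice and summing the results reproduces the gray-scale result exactly. A minimal sketch (window-3 median, edges passed through unchanged):

```python
def median3(sig):
    """Window-3 median filter; boundary samples are kept as-is."""
    out = list(sig)
    for i in range(1, len(sig) - 1):
        out[i] = sorted(sig[i - 1:i + 2])[1]
    return out

def median3_by_stacking(sig, levels):
    """Threshold-decompose `sig` into binary slices, median-filter each
    slice, then stack (sum) the filtered slices back up."""
    out = [0] * len(sig)
    for t in range(1, levels + 1):
        binary = [1 if x >= t else 0 for x in sig]
        out = [a + b for a, b in zip(out, median3(binary))]
    return out
```

On a binary slice the window-3 median reduces to a majority vote, which is a positive Boolean function; that is precisely the condition under which the two computation orders agree.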

14.
Yulei Sui  Sen Ye  Jingling Xue  Jie Zhang 《Software》2014,44(12):1485-1510
Because of its high precision as a flow‐insensitive pointer analysis, Andersen's analysis has been deployed in some modern optimising compilers. To obtain improved precision, we describe how to add context sensitivity on top of Andersen's analysis. The resulting analysis, called ICON, is efficient enough to analyse large programs while being sufficiently precise to drive compiler optimisations. Its novelty lies in summarising the side effects of a procedure by using one transfer function on virtual variables that represent fully parameterised locations accessed via its formal parameters. As a result, a good balance between efficiency and precision is struck, making ICON more powerful than a 1‐callsite‐sensitive analysis and less so than a call‐path‐sensitive analysis (when the recursion cycles in a program are collapsed in all cases). We have compared ICON with FULCRA, a state-of-the-art Andersen's analysis that is context sensitive by acyclic call paths, in Open64 (with recursion cycles collapsed in both cases) using the 16 C/C++ benchmarks in SPEC2000 (totalling 600 KLOC) and 5 C applications (totalling 2.1 MLOC). Our results demonstrate the scalability of ICON and the lack of scalability of FULCRA. FULCRA spends over 2 h analysing SPEC2000 and fails to run to completion within 5 h for two of the five applications tested. In contrast, ICON spends just under 7 min on the 16 benchmarks in SPEC2000 and just under 26 min on the same two applications. For the 19 benchmarks analysable by FULCRA, ICON is nearly as accurate as FULCRA in terms of the quality of the built Static Single Assignment (SSA) form and the precision of the discovered alias information. Copyright © 2013 John Wiley & Sons, Ltd.
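Scaling a context-sensitive analysis usually hinges on sharing: identical calling contexts must be represented once, so that comparing and uniting context sets is cheap. The hash-consing trick can be sketched as an interned linked structure of call sites (a generic sketch of the technique; the `Ctx` class and its fields are illustrative, not ICON's data structure):

```python
class Ctx:
    """Hash-consed calling-context node: a call site plus a parent
    context. Construction interns nodes in a table, so structurally
    equal contexts are the *same* object and equality is pointer
    equality, making context-set operations cheap."""
    _table = {}
    __slots__ = ("site", "parent")

    def __new__(cls, site, parent=None):
        # Parents are themselves interned, so id(parent) is a stable key.
        key = (site, id(parent))
        hit = cls._table.get(key)
        if hit is not None:
            return hit
        obj = super().__new__(cls)
        obj.site, obj.parent = site, parent
        cls._table[key] = obj
        return obj
```

Because shared suffixes are stored once, the exponential number of call paths collapses into a DAG whose size tracks the number of *distinct* contexts actually encountered.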

15.
While JavaScript programs have become pervasive in web applications, they remain hard to reason about. In this context, most static analyses for JavaScript programs require precise call graph information, since the presence of large numbers of spurious callees significantly deteriorates precision. One of the most challenging JavaScript features complicating the inference of precise static call graph information is read/write access to object fields whose names are computed at runtime. JavaScript framework libraries often exploit this facility to build objects from other objects, as a way to simulate sophisticated high-level programming constructions. Such code patterns are difficult to analyze precisely, due to weak updates and the limitations of unrolling techniques. In this paper, we observe that precise field origination relations can be inferred by locally reasoning about object copies, with respect both to the object and to the program structure, and we propose an abstraction that allows us to reason separately about field read/write access patterns working on different fields and to carefully handle the sets of JavaScript object fields. We formalize and implement an analysis based on this technique. We evaluate the performance and precision of the analysis on the computation of call graph information for examples from jQuery tutorials.
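The weak-update problem the abstract mentions can be shown in a few lines: when the written field name is known exactly, the analysis may overwrite the old abstract value (a strong update); when the name is computed and several fields are possible, it can only accumulate (a weak update). This is a generic illustration of the distinction, not the paper's abstraction.

```python
def write_field(obj, fields, vals, strong):
    """Update an abstract object (field name -> set of abstract values).
    A strong update requires exactly one possible field and definite
    reachability; otherwise the write degrades to a weak update that
    can only add values, never remove stale ones."""
    for f in fields:
        if strong and len(fields) == 1:
            obj[f] = set(vals)                 # strong: overwrite
        else:
            obj.setdefault(f, set()).update(vals)  # weak: accumulate
```

Loops that copy fields under computed names (the jQuery-style pattern) force weak updates everywhere, which is why call graphs built from such abstract objects fill up with spurious callees.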

16.
Pointer analysis is a technique to identify at compile time the potential values of the pointer expressions in a program, which promises significant benefits for optimizing and parallelizing compilers. In this paper, a new approach to pointer analysis for assignments is presented. In this approach, assignments are classified into three categories: pointer assignments, structure (union) assignments, and normal assignments, which do not affect points-to information. The pointer analyses for these three kinds of assignments together make up the integrated algorithm. When analyzing a pointer assignment, a new method called expression expansion is used to calculate both the left targets and the right targets. The integration of recursive data structure analysis into pointer analysis is a significant contribution of this paper, unifying the pointer analysis of heap variables and stack variables. The algorithm is implemented in Agassiz, an analysis tool for C programs developed by the Institute of Parallel Processing, Fudan University. Its accuracy and effectiveness are illustrated by experimental data.
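The three-way classification described above can be approximated by inspecting declared types, as in this toy sketch (the type-string convention is a hypothetical stand-in for real compiler type information):

```python
def classify_assignment(lhs_type, rhs_type):
    """Three-way split of assignments for pointer analysis, judged here
    from (hypothetical) declared type strings: pointer assignments and
    structure/union assignments need analysis, normal ones do not."""
    if "*" in lhs_type or "*" in rhs_type:
        return "pointer"
    if lhs_type.split()[0] in ("struct", "union"):
        return "structure"
    return "normal"
```

Filtering out the "normal" category early is what lets the integrated algorithm spend its effort only on assignments that can change points-to information.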

17.
We study the problem of determining stack boundedness and the exact maximum stack size for three classes of interrupt-driven programs. Interrupt-driven programs are used in many real-time applications that require responsive interrupt handling. In order to ensure responsiveness, programmers often enable interrupt processing in the body of lower-priority interrupt handlers. In such programs a programming error can allow interrupt handlers to be interrupted in a cyclic fashion to lead to an unbounded stack, causing the system to crash. For a restricted class of interrupt-driven programs, we show that there is a polynomial-time procedure to check stack boundedness, while determining the exact maximum stack size is PSPACE-complete. For a larger class of programs, the two problems are both PSPACE-complete, and for the largest class of programs we consider, the two problems are PSPACE-hard and can be solved in exponential time. While the complexities are high, our algorithms are exponential only in the number of handlers, and polynomial in the size of the program.
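The core boundedness check can be sketched as a search over a "may preempt" relation between handlers: cyclic preemption means the stack is unbounded, and otherwise the maximum nesting depth gives the maximum stack size (in frames). This is a deliberately simplified model with a uniform frame cost, not the paper's algorithm for its full program classes.

```python
def max_stack(handlers, enables, frame=1):
    """handlers: handler names; enables: h -> handlers that may preempt h
    (interrupts h re-enables in its body). Returns the maximum nesting
    depth in frames, or None if cyclic preemption makes it unbounded."""
    def depth(h, path):
        if h in path:
            return None          # cycle: handlers can nest forever
        best = 0
        for g in enables.get(h, ()):
            d = depth(g, path | {h})
            if d is None:
                return None
            best = max(best, d)
        return frame + best

    worst = 0
    for h in handlers:
        d = depth(h, frozenset())
        if d is None:
            return None
        worst = max(worst, d)
    return worst
```

Enumerating preemption chains like this is exponential only in the number of handlers, echoing the complexity profile stated in the abstract.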

18.
We present an effective approach to performing data flow analysis in parallel and identify three types of parallelism inherent in this solution process: independent-problem parallelism, separate-unit parallelism and algorithmic parallelism. We present our investigations of Fortran procedures from the Perfect Benchmarks and netlib libraries, which reveal structural characteristics of program flow graphs that are amenable to algorithmic parallelism. Previously, the utility of algorithmic parallelism had been explored using our parallel hybrid algorithm in the context of solving the Reaching Definitions problem for Fortran procedures. Here we present new refinements that optimize performance by increasing the grain size of the parallelism, to improve communication on distributed-memory machines. The empirical performance of our optimized and unoptimized hybrid algorithms for Reaching Definitions is compared on this large data set using an iPSC/2. Our empirical findings qualitatively validate the usefulness of algorithmic parallelism. This research was supported, in part, by National Science Foundation grants CCR-8920078 and CCR-9023628-1, 2/5. An earlier version of this paper appears in Proceedings of the 6th ACM International Conference on Supercomputing (Washington, D.C., July 1992), pp. 236–247.
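The Reaching Definitions problem being parallelized above has a classic sequential formulation: an iterative worklist computation of IN/OUT sets per basic block, which the parallel algorithms decompose. A minimal sketch of that baseline:

```python
def reaching_definitions(cfg, gen, kill):
    """Classic iterative worklist solution of Reaching Definitions.
    cfg: block -> list of successor blocks
    gen/kill: block -> sets of definition ids generated/killed there.
    Returns (IN, OUT) maps; OUT[b] = gen[b] | (IN[b] - kill[b])."""
    preds = {b: set() for b in cfg}
    for b, succs in cfg.items():
        for s in succs:
            preds[s].add(b)
    IN = {b: set() for b in cfg}
    OUT = {b: set(gen.get(b, ())) for b in cfg}
    work = list(cfg)
    while work:
        b = work.pop()
        IN[b] = (set().union(*(OUT[p] for p in preds[b]))
                 if preds[b] else set())
        new = gen.get(b, set()) | (IN[b] - kill.get(b, set()))
        if new != OUT[b]:
            OUT[b] = new
            work.extend(cfg[b])     # successors must be revisited
    return IN, OUT
```

The dependence structure visible here (each block reads only its predecessors' OUT sets) is what the paper's algorithmic parallelism exploits across regions of the flow graph.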

19.
Existing similarity-analysis methods for program binaries based on function call graphs suffer from low accuracy when analyzing complex, obfuscated programs. To address this problem, a hierarchical analysis method based on subgraph matching is proposed. Taking subgraphs as the smallest detection unit, the similarity of each subgraph is measured level by level; the similarity of the program binaries is then computed from the individual subgraph similarities using a weighted-average strategy. Experimental results show that the method is robust to interference, can be applied effectively to malware family classification and the detection of new virus variants, and achieves high detection efficiency.
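The final aggregation step described above is a straightforward weighted average of per-subgraph scores; a minimal sketch (weighting by subgraph size is our assumption of a plausible choice, not necessarily the paper's weights):

```python
def program_similarity(sub_scores, weights=None):
    """Combine per-subgraph similarity scores (each in [0, 1]) into one
    binary-level score via a weighted average; uniform weights if none
    are given (e.g. subgraph sizes could serve as weights)."""
    if weights is None:
        weights = [1.0] * len(sub_scores)
    total = sum(weights)
    return sum(s * w for s, w in zip(sub_scores, weights)) / total
```

Scoring small subgraphs independently before averaging is what localizes the effect of obfuscation: a transformation that mangles one region only drags down that region's term.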

20.
Approximate pattern matching algorithms have become an important tool in computer assisted music analysis and information retrieval. The number of different problem formulations has greatly expanded in recent years, not least because of the subjective nature of measuring musical similarity. From an algorithmic perspective, the complexity of each problem depends crucially on the exact definition of the difference between two strings. We present an overview of advances in approximate string matching in this field focusing on new measures of approximation.
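The canonical "difference between two strings" the abstract alludes to is the Levenshtein edit distance, computed by dynamic programming; variant measures mostly change the cost terms of this recurrence. A standard sketch:

```python
def edit_distance(a, b):
    """Levenshtein distance: minimum number of single-symbol
    insertions, deletions and substitutions turning a into b."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                     # delete all of a[:i]
    for j in range(n + 1):
        d[0][j] = j                     # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,              # deletion
                          d[i][j - 1] + 1,              # insertion
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return d[m][n]
```

In the musical setting the symbols would be pitches or intervals rather than characters, and the unit costs above are where perceptually motivated weights get substituted in.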
