Similar Literature
20 similar references found (search time: 93 ms)
1.
Design and Implementation of an Automatic Path-Oriented Test Data Generation Tool   Total citations: 1 (self-citations: 0, citations by others: 1)
Path-oriented test data generation is a fundamental problem in software testing. Gupta et al. proposed an iterative relaxation method that linearizes predicate functions to solve it. Reference [2] improved that method and proved that the improved method generates the same constraint system as the original. Taking the improved method as its core algorithm and following software engineering principles, this paper adopts an object-oriented approach, uses UML for the design, and implements, in C++ under the Linux Red Hat 7.0 operating system, a prototype tool that automatically generates test data for program paths; the tool is then ported to Windows.

2.
Test data generation is at the core of software testing. This paper introduces the iterative relaxation method and an improvement to it; the improved method is more capable than the original at generating test data and can be used for automatic generation of both white-box and black-box test data. On this basis, a path-oriented test data generation framework is proposed, and its application to unit testing and integration testing is discussed.

3.
Because the construction of fault-testing constraints is rather complex, computationally expensive, and yields test cases of limited quality, which hinders its wide adoption, a method for constructing fault-testing constraints based on an optimal fusion set of multiple slices is proposed. Different program slices under the same slicing criterion are used to build a fusion-degree matrix that measures the consistency of the slices; the slice fusion degree, path conditions, and the intrinsic mechanism by which software faults are triggered and propagated are then combined to construct the test constraints for a given fault, which controls the number of test cases and improves their quality. Experimental results show that, compared with traditional predicate-based and necessity-based constraints, these fault-testing constraints produce smaller test suites, rarely generate invalid test cases, and are highly effective at revealing bugs.

4.
Automatic Generation of String Test Data Based on Predicate Slices   Total citations: 3 (self-citations: 0, citations by others: 3)
String predicates are very common, and how to generate string test data automatically is an open problem. For string predicates, this paper discusses a dynamic algorithm for generating the predicate slice of a given predicate on a path, and a method for automatically generating string test data based on predicate slices. A definition of the distance between strings is given. Using the program's DUC (Definition-Use-Control) expressions, the predicate slice of a predicate is constructed; for an arbitrary input, executing the predicate slice yields the current values of the variables in the predicate, and branch-function minimization is then performed on each character of those variables to dynamically generate ON-OFF test points on the boundary of the given string predicate. Experiments show that the method is effective.
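
A minimal sketch of the character-wise idea described above, under assumed definitions: the distance between equal-length strings is taken as the sum of absolute differences of character codes, and the branch function of a predicate such as s == "abc" is driven to zero one character at a time. The seed, target, and helper names are illustrative, not from the paper.

```python
# A rough sketch, not the paper's algorithm: character-wise branch-function
# minimization for a string predicate such as  s == "abc".
# Assumed definitions: the distance between equal-length strings is the sum of
# absolute differences of character codes; the branch function of the predicate
# is that distance, and it is minimized one character at a time.

def str_distance(a: str, b: str) -> int:
    """Assumed string distance: sum of per-character code differences."""
    return sum(abs(ord(x) - ord(y)) for x, y in zip(a, b))

def minimize_branch_function(seed: str, target: str) -> str:
    """Minimize F(s) = str_distance(s, target) by adjusting each character."""
    s = list(seed)
    for i in range(len(s)):
        while s[i] != target[i]:
            step = 1 if ord(s[i]) < ord(target[i]) else -1  # move toward smaller F
            s[i] = chr(ord(s[i]) + step)
    return "".join(s)

if __name__ == "__main__":
    on_point = minimize_branch_function("aza", "abc")        # ON point: predicate holds
    off_point = on_point[:-1] + chr(ord(on_point[-1]) + 1)   # OFF point: just off the boundary
    print(str_distance(on_point, "abc"), on_point, off_point)  # 0 abc abd
```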

5.
A Dynamic Program Slicing Algorithm for Test Data Generation   Total citations: 3 (self-citations: 0, citations by others: 3)
王雪莲  赵瑞莲  李立健 《计算机应用》2005,25(6):1445-1447,1450
This paper introduces the basic concepts of program slicing, proposes a dynamic program slicing algorithm based on forward analysis, and explores the application of program slicing to software test data generation. The results show that it can effectively improve the efficiency of path-oriented test data generation.
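
As a rough illustration of the forward style of dynamic slicing (not the authors' algorithm), the sketch below processes an execution trace once, carrying for each variable the set of statements that have influenced its current value. The trace format is assumed, and control dependences are omitted for brevity.

```python
# Minimal sketch: forward computation of dynamic slices over an execution trace.
# Each trace entry is (stmt_id, defined_var, used_vars); the dynamic slice of a
# variable is the set of statement ids that have influenced its current value.
# Control dependences are omitted here to keep the sketch short.

def forward_dynamic_slices(trace):
    slices = {}   # variable -> set of statement ids influencing its current value
    result = {}   # stmt_id -> dynamic slice computed when that statement executed
    for stmt_id, defined, used in trace:
        contrib = {stmt_id}
        for v in used:
            contrib |= slices.get(v, set())
        if defined is not None:
            slices[defined] = contrib
        result[stmt_id] = contrib
    return result

if __name__ == "__main__":
    #  1: x = input()    2: y = x + 1    3: z = 5    4: print(y)
    trace = [(1, "x", []), (2, "y", ["x"]), (3, "z", []), (4, None, ["y"])]
    print(forward_dynamic_slices(trace)[4])   # {1, 2, 4}: statement 3 is irrelevant
```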

6.
A Path-Based Automatic Test Data Generation Algorithm   Total citations: 3 (self-citations: 0, citations by others: 3)
陈继锋  朱利  沈钧毅  陈玲 《控制与决策》2005,20(9):1065-1068
A new algorithm for path-based automatic test data generation is proposed. The algorithm uses the linear predicate functions on a path directly as their linear arithmetic representations to construct linear constraints on the input variables; only when a predicate function is nonlinear in the input variables is its linear arithmetic representation computed. It therefore does not need to compute linear arithmetic representations for all predicate functions, nor to compute predicate slices, determine input dependency sets, or construct linear constraints on the increments of the input variables. Theoretical analysis and examples show that the algorithm is simple, easy to apply, effective, and computationally inexpensive.
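
To make the constraint-solving step concrete, here is a small hedged sketch: assuming the linear predicates on the chosen path have already been collected as inequalities over the inputs, any feasible point of that linear system is a test input that drives execution down the path. The example constraints and the use of scipy's LP solver are illustrative choices, not part of the paper.

```python
# Illustration only: once every linear predicate on the chosen path is written
# as a linear constraint over the input variables (x, y), a feasibility check
# with an off-the-shelf LP solver yields a test input for that path.
from scipy.optimize import linprog

# Path condition assumed for illustration:  x + y <= 10  and  x - y >= 2
A_ub = [[1, 1],    #  x + y <= 10
        [-1, 1]]   # -x + y <= -2   (i.e. x - y >= 2)
b_ub = [10, -2]

res = linprog(c=[0, 0],                    # pure feasibility problem
              A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * 2)   # inputs unrestricted in sign
if res.success:
    print("test input (x, y) =", res.x)    # any point satisfying the path condition
```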

7.
Statistical Debugging: A Hypothesis Testing-Based Approach   Total citations: 1 (self-citations: 0, citations by others: 1)
Manual debugging is tedious, as well as costly. The high cost has motivated the development of fault localization techniques, which help developers search for fault locations. In this paper, we propose a new statistical method, called SOBER, which automatically localizes software faults without any prior knowledge of the program semantics. Unlike existing statistical approaches that select predicates correlated with program failures, SOBER models the predicate evaluation in both correct and incorrect executions and regards a predicate as fault-relevant if its evaluation pattern in incorrect executions significantly diverges from that in correct ones. Featuring a rationale similar to that of hypothesis testing, SOBER quantifies the fault relevance of each predicate in a principled way. We systematically evaluate SOBER under the same setting as previous studies. The result clearly demonstrates the effectiveness: SOBER could help developers locate 68 out of the 130 faults in the Siemens suite by examining no more than 10 percent of the code, whereas the cause transition approach proposed by Holger et al. [2005] and the statistical approach by Liblit et al. [2005] locate 34 and 52 faults, respectively. Moreover, the effectiveness of SOBER is also evaluated in an "imperfect world", where the test suite is either inadequate or only partially labeled. The experiments indicate that SOBER could achieve competitive quality under these harsh circumstances. Two case studies with grep 2.2 and bc 1.06 are reported, which shed light on the applicability of SOBER on reasonably large programs.
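
The following is a deliberately simplified sketch of the evaluation-bias idea behind SOBER, not the paper's exact statistic: each predicate's bias (fraction of true evaluations) is computed per run, and a t-like score stands in for the hypothesis-testing-based divergence measure between failing and passing runs. All counts and names are hypothetical.

```python
# Simplified sketch of the idea behind SOBER (not the paper's exact statistic):
# for each instrumented predicate, compute its "evaluation bias" in every run,
# then rank predicates by how much the bias in failing runs diverges from the
# bias in passing runs.  A t-like score stands in for SOBER's measure.
from statistics import mean, pstdev

def evaluation_bias(n_true: int, n_false: int) -> float:
    total = n_true + n_false
    return n_true / total if total else 0.5

def fault_relevance(passing_counts, failing_counts) -> float:
    """passing_counts / failing_counts: lists of (n_true, n_false) per run."""
    p = [evaluation_bias(t, f) for t, f in passing_counts]
    q = [evaluation_bias(t, f) for t, f in failing_counts]
    spread = pstdev(p + q) or 1e-9
    return abs(mean(q) - mean(p)) / spread      # larger => more fault-relevant

if __name__ == "__main__":
    # Hypothetical counts for one predicate across 4 passing and 3 failing runs.
    passing = [(1, 9), (0, 10), (2, 8), (1, 9)]
    failing = [(8, 2), (9, 1), (7, 3)]
    print(round(fault_relevance(passing, failing), 3))
```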

8.
This paper presents an automatic program-flow analysis method. Static single assignment (SSA) form is used to simplify the data dependences in program slicing, a simple and fast slicing algorithm removes statements and control predicates that have no effect on loop control, and abstract interpretation automatically obtains precise program-flow information. Experimental results show that, with no loss of precision, the method is nearly 25% faster than the ordinary approach; it makes no assumptions about program format and is applicable to flow analysis of programs in any format.

9.
Order-sorted logic programming with predicate hierarchy   Total citations: 1 (self-citations: 0, citations by others: 1)
Order-sorted logic has been formalized as first-order logic with sorted terms where sorts are ordered to build a hierarchy (called a sort-hierarchy). These sorted logics lead to useful expressions and inference methods for structural knowledge that ordinary first-order logic lacks. Nitta et al. pointed out that for legal reasoning a sort-hierarchy (or a sorted term) is not sufficient to describe structural knowledge for event assertions, which express facts caused at some particular time and place. The event assertions are represented by predicates with n arguments (i.e., n-ary predicates), and then a particular kind of hierarchy (called a predicate hierarchy) is built by a relationship among the predicates. To deal with such a predicate hierarchy, which is more intricate than a sort-hierarchy, Nitta et al. implemented a typed (sorted) logic programming language extended to include a hierarchy of verbal concepts (corresponding to predicates). However, the inference system lacks a theoretical foundation because its hierarchical expressions exceed the formalization of order-sorted logic. In this paper, we formalize a logic programming language with not only a sort-hierarchy but also a predicate hierarchy. This language can derive general and concrete expressions in the two kinds of hierarchies. For the hierarchical reasoning of predicates, we propose a manipulation of arguments in which surplus and missing arguments in derived predicates are eliminated and supplemented. As discussed by Allen, McDermott and Shoham in research on temporal logic and as applied by Nitta et al. to legal reasoning, if each predicate is interpreted as an event or action (not as a static property), then missing arguments should be supplemented by existential terms in the argument manipulation. Based on this, we develop a Horn clause resolution system extended to add inference rules of predicate hierarchies. With a semantic model restricted by interpreting a predicate hierarchy, the soundness and completeness of the Horn-clause resolution is proven.

10.
Software testing comprises four stages: static analysis, path selection, test data generation, and dynamic analysis, and automatic test data generation is one of its key techniques. Based on an analysis of the program under test, this paper proposes a balancing-force method for generating test data: for arbitrary input variables, the range within which each variable may move is determined and function minimization is performed on the variables in the predicates to obtain test data. A concrete implementation is also given.

11.
Building on a study of program slicing techniques, a new slice-based mutation testing method is proposed. Examples show that the method can more effectively improve the accuracy and efficiency of mutation testing.

12.
Predicates appear in both the specification and implementation of a program. One approach to software testing, referred to as predicate testing, is to require certain types of tests for a predicate. In this paper, three fault-based testing criteria are defined for compound predicates, which are predicates with one or more AND/OR operators. BOR (boolean operator) testing requires a set of tests to guarantee the detection of (single or multiple) boolean operator faults, including incorrect AND/OR operators and missing/extra NOT operators. BRO (boolean and relational operator) testing requires a set of tests to guarantee the detection of boolean operator faults and relational operator faults (i.e., incorrect relational operators). BRE (boolean and relational expression) testing requires a set of tests to guarantee the detection of boolean operator faults, relational operator faults, and a type of fault involving arithmetical expressions. It is shown that for a compound predicate with n, n>0, AND/OR operators, at most n+2 constraints are needed for BOR testing and at most 2*n+3 constraints for BRO or BRE testing, where each constraint specifies a restriction on the value of each boolean variable or relational expression in the predicate. Algorithms for generating a minimum set of constraints for BOR, BRO, and BRE testing of a compound predicate are given, and the feasibility problem for the generated constraints is discussed. For boolean expressions that contain multiple occurrences of some boolean variables, how to combine BOR testing with the meaningful impact strategy (Weyuker et al., 1994) is described.
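
As an illustration of the n + 2 bound (not code from the paper), the sketch below hand-picks four constraints for the predicate (a and b) or c, which has n = 2 AND/OR operators, and brute-force checks that they distinguish the original predicate from every single boolean-operator mutant.

```python
# Illustration: a BOR-adequate constraint set for the compound predicate
# (a and b) or c needs at most n + 2 = 4 constraints (n = 2 AND/OR operators).
# The brute-force check below confirms that the 4 chosen truth assignments
# distinguish the original predicate from every single boolean-operator mutant
# (an AND/OR operator swapped, or an extra NOT added on one operand).

original = lambda a, b, c: (a and b) or c

def single_operator_mutants():
    yield lambda a, b, c: (a or b) or c          # AND -> OR
    yield lambda a, b, c: (a and b) and c        # OR -> AND
    yield lambda a, b, c: ((not a) and b) or c   # extra NOT on a
    yield lambda a, b, c: (a and (not b)) or c   # extra NOT on b
    yield lambda a, b, c: (a and b) or (not c)   # extra NOT on c

# One BOR constraint set derived by hand for (a and b) or c.
bor_set = [(True, True, False), (False, True, True),
           (False, True, False), (True, False, False)]

for mutant in single_operator_mutants():
    assert any(original(*t) != mutant(*t) for t in bor_set), "mutant not detected"
print("all single boolean-operator mutants detected by", len(bor_set), "tests")
```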

13.
Static strategies for building regression test suites analyze the modules affected by a code change according to the call relationships between program units and then build the regression test suite; they do not consider implicit data associations between units, yet methods that operate on the same database or on shared object data are implicitly data-associated. Observing that a code change affects not only methods related by calls but also methods related by implicit data associations, this paper proposes a multi-association static strategy for building regression test suites: a multi-relation method-association graph is constructed to analyze both call relationships and implicit data associations between methods, and the regression test suite affected by a code change is then built from these relationships. An experimental evaluation on four open-source projects shows that the proposed static strategy improves the safety and precision of regression testing.
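
A minimal sketch of the described idea under assumed inputs (method names, shared tables, and tests are hypothetical, and impact edges are treated as undirected for simplicity): build a method-association graph with both call edges and implicit data-association edges, compute the methods reachable from the changed ones, and select every test that covers an affected method.

```python
# Minimal sketch, not the paper's tool: select regression tests from a
# method-association graph that combines call edges with implicit
# data-association edges (methods touching the same table / shared object).
from collections import defaultdict

def build_graph(calls, data_assoc):
    """calls: (caller, callee) pairs; data_assoc: (m1, m2) pairs sharing data."""
    g = defaultdict(set)
    for a, b in list(calls) + list(data_assoc):
        g[a].add(b); g[b].add(a)      # simplification: impact edges are undirected
    return g

def affected_methods(graph, changed):
    seen, stack = set(changed), list(changed)
    while stack:
        m = stack.pop()
        for n in graph[m]:
            if n not in seen:
                seen.add(n); stack.append(n)
    return seen

def select_tests(test_coverage, affected):
    """test_coverage: test name -> set of methods it exercises."""
    return [t for t, methods in test_coverage.items() if methods & affected]

if __name__ == "__main__":
    calls = [("OrderService.place", "Repo.save")]
    data_assoc = [("Repo.save", "ReportJob.run")]   # both operate on the same table
    tests = {"test_place": {"OrderService.place"}, "test_report": {"ReportJob.run"}}
    aff = affected_methods(build_graph(calls, data_assoc), {"Repo.save"})
    print(select_tests(tests, aff))   # both tests selected, via call + data edges
```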

14.
《Real》2005,11(4):270-281
Recently, Shen et al. [IEEE Transactions on Image Processing 2003;12:283–95] presented an efficient adaptive vector quantization (AVQ) algorithm and their proposed AVQ algorithm has a better peak signal-to-noise ratio (PSNR) than that of the previous benchmark AVQ algorithm. This paper presents an improved AVQ algorithm based on the proposed hybrid codebook data structure which consists of three codebooks—the locality codebook, the static codebook, and the history codebook. Due to easy maintenance advantage, the proposed AVQ algorithm leads to a considerable computation-saving effect while preserving the similar PSNR performance as in the previous AVQ algorithm by Shen et al. [IEEE Transactions on Image Processing 2003;12:283–95]. Experimental results show that the proposed AVQ algorithm over the previous AVQ algorithm has about 75% encoding time improvement ratio while both algorithms have the similar PSNR performance.

15.
Slicing is a program analysis technique originally developed for imperative languages. It facilitates understanding of data flow and debugging. This paper discusses slicing of Constraint Logic Programs. Constraint Logic Programming (CLP) is an emerging software technology with a growing number of applications. Data flow in constraint programs is not explicit, and for this reason the concepts of slice and the slicing techniques of imperative languages are not directly applicable. This paper formulates declarative notions of slice suitable for CLP. They provide a basis for defining slicing techniques (both dynamic and static) based on variable sharing. The techniques are further extended by using groundness information. A prototype dynamic slicer of CLP programs implementing the presented ideas is briefly described together with the results of some slicing experiments.

16.
Writing correct distributed programs is hard. In spite of extensive testing and debugging, software faults persist even in commercial grade software. Many distributed systems should be able to operate properly even in the presence of software faults. Monitoring the execution of a distributed system, and, on detecting a fault, initiating the appropriate corrective action is an important way to tolerate such faults. This gives rise to the predicate detection problem which requires finding whether there exists a consistent cut of a given computation that satisfies a given global predicate. Detecting a predicate in a computation is, however, an NP-complete problem in general. In order to ameliorate the associated combinatorial explosion problem, we introduce the notion of computation slice. Formally, the slice of a computation with respect to a predicate is a (sub)computation with the least number of consistent cuts that contains all consistent cuts of the computation satisfying the predicate. Intuitively, slice is a concise representation of those consistent cuts of a computation that satisfy a certain condition. To detect a predicate, rather than searching the state-space of the computation, it is much more efficient to search the state-space of the slice. We prove that the slice of a computation is uniquely defined for all predicates. We also present efficient algorithms for computing the slice for several useful classes of predicates. For an arbitrary predicate, we establish that the problem of computing the slice is NP-complete in general. Nonetheless, for such a predicate, we develop an efficient heuristic algorithm for computing an approximate slice. Our experimental results demonstrate that slicing can lead to an exponential improvement over existing techniques for predicate detection in terms of time and space. Parts of this paper have appeared earlier in conference proceedings [GM01, MG01a, MG03a].

17.
Path-oriented test data generation is a fundamental problem in software testing. This paper describes an in-house tool for automatic path-oriented test data generation. Tcl/Tk is a powerful GUI toolkit that runs on Windows, UNIX, and other operating systems and is highly portable. The paper briefly introduces Tcl/Tk and shows how to use it to design the graphical interface of the path-oriented test data generation tool.

18.
Research on Path Coverage Based on DDGRAPH   Total citations: 3 (self-citations: 0, citations by others: 3)
Software testing comprises four stages: static analysis, path selection, test data generation, and dynamic analysis, and automatic path selection is one of its key techniques. Path coverage is an important testing method that makes every branch of the program execute at least once. Through an analysis of DDGRAPHs, this paper proposes a representation of the dominator tree and the implied tree of the arcs in a DDGRAPH, gives a method for determining the unconstrained arcs from these two trees, and uses an approximate minimum-predicate-coverage strategy to determine a subset of test paths that covers all unconstrained arcs.
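
As a simplified stand-in for the arc-level analysis (the paper works on DDGRAPH arcs and also builds an implied tree), the sketch below computes node dominators on a small control-flow graph with the classic iterative fixed-point algorithm; the example graph is hypothetical.

```python
# Simplified stand-in for the paper's arc-level dominator analysis:
# the classic iterative fixed-point algorithm for node dominators on a CFG.

def dominators(cfg, entry):
    """cfg: node -> list of successors.  Returns node -> set of dominators."""
    nodes = set(cfg) | {s for succs in cfg.values() for s in succs}
    dom = {n: set(nodes) for n in nodes}
    dom[entry] = {entry}
    preds = {n: [p for p in nodes if n in cfg.get(p, [])] for n in nodes}
    changed = True
    while changed:
        changed = False
        for n in nodes - {entry}:
            new = {n} | (set.intersection(*(dom[p] for p in preds[n]))
                         if preds[n] else set())
            if new != dom[n]:
                dom[n], changed = new, True
    return dom

if __name__ == "__main__":
    #  entry -> b -> d  and  entry -> c -> d   (a diamond)
    cfg = {"entry": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
    print(dominators(cfg, "entry")["d"])   # only entry and d itself dominate d
```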

19.
Spectrum-based dynamic fault localization (SFL) can be divided into methods based on executable-statement coverage and methods based on predicate coverage. Analyzing these two classes reveals that: (1) statement-coverage-based methods do not consider the correlation between predicate errors and execution outcomes; (2) predicate-coverage-based methods instrument only predicates and, in the end, compute suspiciousness only for predicates and rank them, so if the defect is not in a predicate they cannot locate it accurately; and (3) both ignore the associations and hierarchical structure among basic blocks, treating each basic block as an independent unit. To address these problems, this paper first incorporates the correlation between predicate errors and execution outcomes into the algorithm design; second, it introduces the idea of hierarchical predicate coverage and analysis, subdividing and layering the basic blocks in the coverage matrix; finally, combining the two, it proposes a fault localization method based on a hierarchical predicate coverage matrix and presents the hierarchical predicate coverage algorithm Phcm. Using the Siemens suite as the subject programs and comparing against three other fault localization methods, experiments confirm the method's effectiveness in improving fault localization precision and reducing the proportion of code that must be examined.

20.
Two robust remote user authentication protocols using smart cards   Total citations: 2 (self-citations: 0, citations by others: 2)
With the rapid growth of electronic commerce and enormous demand from variants of Internet based applications, strong privacy protection and robust system security have become essential requirements for an authentication scheme or universal access control mechanism. In order to reduce implementation complexity and achieve computation efficiency, design issues for efficient and secure password based remote user authentication scheme have been extensively investigated by research community in these two decades. Recently, two well-designed password based authentication schemes using smart cards are introduced by Hsiang and Shih (2009) and Wang et al. (2009), respectively. Hsiang et al. proposed a static ID based authentication protocol and Wang et al. presented a dynamic ID based authentication scheme. The authors of both schemes claimed that their protocol delivers important security features and system functionalities, such as mutual authentication, data security, no verification table implementation, freedom on password selection, resistance against ID-theft attack, replay attack and insider attack, as well as computation efficiency. However, these two schemes still have much space for security enhancement. In this paper, we first demonstrate a series of vulnerabilities on these two schemes. Then, two enhanced protocols with corresponding remedies are proposed to eliminate all identified security flaws in both schemes.
