Found 20 similar documents (search took 31 ms).
1.
《IEEE Transactions on Software Engineering》1982,SE-8(4):371-379
Different approaches to the generation of test data are described. Error-based approaches depend on the definition of classes of commonly occurring program errors; they generate tests specifically designed to determine whether particular classes of errors occur in a program. An error-based method called weak mutation testing is described. In this method, tests are constructed that are guaranteed to force program statements containing certain classes of errors to act incorrectly during execution of the program over those tests. The method is systematic, and a tool can be built to help the user apply it. It is extensible in the sense that it can be extended to cover additional classes of errors. Its relationship to other software testing methods is discussed, and examples are included.
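The core idea of weak mutation can be sketched in a few lines. This is an illustrative toy, not the paper's tool: a test "weakly kills" a mutant if the original and mutated expressions evaluate to different values at the mutation point, regardless of the program's final output.

```python
# Hypothetical illustration of weak mutation testing: a test input
# weakly kills a mutant if the original and mutated expressions
# disagree at the mutation point.

def original_expr(a, b):
    return a + b          # expression under test

def mutant_expr(a, b):
    return a - b          # operator mutation: '+' replaced by '-'

def weakly_kills(test, orig, mut):
    """True if the two expressions disagree on this input."""
    return orig(*test) != mut(*test)

# (0, 0) cannot distinguish '+' from '-'; (2, 1) can.
assert not weakly_kills((0, 0), original_expr, mutant_expr)
assert weakly_kills((2, 1), original_expr, mutant_expr)
```

A weak-mutation tool would generate inputs like `(2, 1)` systematically, one per mutated statement, rather than relying on hand-picked tests.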
2.
3.
This paper proposes an improved way of controlling the costly and tedious test phases involved in software development. It presents a numerical method for making forecasts while testing large software systems. The method takes into account major changes to the test object and the test method. The model is adaptive; that is, its complexity is controlled by the actual test under consideration. The number of future errors as a function of test activities, the number of residual errors, and the end of testing can therefore be predicted at a very early stage, so corrective action to adapt the test can be taken in good time. The forecasts have been successfully applied to the development of the world's first ISDN communication computer, HICOM, and to other projects.
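The paper's forecasting model is not reproduced here; as a loose stand-in for the idea of predicting residual errors from cumulative error counts, the sketch below fits a Goel-Okumoto-style growth curve m(t) = a·(1 − e^(−b·t)) by grid search (the curve form, parameter ranges, and synthetic data are all assumptions for illustration).

```python
# Minimal sketch (not the paper's method): fit an error-growth curve
# to cumulative error counts, then predict residual errors.
import math

def fit_growth(times, counts):
    """Grid-search fit of m(t) = a*(1 - exp(-b*t)) by least squares."""
    best = (float("inf"), None, None)
    for a in range(50, 151):                  # candidate total error count
        for i in range(5, 100):
            b = i / 100.0                     # candidate detection rate
            sse = sum((c - a * (1 - math.exp(-b * t))) ** 2
                      for t, c in zip(times, counts))
            if sse < best[0]:
                best = (sse, a, b)
    return best[1], best[2]

times = list(range(1, 11))                    # ten test weeks
counts = [100 * (1 - math.exp(-0.3 * t)) for t in times]  # synthetic data
a_hat, b_hat = fit_growth(times, counts)
residual = a_hat - counts[-1]                 # predicted errors still latent
```

With the synthetic data above the fit recovers a = 100, b = 0.3, predicting roughly five errors remaining, which is the kind of early signal the abstract describes using to decide when testing can end.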
4.
E. P. Doolan 《Software: Practice and Experience》1992,22(2):173-182
Fagan's inspection method was used by a software development group to validate requirements specifications for software functions. The experiences of that group, which in general proved favourable, are described in this paper. Because the costs of fixing errors in software were known, the payback for every hour invested in inspection was shown to be a factor of 30. There are also other benefits that are much more difficult to quantify directly but whose effect on the overall quality of the software is significant. Some pointers are given at the end of this paper for those who want to introduce Fagan's inspection method into their own development environment.
5.
6.
A. Jefferson Offutt, W. Michael Craft 《Software Testing, Verification and Reliability》1994,4(3):131-154
Mutation analysis is a software testing technique that requires the tester to generate test data that will find specific, well-defined errors. Mutation testing executes many slightly differing versions, called mutants, of the same program to evaluate the quality of the data used to test the program. Although these mutants are generated and executed efficiently by automated methods, many of them are functionally equivalent to the original program and are not useful for testing. Recognizing and eliminating equivalent mutants is currently done by hand, a time-consuming and arduous task; this problem is a major obstacle to the practical application of mutation testing. This paper presents extensions to previous work in detecting equivalent mutants: algorithms for determining several classes of equivalent mutants are presented, an implementation of these algorithms is discussed, and results from using this implementation are reported. The algorithms are based on data-flow analysis and six compiler optimization techniques; each technique is described together with how it is used to detect equivalent mutants. The design of the tool and some experimental results obtained with it are also presented.
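One classic equivalence pattern the abstract alludes to: for integer-valued x, mutating the predicate `x > 0` to `x >= 1` yields an equivalent mutant. The sketch below uses a finite-domain check as a heuristic; the paper's actual approach uses data-flow analysis and compiler optimizations, not sampling.

```python
# Hedged sketch of equivalent-mutant recognition (heuristic only).

def original_pred(x):
    return x > 0

def mutant_pred(x):
    return x >= 1          # relational-operator mutation, equivalent for ints

def looks_equivalent(p, q, domain):
    """Heuristic: the predicates agree on every sampled integer.
    Agreement on a finite domain is evidence, not proof."""
    return all(p(x) == q(x) for x in domain)

assert looks_equivalent(original_pred, mutant_pred, range(-100, 101))
# A non-equivalent mutant ('x >= 0') is distinguished at x == 0.
assert not looks_equivalent(original_pred, lambda x: x >= 0, range(-100, 101))
```

A compiler-style detector would prove the first pair equivalent symbolically (integer semantics of `>` vs `>=`), which is exactly the kind of optimization-based reasoning the paper automates.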
7.
Object-oriented software development is an evolutionary process, and hence the opportunities for integration are abundant. Conceptually, classes are encapsulations of data attributes and their associated functions. Software components are amalgamations of logically and/or physically related classes, and a complete software system is in turn an aggregation of software components. All of these integration levels warrant contemporary integration techniques; traditional integration applied toward the end of the software development process no longer suffices. Integration strategies are needed at the class, component, subsystem, and system levels. Classes require integration of methods, and various types of class interaction mechanisms demand different testing strategies. Integration of classes into components poses its own requirements, and, finally, system integration demands yet other types of integration testing strategies. This paper discusses the various integration levels prevalent in object-oriented software development and suggests a solution for the integration requirements of each level. An integration framework for integrating classes into a system is also proposed. This revised version was published online in June 2006 with corrections to the Cover Date.
8.
Context: Memory safety errors such as buffer overflow vulnerabilities are one of the most serious classes of security threats. Detecting and removing such security errors are important tasks of software testing for improving the quality and reliability of software in practice. Objective: This paper presents a goal-oriented testing approach for effectively and efficiently exploring security vulnerability errors. A goal is a potential safety violation, and the testing approach automatically generates test inputs to uncover the violation. Method: We use type-inference analysis to diagnose potential safety violations and dynamic symbolic execution to perform test input generation. A major challenge facing dynamic symbolic execution in such an application is the combinatorial explosion of the path space. To address this fundamental scalability issue, we employ data-dependence analysis to identify a root cause leading to the execution of the goal and propose a path exploration algorithm to guide dynamic symbolic execution toward effectively discovering the goal. Results: To evaluate the effectiveness of our proposed approach, we conducted experiments against 23 buffer overflow vulnerabilities. We observed a significant improvement of our proposed algorithm over two widely adopted search algorithms: our algorithm discovered security vulnerability errors within a few seconds, whereas the two baseline algorithms failed even after 30 minutes of testing on a number of test subjects. Conclusion: The experimental results highlight the potential of utilizing data-dependence analysis to address the combinatorial path-space explosion faced by dynamic symbolic execution for effective security testing.
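The notion of a safety violation as a testing "goal" can be illustrated at toy scale. Real dynamic symbolic execution solves path constraints to reach the goal; this hypothetical version simply enumerates inputs until a potential out-of-bounds access fires (all names and the 8-slot buffer are invented for illustration).

```python
# Toy sketch: search for an input that reaches an unsafe buffer index.

BUF_LEN = 8

def index_of(ch):
    """Program under test: maps a byte value to a buffer index;
    values of 128 and above index past the 8-slot buffer."""
    return ch // 16

def find_violation(max_input=256):
    """Goal-directed search (brute force here): first input whose
    computed index would overflow the buffer, or None."""
    for ch in range(max_input):
        if index_of(ch) >= BUF_LEN:   # goal: unsafe index reached
            return ch
    return None

assert find_violation() == 128        # 128 // 16 == 8 == BUF_LEN
```

The paper's contribution is making this search scale: data-dependence analysis identifies which branches actually influence `index_of`, so symbolic execution explores only paths relevant to the goal instead of the whole path space.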
9.
D. S. Rosenblum, E. J. Weyuker 《IEEE Transactions on Software Engineering》1997,23(3):146-156
Selective regression testing strategies attempt to choose an appropriate subset of test cases from among a previously run test suite for a software system, based on information about the changes made to the system to create new versions. Although there has been a significant amount of research in recent years on the design of such strategies, there has been very little investigation of their cost-effectiveness. The paper presents some computationally efficient predictors of the cost-effectiveness of the two main classes of selective regression testing approaches. These predictors are computed from data about the coverage relationship between the system under test and its test suite. The paper then describes case studies in which these predictors were used to predict the cost-effectiveness of applying two different regression testing strategies to two software systems. In one case study, the TESTTUBE method selected an average of 88.1 percent of the available test cases in each version, while the predictor predicted that 87.3 percent of the test cases would be selected on average.
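The flavor of a coverage-based predictor can be sketched simply. Under the assumption (mine, for illustration) that a safe selection strategy reruns exactly the tests covering at least one changed entity, the predicted selection fraction follows directly from the coverage relation:

```python
# Sketch in the spirit of the paper: predict the fraction of tests a
# selective regression strategy would rerun, from coverage data alone.

def predict_selected(coverage, changed):
    """coverage: test name -> set of covered program entities;
    changed: set of entities modified in the new version."""
    hit = sum(1 for ents in coverage.values() if ents & changed)
    return hit / len(coverage)

coverage = {
    "t1": {"f", "g"},
    "t2": {"g"},
    "t3": {"h"},
    "t4": {"f", "h"},
}
assert predict_selected(coverage, {"f"}) == 0.5    # t1 and t4 rerun
assert predict_selected(coverage, {"f", "h"}) == 0.75
```

The appeal, as the case studies suggest, is that such a predictor is cheap to compute before running any selection tool, yet can track the tool's actual behavior closely (88.1% selected vs 87.3% predicted).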
10.
Context: Emerging multicores and clusters of multicores that may operate in parallel have set a new challenge: the development of massively parallel software composed of thousands of loosely coupled or even completely independent threads/processes, such as MapReduce and Java 3.0 workers, or Erlang processes, respectively. Testing and verification is a critical phase in the development of such software products. Objective: Generating test cases based on operational profiles and certifying the declared operational reliability figure of a given software product is a well-established process for sequential software. This paper proposes an adaptation of that process for a class of massively parallel software: large-scale task trees. Method: The proposed method uses statistical usage testing and operational reliability estimation based on operational profiles and novel test suite quality indicators, namely the percentage of different task trees and the percentage of different paths. Results: As an example, the proposed method is applied to operational reliability certification of a parallel software infrastructure named the TaskTreeExecutor, and an algorithm for generating random task trees is proposed to enable that application. Test runs in the experiments involved hundreds and thousands of Win32/Linux threads, demonstrating the scalability of the proposed approach. For practitioners, the most useful result presented is the method for determining the number of task trees and the number of paths needed to certify a given operational reliability of a software product; practitioners may also use the proposed coverage metrics to measure the quality of an automatically generated test suite. Conclusion: This paper provides a useful solution for test case generation that enables the operational reliability certification process for a class of massively parallel software called large-scale task trees. The usefulness of this solution was demonstrated by a case study: operational reliability certification of a real parallel software product.
11.
Many statistical methods for estimating software quality rely on representative testing: they assume a program is tested in an environment that simulates the environment where it will be used. Often, however, a software tester's aim is to uncover defects as soon as possible, and representative testing may not be the best way to do this. Instead, tests are often selected according to some plan that is believed to result in an efficient but thorough examination of the software's behavior. This raises the question of how practical measurements of software quality, like software probability-of-failure, can be obtained from directed testing. In this paper, we discuss some factors affecting the ability of directed tests to predict software quality when quality is measured in the environment where the software operates, but the directed tests do not simulate that environment. We consider a number of ways to measure the power of a directed test method, and show how these affect the tester's ability to predict software quality.
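The basic estimation problem can be made concrete with a small, assumed example: if directed tests partition the input domain and each partition's failure rate is observed, weighting those rates by operational-profile probabilities recovers an operational probability-of-failure estimate (the partitions and numbers below are invented).

```python
# Minimal sketch: operational probability-of-failure from directed
# tests, reweighted by an operational profile.

def operational_pof(partitions):
    """partitions: list of (profile_probability, failures, tests_run)
    for each region of the input domain the directed tests exercised."""
    return sum(p * (fail / run) for p, fail, run in partitions)

# Three usage classes: directed testing emphasized the rare class,
# which is where most failures live.
partitions = [
    (0.70, 0, 100),    # common inputs, no failures observed
    (0.25, 1, 100),    # occasional inputs, 1% failure rate
    (0.05, 20, 100),   # rare inputs, heavily failure-prone
]
assert abs(operational_pof(partitions) - 0.0125) < 1e-9
```

The paper's point is what happens when such reweighting is not straightforward: the power of the directed method in each region limits how well this estimate predicts operational quality.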
12.
Jason M. Daida, Adam M. Hilss, David J. Ward, Stephen L. Long 《Genetic Programming and Evolvable Machines》2005,6(1):79-110
This paper presents methods to visualize the structure of trees that occur in genetic programming. These methods allow for the inspection of the structure of entire trees even though several thousand nodes may be involved. They also scale to allow inspection of structure for entire populations and complete trials even though millions of nodes may be involved. Examples are given that demonstrate how this new way of "seeing" can afford a potentially rich way of understanding the dynamics that underpin genetic programming. The examples indicate further studies that might be enabled by visualizing structure at these scales.
13.
14.
15.
16.
An approach to the problem of complete testing is proposed. Testing is interpreted as the check of an implementation's conformance to the given requirements described by a specification. Completeness means that the test suite finds all possible implementation errors. In practice, testing must end in a finite amount of time, and in the general case the requirements of completeness and finiteness contradict each other. However, finite complete test suites can be constructed for certain classes of implementations and specifications, provided that specific test capabilities are available. Test algorithms are proposed for finite specifications and finite implementations with limited nondeterminism for the case of open-state testing, and the complexity of those algorithms is estimated.
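The finite-completeness idea can be shown at toy scale: for deterministic Mealy machines, running every input sequence up to a bound can expose a conformance error, and under assumptions on the implementation's state count such a bound can be made complete. The machines and bound below are hypothetical, not from the paper.

```python
# Sketch of conformance testing a deterministic Mealy machine
# implementation against its specification.
from itertools import product

def run(machine, start, seq):
    """machine: dict mapping (state, input) -> (next_state, output)."""
    state, outs = start, []
    for sym in seq:
        state, out = machine[(state, sym)]
        outs.append(out)
    return outs

spec = {("A", "0"): ("A", "x"), ("A", "1"): ("B", "y"),
        ("B", "0"): ("A", "y"), ("B", "1"): ("B", "x")}
impl = dict(spec)
impl[("B", "1")] = ("B", "y")         # an output fault in the implementation

def conforms(spec, impl, start, alphabet, bound):
    """Exhaustive check of all input sequences up to length `bound`."""
    return all(run(spec, start, seq) == run(impl, start, seq)
               for n in range(1, bound + 1)
               for seq in product(alphabet, repeat=n))

assert conforms(spec, spec, "A", "01", 3)       # spec conforms to itself
assert not conforms(spec, impl, "A", "01", 3)   # fault exposed by "11"
```

The exponential sequence count is exactly why the paper's complexity estimates matter: completeness in finite time is only achievable for restricted implementation classes.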
17.
Lack of metrology tools for inspecting high aspect ratio MEMS severely limits the degree to which tolerances of a given part can be examined. Tools such as SEMs, AFMs, vision-based systems, and profilometers are good at examining two-dimensional entities of a part or at characterizing surface roughness. None of these tools, however, can extract full three-dimensional data sets of high aspect ratio MEMS for part inspection: the hardware is either limited by the steep sidewalls of the part or by the simple fact that the acquisition method only collects two-dimensional data. This research proposes a method for extracting three-dimensional information about a part from multiple two-dimensional pointclouds. A fiducial setup is proposed that allows the registration of multiple pointclouds, and a computer-aided inspection (CAI) software platform has been developed to handle the multiple data sets. The software platform implements a least-squares localization routine to compute best-fit deviations from the nominal CAD geometry, as well as algorithms to determine the correct alignment between two pointclouds. With this platform, both form errors of a single pointcloud and geometric errors with respect to multiple pointclouds can be calculated.

This work was partially funded by Sandia National Laboratories and the National Science Foundation under Grant Number DMI-9988664. The government has certain rights in this material. Any opinions, findings and conclusions or recommendations are those of the authors and do not necessarily reflect the views of Sandia National Laboratories or the National Science Foundation. This paper was presented at HARMST 2003 in June 2003.
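As a hedged 2D stand-in for the least-squares localization the abstract describes (the paper's 3D case would typically use an SVD-based solver), the best-fit rotation and translation aligning paired 2D points has a closed form:

```python
# Closed-form least-squares rigid alignment of paired 2D points:
# find theta, (tx, ty) minimizing sum ||R(theta)*src_i + t - dst_i||^2.
import math

def best_fit_2d(src, dst):
    n = len(src)
    csx = sum(x for x, _ in src) / n; csy = sum(y for _, y in src) / n
    cdx = sum(x for x, _ in dst) / n; cdy = sum(y for _, y in dst) / n
    num = den = 0.0
    for (x, y), (u, v) in zip(src, dst):
        ax, ay, bx, by = x - csx, y - csy, u - cdx, v - cdy
        num += ax * by - ay * bx      # sum of cross products
        den += ax * bx + ay * by      # sum of dot products
    theta = math.atan2(num, den)      # optimal rotation angle
    tx = cdx - (csx * math.cos(theta) - csy * math.sin(theta))
    ty = cdy - (csx * math.sin(theta) + csy * math.cos(theta))
    return theta, tx, ty

src = [(0, 0), (1, 0), (0, 2)]
dst = [(3, 1), (3, 2), (1, 1)]        # src rotated 90 degrees, shifted (3, 1)
theta, tx, ty = best_fit_2d(src, dst)
assert abs(theta - math.pi / 2) < 1e-9
```

Residuals of the aligned points against nominal geometry then give the form-error deviations the CAI platform reports.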
18.
The testing phase of the software development process consumes about one-half of the development time and resources. This paper addresses the automation of the analysis stage of testing, introducing dual programming as one approach to implementing it. Dual programming uses a higher-level language to duplicate the functionality of the software under test. We contend that a higher-level language (HLL) uses fewer lines of code than a lower-level language (LLL) to achieve the same functionality, so testing the HLL program will require less effort than testing the LLL equivalent; the HLL program becomes the oracle for the LLL version. This paper describes experiments carried out using different categories of applications and identifies those most likely to profit from this approach. A metric is used to quantify the savings realized. The results of the research are: (a) dual programming can be used to automate the analysis stage of software testing; (b) substantial savings in the cost of this testing phase can be realized when the appropriate pairing of primal and dual languages is made; and (c) it is now possible to build a totally automated testing system. Recommendations are made regarding the applicability of the method to specific classes of applications.
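A toy illustration of the dual-programming idea (both functions hypothetical): a concise high-level "dual" acts as the oracle for a lower-level-style "primal" implementation, and the analysis stage reduces to comparing their outputs.

```python
# Dual programming in miniature: the dual is the oracle for the primal.

def primal_sum(xs):
    """Low-level style: explicit index loop, as a C version might look."""
    total, i = 0, 0
    while i < len(xs):
        total += xs[i]
        i += 1
    return total

def dual_sum(xs):
    """High-level oracle: one line, easier to trust and to review."""
    return sum(xs)

def check(primal, dual, inputs):
    """Automated analysis stage: flag inputs where the two disagree."""
    return [x for x in inputs if primal(x) != dual(x)]

assert check(primal_sum, dual_sum, [[], [1, 2, 3], [-5, 5]]) == []
```

A disagreement flags a defect in one of the two versions; the contention is that the shorter dual is the cheaper one to get right.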
19.
20.
《Expert Systems with Applications》2007,32(3):879-889
Genetic algorithms have successfully been used in automatic software testing, in particular to find programming errors and inputs that violate time constraints. In this paper, the idea of genetic-algorithm-based software testing is broadened to algorithm performance testing, and it is shown how the best- and worst-case performance of an algorithm can be found effectively. This information can then be used when comparing and improving algorithms. The proposed test method is introduced and the advantages of using genetic algorithms are discussed. Furthermore, the method is applied to a 2D nearest-point algorithm, which is tested by optimizing the parameters of 2D Gaussian distributions with a genetic algorithm in order to find the best- and worst-case distributions and the corresponding performances.
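The worst-case-performance search can be sketched at toy scale. Instead of the paper's 2D nearest-point algorithm, this assumed example evolves inputs that maximize the comparison count of an instrumented insertion sort (whose known worst case, a descending list, gives an upper bound to check against).

```python
# Sketch of GA-based worst-case performance testing: a mutation-only,
# elitist genetic search evolves inputs maximizing an algorithm's cost.
import random

def comparisons(seq):
    """Instrumented insertion sort: returns the comparison count."""
    a, count = list(seq), 0
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0:
            count += 1
            if a[j] <= key:
                break
            a[j + 1] = a[j]           # shift larger element right
            j -= 1
        a[j + 1] = key
    return count

def evolve(length=6, pop=20, gens=60, seed=1):
    rng = random.Random(seed)
    population = [[rng.randint(0, 9) for _ in range(length)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=comparisons, reverse=True)
        parents = population[: pop // 2]                 # elitist truncation
        children = []
        for p in parents:
            c = list(p)
            c[rng.randrange(length)] = rng.randint(0, 9)  # point mutation
            children.append(c)
        population = parents + children
    return max(population, key=comparisons)

worst = evolve()
# Length-6 insertion sort does at most 1+2+3+4+5 = 15 comparisons.
assert comparisons(list(range(6, 0, -1))) == 15
assert 7 <= comparisons(worst) <= 15
```

The same loop, with fitness replaced by wall-clock or operation counts of the algorithm under test, is the essence of the proposed performance-testing method; minimizing instead of maximizing finds best-case inputs.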