Similar documents
20 similar documents found (search time: 15 ms)
1.
A remotely controlled parallel testing platform for large-scale software testing projects is designed and implemented. Practical engineering tests demonstrate that executing test cases in parallel on this platform greatly reduces software test execution time and improves execution efficiency, providing a valuable engineering reference for large-scale software testing.  相似文献

2.
Component-based software development is rapidly introducing new paradigms and possibilities for delivering highly customized software in distributed environments. Among the communication, teamwork, and coordination problems of global software development, fault detection is seen as the key challenge, so the reliability of component-based application requirements must be ensured. Distributed fault detection applied to components tracked from various sources has failed to keep track of the large number of components from different locations. In this study, we propose an approach that detects faults from component-based system requirements using fuzzy logic and historical information during acceptance testing. The approach identifies error-prone components for test case extraction and prioritizes test cases to validate components during acceptance testing. An empirical evaluation showed that the proposed approach significantly outperforms the conventional procedures, i.e., requirement criteria and communication coverage criteria, in component selection and acceptance testing while avoiding irrelevancy and redundancy. The F-measures of the proposed approach indicate accurate component selection, and the increase in faults identified in components was higher with the proposed approach (more than 80 percent) than with requirement criteria and code coverage criteria procedures (less than 80 percent). Similarly, the fault detection rate of the proposed approach increases to 92.80, compared with less than 80 percent for existing methods. The proposed approach provides a comprehensive guideline and roadmap for practitioners and researchers.  相似文献
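The abstract does not give the paper's fuzzy model, so the sketch below is only a minimal illustration of fuzzy-logic scoring of error-prone components: the two inputs (historical fault density and change frequency), the triangular membership functions, and the rule weights are all hypothetical.

```python
# Minimal sketch of fuzzy-logic scoring of error-prone components.
# The inputs (fault_density, change_freq), membership breakpoints and
# rule weights are hypothetical illustrations, not the paper's model.

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def error_proneness(fault_density, change_freq):
    """Return a score in [0, 1]; higher means select for acceptance testing first."""
    low_f, high_f = tri(fault_density, -0.5, 0.0, 0.5), tri(fault_density, 0.3, 1.0, 1.7)
    low_c, high_c = tri(change_freq, -0.5, 0.0, 0.5), tri(change_freq, 0.3, 1.0, 1.7)
    # Mamdani-style rules: high fault density AND high change frequency -> error prone.
    rules = [
        (min(high_f, high_c), 1.0),   # strongly error prone
        (min(high_f, low_c), 0.7),
        (min(low_f, high_c), 0.5),
        (min(low_f, low_c), 0.1),     # unlikely to be error prone
    ]
    num = sum(w * s for w, s in rules)
    den = sum(w for w, _ in rules) or 1.0
    return num / den                  # weighted-average defuzzification

components = {"auth": (0.9, 0.8), "ui": (0.2, 0.4), "billing": (0.7, 0.3)}
ranked = sorted(components, key=lambda c: error_proneness(*components[c]), reverse=True)
print(ranked)   # components ordered for test case extraction / prioritization
```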

3.
Software testing is one of the most important techniques for examining the behavior of software products to assure their quality. An effective and efficient testing approach must balance two important but conflicting requirements: accuracy, which calls for a large number of test cases, and low time and cost, which calls for few test cases. Even for small software, the number of possible test cases is typically very large, and exhaustive testing is impractical; hence, selecting an appropriate test suite is necessary. Cause–effect graph testing is a common black-box testing technique that represents the Boolean relations between input parameters; other traditional black-box strategies cannot identify these relations, which may result in the loss of some important test cases. Although the cause–effect graph is regarded as very promising in specification testing, most of the proposed approaches using the graph are complex or generate impossible or excessively many test cases. This observation has motivated our research to propose an efficient strategy for generating a minimal test suite that simultaneously achieves high coverage of input parameters. To do so, we first identify major effects from the cause–effect graph using a reduced ordered binary decision diagram (ROBDD). The ROBDD makes the Boolean expression of the graph concise and yields a unique representation of the expression, which makes it possible to reduce the size of the generated test suite and to perform testing faster. Our method then uses a particle swarm optimization (PSO) algorithm to select the optimal test suite covering all pairwise combinations of input parameters. The experimental results show that our approach simultaneously achieves high efficacy and reduces the cost of testing by selecting appropriate test cases, with respect to both test suite size and coverage. It also outperforms some existing state-of-the-art black-box testing strategies. Copyright © 2015 John Wiley & Sons, Ltd.  相似文献
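As a rough illustration of the reduction step, the sketch below uses SymPy's Quine–McCluskey-based simplify_logic as a stand-in for ROBDD reduction (SymPy does not build ROBDDs) and derives one candidate test case per term of the simplified cause–effect relation; the causes and the effect expression are hypothetical.

```python
# Illustrative sketch only: the paper reduces the cause-effect Boolean relation
# with an ROBDD and then selects tests with PSO; here SymPy's Quine-McCluskey
# based simplify_logic stands in for the reduction step, and each term of the
# simplified DNF is turned into one candidate test case.
from sympy import symbols
from sympy.logic.boolalg import simplify_logic, And, Or

a, b, c = symbols("a b c")            # causes (input conditions)
effect = Or(And(a, b), And(a, b, c))  # effect expressed over the causes

reduced = simplify_logic(effect, form="dnf")   # -> a & b
terms = reduced.args if isinstance(reduced, Or) else (reduced,)

test_cases = []
for term in terms:
    lits = term.args if isinstance(term, And) else (term,)
    # Causes not mentioned by the term default to False in this toy example.
    tc = {s: False for s in (a, b, c)}
    for lit in lits:
        tc[lit] = True      # assumes positive literals only, for brevity
    test_cases.append(tc)
print(test_cases)           # candidate inputs that should trigger the effect
```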

4.
Testing is an integral part of software development. Today's fast-paced system development has rendered traditional testing techniques obsolete, so automated testing techniques are needed to keep up with development speed. Model-based testing (MBT) is a technique that uses system models to generate and execute test cases automatically. Test data generation (TDG) in many existing model-based test case generation (MB-TCG) approaches is still manual, and automatic, effective TDG can further reduce testing cost while detecting more faults. This study proposes an automated TDG approach for MB-TCG using the extended finite state machine (EFSM) model. The proposed approach integrates MBT with combinatorial testing. The information available in an EFSM model and the boundary value analysis strategy are used to automate the input domain classifications that were done manually in the existing approach. The results showed that the proposed approach detected 6.62 percent more faults than the conventional MB-TCG but at the same time generated 43 more tests. The proposed approach detects faults effectively, but further treatment of the generated tests, such as test case prioritization, should be applied to increase the effectiveness and efficiency of testing.  相似文献
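Boundary value analysis over a numeric input domain can be sketched as below; the EFSM guard and its bounds are hypothetical, since the abstract does not give the paper's classification rules.

```python
# Minimal boundary value analysis sketch for a numeric input domain.
# The guard and domain bounds are hypothetical; the paper derives them
# from EFSM transition guards.

def boundary_values(lo, hi):
    """Classic BVA points: just outside, on, and just inside each bound, plus a nominal value."""
    nominal = (lo + hi) // 2
    return sorted({lo - 1, lo, lo + 1, nominal, hi - 1, hi, hi + 1})

# Hypothetical guard on an EFSM transition: 1 <= pin_attempts <= 3
print(boundary_values(1, 3))   # [0, 1, 2, 3, 4]
```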

5.
Video games comprise a multi-billion-dollar industry. Companies invest huge amounts of money in releasing their games, and part of that money is invested in testing them. Current game testing methods involve manually executing pre-written test cases in the game; each test case may or may not reveal a bug. In a game, a bug is said to occur when the game does not behave according to its intended design. The process of writing test cases for games requires standardization, and we believe this standardization can be achieved by applying experimental design to video game testing. In this research, we discuss the application of combinatorial testing, specifically covering arrays, to testing games. Combinatorial testing is a method of experimental design that is used to generate test cases and is primarily used for commercial software testing. In addition to discussing the implementation of combinatorial testing techniques in video game testing, we present an algorithm that can be used to sort test cases to help developers find the combination of settings that results in a bug.  相似文献
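A covering-array-style pairwise generator can be sketched with a simple greedy loop; the game settings below are hypothetical, and real covering-array generators are considerably more sophisticated.

```python
# Sketch of greedy pairwise (2-way) test generation for hypothetical game settings.
# This greedy loop just illustrates the idea of covering every value pair with
# far fewer tests than exhaustive enumeration.
from itertools import combinations, product

settings = {                      # hypothetical configuration space
    "resolution": ["720p", "1080p", "4k"],
    "difficulty": ["easy", "hard"],
    "controller": ["pad", "keyboard"],
}
names = list(settings)
uncovered = {((n1, v1), (n2, v2))
             for n1, n2 in combinations(names, 2)
             for v1 in settings[n1] for v2 in settings[n2]}

tests = []
while uncovered:
    # Pick the full configuration that covers the most still-uncovered pairs.
    best = max(product(*settings.values()),
               key=lambda row: len(uncovered & set(combinations(zip(names, row), 2))))
    tests.append(dict(zip(names, best)))
    uncovered -= set(combinations(zip(names, best), 2))

print(len(tests))     # far fewer tests than the 3*2*2 = 12 exhaustive combinations
for t in tests:
    print(t)
```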

6.
Both unit and integration testing are crucial for almost any software application, because each examines the product through a distinct process. Under resource constraints, when software is subjected to modifications, the drastic increase in the number of test cases forces testers to adopt a test optimization strategy. One such strategy is test case prioritization (TCP). Existing works have proposed various methodologies that re-order system-level test cases to boost either fault detection capability or coverage efficacy as early as possible. However, single-objective functions and the lack of dissimilarity among the re-ordered test sequences have weakened these approaches. Considering these gaps, and scenarios in which rapid, continuous software updates make intensive unit and integration testing more fragile, this study introduces a memetics-inspired methodology for TCP. The proposed structure is first embedded with diverse parameters, and then the traditional steps of the shuffled frog-leaping algorithm (SFLA) are followed to prioritize test cases at the unit and integration levels. On five standard test functions, a comparative analysis between established algorithms and the proposed approach shows that the latter improves the coverage rate and fault detection of the re-ordered test sets. Results for the mean average percentage of fault detection (APFD) confirm that the proposed approach exceeds the memetic, basic multi-walk, PSO, and optimized multi-walk approaches by 21.7%, 13.99%, 12.24%, and 11.51%, respectively.  相似文献
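APFD itself has a standard definition that can be computed directly; the fault-detection matrix below is hypothetical illustration data.

```python
# Average percentage of fault detection (APFD) for a prioritized test sequence.
# Standard definition: APFD = 1 - (sum of first-detection positions)/(n*m) + 1/(2n).

def apfd(order, detects):
    """order: test ids in execution order; detects: {test_id: set of fault ids}.
    Assumes every fault is detected by at least one test in the suite."""
    faults = set().union(*detects.values())
    n, m = len(order), len(faults)
    first_pos = {}
    for pos, tid in enumerate(order, start=1):
        for f in detects.get(tid, ()):
            first_pos.setdefault(f, pos)
    return 1 - sum(first_pos[f] for f in faults) / (n * m) + 1 / (2 * n)

detects = {"t1": {"f1"}, "t2": {"f1", "f2"}, "t3": set(), "t4": {"f3"}}
print(apfd(["t2", "t4", "t1", "t3"], detects))   # good ordering -> higher APFD
print(apfd(["t3", "t1", "t4", "t2"], detects))   # poor ordering -> lower APFD
```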

7.
Interest in selecting an appropriate cloud data center is increasing rapidly due to the popularity and continuous growth of the cloud computing sector. Cloud data center selection is made harder by the ever-increasing number of user requests and of data centers required to execute them. The cloud service broker policy defines how a cloud data center is selected, which is an NP-hard problem that needs an effective method to obtain an efficient, superior solution. The differential evolution algorithm is a metaheuristic characterized by speed and robustness, and it is well suited to selecting an appropriate cloud data center. This paper presents a modified differential evolution algorithm-based cloud service broker policy for selecting the most appropriate data center in a cloud computing environment. The differential evolution algorithm is modified with a proposed new mutation technique that enhances performance and provides an appropriate selection of data centers. The proposed policy's superiority in selecting the most suitable data center is evaluated using the CloudAnalyst simulator, and the results are compared with state-of-the-art cloud service broker policies.  相似文献
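The abstract does not describe the proposed mutation technique, so the sketch below shows only the classical DE/rand/1/bin step that such a policy would build on, with a toy cost function standing in for the broker's data-center selection objective.

```python
# Generic differential evolution step (DE/rand/1/bin). The paper proposes a
# modified mutation whose exact form is not given in the abstract; the cost
# function standing in for data-center response time is hypothetical.
import random

def de_step(pop, cost, F=0.5, CR=0.9):
    """One generation of DE over a population of real-valued vectors."""
    dim = len(pop[0])
    new_pop = []
    for i, x in enumerate(pop):
        r1, r2, r3 = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        mutant = [r1[d] + F * (r2[d] - r3[d]) for d in range(dim)]      # mutation
        j_rand = random.randrange(dim)
        trial = [mutant[d] if (random.random() < CR or d == j_rand) else x[d]
                 for d in range(dim)]                                    # crossover
        new_pop.append(trial if cost(trial) <= cost(x) else x)           # selection
    return new_pop

cost = lambda v: sum((vi - 0.3) ** 2 for vi in v)   # toy stand-in for broker cost
pop = [[random.random() for _ in range(4)] for _ in range(10)]
for _ in range(50):
    pop = de_step(pop, cost)
print(min(cost(p) for p in pop))
```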

8.
Denoising of magnetic resonance (MR) brain images has been the focus of numerous studies. The performance of subsequent image processing stages in automated image analysis is substantially improved by explicit consideration of noise. Nonlocal means (NLM) is a popular denoising method that exploits the redundancy usually present in an image to restore a noise-free image: the restored value of a pixel is computed as a weighted average of candidate pixels in a search window. In this article, we propose an improved version of the NLM algorithm, modified in two ways. First, a robust threshold criterion is introduced, which helps select suitable pixels for participation in the restoration process. Second, the search window size is made adaptive using a window adaptation test based on the proposed threshold criterion. The modified NLM algorithm is named improved adaptive nonlocal means (IANLM). An alternative implementation of IANLM is also proposed, which exploits the image smoothness property to yield better denoising performance. The proposed modifications significantly reduce the computational burden. Experiments are performed on simulated and real brain MR images at various noise levels. The results indicate that the proposed algorithm produces better denoising results (quantitatively and qualitatively) and is also computationally more efficient. Moreover, the proposed technique is incorporated into a previously proposed segmentation framework to check its validity in the practical scenario of segmentation. Improved segmentation results (quantitative and qualitative) verify the practical usefulness of the proposed algorithm in real-world medical applications. © 2013 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 23, 235–248, 2013  相似文献
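A minimal nonlocal-means sketch on a 1-D signal shows the weighted-average restoration the abstract describes; the paper's IANLM additions (robust threshold for pixel selection and adaptive search window) are not reproduced here.

```python
# Minimal nonlocal-means sketch on a 1-D signal: each sample is restored as a
# weighted average of candidates in a search window, weighted by patch similarity.
import math

def nlm_1d(signal, patch=2, search=5, h=0.5):
    n = len(signal)
    out = []
    for i in range(n):
        num = den = 0.0
        for j in range(max(0, i - search), min(n, i + search + 1)):
            # Squared distance between the patches centred at i and j (wrap-around).
            d = sum((signal[(i + k) % n] - signal[(j + k) % n]) ** 2
                    for k in range(-patch, patch + 1))
            w = math.exp(-d / (h * h))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

noisy = [0, 0.1, -0.05, 0, 1.1, 0.95, 1.0, 1.05, 0.02, -0.1]
print([round(v, 2) for v in nlm_1d(noisy)])
```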

9.
曹斌 《硅谷》2014,(2):27-29
Software testing is the foundation of software quality assurance; unit testing is an important stage of software testing, and the design of unit test cases is a key part of it. Focusing on a module of the xx-series embedded satellite-borne software, this article introduces and discusses unit testing methods in detail.  相似文献

10.
Execution-profile information commonly used in observation-based testing, such as basic blocks and branch edges, cannot fully express a test case's ability to reveal defects, so the selected test cases achieve a low failure-detection rate. Based on an analysis of how different program elements expose different defect types, this paper proposes using a composite execution profile that comprehensively covers defects to reduce the suite to an efficient test subset. Experimental results show that this reduction technique jointly considers how execution profiles characterize a test case's defect coverage and how failing test cases are distributed, so the resulting test subset reveals program defects more effectively and improves testing efficiency and credibility.  相似文献
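A composite-profile-based reduction can be sketched as a greedy set cover over the entities each test case exercises; the profile entities below (blocks, branches, def-use pairs) are hypothetical labels, not the paper's exact profile definition.

```python
# Greedy test-suite reduction sketch: keep adding the test that covers the most
# still-uncovered profile entities until everything covered by the full suite
# is covered. The entity labels are hypothetical.

def reduce_suite(profiles):
    """profiles: {test_id: set of covered entities}; returns a reduced test list."""
    remaining = set().union(*profiles.values())
    reduced = []
    while remaining:
        best = max(profiles, key=lambda t: len(profiles[t] & remaining))
        if not profiles[best] & remaining:
            break
        reduced.append(best)
        remaining -= profiles[best]
    return reduced

profiles = {
    "t1": {"block:3", "branch:3T", "du:x@3-7"},
    "t2": {"block:3", "block:5", "branch:5F"},
    "t3": {"block:5", "branch:5F", "du:x@3-7"},
}
print(reduce_suite(profiles))   # e.g. ['t1', 't2'] covers everything
```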

11.
Software testing is the process of executing software in order to find defects and errors. Traditional exhaustive testing is impractical because of time, cost, and labor constraints. To save time and resources and improve testing efficiency, a small amount of test data is carefully selected from the large pool of available test cases so that using these data achieves the best testing effect and efficiently reveals the errors and defects hidden in the software.  相似文献

12.
An improved nonlinear VKS model of viscoelastic lag dampers based on complex modulus   (Cited 3 times in total: 0 self-citations, 3 by others)
Parameters of the nonlinear VKS model of a viscoelastic lag damper were identified from complex modulus data obtained in single-frequency, symmetric excitation tests. To make the model applicable to both single-frequency and dual-frequency conditions, an improved nonlinear VKS model incorporating a frequency correction is proposed, and the analytical model is validated with complex modulus test data under single- and dual-frequency conditions. The improved model correctly reflects the nonlinear characteristics of the damper's complex modulus and can be used to predict its complex modulus under single- and dual-frequency conditions. Applying the model to helicopter air resonance analysis shows that, for a certain helicopter, the regressive lag mode damping decreases by about 37% from hover to forward flight.  相似文献

13.
To incorporate the effect of test coverage, we propose two novel discrete nonhomogeneous Poisson process software reliability growth models that use failure data and test coverage, both of which are expressed in terms of the number of executed test cases rather than execution time. Because the test coverage function (TCF) is one of the most important factors in coverage-based software reliability growth models, we first discuss a discrete TCF based on the beta function. We then develop two discrete mean value functions (MVFs) that integrate test coverage and imperfect debugging. Finally, the proposed discrete TCF and MVFs are evaluated and validated on two actual software reliability data sets. The numerical results demonstrate that the proposed TCF and MVFs provide better estimation and fitting in comparison. Copyright © 2012 John Wiley & Sons, Ltd.  相似文献

14.
This paper presents novel control schemes for testing embedded cores in a system-on-a-chip. It converts a traditional built-in self-test (BIST) scheme into an externally controllable scheme to achieve high test quality within optimal test execution time without inserting test points. Interactive BIST promotes design and test reuse without revealing IP information by using a pattern matching technique instead of fault simulation.  相似文献   

15.
For constrained optimization problems, an enhanced differential evolution algorithm for constrained optimization (ECDE) is proposed. Constraints are handled with update rules for the infeasible and feasible regions, which avoids setting the penalty factors required by traditional penalty function methods and keeps the algorithm simple to implement. The mutation operation of the DE algorithm is improved: the three selected parent individuals are traversed in all orderings to produce six candidate solutions, and the one with the best fitness is taken as the result of the mutation operation, which greatly improves the stability, robustness, and search performance of the algorithm. Simulations on four test functions and one design example show that the proposed algorithm converges quickly and has good stability and robustness.  相似文献
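The improved mutation described above can be sketched directly: the three parents are taken in all six orderings, each yields a DE/rand/1 mutant, and the fittest mutant is kept; the sphere fitness function is a hypothetical stand-in, and the paper's feasibility-based constraint handling is omitted.

```python
# Sketch of the ECDE-style mutation described above: each ordering of the three
# parents yields a mutant v = p1 + F*(p2 - p3); the best of the six is kept.
from itertools import permutations

def ecde_mutation(parents, fitness, F=0.5):
    """parents: three real-valued vectors; returns the best of the six mutants."""
    candidates = []
    for p1, p2, p3 in permutations(parents, 3):
        candidates.append([a + F * (b - c) for a, b, c in zip(p1, p2, p3)])
    return min(candidates, key=fitness)      # minimization problem assumed

sphere = lambda v: sum(x * x for x in v)     # hypothetical fitness function
parents = [[1.0, 2.0], [0.5, -1.0], [-0.2, 0.3]]
print(ecde_mutation(parents, sphere))
```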

16.
The redundancy optimization problem is a well-known NP-hard problem that involves selecting elements and redundancy levels to maximize system performance under various system-level constraints. This article presents an efficient algorithm based on the harmony search algorithm (HSA) to solve this optimization problem. The HSA is a nature-inspired algorithm that mimics the improvisation process of music players. Two kinds of problems are considered in testing the proposed algorithm: the first is limited to the binary series–parallel system, where elements and redundancy levels are selected to maximize system reliability under various system-level constraints; the second concerns multi-state series–parallel systems with performance levels ranging from perfect operation to complete failure, in which identical redundant elements are included to achieve a desired level of availability. Numerical results for test problems from previous research are reported and compared. The results show that the HSA can provide very good solutions compared with those obtained by other approaches.  相似文献
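A minimal harmony search loop (memory consideration, pitch adjustment, replacement of the worst harmony) can be sketched as below on a continuous toy objective; the parameter values are hypothetical, and the paper's redundancy allocation problems are discrete and constrained.

```python
# Minimal harmony search sketch: improvise a new solution from the harmony
# memory (HMCR), occasionally pitch-adjust it, and replace the worst harmony
# if the new one is better. Parameter values and the toy objective are hypothetical.
import random

def harmony_search(obj, dim, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.1, iters=500):
    memory = [[random.uniform(*bounds) for _ in range(dim)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if random.random() < hmcr:                     # memory consideration
                x = random.choice(memory)[d]
                if random.random() < par:                  # pitch adjustment
                    x += random.uniform(-bw, bw)
            else:                                          # random selection
                x = random.uniform(*bounds)
            new.append(min(max(x, bounds[0]), bounds[1]))
        worst = max(range(hms), key=lambda i: obj(memory[i]))
        if obj(new) < obj(memory[worst]):
            memory[worst] = new
        # NOTE: redundancy allocation uses discrete element/level choices and
        # reliability constraints; this continuous toy objective only shows the flow.
    return min(memory, key=obj)

print(harmony_search(lambda v: sum((x - 1) ** 2 for x in v), dim=3, bounds=(-5, 5)))
```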

17.
Software testing is an important and cost-intensive activity in software development, and test case generation is the major contributor to that cost. Requirement-based testing is an approach in which test cases are derived from requirements without considering the implementation's internal structure; it covers both functional and nonfunctional requirements. The objective of this study is to explore approaches that generate test cases from requirements. A systematic literature review was conducted based on two research questions and extensive quality assessment criteria, identifying 30 primary studies out of 410 studies spanning 2000 to 2018. The review finds that 53% of the included studies addressing requirements-based testing are journal papers, 42% conference papers, and 5% book chapters. Most of the studies use UML, activity, and use case diagrams for test case generation from requirements. One of the significant lessons learned is that most software testing errors trace back to errors in natural language requirements. A substantial amount of work focuses on UML diagrams for test case generation, which cannot capture all of the developed system's attributes, and there is a lack of UML-based models that can generate test cases from natural language requirements by refining them in context. Coverage criteria indicate how thoroughly the testing has been performed: 12.37% of studies use requirements coverage, 20% path coverage, and 17% basic coverage.  相似文献

18.
The degree of toolpath redundancy is a critical concern when looking for an appropriate toolpath strategy for free-form surface machining, so quantitative analysis of toolpath redundancy is important for CAM applications. In this work, a novel approach for predicting toolpath redundancy in free-form surface machining is proposed. First, a general mathematical model of the toolpath redundancy rate is proposed based on an analysis of local toolpath intervals and their deviation from the optimal values. Then, taking the widely used iso-planar machining as a case study, steep-wall features, which cause surface slope rates to vary along machining strips, are identified as the main cause of toolpath redundancy, and a method for automatically recognising steep-wall features on free-form surfaces is developed. Finally, based on the steep-wall feature segmentation, an algorithm is presented to quantitatively predict the toolpath redundancy rate for free-form surface machining. A comparison between the predicted redundancy rates and experimental results over a number of case studies validates that the proposed approach can effectively predict the redundancy rate of a surface machining case before the real toolpaths are generated.  相似文献
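The abstract does not give the redundancy-rate model, so the following is only a plausible illustration of "local intervals versus optimal values": interval width not used relative to the scallop-limited optimal interval is counted as redundant; the sample values are hypothetical.

```python
# Hypothetical illustration of a toolpath redundancy rate: where the actual
# interval between adjacent toolpaths is smaller than the scallop-height-limited
# optimal interval, the unused width counts as redundant. This is not the
# paper's exact model, only one plausible reading of "intervals vs optimal".

def redundancy_rate(samples):
    """samples: list of (actual_interval, optimal_interval) along machining strips."""
    redundant = sum(max(opt - act, 0.0) for act, opt in samples)
    optimal = sum(opt for _, opt in samples)
    return redundant / optimal

# Flat region matches the optimal interval; a steep wall forces narrower intervals.
samples = [(2.0, 2.0), (2.0, 2.0), (0.8, 2.0), (0.6, 2.0)]
print(f"{redundancy_rate(samples):.1%}")   # share of machining width that is redundant
```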

19.
Maxville  V. Armarego  J. Lam  C.P. 《Software, IET》2009,3(5):369-380
With increasing use of component-based development (CBD), the process for selecting software from repositories is a critical concern for quality systems development. As support for developers blending in-house and third party software, the context-driven component evaluation (CdCE) process provides a three-phase approach to software selection: filtering to a short list, functional evaluation and ranking. The process was developed through iterative experimentation on real-world data. CdCE has tool support to generate classifier models, shortlists and test cases as artefacts that provide for a repeatable, transparent process that can be reused as the system evolves. Although developed for software component selection, the CdCE process framework can be easily modified for other selection tasks by substituting templates, tools, evaluation criteria and/or repositories. In this article the authors describe the CdCE process and its development, the CdCE framework as a reusable pattern for software selection and provide a case study where the process is applied.  相似文献   
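The three phases named in the abstract (filtering to a shortlist, functional evaluation, ranking) can be sketched as a simple pipeline; the component records, mandatory criteria, and scoring weights are hypothetical.

```python
# Sketch of the three CdCE phases named above: filter to a shortlist,
# evaluate functionally, then rank. All data and weights are hypothetical.

components = [
    {"name": "csv-parser-a", "license_ok": True,  "platform_ok": True,  "tests_passed": 18, "docs": 0.8},
    {"name": "csv-parser-b", "license_ok": True,  "platform_ok": False, "tests_passed": 20, "docs": 0.9},
    {"name": "csv-parser-c", "license_ok": True,  "platform_ok": True,  "tests_passed": 12, "docs": 0.4},
]

# Phase 1: filtering to a shortlist on mandatory criteria.
shortlist = [c for c in components if c["license_ok"] and c["platform_ok"]]

# Phase 2: functional evaluation (here: fraction of 20 acceptance tests passed).
for c in shortlist:
    c["func_score"] = c["tests_passed"] / 20

# Phase 3: ranking by a weighted score.
ranked = sorted(shortlist, key=lambda c: 0.7 * c["func_score"] + 0.3 * c["docs"], reverse=True)
print([c["name"] for c in ranked])
```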

20.
The π measure     
Nikolik  B. 《Software, IET》2008,2(5):404-416
A novel software measure, called the π measure, is proposed for evaluating the fault-detection effectiveness of test sets, for measuring test-case independence, and for measuring code complexity. The π measure is interpreted as the degree of run-time control and data difference at the code level resulting from executing a program on a set of test cases. Unlike other well-known static and dynamic complexity measures, the π measure is an execution measure, computed using only run-time information. The Diversity Analyzer computes the π measure for programs written in C, C++, C# and VB in .NET. The experimental data presented here show a correlation between the π measure, test case independence and fault-detection rates.  相似文献
