Similar Documents
20 similar documents found (search time: 31 ms)
1.
Detecting defects early through static code analysis is an important way to improve software quality. Static defect analysis tools (such as FINDBUGS, JLINT, ESC/JAVA, PMD, and COVERITY) have been shown to successfully identify large numbers of potential software defects [1-3]. However, shortcomings in their usability and effectiveness severely limit their wider adoption. The usability problems are that (a) each individual tool is only good at detecting particular types of defects, so several tools must be combined for comprehensive detection, and (b) installing, configuring, and running each tool costs users considerable time and effort. The effectiveness problems are that static analysis results typically contain a large number of false positives, as well as many unimportant defect reports (ones that do not prompt programmers to make fixes). To address these problems, this paper proposes and builds an extensible static code defect analysis service (Code Defect Analysis Service, CODAS). Built on a highly extensible architecture, CODAS encapsulates and integrates multiple independent defect detection tools, and aggregates and ranks their defect reports, thereby exploiting the strengths of each individual tool and substantially improving the usability and effectiveness of static defect analysis.
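As an illustration of the aggregate-and-rank idea behind such a service, here is a minimal Python sketch; the alert schema, adapter names, and scoring weights are assumptions for illustration, not CODAS's actual design.

```python
# Minimal sketch: merge and rank warnings from several static analyzers.
# The Alert schema, tool names, and scoring weights are illustrative
# assumptions, not CODAS's actual design.
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Alert:
    tool: str       # e.g. "findbugs", "pmd" -- hypothetical adapter names
    file: str
    line: int
    category: str   # defect category, normalized across tools
    severity: int   # 1 (low) .. 5 (high), normalized per tool

def aggregate(alert_lists):
    """Group alerts that point at the same location and category."""
    groups = defaultdict(list)
    for alerts in alert_lists:
        for a in alerts:
            groups[(a.file, a.line, a.category)].append(a)
    return groups

def rank(groups):
    """Rank groups: cross-tool agreement outweighs single-tool severity."""
    def score(alerts):
        return 10 * len({a.tool for a in alerts}) + max(a.severity for a in alerts)
    return sorted(groups.values(), key=score, reverse=True)

findbugs = [Alert("findbugs", "Foo.java", 42, "NULL_DEREF", 4)]
pmd = [Alert("pmd", "Foo.java", 42, "NULL_DEREF", 3),
       Alert("pmd", "Bar.java", 7, "UNUSED_VAR", 1)]
for group in rank(aggregate([findbugs, pmd])):
    print(group)
```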

2.
Software defects due to coding errors continue to plague the industry with disastrous impact, especially in the enterprise application software category. Identifying how much of these defects is specifically due to coding errors is a challenging problem. In this paper, we investigate the best methods for preventing new coding defects in enterprise resource planning (ERP) software, and for discovering and fixing existing coding defects. For this purpose, a large-scale survey-based ex-post-facto study was conducted, coupled with experiments involving static code analysis tools on both sample code and a real-life, million-line open-source ERP system. The survey respondents had experience developing ERP software. This research sought to determine whether software defects could be merely mitigated or totally eliminated, and what supporting policies, procedures and infrastructure were needed to remedy the problem. In this paper, we introduce a hypothetical framework developed to address our research questions, the hypotheses we have conjectured, the research methodology we have used, and the data analysis methods used to validate the stated hypotheses. Our study revealed that: (a) the best way for ERP developers to discover coding-error-based defects in existing programs is to choose an appropriate programming language and perform a combination of manual and automated code auditing, static code analysis, and formal test case design, execution and analysis; (b) the most effective way to mitigate defects in an ERP system is to track the defect densities in the ERP software, fix the defects found, perform regression testing, and update the resulting defect density statistics; and (c) the impact of epistemological and legal commitments on the defect densities of ERP systems is inconclusive. We feel that our proposed model has the potential to vastly improve the quality of ERP and other similar software by reducing coding-error defects, and we recommend that future research aim at testing the model in actual production environments.

3.
SUDS is a powerful infrastructure for creating dynamic software defect detection tools. It contains phases for both static analysis and dynamic instrumentation, allowing users to create tools that take advantage of both paradigms. The results of static analysis phases can be used to improve the quality of dynamic defect detection tools created with SUDS by focusing the instrumentation on types of defects, sources of data, or regions of code. The instrumentation engine is designed in a manner that allows users to create their own correctness models quickly, but is flexible enough to support construction of a wide range of different tools. The effectiveness of SUDS is demonstrated by showing that it is capable of finding bugs and that performance improves when static analysis is used to eliminate unnecessary instrumentation.
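The following minimal sketch illustrates the general idea of using static results to prune dynamic instrumentation; the inputs, the injected check, and the dereference heuristic are illustrative assumptions, not SUDS's actual phases or APIs.

```python
# Sketch of static-analysis-guided instrumentation pruning. The 'safe_lines'
# input, the injected check, and the crude dereference heuristic are all
# illustrative assumptions, not SUDS's actual phases or APIs.
def instrument(source_lines, safe_lines):
    """Insert a runtime check before each potentially unsafe dereference.

    source_lines: list of (line_no, text) pairs of the program under test
    safe_lines:   line numbers a static phase proved cannot fail
    """
    out = []
    for no, text in source_lines:
        if no not in safe_lines and "*" in text:   # crude dereference test
            out.append(f"assert_valid_pointer({no});  // injected check")
        out.append(text)
    return out

program = [(1, "int v = *p;"), (2, "int w = *q;")]
print("\n".join(instrument(program, safe_lines={1})))  # only line 2 checked
```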

4.
Despite the interest and the increasing number of static analysis tools for detecting defects in software systems, there is still no consensus on the actual gains that such tools introduce in software development projects. Therefore, this article reports a study carried out to evaluate the degree of correspondence and correlation between post-release defects (i.e., field defects) and warnings issued by FindBugs, a bug finding tool widely used in Java systems. The study aimed to evaluate two types of relations: static correspondence (when warnings contribute to find the static program locations changed to remove field defects) and statistical correlation (when warnings serve as early indicators for future field defects). As a result, we have concluded that there is no static correspondence between field defects and warnings. However, statistical tests showed that there is a moderate level of correlation between warnings and such kinds of software defects.
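A small sketch of the kind of statistical-correlation check described: do files with more FindBugs warnings also accumulate more post-release defects? The per-file counts below are made-up assumptions; the study's actual dataset and tests differ.

```python
# Sketch: rank correlation between warning counts and field-defect counts.
# The per-file counts are illustrative assumptions, not the study's data.
from scipy.stats import spearmanr

warnings_per_file = [12, 3, 7, 0, 5, 9, 1]   # FindBugs warnings (assumed)
defects_per_file  = [ 4, 1, 2, 0, 1, 5, 0]   # field defects (assumed)

rho, p_value = spearmanr(warnings_per_file, defects_per_file)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A moderate rho with a small p would support warnings as early indicators,
# even when no warning maps statically onto a defect-fix location.
```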

5.
Software security can be improved by identifying and correcting vulnerabilities. In order to reduce the cost of rework, vulnerabilities should be detected as early and efficiently as possible. Static automated code analysis is an approach for early detection. So far, only a few empirical studies have been conducted in an industrial context to evaluate static automated code analysis. A case study was conducted to evaluate static automated code analysis in industry, focusing on its defect detection capability, deployment, and usage, with an emphasis on software security. We identified that the tool was capable of detecting memory-related vulnerabilities, but few vulnerabilities of other types. The deployment of the tool played an important role in its success as an early vulnerability detector, as did the developers' perception of the tool's merit. Developers found it harder to classify the warnings from the tool than to correct them. The correction of false positives in some cases created new vulnerabilities in previously safe code. With regard to defect detection ability, we conclude that static code analysis is able to identify vulnerabilities in different categories. In terms of deployment, we conclude that the tool should be integrated with bug reporting systems, and developers need to share the responsibility for classifying and reporting warnings. With regard to tool usage by developers, we propose that multiple people (at least two) classify each warning, and likewise jointly decide how to act on it. Copyright © 2012 John Wiley & Sons, Ltd.

6.
In this study, defect tracking is used as a proxy method to predict software readiness. The number of remaining defects in an application under development is one of the most important factors in deciding whether a piece of software is ready to be released. By comparing the predicted number of faults with the number of faults discovered in testing, the software manager can decide whether the software is likely ready to be released. The predictive model developed in this research can predict: (i) the number of faults (defects) likely to exist, (ii) the estimated number of code changes required to correct a fault and (iii) the estimated amount of time (in minutes) needed to make the changes in the respective classes of the application. The model uses product metrics as independent variables to make predictions. These metrics are selected depending on the nature of the source code with regard to architecture layers, types of faults and the contribution factors of these metrics. A neural network model with a genetic training strategy is introduced to improve prediction results for estimating software readiness. This genetic-net combines a genetic algorithm with a statistical estimator to produce a model which also shows the usefulness of the inputs. The model is divided into three parts: (1) a prediction model for the presentation logic tier, (2) a prediction model for the business tier and (3) a prediction model for the data access tier. Existing object-oriented metrics and complexity software metrics are used in the business tier prediction model. New sets of metrics have been proposed for the presentation logic tier and data access tier. These metrics are validated using data extracted from real-world applications. The trained models can be used as tools to assist software managers in making software release decisions.
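A compact sketch of the "genetic-net" idea: a tiny neural network whose weights are fitted by a genetic algorithm rather than backpropagation. The metrics, network size, GA parameters, and toy data are all assumptions for illustration, not the paper's model.

```python
# Sketch of a "genetic-net": GA-fitted weights for a one-hidden-layer MLP.
# Metrics, network size, GA parameters, and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def predict(weights, X, hidden=4):
    """One-hidden-layer MLP; weights is a flat parameter vector."""
    n_in = X.shape[1]
    w1 = weights[: n_in * hidden].reshape(n_in, hidden)
    w2 = weights[n_in * hidden :].reshape(hidden, 1)
    return np.tanh(X @ w1) @ w2

def fitness(weights, X, y):
    return -np.mean((predict(weights, X).ravel() - y) ** 2)  # negative MSE

def evolve(X, y, pop=50, gens=200, sigma=0.1):
    n_w = X.shape[1] * 4 + 4                 # matches hidden=4 above
    population = rng.normal(size=(pop, n_w))
    for _ in range(gens):
        scores = np.array([fitness(w, X, y) for w in population])
        parents = population[np.argsort(scores)[-pop // 2 :]]   # selection
        children = parents + rng.normal(scale=sigma, size=parents.shape)
        population = np.vstack([parents, children])             # mutation
    return max(population, key=lambda w: fitness(w, X, y))

# Toy data: product metrics per class -> fault counts (assumed values).
X = rng.random((30, 3))            # e.g. size, coupling, complexity
y = 2 * X[:, 0] + X[:, 2]          # synthetic fault counts
best = evolve(X, y)
print("training MSE:", -fitness(best, X, y))
```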

7.
A wide range of commercial consumer devices such as mobile phones and smart televisions rely on embedded systems software to provide their functionality. Testing is one of the most commonly used methods for validating this software, and improved testing approaches could increase these devices' dependability. In this article we present an approach for performing such testing. Our approach is composed of two techniques. The first technique involves the selection of test data; it utilizes test adequacy criteria that rely on dataflow analysis to distinguish points of interaction between specific layers in embedded systems and between individual software components within those layers, while also tracking interactions between tasks. The second technique involves the observation of failures: it utilizes a family of test oracles that rely on instrumentation to record various aspects of a system's execution behavior, and compare observed behavior to certain intended system properties that can be derived through program analysis. Empirical studies of our approach show that our adequacy criteria can be effective at guiding the creation of test cases that detect faults, and our oracles can help expose faults that cannot easily be found using typical output-based oracles. Moreover, the use of our criteria accentuates the fault-detection effectiveness of our oracles.

8.
Two experimental comparisons of data flow and mutation testing are presented. These techniques are widely considered to be effective for unit-level software testing, but can only be analytically compared to a limited extent. We compare the techniques by evaluating the effectiveness of test data developed for each. We develop ten independent sets of test data for a number of programs: five to satisfy the mutation criterion and five to satisfy the all-uses data-flow criterion. These test sets are developed using automated tools, in a manner consistent with the way a test engineer might be expected to generate test data in practice. We use these test sets in two separate experiments. First we measure the effectiveness of the test data that was developed for one technique in terms of the other. Second, we investigate the ability of the test sets to find faults. We place a number of faults into each of our subject programs, and measure the number of faults that are detected by the test sets. Our results indicate that while both techniques are effective, mutation-adequate test sets are closer to satisfying the data flow criterion, and detect more faults.
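To make the mutation-adequacy measurement concrete, here is a toy sketch of how a mutation score is computed; the program, mutants, and tests are stand-ins, not the study's subject programs.

```python
# Sketch: a test set's mutation score is the fraction of injected mutants it
# kills. The program, mutants, and tests are illustrative stand-ins.
def original(a, b):
    return max(a, b)

mutants = [
    lambda a, b: min(a, b),        # relational operator replaced
    lambda a, b: max(a, a),        # operand replaced
    lambda a, b: max(a, b) + 1,    # constant perturbation
]

tests = [(1, 2), (5, 3), (0, 0)]

def mutation_score(tests, mutants):
    killed = 0
    for m in mutants:
        # A mutant is killed if any test observes a different output.
        if any(m(a, b) != original(a, b) for a, b in tests):
            killed += 1
    return killed / len(mutants)

print(f"mutation score: {mutation_score(tests, mutants):.2f}")  # 1.00 here
```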

9.
Context

Security vulnerabilities discovered later in the development cycle are more expensive to fix than those discovered early. Therefore, software developers should strive to discover vulnerabilities as early as possible. Unfortunately, the large size of code bases and lack of developer expertise can make discovering software vulnerabilities difficult. A number of vulnerability discovery techniques are available, each with its own strengths.

Objective

The objective of this research is to aid in the selection of vulnerability discovery techniques by comparing the vulnerabilities detected by each and comparing their efficiencies.

Method

We conducted three case studies using three electronic health record systems to compare four vulnerability discovery techniques: exploratory manual penetration testing, systematic manual penetration testing, automated penetration testing, and automated static analysis.

Results

In our case study, we found empirical evidence that no single technique discovered every type of vulnerability. We discovered that the specific set of vulnerabilities identified by one tool was largely orthogonal to that of other tools. Systematic manual penetration testing found the most design flaws, while automated static analysis found the most implementation bugs. The most efficient discovery technique in terms of vulnerabilities discovered per hour was automated penetration testing.

Conclusion

The results show that employing a single technique for vulnerability discovery is insufficient for finding all types of vulnerabilities. Each technique identified only a subset of the vulnerabilities, which, for the most part, were independent of each other. Our results suggest that in order to discover the greatest variety of vulnerability types, at least systematic manual penetration testing and automated static analysis should be performed.

10.
H. Sözer 《Software》2015,45(10):1359-1373
Static code analysis tools automatically generate alerts for potential software faults that can lead to failures. However, these tools usually generate a very large number of alerts, some of which are false positives. Because of limited resources, it is usually hard to inspect all the alerts. As a complementary approach, runtime verification techniques verify dynamic system behavior with respect to a set of specifications. However, these specifications are usually created manually based on system requirements and constraints. In this paper, we introduce a novel approach and a toolchain for integrated static code analysis and runtime verification. Alerts that are generated by static code analysis tools are utilized for automatically generating runtime verification specifications. In turn, runtime verification results are used for automatically generating filters for static code analysis tools to eliminate false positives. The approach is illustrated through the static analysis and runtime verification of an open-source bibliography reference manager. Copyright © 2014 John Wiley & Sons, Ltd.
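A minimal sketch of the two-way integration described, with an assumed alert format and a made-up watchpoint-style specification syntax; the actual toolchain's formats are not given in the abstract.

```python
# Sketch: a static alert becomes a runtime-verification monitor, and monitors
# that never fire feed a suppression filter back to the static tool. The alert
# format and spec syntax below are illustrative assumptions.
def alert_to_spec(alert):
    """Turn a null-dereference alert into a watchpoint-style spec string."""
    return f"at {alert['file']}:{alert['line']} assert {alert['var']} != null"

def filters_from_runtime(alerts, fired_specs):
    """Alerts whose monitors never fired at runtime are suppressed as likely
    false positives on the next static-analysis run."""
    return [a for a in alerts if alert_to_spec(a) not in fired_specs]

alerts = [{"file": "Entry.java", "line": 120, "var": "author"},
          {"file": "BibDB.java", "line": 55, "var": "entry"}]
specs = [alert_to_spec(a) for a in alerts]
fired = {specs[1]}                      # only the second monitor ever fired
print("suppress next run:", filters_from_runtime(alerts, fired))
```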

11.

Context

A feature model (FM) represents the valid combinations of features in a domain. The automated extraction of information from FMs is a complex task that involves numerous analysis operations, techniques and tools. Current testing methods in this context are manual and rely on the ability of the tester to decide whether the output of an analysis is correct. However, this is acknowledged to be time-consuming, error-prone and in most cases infeasible due to the combinatorial complexity of the analyses; this is known as the oracle problem.

Objective

In this paper, we propose using metamorphic testing to automate the generation of test data for feature model analysis tools, overcoming the oracle problem. An automated test data generator is presented and evaluated to show the feasibility of our approach.

Method

We present a set of relations (so-called metamorphic relations) between input FMs and the set of products they represent. Based on these relations, and given an FM and its known set of products, a set of neighbouring FMs together with their corresponding sets of products are automatically generated and used for testing multiple analyses. Complex FMs representing millions of products can be efficiently created by applying this process iteratively.

Results

Our evaluation results using mutation testing and real faults reveal that most faults can be automatically detected within a few seconds. Two defects were found in FaMa and another two in SPLOT, two real tools for the automated analysis of feature models. Also, we show how our generator outperforms a related manual suite for the automated analysis of feature models, and how this suite can be used to guide the automated generation of test cases, obtaining important gains in efficiency.

Conclusion

Our results show that the application of metamorphic testing in the domain of automated analysis of feature models is efficient and effective in detecting most faults in a few seconds without the need for a human oracle.
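One metamorphic relation of this kind can be sketched as follows: adding a new optional feature to an FM should double its product set (each existing product appears with and without the new feature). The FM encoding and the analyzer interface below are simplified assumptions, not the paper's actual tooling.

```python
# Sketch of one metamorphic relation for feature-model analysis: adding an
# optional child feature doubles the product set. The FM encoding and the
# analyzer under test are simplified assumptions.
from itertools import chain

def products_after_adding_optional(products, new_feature):
    """Follow-up oracle: expected products of the neighbouring FM."""
    return set(chain.from_iterable(
        (p, p | {new_feature}) for p in map(frozenset, products)))

def check_analyzer(analyzer, fm, products, new_feature):
    """Compare the analyzer's output on the neighbouring FM with the
    metamorphically derived expectation; no human oracle is needed."""
    neighbour_fm = fm + [("optional", new_feature)]     # assumed FM encoding
    expected = products_after_adding_optional(products, new_feature)
    actual = set(map(frozenset, analyzer(neighbour_fm)))
    return actual == expected

base_products = [{"root"}, {"root", "gui"}]
fake_analyzer = lambda fm: [{"root"}, {"root", "gui"},
                            {"root", "log"}, {"root", "gui", "log"}]
print(check_analyzer(fake_analyzer, [("root",)], base_products, "log"))  # True
```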

12.
In order to apply the best fault-detection and diagnosis scheme, the process model and the kinds of faults to be detected must be investigated thoroughly. In particular, the process excitation and the effect of the fault under consideration play an important role. This is the starting point for choosing among the various model-based fault-detection methods. Following this strategy, two different approaches, an observer-based and a signal-based approach, are selected for the two given faults of the benchmark task. It is shown that the use of adaptive thresholds can significantly improve the performance of the fault-detection scheme with respect to the false alarm rate and the delay in detection.
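The adaptive-threshold idea can be sketched as follows, with the threshold tracking a low-pass estimate of process excitation so that transients raise the bound and cut false alarms; the parameters are illustrative, not the benchmark's tuned values.

```python
# Sketch of an excitation-adaptive residual threshold for fault detection.
# Parameters (base, gain, alpha) are illustrative assumptions.
import numpy as np

def detect_faults(residuals, inputs, base=0.5, gain=2.0, alpha=0.9):
    """Flag samples where |residual| exceeds an excitation-adaptive threshold."""
    alarms, excitation = [], 0.0
    for r, u in zip(residuals, inputs):
        # Low-pass estimate of input activity (process excitation).
        excitation = alpha * excitation + (1 - alpha) * abs(u)
        threshold = base + gain * excitation
        alarms.append(abs(r) > threshold)
    return np.array(alarms)

residuals = [0.1, 0.2, 1.4, 0.3]   # model-vs-plant output mismatch (assumed)
inputs    = [0.0, 0.0, 0.1, 2.0]   # actuator signal (assumed)
print(detect_faults(residuals, inputs))   # alarm only on the third sample
```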

13.
Product design and evaluation requires a broad and varied set of information and analysis tools. Yet effective design and evaluation of a product during its design phase is critical if production costs are to be minimized. A system is described that integrates product design specifications with material and process databases, and a simulation-based analysis module. The system allows product designs to be evaluated in terms of economic and technical criteria, and to identify the best production environment.

14.
艾红  丁俊龙  刘云龙 《控制工程》2022,29(2):223-230
To handle the large number of process variables in a cement calcination system and the strong static coupling among them, a factor analysis method is used to build a static process monitoring model. To address the temporal correlation in the system, a canonical variate dynamic principal component analysis (CVDPCA) process monitoring method is proposed by combining the classical dynamic principal component analysis (DPCA) method with canonical variate analysis (CVA), which effectively overcomes shortcomings of DPCA such as the large dimensionality of the time-lag-extended data matrix. The algorithm is applied to fault detection in the cement calcination system, and the results ...
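As a sketch of the DPCA building block referred to above: augment each sample with time-lagged copies, fit PCA on normal operating data, and monitor Hotelling's T² on the scores. The lag count, component count, and control limit are illustrative assumptions, not the paper's CVDPCA formulation.

```python
# Sketch of DPCA monitoring: time-lagged data matrix + PCA + Hotelling's T^2.
# Lags, component count, and limits are illustrative assumptions.
import numpy as np

def lagged(X, lags=2):
    """Stack [x_t, x_{t-1}, ..., x_{t-lags}] row-wise."""
    rows = [X[lags - k : len(X) - k] for k in range(lags + 1)]
    return np.hstack(rows)

def fit_dpca(X_normal, lags=2, n_pc=3):
    Z = lagged(X_normal, lags)
    mu, sd = Z.mean(0), Z.std(0) + 1e-9
    Zs = (Z - mu) / sd
    _, s, Vt = np.linalg.svd(Zs, full_matrices=False)
    P = Vt[:n_pc].T                          # principal loadings
    var = (s[:n_pc] ** 2) / (len(Zs) - 1)    # component variances
    return mu, sd, P, var

def t2(X, model, lags=2):
    """Hotelling's T^2 per sample; large values signal a possible fault."""
    mu, sd, P, var = model
    scores = ((lagged(X, lags) - mu) / sd) @ P
    return np.sum(scores ** 2 / var, axis=1)
```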

15.
In the semiconductor manufacturing industry, production resembles an automated assembly line in which many similar products with slightly different specifications are manufactured step-by-step, with each step being a complicated physiochemical batch process performed by a number of tools. This constitutes a high-mix production system for which effective run-to-run control (RtR) and fault detection control (FDC) can be carried out only if the states of different tools and different products can be estimated. However, since in each production run, a specific product is performed on a specific tool, absolute individual states of products and tools are not observable. In this work, a novel state estimation method based on analysis of variance (ANOVA) is developed to estimate the relative states of each product and tool to the grand average performance of this station in the fab. The method is formulated in the form of a recursive state estimation using the Kalman filter. The advantages of this method are demonstrated using simulations to show that the correct relative states can be estimated in production scenarios such as tool-shift, tool-drift, product ramp-up, tool/product-offline and preventive maintenance (PM). Furthermore, application of this state estimation method in RtR control scheme shows that substantial improvements in process capabilities can be gained, especially for products with small lot counts. The proposed algorithm is also evaluated by an industrial application.
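The ANOVA-style decomposition behind such an estimator can be sketched as follows: each run's measured deviation from the grand average is modeled as the sum of a tool offset and a product offset, tracked recursively by a Kalman filter. The dimensions, noise covariances, and data are assumptions for illustration, not the paper's formulation.

```python
# Sketch: Kalman tracking of relative tool/product offsets from run data.
# State vector, noise levels, and the run data are illustrative assumptions.
import numpy as np

def kalman_step(x, P, z, H, Q, R):
    """One predict/update cycle for a random-walk state model."""
    P = P + Q                                 # predict (offsets drift slowly)
    S = H @ P @ H.T + R                       # innovation covariance (1x1)
    K = P @ H.T / S                           # Kalman gain for scalar z
    x = x + K.ravel() * (z - H @ x)
    P = (np.eye(len(x)) - np.outer(K, H)) @ P
    return x, P

# States: offsets of [tool0, tool1, product0, product1] from the grand mean.
x = np.zeros(4)
P = np.eye(4)
Q, R = 1e-4 * np.eye(4), np.array([[0.05]])
runs = [(0, 0, 0.31), (1, 1, -0.22), (0, 1, 0.05)]  # (tool, product, deviation)
for tool, product, z in runs:
    H = np.zeros((1, 4))
    H[0, tool] = H[0, 2 + product] = 1.0      # run observes tool + product sum
    x, P = kalman_step(x, P, z, H, Q, R)
print("estimated offsets:", np.round(x, 3))
```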

16.
An important aspect of developing models relating the number and type of faults in a software system to a set of structural measurements is defining what constitutes a fault. By definition, a fault is a structural imperfection in a software system that may lead to the system's eventually failing. A measurable and precise definition of what faults are makes it possible to accurately identify and count them, which in turn allows the formulation of models relating fault counts and types to other measurable attributes of a software system. Unfortunately, the most widely used definitions are not measurable: there is no guarantee that two different individuals looking at the same set of failure reports and the same set of fault definitions will count the same number of underlying faults. The incomplete and ambiguous nature of current fault definitions adds a noise component to the inputs used in modeling fault content. If this noise component is sufficiently large, any attempt to develop a fault model will produce invalid results. In this paper, we base our recognition and enumeration of software faults on the grammar of the language of the software system. By tokenizing the differences between a version of the system exhibiting a particular failure behavior and the version in which changes were made to eliminate that behavior, we are able to unambiguously count the number of faults associated with that failure. With modern configuration management tools, the identification and counting of software faults can be automated.
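A toy sketch of the tokenize-the-diff counting rule, using Python's own tokenizer and difflib as stand-ins for a grammar-driven tokenizer (an assumption; the paper works from the subject language's grammar, not Python's).

```python
# Sketch: count faults as contiguous changed token runs between the failing
# version and the fixed version. Python's tokenizer and difflib are stand-ins
# for the grammar-based tokenization described in the paper.
import difflib, io, tokenize

def tokens(src):
    return [t.string for t in tokenize.generate_tokens(io.StringIO(src).readline)
            if t.string.strip()]

def count_faults(failing_src, fixed_src):
    """Each contiguous replaced/inserted/deleted token span counts as one fault."""
    sm = difflib.SequenceMatcher(a=tokens(failing_src), b=tokens(fixed_src))
    return sum(1 for op, *_ in sm.get_opcodes() if op != "equal")

before = "def area(r):\n    return 3.14 * r\n"        # missing a factor of r
after  = "def area(r):\n    return 3.14 * r * r\n"
print(count_faults(before, after))   # 1 changed token span -> 1 fault
```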

17.
丁忠校 《微计算机信息》2007,23(21):288-289,264
Given that few testing tools currently exist for assembly language, this paper determines the overall architecture of a static analysis tool for assembly language based on the syntactic characteristics of a particular assembly language, and analyzes in detail the algorithms of the key functional modules implemented in the tool, completing the design and development of the static analysis tool. The key techniques studied and the resulting testing tool are reasonably general and can support static testing of software written in different assembly languages.
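As an example of one simple static check such a tool might implement, the sketch below flags jumps to undefined labels; the instruction syntax is an assumed, simplified one rather than any particular assembler's grammar.

```python
# Sketch of a single static check for assembly code: report jumps whose target
# label is never defined. The syntax handled here is a simplified assumption.
import re

def undefined_jump_targets(asm_lines):
    defined, used = set(), {}
    for no, line in enumerate(asm_lines, 1):
        line = line.split(";")[0].strip()            # drop comments
        if m := re.match(r"(\w+):", line):
            defined.add(m.group(1))
        elif m := re.match(r"j\w*\s+(\w+)", line):   # jmp/je/jne ... label
            used.setdefault(m.group(1), no)
    return {lbl: no for lbl, no in used.items() if lbl not in defined}

code = ["start:", "  mov ax, 1", "  jmp done   ; done is never defined"]
print(undefined_jump_targets(code))   # {'done': 3}
```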

18.
Software fault detection based on static program analysis and fault trees
To improve software safety and reliability, this paper explores a method that combines fault localization with cause analysis in software fault detection: static analysis of the program is used to locate faults, and a fault tree is then used to determine their causes. Illegal computation is a common type of software fault that can easily crash a system; the paper uses an illegal-computation fault as an example to illustrate the analysis process. Experiments show that the method can effectively locate faults and analyze their causes.
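A toy sketch of combining the two steps for an illegal-computation fault: a crude static pass locates a potential division by zero, and a small fault tree enumerates candidate causes. Both the pass and the tree contents are illustrative assumptions, not the paper's method.

```python
# Sketch: locate a potential illegal computation (division by zero) statically,
# then consult a small fault tree for candidate causes. Both the static pass
# and the fault-tree contents are illustrative assumptions.
import ast

def find_divisions(src):
    """Locate divisions whose denominator is a plain variable (may be zero)."""
    sites = []
    for node in ast.walk(ast.parse(src)):
        if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Div) \
                and isinstance(node.right, ast.Name):
            sites.append((node.lineno, node.right.id))
    return sites

# Fault tree for "division by zero at runtime": OR gate over basic causes.
FAULT_TREE = ("div_by_zero", "OR", [
    "denominator never initialized",
    "denominator from unvalidated external input",
    "guard condition tests the wrong variable",
])

src = "def mean(total, n):\n    return total / n\n"
for line, var in find_divisions(src):
    top, gate, causes = FAULT_TREE
    print(f"line {line}: '{var}' may be zero -> {top} ({gate} of {causes})")
```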

19.
Improving project management, product development and engineering processes is crucial for many companies to survive in a fast-changing environment. However, these activities are rarely well integrated, owing to the diversity of stakeholders, each with their own knowledge about projects, products and processes. This case study shows how Alcatel-Lucent over time achieved effective interaction of engineering processes, tools and people on the basis of knowledge-centric product life-cycle management (PLM). Starting from identifying project, product and process knowledge, we show how they can be effectively integrated for the best possible usage across the enterprise. The case study provides insight into how best to embark on PLM and how to effectively integrate product development with supporting tools. It describes how the concepts can be transferred to software engineering teams and IT departments in other companies. Concrete results from several product lines, such as efficiency improvements and better global development, underline the business value.

20.
Knowledge-based manufacturability assessment: an object-oriented approach
To help achieve integrated product and process development, there is a need for tools that can assist designers in creating manufacturable parts with fewer design iterations and tryouts. This paper presents a systematic approach to developing automated manufacturability assessment tools by identifying the functional and informational requirements and proposing an assessment model. The work presented in this paper includes: (1) identification of the characteristics and tasks of design for the die-casting process; (2) determination of functional and informational requirements for automatic manufacturability assessment; (3) formalization and modularization of assessment knowledge; and (4) modeling of product definition data to support the assessment. Object-oriented techniques are employed to model the assessment knowledge and to manage the complicated and diverse types of product definition data by taking advantage of data abstraction, modularity, inherent concurrence, and the concepts of encapsulation and extendibility.
