Similar Documents
20 similar documents found (search time: 62 ms)
1.
Many studies have investigated the relationships between object-oriented (OO) metrics and change-proneness and conclude that OO metrics are able to predict the extent of change of a class across the versions of a system. However, there is a need to re-examine this subject for two reasons. First, most studies analyze only a small number of OO metrics and, therefore, it is not clear whether this conclusion is applicable to most, if not all, OO metrics. Second, most studies use only relatively few systems to investigate the relationships between OO metrics and change-proneness and, therefore, it is not clear whether this conclusion can be generalized to other systems. In this paper, based on 102 Java systems, we employ statistical meta-analysis techniques to investigate the ability of 62 OO metrics to predict change-proneness. In our context, a class that is changed in the next version of a system is called change-prone, and not change-prone otherwise. The investigated OO metrics cover four metric dimensions: 7 size metrics, 18 cohesion metrics, 20 coupling metrics, and 17 inheritance metrics. We use AUC (the area under the receiver operating characteristic, ROC, curve) to evaluate the predictive effectiveness of OO metrics. For each OO metric, we first compute AUCs and the corresponding variances for individual systems. Then, we employ a random-effects model to compute the average AUC over all systems. Finally, we perform a sensitivity analysis to investigate whether the AUC result from the random-effects model is robust to the data selection bias in this study. Our results from random-effects models reveal that: (1) size metrics exhibit moderate or almost moderate ability in discriminating between change-prone and not change-prone classes; (2) coupling and cohesion metrics generally have a lower predictive ability than size metrics; and (3) inheritance metrics have a poor ability to discriminate between change-prone and not change-prone classes. Our results from sensitivity analyses show that the conclusions reached are not substantially influenced by the data selection bias.
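
A minimal sketch of the two quantitative steps described above, assuming per-system arrays of metric values and binary change-prone labels: a per-system AUC with a variance estimate (here the Hanley-McNeil approximation, which is my own choice, not necessarily the authors') and a DerSimonian-Laird random-effects pooling of the per-system AUCs.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_and_variance(metric_values, labels):
    """AUC of one metric on one system, with the Hanley-McNeil variance."""
    labels = np.asarray(labels)
    auc = roc_auc_score(labels, metric_values)
    n1 = int(np.sum(labels == 1))   # change-prone classes
    n0 = int(np.sum(labels == 0))   # not change-prone classes
    q1 = auc / (2 - auc)
    q2 = 2 * auc ** 2 / (1 + auc)
    var = (auc * (1 - auc) + (n1 - 1) * (q1 - auc ** 2)
           + (n0 - 1) * (q2 - auc ** 2)) / (n1 * n0)
    return auc, var

def random_effects_average(aucs, variances):
    """DerSimonian-Laird random-effects pooling of per-system AUCs."""
    aucs, variances = np.asarray(aucs), np.asarray(variances)
    w = 1.0 / variances                                  # fixed-effect weights
    q = np.sum(w * (aucs - np.average(aucs, weights=w)) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(aucs) - 1)) / c)           # between-system variance
    w_star = 1.0 / (variances + tau2)                    # random-effects weights
    pooled = np.sum(w_star * aucs) / np.sum(w_star)
    return pooled, np.sqrt(1.0 / np.sum(w_star))         # average AUC and its SE
```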

2.
Much effort has been devoted to the development and empirical validation of object-oriented metrics. The empirical validations performed thus far would suggest that a core set of validated metrics is close to being identified. However, none of these studies allow for the potentially confounding effect of class size. We demonstrate a strong size confounding effect and question the results of previous object-oriented metrics validation studies. We first investigated whether there is a confounding effect of class size in validation studies of object-oriented metrics and show that, based on previous work, there is reason to believe that such an effect exists. We then describe a detailed empirical methodology for identifying those effects. Finally, we perform a study on a large C++ telecommunications framework to examine whether size is really a confounder. This study considered the Chidamber and Kemerer metrics and a subset of the Lorenz and Kidd metrics. The dependent variable was the incidence of a fault attributable to a field failure (fault-proneness of a class). Our findings indicate that, before controlling for size, the results are very similar to previous studies. The metrics that are expected to be validated are indeed associated with fault-proneness.
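
The abstract does not spell out the statistical machinery, but a common way to test for the confounding effect it describes is to compare a metric's logistic-regression coefficient with and without class size in the model; a large shift suggests confounding. A sketch under that assumption; the column names (SLOC, faulty) are placeholders, not the study's.

```python
import statsmodels.api as sm

def confounding_check(df, metric_col, size_col="SLOC", fault_col="faulty"):
    """Compare a metric's logistic-regression coefficient before and after
    entering class size into the model; a large shift suggests confounding."""
    y = df[fault_col]
    alone = sm.Logit(y, sm.add_constant(df[[metric_col]])).fit(disp=0)
    adjusted = sm.Logit(y, sm.add_constant(df[[metric_col, size_col]])).fit(disp=0)
    b_alone = alone.params[metric_col]
    b_adjusted = adjusted.params[metric_col]
    pct_change = 100.0 * (b_alone - b_adjusted) / b_alone
    return b_alone, b_adjusted, pct_change
```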

3.
Software change-proneness prediction characterizes and predicts change-proneness from the internal characteristics of software, i.e., software metric values. It is one of the active research directions in software engineering and plays a very important role in improving software quality and controlling software cost. Although software change-proneness prediction has produced a series of results in academia, it has not yet been successfully applied in industry. From the perspectives of simple correlation analysis, partial correlation analysis, and association rule mining, this paper distinguishes genuine from spurious correlations between object-oriented metrics and software change-proneness, and confirms that, in the change-proneness context, class size has a potential influence on object-oriented metrics.
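
One of the analyses mentioned above, the partial correlation between a metric and change-proneness controlling for class size, can be computed from regression residuals. A small sketch; the inputs are assumed to be per-class arrays of a metric, a change count (or indicator), and class size.

```python
import numpy as np
from scipy import stats

def partial_correlation(metric, changes, size):
    """Correlation between a metric and change-proneness after partialling
    out class size, computed from the residuals of two linear regressions."""
    metric, changes, size = (np.asarray(v, dtype=float) for v in (metric, changes, size))
    res_metric = metric - np.poly1d(np.polyfit(size, metric, 1))(size)
    res_changes = changes - np.poly1d(np.polyfit(size, changes, 1))(size)
    return stats.pearsonr(res_metric, res_changes)  # (r, p-value)
```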

4.
Empirical validation of software metrics suites to predict fault-proneness in object-oriented (OO) components is essential to ensure their practical use in industrial settings. In this paper, we empirically validate three OO metrics suites for their ability to predict software quality in terms of fault-proneness: the Chidamber and Kemerer (CK) metrics, Abreu's Metrics for Object-Oriented Design (MOOD), and Bansiya and Davis' Quality Metrics for Object-Oriented Design (QMOOD). Some CK class metrics have previously been shown to be good predictors of initial OO software quality. However, the other two suites have not been heavily validated except by their original proposers. Here, we explore the ability of these three metrics suites to predict fault-prone classes using defect data for six versions of Rhino, an open-source implementation of JavaScript written in Java. We conclude that the CK and QMOOD suites contain similar components and produce statistical models that are effective in detecting error-prone classes. We also conclude that the class components in the MOOD metrics suite are not good class fault-proneness predictors. Analyzing multivariate binary logistic regression models across six Rhino versions indicates that these models may be useful in assessing quality in OO classes produced using modern highly iterative or agile software development processes.

5.
It has been proposed by El Emam et al. (ibid. vol.27 (7), 2001) that size should be taken into account as a confounding variable when validating object-oriented metrics. We take issue with this perspective since the ability to measure size does not temporally precede the ability to measure many of the object-oriented metrics that have been proposed. Hence, the condition that a confounding variable must occur causally prior to another explanatory variable is not met. In addition, when specifying multivariate models of defects that incorporate object-oriented metrics, entering size as an explanatory variable may result in misspecified models that lack internal consistency. Examples are given where this misspecification occurs.

6.
Many studies use logistic regression models to investigate the ability of complexity metrics to predict fault-prone classes. However, it is not uncommon to see the inappropriate use of performance indicators such as the odds ratio in previous studies. In particular, a recent study by Olague et al. uses the odds ratio associated with a one-unit increase in a metric to compare the relative magnitude of the associations between individual metrics and fault-proneness. In addition, the percentages of concordant, discordant, and tied pairs are used to evaluate the predictive effectiveness of a univariate logistic regression model. Their results suggest that lesser known complexity metrics such as standard deviation method complexity (SDMC) and average method complexity (AMC) are better predictors than the two commonly used metrics: lines of code (LOC) and weighted method McCabe complexity (WMC). In this paper, however, we show that (1) the odds ratio associated with a one-standard-deviation increase, rather than a one-unit increase, in a metric should be used to compare the relative magnitudes of the effects of individual metrics on fault-proneness; otherwise, misleading results may be obtained; and (2) the supposed connection between the percentages of concordant, discordant, and tied pairs and the predictive effectiveness of a univariate logistic regression model is false, as these percentages do not in fact depend on the model. Furthermore, we use the data collected from three versions of Eclipse to re-examine the ability of complexity metrics to predict fault-proneness. Our experimental results reveal that: (1) many metrics exhibit moderate or almost moderate ability in discriminating between fault-prone and not fault-prone classes; (2) LOC and WMC are indeed better fault-proneness predictors than SDMC and AMC; and (3) other complexity metrics provide limited explanatory power in addition to LOC.
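
The first point can be made concrete: for a univariate logistic regression with coefficient beta, the per-unit odds ratio is exp(beta), while the quantity that is comparable across metrics is the odds ratio for a one-standard-deviation increase, exp(beta * sigma). A sketch of both, assuming a pandas DataFrame with a hypothetical binary faulty column.

```python
import numpy as np
import statsmodels.api as sm

def odds_ratios(df, metric_col, fault_col="faulty"):
    """Per-unit and per-standard-deviation odds ratios from a univariate
    logistic regression; only the latter is comparable across metrics."""
    x = sm.add_constant(df[[metric_col]])
    beta = sm.Logit(df[fault_col], x).fit(disp=0).params[metric_col]
    sigma = df[metric_col].std()
    return float(np.exp(beta)), float(np.exp(beta * sigma))
```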

7.
Network measures are useful for predicting fault-prone modules. However, existing work has not distinguished faults according to their severity. In practice, high severity faults cause serious problems and require further attention. In this study, we explored the utility of network measures in high severity fault-proneness prediction. We constructed software source code networks for four open-source projects by extracting the dependencies between modules. We then used univariate logistic regression to investigate the associations between each network measure and fault-proneness at a high severity level. We built multivariate prediction models to examine their explanatory ability for fault-proneness, and evaluated their predictive effectiveness compared to code metrics under forward-release and cross-project predictions. The results revealed the following: (1) most network measures are significantly related to high severity fault-proneness; (2) network measures generally have explanatory abilities and predictive powers comparable to those of code metrics; and (3) network measures are very unstable for cross-project predictions. These results indicate that network measures are of practical value in high severity fault-proneness prediction.
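
The abstract does not enumerate the specific network measures, but the general recipe is to build a module dependency graph and compute graph-theoretic measures per module. A sketch of that idea with networkx; the chosen measures and the (caller, callee) edge-list input are assumptions, not the paper's exact set.

```python
import networkx as nx

def module_network_measures(dependencies):
    """Build a module dependency graph from (caller, callee) pairs and
    compute a few common network measures for each module."""
    g = nx.DiGraph(dependencies)
    betweenness = nx.betweenness_centrality(g)
    closeness = nx.closeness_centrality(g)
    pagerank = nx.pagerank(g)
    return {
        module: {
            "in_degree": g.in_degree(module),
            "out_degree": g.out_degree(module),
            "betweenness": betweenness[module],
            "closeness": closeness[module],
            "pagerank": pagerank[module],
        }
        for module in g.nodes
    }

# Example: module_network_measures([("A", "B"), ("B", "C"), ("A", "C")])
```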

8.
Context: In a large object-oriented software system, packages play the role of modules which group related classes together to provide well-identified services to the rest of the system. In this context, it is widely believed that modularization has a large influence on the quality of packages. Recently, Sarkar, Kak, and Rama proposed a set of new metrics to characterize the modularization quality of packages from important perspectives such as inter-module call traffic, state access violations, fragile base-class design, programming to interface, and plugin pollution. These package-modularization metrics are quite different from traditional package-level metrics, which measure software quality mainly from the size, extensibility, responsibility, independence, abstractness, and instability perspectives. As such, it is expected that these package-modularization metrics should be useful predictors of fault-proneness. However, little is currently known about their actual usefulness for fault-proneness prediction, especially compared with traditional package-level metrics. Objective: In this paper, we examine the role of these new package-modularization metrics in determining software fault-proneness in object-oriented systems. Method: We first use principal component analysis to analyze whether these new package-modularization metrics capture additional information compared with traditional package-level metrics. Second, we employ univariate prediction models to investigate how these new package-modularization metrics are related to fault-proneness. Finally, we build multivariate prediction models to examine the ability of these new package-modularization metrics to predict fault-prone packages. Results: Our results, based on six open-source object-oriented software systems, show that: (1) these new package-modularization metrics provide new and complementary views of software complexity compared with traditional package-level metrics; (2) most of these new package-modularization metrics have a significant association with fault-proneness in the expected direction; and (3) these new package-modularization metrics can substantially improve the effectiveness of fault-proneness prediction when used together with traditional package-level metrics. Conclusions: The package-modularization metrics proposed by Sarkar, Kak, and Rama are useful for practitioners to develop quality software systems.
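
A sketch of the first step, principal component analysis over the combined metric set to see whether the new package-modularization metrics load on components of their own; the metric column names and the 90% variance cut-off are illustrative choices, not the paper's.

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def component_loadings(df, metric_cols, variance_to_keep=0.90):
    """PCA over the standardized metrics; the loadings show whether the new
    package-modularization metrics define components of their own."""
    x = StandardScaler().fit_transform(df[metric_cols])
    pca = PCA(n_components=variance_to_keep).fit(x)   # keep 90% of the variance
    loadings = pd.DataFrame(
        pca.components_.T,
        index=metric_cols,
        columns=[f"PC{i + 1}" for i in range(pca.n_components_)],
    )
    return loadings, pca.explained_variance_ratio_
```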

9.
The Object-Oriented (OO) paradigm has become increasingly popular in recent years. Researchers agree that, although maintenance may turn out to be easier for OO systems, it is unlikely that the maintenance burden will completely disappear. One approach to controlling software maintenance costs is the utilization of software metrics during the development phase, to help identify potential problem areas. Many new metrics have been proposed for OO systems, but only a few of them have been validated. The purpose of this research is to empirically explore the validation of three existing OO design complexity metrics and, specifically, to assess their ability to predict maintenance time. This research reports the results of validating three metrics, Interaction Level (IL), Interface Size (IS), and Operation Argument Complexity (OAC). A controlled experiment was conducted to investigate the effect of design complexity (as measured by the above metrics) on maintenance time. Each of the three metrics by itself was found to be useful in the experiment in predicting maintenance performance.

10.
Several studies have explored the relationship between object-oriented software metrics and the change-proneness of classes. This knowledge can be used to support decision-making among design alternatives or to assess software quality attributes such as maintainability. Despite the increasing use of complex inheritance relationships and polymorphism in object-oriented software, there has been less emphasis on developing metrics that capture dynamic behavior. Considering dynamic behavior metrics in conjunction with existing metrics may go a long way toward obtaining more accurate predictions of change-proneness. To address this need, we provide a behavioral dependency measure using structural and behavioral information taken from UML 2.0 design models. Model-based change-proneness prediction helps to build high-quality software by exploiting design models from the earlier phases of the software development process. The behavioral dependency measure has been evaluated on a multi-version, medium-sized open-source project called JFlex. The results obtained show that the proposed measure is a useful indicator and can complement existing object-oriented metrics to improve the accuracy of change-proneness prediction when the system contains a high degree of inheritance relationships and polymorphism.

11.
In the last decade, empirical studies on object-oriented design metrics have shown some of them to be useful for predicting the fault-proneness of classes in object-oriented software systems. This research did not, however, distinguish among faults according to the severity of impact. It would be valuable to know how object-oriented design metrics and class fault-proneness are related when fault severity is taken into account. In this paper, we use logistic regression and machine learning methods to empirically investigate the usefulness of object-oriented design metrics, specifically, a subset of the Chidamber and Kemerer suite, in predicting fault-proneness when taking fault severity into account. Our results, based on a public domain NASA data set, indicate that 1) most of these design metrics are statistically related to fault-proneness of classes across fault severity, and 2) the prediction capabilities of the investigated metrics greatly depend on the severity of faults. More specifically, these design metrics are able to predict low severity faults in fault-prone classes better than high severity faults in fault-prone classes.

12.
The identification of a module's fault-proneness is very important for minimizing cost and improving the effectiveness of the software development process. How to obtain the correlation between software metrics and module fault-proneness has been the focus of much research. This paper presents the application of a hybrid artificial neural network (ANN) and Quantum Particle Swarm Optimization (QPSO) approach to software fault-proneness prediction. The ANN is used for classifying software modules into fault-prone and non-fault-prone categories, and QPSO is applied for reducing dimensionality. The experimental results show that the proposed prediction approach can establish the correlation between software metrics and modules' fault-proneness, and is very simple because its implementation requires neither extra cost nor expert knowledge. The proposed approach can point software developers to potentially fault-prone modules, so developers only need to focus on those modules, which may minimize the effort and cost of software maintenance.

13.
Empirical validation of code metrics has a long history of success. Many metrics have been shown to be good predictors of external features, such as correlation to bugs. Our study provides an alternative explanation for such validation, attributing it to the confounding effect of size. In contradiction to received wisdom, we argue that the validity of a metric can be explained by its correlation to the size of the code artifact. In fact, this work came about in view of our failure in the quest to find a metric that is both valid and free of this confounding effect. Our main discovery is that, with the appropriate (non-parametric) transformations, the validity of a metric can be accurately (with R-squared values at times as high as 0.97) predicted from its correlation with size. The reported results are with respect to a suite of 26 metrics that includes the famous Chidamber and Kemerer metrics. Concretely, it is shown that the more a metric is correlated with size, the more able it is to predict external feature values, and vice versa. We consider two methods for controlling for size by linear transformations. As it turns out, controlling metrics for size tends to eliminate their predictive capabilities. We also show that the famous Chidamber and Kemerer metrics are no better than the other metrics in our suite. Overall, our results suggest that code size is the only “unique” valid metric.
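
The exact linear transformations are not given in the abstract; one plausible reading of "controlling for size" is to regress each metric on size and keep the residual. A sketch under that assumption, together with the paper's central comparison of a metric's correlation with bugs versus its correlation with size.

```python
import numpy as np

def control_for_size(metric, size):
    """One plausible linear way to control a metric for size: regress the
    metric on size and keep only the residual."""
    metric, size = np.asarray(metric, float), np.asarray(size, float)
    slope, intercept = np.polyfit(size, metric, 1)
    return metric - (slope * size + intercept)

def validity_and_size_correlation(metric, size, bugs):
    """The paper's central comparison in miniature: a metric's correlation
    with bugs (its 'validity') versus its correlation with size."""
    metric, size, bugs = (np.asarray(v, float) for v in (metric, size, bugs))
    return np.corrcoef(metric, bugs)[0, 1], np.corrcoef(metric, size)[0, 1]
```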

14.
Software component size estimation is an important task in software project management. For a component-based approach, two steps may be used to estimate the overall size of object-oriented (OO) software: a designer uses metrics to predict the size of the software components and then utilizes those sizes to estimate the overall project size. Using the OO software metrics literature, we identified factors that may affect the size of an OO software component. Using real-life data from 152 software components, we then determined the effect of the identified factors on the prediction of OO software component size. The results indicated that certain factors and the type of OO software component play a significant role in the estimate. It is shown how a regression tree data mining approach can be used to learn decision rules to guide future estimates.
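
A minimal sketch of the regression-tree step: fit a small tree that predicts component size from the identified factors and read the decision rules off it. The function name, tree depth, and use of scikit-learn are assumptions for illustration.

```python
from sklearn.tree import DecisionTreeRegressor, export_text

def learn_size_rules(factors, component_sizes, factor_names, max_depth=3):
    """Fit a small regression tree that predicts component size from the
    identified factors, and print its decision rules for future estimates."""
    tree = DecisionTreeRegressor(max_depth=max_depth, random_state=0)
    tree.fit(factors, component_sizes)
    print(export_text(tree, feature_names=factor_names))
    return tree
```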

16.
Applying machine learning to software fault-proneness prediction
The importance of software testing to quality assurance cannot be overemphasized. The estimation of a module’s fault-proneness is important for minimizing cost and improving the effectiveness of the software testing process. Unfortunately, no general technique for estimating software fault-proneness is available. The observed correlation between some software metrics and fault-proneness has resulted in a variety of predictive models based on multiple metrics. Much work has concentrated on how to select the software metrics that are most likely to indicate fault-proneness. In this paper, we propose the use of machine learning for this purpose. Specifically, given historical data on software metric values and number of reported errors, an Artificial Neural Network (ANN) is trained. Then, in order to determine the importance of each software metric in predicting fault-proneness, a sensitivity analysis is performed on the trained ANN. The software metrics that are deemed to be the most critical are then used as the basis of an ANN-based predictive model of a continuous measure of fault-proneness. We also view fault-proneness prediction as a binary classification task (i.e., a module can either contain errors or be error-free) and use Support Vector Machines (SVM) as a state-of-the-art classification method. We perform a comparative experimental study of the effectiveness of ANNs and SVMs on a data set obtained from NASA’s Metrics Data Program data repository.
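
A compressed sketch of the two ingredients described above: a small neural network whose inputs are perturbed one metric at a time to rank metric importance, and an SVM for the binary fault-prone/error-free classification. The perturbation-based sensitivity analysis shown here is one common formulation and may differ from the paper's exact procedure; the network size and kernel are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def metric_sensitivities(x, error_counts):
    """Train an ANN on metric values, then measure how much its prediction
    shifts when each metric is perturbed by one standard deviation."""
    x = np.asarray(x, dtype=float)
    scaler = StandardScaler().fit(x)
    ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    ann.fit(scaler.transform(x), error_counts)
    baseline = ann.predict(scaler.transform(x))
    sensitivities = []
    for j in range(x.shape[1]):
        perturbed = x.copy()
        perturbed[:, j] += x[:, j].std()
        shifted = ann.predict(scaler.transform(perturbed))
        sensitivities.append(np.mean(np.abs(shifted - baseline)))
    return np.array(sensitivities)   # larger value = more influential metric

def fault_prone_classifier(x, is_faulty):
    """SVM for the binary formulation: a module either contains errors or not."""
    return make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(x, is_faulty)
```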

17.
Object-oriented metrics aim to exhibit the quality of source code and give quantitative insight into it. Each metric assesses the code from a different aspect. There is a relationship between the quality level and the risk level of source code. The objective of this paper is to empirically examine whether or not there are effective threshold values for source code metrics, with the aim of deriving generalized thresholds that can be used in different software systems. The relationship between metric thresholds and fault-proneness was investigated empirically in this study by using ten open-source software systems. Three types of fault-proneness were defined for the software modules: non-fault-prone, more-than-one-fault-prone, and more-than-three-fault-prone. Two independent case studies were carried out to derive two different threshold values. A single set was created by merging the ten datasets and was used as training data by the model. The learner model was created using logistic regression and the Bender method. Results revealed that some metrics have threshold effects. Seven metrics gave satisfactory results in the first case study; in the second case study, eleven metrics gave satisfactory results. This study makes contributions primarily for use by software developers and testers. Software developers can see which classes or modules require revising; this, consequently, contributes to an increase in quality for these modules and a decrease in their risk level. Testers can identify modules that need more testing effort and can prioritize modules according to their risk levels.
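
The Bender method referred to above derives a threshold (the VARL, value of an acceptable risk level) from a univariate logistic regression: the metric value at which the fitted model predicts a chosen base fault probability p0. A sketch; the choice of p0 and the use of statsmodels are assumptions.

```python
import numpy as np
import statsmodels.api as sm

def bender_threshold(metric_values, faulty, p0=0.05):
    """VARL-style threshold: the metric value at which a univariate logistic
    model predicts the chosen base fault probability p0."""
    x = sm.add_constant(np.asarray(metric_values, dtype=float))
    alpha, beta = sm.Logit(np.asarray(faulty), x).fit(disp=0).params
    return (np.log(p0 / (1 - p0)) - alpha) / beta

# Example: threshold = bender_threshold(wmc_values, faulty_flags, p0=0.05)
```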

18.
Identifying change-prone sections of code can help managers plan and allocate maintenance effort. Design patterns have been used to study change-proneness and are widely believed to support certain kinds of changes while inhibiting others. Recently, several studies have analyzed recorded changes to classes playing design pattern roles and found that the pattern “folklore” offers a reasonable explanation for the reality: certain pattern roles do seem to be less change-prone than others. We push this analysis on two fronts: first, we deploy W. Pree’s metapatterns, which group patterns purely by structure (rather than intent), and argue that metapatterns are a simpler model for explaining recent findings by Di Penta et al. (2008). Second, we study the effect of the size of the classes playing the design pattern and metapattern roles. We find that size explains more of the variance in change-proneness than either design pattern or metapattern roles. We also find that both design pattern and metapattern roles are strong determinants of size. We conclude, therefore, that size appears to be a stronger determinant of change-proneness than either design pattern or metapattern roles, and that observed differences in change-proneness between roles might be due to differences in the sizes of the classes playing those roles. The size of a class can be found much more quickly, easily, and accurately than its pattern roles. Thus, while identifying design pattern roles may be important for other reasons, as far as identifying change-prone classes is concerned, sheer size might be a better indicator.

19.
To produce high quality object-oriented (OO) applications, a strong emphasis on design aspects, especially during the early phases of software development, is necessary. Design metrics play an important role in helping developers understand design aspects of software and, hence, improve software quality and developer productivity. In this paper, we provide empirical evidence supporting the role of OO design complexity metrics, specifically a subset of the Chidamber and Kemerer (1991, 1994) suite (CK metrics), in determining software defects. Our results, based on industry data from software developed in two popular programming languages used in OO development, indicate that, even after controlling for the size of the software, these metrics are significantly associated with defects. In addition, we find that the effects of these metrics on defects vary across the samples from the two programming languages, C++ and Java. We believe that these results have significant implications for designing high-quality software products using the OO approach.

20.
In the literature, the fault-proneness of classes or methods has been used to devise strategies for reducing testing costs and effort. In general, fault-proneness is predicted from a set of design metrics and, most recently, by using Machine Learning (ML) techniques. However, some ML techniques cannot deal with unbalanced data, a characteristic very common in fault datasets, and the results they produce are not easily interpreted by most programmers and testers. Considering these facts, this paper introduces a novel fault-prediction approach based on Multiobjective Particle Swarm Optimization (MOPSO). Exploring Pareto dominance concepts, the approach generates a model composed of rules with specific properties. These rules can be used as an unordered classifier and, because of this, they are more intuitive and comprehensible. Two experiments were carried out, considering, respectively, the fault-proneness of classes and of methods. The results show interesting relationships between the studied metrics and fault prediction. In addition, the performance of the introduced MOPSO approach is compared with other ML algorithms using several measures, including the area under the ROC curve, which is a relevant criterion for dealing with unbalanced data.
