Similar Documents (20 results)
1.
It can be argued that the quality of software management affects the degree of success or failure of a software development program. We have developed a metric for measuring the quality of software management along four dimensions: requirements management, estimation/planning management, people management, and risk management. The quality management metric (QMM) for a software development program manager is a composite score obtained from a questionnaire administered to both the program manager and a sample of his or her peers. The QMM is intended both to characterize the quality of software management and to serve as a template for improving software management performance. We administered the questionnaire to measure the performance of managers responsible for large software development programs within the US Department of Defense (DOD). As an informal verification and validation of the metric, we compared QMM scores with an overall program-success score for each program and found a positive correlation.
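The abstract does not publish the QMM formula, so the following is only a hypothetical sketch: it assumes each dimension is rated on a fixed scale, averaged across the manager's self-assessment and the peer assessments, with the four dimensions weighted equally.

```python
# Hypothetical sketch of a composite management-quality score; the real QMM
# weighting and rating scale are not given in the abstract.
from statistics import mean

DIMENSIONS = ["requirements", "estimation_planning", "people", "risk"]

def qmm_score(self_ratings, peer_ratings):
    """Average each dimension over self- and peer assessments,
    then average the four dimension scores (equal weights assumed)."""
    dim_scores = []
    for d in DIMENSIONS:
        ratings = [self_ratings[d]] + [p[d] for p in peer_ratings]
        dim_scores.append(mean(ratings))
    return mean(dim_scores)

# Example: ratings on a 1-5 scale from the manager and two peers.
manager = {"requirements": 4, "estimation_planning": 3, "people": 5, "risk": 3}
peers = [
    {"requirements": 3, "estimation_planning": 3, "people": 4, "risk": 2},
    {"requirements": 4, "estimation_planning": 2, "people": 4, "risk": 3},
]
print(f"QMM = {qmm_score(manager, peers):.2f}")
```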

2.
In spite of considerable prior research, no generic framework has emerged for structuring work on object-oriented (OO) metrics. We propose such a framework for OO product metrics. The framework captures the generic structure of the underlying metrics space, based on a mereological and set-theoretic perspective of the building blocks of OO systems and on relational measurement theory. We validate the framework by applying it to a repository of about 350 product metrics. The validation shows that the framework does indeed capture the underlying metrics space, and that it can be useful in identifying gaps and additional metrics that extend how the space is currently populated.

3.
Coupling represents the degree of interdependence between two software components. Understanding software dependency is directly related to improving software understandability, maintainability, and reusability. In this paper, we analyze the difference between component coupling and component dependency, and introduce a two-parameter component coupling metric and a three-parameter component dependency metric. An important parameter in both metrics is the coupling distance, which represents the relevance of two coupled components. The metrics apply to layered component-based software, can represent the dependencies induced by all types of software coupling, and can be used to determine the coupling and dependency of software components at all scales. We then apply them to the Apache HTTP Server, an open-source web server. The study shows that coupling distance is related to the number of modifications of a component, which is an important indicator of component fault rate, stability, and, subsequently, component complexity.
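The abstract does not reproduce the paper's exact parameterization, so the sketch below is only one plausible reading: it assumes the coupling distance of two components in a layered architecture is the number of layer boundaries between them, and that a coupling score discounts raw link counts by that distance.

```python
# Illustrative sketch only; the layer-based distance and the inverse-distance
# discount below are assumptions, not the paper's published definitions.

def coupling_distance(layer_a: int, layer_b: int) -> int:
    """Assumed distance: number of layer boundaries between two components."""
    return abs(layer_a - layer_b)

def component_coupling(n_links: int, distance: int) -> float:
    """Assumed two-parameter coupling: link count discounted by distance,
    so closely placed components weigh more than distant ones."""
    return n_links / (1 + distance)

# Example: 6 call links between a layer-1 and a layer-3 component.
d = coupling_distance(1, 3)
print(f"distance={d}, coupling={component_coupling(6, d):.2f}")
```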

Liguo Yu received his Ph.D. in Computer Science from Vanderbilt University. He is an assistant professor in the Department of Computer and Information Sciences at Indiana University South Bend. Before joining IUSB, he was a visiting assistant professor at Tennessee Technological University. His research concentrates on software coupling, software maintenance, software reuse, software testing, software management, and open-source software development.

Kai Chen received his Ph.D. from the Department of Electrical Engineering and Computer Science at Vanderbilt University. He works at Google. His current research interests include the development and maintenance of open-source software, embedded software design, component-based design, model-based design, formal methods, and model verification.

Srini Ramaswamy earned his Ph.D. in Computer Science in 1994 from the Center for Advanced Computer Studies (CACS) at the University of Southwestern Louisiana (now the University of Louisiana at Lafayette). His research interests are intelligent and flexible control systems, behavior modeling, analysis and simulation, and software stability and scalability. He is currently the chairperson of the Department of Computer Science at the University of Arkansas at Little Rock. Before joining UALR, he was the chairman of the Computer Science Department at Tennessee Tech University. He is a member of the Association for Computing Machinery, the Society for Computer Simulation International, and Computer Professionals for Social Responsibility, and a senior member of the IEEE.

4.
5.
6.
This paper presents an approach to software assessment using a new software tool that integrates most of the known static metrics. A six-step method shows how to use metrics to obtain a picture of the software project. The method is visual, and each step provides graphical representations of the data. Successive integration of data results in normality profiles. Examples illustrate each step. The method is adaptable to various environments and specific applications.

7.
This paper reviews the current state of industry practice regarding the introduction of software metrics. It discusses the benefits that organizations have derived from metrication, why organizations have sought to introduce metrics programmes, how they have gone about introducing those programmes, and the problems they have encountered during implementation. The review found that, on the whole, only the sanitized aspects of metrics experiences have actually been published. This seems especially true of practitioner resistance: very few organizations admit to having encountered resistance during the introduction of a metrics programme. The paper also includes the results of a pilot study, conducted by the first author, examining the attitudes that developers hold towards the introduction of software metrics. The key finding of this pilot study is that positive attitudes to metrics correlate highly with levels of education and with job satisfaction.

8.
Understanding the exposure risk of software vulnerabilities is an important part of the software ecosystem. Reliable software vulnerability metrics allow end-users to make informed decisions about the risk posed by the choice of one software package versus another. In this article, we develop and analyze two new security metrics: median active vulnerabilities (MAV) and vulnerability-free days (VFD). Both metrics take into account the rate of vulnerability discovery and the rate at which vendors produce corresponding patches. We show how the metrics are computed from publicly available data sets and then demonstrate their use in a case study covering various vendors and products. Finally, we discuss how various software stakeholders, and end-users in particular, can benefit from the metrics.
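Following the abstract's descriptions, a minimal sketch of the two metrics might look like the following. The paper's precise definitions (the observation window, the handling of never-patched vulnerabilities) may differ, so treat this as an assumed reading: an "active" vulnerability here is one that has been disclosed but not yet patched.

```python
# Sketch under assumed definitions: MAV = median daily count of disclosed-but-
# unpatched vulnerabilities; VFD = number of days in the window with none.
from datetime import date, timedelta
from statistics import median

def daily_active_counts(vulns, start: date, end: date):
    """vulns: list of (disclosed, patched) date pairs; patched may be None."""
    counts, day = [], start
    while day <= end:
        counts.append(sum(1 for disclosed, patched in vulns
                          if disclosed <= day and (patched is None or patched > day)))
        day += timedelta(days=1)
    return counts

def mav(vulns, start, end):
    return median(daily_active_counts(vulns, start, end))

def vfd(vulns, start, end):
    return sum(1 for c in daily_active_counts(vulns, start, end) if c == 0)

vulns = [(date(2024, 1, 5), date(2024, 1, 20)), (date(2024, 1, 25), None)]
window = (date(2024, 1, 1), date(2024, 1, 31))
print("MAV:", mav(vulns, *window), " VFD:", vfd(vulns, *window))
```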

9.
Packages are important high-level organizational units in large object-oriented systems. Package-level metrics characterize attributes of packages such as size, complexity, and coupling. Empirical evidence is needed to support collecting these metrics and using them as early indicators of important external software quality attributes. In this paper, three suites of package-level metrics (Martin, MOOD, and CK) are evaluated and compared empirically for predicting the number of pre-release faults and the number of post-release faults in packages. Eclipse, one of the largest open-source systems, is used as a case study. The results indicate that prediction models based on the Martin suite are more accurate than those based on the MOOD and CK suites across releases of Eclipse.
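The Martin suite mentioned here builds on Robert Martin's well-known package metrics; a short sketch of three of them follows. The formulas are standard, but the variable names are mine, and the abstract does not state which exact subset the study used.

```python
# Sketch of three classic Martin package metrics. Ca = afferent couplings
# (packages that depend on this one); Ce = efferent couplings (packages this
# one depends on).

def instability(ca: int, ce: int) -> float:
    """I = Ce / (Ca + Ce); 0 is maximally stable, 1 maximally unstable."""
    total = ca + ce
    return ce / total if total else 0.0

def abstractness(n_abstract: int, n_classes: int) -> float:
    """A = abstract classes (and interfaces) / total classes in the package."""
    return n_abstract / n_classes if n_classes else 0.0

def main_sequence_distance(a: float, i: float) -> float:
    """D = |A + I - 1|: normalized distance from the 'main sequence'."""
    return abs(a + i - 1)

i = instability(ca=2, ce=6)        # mostly depends on others -> unstable
a = abstractness(n_abstract=1, n_classes=10)
print(f"I={i:.2f}, A={a:.2f}, D={main_sequence_distance(a, i):.2f}")
```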

10.
Search landscape analysis has become a central tool for analysing how the performance of stochastic local search algorithms depends on structural aspects of the spaces being searched. Central to search landscape analysis is the notion of distance between candidate solutions. This distance depends on some underlying basic operator and is defined as the minimum number of operations that must be applied to one candidate solution to transform it into another. For operations on candidate solutions represented by permutations, almost all research on search landscape analysis applies surrogate distance measures, although in many cases efficient algorithms exist for computing the exact distances. This discrepancy is probably because those efficient algorithms are not widely known. In this article, we review algorithms for computing distances on permutations for the most widely applied operators and present simulation results that compare the exact distances to commonly used approximations.
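One such exact algorithm, for the swap (transposition) operator, illustrates the point: the Cayley distance between two permutations is n minus the number of cycles in their relative permutation, computable in O(n). The sketch below is a standard implementation of that fact, not code from the article.

```python
def cayley_distance(p, q):
    """Exact minimum number of transpositions that turn permutation p into q:
    n minus the number of cycles of the relative permutation."""
    n = len(p)
    pos_in_q = {value: index for index, value in enumerate(q)}
    rel = [pos_in_q[value] for value in p]   # relative permutation
    seen, cycles = [False] * n, 0
    for start in range(n):
        if not seen[start]:
            cycles += 1
            j = start
            while not seen[j]:               # walk one full cycle
                seen[j] = True
                j = rel[j]
    return n - cycles

# Two disjoint swaps apart -> distance 2.
assert cayley_distance([0, 1, 2, 3], [1, 0, 3, 2]) == 2
```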

11.
Object-oriented (OO) metrics are used mainly to predict software engineering activities/efforts such as maintenance effort, error proneness, and error rate. There have been discussions about the effectiveness of metrics in different contexts. In this paper, we present an empirical study of OO metrics in two iterative processes: the short-cycled agile process and the long-cycled framework evolution process. We find that OO metrics are effective in predicting design efforts and source lines of code added, changed, and deleted in the short-cycled agile process and ineffective in predicting the same aspects in the long-cycled framework process. This leads us to believe that OO metrics' predictive capability is limited to the design and implementation changes during the development iterations, not the long-term evolution of an established system in different releases.

12.
The goal of this paper is to investigate the relation between object-oriented design choices and defects in software systems, with a focus on a real-time telecommunication domain. The design choices are measured using the widely accepted metrics suite proposed by Chidamber and Kemerer for object-oriented languages [S.R. Chidamber, C.F. Kemerer, A metrics suite for object oriented design, IEEE Transactions on Software Engineering 20 (6) (1994) 476-493]. This paper reports the results of an extensive case study, which strongly reinforces earlier, mainly anecdotal, evidence that design aspects related to communication between classes can be used as indicators of the most defect-prone classes. Statistical models suited to non-normally distributed count data are used: Poisson regression, negative binomial regression, and zero-inflated negative binomial regression. The performance of the models is assessed using correlations, dispersion coefficients, and Alberg diagrams. The zero-inflated negative binomial regression model based on response for a class (RFC) shows the best overall ability to describe the variability of the number of defects in classes.
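The three count-data models named here are available in standard statistical packages; the sketch below fits all three to synthetic data with statsmodels. The predictor, coefficients, and data are invented for illustration and are not the paper's.

```python
# Hedged sketch: fit the three count-data models from the abstract to
# synthetic data; the predictor and response here are stand-ins.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(0)
n = 500
rfc = rng.poisson(10, n).astype(float)            # stand-in CK metric (RFC)
X = sm.add_constant(rfc)
defects = rng.poisson(np.exp(-1.5 + 0.12 * rfc))  # synthetic defect counts

fits = {
    "Poisson": sm.Poisson(defects, X).fit(disp=False),
    "NegBin": sm.NegativeBinomial(defects, X).fit(disp=False),
    "ZINB": ZeroInflatedNegativeBinomialP(defects, X, exog_infl=X)
            .fit(disp=False, maxiter=500),
}
for name, res in fits.items():                    # compare by AIC
    print(f"{name}: AIC={res.aic:.1f}")
```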

13.
The open-source Java software framework JStatCom is presented, which supports the development of rich desktop clients for data analysis in a rather general way. The concept is to solve all recurring tasks with reusable components and to enable rapid application development through a standards-based approach that is readily supported by existing programming tools. Furthermore, JStatCom makes it possible to call external procedures written in other languages, for example Gauss, Ox or Matlab, from within Java. In this way an existing code base of numerical routines written in domain-specific programming languages can be reused and linked with the Java world. A reference application for JStatCom is the econometric software package JMulTi, which is briefly introduced.

14.
Context: State machines are widely used to describe the dynamic behavior of objects, components, and systems. As a communication tool between various stakeholders, it is essential that state machines be easily and correctly comprehensible. Poorly understood state machines can lead to misunderstandings and communication overhead, adversely affecting the quality of the final product. Nevertheless, there is a lack of measurement research for state machines.

Objective: In this paper, we propose a metric, called SUM, to evaluate the understandability of state machines. SUM is defined on the basis of cohesion and coupling concepts.

Method: To validate SUM as a state machine understandability indicator, we performed an empirical study using five systems. We constructed five different state machines for each system, for a total of 25 state machines. Two aspects of understandability, efficiency (UEff) and correctness (UCor), were obtained from 40 participants for the state machines. We then performed correlation and consistency analyses between the SUM values and the measured understandability values.

Results: The correlation analysis indicated that SUM was significantly correlated with UEff (p = 0.003) and UCor (p = 0.027). The consistency analysis indicated that SUM was positively correlated with UEff in four of the systems and with UCor in all five systems.

Conclusion: These results confirm the possibility that SUM can be a useful understandability indicator for state machines. We believe the proposed metric can serve as a guideline for constructing quality state machines.
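The abstract does not give the SUM formula, but the validation step it describes is straightforward to reproduce in outline: correlate the metric values with the measured understandability scores. The numbers below are placeholders, and the choice of Spearman correlation is an assumption, since the abstract does not name the coefficient used.

```python
# Hypothetical validation sketch: correlate metric values with measured
# understandability; all values below are invented placeholders.
from scipy.stats import spearmanr

sum_scores = [0.42, 0.55, 0.31, 0.68, 0.47]   # SUM per state machine (made up)
u_eff = [0.50, 0.61, 0.35, 0.72, 0.52]        # measured efficiency (made up)

rho, p_value = spearmanr(sum_scores, u_eff)
print(f"rho = {rho:.2f}, p = {p_value:.3f}")
```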

15.
Jin Lei, Wang Cheng. Computer Simulation (《计算机仿真》), 2004, 21(6): 190-193
Installing optical cable on existing high-voltage transmission lines makes rational use of existing resources. The system described here uses a coupled computation of the power electromagnetic field and the mechanical force field to simulate installations in which high-voltage transmission lines and all-dielectric self-supporting (ADSS) optical communication cables share the same towers. It jointly analyzes several factors: the stiffness, strength, and stability of the transmission towers under the load of the added optical cable; the electric-field distribution in the space around the conductors and on the towers; and the electromagnetic field strength at the optical cable's location. It also considers the trajectories and relative positions of the power conductors and the optical cable under adverse conditions such as galloping, strong airflow, and heavy icing. The system follows an object-oriented programming paradigm with a modular design, offers good compatibility and extensibility, and has been put into service; operational use has confirmed that the simulation results are reasonably accurate and reliable.

16.
In this empirical study, we evaluate the extent to which a set of software measures are correlated with the number of faults and the total estimated repair effort for a large software system. The measures we use are basic counts reflecting program size and structure and metrics proposed by McCabe and Halstead. The effect of program size has a major influence on these metrics, and we present a suitable method of adjusting the metrics for size. In modeling faults or repair effort as a function of one variable, a number of measures individually explain approximately one-quarter of the variation observed in the fault data. No one measure does significantly better than size in explaining the variation in faults found across software units, and thus multiple variable models are necessary to find metrics of importance in addition to program size. The "best" multivariate model explains approximately one-half the variation in the fault data. The metrics included in this model (in addition to size) are: the ratio of block comments to total lines of code, the number of decisions per function, and the relative vocabulary of program variables and operators. These metrics have potential for future use in the quality control of software.
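One detail worth unpacking is the size adjustment. A common way to "adjust a metric for size" is to regress the metric on size and work with the residuals; the paper's exact method is not spelled out in the abstract, so the sketch below is only an illustration under that assumption.

```python
# Assumed adjustment method (the abstract does not give the paper's exact
# procedure): remove the linear effect of size and keep the residuals.
import numpy as np

def size_adjusted(metric: np.ndarray, size: np.ndarray) -> np.ndarray:
    """Residuals of a least-squares fit of the metric on program size."""
    slope, intercept = np.polyfit(size, metric, 1)
    return metric - (slope * size + intercept)

loc = np.array([120, 450, 80, 900, 300], dtype=float)    # unit sizes in LOC
mccabe = np.array([10, 35, 6, 70, 22], dtype=float)      # cyclomatic complexity
print(size_adjusted(mccabe, loc).round(2))  # positive = more complex than size predicts
```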

17.
Many empirical studies have found that software metrics can predict class error proneness and that the prediction can be used to accurately group error-prone classes. Recent empirical studies have used open-source systems, but they focused on the relationship between software metrics and class error proneness during the development phase of software projects. Whether software metrics can still predict class error proneness in a system's post-release evolution remains an open question. This study examined three releases of the Eclipse project and found that, although some metrics can still predict class error proneness in three error-severity categories, the accuracy of the prediction decreased from release to release. Furthermore, we found that the prediction cannot be used to build a metrics model that identifies error-prone classes with acceptable accuracy. These findings suggest that, as a system evolves, using commonly used metrics to identify which classes are more prone to errors becomes increasingly difficult, and that alternatives to metric-prediction models are needed if we want to locate error-prone classes with high accuracy.

18.
This paper presents an analysis of the unit testing approach developed and used by the Core Flight Software System (CFS) product line team at the NASA Goddard Space Flight Center (GSFC). The goal of the analysis is to understand, review, and recommend strategies for improving the CFS team's existing unit testing infrastructure, and to capture lessons learned and best practices that other software product line (SPL) teams can use for their unit testing. The results show that the core and application modules of the CFS are unit tested in isolation using a stub framework developed by the CFS team. Application developers can unit test their code without waiting for the core modules to be completed, and vice versa. The analysis found that this unit testing approach incorporates many practical and useful solutions, such as allowing unit testing without hardware or special OS features in the loop by defining stub implementations of dependent modules. These solutions are worth considering when deciding how to design the testing architecture for an SPL.
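The CFS itself is C code, but the stub idea translates directly: the module under test calls a stand-in for its dependency instead of the real implementation. Below is a conceptual sketch in Python with invented names (send_telemetry, transmit); it illustrates the isolation pattern, not the CFS stub framework's actual API.

```python
# Conceptual sketch of stub-based isolation testing; all names are invented.
import unittest
from unittest import mock

def send_telemetry(core_api, packet: bytes) -> bool:
    """Application code under test: delegates transmission to the core module."""
    return core_api.transmit(packet) == 0

class SendTelemetryTest(unittest.TestCase):
    def test_success_path(self):
        core_stub = mock.Mock()
        core_stub.transmit.return_value = 0       # stub: pretend transmit succeeded
        self.assertTrue(send_telemetry(core_stub, b"\x01\x02"))
        core_stub.transmit.assert_called_once_with(b"\x01\x02")

    def test_failure_path(self):
        core_stub = mock.Mock()
        core_stub.transmit.return_value = -1      # stub: simulated transmit error
        self.assertFalse(send_telemetry(core_stub, b"\x01\x02"))

if __name__ == "__main__":
    unittest.main()
```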

19.
With the increasing use of object-oriented methods in new software development, there is a growing need both to document and to improve current practice in object-oriented design and development. In response to this need, a number of researchers have developed various metrics for object-oriented systems as proposed aids to the management of these systems. In this research, an analysis of a set of metrics proposed by Chidamber and Kemerer (1994) is performed in order to assess their usefulness for practising managers. First, an informal introduction to the metrics is provided by way of an extended example of their managerial use. Second, exploratory analyses of empirical data relating the metrics to productivity, rework effort and design effort on three commercial object-oriented systems are provided. The empirical results suggest that the metrics provide significant explanatory power for variations in these economic variables, over and above that provided by traditional measures such as size in lines of code, and after controlling for the effects of individual developers.

20.
Many studies use logistic regression models to investigate the ability of complexity metrics to predict fault-prone classes. However, it is not uncommon to see the inappropriate use of performance indicators such as the odds ratio in previous studies. In particular, a recent study by Olague et al. uses the odds ratio associated with a one-unit increase in a metric to compare the relative magnitudes of the associations between individual metrics and fault-proneness. In addition, the percentages of concordant, discordant, and tied pairs are used to evaluate the predictive effectiveness of a univariate logistic regression model. Their results suggest that lesser-known complexity metrics such as standard deviation method complexity (SDMC) and average method complexity (AMC) are better predictors than the two commonly used metrics: lines of code (LOC) and weighted method McCabe complexity (WMC). In this paper, however, we show that (1) the odds ratio associated with a one-standard-deviation increase, rather than a one-unit increase, in a metric should be used to compare the relative magnitudes of the effects of individual metrics on fault-proneness; otherwise, misleading results may be obtained; and (2) the purported connection between the percentages of concordant, discordant, and tied pairs and the predictive effectiveness of a univariate logistic regression model is false, as these percentages do not in fact depend on the model. Furthermore, we use data collected from three versions of Eclipse to re-examine the ability of complexity metrics to predict fault-proneness. Our experimental results reveal that: (1) many metrics exhibit moderate or almost moderate ability in discriminating between fault-prone and not fault-prone classes; (2) LOC and WMC are indeed better fault-proneness predictors than SDMC and AMC; and (3) the explanatory power of the other complexity metrics, over and above LOC, is limited.
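Point (1) is easy to demonstrate: with a logistic coefficient beta for metric X, the odds ratio for a one-standard-deviation increase is exp(beta * sd(X)), which puts metrics measured on very different scales on a comparable footing. The coefficients and standard deviations below are invented for illustration.

```python
# Sketch of point (1): compare exp(beta) (per-unit odds ratio) with
# exp(beta * sd) (per-standard-deviation odds ratio) on made-up numbers.
import math

def odds_ratio_per_sd(beta: float, sd: float) -> float:
    return math.exp(beta * sd)

loc_beta, loc_sd = 0.002, 150.0   # lines of code: tiny per-unit effect, huge spread
wmc_beta, wmc_sd = 0.050, 8.0     # weighted method complexity: the reverse

for name, beta, sd in [("LOC", loc_beta, loc_sd), ("WMC", wmc_beta, wmc_sd)]:
    print(f"{name}: OR/unit = {math.exp(beta):.3f}, OR/SD = {odds_ratio_per_sd(beta, sd):.3f}")
```

On these made-up numbers the per-unit odds ratios (1.002 vs. 1.051) make WMC look far stronger, while the per-SD odds ratios (1.350 vs. 1.492) show the two effects are roughly comparable, which is exactly the kind of distortion the paper warns about.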
