Similar Documents
1.
Ashman, R. IT Professional, 2004, 6(4): 40-44
Software development estimates are often inaccurate, and overly optimistic estimates are major contributors to project failure, even though every completed project is a rich source of information about performance and estimation. Modern development processes promote risk management, an architecture-first approach, the decomposition of the project into iterations, and the assignment of requirements to those iterations. When a project adopts these best practices, it achieves a high degree of technical control and becomes easier to manage. One difficult project management task is to accurately determine the effort required to complete the project. This article discusses a use-case-based estimation model for determining project effort. The technique examines the relationship between estimated and actual data in order to improve future estimates. Using a simple set of metrics, it is possible to build a credible model for project estimation. The model described here works best in an iterative development process, allowing comparisons between successive iterations.
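To make the estimated-versus-actual feedback idea concrete, here is a minimal Python sketch. The adjustment rule (scaling a raw estimate by the mean actual-to-estimate ratio of past iterations) and the sample figures are assumptions for illustration, not Ashman's published model.

# Illustrative only: feed estimated-vs-actual history from past iterations
# back into the next iteration's estimate.
def calibrated_estimate(raw_estimate, history):
    """Adjust a raw effort estimate using past (estimated, actual) pairs."""
    if not history:
        return raw_estimate
    ratios = [actual / estimated for estimated, actual in history]
    correction = sum(ratios) / len(ratios)   # average optimism/pessimism
    return raw_estimate * correction

# Iterations completed so far: (estimated person-days, actual person-days)
history = [(40, 52), (35, 41), (50, 63)]
print(round(calibrated_estimate(45, history), 1))  # raw 45 is adjusted upward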

2.
Development effort is one of the most important metrics that must be estimated in order to plan a project. The uncertainty and complexity of software projects make the process of effort estimation difficult and ambiguous. Analogy-based estimation (ABE) is the most common method in this area because it is straightforward and practical, relying on comparisons between new projects and completed projects to estimate development effort. Despite many advantages, ABE is unable to produce accurate estimates when project features are not equally important or when the relationships among features are difficult to determine. In such situations, efficient feature weighting can improve the performance of ABE. This paper proposes a hybrid estimation model that combines a particle swarm optimization (PSO) algorithm with ABE to increase the accuracy of software development effort estimation. The combination identifies similar projects more accurately by optimizing the performance of the similarity function in ABE. A framework is presented in which appropriate weights are allocated to project features so that the most accurate estimates are achieved. The suggested model is flexible enough to be used on different data sets, including categorical and non-categorical project features. Three real data sets are employed to evaluate the proposed model, and the results are compared with other estimation models. The promising results show that a combination of PSO and ABE can significantly improve the performance of existing estimation models.
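The following Python sketch shows the core ABE step, weighted similarity plus nearest-neighbour averaging, that such a hybrid model optimizes. The project data, feature vectors, weights, and k are invented; in the paper the weights would come from the PSO search, which is omitted here.

# A minimal sketch of analogy-based estimation with feature weights.
import math

def similarity(p, q, weights):
    """Weighted Euclidean similarity between two feature vectors."""
    dist = math.sqrt(sum(w * (a - b) ** 2 for w, a, b in zip(weights, p, q)))
    return 1.0 / (1.0 + dist)

def abe_estimate(new_project, completed, weights, k=2):
    """Average the effort of the k most similar completed projects."""
    ranked = sorted(completed,
                    key=lambda rec: similarity(new_project, rec["features"], weights),
                    reverse=True)
    return sum(rec["effort"] for rec in ranked[:k]) / k

completed = [
    {"features": [3.0, 1.2, 10.0], "effort": 120.0},
    {"features": [2.5, 0.8,  7.0], "effort":  90.0},
    {"features": [5.0, 2.0, 20.0], "effort": 260.0},
]
weights = [0.6, 0.3, 0.1]            # would be tuned by PSO in the paper
print(abe_estimate([3.2, 1.0, 11.0], completed, weights))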

3.
Agresti, W.W. IT Professional, 2006, 8(5): 12-16
Software metrics programs must provide value. Metrics must be tied to what is important about a software project. A thorough and comprehensive way to help ensure that connection is the Goal-Question-Metric (GQM) approach. GQM proceeds in an orderly fashion from goals to specific questions, so that when the metrics are defined they relate directly to the project's goals. The P10 software metrics framework better reflects the contemporary software development environment. P10 is a simple way to help ensure that the software project metrics you use will answer the important questions you face in creating on-time, high-quality software.
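As a toy illustration of the GQM structure described above, the snippet below maps one goal to questions and candidate metrics. The goal, questions, and metrics are invented examples and are not Agresti's P10 framework.

# A toy Goal-Question-Metric structure.
gqm = {
    "goal": "Deliver the release on schedule",
    "questions": {
        "Is the team keeping pace with the plan?": [
            "planned vs. completed features per iteration",
            "schedule variance (days)",
        ],
        "Is rework slowing us down?": [
            "defects reopened per iteration",
            "effort spent on rework (%)",
        ],
    },
}

for question, metrics in gqm["questions"].items():
    print(question)
    for metric in metrics:
        print("  -", metric)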

4.
The ability to accurately and consistently estimate software development efforts is required by the project managers in planning and conducting software development activities. Since software effort drivers are vague and uncertain, software effort estimates, especially in the early stages of the development life cycle, are prone to a certain degree of estimation errors. A software effort estimation model which adopts a fuzzy inference method provides a solution to fit the uncertain and vague properties of software effort drivers. The present paper proposes a fuzzy neural network (FNN) approach for embedding artificial neural network into fuzzy inference processes in order to derive the software effort estimates. Artificial neural network is utilized to determine the significant fuzzy rules in fuzzy inference processes. We demonstrated our approach by using the 63 historical project data in the well-known COCOMO model. Empirical results showed that applying FNN for software effort estimates resulted in slightly smaller mean magnitude of relative error (MMRE) and probability of a project having a relative error of less than or equal to 0.25 (Pred(0.25)) as compared with the results obtained by just using artificial neural network and the original model. The proposed model can also provide objective fuzzy effort estimation rule sets by adopting the learning mechanism of the artificial neural network.  相似文献   
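The two accuracy measures named in this abstract have standard definitions, shown in the short Python sketch below on invented data: MRE = |actual - estimate| / actual, MMRE is its mean, and Pred(0.25) is the fraction of projects with MRE at or below 0.25.

# Compute MMRE and Pred(0.25) for a set of projects.
def mmre_and_pred(actuals, estimates, threshold=0.25):
    mres = [abs(a - e) / a for a, e in zip(actuals, estimates)]
    mmre = sum(mres) / len(mres)
    pred = sum(1 for m in mres if m <= threshold) / len(mres)
    return mmre, pred

actuals   = [100.0, 250.0, 80.0, 400.0]
estimates = [ 90.0, 310.0, 78.0, 350.0]
mmre, pred25 = mmre_and_pred(actuals, estimates)
print(f"MMRE = {mmre:.3f}, Pred(0.25) = {pred25:.2f}")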

5.
Schroeder, M. IT Professional, 1999, 1(6): 30-36
While there are many ways you can capture your development experience, metrics can help quantify previous work in a way that directly guides future efforts. For example, projects of different sizes can require vastly different levels of effort, organizational structure, and management discipline. If you let experience be your guide and understand how a newly proposed system compares to projects you have already completed, you have a much better chance of finishing on time and under budget. A wide range of metrics can help you manage projects, but here the author focuses on a particular set of product metrics that highlight and quantify a system's object-oriented (OO) properties. He draws many of the results mentioned here from an analysis of 18 production-level applications built in PowerBuilder, a common GUI tool used for developing client-server database applications on a variety of platforms. (The PowerBuilder metrics analyzer is available free from American Management Systems, http://www.amsinc.com.) Although these results are derived mainly from PowerBuilder applications, they should still provide practical guidance for development in most OO languages.

6.
周海玲, 孙涌. 微机发展, 2006, 16(2): 23-25
Every successful software organization treats measurement as an important means of assuring the quality of its management and engineering, and software cost estimation is the core task of software measurement [1,2]. To improve the accuracy of cost estimation, this paper calibrates the basic COCOMO model against historical project data from a specific software company; the parameters are corrected using a correlation-based algorithm on log-transformed data, the results are compared with other methods, and satisfactory results are obtained. The calibrated model predicts project development cost more accurately, which realizes in practice the guidance that COCOMO cost measurement can give a software project. The calibration of the cost estimation model described in this paper is therefore of real practical value to software development companies.
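A minimal sketch of this calibration idea, assuming the basic COCOMO relation E = a * KLOC^b and ordinary least squares on log-transformed data, is shown below. The project figures are invented; a real calibration would use the organisation's own history, and the paper's exact correction procedure may differ.

# Calibrate basic COCOMO (E = a * KLOC**b) by linear regression in log space.
import math

def calibrate_cocomo(kloc_list, effort_list):
    xs = [math.log(k) for k in kloc_list]
    ys = [math.log(e) for e in effort_list]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    a = math.exp(mean_y - b * mean_x)
    return a, b

kloc   = [10, 25, 60, 120]          # thousands of lines of code (invented)
effort = [24, 70, 190, 430]         # person-months (invented)
a, b = calibrate_cocomo(kloc, effort)
print(f"E = {a:.2f} * KLOC^{b:.2f}")
print("predicted effort for 40 KLOC:", round(a * 40 ** b, 1))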

7.
Evaluating computers and other systems is difficult for a couple of reasons. First, the goal of the evaluation is typically ill-defined: customers, and sometimes even designers, either don't know or can't specify exactly what result they expect. They often don't specify the architectural variants to consider, and the metrics and workload they expect you to use are frequently ill-defined as well. Second, they rarely clarify which kind of model and evaluation method best suits the evaluation problem. These problems have consequences. For one thing, the decision-maker may not trust the evaluation. For another, poor planning means the evaluation cannot be reproduced if any of the parameters change slightly. Finally, the evaluation documentation is usually inadequate, so some time after the evaluation you might ask yourself, how did I come to that conclusion? An approach developed at Siemens makes decisions explicit and the process reproducible.

8.
Successfully applying software metrics (cited 2 times: 0 self-citations, 2 by others)
Grady, R.B. Computer, 1994, 27(9): 18-25
The word success is very powerful. It creates strong but widely varied images, ranging from the final seconds of an athletic contest to a graduation ceremony to the loss of 10 pounds. Success makes us feel good; it's cause for celebration. All these examples of success are marked by a measurable end point, whether external or self-created. Most of us who create software approach projects with a similar idea of success. Our feelings from project start to end are often strongly influenced by whether we spent any early time describing that success and how we might measure progress toward it. Software metrics measure specific attributes of a software product or a software development process; in other words, they are measures of success. It is convenient to group the ways we apply metrics to measure success into four areas. What do you need to measure and analyze to make your project a success? We show examples from many projects and Hewlett-Packard divisions that may help you chart your course.

9.
Software, IEEE, 2006, 23(4): 11-13
How should you design your software to detect, react to, and recover from exceptional conditions? If you follow Jim Shore's advice and design with a fail-fast attitude, you won't expend any effort recovering from failures. Shore argues that a "patch up and proceed" strategy often obfuscates problems. His simple design solution is to write code that checks for expected values upon entry and returns failure notifications when it cannot fulfil its responsibilities. He argues that careful use of assertions allows for early and visible failure, so you can quickly identify and correct problems.
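The contrast described above can be shown in a few lines of Python. The pricing example is invented; it only illustrates the fail-fast style against a "patch up and proceed" alternative, not Shore's own code.

def unit_price_fail_fast(total, quantity):
    # Check expected values on entry and fail loudly if they don't hold.
    assert quantity > 0, f"quantity must be positive, got {quantity}"
    assert total >= 0, f"total must be non-negative, got {total}"
    return total / quantity

def unit_price_patch_up(total, quantity):
    # The obscuring alternative: silently substitute a "safe" value.
    if quantity <= 0:
        return 0.0        # hides the real defect from the caller
    return total / quantity

print(unit_price_fail_fast(99.0, 3))   # 33.0
print(unit_price_patch_up(99.0, 0))    # 0.0, and the bug goes unnoticed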

10.
In this paper, a multidimensional 0–1 knapsack model with fuzzy parameters is defuzzified using triangular norm (t-norm) and t-conorm fuzzy relations. In the first part of the paper, surrogate relaxation models of the defuzzified models are developed, and surrogate constraint normalization rules are proposed as the surrogate multipliers. A methodology is proposed to evaluate several surrogate constraint normalization rules from the literature as well as one rule proposed in this paper. Three distance metrics are used to measure how far the fuzzy objective function values of the surrogate models lie from those of the original models. A numerical experiment shows that, under the stated assumptions, the rule proposed in this paper dominates the other rules considered for all three distance metrics. In the second part of the paper, a methodology is proposed for multi-attribute project portfolio selection, in which optimal solutions from the original defuzzified models as well as near-optimal solutions from their surrogate relaxation models are considered as alternatives. The aggregation of evaluation results is managed using a simple yet effective method, the fuzzy Simple Additive Weighting (SAW) method. The methodology is then applied to a hypothetical construction project portfolio selection problem with multiple attributes.
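To show the aggregation step only, here is a simplified, crisp version of Simple Additive Weighting in Python. The paper uses a fuzzy SAW; the portfolio names, attribute scores, and weights below are invented, and benefit-type attributes normalized by their column maximum are an assumption.

def saw_rank(alternatives, weights):
    """Score each alternative as the weighted sum of its normalized attributes."""
    n_attrs = len(weights)
    maxima = [max(alt["scores"][j] for alt in alternatives) for j in range(n_attrs)]
    ranked = []
    for alt in alternatives:
        score = sum(w * s / m for w, s, m in zip(weights, alt["scores"], maxima))
        ranked.append((score, alt["name"]))
    return sorted(ranked, reverse=True)

portfolios = [
    {"name": "portfolio A", "scores": [0.8, 0.6, 0.7]},
    {"name": "portfolio B", "scores": [0.9, 0.4, 0.9]},
    {"name": "portfolio C", "scores": [0.5, 0.9, 0.6]},
]
weights = [0.5, 0.3, 0.2]            # attribute importance, summing to 1
for score, name in saw_rank(portfolios, weights):
    print(f"{name}: {score:.3f}")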

11.
The effort required to complete software projects is often estimated, completely or partially, using the judgment of experts, whose assessments may be biased. In general, the bias seems to be towards overly optimistic estimates. The degree of bias varies from expert to expert and appears to depend on both conscious and unconscious processes. One possible way to reduce this bias towards over-optimism is to combine the judgments of several experts. This paper describes an experiment in which experts with different backgrounds combined their estimates in group discussion. First, 20 software professionals were asked to provide individual estimates of the effort required for a software development project. They then formed five estimation groups, each consisting of four experts, and each group agreed on a project effort estimate by pooling its knowledge in discussion. We found that the groups submitted less optimistic estimates than the individuals. Interestingly, the group discussion-based estimates were closer to the effort expended on the actual project than the average of the individual expert estimates was; that is, the group discussions led to better estimates than a mechanical averaging of the individual estimates. The groups' ability to identify a greater number of the activities required by the project is among the possible explanations for this reduction in bias.

12.
Software component size estimation is an important task in software project management. In a component-based approach, two steps may be used to estimate the overall size of object-oriented (OO) software: a designer uses metrics to predict the size of the software components and then uses those sizes to estimate the overall project size. From the OO software metrics literature, we identified factors that may affect the size of an OO software component. Using real-life data from 152 software components, we then determined the effect of the identified factors on the prediction of OO software component size. The results indicated that certain factors and the type of OO software component play a significant role in the estimate. We show how a regression tree data mining approach can be used to learn decision rules that guide future estimates.
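A small Python sketch of the regression-tree idea appears below. The features (attribute count, method count, component type code) and the data are invented stand-ins for the paper's 152 real components, and scikit-learn's DecisionTreeRegressor is used as a generic regression-tree learner rather than the authors' exact tool.

from sklearn.tree import DecisionTreeRegressor

# columns: [attribute count, method count, type code (0=window, 1=data object)]
X = [[5, 12, 0], [3, 6, 1], [20, 40, 0], [8, 15, 1], [2, 4, 1], [15, 30, 0]]
y = [450, 180, 2100, 520, 120, 1500]      # component size in lines of code

tree = DecisionTreeRegressor(max_depth=2, random_state=0)
tree.fit(X, y)
print(tree.predict([[10, 20, 0]]))        # predicted size for a new component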

13.

14.
15.
Barnard, J., Price, A. Software, IEEE, 1994, 11(2): 59-69
Inspection data is difficult to gather and interpret. At AT&T Bell Laboratories, the authors have defined nine key metrics that software project managers can use to plan, monitor, and improve inspections. Graphs of these metrics expose problems early and can help managers evaluate the inspection process itself. The nine metrics are: total noncomment lines of source code inspected, in thousands (KLOC); average lines of code inspected; average preparation rate; average inspection rate; average effort per KLOC; average effort per fault detected; average faults detected per KLOC; percentage of reinspections; and defect-removal efficiency.
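A few of these metrics are simple aggregates over inspection records, as the Python sketch below shows. The record fields and numbers are invented for illustration and are not the AT&T data.

inspections = [
    {"loc": 1200, "prep_hours": 6.0, "meeting_hours": 2.0, "faults": 9},
    {"loc":  800, "prep_hours": 4.0, "meeting_hours": 1.5, "faults": 4},
    {"loc": 1500, "prep_hours": 7.5, "meeting_hours": 2.5, "faults": 12},
]

total_kloc   = sum(i["loc"] for i in inspections) / 1000.0
total_effort = sum(i["prep_hours"] + i["meeting_hours"] for i in inspections)
total_faults = sum(i["faults"] for i in inspections)

print(f"total KLOC inspected:     {total_kloc:.2f}")
print(f"average effort per KLOC:  {total_effort / total_kloc:.1f} hours")
print(f"average effort per fault: {total_effort / total_faults:.2f} hours")
print(f"average faults per KLOC:  {total_faults / total_kloc:.1f}")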

16.
You can't control people. It's never too early to plan. Project management and control must be built in, not added on. These are just a few of the 18 rules one project manager (a father) passes along in a letter to a new project manager (his daughter) so that she can better prepare, plan, and manage her organization's software projects.

17.
Schneier, B. Computer, 1999, 32(3)
Cryptography is difficult. It combines mathematics, computer science, sometimes electrical engineering, and a twisted mindset that can figure out how to get around rules, break systems, and subvert the designers' intentions. Even very smart, knowledgeable, experienced people invent bad cryptography. In cryptography, there is security in following the crowd. A homegrown algorithm can't possibly be subjected to the hundreds of thousands of hours of cryptanalysis that DES and RSA have seen. A company, or even an industry association, can't begin to mobilize the resources that have been brought to bear against the Kerberos authentication protocol, for example. No one can duplicate the confidence that PGP offers after years of people going over the code, line by line, looking for implementation flaws. By following the crowd, you can leverage the cryptanalytic expertise of the worldwide community, not just a few weeks of some analyst's time.

18.
To date, most research in software effort estimation has not taken chronology into account when selecting projects for training and validation sets. A chronological split uses a project's starting and completion dates, so that any model estimating effort for a new project p uses as its training set only projects completed before p's starting date. A study in 2009 ("S3") investigated the use of a chronological split that takes a project's age into account. The research question was whether using a training set containing only the most recent past projects (a "moving window" of recent projects) leads to more accurate estimates than using the entire history of projects completed before the starting date of a new project. S3 found that moving windows could improve the accuracy of estimates. The study described here replicates S3 using three different, independent data sets. Estimation models were built using regression, and accuracy was measured using absolute residuals. The results contradict S3: they show no gain in estimation accuracy from using windows for effort estimation. This is a surprising result, because the intuition that recent data should be more helpful than old data for effort estimation is not supported. Several factors, discussed in this paper, might have contributed to the contradictory results. Part of our future work is to replicate this study using other data sets, to better understand when using windows is a suitable choice for software companies.
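The chronological split and moving window described above amount to a filter on the training set, as in the minimal Python sketch below. The project names, dates, and the 365-day window length are invented.

from datetime import date

projects = [
    {"name": "A", "completed": date(2019, 3, 1),  "effort": 300},
    {"name": "B", "completed": date(2020, 7, 15), "effort": 150},
    {"name": "C", "completed": date(2021, 6, 10), "effort": 420},
    {"name": "D", "completed": date(2021, 11, 5), "effort": 210},
]

def training_set(start_of_new_project, window_days=None):
    """Projects finished before the new project starts, optionally only recent ones."""
    candidates = [p for p in projects if p["completed"] < start_of_new_project]
    if window_days is not None:
        candidates = [p for p in candidates
                      if (start_of_new_project - p["completed"]).days <= window_days]
    return [p["name"] for p in candidates]

new_start = date(2022, 2, 1)
print("full history:  ", training_set(new_start))                   # A, B, C, D
print("365-day window:", training_set(new_start, window_days=365))  # C, D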

19.
As the cost of programming becomes a major component of the cost of computer systems, it becomes imperative that program development and maintenance be better managed. One measure a manager could use is programming complexity. Such a measure is very useful if the manager can be confident that the higher the complexity measure of a programming project, the more effort it takes to complete the project and perhaps to maintain it. Until recently, most measures of complexity were based only on intuition and experience. In the past three years, two objective metrics have been introduced: McCabe's cyclomatic number v(G) and Halstead's effort measure E. This paper reports an empirical study designed to compare these two metrics with a classic size measure, lines of code. A fourth metric, based on a model of programming, is introduced and shown to be better than the previously known metrics for some experimental data.
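The two metrics named here follow standard formulas, computed below in Python on invented counts: McCabe's v(G) = E - N + 2P from a control-flow graph, and Halstead's effort E as difficulty times volume derived from operator and operand counts.

import math

def cyclomatic_number(edges, nodes, connected_components=1):
    """v(G) = E - N + 2P for a control-flow graph."""
    return edges - nodes + 2 * connected_components

def halstead_effort(n1, n2, N1, N2):
    """E = D * V, with volume V = N*log2(n) and difficulty D = (n1/2)*(N2/n2)."""
    vocabulary = n1 + n2        # distinct operators + distinct operands
    length = N1 + N2            # total operators + total operands
    volume = length * math.log2(vocabulary)
    difficulty = (n1 / 2.0) * (N2 / n2)
    return difficulty * volume

print("v(G) =", cyclomatic_number(edges=9, nodes=8))            # 3
print("E    =", round(halstead_effort(n1=12, n2=7, N1=27, N2=15)))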

20.
Although a software development organisation is typically involved in more than one project simultaneously, the available tools for software cost estimation deal mostly with single software projects. To calculate the possible cost of an entire project portfolio, one must combine the single-project estimates while taking the uncertainty involved into account. In this paper, statistical simulation techniques are used to calculate confidence intervals for the effort needed for a project portfolio. The overall approach is illustrated by adapting the analogy-based method for software cost estimation to cover multiple projects.
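A minimal Monte Carlo sketch of the portfolio idea is shown below: draw each project's effort from a distribution around its single-project estimate, sum the draws, and read off a confidence interval. The triangular distributions and their bounds are invented stand-ins for the paper's analogy-based estimates, and the simulation setup is an assumption, not the authors' exact procedure.

import random

random.seed(0)

# (low, most likely, high) effort in person-months for each project
portfolio = [(80, 100, 140), (40, 55, 90), (150, 200, 300)]

totals = []
for _ in range(10_000):
    totals.append(sum(random.triangular(low, high, mode)
                      for low, mode, high in portfolio))
totals.sort()

lower = totals[int(0.05 * len(totals))]
upper = totals[int(0.95 * len(totals))]
print(f"90% confidence interval for portfolio effort: {lower:.0f} - {upper:.0f}")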
