Similar Literature
 20 similar documents found (search time: 114 ms)
1.
Software reuse technology is a solution for avoiding duplicated effort in the software development process. Starting from the concept of software reuse, this paper proposes an overall framework for process reuse, describes the basic algorithm for evolving process components, and presents the steps for applying the reuse model in system development.

2.
张传波 《软件世界》2007,(17):60-63
Software reuse is a way to avoid duplicated effort in software development. It is regarded as a practical and feasible path to resolving the software crisis and improving software productivity and quality.

3.
With the rapid expansion of computer application domains and the continuous growth in the scale and complexity of software, the software crisis has become increasingly apparent. Software reuse is a solution for avoiding duplicated effort in software development; it is an important research area of software engineering and is regarded as the main way to resolve the software crisis and improve software productivity and quality. Component-based software reuse is the current focus of reuse research and is viewed as one of the key technologies for achieving successful reuse. This paper gives a comprehensive overview of software reuse technology, introducing the basic concepts of component-based software reuse and a simple application in embedded systems.

4.
Software reuse is a solution for avoiding duplicated effort in the software development process, but designing reusable business components that are general across many domains is difficult; domain-oriented reuse instead achieves reuse within a specific application domain. Designing application frameworks for large-grained reuse is therefore of great significance for improving software productivity and quality. Taking software reuse as the starting point and following a component-based development approach, this paper studies component technology, domain engineering, and domain-oriented application framework technology in depth, proposes a requirements-driven development method for domain-oriented application frameworks, and describes in detail the application of this method in the domain of project review.

5.
A Method for Implementing an Object-Oriented Framework   (Total citations: 4; self-citations: 2; citations by others: 4)
周警伟, 罗晓沛 《计算机仿真》2002,19(3):107-109
Software reuse is a solution for avoiding duplicated effort in software development; through reuse, development efficiency and quality can be improved. However, common reuse techniques such as class libraries still cannot meet the demands of reuse. An object-oriented framework is an integration of components for a particular domain; it achieves software reuse at a higher level and on a larger scale, reusing not only code but also analysis and design to better improve efficiency and quality. This paper discusses, mainly from a methodological perspective, how to implement an Object-Oriented Framework (OOF), introduces research and practical activities in the object-oriented framework field at home and abroad, and offers some ideas on how to strengthen research in related areas.

6.
Software reuse is a solution for avoiding duplicated effort in software development. The software resources of open source projects, such as source code, mailing lists, bug reports, and Q&A documents, contain software knowledge that is large in scale, complex in structure, and rich in semantic relationships. How to acquire and organize this knowledge, and how to retrieve it conveniently during software reuse, are pressing problems. To address them, this work constructs a software knowledge graph for open source projects and provides knowledge retrieval based on it. The main contributions are: principles and methods for extracting software knowledge entities from four different types of software resources; a method for building the relationships among software knowledge entities; two software knowledge retrieval mechanisms whose results are presented as a combination of text lists and graph visualization; and a framework for software knowledge graph construction. Based on this work, a software knowledge graph construction tool for open source projects was designed and implemented. A case study shows that the constructed knowledge graph helps software developers retrieve and apply software knowledge more effectively.
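The abstract gives no implementation details; purely as a hedged sketch of the underlying idea (storing software knowledge entities and typed relations as a graph and querying it), one might start from something like the following, where all entity names, relation types, and the networkx-based storage choice are illustrative assumptions rather than the paper's design:

```python
import networkx as nx

# A tiny software knowledge graph: nodes are knowledge entities extracted from
# different resource types (code, bug reports, mailing lists, Q&A documents);
# edges carry a typed relation between them.
kg = nx.MultiDiGraph()
kg.add_node("OrderService", kind="code_entity")          # hypothetical class
kg.add_node("BUG-123", kind="bug_report")                # hypothetical report
kg.add_node("thread-42", kind="mailing_list_thread")     # hypothetical thread
kg.add_edge("BUG-123", "OrderService", relation="mentions")
kg.add_edge("thread-42", "BUG-123", relation="discusses")

def related(entity, relation=None):
    """Simple retrieval: entities linked from `entity`, optionally filtered by relation type."""
    for _, target, data in kg.out_edges(entity, data=True):
        if relation is None or data["relation"] == relation:
            yield target, data["relation"]

print(list(related("BUG-123")))                 # [('OrderService', 'mentions')]
print(list(related("thread-42", "discusses")))  # [('BUG-123', 'discusses')]
```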

7.
姚竞英 《福建电脑》2011,27(3):93-94,92
Software reuse can improve software development efficiency and quality and is a solution for avoiding duplicated effort in the development process. Component technology is a commonly used approach to software reuse. This paper studies component technology and component-based software development (CBSD) and, in the context of an online examination system project, discusses an EOS-based online examination system.

8.
An ISO9000-Based Software Quality Assurance Model   (Total citations: 11; self-citations: 0; citations by others: 11)
王青 《软件学报》2001,12(12):1837-1842
More than 40 years after the software crisis emerged, it still has not been fundamentally resolved; practitioners widely recognize that the lack of standardized and effective software quality assurance techniques and means is a major cause. With the long-term development of industrialized production, total quality management has formed a complete body of theory and practice, and international and industry standards have appeared. Targeting the current state of China's software industry, this paper proposes a software quality assurance model and implementation framework suited to its conditions.

9.
Software reuse is a solution for avoiding duplicated effort in software development, and component-based reuse is its main form. Using the development of an order management system as an example, this paper studies a component-based enterprise Web development method, builds a component model for the order management system, and implements it with the J2EE technology specification, thereby addressing the problems of duplicated coding and low development efficiency in management software.

10.
The software crisis has constrained the further growth of software scale. Software reuse has become one way to address the crisis, and component reuse is an important part of it. For software developers, how to get started with reuse is an urgent problem. Based on the study of reusable component theory and practical experience in software development, this paper summarizes methods and strategies for identifying software reuse opportunities, proposes a multi-domain re-analysis method, gives its implementation steps, supporting techniques, and strategies, and establishes a corresponding model.

11.
Accurate estimation of software development effort is strongly associated with the success or failure of software projects. The clear lack of convincing accuracy and flexibility in this area has attracted the attention of researchers over the past few years. Despite improvements achieved in effort estimation, there is no strong agreement as to which individual model is the best. Recent studies have found that an accurate estimation of development effort in software projects is unreachable in global space, meaning that proposing a high performance estimation model for use in different types of software projects is likely impossible. In this paper, a localized multi-estimator model, called LMES, is proposed in which software projects are classified based on underlying attributes. Different clusters of projects are then locally investigated so that the most accurate estimators are selected for each cluster. Unlike prior models, LMES does not rely on only one individual estimator in a cluster of projects. Rather, an exhaustive investigation is conducted to find the best combination of estimators to assign to each cluster. The investigation domain includes 10 estimators combined using four combination methods, which results in 4017 different combinations. ISBSG, Maxwell and COCOMO datasets are utilized for evaluation purposes, which include a total of 573 real software projects. The promising results show that estimation accuracy is improved through localization of the estimation process and allocation of appropriate estimators. Besides increased accuracy, the significant contribution of LMES is its adaptability and flexibility to deal with the complexity and uncertainty that exist in the field of software development effort estimation.
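The abstract describes LMES only at a high level; as a rough, hedged illustration of the localization idea (cluster projects on their attributes, then keep whichever estimator performs best within each cluster), the sketch below uses placeholder data, two off-the-shelf estimators, and a simple in-cluster error score, none of which are the paper's actual configuration of 10 estimators and four combination methods:

```python
import numpy as np
from sklearn.base import clone
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.random((60, 3))                                   # project attributes (placeholder data)
y = 10 * X[:, 0] + 5 * X[:, 1] + rng.normal(0, 1, 60)     # effort in person-months

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
candidates = [LinearRegression(), KNeighborsRegressor(n_neighbors=3)]

best_per_cluster = {}
for c in np.unique(clusters):
    Xc, yc = X[clusters == c], y[clusters == c]
    # Keep the candidate with the lowest in-cluster absolute error; a real study
    # would use cross-validation and would also evaluate combinations of estimators.
    scored = []
    for est in candidates:
        model = clone(est).fit(Xc, yc)
        scored.append((np.mean(np.abs(model.predict(Xc) - yc)), model))
    best_per_cluster[int(c)] = min(scored, key=lambda s: s[0])[1]

print({c: type(m).__name__ for c, m in best_per_cluster.items()})
```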

12.
In this paper, we present a model for software effort (person-month) estimation based on a three-level Bayesian network, the 15 COCOMO cost-driver components, and software size. The Bayesian network works with discrete intervals for its nodes; however, we treat the intervals of all network nodes as fuzzy numbers. We also obtain an optimal updating coefficient for the effort estimate, based on the concept of optimal control, using a genetic algorithm and particle swarm optimization on the COCOMO NASA database; in other words, the estimated effort is modified by determining the optimal coefficient. In addition, we estimate software effort while accounting for software quality in terms of the number of defects detected and removed in three steps: requirements specification, design, and coding. If the number of defects exceeds a specified threshold, the model returns to the current step and additional effort is added to the estimate. The results indicate that the optimal updating coefficient obtained by the genetic algorithm increases estimation accuracy significantly, and comparison with other models indicates that the proposed model is more accurate than the others.
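For context, the intermediate COCOMO form that the abstract builds on combines software size with 15 effort multipliers; reading the abstract's "optimal updating coefficient" as a multiplicative correction on the base estimate (my paraphrase, not a formula quoted from the paper) gives:

```latex
E_{\mathrm{base}} = a \,(\mathrm{KLOC})^{b} \prod_{i=1}^{15} EM_i,
\qquad
\hat{E} = \alpha \, E_{\mathrm{base}}
```

Here a and b are the mode-dependent COCOMO constants, EM_i are the cost-driver multipliers, and α is the coefficient tuned by the genetic algorithm or particle swarm optimization.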

13.
14.
Risk management and crisis management are two important aspects of software project management. Risk management predicts, controls, and manages project risks, while crisis management deals with unexpected problems in a project; together they are an effective safeguard for project success. This paper focuses on the causes of risks and crises and on methods for managing them.

15.
A Reuse-Based Software Development Environment   (Total citations: 3; self-citations: 0; citations by others: 3)
Software reuse is an important means of improving software productivity and quality and of alleviating the software crisis. The key to applying it is a supporting environment that helps developers achieve reuse. To this end, we designed and implemented a reusability-based software development environment, RSDE, on SUN workstations. This paper describes the design, implementation, and features of the environment.

16.
The ability to accurately and consistently estimate software development efforts is required by project managers in planning and conducting software development activities. Since software effort drivers are vague and uncertain, software effort estimates, especially in the early stages of the development life cycle, are prone to a certain degree of estimation error. A software effort estimation model which adopts a fuzzy inference method provides a solution to fit the uncertain and vague properties of software effort drivers. The present paper proposes a fuzzy neural network (FNN) approach for embedding an artificial neural network into fuzzy inference processes in order to derive the software effort estimates. The artificial neural network is utilized to determine the significant fuzzy rules in the fuzzy inference processes. We demonstrated our approach by using the 63 historical projects in the well-known COCOMO model. Empirical results showed that applying FNN for software effort estimates resulted in slightly smaller mean magnitude of relative error (MMRE) and probability of a project having a relative error of less than or equal to 0.25 (Pred(0.25)) as compared with the results obtained by just using the artificial neural network and the original model. The proposed model can also provide objective fuzzy effort estimation rule sets by adopting the learning mechanism of the artificial neural network.
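The two accuracy measures cited in the abstract, MMRE and Pred(0.25), are standard; a minimal sketch of how they are computed (the effort values below are made up for illustration, not taken from the COCOMO data):

```python
def mmre(actual, estimated):
    """Mean Magnitude of Relative Error: average of |actual - estimated| / actual."""
    return sum(abs(a - e) / a for a, e in zip(actual, estimated)) / len(actual)

def pred(actual, estimated, level=0.25):
    """Fraction of projects whose relative error is <= level (here Pred(0.25))."""
    hits = sum(abs(a - e) / a <= level for a, e in zip(actual, estimated))
    return hits / len(actual)

actual    = [120.0, 45.0, 300.0, 60.0]   # person-months (illustrative values)
estimated = [100.0, 50.0, 360.0, 58.0]
print(mmre(actual, estimated), pred(actual, estimated))
```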

17.
Studies on open source software (OSS) have shown that the license under which an OSS is released has an impact on the success or failure of the software. In this paper, we model the relationship between an OSS developer's utility, the effort that goes into developing an OSS, his attitude towards the freedom to choose an OSS license, and the choice of OSS license. We find that the larger the effort to develop OSS, the greater the likelihood that the OSS license will be free from restrictions. Interestingly, the result holds even when all OSS developers prefer restrictive or less-restrictive licenses. The results suggest that least-restrictive or non-copyleft licenses will dominate other types of OSS license when a large effort is required to develop derivative software. On the other hand, most-restrictive or strong-copyleft licenses will be dominant when minimal effort is required to develop the original OSS and the derivative software.

18.
Context: Along with expert judgment, analogy-based estimation, and algorithmic methods (such as Function point analysis and COCOMO), Least Squares Regression (LSR) has been one of the most commonly studied software effort estimation methods. However, an effort estimation model using LSR, a single LSR model, is highly affected by the data distribution. Specifically, if the data set is scattered and the data do not sit closely on the single LSR model line (do not closely map to a linear structure) then the model usually shows poor performance. In order to overcome this drawback of the LSR model, a data partitioning-based approach can be considered as one of the solutions to alleviate the effect of data distribution. Even though clustering-based approaches have been introduced, they still have potential problems to provide accurate and stable effort estimates. Objective: In this paper, we propose a new data partitioning-based approach to achieve more accurate and stable effort estimates via LSR. This approach also provides an effort prediction interval that is useful to describe the uncertainty of the estimates. Method: Empirical experiments are performed to evaluate the performance of the proposed approach by comparing with the basic LSR approach and clustering-based approaches, based on industrial data sets (two subsets of the ISBSG (Release 9) data set and one industrial data set collected from a banking institution). Results: The experimental results show that the proposed approach not only improves the accuracy of effort estimation more significantly than that of other approaches, but it also achieves robust and stable results according to the degree of data partitioning. Conclusion: Compared with the other considered approaches, the proposed approach shows a superior performance by alleviating the effect of data distribution that is a major practical issue in software effort estimation.
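The abstract does not detail the partitioning scheme; as a hedged sketch of the general idea (partition the projects, fit a separate least-squares model per partition, and derive a rough prediction interval from in-partition residuals), with assumed features, k-means partitioning, and synthetic data standing in for the industrial data sets:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.random((80, 2)) * [10.0, 5.0]                    # e.g. size and team experience (placeholder features)
y = 3 * X[:, 0] + 2 * X[:, 1] + rng.normal(0, 2, 80)     # effort

km = KMeans(n_clusters=4, n_init=10, random_state=1).fit(X)
local_models = {
    c: LinearRegression().fit(X[km.labels_ == c], y[km.labels_ == c])
    for c in range(km.n_clusters)
}

def estimate_with_interval(x_new, z=1.96):
    """Route a new project to its partition's LSR model and return the estimate
    plus a rough interval derived from that partition's residual spread."""
    x_new = np.atleast_2d(x_new)
    c = int(km.predict(x_new)[0])
    model, mask = local_models[c], km.labels_ == c
    pred = float(model.predict(x_new)[0])
    resid_std = float(np.std(y[mask] - model.predict(X[mask])))
    return pred, (pred - z * resid_std, pred + z * resid_std)

print(estimate_with_interval([6.0, 2.5]))
```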

19.
Integrating software components to produce large-scale software systems is an effective way to reuse experience and reduce cost. However, unexpected interactions among components when integrated into software systems are often the cause of failures. Discovering these composition errors early in the development process could lower the cost and effort in fixing them. This paper introduces a rigorous analysis approach to software design composition based on automated verification techniques. We show how to represent, instantiate and integrate design components, and how to find design composition errors using model checking techniques. We illustrate our approach with a Web-based hypermedia case study.

20.
An Empirical Study of Analogy-based Software Effort Estimation   (Total citations: 1; self-citations: 1; citations by others: 0)
Conventional approaches to software cost estimation have focused on algorithmic cost models, where an estimate of effort is calculated from one or more numerical inputs via a mathematical model. Analogy-based estimation has recently emerged as a promising approach, with comparable accuracy to algorithmic methods in some studies, and it is potentially easier to understand and apply. The current study compares several methods of analogy-based software effort estimation with each other and also with a simple linear regression model. The results show that people are better than tools at selecting analogues for the data set used in this study. Estimates based on their selections, with a linear size adjustment to the analogue's effort value, proved more accurate than estimates based on analogues selected by tools, and also more accurate than estimates based on the simple regression model.
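As a hedged sketch of analogy-based estimation with a linear size adjustment (the feature set, distance measure, and numbers below are assumptions for illustration, not the study's protocol):

```python
import math

# Each historical project: (features, size_in_function_points, effort_in_person_months)
history = [
    ({"team": 5, "duration": 8},  320, 40.0),
    ({"team": 3, "duration": 6},  150, 18.0),
    ({"team": 9, "duration": 12}, 700, 95.0),
]

def distance(a, b):
    """Euclidean distance over the shared numeric features."""
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

def estimate(target_features, target_size):
    """Pick the nearest analogue and scale its effort linearly by relative size."""
    feats, size, effort = min(history, key=lambda p: distance(target_features, p[0]))
    return effort * (target_size / size)

print(estimate({"team": 4, "duration": 7}, 200))
```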
