Similar Documents
19 similar documents found.
1.
Building enterprise reuse program: A model-based approach
Reuse is viewed as a realistically effective approach to solving the software crisis. For an organization that wants to build a reuse program, technical and non-technical issues must be considered in parallel. In this paper, a model-based approach to building a systematic reuse program is presented. Component-based reuse is currently the dominant approach to software reuse, and in this approach, building the right reusable component model is the first important step. In order to achieve systematic reuse, a set of component models should be built from different perspectives. Each of these models gives a specific view of the components so as to satisfy the different needs of the different people involved in the enterprise reuse program. Some component models for reuse already exist from technical perspectives, but less attention has been paid to reusable components from a non-technical view, especially from the view of process and management. In our approach, a reusable component model, the FLP model for reusable components, is introduced.

2.
A conventional ERP (Enterprise Resource Planning) system is based on a software development mode in which reuse can be achieved only at the class level. Such a mode results in inefficient development, low-quality software, and poor variability of the ERP system. By applying component technology, these problems are solved and the system becomes more reliable, reusable, extensible, and portable. In this paper, we first introduce the concept of a component and two popular component models, on which our own ERP component model is based. We then propose a layered, flexible, and extensible architecture for a component-based ERP system, in which components performing different functions at different layers can be conveniently extracted and encapsulated. Finally, we focus on the application of the proposed architecture, illustrating with examples how to extract, encapsulate, and assemble components into a complete ERP system.
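To make the extract-encapsulate-assemble cycle concrete, here is a minimal sketch of component encapsulation and assembly in Python. All names (Component, the two example components, the request format) are hypothetical illustrations, not the ERP component model proposed in the paper.

    from abc import ABC, abstractmethod

    class Component(ABC):
        """A unit of reuse exposing one well-defined service interface."""
        @abstractmethod
        def execute(self, request: dict) -> dict: ...

    class InventoryComponent(Component):
        def execute(self, request: dict) -> dict:
            # Encapsulated business-layer logic (stubbed here).
            return {"item": request["item"], "in_stock": True}

    class OrderComponent(Component):
        def __init__(self, inventory: Component):
            self.inventory = inventory  # assembled against an interface only

        def execute(self, request: dict) -> dict:
            stock = self.inventory.execute({"item": request["item"]})
            return {"order": request["item"], "accepted": stock["in_stock"]}

    # Assembly: wire components together through their interfaces.
    system = OrderComponent(InventoryComponent())
    print(system.execute({"item": "bolt-M6"}))  # {'order': 'bolt-M6', 'accepted': True}

Because each component is used only through its interface, a layer can be swapped or extended without touching its clients, which is the variability argument the abstract makes.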

3.
Since the early 1970s, tremendous growth has been seen in research on software reliability growth modeling. In general, software reliability growth models (SRGMs) are applicable to the late stages of testing in software development, and they can provide useful information about how to improve the reliability of software products. A number of SRGMs have been proposed in the literature to represent the time-dependent fault identification/removal phenomenon, and new models are still being proposed to fit a greater number of reliability growth curves. Often, it is assumed that detected faults are immediately corrected when mathematical models are developed. This assumption may not be realistic in practice because the time to remove a detected fault depends on the complexity of the fault, the skill and experience of the personnel, the size of the debugging team, the technique, and so on. Thus, a detected fault need not be immediately removed, and removal may lag the fault detection process by a delay effect factor. In this paper, we first review how different software reliability growth models have been developed in which the fault detection process depends not only on the residual fault content but also on the testing time, and we show how these models can be reinterpreted as delayed fault detection models by using a delay effect factor. Based on the power function of the testing time concept, we propose four new SRGMs that assume the presence of two types of faults in the software: leading and dependent faults. Leading faults are those that can be removed upon a failure being observed. Dependent faults, however, are masked by leading faults and can only be removed after the corresponding leading fault has been removed, with a debugging time lag. These models have been tested on real software error data to show their goodness of fit, predictive validity, and applicability.
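For orientation, a classic model with exactly this flavor (a well-known reference point, not one of the four models proposed in the paper) is the Yamada delayed S-shaped SRGM, in which fault removal lags detection and the expected number of faults removed by testing time t is

    m(t) = a\left(1 - (1 + bt)\,e^{-bt}\right),

where a is the expected total fault content and b the fault detection rate; the (1 + bt) factor is what delays the curve relative to the plain exponential model.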

4.
The advent of Web 2.0 has led to an increase in user-generated content on the Web. This has provided an extensive collection of free-style texts with opinion expressions that can influence the decisions and actions of their readers. Providers of such content exert a certain level of influence on its receivers, as is evident from blog sites affecting their readers' purchase decisions, political viewpoints, financial planning, and more. By detecting the opinions expressed, we can identify the sentiments on the topics discussed and the influence exerted on readers. In this paper, we introduce an automatic approach to deriving polarity pattern rules that detect sentiment polarity at the phrase level, and in addition consider the effects of the more complex relationships found between words on sentiment polarity classification. Recent sentiment analysis research has focused on the functional relations of words using typed dependency parsing, providing a refined analysis of the grammar and semantics of textual data. Heuristics are typically used to determine the typed dependency polarity patterns, but they may not comprehensively identify all possible rules. We study the use of class sequential rules (CSRs) to automatically learn the typed dependency patterns, and benchmark the performance of CSRs against a heuristic method. Preliminary results show that CSRs lead to further improvements in classification performance, achieving over 80% F1 scores in the test cases. In addition, we observe more complex relationships between words that can influence phrase sentiment polarity, and we discuss possible approaches to handling the effects of these complex relationships.
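As a rough illustration of what scoring a class sequential rule involves, the sketch below computes the support and confidence of one candidate rule over labeled sequences of typed-dependency tokens. The data, token encoding, and rule are hypothetical; the paper's actual CSR mining algorithm is not reproduced here.

    def is_subsequence(pattern, seq):
        """True if `pattern` occurs in `seq` as an ordered subsequence."""
        it = iter(seq)
        return all(token in it for token in pattern)

    def rule_stats(pattern, labeled_seqs, target_label):
        """Support and confidence of the rule `pattern -> target_label`."""
        covered = [lbl for seq, lbl in labeled_seqs if is_subsequence(pattern, seq)]
        support = len(covered) / len(labeled_seqs)
        confidence = covered.count(target_label) / len(covered) if covered else 0.0
        return support, confidence

    # Each training item: (sequence of typed-dependency tokens, phrase polarity).
    data = [
        (["nsubj:POS", "advmod:NEG"], "negative"),
        (["nsubj:POS", "amod:POS"], "positive"),
        (["advmod:NEG", "dobj:POS"], "negative"),
    ]
    print(rule_stats(["advmod:NEG"], data, "negative"))  # (0.666..., 1.0)

A CSR miner would enumerate candidate patterns and keep those whose support and confidence clear user-set thresholds, instead of hand-writing the patterns heuristically.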

5.
Several software reliability growth models (SRGMs) have been developed to monitor reliability growth during the testing phase of software development. Most of the existing research assumes that a similar testing effort is required for each debugging effort. In practice, however, different types of faults may require different amounts of testing effort for their detection and removal. Consequently, faults are classified into three categories on the basis of severity: simple, hard, and complex. This categorization may be extended to n types of faults on the basis of severity. Although some existing research has incorporated the idea that the fault removal rate (FRR) differs for different types of faults, it assumes that the FRR remains constant over the whole testing period. On the contrary, it has been observed that as testing progresses, the FRR changes with the testing strategy, skill, environment, and personnel resources. In this paper, a general discrete SRGM is proposed for errors of different severity in software systems using the change-point concept. The models are then formulated for two particular environments and validated on two real-life data sets. The results show a better fit and wider applicability of the proposed models to different types of failure data sets.
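In its simplest illustrative form (not the paper's exact discrete formulation), the change-point idea lets the fault removal rate switch value at a change point \tau:

    b(t) = \begin{cases} b_1, & t \le \tau \\ b_2, & t > \tau \end{cases}

so the phases before and after a shift in testing strategy, environment, or personnel are fitted with different rates rather than one constant FRR.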

6.
In this paper, we propose a new method of modeling temporal context to boost video annotation accuracy. The motivation for our idea comes mainly from the fact that temporally continuous shots in a video generally have related content, so video annotation performance can be boosted considerably by mining the temporal dependency between shots. Based on this consideration, we propose a temporal context model to mine the redundant information between shots. By connecting our model with a conditional random field and borrowing its learning and inference approaches, we obtain a refined probability of a concept occurring in a shot, which leverages both the temporal context and the initial output of video annotation. Compared with existing methods for mining temporal context in video annotation, our model captures different kinds of shot dependency more accurately and thus improves annotation performance. Furthermore, our model is relatively simple and efficient, which is important for applications that process large-scale data. Extensive experimental results on the widely used TRECVID datasets demonstrate the effectiveness of our method for improving video annotation accuracy.

7.
Software systems can be represented as complex networks, and their artificial nature can be investigated with approaches developed in network analysis. Influence maximization has been applied successfully to software networks to identify the important nodes that have the maximum influence on the other parts. However, it remains open to study how the network fabric affects the influence behavior of the highly influential nodes. In this paper, we construct class dependence graph (CDG) networks based on eight practical Java software systems, and apply the procedure of influence maximization to study empirically the correlations between the characteristics of maximum influence and the degree distributions in the software networks. We demonstrate that the artificial nature of CDG networks is reflected partly in their scale-free behavior: the in-degree distribution follows a power law, and the out-degree distribution is lognormal. As for influence behavior, the expected influence spread of the maximum influence set identified by the greedy method correlates significantly with the degree distributions. In addition, the identified influence set contains influential classes that are complex in both the number of methods and the lines of code (LOC). For applications in software engineering, the results suggest new approaches to designing optimization procedures for software systems.
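For readers unfamiliar with the greedy method referred to above, here is a minimal sketch of greedy influence maximization under the independent cascade model, estimated by Monte Carlo simulation. The toy graph, propagation probability, and seed count are hypothetical; the paper's exact experimental setup may differ.

    import random

    def simulate_ic(graph, seeds, p=0.1):
        """One independent-cascade run; returns the set of activated nodes."""
        active, frontier = set(seeds), list(seeds)
        while frontier:
            node = frontier.pop()
            for nbr in graph.get(node, []):
                if nbr not in active and random.random() < p:
                    active.add(nbr)
                    frontier.append(nbr)
        return active

    def expected_spread(graph, seeds, runs=1000):
        return sum(len(simulate_ic(graph, seeds)) for _ in range(runs)) / runs

    def greedy_max_influence(graph, k):
        """Pick k seeds, each round adding the node with best marginal gain."""
        seeds = set()
        for _ in range(k):
            best = max((n for n in graph if n not in seeds),
                       key=lambda n: expected_spread(graph, seeds | {n}))
            seeds.add(best)
        return seeds

    # Toy class-dependence graph: an edge points from a class to its dependents.
    cdg = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": [], "E": []}
    print(greedy_max_influence(cdg, 2))  # typically {'A', 'C'}, the widest reach

The correlation studied in the paper is between expected_spread of the selected set and the in-/out-degree distributions of the CDG.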

8.
In this work we establish an equivalent circuit model to analyze the resonance of a metamaterial, taking into account the loss of each unit cell and the coupling effect between cells. From this model, we find that metamaterials can be divided into three categories, weak, critical, and strong coupling, depending on the values of the loss and coupling strength, each presenting different resonant properties. The physical basis for this division is whether the loss in each unit cell can be offset by energy coupled from adjacent unit cells. Full-wave electromagnetic simulations have also been carried out to verify the equivalent circuit analysis. Our circuit analysis provides a simple and effective way to understand coupling in metamaterials and gives guidance for their analysis and design.
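For orientation, the textbook relations behind such a model (generic series-RLC facts, not the paper's specific circuit or parameter values): a lossy unit cell modeled as a series RLC resonator has

    \omega_0 = \frac{1}{\sqrt{LC}}, \qquad Q = \frac{1}{R}\sqrt{\frac{L}{C}},

so larger resistance R (loss) lowers the quality factor Q, and whether a sharp resonance survives depends on how this loss compares with the energy fed in through inter-cell coupling, which is exactly the weak/critical/strong distinction drawn above.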

9.
In this paper, we review the concept of contextual probability, the resulting notion of neighbourhood counting, and the various specialisations of this notion, which result in new functions for measuring similarity, such as all common subsequences. We also provide new results on the generalisation of the all common subsequences similarity. Contextual probability was originally proposed as an alternative way of reasoning. It was later found to be an alternative way of estimating probability, and it led to the introduction of the neighbourhood counting notion. This notion was then found to be a generic similarity metric that can be applied to different types of data.
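As a concrete illustration of an all-common-subsequences style similarity, the sketch below counts pairs of equal subsequences of two strings by dynamic programming. This counts matchings with multiplicity; the exact definition and the generalisation studied in the paper may differ.

    def all_common_subsequences(s, t):
        """Count pairs of equal subsequences of s and t; the empty
        subsequence contributes one to the count."""
        m, n = len(s), len(t)
        # f[i][j]: count for the prefixes s[:i], t[:j]; borders are 1
        # because only the empty subsequence matches an empty prefix.
        f = [[1] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                # Matchings that avoid s[i-1], plus those that avoid
                # t[j-1], minus the double-counted ones avoiding both.
                f[i][j] = f[i - 1][j] + f[i][j - 1] - f[i - 1][j - 1]
                if s[i - 1] == t[j - 1]:
                    # Matchings that pair s[i-1] with t[j-1].
                    f[i][j] += f[i - 1][j - 1]
        return f[m][n]

    print(all_common_subsequences("ab", "ba"))  # 3: "", "a", "b"

The count grows with how much structure the two sequences share, which is what makes it usable as a similarity function.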

10.
This paper presents a new model of scenarios, dedicated to the specification and verification of system behaviours in the context of software product lines (SPL). We draw our inspiration from techniques mostly used in the hardware community, and we show how they can be applied to the verification of software components. We point out the benefits of synchronous languages and models in bridging the gap between the two worlds.

11.
Coupling represents the degree of interdependence between two software components. Understanding software dependency is directly related to improving software understandability, maintainability, and reusability. In this paper, we analyze the difference between component coupling and component dependency, and introduce a two-parameter component coupling metric and a three-parameter component dependency metric. An important parameter in both metrics is the coupling distance, which represents the relevance of two coupled components. The metrics are applicable to layered component-based software and can represent the dependencies induced by all types of software coupling. We show how to determine the coupling and dependency of software components of all scales using these metrics, and then apply them to Apache HTTP, an open-source web server. The study shows that coupling distance is related to the number of modifications of a component, which is an important indicator of component fault rate, stability, and, subsequently, component complexity.

Liguo Yu received the Ph.D. degree in Computer Science from Vanderbilt University. He is an assistant professor in the Computer and Information Sciences Department at Indiana University South Bend. Before joining IUSB, he was a visiting assistant professor at Tennessee Technological University. His research concentrates on software coupling, software maintenance, software reuse, software testing, software management, and open-source software development. Kai Chen received the Ph.D. degree from the Department of Electrical Engineering and Computer Science at Vanderbilt University. He is working at Google Inc. His current research interests include development and maintenance of open-source software, embedded software design, component-based design, model-based design, formal methods, and model verification. Srini Ramaswamy earned his Ph.D. degree in Computer Science in 1994 from the Center for Advanced Computer Studies (CACS) at the University of Southwestern Louisiana (now University of Louisiana at Lafayette). His research interests include intelligent and flexible control systems, behavior modeling, analysis and simulation, and software stability and scalability. He is currently the Chairperson of the Department of Computer Science, University of Arkansas at Little Rock. Before joining UALR, he was the chairman of the Computer Science Department at Tennessee Tech University. He is a member of the Association for Computing Machinery, the Society for Computer Simulation International, and Computing Professionals for Social Responsibility, and a senior member of the IEEE.

12.
Component-based software evolves faster than traditional software systems, so effectively measuring its changes benefits later maintenance activities. In this paper, component-based software systems are modeled with an improved component dependence graph, covering both components whose code is visible and components whose code is not. The risk introduced by component changes is analyzed in two steps: first, the change risk of a single component is measured on the basis of its computed change proportion; then the risk that a set of changed components poses to the system is computed by transforming the component dependence graph into a component dependence tree. In addition, several properties of the proposed change-risk metrics are derived through the analysis of an example system.

13.
This paper reports on the initial experimental evaluation of ROPE (reusability-oriented parallel programming environment), a software component reuse system. ROPE helps the designer find and understand components by using a new classification method called structured relational classification. ROPE is part of a development environment for parallel programs that uses a declarative/hierarchical graphical programming interface. This interface allows the use of components at different levels of abstraction, ranging from design units to actual code modules. ROPE supports reuse of all the component types defined in the development environment. Programs developed with the aid of ROPE were found to have error rates far lower than those developed without it.

14.
A slicing-based framework for measuring coupling in Java
In research on object-oriented metrics, some characteristics have been measured with simple statistical methods and information-source-based methods, for example the basic metrics, the CK metrics, and the Aoki metrics. This paper adopts a program-slicing-based approach to measuring coupling in Java: by measuring the coupling relationships present in Java source programs, it obtains a coupling metric that is more precise than the traditional methods.

15.
Software reuse is important, especially product reuse. This paper describes a retrieval system for software components, the most popular form of product reuse. The system is distributed, embedded in the web, and based on structured retrieval using a classification schema. After defining the requirements for the system, we first discuss the advanced external functionality of the component retrieval system, such as its multi-paradigmatic classification approach, the ability to extend or change the schema, the navigational facility through different views, and the system's interface to search engines. Then the most interesting aspects of the system's realization are discussed, such as dynamic web page generation and personalization, how the specific environments for different roles are built, how schema modification is handled, and how the system's own design was driven by reusable software. Some measurements of the system's external behavior and its convenience for users are given.

16.
Component-Based Development (CBD) is revolutionizing the process of building applications by assembling pre-built reusable components. Components should be designed for inter-organizational reuse, rather than only intra-organizational reuse, through domain analysis that captures the commonality of the target domain. Moreover, the minor variations within that commonality should also be modeled and reflected in the design of components so that family members can effectively customize the components for their own purposes. To carry out domain analysis effectively and design widely reusable components, variability-related terms must be defined precisely and variability types classified. In this paper, we identify the fundamental difference between conventional variability and component variability, and present five types of variability and three kinds of variability scope. For each type of variability, its applicable situations and guidelines are defined precisely. With such a formal view of variability, not only domain analysis but also component customization can be carried out effectively and precisely.

17.
Quality Impacts of Clandestine Common Coupling
The increase in software maintenance and the increased amount of reuse are having major positive impacts on software quality, but they are also introducing some rather subtle negative impacts. Instead of talking about existing problems (faults), developers now discuss potential problems, that is, aspects of the program that do not affect quality initially but could have deleterious consequences when the software goes through maintenance or reuse. One type of potential problem is common coupling, which, unlike other types of coupling, can be clandestine: the number of instances of common coupling between a module M and the other modules can change without any explicit change to M. This paper presents results from a study of clandestine common coupling in 391 versions of Linux. Specifically, the common coupling between each of 5332 kernel modules and the rest of the product as a whole was measured. In more than half of the new versions, a change in common coupling was observed even though none of the modules themselves was changed. In most cases where this clandestine common coupling was observed, the number of instances of common coupling increased. These results provide yet another reason for discouraging the use of common coupling in software products.
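To make the mechanism concrete, here is a minimal, hypothetical Python sketch of common coupling (the study itself concerns C kernel modules): a global shared by several "modules" lets a later addition change module M's coupling count, and even its behaviour, without touching M's code.

    import io

    # Globally visible shared data: the source of common coupling.
    settings = {"buffer_size": 4096}

    def module_m_read(stream):
        # Module M references the global: one instance of common coupling.
        return stream.read(settings["buffer_size"])

    def module_new_tune():
        # A module added in a later version also touches the global.
        # M's common-coupling count has changed with no edit to M:
        # this is the clandestine effect the paper measures.
        settings["buffer_size"] = 512

    print(len(module_m_read(io.BytesIO(b"x" * 8192))))  # 4096
    module_new_tune()
    print(len(module_m_read(io.BytesIO(b"x" * 8192))))  # 512, M silently altered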

18.
Reuse is viewed as a realistically effective approach to solving the software crisis. For an organization that wants to build a reuse program, technical and non-technical issues must be considered in parallel. In this paper, a model-based approach to building a systematic reuse program is presented. Component-based reuse is currently the dominant approach to software reuse, and in this approach, building the right reusable component model is the first important step. In order to achieve systematic reuse, a set of component models should be built from different perspectives. Each of these models gives a specific view of the components so as to satisfy the different needs of the different people involved in the enterprise reuse program. Some component models for reuse already exist from technical perspectives, but less attention has been paid to reusable components from a non-technical view, especially from the view of process and management. In our approach, a reusable component model, the FLP model for reusable components, is introduced. This model describes components along three dimensions (Form, Level, and Presentation) and views components and their relationships from the perspective of process and management. It determines the sphere of reusable components, the points in the development process at which components are reused, and the means needed to present components in terms of abstraction level, logical granularity, and presentation media. Being the basis on which management and technical decisions are made, our model serves as the kernel model for initializing and normalizing a systematic enterprise reuse program.

19.
Today, information systems (IS) are often distributed and heterogeneous; software systems are therefore becoming more and more complex, and their evolution is difficult to manage. Our work deals with the reuse-based engineering of heterogeneous distributed systems. Such systems need a distributed, adaptable software architecture to be implemented. In this paper, we propose a Model Driven Architecture (MDA)-inspired approach to developing adaptable software. First, we briefly present the component paradigm in which we place our work. Then, we position our component model with regard to related work. In our component model, the interface of a component is described by way of points of interaction, which are used to manage different types of interactions between components. The components and the interactions make up a new core model, from which we can build an application model, represented by a graph of interactions, that allows the integration of the reused components. We finish with the implementation of the application model on the distributed adaptable software architecture. Each part of this paper is illustrated with a concrete case, the European Aero user-friendly SIMulation-based dIstance Learning (ASIMIL) project.
