Similar Documents
1.

Context

Several metrics have been proposed to measure the extent to which class members are related. Connectivity-based class cohesion metrics measure the degree of connectivity among the class members.

Objective

We propose a new class cohesion metric that has higher discriminative power than any of the existing cohesion metrics. In addition, we empirically compare the connectivity and non-connectivity-based cohesion metrics.

Method

The proposed class cohesion metric is based on counting the number of possible paths in a graph that represents the connectivity pattern of the class members. We theoretically and empirically validate this path connectivity class cohesion (PCCC) metric. The empirical validation compares seven connectivity-based metrics, including PCCC, and 11 non-connectivity-based metrics in terms of discriminative and fault detection powers. The discriminative-power study explores the probability that a cohesion metric will incorrectly determine classes to be cohesively equal when they have different connectivity patterns. The fault detection study investigates whether connectivity-based metrics, including PCCC, better explain the presence of faults from a statistical standpoint in comparison to other non-connectivity-based cohesion metrics, considered individually or in combination.
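For illustration, the following is a minimal sketch of the path-counting idea, assuming the class is represented as an undirected graph whose nodes are class members and whose edges are reference relations; the example graph and the exhaustive depth-first search are illustrative, not the authors' implementation.

```python
from itertools import permutations

def count_simple_paths(graph):
    """Count simple (cycle-free) paths between every ordered pair of
    nodes in a member-connectivity graph given as an adjacency dict."""
    def dfs(node, target, visited):
        if node == target:
            return 1
        return sum(dfs(nxt, target, visited | {nxt})
                   for nxt in graph[node] if nxt not in visited)

    return sum(dfs(a, b, {a}) for a, b in permutations(graph, 2))

# Hypothetical class: methods m1-m3 linked through shared attribute use.
connectivity = {"m1": {"m2"}, "m2": {"m1", "m3"}, "m3": {"m2"}}
print(count_simple_paths(connectivity))  # 6 -- more paths suggest higher cohesion
```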

Results

The theoretical validation demonstrates that PCCC satisfies the key cohesion properties. The results of the empirical studies indicate that, in contrast to other connectivity-based cohesion metrics, PCCC is much better than any comparable cohesion metric in terms of its discriminative power. In addition, the results also indicate that PCCC measures cohesion aspects that are not captured by other metrics, wherein it is considerably better than other connectivity-based metrics but slightly worse than some other non-connectivity-based cohesion metrics in terms of its ability to predict faulty classes.

Conclusion

PCCC is more useful in practice for applications in which practitioners need to distinguish between the quality of different classes or the quality of different implementations of the same class.

2.
A methodology to assess the impact of design patterns on software quality

Context

Software quality is considered to be one of the most important concerns of software production teams. Additionally, design patterns are documented solutions to common design problems that are expected to enhance software quality. Until now, results on the effect of design patterns on software quality have been controversial.

Aims

This study aims to propose a methodology for comparing design patterns to alternative designs with an analytical method. Additionally, the study illustrates the methodology by comparing three design patterns with two alternative solutions, with respect to several quality attributes.

Method

The paper introduces a theoretical/analytical methodology to compare sets of “canonical” solutions to design problems. The study is theoretical in the sense that the solutions are disconnected from real systems, even though they stem from concrete problems. The study is analytical in the sense that the solutions are compared based on their possible numbers of classes and on equations representing the values of the various structural quality attributes as a function of these numbers of classes. The exploratory designs have been produced by studying the literature, by investigating open-source projects and by using design patterns. In addition, we have created a tool that helps practitioners choose the optimal design solution according to their specific needs.
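As a hedged illustration of the analytical comparison, the sketch below pits a hypothetical coupling equation for a pattern-based design against one for an ad-hoc alternative and locates the class-count threshold at which the pattern becomes preferable; both equations are invented for the example and are not taken from the study.

```python
def pattern_coupling(n):
    # Hypothetical: the pattern adds fixed scaffolding classes, but each of
    # the n variant classes couples only to one abstraction (linear growth).
    return 3 + n

def alternative_coupling(n):
    # Hypothetical: the ad-hoc alternative couples variants to each other,
    # so coupling grows faster than linearly with n.
    return 0.5 * n * n

# Threshold number of classes beyond which the pattern yields less coupling.
threshold = next(n for n in range(1, 100)
                 if pattern_coupling(n) < alternative_coupling(n))
print(threshold)  # 4 under these invented equations
```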

Results

The results of our research suggest that the decision to apply a design pattern is usually a trade-off, because patterns are not universally good or bad. Patterns typically improve certain aspects of software quality while weakening others.

Conclusions

In conclusion, the proposed methodology is applicable for comparing patterns with alternative designs, and it highlights thresholds beyond which a design pattern becomes more or less beneficial than the alternative design. More specifically, identifying such thresholds can be very useful for decision making during system design and refactoring.

3.
4.

Context

An optimal software development process is regarded as being dependent on the situational characteristics of individual software development settings. Such characteristics include the nature of the application(s) under development, team size, requirements volatility and personnel experience. However, no comprehensive reference framework of the situational factors affecting the software development process is presently available.

Objective

The absence of such a comprehensive reference framework of the situational factors affecting the software development process is problematic not just because it inhibits our ability to optimise the software development process, but perhaps more importantly, because it potentially undermines our capacity to ascertain the key constraints and characteristics of a software development setting.

Method

To address this deficiency, we have consolidated a substantial body of related research into an initial reference framework of the situational factors affecting the software development process. To support the data consolidation, we have applied rigorous data coding techniques from Grounded Theory and we believe that the resulting framework represents an important contribution to the software engineering field of knowledge.

Results

The resulting reference framework of situational factors consists of eight classifications and 44 factors that inform the software process. We believe that the situational factor reference framework presented herein represents a sound initial reference framework for the key situational elements affecting the software process definition.

Conclusion

In addition to providing a useful reference listing for the research community and for committees engaged in the development of standards, the reference framework also provides support for practitioners who are challenged with defining and maintaining software development processes. Furthermore, this framework can be used to develop a profile of the situational characteristics of a software development setting, which in turn provides a sound foundation for software development process definition and optimisation.

5.

Context

Modern software engineering demands that professionals and researchers proactively and collectively explore and experiment with viable, valuable mechanisms for uncovering degenerative bugs, security holes, and possible deviations at the earliest stage. Recognizing this need, we introduce a novel methodology for estimating the defect proneness of class structures in object-oriented (OO) software systems at the design stage.

Objective

The objective of this work is to develop an estimation model that provides a significant assessment of the defect proneness of object-oriented software packages at the design phase of the software development life cycle (SDLC). This framework enhances the efficiency of the SDLC through design-quality improvement.

Method

The work follows a data-driven methodology based on an empirical study of the relationship between design parameters and defect proneness. In the first phase, a mapping between the design metrics and the normal occurrence pattern of defects is carried out. This mapping is represented as a set of non-linear multifunctional regression equations that reflect the influence of individual design metrics on defect proneness. The defect proneness estimation model is then generated as a weighted linear combination of these multifunctional regression equations, with the weighting coefficients evaluated through the GQM (Goal Question Metric) paradigm.
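A minimal sketch of such a weighted combination is shown below; the per-metric regression forms and the GQM-derived weights are placeholders invented for illustration, not the fitted equations from the study.

```python
import math

# Hypothetical per-metric regression functions mapping a design-metric
# value to a partial defect-proneness score (functional forms invented).
regressions = {
    "coupling":   lambda x: 1 - math.exp(-0.3 * x),  # saturating growth
    "complexity": lambda x: (x / 50.0) ** 2,         # quadratic growth
    "cohesion":   lambda x: 1 - x,                   # higher cohesion, lower risk
}

# Weighting coefficients assumed to come from a GQM-style elicitation.
weights = {"coupling": 0.5, "complexity": 0.3, "cohesion": 0.2}

def defect_proneness(metrics):
    """Weighted linear combination of the per-metric regression outputs."""
    return sum(weights[m] * regressions[m](v) for m, v in metrics.items())

print(defect_proneness({"coupling": 6.0, "complexity": 20.0, "cohesion": 0.7}))
```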

Results

The model was evaluated and validated on a selected set of cases, with promising results. The current study successfully dealt with three projects and opens up the opportunity to extend the approach to a wide range of projects across industries.

Conclusion

Defect proneness estimation at the design stage provides effective feedback to the design architect, enabling defects in the affected modules to be identified and reduced appropriately. This results in a considerable improvement in software design, leading to cost-effective products.

6.

Context

Specification matching techniques are crucial for effective retrieval processes. Despite the prevalence of object-oriented methodologies, little attention has been given to matching Unified Modeling Language (UML) specifications.

Objective

This paper presents a two-stage framework for matching two UML specifications and quantifying the results based on the systematic integration of their structural and behavioral similarities in order to identify the candidate component set for reuse.

Method

The first stage in the framework is an evaluation of the similarities between UML class diagrams using the Structure-Mapping Engine (SME), a simulation of the analogical reasoning approach known as the structure-mapping theory. The second stage, performed on the components identified in the first stage, is based on a graph-similarity scoring algorithm in which UML class diagrams and sequence diagrams are transformed into an SME representation and a Message-Object-Order Graph (MOOG). The effectiveness of the proposed framework was evaluated using a case study.
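The second-stage scoring can be pictured with the toy graph-similarity function below, which takes class diagrams as adjacency dicts and blends node and edge overlap; this Jaccard-style stand-in only gestures at the SME/MOOG machinery, and the equal weighting is an assumption.

```python
def graph_similarity(g1, g2):
    """Jaccard overlap of node and edge sets -- a toy stand-in for the
    graph-similarity scoring applied to the SME/MOOG representations."""
    n1, n2 = set(g1), set(g2)
    e1 = {(a, b) for a in g1 for b in g1[a]}
    e2 = {(a, b) for a in g2 for b in g2[a]}
    node_sim = len(n1 & n2) / len(n1 | n2)
    edge_sim = len(e1 & e2) / len(e1 | e2) if e1 | e2 else 1.0
    return 0.5 * node_sim + 0.5 * edge_sim  # equal weighting assumed

# Two tiny class diagrams: class -> set of associated classes.
candidate = {"Order": {"Customer"}, "Customer": set()}
query = {"Order": {"Customer", "Invoice"}, "Customer": set(), "Invoice": set()}
print(graph_similarity(candidate, query))  # ~0.58
```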

Results

The experimental results showed a reduction in potential mismatches and an overall high precision and recall.

Conclusion

It is concluded that the two-stage framework is capable of performing more precise matching compared to other single-stage matching frameworks. Moreover, the two-stage framework could be utilized within a reuse process, bypassing the need for extra information for retrieval of the components described by UML.

7.

Context

A particular strength of agile systems development approaches is that they encourage a move away from ‘introverted’ development, involving the customer in all areas of development and leading to a more innovative, and hence more valuable, information system. However, a move toward open innovation requires a focus that goes beyond a single customer representative, involving a broader range of stakeholders, both inside and outside the organisation, in a continuous, systematic way.

Objective

This paper provides an in-depth discussion of the applicability and implications of open innovation in an agile environment.

Method

We draw on two illustrative cases from industry.

Results

We highlight some distinct problems that arose when two project teams tried to combine agile and open innovation principles. For example, openness is often compromised by a perceived competitive element and a lack of transparency between business units. In addition, minimal documentation often reduces effective knowledge transfer, while the use of short iterations, stand-up meetings and the presence of an on-site customer reduce the amount of time available for sharing ideas outside the team.

Conclusion

A clear understanding of the inter- and intra-organisational applicability and implications of open innovation in agile systems development is required to address key challenges for research and practice.

8.

Context

In training disciplined software development, the PSP is said to result in such effects as increased estimation accuracy, better software quality, earlier defect detection, and improved productivity. However, the existing literature offers few systematic mechanisms that can be easily adopted to assess and interpret these PSP effects.

Objective

The purpose of this study is to explore the possibility of devising a feasible assessment model that ties critical software engineering values to the pertinent PSP metrics.

Method

A systematic review of the literature was conducted to establish such an assessment model (which we call the Plan-Track-Review model). Both mean and median approaches, along with a set of simplified procedures, were used to assess the commonly accepted PSP training effects. A series of statistical analyses then followed to increase understanding of the relationships among the PSP metrics and to help interpret the application results.
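For instance, one of the simplified mean/median procedures might look like the sketch below, which compares a PSP metric (size-estimation error) across early and late assignments for a single engineer; the data and the improvement criterion are invented for illustration.

```python
from statistics import mean, median

# Size-estimation error (%) per PSP assignment for one hypothetical student.
early = [45.0, 38.0, 30.0]  # assignments 1-3
late = [22.0, 18.0, 25.0]   # assignments 8-10

def improved(before, after, agg):
    """Training effect under a given aggregate (mean or median): lower
    estimation error in the later assignments counts as improvement."""
    return agg(after) < agg(before)

print(improved(early, late, mean), improved(early, late, median))
```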

Results

Based on the results of this study, the PSP training effect on the controllability, manageability, and reliability of a software engineer is quite positive and largely consistent with the literature. However, its effect on one’s ability to predict project outcomes in general (and project size in particular) is not as evident as the literature suggests. As for one’s overall project efficiency, our results show a moderate improvement. Our initial findings also suggest that the effect of an earlier PSP stage could have an impact on later-stage training outcomes.

Conclusion

It is concluded that this Plan-Track-Review model, with its associated framework, can be used to assess the PSP effect on disciplined software development. The generated summary report provides useful feedback for both PSP instructors and students based on internal as well as external standards.

9.

Context

Source code revision control systems contain vast amounts of data that can be exploited for various purposes. For example, the data can be used as a base for estimating future code maintenance effort in order to plan software maintenance activities. Previous work has extensively studied the use of metrics extracted from object-oriented source code to estimate future coding effort. In comparison, the use of other types of metrics for this purpose has received significantly less attention.

Objective

This paper applies machine learning techniques to unveil predictors of yearly cumulative code churn of software projects on the basis of metrics extracted from revision control systems.

Method

The study is based on a collection of object-oriented code metrics, XML code metrics, and organisational metrics. Several models are constructed with different subsets of these metrics. The predictive power of these models is analysed based on a dataset extracted from eight open-source projects.
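A sketch of this model-subset comparison using scikit-learn is shown below; the feature names, the synthetic data, and the choice of a random-forest regressor are all assumptions for illustration rather than the study's actual setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 120  # observations (e.g. project-year rows); all data is synthetic

code_metrics = rng.normal(size=(n, 4))  # e.g. WMC, CBO, LCOM, XML depth
org_metrics = rng.normal(size=(n, 3))   # e.g. #authors, team churn, commits
churn = (org_metrics @ [3.0, 2.0, 1.0]
         + code_metrics @ [1.0, 0.5, 0.2, 0.1]
         + rng.normal(scale=0.5, size=n))  # yearly cumulative churn (synthetic)

for name, X in [("code only", code_metrics),
                ("org only", org_metrics),
                ("combined", np.hstack([code_metrics, org_metrics]))]:
    score = cross_val_score(RandomForestRegressor(random_state=0), X, churn,
                            cv=5, scoring="r2").mean()
    print(f"{name}: R^2 = {score:.2f}")
```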

Results

The study shows that a code churn estimation model built purely with organisational metrics is superior to one built purely with code metrics. However, a combined model provides the highest predictive power.

Conclusion

The results suggest that code metrics in general, and XML metrics in particular, are complementary to organisational metrics for the purpose of estimating code churn.  相似文献   

10.
11.

Context

Writing software for the current generation of parallel systems requires significant programmer effort, and the community is seeking alternatives that reduce effort while still achieving good performance.

Objective

Measure the effect of parallel programming models (message-passing vs. PRAM-like) on programmer effort.

Design, setting, and subjects

One group of subjects implemented sparse-matrix dense-vector multiplication using message-passing (MPI), and a second group solved the same problem using a PRAM-like model (XMTC). The subjects were students in two graduate-level classes: one class was taught MPI and the other was taught XMTC.

Main outcome measures

Development time, program correctness.

Results

Mean XMTC development time was 4.8 h less than mean MPI development time (95% confidence interval, 2.0-7.7), a 46% reduction. XMTC programs were more likely to be correct, but the difference in correctness rates was not statistically significant (p = .16).
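As a worked example of the statistics reported here, the sketch below computes a Welch t-test and a 95% confidence interval for the mean difference between two development-time samples; the numbers are synthetic, not the study's data.

```python
import numpy as np
from scipy import stats

# Synthetic development times in hours (illustrative, not the study's data).
mpi = np.array([12.5, 9.0, 14.0, 11.5, 10.0, 13.0])
xmtc = np.array([6.0, 5.5, 8.0, 7.0, 4.5, 6.5])

diff = mpi.mean() - xmtc.mean()
t, p = stats.ttest_ind(mpi, xmtc, equal_var=False)  # Welch's t-test

# Welch-Satterthwaite degrees of freedom and the 95% CI for the difference.
v1 = mpi.var(ddof=1) / len(mpi)
v2 = xmtc.var(ddof=1) / len(xmtc)
df = (v1 + v2) ** 2 / (v1 ** 2 / (len(mpi) - 1) + v2 ** 2 / (len(xmtc) - 1))
lo, hi = stats.t.interval(0.95, df, loc=diff, scale=np.sqrt(v1 + v2))
print(f"mean saving {diff:.1f} h, 95% CI ({lo:.1f}, {hi:.1f}), p = {p:.3f}")
```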

Conclusions

XMTC solutions for this particular problem required less effort than MPI equivalents, but further studies examining different types of problems and different levels of programmer experience are necessary.

12.

Context

Automated static analysis (ASA) identifies potential source code anomalies early in the software development lifecycle that could lead to field failures. Excessive alert generation and a large proportion of unimportant or incorrect alerts (unactionable alerts) may cause developers to reject the use of ASA. Techniques that identify anomalies important enough for developers to fix (actionable alerts) may increase the usefulness of ASA in practice.

Objective

The goal of this work is to synthesize available research results to inform evidence-based selection of actionable alert identification techniques (AAIT).

Method

Relevant studies about AAITs were gathered via a systematic literature review.

Results

We selected 21 peer-reviewed studies of AAITs. The techniques use alert type selection; contextual information; data fusion; graph theory; machine learning; mathematical and statistical models; or dynamic detection to classify and prioritize actionable alerts. All of the AAITs are evaluated via an example with a variety of evaluation metrics.

Conclusion

The selected studies support, with varying strength, the premise that the effective use of ASA is improved by supplementing ASA with an AAIT. Seven of the 21 selected studies reported the precision of the proposed AAITs. The two studies with the highest precision built models using the subject program’s history. Precision measures how well a technique identifies true actionable alerts out of all predicted actionable alerts. Precision does not measure the number of actionable alerts missed by an AAIT or how well an AAIT identifies unactionable alerts. Inconsistent use of evaluation metrics, subject programs, and ASAs in the selected studies precludes meta-analysis and prevents the current results from informing evidence-based selection of an AAIT. We propose building an actionable alert identification benchmark for comparing and evaluating the AAITs from the literature on a standard set of subjects, utilizing a common set of evaluation metrics.

13.

Context

Testing is an essential part of the development life-cycle of any software product. While most phases of data warehouse design have received considerable attention in the literature, not much has been written about data warehouse testing.

Objective

In this paper we propose a comprehensive approach to testing data warehouse systems. Its main features are earliness with respect to the life-cycle, modularity, tight coupling with design, scalability, and measurability through proper metrics.

Method

We introduce a number of specific testing activities, we classify them in terms of what is tested and how it is tested, and we show how they can be framed within a prototype-based methodology. We apply our approach to a real case study for a large retail company.

Results

The case study we faced, based on an iterative prototype-based medium-size project, confirmed the validity of our approach. In particular, the main benefits were obtained in terms of project transparency, coordination of the development team, and organization of design activities.

Conclusion

Though some general-purpose testing techniques can be applied to data warehouse projects, the effectiveness of testing can be largely improved by applying specifically-devised techniques and metrics.

14.

Context

Component identification, the process of evolving legacy systems into finely organized component-based software systems, is a critical part of software reengineering. Currently, many component identification approaches are based on agglomerative hierarchical clustering algorithms. However, there has been little thorough investigation into which algorithms are appropriate for component identification.

Objective

This paper focuses on analyzing agglomerative hierarchical clustering algorithms in software reengineering, and then identifying their respective strengths and weaknesses in order to apply them effectively for future practical applications.

Method

A series of experiments were conducted for 18 clustering strategies combined according to various similarity measures, weighting schemes and linkage methods. Eleven subject systems with different application domains and source code sizes were used in the experiments. The component identification results are evaluated by the proposed size, coupling and cohesion criteria.
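A few of those strategy combinations can be reproduced in miniature with SciPy, as sketched below on an invented entity-by-feature matrix; the measures, linkages and cut level shown are a subset chosen for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Toy matrix: rows = legacy program entities, cols = binary feature usage.
X = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 1, 1, 1],
              [1, 0, 0, 0]], dtype=float)

# A few strategy combinations (similarity measure x linkage method).
for metric in ("jaccard", "cosine"):
    for method in ("single", "complete", "average"):
        Z = linkage(pdist(X, metric=metric), method=method)
        labels = fcluster(Z, t=2, criterion="maxclust")  # cut into 2 components
        print(metric, method, labels)
```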

Results

The experimental results suggested that the employed similarity measures, weighting schemes and linkage methods can have various effects on component identification results with respect to the proposed size, coupling and cohesion criteria, so the hierarchical clustering algorithms produced quite different clustering results.

Conclusions

According to the experimental results, it can be concluded that no single clustering algorithm produces perfectly satisfactory results. Nevertheless, these algorithms demonstrated varied capabilities to identify components with respect to the proposed size, coupling and cohesion criteria.

15.

Context

There is surprisingly little empirical software engineering research (ESER) that has analysed and reported the rich, fine-grained behaviour of phenomena over time using qualitative and quantitative data. The ESER community also increasingly recognises the need to develop theories of software engineering phenomena e.g. theories of the actual behaviour of software projects at the level of the project and over time.

Objective

To examine the use of the longitudinal, chronological case study (LCCS) as a research strategy for investigating the rich, fine-grained behaviour of phenomena over time using qualitative and quantitative data.

Method

Review the methodological literature on longitudinal case study. Define the LCCS and demonstrate the development and application of the LCCS research strategy to the investigation of Project C, a software development project at IBM Hursley Park. Use the study to consider prospects for LCCSs, and to make progress on a theory of software project behaviour.

Results

LCCSs appear to provide insights that are hard to achieve using existing research strategies, such as the survey study. The LCCS strategy requires that data be time-indexed, relatively fine-grained, and collected contemporaneously with the events to which the data refer. Preliminary progress is made on a theory of software project behaviour.

Conclusion

LCCS appears well suited to analysing and reporting rich, fine-grained behaviour of phenomena over time.

16.

Context

Software maintenance is an important software engineering activity that has been reported to account for the majority of the software total cost. Thus, understanding the factors that influence the cost of software maintenance tasks helps maintainers to make informed decisions about their work.

Objective

This paper describes a controlled experiment of student programmers performing maintenance tasks on a C++ program. The objective of the study is to assess the maintenance size, effort, and effort distributions of three different maintenance types and to describe estimation models to predict the programmer’s effort spent on maintenance tasks.

Method

Twenty-three graduate students and a senior majoring in computer science participated in the experiment. Each student was asked to perform maintenance tasks required for one of the three task groups. The impact of different LOC metrics on maintenance effort was also evaluated by fitting the data collected into four estimation models.

Results

The results indicate that corrective maintenance is much less productive than enhancive and reductive maintenance and that program comprehension activities require as much as 50% of the total effort in corrective maintenance. Moreover, the best software effort model can estimate the time of 79% of the programmers with an error of 30% or less.

Conclusion

Our study suggests that the LOC added, modified, and deleted metrics are good predictors for estimating the cost of software maintenance. Effort estimation models for maintenance work may use the LOC added, modified, and deleted metrics as independent parameters instead of the simple sum of the three, as sketched below. Another implication is that reducing the business rules of the software requires a sizable proportion of the software maintenance effort. Finally, the differences in effort distribution among the maintenance types suggest that assigning maintenance tasks properly is important to effectively and efficiently utilize human resources.
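A hedged sketch of that modelling choice follows, comparing a regression on the three separate LOC predictors with one on their simple sum; the data is synthetic and the coefficients are invented.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 24  # one row per maintenance task; all values are synthetic
added, modified, deleted = rng.poisson([40, 25, 10], size=(n, 3)).T
effort = (0.06 * added + 0.09 * modified + 0.12 * deleted
          + rng.normal(0, 0.5, n))  # hours, with invented coefficients

X3 = np.column_stack([added, modified, deleted])  # separate predictors
Xs = (added + modified + deleted).reshape(-1, 1)  # simple-sum predictor

separate = LinearRegression().fit(X3, effort)
summed = LinearRegression().fit(Xs, effort)
print("separate predictors R^2:", round(separate.score(X3, effort), 3))
print("simple-sum predictor R^2:", round(summed.score(Xs, effort), 3))
```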

17.
Fault-based test suite prioritization for specification-based testing   总被引:1,自引:0,他引:1  

Context

Existing test suite prioritization techniques usually rely on code coverage information or historical execution data that serve as indicators for estimating the fault-detecting ability of test cases. Such indicators are primarily empirical in nature and not theoretically driven; hence, they do not necessarily provide sound estimates. Also, these techniques are not applicable when the source code is not available or when the software is tested for the first time.

Objective

We propose and develop the novel notion of fault-based prioritization of test cases, which directly utilizes theoretical knowledge of their fault-detecting ability and of the relationships among the test cases and the faults in the prescribed fault model, based on which the test cases are generated.

Method

We demonstrate our approach of fault-based prioritization by applying it to the testing of the implementation of logical expressions against their specifications. We then validate our proposal by an empirical study that evaluates the effectiveness of prioritization techniques using two different metrics.
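One way to picture fault-based prioritization is the greedy sketch below, which orders test cases by how many not-yet-detected faults from the fault model each one kills; the test-to-fault mapping is hypothetical and the greedy rule is one possible instantiation, not necessarily the paper's.

```python
def fault_based_priority(detects):
    """Greedily order test cases so each next test detects the most
    not-yet-covered faults from the fault model (additional coverage)."""
    remaining = set().union(*detects.values())
    order = []
    pool = dict(detects)
    while pool and remaining:
        best = max(pool, key=lambda t: len(pool[t] & remaining))
        order.append(best)
        remaining -= pool.pop(best)
    order.extend(pool)  # tests adding no new fault coverage go last
    return order

# Hypothetical mapping for testing a logical expression:
# test case -> faults (e.g. operator/operand mutations) it detects.
detects = {
    "t1": {"ORF", "LNF"},
    "t2": {"ENF"},
    "t3": {"ORF", "ENF", "TNF"},
    "t4": {"LNF"},
}
print(fault_based_priority(detects))  # ['t3', 't1', 't2', 't4']
```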

Results

A theoretically guided fault-based prioritization technique generally outperforms other techniques under study, as assessed by two different metrics. Our empirical results also show that the technique helps to reveal all target faults by executing only about 72% of the prioritized test suite, thereby reducing the effort required in testing.

Conclusions

The fault-based prioritization approach is not only applicable to the instance empirically validated in this paper, but should also be adaptable to other fault-based testing strategies. We also envisage new research directions to be opened up by our work.  相似文献   

18.

Context

User participation in information system (IS) development has received much research attention. However, prior empirical research regarding the effect of user participation on IS success is inconclusive. This might be because previous studies overlook the effect of the particular components of user participation and other possible mediating factors.

Objective

The objective of this study is to empirically examine how user influence and user responsibility affect IS project performance. We inspect whether user influence and user responsibility improve the quality of the IS development process, which in turn leads to project success, or whether they have a direct positive influence on project success.

Method

We conducted a survey of 151 IS project managers in order to understand the impact of user influence and user responsibility on IS project performance. Regression analysis was conducted to assess the relationship among user influence, user responsibility, organizational technology learning, project control, user-developer interaction, and IS project management performance.
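The mediation logic behind this analysis can be sketched with statsmodels as below, regressing performance on user influence with and without a process-quality mediator; the data is synthetic and the Baron-Kenny style steps are a simplified stand-in for the full analysis.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 151  # mirrors the survey size; the data itself is synthetic
influence = rng.normal(size=n)
mediator = 0.6 * influence + rng.normal(size=n)  # e.g. project control
performance = 0.5 * mediator + 0.1 * influence + rng.normal(size=n)

# Baron-Kenny style steps: direct effect, then direct effect with mediator.
direct = sm.OLS(performance, sm.add_constant(influence)).fit()
full = sm.OLS(performance,
              sm.add_constant(np.column_stack([influence, mediator]))).fit()
print("influence alone:", direct.params[1].round(2))
print("influence | mediator:", full.params[1].round(2),
      "mediator:", full.params[2].round(2))
```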

Results

This study shows that user responsibility and user influence have a positive effect on project performance through the promotion of IS development processes as mediators, including organizational technology learning, project control, and user-IS interaction.

Conclusion

Our results suggest that user responsibility and user influence play important roles in indirectly and directly impacting project management performance, respectively. The results imply that organizations and project managers should use both user participation and user influence to improve process performance and, in turn, increase project success.

19.
20.

Context

Assessing software quality at the early stages of the design and development process is very difficult since most of the software quality characteristics are not directly measurable. Nonetheless, they can be derived from other measurable attributes. For this purpose, software quality prediction models have been extensively used. However, building accurate prediction models is hard due to the lack of data in the domain of software engineering. As a result, the prediction models built on one data set show a significant deterioration of their accuracy when they are used to classify new, unseen data.

Objective

The objective of this paper is to present an approach that optimizes the accuracy of software quality predictive models when used to classify new data.

Method

This paper presents an adaptive approach that takes already built predictive models and adapts them (one at a time) to new data, using an ant colony optimization algorithm in the adaptation process. The approach is validated on the stability of classes in object-oriented software systems and can easily be used for any other software quality characteristic. It can also be easily extended to software quality prediction problems involving more than two classification labels.
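A toy ant-colony-style adaptation loop is sketched below: pheromone is kept over candidate decision thresholds of an already built classifier, and ants sample thresholds and reinforce those that fit the new data. The single-threshold model and all numbers are illustrative assumptions, far simpler than the paper's algorithm.

```python
import random

random.seed(3)

# New, context-specific data: (metric value, stable? label) -- synthetic.
data = [(random.gauss(5 if y else 8, 1.5), y) for y in [1, 0] * 30]

def accuracy(threshold):
    """Existing model assumed to be 'predict stable if metric < threshold';
    adaptation searches for the threshold that best fits the new data."""
    return sum((x < threshold) == bool(y) for x, y in data) / len(data)

candidates = [4 + 0.5 * i for i in range(13)]  # thresholds 4.0 .. 10.0
pheromone = {c: 1.0 for c in candidates}

for _ in range(30):                            # colony iterations
    for _ant in range(10):
        total = sum(pheromone.values())
        c = random.choices(candidates,
                           [pheromone[v] / total for v in candidates])[0]
        pheromone[c] += accuracy(c)            # deposit proportional to fitness
    for c in candidates:
        pheromone[c] *= 0.9                    # evaporation

print(max(candidates, key=lambda c: pheromone[c]))  # adapted threshold
```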

Results

Results show that our approach outperforms the machine learning algorithm C4.5 as well as random guessing. It also preserves the expressiveness of the models, which provide not only the classification label but also guidelines for attaining it.

Conclusion

Our approach is an adaptive one that can be seen as taking predictive models already built from common domain data and adapting them to context-specific data. This suits the domain of software quality, where data is very scarce and predictive models built from one data set are therefore hard to generalize and reuse on new data.
