Similar Documents
20 similar documents found (search time: 31 ms)
1.

Context

Adopting IT innovation in organizations is a complex decision process driven by technical, social, and economic issues. Organizations that decide to adopt an innovation therefore face uncertain implementation success, as the actual use of a new technology may differ from the use that was expected. This misalignment between planned and effective use of an innovation is called the assimilation gap.

Objective

This research aims to define a quantitative instrument for measuring the assimilation gap and to apply it to the adoption of open source software (OSS).

Method

In this paper, we use Arthur's theory of path dependence and increasing returns. In particular, we model the use of software applications (planned or actual) as stochastic processes defined by the daily number of files created with the applications. We quantify the assimilation gap by comparing the resulting models using proximity measures.
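
As a loose illustration of this method, the sketch below compares a planned and an actual daily file-creation series with a simple proximity measure; the data and the normalized-distance measure are assumptions for illustration, not the paper's exact instruments.

```python
import numpy as np

# Hypothetical daily file-creation counts for one application.
planned = np.array([12, 15, 14, 16, 15, 17, 16], dtype=float)
actual = np.array([4, 6, 5, 7, 6, 8, 7], dtype=float)

# One possible proximity measure: normalized Euclidean distance,
# where 0 means planned and actual use coincide.
gap = np.linalg.norm(planned - actual) / np.linalg.norm(planned)
print(f"assimilation gap (relative distance): {gap:.2f}")
```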

Results

We apply and validate our method on a real case study of the introduction of OpenOffice. We found a gap between the planned and the effective use despite well-defined directives to use the new OSS technology. These findings suggest a need for strategy re-calibration that takes environmental factors and individual attitudes into account.

Conclusions

The theory of path dependence is a valid instrument for modeling the assimilation gap, provided that information on the strategy toward innovation and quantitative data on actual use are available.

2.

Context

During development, managers, analysts, and designers often need to know whether enough requirements analysis work has been done and whether it is safe to proceed to the design stage.

Objective

This paper describes a new, simple and practical method for assessing our confidence in a set of requirements.

Method

We identified four confidence factors and used a goal-oriented framework with a simple ordinal scale to develop a method for assessing confidence. We illustrate the method and show how it has been applied to a real systems development project.
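
A minimal sketch of what such an ordinal-scale assessment could look like is given below; the four factor names and the weakest-link aggregation rule are assumptions for illustration, since the abstract does not name the factors or the aggregation.

```python
# Ordinal scale for rating each confidence factor.
SCALE = {"low": 0, "medium": 1, "high": 2}

ratings = {  # hypothetical assessment of one requirements set
    "stakeholder coverage": "high",
    "requirement stability": "medium",
    "analyst experience": "high",
    "validation depth": "low",
}

# Assumed weakest-link rule: overall confidence is capped by the lowest rating.
weakest = min(ratings, key=lambda factor: SCALE[ratings[factor]])
print(f"overall confidence limited by: {weakest} ({ratings[weakest]})")
```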

Results

We show how assessing confidence in the requirements could have revealed problems in this project earlier and so saved both time and money.

Conclusion

Our meta-level assessment of requirements provides a practical method that can prove useful to managers, analysts, and designers who need to know when sufficient requirements analysis has been performed.

3.

Context

Formal methods are very useful in the software industry and are becoming of paramount importance in practical engineering. They involve designing and modeling various system aspects, usually expressed through different paradigms. These different formalisms make it more difficult to verify the developed system as a whole.

Objective

In this paper, we propose to combine two modeling formalisms in order to express both the functional and the timed security requirements of a system in a single formalism.

Method

First, the system behavior is specified according to its functional requirements using the Timed Extended Finite State Machine (TEFSM) formalism. Second, this model is augmented by applying a set of dedicated algorithms that integrate timed security requirements specified in the Nomad language. This language is suited to expressing security properties such as permissions, prohibitions, and obligations with time considerations.
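
The sketch below gives a loose, simplified flavor of such an augmentation: a timed prohibition is woven into a toy state machine by strengthening the guards of matching transitions. The representation and the rule format are assumptions for illustration, not the paper's TEFSM or Nomad syntax.

```python
from dataclasses import dataclass

@dataclass
class Transition:
    src: str
    action: str
    dst: str
    guard: str = "true"  # guard over a clock variable t

# Toy functional model of a payment service.
transitions = [
    Transition("idle", "login", "authenticated"),
    Transition("authenticated", "pay", "done"),
]

def apply_prohibition(trs, action, within):
    """Assumed rule shape: 'action' is prohibited before 'within' time units."""
    for tr in trs:
        if tr.action == action:
            tr.guard = f"({tr.guard}) and t >= {within}"
    return trs

for tr in apply_prohibition(transitions, "pay", within=30):
    print(tr)
```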

Results

The proposed algorithms produce a global TEFSM specification of the system that includes both its functional and its timed security requirements.

Conclusion

We conclude that several requirement aspects described with different formalisms can be merged into a global specification that can be used for several purposes, such as code generation, specification correctness proofs, model checking, or automatic test generation. We applied our approach to a France Telecom Travel service to demonstrate its scalability and feasibility.

4.

Context

Software developers spend considerable effort implementing auxiliary functionality used by the main features of a system (e.g., compressing/decompressing files, encrypting/decrypting data, scaling/rotating images). With the increasing amount of open source code available on the Internet, time and effort can be saved by reusing these utilities through informal practices of code search and reuse. However, when this type of reuse is performed in an ad hoc manner, it can be tedious and error-prone: code results have to be manually inspected and integrated into the workspace.

Objective

In this paper we introduce and evaluate the use of test cases as an interface for automating code search and reuse. We call our approach Test-Driven Code Search (TDCS). Test cases serve two purposes: (1) they define the behavior of the desired functionality to be searched; and (2) they test the matching results for suitability in the local context. We also describe CodeGenie, an Eclipse plugin we have developed that performs TDCS using a code search engine called Sourcerer.
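
A toy version of the TDCS loop is sketched below: the test case both specifies the behavior being searched for and filters candidate results for suitability. The local stub candidates are assumptions for illustration; CodeGenie instead retrieves candidates through the Sourcerer code search engine.

```python
# Hypothetical search results: candidate implementations of "reverse a string".
candidates = {
    "impl_a": lambda s: s[::-1],
    "impl_b": lambda s: s.upper(),  # a false positive from the search
}

def test_reverse(fn):
    """The test case doubles as the search interface: it defines the behavior."""
    assert fn("abc") == "cba"
    assert fn("") == ""

suitable = []
for name, fn in candidates.items():
    try:
        test_reverse(fn)  # run each result against the test in the local context
        suitable.append(name)
    except AssertionError:
        pass

print("reusable results:", suitable)  # -> ['impl_a']
```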

Method

Our evaluation consists of two studies: an applicability study with 34 different features that were searched using CodeGenie; and a performance study comparing CodeGenie, Google Code Search, and a manual approach.

Results

Both studies present evidence of the applicability and good performance of TDCS in the reuse of auxiliary functionality.

Conclusion

This paper presents an approach to source code search and its application to the reuse of auxiliary functionality. Our exploratory evaluation shows promising results, which motivates the use and further investigation of TDCS.

5.

Background

We conducted a 3-year intervention to increase awareness and adoption of eight more-profitable nursery crop production practices that reduced certain traumatic and musculoskeletal injury hazards.

Methods

We disseminated information to nursery managers across seven states using information channels they were known to rely on (e.g., trade publications, public events, university Extension, other managers). We evaluated rolling, independent probability samples (n = 1200) with mail questionnaires before the intervention and after each of the 3 intervention years. We also evaluated samples (n = 250) from a comparison group of New Zealand nursery managers.

Results

The intervention was associated with increased awareness of four of the eight practices among US managers after year 3 compared to their baseline: zippers (20 vs. 32%, p ≤ 0.000), stools (11 vs. 22%, p ≤ 0.001), pruners (29 vs. 40%, p ≤ 0.014), and tarps (24 vs. 33%, p ≤ 0.009). There were no changes in adoption. Among New Zealand managers, awareness of hoes increased after year 2 compared to their baseline (35 vs. 52%, p ≤ 0.010).
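
For readers who want to reproduce this kind of before/after comparison, the sketch below runs a two-sample proportion test on one contrast; the respondent counts are assumptions reconstructed from the reported percentages, since the per-wave response counts are not given in the abstract.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts: 20% vs. 32% awareness of zippers, assuming
# 600 respondents per survey wave (actual response counts not reported here).
aware = [120, 192]
nobs = [600, 600]

stat, pval = proportions_ztest(aware, nobs)
print(f"z = {stat:.2f}, p = {pval:.4f}")
```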

Conclusions

A modest, regionwide information dissemination intervention was associated with increased awareness, but not adoption.

6.

Context

Software productivity measurement is essential for controlling and improving the performance of software development, for example by identifying role models (e.g., projects, individuals, tasks) when comparing productivity data. Prediction is relevant for determining whether corrective actions are needed and for discovering which alternative improvement action would yield the best results.

Objective

In this study we identify studies on software productivity prediction and measurement. Based on the identified studies, we first create a classification scheme and map the studies into the scheme (systematic map). Thereafter, a detailed analysis and synthesis of the studies is conducted.

Method

Systematic mapping and systematic review were used as research methods to systematically identify and aggregate the evidence on productivity measurement and prediction approaches.

Results

In total, 38 studies have been identified, resulting in a classification scheme for empirical research on software productivity. The mapping allowed us to identify the rigor of the evidence with respect to the different productivity approaches. In the detailed analysis, the results were tabulated and synthesized to provide recommendations to practitioners.

Conclusion

Risks of simple ratio-based measurement approaches were shown. In response to these problems, data envelopment analysis (DEA) seems to be a strong approach for capturing multivariate productivity measures, and it allows reference projects to be identified against which inefficient projects can be compared. Regarding simulation, no general prediction model can be identified; still, simulation and statistical process control are promising methods for software productivity prediction. Overall, further evidence is needed to make stronger claims and recommendations. In particular, the discussion of validity threats should become standard, and models need to be compared with each other.
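
Since the conclusion singles out data envelopment analysis, the sketch below shows a minimal input-oriented CCR DEA (multiplier form) solved with scipy; the three-project data set is hypothetical, and real productivity studies use richer inputs and outputs.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical projects: inputs = [effort in person-months],
# outputs = [size in kLOC, delivered features].
X = np.array([[120.0], [80.0], [150.0]])           # inputs, shape (n_dmu, n_in)
Y = np.array([[40.0, 10.0], [35.0, 9.0], [42.0, 8.0]])  # outputs, shape (n_dmu, n_out)

def ccr_efficiency(o):
    """Efficiency of project o: max u.y_o  s.t.  v.x_o = 1, u.Y_j - v.X_j <= 0."""
    n_out, n_in = Y.shape[1], X.shape[1]
    c = np.concatenate([-Y[o], np.zeros(n_in)])    # linprog minimizes, so use -u.y_o
    A_ub = np.hstack([Y, -X])                      # u.Y_j - v.X_j <= 0 for every project j
    b_ub = np.zeros(len(X))
    A_eq = np.concatenate([np.zeros(n_out), X[o]])[None, :]  # normalization v.x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
    return -res.fun                                # efficiency in (0, 1]

for o in range(len(X)):
    print(f"project {o}: efficiency {ccr_efficiency(o):.3f}")
```

Efficient projects score 1.0 and can serve as the reference set to which inefficient projects are compared.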

7.

Context

For large software projects it is important to have some traceability between artefacts from different phases (e.g., requirements, designs, code), and between artefacts and the developers involved. However, if capturing traceability information during the project feels laborious to developers, they will often be sloppy about registering the relevant traceability links, so the information is incomplete. This makes automated tool-based collection of traceability links a tempting alternative, but it has the opposite challenge of generating too many potential trace relationships, not all of which are equally relevant.

Objective

This paper evaluates how to rank such auto-generated trace relationships.

Method

We present two approaches for such a ranking: a Bayesian technique and a linear inference technique. Both techniques depend on the interaction event trails left behind by collaborating developers while working within a development tool.
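
The sketch below hints at what a simple linear scoring over interaction event trails might look like: candidate links between artefacts touched in the same work session are ranked by co-occurrence counts. The session data and the unweighted counting are assumptions for illustration, not the paper's Bayesian or linear models.

```python
from collections import Counter
from itertools import combinations

# Hypothetical event trails: the artefacts a developer touched per work session.
sessions = [
    ["Req-12", "Design-3", "Foo.java"],
    ["Req-12", "Foo.java"],
    ["Design-3", "Bar.java"],
]

# Score each candidate trace link by how often its two artefacts co-occur.
scores = Counter()
for trail in sessions:
    for a, b in combinations(sorted(set(trail)), 2):
        scores[(a, b)] += 1

# Highest-scoring links are presented first to the developer.
for link, score in scores.most_common():
    print(link, score)
```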

Results

The outcome of a preliminary study suggests the advantage of the linear approach; we also explore the challenges and potential of the two techniques.

Conclusion

The advantage of the two techniques is that they can be used to provide traceability insights that are contextual and would have been much more difficult to capture manually. We also present some key lessons learnt during this research.

8.

Context

Software development effort estimation (SDEE) is the process of predicting the effort required to develop a software system. In order to improve estimation accuracy, many researchers have proposed machine learning (ML) based SDEE models (ML models) since the 1990s. However, there has been no attempt to analyze the empirical evidence on ML models in a systematic way.

Objective

This research aims to systematically analyze ML models from four aspects: type of ML technique, estimation accuracy, model comparison, and estimation context.

Method

We performed a systematic literature review of empirical studies on ML models published in the last two decades (1991-2010).

Results

We have identified 84 primary studies relevant to the objective of this research. After investigating these studies, we found that eight types of ML techniques have been employed in SDEE models. Overall, the estimation accuracy of these ML models is close to the acceptable level and is better than that of non-ML models. Furthermore, different ML models have different strengths and weaknesses and thus favor different estimation contexts.
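
As one concrete example of the kind of ML technique used in SDEE, the sketch below implements a tiny analogy-based (k-nearest-neighbor) estimator; the project data are hypothetical, and this is not claimed to be one of the 84 reviewed models.

```python
import numpy as np

# Hypothetical historical projects: (size in kLOC, team size) -> effort (person-months).
X = np.array([[10, 3], [25, 5], [40, 8], [12, 4]], dtype=float)
effort = np.array([30, 90, 160, 42], dtype=float)

def knn_estimate(project, k=2):
    # Min-max normalize so no single feature dominates the distance.
    lo, hi = X.min(axis=0), X.max(axis=0)
    Xn = (X - lo) / (hi - lo)
    p = (np.asarray(project, dtype=float) - lo) / (hi - lo)
    distances = np.linalg.norm(Xn - p, axis=1)
    nearest = np.argsort(distances)[:k]
    return effort[nearest].mean()  # mean effort of the k closest analogues

print(f"estimated effort: {knn_estimate([20, 5]):.1f} person-months")
```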

Conclusion

ML models are promising in the field of SDEE. However, the application of ML models in industry is still limited, so more effort and incentives are needed to facilitate their application. To this end, based on the findings of this review, we provide recommendations for researchers as well as guidelines for practitioners.

9.

Context

Extreme Programming (XP) is one of the most popular agile software development methodologies. XP is defined as a consistent set of values and practices designed to work well together, but lacks practices for project management and especially for supporting the customer role. The customer representative is constantly under pressure and may experience difficulties in foreseeing the adequacy of a release plan.

Objective

To assist release planning in XP by structuring the planning problem and providing an optimization model that suggests a suitable release plan.

Method

We develop an optimization model that generates a release plan taking into account story size, business value, possible precedence relations, themes, and uncertainty in velocity prediction. The running-time feasibility is established through computational tests. In addition, we provide a practical heuristic approach to velocity estimation.
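
A minimal single-release sketch of such an optimization model is shown below, formulated with the PuLP library; the story data are hypothetical, and the paper's model additionally handles themes and uncertainty in velocity prediction, which are omitted here.

```python
import pulp

stories = {"A": (5, 8), "B": (3, 5), "C": (8, 13), "D": (2, 2)}  # name: (size, value)
precedes = [("C", "A")]  # story C can only be picked if story A is picked
velocity = 12            # estimated capacity of the next release

prob = pulp.LpProblem("release_plan", pulp.LpMaximize)
x = pulp.LpVariable.dicts("pick", stories, cat="Binary")

prob += pulp.lpSum(stories[s][1] * x[s] for s in stories)              # total business value
prob += pulp.lpSum(stories[s][0] * x[s] for s in stories) <= velocity  # capacity constraint
for later, earlier in precedes:
    prob += x[later] <= x[earlier]                                     # precedence relation

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print("release plan:", [s for s in stories if x[s].value() > 0.5])
```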

Results

Computational tests show that problems with up to six themes and 50 stories can be solved exactly. An example provides insight into uncertainties affecting velocity, and indicates that the model can be applied in practice.

Conclusion

An optimization model can be used in practice to enable the customer representative to make more informed decisions faster. This can help in adopting XP in projects where plan-driven approaches have traditionally been used.

10.

Context

To ensure the high quality of a process model repository, refactoring operations can be applied to correct anti-patterns such as overlapping process models, inconsistent labeling of activities, and overly complex models. However, if a process model collection is created and maintained by different people over a longer period of time, manual detection of such refactoring opportunities becomes difficult, simply due to the number of processes in the repository. Consequently, there is a need for techniques that detect refactoring opportunities automatically.

Objective

This paper proposes a technique for automatically detecting refactoring opportunities.

Method

We developed the technique based on metrics that measure the consistency of activity labels as well as the extent to which processes overlap and the type of overlap that they have. We evaluated the technique by applying it to two large process model repositories.
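
As a rough illustration of an overlap metric of this kind, the sketch below flags pairs of process models whose activity label sets have high Jaccard similarity; the metric choice and the threshold are assumptions, not the exact metrics from the paper.

```python
def jaccard(labels_a, labels_b):
    """Overlap between the activity label sets of two process models."""
    a, b = set(labels_a), set(labels_b)
    return len(a & b) / len(a | b)

# Hypothetical activity labels from two models in a repository.
model_1 = ["Check invoice", "Approve invoice", "Pay invoice"]
model_2 = ["Check invoice", "Approve invoice", "Archive invoice"]

similarity = jaccard(model_1, model_2)
if similarity > 0.5:  # assumed threshold for reporting
    print(f"refactoring candidate: overlapping models (Jaccard = {similarity:.2f})")
```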

Results

The evaluation shows that the technique can be used to pinpoint the approximate location of three types of refactoring opportunities with high precision and recall and of one type of refactoring opportunity with high recall, but low precision.

Conclusion

We conclude that the technique presented in this paper can be used in practice to automatically detect a number of anti-patterns that can be corrected by refactoring.

11.

Context

The context of this research is software process improvement (SPI) in small and medium Web companies.

Objective

The primary objective of this paper is to identify SPI success factors for small and medium Web companies.

Method

To achieve this goal, we conducted semi-structured, open-ended interviews with 21 participants representing 11 different companies in Pakistan, and analyzed the data qualitatively using the Glaserian strand of grounded theory research procedures. The key steps of these procedures that were employed in this research included open coding, focused coding, theoretical coding, theoretical sampling, constant comparison, and scaling up.

Results

An initial framework of key SPI success factors for small and medium Web companies was proposed, which can be of use for small and medium Web companies engaged in SPI. The paper also differentiates between small and medium Web companies and analyzes crucial SPI requirements for companies operating in the Web development domain.

Conclusion

The results of this work, in particular the use of qualitative techniques, allowed us to obtain rich insight into SPI success factors for small and medium Web companies. Future work comprises the validation of these SPI success factors with small and medium Web companies.

12.

Context

Release scheduling deals with the selection and assignment of deliverable features to a sequence of consecutive product deliveries while satisfying several constraints. Although agile software development represents a major approach to software engineering, there is no well-established conceptual definition and sound methodological support for agile release scheduling.

Objective

To propose a solution, we present (1) a conceptual model for agile scheduling, (2) a novel multiple knapsack-based optimization model, and (3) a branch-and-bound optimization algorithm for agile release scheduling.

Method

To evaluate our model, simulations were carried out on seven real-life and several generated data sets.
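
To give a flavor of the branch-and-bound ingredient, the sketch below solves a toy single 0/1 knapsack with a fractional-relaxation bound; the items are hypothetical, and the paper's model is a richer multiple-knapsack formulation.

```python
def knapsack_bb(values, weights, capacity):
    """Branch-and-bound 0/1 knapsack: returns the maximum achievable value."""
    # Sort items by value density so the fractional bound is valid and tight.
    order = sorted(range(len(values)), key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in order]
    w = [weights[i] for i in order]
    best = 0

    def bound(i, cap, val):
        # Optimistic bound: greedily fill remaining capacity, allowing a fraction.
        for j in range(i, len(v)):
            if w[j] <= cap:
                cap -= w[j]
                val += v[j]
            else:
                return val + v[j] * cap / w[j]
        return val

    def branch(i, cap, val):
        nonlocal best
        best = max(best, val)
        if i == len(v) or bound(i, cap, val) <= best:
            return  # prune: this subtree cannot beat the incumbent
        if w[i] <= cap:
            branch(i + 1, cap - w[i], val + v[i])  # take item i
        branch(i + 1, cap, val)                     # skip item i

    branch(0, capacity, 0)
    return best

print(knapsack_bb([8, 5, 13, 2], [5, 3, 8, 2], 12))  # -> 18
```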

Results

The developed algorithm strives to prevent resource overload and resource underload, and mitigates risks of delivery slippage.

Conclusion

The results of the experiment suggest that this approach can provide optimized, semi-automatic release schedule generation and support more informed decisions, utilizing on-the-fly what-if analysis to tailor the best schedule for the specific project context.

13.

Context

Although agile software development methods such as SCRUM and DSDM are gaining popularity, the consequences of applying agile principles to software product management have received little attention until now.

Objective

In this paper, this gap is filled by the introduction of a method for the application of SCRUM principles to software product management.

Method

A case study research approach is employed to describe and evaluate this method.

Results

This has resulted in the ‘agile requirements refinery’, an extension to the SCRUM process that enables product managers to cope with complex requirements in an agile development environment. A case study is presented to illustrate how agile methods can be applied to software product management.

Conclusions

The experiences of the case study company are provided as a set of lessons learned that will help others to apply agile principles to their software product management process.

14.
15.

Context

Agile information systems development (ISD) has received much attention from both the practitioner and researcher community over the last 10-15 years. However, it is still unclear what precisely constitutes agile ISD.

Objective

Based on four empirical studies conducted over a 10-year period from 1999 to 2008, the objective of this paper is to show how the meaning and practice of agile ISD have evolved over time and, on this basis, to speculate about what comes next.

Method

Four phases of research have been conducted using a grounded theory approach. For each research phase, qualitative interviews were held in American and/or Danish companies, and a grounded theory was inductively discovered through careful data analysis. Subsequently, the four resulting theories were analyzed for common themes, and a global theory was identified across the empirical data.

Results

In 1999, companies were developing software at high speed in a desperate rush to be first to market. In 2001, a new high-speed/quick-results development process had become established practice. In 2003, changes in the market created the need for a more balanced view on speed and quality, and in 2008 companies were successfully combining agile and plan-driven approaches to achieve the benefits of both. The studies reveal a two-stage pattern in which dramatic changes in the market cause disruption of established practices and process adaptations, followed by consolidation of the lessons learnt into a once again stable software development process.

Conclusion

The cyclical history of punctuated process evolution makes it possible to distinguish pre-agility from current practices (agility) and, on this basis, to speculate about post-agility: a possible next cycle of software process evolution concerned with proactively pursuing the dual goal of agility and alignment through a diversity of means.

16.

Context

In recent years, many software companies have considered Software Process Improvement (SPI) as essential for successful software development. These companies have also shown special interest in IT Service Management (ITSM). SPI standards have evolved to incorporate ITSM best practices.

Objective

This paper presents a systematic literature review of ITSM Process Improvement initiatives based on the ISO/IEC 15504 standard for process assessment and improvement.

Method

A systematic literature review based on the guidelines proposed by Kitchenham and the review protocol template developed by Biolchini et al. is performed.

Results

Twenty-eight relevant studies related to ITSM Process Improvement have been found. From the analysis of these studies, nine different ITSM Process Improvement initiatives have been detected. Seven of these initiatives use ISO/IEC 15504 conformant process assessment methods.

Conclusion

During the last decade, in order to satisfy the ongoing demand of mature software development companies for assessing and improving ITSM processes, different models that use the measurement framework of ISO/IEC 15504 have been developed. However, it is still necessary to define a method with the guidelines needed to implement both software development processes and ITSM processes with reduced effort, especially because some processes of the two categories overlap.

17.

Context

An operational test is a system test that examines whether all software and hardware components comply with the requirements given to a system deployed in its operational environment.

Objective

A lightweight profiling method is necessary to conduct operational testing of embedded systems with severe resource restrictions.

Method

We focus on the Process Control Block as the optimal location to monitor the execution of all processes. We propose a profiling method that collects the runtime execution information of processes, without interrupting the system's operational environment, by hooking the Process Control Block information. Applying the proposed method to detect runtime memory faults, we developed the operational testing tool AMOS v1.0, which is currently being used in the automobile industry.
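
As a rough user-space analogy to this idea (the tool itself works at the level of the kernel's Process Control Block), the Linux sketch below samples a process's memory usage from /proc without instrumenting the monitored program; the target PID is a placeholder.

```python
import time

def rss_kb(pid):
    """Resident set size of a process, read from /proc/<pid>/status (Linux)."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # reported in kB
    return None

pid = 1  # placeholder: the process under operational test
for _ in range(3):
    print(f"pid {pid}: RSS = {rss_kb(pid)} kB")  # sample without interrupting it
    time.sleep(1)
```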

Results

An industrial field study on 23 models of car-infotainment systems revealed a total of 519 memory faults, while slowing the system down by only 0.084-0.132×. We conducted a comparative analysis of representative runtime memory fault detection tools. The analysis shows that our proposed method, with its relatively low overhead, meets the requirements for operational testing, while the other methods failed to satisfy the operational test conditions.

Conclusion

We conclude that a lightweight profiling method for operational testing of embedded systems can be built around the Process Control Block.

18.

Context

In recent years, many usability evaluation methods (UEMs) have been employed to evaluate Web applications. However, many of these applications still do not meet most customers' usability expectations, and many companies have folded as a result of not considering Web usability issues. No studies currently exist that survey either the use of usability evaluation methods for the Web or the benefits they bring.

Objective

The objective of this paper is to summarize the current knowledge that is available as regards the usability evaluation methods (UEMs) that have been employed to evaluate Web applications over the last 14 years.

Method

A systematic mapping study was performed to assess the UEMs that have been used by researchers to evaluate Web applications and their relation to the Web development process. Systematic mapping studies are useful for categorizing and summarizing the existing information concerning a research question in an unbiased manner.

Results

The results show that around 39% of the papers reviewed reported the use of evaluation methods that had been specifically crafted for the Web. They also show that the most widely used type of method was User Testing. The results identify several research gaps, such as the fact that around 90% of the studies applied evaluations during the implementation phase of Web application development, which is the most costly phase in which to perform changes. A list of the UEMs that were found is also provided in order to guide novice usability practitioners.

Conclusions

From an initial set of 2703 papers, a total of 206 research papers were selected for the mapping study. The results obtained allowed us to reach conclusions concerning the state-of-the-art of UEMs for evaluating Web applications. This allowed us to identify several research gaps, which subsequently provided us with a framework in which new research activities can be more appropriately positioned, and from which useful information for novice usability practitioners can be extracted.

19.

Context

A particular strength of agile systems development approaches is that they encourage a move away from ‘introverted’ development by involving the customer in all areas of development, leading to more innovative and hence more valuable information systems. However, a move toward open innovation requires a focus that goes beyond a single customer representative, involving a broader range of stakeholders, both inside and outside the organisation, in a continuous, systematic way.

Objective

This paper provides an in-depth discussion of the applicability and implications of open innovation in an agile environment.

Method

We draw on two illustrative cases from industry.

Results

We highlight some distinct problems that arose when two project teams tried to combine agile and open innovation principles. For example, openness is often compromised by a perceived competitive element and a lack of transparency between business units. In addition, minimal documentation often reduces effective knowledge transfer, while short iterations, stand-up meetings, and the presence of an on-site customer reduce the amount of time available for sharing ideas outside the team.

Conclusion

A clear understanding of the inter- and intra-organisational applicability and implications of open innovation in agile systems development is required to address key challenges for research and practice.

20.

Context

Software migration, and in particular migration towards the Web and towards distributed architectures, is a challenging and complex activity. It has been particularly relevant in recent years due to the large number of migration projects the industry has had to face because of the increasing pervasiveness of the Web and of mobile devices.

Objective

This paper reports a survey aimed at identifying the state of the practice in the Italian industry with regard to previous experiences in software migration projects (specifically concerning information systems), the adopted tools, and the emerging needs and problems.

Method

The study was carried out among 59 Italian Information Technology companies; for each company, a representative answered an on-line questionnaire concerning migration experiences, the technologies involved in migration projects, the adopted tools, and the problems that occurred during the projects.

Results

The results indicate that migration, especially towards the Web, is highly relevant for Italian IT companies, and that companies increasingly tend to adopt free and open source solutions rather than commercial ones. They also indicate that the adoption of specific migration tools is still very limited, either because of a lack of skills and knowledge or because of a lack of mature and adequate options.

Conclusions

Findings from this survey suggest the need for further technology transfer between academia and industry in order to favor the adoption of software migration techniques and tools.
