Similar Documents

20 similar documents found.
1.

Context

To guarantee the success of Business Process Modelling (BPM), it is necessary to check whether the activities and tasks described by Business Processes (BPs) are sound and well coordinated.

Objective

This article describes and validates a Formal Compositional Verification Approach (FCVA) that uses a Model-Checking (MC) technique to specify and verify BPs.

Method

This is performed using the Communicating Sequential Processes + Time (CSP+T) process calculus, which adds new constructs to timed Business Process Model and Notation (BPMN) modelling entities for specifying non-functional requirements.
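
To give a feel for the kind of check involved (this is not CSP+T's actual syntax or semantics, and the task names and deadlines are invented), a minimal Python sketch that accepts a timed task trace only if it follows a prescribed order and meets every deadline:

    # Hypothetical sequential process: each task must occur in this order,
    # no later than its deadline (time units are arbitrary).
    process = [("receive_order", 5), ("check_stock", 10), ("confirm", 15)]

    def accepts(process, trace):
        """trace: list of (event, timestamp) pairs; accepted only if it
        matches the prescribed order and every deadline."""
        if len(trace) != len(process):
            return False
        return all(ev == want and t <= deadline
                   for (want, deadline), (ev, t) in zip(process, trace))

    print(accepts(process, [("receive_order", 2), ("check_stock", 7), ("confirm", 12)]))  # True
    print(accepts(process, [("receive_order", 2), ("confirm", 7), ("check_stock", 12)]))  # False: order violated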

Results

Using our proposal, we are able to specify the BP Task Model (BPTM) associated with BPs by formalising the timed BPMN notational elements. The proposal also allows us to apply MC to BPTM verification. A real-life example of verifying a BPTM in the field of Customer Relationship Management (CRM) is discussed as a practical application of FCVA.

Conclusion

This approach facilitates the verification of complex BPs from independently verified local processes, and establishes a feasible way to use process calculi to verify BPs using state-of-the-art MC tools.

2.

Context

During development, managers, analysts and designers often need to know whether enough requirements analysis work has been done and whether it is safe to proceed to the design stage.

Objective

This paper describes a new, simple and practical method for assessing our confidence in a set of requirements.

Method

We identified four confidence factors and used a goal-oriented framework with a simple ordinal scale to develop a method for assessing confidence. We illustrate the method and show how it has been applied to a real systems development project.
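
A minimal illustrative sketch, with invented factor names and an aggregation rule the paper may not use: rate each confidence factor on a simple ordinal scale and let the weakest factor bound overall confidence.

    # Hypothetical factors and ratings; the paper's actual four factors
    # and aggregation rule may differ.
    SCALE = {"low": 0, "medium": 1, "high": 2}
    ratings = {
        "stakeholder coverage":   "high",
        "requirements stability": "medium",
        "domain understanding":   "high",
        "validation depth":       "low",
    }

    overall = min(ratings.values(), key=SCALE.get)  # weakest factor dominates
    print(f"overall confidence: {overall} (limited by the weakest factor)")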

Results

We show how assessing confidence in the requirements could have revealed problems in this project earlier and so saved both time and money.

Conclusion

Our meta-level assessment of requirements provides a practical and pragmatic method that can prove useful to managers, analysts and designers who need to know when sufficient requirements analysis has been performed.

3.

Context

In requirements engineering, there are many different stakeholders. The requirements engineer often has to find a set of requirements that reflects the needs of several different stakeholders while remaining within budget.

Objective

This paper introduces an optimisation-based approach to the automated analysis of requirements assignments when multiple stakeholders are to be satisfied by a single choice of requirements.

Method

The paper reports on experiments using two different multi-objective evolutionary optimisation algorithms with real-world data sets as well as synthetic data sets. This empirical validation includes a statistical analysis of the performance of the two algorithms.
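
A minimal sketch of the two-objective trade-off these algorithms explore (maximise total stakeholder value, minimise cost), using hypothetical data and a brute-force Pareto filter in place of the evolutionary search, which is what makes realistic problem sizes tractable:

    from itertools import combinations

    # Hypothetical data: cost of each requirement and its value to each stakeholder.
    costs = [10, 4, 7, 3, 9]
    value = [[5, 2, 8, 1, 6],   # stakeholder 0
             [3, 7, 2, 4, 5]]   # stakeholder 1

    def objectives(subset):
        """(total value across stakeholders, total cost) for a requirement subset."""
        v = sum(value[s][r] for s in range(len(value)) for r in subset)
        c = sum(costs[r] for r in subset)
        return v, c

    subsets = [frozenset(c) for n in range(len(costs) + 1)
               for c in combinations(range(len(costs)), n)]
    scored = {s: objectives(s) for s in subsets}

    def dominates(a, b):
        (va, ca), (vb, cb) = scored[a], scored[b]
        return va >= vb and ca <= cb and (va, ca) != (vb, cb)

    front = [s for s in subsets if not any(dominates(t, s) for t in subsets if t != s)]
    for s in sorted(front, key=lambda s: scored[s][1]):
        print(sorted(s), scored[s])   # the value/cost frontier to negotiate over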

Results

The results reveal that the Two-Archive algorithm outperformed the other algorithm in convergence as the scale of the problems increased. The paper also shows how both traditional and animated Kiviat diagrams can be used to visualise the tensions between the stakeholders’ competing requirements under increasing budgetary pressure.

Conclusion

This paper presents, for the first time, the concept of internal tension among multiple stakeholders in requirements analysis and optimisation. This analysis may be useful in internal negotiations over the project’s budgetary allowance.

4.

Context

Implied scenarios are unexpected behaviors in scenario specifications. Detecting and handling them is essential for the correctness of the specifications, and handling them requires identifying their causes. Most recent research focuses on detecting the implied scenarios themselves, or only a limited set of their causes.

Objective

The purpose of this research is to provide an approach to detecting the causes of implied scenarios.

Method

A scenario specification is a set of events and a set of relative orders between those events, which its implementation must enforce. The orders that cannot be inherently enforced are the unenforceable orders, and their existence leads to implied scenarios. To obtain the unenforceable orders, we first provide a method to represent both the specification and its implementation as a set of orders between events, called the causal order graph. The differences between the two graphs are then the unenforceable orders.
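
A minimal sketch of the set difference at the heart of this method, with invented events; both graphs are represented as sets of (before, after) order pairs:

    # Hypothetical order relations (edges of the causal order graphs).
    spec_orders = {("login", "query"), ("query", "reply"), ("reply", "logout")}
    impl_orders = {("login", "query"), ("query", "reply")}  # what components can enforce

    # Orders demanded by the specification that the implementation cannot
    # enforce; these pinpoint where implied scenarios arise.
    unenforceable = spec_orders - impl_orders
    print(unenforceable)   # {('reply', 'logout')}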

Results

Because the unenforceable orders consist of events and order relations that appear in the scenario specification, they point out which part of the specification should be considered to handle the implied scenarios. In addition, our approach supports synchronous, asynchronous, and FIFO communication styles without state explosion or heavy computational overhead. To validate our approach, we provide two case studies.

Conclusions

This approach helps a designer effectively correct the scenario specification by identifying where fixes are needed, especially in large cases and under various communication styles.

5.

Context

Although agile software development methods such as SCRUM and DSDM are gaining popularity, the consequences of applying agile principles to software product management have received little attention until now.

Objective

In this paper, this gap is filled by the introduction of a method for the application of SCRUM principles to software product management.

Method

A case study research approach is employed to describe and evaluate this method.

Results

This has resulted in the ‘agile requirements refinery’, an extension to the SCRUM process that enables product managers to cope with complex requirements in an agile development environment. A case study is presented to illustrate how agile methods can be applied to software product management.

Conclusions

The experiences of the case study company are provided as a set of lessons learned that will help others to apply agile principles to their software product management process.

6.

Context

An operational test is a system test that examines whether all software and hardware components comply with the requirements placed on a system deployed in its operational environment.

Objective

To provide a lightweight profiling method that makes operational testing feasible for embedded systems with severe resource restrictions.

Method

We focus on the Process Control Block as the optimal location for monitoring the execution of all processes. We propose a profiling method that collects the runtime execution information of processes by hooking into the Process Control Block, without disturbing the system’s operational environment. Applying the proposed method to detect runtime memory faults, we developed the operational testing tool AMOS v1.0, which is currently used in the automobile industry.
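
The method itself works inside the kernel's Process Control Block, which cannot be reproduced portably here; as a loose user-space analogy only, this sketch reads the /proc view of similar per-process state on Linux:

    import os

    def snapshot_processes():
        """Loose analogy: read user-visible per-process state from /proc
        (the paper's actual method hooks kernel Process Control Block data)."""
        procs = {}
        for pid in filter(str.isdigit, os.listdir("/proc")):
            try:
                with open(f"/proc/{pid}/status") as f:
                    fields = dict(line.split(":", 1) for line in f if ":" in line)
                procs[int(pid)] = {
                    "name": fields["Name"].strip(),
                    "rss_kb": fields.get("VmRSS", "0 kB").split()[0],
                }
            except (FileNotFoundError, PermissionError):
                continue   # process exited, or access denied
        return procs

    for pid, info in list(snapshot_processes().items())[:3]:
        print(pid, info)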

Results

An industrial field study on 23 models of car-infotainment systems revealed a total of 519 memory faults while slowing the system down by only 0.084-0.132×. A comparative analysis against representative runtime memory fault detection tools shows that our method, with its relatively low overhead, meets the requirements for operational testing, while the other methods failed to satisfy the operational test conditions.

Conclusion

We conclude that a lightweight-profiling method for embedded system operational testing can be built around the Process Control Block.

7.

Context

For large software projects it is important to have some traceability between artefacts from different phases (e.g. requirements, designs, code), and between artefacts and the developers involved. However, if capturing traceability information during the project feels laborious to developers, they will often be sloppy in registering the relevant traceability links, leaving the information incomplete. This makes automated, tool-based collection of traceability links a tempting alternative, but it has the opposite challenge: generating too many potential trace relationships, not all of which are equally relevant.

Objective

This paper evaluates how to rank such auto-generated trace relationships.

Method

We present two approaches for such a ranking: a Bayesian technique and a linear inference technique. Both techniques depend on the interaction event trails left behind by collaborating developers while working within a development tool.
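
A minimal sketch of the flavour of the linear technique, with an invented interaction trail and weight (the paper's actual model and event types are not shown here): score candidate trace links by a weighted count of how often two artefacts are touched in the same session.

    from collections import Counter
    from itertools import combinations

    # Hypothetical interaction trail: artefacts touched in the same session.
    sessions = [
        {"Req-12", "DesignDoc-3", "parser.c"},
        {"Req-12", "parser.c"},
        {"DesignDoc-3", "lexer.c"},
    ]
    W = 1.0   # assumed linear weight per co-occurrence

    cooccur = Counter()
    for touched in sessions:
        for a, b in combinations(sorted(touched), 2):
            cooccur[(a, b)] += 1

    for (a, b), n in cooccur.most_common():   # highest-scoring links first
        print(f"{a} <-> {b}: score {W * n}")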

Results

The outcome of a preliminary study suggests an advantage for the linear approach; we also explore the challenges and potential of the two techniques.

Conclusion

The advantage of the two techniques is that they can be used to provide traceability insights that are contextual and would have been much more difficult to capture manually. We also present some key lessons learnt during this research.

8.

Context

The context of this research is software process improvement (SPI) in small and medium Web companies.

Objective

The primary objective of this paper is to identify SPI success factors for small and medium Web companies.

Method

To achieve this goal, we conducted semi-structured, open-ended interviews with 21 participants representing 11 different companies in Pakistan, and analyzed the data qualitatively using the Glaserian strand of grounded theory research procedures. The key steps of these procedures that were employed in this research included open coding, focused coding, theoretical coding, theoretical sampling, constant comparison, and scaling up.

Results

An initial framework of key SPI success factors for small and medium Web companies was proposed, which can be of use for small and medium Web companies engaged in SPI. The paper also differentiates between small and medium Web companies and analyzes crucial SPI requirements for companies operating in the Web development domain.

Conclusion

The results of this work, in particular the use of qualitative techniques, allowed us to obtain rich insight into SPI success factors for small and medium Web companies. Future work comprises validating these SPI success factors with small and medium Web companies.

9.

Context

The field of mutation analysis has been growing, both in the number of published papers and the number of active researchers. This special issue provides a sampling of recent advances and ideas. But do all the new researchers know where we started?

Objective

To imagine where we are going, we must first know where we are. To understand where we are, we must know where we have been. This paper reviews past mutation analysis research, considers the present, then imagines possible future directions.

Method

A retrospective study of past trends lets us see the current state of mutation research in a clear context, allowing us to imagine and then create future directions.

Results

The value of mutation has greatly expanded since the early view of mutation as an expensive way to unit test subroutines. Our understanding of what mutation is and how it can help has become much deeper and broader.

Conclusion

Mutation analysis has been around for 35 years, but we are just now beginning to see its full potential. The papers in this issue and future mutation workshops will eventually allow us to realize this potential.

10.

Context

Writing software for the current generation of parallel systems requires significant programmer effort, and the community is seeking alternatives that reduce effort while still achieving good performance.

Objective

Measure the effect of parallel programming models (message-passing vs. PRAM-like) on programmer effort.

Design, setting, and subjects

One group of subjects implemented sparse-matrix dense-vector multiplication using message-passing (MPI), and a second group solved the same problem using a PRAM-like model (XMTC). The subjects were students in two graduate-level classes: one class was taught MPI and the other was taught XMTC.
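
For concreteness, the computation both groups implemented, sketched in plain sequential Python using a compressed sparse row (CSR) layout; the study's parallel MPI and XMTC versions are not reproduced here:

    # CSR encoding of [[10, 0, 0], [0, 0, 20], [30, 0, 40]].
    vals    = [10.0, 20.0, 30.0, 40.0]
    cols    = [0, 2, 0, 2]
    row_ptr = [0, 1, 2, 4]   # row i occupies vals[row_ptr[i]:row_ptr[i+1]]

    def spmv(vals, cols, row_ptr, x):
        """Sparse-matrix dense-vector multiplication, y = A @ x."""
        y = [0.0] * (len(row_ptr) - 1)
        for i in range(len(y)):
            for k in range(row_ptr[i], row_ptr[i + 1]):
                y[i] += vals[k] * x[cols[k]]
        return y

    print(spmv(vals, cols, row_ptr, [1.0, 2.0, 3.0]))   # [10.0, 60.0, 150.0]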

Main outcome measures

Development time, program correctness.

Results

Mean XMTC development time was 4.8 h less than mean MPI development time (95% confidence interval, 2.0-7.7), a 46% reduction. XMTC programs were more likely to be correct, but the difference in correctness rates was not statistically significant (p = .16).

Conclusions

XMTC solutions for this particular problem required less effort than MPI equivalents, but further studies examining different types of problems and different levels of programmer experience are necessary.

11.
12.

Context

The constant changes in today’s business requirements demand continuous database revisions. Hence, database structures, not unlike software applications, deteriorate during their lifespan and thus require refactoring in order to achieve a longer lifespan. Although unit tests support changes to application programs and refactoring, there is currently a lack of testing strategies for database schema evolution.

Objective

This work examines the challenges of database schema evolution and explores the possibility of using various testing strategies to assist with it. Specifically, the work proposes a novel unit test approach for application code that accesses databases, with the objective of proactively evaluating the code against the altered database.

Method

The approach was validated through the implementation of a testing framework in conjunction with a sample application and a relatively simple database schema. Although the schema in this study was simple, it nevertheless sufficed to demonstrate the advantages of the proposed approach.
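
A minimal sketch of the proactive check described above, using sqlite3 and an invented schema change: run an application SELECT, written against the old schema, on the evolved schema, and fail the test if the statement no longer matches it.

    import sqlite3
    import unittest

    NEW_SCHEMA = "CREATE TABLE customer (id INTEGER, full_name TEXT)"  # 'name' was renamed
    APP_QUERY  = "SELECT id, name FROM customer"   # embedded in the application code

    class QueryAgainstEvolvedSchema(unittest.TestCase):
        def test_select_still_valid(self):
            db = sqlite3.connect(":memory:")
            db.execute(NEW_SCHEMA)
            try:
                db.execute(APP_QUERY)   # raises if referenced columns vanished
            except sqlite3.OperationalError as e:
                self.fail(f"application query broken by schema change: {e}")

    if __name__ == "__main__":
        unittest.main()   # this test fails, flagging the SELECT for modification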

Results

After changes in the database schema, the proposed approach found all of the SELECT statements, and the majority of the other statements, that required modification in the application code. Due to its efficiency with SELECT statements, the proposed approach is expected to be even more successful with data warehouse applications, where SELECT statements are dominant.

Conclusion

The proposed unit test approach for code that accesses databases has proven successful in evaluating application code against the evolved database. In particular, the approach is simple and straightforward to implement, which makes it easy to adopt in practice.

13.

Context

The increasing presence of Object-Oriented (OO) programs in industrial systems is progressively drawing the attention of mutation researchers toward this paradigm. However, while research contributions on this topic are plentiful, empirical results are still marginal and mostly provided by researchers rather than practitioners.

Objective

This article reports our experience using mutation testing to measure the effectiveness of an automated test data generator from a user perspective.

Method

In our study, we applied both traditional and class-level mutation operators to FaMa, an open source Java framework currently being used for research and commercial purposes. We also compared and contrasted our results with the data obtained from some motivating faults found in the literature and two real tools for the analysis of feature models, FaMa and SPLOT.
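
To make "mutation operator" concrete, a toy Python illustration (the study applied Java operators to FaMa, not this): mechanically flip the first + into - and check whether a test expecting the original result would notice the mutant.

    import ast

    source = "def price_with_tax(p):\n    return p + p * 0.2\n"

    class AddToSub(ast.NodeTransformer):
        """A traditional arithmetic-operator mutant: first '+' becomes '-'."""
        def __init__(self):
            self.done = False
        def visit_BinOp(self, node):
            self.generic_visit(node)
            if isinstance(node.op, ast.Add) and not self.done:
                self.done = True
                node.op = ast.Sub()
            return node

    tree = AddToSub().visit(ast.parse(source))
    ns = {}
    exec(compile(ast.fix_missing_locations(tree), "<mutant>", "exec"), ns)

    result = ns["price_with_tax"](10)
    print("mutant output:", result)           # 8.0 instead of the expected 12.0
    print("killed by test:", result != 12.0)  # a test asserting 12.0 kills this mutant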

Results

Our results are summarized in a number of lessons learned supporting previous isolated results as well as new findings that hopefully will motivate further research in the field.

Conclusion

We conclude that mutation testing is an effective and affordable technique for measuring the effectiveness of test mechanisms in OO systems. We found, however, several practical limitations in current tool support that should be addressed to facilitate the work of testers. We also found specific techniques and tools for applying mutation testing at the system level to be lacking.

14.

Context

In the long run, features of a software product line (SPL) evolve with respect to changes in stakeholder requirements and system contexts. Neither domain engineering nor requirements engineering handles such co-evolution of requirements and contexts explicitly, making it especially hard to reason about the impact of co-changes in complex scenarios.

Objective

In this paper, we propose a problem-oriented and value-based method for variability evolution analysis. The method takes both kinds of changes (requirements and contexts) into account during the life of an evolving software product line.

Method

The proposed method extends the core requirements engineering ontology with notions for representing variability-intensive problem decomposition and evolution. On the basis of problem orientation, the analysis method identifies candidate changes, detects affected features, and evaluates their contributions to the value of the SPL.

Results and Conclusion

The process of applying the analysis method is illustrated using a concrete case study of an evolving enterprise software system, which confirmed that tracing back to requirements and contextual changes is an effective way to understand the evolution of variability in the software product line.

15.

Context

Variability management (VM) is one of the most important activities of software product-line engineering (SPLE), which aims to develop software-intensive systems using platforms and mass customization. VM encompasses the activities of eliciting and representing variability in software artefacts, establishing and managing dependencies among different variabilities, and supporting the exploitation of the variabilities for building and evolving a family of software systems. The software product line (SPL) community has devoted a huge amount of effort over the last two decades to developing approaches for dealing with variability-related challenges, and several dozen VM approaches have been reported. However, there has been no systematic effort to study how the reported VM approaches have been evaluated.

Objective

The objectives of this research are to review the status of evaluation of reported VM approaches and to synthesize the available evidence about the effects of the reported approaches.

Method

We carried out a systematic literature review of the VM approaches in SPLE reported from the 1990s until December 2007.

Results

We selected 97 papers according to our inclusion and exclusion criteria. The selected papers appeared in 56 publication venues. We found that only a small number of the reviewed approaches had been evaluated using rigorous scientific methods. A detailed investigation of the reviewed studies that employed empirical research methods revealed significant quality deficiencies in various aspects of the quality assessment criteria used. The synthesis of the available evidence showed that all studies except one reported only positive effects.

Conclusion

The findings from this systematic review show that a large majority of the reported VM approaches have not been sufficiently evaluated using scientifically rigorous methods. The available evidence is sparse and the quality of the presented evidence is quite low. The findings highlight the areas in need of improvement, i.e., rigorous evaluation of VM approaches. However, the reported evidence is quite consistent across different studies. That means the proposed approaches may be very beneficial when they are applied properly in appropriate situations. Hence, it can be concluded that further investigations need to pay more attention to the contexts under which different approaches can be more beneficial.

16.

Context

A software artefact typically makes its functionality available through a specialized Application Programming Interface (API) describing the set of services offered to client applications. In fact, building any software system usually involves managing a plethora of APIs, which complicates the development process. In Model-Driven Engineering (MDE), where models are the key elements of any software engineering activity, this API management should take place at the model level. Therefore, tools that facilitate the integration of APIs and MDE are clearly needed.

Objective

Our goal is to automate the implementation of API-MDE bridges for supporting both the creation of models from API objects and the generation of such API objects from models. In this sense, this paper presents the API2MoL approach, which provides a declarative rule-based language to easily write mapping definitions to link API specifications and the metamodel that represents them. These definitions are then executed to convert API objects into model elements or vice versa. The approach also allows both the metamodel and the mapping to be automatically obtained from the API specification (bootstrap process).
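
API2MoL's own rule language is not reproduced here; as a heavily simplified stand-in, this sketch expresses a mapping definition as plain Python data and executes it in both directions between an invented API class and a model element:

    # Hypothetical mapping rule: pair an API class with a metamodel element
    # and link their features (api attribute -> model attribute).
    rule = {"api_class": "Button", "model_element": "Widget",
            "features": {"text": "label", "isEnabled": "enabled"}}

    class Button:                       # invented API class for the demo
        def __init__(self):
            self.text, self.isEnabled = "", False

    def api_to_model(obj, rule):
        """Build a model element (a plain dict here) from an API object."""
        return {"type": rule["model_element"],
                **{m: getattr(obj, a) for a, m in rule["features"].items()}}

    def model_to_api(elem, rule, api_cls):
        """Rebuild an API object from a model element (the reverse direction)."""
        obj = api_cls()
        for a, m in rule["features"].items():
            setattr(obj, a, elem[m])
        return obj

    b = Button(); b.text, b.isEnabled = "OK", True
    elem = api_to_model(b, rule)
    print(elem)                                    # {'type': 'Widget', 'label': 'OK', 'enabled': True}
    print(vars(model_to_api(elem, rule, Button)))  # {'text': 'OK', 'isEnabled': True}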

Method

After implementing the API2MoL engine, its correctness was validated using several APIs. Since APIs are normally large, we then developed a tool to implement the bootstrap process, which was also validated.

Results

We provide a toolkit (language and bootstrap tool) for the creation of bridges between APIs and MDE. The current implementation focuses on Java APIs, although its adaptation to other statically typed object-oriented languages is straightforward. The correctness, expressiveness and completeness of the approach have been validated with the Swing, SWT and JTwitter APIs.

Conclusion

API2MoL frees developers from having to manually implement the tasks of obtaining models from API objects and generating such objects from models. This helps to manage API models in MDE-based solutions.

17.

Context

Software product lines (SPL) are used in industry to achieve more efficient software development. However, the testing side of SPL is underdeveloped.

Objective

This study aims at surveying existing research on SPL testing in order to identify useful approaches and needs for future research.

Method

A systematic mapping study is conducted to find as much literature as possible, and the 64 papers found are classified with respect to focus, research type and contribution type.

Results

A majority of the papers (64%) are of the proposal research type. System testing is the largest group with respect to research focus (40%), followed by management (23%). Method contributions are in the majority.

Conclusions

More validation and evaluation research is needed to provide a better foundation for SPL testing.

18.

Context

Agile information systems development (ISD) has received much attention from both the practitioner and researcher community over the last 10-15 years. However, it is still unclear what precisely constitutes agile ISD.

Objective

Based on four empirical studies conducted over a 10-year period from 1999 to 2008, the objective of this paper is to show how the meaning and practice of agile ISD have evolved over time and, on this basis, to speculate about what comes next.

Method

Four phases of research have been conducted using a grounded theory approach. In each phase, qualitative interviews were held in American and/or Danish companies, and a grounded theory was inductively discovered through careful data analysis. Subsequently, the four resulting theories were analyzed for common themes, and a global theory was identified across the empirical data.

Results

In 1999 companies were developing software at high speed in a desperate rush to be first to market. In 2001 a new high-speed/quick-results development process had become established practice. In 2003 changes in the market created the need for a more balanced view of speed and quality, and in 2008 companies were successfully combining agile and plan-driven approaches to achieve the benefits of both. The studies reveal a two-stage pattern in which dramatic changes in the market cause disruption of established practices and process adaptations, followed by consolidation of the lessons learnt into a once again stable software development process.

Conclusion

The cyclical history of punctuated process evolution makes it possible to distinguish pre-agility from current practices (agility), and on this basis, to speculate about post-agility: a possible next cycle of software process evolution concerned with proactively pursuing the dual goal of agility and alignment through a diversity of means.

19.

Context

An optimal software development process is regarded as being dependent on the situational characteristics of individual software development settings. Such characteristics include the nature of the application(s) under development, team size, requirements volatility and personnel experience. However, no comprehensive reference framework of the situational factors affecting the software development process is presently available.

Objective

The absence of such a comprehensive reference framework of the situational factors affecting the software development process is problematic not just because it inhibits our ability to optimise the software development process, but perhaps more importantly, because it potentially undermines our capacity to ascertain the key constraints and characteristics of a software development setting.

Method

To address this deficiency, we have consolidated a substantial body of related research into an initial reference framework of the situational factors affecting the software development process. To support the data consolidation, we have applied rigorous data coding techniques from Grounded Theory and we believe that the resulting framework represents an important contribution to the software engineering field of knowledge.

Results

The resulting reference framework of situational factors consists of eight classifications and 44 factors that inform the software process. We believe that the situational factor reference framework presented herein represents a sound initial reference framework for the key situational elements affecting the software process definition.

Conclusion

In addition to providing a useful reference listing for the research community and for committees engaged in the development of standards, the reference framework also provides support for practitioners who are challenged with defining and maintaining software development processes. Furthermore, this framework can be used to develop a profile of the situational characteristics of a software development setting, which in turn provides a sound foundation for software development process definition and optimisation.

20.

Context

Method engineering approaches are often based on the assumption that method users are able to explicitly express their situational method requirements. Similar to systems requirements, method requirements are often vague and hard to explicate. In this paper we address the issue of involving method users early in method configuration. This is done through borrowing ideas from user-centered design and prototyping, and implementing them on the method engineering layer.

Objective

We design a computerized tool, MC Sandbox, to capture method requirements through method-user-centered method configuration, hence bridging the gap between systems developers’ and method engineers’ understanding of, and expectations of, a situational method.

Method

The research method adopted can be characterized as multi-grounded action research. Our implementation of multi-grounded action research follows the traditional ‘canonical’ action research method, which has cycles of diagnosing, action planning, action taking, evaluating, and specifying learning. The research project comprised three such action research cycles where 10 action cases were performed.

Results

MC Sandbox has proven useful in eliciting and negotiating method requirements in an ongoing dialog between the method users and the method engineers during configuration workshops. The results also show that the method engineer role rotated among the systems developers and that they were indeed committed to the negotiated methods during the systems development projects.

Conclusion

It is possible for method users to actively participate in constructing suitable situational methods if they are provided with appropriate high-level modelling concepts, such as method components, configuration packages and configuration templates. This way, the project members’ understanding of the current development practice develops incrementally, both in terms of understanding the needs and the available method support. In addition, both method requirements and commitments are made explicit, which is important when working with method configuration from a collaboration point of view.
