Similar Documents
20 similar documents found; search took 31 ms
1.
Verification of component-based systems presents new challenges not yet completely addressed by existing testing techniques. This paper proposes a new approach for automatically testing highly reconfigurable component-based systems, i.e., systems that can be obtained by changing some components. The paper presents an industrial case that motivates our research and proposes a testing infrastructure that tracks run-time information for components. The collected information is used for automatically testing new versions of existing components and new configurations of existing systems.
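A minimal sketch of the record-and-replay idea such a testing infrastructure implies: wrap a component, log its invocations at run time, and re-run the log against a new version or configuration. The component interface, the captured fields, and the example Adder classes are assumptions for illustration, not the paper's actual API.

```python
import json


class RecordingProxy:
    """Wraps a component and logs every call so it can be replayed later."""

    def __init__(self, component, log):
        self._component = component
        self._log = log

    def invoke(self, method, *args):
        result = getattr(self._component, method)(*args)
        self._log.append({"method": method, "args": list(args), "result": result})
        return result


def replay_against(new_component, log):
    """Re-run recorded calls against a new component version and report mismatches."""
    failures = []
    for entry in log:
        actual = getattr(new_component, entry["method"])(*entry["args"])
        if actual != entry["result"]:
            failures.append((entry, actual))
    return failures


class AdderV1:
    def add(self, a, b):
        return a + b


class AdderV2:  # candidate new version of the same component
    def add(self, a, b):
        return a + b


if __name__ == "__main__":
    log = []
    proxy = RecordingProxy(AdderV1(), log)
    proxy.invoke("add", 2, 3)
    proxy.invoke("add", -1, 1)
    print(json.dumps(log, indent=2))
    print("mismatches:", replay_against(AdderV2(), log))
```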

2.
There is an increasing interest in techniques that support analysis and measurement of fielded software systems. These techniques typically deploy numerous instrumented instances of a software system, collect execution data when the instances run in the field, and analyze the remotely collected data to better understand the system's in-the-field behavior. One common need for these techniques is the ability to distinguish execution outcomes (e.g., to collect only data corresponding to some behavior or to determine how often and under which condition a specific behavior occurs). Most current approaches, however, do not perform any kind of classification of remote executions and either focus on easily observable behaviors (e.g., crashes) or assume that outcomes' classifications are externally provided (e.g., by the users). To address the limitations of existing approaches, we have developed three techniques for automatically classifying execution data as belonging to one of several classes. In this paper, we introduce our techniques and apply them to the binary classification of passing and failing behaviors. Our three techniques impose different overheads on program instances and, thus, each is appropriate for different application scenarios. We performed several empirical studies to evaluate and refine our techniques and to investigate the trade-offs among them. Our results show that 1) the first technique can build very accurate models, but requires a complete set of execution data; 2) the second technique produces slightly less accurate models, but needs only a small fraction of the total execution data; and 3) the third technique allows for even further cost reductions by building the models incrementally, but requires some sequential ordering of the software instances' instrumentation.
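The following is a hedged sketch of the core step shared by such techniques: learning a classifier that labels an execution as passing or failing from lightweight execution profiles. The profile features (event counts), the synthetic data, and the choice of a decision tree are assumptions; the paper's three techniques differ mainly in how much of this data they need and when it is collected.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Each row is one remote execution: counts of instrumented events (e.g., branches hit).
n_events = 20
passing = rng.poisson(lam=4.0, size=(200, n_events))
failing = rng.poisson(lam=4.0, size=(200, n_events))
failing[:, 3] += rng.poisson(lam=6.0, size=200)  # one event fires more often on failures

X = np.vstack([passing, failing])
y = np.array([0] * 200 + [1] * 200)  # 0 = pass, 1 = fail

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)
print("accuracy on held-out executions:", model.score(X_test, y_test))
```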

3.
This article presents a software visualization framework which can help project managers and team leaders in overseeing issues and their management in software development. To automate the framework, a dashboard tool called IssuePlayer is developed. The tool is used to study the trends in which different types of issues (e.g., bugs, support requests) are submitted, handled and piled up in software projects, and uses that information to identify process symptoms, e.g., the times when the code maintenance team is not responsive enough. The interactive nature of the tool enables identification of the team members who have not been as active as they were expected to be in such cases. The user can play, pause, rewind and forward the issue management histories using the tool. The tool is empirically evaluated by two industrial partners in North America and Europe. The survey and qualitative feedback support the usefulness and effectiveness of the tool in assessing the issue management processes and the performance of team members. The tool can be used in parallel with the textual information provided by issue management tools (e.g., BugZilla) to enable team leaders to conduct effective and successful monitoring of issue management in software development projects.
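As a rough sketch of the kind of trend such a dashboard plays back, the snippet below computes the number of open issues ("pile-up") per day from opened/closed dates; the issue records are invented examples, not data from the reported case studies.

```python
from datetime import date, timedelta

issues = [
    {"id": 1, "type": "bug", "opened": date(2024, 1, 1), "closed": date(2024, 1, 4)},
    {"id": 2, "type": "support", "opened": date(2024, 1, 2), "closed": None},
    {"id": 3, "type": "bug", "opened": date(2024, 1, 3), "closed": date(2024, 1, 6)},
]

start, end = date(2024, 1, 1), date(2024, 1, 7)
day = start
while day <= end:
    # an issue counts as open on a day if it was opened on or before that day
    # and has not yet been closed
    open_count = sum(
        1
        for i in issues
        if i["opened"] <= day and (i["closed"] is None or i["closed"] > day)
    )
    print(day.isoformat(), "open issues:", open_count)
    day += timedelta(days=1)
```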

4.
5.
This paper surveys research in developing computational models for integrating linguistic and visual information. It begins with a discussion of systems which have been actually implemented and continues with computationally motivated theories of human cognition. Since existing research spans several disciplines (e.g., natural language understanding, computer vision, knowledge representation), as well as several application areas, an important contribution of this paper is to categorize existing research based on inputs and objectives. Finally, some key issues related to integrating information from two such diverse sources are outlined and related to existing research. Throughout, the key issue addressed is the correspondence problem, namely how to associate visual events with words and vice versa.

6.
Two factors limit the utility of reverse engineering technology for many distributed software systems. First, with the exception of tools that support Ada and its explicit tasking constructs, reverse engineering tools fail to capture information concerning the flow of information between tasks. Second, relatively few reverse engineering tools are available for programming languages in which many older legacy applications were written (e.g., Jovial, CMS-2, and various assembly languages). We describe approaches that were developed for overcoming these limitations. In particular, we have implemented an approach for automatically extracting task flow information from a command and control system written in CMS-2. Our approach takes advantage of a small amount of externally provided design knowledge in order to recover design information relevant to the distributed nature of the target system.

7.
张洋, 王涛, 吴逸文, 尹刚, 王怀民. Journal of Software (软件学报), 2019, 30(5): 1407-1421
Social coding enables knowledge in open source communities to spread rapidly. Among this knowledge, bug reports are an important category of software development knowledge that carries specific semantic information. Developers usually link related bug reports manually. Within a software project, discovering and linking related bug reports gives developers more resources and information for resolving a target bug, and thus improves bug-fixing efficiency. However, manually linking bug reports is very time-consuming and depends heavily on developers' own experience and knowledge. Studying how to link related bugs promptly and efficiently is therefore valuable for improving software development efficiency. This paper treats the problem of linking related bugs as a recommendation problem and proposes a hybrid embedding-based approach to related-bug linking that combines a traditional information retrieval technique (TF-IDF) with embedding models from deep learning (word embeddings and document embeddings). Experimental results show that the approach effectively improves the performance of traditional methods and scales well to practical applications.
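A hedged sketch of such a hybrid "TF-IDF + document embedding" similarity for recommending related bug reports is shown below. TruncatedSVD (LSA) stands in for the learned word/document embedding models, and the blending weight and sample reports are illustrative assumptions, not the paper's configuration.

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reports = [
    "crash when saving project file on network drive",
    "application crashes while saving to a mapped network share",
    "dark theme makes toolbar icons hard to see",
    "saving large files is slow over the network",
]

# lexical representation (TF-IDF) and a low-dimensional stand-in for document embeddings
tfidf = TfidfVectorizer(stop_words="english").fit_transform(reports)
embeddings = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

alpha = 0.5  # blend between lexical and embedding similarity (assumed)
sim = alpha * cosine_similarity(tfidf) + (1 - alpha) * cosine_similarity(embeddings)

query = 0  # recommend reports related to the first one
ranked = sorted(((sim[query][j], j) for j in range(len(reports)) if j != query), reverse=True)
for score, j in ranked:
    print(f"report {j}: similarity {score:.2f} -> {reports[j]}")
```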

8.
9.
Interacting with Computers, 2006, 18(5): 1070-1083
As medical devices and information systems become increasingly complex, the issue of how to support users becomes more important. However, many current help systems are often ignored or found to be too complicated to use by clinicians. In this article, we suggest an approach that allows designers to think about user support and automating tasks in ways users find more acceptable. The issue we address in particular is the notion of transparency and to what extent it allows the end-user to understand and critique the advice given. We have found that one central problem with existing support systems is that often the end-user does not understand the differences between the automated parts and the parts that have to be done manually. By taking aspects of transparency and control into account when designing an automated tool it seems that some of the more refractory issues that help systems pose for professional users can be addressed.

10.
While information visualization technologies have transformed our life and work, designing information visualization systems still faces challenges. Non-expert users or end-users need toolkits that allow for rapid design and prototyping, along with support for unified data structures suitable for different data types (e.g., tree, network, temporal, and multi-dimensional data), various visualizations, and interaction tasks. To address these issues, we designed DaisyViz, a model-based user interface toolkit, which enables end-users to rapidly develop domain-specific information visualization applications without traditional programming. DaisyViz is based on a user interface model for information visualization (UIMI), which includes three declarative models: data model, visualization model, and control model. In the development process, a user first constructs a UIMI with interactive visual tools. The results of the UIMI are then parsed to generate a prototype system automatically. In this paper, we discuss the concept of UIMI, describe the architecture of DaisyViz, and show how to use DaisyViz to build an information visualization system. We also present a usability study of DaisyViz that we conducted. Our findings indicate DaisyViz is an effective toolkit to help end-users build interactive information visualization systems.
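The snippet below illustrates the model-based idea in miniature: a declarative interface model (data, visualization, control) written as plain data and parsed to assemble a prototype. The field names and the text renderer are assumptions, not DaisyViz's actual UIMI syntax.

```python
# A declarative user interface model: data, visualization, and control parts.
uimi = {
    "data": {"source": "sales.csv", "fields": ["region", "revenue"]},
    "visualization": {"type": "bar", "x": "region", "y": "revenue"},
    "control": {"interactions": ["filter", "zoom"]},
}


def render_bar(rows, spec):
    # Trivial text renderer standing in for a real visualization component.
    for row in rows:
        bar = "#" * int(row[spec["y"]] // 10)
        print(f'{row[spec["x"]]:>8} | {bar}')


RENDERERS = {"bar": render_bar}


def build_prototype(model, rows):
    """Parse the declarative model and dispatch to a concrete renderer."""
    spec = model["visualization"]
    print(f'Interactions enabled: {", ".join(model["control"]["interactions"])}')
    RENDERERS[spec["type"]](rows, spec)


build_prototype(uimi, [{"region": "North", "revenue": 120}, {"region": "South", "revenue": 80}])
```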

11.
12.
In Open Source Software (OSS), users report different issues on issue tracking systems. Due to time constraints, it is not possible for developers to resolve all the issues in the current release. The leftover issues which are not addressed in the current release are added to the next release's issue content. Fixing issues results in code changes that can be quantified with a measure known as the complexity of code changes, or entropy. We have developed a 2-dimensional entropy-based mathematical model to determine the leftover issues of different releases of five Apache open source products. A model for release time prediction using entropy is also proposed. This model maximizes users' satisfaction in terms of the number of issues addressed.
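As a worked example of the entropy measure the model builds on, the snippet below computes the Shannon entropy of code changes for one period: the more evenly changes are spread across files, the higher the entropy. The file-change counts are invented; the paper's 2-dimensional model itself is not reproduced here.

```python
import math


def change_entropy(changes_per_file):
    # Shannon entropy H = -sum(p_i * log2(p_i)) over the file-change distribution.
    total = sum(changes_per_file)
    probs = [c / total for c in changes_per_file if c > 0]
    return -sum(p * math.log2(p) for p in probs)


focused = [20, 1, 1]    # most fixes touch a single file
scattered = [7, 7, 8]   # fixes spread across the code base
print("focused release entropy:  ", round(change_entropy(focused), 3))
print("scattered release entropy:", round(change_entropy(scattered), 3))
```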

13.
Agent-oriented software engineering and software product lines are two promising software engineering techniques. Recent research work has been exploring their integration, namely multi-agent systems product lines (MAS-PLs), to promote reuse and variability management in the context of complex software systems. However, current product derivation approaches do not provide specific mechanisms to deal with MAS-PLs. This is essential because they typically encompass several concerns (e.g., trust, coordination, transaction, state persistence) that are constructed on the basis of heterogeneous technologies (e.g., object-oriented frameworks and platforms). In this paper, we propose the use of multi-level models to support the configuration knowledge specification and automatic product derivation of MAS-PLs. Our approach provides an agent-specific architecture model that uses abstractions and instantiation rules that are relevant to this application domain. In order to evaluate the feasibility and effectiveness of the proposed approach, we have implemented it as an extension of an existing product derivation tool, called GenArch. The approach has also been evaluated through the automatic instantiation of two MAS-PLs, demonstrating its potential and benefits to product derivation and configuration knowledge specification.

14.
Context: Enterprise software systems (e.g., enterprise resource planning software) are often deployed in different contexts (e.g., different organizations or different business units or branches of one organization). However, even though organizations, business units or branches have the same or similar business goals, they may differ in how they achieve these goals. Thus, many enterprise software systems are subject to variability and adapted depending on the context in which they are used. Objective: Our goal is to provide a snapshot of variability in large scale enterprise software systems. We aim at understanding the types of variability that occur in large industrial enterprise software systems. Furthermore, we aim at identifying how variability is handled in such systems. Method: We performed an exploratory case study in two large software organizations, involving two large enterprise software systems. Data were collected through interviews and document analysis. Data were analyzed following a grounded theory approach. Results: We identified seven types of variability (e.g., functionality, infrastructure) and eight mechanisms to handle variability (e.g., add-ons, code switches). Conclusions: We provide generic types for classifying variability in enterprise software systems, and reusable mechanisms for handling such variability. Some variability types and handling mechanisms for enterprise software systems found in the real world extend existing concepts and theories. Others confirm findings from previous research literature on variability in software in general and are therefore not specific to enterprise software systems. Our findings also offer a theoretical foundation for describing variability handling in practice. Future work needs to provide more evaluations of the theoretical foundations, and refine variability handling mechanisms into more detailed practices.
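As one concrete illustration of a handling mechanism named in the results ("code switches"), the sketch below adapts the same code base to its deployment context through configuration; the switch names and behaviors are invented for illustration.

```python
# Context-specific configuration acting as a set of code switches.
CONTEXT_CONFIG = {
    "branch_berlin": {"tax_module": "eu_vat", "approval_workflow": True},
    "branch_toronto": {"tax_module": "ca_gst", "approval_workflow": False},
}


def compute_invoice(amount, context):
    config = CONTEXT_CONFIG[context]
    rate = {"eu_vat": 0.19, "ca_gst": 0.05}[config["tax_module"]]
    total = amount * (1 + rate)
    if config["approval_workflow"]:  # code switch: extra step only in some contexts
        print(f"[{context}] invoice of {total:.2f} queued for approval")
    else:
        print(f"[{context}] invoice of {total:.2f} issued directly")


compute_invoice(100.0, "branch_berlin")
compute_invoice(100.0, "branch_toronto")
```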

15.

When custom modeling tools are used for designing complex safety-critical systems (e.g., critical cyber-physical systems), the tools themselves need to be validated by systematic testing to prevent tool-specific bugs from reaching the system. Testing of such modeling tools relies upon an automatically generated set of models as a test suite. While many software testing practices recommend that this test suite should be diverse, model diversity has not been studied systematically for graph models. In this paper, we propose different diversity metrics for models by generalizing and exploiting neighborhood and predicate shapes as abstractions. We evaluate such shape-based diversity metrics using various distance functions in the context of mutation testing of graph constraints and access policies for two separate industrial DSLs. Furthermore, we evaluate the quality (i.e., bug detection capability) of different (random and consistent) model generation techniques for mutation testing purposes.
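A hedged sketch of a neighborhood-shape diversity metric in this spirit: each node is abstracted to a depth-1 shape (its type plus the sorted types of its neighbors), and two models are compared by the Jaccard distance of their shape sets. The model encoding and the specific distance are assumptions; the paper evaluates several shapes and distance functions.

```python
def shapes(model):
    """model: {node: (type, [neighbor nodes])} -> set of depth-1 neighborhood shapes."""
    result = set()
    for node, (ntype, neighbors) in model.items():
        neighbor_types = tuple(sorted(model[n][0] for n in neighbors))
        result.add((ntype, neighbor_types))
    return result


def jaccard_distance(a, b):
    # 0.0 means identical shape sets, 1.0 means no shared shapes.
    return 1 - len(a & b) / len(a | b) if (a | b) else 0.0


model_a = {"s1": ("State", ["t1"]), "t1": ("Transition", ["s2"]), "s2": ("State", [])}
model_b = {"s1": ("State", ["t1"]), "t1": ("Transition", ["s1"]), "s2": ("State", ["t1"])}

print("shapes A:", shapes(model_a))
print("distance between models:", jaccard_distance(shapes(model_a), shapes(model_b)))
```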


16.
The release frequency of software projects has increased in recent years. Adopters of so-called rapid releases—short release cycles, often on the order of weeks, days, or even hours—claim that they can deliver fixed issues (i.e., implemented bug fixes and new features) to users more quickly. However, there is little empirical evidence to support these claims. In fact, our prior work shows that code integration phases may introduce delays for rapidly releasing projects—98% of the fixed issues in the rapidly releasing Firefox project had their integration delayed by at least one release. To better understand the impact that rapid release cycles have on the integration delay of fixed issues, we perform a comparative study of traditional and rapid release cycles. Our comparative study has two parts: (i) a quantitative empirical analysis of 72,114 issue reports from the Firefox project, and (ii) a qualitative study involving 37 participants, who are contributors to the Firefox, Eclipse, and ArgoUML projects. Our quantitative analysis reveals that, surprisingly, fixed issues take a median of 54% (57 days) longer to be integrated in rapid Firefox releases than in traditional ones. To investigate the factors that are related to integration delay in traditional and rapid release cycles, we train regression models that model whether a fixed issue will have its integration delayed or not. Our explanatory models achieve good discrimination (ROC areas of 0.80–0.84) and calibration scores (Brier scores of 0.05–0.16) for rapid and traditional releases. Our explanatory models indicate that (i) traditional releases prioritize the integration of backlog issues, while (ii) rapid releases prioritize issues that were fixed in the current release cycle. Complementary qualitative analyses reveal that participants’ perception of integration delay is tightly related to activities that involve decision making, risk management, and team collaboration. Moreover, the allure of shipping fixed issues faster is a main motivator for adopting rapid release cycles among participants (although this motivation is not supported by our quantitative analysis). Furthermore, to explain why traditional releases deliver fixed issues more quickly, our participants point out the rush for integration in traditional releases and the increased time that is invested on polishing issues in rapid releases. Our results suggest that rapid release cycles may not be a silver bullet for the rapid delivery of new content to users. Instead, our results suggest that the benefits of rapid releases are increased software stability and user feedback.
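The snippet below sketches the kind of explanatory model and scores reported above: a logistic regression over issue features, evaluated with ROC area and Brier score. The two features and the synthetic data are stand-ins, not the Firefox issue reports or the paper's actual factors.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000
backlog_age_days = rng.exponential(30, n)  # how long the fix sat in the backlog (assumed feature)
fix_size_loc = rng.exponential(200, n)     # size of the fix in lines of code (assumed feature)
delayed = (backlog_age_days + 0.05 * fix_size_loc + rng.normal(0, 20, n)) > 45

X = np.column_stack([backlog_age_days, fix_size_loc])
X_train, X_test, y_train, y_test = train_test_split(X, delayed, test_size=0.3, random_state=1)

model = LogisticRegression().fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]
print("ROC area:   ", round(roc_auc_score(y_test, probs), 2))
print("Brier score:", round(brier_score_loss(y_test, probs), 2))
```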

17.
Issue-tracking systems (e.g. JIRA) have increasingly been used in many software projects. An issue could represent a software bug, a new requirement or a user story, or even a project task. A deadline can be imposed on an issue by either explicitly assigning a due date to it, or implicitly assigning it to a release and having it inherit the release’s deadline. This paper presents a novel approach to providing automated support for project managers and other decision makers in predicting whether an issue is at risk of being delayed against its deadline. A set of features (hereafter called risk factors) characterizing delayed issues were extracted from eight open source projects: Apache, Duraspace, Java.net, JBoss, JIRA, Moodle, Mulesoft, and WSO2. Risk factors with good discriminative power were selected to build predictive models to predict if the resolution of an issue will be at risk of being delayed. Our predictive models are able to predict both the extent of the delay and the likelihood of the delay occurrence. The evaluation results demonstrate the effectiveness of our predictive models, achieving on average 79% precision, 61% recall, 68% F-measure, and 83% Area Under the ROC Curve. Our predictive models also have low error rates: on average 0.66 for Macro-averaged Mean Cost-Error and 0.72 for Macro-averaged Mean Absolute Error.
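A small sketch of the feature-selection step described above: candidate risk factors are extracted from issue records and ranked by their discriminative power with respect to delay. The factor names, the toy records, and the use of mutual information as the discriminative-power measure are assumptions for illustration.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

issues = [
    # (number of comments, times the due date changed, reporter's past delays), delayed?
    ((2, 0, 1), 0),
    ((15, 3, 4), 1),
    ((1, 0, 0), 0),
    ((9, 2, 3), 1),
    ((4, 1, 0), 0),
    ((12, 2, 5), 1),
]

X = np.array([factors for factors, _ in issues])
y = np.array([label for _, label in issues])

# Rank candidate risk factors by mutual information with the delay label.
scores = mutual_info_classif(X, y, discrete_features=True, random_state=0)
for name, score in zip(["comments", "due_date_changes", "reporter_past_delays"], scores):
    print(f"{name:>22}: discriminative power {score:.2f}")
```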

18.
19.
The popularity of mobile devices has been steadily growing in recent years. These devices heavily depend on software from the underlying operating systems to the applications they run. Prior research showed that mobile software is different than traditional, large software systems. However, to date most of our research has been conducted on traditional software systems. Very little work has focused on the issues that mobile developers face. Therefore, in this paper, we use data from the popular online Q&A site, Stack Overflow, and analyze 13,232,821 posts to examine what mobile developers ask about. We employ Latent Dirichlet allocation-based topic models to help us summarize the mobile-related questions. Our findings show that developers are asking about app distribution, mobile APIs, data management, sensors and context, mobile tools, and user interface development. We also determine what popular mobile-related issues are the most difficult, explore platform specific issues, and investigate the types (e.g., what, how, or why) of questions mobile developers ask. Our findings help highlight the challenges facing mobile developers that require more attention from the software engineering research and development communities in the future and establish a novel approach for analyzing questions asked on Q&A forums.
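A minimal sketch of LDA-based topic modelling over developer questions in the spirit of this study is given below; the handful of example posts and the topic count are placeholders for the millions of Stack Overflow posts analyzed.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

posts = [
    "how to publish my app to the play store",
    "camera api returns null on some android devices",
    "store user settings in a local sqlite database",
    "gps sensor drains battery in background service",
    "app store rejection because of missing privacy policy",
    "best way to persist json data between app launches",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(posts)

# Fit an LDA topic model and print the top terms of each discovered topic.
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```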

20.
In this paper, a novel framework is developed for leveraging large-scale loosely tagged images for object classifier training by addressing three key issues jointly: (a) spam tags, e.g., some tags are more related to popular query terms than to the image semantics; (b) loose object tags, e.g., multiple object tags are loosely given at the image level without identifying the object locations in the images; (c) missing object tags, e.g., some object tags are missed and thus negative bags may contain positive instances. To address these three issues jointly, our framework consists of the following key components for leveraging large-scale loosely tagged images for object classifier training: (1) distributed image clustering and inter-cluster visual correlation analysis for handling the issue of spam tags by filtering out large amounts of junk images automatically; (2) multiple instance learning with missing tag prediction for dealing with the issues of loose object tags and missing object tags jointly; and (3) structural learning for leveraging the inter-object visual correlations to train large numbers of inter-related object classifiers jointly. Our experiments on large-scale loosely tagged images have provided very positive results.
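The sketch below illustrates the multiple-instance-learning step in a simplified form: object tags are given at the image (bag) level, instances are image regions, and a bag is scored by its most confident region. The random features and the naive label-propagation-plus-max rule are assumptions, not the paper's full formulation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)


def make_bag(has_object):
    """A bag of 5 region feature vectors; positive bags contain one object-like region."""
    regions = rng.normal(0, 1, size=(5, 8))
    if has_object:
        regions[0] += 3.0  # shift one region toward the object's feature pattern
    return regions


bags = [make_bag(i % 2 == 0) for i in range(40)]
bag_labels = np.array([1 if i % 2 == 0 else 0 for i in range(40)])

# Naive MIL baseline: propagate each bag's tag to all of its regions.
X = np.vstack(bags)
y = np.repeat(bag_labels, 5)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# A bag is predicted positive if its most confident region is positive.
bag_scores = [clf.predict_proba(b)[:, 1].max() for b in bags]
pred = np.array(bag_scores) > 0.5
print("bag-level accuracy:", (pred == bag_labels).mean())
```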
