Similar Documents
1.
陈宁, 陈文超, 李文全, 李峰. 《计算机工程与设计》, 2007, 28(15): 3761-3763, 3775
This paper analyzes the need to introduce workflow technology into transportation geographic information systems (GIS), combining the process-control capability of a workflow management system with the spatial-data processing capability of a GIS, and presents a workflow-based transportation GIS framework. Building spatial-information workflows with workflow technology provides a systematic means of planning and managing spatial-information processing. The key technologies involved are described, and on this basis a workflow-based transportation GIS was implemented and successfully applied to traffic business management, meeting the needs of modern traffic administration.

2.
To improve the ability of scientific workflows to handle uncertainty, this paper builds a tree-structured dynamic scientific workflow model. Combined with case-based reasoning, the model meets the dynamism requirements of scientific workflows and improves the adaptivity of scientific workflow management systems. Case-based reuse offers an effective way to address the low repeatability of scientific workflows and to realize multi-level reuse, from individual computational steps up to entire process definitions.
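To make the case-based reasoning step concrete, below is a minimal sketch of case retrieval: a new situation is matched against stored workflow cases by weighted feature similarity. The feature encoding, weights, and names are illustrative assumptions, not the paper's concrete scheme.

```python
# Hypothetical case base: each case pairs a feature description with a
# reusable workflow. Retrieval returns the most similar past case.
def similarity(a, b, weights):
    shared = set(a) & set(b)
    return sum(weights[k] for k in shared if a[k] == b[k]) / sum(weights.values())

def retrieve(case_base, query, weights):
    return max(case_base, key=lambda c: similarity(c["features"], query, weights))

case_base = [
    {"features": {"domain": "genomics", "input": "fastq"}, "workflow": "wf_align"},
    {"features": {"domain": "climate", "input": "netcdf"}, "workflow": "wf_grid"},
]
weights = {"domain": 2.0, "input": 1.0}
query = {"domain": "genomics", "input": "fastq"}
print(retrieve(case_base, query, weights)["workflow"])  # wf_align
```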

3.
Through the research and development of an approval workflow engine based on threshold PERT graphs, a lightweight runtime and configuration tool is provided for approval-process control in information systems. Building on the definition of the threshold PERT graph, the reachability of approval nodes is studied to give the engine an algorithmic basis, and the states of approval entities are analyzed; the engine's runtime, programming interface, and configuration files are then designed and implemented. The model's fitness to requirements is assessed against 20 workflow patterns, and deployment in several information systems demonstrates the engine's stability, configurability, and adaptability.
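As an illustration of the reachability analysis, the sketch below checks which approval nodes can be activated when each node requires a minimum number of activated predecessors — one plausible reading of the threshold in a threshold PERT graph; the paper's exact definition may differ.

```python
from collections import deque

def reachable_nodes(edges, thresholds, start):
    """Nodes whose activation threshold can be met, starting from `start`.

    edges: dict node -> list of successor nodes
    thresholds: dict node -> number of activated predecessors required (default 1)
    """
    activated = {start}
    hits = {}  # node -> count of activated predecessors
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for succ in edges.get(node, []):
            hits[succ] = hits.get(succ, 0) + 1
            if succ not in activated and hits[succ] >= thresholds.get(succ, 1):
                activated.add(succ)
                queue.append(succ)
    return activated

# Example: final approval fires only after both the manager and finance reviews.
edges = {"submit": ["mgr", "fin"], "mgr": ["final"], "fin": ["final"]}
thresholds = {"final": 2}
print(reachable_nodes(edges, thresholds, "submit"))  # all four nodes reachable
```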

4.
Access control is a key part of security in scientific workflow management systems. Building on the role-based access control model of transactional workflow systems and considering the characteristics and requirements of scientific workflows, this paper proposes a user-group-based role access control mechanism (UGRBAC). A dedicated access control module is added to the scientific workflow management system to enforce access control on users both during workflow definition and during workflow execution. This effectively restricts the access of different classes of users to services and resources, and also lets service providers configure access permissions.
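The abstract describes UGRBAC only at a high level; the sketch below shows one plausible shape of a user-group-based RBAC check, where users resolve to groups, groups carry roles, and roles carry permissions. All names and the data model are assumptions.

```python
# Hypothetical UGRBAC tables: user -> group -> roles -> (resource, action) permissions.
GROUP_OF = {"alice": "analysts", "bob": "operators"}
ROLES_OF_GROUP = {"analysts": {"viewer", "designer"}, "operators": {"executor"}}
PERMS_OF_ROLE = {
    "viewer": {("dataset", "read")},
    "designer": {("workflow", "define"), ("dataset", "read")},
    "executor": {("workflow", "run")},
}

def is_allowed(user, resource, action):
    group = GROUP_OF.get(user)
    for role in ROLES_OF_GROUP.get(group, ()):
        if (resource, action) in PERMS_OF_ROLE.get(role, ()):
            return True
    return False

assert is_allowed("alice", "workflow", "define")      # designers may define
assert not is_allowed("bob", "dataset", "read")       # operators may not read
```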

5.
A Workflow Time-Performance Evaluation Method Based on Extended Time Petri Nets (Cited: 6; self-citations: 0; external citations: 6)
Time-performance analysis is an important aspect of workflow model analysis and evaluation. The paper first introduces the general Petri net model of business processes and then builds an extended time model of workflow nets. Based on the reachability graph, it proposes the concepts of simple path graphs and transformable subnets, and simplifies extended time workflow nets using net transformations that preserve response time and branch probabilities. It gives an algorithm for finding transformable subnets and a method for computing the time-performance metrics of workflow models.
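The following worked sketch illustrates the idea behind response-time-preserving net reduction on two simple subnet structures, a sequence and an exclusive choice: each subnet collapses to a single transition with the same expected response time. The paper's transformable-subnet algorithm is more general; this is only an illustration.

```python
def reduce_sequence(times):
    """A chain t1 -> t2 -> ... collapses to one transition."""
    return sum(times)

def reduce_choice(branches):
    """An XOR split with (probability, time) branches collapses likewise."""
    assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9
    return sum(p * t for p, t in branches)

# Example: review (2 h), then either approve (p=0.8, 1 h) or rework (p=0.2, 5 h).
total = reduce_sequence([2.0, reduce_choice([(0.8, 1.0), (0.2, 5.0)])])
print(total)  # 2 + 0.8*1 + 0.2*5 = 3.8 hours expected response time
```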

6.
Research on an Agile Supply Chain Management System Supported by Workflow Technology (Cited: 5; self-citations: 0; external citations: 5)
刘建勋, 张申生. 《计算机工程》, 2001, 27(3): 11-12, 153
The paper first analyzes and summarizes two aspects of supply chain agility. It then proposes a workflow-supported agile supply chain management system architecture, a hybrid of a case-based inter-organizational workflow architecture and a loosely coupled inter-organizational workflow architecture. Finally, the system's agility is analyzed in detail with respect to the first of the two aspects.

7.
尚蕾, 刘茜萍. 《计算机工程》, 2020, 46(5): 122-130, 138
Data placement for scientific workflows in cloud environments has become a hot topic in workflow research. Analyzing the many-to-many relationship between tasks and datasets in a scientific workflow shows that different placement schemes incur different data-transfer costs, which strongly affect the workflow's operating cost. To reduce dataset transfer costs, this paper proposes a data placement method for scientific workflows based on task assignment and dataset replicas. Starting from task assignment, tasks are assigned on the basis of quantitatively computed task dependency; given the assignment, a two-stage replica-based data placement method then optimizes transfer costs during workflow execution. A case study shows that, compared with a workflow-level method, this method effectively reduces the operating cost of scientific workflows.
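A minimal sketch in the flavor of replica-based placement follows: given a task assignment, each dataset (and its replicas) is placed at the data centers where its readers generate the most transfer demand. The greedy cost model and names are illustrative assumptions, not the paper's two-stage algorithm.

```python
def place_datasets(size, readers, task_center, replicas_per_dataset=1):
    """size: dataset -> bytes; readers: dataset -> tasks; task_center: task -> center."""
    placement = {}
    for d, tasks in readers.items():
        # Transfer volume saved if d were stored at each candidate center:
        demand = {}
        for t in tasks:
            c = task_center[t]
            demand[c] = demand.get(c, 0) + size[d]
        # Put replicas at the centers with the highest local demand.
        ranked = sorted(demand, key=demand.get, reverse=True)
        placement[d] = ranked[:replicas_per_dataset]
    return placement

size = {"d1": 10, "d2": 4}
readers = {"d1": ["t1", "t2", "t3"], "d2": ["t3"]}
task_center = {"t1": "dc1", "t2": "dc1", "t3": "dc2"}
print(place_datasets(size, readers, task_center))  # {'d1': ['dc1'], 'd2': ['dc2']}
```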

8.
Research on and improvement of workflow has never stopped. Workflow technology separates application logic from process logic: when building process logic, the heterogeneity of resources can be ignored, and the relevant resources can be folded into a single process logic, effectively organizing all resources in a sensible way. Automating a business process requires not only a detailed specification of the process structure but also a definition of the resources needed during execution. Workflow technology likewise separates the process model from the organizational model. For sound modeling, organizational units, process activities, and the associations between them all need careful study. This paper analyzes the organizational dimension of workflow technology across the entire workflow life cycle.

9.
Workflow Time Prediction Using Linear Regression and Weighted Allocation (Cited: 1; self-citations: 1; external citations: 0)
To better estimate a reasonable design time for a new design task and provide process designers with sound, optimized decision support, the design process in the workflow is predicted by linear regression, with project requirements and degree of innovation as the evaluation criteria. A significance test is applied to the resulting regression equation, and a confidence interval is given for the predicted time. A weighted allocation method, with an explicit formula, then apportions the predicted time across the workflow's task nodes, and the progress of the workflow and of each task node is estimated using a normal distribution.
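The sketch below reproduces the two steps the abstract describes, under simplifying assumptions: an ordinary-least-squares regression of design time on requirement and innovation scores with a prediction interval, followed by weighted allocation of the predicted total across workflow nodes. The data, weights, and variable names are illustrative.

```python
import numpy as np
from scipy import stats

# Historical projects: [requirement score, innovation score] -> design time (h).
X = np.array([[3, 2], [5, 4], [4, 3], [6, 5], [2, 1], [5, 3]], dtype=float)
y = np.array([40, 75, 58, 90, 25, 70], dtype=float)

A = np.column_stack([np.ones(len(X)), X])          # add intercept column
beta, res, *_ = np.linalg.lstsq(A, y, rcond=None)  # OLS fit

n, p = A.shape
dof = n - p
s2 = float(res[0]) / dof                           # residual variance

x_new = np.array([1.0, 4.0, 4.0])                  # new task: req=4, innov=4
t_pred = float(x_new @ beta)
se = np.sqrt(s2 * (1 + x_new @ np.linalg.inv(A.T @ A) @ x_new))
t95 = stats.t.ppf(0.975, dof)                      # 95% prediction interval
print(f"predicted {t_pred:.1f} h, 95% PI [{t_pred - t95*se:.1f}, {t_pred + t95*se:.1f}]")

# Weighted allocation of the predicted total across nodes (weights assumed):
weights = {"concept": 0.2, "detail_design": 0.5, "review": 0.3}
print({node: round(w * t_pred, 1) for node, w in weights.items()})
```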

11.
While existing work concentrates on developing QoS models of business workflows and Web services, few tools have been developed to support the monitoring and performance analysis of scientific workflows in Grids. This paper describes novel Grid services for dynamic instrumentation of Grid-based applications and for performance monitoring and analysis of Grid scientific workflows. We describe a Grid dynamic instrumentation service that provides a widely accessible interface through which other services and users can conduct dynamic instrumentation of Grid applications at runtime. We introduce a Grid performance analysis service for Grid scientific workflows. The analysis service utilizes various types of data, including workflow graphs, monitoring data of resources, execution status of activities, and performance measurements obtained from the dynamic instrumentation of invoked applications, and provides a rich set of functionalities and features to support the online monitoring and performance analysis of scientific workflows. Workflows and their relevant information, including performance metrics, are stored and used for comparing the performance of constructs of different workflows and for supporting multi-workflow analysis. The work described in this paper is supported in part by the Austrian Science Fund as part of the Aurora Project under contract SFBF1104 and by the European Union through the IST-2002-511385 project K-WfGrid.

12.
Nowadays, more and more computer-based scientific experiments need to handle massive amounts of data. Their data processing consists of multiple computational steps with dependencies among them. A data-intensive scientific workflow is useful for modeling such a process. Since the sequential execution of data-intensive scientific workflows can take a long time, Scientific Workflow Management Systems (SWfMSs) should enable parallel execution of data-intensive scientific workflows and exploit resources distributed across different infrastructures such as grid and cloud. This paper provides a survey of data-intensive scientific workflow management in SWfMSs and their parallelization techniques. Based on a SWfMS functional architecture, we give a comparative analysis of the existing solutions. Finally, we identify research issues for improving the execution of data-intensive scientific workflows in a multisite cloud.

13.
Scientific workflows have become a valuable tool for large-scale data processing and analysis. This has led to the creation of specialized online repositories to facilitate workflow sharing and reuse. Over time, these repositories have grown to sizes that call for advanced methods to support workflow discovery, in particular similarity search. Effective similarity search requires both high-quality algorithms for the comparison of scientific workflows and efficient strategies for indexing, searching, and ranking of search results. Yet, the graph structure of scientific workflows poses severe challenges to each of these steps. Here, we present a complete system for effective and efficient similarity search in scientific workflow repositories, based on the Layer Decomposition approach to scientific workflow comparison. Layer Decomposition specifically accounts for the directed dataflow underlying scientific workflows and, compared with other state-of-the-art methods, delivers the best results for similarity search at comparatively low runtimes. Stacking Layer Decomposition with even faster, structure-agnostic approaches allows us to use proven, off-the-shelf tools for workflow indexing to further reduce runtimes and scale similarity search to the sizes of current repositories.
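As a hint of how Layer Decomposition respects directed dataflow, the sketch below computes the topological layers of a workflow DAG, the representation on which workflows can then be compared. The layering shown here is a simplified stand-in for the published algorithm.

```python
def layers(deps):
    """deps: dict node -> set of predecessor nodes. Returns a list of layers."""
    remaining = dict(deps)
    result = []
    while remaining:
        # A layer is every node whose predecessors have all been emitted.
        layer = sorted(n for n, pre in remaining.items()
                       if not (pre & remaining.keys()))
        if not layer:
            raise ValueError("cycle detected")
        result.append(layer)
        for n in layer:
            del remaining[n]
    return result

wf = {"load": set(), "clean": {"load"}, "align": {"load"},
      "merge": {"clean", "align"}, "plot": {"merge"}}
print(layers(wf))  # [['load'], ['align', 'clean'], ['merge'], ['plot']]
```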

14.
Workflow technology continues to play an important role as a means for specifying and enacting computational experiments in modern science. Reusing and re-purposing workflows allow scientists to do new experiments faster, since the workflows capture useful expertise from others. As workflow libraries grow, scientists face the challenge of finding workflows appropriate for their task, understanding what each workflow does, and reusing relevant portions of a given workflow. We believe that workflows would be easier to understand and reuse if high-level views (abstractions) of their activities were available in workflow libraries. As a first step towards obtaining these abstractions, we report in this paper on the results of a manual analysis performed over a set of real-world scientific workflows from Taverna, Wings, Galaxy and Vistrails. Our analysis has resulted in a set of scientific workflow motifs that outline (i) the kinds of data-intensive activities that are observed in workflows (Data-Operation motifs), and (ii) the different manners in which activities are implemented within workflows (Workflow-Oriented motifs). These motifs are helpful to identify the functionality of the steps in a given workflow, to develop best practices for workflow design, and to develop approaches for automated generation of workflow abstractions.

15.
Recently, scientific workflows have emerged as a platform for automating and accelerating data processing and data sharing in scientific communities. Many scientific workflows have been developed for collaborative research projects that involve a number of geographically distributed organizations. Sharing of data and computation across organizations in different administrative domains is essential in such a collaborative environment. Because of the competitive nature of scientific research, it is important to ensure that sensitive information in scientific workflows can be accessed by and propagated to only authorized parties. To address this problem, we present techniques for analyzing how information propagates in scientific workflows. We also present algorithms for incrementally analyzing how information propagates upon every change to an existing scientific workflow.
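A minimal sketch of forward information-flow analysis over a workflow DAG: anything downstream of a sensitive source is flagged as potentially carrying sensitive information. The paper's incremental algorithms avoid recomputing this on every change; this sketch recomputes the whole propagation, and all names are illustrative.

```python
def propagated(succ, sensitive):
    """succ: dict node -> list of successors; sensitive: set of source nodes."""
    tainted, frontier = set(sensitive), list(sensitive)
    while frontier:
        node = frontier.pop()
        for nxt in succ.get(node, []):
            if nxt not in tainted:
                tainted.add(nxt)
                frontier.append(nxt)
    return tainted

succ = {"patient_data": ["anonymize"], "anonymize": ["stats"],
        "public_ref": ["stats"], "stats": ["report"]}
print(propagated(succ, {"patient_data"}))
# {'patient_data', 'anonymize', 'stats', 'report'} -> 'report' needs authorization
```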

16.
Typical patterns of using scientific workflows include their periodical executions using a fixed set of computational resources. Using the statistics from multiple runs, one can accurately estimate task execution and communication times to apply static scheduling algorithms. Several workflows with known estimates could be combined into a set to improve the resulting schedule. In this paper, we consider the mapping of multiple workflows to partially available heterogeneous resources. The problem is how to fill free time windows with tasks from different workflows, taking into account users’ requirements on the urgency of the results of calculations. To estimate the quality of schedules for several workflows with various soft deadlines, we introduce a unified metric incorporating levels of meeting constraints and fairness of resource distribution. The main goal of the work was to develop a set of algorithms implementing different scheduling strategies for multiple workflows with soft deadlines in a non-dedicated environment, and to perform a comparative analysis of these strategies. We study how time restrictions (given by resource providers and users) influence the quality of schedules, and which scheme of grouping and ordering the tasks is the most effective for the batched scheduling of non-urgent workflows. Experiments with several types of synthetic and domain-specific sets of multiple workflows show that: (i) the use of information about time windows and deadlines leads to a significant increase in the quality of static schedules, and (ii) the clustering-based scheduling scheme outperforms task-based and workflow-based schemes. This was confirmed by an evaluation of the studied algorithms on the basis of the CLAVIRE workflow management platform.
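One way to picture such a unified metric is sketched below: a term for soft-deadline satisfaction averaged over workflows, blended with Jain's fairness index over resource shares. The published metric is not reproduced here; the functional forms and the weight alpha are assumptions.

```python
def unified_metric(finish, deadline, share, alpha=0.5):
    """finish/deadline: dict workflow -> time; share: dict workflow -> resource share."""
    # Deadline term: per-workflow satisfaction, clipped to [0, 1], then averaged.
    sat = sum(min(1.0, deadline[w] / finish[w]) for w in finish) / len(finish)
    # Fairness term: Jain's index over resource shares, also in [0, 1].
    xs = list(share.values())
    fair = sum(xs) ** 2 / (len(xs) * sum(x * x for x in xs))
    return alpha * sat + (1 - alpha) * fair

finish = {"wf1": 90, "wf2": 130}
deadline = {"wf1": 100, "wf2": 120}
share = {"wf1": 0.6, "wf2": 0.4}
print(round(unified_metric(finish, deadline, share), 3))  # 0.962
```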

17.
Recently, workflow technologies have been increasingly used in scientific communities. Scientists carry out research by employing scientific workflows to automate computing steps, analyze large data sets and integrate distributed computing processes. This is a challenging task because of insecure procedures in a distributed environment. In this paper, we present an access control framework and models for supporting secure and reliable collaboration. The proposed approaches combine control-flow and data-flow models to describe scientific workflows, and extend the atomicity sphere concept by considering two levels of atomicity abstraction, at the level of processes as well as at the level of data, in order to maintain process consistency and data consistency in the presence of failures. We also present a case study in a scientific research scenario to show the effectiveness of our approaches.

18.
Scientific workflows have emerged as an important tool for combining computational power with data analysis for all scientific domains in e-science, especially in the life sciences. They help scientists to design and execute complex in silico experiments. However, with rising complexity it becomes increasingly impractical to optimize scientific workflows by trial and error. To address this issue, we propose to insert a new optimization phase into the common scientific workflow life cycle. This paper describes the design and implementation of an automated optimization framework for scientific workflows to implement this phase. Our framework was integrated into Taverna, a life-science-oriented workflow management system, and offers a versatile programming interface (API) that enables easy integration of arbitrary optimization methods. We have used this API to develop an example plugin for parameter optimization that is based on a Genetic Algorithm. Two use cases taken from the areas of structural bioinformatics and proteomics demonstrate how our framework facilitates setup, execution, and monitoring of workflow parameter optimization in high performance computing e-science environments.
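A minimal sketch of GA-based workflow parameter optimization in the spirit of the example plugin: candidate parameter vectors are evaluated by running the workflow, then selected, recombined, and mutated. Here run_workflow is a stand-in black-box objective; all names and GA settings are illustrative assumptions.

```python
import random

def run_workflow(params):
    """Stand-in for executing the workflow and returning a fitness score."""
    x, y = params
    return -((x - 3) ** 2 + (y + 1) ** 2)  # toy fitness, maximized at (3, -1)

def optimize(pop_size=20, generations=30, bounds=(-10, 10), mut_sigma=0.5):
    lo, hi = bounds
    pop = [[random.uniform(lo, hi), random.uniform(lo, hi)]
           for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=run_workflow, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]   # crossover
            child = [min(hi, max(lo, g + random.gauss(0, mut_sigma)))
                     for g in child]                              # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=run_workflow)

print(optimize())  # approaches [3, -1]
```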

19.
With the development of new experimental technologies, biologists are faced with an avalanche of data to be computationally analyzed for scientific advancements and discoveries to emerge. Faced with the complexity of analysis pipelines, the large number of computational tools, and the enormous amount of data to manage, there is compelling evidence that many if not most scientific discoveries will not stand the test of time: increasing the reproducibility of computed results is of paramount importance. The objective we set out in this paper is to place scientific workflows in the context of reproducibility. To do so, we define several kinds of reproducibility that can be reached when scientific workflows are used to perform experiments. We characterize and define the criteria that need to be catered for by reproducibility-friendly scientific workflow systems, and use such criteria to place several representative and widely used workflow systems and companion tools within such a framework. We also discuss the remaining challenges posed by reproducible scientific workflows in the life sciences. Our study was guided by three use cases from the life science domain involving in silico experiments.
