Similar Literature
20 similar documents found (search time: 31 ms)
1.
This paper discusses a prototype infrastructure, HydroTerre, that provides researchers, educators and resource managers with seamless access to geospatial/geotemporal data for supporting physics-based numerical models. The prototype defines the supporting data as Essential Terrestrial Variables (ETVs) and includes the data fusion tools necessary to predict and manage surface and groundwater resources that resolve important dynamics of upland stream networks. The evaluation of ecosystem and watershed services, such as the detection and attribution of the impact of climatic change, provides one of many examples of the pressing need for high-resolution, spatially explicit resource assessments in upland catchments. However, the current infrastructure for supporting models and data anywhere in the continental USA (CONUS) must overcome important problems: efficient access to high-resolution geospatial datasets from multiple sources, scalability of geospatial data in support of distributed models, and data-intensive computation for multi-scale, multi-state simulations. We discuss data workflows for web access to ETV data processing in support of catchment modeling, as part of a larger strategy for consuming this data within a framework that enables hydrological modelers to build and test models with fast data access at a United States Geological Survey (USGS) National Hydrography Dataset Hydrological Unit Code (HUC) level-12 scale. Given the prospect of petabytes of existing high-resolution environmental data (NRC, 2012), we restrict our investigation to a small set of ETVs necessary to provide the first level of support for model implementation anywhere in the CONUS, and that resolve important features of upland watersheds (e.g. hill slopes within 1st-, 2nd- and 3rd-order streams).
The paper demonstrates HydroTerre tools for fast ETV data access for web users, and describes the computational resources necessary for using ETVs as the basis for implementing spatially distributed models at scales approaching the native resolution of the data (≥30 m). The Penn State Integrated Hydrologic Model (PIHM) serves as an example, although other models are currently being considered.

2.
Scientific workflows are increasingly used to manage and share scientific computations and methods to analyze data. A variety of systems have been developed that store the executed workflows and make them part of public repositories. However, workflows are published in the idiosyncratic format of the workflow system used for their creation and execution. Browsing, linking and using the stored workflows and their results often becomes a challenge for scientists who may only be familiar with one system. In this paper we present an approach for addressing this issue by publishing and exploiting workflows as data on the Web, with a representation that is independent of the workflow system used to create them. To achieve our goal, we follow the Linked Data Principles to publish workflow inputs, intermediate results, outputs and codes, and we reuse and extend well-established standards such as W3C PROV. We illustrate our approach by publishing workflows and consuming them with different tools designed to address common scenarios for workflow exploitation.
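The publishing idea described in this abstract can be sketched in a few lines: represent workflow steps and data items as PROV-typed resources and serialize them in a system-independent RDF format. This is a minimal illustration only; the URIs and the tiny N-Triples writer below are assumptions, not the paper's actual tooling.

```python
# Sketch: publish a workflow run as Linked Data triples using the W3C PROV
# vocabulary. The example run URIs are illustrative assumptions.
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"
PROV = "http://www.w3.org/ns/prov#"
EX = "http://example.org/run1/"

triples = [
    (EX + "step1", RDF_TYPE, PROV + "Activity"),        # a workflow step
    (EX + "input.csv", RDF_TYPE, PROV + "Entity"),      # its input data
    (EX + "output.csv", RDF_TYPE, PROV + "Entity"),     # its output data
    (EX + "step1", PROV + "used", EX + "input.csv"),
    (EX + "output.csv", PROV + "wasGeneratedBy", EX + "step1"),
]

def to_ntriples(triples):
    """Serialize (subject, predicate, object) URI triples as N-Triples."""
    return "\n".join(f"<{s}> <{p}> <{o}> ." for s, p, o in triples)

print(to_ntriples(triples))
```

Because the output is plain RDF, any Linked Data tool can consume it regardless of which workflow system produced the run.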

3.
4.
Research on Visualization Methods for Flexible Workflows
Visualization is an essential function of a workflow management system. Compared with rigid workflows, visualizing flexible workflows is more complex, since it requires solving the problem of bidirectional dynamic mapping between the visual interface and the workflow. Flexible workflow visualization involves three layers of models, from bottom to top: the flexible workflow model, the visualization storage model, and the visualization interface model. This paper first presents a flexible workflow model based on the OMG (Object Management Group) workflow management specification, then gives the visualization interface model and the visualization storage model, describes the bidirectional mapping process between the workflow and the visual interface, and discusses structural integrity rules for processes. The model and method provide a sound solution to the visualization of flexible workflows; a workflow management system developed on this basis has been deployed at a factory of China Aviation Industry Corporation II and well received by its users.

5.
张磊, 苑伟政, 王伟. 《计算机应用》 (Journal of Computer Applications), 2006, 26(1): 57-60
To automate manufacturing-grid applications, this paper proposes a domain-ontology-based architecture for automatic service composition, together with the corresponding algorithms. The domain ontology is built on three mature manufacturing ontologies (TOVE, STEP, and PSL) and a grid service concept model. The architecture takes a semantically well-formed user goal as input and outputs a reserved, executable workflow. Through a backward recursive composition algorithm that reasons over the domain ontology, it can reuse workflows stored in a local repository and compose new workflows from services across the grid. It supports loosely coupled workflow composition at both the abstract and the concrete level (corresponding to choreography and orchestration, respectively), as well as QoS-oriented workflow selection and workflow reservation. A prototype system was implemented, and experimental results on example cases demonstrate the effectiveness of the architecture and algorithms.
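The backward recursive composition idea can be sketched as a goal-driven search: pick a service whose outputs cover the goal, then recurse on its inputs until everything is grounded in facts the user provides. The toy manufacturing services below are illustrative assumptions, not the paper's actual domain ontology.

```python
# Backward recursive composition sketch: services are described by the
# concepts they consume and produce (all names here are assumptions).
SERVICES = {
    # name: (input concepts, output concepts)
    "cad_design":   ({"requirement"}, {"cad_model"}),
    "process_plan": ({"cad_model"}, {"process_plan"}),
    "machining":    ({"process_plan", "raw_material"}, {"part"}),
}
AVAILABLE = {"requirement", "raw_material"}  # facts the user already provides

def compose(goal, available, plan=None):
    """Return an ordered service list that produces `goal`, or None."""
    plan = [] if plan is None else plan
    if goal in available:
        return plan
    for name, (ins, outs) in SERVICES.items():
        if goal in outs and name not in plan:
            sub, ok = plan, True
            for need in ins:              # recurse on every input concept
                sub = compose(need, available, sub)
                if sub is None:
                    ok = False
                    break
            if ok:
                return sub + [name]
    return None                           # goal is unreachable

print(compose("part", AVAILABLE))
```

The returned list is already in executable order, since each service is appended only after its inputs have been resolved.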

6.
This paper presents a Platform of Extensible Workflow Simulation Service (Pewss), which we have developed to provide a cloud service for aiding research on workflow scheduling. Simulation has been a major tool for performance evaluation and comparison in workflow scheduling research. However, researchers usually have to develop their own simulation programs with limited functionality, simply outputting summarized performance results. Pewss has been developed to ease and improve current practice in conducting performance simulations when studying existing workflow scheduling algorithms or designing new ones. Pewss is designed on the Software as a Service (SaaS) model, adopting a multiuser Web-based client/server architecture. To conduct simulation experiments on Pewss, researchers simply have to implement the scheduling algorithm under study instead of a whole simulation environment, allowing them to focus on their research without spending unnecessary effort on simulation implementation details. Pewss provides visualization of a workflow execution schedule based on simulation results, offering a convenient way for researchers to gain insight into the effectiveness, characteristics, and performance bottlenecks of scheduling algorithms. As a multiuser environment, Pewss also provides functionality to facilitate comparative performance analysis and collaborative research. Pewss has been used in our research on task-parallel workflow scheduling and is planned to be extended to support other types of workflow scheduling research problems, e.g., mixed-parallel workflows.
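The division of labor described here, a fixed simulation environment plus a pluggable scheduling policy, can be sketched as follows. The list-scheduling simulator and the longest-task-first policy are illustrative assumptions, not Pewss's actual API.

```python
# Sketch: the simulator is fixed; a researcher supplies only `pick`,
# the scheduling policy under study.
def simulate(tasks, deps, n_procs, pick):
    """List-schedule a DAG. tasks: {name: runtime}; deps: {name: prereq set}."""
    finish, schedule = {}, []
    procs = [0.0] * n_procs                # time at which each processor frees up
    remaining = set(tasks)
    while remaining:
        ready = [t for t in remaining
                 if all(d in finish for d in deps.get(t, ()))]
        task = pick(ready, tasks)          # <- the pluggable scheduling policy
        p = min(range(n_procs), key=lambda i: procs[i])
        start = max(procs[p],
                    max((finish[d] for d in deps.get(task, ())), default=0.0))
        finish[task] = start + tasks[task]
        procs[p] = finish[task]
        schedule.append((task, p, start, finish[task]))
        remaining.remove(task)
    return schedule, max(finish.values())

# Example policy: always run the longest ready task first.
def longest_first(ready, tasks):
    return max(ready, key=lambda t: tasks[t])

sched, makespan = simulate({"a": 3.0, "b": 2.0, "c": 1.0},
                           {"c": {"a", "b"}}, 2, longest_first)
print(makespan)
```

The `(task, processor, start, finish)` tuples in `sched` are exactly the data a Gantt-style schedule visualization needs.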

7.
Current conceptual workflow models use either informally defined conceptual models or several formally defined conceptual models that capture different aspects of the workflow, e.g., the data, process, and organizational aspects. To the best of our knowledge, there are no algorithms that can amalgamate these models to yield a single view of reality. A fragmented conceptual view is useful for systems analysis and documentation. However, it fails to realize the potential of conceptual models to provide a convenient interface to automate the design and management of workflows. First, as a step toward accomplishing this objective, we propose SEAM (State-Entity-Activity-Model), a conceptual workflow model defined in terms of set theory. Second, to the best of our knowledge, no attempt has been made to incorporate time into a conceptual workflow model; SEAM incorporates the temporal aspect of workflows. Third, we apply SEAM to a real-life organizational unit's workflows. In this work, we show a subset of the workflows modeled for this organization using SEAM. We also demonstrate, via a prototype application, how the SEAM schema can be implemented on a relational database management system. We present the lessons we learned about the advantages obtained for the organization and, for developers who choose to use SEAM, potential pitfalls in using the SEAM methodology to build workflow systems on relational platforms. The information contained in this work is sufficient to allow application developers to utilize SEAM as a methodology to analyze, design, and construct workflow applications on current relational database management systems. The definition of SEAM as a context-free grammar, the definition of its semantics, and its mapping to relational platforms should also suffice to allow the construction of an automated workflow design and construction tool with SEAM as the user interface.
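A minimal sketch of how state, entity, and activity concepts might be laid out on a relational platform, in the spirit of SEAM's mapping. The schema and the purchase-order example are assumptions for illustration, not the paper's actual schema.

```python
import sqlite3

# Entities carry states over time (the temporal aspect); activities move an
# entity between states. All table and column names are assumptions.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE entity   (id INTEGER PRIMARY KEY, kind TEXT NOT NULL);
CREATE TABLE state    (entity_id INTEGER REFERENCES entity(id),
                       name TEXT NOT NULL,
                       entered_at TEXT NOT NULL);        -- temporal aspect
CREATE TABLE activity (id INTEGER PRIMARY KEY, name TEXT NOT NULL,
                       entity_id INTEGER REFERENCES entity(id),
                       from_state TEXT, to_state TEXT);
""")
con.execute("INSERT INTO entity VALUES (1, 'purchase_order')")
con.execute("INSERT INTO state VALUES (1, 'submitted', '2024-01-01T09:00')")
# The 'approve' activity transitions the order; the new state is timestamped.
con.execute("INSERT INTO activity VALUES (1, 'approve', 1, 'submitted', 'approved')")
con.execute("INSERT INTO state VALUES (1, 'approved', '2024-01-01T10:30')")
latest = con.execute(
    "SELECT name FROM state WHERE entity_id = 1 ORDER BY entered_at DESC LIMIT 1"
).fetchone()
print(latest[0])
```

Keeping the state history as timestamped rows, rather than a single mutable column, is what makes temporal queries ("what state was the order in at 09:30?") possible.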

8.
Many regions in China are still threatened by frequent floods and water resource shortages. Consequently, the task of reproducing and predicting the hydrological process in watersheds is hard and unavoidable for reducing the risks of damage and loss. It is therefore necessary to develop an efficient and cost-effective hydrological tool in China, since many areas need to be modeled. Currently, developed hydrological tools such as Mike SHE and ArcSWAT (soil and water assessment tool based on ArcGIS) show significant power in improving the precision of hydrological modeling in China by considering spatial variability both in land cover and in soil type. However, adopting developed commercial tools in such a large developing country comes at a high cost. Commercial modeling tools usually contain large numbers of formulas, complicated data formats, and many preprocessing or postprocessing steps that may make it difficult for the user to carry out a simulation, thus lowering the efficiency of the modeling process. In addition, commercial hydrological models usually cannot be modified or improved to suit some special hydrological conditions in China. Some other hydrological models are open source but are integrated into commercial GIS systems. Therefore, by integrating the hydrological simulation code EasyDHM, a hydrological simulation tool named MWEasyDHM was developed based on the open-source MapWindow GIS. Its purpose is to establish the first open-source GIS-based distributed hydrological model tool in China by integrating modules for preprocessing, model computation, parameter estimation, result display, and analysis. MWEasyDHM provides users with a friendly MapWindow GIS interface, selectable multifunctional hydrological processing modules, and, more importantly, an efficient and cost-effective hydrological simulation tool.
The general construction of MWEasyDHM consists of four major parts: (1) a general GIS module for hydrological analysis, (2) a preprocessing module for modeling inputs, (3) a model calibration module, and (4) a postprocessing module. The general GIS module for hydrological analysis is developed on the basis of the fully open-source GIS software MapWindow, which contains basic GIS functions. The preprocessing module is made up of three submodules: a DEM-based submodule for hydrological analysis, a submodule for default parameter calculation, and a submodule for the spatial interpolation of meteorological data. The calibration module supports parallel computation, real-time computation, and visualization. The postprocessing module includes model calibration and spatial visualization of model results in tabular form and on spatial grids. MWEasyDHM enables efficient modeling and calibration with EasyDHM, and promises further development of cost-effective applications in various watersheds.

9.
10.
Cloud computing has established itself as an interesting computational model that provides a wide range of resources such as storage, databases and computing power for several types of users. Recently, the concept of cloud computing was extended with the concept of federated clouds, where several resources from different cloud providers are inter-connected to perform a common action (e.g. execute a scientific workflow). Users can benefit from both single-provider and federated cloud environments to execute their scientific workflows, since they can get the necessary amount of resources on demand. In several of these workflows, there is a demand for high-performance and parallelism techniques, since many activities are data- and compute-intensive and can execute for hours, days or even weeks. Some Scientific Workflow Management Systems (SWfMS) already provide parallelism capabilities for scientific workflows in a single-provider cloud. Most of them rely on creating a virtual cluster to execute the workflow in parallel, but they also rely on the user to estimate the number of virtual machines to be allocated to create this virtual cluster. Most SWfMS use this initial virtual cluster configuration, made by the user, for the entire workflow execution. Dimensioning the virtual cluster to execute the workflow in parallel is then a top-priority task, since an under- or over-dimensioned virtual cluster can hurt workflow performance or unnecessarily increase financial costs. This dimensioning is far from trivial in a single-provider cloud, and especially in federated clouds, due to the huge number of virtual machine types to choose from in each location and provider. In this article, we propose an approach named GraspCC-fed to produce an optimal (or near-optimal) estimate of the number of virtual machines to allocate for each workflow.
GraspCC-fed extends a previously proposed heuristic based on GRASP for executing standalone applications to consider scientific workflows executed in both single-provider and federated clouds. For the experiments, GraspCC-fed was coupled to an adapted version of the SciCumulus workflow engine for federated clouds. We believe that GraspCC-fed can be an important decision-support tool for users, helping to determine an optimal configuration of the virtual cluster for parallel cloud-based scientific workflows.
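A GRASP-style heuristic of the kind extended here combines a greedy randomized construction with a local search. The sketch below dimensions a virtual cluster against a deadline; the VM catalogue, the core-hours workload model, and the deadline penalty are illustrative assumptions, not GraspCC-fed's actual formulation.

```python
import random

# Illustrative catalogue: name -> (cores, hourly price). Assumed values.
VM_TYPES = {"small": (1, 0.05), "medium": (2, 0.10), "large": (4, 0.18)}
WORK = 40.0      # total core-hours the workflow needs (assumption)
DEADLINE = 5.0   # hours

def cost(cluster):
    """Monetary cost of running the workflow, with a hard deadline penalty."""
    cores = sum(VM_TYPES[v][0] for v in cluster)
    if cores == 0:
        return float("inf")
    hours = WORK / cores
    price = hours * sum(VM_TYPES[v][1] for v in cluster)
    return price + (1000.0 if hours > DEADLINE else 0.0)

def grasp(iterations=30, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(iterations):
        cluster = [rng.choice(sorted(VM_TYPES))]
        # Construction: randomized greedy additions until the deadline is met.
        while cost(cluster) >= 1000.0:
            rcl = sorted(VM_TYPES, key=lambda v: cost(cluster + [v]))[:2]
            cluster.append(rng.choice(rcl))
        # Local search: drop VMs while that strictly improves the cost.
        changed = True
        while changed:
            changed = False
            for i in range(len(cluster)):
                trial = cluster[:i] + cluster[i + 1:]
                if cost(trial) < cost(cluster):
                    cluster, changed = trial, True
                    break
        if best is None or cost(cluster) < cost(best):
            best = cluster
    return best, cost(best)

best, best_cost = grasp()
print(best, best_cost)
```

The restricted candidate list (`rcl`) is what makes the construction "greedy randomized" rather than purely greedy: each step chooses at random among the few cheapest additions.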

11.
Ergonomic models and techniques are a fundamental issue in the design of comfortable and safe products and spaces. User studies related to visualization tools are a current topic in the ergonomics and design visualization literature. But researchers have begun to discover that user study is rarely straightforward, especially when drawing visualization data from interdisciplinary sources. The availability of a plethora of visualization techniques can make it difficult to determine the most appropriate technique to convey maximum possible understanding. The RT-MHV (“Real-time” – “Motion history volumes”) 3D computerized assessment model, developed by the authors, provides a local risk evaluation of work-related musculoskeletal disorders (WMSDs), based on real time and on motion history volumes. In the model, the visual display of the WMSD risk level for each body segment is defined by color coding at points surrounding an avatar's segment, representing an actual user. The values associated with areas at increased risk of WMSDs can be identified and iterated quickly, so as to determine the “optimal posture”. Designers can share this knowledge by recording the user's postural interactions, defined through the mapping of geometric comfort data and WMSD risk-level categories. The challenge in the development process was to overcome existing “gaps” between ergonomics data and designer requirements. Further research on the RT-MHV model is recommended, principally for developing stand-alone CAD software. An aggregated statistical information database and complete body-joint visualizations will be computerized in due course. 2D tabulation and statistical information relating to body joints will be made available on demand.

12.
杜清华, 张凯. 《计算机工程》 (Computer Engineering), 2022, 48(7): 13-21, 28
To handle complex data analysis tasks, researchers have designed cross-platform data processing systems that combine multiple platforms. Selecting the platform on which each operator of a cross-platform workflow runs is critical to system performance, because the same operator implemented on different platforms can differ markedly in performance. Platform selection is currently done mostly with cost-based optimization, but existing cost models estimate costs inaccurately because they cannot exploit the latent information in cross-platform workflows. This paper proposes an efficient cross-platform workflow optimization method that uses a GGFN model as the cost model, taking operator features and workflow features as input. A graph attention mechanism captures the structural information of the DAG-shaped cross-platform workflow and of each operator's neighboring nodes, while a gated recurrent unit memorizes the temporal execution information of operators, yielding accurate cost estimates. On this basis, an enumeration algorithm for assigning operators to platforms is designed around the characteristics of cross-platform workflows, using the GGFN-based cost model and lazy greedy pruning to choose a suitable implementation platform for each operator. Experimental results show that the method can improve the execution performance of cross-platform workflows by a factor of 3 and reduce running time by more than 60%.

13.
Automation of the execution of computational tasks is at the heart of improving scientific productivity. Over recent years, scientific workflows have been established as an important abstraction that captures the data processing and computation of large and complex scientific applications. By allowing scientists to model and express entire data processing steps and their dependencies, workflow management systems relieve scientists of the details of an application and manage its execution on a computational infrastructure. As the resource requirements of today's computational and data science applications that process vast amounts of data keep increasing, there is a compelling case for a new generation of advances in high-performance computing, commonly termed extreme-scale computing, which will bring forth multiple challenges for the design of workflow applications and management systems. This paper presents a novel characterization of workflow management systems using features commonly associated with extreme-scale computing applications. We classify 15 popular workflow management systems in terms of workflow execution models, heterogeneous computing environments, and data access methods. The paper also surveys workflow applications and identifies gaps for future research on the road to extreme-scale workflows and management systems.

14.
15.
Neuroimaging is a field that benefits from distributed computing infrastructures (DCIs) to perform data processing and analysis, which is often achieved using Grid workflow systems. Collaborative research in neuroimaging requires ways to facilitate exchange between different groups, in particular to enable sharing, re-use and interoperability of applications implemented as workflows. The SHIWA project provides solutions to facilitate sharing and exchange of workflows between workflow systems and DCI resources. In this paper we present and analyse how the SHIWA Platform was used to implement various cases in which workflow exchange supports collaboration in neuroscience. The SHIWA Platform and the implemented solutions are described and analysed from a “user” perspective, in this case workflow developers and neuroscientists. We conclude that the platform in its current form is valuable for these cases, and we identify remaining challenges.

16.
Many civil engineering tasks require access to geospatial data in the field and referencing the stored information to the real-world situation. Augmented reality (AR), which interactively overlays 3D graphical content directly over a view of the world, can be a useful tool to visualize, but also create, edit and update, geospatial data representing real-world artifacts. We present research results on a next-generation field information system for companies relying on geospatial data, providing mobile workforces with capabilities for on-site inspection and planning, data capture and as-built surveying. To achieve this aim, we used mobile AR technology for on-site surveying of geometric and semantic attributes of geospatial 3D models on the user's handheld device. The interactive 3D visualizations automatically generated from production databases provide immediate visual feedback for many tasks and lead to a round-trip workflow where planned data are used as a basis for as-built surveying through manipulation of the planned data. Classically, surveying of geospatial objects is a typical scenario performed by utility companies on a daily basis. We demonstrate a mobile AR system that is capable of these operations and present first field trials with expert end users from utility companies. Our initial results show that the workflows of planning and surveying geospatial objects benefit from our AR approach.

17.
Event sequence visualization aids analysts in many domains to better understand and infer new insights from event data. Analysing behaviour before or after a certain event of interest is a common task in many scenarios. In this paper, we introduce, formally define, and position double trees as a domain-agnostic tree visualization approach for this task. The visualization shows the sequences that led to the event of interest as a tree on the left, and those that followed on the right. Moreover, our approach enables users to create selections based on event attributes to interactively compare the events and sequences along colour-coded categories. We integrate the double tree and category-based comparison into a user interface for event sequence analysis. In three application examples, we show a diverse set of scenarios, covering short and long time spans, non-spatial and spatial events, human and artificial actors, to demonstrate the general applicability of the approach.
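The double-tree construction described above can be sketched directly: split each sequence at the event of interest, merge the reversed prefixes into a left tree and the suffixes into a right tree. The event data below are illustrative assumptions.

```python
# Sketch of double-tree construction around an anchor event. The left tree is
# read outward from the anchor (most recent preceding event first).
def build_tree(paths):
    """Merge event paths into a nested dict: event -> subtree."""
    root = {}
    for path in paths:
        node = root
        for ev in path:
            node = node.setdefault(ev, {})
    return root

def double_tree(sequences, anchor):
    before, after = [], []
    for seq in sequences:
        if anchor in seq:
            i = seq.index(anchor)
            before.append(list(reversed(seq[:i])))  # read outward from anchor
            after.append(seq[i + 1:])
    return build_tree(before), build_tree(after)

seqs = [["login", "search", "buy", "logout"],
        ["login", "buy", "refund"],
        ["search", "buy", "logout"]]
left, right = double_tree(seqs, "buy")
print(left)
print(right)
```

Attaching per-node visit counts (how many sequences pass through each branch) would give the quantities a double-tree view typically encodes visually.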

18.
The Petri Net Markup Language (PNML) was originally an XML-based interchange format for Petri nets. Individual companies may specify their process models as Petri nets and exchange them with other companies in PNML. This paper aims to demonstrate the capabilities of PNML in the development of applications, rather than as an industrial interchange format only. In this paper, we apply PNML to develop context-aware workflow systems. In the existing literature, different methodologies for the design of context-aware systems have been proposed. However, workflow models have not been considered in these methodologies. Our interest in this paper is to propose a methodology to automatically generate context-aware action lists for users and effectively control resource allocation based on the state of the workflow system. To achieve these objectives, we first propose Petri net models to describe the workflows. Next, we propose models to capture resource activities. Finally, the interactions between workflows and resources are combined to obtain a model of the whole process. Based on the combined model, we propose an architecture to automatically generate a context-aware graphical user interface to guide the users and control resource allocation in workflow systems. We demonstrate our design methodology using a health care example.
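The workflow-control idea can be sketched with a minimal Petri net interpreter: a transition is enabled when all of its input places hold tokens, and firing it moves the tokens. The two-step approval net below, with a clerk place modeling a shared resource, is an illustrative assumption, not the paper's health care example.

```python
# Minimal Petri net sketch: places hold token counts; transitions consume
# from input places and produce into output places.
class PetriNet:
    def __init__(self, marking, transitions):
        self.marking = dict(marking)          # place -> token count
        self.transitions = transitions        # name -> (input places, output places)

    def enabled(self, name):
        ins, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in ins)

    def fire(self, name):
        ins, outs = self.transitions[name]
        if not self.enabled(name):
            raise ValueError(f"{name} is not enabled")
        for p in ins:
            self.marking[p] -= 1
        for p in outs:
            self.marking[p] = self.marking.get(p, 0) + 1

net = PetriNet(
    {"request": 1, "clerk": 1},               # "clerk" models a shared resource
    {"review":  ({"request", "clerk"}, {"reviewed"}),
     "approve": ({"reviewed"}, {"approved", "clerk"})},  # clerk released here
)
net.fire("review")
net.fire("approve")
print(net.marking)
```

The set of enabled transitions at the current marking is precisely the context-aware action list the abstract mentions; resource control falls out of modeling resources as places.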

19.
Quality of service for workflows and web service processes
Workflow management systems (WfMSs) have been used to support various types of business processes for more than a decade now. In workflows or Web processes for e-commerce and Web service applications, suppliers and customers define a binding agreement, or contract, between the two parties, specifying quality of service (QoS) items such as products or services to be delivered, deadlines, quality of products, and cost of services. The management of QoS metrics directly impacts the success of organizations participating in e-commerce. Therefore, when services or products are created or managed using workflows or Web processes, the underlying workflow engine must accept the specifications and be able to estimate, monitor, and control the QoS rendered to customers. In this paper, we present a predictive QoS model that makes it possible to compute the quality of service for workflows automatically, based on atomic task QoS attributes. We also present the implementation of our QoS model for the METEOR workflow system. We describe the components that have been changed or added, and discuss how they interact to enable the management of QoS.
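The kind of bottom-up QoS computation described above can be sketched with two reduction rules: times add along a sequence, while the maximum dominates across parallel branches; cost adds in both cases. The rules and task values below are illustrative assumptions, not METEOR's exact model.

```python
# QoS composition sketch: reduce a workflow to a single QoS estimate from
# atomic task attributes. Task names and values are assumed examples.
def seq(*tasks):
    """Sequential composition: times and costs both add up."""
    return {"time": sum(t["time"] for t in tasks),
            "cost": sum(t["cost"] for t in tasks)}

def par(*tasks):
    """Parallel composition: the slowest branch dominates time; costs add."""
    return {"time": max(t["time"] for t in tasks),
            "cost": sum(t["cost"] for t in tasks)}

order  = {"time": 1.0, "cost": 2.0}
credit = {"time": 3.0, "cost": 5.0}
stock  = {"time": 2.0, "cost": 1.0}
ship   = {"time": 4.0, "cost": 3.0}

# Order intake, then credit check and stock check in parallel, then shipping.
workflow = seq(order, par(credit, stock), ship)
print(workflow)
```

A fuller model would add probability-weighted conditional branches and reliability attributes, but the same reduction idea applies.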

20.
With the arrival of the big data era, data analysis plays an increasingly prominent role: it can discover valuable information in massive data and thereby guide user decisions more effectively. However, data analysis pipelines face three major challenges: tightly coupled analysis workflows, heterogeneous interaction interfaces, and time-consuming exploratory analysis. To address these challenges, this paper presents Navi, a data analysis system based on natural language interaction. Navi adopts a modular design that abstracts three core functional modules of mainstream data analysis workflows: data querying, visualization generation, and visualization exploration, thereby reducing the coupling of the system design. Navi uses natural language as a unified interaction interface and coordinates the functional modules effectively through a task scheduler. Moreover, to cope with the exponential search space and ambiguous user intent in visualization exploration, this paper proposes an automatic visualization exploration method based on Monte Carlo tree search, together with a pruning algorithm informed by visualization domain knowledge and a composite reward function, improving both search efficiency and result quality. Finally, quantitative experiments and user studies validate the effectiveness of Navi.
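The Monte Carlo tree search at the core of such an exploration method can be sketched generically: selection by UCB1, expansion, random rollout, and backpropagation. The toy search space below (five binary choices scored by the fraction of ones) stands in for visualization configurations and is purely an assumption; Navi's domain-knowledge pruning and composite reward are not modeled here.

```python
import math
import random

DEPTH, ACTIONS = 5, (0, 1)        # toy space: five binary choices
rng = random.Random(0)

def reward(path):                 # stand-in for a visualization quality score
    return sum(path) / DEPTH

class Node:
    def __init__(self, path, parent=None):
        self.path, self.parent = path, parent
        self.children, self.visits, self.value = {}, 0, 0.0

def ucb(parent, child, c=1.4):    # UCB1: exploitation + exploration bonus
    return (child.value / child.visits
            + c * math.sqrt(math.log(parent.visits) / child.visits))

def mcts(iterations=200):
    root = Node([])
    for _ in range(iterations):
        node = root
        # Selection: descend while the node is fully expanded.
        while len(node.path) < DEPTH and len(node.children) == len(ACTIONS):
            node = max(node.children.values(), key=lambda ch: ucb(node, ch))
        # Expansion: add one untried child.
        if len(node.path) < DEPTH:
            a = rng.choice([x for x in ACTIONS if x not in node.children])
            node.children[a] = Node(node.path + [a], parent=node)
            node = node.children[a]
        # Simulation: random rollout to a full configuration.
        tail = [rng.choice(ACTIONS) for _ in range(DEPTH - len(node.path))]
        r = reward(node.path + tail)
        # Backpropagation: update statistics up to the root.
        while node is not None:
            node.visits += 1
            node.value += r
            node = node.parent
    return root

root = mcts()
best, node = [], root             # read off the most-visited path
while node.children:
    a, node = max(node.children.items(), key=lambda kv: kv[1].visits)
    best.append(a)
print(best)
```

Domain-knowledge pruning would simply remove actions from `ACTIONS` at selection/expansion time, shrinking the tree before any simulation is spent on implausible configurations.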


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)    京ICP备09084417号-23
