Similar Literature
1.
Cloud computing has established itself as an interesting computational model that provides a wide range of resources such as storage, databases and computing power for several types of users. Recently, the concept of cloud computing was extended with the concept of federated clouds, where resources from different cloud providers are inter-connected to perform a common action (e.g. execute a scientific workflow). Users can benefit from both single-provider and federated cloud environments to execute their scientific workflows, since they can get the necessary amount of resources on demand. Many of these workflows demand high performance and parallelism techniques, since many activities are data- and compute-intensive and can execute for hours, days or even weeks. Some Scientific Workflow Management Systems (SWfMS) already provide parallelism capabilities for scientific workflows in single-provider clouds. Most of them rely on creating a virtual cluster to execute the workflow in parallel, but they also rely on the user to estimate the number of virtual machines to be allocated to create this virtual cluster, and most SWfMS use this initial user-made configuration for the entire workflow execution. Dimensioning the virtual cluster to execute the workflow in parallel is therefore a top-priority task, since an under- or over-dimensioned virtual cluster can degrade workflow performance or (unnecessarily) increase financial costs. This dimensioning is far from trivial in a single-provider cloud, and especially in federated clouds, due to the huge number of virtual machine types to choose from in each location and provider. In this article, we propose an approach named GraspCC-fed to produce an optimal (or near-optimal) estimate of the number of virtual machines to allocate for each workflow. GraspCC-fed extends a previously proposed GRASP-based heuristic for standalone applications to scientific workflows executed in both single-provider and federated clouds. For the experiments, GraspCC-fed was coupled to a version of the SciCumulus workflow engine adapted for federated clouds. We believe that GraspCC-fed can be an important decision-support tool that helps users determine an optimal virtual cluster configuration for parallel cloud-based scientific workflows.
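
The abstract does not reproduce GraspCC-fed's pseudocode, so the sketch below only illustrates the generic GRASP pattern it builds on (greedy randomized construction followed by local search), applied to picking a VM count; the cost function and all constants are invented for illustration.

```python
import random

# Hypothetical trade-off: fewer VMs lengthen the makespan, more VMs cost
# more. Real GraspCC-fed derives this from workflow and provider data.
def cost(n_vms, total_work=1000.0, price_weight=2.0):
    return total_work / n_vms + price_weight * n_vms

def grasp(candidates, iters=50, alpha=0.3):
    best = None
    for _ in range(iters):
        # Greedy randomized construction: draw from the best alpha-fraction.
        ranked = sorted(candidates, key=cost)
        rcl = ranked[:max(1, int(alpha * len(ranked)))]
        sol = random.choice(rcl)
        # Local search: move to a neighbouring VM count while it improves.
        improved = True
        while improved:
            improved = False
            for neighbour in (sol - 1, sol + 1):
                if neighbour in candidates and cost(neighbour) < cost(sol):
                    sol, improved = neighbour, True
        if best is None or cost(sol) < cost(best):
            best = sol
    return best

print(grasp(candidates=list(range(1, 65))))  # -> around 22 VMs
```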

2.
Bag-of-Tasks (BoT) workflows are widespread in many big data analysis fields. However, there are very few cloud resource provisioning and scheduling algorithms tailored for BoT workflows, and existing algorithms fail to consider the stochastic task execution times of BoT workflows, which leads to deadline violations and increased resource renting costs. In this paper, we propose a dynamic cloud resource provisioning and scheduling algorithm that aims to fulfill the workflow deadline by using the sum of the task execution time expectation and its standard deviation to estimate real task execution times. A bag-based delay scheduling strategy and a single-type-based virtual machine interval renting method are presented to decrease the resource renting cost. The proposed algorithm is evaluated using ElasticSim, a cloud simulator extended from CloudSim. The results show that, compared to existing algorithms, the dynamic algorithm decreases the resource renting cost while guaranteeing the workflow deadline.
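
The estimation rule named in the abstract (execution-time expectation plus standard deviation) is simple enough to sketch directly; the task statistics below are hypothetical.

```python
# Conservative execution-time estimate described in the abstract:
# expectation plus one standard deviation, so that stochastic overruns
# rarely break the sub-deadline. Task statistics here are made up.
tasks = [
    {"name": "t1", "mean_s": 120.0, "std_s": 15.0},
    {"name": "t2", "mean_s": 300.0, "std_s": 60.0},
]

def estimate(task):
    return task["mean_s"] + task["std_s"]

bag_estimate = sum(estimate(t) for t in tasks)  # serial case for one bag
deadline_s = 600.0
print("fits deadline:", bag_estimate <= deadline_s)
```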

3.
The Journal of Supercomputing - Recent advancements of virtualization technologies for parallel processing involve scheduling containerized tasks in a workflow. Since a container can include...

4.
Utility computing is a form of computer service whereby the company providing the service charges the users for using the system resources. In this paper, we present system-optimal and user-optimal price-based job allocation schemes for utility computing systems whose objective is to minimize the cost for the users. The system-optimal scheme provides an allocation of jobs to the computing resources that minimizes the overall cost for executing all the jobs in the system. The user-optimal scheme provides an allocation that minimizes the cost for individual users in the system, thereby providing fairness. The system-optimal scheme is formulated as a constrained minimization problem, and the user-optimal scheme as a non-cooperative game. The prices charged by the computing resource owners for executing the users' jobs are obtained from a pricing model based on a non-cooperative bargaining game theory framework. The performance of the studied job allocation schemes is evaluated using simulations with various system loads and configurations.
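
As a toy illustration of the system-optimal idea, a constrained minimization of total cost across all jobs, here is a greedy allocation that is only optimal under the simplifying assumption of fixed per-job prices and capacities; it is not the paper's game-theoretic formulation, and all numbers are invented.

```python
# Toy system-optimal allocation: send each job to the cheapest resource
# that still has capacity, minimizing total cost across all users.
resources = [
    {"id": "r1", "price_per_job": 0.05, "capacity": 3},
    {"id": "r2", "price_per_job": 0.08, "capacity": 5},
]

def allocate(n_jobs):
    plan, total = [], 0.0
    pool = sorted(resources, key=lambda r: r["price_per_job"])
    for _ in range(n_jobs):
        r = next((x for x in pool if x["capacity"] > 0), None)
        if r is None:
            raise RuntimeError("not enough capacity")
        r["capacity"] -= 1
        plan.append(r["id"])
        total += r["price_per_job"]
    return plan, total

print(allocate(6))  # three jobs on r1, three on r2, total 0.39
```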

5.
Cloud computing promises on-demand, pay-per-use access to virtually unlimited resources. Using these resources requires more than simple access, as most clients have cost and time constraints that need to be fulfilled; scheduling heuristics have therefore been devised to optimize the placement of client tasks on allocated virtual machines. Applications can be roughly divided into two categories: independent bag-of-tasks applications and workflows. In this paper we focus on the latter and investigate a less studied problem: the effect the virtual machine allocation policy has on the scheduling outcome. We look at how workflow structure, execution time, and virtual machine instance type affect the efficiency of the provisioning method when cost and makespan are considered. To aid our study we devised a mathematical model for cost and makespan for the cases where a single instance type or multiple instance types are used. While the model lets us determine the boundaries for two of our extreme methods, the complexity of workflow applications calls for a more experimental approach to determine the general relation; for this purpose we considered synthetically generated workflows that cover a wide range of possible cases. Results show the need for probabilistic selection methods when execution times are small and heterogeneous, whereas for large homogeneous execution times a single best algorithm clearly stands out. Several other conclusions, regarding the efficiency of powerful instance types compared to weaker ones and of dynamic methods against static ones, are also drawn.
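
The paper's cost/makespan model is not reproduced in the abstract; the single-instance-type case it bounds can be sketched as follows, assuming ideal linear speedup and hourly billing (both assumptions of this sketch, not claims about the paper).

```python
import math

# Toy cost/makespan model for n identical VMs under hourly billing.
# Assumes ideal linear speedup; the paper's model also covers mixes
# of instance types, which this sketch does not.
def makespan_h(total_work_h, n_vms):
    return total_work_h / n_vms

def cost(total_work_h, n_vms, price_per_h):
    return n_vms * math.ceil(makespan_h(total_work_h, n_vms)) * price_per_h

for n in (1, 4, 8):
    # Under ideal speedup the cost stays flat while makespan shrinks,
    # which is exactly the trade-off the allocation policy must navigate.
    print(n, makespan_h(24.0, n), cost(24.0, n, 0.10))
```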

6.
7.
This paper describes how the concept of imposing geometric constraints by minimizing cost functions may be used and extended to accomplish a variety of animated modelling tasks for computer graphics. In this approach a complex 3-D geometric problem is mapped into a scalar minimization formulation; the mapping provides a straightforward method for converting abstract geometric concepts into a construct that is easily computed. The minimization approach is demonstrated in three application areas: computer animation, visualization, and physically-based modelling. In the computer animation application, cost minimization may be used to generate motion paths and joint parameters for animated actors. The approach may also be used to generate deformable models that extract closed 3-D geometric models from volume data for visualization. In the final application, the approach provides the fundamental structure of a physically-based model of woven cloth.
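
A minimal illustration of the core idea, mapping a geometric constraint into a scalar cost and minimizing it numerically (plain gradient descent on an invented cost, not the paper's formulation):

```python
# Minimize a scalar cost encoding a geometric constraint: pull a 3-D
# point toward a target while softly penalizing deviation from z = 0.
def cost(p, target=(1.0, 2.0, 0.5)):
    d2 = sum((a - b) ** 2 for a, b in zip(p, target))
    return d2 + 10.0 * p[2] ** 2        # soft constraint: stay near z = 0

def grad(p, h=1e-6):                    # numerical forward-difference gradient
    g = []
    for i in range(3):
        q = list(p)
        q[i] += h
        g.append((cost(q) - cost(p)) / h)
    return g

p = [0.0, 0.0, 0.0]
for _ in range(500):                    # plain gradient descent
    g = grad(p)
    p = [a - 0.01 * b for a, b in zip(p, g)]
print([round(x, 3) for x in p])         # x -> 1, y -> 2, z pulled near 0
```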

8.
This paper presents a procedure for comparing the cost of leasing IT resources in a commercial computing cloud against the cost incurred in using on-premise resources. The procedure starts by calculating the required number of computers as a function of parameters that describe the application's features and execution conditions. By measuring the required execution time for different parameter values, we determined that this dependence is a second-order polynomial; the polynomial coefficients were calculated by processing the results of a fractional factorial design. On that basis we calculated the cost of the computing and storage resources required to run the application. The same calculation model can be applied to both a personal user and a cloud provider, although the results will differ because of different hardware exploitation levels and economies of scale. Such a calculation enables cloud providers to determine the marginal costs in their services' prices, and allows users to calculate the costs they would incur by executing the same application on their own resources.
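
The fitted coefficients are not given in the abstract; the following sketch only shows the shape of the comparison, with made-up quadratic coefficients and prices.

```python
# Number of computers as a second-order polynomial in a workload
# parameter x (a, b, c are placeholders for the coefficients the paper
# fits via fractional factorial design).
def computers_needed(x, a=0.002, b=0.1, c=2.0):
    return a * x**2 + b * x + c

def cloud_cost(x, hours, price_per_h=0.10):
    return computers_needed(x) * hours * price_per_h

def on_premise_cost(x, hours, capex_per_machine=1200.0, lifetime_h=20000.0):
    # Amortize the purchase price over machine lifetime (power and
    # maintenance ignored, which favours on-premise in this toy).
    return computers_needed(x) * hours * (capex_per_machine / lifetime_h)

x, hours = 100.0, 40.0
print(cloud_cost(x, hours), on_premise_cost(x, hours))
```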

9.
A growing number of data- and compute-intensive experiments have been modeled as scientific workflows in the last decade, and clouds have emerged as a prominent environment to execute this type of workflow. In this scenario, the investigation of workflow scheduling strategies aimed at reducing execution times became a top priority and a very popular research field. However, few works consider the problem of data file assignment when solving the task scheduling problem. Usually, a workflow is represented by a graph where nodes represent tasks, and the scheduling problem consists in allocating tasks to machines to be executed at a predefined time, aiming at reducing the makespan of the whole workflow. In this article, we show that the scheduling of scientific workflows can be improved when the task scheduling and the data file assignment problems are treated together. Thus, we propose a new workflow representation, where nodes of the workflow graph represent either tasks or data files, and define the Task Scheduling and Data Assignment Problem (TaSDAP) on this new model. We formulate this problem as an integer program and introduce a hybrid evolutionary algorithm for solving it, named HEA-TaSDAP. To evaluate our approach we conducted both theoretical and practical experiments: first we compared HEA-TaSDAP with the solutions produced by the mathematical formulation and by other works from the related literature, and then we considered real executions in the Amazon EC2 cloud using a real scientific workflow use case (SciPhy, for phylogenetic analyses). In all experiments, HEA-TaSDAP outperformed the classical approaches from the related literature, such as Min-Min and HEFT.
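
TaSDAP's integer program is not reproduced here; the sketch below only conveys the core idea of scoring task placement and data-file placement jointly, by brute force on a tiny invented instance.

```python
from itertools import product

# Tiny joint task-scheduling / data-assignment search: two machines,
# two tasks, one shared input file. Reading the file remotely adds a
# transfer penalty, so the best task schedule depends on where the
# file is placed, and vice versa.
exec_time = {("t1", "m1"): 4, ("t1", "m2"): 6,
             ("t2", "m1"): 5, ("t2", "m2"): 3}
transfer = 2   # cost when a task runs on a machine not holding the file

def makespan(file_at, t1_at, t2_at):
    load = {"m1": 0, "m2": 0}
    for task, m in (("t1", t1_at), ("t2", t2_at)):
        load[m] += exec_time[(task, m)] + (transfer if m != file_at else 0)
    return max(load.values())

best = min(product(["m1", "m2"], repeat=3), key=lambda s: makespan(*s))
print(best, makespan(*best))
```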

10.
As cloud computing evolves, it is becoming more and more apparent that the future of this industry lies in interconnected cloud systems where resources are provided by multiple cloud providers instead of just one. In this way, the hosts of cloud-based services gain access to even larger resource pools while increasing their scalability and availability by diversifying both their computing resources and the geographical locations those resources operate from. Furthermore, increased competition between cloud providers, in conjunction with the commoditization of hardware, has already led to large decreases in the cost of cloud computing, and this trend is bound to continue. Scientific focus in cloud computing is also headed this way, with more studies on the efficient allocation of resources and the effective distribution of computing tasks between those resources. This study evaluates the use of meta-heuristic optimization algorithms for scheduling bag-of-tasks applications in a heterogeneous cloud of clouds. Both locally and globally arriving jobs are considered, along with sporadically arriving critical jobs. Simulation results show that these meta-heuristics can provide significant benefits in cost and performance.

11.
Cloud federation allows companies in need of computational resources to use resources hosted by different cloud providers, reducing the cost of IT infrastructure by lowering capital and operational expenses. This is the result of economies of scale and the possibility for organizations to purchase just as much computing and storage capacity as needed, whenever needed. However, a clear specification of cost savings requires a detailed specification of the costs incurred. Although there are some efforts to define cost models for clouds, the need for a comprehensive cost model that covers all cost factors and types of clouds remains. In this paper, we cover this gap by suggesting a cost model for the most general form of a cloud, namely federated hybrid clouds: clouds composed of a private cloud and a number of interoperable public clouds. The proposed cost model is applied within a cost minimization algorithm for making service placement decisions. We demonstrate the workings of our cost model and service placement algorithm within a specific cloud scenario; our results show that the service placement algorithm combined with the cost model minimizes spending on computational services.
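
A toy version of cost-minimizing service placement across a federated hybrid cloud; the cost factors and prices are invented, and the paper's model covers far more factors than the two shown here.

```python
# Place each service where its total cost (compute + data egress) is
# lowest, comparing the private cloud against two public providers.
clouds = {
    "private": {"compute_h": 0.08, "egress_gb": 0.00},
    "publicA": {"compute_h": 0.06, "egress_gb": 0.05},
    "publicB": {"compute_h": 0.07, "egress_gb": 0.02},
}

def placement_cost(cloud, hours, egress_gb):
    c = clouds[cloud]
    return c["compute_h"] * hours + c["egress_gb"] * egress_gb

def place(service):
    return min(clouds, key=lambda k: placement_cost(
        k, service["hours"], service["egress_gb"]))

svc = {"name": "web", "hours": 720, "egress_gb": 50}
print(place(svc))   # -> publicA for these made-up prices
```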

12.
Software and Systems Modeling - With the establishment of Cyber-physical Systems (CPS) and the Internet of Things, the virtual world of software and services and the physical world of objects and...

13.
Lumber, a heterogeneous, anisotropic material produced by sawing logs, contains a varying number of randomly dispersed, unusable areas (defects) distributed over each board's surface. Each board's quality is determined by the frequency and distribution of these defects and by the board's dimensions. Typically, the industry classifies lumber into five quality classes, ranking board quality with respect to its use in the production of wooden components and the resulting material yield. Price differentials between individual lumber quality classes vary over time, driven by market forces. Manufacturers using hardwood lumber can minimize their production costs by properly selecting the minimum-cost lumber quality combination, an optimization problem referred to in industry parlance as the least-cost lumber grade-mix problem. Finding the minimum-cost lumber quality combination, however, requires lumber cut-up simulations and statistical calculations; while the cut-up simulation can be done on a local workstation, the statistical calculations require a remote station running commercial statistical software. A second-order polynomial model is presented for finding the least-cost lumber grade-mix that manufacturers of wood products can use to minimize their raw material costs. Tests of the new model, which has been incorporated into a user-friendly decision support system, revealed that only a limited amount of lower-quality raw material (e.g. lumber with a high frequency of defects and/or small board sizes) can be accepted, as otherwise the lumber quality mix cannot supply all the required parts. The new model nevertheless suggested solutions with lower raw material costs than those of older models.
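
The least-cost grade-mix problem can be sketched as a small search over purchase volumes per quality class. Prices, yields and the linear yield model below are made-up stand-ins for the paper's fitted second-order polynomial, which is precisely what captures why low grades eventually fail to supply all parts; the linear toy lacks that effect.

```python
# Brute-force least-cost lumber grade-mix over a coarse volume grid.
grades = {
    "FAS":   {"price_mbf": 1200.0, "yield": 0.70},  # high quality, high yield
    "No.1C": {"price_mbf":  750.0, "yield": 0.55},
    "No.2C": {"price_mbf":  500.0, "yield": 0.38},  # defect-heavy, cheap
}
required_parts_mbf = 10.0    # part footage the cutting order needs

def cost_of_mix(volumes):    # volumes: thousand board feet per grade
    return sum(volumes[g] * grades[g]["price_mbf"] for g in grades)

def parts_from_mix(volumes):
    return sum(volumes[g] * grades[g]["yield"] for g in grades)

best = None
steps = [v * 2.0 for v in range(0, 16)]          # 0..30 MBF in 2-MBF steps
for fas in steps:
    for n1 in steps:
        for n2 in steps:
            mix = {"FAS": fas, "No.1C": n1, "No.2C": n2}
            if parts_from_mix(mix) >= required_parts_mbf:
                if best is None or cost_of_mix(mix) < cost_of_mix(best):
                    best = mix
print(best, round(cost_of_mix(best), 2))
```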

14.
Today, one of the main problems in cloud workflow scheduling is reducing the execution cost of a workflow while meeting its deadline constraint. Three-step list scheduling algorithms can address this problem effectively, but they produce only static sub-deadlines during the deadline-distribution phase. To ease workflow deployment, cloud providers offer users three instance types, among which spot instances have a very large price advantage. To address these issues, we propose S-DTDA, a cost-optimization workflow scheduling algorithm with dynamic deadline distribution. The algorithm uses particle swarm optimization to distribute deadlines dynamically, remedying the shortcoming of three-step list scheduling, and in the virtual machine selection phase it adds spot instances to the candidate resources, greatly reducing execution cost. Experimental results show that, compared with other classical algorithms, the algorithm has clear advantages in success rate and execution cost. In summary, S-DTDA effectively solves the deadline-constrained cost-optimization problem in workflow scheduling.
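
The abstract gives no pseudocode for S-DTDA; the following is only a generic particle swarm optimization skeleton of the kind it could use to split a workflow deadline into per-level sub-deadlines. The fitness function and every constant are invented.

```python
import random

# Generic PSO over sub-deadline splits: a particle is a vector of
# positive weights; normalizing against the total deadline yields one
# sub-deadline per workflow level. The fitness below is a stand-in.
DEADLINE, LEVELS = 100.0, 4
work = [30.0, 10.0, 40.0, 20.0]          # hypothetical work per level

def fitness(weights):
    s = sum(weights)
    subs = [DEADLINE * w / s for w in weights]
    # Penalize sub-deadlines that undershoot their level's work share.
    return sum(max(0.0, work[i] - subs[i]) for i in range(LEVELS))

def pso(n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(0.1, 1.0) for _ in range(LEVELS)]
           for _ in range(n_particles)]
    vel = [[0.0] * LEVELS for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=fitness)[:]
    for _ in range(iters):
        for i, p in enumerate(pos):
            for d in range(LEVELS):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - p[d])
                             + c2 * random.random() * (gbest[d] - p[d]))
                p[d] = max(0.01, p[d] + vel[i][d])   # keep weights positive
            if fitness(p) < fitness(pbest[i]):
                pbest[i] = p[:]
                if fitness(p) < fitness(gbest):
                    gbest = p[:]
    return [DEADLINE * x / sum(gbest) for x in gbest]

print([round(s, 1) for s in pso()])      # converges toward [30, 10, 40, 20]
```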

15.
Executing bag-of-tasks applications in multiple cloud environments while satisfying both consumers' budgets and deadlines poses the following challenges: How many resources, and for how many hours, should be allocated? What types of resources are required? How should the distributed execution of bag-of-tasks applications be coordinated across resources composed from multiple cloud providers? This work proposes a genetic algorithm for estimating suboptimal sets of resources and an agent-based approach for executing bag-of-tasks applications simultaneously constrained by budgets and deadlines. Agents (endowed with distributed algorithms) compose resources and coordinate the execution of bag-of-tasks applications. Empirical results demonstrate that the genetic algorithm can autonomously estimate sets of resources for budget- and deadline-constrained bag-of-tasks applications, favoring more economical (but slower) resources under loose deadlines and more powerful (but more expensive) resources under large budgets. Furthermore, agents can efficiently and successfully execute randomly generated bag-of-tasks applications in multi-cloud environments.
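
A compact sketch of the genetic-algorithm idea, evolving a count of VMs per (hypothetical) instance type under budget and deadline penalties; this is not the paper's actual encoding or fitness function.

```python
import random

# Minimal GA over resource sets: a chromosome is a VM count per type.
# Prices/speeds, the fitness, and the GA constants are all made up.
TYPES = [("small", 0.05, 1.0), ("large", 0.20, 4.0)]  # (name, $/h, speed)
WORK, BUDGET, DEADLINE = 400.0, 30.0, 12.0            # work units, $, hours

def evaluate(chrom):
    speed = sum(n * s for n, (_, _, s) in zip(chrom, TYPES))
    if speed == 0:
        return float("inf")
    hours = WORK / speed
    cost = hours * sum(n * p for n, (_, p, _) in zip(chrom, TYPES))
    penalty = max(0.0, cost - BUDGET) + max(0.0, hours - DEADLINE)
    return cost + 100.0 * penalty          # feasible, cheap sets win

def ga(pop_size=30, gens=60, mut=0.2):
    pop = [[random.randint(0, 10) for _ in TYPES] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=evaluate)
        elite = pop[:pop_size // 2]        # keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(TYPES) + 1)
            child = a[:cut] + b[cut:]      # one-point crossover
            if random.random() < mut:      # point mutation on one gene
                i = random.randrange(len(TYPES))
                child[i] = max(0, child[i] + random.choice((-1, 1)))
            children.append(child)
        pop = elite + children
    return min(pop, key=evaluate)

print(ga())
```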

16.
Workflow systems are popular in daily business processing. Since vulnerabilities cannot be totally removed from a workflow management system, successful attacks still happen and may inject malicious tasks or incorrect data into the workflow system. Moreover, legitimate tasks referring to the incorrect data will further corrupt more data objects in the system, so the integrity level of the system can be seriously compromised. This problem cannot be efficiently solved by existing defense mechanisms such as access control, intrusion detection, and checkpoints. In this paper, we propose a practical solution for on-line attack recovery of workflows: the recovery system discovers all damage caused by the malicious tasks and automatically repairs it based on the data and control dependencies between workflow tasks. We describe the fundamental theory behind workflow attack recovery, build a prototype system based on it, and develop the corresponding recovery algorithms. We evaluate the performance of the recovery system under different attack densities, intrusion detection delays and arrival rates; the experimental results show that our system is practical.
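
The damage-assessment idea, following data dependencies outward from the malicious task, can be sketched as a fixpoint computation over a (hypothetical) read/write graph:

```python
# Damage assessment as a fixpoint over task dependencies: any task that
# read data written by a compromised task is itself considered damaged.
reads = {"t2": {"d1"}, "t3": {"d2"}, "t4": {"d9"}}   # task -> data read
writes = {"t1": {"d1"}, "t2": {"d2"}, "t3": {"d3"}, "t4": {"d4"}}

def damaged_by(malicious):
    dirty_tasks = {malicious}
    dirty_data = set(writes.get(malicious, ()))
    changed = True
    while changed:                 # iterate until no new damage is found
        changed = False
        for task, rs in reads.items():
            if task not in dirty_tasks and rs & dirty_data:
                dirty_tasks.add(task)
                dirty_data |= writes.get(task, set())
                changed = True
    return dirty_tasks, dirty_data

print(damaged_by("t1"))   # t1 -> d1 -> t2 -> d2 -> t3 -> d3; t4 untouched
```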

17.
Information Systems, 1999, 24(3): 255-273
When designing a workflow schema, the workflow designer must often explicitly deal with exceptional situations, such as abnormal process termination or suspension of task execution. This paper shows how the designer can be supported by tools for capturing exceptional behavior within a workflow schema by reusing an available set of pre-configured exception skeletons. Exceptions are expressed by means of triggers, to be executed on top of an active database environment. In particular, the paper deals with the handling of typical workflow exceptional situations, which are modeled as generic exception skeletons to be included in a new workflow schema by simply specializing or instantiating them. Such skeletons, called patterns, are stored in a catalog; the paper describes the catalog structure and its management tools, which constitute an integrated environment for pattern-based exception design and reuse.

18.
We show how a layered cloud service model of software (SaaS), platform (PaaS), and infrastructure (IaaS) leverages multiple independent clouds by creating a federation among the providers. The layered architecture leads naturally to a design in which inter-cloud federation takes place at each service layer, mediated by a broker specific to the concerns of the parties at that layer. Federation increases consumer value and facilitates providing IT services as a commodity; this business model for the cloud is consistent with broker-mediated supply and service delivery chains in other commodity sectors such as finance and manufacturing. Concreteness is added to the federated cloud model by considering how it works in delivering the Weather Research and Forecasting service (WRF) as SaaS using PaaS and IaaS support. WRF is used to illustrate the concepts of delegation and federation, the translation of service requirements between service layers, and the inter-cloud broker functions needed to achieve federation.

19.
The use of event-condition-action (ECA) rules has transformed database systems from passive query-based data repositories into active sources of information delivery; ECA rules can benefit workflow systems in a similar fashion. In this paper, a software framework known as the STEP workflow management facility is proposed to manage collaborative and distributed workflows and to provide interfaces to Object Management Group-compliant product data management systems. Issues related to implementation using open standards such as CORBA are discussed. A key point underlying the framework is the flexibility it affords users to re-configure the system according to evolving needs in collaborative product development.
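
The ECA pattern itself is easy to state in code; here is a minimal hypothetical rule object, not the STEP facility's actual API.

```python
# Minimal event-condition-action rule: when an event arrives, fire the
# action only if the condition holds. Names and payloads are invented.
class EcaRule:
    def __init__(self, event, condition, action):
        self.event, self.condition, self.action = event, condition, action

    def handle(self, event, payload):
        if event == self.event and self.condition(payload):
            self.action(payload)

rule = EcaRule(
    event="task_completed",
    condition=lambda p: p["status"] == "failed",
    action=lambda p: print(f"re-dispatching {p['task']}"),
)
rule.handle("task_completed", {"task": "review_design", "status": "failed"})
```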

20.
Scientific workflow systems often operate in highly unreliable, heterogeneous and dynamic environments, and have accordingly incorporated different fault tolerance techniques. We propose an exception-handling mechanism, based on techniques adopted in programming languages, for modifying the structure of a workflow at run-time. In contrast to other proposals that achieve the required flexibility by means of the infrastructure, our proposal expresses the exception-handling mechanism within the workflow language, primarily as two exception-handling patterns that are exclusively based on the Reference Nets-within-Nets formalism (a specific type of Petri net). When an exception is detected, a workflow in our approach can be re-written (replaced) based on the particular failure condition that has been detected. This enables workflow users to have better control and understanding of the behaviour of their workflow without having to be aware of the underlying infrastructure.
