Similar Documents
A total of 18 similar documents were found (search time: 125 ms).
1.
Communication overhead cannot be ignored in cloud environments, yet most existing DAG (directed acyclic graph) workflow cost-optimization models do not take inter-task communication overhead into account and are therefore hard to apply in real cloud environments. To address this, a workflow cost-optimization model with communication overhead, CA-DAG (communication aware DAG), is proposed, together with a scheduling algorithm for the model, CACO (communication aware cost optimization), built on a layering approach. CACO uses the forward consistent (FC) rule to compute the workflow's minimum makespan; it partitions tasks into levels with a reverse-layering strategy, turning the cost-optimization problem from a global one into a local one; and it applies dynamic programming to collect the scattered "time fragments" generated when tasks select services, enlarging each task's cost-optimization space and improving the optimization result. Simulation results show that, when communication overhead is considered, CACO optimizes cost significantly better than DTL (deadline top level), DBL (deadline bottom level), and TCDBL (temporal consistency deadline bottom level).
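The role communication overhead plays in such a model can be illustrated with a small sketch. The code below is not CACO itself; it is a minimal, illustrative computation in which task earliest finish times are propagated through the DAG in topological order and every edge contributes a data-transfer delay. All function and variable names are invented for the example.

```python
# A minimal sketch (not the paper's CACO algorithm) of how inter-task
# communication overhead enters a DAG makespan computation: tasks are
# processed in topological order, and a task may start only after every
# predecessor has finished and its output data has been transferred.
from collections import defaultdict

def makespan_with_communication(tasks, edges, exec_time, comm_time):
    """tasks: iterable of task ids; edges: list of (u, v) precedence pairs;
    exec_time[t]: execution time of t; comm_time[(u, v)]: transfer time."""
    preds, succs = defaultdict(list), defaultdict(list)
    indeg = {t: 0 for t in tasks}
    for u, v in edges:
        preds[v].append(u)
        succs[u].append(v)
        indeg[v] += 1

    finish = {}
    ready = [t for t in tasks if indeg[t] == 0]
    while ready:
        t = ready.pop()
        # earliest start = latest (predecessor finish + data transfer)
        start = max((finish[p] + comm_time[(p, t)] for p in preds[t]), default=0.0)
        finish[t] = start + exec_time[t]
        for s in succs[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return max(finish.values())

# Example: a 3-task chain where ignoring communication underestimates the makespan.
print(makespan_with_communication(
    ["a", "b", "c"], [("a", "b"), ("b", "c")],
    {"a": 2.0, "b": 3.0, "c": 1.0},
    {("a", "b"): 1.5, ("b", "c"): 0.5}))   # 8.0 instead of 6.0
```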

2.
任丰玲  于炯  杨兴耀 《计算机工程》2012,38(23):287-290
For the problem of scheduling multiple directed acyclic graph (DAG) workflows in a cloud computing environment, an algorithm based on minimizing data transfer time and task completion time (LTCT) is proposed to handle scheduling among multiple DAG workflows of equal priority, and a hybrid multi-priority multi-DAG scheduling algorithm is given for the case where the DAGs have different priorities. Experimental results show that, compared with the E-Fairness algorithm, LTCT avoids extra data-transfer overhead while preserving fairness across the DAGs, which helps shorten the overall workflow makespan and improves resource utilization.

3.
To optimize both the monetary cost and the execution efficiency of cloud workflow scheduling, a workflow scheduling algorithm based on directed acyclic graph (DAG) partitioning, PBWS, is proposed. Aiming at the joint optimization of scheduling efficiency and cost, the algorithm divides the solution process into three phases: workflow DAG partitioning, partition adjustment, and resource allocation. The DAG-partitioning phase computes an initial task partition while preserving the execution-order dependencies among tasks; the partition-adjustment phase reassigns tasks across partitions to reduce the makespan; and the resource-allocation phase selects the most cost-effective task-to-resource mapping so that the total idle time of the resources is minimized. Simulation experiments on five scientific-workflow DAG models show that PBWS greatly reduces workflow execution cost at the price of only a small increase in makespan, achieving joint optimization of scheduling efficiency and cost, and that its overall performance is better than that of algorithms of the same type.

4.
孙月  于炯  朱建波 《计算机科学》2014,41(3):145-148,168
To address fairness in multi-user workflow scheduling, improve resource utilization, and satisfy the differing QoS requirements of different users' DAG workflows, a preemptive dynamic scheduling model for multiple DAG workflows is proposed. The algorithm divides DAG workflows into priority classes according to their QoS requirements and schedules jobs under the principle that higher-priority jobs occupy resources first. Among DAG workflows of the same priority, tasks preempt resources according to a slowdown metric augmented with heuristic information, further improving scheduling fairness; for jobs of different priorities, a threshold-based backfill algorithm is proposed that raises resource utilization while keeping the scheduling fair.
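For readers unfamiliar with backfilling, the sketch below shows a generic EASY-style backfill step rather than the paper's threshold-based variant: a lower-priority job may start early only if it fits in the currently free resources and is expected to finish before the reservation computed for the blocked highest-priority job. The job fields and the reservation rule are assumptions for illustration.

```python
# A minimal EASY-style backfill sketch (not the paper's exact threshold rule).
def easy_backfill(waiting, running, free_slots, now):
    """waiting: priority-ordered jobs, each {'id', 'slots', 'runtime'};
    running: jobs with 'slots' and 'finish'; returns jobs started at `now`."""
    started = []
    # Greedily start jobs from the head of the queue while they fit.
    while waiting and waiting[0]["slots"] <= free_slots:
        job = waiting.pop(0)
        job["finish"] = now + job["runtime"]
        running.append(job)
        started.append(job)
        free_slots -= job["slots"]
    if not waiting:
        return started
    # Reservation for the blocked head job: the earliest time at which enough
    # slots will have been released by running jobs (its "shadow time").
    head = waiting[0]
    avail, shadow = free_slots, now
    for job in sorted(running, key=lambda j: j["finish"]):
        avail += job["slots"]
        if avail >= head["slots"]:
            shadow = job["finish"]
            break
    # Backfill: a lower-priority job may start now only if it fits in the free
    # slots and is expected to finish before the head job's reservation.
    for job in list(waiting[1:]):
        if job["slots"] <= free_slots and now + job["runtime"] <= shadow:
            waiting.remove(job)
            job["finish"] = now + job["runtime"]
            running.append(job)
            started.append(job)
            free_slots -= job["slots"]
    return started

waiting = [{"id": "hi", "slots": 4, "runtime": 10.0},
           {"id": "lo", "slots": 1, "runtime": 2.0}]
running = [{"id": "r", "slots": 3, "finish": 5.0}]
print([j["id"] for j in easy_backfill(waiting, running, free_slots=1, now=0.0)])  # ['lo']
```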

5.
As computing technology advances and user demands grow, the problem of multiple directed acyclic graphs (DAGs) sharing one set of heterogeneous computing resources has attracted wide attention. Because real tasks are complex and variable, the DAGs differ from one another, which raises fairness issues for multi-DAG scheduling strategies. To address this, an improved heuristic fair scheduling algorithm, IFairness, is proposed. When selecting the DAG to schedule next, it uses a new metric, DAG completion degree, instead of the remaining makespan used in the original Fairness algorithm; when computing each DAG's slowdown, it adopts a "look one step ahead" principle to prevent some DAGs from never being scheduled in the early phase. Simulation results show that, compared with the original Fairness algorithm, IFairness reduces the degree of unfairness by 7.28% and raises resource utilization by 11.97%, effectively improving both the fairness and the resource utilization of the scheduling algorithm.
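As one plausible reading of the "DAG completion degree" metric (an assumption, since the abstract does not define it), the sketch below treats it as the fraction of a DAG's total work already finished and always serves the least-complete workflow next; the data fields are invented for the example.

```python
# A minimal sketch of fairness-driven DAG selection, assuming completion degree
# means the fraction of a DAG's total task work that has already finished; the
# least-complete DAG is picked next so no workflow falls too far behind.
def pick_next_dag(dags):
    """dags: list of {'id', 'done_work', 'total_work'} with total_work > 0."""
    def completion_degree(d):
        return d["done_work"] / d["total_work"]
    return min(dags, key=completion_degree)

dags = [
    {"id": "wf1", "done_work": 30.0, "total_work": 100.0},   # 30% complete
    {"id": "wf2", "done_work": 10.0, "total_work": 40.0},    # 25% complete
]
print(pick_next_dag(dags)["id"])   # wf2 is furthest behind, so it goes next
```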

6.
A hybrid scheduling strategy for multiple DAG workflows in heterogeneous distributed environments
田国忠  肖创柏  徐竹胜  肖霞 《软件学报》2012,23(10):2720-2734
Research on scheduling multiple DAG workflows in heterogeneous distributed environments has made new progress recently and solved a number of problems, but it has not yet considered classifying DAGs into priorities according to the requirements of different DAG types, or scheduling multiple DAGs of different priorities that arrive at different times. To solve these problems, based on dividing users' DAG workflows into priority classes according to their differing QoS requirements, a new scheduling model is first proposed and an existing fair scheduling algorithm is improved, addressing fairness among multiple DAG workflows of the same priority submitted at different times. To raise resource utilization and keep high-priority DAGs as little affected as possible by low-priority ones, a Backfill algorithm for scheduling among multiple DAGs of different priorities is then proposed. On the basis of the new system model and these two algorithms, a hybrid scheduling strategy is presented. Experimental results show that this hybrid strategy balances the scheduling needs of multiple types of DAGs arriving at different times with improved resource utilization. In addition, the experiments reveal a "tailing" phenomenon peculiar to the scheduling of two DAGs, which is worth further study and application.

7.
An energy-efficient scheduling algorithm for multiple directed-acyclic-graph workflows in cloud computing environments
刘丹琦  于炯  英昌甜 《计算机应用》2013,33(9):2410-2415
To address the poor energy savings, narrow applicability, and inability to balance performance optimization found in existing energy-aware scheduling algorithms for multiple directed acyclic graph (DAG) workflows, a new energy-efficient multi-DAG scheduling method, MREO, is proposed. Based on an analysis of the characteristics of computation-intensive and communication-intensive tasks, MREO reduces the number of processors used by consolidating independent tasks, and applies backtracking and branch-and-bound to dynamically choose the best consolidation path, effectively lowering the complexity of the consolidation algorithm. Experimental results show that MREO can effectively reduce the computation and communication energy consumption of the system while maintaining the performance of the multi-DAG workflows, achieving good energy savings.

8.
To take into account both users' task QoS requirements and the cloud provider's revenue, a multi-objective DAG (Directed Acyclic Graph) particle swarm optimization algorithm that seeks Pareto optimality in cloud environments, MODPSO (Multi-objective DAG Particle Swarm Optimization), is proposed. It jointly optimizes three objectives, task makespan, execution cost, and execution energy, and designs a DVFS-based discrete PSO scheduling method. The particle evolution process and update rules of PSO are redefined to obtain multi-objective optimized workflow schedules. The algorithm is evaluated by simulation on synthetic workflows and real scientific workflows, and its performance is analyzed. The results show that the algorithm solves the scheduling optimization of conflicting objectives through a non-dominated solution set, obtaining the Pareto front of optimal solutions while satisfying user QoS and balancing scheduling performance against system energy consumption.

9.
Most existing grid workflow scheduling algorithms consider only DAG-structured workflows, involve few QoS parameters, and aggregate multiple QoS parameters into a single objective function for optimization. Against this background, a novel grid workflow scheduling algorithm is proposed. It models grid workflows with the structurally expressive AGWL language and, based on the MOPSO algorithm, performs multi-objective optimization scheduling under multiple QoS constraints. A comparison with an NSGA-II-based grid workflow scheduling algorithm demonstrates the effectiveness of the proposed algorithm.

10.
With the development of grid and cloud workflow technologies, research on scheduling multiple DAGs (Directed Acyclic Graphs) that share resources has recently made progress. However, scheduling multiple deadline-constrained DAGs on a limited set of heterogeneous resources while minimizing cost remains to be studied and solved. To address this, a new way of measuring how urgent a DAG's deadline is, called relative strictness, is first proposed, and on this basis a scheduling algorithm MDRS (Scheduling for Multi-DAGs with Deadline based on Relative Strictness) is presented. The algorithm not only handles the relative urgency among multiple DAGs properly, but also detects and handles the "oversaturation" that may arise when DAG deadlines are too strict. When oversaturation occurs, a mechanism combining a stack with scheduling backtracking discards as few DAGs as possible, so as to maximize DAG throughput. On top of MDRS, to meet each DAG's deadline constraint while lowering the execution cost of the DAGs as fairly as possible, a cost-optimization algorithm CDVRS (Cost Decrease based on Variance of the Relative Strictness), which maximizes the cost-reduction rate per unit change of relative strictness, is also proposed. Experiments show that these methods and algorithms achieve good performance.

11.
In heterogeneous computing systems, there is a need for solutions that can cope with the unavoidable uncertainty in individual task execution times when scheduling DAGs. When such uncertainties occur, static DAG scheduling approaches may suffer, and some rescheduling may be necessary. Assuming that the uncertainty in task execution times is modelled in a stochastic manner, we may be able to use this information to improve static DAG scheduling considerably. In this paper, a novel DAG scheduling approach is proposed to solve this stochastic scheduling problem, based on a Monte Carlo method. The approach is built on top of a classic static DAG scheduling heuristic and evaluated through extensive simulation. Empirical results show that a significant improvement in average application performance can be achieved by the proposed approach at a reasonable execution time cost.
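A minimal sketch of the Monte Carlo idea follows, under simplifying assumptions not taken from the paper (Gaussian task-time distributions, precedence constraints only, no resource contention or communication costs): a candidate schedule is judged by its average makespan over many sampled realizations of the task execution times rather than by a single deterministic estimate. All names are illustrative.

```python
# A minimal Monte Carlo sketch in the spirit of the abstract (not the paper's
# exact method): schedules produced by a static heuristic can be compared by
# their mean makespan over sampled task execution times.
import random

def sample_makespan(order, succs, exec_dist, rng):
    """order: topologically ordered tasks; succs[t]: successors of t;
    exec_dist[t]: (mean, stddev). Resource contention is abstracted away."""
    finish = {}
    for t in order:
        # start after the latest predecessor (tasks whose successor list contains t)
        start = max((finish[p] for p, ss in succs.items() if t in ss), default=0.0)
        mean, std = exec_dist[t]
        finish[t] = start + max(0.0, rng.gauss(mean, std))
    return max(finish.values())

def expected_makespan(order, succs, exec_dist, samples=1000, seed=0):
    rng = random.Random(seed)
    return sum(sample_makespan(order, succs, exec_dist, rng)
               for _ in range(samples)) / samples

succs = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
exec_dist = {"a": (2.0, 0.5), "b": (4.0, 1.0), "c": (3.0, 0.2), "d": (1.0, 0.1)}
print(expected_makespan(["a", "b", "c", "d"], succs, exec_dist))
```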

12.
Job scheduling plays a critical role in resource utilisation in a grid computing environment. The heterogeneity of grid resources adds challenges to job scheduling, especially when jobs have dependencies that can be represented as Directed Acyclic Graphs (DAGs). Heuristics have been developed for job scheduling optimisation. This paper presents six heuristic enhancements: MMSTFT for minimising both makespan and task finish time, levelU for upward DAG levelling, TMWD for matching tasks with data, Slack for prioritising task scheduling based on slack time, LSlack for levelling the Slack heuristic, and NLPETS for non-levelling of performance effective task scheduling (PETS). The performance of LSlack is amongst the best of the heuristics evaluated (with BL and LMT). Additionally, the heuristic enhancements MMSTFT and TMWD can significantly improve the makespan of generated schedules. To facilitate performance evaluation, a DAG simulator is implemented, which provides a set of tools for DAG job configuration, execution and monitoring. The components of the DAG simulator are also presented in this paper.
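The upward-levelling idea behind levelU can be sketched as follows, assuming (since the exact definition is not given here) that sink tasks sit at level 0 and each task sits one level above its highest successor, so that tasks within a level have no dependencies on one another.

```python
# A minimal sketch of upward DAG levelling in the spirit of the levelU
# enhancement (the precise definition is an assumption): sinks get level 0,
# every other task sits one level above the highest level of its successors.
def upward_levels(succs):
    """succs[t]: list of successor task ids (empty list for sinks)."""
    levels = {}
    def level(t):
        if t not in levels:
            levels[t] = 0 if not succs[t] else 1 + max(level(s) for s in succs[t])
        return levels[t]
    for t in succs:
        level(t)
    return levels

succs = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(upward_levels(succs))   # {'d': 0, 'b': 1, 'c': 1, 'a': 2}
```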

13.
王小乐  黄宏斌  邓苏 《自动化学报》2012,38(11):1870-1879
For the static task scheduling problem of parallel computing in heterogeneous environments, with the goal of minimizing the makespan of a directed acyclic graph (DAG), the computation of task upward rank in the HEFT (Heterogeneous Earliest Finish Time) algorithm is modified to obtain a more reasonable task ordering, yielding an earliest-finish-time-first list scheduling algorithm, IHEFT (Improved Heterogeneous Earliest Finish Time). When computing a task's upward rank, IHEFT computes the rank separately for the task assigned to each resource and takes the minimum, which is more accurate than HEFT's computation based on the task's average processing time over all resources. After the task order is determined, tasks are assigned to the best resource under an earliest-finish-time-first policy, with task start and finish times satisfying the communication-time constraints on the DAG's directed edges. Tests using example data from the literature and randomly generated DAGs with specified structural properties compare IHEFT with HEFT, CPOP (Critical-Path-On-a-Processor), and LDCP (Longest Dynamic Critical Path); the results show that IHEFT is more effective and has low time complexity.
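For reference, the classic HEFT upward rank that IHEFT modifies is rank_u(t) = w(t) + max over successors s of (c(t, s) + rank_u(s)), where w(t) is the task's average execution time across resources and c(t, s) the average communication cost. The sketch below computes this standard rank with illustrative data; IHEFT's change, evaluating the rank per resource and taking the minimum, is noted in a comment but not reproduced.

```python
# A minimal sketch of HEFT-style upward rank: rank_u(t) = w(t) + max over
# successors s of (c(t, s) + rank_u(s)). Here w(t) is the average execution
# time of t across resources (as in HEFT); IHEFT would instead evaluate the
# rank per resource and take the minimum.
from functools import lru_cache

def upward_ranks(succs, avg_exec, avg_comm):
    """succs[t]: successors of t; avg_exec[t]: mean execution time of t;
    avg_comm[(t, s)]: mean transfer time on edge (t, s)."""
    @lru_cache(maxsize=None)
    def rank(t):
        best = 0.0
        for s in succs.get(t, []):
            best = max(best, avg_comm[(t, s)] + rank(s))
        return avg_exec[t] + best
    return {t: rank(t) for t in avg_exec}

succs = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
avg_exec = {"a": 2.0, "b": 4.0, "c": 3.0, "d": 1.0}
avg_comm = {("a", "b"): 1.0, ("a", "c"): 2.0, ("b", "d"): 1.0, ("c", "d"): 0.5}
ranks = upward_ranks(succs, avg_exec, avg_comm)
# Tasks are then listed in decreasing rank order: a, b, c, d here.
print(sorted(ranks, key=ranks.get, reverse=True))
```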

14.
Earlier work has developed the underpinnings of the IC-scheduling theory, a framework for scheduling computations having intertask dependencies - modeled via directed acyclic graphs (DAGs) - for Internet-based computing. The goal of the schedules produced is to render tasks eligible for execution at the maximum possible rate, with the dual aim of 1) utilizing remote clients' computational resources well by always having work to allocate to an available client and 2) lessening the likelihood of a computation's stalling for lack of eligible tasks. The DAGs handled by the theory thus far are those that can be decomposed into a given collection of bipartite building block DAGs via the operation of DAG decomposition. A basic tool in constructing schedules is a relation >, which allows one to "prioritize" the scheduling of a complex DAG's building blocks. The current paper extends the IC-scheduling theory in two ways: by expanding significantly the repertoire of DAGs that the theory can schedule optimally and by allowing one sometimes to shortcut the algorithmic process required to find optimal schedules. The expanded repertoire now allows the theory to schedule optimally, among other DAGs, a large range of DAGs that are either "expansive", in the sense that they grow outward from their sources, or "reductive", in the sense that they grow inward toward their sinks. The algorithmic shortcuts allow one to "read off" an optimal schedule for a DAG from a given optimal schedule for the DAG's dual, which is obtained by reversing all arcs (thereby exchanging the roles of sources and sinks).

15.
Expensive dataflow queries, which may involve large-scale computations operating on significant volumes of data, are typically executed on distributed platforms to improve application performance. Among these, cloud computing has emerged as an attractive option for users to execute dataflows, allowing them to select proper configurations (e.g., number of machines) to achieve desired trade-offs between execution time and monetary cost. Discovering dataflow schedules that exhibit the best trade-offs within a plethora of potential solutions can be challenging, especially in a heterogeneous environment where resource characteristics like performance and price can vary. To increase resource utilization, users may also submit multiple dataflows for execution concurrently. Traditionally, building fair schedules (schedules where the slowdown of all dataflows due to resource sharing is similar) while achieving good performance is a major concern. However, considering fairness in the cloud computing setting, where monetary cost is part of the optimization objectives, significantly increases the difficulty of the scheduling problem. This paper proposes an algorithm for the scheduling of multiple dataflows on heterogeneous clouds that identifies Pareto-optimal solutions (schedules) in the three-dimensional space formed by the different trade-offs between overall execution time, monetary cost and fairness. The results show that in most cases the proposed approach can provide solutions with fairer schedules without significantly impacting the quality of the execution-time-to-monetary-cost skyline, compared to the state of the art where the fairness of a solution is not taken into account.
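The core of identifying Pareto-optimal schedules in the (execution time, monetary cost, fairness) space is non-dominance filtering, sketched below with invented objective values; all three objectives are treated here as quantities to be minimized (unfairness rather than fairness).

```python
# A minimal sketch of extracting the Pareto-optimal schedules in a
# three-objective space (time, cost, unfairness), all to be minimized.
def pareto_front(solutions):
    """solutions: list of (label, (time, cost, unfairness)) tuples."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [(name, obj) for name, obj in solutions
            if not any(dominates(other, obj) for _, other in solutions if other != obj)]

solutions = [
    ("s1", (100.0, 5.0, 0.30)),
    ("s2", (120.0, 4.0, 0.20)),
    ("s3", (110.0, 5.5, 0.35)),   # dominated by s1 in all three objectives
]
print([name for name, _ in pareto_front(solutions)])   # ['s1', 's2']
```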

16.
This paper addresses the problem of minimizing the scheduling length (makespan) of a batch of jobs with different arrival times. A job is described by a directed acyclic graph (DAG) of parallel tasks. The paper proposes a dynamic scheduling method that adapts the schedule when new jobs are submitted and that may change the processors assigned to a job during its execution. The scheduling method is divided into a scheduling strategy and a scheduling algorithm. We also propose an adaptation of the Heterogeneous Earliest-Finish-Time (HEFT) algorithm, called here P-HEFT, to handle parallel tasks in heterogeneous clusters with good efficiency without compromising the makespan. The results of a comparison of this algorithm with another DAG scheduler, using a simulation of several machine configurations and job types, show that P-HEFT gives a shorter makespan for a single DAG but scores worse for multiple DAGs. Finally, the results of the dynamic scheduling of a batch of jobs using the proposed scheduling method show significant improvements for more heavily loaded machines when compared to the alternative resource reservation approach.

17.
Processor specialization has become the development trend of the modern processor industry, and it is quite possible that this will remain the mainstream over the next decades of the semiconductor era. As the diversity of heterogeneous systems grows, organizing computation efficiently on systems with multiple kinds of heterogeneous processors is a challenging problem and will be the norm. In this paper, we analyze some state-of-the-art task scheduling algorithms for heterogeneous computing systems and propose a Degree of Node First (DONF) algorithm for task scheduling of fine-grained parallel programs on heterogeneous systems. The major innovations of DONF include: 1) simplifying task priority calculation for directed acyclic graph (DAG) based fine-grained parallel programs, which not only reduces the complexity of task selection but also enables the algorithm to solve the scheduling problem for dynamic DAGs; and 2) building a novel communication model in the processor selection phase that makes task scheduling much more efficient. These are achieved by exploring fine-grained parallelism via a dataflow program execution model and validated through experimental results with a selected set of benchmarks. The results on synthesized and real-world application DAGs show very good performance: the proposed DONF algorithm significantly outperforms all the evaluated state-of-the-art heuristic algorithms in terms of scheduling length ratio (SLR) and efficiency.
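A minimal sketch of a degree-based selection rule in the spirit of DONF follows; the exact priority formula is an assumption here: among ready tasks (all predecessors done), pick the one with the most not-yet-scheduled successors, a quantity that is cheap to maintain even when the DAG grows dynamically.

```python
# A minimal degree-based ready-task selection sketch (the priority rule is an
# assumption, not the paper's exact formula): the ready task that unlocks the
# most downstream work is scheduled first.
def pick_ready_task(succs, done):
    """succs[t]: successors of t; done: set of already scheduled tasks."""
    preds = {t: set() for t in succs}
    for u, vs in succs.items():
        for v in vs:
            preds[v].add(u)
    ready = [t for t in succs if t not in done and preds[t] <= done]
    if not ready:
        return None
    return max(ready, key=lambda t: sum(s not in done for s in succs[t]))

succs = {"a": ["b", "c", "d"], "b": ["e"], "c": ["e"], "d": [], "e": []}
print(pick_ready_task(succs, done=set()))    # 'a' (3 pending successors)
print(pick_ready_task(succs, done={"a"}))    # 'b' (ties with 'c'; 'b' found first)
```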

18.
This paper proposes a scheduling algorithm to solve the problem of task scheduling in a cloud computing system with time-varying communication conditions. The algorithm converts the scheduling problem with communication changes into a directed acyclic graph (DAG) scheduling problem with fuzzy communication task nodes, that is, a scheduling problem for a communication-change DAG (CC-DAG). The CC-DAG contains both computation task nodes and communication task nodes. First, the paper proposes a weighted time-series network bandwidth model to solve the indefinite processing time (cost) problem of a fuzzy communication task node; this model can accurately predict the processing time of such a node. Second, to address the scheduling order of the computation task nodes, a dynamic pre-scheduling search strategy (DPSS) is proposed. This strategy computes the essential paths for the pre-scheduling of the computation task nodes, based on the actual computation costs (times) of the computation task nodes and the predicted processing costs (times) of the fuzzy communication task nodes during the scheduling process. The computation task node with the longest essential path is scheduled first, because its completion time directly influences the completion time of the task graph. Finally, the proposed algorithm is demonstrated via simulation experiments. The experimental results show that the proposed DPSS improves the total execution time by between 11.5% and 21.2%, and that the proposed algorithm provides better-quality schedules, suitable for executing scientific application tasks in cloud computing environments, than the HEFT, PEFT, and CEFT algorithms.
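A minimal sketch of a weighted time-series bandwidth predictor consistent with the abstract's description follows; the geometric weighting scheme and the numbers are assumptions for illustration. Recent bandwidth samples get larger weights, and the prediction converts a fuzzy communication task's data size into an estimated transfer time.

```python
# A minimal sketch of a weighted time-series bandwidth predictor (the exact
# weighting scheme is an assumption): recent samples are weighted more
# heavily, and the predicted bandwidth yields an estimated transfer time
# for a fuzzy communication task node.
def predict_bandwidth(samples, decay=0.5):
    """samples: bandwidth observations in MB/s, oldest first; decay in (0, 1)."""
    weights = [decay ** (len(samples) - 1 - i) for i in range(len(samples))]
    return sum(w * s for w, s in zip(weights, samples)) / sum(weights)

def estimated_transfer_time(data_mb, samples):
    return data_mb / predict_bandwidth(samples)

samples = [80.0, 100.0, 60.0, 70.0]            # most recent sample last
print(round(predict_bandwidth(samples), 2))     # 72.0 MB/s, weighted toward recent values
print(round(estimated_transfer_time(350.0, samples), 2))   # estimated seconds for 350 MB
```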
