Similar Literature
20 similar documents found; search took 140 ms.
1.
A honey-bee-foraging-based task load balancing algorithm for cloud computing environments   Total citations: 1 (self: 0, by others: 1)
To address the long response times and high communication costs of task schedulers in cloud computing environments, a honey bee behavior-based load balancing (HBB-LB) algorithm is proposed. First, load balancing across virtual machines (VMs) is used to maximize throughput; next, the priorities of the tasks on each machine are balanced; finally, the balancing effort focuses on reducing the waiting time of tasks in VM waiting queues, thereby improving overall throughput and priority handling. Simulation experiments in a cloud environment modeled with the CloudSim toolkit show that, compared with particle swarm optimization (PSO), ant colony optimization (ACO), dynamic load balancing (DLB), first-in first-out (FIFO), and weighted round robin (WRR), HBB-LB reduces average response time by 5%, 13%, 17%, 67%, and 37%, respectively, and makespan by 20%, 23%, 18%, 55%, and 46%, respectively. It balances non-preemptive independent tasks better and is well suited to heterogeneous cloud computing systems.
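The abstract above only describes HBB-LB at a high level; the following is a minimal Python sketch of the honey-bee-foraging idea it outlines, in which tasks removed from overloaded VMs act as scout bees that move to under-loaded VMs, higher-priority tasks first. The class names, thresholds, and data are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a honey-bee-foraging style load balancer (illustrative only;
# names and thresholds are assumptions, not the HBB-LB paper's actual code).
from dataclasses import dataclass, field

@dataclass
class Task:
    tid: int
    length: int        # instruction count
    priority: int      # higher value = more urgent

@dataclass
class VM:
    vid: int
    mips: int                              # processing capacity
    tasks: list = field(default_factory=list)

    def load(self):
        return sum(t.length for t in self.tasks) / self.mips

def hbb_balance(vms, high=1.2, low=0.8):
    """Move tasks ("bees") from overloaded VMs to underloaded VMs."""
    avg = sum(vm.load() for vm in vms) / len(vms)
    overloaded = [vm for vm in vms if vm.load() > high * avg]
    underloaded = [vm for vm in vms if vm.load() < low * avg]
    for src in overloaded:
        # High-priority tasks scout first, mirroring the priority-balancing step.
        for task in sorted(src.tasks, key=lambda t: -t.priority):
            if src.load() <= high * avg or not underloaded:
                break
            dst = min(underloaded, key=VM.load)   # least-loaded VM ~ richest "food source"
            src.tasks.remove(task)
            dst.tasks.append(task)
    return vms

if __name__ == "__main__":
    vms = [VM(0, 1000, [Task(1, 8000, 2), Task(2, 6000, 1)]), VM(1, 1000, [])]
    hbb_balance(vms)
    print([(vm.vid, round(vm.load(), 2)) for vm in vms])
```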

2.
To address the low virtual machine migration efficiency and high energy consumption seen in current cloud load-balancing scheduling, a hybrid load-balancing algorithm combining an osmosis-based technique with artificial bee colony and ant colony optimization is proposed. The algorithm couples chemical-osmosis-inspired behavior with bio-inspired load balancing, exploiting the strengths of both the artificial bee colony and ant colony optimizers while applying the osmosis technique to load balancing. Because osmosis supports automatic deployment of virtual machines migrated across the cloud infrastructure, the algorithm overcomes the shortcomings of existing bio-inspired algorithms in balancing load among physical machines and improves migration efficiency. Experimental results show that, compared with existing load-balancing algorithms, the proposed algorithm noticeably improves migration performance.

3.
This paper studies migrating virtual machines between hosts to improve system load balance (in two respects: CPU and disk I/O) while keeping migration cost as low as possible; the goal is therefore to find as good a mapping between hosts and virtual machines as possible. The paper introduces the notion of virtual machine affinity, defines how the affinity index is computed, and builds a genetic-algorithm-based virtual machine scheduling model. In this model, the crossover operation drives the affinity index of mapping schemes upward, while the mutation operation makes the difference between hosts' CPU and disk I/O loads converge. In each generation, the selection strategy pools parent and offspring individuals and passes those with higher fitness to the next generation, so the population keeps evolving and yields the final solution space of mapping schemes. On this basis, a genetic-algorithm-based virtual machine balancing scheduling algorithm is proposed. It selects the optimal solution from the final mapping solution space, so the load-balancing problem is considered from a global perspective; it evaluates the impact of migration in advance and performs actual migration only once the optimal migration scheme is obtained, which reduces migration cost; and it uses the MTALB algorithm to distribute multiple task types evenly across virtual machines, further improving system load balance. Experimental results show that, in terms of migration cost and the specific load-balancing metrics, the proposed algorithm outperforms first-fit, round-robin, and NABM scheduling across the board. On the key metric of task processing rate, it improves on first-fit/round-robin and NABM by an average of 25% and 12%, respectively.
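As an illustration of a genetic-algorithm search over VM-to-host mappings of the kind described above, here is a minimal sketch with elitist selection over parents and offspring. The fitness function below simply rewards low CPU and disk I/O imbalance; the paper's affinity index and MTALB details are not reproduced, so all names and data are assumptions.

```python
# Illustrative GA over VM-to-host mappings (simplified; the paper's affinity index
# and operators are not public here, so fitness is just 1 / (1 + load imbalance)).
import random

HOSTS = 4
VM_CPU = [4, 2, 8, 1, 6, 3]      # CPU demand per VM (assumed example data)
VM_IO  = [2, 5, 1, 4, 3, 2]      # disk I/O demand per VM

def fitness(mapping):
    cpu = [0.0] * HOSTS
    io = [0.0] * HOSTS
    for vm, host in enumerate(mapping):
        cpu[host] += VM_CPU[vm]
        io[host] += VM_IO[vm]
    imbalance = (max(cpu) - min(cpu)) + (max(io) - min(io))
    return 1.0 / (1.0 + imbalance)

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(mapping, rate=0.1):
    return [random.randrange(HOSTS) if random.random() < rate else h for h in mapping]

def evolve(pop_size=20, generations=100):
    pop = [[random.randrange(HOSTS) for _ in VM_CPU] for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        children = [mutate(crossover(*random.sample(parents, 2))) for _ in range(pop_size)]
        # Elitist selection: parents and children compete, the fittest half survive.
        pop = sorted(parents + children, key=fitness, reverse=True)[:pop_size]
    return pop[0]

if __name__ == "__main__":
    best = evolve()
    print("best mapping:", best, "fitness:", round(fitness(best), 3))
```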

4.

Big data analytics in cloud environments introduces challenges such as real-time load balancing in addition to security, privacy, and energy efficiency. This paper proposes a novel load balancing algorithm for cloud environments that performs resource allocation and task scheduling efficiently. The proposed load balancer reduces the execution response time of big data applications running on clouds. Scheduling, in general, is an NP-hard problem. Our proposed algorithm reduces the search area, which in turn reduces the complexity of load balancing. We propose two mathematical optimization models for dynamic resource allocation to virtual machines and for task scheduling. The solution is based on the hill-climbing algorithm to minimize response time. We evaluate the performance of the proposed algorithms against existing algorithms in terms of response time, turnaround time, throughput, and request distribution, and observe significant improvements.
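Since the abstract names hill climbing as the core of the solution, the following minimal sketch shows a hill-climbing search over task-to-VM assignments that accepts only moves reducing an estimated response time. The cost model (makespan of the most loaded VM) and the example data are assumptions, not the paper's exact formulation.

```python
# Minimal hill-climbing sketch for task-to-VM assignment (assumed cost model:
# response time ~ makespan of the most loaded VM; not the paper's exact model).
import random

TASK_LEN = [40, 25, 60, 15, 30, 50, 20]   # example task lengths
VM_SPEED = [10, 20, 15]                   # example VM speeds

def response_time(assign):
    finish = [0.0] * len(VM_SPEED)
    for task, vm in enumerate(assign):
        finish[vm] += TASK_LEN[task] / VM_SPEED[vm]
    return max(finish)                     # makespan as a proxy for response time

def hill_climb(iters=1000):
    current = [random.randrange(len(VM_SPEED)) for _ in TASK_LEN]
    best = response_time(current)
    for _ in range(iters):
        neighbour = current[:]
        neighbour[random.randrange(len(TASK_LEN))] = random.randrange(len(VM_SPEED))
        cost = response_time(neighbour)
        if cost < best:                    # accept only improving moves
            current, best = neighbour, cost
    return current, best

if __name__ == "__main__":
    assign, cost = hill_climb()
    print("assignment:", assign, "estimated response time:", round(cost, 2))
```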


5.
Task scheduling in heterogeneous environments such as cloud data centers is considered an NP-complete problem. Efficient task scheduling balances the load on the virtual machines (VMs), thereby achieving effective resource utilization. Hence there is a need for a new scheduling framework that performs load balancing while considering multiple quality of service (QoS) metrics such as makespan, response time, execution time, and task priority. A multi-core Web server likewise finds it difficult to maintain dynamic balance when scheduling remote dynamic requests, so the traditional scheduling algorithm must be improved to enhance its practical effectiveness. This article studies the multi-core Web server, focusing on its queuing model. On this basis, the authors identify the drawbacks of the multi-core Web server's remote dynamic request scheduling algorithm and improve the traditional algorithm in light of a requirements analysis. The improved algorithm not only overcomes the drawbacks of traditional algorithms but also keeps the system threads carrying equal amounts of work, so the server remains in dynamic balance and customer requests are served effectively.

6.
To address the long scheduling time of existing multi-objective scheduling methods and their degraded performance under bursty conditions, a multi-objective load-balancing technique based on memetic optimization and round-robin scheduling is proposed. A burst detector inspects user requests sent to the cloud servers to determine the load state. Based on the detector's result, different load-balancing algorithms are applied to schedule user tasks efficiently. The selected load-balancing algorithm dispatches user request tasks to the virtual machine with the best resources, ensuring that a load-balanced state is reached with minimal time consumption. Experimental results show that, compared with other algorithms, the method has clear advantages on several performance metrics, improving scheduling efficiency while minimizing energy use in the cloud.

7.
Reducing energy consumption has become an important task in cloud datacenters. Many existing scheduling approaches in cloud datacenters try to consolidate virtual machines (VMs) onto the minimum number of physical hosts and hence minimize energy consumption. The VM live migration technique is used to dynamically consolidate VMs onto as few physical machines (PMs) as possible; however, it introduces high migration overhead. Furthermore, the cost factor is usually not taken into account by existing approaches, which leads to high payment costs for cloud users. In this paper, we aim to achieve energy reduction for cloud providers and payment savings for cloud users while introducing no VM migration overhead and without compromising deadline guarantees for user tasks. Motivated by the fact that some tasks have relatively loose deadlines, we can further reduce energy consumption by proactively postponing such tasks without waking up new PMs. A heuristic task scheduling algorithm called Energy and Deadline Aware with Non-Migration Scheduling (EDA-NMS) is proposed, which exploits the looseness of task deadlines and tries to postpone the execution of tasks with loose deadlines in order to avoid waking up new PMs. When determining VM instance types, EDA-NMS selects the types that are just sufficient to guarantee task deadlines, reducing user payment cost. The results of extensive experiments show that our algorithm outperforms other existing algorithms in achieving energy efficiency without introducing VM migration overhead and without compromising deadline guarantees.

8.
Efficient task scheduling is critical to achieving high performance in a grid computing environment. This paper studies task scheduling on the grid as an optimization problem and presents a heuristic task scheduling algorithm that satisfies resource load balancing in the grid environment. The algorithm schedules tasks by using the mean load, derived from tasks' predicted execution times, as heuristic information to obtain an initial scheduling strategy. An optimal scheduling strategy is then obtained by selecting two machines that satisfy a given condition and adjusting their loads by reassigning their tasks under the guidance of their mean load. Methods for selecting machines and tasks are given to increase system throughput and reduce total waiting time. The efficiency of the algorithm is analyzed and its performance is evaluated through extensive simulation experiments. Experimental results show that the heuristic algorithm ensures a high degree of load balancing and achieves an optimal scheduling strategy almost all of the time. Furthermore, the results show that the algorithm is highly efficient in terms of time complexity.
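A minimal sketch of the mean-load heuristic described above: tasks are first placed greedily using predicted execution times, then the most and least loaded machines are repeatedly rebalanced toward the mean load. The selection rules and example data are simplified assumptions, not the paper's exact procedure.

```python
# Illustrative mean-load heuristic: greedy initial assignment by predicted
# execution time, then pairwise rebalancing toward the mean load (assumed rules).
def initial_schedule(pred_times, n_machines):
    """pred_times[t][m] = predicted execution time of task t on machine m."""
    loads = [0.0] * n_machines
    placement = []
    for t, times in enumerate(pred_times):
        m = min(range(n_machines), key=lambda m: loads[m] + times[m])
        loads[m] += times[m]
        placement.append(m)
    return placement, loads

def rebalance(pred_times, placement, loads, rounds=10):
    mean = sum(loads) / len(loads)
    for _ in range(rounds):
        hi = max(range(len(loads)), key=loads.__getitem__)   # most loaded machine
        lo = min(range(len(loads)), key=loads.__getitem__)   # least loaded machine
        movable = [t for t, m in enumerate(placement) if m == hi]
        if not movable:
            break
        # Move the task that best pushes the light machine toward the mean load.
        t = min(movable, key=lambda t: abs(loads[lo] + pred_times[t][lo] - mean))
        if loads[lo] + pred_times[t][lo] >= loads[hi]:
            break                                            # no further improvement
        loads[hi] -= pred_times[t][hi]
        loads[lo] += pred_times[t][lo]
        placement[t] = lo
    return placement, loads

if __name__ == "__main__":
    pred = [[4, 6], [3, 2], [5, 5], [8, 7], [2, 3]]          # 5 tasks, 2 machines
    placement, loads = initial_schedule(pred, 2)
    print(rebalance(pred, placement, loads))
```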

9.
A heuristic algorithm for independent task scheduling   Total citations: 5 (self: 0, by others: 5)
Task scheduling is an NP-hard problem and an indispensable part of parallel and distributed computing, and it becomes even more complex in grid computing environments. This paper proposes a heuristic task scheduling algorithm that satisfies load balancing and gives methods for selecting processors and tasks to improve the algorithm's efficiency. Experiments show that it is a highly efficient scheduling algorithm that almost always finds the optimal schedule.

10.
Apache Flink is one of the mainstream distributed big data computing engines, and task scheduling is a key problem in distributed computing systems. Because clusters are heterogeneous and operators differ in complexity, load imbalance is unavoidable in the Flink big data computing system. To address this, a resource-feedback-based load-balancing task scheduling algorithm, RFTS, is proposed. Through three modules (real-time resource monitoring, region partitioning, and a task scheduling algorithm based on artificial glowworm swarm optimization), it assigns tasks waiting on heavily loaded machines to lightly loaded machines, achieving cluster load balance and improving cluster utilization and execution efficiency. Experimental results on the TPC-C and TPC-H datasets show that RFTS effectively improves the performance of the Apache Flink computing system in terms of both execution time and throughput.

11.
With the spread of cloud computing, a large amount of data processing is handled by cloud services. Existing algorithms seldom consider the differing computing capacities of virtual machines in heterogeneous systems, so some tasks wait too long. This paper proposes an algorithm that adjusts virtual machine workloads in real time. Considering the resource virtualization characteristics of cloud computing, it gives a method for evaluating a virtual machine's computing capacity. Based on each virtual machine's capacity and its state changes at run time, the task volume is adjusted adaptively to meet real-time requirements. Through task scheduling, task completion times are coordinated, the load across virtual machines is kept dynamically balanced, the total execution time of long jobs is shortened, and the system's throughput, overall service capability, and efficiency are improved. Experimental results show that the algorithm adaptively adjusts task volumes and schedules them so as to keep virtual machine loads balanced.

12.
An availability-based preemptive task scheduling algorithm for heterogeneous systems   Total citations: 1 (self: 0, by others: 1)
Most existing scheduling algorithms for heterogeneous systems do not consider the availability requirements raised by multiple task classes, especially preemptive tasks. Building on existing availability-based non-preemptive task scheduling algorithms, this paper determines priority levels by computing tasks' average waiting times, explores the availability-constrained scheduling of multiple classes of preemptive tasks in heterogeneous systems, and proposes an availability-based preemptive priority scheduling algorithm, P-SSAC. Without adding hardware cost, the algorithm increases system availability through scheduling, shortens the average task waiting time, and schedules preemptive tasks effectively. Simulation results show that the algorithm achieves an effective trade-off between system availability and task waiting time in heterogeneous systems.

13.
This paper presents a novel algorithm for task assignment in mobile cloud computing environments that reduces offload duration while balancing the cloudlets' loads. The algorithm is proposed for a two-level mobile cloud architecture comprising a public cloud and cloudlets. It models each cloud and cloudlet as a queue in order to account for cloudlets' limited resources and to study response time more accurately; performance factors and resource limitations of cloudlets, such as the time clients wait in cloudlets, can be determined from the queue models. We propose a hybrid genetic algorithm (GA) and Ant Colony Optimization (ACO) algorithm to minimize the mean completion time of offloaded tasks for the whole system. Simulation results confirm that the proposed hybrid heuristic algorithm yields significant improvements in mean completion time, total energy consumption of the mobile devices, and number of dropped tasks over queue-based Random, queue-based Round Robin, and queue-based weighted Round Robin assignment algorithms. To further demonstrate the benefit of the queue-based design, it is also compared with HACAS, a dynamic application scheduling algorithm that does not consider queues in cloudlets.
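As a small illustration of modelling a cloudlet as a queue, the following sketch computes the mean queueing delay of an M/M/c system using the standard Erlang C formula; the paper's actual queueing assumptions and parameters are not stated here, so this is only a generic example.

```python
# Standard M/M/c queue metric as a minimal illustration of modelling a cloudlet
# as a queue (the paper's exact queueing assumptions are not reproduced here).
from math import factorial

def mmc_wait(arrival_rate, service_rate, servers):
    """Mean waiting time in queue (Wq) for an M/M/c system; requires utilisation < 1."""
    lam, mu, c = arrival_rate, service_rate, servers
    rho = lam / (c * mu)
    if rho >= 1:
        raise ValueError("queue is unstable: utilisation must be below 1")
    a = lam / mu
    p0 = 1.0 / (sum(a**n / factorial(n) for n in range(c))
                + a**c / (factorial(c) * (1 - rho)))
    # Erlang C: probability that an arriving task must wait.
    erlang_c = (a**c / (factorial(c) * (1 - rho))) * p0
    return erlang_c / (c * mu - lam)

if __name__ == "__main__":
    # Example: 8 tasks/s arriving at a cloudlet with 4 VMs serving 3 tasks/s each.
    print("mean queueing delay (s):", round(mmc_wait(8.0, 3.0, 4), 4))
```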

14.
With the growth of the Internet industry, the drawbacks of virtual machines, such as slow creation, poor scalability, and limited flexibility, have become increasingly apparent, and container technology offers a new way to address them. Existing scheduling algorithms consider only the physical resources of worker nodes in a container cloud cluster, such as memory and CPU, and ignore the network load, which clearly affects image distribution after scheduling; as a result, container scheduling tasks wait too long and data center resources are wasted. Given the strengths of particle swarm optimization in both local exploitation and global exploration, a container scheduling algorithm based on a simulated annealing particle swarm optimization algorithm (SA-PSO) is proposed; simulated annealing helps the particle swarm escape local optima in the early stage and improves performance. In experiments on the Kubernetes platform, compared with Kubernetes' BalancedQosPriority algorithm, the SA-PSO scheduler improves overall node resource utilization and markedly reduces the minimum task waiting time. Compared with the standard PSO algorithm and a dynamic-inertia-weight PSO algorithm, it not only converges noticeably better but also improves the global-optimum node hit rate by nearly 60% over standard PSO.
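The following is a minimal sketch of the SA-PSO idea described above: a standard PSO velocity/position update combined with a Metropolis-style acceptance rule that can accept worse personal bests while the temperature is still high, helping the swarm escape local optima early on. The objective function and all parameters are assumptions, not the scheduler's actual code.

```python
# Illustrative SA-PSO sketch: standard PSO update plus a simulated-annealing
# acceptance rule on personal bests (objective and parameters are assumptions).
import math, random

def objective(x):
    # Toy multimodal cost standing in for "resource imbalance + network load".
    return sum(xi * xi - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

def sa_pso(dim=4, swarm=20, iters=200, w=0.7, c1=1.5, c2=1.5, temp0=10.0):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=objective)
    for it in range(iters):
        temp = temp0 * (1 - it / iters) + 1e-9            # cooling schedule
        for i in range(swarm):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            delta = objective(pos[i]) - objective(pbest[i])
            # Metropolis-style acceptance: always accept improvements, sometimes
            # accept worse positions while the temperature is still high.
            if delta < 0 or random.random() < math.exp(-delta / temp):
                pbest[i] = pos[i][:]
        gbest = min(pbest + [gbest], key=objective)
    return gbest, objective(gbest)

if __name__ == "__main__":
    best, cost = sa_pso()
    print("best position:", [round(v, 3) for v in best], "cost:", round(cost, 3))
```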

15.
We design a task mapper, TPCM, for assigning tasks to virtual machines, and an application-aware virtual machine scheduler, TPCS, oriented toward parallel computing, to achieve high performance in virtual computing systems. To solve the problem of mapping tasks to virtual machines, a virtual machine mapping algorithm (VMMA) in TPCM is presented to achieve load balance in a cluster. Based on such mapping results, TPCS is constructed from three components: a middleware supporting application-driven scheduling, a device driver in the guest OS kernel, and a virtual machine scheduling algorithm. These components are implemented in user space, the guest OS, and the CPU virtualization subsystem of the Xen hypervisor, respectively. In TPCS, the progress statuses of tasks are transmitted from user space to the underlying kernel, enabling the virtual machine scheduling policy to schedule based on task progress. This policy aims to trade task completion time for resource utilization. Experimental results show that TPCM can mine the parallelism among tasks to map tasks to virtual machines based on the relations among subtasks. The TPCS scheduler completes tasks in a shorter time than Credit and other schedulers, because it uses task progress to ensure that tasks in virtual machines complete simultaneously, thereby reducing the time spent pending, synchronizing, communicating, and switching. Parallel tasks can therefore collaborate with each other to achieve higher resource utilization and lower overheads. We conclude that the TPCS scheduler overcomes the shortcomings of present algorithms in perceiving task progress, making it better than schedulers currently used in parallel computing.

16.
Live virtual machine (VM) migration is a technique for achieving system load balancing in a cloud environment by transferring an active VM from one physical host to another. This technique has been proposed to reduce the downtime for migrating overloaded VMs, but it is still time- and cost-consuming, and a large amount of memory is involved in the migration process. To overcome these drawbacks, we propose a Task-based System Load Balancing method using Particle Swarm Optimization (TBSLB-PSO) that achieves system load balancing by only transferring extra tasks from an overloaded VM instead of migrating the entire overloaded VM. We also design an optimization model to migrate these extra tasks to the new host VMs by applying Particle Swarm Optimization (PSO). To evaluate the proposed method, we extend the cloud simulator (Cloudsim) package and use PSO as its task scheduling model. The simulation results show that the proposed TBSLB-PSO method significantly reduces the time taken for the load balancing process compared to traditional load balancing approaches. Furthermore, in our proposed approach the overloaded VMs will not be paused during the migration process, and there is no need to use the VM pre-copy process. Therefore, the TBSLB-PSO method will eliminate VM downtime and the risk of losing the last activity performed by a customer, and will increase the Quality of Service experienced by cloud customers.

17.
This paper proposes an algorithm for scheduling virtual machines (VMs) with energy-saving strategies on the physical servers of cloud data centers. The energy-saving strategy, together with productive resource utilization for VM deployment, is modeled by a combination of the Virtual Machine Scheduling using Bayes Theorem (VMSBT) algorithm and the Virtual Machine Migration (VMMIG) algorithm, and it is shown that this combination minimizes the data center's overall energy consumption. Virtual machine migration between the active physical servers in the data center is carried out at periodic intervals whenever a physical server is identified as under-utilized. In VM scheduling, the optimal data centers are clustered using Bayes' theorem, and VMs are scheduled to the appropriate data center using a selection policy that identifies the cluster with lower energy consumption. Clustering using Bayes' rule minimizes the number of server choices for the selection policy and enables the proposed VMSBT algorithm to schedule virtual machines onto physical servers with minimal execution time. The proposed algorithm is compared with other energy-aware VM allocation algorithms, namely an Ant Colony Optimization (ACO)-based allocation scheme and the min-min scheduling algorithm. The experimental simulation results show that the combination of VMSBT and VMMIG outperforms the other two strategies and is highly effective in scheduling VMs with reduced energy consumption, by utilizing existing resources productively and minimizing the number of active servers at any given point in time.

18.
The load-balanced transaction scheduling problem is an important issue in distributed computing environments, including grid systems. The problem is known to be NP-hard and can be solved with heuristic as well as meta-heuristic methods. We consider the problem of load-balanced transaction scheduling in a grid processing system, using Ant Colony Optimization (ACO) for load balancing. The goal is to achieve good execution characteristics for a given set of transactions that must be completed within their deadlines. We propose an ACO-based transaction processing algorithm for load-balanced transaction scheduling, and adapt two meta-heuristic and three heuristic scheduling algorithms for comparison with it. The comparison shows that the proposed algorithm provides better results for load-balanced transaction scheduling in the grid processing system.
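As an illustration of ACO applied to load-balanced assignment, the sketch below builds transaction-to-machine assignments ant by ant, biased by pheromone trails and a desirability term that favours fast, lightly loaded machines, and reinforces the best assignment found so far. The pheromone rules and example data are assumptions, not the paper's.

```python
# Minimal ACO sketch for load-balanced transaction-to-machine assignment
# (pheromone rules, heuristic, and example data are assumptions, not the paper's).
import random

EXEC = [[4, 6, 5], [3, 2, 4], [5, 5, 3], [8, 7, 6], [2, 3, 2]]  # time[task][machine]

def makespan(assign):
    loads = [0.0] * len(EXEC[0])
    for t, m in enumerate(assign):
        loads[m] += EXEC[t][m]
    return max(loads)

def aco(ants=10, iters=50, alpha=1.0, beta=2.0, rho=0.1, q=10.0):
    n_tasks, n_machines = len(EXEC), len(EXEC[0])
    tau = [[1.0] * n_machines for _ in range(n_tasks)]       # pheromone trails
    best, best_cost = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            assign, loads = [], [0.0] * n_machines
            for t in range(n_tasks):
                # Desirability favours fast machines that are still lightly loaded.
                eta = [1.0 / (EXEC[t][m] + loads[m] + 1e-9) for m in range(n_machines)]
                weights = [tau[t][m] ** alpha * eta[m] ** beta for m in range(n_machines)]
                m = random.choices(range(n_machines), weights=weights)[0]
                assign.append(m)
                loads[m] += EXEC[t][m]
            cost = makespan(assign)
            if cost < best_cost:
                best, best_cost = assign, cost
        # Evaporate pheromone, then reinforce the best-so-far assignment.
        tau = [[(1 - rho) * p for p in row] for row in tau]
        for t, m in enumerate(best):
            tau[t][m] += q / best_cost
    return best, best_cost

if __name__ == "__main__":
    print(aco())
```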

19.
With the development of virtualization and cloud computing, more and more high-performance computing applications run on cloud resources. In a virtualization-based high-performance computing cloud, an application runs in multiple virtual machines that may be placed on different physical nodes. If the virtual machines of several communication-intensive jobs are placed on the same physical node, they compete for that node's network I/O resources; if their demand exceeds the node's network I/O bandwidth, the performance of the communication-intensive jobs suffers severely. To address this contention for network I/O resources, a virtual machine placement algorithm based on network I/O load balancing, NLPA, is proposed, which uses a network I/O load-balancing strategy to reduce the contention. Experiments show that, compared with a greedy algorithm on the same set of high-performance computing job benchmarks, NLPA performs better in terms of job computation time, network I/O throughput, and network I/O load balance.

20.
Traditional physical cluster systems cannot flexibly cope with large Internet applications, so a comprehensive load-balancing mechanism for virtual machine clusters in cloud environments is proposed. The method periodically collects metrics from the cluster's virtual machine nodes, including CPU, memory, connection count, response time, and the load of the hosting physical machine; it then computes each node's weighted composite load to obtain its weight, and finally the scheduler distributes task requests accordingly. This addresses the load imbalance of traditional cluster systems and their inability to adapt to changing network conditions. Experimental results show that, compared with weighted round robin (WRR) and weighted least connections (WLC) scheduling, the mechanism maintains lower response times under high concurrency and can increase or decrease the number of virtual machines in real time according to the cluster's composite load, typically bringing the whole cluster back into balance within 5 seconds.
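A minimal sketch of the weighted composite-load idea above: each metric is normalised against a limit and combined with fixed weights, and the scheduler dispatches the next request to the node with the lowest score. The weights, metrics, and limits below are illustrative assumptions, not the paper's exact formula.

```python
# Illustrative weighted composite-load score for cluster nodes (weights and
# metric set are assumptions; the paper's exact formula is not reproduced here).
WEIGHTS = {"cpu": 0.3, "mem": 0.25, "conns": 0.2, "resp_ms": 0.15, "host_load": 0.1}

def composite_load(node, limits):
    """Normalise each metric to [0, 1] against its limit and combine with weights."""
    return sum(w * min(node[k] / limits[k], 1.0) for k, w in WEIGHTS.items())

def pick_node(nodes, limits):
    """The scheduler dispatches the next request to the node with the lowest score."""
    return min(nodes, key=lambda n: composite_load(n, limits))

if __name__ == "__main__":
    limits = {"cpu": 100, "mem": 100, "conns": 1000, "resp_ms": 500, "host_load": 100}
    nodes = [
        {"name": "vm-1", "cpu": 70, "mem": 60, "conns": 420, "resp_ms": 120, "host_load": 55},
        {"name": "vm-2", "cpu": 35, "mem": 50, "conns": 260, "resp_ms": 90, "host_load": 40},
    ]
    print("dispatch to:", pick_node(nodes, limits)["name"])
```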
