Similar Documents
1.
刘珂男  童薇  冯丹  刘景宁  张炬 《软件学报》2017,28(2):398-410
Virtualization is now widely deployed in datacenters, but mainstream virtual CPU scheduling policies do not guarantee I/O performance; in particular, when VMs running latency-sensitive workloads compete for CPU resources with VMs running compute-intensive workloads, their performance degrades significantly. To address this problem, this paper proposes FLMS, a flexible and efficient virtual CPU scheduling algorithm. FLMS reduces VM response latency through techniques such as VM classification, virtual CPU binding, and multi-class time slices, and it redesigns the load-balancing strategy for multiprocessor architectures to optimize virtual CPU migration. FLMS is applicable to today's mainstream virtualization solutions: under software virtualization it reduces latency by 30% and improves bandwidth by 10% compared with the latest optimization schemes, and on systems with hardware-assisted virtualization it achieves near-native I/O performance while preserving fairness across the whole system.
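As a rough illustration of the multi-class time-slice and vCPU-binding ideas described above (this is not the authors' implementation; the class names, slice lengths, and preemption rule are hypothetical):

```python
# Hypothetical sketch of class-based time slices in the spirit of FLMS:
# latency-sensitive VMs get short, frequent slices so pending I/O is
# serviced quickly; compute-bound VMs get long slices to preserve cache
# efficiency. All values are illustrative only.

from dataclasses import dataclass

SLICE_US = {"latency": 500, "mixed": 5_000, "compute": 30_000}  # assumed classes

@dataclass
class VCPU:
    vm_id: int
    vm_class: str                    # "latency", "mixed", or "compute"
    pinned_core: int | None = None   # FLMS-style vCPU binding

def time_slice(vcpu: VCPU) -> int:
    """Return the scheduling quantum (microseconds) for a vCPU."""
    return SLICE_US[vcpu.vm_class]

def pick_next(run_queue: list[VCPU]) -> VCPU:
    # Latency-class vCPUs jump the queue so I/O events are handled promptly.
    latency = [v for v in run_queue if v.vm_class == "latency"]
    return latency[0] if latency else run_queue[0]

if __name__ == "__main__":
    q = [VCPU(1, "compute"), VCPU(2, "latency", pinned_core=0)]
    nxt = pick_next(q)
    print(f"run VM{nxt.vm_id} for {time_slice(nxt)} us")
```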

2.
I/O scheduling algorithms have a crucial impact on the performance of disk arrays (RAID). Although many classic I/O scheduling algorithms achieve good performance under particular workloads, hardly any single algorithm performs well under all workloads. This paper proposes an intelligent RAID control model that combines a C4.5 decision tree with the AdaBoost algorithm to classify workloads automatically, and dynamically adjusts the I/O scheduling policy according to workload changes and performance feedback, achieving autonomous scheduling oriented to application requirements. Simulation results show that the adaptive scheduling algorithm adapts well, outperforms existing I/O schedulers under a variety of workloads, and is especially suitable for I/O performance optimization in multi-threaded mixed-workload environments.
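A minimal sketch of this classifier-driven scheduler selection, using scikit-learn's AdaBoost over decision trees (scikit-learn implements CART rather than C4.5, so this is only an approximation; the feature set, labels, and scheduler mapping are invented for illustration):

```python
# Boosted-decision-tree workload classifier feeding an I/O scheduler switch.
# Requires scikit-learn >= 1.2 (for the `estimator` keyword). The training
# data here is random stand-in data; a real system would log labeled
# workload feature vectors and retrain from performance feedback.

import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

SCHEDULERS = {0: "noop", 1: "deadline", 2: "cfq"}  # hypothetical mapping

rng = np.random.default_rng(0)
X = rng.random((300, 4))        # [avg_req_kb, read_ratio, seek_dist, queue_depth]
y = rng.integers(0, 3, 300)     # workload class labels (stand-in data)

clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=3),
                         n_estimators=50, random_state=0)
clf.fit(X, y)

def pick_scheduler(features):
    """Classify the current workload and return the I/O scheduler to use."""
    cls = int(clf.predict(np.asarray(features).reshape(1, -1))[0])
    return SCHEDULERS[cls]

print(pick_scheduler([64.0, 0.8, 0.1, 8.0]))
```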

3.
A soft real-time virtual machine scheduling method based on resource pre-allocation
As one of the key technologies of cloud computing, virtual machine technology has attracted wide attention in recent years. However, the virtualization layer introduces a semantic gap that degrades the performance of real-time and concurrent applications running in VMs. This paper analyzes the Credit scheduling algorithm of the Xen hypervisor, proposes an improved scheduling algorithm that addresses its shortcomings in concurrent and soft real-time scheduling, and implements a scheduler prototype. The new algorithm pre-allocates Credit shares to soft real-time VMs and adopts a dynamic time-slice mechanism to schedule soft real-time tasks periodically in a non-work-conserving manner, guaranteeing that the scheduling period meets the required execution period. By distinguishing concurrent from non-concurrent soft real-time VMs and applying different scheduling policies to each, it ensures that real-time tasks run smoothly while maintaining resource utilization. Test results show that the algorithm performs well in scheduling both concurrent and non-concurrent soft real-time tasks and satisfies the scheduling requirements of soft real-time applications.
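A small sketch of the Credit pre-allocation step, assuming each soft real-time VM declares a (runtime, period) pair; the proportional share computation and the best-effort leftover pool are illustrative assumptions, not the paper's exact mechanism:

```python
# Hypothetical credit pre-allocation: each soft real-time VM reserves CPU
# proportional to runtime/period, and the scheduler runs non-work-conserving
# (a VM that exhausts its slice waits for the next period even if the CPU
# is idle, keeping periods predictable).

from dataclasses import dataclass

@dataclass
class SoftRTVM:
    name: str
    runtime_ms: float   # CPU time needed per period
    period_ms: float    # required scheduling period

def preallocate(vms, total_credits=1000):
    """Split the credit pool proportionally to each VM's utilization."""
    shares = {}
    for vm in vms:
        util = vm.runtime_ms / vm.period_ms
        shares[vm.name] = int(total_credits * util)
    reserved = sum(shares.values())
    assert reserved <= total_credits, "soft real-time VMs are over-committed"
    shares["__best_effort__"] = total_credits - reserved  # leftover pool
    return shares

print(preallocate([SoftRTVM("vm1", 2, 10), SoftRTVM("vm2", 5, 20)]))
```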

4.
The use of virtualization technology (VT) has become widespread in modern datacenters and Clouds in recent years. In spite of its many advantages, such as provisioning of isolated execution environments and migration, current implementations of VT do not provide effective performance isolation between virtual machines (VMs) running on a physical machine (PM), due to workload interference among VMs. Generally, this interference is due to contention on physical resources, which impacts performance differently across workload configurations. To investigate the impacts of this interference, we formalize the concept of interference for a consolidated multi-tenant virtual environment. This formulation, represented as a mathematical model, can be used by schedulers to estimate the interference of a consolidated virtual environment in terms of the processing and networking workloads of running VMs and the number of consolidated VMs. Based on the proposed model, we present a novel batch scheduler that reduces the interference of running tenant VMs by pausing the VMs that contribute most to the proliferation of interference. The scheduler achieves this by selecting the set of VMs that produces the least interference using a 0–1 knapsack problem solver; the selected VMs are allowed to run while the others are paused. Users are not troubled by the brief pausing and resumption of VMs because the scheduler is designed for batch-type applications such as scientific applications. Evaluation of the makespan of VMs executed under the control of our scheduler shows nearly 33% improvement in the best case and 7% improvement in the worst case compared to running all VMs concurrently. In addition, the results show that our scheduling algorithm outperforms serial and random scheduling of VMs as well.
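The VM-selection step lends itself to a compact example. Below is a standard 0–1 knapsack dynamic program used in the way the abstract describes; the interference weights, values, and budget are made-up numbers, and the real scheduler would derive the weights from the paper's interference model:

```python
# Illustrative 0-1 knapsack selection: each VM has an interference "weight"
# and a "value" (e.g., its pending work). We pick the most valuable subset
# whose combined interference stays under a budget; the rest are paused.

def knapsack(vms, budget):
    """vms: list of (name, weight, value); returns the selected names."""
    n = len(vms)
    dp = [[0] * (budget + 1) for _ in range(n + 1)]
    for i, (_, w, v) in enumerate(vms, start=1):
        for c in range(budget + 1):
            dp[i][c] = dp[i - 1][c]
            if w <= c:
                dp[i][c] = max(dp[i][c], dp[i - 1][c - w] + v)
    # Walk back through the table to recover the chosen set.
    chosen, c = [], budget
    for i in range(n, 0, -1):
        if dp[i][c] != dp[i - 1][c]:
            name, w, _ = vms[i - 1]
            chosen.append(name)
            c -= w
    return chosen  # run these; pause the others

vms = [("vm1", 4, 10), ("vm2", 3, 7), ("vm3", 5, 9)]
print(knapsack(vms, budget=8))   # -> ['vm2', 'vm1']
```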

5.
While virtualization enables multiple virtual machines (VMs), with multiple operating systems and applications, to run within a physical server, it also complicates resource allocation when trying to guarantee the Quality of Service (QoS) requirements of the diverse applications running within these VMs. As QoS is crucial in the cloud, considerable research effort has been directed towards CPU, memory, and network allocation to provide effective QoS to VMs, but little attention has been devoted to disk resource allocation. This paper presents the design and implementation of Flubber, a two-level scheduling framework that decouples throughput and latency allocation to provide QoS guarantees to VMs while maintaining high disk utilization. The high-level throughput control regulates the pending requests from the VMs with an adaptive credit-rate controller, in order to meet the throughput requirements of different VMs and ensure performance isolation. Meanwhile, the low-level latency control, by virtue of the batch and delay earliest deadline first mechanism (BD-EDF), re-orders all pending requests from VMs based on their deadlines and batches them to disk devices, taking into account the locality of accesses across VMs. We have implemented Flubber and made extensive evaluations on a Xen-based host. The results show that Flubber can simultaneously meet the different service requirements of VMs while improving the efficiency of the physical disk. The results also show an improvement of up to 25% in VM performance over state-of-the-art approaches: for example, in contrast to the default Xen disk I/O scheduler, Completely Fair Queueing (CFQ), besides achieving the desired QoS of each VM, Flubber speeds up sequential and random reads by 17% and 25%, respectively, due to efficient physical disk utilization.
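A toy rendition of the BD-EDF ordering idea, assuming each pending request carries a deadline, a disk offset, and an owning VM; the batching window and slack threshold are invented parameters, and a real implementation would live in the host's block layer:

```python
# BD-EDF-style pass: dispatch by earliest deadline, but greedily pull
# requests that are nearby on disk into the same batch as long as their
# deadlines leave enough slack to tolerate the delay.

import heapq

def bd_edf(requests, now, batch_span=1024, slack=5.0):
    """requests: list of dicts with 'deadline', 'offset', 'vm'.
    Returns the dispatch order as a list of requests."""
    heap = [(r["deadline"], i, r) for i, r in enumerate(requests)]
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, head = heapq.heappop(heap)
        order.append(head)
        # Batch requests close to `head` on disk, if deadlines allow.
        batch = [e for e in heap
                 if abs(e[2]["offset"] - head["offset"]) <= batch_span
                 and e[0] - now > slack]
        for e in batch:
            heap.remove(e)
            order.append(e[2])
        heapq.heapify(heap)
    return order

reqs = [{"deadline": 10.0, "offset": 100, "vm": 1},
        {"deadline": 30.0, "offset": 120, "vm": 2},
        {"deadline": 12.0, "offset": 9000, "vm": 3}]
print([r["vm"] for r in bd_edf(reqs, now=0.0)])  # -> [1, 2, 3]
```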

6.
Virtualization poses new challenges to I/O performance. The single-root I/O virtualization (SR-IOV) standard allows an I/O device to be shared by multiple Virtual Machines (VMs) without losing performance. We propose a generic virtualization architecture for SR-IOV-capable devices, which can be implemented on multiple Virtual Machine Monitors (VMMs). With the support of our architecture, the SR-IOV-capable device driver is highly portable and agnostic of the underlying VMM. Because the Virtual Function (VF) driver in the SR-IOV architecture is tied to the hardware and poses a challenge to VM migration, we also propose a dynamic network interface switching (DNIS) scheme to address the migration challenge. Based on our first implementation of the network device driver, we deployed several optimizations to reduce virtualization overhead. We then conducted comprehensive experiments to evaluate SR-IOV performance. The results show that SR-IOV can achieve line-rate throughput (9.48 Gbps) and scale the network up to 60 VMs at the cost of only 1.76% additional CPU overhead per VM, without sacrificing throughput or migration capability.

7.
In virtualized environments, the VMM (virtual machine monitor) scheduler is critical to overall performance, as it allocates the physical resources. However, traditional schedulers deliver poor I/O performance under mixed workloads. Although recent research significantly improves I/O performance, it degrades the performance of computational tasks by shortening time slices and reducing cache efficiency. In order to eliminate these problems while guaranteeing I/O performance, this paper presents a multicore periodical preemption scheduling scheme with three optimization techniques: (1) periodically coalescing and handling I/O events to reduce the preemption rate and scheduling latency, which guarantees I/O performance; (2) taking advantage of multicore environments and centrally handling I/O events on different cores in a round-robin manner to lengthen time slices, which improves the performance of computational tasks; (3) using a dedicated priority for I/O event handling to preserve CPU fairness. We implement a Xen-based prototype and evaluate the performance of I/O workloads and computation-intensive workloads. The experimental results demonstrate that our scheduling scheme efficiently lengthens time slices and improves the performance of computational tasks, achieving the same I/O performance as existing approaches optimized for I/O.

8.
Multicore processors are widely used in today's computer systems, and multicore virtualization technology provides an elastic solution for utilizing multicore systems more efficiently. However, the Lock Holder Preemption (LHP) problem in virtualized multicore systems wastes significant CPU cycles, which hurts virtual machine (VM) performance and increases response latency. The more VMs the system consolidates, the worse the LHP problem becomes. In this paper, we propose an efficient consolidation-aware vCPU (CVS) scheduling scheme for multicore virtualization platforms. Based on the vCPU over-commitment rate, the CVS scheme adaptively selects one of three vCPU scheduling algorithms: co-scheduling, yield-to-head, or yield-to-tail. This is possible because vCPU scheduling actions decompose into single steps such as scheduling vCPUs simultaneously or inserting a vCPU into the run-queue at the head or tail. The CVS scheme can effectively improve VM performance in low, middle, and high VM consolidation scenarios. Using real-life parallel benchmarks, our experimental results show that the proposed CVS scheme improves overall system performance while the optimization overhead remains low.
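The policy-selection logic can be pictured as a small dispatch function; the over-commitment thresholds below are illustrative guesses, not the values used in the paper:

```python
# Hypothetical dispatcher in the spirit of the CVS scheme: choose a vCPU
# scheduling action from the over-commitment rate (runnable vCPUs per
# physical core).

def choose_policy(runnable_vcpus: int, physical_cores: int) -> str:
    rate = runnable_vcpus / physical_cores
    if rate <= 1.5:         # low consolidation: gang-schedule sibling vCPUs
        return "co-scheduling"
    if rate <= 3.0:         # medium: move a lock-waiting vCPU to queue head
        return "yield-to-head"
    return "yield-to-tail"  # high: defer it to the queue tail instead

for vcpus in (8, 16, 32):
    print(vcpus, "vCPUs on 8 cores ->", choose_policy(vcpus, 8))
```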

9.
Barely acceptable block I/O performance prevents virtualization from being widely used in the High-Performance Computing field. Although the virtio paravirtual framework brings great I/O performance improvement, there is a sharp performance degradation when accessing high-performance NAND-flash-based devices in a virtual machine, because of their data-parallel design. The primary cause is the lack of block I/O parallelism in hypervisors such as KVM and Xen. In this paper, we propose a novel design of the block I/O layer for virtualization, named VBMq. VBMq is based on the virtio paravirtual I/O model and aims to solve the block I/O parallelism issue in virtualization. It uses multiple dedicated I/O threads to handle I/O requests in parallel. Meanwhile, we use a polling mechanism to alleviate the overhead caused by frequent context switches as the VM notifies its hypervisor and vice versa. Each dedicated I/O thread is assigned to a non-overlapping core to improve performance by avoiding unnecessary scheduling. In addition, we configure CPU affinity to optimize I/O completion for each request. The CPU affinity setting is very helpful in reducing the CPU cache miss rate and increasing CPU efficiency. The prototype system is based on the Linux 4.1 kernel and QEMU 2.3.1. Our measurements show that the proposed method scales gracefully in multi-core environments, provides performance up to 39.6x better than the baseline, and approaches bare-metal performance.

10.
Virtualization technology has been widely adopted in Internet hosting centers and cloud-based computing services, since it reduces the total cost of ownership by sharing hardware resources among virtual machines (VMs). In a virtualized system, a virtual machine monitor (VMM) is responsible for allocating physical resources such as CPU and memory to individual VMs. Whereas CPU and I/O devices can be shared among VMs in a time-sharing manner, main memory is not amenable to such multiplexing. Moreover, it is often the primary bottleneck in achieving higher degrees of consolidation. In this paper, we present VMMB (Virtual Machine Memory Balancer), a novel mechanism to dynamically monitor memory demand and periodically re-balance memory among the VMs. VMMB accurately measures memory demand with low overhead and effectively allocates memory based on the demand and the QoS requirement of each VM. It is applicable even to guest OSes whose source code is not available, since VMMB does not require modifying the guest kernel. We implemented our mechanism on Linux and experimented with synthetic and realistic workloads. Our experiments show that VMMB can improve the performance of VMs that suffer from insufficient memory allocation by up to 3.6 times, with low performance overhead (below 1%) for monitoring memory demand.
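A sketch of demand- and QoS-weighted rebalancing in the spirit of VMMB; the demand numbers would come from the monitoring mechanism (here they are simply given), and the weighting scheme is an assumption for illustration:

```python
# Periodic memory rebalance: if total demand fits, grant demands and spread
# the surplus by QoS weight; under pressure, scale allocations so that
# high-QoS VMs are protected more.

def rebalance(total_mb, vms):
    """vms: {name: {"demand_mb": ..., "qos_weight": ...}}.
    Returns {name: allocation_mb}."""
    demand = sum(v["demand_mb"] for v in vms.values())
    if demand <= total_mb:
        surplus = total_mb - demand
        wsum = sum(v["qos_weight"] for v in vms.values())
        return {n: v["demand_mb"] + surplus * v["qos_weight"] / wsum
                for n, v in vms.items()}
    # Under pressure: weight each demand by its QoS before scaling down.
    wsum = sum(v["demand_mb"] * v["qos_weight"] for v in vms.values())
    return {n: total_mb * v["demand_mb"] * v["qos_weight"] / wsum
            for n, v in vms.items()}

vms = {"web": {"demand_mb": 2048, "qos_weight": 2.0},
       "batch": {"demand_mb": 4096, "qos_weight": 1.0}}
print(rebalance(total_mb=4096, vms=vms))
```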

11.
Multicore systems are widely deployed in both embedded and high-end computing infrastructures. However, traditional virtualization systems cannot effectively isolate shared microarchitectural resources among virtual machines (VMs) running on multicore systems. CPU- and memory-intensive VMs contending for these resources lead to serious performance interference, which makes virtualization less efficient and VM performance less stable. In this paper, we propose a contention-aware performance prediction model for virtualized multicore systems to quantify the performance degradation of VMs. First, we identify the performance interference factors and design synthetic micro-benchmarks to obtain each VM's contention sensitivity and intensity features, which correlate with VM performance degradation. Second, based on the contention features, we build a VM performance prediction model using machine learning techniques to quantify the precise level of performance degradation. The proposed model can be used to optimize VM performance on multicore systems. Our experimental results show that the performance prediction model achieves high accuracy, with a mean absolute error of 2.83%.

12.
There is growing demand on datacenters to serve more clients with reasonable response times, which requires more hardware resources and increases energy consumption. Energy-aware datacenters have thus been amongst the forerunners in deploying virtualization technology to multiplex their physical machines (PMs) into as many virtual machines (VMs) as possible, in order to utilize hardware resources more effectively and save power. Achieving this objective strongly depends on how smartly VMs are consolidated. In this paper, we show that blind consolidation of VMs not only fails to reduce the power consumption of datacenters but can actually waste energy. We present four models, namely the target system model, the application model, the energy model, and the migration model, to identify the performance interference between processor and disk utilization and the costs of migrating VMs. We also present a consolidation fitness metric to evaluate the merit of consolidating a number of known VMs on a PM based on the processing and storage workloads of the VMs. We then propose an energy-aware scheduling algorithm using a set of objective functions based on this consolidation fitness metric and the presented power and migration models. The proposed scheduling algorithm assigns a set of VMs to a set of PMs so as to minimize the total power consumption of PMs in the whole datacenter. Empirical results show nearly 24.9% power savings and only about 1.2% performance degradation when the proposed scheduling algorithm is used, compared with other scheduling algorithms.
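To make the objective concrete, here is a toy placement cost that combines a linear power model, a migration penalty, and a simple consolidation-fitness term penalizing co-located disk-heavy VMs; all constants and the exact functional form are assumptions, not the paper's models:

```python
# Score assigning a VM to a PM by the PM's power increase plus a migration
# penalty, discounted by a fitness term that penalizes stacking disk-heavy
# VMs on an already disk-loaded PM. Lower cost = better placement.

P_IDLE, P_MAX = 100.0, 250.0          # watts, hypothetical server

def power(cpu_util):                  # common linear power model
    return P_IDLE + (P_MAX - P_IDLE) * cpu_util

def fitness(vm_disk, pm_disk_load):   # lower combined disk load = fitter
    return 1.0 / (1.0 + vm_disk * pm_disk_load)

def placement_cost(vm, pm, migration_penalty=50.0):
    """vm/pm: dicts with 'cpu' (utilization) and 'disk' loads."""
    delta_power = power(min(1.0, pm["cpu"] + vm["cpu"])) - power(pm["cpu"])
    return (delta_power + migration_penalty) / fitness(vm["disk"], pm["disk"])

vm = {"cpu": 0.20, "disk": 0.6}
pms = [{"cpu": 0.50, "disk": 0.1}, {"cpu": 0.30, "disk": 0.7}]
best = min(range(len(pms)), key=lambda i: placement_cost(vm, pms[i]))
print("place on PM", best)   # PM 0: same power delta, less disk contention
```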

13.
In virtualized datacenters, accurately measuring the power consumption of virtual machines (VMs) is the prerequisite for fine-grained power management. However, existing VM power models can only provide power measurements with empirical accuracy and unbounded error. In this paper, we first formalize the correlation between utilization and the accuracy of a power model and compare two classes of VM power models; then we propose a novel VM power model based on a concept called the relative performance monitoring counter (PMC); finally, based on the relative-PMC power model, we propose a novel VM scheduling algorithm that uses the relative PMC information to compensate for the recursive power consumption. Theoretical analysis indicates that the proposed algorithm can provide bounded error when measuring per-VM power consumption. Extensive experiments conducted with various benchmarks on different platforms show that the error of per-VM power measurement can be significantly reduced. In addition, the proposed algorithm is effective in improving the power efficiency of a server when its virtualization ratio is high.
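For context, a common linear PMC-based power model looks like the sketch below; the event set and coefficients are placeholders, and the paper's relative-PMC refinement is not reproduced here, only the baseline it builds on:

```python
# Attribute a server's dynamic power to VMs in proportion to their hardware
# event counts, then rescale so the per-VM estimates sum to the measured
# dynamic power. Coefficients would normally be fitted by regression.

COEFF = {"instructions": 2.0e-9, "llc_misses": 1.5e-7}  # watts per event/s

def vm_power(pmc_rates):
    """pmc_rates: {event: events_per_second} sampled for one VM."""
    return sum(COEFF[e] * r for e, r in pmc_rates.items())

def attribute(total_dynamic_watts, per_vm_rates):
    """Scale model outputs so per-VM estimates sum to the measured power."""
    raw = {vm: vm_power(r) for vm, r in per_vm_rates.items()}
    scale = total_dynamic_watts / sum(raw.values())
    return {vm: p * scale for vm, p in raw.items()}

rates = {"vm1": {"instructions": 2e9, "llc_misses": 1e6},
         "vm2": {"instructions": 8e9, "llc_misses": 4e6}}
print(attribute(total_dynamic_watts=60.0, per_vm_rates=rates))
```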

14.
Improving a dynamic load-balancing strategy for Web services in a virtualized environment
To improve the scalability and automation of Web service clusters, this paper studies cluster systems from the perspectives of virtualization and load balancing, improves the existing load-collection strategy, and designs and implements XCluster, a model that automatically controls cluster size according to the measured load. The model runs in a Xen virtualized environment and monitors the load of both the host layer and the VM layer in real time. As the total load of the cluster grows, it gradually brings up new VMs to enlarge the cluster and distributes tasks appropriately across the VM nodes; when the total load falls, it gradually shuts down VMs to shrink the cluster, and the released hardware resources can be offered to other cluster systems. Theoretical analysis and experimental results show that XCluster needs only a small amount of network traffic for information collection and command dispatch, makes full use of the manageability of VMs to schedule back-end nodes, and executes the same total amount of work with as few cluster nodes as possible.
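The control loop reduces to a threshold rule, sketched below with hypothetical thresholds and stubbed start/stop hooks; a real controller would gather load from the host and VM layers and drive the Xen toolstack:

```python
# Minimal threshold-based autoscaling step in the spirit of XCluster:
# grow the cluster on sustained high load, shrink it on low load.

HIGH, LOW = 0.75, 0.30      # cluster-average load thresholds (assumed)

def autoscale(get_cluster_load, start_vm, stop_vm, active,
              min_vms=1, max_vms=16):
    """One control step; returns the new number of active VMs."""
    load = get_cluster_load()           # average of host + VM layer metrics
    if load > HIGH and active < max_vms:
        start_vm()                      # e.g., `xl create` in a real system
        return active + 1
    if load < LOW and active > min_vms:
        stop_vm()                       # e.g., `xl shutdown`
        return active - 1
    return active

# Demo with a fake load sequence instead of real monitoring.
loads = iter([0.9, 0.8, 0.5, 0.2, 0.1])
n = 2
for _ in range(5):
    n = autoscale(lambda: next(loads), lambda: None, lambda: None, n)
    print("active VMs:", n)
```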

15.
Consolidation of multiple applications on a single Physical Machine (PM) within a cloud data center can increase utilization, minimize energy consumption, and reduce operational costs. However, these benefits come at the cost of increasing the complexity of the scheduling problem. In this paper, we present a topology-aware resource management framework. As part of this framework, we introduce a Reconsolidating PlaceMent scheduler (RPM) that provides and maintains durable allocations with low maintenance costs for data centers with dynamic workloads. We focus on workloads featuring both short-lived batch jobs and latency-sensitive services such as interactive web applications. The scheduler assigns resources to Virtual Machines (VMs) and maintains packing efficiency while taking into account migration costs, topological constraints, the risk of resource contention, and the variability of the background load and its complementarity to the new VM. We evaluate the model by simulating a data center with over 65,000 PMs, structured as a three-level multi-rooted tree topology. We investigate trade-offs between factors that affect the durability and operational cost of maintaining a near-optimal packing. The results show that the proposed scheduler can scale to the number of PMs in the simulation and maintain efficient utilization with low migration costs.

16.
In general, operating systems (OSs) are designed to mediate access to device hardware by applications. They process different kinds of system calls using an indiscriminate kernel with the same configuration. Applications in cloud computing platforms are constructed from service components, and each service component is assigned to an individual virtual machine (VM), which leads to homogeneous system calls on each VM. In addition, different VMs have different requirements for kernel functions and the configuration of system parameters. Therefore, the one-size-fits-all design incurs unnecessary performance overhead and restricts the OS's processing capacity in cloud computing. In this paper, we propose an adaptive model for cloud computing to resolve the conflict between generality and performance. Our model adaptively specializes the OS of a VM according to the resource-consuming characteristics of the workloads on the VM. We implement a prototype of the adaptive model, vSpec. There are five classes of VM, according to the resource-consuming characteristics of the workloads running on them: CPU-intensive, memory-intensive, I/O-intensive, network-intensive, and compound. vSpec specializes the OS of a VM according to the VM class. We perform comprehensive experiments to evaluate the effectiveness of vSpec on benchmarks and real-world applications.

17.
The performance of disk I/O schedulers is affected by many factors, such as workloads, file systems, and disk systems. Disk scheduling performance can be improved by tuning scheduler parameters, such as the length of read timers, but such tuning is mostly done manually. To automate this process, we propose four self-learning disk scheduling schemes: Change-sensing Round-Robin, Feedback Learning, Per-request Learning, and Two-layer Learning. Experiments show that the novel Two-layer Learning Scheme performs best. It integrates workload-level and request-level learning algorithms, employing feedback learning techniques to analyze workloads, change scheduling policy, and tune scheduling parameters automatically. We discuss schemes to choose features for workload learning, divide and recognize workloads, generate training data, and integrate machine learning algorithms into the Two-layer Learning Scheme. We conducted experiments to compare the accuracy, performance, and overhead of five machine learning algorithms: Decision Tree, Logistic Regression, Naïve Bayes, Neural Network, and Support Vector Machine. Experiments with real-world and synthetic workloads show that self-learning disk scheduling can adapt to a wide variety of workloads, file systems, disk systems, and user preferences. It outperforms existing disk schedulers by as much as 15.8% while consuming less than 3%-5% of CPU time.

18.
The efficiency of batch processing is becoming increasingly important for many modern commercial service centers, e.g., clusters and cloud computing datacenters. However, periodic resource contention has become a major performance obstacle for concurrently running applications on mainstream CMP servers. I/O contention is one such obstacle: it can seriously impede both the co-running performance of batch jobs and system throughput. In this paper, a dynamic I/O-aware scheduling algorithm is proposed to lower the impact of I/O contention and to enhance co-running performance in batch processing. We set up our environment on an 8-socket, 64-core server in the Dawning Linux Cluster. Fifteen workloads ranging from 8 jobs to 256 jobs are evaluated. Our experimental results show significant improvements in the throughput of the workloads, ranging from 7% to 431%. Meanwhile, noticeable improvements in workload slowdown and the average runtime of each job are achieved. These results show that a well-tuned dynamic I/O-aware scheduler is beneficial for batch-mode services and can also enhance resource utilization via throughput improvement on modern service platforms.

19.
Consolidated environments are progressively accommodating diverse and unpredictable workloads in conjunction with virtual desktop infrastructure and cloud computing. Unpredictable workloads, however, aggravate the semantic gap between the virtual machine monitor and guest operating systems, leading to inefficient resource management. In particular, CPU management for virtual machines has a critical impact on I/O performance when the virtual machine monitor is agnostic about the internal workloads of each virtual machine. This paper presents virtual machine scheduling techniques for transparently bridging the semantic gap caused by consolidated workloads. To achieve this goal, we make the virtual machine monitor aware of task-level I/O-boundedness inside a virtual machine using inference techniques, thereby improving I/O performance without compromising CPU fairness. In addition, we address performance anomalies arising from the indirect use of I/O devices via a driver virtual machine at the scheduling level. The proposed techniques are implemented on the Xen virtual machine monitor and evaluated with micro-benchmarks and real workloads on Linux and Windows guest operating systems.

20.
Cloud computing provides scalable computing and storage resources over the Internet. These scalable resources can be dynamically organized as many virtual machines (VMs) to run user applications on a pay-per-use basis. The required resources of a VM are sliced from a physical machine (PM) in the cloud computing system, and a PM may hold one or more VMs. When a cloud provider would like to create a number of VMs, the main concern is the VM placement problem: how to place these VMs on appropriate PMs to provision their required resources. However, if two or more VMs are placed on the same PM, there is a certain degree of interference between them due to sharing non-sliceable resources, e.g., I/O resources. This phenomenon is called VM interference. VM interference affects the performance of applications running in VMs, especially delay-sensitive applications, which have quality-of-service (QoS) requirements on their data access delays. This paper investigates how to integrate QoS awareness with virtualization in cloud computing systems, formulated as the QoS-aware VM placement (QAVMP) problem. In addition to fully exploiting the resources of PMs, the QAVMP problem considers the QoS requirements of user applications and the reduction of VM interference. Therefore, the QAVMP problem involves three factors: resource utilization, application QoS, and VM interference. We first formulate the QAVMP problem as an Integer Linear Programming (ILP) model by integrating the three factors into the profit of the cloud provider. Due to the computational complexity of the ILP model, we propose a polynomial-time heuristic algorithm to solve the QAVMP problem efficiently. In the heuristic algorithm, a bipartite graph is modeled to represent all possible placement relationships between VMs and PMs. The VMs are then gradually placed on their preferred PMs to maximize the profit of the cloud provider as much as possible. Finally, simulation experiments demonstrate the effectiveness of the proposed heuristic algorithm in comparison with other VM placement algorithms.
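A greedy stand-in for the bipartite-graph heuristic: enumerate feasible (VM, PM) edges, score each with a profit combining utilization, QoS, and an interference penalty, and repeatedly take the best edge; the scoring weights and the simple additive interference term are assumptions for illustration:

```python
# Greedy bipartite placement: profit(vm, pm) rises with utilization and the
# VM's QoS value, and falls with interference against VMs already on the PM.

def profit(vm, pm, placed_on_pm):
    if pm["free"] < vm["cpu"]:
        return None                            # infeasible edge
    interference = sum(v["io"] for v in placed_on_pm) * vm["io"]
    qos_bonus = vm["qos"] * (1.0 - interference)
    utilization = vm["cpu"] / pm["cap"]
    return utilization + qos_bonus - interference

def place(vms, pms):
    placement, on_pm = {}, {p: [] for p in range(len(pms))}
    for _ in range(len(vms)):
        best = None
        for vi, vm in enumerate(vms):
            if vi in placement:
                continue
            for pi, pm in enumerate(pms):
                p = profit(vm, pm, on_pm[pi])
                if p is not None and (best is None or p > best[0]):
                    best = (p, vi, pi)
        if best is None:
            break                              # no feasible edge left
        _, vi, pi = best
        placement[vi] = pi
        on_pm[pi].append(vms[vi])
        pms[pi]["free"] -= vms[vi]["cpu"]
    return placement

vms = [{"cpu": 2, "io": 0.6, "qos": 1.0}, {"cpu": 1, "io": 0.5, "qos": 0.5}]
pms = [{"cap": 4, "free": 4}, {"cap": 2, "free": 2}]
print(place(vms, pms))   # -> {0: 1, 1: 0}
```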
