Similar Documents
20 similar documents found (search time: 15 ms)
1.
Cryptographic technology is the foundation of cloud computing security. High-performance cryptographic cards that support SR-IOV virtualization are well suited to cloud cipher machines and can provide virtualized data-encryption services for cloud environments, meeting their security requirements. To address the poor compatibility, limited expandability, weak migratability, and low cost-effectiveness of such cards when used in cloud cipher machines, this paper proposes a software virtualization method for cryptographic cards based on an I/O front-end/back-end model, using shared memory or VIRTIO as the communication...
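The abstract names the key mechanism: a software front end in the guest hands requests to a back end that owns the device, over shared memory or VIRTIO. The minimal sketch below (not the paper's implementation; the XOR "cipher", the queue, and all names are illustrative) shows the shape of that split: a front end fills a shared buffer and signals a back-end worker, which transforms the buffer in place and signals completion.

```python
# Minimal sketch of the I/O front-end/back-end split for a virtualized crypto
# device: the front-end places a request describing a buffer on a shared
# request ring (a queue here), and the back-end serves it by transforming the
# buffer in place.  XOR stands in for the real cipher; all names are
# illustrative, not the paper's implementation.
import queue
import threading

KEY = 0x5A                       # placeholder key; a real card keeps keys inside

def backend(req_q: queue.Queue) -> None:
    """Back-end: owns the (simulated) crypto engine and serves requests."""
    while True:
        req = req_q.get()
        if req is None:          # shutdown signal
            break
        buf, done = req
        for i in range(len(buf)):
            buf[i] ^= KEY        # "encrypt" the shared buffer in place
        done.set()               # completion notification back to the front-end

if __name__ == "__main__":
    req_q = queue.Queue()
    threading.Thread(target=backend, args=(req_q,), daemon=True).start()

    shared_buf = bytearray(b"sensitive tenant data")   # buffer visible to both ends
    done = threading.Event()
    req_q.put((shared_buf, done))    # front-end: "kick" the back-end
    done.wait()
    print(shared_buf)                # now holds the (XOR-masked) ciphertext
    req_q.put(None)
```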

2.
A Security Protection Model for KVM Virtualization Live Migration   Cited by: 2 (self-citations: 0, citations by others: 0)
范伟  孔斌  张珠君  王婷婷  张杰  黄伟庆 《软件学报》2016,27(6):1402-1416
Live migration moves a virtual machine between hosts dynamically and transparently to the user, ensuring that computing tasks complete; it offers load balancing, removal of hardware dependencies, and efficient resource utilization. However, during migration the virtual machine's data and user information are exposed on the network, so the security of live migration in virtualized environments has become a widespread concern for users and a hot topic in academic research. Starting from the virtualization mechanism and the source code of virtualized operating systems, and taking the security of live migration as the entry point, this paper first analyzes the memory-leakage risks that arise during live migration. It then combines the principles, communication mechanism, and migration mechanism of KVM (Kernel-based Virtual Machine) virtualization to design a new security protection model based on hybrid random-transform encoding; the model adds data-monitoring and security modules at both the source and destination ends of a live migration to protect the migrated data. Finally, extensive experiments evaluate the model's protection capability and its impact on virtual machine performance. The results show that the model secures live migration in a KVM environment while balancing virtual machine security and migration performance.
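The abstract does not specify the encoding itself, so the sketch below only illustrates the general idea of a keyed random transform applied to each migrated memory page at the source and inverted at the destination (here a seeded XOR mask plus byte permutation). It is not the paper's hybrid scheme, and the seed handling is illustrative.

```python
# Sketch of the general idea behind a random-transform encoding of migration
# pages: each page is XOR-masked and byte-permuted with a per-migration seed
# at the source and inverted at the destination.  Concept illustration only,
# not the encoding used in the paper.
import random

PAGE = 4096

def encode_page(page: bytes, seed: int) -> bytes:
    rng = random.Random(seed)
    mask = bytes(rng.randrange(256) for _ in range(len(page)))
    perm = list(range(len(page)))
    rng.shuffle(perm)
    masked = bytes(b ^ m for b, m in zip(page, mask))
    out = bytearray(len(page))
    for src, dst in enumerate(perm):
        out[dst] = masked[src]
    return bytes(out)

def decode_page(blob: bytes, seed: int) -> bytes:
    rng = random.Random(seed)
    mask = bytes(rng.randrange(256) for _ in range(len(blob)))
    perm = list(range(len(blob)))
    rng.shuffle(perm)
    masked = bytearray(len(blob))
    for src, dst in enumerate(perm):
        masked[src] = blob[dst]
    return bytes(b ^ m for b, m in zip(masked, mask))

if __name__ == "__main__":
    page = bytes(range(256)) * (PAGE // 256)
    seed = 0xC0FFEE              # would be negotiated by the security modules
    assert decode_page(encode_page(page, seed), seed) == page
    print("page round-trips through the transform")
```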

3.
Network virtualization provides the ability to run multiple concurrent virtual networks over a shared substrate. However, it is challenging to design such a platform to host multiple heterogeneous and often highly customized virtual networks: not only is a high degree of flexibility desired so that virtual networks can customize their functions, but fast packet forwarding is also required. This paper presents PdP, a flexible network virtualization platform capable of high-speed packet forwarding. A PdP node has multiple machines that perform packet processing for the virtual networks hosted in the system. To forward packets at high speed, the data plane of a virtual network in PdP can be allocated multiple forwarding machines that process packets in parallel. Furthermore, a virtual network in PdP can be fully customized: both its control plane and data plane run in virtual machines, isolated from other virtual networks. We have built a proof-of-concept PdP prototype using off-the-shelf commodity hardware and open-source software. The performance evaluation results show that our system closely matches the best-known packet forwarding speed of software routers running on commodity hardware.
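A rough way to picture the parallel data plane is flow-hashing the packets of one virtual network across several forwarding workers, as in the sketch below (a scheduling illustration only, not the PdP prototype; the worker count and packet format are made up).

```python
# Illustration of the parallel data-plane idea: packets belonging to one
# virtual network are hashed by flow onto several forwarding workers so they
# can be processed in parallel (the workers stand in for forwarding machines).
from multiprocessing import Pool
from zlib import crc32

N_WORKERS = 4

def forward_bucket(bucket):
    """One forwarding worker: processes all packets assigned to it."""
    out = []
    for flow, payload in bucket:
        # a real worker would do a FIB lookup and header rewrite here
        out.append(f"flow {flow}: forwarded {len(payload)} bytes")
    return out

if __name__ == "__main__":
    packets = [(f"10.0.0.{i % 8}->10.0.1.1", b"x" * (64 + i)) for i in range(32)]
    # flow-hash packets onto workers so each flow stays on one worker (keeps order)
    buckets = [[] for _ in range(N_WORKERS)]
    for pkt in packets:
        buckets[crc32(pkt[0].encode()) % N_WORKERS].append(pkt)
    with Pool(N_WORKERS) as pool:
        for lines in pool.map(forward_bucket, buckets):
            print("\n".join(lines))
```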

4.
In current virtual desktop deployments, the conflict between end users' growing demand for 3D graphics processing and the limited GPU capability of virtual machines is becoming increasingly prominent. To address this problem, typical approaches to GPU virtualization are studied. Building on an analysis of these techniques, an improved scheme combining the dedicated-device (pass-through) method and API remoting is introduced. The hypervisor creates two kinds of virtual machines: one parent VM (GVM) and multiple child VMs (DVMs). The GVM has exclusive ownership of the physical GPU, while the DVMs have no direct interaction with it. The two kinds of VMs share GPU memory and a command channel: GPU call commands issued in a DVM are forwarded to the GVM, which invokes the physical GPU, writes the results back to the shared memory region, and presents them to the user. Finally, the improved GPU virtualization method is compared with the typical methods, its advantages and disadvantages are summarized, and future research directions are outlined.
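The forwarding path described above (a DVM issues GPU calls, the GVM executes them on the physical device and returns results through shared memory) can be pictured with the following sketch. The command channel is modeled with queues, the "GPU" is simulated, and all names are illustrative rather than taken from the paper.

```python
# Sketch of the GVM/DVM command channel: a DVM marshals "GPU calls" onto a
# shared channel; the GVM, which owns the physical GPU, dispatches them and
# returns the results.  The GPU itself is simulated; names are illustrative.
from multiprocessing import Process, Queue

def gvm(cmd_q: Queue, res_q: Queue) -> None:
    """Parent VM: exclusive owner of the (simulated) GPU."""
    gpu_ops = {
        "vec_add": lambda a, b: [x + y for x, y in zip(a, b)],
        "vec_scale": lambda a, k: [x * k for x in a],
    }
    while True:
        call = cmd_q.get()
        if call is None:
            break
        op, args = call
        res_q.put(gpu_ops[op](*args))        # execute on behalf of the DVM

class DVMStub:
    """Child VM side: forwards every call instead of touching the GPU."""
    def __init__(self, cmd_q, res_q):
        self.cmd_q, self.res_q = cmd_q, res_q
    def call(self, op, *args):
        self.cmd_q.put((op, args))
        return self.res_q.get()

if __name__ == "__main__":
    cmd_q, res_q = Queue(), Queue()
    parent = Process(target=gvm, args=(cmd_q, res_q))
    parent.start()
    dvm = DVMStub(cmd_q, res_q)
    print(dvm.call("vec_add", [1, 2, 3], [10, 20, 30]))   # [11, 22, 33]
    print(dvm.call("vec_scale", [1, 2, 3], 4))            # [4, 8, 12]
    cmd_q.put(None)
    parent.join()
```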

5.
Driven by the rapid growth of cloud computing, virtualization, as the key enabling platform of cloud computing, is developing irreversibly. Virtualization allows multiple virtual machines to run on a single computer, with strong isolation between them and no direct binding to the hardware. Starting from why virtualization emerged, this paper introduces the architecture and features of the UVP virtualization platform, along with the performance-optimization, power-management, security, and enhancement technologies used in it. The significance of these key technologies lies in improving system performance, strengthening security, and easing later maintenance and extension.

6.
Virtualization is a key technology for enabling cloud computing. The driver-domain model for network virtualization offers isolation and a high level of flexibility, but it suffers from poor performance and lacks scalability. In this paper, we evaluate the networking performance of virtual machines within Xen. The I/O channel transferring packets between the driver domain and the virtual machines is shown to be the bottleneck. To overcome this limitation, we propose a packet-aggregation mechanism for transferring packets from the driver domain to the virtual machines. Packet aggregation, combined with efficient core allocation, allows virtual machine throughput to scale up by 700% while minimizing both memory and CPU consumption, and its impact on packet delay and jitter remains acceptable. Hence, the proposed I/O virtualization model enables infrastructure providers to offer cloud computing services.
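The aggregation idea is simple to sketch: the driver domain batches packets destined for a guest and crosses the I/O channel once per batch instead of once per packet. The thresholds and container below are illustrative, not the values or structures used in the paper.

```python
# Sketch of the packet-aggregation idea: the driver domain batches packets
# destined for one guest into a single container before crossing the I/O
# channel, trading a little latency for far fewer per-packet notifications.
MAX_PKTS = 32          # flush after this many packets
MAX_BYTES = 64 * 1024  # or after this many bytes

class Aggregator:
    def __init__(self, send):
        self.send = send          # callback standing in for the I/O channel
        self.batch, self.bytes = [], 0

    def push(self, pkt: bytes) -> None:
        self.batch.append(pkt)
        self.bytes += len(pkt)
        if len(self.batch) >= MAX_PKTS or self.bytes >= MAX_BYTES:
            self.flush()

    def flush(self) -> None:
        if self.batch:
            self.send(self.batch)          # one channel crossing, many packets
            self.batch, self.bytes = [], 0

if __name__ == "__main__":
    crossings = []
    agg = Aggregator(lambda batch: crossings.append(len(batch)))
    for _ in range(100):
        agg.push(b"\x00" * 1500)
    agg.flush()                            # timer-driven flush in a real system
    print(f"100 packets delivered in {len(crossings)} channel crossings")
```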

7.
Moving business to the cloud has become a trend in recent years, and the COVID-19 pandemic has accelerated it. However, public clouds do not suit every user: for data-privacy reasons in particular, many users, especially government users, prefer to build their own private or hybrid clouds in the post-pandemic era. Hyper-converged infrastructure (HCI) is an effective way to achieve this. In HCI appliances, compute, network, and storage resources are fully virtualized, and traditional physical network device units are replaced by software. To obtain high-performance packet forwarding, many innovative technologies have emerged; DPDK stands out among them and is widely adopted, allowing developers to build diverse, customized forwarding applications. Virtualization and DPDK greatly improve resource utilization and forwarding performance, and lower the difficulty and cost of building data centers or private clouds for enterprises and institutions of all sizes. At the same time, this high degree of virtualization poses great challenges for network operators: the virtual network elements have no physical counterpart, so the virtual network looks to operators like a "black box". When a failure such as packet loss occurs, traditional troubleshooting methods designed for physical network devices no longer apply, which greatly increases troubleshooting time and in turn affects business continuity. To address this problem, a continuous packet-loss detection system for virtual networks, Flowprobe, is designed; the system aims to solve DPDK user...

8.
The importance of heterogeneous multicore programming is increasing, and Open Computing Language (OpenCL) is an open industrial standard for parallel programming that provides a uniform programming model for programmers to write efficient, portable code for heterogeneous computing devices. However, OpenCL is not supported in the system virtualization environments that are often used to improve resource utilization. In this paper, we propose an OpenCL virtualization framework based on Kernel-based Virtual Machine with API remoting to enable multiplexing of multiple guest virtual machines (guest VMs) over the underlying OpenCL resources. The framework comprises three major components: (i) an OpenCL library implementation in guest VMs for packing/unpacking OpenCL requests/responses; (ii) a virtual device, called virtio-CL, that is responsible for the communication between guest VMs and the hypervisor (also called the VM monitor); and (iii) a thread, called CL thread, that is used for the OpenCL API invocation. Although the overhead of the proposed virtualization framework is directly affected by the amount of data to be transferred between the OpenCL host and devices because of the primitive nature of API remoting, experiments demonstrated that our virtualization framework has a small virtualization overhead (mean of 6.8%) for six common device-intensive OpenCL programs and performs well when the number of guest VMs involved in the system increases. These results indirectly infer that the framework allows for effective resource utilization of OpenCL devices. Copyright © 2012 John Wiley & Sons, Ltd.
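The essence of API remoting as described above is marshalling a call on the guest side, carrying it over a transport, and unpacking and executing it on the host side. The sketch below illustrates that round trip with a socketpair standing in for virtio-CL and a fake device operation; the opcode and framing are invented for the example.

```python
# Sketch of API remoting for an OpenCL-style call: the guest library packs a
# request (opcode + payload) into bytes, the transport (a socketpair standing
# in for virtio-CL) carries it to the host side, and a "CL thread" unpacks and
# executes it.  Opcodes and the fake device operation are illustrative.
import socket
import struct
import threading

OP_VEC_SQUARE = 1
HDR = struct.Struct("!II")        # opcode, payload length

def cl_thread(conn: socket.socket) -> None:
    """Host side: unpack requests and run them on the (simulated) device."""
    while True:
        hdr = conn.recv(HDR.size)
        if not hdr:
            break
        op, n = HDR.unpack(hdr)
        payload = conn.recv(n)
        if op == OP_VEC_SQUARE:
            vec = struct.unpack(f"!{n // 4}f", payload)
            out = struct.pack(f"!{len(vec)}f", *(x * x for x in vec))
            conn.sendall(HDR.pack(op, len(out)) + out)

def remote_vec_square(conn: socket.socket, vec):
    """Guest-side library stub: marshal, send, wait for the reply."""
    payload = struct.pack(f"!{len(vec)}f", *vec)
    conn.sendall(HDR.pack(OP_VEC_SQUARE, len(payload)) + payload)
    op, n = HDR.unpack(conn.recv(HDR.size))
    return list(struct.unpack(f"!{n // 4}f", conn.recv(n)))

if __name__ == "__main__":
    guest_end, host_end = socket.socketpair()
    threading.Thread(target=cl_thread, args=(host_end,), daemon=True).start()
    print(remote_vec_square(guest_end, [1.0, 2.0, 3.0]))   # [1.0, 4.0, 9.0]
```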

9.
Virtualization is a key technology for scaling high-performance computing systems. The virtual computing resource testbed at the Institute of High Energy Physics is built on the OpenStack cloud platform. This paper discusses three key factors in integrating virtual computing resources with the computing system: network architecture design, environment matching, and overall system planning. First, the virtual network architecture is discussed. The platform deploys the neutron component, OVS, and the 802.1Q protocol to connect the virtual and physical networks directly at layer 2, and configures the physical switches for layer-3 forwarding, avoiding the bottleneck of routing traffic through the OpenStack network nodes. Second, to integrate virtual resources into the computing system, information must be synchronized dynamically with the system's components to support DNS, automated configuration, and monitoring. The paper introduces the in-house NETDB component, which synchronizes virtual machine information with the domain name system (DNS), the automated installation and management system (puppet), and the monitoring system. Finally, under overall system planning, the paper discusses unified authentication, shared storage, automated deployment, scaling, and images.
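As a schematic of the kind of synchronization a component like NETDB performs (polling the cloud platform's VM inventory and reconciling DNS, configuration, and monitoring entries), consider the loop below. Every helper function is a hypothetical stub invented for illustration; it does not reflect NETDB's or OpenStack's real interfaces.

```python
# Schematic of an inventory-reconciliation loop: poll the cloud platform for
# the current set of VMs and bring DNS and monitoring in line with it.
# All helpers are hypothetical stubs, not NETDB's actual interfaces.
def list_vms_from_openstack():
    # stub: would query the cloud platform for (hostname, ip) of active VMs
    return {"vm-worker-01": "192.168.10.11", "vm-worker-02": "192.168.10.12"}

def current_dns_records():
    # stub: would query the DNS zone for existing A records
    return {"vm-worker-01": "192.168.10.11"}

def upsert_dns(host, ip):
    print(f"DNS     +/~ {host} -> {ip}")      # stub: add or update an A record

def delete_dns(host):
    print(f"DNS     -   {host}")              # stub: remove a stale A record

def register_monitoring(host):
    print(f"monitor +   {host}")              # stub: enroll the VM in monitoring

def reconcile_once():
    desired, actual = list_vms_from_openstack(), current_dns_records()
    for host, ip in desired.items():
        if actual.get(host) != ip:
            upsert_dns(host, ip)
            register_monitoring(host)
    for host in set(actual) - set(desired):
        delete_dns(host)

if __name__ == "__main__":
    reconcile_once()        # a real daemon would repeat this on an interval
```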

10.
Compared with traditional monolithic-kernel virtualization architectures, microkernel-based virtualization has a small trusted computing base and is amenable to full formal verification. However, in a microkernel-based architecture, even virtual machines running on the same physical host must transfer data through the NIC driver to communicate, which is inefficient. To address this, a method is proposed to accelerate communication between virtual machines on the same physical host: by adding a communication-data selection module and a forwarding module to the network service, inter-VM data transfers can be completed directly in memory. Experiments show that the method effectively improves inter-VM communication efficiency.
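The selection/forwarding idea can be sketched as follows: on send, the network service checks whether the destination VM is co-resident and, if so, hands the payload over through an in-memory channel instead of the NIC-driver path. The structures below are illustrative, not the paper's implementation.

```python
# Sketch of the selection/forwarding idea: when the destination VM lives on
# the same physical host, the network service delivers the payload through an
# in-memory channel instead of the NIC-driver path.
from collections import deque

class NetworkService:
    def __init__(self):
        self.local_vms = {}                  # vm_id -> in-memory rx queue

    def register(self, vm_id: str) -> deque:
        self.local_vms[vm_id] = deque()
        return self.local_vms[vm_id]

    def send(self, dst_vm: str, payload: bytes) -> str:
        if dst_vm in self.local_vms:         # selection module: co-resident?
            self.local_vms[dst_vm].append(payload)   # forwarding in memory
            return "memory path"
        return self.send_via_nic(dst_vm, payload)    # fall back to the driver

    def send_via_nic(self, dst_vm: str, payload: bytes) -> str:
        # stub for the ordinary NIC-driver transmit path
        return "nic path"

if __name__ == "__main__":
    svc = NetworkService()
    rx = svc.register("vm-b")
    print(svc.send("vm-b", b"hello"))        # memory path
    print(svc.send("vm-remote", b"hello"))   # nic path
    print(rx.popleft())                      # b'hello' received by vm-b
```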

11.
Cloud computing is emerging as an increasingly popular computing paradigm, allowing dynamic scaling of the resources available to users as needed. This requires a highly accurate demand-prediction and resource-allocation methodology that can provision resources in advance, thereby minimizing the virtual machine downtime required for resource provisioning. In this paper, we present a dynamic resource demand prediction and allocation framework for multi-tenant service clouds. The novel contribution of our framework is that it classifies service tenants according to whether their resource requirements will increase; based on this classification, it prioritizes prediction for those tenants whose resource demand will increase, thereby minimizing prediction time. Furthermore, our approach assigns service tenants to matched virtual machines and allocates the virtual machines to physical host machines using a best-fit heuristic. Performance results demonstrate that the best-fit heuristic can efficiently allocate virtual machines to hosts so that the hosts are utilized to their full capacity. Copyright © 2016 John Wiley & Sons, Ltd.
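The abstract names a best-fit heuristic for mapping virtual machines to hosts; a minimal generic version is sketched below. The single-dimension "resource units" and the example capacities are simplifying assumptions, not the paper's model.

```python
# Sketch of a best-fit placement pass: each VM goes to the host whose
# remaining capacity is smallest but still sufficient, packing hosts tightly.
def best_fit(vms, hosts):
    """vms: {name: demand}; hosts: {name: capacity}. Returns {vm: host}."""
    free = dict(hosts)
    placement = {}
    for vm, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
        candidates = [(cap, h) for h, cap in free.items() if cap >= demand]
        if not candidates:
            raise RuntimeError(f"no host can fit {vm}")
        _, host = min(candidates)            # tightest remaining fit
        free[host] -= demand
        placement[vm] = host
    return placement

if __name__ == "__main__":
    vms = {"tenant-a": 6, "tenant-b": 4, "tenant-c": 3, "tenant-d": 2}
    hosts = {"host-1": 8, "host-2": 8}
    print(best_fit(vms, hosts))
```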

12.
Directive-based programming models, such as OpenMP, OpenACC, and OmpSs, enable users to accelerate applications using coprocessors with little effort. These devices offer significant computing power, but their use can introduce two problems: an increase in the total cost of ownership and underutilization, because not all codes match their architecture. Remote accelerator virtualization frameworks address these problems. In particular, rCUDA provides transparent access to any graphics processing unit installed in a cluster, reducing the number of accelerators and increasing their utilization ratio. Joining these two technologies, directive-based programming models and rCUDA, is thus highly appealing. In this work, we study the integration of OmpSs and OpenACC with rCUDA, describing and analyzing several applications over three different hardware configurations that include two InfiniBand interconnects and three NVIDIA accelerators. Our evaluation reveals favorable performance results, showing low overhead and similar scaling factors when using remote accelerators instead of local devices.

13.
Server consolidation is very attractive for cloud computing platforms as a way to improve energy efficiency and resource utilization. Advances in multi-core processors and virtualization technologies have enabled many workloads to be consolidated onto a single physical server. However, current virtualization technologies do not ensure performance isolation among guest virtual machines, which results in degraded performance due to contention for shared resources and violations of the service level agreement (SLA) of the cloud service. In that sense, minimizing performance interference among co-located virtual machines is the key to a successful server consolidation policy on cloud computing platforms. In this work, we propose a performance model that considers interference in the shared last-level cache and memory bus. Our performance interference model can estimate how much an application will hurt others and how much it will suffer from others. We also present a virtual machine consolidation method called swim, which is based on our interference model. Experimental results show that the average performance degradation ratio achieved by swim is comparable to that of the optimal allocation.
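The consolidation idea behind an interference-aware placer like swim can be sketched as follows: given model estimates of how much each application hurts and suffers from a co-runner, pair applications so that the total predicted degradation is small. The matrix values below are invented; the real model would derive them from last-level-cache and memory-bus measurements.

```python
# Sketch of interference-aware consolidation: pair applications greedily to
# minimise total predicted slowdown, given a (made-up) interference matrix.
from itertools import combinations

apps = ["web", "analytics", "cache", "batch"]
# degrade[i][j]: fractional slowdown of i when sharing a server with j
degrade = {
    "web":       {"analytics": 0.18, "cache": 0.05, "batch": 0.30},
    "analytics": {"web": 0.10, "cache": 0.22, "batch": 0.35},
    "cache":     {"web": 0.04, "analytics": 0.25, "batch": 0.28},
    "batch":     {"web": 0.12, "analytics": 0.20, "cache": 0.15},
}

def pair_cost(a, b):
    return degrade[a][b] + degrade[b][a]     # hurt in both directions

def greedy_pairs(apps):
    remaining, pairs = set(apps), []
    while len(remaining) > 1:
        a, b = min(combinations(remaining, 2), key=lambda p: pair_cost(*p))
        pairs.append((a, b, pair_cost(a, b)))
        remaining -= {a, b}
    return pairs

if __name__ == "__main__":
    for a, b, cost in greedy_pairs(apps):
        print(f"co-locate {a} + {b}: predicted combined slowdown {cost:.2f}")
```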

14.
Yang Jian, Xiang Zhen, Mou Lisha, Liu Shumu 《Multimedia Tools and Applications》2020,79(47-48):35353-35367

The virtualized resource allocation (mapping) algorithm is the core issue in network virtualization technology. General-purpose, high-quality resource allocation algorithms not only provide efficient and reliable sharing of network resources for systems and users, but also simplify resource scheduling and management, improve the utilization of underlying resources, balance network load, and optimize network performance. Motivated by wireless sensor network applications, this paper proposes a wireless sensor network architecture based on cloud computing: the WSN hardware resources are mapped into cloud computing resources through virtualization, and a resource allocation strategy for this architecture is proposed. Experiments evaluate the performance of the resource allocation strategy. The proposed heuristic is a distributed algorithm: whereas centralized algorithms have high complexity, distributed algorithms can handle subproblems in parallel and reduce the time required to reach a good solution with limited communication traffic.


15.
One of the techniques used to improve the I/O performance of virtual machines is paravirtualization. Paravirtualized devices are intended to reduce the performance overhead of full virtualization, where all hardware devices are emulated. The interface of a paravirtualized device is not identical to that of the underlying hardware, so the OS of the guest virtual machine must be ported in order to use it. In this paper, the network virtualization done by the Kernel-based Virtual Machine (KVM) is described. The KVM model differs from other Virtual Machine Monitors (VMMs) because KVM is a Linux kernel module and depends on hardware support. In this work, the overhead of using such virtual networks has been measured: a paravirtualized model using the virtio [38] network driver is described, and performance results of a web benchmark on the two models are presented.
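For concreteness, selecting the paravirtualized model in practice usually amounts to giving the guest a virtio NIC. The sketch below uses the libvirt Python bindings to hot-plug one, assuming libvirt-python on a QEMU/KVM host, a running guest named guest1, and the default libvirt network (the guest and network names are assumptions for the example).

```python
# Minimal sketch of selecting the paravirtualized (virtio) network model with
# the libvirt Python bindings: attach a virtio-net interface on the "default"
# network to a running guest.  Assumes libvirt-python, a QEMU/KVM host, and a
# guest named "guest1"; both names are assumptions.
import libvirt

VIRTIO_NIC_XML = """
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>   <!-- 'e1000' or 'rtl8139' would be fully emulated -->
</interface>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("guest1")
dom.attachDevice(VIRTIO_NIC_XML)        # hot-plug the paravirtualized NIC
print("attached virtio NIC to", dom.name())
conn.close()
```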

16.
Seamless hardware-software integration in reconfigurable computing systems   Cited by: 3 (self-citations: 0, citations by others: 3)
Ideally, reconfigurable-system programmers and designers should code algorithms and write hardware accelerators independently of the underlying platform. To realize this scenario, the authors propose a portable, hardware-agnostic programming paradigm that delegates platform-specific tasks to a system-level virtualization layer. This layer supports a chosen programming model and hides platform details from users, much as general-purpose computers do. We introduce a multithreaded programming model for reconfigurable computing based on a unified virtual-memory image for both the software and hardware parts of an application. We also address the challenge of achieving seamless hardware-software interfacing and portability with minimal performance penalties.

17.
A resource management framework for collaborative computing systems over multiple virtual machines (CCSMVM) is presented to increase the performance of computing systems by improving resource utilization, constructing a scalable computing environment for on-demand resource use. We design the framework around the strengths of components from grid, virtualized, and cloud computing platforms to reduce system overhead and maintain workload balance, supported by virtual appliances, the Xen API, application virtualization, and so on. The elements of collaborative computing, the basis of virtualized resource management, and key technologies including resource planning, resource allocation, resource adjustment, resource release, and collaborative computing scheduling are designed in detail. A prototype has been built, and experiments verify its correctness and feasibility. System evaluations show that the time spent in resource allocation and release is proportional to the number of virtual machines, whereas the time spent in virtual machine migration is not. CCSMVM achieves higher CPU utilization and better performance than other systems such as Eucalyptus 2.0 and Globus 4.0; comparative analysis shows that it accelerates system execution by improving average CPU utilization. Our study of this resource management framework is of some significance for optimizing the performance of virtualized computing systems.

18.
Microkernel virtualization architectures such as NOVA address the oversized trusted computing base and attack surface of monolithic-kernel platforms, but they still lack security mechanisms such as graded protection of virtual machines and access control over I/O resources. This paper introduces the concept of security domains, assigns virtual machines to different domains, and builds a customizable access-control mechanism for I/O resources. By adding access-control modules to the critical code paths of I/O resource access, access control across security domains is enforced. Experiments show that the mechanism improves data isolation and security while imposing only a small performance overhead on compute-intensive and I/O-intensive tasks.
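The security-domain mechanism can be pictured as a policy check inserted on the I/O access path: each VM belongs to a domain, and each domain carries a set of permitted I/O resources. The domains, resources, and policy table below are illustrative, not those of the prototype.

```python
# Sketch of the security-domain idea: each VM is assigned to a domain, each
# domain carries an I/O access policy, and the check sits on the I/O path.
from enum import Enum

class Domain(Enum):
    HIGH = "high"      # e.g. management VMs
    MEDIUM = "medium"
    LOW = "low"        # untrusted tenant VMs

POLICY = {
    Domain.HIGH:   {"nic0", "nic1", "disk0", "crypto0"},
    Domain.MEDIUM: {"nic0", "disk0"},
    Domain.LOW:    {"nic0"},
}

VM_DOMAIN = {"mgmt-vm": Domain.HIGH, "app-vm": Domain.MEDIUM, "guest-vm": Domain.LOW}

def check_io_access(vm: str, resource: str) -> bool:
    """Called on the critical path of every I/O resource access."""
    return resource in POLICY[VM_DOMAIN[vm]]

if __name__ == "__main__":
    for vm, res in [("mgmt-vm", "crypto0"), ("guest-vm", "disk0"), ("app-vm", "nic0")]:
        verdict = "allow" if check_io_access(vm, res) else "deny"
        print(f"{vm:9s} -> {res:8s}: {verdict}")
```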

19.
Recent developments in virtualization technologies have led to renewed interest in the performance evaluation of these systems. The maturity of virtualization has put the focus on provisioning IT services to maximize profit, scalability, and QoS. This pioneering solution facilitates the deployment of datacenter applications and grid and cloud computing services; however, challenges remain. It is necessary to investigate the trade-off between overall system performance and revenue, and to ensure the service-level agreements of submitted workloads. Although a growing body of literature has investigated virtualization overhead and virtual machine interference, accurate performance evaluation of virtualized systems is still lacking. In this paper, we present in-depth performance measurements of a Xen-based virtualized web server and support this experimental study with queuing network modeling. Based on these quantitative and qualitative analyses, we present results that are important for evaluating consolidated workloads on the Xen hypervisor. First, the CPU and disk demands of both CPU-intensive and disk-intensive workloads are independent of the rate submitted to the unprivileged domain when dedicated core(s) are pinned to the virtual machines. Second, request response time depends not only on processing time in the unprivileged domain but also on the number of flipped pages in Domain 0. Finally, the results show that the proposed modeling methodology predicts the QoS parameters well in both para-virtualized and hardware virtual machine modes, given the request content size. Copyright © 2015 John Wiley & Sons, Ltd.
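The paper builds a full queuing network model; the single M/M/1 queue below only illustrates the basic relationship such models rest on, namely that response time grows sharply as the arrival rate approaches the service capacity. The rates are illustrative.

```python
# Single M/M/1 queue as a stand-in for the basic response-time relationship
# underlying queueing-network models of a virtualized web server.
def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
    """Mean response time R = 1 / (mu - lambda) for a stable M/M/1 queue."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: lambda must be < mu")
    return 1.0 / (service_rate - arrival_rate)

if __name__ == "__main__":
    mu = 200.0                                  # requests/s the domain can serve
    for lam in (50, 100, 150, 180, 195):        # offered load in requests/s
        r = mm1_response_time(lam, mu)
        print(f"lambda={lam:3d}/s  utilisation={lam/mu:.2f}  R={1000*r:6.2f} ms")
```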

20.
王卅  张文博  吴恒  宋云奎  魏峻  钟华  黄涛 《软件学报》2015,26(8):2074-2090
Virtualization has become a key supporting technology of cloud computing platforms. It greatly improves data-center resource utilization and reduces management cost and energy consumption, but it also introduces a new problem: performance interference. When multiple virtual machines on the same platform over-compete for a shared hardware resource (such as CPU or cache), virtual machine performance degrades severely; yet for security and portability, platform operators should avoid intrusive monitoring of guest virtual machines. How to estimate virtual machine performance interference transparently and effectively from the host level therefore becomes a challenge that virtualization platform operators must face. To address this challenge, this paper proposes a method for estimating virtual machine performance interference based on hardware counters. Hardware counters record hardware events generated while programs run (such as CPU cycles and cache misses). Prior work has mainly exploited task similarity in large distributed systems to locate nodes with abnormal counter values, without exploring the direct relationship between changes in hardware events and performance interference. Experiments show that hardware counters (last-level cache miss rates, LLC miss rates) correlate differently with the performance interference of applications with different resource demands; on this basis, a virtual machine interference estimation model is built to estimate virtual machine performance. The results show that the method effectively predicts the interference experienced by CPU-intensive and network-intensive applications while adding less than 10% overhead to the system.
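On the measurement side, LLC miss statistics of a co-located process can be sampled with the Linux perf tool and fed to a simple estimator, as in the sketch below. It assumes perf is installed and that the CPU and kernel expose the named events; the CSV parsing is minimal, the target PID is hypothetical, and the linear coefficients are placeholders rather than the paper's fitted model.

```python
# Sketch of the measurement side: sample LLC miss statistics for a process
# with Linux `perf stat` and turn the miss rate into a crude linear
# interference estimate.  Event names, PID, and coefficients are assumptions.
import subprocess

def llc_miss_rate(pid: int, seconds: int = 1) -> float:
    """Return LLC misses per LLC load for `pid` over a short window."""
    cmd = ["perf", "stat", "-x", ",", "-e", "LLC-load-misses,LLC-loads",
           "-p", str(pid), "--", "sleep", str(seconds)]
    out = subprocess.run(cmd, capture_output=True, text=True).stderr
    counts = {}
    for line in out.splitlines():           # CSV: value,unit,event,...
        fields = line.split(",")
        if len(fields) > 3 and fields[0].strip().isdigit():
            counts[fields[2]] = int(fields[0])
    return counts.get("LLC-load-misses", 0) / max(counts.get("LLC-loads", 1), 1)

def predicted_slowdown(miss_rate: float, coeff: float = 2.5, bias: float = 0.02) -> float:
    """Placeholder linear model: slowdown ~ coeff * miss_rate + bias."""
    return coeff * miss_rate + bias

if __name__ == "__main__":
    target_pid = 1234                       # hypothetical co-located VM process
    rate = llc_miss_rate(target_pid)
    print(f"LLC miss rate {rate:.3f} -> predicted slowdown {predicted_slowdown(rate):.2%}")
```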
