Similar documents
20 similar documents found (search time: 140 ms)
1.
Liu Kainan. Journal of Computer Applications, 2019, 39(11): 3333-3338
To reduce the energy consumption of cloud data centers, several greedy-algorithm-based virtual machine (VM) migration strategies are proposed. These strategies divide the VM migration process into three steps, physical host state detection, VM selection, and VM placement, and apply greedy algorithms to optimize the VM selection and VM placement steps respectively. The three proposed strategies are MinMax_Host_Utilization (select from the host with the lowest utilization, place on the host with the highest utilization), MaxMin_Host_Power_Usage (select from the host with the highest energy usage, place on the host with the lowest energy usage), and MinMax_Host_MIPS (select from the host with the lowest computing capacity, place on the host with the highest computing capacity). Upper or lower thresholds are set on metrics such as host CPU utilization, host energy consumption, and host computing capacity; following the greedy principle, VMs on hosts whose metrics exceed or fall below these thresholds are migrated. Test results using CloudSim as the cloud data center simulation environment show that, compared with the existing static-threshold and median-absolute-deviation migration strategies in CloudSim, the greedy-based strategies reduce overall energy consumption by 15%, the number of VM migrations by 60%, and the average SLA violation rate by 5%.
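The threshold-driven select-then-place loop described in this abstract can be sketched as a single greedy round. This is a minimal illustration assuming a simple CPU-utilization host model; the data structures, threshold values, and one-VM-per-host selection are assumptions for the sketch, not the paper's exact MinMax_Host_Utilization algorithm.

```python
# Hypothetical sketch of one greedy select-then-place round; host records,
# thresholds, and the one-VM-per-host selection are illustrative.

def select_and_place(hosts, lower=0.2, upper=0.8):
    """Pull one VM off each host whose CPU utilization lies outside
    [lower, upper], then greedily place each migrated VM on the
    most-utilized host that still stays under the upper threshold."""
    migrating = []
    for host in hosts:
        if not host["vms"]:
            continue
        util = sum(host["vms"]) / host["capacity"]
        if util > upper or util < lower:
            vm = min(host["vms"])       # greedy selection: smallest VM first
            host["vms"].remove(vm)
            migrating.append(vm)
    for vm in sorted(migrating, reverse=True):
        fits = [h for h in hosts
                if (sum(h["vms"]) + vm) / h["capacity"] <= upper]
        # Greedy placement: fullest admissible host (fallback handling
        # for the no-candidate case is omitted for brevity).
        target = max(fits, key=lambda h: sum(h["vms"]) / h["capacity"])
        target["vms"].append(vm)
    return hosts
```

A real policy would iterate until no host is overloaded; this sketch performs one pass to show the greedy selection and placement criteria.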

2.
A physical resource utilization thresholds management strategy, RUT-MS, is proposed for cloud data centers. RUT-MS further divides the VM migration process into five phases: overloaded host detection, VM selection, VM placement (phase 1), underloaded host detection, and VM placement (phase 2). An iteratively weighted linear regression method predicts the upper threshold of physical resource utilization to keep the number of overloaded hosts from growing; a minimum-energy-consumption policy performs VM selection; and the root mean square of multi-dimensional physical resources determines the lower utilization threshold, reducing the number of underloaded hosts. Experimental results show that RUT-MS significantly reduces the energy consumption and the number of VM migrations of the cloud data center, with only small increases in the SLA violation rate and the combined SLA-and-energy metric.

3.
A new physical host resource utilization thresholds management strategy (RUT-MS) is proposed. RUT-MS further divides the VM migration process in a cloud data center into five phases: overloaded host detection, VM selection, VM placement (phase 1), underloaded host detection, and VM placement (phase 2). An iteratively weighted linear regression method predicts the upper threshold of physical resource utilization to keep the number of overloaded hosts from growing; a minimum-energy-consumption policy performs VM selection; and the root mean square of multi-dimensional physical resources determines the lower utilization threshold, reducing the number of underloaded physical hosts. Experimental results show that RUT-MS significantly reduces the energy consumption and the number of VM migrations, with only small increases in the SLA (Service level agreement) violation rate and the combined SLA-and-energy metric.

4.
GA-VMP (Genetic Algorithm based Virtual Machine Placement), a genetic-algorithm-based VM placement method, is proposed. GA-VMP is an optimization algorithm applied to the VM migration process. Local Regression Robust detection is used in the host state detection phase and Minimum Migration Time selection in the VM selection phase; in the final VM placement phase, GA-VMP applies a genetic algorithm to the reallocation of VMs, forming a new VM migration model. A mathematical model of cloud data center energy consumption is designed, with minimum energy consumption as the objective function of the genetic algorithm. Simulation results on the CloudSim simulator show clear reductions in overall energy consumption, the number of VM migrations, and the SLA violation rate, with only a small increase in the balance metric. The results can serve as a reference for enterprises building energy-efficient cloud data centers.
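The placement idea behind GA-VMP, encoding a VM-to-host assignment as a chromosome and minimizing an energy objective, can be sketched as follows. The linear idle/peak power model and all GA parameters here are common defaults assumed for illustration, not the paper's energy model or tuning.

```python
import random

# Illustrative GA for VM placement; the linear power model and the GA
# parameters are assumptions, not taken from the paper.

P_IDLE, P_MAX, CAPACITY = 70.0, 250.0, 100.0

def energy(assignment, vms, n_hosts):
    """Sum a linear idle/peak power model over the hosts in use."""
    load = [0.0] * n_hosts
    for vm, host in zip(vms, assignment):
        load[host] += vm
    total = 0.0
    for l in load:
        if l == 0:
            continue              # idle hosts are assumed switched off
        if l > CAPACITY:
            return float("inf")   # infeasible placement
        total += P_IDLE + (P_MAX - P_IDLE) * (l / CAPACITY)
    return total

def ga_place(vms, n_hosts, pop=30, gens=200, seed=0):
    """Evolve assignments (VM index -> host index) toward minimum energy."""
    rng = random.Random(seed)
    popn = [[rng.randrange(n_hosts) for _ in vms] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda a: energy(a, vms, n_hosts))
        elite = popn[: pop // 2]          # keep the fitter half
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, len(vms))
            child = a[:cut] + b[cut:]     # one-point crossover
            if rng.random() < 0.2:        # mutation: move one VM
                child[rng.randrange(len(vms))] = rng.randrange(n_hosts)
            children.append(child)
        popn = elite + children
    return min(popn, key=lambda a: energy(a, vms, n_hosts))
```

With this power model the total dynamic energy is fixed by the workload, so minimizing energy amounts to consolidating VMs onto fewer hosts without overloading any of them.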

5.
TA-VMM, a temperature-aware VM migration model for cloud data centers, is proposed. When migrating, TA-VMM focuses on the temperature of the physical hosts' processors and on host load balancing. The host state detection phase identifies the candidate source hosts (MigrationFromHosts); the VM selection phase builds the candidate migration VM list (VmstoMigrateList); and the final VM placement phase re-places the candidate VMs. Simulation results on the CloudSim simulator show that the temperature threshold in TA-VMM has a significant impact on data center performance, and that TA-VMM achieves lower energy consumption than other VM migration models.

6.
GA-VMM (genetic algorithm based virtual machine migration), a genetic-algorithm-based VM migration model for cloud data centers, is proposed. At migration time, GA-VMM considers more problem dimensions than common strategies, making VM allocation and migration more reasonable and fair. Mathematical models of cloud energy consumption and live VM migration time are established, and a global genetic algorithm optimizes the migration and placement strategy. An enterprise big data center served as the cloud test environment for comparing GA-VMM with existing VM migration strategies. The results show that GA-VMM reduces the number of physical hosts used and the number of VM migrations, while SLA (service level agreement) violations remain essentially stable; GA-VMM lowers data center energy consumption and outperforms the existing migration strategies.

7.
Low energy consumption and full utilization of physical resources are the two main goals in building green cloud data centers, and VM migration models are used to optimize for them. To this end, INTER-VMM (Interrelation approach in virtual machine migration), a migration model that couples VM selection and placement, is proposed. INTER-VMM designs an energy consumption model for the cloud data center constrained by multi-dimensional physical resources, and is a migration strategy that considers host load detection, VM selection, and VM placement together. For VM selection it uses HPS (High CPU utilization selection), which selects the VM with the highest CPU utilization on an overloaded host and adds it to the candidate migration list. For VM placement it uses SAP (Space aware placement), which aims to make full use of the spare capacity of physical hosts. Simulation results show that INTER-VMM achieves better performance metrics than recent common VM migration strategies, offering a useful reference for cloud service providers.

8.
PSO-VMM (Particle swarm optimization for virtual machine migration model) is proposed. An energy consumption model constrained by multi-dimensional physical resources is designed, with minimum energy consumption as the objective function of the particle swarm optimization. The host state detection and VM selection phases use Local Regression Robust (LRR) detection and Minimum Migration Time (MMT) selection. In the VM placement phase, particle swarm optimization is applied to the large-scale reallocation of candidate migration VMs to physical hosts. Simulation results show that the PSO-VMM migration strategy improves all classes of performance metrics of the cloud platform.

9.
Liu Kainan. Computer Engineering, 2019, 45(10): 33-39
Changing the relationship between VM selection and VM placement can improve the overall performance of a cloud data center. To this end, a task-mapping-based VM selection strategy is proposed. Focusing on metrics such as task granularity, VM size, and host computing capacity, the strategy couples the VM selection and placement processes and designs the Simple, Multiple(k), Maxsize(u), and Relation algorithms to build a mathematical model of task-mapping-based VM selection. Experimental results on the CloudSim simulator show that optimizing the selection and placement processes with this strategy reduces the energy consumption and the number of VM migrations of the cloud data center, saving costs for cloud service providers.

10.
DDBS (data dependency based VM selection), a data-dependency-based VM selection algorithm for cloud data centers, is proposed. Following the approach of the CloudSim project, the VM migration process is divided into a VM selection operation and a VM placement operation. During VM selection, DDBS considers the data dependencies among VMs and places the VMs with relatively low migration cost into a candidate list, which the subsequent placement strategy uses to complete the migration. The VM selection and placement strategies in the CloudSim simulator served as the performance baselines. Experimental results show that, compared with the existing energy-aware algorithms in CloudSim, DDBS requires fewer VM migrations, consumes less energy, and offers higher availability.
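The selection idea in DDBS, preferring VMs whose migration cost is smallest once data-dependency penalties are counted, might look like the following. The cost function, weights, and data structures are hypothetical illustrations, not the paper's formulation.

```python
# Hypothetical dependency-aware VM selection: the cost weights and the
# (vm, other) dependency-pair encoding are illustrative assumptions.

def select_vm(host_vms, deps, ram_weight=1.0, dep_weight=5.0):
    """Pick the VM on an overloaded host with the smallest migration cost,
    where cost grows with VM memory size (copy time) and with the number
    of data dependencies on VMs that would stay behind on the host."""
    def cost(vm):
        local_deps = sum(1 for other in host_vms
                         if other != vm and (vm, other) in deps)
        return ram_weight * host_vms[vm] + dep_weight * local_deps
    return min(host_vms, key=cost)
```

Without the dependency term this degenerates to smallest-VM-first; the penalty steers selection away from VMs that exchange data heavily with co-located VMs.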

11.
Li Hao, Zhu Yan. Journal of Computer Applications, 2020, 40(6): 1633-1637
To address the low detection rate of the minority class when the ensemble model Xgboost is used for binary classification, an improved algorithm based on a gradient distribution adjustment strategy, LCGHA-Xgboost, is proposed. First, Loss Contribution (LC) is defined to model the loss of individual samples in Xgboost; then, Loss Contribution Density (LCD) is defined to measure how hard a sample is to classify correctly; finally, the gradient distribution adjustment algorithm LCGHA is proposed, which dynamically adjusts the first-order gradient distribution of samples according to their LCD, indirectly increasing the loss of hard samples (mostly in the minority class) and decreasing the loss of easy samples (mostly in the majority class), biasing Xgboost toward learning the hard samples. Experimental results show that, compared with the three major ensemble algorithms Xgboost, GBDT, and Random Forest, LCGHA-Xgboost improves Recall by 5.4%-16.7% and AUC by 0.94%-7.41% on several UCI data sets; on the web spam data sets WebSpam-UK2007 and DC2010 its Recall improves by 44.4%-383.3% and AUC by 5.8%-35.6%. LCGHA-Xgboost effectively improves detection of the minority class and reduces its classification error rate.
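The core idea, reweighting per-sample first-order gradients by the inverse density of their loss-contribution bin so that hard (mostly minority-class) samples weigh more, can be sketched as follows. The binning and scaling rules here are illustrative; the paper's exact LC/LCD definitions may differ.

```python
import math

# Minimal sketch of density-based gradient adjustment in the spirit of
# LCGHA; the bin count and inverse-density weight are assumptions.

def adjusted_gradients(preds, labels, n_bins=10):
    """Scale each sample's first-order logistic-loss gradient by the
    inverse density of its gradient-magnitude bin, so samples in sparse
    (hard) bins contribute more and samples in dense (easy) bins less."""
    grads = []
    for p, y in zip(preds, labels):
        prob = 1.0 / (1.0 + math.exp(-p))
        grads.append(prob - y)            # dL/dp for log loss
    n = len(grads)
    counts = [0] * n_bins
    bins = []
    for g in grads:
        b = min(int(abs(g) * n_bins), n_bins - 1)
        bins.append(b)
        counts[b] += 1
    # Inverse-density weight: samples in sparse bins get boosted.
    return [g * (n / (n_bins * counts[b])) for g, b in zip(grads, bins)]
```

In a boosting library this adjustment would sit inside a custom objective that returns the reweighted first-order gradients alongside the usual second-order terms.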

12.
Many attempts [1, 7, 8, 35] have been made to overcome the limit imposed by the Turing Machine [34] to realise general mathematical functions and models of (physical) phenomena.

They center around the notion of computability.

In this paper we propose a new definition of computability which lays the foundations for a theory of cybernetic and intelligent machines in which the classical limits imposed by discrete algorithmic procedures are offset by the use of continuous operators on unlimited data. This data is supplied to the machine in a totally parallel mode, as a field or wave.

This theory of machines draws its concepts from category theory, Lie algebras, and general systems theory. It permits the incorporation of intelligent control into the design of the machine as a virtual element. The incorporated control can be realized in many (machine) configurations of which we give three:

a) a quantum mechanical realization appropriate to a possible understanding of the quantum computer and other models of the physical microworld,

b) a stochastic realization based on Kolmogorov-Gabor theory leading to a possible understanding of generalised models of the physical or thermodynamic macroworld, and lastly

c) a classical mechanical realization appropriate to the study of a new class of robots.

Particular applications at a fundamental level are cited in geometry, mathematics, biology, acoustics, aeronautics, quantum mechanics, general relativity, and Markov chains. The proposed theory therefore opens a new way towards understanding the processes that underlie intelligence.


13.
The last few years have seen the development of Discrete Event-Dynamic Net Systems [1, 2] as instruments for modeling complex systems. They are able to achieve the following objectives:

—formality of the modeling methodology

—ability to model static and dynamic aspects

—ability to pass between levels of differently rich structures by morphisms

—uniform representation of the communication process as

—an information process

—a decision process and

—a control process

—homogeneity of the representation and modeling methods

—ability to derive qualitative and quantitative statements.

The foundation is provided by a Discrete Event-Dynamic Net System which includes the axiomatic declaration of general Petri nets. In order to calculate the structural and dynamic aspects, so-called Petri net machines are developed. It is shown that this approach can even be used to treat the following aspects:

—use of time during the process

—increase of costs during the generation and transportation of information

—augmentation, evaluation and transformation of information objects.

Recursive formulas are derived and some examples are calculated.


14.
The Problem

Internet of Things (IoT) is providing new services and insights by sensing contextual data but there are growing concerns of privacy risks from users that need immediate attention.

The Reason

The IoT devices and smart services can capture Personally Identifiable Information (PII) without user knowledge or consent. The IoT technology has not reached the desired level of maturity to standardize security and privacy requirements.

The Solution

IoT Privacy by Design is a user-centric approach for enabling privacy with security and safety as a ‘win-win’ positive outcome of IoT offerings, irrespective of business domain. The Proactive and Preventive Privacy (3P) Framework proposed in this paper should be adopted by the IoT stakeholders for building trust and confidence in end users about IoT devices and smart services.


15.
16.
17.
18.
19.
With the increasing amount of data, there is an urgent need for efficient sorting algorithms to process large data sets. Hardware sorting algorithms have attracted much attention because they can exploit the parallelism of different hardware. But traditional hardware sort accelerators suffer from the “memory wall” problem because of their multiple rounds of data transmission between the memory and the processor. In this paper, we utilize the in-situ processing ability of the ReRAM crossbar to design a new ReCAM array that can perform matrix-vector multiplication and vector-scalar comparison in the same array simultaneously. Using this ReCAM array, we present ReCSA, the first dedicated ReCAM-based sort accelerator. Besides the hardware design, we also develop algorithms to maximize memory utilization and minimize memory exchanges to improve sorting performance. The sorting algorithm in ReCSA can process various data types, such as integers, floats, doubles, and strings. We also present experiments evaluating performance and energy efficiency against the state-of-the-art sort accelerators. The experimental results show that ReCSA has 90.92×, 46.13×, 27.38×, 84.57×, and 3.36× speedups against CPU-, GPU-, FPGA-, NDP-, and PIM-based platforms when processing numeric data sets. ReCSA also has 24.82×, 32.94×, and 18.22× performance improvements over CPU-, GPU-, and FPGA-based platforms when processing string data sets.

20.
The problem of subgraph matching is a fundamental issue in graph search and is NP-complete. Recently, subgraph matching has become a popular research topic in the field of knowledge graph analysis, with a wide range of applications including question answering and semantic search. In this paper, we study the problem of subgraph matching on knowledge graphs. Specifically, given a query graph q and a data graph G, the problem of subgraph matching is to find all subgraph isomorphic mappings of q on G. A knowledge graph is a directed labeled multi-graph that may have multiple edges between a pair of vertices, and it has denser semantic and structural features than a general graph. To accelerate subgraph matching on knowledge graphs, we propose a novel subgraph matching algorithm based on a subgraph index, called FGqT-Match. The algorithm consists of two key designs. One is a subgraph index of the matching-driven flow graph (FGqT), which eliminates redundant calculations in advance. The other is a multi-label weight matrix, which evaluates a near-optimal matching tree to minimize the intermediate candidates. With the aid of these two designs, all subgraph isomorphic mappings are obtained simply by traversing FGqT. Extensive empirical studies on real and synthetic graphs demonstrate that our techniques outperform the state-of-the-art algorithms.
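For context, the generic backtracking baseline that index-based matchers such as FGqT-Match aim to accelerate enumerates all label- and edge-preserving injective mappings. This is a minimal sketch for directed labeled graphs, simplified to single edges rather than the multi-edges a knowledge graph may carry.

```python
# Baseline backtracking subgraph matcher for directed labeled graphs;
# vertex-label dicts and edge sets are an illustrative encoding.

def subgraph_matches(q_labels, q_edges, g_labels, g_edges):
    """Return every injective mapping {query vertex -> data vertex}
    that preserves vertex labels and all directed query edges."""
    order = list(q_labels)          # static matching order
    results = []

    def consistent(mapping, u, v):
        # Every query edge between u and an already-mapped vertex
        # must have a counterpart in the data graph.
        for (a, b) in q_edges:
            if a == u and b in mapping and (v, mapping[b]) not in g_edges:
                return False
            if b == u and a in mapping and (mapping[a], v) not in g_edges:
                return False
        return True

    def extend(mapping):
        if len(mapping) == len(order):
            results.append(dict(mapping))
            return
        u = order[len(mapping)]
        for v, lbl in g_labels.items():
            if (lbl == q_labels[u] and v not in mapping.values()
                    and consistent(mapping, u, v)):
                mapping[u] = v
                extend(mapping)
                del mapping[u]      # backtrack

    extend({})
    return results
```

Index-based approaches prune this search by precomputing which data vertices can ever match each query vertex, instead of testing every labeled vertex at every step.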
