Similar Documents
A total of 20 similar documents were found (search time: 203 ms).
1.
Energy-efficient data centers   (Total citations: 1; self-citations: 0; cited by others: 1)
Energy consumption of the Information and Communication Technology (ICT) sector has grown exponentially in recent years. A major component of today's ICT is the data centers, which have recently experienced unprecedented growth in size and number. Internet giants such as Google, IBM and Microsoft house large data centers for cloud computing and application hosting. Many studies on the energy consumption of data centers point to the need for strategies for energy efficiency. Because electricity production entails large-scale carbon dioxide ($\mathrm{CO}_2$) emissions, ICT facilities are indirectly responsible for considerable greenhouse gas emissions. The heat generated by these densely populated data centers requires large cooling units to keep temperatures within the operational range, and these cooling units escalate the total energy consumption and have their own carbon footprint. In this survey, we discuss various aspects of energy efficiency in data centers, with added emphasis on why it matters for data centers. In addition, we discuss research ideas, techniques adopted by industry, and the issues that need immediate attention in the context of energy efficiency in data centers.

2.
Every time an Internet user downloads a video, shares a picture, or sends an email, his or her device contacts a data center, and often several of them. These complex systems feed the web and all Internet applications with their computing power and information storage, but they are very energy hungry. The energy consumed by Information and Communication Technology (ICT) infrastructures currently exceeds 4% of worldwide consumption and is expected to double in the next few years. Data centers and communication networks are responsible for a large portion of ICT energy consumption, which has stimulated research in recent years to reduce or mitigate their environmental impact. Most of the proposed approaches tackle the problem by separately optimizing the power consumption of the servers in data centers and of the network. However, the Cloud computing infrastructure of most providers, including traditional telcos that are extending their offering, is rapidly evolving toward geographically distributed data centers strongly integrated with the network interconnecting them. Distributed data centers not only bring services closer to users with better quality, but also provide opportunities to improve energy efficiency by exploiting the variation of electricity prices across time zones, locally generated green energy, and the storage systems that are becoming popular in energy networks. In this paper, we propose an energy-aware joint management framework for geo-distributed data centers and their interconnection network. The model is based on virtual machine migration and formulated using mixed integer linear programming. It can be solved in reasonable time using state-of-the-art solvers such as CPLEX. The proposed approach covers various aspects of Cloud computing systems and jointly manages the use of green and brown energy with energy storage technologies. The obtained results show that significant energy cost savings can be achieved compared to a baseline strategy in which data centers neither collaborate to reduce energy nor use power coming from renewable resources.
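To make the kind of formulation described above concrete, the toy model below sketches one slice of such a joint optimization: binary variables place VMs in geo-distributed data centers so that brown-energy cost is minimized after each site's green-energy budget is used up. It is a minimal sketch under assumed data (prices, green budgets, capacities are invented), solved with PuLP's bundled CBC rather than CPLEX, and it omits the network, migration and storage terms of the paper's full MILP.

```python
from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum, PULP_CBC_CMD

# Hypothetical data: three sites, six VMs, per-site brown-energy price ($/kWh),
# a free green-energy budget (kWh) and a VM capacity. None of this comes from
# the paper; it only illustrates the shape of the placement sub-problem.
dcs = ["dc_eu", "dc_us", "dc_asia"]
vms = ["vm%d" % i for i in range(6)]
price = {"dc_eu": 0.25, "dc_us": 0.12, "dc_asia": 0.18}
green_kwh = {"dc_eu": 3.0, "dc_us": 1.0, "dc_asia": 2.0}
vm_kwh, capacity = 1.5, 4

prob = LpProblem("geo_vm_placement", LpMinimize)
x = LpVariable.dicts("place", (vms, dcs), cat=LpBinary)      # VM v runs at site d
brown = LpVariable.dicts("brown_kwh", dcs, lowBound=0)       # paid energy per site

for v in vms:                                                # place each VM exactly once
    prob += lpSum(x[v][d] for d in dcs) == 1
for d in dcs:                                                # capacity and energy balance
    prob += lpSum(x[v][d] for v in vms) <= capacity
    prob += brown[d] >= vm_kwh * lpSum(x[v][d] for v in vms) - green_kwh[d]
prob += lpSum(price[d] * brown[d] for d in dcs)              # objective: brown-energy cost

prob.solve(PULP_CBC_CMD(msg=False))
print({v: next(d for d in dcs if x[v][d].value() > 0.5) for v in vms})
```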

3.
Information and communication technology (ICT) has a profound impact on the environment because of its large CO2 emissions. In recent years, the research field of "green", low-power networking infrastructures has become of great importance for both service/network providers and equipment manufacturers. Cloud computing, an emerging technology, can increase the utilization and efficiency of hardware equipment. A cloud datacenter needs a job scheduler to arrange resources for executing jobs. In this paper, we propose a scheduling algorithm for the cloud datacenter that uses a dynamic voltage and frequency scaling (DVFS) technique. Our scheduling algorithm can efficiently increase resource utilization and hence decrease the energy consumed in executing jobs. Experimental results show that our scheme reduces energy consumption more than the other schemes do, without sacrificing job execution performance. In short, we provide a green, energy-efficient scheduling algorithm using the DVFS technique for Cloud computing datacenters.
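The energy lever behind DVFS is the dynamic CMOS power relation, roughly P = C * V^2 * f: lowering the frequency (and with it the voltage) cuts power faster than it stretches execution time. The snippet below only illustrates that trade-off with invented operating points and a fictitious switched capacitance; it is not the paper's scheduler.

```python
# Dynamic CMOS power is roughly P = C * V^2 * f, so the energy of a job of W
# cycles run at frequency f is E = P * (W / f). The operating points and the
# switched capacitance below are invented, purely to show the trade-off.
def job_energy(cycles, freq_hz, volt, switched_cap=1e-9):
    power_w = switched_cap * volt ** 2 * freq_hz      # dynamic power in watts
    return power_w * (cycles / freq_hz)               # joules for this job

dvfs_levels = [(1.0e9, 0.9), (1.6e9, 1.0), (2.4e9, 1.2)]  # (frequency Hz, voltage V)
for f, v in dvfs_levels:
    print(f"{f / 1e9:.1f} GHz -> {job_energy(2e9, f, v):.3f} J")
```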

4.
Energy consumption in cloud data centers is increasing as the use of such services grows, so new methods of decreasing energy consumption are needed. Green cloud computing helps to reduce energy consumption and significantly decreases both operating costs and greenhouse gas emissions. Scheduling the enormous number of user-submitted workflow tasks is an important aspect of cloud computing, and the resources in cloud data centers should compute these tasks using energy-efficient techniques. This paper proposes a new energy-aware scheduling algorithm for time-constrained workflow tasks using the DVFS method, in which the host reduces its operating frequency by switching among different voltage levels. The goal of this research is to reduce energy consumption and SLA violations and to improve resource utilization. The simulation results show that the proposed method performs more efficiently on metrics such as energy consumption, average execution time, average resource utilization and average SLA violations.
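One common way to reconcile DVFS with time constraints, which the description above suggests, is to run each task at the lowest frequency that still meets its deadline. The sketch below shows only that selection rule; the frequency/voltage table is hypothetical and the paper's actual algorithm is not reproduced here.

```python
# Pick the lowest DVFS level that still finishes a task before its deadline,
# so energy drops without violating the SLA. All values are hypothetical.
dvfs_levels = [(1.0e9, 0.9), (1.6e9, 1.0), (2.4e9, 1.2)]  # (Hz, V), ascending

def pick_level(cycles, deadline_s):
    for freq, volt in dvfs_levels:          # try the slowest level first
        if cycles / freq <= deadline_s:     # meets the time constraint
            return freq, volt
    return dvfs_levels[-1]                  # fall back to the fastest level

print(pick_level(cycles=2.0e9, deadline_s=1.5))  # -> (1600000000.0, 1.0)
```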

5.
The increasing requirements of big data analytics and complex scientific computing impose significant burdens on cloud data centers, so not only the computation but also the communication expenses in data centers are greatly increased. Previous work on green computing in data centers mainly focused on the energy consumption of the servers rather than of the communication. However, for emerging applications that transmit big data flows, more energy may be consumed by communication links, switching and aggregation elements. To this end, based on the transmission characteristics of data flows, we propose a novel Job-Aware Virtual Machine Placement and Route Scheduling (JAVPRS) scheme to reduce the energy consumption of data center networks (DCN) while still meeting as many network QoS (Quality of Service) requirements as possible. Our proposed scheme focuses not just on migrating large data flows but also on consolidating small data flows to improve the utilization of the communication links. With more idle switches turned off, the DCN's energy consumption is thus reduced. Besides flow migration and consolidation, a Traffic Engineering (TE) technique is applied to decrease transmission delay and increase network throughput. To evaluate the performance of the proposed scheme, a number of simulation studies were performed. Compared to the selected benchmarks, the simulation results show that JAVPRS achieves 22.28%–35.72% energy savings while reducing communication delay by 5.8%–6.8% and improving network throughput by 13.3%.
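The core intuition, packing flows onto fewer links so the rest of the fabric can sleep, can be shown with a first-fit-decreasing toy. The sketch below is not the JAVPRS algorithm (it ignores routing, QoS and TE entirely); link capacity and flow rates are invented values in Gbps.

```python
# Greedy sketch: consolidate small flows onto as few links as possible so that
# idle links (and their switches) can be powered off.
def consolidate(flows, link_capacity=10.0):
    links = []                                   # remaining capacity per active link
    for rate in sorted(flows, reverse=True):     # first-fit decreasing
        for i, free in enumerate(links):
            if rate <= free:
                links[i] -= rate
                break
        else:
            links.append(link_capacity - rate)   # power on one more link
    return len(links)                            # active links; the rest can sleep

print(consolidate([0.5, 0.8, 3.0, 1.2, 4.5, 0.3]))  # -> 2 active links
```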

6.
Cloud computing is a form of distributed computing that promises to deliver reliable services through next-generation data centers built on virtualized compute and storage technologies. It is becoming truly ubiquitous, and as cloud infrastructures become essential components for providing Internet services, cloud providers are deploying ever more energy-hungry data centers. Because cloud providers often rely on large data centers to offer the resources required by users, the energy consumed by cloud infrastructures has become a key environmental and economic concern. Much energy is wasted in these data centers because of under-utilized resources, thereby contributing to global warming. To conserve energy, these under-utilized resources need to be used efficiently; to achieve this, jobs must be allocated to cloud resources in such a way that the resources are used efficiently and there is a gain in both performance and energy efficiency. In this paper, a model for an energy-aware resource utilization technique is proposed to efficiently manage cloud resources and enhance their utilization. It further helps in reducing the energy consumption of clouds by using server consolidation through virtualization without degrading the performance of users' applications. An artificial bee colony based energy-aware resource utilization technique corresponding to the model has been designed to allocate jobs to resources in a cloud environment. The performance of the proposed algorithm has been evaluated against existing algorithms through the CloudSim toolkit. The experimental results demonstrate that the proposed technique outperforms the existing techniques by minimizing the energy consumption and execution time of applications submitted to the cloud. Copyright © 2014 John Wiley & Sons, Ltd.
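For readers unfamiliar with the metaheuristic, the toy below shows the general shape of an artificial bee colony search applied to job-to-host assignment: candidate assignments (food sources) are improved by local moves, and sources that stop improving are abandoned by scout bees. It is a generic sketch with an invented linear power cost, not the technique evaluated in the paper.

```python
import random

random.seed(42)
JOBS, HOSTS, FOOD_SOURCES, LIMIT, ROUNDS = 12, 4, 8, 5, 200
load = [random.uniform(0.05, 0.25) for _ in range(JOBS)]   # hypothetical CPU shares
idle_w, peak_w = 70.0, 250.0                               # assumed linear power model

def energy(assign):
    util = [0.0] * HOSTS
    for j, h in enumerate(assign):
        util[h] += load[j]
    # idle + utilization-proportional power for busy hosts; penalize overload
    return sum(idle_w + (peak_w - idle_w) * min(u, 1.0) + 1e3 * max(u - 1.0, 0.0)
               for u in util if u > 0)

def neighbour(assign):
    new = assign[:]
    new[random.randrange(JOBS)] = random.randrange(HOSTS)   # move one job
    return new

sources = [[random.randrange(HOSTS) for _ in range(JOBS)] for _ in range(FOOD_SOURCES)]
trials = [0] * FOOD_SOURCES
for _ in range(ROUNDS):
    for i in range(FOOD_SOURCES):              # employed/onlooker phases (simplified)
        cand = neighbour(sources[i])
        if energy(cand) < energy(sources[i]):
            sources[i], trials[i] = cand, 0
        else:
            trials[i] += 1
        if trials[i] > LIMIT:                  # scout phase: abandon a stale source
            sources[i] = [random.randrange(HOSTS) for _ in range(JOBS)]
            trials[i] = 0

best = min(sources, key=energy)
print("best energy estimate:", round(energy(best), 1))
```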

7.
As a new distributed computing technology, cloud computing is becoming more and more popular, and many universities and enterprises have built cloud computing data centers. Within a data center, power consumption is becoming an increasingly serious issue, and as global energy prices rise, the energy cost of data centers keeps increasing. One main goal of cloud computing is to provide users with cost-saving services through economies of scale, yet the energy cost of these cloud services has already become very significant. Therefore, how to improve the energy efficiency of the whole cloud platform is becoming more and more important. ...

8.
With the rapid development and widespread adoption of mobile cloud computing, how to manage the resources of a mobile cloud data center effectively while reducing energy consumption and keeping resources highly available has become one of the hot topics for mobile cloud computing data centers. Considering four dimensions (CPU, memory, network bandwidth and disk), this paper builds a multi-objective virtual machine scheduling model, VMSM-EUN (Virtual Machine Scheduling Model based on Energy consumption, Utility and minimum Number of servers), which takes minimizing data center energy consumption, maximizing data center utility and minimizing the number of servers as its scheduling objectives. A virtual machine scheduling algorithm with adaptive parameter adjustment based on improved particle swarm optimization, VMSA-IPSO (Virtual Machine Scheduling Algorithm based on Improved Particle Swarm Optimization), is designed to solve the model. Finally, simulation experiments verify the feasibility and effectiveness of the proposed scheduling algorithm. Comparative results show that the proposed adaptive virtual machine scheduling algorithm based on improved particle swarm optimization can reduce energy consumption while improving data center utility when scheduling virtual machines.
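As a rough illustration of the particle swarm idea behind such a scheduler, the toy below searches over VM-to-server assignment vectors and decays an inertia-like weight over iterations, standing in for the adaptive parameter adjustment mentioned above. The cost function (few active servers, no overload across CPU, memory, bandwidth and disk) and every constant are invented; this is not VMSA-IPSO itself.

```python
import random

random.seed(0)
VMS, SERVERS, PARTICLES, ITERS = 10, 4, 12, 150
demand = [tuple(random.uniform(0.05, 0.3) for _ in range(4)) for _ in range(VMS)]

def cost(assign):
    used = [[0.0] * 4 for _ in range(SERVERS)]       # cpu, mem, bw, disk per server
    for vm, s in enumerate(assign):
        for d in range(4):
            used[s][d] += demand[vm][d]
    active = sum(1 for u in used if any(u))
    overload = sum(max(x - 1.0, 0.0) for u in used for x in u)
    return active + 10.0 * overload                  # prefer few servers, no overload

swarm = [[random.randrange(SERVERS) for _ in range(VMS)] for _ in range(PARTICLES)]
pbest = [p[:] for p in swarm]
gbest = min(pbest, key=cost)[:]
for it in range(ITERS):
    w = 0.9 - 0.5 * it / ITERS                       # decaying inertia-like weight
    for i, p in enumerate(swarm):
        for vm in range(VMS):
            r = random.random()
            if r > w:                                # exploit pbest/gbest more as w decays
                p[vm] = gbest[vm] if r > (1 + w) / 2 else pbest[i][vm]
            elif random.random() < 0.1:
                p[vm] = random.randrange(SERVERS)    # keep some random exploration
        if cost(p) < cost(pbest[i]):
            pbest[i] = p[:]
    gbest = min(pbest + [gbest], key=cost)[:]

print("best cost (active servers + overload penalty):", round(cost(gbest), 3))
```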

9.
Cloud computing data centers consume huge amounts of electricity, yet most of them do not achieve high resource utilization, typically only 15%-20%; a considerable number of servers sit idle, so a large amount of energy is simply wasted. To reduce the energy consumption of cloud computing data centers effectively, an energy-saving virtual machine scheduling algorithm for heterogeneous cluster systems (the PVMAP algorithm) is proposed. Simulation results show that, compared with the classic PABFD algorithm, the PVMAP algorithm consumes noticeably less energy and offers better scalability and stability. At the same time, as the number of 〈Hosts, VMs〉 keeps increasing, the growth in the total number of VM migrations and in the total number of hosts switched off under PVMAP is lower than under PABFD.
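For context, the PABFD baseline mentioned above is commonly described as a power-aware best-fit-decreasing heuristic: VMs are sorted by demand and each is placed on the host whose estimated power draw grows the least. The sketch below captures only that baseline idea with an assumed linear power model and made-up demands; it is not the PVMAP algorithm.

```python
# Power-aware best-fit-decreasing placement sketch (baseline flavour only).
def power(util, idle_w=70.0, peak_w=250.0):
    return idle_w + (peak_w - idle_w) * util if util > 0 else 0.0

def pabfd(vm_cpu, hosts):                       # hosts: CPU capacity per host
    used = [0.0] * len(hosts)
    plan = {}
    for vm, demand in sorted(vm_cpu.items(), key=lambda kv: -kv[1]):
        best, best_delta = None, float("inf")
        for h, cap in enumerate(hosts):
            if used[h] + demand <= cap:
                delta = power(used[h] + demand) - power(used[h])
                if delta < best_delta:          # smallest power increase wins
                    best, best_delta = h, delta
        if best is None:
            raise RuntimeError("no host can fit " + vm)
        used[best] += demand
        plan[vm] = best
    return plan

print(pabfd({"vm1": 0.6, "vm2": 0.3, "vm3": 0.2}, hosts=[1.0, 1.0]))
```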

10.
The rapid growth in demand for computational power has led to a shift to the cloud computing model established by large-scale virtualized data centers. Such data centers consume enormous amounts of electrical energy. Cloud providers must ensure that their service delivery is flexible enough to meet various consumer requirements. However, to support green computing, cloud providers also need to minimize the energy consumption of the cloud infrastructure while delivering those services. In this paper, a novel QoS-aware VM consolidation approach for cloud environments is proposed that adopts a method based on the resource utilization history of virtual machines. The proposed algorithms have been implemented and evaluated using the CloudSim simulator. Simulation results show improvements in QoS metrics and energy consumption, and demonstrate that there is a trade-off between energy consumption and quality of service in the cloud environment.
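A utilization-history-based consolidation scheme typically needs a trigger that decides, from each host's recent load trace, whether it is overloaded (risking QoS) or underloaded (wasting energy). The snippet below is one plausible such trigger, not the paper's; the mean-plus-k-sigma rule and all thresholds are assumptions.

```python
import statistics

# A host is flagged as overloaded when its current CPU utilization exceeds
# mean + k * stddev of its own recent history, and underloaded below a floor.
def classify(history, current, k=1.5, floor=0.2):
    mean = statistics.mean(history)
    std = statistics.pstdev(history)
    if current > min(mean + k * std, 1.0):
        return "overloaded"      # migrate some VMs away to protect QoS
    if current < floor:
        return "underloaded"     # migrate all VMs away, switch the host to sleep
    return "normal"

print(classify([0.55, 0.60, 0.58, 0.62, 0.57], current=0.93))  # -> overloaded
print(classify([0.55, 0.60, 0.58, 0.62, 0.57], current=0.15))  # -> underloaded
```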

11.
With the continued advancement of mobile computing and communications, novel multimedia services and applications have attracted much attention and been developed for mobile users, such as mobile social networking, mobile cloud medical treatment and mobile cloud gaming. However, because of the limited resources of mobile terminals, improving the energy efficiency of multimedia services is a great challenge. In this paper, we propose a cloud-assisted green multimedia processing architecture (CGMP) based on mobile cloud computing. Specifically, energy-intensive multimedia processing tasks can be offloaded to the cloud, and a face recognition algorithm with improved principal component analysis and a nearest-neighbor classifier is realized on the CGMP-based cloud platform. Experimental results show that the proposed scheme can effectively reduce the energy consumption of the device.
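To ground the recognition pipeline, the snippet below runs a plain PCA projection followed by a 1-nearest-neighbour classifier, the textbook combination the abstract builds on (the paper's improvements to PCA are not reproduced). Random arrays stand in for real face images, and all sizes are arbitrary.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
train = rng.random((100, 64 * 64))        # 100 flattened 64x64 "face" images
labels = rng.integers(0, 10, size=100)    # 10 hypothetical identities
query = rng.random((5, 64 * 64))          # images uploaded by the mobile device

pca = PCA(n_components=30).fit(train)     # dimensionality reduction on the cloud side
knn = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(train), labels)
print(knn.predict(pca.transform(query)))  # identities returned to the device
```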

12.
Interconnection networks are one of the core components of high-performance computing systems and data centers, and a global piece of infrastructure that determines overall system performance. With the rapid development of high-performance computing, cloud computing and big data technologies, traditional electrical interconnection networks can no longer satisfy the large-scale, scalable communication requirements of high-performance computing applications and data center workloads in terms of performance, energy consumption and cost, and they face severe challenges. In recent years, researchers have therefore proposed a variety of reconfigurable optical interconnection network architectures for high-performance computing and data centers. This paper first explains the advantages of optical interconnection networks over electrical ones, then introduces several typical reconfigurable optical interconnection network architectures and analyzes and compares their characteristics, and finally discusses the development trends of reconfigurable optical interconnection networks.

13.
Cloud computing aims to provide dynamic leasing of server capabilities as scalable virtualized services to end users. However, the data centers hosting cloud applications consume vast amounts of electrical energy, contributing to high operational costs and carbon footprints. Green cloud computing solutions that not only minimize operational costs but also reduce the environmental impact are therefore necessary. This study focuses on the Infrastructure as a Service model, where custom virtual machines (VMs) are launched on appropriate servers available in a data center. A complete data center resource management scheme is presented in this paper. The scheme not only ensures user quality of service (through service level agreements) but also achieves maximum energy saving and green computing goals. Considering that a data center usually contains tens of thousands of hosts, which makes it difficult to solve the resource allocation problem with an exact algorithm, a modified shuffled frog leaping algorithm and improved extremal optimization are employed in this study to solve the dynamic VM allocation problem. Experimental results demonstrate that the proposed resource management scheme exhibits excellent performance in green cloud computing.
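Of the two metaheuristics named above, extremal optimization is the less familiar; its defining move is to repeatedly pick out the worst component of the current solution and perturb it. The toy below applies that idea to VM placement (relocate a VM from the most loaded host and keep the move only if total power drops); it is a loose illustration with an invented power model, not the paper's improved algorithm or its combination with shuffled frog leaping.

```python
import random

random.seed(1)
HOSTS, VMS, STEPS = 5, 20, 200
cpu = [random.uniform(0.05, 0.3) for _ in range(VMS)]       # hypothetical CPU demands
place = [random.randrange(HOSTS) for _ in range(VMS)]       # initial random placement

def util(h):
    return sum(cpu[v] for v in range(VMS) if place[v] == h)

def power(u):                                               # assumed host power model
    return 0.0 if u == 0 else 70 + 180 * min(u, 1.0) + 1000 * max(u - 1.0, 0.0)

for _ in range(STEPS):
    worst = max(range(HOSTS), key=util)                     # the "worst" component
    movable = [v for v in range(VMS) if place[v] == worst]
    if not movable:
        break
    vm = random.choice(movable)
    before = sum(power(util(h)) for h in range(HOSTS))
    target = min((h for h in range(HOSTS) if h != worst),
                 key=lambda h: power(util(h) + cpu[vm]) - power(util(h)))
    place[vm] = target
    if sum(power(util(h)) for h in range(HOSTS)) > before:  # revert bad moves
        place[vm] = worst

print("total power estimate:", round(sum(power(util(h)) for h in range(HOSTS)), 1))
```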

14.
In this paper we present a dynamic, country-level methodology and model to determine the energy-related greenhouse gas (GHG) emissions abatement potential of enterprise cloud computing. The developed model focuses on demonstrating the impact of a move from traditional on-site computing to cloud computing by creating country-specific estimates of energy and GHG reductions. The methodology includes variables for market penetration, organisation size, and organisational adoption of on-site and cloud computing. Applying the model to the current enterprise cloud service applications of email, customer relationship management (CRM) and groupware across selected countries, the results indicate that 4.5 million tonnes of CO2e could be avoided at an 80% market penetration of cloud computing, with the majority of the reductions calculated to come from small and medium-sized organisations. A sensitivity analysis of market penetration and current organisational adoption of cloud computing highlights the possible large variability in overall energy and GHG reductions. An analysis of the model and the data used within this study illustrates a requirement for industry and academia to work closely together in order to reach the large energy reductions possible with enterprise cloud computing.
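The structure of such an estimate is essentially a product of a few country-level factors. The back-of-the-envelope sketch below mirrors that structure (organisation counts x adoption shift x per-organisation energy saving x grid emission factor); every number in it is a made-up placeholder, not a figure from the study.

```python
# Toy country-level abatement arithmetic; all inputs are placeholders.
org_counts = {"small": 200_000, "medium": 20_000, "large": 1_500}
onsite_kwh = {"small": 4_000, "medium": 60_000, "large": 900_000}   # per year
cloud_kwh = {k: v * 0.35 for k, v in onsite_kwh.items()}            # assumed cloud efficiency
penetration = 0.80            # share of organisations moving to cloud services
grid_kg_per_kwh = 0.45        # assumed grid emission factor

tonnes = sum(org_counts[s] * penetration * (onsite_kwh[s] - cloud_kwh[s])
             for s in org_counts) * grid_kg_per_kwh / 1000.0
print(f"estimated abatement: {tonnes:,.0f} tCO2e per year")
```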

15.
Since a cloud computing center must guarantee quality of service (QoS) while reducing energy consumption, this paper studies the queueing behaviour of users accessing the cloud computing center, presents a queueing model for cloud computing tasks, and on this basis proposes an energy management algorithm for the cloud computing center based on an M/M/c queueing process. Solving the model yields performance metrics such as the mean waiting time and the blocking probability, from which an energy model of the system is built. The ERP (Energy-Response time Product) is used as the feedback quantity of the queueing network, and a feedback policy together with a server sleep/reservation mechanism is introduced to dynamically adjust the number of serving servers in the cloud computing center. Simulation results show that, compared with other policies, the proposed policy can effectively reduce system energy consumption while guaranteeing QoS and avoiding the waste of server resources.
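The M/M/c machinery referred to above can be made concrete with the standard Erlang C formula: from the arrival rate, the service rate and the number of awake servers one gets the mean response time, and combining it with a power model yields an ERP value to minimize. The sketch below does exactly that under an assumed linear power model and invented rates; it is not the paper's feedback or sleep-reservation policy.

```python
from math import factorial

def erlang_c(c, lam, mu):
    a = lam / mu                         # offered load
    rho = a / c
    if rho >= 1:
        return None                      # unstable: need more servers
    top = a**c / factorial(c) / (1 - rho)
    bottom = sum(a**k / factorial(k) for k in range(c)) + top
    return top / bottom                  # probability an arriving job must wait

def erp(c, lam, mu, idle_w=100.0, peak_w=250.0):
    pw = erlang_c(c, lam, mu)
    if pw is None:
        return None
    wait = pw / (c * mu - lam)           # mean queueing delay
    resp = wait + 1.0 / mu               # mean response time
    rho = lam / (c * mu)
    power = c * (idle_w + (peak_w - idle_w) * rho)   # assumed linear power model
    return power * resp                  # Energy-Response time Product

lam, mu = 40.0, 5.0                      # hypothetical arrival and service rates
candidates = [(erp(c, lam, mu), c) for c in range(9, 31)]
best_erp, best_c = min((e, c) for e, c in candidates if e is not None)
print("servers to keep awake:", best_c, " ERP:", round(best_erp, 2))
```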

16.
Energy efficiency has grown into one of the latest research areas of the virtualized cloud computing paradigm. The increase in the number and size of cloud data centers has heightened the need for energy efficiency. Live virtual machine migration is an extensively practiced technology in cloud computing and is therefore the focus of this work for saving energy. This paper proposes an energy-aware virtual machine migration technique for cloud computing based on the Firefly algorithm. The proposed technique migrates the maximally loaded virtual machine to the least loaded active node while maintaining the performance and energy efficiency of the data centers. The efficacy of the proposed technique is demonstrated by comparing it with other techniques using the CloudSim simulator. An improvement of about 44.39% in average energy consumption has been attained by reducing migrations by an average of 72.34% and saving 34.36% of hosts, thereby making the data center more energy-aware.
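The migration rule stated above (move the most heavily loaded VM to the least loaded active node) is simple enough to show directly; the few lines below implement just that rule on made-up load figures, leaving out the Firefly search that the paper wraps around it.

```python
# One migration step: busiest host's heaviest VM moves to the least loaded
# active host, if it fits. Loads and the capacity are hypothetical.
hosts = {"h1": {"vm1": 0.55, "vm2": 0.30}, "h2": {"vm3": 0.10}, "h3": {"vm4": 0.20}}

def migrate_once(hosts, capacity=1.0):
    load = {h: sum(vms.values()) for h, vms in hosts.items()}
    src = max(load, key=load.get)               # maximally loaded host
    dst = min(load, key=load.get)               # least loaded active host
    vm = max(hosts[src], key=hosts[src].get)    # maximally loaded VM on src
    if src != dst and load[dst] + hosts[src][vm] <= capacity:
        hosts[dst][vm] = hosts[src].pop(vm)
    return hosts

print(migrate_once(hosts))   # vm1 moves from h1 to h2
```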

17.
To address the energy consumption problem of cloud computing data centers, a green cloud computing framework is proposed and a green cloud system architecture is designed. Based on this architecture, energy is allocated as a system resource, and three green task scheduling algorithms are proposed: STF-OS, LTF-OS and RT-OS. A theoretical feasibility analysis of the three algorithms shows that all three can effectively reduce energy consumption. Simulation experiments carried out by extending the CloudSim cloud simulation platform show that the STF-OS algorithm is the most effective at reducing data center energy consumption.

18.

Excessive energy consumption in cloud data centers, whose number is increasing day by day, has led to substantial problems. Hence, offering efficient schemes for virtual machine (VM) placement to decrease energy consumption in cloud computing environments has become a significant research field in recent years. In this paper, with the goal of reducing energy consumption in cloud data centers, we present a VM placement method using the cultural algorithm. In the proposed algorithm, called the balance-based cultural algorithm for virtual machine placement (BCAVMP), a new fitness function is introduced to evaluate VM allocation solutions. In this function, the sum of the balance vector lengths for each VM placement is used so that balanced utilization of resources is taken into account; in addition, by including the amount of energy usage in the fitness function, solutions with lower energy consumption are favored. The performance of the proposed method is evaluated using the CloudSim simulator. The simulation results indicate that, through appropriate VM assignment and reduced resource wastage, energy consumption in cloud data centers can be decreased.

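How a balance-vector term and an energy term might be combined into a single fitness value can be sketched as follows; the deviation-from-mean-utilization measure, the linear power model and the weighting are all assumptions, not the paper's exact definition of BCAVMP's fitness.

```python
import math

# Evaluate a candidate placement: sum a per-host "balance vector" length
# (how far each resource's utilization deviates from the host's mean
# utilization) plus a weighted energy term. Lower is better.
def fitness(placement, vm_demand, host_cap, idle_w=70.0, peak_w=250.0, w_energy=0.01):
    hosts = {}
    for vm, host in placement.items():
        use = hosts.setdefault(host, [0.0, 0.0, 0.0])     # cpu, mem, bw totals
        for i in range(3):
            use[i] += vm_demand[vm][i]
    balance, energy = 0.0, 0.0
    for host, use in hosts.items():
        util = [use[i] / host_cap[host][i] for i in range(3)]
        mean = sum(util) / 3
        balance += math.sqrt(sum((u - mean) ** 2 for u in util))  # balance-vector length
        energy += idle_w + (peak_w - idle_w) * util[0]            # CPU-driven power
    return balance + w_energy * energy

vm_demand = {"vm1": (2, 4, 1), "vm2": (4, 2, 1), "vm3": (1, 1, 0.5)}
host_cap = {"h1": (8, 16, 10), "h2": (8, 16, 10)}
print(fitness({"vm1": "h1", "vm2": "h1", "vm3": "h2"}, vm_demand, host_cap))
```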

19.
A survey of resource scheduling research in cloud computing   (Total citations: 27; self-citations: 5; cited by others: 22)
Resource scheduling is a major research direction in cloud computing. This survey first investigates and analyzes the state of the art of cloud computing resource scheduling. It then focuses on resource scheduling methods aimed at reducing the energy consumption of cloud data centers, resource management methods aimed at improving system resource utilization, and economics-based cloud resource management models; it presents a cloud resource scheduling model for minimum energy consumption and a cloud resource scheduling model for the minimum number of servers, and analyzes and compares existing cloud resource scheduling methods in depth. Finally, it points out important future research directions for cloud resource management: prediction-based resource scheduling, scheduling that trades off energy consumption against performance, resource management strategies and mechanisms for different application workloads, integrated allocation of computing capacity (CPU, memory) and network bandwidth, and multi-objective resource scheduling, so as to provide a useful reference for cloud computing research.

20.
More and more industries are turning to the cloud to reduce costs and improve productivity. Supporting diverse services places higher demands on data center network performance, and guaranteeing the quality of every service under high load becomes crucial; at the same time, cloud service providers want to improve the utilization of data center network resources and reduce energy consumption. To address these problems, this paper proposes a method for scheduling non-service network flows based on service satisfaction degree. The concept of service satisfaction degree is introduced to assess whether the network state can meet the needs of each service; network flows are then classified according to the services they support, and when the network load surges, flows that user-facing services do not depend on are adjusted based on service satisfaction degree, reducing the network load and relieving congestion. Simulation results show that this proactive avoidance method preserves service quality first under high network load while also improving network performance.
