Similar Documents
20 similar documents found.
1.
The ever-growing power consumption of data centers, especially that of cooling systems, has attracted increasing attention; reducing system power lowers a data center's carbon emissions. Traditional workload-aggregation methods reduce computing power while preserving as much of the data center's computing capacity as possible, but they often ignore cooling power. Inappropriate workload aggregation raises the data center's peak temperature and increases cooling power. This paper first derives, from a data center temperature-power model, the necessary conditions for workload aggregation, and then proposes an offline genetic algorithm. Finally, it simulates an Internet data center using access traces from a real online bookstore and quantitatively analyzes the impact of the genetic-algorithm-based workload aggregation on system temperature, performance, and power consumption.
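A minimal sketch of how such a genetic-algorithm search might score an aggregation: fitness combines compute power with cooling power estimated from the hottest inlet. The heat-recirculation matrix, power constants, and COP below are hypothetical placeholders, not values from the paper.

```python
import random

# Hypothetical constants: idle/busy server power (W) and a recirculation
# matrix H where H[i][j] is the fraction of server j's heat reaching server i.
P_IDLE, P_BUSY, N = 100.0, 250.0, 4
H = [[0.02 * (1 + (i + j) % 3) for j in range(N)] for i in range(N)]
COP = 3.0  # assumed coefficient of performance of the cooling unit

def fitness(assignment):
    """Total power of an on/off assignment: compute power plus cooling
    power estimated from the hottest inlet (a toy recirculation model)."""
    compute = sum(P_BUSY if on else P_IDLE for on in assignment)
    inlet_rise = [sum(H[i][j] * (P_BUSY if assignment[j] else P_IDLE)
                      for j in range(N)) for i in range(N)]
    cooling = compute / COP * (1 + max(inlet_rise) / 100.0)
    return compute + cooling

def mutate(assignment):
    a = assignment[:]
    a[random.randrange(N)] ^= 1   # flip one server on/off
    return a

best = [1, 1, 0, 0]               # start: workload packed onto two servers
for _ in range(200):              # trivial (1+1) evolutionary loop
    cand = mutate(best)
    if sum(cand) >= 2 and fitness(cand) < fitness(best):
        best = cand               # keep enough capacity, lower total power
print(best, fitness(best))
```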

2.
An Energy-Saving Scheduling Algorithm for Data Centers Based on Model Predictive Control (cited 1 time: 0 self-citations, 1 external)
The ever-growing energy consumption of data centers, especially that of cooling systems, has attracted increasing attention; reducing system energy consumption cuts a data center's carbon emissions. This paper proposes an energy-saving scheduling strategy based on model predictive control (MPC) that effectively reduces data center cooling energy. The method uses dynamic voltage and frequency scaling to adjust compute-node frequencies and thereby reduce heat recirculation among nodes; the peak temperature of every node can be kept below the temperature threshold, with a small steady-state error during task execution. Dynamic frequency scaling also suppresses the internal disturbances caused by model uncertainty when the workload type changes. Analysis shows that the MPC-based thermal-control algorithm has low overhead and scales well, and a controller built on it effectively lowers inlet temperatures and improves data center energy efficiency. In experiments with a simulated online bookstore running in a real data center, the method achieves better temperature suppression than two classical baselines, the safe minimal-heat-transfer algorithm and a traditional feedback thermal controller, both under normal conditions and in the presence of disturbances, while also maximizing system performance such as throughput. Under the same workload, the method attains the lowest peak inlet temperature and the lowest cooling energy.
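A one-step-lookahead sketch of MPC-style frequency control: choose the highest DVFS level whose predicted next-step temperature stays under the threshold. The discrete thermal model and its constants are assumptions for illustration, not the paper's identified model.

```python
# Assumed discrete thermal dynamics: T' = A*T + B*f + C (all values invented).
A, B, C = 0.5, 10.0, 5.0
T_MAX = 35.0                     # redline inlet temperature (deg C)
FREQS = [1.0, 1.5, 2.0, 2.5]     # available DVFS levels (GHz)

def predict(temp, freq):
    return A * temp + B * freq + C

def mpc_step(temp):
    """Highest frequency (best performance) that keeps T below threshold."""
    feasible = [f for f in FREQS if predict(temp, f) <= T_MAX]
    return max(feasible) if feasible else min(FREQS)

temp = 30.0
for step in range(10):
    f = mpc_step(temp)
    temp = predict(temp, f)  # plant follows the same model in this toy loop
    print(f"step {step}: freq={f} GHz, temp={temp:.1f} C")
```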

3.
Thermo-Fluids Provisioning of a High Performance High Density Data Center (cited 1 time: 0 self-citations, 1 external)
Consolidation and dense aggregation of slim compute, storage and networking hardware have resulted in high power density data centers. The high power density of current and future generations of servers necessitates detailed thermo-fluids analysis to provision the cooling resources in a given data center for reliable operation. The analysis must also predict the impact on the thermo-fluid distribution of changes in hardware configuration and building infrastructure, such as a sudden failure of data center cooling resources. The objective of the analysis is to assure the availability of adequate cooling resources to match the heat load, which is typically non-uniformly distributed and characterized by high localized power density. This study presents an analysis of an example modern data center, examining the magnitude of temperature variation and the impact of a failure. Initially, static provisioning for a given distribution of heat loads and cooling resources is achieved to produce a reference state. A perturbation of the reference state is then introduced to simulate a very plausible scenario: failure of a computer room air conditioning (CRAC) unit. The transient model shows the "redlining" of inlet temperature of systems in the area most influenced by the failed CRAC. In this example high-density data center, the time to reach unacceptable inlet temperature is less than 80 seconds based on an example temperature set point limit of 40°C (most of today's servers would require an inlet temperature below 35°C to operate). An effective approach to resolve this issue, if there is adequate capacity, is to migrate the compute workload to other available systems within the data center to reduce the inlet temperature of the affected servers to an acceptable level. Recommended by: Ahmed Elmagarmid
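A back-of-the-envelope illustration of the time-to-redline idea, not the paper's CFD model: treat the post-failure inlet temperature as a first-order rise toward a hot steady state and solve for when it crosses the set-point limit. T0, T_ss, and tau below are assumed.

```python
import math

# First-order thermal response after a CRAC failure: inlet temperature rises
# from T0 toward a new steady state T_SS with time constant TAU (all assumed).
T0, T_SS, TAU = 25.0, 55.0, 60.0   # deg C, deg C, seconds
REDLINE = 40.0                     # example set-point limit from the abstract

# Solve T0 + (T_SS - T0) * (1 - exp(-t/TAU)) = REDLINE for t.
t = -TAU * math.log(1 - (REDLINE - T0) / (T_SS - T0))
print(f"time to redline: {t:.0f} s")   # ~42 s, consistent with "< 80 s"
```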

4.
Data centers are huge power consumers, both because of the energy required for computation and the cooling needed to keep servers below thermal redlining. The most common technique to minimize cooling costs is increasing the data room temperature. However, to avoid reliability issues and to enhance energy efficiency, there is a need to predict the temperature attained by servers under variable cooling setups. Due to the complex thermal dynamics of data rooms, accurate runtime data center temperature prediction has remained an important challenge. Using Grammatical Evolution techniques, this paper presents a methodology for the generation of temperature models for data centers and the runtime prediction of CPU and inlet temperature under variable cooling setups. As opposed to time-costly Computational Fluid Dynamics techniques, our models need no specific knowledge about the problem, can be used in arbitrary data centers, can be re-trained if conditions change, and have negligible overhead during runtime prediction. Our models have been trained and tested using traces from real data center scenarios. Our results show that we can fully predict the temperature of the servers in a data room, with prediction errors below 2 °C for CPU temperature and 0.5 °C for server inlet temperature, respectively.
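A sketch of how grammatical-evolution candidates might be scored: each individual decodes to an expression over the features, and its fitness is the RMSE against measured CPU temperatures. The expressions, feature names, and data below are invented for illustration; the paper's grammar and traces are not reproduced here.

```python
import math

data = [  # (cpu_load, fan_speed, inlet_temp, measured_cpu_temp) — made up
    (0.2, 0.5, 22.0, 38.0), (0.8, 0.5, 22.0, 62.0),
    (0.5, 0.9, 20.0, 45.0), (0.9, 0.3, 24.0, 72.0),
]
candidates = [  # phenotypes a grammar might produce (hypothetical)
    "inlet + 40*load - 10*fan",
    "inlet + 55*load*load",
    "30 + 20*load + inlet*fan",
]

def rmse(expr):
    """Fitness: RMSE of the decoded expression on the training data."""
    errs = []
    for load, fan, inlet, target in data:
        pred = eval(expr, {"load": load, "fan": fan, "inlet": inlet})
        errs.append((pred - target) ** 2)
    return math.sqrt(sum(errs) / len(errs))

best = min(candidates, key=rmse)
print(best, f"RMSE={rmse(best):.2f} C")
```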

5.
High temperatures within a data center can cause a number of problems, such as increased cooling costs and increased hardware failure rates. To overcome this problem, researchers have shown that workload management focused on a data center's thermal properties effectively reduces temperatures within a data center. In this paper, we propose a method to predict a workload's thermal effect on a data center that is suitable for real-time scenarios. We use machine learning techniques, such as artificial neural networks (ANN), as our prediction methodology, and conduct our experiments on real data taken from a data center's normal operation. To reduce the data's complexity, we introduce a thermal impact matrix to capture the spatial relationship between the data center's heat sources, such as the compute nodes. Our results show that machine learning techniques can predict a workload's thermal effects in a timely manner, making them well suited for real-time scenarios. Based on these temperature prediction techniques, we developed a thermal-aware workload scheduling algorithm for data centers, which aims to reduce power consumption and temperatures in a data center. A simulation study is carried out to evaluate the performance of the algorithm. Simulation results show that our algorithm can significantly reduce temperatures in data centers at the cost of an endurable decline in performance.
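A minimal sketch of the prediction setup under stated assumptions: a synthetic thermal impact matrix folds each node's neighbors' loads into its features, and a small scikit-learn ANN learns the load-to-temperature mapping. The matrix, data, and network shape are invented; the paper's real traces and model are not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_nodes, n_samples = 5, 400

# Hypothetical thermal impact matrix: M[i][j] = influence of node j's load
# on node i's temperature (the spatial relationship the paper captures).
M = rng.uniform(0.0, 0.3, (n_nodes, n_nodes)) + np.eye(n_nodes)

loads = rng.uniform(0.0, 1.0, (n_samples, n_nodes))
temps = 20.0 + 30.0 * loads @ M.T + rng.normal(0, 0.5, (n_samples, n_nodes))

# Features: each node's own load plus the impact-weighted loads of the rest.
X = loads @ M.T
ann = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
ann.fit(X[:300], temps[:300])
err = np.abs(ann.predict(X[300:]) - temps[300:]).mean()
print(f"mean absolute error: {err:.2f} C")
```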

6.
Management of computing infrastructure in data centers is an important and challenging problem that needs to: (i) ensure availability of services conforming to the Service Level Agreements (SLAs); and (ii) reduce the Power Usage Effectiveness (PUE), i.e. the ratio of total power, up to half of which is attributed to data center cooling, over the computing power to service the workloads. Cooling energy consumption can be reduced by allowing higher-than-usual thermostat set temperatures while keeping the ambient temperature in the data center room within the manufacturer-specified server redline temperatures for reliable operation. This paper proposes: (i) a Coordinated Job, Power, and Cooling Management (JPCM) policy, which performs: (a) job management to allow an increase in the thermostat setting of the cooling unit while meeting SLA requirements, (b) power management to reduce the produced thermal load, and (c) cooling management to dynamically adjust the thermostat setting; and (ii) a Model-driven coordinated Management Architecture (MMA), which uses a state-based model to dynamically decide the correct management policy to handle events, such as new workload arrival or failure of a cooling unit, that can trigger an increase in the ambient temperature. Each event is associated with a time window, referred to as the window-of-opportunity, after which the temperature at the inlet of one or more servers can exceed the redline temperature if proper management policies are not enforced. This window-of-opportunity monotonically decreases with increasing incoming workload. The selection of the management policy depends on its potential energy benefits and on whether its actuation delay fits within the window-of-opportunity. Simulations based on actual job traces from the ASU HPC data center show that JPCM can achieve up to 18% energy savings over separated power or job management policies. However, a long delay in reaching a stable ambient temperature (in the case of cooling management through dynamic thermostat setting) can violate the server redline temperatures. A management decision chart is developed as part of MMA to autonomically employ the management policy with maximum energy savings without violating the window-of-opportunity, and hence the redline temperatures. Further, a prototype of the JPCM is developed by configuring the widely used Moab cluster manager to dynamically change the server priorities for job assignment.
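A sketch of the decision-chart logic as described: among policies whose actuation delay fits inside the event's window-of-opportunity, pick the one with the largest energy savings. The policy names, delays, and savings figures are hypothetical placeholders.

```python
POLICIES = [
    # (name, actuation delay in seconds, expected energy savings) — invented
    ("cooling: raise thermostat set-point", 600.0, 0.18),
    ("power: DVFS throttling",               30.0, 0.10),
    ("job: migrate/queue workload",         120.0, 0.14),
]

def select_policy(window_of_opportunity):
    """Maximum-savings policy that still actuates within the window."""
    feasible = [p for p in POLICIES if p[1] <= window_of_opportunity]
    if not feasible:                       # nothing fits: act as fast as possible
        return min(POLICIES, key=lambda p: p[1])
    return max(feasible, key=lambda p: p[2])

print(select_policy(900.0)[0])   # wide window -> thermostat adjustment
print(select_policy(60.0)[0])    # tight window -> fast DVFS throttling
```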

7.
The size and number of high-performance data centers have been growing rapidly all around the world in recent years. The growth in the leakage power consumption of servers, along with its exponential dependence on the ever-increasing process variation in nanometer technologies, has made it inevitable to move toward variation-aware power reduction strategies in data centers. In this paper, we address the problem of joint server placement and chassis consolidation to minimize the power consumption of high-performance computing data centers under process variation. To this end, we introduce two variation-aware server placement heuristics, as well as an integer linear programming (ILP)-based server placement method, to find the best location of each server in the data center based on its power consumption and the data center's heat recirculation model. We then incorporate a novel ILP-based variation-aware chassis consolidation technique to find the optimal task assignment under the obtained server placement, minimizing total power consumption. Experimental results show that by applying the proposed joint variation-aware server placement and chassis consolidation techniques, up to 14.6% improvement can be obtained at common data center utilization rates compared to state-of-the-art variation-unaware approaches.
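A greedy stand-in for the flavor of the placement problem, assuming per-server power draws that vary with process variation and per-position recirculation weights (all numbers invented): put the leakiest servers at the positions that recirculate the least heat. The paper's ILP optimizes this jointly with consolidation; this is only a sketch.

```python
# Hypothetical variation-affected server power draws (W).
servers = {"s1": 210.0, "s2": 260.0, "s3": 240.0, "s4": 200.0}
# Hypothetical recirculation weights: lower = position heats the room less.
positions = {"rack1-top": 0.9, "rack1-bot": 0.4,
             "rack2-top": 0.8, "rack2-bot": 0.5}

by_power = sorted(servers, key=servers.get, reverse=True)    # hottest first
by_recirc = sorted(positions, key=positions.get)             # coolest first
placement = dict(zip(by_power, by_recirc))

recirc_heat = sum(servers[s] * positions[p] for s, p in placement.items())
print(placement)
print(f"recirculated-heat proxy: {recirc_heat:.0f}")
```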

8.
Data centers now play an important role in modern IT infrastructures. Related research shows that the energy consumption of data center cooling systems has increased significantly in recent years. There is also strong evidence that high temperatures in a data center lead to higher hardware failure rates, and thus increased maintenance costs. This paper is devoted to thermal-aware workload placement for data centers. We propose an analytical model that describes data center resources with heat transfer properties and workloads with thermal features. Two thermal-aware task scheduling algorithms, TASA and TASA-B, are then presented, which aim to reduce temperatures and cooling system power consumption in a data center. A simulation study is carried out to evaluate the performance of the proposed algorithms. Simulation results show that our algorithms can significantly reduce temperatures in data centers at the cost of an endurable decline in system performance.

9.
During power-grid operation, key equipment such as converter valves continuously generates heat; as the heat accumulates and temperatures rise, equipment stability and safety suffer, so keeping converter valves and other key equipment running stably is crucial. The valve cooling system, a key part of the cooling plant, uses water, with its high thermal conductivity, as the medium to carry heat away from the equipment and lower its temperature. Monitoring the cooling water's temperature and pressure indicators ensures that the converter valves operate safely and stably. This paper takes the valve-inlet temperature of the valve cooling system as the main prediction target and thoroughly mines and analyzes the system's historical data in order to estimate the grid's operating state. Combining a traditional time-series model with machine learning, it proposes a hybrid ARIMA-SVM model and compares it experimentally against traditional ARIMA, SVM, and GRU neural network models on real valve-cooling data from China Southern Power Grid. The results show that all four models predict the trend of the valve-inlet temperature reasonably well, but the ARIMA-SVM hybrid outperforms the other three on all three evaluation metrics (root mean square error, mean squared error, and mean absolute error), further improving the accuracy of valve-inlet temperature prediction.
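A minimal sketch of a common ARIMA-SVM hybrid design (ARIMA for the linear temporal structure, SVR on lagged residuals for the nonlinear remainder), assuming statsmodels and scikit-learn. The synthetic series, ARIMA order, and lag count are assumptions, not the paper's configuration or data.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.svm import SVR

rng = np.random.default_rng(1)
t = np.arange(300)
# Synthetic valve-inlet temperature: trend + daily cycle + noise (not SG data).
series = 25 + 0.01 * t + 2 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.3, 300)
train, test = series[:280], series[280:]

# Step 1: ARIMA captures the linear/temporal structure.
arima = ARIMA(train, order=(2, 1, 2)).fit()
linear_fc = arima.forecast(steps=len(test))

# Step 2: SVR learns the nonlinear pattern left in the ARIMA residuals.
resid = arima.resid
LAGS = 5
X = np.array([resid[i:i + LAGS] for i in range(len(resid) - LAGS)])
y = resid[LAGS:]
svr = SVR(kernel="rbf", C=10.0).fit(X, y)
resid_fc = svr.predict(resid[-LAGS:].reshape(1, -1))[0]

# Combine; applying one residual correction across the horizon is a
# simplification of the usual step-by-step hybrid.
hybrid_fc = linear_fc + resid_fc
rmse = np.sqrt(np.mean((hybrid_fc - test) ** 2))
print(f"hybrid RMSE: {rmse:.3f} C")
```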

10.
Job scheduling in data centers can be considered from a cyber–physical point of view, as it affects the data center's computing performance (i.e. the cyber aspect) and energy efficiency (the physical aspect). Driven by the growing need to green contemporary data centers, this paper uses recent technological advances in data center virtualization and proposes cyber–physical, spatio-temporal (i.e. start time and servers assigned), thermal-aware job scheduling algorithms that minimize the energy consumption of the data center under performance constraints (i.e. deadlines). Savings are possible by being able to temporally "spread" the workload, assign it to energy-efficient computing equipment, and further reduce the heat recirculation and therefore the load on the cooling systems. This paper provides three categories of thermal-aware energy-saving scheduling techniques: (a) FCFS-Backfill-XInt and FCFS-Backfill-LRH, thermal-aware job placement enhancements to the popular first-come first-served with back-filling (FCFS-backfill) scheduling policy; (b) EDF-LRH, an online earliest deadline first scheduling algorithm with thermal-aware placement; and (c) an offline genetic algorithm for SCheduling to minimize thermal cross-INTerference (SCINT), which is suited for batch scheduling of backlogs. Simulation results, based on real job logs from the ASU Fulton HPC data center, show that the thermal-aware enhancements to FCFS-backfill achieve up to 25% savings compared to FCFS-backfill with first-fit placement, depending on the intensity of the incoming workload, while SCINT achieves up to 60% savings. The performance of EDF-LRH nears that of the offline SCINT for low loads, and it degrades to the performance of FCFS-backfill for high loads. However, EDF-LRH requires milliseconds of operation, which is significantly faster than SCINT, the latter requiring up to hours of runtime depending upon the number and size of submitted jobs. Similarly, FCFS-Backfill-LRH is much faster than FCFS-Backfill-XInt, but it achieves only part of FCFS-Backfill-XInt's savings.
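A sketch of the EDF-LRH idea under stated assumptions: serve jobs earliest-deadline-first from a min-heap, and place each on the free server with the lowest recirculated-heat contribution. The heat weights and job list are hypothetical.

```python
import heapq

# Hypothetical per-server recirculation weights: lower = less heat recirculated.
heat_weight = {"a": 0.2, "b": 0.5, "c": 0.35}
free = set(heat_weight)

jobs = [(40, "j1"), (10, "j2"), (25, "j3")]      # (deadline, job id) — invented
heapq.heapify(jobs)                              # EDF = min-heap on deadline

schedule = {}
while jobs and free:
    deadline, job = heapq.heappop(jobs)          # most urgent job first
    server = min(free, key=heat_weight.get)      # LRH placement
    free.remove(server)
    schedule[job] = (server, deadline)
print(schedule)   # j2 -> a, j3 -> c, j1 -> b
```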

11.
To meet the demand for a stable hot-water supply in places where crowds gather, a wide variety of water heaters have appeared on the market, mostly electric or fossil-fuel heated; however, their simple structure and imprecise temperature control waste energy and produce unstable outlet temperatures. To improve energy utilization and stabilize the outlet temperature, this paper designs a heat-source-pump water supply unit [1] whose temperature control system is governed by an incremental PID algorithm. The system uses an STM32F103 [2] as the main control chip and a DS18B20 for temperature acquisition, with a Max485 chip placed between the signal-acquisition unit and the computation unit to guarantee efficient data exchange between them. All measured parameters are fed into the incremental PID algorithm, which regulates the valve motor to control the valve opening and switching, thereby controlling the inlet flow and the hot/cold water mixing ratio to obtain a stable hot-water output.
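A minimal sketch of the incremental (velocity-form) PID the abstract names: each step outputs a change in actuation rather than an absolute value, which suits valve-position control. The gains and the toy plant below are illustrative, not the paper's tuning (the real system runs on the STM32 in C; Python is used here for consistency with the other sketches).

```python
# Incremental PID:
#   du_k = Kp*(e_k - e_{k-1}) + Ki*e_k + Kd*(e_k - 2*e_{k-1} + e_{k-2})
KP, KI, KD = 2.0, 0.5, 0.1       # assumed gains
SETPOINT = 45.0                  # target outlet water temperature (deg C)

e1 = e2 = 0.0                    # e_{k-1}, e_{k-2}
valve, temp = 0.0, 20.0
for step in range(200):
    e = SETPOINT - temp
    du = KP * (e - e1) + KI * e + KD * (e - 2 * e1 + e2)
    valve = max(0.0, min(100.0, valve + du))      # clamp valve opening (%)
    e2, e1 = e1, e
    temp += 0.02 * (valve * 0.6 - (temp - 20.0))  # toy mixing/plant model
print(f"valve={valve:.1f}%, temp={temp:.1f} C")
```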

12.
王肇国, 易涵, 张为华. 《软件学报》 (Journal of Software), 2014, 25(7): 1432-1447
With the growth of the Internet, data of all kinds is exploding. Analyzing large volumes of data online or offline with machine learning to extract regularities has become an important way for industries to improve decision accuracy, and these machine learning algorithms have become the main applications running in data centers. As data volumes grow and data center energy problems become acute, running these algorithms at low power has become one of the key problems in building green data centers. Toward green computation for these algorithms, this paper first analyzes the key algorithms running in data centers in depth and observes that they contain a large amount of redundant computation. On this basis, it designs and implements a low-power scheduling strategy for typical data center applications, which detects the redundant parts of a computation by matching the input data of different computation stages and schedules the algorithm accordingly. Experimental results show that for two typical data center applications, k-means and PageRank, the strategy saves 23% and 17% of energy, respectively.
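A minimal sketch of the input-matching idea: hash each computation's input block and reuse the result when the same inputs recur, shown here on the distance step of k-means. The caching policy in the paper is more refined; this is only the core mechanism under simplified assumptions.

```python
import hashlib

_cache = {}

def digest(block):
    """Fingerprint an input block so identical inputs can be matched."""
    return hashlib.sha1(repr(block).encode()).hexdigest()

def distances(point, centroids):
    key = digest((point, centroids))
    if key in _cache:                      # redundant computation detected
        return _cache[key]
    d = [sum((p - c) ** 2 for p, c in zip(point, centroid)) ** 0.5
         for centroid in centroids]
    _cache[key] = d
    return d

cents = [(0.0, 0.0), (5.0, 5.0)]
pts = [(1.0, 1.0), (4.0, 4.0), (1.0, 1.0)]   # third point repeats the first
for p in pts:
    print(p, [round(x, 3) for x in distances(p, cents)])
print(f"unique computations: {len(_cache)} of {len(pts)}")
```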

13.
Computing has recently reached an inflection point with the introduction of multi-core processors. On-chip thread-level parallelism is doubling approximately every other year. Concurrency lends itself naturally to allowing a program to trade performance for power savings by regulating the number of active cores; however, in several domains users are unwilling to sacrifice performance to save power. We present a prediction model for identifying energy-efficient operating points of concurrency in well-tuned multithreaded scientific applications, and a runtime system that uses live program analysis to optimize applications dynamically. We describe a dynamic, phase-aware performance prediction model that combines multivariate regression techniques with runtime analysis of data collected from hardware event counters to locate optimal operating points of concurrency. Using our model, we develop a prediction-driven, phase-aware runtime optimization scheme that throttles concurrency so that power consumption can be reduced and performance can be set at the knee of the scalability curve of each program phase. The use of prediction reduces the overhead of searching the optimization space while achieving near-optimal performance and power savings. A thorough evaluation of our approach shows a reduction in power consumption of 10.8% simultaneous with an improvement in performance of 17.9%, resulting in energy savings of 26.7%.
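A sketch of the concurrency-throttling idea under stated assumptions: fit a regression of performance against thread count (a stand-in for the paper's multivariate regression on event-counter data) and run at the predicted energy-optimal point. The speedup curve and power model below are synthetic.

```python
import numpy as np

threads = np.array([1, 2, 4, 8, 16])
speedup = np.array([1.0, 1.9, 3.4, 4.1, 4.2])   # saturating scalability (made up)
power = 60.0 + 12.0 * threads                   # assumed linear power model (W)

# Quadratic fit of speedup vs. thread count; the paper's model additionally
# conditions on hardware event counters per program phase.
coef = np.polyfit(threads, speedup, deg=2)
pred = np.polyval(coef, threads)

energy = power / pred                           # energy per unit of work
best = threads[np.argmin(energy)]
print(f"predicted energy-optimal concurrency: {best} threads")
```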

14.
It is hard to imagine living in a building without electricity and a heating or cooling system these days. Factories and data centers are equally dependent on the continuous functioning of these systems. As beneficial as this development is for our daily life, the consequences of a failure are critical. Malfunctioning power supplies or temperature regulation systems can force the shutdown of an entire factory or data center. Heat and air conditioning losses in buildings waste our limited energy resources and pollute the environment unnecessarily. To detect these flaws as quickly as possible and to prevent their negative consequences, constant monitoring of power lines and heat sources is necessary. To this end, we propose a fully automatic system that creates 3D thermal models of indoor environments. The proposed system consists of a mobile platform equipped with a 3D laser scanner, an RGB camera, and a thermal camera. A novel 3D exploration algorithm ensures efficient data collection that covers the entire scene. The data from all sensors, collected at different positions, is joined into one common reference frame using calibration and scan matching. In the post-processing step, a model is built and points of interest are automatically detected. A viewer is presented that aids experts in analyzing the heat flow and in localizing and identifying heat leaks. Results are shown that demonstrate the functionality of the system.

15.
The clustering-by-fast-search-and-find-of-density-peaks algorithm (DPC) has the advantages of requiring no iteration and few parameters, but it still has shortcomings: the cutoff-distance parameter must be chosen manually, and it handles manifold datasets poorly. To address these problems, this paper proposes an improved density-peaks clustering algorithm. It combines natural-neighbor and shared-nearest-neighbor techniques to redefine the computation of the cutoff distance and the local density, and it incorporates the notion of candidate cluster centers: the algorithm selects distinct candidate centers, treats them as a new dataset, runs density-peaks clustering again, and finally assigns the remaining points to the clusters of their corresponding candidate centers. The improved algorithm is validated on synthetic and UCI datasets and compared with K-means, DBSCAN, and DPC. Experimental results show a clear performance improvement.
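For context, a minimal sketch of the baseline DPC procedure the paper improves on: local density rho via a fixed cutoff distance, delta as the distance to the nearest point of higher density, and centers as the points with large rho*delta. The data and the manually chosen cutoff are illustrative; the paper replaces exactly this manual cutoff with neighbor-based definitions.

```python
import numpy as np

rng = np.random.default_rng(2)
pts = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)

dc = 0.5                                   # cutoff distance (manual in base DPC)
rho = (dist < dc).sum(axis=1) - 1          # local density: neighbors within dc

delta = np.empty(len(pts))                 # distance to nearest denser point
for i in range(len(pts)):
    higher = np.where(rho > rho[i])[0]
    delta[i] = dist[i, higher].min() if len(higher) else dist[i].max()

gamma = rho * delta                        # decision value
centers = np.argsort(gamma)[-2:]           # top-2 points as cluster centers
print("centers:", centers, "rho:", rho[centers], "delta:", delta[centers])
```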

16.
Optimizing virtual machine placement is an effective way to reduce the energy consumption of cloud data centers, but over-consolidating virtual machines can create hot spots in server racks and undermine the reliability of the data center's services. This paper proposes a virtual machine placement algorithm based on energy efficiency and reliability. It jointly considers the relationships among host utilization, host temperature, host power, cooling-system power, and host reliability, and builds a redundancy model that guarantees host reliability. By proactively avoiding rack hot spots, it makes dynamic placement decisions that ensure host service reliability while reducing the data center's total energy consumption. Simulation results show that the algorithm not only saves more energy and avoids hot-spot hosts, but also provides better performance guarantees.

17.
When massive data is placed across data centers, each datum usually has several replicas, and the service provider pays a huge electricity bill to run the servers that store them. At the same time, keeping the replicas consistent requires synchronization over the inter-data-center network, which incurs high transfer costs. To minimize the placement cost of multi-replica data, this paper models the data placement problem and proposes DDDP, a placement algorithm based on partitioning data into groups and data centers into subsets: it divides the data into groups, partitions the data centers into subsets according to users' access-latency requirements, and places each group's data in the subset that satisfies the latency requirement at the lowest placement cost. Simulation results show that, compared with the NPR algorithm, DDDP effectively reduces the cost of storing data in data centers.
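A simplified sketch of the cost trade-off DDDP balances, under stated assumptions: enumerate replica subsets, keep those meeting the latency bound, and pick the one minimizing storage plus pairwise synchronization cost. All prices, latencies, and the "one replica within the bound" latency rule are invented simplifications.

```python
from itertools import combinations

latency = {"dc1": 20, "dc2": 35, "dc3": 60, "dc4": 45}      # ms to users
store =   {"dc1": 5.0, "dc2": 3.0, "dc3": 1.5, "dc4": 2.0}  # cost per GB
sync =    {("dc1","dc2"): 1.0, ("dc1","dc3"): 2.5, ("dc1","dc4"): 1.8,
           ("dc2","dc3"): 1.2, ("dc2","dc4"): 0.8, ("dc3","dc4"): 1.5}

def cost(subset, gb):
    s = sum(store[d] for d in subset) * gb
    s += sum(v for k, v in sync.items()          # replica synchronization
             if k[0] in subset and k[1] in subset) * gb
    return s

def place(gb, max_latency, replicas=2):
    """Cheapest replica subset with at least one close-enough data center."""
    ok = [c for c in combinations(sorted(latency), replicas)
          if min(latency[d] for d in c) <= max_latency]
    return min(ok, key=lambda c: cost(c, gb))

print(place(gb=100, max_latency=40))   # -> ('dc2', 'dc3') with these numbers
```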

18.
Density Peaks Clustering (DPC) is a density-based clustering algorithm with the advantages of needing no preset clustering parameters and being able to discover non-spherical clusters. However, DPC computes the cutoff distance dc empirically, which cannot handle every scenario effectively, and its manual selection of cluster centers makes it hard to identify the true centers accurately. This paper proposes a method that adapts the cutoff distance using the Gini index and obtains cluster centers automatically, which effectively overcomes the traditional DPC algorithm's inability to handle complex datasets. The algorithm first adapts the cutoff distance dc via the Gini index, then computes each point's cluster-center weight and uses changes in slope to find the critical point; this strategy effectively avoids the error introduced by picking cluster centers manually from the decision graph. Experiments show that the new algorithm not only determines cluster centers automatically but is also more accurate than the original.

19.
This paper presents a load control method for small data centers, which are rarely studied although they account for more than 50% of all data centers. The method utilizes the data network and the electrical network to control power usage for participation in demand response (DR) programs, which are regarded as the killer applications of the emerging smart grid (SG). Traditional data center power management often manipulates energy usage directly, which may be ineffective or impractical for small data centers due to their limited resources. Both the SG and data centers can be regarded as cyber-physical systems (CPSs). This article proposes an approach that performs data center DR load management through the cyberspaces of the SG and the targeted data center. The proposed method instructs the workload dispatcher to select the best-suited algorithm when a DR event is issued, and additionally adjusts the temperature set-points of the air conditioners. Simulation results show that this approach can achieve a 30% power reduction for DR.

20.
In this study, the effect of the nozzle number and the inlet pressure on the heating and cooling performance of the counter-flow vortex tube has been modeled with artificial neural networks (ANN) using experimentally obtained data. The ANN has been designed with Pithiya software. In the developed system, the output parameter, the temperature gradient between the cold and hot outlets (ΔT), is determined from inlet parameters such as the inlet pressure (Pinlet), nozzle number (N), and cold mass fraction (μc). The back-propagation learning algorithm with the Levenberg–Marquardt (LM) variant and the Fermi transfer function are used in the network. In addition, the statistical validity of the developed model has been assessed using the coefficient of determination (R2), the root mean square error (RMSE), and the mean absolute percentage error (MAPE). For ΔT, R2, RMSE and MAPE were determined as 0.9947, 0.188224, and 0.0460, respectively.
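For reference, the three validation statistics reported above computed explicitly; the measured/predicted temperature gradients below are made-up numbers, not the study's data.

```python
import numpy as np

measured = np.array([12.0, 15.5, 18.2, 21.0, 24.3])   # illustrative ΔT (deg C)
predicted = np.array([11.8, 15.9, 18.0, 21.4, 24.0])

ss_res = np.sum((measured - predicted) ** 2)
ss_tot = np.sum((measured - measured.mean()) ** 2)
r2 = 1 - ss_res / ss_tot                                   # coefficient of determination
rmse = np.sqrt(np.mean((measured - predicted) ** 2))       # root mean square error
mape = np.mean(np.abs((measured - predicted) / measured))  # MAPE (as a fraction)
print(f"R2={r2:.4f}, RMSE={rmse:.4f}, MAPE={mape:.4f}")
```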
