Similar Documents
1.
The scheduling of divisible loads is one of the most typical problems in research on parallel and distributed systems. For these large-scale systems, the energy consumption problem has drawn great attention in recent years because of falling hardware costs and growing concern over energy costs. In computing-intensive systems, energy is primarily consumed by CPUs, and dynamic voltage-frequency scaling technology can adjust CPU speed and thereby save energy. In this paper, we focus on computing-intensive applications and study the energy-aware scheduling problem for divisible loads in a bus network. An energy-speed model based on dynamic voltage scaling is introduced to characterize the problem, and the energy-aware scheduling problem is analyzed in the application layer above the operating system. The problem is formulated mathematically as a nonlinear programming problem, and the solution is obtained with the Lagrange multiplier method under Kuhn-Tucker conditions. Based on the analytical results, an energy-aware scheduling scheme for divisible loads called ENERG is presented. Finally, the scheme is compared with two other schemes to show the effectiveness and efficiency of the energy savings of our algorithm. The experimental results also illustrate the influence of network transmission delay on energy consumption. Copyright © 2015 John Wiley & Sons, Ltd.
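A small numerical sketch of the kind of nonlinear program described above (this is not the paper's ENERG scheme): it assumes an illustrative quadratic energy-speed model, ignores bus transmission delay, and lets an off-the-shelf solver find the energy-minimal load fractions under a common deadline, which is where the Lagrange/Kuhn-Tucker conditions enter.

```python
# Hypothetical toy model: split a divisible load among bus-connected workers so
# that total energy is minimized and every worker finishes by the deadline.
import numpy as np
from scipy.optimize import minimize

n_workers = 4
W = 1.0e6                                        # total load (work units), assumed
deadline = 0.5                                   # seconds, assumed
kappa = np.array([1.0, 1.2, 0.9, 1.1]) * 1e-9    # per-worker energy coefficients, assumed

def energy(x):
    # Each worker runs at the slowest speed meeting the deadline: s_i = x_i*W/deadline,
    # with an assumed energy model E_i = kappa_i * s_i^2 * (x_i * W).
    s = x * W / deadline
    return float(np.sum(kappa * s**2 * x * W))

cons = [{"type": "eq", "fun": lambda x: np.sum(x) - 1.0}]   # load fractions sum to 1
bounds = [(0.0, 1.0)] * n_workers
x0 = np.full(n_workers, 1.0 / n_workers)

res = minimize(energy, x0, method="SLSQP", bounds=bounds, constraints=cons)
print("load fractions:", np.round(res.x, 3), "energy (J):", res.fun)
```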

2.
Reducing power consumption has become an essential requirement for Cloud resource providers, not only to decrease operating costs but also to improve system reliability. As Cloud computing emerges as the platform for the Anything as a Service (XaaS) paradigm, modern real-time services are also becoming available through the Cloud. In this work, we investigate power-aware provisioning of virtual machines for real-time services. Our approach is (i) to model a real-time service as a real-time virtual machine request and (ii) to provision virtual machines in Cloud data centers using dynamic voltage frequency scaling schemes. We propose several schemes that reduce the power consumed by hard real-time services and provide power-aware, profitable provisioning of soft real-time services. Copyright © 2011 John Wiley & Sons, Ltd.
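As a toy illustration of the provisioning decision (not the schemes proposed in the paper), the sketch below picks, for a single real-time request given as CPU cycles and a deadline, the lowest DVFS level that still meets the deadline; the frequency list and the cubic power model are assumptions.

```python
# Hypothetical DVFS levels and a toy dynamic-power model (P ~ f^3).
FREQS_GHZ = [1.0, 1.4, 1.8, 2.2, 2.6]

def lowest_feasible_freq(cycles, deadline_s, freqs=FREQS_GHZ):
    """Return (frequency, estimated energy) for the slowest level meeting the deadline."""
    for f in sorted(freqs):
        exec_time = cycles / (f * 1e9)
        if exec_time <= deadline_s:
            power = 10.0 * f**3           # watts, illustrative constant
            return f, power * exec_time   # joules
    return None, None                     # infeasible even at the highest level

print(lowest_feasible_freq(cycles=2.0e9, deadline_s=1.5))   # -> (1.4, ...)
```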

3.
One of the major design constraints of a heterogeneous computing system is optimal scheduling, that is, mapping tasks onto the processing nodes so as to optimize the QoS parameters. Because of the huge energy consumption of computing resources, its negative environmental effects, and reduced system reliability, energy has unavoidably been added to the list of QoS parameters. Optimizing energy alongside makespan makes scheduling an even more challenging combinatorial optimization problem. This work proposes two energy-aware scheduling algorithms, G1 and G2, that schedule a batch of independent tasks on heterogeneous processors so as to minimize both the makespan and the energy consumption. The proposed algorithms assign tasks to the appropriate processors based on a weighted aggregation cost function, followed by a task migration phase designed to further reduce the makespan and the energy consumption. The study evaluates the performance of the proposed algorithms against peer algorithms, namely MinMin and MINSuff, in terms of makespan, energy consumption, flowtime, and utilization. An experimental study reveals that the proposed algorithm G2 consistently performs better under various test conditions. Copyright © 2015 John Wiley & Sons, Ltd.
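A minimal sketch of a weighted-aggregation assignment in the spirit of the description above (not the published G1/G2 algorithms, and without the migration phase): each independent task goes to the processor minimizing w·Δmakespan + (1-w)·Δenergy. The ETC matrix, power ratings, and weight are illustrative assumptions; a real implementation would normalize the two terms before combining them.

```python
def weighted_assignment(etc, power, w=0.5):
    """etc[i][j]: execution time of task i on processor j; power[j]: watts of processor j."""
    n_procs = len(power)
    ready = [0.0] * n_procs                              # per-processor finish time
    mapping = []
    for row in etc:
        best_j, best_cost = 0, float("inf")
        for j in range(n_procs):
            d_makespan = max(ready[j] + row[j], max(ready)) - max(ready)
            d_energy = power[j] * row[j]                 # toy energy: power x exec time
            cost = w * d_makespan + (1.0 - w) * d_energy # terms left unnormalized here
            if cost < best_cost:
                best_j, best_cost = j, cost
        ready[best_j] += row[best_j]
        mapping.append(best_j)
    return mapping, max(ready)

etc = [[4, 6, 9], [8, 5, 7], [3, 4, 2], [6, 6, 5]]       # 4 tasks x 3 processors, assumed
print(weighted_assignment(etc, power=[30, 20, 15]))
```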

4.
康雁 《计算机科学》2010,37(10):287-290
Energy consumption is an important factor affecting the performance of heterogeneous parallel and distributed systems. Dynamic voltage scaling (DVS) lowers the processor to different frequencies in order to save energy effectively. DVS techniques usually comprise two phases: task scheduling and idle-slack allocation. Most existing studies target the slack-allocation phase, whereas this work considers the relationship between task assignment and idle slack. To reduce the energy consumption of heterogeneous distributed systems, a DVS scheduling algorithm based on a tabu strategy is proposed. The algorithm first schedules a task set represented by a directed acyclic graph (DAG) onto processors and then improves the schedule with the tabu strategy: by forbidding tasks from being rescheduled onto particular processors, it enlarges the idle slack available to the allocation phase and thereby achieves further energy savings. Simulation results show that the algorithm effectively reduces the energy consumption of the computing system.

5.
To address eavesdropping by malicious nodes in energy-constrained multi-user mobile edge computing (MEC) systems, a secure partial computation offloading scheme combining wireless power transfer (WPT) and MEC is proposed. Taking minimization of the access point (AP) energy consumption as the optimization objective, the scheme jointly optimizes the AP energy-transmit covariance matrix, the local CPU frequencies, the number of offloaded bits, the offloading time allocation, and the users' transmit power, subject to computation-latency, secure-offloading, and energy-harvesting constraints. Because the AP energy minimization problem is non-convex, a difference-of-convex algorithm (DCA) first transforms it into a convex problem, and the Lagrangian dual method then yields the optimal solution in semi-closed form. When the computation task is 5×10^5 bits, the secure partial offloading scheme reduces energy consumption by 61.3% and 84.4% compared with local computing and secure full offloading, respectively; when the eavesdropper is more than 25 m away, the secure partial offloading scheme consumes far less energy than either baseline. Simulation results show that, while guaranteeing secure offloading at the physical layer, the proposed scheme effectively reduces AP energy consumption and improves the system performance gain.

6.
Energy consumption has become a major design constraint in modern computing systems. With the advent of petaflops architectures, power-efficient software stacks have become imperative for scalability. Techniques such as dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (throttling) are often used to reduce the power consumption of compute nodes. To avoid significant performance losses, these techniques should be applied judiciously during parallel application execution. For example, an application's communication phases may be good candidates for DVFS and CPU throttling without incurring a considerable performance loss. Collective operations are often treated as indivisible, and little attention has been devoted to the energy-saving potential of their algorithmic steps. In this work, two important collective communication operations, all-to-all and allgather, are investigated with respect to augmenting them with energy-saving strategies on a per-call basis. The experiments prove the viability of such a fine-grained approach and validate a theoretical power consumption estimate for multicore nodes proposed here. While keeping the performance loss low, the obtained energy savings were always significantly higher than those achieved when DVFS or throttling was switched on across the entire application run. Copyright © 2012 John Wiley & Sons, Ltd.

7.
In recent years, the issue of energy consumption in parallel and distributed computing systems has attracted a great deal of attention. In response, many energy-aware scheduling algorithms have been developed, primarily using the dynamic voltage-frequency scaling (DVFS) capability incorporated into recent commodity processors. The majority of these algorithms involve two passes: schedule generation and slack reclamation. The former redistributes tasks among DVFS-enabled processors based on a given cost function that includes makespan and energy consumption, while the latter is typically achieved by executing individual tasks with slack at a lower processor frequency. In this paper, a new slack reclamation algorithm is proposed by approaching the energy reduction problem from a different angle. Firstly, the problem of task slack reclamation using combinations of processor frequencies is formulated. Secondly, several proofs show that (1) if the processor's working frequency set is assumed to be continuous, the optimal energy is always achieved using only one frequency; (2) for real processors with a discrete set of working frequencies, the optimal energy is always achieved using at most two frequencies; and (3) these two frequencies are adjacent/neighbouring when processor energy consumption is a convex function of frequency. Thirdly, a novel algorithm is presented that finds the combination of frequencies yielding the optimal energy. The algorithm has been evaluated on three different sets of task graphs: 3000 randomly generated task graphs and 600 task graphs for two popular applications (Gauss-Jordan and LU decomposition). The results show the superiority of the proposed algorithm in comparison with other techniques.
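To make the two-adjacent-frequency result concrete, here is a minimal sketch (not the paper's algorithm): given a task's cycle count, the time window created by its slack, and a discrete frequency set, it brackets the ideal continuous speed with the two neighbouring levels and solves for the time split that exactly fills the window. The frequency values are illustrative assumptions.

```python
FREQS_HZ = [1.0e9, 1.4e9, 1.8e9, 2.2e9]          # assumed discrete DVFS levels

def two_freq_split(cycles, window_s, freqs=FREQS_HZ):
    """Split `cycles` over `window_s` seconds using at most two adjacent frequencies."""
    freqs = sorted(freqs)
    ideal = cycles / window_s                     # continuous-optimum speed
    if ideal <= freqs[0]:
        return [(freqs[0], cycles / freqs[0])]    # lowest level alone already suffices
    if ideal > freqs[-1]:
        raise ValueError("cannot finish within the window even at f_max")
    f_lo = max(f for f in freqs if f <= ideal)
    f_hi = min(f for f in freqs if f >= ideal)
    if f_lo == f_hi:
        return [(f_lo, window_s)]                 # ideal speed is an available level
    # Solve t_lo + t_hi = window_s and f_lo*t_lo + f_hi*t_hi = cycles.
    t_hi = (cycles - f_lo * window_s) / (f_hi - f_lo)
    return [(f_lo, window_s - t_hi), (f_hi, t_hi)]

print(two_freq_split(cycles=2.4e9, window_s=1.5))   # 0.75 s at 1.4 GHz + 0.75 s at 1.8 GHz
```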

8.
DVFS is a ubiquitous technique for CPU power management in modern computing systems. Reducing processor frequency/voltage decreases CPU power consumption but increases execution time. In this paper, we analyze which application and platform characteristics are necessary for a successful energy-performance trade-off in large-scale parallel applications. We present a model that gives an upper bound on the performance loss due to frequency scaling based on the application's parallel efficiency. The model was validated with performance measurements of large-scale parallel applications. We then track how application sensitivity to frequency scaling has evolved over the last decade across cluster generations. Finally, we study how cluster power consumption characteristics, together with application sensitivity to frequency scaling, determine the energy effectiveness of the DVFS technique.
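The bound itself is not reproduced here, but its flavour can be sketched with a simple assumed model (not the paper's): if a fraction of wall time equal to the parallel efficiency is frequency-sensitive computation and the remainder (communication, waiting) is not, the relative slowdown when scaling from f_max down to f is bounded as below.

```python
def slowdown_upper_bound(eff, f, f_max):
    """Upper bound on T(f) / T(f_max) under the assumed compute/non-compute split."""
    return 1.0 + eff * (f_max / f - 1.0)

# e.g. 60% parallel efficiency, scaling 2.6 GHz down to 2.0 GHz:
print(slowdown_upper_bound(eff=0.6, f=2.0, f_max=2.6))   # ~1.18, i.e. at most ~18% slower
```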

9.
Multi-core computing has gone mobile. Managing power consumption within energy-constrained mobile devices demands low-power architectures to increase battery lifespan. One of the promising solutions offered today by microprocessor architects is hybrid microprocessors that integrate different core architectures on a single die and that are equipped with dynamic frequency-scaling techniques. This paper presents analytical models based on an energy consumption metric to analyze the impact of dynamic frequency scaling on the energy consumption of various architectural design choices for hybrid-architecture chips. The power consumption implications of different processing schemes and various chip configurations were also analyzed. The analysis shows that by choosing the optimal hardware configuration, the energy savings can be increased considerably while keeping sacrifices in performance at tolerable levels.

10.
Most studies of energy management for mixed-criticality (MC) systems are based on dynamic-priority schemes, whose disadvantages are high system overhead and poor predictability. Unlike previous studies, we focus on scheduling MC periodic tasks to minimize energy consumption under a fixed-priority scheme. Firstly, we describe criticality rate-monotonic scheduling (CRMS) and propose a sufficient schedulability condition for it. Secondly, we compute the energy-minimizing uniform scaled speed and present an optimal static solution algorithm based on CRMS. The extra workload of a high-criticality (HI) task executes at the maximum processor speed in the high-criticality mode (HI-mode), but this algorithm does not exploit the slack time generated by HI tasks in the low-criticality mode (LO-mode). For better energy efficiency, we propose a dynamic fixed-priority energy minimization algorithm that exploits the slack generated by HI tasks in LO-mode, combining dynamic voltage and frequency scaling with dynamic power management to reduce energy consumption. Finally, experiments evaluate the performance of the proposed algorithm; the results show that it can save up to 23.89% energy compared with existing algorithms.
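The paper's CRMS condition is not reproduced here. As a loose stand-in for the "uniform scaled speed" idea, the sketch below picks the smallest uniform speed at which a plain (non-mixed-criticality) periodic task set still passes the classic Liu-Layland rate-monotonic utilization bound; the task parameters are illustrative assumptions, and a real MC analysis would use per-mode execution budgets instead.

```python
def min_uniform_speed(tasks):
    """tasks: list of (wcet_at_full_speed, period).  Returns the required speed in (0, 1], or None."""
    n = len(tasks)
    util_full = sum(c / t for c, t in tasks)      # utilization at full speed
    bound = n * (2 ** (1.0 / n) - 1.0)            # Liu-Layland rate-monotonic bound
    speed = util_full / bound                     # scaled utilization = util_full / speed
    return speed if speed <= 1.0 else None        # None: not schedulable by this test

tasks = [(1.0, 10.0), (2.0, 20.0), (3.0, 40.0)]   # (WCET, period) in ms, assumed
print(min_uniform_speed(tasks))                   # ~0.35, i.e. roughly a third of full speed
```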

11.
An Energy-Efficient Scheduling Optimization Method for Parallel Tasks on Homogeneous Cluster Systems
Energy-efficient scheduling algorithm design is a research hotspot in high-performance computing. Duplication-based scheduling algorithms reduce the waiting delay of successor tasks and shorten the overall schedule length, but they consume more energy. To address this, the authors propose a heuristic processor-merging optimization method, PRO. It searches for idle gaps on processors according to the earliest start and finish times of tasks and reassigns the tasks of lightly loaded processors to other processors, thereby reducing the number of processors used and the overall system energy consumption. Experimental results show that, compared with the existing duplication-based scheduling algorithms TDS, EAD, and PEBD, the optimized algorithm markedly reduces the number of processors used and the total energy consumption without increasing the schedule length, achieving a better balance between performance and energy.

12.
This work studies energy-efficient task scheduling on heterogeneous clusters and proposes a duplication-based task scheduling algorithm that, starting from an initial task assignment, considers task duplication from two perspectives: energy awareness and the performance-energy balance. A mathematical model of the energy consumed by computation and communication is established, and extensive experiments are conducted. The results show that, compared with the existing BEATA algorithm, the proposed algorithm significantly reduces both the schedule length and the energy consumption of parallel applications on heterogeneous clusters. The analysis also finds that task duplication increases energy consumption while reducing the schedule length; scheduling methods that optimize schedule length and energy consumption simultaneously are a direction for future work.

13.
Cloud computing provides service nodes backed by large pools of virtualized computing resources, and managing dynamically scalable resources for both energy efficiency and service quality has become an important problem. To this end, building on extensive prior work, an improved genetic algorithm is proposed and a system model is constructed; qualitative and quantitative analyses are carried out with the CloudSim simulator and the CloudAnalyst tool. The approach is also compared with conventional dynamic voltage and frequency scaling (DVFS). The results show that QoS-aware energy management of virtual machines improves response time, energy consumption, the number of VM migrations, and consolidation adaptability: at the same power budget, the new method shortens the response time of user requests and thus improves service quality, and for the same response time it effectively reduces energy consumption. These improvements raise user satisfaction with the service and lay a foundation for the future use of parallel computing techniques.

14.
This paper studies the optimal tuning of Fuzzy Proportional-Integral-Derivative (PID) controllers by considering two different nonlinear constrained optimisation techniques: one relying on a Hessian-based analytical approach and the other on a differential evolution method. For offline implementation, two basic frameworks are assessed, depending on the controller parameters to be adjusted. Online tuning of the scaling factors and membership function widths is implemented using the parallel computation paradigm. The performance index is a quadratic cost function taking as arguments the control errors and the increments of the control actions. Constraints on the scaling factors and membership function widths, as well as on the system inputs and outputs, are also included in the optimisation problem. Experiments carried out on a benchmark system favour the offline joint optimisation of scaling factors and membership function widths based on the differential evolution approach.
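A minimal sketch of the evolutionary half of this idea (not the paper's Hessian-based or fuzzy formulation): differential evolution tunes plain PID gains on a toy first-order plant, with a quadratic cost over the tracking error and the control increments, just to make the DE loop and the cost structure concrete. The plant, cost weights, gain bounds, and DE hyper-parameters are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(gains, steps=200, dt=0.05, setpoint=1.0):
    kp, ki, kd = gains
    y = integ = prev_err = prev_u = total = 0.0
    for _ in range(steps):
        err = setpoint - y
        integ += err * dt
        u = kp * err + ki * integ + kd * (err - prev_err) / dt
        total += err**2 + 0.01 * (u - prev_u)**2       # quadratic cost on error and du
        y += dt * (-y + u)                             # toy first-order plant dy/dt = -y + u
        prev_err, prev_u = err, u
        if abs(y) > 1e3:                               # unstable gains: large penalty, bail out
            return 1e9
    return total

def differential_evolution(dim=3, pop=20, gens=100, F=0.6, CR=0.9, lo=0.0, hi=10.0):
    X = rng.uniform(lo, hi, size=(pop, dim))
    fit = np.array([cost(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = X[rng.choice([j for j in range(pop) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True            # force at least one mutated gene
            trial = np.where(cross, mutant, X[i])
            f_trial = cost(trial)
            if f_trial < fit[i]:
                X[i], fit[i] = trial, f_trial
    return X[fit.argmin()], fit.min()

print(differential_evolution())                        # best (kp, ki, kd) and its cost
```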

15.
Energy consumption in cloud data centers is increasing as the use of cloud services grows, so new methods of decreasing energy consumption are needed. Green cloud computing helps reduce energy consumption and significantly decreases both operating costs and greenhouse gas emissions. Scheduling the enormous number of user-submitted workflow tasks is an important aspect of cloud computing, and resources in cloud data centers should execute these tasks using energy-efficient techniques. This paper proposes a new energy-aware scheduling algorithm for time-constrained workflow tasks using the DVFS method, in which a host reduces its operating frequency by using different voltage levels. The goal of this research is to reduce energy consumption and SLA violations and to improve resource utilization. The simulation results show that the proposed method performs more efficiently on metrics such as energy consumption, average execution time, average resource utilization, and average SLA violations.

16.
Owing to the sustained, rapid growth of big data and the demand for higher-accuracy solutions to application problems, the completion time of fixed-time big data tasks executing on the original parallel computing systems grows longer and longer. To keep meeting a fixed completion time, the original parallel computing systems need to be scaled accordingly. This paper therefore studies an iso-time scaling method to guide the scaling of parallel computing systems. Firstly, models of big data parallel tasks and of parallel computing systems are built, and an algorithm is designed to calculate the completion time of big data parallel tasks. Secondly, based on the actual situation of most current computing centers, we put forward reasonable assumptions, make full use of backup computational nodes, and optimize the cost of scaling parallel computing systems. A vertical scaling algorithm is then designed to upgrade computational nodes, and a horizontal scaling algorithm is designed to add computational nodes. Furthermore, the two scaling algorithms are compared in terms of time complexity, degree of parallelism, and utilization of the scaled parallel computing system. Finally, simulation experiments show that our method keeps the completion time within the fixed limit as growing data-parallel tasks execute on the scaled systems, and that it achieves lower scaling cost than traditional methods.

17.
Workflow applications are a popular paradigm used by scientists to model applications to be run on heterogeneous high-performance parallel and distributed computing systems. Today, the growing number and heterogeneity of multi-core parallel systems give almost every scientist access to high-performance computing, yet entail additional challenges. One of the critical problems is the power required to operate these systems, for both environmental and financial reasons. To decrease the energy consumption of heterogeneous systems, methods such as energy-efficient scheduling are receiving increasing attention. Current schedulers, however, are based on simplistic energy models that do not match reality, use techniques like DVFS that are not available on all systems, or do not treat the problem as a multi-objective optimisation with performance and energy as simultaneous objectives. In this paper, we present a new Pareto-based multi-objective workflow scheduling algorithm, an extension of an existing state-of-the-art heuristic, capable of computing a set of trade-off optimal solutions in terms of makespan and energy efficiency. Our approach is based on empirical models that capture the real behaviour of energy consumption in heterogeneous parallel systems. We compare the new approach with a classical mono-objective scheduling heuristic and a state-of-the-art multi-objective optimisation algorithm and demonstrate that it computes better or similar results in different scenarios. Analysing the trade-off solutions computed under different experimental configurations, we observe that in some cases the algorithm finds solutions that reduce energy consumption by up to 34.5% with a slight 2% increase in makespan.
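A minimal sketch of the Pareto side of such an approach (not the scheduler itself): given candidate schedules scored by makespan and energy, keep only the non-dominated trade-off solutions. The sample points are illustrative assumptions.

```python
def pareto_front(points):
    """points: list of (makespan, energy) tuples; both objectives are minimized."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return sorted(front)

schedules = [(100, 50), (90, 70), (120, 40), (95, 45), (110, 60)]   # assumed scores
print(pareto_front(schedules))   # [(90, 70), (95, 45), (120, 40)]
```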

18.
Although high-performance computing has always been about efficient application execution, both energy and power consumption have become critical concerns owing to their effect on the operating costs and failure rates of large-scale computing platforms. Modern processors provide techniques, such as dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (throttling), to improve energy efficiency on the fly. Without careful application, however, DVFS and throttling may cause a significant performance loss due to system overhead. This paper proposes a novel runtime system that maximizes energy savings by selecting appropriate values for DVFS and throttling in parallel applications. Specifically, the system automatically predicts communication phases in parallel applications and applies frequency scaling considering both the CPU offload provided by the network-interface card and the architectural stalls during computation. Experiments, performed on the NAS parallel benchmarks as well as on real-world applications in molecular dynamics and linear system solution, demonstrate that the proposed runtime system obtains energy savings of as much as 14% with a low performance loss of about 2%.

19.
In unmanned aerial vehicle (UAV) wireless networks, UAVs carrying base-station equipment can flexibly provide wireless communication services and support data transmission over high-quality wireless channels. Deploying edge computing servers at the base station, in turn, brings computing resources closer to users: through task offloading, computation can be performed directly at the base station, relieving the load on the network's fronthaul links. However, given limited energy, how to reduce network energy consumption through resource optimization while keeping both the user-to-base-station data transmission and the task processing stable remains a challenging problem. For the scenario in which users send data to a UAV base station and offload computation tasks, this work studies energy minimization via joint wireless and computing resource optimization under stability constraints on data transmission and task processing. Data queues and task queues are constructed, Lyapunov optimization theory is used to transform and decompose the problem, a trade-off between energy consumption and queue backlog is obtained, and simulations evaluate the effectiveness of the proposed solution.
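A minimal sketch of the Lyapunov drift-plus-penalty idea used in this line of work (not the paper's algorithm): in each slot, pick the CPU frequency whose weighted sum of power cost and (negative) queue-service term is smallest, then update the task queue as Q(t+1) = max(Q(t) - served, 0) + arrivals. The arrival process, frequency levels, power model, and trade-off weight V are illustrative assumptions.

```python
import random

random.seed(1)
FREQS = [0.0, 0.5e9, 1.0e9, 1.5e9]       # Hz; 0 means the CPU idles this slot (assumed)
CYCLES_PER_BIT = 1000                     # assumed processing cost
SLOT = 0.01                               # slot length in seconds
V = 1e9                                   # energy/backlog trade-off weight (larger favours energy)

def drift_plus_penalty(Q, arrivals_bits):
    best_f, best_score = 0.0, float("inf")
    for f in FREQS:
        served_bits = f * SLOT / CYCLES_PER_BIT
        power = 1e-27 * f**3                            # toy CMOS dynamic-power model
        score = V * power * SLOT - Q * served_bits      # penalty term minus backlog relief
        if score < best_score:
            best_f, best_score = f, score
    served = best_f * SLOT / CYCLES_PER_BIT
    return max(Q - served, 0.0) + arrivals_bits, best_f

Q = 0.0
for t in range(100):
    Q, f = drift_plus_penalty(Q, arrivals_bits=random.randint(0, 8000))
print("final backlog (bits):", round(Q, 1))
```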

20.
In order to minimize the execution time of a parallel application running on a heterogeneous distributed computing system, an appropriate mapping scheme is needed to allocate the application tasks to the processors. The general problem of mapping tasks to machines is a well-known NP-hard problem, and several heuristics have been proposed to approximate its optimal solution. In this paper we propose a static graph-based mapping algorithm, called Heterogeneous Multi-phase Mapping (HMM), which permits suboptimal mapping of a parallel application onto a heterogeneous distributed computing system by using a local search technique together with a tabu search meta-heuristic. HMM allocates parallel tasks by exploiting the information embedded in the parallelism forms used to implement an application, and by considering an affinity parameter that identifies which machine in the heterogeneous computing system is most suitable to execute a task. We compare HMM with some leading techniques and with an exhaustive mapping algorithm, and we give an example of mapping two real applications using HMM. Experimental results show that HMM performs well, demonstrating the applicability of our approach. Copyright © 2005 John Wiley & Sons, Ltd.
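A minimal sketch of a tabu-style local search for task-to-machine mapping (not the HMM algorithm, and without its parallelism-form and affinity information): repeatedly move one task to another machine to reduce the makespan, while a short tabu list blocks recently reversed moves. The ETC matrix and tabu tenure are illustrative assumptions.

```python
import random
from collections import deque

random.seed(0)

def makespan(mapping, etc):
    loads = [0.0] * len(etc[0])
    for task, machine in enumerate(mapping):
        loads[machine] += etc[task][machine]
    return max(loads)

def tabu_map(etc, iters=200, tenure=7):
    n_tasks, n_machines = len(etc), len(etc[0])
    current = [random.randrange(n_machines) for _ in range(n_tasks)]
    best, best_val = current[:], makespan(current, etc)
    tabu = deque(maxlen=tenure)                      # recently vacated (task, machine) pairs
    for _ in range(iters):
        task = random.randrange(n_tasks)
        candidates = [m for m in range(n_machines)
                      if m != current[task] and (task, m) not in tabu]
        if not candidates:
            continue
        move = min(candidates,
                   key=lambda m: makespan(current[:task] + [m] + current[task + 1:], etc))
        tabu.append((task, current[task]))           # forbid moving straight back
        current[task] = move
        val = makespan(current, etc)
        if val < best_val:
            best, best_val = current[:], val
    return best, best_val

etc = [[random.uniform(1, 10) for _ in range(3)] for _ in range(8)]  # 8 tasks x 3 machines
print(tabu_map(etc))
```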
