Similar Literature
A total of 20 similar documents were found (search time: 15 ms).
1.
Petar   《Performance Evaluation》2007,64(9-12):1053-1061
The maximum weight matching algorithm is a high-performance scheduling algorithm for cross-bar switches. It is known that it performs optimally under heavy loads. However, its centralized nature and high computational complexity limit the algorithm’s applicability. This paper presents a randomized algorithm for distributed switch scheduling that is capable of delivering high throughput.
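As a point of reference for the abstract above, the sketch below shows the centralized maximum-weight-matching baseline for a VOQ crossbar switch: in each time slot the VOQ occupancies serve as edge weights and a maximum-weight bipartite matching selects the input-output pairs to serve. The use of SciPy's assignment solver and the queue-length weights are illustrative assumptions; this is not the paper's randomized, distributed algorithm.

```python
# Minimal sketch of centralized maximum-weight matching (MWM) scheduling for an
# N x N crossbar switch: VOQ occupancies are the weights and an assignment
# solver picks the matching served in this time slot.
import numpy as np
from scipy.optimize import linear_sum_assignment

def mwm_schedule(voq_lengths: np.ndarray):
    """Return the (input, output) pairs to serve in the current time slot.

    voq_lengths[i, j] is the number of cells queued at input i for output j.
    """
    # linear_sum_assignment minimizes, so negate the weights to maximize.
    inputs, outputs = linear_sum_assignment(-voq_lengths)
    # Serve only pairs whose queue is non-empty.
    return [(i, j) for i, j in zip(inputs, outputs) if voq_lengths[i, j] > 0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    voq = rng.integers(0, 10, size=(4, 4))   # toy 4x4 switch
    print(mwm_schedule(voq))
```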

2.
Effective task scheduling is essential for obtaining high performance in heterogeneous distributed computing systems (HeDCSs). However, finding an effective task schedule in HeDCSs requires the consideration of both the heterogeneity of processors and high interprocessor communication overhead, which results from non-trivial data movement between tasks scheduled on different processors. In this paper, we present a new high-performance scheduling algorithm, called the longest dynamic critical path (LDCP) algorithm, for HeDCSs with a bounded number of processors. The LDCP algorithm is a list-based scheduling algorithm that uses a new attribute to efficiently select tasks for scheduling in HeDCSs. The efficient selection of tasks enables the LDCP algorithm to generate high-quality task schedules in a heterogeneous computing environment. The performance of the LDCP algorithm is compared to two of the best existing scheduling algorithms for HeDCSs: the HEFT and DLS algorithms. The comparison study shows that the LDCP algorithm outperforms the HEFT and DLS algorithms in terms of schedule length and speedup. Moreover, the improvement in performance obtained by the LDCP algorithm over the HEFT and DLS algorithms increases as the inter-task communication cost increases. Therefore, the LDCP algorithm provides a practical solution for scheduling parallel applications with high communication costs in HeDCSs.
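The HEFT baseline mentioned in the abstract lends itself to a compact illustration. The sketch below is a minimal HEFT-style list scheduler (upward-rank ordering plus earliest-finish-time processor selection) on a toy DAG; the task graph, costs, and function names are assumptions for illustration, and this is not the LDCP algorithm itself.

```python
# Sketch of a HEFT-style list scheduler on a toy task DAG with two heterogeneous
# processors: tasks are ordered by upward rank, then each is placed on the
# processor that gives the earliest finish time.
# cost[t][p]: execution time of task t on processor p (hypothetical numbers);
# succ[t]: successors of t with the communication cost paid if they run on a
# different processor.
cost = {"A": [14, 16], "B": [13, 19], "C": [11, 13], "D": [7, 10]}
succ = {"A": {"B": 18, "C": 12}, "B": {"D": 19}, "C": {"D": 23}, "D": {}}
preds = {t: [p for p in succ if t in succ[p]] for t in cost}

def upward_rank(task):
    avg = sum(cost[task]) / len(cost[task])
    if not succ[task]:
        return avg
    return avg + max(c + upward_rank(s) for s, c in succ[task].items())

def heft_schedule():
    order = sorted(cost, key=upward_rank, reverse=True)   # predecessors first
    proc_free = [0.0, 0.0]                                # when each processor is free
    placement, finish = {}, {}
    for t in order:
        best = None
        for p in range(len(proc_free)):
            # Data from a predecessor is free if it ran on the same processor.
            ready = max([proc_free[p]] +
                        [finish[q] + (0 if placement[q] == p else succ[q][t])
                         for q in preds[t]])
            eft = ready + cost[t][p]
            if best is None or eft < best[0]:
                best = (eft, p)
        finish[t], placement[t] = best
        proc_free[best[1]] = best[0]
    return placement, finish

if __name__ == "__main__":
    print(heft_schedule())
```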

3.
Centralized or hierarchical administration of the classical grid resource discovery approaches is unable to efficiently manage highly dynamic large-scale grid environments. Peer-to-peer (P2P) overlays offer a dynamic, scalable, and decentralized perspective on grids. Structured P2P methods do not fully support multi-attribute range queries, and unstructured P2P resource discovery methods suffer from the network-wide broadcast storm problem. In this paper, a decentralized learning automata-based resource discovery algorithm is proposed for large-scale P2P grids. The proposed method supports multi-attribute range queries and forwards resource queries along the shortest path ending at the grid peers most likely to have the requested resource. Several simulation experiments are conducted to show the efficiency of the proposed algorithm. Numerical results reveal the superiority of the proposed model over the other methods in terms of average hop count, average hit ratio, and control message overhead.
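The core mechanism behind learning-automata-based query forwarding can be illustrated with the generic linear reward-inaction update: each peer keeps a probability vector over its neighbours, samples a neighbour to forward a query to, and reinforces that choice when the query eventually hits. The sketch below shows this generic scheme under assumed names and a fixed learning rate; it is not the paper's exact algorithm.

```python
# Sketch of a linear reward-inaction (L_R-I) learning automaton for choosing
# which neighbour to forward a resource query to. On a hit, the probability of
# the chosen neighbour is reinforced; on a miss, probabilities are unchanged.
import random

class ForwardingAutomaton:
    def __init__(self, neighbours, learning_rate=0.1):
        self.neighbours = list(neighbours)
        self.a = learning_rate
        n = len(self.neighbours)
        self.p = [1.0 / n] * n          # action probability vector

    def choose(self):
        return random.choices(range(len(self.neighbours)), weights=self.p)[0]

    def reward(self, chosen):
        # Linear reward-inaction: move probability mass toward the rewarded action.
        for i in range(len(self.p)):
            if i == chosen:
                self.p[i] += self.a * (1.0 - self.p[i])
            else:
                self.p[i] *= (1.0 - self.a)

if __name__ == "__main__":
    la = ForwardingAutomaton(["peer1", "peer2", "peer3"])
    for _ in range(50):
        idx = la.choose()
        if la.neighbours[idx] == "peer2":   # pretend peer2 leads to the resource
            la.reward(idx)
    print(dict(zip(la.neighbours, (round(x, 3) for x in la.p))))
```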

4.
A modified genetic algorithm for distributed scheduling problems
Genetic algorithms (GAs) have been widely applied to scheduling and sequencing problems because of their applicability to different domains and their capability of obtaining near-optimal results. However, most GAs investigated so far concentrate on traditional single-factory or single job-shop scheduling problems. With the increasing popularity of distributed, or globalized, production, these GAs need to be further developed to deal with the newly emerged distributed scheduling problems. In this paper, a modified GA is presented that is capable of solving traditional scheduling problems as well as distributed scheduling problems. Various scheduling objectives can be achieved, including minimizing makespan, cost, and weighted multiple criteria. The proposed algorithm has been evaluated with satisfactory results on several classical scheduling benchmarks. Furthermore, the capability of the modified GA was also tested on distributed scheduling problems.

5.
A grid scheduling method based on a hierarchical scheduling strategy and dynamic data replication
To perform task scheduling and data replication in grids effectively and thereby reduce task execution time, this paper proposes a task scheduling algorithm (ISS) and an optimized dynamic data replication algorithm (ODHRA), and builds a scheme that combines the two. The scheme uses ISS to jointly consider the length of the task waiting queue, the location of the data a task requires, and the computing capacity of each site; using hierarchical scheduling over the network structure, it computes a combined task cost with appropriate weighting coefficients and searches for the best computing-node region. ODHRA analyzes data transfer time, storage access latency, replica requests waiting in the storage queue, and the distance between nodes to select the best replica location from among the many replicas, and combines this with replica placement and replica management to reduce file access time. Simulation results show that, compared with other algorithms, the proposed scheme achieves better average task execution time.
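The replica-selection step described above can be illustrated as a weighted-cost ranking over candidate sites. The sketch below scores each replica site by transfer time, storage access latency, queued replica requests, and distance, and picks the cheapest; the attribute names and weights are illustrative assumptions, not the exact ODHRA formula.

```python
# Sketch of weighted-cost replica selection: each candidate site is scored by a
# weighted sum of transfer time, storage latency, pending replica requests, and
# network distance; the lowest-cost site is chosen.
from dataclasses import dataclass

@dataclass
class ReplicaSite:
    name: str
    transfer_time: float     # estimated file transfer time (s)
    storage_latency: float   # storage access latency (s)
    queued_requests: int     # replica requests waiting in the storage queue
    hops: int                # network distance to the requesting node

def best_replica(sites, w=(1.0, 1.0, 0.5, 0.2)):
    def cost(s):
        return (w[0] * s.transfer_time + w[1] * s.storage_latency +
                w[2] * s.queued_requests + w[3] * s.hops)
    return min(sites, key=cost)

if __name__ == "__main__":
    candidates = [
        ReplicaSite("siteA", transfer_time=4.0, storage_latency=0.2,
                    queued_requests=3, hops=2),
        ReplicaSite("siteB", transfer_time=6.5, storage_latency=0.1,
                    queued_requests=0, hops=1),
    ]
    print(best_replica(candidates).name)
```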

6.
A static scheduling algorithm is presented for off-line scheduling of tasks in distributed hard real-time systems. The tasks considered are instances of periodic jobs and have deadlines, resource requirements and precedence constraints. Tasks are divided into nonpreemptable blocks and all task characteristics are known a priori. The algorithm orders the tasks and iteratively schedules them according to the order. Each task is scheduled globally by selecting a node to which it is assigned. Then, the task is scheduled locally by adding it to the schedule of the selected node. Heuristics are used for both task ordering and node selection in order to guide the algorithm to a feasible schedule. Whenever local scheduling leads to an infeasible schedule, backtracking is used. Results of simulation studies of randomly generated task sets are presented. Although the scheduling problem is NP-hard, the results show that time performance is acceptable for off-line scheduling, except for extremely difficult task sets which make extensive use of the available resources.

7.
The most important goal in hard real-time systems is to guarantee that all timing constraints are satisfied. Even though object-based techniques (which contain reusable software components) are used to manage the complexity in the software development process of such systems, execution efficiency may have to be sacrificed, due to the large number of procedure calls and contention for accessing software components. These issues are addressed by the following parallelizing techniques: (a) converting potentially inefficient procedure calls to a source of concurrency via asynchronous remote procedure calls (ARPC), and (b) replicating (or cloning) software components to reduce the contention. Existing object-based scheduling algorithms construct an initial schedule and apply incremental parallelization techniques to modify it until a feasible schedule is generated, but they are applicable only to scheduling multiple independent tasks. This paper describes a pre-run-time scheduling algorithm for a set of periodic object-based tasks having precedence constraints among them. The algorithm allocates the components of object-based periodic real-time tasks to the sites of a distributed system based on a clustering heuristic which takes into account the ARPC parallelism and load balancing, and schedules them on the respective sites. The algorithm also finds a schedule for the communication channel(s). Further, it clones the components of object-based periodic tasks if contention occurs in accessing them. In addition to the above (periodicity and precedence) constraints, the tasks handled by our algorithm can have resource constraints among them. The experimental evaluation of the algorithm shows that the combination of the proposed clustering heuristic and cloning enhances schedulability.

8.
Job scheduling in utility grids should take into account the incentives for both grid users and resource providers. However, most existing studies on job scheduling in utility grids only address the incentive for one party, i.e., either the users or the resource providers. Very few studies consider incentives for both parties, and even those do not address cost, one of the most attractive incentives for users. In this paper, we study job scheduling in utility grids by optimizing the incentives for both parties. We propose a multi-objective optimization approach: maximizing the successful execution rate of jobs and minimizing the combined cost (incentives for grid users), while minimizing the fairness deviation of profits (incentive for resource providers). The proposed multi-objective optimization approach offers sufficient incentives for the two parties to stay and play in the utility grid. A heuristic scheduling algorithm called Cost-Greedy Price-Adjusting (CGPA) is developed to optimize the incentives for both parties. Simulation results show that the CGPA algorithm is effective and, in most cases, leads to a higher successful execution rate, lower combined cost and lower fairness deviation compared with some popular algorithms.

9.
Grid computing utilizes distributed heterogeneous resources in order to support complicated computing problems. Grids can be classified into two types: computing grids and data grids. Job scheduling in a computing grid is a very important problem: to utilize grids efficiently, we need a good job scheduling algorithm to assign jobs to resources. In nature, ants have a tremendous ability to team up and find an optimal path to food sources, and ant algorithms simulate this behavior. In this paper, we propose a Balanced Ant Colony Optimization (BACO) algorithm for job scheduling in the Grid environment. The main contribution of our work is to balance the entire system load while trying to minimize the makespan of a given set of jobs. Experimental results show that BACO outperforms other job scheduling algorithms.
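For context, the sketch below shows the generic ant-colony mechanics that approaches like BACO build on: pheromone values over job-resource pairs bias each ant's probabilistic assignment, pheromone evaporates every iteration, and the best (smallest-makespan) schedule found so far is reinforced. The parameters and update rule are illustrative assumptions and do not reproduce BACO's load-balancing details.

```python
# Sketch of basic ant colony optimization for assigning jobs to grid resources:
# pheromone tau[j][r] biases each ant's job-to-resource choices; pheromone is
# evaporated and then reinforced along the best schedule found so far.
import random

def aco_schedule(job_sizes, resource_speeds, ants=10, iters=50, rho=0.1, seed=1):
    rng = random.Random(seed)
    n_jobs, n_res = len(job_sizes), len(resource_speeds)
    tau = [[1.0] * n_res for _ in range(n_jobs)]
    best_assign, best_makespan = None, float("inf")

    def makespan(assign):
        load = [0.0] * n_res
        for j, r in enumerate(assign):
            load[r] += job_sizes[j] / resource_speeds[r]
        return max(load)

    for _ in range(iters):
        for _ in range(ants):
            assign = [rng.choices(range(n_res), weights=tau[j])[0]
                      for j in range(n_jobs)]
            ms = makespan(assign)
            if ms < best_makespan:
                best_assign, best_makespan = assign, ms
        # Evaporate, then deposit pheromone along the best-known schedule.
        for j in range(n_jobs):
            for r in range(n_res):
                tau[j][r] *= (1.0 - rho)
        for j, r in enumerate(best_assign):
            tau[j][r] += 1.0 / best_makespan
    return best_assign, best_makespan

if __name__ == "__main__":
    print(aco_schedule(job_sizes=[4, 8, 2, 6, 5], resource_speeds=[1.0, 2.0]))
```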

10.
Distributed Scheduling (DS) problems have attracted attention from researchers in recent years. DS problems in multi-factory production are much more complicated than classical scheduling problems because they involve not only the scheduling problems within a single factory, but also a higher-level problem: how to allocate jobs to suitable factories. They mainly focus on solving two issues simultaneously: (i) allocation of jobs to suitable factories and (ii) determination of the corresponding production schedule in each factory. The objective is to maximize system efficiency by finding an optimal plan for better collaboration among the various processes. However, in many papers machine maintenance has been ignored during production scheduling. In reality, every machine requires maintenance, which directly influences the machine's availability and consequently the planned production schedule. The objective of this paper is to propose a modified genetic algorithm approach to deal with DS models with maintenance consideration, aiming to minimize the makespan of the jobs. Its optimization performance is compared with other existing approaches to demonstrate its reliability. The paper also tests how the relationship between maintenance repair time and machine age influences the performance of scheduling maintenance during DS in the studied models.

11.
To optimize the bi-objective distributed flexible job-shop scheduling problem that considers both makespan and machine energy consumption, an improved multi-objective squirrel search algorithm is proposed. A conversion mechanism based on an ascending-order ranking rule is introduced to convert between squirrel position vectors and scheduling solutions, and a decoding strategy from semi-active to active schedules is designed to exploit machine idle time. Three population initialization strategies are designed for the different optimization objectives. A dynamic predator strategy is also proposed to better balance the algorithm's global exploration and local exploitation, and four neighborhood search strategies are designed to increase population diversity. Experimental results on 20 instances show that the improved algorithm obtains solutions of better quality and diversity, demonstrating that it can effectively solve the distributed energy-efficient flexible scheduling problem.
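The ascending-order conversion mechanism mentioned above is essentially a ranked-order-value decoding: a continuous position vector is turned into a discrete operation sequence by ranking its components. The sketch below shows this decoding for a toy operation-based encoding; the job/operation mapping is an assumption, and the rest of the algorithm (initialization, predator strategy, neighborhood search) is not shown.

```python
# Sketch of ascending-order (ranked-order-value) decoding: a real-valued
# position vector, as used by swarm algorithms such as squirrel search, is
# mapped to an operation-based permutation by ranking its components.
import numpy as np

def position_to_sequence(position, n_jobs, ops_per_job):
    """Map a real-valued vector to an operation sequence.

    The k-th smallest component receives rank k; rank r is mapped to job
    r // ops_per_job, so each job appears exactly ops_per_job times.
    """
    assert len(position) == n_jobs * ops_per_job
    ranks = np.argsort(position)            # ascending-order ranking rule
    sequence = np.empty(len(position), dtype=int)
    sequence[ranks] = np.arange(len(position))
    return [int(r // ops_per_job) for r in sequence]

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    pos = rng.random(6)                     # 3 jobs x 2 operations each
    print(pos.round(2), position_to_sequence(pos, n_jobs=3, ops_per_job=2))
```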

12.
To address the inability of existing real-time scheduling algorithms to adapt to dynamic security requirements, a security-driven scheduling model is constructed. The model describes the dynamic security requirements of a real-time system in terms of three aspects, namely system security level, system security services, and task security policies, and a security-driven real-time task scheduler framework is designed. Based on this model and framework, a Security Driven Scheduling Algorithm (SDSA) is proposed. SDSA performs a global schedulability check on newly arrived tasks and assigns schedulable tasks to suitable processors. The security policies of the real-time tasks already assigned to each processor are dynamically adjusted according to the system security level, so as to achieve an optimal balance between security and schedulability, and the tasks are scheduled with a preemptive priority policy. Simulation results show that, compared with similar algorithms, SDSA performs better in adapting to dynamic system security requirements, in the schedulability of critical tasks, and in safety-protection capability.

13.
To improve the performance of processing dependent tasks in heterogeneous distributed environments, a heuristic task scheduling algorithm based on critical tasks and a processor selection parameter (HCNPSV) is proposed. Combining the ideas of list scheduling and task-duplication scheduling, the algorithm improves the computation of critical tasks and builds the scheduling sequence by ordering tasks first by whether they are critical, then by decreasing upward-rank weight, and then by increasing number of related tasks. In the resource selection phase it jointly considers a task's earliest finish time and its shortest distance to the exit node, and finally schedules each task onto the resource with the smallest processor selection parameter. Experimental results show that HCNPSV effectively improves the scheduling performance of the system.

14.
With the development of information technology, the functional scale of industrial embedded systems has grown rapidly, greatly increasing hardware cost, which must be reduced to improve profit. At the same time, to satisfy the system's functional-safety constraints, the problem of jointly scheduling tasks and messages urgently needs to be solved. Taking hardware cost reduction as the goal, a hardware cost reduction scheme is designed; the timing constraints among task-to-processor mappings, tasks and tasks, and tasks and messages are defined; and an integer linear programming-based hardware cost redu…

15.
Data-intensive Grid applications need access to large data sets that may each be replicated on different resources. Minimizing the overhead of transferring these data sets to the resources where the applications are executed requires that appropriate computational and data resources be selected. In this paper, we consider the problem of scheduling an application composed of a set of independent tasks, each of which requires multiple data sets that are each replicated on multiple resources. We break this problem into two parts: one, to match each task (or job) to one compute resource for executing the job and one storage resource for accessing each data set required by the job; and two, to assign the set of tasks to the selected resources. We model the first part as an instance of the well-known Set Covering Problem (SCP) and apply a known heuristic for SCP to match jobs to resources. The second part is tackled by extending the existing MinMin and Sufferage algorithms to schedule the set of distributed data-intensive tasks. Through simulation, we experimentally compare the SCP-based matching heuristic to others in conjunction with the task scheduling algorithms and present the results.
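The MinMin heuristic that the paper extends is simple enough to sketch: repeatedly compute each unscheduled task's minimum completion time over all resources and schedule the task whose minimum is smallest. The sketch below shows plain MinMin on an ETC (expected time to compute) matrix and omits the data-set/replica dimension handled by the paper's SCP-based matching; the toy data are assumptions.

```python
# Sketch of the classic MinMin heuristic: pick the task whose best (minimum)
# completion time over all resources is smallest, place it there, repeat.
def min_min(etc, n_resources):
    """etc[t][r] = estimated time to compute task t on resource r."""
    ready = [0.0] * n_resources           # time each resource becomes free
    unscheduled = set(range(len(etc)))
    schedule = {}
    while unscheduled:
        # For each task, its best (minimum) completion time and resource.
        best_per_task = {
            t: min(((ready[r] + etc[t][r]), r) for r in range(n_resources))
            for t in unscheduled
        }
        # Pick the task whose best completion time is smallest (min of mins).
        task = min(best_per_task, key=lambda t: best_per_task[t][0])
        finish, res = best_per_task[task]
        schedule[task] = (res, finish)
        ready[res] = finish
        unscheduled.remove(task)
    return schedule

if __name__ == "__main__":
    etc = [[3, 5], [2, 4], [6, 1], [4, 4]]   # 4 tasks x 2 resources (toy data)
    print(min_min(etc, n_resources=2))
```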

16.
徐晓飞 《计算机应用》2006,26(8):1788-1790
To improve the communication efficiency of high-speed networks, a distributed scheduling approach (DSA), carried out at each output port of a VOQ switch, is proposed. The DSA algorithm directly supports scheduling of variable-length packets, overcoming the limitation of traditional cell switching, which can only schedule fixed-length cells, and reduces the implementation complexity of the switch. Simulation results show that, under various traffic loads, DSA achieves better scheduling performance than cell-based scheduling algorithms.

17.
Distributed manufacturing takes place in a multi-factory environment including several factories, which may be geographically distributed in different locations, or in a multi-cell environment including several independent manufacturing cells located in the same plant. Each factory/cell is capable of manufacturing a variety of product types. An important issue in dealing with production in this decentralized manner is the scheduling of manufacturing operations of products (jobs) in the distributed manufacturing system. In this paper, we study the distributed and flexible job-shop scheduling problem (DFJSP), which involves the scheduling of jobs (products) in a distributed manufacturing environment, under the assumption that the shop floor of each factory/cell is configured as a flexible job shop. A fast heuristic algorithm based on a constructive procedure is developed to obtain good-quality schedules very quickly. The algorithm is tested on benchmark instances from the literature in order to evaluate its performance. Computational results show that, despite its simplicity, the proposed heuristic is computationally efficient and promising for practical problems.

18.
We have proposed speculative locking (SL) protocols to improve the performance of distributed database systems (DDBSs) by trading extra processing resources. In SL, a transaction releases the lock on the data object whenever it produces the corresponding after-image during its execution. By accessing both before and after-images, the waiting transaction carries out speculative executions and retains one execution based on the termination (commit or abort) mode of the preceding transactions. By carrying out multiple executions for a transaction, SL increases parallelism without violating serializability criteria. Under the naive version of SL, the number of speculative executions of the transaction explodes with data contention. By exploiting the fact that a submitted transaction is more likely to commit than abort, we propose SL variants that process transactions efficiently by significantly reducing the number of speculative executions. The simulation results indicate that even with manageable extra resources, these variants significantly improve the performance over two-phase locking in DDBS environments where transactions spend longer time for processing and transaction-aborts occur frequently.

19.
Today, almost everyone is connected to the Internet and uses different Cloud solutions to store, deliver and process data. Cloud computing assembles large networks of virtualized services such as hardware and software resources. The new era in which ICT has penetrated almost all domains (healthcare, aged-care, social assistance, surveillance, education, etc.) creates the need for new multimedia content-driven applications. These applications generate huge amounts of data and require gathering, processing and aggregation in a fault-tolerant, reliable and secure heterogeneous distributed system created by a mixture of Cloud systems (public/private), mobile device networks, desktop-based clusters, etc. In this context, dynamic resource provisioning for Big Data application scheduling has become a challenge in modern systems. We propose a resource-aware hybrid scheduling algorithm for different types of applications: batch jobs and workflows. The proposed algorithm hierarchically clusters the available resources into groups in the allocation phase. Task execution is performed in two phases: in the first, tasks are assigned to groups of resources, and in the second, a classical scheduling algorithm is used within each group. The proposed algorithm is suitable for Heterogeneous Distributed Computing, especially for modern High-Performance Computing (HPC) systems in which applications are modeled with various requirements (both I/O- and computationally intensive), with an emphasis on data from multimedia applications. We evaluate its performance in a realistic CloudSim setting with respect to load balancing, cost savings, dependency assurance for workflows and computational efficiency, and investigate how these performance metrics are computed at runtime.

20.
This work studies and proposes a dynamic task-to-processor allocation algorithm for parallel programs expressed as distributed Kahn process networks running on multiprocessor clusters. Because of the undecidability of Kahn process networks, static job scheduling algorithms are not applicable, while dynamic load-balancing strategies that ignore the explicit data dependencies are highly random and often cause unnecessary process migration. Based on discrete event sequences generated dynamically at run time, the execution efficiency (processor utilization) of the Kahn process network under different allocation schemes is predicted, and the optimal dynamic allocation scheme is sought iteratively; simulation results are good.
