Similar Documents
A total of 20 similar documents were found.
1.
In current networks, packet losses can occur if routers do not provide sufficiently large buffers. This paper studies how much buffer space a router must provide to eliminate packet losses. We assume a network router has m incoming queues, each corresponding to a single traffic stream, and must decide on-line, at each step, from which queue to take the next packet to send out. To avoid packet losses with a small amount of buffer space, the maximum queue length must be kept low over the entire scheduling period. We call this new on-line problem the balanced scheduling problem (BSP). Using competitive analysis, we measure the power of on-line scheduling algorithms to prevent packet losses. We show that a simple greedy algorithm is O(log m)-competitive, which is asymptotically optimal, while Round-Robin scheduling is not better than m-competitive, as in fact is any deterministic on-line algorithm for BSP. We also give a polynomial-time algorithm for solving off-line BSP optimally, and we study another on-line balancing problem that tries to balance the delay among the m traffic streams.
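A minimal sketch in Python of the greedy rule this abstract analyzes — always serve a currently longest queue; the per-step input format and the function name are illustrative assumptions, not taken from the paper.

    def greedy_balanced_schedule(arrival_schedule, m):
        # arrival_schedule[t] lists the queues that each receive one packet at
        # step t (assumed input format).  Returns the largest queue length ever
        # reached, i.e. the per-queue buffer size needed to avoid packet loss.
        queues = [0] * m
        max_len = 0
        for arrivals in arrival_schedule:
            for q in arrivals:                      # packets arrive
                queues[q] += 1
            max_len = max(max_len, max(queues))
            longest = max(range(m), key=lambda i: queues[i])
            if queues[longest] > 0:                 # serve a longest queue
                queues[longest] -= 1
        return max_len

For example, greedy_balanced_schedule([[0, 1, 2], [0, 1], [0]], 3) returns 2, the peak queue length reached under this rule.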

2.
The delivery of latency-sensitive packets is a crucial issue in real-time applications of communication networks. Such packets often have a firm deadline, and a packet becomes useless if it arrives after its deadline. The deadline, however, applies only to the packet’s journey through the entire network; individual routers along the packet’s route face a more flexible deadline. We study policies for admitting latency-sensitive packets at a router. Each packet is tagged with a value. A packet waiting at a router loses value over time as its probability of arriving at its destination on time decreases. The router is modeled as a non-preemptive queue, and its objective is to maximize the total value of the forwarded packets. When a router receives a packet, it must either accept it (and delay future packets) or reject it immediately. The best policy depends on the set of values that a packet can take. We consider three natural sets: an unrestricted model, a real-valued model, where any value over 1 is allowed, and an integral-valued model. For the unrestricted model, we prove that no algorithm achieves a constant competitive ratio. For the real-valued model, we give a randomized 4-competitive algorithm and a matching lower bound (up to low-order terms). We also provide a deterministic lower bound of \(\phi^3 - \varepsilon \approx 4.236\), almost matching the previously known 4.24-competitive algorithm. For the integral-valued model, we describe a deterministic 4-competitive algorithm and prove that this is tight even for randomized algorithms (up to low-order terms).

3.
Bar-Noy, Freund, Landa, Naor. Algorithmica, 2008, 36(3): 225-247
Abstract. Consider the following problem. A switch connecting n input channels to a single output channel must deliver all incoming messages through this channel. Messages are composed of packets, and in each time slot the switch can deliver a single packet from one of the input queues to the output channel. In order to prevent packet loss, a buffer is maintained for each input channel. The goal of a switching policy is to minimize the maximum buffer size. The setting is on-line; decisions must be made based on the current state without knowledge of future events. This general scenario models multiplexing tasks in various systems such as communication networks, cable modem systems, and traffic control. Traditionally, researchers analyzed the performance of a given policy assuming some distribution on the arrival rates of messages at the input queues, or assuming that the service rate is at least the aggregate of all the input rates. We use competitive analysis, avoiding any prior assumptions on the input. We show O(log n)-competitive switching policies for the problem and demonstrate matching lower bounds.

4.
With the increase of internet protocol (IP) packets, the performance of routers has become an important issue in internetworking. In this paper we examine the matching algorithm in a gigabit router which has input queues with virtual output queueing. Dynamic queue scheduling is also proposed to reduce the packet delay and packet loss probability. Port partitioning is employed to reduce the computational burden of the scheduler in a switch which matches the input and output ports for fast packet switching. Each port set is divided into two groups such that the matching algorithm is implemented within each pair of groups in parallel. The matching is performed by exchanging the pair of groups at every time slot. Two algorithms, maximal weight matching by port partitioning (MPP) and modified maximal weight matching by port partitioning (MMPP), are presented. In dynamic queue scheduling, a popup decision rule is applied to each delay-critical packet to reduce both the delay of delay-critical packets and the loss probability of loss-critical packets. Computational results show that MMPP has the lowest delay and requires the least buffer size. The throughput is shown to be linear in the packet arrival rate, which can be achieved only under a highly efficient matching algorithm. The dynamic queue scheduling is shown to be highly effective when the occupancy of the input buffer is relatively high.
Scope and purpose: To cope with the increasing internet traffic, it is necessary to improve the performance of routers. To accelerate the switching from input ports to output ports in the router, partitioning of ports and dynamic queueing are proposed. Input and output ports are partitioned into two groups A/B and a/b, respectively. The matching for packet switching is performed between the group pairs (A, a) and (B, b) in parallel at one time slot and between (A, b) and (B, a) at the next time slot. Dynamic queueing is proposed at each input port to reduce the packet delay and packet loss probability by applying a popup decision rule to each delay-critical packet. The partitioning of ports is shown to be highly effective in terms of delay, required buffer size and throughput. The dynamic queueing also demonstrates good performance when the traffic volume is high.
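The alternating group pairing described in the scope-and-purpose paragraph can be sketched as follows; the weight matrix, the simple per-pair greedy matching and the function name are illustrative stand-ins, since the abstract does not fully specify the MPP/MMPP rules.

    def partitioned_matching(weights, t):
        # weights[i][j]: weight (e.g. VOQ occupancy) of input i toward output j.
        # Inputs and outputs are each split into two halves; at even slots the
        # pairs (A, a) and (B, b) are matched in parallel, at odd slots (A, b)
        # and (B, a).  Inside each pair a simple greedy maximal-weight matching
        # stands in for the MPP/MMPP rules.
        n = len(weights)
        half = n // 2
        A, B = range(0, half), range(half, n)
        a, b = range(0, half), range(half, n)
        pairs = [(A, a), (B, b)] if t % 2 == 0 else [(A, b), (B, a)]
        matching = []
        for ins, outs in pairs:
            used_in, used_out = set(), set()
            edges = sorted(((weights[i][j], i, j) for i in ins for j in outs),
                           reverse=True)
            for w, i, j in edges:
                if w > 0 and i not in used_in and j not in used_out:
                    matching.append((i, j))
                    used_in.add(i)
                    used_out.add(j)
        return matching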

5.
We study a basic problem in Multi-Queue switches. A switch connects m input ports to a single output port. Each input port is equipped with an incoming FIFO queue with bounded capacity B. A switch serves its input queues by transmitting packets arriving at these queues, one packet per time unit. Since the arrival rate can be higher than the transmission rate and each queue has limited capacity, packet loss may occur as a result of insufficient queue space. The goal is to maximize the number of transmitted packets. This general scenario models most current networks (e.g. IP networks), which only support a “best effort” service in which all packet streams are treated equally. A 2-competitive algorithm for this problem was designed in [5] for arbitrary B. Recently, a (17/9 ≈ 1.89)-competitive algorithm was presented for B > 1 in [3]. Our main result in this paper shows that for B that is not too small our algorithm can do better than 1.89, and approaches a competitive ratio of e/(e − 1) ≈ 1.58. The research of Yossi Azar was supported in part by the Israeli Ministry of Industry and Trade and by the Israel Science Foundation.
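A small simulation of this bounded-buffer model under the natural greedy baseline (always transmit from a longest non-empty queue); the input format and tie-breaking are assumptions for illustration, and the sketch is not the improved algorithm of the paper.

    def greedy_throughput(arrival_schedule, m, B):
        # arrival_schedule[t] lists the queues that each receive one packet at
        # step t (assumed format).  Every queue holds at most B packets; in each
        # time unit one packet is sent from a longest non-empty queue.
        queues = [0] * m
        sent = lost = 0
        for arrivals in arrival_schedule:
            for q in arrivals:
                if queues[q] < B:
                    queues[q] += 1                  # packet admitted
                else:
                    lost += 1                       # buffer full: packet lost
            longest = max(range(m), key=lambda i: queues[i])
            if queues[longest] > 0:
                queues[longest] -= 1
                sent += 1                           # one packet transmitted
        return sent, lost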

6.
We study the on-line Steiner tree problem on a general metric space. We show that the greedy on-line algorithm is O(log((d/z)s))-competitive, where s is the number of regular nodes, d is the maximum metric distance between any two revealed nodes, and z is the optimal off-line cost. Our results refine the previously known bound [9] and show that Algorithm SB of Bartal et al. [3] for the on-line file allocation problem is O(log log N)-competitive on an N-node hypercube or butterfly network. A lower bound of Ω(log((d/z)s)) is shown to hold. We further consider the on-line generalized Steiner problem on a general metric space. We show that a class of lazy and greedy deterministic on-line algorithms are O(k · log k)-competitive and no on-line algorithm is better than Ω(log k)-competitive, where k is the number of distinct nodes that appear in the request sequence. For the on-line Steiner problem on a directed graph, it is shown that no deterministic on-line algorithm is better than s-competitive and the greedy on-line algorithm is s-competitive. A preliminary version of this paper appeared in the Proceedings of the Workshop on Algorithms and Data Structures, 1993, Montréal. The first author's research was partially supported by NSF Grant CCR-9009753, whilst that of the second author was partially supported by NSF Grant DDM-8909660 and a University Fellowship from the Graduate School, Yale University.
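A sketch of the greedy on-line Steiner rule analyzed above: each revealed terminal is attached to the closest node already in the tree. The distance structure and the function signature are assumptions for illustration.

    def greedy_online_steiner(dist, requests, root):
        # dist[u][v]: metric distance between nodes u and v (assumed input
        # format).  Each revealed terminal is connected to the closest node
        # already in the tree.  Returns the total cost and the chosen edges.
        tree = {root}
        cost = 0.0
        edges = []
        for v in requests:
            if v in tree:
                continue
            nearest = min(tree, key=lambda u: dist[u][v])
            cost += dist[nearest][v]
            edges.append((nearest, v))
            tree.add(v)
        return cost, edges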

7.
In this paper, we consider the scheduling problem on identical parallel machines, in which jobs arrive over time and preemption is not allowed. The goal is to minimize the total completion time. Following the idea of the Delayed-SPT algorithm proposed by Hoogeveen and Vestjens [Optimal on-line algorithms for single-machine scheduling. In: Proceedings 5th international conference on integer programming and combinatorial optimization (IPCO). Lecture notes in computer science, vol. 1084. Berlin: Springer; 1996. p. 404–14], we give an on-line algorithm for the scheduling problem on m identical parallel machines. We show that this algorithm is 2-competitive and that the bound is tight.
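The single-machine Delayed-SPT idea that this algorithm builds on can be sketched as follows, under assumed input conventions; this is not the paper's m-machine algorithm.

    import heapq

    def delayed_spt_single_machine(jobs):
        # jobs: list of (release_time, processing_time) pairs.  Among available
        # jobs the shortest is chosen, but it is started only once the current
        # time is at least its processing time; otherwise the machine waits for
        # that moment or for the next release, whichever comes first.
        jobs = sorted(jobs)                         # by release time
        available = []                              # heap of processing times
        t, i, total = 0.0, 0, 0.0
        n = len(jobs)
        while i < n or available:
            while i < n and jobs[i][0] <= t:
                heapq.heappush(available, jobs[i][1])
                i += 1
            if not available:
                t = jobs[i][0]
                continue
            p = available[0]
            if t < p:                               # deliberately delay the start
                next_release = jobs[i][0] if i < n else float("inf")
                t = min(p, next_release)
                continue
            heapq.heappop(available)
            t += p
            total += t                              # completion time of this job
        return total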

8.
Abstract. In this paper we deal with competitive local on-line algorithms for non-preemptive channel allocation in mobile networks. The signal interferences in a network are modeled using an interference graph G. We prove that the greedy on-line algorithm is Δ-competitive, where Δ is the maximum degree of G. We employ the "classify and randomly select" paradigm [5], [17], and give a 5-competitive randomized algorithm for the case of planar interference graphs, a 2-competitive randomized algorithm for trees, and a (2c)-competitive randomized algorithm for graphs of arboricity c. We also show that the problem of call control in mobile networks with multiple available frequencies reduces to the problem of call control in mobile networks with a single frequency. Using this reduction, we present on-line algorithms for general networks with a single frequency. We give a local on-line algorithm which is (α(δ + 1 + α)/(1/2 + α)²)-competitive, where α is the independence number of G and δ is the average degree of G. The above results hold in the case when the duration of each request is infinite and the benefit the algorithm gains by accepting each request is equal to one. They are extended to handle requests of arbitrary durations and arbitrary benefits.
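A sketch of the "classify and randomly select" paradigm for the single-frequency case: assuming a proper coloring of the interference graph is given, one color class is chosen at random and calls are accepted greedily only in cells of that class; the request format is an assumption for illustration.

    import random

    def classify_and_randomly_select(requests, coloring, num_classes):
        # coloring maps each cell (vertex of the interference graph) to one of
        # num_classes classes of a proper coloring, assumed to be given.  One
        # class is picked at random up front; only calls arriving in cells of
        # that class are considered, and a call is accepted greedily if its
        # cell has no accepted call yet (single frequency, infinite durations).
        chosen = random.randrange(num_classes)
        busy = set()
        accepted = []
        for call_id, cell in requests:
            if coloring[cell] == chosen and cell not in busy:
                busy.add(cell)
                accepted.append(call_id)
        return accepted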

9.
This note deals with the scheduling problem of minimizing the sum of job completion times in a system with n jobs and a single machine. We investigate the on-line version of the problem where every job has to be scheduled immediately and irrevocably as soon as it arrives, without any information on the later arriving jobs. We prove that for any sufficiently smooth, non-negative, non-decreasing function f(n) there exists an O(f(n))-competitive on-line algorithm for minimizing the total completion time if and only if the infinite sum converges. Received: 6 May 1997 / 3 February 1999

10.
This paper applies a matrix-analytic approach to the examination of the loss behavior of a space priority queue. In addition to the evaluation of the long-term high-priority and low-priority packet loss probabilities, we examine the bursty nature of packet losses by means of conditional statistics with respect to critical and non-critical periods that occur in an alternating manner. The critical period corresponds to having more than a certain number of packets in the buffer; non-critical corresponds to the opposite. Hence there is a threshold buffer level that splits the state space into two. By such a state-space decomposition, two hypothesized Markov chains are devised to describe the alternating renewal process. The distributions of various absorbing times in the two hypothesized Markov chains are derived to compute the average durations of the two periods and the conditional high-priority packet loss probability encountered during a critical period. These performance measures greatly assist the space priority mechanism in determining a proper threshold. The overall complexity of computing these performance measures is of the order O(K²m₁³m₂³), where K is the buffer capacity, and m₁ and m₂ are the numbers of phases of the underlying Markovian structures for the high-priority and low-priority packet arrival processes, respectively. Thus the results obtained are computationally tractable, and numerical results show that, by choosing a proper threshold, a space priority queue not only can maintain the quality of service for the high-priority traffic but also can provide near-optimum utilization of the capacity for the low-priority traffic.
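The threshold rule around which the analysis above is built can be stated compactly; the function below is only the per-packet admission decision (a sketch), not the matrix-analytic evaluation itself.

    def space_priority_admit(occupancy, capacity, threshold, high_priority):
        # A high-priority packet is admitted whenever the buffer is not full; a
        # low-priority packet is admitted only while the occupancy is below the
        # threshold that separates critical from non-critical periods.
        if high_priority:
            return occupancy < capacity
        return occupancy < threshold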

11.
We consider the online scheduling problem with m−1 (m ≥ 2) uniform machines, each with a processing speed of 1, and one machine with a speed of s, 1 ≤ s ≤ 2, to minimize the makespan. The well-known list scheduling (LS) algorithm has a worst-case bound given in [Y. Cho, S. Sahni, Bounds for list schedules on uniform processors, SIAM J. Comput. 9 (1980) 91-103]. An algorithm with a better competitive ratio was proposed in [R. Li, L. Shi, An on-line algorithm for some uniform processor scheduling, SIAM J. Comput. 27 (1998) 414-422]; it has a worst-case bound of 2.8795 for large m and s = 2. In this note we present a 2.45-competitive algorithm for m ≥ 4 and any s, 1 ≤ s ≤ 2.
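For reference, the LS rule cited above assigns each incoming job to the machine on which it would finish earliest; a minimal sketch under assumed inputs follows. The paper's 2.45-competitive algorithm refines this rule; the sketch only shows the LS baseline.

    def list_scheduling_uniform(jobs, speeds):
        # jobs: processing requirements, presented one by one.  speeds: machine
        # speeds (here m-1 machines of speed 1 and one of speed s, but the rule
        # works for any speed vector).  Each job is placed on the machine on
        # which it would finish earliest; the resulting makespan is returned.
        loads = [0.0] * len(speeds)
        for p in jobs:
            i = min(range(len(speeds)), key=lambda k: loads[k] + p / speeds[k])
            loads[i] += p / speeds[i]
        return max(loads)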

12.
Phillips, Stein, Torng, Wein. Algorithmica, 2008, 32(2): 163-200
Abstract. We consider two fundamental problems in dynamic scheduling: scheduling to meet deadlines in a preemptive multiprocessor setting, and scheduling to provide good response time in a number of scheduling environments. When viewed from the perspective of traditional worst-case analysis, no good on-line algorithms exist for these problems, and for some variants no good off-line algorithms exist unless P = NP. We study these problems using a relaxed notion of competitive analysis, introduced by Kalyanasundaram and Pruhs, in which the on-line algorithm is allowed more resources than the optimal off-line algorithm to which it is compared. Using this approach, we establish that several well-known on-line algorithms, that have poor performance from an absolute worst-case perspective, are optimal for the problems in question when allowed moderately more resources. For optimization of average flow time, these are the first results of any sort, for any NP-hard version of the problem, that indicate that it might be possible to design good approximation algorithms.

13.
The concept of Quality of Service (QoS) networks has gained growing attention recently, as the traffic volume in the Internet constantly increases and QoS guarantees are essential to ensure proper operation of most communication-based applications. A QoS switch serves m incoming queues by transmitting packets arriving at these queues through one output port, one packet per time step. Each packet is marked with a value indicating its priority in the network. Since the queues have bounded capacities and the rate of arriving packets can be much higher than the transmission rate, packets can be lost due to insufficient queue space. The goal is to maximize the total value of transmitted packets. This problem encapsulates two dependent questions: buffer management, namely which packets to admit into the queues, and scheduling, i.e. which queue to use for transmission in each time step. We use competitive analysis to study online switch performance in QoS-based networks. Specifically, we provide a novel generic technique that decouples the buffer management and scheduling problems. Our technique transforms any single-queue buffer management policy (preemptive or non-preemptive) into a scheduling and buffer management algorithm for our general m-queue model, whose competitive ratio is at most twice the competitive ratio of the given buffer management policy. We use our technique to derive concrete algorithms for the general preemptive and non-preemptive cases, as well as for the interesting special cases of the 2-value model and the unit-value model. We also provide a 1.58-competitive randomized algorithm for the unit-value case. This case is interesting in itself since most current networks (e.g. IP networks) do not yet incorporate full QoS capabilities and treat all packets equally.
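A rough illustration of the decoupling idea: reuse a single-queue buffer-management policy independently in each of the m queues and add a scheduler on top. The policy interface and the scheduler chosen here (send the most valuable head-of-line packet) are assumptions for illustration, not the paper's exact construction.

    class DecoupledSwitch:
        # Per-queue admission is delegated to a user-supplied single-queue
        # buffer-management policy; a scheduler then picks which queue to serve.
        # The policy interface and the scheduling rule below are illustrative.
        def __init__(self, m, capacity, policy_factory):
            self.queues = [[] for _ in range(m)]
            self.capacity = capacity
            self.policies = [policy_factory() for _ in range(m)]

        def arrive(self, q, value):
            # admit(queue, value, capacity) -> bool is the assumed policy API
            if self.policies[q].admit(self.queues[q], value, self.capacity):
                self.queues[q].append(value)

        def transmit(self):
            nonempty = [q for q in self.queues if q]
            if not nonempty:
                return None
            best = max(nonempty, key=lambda q: q[0])   # most valuable head packet
            return best.pop(0)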

14.
We present the first experimental study of online packet buffering algorithms for network switches. We consider a basic scenario in which m queues of size B have to be maintained so as to maximize the packet throughput. For this model, various online algorithms with competitive factors ranging between 2 and 1.5 have been developed in the literature. We first develop a new 2-competitive online algorithm, called HSFOD, which is especially designed to perform well under real-world conditions. In our experimental study we have implemented all the proposed algorithms, including HSFOD, and tested them on packet traces from benchmark libraries. We have evaluated the experimentally observed competitiveness, the running times, the memory requirements and the actual packet throughput of the strategies. The tests were executed for varying values of m and B as well as varying switch speeds. The results show that greedy-like strategies and HSFOD perform best in practice.

15.
The buffered crossbar switch architecture has recently gained considerable research attention. In such a switch, besides normal input and output queues, a small buffer is associated with each crosspoint. Due to the introduction of crossbar buffers, output and input dependency is eliminated, and the scheduling process is greatly simplified. We analyze the performance of switch policies by means of competitive analysis, where a uniform guarantee is provided for all traffic patterns. We assume that each packet has an intrinsic value designating its priority, and the goal of the switch policy is to maximize the weighted throughput of the switch. We consider FIFO queueing buffering policies, which are deployed by the majority of today’s Internet routers. In packet-mode scheduling, a packet is divided into a number of unit-length cells and the scheduling policy is constrained to schedule all the cells contiguously, which removes reassembly overhead and improves Quality-of-Service. For the case of variable-length packets with uniform value density (Best Effort model), where the packet value is proportional to its size, we present a packet-mode greedy switch policy that is 7-competitive. For the case of unit-size packets with variable values (Differentiated Services model), we propose a β-preemptive (β is a preemption factor) greedy switch policy that achieves a competitive ratio of 6 + 4β + β² + 3/(β − 1). In particular, its competitive ratio is at most 19.95 for the preemption factor β = 1.67. As far as we know, this is the first constant-competitive FIFO policy for this architecture in the case of variable-value packets. In addition, we evaluate the performance of the β-preemptive greedy switch policy by simulations and show that it outperforms other natural switch policies. The presented policies are simple and thus can be efficiently implemented at high speeds. Moreover, our results hold for any value of the internal switch fabric speedup.

16.
This paper presents the derivation of an analytical model for a multi-queue nodes network router, referred to as the multi-queue nodes (mQN) model. In this model, expressions are derived to calculate two performance metrics, namely the queue node and system utilization factors. In order to demonstrate the flexibility and effectiveness of the mQN model in analyzing the performance of an mQN network router, two scenarios are performed. These scenarios investigate the variation of the queue node and system utilization factors against the queue node dropping probability for various system sizes and packet arrival routing probabilities. The performed scenarios demonstrate that the mQN analytical model is more flexible and effective than experimental tests and computer simulations in assessing the performance of an mQN network router.

17.
We study on-line scheduling on parallel batch machines. Jobs arrive over time. A batch processing machine can handle up to B jobs simultaneously. The jobs that are processed together form a batch, and all jobs in a batch start and are completed at the same time. The processing time of a batch is given by the processing time of the longest job in the batch. The objective is to minimize the makespan. We deal with the unbounded model, where B is sufficiently large. We first show that no deterministic on-line algorithm can have a competitive ratio of less than \(1+(\sqrt{m^{2}+4}-m)/2\), where m is the number of parallel batch machines. We then present an on-line algorithm which is best possible for any specific value of m.
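A sketch of the general delay-then-batch flavor of algorithms for the unbounded model, on a single machine and with a free waiting parameter alpha; the specific best-possible rule derived in the paper for m machines is not reproduced here.

    def delayed_batch_schedule(jobs, alpha):
        # jobs: list of (release_time, processing_time).  Whenever the machine
        # is idle, it waits until the current time reaches alpha times the
        # largest processing time among waiting jobs (or until a new job
        # arrives), then starts one batch containing every waiting job.  The
        # batch runs for the longest processing time in it.  Returns the
        # makespan on a single unbounded batch machine.
        jobs = sorted(jobs)
        i, t, n = 0, 0.0, len(jobs)
        waiting, makespan = [], 0.0
        while i < n or waiting:
            while i < n and jobs[i][0] <= t:
                waiting.append(jobs[i][1])
                i += 1
            if not waiting:
                t = jobs[i][0]
                continue
            p_max = max(waiting)
            start = max(t, alpha * p_max)
            if i < n and jobs[i][0] < start:
                t = jobs[i][0]                      # let the new arrival join
                continue
            t = start + p_max
            makespan = t
            waiting = []
        return makespan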

18.
This article first presents a comparative study of the queue scheduling algorithms that support QoS in current packet-switched networks, analyzing their performance metrics and technical characteristics. Taking the line-card-level and switch-level queue scheduling design in Internet core routers as an example, it then proposes, from a control-theoretic perspective, a distributed weighted round-robin scheduling control algorithm supporting QoS, and carries out simulation experiments on the switching network; the simulation results, with a throughput of 96%, show that the proposed algorithm is effective. Finally, the article argues that in practical applications different scheduling control algorithms should be designed for different situations, so as to strike a trade-off among complexity, fairness, responsiveness and effectiveness and thereby improve the overall performance of packet-switched networks.
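Since the article is built around a QoS-aware weighted round-robin scheduler, a generic WRR loop is sketched below; the per-class weights and the queue representation are illustrative assumptions, not the article's distributed control algorithm.

    from collections import deque

    def weighted_round_robin(queues, weights, slots):
        # queues: one deque of packets per traffic class; weights[i] is the
        # number of packets class i may send in each round.  Generic WRR loop.
        sent = []
        while slots > 0 and any(queues):
            for q, w in zip(queues, weights):
                for _ in range(w):
                    if slots == 0 or not q:
                        break
                    sent.append(q.popleft())
                    slots -= 1
        return sent

    # Example: two classes sharing the link 2:1 per round.
    # weighted_round_robin([deque("AAAA"), deque("BBBB")], [2, 1], 6)
    # -> ['A', 'A', 'B', 'A', 'A', 'B']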

19.
Abstract. We investigate a variant of on-line edge-coloring in which there is a fixed number of colors available and the aim is to color as many edges as possible. We prove upper and lower bounds on the performance of different classes of algorithms for the problem. Moreover, we determine the performance of two specific algorithms, First-Fit and Next-Fit. Algorithms that never reject edges that they are able to color are called fair algorithms. We consider the four combinations of fair/not fair and deterministic/randomized. We show that the competitive ratio of deterministic fair algorithms can vary only between approximately 0.4641 and 1/2, and that Next-Fit is worst possible among fair algorithms. Moreover, we show that no algorithm is better than 4/7-competitive. If the graphs are all k-colorable, any fair algorithm is at least 1/2-competitive. Again, this performance is matched by Next-Fit, while the competitive ratio of First-Fit is shown to be k/(2k−1), which is significantly better as long as k is not too large.
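A sketch of the First-Fit rule studied above, with k available colors: each arriving edge gets the lowest-numbered color not yet used at either endpoint, and is rejected only when no such color exists (so the algorithm is fair in the sense defined above). The edge-list input format is an assumption.

    from collections import defaultdict

    def first_fit_edge_coloring(edges, k):
        # edges arrive online as (u, v) pairs; each edge receives the lowest
        # numbered of the k colors not yet used at either endpoint, and is
        # rejected only when no such color exists (a fair algorithm).
        used = defaultdict(set)                     # vertex -> colors used there
        colored, rejected = [], 0
        for u, v in edges:
            free = [c for c in range(k) if c not in used[u] and c not in used[v]]
            if free:
                c = free[0]
                used[u].add(c)
                used[v].add(c)
                colored.append(((u, v), c))
            else:
                rejected += 1
        return colored, rejected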

20.
We study an on-line parallel job scheduling problem, where jobs arrive one by one. A parallel job may require a number of machines for its processing at the same time. Upon arrival of a job, its processing time and the number of requested machines become known, and it must be scheduled immediately without any knowledge of future jobs. We present a 7-competitive on-line algorithm, which improves the previous upper bound of 12 by Johannes (J. Sched. 9:433–452, 2006). Furthermore, we investigate a special case in which the largest processing time is known beforehand. A preliminary version of this paper appeared in Proceedings of the 11th Colloquium on Structural Information and Communication Complexity (SIROCCO’04, pp. 279-290). Research of D. Ye was supported by NSFC (10601048). Research of G. Zhang was supported by NSFC (60573020).
