Similar Documents
20 similar documents found (search time: 15 ms)
1.
Design of a generalized priority queue manager for ATM switches   (cited by: 1; self-citations: 0; external citations: 1)
Meeting quality of service (QoS) requirements for various services in ATM networks has been very challenging to network designers. Various control techniques at either the call or cell level have been proposed. In this paper, we deal with cell transmission scheduling and discarding at the output buffers of an ATM switch. We propose a generalized priority queue manager (GPQM) that uses per-virtual-connection queueing to support multiple QoS requirements and achieve fairness in both cell transmission and discarding. It achieves the ultimate goal of guaranteeing the QoS requirement for each connection. The GPQM adopts the earliest due date (EDD) and self-clocked fair queueing (SCFQ) schemes for scheduling cell transmission and a new self-calibrating pushout (SCP) scheme for discarding cells. The GPQM's performance in cell loss rate and delay is presented. An implementation architecture for the GPQM is also proposed, which is facilitated by a new VLSI chip called the priority content-addressable memory (PCAM) chip.
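The SCFQ scheduling used by the GPQM can be illustrated with a minimal sketch (illustrative only, not the paper's implementation; flow names and weights are made up): each arriving cell is stamped with a finish tag computed from the system virtual time and its flow's weight, and cells are transmitted in increasing tag order.

```python
import heapq

class SCFQScheduler:
    """Minimal self-clocked fair queueing (SCFQ) sketch.

    Each arriving cell gets a finish tag
    F = max(v, last_finish[flow]) + size / weight[flow],
    where v is the finish tag of the cell most recently served
    (the "self-clocked" virtual time). Cells are served in
    increasing finish-tag order.
    """

    def __init__(self, weights):
        self.weights = weights                     # flow id -> weight
        self.last_finish = {f: 0.0 for f in weights}
        self.v = 0.0                               # system virtual time
        self.heap = []                             # (finish, seq, flow, size)
        self.seq = 0                               # tie-breaker for equal tags

    def enqueue(self, flow, size=1.0):
        start = max(self.v, self.last_finish[flow])
        finish = start + size / self.weights[flow]
        self.last_finish[flow] = finish
        heapq.heappush(self.heap, (finish, self.seq, flow, size))
        self.seq += 1

    def dequeue(self):
        finish, _, flow, _ = heapq.heappop(self.heap)
        self.v = finish                            # self-clocking step
        return flow
```

With weights 2:1, flow A is served roughly twice as often as flow B when both are backlogged.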

2.
The authors propose to control user traffic at two places in an asynchronous transfer mode (ATM) network: at the user-network interface (UNI) by a traffic enforcer, and at the network-node interface (NNI) by a queue manager. The traffic enforcer adopted in this work contains a buffer to delay and reshape the violating cells that do not comply with some agreed-upon traffic parameters, and thus is also called a traffic shaper. The queue manager manages the queued cells in network nodes in such a way that higher priority cells are always served first, low-priority cells are discarded when the queue is full, and any interference between same-priority cells is prevented. Architectures for the traffic shaper and the queue manager are proposed. A key component, called the sequencer chip, has been implemented and tested to realize both architectures. The sequencer chip uses 1.2-μm CMOS technology. It contains about 150 K transistors, has a die size of 7.5 mm×8.3 mm, and is packaged in a 223-pin ceramic pin-grid-array (PGA) carrier.

3.
The self-similar nature of network traffic leads to persistent burstiness of data in the network. To effectively reduce the queueing delay and packet loss caused by traffic bursts, improve the transmission capacity of services at different priorities, and guarantee quality-of-service requirements, a queue-scheduling algorithm, P-DWRR, based on the self-similarity of network traffic is proposed. A dynamic weight-allocation method and a service-quantum update method based on graded predictions of the self-similar traffic level are designed, and the service order of the queues is determined according to service priority and queue waiting time, so as to reduce queueing delay and packet loss. Simulation results show that the P-DWRR algorithm reduces queueing delay, delay jitter, and packet loss rate while satisfying the different service-priority requirements of the network, and that its performance is better than that of DWRR and VDWRR.
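As background for the P-DWRR variant above, plain deficit weighted round robin can be sketched as follows (an illustrative sketch with static quanta; P-DWRR's dynamic weight allocation and quantum updates are not modeled here):

```python
from collections import deque

def dwrr(queues, quanta, rounds):
    """Deficit weighted round robin sketch.

    queues: name -> deque of packet sizes; quanta: name -> per-round
    quantum. Each round, a backlogged queue's deficit counter grows by
    its quantum, and it sends head packets while the deficit covers
    them. Returns the transmission order as (queue, size) pairs.
    """
    deficit = {q: 0 for q in queues}
    sent = []
    for _ in range(rounds):
        for name, q in queues.items():
            if not q:
                deficit[name] = 0          # idle queues keep no credit
                continue
            deficit[name] += quanta[name]
            while q and q[0] <= deficit[name]:
                size = q.popleft()
                deficit[name] -= size
                sent.append((name, size))
    return sent
```

With quanta 200:100 and equal 100-byte packets, the high-weight queue drains twice as fast per round as the low-weight one.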

4.
Algorithms for solving for the cell loss rates in an asynchronous transfer mode (ATM) network using cell loss priorities are presented. With the loss priority scheme, cells of low-priority classes are accepted only if the instantaneous buffer queue length at the cell arrival epoch is below a given threshold. The input is modeled by Markov-modulated Bernoulli processes. The effect of the loss priority scheme on data, voice, and video traffic is investigated.

5.
In asynchronous transfer mode (ATM) switching networks, buffers are required to accommodate traffic fluctuations due to statistical multiplexing. However, cell discarding takes place when the buffer space of a network node is used up during a traffic surge. Though pushout cell discarding was found to achieve fair buffer utilization and good cell loss performance, it is difficult to implement because of the large number of queue length comparisons. We propose quasi-pushout cell discarding which reduces the number of queue length comparisons by employing the concept of quasi-longest queue. Simulation results under bursty and imbalanced traffic conditions show that quasi-pushout can achieve comparable cell loss performance as pushout at a much lower complexity.
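The quasi-longest-queue idea can be sketched as follows (a minimal illustration under assumed semantics, not the paper's design: a shared buffer with per-output queues, where a single comparison per arrival keeps a "quasi-longest" pointer roughly current instead of scanning every queue on each discard):

```python
class QuasiPushoutBuffer:
    """Quasi-pushout sketch for a shared buffer with per-output queues.

    Classic pushout scans all queues for the longest one on every
    discard; quasi-pushout instead maintains a quasi-longest-queue
    pointer updated with one comparison per arrival, accepting that
    the pointer may occasionally be stale.
    """

    def __init__(self, n_queues, capacity):
        self.queues = [[] for _ in range(n_queues)]
        self.capacity = capacity
        self.total = 0
        self.quasi_longest = 0              # index of quasi-longest queue

    def arrive(self, q, cell):
        # One comparison: does the arrival's queue overtake the pointer?
        if len(self.queues[q]) + 1 > len(self.queues[self.quasi_longest]):
            self.quasi_longest = q
        if self.total >= self.capacity:
            victim = self.queues[self.quasi_longest]
            if not victim:                  # stale pointer: rare full rescan
                self.quasi_longest = max(
                    range(len(self.queues)),
                    key=lambda i: len(self.queues[i]))
                victim = self.queues[self.quasi_longest]
            victim.pop(0)                   # push out from that queue's head
            self.total -= 1
        self.queues[q].append(cell)
        self.total += 1
```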

6.
In order to efficiently utilize network resources while still providing satisfactory QoS to both real-time and nonreal-time applications, prioritizing these two types of traffic according to their service requirements becomes necessary. Several slot-oriented transmission priority schemes applicable to the output queue of ATM switches have been proposed. We studied the slot-oriented queueing disciplines that further involve the buffer management of the output queue of ATM switches. A fundamental principle called the separation principle is presented, which asserts that (1) the QoS region (measured by the time-cumulative cell loss for each traffic class) of the efficient disciplines (those providing the best QoS tradeoff between the two types of traffic) can be divided into two mutually exclusive ones by the QoS of a special efficient discipline called R*; and (2) the efficient disciplines may involve either dynamic transmission priority or dynamic enqueueing priority, but not both, depending on which of the two mutually exclusive QoS regions is desired. The QoS region of less time-cumulative nonreal-time cell loss than R* is shown to be approximately linear in the space of the time-cumulative cell loss vector when the real-time traffic is well regulated. Suboptimal but simple disciplines that are functions of only a small set of system parameters are also investigated to achieve less time-cumulative nonreal-time cell loss than R*.

7.
8.
This paper deals with overload control in asynchronous transfer mode (ATM) networks via priority cell discarding mechanisms governed by a set of nested queue fill thresholds. Specifically, we address the problem of finding the optimal set of discarding thresholds, for an arbitrary number of priorities, under two different performance scenarios. In the first scenario, we minimize the expected discarding cost (a performance penalty) for a given offered load using stochastic dynamic programming. In the second scenario, we maximize the offered load subject to constraints on cell loss (discarding) probabilities using an efficient search technique developed specifically for this problem. Our results illustrate that nested threshold discarding systems can perform significantly better under either scenario than a system without discarding priorities. We characterize the performance advantage over ranges of system parameter values and briefly study the use of sub-optimal, non-adaptive thresholds.
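The nested-threshold discarding rule itself is simple to state in code (a sketch with illustrative threshold values, not the optimized thresholds computed in the paper):

```python
def admit(queue_len, priority, thresholds):
    """Nested-threshold discarding sketch.

    A cell of priority p (0 = highest) is accepted only while the
    current queue length is below thresholds[p]. Thresholds are
    nested: the highest priority may fill the whole buffer, while
    lower priorities are cut off progressively earlier.
    """
    return queue_len < thresholds[priority]
```

For example, with thresholds [10, 6, 3] on a 10-cell buffer, priority-2 cells are rejected once 3 cells are queued, while priority-0 cells are accepted until the buffer is full.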

9.
As the number of nodes in wireless network environments grows, network delay has become a pressing problem. To improve quality of service (QoS) and raise throughput, this paper proposes a priority-based queueing delay model: each packet is assigned a preset priority that distinguishes its importance and real-time requirements, the queue in each AP device is divided into three types by priority, and packets are placed into the matching queue for transmission, effectively reducing the queueing delay at the sender. Analysis and simulation show that, compared with a network of nodes without priority queues, this scheme not only greatly reduces the delay of individual nodes but also significantly lowers the average delay of the whole network, markedly improving overall network performance.

10.
We propose a simple first-in first-out (FIFO)-based service protocol which is appropriate for a multimedia ATM satellite system. The main area of interest is to provide real-time traffic with upper bounds on the end-to-end delay, jitter, and loss experienced at various service queues within a satellite network. Various service protocols, each based on a common underlying strategy, are developed in light of the requirements and limitations imposed at each of the satellite's subsystems. These subsystems include the uplink (UL) earth station (ES) service queue, on-board processing (OBP) queues, and the downlink (DL) ES service queue feeding into a wireline ATM network or directly to an end-user application. Numerous network simulation results demonstrate the tractability, efficiency, and versatility of the underlying service discipline. Key features of our strategy are its algorithmic and architectural simplicity, its non-ad-hoc scheduling approach, and its unified treatment of all real-time streams at all service queues. In addition, the delay and jitter bounds are uncoupled. In this way, end-to-end jitter can be tightly controlled even if medium access requires long indeterminate waiting durations.

11.
In this paper, we propose queueing strategies employing the service interval-based priority (SIP), which can provide delay-bounded and loss-free services while maximizing bandwidth utilization in the ATM network. We also describe a variation of the SIP, the residual service interval-based priority (RSIP), which can achieve almost full utilization by assigning priorities dynamically on the basis of the residual service interval. We store the real-time cells belonging to different connections in logically separated queues, and for each queue we set a parameter called the service interval, during which only one cell is allowed to be transmitted. The SIP server takes and transmits the head-of-line (HOL) cell of the queue which has the smallest service interval, while the RSIP server selects the queue with the smallest residual service interval. When there is no eligible real-time cell, it transmits a non-real-time cell, thus enabling maximized bandwidth utilization. Employing the above queueing strategies, we analyze the delay characteristics deterministically with leaky-bucket-bounded input traffic and then dimension the optimal service interval. In dimensioning the service interval and buffer space of each real-time service queue, we consider burstiness of traffic in conjunction with delay constraints, so that bandwidth utilization is maximized. In addition, we consider the issues of protection from malicious users, average bandwidth utilization, and coupling between the delay bound and the bandwidth allocation granularity.

12.
The interaction of congestion control with the partitioning of source information into components of varying importance for variable-bit-rate packet voice and packet video is investigated. High-priority transport for the more important signal components results in substantially increased objective service quality. Using a Markov chain voice source model with simple PCM speech encoding and a priority queue, simulation results show a signal-to-noise ratio improvement of 45 dB with two priorities over an unprioritized system. Performance is sensitive to the fraction of traffic placed in each priority, and the optimal partition depends on network loss conditions. When this partition is optimized dynamically, quality degrades gracefully over a wide range of load values. Results with DCT encoded speech and video samples show similar behavior. Variations are investigated such as further partition of low-priority information into multiple priorities. A simulation with delay added to represent other network nodes shows general insensitivity to delay of network feedback information. A comparison is made between dropping packets on buffer overflow and timeout based on service requirements.

13.
The asynchronous transfer mode (ATM) is the choice of transport mode for broadband integrated service digital networks (B-ISDNs). We propose a window-based contention resolution algorithm to achieve higher throughput for nonblocking switches in ATM environments. In a nonblocking switch with input queues, significant loss of throughput can occur due to head-of-line (HOL) blocking when first-in first-out (FIFO) queueing is employed. To resolve this problem, we employ bypass queueing and present a cell scheduling algorithm which maximizes the switch throughput. We also employ a queue length based priority scheme to reduce the cell delay variations and cell loss probabilities. With the employed priority scheme, the variance of cell delay is also significantly reduced under nonuniform traffic, resulting in lower cell loss rates (CLRs) at a given buffer size. As the cell scheduling controller, we propose a neural network (NN) model which uses a high degree of parallelism. Due to higher switch throughput achieved with our cell scheduling, the cell loss probabilities and the buffer sizes necessary to guarantee a given CLR become smaller than those of other approaches based on sequential input window scheduling or output queueing.

14.
A novel architecture for queue management in the ATM network   (cited by: 3; self-citations: 0; external citations: 3)
The author presents four architecture designs for queue management in asynchronous transfer mode (ATM) networks and compares their implementation feasibility and hardware complexity. The author introduces the concept of assigning a departure sequence number to every cell in the queue so that the effect of long-burst traffic on other cells is avoided. A novel architecture to implement the queue management is proposed. It applies the concepts of fully distributed and highly parallel processing to schedule the cells' sending or discarding sequence. To support the architecture, a VLSI chip (called Sequencer), which contains about 150 K CMOS transistors, has been designed in a regular structure such that the queue size and the number of priority levels can grow flexibly.

15.
For an efficient utilization of the upstream bandwidth in a passive optical network, a dynamic bandwidth assignment mechanism is necessary, as it helps the service providers in provisioning bandwidth to users according to the service level agreements. The scheduling mechanism of existing schemes, immediate allocation with colorless grant and efficient bandwidth utilization (EBU), does not assign the surplus bandwidth to a specific traffic class and only divides it equally among the optical network units (ONUs). This results in overreporting of ONU bandwidth demand to the optical line terminal and causes wastage of bandwidth and increased delays at high traffic loads. Moreover, the EBU also assigns the unused bandwidth of lightly loaded ONU queues to the overloaded queues through an Update operation. This Update operation has a flaw: if a queue's report exceeds its service level agreement, it lends extra bandwidth to that queue in the current service interval and reclaims it in the next. This borrow-refund operation reduces the bandwidth allocated to the lower-priority classes and increases their delay and frame loss. This study improves both of these weaknesses. The simulation results show that the proposed scheme uses bandwidth efficiently and reduces the mean upstream delay of the type-2 (T2) traffic class by 38% and of type-3 (T3) by up to 150% compared with immediate allocation with colorless grant, at a cost of up to 10% higher delay for T2. However, T4 performance improves by 400% compared with EBU, with a slight increase in delay for the T2 traffic class. Overall, it shows a balanced performance for all the traffic classes and minimizes the bandwidth waste per cycle as well as the frame loss rate.

16.
Data performance in ATM networks should be measured at the packet level instead of the cell level, since one or more cell losses within a packet is equivalent to the loss of the packet itself. Two packet-level control schemes, packet tail discarding and early packet discarding, were proposed to improve data performance. In this paper, a new stochastic modeling technique is developed for performance evaluation of two existing packet-discarding schemes at a single bottleneck node. We assume that the data arrival process is independent of the nodal congestion, which may represent the unspecified bit-rate traffic class in ATM networks, where no end-to-end feedback control mechanism is implemented. Through numerical study, we explore the effects of buffer capacity, control threshold, packet size, source access rate, underlying high-priority real-time traffic, and loading factor on data performance, and discuss their design tradeoffs. Our study shows that a network system can be entirely shut down in an overload period if no packet-discarding control scheme is implemented, under the assumption that there are no higher-layer congestion avoidance schemes. Further, unless the buffer capacity is sufficiently large, early packet discarding (EPD) always outperforms packet tail discarding (PTD) significantly under most conditions. Especially under the overload condition, EPD can always achieve about 100% goodput and 0% badput, whereas the PTD performance deteriorates rapidly. Among all the factors, packet size has a dominant impact on EPD performance. The optimal selection of the EPD queue control threshold to achieve the maximum goodput is found to be relatively insensitive to traffic statistics.
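The difference between the two discarding schemes can be illustrated with a toy single-queue simulation (all parameters are made up, and the model is far coarser than the paper's stochastic analysis): EPD rejects a whole packet up front when the queue exceeds a threshold, while PTD admits cells until overflow and so can buffer useless partial packets.

```python
def run(scheme, packet_sizes, capacity, threshold, drain):
    """Toy sketch of early packet discard (EPD) vs packet tail
    discard (PTD) at one queue.

    Cells of each packet arrive back-to-back; `drain` cells are
    served between packets. Returns (goodput, badput): packets
    buffered intact, and cells buffered from broken packets.
    """
    queue = good = bad = 0
    for size in packet_sizes:
        if scheme == "EPD" and queue >= threshold:
            pass                            # drop the whole packet up front
        else:
            accepted = min(size, capacity - queue)
            queue += accepted
            if accepted == size:
                good += 1                   # packet fully buffered
            else:
                bad += accepted             # partial packet wastes buffer
        queue = max(0, queue - drain)       # service between packets
    return good, bad
```

In the test below both schemes deliver the same number of intact packets, but only PTD wastes buffer space on a truncated packet (badput).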

17.
We describe the fuzzy explicit rate marking (FERM) traffic flow control algorithm for a class of best effort service, known as available bit rate (ABR), proposed by the ATM Forum. FERM is an explicit rate marking scheme in which an explicit rate is calculated at the asynchronous transfer mode (ATM) switch and sent back to the ABR traffic sources encapsulated within resource management (RM) cells. The flow rate is calculated by the fuzzy congestion control (FCC) module by monitoring the average ABR queue length and its rate of change, then by using a set of linguistic rules. We use simulation to compare the steady-state and transient performance of FERM with EPRCA (a current favourite of the ATM Forum) in the presence of high priority variable bit rate (VBR) video and constant bit rate (CBR) traffic in both a local-area network (LAN) and a wide-area network (WAN) environment. Our experiments show that FERM exhibits a robust behavior, even under extreme network loading conditions, and ensures a fair share of the bandwidth for all virtual channels (VCs) regardless of the number of hops they traverse. Additionally, FERM controls congestion substantially better than EPRCA, offers faster transient response, and leads to lower end-to-end delay and better network utilization.

18.
To address the channel-allocation efficiency and QoS problems of multiple-access protocols in VSAT ATM under a multi-service environment, a multiplexing-based adaptive random reservation multiple-access protocol (MRRAA) is proposed. In MRRAA, because the bandwidth required by rt-VBR traffic varies, its reserved slots often have a surplus that can be reused by other traffic. Surplus rt-VBR slots are reused in priority order: first by nrt-VBR, then ABR, and finally UBR traffic. A fluid-flow analysis shows that MRRAA substantially improves channel utilization when source burstiness is high, without violating the QoS requirements of the traffic.

19.
Cooper, C.A.; Park, K.I. IEEE Network, 1990, 4(3): 18-23
The congestion control problem in asynchronous transfer mode (ATM) based broadband networks is defined. In general, a suitable set of congestion controls will include features for admission control, buffer and queue management, traffic enforcement, and reactive control. The leading alternatives for each of these congestion control features are summarized. An approach for choosing the best of these alternatives is presented, and a reasonable set of such alternatives that captures the increased utilization due to statistical multiplexing is suggested. It uses separate and static bandwidth pools for each service category; a statistical multiplexing gain determined for each bandwidth pool that supports a variable-bit-rate (VBR) service category; traffic enforcement on a virtual circuit basis using a leaky bucket algorithm with parameters set to accommodate anticipated levels of cell transfer delay variation; and multilevel loss priorities as well as a reactive control for appropriate VBR service categories based on multithreshold traffic enforcement and explicit congestion notification.
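The leaky bucket enforcement mentioned above can be sketched as a simple conformance check (illustrative parameters; a real enforcer would set the bucket depth from the anticipated cell transfer delay variation, as the abstract notes):

```python
def police(arrivals, rate, depth):
    """Leaky-bucket traffic enforcement sketch.

    The bucket drains at `rate` units per time step and holds at
    most `depth`; a cell that would overflow the bucket is flagged
    as non-conforming (to be tagged or discarded).

    arrivals: list of (time, size) pairs in nondecreasing time order.
    Returns a list of conformance flags, one per cell.
    """
    level, last_t = 0.0, 0.0
    flags = []
    for t, size in arrivals:
        level = max(0.0, level - rate * (t - last_t))  # drain since last cell
        last_t = t
        if level + size <= depth:
            level += size
            flags.append(True)          # conforming cell
        else:
            flags.append(False)         # violating cell
    return flags
```

With rate 1 and depth 2, a back-to-back burst of three unit cells sees its third cell flagged, while a cell arriving one time unit later conforms again.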

20.
ATM networks promise to provide the means to support a wide range of applications exhibiting different traffic characteristics and performance requirements. Video communications have been recognized as one of the most demanding applications to be supported by ATM networks. This is mainly due to the need of transferring large amounts of data and the strict timing requirements characterizing digital video applications. In this paper, simulation experiments are conducted to study the performance of the different cell discarding control mechanisms, in terms of quality of service, when used in an ATM network supporting hierarchical encoded VBR MPEG-2 video distribution. Our results show the effectiveness of the control schemes in reducing the cell loss rates as compared to the system configuration without a cell discarding scheme in place.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)    京ICP备09084417号-23

Beijing Public Network Security Filing No. 11010802026262 (京公网安备 11010802026262号)