Similar literature (20 results)
1.
In shared-memory packet switches, buffer management schemes can improve overall loss performance, as well as fairness, by regulating the sharing of memory among the different output port queues. Of the conventional schemes, static threshold (ST) is simple but does not adapt to changing traffic conditions, while pushout (PO) is highly adaptive but difficult to implement. We propose a novel scheme called dynamic threshold (DT) that combines the simplicity of ST and the adaptivity of PO. The key idea is that the maximum permissible length, for any individual queue at any instant of time, is proportional to the unused buffering in the switch. A queue whose length equals or exceeds the current threshold value may accept no more arrivals. An analysis of the DT algorithm shows that a small amount of buffer space is (intentionally) left unallocated, and that the remaining buffer space becomes equally distributed among the active output queues. We use computer simulation to compare the loss performance of DT, ST, and PO. DT control is shown to be more robust to uncertainties and changes in traffic conditions than ST control.
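As a concrete illustration of the DT admission rule, here is a minimal Python sketch; the proportionality constant `alpha` and the cell-at-a-time interface are assumptions for illustration, not details taken from the paper.

```python
class DynamicThresholdBuffer:
    """Shared buffer with the dynamic-threshold (DT) rule: a cell for queue i
    is accepted only if queue i is shorter than alpha * (free buffer space)."""

    def __init__(self, total_cells, num_queues, alpha=1.0):
        self.total = total_cells
        self.alpha = alpha                # proportionality constant (assumed)
        self.queues = [0] * num_queues    # current length of each output queue

    def free_space(self):
        return self.total - sum(self.queues)

    def threshold(self):
        # Maximum permissible length for any single queue at this instant.
        return self.alpha * self.free_space()

    def arrive(self, port):
        """Try to enqueue one cell for `port`; return True if accepted."""
        if self.free_space() > 0 and self.queues[port] < self.threshold():
            self.queues[port] += 1
            return True
        return False                      # cell is dropped

    def depart(self, port):
        """One cell leaves `port`'s queue, if any."""
        if self.queues[port] > 0:
            self.queues[port] -= 1
```

With a single congested queue and `alpha = 1`, arrivals are blocked once q >= B - q, so the queue settles near B/2; larger `alpha` values leave less of the buffer unallocated, which matches the abstract's remark about intentionally unallocated space.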

2.
Data performance in ATM networks should be measured at the packet level instead of the cell level, since the loss of one or more cells within a packet is equivalent to the loss of the packet itself. Two packet-level control schemes, packet tail discarding and early packet discarding, were proposed to improve data performance. In this paper, a new stochastic modeling technique is developed for performance evaluation of the two existing packet-discarding schemes at a single bottleneck node. We assume that the data arrival process is independent of the nodal congestion, which may represent the unspecified bit-rate traffic class in ATM networks, where no end-to-end feedback control mechanism is implemented. Through numerical study, we explore the effects of buffer capacity, control threshold, packet size, source access rate, underlying high-priority real-time traffic, and loading factor on data performance, and discuss their design tradeoffs. Our study shows that a network system can be entirely shut down in an overload period if no packet-discarding control scheme is implemented, under the assumption that there are no higher-layer congestion avoidance schemes. Further, unless the buffer capacity is sufficiently large, early packet discarding (EPD) always outperforms packet tail discarding (PTD) significantly under most conditions. Especially under the overload condition, EPD can always achieve about 100% goodput and 0% badput, whereas the PTD performance deteriorates rapidly. Among all the factors, the packet size has a dominant impact on EPD performance. The optimal selection of the EPD queue control threshold to achieve the maximum goodput is found to be relatively insensitive to traffic statistics.
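The cell-level acceptance decisions of the two schemes can be sketched as follows; the function signature and the per-packet bookkeeping flags are assumptions made for illustration.

```python
def accept_cell(scheme, queue_len, buffer_size, epd_threshold,
                is_first_cell, packet_already_damaged):
    """Decide whether an arriving cell of a packet enters the queue.

    scheme: 'PTD' (packet tail discarding) or 'EPD' (early packet discarding).
    packet_already_damaged: True if an earlier cell of this packet was lost.
    Returns True if the cell is enqueued.
    """
    if scheme == 'PTD':
        # Once one cell of a packet has been lost, drop the rest of it;
        # otherwise drop a cell only when the buffer is full.
        if packet_already_damaged:
            return False
        return queue_len < buffer_size

    if scheme == 'EPD':
        # Decide the whole packet's fate when its first cell arrives:
        # refuse the packet if occupancy is already above the threshold.
        if is_first_cell:
            return queue_len < epd_threshold
        # Cells of an admitted packet are kept as long as space remains
        # and no earlier cell of the packet was lost.
        return (not packet_already_damaged) and queue_len < buffer_size

    raise ValueError("unknown scheme")
```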

3.
Design of a generalized priority queue manager for ATM switches
Meeting quality of service (QoS) requirements for various services in ATM networks has been very challenging to network designers. Various control techniques at either the call or cell level have been proposed. In this paper, we deal with cell transmission scheduling and discarding at the output buffers of an ATM switch. We propose a generalized priority queue manager (GPQM) that uses per-virtual-connection queueing to support multiple QoS requirements and achieve fairness in both cell transmission and discarding. It achieves the ultimate goal of guaranteeing the QoS requirement for each connection. The GPQM adopts the earliest due date (EDD) and self-clocked fair queueing (SCFQ) schemes for scheduling cell transmission and a new self-calibrating pushout (SCP) scheme for discarding cells. The GPQM's performance in cell loss rate and delay is presented. An implementation architecture for the GPQM is also proposed, which is facilitated by a new VLSI chip called the priority content-addressable memory (PCAM) chip.
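The abstract names self-clocked fair queueing (SCFQ) as one of the GPQM's transmission schedulers. Below is a minimal sketch of SCFQ tag computation; the per-connection weights, the default cell length, and the idle-period reset are simplifying assumptions, not details of the GPQM design.

```python
import heapq

class SCFQScheduler:
    """Self-clocked fair queueing sketch: each arriving cell of connection c
    gets a finish tag F = max(last_finish[c], virtual_time) + length / weight[c];
    the cell with the smallest tag is transmitted next."""

    def __init__(self, weights):
        self.weights = weights                     # per-connection weights (assumed)
        self.last_finish = {c: 0.0 for c in weights}
        self.virtual_time = 0.0                    # tag of the cell in service
        self.heap = []                             # (finish_tag, seq, conn, payload)
        self.seq = 0

    def enqueue(self, conn, payload, length=1.0):
        tag = max(self.last_finish[conn], self.virtual_time) + length / self.weights[conn]
        self.last_finish[conn] = tag
        heapq.heappush(self.heap, (tag, self.seq, conn, payload))
        self.seq += 1

    def dequeue(self):
        if not self.heap:
            return None
        tag, _, conn, payload = heapq.heappop(self.heap)
        self.virtual_time = tag                    # system virtual time = tag in service
        if not self.heap:                          # system idle: reset the virtual clock
            self.virtual_time = 0.0
            self.last_finish = {c: 0.0 for c in self.weights}
        return conn, payload
```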

4.
In this paper we investigate the performance metrics of buffer management schemes. In general, the selective pushout (SP) scheme can support a very low loss probability for high-priority cells, but it may cause unfair buffer allocation among different output queues and a high overall cell loss probability. In order to meet the dynamically varying performance requirements of ATM switches, a novel buffer management scheme called pushout with virtual thresholds (PVT) is proposed. In the PVT scheme, each output queue is guaranteed buffer space up to its virtual threshold (VT). Simulation results show that, by adequately adjusting the VT, the PVT can dynamically achieve either fairness and a low overall cell loss probability or a very low loss probability for high-priority cells. In particular, when VT = 0, the PVT control can be viewed as the SP control. Copyright © 2003 John Wiley & Sons, Ltd.
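A minimal sketch of a PVT-style admission decision follows, assuming (as the abstract suggests) that queues at or below their virtual threshold are protected from pushout; the choice of victim queue is an assumption for illustration.

```python
def admit_with_pvt(queues, vt, buffer_size, arriving_port):
    """Pushout-with-virtual-thresholds admission sketch.

    queues: list of current queue lengths.
    vt: list of per-queue virtual thresholds (protected space).
    Returns 'accept', 'pushout:<victim>', or 'drop'.
    """
    if sum(queues) < buffer_size:
        queues[arriving_port] += 1
        return 'accept'
    # Buffer full: look for a victim queue that exceeds its virtual
    # threshold (queues at or below their VT are protected).
    candidates = [i for i, q in enumerate(queues) if q > vt[i]]
    if not candidates:
        return 'drop'
    # Victim choice (most over its budget) is an illustrative assumption.
    victim = max(candidates, key=lambda i: queues[i] - vt[i])
    queues[victim] -= 1                   # push out one cell from the victim
    queues[arriving_port] += 1
    return f'pushout:{victim}'
```

With all VTs set to 0, every nonempty queue is a pushout candidate, which corresponds to the SP behavior mentioned above.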

5.
The asynchronous transfer mode (ATM) technique has been widely accepted as a flexible and effective scheme to transport various traffic over the future broadband network. To fully utilize network resources while still providing satisfactory quality of service (QOS) to all network users, prioritizing the users' traffic according to their service requirements becomes necessary. During call setup or service provisioning, each service can be assigned a service class determined by a delay priority and a loss priority. A queue manager in an ATM network node then schedules the departure and discarding sequence of ATM cells based on their delay and loss priorities. Most queue management schemes proposed so far consider only one of these two priority types; the queue manager presented here handles multiple delay and loss priorities simultaneously. Moreover, a cell discarding strategy, called push-out, that allows the buffer to be completely shared by all service classes, has been adopted in the queue manager. We propose a practical architecture to implement the queue manager by using available VLSI sequencer chips.

6.
We study a multistage hierarchical asynchronous transfer mode (ATM) switch in which each switching element has its own local cell buffer memory that is shared among all its output ports. We propose a novel buffer management technique called delayed pushout that combines a pushout mechanism (for sharing memory efficiently among queues within the same switching element) and a backpressure mechanism (for sharing memory across switch stages). The backpressure component has a threshold to restrict the amount of sharing between stages. A synergy emerges when pushout, backpressure, and this threshold are all employed together. Using a computer simulation of the switch under symmetric but bursty traffic, we study delayed pushout as well as several simpler pushout and backpressure schemes under a wide range of loads. At every load level, we find that the delayed pushout scheme has a lower cell loss rate than its competitors. Finally, we show how delayed pushout can be extended to share buffer space between traffic classes with different space priorities.
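A rough sketch of one switching element combining the two mechanisms is given below; the backpressure-threshold test and the longest-queue victim rule are assumptions for illustration rather than the exact delayed-pushout logic of the paper.

```python
class SwitchElement:
    """One shared-memory switching element with a thresholded backpressure
    signal toward the upstream stage and local pushout among its own queues."""

    def __init__(self, buffer_size, num_ports, backpressure_threshold):
        self.buffer_size = buffer_size
        self.bp_threshold = backpressure_threshold
        self.queues = [0] * num_ports

    def asserting_backpressure(self):
        # Ask the upstream stage to hold cells once occupancy passes the
        # threshold; below it, sharing across stages is allowed.
        return sum(self.queues) >= self.bp_threshold

    def accept(self, port):
        """Accept a cell for `port`, pushing out from the longest local
        queue when the element's memory is full (illustrative rule)."""
        if sum(self.queues) < self.buffer_size:
            self.queues[port] += 1
            return True
        longest = max(range(len(self.queues)), key=lambda i: self.queues[i])
        if self.queues[longest] > self.queues[port]:
            self.queues[longest] -= 1     # push out a cell from the longest queue
            self.queues[port] += 1
            return True
        return False
```

An upstream element would consult `asserting_backpressure()` before forwarding a cell downstream, which is how the threshold limits memory sharing between stages in this sketch.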

7.
We propose a novel buffer management scheme called threshold-based selective drop (TSD) to improve the overall loss performance and fairness by regulating buffer sharing in a packet switch. A transient analysis of TSD is derived to prove the fairness of its buffer allocation. Computer simulation shows that the overall loss performance of TSD approaches that of the pushout (PO) scheme, which is considered an optimal solution but is difficult to implement in the high-speed Internet. However, unlike PO, TSD blocks unwanted packets before they enter the queue, and does not need to preempt cells already in the queue in order to accept new packets.

8.
This paper reports the findings of a simulation study of the queueing behavior of “best-effort” traffic in the presence of constant bit-rate and variable bit-rate isochronous traffic. In this study, best-effort traffic refers to ATM cells that support communications between host end systems executing various applications and exchanging information using TCP/IP. The performance measures considered are TCP cell loss, TCP packet loss, mean cell queueing delay, and mean cell queue length. Our simulation results show that, under certain conditions, best-effort TCP traffic may experience as much as 2% cell loss. Our results also show that the probability of cell and packet loss decreases logarithmically with increased buffer size.

9.
The heterogeneity and the burstiness of input source traffic together with the large size of the shared buffer make it difficult to analyze the performance of an asynchronous transfer mode (ATM) multiplexer. Based on the asymptotic decay rate of the queue length distribution at the shared buffer, we propose a Bernoulli process approximation for the individual on-off input source with buffer size adjustment, which gives a good upper bound of the cell loss probability.

10.
Shared buffer switches consist of a memory pool completely shared among the output ports of a switch. Shared buffer switches achieve low packet loss as buffer space is allocated in a flexible manner. However, this type of buffered switch suffers from high packet loss when the input traffic is imbalanced and bursty: heavily loaded output ports dominate the usage of the shared memory, and lightly loaded ports cannot gain access to these buffers. To regulate the lengths of very active queues and avoid performance degradation, a threshold-based dynamic buffer management policy, called decay function threshold, is proposed in this paper. Decay function threshold is a per-queue threshold scheme that uses a tailored threshold for each output port queue. In this scheme, the buffer space allowed to an output port decays as the queue size of this port increases and/or the empty buffer space decreases. Results have shown that the decay function threshold policy is as good as the well-known dynamic thresholds scheme, and more robust when multicast traffic is used. The main advantage of this policy is that, besides best-effort traffic, it supports quality of service (QoS) traffic by using an integrated buffer management and scheduling framework. Copyright © 2006 John Wiley & Sons, Ltd.
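The abstract does not give the decay function itself, so the exponential form below is purely an illustrative assumption of how a per-queue threshold could shrink as the queue grows and as free buffer space disappears.

```python
import math

def decay_threshold(queue_len, free_space, beta=2.0, scale=1.0):
    """Illustrative per-queue threshold that decays with queue length and
    with vanishing free space; the exponential form and the parameters
    beta/scale are assumptions, not taken from the paper."""
    return scale * free_space * math.exp(-beta * queue_len / max(free_space, 1))

def accept_cell(queues, port, buffer_size):
    """Admit a cell for `port` if its queue is below its current threshold."""
    free = buffer_size - sum(queues)
    if free > 0 and queues[port] < decay_threshold(queues[port], free):
        queues[port] += 1
        return True
    return False
```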

11.
Algorithms for solving for the cell loss rates in an asynchronous transfer mode (ATM) network using cell loss priorities are presented. With the loss priority scheme, cells of low-priority classes are accepted only if the instantaneous buffer queue length at the cell arrival epoch is below a given threshold. The input is modeled by Markov-modulated Bernoulli processes. The effect of the loss priority scheme on data, voice, and video traffic is investigated.
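A minimal sketch of the loss-priority admission rule described above; the two-level priority labels are an assumption for illustration.

```python
def admit(cell_priority, queue_len, buffer_size, low_priority_threshold):
    """Space-priority admission: high-priority cells are accepted whenever
    the buffer has room; low-priority cells only while the instantaneous
    queue length is below the threshold."""
    if queue_len >= buffer_size:
        return False
    if cell_priority == 'high':
        return True
    return queue_len < low_priority_threshold
```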

12.
We study a multistage ATM switch in which shared-memory switching elements are arranged in a banyan topology. By “shared-memory,” we mean that each switching element uses output queueing and shares its local cell buffer memory among all its output ports. We apply a buffer management technique called delayed pushout that was originally designed for multistage ATM switches with hierarchical topologies. Delayed pushout combines a pushout mechanism, for sharing memory efficiently among queues within the same switching element, and a backpressure mechanism, for sharing memory across switch stages. The backpressure component has a threshold to restrict the amount of sharing between stages. A synergy emerges when pushout, backpressure, and this threshold are all employed together. Using a computer simulation of the switch under bursty traffic, we study delayed pushout as well as several simpler pushout and backpressure schemes under a variety of traffic conditions. Of the five schemes we simulate, delayed pushout is the only one that performs well under all load conditions.

13.
SMAQ is a measurement-based tool for the integration of traffic modeling and queueing analysis. There are three basic components in SMAQ. In the design of the first component, statistics measurement, the most critical issues are to identify the important traffic statistics for queueing analysis in a finite buffer system and then to build a measurement structure to collect them. Our study indicates that both first- and second-order traffic statistics, measured within a given frequency window, have a very significant impact on the queue length and loss rate performance. In the design of the second component, matched modeling, the focal point is to construct a stochastic model that can match a wide range of important statistics collected in various applications. New methodologies and fast algorithms are developed for such construction on the basis of a circulant modulated Poisson process (CMPP). For the third component, queueing solutions, the basic requirement is to provide numerical solutions of the queue length and loss rate for transport of given traffic in a finite buffer system. A fast and stable computation method, called a Folding algorithm, is applied to provide both steady-state and transient solutions of various kinds, including congestion control performance where arriving traffic is selectively discarded based on queue thresholds. We provide both design methodologies and software architectures of these three components, with a discussion of practical engineering issues for the use of the SMAQ tool.

14.
This paper presents the architecture of a new space priority mechanism intended to control cell loss in ATM switches. Our mechanism is a new generic concept called multiple pushout. It is based on the utilization of both AAL and ATM features and on a particular definition of the priority bit. Whenever one cell of a message overflows the buffer of an ATM switch, the algorithm causes the switch to discard other cells of the message (including later arrivals). Such discarding frees buffer space for cells of other messages that have a chance of arriving at their destination intact. Our objective is to emphasize that, in case of overload, most of the proposed mechanisms discard cells without any semantic information about the type of cells; therefore, at the destination, all the fragments of the corrupted messages will be discarded anyway. Finally, we present simulation results comparing cell loss rates and message loss rates of several space priority mechanisms.

15.
In order to reduce the time delays as well as multiplexer memory requirements in packet voice systems, a family of congestion control schemes is proposed. They are all based on the selective discarding of packets whose loss will produce the least degradation in quality of the reconstructed voice signal. A mathematical model of the system is analyzed and queue length distributions are derived. These are used to compute performance measures, including mean waiting time and fractional packet loss. Performance curves for some typical systems are presented, and it is shown that the control procedures can achieve significant improvement over uncontrolled systems, reducing the mean waiting time and total packet loss (at transmitting and receiving ends). Congestion control with a resume level is also analyzed, showing that without increasing the fractional packet loss, the mean and variance of the queue can be reduced by selecting an appropriate resume level. The performance improvements are confirmed by the results of some informal subjective testing.
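A sketch of selective discarding with a resume level (hysteresis) follows; the two-level importance labels and the threshold names are assumptions for illustration.

```python
class VoiceCongestionControl:
    """Selective discard of less-important voice packets with hysteresis:
    discarding starts when the queue exceeds `start_discard` and stops only
    after it falls back to `resume_level` (resume_level <= start_discard)."""

    def __init__(self, start_discard, resume_level):
        assert resume_level <= start_discard
        self.start_discard = start_discard
        self.resume_level = resume_level
        self.discarding = False

    def accept(self, queue_len, packet_importance):
        # Update the discard state from the current queue length.
        if queue_len >= self.start_discard:
            self.discarding = True
        elif queue_len <= self.resume_level:
            self.discarding = False
        # Important packets always pass; droppable ones only outside
        # the discard phase.
        return packet_importance == 'high' or not self.discarding
```

The resume level keeps the controller from oscillating around a single threshold, which is what allows the queue mean and variance to shrink without increasing the fractional packet loss.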

16.
The asynchronous transfer mode (ATM) is the choice of transport mode for broadband integrated service digital networks (B-ISDNs). We propose a window-based contention resolution algorithm to achieve higher throughput for nonblocking switches in ATM environments. In a nonblocking switch with input queues, significant loss of throughput can occur due to head-of-line (HOL) blocking when first-in first-out (FIFO) queueing is employed. To resolve this problem, we employ bypass queueing and present a cell scheduling algorithm which maximizes the switch throughput. We also employ a queue-length-based priority scheme to reduce the cell delay variations and cell loss probabilities. With the employed priority scheme, the variance of cell delay is also significantly reduced under nonuniform traffic, resulting in lower cell loss rates (CLRs) at a given buffer size. As the cell scheduling controller, we propose a neural network (NN) model which uses a high degree of parallelism. Due to the higher switch throughput achieved with our cell scheduling, the cell loss probabilities and the buffer sizes necessary to guarantee a given CLR become smaller than those of other approaches based on sequential input window scheduling or output queueing.

17.
A Per-Flow Based Node Architecture for Integrated Services Packet Networks
Wu, Dapeng; Hou, Yiwei Thomas; Li, Bo; Chao, H. Jonathan. Telecommunication Systems, 2001, 17(1-2): 135-160.
As the Internet transforms from the traditional best-effort service network into a QoS-capable multi-service network, it is essential to have a new architectural design and appropriate traffic control algorithms in place. This paper presents a network node architecture and several traffic management mechanisms that are capable of achieving QoS provisioning for the guaranteed service (GS), the controlled-load (CL) service, and the best-effort (BE) service for future integrated services networks. A key feature of our architecture is that it resolves the out-of-sequence problem associated with the traditional design. We also propose two novel packet discarding mechanisms called selective pushout (SP) and selective pushout plus (SP+). Simulation results show that, once flows are admitted into the network, our architecture and traffic management algorithms provide hard performance guarantees to GS flows and consistent (or soft) performance guarantees to CL flows under all conditions, with minimal negative impact on in-profile GS, CL, and BE traffic should there be any out-of-profile behavior from some CL flows.

18.
This letter suggests a modified priority scheduling policy for the asynchronous transfer mode (ATM) multiplexer, called the dual queue length threshold (DQLT) policy. In the DQLT method there are two queues: (1) Q1 for nonreal-time traffic and (2) Q2 for real-time traffic, and each queue has its own threshold to adaptively control buffer congestion. If Q1 is congested beyond its threshold T1, one cell at the head of Q1 moves into Q2 per slot time. It is shown that the DQLT method gives intermediate performance between those of the minimum laxity threshold (MLT) and queue length threshold (QLT) policies, while its control method is much simpler.
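A sketch of one DQLT slot follows; the guard on Q2's threshold T2 and the service order (real-time queue first) are assumptions for illustration, since the abstract only specifies the Q1-to-Q2 transfer rule.

```python
def dqlt_slot(q1, q2, t1, t2):
    """One slot of a DQLT-style policy sketch.

    q1: non-real-time queue (list of cells), q2: real-time queue (list of cells),
    t1/t2: their thresholds. Returns the cell served this slot, or None."""
    # Transfer rule from the abstract: when Q1 exceeds T1, promote one cell.
    # The T2 guard is an assumption to keep Q2 from being flooded.
    if len(q1) > t1 and len(q2) < t2:
        q2.append(q1.pop(0))
    # Assumed service order: real-time queue first, then non-real-time.
    if q2:
        return q2.pop(0)
    if q1:
        return q1.pop(0)
    return None
```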

19.
A space-division, nonblocking packet switch with data concentration and output buffering is proposed. The performance of the switch is evaluated with respect to packet loss probability, the first and second moments of the equilibrium queue length and waiting time, throughput, and buffer overflow probability. Numerical results indicate that the switch exhibits very good delay-throughput performance over a wide range of input traffic. The switch compares favorably with some previously proposed switches in terms of fewer basic building elements used to attain the same degree of output buffering.

20.
This paper studies the impact of long-range-dependent (LRD) traffic on the performance of reassembly and multiplexing queueing. A queueing model characterizing the general reassembly and multiplexing operations performed in packet networks is developed and analyzed. The buffer overflow probabilities for both reassembly and multiplexing queues are derived by extending renewal analysis and Beneš fluid queue analysis, respectively. Tight upper and lower bounds of the frame loss probabilities are also analyzed and obtained. Our analysis is not based on existing asymptotic methods, and it provides new insights regarding the practical impact of LRD traffic. For the reassembly queue, the results show that LRD traffic and conventional Markov traffic yield similar queueing behavior. For the multiplexing queue, the results show that LRD traffic has a significant impact on the buffer requirement when the target loss probability is small, including for practical ranges of buffer size or maximum delay.
