Similar Documents
20 similar documents found.
1.
A new ATM switch architecture is presented. Our proposed Multinet switch is a self-routing multistage switch with partially shared internal buffers capable of achieving 100% throughput under uniform traffic. Although it provides incoming ATM cells with multiple paths, the cell sequence is maintained throughout the switch fabric, thus eliminating the out-of-order cell sequence problem. Cells contending for the same output addresses are buffered internally according to a partially shared queueing discipline. In a partially shared queueing scheme, buffers are partially shared to accommodate bursty traffic and to limit the performance degradation that may occur in a completely shared system, where a small number of calls may hog the entire buffer space unfairly. Although the hardware complexity in terms of number of crosspoints is similar to that of input queueing switches, the Multinet switch has throughput and delay performance similar to that of output queueing switches.
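
A minimal sketch of one way a partially shared queueing discipline can be realised (an illustrative interpretation, not necessarily the Multinet switch's exact rule): each output queue owns a small dedicated region, and a common pool is shared up to a per-queue cap, so no single queue can hog the entire buffer.

class PartiallySharedBuffer:
    def __init__(self, num_outputs, dedicated_per_queue, shared_pool, shared_cap_per_queue):
        self.dedicated = dedicated_per_queue      # cell slots reserved per output queue
        self.shared_free = shared_pool            # cell slots left in the common pool
        self.shared_cap = shared_cap_per_queue    # max pool slots one queue may hold
        self.queue_len = [0] * num_outputs
        self.shared_used = [0] * num_outputs

    def admit(self, out_port):
        """Return True and enqueue the arriving cell if space exists, else drop it."""
        if self.queue_len[out_port] < self.dedicated:     # dedicated region first
            self.queue_len[out_port] += 1
            return True
        if self.shared_free > 0 and self.shared_used[out_port] < self.shared_cap:
            self.shared_free -= 1                         # borrow a slot from the pool
            self.shared_used[out_port] += 1
            self.queue_len[out_port] += 1
            return True
        return False                                      # overflow: the cell is lost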

2.
Shared buffering and channel grouping are powerful techniques with great benefits in terms of both performance and implementation. Shared-buffer switches are known to have better performance and better utilization than input or output queued switches. With channel grouping, a cell is routed to a group of channels instead of a specific output channel. In this way, congestion due to output contention can be minimized and the switch performance can therefore be greatly improved. Although each technique is well known by itself in the traditional study of queuing systems, their combined use in ATM networks has not been much explored previously. In this paper, we develop an analytical model for a shared-buffer ATM switch with grouped output channels. The model is then used to study the switch performance in terms of cell loss probability, cell delay and throughput. In particular, we study the impact of the channel grouping factor on the buffer requirements. Our results show that grouping the output channels in a shared-buffer ATM switch leads to considerable savings in buffer space. Copyright © 2001 John Wiley & Sons, Ltd.
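
The channel-grouping idea can be illustrated with a toy slot-by-slot routine (an assumed simplification, not the paper's analytical model): a cell addressed to a destination is delivered if any channel in that destination's group is still free in the current slot.

def route_with_grouping(cells, group_size, num_groups):
    """cells: destination group index of each cell arriving in one time slot.
    Returns (delivered, blocked) counts for that slot."""
    free_channels = [group_size] * num_groups   # channels still free in each group
    delivered = blocked = 0
    for dest in cells:
        if free_channels[dest] > 0:             # any channel of the group will do
            free_channels[dest] -= 1
            delivered += 1
        else:
            blocked += 1                        # whole group busy: cell must wait or be dropped
    return delivered, blocked

# Example: with a grouping factor of 4, eight cells to group 0 see four delivered.
print(route_with_grouping([0] * 8, group_size=4, num_groups=2))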

3.
Performance studies, linking ATM switch capabilities to physical limitations imposed by integrated circuit technology, have been scarce. This paper explores trends in circuit capabilities, and makes projections toward the 0.25-μm technologies that will be available to all switch designers in the year 2000. The limits imposed by circuit technology are applied to shared buffer ATM switches. We determine requirements and physical limits for buffer capacity, buffer throughput, chip I/O throughput, and power dissipation. As a result, we are able to project chip counts, aggregate switch throughputs, and switch dimensions. As well, performance capabilities of single-chip shared buffer switches are estimated. A single-chip shared buffer switch implemented in 0.25-μm technology will be capable of an aggregate throughput of 1.3 Tb/s, will accomplish almost arbitrarily low cell loss rates for bursty traffic, and may be integrated together with translation tables supporting hundreds of connections per port.

4.
A set of 0.8 μm CMOS VLSIs developed for shared buffer switches in asynchronous transfer mode (ATM) switching systems is described. A 32×32 unit switch consists of eight buffer memory VLSIs, two memory control VLSIs, and two commercially available first in first out (FIFO) memory LSIs. Using the VLSIs, the switch can be mounted on a printed board. To provide excellent traffic characteristics not only under random traffic conditions but also under burst traffic conditions, this switch has a 2-Mb shared buffer memory, the largest reported to date, which can store 4096 cells shared among 32 output ports. This switch has a priority control function to meet the different cell loss rate requirements and switching delay requirements of different service classes. A multicast function and a 600 Mb/s link switch architecture, which are suitable for ATM network systems connecting various media, and an expansion method using the 32×32 switching board to achieve large-scale switching systems such as 256×256 or 1024×1024 switches are discussed.
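
As a quick sanity check on the quoted figures (assuming standard 53-byte ATM cells), 4096 cells × 53 bytes × 8 bits/byte ≈ 1.74 Mbit, which fits within the 2-Mb memory with headroom left for per-cell overhead such as routing tags.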

5.
In practical ATM switch design, proper dimensioning of buffer sizes and a cost-effective selection of the speed-up factor should be considered to guarantee a specified cell loss requirement for a given traffic load. Although a larger speed-up factor provides better throughput for the switch, increasing the speed-up factor involves greater complexity and cost. Hence, it may not be cost effective to increase the speed-up factor for 100% throughput. Moreover, with a given buffer budget, an increase in the speed-up factor beyond a certain value only adds to the cell loss. The paper addresses the design trade-offs between finite input/output buffer sizes and the speed-up factor in a nonblocking ATM switch. Another important issue is the adverse effect on cell loss performance caused by nonuniform traffic (different traffic intensities and unevenly distributed routing). The paper analyzes the cell loss performance of ATM switches under nonuniform traffic and examines the effect of each nonuniform traffic parameter. The authors also provide an algorithm for effective buffer sharing that alleviates the performance degradation caused by traffic nonuniformity.

6.
Multistage interconnection networks (MINs) have long been studied for use in switching networks. Since they have a unique path between source and destination and the intermediate nodes of the paths are shared, internal blocking can cause very poor throughput. This paper proposes a high-throughput ATM switch consisting of an Omega network with a new form of input queues called bypass queues. We also improve the switch throughput by partitioning the input buffers into disjoint buffer sets and multiplexing several sets of nonblocking cells within a time slot, assuming that the routing switch operates only a couple of times faster than the transmission rate. A neural network model is presented as a controller for cell scheduling and multiplexing in the switch. Our simulation results under uniform traffic show that the proposed approach achieves almost 100% of the potential switch throughput.

7.
Modern switches and routers require massive storage space to buffer packets. This becomes more significant as link speed increases and switch size grows. From the memory technology perspective, while DRAM is a good choice to meet the capacity requirement, its access time causes problems for high-speed applications. On the other hand, though SRAM is faster, it is more costly and does not have high storage density. The SRAM/DRAM hybrid architecture provides a good solution to meet both capacity and speed requirements. From the switch design and network traffic perspective, to minimize packet loss, the buffering space allocated for each switch port is normally based on the worst-case scenario, which is usually huge. However, under normal traffic load conditions, the buffer utilization of such a configuration is very low. Therefore, we propose a reconfigurable buffer-sharing scheme that can dynamically adjust the buffering space for each port according to the traffic patterns and buffer saturation status. The target is to achieve high performance and improve buffer utilization, while not placing much constraint on the buffer speed. In this paper, we study the performance of the proposed buffer-sharing scheme by both a numerical model and extensive simulations under uniform and non-uniform traffic conditions. We also present the architecture design and VLSI implementation of the proposed reconfigurable shared buffer using 0.18 µm CMOS technology. Our results demonstrate that the proposed architecture can always achieve high performance and provide much flexibility for high-speed packet switches to adapt to various traffic patterns. Furthermore, it can be easily integrated into the functionality of port controllers of modern switches and routers. Copyright © 2008 John Wiley & Sons, Ltd.
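
A hedged sketch of the reconfiguration idea (the paper's exact reallocation rule is not given in the abstract; function and parameter names are illustrative): per-port buffer quotas are periodically recomputed in proportion to recent arrivals, subject to a minimum reservation per port.

def recompute_quotas(total_cells, recent_arrivals, min_quota):
    """recent_arrivals: arrival counts per port over the last measurement window.
    Returns a per-port buffer quota summing to at most total_cells."""
    ports = len(recent_arrivals)
    reserve = min_quota * ports
    flexible = max(total_cells - reserve, 0)
    total_arrivals = sum(recent_arrivals)
    if total_arrivals == 0:                      # idle switch: split evenly
        return [total_cells // ports] * ports
    quotas = []
    for a in recent_arrivals:                    # proportional share of the flexible part
        quotas.append(min_quota + flexible * a // total_arrivals)
    return quotas

# Example: 4 ports, 1024-cell buffer; the bursty port 0 receives the largest quota.
print(recompute_quotas(1024, [900, 50, 30, 20], min_quota=32))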

8.
In this paper, we propose a new technique for reducing cell loss in multi-banyan-based ATM switching fabrics. We propose a switch architecture that uses incremental path reservation based on previously established connections. Path reservation is carried out sequentially within each banyan, but multiple banyan planes can be reserved concurrently. We use a conflict resolution approach in which banyans make concurrent reservation offers of conflict-free paths to head-of-line cells waiting in the input buffers. A reservation offer from a given banyan is allocated to the cell whose source-to-destination path uses the largest number of partially allocated switching elements shared with previously reserved paths. Paths are thus incrementally clustered within each banyan. This approach leaves the largest number of free switching elements for subsequent reservations, which reduces the potential for future conflicts and improves throughput. We present a pipelined switch architecture based on this concept of path clustering, which we call the path-clustering banyan switching fabric (PCBSF). An efficient hardware implementation of PCBSF is presented together with its theoretical basis. The performance and robustness of PCBSF are evaluated under simulated uniform traffic and ATM traffic. We also compare the cell loss rate of PCBSF to that of other pipelined banyan switches by varying the switch size, input buffer size, and traffic pattern. Copyright © 2001 John Wiley & Sons, Ltd.
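
The selection rule described above can be sketched as follows (the helper path_elements is hypothetical; a real implementation would enumerate the switching elements on a banyan path): among the HOL cells offered a conflict-free path by a banyan plane, pick the one whose path reuses the most already partially allocated switching elements, keeping free elements together for later reservations.

def select_cell(offers, reserved_elements, path_elements):
    """offers: list of (src, dst) HOL cells that received a conflict-free path offer.
    reserved_elements: set of switching-element ids already (partially) allocated.
    path_elements: function (src, dst) -> set of switching-element ids on that path."""
    best, best_overlap = None, -1
    for src, dst in offers:
        overlap = len(path_elements(src, dst) & reserved_elements)
        if overlap > best_overlap:               # prefer the most clustered path
            best, best_overlap = (src, dst), overlap
    return best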

9.
ATM (asynchronous transfer mode) is a new technique for transmitting voice, data and video. The performance of ATM networks will depend on the switch structure. A performance analysis of an ATM switch based on a three-stage Clos network is presented. In this paper two types of switches are studied: a switch with input queues in the switching elements and a switch with output queues. The study is carried out at the cell level and is intended to dimension the switch. First, the traffic is assumed to be uniform: cells arrive on each input according to a geometric arrival process and are uniformly directed over all the network outputs. An analytic model is proposed for both input and output queues in the switching elements. A study of the saturation throughput is given for input-buffered switching elements. This work shows the influence of buffer dimensioning on the different stages of the switch. Dissymmetric switching elements are shown to perform better than symmetric ones. A model is then designed for nonuniform traffic patterns and output buffers. Two types of nonuniform traffic are considered: single-source-to-single-destination (SSSD) traffic and multi-hot-spot (MHS) traffic. Discrete-event simulations are used to validate the different models.

10.
A model for the analysis of multistage switches based on shared buffer switching for Asynchronous Transfer Mode (ATM) networks is developed, and the results are compared with simulation. Switches constructed from shared buffer switches do not suffer from the head-of-line blocking that is a common problem with simple input buffering. The analysis models the state of the entire switch and extends the model introduced by Turner to global flow control with a backpressure mechanism. It is shown that buffer utilization is better and throughput improves significantly compared with the same switch using a local flow control policy. Copyright © 1999 John Wiley & Sons, Ltd.

11.
Burst traffic is a common traffic pattern in modern IP networks, and it may lead to unfairness and seriously degrade the performance of switches and routers. From the perspective of the switching mechanism, the majority of commercial switches adopt the on-chip shared-memory switching architecture, and a high-speed packet buffer with efficient queue management is required to deal with the unfairness and congestion problems. In this paper, the performance of a shared-private buffer management scheme is analyzed in detail. In the proposed scheme, the total memory space is split into a shared area and a private area. Each output port has a private memory area that cannot be used by other ports. The shared area is completely shared among all output ports. A theoretical queuing model of the proposed scheme is formulated, and closed-form formulas for multiple performance parameters are derived. Through numerical studies, we demonstrate that a nearly optimal buffer partition policy can be obtained by setting an equally small amount of private area for each queue. This work is validated by simulations as well as hardware experiments. Software simulations show that the proposed scheme performs better than existing methods, and packet dropping caused by burst traffic can be significantly reduced. In addition, a prototype of the buffer management module is implemented and evaluated on a field-programmable gate array (FPGA) platform. The evaluation shows that the proposed scheme can ensure efficiency and fairness while maintaining high throughput under real workloads.
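
A minimal admission sketch of the shared-private split described above (illustrative only; the paper's contribution is the closed-form queueing analysis, which is not reproduced here): a cell first uses its port's private area and falls back to the shared area only when the private area is full.

class SharedPrivateBuffer:
    def __init__(self, num_ports, private_size, shared_size):
        self.private_size = private_size          # cell slots private to each output port
        self.shared_free = shared_size            # cell slots left in the shared area
        self.private_used = [0] * num_ports
        self.shared_used = [0] * num_ports

    def enqueue(self, port):
        if self.private_used[port] < self.private_size:
            self.private_used[port] += 1          # use this port's private area first
            return True
        if self.shared_free > 0:                  # then fall back to the shared area
            self.shared_free -= 1
            self.shared_used[port] += 1
            return True
        return False                              # both areas exhausted: packet dropped

    def dequeue(self, port):
        if self.shared_used[port] > 0:            # release shared slots before private ones
            self.shared_used[port] -= 1
            self.shared_free += 1
        elif self.private_used[port] > 0:
            self.private_used[port] -= 1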

12.
As an alternative to input-buffered switches, combined input-crosspoint buffered switches relax arbitration timing and provide high-performance switching for packet switches with high-speed ports. It has been shown that these switches, with one-cell crosspoint buffers and round-robin (RR) arbitration at the input and output ports, provide 100% throughput under uniform traffic. However, under admissible traffic patterns with nonuniform distributions, only weight-based selection schemes have been reported to provide high throughput. We propose an RR-based arbitration scheme for a combined input-crosspoint buffered packet switch that provides nearly 100% throughput for several admissible traffic patterns, including uniform and unbalanced traffic, using one-cell crosspoint buffers. The presented scheme uses adaptable-size frames, so that the frame size adapts to the traffic pattern.
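
A generic round-robin pointer update for the output side of a combined input-crosspoint buffered switch is sketched below (the adaptable-frame extension of the cited scheme, i.e. serving several cells from the same input before advancing the pointer, is omitted):

def rr_output_arbiter(crosspoint_occupied, pointer):
    """crosspoint_occupied: one boolean per input, True if the crosspoint buffer
    toward this output currently holds a cell.
    pointer: current round-robin position. Returns (chosen_input, new_pointer)."""
    n = len(crosspoint_occupied)
    for offset in range(n):
        i = (pointer + offset) % n
        if crosspoint_occupied[i]:
            return i, (i + 1) % n                 # serve input i, advance pointer past it
    return None, pointer                          # nothing to serve in this slot

# Example: inputs 1 and 3 hold cells; starting at pointer 2, input 3 is served
# and the pointer wraps to 0.
print(rr_output_arbiter([False, True, False, True], pointer=2))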

13.
An asynchronous transfer mode (ATM) switch chip set, which employs a shared multibuffer architecture, and its control method are described. This switch architecture features multiple-buffer memories located between two crosspoint switches. By controlling the input-side crosspoint switch so as to equalize the number of stored ATM cells in each buffer memory, these buffer memories can be treated as a single large shared buffer memory. Thus, buffers are used efficiently and the cell loss ratio is reduced to a minimum. Furthermore, no multiplexing or demultiplexing is required to store and restore the ATM cells by virtue of parallel access to the buffer memories via the crosspoint switches. Access time for the buffer memory is thus greatly reduced. This feature enables high-speed switch operation. A three-VLSI chip set using 0.8-μm BiCMOS process technology has been developed. Four aligner LSIs, nine bit-sliced buffer-switch LSIs, and one control LSI are combined to create a 622-Mb/s 8×8 ATM switching system that operates at 78 MHz. In the switch fabric, 155-Mb/s ATM cells can also be switched on the 622-Mb/s port using time-division multiplexing.
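
One natural reading of the write-side balancing rule described above is sketched here (illustrative; the chip set's actual control logic is not detailed in the abstract): each arriving cell is written to the buffer memory that currently stores the fewest cells, so the separate memories behave like one large shared buffer.

def choose_buffer(occupancy):
    """occupancy: stored-cell counts, one per buffer memory.
    Returns the index of the least-occupied memory for the next write."""
    return min(range(len(occupancy)), key=lambda i: occupancy[i])

# Example: with occupancies [12, 7, 7, 15], memory 1 receives the next cell.
print(choose_buffer([12, 7, 7, 15]))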

14.
Shared buffer switches consist of a memory pool completely shared among the output ports of a switch. Shared buffer switches achieve low packet loss because buffer space is allocated in a flexible manner. However, this type of buffered switch suffers from high packet loss when the input traffic is imbalanced and bursty: heavily loaded output ports dominate the usage of the shared memory, and lightly loaded ports cannot gain access to these buffers. To regulate the lengths of very active queues and avoid performance degradation, a threshold-based dynamic buffer management policy, the decay function threshold, is proposed in this paper. The decay function threshold is a per-queue threshold scheme that uses a tailored threshold for each output port queue. Under this scheme, the buffer space made available to an output port decays as the queue size of this port increases and/or the empty buffer space decreases. Results show that the decay function threshold policy is as good as the well-known dynamic thresholds scheme, and more robust when multicast traffic is used. The main advantage of this policy is that, besides best-effort traffic, it supports quality of service (QoS) traffic by using an integrated buffer management and scheduling framework. Copyright © 2006 John Wiley & Sons, Ltd.
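
An illustrative decay-function threshold is sketched below (the exact function and parameters used in the paper are not given in the abstract): the allowance for a queue shrinks both as its own length grows and as the free buffer space shrinks; for comparison, the classical dynamic thresholds scheme ties the allowance to the free space alone.

import math

def decay_threshold(queue_len, free_space, alpha=1.0, decay_rate=0.01):
    """Admission threshold for one output-port queue (illustrative form only).
    alpha scales the share of free space; decay_rate controls how quickly the
    allowance drops as the queue itself grows."""
    return alpha * free_space * math.exp(-decay_rate * queue_len)

def admit(queue_len, free_space):
    """Accept an arriving cell only while the queue sits below its threshold."""
    return free_space > 0 and queue_len < decay_threshold(queue_len, free_space)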

15.
The configuration of an asynchronous transfer mode (ATM) switch architecture using a shared buffer memory switch (SBMS) is discussed. The scaling factors of the ATM switching network under a condition of mixed applications, including a conventional mix and telecommunication with video, are analyzed. The use of the SBMS as the unit switch for a multistage switching network is examined. A prototype system and its performance evaluation and experimental data are presented. The data indicate excellent performance under a burst cell arrival condition. The buffer size of the SBMS can be reduced in comparison with that of an individual (nonshared) buffer memory switch. A configuration for a large-scale ATM switching network with multistage switches is proposed.

16.
The asynchronous transfer mode (ATM) is the transport mode of choice for broadband integrated services digital networks (B-ISDNs). We propose a window-based contention resolution algorithm to achieve higher throughput for nonblocking switches in ATM environments. In a nonblocking switch with input queues, significant loss of throughput can occur due to head-of-line (HOL) blocking when first-in first-out (FIFO) queueing is employed. To resolve this problem, we employ bypass queueing and present a cell scheduling algorithm that maximizes the switch throughput. We also employ a queue-length-based priority scheme to reduce cell delay variations and cell loss probabilities. With the employed priority scheme, the variance of the cell delay is also significantly reduced under nonuniform traffic, resulting in lower cell loss rates (CLRs) at a given buffer size. As the cell scheduling controller, we propose a neural network (NN) model which exploits a high degree of parallelism. Due to the higher switch throughput achieved with our cell scheduling, the cell loss probabilities and the buffer sizes necessary to guarantee a given CLR become smaller than those of other approaches based on sequential input window scheduling or output queueing.
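
A greedy pass illustrates how window-based scheduling with bypass relieves HOL blocking (the cited work uses a neural-network controller and a queue-length priority rule; this sketch only shows the bypass idea):

def window_schedule(input_queues, window):
    """input_queues: per-input lists of destination ports (index 0 = HOL cell).
    Returns {input: position_in_queue} of cells chosen to transmit this slot."""
    taken_outputs = set()
    schedule = {}
    for inp, queue in enumerate(input_queues):
        for pos, dest in enumerate(queue[:window]):   # scan the first `window` cells
            if dest not in taken_outputs:             # bypass a blocked HOL cell
                taken_outputs.add(dest)
                schedule[inp] = pos
                break
    return schedule

# Example: input 1's HOL cell (to output 0) is blocked, so its second cell (to 2) is sent.
print(window_schedule([[0, 3], [0, 2], [1]], window=2))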

17.
The design of a copy network is presented for use in an ATM (asynchronous transfer mode) switch supporting BISDN (broadband integrated services digital network) traffic. Inherent traffic characteristics of BISDN services require ATM switches to handle bursty traffic with multicast connections. In typical ATM switch designs a copy network is used to replicate multicast cells before being forwarded to a point-to-point routeing network. In such designs, a single multicast cell enters the switch and is replicated once for each multicast connection. Each copy is forwarded to the routeing network with a unique destination address and is routed to the appropriate output port. Non-blocking copy networks permit multiple cells to be multicasted at once, up to the number of outputs of the copy network. Another critical feature of ATM switch design is the location of buffers for the temporary storage of transmitted cells. Buffering is required when multiple cells require a common switch resource for transmission. Typically, one cell is granted the resource and is transmitted while the remaining cells are buffered. Current switch designs associate discrete buffers with individual switch resources. Discrete buffering is not efficient for bursty traffic as traffic bursts can overflow individual switch buffers and result in dropped cells, while other buffers are under-used. A new non-blocking copy network is presented in this paper with a shared-memory input buffer. Blocked cells from any switch input are stored in a single shared input buffer. The copy network consists of three banyan networks and shared-memory queues. The design is scalable for large numbers of inputs due to low hardware complexity, O(N log2 N), and distributed operation and control. It is shown in a simulation study that a switch incorporating the shared-memory copy network has increased throughput and lower buffer requirements to maintain low packet loss probability when compared to a switch with a discrete buffer copy network.
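
A toy view of the copy-then-route step (field names are illustrative): a multicast cell with fanout F is turned into F copies, each stamped with one destination port before entering the routeing network.

def replicate(cell_payload, destinations):
    """destinations: output ports of the multicast connection.
    Returns one (destination, payload) copy per port."""
    return [(dest, cell_payload) for dest in destinations]

# Example: one multicast cell fanned out to ports 2, 5 and 7.
print(replicate("cell-A", [2, 5, 7]))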

18.
A general expansion architecture is proposed that can be used in building large-scale switches using any type of asynchronous transfer mode (ATM) switch. The proposed universal multistage interconnection network (UniMIN) switch is composed of a buffered distribution network (DN) and a column of output switch modules (OSMs), which can be any type of ATM switch. ATM cells are routed to their destination using a two-level routing strategy. The DN provides each incoming cell with a self-routing path to the destined OSM, which is the switch module containing the destination output port. Further routing to the destined output port is performed by the destination OSM. Use of the channel grouping technique yields excellent delay/throughput performance in the DN, and the virtual FIFO concept is used for implementing the output buffers of the distribution module without internal speedup. We also propose a “fair virtual FIFO” to provide fairness between input links while preserving cell sequence. The distribution network is composed of one kind of distribution module which has the same size as the OSM, regardless of the overall switch size N. This gives good modular scalability in the UniMIN switch. Performance analysis for uniform traffic and hot-spot traffic shows that a negligible delay and cell loss ratio in the DN can be achieved with a small buffer size, and that the DN yields robust performance even with hot-spot traffic. In addition, a fairness property of the proposed fair virtual FIFO is shown by a simulation study.

19.
This paper develops both exact and approximate models for the analysis of an all-optical packet switch based on a fiber-loop buffer memory (FLBM). The switch structure and operation is based on the fully shared buffer architecture of the Research and Development in Advanced Communications in Europe - ATM Optical Switching (RACE-ATMOS) project, which uses individual wavelengths to store fixed-length packets in the fiber-loop buffer. An exact model of the switch has been developed, which can be used to determine the blocking performance of the switch and obtain both its throughput and packet loss characteristics. It has been used to study the switch performance under different loading conditions and for different values of the key design parameters of the switch. This model is difficult to use for studying large switches of this kind because of computational complexities. To tackle this problem, an approximate queuing model has also been presented, which may be used to study the performance of large switches of this kind. The results obtained by the two methods are compared to confirm that the approximate model works well under typical loading conditions of the switch.

20.
赵豫彪  刘增基 《电子学报》1997,25(10):85-87
This paper presents an effective mathematical method for the performance analysis of ATM switching fabrics. By analyzing the cell loss rate and delay characteristics of ATM switching fabrics with different buffering schemes under several traffic patterns, the correctness and effectiveness of the method are demonstrated.
