Similar Documents
20 similar documents found.
1.
Packet reordering is not pathological network behavior
It is a widely held belief that packet reordering in the Internet is a pathological behavior, or more precisely, that it is an uncommon behavior caused by incorrect or malfunctioning network components. Some studies of Internet traffic have reported seeing occasional packet reordering events and ascribed these events to “route fluttering”, router “pauses” or simply to broken equipment. We have found, however, that parallelism in Internet components and links is causing packet reordering under normal operation and that the incidence of packet reordering appears to be substantially higher than previously reported. More importantly, we observe that in the presence of massive packet reordering, transmission control protocol (TCP) performance can be profoundly affected. Perhaps the most disturbing observation about TCP's behavior is that large scale and largely random reordering on the part of the network can lead to self-reinforcingly poor performance from TCP.
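The damage mechanism here is that reordered segments generate duplicate ACKs, and once three accumulate a Reno-style sender retransmits and cuts its window even though nothing was lost. The toy simulation below (not the authors' measurement methodology; all parameter values are illustrative) counts how often randomly delayed packets would cross that threshold.

```python
import random

def spurious_fast_retransmits(num_packets=10_000, displace_prob=0.05,
                              max_delay=5, dupthresh=3, seed=1):
    """Reorder an in-order packet stream by randomly delaying packets, then
    count how many holes would draw >= dupthresh duplicate ACKs at the
    receiver, i.e. how many spurious fast retransmits a Reno-style sender
    would perform even though nothing was lost."""
    rng = random.Random(seed)
    arrivals = []
    for i in range(num_packets):
        delay = rng.randint(1, max_delay) if rng.random() < displace_prob else 0
        arrivals.append((i + delay, i))      # (arrival slot, sequence number)
    arrivals.sort()

    expected, dup_acks, events = 0, 0, 0
    received = set()
    for _, seq in arrivals:
        received.add(seq)
        if seq == expected:                  # cumulative ACK advances, hole closed
            while expected in received:
                expected += 1
            dup_acks = 0
        else:                                # out-of-order segment -> duplicate ACK
            dup_acks += 1
            if dup_acks == dupthresh:        # sender would cut cwnd needlessly
                events += 1
    return events

print(spurious_fast_retransmits())
```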

2.
Delay-based congestion avoidance for TCP
The set of TCP congestion control algorithms associated with TCP-Reno (e.g., slow-start and congestion avoidance) has been crucial to ensuring the stability of the Internet. Algorithms such as TCP-NewReno (which has been deployed) and TCP-Vegas (which has not been deployed) represent incrementally deployable enhancements to TCP as they have been shown to improve a TCP connection's throughput without degrading performance to competing flows. Our research focuses on delay-based congestion avoidance algorithms (DCA), like TCP-Vegas, which attempt to utilize the congestion information contained in packet round-trip time (RTT) samples. Through measurement and simulation, we show evidence suggesting that a single deployment of DCA (i.e., a TCP connection enhanced with a DCA algorithm) is not a viable enhancement to TCP over high-speed paths. We define several performance metrics that quantify the level of correlation between packet loss and RTT. Based on our measurement analysis, we find that, although there is useful congestion information contained within RTT samples, the level of correlation between an increase in RTT and packet loss is not strong enough to allow a TCP sender to improve throughput reliably. While DCA is able to reduce the packet loss rate experienced by a connection, in its attempts to avoid packet loss, the algorithm reacts unnecessarily to RTT variation that is not associated with packet loss. The result is degraded throughput as compared to a similar flow that does not support DCA.
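As a concrete illustration of what a delay-based congestion avoidance step looks like, the sketch below backs off when the smoothed RTT rises well above the minimum observed RTT, in the spirit of TCP-Vegas. The function name, the EWMA weights, and the thresholds are illustrative assumptions, not the algorithms evaluated in the paper.

```python
# Hedged sketch of a generic delay-based congestion-avoidance (DCA) step.
def dca_update(cwnd, rtt_sample, base_rtt, srtt, alpha=0.125, thresh=1.2, beta=0.875):
    """Return (new_cwnd, new_srtt). Back off multiplicatively when the smoothed
    RTT exceeds thresh * base_rtt (interpreted as queueing delay building up);
    otherwise grow additively as in normal congestion avoidance."""
    srtt = (1 - alpha) * srtt + alpha * rtt_sample   # EWMA of RTT samples
    if srtt > thresh * base_rtt:
        cwnd = max(1.0, cwnd * beta)                 # proactive, pre-loss backoff
    else:
        cwnd += 1.0 / cwnd                           # AIMD-style additive increase
    return cwnd, srtt
```

The paper's point is precisely that this kind of RTT-triggered backoff often fires on delay variation that is unrelated to loss, which is why a single DCA flow can lose throughput.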

3.
A congestion control mechanism supporting QoS for multimedia communication
罗万明  林闯  阎保平 《电子学报》2000,28(Z1):48-52
The additive-increase/multiplicative-decrease (AIMD) congestion control mechanism of the Internet transport protocol TCP is ill-suited to multimedia communication, and most current congestion control research focuses on best-effort service. Considering the characteristics of multimedia communication over the Internet and its QoS requirements, this paper proposes a new congestion control mechanism that combines quality-of-service (QoS) control for multimedia communication with rate-based congestion control. The mechanism is studied in detail, and the paper presents a sender-side bandwidth control strategy for multimedia data streams, a packet loss control scheme based on dynamic partial buffer sharing (DPBS), and a receiver-side method for computing the packet loss ratio p. Finally, the overall system architecture of the congestion control mechanism is given.
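As a hedged sketch of the kind of receiver-side computation the abstract mentions, the function below estimates the packet loss ratio p per measurement interval from gaps in the received sequence numbers. The interface and interval handling are illustrative; the paper's DPBS-based loss control is not reproduced.

```python
def loss_rate(received_seqs):
    """Estimate the packet loss ratio p over one measurement interval from the
    set of sequence numbers actually received."""
    if not received_seqs:
        return 0.0
    lo, hi = min(received_seqs), max(received_seqs)
    expected = hi - lo + 1                      # packets the sender put on the wire
    return (expected - len(set(received_seqs))) / expected

# Example: sequence numbers 100..109 with 103 and 107 missing -> p = 0.2
print(loss_rate([100, 101, 102, 104, 105, 106, 108, 109]))
```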

4.
5.
The Internet uses a window‐based congestion control mechanism in transmission control protocol (TCP). In the literature, there have been a great number of analytical studies on TCP. Most of those studies have focused on the statistical behaviour of TCP by assuming a constant packet loss probability in the network. However, the packet loss probability, in reality, changes according to the packet transmission rates from TCP connections. Conversely, the window size of a TCP connection is dependent on the packet loss probability in the network. In this paper, we explicitly model the interaction between the congestion control mechanism of TCP and the network as a feedback system. By using this model, we analyse the steady state and the transient state behaviours of TCP. We derive the throughput and the packet loss probability of TCP, and the number of packets queued in the bottleneck router. We then analyse the transient state behaviour using a control theoretic approach, showing the influence of the number of TCP connections and the propagation delay on the transient state behaviour of TCP. Copyright © 2005 John Wiley & Sons, Ltd.
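For context on the kind of relationship such a fixed-point model yields, the snippet below evaluates the widely cited square-root throughput approximation (Mathis et al.), throughput ≈ (MSS/RTT)·sqrt(3/(2p)). It is shown as a standard reference formula, not as the paper's derived expression.

```python
from math import sqrt

def tcp_throughput_bps(mss_bytes, rtt_s, loss_prob):
    """Approximate long-run TCP throughput in bits/s for a given MSS, RTT and
    packet loss probability p: rate ~ (MSS/RTT) * sqrt(3/(2p))."""
    return 8 * mss_bytes / rtt_s * sqrt(3.0 / (2.0 * loss_prob))

# Example: 1460-byte segments, 100 ms RTT, 1% loss -> roughly 1.4 Mbit/s
print(f"{tcp_throughput_bps(1460, 0.1, 0.01) / 1e6:.2f} Mbit/s")
```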

6.
Promoting the use of end-to-end congestion control in the Internet
This paper considers the potentially negative impacts of an increasing deployment of non-congestion-controlled best-effort traffic on the Internet. These negative impacts range from extreme unfairness against competing TCP traffic to the potential for congestion collapse. To promote the inclusion of end-to-end congestion control in the design of future protocols using best-effort traffic, we argue that router mechanisms are needed to identify and restrict the bandwidth of selected high-bandwidth best-effort flows in times of congestion. The paper discusses several general approaches for identifying those flows suitable for bandwidth regulation. These approaches are to identify a high-bandwidth flow in times of congestion as unresponsive, “not TCP-friendly”, or simply using disproportionate bandwidth. A flow that is not “TCP-friendly” is one whose long-term arrival rate exceeds that of any conformant TCP in the same circumstances. An unresponsive flow is one failing to reduce its offered load at a router in response to an increased packet drop rate, and a disproportionate-bandwidth flow is one that uses considerably more bandwidth than other flows in a time of congestion.
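The "not TCP-friendly" test can be made concrete with the widely used bound on a conformant TCP's rate, roughly 1.22·B/(R·sqrt(p)) for packet size B, round-trip time R and drop rate p. The sketch below applies that bound as a router-side flagging rule; the function names and the idea of a single hard threshold are illustrative simplifications of the paper's approach.

```python
from math import sqrt

def tcp_friendly_limit_bps(pkt_bytes, rtt_s, drop_rate):
    """Upper bound on the rate (bits/s) of a conformant TCP under the given
    conditions, using the standard 1.22*B/(R*sqrt(p)) approximation."""
    return 1.22 * 8 * pkt_bytes / (rtt_s * sqrt(drop_rate))

def flagged_for_regulation(flow_rate_bps, pkt_bytes, rtt_s, drop_rate):
    """True if the flow's arrival rate exceeds the TCP-friendly bound and is
    therefore a candidate for bandwidth regulation at the router."""
    return flow_rate_bps > tcp_friendly_limit_bps(pkt_bytes, rtt_s, drop_rate)
```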

7.
A fuzzy-logic control algorithm for active queue management in IP networks
Active queue management (AQM) is an active research area in the Internet community. Random Early Detection (RED) is a typical AQM algorithm, but it is known that its parameters are difficult to configure and that its average queue length is closely tied to the load level. This paper proposes an effective fuzzy congestion control algorithm that exploits the ability of fuzzy logic to deal with uncertain events. The main advantage of this new congestion control algorithm is that it discards RED's packet dropping mechanism and instead computes the packet drop probability from a preconfigured fuzzy rule base, using the queue length and the buffer usage ratio as inputs. Theoretical analysis and Network Simulator (NS) results show that the proposed algorithm achieves higher throughput and a more stable queue length than traditional schemes, improving a router's congestion control capability in IP networks.
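The sketch below shows the general shape of such a controller: triangular membership functions over the two inputs and a small rule table whose weighted outputs give a drop probability. The membership functions and rule weights are illustrative placeholders, not the paper's tuned configuration.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_drop_prob(queue_ratio, buffer_usage):
    """Return a drop probability in [0, 1] from two inputs in [0, 1]."""
    low  = lambda v: tri(v, -0.5, 0.0, 0.5)
    med  = lambda v: tri(v,  0.0, 0.5, 1.0)
    high = lambda v: tri(v,  0.5, 1.0, 1.5)
    # Rule table: (queue membership, buffer membership) -> output drop level.
    rules = [
        (low,  low,  0.0),  (low,  med,  0.05), (low,  high, 0.2),
        (med,  low,  0.05), (med,  med,  0.2),  (med,  high, 0.5),
        (high, low,  0.2),  (high, med,  0.5),  (high, high, 1.0),
    ]
    num = den = 0.0
    for q_mf, b_mf, out in rules:
        w = min(q_mf(queue_ratio), b_mf(buffer_usage))   # rule firing strength
        num += w * out
        den += w
    return num / den if den else 0.0

print(fuzzy_drop_prob(0.8, 0.9))   # heavy load -> high drop probability
```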

8.
In packet networks, congestion events tend to persist, producing large delays and long bursts of consecutive packet loss, resulting in perceived performance degradation. The length and rate of these events have a significant effect on network quality of service (QoS). The packet delay resulting from these congestion events also influences QoS. In this paper a technique for predicting these properties of congestion events in the presence of fractional Brownian motion (fBm) traffic is developed.

9.
TCP Veno: TCP enhancement for transmission over wireless access networks
Wireless access networks in the form of wireless local area networks, home networks, and cellular networks are becoming an integral part of the Internet. Unlike wired networks, random packet loss due to bit errors is not negligible in wireless networks, and this causes significant performance degradation of transmission control protocol (TCP). We propose and study a novel end-to-end congestion control mechanism called TCP Veno that is simple and effective for dealing with random packet loss. A key ingredient of Veno is that it monitors the network congestion level and uses that information to decide whether packet losses are likely to be due to congestion or random bit errors. Specifically: (1) it refines the multiplicative decrease algorithm of TCP Reno (the most widely deployed TCP version in practice) by adjusting the slow-start threshold according to the perceived network congestion level rather than a fixed drop factor, and (2) it refines the linear increase algorithm so that the connection can stay longer in an operating region in which the network bandwidth is fully utilized. Based on extensive network testbed experiments and live Internet measurements, we show that Veno can achieve significant throughput improvements without adversely affecting other concurrent TCP connections, including other concurrent Reno connections. In typical wireless access networks with a 1% random packet loss rate, throughput improvements of up to 80% can be demonstrated. A salient feature of Veno is that it modifies only the sender-side protocol of Reno without changing the receiver-side protocol stack.
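The loss-differentiation step the abstract describes can be sketched as follows: estimate the backlog N from RTT inflation and soften the multiplicative decrease when the loss looks random rather than congestive. The constants (beta = 3 packets, 4/5 vs 1/2 cut) follow the commonly cited description of Veno; treat this as an illustration, not a reference implementation.

```python
def veno_on_loss(cwnd, rtt, base_rtt, beta=3):
    """Return the new slow-start threshold after a loss indication."""
    backlog = cwnd * (rtt - base_rtt) / rtt          # packets queued in the path
    if backlog < beta:
        return max(2, int(cwnd * 4 / 5))             # loss judged random (wireless)
    return max(2, int(cwnd / 2))                     # loss judged congestive (Reno cut)
```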

10.
Flow Routing and its Performance Analysis in Optical IP Networks
Optical packet-switching networks deploying buffering, wavelength conversion and multi-path routing have been extensively studied in recent years to provide high-capacity transport for Internet traffic. However, due to packet-based routing and switching, such a network can introduce significant reordering and delay variation in the packets received by end users, increasing the burstiness of Internet traffic and causing higher-layer protocols to malfunction. This paper addresses a novel routing and switching method for optical IP networks, flow routing, and its facilitating protocol. Flow routing deals with packet flows to reduce flow corruption due to packet reordering, delay variation and packet loss, without requiring complicated control mechanisms. A detailed performance analysis is given for output-buffered optical routers adopting flow routing. Two flow-oriented discarding techniques, flow discard (FD) and early flow discard (EFD), are discussed. Compared with optical packet-switching routers, a remarkable improvement in goodput is obtained in optical flow-routers, especially under high-congestion periods. We conclude that EFD behaves as a robust technique that is more tolerant than FD to changes in traffic and transmission system factors.

11.
This paper provides a parallel review of two important issues for next-generation multimedia networking. First, emerging multimedia applications require a fresh approach to congestion control in the Internet. Congestion control is currently performed by TCP and is optimised for data traffic flows, which are inherently elastic; for audio and video traffic, the sudden rate fluctuations imposed by TCP's multiplicative-decrease control algorithm are far from optimal. The second important issue is mobility support for multimedia applications. Wireless networks are characterized by substantial packet loss due to the imperfection of the radio medium, and this increased packet loss disturbs the foundation of TCP's loss-based congestion control. This paper contributes to the ongoing discussion about Internet congestion control by providing a parallel analysis of these two issues. The paper describes the main challenges, design guidelines, and existing proposals for Internet congestion control optimised for multimedia traffic in the wireless network environment. Copyright © 2004 John Wiley & Sons, Ltd.

12.
IEEE Network, 2002, 16(5): 38-46
Today, the dominant paradigm for congestion control in the Internet is based on the notion of TCP friendliness. To be TCP-friendly, a source must behave in such a way as to achieve a bandwidth that is similar to the bandwidth obtained by a TCP flow that would observe the same round-trip time (RTT) and the same loss rate. However, with the success of the Internet comes the deployment of an increasing number of applications that do not use TCP as a transport protocol. These applications can often improve their own performance by not being TCP-friendly, which severely penalizes TCP flows. Designing new applications to be TCP-friendly is often a difficult task. The idea of the fair queuing (FQ) paradigm as a means to improve congestion control was first introduced by Keshav (1991). While Keshav made a fundamental step toward a new paradigm for the design of congestion control protocols, he did not formalize his results so that his findings could be extended for the design of new congestion control protocols. We make this step and formally define the FQ paradigm as a paradigm for the design of new end-to-end congestion control protocols. This paradigm relies on per-flow FQ scheduling with longest queue drop buffer management in each router. We assume only selfish and noncollaborative end users. Our main contribution is the formal statement of the congestion control problem as a whole, which enables us to demonstrate the validity of the FQ paradigm. We also demonstrate that the FQ paradigm does not adversely impact the throughput of TCP flows and explain how to apply the FQ paradigm for the design of new congestion control protocols. As a pragmatic validation of the FQ paradigm, we discuss a new multicast congestion control protocol called packet pair receiver-driven layered multicast (PLM).
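The "longest queue drop" buffer management paired with per-flow fair queuing can be sketched as below: when the shared buffer is full, the incoming packet is admitted by discarding a packet from the flow currently holding the most packets. The data structures and the 100-packet buffer are illustrative; the dequeue/scheduling side is omitted.

```python
from collections import defaultdict, deque

class LongestQueueDropBuffer:
    def __init__(self, capacity=100):
        self.capacity = capacity
        self.queues = defaultdict(deque)   # per-flow FIFO queues
        self.occupancy = 0

    def enqueue(self, flow_id, packet):
        if self.occupancy >= self.capacity:
            # Shared buffer is full: discard from the longest per-flow queue,
            # which tends to be the flow taking more than its fair share.
            victim = max(self.queues, key=lambda f: len(self.queues[f]))
            self.queues[victim].popleft()
            self.occupancy -= 1
        self.queues[flow_id].append(packet)
        self.occupancy += 1
```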

13.
In the Internet, network congestion is becoming an intractable problem. Congestion results in longer delay, drastic jitter and excessive packet losses. As a result, the quality of service (QoS) of networks deteriorates, and the quality of experience (QoE) perceived by end users suffers. As a powerful supplement to transport-layer (i.e., TCP) congestion control, active queue management (AQM) compensates for TCP's deficiencies in congestion control. In this paper, a novel adaptive traffic prediction AQM (ATPAQM) algorithm is proposed. ATPAQM operates at two granularities. At the coarse granularity it adopts an improved Kalman filtering model to predict traffic and calculates the average packet loss ratio (PLR) over every prediction interval. At the fine granularity, upon receiving a packet, it adjusts the packet dropping probability according to the calculated average PLR. Simulation results show that the ATPAQM algorithm outperforms other algorithms in queue stability, packet loss ratio and link utilization.
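A minimal scalar Kalman filter gives the flavor of the coarse-granularity prediction step; the random-walk state model and the noise variances q, r below are illustrative assumptions, not the paper's "improved" filter.

```python
class ScalarKalman:
    def __init__(self, x0=0.0, p0=1.0, q=0.01, r=0.1):
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def predict(self):
        """One-step-ahead traffic prediction (random-walk model: x_k = x_{k-1})."""
        self.p += self.q
        return self.x

    def update(self, measured_rate):
        """Fold in the traffic actually observed during the last interval."""
        k = self.p / (self.p + self.r)            # Kalman gain
        self.x += k * (measured_rate - self.x)
        self.p *= (1 - k)
        return self.x

kf = ScalarKalman(x0=50.0)
for rate in [52, 55, 61, 58, 64]:                 # per-interval rate samples (Mbit/s)
    predicted = kf.predict()                      # used to pre-set the drop policy
    kf.update(rate)                               # corrected once the interval ends
```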

14.
The BLUE active queue management algorithms
In order to stem the increasing packet loss rates caused by an exponential increase in network traffic, the IETF has been considering the deployment of active queue management techniques such as RED (random early detection) (see Floyd, S. and Jacobson, V., IEEE/ACM Trans. Networking, vol.1, p.397-413, 1993). While active queue management can potentially reduce packet loss rates in the Internet, we show that current techniques are ineffective in preventing high loss rates. The inherent problem with these algorithms is that they use queue lengths as the indicator of the severity of congestion. In light of this observation, a fundamentally different active queue management algorithm, called BLUE, is proposed, implemented and evaluated. BLUE uses packet loss and link idle events to manage congestion. Using both simulation and controlled experiments, BLUE is shown to perform significantly better than RED, both in terms of packet loss rates and buffer size requirements in the network. As an extension to BLUE, a novel technique based on Bloom filters (see Bloom, B., Commun. ACM, vol.13, no.7, p.422-6, 1970) is described for enforcing fairness among a large number of flows. In particular, we propose and evaluate stochastic fair BLUE (SFB), a queue management algorithm which can identify and rate-limit nonresponsive flows using a very small amount of state information.
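The core BLUE update described in the abstract is simple enough to sketch directly: a single marking probability is raised on packet loss and lowered when the link goes idle, with a hold-down (freeze) time between adjustments. The increments d1, d2 and the freeze time below are illustrative values.

```python
import random

class Blue:
    def __init__(self, d1=0.02, d2=0.002, freeze_time=0.1):
        self.pm, self.d1, self.d2 = 0.0, d1, d2
        self.freeze_time, self.last_update = freeze_time, -1e9

    def on_packet_loss(self, now):
        if now - self.last_update > self.freeze_time:
            self.pm = min(1.0, self.pm + self.d1)   # queue overflowed: mark harder
            self.last_update = now

    def on_link_idle(self, now):
        if now - self.last_update > self.freeze_time:
            self.pm = max(0.0, self.pm - self.d2)   # link underused: mark less
            self.last_update = now

    def should_mark(self):
        return random.random() < self.pm            # per-packet mark/drop decision
```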

15.
Worldwide Interoperability for Microwave Access (WiMAX) technology, which is based on the IEEE 802.16 standard, supports different quality of service (QoS) for different services. WiMAX is expected to support QoS in real-time applications such as Voice over Internet Protocol (VoIP). When network congestion occurs, the VoIP bit rate needs to be adjusted to achieve the best speech quality. In this study, we propose a new scheme called Adaptive VoIP Level Coding (AVLC). This scheme takes into consideration network conditions (packet delay and packet loss) and a connection’s modulation scheme. The amount of data that can be transmitted increases with the speed of the modulation scheme. When network congestion occurs, the AVLC scheme prioritizes reducing the bit rate of a connection that has a slower modulation scheme to mitigate congestion. Depending on network conditions, such as modulation scheme, packet delay, packet loss, and residual time slots, we use the G.722.2 codec to adjust each connection’s bit rate. Simulations are conducted to test the performance (network delay, packet loss, number of modulation symbols, and R-score) of the proposed scheme. The simulation results indicate that speech quality is improved by the use of AVLC.

16.
ABE: providing a low-delay service within best effort
IEEE Network, 2001, 15(3): 60-69
We propose alternative best effort (ABE), a novel service for IP networks built on the idea of providing low delay at the expense of possibly less throughput. The objective is to retain the simplicity of the original Internet single-class best-effort service while providing low delay to interactive adaptive applications. With ABE, every best-effort packet is marked as either green or blue. Green packets are guaranteed a low bounded delay in every router. In exchange, green packets are more likely to be dropped (or marked using congestion notification) during periods of congestion than blue packets. For every packet, the choice of color is made by the application based on the nature of its traffic and on global traffic conditions. Typically, an interactive application with real-time deadlines, such as audio, will mark most of its packets as green, as long as the network conditions offer large enough throughput. In contrast, an application that transfers binary data, such as bulk data transfer, will seek to minimize overall transfer time and send blue traffic. We propose router requirements that aim at enforcing benefits for all types of traffic, namely that green traffic achieves low delay and blue traffic receives at least as much throughput as it would in a flat (legacy) best-effort network. ABE is different from differentiated or integrated services in that neither packet color can be said to receive better treatment; thus, flat rate pricing may be maintained, and there is no need for reservations or profiles. We define the ABE service, its requirements, properties, and usage. We discuss the implications of replacing the existing IP best-effort service by the ABE service. We propose and analyze an implementation based on a new scheduling method called duplicate scheduling with deadlines. It supports any mixture of TCP, TCP-friendly, and non-TCP-friendly traffic.

17.
Network support for IP traceback
This paper describes a technique for tracing anonymous packet flooding attacks in the Internet back toward their source. This work is motivated by the increased frequency and sophistication of denial-of-service attacks and by the difficulty in tracing packets with incorrect, or “spoofed,” source addresses. We describe a general purpose traceback mechanism based on probabilistic packet marking in the network. Our approach allows a victim to identify the network path(s) traversed by attack traffic without requiring interactive operational support from Internet service providers (ISPs). Moreover, this traceback can be performed “post mortem”, after an attack has completed. We present an implementation of this technology that is incrementally deployable, (mostly) backward compatible, and can be efficiently implemented using conventional technology.
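A sketch of the edge-sampling flavor of probabilistic packet marking from the traceback literature is shown below: each router marks a packet with a small probability, and the victim rebuilds the attack path from the (start, end, distance) samples. The single marking field, the in-memory "packets", and the 4% marking probability are simplifications for illustration, not the paper's encoding into IP header fields.

```python
import random

MARK_PROB = 0.04   # per-router marking probability p (illustrative value)

def mark_at_router(pkt, router, rng):
    """Edge-sampling marking procedure executed by every router on the path."""
    if rng.random() < MARK_PROB:
        pkt["start"], pkt["end"], pkt["dist"] = router, None, 0
    elif pkt["dist"] is not None:
        if pkt["dist"] == 0:
            pkt["end"] = router        # second endpoint of the sampled edge
        pkt["dist"] += 1               # hop count from the sampled edge to the victim

def send_along_path(path, rng):
    pkt = {"start": None, "end": None, "dist": None}
    for router in path:
        mark_at_router(pkt, router, rng)
    return pkt

# Victim side: collect marks from many attack packets and order the sampled
# edges by distance to rebuild the path back toward the source.
rng = random.Random(7)
path = ["R1", "R2", "R3", "R4"]        # R1 is the router closest to the attacker
edges = set()
for _ in range(20000):
    pkt = send_along_path(path, rng)
    if pkt["dist"] is not None:
        edges.add((pkt["dist"], pkt["start"], pkt["end"]))
# Expect edges like (0,'R4',None), (1,'R3','R4'), (2,'R2','R3'), (3,'R1','R2')
print(sorted(edges))
```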

18.
Under the current best-effort service model of the Internet, network congestion and packet loss are unavoidable, so video streaming must employ effective congestion control and error control to improve performance. This paper analyzes the factors that affect the QoS of Internet video streaming and proposes two QoS solutions: terminal-based and network-based. The discussion focuses on the terminal-based solution, which is more feasible in the current Internet environment.

19.
Supporting packet-data QoS in next generation cellular networks
In the past few years, the Internet has grown beyond anyone's reasonable imagination into a universal communication platform. At the same time the cellular networks, with their ability to reach a person “anywhere, anytime,” have grown impressively as well. Thus the combination of mobile networks and the Internet into the so-called “mobile Internet” promises to be an important technology area. The indications are clear: the cellular networks are rapidly adopting suitable network models for supporting packet data services. A key component of this packet data service model is quality of service (QoS), which is crucial for supporting the disparate services envisioned in future cellular networks. We describe the packet data QoS architecture and specific mechanisms that are being defined for multi-service QoS provisioning in the Universal Mobile Telecommunications System (UMTS).

20.
Video communication with quality of service (QoS) is an important and challenging task. To provide QoS at the application level in the current best-effort Internet, rate control, congestion control and error control are effective approaches. In this paper, we propose a new network-adaptive rate control and unequal loss protection (ULP) scheme in conjunction with TCP-friendly congestion control for scalable video streaming. Our proposed approach is capable of simultaneously controlling congestion and packet loss occurring across the Internet. More specifically, we first dynamically estimate the available network bandwidth on the fly. Then, TCP-friendly congestion control is performed to smoothly adjust the sending rate for transmission of continuous media. Considering the characteristics of scalable video, unequal loss protection at the packet level is adopted for different video layers while performing congestion control. In addition, a fixed-length, priority-based packetization scheme is introduced to enhance the capability of loss protection and improve the efficiency of network-bandwidth utilization. Moreover, rate-distortion (R-D) based bit allocation is proposed to minimize the expected end-to-end distortion. Simulation results demonstrate the effectiveness of the proposed scheme.
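The unequal-loss-protection idea can be sketched as a simple budgeted allocation: within a TCP-friendly rate budget, the base layer gets more redundancy than the enhancement layers. The layer rates and redundancy ratios below are illustrative; the paper's R-D optimized allocation is not reproduced here.

```python
def allocate_ulp(budget_kbps, layer_rates_kbps, redundancy=(0.5, 0.25, 0.1)):
    """Greedily admit layers (base layer first) plus their FEC overhead until
    the TCP-friendly budget is exhausted; returns (sent_layers, fec_per_layer)."""
    sent, fec = [], []
    used = 0.0
    for rate, r in zip(layer_rates_kbps, redundancy):
        need = rate * (1 + r)                 # layer plus its protection overhead
        if used + need > budget_kbps:
            break                             # enhancement layers are dropped first
        sent.append(rate)
        fec.append(rate * r)
        used += need
    return sent, fec

# Example: 800 kbit/s budget, layers of 300/250/200 kbit/s
# -> base and first enhancement layer are sent, with 150 and 62.5 kbit/s of FEC.
print(allocate_ulp(800, [300, 250, 200]))
```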
