Similar Literature
20 similar records found
1.
This work proposes a stochastic model to characterize the transmission control protocol (TCP) over optical burst switching (OBS) networks, which helps in understanding the interaction between the congestion control mechanism of TCP and the characteristic bursty losses of the OBS network. We derive the steady-state throughput of a TCP NewReno source by modeling it as a Markov chain and the OBS network as an open queueing network with rejection blocking. We model all the phases in the evolution of the TCP congestion window and evaluate the number of packets sent and the time spent in each TCP state. We model the mixed assembly process, the burst assembler and disassembler modules, and the core network using queueing theory, and compute the burst loss probability and end-to-end delay in the network. We derive an expression for the throughput of a TCP source by solving the models developed for the source and the network with a set of fixed-point equations. To evaluate the impact of a burst loss on each TCP flow accurately, we define the burst as a composition of per-flow bursts (a per-flow burst being a burst of packets from a single source). Analytical and simulation results validate the model and highlight the importance of accounting for the individual phases in the evolution of the TCP congestion window.
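
The fixed-point coupling sketched in this abstract can be illustrated as follows. This is a minimal sketch, not the paper's model: a generic square-root throughput law stands in for the Markov-chain NewReno model, an Erlang-B formula stands in for the open queueing network with rejection blocking, and all parameter values are assumptions.

```python
def tcp_throughput(p_loss, rtt):
    """Per-flow send rate in packets/s. Placeholder square-root law standing in
    for the paper's Markov-chain model of TCP NewReno."""
    if p_loss <= 0.0:
        return float("inf")
    return (1.0 / rtt) * (1.5 / p_loss) ** 0.5


def burst_loss(erlangs, channels=8):
    """Burst blocking probability. Placeholder Erlang-B recursion standing in
    for the paper's open queueing network with rejection blocking."""
    b = 1.0
    for m in range(1, channels + 1):
        b = erlangs * b / (m + erlangs * b)
    return b


def solve_fixed_point(n_flows=50, rtt=0.1, capacity_pps=5000.0, channels=8, tol=1e-9):
    """Iterate source model <-> network model until the loss probability converges."""
    p, rate = 0.01, 0.0                                  # initial guess for loss probability
    for _ in range(10_000):
        rate = min(tcp_throughput(p, rtt), capacity_pps / n_flows)   # per-flow rate (pkt/s)
        load = n_flows * rate / capacity_pps                         # normalized offered load
        p_new = burst_loss(load * channels, channels)                # offered traffic in Erlangs
        if abs(p_new - p) < tol:
            break
        p = 0.5 * p + 0.5 * p_new                        # damped update helps convergence
    return p, rate
```

Damping the update is a common way to keep such fixed-point iterations from oscillating between the source model and the network model.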

2.
For optical burst-switched (OBS) networks in which TCP is implemented at a higher layer, the loss of bursts can lead to serious degradation of TCP performance. Due to the bufferless nature of OBS, random burst losses may occur even at low traffic loads. Consequently, these random burst losses may be mistakenly interpreted by the TCP layer as congestion in the network. The TCP sender will then trigger congestion control mechanisms, thereby reducing TCP throughput unnecessarily. In this paper, we introduce a controlled retransmission scheme in which the bursts lost due to contention in the OBS network are retransmitted at the OBS layer. The OBS retransmission scheme can reduce the burst loss probability in the OBS core network. It can also reduce the probability that the TCP layer falsely detects congestion, thereby improving TCP throughput. We develop an analytical model for evaluating the burst loss probability in an OBS network that uses a retransmission scheme, and we also analyze TCP throughput when the OBS layer implements burst retransmission. We develop a simulation model to validate the analytical results. Simulation and analytical results show that an OBS layer with controlled burst retransmission provides up to two to three orders of magnitude improvement in TCP throughput over an OBS layer without burst retransmission. This significant improvement is primarily because the TCP layer triggers fewer time-outs when the OBS retransmission scheme is used.
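
To see why retransmission can buy orders of magnitude, assume (purely for illustration; the paper's analytical model also accounts for the extra load that retransmitted bursts add) that successive transmission attempts of a burst fail independently with the same contention probability p and that up to r retransmissions are allowed. The residual loss probability seen by TCP then falls geometrically:

```latex
P_{\mathrm{loss}} \;=\; p^{\,r+1},
\qquad \text{e.g. } p = 10^{-2},\; r = 2 \;\Rightarrow\; P_{\mathrm{loss}} = 10^{-6}.
```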

3.
A major concern in optical burst-switched (OBS) networks is contention, which occurs when two or more bursts contend for the same data channel at the same time. Due to the bufferless nature of OBS networks, these contentions occur randomly at any degree of congestion in the network. When contention occurs at a core node, the node drops bursts according to its dropping policy. Burst loss in OBS networks significantly degrades the throughput of TCP sources in the local access networks, because current TCP congestion control mechanisms enter the slow start phase mainly because of contention rather than genuine heavy congestion. However, there has not been much study of the impact of burst loss on the performance of TCP over OBS networks. To improve TCP throughput over OBS networks, we first introduce a dropping policy with burst retransmission, in which bursts dropped due to contention are retransmitted at the ingress node. We then extend this policy so that, in the event of contention at a core node, the burst that has experienced fewer retransmissions is dropped, in order to reduce the number of times a TCP source enters the slow start phase due to contention. In addition, we propose to limit the number of retransmissions of each burst to prevent severe congestion. For the performance evaluation of the proposed schemes, we provide an analytic throughput model of TCP over OBS networks. Through simulations as well as analytic modeling, it is shown that the proposed dropping policy with burst retransmission can improve TCP throughput over OBS networks compared with an existing dropping policy without burst retransmission.
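
A minimal sketch of the contention-resolution rule described above: keep the burst that has already been retransmitted more times, drop the other, and cap the number of retransmissions per burst. The names and the value of MAX_RETX are illustrative assumptions, not the paper's notation.

```python
from dataclasses import dataclass

MAX_RETX = 3   # illustrative cap on per-burst retransmissions (value is an assumption)


@dataclass
class Burst:
    burst_id: int
    retx_count: int = 0          # times the ingress has already retransmitted this burst


def resolve_contention(scheduled: Burst, arriving: Burst):
    """Keep the burst with more retransmissions; drop the one with fewer,
    since it is cheaper (for TCP) to retransmit that one again."""
    if arriving.retx_count > scheduled.retx_count:
        return arriving, scheduled           # (winner, loser): preempt the scheduled burst
    return scheduled, arriving


def handle_drop(loser: Burst, ingress_queue: list):
    """Ingress-driven retransmission with a retransmission limit, so the scheme
    does not aggravate genuine congestion."""
    if loser.retx_count < MAX_RETX:
        loser.retx_count += 1
        ingress_queue.append(loser)          # ingress sends it again
    # else: give up and let TCP recover end to end
```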

4.
It is well known that the bufferless nature of optical burst-switching (OBS) networks causes random burst loss even at low traffic loads. When TCP is used over OBS, these random losses make the TCP sender decrease its congestion window even though the network may not be congested, resulting in significant TCP throughput degradation. In this paper, we propose a multi-layer loss-recovery approach with automatic repeat request (ARQ) and Snoop for OBS networks in which TCP is used at the transport layer. We evaluate the performance of Snoop and ARQ at the lower layer over a hybrid IP-OBS network. Based on the simulation results, the proposed multi-layer hybrid ARQ + Snoop approach outperforms all other approaches even at high loss probability. We also develop an analytical model for end-to-end TCP throughput and verify it with simulation results.
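
The Snoop half of the multi-layer approach can be sketched as an agent at the OBS edge that caches unacknowledged segments, retransmits locally, and suppresses duplicate ACKs; the class layout and method names here are illustrative assumptions, not the paper's design.

```python
class SnoopAgent:
    """Minimal sketch of a Snoop-style agent at the edge of the OBS network."""

    def __init__(self):
        self.cache = {}                      # seq -> cached segment payload

    def on_data_from_sender(self, seq, segment):
        self.cache[seq] = segment            # keep a copy until it is acknowledged
        return segment                       # forward toward the OBS core

    def on_ack_from_receiver(self, ack_no, dup_count, link):
        # Purge everything the receiver has cumulatively acknowledged.
        for seq in [s for s in self.cache if s < ack_no]:
            del self.cache[seq]
        if dup_count >= 1 and ack_no in self.cache:
            link.retransmit(self.cache[ack_no])   # local (lower-layer) recovery
            return None                      # suppress the duplicate ACK
        return ack_no                        # otherwise forward the ACK to the sender
```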

5.
In transmission control protocol (TCP) over optical burst switching (OBS) networks, the TCP window size and the OBS parameters, including the assembly period and the burst dropping probability, affect network performance. In this paper, a parameter, the window data dropping probability (WDDP), is defined to analyze the impact of assembly and burst loss on network performance in terms of round-trip time and throughput. To reduce the WDDP without introducing an extra assembly delay penalty, we propose a novel TCP-window-based, flow-oriented assembly algorithm called dynamic assembly period (DAP). In traditional OBS assembly algorithms, packets with the same destination and class of service (CoS) are assembled into the same burst, i.e., packets from different sources are assembled into one burst; in that case, one burst loss affects multiple TCP sources. In DAP, the packets from one TCP connection are assembled into their own bursts, which avoids this situation. By comparing two consecutive burst lengths, DAP can dynamically track the variation of the TCP window and update the assembly period for the next assembly. In addition, an ingress node architecture for flow-oriented assembly is designed. The performance of DAP is evaluated and compared with that of a fixed assembly period (FAP) over a single TCP connection and over multiple TCP connections. The results show that DAP performs better than FAP over almost the entire range of burst dropping probabilities.
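
The core DAP step, updating the assembly period from two consecutive per-flow burst lengths, might look roughly like the sketch below. The extrapolation rule, the RTT cap, and the bounds are assumptions made for illustration, not the paper's exact update.

```python
def next_assembly_period(period, prev_len, curr_len, rtt_est,
                         min_period=0.5e-3, max_period=8e-3):
    """Predict the next per-flow window from the growth between two consecutive
    bursts (in bytes), then choose a period long enough to collect roughly one
    predicted window while keeping the assembly delay well below the flow's RTT."""
    growth = curr_len / prev_len if prev_len > 0 else 1.0
    predicted_len = curr_len * growth               # naive extrapolation of window growth
    arrival_rate = curr_len / period                # bytes/s observed during the last period
    period_next = predicted_len / arrival_rate      # time to collect one predicted window
    period_next = min(period_next, 0.25 * rtt_est)  # bound the delay added by assembly
    return max(min_period, min(period_next, max_period))
```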

6.
FAST TCP is important for data-intensive applications since it reacts to both packet loss and delay to detect network congestion. This paper provides a continuous-time model and an extensive stability analysis of the FAST TCP congestion-control mechanism in bufferless optical burst switched (OBS) networks. The paper first shows that random burst contentions are essential to stabilize the network, but cause throughput degradation in FAST TCP flows when a burst carrying all the packets from a single round is dropped. Second, it shows that FAST TCP is vulnerable to burst delay and fails to detect network congestion because the round-trip time varies little, and is therefore unstable. Finally, it shows that introducing extra delay by implementing burst retransmission stabilizes FAST TCP over OBS. The paper proves that FAST TCP is not stable over barebone OBS, but is locally, exponentially, and asymptotically stable over OBS with burst retransmission.
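
For reference, the discrete-time window update commonly given for FAST TCP (the paper analyzes a continuous-time version of this law; gamma and alpha are the usual FAST parameters):

```latex
w \;\leftarrow\; \min\!\left\{\, 2w,\;\; (1-\gamma)\,w + \gamma\!\left(\frac{\mathrm{baseRTT}}{\mathrm{RTT}}\,w + \alpha\right) \right\},
\qquad \gamma \in (0,1],\;\; \alpha > 0 .
```

When queueing delay is negligible, RTT stays close to baseRTT and the update keeps adding roughly alpha per step regardless of load, which is precisely the failure to sense congestion in bufferless OBS that the abstract points to.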

7.
Burst assembly is one of the key factors affecting TCP performance in optical burst switching (OBS) networks. When the TCP congestion window is small, a fixed-delay burst assembler waits unnecessarily long, which increases the end-to-end delay and thus decreases the TCP goodput. On the other hand, when the TCP congestion window becomes larger, a fixed-delay burst assembler may unnecessarily generate a large number of small bursts, which increases the overhead and decreases the correlation gain, resulting in a reduction in TCP goodput. In this paper, we propose adaptive burst assembly algorithms that use the congestion window sizes of TCP flows. Using simulations, we show that using the congestion window size in the burst assembly algorithm significantly improves the TCP goodput (by up to 38.4% on average and by up to 173.89% for individual flows) compared with timer-based assembly, even when the timer-based assembler uses the optimum assembly period. Simulations also show that even when estimated congestion window sizes obtained via passive measurements are used, the TCP goodput improvements remain close to those obtained with exact congestion window values.
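
A hedged sketch of how a congestion-window-driven assembler could be built, including a crude passive window estimate observable at the edge node; the estimator, the timeout formula, and all constants are assumptions, not the paper's algorithms.

```python
class PassiveCwndEstimator:
    """Rough passive estimate of a flow's congestion window: bytes sent but not
    yet acknowledged, as seen by an edge node that observes both the data and
    the ACK direction of the flow."""

    def __init__(self):
        self.highest_sent = 0        # highest sequence number seen from the sender
        self.highest_acked = 0       # highest cumulative ACK seen from the receiver

    def on_data(self, end_seq):
        self.highest_sent = max(self.highest_sent, end_seq)

    def on_ack(self, ack_no):
        self.highest_acked = max(self.highest_acked, ack_no)

    def cwnd_bytes(self):
        return max(self.highest_sent - self.highest_acked, 1)


def assembly_timeout(cwnd_bytes, access_rate_bps, t_min=0.5e-3, t_max=8e-3):
    """Per-flow assembly timeout sized to collect roughly one window of data
    arriving over the access link."""
    t = 8.0 * cwnd_bytes / access_rate_bps
    return max(t_min, min(t, t_max))
```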

8.
In TCP over optical burst switching (OBS) networks, multiple consecutive packet losses are common, since an optical burst usually contains a number of consecutive packets from the same TCP sender. It has been shown that over OBS networks Reno and New-Reno achieve lower throughput than SACK, which addresses their inefficiency in dealing with multiple consecutive packet losses. However, SACK adopts complex mechanisms not only in the sender's but also in the receiver's protocol stack, and is thus harder to deploy. In this paper we propose B-Reno, a new TCP implementation designed for TCP over OBS networks. Using simple modifications to New-Reno at the sender's protocol stack only, B-Reno overcomes the inefficiency of Reno and New-Reno in dealing with multiple consecutive packet losses and thus improves their throughput over OBS networks. Moreover, B-Reno achieves performance similar to that of SACK over OBS networks while avoiding SACK's deployment difficulty, which stems from complex mechanisms in both the sender's and the receiver's protocol stacks.

9.
Random burst contention losses plague the performance of optical burst switched (OBS) networks. Such random losses occur even under low network load, owing to the nature of the wavelength and routing algorithms. Since a burst may carry many packets from many TCP sources, its loss can mislead the TCP sources into concluding that the underlying optical network is congested. Accordingly, TCP reduces its sending rate and switches to either fast retransmission or slow start. This reaction is unwarranted in TCP over OBS networks, as the optical network may not be congested when such random burst contention losses occur. Hence, these losses must be addressed in order to improve the performance of TCP over OBS networks. Existing work in the literature achieves this objective at the cost of violating the semantics of OBS and/or TCP, and several other works make delay-inducing assumptions. In our work, we introduce a new layer, called the Adaptation Layer, between the TCP and OBS layers. This layer uses burst retransmission to mitigate the effect on TCP of burst losses due to contention, by leveraging the difference between the round-trip times of TCP and OBS. We achieve our objective with the added advantage of keeping the semantics of both layers intact.
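
The "leverage the RTT difference" idea can be reduced to a single admission test at the adaptation layer: retransmit a lost burst only if it can still arrive before the TCP sender's retransmission timer is likely to fire. The safety factor below is an assumption for illustration, not the paper's rule.

```python
def should_retransmit_burst(elapsed_since_send, obs_rtt_est, tcp_rto_est, safety=0.8):
    """Adaptation-layer decision sketch: OBS-layer retransmission is worthwhile
    only while it can complete well inside the (much longer) TCP timeout."""
    return elapsed_since_send + obs_rtt_est < safety * tcp_rto_est
```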

10.
This paper introduces a novel congestion detection scheme for high-bandwidth TCP flows over optical burst switching (OBS) networks, called statistical additive increase multiplicative decrease (SAIMD). SAIMD maintains and analyzes a number of previous round-trip times (RTTs) at the TCP senders in order to identify the confidence with which a packet loss event is due to network congestion. The confidence is derived by positioning the short-term RTT within the spectrum of long-term historical RTTs. The confidence corresponding to the packet loss is then used in the proposed policy for adjusting the TCP congestion window. We show through extensive simulation that the proposed scheme can effectively solve the false congestion detection problem and significantly outperform conventional TCP counterparts without losing fairness. The advantages of our scheme come at the expense of additional overhead at the SAIMD TCP senders. Based on the proposed congestion control algorithm, a throughput model is formulated and verified by simulation results.
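
The SAIMD idea of turning RTT history into a congestion confidence and scaling the window decrease by it can be sketched as below; the history sizes, the percentile-style confidence measure, and the decrease rule are assumptions for illustration, not the paper's exact policy.

```python
from collections import deque
import statistics


class SaimdSender:
    """Sketch: on a loss, reduce cwnd in proportion to the confidence that the
    loss reflects congestion, judged from where recent RTTs sit in the
    long-term RTT history."""

    def __init__(self, long_n=2000, short_n=10):
        self.long_hist = deque(maxlen=long_n)    # long-term RTT history
        self.short_hist = deque(maxlen=short_n)  # most recent RTTs
        self.cwnd = 10.0

    def on_rtt_sample(self, rtt):
        self.long_hist.append(rtt)
        self.short_hist.append(rtt)

    def congestion_confidence(self):
        """Fraction of historical RTTs below the current short-term mean:
        close to 1.0 means RTTs are unusually high (likely congestion),
        close to 0.0 means RTTs look normal (likely a random burst loss)."""
        if len(self.long_hist) < 100 or not self.short_hist:
            return 1.0                           # be conservative without history
        short_mean = statistics.fmean(self.short_hist)
        below = sum(1 for r in self.long_hist if r < short_mean)
        return below / len(self.long_hist)

    def on_loss(self):
        c = self.congestion_confidence()
        self.cwnd = max(2.0, self.cwnd * (1.0 - 0.5 * c))   # scaled multiplicative decrease

    def on_ack(self):
        self.cwnd += 1.0 / self.cwnd             # usual additive increase per ACK
```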

11.
In this paper, we propose and verify a modified version of TCP Reno that we call TCP Congestion Control Enhancement for Random Loss (CERL). We compare the performance of TCP CERL, using simulations conducted in ns-2, to the following other TCP variants: TCP Reno, TCP NewReno, TCP Vegas, TCP WestwoodNR, and TCP Veno. TCP CERL is a sender-side modification of TCP Reno that improves the performance of TCP in wireless networks subject to random losses. It uses the RTT measurements made throughout the duration of the connection to estimate the queue length of the link, and from that estimates the congestion status. By distinguishing random losses from congestion losses based on a dynamically set threshold, TCP CERL addresses the well-known performance degradation of TCP over channels subject to random losses. Unlike other TCP variants, TCP CERL does not reduce the congestion window and slow start threshold when a random loss is detected. It is very simple to implement, yet provides a significant throughput gain over the other TCP variants mentioned above. In single-connection tests, TCP CERL achieved throughput gains of 175%, 153%, 85%, 64%, and 88% over TCP Reno, TCP NewReno, TCP Vegas, TCP WestwoodNR, and TCP Veno, respectively. In tests with multiple coexisting connections, TCP CERL achieved throughput improvements of 211%, 226%, 123%, 70%, and 199% over the same variants, respectively.
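
A hedged sketch of CERL-style loss classification: the backlog estimate is the usual RTT-based (Vegas-style) formula, while the dynamic-threshold rule (a fraction of the largest backlog observed) and its constant are assumptions rather than the paper's exact definitions.

```python
class CerlClassifier:
    """Estimate the bottleneck-link queue from RTT samples and treat a loss as
    congestion-induced only when the estimate exceeds a dynamic threshold."""

    def __init__(self, alpha=0.55):
        self.alpha = alpha                 # threshold fraction (assumed value)
        self.rtt_min = float("inf")
        self.backlog_max = 0.0
        self.last_backlog = 0.0

    def on_rtt_sample(self, rtt, cwnd):
        self.rtt_min = min(self.rtt_min, rtt)
        backlog = cwnd * (rtt - self.rtt_min) / rtt      # est. packets queued at the link
        self.backlog_max = max(self.backlog_max, backlog)
        self.last_backlog = backlog

    def loss_is_congestion(self):
        return self.last_backlog >= self.alpha * self.backlog_max


def on_loss(cwnd, ssthresh, clf):
    """CERL-style reaction: cut the window only for congestion losses;
    keep cwnd and ssthresh unchanged for losses judged random."""
    if clf.loss_is_congestion():
        ssthresh = max(cwnd / 2.0, 2.0)
        cwnd = ssthresh
    return cwnd, ssthresh
```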

12.
Transmission Control Protocol (TCP) performance over Optical Burst Switching (OBS) is experimentally investigated on an OBS network testbed; the experiments show that burst losses lead to a significant drop in the available TCP bandwidth. Two mechanisms are introduced to improve TCP performance: one concerns burst assembly optimization, and the other is a novel assembly and scheduling mechanism that reduces burst losses.

13.
Most of the recent research on TCP over heterogeneous wireless networks has concentrated on differentiating between packet drops caused by congestion and those caused by link errors, to avoid the significant throughput degradation that results when the TCP sending window is frequently shut down in response to packet losses caused not by congestion but by transmission errors over wireless links. However, TCP also exhibits inherent unfairness toward connections with long round-trip times or connections traversing multiple congested routers. This problem is aggravated by the difference in bit-error rates between wired and wireless links in heterogeneous wireless networks. In this paper, we apply the TCP Bandwidth Allocation (TBA) algorithm, which we have proposed previously, to improve TCP fairness over heterogeneous wireless networks with combined wireless and wireline links. To inform the sender when congestion occurs, we propose to apply Wireless Explicit Congestion Notification (WECN). By controlling the TCP window behavior with TBA and WECN, congestion control and error-loss recovery are effectively separated. A further enhancement is also incorporated to smooth traffic bursts. Simulation results show that the combined TBA and WECN mechanism not only improves TCP fairness but also maintains good throughput in the presence of wireless losses. A salient feature of TBA is that its main functions are implemented in the access node, thus simplifying the sender-side implementation.

14.
TCP Veno: TCP enhancement for transmission over wireless access networks
Wireless access networks in the form of wireless local area networks, home networks, and cellular networks are becoming an integral part of the Internet. Unlike in wired networks, random packet loss due to bit errors is not negligible in wireless networks, and this causes significant performance degradation of the transmission control protocol (TCP). We propose and study a novel end-to-end congestion control mechanism called TCP Veno that is simple and effective for dealing with random packet loss. A key ingredient of Veno is that it monitors the network congestion level and uses that information to decide whether packet losses are likely to be due to congestion or to random bit errors. Specifically: (1) it refines the multiplicative decrease algorithm of TCP Reno (the most widely deployed TCP version in practice) by adjusting the slow-start threshold according to the perceived network congestion level rather than by a fixed drop factor, and (2) it refines the linear increase algorithm so that the connection can stay longer in an operating region in which the network bandwidth is fully utilized. Based on extensive network testbed experiments and live Internet measurements, we show that Veno can achieve significant throughput improvements without adversely affecting other concurrent TCP connections, including other concurrent Reno connections. In typical wireless access networks with a 1% random packet loss rate, throughput improvement of up to 80% can be demonstrated. A salient feature of Veno is that it modifies only the sender-side protocol of Reno without changing the receiver-side protocol stack.
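
The two refinements can be summarized in code. The backlog estimate and the beta = 3 and 4/5 constants follow the commonly cited description of Veno, but this is a sketch, not a reference implementation.

```python
def veno_loss_response(cwnd, rtt, base_rtt, beta=3.0):
    """Veno-style multiplicative decrease: estimate this flow's backlog from the
    gap between expected and actual rates; a loss seen with a small backlog is
    judged random and triggers a gentler (4/5) backoff than the usual halving."""
    expected = cwnd / base_rtt                  # rate if no packets were queued
    actual = cwnd / rtt                         # measured rate
    backlog = (expected - actual) * base_rtt    # ~ packets of this flow in the queue
    if backlog < beta:
        ssthresh = cwnd * 4.0 / 5.0             # random loss: back off gently
    else:
        ssthresh = cwnd / 2.0                   # congestion loss: usual halving
    return max(ssthresh, 2.0)


def veno_window_increase(cwnd, backlog, rtt_parity, beta=3.0):
    """Veno-style linear-increase refinement: grow by one segment per RTT while
    the backlog stays below beta; once the bandwidth looks fully used, grow
    only every other RTT (rtt_parity alternates 0/1 each RTT)."""
    if backlog < beta or rtt_parity == 0:
        return cwnd + 1.0
    return cwnd
```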

15.
Random contentions occur in optical burst-switched (OBS) networks because of one-way signaling and the lack of optical buffers. These contentions can occur at low loads and are not necessarily an indication of congestion. The loss they cause, however, makes TCP at the transport layer reduce its send rate drastically, which is unnecessary and reduces overall performance. In this paper, we propose forward segment redundancy (FSR), a proactive technique to prevent data loss during random contentions in the optical core. With FSR, redundant TCP segments are appended to each burst at the edge and redundant burst segmentation is implemented in the core, so that when a contention occurs, primarily redundant data are dropped. We develop an analytical throughput model for TCP over OBS with FSR and perform extensive simulations. FSR is found to improve TCP performance by an order of magnitude at high loads and by more than a factor of two at lower loads.
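
A hedged sketch of the FSR mechanics at the edge and in the core; the redundancy ratio and the choice to replicate the tail segments are illustrative assumptions, not the paper's exact encoding.

```python
def assemble_fsr_burst(segments, redundancy=0.2):
    """Edge node: append redundant copies of the last segments to the tail of
    the burst. Returns the padded burst and the number of redundant segments."""
    n_red = max(1, int(len(segments) * redundancy))
    return segments + segments[-n_red:], n_red


def drop_on_contention(burst, n_redundant, overlap_segments):
    """Core node: when a contention overlaps the tail of the burst, segment the
    burst and drop from the redundant tail first, so original data survives
    whenever the overlap is no larger than the redundancy."""
    drop = min(overlap_segments, len(burst))
    kept = burst[:len(burst) - drop]
    lost_original = max(0, drop - n_redundant)   # originals lost only if overlap > redundancy
    return kept, lost_original
```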

16.
TCP Vegas can improve performance in multi-hop ad hoc networks since its rate-based congestion control mechanism can proactively avoid congestion and packet losses. Nevertheless, Vegas cannot take full advantage of the available bandwidth, because incorrect bandwidth estimates may occur due to frequent topology changes caused by node mobility. This paper proposes TCP-Gvegas, an improved TCP Vegas for multi-hop ad hoc networks based on grey prediction theory, which has prediction and self-adaptation capabilities and enhances three aspects of the congestion avoidance phase. Lower-layer parameters are incorporated into the throughput model to improve the accuracy of the theoretical throughput. Grey prediction of future throughput is used to improve online control. An exploration method based on Q-learning, together with a round-trip-time quantizer, is applied to search for a more reasonable congestion window adjustment. In addition, a convergence analysis of the grey prediction using Lyapunov's second method proves that a shorter prediction input length implies a faster convergence rate. Simulation results show that TCP-Gvegas achieves substantially higher throughput and lower delay than Vegas in multi-hop ad hoc networks.
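
The grey-prediction component can be illustrated with the standard GM(1,1) model (the abstract invokes grey prediction theory without fixing a variant, so the use of GM(1,1) here is an assumption); its forecast of near-future throughput would feed the congestion-avoidance decisions described above.

```python
import numpy as np


def gm11_predict(x0, steps=1):
    """Standard GM(1,1) grey-model forecast. x0: recent (positive) throughput
    samples; returns the next `steps` forecast values."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                   # accumulated generating sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # background (mean) values
    B = np.column_stack((-z1, np.ones_like(z1)))
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]          # developing coefficient a, grey input b
    n = len(x0)

    def x1_hat(k):                                       # fitted accumulated series (0-based index)
        return (x0[0] - b / a) * np.exp(-a * k) + b / a

    return [x1_hat(n + i) - x1_hat(n + i - 1) for i in range(steps)]


# Example: extrapolate the next throughput sample from four observations.
print(gm11_predict([10.2, 11.5, 12.1, 13.0], steps=1))
```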

17.
The most important design goal in optical burst switching (OBS) networks is to reduce burst loss resulting from resource contention; in particular, the higher the degree of congestion in the network, the higher the burst loss rate. Burst loss performance can be improved by employing appropriate congestion control. In this paper, to actively avoid contentions, we propose a dynamic load-aware congestion control scheme that operates on the highest of the loads of all links along the path between each pair of ingress and egress nodes in an OBS network (the 'peak load'). We also propose an algorithm that dynamically determines a load threshold for adjusting the burst sending rate according to the traffic load in the network. Further, a simple signalling method is developed for the proposed congestion control scheme. The proposed scheme aims to (1) reduce the burst loss rate in OBS networks and (2) maintain reasonable throughput and fairness. Simulation results show that the proposed scheme reduces the burst loss rate significantly, compared to existing OBS protocols (with and without congestion control), while maintaining reasonable throughput and fairness. Simulation results also show that our scheme keeps the signalling overhead due to congestion control at a low level.
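
A hedged sketch of the peak-load signalling and the ingress reaction; the multiplicative back-off and probe rule, the step sizes, and the threshold blend are assumptions made for illustration, not the paper's algorithm.

```python
def peak_load(path_link_loads):
    """The quantity signalled back to the ingress: the highest utilization
    among the links on the path."""
    return max(path_link_loads)


def dynamic_threshold(recent_peaks, base=0.7):
    """Illustrative dynamic threshold: blend a base utilization with the
    recent average peak load of the network."""
    avg = sum(recent_peaks) / len(recent_peaks) if recent_peaks else base
    return 0.5 * base + 0.5 * avg


def adjust_burst_rate(rate, peak, threshold, min_rate, max_rate, step=0.1):
    """Ingress reaction: back off multiplicatively when the signalled peak load
    exceeds the threshold, otherwise probe upward gently."""
    if peak > threshold:
        rate *= 1.0 - step           # some link on the path is getting congested
    else:
        rate *= 1.0 + step / 2       # headroom available: speed up cautiously
    return max(min_rate, min(rate, max_rate))
```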

18.
TCP Throughput Enhancement over Wireless Mesh Networks
TCP is the predominant technology used on the Internet to provide upper-layer applications with reliable data transfer and congestion control services. Furthermore, it is expected that traditional TCP applications (e.g., Internet access) will continue to constitute the major traffic component during the initial deployment of wireless mesh networks. However, TCP is known for its poor throughput performance in wireless multihop transmission environments. For this article, we conducted simulations to examine the impact of two channel-interference problems, the hidden-terminal and exposed-terminal problems, on TCP transmissions over wireless mesh networks. We also propose a multichannel assignment algorithm for constructing a wireless mesh network that satisfies the spatial channel reuse property and eliminates the hidden-terminal problem. The simulation results demonstrate the effectiveness of the proposed approach in improving the performance of TCP in wireless multihop networks.
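
One generic way to realize a hidden-terminal-free multichannel assignment is to greedily color a link-conflict graph so that mutually interfering links never share a channel; this sketch illustrates that idea only and is not the paper's algorithm.

```python
def assign_channels(links, conflicts, n_channels):
    """Give each link the lowest-numbered channel not used by any conflicting
    link. `conflicts` maps each link to the set of links it would interfere
    with (e.g., links within two hops) if they shared a channel."""
    assignment = {}
    for link in links:
        used = {assignment[o] for o in conflicts.get(link, ()) if o in assignment}
        channel = next((c for c in range(n_channels) if c not in used), None)
        if channel is None:
            raise ValueError(f"not enough channels to avoid interference at {link}")
        assignment[link] = channel
    return assignment
```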

19.
Dynamics of TCP traffic over ATM networks
This paper investigates the performance of transmission control protocol (TCP) connections over ATM networks without ATM-level congestion control and compares it to the performance of TCP over packet-based networks. In simulations of congested networks, the effective throughput of TCP over ATM can be quite low when cells are dropped at the congested ATM switch. The low throughput is due to wasted bandwidth as the congested link transmits cells from “corrupted” packets, i.e., packets in which at least one cell has been dropped by the switch. The authors investigate two packet-discard strategies that alleviate the effects of fragmentation. Partial packet discard, in which a packet's remaining cells are discarded after one of its cells has been dropped, somewhat improves throughput. They then introduce early packet discard, a strategy in which the switch drops whole packets prior to buffer overflow. This mechanism prevents fragmentation and restores throughput to maximal levels.
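
The two discard strategies reduce to a few lines of per-cell admission logic, written in the spirit of the mechanisms the paper evaluates (the state layout and threshold handling here are illustrative):

```python
def on_cell_arrival(is_first_cell_of_packet, queue_len, buffer_size,
                    epd_threshold, vc_state):
    """Per-cell admission combining early packet discard (EPD) and partial
    packet discard (PPD). vc_state['discarding'] records that the current
    packet on this virtual circuit is being dropped in its entirety."""
    if is_first_cell_of_packet:
        # EPD: above the threshold, refuse to *start* a new packet so that
        # packets already in progress can complete and none are fragmented.
        vc_state["discarding"] = queue_len >= epd_threshold
    if vc_state["discarding"]:
        return False                      # drop (EPD, or PPD tail drop)
    if queue_len >= buffer_size:
        # Buffer overflow mid-packet: drop this cell and, per PPD, the
        # packet's remaining cells as well.
        vc_state["discarding"] = True
        return False
    return True                           # accept the cell into the buffer
```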

20.
Based on an analysis of how optical burst switching (OBS) networks affect TCP performance, this paper studies how the number of packets from a single TCP/IP connection carried in one burst affects the throughput of TCP Reno, and obtains a closed-form expression for the throughput in terms of the burst loss rate, the number of packets per burst, and the round-trip time (RTT). The analysis is validated by simulation. Both analytical and simulation results show that, when the access link bandwidth is large, there exists an optimal number of packets per burst that maximizes TCP throughput.
