Similar Documents
 Found 20 similar documents (search time: 31 ms)
1.
This paper considers the interaction of HTTP with several transport protocols, including TCP, Transaction TCP, a UDP-based request-response protocol, and HTTP with persistent TCP connections. We present an analytic model for each of these protocols and use that model to evaluate the network overhead of carrying HTTP traffic across a variety of network characteristics. The model includes an analysis of the transient effects of TCP slow-start. We validate the model by comparing it against network packet traces measured with two protocols (HTTP and persistent HTTP) over local and wide-area networks. We show that the model is accurate to within 5% of measured performance for wide-area networks, but can underestimate latency when bandwidth is high and delay is low. We use the model to compare the connection-setup costs of these protocols, bounding the possible performance improvement. We evaluate these costs for a range of network characteristics, finding that setup optimizations are relatively unimportant for current modem, ISDN, and LAN users but can provide moderate to substantial performance improvement over high-speed WANs. We also use the model to predict performance over future network characteristics.
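The setup-cost comparison above hinges on how many round trips slow-start adds to a short transfer. A minimal sketch of that part of such a model (assuming an initial window of one segment that doubles each RTT, plus one RTT each for connection setup and the request; function and parameter names are illustrative, not the paper's):

```python
def slowstart_rounds(segments: int) -> int:
    """Round trips TCP slow-start needs to deliver `segments` segments,
    assuming an initial window of 1 segment that doubles each RTT."""
    sent, cwnd, rounds = 0, 1, 0
    while sent < segments:
        sent += cwnd
        cwnd *= 2
        rounds += 1
    return rounds

def http_fetch_time(size_bytes, rtt_s, bandwidth_bps, mss=1460):
    """Rough latency of one HTTP-over-TCP fetch: connection setup (1 RTT),
    the request (1 RTT), the slow-start round trips, and serialization delay."""
    segments = -(-size_bytes // mss)          # ceiling division
    rounds = slowstart_rounds(segments)
    return 2 * rtt_s + rounds * rtt_s + size_bytes * 8 / bandwidth_bps
```

The two leading RTT terms are exactly the per-connection setup cost that persistent connections amortize across requests.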

2.
The Internet has been growing tremendously in recent years, and applications like web browsing are becoming increasingly popular. In a collective effort to provide seamless access to the Internet, wireless equipment manufacturers and service providers are developing 3G wireless systems that efficiently support current and future Internet applications. In this paper, we evaluate the performance and capacity of a 3G wireless data system based on the IS-2000 standard. We consider web browsing as the common application for all users and evaluate the system performance for single and parallel web browsing sessions. We perform this study through a detailed simulation of a web traffic model described by distributions of the number of objects per page, object size, page request size, and page reading time. The simulation includes the HTTP and TCP/IP protocols, link-level recovery, radio resource management, mobility, the channel model, and delays in the Internet and the radio access network. We quantify important system attributes such as average page download time and system throughput (Kb/s per carrier per sector). We also evaluate normalized object download time, normalized page download time, the performance penalty due to link errors, the link-layer buffer sizes needed, channel holding time, average power used, and the distribution of the power used in the system.
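A web traffic model of this kind can be sketched by drawing each page from per-page distributions. The distribution families and parameters below are illustrative stand-ins, not the ones calibrated in the paper:

```python
import random

def browsing_session(pages, rng=random.Random(7)):
    """Draw one synthetic web-browsing session from a simple traffic model:
    per-page object count, object sizes, and reading time. All parameter
    choices here are illustrative, not calibrated values."""
    session = []
    for _ in range(pages):
        n_objects = max(1, int(rng.lognormvariate(1.0, 0.8)))
        # heavy-tailed object sizes (bytes); paretovariate() returns >= 1
        objects = [int(rng.paretovariate(1.2) * 1000) for _ in range(n_objects)]
        reading_s = rng.expovariate(1 / 30)   # mean 30 s reading time
        session.append({"objects": objects, "reading_s": reading_s})
    return session
```

Feeding sessions like these into a protocol simulator is what lets the study report distribution-level metrics (normalized download times, power distribution) rather than single averages.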

3.
Adding HDTV broadcast channels with the VLC media player: practical experience   (Cited by: 1; self-citations: 0; by others: 1)
VLC is an open-source, cross-platform multimedia player. It plays many audio and video formats (MPEG-1, MPEG-2, MPEG-4, DivX, MP3, Ogg, and others, as well as DVD, VCD, audio CD, and various streaming protocols), and it can also transcode and stream (UDP unicast and multicast, HTTP, etc.), serving as a streaming server designed mainly for broadband networks. Based on this software, combined with the DM6400 statistical-multiplexing gateway equipped with Gigabit Ethernet I/O cards, the BNG6104 multiplexer/scrambler/modulator, a playout server, and HDTV set-top boxes, a trial broadcast of two HDTV channels was carried out.

4.
End-to-end performance of web applications degrades seriously in mobile networks because of the inefficiencies of HTTP and TCP in lossy and asymmetrical environments. In this article, we discuss a common architecture for web accelerators that embraces both HTTP and TCP optimization. Based on feasibility analyses of various acceleration technologies in asymmetrical mobile networks, the components of the accelerator and their functions are described. To explain how to choose the accelerator's functional entities and how to optimize their parameters in an asymmetrical environment, we carried out three simulation-based analyses. First, we characterize the correlation between user-perceived web response time and asymmetrical link characteristics. We then show how strongly HTTP compression affects web response time when uplink resources are limited. Our study also demonstrates that caching schemes perform poorly when uplink quality degrades. In addition, potential methods for improving web response time and their design criteria are discussed.

5.
Quick User Datagram Protocol (UDP) Internet Connections (QUIC) is an experimental, low-latency transport protocol proposed by Google, which is still being improved and specified in the Internet Engineering Task Force (IETF). The viewer's quality of experience (QoE) in HTTP adaptive streaming (HAS) applications may be improved with the help of QUIC's low latency, improved congestion control, and multiplexing features. We measured the streaming performance of QUIC on wireless and cellular networks in order to understand whether the problems that occur when running HTTP over TCP can be reduced by using HTTP over QUIC. The performance of QUIC was tested in the presence of network-interface changes caused by the mobility of the viewer. We observed that QUIC resulted in quicker starts of media streams and a better streaming and seeking experience, especially during higher levels of network congestion, and that it outperformed TCP when the viewer was mobile and switched between wireless networks. Furthermore, we measured QUIC's performance in an emulated network with various amounts of loss and delay to evaluate how QUIC's multiplexing feature would benefit HAS applications. We compared the performance of HAS applications multiplexing video streams with HTTP/1.1 over multiple TCP connections, with HTTP/2 over one TCP connection, and with QUIC over one UDP connection. We observed that QUIC provided better performance than TCP on a network with large delays; however, QUIC did not provide a significant improvement when the loss rate was large. Finally, we analyzed the performance of the congestion control mechanisms implemented by QUIC and TCP and tested their ability to provide fairness among streaming clients. We found that QUIC always provided fairness among QUIC flows, but was not always fair to TCP.
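QUIC's advantage under loss comes largely from avoiding TCP's head-of-line blocking across multiplexed streams. A toy model of that effect (function and parameter names are mine, not from the paper):

```python
def stream_delays(n_streams, lossy_stream, rto_s, shared_ordering):
    """Toy head-of-line-blocking model: a lost packet on `lossy_stream`
    costs one retransmission delay. Over TCP's single ordered byte stream
    (shared_ordering=True) every multiplexed stream stalls behind the
    retransmission; with QUIC's independently delivered streams
    (shared_ordering=False) only the lossy stream waits."""
    return [rto_s if shared_ordering or s == lossy_stream else 0.0
            for s in range(n_streams)]
```

In this simplified view, three HAS streams multiplexed over one TCP connection all pay the retransmission delay, while over QUIC two of them proceed unimpeded, which is consistent with the measured gap narrowing at high loss rates where every stream tends to suffer its own losses.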

6.
赵明, 万倩, 白鹤, 李博. 《电视技术》, 2012, 36(14): 58-61
Web authentication pushes an authentication page to the user via HTTP redirection; the user enters a username and password in the client's browser to complete authentication and obtain the corresponding authorization. Studying the relevant performance-testing methods helps cable-TV network operators find bottlenecks in such systems and better carry out the construction and upgrading of broadband data networks for the broadcasting industry.

7.
A comparison of load balancing techniques for scalable Web servers   (Cited by: 3; self-citations: 0; by others: 3)
Bryhni, H., Klovning, E., Kure, O. IEEE Network, 2000, 14(4): 58-64
Scalable Web servers can be built using a network of workstations, where server capacity can be extended by adding new workstations as the workload increases. The topic of our article is a comparison of different methods of load balancing HTTP traffic for scalable Web servers. We present a classification framework for the different load-balancing methods and compare their performance. In addition, we evaluate in detail one class of methods using a prototype implementation with instruction-level analysis of processing overhead. The comparison is based on a trace-driven simulation of traces from a large ISP (Internet Service Provider) in Norway. The simulation model is used to analyze different load-balancing schemes based on redirection of requests in the network and on redirection in the mapping between a canonical name (CNAME) and an IP address. The latter is vulnerable to spatial and temporal locality, although for the set of traces used, the impact of locality is limited. The best performance is obtained with redirection in the network.
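CNAME-to-IP redirection can be sketched as DNS round-robin with resolver-side caching, which is exactly what exposes it to temporal locality: a cached mapping funnels a client's burst of requests to one server. An illustrative model (class and parameter names are mine, not the paper's):

```python
from itertools import cycle

class DnsRoundRobin:
    """Toy DNS-level load balancer: each fresh resolution of the canonical
    name returns the next server IP in round-robin order, but each client
    reuses its cached mapping for `ttl` requests, modeling resolver caching
    and the resulting temporal locality."""
    def __init__(self, ips, ttl=3):
        self._ring = cycle(ips)
        self.ttl = ttl
        self._cache = {}          # client id -> (ip, remaining uses)

    def resolve(self, client):
        ip, left = self._cache.get(client, (None, 0))
        if left == 0:                       # cached mapping expired
            ip, left = next(self._ring), self.ttl
        self._cache[client] = (ip, left - 1)
        return ip
```

With `ttl=1` the scheme degenerates to per-request round-robin (no locality); larger `ttl` values skew load toward whichever server a busy client happened to cache, which is the imbalance the trace-driven simulation quantifies.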

8.
Network caching of objects has become a standard way of reducing network traffic and latency in the web. However, web caches exhibit poor performance, with a hit rate of about 30%. One way to improve this hit rate is to have a group of proxies form a co-operative in which objects can be cached for later retrieval. A co-operative cache system includes protocols for hierarchical and transversal caching. The drawback of such a system lies in the resulting network load, due to the number of messages that must be exchanged to locate an object. This paper proposes a new co-operative web caching architecture that unifies previous methods of web caching. Performance results show that the architecture achieves up to a 70% co-operative hit rate and accesses cached objects in at most two hops. Moreover, the architecture is scalable, with low traffic and database overhead. Copyright © 2002 John Wiley & Sons, Ltd.
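One common way to reach a cached object in a bounded number of hops is to hash each URL to a "home" proxy, so a local miss needs at most one extra hop instead of a flood of query messages. The abstract does not specify the paper's design, so the sketch below is a generic illustration of the two-hop idea, not the proposed architecture:

```python
import hashlib

class CoopCache:
    """Toy co-operative cache: each URL is hashed to a 'home' proxy, so a
    miss at the local proxy needs at most one extra hop to the home proxy
    (two hops total) rather than querying every peer."""
    def __init__(self, proxies):
        self.proxies = proxies
        self.store = {p: {} for p in proxies}

    def home(self, url):
        digest = hashlib.sha256(url.encode()).hexdigest()
        return self.proxies[int(digest, 16) % len(self.proxies)]

    def get(self, local, url):
        if url in self.store[local]:        # hop 0: local hit
            return self.store[local][url], 0
        h = self.home(url)
        if url in self.store[h]:            # hop 1: home-proxy hit
            return self.store[h][url], 1
        return None, 1                      # miss: fetch from origin server

    def put(self, url, body):
        self.store[self.home(url)][url] = body
```

The hashing makes object location deterministic, which is what removes the per-lookup message exchange that burdens query-based hierarchical/transversal schemes.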

9.
10.
Whether a web application succeeds depends, to a degree, on its performance, and improving web page performance has become a shared research direction in the industry. Focusing on the factors that affect the front-end performance of web systems, this work explores how each factor can be optimized, along two lines, HTTP request optimization and page-element optimization, without changing the back-end data structures or the pages themselves, so as to improve overall web system performance.

11.
Today's HTTP carries Web interactions over client-initiated TCP connections. An important implication of this transport method is that interception caches in the network violate the end-to-end principle of the Internet, which severely limits the deployment options of these caches. Furthermore, while an increasing number of Web interactions are short, and in fact frequently carry only control information and no data, TCP is often inefficient for short interactions. We propose a new transfer protocol for the Web, called Dual-Transport HTTP (DHTTP), which splits the traffic between UDP and TCP channels. When the TCP channel is chosen, it is the server that opens the connection back to the client. Through server-initiated connections, DHTTP upholds the Internet's end-to-end principle in the presence of interception caches, thereby allowing unrestricted caching within backbones. Moreover, a comparative performance study of DHTTP and HTTP, using trace-driven simulation as well as tests of real HTTP and DHTTP servers, showed a significant performance advantage for DHTTP when the bottleneck is at the server and comparable performance when the bottleneck is in the network.

12.
This paper evaluates techniques for improving operating-system and network-protocol software support for high-performance World Wide Web servers. We study approaches in three categories: new socket functions, per-byte optimizations, and per-connection optimizations. We examine two proposed socket functions, acceptex() and send-file(), comparing send-file()'s effectiveness with a combination of mmap() and writev(). We show how send-file() provides the necessary semantic support to eliminate copies and checksums in the kernel, and quantify the benefit of the function's header and close options. We also present mechanisms to reduce the number of packets exchanged in an HTTP transaction, both increasing server performance and reducing network utilization, without compromising interoperability. Results using WebStone show that our combination of mechanisms can improve server throughput by up to 64% and can eliminate up to 33% of the packets in an HTTP exchange. Results with SURGE show an aggregate increase in server throughput of 25%.
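Linux's sendfile(2), exposed in Python as os.sendfile, is a widely deployed analogue of the send-file() idea studied here: the kernel moves the file's bytes to the socket without copying them through user space. A minimal, Linux-oriented sketch (the helper name is mine):

```python
import os
import socket

def serve_file(sock: socket.socket, path: str) -> int:
    """Send a whole file over a connected socket via sendfile(), letting
    the kernel transfer the bytes without a user-space copy (the same
    copy-elimination that send-file()-style socket functions provide).
    Returns the number of bytes sent."""
    with open(path, "rb") as f:
        size = os.fstat(f.fileno()).st_size
        sent = 0
        while sent < size:                 # sendfile() may send partially
            sent += os.sendfile(sock.fileno(), f.fileno(), sent, size - sent)
        return sent
```

Compared with the mmap()+writev() combination, this keeps both the data copy and (on capable NICs) the checksum out of the application's path; note os.sendfile is platform-dependent (fully general on Linux, restricted elsewhere).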

13.
During the last decade, the Web has grown in complexity, while the evolution of HTTP (the Hypertext Transfer Protocol) has not followed the same trend. Even though HTTP 1.1 adds improvements like persistent connections and request pipelining, they are not decisive, especially in modern mixed wireless/wired networks, which often include satellites. The latter play a key role in accessing the Internet everywhere, and they are one of the preferred methods of providing connectivity in rural areas or for disaster-relief operations. However, they suffer from high latency and packet losses, which degrade the browsing experience. Consequently, the investigation of protocols mitigating the limitations of HTTP, including in challenging scenarios, is crucial for both industry and academia. In this perspective, SPDY, a protocol optimized for access to Web 2.0 content on fixed and mobile devices, could be suitable for satellite links as well. Therefore, this paper evaluates its performance when used in both real and emulated satellite scenarios. Results indicate the effectiveness of SPDY compared with HTTP, but at the price of more fragile behavior in the presence of errors. Besides, SPDY can also reduce the transport overhead experienced by the middleboxes typically deployed by service providers using satellite links. Copyright © 2016 John Wiley & Sons, Ltd.

14.
2.5 Generation (2.5G) and Third Generation (3G) cellular wireless networks allow mobile Internet access with bearers specifically designed for data communications. However, Internet protocols under-utilize wireless wide area network (WWAN) link resources, mainly due to large round-trip times (RTTs) and request-reply protocol patterns. Web browsing is a popular service that suffers significant performance degradation over 2.5G and 3G. In this paper, we review and compare the two main approaches to improving web browsing performance over wireless links: (i) using adequate end-to-end parameters and mechanisms and (ii) interposing a performance-enhancing proxy (PEP) between the wireless and wired parts. We conclude that PEPs are currently the only feasible way to significantly optimize web browsing behavior over 2.5G and 3G. In addition, we evaluate the two main current commercial PEPs over live general packet radio service (GPRS) and universal mobile telecommunications system (UMTS) networks. The results show that PEPs can lead to near-ideal web browsing performance in certain scenarios. Copyright © 2006 John Wiley & Sons, Ltd.

15.
Scalable on-demand media streaming with packet loss recovery   (Cited by: 4; self-citations: 0; by others: 4)
Previous scalable on-demand streaming protocols do not allow clients to recover from packet loss. This paper develops new protocols that: (1) have a tunably short latency for the client to begin playing the media; (2) allow heterogeneous clients to recover lost packets without jitter as long as each client's cumulative loss rate is within a tunable threshold; and (3) assume a tunable upper bound on the transmission rate to each client that can be as small as a fraction (e.g., 25%) greater than the media play rate. Models are developed to compute the minimum required server bandwidth for a given loss rate and playback latency. The results of the models are used to develop the new protocols and assess their performance. The new protocols, Reliable Periodic Broadcast and Reliable Bandwidth Skimming, are simple to implement and achieve nearly the best possible scalability and efficiency for a given set of client characteristics and desirable/feasible media quality. Furthermore, the results show that the new reliable protocols that transmit to each client at only twice the media play rate have similar performance to previous protocols that require clients to receive at many times the play rate.
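The flavor of such server-bandwidth models can be conveyed with the classic (loss-free) periodic-broadcast scheme with doubling segment sizes; this sketch is background intuition for how scalability is achieved, not the paper's Reliable Periodic Broadcast analysis:

```python
def pb_server_bandwidth(media_len_s, max_startup_s, play_rate_bps):
    """Server bandwidth for a doubling-segment periodic broadcast:
    segment k lasts max_startup_s * 2**(k-1) seconds and loops forever on
    its own channel at the play rate, so (on a loss-free channel) a client
    that waits at most max_startup_s seconds can play the whole media
    without jitter. Returns total server bandwidth in bits/s; note it is
    independent of the number of clients."""
    k = 1
    covered = max_startup_s              # seconds of media covered so far
    while covered < media_len_s:
        covered += max_startup_s * 2 ** k
        k += 1
    return k * play_rate_bps             # k looping channels at play rate
```

Because the channel count grows only logarithmically with media length, a two-hour stream with a 10-second startup latency needs about 10 play-rate channels regardless of audience size; the paper's reliable variants add loss-recovery headroom on top of this kind of bound.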

16.
HTTP/2.0 is the next-generation WWW application-protocol standard being developed by the IETF. While remaining backward compatible with the existing HTTP/1.1 protocol, it relies on mechanisms such as asynchronous concurrency, incremental transfer, and prioritization of critical content to improve the user experience of wide-area web browsing and mobile applications. Internet companies and equipment vendors have put forward different technical proposals from the perspectives of, respectively, maximizing web-application acceleration, balancing the overall efficiency of mobile terminals, and reducing the processing complexity of network equipment.

17.
When parsing text-encoded network protocols, traditional solutions struggle to satisfy the requirements of both speed and flexibility. Targeting the grammatical features of the Augmented Backus-Naur Form (ABNF), this paper proposes the instruction set and architecture of a new programmable processor that meets the combined speed and flexibility demands of network processing. The design was verified on a field-programmable gate array (FPGA); experimental results show that the processor holds a clear advantage in implementation area, processing speed, and flexibility.

18.
Performance benchmarking of wireless Web servers   (Cited by: 1; self-citations: 0; by others: 1)
Guangwei, Kehinde, Carey. Ad Hoc Networks, 2007, 5(3): 392-412
The advent of mobile computers and wireless networks enables the deployment of wireless Web servers and clients in short-lived ad hoc network environments, such as classroom area networks. The purpose of this paper is to benchmark the performance capabilities of wireless Web servers in such an environment. Network traffic measurements are conducted on an in-building IEEE 802.11b wireless ad hoc network, using a wireless-enabled Apache Web server, several wireless clients, and a wireless network traffic analyzer. The experiments focus on the HTTP transaction rate and end-to-end throughput achievable in such an ad hoc network environment, and the impacts of factors such as Web object size, number of clients, and persistent HTTP connections. The results show that the wireless network bottleneck manifests itself in several ways: inefficient HTTP performance, client-side packet losses, server-side packet losses, network thrashing, and unfairness among Web clients. Persistent HTTP connections offer up to 350% improvement in HTTP transaction rate and user-level throughput, while also improving fairness for mobile clients accessing content from a wireless Web server.
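The benefit of persistent connections measured here can be approximated with a back-of-the-envelope latency model (sequential fetches, one handshake RTT per new connection; the function name and simplifications are mine, not the paper's):

```python
def page_download_time(n_objects, obj_bytes, rtt_s, bw_bps, persistent):
    """Rough time to fetch a page of n_objects sequentially: every object
    pays a request/response round trip plus serialization delay, and a
    non-persistent connection additionally pays a TCP handshake (1 RTT)
    per object, while a persistent connection pays it only once."""
    xfer = obj_bytes * 8 / bw_bps           # serialization delay per object
    per_obj = rtt_s + xfer                  # request/response + transfer
    handshakes = 1 if persistent else n_objects
    return handshakes * rtt_s + n_objects * per_obj
```

For small objects on a high-RTT wireless path, the handshake term dominates, which is consistent with the large transaction-rate gains reported for persistent connections.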

19.
HTTP-based video streaming has been gaining popularity in recent years. There are multiple benefits to relying on HTTP/TCP connections, such as the use of widely deployed network caches to relieve video servers from sending the same content to a large number of users, and the avoidance of the traversal issues with firewalls and NATs that are typical of RTP/UDP-based solutions. Therefore, many service providers adopt HTTP streaming as the basis for their services. In this paper, the benefits of using Scalable Video Coding (SVC) for an HTTP streaming service are shown, and the SVC-based approach is compared with the AVC-based approach. We show that network resources are used more efficiently, and that the benefits of the traditional techniques can be heightened further, by adopting SVC as the video codec for adaptive low-delay streaming over HTTP. For the latter, small playout buffers are considered, allowing low media-access latency in the delivery chain, and it is shown that adaptation is performed more effectively with the SVC-based approach.

20.
The use of covert-channel methods to bypass security policies has increased considerably in recent years. Malicious users neutralize security restrictions by encapsulating protocols like peer-to-peer, chat, or HTTP proxy traffic inside allowed protocols like the Domain Name System (DNS) or HTTP. This paper illustrates a machine learning approach to detecting one particular covert-channel technique: DNS tunneling. Although packet inspection may guarantee reliable intrusion detection in this context, it may suffer from poor scalability when a large set of sockets must be monitored in real time. Detecting the presence of DNS intruders through aggregation-based monitoring is therefore of major interest, as it avoids packet inspection and thus preserves privacy and scalability. The proposed monitoring mechanism looks at simple statistical properties of protocol messages, such as statistics of packet inter-arrival times and packet sizes. The analysis is complicated by two drawbacks: silent intruders (which generate only small statistical variations from legitimate traffic) and quick statistical fingerprint generation (needed to obtain a detection tool truly applicable in the field). Results from experiments conducted on a live network are obtained by replicating individual detections over successive samples over time and by making a global decision through a majority voting scheme. The technique overcomes traditional classifier limitations. An insightful analysis of the performance leads to a unique intrusion detection tool, applicable in the presence of different tunneled applications. Copyright © 2014 John Wiley & Sons, Ltd.
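The aggregation-based detector can be sketched in three steps: compute per-sample statistics of message sizes and inter-arrival times, flag samples that deviate from a legitimate-traffic baseline, and take a majority vote across successive samples. The simple thresholding rule below is an illustrative stand-in for the paper's trained classifier:

```python
from statistics import mean, stdev

def window_features(sizes, iats):
    """Aggregate fingerprint of one traffic sample: statistics of DNS
    message sizes and inter-arrival times (no payload inspection)."""
    return (mean(sizes), stdev(sizes), mean(iats), stdev(iats))

def flag_sample(features, baseline, threshold=3.0):
    """Flag a sample whose features deviate from the legitimate-traffic
    baseline (list of (mean, std) pairs, one per feature) by more than
    `threshold` baseline standard deviations."""
    return any(abs(f - m) > threshold * s
               for f, (m, s) in zip(features, baseline))

def detect(sample_features, baseline, threshold=3.0):
    """Majority vote over successive samples, so one noisy window does
    not trigger a false alarm on its own."""
    votes = [flag_sample(f, baseline, threshold) for f in sample_features]
    return sum(votes) > len(votes) / 2
```

A "silent" tunnel that keeps its features within the threshold band evades any single window, which is why replicating the decision across windows and voting matters in practice.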
