Similar Documents
20 similar documents found (search time: 960 ms)
1.
Traffic analysis of a Web proxy caching hierarchy
Understanding Web traffic characteristics is key to improving the performance and scalability of the Web. In this article, Web proxy workloads from different levels of a caching hierarchy are used to understand how workload characteristics change across the levels of the hierarchy. The main observations of this study are that HTML and image documents account for 95 percent of the documents seen in the workload; the distribution of document transfer sizes is heavy-tailed, with the tails becoming heavier as one moves up the caching hierarchy; the popularity profile of documents does not precisely follow the Zipf distribution; one-timers account for approximately 70 percent of the documents referenced; concentration of references is lower at proxy caches than at servers, and diminishes as one moves up the caching hierarchy; and the modification rate is higher at higher-level proxies.
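Metrics like the "one-timers" fraction above are simple to compute from a proxy access log. A minimal sketch (the function name and the toy log are illustrative, not from the study):

```python
from collections import Counter

def one_timer_fraction(requests):
    """Fraction of distinct documents referenced exactly once.

    `requests` is a sequence of document identifiers (e.g. URLs) in
    the order they were seen -- a stand-in for a real proxy log.
    """
    counts = Counter(requests)
    return sum(1 for c in counts.values() if c == 1) / len(counts)

# Toy log: "a" is popular, "b" repeats, "c".."e" are one-timers.
log = ["a", "a", "a", "b", "b", "c", "d", "e"]
print(one_timer_fraction(log))  # 3 of 5 distinct docs -> 0.6
```

On a real trace one would feed in the URL column of the log; a value near 0.7, as the paper reports, means most cached documents are never requested again.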

2.
Arlitt, M.; Jin, T. IEEE Network, 2000, 14(3): 30-37
This article presents a detailed workload characterization study of the 1998 World Cup Web site. Measurements from this site were collected over a three-month period. During this time the site received 1.35 billion requests, making this the largest Web workload analyzed to date. By examining this extremely busy site and comparing it with existing characterization studies, we are able to determine how Web server workloads are evolving. We find that improvements in the caching architecture of the World Wide Web are changing the workloads of Web servers, but major improvements to that architecture are still necessary. In particular, we uncover evidence that a better consistency mechanism is required for World Wide Web caches.

3.
Many proxy servers are limited by their file I/O needs. Even when a proxy is configured with sufficient I/O hardware, the file system software often fails to provide the available bandwidth to the proxy processes. Although specialized file systems may offer a significant improvement and overcome these limitations, we believe that user-level disk management on top of industry-standard file systems can offer similar performance advantages. We study the overheads associated with file I/O in Web proxies, we investigate their underlying causes, and we propose Web-conscious storage management, a set of techniques that exploit the unique reference characteristics of Web-page accesses in order to allow Web proxies to overcome file I/O limitations. Using realistic trace-driven simulations, we show that these techniques can improve the proxy's secondary storage I/O throughput by a factor of 15 over traditional open-source proxies, enabling a single disk to serve over 400 (URL-get) operations per second. We implement Foxy, a Web proxy which incorporates our techniques. Experimental evaluation suggests that Foxy outperforms traditional proxies, such as SQUID, by more than a factor of four in throughput, without sacrificing response latency.
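One way user-level storage management of this flavor avoids per-file overhead is to pack many small cached objects into a single append-only log with an in-memory index. The sketch below illustrates that general idea only; it is not the paper's actual Foxy implementation, and the class and file names are invented:

```python
class LogStore:
    """Append-only object store: many cached Web objects share one
    file, avoiding the per-file create/open cost of one-file-per-URL
    layouts. Illustrative sketch, not the paper's Foxy design."""

    def __init__(self, path):
        self.path = path
        self.index = {}            # url -> (offset, length)
        open(path, "wb").close()   # create/truncate the log file

    def put(self, url, body: bytes):
        with open(self.path, "ab") as f:
            offset = f.tell()      # objects are written sequentially
            f.write(body)
        self.index[url] = (offset, len(body))

    def get(self, url):
        offset, length = self.index[url]
        with open(self.path, "rb") as f:
            f.seek(offset)         # one seek + one read per hit
            return f.read(length)

store = LogStore("cache.log")
store.put("http://example.com/a", b"hello")
store.put("http://example.com/b", b"world")
print(store.get("http://example.com/a"))  # b'hello'
```

Writes become sequential disk I/O, which is the property such techniques exploit to raise per-disk throughput.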

4.
The growth of the World Wide Web and web‐based applications is creating demand for high performance web servers to offer better throughput and shorter user‐perceived latency. This demand leads to widely used cluster‐based web servers in the Internet infrastructure. Load balancing algorithms play an important role in boosting the performance of cluster web servers. Previous load balancing algorithms suffer a significant performance drop under dynamic and database‐driven workloads. We propose an estimation‐based load balancing algorithm with admission control for cluster‐based web servers. Because it is difficult to accurately determine the load of web servers, we propose an approximate policy. The algorithm classifies requests based on their service times and tracks the number of outstanding requests from each class in each web server node to dynamically estimate each web server's load state. The available capacity of each web server is then computed and used for the load balancing and admission control decisions. The implementation results confirm that the proposed scheme improves both the mean response time and the throughput of clusters compared to rival load balancing algorithms, and prevents clusters from being overloaded even when request rates are beyond the cluster capacity.
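The estimation step described above can be sketched in a few lines: weight each class's outstanding requests by an approximate service time, then dispatch to the server with the most available capacity, rejecting when none has room. The class costs and capacities below are invented illustrative values, not the paper's:

```python
class Server:
    def __init__(self, capacity):
        self.capacity = capacity   # estimated max concurrent work units
        self.outstanding = {}      # request class -> in-flight count

    def load(self, cost):
        # Estimated load: outstanding requests weighted by the
        # approximate service time of their class.
        return sum(n * cost[c] for c, n in self.outstanding.items())

def dispatch(servers, req_class, cost):
    """Send the request to the server with the most available
    capacity; return None (admission control) if all are saturated.
    A sketch of the general scheme, not the paper's exact policy."""
    best = max(servers, key=lambda s: s.capacity - s.load(cost))
    if best.capacity - best.load(cost) < cost[req_class]:
        return None
    best.outstanding[req_class] = best.outstanding.get(req_class, 0) + 1
    return best

cost = {"static": 1, "dynamic": 5}   # hypothetical service-time weights
servers = [Server(10), Server(4)]
print(dispatch(servers, "dynamic", cost) is servers[0])  # True
```

Tracking only per-class counts keeps the estimator cheap, which is what makes it practical at the dispatcher.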

5.
The use of Web clusters that implement quality of service has gained much attention in recent years. In this paper we propose an adaptive algorithm for a Web switch that estimates the best configuration of the Web cluster based on server throughput. The slot length used by the system adapts to the burstiness of client requests arriving at the Web cluster. The Web switch model estimates the throughput that servers will have in the near future in order to dynamically balance the workload. To reduce checking time, estimations are computed on a variable-slot schedule. Simulation results, compared against other algorithms, show that the proposed algorithm performs significantly better in terms of latency for priority users.

6.
Efficient web content delivery using proxy caching techniques
Web caching technology has been widely used to improve the performance of the Web infrastructure and reduce user-perceived network latencies. Proxy caching is a major Web caching technique that attempts to serve user Web requests from one or a network of proxies located between the end user and Web servers hosting the original copies of the requested objects. This paper surveys the main technical aspects of proxy caching and discusses recent developments in proxy caching research including caching the "uncacheable" and multimedia streaming objects, and various adaptive and integrated caching approaches.

7.
In service-oriented architecture (SOA), current load balancing algorithms adapt poorly to, and fail to predict, load fluctuations during delay periods. To address this, an improved adaptive load balancing algorithm with prediction capability is proposed. When the workload arrival rate and service characteristics change, the algorithm automatically adjusts its load parameters and predicts the load weights and allocation of subsequent requests. Experimental results show that the algorithm is well suited to high-frequency distributed service cluster environments, effectively shortening mean server response time and improving server performance.

8.
The advent of Web technology has made Web servers core elements of future communication networks. Although the amount of traffic that Web servers must handle has grown explosively during the last decade, the performance limitations and the proper tuning of Web servers are still not well understood. In this paper we present an end-to-end queueing model for the performance of Web servers, encompassing the impacts of client workload characteristics, server hardware/software configuration, communication protocols, and interconnect topologies. The model has been implemented in a simulation tool, and performance predictions based on the model are shown to match very well with the performance of a Web server in a test lab environment. The simulation tool forms an excellent basis for development of a decision support system for the configuration tuning and sizing of Web servers.

9.
Understanding the nature of media server workloads is crucial to properly designing and provisioning current and future media services. The main issue we address in this paper is the workload analysis of today's enterprise media servers. This analysis aims to establish a set of properties specific to the enterprise media server workloads and to compare them to well-known related observations about the web server workloads. We partition the media workload properties in two groups: static and temporal. While the static properties provide more traditional and general characteristics of the underlying media fileset and quantitative properties of client accesses to those files (independent of the access time), the temporal properties reflect the dynamics and evolution of accesses to the media content over time. We propose two new metrics characterizing the temporal properties: 1) the new files impact metric characterizing the site evolution due to new content and 2) the life span metric reflecting the rates of change in accesses to the newly introduced files. We illustrate these new metrics with the analysis of two different enterprise media server workloads collected over a significant period of time.

10.
Internet video sharing sites, led by YouTube, have been gaining popularity at a dazzling speed, which also brings massive workload to their service data centers. In this paper we analyze Yahoo! Video, the second largest U.S. video sharing site, to understand the nature of such unprecedented massive workload as well as its impact on online video data center design. We crawled the Yahoo! Video web site for 46 days. The measurement data allows us to understand the workload characteristics at different time scales (minutes, hours, days, weeks), and we discover interesting statistical properties on both static and temporal dimensions of the workload, including file duration and popularity distributions, arrival rate dynamics and predictability, and workload stationarity and burstiness. Complemented with queueing-theoretic techniques, we further extend our understanding of the measurement data with a virtual design of the workload and capacity management components of a data center under the same workload as measured, which reveals key results regarding the impact of workload arrival distribution, service level agreements (SLAs), and workload scheduling schemes on the design and operations of such large-scale video distribution systems.
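The queueing-theoretic capacity-planning step sketched above typically reduces to a question of the form: given an arrival rate and a wait-probability SLA, how many servers are needed? A minimal M/M/c sketch using the standard Erlang C formula (the rates and SLA target below are invented, and the paper's actual model may differ):

```python
import math

def erlang_c(c, a):
    """Probability an arriving request must wait in an M/M/c queue
    (Erlang C), with offered load a = lambda/mu < c servers."""
    num = a**c / math.factorial(c) * (c / (c - a))
    den = sum(a**k / math.factorial(k) for k in range(c)) + num
    return num / den

def servers_needed(lam, mu, max_wait_prob):
    """Smallest server count keeping the wait probability under an
    SLA target -- a toy version of data-center sizing."""
    a = lam / mu
    c = int(a) + 1               # must exceed the offered load
    while erlang_c(c, a) > max_wait_prob:
        c += 1
    return c

# e.g. 80 req/s arriving, 10 req/s per server, at most 20% of requests wait
print(servers_needed(80, 10, 0.2))
```

Burstier-than-Poisson arrivals, which the paper measures, push the real requirement above this M/M/c estimate; that gap is exactly why the workload's arrival distribution matters for provisioning.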

11.
This paper presents a workload characterization study for Internet Web servers. Six different data sets are used in the study: three from academic environments, two from scientific research organizations, and one from a commercial Internet provider. These data sets represent three different orders of magnitude in server activity, and two different orders of magnitude in time duration, ranging from one week of activity to one year. The workload characterization focuses on the document type distribution, the document size distribution, the document referencing behavior, and the geographic distribution of server requests. Throughout the study, emphasis is placed on finding workload characteristics that are common to all the data sets studied. Ten such characteristics are identified. The paper concludes with a discussion of caching and performance issues, using the observed workload characteristics to suggest performance enhancements that seem promising for Internet Web servers.

12.
This paper introduces an adaptive cache proxy to improve the performance of Web access in soft real-time applications. It consists of client proxies and cooperative proxy servers with a server-side pushing scheme. Large amounts of heterogeneous data are stored in the proxy servers and delivered to clients through computer networks to reduce response time and network traffic. The adaptive proxy prefetches and replaces heterogeneous data dynamically, taking into account network cost, data size, data change rate, and other factors. The simulation results show that the modified LUV algorithm has better performance in terms of hit rate, byte hit rate, and delay saving rate. With cooperative proxy caching, the performance of the proxy caching system is shown to be more predictable even when the proxies must handle a variety of data. The modified adaptive TTL algorithm has better performance in terms of the combination of temporal coherency and system overheads.
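Replacement policies that weigh cost, size, and change rate, as described above, generally score each object and evict the lowest scorer. The weighting below is an illustrative guess at the style of value function such an algorithm might use, not the paper's actual LUV formula:

```python
def cache_value(access_freq, fetch_cost, size, change_rate):
    """Heuristic worth of keeping an object cached: frequently used,
    expensive-to-fetch, small, and stable objects score highest.
    Hypothetical weighting for illustration only."""
    return access_freq * fetch_cost / (size * (1.0 + change_rate))

def evict(cache):
    """Evict the lowest-value object. `cache` maps url -> stats dict
    with the keys cache_value() expects."""
    victim = min(cache, key=lambda u: cache_value(**cache[u]))
    del cache[victim]
    return victim

cache = {
    "small-stable.html": dict(access_freq=10, fetch_cost=2, size=1, change_rate=0.0),
    "big-volatile.bin":  dict(access_freq=1,  fetch_cost=1, size=10, change_rate=1.0),
}
print(evict(cache))  # big-volatile.bin
```

Folding the change rate into the denominator is one way to penalize objects likely to be stale on their next hit, which connects replacement to the temporal-coherency concern the abstract raises.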

13.
Proxy caching for media streaming over the Internet
Streaming media has contributed to a significant amount of today's Internet traffic. Like conventional Web objects (e.g., HTML pages and images), media objects can benefit from proxy caching; but their unique features such as huge size and high bandwidth demand imply that conventional proxy caching strategies have to be substantially revised. This article discusses the critical issues and challenges of cache management for proxy-assisted media streaming. We survey, classify, and compare the state-of-the-art solutions. We also investigate advanced issues of combining multicast with caching, cooperating among proxies, and leveraging proxy caching in overlay networks.

14.
The success of the World Wide Web has led to a steep increase in the user population and the amount of traffic on the Internet. Popular Web pages create "hot spots" of network load due to their great demand for bandwidth and increase response time because of the overload on the Web servers. We propose the distribution of very popular and frequently changing Web documents using continuous multicast push (CMP). The benefits of CMP for such documents are a very efficient use of network resources, a reduction of the load on the server, lower response times, and scalability to an increasing number of receivers. We present a quantitative evaluation of continuous multicast push for a wide range of parameters.

15.
A comparison of load balancing techniques for scalable Web servers
Bryhni, H.; Klovning, E.; Kure, O. IEEE Network, 2000, 14(4): 58-64
Scalable Web servers can be built using a network of workstations, where server capacity can be extended by adding new workstations as the workload increases. The topic of our article is a comparison of different methods for load balancing of HTTP traffic for scalable Web servers. We present a classification framework for the different load-balancing methods and compare their performance. In addition, we evaluate in detail one class of methods using a prototype implementation with instruction-level analysis of processing overhead. The comparison is based on a trace-driven simulation of traces from a large ISP (Internet service provider) in Norway. The simulation model is used to analyze different load-balancing schemes based on redirection of requests in the network and redirection in the mapping between a canonical name (CNAME) and IP address. The latter is vulnerable to spatial and temporal locality, although for the set of traces used, the impact of locality is limited. The best performance is obtained with redirection in the network.

16.
Web caches have become an integral component contributing to the improvement of the performance observed by Web clients. Cache satellite distribution systems (CSDSs) have emerged as a technology for feeding the caches with the information clients are expected to request, ahead of time. In such a system, the participating proxies periodically report to a central station about requests received from their clients. The central station selects a collection of Web documents, which are "pushed" via a satellite broadcast to the participating proxies, so that upon a future local request for the documents, they will already reside in the local cache, and will not need to be fetched from the terrestrial network. In this paper, our aim is addressing the issues of how to operate the CSDS, how to design it, and how to estimate its effect. Questions of interest are: 1) what Web documents should be transmitted by the central station and 2) what is the benefit of adding a particular proxy into a CSDS? We offer a model for CSDS that accounts for the request streams addressed to the proxies and which captures the intricate interaction between the proxy caches. Unlike models that are based only on the access frequency of the various documents, this model captures both their frequency and their locality of reference. We provide an analysis that is based on the stochastic properties of the traffic streams that can be derived from HTTP logs, examine it on real traffic, and demonstrate its applicability in selecting a set of proxies into a CSDS.
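The first question above (which documents the central station should broadcast) has a naive frequency-only baseline: rank documents by expected requests summed across proxies and fill the broadcast budget greedily. The sketch below shows only that baseline; the paper's model additionally weighs locality of reference, which this deliberately ignores. Names and numbers are illustrative:

```python
def select_broadcast(demand, capacity):
    """Greedy frequency-based selection for a satellite broadcast.

    `demand` maps document -> {proxy: expected_requests};
    `capacity` is how many documents fit in one broadcast cycle.
    """
    # Aggregate expected requests for each document over all proxies.
    total = {doc: sum(per_proxy.values()) for doc, per_proxy in demand.items()}
    # Broadcast the documents with the highest aggregate demand.
    ranked = sorted(total, key=total.get, reverse=True)
    return ranked[:capacity]

demand = {
    "index.html": {"proxy1": 5, "proxy2": 5},
    "news.html":  {"proxy1": 3, "proxy2": 3},
    "rare.pdf":   {"proxy1": 1},
}
print(select_broadcast(demand, 2))  # ['index.html', 'news.html']
```

A document requested heavily but always by the same client in a tight burst is well served by the local cache alone; that is the locality effect the paper's model captures and this frequency-only baseline misses.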

17.
Layer 7 Multimedia Proxy Handoff Using Anycast/Multicast in Mobile Networks
Proxies can improve the quality of service (QoS) of clients in the three-tier networking architecture. However, it is more complicated to apply the server-proxy-client networking architecture to mobile networks for multimedia streaming because mobile clients keep moving. Therefore, the three-tier architecture in mobile networks must take user mobility into consideration, i.e., mobile clients should be able to switch to a proxy dynamically. In this paper, application-layer proxy handoff (APH) is defined so that applications execute smoothly when mobile clients move within the server-proxy-client architecture. First, APH employs application-layer anycast to select one of the candidate proxies as the next proxy based on 1) the network condition between the mobile client and each candidate proxy and 2) the load balance among the candidate proxies. Second, APH utilizes IPv6 multicast to switch the session from the original proxy to the next proxy smoothly and to forward the unsent cache contents from the original proxy to the next proxy, keeping the original session continuous.

18.
In this contribution, we consider emerging wireless content delivery networks (CDNs), where multiple (possibly nomadic) clients download large files from battery-powered proxy servers via faded links composed of multiple slotted orthogonal bearers (e.g., logical subchannels). Since the transmit proxy servers are battery-powered mobile routers, a basic open question is how to find optimal energy-allocation (e.g., energy scheduling) policies that efficiently split the available energy over the (faded) bearers. The target is to minimize the resulting (average) download time when constraints on the average available energy per information unit (IU), peak energy per slot, and minimum energy per bearer (e.g., rate-induced constraints) are simultaneously active. The performance and robustness of the resulting optimal energy scheduler are tested on the last hop of Rayleigh-faded mesh networks that adopt the so-called "dirty paper" strategy for broadcasting multiple traffic flows generated by proxy servers equipped with multiple antennas.

19.
Recently, the proliferation of smartphones and the extensive coverage of wireless networks have enabled numerous mobile users to access Web resources with smartphones. Mobile mashup applications are very attractive to smartphone users due to specialized services and user-friendly GUIs. However, to offer new services through the integration of Web resources via Web API invocations, mobile mashup applications suffer from high energy consumption and long response time. In this paper, we propose a proxy system and two techniques to reduce the size of data transfer, thereby enabling mobile mashup applications to achieve energy-efficient and cost-effective Web API invocations. Specifically, we design an API query language that allows mobile mashup applications to readily specify and obtain desired information by instructing a proxy to filter unnecessary information returned from Web API servers. We also devise an image multi-get module, which results in mobile mashup applications with smaller transfer sizes by combining multiple images and adjusting the quality, scale, or resolution of the images. With the proposed proxy and techniques, a mobile mashup application can rapidly retrieve Web resources via Web API invocations with lower energy consumption due to a smaller number of HTTP requests and responses as well as smaller response bodies. Experimental results show that the proposed proxy system and techniques significantly reduce transfer size, response time, and energy consumption of mobile mashup applications.
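The filtering half of this idea, a proxy pruning a Web API response down to the fields the client asked for, can be sketched compactly. The query format here (a set of top-level key names) is a simplified stand-in for the paper's API query language, and the sample payload is invented:

```python
import json

def filter_fields(payload, wanted):
    """Keep only the fields the mobile mashup requested, shrinking
    a Web API response before it crosses the wireless link."""
    if isinstance(payload, list):
        return [filter_fields(item, wanted) for item in payload]
    return {k: v for k, v in payload.items() if k in wanted}

# A hypothetical API response: the client only needs id and title.
response = json.loads(
    '[{"id": 1, "title": "a", "body": "long text...", "raw": "..."},'
    ' {"id": 2, "title": "b", "body": "long text...", "raw": "..."}]'
)
slim = filter_fields(response, {"id", "title"})
print(json.dumps(slim))  # only id and title survive in each item
```

Smaller response bodies mean fewer radio-active milliseconds per request, which is where the energy savings reported in the paper come from.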

20.
We develop a general model, called latency-rate servers (ℒℛ servers), for the analysis of traffic scheduling algorithms in broadband packet networks. The behavior of an ℒℛ server is determined by two parameters: the latency and the allocated rate. Several well-known scheduling algorithms, such as weighted fair queueing, VirtualClock, self-clocked fair queueing, weighted round robin, and deficit round robin, belong to the class of ℒℛ servers. We derive tight upper bounds on the end-to-end delay, internal burstiness, and buffer requirements of individual sessions in an arbitrary network of ℒℛ servers in terms of the latencies of the individual schedulers in the network, when the session traffic is shaped by a token bucket. The theory of ℒℛ servers enables computation of tight upper bounds on end-to-end delay and buffer requirements in a heterogeneous network, where individual servers may support different scheduling architectures and different traffic models.
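The shape of the end-to-end delay bound can be stated compactly. As a sketch, omitting the packet-size correction terms the full analysis carries: for a session shaped by a token bucket with burst $\sigma$ and rate $\rho$, allocated rate $\rho$ at each of $K$ ℒℛ servers along its path with latencies $\Theta_1,\dots,\Theta_K$, the end-to-end delay $D$ satisfies

```latex
D \;\le\; \frac{\sigma}{\rho} \;+\; \sum_{i=1}^{K} \Theta_i
```

Each scheduler on the path contributes only its own latency term, independent of the other schedulers' architectures, which is what makes the bound usable in the heterogeneous networks the abstract describes.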
