Similar Documents
20 similar documents found.
1.
Proxy caching on the Internet is one of the principal and most effective techniques for addressing slow Web access, heavy server load, and network congestion. To support the design of proxy caching schemes that are effective, scalable, robust, adaptive, and stable, this paper studies the key technical issues of proxy caching, including cache consistency policies, replacement policies, system architecture, cache content selection, and prefetching, and presents solutions for these techniques.
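
As an illustration of one of the key techniques surveyed above, cache replacement, here is a minimal sketch of an LRU replacement policy of the kind a proxy cache might use. The class name and capacity value are illustrative assumptions, not taken from the paper.

```python
from collections import OrderedDict

class LRUProxyCache:
    """Minimal LRU cache sketch: evicts the least recently used object
    when the configured capacity (in bytes) would be exceeded."""

    def __init__(self, capacity_bytes=64 * 1024 * 1024):  # illustrative capacity
        self.capacity = capacity_bytes
        self.used = 0
        self.entries = OrderedDict()  # url -> body

    def get(self, url):
        body = self.entries.get(url)
        if body is not None:
            self.entries.move_to_end(url)  # mark as most recently used
        return body

    def put(self, url, body):
        if len(body) > self.capacity:
            return  # object larger than the whole cache: do not cache it
        if url in self.entries:
            self.used -= len(self.entries.pop(url))
        while self.used + len(body) > self.capacity:
            _, evicted = self.entries.popitem(last=False)  # evict LRU entry
            self.used -= len(evicted)
        self.entries[url] = body
        self.used += len(body)
```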

2.
A Web Cache Cluster Architecture Based on Decentralized Cooperation
Web object caching is an important means of reducing Web traffic and access latency. By analyzing existing Web caching systems, this paper proposes a Web cache cluster architecture based on decentralized cooperation. The architecture removes the dedicated management server that centralized systems require, eliminating both the risk of a system-wide outage when that management server fails and the latency it introduces; at the same time it avoids the multi-level forwarding delay on cache misses and the duplicated cache content found in decentralized systems, improving resource utilization and system efficiency while remaining scalable and robust.
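
The abstract does not state how the cluster nodes coordinate without a management server; one common manager-free technique for agreeing on object placement is consistent hashing, sketched below purely as an illustration (node names and the virtual-node count are made up, and this is not necessarily the mechanism the paper uses).

```python
import hashlib
from bisect import bisect_right

class ConsistentHashRing:
    """Maps object URLs to cache nodes without any central manager;
    every node can compute the same mapping independently."""

    def __init__(self, nodes, vnodes=100):  # vnodes: virtual points per node
        self.ring = sorted(
            (self._h(f"{node}#{i}"), node) for node in nodes for i in range(vnodes)
        )
        self.keys = [k for k, _ in self.ring]

    @staticmethod
    def _h(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def node_for(self, url):
        idx = bisect_right(self.keys, self._h(url)) % len(self.ring)
        return self.ring[idx][1]

# Example: each cache in the cluster resolves the owner of a URL locally.
ring = ConsistentHashRing(["cache-a", "cache-b", "cache-c"])
print(ring.node_for("http://example.com/index.html"))
```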

3.
With the exponential growth of WWW traffic, web proxy caching has become a critical technique for Internet web services. Well-organized proxy caching systems with multiple servers can greatly reduce user-perceived latency and decrease network bandwidth consumption, so many research papers have focused on improving web caching performance with efficient coordination algorithms among multiple servers. Hash-based algorithms are the most widely used server coordination mechanism; however, many technical issues still need to be addressed. In this paper, we propose a new hash-based web caching architecture, Tulip. Tulip aggregates web objects that are likely to be accessed together into object clusters and uses object clusters as the primary access units. Tulip extends the locality-based algorithm in UCFS to hash-based web proxy systems and proposes a simple algorithm to reduce the data grouping overhead. It takes into consideration the access speed disparity between memory and disk and replaces many expensive small disk I/Os with fewer large ones. When a client request cannot be fulfilled by the server from memory, the system fetches the whole cluster containing the required object into memory, so future requests for other objects in the same cluster can be satisfied directly from memory and slow disk I/Os are avoided. Tulip also introduces a simple and efficient data duplication algorithm, so little maintenance work needs to be done when a server joins, leaves, or fails. Along with the local caching strategy, Tulip achieves better fault tolerance and load balancing capability with minimal cost. Our simulation results show that Tulip has better performance than previous approaches.
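
The following sketch illustrates the cluster-as-access-unit idea described above: when a requested object is not in memory, the whole cluster containing it is read in one large disk I/O so that subsequent requests for co-clustered objects hit memory. It is a simplified illustration under assumed interfaces, not Tulip's actual data structures; the disk-read callback is a stand-in.

```python
class ClusterCache:
    """Memory cache keyed by cluster id; objects grouped into clusters
    are fetched from disk a whole cluster at a time."""

    def __init__(self, read_cluster_from_disk):
        self.read_cluster = read_cluster_from_disk   # stand-in for one large disk I/O
        self.cluster_of = {}                          # object id -> cluster id
        self.memory = {}                              # cluster id -> {object id: body}

    def register(self, obj_id, cluster_id):
        self.cluster_of[obj_id] = cluster_id

    def get(self, obj_id):
        cluster_id = self.cluster_of[obj_id]
        cluster = self.memory.get(cluster_id)
        if cluster is None:
            # One large read replaces many small per-object disk I/Os.
            cluster = self.read_cluster(cluster_id)
            self.memory[cluster_id] = cluster
        return cluster[obj_id]
```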

4.
Proxy servers have been used to cache web objects to alleviate the load of the web servers and to reduce network congestion on the Internet. In this paper, a central video server is connected to a proxy server via wide area networks (WANs) and the proxy server can reach many clients via local area networks (LANs). We assume a video can be either entirely or partially cached in the proxy to reduce WAN bandwidth consumption. Since the storage space and the sustained disk I/O bandwidth are limited resources in the proxy, how to efficiently utilize these resources to maximize the WAN bandwidth reduction is an important issue. We design a progressive video caching policy in which each video can be cached at several levels corresponding to cached data sizes and required WAN bandwidths. For a video, the proxy server determines whether to cache a smaller amount of data at a lower level or to gradually accumulate more data to reach a higher level. The proposed progressive caching policy allows the proxy to adjust the caching amount for each video based on its resource condition and the user access pattern. We investigate the scenarios in which the access pattern is known a priori or unknown, and the effectiveness of the caching policy is evaluated.
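
A hedged sketch of the level-selection decision described above: each additional caching level stores more of a video and saves more WAN bandwidth, but consumes more proxy storage. The greedy "best saving per byte" rule below is only an illustrative heuristic under an assumed input format (cumulative per-level sizes and savings); it is not the paper's actual policy.

```python
def choose_cache_levels(videos, storage_budget):
    """videos: {name: [(cum_bytes, cum_wan_bw_saved), ...]} ordered from the lowest
    to the highest level, with strictly increasing cumulative values.
    Repeatedly buy the feasible next-level upgrade with the best saving per byte
    (illustrative greedy heuristic). Returns {name: chosen level}."""
    chosen = {name: 0 for name in videos}
    while True:
        best = None
        for name, levels in videos.items():
            lvl = chosen[name]
            if lvl >= len(levels):
                continue  # already cached at the highest level
            prev_bytes, prev_saved = levels[lvl - 1] if lvl > 0 else (0, 0)
            size, saved = levels[lvl]
            extra_bytes, extra_saved = size - prev_bytes, saved - prev_saved
            if extra_bytes <= storage_budget:
                ratio = extra_saved / extra_bytes
                if best is None or ratio > best[0]:
                    best = (ratio, name, extra_bytes)
        if best is None:
            return chosen  # no further upgrade fits in the remaining storage
        _, name, extra_bytes = best
        chosen[name] += 1            # progressively accumulate one more level
        storage_budget -= extra_bytes
```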

5.
High-performance Web sites rely on Web server "farms", hundreds of computers serving the same content, for scalability, reliability, and low-latency access to Internet content. Deploying these scalable farms typically requires the power of distributed or clustered file systems. Building Web server farms on file systems complements hierarchical proxy caching. Proxy caching replicates Web content throughout the Internet, thereby reducing latency from network delays and off-loading traffic from the primary servers. Web server farms scale resources at a single site, reducing latency from queuing delays. Both technologies are essential when building a high-performance infrastructure for content delivery. The authors present a cache consistency model and locking protocol customized for file systems that are used as scalable infrastructure for Web server farms. The protocol takes advantage of the Web's relaxed consistency semantics to reduce latencies and network overhead. Our hybrid approach preserves strong consistency for concurrent write sharing with time-based consistency and push caching for readers (Web servers). Using simulation, we compare our approach to the Andrew file system and the sequential consistency file system protocols we propose to replace.

6.
Effective caching in the domain name system (DNS) is critical to its performance and scalability. Existing DNS only supports weak cache consistency by using the time-to-live (TTL) mechanism, which functions reasonably well in normal situations. However, maintaining strong cache consistency in DNS as an indispensable exception-handling mechanism has become more and more demanding for three important objectives: 1) to quickly respond to and handle exceptions such as sudden and dramatic Internet failures caused by natural and human disasters, 2) to adapt to increasingly frequent changes of Internet Protocol (IP) addresses due to the introduction of dynamic DNS techniques for various stationed and mobile devices on the Internet, and 3) to provide fine-grain controls for content delivery services to timely balance server load distributions. With agile adaptation to various exceptional Internet dynamics, strong DNS cache consistency improves the availability and reliability of Internet services. In this paper, we first conduct extensive Internet measurements to quantitatively characterize DNS dynamics. Then, we propose a proactive DNS cache update protocol (DNScup), running as middleware in DNS name servers, to provide strong cache consistency for DNS. The core of DNScup is an optimal lease scheme, called dynamic lease, to keep track of the local DNS name servers. We compare dynamic lease with other existing lease schemes through theoretical analysis and trace-driven simulations. Based on the DNS dynamic update protocol, we build a DNScup prototype with minor modifications to the current DNS implementation. Our system prototype demonstrates the effectiveness of DNScup and its easy and incremental deployment on the Internet.

7.
Adaptive leases: a strong consistency mechanism for the World Wide Web
We argue that weak cache consistency mechanisms supported by existing Web proxy caches must be augmented by strong consistency mechanisms to support the growing diversity in application requirements. Existing strong consistency mechanisms are not appealing for Web environments due to their large state space or control message overhead. We focus on the lease approach that balances these trade-offs and present analytical models and policies for determining the optimal lease duration. We present extensions to the HTTP protocol to incorporate leases and then implement our techniques in the Squid proxy cache and the Apache Web server. Our experimental evaluation of the leases approach shows that: 1) our techniques impose modest overheads even for long leases (a lease duration of 1 hour requires state to be maintained for 1030 leases and imposes a per-object overhead of a control message every 33 minutes), 2) leases yield a 138-425 percent improvement over existing strong consistency mechanisms, and 3) the implementation overhead of leases is comparable to existing weak consistency mechanisms.
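
A minimal sketch of the lease idea described above, assuming a simple server-side bookkeeping structure (the class name, callback, and in-memory notification are illustrative): while a proxy holds an unexpired lease on an object, the server notifies it of writes; once the lease expires, the server owes no notifications and the proxy must renew the lease before treating the object as fresh. This is what bounds both server state and control-message overhead.

```python
import time

class LeaseServer:
    """Server-side lease bookkeeping (illustrative sketch of the lease approach)."""

    def __init__(self, lease_duration=3600.0):  # e.g. a 1-hour lease
        self.lease_duration = lease_duration
        self.leases = {}  # object id -> {proxy id: expiry time}

    def grant_lease(self, obj_id, proxy_id, now=None):
        now = time.time() if now is None else now
        expiry = now + self.lease_duration
        self.leases.setdefault(obj_id, {})[proxy_id] = expiry
        return expiry

    def object_modified(self, obj_id, notify, now=None):
        """On a write, notify only proxies whose lease has not yet expired;
        expired leases require no control message (the space/message trade-off)."""
        now = time.time() if now is None else now
        holders = self.leases.get(obj_id, {})
        for proxy_id, expiry in list(holders.items()):
            if expiry > now:
                notify(proxy_id, obj_id)   # invalidation/update message to the proxy
            del holders[proxy_id]          # lease is consumed or expired either way
```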

8.
In this paper, we investigate a proxy-based integrated cache consistency and mobility management scheme for supporting client–server applications in Mobile IP systems with the objective to minimize the overall network traffic generated. Our cache consistency management scheme is based on a stateful strategy by which cache invalidation messages are asynchronously sent by the server to a mobile host (MH) whenever data objects cached at the MH have been updated. We use a per-user proxy to buffer invalidation messages to allow the MH to disconnect arbitrarily and to reduce the number of uplink requests when the MH is reconnected. Moreover, the user proxy takes the responsibility of mobility management to further reduce the network traffic. We investigate a design by which the MH’s proxy serves as a gateway foreign agent (GFA) as in the MIP Regional Registration protocol to keep track of the address of the MH in a region, with the proxy migrating with the MH when the MH crosses a regional area. We identify the optimal regional area size under which the overall network traffic cost, due to cache consistency management, mobility management, and query requests/replies, is minimized. The integrated cache consistency and mobility management scheme is demonstrated to outperform MIPv6, no-proxy and/or no-cache schemes, as well as a decoupled scheme that optimally but separately manages mobility and service activities in Mobile IPv6 environments.

9.
Design and Implementation of a Web Object Access Characteristics Simulator
石磊  陶永才 《计算机仿真》2006,23(1):133-136
Web caching is a highly effective way to improve Web performance; caches can be placed at different points in the network: at the client, at the proxy server, and at the server. Studies show that Web cache hit ratios can reach 30%-50%. The biggest problem in applying Web caching is cache management, and studying Web access characteristics is the basis for effective cache management. A Web log generation simulator is therefore very helpful for studying Web caching systems. There are currently two approaches to synthetically generating Web access logs: the trace-driven (log-driven) method and the mathematical modeling method. The log-driven method transforms historical logs to generate new ones, while the mathematical modeling method, built on a thorough study of Web object access characteristics, generates Web logs from mathematical models. Based on an analysis of Web object access characteristics, this paper uses the mathematical modeling approach to model the popularity of Web objects in both the high-frequency and low-frequency regions, the heavy-tailed distribution of Web object sizes, and the temporal locality of Web accesses; it then designs and implements a Web log generation simulator, WEBSIM. The simulator not only generates Web object access logs but also offers considerable flexibility, providing a basis for further research on Web caching and prefetching.
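
A compact sketch of the mathematical-modeling approach described above, combining Zipf-like popularity, heavy-tailed (Pareto) object sizes, and a simple temporal-locality knob. All parameter values and the exact distributions are illustrative assumptions; WEBSIM's actual models are not reproduced here.

```python
import random

def generate_trace(n_objects=1000, n_requests=10000,
                   zipf_alpha=0.8, pareto_alpha=1.2, min_size=1024,
                   locality=0.3, seed=42):
    """Yield (object_id, size_bytes) request pairs.
    Popularity ~ Zipf(alpha), sizes ~ Pareto heavy tail, and with probability
    'locality' a request repeats one of the last few objects (temporal locality)."""
    rng = random.Random(seed)
    weights = [1.0 / (rank ** zipf_alpha) for rank in range(1, n_objects + 1)]
    sizes = [int(min_size * rng.paretovariate(pareto_alpha)) for _ in range(n_objects)]
    recent = []
    for _ in range(n_requests):
        if recent and rng.random() < locality:
            obj = rng.choice(recent)                         # re-reference a recent object
        else:
            obj = rng.choices(range(n_objects), weights)[0]  # Zipf-distributed pick
        recent = (recent + [obj])[-50:]                      # keep a short recency window
        yield obj, sizes[obj]

# Example: print a tiny synthetic log.
for obj_id, size in generate_trace(n_requests=5):
    print(obj_id, size)
```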

10.
Exploiting Regularities in Web Traffic Patterns for Cache Replacement
Cohen  Kaplan 《Algorithmica》2002,33(3):300-334
Caching web pages at proxies and in web servers' memories can greatly enhance performance. Proxy caching is known to reduce network load, and both proxy and server caching can significantly decrease latency. Web caching problems have different properties than traditional operating systems caching, and cache replacement can benefit by recognizing and exploiting these differences. We address two aspects of the predictability of traffic patterns: the overall load experienced by large proxy and web servers, and the distinct access patterns of individual pages. We formalize the notion of "cache load" under various replacement policies, including LRU and LFU, and demonstrate that the trace of a large proxy server exhibits regular load. Predictable load allows for improved design, analysis, and experimental evaluation of replacement policies. We provide a simple and (near-)optimal replacement policy when each page request has an associated distribution function on the next request time of the page. Without the predictable load assumption, no such online policy is possible, and it is known that even obtaining an offline optimum is hard. For experiments, predictable load enables comparing and evaluating cache replacement policies using partial traces, containing requests made to only a subset of the pages. Our results are based on considering a simpler caching model which we call the interval caching model. We relate traditional and interval caching policies under predictable load, and derive (near-)optimal replacement policies from their optimal interval caching counterparts.
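
The abstract's (near-)optimal policy relies on a per-page distribution of the next request time. As a hedged simplification of that idea (not the paper's exact policy), the sketch below evicts the page whose expected time until the next request is largest, echoing Belady's offline rule; the expectation function is an assumed stand-in for whatever statistic is derived from the distribution.

```python
def evict_candidate(now, cached_pages, expected_next_request):
    """cached_pages: iterable of page ids currently in the cache.
    expected_next_request: maps page id -> expected absolute time of its next request,
    e.g. the mean of the page's next-request-time distribution (illustrative stand-in).
    Returns the page to evict: the one expected to be needed furthest in the future."""
    return max(cached_pages, key=lambda p: expected_next_request(p) - now)

# Illustrative usage with a fake expectation table.
pages = ["a", "b", "c"]
expectation = {"a": 120.0, "b": 45.0, "c": 600.0}
print(evict_candidate(0.0, pages, expectation.get))  # -> "c"
```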

11.
Proxy caching is an effective approach to reduce the response latency to client requests, web server load, and network traffic. Recently there has been a major shift in the usage of the Web. Emerging web applications require an increasing amount of server-side processing, yet current proxy protocols do not support caching and execution of web processing units. In this paper, we present a weblet environment in which processing units on web servers are implemented as weblets. These weblets can migrate from web servers to proxy servers to perform required computation and provide faster responses. A weblet engine is developed to provide the execution environment on proxy servers as well as web servers and to facilitate uniform weblet execution. We have conducted thorough experimental studies to investigate the performance of the weblet approach. We modify the industrial-standard e-commerce benchmark TPC-W to fit the weblet model and use its workload model for performance comparisons. The experimental results show that the weblet environment significantly improves system performance in terms of client response latency, web server throughput, and workload. Our prototype weblet system also demonstrates the feasibility of integrating the weblet environment with the current web/proxy infrastructure.

12.
Maintaining the consistency of data cached at clients in a mobile environment is a key problem in mobile databases. Data broadcasting, which exploits the asymmetry of wireless communication networks, is the most practical technique for keeping data on mobile clients consistent with the server. However, because parameters vary across environments and time periods, determining the size of the invalidation report time window ω is difficult. Based on the time intervals between data updates in a mobile database, this paper proposes an invalidation report technique based on multiple time windows.
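
A minimal sketch of the broadcast invalidation-report mechanism this item builds on, using a single window ω for clarity; the paper's contribution is choosing and combining multiple windows, which this sketch does not implement. Function names and the report layout are illustrative assumptions.

```python
def build_invalidation_report(update_log, now, window):
    """update_log: list of (object_id, update_time); window: omega in seconds.
    The server periodically broadcasts the ids of objects updated in the last omega seconds."""
    return {obj for obj, t in update_log if now - t <= window}

def apply_report(cache, last_heard, now, window, report):
    """Client side: if the client slept longer than omega it may have missed
    invalidations, so the whole cache must be dropped; otherwise drop only
    the objects named in the report."""
    if now - last_heard > window:
        cache.clear()
    else:
        for obj in report:
            cache.pop(obj, None)
```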

13.
Coherence misses and invalidation traffic limit the performance of bus-based multiprocessors using write-invalidate snooping caches. This paper considers optimizations of a write-invalidate protocol that remove such overhead. While coherence misses are attacked by a hybrid update/invalidate protocol and another technique where update instructions are selectively inserted by a compiler, invalidation traffic is reduced by three optimizations that coalesce ownership acquisition with miss handling: migrate-on-dirty, an adaptive hardware-based scheme, and compiler-controlled insertion of load-exclusive instructions.

The relative effectiveness of these optimizations is evaluated using detailed architectural simulations and a set of four parallel programs. We find that while both of the update-based schemes effectively remove most coherence misses, the hybrid update/invalidate scheme causes lower traffic. By contrast, the compiler-based approach to cutting invalidation traffic is slightly more efficient than the adaptive hardware-based scheme. Moreover, the migrate-on-dirty heuristic is found to have devastating effects on the miss rate.


14.
Caching data in a wireless mobile computer can significantly reduce the bandwidth requirement. However, due to battery power limitation, a wireless mobile computer may often be forced to operate in a doze or even totally disconnected mode. As a result, the mobile computer may miss some cache invalidation reports. In this paper, we present an energy-efficient cache invalidation method for a wireless mobile computer. The new cache invalidation scheme is called grouping with cold update-set retention (GCORE). Upon waking up, a mobile computer checks its cache validity with the server. To reduce the bandwidth requirement for validity checking, data objects are partitioned into groups. However, instead of simply invalidating a group if any of the objects in the group has been updated, GCORE retains the cold update set of objects in a group if possible. We present an efficient implementation of GCORE and conduct simulations to evaluate its caching effectiveness. The results show that GCORE can substantially improve mobile caching by reducing the communication bandwidth (thus energy consumption) for query processing.
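
A simplified sketch of the group-based validity check described above: on wake-up the client sends, per group, the timestamp of its last validation; the server answers with the set of objects in each group updated since then, so a group need not be invalidated wholesale when only a small update set has changed. This is an illustrative reconstruction under assumed data structures, not GCORE's exact protocol.

```python
def server_check_groups(update_times, group_members, client_group_ts):
    """update_times: object id -> last update time (server state).
    group_members: group id -> set of object ids in that group.
    client_group_ts: group id -> time the client last validated that group.
    Returns group id -> set of objects updated since the client's timestamp."""
    return {
        g: {o for o in group_members[g] if update_times.get(o, 0) > ts}
        for g, ts in client_group_ts.items()
    }

def client_apply(cache, updated_per_group):
    """Drop only the updated objects in each group and keep the rest,
    retaining the unchanged ('cold') part of the group in the cache."""
    for updated in updated_per_group.values():
        for obj in updated:
            cache.pop(obj, None)
```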

15.
Web object caching is an important means of reducing Web server traffic and access latency. Although introducing Web caches greatly reduces server load, network congestion, and client-perceived latency, it also raises the cache consistency problem: the data a client obtains from a cache may not be the latest version. By analyzing existing cache consistency policies, this paper proposes a strong cache consistency algorithm suited to the Web.

16.
Cache Management Strategies for Streaming Media Objects
Proxy techniques for streaming media services are an important topic in streaming media research. With the rapid development of streaming media technology on the Internet and in wireless networks, research on streaming media proxy servers is steadily deepening. This paper discusses how proxy techniques can improve the quality of media services, reduce media transmission delay, and relieve network load. In the Internet environment, research on streaming media proxy servers focuses on streaming media access characteristics and cache replacement algorithms; building and implementing a streaming media proxy server is the foundation of research on streaming media proxy technology.

17.
In this paper, we propose and analyze a proxy-based hybrid cache management scheme for client-server applications in Mobile IP (MIP) networks. We leverage a per-user proxy as a gateway between the server and the mobile host (MH) such that any communication between the MH and server must pass through the proxy. The proxy has dual responsibilities in our design. It keeps track of the current location of the MH by acting as a regional Gateway Foreign Agent (GFA) as in the MIP Regional Registration protocol for mobility management. The proxy is also responsible for cache consistency management and query processing on behalf of the MH. To reduce the network traffic, a threshold-based hybrid cache consistency management policy is applied. That is, when a data object is updated at the server, the server sends an invalidation report to the MH through the proxy to invalidate the cached data object, provided that the size of the data object exceeds the given threshold. Otherwise, the server sends a fresh copy of the data object through the proxy to the MH. We identify the best “threshold” value that would minimize the overall network traffic incurred due to mobility management, cache consistency management, and query processing, when given a set of parameter values characterizing the operational and workload conditions of the MIP network.
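
A minimal sketch of the threshold rule described above, assuming the proxy simply relays whatever the server sends toward the MH (function and parameter names are illustrative): objects larger than the threshold trigger an invalidation report, smaller ones are pushed in full.

```python
def on_server_update(obj_id, new_body, threshold_bytes, send_to_proxy):
    """Hybrid consistency rule: invalidate if the object is larger than the
    threshold, otherwise push the fresh copy through the per-user proxy."""
    if len(new_body) > threshold_bytes:
        send_to_proxy({"type": "invalidate", "object": obj_id})
    else:
        send_to_proxy({"type": "refresh", "object": obj_id, "body": new_body})

def on_proxy_message(msg, mh_cache, forward_to_mh):
    """Receiving side of the same rule: drop or replace the cached copy,
    then forward the message toward the mobile host."""
    if msg["type"] == "invalidate":
        mh_cache.pop(msg["object"], None)
    else:
        mh_cache[msg["object"]] = msg["body"]
    forward_to_mh(msg)
```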

18.
With the development of high-speed broadband access, research on streaming media technology has advanced rapidly and has broad application prospects. Streaming media proxy technology, as an important means of reducing server load and improving user response time, has become one of the hot topics in streaming media research. For the distributed proxy server systems used in streaming media services, this paper proposes an optimized cache data placement strategy. Its main idea is to place cached data on the particular proxy server that minimizes the network transmission cost of future accesses to that data. Simulation results show that the proposed algorithm achieves lower transmission cost and better scalability than traditional cache placement algorithms.
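
A hedged sketch of the placement idea described above: given an estimate of how often each proxy's clients will request an object and the transfer cost between proxies, cache the object at the proxy that minimizes the total expected transfer cost. The cost model and names are illustrative assumptions, not the paper's exact formulation.

```python
def best_placement(request_rate, transfer_cost):
    """request_rate: proxy id -> expected requests per unit time for the object.
    transfer_cost: (serving proxy, requesting proxy) -> cost per request
    (zero when both are the same proxy). Returns the proxy minimizing total cost."""
    def total_cost(host):
        return sum(rate * transfer_cost[(host, p)] for p, rate in request_rate.items())
    return min(request_rate, key=total_cost)

# Illustrative usage with three proxies and a uniform unit transfer cost.
rates = {"p1": 10.0, "p2": 3.0, "p3": 1.0}
costs = {(a, b): (0.0 if a == b else 1.0) for a in rates for b in rates}
print(best_placement(rates, costs))  # -> "p1", where most requests originate
```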

19.
Current proxy server and client caching techniques do not incorporate the dynamics of document selection and modification. The adaptive model proposed in the article uses document life histories to optimize cache performance. We briefly describe existing “semi-intelligent” caching strategies and then propose a mechanism for adaptive cache management. Our approach attempts to improve cache performance by modeling document life histories to determine usefulness. We use damped exponential smoothing to ensure an accurate yet responsive model of document dynamics.
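
A small sketch of using damped (trend-damped) exponential smoothing to track a document's inter-request interval, from which a "usefulness" estimate could be derived. The smoothing constants and the way the forecast would be used are illustrative assumptions, not the article's exact model.

```python
class DampedSmoother:
    """Holt-style exponential smoothing with a damped trend (phi < 1).
    Feed it successive inter-request intervals; forecast() predicts the next one."""

    def __init__(self, alpha=0.3, beta=0.1, phi=0.9):
        self.alpha, self.beta, self.phi = alpha, beta, phi
        self.level = None
        self.trend = 0.0

    def update(self, interval):
        if self.level is None:
            self.level = interval  # initialize the level from the first observation
            return
        prev_level = self.level
        self.level = self.alpha * interval + (1 - self.alpha) * (prev_level + self.phi * self.trend)
        self.trend = self.beta * (self.level - prev_level) + (1 - self.beta) * self.phi * self.trend

    def forecast(self):
        """Predicted next inter-request interval; a shorter forecast suggests the
        document will be requested again soon and is more useful to keep cached."""
        return (self.level or 0.0) + self.phi * self.trend
```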

20.
A WWW proxy server, proxy for short, provides access to the Web for people on closed subnets who can only access the Internet through a firewall machine. The hypertext server developed at CERN, cern_httpd, is capable of running as a proxy, providing seamless external access to HTTP, Gopher, WAIS and FTP. cern_httpd has had gateway features for a long time, but only this spring they were extended to support all the methods in the HTTP protocol used by WWW clients. Clients do not lose any functionality by going through a proxy, except special processing they may have done for non-native Web protocols such as Gopher and FTP. A brand new feature is caching performed by the proxy, resulting in shorter response times after the first document fetch. This makes proxies useful even to the people who do have full Internet access and do not really need the proxy just to get out of their local subnet. This paper gives an overview of proxies and reports their current status.
