Similar Documents
A total of 20 similar documents were found.
1.
周刚, 周建国, 晏蒲柳. 《计算机应用》, 2006, 26(3): 733-735
A new cooperative caching system based on consistent hashing is proposed. To overcome the high latency caused by multi-hop forwarding and the repeated hash computations in traditional cooperative caching systems, an efficient Web object location and routing scheme is designed that guarantees any Web request reaches its target node with a single hash computation and at most one forward. An invalidation-triggered strategy is used to maintain routing table consistency, which reduces network overhead and improves the scalability and reliability of the system. Simulation experiments show that the system outperforms systems based on the Internet Cache Protocol (ICP) and the Cache Array Routing Protocol (CARP).
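The abstract does not give the paper's routing algorithm in detail. Below is a minimal Python sketch of the general idea of locating a cache node with a single hash computation, assuming a consistent-hashing ring with virtual replicas; the class, method, and node names are illustrative and not taken from the paper.

```python
import hashlib
from bisect import bisect_right

class HashRing:
    """Map object URLs onto cache nodes with a single hash computation.

    Each node is hashed (with a few virtual replicas) onto an integer ring;
    a URL is hashed once and assigned to the first node clockwise from it.
    """

    def __init__(self, nodes, replicas=64):
        self._ring = sorted(
            (self._hash(f"{node}#{i}"), node)
            for node in nodes
            for i in range(replicas)
        )
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def node_for(self, url):
        """One hash of the URL locates the target cache node."""
        h = self._hash(url)
        idx = bisect_right(self._keys, h) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["cache-a", "cache-b", "cache-c"])
print(ring.node_for("http://example.com/index.html"))
```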

2.
The article provides a primer on Web resource caching, one technology used to make the Web scalable. Web caching can reduce bandwidth usage, decrease user-perceived latencies, and reduce Web server loads transparently. As a result, caching has become a significant part of the Web's infrastructure. Caching has even spawned a new industry: content delivery networks, which are also growing at a fantastic rate. Readers familiar with relatively advanced Web caching topics such as the Internet Cache Protocol (ICP), invalidation, and interception proxies are not likely to learn much here. Instead, the article is designed for the general audience of Web users. Rather than a how-to guide to caching technology deployment, it is a high-level argument for the value of Web caching to content consumers and producers. The article defines caching, explains how it applies to the Web, and describes when and why it is useful.

3.
Implementation of a LAN-based streaming media caching proxy server
With the widespread use of streaming media applications on the Internet, a significant change in Internet workload is expected. Caching is an effective technique for enhancing the scalability of streaming systems and reducing the load on servers and the network. We use the RTP/RTSP protocols and implement a prototype of a LAN-based streaming proxy cache in a Visual C++ environment with the WinSock network interface. The system helps decrease server load, reduce traffic from the streaming server to the proxy, and improve the start-up latency experienced by clients.

4.
Many algorithmic efforts have been made to address technical issues in designing a streaming media caching proxy. Typical of those are segment-based caching approaches that efficiently cache large media objects in segments, which reduces the startup latency while ensuring continuous streaming. However, few systems have been practically implemented and deployed. The implementation and deployment efforts are hindered by several factors: 1) streaming of media content in complicated data formats is difficult; 2) typical streaming protocols such as RTP often run on UDP; in practice, UDP traffic is likely to be blocked by firewalls at the client side due to security considerations; and 3) coordination between caching discrete object segments and streaming continuous media data is challenging. To address these problems, we have designed and implemented a segment-based streaming media proxy, called SProxy. This proxy system has the following merits. First, SProxy leverages existing Internet infrastructure to address flash crowds. The content server is freed of streaming duties while hosting streaming content through a regular Web server. Thus, UDP-based streaming traffic from SProxy suffers less dropping and no blocking. Second, SProxy streams and caches media objects in small segments determined by the object popularity, resulting in very low startup latency and significantly reduced network traffic. Finally, prefetching techniques are used to proactively preload uncached segments that are likely to be used soon, thus providing continuous streaming. SProxy has been extensively tested and we show that it provides high-quality streaming delivery in both local area networks and wide area networks (e.g., between Japan and the U.S.).
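SProxy's actual popularity-driven segmentation policy and prefetch scheduler are not specified in this abstract. The sketch below only illustrates the general pattern of serving cached segments and prefetching the likely next segment in the background, assuming fixed-size segments fetched with HTTP Range requests; all function and variable names are hypothetical.

```python
import threading
import urllib.request

SEGMENT_SIZE = 256 * 1024  # fixed-size segments for illustration;
                           # SProxy sizes segments by object popularity
cache = {}                 # (url, segment_index) -> bytes

def _download(url, index):
    start = index * SEGMENT_SIZE
    req = urllib.request.Request(
        url, headers={"Range": f"bytes={start}-{start + SEGMENT_SIZE - 1}"}
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

def prefetch(url, index):
    """Proactively preload a segment that is likely to be used soon."""
    if (url, index) in cache:
        return
    try:
        cache[(url, index)] = _download(url, index)
    except OSError:
        pass  # past the end of the object or a transient error; ignore here

def fetch_segment(url, index):
    """Serve one segment from cache when possible and prefetch the next."""
    key = (url, index)
    if key not in cache:
        cache[key] = _download(url, index)
    threading.Thread(target=prefetch, args=(url, index + 1), daemon=True).start()
    return cache[key]
```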

5.
Zari, M., Saiedian, H., Naeem, M. 《Computer》, 2001, 34(12): 30-37
Slow performance costs e-commerce Web sites as much as $4.35 billion annually in lost revenue. Perceived latency, the amount of time between when a user issues a request and when a response is received, is a critical issue. Research into improving performance falls into two categories: work on servers and work on networks and protocols. On the server side, previous work has focused on techniques for improving server performance. Such studies show how Web servers behave under a range of loads. These studies often suggest enhancements to application implementations and the operating systems those servers run on. On the network side, research has focused on improving network infrastructure performance for Internet applications. Studies focusing on network dynamics have resulted in several enhancements to HTTP, including data compression, persistent connections, and pipelining. These improvements are all part of HTTP 1.1. However, little work has been done on common latency sources that cause the overall delays that frustrate end users. The future of performance improvements lies in developing additional techniques to help implement efficient, scalable, and stable improvements that enhance the end-user experience.

6.
Research on adaptive Web caching
In recent years, Web caching has rapidly attracted the attention of researchers and industry because of its effectiveness in alleviating hot-spot phenomena on the Internet. Adaptive Web caching, which can adjust the distribution of hot data across the caching system according to users' varying access patterns and automatically balance the load of the whole system, has become a new focus of caching research. This paper surveys the state of research on adaptive Web caching and analyzes in detail the two main adaptive caching techniques, multicast-based and unicast-based. Finally, it points out open problems in adaptive Web caching research and directions worth further improvement.

7.
With the development of computer technology and high-speed networking, video-on-demand systems have become a reality and show huge potential demand. Video object previews give users a friendly, interactive viewing environment, and a scalable video server cluster can accommodate rapid future growth in user demand. Video object segmentation and prefix caching allow video files to be distributed, segment by segment and according to a caching policy, across a cooperative cluster of cache servers, which helps balance the cluster load and reduces the start-up latency seen by users. IP multicast is also introduced to reduce network bandwidth overhead. This paper proposes a hybrid scheme that combines cooperative caching with IP multicast to deliver video objects and describes how it works.

8.
Performance analysis of proxy Web caches
Using Web caching to improve Internet performance has become a mainstream research area; its principle is the same as that of the multi-level caches found in processors and file systems. Large-scale Web caching systems have become an important part of the Internet infrastructure in many countries. Starting from the trace logs of three proxy Web caches with different access scales, this paper analyzes statistical characteristics such as user access patterns, cache hit ratios, and cache server processing delays, and proposes a two-level cooperative Web cache cluster technique that combines distributed shared RAM with external storage to provide scalable, high-performance parallel Web caching services.

9.
This paper describes a scalable architecture for Web servers designed to cope with the ongoing increase in Internet requirements. In the paper, first the drawbacks of the traditional Web server architecture are discussed, and the need for an innovative solution is described. The proposed design addresses two of the parameters that can dramatically impact the performance of Web servers: (1) the need for a powerful data management system to cope with the increase in the complexity of users' requests; and (2) an efficient caching mechanism to reduce the amount of redundant traffic. In this direction, a scalable solution based on distributed database technology to replace the file system is described, and performance test results of the system are provided. This architecture is further extended by a collaborative caching system that builds up an adaptive hierarchy of caches for Web servers, which allows them to keep up with the changes in the traffic generated by the applications they are running. Finally, some improvements to the proposed architecture are addressed.

10.
Internet proxy caching is a principal and effective technique for addressing slow Web access, heavy server load, and network congestion. To design proxy caching schemes that are effective, scalable, robust, adaptive, and stable, this paper studies the key technical issues of proxy caching, including consistency policies, replacement policies, architecture, cache content selection, and prefetching, and presents solutions for each.

11.
Design, implementation, and evaluation of differentiated caching services
With the dramatic explosion of online information, the Internet is undergoing a transition from a data communication infrastructure to a global information utility. PDAs, wireless phones, Web-enabled vehicles, modern PCs, and high-end workstations can be viewed as appliances that "plug in" to this utility for information. The increasing diversity of such appliances calls for an architecture for performance differentiation of information access. The key performance accelerator on the Internet is the caching and content distribution infrastructure. While many research efforts have addressed performance differentiation in the network and on Web servers, providing multiple levels of service in the caching system has received much less attention. This work has two main contributions. First, we describe, implement, and evaluate an architecture for differentiated content caching services as a key element of the Internet content distribution architecture. Second, we describe a control-theoretical approach that lays well-understood theoretical foundations for resource management to achieve performance differentiation in proxy caches. An experimental study using the Squid proxy cache shows that differentiated caching services provide significantly better performance to the premium content classes.
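The paper's control-theoretical formulation is not reproduced in the abstract. The following is a rough sketch of one way per-class cache quotas could be adjusted by a simple proportional feedback rule so that measured hit ratios move toward per-class targets; it illustrates the idea of differentiated caching only, and the class names, gain, and update rule are assumptions, not the authors' controller.

```python
class ClassQuotaController:
    """Adjust per-class cache quotas so measured hit ratios track
    per-class targets (a crude proportional controller for illustration)."""

    def __init__(self, total_bytes, targets, gain=0.1):
        self.total = total_bytes
        self.targets = targets                      # e.g. {"premium": 0.8, "basic": 0.5}
        self.quota = {c: total_bytes / len(targets) for c in targets}
        self.gain = gain

    def update(self, measured_hit_ratio):
        # Grow the quota of classes that miss their target, shrink the rest,
        # then renormalize so quotas still sum to the total cache size.
        for c, target in self.targets.items():
            error = target - measured_hit_ratio[c]
            self.quota[c] *= 1 + self.gain * error
        scale = self.total / sum(self.quota.values())
        for c in self.quota:
            self.quota[c] *= scale
        return self.quota

ctrl = ClassQuotaController(10 * 2**30, {"premium": 0.8, "basic": 0.5})
print(ctrl.update({"premium": 0.65, "basic": 0.55}))
```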

12.
Dynamic Web applications have gained a great deal of popularity. Improving the performance of these applications has recently attracted the attention of many researchers. One of the most important techniques proposed for this purpose is caching, which can be done at different locations and within different stages of the process of generating a dynamic Web page. Most of the caching schemes proposed in the literature are lenient about the issue of consistency; they assume that users can tolerate receiving stale data. However, an important class of dynamic Web applications consists of those in which users always expect to get the freshest data available. Any caching scheme has to incur a significant overhead to be able to provide this level of consistency (i.e., strong consistency); the overhead may be so great that it neutralizes the benefits of caching. In this paper, three alternative architectures are investigated for dynamic Web applications that require strong consistency. A proxy caching scheme is designed and implemented, which performs caching at the level of database queries. This caching system is used in one of the alternative architectures. The performance experiments show that, despite the high overhead of providing strong consistency in database caching, this technique can improve the performance of dynamic Web applications, especially when there is a long network latency between clients and the (origin) server.
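As a rough illustration of caching at the level of database queries with strong consistency, the sketch below caches query results keyed by (SQL, parameters) and invalidates every cached result that read a table touched by a write before the write is acknowledged. The db object, method names, and table-tracking scheme are hypothetical and not taken from the paper.

```python
class QueryCache:
    """Cache query results at a proxy and invalidate them when a write
    touches one of the tables they read, so clients never see stale rows."""

    def __init__(self, db):
        self.db = db                    # object exposing execute(sql, params) -> rows
        self.results = {}               # (sql, params) -> rows
        self.tables_to_keys = {}        # table name -> set of cache keys

    def query(self, sql, params, tables):
        # params must be hashable, e.g. a tuple of bind values.
        key = (sql, params)
        if key not in self.results:
            self.results[key] = self.db.execute(sql, params)
            for t in tables:
                self.tables_to_keys.setdefault(t, set()).add(key)
        return self.results[key]

    def write(self, sql, params, tables):
        rows = self.db.execute(sql, params)
        # Strong consistency: drop every cached result that read a table
        # modified by this write before acknowledging the write.
        for t in tables:
            for key in self.tables_to_keys.pop(t, set()):
                self.results.pop(key, None)
        return rows
```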

13.
Proxy caching is an effective approach to reduce the response latency to client requests, web server load, and network traffic. Recently there has been a major shift in the usage of the Web. Emerging web applications require an increasing amount of server-side processing. Current proxy protocols do not support caching and execution of web processing units. In this paper, we present a weblet environment, in which processing units on web servers are implemented as weblets. These weblets can migrate from web servers to proxy servers to perform required computation and provide faster responses. A weblet engine is developed to provide the execution environment on proxy servers as well as web servers to facilitate uniform weblet execution. We have conducted thorough experimental studies to investigate the performance of the weblet approach. We modify the industry-standard e-commerce benchmark TPC-W to fit the weblet model and use its workload model for performance comparisons. The experimental results show that the weblet environment significantly improves system performance in terms of client response latency, web server throughput, and workload. Our prototype weblet system also demonstrates the feasibility of integrating the weblet environment with current web/proxy infrastructure.

14.
Web object caching is an important means of reducing traffic to Web servers and access latency. Although introducing Web caches greatly reduces server load, network congestion, and client access latency, it also raises the problem of cache consistency: the data a client obtains from the cache may not be the latest version. By analyzing existing cache consistency policies, this paper proposes a strong cache consistency algorithm suitable for the Web.
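The abstract does not describe the proposed algorithm itself. One common family of strong-consistency mechanisms is server-driven invalidation with leases; the sketch below illustrates that general approach only as background, not as this paper's algorithm, and all class and method names are invented for the example.

```python
import time

class LeaseServer:
    """Grant time-limited leases on objects; while a lease is valid the
    cache may serve the object without revalidation, and the server
    notifies unexpired lease holders before applying an update."""

    LEASE_SECONDS = 30

    def __init__(self):
        self.objects = {}          # url -> content
        self.leases = {}           # url -> {cache_id: expiry_time}

    def read(self, cache_id, url):
        expiry = time.time() + self.LEASE_SECONDS
        self.leases.setdefault(url, {})[cache_id] = expiry
        return self.objects.get(url), expiry

    def update(self, url, content, notify):
        # Invalidate every unexpired lease before the write completes,
        # so no cache keeps serving the old version.
        now = time.time()
        for cache_id, expiry in self.leases.pop(url, {}).items():
            if expiry > now:
                notify(cache_id, url)
        self.objects[url] = content
```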

15.
Using a central file server is good for interactive access to files, because of the coherency implied by a centralized design. In fact, within local area networks, this is a common case. However, distributed environments in use today may exhibit round-trip times on the order of 50 or 100 ms. This is a problem for interactive file access to a central file server because of the resulting access times. Although aggressive caching and loosely synchronized replicas may be used for distributed file access, there are cases where the better coherency provided by a central server is still desirable. In this paper, we present ZX, a distributed file system and protocol designed with latency in mind. It can use caching, but it does not require caching or batching to address latency issues. ZX relies on a novel channel-based file system interface. It includes find requests and leverages streaming requests to work well under high-latency conditions. Unlike other protocols designed for distributed access to a central server, ZX tolerates round-trip times on the order of 50 or 100 ms to access a central file server for interactive usage such as compiling shared sources, running binaries, editing documents, and other similar workloads. It can be used on UNIX using a FUSE adaptor while permitting native ZX speakers to run faster.

16.
To meet the server performance demands of the rapidly growing Internet, server cluster systems in which multiple servers share the load have become an effective structure for highly scalable, highly available network services. Most existing server cluster systems focus on improving or extending the servers themselves (both virtual and real servers) without considering extending the socket protocol itself. This paper proposes a new service-oriented thread control model that, by modifying file system calls, extends the socket protocol and builds a cluster-aware server cluster system, achieving good performance improvements.

17.
Design, implementation, and evaluation of a scalable single-image file system
The Dawning superserver is a typical cluster system, and COSMOS is the scalable single-image file system developed for it. This paper describes the design, implementation, and evaluation of the COSMOS prototype, focusing on key techniques such as dual-granularity cooperative caching, distributed metadata management, and network disk storage grouping. The prototype file system is evaluated with I/O benchmarks, and the results show that it achieves good scalability while maintaining a single system image.

18.
With the success of Internet video-on-demand (VoD) streaming services, the bandwidth required and the financial cost incurred by the host of the video server have become extremely large. Peer-to-peer (P2P) networks and proxies are two common ways of reducing the server workload. In this paper, we consider a peer-assisted Internet VoD system with proxies deployed at domain gateways. We formally present the video caching problem with the objectives of reducing the video server workload and avoiding inter-domain traffic, and we obtain its optimal solution. Inspired by the theoretical analysis, we develop a practical protocol named PopCap for Internet VoD services. Compared with previous work, PopCap does not require additional infrastructure support, is inexpensive, and copes well with the characteristic workloads of Internet VoD services. From simulation-based experiments driven by real-world data sets from YouTube, we find that PopCap can effectively reduce the video server workload and therefore provides superior performance in terms of the video server's traffic reduction.

19.
Leung, K. Y., Wong, Eric W. M., Yeung, K. H. 《World Wide Web》, 2004, 7(3): 297-314
Content Delivery Networks (CDN) have been used on the Internet to cache media content so as to reduce the load on the original media server, network congestion, and latency. Due to the large size of media content compared to normal web objects, current caching algorithms used in the Internet are no longer suitable. This paper presents a high-performance prefetch system that accommodates user time-varying behavior. A hybrid caching technique, which combines prefetch and replacement algorithms, is also introduced. The robustness of the cache system against imperfect user request information is evaluated using three request noise models. Two prefetch performance indices are also presented to help content administrators in deciding when to update the user request profile for caching algorithms.
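The hybrid technique's exact prefetch and replacement rules are not given in the abstract. The sketch below combines the two ingredients in the simplest way: requested and profile-predicted segments are admitted, and the least popular cached segments are evicted first. The class name, popularity counter, and eviction rule are assumptions for illustration, not the paper's algorithm.

```python
from collections import Counter

class HybridMediaCache:
    """Admit the segments a user request profile predicts will be needed
    next (prefetch) and evict the least popular entries first (replacement)."""

    def __init__(self, capacity, fetch):
        self.capacity = capacity
        self.fetch = fetch               # callable: segment_id -> bytes
        self.store = {}
        self.popularity = Counter()

    def request(self, segment_id, predicted_next=()):
        self.popularity[segment_id] += 1
        if segment_id not in self.store:
            self._admit(segment_id)
        # Prefetch the segments the request profile says are likely next.
        for seg in predicted_next:
            if seg not in self.store:
                self._admit(seg)
        return self.store[segment_id]

    def _admit(self, segment_id):
        # Make room by evicting the least popular cached segments.
        while len(self.store) >= self.capacity:
            victim = min(self.store, key=lambda s: self.popularity[s])
            del self.store[victim]
        self.store[segment_id] = self.fetch(segment_id)
```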

20.
A common way to address scalability requirements of distributed services is to employ server replication and client caching of objects that encapsulate the service state. The performance of such a system could depend very much on the protocol implemented by the system to maintain consistency among object copies. We explore scalable consistency protocols that never require synchronization and communication between all nodes that have copies of related objects. We achieve this by developing a novel approach called local consistency (LC). LC-based protocols can provide increased flexibility and efficiency by allowing nodes control over how and when they become aware of updates to cached objects. We develop two protocols for implementing strong consistency using this approach and demonstrate that they scale better than a traditional invalidation-based consistency protocol along the system load and geographic distribution dimensions of scale.
