Similar Literature
20 similar documents retrieved (search time: 125 ms)
1.
符青云  刘心松 《计算机工程》2007,33(11):120-122
This paper proposes a high-performance distributed Web Proxy model based on a global in-memory object buffer pool. By building a system-wide buffer pool similar to a distributed shared-memory system and placing the most frequently accessed Web objects of the distributed Web Proxy system in it, the average service time of Web objects is shortened and system performance improves. System performance was simulated using access logs from real proxy servers; the results show that the mechanism improves the performance of distributed Web Proxy servers.
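The abstract gives no implementation details, but the core idea, keeping only the hottest Web objects in a shared in-memory pool and serving everything else from slower storage, can be illustrated with the following Python sketch (class, method, and parameter names are hypothetical, not from the paper):

```python
from collections import Counter

class HotObjectPool:
    """Minimal sketch of a buffer pool that keeps only the most frequently
    accessed Web objects in memory (all names are illustrative)."""

    def __init__(self, capacity):
        self.capacity = capacity   # max number of objects kept in the pool
        self.hits = Counter()      # access counts per URL
        self.pool = {}             # URL -> cached object body

    def get(self, url, fetch_from_disk):
        self.hits[url] += 1
        if url in self.pool:       # served from the in-memory pool
            return self.pool[url]
        body = fetch_from_disk(url)   # fall back to the slower store
        self._maybe_admit(url, body)
        return body

    def _maybe_admit(self, url, body):
        # Admit the object if there is room, or if it is now hotter than
        # the coldest object currently pooled.
        if len(self.pool) < self.capacity:
            self.pool[url] = body
            return
        coldest = min(self.pool, key=lambda u: self.hits[u])
        if self.hits[url] > self.hits[coldest]:
            del self.pool[coldest]
            self.pool[url] = body
```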

2.
To solve the problem of integrating heterogeneous crop models in a distributed environment and to reduce the response time caused by large data transfers between model Web services, this paper takes the adaptability-evaluation model Web service as an example and designs a composite model Web service with optimization schemes based on .NET Caching data caching and ZIP data compression. Response-time tests in a network environment show that, as the number of sites and concurrent users grows, the .NET Caching-based scheme shortens response time by 45% to 86%, addressing the long response times of model Web services, while the ZIP-compression-based scheme yields relatively little improvement. Finally, a cropping-system design service system was developed based on SOA and WebGIS, providing wheat adaptability evaluation, production-potential evaluation, safe-planting zoning, map operation, and thematic mapping functions. A test of wheat light-temperature production potential in Jiangsu Province shows that the potential is generally higher in the north and lower in the south, consistent with the actual situation.
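The paper's optimization relies on .NET Caching and ZIP compression; as a language-neutral illustration of the same pattern, the Python sketch below combines a simple TTL response cache with zlib compression. The function names, the dictionary cache, and the 300-second TTL are assumptions standing in for .NET Caching and ZIP, not values from the paper.

```python
import time
import zlib

_cache = {}      # (site, params) -> (expiry time, compressed payload)
CACHE_TTL = 300  # seconds; illustrative value only

def evaluate(site, params, run_model):
    """Return a compressed model result, serving repeated requests from cache.
    `run_model` stands in for the (expensive) crop-model Web service call."""
    key = (site, tuple(sorted(params.items())))
    now = time.time()
    entry = _cache.get(key)
    if entry and entry[0] > now:          # cache hit: skip the model run
        return entry[1]
    result = run_model(site, params)      # expensive computation
    payload = zlib.compress(result.encode("utf-8"))  # shrink the response body
    _cache[key] = (now + CACHE_TTL, payload)
    return payload
```

A cache hit skips the model run entirely, which is where most of the response-time saving in the cached case would come from.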

3.
Web caching: A way to improve web QoS (cited 4 times: 0 self-citations, 4 by others)
As the Internet and World Wide Web grow at a fast pace, it is essential that the Web's performance keep up with increased demand and expectations. Web caching technology has been widely accepted as one of the effective approaches to alleviating Web traffic and increasing the Web Quality of Service (QoS). This paper provides an up-to-date survey of the rapidly expanding Web caching literature. It discusses state-of-the-art Web caching schemes and techniques, with emphasis on recent developments such as differentiated Web services, heterogeneous caching network structures, and dynamic content caching.

4.
Microsoft and Netscape have recently released new versions of their Web proxy servers; using either of them merely for Web proxying, however, feels like shooting a mosquito with an anti-aircraft gun. The basic function of Netscape Proxy Server 3.5 and Microsoft Proxy Server 2.0 is to store frequently accessed content in a cache and deliver it to browsers directly from the cache, which is faster than downloading it from the Internet. They can also control access to the Internet and filter out unwanted content such as Java applets…

5.
1 Introduction. As Internet technology and applications develop rapidly in both breadth and depth, people not only share information online but also attempt to share computing power, services, and every other shareable resource at large scale over the ubiquitous Internet. However, because of network bottlenecks and the relative concentration of hot resources, this rapid growth has also brought the "World Wide Wait" problem that users often experience. An effective remedy is Web caching: storing frequently accessed, popular Web content in cache server systems located between Web servers and end users. Web caching is currently a rapidly developing field in which both academia and industry are investing considerable effort. When properly designed and deployed, Web caching can bring many benefits, such as saving a great deal of…

6.
A Survey of Web Caching Techniques (cited 24 times: 2 self-citations, 24 by others)
Web caching stores Web content at key nodes (including locally). It reduces network bandwidth consumption, lowers hardware cost, improves response time, and increases end-user efficiency. This paper summarizes currently popular caching techniques in terms of the classification of Web caches, performance metrics, consistency policies, and replacement algorithms.

7.
Blue Coat Systems, a leading vendor of application delivery networking technology, recently announced the Blue Coat ProxyAV 1200 appliance, which performs inline scanning for malware at the Internet gateway, extending its Blue Coat ProxyAV product line. The ProxyAV 1200 appliance scans all network content, including files downloaded from Web 2.0 sites through user authentication and authentication tokens, …

8.
Microsoft Proxy Server 2.0 is a huge functional leap over the original 1.0 release. The new features make Proxy Server 2.0 one of the best choices for corporate networks that need a proxy server or firewall. As a proxy server, it connects the internal network to the external network. The old 1.0 release implemented only the Web Proxy Server and WinSock Proxy Server, whereas 2.0 adds SOCKS 4 support with a Socks Proxy Server. This is good news for non-Windows clients, which can now do far more (for example RealAudio) than merely browse the Internet through the CERN-standard Web Proxy. Proxy chaining, absent in 1.0, is not only implemented in the new version but also substantially extended…

9.
A Study of Access-Characteristic Distributions of WWW Traffic (cited 8 times: 0 self-citations, 8 by others)
WWW traffic appears as a sequence of accesses, and the logs of Web servers and proxy servers record the process and characteristics of these access sequences well. Characterizing WWW traffic is the basis for Web server and Web middleware research and for synthesizing Web workloads. This paper analyzes the logs of one Web server and two proxy servers, focusing on the probability distribution of Web page requests, the size distribution of static Web documents (including transferred documents), and the distribution of access distances of static Web documents, and compares the results with those reported in related work. Experiments also confirm that when document size is used as the basis for Web cache replacement, document access frequency should be taken into account as well.

10.
武捷东  吕述望 《计算机工程》2007,33(16):101-103
A simple media content service prototype is implemented on a mobile-agent-based active network. For this application, the key techniques for implementing the prototype in an application-layer active-node environment are discussed, and the execution processes of dynamic and static Proxies are analyzed. Experiments and evaluation of issues including service customization and the efficiency of active messages, conducted with respect to the Proxy's characteristics, show that the Proxy outperforms the traditional service model in service customization and demonstrate the usability of Proxy migration.

11.
This paper proposes a novel contribution in the Web caching area, specifically in Web cache replacement: the so-called intelligent client-side Web caching scheme (ICWCS). The approach splits the client-side cache into two caches: a short-term cache that receives Web objects directly from the Internet, and a long-term cache that receives Web objects from the short-term cache. Objects in the short-term cache are removed by the least recently used (LRU) algorithm when the short-term cache is full. More significantly, when the long-term cache saturates, a neuro-fuzzy system is employed to manage its contents. The proposed solution is validated by trace-driven simulation, and the results are compared with the least recently used (LRU) and least frequently used (LFU) algorithms, the most common policies for evaluating Web caching performance. The simulation results reveal that the proposed approach improves Web caching performance in terms of hit ratio (HR) by up to 14.8% and 17.9% over LRU and LFU. In terms of byte hit ratio (BHR), performance improves by up to 2.57% and 26.25%, and in terms of latency saving ratio (LSR), by 8.3% and 18.9% over LRU and LFU, respectively.
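The abstract describes the structure but not the code. The sketch below illustrates only the two-level arrangement, with an LRU short-term cache feeding a long-term cache; the authors' neuro-fuzzy scoring of long-term contents is replaced here by a plain access-count score, and all names and capacities are hypothetical.

```python
from collections import OrderedDict

class TwoLevelCache:
    """Structural sketch of a split client-side cache: LRU short-term cache
    in front of a scored long-term cache (access count stands in for the
    paper's neuro-fuzzy scoring)."""

    def __init__(self, short_cap, long_cap):
        self.short = OrderedDict()   # URL -> object, in LRU order
        self.long = {}               # URL -> (object, access count)
        self.short_cap, self.long_cap = short_cap, long_cap

    def get(self, url):
        if url in self.short:
            self.short.move_to_end(url)          # refresh LRU position
            return self.short[url]
        if url in self.long:
            obj, count = self.long[url]
            self.long[url] = (obj, count + 1)
            return obj
        return None                              # miss: caller fetches from the network

    def put(self, url, obj):
        self.short[url] = obj                    # new objects enter the short-term cache
        self.short.move_to_end(url)
        if len(self.short) > self.short_cap:
            old_url, old_obj = self.short.popitem(last=False)   # LRU victim
            self._demote(old_url, old_obj)

    def _demote(self, url, obj):
        # Evicted short-term objects move to the long-term cache; when it is
        # full, the entry with the lowest score (here: access count) is dropped.
        if len(self.long) >= self.long_cap:
            victim = min(self.long, key=lambda u: self.long[u][1])
            del self.long[victim]
        self.long[url] = (obj, 1)
```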

12.
Proxy caches are essential to improve the performance of the World Wide Web and to reduce user-perceived latency. Appropriate cache management strategies are crucial to achieving these goals. In our previous work, we introduced Web object-based caching policies. A Web object consists of the main HTML page and all of its constituent embedded files. Our studies have shown that these policies improve proxy cache performance substantially. In this paper, we propose a new Web object-based policy to manage the storage system of a proxy cache. We propose two techniques to improve storage system performance. The first technique prefetches the related files belonging to a Web object from disk to main memory. This prefetching improves performance because most of the files can then be provided from main memory rather than from the proxy disk. The second technique stores the Web object members in contiguous disk blocks in order to reduce disk access time. We used trace-driven simulations to study the performance improvements obtainable with these two techniques. Our results show that the first technique by itself provides up to a 50% reduction in hit latency, the delay involved in providing a hit document from the proxy. An additional 5% improvement can be obtained by incorporating the second technique.
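As a rough illustration of the first technique (prefetching a Web object's embedded files from disk into memory once its main page is requested), here is a minimal Python sketch; `serve`, `read_from_disk`, and `embedded_files_of` are hypothetical stand-ins for the proxy's storage layer and per-object metadata, not the authors' implementation.

```python
# When the main HTML page of a Web object is requested, prefetch its embedded
# files from disk into memory so later requests are served from RAM.
memory_cache = {}                      # URL -> file contents held in RAM

def serve(url, read_from_disk, embedded_files_of):
    """Serve `url`, prefetching the other members of its Web object."""
    if url in memory_cache:            # hit latency avoided: no disk access
        return memory_cache[url]
    body = read_from_disk(url)
    memory_cache[url] = body
    for member in embedded_files_of(url):      # prefetch the rest of the Web object
        if member not in memory_cache:
            memory_cache[member] = read_from_disk(member)
    return body
```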

13.
Performance Analysis of Proxy Web Caches (cited 3 times: 0 self-citations, 3 by others)
Using Web caching to improve current Internet performance has become a mainstream research area; it works on the same principle as the multi-level caches in processors and file systems. Large-scale Web caching systems have become an important part of the Internet infrastructure in many countries. Starting from the trace logs of three proxy Web caches serving access workloads of different scales, this paper analyzes statistical characteristics such as user access patterns, cache hit ratio, and cache-server processing delay, and proposes a two-level cooperative Web cache cluster technique that combines distributed shared RAM with external storage and can provide scalable, high-performance parallel Web caching services.

14.
Vakali, Athena. World Wide Web, 2001, 4(4): 277-297
Access to and circulation of Web objects have been facilitated by the design and implementation of effective caching schemes. Web caching has been integrated in prototype and commercial Web-based information systems in order to reduce overall bandwidth and increase the system's fault tolerance. This paper presents an overview of a series of Web cache replacement algorithms based on the idea of preserving a history record for cached Web objects. The number of references to Web objects over a certain time period is a critical parameter for cache content replacement. The proposed algorithms are simulated and evaluated under a real workload of Web cache traces provided by a major (Squid) proxy cache server installation. Cache and byte hit rates are given with respect to different cache sizes and a varying number of request workload sets, and it is shown that the proposed cache replacement algorithms improve both cache and byte hit rates.

15.
An Effective Web Proxy Cache Replacement Algorithm (cited 2 times: 0 self-citations, 2 by others)
A well-designed Web cache replacement policy makes the most effective use of network resources. This paper designs an efficient Web cache replacement policy, LFRU, intended to obtain network resources in a better way and to improve the performance and quality of service of Web caches. Experimental results show that the policy achieves high document hit ratios and byte hit ratios.

16.
Analysis of Web traffic shows that users' access patterns for Web objects follow Zipf's law or a Zipf-like law. When designing a Web cache, the designer can use Zipf's law to approximate the cache size required to achieve a target object hit ratio. Zipf's law therefore provides an important basis for the design of Web cache architectures; an appropriately sized cache combined with a P-LFU replacement policy can yield a very high Web cache hit ratio.
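The calculation implied here is straightforward: under a Zipf popularity model, the best achievable hit ratio of a cache holding the C most popular of N objects is the total probability mass of those C objects. The Python sketch below estimates the cache size needed for a target hit ratio; the parameter values in the example are arbitrary.

```python
# Assuming requests follow a Zipf distribution with exponent alpha over
# n_objects objects, the hit ratio of a cache holding the C most popular
# objects is the combined probability mass of those C objects.
def zipf_cache_size(n_objects, alpha, target_hit_ratio):
    weights = [1.0 / (rank ** alpha) for rank in range(1, n_objects + 1)]
    total = sum(weights)
    mass = 0.0
    for cached, w in enumerate(weights, start=1):
        mass += w / total
        if mass >= target_hit_ratio:
            return cached              # objects needed to reach the target hit ratio
    return n_objects

# Example: with 100,000 objects and alpha = 0.8, how many objects must be
# cached (with an ideal replacement policy) to reach a 60% hit ratio?
print(zipf_cache_size(100_000, 0.8, 0.6))
```

The more skewed the popularity (larger alpha), the smaller the cache needed to reach the same target hit ratio.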

17.
In information-centric networking, in-network caching has the potential to improve network efficiency and content distribution performance by satisfying user requests with cached content rather than downloading the requested content from remote sources. In this respect, users who request, download, and keep the content may be able to contribute to in-network caching by sharing their downloaded content with other users in the same network domain (i.e., user-assisted in-network caching). In this paper, we examine various aspects of user-assisted in-network caching in the hopes of efficiently utilizing user resources to achieve in-network caching. Through simulations, we first show that user-assisted in-network caching has attractive features, such as self-scalable caching, a near-optimal cache hit ratio (that can be achieved when the content is fully cached by the in-network caching) based on stable caching, and performance improvements over in-network caching. We then examine the caching strategy of user-assisted in-network caching. We examine three caching strategies based on a centralized server that maintains all content availability information and informs each user of what to cache. We also examine three caching strategies based on each user's content availability information. We first show that the caching strategy affects the distribution of upload overhead across users and the number of cache hits in each segment. One interesting observation is that, even with a small storage space (i.e., 0.1% of the content size per user), the centralized and distributed approaches improve the cache hit ratio by 50% and 45%, respectively. With an overall view of caching information, the centralized approach can achieve a higher cache hit ratio than the distributed approach. Based on this observation, we discuss a distributed approach with a larger view of caching information than the distributed approach and, through simulations, confirm that a larger view leads to a higher cache hit ratio. Another interesting observation is that the random distributed strategy yields comparable performance to more complex strategies.

18.
Caching and prefetching play an important role in improving Web access performance in wireless environments. This paper studies Web caching and prefetching mechanisms for wireless LANs. Drawing on data mining and information theory respectively, it proposes prediction algorithms based on sequence mining and deferred updates, and designs a context-aware prefetching algorithm and a benefit-driven cache replacement mechanism. These algorithms have been implemented in the Web caching system OnceEasyCache. Performance evaluation results show that integrating them effectively improves the cache hit ratio and the latency saving ratio.

19.
In Device-to-Device (D2D) caching networks, the number of replicas of cached files is an important factor constraining caching efficiency: too many replicas prevent cache resources from being fully utilized, while too few make popular files hard to obtain. To address replica placement in D2D caching networks, this paper proposes a cache replica placement algorithm (CRP) that maximizes the system cache hit ratio using convex optimization theory. Simulation results show that, compared with existing replica placement algorithms, the proposed algorithm effectively improves the overall cache hit ratio of D2D caching networks.
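The paper formulates replica placement as a convex program, but the abstract does not give the model. The Python sketch below only illustrates the general idea under an assumed hit model (a file with r cached replicas is found locally with probability 1 - (1 - q)^r), allocating cache slots greedily by marginal hit-ratio gain; since that gain is concave in r, greedy allocation is optimal for this simplified objective. Function names, the popularity values, and q are all assumptions.

```python
# Illustrative greedy stand-in for replica placement under a cache budget.
def place_replicas(popularity, cache_budget, q=0.1):
    replicas = [0] * len(popularity)

    def marginal_gain(i):
        # Extra hit probability contributed by giving file i one more replica.
        r = replicas[i]
        return popularity[i] * ((1 - (1 - q) ** (r + 1)) - (1 - (1 - q) ** r))

    for _ in range(cache_budget):                  # hand out one cache slot at a time
        best = max(range(len(popularity)), key=marginal_gain)
        replicas[best] += 1
    return replicas

# Example: 5 files with skewed popularity and 20 cache slots in total.
print(place_replicas([0.4, 0.25, 0.15, 0.12, 0.08], 20))
```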

20.
Nowadays, server-side Web caching has become an important technique for reducing User Perceived Latency (UPL). In large-scale multimedia systems, many Web proxies connected to a multimedia server can cache some of the most popular multimedia objects and respond to requests for them. Multimedia objects have particular characteristics, e.g., strict QoS requirements. Hence, even efficient conventional caching strategies based on cache hit ratio, meant for non-multimedia objects, confront problems when dealing with multimedia objects. If we consider additional proxy resources besides cache space, say bandwidth, we can readily observe that high hit ratios may deteriorate overall system performance. In this paper, we propose a novel placement model for networked multimedia systems, referred to as the Hk/T model, which considers the combined influence of arrival rate, size, and playback time to select the objects to be cached. Based on this model, we propose an innovative Web caching algorithm, named the ART-Greedy algorithm, which can balance load among the proxies and achieve the minimum Average Response Time (ART) of requests. Our experimental results conclusively demonstrate that the ART-Greedy algorithm significantly outperforms the most popular and commonly used LFU (Least Frequently Used) algorithm, and achieves better performance than the byte-hit algorithm when system utilization is medium or high.
