Similar Documents
19 similar documents found (search time: 197 ms)
1.
Existing Web caches are implemented mainly with traditional in-memory caching algorithms, but because Web request workloads are heterogeneous, these traditional replacement algorithms do not work effectively in the Web environment. This paper examines the basis on which Web cache replacement decisions are made, analyzes the shortcomings of earlier replacement algorithms, and, considering how a Web document's size, access cost, access frequency, access interest, and time of last access affect replacement, introduces the concept of a Web cache object's role and builds a new, high-precision object-role-based Web cache replacement algorithm (ORB). Using NASA and DEC proxy-server traces, ORB is compared in simulation with the LRU, LFU, SIZE, and Hybrid algorithms, and the results show that ORB outperforms them.
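The abstract names the factors ORB weighs but not how they are combined. Below is a minimal sketch of a multi-factor eviction score assuming a simple weighted combination; the weights, the `interest` score, and the functional form are illustrative assumptions, not ORB's actual formula.

```python
import time

def cache_value(size, fetch_cost, frequency, interest, last_access,
                w_cost=1.0, w_freq=1.0, w_int=1.0, now=None):
    """Illustrative multi-factor value; higher means more worth keeping.

    Combines document size, fetch cost, access frequency, an interest score,
    and recency. The weighted form below is an assumption, not ORB's formula.
    """
    now = now or time.time()
    age = max(now - last_access, 1e-6)   # seconds since the last access
    # Larger, colder objects get a smaller value and are evicted first.
    return (w_cost * fetch_cost + w_freq * frequency + w_int * interest) / (size * age)

def evict(cache):
    """Evict the object with the smallest value.
    `cache` maps key -> dict with the attributes cache_value() expects."""
    victim = min(cache, key=lambda k: cache_value(**cache[k]))
    del cache[victim]
    return victim
```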

2.
王庆桦 《计算机仿真》2020,37(2):294-298
To address the poor router cache-hit performance of traditional distributed cache replacement algorithms, a distributed cache replacement algorithm for a dynamic data-processing platform is proposed. The platform's distributed cached data is first described and a cache architecture table is built; this table is updated as the cache changes, and the continuously updated table is used to improve a weighted replacement algorithm by adding the cache object itself as a parameter. The improved algorithm computes each cache object's updated weight and its weight cost, and the tail element of an LRU list is replaced according to that cost; when an element already in the cache is hit, or a newly requested element arrives, the LRU list is updated and rebuilt, and the rebuilt list is used to form the distributed cache replacement policy, completing the construction of the algorithm. To demonstrate its advantages, the algorithm is compared with a traditional distributed cache replacement algorithm; the experimental results show that its router cache-hit performance is better, making it more suitable for distributed cache replacement on a dynamic data-processing platform.

3.
A New Replacement Algorithm for Web Caching
林永旺  张大江  钱华林 《软件学报》2001,12(11):1710-1715
Existing Web caches are implemented mainly with traditional in-memory caching algorithms, yet because Web request workloads are heterogeneous, these traditional replacement algorithms do not work effectively in the Web environment. This paper first formulates the problem as an optimization model and shows that the key to a replacement algorithm is correctly capturing the access pattern of Web traffic. On top of a Poisson arrival model, a new caching policy is proposed: the Least Normalized Cost (LNC) replacement algorithm. Besides a Web document's mean reference time, time since last access, size, and value per unit size, the new algorithm also accounts for the dynamically changing access rate of Web traffic. Trace-driven performance experiments show that LNC outperforms the other major algorithms.
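The abstract lists the quantities LNC combines but not its formula. Below is a minimal sketch of one plausible "normalized cost" under a Poisson-style rate estimate; damping the rate by idle time and the `decay` parameter are assumptions for illustration, not the paper's definition.

```python
def lnc_value(cost, size, mean_interval, elapsed, decay=0.5):
    """Illustrative normalized cost of keeping one document.

    cost:          cost to re-fetch the document
    size:          document size in bytes
    mean_interval: mean time between past references (Poisson model)
    elapsed:       time since the last reference
    decay:         how strongly a long idle period lowers the rate (assumption)
    """
    est_rate = 1.0 / (mean_interval + decay * elapsed)   # estimated accesses per unit time
    # Expected cost saved per byte of cache space per unit time.
    return est_rate * cost / size

def lnc_evict(cache):
    """Evict the document with the least normalized cost.
    `cache` maps key -> dict with the attributes lnc_value() expects."""
    victim = min(cache, key=lambda k: lnc_value(**cache[k]))
    del cache[victim]
    return victim
```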

4.
Cache replacement algorithms play a key role in optimizing the performance of network-processing applications, but existing research on cache replacement for network traffic focuses mainly on algorithm design and domain-specific applications; few studies analyze and compare the performance of existing replacement algorithms in a network setting. This paper analyzes and compares six major cache replacement algorithms. By examining the recency and frequency characteristics of network traffic, it provides empirical grounds for replacement algorithms based on Least Recently Used (LRU) and Least Frequently Used (LFU). Experiments in both a simulation environment and a real system show that LRU-like algorithms suit network traffic better than LFU, and that when the cache is large, random replacement suits multi-core environments better than LRU.

5.
To address the lack of access-frequency prediction in the GDSF replacement algorithm, a collaborative-filtering-based GDSF cache replacement algorithm (GDSF-CF) is proposed. The algorithm considers the similarity among Web objects and users' inter-access times, uses collaborative filtering to generate a predicted access frequency for each Web object, and improves the GDSF objective function with a Zipf's-law parameter. When replacement is needed, the objective function is used to compute the cache value of every Web object in the cache, and the object with the smallest value is evicted. Simulation results show that the algorithm yields notable improvements in both hit ratio (HR) and byte hit ratio (BHR).
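For context, the standard GDSF priority is K(p) = L + F(p) * C(p) / S(p), where L is an aging term, F the access frequency, C the fetch cost, and S the size. The sketch below assumes the observed frequency is simply replaced by the collaborative-filtering prediction and that the Zipf parameter enters as an exponent; both are assumptions, since the paper's exact modified objective is not given here.

```python
class GDSFCFCache:
    """Sketch of GDSF with a predicted frequency. The collaborative-filtering
    predictor itself and the placement of the Zipf parameter are assumptions."""

    def __init__(self, capacity_bytes, zipf_alpha=0.8):
        self.capacity = capacity_bytes
        self.used = 0
        self.L = 0.0                      # aging factor, as in standard GDSF
        self.alpha = zipf_alpha           # hypothetical Zipf-law exponent
        self.objects = {}                 # key -> (size, cost, pred_freq)

    def priority(self, key):
        size, cost, pred_freq = self.objects[key]
        # Standard GDSF key, with the observed frequency replaced by the
        # collaborative-filtering prediction raised to a Zipf exponent (assumption).
        return self.L + (pred_freq ** self.alpha) * cost / size

    def insert(self, key, size, cost, pred_freq):
        self.objects[key] = (size, cost, pred_freq)
        self.used += size
        while self.used > self.capacity:
            victim = min(self.objects, key=self.priority)
            self.L = self.priority(victim)          # age the cache on eviction
            self.used -= self.objects[victim][0]
            del self.objects[victim]
```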

6.
Web caching is an effective way to improve Web performance, cache management is the core of Web caching, and mathematical models of Web access characteristics are the basis of effective cache management. This paper builds a mathematical model from the access characteristics of Web cache traffic, designs and implements a Web cache traffic generator (WebTraffsim), uses a two-level proxy-cache structure to test the access characteristics and performance of Web cache traffic, and evaluates and analyzes the performance of the LRU, LFU, and LRV replacement algorithms.

7.
Hierarchical Web cache structures play an important role in avoiding single points of failure and improving cache performance. This paper studies a hierarchical Web cache model, proposes three request-dispatching modes, and analyzes the model's performance with a cost function. Based on common characteristics of Web access, the experiments generate synthetic logs through mathematical modeling and simulate cache performance when different layers use different replacement algorithms (LRU, LFU, GDS). The results show that the popularity of the high-frequency and low-frequency regions of the synthetic logs follows Zipf's first and second laws, respectively, so the logs share the characteristics of real logs and can be used to simulate user requests when evaluating the hierarchical model; when the lower-level proxy cache uses LFU or LRU and the upper-level proxy cache uses GDS, the two-level cache model performs well in both hit ratio and byte hit ratio.

8.
An Adaptive Cache-Object Replacement Algorithm on the .NET Platform
Caching is an important means of improving Web application performance. The .NET platform provides data caching and page caching; building on the cache objects it already implements, this paper designs an adaptive replacement algorithm for cache objects. The algorithm uses a combined priority based on a cache object's value and its access frequency, defines corresponding eviction rules for cache objects, and fully accounts for how dependencies between cache objects affect priority. Its performance is tested on object hit ratio and overall system response time, and the tests show a considerable improvement over the least-value and least-frequently-used algorithms.

9.
Traditional cache replacement algorithms perform poorly because they cannot adapt to the streaming access behavior of applications. This paper designs a period-detection-based prediction method, analyzes the regularity of memory-access reuse distances and the complexity of streaming accesses, and proposes RDP, a reuse-distance-prediction algorithm that adapts to both simple and complex stream access patterns. The basic idea of RDP is to predict reuse distances, dynamically maintain reuse-distance counters, dynamically adjust the eviction order of cached data, and reduce storage overhead through stream sampling. Experimental results show that RDP adapts well to the diverse stream access patterns in programs; its overall performance is better than the LRU and DIP algorithms, and on a 32 MB cache it reduces cache misses by 27.5% on average compared with traditional LRU.
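As a point of reference, the reuse distance of an access is the number of distinct blocks touched since the previous access to the same block. The sketch below measures reuse distances from a trace and evicts the block with the largest predicted distance; using the last observed distance as the prediction, and omitting RDP's period detection and stream sampling, are simplifying assumptions made for illustration.

```python
from collections import OrderedDict

def reuse_distances(trace):
    """Reuse distance of each access: number of distinct addresses touched since
    the previous access to the same address (None on a first access)."""
    stack = OrderedDict()          # most recently used address last
    out = []
    for addr in trace:
        if addr in stack:
            # Distance = how many distinct addresses were touched more recently.
            out.append(len(stack) - list(stack).index(addr) - 1)
            del stack[addr]
        else:
            out.append(None)
        stack[addr] = True
    return out

def rdp_evict(cache, predicted_rd):
    """Evict the cached block expected to be reused furthest in the future.
    `cache` is a set of blocks; `predicted_rd` maps block -> last observed
    reuse distance (a simple stand-in predictor, not RDP's actual one)."""
    victim = max(cache, key=lambda b: predicted_rd.get(b, float("inf")))
    cache.discard(victim)
    return victim
```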

10.
To further improve the overall performance of proxy caches in content delivery networks, this paper integrates the consistency policy into the replacement policy and proposes ACRA, an efficient adaptive consistency-aware cache replacement algorithm for proxies. The consistency policy adopts an adaptive TTL mechanism; the replacement policy is derived from an analysis of Web traces that identifies two access characteristics of Web content, the probability that an access recurs and the distribution of accessed content sizes, and uses them to build a cost formula that serves as the value of cached content in the replacement criterion. Trace-driven simulation shows that ACRA outperforms several traditional replacement algorithms on the stale-hit ratio.

11.
Web caching has been proposed as an effective solution to the problems of network traffic and congestion, Web object access and Web load balancing. This paper presents a model for optimizing Web cache content by applying either a genetic algorithm or an evolutionary programming scheme for Web cache content replacement. Three policies are proposed for each of the genetic algorithm and the evolutionary programming techniques, in relation to object staleness factors and retrieval rates. A simulation model is developed and long-term trace-driven simulation is used to experiment on the proposed techniques. The results indicate that all evolutionary techniques are beneficial to cache replacement, compared to the conventional replacement applied in most Web cache servers. Under an appropriate objective function the genetic algorithm has proven to be the best of all approaches with respect to cache hit and byte hit ratios.

12.
Vakali  Athena 《World Wide Web》2001,4(4):277-297
Accessing and circulation of Web objects has been facilitated by the design and implementation of effective caching schemes. Web caching has been integrated into prototype and commercial Web-based information systems in order to reduce the overall bandwidth and increase the system's fault tolerance. This paper presents an overview of a series of Web cache replacement algorithms based on the idea of preserving a history record for cached Web objects. The number of references to Web objects over a certain time period is a critical parameter for cache content replacement. The proposed algorithms are simulated and evaluated under a real workload of Web cache traces provided by a major (Squid) proxy cache server installation. Cache and byte hit rates are given with respect to different cache sizes and a varying number of request workload sets, and it is shown that the proposed cache replacement algorithms improve both cache and byte hit rates.

13.
With the recent explosion in usage of the World Wide Web, Web caching has become increasingly important. However, due to the non-uniform cost/size property of data objects in this environment, designing an efficient caching algorithm becomes an even more difficult problem compared to traditional caching problems. In this paper, we propose the Least Expected Cost (LEC) replacement algorithm for Web caches, which provides a simple and robust framework for the estimation of reference probability and fair evaluation of non-uniform Web objects. LEC evaluates a Web object based on its cost per unit size multiplied by the estimated reference probability of the object. This results in a normalized assessment of the contribution to the cost-savings ratio, leading to a fair replacement algorithm. We show that this normalization method finds an optimal solution under some assumptions. Trace-driven simulations with actual Web cache logs show that LEC offers the performance of caches more than twice its size compared with the other algorithms we considered. Nevertheless, it is simple, having no parameters to tune. We also show how the algorithm can be effectively implemented as a Web cache replacement module.
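A small worked example of the LEC score described above (cost per unit size times estimated reference probability). The objects, costs, and probabilities are made up for illustration, and how the reference probability is estimated is not taken from the paper.

```python
# LEC score: expected cost saving per byte of cache space occupied.
def lec(cost, size_kb, ref_prob):
    return cost / size_kb * ref_prob

a = lec(cost=8.0, size_kb=100.0, ref_prob=0.30)   # large, popular object -> 0.024
b = lec(cost=2.0, size_kb=10.0,  ref_prob=0.10)   # small, rarely revisited -> 0.020
# Although B is ten times smaller, it is the eviction victim: per byte of cache
# space it contributes less to the expected cost-savings ratio.
assert b < a
```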

14.
An Efficient Web Proxy Cache Replacement Algorithm
A well-designed Web cache replacement policy allows network resources to be used most effectively. This paper designs an efficient Web cache replacement policy, LFRU, aiming to fetch network resources in a better way and to improve the performance and service quality of Web caches. Experimental results show that the policy achieves high document hit ratios and byte hit ratios.

15.
This paper proposes a novel contribution in the Web caching area, especially in Web cache replacement: an intelligent client-side Web caching scheme (ICWCS). The approach splits the client-side cache into two caches: a short-term cache that receives Web objects directly from the Internet, and a long-term cache that receives Web objects from the short-term cache. Objects in the short-term cache are removed by the least recently used (LRU) algorithm when the short-term cache is full. More significantly, when the long-term cache saturates, a neuro-fuzzy system is employed to manage the contents of the long-term cache. The proposed solution is validated through trace-driven simulation and the results are compared with the least recently used (LRU) and least frequently used (LFU) algorithms, the most common baselines for evaluating Web caching performance. The simulation results reveal that the proposed approach improves Web caching performance in terms of hit ratio (HR) by up to 14.8% and 17.9% over LRU and LFU, respectively; in terms of byte hit ratio (BHR) by up to 2.57% and 26.25%; and in terms of latency saving ratio (LSR) by 8.3% and 18.9%.
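A minimal sketch of the two-level client-side structure described above: plain LRU for the short-term cache, with evicted objects demoted to the long-term cache. The paper manages the long-term cache with a neuro-fuzzy system; the simple least-used rule below is only a stand-in for that component, not the actual method.

```python
from collections import OrderedDict

class TwoLevelClientCache:
    """Split client-side cache: short-term (LRU) feeds the long-term cache."""

    def __init__(self, short_capacity, long_capacity):
        self.short = OrderedDict()                 # LRU order: oldest entry first
        self.long = {}                             # key -> access count
        self.short_capacity = short_capacity
        self.long_capacity = long_capacity

    def access(self, key):
        if key in self.short:
            self.short[key] += 1
            self.short.move_to_end(key)            # refresh LRU position
        elif key in self.long:
            self.long[key] += 1
        else:                                      # fetched from the Internet
            self.short[key] = 1
            if len(self.short) > self.short_capacity:
                old_key, count = self.short.popitem(last=False)   # LRU victim
                self.long[old_key] = count                        # demote to long-term
                if len(self.long) > self.long_capacity:
                    # Stand-in for the neuro-fuzzy decision: drop the least-used object.
                    victim = min(self.long, key=self.long.get)
                    del self.long[victim]
```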

16.
Performance evaluation of Web proxy cache replacement policies
Martin Arlitt  Rich Friedrich  Tai Jin 《Performance Evaluation》2000,39(1-4):149-164
The continued growth of the World-Wide Web and the emergence of new end-user technologies such as cable modems necessitate the use of proxy caches to reduce latency, network traffic and Web server loads. In this paper we analyze the importance of different Web proxy workload characteristics in making good cache replacement decisions. We evaluate workload characteristics such as object size, recency of reference, frequency of reference, and turnover in the active set of objects. Trace-driven simulation is used to evaluate the effectiveness of various replacement policies for Web proxy caches. The extended duration of the trace (117 million requests collected over 5 months) allows long term side effects of replacement policies to be identified and quantified.

Our results indicate that higher cache hit rates are achieved using size-based replacement policies. These policies store a large number of small objects in the cache, thus increasing the probability of an object being in the cache when requested. To achieve higher byte hit rates a few larger files must be retained in the cache. We found frequency-based policies to work best for this metric, as they keep the most popular files, regardless of size, in the cache. With either approach it is important that inactive objects be removed from the cache to prevent performance degradation due to pollution.


17.
This work introduces and establishes a new model for cache management, where clients suggest preferences regarding their expectations for the time they are willing to wait and the level of obsolescence they are willing to tolerate. The cache uses these preferences to decide upon the entrance and exit of objects to and from its storage, and to select the best copy of a requested object among all available copies (fresh, cached, remote). We introduce three replacement policies, each of which evicts objects based on ongoing scores that combine users' preferences with other object properties such as size, obsolescence rate and popularity. Each replacement algorithm follows a different strategy: (a) an optimal solution that uses a dynamic programming approach to find the best objects to keep; (b) another optimal solution that uses a branch-and-bound approach to find the worst objects to throw out; (c) an algorithm that uses a heuristic approach to efficiently select the objects to be evicted. Using these replacement algorithms the cache is able to keep the objects that best suit users' preferences and dump the others. We compare our proposed algorithms to the Least-Recently-Used algorithm, and provide evidence of the advantages of our algorithms, which deliver better service to the cache's users with less burden on network resources and reduced workload on origin servers.

18.
Web caching is used to address network access latency and congestion, and the replacement policy directly affects the cache hit ratio. This paper proposes a Web cache replacement policy in which a naive Bayes (NB) classifier predicts the probability that an object will be revisited. Based on users' past access logs, partitioning operations extract several features to represent each accessed object and build a feature data set; the NB classifier is trained to estimate the probability that a cached object will be accessed again and to assign a weight to each object, and objects are then evicted sensibly in combination with the LRU policy. Simulation results show that the proposed policy maintains a high hit ratio while effectively reducing execution time.
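A minimal sketch of combining an NB re-access predictor with LRU. The three features, the training rows, and the rule of scoring only the LRU tail are illustrative assumptions; the paper derives its features by partitioning the access logs and its exact combination rule may differ.

```python
from collections import OrderedDict
from sklearn.naive_bayes import GaussianNB

# Train on features extracted from past access logs. The feature choice below
# (object size in KB, past access count, seconds since last access) and the
# training rows are made up for illustration.
X_train = [[12.4, 5, 30.0], [2.1, 1, 900.0], [40.0, 9, 12.0]]
y_train = [1, 0, 1]                                # 1 = object was revisited
clf = GaussianNB().fit(X_train, y_train)

def evict(cache: OrderedDict, tail=4):
    """Among the `tail` least-recently-used objects, evict the one the classifier
    considers least likely to be accessed again. `cache` maps key -> feature list
    in the same layout as X_train."""
    candidates = list(cache)[:tail]                # oldest entries first (LRU tail)
    probs = clf.predict_proba([cache[k] for k in candidates])[:, 1]
    victim = candidates[int(probs.argmin())]
    del cache[victim]
    return victim
```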

19.
Based on feedback control theory, a cache controller is designed through system identification. By dynamically adjusting the cache space allocated to different classes of cached objects, a high hit ratio is guaranteed for high-priority Web objects while the ratio between the hit ratios of the different classes remains constant. Proportional-hit-ratio differentiated caching service is implemented on the server side. Experiments verify that, under the GDSF, LRU, and LFU replacement algorithms, good differentiation is achieved for both request hit ratio and byte hit ratio.
