Similar Documents
20 similar documents found, search time 0 ms
1.
Hit ratio, byte hit ratio, and latency are the most important performance metrics of a Web caching system, yet it is difficult to measure the access latency of Web objects of different sizes accurately and fairly. This paper introduces the concept of byte latency, which provides a more reasonable evaluation standard for the latency of objects of different sizes, and proposes LLC, a least-latency-cost Web cache replacement algorithm that shortens users' access latency as much as possible. Experimental results show that, compared with commonly used replacement algorithms, LLC performs better at reducing user-perceived access latency.
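The abstract does not give LLC's exact cost model, so the Python sketch below only illustrates the byte-latency idea it describes: rank cached objects by observed fetch latency per byte and evict those that save the least latency per byte of cache they occupy. All object names, sizes, and latencies are illustrative assumptions.

```python
def evict_by_byte_latency(cache, bytes_needed):
    """Greedy eviction by 'byte latency' (fetch latency / object size).

    cache: dict mapping object name -> {'size': bytes, 'latency': seconds}.
    Objects that save the least latency per cached byte are evicted first.
    This is a simplified stand-in for LLC, not the paper's exact algorithm.
    """
    ranked = sorted(cache.items(),
                    key=lambda kv: kv[1]['latency'] / kv[1]['size'])
    freed, victims = 0, []
    for name, meta in ranked:
        if freed >= bytes_needed:
            break
        victims.append(name)
        freed += meta['size']
    for name in victims:
        del cache[name]
    return victims


cache = {'a.html': {'size': 10_000, 'latency': 0.020},
         'b.jpg': {'size': 500_000, 'latency': 0.150},
         'c.css': {'size': 4_000, 'latency': 0.050}}
print(evict_by_byte_latency(cache, 8_000))   # evicts 'b.jpg', the lowest latency-per-byte object
```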

2.
Timing predictability of cache replacement policies (cited by 1: 0 self, 1 by others)
Hard real-time systems must obey strict timing constraints. Therefore, one needs to derive guarantees on the worst-case execution times of a system’s tasks. In this context, predictable behavior of system components is crucial for the derivation of tight and thus useful bounds. This paper presents results about the predictability of common cache replacement policies. To this end, we introduce three metrics, evict, fill, and mls that capture aspects of cache-state predictability. A thorough analysis of the LRU, FIFO, MRU, and PLRU policies yields the respective values under these metrics. To the best of our knowledge, this work presents the first quantitative, analytical results for the predictability of replacement policies. Our results support empirical evidence in static cache analysis.
Reinhard Wilhelm

3.
Advances in network technology have accelerated the development of multimedia applications over wired and wireless communication. To alleviate network congestion and to reduce latency and workload on multimedia servers, the concept of a multimedia proxy has been proposed to cache popular content. Caching data objects relieves the bandwidth demand on the external network and reduces the average time to load a remote data object to the local side. Since the effectiveness of a proxy server depends largely on its cache replacement policy, various approaches have been proposed in recent years. In this paper, we discuss the cache replacement policy in a multimedia transcoding proxy. Unlike cache replacement for conventional Web objects, replacing elements in the cache of a transcoding proxy must also consider the transcoding relationships among the cached items. To maintain the transcoding relationship and to perform cache replacement, we propose the RESP framework (standing for REplacement with Shortest Path). The RESP framework contains two primary components, i.e., procedure MASP (standing for Minimum Aggregate Cost with Shortest Path) and algorithm EBR (standing for Exchange-Based Replacement). Procedure MASP maintains the transcoding relationship using a shortest path table, whereas algorithm EBR performs cache replacement according to an exchanging strategy. The experimental results show that the RESP framework can approximate the optimal cache replacement with much lower execution time for processing user queries.

4.
An effectiveness-based adaptive cache replacement policy (cited by 1: 0 self, 1 by others)
Belady’s optimal cache replacement policy is an algorithm to work out the theoretical minimum number of cache misses, but the rationale behind it was too simple. In this work, we revisit the essential function of caches to develop an underlying analytical model. We argue that frequency and recency are the only two affordable attributes of cache history that can be leveraged to predict a good replacement. Based on those two properties, we propose a novel replacement policy, the Effectiveness-Based Replacement policy (EBR), and a refinement, Dynamic EBR (D-EBR), which combines measures of recency and frequency to form a rank sequence inside each set and evicts the blocks with the lowest rank. To evaluate our design, we simulated all 30 applications from SPEC CPU2006 for a uni-core system and a set of combinations for 4-core systems, for different cache sizes. The results show that EBR achieves an average miss rate reduction of 12.4%. With the help of D-EBR, we can tune the weight ratio between ‘frequency’ and ‘recency’ dynamically; D-EBR can nearly double the miss reduction achieved by EBR alone. In terms of hardware overhead, EBR requires half the hardware overhead of true LRU, and even compared with Pseudo-LRU the overhead is modest.
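The exact rank function of EBR/D-EBR is not given in the abstract; the Python sketch below only illustrates the stated idea of combining recency and frequency into a per-set rank and evicting the lowest-ranked block. The weight w stands in for the frequency/recency ratio that D-EBR tunes dynamically; its value and the block metadata layout are assumptions.

```python
def choose_victim(blocks, now, w=0.5):
    """Pick a victim in one cache set by a combined recency/frequency rank.

    blocks: dict tag -> {'last_access': int, 'accesses': int}.
    rank = w * normalized frequency + (1 - w) * normalized recency;
    the block with the lowest rank is evicted. Illustrative only.
    """
    max_freq = max(b['accesses'] for b in blocks.values())
    max_age = max(now - b['last_access'] for b in blocks.values()) or 1

    def rank(b):
        freq_score = b['accesses'] / max_freq                 # higher = reused often
        rec_score = 1 - (now - b['last_access']) / max_age    # higher = used recently
        return w * freq_score + (1 - w) * rec_score

    return min(blocks, key=lambda tag: rank(blocks[tag]))


blocks = {'A': {'last_access': 95, 'accesses': 6},
          'B': {'last_access': 99, 'accesses': 1},
          'C': {'last_access': 60, 'accesses': 2}}
print(choose_victim(blocks, now=100))   # 'C': neither recent nor frequent
```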

5.
Named Data Networking (NDN) is a candidate next-generation Internet architecture designed to overcome the fundamental limitations of the current IP-based Internet, in particular to provide strong security. Ubiquitous in-network caching is a key NDN feature. However, pervasive caching aggravates security problems, namely cache pollution attacks, including cache poisoning (i.e., introducing malicious content into caches as false-locality) and cache pollution (i.e., ruining cache locality with new unpopular content as locality-disruption). In this paper, a new cache replacement method based on the Adaptive Neuro-Fuzzy Inference System (ANFIS) is presented to mitigate cache pollution attacks in NDN. The ANFIS structure is built using input data related to the inherent characteristics of the cached content and an output related to the content type (i.e., healthy, locality-disruption, or false-locality). The proposed method detects both false-locality and locality-disruption attacks, as well as a combination of the two, on different topologies with high accuracy, and mitigates them efficiently with little additional computational cost compared to the most common policies.

6.
Least cache value (LCV) cache replacement algorithm (cited by 5: 0 self, 5 by others)
刘磊  熊小鹏 《计算机应用》2013,33(4):1018-1022
To improve the cache performance of search applications, a new cache replacement algorithm, Least Cache Value (LCV), is proposed. The algorithm computes object access frequencies and, combined with object sizes, preferentially selects for replacement the set of objects that contributes least to the byte hit ratio. The selection of the optimal replacement set is formulated as the classic 0-1 knapsack problem, and a fast approximate solution together with its data structures is given. In comparative experiments against Least Recently Used (LRU), First In First Out (FIFO), and GD-Size, LCV performs better at improving the byte hit ratio (BHR) and reducing the average latency time (ALT).
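The abstract formulates victim selection as a classic 0-1 knapsack over the objects' contributions to the byte hit ratio and mentions a fast approximate solver whose details are not given. The Python sketch below instead solves a tiny exact knapsack over which objects to keep, taking an object's value to be frequency * size; the value function, KB-granularity sizes, and names are all assumptions made for illustration.

```python
def lcv_keep_set(objects, capacity_kb):
    """Exact 0-1 knapsack over which objects to KEEP (evict the rest).

    objects: list of (name, size_kb, freq). The value of keeping an object is
    taken as freq * size_kb, a rough proxy for its byte-hit contribution.
    The paper uses a fast approximate solver; this exact DP is only a sketch.
    """
    n = len(objects)
    best = [[0] * (capacity_kb + 1) for _ in range(n + 1)]
    for i, (_, size, freq) in enumerate(objects, start=1):
        value = freq * size
        for cap in range(capacity_kb + 1):
            best[i][cap] = best[i - 1][cap]
            if size <= cap:
                best[i][cap] = max(best[i][cap], best[i - 1][cap - size] + value)
    # Trace back the chosen (kept) objects; everything else is the victim set.
    keep, cap = set(), capacity_kb
    for i in range(n, 0, -1):
        name, size, _ = objects[i - 1]
        if best[i][cap] != best[i - 1][cap]:
            keep.add(name)
            cap -= size
    return keep


objects = [('x', 200, 1), ('y', 50, 9), ('z', 120, 3)]
print(lcv_keep_set(objects, capacity_kb=250))   # {'y', 'z'}: highest kept value within 250 KB
```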

7.
Named Data Networking (NDN) is considered an appropriate architecture for IoT as it naturally supports consumer mobility and provides in-network caching capabilities as leverage to meet IoT requirements. Some caching techniques have been introduced to meet IoT application requirements and enforce caching at the network edge. However, it remains challenging to design a popularity and freshness aware caching technique that places cached contents at the edge of the network as close to consumers as possible in a natural and simple manner, without resorting to cumbersome networking mechanisms and hard-to-ensure assumptions. In this paper, we propose PF-EdgeCache, an efficient popularity and freshness aware caching technique that naturally brings requested popular contents to the edge of the network in a manner fully compliant with the NDN standard. Simulations performed using the ndnSIM simulator and a large transit stub topology clearly show the competitiveness of PF-EdgeCache in terms of server hit reduction, eviction rate, and retrieval time compared to some representative work proposed in the literature.

8.
In this paper, a comprehensive study is first conducted to investigate the effects of cache coherence protocols and cache replacement policies on the characteristics of NUCA caches in current many-core processors. The main focus of this study is to analyze the effects of coherence protocols and replacement policies on cache vulnerability. The analysis indicates two facts: (i) differences in how write operations are handled play an important role in favoring one cache coherence protocol over another; (ii) near-optimal solutions to the replacement problem, aimed at enhancing performance, can also help reduce the cache vulnerability factor. Based on these results, two schemes are introduced to enhance cache reliability by modifying the structures of cache coherence protocols and cache replacement policies. The first scheme manages the sharing of dirty data items among different same-level caches. The second gives old dirty blocks higher priority than clean blocks for replacement. The proposed schemes show about an 18% improvement in MTTF, with negligible performance, bandwidth, and energy-consumption overhead compared to previous cache structures.

9.
As processor performance continues to improve, more demands are being placed on the performance of the memory system. The caches employed in current processor designs are very similar to those described in early cache studies. In this paper, a detailed characterization of data cache behavior for individual load instructions is given. It will be shown that by selectively allocating cache lines according to the characteristics of individual load instructions, overall performance can be improved for both the data cache and the memory system. This approach can improve some aspects of memory performance by as much as 60 percent on existing executables. This work was supported by National Science Foundation Grants CCR-94-03651, CCR-92-13651, CCR-92-13627, MIP-92-57259, and generous grants from the SUN Microsystems and Tektronix corporations.

10.
To address the inefficiency of existing cache replacement strategies in Content-Centric Networking (CCN), this paper borrows the concept of "potential energy" from physics, combines it with the natural phenomenon of cooling, and proposes PEC-Rep, a replacement algorithm based on potential-energy cooling. Using the number of accesses and the intervals between them, the algorithm estimates the value of a content item over the near future; when replacement is needed, the item with the lowest value is deleted, so that the contents kept at a node retain the highest value for serving subsequent requests. Simulation results show that PEC-Rep effectively improves the in-domain cache hit ratio, relieves server load, and improves overall CCN performance.
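The abstract does not give PEC-Rep's potential-energy formula. A common way to model a value that "cools" over time is an exponentially decaying popularity score, so the Python sketch below uses value = access_count * exp(-alpha * time_since_last_access) purely as an assumed stand-in for the paper's model.

```python
import math


def pec_value(access_count, last_access, now, alpha=0.05):
    """Assumed 'potential energy with cooling' score: more accesses raise the
    potential, which then cools exponentially with the time since the last
    access. Not the paper's exact formula (the abstract does not provide it)."""
    return access_count * math.exp(-alpha * (now - last_access))


def pec_evict(store, now):
    """Evict the cached content item with the lowest cooled value."""
    victim = min(store, key=lambda name: pec_value(store[name]['count'],
                                                   store[name]['last'], now))
    del store[victim]
    return victim


store = {'video/seg1': {'count': 12, 'last': 90},
         'news/item7': {'count': 3, 'last': 99},
         'img/banner': {'count': 20, 'last': 40}}
print(pec_evict(store, now=100))   # 'img/banner': popular long ago, now cooled down
```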

11.
张超  李可  范平志 《计算机应用》2019,39(7):2044-2050
To address the problem that the exponential growth of wireless mobile devices forces heterogeneous cooperative small base stations (SBSs) to carry large-scale traffic loads, an online hot-video cache update scheme based on cooperative SBSs and popularity prediction (OVCRP) is proposed. First, the short-term variation of the popularity of online hot videos is analyzed; then a k-nearest-neighbor model is built to predict their popularity; finally, the cache update locations for the videos are determined. To choose suitable locations for storing online hot videos, a mathematical model is built with the objective of minimizing the overall transmission delay, and an integer-programming optimization algorithm is designed. Simulation results show that, compared with random caching (RANDOM), Least Recently Used (LRU), and Least Frequently Used (LFU) schemes, OVCRP has clear advantages in average cache hit ratio and average access delay, thereby reducing the network burden on the cooperative SBSs.
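The abstract says OVCRP predicts a video's short-term popularity with a k-nearest-neighbor model but does not specify the features or the distance metric. The Python sketch below is a generic k-NN regressor over assumed feature vectors (e.g., recent view counts), only to illustrate that prediction step; it is not the paper's model.

```python
import math


def knn_predict_popularity(history, query_features, k=3):
    """Predict popularity as the mean popularity of the k nearest neighbors.

    history: list of (feature_vector, observed_popularity) for past videos.
    query_features: feature vector of the video to predict (for example,
    view counts over the last few hours). Features and k are assumptions.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    nearest = sorted(history, key=lambda item: dist(item[0], query_features))[:k]
    return sum(pop for _, pop in nearest) / len(nearest)


history = [([120, 300, 650], 2400), ([10, 25, 40], 90),
           ([80, 200, 420], 1500), ([5, 9, 20], 60)]
print(knn_predict_popularity(history, [100, 260, 500]))   # mean of the 3 closest videos
```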

12.
A prediction-based Web cache replacement algorithm (cited by 2: 0 self, 2 by others)
To improve Web cache performance, a prediction mechanism is introduced on top of the GDSF replacement algorithm, yielding the prediction-based replacement algorithm PGDSF. A prediction model is first built from Web logs; it is then applied to the current user access sequence to form a set of objects the user is likely to request. When the cache cannot accommodate a newly requested object, the GDSF replacement strategy is applied, but only the smallest-weight objects that do not belong to the predicted set are replaced. The algorithm takes the various factors affecting Web objects into account. Simulation results show that, for a given cache size, it achieves a higher document hit ratio and byte hit ratio than the GDSF replacement algorithm.
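GDSF assigns each cached object the key L + freq * cost / size and inflates L to the evicted key; the abstract adds a predicted object set whose members are spared from eviction. The Python sketch below follows that description; the class interface, the `predicted` argument, and all names are assumptions rather than the paper's code.

```python
class PGDSFCache:
    """Sketch of the PGDSF idea: standard GDSF keys plus a predicted set
    that is exempt from eviction. The key formula and the inflation value L
    follow the usual GDSF scheme; everything else is illustrative."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.L = 0.0                      # GDSF inflation value
        self.meta = {}                    # name -> {'size', 'cost', 'freq', 'key'}

    def _key(self, m):
        return self.L + m['freq'] * m['cost'] / m['size']

    def access(self, name, size, cost, predicted=frozenset()):
        if name in self.meta:
            m = self.meta[name]
            m['freq'] += 1
            m['key'] = self._key(m)
            return 'hit'
        if size > self.capacity:
            return 'bypass'               # object larger than the whole cache
        while self.used + size > self.capacity:
            # Prefer the smallest-key victim that is NOT in the predicted set;
            # fall back to all cached objects if every one of them is predicted.
            candidates = [n for n in self.meta if n not in predicted] or list(self.meta)
            victim = min(candidates, key=lambda n: self.meta[n]['key'])
            self.L = max(self.L, self.meta[victim]['key'])
            self.used -= self.meta[victim]['size']
            del self.meta[victim]
        m = {'size': size, 'cost': cost, 'freq': 1}
        m['key'] = self._key(m)
        self.meta[name] = m
        self.used += size
        return 'miss'
```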

13.
Finding new memory materials to replace DRAM is a current research hotspot. Phase-change memory (PCM) has attracted wide attention for its low power consumption, high storage density, and non-volatility; however, its write endurance is limited, so using it as main memory requires reducing the number of writes to it. An effective approach is to optimize the cache replacement policy so that fewer dirty blocks are evicted from the cache. Existing work mainly protects dirty blocks by giving them a higher protection priority on insertion and on hits, but no longer distinguishes dirty from clean blocks during demotion, so the cache may still evict a dirty block even when many clean blocks are present. This paper proposes a new cache replacement policy, MAC, which uses a multi-level structure to set a strict boundary between dirty and clean blocks so that dirty blocks receive stronger protection. Simulation results show that, compared with LRU, MAC reduces memory writes by about 25.12% on average at a low hardware cost, with almost no impact on program performance.
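MAC's multi-level structure is not detailed in the abstract; the Python sketch below only illustrates the goal it states: avoid evicting a dirty block while a clean one is available. It uses a simple clean-first LRU set as an assumed simplification of the real policy.

```python
from collections import OrderedDict


class CleanFirstSet:
    """One cache set that evicts clean blocks (in LRU order) before dirty ones,
    so dirty blocks are written back to PCM less often. A simplified stand-in
    for MAC's multi-level ranking, which the abstract does not specify."""

    def __init__(self, ways):
        self.ways = ways
        self.blocks = OrderedDict()       # tag -> dirty flag, in LRU order

    def access(self, tag, is_write):
        if tag in self.blocks:
            dirty = self.blocks.pop(tag) or is_write
            self.blocks[tag] = dirty      # re-insert at the MRU position
            return None                   # hit: nothing evicted
        victim = None
        if len(self.blocks) >= self.ways:
            clean = [t for t, d in self.blocks.items() if not d]
            victim = clean[0] if clean else next(iter(self.blocks))
            del self.blocks[victim]
        self.blocks[tag] = is_write
        return victim                     # evicted tag, or None
```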

14.
A streaming media cache replacement algorithm based on smallest cache utility (cited by 7: 0 self, 7 by others)
A smallest cache utility replacement algorithm, SCU-K, is proposed. It takes the most recent K accesses to a streaming media file into account, so that the cache allocation dynamically adapts to changes in media popularity, byte usefulness, and the size of the cached portion. This lowers the probability that the prefix of a file is evicted and avoids the continuous eviction of media files that occurs with the LRU and LFU algorithms. In comparative experiments against LRU, LFU, and LRU-2, SCU-K achieves better cache space utilization, a higher byte hit ratio, and lower startup latency.

15.
Research on proxy cache consistency and replacement policies (cited by 5: 1 self, 5 by others)
Proxy cache consistency policies and replacement policies have not yet been well integrated, which limits the overall performance of proxy caching systems. This paper examines the processing flow, performance metrics, and research status of Internet proxy cache consistency policies and replacement policies, and then presents the processing flow of a combined consistency-replacement algorithm. The stale hit ratio is proposed as its main performance metric; it can effectively evaluate the relative merits of proxy cache algorithms and the overall performance of a proxy caching system.

16.
Building on the SCU-K algorithm, a smallest cache utility replacement algorithm based on popularity and future access count (SCU-PFUT) is proposed. The algorithm also takes the byte usefulness and block size of streaming media files into account, so that the blocks evicted from memory are chosen more sensibly. It avoids the continuous eviction of media files seen in LRU and LFU, and improves the cache hit ratio, byte hit ratio, and space utilization compared with LRU, LFU, and SCU-2.

17.
18.
Replacement algorithms have been widely used as key technologies for cache management in areas such as file systems or database management. A replacement algorithm determines which page to evict when the cache is full and a new page is referenced. Because replacement policies that consider only recency or frequency, such as LRU (Least Recently Used) and LFU (Least Frequently Used), do not perform well, replacement policies that take both recency and frequency into account have been intensively studied. As a classical replacement policy, the LRFU (Least Recently/Frequently Used) policy subsumes the LRU and LFU policies. However, because LFU is not able to adapt to changes in the page access pattern and it is hard to select a suitable λ for each trace, LRFU cannot always guarantee good performance. In this paper, we propose a Window-LRFU policy that subsumes the LRU and Window-LFU policies. Experimental results show that the Window-LRFU policy outperforms LRFU and is at least competitive with other classical algorithms. Copyright © 2015 John Wiley & Sons, Ltd.
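The Window-LRFU details are not in the abstract, but the LRFU policy it extends maintains a Combined Recency and Frequency (CRF) value per page, updated incrementally with the weighting F(x) = (1/2)^(λx): λ near 0 behaves like LFU, while a large λ behaves like LRU. The Python sketch below shows that classical LRFU update only, not the proposed Window variant; the default λ is an arbitrary assumption.

```python
class LRFUCache:
    """Classical LRFU: each page carries a CRF value that decays over time
    and is bumped on every reference (CRF_new = 1 + F(dt) * CRF_old with
    F(x) = 0.5 ** (lam * x)). The Window-LRFU refinement is not shown."""

    def __init__(self, capacity, lam=0.001):
        self.capacity = capacity
        self.lam = lam
        self.clock = 0
        self.meta = {}                    # page -> (crf, last_reference_time)

    def _decay(self, dt):
        return 0.5 ** (self.lam * dt)

    def access(self, page):
        self.clock += 1
        if page in self.meta:
            crf, last = self.meta[page]
            self.meta[page] = (1.0 + self._decay(self.clock - last) * crf, self.clock)
            return 'hit'
        if len(self.meta) >= self.capacity:
            # Evict the page whose CRF, decayed to the current time, is smallest.
            victim = min(self.meta, key=lambda p: self.meta[p][0] *
                         self._decay(self.clock - self.meta[p][1]))
            del self.meta[victim]
        self.meta[page] = (1.0, self.clock)
        return 'miss'
```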

19.
石磊  孟彩霞  韩英杰 《计算机应用》2007,27(8):1842-1845
To improve Web cache performance, a prediction mechanism is added to the cache replacement algorithm, yielding the prediction-based Web replacement strategy P-Re. The prediction algorithm uses a PPM context model; when the cache has no room for a new object, P-Re replaces objects with small key values that have not been predicted. Experiments show that P-Re achieves a higher hit ratio and byte hit ratio than traditional replacement algorithms.

20.
In wireless mobile ad hoc networks (MANETs), a mobile node would normally acquire data from a data server through an access point by sending the server a request each time it needs data. To reduce the high costs normally associated with accessing remote servers (i.e., outside the MANET), data caching by the mobile nodes can be employed. Several caching techniques for MANETs have been proposed and implemented, including a cooperative scheme that we recently introduced. It employs a directory-based approach in which submitted queries are cached in the MANET to be used subsequently as indexes to corresponding data items (results). When a request is issued, nodes cooperate to find its answer (if it exists) and send it to the requesting node. In this paper, we extend this scheme by semantically comparing each submitted request with all cached queries. The semantic analysis process includes trimming the request into fragments and joining the answers of these fragments to produce the answer of the request. We study the performance of the proposed system both analytically and experimentally, and demonstrate its advantages over other systems in terms of query response time, generated traffic, and hit ratio.
