Similar Documents
1.
This article studies the traffic-filtering effect of TTL-based hierarchical Web caching; traffic characteristics have an important influence on the performance of TTL-based dynamic Web caching systems. In a cache hierarchy, only missed requests are forwarded to the next cache level, so the hierarchy filters the traffic level by level and its characteristics change accordingly. The article uses simulation to study how TTL-based dynamic Web cache hierarchies filter traffic, focusing on changes in the request inter-arrival model and the object popularity distribution.
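Not from the article (whose simulator is unpublished), the following is a minimal sketch of the setup it describes, assuming Poisson arrivals at the first level, a Zipf-like popularity profile, and a non-resetting TTL at each level; all parameter names are illustrative.

```python
import itertools
import random

def simulate_ttl_hierarchy(num_objects=1000, num_requests=100_000,
                           ttls=(60.0, 300.0), zipf_a=0.8, rate=50.0):
    """Push a Zipf-like Poisson request stream through TTL caches in series
    and record the miss stream each level forwards to the level below."""
    weights = [1.0 / (i + 1) ** zipf_a for i in range(num_objects)]
    cum = list(itertools.accumulate(weights))      # cumulative weights for sampling
    caches = [{} for _ in ttls]                    # per level: object -> expiry time
    miss_streams = [[] for _ in ttls]              # per level: (time, object) misses
    t = 0.0
    for _ in range(num_requests):
        t += random.expovariate(rate)              # Poisson arrivals at level 0
        obj = random.choices(range(num_objects), cum_weights=cum)[0]
        for level, ttl in enumerate(ttls):
            if caches[level].get(obj, 0.0) > t:
                break                              # hit: request absorbed here
            caches[level][obj] = t + ttl           # miss: cache a fresh copy...
            miss_streams[level].append((t, obj))   # ...and forward downward
    return miss_streams
```

Comparing the inter-arrival gaps and per-object request counts of `miss_streams[0]` against the original stream exposes the level-by-level reshaping the article measures.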

2.
韩东 《电子设计工程》2014,22(20):190-193
To provide buffer models for transparent data transfer between interfaces in embedded systems, this paper adopts a layered design approach. It analyzes and defines the inter-interface mappings and the buffer-hierarchy categories required for transparent data transmission and, combining the differing buffer requirements of different data flows, starts from the simplest data-flow requirement and adds requirements step by step, designing a series of buffer models at different levels. Two forms are designed in detail: a non-nested block buffer and a nestable block buffer.
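The paper's buffer models are described only in prose; as a rough illustration of its simplest layer, a single non-nested block buffer between two interfaces, here is a byte-oriented ring-buffer sketch (all names are ours, not the paper's).

```python
class BlockRingBuffer:
    """Fixed-capacity byte buffer between a producer and a consumer interface."""

    def __init__(self, capacity: int):
        self.buf = bytearray(capacity)
        self.capacity = capacity
        self.head = 0   # next write position
        self.tail = 0   # next read position
        self.size = 0   # bytes currently buffered

    def write(self, data: bytes) -> int:
        """Copy in as many bytes as fit; return the number accepted."""
        n = min(len(data), self.capacity - self.size)
        for i in range(n):
            self.buf[(self.head + i) % self.capacity] = data[i]
        self.head = (self.head + n) % self.capacity
        self.size += n
        return n

    def read(self, n: int) -> bytes:
        """Remove and return up to n buffered bytes."""
        n = min(n, self.size)
        out = bytes(self.buf[(self.tail + i) % self.capacity] for i in range(n))
        self.tail = (self.tail + n) % self.capacity
        self.size -= n
        return out
```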

3.
Cache cooperation improves the performance of isolated caches, especially for caches with small cache populations. To make caches cooperate on a large scale and effectively increase the cache population, several caches are usually federated in caching architectures. We discuss and compare the performance of different caching architectures. In particular, we consider hierarchical and distributed caching. We derive analytical models to study important performance parameters of hierarchical and distributed caching, i.e., client's perceived latency, bandwidth usage, load in the caches, and disk space usage. Additionally, we consider a hybrid caching architecture that combines hierarchical caching with distributed caching at every level of a caching hierarchy. We evaluate the performance of a hybrid scheme and determine the optimal number of caches that should cooperate at each caching level to minimize client's retrieval latency.
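As a toy companion to the analytical comparison, here is the expected-latency accounting for a hierarchy in which only misses travel upward; this is our simplification, not the paper's model, and the per-level hit rates and round-trip times are assumed given.

```python
def hierarchical_latency(hit_rates, level_rtts, origin_rtt):
    """Expected retrieval latency in a cache hierarchy. hit_rates[i] is the
    conditional hit probability at level i given all lower levels missed;
    a request pays each level's RTT until it hits, plus the origin RTT if
    every level misses."""
    latency, p_reach = 0.0, 1.0
    for h, rtt in zip(hit_rates, level_rtts):
        latency += p_reach * rtt    # every request reaching this level pays its RTT
        p_reach *= (1.0 - h)        # only misses continue upward
    return latency + p_reach * origin_rtt

# e.g. institutional / regional / national caches, then the origin server (ms)
print(hierarchical_latency([0.35, 0.20, 0.10], [2.0, 20.0, 80.0], 200.0))
```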

4.
The article is a review of the book: Web Caching and Its Applications, written by S.V. Nagaraj and published by Springer, 2004. Web caching technology improves client download times and reduces network traffic by caching frequently accessed copies of Web objects close to the clients. The primary research issues in Web caching are where to cache copies of objects (cache placement), how to keep the cached copies consistent (cache consistency), and how to redirect clients to the optimal cache server (client redirection). Web caching systems' design space is huge, and building a good caching system involves several issues. Over the past decade, researchers have carried out a tremendous amount of work in addressing these issues. In Web Caching and Its Applications, S.V. Nagaraj aims to provide a bird's eye view of this research. He has exhaustively surveyed the literature and summarized the results of several research publications. The author concludes that the book can serve as a reference tool for researchers and for graduate students working on Web systems. However, its approach isn't suitable for Web administrators or students who are new to the field.

5.
An overview of web caching replacement algorithms
The increasing demand for World Wide Web (WWW) services has made document caching a necessity to decrease download times and reduce Internet traffic. To make effective use of caching, an informed decision has to be made as to which documents are to be evicted from the cache in case of cache saturation. This is particularly important in a wireless network, where the size of the client cache at the mobile terminal (MT) is small. Several types of caching are used over the Internet, including client caching, server caching, and more recently, proxy caching. In this article we review some of the well-known proxy-caching policies for the Web. We describe these policies, show how they operate, and discuss the main traffic properties they incorporate in their design. We argue that a good caching policy adapts itself to changes in Web workload characteristics. We make a qualitative comparison between these policies after classifying them according to the traffic properties they consider in their designs. Furthermore, we compare a selected subset of these policies using trace-driven simulations.
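As a concrete reference point for the policies reviewed, here is the recency-based baseline, a minimal LRU cache; object sizes and the wireless-specific constraints discussed above are deliberately omitted.

```python
from collections import OrderedDict

class LRUCache:
    """Least-recently-used eviction, the baseline most proxy policies refine."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()           # key -> object, oldest first

    def get(self, key):
        if key not in self.store:
            return None                      # miss
        self.store.move_to_end(key)          # refresh recency on a hit
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict the least recently used
```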

6.
Web caching is a significantly important strategy for improving Web performance. In this paper, we design SmartCache, a router-based system that reduces Web page load time in home broadband access networks, composed of a cache, an SVM trainer and classifier, and a browser extension. More specifically, the browser extension interacts with users to collect their satisfaction ratings and prepare the training dataset for the SVM trainer. With the desired features extracted from the training dataset, the SVM classifier predicts the classes of Web objects. Then, integrated with LFU, the cache performs replacement according to the SVM-LFU policy. Finally, by implementing SmartCache on a Netgear router and Chrome browsers, we evaluate the SVM-LFU algorithm in terms of Web page load time, SVM accuracy, and cache performance; the experimental results show that SmartCache can greatly reduce Web page load time.
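A minimal sketch of the SVM-LFU idea, assuming scikit-learn and hypothetical per-object feature vectors; SmartCache's actual feature extraction, training pipeline, and router integration are not reproduced here.

```python
from sklearn.svm import SVC

class SVMLFUCache:
    """LFU eviction biased by an SVM that predicts whether an object is
    worth keeping (class 1) from per-object features."""

    def __init__(self, capacity, train_X, train_y):
        self.capacity = capacity
        self.clf = SVC().fit(train_X, train_y)   # labels from user satisfaction
        self.objects = {}                        # key -> (features, frequency)

    def access(self, key, features):
        if key in self.objects:
            feats, freq = self.objects[key]
            self.objects[key] = (feats, freq + 1)
            return True                          # hit
        if len(self.objects) >= self.capacity:
            self._evict()
        self.objects[key] = (features, 1)
        return False                             # miss

    def _evict(self):
        # Evict objects the SVM labels "not worth keeping" first; break
        # ties with plain LFU (lowest access frequency goes first).
        def score(item):
            _key, (feats, freq) = item
            return (int(self.clf.predict([feats])[0]), freq)
        victim = min(self.objects.items(), key=score)[0]
        del self.objects[victim]
```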

7.
Web caching has been widely used to alleviate Internet traffic congestion in World Wide Web (WWW) services. To reduce download time, an effective web cache management strategy is needed that exploits web usage information when deciding which stored document to evict in case of cache saturation. This paper presents the Learning Based Replacement algorithm (LBR), a hybrid approach towards an efficient replacement model for web caching that incorporates a machine learning technique (naive Bayes) into the LRU replacement method to better predict, from the access history in a web log, the probability that a cached page will be re-referenced by a succeeding request. The learned knowledge indicates which URL objects in the cache should be kept or evicted, and the model captures the hidden aspects of user request patterns to predict re-reference probability. In a number of experiments, LBR improves revisit-probability prediction, hit rate, and byte hit rate over the traditional LRU, LFU, and GDSF methods.
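A hedged sketch of the LBR idea, pairing a naive Bayes re-reference predictor with LRU ordering; the feature vectors (e.g., recency, frequency, and size drawn from the web log) and the candidate window are our assumptions, not the paper's exact design.

```python
from sklearn.naive_bayes import GaussianNB

class LBRCache:
    """LRU refined by naive Bayes: among the coldest objects, evict the one
    predicted least likely to be re-referenced."""

    def __init__(self, capacity, log_X, log_y, window=5):
        self.capacity = capacity
        self.window = window                       # LRU candidates to rank
        self.nb = GaussianNB().fit(log_X, log_y)   # trained on web-log history
        self.order = []                            # keys, least recent first
        self.feats = {}                            # key -> feature vector

    def access(self, key, features):
        hit = key in self.feats
        if hit:
            self.order.remove(key)                 # refresh recency
        elif len(self.feats) >= self.capacity:
            cands = self.order[:self.window]       # coldest objects
            probs = self.nb.predict_proba([self.feats[k] for k in cands])[:, 1]
            victim = cands[int(probs.argmin())]    # lowest P(revisit) goes
            self.order.remove(victim)
            del self.feats[victim]
        self.order.append(key)
        self.feats[key] = features
        return hit
```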

8.
廖建新  杨波  朱晓民  王纯 《通信学报》2007,28(11):51-58
This paper proposes 2CMSA (two-level cache mobile streaming architecture), a two-level-cache streaming architecture suited to mobile communication networks, which overcomes the small terminal cache space and narrow radio-access-network bandwidth of mobile streaming systems. For the 2CMSA structure, a two-level-cache-based mobile streaming scheduling algorithm, 2CMSS (two-level cache based mobile streaming scheduling algorithm), is designed, and a mathematical model is built to analyze its performance. Simulation experiments show that, compared with the original mobile streaming system, the 2CMSS scheduling algorithm effectively saves network transmission cost and reduces user startup delay.

9.
This paper presents a new open-loop architecture for three-phase grid synchronization based on moving average and predictive filters, where accurate measurements of phase, frequency, and amplitude are carried out in real time. Previous works establish that the fundamental positive sequence vector of a set of utility voltage/current vectors can be decoupled using Park's transformation and low-pass filters. However, the filtering process introduces delays that impair the system performance. More specifically, when the input signal frequency is shifted above the nominal, a nonzero average steady-state phase error appears in the measurements. To overcome such limitations, a suitable combination of predictive and moving average finite impulse response (FIR) filters is used by the authors to achieve a robust synchronization system for all input frequencies. Moving average filters are linear phase FIR filters that have a constant time delay at low frequencies, a characteristic that is exploited to good effect to design a predictive filter that compensates such time delays, enabling zero steady-state phase errors for shifted input frequencies. In summary, the main attributes of the new system are its good frequency adaptation, good filtering/transient response tradeoff, and the fact that its dynamics is independent of the input vector amplitude. Comprehensive experimental results validate the theoretical approach and the high performance of the proposed synchronization algorithm.
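A scalar toy of the delay-compensation mechanism, assuming a ramp input: an N-tap moving average delays low-frequency signals by (N-1)/2 samples, and a two-point linear predictor extrapolates that delay back out. The paper's filters act on Park-transformed three-phase vectors, but the mechanics are the same.

```python
import numpy as np

def moving_average_with_prediction(x, N):
    """Causal N-tap moving average followed by a linear predictor that
    advances the output by the filter's (N-1)/2-sample group delay."""
    h = np.ones(N) / N
    y = np.convolve(x, h, mode="full")[:len(x)]   # causal moving average
    D = (N - 1) / 2.0                             # samples of delay to cancel
    y_hat = np.empty_like(y)
    y_hat[0] = y[0]
    y_hat[1:] = y[1:] + D * (y[1:] - y[:-1])      # y[n] + D * first difference
    return y_hat

# A slow ramp comes out with (near) zero steady-state lag:
t = np.arange(200) / 200.0
print(moving_average_with_prediction(t, 16)[-1], t[-1])
```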

10.
To address the vast multimedia traffic volume and requirements of user quality of experience in the next-generation mobile communication system (5G), it is imperative to develop efficient content caching strategy at mobile network edges, which is deemed as a key technique for 5G. Recent advances in edge/cloud computing and machine learning facilitate efficient content caching for 5G, where mobile edge computing can be exploited to reduce service latency by equipping computation and storage capacity at the edge network. In this paper, we propose a proactive caching mechanism named learning-based cooperative caching (LECC) strategy based on mobile edge computing architecture to reduce transmission cost while improving user quality of experience for future mobile networks. In LECC, we exploit a transfer learning-based approach for estimating content popularity and then formulate the proactive caching optimization model. As the optimization problem is NP-hard, we resort to a greedy algorithm for solving the cache content placement problem. Performance evaluation reveals that LECC can noticeably improve content cache hit rate and decrease content delivery latency and transmission cost in comparison with known existing caching strategies.
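The abstract names only a generic greedy algorithm; below is a marginal-gain greedy placement sketch under assumed inputs, where the popularity estimates would come from the transfer-learning step and the gain function is illustrative.

```python
def greedy_placement(popularity, caches, capacity, gain):
    """Repeatedly place the (content, cache) pair with the largest marginal
    gain until no cache has room or no pair improves the objective."""
    placed = {m: set() for m in caches}            # cache -> contents stored
    while True:
        best, best_gain = None, 0.0
        for m in caches:
            if len(placed[m]) >= capacity[m]:
                continue                           # this cache is full
            for c in popularity:
                if c in placed[m]:
                    continue
                g = gain(c, m, placed)
                if g > best_gain:
                    best, best_gain = (c, m), g
        if best is None:
            return placed
        placed[best[1]].add(best[0])

# Illustrative gain: request rate, discounted when copies already exist.
pop = {"a": 5.0, "b": 3.0, "c": 1.0}
def gain(c, m, placed):
    copies = sum(c in s for s in placed.values())
    return pop[c] / (1 + copies)

print(greedy_placement(pop, ["edge1", "edge2"], {"edge1": 2, "edge2": 2}, gain))
```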

11.
Network caching of objects has become a standard way of reducing network traffic and latency in the web. However, web caches exhibit poor performance, with a hit rate of about 30%. A solution to improve this hit rate is to have a group of proxies form a co-operative in which objects can be cached for later retrieval. A co-operative cache system includes protocols for hierarchical and transversal caching. The drawback of such a system lies in the resulting network load, due to the number of messages that need to be exchanged to locate an object. This paper proposes a new co-operative web caching architecture, which unifies previous methods of web caching. Performance results show that the architecture achieves up to a 70% co-operative hit rate and accesses the cached object in at most two hops. Moreover, the architecture is scalable, with low traffic and database overhead. Copyright © 2002 John Wiley & Sons, Ltd.

12.
Web caching has been the solution of choice to web latency problems. The efficiency of a Web cache is strongly affected by the replacement algorithm used to decide which objects to evict once the cache is saturated. Numerous web cache replacement algorithms have appeared in the literature. Despite their diversity, a large number of them belong to a class known as stack-based algorithms. These algorithms are evaluated mainly via trace-driven simulation. The very few analytical models reported in the literature were targeted at one particular replacement algorithm, namely least recently used (LRU) or least frequently used (LFU); further, they provide a formula for evaluating the hit ratio only. The main contribution of this paper is an analytical model for the performance evaluation of any stack-based web cache replacement algorithm. The model provides formulae for the prediction of the object hit ratio, the byte hit ratio, and the delay saving ratio. The model is validated against extensive discrete-event trace-driven simulations of three popular stack-based algorithms, LRU, LFU, and SIZE, using NLANR and DEC traces. Results show that the analytical model achieves very good accuracy: the mean error deviation between analytical and simulation results is at most 6% for LRU, 6% for LFU, and 10% for SIZE. Copyright © 2009 John Wiley & Sons, Ltd.
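For reference, the three metrics the model predicts, computed empirically from a request trace using their standard definitions (these are the textbook formulas, not the paper's closed-form model).

```python
def cache_metrics(trace):
    """trace: iterable of (hit: bool, size_bytes: int, fetch_delay_s: float)."""
    n = hits = bytes_total = bytes_hit = 0
    delay_total = delay_saved = 0.0
    for hit, size, delay in trace:
        n += 1
        bytes_total += size
        delay_total += delay
        if hit:
            hits += 1
            bytes_hit += size            # bytes served from cache
            delay_saved += delay         # fetch delay avoided by the hit
    return {
        "hit_ratio": hits / n,
        "byte_hit_ratio": bytes_hit / bytes_total,
        "delay_saving_ratio": delay_saved / delay_total,
    }

print(cache_metrics([(True, 10_000, 0.12), (False, 2_000, 0.40), (True, 500, 0.05)]))
```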

13.
Edge caching is an effective feature of the next-generation 5G network, guaranteeing the availability of service content and a reduced response time for the user. However, the placement of cache content remains a key issue in taking full advantage of edge caching. In this paper, we address the proactive caching problem in the Heterogeneous Cloud Radio Access Network (H-CRAN) from a game-theoretic point of view. The problem is formulated as a bargaining game in which the remote radio heads (RRHs) dynamically negotiate and decide which content to cache in which RRH under energy-saving and cache-capacity constraints. A Pareto-optimal equilibrium is proved for the cooperative game via an iterative Nash bargaining algorithm. We compare cooperative and noncooperative proactive caching games and demonstrate how the selfishness of different players can affect overall system performance. We also show that our cooperative proactive caching game reduces energy consumption by 40% compared with the noncooperative game and by 68% compared with the no-game strategy. Moreover, the number of satisfied requests at the RRHs with the proposed cooperative proactive caching scheme is significantly increased.
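As a small illustration of the bargaining-solution concept (not the paper's multi-RRH algorithm), a Nash bargaining split of a shared cache budget between two players maximizes the product of their utility gains over the disagreement point; the utility curves below are assumed.

```python
import math

def nash_bargaining_split(total_cache, u1, u2, d1=0.0, d2=0.0, steps=1000):
    """Grid-search the capacity split maximizing (u1 - d1) * (u2 - d2)."""
    best_x, best_prod = 0.0, -1.0
    for i in range(steps + 1):
        x = total_cache * i / steps                  # capacity given to RRH 1
        g1, g2 = u1(x) - d1, u2(total_cache - x) - d2
        if g1 > 0 and g2 > 0 and g1 * g2 > best_prod:
            best_x, best_prod = x, g1 * g2           # feasible and better
    return best_x

# Illustrative concave hit-utility curves for each RRH's local popularity.
print(nash_bargaining_split(100.0, lambda x: math.log1p(2 * x),
                            lambda x: math.log1p(x)))
```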

14.
Cooperative caching is an important technique to support pervasive Internet access. In order to ensure valid data access, cache consistency must be maintained properly. However, this problem has not been sufficiently studied in mobile computing environments, especially those with ad hoc networks. There are two essential issues in cache consistency maintenance: consistency control initiation and data update propagation. Consistency control initiation not only decides the cache consistency provided to the users, but also impacts the consistency maintenance cost; this issue becomes more challenging in asynchronous and fully distributed ad hoc networks. To this end, we propose the predictive consistency control initiation (PCCI) algorithm, which adaptively initiates consistency control based on its online predictions of forthcoming data updates and cache queries. In order to efficiently propagate data updates through multi-hop wireless connections, the hierarchical data update propagation (HDUP) algorithm is proposed. Theoretical analysis shows that cooperation among the caching nodes facilitates data update propagation. Extensive simulations are conducted to evaluate the performance of both PCCI and HDUP. Evaluation results show that PCCI cost-effectively initiates consistency control even when faced with dynamic changes in data update rate, cache query rate, node speed, and number of caching nodes. The evaluation results also show that HDUP reduces the cost of data update propagation by up to 66%. Copyright © 2009 John Wiley & Sons, Ltd.

15.
《电子学报:英文版》2016,(6):1101-1108
Query result caching is a crucial technique employed in search engines, reducing their response time and load. As search engines continuously update their indexes, the query results in long-lived cache entries may become stale, so it is important to provide a refresh mechanism that enhances the freshness of cached results. We present a prejudgment approach to improve the freshness of the result cache and design an incomplete allocation algorithm. We introduce query Time-to-live (TTL) and term-TTL structures to prejudge the result cache: the query-TTL is used to pre-check the likelihood of a cache hit, and the term-TTL is applied to maintain all terms of the latest posting list. For the cache structure, we design a Queue-Hash structure and develop the corresponding incomplete allocation algorithm. Preliminary results demonstrate that our approaches can improve the freshness of cached results and decrease processing overhead compared with approaches that use no prejudgment.
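The term-TTL and Queue-Hash machinery is specific to the paper; below is a minimal sketch of the query-TTL freshness pre-check alone, with the entry layout and the recompute hook as our assumptions.

```python
import time

class ResultCache:
    """Result cache with a per-entry query-TTL: a cached result is served
    only while it is judged fresh; stale entries trigger a refresh."""

    def __init__(self, query_ttl=300.0):
        self.query_ttl = query_ttl
        self.entries = {}                     # query -> (results, cached_at)

    def get(self, query, recompute):
        now = time.time()
        entry = self.entries.get(query)
        if entry is not None and now - entry[1] < self.query_ttl:
            return entry[0]                   # fresh hit
        results = recompute(query)            # stale or missing: re-evaluate
        self.entries[query] = (results, now)
        return results
```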

16.
Existing cooperative caching algorithms for mobile ad hoc networks face serious challenges due to message overhead and scalability issues. To address these issues, we propose an adaptive virtual-backbone-based cooperative caching scheme that uses a connected dominating set (CDS) to find the desired location of cached data. Message overhead in cooperative caching is mainly due to the cache lookup process. The idea of this scheme is to reduce the number of nodes involved in cache lookup by constructing a virtual backbone that adapts to the dynamic topology of mobile ad hoc networks. The proposed algorithm is decentralized, and the nodes in the CDS perform data dissemination and discovery. Simulation results show that the message overhead created by the proposed cooperative caching technique is much lower than that of other approaches. Moreover, owing to the CDS-based cache discovery applied in this work, the proposed cooperative caching can increase the cache hit ratio and reduce the average delay.
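The paper's backbone adapts to topology changes; as a static illustration of the CDS construction itself, here is a standard greedy heuristic, assuming a connected graph given as an adjacency dict (not the paper's adaptive algorithm).

```python
def greedy_cds(adj):
    """Greedy connected dominating set: start from the highest-degree node
    and repeatedly add the already-covered neighbour that dominates the
    most still-uncovered nodes, so the backbone stays connected."""
    nodes = set(adj)
    start = max(nodes, key=lambda v: len(adj[v]))
    cds = {start}
    covered = {start} | set(adj[start])
    while covered != nodes:
        candidates = covered - cds           # adjacent to the current CDS
        best = max(candidates, key=lambda v: len(set(adj[v]) - covered))
        cds.add(best)
        covered |= set(adj[best]) | {best}
    return cds

adj = {1: [2, 3], 2: [1, 3, 4], 3: [1, 2, 5], 4: [2, 6], 5: [3], 6: [4]}
print(greedy_cds(adj))   # {2, 3, 4}: every node is in or adjacent to the set
```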

17.
This paper presents three different optimization cases for normalized fractional-order low-pass filters (LPFs), with numerical, circuit, and experimental results. A multi-objective optimization technique is used to control several filter specifications: the transition bandwidth, the stop-band frequency gain, and the maximum allowable peak in the filter pass band. The extra degree of freedom provided by the fractional order parameter allows full manipulation of the filter specifications to obtain the desired response required by any application. The proposed mathematical model is further applied to a case study of a practical second-generation current conveyor (CCII)-based fractional low-pass filter. Circuit simulations are performed for two fractional-order filters, of orders 1.6 and 3.6, with cutoff frequencies of 200 and 500 Hz, respectively. Experimental results are also presented for an LPF with a 4.46 kHz cutoff frequency using a fabricated fractional capacitor of order 0.8, proving the validity of the proposed design approach.
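A small numeric sketch assuming the canonical single-term fractional form H(s) = 1 / ((s/wc)^alpha + 1); the CCII circuit realization is not modeled. Note that for 1 < alpha < 2 this form already exhibits the pass-band peak the authors constrain.

```python
import numpy as np

def fractional_lpf_mag(freqs_hz, alpha, fc_hz):
    """Magnitude of H(s) = 1 / ((s/wc)**alpha + 1) evaluated on the jw axis."""
    s = 1j * np.asarray(freqs_hz) / fc_hz     # normalized jw / wc
    return np.abs(1.0 / (s ** alpha + 1.0))

f = np.array([50.0, 200.0, 500.0, 2000.0])
print(fractional_lpf_mag(f, 1.6, 200.0))      # order 1.6, 200 Hz cutoff
```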

18.
Caching is an important means to scale up the growth of the Internet. Weak consistency is a major approach used in Web caching and has been deployed in various forms. The paper investigates some fundamental properties and performance issues associated with an expiration-based caching system. We focus on a hierarchical caching system based on the time-to-live expiration mechanism and present a basic model for such a system. By analyzing the intrinsic timing behavior of the basic model, we derive important performance metrics from the perspectives of the caching system and the end users, respectively. Based on the results for the basic model, we introduce threshold-based and randomization-based techniques to enhance and generalize it further. Our results offer important insights into hierarchical caching systems based on the weak-consistency paradigm.
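The flavour of the basic model for a single object: with Poisson(lam) requests and a non-resetting TTL of T seconds, each miss starts a cycle holding one miss plus lam*T expected hits, so the hit ratio is lam*T / (1 + lam*T). A short simulation confirms this; the paper's hierarchical, threshold, and randomization variants generalize it.

```python
import random

def ttl_hit_ratio(lam, T, n=200_000):
    """Simulated vs. analytical hit ratio of one non-resetting TTL cache."""
    t, expiry, hits = 0.0, -1.0, 0
    for _ in range(n):
        t += random.expovariate(lam)     # Poisson request arrivals
        if t < expiry:
            hits += 1                    # document still fresh in cache
        else:
            expiry = t + T               # miss: fetch and cache for T seconds
    return hits / n, lam * T / (1 + lam * T)

print(ttl_hit_ratio(2.0, 5.0))           # both values come out near 0.909
```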

19.
陈昊宇  胡宏林 《电讯技术》2023,63(12):1902-1910
As an important model in 5G, the fog radio access network (Fog Radio Access Network, F-RAN) achieves significant performance gains through techniques such as device-to-device communication and wireless relaying, while suitable caching at edge devices lets content-caching users (Caching Users, CUs) send cached content directly to content-requesting users (Requesting Users, RUs), effectively reducing the fronthaul-link burden and the download delay. Considering an F-RAN scenario in which users issue requests and receive delivery, each CU's content-request queue is modeled as an independent M/D/1 queue, and expressions for the CU cache hit rate and the average download delay are derived as functions of the content caching and delivery scheme; it is proved that the connection between the CU cache hit rate and the statistical distribution of the content helps achieve a near-optimal solution for the former. For an optimization problem formulated from an expectation perspective over a period of time, an algorithm based on the statistical distribution is proposed, with attention paid to delivery control during execution. Simulation results show that, compared with existing caching strategies, the scheme that optimizes the overall statistical distribution of content maximizes the CU cache hit rate while reducing the average download delay.
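The queueing building block here is standard: modeling a CU as an M/D/1 queue, the Pollaczek-Khinchine formula with deterministic service gives the mean download delay. A sketch under assumed rates follows; the hit-rate optimization itself is the paper's contribution.

```python
def md1_mean_delay(lam, service_time):
    """Mean sojourn time of an M/D/1 queue:
    T = D + rho * D / (2 * (1 - rho)), with rho = lam * D."""
    rho = lam * service_time
    assert rho < 1.0, "queue must be stable (rho < 1)"
    return service_time + rho * service_time / (2.0 * (1.0 - rho))

# e.g. a CU receiving 4 requests/s, each delivery taking 0.2 s to transmit
print(md1_mean_delay(4.0, 0.2))   # mean download delay: 0.6 s
```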

20.
In-network caching is one of the most important issues in content-centric networking (CCN), and it strongly influences the performance of the caching system. Although much work has been done on in-network caching scheme design in CCN, most of it does not jointly address multiple network attribute parameters during caching algorithm design. To fill this gap, a new in-network caching scheme based on grey relational analysis (GRA) is proposed. The authors first define two new metrics, named the request influence degree (RID) and the cache replacement rate. The RID indicates the importance of a node along the content delivery path from the viewpoint of arriving interest packets; the cache replacement rate denotes the caching load of the node. Then, combined with the number of hops a request travels from the user and the node traffic, four network attribute parameters are considered during the in-network caching algorithm design. Based on these four network parameters, a GRA-based in-network caching algorithm is proposed, which can significantly improve the performance of CCN. Finally, extensive simulations based on ndnSIM demonstrate that the GRA-based caching scheme achieves a lower load on the source server and fewer average hops than the existing betweenness (Betw) and ALWAYS schemes.
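A generic GRA scoring sketch, assuming the four parameters named above (RID, hops, node traffic, cache replacement rate) as columns of an attribute matrix, all pre-oriented so that larger is better; the paper's exact normalization and weighting may differ.

```python
import numpy as np

def grey_relational_grade(X, rho=0.5):
    """Score each row (candidate node) of attribute matrix X against the
    ideal row; rho is the conventional distinguishing coefficient."""
    X = np.asarray(X, dtype=float)
    # Min-max normalize each attribute column to [0, 1], larger-is-better.
    X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)
    delta = np.abs(X - X.max(axis=0))                 # deviation from the ideal
    dmin, dmax = delta.min(), delta.max()
    xi = (dmin + rho * dmax) / (delta + rho * dmax)   # relational coefficients
    return xi.mean(axis=1)                            # grade per candidate node

X = [[0.9, 3, 0.2, 0.1],    # already oriented: higher value = more desirable
     [0.5, 1, 0.6, 0.4],
     [0.7, 2, 0.8, 0.9]]
print(grey_relational_grade(X))   # higher grade -> better caching candidate
```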
