Similar Documents
20 similar documents found (search time: 93 ms)
1.
Current web cache replacement algorithms mostly rely on users' historical accesses and lack any prediction of future requests. Exploiting the predictive capability of autoregressive models, this paper proposes a web cache replacement algorithm based on autoregressive prediction, built on top of the access-interval-based (LRU) replacement algorithm, and validates it through simulation in OPNET. The simulation results show that, compared with traditional replacement algorithms, the autoregressive-prediction-based algorithm achieves higher object hit ratios and byte hit ratios for proxy caches.
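As a minimal sketch of the idea described above (not the paper's exact method), one could fit a small AR(p) model per object over its past inter-request intervals and evict the object whose next request is predicted to be farthest away; the function names and the AR order below are illustrative assumptions.

```python
import numpy as np

def predict_next_gap(gaps, p=2):
    """Predict the next inter-request gap of one object with an AR(p) model.
    Falls back to the mean gap when there is too little history."""
    if len(gaps) <= p:
        return float(np.mean(gaps)) if gaps else float("inf")
    X = np.array([gaps[i:i + p] for i in range(len(gaps) - p)])
    y = np.array(gaps[p:])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares AR fit
    return float(np.dot(gaps[-p:], coef))

def choose_victim(history):
    """history: {object_id: [gap1, gap2, ...]}. Evict the object whose next
    request is predicted to be farthest in the future (largest predicted gap)."""
    return max(history, key=lambda oid: predict_next_gap(history[oid]))
```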

2.
Admission Strategy and Replacement Algorithm for Interval Caching in Variable-Bit-Rate Video Servers   [Cited by: 4 (0 self-citations, 4 others)]
龙白滔, 钟玉琢, 王浩. 《电子学报》, 2002, 30(2): 163-167
This paper proposes the ROC (Resist-Overload Capability) cache admission strategy and replacement algorithm, which solve the cache management problem of variable-bit-rate video servers that use interval caching. Deterministic admission strategies provide deterministic quality of service, but adapt poorly to interactive applications and suffer from low cache utilization; statistical-multiplexing admission strategies require massive convolution computations and are therefore impractical. The ROC admission strategy uses only simple computations to provide probabilistic cache QoS guarantees and high cache utilization. Simulation results show that, under a typical system configuration, the ROC admission strategy and replacement algorithm improve system throughput by about 25%; compared with the deterministic admission strategy and the STP-L replacement algorithm, they serve about 17% more video streams, and average cache utilization is about 38% higher.

3.
卞琛, 于炯, 英昌甜, 修位蓉. 《电子学报》, 2017, 45(2): 278-284
The parallel computing framework Spark lacks an effective cache selection mechanism and cannot automatically identify and cache highly reused data; its cache replacement uses LRU, whose metric is too coarse, which hurts job execution efficiency. This paper proposes a Self-Adaptive Cache Management (SACM) strategy for Spark, comprising an automatic cache selection algorithm (Selection), a Parallel Cache Cleanup (PCC) algorithm, and a Lowest Weight Replacement (LWR) algorithm. The selection algorithm analyzes a job's DAG (Directed Acyclic Graph) structure to identify reused RDDs and cache them automatically. PCC asynchronously cleans up worthless RDDs to improve cluster memory utilization. LWR chooses eviction targets by weight, avoiding the job latency caused by recomputing expensive RDDs and preserving computation efficiency under resource bottlenecks. Experiments show that the strategy improves Spark job execution efficiency and makes effective use of memory resources.
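A hedged sketch of the weight-based eviction idea (LWR) described above; the weight formula combining reuse count, recomputation cost, and size is an assumption for illustration, not the paper's exact definition.

```python
from dataclasses import dataclass

@dataclass
class CachedRDD:
    rdd_id: int
    reuse_count: int      # how many downstream stages still reference it
    compute_cost: float   # estimated seconds to recompute from lineage
    size_mb: float        # memory footprint

def weight(r: CachedRDD) -> float:
    # Illustrative weight: keeping an RDD is more valuable when it is reused
    # often and expensive to recompute, less valuable when it is large.
    return r.reuse_count * r.compute_cost / max(r.size_mb, 1e-6)

def evict_lowest_weight(cached, needed_mb):
    """Evict lowest-weight RDDs until `needed_mb` of memory is freed."""
    freed, victims = 0.0, []
    for r in sorted(cached, key=weight):
        if freed >= needed_mb:
            break
        victims.append(r.rdd_id)
        freed += r.size_mb
    return victims
```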

4.
Design and Implementation of a Cache Replacement Algorithm for Large-Scale Continuous Media Services   [Cited by: 4 (0 self-citations, 4 others)]
张潇, 吴敏强, 恽爽, 陆桑璐, 谢立. 《电子学报》, 2003, 31(5): 783-785
Cache design is a critical issue for continuous media. Targeting the characteristics of large-scale continuous media service systems, this paper proposes the EA cache replacement algorithm. The algorithm fully accounts for the service requirements of both existing users and users requesting admission, improving memory usage efficiency. Our theoretical analysis and simulation experiments show that it substantially outperforms traditional cache replacement algorithms.

5.
0323233 Design and Implementation of a Cache Replacement Algorithm for Large-Scale Continuous Media Services [J] / 张潇 // 电子学报. 2003, 31(5): 783-785 (L). Cache design is a critical issue for continuous media. Targeting the characteristics of large-scale continuous media service systems, this paper proposes the EA cache replacement algorithm. The algorithm fully accounts for the service requirements of both existing users and users requesting admission, improving memory usage efficiency. Our theoretical analysis and simulation experiments show that it substantially outperforms traditional cache replacement algorithms. 7 refs.

6.
Research and Implementation of an Improved Cache Scheme Based on the LRU Algorithm   [Cited by: 1 (0 self-citations, 1 other)]
廖鑫. 《电子工程师》, 2008, 34(7): 46-48
The LRU (Least Recently Used) replacement algorithm is widely used in many applications on single-processor architectures. In multiprocessor architectures, however, traditional LRU is not optimal for reducing the miss rate of shared caches. This paper studies basic cache block replacement algorithms and, building on an analysis of LRU, proposes an improved cache scheme based on LRU and access probability: recency and access frequency are considered jointly when selecting the candidate replacement block, making the replacement algorithm better suited to multiprocessors.
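A small sketch, under assumed details, of combining recency with access frequency when picking the replacement candidate: among the least recently used fraction of the cached blocks, the block with the lowest access count is evicted. The candidate fraction and bookkeeping are illustrative, not the paper's exact scheme.

```python
from collections import OrderedDict

class LRUFreqCache:
    """Toy cache that evicts by access frequency among the LRU-most candidates."""
    def __init__(self, capacity, candidate_fraction=0.25):
        self.capacity = capacity
        self.k = max(1, int(capacity * candidate_fraction))
        self.entries = OrderedDict()   # key -> access count, kept in LRU order

    def access(self, key):
        if key in self.entries:
            self.entries[key] += 1
            self.entries.move_to_end(key)        # mark as most recently used
            return True                           # hit
        if len(self.entries) >= self.capacity:
            # Candidates: the k least recently used blocks; evict the one with
            # the fewest accesses instead of blindly taking the LRU block.
            candidates = list(self.entries.items())[:self.k]
            victim = min(candidates, key=lambda kv: kv[1])[0]
            del self.entries[victim]
        self.entries[key] = 1
        return False                              # miss
```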

7.
A Novel P2P-Based CDN Network and Its Cache Replacement Algorithm   [Cited by: 1 (0 self-citations, 1 other)]
This paper analyzes the characteristics of content delivery networks and P2P networks, presents the architecture of an autonomous caching system for a novel P2P-based CDN, formulates the intelligent cache replacement problem within autonomous cache regions, and gives an intelligent replacement method together with a dual-keyword cache replacement algorithm. Simulation experiments show that keywords with low computational complexity and high hit ratio can be found to drive cache replacement.

8.
To improve cache utilization in NDN (Named Data Networking), an ant-colony-based neighbor-cooperative cache management (ACNCM) strategy is proposed. First, the single-node cache replacement problem is modeled as a 0/1 knapsack problem; the cache value of locally stored content is defined from the size of the cached data, its access frequency, and the depth of neighbor replicas, and an ant-colony-based replacement algorithm is proposed. Then, following the idea of neighborhood cooperation, routing nodes periodically exchange their cache information, and content evicted from a single node is placed on selected neighbor nodes to complete cooperative cache management. Experimental results show that ACNCM outperforms existing methods in cache hit ratio, network overhead, and average response delay.
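Setting the ant-colony heuristic aside, the single-node problem above is a 0/1 knapsack: keep the subset of cached items whose total size fits the cache and whose total cache value is maximal. A plain dynamic-programming sketch follows; the (size, value) inputs are assumed to come from a value function like the one the paper derives from size, access frequency, and neighbor replica depth.

```python
def select_contents_to_keep(items, capacity):
    """items: list of (size, value) with integer sizes; capacity: cache size.
    Returns indices of items to keep, maximizing total cache value (0/1 knapsack DP)."""
    n = len(items)
    dp = [[0.0] * (capacity + 1) for _ in range(n + 1)]
    for i, (size, value) in enumerate(items, 1):
        for c in range(capacity + 1):
            dp[i][c] = dp[i - 1][c]
            if size <= c:
                dp[i][c] = max(dp[i][c], dp[i - 1][c - size] + value)
    # Backtrack to recover the kept set; everything else is an eviction candidate.
    keep, c = [], capacity
    for i in range(n, 0, -1):
        if dp[i][c] != dp[i - 1][c]:
            keep.append(i - 1)
            c -= items[i - 1][0]
    return sorted(keep)
```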

9.
The cache replacement algorithm has a major impact on the performance of a proxy cache. This paper studies web cache replacement algorithms and proposes an improvement to the Hybrid algorithm. Experimental results show that, while keeping latency relatively low and the URL hit ratio relatively high, the improved algorithm substantially increases the byte hit ratio, which is of practical value for improving network conditions.

10.
To address the diverse media-type requirements of interactive streaming applications and heterogeneous networks, transcoding is introduced into the proxy server and a new cache-value assessment method tailored to transcoding is constructed; cache replacement is then performed through a new replacement model. Simulation results show that the transcoding-aware caching algorithm meets the demand for diverse media types and achieves a higher cache hit ratio and a lower startup delay ratio than traditional replacement algorithms.

11.
Understanding the nature of media server workloads is crucial to properly designing and provisioning current and future media services. The main issue we address in this paper is the workload analysis of today's enterprise media servers. This analysis aims to establish a set of properties specific to enterprise media server workloads and to compare them to well-known related observations about web server workloads. We partition the media workload properties into two groups: static and temporal. While the static properties provide more traditional and general characteristics of the underlying media fileset and quantitative properties of client accesses to those files (independent of the access time), the temporal properties reflect the dynamics and evolution of accesses to the media content over time. We propose two new metrics characterizing the temporal properties: 1) the new files impact metric characterizing the site evolution due to new content and 2) the life span metric reflecting the rates of change in accesses to the newly introduced files. We illustrate these new metrics with the analysis of two different enterprise media server workloads collected over a significant period of time.
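A rough sketch of how a metric like "new files impact" might be computed from an access log, under an assumed definition: the fraction of accesses in each time interval that go to files first introduced in that interval. This formula is illustrative only, not the paper's exact metric.

```python
from collections import defaultdict

def new_files_impact(accesses, first_seen, interval):
    """accesses: list of (timestamp, file_id); first_seen: {file_id: timestamp}.
    Returns, per time bucket, the fraction of accesses that hit files first
    introduced in that same bucket."""
    per_bucket = defaultdict(lambda: [0, 0])          # bucket -> [new, total]
    for ts, f in accesses:
        bucket = int(ts // interval)
        per_bucket[bucket][1] += 1
        if int(first_seen[f] // interval) == bucket:
            per_bucket[bucket][0] += 1
    return {b: new / total for b, (new, total) in per_bucket.items()}
```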

12.
This paper proposes a real-time feedback control architecture for timeliness and security that uses quality-of-service management to guarantee both. It discusses the QMF approach, which provides deadline miss ratio and temporal consistency guarantees for transactions when the workload and data access patterns are not known in advance, and which adjusts system behavior dynamically based on the performance errors measured in the feedback control loop. Adaptive adjustment policies required by the feedback controller dynamically tune the workload so as to deliver real-time data services with the timeliness and security guarantees that users expect.

13.
A hierarchical characterization of a live streaming media workload   [Cited by: 1 (0 self-citations, 1 other)]
We present a thorough characterization of what we believe to be the first significant live Internet streaming media workload in the scientific literature. Our characterization of over 3.5 million requests spanning a 28-day period is done at three increasingly granular levels, corresponding to clients, sessions, and transfers. Our findings support two important conclusions. First, we show that the nature of interactions between users and objects is fundamentally different for live versus stored objects. Access to stored objects is user driven, whereas access to live objects is object driven. This reversal of active/passive roles of users and objects leads to interesting dualities. For instance, our analysis underscores a Zipf-like profile for user interest in a given object, which is in contrast to the classic Zipf-like popularity of objects for a given user. Also, our analysis reveals that transfer lengths are highly variable and that this variability is due to client stickiness to a particular live object, as opposed to structural (size) properties of objects. Second, by contrasting two live streaming workloads from two radically different applications, we conjecture that some characteristics of live media access workloads are likely to be highly dependent on the nature of the live content being accessed. This dependence is clear from the strong temporal correlation observed in the traces, which we attribute to the impact of synchronous access to live content. Based on our analysis, we present a model for live media workload generation that incorporates many of our findings, and which we implement in GISMO.
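As a quick sketch of the kind of check behind a "Zipf-like profile" claim (not the authors' procedure): rank the counts, then fit a line in log-log space; a Zipf-like distribution shows counts roughly proportional to rank^(-alpha).

```python
import numpy as np

def zipf_alpha(counts):
    """Estimate a Zipf-like exponent alpha from a list of access counts
    (e.g., per-user request counts for one live object) by fitting
    log(count) ~ -alpha * log(rank) + c with least squares."""
    counts = np.sort(np.asarray(counts, dtype=float))[::-1]
    ranks = np.arange(1, len(counts) + 1)
    mask = counts > 0
    slope, intercept = np.polyfit(np.log(ranks[mask]), np.log(counts[mask]), 1)
    return -slope   # a good linear fit with alpha near 1 suggests Zipf-like behavior
```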

14.
Han Zhijie, Ma Ji_ao, He Xin, Fan Weibei. Journal of Signal Processing Systems, 2019, 91(10): 1149-1157

It is generally accepted that the Zipf distribution is a reliable access pattern for text-based Web content. However, with the dramatic increase of VoD media traffic on the Internet, such as Flash P2P, the inconsistency between the access patterns of media objects and the Zipf model has been studied by many researchers. In this paper, we study a large variety of media workloads collected from both browser and server sides in Adobe Flash P2P systems deployed in services such as Youku and YouTube. Through extensive analysis and modeling, we find that the object reference ranks of all these workloads follow the logistic (LOG) distribution despite their different media systems and delivery methods; this means they do not exhibit a long-tail effect. Furthermore, we construct mathematical models of the access pattern of Flash P2P traffic. Analyzing this model of media traffic access makes it possible to better describe users' access behavior, and it is well suited to configuring and allocating network resources so that they can be used more efficiently.
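As a hedged sketch, one way to probe the LOG-versus-Zipf question on one's own trace is to fit a logistic curve to the rank-frequency data and compare its residual against a power-law fit. The sigmoid-over-log-rank parameterization below is an assumption for illustration, not the authors' exact model.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_in_log_rank(log_rank, L, k, x0):
    # A sigmoid over log(rank); one plausible parameterization of a "LOG" model.
    return L / (1.0 + np.exp(k * (log_rank - x0)))

def fit_reference_ranks(counts):
    """counts: per-object access counts. Returns fitted (L, k, x0) of a logistic
    curve over log-rank, for comparison against a power-law (Zipf) fit."""
    y = np.sort(np.asarray(counts, dtype=float))[::-1]
    x = np.log(np.arange(1, len(y) + 1))
    p0 = [y.max(), 1.0, float(np.median(x))]
    params, _ = curve_fit(logistic_in_log_rank, x, y, p0=p0, maxfev=10000)
    return params
```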


15.
As a service-oriented architecture, the data grid provides remote users with services such as distributed data query, storage, and management, and data classification in the data grid has increasingly attracted researchers' attention. This paper describes an efficient classification system for the data grid. The system dynamically synthesizes multiple classification methods exposed as grid services (Dynamical Synthesis of Multiple Methods, DSMM), dynamically improving the low-accuracy points of traditional classification methods and distributing the classification work across grid nodes under a load-balancing constraint. In addition, the lifecycle management provided by DSMM ensures its robustness and flexibility as a grid application, fitting the loosely coupled architecture of the grid. Experiments on 2927 breast cancer patient cases show that DSMM indeed delivers flexibility and efficiency in a data grid environment and improves classification accuracy.

16.
Memory-intensive applications present unique challenges to an application-specific integrated circuit (ASIC) designer in terms of the choice of memory organization, memory size requirements, bandwidth and access latencies, etc. The high potential of single-chip distributed logic-memory architectures in addressing many of these issues has been recognized in general-purpose computing, and more recently, in ASIC design. The high-level synthesis (HLS) techniques presented in this paper are motivated by the fact that many memory-intensive applications exhibit irregular array data access patterns. Synthesis should, therefore, be capable of determining a partitioned architecture, wherein array data and computations may have to be heterogeneously distributed for achieving the best performance speed-up. We use a combination of clustering and min-cut style partitioning techniques to yield distributed architectures, based on simulation profiling while considering various factors including data access locality, balanced workloads, inter-partition communication, etc. Our experiments with several benchmark applications show that the proposed techniques yielded two-way partitioned architectures that can achieve up to 2.1x (average of 1.9x) performance speed-up over conventional HLS solutions, while achieving up to 1.5x (average of 1.4x) performance speed-up over the best homogeneous partitioning solution feasible. At the same time, the reduction in the energy-delay product over conventional single-memory designs is up to 2.7x (average of 2.0x). A larger amount of partitioning makes further system performance improvement achievable at the cost of chip area.

17.
A Disk Cache Replacement Algorithm Combined with a Dynamic Write Policy   [Cited by: 1 (0 self-citations, 1 other)]
The disk cache is a technique for improving I/O performance. By analyzing the impact of cache write policies and of the LRU and LFU replacement algorithms on disk cache performance, a dynamic write policy is introduced and the replacement algorithm is improved, combining the frequency-based block replacement algorithm FBR with the dynamic write policy. The combination applies well to disk access, fully exploits locality, improves I/O performance, and yields better disk performance across a range of workloads and cache sizes.
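A rough sketch of an FBR-style eviction with the write policy reduced to a hook: blocks in the least recently used "old" portion of the stack compete by reference count, and a dirty victim is written back before eviction. The section sizing and the write-back hook are simplified assumptions, not the paper's exact design.

```python
from collections import OrderedDict

class FBRCache:
    """Simplified frequency-based replacement: evict the least-referenced block
    from the LRU-most 'old' section; write back dirty victims before eviction."""
    def __init__(self, capacity, old_fraction=0.3):
        self.capacity = capacity
        self.old_size = max(1, int(capacity * old_fraction))
        self.blocks = OrderedDict()          # block_id -> [ref_count, dirty]

    def access(self, block_id, write=False, write_back=lambda b: None):
        if block_id in self.blocks:
            self.blocks[block_id][0] += 1
            self.blocks[block_id][1] = self.blocks[block_id][1] or write
            self.blocks.move_to_end(block_id)
            return True                       # hit
        if len(self.blocks) >= self.capacity:
            old = list(self.blocks.items())[:self.old_size]   # LRU-most section
            victim, (refs, dirty) = min(old, key=lambda kv: kv[1][0])
            if dirty:
                write_back(victim)            # dynamic write policy would decide when
            del self.blocks[victim]
        self.blocks[block_id] = [1, write]
        return False                          # miss
```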

18.
Popularity of videos is a key factor for the design and management of a streaming media system. In this paper, three factors representing clients' access to a media system are investigated to characterize popularity. One of the factors is the commonly used access frequency. Numerical studies on one of the workloads from a video-on-demand system at the University of Science and Technology of China are also given. The results show that the commonly used Zipf-like model is not suitable to characterize the popularity distribution. However, the stretched exponential model is better, and the shape of the stretched exponential distribution of popularity is related to the characterization of popularity and the duration of the workload. The rank correlation between the access frequency and the other two factors is further studied. It is concluded that the rank of video popularity and the ability of each characterization to distinguish video popularity are different, which implies that the combinative use of different characterizations can increase the distinction of video popularity. Copyright © 2013 John Wiley & Sons, Ltd.
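A hedged sketch of checking a stretched exponential fit on rank-ordered access counts: under one common form of the SE model, the counts y_i become approximately linear in log(rank) after raising them to a stretch exponent c, i.e. y_i^c ≈ -a·log(i) + b. The code scans candidate values of c and keeps the one with the most linear relationship; both the form and the scan are assumptions, not the paper's procedure.

```python
import numpy as np

def fit_stretched_exponential(counts, c_grid=np.linspace(0.1, 1.0, 91)):
    """Rank-order the counts and, for each candidate stretch exponent c, measure
    how linear y**c is in log(rank); return the best c plus the fitted slope a
    and intercept b of y**c = -a*log(rank) + b."""
    y = np.sort(np.asarray(counts, dtype=float))[::-1]
    x = np.log(np.arange(1, len(y) + 1))
    best_c = max(c_grid, key=lambda c: abs(np.corrcoef(x, y ** c)[0, 1]))
    slope, intercept = np.polyfit(x, y ** best_c, 1)
    return float(best_c), -float(slope), float(intercept)   # (c, a, b)
```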

19.
The rapid deployment of information and communications technology (ICT) across the globe has led to a network of high-density computer data centers to store, process and transmit information. These large-scale technology warehouses consume vast amounts of energy for running the compute infrastructure and auxiliary cooling resources. Recent literature has suggested the possibility of globally staggering compute workloads to take advantage of local climatic conditions as a means to reducing cooling energy costs. This paper further explores this premise by performing an in-depth analysis of the environmental and economic burden of managing the thermal infrastructure of a globally connected data center network. The paper examines a case study where the potential energy savings achievable by staggering workloads across arbitrarily chosen data centers in the U.S., India, and Russia are examined. The results show that the environmental benefit of such off-shoring is mostly dependent on the fuel mix of the grid to which the workload is transferred and the energy consumption in each location. Further, we show that dynamic optimization of the thermal workloads based on local weather patterns can reduce the environmental burden by up to 30%. The paper concludes with a detailed economic assessment. For the case study in this paper, we find that such global workload staggering can potentially reduce operational costs by nearly 35%.

20.
As technology scales toward deep submicron, the integration of a large number of IP blocks on the same silicon die is becoming technically feasible, thus enabling large-scale parallel computations, such as those required for multimedia workloads. The communication architecture is becoming the bottleneck for these multiprocessor Systems-on-Chip (SoC), and efficient contention resolution schemes for managing simultaneous access requests to the shared communication resources are required to prevent system performance degradation. The contribution of this work is to analyze the impact on multiprocessor SoC performance of different bus arbitration policies under different communication patterns, showing the distinctive features of each policy and the strong correlation of their effectiveness with the communication requirements of the applications. Beyond traditional arbitration schemes such as round robin and TDMA, another policy is considered that periodically allocates a temporal slot for contention-free bus utilization to a processor which needs fixed predictable bandwidth for the correct execution of its time-critical task. The results are derived on a complete and scalable multiprocessor SoC simulation platform based on SystemC, whose software support includes a complete embedded multiprocessor OS (RTEMS). The communication architecture is AMBA compliant, and we exploit the flexibility of this multi-master commercial standard, which does not specify the arbitration algorithm, to implement the explored contention resolution schemes.
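A toy sketch of the slot-reservation idea discussed above: a round-robin arbiter that, every `period` cycles, grants a reserved contention-free slot to one latency-critical master. The parameters and the single-function form are illustrative assumptions, not the AMBA-specific implementation evaluated in the paper.

```python
def arbitrate(requests, cycle, n_masters, reserved_master, period, last_grant):
    """requests: set of master ids requesting the bus this cycle.
    Returns (granted master id or None, updated last_grant)."""
    # Reserved, contention-free slot for the time-critical master.
    if cycle % period == 0 and reserved_master in requests:
        return reserved_master, reserved_master
    # Otherwise plain round robin, starting after the last granted master.
    for offset in range(1, n_masters + 1):
        m = (last_grant + offset) % n_masters
        if m in requests:
            return m, m
    return None, last_grant
```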
