Similar documents
20 similar documents found (search time: 109 ms)
1.
To improve the overall performance of proxy systems, this paper proposes a new distributed proxy cache system, the two-tier cache cluster, which exploits the temporal locality and similarity of intranet users' accesses and builds on existing distributed cache systems. The two-tier cache cluster is divided into an intranet cluster cache layer and a proxy cluster cache layer. By adopting a two-tier proxy cache structure, it makes full use of existing intranet resources, spreads the proxy load, reduces communication overhead between proxies, and also improves the utilization of cache resources…

2.
Proxy caching is a key technique for reducing the data-transfer cost of streaming video-on-demand systems, but existing caching schemes are not very efficient because of the limited cache space of the proxy itself and the lack of cooperation between the proxy and its clients. To address the shortcomings of pure proxy caching, this paper proposes a cooperative caching scheme between the proxy and its clients: the clients' cache space is aggregated to enlarge the total cache space of the system, with the proxy server handling resource placement and coordination. Simulation results show that, compared with a pure proxy caching system, the added client cache space significantly reduces backbone bandwidth consumption and lowers the cost of transferring streaming media data.
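The proxy-coordinated lookup implied above — check the proxy's own cache first, then the client caches it aggregates, and only then fall back to the backbone — can be sketched as follows. The function name and dictionary layout are illustrative assumptions, not the paper's interface:

```python
def cooperative_lookup(obj_id, proxy_cache, client_caches):
    """Cooperative cache lookup across the proxy and its clients.

    proxy_cache: dict mapping object id -> data held by the proxy.
    client_caches: dict mapping client name -> that client's cache dict;
    the proxy coordinates placement, so it knows what each client holds.
    Returns (data, source); source is 'origin' on a total miss, meaning
    the object must be fetched over the backbone.
    """
    if obj_id in proxy_cache:
        return proxy_cache[obj_id], "proxy"
    for client, cache in client_caches.items():
        if obj_id in cache:
            return cache[obj_id], client
    return None, "origin"
```

A hit in any client's cache is served over the (cheap) local network, so only the final `"origin"` case consumes backbone bandwidth.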

3.
This paper surveys a trend in the evolution of web caching: cooperative caching. It explains the problems cooperative caching must solve, lists several models of cache cooperation together with representative protocols for each, and describes several approaches to proxy pruning.

4.
Implementation approaches for cooperative web caching   (Total citations: 1; self-citations: 0; citations by others: 1)
This paper surveys a trend in the evolution of web caching: cooperative caching. It explains the problems cooperative caching must solve, lists several models of cache cooperation together with representative protocols for each, and describes several approaches to proxy pruning.

5.
Virtualization technology is widely used in malware analysis systems, and virtual machine detection, as an anti-analysis technique, matters both to malware authors and to security researchers. To characterize and explore VM detection methods, this paper presents the basic idea of VM detection and reviews several existing approaches. Aiming at a more general method, it proposes VM detection based on differences in CPU cache operating modes: by comparing instruction execution efficiency with the CPU cache enabled versus disabled, it determines whether the current environment is virtual. Experiments show that disabling the CPU cache significantly degrades instruction execution efficiency on a physical machine, but has no significant effect in a virtual environment. The results demonstrate that exploiting this difference in CPU cache behavior between physical and virtual environments is a feasible way to detect virtual machines.
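The detection decision described above reduces to comparing two timing measurements of the same instruction benchmark. The sketch below shows only that decision logic; actually disabling the CPU cache (e.g., via the CR0.CD bit on x86) requires kernel-mode code, so the timings here are hypothetical inputs, and the 10x threshold is an illustrative value not taken from the paper:

```python
def detect_vm(time_cache_on: float, time_cache_off: float,
              slowdown_threshold: float = 10.0) -> bool:
    """Decide whether the environment is likely virtual.

    On a physical machine, disabling the CPU cache slows instruction
    execution dramatically; in a VM the "disable" typically has little
    effect because the hypervisor does not propagate it to real hardware.

    time_cache_on / time_cache_off: elapsed times for the same benchmark
    loop with the cache enabled and disabled, respectively.
    """
    slowdown = time_cache_off / time_cache_on
    # Small slowdown => the cache "disable" had no real effect => likely a VM.
    return slowdown < slowdown_threshold

# Hypothetical measurements (seconds) for the same benchmark loop:
print(detect_vm(0.02, 1.5))   # large slowdown -> physical machine (False)
print(detect_vm(0.02, 0.03))  # little change  -> virtual machine (True)
```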

6.
A new P2P-based CDN network and a cache replacement algorithm   (Total citations: 1; self-citations: 0; citations by others: 1)
This paper analyzes the characteristics of content delivery networks and P2P networks, presents the architecture of a new P2P-based CDN with an autonomous caching system, formulates the intelligent cache replacement problem for autonomous caching regions, and gives an intelligent replacement method together with a dual-keyword cache replacement algorithm. Simulation experiments show that a replacement keyword combining low computational complexity with a high hit rate can be found to drive cache replacement.

7.
吴海博, 李俊, 智江. Journal on Communications (《通信学报》), 2016, 37(5): 62-72
This paper proposes PCP, a heuristic caching method for information-centric networks based on probabilistic storage. The main idea is that request and data messages gather the necessary statistics in transit; when a data message travels back, each caching node along the path decides with a certain probability whether to cache the content locally. The caching probability jointly accounts for content popularity and cache placement benefit: the more popular the content and the larger the placement benefit, the higher its probability of being cached. Experimental results show that PCP holds significant advantages over existing methods in cache service rate, cache hit rate, and average access latency, while incurring low overhead.
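The per-node decision PCP describes can be sketched as follows. The abstract only states that the probability rises with both popularity and placement benefit, so the product form and the scaling constant `alpha` are illustrative assumptions:

```python
import random

def cache_probability(popularity: float, placement_gain: float,
                      alpha: float = 1.0) -> float:
    """Caching probability increasing with popularity and placement gain.

    The product form and alpha are assumptions of this sketch; the paper
    only requires that hotter content with higher placement benefit be
    cached with higher probability. Result is clamped to [0, 1].
    """
    p = alpha * popularity * placement_gain
    return min(1.0, max(0.0, p))

def on_data_message(node_cache: dict, content_id: str, data: bytes,
                    popularity: float, placement_gain: float,
                    rng: random.Random) -> bool:
    """Each node on the return path independently draws against the
    caching probability and stores the content locally on success."""
    if rng.random() < cache_probability(popularity, placement_gain):
        node_cache[content_id] = data
        return True
    return False
```

Because every node on the return path draws independently, popular content tends to be replicated closer to requesters without any global coordination.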

8.
The trend in carrier transformation is to own one's own business rather than act as a mere bit pipe, moving from traffic operation to content networking. The disk utilization of a content-network cache system strongly affects content-network quality. The Hubei company carried out a series of optimizations of its content-network cache system to improve its service capability. This paper presents methods of raising disk utilization through such optimization, increasing the number of hot resources the cache system can store, and thereby increasing the traffic served from cache, reducing origin-fetch traffic, and improving the cache system's gain ratio.

9.
余红梅, 樊自普. Electronic Test (《电子测试》), 2010, (3): 22-26, 36
In streaming video-on-demand systems, existing caching strategies under the CDN architecture do not adequately address the waste of backbone bandwidth. To reduce backbone bandwidth usage, startup latency, and network load imbalance, and to better support VCR operations during playback, this paper builds on the CDN streaming architecture and, combining the existing prefix caching and segment caching strategies, proposes a new caching strategy based on a proxy server and a standby proxy server. The strategy relieves the system's demand for backbone bandwidth and, in theory, effectively saves proxy cache resources and reduces users' startup latency.

10.
With the development and widespread adoption of Internet technology, streaming media has been broadly deployed on the Internet. Access to streaming objects requires a high and stable transfer rate, consumes substantial network bandwidth for long durations, can easily disrupt access to other types of files, and, when users are numerous, can overload the origin streaming server. Proxy caching helps address these problems. This paper surveys the characteristics of streaming-media proxy caching, proxy caching algorithms, evaluation metrics for streaming proxy caches, and the factors that affect their effectiveness.

11.
Proxy-caching strategies, especially prefix caching and interval caching, are commonly used in video-on-demand (VOD) systems to improve both the system performance and the playback experience of users. However, because these caching strategies are designed for homogeneous clients, they do not perform well in the real world where clients are heterogeneous (i.e., different available network bandwidths and different sizes of client-side buffers). This paper investigates the problems caused by heterogeneous client-side buffers. We analyze the theoretical performance of these caching strategies, and then, derive cost functions to measure the corresponding performance gains. Based on these analytical results, we develop a caching strategy that employs both prefix caching and interval caching to minimize the input bandwidth of a proxy. The simulation results demonstrate that the bandwidth requirements of a proxy implementing our caching strategy are significantly lower compared to adopting prefix caching or interval caching alone.
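For context on the prefix-caching half of the combined strategy above: the proxy stores the first portion of each video and streams it immediately, hiding the origin round-trip, while the remainder is fetched from the origin server. A minimal sketch, with an illustrative class and a stand-in `fetch_suffix` callback that is not from the paper:

```python
class PrefixCache:
    """Proxy-side prefix cache: keep the first prefix_len units of each
    video so playback can start from the proxy while the suffix is
    fetched from the origin server."""

    def __init__(self, prefix_len: int):
        self.prefix_len = prefix_len
        self.prefixes = {}  # video id -> list of cached leading frames

    def admit(self, video_id, frames):
        # Cache only the prefix; the suffix always comes from the origin.
        self.prefixes[video_id] = frames[: self.prefix_len]

    def serve(self, video_id, fetch_suffix):
        """Serve the cached prefix immediately, then the origin suffix.

        fetch_suffix(video_id, start) stands in for the origin request."""
        prefix = self.prefixes.get(video_id, [])
        suffix = fetch_suffix(video_id, len(prefix))
        return prefix + suffix
```

Usage: `PrefixCache(2)` with a 4-frame video serves the first two frames from the proxy and the last two from the origin; the client sees one continuous stream.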

12.
To support large-scale Video-on-Demand (VoD) services in a heterogeneous network environment, either a replication or layering approach can be deployed to adapt to heterogeneous client bandwidth requirements. With the aid of broadcasting and caching techniques, it has been shown that the overall performance of the system can be enhanced. In this paper, we explore the impact on broadcasting schemes coupled with proxy caching and develop an analytical model to evaluate system performance in a highly heterogeneous network environment. We develop guidelines for resource allocation, transmission strategies, and caching schemes under different system configurations. The model can assist system designers in studying various design options as well as performing system dimensioning. Moreover, a systematic comparison between replication and layering is performed. The results show that the system performance of layering is better than that of replication when the environment is highly heterogeneous, even when the layering overhead exceeds 25%. In addition, it is found that the system blocking probability can be further reduced by exploiting the broadcast capability of the network when the proxy server cannot store all the popular videos.

13.
Efficient web content delivery using proxy caching techniques   (Total citations: 4; self-citations: 0; citations by others: 4)
Web caching technology has been widely used to improve the performance of the Web infrastructure and reduce user-perceived network latencies. Proxy caching is a major Web caching technique that attempts to serve user Web requests from one or a network of proxies located between the end user and Web servers hosting the original copies of the requested objects. This paper surveys the main technical aspects of proxy caching and discusses recent developments in proxy caching research including caching the "uncacheable" and multimedia streaming objects, and various adaptive and integrated caching approaches.

14.
Performance enhancing proxies (PEPs) are widely used to improve the performance of TCP over high delay‐bandwidth product links and links with high error probability. In this paper we analyse the performance of using TCP connection splitting in combination with web caching via traces obtained from a commercial satellite system. We examine the resulting performance gain under different scenarios, including the effect of caching, congestion, random loss and file sizes. We show, via analysing our measurements, that the performance gain from using splitting is highly sensitive to random losses and the number of simultaneous connections, and that such sensitivity is alleviated by caching. On the other hand, the use of a splitting proxy enhances the value of web caching in that cache hits result in much more significant performance improvement over cache misses when TCP splitting is used. We also compare the performance of using different versions of HTTP in such a system. Copyright © 2003 John Wiley & Sons, Ltd.

15.
Throughput performance of wireless networks can be enhanced by applying network coding (NC) technique based on opportunistic listening. The packets sent or overheard by a network node should be locally cached for the purpose of possible future decoding. How to manage the cache to reduce the overhead incurred in performing NC and, meanwhile, exploit performance gain is an interesting issue that has not been deeply investigated. In this paper, we present a framework for packet caching policy in multihop wireless networks, aiming at improving decoding efficiency, and thus throughput gain of NC. We formulate the caching policy design as an optimization problem for maximizing decoding utility and derive a set of optimization rules. We propose a distributed network coding caching policy (NCP), which can be readily incorporated into various existing NC architectures to improve NC performance gain. We theoretically analyze the performance improvement of NCP over completely opportunistic NC (COPE). In addition, we use simulation experiments based on ns‐2 to evaluate the performance of NCP. Numerical results validate our analytical model and show that NCP can effectively improve the performance gain of NC compared with COPE. Copyright © 2013 John Wiley & Sons, Ltd.
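The dependence of NC on the local packet cache can be made concrete: a COPE-style XOR of n native packets is decodable only when n-1 of them are already cached, which is why the caching policy directly determines decoding efficiency. A simplified sketch of that decoding step (fixed-length packets, illustrative function names):

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

def decode(coded: bytes, native_ids: list, cache: dict):
    """Try to decode an XOR-coded packet using the local packet cache.

    coded is the XOR of the native packets named in native_ids. Decoding
    succeeds only if exactly one of them is missing from the cache:
    XOR-ing out every cached packet leaves the missing one.
    Returns (missing_id, recovered_packet) or None on failure.
    """
    missing = [i for i in native_ids if i not in cache]
    if len(missing) != 1:
        return None  # either nothing to recover, or not enough cached
    out = coded
    for i in native_ids:
        if i in cache:
            out = xor_bytes(out, cache[i])
    return missing[0], out
```

A caching policy like NCP aims to keep exactly those overheard packets that maximize the chance this decoding step succeeds.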

16.
陈彬强, 杨晨阳. Journal of Signal Processing (《信号处理》), 2015, 31(12): 1554-1561
Network densification is an effective way to meet the high-throughput demands of future mobile communication systems, but under heavy traffic load its throughput is severely constrained by inter-cell interference. Caching popular files at base stations reduces backhaul cost and file download time, and also makes base-station cooperation possible without high-capacity backhaul links. This paper analyzes the throughput gain that base-station cooperation can achieve once caches are deployed at small base stations, derives the average throughput of a cache-based cooperation strategy, and compares it with that of a baseline small-cell network without interference management. Analysis and simulation show that the performance gain from local caching is pronounced when the network load is high and file requests are concentrated.

17.
Most research on transcoding proxies in mobile computing environments is based on the traditional client-server architecture and does not employ the data broadcast technique. In addition, the issues of QoS provision and energy conservation are not addressed in the prior studies. In view of this, we design in this paper a QoS-aware and energy-conserving transcoding proxy by utilizing the on-demand broadcasting technique. We first propose a QoS-aware and energy-conserving transcoding proxy architecture, abbreviated as QETP, and model it as a queuing network consisting of three queues. By analyzing the queuing network, three lemmas are derived to estimate the load of these queues. We then propose a version decision policy and a service admission control scheme to provide QoS in QETP. The derived lemmas are used to guide the execution of the proposed version decision policy and service admission control scheme to achieve the given QoS requirement. In addition, we also propose a data indexing method to reduce the power consumption of clients. To measure the performance of the proposed architecture, three experiments are conducted. Experimental results show that the average access time reduction of the proposed scheme over the traditional client-server architecture ranges from 45 percent to 75 percent. Experimental results also show that the proposed scheme is more scalable than the traditional client-server architecture and is able to effectively control the system load to attain the given QoS requirements. In addition, the proposed scheme is able to greatly reduce the average tuning time of clients at the cost of a slight increase (around 5 percent in our experiments) in average access time.

18.
Scalable proxy caching of video under storage constraints   (Total citations: 10; self-citations: 0; citations by others: 10)
Proxy caching has been used to speed up Web browsing and reduce networking costs. In this paper, we study the extension of proxy caching techniques to streaming video applications. A trivial extension consists of storing complete video sequences in the cache. However, this may not be applicable in situations where the video objects are very large and proxy cache space is limited. We show that the approaches proposed in this paper (referred to as selective caching), where only a few frames are cached, can also contribute to significant improvements in the overall performance. In particular, we discuss two network environments for streaming video, namely, quality-of-service (QoS) networks and best-effort networks (Internet). For QoS networks, the video caching goal is to reduce the network bandwidth costs; for best-effort networks, the goal is to increase the robustness of continuous playback against poor network conditions (such as congestion, delay, and loss). Two different selective caching algorithms (SCQ and SCB) are proposed, one for each network scenario, to increase the relevant overall performance metric in each case, while requiring only a fraction of the video stream to be cached. The main contribution of our work is to provide algorithms that are efficient even when the buffer memory available at the client is limited. These algorithms are also scalable so that when changes in the environment occur it is possible, with low complexity, to modify the allocation of cache space to different video sequences.

19.
黄兴, 宋建新. Video Engineering (《电视技术》), 2012, 36(1): 26-29
Video transcoding is a complex process: an already-compressed bitstream must be parsed and then converted into a target-format bitstream that meets the requirements of the decoding terminal. To raise transcoding efficiency and reduce its computational complexity, this paper proposes an algorithm that exploits the massive parallel computing power of the GPU to accelerate video transcoding, based on the requirements of transcoding and the parallel architecture of graphics processors. The algorithm offloads the most time-consuming and complex stages of transcoding, motion estimation and mode decision, to the GPU for parallel execution, using NVIDIA's CUDA (Compute Unified Device Architecture) platform for general-purpose GPU computing. Experimental results show that the algorithm effectively improves the speed and efficiency of video transcoding.

20.
This paper constructs a new utility function for streaming-media caching that jointly considers program popularity and the cost parameters of the transport network, and designs NCB (Network Cost Based cache allocation and replacement algorithm), a cache allocation and replacement algorithm for multiple video servers driven by network cost. Simulation results show that NCB effectively improves the cache hit rate and reduces the total network cost of delivering streaming media, and that it performs particularly well in Internet streaming environments with complex network topologies and very large numbers of programs.
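The utility-driven replacement described above can be sketched as follows. The abstract does not give the exact utility function, so the product form utility = popularity x network cost of re-fetching is an illustrative assumption; the principle is that popular objects that are expensive to pull back over the network are the ones worth keeping:

```python
def ncb_evict(cache: dict, new_id: str, new_size: int, capacity: int,
              popularity: dict, net_cost: dict) -> None:
    """Admit new_id, then evict lowest-utility objects while over capacity.

    cache maps object id -> size.  utility(o) = popularity[o] * net_cost[o]
    (an assumption of this sketch): objects that are both popular and
    costly to re-fetch from their video server are retained.
    """
    def utility(obj_id: str) -> float:
        return popularity[obj_id] * net_cost[obj_id]

    cache[new_id] = new_size
    used = sum(cache.values())
    while used > capacity:
        victim = min(cache, key=utility)  # cheapest object to lose
        used -= cache.pop(victim)
```

Note that the new object competes on equal terms: if its own utility is lowest, it is the one evicted, which doubles as admission control.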
