A total of 20 similar documents were found; search time: 109 ms.
1.
To improve the overall performance of proxy systems, this paper proposes a new distributed proxy caching system, the two-tier cache cluster, based on the locality and similarity of access times among internal-network users and building on existing distributed caching systems. The system is divided into an intra-network cluster cache tier and a proxy cluster cache tier. This two-tier proxy caching structure makes full use of existing internal-network resources and spreads the proxy load, reducing communication overhead between proxies while also improving the utilization of cache resources...
2.
3.
This paper introduces a trend in web caching: web cache cooperation. It explains the problems that cache cooperation must solve, surveys several cooperation models and their representative protocols, and presents several approaches to proxy pruning.
4.
Implementation Approaches to Web Cache Cooperation  Cited by: 1 (self-citations: 0, by others: 1)
This paper introduces a trend in web caching: web cache cooperation. It explains the problems that cache cooperation must solve, surveys several cooperation models and their representative protocols, and presents several approaches to proxy pruning.
5.
Virtualization is widely used in malware analysis systems, and virtual machine (VM) detection, as an anti-analysis technique, matters to both malware authors and security researchers. To describe and explore VM detection methods, this paper presents the basic idea of VM detection and reviews several existing techniques. Aiming for generality, it proposes a VM detection method based on differences in CPU cache operating modes: the current environment is judged to be virtual or physical by comparing instruction execution efficiency with the CPU cache enabled versus disabled. Experiments show that on a physical machine, disabling the CPU cache significantly degrades instruction execution efficiency, whereas in a virtual environment it has no significant effect. The results demonstrate that exploiting this difference in CPU cache behavior between physical and virtual environments is a feasible way to detect virtual machines.
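The detection rule described above reduces to comparing two timing measurements. A minimal sketch of that comparison logic follows; the function name and the 5x slowdown threshold are our illustrative choices, and actually gathering the timings requires ring-0 access to toggle the cache-disable bit (CR0.CD on x86), which is outside the scope of this sketch:

```python
def likely_virtualized(t_cache_on, t_cache_off, slowdown_threshold=5.0):
    """Guess whether the environment is a VM from two timing samples.

    On bare metal, disabling the CPU cache makes the same instruction
    sequence run dramatically slower; many hypervisors do not honor the
    disable request, so the observed slowdown stays small.
    """
    slowdown = t_cache_off / t_cache_on
    return slowdown < slowdown_threshold  # small slowdown -> likely a VM
```

For example, timings of 1.0 s with the cache on and 1.2 s with the cache "off" would suggest a VM, while 1.0 s versus 50.0 s would suggest bare metal.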
6.
A Novel P2P-Based CDN Network and Its Cache Replacement Algorithm  Cited by: 1 (self-citations: 0, by others: 1)
This paper analyzes the characteristics of content delivery networks (CDNs) and P2P networks, presents the architecture of an autonomous caching system for a novel P2P-based CDN, formulates the intelligent cache replacement problem within an autonomous caching region, and proposes an intelligent cache replacement method together with a dual-keyword cache replacement algorithm. Simulation experiments show that a replacement key can be found that achieves a high hit rate at low computational complexity.
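The abstract does not specify which two keys the dual-keyword algorithm uses. As an illustration only, the sketch below evicts by access frequency as the primary key and last-access time as the tie-breaker; the class name and the choice of keys are our assumptions, not the paper's:

```python
import time

class DualKeyCache:
    """Toy cache that evicts by a primary key (access frequency)
    and breaks ties with a secondary key (last-access time)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}  # obj_id -> (freq, last_access, value)

    def get(self, obj_id):
        if obj_id not in self.store:
            return None
        freq, _, value = self.store[obj_id]
        self.store[obj_id] = (freq + 1, time.monotonic(), value)
        return value

    def put(self, obj_id, value):
        if obj_id not in self.store and len(self.store) >= self.capacity:
            # Evict the entry with the lowest (frequency, recency) pair.
            victim = min(self.store,
                         key=lambda k: (self.store[k][0], self.store[k][1]))
            del self.store[victim]
        freq, _, _ = self.store.get(obj_id, (0, 0, None))
        self.store[obj_id] = (freq + 1, time.monotonic(), value)
```

With capacity 2, inserting `a` and `b`, touching `a`, then inserting `c` evicts `b`, the least-frequently-used object.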
7.
8.
9.
In video-on-demand streaming systems, existing caching strategies under the CDN architecture do not adequately address the waste of backbone bandwidth. To reduce backbone bandwidth consumption, startup delay, and network load imbalance, and to better support VCR operations during playback, this paper builds on the CDN streaming architecture and combines existing prefix caching and segment caching strategies into a new caching strategy based on a proxy server and a standby proxy server. The strategy relieves the system's demand on backbone bandwidth and, in theory, saves proxy cache resources and reduces the startup delay of user requests.
10.
With the development and widespread adoption of Internet technology, streaming media has come into broad use on the Internet. Access to streaming objects requires high and stable transfer rates, consumes substantial network bandwidth over long durations, can interfere with access to other types of files, and, when users are numerous, can overload the origin streaming server. Proxy caching helps address these problems. This paper surveys the characteristics of streaming proxy caching, its algorithms, its evaluation metrics, and the factors that affect its effectiveness.
11.
Chen-Lung Chan, Shih-Yu Huang, Jia-Shung Wang. IEEE Transactions on Communications, 2007, 55(11): 2142-2151
Proxy-caching strategies, especially prefix caching and interval caching, are commonly used in video-on-demand (VOD) systems to improve both the system performance and the playback experience of users. However, because these caching strategies are designed for homogeneous clients, they do not perform well in the real world where clients are heterogeneous (i.e., different available network bandwidths and different sizes of client-side buffers). This paper investigates the problems caused by heterogeneous client-side buffers. We analyze the theoretical performance of these caching strategies, and then, derive cost functions to measure the corresponding performance gains. Based on these analytical results, we develop a caching strategy that employs both prefix caching and interval caching to minimize the input bandwidth of a proxy. The simulation results demonstrate that the bandwidth requirements of a proxy implementing our caching strategy are significantly lower compared to adopting prefix caching or interval caching alone.
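Interval caching, one of the two strategies this paper builds on, keeps the window of data between two consecutive requests for the same video so that the later stream reads from the proxy's cache instead of the origin server. A toy feasibility check for admitting such an interval (the function name and unit choices are ours, not the paper's):

```python
def can_serve_from_interval(t_prev, t_new, bitrate_kbps, free_cache_kb):
    """Decide whether the data between two requests for the same video
    fits in the proxy's free cache, so the later request can be served
    entirely from cache (the core admission test of interval caching)."""
    interval_kb = (t_new - t_prev) * bitrate_kbps / 8  # seconds * kbit/s -> KB
    return interval_kb <= free_cache_kb
```

For an 800 kbit/s video, two requests 10 s apart need 1000 KB of cache, so 2000 KB of free space suffices; requests 60 s apart need 6000 KB and would not fit.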
12.
Performance Study of Large-Scale Video Streaming Services in Highly Heterogeneous Environment  Cited by: 1 (self-citations: 0, by others: 1)
To support large-scale Video-on-Demand (VoD) services in a heterogeneous network environment, either a replication or layering approach can be deployed to adapt the client bandwidth requirements. With the aid of the broadcasting and caching techniques, it has been proved that the overall performance of the system can be enhanced. In this paper, we explore the impact on the broadcasting schemes coupled with proxy caching and develop an analytical model to evaluate the system performance in a highly heterogeneous network environment. We develop guidelines for resources allocation, transmission strategies as well as caching schemes under different system configurations. The model can assist system designers to study various design options as well as perform system dimensioning. Moreover, a systematic comparison between replication and layering is performed. From the results, it can be seen that the system performance of layering is better than that of replication when the environment is highly heterogeneous even if the layering overhead is higher than 25%. In addition, it is found that the system blocking probability can be further reduced by exploring the broadcast capability of the network if the proxy server cannot store all the popular videos.
13.
Efficient web content delivery using proxy caching techniques  Cited by: 4 (self-citations: 0, by others: 4)
D. Zeng, Fei-Yue Wang, Mingkuan Liu. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 2004, 34(3): 270-280
Web caching technology has been widely used to improve the performance of the Web infrastructure and reduce user-perceived network latencies. Proxy caching is a major Web caching technique that attempts to serve user Web requests from one or a network of proxies located between the end user and Web servers hosting the original copies of the requested objects. This paper surveys the main technical aspects of proxy caching and discusses recent developments in proxy caching research including caching the "uncacheable" and multimedia streaming objects, and various adaptive and integrated caching approaches.
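As a concrete reference point for the techniques this survey covers, here is a minimal size-aware LRU object cache of the kind a proxy might use as a baseline replacement policy. This is a generic sketch, not any specific scheme from the survey:

```python
from collections import OrderedDict

class LRUProxyCache:
    """Minimal size-aware LRU web-object cache, a common baseline
    replacement policy in proxy caching."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.objects = OrderedDict()  # url -> (size, body), oldest first

    def fetch(self, url, origin_fetch):
        if url in self.objects:
            self.objects.move_to_end(url)     # cache hit: refresh recency
            return self.objects[url][1]
        body = origin_fetch(url)              # cache miss: go to origin
        size = len(body)
        while self.used + size > self.capacity and self.objects:
            _, (old_size, _) = self.objects.popitem(last=False)  # evict LRU
            self.used -= old_size
        if size <= self.capacity:
            self.objects[url] = (size, body)
            self.used += size
        return body
```

Repeated fetches of the same URL hit the cache, and inserting past capacity evicts the least-recently-used object first.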
14.
Navid Ehsan, Mingyan Liu, Roderick J. Ragland. International Journal of Communication Systems, 2003, 16(6): 513-534
Performance enhancing proxies (PEPs) are widely used to improve the performance of TCP over high delay-bandwidth product links and links with high error probability. In this paper we analyse the performance of using TCP connection splitting in combination with web caching via traces obtained from a commercial satellite system. We examine the resulting performance gain under different scenarios, including the effect of caching, congestion, random loss and file sizes. We show, via analysing our measurements, that the performance gain from using splitting is highly sensitive to random losses and the number of simultaneous connections, and that such sensitivity is alleviated by caching. On the other hand, the use of a splitting proxy enhances the value of web caching in that cache hits result in much more significant performance improvement over cache misses when TCP splitting is used. We also compare the performance of using different versions of HTTP in such a system. Copyright © 2003 John Wiley & Sons, Ltd.
15.
A framework of data caching for improving decoding efficiency of opportunistic network coding
Throughput performance of wireless networks can be enhanced by applying network coding (NC) technique based on opportunistic listening. The packets sent or overheard by a network node should be locally cached for the purpose of possible future decoding. How to manage the cache to reduce the overhead incurred in performing NC and, meanwhile, exploit performance gain is an interesting issue that has not been deeply investigated. In this paper, we present a framework for packet caching policy in multihop wireless networks, aiming at improving decoding efficiency, and thus throughput gain of NC. We formulate the caching policy design as an optimization problem for maximizing decoding utility and derive a set of optimization rules. We propose a distributed network coding caching policy (NCP), which can be readily incorporated into various existing NC architectures to improve NC performance gain. We theoretically analyze the performance improvement of NCP over completely opportunistic NC (COPE). In addition, we use simulation experiments based on ns-2 to evaluate the performance of NCP. Numerical results validate our analytical model and show that NCP can effectively improve the performance gain of NC compared with COPE. Copyright © 2013 John Wiley & Sons, Ltd.
16.
17.
Jiun-Long Huang, Ming-Syan Chen. IEEE Transactions on Mobile Computing, 2007, 6(8): 971-987
Most research works in transcoding proxies in mobile computing environments are on the basis of the traditional client-server architecture and do not employ the data broadcast technique. In addition, the issues of QoS provision and energy conservation are also not addressed in the prior studies. In view of this, we design in this paper a QoS-aware and energy-conserving transcoding proxy by utilizing the on-demand broadcasting technique. We first propose a QoS-aware and energy-conserving transcoding proxy architecture, abbreviated as QETP, and model it as a queuing network consisting of three queues. By analyzing the queuing network, three lemmas are derived to estimate the load of these queues. We then propose a version decision policy and a service admission control scheme to provide QoS in QETP. The derived lemmas are used to guide the execution of the proposed version decision policy and service admission control scheme to achieve the given QoS requirement. In addition, we also propose a data indexing method to reduce the power consumption of clients. To measure the performance of the proposed architecture, three experiments are conducted. Experimental results show that the average access time reduction of the proposed scheme over the traditional client-server architecture ranges from 45 percent to 75 percent. Experimental results also show that the proposed scheme is more scalable than the traditional client-server architecture and is able to effectively control the system load to attain the given QoS requirements. In addition, the proposed scheme is able to greatly reduce the average tuning time of clients at the cost of a slight increase (around 5 percent in our experiments) in average access time.
18.
Scalable proxy caching of video under storage constraints  Cited by: 10 (self-citations: 0, by others: 10)
Proxy caching has been used to speed up Web browsing and reduce networking costs. In this paper, we study the extension of proxy caching techniques to streaming video applications. A trivial extension consists of storing complete video sequences in the cache. However, this may not be applicable in situations where the video objects are very large and proxy cache space is limited. We show that the approaches proposed in this paper (referred to as selective caching), where only a few frames are cached, can also contribute to significant improvements in the overall performance. In particular, we discuss two network environments for streaming video, namely, quality-of-service (QoS) networks and best-effort networks (Internet). For QoS networks, the video caching goal is to reduce the network bandwidth costs; for best-effort networks, the goal is to increase the robustness of continuous playback against poor network conditions (such as congestion, delay, and loss). Two different selective caching algorithms (SCQ and SCB) are proposed, one for each network scenario, to increase the relevant overall performance metric in each case, while requiring only a fraction of the video stream to be cached. The main contribution of our work is to provide algorithms that are efficient even when the buffer memory available at the client is limited. These algorithms are also scalable so that when changes in the environment occur it is possible, with low complexity, to modify the allocation of cache space to different video sequences.
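The SCQ and SCB algorithms in this paper select frames according to network-specific cost metrics. As a much simpler stand-in that only illustrates the core idea of caching a fraction of a stream under a storage budget, the sketch below picks the densest uniformly spaced subset of frames that fits; this is entirely our toy scheme, not SCQ or SCB:

```python
def select_frames(frame_sizes, budget):
    """Uniform selective caching: find the smallest stride k (densest
    uniform selection) such that caching every k-th frame fits the
    storage budget, and return the chosen frame indices."""
    n = len(frame_sizes)
    for k in range(1, n + 1):
        chosen = list(range(0, n, k))
        if sum(frame_sizes[i] for i in chosen) <= budget:
            return chosen
    return []  # even a single frame does not fit
```

With ten 10-unit frames and a budget of 40, every third frame is cached (indices 0, 3, 6, 9), i.e., 40 percent of the stream.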
19.
Video transcoding is a complex process: an already-compressed bitstream must be parsed and then converted into a target-format bitstream that meets the requirements of the decoding terminal. To raise transcoding efficiency and lower its computational complexity, this paper proposes an algorithm that exploits the massive parallel computing power of the GPU to accelerate transcoding, designed around the requirements of video transcoding and the parallel architecture of graphics processors. The algorithm offloads the most time-consuming and complex stages of transcoding, motion estimation and mode decision, to the GPU for parallel execution, using NVIDIA's CUDA (Compute Unified Device Architecture) platform for general-purpose GPU computing. Experimental results show that the algorithm effectively improves transcoding speed and efficiency.
20.
This paper constructs a new streaming-media cache utility function that jointly considers program popularity and the cost parameters of the delivery network, and designs a network-cost-based cache allocation and replacement algorithm (NCB) for multiple video servers. Simulation results show that NCB effectively improves the cache hit rate and reduces the total network cost of delivering streaming media, and that the algorithm performs particularly well in Internet streaming environments with complex network topologies and very large program populations.
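One plausible reading of the NCB idea, for illustration only: score each video by popularity times the network cost of fetching it remotely, normalize by its size, and fill the cache greedily by that utility density. The exact utility function, field names, and greedy allocation below are our assumptions, not the paper's:

```python
def cache_allocation(videos, capacity):
    """Greedy cache allocation by utility density: a video's utility is
    approximated as popularity * network cost of a remote fetch, divided
    by its size; highest-density videos are cached first."""
    ranked = sorted(
        videos,
        key=lambda v: v["popularity"] * v["net_cost"] / v["size"],
        reverse=True,
    )
    chosen, used = [], 0
    for v in ranked:
        if used + v["size"] <= capacity:
            chosen.append(v["id"])
            used += v["size"]
    return chosen
```

A popular video that is expensive to fetch over the network is cached ahead of a cheap-to-fetch or rarely requested one, which is the qualitative behavior the utility function above is meant to capture.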