Similar Articles
18 similar articles found (search time: 203 ms)
1.
A dynamic scheduling algorithm for CDN-based streaming media   Cited by: 6 (self: 1, others: 5)
杨戈  樊秀梅 《通信学报》2009,30(2):42-46
An exponentially segmented patch-block caching scheme is adopted, in which the cache window size is updated according to media popularity, so that the amount of data cached for a streaming object on the proxy server is proportional to its popularity. Simulation results show that the algorithm adapts better than the MBP (multicast batched patching) algorithm and the OBP (optimized batch patching) + prefix & patch caching algorithm: with the same maximum cache space, it significantly reduces the amount of patch data transmitted over the patch channel, thereby lowering server and backbone bandwidth usage and saving transmission cost.
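As a rough illustration of the popularity-proportional caching principle described in this abstract (this is only a sketch, not the paper's algorithm; the object names, popularity values, and cache size are hypothetical):

```python
def allocate_cache_windows(popularity, total_cache):
    """Size each object's cache window in proportion to its popularity.

    popularity: dict mapping object id -> popularity (e.g., request counts)
    total_cache: total proxy cache space in bytes
    Returns a dict mapping object id -> bytes of that object to keep cached.
    """
    total_pop = sum(popularity.values())
    if total_pop == 0:
        return {obj: 0 for obj in popularity}
    return {obj: int(total_cache * pop / total_pop)
            for obj, pop in popularity.items()}

# Hypothetical example: three streaming objects with different popularity.
windows = allocate_cache_windows({"movie_a": 60, "movie_b": 30, "movie_c": 10},
                                 total_cache=1_000_000_000)
print(windows)  # movie_a gets ~60% of the cache, movie_c only ~10%
```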

2.
A proxy caching algorithm for mobile streaming media based on segment popularity   Cited by: 1 (self: 0, others: 1)
A segment-popularity-based proxy caching algorithm for mobile streaming media, P2CAS2M2 (proxy caching algorithm based on segment popularity for mobile streaming media), is proposed. Cache admission and replacement on the proxy server are driven by the popularity of mobile streaming object segments, so that the amount of data cached for an object is proportional to its popularity, and the object's cache window size is determined dynamically from the average client access time. Simulation results show that P2CAS2M2 adapts better than A2LS (adaptive and lazy segmentation algorithm) to changes in proxy cache size: with the same cache space, it caches a larger average number of streaming objects, delays a smaller fraction of initial requests, and reduces startup latency, while its byte hit ratio approaches and even exceeds that of A2LS.

3.
戴忠  杨戈  廖建新  朱晓民  黄海 《通信学报》2008,29(3):98-103
A proactive prefetching algorithm for streaming media based on natural-number segmentation is proposed: the proxy server delivers already-cached data to users while prefetching data that has not yet been cached, which improves streaming delivery quality and reduces playback jitter. Based on the proposed natural-number segmentation, the position of the proxy's prefetching point and the minimum cache space the proxy needs for it are analyzed theoretically. Simulation shows that, with the same cache space, natural-number segmentation achieves a higher byte hit ratio and a lower proxy jitter rate than exponential segmentation, and performs close to uniform segmentation.
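For intuition only, a sketch contrasting the two segmentation schemes mentioned above, assuming exponential segments double in size while natural-number segments grow linearly (the exact segment sizes used in the paper may differ):

```python
def exponential_segments(n):
    """Segment lengths 1, 2, 4, 8, ... (assumed doubling scheme)."""
    return [2 ** i for i in range(n)]

def natural_number_segments(n):
    """Segment lengths 1, 2, 3, 4, ... (assumed linear scheme)."""
    return [i + 1 for i in range(n)]

# With the same number of cached segments, the natural-number scheme keeps the
# cached portion shorter and finer-grained, which is why its prefetch point and
# minimum buffer requirement differ from those of the exponential scheme.
print(exponential_segments(6))     # [1, 2, 4, 8, 16, 32]
print(natural_number_segments(6))  # [1, 2, 3, 4, 5, 6]
```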

4.
This paper describes a proxy-based mobile streaming system in WCDMA networks and two metrics for evaluating its proxy cache allocation algorithms: average network transmission cost and average playback startup delay at the mobile terminal. Formulas for the savings and combined savings corresponding to these metrics under the mobile batching (MBatching) transmission scheme are derived, and a cache allocation algorithm for mobile streaming systems that maximizes the total combined savings over all streaming programs is proposed. Simulation results show that, compared with other allocation algorithms, it achieves larger total combined savings, saves more network transmission cost, and attains a higher byte hit ratio.
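The abstract does not give the allocation procedure itself; as a hedged illustration, one common way to maximize a total savings value under a cache budget is a greedy allocation by savings per cached byte (the program names and savings figures below are invented, and the paper's actual algorithm may differ):

```python
def greedy_allocation(programs, cache_budget):
    """Greedily pick programs to cache so that total combined savings is large.

    programs: list of (name, size_bytes, combined_savings) tuples
    cache_budget: total proxy cache space in bytes
    """
    # Sort by savings density: savings gained per byte of cache spent.
    ranked = sorted(programs, key=lambda p: p[2] / p[1], reverse=True)
    chosen, used = [], 0
    for name, size, savings in ranked:
        if used + size <= cache_budget:
            chosen.append(name)
            used += size
    return chosen

print(greedy_allocation([("news", 200, 50), ("film", 800, 120), ("clip", 100, 40)],
                        cache_budget=1000))  # -> ['clip', 'news']
```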

5.
A survey of key technologies in streaming media distribution systems   Cited by: 14 (self: 5, others: 9)
Streaming media will be a killer application of future communications. This paper discusses the key technologies of streaming media distribution systems and surveys the state of research on CDN-based (Content Distribution Network) and P2P-based (Peer-to-Peer) streaming. For CDN-based streaming, it examines streaming scheduling algorithms, proxy caching algorithms, and CDN-based interactive operations; for P2P-based streaming, it examines data allocation algorithms, incentive mechanisms, placement of streaming objects, application-layer multicast, and P2P-based interactive operations. Future research directions for streaming media are also outlined.

6.
A P2P-cooperation-based proxy caching scheduling algorithm for streaming media   Cited by: 3 (self: 0, others: 3)
To address the limited cache space and service delay in streaming systems, this paper proposes PCSPC (Proxy-Caching Scheduler based on P2P Cooperation), a proxy caching scheduling algorithm based on P2P cooperation. First, following the principle that more popular data should occupy more storage space, storage is allocated to each prefix according to the media file's storage efficiency. Prefixes are then sorted in descending order of transmission cost and proxy servers in ascending order, and the prefixes are assigned to the proxies in turn; it is proved theoretically that this assignment effectively reduces transmission cost. PCSPC thus balances storage efficiency against transmission cost. Simulation examples demonstrate the effectiveness of the algorithm.
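A minimal sketch of the ordering step described above, assuming "transmission cost" reduces to a single number per prefix and per proxy (the paper's actual cost model and allocation are richer than this):

```python
def assign_prefixes(prefix_costs, proxy_costs):
    """Pair expensive prefixes with cheap proxies.

    prefix_costs: dict prefix_id -> cost of serving that prefix remotely
    proxy_costs:  dict proxy_id  -> cost of delivering from that proxy
    Returns a list of (prefix_id, proxy_id) assignments.
    """
    prefixes = sorted(prefix_costs, key=prefix_costs.get, reverse=True)  # descending cost
    proxies = sorted(proxy_costs, key=proxy_costs.get)                   # ascending cost
    return list(zip(prefixes, proxies))

# Hypothetical costs: the most expensive prefix lands on the cheapest proxy.
print(assign_prefixes({"p1": 9, "p2": 3, "p3": 6}, {"A": 2, "B": 5, "C": 1}))
# -> [('p1', 'C'), ('p3', 'A'), ('p2', 'B')]
```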

7.
To handle interactive streaming applications and the diverse media-format requirements of heterogeneous networks, transcoding is introduced at the proxy server, a new cache-value metric suited to transcoding is constructed, and cache replacement is carried out through a new replacement model. Simulation results show that the transcoding-aware caching algorithm meets the demand for diverse media formats and achieves a higher cache hit ratio and a lower startup-delay rate than traditional cache replacement algorithms.

8.
余红梅  樊自普 《电子测试》2010,(3):22-26,36
In video-on-demand streaming systems, existing caching strategies under the CDN architecture do not adequately address the waste of backbone bandwidth. To reduce backbone bandwidth usage, startup delay, and network load imbalance, and to better support VCR operations during playback, this paper builds on the CDN streaming architecture and combines the existing prefix caching and segment caching strategies to propose a new caching strategy based on a proxy server and a standby proxy server. The strategy relieves the system's demand for backbone bandwidth and, in theory, saves proxy cache resources and reduces the startup delay of user requests.

9.
Proxy caching is a key technique for reducing the data transmission cost of video-on-demand streaming systems, but existing caching schemes are not very efficient because they are limited by the proxy's own cache size and by the lack of cooperation between the proxy and its clients. To remedy these shortcomings of pure proxy caching, this paper proposes cooperative caching between the proxy and the clients: client cache space is pooled to enlarge the system's total cache, while the proxy server handles resource placement and coordination. Simulation results show that, compared with a pure proxy caching system, adding client cache space significantly reduces backbone bandwidth consumption and lowers the streaming data transmission cost.

10.
廖建新  杨波  朱晓民  王纯 《通信学报》2007,28(11):51-58
A two-level cache streaming architecture for mobile communication networks, 2CMSA (two-level cache mobile streaming architecture), is proposed; it overcomes the small terminal cache and narrow radio-access bandwidth that constrain mobile streaming systems. For this architecture, a two-level-cache-based mobile streaming scheduling algorithm, 2CMSS (two-level cache based mobile streaming scheduling algorithm), is designed, and a mathematical model is built to analyze its performance. Simulation shows that, compared with the original mobile streaming system, the 2CMSS scheduling algorithm effectively saves network transmission overhead and reduces user startup delay.

11.
I. Introduction Streaming media has been widely used over the Internet in recent years. However, the growing use of streaming media, which generally involves very large objects, can have a significant impact on user-perceived latency and network congestion. A popular approach to reducing response time and backbone bandwidth consumption is to deploy proxy caches at the edge of the Internet. Due to the large size and the differing popularity of different parts of a streaming video, it is not practical …

12.
Media streaming in mobile environments is becoming more and more important with the proliferation of 3G technologies and the popularity of online media services such as news clips, live sports, and hot movies. To avoid service interruptions, proper data management strategies must be adopted by all parties. We propose a two-level framework and cooperative techniques for mobile media streaming. Headlight prefetching enables cooperation among streaming access points to deal with unpredictable client movement and seamless hand-off. For each user, we maintain a virtual fan-shaped prefetching zone along the direction of movement, similar to a vehicle headlight. The overlapping area and accumulated virtual illuminance of the headlight zone on a particular cell determine the degree and volume of prefetching on that cell. Dynamic chaining facilitates cooperation among users to maximize cache utilization and streaming benefit. On receiving a request from a client, the streaming access point searches for supplying partners before resorting to a remote media server. If a qualified partner is found, the client is chained to the partner and receives subsequent segments without server intervention. The client can itself become a supplying partner for other clients, naturally forming a chain of users that are viewing and sharing the same media. Simulation results demonstrate that headlight prefetching and dynamic chaining can significantly decrease streaming disruptions, reduce bandwidth consumption, increase cache utilization, and improve service response time.

13.
This paper constructs a new cache utility function for streaming media that jointly considers program popularity and the cost parameters of the transport network, and designs a network-cost-based cache allocation and replacement algorithm for multiple video servers (Network Cost Based cache allocation and replacement algorithm, NCB). Simulation results show that NCB effectively improves the cache hit ratio and reduces the overall network cost of delivering streaming media, and that it performs well in Internet streaming environments with complex network topologies and very large numbers of programs.

14.
A mobile transmission strategy, PMPatching (Proxy-based Mobile Patching), is proposed for proxy-based mobile streaming systems in Wideband Code Division Multiple Access (WCDMA) networks. Performance of the whole system is improved by using a patching stream to transmit the initial part of the suffix that has already been played back, and by batching all requests for the suffix that arrive during the prefix period and the patching-stream transmission threshold period. Experimental results show that this strategy can efficiently reduce the average network transmission cost and the number of channels consumed at the central streaming media server.

15.
The overall architecture of a mobile streaming server system is first analyzed, with brief descriptions of the roles played by its modules, including the master control server, streaming servers, streaming proxy server, mobile data gateway, and wireless streaming terminals. The key technical problems to be solved are then analyzed in detail, such as QoS guarantees for mobile streaming transmission, the streaming transport control and session protocols, the mobile streaming proxy server, and storage scheduling strategies for the mobile streaming server. Each of these key technologies is studied in depth, giving a fairly thorough picture of the problems that mobile streaming server systems must urgently solve in 3G mobile networks.

16.
Scalable proxy caching of video under storage constraints   Cited by: 10 (self: 0, others: 10)
Proxy caching has been used to speed up Web browsing and reduce networking costs. In this paper, we study the extension of proxy caching techniques to streaming video applications. A trivial extension consists of storing complete video sequences in the cache. However, this may not be applicable in situations where the video objects are very large and proxy cache space is limited. We show that the approaches proposed in this paper (referred to as selective caching), where only a few frames are cached, can also contribute to significant improvements in the overall performance. In particular, we discuss two network environments for streaming video, namely, quality-of-service (QoS) networks and best-effort networks (Internet). For QoS networks, the video caching goal is to reduce the network bandwidth costs; for best-effort networks, the goal is to increase the robustness of continuous playback against poor network conditions (such as congestion, delay, and loss). Two different selective caching algorithms (SCQ and SCB) are proposed, one for each network scenario, to increase the relevant overall performance metric in each case, while requiring only a fraction of the video stream to be cached. The main contribution of our work is to provide algorithms that are efficient even when the buffer memory available at the client is limited. These algorithms are also scalable so that when changes in the environment occur it is possible, with low complexity, to modify the allocation of cache space to different video sequences.

17.
The development of proxy caching is essential in the area of video-on-demand (VoD) to meet users' expectations. VoD requires high bandwidth and creates high traffic due to the nature of media. Many researchers have developed proxy caching models to reduce bandwidth consumption and traffic. Proxy caching keeps part of a media object to meet the viewing expectations of users without delay and provides interactive playback. If caching is done continuously, the entire cache space will be exhausted at some stage. Hence, the proxy server must apply cache replacement policies to replace existing objects and allocate cache space for incoming objects. Researchers have developed many cache replacement policies by considering several parameters, such as recency, access frequency, cost of retrieval, and size of the object. In this paper, the Weighted-Rank Cache replacement Policy (WRCP) is proposed. This policy uses such parameters as access frequency, aging, and mean access gap ratio and such functions as size and cost of retrieval. The WRCP applies our previously developed proxy caching model, Hot-Point Proxy, at four levels of replacement, depending on the cache requirement. Simulation results show that the WRCP outperforms our earlier model, the Dual Cache Replacement Policy.
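As a rough, hypothetical sketch of how a weighted-rank replacement policy of this kind can be expressed (the weights and rank formula below are illustrative assumptions, not the WRCP formula from the paper):

```python
def weighted_rank(obj, w_freq=0.4, w_age=0.3, w_gap=0.3):
    """Combine access frequency, aging, and mean access gap into one rank,
    scaled by retrieval cost per byte; a higher rank means more worth keeping."""
    score = (w_freq * obj["access_frequency"]
             - w_age * obj["age"]
             - w_gap * obj["mean_access_gap"])
    return score * obj["retrieval_cost"] / obj["size"]

def evict_until_fits(cache, needed_space):
    """Evict the lowest-ranked objects until 'needed_space' bytes are freed.

    cache: list of dicts with the keys used by weighted_rank plus 'size'.
    """
    freed = 0
    for obj in sorted(cache, key=weighted_rank):  # lowest rank first
        if freed >= needed_space:
            break
        cache.remove(obj)
        freed += obj["size"]
    return cache
```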

18.
Introducing edge caching into fog radio access networks can effectively reduce redundant content transmission, yet existing caching strategies rarely consider the dynamic nature of already-cached content. This paper proposes a cache update algorithm based on content popularity and information freshness that fully accounts for user mobility and the spatio-temporal dynamics of content popularity, and uses the Age of Information (AoI) to drive dynamic content updates. First, the algorithm predicts each user's location in the next time period from historical location information using a bidirectional long short-term memory network (Bi-LSTM). Second, the predicted locations are combined with user preference models to obtain the content popularity of each location area, and content is then cached at the fog access points accordingly. Third, given the AoI requirements of the cached content and its popularity distribution, the cache update window is set dynamically to achieve timely, low-latency content caching. Simulation results show that the proposed algorithm effectively improves the content cache hit ratio and minimizes the average service delay of cached content while keeping the information fresh.
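A small sketch of the AoI-driven update idea, assuming each cached item records when its content was generated and has a freshness limit derived from its popularity (the Bi-LSTM location prediction and preference model are omitted; all names and the inverse popularity-window relation below are assumptions for illustration only):

```python
import time

def needs_refresh(item, now=None):
    """Refresh a cached item when its Age of Information exceeds its limit.

    item: dict with 'generated_at' (timestamp) and 'aoi_limit' (seconds).
    """
    now = time.time() if now is None else now
    age = now - item["generated_at"]
    return age > item["aoi_limit"]

def update_window(popularity, base_window=60.0):
    """More popular content gets a shorter update window (a fresher cache);
    this inverse relation is an assumption, not the paper's exact rule."""
    return base_window / (1.0 + popularity)

item = {"generated_at": time.time() - 90, "aoi_limit": update_window(popularity=2.0)}
print(needs_refresh(item))  # True: the item is 90 s old but its window is only 20 s
```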
