Similar Literature
A total of 20 similar documents were retrieved (search time: 125 ms)
1.
As broadband access becomes ever more widespread, downloading favorite movies and songs has become a new pleasure of going online, and P2P software is the best tool for doing so. This article introduces one such P2P application, eMule (Chinese name 电骡子). With eMule, users can share the applications, games, movies, and music on their computers with other users who need them, and can likewise download the files that other eMule users have shared.

2.
Based on the Cloud-P2P cloud storage architecture, and considering the access mechanism for data replicas at the cloud-center and P2P node storage layers as well as the replica repair process at the node storage layer, a three-dimensional continuous-time Markov chain model is established. The steady-state solution of the model is derived with the matrix-geometric method, and expressions are given for system performance measures such as the transfer rate of the node storage layer, the data access delay, and the replica repair rate. Numerical experiments and system simulations quantify how system parameters, such as the number of data replicas, affect the performance of the Cloud-P2P architecture. A profit function is then constructed to optimize the number of replicas at the user storage layer.
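As a hedged illustration of the modeling step (not the paper's three-dimensional Cloud-P2P model), the sketch below solves the steady-state distribution of a small continuous-time Markov chain directly from its generator matrix; the generator values are made-up placeholders.

```python
import numpy as np

# Illustrative sketch only: steady-state distribution of a small CTMC.
# The 3x3 generator matrix Q is a made-up placeholder, not the cited
# paper's three-dimensional Cloud-P2P model.
Q = np.array([
    [-0.5,  0.3,  0.2],
    [ 0.4, -0.7,  0.3],
    [ 0.1,  0.6, -0.7],
])

# Solve pi @ Q = 0 with sum(pi) = 1 by replacing one balance equation
# with the normalization condition.
A = np.vstack([Q.T[:-1], np.ones(Q.shape[0])])
b = np.zeros(Q.shape[0])
b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]
print("steady-state probabilities:", pi)
```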

3.
Research on load-balancing techniques for KAD networks   Cited by: 1 (self-citations: 1, other citations: 0)
Owing to their particular application environments and the heterogeneity of network nodes, most DHT networks suffer from load imbalance. Taking the KAD network of eMule, which has a very large user base, as the research object, actual measurements show that, because keywords are used with different frequencies, file index information is distributed unevenly across the KAD network, which degrades normal resource publishing and search. To address this problem, this paper proposes a KAD index publishing mechanism based on multiple target IDs: by letting more nodes take responsibility for file indexes that contain high-frequency keywords, the load of file index resources in the KAD network is balanced more evenly. Simulation experiments demonstrate the effectiveness of the method.
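A hedged sketch of the multi-target-ID idea (the hash function and the "keyword#i" salting scheme are assumptions for illustration, not the paper's exact mechanism or KAD's native hashing): a high-frequency keyword is published under several derived target IDs so that its index entries are spread over more nodes.

```python
import hashlib

def target_ids(keyword: str, copies: int) -> list[str]:
    """Derive several DHT target IDs for one keyword.

    Illustrative sketch only: the 'keyword#i' salting and SHA-1 hashing are
    assumptions, not the exact mechanism of the cited paper or of KAD.
    """
    return [hashlib.sha1(f"{keyword}#{i}".encode()).hexdigest()
            for i in range(copies)]

# Publish a popular (high-frequency) keyword under more target IDs than a
# rare one, so more nodes share its index load.
print(target_ids("music", copies=4))        # popular keyword -> 4 index locations
print(target_ids("obscure-term", copies=1)) # rare keyword -> 1 index location
```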

4.
赵英  李侠 《现代电子技术》2009,32(13):100-102,112
With the rapid development of computer and network technology, research on large-capacity distributed storage systems has become a hot topic. In distributed storage systems, replication is one of the most commonly used distributed data management mechanisms: by adding replicas of files and keeping redundant file data, file availability and reliability can be improved very effectively. This paper proposes a P2P model with super nodes that organizes a large number of scattered storage nodes into a logical storage network. Super nodes not only store replicas of "hot" files but also back up data with RAID technology to further improve storage resource utilization and system reliability. To cope with the laziness of ordinary nodes, data are replicated with a square-root replication strategy according to the probability that a file is queried, which further improves space utilization while keeping the download success rate high.
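A hedged sketch of square-root replication (an illustrative allocation under a replica budget, not the paper's full algorithm): each file receives a number of replicas proportional to the square root of its query probability.

```python
import math

def square_root_allocation(query_prob: dict[str, float], budget: int) -> dict[str, int]:
    """Allocate a replica budget proportionally to sqrt(query probability).

    Illustrative sketch only; the rounding rule and the minimum of one
    replica per file are simplifying assumptions.
    """
    weights = {f: math.sqrt(p) for f, p in query_prob.items()}
    total = sum(weights.values())
    return {f: max(1, round(budget * w / total)) for f, w in weights.items()}

# Example: three files with skewed query probabilities and 20 replica slots.
print(square_root_allocation({"a.mp4": 0.7, "b.mp4": 0.2, "c.mp4": 0.1}, budget=20))
```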

5.
The size of the file library has an important impact on the performance of wireless edge caching. Based on the Zipf file popularity model, this paper analyzes the asymptotic relationship between the library size and factors such as the number of user requests and the popularity parameter. The results show that when the number of requests is small, the library size grows linearly with the number of requests, whereas when the number of requests is large, the library size grows with a negative-exponential (saturating) trend; this conclusion holds for different popularity parameters. Simulations confirm the accuracy of the analysis, and request traces of a typical video website collected on a campus network verify its validity. In practical systems, the number of user requests within the coverage of a wireless edge node is usually far smaller than the total number of files on a video website, so the size of the requested file library grows linearly with the number of requests rather than sublinearly as conjectured in the literature, which poses a serious challenge to improving wireless edge caching performance.
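A hedged sketch under the Zipf model (an illustrative calculation, not the paper's asymptotic derivation): the expected number of distinct files touched by N independent requests grows roughly linearly for small N and saturates toward the catalog size as N grows.

```python
import numpy as np

def expected_library_size(num_files: int, num_requests: int, alpha: float) -> float:
    """Expected number of distinct files touched by independent Zipf(alpha) requests.

    Illustrative sketch: E[distinct] = sum_i (1 - (1 - p_i)^N), with p_i the
    Zipf popularity of file i. Not the cited paper's exact analysis.
    """
    ranks = np.arange(1, num_files + 1)
    p = ranks ** (-alpha)
    p /= p.sum()
    return float(np.sum(1.0 - (1.0 - p) ** num_requests))

# Few requests: nearly linear growth; many requests: saturation toward the catalog size.
for n in (10, 100, 10_000):
    print(n, round(expected_library_size(num_files=1000, num_requests=n, alpha=0.8), 1))
```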

6.
A distributed storage system is implemented on top of SP_Route, a peer-to-peer routing and location algorithm based on node classification. By adopting a scalable architecture, a stable communication protocol and messaging mechanism, and a concise way of organizing files and constructing nodes, a P2P overlay is layered on top of the physical network, connecting the physically distributed storage resources contributed by individual nodes into a file storage system that is transparent to users. The system can locate files and route requests quickly and provides users with relatively stable storage services.

7.
To address the privacy leakage caused by the separation of ownership and management of user data shared in public cloud environments, this paper combines symmetric encryption, attribute-based encryption, and replica location techniques, and proposes a scheme for secure sharing and associated deletion of multiple data replicas in the cloud. User data are encrypted and packaged into a replication associated object (RAO), which is then shared to the cloud service provider; a replica association model manages the replicas generated from the RAO and carries out associated deletion. Analysis shows that the scheme is secure and effective: it supports secure sharing and associated deletion of user data and their replicas, effectively protecting the privacy of multi-replica data.

8.
黄昌勤  李源  吴洪艳  汤庸  罗旋 《通信学报》2014,35(10):11-97
Based on an analysis of the reliability factors of data nodes and network links, a data replica service reliability model for cloud storage systems is proposed. According to the relationship between access reliability, the number of data replicas, and user access volume, methods are designed for determining data service reliability, the timing of replica creation, and the selection of storage nodes, and replica distribution and deletion algorithms are implemented. A series of experiments on the cloud storage system ERS-Cloud shows that the method effectively guarantees the reliability of data services while further reducing the number of redundantly stored replicas.
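As a hedged illustration of the reliability/replica-count trade-off (a textbook independent-failure model, not the paper's model, which also accounts for link reliability and access volume): if each replica is available independently with probability p, the smallest replica count meeting a target availability R follows directly.

```python
import math

def min_replicas(node_availability: float, target_reliability: float) -> int:
    """Smallest k with 1 - (1 - p)^k >= R, assuming independent replica failures.

    Illustrative sketch only; not the cited paper's reliability model.
    """
    p, r = node_availability, target_reliability
    return math.ceil(math.log(1.0 - r) / math.log(1.0 - p))

print(min_replicas(node_availability=0.9, target_reliability=0.99999))  # -> 5
```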

9.
A load balancing method is proposed for structured P2P protocols whose routing tables are of size O(logN). The method uses a load-aware passive routing-table maintenance algorithm and routing algorithm to increase the probability that lightly loaded nodes serve as routing relays, and a caching mechanism to reduce the request load on nodes that host hot files. Experimental results show that, when user queries follow a Zipf distribution, the method achieves good load balance across the system.

10.
A replica placement strategy based on random-graph search is proposed, and the influence of the number of service-acquisition hops and the number of replicas on minimizing system traffic is studied. It is found that when the network is sufficiently large, setting the number of VoD replicas according to a square-root rule with respect to the request rate maximizes the savings in system bandwidth; in other cases, replicating in linear proportion to the request rate yields good results. Simulation and analysis show that this work effectively improves the hit rate of resource search in distributed networks, reduces overall network overhead, and balances the load across nodes.

11.
Han  Zhijie  Ma  Ji_ao  He  Xin  Fan  Weibei 《Journal of Signal Processing Systems》2019,91(10):1149-1157

It is generally accepted that the Zipf distribution describes the access pattern of text-based Web content. However, with the dramatic increase of VoD media traffic on the Internet, such as Flash P2P traffic, the inconsistency between the access patterns of media objects and the Zipf model has been studied by many researchers. In this paper, we study a large variety of media workloads collected from both browser and server sides in Adobe Flash P2P systems deployed on sites such as Youku and YouTube. Through extensive analysis and modeling, we find that the object reference ranks of all these workloads follow the logistic (LOG) distribution despite their different media systems and delivery methods, which means they do not exhibit the long-tail effect. Furthermore, we construct mathematical models of the access pattern of Flash P2P traffic. Analyzing this model of media traffic access makes it possible to better describe the users' access mode, and the model is well suited to the configuration and allocation of network resources, which can then be used more efficiently.
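A hedged sketch of the kind of comparison described (illustrative curve fitting on synthetic data, not the paper's measurements or exact model): fit both a Zipf power law and a logistic-shaped curve to rank-frequency data and compare the fits in log space.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative sketch: compare a Zipf (power-law) fit with a logistic-shaped
# fit on synthetic rank-frequency data. The data and functional forms are
# assumptions, not the cited paper's measurements or model.
ranks = np.arange(1, 201, dtype=float)
freqs = 1e4 / (1.0 + np.exp(0.05 * (ranks - 40)))   # synthetic, logistic-like

def zipf(r, c, alpha):
    return c * r ** (-alpha)

def logistic(r, c, k, r0):
    return c / (1.0 + np.exp(k * (r - r0)))

for name, f, p0 in (("zipf", zipf, (1e4, 1.0)),
                    ("logistic", logistic, (1e4, 0.05, 40.0))):
    params, _ = curve_fit(f, ranks, freqs, p0=p0, maxfev=10000)
    rmse = np.sqrt(np.mean((np.log(f(ranks, *params)) - np.log(freqs)) ** 2))
    print(name, "log-RMSE:", round(rmse, 3))
```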


12.
Centralized group key establishment protocols are the most commonly used type because of their efficiency in computation and communication. In such protocols, a key generation center (KGC) acts as a server that initially registers users. Since the KGC selects the group key for group communication, all users must trust the KGC. Requiring a mutually trusted KGC can be problematic in some applications; for example, users in a social network cannot trust the network server to select a group key for secure group communication. In this paper, we remove the need for a mutually trusted KGC by assuming that each user trusts only himself. During registration, each user acts as a KGC to register the other users and issue sub-shares to them. By the secret sharing homomorphism, all sub-shares held by a user can be combined into a master share. The master share enables a pairwise shared key between any pair of users, and a verification of master shares lets all users check that their master shares were generated consistently without revealing them. In a group communication, the initiator becomes the server that selects a group key and distributes it to every other user over a pairwise shared channel. Our design is unique in that the storage per user is minimal, the verification of master shares is efficient, and group key distribution is centralized. Public-key-based group key establishment protocols without a trusted third party exist, but they can establish only a single group key; our protocol is a non-public-key solution that can establish multiple group keys and is computationally efficient.
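A hedged sketch of the secret-sharing homomorphism the abstract relies on (a minimal additive-sharing illustration over a prime modulus, not the paper's actual protocol): sub-shares issued by different users can be summed into a single master share per user, and the master shares still jointly encode the sum of the underlying secrets.

```python
import random

P = 2_147_483_647  # prime modulus; illustrative choice, not from the paper

def additive_shares(secret: int, n: int) -> list[int]:
    """Split a secret into n additive shares modulo P (illustrative only)."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

# Each of 3 users acts as a dealer for its own secret and issues one
# sub-share to every user (including itself).
secrets = [11, 22, 33]
n = len(secrets)
sub_shares = [additive_shares(s, n) for s in secrets]   # sub_shares[dealer][user]

# Homomorphism: each user sums the sub-shares it received into a master share.
master = [sum(sub_shares[d][u] for d in range(n)) % P for u in range(n)]

# The master shares jointly encode the sum of all secrets.
assert sum(master) % P == sum(secrets) % P
print("master shares:", master)
```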

13.
Wireless Personal Communications - Peer-to-peer (P2P) networks are distributed systems where each user shares his resources and cooperates with other users. These networks are designed over...

14.
Analyzing peer-to-peer traffic across large networks   Cited by: 14 (self-citations: 0, other citations: 14)
The use of peer-to-peer (P2P) applications is growing dramatically, particularly for sharing large video/audio files and software. In this paper, we analyze P2P traffic by measuring flow-level information collected at multiple border routers across a large ISP network, and report our investigation of three popular P2P systems: FastTrack, Gnutella, and Direct-Connect. We characterize the P2P traffic observed at a single ISP and its impact on the underlying network. We observe a very skewed distribution of the traffic across the network at different levels of spatial aggregation (IP, prefix, AS). All three P2P systems exhibit significant dynamics at short time scales, particularly at the IP address level. Still, the fraction of P2P traffic contributed by each prefix is more stable than the corresponding distribution of either Web traffic or overall traffic. The high volume and good stability properties of P2P traffic suggest that the P2P workload is a good candidate for being managed via application-specific layer-3 traffic engineering in an ISP's network.

15.
We address the problem of achieving outage probability constraints on the uplink of a code-division multiple-access (CDMA) system employing power control and linear multiuser detection, where we aim to minimize the total expended power. We propose a generalized framework for solving such problems under modest assumptions on the underlying channel fading distribution. Unlike previous work, which dealt with a Rayleigh fast-fading model, we allow each user to have a different fading distribution. We show how this problem can be formed as an optimization over user transmit powers and linear receivers, and, where the problem is feasible, we provide conceptually simple iterative algorithms that find the minimum power solution while achieving outage specifications with equality. We further generalize a mapping from outage probability specifications to average signal-to-interference-ratio constraints that was previously applicable only to Rayleigh-faded channels. This mapping allows us to develop suboptimal, computationally efficient algorithms to solve the original problem. Numerical results are provided that validate the iterative schemes, showing the closeness of the optimal and mapped solutions, even under circumstances where the map does not guarantee that constraints will be achieved.

16.

A P2P (peer-to-peer) network is a distributed system built on top of IP-based networks, in which independent nodes join and leave the network of their own accord. Files (resources) are shared in a distributed manner, and each participating node is expected to share its resources. Some files in P2P networks are accessed frequently by many users; such files are called popular files. Replicating popular files at different nodes in structured P2P networks significantly reduces resource lookup cost. Most schemes for resource access in structured P2P networks are governed by a DHT (Distributed Hash Table) or by DHT-based protocols such as Chord. Chord is a widely accepted protocol among structured P2P networks because of its simple design and robust characteristics. However, Chord and other resource access protocols in structured P2P networks do not exploit the cardinality of replicated files to enhance lookup performance. In this paper, we exploit the cardinality of replicated files and propose a resource-cardinality-based scheme to improve resource lookup performance in structured P2P networks. We also propose a trustworthiness factor for judging the reliability of a donor node. Analytical modelling and simulation analysis indicate that the proposed scheme performs better than the existing Chord and PCache protocols.


17.
An adaptive network prefetch scheme   Cited by: 9 (self-citations: 0, other citations: 9)
In this paper, we present an adaptive prefetch scheme for network use, in which we download files that are very likely to be requested in the near future, based on the user access history and the network conditions. Our prefetch scheme consists of two parts: a prediction module and a threshold module. In the prediction module, we estimate the probability with which each file will be requested in the near future. In the threshold module, we compute the prefetch threshold for each related server, the idea being that the access probability is compared to the prefetch threshold. An important contribution of this paper is that we derive a formula for the prefetch threshold that determines its value dynamically based on system load, capacity, and the cost of time and system resources to the user. We also show that by prefetching those files whose access probability is greater than or equal to their server's prefetch threshold, a lower average cost can always be achieved. As an example, we present a prediction algorithm for web browsing. Simulations of this prediction algorithm show that, by using access information from the client, we can achieve high prediction success rates, while using information from the server generally results in more hits.
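A hedged sketch of the threshold rule described in the abstract (the probability values and per-server thresholds below are made-up placeholders, and the threshold derivation from load, capacity, and cost is omitted): prefetch exactly those files whose predicted access probability is at least their server's prefetch threshold.

```python
# Illustrative sketch of the prefetch decision rule: fetch a file in advance
# only when its predicted access probability meets its server's threshold.
# All values below are made-up placeholders, not from the cited paper.

access_prob = {"index.html": 0.9, "style.css": 0.6, "big-video.mp4": 0.05}
server_threshold = {"index.html": 0.3, "style.css": 0.3, "big-video.mp4": 0.5}

to_prefetch = [f for f, p in access_prob.items() if p >= server_threshold[f]]
print("prefetch:", to_prefetch)   # -> ['index.html', 'style.css']
```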

18.
We consider a problem involving the design of a system for concurrent processing of application software using multiple processors on a local area network. The task control-flow graph which graphically describes the software logic is allowed to be an arbitrary directed multigraph. We establish equations of flow conservation which arise in the execution of modules on the set of interconnected processors. Incorporating these equations, we develop a mixed integer programming model to find an optimal allocation of program modules, with possible replications, to the set of capacitated processors. The objective is to minimize the total interprocessor communication cost and module execution cost subject to the capacity constraints of processors and the broadcast channel. The decisions involved are: how many copies of each module should be maintained; how to allocate module copies across processors; and how to distribute invocations of each module across its copies on different processors. We report numerical results from solving the model.

19.
Internet service providers (ISPs) have taken measures to reduce intolerable inter-ISP peer-to-peer (P2P) traffic costs, and the user experience of various P2P applications has suffered as a result. The recently emerging offline downloading service seeks to improve user experience by using dedicated servers to cache requested files and provide high-speed uploading. However, with the rapid increase in user population, the server-side bandwidth of offline downloading systems is expected to become insufficient in the near future. We propose a novel complementary caching scheme with the goal of mitigating inter-ISP traffic, alleviating the load on the servers of Internet applications, and enhancing user experience. Both the architecture and the caching algorithm are presented in this paper. On the one hand, with full knowledge of the P2P file sharing system and the offline downloading service, the complementary caching infrastructure is designed to be conveniently deployed and to work together with existing platforms; the cooperation mechanisms among the major components are also described. On the other hand, with an in-depth understanding of the traffic characteristics relevant to caching, we develop a complementary caching algorithm based on request density, file redundancy, and file size. Since this information can be captured in real time in our design, the proposed policy can be implemented to guide the storage and replacement of caching units. Based on real-world traces covering 3 months, we demonstrate that the complementary caching scheme achieves a 'three-win' objective: for P2P downloading, over 50% of traffic is redirected to the cache; for offline downloading, the average server-dependence of tasks drops from 0.71 to 0.32; and for user experience, the average P2P transfer rate is increased by more than 50 KB/s.

20.
Previous literature presents several seemingly different approaches to rooted-tree-based multicast key distribution schemes that try to minimize user key storage while providing efficient member deletion. In this paper, we show that user key storage on rooted trees can be studied systematically using basic concepts from information theory. We show that the rooted-tree-based multicast key distribution problem can be posed as an optimization problem that is abstractly identical to the optimal codeword length selection problem in information theory. In particular, we show that the entropy of the member deletion statistics quantifies the optimal value of the average number of keys to be assigned to a member. We relate the sustainable key length to the statistics of member deletion events and the hardware bit generation rate. We then demonstrate the difference between key distribution on rooted trees and the optimal codeword length selection problem with an example of a key distribution scheme that attains optimality but fails to prevent user collusion.
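A hedged sketch of the information-theoretic connection the abstract draws (an illustrative entropy computation with made-up deletion statistics, not the paper's derivation): the entropy of the member-deletion statistics lower-bounds the average number of keys assigned per member, just as entropy lower-bounds the average codeword length.

```python
import math

def entropy_bits(probs: list[float]) -> float:
    """Shannon entropy in bits; illustrative of the lower bound on the
    average number of keys per member (analogous to the bound on average
    codeword length)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

deletion_probs = [0.5, 0.25, 0.125, 0.125]   # made-up member-deletion statistics
print("entropy lower bound (keys per member):", entropy_bits(deletion_probs))  # 1.75
```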
