Similar Literature
20 similar documents found (search time: 15 ms)
1.
刘彩燕  白尚旺 《计算机工程与设计》2006,27(17):3163-3164,3177
Data is an important class of grid resource: it can be moved, replicated, and cached. As the datasets in a grid grow in both number and size, data must be transferred to locations close to its consumers, and dataset replicas must be created and stored on different grid nodes, in order to raise access speed, reduce access latency, and give the data-access service better performance and reliability. Because grids involve heterogeneous data resources, communication delays, and resource failures, keeping replica data consistent is a highly challenging task. After analyzing the replica-consistency services of grid middleware systems and the problems they exhibit, this paper proposes a targeted architectural design.

2.
Using reconfiguration for efficient management of replicated data   Total citations: 2 (self-citations: 0, others: 2)
Replicated data management protocols have been proposed that exploit a logically structured set of copies. These protocols have the advantage that they provide limited fault-tolerance at low communication cost. The proposed protocols can be viewed as analogues of the read-one write-all protocol in the context of logical structures. In this paper, we start by generalizing these protocols in two ways for logical structures. First, the quorum-based approach is applied to develop protocols that use structured read and write quorums, thus attaining a high degree of data availability for both read and write operations. Next, the reconfiguration or views approach is developed for these structures, resulting in protocols that attain high degrees of availability at significantly lower communication cost for read operations. In this sense, the proposed protocols have the advantages of the read-one write-all protocol for low-cost read operations as well as the majority quorum protocol for high data availability. Finally, we generalize the reconfiguration approach to allow for the dynamic reconfiguration of the database system from one replica management protocol to another. This allows database systems to adapt to an evolving and dynamic application environment.
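The quorum idea underlying this family of protocols can be illustrated with the classical intersection conditions (a generic sketch, not the paper's structured quorums; the function name is ours):

```python
# Generic sketch: with n copies, a read quorum of size r and a write
# quorum of size w are safe when every read quorum overlaps every
# write quorum (r + w > n) and any two write quorums overlap (2w > n).
def quorums_intersect(n, r, w):
    return r + w > n and 2 * w > n

# read-one write-all: cheap reads, writes touch every copy
assert quorums_intersect(5, 1, 5)
# majority quorums: balanced cost and availability
assert quorums_intersect(5, 3, 3)
# too-small quorums: a read could miss the latest write
assert not quorums_intersect(5, 2, 2)
```

The structured and reconfigurable protocols in the abstract refine these size-based conditions into intersection conditions on a logical structure, but the safety argument is the same overlap property.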

3.
A special data structure, the cross tree, is proposed to describe the messages of security protocols. A cross tree contains cross nodes, and several cross trees that share cross nodes form a cross forest. Each cross tree corresponds uniquely to one message, while a cross forest represents the messages that are sent or received by the same mechanism during a protocol run. Atomic messages that are identical within one or several messages are represented by cross nodes in the cross tree or cross forest, which makes it easy to guarantee the consistency of atomic messages, as well as the consistency between a public key and its owner. In addition, cross trees can be used to build acceptable messages for message templates, which is essential for security-protocol analysis based on model checking.

4.
5.
Data replication, an essential service for MANETs, is used to increase data availability by creating local or nearby copies of frequently used items, to reduce communication overhead, and to achieve fault tolerance and load balancing. Data replication protocols proposed for MANETs are often prone to scalability problems, either by design or because of the underlying routing protocols they are based on. In particular, they exhibit poor performance when the network size is scaled up, yet scalability is an important criterion for several MANET applications. We propose a scalable and reactive data replication approach, named SCALAR, combined with a low-cost data lookup protocol. SCALAR is a virtual-backbone-based solution in which the network nodes construct a connected dominating set based on the network topology graph. To the best of our knowledge, SCALAR is the first work to apply a virtual backbone structure to data lookup and replication in MANETs. A theoretical message-complexity analysis of the proposed protocols is given. Extensive simulations analyze and compare the behavior of SCALAR, showing that it outperforms the other solutions in terms of data accessibility, message overhead, and query deepness, and that it is an efficient solution for high-density, high-load, large-scale mobile ad hoc networks.
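The virtual-backbone construction SCALAR relies on can be approximated very simply: the internal (non-leaf) nodes of any spanning tree of a connected graph form a connected dominating set. A minimal sketch (our illustration, not SCALAR's actual backbone algorithm):

```python
from collections import deque

def connected_dominating_set(adj):
    """Return the internal nodes of a BFS spanning tree of the
    connected graph given as an adjacency dict. Every other node is
    a leaf hanging off an internal node, so the set dominates the
    graph, and the internal nodes stay connected through the tree."""
    root = next(iter(adj))
    parent = {root: None}
    q = deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                q.append(v)
    return {p for p in parent.values() if p is not None}

# path 0-1-2-3: nodes 0, 1, 2 are internal in the BFS tree rooted at 0
assert connected_dominating_set({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}) == {0, 1, 2}
```

Queries and replica placement can then be restricted to backbone nodes, which is where the message-complexity savings come from.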

6.
A new protocol for maintaining replicated data that can provide both high data availability and low response time is presented. In the protocol, the nodes are organized in a logical grid. Existing protocols are designed primarily to achieve high availability by updating a large fraction of the copies, which provides some (although not significant) load sharing. In the new protocol, transaction processing is shared effectively among nodes storing copies of the data, and both the response time experienced by transactions and the system throughput are improved significantly. The authors analyze the availability of the new protocol and use simulation to study the effect of load sharing on the response time of transactions. They also compare the new protocol with a voting-based scheme.
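A logical-grid organization typically buys its availability from a grid quorum rule. A small brute-force check of the intersection property (our reconstruction of the standard grid quorum, not necessarily this protocol's exact quorums):

```python
import itertools

R, C = 3, 3
grid = [[r * C + c for c in range(C)] for r in range(R)]

def read_quorums():
    # a read quorum picks one copy from every column
    for rows in itertools.product(range(R), repeat=C):
        yield {grid[rows[c]][c] for c in range(C)}

def write_quorums():
    # a write quorum is one full column plus one copy from every column
    for col in range(C):
        full_col = {grid[r][col] for r in range(R)}
        for rq in read_quorums():
            yield full_col | rq

# every read quorum intersects every write quorum,
# and any two write quorums intersect
assert all(rq & wq for rq in read_quorums() for wq in write_quorums())
assert all(w1 & w2 for w1 in write_quorums() for w2 in write_quorums())
```

Because reads touch only C of the R*C copies, many read transactions can proceed in parallel on disjoint rows, which is the load-sharing effect the abstract describes.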

7.
葛宁  贺俞凯  翟树茂  李晓洲  张莉 《软件学报》2023,34(11):4989-5007
Distributed systems play an important role in modern computing environments, and their consensus protocols guarantee consistent behavior across nodes. Design errors in a consensus protocol can cause system failures and, in severe cases, catastrophic consequences for people and the environment, so ensuring the correctness of consensus-protocol designs is essential. Formal verification can rigorously prove that target properties hold in a design model and is therefore well suited to verifying consensus protocols. However, as distributed systems grow in scale and problem complexity rises, formally verifying distributed consensus protocols becomes harder. Which formal methods to apply to consensus-protocol designs, and how to scale up verification, are therefore important research questions. This paper surveys existing work on formally verifying consensus protocols, summarizes the key modeling approaches and verification techniques proposed, and discusses promising future research directions in this area.

8.
王艳玲  秦拯  陶勇 《计算机工程》2012,38(14):76-78
DTN networks generally adopt replication-based stochastic routing strategies; the large number of message copies in the network occupies substantial buffer space at intermediate nodes and leads to congestion. From the perspective of redundancy control, and based on the PROPHET routing algorithm, this paper designs three buffer-management mechanisms: limiting the number of message copies, dynamically setting packet lifetimes, and proactively deleting packets that have already been delivered. By bounding the number of copies and deleting redundant messages, the total number of copies in the network is reduced, lightening the load on nodes. Experimental results show that, when network resources are limited, these three mechanisms improve the delivery ratio and reduce network overhead.

9.
A new approach for the verification of cache coherence protocols   Total citations: 1 (self-citations: 0, others: 1)
We introduce a cache protocol verification technique based on a symbolic state expansion procedure. A global Finite State Machine (FSM) model characterizing the protocol behavior is built, and protocol verification becomes equivalent to finding whether or not the global FSM may enter erroneous states. In order to reduce the complexity of the state expansion process, all the caches in the same state are grouped into an equivalence class and the number of caches in the class is symbolically represented by a repetition constructor. This symbolic representation is partly justified by the symmetry and homogeneity of cache-based systems. However, the key idea behind the representation is to exploit a unique property of cache coherence protocols: protocol correctness does not depend on the exact number of cached copies. Rather, symbolic states only need to keep track of whether the caches have 0, 1, or multiple copies. The resulting symbolic state expansion process takes only a few steps and verifies the protocol for any system size; it is therefore more efficient and reliable than current approaches. The verification procedure is first applied to five existing protocols under the assumption of atomic protocol transitions. A simple snooping protocol on a split-transaction shared bus is also verified to illustrate the extension of our approach to protocols with nonatomic transitions.
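The core abstraction, that correctness depends only on whether 0, 1, or many caches hold a copy, can be sketched as a tiny saturating counter (our illustration of the idea, not the paper's verification procedure):

```python
# The number of cached copies is tracked symbolically as 0, 1, or 'many'.
def add_copy(c):
    return {0: 1, 1: 'many', 'many': 'many'}[c]

def drop_copy(c):
    # dropping a copy from 'many' may leave one copy or still many;
    # a verifier must explore both successor states
    return {1: [0], 'many': [1, 'many']}[c]

assert add_copy(0) == 1 and add_copy('many') == 'many'
assert drop_copy('many') == [1, 'many']
```

Because the abstract state space is finite regardless of how many caches exist, one expansion from this abstraction covers all system sizes at once.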

10.
Delay/disruption tolerant networking (DTN) is an approach to networking where connectivity is intermittent; it is usually realized by a store-and-forward technique. Depending on the ability of intermediary nodes to carry and forward messages, messages can eventually be delivered to their destination by mobile nodes running an appropriate routing protocol. To achieve successful delivery, most DTN routing protocols duplicate messages. Although messages are then rapidly transferred to the destination, the redundant number of message copies grows rapidly. This paper presents a new routing scheme for epidemic routing based on a stochastic process: message redundancy is efficiently reduced and the number of message copies is kept under reasonable control. As nodes in the network come into contact, the number of message copies changes; we model this variability as a special Markov chain, a birth-death process, on the number of message copies, and calculate its stationary distribution. Comparing the theoretical model with our simulations yields similar results. Our method improves on time-to-live (TTL) and antipacket methods in both redundancy and delivery efficiency.
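The stationary distribution of a finite birth-death chain has the standard product form, which is presumably what the paper computes for the number of message copies. A sketch with hypothetical rates:

```python
# States k = 0..K count message copies; lam[k] is the rate of creating
# a copy in state k (a node contact), mu[k] the rate of removing the
# extra copy of state k+1 (e.g. on antipacket receipt). Detailed
# balance gives pi[k+1] = pi[k] * lam[k] / mu[k].
def stationary(lam, mu):
    pi = [1.0]
    for l, m in zip(lam, mu):
        pi.append(pi[-1] * l / m)
    total = sum(pi)
    return [p / total for p in pi]

# births twice as fast as deaths: mass shifts toward many copies
pi = stationary([2.0, 2.0], [1.0, 1.0])
assert abs(pi[2] - 4 / 7) < 1e-12
assert abs(sum(pi) - 1.0) < 1e-12
```

From such a distribution one can read off the expected number of copies in steady state and tune the copy-control rates accordingly.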

11.
Unlike traditional wired networks, a wireless ad hoc network consists of mobile nodes whose bandwidth, computing power, and energy are limited. For such networks, researchers have proposed on-demand routing protocols. These protocols suit the topology of wireless ad hoc networks well, but, lacking knowledge of the global topology and of node mobility, they may not be optimal. This paper therefore proposes an efficient routing protocol, ERNC, which builds on the previously proposed SHORT routing protocol [13] and extends the earlier NAOR protocol [14] by using network coding to further improve routing performance. Finally, ERNC is evaluated with the NS-2 simulator; the results show that ERNC achieves a better packet delivery ratio and average end-to-end delay than existing protocols.

12.
Some recent studies utilize node contact patterns to aid the design of routing protocols in Opportunistic Mobile Networks (OppNets). However, most existing studies utilize only one-hop contact information. To fully exploit the contact information nodes collect, in this paper we examine node contact patterns from a multi-hop perspective. We first define the opportunistic forwarding path and propose a model to calculate the maximum data delivery probability along different opportunistic forwarding paths. Second, based on the maximum data delivery probability, we propose a novel approach that improves data forwarding in OppNets using two forwarding metrics. The proposed strategy first forwards data copies to nodes with higher centrality values at the global scope. Afterwards, the maximum data delivery probability to the destination is evaluated, to ensure that data is carried and forwarded by relays with a higher capability of contacting the destination. Finally, extensive real-trace-driven simulations compare the proposed routing protocol with existing routing protocols in terms of delivery ratio and delivery cost. The results show that our protocol is close to Epidemic Routing in delivery ratio but with significantly reduced delivery cost; it also outperforms Bubble Rap and Prophet in delivery ratio, with a delivery cost very close to that of Bubble Rap.
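Maximizing a product of per-hop delivery probabilities over paths is a widest-path problem and can be solved with a max-product variant of Dijkstra's algorithm. A sketch (edge probabilities are hypothetical; the paper's exact path model may differ):

```python
import heapq

def max_delivery_prob(p, src, dst):
    """Maximum product of per-contact delivery probabilities over all
    forwarding paths from src to dst. p[u][v] is the (assumed known)
    probability that u meets and delivers to v in time."""
    best = {src: 1.0}
    heap = [(-1.0, src)]
    while heap:
        neg, u = heapq.heappop(heap)
        prob = -neg
        if u == dst:
            return prob
        if prob < best.get(u, 0.0):
            continue            # stale heap entry
        for v, puv in p.get(u, {}).items():
            cand = prob * puv
            if cand > best.get(v, 0.0):
                best[v] = cand
                heapq.heappush(heap, (-cand, v))
    return best.get(dst, 0.0)

p = {'a': {'b': 0.9, 'c': 0.5},
     'b': {'d': 0.8},
     'c': {'d': 0.9}}
# path a->b->d (0.9 * 0.8 = 0.72) beats a->c->d (0.5 * 0.9 = 0.45)
assert abs(max_delivery_prob(p, 'a', 'd') - 0.72) < 1e-12
```

Dijkstra's greedy argument still applies because probabilities are at most 1, so extending a path can never increase its product.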

13.
A Lock-Based Cache Coherence Protocol for Scope Consistency   Total citations: 5 (self-citations: 2, others: 5)
Directory protocols are widely adopted to maintain cache coherence in distributed shared-memory multiprocessors. Although scalable to a certain extent, directory protocols are complex enough to prevent their use in very large-scale multiprocessors with tens of thousands of nodes. This paper proposes a lock-based cache coherence protocol for scope consistency. It does not rely on directory information to maintain cache coherence. Instead, coherence is maintained by requiring the releasing processor of a lock to store with the lock all write notices generated in the associated critical section; the acquiring processor then invalidates or updates its locally cached data copies according to the lock's write notices. To evaluate the performance of the lock-based cache coherence protocol, a software SVM system named JIAJIA was built on a network of workstations. Besides the lock-based cache coherence protocol, JIAJIA is also characterized by its shared-memory organization scheme, which combines the physical memories of multiple workstations into a large shared space. Performance measurements with the SPLASH-2 program suite and the NAS benchmarks indicate that JIAJIA achieves higher speedups than recent SVM systems such as CVM. Moreover, JIAJIA can solve large-scale problems that other SVM systems cannot solve due to memory-size limitations.
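The write-notice mechanism can be caricatured in a few lines: the releaser attaches the pages written in the critical section to the lock, and the next acquirer invalidates exactly those cached pages (a simplified sketch; the real protocol also manages intervals, diffs, and home nodes):

```python
class Lock:
    def __init__(self):
        self.write_notices = set()   # pages dirtied under this lock

class Processor:
    def __init__(self):
        self.cache = {}              # page id -> locally cached data

    def release(self, lock, written_pages):
        # store the critical section's write notices with the lock
        lock.write_notices |= set(written_pages)

    def acquire(self, lock):
        # invalidate local copies named by the lock's write notices
        for page in lock.write_notices:
            self.cache.pop(page, None)

lock = Lock()
writer, reader = Processor(), Processor()
reader.cache = {1: 'old', 2: 'ok'}
writer.release(lock, [1])
reader.acquire(lock)
assert 1 not in reader.cache and 2 in reader.cache
```

The point of the design is that coherence traffic follows the lock itself, so no global directory of sharers is needed.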

14.
Consider a distributed system with n nodes. A protocol running on this system is f-resilient if it can tolerate up to f failures and operate correctly. The reliability of such a protocol is defined as the probability that no more than f nodes have failed. In the first part of the paper, we study the scalability of systems running such protocols. We show the existence of a threshold time of operation for these protocols which we call the scalable mission time (SMT). The scalable mission time is the maximum time up to which an asymptotic increase in the system size leads to an asymptotic increase in the reliability of the protocol; beyond it, an asymptotic increase in system size leads to an asymptotic decrease in reliability. We also show techniques to compute the scalable mission time. In the second part of the paper, we show that the scalable mission time of a resilient protocol can be used as a good approximation to the mean time to failure (MTTF) of the protocol, even when the failure distributions are non-exponential and the nodes fail at different rates (a heterogeneous system). We also show that the MTTF asymptotically approaches the SMT as the system size grows. Computing the MTTF is quite difficult when the system is heterogeneous, even if the failure distribution of the nodes is exponential; using experimental results, we show that the SMT approximation gives values very close to the real MTTF. Further, we consider the maintenance interval of systems running resilient protocols and show that if the maintenance interval is larger than the scalable mission time, then there is a maximum scalability value beyond which it is undesirable to scale up the size of the system. Received: 4 October 1994 / 30 May 1996
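For i.i.d. node failures, the reliability of an f-resilient protocol on n nodes is a binomial tail. A sketch (the paper also treats non-exponential and heterogeneous failures, which this does not):

```python
from math import comb, exp

def reliability(n, f, q):
    """P(at most f of n nodes have failed), each node independently
    failed with probability q by the time of interest."""
    return sum(comb(n, k) * q**k * (1 - q)**(n - k) for k in range(f + 1))

# exponential lifetimes give q = 1 - exp(-rate * t)
q = 1 - exp(-0.1 * 1.0)
assert 0 < q < 1

assert abs(reliability(5, 2, 0.1) - 0.99144) < 1e-9
# a protocol tolerating all n failures is trivially reliable
assert abs(reliability(4, 4, 0.3) - 1.0) < 1e-12
```

Evaluating this quantity as n grows for a fixed f shows the tension the SMT captures: more nodes mean more chances of exceeding the failure budget once q(t) is large enough.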

15.
Although directory-based write-invalidate cache coherence protocols have the potential to improve the performance of large-scale multiprocessors, coherence misses limit processor utilization. Therefore, so-called competitive-update protocols, hybrid protocols that on a per-block basis dynamically switch between write-invalidate and write-update, have been considered as a means to reduce the coherence miss rate and have been shown to be a better coherence policy for a wide range of applications. Unfortunately, such protocols may cause high traffic peaks for applications with extensive use of migratory objects. These traffic peaks can offset the performance gain of a reduced miss rate if the network bandwidth is not sufficient. We propose in this study to extend a competitive-update protocol with a previously published adaptive mechanism that can dynamically detect migratory objects and reduce the coherence traffic they cause. Detailed architectural simulations based on five scientific and engineering applications show that this adaptive protocol outperforms a write-invalidate protocol, reducing the miss rate and the bandwidth needed by up to 71% and 26%, respectively.

16.
Contour-based object detection can be formulated as a matching problem between model contour parts and image edge fragments. We propose a novel solution by treating this problem as the problem of finding dominant sets in weighted graphs. The nodes of the graph are pairs composed of model contour parts and image edge fragments, and the weights between nodes are based on shape similarity. Because of high consistency between correct correspondences, the correct matching corresponds to a dominant set of the graph. Consequently, when a dominant set is determined, it provides a selection of correct correspondences. As the proposed method is able to get all the dominant sets, we can detect multiple objects in an image in one pass. Moreover, since our approach is purely based on shape, we also determine an optimal scale of target object without a common enumeration of all possible scales. Both theoretic analysis and extensive experimental evaluation illustrate the benefits of our approach.
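Dominant sets of a weighted graph are commonly extracted with replicator dynamics: the support of the converged weight vector is the dominant set. A sketch on a toy affinity matrix (the matrix and threshold are ours; the paper builds its weights from shape similarity between correspondences):

```python
import numpy as np

def dominant_set(A, iters=2000, tol=1e-10):
    # replicator dynamics: x_i <- x_i * (A x)_i / (x^T A x);
    # A is a symmetric affinity matrix with zero diagonal
    x = np.full(A.shape[0], 1.0 / A.shape[0])
    for _ in range(iters):
        Ax = A @ x
        x_new = x * Ax / (x @ Ax)
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new
    return x

# three strongly mutually-consistent correspondences (0-2), one outlier (3)
A = np.array([[0.0, 1.0, 1.0, 0.1],
              [1.0, 0.0, 1.0, 0.1],
              [1.0, 1.0, 0.0, 0.1],
              [0.1, 0.1, 0.1, 0.0]])
x = dominant_set(A)
assert set(np.flatnonzero(x > 0.1)) == {0, 1, 2}
```

Removing the extracted support and iterating yields further dominant sets, which is how multiple objects can be detected in one pass.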

17.
Distributed shared memory (DSM) systems provide a simple programming paradigm for networks of workstations, which are gaining popularity due to their cost-effective high computing power. However, DSM systems usually exhibit poor performance due to the large communication delay between the nodes, and many different memory consistency models have been proposed to mask the network delay. In this paper, we propose an asynchronous protocol for the release-consistent memory model, which we call the Asynchronous Release Consistency (ARC) protocol. Unlike other protocols, where communication adheres to the synchronous request/receive paradigm, the ARC protocol is asynchronous: the necessary pages are broadcast before they are requested, so the network delay can be reduced by proper prefetching of necessary pages. We have also compared the performance of the ARC protocol with the lazy release protocol by running standard benchmark programs, and the experimental results showed that the ARC protocol achieves a performance improvement of up to 29%.

18.
Summary. For data consistency in distributed information systems, it is often necessary to compare remotely located copies of a file. We develop several protocols for the efficient detection of differing pages in a replicated file in different communication and failure models. The first set of protocols assumes a restricted but practical communication model. In this case, the minimum amount of communication necessary to identify any given number of differing pages is determined and a technique to attain this minimum is presented. For the more general communication model and for more refined failure models, we show that more efficient protocols can be derived. Our approach is based on the theory of Galois fields. Received: February 1996 / Accepted: September 1997

19.
The memory model of a shared-memory multiprocessor is a contract between the designer and the programmer of the multiprocessor. A memory model is typically implemented by means of a cache-coherence protocol. The design of this protocol is one of the most complex aspects of multiprocessor design and is consequently quite error-prone. However, it is imperative to ensure that the cache-coherence protocol satisfies the shared-memory model. We present a novel technique based on model checking to tackle this difficult problem for the important and well-known shared-memory model of sequential consistency. Surprisingly, verifying sequential consistency is undecidable in general, even for finite-state cache-coherence protocols. In practice, cache-coherence protocols satisfy the properties of causality and data independence. Causality is the property that values of read events flow from values of write events. Data independence is the property that all traces can be generated by renaming data values from traces where the written values are pairwise distinct. We show that, if a causal and data independent system also has the property that the logical order of write events to each location is identical to their temporal order, then sequential consistency is decidable. We present a novel model checking algorithm to verify sequential consistency on such systems for a finite number of processors and memory locations and an arbitrary number of data values.

20.
In lossy networks the probability of successful communication can be significantly increased by transmitting multiple copies of the same message through independent channels. In this paper we show that communication protocols that exploit this by dynamically assigning the number of transmitted copies of the same data can significantly improve the control performance in a networked control system with only a modest increase in the total number of transmissions. We develop techniques to design communication protocols that exploit the transmission of multiple packets while seeking a balance between stability/estimation performance and communication rate. An average cost optimality criterion is employed to obtain a number of optimal protocols applicable to networks with different computational capabilities. We also discuss stability results under network contention when multiple nodes utilize these protocols.
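The basic trade-off is elementary: with k copies sent over independent channels that each succeed with probability p, delivery succeeds with probability 1 - (1-p)^k, while the transmission count grows linearly in k. A sketch (function names ours; the paper's protocols choose k dynamically from the control state):

```python
def delivery_prob(p, k):
    # probability that at least one of k independent copies gets through
    return 1.0 - (1.0 - p) ** k

def min_copies(p, target):
    # smallest number of copies meeting a target delivery probability
    k = 1
    while delivery_prob(p, k) < target:
        k += 1
    return k

assert delivery_prob(0.5, 3) == 0.875
assert min_copies(0.5, 0.9) == 4   # 1 - 0.5**4 = 0.9375 >= 0.9
```

Because the marginal gain of each extra copy shrinks geometrically, a few copies already capture most of the benefit, which is why the increase in total transmissions stays modest.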
