Similar Documents
1.
Constant-time distributed dominating set approximation
Finding a small dominating set is one of the most fundamental problems of classical graph theory. In this paper, we present a new fully distributed approximation algorithm based on LP relaxation techniques. For an arbitrary, possibly constant parameter k and maximum node degree, our algorithm computes a dominating set of expected size in rounds. Each node has to send messages of size. This is the first algorithm which achieves a non-trivial approximation ratio in a constant number of rounds.
Received: 9 September 2003, Accepted: 2 September 2004, Published online: 13 January 2005. The work presented in this paper was supported (in part) by the National Competence Center in Research on Mobile Information and Communication Systems (NCCR-MICS), a center supported by the Swiss National Science Foundation under grant number 5005-67322.
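For orientation, here is a minimal sequential greedy baseline for dominating sets. It is not the paper's distributed LP-based algorithm; the adjacency-dictionary input format and the function name are illustrative assumptions only.

```python
# Sequential greedy dominating-set baseline (NOT the distributed LP-based
# algorithm of the paper): repeatedly pick the vertex that covers the most
# still-uncovered vertices; gives the classical logarithmic approximation.
def greedy_dominating_set(adj):
    """adj: dict mapping each vertex to the set of its neighbours."""
    uncovered = set(adj)
    dominating = set()
    while uncovered:
        best = max(adj, key=lambda v: len(({v} | adj[v]) & uncovered))
        dominating.add(best)
        uncovered -= {best} | adj[best]
    return dominating

# Example: a 5-cycle; two non-adjacent vertices suffice.
cycle5 = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
print(greedy_dominating_set(cycle5))
```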

2.
The condition-based approach for consensus solvability consists of identifying sets of input vectors, called conditions, for which there exists an asynchronous protocol solving consensus despite the occurrence of up to f process crashes. This paper investigates the largest set of conditions which allow us to solve the consensus problem in an asynchronous shared memory system.
The first part of the paper shows that this set is made up of a hierarchy of classes of conditions, where d is a parameter (called the degree of the condition), starting with the largest degree and ending with d = 0. We prove that each class is strictly contained in the previous one. Various properties of the hierarchy are also derived. It is shown that a class can be characterized in two equivalent but complementary ways: one is convenient for designing protocols while the other is for analyzing the class properties. The paper also defines a linear family of conditions that can be used to derive many specific conditions. In particular, for each d, two natural conditions are presented.
The second part of the paper is devoted to the design of efficient condition-based protocols. A generic condition-based protocol is presented. This protocol can be instantiated with any condition C and requires at most a bounded number of shared memory read/write operations per process in the synchronization part of the protocol. Thus, the value (f-d) represents the difficulty of the class. An improvement of the protocol for the conditions in one of the classes is also presented.
Received: 15 November 2001, Accepted: 15 April 2003, Published online: 6 February 2004. Parts of it have previously appeared in [23] and [25].

3.
A graph G with n vertices and maximum degree cannot be given weak sense of direction using less than colours. It is known that n colours are always sufficient, and it was conjectured that just one more colour is really needed. Nonetheless, it has been shown [3] that for sufficiently large n there are graphs requiring more colours than that. In this paper, using recent results in asymptotic graph enumeration, we show that (surprisingly) the same bound holds for regular graphs. We also show that colours are necessary, where d_G is the degree of G.
Received: April 2002, Accepted: April 2003. Sebastiano Vigna: Partially supported by the Italian MURST (Finanziamento di iniziative di ricerca diffusa condotte da parte di giovani ricercatori). The results of this paper appeared in a preliminary form in Distributed Computing: 14th International Conference, DISC 2000, Springer-Verlag, 2000.

4.
We give a complete characterization of the complexity of the element distinctness problem for n elements of m bits each on deterministic and nondeterministic one-tape Turing machines. We present an algorithm running in time for deterministic machines and nondeterministic solutions that are of time complexity. For elements of logarithmic size, on nondeterministic machines, these results close the gap between the known lower bound and the previous upper bound. Additional lower bounds are given to show that the upper bounds are optimal for all other possible relations between m and n. The upper bounds employ hashing techniques, while the lower bounds make use of the communication complexity of set disjointness.
Received: 23 April 2001, Published online: 2 September 2003. Holger Petersen: Supported by Deutsche Akademie der Naturforscher Leopoldina, grant number BMBF-LPD 9901/8-1 of Bundesministerium für Bildung und Forschung.
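The hashing idea behind the upper bounds can be illustrated with a short RAM-model check; this sketch is only illustrative and says nothing about the one-tape Turing-machine simulations the paper actually analyses.

```python
# Hash-based element distinctness in the RAM model (illustrative only).
def all_distinct(items):
    seen = set()
    for x in items:
        if x in seen:          # an equal element was already seen
            return False
        seen.add(x)
    return True

print(all_distinct([3, 1, 4, 1, 5]))  # False
print(all_distinct([3, 1, 4, 5]))     # True
```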

5.
A homogeneous set is a non-trivial module of a graph, i.e. a non-empty, non-unitary, proper subset of a graph's vertices such that all its elements present exactly the same outer neighborhood. Given two graphs, the Homogeneous Set Sandwich Problem (HSSP) asks whether there exists a sandwich graph which has a homogeneous set. In 2001 Tang et al. published a fast algorithm which was recently proven wrong, so the HSSP's best known upper bound reverted to the earlier one determined by Cerioli et al. in 1998. We nevertheless present new deterministic algorithms that establish a better upper bound, and we also give two even faster randomized algorithms, whose simplicity might lend them didactic usefulness. We believe that, besides providing efficient easy-to-implement procedures to solve it, the study of these new approaches allows a fairly thorough understanding of the problem.

6.
For a set of rooted, unordered, distinctly leaf-labeled trees, the NP-hard maximum agreement subtree problem (MAST) asks for a tree contained (up to isomorphism or homeomorphism) in all of the input trees with as many labeled leaves as possible. We study the ordered variants of MAST where the trees are uniformly or non-uniformly ordered. We provide the first known polynomial-time algorithms for the uniformly and non-uniformly ordered homeomorphic variants as well as the uniformly and non-uniformly ordered isomorphic variants of MAST. Our algorithms run in time , , , and , respectively, where n is the number of leaf labels and k is the number of input trees.

7.
Fast allocation and deallocation with an improved buddy system
We propose several modifications to the binary buddy system for managing dynamic allocation of memory blocks whose sizes are powers of two. The standard buddy system allocates and deallocates blocks in O(log n) time in the worst case (and on an amortized basis), where n is the size of the memory. We present three schemes that improve the running time to O(1), where the time bound for deallocation is amortized for the first two schemes. The first scheme uses just one more word of memory than the standard buddy system, but may result in greater fragmentation than necessary. The second and third schemes have essentially the same fragmentation as the standard buddy system, and use additional bits of auxiliary storage. Finally, we present simulation results estimating the effect of the excess fragmentation in the first scheme.
Received: 4 May 2003, Published online: 22 December 2004. Gerth Stølting Brodal: Supported by the Carlsberg Foundation (contract number ANS-0257/20). Partially supported by the Future and Emerging Technologies Programme of the EU under contract number IST-1999-14186 (ALCOM-FT). Basic Research in Computer Science, www.brics.dk, funded by the Danish National Research Foundation. Erik D. Demaine: Partially supported by the Natural Science and Engineering Research Council of Canada (NSERC). J. Ian Munro: Supported by the Natural Science and Engineering Research Council of Canada (NSERC) and the Canada Research Chair in Algorithm Design. This paper includes several results that appeared in preliminary form in the Proceedings of the 19th Conference on the Foundations of Software Technology and Theoretical Computer Science (FST & TCS 99) [8].
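For background, a minimal sketch of the standard binary buddy scheme (free lists per block order, split on allocation, coalesce buddies on deallocation) is given below. It is the baseline logarithmic-time scheme, not the paper's O(1) constructions; class and method names are our own.

```python
# Minimal binary buddy allocator sketch (the standard logarithmic-time scheme,
# not the paper's O(1) variants). Memory size must be a power of two.
class BuddyAllocator:
    def __init__(self, size):
        self.max_order = size.bit_length() - 1
        # free_lists[k] holds start offsets of free blocks of size 2**k
        self.free_lists = [set() for _ in range(self.max_order + 1)]
        self.free_lists[self.max_order].add(0)

    def alloc(self, request):
        order = max(request - 1, 0).bit_length()       # round up to power of two
        for k in range(order, self.max_order + 1):     # smallest free block that fits
            if self.free_lists[k]:
                block = self.free_lists[k].pop()
                while k > order:                       # split into two buddies
                    k -= 1
                    self.free_lists[k].add(block + (1 << k))
                return block, order
        raise MemoryError("no block large enough")

    def free(self, block, order):
        while order < self.max_order:                  # coalesce while the buddy is free
            buddy = block ^ (1 << order)
            if buddy not in self.free_lists[order]:
                break
            self.free_lists[order].remove(buddy)
            block = min(block, buddy)
            order += 1
        self.free_lists[order].add(block)

# Example: 1 KiB arena, allocate 100 bytes (rounded up to 128), then free them.
a = BuddyAllocator(1024)
blk, order = a.alloc(100)
a.free(blk, order)
```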

8.
In 1999, Nakano, Olariu, and Schwing [20] showed that the permutation routing of n items pretitled on a mobile ad hoc network (MANET for short) of p stations (p known) and k channels (MANET(n, p, k)), with k < p, can be carried out in broadcast rounds if k p and if each station has memory locations. If k and each station has memory locations, the permutation of these n pretitled items can also be done in broadcast rounds. They used two assumptions: first, they suppose that each station of the mobile ad hoc network has an identifier beforehand; secondly, the stations are partitioned into k groups such that each group has stations, but it was not shown how this partition can be obtained. In this paper, the stations have no identifiers beforehand and p is unknown. We develop a protocol which first names the stations, secondly gives the value of p, and partitions the stations into groups. Finally, we show that the permutation routing problem can be solved in broadcast rounds in the worst case. It can be solved in broadcast rounds in the best case. Note that our approach does not impose any restriction on k.

9.
We show that, for arbitrary positive integers, the gcd of two linear combinations of these integers with rather small random integer coefficients coincides, with constant positive probability, with the gcd of the integers themselves. This naturally leads to a probabilistic algorithm for computing the gcd of several integers via just one gcd of two numbers of about the same size as the initial data (namely the above linear combinations). This algorithm can be repeated to achieve any desired confidence level.
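A sketch of the idea under the stated assumptions follows; the coefficient range and the single-shot form are illustrative choices, not the bounds analysed in the paper.

```python
import math
import random
from functools import reduce

# gcd of several integers via one gcd of two random linear combinations
# (illustrative sketch; the coefficient bound 2**16 is an arbitrary choice,
# not the parameter analysed in the paper).
def fast_gcd(nums, coeff_bound=1 << 16):
    u = sum(random.randrange(1, coeff_bound) * a for a in nums)
    v = sum(random.randrange(1, coeff_bound) * a for a in nums)
    return math.gcd(u, v)   # equals gcd(nums) with good probability

nums = [2 * 3 * 5 * 7 * 11, 2 * 3 * 13, 2 * 3 * 17 * 19]
print(fast_gcd(nums), reduce(math.gcd, nums))   # both are 6 (with good probability)
```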

10.
We present approximation algorithms for the unsplittable flow problem (UFP) in undirected graphs. As is standard in this line of research, we assume that the maximum demand is at most the minimum capacity. We focus on the non-uniform capacity case, in which the edge capacities can vary arbitrarily over the graph. Our results are as follows. We obtain an approximation ratio for UFP expressed in terms of n, the number of vertices, the maximum degree, and the expansion of the graph. Furthermore, if we specialize to the case where all edges have the same capacity, our algorithm gives a corresponding approximation guarantee. For certain strong constant-degree expanders considered in earlier work, we obtain an approximation for the uniform capacity case. For UFP on the line and the ring, we give the first constant-factor approximation algorithms. All of the above results improve if the maximum demand is bounded away from the minimum capacity. The above results either improve upon or are incomparable with previously known results for these problems. The main technique used for these results is randomized rounding followed by greedy alteration, and is inspired by the use of this idea in recent work.
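The generic shape of "randomized rounding followed by greedy alteration" can be sketched as below. This assumes unit demands, one fixed candidate path per demand with an LP value, and an arbitrary scaling constant; it is a schematic of the technique, not the paper's algorithm.

```python
import random
from collections import defaultdict

# Generic "randomized rounding + greedy alteration" sketch for path packing:
# accept each demand independently with its (scaled-down) fractional value,
# then drop demands greedily whenever an edge capacity would be exceeded.
def round_and_alter(demands, capacity, scale=0.5):
    """demands: list of (path, frac) with path a list of edges (u, v) and
    frac in [0, 1]; capacity: dict mapping a sorted edge tuple to its capacity."""
    accepted = [d for d in demands if random.random() < scale * d[1]]
    load = defaultdict(float)
    routed = []
    for path, _ in accepted:                 # alteration pass
        edges = [tuple(sorted(e)) for e in path]
        if all(load[e] + 1 <= capacity[e] for e in edges):
            for e in edges:
                load[e] += 1
            routed.append(path)
    return routed

caps = {(0, 1): 1, (1, 2): 2, (0, 2): 1}
dem = [([(0, 1), (1, 2)], 0.9), ([(0, 2)], 0.8), ([(0, 1)], 0.7)]
print(round_and_alter(dem, caps))
```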

11.
This paper proposes a new approach and new techniques for on-line monitoring of concurrent programs to ensure that some of their safety properties are not violated. The techniques modify erroneous systems, which violate a certain safety property, into new systems which satisfy the safety property. They do so by adding a new layer that controls the scheduling of steps in the system. We formally characterize the relationship between the erroneous and the new system. Safety monitors for the mutual-exclusion, -exclusion, and producer-consumer tasks are presented. Proofs for the mutual-exclusion task and the -exclusion task are presented to demonstrate the applicability of our approach.
Received: May 2001, Accepted: December 2002. An extended abstract of this work appears in the Proceedings of the fifth International Symposium on Autonomous Decentralized Systems (ISADS) 2001. Part of this work was done while the first author visited Wayne State University.
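As a toy illustration of a layer that controls the scheduling of steps, the sketch below delays any step that would put a second process into the critical section. It is only a simple lock-style monitor under our own assumptions, not the paper's construction or its formal framework.

```python
import threading

# Toy safety layer: delay any enter_critical() step that would violate
# mutual exclusion. Illustrative only; not the paper's construction.
class MutexMonitor:
    def __init__(self):
        self._cond = threading.Condition()
        self._in_cs = None                 # id of the process inside the CS, if any

    def enter_critical(self, pid):
        with self._cond:
            while self._in_cs is not None:  # step would violate safety: delay it
                self._cond.wait()
            self._in_cs = pid

    def exit_critical(self, pid):
        with self._cond:
            if self._in_cs == pid:
                self._in_cs = None
                self._cond.notify_all()

monitor = MutexMonitor()

def worker(pid):
    monitor.enter_critical(pid)
    print(f"process {pid} in critical section")
    monitor.exit_critical(pid)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```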

12.
By the sometimes so-called Main Theorem of Recursive Analysis, every computable real function is necessarily continuous. We wonder whether, and which kinds of, hypercomputation allow for the effective evaluation of also discontinuous functions. More precisely, the present work considers the following three super-Turing notions of real function computability:
–  relativized computation, specifically given oracle access to the Halting Problem or its jump;
–  encoding the input x and/or the output y = f(x) in weaker ways also related to the Arithmetic Hierarchy;
–  nondeterministic computation.
It turns out that any function computable in the first or second sense is still necessarily continuous, whereas the third type of hypercomputation provides the required power to evaluate for instance the discontinuous Heaviside function.

13.
The condition-based approach studies restrictions on the inputs to a distributed problem, called conditions, that facilitate its solution. Previous work considered mostly the asynchronous model of computation. This paper studies conditions for consensus in a synchronous system where processes can fail by crashing. It describes a full classification of conditions for consensus, establishing a continuum between the asynchronous and synchronous models, with a hierarchy whose largest class includes all conditions (and in particular the trivial one made up of all possible input vectors). For a condition C of degree d, we have:
–  For values of d < 0, consensus is solvable in an asynchronous system with t failures, and we obtain the known hierarchy of conditions that allows solving asynchronous consensus with more and more efficient protocols as we go from d = 0 to d = −t.
–  For values of d > 0, consensus is known not to be solvable in an asynchronous system with t failures, but we obtain a hierarchy of conditions that allows solving synchronous consensus with protocols that can take more and more rounds, as we go from d = 0 to d = t.
–  d = 0 is the borderline case where consensus can be solved in an asynchronous system with t failures, and can be solved optimally in a synchronous system.
After having established the complete hierarchy, the paper concentrates on the last two items. The main result is that the necessary and sufficient number of rounds needed to solve uniform consensus for a condition of degree d is d + 1. In more detail, the paper presents a generic synchronous early-deciding uniform consensus protocol that enjoys the following properties. Let f be the number of actual crashes, I the input vector and C the condition the protocol is instantiated with. The protocol terminates in two rounds in the favorable cases, and in at most d + 1 rounds otherwise (it even terminates in a single round in the best case). Moreover, whether I belongs or not to C, no process requires more than min rounds to decide. The paper then proves a corresponding lower bound stating that at least d + 1 rounds are necessary to get a decision in the worst case. This paper is based on the DISC’03 and DISC’04 conference versions [MRR03, MRR04].
A. Mostefaoui is currently Associate Professor at the Computer Science Department of the University of Rennes, France. He received his Engineer Degree in Computer Science in 1990 from the University of Algiers, and a Ph.D. in Computer Science in 1994 from the University of Rennes, France. His research interests include fault-tolerance and synchronization in distributed systems, group communication, data consistency and distributed checkpointing. Achour Mostefaoui has published more than 70 scientific publications and served as a reviewer for more than 20 major journals and conferences. Moreover, he heads the software engineering degree programme of the University of Rennes.
S. Rajsbaum received a degree in Computer Engineering from the National Autonomous University of Mexico (UNAM) in 1985, and a PhD in Computer Science from the Technion, Israel, in 1991. Since then he has been a member of the Institute of Mathematics at UNAM, where he is now a Full Professor with Tenure. He has been a regular visiting scientist at the Laboratory for Computer Science of MIT. Also, he was a member of the Cambridge Research Laboratory of HP from 2000 to 2002. He was chair of the program committee for Latin American Theoretical Informatics LATIN 2002 and for ACM Principles of Distributed Computing PODC 2003, and a member of the program committee of various international conferences such as ADHOC, DISC, ICDCS, IPDPS, LADC, PODC, and SIROCCO. His research interests are in the theory of distributed computing, especially issues related to coordination, complexity and computability, and fault-tolerance. He has also published in graph theory and algorithms. Overall, he has published over fifty papers in journals and international conferences. He runs the Distributed Computing Column of SIGACT News, the newsletter of the ACM Special Interest Group on Algorithms and Computation Theory. He has been editor of several special journal issues, such as the special 20th PODC anniversary issue of the Distributed Computing journal (with H. Attiya) and a Computer Networks special issue on algorithms.
M. Raynal has been a professor of computer science since 1981. At IRISA (CNRS-INRIA-University joint computing research laboratory located in Rennes), he founded a research group on Distributed Algorithms in 1983. His research interests include distributed algorithms, distributed computing systems, networks and dependability. His main interest lies in the fundamental principles that underlie the design and the construction of distributed computing systems.
He has been Principal Investigator of a number of research grants in these areas, and has been invited by many universities all over the world to give lectures on distributed algorithms and distributed computing. He belongs to the editorial board of several international journals. Professor Michel Raynal has published more than 90 papers in journals (JACM, Acta Informatica, Distributed Computing, Comm. of the ACM, Information and Computation, Journal of Computer and System Sciences, JPDC, IEEE Transactions on Computers, IEEE Transactions on SE, IEEE Transactions on KDE, IEEE Transactions on TPDS, IEEE Computer, IEEE Software, IPL, PPL, Theoretical Computer Science, Real-Time Systems Journal, The Computer Journal, etc.) and more than 190 papers in conferences (ACM STOC, ACM PODC, ACM SPAA, IEEE ICDCS, IEEE DSN, DISC, IEEE IPDPS, Europar, FST&TCS, IEEE SRDS, etc.). He has also written seven books devoted to parallelism, distributed algorithms and systems (MIT Press and Wiley). Michel Raynal has served in program committees for more than 70 international conferences (including ACM PODC, DISC, ICDCS, IPDPS, DSN, LADC, SRDS, SIROCCO, etc.) and chaired the program committee of more than 15 international conferences (including DISC twice, ICDCS, SIROCCO and ISORC). He served as the chair of the steering committee leading the DISC symposium series in 2002-2004. Michel Raynal received the IEEE ICDCS best paper award three times in a row: 1999, 2000 and 2001. He is a general co-chair of the IEEE ICDCS conference that will be held in Lisbon in 2006.

14.
For a complete network of n processors within which communication lines are private, we show how to achieve concurrently many Byzantine Agreements within constant expected time, both on synchronous and asynchronous networks. As an immediate consequence, this provides a solution to the Interactive Consistency problem. Our algorithms tolerate up to (n-1)/3 faulty processors in both the synchronous and asynchronous cases and are therefore resilient-optimal. In terms of time complexity, our results improve the time bound (for n concurrent agreements) that is immediately implied by the constant expected time Byzantine Agreement of Feldman and Micali (synchronous systems) and of Canetti and Rabin (asynchronous systems). In terms of resiliency, our results improve the resiliency bound of the constant-time algorithm of Ben-Or. An immediate application of our protocols is a constant expected time simulation of simultaneous broadcast channels over a network with private lines.
Received: April 2001, Accepted: September 2002. Michael Ben-Or: Research supported by Israel Academy of Sciences and by United States - Israel Binational Science Foundation grant BSF-87-00082.

15.
All the results given in the paper hold true. In the proof of Theorem 1, change steps IV, V, and VI to:
IV. for every , add to P;
V. for every , add , , , , , to P;
VI. for every , add to P.

16.
Coupling and self-stabilization
A randomized self-stabilizing algorithm is an algorithm that, whatever the initial configuration is, reaches a set of legal configurations in finite time with probability 1. The proof of convergence towards this set is generally done by exhibiting a potential function which measures the “vertical” distance of any configuration to the legal set, and which decreases with non-null probability at each step of the algorithm. We propose here a method, based on the notion of coupling, which makes use of a “horizontal” distance between any pair of configurations, such that this distance decreases in expectation at each step of the algorithm. In contrast with classical methods, our coupling method does not require knowledge of a potential function. In addition to the proof of convergence, the method allows us to assess the convergence rate according to two different measures. Proofs produced by the method are often simpler or give better upper bounds than their classical counterparts, as exemplified here on Herman's mutual exclusion and Iterated Prisoner's Dilemma algorithms in the case of cyclic graphs.
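Herman's randomized mutual-exclusion algorithm on an odd ring, one of the two examples cited, can be simulated as below (a process holds a token when its bit equals its left neighbour's). The simulation and parameter choices are illustrative only and are not part of the paper's coupling proof.

```python
import random

# Simulation of Herman's self-stabilizing token ring (odd ring size).
# Legal configurations are exactly those with a single token.
def herman_step(bits):
    new = bits[:]
    for i in range(len(bits)):
        if bits[i] == bits[i - 1]:          # token: flip a fair coin
            new[i] = random.randint(0, 1)
        else:                               # no token: copy the left neighbour
            new[i] = bits[i - 1]
    return new

def tokens(bits):
    return sum(bits[i] == bits[i - 1] for i in range(len(bits)))

bits = [random.randint(0, 1) for _ in range(9)]   # arbitrary initial configuration
steps = 0
while tokens(bits) != 1:                          # converges with probability 1
    bits = herman_step(bits)
    steps += 1
print(f"stabilized to one token after {steps} steps")
```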

17.
We consider distributed broadcasting in radio networks, modeled as undirected graphs, whose nodes have no information on the topology of the network, nor even on their immediate neighborhood. For randomized broadcasting, we give an algorithm working in expected time in n-node radio networks of diameter D, which is optimal, as it matches the lower bounds of Alon et al. [1] and Kushilevitz and Mansour [16]. Our algorithm improves the best previously known randomized broadcasting algorithm of Bar-Yehuda, Goldreich and Itai [3], running in expected time. (In fact, our result holds also in the setting of n-node directed radio networks of radius D.) For deterministic broadcasting, we show a lower bound on broadcasting time in n-node radio networks of diameter D. This implies the previously known lower bounds of Bar-Yehuda, Goldreich and Itai [3] and Bruschi and Del Pinto [5], and is sharper than any of them in many cases. We also give a deterministic algorithm that shrinks, for the first time, the gap between the upper and the lower bound on deterministic broadcasting time to a logarithmic factor.
Received: 1 August 2003, Accepted: 18 March 2005, Published online: 15 June 2005. Dariusz R. Kowalski: This work was done during the stay of Dariusz Kowalski at the Research Chair in Distributed Computing of the Université du Québec en Outaouais, as a postdoctoral fellow. Research supported in part by KBN grant 4T11C04425. Andrzej Pelc: Research of Andrzej Pelc was supported in part by NSERC discovery grant and by the Research Chair in Distributed Computing of the Université du Québec en Outaouais.
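In the radio model a node receives a message in a round only if exactly one of its neighbours transmits. A toy simulation of uniform-probability randomized broadcasting under this rule is sketched below; the fixed transmission probability is our simplification, not the algorithm of the paper.

```python
import random

# Toy simulation of randomized broadcasting in a radio network: each round,
# every informed node transmits with probability p; an uninformed node
# receives only if exactly one of its neighbours transmits (else collision
# or silence). Illustrative only; not the paper's optimal algorithm.
def broadcast_rounds(adj, source, p=0.5):
    informed = {source}
    rounds = 0
    while len(informed) < len(adj):       # assumes the graph is connected
        rounds += 1
        transmitting = {v for v in informed if random.random() < p}
        for v in adj:
            if v not in informed:
                if sum(1 for u in adj[v] if u in transmitting) == 1:
                    informed.add(v)
    return rounds

# Example: a path 0-1-2-3-4 (diameter 4).
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(broadcast_rounds(path, 0))
```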

18.
We present a new algorithm to compute motorcycle graphs. It runs in time, where n is the number of motorcycles. We give a new characterization of the straight skeleton of a nondegenerate polygon. For a polygon with n vertices and h holes, we show that it yields a randomized algorithm that reduces the straight skeleton computation to a motorcycle graph computation in expected time. Combining these results, we can compute the straight skeleton of a nondegenerate polygon with h holes and with n vertices, among which r are reflex vertices, in expected time. In particular, we can compute the straight skeleton of a nondegenerate polygon with n vertices in expected time.

19.
This paper investigates two preemptive semi-online scheduling problems to minimize makespan on two uniform machines. In the first semi-online problem, we know in advance that all jobs have their processing times between p and rp. In the second semi-online problem, we know the size of the largest job in advance. We design a semi-online algorithm that is optimal for every combination of machine speed ratio and job processing time ratio for the first problem, and an optimal semi-online algorithm for every machine speed ratio for the second problem.
Received: 2 December 2003, Published online: 16 January 2004. This research is supported by the Teaching and Research Award Program for Outstanding Young Teachers in Higher Education Institutions of MOE, China, and the National Natural Science Foundation of China (10271110).

20.
We study the problem of how resilient networks are to node faults. Specifically, we investigate the question of how many faults a network can sustain and still contain a large (i.e., linear-sized) connected component with approximately the same expansion as the original fault-free network. We use a pruning technique that culls away those parts of the faulty network that have poor expansion. The faults may occur at random or be caused by an adversary; our techniques apply in either case. In the adversarial setting we prove that for every network with given expansion, a large connected component with basically the same expansion as the original network exists for up to a constant times faults. We show this result is tight in the sense that every graph G of size n and uniform expansion can be broken into components of size o(n) with faults. Unlike the adversarial case, the expansion of a graph gives a very weak bound on its resilience to random faults. While it is the case, as before, that there are networks of uniform expansion that are not resilient against a fault probability of a constant times that expansion, it is also observed that there are networks of uniform expansion that are resilient against a constant fault probability. Thus, we introduce a different parameter, called the span of a graph, which gives us a more precise handle on the maximum fault probability. We use the span to show the first known results for the effect of random faults on the expansion of d-dimensional meshes.
