Similar Documents
20 similar documents found (search time: 890 ms)
1.
We examine the impact of evenness (all cuts having even free capacity) and local evenness (cuts that separate a single vertex having even free capacity) on homotopic knock-knee routing. Kaufmann and Mehlhorn have presented a linear-time algorithm for routing even instances. We show that routing locally even instances is NP-hard. If we are permitted to move modules slightly, however, then we can efficiently route any locally even instance in which the free capacity of every cut is nonnegative. This fact implies that locally even instances can be one-dimensionally compacted in polynomial time. But when the assumption of local evenness is dropped, routing again becomes NP-hard, whether or not modules may move. This work was supported in part by the Deutsche Forschungsgemeinschaft, Sonderforschungsbereich 124, Teilprojekt B2 (VLSI Entwurf und Parallelität), and in part by DIMACS (Center for Discrete Mathematics and Theoretical Computer Science), a National Science Foundation Science and Technology Center, Grant NSF-STC88-09648. Miller Maley was also supported by a Mathematical Sciences Postdoctoral Research Fellowship from the National Science Foundation, Grant DMS-8705835.

2.
We prove upper and lower bounds on the competitiveness of randomized algorithms for the list update problem of Sleator and Tarjan. We give a simple and elegant randomized algorithm that is more competitive than the best previous randomized algorithm due to Irani. Our algorithm uses randomness only during an initialization phase, and from then on runs completely deterministically. It is the first randomized competitive algorithm with this property to beat the deterministic lower bound. We generalize our approach to a model in which access costs are fixed but update costs are scaled by an arbitrary constant d. We prove lower bounds for deterministic list update algorithms and for randomized algorithms against oblivious and adaptive on-line adversaries. In particular, we show that for this problem adaptive on-line and adaptive off-line adversaries are equally powerful. A preliminary version of these results appeared in a joint paper with S. Irani in the Proceedings of the 2nd Symposium on Discrete Algorithms, 1991 [17]. This research was partially supported by NSF Grants CCR-8808949 and CCR-8958528. This research was partially supported by NSF Grant CCR-9009753. This research was supported in part by the National Science Foundation under Grant CCR-8658139, by DIMACS, a National Science Foundation Science and Technology Center, Grant No. NSF-STC88-09648.
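The abstract's key structural point is that randomness is confined to an initialization phase, after which the algorithm runs deterministically. A well-known strategy of exactly that shape assigns each list item a single random bit up front and thereafter reacts to accesses deterministically. The sketch below illustrates that idea; the specific rule used here (flip the accessed item's bit and move it to the front when the bit becomes 1) is an assumption for illustration, not necessarily the paper's exact algorithm.

```python
import random

class BitList:
    """Bit-based list-update sketch: one random bit per item, drawn once at
    initialization; every later step is deterministic. The move-to-front
    trigger (flipped bit equals 1) is an assumption for illustration."""

    def __init__(self, items, seed=None):
        rng = random.Random(seed)
        self.items = list(items)
        self.bit = {x: rng.randint(0, 1) for x in self.items}  # sole use of randomness

    def access(self, x):
        pos = self.items.index(x)          # 0-based position of the requested item
        self.bit[x] ^= 1                   # flip the item's bit
        if self.bit[x] == 1:               # deterministic rule from here on
            self.items.insert(0, self.items.pop(pos))
        return pos + 1                     # standard cost model: position accessed

# Example: total access cost over a short request sequence
lst = BitList("abcde", seed=1)
print(sum(lst.access(c) for c in "eadeb"))
```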

3.
We present a randomized algorithm for computing the kth smallest distance in a set of n points in the plane, based on the parametric search technique of Megiddo [Mel]. The expected running time of our algorithm is O(n^{4/3} log^{8/3} n). The algorithm can also be made deterministic, using a more complicated technique, with only a slight increase in its running time. A much simpler deterministic version of our procedure runs in time O(n^{3/2} log^{5/2} n). All versions improve the previously best-known upper bound of O(n^{9/5} log^{4/5} n) by Chazelle [Ch]. A simple O(n log n)-time algorithm for computing an approximation of the median distance is also presented. Part of this work was done while the first two authors were visiting DIMACS, Rutgers University, New Brunswick, NJ. Work by the first three authors has been partly supported by Office of Naval Research Grant N00014-87-K-0129, by National Science Foundation Grant DCR-83-20085, and by DIMACS (Center for Discrete Mathematics and Theoretical Computer Science), a National Science Foundation Science and Technology Center, Grant NSF-STC88-09648. Work by the second author has also been supported by National Security Agency Grant MDA 904-89-H-2030. Work by the third author has also been supported by National Science Foundation Grant CCR-89-01484, and by grants from the U.S.-Israeli Binational Science Foundation, and the Fund for Basic Research administered by the Israeli Academy of Sciences.
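For orientation, the baseline these bounds improve on is the obvious quadratic one: compute all pairwise distances and select the kth smallest. The sketch below is only that O(n^2 log n) baseline, not the parametric-search algorithm of the paper.

```python
import math
from itertools import combinations

def kth_smallest_distance(points, k):
    """Naive distance selection: enumerate all n*(n-1)/2 pairwise distances and
    sort them. A quadratic baseline for comparison, not the paper's algorithm."""
    dists = sorted(math.dist(p, q) for p, q in combinations(points, 2))
    return dists[k - 1]          # k is 1-based: k = 1 gives the closest-pair distance

# Unit-square corners: the six pairwise distances are 1, 1, 1, 1, sqrt(2), sqrt(2)
print(kth_smallest_distance([(0, 0), (0, 1), (1, 0), (1, 1)], 5))  # ~1.4142
```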

4.
We study lazy structure sharing as a tool for optimizing equivalence testing on complex data types. We investigate a number of strategies for implementing lazy structure sharing and provide upper and lower bounds on their performance (how quickly they effect ideal configurations of our data structure). In most cases when the strategies are applied to a restricted case of the problem, the bounds provide nontrivial improvements over the naïve linear-time equivalence-testing strategy that employs no optimization. Only one strategy, however, which employs path compression, seems promising for the most general case of the problem. Work completed while at Princeton University and supported by a Fannie and John Hertz Foundation Fellowship, National Science Foundation Grant No. CCR-8920505, and the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS) under NSF-STC-91-19999. Work completed while at Princeton University and DIMACS and supported by DIMACS under NSF-STC-91-19999. Research at Princeton University partially supported by the National Science Foundation, Grant No. CCR-8920505, the Office of Naval Research, Contract No. N00014-91-J-1463, and by DIMACS under NSF-STC-91-19999.

5.
We present two efficient algorithms for the minimum-cost flow problem in which arc costs are piecewise-linear and convex. Our algorithms are based on novel algorithms of Orlin, which were developed for the case of linear arc costs. Our first algorithm uses the Edmonds-Karp scaling technique. Its complexity is O(M log U (m + n log M)) for a network with n vertices, m arcs, M linear cost segments, and an upper bound U on the supplies and the capacities. The second algorithm is a strongly polynomial version of the first, and it uses Tardos's idea of contraction. Its complexity is O(M log M (m + n log M)). Both algorithms improve by a factor of at least M/m the complexity of directly applying existing algorithms to a transformed network in which arc costs are linear. The final stage of this work was performed while Ron Shamir was a visitor at DIMACS (Center for Discrete Mathematics and Theoretical Computer Science), Rutgers University. Supported in part by National Science Foundation Grant NSF-STC88-09648, and by Air Force Grants AFOSR-89-0512 and AFOSR-90-0008.
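The transformed network mentioned in the last sentence is the standard reduction that replaces each arc whose convex piecewise-linear cost has several segments by parallel arcs with linear costs, one per segment; convexity guarantees that cheaper segments fill first, so the optimal costs coincide. A minimal sketch of that reduction follows; the (breakpoint, slope) encoding and the function name are assumptions made for illustration, not the paper's notation.

```python
def expand_piecewise_linear_arc(u, v, breakpoints, slopes):
    """Replace one arc (u, v) whose convex piecewise-linear cost has M segments
    by M parallel arcs with linear costs. The i-th returned arc carries the
    capacity of segment i and its per-unit cost (slope)."""
    assert all(a <= b for a, b in zip(slopes, slopes[1:])), "cost must be convex"
    arcs, prev = [], 0
    for cap, slope in zip(breakpoints, slopes):
        arcs.append((u, v, cap - prev, slope))  # (tail, head, segment capacity, unit cost)
        prev = cap
    return arcs

# Example: cost with slope 1 up to 4 flow units and slope 3 from 4 to 10 units
print(expand_piecewise_linear_arc("s", "t", [4, 10], [1, 3]))
# [('s', 't', 4, 1), ('s', 't', 6, 3)]
```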

6.
We present an O(n^2 log^3 n) algorithm for the two-center problem, in which we are given a set S of n points in the plane and wish to find two closed disks whose union contains S so that the larger of the two radii is as small as possible. We also give an O(n^2 log^5 n) algorithm for solving the two-line-center problem, where we want to find two strips that cover S whose maximum width is as small as possible. The best previous solutions of both problems require O(n^3) time. Pankaj Agarwal has been supported by DIMACS (Center for Discrete Mathematics and Theoretical Computer Science), an NSF Science and Technology Center, under Grant STC-88-09648. Micha Sharir has been supported by the Office of Naval Research under Grants N00014-89-J-3042 and N00014-90-J-1284, by the National Science Foundation under Grant CCR-89-01484, by DIMACS, and by grants from the US-Israeli Binational Science Foundation, the Fund for Basic Research administered by the Israeli Academy of Sciences, and the G.I.F., the German-Israeli Foundation for Scientific Research and Development. A preliminary version of this paper has appeared in Proceedings of the Second Annual ACM-SIAM Symposium on Discrete Algorithms, 1991, pp. 449–458.

7.
A linear-time algorithm for finding an ambitus
We devise a linear-time algorithm for finding an ambitus in an undirected graph. An ambitus is a cycle in a graph containing two distinguished vertices such that certain different groups of bridges (called B_P-, B_Q-, and B_PQ-bridges) satisfy the property that a bridge in one group does not interlace with any bridge in the other groups. Thus, an ambitus allows the graph to be cut into pieces, where, in each piece, certain graph properties may be investigated independently and recursively, and then the pieces can be pasted together to yield information about these graph properties in the original graph. In order to achieve a good time-complexity for such an algorithm employing the divide-and-conquer paradigm, it is necessary to find an ambitus quickly. We also show that, using ambitus, linear-time algorithms can be devised for abiding-path-finding and nonseparating-induced-cycle-finding problems. The research of B. Mishra was supported in part by National Science Foundation Grants DMS-8703458 and CCR-9002819. R. E. Tarjan's research at Princeton University was partially supported by DIMACS, a National Science Foundation Science and Technology Center, Grant No. NSF-STC88-09648, and by National Science Foundation Grant CCR-8929505.

8.
Competitive randomized algorithms for nonuniform problems
Competitive analysis is concerned with comparing the performance of on-line algorithms with that of optimal off-line algorithms. In some cases randomization can lead to algorithms with improved performance ratios on worst-case sequences. In this paper we present new randomized on-line algorithms for snoopy caching and the spin-block problem. These algorithms achieve competitive ratios approaching e/(e-1) ≈ 1.58 against an oblivious adversary. These ratios are optimal and are a surprising improvement over the best possible ratio in the deterministic case, which is 2. We also consider the situation when the request sequences for these problems are generated according to an unknown probability distribution. In this case we show that deterministic algorithms that adapt to the observed request statistics also have competitive factors approaching e/(e-1). Finally, we obtain randomized algorithms for the 2-server problem on a class of isosceles triangles. These algorithms are optimal against an oblivious adversary and have competitive ratios that approach e/(e-1). This compares with the ratio of 3/2 that can be achieved on an equilateral triangle. Supported in part by the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS), an NSF Science and Technology Center funded under NSF Contract STC-88-09648 and supported by the New Jersey Commission on Science and Technology.
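For the spin-block problem, the classic way to approach the e/(e-1) ratio is to draw the spin threshold at random from the density e^{t/C}/(C(e-1)) on [0, C], where C is the blocking (context-switch) cost, and to block once that threshold is reached. The sketch below samples such a threshold by inverse-CDF and evaluates the cost of a single request; it is an illustrative rendering of this standard strategy under those assumptions, not code taken from the paper.

```python
import math
import random

def random_spin_threshold(C, rng=random):
    """Sample a spin threshold T in [0, C] with density e^{t/C} / (C(e-1)),
    via the inverse CDF t = C * ln(1 + u(e-1)). Spinning up to T and then
    blocking yields expected cost within about e/(e-1) of optimal against an
    oblivious adversary (illustrative sketch, not the paper's code)."""
    u = rng.random()
    return C * math.log(1.0 + u * (math.e - 1.0))

def spin_block_cost(wait_time, threshold, C):
    """Cost of one request: spin until the event arrives or the threshold is
    reached, then pay the blocking cost C."""
    return wait_time if wait_time <= threshold else threshold + C

# One simulated request in which the awaited event arrives after 0.7*C time units
C = 1.0
print(spin_block_cost(0.7 * C, random_spin_threshold(C), C))
```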

9.
We give a linear time and space algorithm for analyzing trees in planar graphs. The algorithm can be used to analyze the sensitivity of a minimum spanning tree to changes in edge costs, to find its replacement edges, and to verify its minimality. It can also be used to analyze the sensitivity of a single-source shortest-path tree to changes in edge costs, and to analyze the sensitivity of a minimum-cost network flow. The algorithm is simple and practical. It uses the properties of a planar embedding, combined with a heap-ordered queue data structure. This research was partially supported by Office of Naval Research Grant N00014-87-K-0467 and National Science Foundation Grant CCR-8610181. This research was done while the author was at the Department of Computer Science, Princeton University, Princeton, NJ 08544, USA. This research was done while the author was at the Department of Computer Science, Stanford University, Stanford, CA 94305, USA.
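The check underlying MST verification and sensitivity analysis is the path-maximum criterion: a spanning tree is minimum exactly when every non-tree edge weighs at least as much as the heaviest edge on the tree path between its endpoints, and that heaviest edge also tells how far an edge cost can move before the tree changes. The sketch below applies the criterion naively in quadratic time; it is for illustration only and is not the paper's linear-time planar algorithm. The (u, v, w) edge encoding is an assumption.

```python
from collections import defaultdict

def verify_mst(tree_edges, all_edges):
    """Quadratic-time MST verification via the path-maximum criterion: the tree
    is minimum iff every non-tree edge (u, v, w) satisfies w >= the heaviest
    edge weight on the tree path between u and v."""
    adj = defaultdict(list)
    for u, v, w in tree_edges:
        adj[u].append((v, w))
        adj[v].append((u, w))

    def max_on_tree_path(src, dst):
        # Walk the unique tree path from src to dst, tracking the heaviest edge.
        stack = [(src, None, 0.0)]
        while stack:
            node, parent, heaviest = stack.pop()
            if node == dst:
                return heaviest
            for nxt, w in adj[node]:
                if nxt != parent:
                    stack.append((nxt, node, max(heaviest, w)))
        raise ValueError("endpoints not connected in the tree")

    in_tree = {frozenset((u, v)) for u, v, _ in tree_edges}
    return all(w >= max_on_tree_path(u, v)
               for u, v, w in all_edges if frozenset((u, v)) not in in_tree)

# Triangle a-b-c: the tree {ab, bc} is minimum because the non-tree edge ac (3)
# is no lighter than the heaviest edge (2) on the tree path a-b-c.
print(verify_mst([("a", "b", 1), ("b", "c", 2)],
                 [("a", "b", 1), ("b", "c", 2), ("a", "c", 3)]))  # True
```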

10.
This paper determines upper bounds on the expected time complexity for a variety of parallel algorithms for undirected and directed random graph problems. For connectivity, biconnectivity, transitive closure, minimum spanning trees, and all pairs minimum cost paths, we prove the expected time to be O(log log n) for the CRCW PRAM (this parallel RAM machine allows resolution of write conflicts) and O(log n · log log n) for the CREW PRAM (which allows simultaneous reads but not simultaneous writes). We also show that the problem of graph isomorphism has expected parallel time O(log log n) for the CRCW PRAM and O(log n) for the CREW PRAM. Most of these results follow because of upper bounds on the mean depth of a graph, derived in this paper, for more general graphs than was known before. For undirected connectivity especially, we present a new probabilistic algorithm which runs on a randomized input and has an expected running time of O(log log n) on the CRCW PRAM, with O(n) expected number of processors only. Our results also improve known upper bounds on the expected space required for sequential graph algorithms. For example, we show that the problems of finding connected components, transitive closure, minimum spanning trees, and minimum cost paths have expected sequential space O(log n · log log n) on a deterministic Turing Machine. We use a simulation of the CRCW PRAM to get these expected sequential space bounds. This research was supported by National Science Foundation Grant DCR-85-03251 and Office of Naval Research Contract N00014-80-C-0647. This research was partially supported by the National Science Foundation Grants MCS-83-00630, DCR-8503497, by the Greek Ministry of Research and Technology, and by the ESPRIT Basic Research Actions Project ALCOM.

11.
M. Jerrum, U. Vazirani. Algorithmica, 1996, 16(4-5): 392-401
A new approximation algorithm for the permanent of an n × n 0,1-matrix is presented. The algorithm is shown to have worst-case time complexity exp(O(n^{1/2} log^2 n)). Asymptotically, this represents a considerable improvement over the best existing algorithm, which has worst-case time complexity exp(Ω(n)). Supported by SERC Grant GR/F 90363; work done in part while visiting DIMACS (Center for Discrete Mathematics and Computer Science). Supported by an NSF PYI grant, with matching equipment grant from the AT&T Foundation; work done in part while visiting DIMACS.

12.
The search for good lineal, or depth-first, spanning trees is an important aspect in the implementation of a wide assortment of graph algorithms. We consider the complexity of finding optimal lineal spanning trees under various notions of optimality. In particular, we show that several natural problems, such as constructing a shortest or a tallest lineal tree, are NP-hard. We also address the issue of polynomial-time, near-optimization strategies for these difficult problems, showing that efficient absolute approximation algorithms cannot exist unless P = NP. This author's research was supported in part by the Sandia University Research Program and by the National Science Foundation under Grant MIP-8603879. This author's research was supported in part by the National Science Foundation under Grants ECS-8403859 and MIP-8603879.
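To make the objects being optimized concrete: a lineal (depth-first) spanning tree depends on the order in which neighbours are explored, and its height is what the shortest and tallest variants optimize. The sketch below merely builds one such tree under an arbitrary neighbour order and reports its height; the NP-hard problems above concern optimizing over all such trees, which this sketch does not attempt.

```python
def dfs_tree_height(adj, root):
    """Build one lineal (depth-first) spanning tree rooted at `root` and return
    its height, i.e. the number of edges on the longest root-to-leaf path.
    `adj` maps each vertex to an iterable of neighbours (an assumed encoding);
    the iteration order of adj[v] is the arbitrary tie-breaking rule."""
    seen = set()

    def dfs(v):
        seen.add(v)
        return 1 + max((dfs(w) for w in adj[v] if w not in seen), default=0)

    return dfs(root) - 1

# On the 4-cycle a-b-c-d, every DFS from 'a' yields a path, so the height is 3
print(dfs_tree_height({'a': 'bd', 'b': 'ac', 'c': 'bd', 'd': 'ca'}, 'a'))  # 3
```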

13.
A weaving W is a simple arrangement of lines (or line segments) in the plane together with a binary relation specifying which line is above the other. A system of lines (or line segments) in 3-space is called a realization of W, if its projection into the plane is W and the above-below relations between the lines respect the specifications. Two weavings are equivalent if the underlying arrangements of lines are combinatorially equivalent and the above-below relations are the same. An equivalence class of weavings is said to be a weaving pattern. A weaving pattern is realizable if at least one element of the equivalence class has a three-dimensional realization. A weaving (pattern) W is called perfect if, along each line (line segment) of W, the lines intersecting it are alternately above and below. We prove that (i) a perfect weaving pattern of n lines is realizable if and only if n ≤ 3, (ii) a perfect m by n weaving pattern of line segments (in a grid-like fashion) is realizable if and only if min(m, n) ≤ 3, (iii) if n is sufficiently large, then almost all weaving patterns of n lines are nonrealizable. János Pach has been supported in part by Hungarian NFSR Grant 1812, NSF Grant CCR-8901484, and the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS), a National Science Foundation Science and Technology Center, under NSF Grant STC88-09648. Richard Pollack has been supported in part by NSA Grant MDA904-89-H-2030, NSF Grants DMS-85-01947 and CCR-8901484, and DIMACS. Emo Welzl has been supported in part by the ESPRIT II Basic Research Actions Program of the EC under Contract No. 3075 (project ALCOM) and DIMACS.

14.
Speech act theory is important not only in Linguistics, but also in Computer Science. It has applications in Distributed Computing, Distributed Artificial Intelligence, Natural Language Processing, and Electronic Data Interchange protocols. While much research into speech acts has been done, one aspect of them that has largely been ignored is their semantics, i.e. their conditions of satisfaction. A formal semantics for speech acts is motivated and presented here that relates their satisfaction to the intentions, know-how, and actions of the participating agents. This makes it possible to state several potentially useful constraints on communication and provides a basis for checking their consistency. An earlier version of this paper was presented at the Second Meeting on the Mathematics of Language, Tarrytown, NY, May 1991. Some parts of this paper overlap with [23]. This research was supported by the National Science Foundation (through Grant No. IRI-8945845 to the Center for Cognitive Science, University of Texas, Austin) and by the Microelectronics and Computer Technology Corporation, Austin. The author is indebted to the anonymous referees for comments.

15.
Wang RL, Tang Z, Cao QP. Neural Computation, 2003, 15(7): 1605-1619
In this article, we present a solution to the maximum clique problem using a gradient-ascent learning algorithm of the Hopfield neural network. This method provides a near-optimum parallel algorithm for finding a maximum clique. To do this, we use the Hopfield neural network to generate a near-maximum clique and then modify weights in a gradient-ascent direction to allow the network to escape from the state of near-maximum clique to maximum clique or better. The proposed parallel algorithm is tested on two types of random graphs and some benchmark graphs from the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS). The simulation results show that the proposed learning algorithm can find good solutions in reasonable computation time.
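As a rough illustration of the escape idea just described (settle into a maximal clique, then adjust weights so the search is pushed out of that state and may later reach a larger clique), here is a heavily simplified sketch. The greedy settling step, the per-vertex penalty update, and all constants are assumptions made for illustration; the paper's actual Hopfield energy function and update dynamics are not reproduced here.

```python
import random

def clique_search_with_penalties(adj, iters=200, seed=0):
    """Simplified stand-in for the settle-then-escape scheme: repeatedly build a
    maximal clique greedily (playing the role of the network settling), record
    the best clique seen, then raise penalties on the vertices just used so
    later rounds prefer different vertices. `adj` maps vertex -> set of
    neighbours; the penalty rule and constants are illustrative assumptions."""
    rng = random.Random(seed)
    vertices = list(adj)
    penalty = {v: 0.0 for v in vertices}
    best = set()
    for _ in range(iters):
        order = sorted(vertices, key=lambda v: (penalty[v], rng.random()))
        clique = set()
        for v in order:
            if clique <= adj[v]:        # v is adjacent to every current clique member
                clique.add(v)
        if len(clique) > len(best):
            best = set(clique)
        for v in clique:                # "escape" step: penalize the vertices just used
            penalty[v] += 1.0
    return best

# 5-cycle 0-1-2-3-4 plus the chord 0-2; the maximum clique is {0, 1, 2}
g = {0: {1, 2, 4}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {0, 3}}
print(clique_search_with_penalties(g))   # typically {0, 1, 2}
```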

16.
M. Luby, B. Veličković. Algorithmica, 1996, 16(4-5): 415-433
We develop several quasi-polynomial-time deterministic algorithms for approximating the fraction of truth assignments that satisfy a disjunctive normal form formula. The most efficient algorithm computes, for a given DNF formula F on n variables with m clauses and ε > 0, an estimate Y such that |Pr[F] − Y| ≤ ε, in time that is quasi-polynomial for any constant ε. Although the algorithms themselves are deterministic, their analysis is probabilistic and uses the notion of limited independence between random variables. Research supported in part by National Science Foundation Operating Grant CCR-9016468, National Science Foundation Operating Grant CCR-9304722, United States-Israel Binational Science Foundation Grant No. 89-00312, United States-Israel Binational Science Foundation Grant No. 92-00226, and ESPRIT Basic Research Grant EC-US 030. Research partially done while visiting the International Computer Science Institute and while at Carnegie Mellon University.
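For contrast, the straightforward randomized estimator of Pr[F] is plain Monte Carlo sampling of truth assignments, whose additive error shrinks roughly like 1/sqrt(samples); the deterministic algorithms described above are meant to replace such random sampling, and their analysis rests on limited independence. The sketch below is only this randomized baseline, with an assumed signed-literal clause encoding, not the paper's algorithm.

```python
import random

def dnf_probability_estimate(clauses, n, samples=100_000, seed=0):
    """Naive Monte Carlo estimate of Pr[F] for a DNF formula F over n variables.
    A clause is a list of signed literals: +i requires variable i to be true,
    -i requires it to be false (variables are numbered 1..n). This encoding and
    the estimator itself are illustrative; it is not the paper's algorithm."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        assignment = [None] + [rng.random() < 0.5 for _ in range(n)]
        if any(all(assignment[abs(l)] == (l > 0) for l in clause) for clause in clauses):
            hits += 1
    return hits / samples

# F = (x1 AND x2) OR (NOT x3): Pr[F] = 1/4 + 1/2 - 1/8 = 0.625
print(dnf_probability_estimate([[1, 2], [-3]], n=3))   # close to 0.625
```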

17.
This paper presents quasi-optimal upper bounds for simplex range searching. The problem is to preprocess a set P of n points in R^d so that, given any query simplex q, the points in P ∩ q can be counted or reported efficiently. If m units of storage are available (n < m < n^d), then we show that it is possible to answer any query in O(n^{1+ε}/m^{1/d}) query time after O(m^{1+ε}) preprocessing. This bound, which holds on a RAM or a pointer machine, is almost tight. We also show how to achieve O(log n) query time at the expense of O(n^{d+ε}) storage, for any fixed ε > 0. To fine-tune our results in the reporting case we also establish new zone theorems for arrangements and merged arrangements of planes in 3-space, which are of independent interest. A preliminary version of this paper has appeared in the Proceedings of the Sixth Annual ACM Symposium on Computational Geometry, June 1990, pp. 23–33. Work on this paper by Bernard Chazelle has been supported by NSF Grant CCR-87-00917 and NSF Grant CCR-90-02352. Work on this paper by Micha Sharir has been supported by Office of Naval Research Grant N00014-87-K-0129, by National Science Foundation Grants DCR-83-20085 and CCR-8901484, and by grants from the U.S.-Israeli Binational Science Foundation, the NCRD (the Israeli National Council for Research and Development), and the Fund for Basic Research administered by the Israeli Academy of Sciences. Work by Emo Welzl has been supported by Deutsche Forschungsgemeinschaft Grant We 1265/1-2. Micha Sharir and Emo Welzl have also been supported by a grant from the German-Israeli Binational Science Foundation. Last but not least, all authors thank DIMACS, an NSF Science and Technology Center, for additional support under Grant STC-88-09648.

18.
We consider the problems of enumerating all minimal strongly connected subgraphs and all minimal dicuts of a given strongly connected directed graph G = (V, E). We show that the first of these problems can be solved in incremental polynomial time, while the second problem is NP-hard: given a collection of minimal dicuts for G, it is NP-hard to tell whether it can be extended. The latter result implies, in particular, that for a given set A of points, it is NP-hard to generate all maximal subsets of A contained in a closed half-space through the origin. We also discuss the enumeration of all minimal subsets of A whose convex hull contains the origin as an interior point, and show that this problem includes as a special case the well-known hypergraph transversal problem. This research was supported by the National Science Foundation (Grant IIS-0118635). The third and fourth authors are also grateful for the partial support by DIMACS, the National Science Foundation's Center for Discrete Mathematics and Theoretical Computer Science. Our friend and co-author, Leonid Khachiyan, tragically passed away on April 29, 2005.

19.
Computing the intersection-depth of polyhedra
Given two intersecting polyhedra P, Q and a direction d, find the smallest translation of Q along d that renders the interiors of P and Q disjoint. The same problem can also be posed without specifying the direction, in which case the minimum translation over all directions is sought. These are fundamental problems that arise in robotics and computer vision. We develop techniques for implicitly building and searching convolutions and apply them to derive efficient algorithms for these problems. The work of this author was partially supported by National Science Foundation Grant CCR90-02352. The work of this author was partially supported by Grant A3583 from the Natural Sciences and Engineering Research Council of Canada.

20.
The pairing heap: A new form of self-adjusting heap
Recently, Fredman and Tarjan invented a new, especially efficient form of heap (priority queue) called the Fibonacci heap. Although theoretically efficient, Fibonacci heaps are complicated to implement and not as fast in practice as other kinds of heaps. In this paper we describe a new form of heap, called the pairing heap, intended to be competitive with the Fibonacci heap in theory and easy to implement and fast in practice. We provide a partial complexity analysis of pairing heaps. Complete analysis remains an open problem. Research partially supported by National Science Foundation Grant MCS 82-04031 and by Bell Communications Research. Research partially supported by National Science Foundation Grant DCR 85-14922.
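To show how simple the structure is, here is a minimal min-heap sketch of a pairing heap: insert and meld are single link operations, and delete-min recombines the orphaned subtrees with the standard two-pass pairing rule. It is a textbook-style illustration written for this summary (decrease-key is omitted), not the authors' code.

```python
class PairingHeap:
    """Minimal pairing-heap sketch (min-heap): a heap-ordered multiway tree in
    which insert and meld are single comparisons and delete-min uses two-pass
    pairing. Illustrative only; decrease-key is omitted."""

    class _Node:
        __slots__ = ("key", "children")
        def __init__(self, key):
            self.key = key
            self.children = []

    def __init__(self):
        self.root = None

    @staticmethod
    def _link(a, b):
        # Heap-ordered link: the root with the larger key becomes a child of the other.
        if a is None: return b
        if b is None: return a
        if b.key < a.key:
            a, b = b, a
        a.children.append(b)
        return a

    def insert(self, key):
        self.root = self._link(self.root, self._Node(key))

    def find_min(self):
        return self.root.key

    def delete_min(self):
        smallest = self.root.key
        kids = self.root.children
        # Two-pass pairing: link children in pairs left to right,
        # then fold the resulting heaps together from right to left.
        paired = [self._link(kids[i], kids[i + 1] if i + 1 < len(kids) else None)
                  for i in range(0, len(kids), 2)]
        new_root = None
        for h in reversed(paired):
            new_root = self._link(new_root, h)
        self.root = new_root
        return smallest

# Usage: repeated delete_min returns the keys in sorted order
h = PairingHeap()
for x in (5, 1, 4, 2, 3):
    h.insert(x)
print([h.delete_min() for _ in range(5)])   # [1, 2, 3, 4, 5]
```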
