Similar Documents
20 similar documents found.
1.
B. Mishra, R. E. Tarjan. Algorithmica, 1992, 7(1-6): 521-554
We devise a linear-time algorithm for finding an ambitus in an undirected graph. An ambitus is a cycle in a graph containing two distinguished vertices such that certain different groups of bridges (called B_P-, B_Q-, and B_PQ-bridges) satisfy the property that a bridge in one group does not interlace with any bridge in the other groups. Thus, an ambitus allows the graph to be cut into pieces, where, in each piece, certain graph properties may be investigated independently and recursively, and then the pieces can be pasted together to yield information about these graph properties in the original graph. In order to achieve a good time complexity for such an algorithm employing the divide-and-conquer paradigm, it is necessary to find an ambitus quickly. We also show that, using an ambitus, linear-time algorithms can be devised for the abiding-path-finding and nonseparating-induced-cycle-finding problems.

2.
A key problem in computational geometry is the identification of subsets of a point set having particular properties. We study this problem for the properties of convexity and emptiness. We show that finding empty triangles is related to the problem of determining pairs of vertices that see each other in a star-shaped polygon. A linear-time algorithm for this problem, which is of independent interest, yields an optimal algorithm for finding all empty triangles. This result is then extended to an algorithm for finding empty convex r-gons (r > 3) and for determining a largest empty convex subset. Finally, extensions to higher dimensions are mentioned. The first author is pleased to acknowledge support by the National Science Foundation under Grant CCR-8700917. The research of the second author was supported by Amoco Foundation Faculty Development Grant CS 1-6-44862 and by the National Science Foundation under Grant CCR-8714565.
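To make the notion of emptiness concrete, here is a naive baseline that enumerates all empty triangles by brute force in roughly O(n^4) time; it is only an illustration of the problem statement, not the optimal algorithm described above, and it assumes exact (e.g., integer) coordinates so the orientation tests are reliable.

```python
from itertools import combinations

def orient(a, b, c):
    """Signed area of triangle abc: > 0 ccw, < 0 cw, 0 collinear."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def point_in_triangle(p, a, b, c):
    """True if p lies inside or on the boundary of triangle abc."""
    d1, d2, d3 = orient(a, b, p), orient(b, c, p), orient(c, a, p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

def empty_triangles(points):
    """All triangles with vertices in `points` containing no other point of
    the set (naive O(n^4) baseline)."""
    result = []
    for a, b, c in combinations(points, 3):
        if orient(a, b, c) == 0:          # skip degenerate (collinear) triples
            continue
        if not any(point_in_triangle(p, a, b, c)
                   for p in points if p not in (a, b, c)):
            result.append((a, b, c))
    return result

print(len(empty_triangles([(0, 0), (4, 0), (0, 4), (1, 1)])))   # 3
```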

3.
In this paper we give parallel algorithms for a number of problems defined on point sets and polygons. All our algorithms have optimal T(n) * P(n) products, where T(n) is the time complexity and P(n) is the number of processors used, and are for the EREW PRAM or CREW PRAM models. Our algorithms provide parallel analogues to well-known phenomena from sequential computational geometry, such as the fact that problems for polygons can oftentimes be solved more efficiently than point-set problems, and that nearest-neighbor problems can be solved without explicitly constructing a Voronoi diagram. The research of R. Cole was supported in part by NSF Grants CCR-8702271, CCR-8902221, and CCR-8906949, by ONR Grant N00014-85-K-0046, and by a John Simon Guggenheim Memorial Foundation fellowship. M. T. Goodrich's research was supported by the National Science Foundation under Grant CCR-8810568 and by the National Science Foundation and DARPA under Grant CCR-8908092.

4.
We study lazy structure sharing as a tool for optimizing equivalence testing on complex data types. We investigate a number of strategies for implementing lazy structure sharing and provide upper and lower bounds on their performance (how quickly they effect ideal configurations of our data structure). In most cases, when the strategies are applied to a restricted case of the problem, the bounds provide nontrivial improvements over the naïve linear-time equivalence-testing strategy that employs no optimization. Only one strategy, however, which employs path compression, seems promising for the most general case of the problem. Work completed while at Princeton University and supported by a Fannie and John Hertz Foundation Fellowship, National Science Foundation Grant No. CCR-8920505, and the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS) under NSF-STC-91-19999. Work completed while at Princeton University and DIMACS and supported by DIMACS under NSF-STC-91-19999. Research at Princeton University partially supported by the National Science Foundation, Grant No. CCR-8920505, the Office of Naval Research, Contract No. N00014-91-J-1463, and by DIMACS under NSF-STC-91-19999.
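For illustration of the path-compression idea mentioned above, here is a standard union-find (disjoint-set) structure with path compression and union by rank; it is the classic textbook structure, not the paper's lazy-structure-sharing strategy itself.

```python
class DisjointSets:
    """Union-find with path compression and union by rank."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path compression: point every node on the search path at the root.
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return True

ds = DisjointSets(5)
ds.union(0, 1); ds.union(1, 2)
print(ds.find(0) == ds.find(2))   # True: 0 and 2 are now in one class
```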

5.
We present the first optimal parallel algorithms for the verification and sensitivity analysis of minimum spanning trees. Our algorithms are deterministic, run in O(log n) time, and require linear work in the CREW PRAM model. These algorithms are used as a subroutine in the linear-work randomized algorithm for finding minimum spanning trees of Cole, Klein, and Tarjan. Research partially supported by a National Science Foundation Graduate Fellowship and by DIMACS (Center for Discrete Mathematics and Theoretical Computer Science), a National Science Foundation Science and Technology Center, Grant No. NSF-STC88-09648. Research at Princeton University was partially supported by the National Science Foundation, Grant No. CCR-8920505, the Office of Naval Research, Contract No. N00014-91-J-1463, and by DIMACS (Center for Discrete Mathematics and Theoretical Computer Science), a National Science Foundation Science and Technology Center, Grant No. NSF-STC88-09648.
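As a point of reference for what "verification" means here, the following sequential sketch checks the standard cycle condition: a spanning tree is minimum exactly when every non-tree edge is at least as heavy as the heaviest tree edge on the path between its endpoints. The function names and edge representation are illustrative, and the O(nm) cost is far from the linear-work parallel bound above.

```python
import collections

def verify_mst(tree_edges, all_edges):
    """Check that `tree_edges` is a minimum spanning tree of the graph given
    by `all_edges` (each edge is (u, v, w)), using the cycle condition.
    Naive O(n * m) sequential baseline."""
    adj = collections.defaultdict(list)
    for u, v, w in tree_edges:
        adj[u].append((v, w))
        adj[v].append((u, w))

    def max_on_path(s, t):
        # BFS over the tree, tracking the heaviest edge seen so far.
        best = {s: float("-inf")}
        queue = collections.deque([s])
        while queue:
            x = queue.popleft()
            if x == t:
                return best[x]
            for y, w in adj[x]:
                if y not in best:
                    best[y] = max(best[x], w)
                    queue.append(y)
        return float("inf")   # t unreachable: not a spanning tree

    tree_set = {frozenset((u, v)) for u, v, _ in tree_edges}
    return all(w >= max_on_path(u, v)
               for u, v, w in all_edges
               if frozenset((u, v)) not in tree_set)

edges = [(0, 1, 1), (1, 2, 2), (0, 2, 5), (2, 3, 1)]
tree = [(0, 1, 1), (1, 2, 2), (2, 3, 1)]
print(verify_mst(tree, edges))   # True
```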

6.
We describe a method that extends the lexicographic recursive path ordering of Dershowitz and Kamin and Levy for proving termination of associative-commutative (AC) rewrite systems. Instead of comparing the arguments of an AC-operator using the multiset extension, we partition them into disjoint subsets and use each subset only once for comparison. To preserve transitivity, we introduce two techniques, pseudocopying and elevating of arguments of an AC operator. This method imposes no restrictions at all on the underlying precedence relation on function symbols. It can therefore prove termination of a much more extensive class of AC rewrite systems than can previous methods, such as the associative path ordering, that restrict AC operators to be minimal or subminimal in precedence. A number of examples illustrating the power of the approach are discussed. The method has been implemented in RRL, Rewrite Rule Laboratory, a theorem-proving environment based on rewrite techniques and completion. A preliminary version appears in Proc. of the 10th Conference on Foundations of Software Technology and Theoretical Computer Science, Bangalore, India (1990). Partially supported by the National Science Foundation Grant No. CCR-8906678. Partially supported by the National Science Foundation Grant No. CCR-9009755. Partially supported by the National Science Foundation under Grants CCR-9202838 and CCR-9357851.

7.
We give the first linear-time algorithm for computing single-source shortest paths in a weighted interval or circular-arc graph, when we are given the model of that graph, i.e., the actual weighted intervals or circular arcs and the sorted list of the interval endpoints. Our algorithm solves this problem optimally in O(n) time, where n is the number of intervals or circular arcs in a graph. An immediate consequence of our result is an O(qn + n log n)-time algorithm for the minimum-weight circle-cover problem, where q is the minimum number of arcs crossing any point on the circle; the n log n term in this time complexity is from a preprocessing sorting step when the sorted list of endpoints is not given as part of the input. The previously best time bounds were O(n log n) for this shortest paths problem, and O(qn log n) for the minimum-weight circle-cover problem. Thus we improve the bounds of both problems. More importantly, the techniques we give hold the promise of achieving similar (log n)-factor improvements in other problems on such graphs. The research of M. J. Atallah was supported in part by the Leonardo Fibonacci Institute, Trento, Italy, by the Air Force Office of Scientific Research under Contract AFOSR-90-0107, and by the National Science Foundation under Grant CCR-9202807. D. Z. Chen's research was supported in part by the Leonardo Fibonacci Institute, Trento, Italy. The research of D. T. Lee was supported in part by the Leonardo Fibonacci Institute, Trento, Italy, by the National Science Foundation, and the Office of Naval Research under Grants CCR-8901815, CCR-9309743, and N00014-93-1-0272.
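As a baseline that ignores the sorted-endpoint structure exploited above, one can build the interval graph explicitly and run Dijkstra's algorithm; the sketch below does this in O(n^2 log n) time, with a per-interval weight model assumed purely for illustration.

```python
import heapq

def interval_shortest_paths(intervals, weights, source):
    """Single-source shortest paths in an interval graph built explicitly.

    intervals: list of (left, right) pairs; two vertices are adjacent iff
    their intervals overlap.  weights: per-interval cost paid on entering an
    interval (weight model assumed for illustration).  O(n^2 log n) overall,
    far from the O(n) algorithm described above.
    """
    n = len(intervals)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            (l1, r1), (l2, r2) = intervals[i], intervals[j]
            if l1 <= r2 and l2 <= r1:          # overlap test
                adj[i].append(j)
                adj[j].append(i)

    dist = [float("inf")] * n
    dist[source] = weights[source]
    heap = [(dist[source], source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v in adj[u]:
            nd = d + weights[v]
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

print(interval_shortest_paths([(0, 2), (1, 4), (3, 6), (5, 8)],
                              [1, 2, 1, 3], 0))   # [1, 3, 4, 7]
```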

8.
We present a randomized algorithm for computing the kth smallest distance in a set of n points in the plane, based on the parametric search technique of Megiddo [Mel]. The expected running time of our algorithm is O(n^{4/3} log^{8/3} n). The algorithm can also be made deterministic, using a more complicated technique, with only a slight increase in its running time. A much simpler deterministic version of our procedure runs in time O(n^{3/2} log^{5/2} n). All versions improve the previously best-known upper bound of O(n^{9/5} log^{4/5} n) by Chazelle [Ch]. A simple O(n log n)-time algorithm for computing an approximation of the median distance is also presented. Part of this work was done while the first two authors were visiting DIMACS, Rutgers University, New Brunswick, NJ. Work by the first three authors has been partly supported by Office of Naval Research Grant N00014-87-K-0129, by National Science Foundation Grant DCR-83-20085, and by DIMACS (Center for Discrete Mathematics and Theoretical Computer Science), a National Science Foundation Science and Technology Center, NSF-STC88-09648. Work by the second author has also been supported by National Security Agency Grant MDA 904-89-H-2030. Work by the third author has also been supported by National Science Foundation Grant CCR-89-01484, and by grants from the U.S.-Israeli Binational Science Foundation, and the Fund for Basic Research administered by the Israeli Academy of Sciences.
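For contrast with the subquadratic bounds above, the obvious baseline materializes all pairwise distances and selects the kth smallest; the short sketch below does exactly that in O(n^2 log n) time.

```python
import math
from itertools import combinations

def kth_smallest_distance(points, k):
    """Return the k-th smallest of the n*(n-1)/2 pairwise distances
    (1-indexed).  Naive O(n^2 log n) baseline; the algorithm described above
    avoids materializing all pairs."""
    dists = sorted(math.dist(p, q) for p, q in combinations(points, 2))
    return dists[k - 1]

pts = [(0, 0), (3, 0), (0, 4)]
print(kth_smallest_distance(pts, 2))   # 4.0: the distances are 3, 4, 5
```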

9.
The slice of a program with respect to a component c is a projection of the program that includes all components that might affect (either directly or transitively) the values of the variables used at c. Slices can be extracted particularly easily from a program representation called a program dependence graph, originally introduced as an intermediate program representation for performing optimizing, vectorizing, and parallelizing transformations. This paper presents a linear-time algorithm for determining whether two slices of a program dependence graph are isomorphic. This work was supported in part by a David and Lucile Packard Fellowship for Science and Engineering, by the National Science Foundation under grants DCR-8552602 and CCR-8958530, by the Defense Advanced Research Projects Agency, monitored by the Office of Naval Research under contract N00014-88-K-0590, and by grants from IBM, DEC, Xerox, 3M, Eastman Kodak, and the Cray Research Foundation.
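The slice itself is easy to state operationally: it is the set of components from which the chosen component is reachable along dependence edges. The sketch below computes such a backward slice over a hypothetical adjacency-map representation; the isomorphism test that is the paper's actual contribution is not shown.

```python
def backward_slice(dependences, c):
    """Slice of a program dependence graph with respect to component c: all
    components from which c is reachable along dependence edges, plus c
    itself.  `dependences` maps each component to its direct data/control
    predecessors (the components it depends on)."""
    slice_set, stack = {c}, [c]
    while stack:
        node = stack.pop()
        for pred in dependences.get(node, ()):
            if pred not in slice_set:
                slice_set.add(pred)
                stack.append(pred)
    return slice_set

# s1: x = 1; s2: y = 2; s3: z = x + 1 -- the slice w.r.t. s3 omits s2.
deps = {"s3": ["s1"], "s1": [], "s2": []}
print(sorted(backward_slice(deps, "s3")))   # ['s1', 's3']
```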

10.
We prove upper and lower bounds on the competitiveness of randomized algorithms for the list update problem of Sleator and Tarjan. We give a simple and elegant randomized algorithm that is more competitive than the best previous randomized algorithm due to Irani. Our algorithm uses randomness only during an initialization phase, and from then on runs completely deterministically. It is the first randomized competitive algorithm with this property to beat the deterministic lower bound. We generalize our approach to a model in which access costs are fixed but update costs are scaled by an arbitrary constant d. We prove lower bounds for deterministic list update algorithms and for randomized algorithms against oblivious and adaptive on-line adversaries. In particular, we show that for this problem adaptive on-line and adaptive off-line adversaries are equally powerful. A preliminary version of these results appeared in a joint paper with S. Irani in the Proceedings of the 2nd Symposium on Discrete Algorithms, 1991 [17]. This research was partially supported by NSF Grants CCR-8808949 and CCR-8958528. This research was partially supported by NSF Grant CCR-9009753. This research was supported in part by the National Science Foundation under Grant CCR-8658139, by DIMACS, a National Science Foundation Science and Technology Center, Grant No. NSF-STC88-09648.
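For context, the classic deterministic strategy for this problem is move-to-front, which is 2-competitive; the sketch below only illustrates the standard cost model (pay the position of the accessed item, then move it to the front for free), not the paper's randomized algorithm.

```python
def move_to_front_cost(initial_list, requests):
    """Serve a request sequence with the move-to-front rule and return the
    total access cost (position of each requested item, 1-indexed, as in the
    standard list-update cost model)."""
    lst = list(initial_list)
    total = 0
    for x in requests:
        pos = lst.index(x)           # 0-indexed position of the item
        total += pos + 1             # pay the access cost
        lst.insert(0, lst.pop(pos))  # move the accessed item to the front
    return total

print(move_to_front_cost(["a", "b", "c"], ["c", "c", "b", "a"]))   # 10
```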

11.
We consider a variety of problems on the interaction between two sets of line segments in two and three dimensions. These problems range from counting the number of intersecting pairs between m blue segments and n red segments in the plane (assuming that two line segments are disjoint if they have the same color) to finding the smallest vertical distance between two nonintersecting polyhedral terrains in three-dimensional space. We solve these problems efficiently by using a variant of the segment tree. For the three-dimensional problems we also apply a variety of recent combinatorial and algorithmic techniques involving arrangements of lines in three-dimensional space, as developed in a companion paper. Work on this paper by the first author has been supported in part by the National Science Foundation under Grant CCR-9002352. Work by the second author was supported in part by the National Science Foundation under Grant CCR-8714565. The fourth author has been supported in part by the Office of Naval Research under Grant N0014-87-K-0129, by the National Science Foundation under Grant NSF-DCR-83-20085, by grants from the Digital Equipment Corporation and the IBM Corporation, and by a grant from the US-Israeli Binational Science Foundation.
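A naive O(mn) count of intersecting red-blue pairs, using the standard orientation-based crossing test and assuming general position (no shared endpoints or collinear configurations), shows what the segment-tree method computes far more efficiently.

```python
def orient(a, b, c):
    """Sign of the cross product (b - a) x (c - a)."""
    v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (v > 0) - (v < 0)

def segments_cross(p1, p2, q1, q2):
    """Proper intersection test; assumes general position (no collinear
    triples, no shared endpoints)."""
    return (orient(p1, p2, q1) != orient(p1, p2, q2) and
            orient(q1, q2, p1) != orient(q1, q2, p2))

def count_red_blue_intersections(red, blue):
    """Naive O(m * n) count of intersecting red-blue pairs."""
    return sum(segments_cross(r1, r2, b1, b2)
               for (r1, r2) in red for (b1, b2) in blue)

red = [((0, 0), (4, 4))]
blue = [((0, 4), (4, 0)), ((5, 0), (6, 1))]
print(count_red_blue_intersections(red, blue))   # 1
```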

12.
Given a planar set S of n points, maxdominance problems consist of computing, for every p ∈ S, some function of the maxima of the subset of S that is dominated by p. A number of geometric and graph-theoretic problems can be formulated as maxdominance problems, including the problem of computing a minimum independent dominating set in a permutation graph, the related problem of finding the shortest maximal increasing subsequence, the problem of enumerating restricted empty rectangles, and the related problem of computing the largest empty rectangle. We give an algorithm for optimally solving a class of maxdominance problems. A straightforward application of our algorithm yields improved time bounds for the above-mentioned problems. The techniques used in the algorithm are of independent interest, and include a linear-time tree computation that is likely to arise in other contexts. The research of this author was supported by the Office of Naval Research under Grants N00014-84-K-0502 and N00014-86-K-0689, and the National Science Foundation under Grant DCR-8451393, with matching funds from AT&T. This author's research was supported by the National Science Foundation under Grant DCR-8506361.
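The maxima of a point set (the points dominated by no other point) are the basic objects in a maxdominance problem; a simple right-to-left sweep computes them, shown below under the assumption of distinct x-coordinates.

```python
def maxima(points):
    """Return the maximal points of a planar set: those not dominated by any
    other point (p dominates q when p.x >= q.x and p.y >= q.y, with at least
    one inequality strict).  Right-to-left sweep; distinct x-coordinates
    assumed."""
    result = []
    best_y = float("-inf")
    for p in sorted(points, reverse=True):      # decreasing x
        if p[1] > best_y:                       # nothing to the right is higher
            result.append(p)
            best_y = p[1]
    return result

print(maxima([(1, 5), (2, 3), (3, 4), (4, 1)]))  # [(4, 1), (3, 4), (1, 5)]
```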

13.
A set of n weighted points in general position in d-dimensional space defines a unique regular triangulation. This paper proves that if the points are added one by one, then flipping in a topological order will succeed in constructing this triangulation. If, in addition, the points are added in a random sequence and the history of the flips is used for locating the next point, then the algorithm takes expected time at most O(n log n + n^⌈d/2⌉). Under the assumption that the points and weights are independently and identically distributed, the expected running time is between proportional to and a factor log n more than the expected size of the regular triangulation. The expectation is over choosing the points and over independent coin-flips performed by the algorithm. The research of both authors was supported by the National Science Foundation under Grant CCR-8921421 and the research by the first author was also supported under the Alan T. Waterman award, Grant CCR-9118874. Any opinions, findings, conclusions, or recommendations expressed in this publication are those of the authors and do not necessarily reflect the view of the National Science Foundation.

14.
A matching and a dominating set in a graph G are related in that they determine diameter-bounded subtree partitions of G. For a maximum matching and a minimum dominating set, the associated partitions have the fewest numbers of trees. The problem of determining a minimum dominating set in an arbitrary graph G is known to be NP-complete. In this paper we present a linear algorithm for partitioning an arbitrary tree into a minimum number of subtrees, each having diameter at most k, for a given k. Research supported in part by the National Science Foundation under Grant ENG 7902960. Research supported in part by the National Science Foundation under Grant STI 7902960.
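A simple bottom-up greedy produces a valid partition of this kind: process the tree from the leaves up and cut the edge to the deepest remaining branch whenever a single branch, or the two deepest branches joined at the current node, would form a path longer than k. The sketch below claims only validity (every part has diameter at most k); it is an illustration in the spirit of the linear algorithm, not the paper's algorithm verbatim.

```python
def partition_tree(adj, root, k):
    """Cut edges of a tree so that every resulting component has diameter at
    most k, using a bottom-up greedy, and return the number of components."""
    cuts = 0

    def branch_depth(v, parent):
        nonlocal cuts
        depths = sorted((branch_depth(c, v) + 1 for c in adj[v] if c != parent),
                        reverse=True)
        # Cut edges to the deepest branches while a single branch, or the two
        # deepest branches joined at v, would form a path longer than k.
        while depths and (depths[0] > k or
                          (len(depths) > 1 and depths[0] + depths[1] > k)):
            depths.pop(0)   # the cut-off branch becomes its own component
            cuts += 1
        return depths[0] if depths else 0

    branch_depth(root, None)
    return cuts + 1

# Path 0-1-2-3-4 with k = 2 needs two components (e.g. {0,1} and {2,3,4}).
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(partition_tree(adj, 0, 2))   # 2
```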

15.
Given a triangulation of a simple polygon P, we present linear-time algorithms for solving a collection of problems concerning shortest paths and visibility within P. These problems include calculation of the collection of all shortest paths inside P from a given source vertex S to all the other vertices of P, calculation of the subpolygon of P consisting of points that are visible from a given segment within P, preprocessing P for fast "ray shooting" queries, and several related problems. Work on this paper by this author has been supported by Office of Naval Research Grant N00014-82-K-0381, National Science Foundation Grant No. NSF-DCR-83-20085, and by grants from the Digital Equipment Corporation, the IBM Corporation, and from the U.S.-Israel Binational Science Foundation. Work on this paper by this author has been supported by National Science Foundation Grant DCR-86-05962.

16.
Every class C of languages satisfying a simple topological condition is shown to have probability one if and only if it contains some language that is algorithmically random in the sense of Martin-Löf. This result is used to derive separation properties of algorithmically random oracles and to give characterizations of the complexity classes P, BPP, AM, and PH in terms of reducibility to such oracles. These characterizations lead to results like: P = NP if and only if an algorithmically random set exists that is polynomial-time btt-hard for NP. The work of the first author was supported in part by the Alexander-von-Humboldt-Stiftung and by the National Science Foundation under Grant CCR-8913584 while he visited the Lehrstuhl für Theoretische Informatik, Institut für Informatik, Universität Würzburg, Germany. The work of the second author was supported in part by the National Science Foundation under Grant CCR-8809238 and in part by DIMACS, where he was a visitor while a portion of his work was done.

17.
We present algorithms for the randomized simulation of a shared memory machine (PRAM) on a Distributed Memory Machine (DMM). In a PRAM, memory conflicts occur only through concurrent access to the same cell, whereas the memory of a DMM is divided into modules, one for each processor, and concurrent accesses to the same module create a conflict. The delay of a simulation is the time needed to simulate a parallel memory access of the PRAM. Any general simulation of an m processor PRAM on an n processor DMM will necessarily have delay at least m/n. A randomized simulation is called time-processor optimal if the delay is O(m/n) with high probability. Using a novel simulation scheme based on hashing we obtain a time-processor optimal simulation with delay O(log log(n) log*(n)). The best previous simulations use a simpler scheme based on hashing and have much larger delay: Θ(log(n)/log log(n)) for the simulation of an n processor PRAM on an n processor DMM, and Θ(log(n)) in the case where the simulation is time-processor optimal. Our simulations use several (two or three) hash functions to distribute the shared memory among the memory modules of the DMM. The stochastic processes modeling the behavior of our algorithms and their analyses based on powerful classes of universal hash functions may be of independent interest. Research partially supported by NSF/DARPA Grant CCR-9005448. Work was done while at the University of California at Berkeley and the International Computer Science Institute, Berkeley, CA. Research partially supported by National Science Foundation Operating Grant CCR-9016468, National Science Foundation Operating Grant CCR-9304722, United States-Israel Binational Science Foundation Grant No. 89-00312, United States-Israel Binational Science Foundation Grant No. 92-00226, and ESPRIT BR Grant EC-US 030. Part of work was done during a visit at the International Computer Science Institute at Berkeley; supported in part by DFG-Forschergruppe Effiziente Nutzung massiv paralleler Systeme, Teilprojekt 4, and by the Esprit Basic Research Action Nr. 7141 (ALCOM II).
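The load-balancing effect of using more than one hash function can be illustrated with a toy "two-choice" placement of shared-memory cells into modules; this sketch only conveys the idea and is not the simulation scheme itself, which uses universal hash-function classes and resolves module contention at access time.

```python
import random
from collections import Counter

def two_choice_placement(num_cells, num_modules, seed=0):
    """Place shared-memory cells into memory modules using two random hash
    values per cell, always choosing the currently less loaded module, and
    return the maximum module load.  Toy illustration of multi-hash load
    balancing."""
    rng = random.Random(seed)
    h1 = [rng.randrange(num_modules) for _ in range(num_cells)]
    h2 = [rng.randrange(num_modules) for _ in range(num_cells)]
    load = Counter()
    for cell in range(num_cells):
        target = h1[cell] if load[h1[cell]] <= load[h2[cell]] else h2[cell]
        load[target] += 1
    return max(load.values())

# With n cells in n modules the maximum load is Theta(log log n) w.h.p.,
# versus Theta(log n / log log n) for a single hash function.
print(two_choice_placement(num_cells=10000, num_modules=10000))
```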

18.
Approximate graph coloring takes as input a graph and returns a legal coloring which is not necessarily optimal. We improve the performance guarantee, or worst-case ratio between the number of colors used and the minimum number of colors possible, to O(n(log log n)^3/(log n)^3), an O(log n/log log n) factor better than the previous best-known result. The work of the first author was supported by Air Force Grant AFOSR-86-0078 and NSF PYI Grant 8657527-CCR. The work of the second author was supported by a National Science Foundation Graduate Fellowship.
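The baseline such guarantees are measured against is the sequential greedy coloring rule, which always produces a legal coloring but may be far from optimal; a minimal sketch:

```python
def greedy_coloring(adj, order=None):
    """Color vertices in the given order with the smallest color not already
    used by a colored neighbor.  Always legal, not necessarily close to
    optimal."""
    order = list(adj) if order is None else order
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# A 5-cycle: chromatic number 3, and greedy also happens to use 3 colors here.
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
coloring = greedy_coloring(adj)
print(max(coloring.values()) + 1)   # 3
```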

19.
This paper is concerned with improvement in optical image quality by image restoration. Image restoration is an ill-posed inverse problem which involves the removal or minimization of degradations caused by noise and blur in an image, resulting from, in this case, imaging through a medium. Our work here concerns the use of the underlying Toeplitz structure of such problems, and associated techniques for accelerating the convergence of iterative image restoration computations. Denoising methods, including total variation minimization, followed by segmentation-based preconditioning methods for minimum residual conjugate gradient iterations, are investigated. Regularization is accomplished by segmenting the image into (smooth) segments and varying the preconditioners across the segments. By taking advantage of the Toeplitz structure, our algorithms can be implemented with computational complexity of only O(l n^2 log n), where n^2 is the number of pixels in the image and l is the number of segments used. Also, parallelization is straightforward. Numerical tests are reported for atmospheric imaging problems, including the case of spatially varying blur. Research supported in part by a National Science Foundation Postdoctoral Research Fellowship. Research sponsored by the U.S. Air Force Office of Scientific Research under grant F49620-97-1-1039. Research sponsored by the U.S. Air Force Office of Scientific Research under grant F49620-97-1-0139, and by the National Science Foundation under grant CCR-96-23356. Research sponsored by the National Science Foundation under grant CCR-96-23356.
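The structural fact being exploited is that a Toeplitz matrix-vector product costs only O(n log n) via embedding into a circulant matrix and using the FFT, which is what keeps each iteration of a conjugate-gradient-type solver cheap. A minimal NumPy sketch of that product (not the paper's preconditioned solver) follows.

```python
import numpy as np

def toeplitz_matvec(first_col, first_row, x):
    """Multiply a Toeplitz matrix (given by its first column and first row,
    with first_col[0] == first_row[0]) by a vector in O(n log n), using the
    standard embedding into a 2n x 2n circulant matrix and the FFT."""
    n = len(x)
    # First column of the embedding circulant: [c_0..c_{n-1}, 0, r_{n-1}..r_1].
    circ = np.concatenate([first_col, [0.0], first_row[:0:-1]])
    x_pad = np.concatenate([x, np.zeros(n)])
    y = np.fft.ifft(np.fft.fft(circ) * np.fft.fft(x_pad))
    return y[:n].real

# Compare against an explicit dense Toeplitz product on a small example.
c = np.array([4.0, 1.0, 0.5, 0.25])          # first column
r = np.array([4.0, 2.0, 1.0, 0.5])           # first row
T = np.array([[c[0], r[1], r[2], r[3]],
              [c[1], c[0], r[1], r[2]],
              [c[2], c[1], c[0], r[1]],
              [c[3], c[2], c[1], c[0]]])
x = np.array([1.0, 2.0, 3.0, 4.0])
print(np.allclose(toeplitz_matvec(c, r, x), T @ x))   # True
```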

20.
We present an O(n^2 log^3 n) algorithm for the two-center problem, in which we are given a set S of n points in the plane and wish to find two closed disks whose union contains S so that the larger of the two radii is as small as possible. We also give an O(n^2 log^5 n) algorithm for solving the two-line-center problem, where we want to find two strips that cover S whose maximum width is as small as possible. The best previous solutions of both problems require O(n^3) time. Pankaj Agarwal has been supported by DIMACS (Center for Discrete Mathematics and Theoretical Computer Science), an NSF Science and Technology Center, under Grant STC-88-09648. Micha Sharir has been supported by the Office of Naval Research under Grants N00014-89-J-3042 and N00014-90-J-1284, by the National Science Foundation under Grant CCR-89-01484, by DIMACS, and by grants from the US-Israeli Binational Science Foundation, the Fund for Basic Research administered by the Israeli Academy of Sciences, and the G.I.F., the German-Israeli Foundation for Scientific Research and Development. A preliminary version of this paper has appeared in Proceedings of the Second Annual ACM-SIAM Symposium on Discrete Algorithms, 1991, pp. 449–458.
