Similar Documents
20 similar documents found (search time: 62 ms)
1.
Given a text string x of length n, the Minimal Augmented Suffix Tree T(x) of x is a digital-search index that returns, for any query string w and in a number of comparisons bounded by the length of w, the maximum number of nonoverlapping occurrences of w in x. It is shown that T(x) can be built in O(n log^2 n) time and O(n log n) space, off-line on a RAM. This research was supported in part, through the Leonardo Fibonacci Institute, by the Istituto Trentino di Cultura, Trento, Italy. Additional support was provided by NSF Grants CCR-8900305 and CCR-9201078, by NATO Grant CRG 900293, by the National Research Council of Italy, and by the ESPRIT III Basic Research Programme of the EC under Contract No. 9072 (Project GEPPCOM). Additional support was provided by NSF Grant CCR-91-96176 and ONR Contract N 00014-91-J-4052, ARPA Order 2225.
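
To illustrate the query that T(x) answers, here is a naive per-query counter (a hypothetical helper, not the index itself): a greedy left-to-right scan yields the maximum number of nonoverlapping occurrences of a fixed pattern w, but it takes time proportional to |x| per query rather than a number of comparisons bounded by |w|.

    def max_nonoverlapping_occurrences(x, w):
        """Greedy left-to-right count of nonoverlapping occurrences of w in x;
        this is the quantity the Minimal Augmented Suffix Tree reports, here
        computed naively without any preprocessing of x."""
        if not w:
            return 0
        count, i = 0, x.find(w)
        while i != -1:
            count += 1
            i = x.find(w, i + len(w))
        return count

For example, max_nonoverlapping_occurrences('aaaa', 'aa') returns 2, even though 'aa' occurs three times in 'aaaa' when overlaps are allowed.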

2.
A string w is primitive if it is not a power of another string (i.e., writing w = v^k implies k = 1). Conversely, w is a square if w = vv, with v a primitive string. A string x is square-free if it has no nonempty substring of the form ww. It is shown that the square-freedom of a string of n symbols over an arbitrary alphabet can be tested by a CRCW PRAM with n processors in O(log n) time and linear auxiliary space. If the cardinality of the input alphabet is bounded by a constant independent of the input size, then the number of processors can be reduced to n/log n without affecting the time complexity of this strategy. The fastest sequential algorithms solve this problem in O(n log n) or O(n) time, depending on whether the cardinality of the input alphabet is unbounded or bounded, and either performance is known to be optimal within its class. More elaborate constructions lead to a CRCW PRAM algorithm for detecting, within the same n-processor bounds, all positioned squares in x in time O(log n) and using linear auxiliary space. The fastest sequential algorithms solve this problem in O(n log n) time, and such a performance is known to be optimal. This research was supported, through the Leonardo Fibonacci Institute, by the Istituto Trentino di Cultura, Trento, Italy. Additional support was provided by the French and Italian Ministries of Education, by the National Research Council of Italy, by the British Research Council Grant SERC-E76797, by NSF Grant CCR-89-00305, by NIH Library of Medicine Grant R01 LM05118, by AFOSR Grant 90-0107, and by NATO Grant CRG900293.
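
As a point of reference for the definition above, a naive sequential check of square-freedom can be written directly; this is a sketch of the definition only, not the PRAM algorithm (or the optimal sequential algorithms) discussed in the abstract.

    def is_square_free(x):
        """Return True iff x contains no nonempty substring of the form ww.
        Naive O(n^3) check that tries every start position and half-length;
        the optimal sequential bounds cited above are O(n log n) and O(n)."""
        n = len(x)
        for start in range(n):
            for half in range(1, (n - start) // 2 + 1):
                if x[start:start + half] == x[start + half:start + 2 * half]:
                    return False
        return True

For example, is_square_free('abcbc') returns False (it contains the square 'bcbc'), while is_square_free('abcacb') returns True.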

3.
A string w is primitive if it is not a power of another string (i.e., writing w = v^k implies k = 1). Conversely, w is a square if w = vv, with v a primitive string. A string x is square-free if it has no nonempty substring of the form ww. It is shown that the square-freedom of a string of n symbols over an arbitrary alphabet can be tested by a CRCW PRAM with n processors in O(log n) time and linear auxiliary space. If the cardinality of the input alphabet is bounded by a constant independent of the input size, then the number of processors can be reduced to n/log n without affecting the time complexity of this strategy. The fastest sequential algorithms solve this problem in O(n log n) or O(n) time, depending on whether the cardinality of the input alphabet is unbounded or bounded, and either performance is known to be optimal within its class. More elaborate constructions lead to a CRCW PRAM algorithm for detecting, within the same n-processor bounds, all positioned squares in x in time O(log n) and using linear auxiliary space. The fastest sequential algorithms solve this problem in O(n log n) time, and such a performance is known to be optimal.

4.
We study the partial vertex cover problem. Given a graph G = (V, E), a weight function w: V → R+, and an integer s, our goal is to cover all but s edges by picking a set of vertices with minimum weight. The problem is clearly NP-hard, as it generalizes the well-known vertex cover problem. We provide a primal-dual 2-approximation algorithm which runs in O(n log n + m) time. This represents an improvement in running time over the previously known fastest algorithm. Our technique can also be used to get a 2-approximation for a more general version of the problem. In the partial capacitated vertex cover problem each vertex u comes with a capacity k_u. A solution consists of a function x: V → ℕ₀ and an orientation of all but s edges, such that the number of edges oriented toward vertex u is at most x_u k_u. Our objective is to find a cover that minimizes ∑_{v∈V} x_v w_v. This is the first 2-approximation for the problem, and it also runs in O(n log n + m) time. Research supported by NSF Awards CCR 0113192 and CCF 0430650, and the University of Maryland Dean's Dissertation Fellowship.
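
A minimal sketch of the objective and feasibility condition stated above (a hypothetical checker written from the problem definition, not the primal-dual algorithm of the paper):

    def partial_cover_weight(edges, weights, cover, s):
        """Return the total weight of the vertex set `cover` if it leaves at
        most s of `edges` uncovered; raise otherwise.  `edges` is a list of
        (u, v) pairs and `weights` maps each vertex to its positive weight."""
        uncovered = sum(1 for u, v in edges if u not in cover and v not in cover)
        if uncovered > s:
            raise ValueError("cover leaves %d edges uncovered; at most %d allowed"
                             % (uncovered, s))
        return sum(weights[v] for v in cover)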

5.
P. V. Poblete, Algorithmica, 2001, 29(1-2): 227-237
Given a set S of N distinct elements in random order and a pivot x ∈ S, we study the problem of simultaneously finding the left and the right neighbors of x, i.e., L = max{u | u < x} and R = min{v | v > x}. We analyze an adaptive algorithm that solves this problem by scanning the set S while maintaining current values for the neighbors L and R. Each new element inspected is compared first against the neighbor on the most populous side, then (if necessary) against the neighbor on the other side, and finally (if necessary) against the pivot. This algorithm may require 3N comparisons in the worst case, but it performs well on the average. If the pivot has rank αN, where α is fixed and α < 1/2, the algorithm does (1+α)N + Θ(log N) comparisons on the average, with a variance of 3 ln N + Θ(1). However, in the case where the pivot is the median, the average becomes (3/2)N + Θ(√N), while the variance grows to (1/2 - π/8)N + Θ(log N). We also prove that, in the αN case, the limit distribution is Gaussian. This work has been supported in part by Grant FONDECYT (Chile) 1950622 and 1981029. Online publication October 6, 2000.
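
The scanning rule quoted above translates almost line for line into code. The sketch below follows that description; the reading of 'most populous side' as the side that has absorbed more scanned elements so far is our interpretation.

    def left_right_neighbors(S, x):
        """Scan S once, maintaining the current left neighbor L and right
        neighbor R of the pivot x.  Each new element is compared first against
        the neighbor on the more populous side, then (if necessary) against the
        other neighbor, and finally (if necessary) against the pivot."""
        L = R = None                      # None means no neighbor found yet
        left_count = right_count = 0      # elements seen on each side of x
        for y in S:
            if y == x:
                continue
            order = ("L", "R") if left_count >= right_count else ("R", "L")
            settled = False
            for side in order:
                if side == "L" and L is not None and y < L:
                    left_count += 1       # y is left of L: not a new neighbor
                    settled = True
                    break
                if side == "R" and R is not None and y > R:
                    right_count += 1      # y is right of R: not a new neighbor
                    settled = True
                    break
            if not settled:               # y lies between L and R: ask the pivot
                if y < x:
                    L, left_count = y, left_count + 1
                else:
                    R, right_count = y, right_count + 1
        return L, R

For example, left_right_neighbors([7, 2, 9, 4, 6, 1, 8], 6) returns (4, 7).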

6.
It is shown that the Lyndon decomposition of a word of n symbols can be computed by an n-processor CRCW PRAM in O(log n) time. Extensions of the basic algorithm convey, within the same time and processor bounds, efficient parallel solutions to problems such as finding the lexicographically minimum or maximum suffix for all prefixes of the input string, and finding the lexicographically least rotation of all prefixes of the input. A. Apostolico's research was supported in part by the French and Italian Ministries of Education, by British Research Council Grant SERC-E76797, by NSF Grants CCR-89-00305 and CCR-9201078, by NIH Library of Medicine Grant R01 LM05118, by AFOSR Grant 89NM682, and by NATO Grant CRG 900293. M. Crochemore's research was supported in part by PRC Mathématiques et Informatique and by NATO Grant CRG 900293.
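
For comparison with these parallel bounds, the standard sequential route to the Lyndon decomposition is Duval's linear-time factorization; the sketch below shows that sequential algorithm, not the parallel one from the paper.

    def lyndon_factorization(s):
        """Duval's algorithm: factor s into a lexicographically non-increasing
        sequence of Lyndon words in O(n) time and O(1) extra space."""
        n, i = len(s), 0
        factors = []
        while i < n:
            j, k = i + 1, i
            while j < n and s[k] <= s[j]:
                k = i if s[k] < s[j] else k + 1
                j += 1
            while i <= k:
                factors.append(s[i:i + j - k])
                i += j - k
        return factors

For example, lyndon_factorization('banana') returns ['b', 'an', 'an', 'a'].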

7.
Squares are strings of the form ww where w is any nonempty string. Main and Lorentz proposed an O(n log n)-time algorithm for finding the positions of all squares in a string of length n. Based on their result, we show how to find the positions of all squares in a run-length encoded string in time O(N log N), where N is the number of runs in this string, provided that we do not explicitly compute all "trivial squares" occurring within runs. The algorithm is optimal and its time complexity is independent of the length of the original uncompressed string.
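
The parameter N above refers to the run-length encoding of the input; a small helper (illustrative only) makes the meaning of 'runs' concrete.

    from itertools import groupby

    def run_length_encode(s):
        """Run-length encoding of s as (symbol, run length) pairs; the number
        of runs N = len(run_length_encode(s)) is the parameter in the
        O(N log N) bound quoted above."""
        return [(ch, sum(1 for _ in grp)) for ch, grp in groupby(s)]

For example, run_length_encode('aaabbc') returns [('a', 3), ('b', 2), ('c', 1)], so N = 3 while the uncompressed length is 6.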

8.
Given a planar set S of n points, maxdominance problems consist of computing, for every p ∈ S, some function of the maxima of the subset of S that is dominated by p. A number of geometric and graph-theoretic problems can be formulated as maxdominance problems, including the problem of computing a minimum independent dominating set in a permutation graph, the related problem of finding the shortest maximal increasing subsequence, the problem of enumerating restricted empty rectangles, and the related problem of computing the largest empty rectangle. We give an algorithm for optimally solving a class of maxdominance problems. A straightforward application of our algorithm yields improved time bounds for the above-mentioned problems. The techniques used in the algorithm are of independent interest, and include a linear-time tree computation that is likely to arise in other contexts. The research of this author was supported by the Office of Naval Research under Grants N00014-84-K-0502 and N00014-86-K-0689, and the National Science Foundation under Grant DCR-8451393, with matching funds from AT&T. This author's research was supported by the National Science Foundation under Grant DCR-8506361.

9.
We show that any n-net 2-terminal channel routing problem of density d can be wired on a two-layer grid of width w = d + O(d^(2/3)) when vertical wire segments are allowed to overlap for a distance of length 1. This is a considerable asymptotic improvement over the best known, and optimal, channel width of 2d - 1 for models in which no vertical overlap is allowed [RBM, PL]. Our result also improves the 3d/2 + O(1) channel width achieved by a recent algorithm [G] for the same vertical overlap model. The algorithm presented in this paper produces at most 4 overlaps of unit length between any two nets, uses O(n) contacts, and can be implemented to run in O(n d^(2/3)) time. We also generalize the algorithm to multi-terminal channel routing problems, for which our algorithm uses a width of w = 2d + O(d^(2/3)). This work was supported by the Office of Naval Research under Contract N00014-84-K-0502 and by the National Science Foundation under Grant DMC-84-13496.

10.
L. Newberg, D. Wolfe, Algorithmica, 1994, 12(2-3): 209-224
This article introduces the concept of a bad square in a redundant array of inexpensive disks (RAID). Bad squares are used to prove upper limits on the reliability of the 2d-parity arrangement when there is the possibility that a string of disks may fail simultaneously. Bad-square analysis motivates several optimal string layouts which achieve these limits. Bad squares also provide a means to calculate the mean time to data loss for a RAID layout, without the use of Monte Carlo simulation. The first author was supported by a National Science Foundation Graduate Fellowship.

11.
Given an irreducible partial upper triangular n × n matrix A, it is shown that for every nonzero vector b there exists a completion A_c of A such that the pair (A_c, b) is controllable. Various extensions and applications of this result are given. Partially supported by NSF Grant DMS-9000839 and by the United States-Israel Binational Fund.

12.
We say a vertex v in a graph G covers a vertex w if v = w or if v and w are adjacent. A subset of vertices of G is a dominating set if it collectively covers all vertices in the graph. The dominating set problem, which is NP-hard, consists of finding a smallest possible dominating set for a graph. The straightforward greedy strategy for finding a small dominating set in a graph consists of successively choosing vertices which cover the largest possible number of previously uncovered vertices. Several variations on this greedy heuristic are described, and the results of extensive testing of these variations are presented. A more sophisticated procedure for choosing vertices, which takes into account the number of ways in which an uncovered vertex may be covered, appears to be the most successful of the algorithms which are analyzed. For our experimental testing, we used both random graphs and graphs constructed by test case generators which produce graphs with a given density and a specified size for the smallest dominating set. We found that these generators were able to produce challenging graphs for the algorithms, thus helping to discriminate among them, and allowing a greater variety of graphs to be used in the experiments. Received October 27, 1998; revised March 25, 2001.
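
The baseline greedy strategy described above is straightforward to state in code; the following sketch implements only that baseline (ties broken arbitrarily), none of the refined vertex-selection rules the paper evaluates.

    def greedy_dominating_set(adj):
        """adj maps each vertex to the set of its neighbors.  Repeatedly pick
        a vertex covering the largest number of still-uncovered vertices,
        where a vertex covers itself and all of its neighbors."""
        uncovered = set(adj)
        dominating = []
        while uncovered:
            v = max(adj, key=lambda u: len(({u} | adj[u]) & uncovered))
            dominating.append(v)
            uncovered -= {v} | adj[v]
        return dominating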

13.
There is a large and growing body of literature concerning the solutions of geometric problems on mesh-connected arrays of processors. Most of these algorithms are optimal (i.e., they run in time O(n^(1/d)) on a d-dimensional n-processor array), and they all assume that the parallel machine is trying to solve a problem of size n on an n-processor array. Here we investigate the situation where we have a mesh of size p and we are interested in using it to solve a problem of size n > p. The goal we seek is to achieve, when solving a problem of size n > p, the same speedup as when solving a problem of size p. We show that for many geometric problems this goal can indeed be achieved. The research of M. J. Atallah was supported by the Office of Naval Research under Contracts N00014-84-K-0502 and N00014-86-K-0689, the Air Force Office of Scientific Research under Grant AFOSR-90-0107, the National Science Foundation under Grant DCR-8451393, and the National Library of Medicine under Grant R01-LM05118. Jyh-Jong Tsay's research was partially supported by the Office of Naval Research under Contract N00014-84-K-0502, the Air Force Office of Scientific Research under Grant AFOSR-90-0107, and the National Science Foundation under Grant DCR-8451393.

14.
An optimal O(log log n)-time CRCW-PRAM algorithm for computing all period lengths of a string is presented. Previous parallel algorithms compute the period only if it is shorter than half of the length of the string. The algorithm can be used to find all initial palindromes of a string in the same time and processor bounds. Both algorithms are the fastest possible over a general alphabet. We derive a lower bound for finding initial palindromes by modifying a known lower bound for finding the period length of a string [9]. When p processors are available the bounds become Θ(n/p + log log_{1+p/n} 2p). This work was partially supported by NSF Grant CCR-90-14605. D. Breslauer was partially supported by an IBM Graduate Fellowship while studying at Columbia University and by a European Research Consortium for Informatics and Mathematics postdoctoral fellowship.
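
Sequentially, all period lengths of a string follow directly from its border (failure) function: p is a period of s exactly when s has a border of length |s| - p. The sketch below is that simple sequential computation, not the parallel algorithm of the paper.

    def all_periods(s):
        """All period lengths of s via the KMP failure (border) function.
        The borders of s are fail[n], fail[fail[n]], ..., 0; each border b
        contributes the period n - b (including the trivial period n)."""
        n = len(s)
        fail = [0] * (n + 1)   # fail[i] = length of longest proper border of s[:i]
        k = 0
        for i in range(1, n):
            while k > 0 and s[i] != s[k]:
                k = fail[k]
            if s[i] == s[k]:
                k += 1
            fail[i + 1] = k
        periods, b = [], fail[n]
        while True:
            periods.append(n - b)
            if b == 0:
                break
            b = fail[b]
        return periods

For example, all_periods('abaabaa') returns [3, 6, 7].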

15.
We study the problem of maintaining the 2-edge-, 2-vertex-, and 3-edge-connected components of a dynamic planar graph subject to edge deletions. The 2-edge-connected components can be maintained in a total of O(n log n) time under any sequence of at most O(n) deletions. This gives O(log n) amortized time per deletion. The 2-vertex- and 3-edge-connected components can be maintained in a total of O(n log^2 n) time. This gives O(log^2 n) amortized time per deletion. The space required by all our data structures is O(n). All our time bounds improve previous bounds. This work was partially supported by the ESPRIT II Basic Research Actions Program of the EC under Project ALCOM II (contract No. 7141) and Project ASMICS. A preliminary version of this paper appears in [12]. Partially supported by a CNR Fellowship. Work done while the author was visiting Columbia University. On leave from IBM T. J. Watson Research Center, Yorktown Heights, NY 10598, USA.

16.
In this paper we describe a simple parallel algorithm for list ranking. The algorithm is deterministic and runs in O(log n) time on an EREW PRAM with n/log n processors. The algorithm matches the performance of the Cole-Vishkin [CV3] algorithm but is simple and has reasonable constant factors. R. J. Anderson was supported by an NSF Presidential Young Investigator award and G. L. Miller was supported by NSF Grant DCR-85114961.
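
List ranking asks for each node's distance to the end of a linked list. As background only, the classic non-work-optimal idea is pointer jumping (usually attributed to Wyllie); a sequential simulation of it is sketched below, and it is not the work-optimal algorithm of the paper.

    def list_rank(nxt):
        """Pointer jumping, simulated sequentially.  nxt[i] is the successor of
        node i, with nxt[i] == i marking the tail.  Returns rank[i], the number
        of links from i to the tail.  Each of the O(log n) rounds updates every
        node synchronously, mimicking the parallel rounds."""
        n = len(nxt)
        rank = [0 if nxt[i] == i else 1 for i in range(n)]
        nxt = list(nxt)
        for _ in range(max(n, 1).bit_length()):
            rank = [rank[i] + rank[nxt[i]] for i in range(n)]
            nxt = [nxt[nxt[i]] for i in range(n)]
        return rank

For example, list_rank([1, 2, 2]) for the list 0 → 1 → 2 returns [2, 1, 0].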

17.
We address the problem of approximating a minimum cycle cover in parallel. We give the first efficient parallel algorithm for finding an approximation to a minimum cycle cover. Our algorithm finds a cycle cover whose size is within a factor of O(1 + n log n/(m + n)) of the minimum-sized cover, using O(log^2 n) time on (m + n)/log n processors. Research supported by ONR Grant N00014-88-K-0243 and DARPA Grant N00039-88-C0113 at Harvard University. Research supported by a graduate fellowship from GE. Additional support provided by Air Force Contract AFOSR-86-0078, and by an NSF PYI awarded to David Shmoys, with matching funds from IBM, Sun Microsystems, and UPS.

18.
Squares are strings of the form ww where w is any nonempty string. Two squares ww and w'w' are of different types if and only if w ≠ w'. Fraenkel and Simpson [Aviezri S. Fraenkel, Jamie Simpson, How many squares can a string contain? Journal of Combinatorial Theory, Series A 82 (1998) 112-120] proved that the number of square types contained in a string of length n is bounded by O(n). The set of all different square types contained in a string is called the vocabulary of the string. If a square can be obtained by a series of successive right-rotations from another square, then we say the latter covers the former. A square is called a c-square if no square with a smaller index can cover it and it is not a trivial square. The set containing all c-squares is called the covering set. Note that every string has a unique covering set. Furthermore, the vocabulary of the covering set is called the c-vocabulary. In this paper, we prove that the cardinality of the c-vocabulary in a string is less than , where N is the number of runs in this string.

19.
We give the first efficient parallel algorithms for solving the arrangement problem. We give a deterministic algorithm for the CREW PRAM which runs in nearly optimal bounds of O(log n log* n) time and n^2/log n processors. We generalize this to obtain an O(log n log* n)-time algorithm using n^d/log n processors for solving the problem in d dimensions. We also give a randomized algorithm for the EREW PRAM that constructs an arrangement of n lines on-line, in which each insertion is done in optimal O(log n) time using n/log n processors. Our algorithms develop new parallel data structures and new methods for traversing an arrangement. This work was supported by the National Science Foundation, under Grants CCR-8657562 and CCR-8858799, NSF/DARPA under Grant CCR-8907960, and Digital Equipment Corporation. A preliminary version of this paper appeared at the Second Annual ACM Symposium on Parallel Algorithms and Architectures [3].

20.
In this paper we give efficient parallel algorithms for solving a number of visibility and shortest-path problems for simple polygons. Our algorithms all run in O(log n) time and are based on the use of a new data structure for implicitly representing all shortest paths in a simple polygon P, which we call the stratified decomposition tree. We use this approach to derive efficient parallel methods for computing the visibility of P from an edge, constructing the visibility graph of the vertices of P (using an output-sensitive number of processors), constructing the shortest-path tree from a vertex of P, and determining all-farthest neighbors for the vertices in P. The computational model we use is the CREW PRAM. This research was announced in preliminary form in the Proceedings of the 6th ACM Symposium on Computational Geometry, 1990, pp. 73-82. The research of Michael T. Goodrich was supported by the National Science Foundation under Grants CCR-8810568 and CCR-9003299, and by the NSF and DARPA under Grant CCR-8908092.
