Similar Documents
10 similar documents found (search time: 218 ms)
1.
This paper presents several deterministic algorithms for selecting the kth largest record from a set of n records on any n-node hypercubic network. All of the algorithms are based on the selection algorithm of Cole and Yap, as well as on various sorting algorithms for hypercubic networks. Our fastest algorithm runs in O(lg n · lg* n) time, very nearly matching the trivial lower bound. Previously, the best upper bound known for selection was O(lg n · lg lg n). A key subroutine in our O(lg n · lg* n) time selection algorithm is a sparse version of the Sharesort algorithm that sorts n records using p processors in O(lg n · (lg lg p − lg lg(p/n))²) time. Received March 23, 1994; revised October 30, 1997.
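For reference, the following is a minimal sequential sketch of the selection problem these algorithms solve (finding the kth largest of n records), using the classic median-of-medians pivot rule. It is not the hypercube algorithm of the paper, and the function names are illustrative.

```python
# Sequential deterministic selection (median of medians) -- a baseline for the
# problem above, NOT the hypercube algorithm described in the abstract.

def select_kth_largest(records, k):
    """Return the k-th largest element of records (1-indexed)."""
    assert 1 <= k <= len(records)
    return _select(list(records), len(records) - k)  # k-th largest = rank (n - k) in ascending order

def _select(a, i):
    """Return the element of 0-indexed ascending rank i in list a."""
    if len(a) <= 5:
        return sorted(a)[i]
    # Median of medians of groups of 5 gives a provably good pivot.
    groups = [sorted(a[j:j + 5]) for j in range(0, len(a), 5)]
    pivot = _select([g[len(g) // 2] for g in groups], len(groups) // 2)
    lo = [x for x in a if x < pivot]
    eq = [x for x in a if x == pivot]
    hi = [x for x in a if x > pivot]
    if i < len(lo):
        return _select(lo, i)
    if i < len(lo) + len(eq):
        return pivot
    return _select(hi, i - len(lo) - len(eq))

print(select_kth_largest([9, 1, 7, 3, 8, 5], 3))  # -> 7
```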

2.
Given a set of n intervals representing an interval graph, the problem of finding a maximum matching between pairs of disjoint (nonintersecting) intervals has been considered in the sequential model. In this paper we present parallel algorithms for computing maximum cardinality matchings among pairs of disjoint intervals in interval graphs in the EREW PRAM and hypercube models. For the general case of the problem, our algorithms compute a maximum matching in O(log³ n) time using O(n / log² n) processors on the EREW PRAM and using n processors on the hypercube. For the case of proper interval graphs, our algorithm runs in O(log n) time using O(n) processors if the input intervals are not given already sorted, and using O(n / log n) processors otherwise, on the EREW PRAM. On n-processor hypercubes, our algorithm for the proper interval case takes O(log n · log log n) time for unsorted input and O(log n) time for sorted input. Our parallel results also lead to optimal sequential algorithms for computing maximum matchings among disjoint intervals. In addition, we present an improved parallel algorithm for maximum matching between overlapping intervals in proper interval graphs. Received November 20, 1995; revised September 3, 1998.
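As a sanity-check baseline for the problem statement (not the paper's EREW PRAM or hypercube algorithm), one can build the disjointness graph explicitly and hand it to a general-purpose maximum matching routine; the helper name below is illustrative.

```python
# Brute-force reference: vertices are intervals, edges join disjoint intervals,
# and a general matching routine finds a maximum matching on this graph.
import networkx as nx

def max_disjoint_interval_matching(intervals):
    """intervals: list of (left, right) pairs; returns a set of matched index pairs."""
    g = nx.Graph()
    g.add_nodes_from(range(len(intervals)))
    for i, (l1, r1) in enumerate(intervals):
        for j, (l2, r2) in enumerate(intervals):
            if i < j and (r1 < l2 or r2 < l1):  # the two intervals do not intersect
                g.add_edge(i, j)
    return nx.max_weight_matching(g, maxcardinality=True)

# Example: intervals 0 and 2 are disjoint, as are 1 and 3.
print(max_disjoint_interval_matching([(0, 2), (1, 4), (3, 6), (5, 8)]))
```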

3.
Deciding whether two n-point sets A, B ⊆ ℝ^d are congruent is a fundamental problem in geometric pattern matching. When the dimension d is unbounded, the problem is equivalent to graph isomorphism and is conjectured to be in FPT. When |A| = m < |B| = n, the problem becomes that of deciding whether A is congruent to a subset of B and is known to be NP-complete. We show that point subset congruence, with d as a parameter, is W[1]-hard, and that it cannot be solved in O(m·n^{o(d)}) time unless SNP ⊆ DTIME(2^{o(n)}). This shows that, unless FPT = W[1], the problem of finding an isometry of A that minimizes its directed Hausdorff distance, or its Earth Mover's Distance, to B, is not in FPT.
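For a fixed placement of A, the directed Hausdorff distance mentioned above is straightforward to evaluate; the hardness result concerns minimizing it over all isometries of A. A minimal sketch (function name illustrative):

```python
# Directed Hausdorff distance from A to B for a fixed placement of A.
import math

def directed_hausdorff(A, B):
    """Max over a in A of the distance from a to its nearest point of B."""
    def dist(p, q):
        return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))
    return max(min(dist(a, b) for b in B) for a in A)

# Example in the plane: every point of A is within 1.0 of some point of B.
A = [(0.0, 0.0), (2.0, 0.0)]
B = [(0.0, 1.0), (2.0, 1.0), (5.0, 5.0)]
print(directed_hausdorff(A, B))  # -> 1.0
```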

4.
M. Farach, M. Thorup. Algorithmica, 1998, 20(4): 388-404
String matching and compression are two widely studied areas of computer science. The theory of string matching has a long association with compression algorithms. Data structures from string matching can be used to derive fast implementations of many important compression schemes, most notably the Lempel-Ziv (LZ77) algorithm. Intuitively, once a string has been compressed, and therefore its repetitive nature has been elucidated, one might be tempted to exploit this knowledge to speed up string matching. The Compressed Matching Problem is that of performing string matching in a compressed text, without uncompressing it. More formally, let T be a text, let Z be the compressed string representing T, and let P be a pattern. The Compressed Matching Problem is that of deciding if P occurs in T, given only P and Z. Compressed matching algorithms have been given for several compression schemes such as LZW. In this paper we give the first nontrivial compressed matching algorithm for the classic adaptive compression scheme, the LZ77 algorithm. In practice, the LZ77 algorithm is known to compress more than other dictionary compression schemes, such as LZ78 and LZW, though for strings with constant per-bit entropy, all these schemes compress optimally in the limit. However, for strings with o(1) per-bit entropy, while it was recently shown that LZ77 gives compression to within a constant factor of optimal, schemes such as LZ78 and LZW may deviate from optimality by an exponential factor. Asymptotically, compressed matching is only relevant if |Z| = o(|T|), i.e., if the compression ratio |T|/|Z| is more than a constant. These results show that LZ77 is the appropriate compression method in such settings. We present an LZ77 compressed matching algorithm which runs in time O(n log²(u/n) + p), where n = |Z|, u = |T|, and p = |P|. Compare with the naïve "decompression" algorithm, which takes time Θ(u + p) to decide if P occurs in T. Writing u + p as n·(u/n) + p, we see that we have improved the complexity, replacing the compression factor u/n by a factor log²(u/n). Our algorithm is competitive in the sense that O(n log²(u/n) + p) = O(u + p), and opportunistic in the sense that O(n log²(u/n) + p) = o(u + p) if n = o(u) and p = o(u). Received December 20, 1995; revised October 29, 1996, and February 6, 1997.
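The naïve baseline the authors compare against simply decompresses Z and scans the recovered text for P. The sketch below assumes a simplified LZ77 encoding as (offset, length, literal) triples, which is an illustrative choice rather than the paper's exact representation:

```python
# Naive Theta(u + p) baseline: fully decompress Z, then search for P in T.
# The (offset, length, literal) triple format is an assumption for illustration.

def lz77_decode(triples):
    """Decode a list of (offset, length, literal) triples into the text."""
    out = []
    for offset, length, literal in triples:
        start = len(out) - offset
        for i in range(length):           # copies may overlap the output, so copy one char at a time
            out.append(out[start + i])
        if literal is not None:
            out.append(literal)
    return "".join(out)

def naive_compressed_match(triples, pattern):
    """Decompress, then scan for the pattern."""
    return pattern in lz77_decode(triples)

# 'abab' encoded as: literal 'a', literal 'b', then copy 2 chars from offset 2.
Z = [(0, 0, "a"), (0, 0, "b"), (2, 2, None)]
print(lz77_decode(Z))                    # -> 'abab'
print(naive_compressed_match(Z, "bab"))  # -> True
```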

5.
We give a class of heuristic algorithms for minimum weight perfect matching on a complete edge-weighted graph K(V) satisfying the triangle inequality, where V is a set of an even number, n, of vertices. This class is a generalization of the Onethird heuristics and the hypergreedy heuristic, and it may employ any given exact or approximate perfect matching algorithm as an auxiliary heuristic applied to an appropriate subgraph of K(V). In particular, by using the heuristic of Gabow et al. [3] as its auxiliary heuristic, our algorithm can obtain a solution whose weight is at most … times the weight of the optimal solution in … time, or a solution with an error of … in O(n²) time. Received July 21, 1994; revised November 28, 1995.
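A plain edge-greedy heuristic for minimum weight perfect matching is shown below as a baseline sketch; it is not the hypergreedy heuristic or the paper's class of algorithms, only an illustration of the problem being approximated.

```python
# Edge-greedy baseline: repeatedly take the cheapest edge whose endpoints are
# both still unmatched. Not the paper's heuristic; for illustration only.
import itertools

def greedy_perfect_matching(points, dist):
    """Pair up an even number of vertices by repeatedly taking the cheapest available edge."""
    edges = sorted(
        (dist(points[i], points[j]), i, j)
        for i, j in itertools.combinations(range(len(points)), 2)
    )
    matched, matching = set(), []
    for w, i, j in edges:
        if i not in matched and j not in matched:
            matched.update((i, j))
            matching.append((i, j))
    return matching

# Example with 4 points on a line and distance |x - y|.
print(greedy_perfect_matching([0, 1, 10, 12], lambda x, y: abs(x - y)))  # -> [(0, 1), (2, 3)]
```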

6.
J. H. Reif. Algorithmica, 2001, 29(3): 487-510
This paper is concerned with the problem of computing the characteristic polynomial of a matrix. In a large number of applications, the matrices are symmetric and sparse: with O(n) non-zero entries. The problem has an efficient sequential solution in this case, requiring O(n²) work by use of the sparse Lanczos method. A major remaining open question is to find a polylog time parallel algorithm with matching work bounds. Unfortunately, the sparse Lanczos method cannot be parallelized to run faster than time Ω(n) using n processors. Let M(n) be the processor bound to multiply two n × n matrices in O(log n) parallel time. Giesbrecht [G2] gave the best previous polylog time parallel algorithms for the characteristic polynomial of a dense matrix, using O(M(n)) processors. There is no known improvement to this processor bound in the case where the matrix is sparse. Often, in addition to being symmetric and sparse, the matrix has a sparsity graph (which has edges between indices of the matrix with non-zero entries) that has small separators. This paper gives a new algorithm for computing the characteristic polynomial of a sparse symmetric matrix, assuming that the sparsity graph is s(n)-separable and has a separator of size s(n) = O(n^γ), for some γ, 0 < γ < 1, that when deleted results in connected components of ≤ αn vertices, for some 0 < α < 1, with the same property. We derive an interesting algebraic version of Nested Dissection, which constructs a sparse factorization of the matrix A − λI_n, where A is the input matrix and I_n is the n × n identity matrix. While Nested Dissection is commonly used to minimize the fill-in in the solution of sparse linear systems, our innovation is to use the separator structure to bound also the work for manipulation of rational functions in the recursively factored matrices. The matrix elements are assumed to be over an arbitrary field. We compute the characteristic polynomial of a sparse symmetric matrix in polylog time using P(n)(n + M(s(n))) ≤ P(n)(n + s(n)^{2.376}) processors, where P(n) is the processor bound to multiply two degree-n polynomials in O(log n) parallel time using a PRAM (P(n) = O(n) if the field supports an FFT of size n, and is otherwise O(n log log n) [CK]). Our method requires only that the matrix be symmetric and non-singular (it need not be positive definite, as is usual for Nested Dissection techniques). For the frequently occurring case where the matrix has small separator size, our polylog parallel algorithm has work bounds competitive with the best known sequential algorithms (i.e., the Ω(n²) work of sparse Lanczos methods); for example, when the sparsity graph is a planar graph, s(n) = O(√n), and we require polylog time with only P(n)·n^{1.188} processors. Received September 26, 1997; revised June 5, 1999.
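As a small, dense, sequential reference for the characteristic polynomial (not the sparse nested-dissection parallel algorithm described above), the Faddeev-LeVerrier recurrence can be used to check small instances exactly:

```python
# Faddeev-LeVerrier recurrence for det(lambda*I - A), using exact rationals.
# A dense sequential reference only; not the paper's parallel algorithm.
from fractions import Fraction

def char_poly(A):
    """Return [c_0, ..., c_{n-1}, 1] with det(lambda*I - A) = sum_i c_i * lambda^i."""
    n = len(A)
    A = [[Fraction(x) for x in row] for row in A]
    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
    def add_scaled_identity(X, s):
        return [[X[i][j] + (s if i == j else 0) for j in range(n)] for i in range(n)]
    c = [Fraction(0)] * n + [Fraction(1)]       # c[n] = 1
    M = [[Fraction(0)] * n for _ in range(n)]   # M_0 = 0
    for k in range(1, n + 1):
        M = add_scaled_identity(matmul(A, M), c[n - k + 1])
        c[n - k] = -sum(matmul(A, M)[i][i] for i in range(n)) / k
    return c

# Example: a symmetric tridiagonal (hence sparse) matrix.
print(char_poly([[2, 1, 0], [1, 2, 1], [0, 1, 2]]))  # lambda^3 - 6 lambda^2 + 10 lambda - 4
```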

7.
Dario Bini. Calcolo, 1985, 22(1): 209-228
The tensor rank of the linear space 𝒜 generated by a set of linearly independent matrices A_1, A_2, …, A_p is the least integer t for which there exist t dyads u^{(τ)} v^{(τ)T}, τ = 1, 2, ..., t, such that … . If the tensor rank of 𝒜 is n + k, with k ≪ n, then some computational problems concerning matrices A ∈ 𝒜 can be solved fast. For example, the parallel inversion of almost any nonsingular matrix A ∈ 𝒜 costs 3 log n + O(log² k) steps with max(n² + p(n + k), k²n + nk) processors; the evaluation of the determinant of A can be performed by a parallel algorithm in log p + log n + O(log² k) parallel steps and by a sequential algorithm in n(1 + k²) + p(n + k) + O(k³) multiplications. Analogous results hold for carrying out one step of the bisection method, of Newton's iteration method, and of the shifted inverse power method applied to A − λB in order to compute the (generalized) eigenvalues, provided that A, B ∈ 𝒜. The same results hold if tensor rank is replaced by border rank. Applications to the case of banded Toeplitz matrices are shown. Dedicated to Professor S. Faedo on his 70th birthday. Part of the results of this paper has been presented at the Oberwolfach Conference on Komplexitätstheorie, November 1983.
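To illustrate the dyads (rank-one outer products u v^T) that tensor rank counts, here is a small numerical toy; it is not a reconstruction of the omitted formula or of the paper's fast algorithms.

```python
# A matrix built as a sum of two dyads (outer products) has rank at most 2.
import numpy as np

u = [np.array([1.0, 0.0, 2.0]), np.array([0.0, 1.0, 1.0])]
v = [np.array([1.0, 1.0, 0.0]), np.array([0.0, 2.0, 1.0])]

A = sum(np.outer(u_r, v_r) for u_r, v_r in zip(u, v))
print(A)
print(np.linalg.matrix_rank(A))  # at most 2, since A is a sum of two dyads
```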

8.
There has recently been a resurgence of interest in the shortest common superstring problem due to its important applications in molecular biology (e.g., recombination of DNA) and data compression. The problem is NP-hard, but it has been known for some time that greedy algorithms work well for this problem. More precisely, it was proved in a recent sequence of papers that in the worst case a greedy algorithm produces a superstring that is at most β times (2 ≤ β ≤ 4) worse than optimal. We analyze the problem in a probabilistic framework, and consider the optimal total overlap O_n^{opt} and the overlap O_n^{gr} produced by various greedy algorithms. These turn out to be asymptotically equivalent. We show that with high probability …, where n is the number of original strings and H is the entropy of the underlying alphabet. Our results hold under a condition that the lengths of all strings are not too short. Received November 1996; revised March 1997.
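A standard greedy superstring heuristic of the kind analyzed above repeatedly merges the pair of strings with the largest overlap; the generic textbook sketch below is for illustration and is not tied to the paper's probabilistic model.

```python
# Greedy superstring heuristic: repeatedly merge the pair with largest overlap.

def overlap(x, y):
    """Length of the longest suffix of x that is a prefix of y."""
    for k in range(min(len(x), len(y)), 0, -1):
        if x.endswith(y[:k]):
            return k
    return 0

def greedy_superstring(strings):
    strings = [s for s in strings if s]      # drop empty strings
    while len(strings) > 1:
        best = max(
            ((overlap(strings[i], strings[j]), i, j)
             for i in range(len(strings)) for j in range(len(strings)) if i != j),
            key=lambda t: t[0],
        )
        k, i, j = best
        merged = strings[i] + strings[j][k:]
        strings = [s for idx, s in enumerate(strings) if idx not in (i, j)] + [merged]
    return strings[0] if strings else ""

print(greedy_superstring(["cate", "tena", "nate"]))  # -> 'catenate', which contains all three strings
```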

9.
A set A is nontrivial for the linear-exponential-time class E = DTIME(2^{lin}) if for any k ≥ 1 there is a set B_k ∈ E such that B_k is (p-m-)reducible to A and B_k ∉ DTIME(2^{k·n}). I.e., intuitively, A is nontrivial for E if there are arbitrarily complex sets in E which can be reduced to A. Similarly, a set A is nontrivial for the polynomial-exponential-time class EXP = DTIME(2^{poly}) if for any k ≥ 1 there is a set B̂_k ∈ EXP such that B̂_k is reducible to A and B̂_k ∉ DTIME(2^{n^k}). We show that these notions are independent, namely, there are sets A_1 and A_2 in E such that A_1 is nontrivial for E but trivial for EXP, and A_2 is nontrivial for EXP but trivial for E. In fact, the latter can be strengthened to show that there is a set A ∈ E which is weakly EXP-hard in the sense of Lutz (SIAM J. Comput. 24:1170-1189) but E-trivial.

10.
Partial words are finite sequences over a finite alphabet A that may contain a number of "do not know" symbols denoted by ⋄'s. Setting A_⋄ = A ∪ {⋄}, A_⋄^* denotes the set of all partial words over A. In this paper, we investigate the border correlation function β: A_⋄^* → {a, b}^* that specifies which conjugates (cyclic shifts) of a given partial word w of length n are bordered, that is, β(w) = c_0 c_1 ⋯ c_{n−1}, where c_i = a or c_i = b according to whether the ith cyclic shift σ^i(w) of w is unbordered or bordered. A partial word w is bordered if a proper prefix x_1 of w is compatible with a proper suffix x_2 of w, in which case any partial word x containing both x_1 and x_2 is called a border of w. In addition to β, we investigate an extension β′: A_⋄^* → ℕ^* that maps a partial word w of length n to m_0 m_1 ⋯ m_{n−1}, where m_i is the length of a shortest border of σ^i(w). Our results extend those of Harju and Nowotka.
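The definitions above translate almost directly into code. The sketch below uses '*' to stand in for the hole symbol ⋄ and reports a shortest-border length of 0 for an unbordered shift (an assumption made here; the paper may encode unbordered shifts differently).

```python
# Border correlation of partial words, transcribed from the definitions above.
# '*' stands in for the hole symbol; 0 in beta' marks an unbordered shift.

HOLE = "*"

def compatible(x, y):
    """Two factors of equal length are compatible if they agree wherever both are defined."""
    return len(x) == len(y) and all(a == b or a == HOLE or b == HOLE for a, b in zip(x, y))

def is_bordered(w):
    """w is bordered if some proper prefix is compatible with the proper suffix of the same length."""
    return any(compatible(w[:k], w[-k:]) for k in range(1, len(w)))

def beta(w):
    """beta(w): 'a' if the i-th cyclic shift is unbordered, 'b' if it is bordered."""
    return "".join("b" if is_bordered(w[i:] + w[:i]) else "a" for i in range(len(w)))

def beta_prime(w):
    """beta'(w): for each cyclic shift, the length of a shortest border (0 if unbordered)."""
    def shortest(u):
        return next((k for k in range(1, len(u)) if compatible(u[:k], u[-k:])), 0)
    return [shortest(w[i:] + w[:i]) for i in range(len(w))]

print(beta("aab"))        # -> 'aba'
print(beta("a*b"))        # -> 'bbb' (the hole makes every conjugate bordered)
print(beta_prime("aab"))  # -> [0, 1, 0]
```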
