Similar Literature
 20 similar documents found (search took 25 ms).
1.
We give an O(log^4 n)-time, O(n^2)-processor CRCW PRAM algorithm to find a hamiltonian cycle in a strong semicomplete bipartite digraph B, provided that a factor of B (i.e., a collection of vertex-disjoint cycles covering the vertex set of B) is computed in a preprocessing step. The factor is found (if it exists) using a bipartite matching algorithm, hence placing the whole algorithm in the class Random-NC. We show that any parallel algorithm which can check the existence of a hamiltonian cycle in a strong semicomplete bipartite digraph in time O(r(n)) using p(n) processors can be used to check the existence of a perfect matching in a bipartite graph in time O(r(n) + n^2/p(n)) using p(n) processors. Hence, our problem belongs to the class NC if and only if perfect matching in bipartite graphs belongs to NC. We also consider the problem of finding a hamiltonian path in a semicomplete bipartite digraph.

2.
We describe a new heuristic for constructing a minimum-cost perfect matching, designed for problems on complete graphs whose cost functions satisfy the triangle inequality (e.g., Euclidean problems). The running time for an n-node problem is O(n log n) after a minimum-cost spanning tree is constructed. We also describe a procedure which, added to Kruskal's algorithm, produces a lower bound on the size of any perfect matching. This bound is based on a dual problem which has the following geometric interpretation for Euclidean problems: pack nonoverlapping disks centered at the nodes and moats surrounding odd sets of nodes so as to maximize the sum of the disk radii and moat widths. This research was supported in part by the Natural Sciences and Engineering Research Council of Canada.

3.
We study the parallel complexity of a bounded-size dictionary version (LRU deletion heuristic) of the LZ2 compression algorithm. The unbounded version was shown to be P-complete. When the size of the dictionary is O(log^k n), the problem of computing the LZ2 compression is shown to be hard for the class of problems solvable simultaneously in polynomial time and O(log^k n) space (that is, SC^k). We also introduce a variation of this heuristic that turns out to be an SC^k-complete problem (the original heuristic belongs to SC^{k+1}). In virtue of these results, we argue that there are no practical parallel algorithms for LZ2 compression with the LRU deletion heuristic or any other heuristic deleting dictionary elements in a continuous way. For simpler heuristics (SWAP, RESTART, FREEZE), practical parallel algorithms are given.
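
As a concrete illustration of the kind of dictionary maintenance the hardness results are about, here is a small LZW-style (LZ78/LZ2-family) compressor with a bounded dictionary and LRU deletion. The exact parsing rule and dictionary layout of the variant analysed in the paper may differ; treat this only as a sketch of the heuristic, with max_dict_size, byte-level base entries, and code reuse being our own choices.

from collections import OrderedDict

def lzw_lru_compress(data: bytes, max_dict_size: int = 4096):
    """Return a list of codes encoding `data` (decoder not shown)."""
    base = {bytes([b]): b for b in range(256)}   # single bytes, never evicted
    extra = OrderedDict()                        # learned phrases, LRU order
    next_code = 256
    free_codes = []                              # codes freed by eviction

    def code_of(phrase):
        if phrase in base:
            return base[phrase]
        if phrase in extra:
            extra.move_to_end(phrase)            # touch: now most recently used
            return extra[phrase]
        return None

    out, w = [], b""
    for byte in data:
        c = bytes([byte])
        if code_of(w + c) is not None:
            w += c
            continue
        out.append(code_of(w))
        # learn w + c, evicting the least recently used phrase at the bound
        if len(extra) >= max_dict_size - 256:
            _, freed = extra.popitem(last=False)
            free_codes.append(freed)
        if free_codes:
            extra[w + c] = free_codes.pop()
        else:
            extra[w + c] = next_code
            next_code += 1
        w = c
    if w:
        out.append(code_of(w))
    return out

A decoder would have to mirror the same eviction decisions step by step; this tight coupling is exactly what makes "continuous" deletion heuristics such as LRU hard to parallelize, in contrast to SWAP, RESTART, and FREEZE.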

4.
An algorithm for finding a minimal edge coloring of a bipartite multigraph is presented. The algorithm uses O(V^{1/2} E log V + V) time and O(E + V) space. It is based on a divide-and-conquer strategy, using Euler partitions to divide the graph. A modification of the algorithm for matching is described. This algorithm finds a maximum matching of a regular bipartite graph with all degrees 2^n in O(E + V) time and O(E + V) space. This work was partially supported by the National Science Foundation under Grant GJ36461.
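
The matching result quoted at the end rests on an idea that fits in a short sketch: in a bipartite multigraph with all degrees even, walking Euler tours and keeping alternate edges of each closed tour halves every degree; repeating this on a regular graph whose degree is a power of two leaves a perfect matching. The Python below is our own illustration of that halving step, not the paper's edge-coloring algorithm, and it assumes edges are given as (left, right) pairs.

from collections import Counter, defaultdict

def euler_halve(edges):
    """Split a bipartite multigraph with all degrees even into two edge
    lists, each containing exactly half of every vertex's edges."""
    adj = defaultdict(list)                 # vertex -> [(neighbor, edge index)]
    for i, (u, v) in enumerate(edges):
        adj[("L", u)].append((("R", v), i))
        adj[("R", v)].append((("L", u), i))
    used = [False] * len(edges)
    half_a, half_b = [], []
    for start in list(adj):
        while True:
            # discard adjacency entries whose edge was already walked
            while adj[start] and used[adj[start][-1][1]]:
                adj[start].pop()
            if not adj[start]:
                break
            # walk a closed tour from `start`; with all degrees even the walk
            # can only get stuck back at `start`
            tour, node = [], start
            while True:
                while adj[node] and used[adj[node][-1][1]]:
                    adj[node].pop()
                if not adj[node]:
                    break
                nxt, i = adj[node].pop()
                used[i] = True
                tour.append(i)
                node = nxt
            # a closed tour in a bipartite graph has even length, so taking
            # alternate edges splits each vertex's tour edges exactly in half
            for pos, i in enumerate(tour):
                (half_a if pos % 2 == 0 else half_b).append(edges[i])
    return half_a, half_b

def matching_in_power_of_two_regular(edges):
    """Perfect matching of a d-regular bipartite multigraph, d a power of 2."""
    d = max(Counter(u for u, _ in edges).values(), default=0)
    while d > 1:
        half_a, _ = euler_halve(edges)      # either half is (d/2)-regular
        edges, d = half_a, d // 2
    return edges                            # degree 1 everywhere: a matching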

5.
Given a set of n intervals representing an interval graph, the problem of finding a maximum matching between pairs of disjoint (nonintersecting) intervals has been considered in the sequential model. In this paper we present parallel algorithms for computing maximum cardinality matchings among pairs of disjoint intervals in interval graphs in the EREW PRAM and hypercube models. For the general case of the problem, our algorithms compute a maximum matching in O(log^3 n) time using O(n/log^2 n) processors on the EREW PRAM and using n processors on hypercubes. For the case of proper interval graphs, our algorithm runs in O(log n) time on the EREW PRAM, using O(n) processors if the input intervals are not given already sorted and O(n/log n) processors otherwise. On n-processor hypercubes, our algorithm for the proper interval case takes O(log n log log n) time for unsorted input and O(log n) time for sorted input. Our parallel results also lead to optimal sequential algorithms for computing maximum matchings among disjoint intervals. In addition, we present an improved parallel algorithm for maximum matching between overlapping intervals in proper interval graphs. Received November 20, 1995; revised September 3, 1998.

6.
New results for the minimum weight triangulation problem
Given a finite set of points in a plane, a triangulation is a maximal set of nonintersecting line segments connecting the points. The weight of a triangulation is the sum of the Euclidean lengths of its line segments. No polynomial-time algorithm is known to find a triangulation of minimum weight, nor is the minimum weight triangulation problem known to be NP-hard. This paper proposes a new heuristic algorithm that triangulates a set of n points in O(n^3) time and that never produces a triangulation whose weight is greater than that of a greedy triangulation. The algorithm produces an optimal triangulation if the points are the vertices of a convex polygon. Experimental results indicate that this algorithm rarely produces a nonoptimal triangulation and performs much better than a seemingly similar heuristic of Lingas. In the direction of showing the minimum weight triangulation problem is NP-hard, two generalizations that are quite close to the minimum weight triangulation problem are shown to be NP-hard. This research was done while the second author was with the Department of Computer Science, Virginia Polytechnic Institute and State University.
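
The greedy triangulation used as the comparison baseline is easy to make concrete: consider all segments in order of increasing length and keep each one that does not properly cross a segment already kept. The naive implementation below is our own illustration of that baseline (it assumes points in general position and makes no attempt at the paper's O(n^3) bound).

from itertools import combinations

def cross(o, a, b):
    # z-component of (a - o) x (b - o)
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_cross(p1, p2, q1, q2):
    """True if the segments p1p2 and q1q2 properly cross each other."""
    if len({p1, p2, q1, q2}) < 4:          # sharing an endpoint is allowed
        return False
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

def greedy_triangulation(points):
    """Return the segments of a greedy triangulation of `points`."""
    pts = [tuple(p) for p in points]
    candidates = sorted(
        combinations(pts, 2),
        key=lambda e: (e[0][0] - e[1][0]) ** 2 + (e[0][1] - e[1][1]) ** 2,
    )
    chosen = []
    for a, b in candidates:
        if all(not segments_cross(a, b, c, d) for c, d in chosen):
            chosen.append((a, b))
    return chosen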

7.
Bast  Mehlhorn  Schäfer  Tamaki 《Algorithmica》2008,36(1):75-88
Abstract. We consider the single-source many-targets shortest-path (SSMTSP) problem in directed graphs with non-negative edge weights. A source node s and a target set T are specified, and the goal is to compute a shortest path from s to a node in T. Our interest in the shortest path problem with many targets stems from its use in weighted bipartite matching algorithms. A weighted bipartite matching in a graph with n nodes on each side reduces to n SSMTSP problems, where the number of targets varies between n and 1. The SSMTSP problem can be solved by Dijkstra's algorithm. We describe a heuristic that leads to a significant improvement in running time for the weighted matching problem; in our experiments a speed-up by up to a factor of 12 was achieved. We also present a partial analysis that gives some theoretical support for our experimental findings.
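
A sketch of the baseline may help: Dijkstra's algorithm run from s can stop as soon as the first node of T is settled, and tentative distances already found for targets give an upper bound that prunes the queue. The pruning rule below is our own simplified illustration of that idea, not necessarily the authors' exact heuristic.

import heapq

def ssmtsp(adj, s, targets):
    """adj: {u: [(v, w), ...]} with non-negative weights w.
    Returns the length of a shortest path from s to any node in `targets`."""
    targets = set(targets)
    dist = {s: 0}
    upper = float("inf")              # best tentative distance to any target
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                  # stale queue entry
        if u in targets:
            return d                  # the first settled target is optimal
        if d >= upper:
            continue                  # cannot improve on a known target bound
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")) and nd < upper:
                dist[v] = nd
                if v in targets:
                    upper = nd        # tighten the pruning bound
                heapq.heappush(pq, (nd, v))
    return float("inf")

In the matching application sketched in the abstract, T would typically be the set of currently free nodes on the other side of the bipartition.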

8.
We present a deterministic Logspace procedure which, given a bipartite planar graph on n vertices, assigns O(log n)-bit weights to its edges so that the minimum weight perfect matching in the graph becomes unique. The Isolation Lemma as described in Mulmuley et al. (Combinatorica 7(1):105–131, 1987) achieves the same for general graphs using randomness, whereas we can do it deterministically when restricted to bipartite planar graphs. As a consequence, we reduce both decision and construction versions of the perfect matching problem in bipartite planar graphs to testing whether a matrix is singular, under the promise that its determinant is 0 or 1, thus obtaining a highly parallel SPL algorithm for both decision and construction versions of the bipartite perfect matching problem. This improves the earlier known bounds of non-uniform SPL by Allender et al. (J. Comput. Syst. Sci. 59(2):164–181, 1999) and NC^2 by Miller and Naor (SIAM J. Comput. 24:1002–1017, 1995), and by Mahajan and Varadarajan (Proceedings of the Thirty-Second Annual ACM Symposium on Theory of Computing (STOC), pp. 351–357, 2000). It also rekindles the hope of obtaining a deterministic parallel algorithm for constructing a perfect matching in non-bipartite planar graphs, which has been open for a long time. Further, we try to find the lower bound on the number of bits needed for deterministically isolating a perfect matching. We show that our particular method for isolation will require Ω(log n) bits. Our techniques are elementary.

9.
G. Xue  D.-Z. Du 《Algorithmica》1999,23(4):354-362
In 1992 F. K. Hwang and J. F. Weng published an O(n^2)-time algorithm for computing the shortest network under a given full Steiner topology interconnecting n fixed points in the Euclidean plane. The Hwang-Weng algorithm can be used to improve substantially existing algorithms for the Steiner minimum tree problem, because it dramatically reduces the number of different Steiner topologies to be considered. In this paper we present an improved Hwang-Weng algorithm. While the worst-case time complexity of our algorithm is still O(n^2), its average time complexity over all the full Steiner topologies interconnecting n fixed points is O(n log n). Received August 24, 1996; revised February 10, 1997.

10.
A positive integer n is a perfect power if there exist integers x and k, both at least 2, such that n = x^k. The usual algorithm to recognize perfect powers computes approximate kth roots for k up to log_2 n, and runs in time O(log^3 n log log log n). First we improve this worst-case running time to O(log^3 n) by using a modified Newton's method to compute approximate kth roots. Parallelizing this gives an NC^2 algorithm. Second, we present a sieve algorithm that avoids kth-root computations by seeing if the input n is a perfect kth power modulo small primes. If n is chosen uniformly from a large enough interval, the average running time is O(log^2 n). Third, we incorporate trial division to give a sieve algorithm with an average running time of O(log^2 n / log^2 log n) and a median running time of O(log n). The two sieve algorithms use a precomputed table of small primes. We give a heuristic argument and computational evidence that the largest prime needed in this table is (log n)^{1+O(1)}; assuming the Extended Riemann Hypothesis, primes up to (log n)^{2+O(1)} suffice. The table can be computed in time roughly proportional to the largest prime it contains. We also present computational results indicating that our sieve algorithms perform extremely well in practice. This work forms part of the second author's Ph.D. thesis at the University of Wisconsin-Madison, 1991. This research was sponsored by NSF Grants CCR-8552596 and CCR-8504485.
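
For reference, the "usual algorithm" that the paper improves on is straightforward to sketch: try every exponent k with 2^k <= n and test whether an integer kth root of n raises back to n. The version below uses binary search for the root instead of the paper's modified Newton iteration, and it does not include the sieve ideas.

def is_perfect_power(n: int):
    """Return (x, k) with n == x**k and x, k >= 2, or None if no such pair exists."""
    if n < 4:
        return None
    k = 2
    while (1 << k) <= n:                  # only exponents with 2**k <= n matter
        lo, hi = 2, 1 << (n.bit_length() // k + 1)   # hi bounds the k-th root
        while lo <= hi:                   # binary search for an exact k-th root
            mid = (lo + hi) // 2
            p = mid ** k
            if p == n:
                return mid, k
            if p < n:
                lo = mid + 1
            else:
                hi = mid - 1
        k += 1
    return None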

11.
We study the problem of finding a minimum weight complete matching in the complete graph on a set V of n points in k-dimensional space. The points are the vertices of the graph and the weight of an edge between any two points is the distance between the points under some L_q-metric. We give an O((2c_q)^{1.5k} ε^{-1.5k} (α(n, n))^{0.5} n^{1.5} (log n)^{2.5}) algorithm for finding an almost minimum weight complete matching in such a graph, where c_q = 6k^{1/q} for the L_q-metric, α is the inverse Ackermann function, and ε ≤ 1. The weight of the complete matching obtained by our algorithm is guaranteed to be at most (1 + ε) times the weight of a minimum weight complete matching.

12.
This paper presents several deterministic algorithms for selecting the kth largest record from a set of n records on any n-node hypercubic network. All of the algorithms are based on the selection algorithm of Cole and Yap, as well as on various sorting algorithms for hypercubic networks. Our fastest algorithm runs in O(lg n lg* n) time, very nearly matching the trivial lower bound. Previously, the best upper bound known for selection was O(lg n lg lg n). A key subroutine in our O(lg n lg* n)-time selection algorithm is a sparse version of the Sharesort algorithm that sorts n records using p processors, , in O(lg n (lg lg p - lg lg (p/n))^2) time. Received March 23, 1994; revised October 30, 1997.

13.
On the EREW PRAM (exclusive-read and exclusive-write parallel random access machine) model of parallel computation, we give an optimal, fast parallel algorithm for the maximal edge matching problem on a very broad class of undirected graphs, running in O(log n) time with O((n + m)/log n) processors.
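
The parallel algorithm itself cannot be captured in a few lines, but the object it computes, a maximal (not maximum) matching, has a one-pass sequential analogue that makes the problem statement concrete; the sketch below is only that analogue.

def greedy_maximal_matching(n, edges):
    """edges: iterable of (u, v) pairs with vertices numbered 0 <= u, v < n."""
    matched = [False] * n
    matching = []
    for u, v in edges:
        # take an edge whenever both endpoints are still free
        if u != v and not matched[u] and not matched[v]:
            matched[u] = matched[v] = True
            matching.append((u, v))
    return matching   # no edge of the graph joins two unmatched vertices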

14.
Xin He 《Algorithmica》1990,5(1):545-559
We present an efficient algorithm for 4-coloring perfect planar graphs. The best previously known algorithm for this problem takes O(n^{3/2}) sequential time, or O(log^4 n) parallel time with O(n^3) processors. The sequential implementation of our algorithm takes O(n log n) time. The parallel implementation of our algorithm takes O(log^3 n) time with O(n) processors on a PRAM.

15.
We study the problem of finding a minimum weight complete matching in the complete graph on a set V of n points in k-dimensional space. The points are the vertices of the graph and the weight of an edge between any two points is the distance between the points under some L_q-metric. We give an O((2c_q)^{1.5k} ε^{-1.5k} (α(n, n))^{0.5} n^{1.5} (log n)^{2.5}) algorithm for finding an almost minimum weight complete matching in such a graph, where c_q = 6k^{1/q} for the L_q-metric, α is the inverse Ackermann function, and ε ≤ 1. The weight of the complete matching obtained by our algorithm is guaranteed to be at most (1 + ε) times the weight of a minimum weight complete matching. This research was supported by a fellowship from the Shell Foundation.

16.
He  Xin 《Algorithmica》1990,5(1-4):545-559

We present an efficient algorithm for 4-coloring perfect planar graphs. The best previously known algorithm for this problem takes O(n^{3/2}) sequential time, or O(log^4 n) parallel time with O(n^3) processors. The sequential implementation of our algorithm takes O(n log n) time. The parallel implementation of our algorithm takes O(log^3 n) time with O(n) processors on a PRAM.


17.
In a recent article, Nakhleh, Ringe and Warnow introduced perfect phylogenetic networks (a model of language evolution where languages do not evolve via clean speciation) and formulated a set of problems for their accurate reconstruction. Their new methodology assumes networks, rather than trees, as the correct model to capture the evolutionary history of natural languages. They proved the NP-hardness of the problem of testing whether a network is a perfect phylogenetic one for characters exhibiting at least three states, leaving open the case of binary characters, and gave a straightforward brute-force parameterized algorithm for the problem with running time O(3^k n), where k is the number of bidirectional edges in the network and n is its size. In this paper, we first establish the NP-hardness of the binary case of the problem. Then we provide a more efficient parameterized algorithm for this case, running in time O(2^k n^2). The presented algorithm is very simple and utilizes some structural results and elegant operations developed in this paper that can be useful on their own in the design of heuristic algorithms for the problem. The analysis of the algorithm uses amortized techniques to show that the upper bound on its running time is much tighter than the bound obtained under a conservative worst-case assumption. Our results bear significant impact on reconstructing evolutionary histories of languages, particularly from phonological and morphological character data, most of which exhibit at most two states (i.e., are binary), as well as on the design and analysis of parameterized algorithms. Research of I.A. Kanj was supported in part by a DePaul University Competitive Research Grant.

18.
In rough set theory (RST), finding a minimal attribute reduction (MAR) is an NP-hard combinatorial optimization problem. Ant colony optimization (ACO) is a heuristic global optimization method from the family of evolutionary algorithms, and combining rough set theory with ACO is an effective and feasible way to compute attribute reductions. To address ACO's tendency to get trapped in local optima and its slow convergence, we first take an improved information gain ratio as the heuristic information, propose a redundancy detection mechanism that checks every selected attribute and each generation's best reduction set for redundancy, and propose a probability precomputation mechanism that keeps each ant from repeatedly recomputing the information for identical paths during its search. For attribute reduction on large data sets, exploiting the parallel nature of ACO and the parallelizability of equivalence-class computation in rough sets, we propose an algorithm that combines ACO with cloud computing to compute attribute reductions of large data sets, and on this basis we further propose a multi-objective parallel scheme that can simultaneously compute the significance of the remaining attributes relative to the current attribute or reduction set. Experimental results show that, on large data, the algorithm obtains a minimal attribute reduction and reduces the time complexity of computing attribute significance from O(n^2) to O(|n|).
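
The rough-set machinery underlying such reductions is easy to sketch: attribute subsets induce equivalence classes, the positive region measures how well they determine the decision attribute, and a redundancy check drops attributes whose removal does not hurt that measure. The code below illustrates only these primitives, with our own function and field names; the paper's improved gain ratio, ACO search, and cloud-based parallelization are not reproduced.

from collections import defaultdict

def dependency(rows, attrs, decision):
    """rows: list of dicts; attrs: conditional attribute keys; decision: key."""
    classes = defaultdict(list)
    for r in rows:
        classes[tuple(r[a] for a in attrs)].append(r[decision])
    # positive region = objects in equivalence classes with a unique decision
    pos = sum(len(vals) for vals in classes.values() if len(set(vals)) == 1)
    return pos / len(rows)

def remove_redundant(rows, reduct, decision):
    """Drop attributes from `reduct` whose removal keeps the dependency."""
    reduct = list(reduct)
    target = dependency(rows, reduct, decision)
    for a in list(reduct):
        rest = [b for b in reduct if b != a]
        if rest and dependency(rows, rest, decision) >= target:
            reduct = rest
    return reduct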

19.
Das  Loui 《Algorithmica》2008,31(4):530-547
Abstract. Updating a minimum spanning tree (MST) is a basic problem for communication networks. In this paper we consider single node deletions in MSTs. Let G = (V, E) be an undirected graph with n nodes and m edges, and let T be the MST of G. For each node v in V, the node replacement for v is the minimum weight set of edges R(v) that connects the components of T - v. We present a sequential algorithm and a parallel algorithm that find R(v) for all v in V simultaneously. The sequential algorithm takes O(m log n) time, but only O(m α(m, n)) time when the edges of E are presorted by weight. The parallel algorithm takes O(log^2 n) time using m processors on a CREW PRAM.
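
For a single vertex, R(v) is easy to compute from scratch, which makes the problem statement concrete: label the components of T - v and reconnect them with a Kruskal-style pass over the remaining edges of G. The sketch below does exactly that for one v; it is our own illustration and does not achieve the paper's all-vertices bounds.

def replacement_edges(n, graph_edges, tree_edges, v):
    """graph_edges/tree_edges: lists of (w, a, b) with vertices 0..n-1.
    Returns R(v), the cheapest edge set reconnecting the components of T - v."""
    parent = list(range(n))                 # union-find over vertices

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    # seed the union-find with the components of T - v
    for _, a, b in tree_edges:
        if v not in (a, b):
            parent[find(a)] = find(b)
    # Kruskal pass over all edges of G that avoid v
    result = []
    for w, a, b in sorted(graph_edges):
        if v in (a, b):
            continue
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            result.append((w, a, b))
    return result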

20.
Let A and B be two sets of n objects in R^d, and let Match be a (one-to-one) matching between A and B. Let min(Match), max(Match), and Σ(Match) denote the length of the shortest edge, the length of the longest edge, and the sum of the lengths of the edges of Match, respectively. Bottleneck matching, a matching that minimizes max(Match), is suggested as a convenient way for measuring the resemblance between A and B. Several algorithms for computing, as well as approximating, this resemblance are proposed. The running time of all the algorithms involving planar objects is roughly O(n^{1.5}). For instance, if the objects are points in the plane, the running time of the exact algorithm is O(n^{1.5} log n). A semidynamic data structure for answering containment problems for a set of congruent disks in the plane is developed. This data structure may be of independent interest. Next, the problem of finding a translation of B that maximizes the resemblance to A under the bottleneck matching criterion is considered. When A and B are point-sets in the plane, an O(n^5 log n)-time algorithm for determining whether for some translated copy the resemblance gets below a given ρ is presented, thus improving the previous result of Alt, Mehlhorn, Wagener, and Welzl by a factor of almost n. This result is used to compute the smallest such ρ in time O(n^5 log^2 n), and an efficient approximation scheme for this problem is also given. The uniform matching problem (also called the balanced assignment problem, or the fair matching problem) is to find Match*_U, a matching that minimizes max(Match) - min(Match). A minimum deviation matching Match*_D is a matching that minimizes (1/n)Σ(Match) - min(Match). Algorithms for computing Match*_U and Match*_D in roughly O(n^{10/3}) time are presented. These algorithms are more efficient than the previous O(n^4)-time algorithms of Martello, Pulleyblank, Toth, and de Werra, and of Gupta and Punnen, who studied these problems for general bipartite graphs. Received October 21, 1997; revised July 16, 1998.
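
The bottleneck value itself can be computed naively, which makes the resemblance measure concrete: binary-search over the sorted pairwise distances and test each candidate threshold with an augmenting-path perfect-matching check. The sketch below is our own, roughly O(n^3 log n) version; it computes the same quantity as the paper's much faster geometric algorithms for planar point sets.

from math import dist

def has_perfect_matching(allowed):
    """allowed[i] = set of indices j such that A[i] may be matched to B[j]."""
    n = len(allowed)
    match_of = [-1] * n                      # match_of[j] = i matched to B[j]

    def augment(i, seen):
        for j in allowed[i]:
            if j not in seen:
                seen.add(j)
                if match_of[j] == -1 or augment(match_of[j], seen):
                    match_of[j] = i
                    return True
        return False

    return all(augment(i, set()) for i in range(n))

def bottleneck_distance(A, B):
    """Smallest value t such that A and B admit a perfect matching with all
    matched pairs at distance at most t."""
    n = len(A)
    d = [[dist(a, b) for b in B] for a in A]
    values = sorted({d[i][j] for i in range(n) for j in range(n)})
    lo, hi = 0, len(values) - 1
    while lo < hi:                           # binary search for the threshold
        mid = (lo + hi) // 2
        allowed = [{j for j in range(n) if d[i][j] <= values[mid]}
                   for i in range(n)]
        if has_perfect_matching(allowed):
            hi = mid
        else:
            lo = mid + 1
    return values[lo]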
