Similar Documents
20 similar documents were retrieved (search time: 0 ms).
1.
k-trees have established themselves as useful data structures in pattern recognition. A fundamental operation on k-trees is the construction of a k-tree. We present a method to store an object as a set of rays and an algorithm to convert such a set into a k-tree. The algorithm is conceptually simple, works for any k, and builds a k-tree from the rays very quickly. It produces a minimal k-tree and does not lead to intermediate storage swell.
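As a rough illustration of the k = 2, two-dimensional case, the sketch below builds a region quadtree from a binary image by recursive subdivision, merging uniform quadrants. It assumes power-of-two image sides and is a generic construction, not the ray-based method described in the abstract.

```python
import numpy as np

def build_quadtree(img):
    """Recursively build a minimal region quadtree (k = 2) from a binary
    image whose side lengths are powers of two.  Returns ('leaf', value)
    for uniform blocks, or ('node', [nw, ne, sw, se]) otherwise."""
    if img.min() == img.max():                 # uniform block: a single leaf
        return ('leaf', int(img[0, 0]))
    h, w = img.shape
    hh, hw = h // 2, w // 2
    return ('node', [build_quadtree(img[:hh, :hw]),   # NW
                     build_quadtree(img[:hh, hw:]),   # NE
                     build_quadtree(img[hh:, :hw]),   # SW
                     build_quadtree(img[hh:, hw:])])  # SE

if __name__ == '__main__':
    img = np.zeros((8, 8), dtype=int)
    img[2:6, 2:6] = 1                          # a small square object
    print(build_quadtree(img))
```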

2.
In this paper a parallel algorithm is given that, given a graph G=(V,E), decides whether G is a series-parallel graph and, if so, builds a decomposition tree for G of series and parallel composition rules. The algorithm uses $O(\log|E| \log^*|E|)$ time and $O(|E|)$ operations on an EREW PRAM, and $O(\log|E|)$ time and $O(|E|)$ operations on a CRCW PRAM. The results hold for undirected as well as for directed graphs. Algorithms with the same resource bounds are described for the recognition of graphs of treewidth two and for constructing tree decompositions of width two. Hence efficient parallel algorithms can be found for a large number of graph problems on series-parallel graphs and graphs of treewidth two, including all problems that can be stated in monadic second-order logic.

3.
The minimum k-terminal cut problem is of considerable theoretical interest and arises in several applied areas such as parallel and distributed computing, VLSI circuit design, and networking. In this paper we present two new approximation and exact algorithms for this problem on an n-vertex undirected weighted planar graph G. For the case when the k terminals are covered by the boundaries of m > 1 faces of G, we give a $\min\{O(n^2 \log n \log m),\, O(m^2 n^{1.5} \log^2 n + kn)\}$ time algorithm with a $(2-2/k)$-approximation ratio (clearly, $m \le k$). For the case when all k terminals are covered by the boundary of one face of G, we give an $O(nk^3 + (n \log n)k^2)$ time exact algorithm, or a linear-time exact algorithm if k = 3, for computing an optimal k-terminal cut. Our algorithms are based on interesting observations and improve on previous algorithms when applied to planar graphs. To the best of our knowledge, no previous approximation algorithms specifically for the k-terminal cut problem on planar graphs were known. The $(2-2/k)$-approximation algorithm of Dahlhaus et al. (for general graphs) takes $O(kn^2 \log n)$ time when applied to planar graphs; our approximation algorithm for planar graphs runs faster by at least an $O(k/\log m)$ factor ($m \le k$).
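For context, the sketch below implements the classical isolating-cut (2 - 2/k)-approximation of Dahlhaus et al. for general graphs, the baseline mentioned above, using networkx max-flow. It is not the planar-specific algorithm of the paper, and the example graph is made up.

```python
import networkx as nx

def isolating_cut_approx(G, terminals):
    """(2 - 2/k)-approximate multiway cut on an undirected graph with
    'weight' edge attributes: take a minimum cut isolating each terminal,
    then drop the most expensive of those cuts."""
    cuts = []
    for t in terminals:
        D = nx.DiGraph()                       # directed copy for max-flow
        for u, v, data in G.edges(data=True):
            D.add_edge(u, v, capacity=data['weight'])
            D.add_edge(v, u, capacity=data['weight'])
        for s in terminals:
            if s != t:
                D.add_edge(s, '_sink')         # missing capacity = infinite
        _, (reach, _) = nx.minimum_cut(D, t, '_sink')
        cut_edges = {frozenset((u, v)) for u, v in G.edges()
                     if (u in reach) != (v in reach)}
        cuts.append((sum(G.edges[tuple(e)]['weight'] for e in cut_edges),
                     cut_edges))
    cuts.sort(key=lambda c: c[0])
    edges = set().union(*(e for _, e in cuts[:-1]))    # drop heaviest cut
    return sum(G.edges[tuple(e)]['weight'] for e in edges), edges

if __name__ == '__main__':
    G = nx.Graph()
    G.add_weighted_edges_from([('a', 'x', 2), ('b', 'x', 2), ('c', 'x', 2),
                               ('a', 'b', 1), ('b', 'c', 1)])
    print(isolating_cut_approx(G, ['a', 'b', 'c']))
```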

4.
Covering problems are fundamental classical problems in optimization, computer science, and complexity theory. Typically the input to these problems is a family of sets over a finite universe, and the goal is to cover the elements of the universe with as few sets of the family as possible. Variations of covering problems include well-known problems like Set Cover, Vertex Cover, Dominating Set, and Facility Location, to name a few. Recently there has been a lot of study of partial covering problems, a natural generalization of covering problems. Here the goal is not to cover all the elements but to cover a specified number of elements with the minimum number of sets. In this paper we study partial covering problems in graphs in the realm of parameterized complexity. The classical (non-partial) versions of all these problems have been intensively studied on planar graphs and on graphs excluding a fixed graph H as a minor. However, the techniques developed for the parameterized versions of non-partial covering problems cannot be applied directly to their partial counterparts. The approach we use to show that various partial covering problems are fixed-parameter tractable on planar graphs, graphs of bounded local treewidth, and graphs excluding some graph as a minor is quite different from previously known techniques. The main idea behind our approach is the concept of implicit branching. We find the implicit branching technique interesting in its own right and believe that it can be used for other problems as well.
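To make the partial covering objective concrete, here is a small greedy sketch for Partial Set Cover (cover at least t elements with as few sets as possible); it illustrates the problem only and is unrelated to the implicit-branching technique of the paper.

```python
def greedy_partial_cover(sets, t):
    """Greedily choose sets until at least t elements are covered."""
    covered, chosen = set(), []
    while len(covered) < t:
        # pick the set covering the most still-uncovered elements
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        if not (sets[best] - covered):
            raise ValueError('cannot cover t elements')
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

if __name__ == '__main__':
    family = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {7, 8}]
    print(greedy_partial_cover(family, 6))   # cover at least 6 elements
```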

5.
Multiple alignment of DNA and protein sequences is applicable to various important fields in molecular biology. Although the dynamic programming approach to this problem is well known, it requires enormous time and space to obtain the optimal alignment. On the other hand, the problem can be formulated as a shortest path problem, so the A* algorithm, which efficiently finds a shortest path with the help of an estimator, is applicable.

First, this paper directly applies the A* algorithm to the multiple sequence alignment problem with a more powerful estimator in the more-than-two-dimensional case, and discusses extensions of this approach that utilize an upper bound on the shortest path length and a modification of the network structure. The algorithm that provides the upper bound is also proposed in this paper. The basic part of these results was originally shown in Ikeda and Imai [11]. This part is similar to the branch-and-bound techniques implemented in the MSA program of Gupta et al. [6]. Our framework is based on an edge length transformation that reduces the problem to the shortest path problem, which is more suitable for generalizations to enumerating suboptimal alignments and to parametric analysis, as done in Shibuya and Imai [15–17]. With this enhanced A* algorithm, optimal multiple alignments of several long sequences can be computed in practice, as shown by computational results.

Second, this paper proposes a k-group alignment algorithm for multiple alignment as a practical method for much larger problems, say, multiple alignments of 50–100 sequences. A basic part of these results was originally presented in Imai and Ikeda [13]. Existing iterative improvement methods for multiple alignment use so-called group-to-group two-dimensional dynamic programming, and in this respect our proposal is to extend ordinary two-group dynamic programming to k-group alignment programming. This extension is conceptually straightforward; our contribution here is to demonstrate that k-group alignment can be implemented so as to run in reasonable time and space under standard computing environments. This is established by generalizing the above A* search approach. The k-group alignment method can be directly incorporated into existing methods such as iterative improvement algorithms [2, 5] and tree-based (iterative) algorithms [9]. This paper reports computational experiments that apply the k-group method to iterative improvement algorithms and shows that our approach can find better alignments in reasonable time. For example, in larger-scale computational experiments, 34 protein sequences with very high homology were optimally 10-group aligned, and 64 sequences with high homology were optimally 5-group aligned.
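As a minimal illustration of the shortest-path/A* formulation, the sketch below aligns two sequences (the two-dimensional case) with A*, using the remaining length difference times the gap cost as an admissible estimator. The paper's estimator and the k-group generalization are considerably stronger; costs and sequences here are illustrative.

```python
import heapq

def astar_align(a, b, mismatch=1, gap=1):
    """A* over the alignment grid: node (i, j) means prefixes a[:i], b[:j];
    returns the optimal alignment cost."""
    n, m = len(a), len(b)
    h = lambda i, j: abs((n - i) - (m - j)) * gap   # admissible estimator
    dist = {(0, 0): 0}
    pq = [(h(0, 0), 0, 0, 0)]                       # (f, g, i, j)
    while pq:
        f, g, i, j = heapq.heappop(pq)
        if (i, j) == (n, m):
            return g
        if g > dist.get((i, j), float('inf')):      # stale queue entry
            continue
        moves = []
        if i < n and j < m:                         # match / substitution
            moves.append((i + 1, j + 1, 0 if a[i] == b[j] else mismatch))
        if i < n:
            moves.append((i + 1, j, gap))           # gap in b
        if j < m:
            moves.append((i, j + 1, gap))           # gap in a
        for ni, nj, c in moves:
            ng = g + c
            if ng < dist.get((ni, nj), float('inf')):
                dist[(ni, nj)] = ng
                heapq.heappush(pq, (ng + h(ni, nj), ng, ni, nj))

if __name__ == '__main__':
    print(astar_align('GATTACA', 'GCATGCU'))        # unit-cost edit distance
```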


6.
Real-world networks typically exhibit community structure, and community search is a basic task in graph data mining. Existing work has proposed a variety of models for identifying communities in networks, such as k-core-based and k-truss-based models. However, these models usually only constrain the number of neighbors of nodes or edges within a community and ignore the relationships among those neighbors, i.e., the neighborhood structure of a node, which leads to low local density among the nodes of a community. To address this problem, this paper incorporates the neighborhood structure information of nodes into the k-core dense subgraph and proposes a new community model based on the neighborhood-connected k-core, together with a definition of community density. Based on this new model, we study the densest single-community search problem, i.e., returning the community that contains the query node set and has the highest density. In real-world graph data, a set of query nodes may be distributed across multiple disjoint communities. Therefore, this paper further studies the density-threshold-based multi-community search problem, i.e., returning multiple communities that contain the query node set, each with density no lower than a user-specified threshold. For both the densest single-community search and the density-threshold-based multi-community search, we first define the notion of edge density and propose baseline algorithms based on it. To improve search efficiency, we design an index tree and an improved index tree structure that support returning query results in polynomial time. Comparisons with the baseline algorithms on multiple datasets verify the effectiveness of the neighborhood-connected k-core community model and the efficiency of the proposed query algorithms.
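For reference, the sketch below computes the k-core building block that the proposed neighborhood-connected model extends: peel vertices of degree below k, then report the connected components of what remains. The neighborhood-structure constraint and the index trees of the paper are not reproduced, and the adjacency-list input is illustrative.

```python
from collections import deque

def k_core_components(adj, k):
    """adj: dict vertex -> set of neighbours (undirected graph)."""
    deg = {v: len(ns) for v, ns in adj.items()}
    queue = deque(v for v, d in deg.items() if d < k)
    removed = set()
    while queue:                         # peel vertices of degree < k
        v = queue.popleft()
        if v in removed:
            continue
        removed.add(v)
        for u in adj[v]:
            if u not in removed:
                deg[u] -= 1
                if deg[u] < k:
                    queue.append(u)
    alive = set(adj) - removed
    comps, seen = [], set()
    for v in alive:                      # connected components of the k-core
        if v in seen:
            continue
        comp, stack = set(), [v]
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            stack.extend(u for u in adj[x] if u in alive and u not in comp)
        seen |= comp
        comps.append(comp)
    return comps

if __name__ == '__main__':
    adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
    print(k_core_components(adj, 2))     # the triangle {1, 2, 3}
```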

7.
Let $G=(V,A)$ be a digraph with source $r$. A weight function $w$ and a positive-integer bandwidth constraint function $b$ on $A$ are given. We present algorithms for finding $k$ $r$-arborescences in $G$ with minimum total weight that satisfy the bandwidth constraints.

8.
With the deployment of many emerging real-time multimedia services in computer networks, multicast routing has become an increasingly important topic. The multicast routing problem in computer networks is the well-known Steiner tree problem and is NP-complete. Many researchers have obtained good results for single-constraint (especially delay-constrained) multicast routing, but research on multi-constrained QoS multicast routing is comparatively scarce. This paper proposes a multi-constrained multicast routing optimization algorithm based on a genetic algorithm, which searches for the minimum-cost multicast tree subject to bandwidth, delay, delay-jitter, and packet-loss constraints; a network model suitable for studying QoS multicast routing is also described. Simulation experiments show that the algorithm is simple to operate, searches quickly, is efficient, and has strong practicality and robustness.

9.
Program transformation system based on generalized partial computation
Generalized Partial Computation (GPC) is a program transformation method utilizing partial information about input data, abstract data types of auxiliary functions, and the logical structure of a source program. GPC uses both an inference engine, such as a theorem prover, and a classical partial evaluator to optimize programs. GPC is therefore more powerful than classical partial evaluators but harder to implement and control. We have implemented an experimental GPC system called WSDFU (Waseda Simplify-Distribute-Fold-Unfold). This paper demonstrates the power of the program transformation system as well as its theorem prover and discusses future work.

Yoshihiko Futamura, Ph.D.: Professor in the Department of Information and Computer Science and director of the Institute for Software Production Technology (ISPT) of Waseda University. He received his BS in mathematics from Hokkaido University in 1965, his MS in applied mathematics from Harvard University in 1972, and his Ph.D. from Hokkaido University in 1985. He joined Hitachi Central Research Laboratory in 1965 and moved to Waseda University in 1991. He was a visiting professor at Uppsala University from 1985 to 1986 and a visiting scholar at Harvard University from 1988 to 1989. Automatic generation of computer programs and programming methodology are his main research fields. He is the inventor of the Futamura projections in partial evaluation and of ISO 8631 PAD (Problem Analysis Diagram).

Zenjiro Konishi: Visiting lecturer at the Institute for Software Production Technology, Waseda University. He received his M.Sc. in mathematics from Waseda University in 1995. His research interests include automated theorem proving. He received the JSSST Takahashi Award in 2001. He is a member of JSSST and IPSJ.

Robert Glück, Ph.D., Habil.: Associate Professor of Computer Science at the University of Copenhagen. He received his Ph.D. and Habilitation (venia docendi) from the Vienna University of Technology in 1991 and 1997. He was a research assistant at the City University of New York and twice received the Erwin Schrödinger Fellowship of the Austrian Science Foundation (FWF). After being an Invited Fellow of the Japan Society for the Promotion of Science (JSPS), he is now funded by the PRESTO21 program for basic research of the Japan Science and Technology Corporation (JST) and is located at Waseda University in Tokyo. His main research interests are advanced programming languages, theory and practice of program transformation, and metaprogramming.

10.
For the problem of finding a minimum-length tree connecting given fixed points in the Euclidean plane, i.e., the Euclidean Steiner minimal tree problem, three fast algorithms are given: an insertion algorithm, an incremental optimization algorithm, and a genetic algorithm, all implemented on a microcomputer. Tests on a large number of instances and comparison of the results show satisfactory performance.
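Insertion-type heuristics for the Euclidean Steiner problem are commonly seeded with the Euclidean minimum spanning tree of the given points, into which Steiner points are then inserted; the sketch below computes only that MST seed with Prim's algorithm, and this seeding step is an assumption about the setup rather than the paper's three algorithms.

```python
import math

def euclidean_mst(points):
    """Prim's algorithm on the complete Euclidean graph of the points."""
    n = len(points)
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    in_tree = [False] * n
    best = [math.inf] * n                     # cheapest connection so far
    parent = [-1] * n
    best[0] = 0.0
    edges, total = [], 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        total += best[u]
        if parent[u] >= 0:
            edges.append((parent[u], u))
        for v in range(n):
            if not in_tree[v] and dist(points[u], points[v]) < best[v]:
                best[v] = dist(points[u], points[v])
                parent[v] = u
    return total, edges

if __name__ == '__main__':
    pts = [(0, 0), (1, 0), (0, 1), (1, 1)]
    print(euclidean_mst(pts))                 # length 3.0 for the unit square
```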

11.
A subsequence is obtained from a string by deleting any number of characters; thus in contrast to a substring, a subsequence is not necessarily a contiguous part of the string. Counting subsequences under various constraints has become relevant to biological sequence analysis, to machine learning, to coding theory, to the analysis of categorical time series in the social sciences, and to the theory of word complexity. We present theorems that lead to efficient dynamic programming algorithms to count (1) distinct subsequences in a string, (2) distinct common subsequences of two strings, (3) matching joint embeddings in two strings, (4) distinct subsequences with a given minimum span, and (5) sequences generated by a string allowing characters to come in runs of a length that is bounded from above.
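As an example of count (1), the sketch below uses the standard dynamic program that doubles the count at each character and subtracts what was double-counted at that character's previous occurrence; counts (2)-(5) follow similar but more involved recurrences.

```python
def count_distinct_subsequences(s):
    """Number of distinct subsequences of s, including the empty one."""
    total = 1                       # the empty subsequence
    last_contrib = {}               # character -> count before its last use
    for ch in s:
        new_total = 2 * total - last_contrib.get(ch, 0)
        last_contrib[ch] = total
        total = new_total
    return total

if __name__ == '__main__':
    print(count_distinct_subsequences('aba'))   # 7: '', a, b, aa, ab, ba, aba
```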

12.
This paper presents a theoretical and experimental study of three MRF-based image segmentation algorithms, ICM, RL, and HCF, as applied to remote sensing imagery, and proposes an improved HCF algorithm that achieves fast segmentation of remote sensing images with good segmentation quality. Experiments are used to characterize the performance and applicable scope of each algorithm, and the segmentation performance, strengths, and weaknesses of the three algorithms are compared.
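A minimal sketch of the ICM idea on a simple Potts-style model: each pixel greedily takes the label minimizing a squared-intensity data term plus beta times the number of disagreeing 4-neighbors. The class means and beta are illustrative; the RL, HCF, and improved HCF variants of the paper are not reproduced.

```python
import numpy as np

def icm_segment(img, means, beta=1.0, iters=5):
    """Iterated Conditional Modes on a grayscale image with given class means."""
    h, w = img.shape
    labels = np.argmin((img[..., None] - np.asarray(means)) ** 2, axis=-1)
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                best_lab, best_cost = labels[y, x], np.inf
                for lab, mu in enumerate(means):
                    cost = (img[y, x] - mu) ** 2          # data term
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx_ = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx_ < w and labels[ny, nx_] != lab:
                            cost += beta                  # smoothness term
                    if cost < best_cost:
                        best_lab, best_cost = lab, cost
                labels[y, x] = best_lab
    return labels

if __name__ == '__main__':
    img = np.zeros((8, 8)) + 0.1
    img[:, 4:] = 0.9                                      # two regions
    img += 0.05 * np.random.default_rng(0).standard_normal(img.shape)
    print(icm_segment(img, means=[0.1, 0.9], beta=0.5))
```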

13.
For a positive integer c, a c-vertex-ranking of a graph G=(V,E) is a labeling of the vertices of G with integers such that, for any label i, deletion of all vertices with labels > i leaves connected components, each having at most c vertices with label i. The c-vertex-ranking problem is to find a c-vertex-ranking of a given graph using the minimum number of ranks. In this paper we give an optimal parallel algorithm for solving the c-vertex-ranking problem on trees in $O(\log^2 n)$ time using a linear number of operations on the EREW PRAM model.

14.
The N-body problem is to simulate the motion of N particles under the influence of mutual force fields based on an inverse square law. Greengard's algorithm claims to compute the cumulative force on each particle in O(N) time for a fixed precision, irrespective of the distribution of the particles. In this paper we show that Greengard's algorithm is distribution dependent and has a lower bound of $\Omega(N \log^2 N)$ in two dimensions and $\Omega(N \log^4 N)$ in three dimensions. We analyze the Greengard and Barnes-Hut algorithms and show that they are unbounded for arbitrary distributions. We also present a truly distribution-independent algorithm for the N-body problem that runs in O(N log N) time for any fixed dimension.
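The hierarchical codes discussed above approximate the O(N^2) direct summation sketched below (inverse-square pairwise forces with a small softening term); this is only the baseline they accelerate, not the distribution-independent algorithm of the paper.

```python
import numpy as np

def direct_forces(pos, mass, G=1.0, eps=1e-3):
    """pos: (N, d) positions, mass: (N,) masses.  Returns (N, d) forces."""
    diff = pos[None, :, :] - pos[:, None, :]          # r_j - r_i
    dist2 = (diff ** 2).sum(-1) + eps ** 2            # softened squared distances
    inv_d3 = dist2 ** -1.5
    np.fill_diagonal(inv_d3, 0.0)                     # no self-interaction
    coef = G * mass[None, :] * mass[:, None] * inv_d3
    return (coef[..., None] * diff).sum(axis=1)

if __name__ == '__main__':
    rng = np.random.default_rng(1)
    pos = rng.random((5, 3))
    mass = np.ones(5)
    print(direct_forces(pos, mass))
```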

15.
We present a framework for the automated generation of exact search tree algorithms for NP-hard problems. The purpose of our approach is twofold: rapid development and improved upper bounds. Many search tree algorithms for various problems in the literature are based on complicated case distinctions. Our approach may lead to a much simpler process of developing and analyzing these algorithms. Moreover, using the sheer computing power of machines, it may also lead to improved upper bounds on search tree sizes (i.e., faster exact algorithms) in comparison with previously developed hand-made search trees. One such example is the NP-complete Cluster Editing problem (also known as Correlation Clustering on complete unweighted graphs), which asks for the minimum number of edge additions and deletions needed to transform a graph into a disjoint union of cliques. The hand-made search tree for Cluster Editing had worst-case size $O(2.27^k)$, which our new method improves to $O(1.92^k)$. (Here, k denotes the number of edge modifications allowed.)
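For orientation, the sketch below is the simple O(3^k) search tree for Cluster Editing: a graph is a disjoint union of cliques iff it has no induced path u-v-w, so one branches on such a conflict triple by deleting uv, deleting vw, or adding uw. The automatically generated case distinctions behind the O(1.92^k) bound refine this considerably.

```python
def find_conflict(adj):
    """Return an induced P3 (u, v, w) with edges uv, vw and non-edge uw, or None."""
    for v in adj:
        nbrs = list(adj[v])
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                u, w = nbrs[i], nbrs[j]
                if w not in adj[u]:
                    return u, v, w
    return None

def cluster_edit(adj, k):
    """Return a list of edits ((a, b), 'del'/'add') of size <= k, or None."""
    conflict = find_conflict(adj)
    if conflict is None:
        return []                               # already a cluster graph
    if k == 0:
        return None                             # budget exhausted
    u, v, w = conflict
    for a, b, op in ((u, v, 'del'), (v, w, 'del'), (u, w, 'add')):
        if op == 'del':
            adj[a].discard(b); adj[b].discard(a)
        else:
            adj[a].add(b); adj[b].add(a)
        rest = cluster_edit(adj, k - 1)
        # undo the tentative edit before trying the next branch
        if op == 'del':
            adj[a].add(b); adj[b].add(a)
        else:
            adj[a].discard(b); adj[b].discard(a)
        if rest is not None:
            return [((a, b), op)] + rest
    return None

if __name__ == '__main__':
    adj = {1: {2}, 2: {1, 3}, 3: {2}}           # a path: one edit suffices
    print(cluster_edit(adj, 1))
```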

16.
A survey of hidden-surface removal algorithms
Based on the basic principles of hidden-surface removal, this paper surveys the main object-space and image-space hidden-surface algorithms currently used in three-dimensional graphics processing, and summarizes and compares the strengths, weaknesses, and applicable scope of each. Object-space algorithms offer high precision, are not limited by display-device resolution, and are suited to engineering applications that demand accuracy; images produced by image-space algorithms are often unsatisfactory when magnified, but because these algorithms can easily exploit image coherence during raster scanning, they are usually more efficient to implement.

17.
胡珊  林丹 《计算机工程》2012,38(7):168-170
Traditional methods cannot effectively solve the capacitated arc routing problem with refill points and multiple loads (CARP-RP-ML) that arises in road maintenance operations. This paper therefore proposes an improved heuristic and a genetic algorithm. The heuristic applies different splitting procedures to individuals obtained by randomly ordering all required arcs, thereby constructing feasible solutions. The genetic algorithm uses the splitting procedure to evaluate the fitness of each individual, determining the corresponding feasible vehicle routes and refill positions, and employs local search as the mutation operator to further enlarge the search space. Numerical experiments show that the genetic algorithm solves CARP-RP-ML more effectively than the heuristic.

18.
This paper analyzes the strengths and weaknesses of three critical path algorithms in terms of algorithmic complexity, data structure representation, and ease of implementation.
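The three algorithms compared in the paper are not spelled out in the abstract; as one standard reference point, the sketch below computes a critical path on an activity-on-node network by processing tasks in topological order and backtracking along the longest chain.

```python
from graphlib import TopologicalSorter

def critical_path(duration, preds):
    """duration: task -> float; preds: task -> list of predecessor tasks."""
    order = list(TopologicalSorter(preds).static_order())
    earliest_finish, best_pred = {}, {}
    for t in order:
        ps = preds.get(t, [])
        start = max((earliest_finish[p] for p in ps), default=0.0)
        earliest_finish[t] = start + duration[t]
        best_pred[t] = max(ps, key=lambda p: earliest_finish[p]) if ps else None
    end = max(earliest_finish, key=earliest_finish.get)
    path, t = [], end
    while t is not None:                       # backtrack along the critical path
        path.append(t)
        t = best_pred[t]
    return earliest_finish[end], path[::-1]

if __name__ == '__main__':
    duration = {'a': 3, 'b': 2, 'c': 4, 'd': 1}
    preds = {'b': ['a'], 'c': ['a'], 'd': ['b', 'c']}
    print(critical_path(duration, preds))      # (8.0, ['a', 'c', 'd'])
```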

19.
Subexponential algorithms for partial cover problems
Partial Cover problems are optimization versions of fundamental and well-studied problems like Vertex Cover and Dominating Set. Here one is interested in covering (or dominating) the maximum number of edges (or vertices) using a given number k of vertices, rather than covering all edges (or vertices). In general graphs, these problems are hard for parameterized complexity classes when parameterized by k. It was recently shown by Amini et al. (2008) [1] that Partial Vertex Cover and Partial Dominating Set are fixed-parameter tractable on large classes of sparse graphs, namely H-minor-free graphs, which include planar graphs and graphs of bounded genus. In particular, it was shown that on planar graphs both problems can be solved in time $2^{O(k)} n^{O(1)}$. During the last decade there has been an extensive study of parameterized subexponential algorithms. In particular, it was shown that the classical Vertex Cover and Dominating Set problems can be solved in subexponential time on H-minor-free graphs. However, the techniques developed to obtain subexponential algorithms for classical problems do not apply to partial cover problems, and it was left as an open problem by Amini et al. (2008) [1] whether there is a subexponential algorithm for Partial Vertex Cover and Partial Dominating Set. In this paper we answer the question affirmatively by solving both problems in subexponential time, not only on planar graphs but also on much larger classes of graphs, namely apex-minor-free graphs. Compared to previously known algorithms for these problems, our algorithms are significantly faster and simpler.

20.
Timetabling is one of the most important and complex tasks in university academic administration. By comparing several automatic timetabling algorithms and analyzing their respective strengths and weaknesses, the greedy algorithm is found to be the most broadly applicable. On this basis, the paper further analyzes how a greedy algorithm incrementally resolves the practical timetabling problem and presents the concrete implementation of an automatic timetabling system based on the greedy algorithm.
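A minimal sketch of the greedy idea behind such a system: treat courses that share a teacher or student group as conflicting and assign each course, hardest-to-place first, the earliest timeslot used by none of its conflicts, i.e., greedy coloring of the conflict graph. The data and conflict rules here are illustrative only, not the paper's system.

```python
def greedy_timetable(courses, conflicts):
    """courses: list of names; conflicts: dict name -> set of conflicting names."""
    # place the most constrained courses first
    order = sorted(courses, key=lambda c: len(conflicts.get(c, ())), reverse=True)
    slot = {}
    for c in order:
        used = {slot[d] for d in conflicts.get(c, ()) if d in slot}
        s = 0
        while s in used:            # earliest slot free of conflicts
            s += 1
        slot[c] = s
    return slot

if __name__ == '__main__':
    courses = ['math', 'physics', 'history', 'chemistry']
    conflicts = {'math': {'physics', 'chemistry'},
                 'physics': {'math', 'history'},
                 'history': {'physics'},
                 'chemistry': {'math'}}
    print(greedy_timetable(courses, conflicts))
```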
