Similar Literature
20 similar documents found (search time: 15 ms)
1.
Biological sequences (e.g. DNA sequences) can be treated as strings over some fixed alphabet of characters (a, c, t and g) [1]. Sequence alignment is a procedure for comparing two or more sequences by searching for a series of individual characters that are in the same order in the sequences. Two-sequence alignment, pair-wise alignment, is a way of stacking one sequence above the other and matching characters from the two sequences that lie in the same column: identical characters are placed in …
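The pairwise alignment described above is commonly computed by dynamic programming. A minimal sketch (the Needleman–Wunsch scheme; the scoring values of +1 match, −1 mismatch, −1 gap are illustrative assumptions, not taken from the abstract):

```python
def align(a, b, match=1, mismatch=-1, gap=-1):
    """Global pairwise alignment score via dynamic programming.

    dp[i][j] holds the best score for aligning a[:i] against b[:j];
    each cell extends an alignment by a match/mismatch column or a gap.
    """
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap          # align a[:i] against an empty prefix
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + s,   # stack the two characters
                           dp[i - 1][j] + gap,     # gap in b
                           dp[i][j - 1] + gap)     # gap in a
    return dp[n][m]
```

Aligning two identical sequences scores one `match` per character; every insertion or deletion costs one `gap`.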

2.
Objective: Image matching is a key step in remote sensing image mosaicking. Matching techniques typically follow a two-step approach: an initial match set is built from the nearest/second-nearest distance ratio of high-dimensional descriptors, and false matches are then eliminated by iteratively fitting a geometric model. Although outlier-filtering algorithms have greatly improved time efficiency, the conventional two-step pipeline still spends most of its time constructing the initial matches, so the overall speed-up for remote sensing mosaicking remains limited. To improve matching efficiency, this paper proposes a fast matching method based on a spatial divide-and-conquer strategy. Method: First, a small number of initial matches are generated from large-scale image features, and paired divide-and-conquer space centre points are constructed between the two images from these initial matches. Then, a range tree is used to search for the feature points within a given radius of each centre point, forming paired divide-and-conquer point sets. Finally, feature matching is performed independently within each point set. Results: Experiments on a large number of remote sensing images of different sizes and relative rotations show that, compared with conventional and other state-of-the-art methods, the proposed method reduces matching time to between 1/100 and 1/10 while maintaining high accuracy. Conclusion: Using initial seed matches to build divide-and-conquer matching centres, so that image matching is decomposed over multiple subregions, improves the efficiency of remote sensing image matching; the algorithm's good time performance is of practical value for real-time remote sensing applications.

3.
U. Rösler 《Algorithmica》2001,29(1-2):238-261
This paper develops general tools for the analysis of stochastic divide and conquer algorithms. We concentrate on the average performance and the distribution of the running time of the algorithm. As a special example we analyse the average performance and the running time distribution of the (2k + 1)-median version of Quicksort. Online publication October 13, 2000.
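For illustration, the (2k + 1)-median pivot rule analysed in the paper can be sketched as follows; k = 1 gives the familiar median-of-three Quicksort. The implementation details are assumptions for illustration, not the paper's code:

```python
import random

def median_of_sample(xs, k=1):
    """Pick the median of a random sample of 2k + 1 elements as the pivot."""
    sample = random.sample(xs, min(2 * k + 1, len(xs)))
    return sorted(sample)[len(sample) // 2]

def quicksort(xs, k=1):
    """Quicksort with the (2k + 1)-median pivot selection rule."""
    if len(xs) <= 1:
        return xs
    pivot = median_of_sample(xs, k)
    less = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    return quicksort(less, k) + equal + quicksort(greater, k)
```

Larger k makes the pivot a better estimate of the true median, trading sampling cost for more balanced recursion, which is exactly the trade-off the running-time analysis quantifies.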

4.
5.
After land-use status data are converted from CAD format to GIS format, land classification codes must be reassigned as attributes of the parcel objects. To improve the efficiency of automatic assignment over massive spatial data, this work studies applying the divide-and-conquer paradigm to spatial overlay analysis. The study shows that any problem requiring spatial overlay analysis to determine the spatial relationships between objects in different layers can use divide and conquer to lower the time complexity. With minimal partitioning, based on a quadtree spatial index, the divide-and-conquer algorithm reduces the time complexity of such applications to O(n log₂ n). Practical applications verify the efficiency and practical value of the method for massive spatial data processing.

6.
This paper discusses our efforts in implementing a divide and conquer algorithm (adaptive quadrature) on the HEP computer system. The one PEM HEP system performs in a MIMD fashion by pipelining execution of instructions from different processes. Unlike most divide and conquer approaches, our strategy ensures that the program will never deadlock due to memory expansion or spawning too many processes. Within this constraint we develop and analyse two different implementations: one using a static number of processes and the other a dynamic number of processes. Our results examine the relative performance of these two schemes. In addition we briefly discuss some of our impressions concerning some ‘myths of parallel programming’.

7.
《国际计算机数学杂志》2012,89(3-4):121-132
In this paper the divide-and-conquer approach to the two-dimensional closest-pair problem is studied. A new algorithm is proposed by which a closest pair, i.e. a pair of points whose distance δ is smallest among a set of N points, is found in Θ(N) expected time when the points are drawn independently from the uniform distribution in a unit square. The worst-case running time of the algorithm is Θ(N log² N). The method is to project the points onto one of the coordinate axes, and to compute an initial guess for the smallest distance δ by considering the [N/2] pairs of successive projected points. The shortest of these pairwise distances is a good approximation for the final δ. It is then used in the subsequent merge phases of the divide-and-conquer algorithm to keep the average work minimal. A modification of the basic algorithm guarantees Θ(N) performance in the average case and Θ(N log N) performance in the worst case.
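The initial-guess step described above — project onto an axis and take the shortest distance among successive projected points — can be sketched as follows (an illustrative fragment, not the paper's implementation; the full algorithm would then use this δ estimate in its merge phases):

```python
import math

def initial_delta(points):
    """Upper-bound estimate of the closest-pair distance delta.

    Sort the points by x-coordinate (projection onto the x-axis) and take
    the minimum distance over pairs of successive projected points.
    Requires at least two points.
    """
    pts = sorted(points)  # lexicographic sort; primary key is x
    return min(math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))
```

Because every candidate pair examined is an actual pair of input points, the returned value is always an upper bound on the true δ, which is what makes it safe to use for pruning in the merge phases.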

8.
Attribute core computation is an important research topic in rough set theory. Incorporating the divide-and-conquer idea into rough set algorithms, this paper partitions the universe over the attribute set of a decision table using divide and conquer, and gives a divide-and-conquer method for computing the positive region with time complexity O(|U|×|C|). On this basis, a divide-and-conquer method for computing the attribute core is given, with time complexity O(|U|×|C|²). Both algorithms remain linear in |U|. Experimental results show that the algorithms not only handle UCI data sets efficiently but are also suitable for large data sets.
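For context, the positive region groups objects into equivalence classes under the condition attributes and keeps the classes whose decision value is consistent. The sketch below uses hash-based grouping rather than the paper's divide-and-conquer partition of the universe, purely for illustration:

```python
from collections import defaultdict

def positive_region(universe, cond, dec):
    """Positive region of a decision table.

    universe: list of objects; cond(x): tuple of condition attribute values;
    dec(x): decision value.  An object belongs to the positive region iff
    its equivalence class under cond has a single decision value.
    """
    groups = defaultdict(list)
    for x in universe:
        groups[cond(x)].append(x)       # equivalence classes under C
    pos = []
    for members in groups.values():
        if len({dec(x) for x in members}) == 1:  # consistent class
            pos.extend(members)
    return pos
```

A core attribute is then one whose removal shrinks the positive region, which the paper's O(|U|×|C|²) algorithm checks once per attribute.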

9.
In this paper, we propose an optimal VLSI implementation for a class of programmable FIR filters with binary coefficients, whose architecture is based on a parameterized divide and conquer approach. The proposed design is shown to be easily extendable to FIR filters with multibit coefficients of arbitrary sign. The area efficiency achieved in comparison to direct form realization is demonstrated by VLSI implementation examples, synthesized in TSMC 0.18-μm single poly six metal layer CMOS process using state-of-the-art VLSI EDA tools. The possible saving in average power consumption is estimated using gate-level power analysis. Suggestions for applications and topics for further research conclude the paper.

10.
Most conventional fast clustering algorithms are based on fuzzy C-means (FCM), but FCM is sensitive to the initial cluster centres and to noisy data, and easily converges to local minima, so its clustering accuracy is limited. This paper establishes an algorithmic framework that solves the clustering problem with a divide-and-conquer strategy, taking the characteristics of the data fully into account and improving the traditional FCM algorithm. Experiments on standard data sets show that this divide-and-conquer FCM clustering algorithm improves clustering accuracy and accelerates convergence.

11.
To address the long running time and suboptimal clustering results of the original k-means algorithm in MapReduce modelling, a divide-and-conquer k-means clustering method based on MapReduce is proposed. A divide-and-conquer approach handles the large data set: the full data set is split into smaller blocks stored in the main memory of each machine; distributed across the available machines, each block is clustered independently by its assigned machine; the minimum weighted distance determines the cluster to which each data point is assigned, and convergence is then checked. Experimental results show that, compared with traditional k-means and streaming k-means, the proposed method is faster and produces better results.
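The block-wise scheme described above can be sketched without a MapReduce framework: cluster each block independently (the "map" step), then cluster the weighted block centroids to obtain global centres (the "reduce" step). The details below — block count, Lloyd-style local k-means, merging by weighted repetition of centroids — are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def kmeans(data, k, iters=20, seed=0):
    """Plain Lloyd's k-means; returns (centers, labels)."""
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(data[:, None] - centers, axis=2), axis=1)
        centers = np.array([data[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers, labels

def divide_kmeans(data, k, n_blocks=4):
    """Divide-and-conquer k-means: cluster blocks, then merge centroids."""
    blocks = np.array_split(data, n_blocks)
    all_centers, weights = [], []
    for block in blocks:                       # "map": independent per block
        centers, labels = kmeans(block, k)
        all_centers.append(centers)
        weights.append([np.sum(labels == j) for j in range(k)])
    stacked = np.vstack(all_centers)
    w = np.concatenate(weights)
    # "reduce": weighted k-means over block centroids, weights realised
    # by repeating each centroid according to its cluster size
    repeated = np.repeat(stacked, np.maximum(w.astype(int), 1), axis=0)
    global_centers, _ = kmeans(repeated, k)
    return global_centers
```

Because only centroids and counts cross block boundaries, the reduce step touches O(k · n_blocks) points rather than the full data set.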

12.
The discovery of knowledge through data mining provides a valuable asset for addressing decision making problems. Although a list of features may characterize a problem, it is often the case that a subset of those features may have more influence on a certain group of events constituting a sub-problem within the original problem. We propose a divide-and-conquer strategy for data mining, using both data-based sensitivity analysis for extracting feature relevance and expert evaluation, for splitting the problem of characterizing telemarketing contacts to sell bank deposits. As a result, the call direction (inbound/outbound) was considered the most suitable candidate feature. The inbound telemarketing sub-problem re-evaluation led to a large increase in targeting performance, confirming the benefits of such an approach, considering the importance of telemarketing for business, in particular bank marketing.

13.
Kulkarni  S.R.  Mitter  S.K.  Tsitsiklis  J.N. 《Machine Learning》1993,11(1):23-35
The original and most widely studied PAC model for learning assumes a passive learner in the sense that the learner plays no role in obtaining information about the unknown concept. That is, the samples are simply drawn independently from some probability distribution. Some work has been done on studying more powerful oracles and how they affect learnability. To find bounds on the improvement in sample complexity that can be expected from using oracles, we consider active learning in the sense that the learner has complete control over the information received. Specifically, we allow the learner to ask arbitrary yes/no questions. We consider both active learning under a fixed distribution and distribution-free active learning. In the case of active learning, the underlying probability distribution is used only to measure distance between concepts. For learnability with respect to a fixed distribution, active learning does not enlarge the set of learnable concept classes, but can improve the sample complexity. For distribution-free learning, it is shown that a concept class is actively learnable iff it is finite, so that active learning is in fact less powerful than the usual passive learning model. We also consider a form of distribution-free learning in which the learner knows the distribution being used, so that distribution-free refers only to the requirement that a bound on the number of queries can be obtained uniformly over all distributions. Even with the side information of the distribution being used, a concept class is actively learnable iff it has finite VC dimension, so that active learning with the side information still does not enlarge the set of learnable concept classes.

14.
A Hopfield neural network for a large scale problem optimisation poses difficulties due to the issues of stability and the determination of network parameters. In this paper, we introduce the concept of a divide and conquer algorithm to solve large scale optimisation problems using the Hopfield neural network. This paper also introduces the Grossberg Regularity Detector (GRD) neural network as a partition tool. This neural network based partition tool has the advantages of reducing the complexity of partition selection as well as removing the recursive division process during the divide and conquer operation. A large scale combinatorial optimisation problem (i.e. sequence-dependent set-up time minimisation problem with a large number of parts (N > 100)) is linearly partitioned into smaller sets of sub-problems based on their similarity relations. With a large number of parts (N > 100), the problem could not effectively be verified with other methods, such as the heuristic or branch and bound methods. Hence, the effectiveness of the divide and conquer strategy implemented by the GRD neural network in conjunction with a Hopfield neural network was benchmarked against the first-come first-serve method, and the Hopfield neural network based on arbitrary separations. The results showed that the divide and conquer strategy of the GRD neural network was far superior to the other methods.

15.
Cost-sensitive attribute selection aims to obtain an attribute subset with minimal total cost by trading off test cost against misclassification cost. Most existing cost-sensitive attribute selection methods assume a fixed misclassification cost and therefore handle problems such as imbalanced class distributions poorly; on large data sets, poor efficiency is also one of the main issues. To address these problems, with minimal total cost as the objective, a new dynamic misclassification cost mechanism is designed. Using the divide-and-conquer idea, each data set is adaptively split by columns according to its size. The minimal-cost attribute selection problem is redefined under dynamic misclassification costs, and a divide-and-conquer algorithm for cost-sensitive attribute selection under dynamic misclassification costs is proposed. Experiments show that the algorithm improves efficiency while achieving the optimal misclassification cost, thereby guaranteeing that the total cost of the resulting attribute subset is minimal.

16.
The traditional MDS-MAP algorithm performs localisation by extracting features from the distance information between all node pairs in the network at once, which has relatively high computational complexity and limits localisation speed. To address this problem, a fast divide-and-conquer multidimensional scaling localisation algorithm, DMDS-MAP, is proposed; by removing redundant data from the transformation, it can effectively improve the localisation speed of the original MDS-MAP algorithm. DMDS-MAP partitions the distance matrix, selecting diagonal blocks as submatrices to eliminate redundant data; singular value decomposition extracts features of the specified dimension from each submatrix and converts them into relative coordinates; the relative node coordinates obtained from the submatrices are then fused to give the relative coordinates of all nodes; finally, the global absolute coordinates of all nodes are obtained from the anchor node coordinate information. Experimental results show that, at similar localisation accuracy, DMDS-MAP runs markedly faster than MDS-MAP as the density of participating nodes increases.
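The per-submatrix step of DMDS-MAP amounts to classical multidimensional scaling on a distance (sub)matrix. A minimal sketch of classical MDS — not the paper's full partition-and-fusion pipeline — under the usual double-centring formulation:

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical MDS: relative coordinates from a pairwise distance matrix.

    Double-centre the squared distance matrix to get the Gram matrix B,
    then take the top `dim` eigenpairs as coordinates.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centring matrix
    B = -0.5 * J @ (D ** 2) @ J                  # Gram matrix of centred points
    vals, vecs = np.linalg.eigh(B)               # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:dim]           # top `dim` eigenpairs
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))
```

The output is determined only up to rotation, reflection, and translation, which is why MDS-MAP needs anchor nodes to recover absolute coordinates in a final alignment step.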

17.
The purpose of this study was to examine whether a digital divide exists between elementary school aged children with learning disabilities (LD) and their nondisabled peers in Taiwan. A self-reported questionnaire regarding information and communication technology (ICT) access and ICT competency, the Scale of Digital Participation of Elementary School Students, designed by the authors, was used to collect data. In total, 117 students with LD and 117 peers without disabilities were recruited and completed the questionnaire. The results indicated that there was no significant difference in the opportunities to access computers and the Internet at home and at school between children with and without LD. However, a significant difference was found in ICT competencies between children with and without LD. Moreover, students without LD enhanced their computer competency gradually year by year, but students with LD did not. The findings of this study support the notion that mere provision of ICT access is not sufficient for children with LD to master ICT skills; specifically designed ICT instruction programs should be provided to children with LD. Finally, suggestions for future studies are discussed.

18.
Branch and reduce is one of the techniques currently widely used for solving hard problems in combinatorial optimisation; its core idea is to branch the original problem into several subproblems and solve them recursively. Weighted divide and conquer (measure and conquer) is a new technique in algorithm design and time complexity analysis. A recursive branch-and-reduce algorithm is designed for the maximum clique problem. Conventional analysis gives its time complexity as O(1.380ⁿ p(n)), where p(n) is a polynomial function of the problem size n. Applying the measure-and-conquer technique to the same algorithm lowers the bound from O(1.380ⁿ p(n)) to O(1.325ⁿ p(n)). The results show that measure and conquer yields more precise time complexity bounds.
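The branch-and-reduce core — branch on a vertex, recurse on "include" and "exclude" subproblems, and prune with a simple bound — can be sketched as follows. This is an illustrative baseline without the paper's measure-and-conquer refinements:

```python
def max_clique(adj):
    """Maximum clique by branch and reduce.

    adj: dict mapping each vertex to the set of its neighbours
    (the adjacency relation must be symmetric).
    """
    best = []

    def expand(clique, candidates):
        nonlocal best
        if not candidates:
            if len(clique) > len(best):
                best = clique[:]
            return
        if len(clique) + len(candidates) <= len(best):
            return                        # bound: cannot beat current best
        v = next(iter(candidates))
        # branch 1: include v; only v's neighbours remain candidates
        expand(clique + [v], candidates & adj[v])
        # branch 2: exclude v entirely
        expand(clique, candidates - {v})

    expand([], set(adj))
    return best
```

Conventional analysis bounds the recursion by the size of the candidate set; measure and conquer instead assigns non-uniform weights to vertices according to their state, which is how the paper tightens 1.380ⁿ to 1.325ⁿ.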

19.
The scalability of DNA computers has been one of the important research focuses in biological computing in recent years. Based on the parallel computation requirements of solving the exact cover problem by DNA computing, this work combines the operations of the Adleman–Lipton model with the solution space of the sticker model, introduces fluorescent labelling and gel electrophoresis techniques, and proposes a DNA computing model for the exact cover problem together with a divide-and-conquer DNA computer algorithm. The algorithm consists of three sub-algorithms: the initial solution space generation algorithm Init(), the redundant solution removal algorithm IllegalRemove(), and the parallel searcher ParallelSeacher(). A performance comparison with similar algorithms shows that, while keeping the biological operation complexity polynomial, the algorithm reduces the number of DNA strands needed to solve an n-dimensional exact cover problem from O(2ⁿ) to O(1.414ⁿ), thereby raising the cardinality of the exact cover problems solvable in vitro by a DNA computer from 60 to 120, improving on the results of related work.
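For reference, the exact cover problem that the DNA algorithm targets can be stated as a small backtracking search on conventional hardware (an illustrative sketch unrelated to the paper's DNA operations):

```python
def exact_cover(universe, subsets):
    """Find indices of subsets that partition `universe` exactly, or None.

    Classic backtracking: pick an uncovered element, try every subset
    containing it, and recurse on the subsets disjoint from the choice.
    """
    def solve(remaining, chosen, options):
        if not remaining:
            return chosen
        x = min(remaining)                       # branch on an uncovered element
        for i, s in options:
            if x in s and s <= remaining:
                result = solve(remaining - s, chosen + [i],
                               [(j, t) for j, t in options if not (t & s)])
                if result is not None:
                    return result
        return None

    return solve(frozenset(universe), [], list(enumerate(subsets)))
```

The DNA approach explores such candidate covers massively in parallel as strand populations; the divide-and-conquer construction is what brings the strand count down from O(2ⁿ) to O(1.414ⁿ).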

20.
We investigate the parallel complexity of learning formulas from membership and equivalence queries. We show that many restricted classes of boolean functions cannot be efficiently learned in parallel with a polynomial number of processors.


Copyright©北京勤云科技发展有限公司    京ICP备09084417号-23
