Related Articles
20 related articles found (search time: 187 ms)
1.
周宇  赵威  刘国华  貟慧  翟红敏  万小妹 《软件学报》2014,25(S2):136-146
High redundancy among query results is a pressing problem in top-k query processing. Existing solutions must traverse every object in the initial result set, so query processing is inefficient. To improve efficiency, the initial result set is mapped into Euclidean space; following a pull-based strategy, either a score-based or a distance-based method can be used to select a diversity-optimal subspace from this space. In the distance-based method, the Euclidean subspace is partitioned, and the probe positions together with the geometric properties of the Voronoi diagram are exploited to reduce the number of objects examined by the secondary query. On this basis, a bounded diversification algorithm for top-k query results is proposed and its correctness is proved. Experimental results show that the proposed algorithm improves the efficiency of top-k query processing.

2.
李淼  谷峪  陈默  于戈 《软件学报》2017,28(2):310-325
With the rapid development of geo-positioning technology, applications built on online location-based services are becoming increasingly common. This paper proposes a new query type, the reverse spatial-preference top-k query. Like the traditional reverse spatial top-k query, for a given spatial query object it returns the users for whom that object ranks among their top-k by attribute score. The difference is that the object's attributes are not intrinsic; they are determined by computing spatial relationships (such as distance) between the object and other preference objects. Such queries are needed in market analysis and many other important domains, for example, to gauge how popular a facility is within a region. However, because the spatial relationships among a large number of spatial objects are very expensive to compute, computing an object's spatial attribute score in real time poses a major challenge for query processing. Optimized query-processing algorithms are proposed for this problem, including dataset pruning, batch processing of the dataset, and weight-based user grouping. Theoretical analysis and extensive experiments demonstrate the effectiveness of the proposed methods: compared with the straightforward approach, they substantially improve execution time and I/O efficiency.

3.
Real-world networks typically exhibit community structure, and community search is a fundamental task in graph data mining. Existing work has proposed various models for identifying communities, such as k-core-based and k-truss-based models. However, these models usually only constrain the number of neighbors of each node or edge within the community and ignore the relationships among those neighbors, i.e., each node's neighborhood structure, which leads to low local density among nodes inside the community. To address this problem, this paper incorporates neighborhood-structure information into the k-core dense subgraph, proposes a new community model based on the neighborhood-connected k-core, and defines a measure of community density. Based on this model, it studies the densest single-community search problem, which returns the community that contains the query node set and has the highest density. In real-world graph data, a set of query nodes may be distributed across several disjoint communities, so the paper further studies density-threshold-based multi-community search, which returns multiple communities containing the query node set, each with density no lower than a user-specified threshold. For both problems, the concept of edge density is first defined and a baseline algorithm based on edge density is proposed. To improve search efficiency, an index tree and an improved index tree are designed that can return query results in polynomial time. Comparisons with the baseline algorithm on multiple datasets verify the effectiveness of the neighborhood-connected k-core community model and the efficiency of the proposed query algorithms.
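For context, the basic k-core that this model extends can be computed by the classic peeling procedure sketched below (a generic Python illustration only; the function name and adjacency-dict representation are assumptions, and the sketch does not capture the additional constraints the neighborhood-connected model places on the structure among each node's neighbors):

```python
def k_core(adj, k):
    """Classic k-core peeling: repeatedly delete nodes with fewer than k
    neighbors until every remaining node has degree >= k in the subgraph.
    adj maps each node to the set of its neighbors."""
    adj = {u: set(vs) for u, vs in adj.items()}   # work on a copy
    changed = True
    while changed:
        changed = False
        for u in list(adj):
            if len(adj[u]) < k:
                for v in adj.pop(u):
                    if v in adj:
                        adj[v].discard(u)
                changed = True
    return adj    # remaining nodes and edges form the k-core
```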

4.
王洪亚  杨利宏  刘晓强 《软件学报》2016,27(12):3051-3066
Similarity-join algorithms are widely used in data cleaning, data integration, and near-duplicate web-page detection. Existing similarity joins fall into two categories: joins based on a similarity threshold and top-k similarity joins. Top-k join algorithms are well suited to applications where the similarity threshold is unknown; the most effective top-k similarity-join algorithm to date is Topk-join, proposed by Xiao et al. To address the performance problems of Topk-join, this paper proposes a top-k similarity-join algorithm, Opt-join. It integrates token batch processing into the existing event-driven framework to reduce the cost of handling prefix events, and it lowers the hash-lookup cost by swapping the execution order of hash lookups and filtering, with a theoretical proof that the swap preserves correctness. Experimental results show that Opt-join achieves a 1.28x-3.09x speedup over Topk-join, and that its performance advantage keeps growing as the record length or the value of k increases.

5.
The DBSCAN clustering algorithm cannot effectively cluster datasets whose density varies. The VDBSCAN algorithm addresses this by using the k-dist plot to automatically obtain a neighborhood radius for the data objects at each density level, which solves the clustering problem for datasets with multiple density levels. The k-VDBSCAN algorithm obtains the value of k automatically, reducing the influence of the parameter k in VDBSCAN on the final clustering result. Building on the existing k-VDBSCAN algorithm and exploiting the characteristics of the distances between data objects in the dataset itself, this paper proposes an improved clustering algorithm that determines k automatically. Theoretical analysis and experimental results show that the improved algorithm obtains the parameter k automatically and effectively, and yields clear improvements in both clustering quality and time efficiency.
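The k-dist idea that VDBSCAN builds on can be pictured with the short Python sketch below (a rough illustration only, not the paper's k-VDBSCAN procedure; the brute-force distance computation and the helper name k_dist_curve are assumptions): sort every point's distance to its k-th nearest neighbor and read candidate neighborhood radii off the bends of the resulting curve, one per density level.

```python
import numpy as np

def k_dist_curve(points: np.ndarray, k: int) -> np.ndarray:
    """Sorted k-dist values: for each point, the distance to its k-th
    nearest neighbor (self excluded). Bends ("knees") in this curve
    suggest candidate eps values, one per density level."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(dist, np.inf)              # ignore self-distances
    kth = np.partition(dist, k - 1, axis=1)[:, k - 1]
    return np.sort(kth)

# Usage: plot or inspect the curve and pick eps values near the sharp bends.
pts = np.random.rand(500, 2)
curve = k_dist_curve(pts, k=4)
```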

6.
李鸣鹏  高宏  邹兆年 《软件学报》2014,25(4):797-812
This paper studies k-reach query processing based on graph compression. It proposes a graph-compression algorithm, k-RPC, that supports k-reach queries, together with a query-processing algorithm that needs no decompression; k-RPC is optimal among all equivalence-class-based compression algorithms that support k-reach queries. Because k-RPC relies on a strict equivalence relation, a linear-time approximate compression algorithm, k-GRPC, is further proposed: it allows some edges to be deleted from the original graph and then applies k-RPC to obtain a better compression ratio, and a linear-time decompression-free query-processing algorithm is provided for it. Experiments on real data show that the two compression algorithms reach a compression ratio of 45% on sparse original graphs, and 75% and 67%, respectively, on dense original graphs; compared with query processing directly on the original graph, the two compression-based query-processing algorithms are more efficient, improving query efficiency on sparse graphs by a factor of 2.5.

7.
A Wavelength Assignment Algorithm for Hypercube Communication Patterns on RP(k) Networks
Wavelength assignment is a fundamental problem in optical network design, and designing wavelength-assignment algorithms is a basic way to gauge the communication capability of an optical network. This paper studies the wavelength-assignment problem on optical RP(k) networks. For the Hypercube communication pattern over N = 2^n nodes, an ordering X_n of the nodes is constructed and a wavelength-assignment algorithm on RP(k) networks is designed. In the course of constructing this algorithm, a wavelength-assignment algorithm for realizing the n-dimensional Hypercube communication pattern on ring networks is also obtained; both algorithms have high embedding efficiency. On an RP(k) network, realizing the Hypercube communication pattern requires max{2, ⌈5(2^n-5)/3⌉} wavelengths, while on a ring network it requires multiplexing ⌈N/3 + N/12⌉ wavelengths, a clear improvement over the ⌈N/3 + N/4⌉ wavelengths required by existing algorithms. Both algorithms are of practical value for the design of optical networks.

8.
武优西  刘茜  闫文杰  郭磊  吴信东 《软件学报》2021,32(11):3331-3350
Nonoverlapping sequential pattern mining is a gap-constrained sequential pattern mining method that finds valuable frequent patterns more easily than comparable mining methods. Its core problem is computing the support (number of occurrences) of a given pattern in a sequence in order to decide whether the pattern is frequent, and computing pattern support is in essence nonoverlapping pattern matching. Current research computes the support by iteratively searching for nonoverlapping occurrences and then pruning useless nodes, with time complexity O(m×m×n×W), where m, n, and W are the pattern length, the sequence length, and the maximum gap, respectively. To further speed up nonoverlapping pattern matching, and thereby effectively reduce the time of nonoverlapping sequential pattern mining, an efficient algorithm is proposed. It converts the pattern-matching problem into a nettree and then, starting from the minimal root node of the nettree and using a backtracking strategy that iteratively searches for the leftmost child, computes nonoverlapping minimal occurrences; after each such occurrence is pruned from the nettree, the problem can be solved without further searching for and pruning invalid nodes. The completeness of the algorithm is proved theoretically, and its time complexity is reduced to O(m×n×W). On this basis, the paper points out three further similar solution strategies: starting from the leftmost leaf and iteratively finding the leftmost parent, starting from the rightmost root and iteratively finding the rightmost child, and starting from the rightmost leaf and iteratively finding the rightmost parent. Experimental results verify the performance of the algorithm; in particular, in sequential pattern mining, mining algorithms that apply this method reduce the mining time.

9.
A continuous k-nearest-neighbor (CKNN) query over moving objects takes a set of continuously moving objects and, for any kNN query q, computes the k nearest neighbors of q in real time and keeps the result up to date during the query's validity period. Many location-based services in transportation, social networking, e-commerce, and other fields involve this fundamental problem. Most existing work answers CKNN queries by iteratively determining a query range that contains the k nearest neighbors, and each iteration must count the moving objects currently inside the range from their positions, so the iteration accounts for a large share of the query cost. This paper proposes a dual index structure (grid GMM index, GGI) that combines a grid index with a Gaussian-mixture model of the moving-object density, and designs an incremental CKNN algorithm (incremental search for continuous k nearest neighbors, IS-CKNN). The lower layer of GGI maintains the massive set of moving objects in a grid index, while the upper layer builds a Gaussian mixture model to approximate the distribution of moving objects in two-dimensional space. For a given kNN query q, IS-CKNN uses the mixture model to directly determine a region that contains the k nearest neighbors of q, avoiding the multiple iterations of existing algorithms; when the moving objects or the query q change position, an efficient incremental strategy reuses existing results as much as possible to reduce the computation of the current query. Extensive experiments on a DiDi Chengdu ride-hailing dataset and two synthetic datasets fully validate the performance of the algorithm.
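The lower grid layer of GGI can be pictured with the Python sketch below (a generic uniform-grid index with an expanding-ring kNN search; the cell size, API, and stopping rule are assumptions). The ring expansion shown here is precisely the kind of iterative range enlargement that IS-CKNN's density model is intended to avoid.

```python
from collections import defaultdict
import math

class GridIndex:
    """Uniform grid over 2-D points, the kind of lower-layer structure a
    moving-object index maintains (cell size and API are assumptions)."""
    def __init__(self, cell: float):
        self.cell = cell
        self.cells = defaultdict(set)   # (cx, cy) -> object ids
        self.pos = {}                   # object id -> (x, y)

    def _key(self, x, y):
        return (int(x // self.cell), int(y // self.cell))

    def update(self, oid, x, y):
        """Insert an object or move it; only the old and new cells are touched."""
        if oid in self.pos:
            self.cells[self._key(*self.pos[oid])].discard(oid)
        self.pos[oid] = (x, y)
        self.cells[self._key(x, y)].add(oid)

    def knn(self, qx, qy, k):
        """Expanding-ring kNN: enlarge the square of cells around the query
        until the k-th candidate is provably within the covered radius."""
        cx, cy = self._key(qx, qy)
        ring = 0
        while True:
            cand = set()
            for dx in range(-ring, ring + 1):
                for dy in range(-ring, ring + 1):
                    cand |= self.cells.get((cx + dx, cy + dy), set())
            found = sorted(cand, key=lambda o: math.dist(self.pos[o], (qx, qy)))[:k]
            covered = ring * self.cell      # distance guaranteed covered by the ring
            if len(found) == k and math.dist(self.pos[found[-1]], (qx, qy)) <= covered:
                return found
            if len(cand) == len(self.pos):  # every object has been examined
                return found
            ring += 1
```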

10.
Approximation Hardness of k-Median and Analysis of a Local Search Approximation Algorithm
Approximation algorithms for the k-Median problem have long attracted the attention of computer scientists, but most existing results concern Euclidean and metric spaces; results for k-Median in general distance spaces have been missing for many years. This paper considers the k-Median problem in general distance spaces. Let dmax/dmin denote the largest ratio, over all client points of a k-Median instance, of the longest to the shortest edge incident to a client. It is first shown that k-Median with dmax/dmin ≤ ω + ε admits no polynomial-time approximation algorithm with ratio less than 1 + (ω−1)/e unless NP ⊆ DTIME(n^O(log log n)); it follows that metric k-Median cannot be approximated within 1 + 2/e unless NP ⊆ DTIME(n^O(log log n)). A local search algorithm for k-Median is then given, and its analysis shows that if dmax/dmin ≤ ω, the algorithm's approximation ratio is 1 + (ω−1)/2. The result also applies to metric k-Median: when ω ≤ 5, the local search algorithm approximates metric k-Median within a factor of 3, which is better than the existing bound of 3 + 2/p. Computer experiments further examine the practical performance of the local search algorithm and possible improvements to it.
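The single-swap local search scheme that such analyses refer to can be sketched as follows (a generic Python sketch under standard assumptions: dist is a full point-to-point distance matrix, centers are chosen among the points themselves, and the function names are made up; the paper's exact algorithm and tie-breaking may differ):

```python
import random

def kmedian_cost(dist, centers):
    """Total service cost: each client is served by its nearest open center."""
    return sum(min(dist[c][f] for f in centers) for c in range(len(dist)))

def local_search_kmedian(dist, k, seed=0):
    """Single-swap local search: start from k random centers and repeatedly
    apply the best cost-reducing swap of one open center for one closed
    facility, until no swap improves the cost."""
    rng = random.Random(seed)
    n = len(dist)
    centers = set(rng.sample(range(n), k))
    while True:
        best_cost, best_swap = kmedian_cost(dist, centers), None
        for out in centers:
            for inn in range(n):
                if inn in centers:
                    continue
                cost = kmedian_cost(dist, (centers - {out}) | {inn})
                if cost < best_cost:
                    best_cost, best_swap = cost, (out, inn)
        if best_swap is None:
            return centers            # local optimum reached
        out, inn = best_swap
        centers = (centers - {out}) | {inn}
```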

11.
Sequential pattern mining has been studied extensively in the data mining community. Most previous studies require the specification of a min_support threshold for mining the complete set of sequential patterns satisfying the threshold. However, in practice it is difficult for users to provide an appropriate min_support threshold. To overcome this difficulty, we propose an alternative mining task: mining top-k frequent closed sequential patterns of length no less than min_ℓ, where k is the desired number of closed sequential patterns to be mined and min_ℓ is the minimum length of each pattern. We mine the set of closed patterns because it is a compact representation of the complete set of frequent patterns. An efficient algorithm, called TSP, is developed for mining such patterns without min_support. Starting at (absolute) min_support=1, the algorithm makes use of the length constraint and the properties of top-k closed sequential patterns to perform dynamic support raising and projected database pruning. Our extensive performance study shows that TSP has high performance. In most cases, it outperforms the efficient closed sequential pattern-mining algorithm CloSpan, even when the latter runs with the best-tuned min_support threshold. We therefore conclude that, for sequential pattern mining, mining top-k frequent closed sequential patterns without min_support is preferable to traditional min_support-based mining.
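The dynamic support raising used by TSP can be illustrated independently of the sequence enumeration itself (a minimal bookkeeping sketch in Python, not TSP; the class name and API are hypothetical): buffer the k best supports seen so far in a min-heap and, once k candidates are buffered, raise the running minimum support to the current k-th best value so that weaker branches can be pruned.

```python
import heapq, itertools

class TopKSupportRaiser:
    """Bookkeeping for top-k mining with dynamic support raising: keep the
    k highest supports seen so far; min_support starts at 1 and is raised
    to the current k-th best support so weaker branches can be pruned."""
    def __init__(self, k: int):
        self.k = k
        self.heap = []                    # min-heap of (support, tiebreak, pattern)
        self.tiebreak = itertools.count()
        self.min_support = 1

    def offer(self, pattern, support: int) -> None:
        if support < self.min_support:
            return                        # prunable under the current threshold
        heapq.heappush(self.heap, (support, next(self.tiebreak), pattern))
        if len(self.heap) > self.k:
            heapq.heappop(self.heap)
        if len(self.heap) == self.k:
            self.min_support = self.heap[0][0]   # raise the running threshold

    def results(self):
        return [(s, p) for s, _, p in sorted(self.heap, reverse=True)]
```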

12.
Online mining of path traversal patterns from continuous Web click streams is one of the challenging research problems of Web usage mining. Most previous work focuses on mining path traversal patterns over the entire history of Web click streams, yet mining the recent changes of Web click streams can provide valuable information for their analysis. In this paper, we propose a new online mining algorithm, called Top-DSW (top-k path traversal patterns of a stream Damped Sliding Window), to discover the set of top-k path traversal patterns from streaming maximal forward references, where k is the desired number of path traversal patterns to be mined. An effective summary data structure, called the TKP-DSW-list (a list of top-k path traversal patterns of stream Damped Sliding Windows), is developed to maintain the essential information about the top-k path traversal patterns from the maximal forward references within a stream damped sliding window. An effective space-pruning mechanism, called TKR-list-maintain, is developed to control the memory requirement of the TKP-DSW-list. Experimental studies show that the proposed Top-DSW algorithm is an efficient, single-pass algorithm for online mining of the set of top-k path traversal patterns over stream damped sliding windows.

13.
Finding typical instances is an effective approach to understanding and analyzing large data sets. In this paper, we apply the idea of typicality analysis from psychology and cognitive science to database query answering, and study the novel problem of answering top-k typicality queries. We model typicality in large data sets systematically and formulate three types of top-k typicality queries. To answer questions like "Who are the top-k most typical NBA players?", the measure of simple typicality is developed. To answer questions like "Who are the top-k most typical guards distinguishing guards from other players?", the notion of discriminative typicality is proposed. Moreover, to answer questions like "Who are the best k typical guards representing the different types of guards as a whole?", the notion of representative typicality is used. Computing the exact answer to a top-k typicality query requires quadratic time, which is often too costly for online query answering on large databases. We develop a series of approximation methods for various situations: (1) the randomized tournament algorithm has linear complexity, though it does not provide a theoretical guarantee on the quality of the answers; (2) the direct local typicality approximation using VP-trees provides an approximation quality guarantee; (3) a local typicality tree data structure can be exploited to index a large set of objects, so that typicality queries can be answered efficiently with quality guarantees by a tournament method based on a local typicality tree. An extensive performance study using two real data sets and a series of synthetic data sets clearly shows that top-k typicality queries are meaningful and our methods are practical.
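Simple typicality can be read, roughly, as a density score, as in the Python sketch below (an illustration under assumptions: a Gaussian kernel with bandwidth h and a brute-force O(n^2) computation stand in for the paper's exact definition and its approximation methods):

```python
import math

def simple_typicality(objects, dist, h=1.0):
    """Simple typicality of each object as its Gaussian-kernel density over
    the whole set (higher density = more typical). Brute-force O(n^2),
    mirroring the exact quadratic-time computation mentioned above."""
    n = len(objects)
    scores = []
    for i in range(n):
        s = sum(math.exp(-dist(objects[i], objects[j]) ** 2 / (2 * h * h))
                for j in range(n) if j != i)
        scores.append(s / (n - 1))
    return scores

def top_k_typical(objects, dist, k, h=1.0):
    """Return the k objects with the highest simple typicality."""
    scores = simple_typicality(objects, dist, h)
    order = sorted(range(len(objects)), key=lambda i: -scores[i])
    return [(objects[i], scores[i]) for i in order[:k]]
```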

14.
High on-shelf utility itemset (HOU) mining is an emerging data mining task which consists of discovering sets of items generating a high profit in transaction databases. The task of HOU mining is more difficult than traditional high utility itemset (HUI) mining, because it also considers the shelf time of items, and items having negative unit profits. HOU mining can be used to discover more useful and interesting patterns in real-life applications than traditional HUI mining. Several algorithms have been proposed for this task. However, a major drawback of these algorithms is that it is difficult for users to find a suitable value for the minimum utility threshold parameter. If the threshold is set too high, not enough patterns are found; if the threshold is set too low, too many patterns are found and the algorithm may use an excessive amount of time and memory. To address this issue, we study the problem of top-k on-shelf high utility itemset mining, where the user directly specifies k, the desired number of patterns to be output, instead of specifying a minimum utility threshold value. An efficient algorithm named KOSHU (fast top-K on-shelf high utility itemset miner) is proposed to mine the top-k HOUs while considering on-shelf time periods of items and items having positive and/or negative unit profits. KOSHU introduces three novel strategies, named efficient estimated co-occurrence maximum period rate pruning, period utility pruning, and concurrence existing of a pair 2-itemset pruning, to reduce the search space. KOSHU also incorporates several novel optimizations and a faster method for constructing utility-lists. An extensive performance study on real-life and synthetic datasets shows that the proposed algorithm is efficient in terms of both runtime and memory consumption and has excellent scalability.

15.
Frequent sequential pattern mining has become one of the most important tasks in data mining, with many applications such as sequential analysis, classification, and prediction. How to generate candidates and how to control the combinatorially explosive number of intermediate subsequences are the most difficult problems. Intelligent systems such as recommender systems, expert systems, and business intelligence systems use only a few patterns, namely those that satisfy a number of defined conditions. Challenges include the mining of top-k patterns, top-rank-k patterns, closed patterns, and maximal patterns. In many cases, end users need to find itemsets that occur with a sequential pattern. Therefore, this paper proposes approaches for mining the top-k co-occurrence items usually found with a sequential pattern. The Naive Approach Mining (NAM) algorithm discovers top-k co-occurrence items by directly scanning the sequence database to determine the frequency of items. The Vertical Approach Mining (VAM) algorithm is based on vertical database scanning. The Vertical with Index Approach Mining (VIAM) algorithm is based on a vertical database with index scanning. VAM and VIAM use pruning strategies to reduce the search space, thus improving performance, and are especially effective in mining the co-occurrence items of a long input pattern. The three algorithms were evaluated using real-world databases. The experimental results show that these algorithms perform well, especially VAM and VIAM.

16.
Top-k queries on large multi-attribute data sets are fundamental operations in information retrieval and ranking applications. In this article, we initiate research on the anytime behavior of top-k algorithms on exact and fuzzy data. In particular, given specific top-k algorithms (TA and TA-Sorted), we are interested in studying their progress toward identification of the correct result at any point during the algorithms' execution. We adopt a probabilistic approach where we seek to report, at any point of operation of the algorithm, the confidence that the top-k result has been identified. Such functionality can be a valuable asset when one is interested in reducing the runtime cost of top-k computations. We present a thorough experimental evaluation to validate our techniques using both synthetic and real data sets.
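For reference, TA follows the classic sorted-access/random-access scheme; the Python sketch below is a plain rendering of that scheme (the data layout, nonnegative scores, and function names are assumptions), stopping as soon as the k-th best aggregate score reaches the threshold formed from the last scores seen under sorted access — the same stopping condition whose anytime confidence the article studies.

```python
import heapq

def threshold_algorithm(sorted_lists, random_access, k):
    """Fagin's Threshold Algorithm (TA) for a monotone sum score.
    sorted_lists: one list per attribute of (object_id, score) pairs sorted
    by score descending; random_access(obj, attr) returns obj's score on
    attr. Scores are assumed nonnegative."""
    m = len(sorted_lists)
    seen, heap = set(), []                      # min-heap of (total, obj)
    for depth in range(max(len(lst) for lst in sorted_lists)):
        last = []
        for attr in range(m):
            if depth >= len(sorted_lists[attr]):
                last.append(0.0)                # exhausted list contributes 0
                continue
            obj, score = sorted_lists[attr][depth]
            last.append(score)
            if obj not in seen:                 # random access completes the score
                seen.add(obj)
                total = sum(random_access(obj, a) for a in range(m))
                heapq.heappush(heap, (total, obj))
                if len(heap) > k:
                    heapq.heappop(heap)
        threshold = sum(last)                   # best score any unseen object can reach
        if len(heap) == k and heap[0][0] >= threshold:
            break                               # top-k result is final
    return sorted(heap, reverse=True)
```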

17.
Multi-dimensional top-k dominating queries
The top-k dominating query returns k data objects which dominate the highest number of objects in a dataset. This query is an important tool for decision support since it provides data analysts an intuitive way for finding significant objects. In addition, it combines the advantages of top-k and skyline queries without sharing their disadvantages: (i) the output size can be controlled, (ii) no ranking functions need to be specified by users, and (iii) the result is independent of the scales at different dimensions. Despite their importance, top-k dominating queries have not received adequate attention from the research community. This paper is an extensive study on the evaluation of top-k dominating queries. First, we propose a set of algorithms that apply on indexed multi-dimensional data. Second, we investigate query evaluation on data that are not indexed. Finally, we study a relaxed variant of the query which considers dominance in dimensional subspaces. Experiments using synthetic and real datasets demonstrate that our algorithms significantly outperform a previous skyline-based approach. We also illustrate the applicability of this multi-dimensional analysis query by studying the meaningfulness of its results on real data.
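The query itself is easy to state exactly, as in the brute-force Python sketch below (O(n^2) over an in-memory list, assuming larger attribute values are better; the index-based algorithms proposed in the paper exist precisely to avoid this exhaustive comparison):

```python
def dominates(a, b):
    """a dominates b if a is at least as good on every dimension and
    strictly better on at least one (larger values assumed better)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def top_k_dominating(points, k):
    """Exhaustive top-k dominating query: score each point by how many
    points it dominates and return the k highest-scoring ones."""
    scores = [(sum(dominates(p, q) for q in points if q is not p), p)
              for p in points]
    scores.sort(key=lambda t: -t[0])
    return scores[:k]
```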

18.
Ranking queries, also known as top-k queries, produce results that are ordered on some computed score. Typically, these queries involve joins, where users are usually interested only in the top-k join results. Top-k queries are dominant in many emerging applications, e.g., multimedia retrieval by content, Web databases, data mining, middlewares, and most information retrieval applications. Current relational query processors do not handle ranking queries efficiently, especially when joins are involved. In this paper, we address supporting top-k join queries in relational query processors. We introduce a new rank-join algorithm that makes use of the individual orders of its inputs to produce join results ordered on a user-specified scoring function. The idea is to rank the join results progressively during the join operation. We introduce two physical query operators based on variants of ripple join that implement the rank-join algorithm. The operators are nonblocking and can be integrated into pipelined execution plans. We also propose an efficient heuristic designed to optimize a top-k join query by choosing the best join order. We address several practical issues and optimization heuristics to integrate the new join operators in practical query processors. We implement the new operators inside a prototype database engine based on PREDATOR. The experimental evaluation of our approach compares recent algorithms for joining ranked inputs and shows superior performance. Received: 23 December 2003; Accepted: 31 March 2004; Published online: 12 August 2004. Edited by S. Abiteboul. Extended version of the paper published in the Proceedings of the 29th International Conference on Very Large Databases (VLDB 2003), Berlin, Germany, pp. 754-765.
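A simplified reading of the rank-join idea is sketched below (a toy two-input, sum-scoring version in Python with hypothetical function names, not the paper's ripple-join-based operators): pull tuples from the two score-sorted inputs, join each new tuple against what has already been seen on the other side, and stop once the k-th best buffered result is at least the threshold bound on any join result not yet formed.

```python
def rank_join(left, right, k):
    """Rank-join sketch: both inputs are lists of (join_key, score, payload)
    sorted by score descending (assumed non-empty); join results are ranked
    by the sum of the two scores."""
    seen_l, seen_r = {}, {}
    results = []                                # (total_score, left_row, right_row)
    top_l, top_r = left[0][1], right[0][1]      # best score of each input
    last_l, last_r = top_l, top_r               # last score pulled from each input
    i = j = 0
    while i < len(left) or j < len(right):
        pull_left = i < len(left) and (j >= len(right) or last_l >= last_r)
        if pull_left:
            key, score, row = left[i]; i, last_l = i + 1, score
            seen_l.setdefault(key, []).append((score, row))
            results += [(score + s, row, r) for s, r in seen_r.get(key, [])]
        else:
            key, score, row = right[j]; j, last_r = j + 1, score
            seen_r.setdefault(key, []).append((score, row))
            results += [(s + score, r, row) for s, r in seen_l.get(key, [])]
        # Any join result not yet formed can score at most this threshold.
        threshold = max(top_l + last_r, last_l + top_r)
        results.sort(key=lambda t: -t[0])
        if len(results) >= k and results[k - 1][0] >= threshold:
            break                               # top-k join results are final
    return results[:k]
```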

19.
Sequential pattern mining (SPM) is an important data mining problem with broad applications. SPM is a hard problem due to the huge number of intermediate subsequences to be considered. State-of-the-art approaches for SPM (e.g., PrefixSpan, Pei et al. 2001) are largely based on the pattern-growth approach, where for each frequent prefix subsequence, only its related suffix subsequences need to be considered, and the database is recursively projected into smaller ones. Many authors have promoted the use of constraints to focus on the most promising patterns according to the interests of the end user. The top-k SPM problem is also used to cope with the difficulty of thresholding and to control the number of solutions. State-of-the-art methods developed for SPM and top-k SPM, though efficient, are locked into a rather rigid search strategy and suffer from a lack of declarativity and flexibility. Indeed, adding new constraints usually amounts to changing the data structures used in the core of the algorithm, and combining these new constraints often requires new developments. Recent works (e.g., Kemmar et al. 2014; Négrevergne and Guns 2015) have investigated the use of Constraint Programming (CP) for SPM. However, despite their nice declarative aspects, all these models have scaling problems due to the huge size of their constraint networks. To address this issue, we propose the Prefix-Projection global constraint, which encapsulates both the subsequence relation and the frequency constraint. Its filtering algorithm relies on the principle of projected databases, which allows keeping, in the variables' domains, only values leading to a frequent pattern in the database. The Prefix-Projection filtering algorithm enforces domain consistency on the variable succeeding the current frequent prefix in polynomial time. This global constraint also allows for a straightforward implementation of additional constraints such as size, item membership, regular expressions, and any combination of them. Experimental results show that our approach clearly outperforms existing CP approaches and competes well with the state-of-the-art methods on large datasets for mining frequent sequential patterns, sequential patterns under various constraints, and top-k sequential patterns. Unlike existing CP methods, our approach achieves better scalability.
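The projected-database principle that the Prefix-Projection constraint encapsulates can be shown in a few lines (a Python sketch for sequences of single items only, with a hypothetical helper name; PrefixSpan and the global constraint handle itemset-sequences and do considerably more bookkeeping):

```python
def project(database, prefix_item):
    """Prefix projection: for every sequence containing prefix_item, keep
    only the suffix that follows its first occurrence. Recursing on the
    frequent items of the projected database is the core of
    PrefixSpan-style pattern growth."""
    projected = []
    for seq in database:
        if prefix_item in seq:
            idx = seq.index(prefix_item)
            projected.append(seq[idx + 1:])
    return projected

# Example: projecting by 'a' keeps the part of each sequence after 'a'.
db = [['a', 'b', 'c'], ['b', 'a', 'c'], ['b', 'c']]
assert project(db, 'a') == [['b', 'c'], ['c']]
```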

20.
Frequent pattern mining in data streams is an important research topic in the data mining community. In previous studies, a minimum support threshold was assumed to be available for mining frequent patterns. However, setting such a threshold is typically difficult. Hence, it is more reasonable to ask users to set a bound on the result size. The present study considers mining top-k frequent patterns from data streams using a sliding window technique. A single-pass algorithm, called MSWTP, is developed for the generation of top-k frequent patterns without a threshold. In the method, the content of the transactions in the sliding window is incrementally maintained in a summary data structure, named SWTP-tree, by scanning the stream only once. To make the mining operation efficient, insignificant patterns are distinguished from others by applying the Chernoff bound. Two kinds of obsolete pattern and one kind of insignificant pattern are periodically pruned from the pattern tree. Whenever necessary, the k most frequent patterns can be selected from SWTP-tree in order of their descending frequency. The performance of the proposed technique is evaluated via simulation experiments. The results show that the proposed method is both efficient and scalable, and that it outperforms comparable algorithms.
