Similar Documents
20 similar documents retrieved.
1.
A knowledge-based approach for duplicate elimination in data cleaning   (Total citations: 6; self-citations: 0; citations by others: 6)
Existing duplicate elimination methods for data cleaning work on the basis of computing the degree of similarity between nearby records in a sorted database. High recall can be achieved by accepting records with low degrees of similarity as duplicates, at the cost of lower precision. High precision can be achieved analogously at the cost of lower recall. This is the recall–precision dilemma. We develop a generic knowledge-based framework for effective data cleaning that can implement any existing data cleaning strategies and more. We propose a new method for computing transitive closure under uncertainty for dealing with the merging of groups of inexact duplicate records and explain why small changes to window sizes have little effect on the results of the sorted neighborhood method. An experimental study with two real-world datasets shows that this approach can accurately identify duplicates and anomalies with high recall and precision, thus effectively resolving the recall–precision dilemma.
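For orientation, the sorted-neighborhood strategy that the sliding-window discussion above refers to can be sketched as follows; this is a generic illustration rather than the paper's knowledge-based framework, and the blocking key, window size, and similarity test are placeholder assumptions:

```python
def sorted_neighborhood(records, key, similar, window=10):
    """Generic sorted-neighborhood sketch: sort by a blocking key,
    then compare each record only with neighbors inside a sliding window."""
    ordered = sorted(records, key=key)              # sort on the chosen key
    pairs = []
    for i, rec in enumerate(ordered):
        for j in range(i + 1, min(i + window, len(ordered))):
            if similar(rec, ordered[j]):            # user-supplied similarity test
                pairs.append((rec, ordered[j]))
    return pairs

# Toy usage with a deliberately crude similarity function (an assumption):
records = [{"name": "Ann Lee"}, {"name": "An Lee"}, {"name": "Bob Ray"}]
dups = sorted_neighborhood(records, key=lambda r: r["name"],
                           similar=lambda a, b: a["name"][:2] == b["name"][:2])
```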

2.
An approximate-duplicate record detection method based on conditional probability distributions   (Total citations: 3; self-citations: 0; citations by others: 3)
Data integration often produces approximately duplicate records, and detecting such duplicates has become a popular topic in data quality research. This paper proposes an efficient dynamic clustering algorithm based on conditional probability distributions for detecting approximately duplicate records. In assessing whether two records are approximately equivalent, the method resolves earlier algorithms' neglect of the sequential structure of records by defining the distance between records in terms of conditional probability distributions; a criterion function for judging clustering quality is chosen according to the nearest-neighbour function criterion, and a dynamic clustering algorithm is used to cluster the sequence dataset. Clustering experiments on simulated data with this method yielded good results.

3.
A dual-threshold algorithm for eliminating duplicate records in a data warehouse environment   (Total citations: 3; self-citations: 1; citations by others: 2)
Duplicate record elimination is an important aspect of data cleaning research; its goal is to detect and remove redundant data that could later distort OLAP and data mining. Existing work decides whether two records are duplicates by a single similarity threshold: a threshold that is too high lowers recall, while one that is too low raises the false-detection rate. This paper proposes a dual-threshold duplicate elimination method that exploits the foreign-key relationships between database tables in the data warehouse environment for an additional check, which effectively improves decision quality and reduces the false-detection rate.
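The dual-threshold decision described above might look roughly like the following sketch; the similarity function, the two thresholds, and the foreign-key check are placeholders, not the paper's actual implementation:

```python
def judge_duplicate(r1, r2, sim, fk_match, low=0.6, high=0.85):
    """Dual-threshold decision sketch: accept above the high threshold,
    reject below the low one, and consult extra evidence (e.g. a shared
    foreign-key relationship in the warehouse) only for the band in between."""
    s = sim(r1, r2)
    if s >= high:
        return True
    if s < low:
        return False
    return fk_match(r1, r2)   # secondary check for the ambiguous band
```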

4.
Recently, there has been an increasing need for interactive human-driven analysis on large volumes of data. Online aggregation (OLA), which provides a quick sketch of massive data before the long wait for the final accurate query result, has drawn significant research attention. However, processing OLA directly on duplicate data leads to incorrect query answers, since sampling from duplicate records leads to an over-representation of the duplicated data in the sample. This violates the prerequisite of uniform distributions in most statistical theories. In this paper, we propose CrowdOLA, a novel framework for integrating online aggregation processing with deduplication. Instead of cleaning the whole dataset, CrowdOLA retrieves block-level samples continuously from the dataset and employs a crowd-based entity resolution approach to detect duplicates in the sample in a pay-as-you-go fashion. After cleaning the sample, an unbiased estimator is provided to address the error bias that is introduced by the duplication. We evaluate CrowdOLA on both real-world and synthetic workloads. Experimental results show that CrowdOLA provides a good balance between efficiency and accuracy.
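As a rough illustration of why duplicated entities must be down-weighted in a uniform record sample, the following ratio-style estimator weights each sampled record by the inverse of its entity's duplicate count; this is only one common correction, not necessarily the unbiased estimator used in CrowdOLA, and all names here are assumptions:

```python
def dedup_weighted_mean(sample, value, entity_of, dup_count):
    """Illustrative bias correction: a record whose entity has k duplicate
    copies is k times more likely to enter a uniform record sample, so it is
    weighted by 1/k when estimating an aggregate over distinct entities.
    dup_count is assumed to be known, e.g. from a cleaned sample."""
    num = sum(value(r) / dup_count(entity_of(r)) for r in sample)
    den = sum(1.0 / dup_count(entity_of(r)) for r in sample)
    return num / den if den else float("nan")
```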

5.
Data cleaning is an inevitable problem when integrating data from distributed operational databases, because no unified set of standards spans all the distributed sources. One of the most challenging phases of data cleaning is removing fuzzy duplicate records. Approximate or fuzzy duplicates pertain to two or more tuples that describe the same real-world entity using different syntaxes. In other words, they have the same semantics but different syntaxes. Eliminating fuzzy duplicates is applicable in any database but is critical in data-integration and analytical-processing domains, which involve data warehouses, data mining applications, and decision support systems. Earlier approaches, which required hard-coding rules based on a schema, were time-consuming and tedious, and the rules could not be adapted later. We propose a novel duplicate-elimination framework which exploits fuzzy inference and includes unique machine learning capabilities to let users clean their data flexibly and effortlessly without requiring any coding.

6.
Data cleaning is an important research area in data warehouse construction, and detecting approximately duplicate records is a very important task within it. This paper proposes a new clustering-based method for detecting approximate duplicates: records of a relational table are mapped into a high-dimensional space using N-grams, and IDS, an improved DBSCAN algorithm with adjustable density, clusters the records to detect approximate duplicates. Experiments demonstrate the effectiveness of the method.
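A rough sketch of the general n-gram-plus-density-clustering idea, using character bigrams and the standard scikit-learn DBSCAN (the paper's IDS variant adjusts density adaptively; the eps and min_samples values here are arbitrary assumptions):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import DBSCAN

records = ["John Smith, 12 Oak St", "Jon Smith, 12 Oak Street", "Mary Jones, 5 Elm Rd"]

# Map each record string to a character-bigram count vector (high-dimensional space).
vectors = CountVectorizer(analyzer="char", ngram_range=(2, 2)).fit_transform(records)

# Cluster on cosine distance; records falling in the same cluster
# are candidate approximate duplicates (label -1 means noise).
labels = DBSCAN(eps=0.3, min_samples=2, metric="cosine").fit_predict(vectors)
print(list(zip(records, labels)))
```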

7.
An improvement to the MPN-based data cleaning algorithm   (Total citations: 2; self-citations: 0; citations by others: 2)
Eliminating approximately duplicate records is an important aspect of data cleaning, aimed at removing redundant data. This paper reviews the popular algorithm for this problem, the Multi-Pass Sorted Neighborhood (MPN) algorithm, which removes approximate duplicates fairly well but has two shortcomings: first, the window size is fixed during detection, and the chosen size strongly affects the results; second, it relies on transitive closure, which easily causes false identifications. An improved algorithm based on MPN is proposed, and experimental results show that it outperforms MPN in both recall and precision.
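For reference, the multi-pass scheme that MPN builds on can be sketched as below: one windowed pass per sort key, with a union-find merging the matched pairs into groups (the transitive-closure step the paper identifies as a source of false matches). The sort keys, window size, and similarity test are assumptions, and the paper's improvements are not reproduced here:

```python
def multi_pass(records, sort_keys, similar, window=5):
    """Multi-pass sorted-neighborhood sketch: one windowed pass per sort key,
    then union-find merges matched pairs into groups (transitive closure)."""
    parent = list(range(len(records)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for key in sort_keys:
        order = sorted(range(len(records)), key=lambda i: key(records[i]))
        for p, i in enumerate(order):
            for j in order[p + 1:p + window]:
                if similar(records[i], records[j]):
                    union(i, j)

    groups = {}
    for i in range(len(records)):
        groups.setdefault(find(i), []).append(records[i])
    return [g for g in groups.values() if len(g) > 1]
```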

8.
Identifying approximately duplicate records is a key problem in data cleaning. This paper proposes two improvements to the commonly used multi-pass sorted-neighborhood method: first, overlapping sets of similar records are merged directly during the multi-pass identification, eliminating the final transitive-closure computation; second, exploiting the fact that the keys are sorted lexicographically, the common prefix is filtered out before computing edit distance, reducing the cost of comparing similar records. Finally, comparative experimental results for the improved and original algorithms are presented.
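The second improvement mentioned above (filtering the shared prefix before computing edit distance) can be sketched as follows; stripping a common prefix does not change the Levenshtein distance, so the remainder can be compared with the textbook dynamic program. The function name is an assumption:

```python
def edit_distance_after_prefix(a, b):
    """Drop the common prefix first, then run plain Levenshtein on the remainder."""
    k = 0
    while k < len(a) and k < len(b) and a[k] == b[k]:
        k += 1
    a, b = a[k:], b[k:]

    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[len(b)]
```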

9.
Detecting and eliminating duplicate records is one of the major tasks for improving data quality. The task, however, is not as trivial as it seems since various errors, such as character insertion, deletion, transposition, substitution, and word switching, are often present in real-world databases. This paper presents an n-gram-based approach for detecting duplicate records in large databases. Using the approach, records are first mapped to numbers based on the n-grams of their field values. The obtained numbers are then clustered, and records within a cluster are taken as potential duplicate records. Finally, record comparisons are performed within clusters to identify true duplicate records. The unique feature of this method is that it does not require preprocessing to correct syntactic or typographical errors in the source data in order to achieve high accuracy. Moreover, sorting the source data file is unnecessary. Only a fixed number of database scans is required. Therefore, compared with previous methods, the algorithm is more time efficient.

10.
In the big-data environment, the number of records in databases grows exponentially, so detecting approximately duplicate records efficiently is the key point of data cleaning and the primary task in improving data quality. Over the past decade, a considerable number of high-quality results on approximate-duplicate record detection have appeared at home and abroad, and this new literature urgently needs to be surveyed and organized. Taking the domestic and international literature on approximate-duplicate record detection from 2008 to 2019 as the research sample, this study combines social network analysis and knowledge mapping to analyze publication volume, core institutions, author collaboration groups, research hotspots, and research trends. The analysis finds that author collaboration is, on the whole, loosely structured; that integrating the various detection approaches, extending the application domains, and studying general frameworks have become hotspots; and that handling missing values, recognizing duplicates across multiple data sources, and block-wise processing of very large data volumes remain challenges in the field.

11.
Issue tracking systems (ITSs) allow software end-users and developers to file issue reports and change requests. Reports are frequently filed in duplicate for the same software issue. The retrieval of these duplicate issue reports is a tedious manual task. Prior research proposed several automated approaches for the retrieval of duplicate issue reports. Recent versions of ITSs added a feature that performs basic retrieval of duplicate issue reports at filing time, in an effort to avoid the filing of duplicates as early as possible. This paper investigates the impact of this just-in-time duplicate retrieval on the duplicate reports that end up in the ITS of an open source project. In particular, we study the differences between duplicate reports for open source projects before and after the activation of this new feature. We show how the experimental results of prior research would vary given the new data after the activation of the just-in-time duplicate retrieval feature. We study duplicate issue reports from the Mozilla-Firefox, Mozilla-Core and Eclipse-Platform projects. In addition, we compare the performance of the state of the art in automated retrieval of duplicate reports using two popular approaches (i.e., BM25F and REP). We find that duplicate issue reports filed after the activation of the just-in-time duplicate retrieval feature are less textually similar, have a greater identification delay and require more discussion to be retrieved as duplicate reports than duplicates filed before the activation of the feature. Prior work showed that REP outperforms BM25F in terms of recall rate and mean average precision. We observe that the performance gap between BM25F and REP becomes even larger after the activation of the just-in-time duplicate retrieval feature. We recommend that future studies focus on duplicates that were reported after the activation of the just-in-time duplicate retrieval feature, as these duplicates are more representative of future incoming issue reports and therefore give a better indication of the future performance of proposed approaches.

12.
String techniques for detecting duplicates in document databases   (Total citations: 2; self-citations: 0; citations by others: 2)
Detecting duplicates in document image databases is a problem of growing importance. The task is made difficult by the various degradations suffered by printed documents, and by conflicting notions of what it means to be a "duplicate". To address these issues, this paper introduces a framework for clarifying and formalizing the duplicate detection problem. Four distinct models are presented, each with a corresponding algorithm for its solution adapted from the realm of approximate string matching. The robustness of these techniques is demonstrated through a set of experiments using data derived from real-world noise sources. Also described are several heuristics that have the potential to speed up the computation by several orders of magnitude.

13.
王琛 《计算机时代》2014,(12):42-44
Data cleaning is an effective means of improving data quality. This paper analyzes the quality problems and errors in data extracted from the Web; for each error type it characterizes attribute errors (including incomplete data and anomalous data) as well as duplicate and approximately duplicate records, and proposes corresponding cleaning methods. A data cleaning system framework is designed, consisting of three parts: data preprocessing, a data cleaning engine, and quality assessment; it can perform different cleaning tasks for different error types. Experiments show that the framework is general and extensible.

14.
A clustering-based data cleaning technique   (Total citations: 5; self-citations: 0; citations by others: 5)
Before mining, the data sources to be mined must be cleaned to remove incorrect data. This paper studies the problem of integrating multiple data sources during data cleaning. To address the shortcomings of existing duplicate-record detection techniques, a data cleaning method that uses Canopy clustering to group duplicate records is proposed, and the experimental results verify the effectiveness and accuracy of the proposed algorithm.
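For context, generic Canopy clustering with a cheap distance and two thresholds (a loose T1 and a tight T2) can be sketched as below; the thresholds and the distance function are placeholders, not the paper's settings:

```python
def canopy(points, cheap_dist, t1=0.5, t2=0.2):
    """Canopy clustering sketch: pick a remaining point as a canopy centre,
    put every point within the loose threshold T1 into that canopy, and
    remove points within the tight threshold T2 from further consideration."""
    assert t2 < t1
    remaining = list(points)
    canopies = []
    while remaining:
        centre = remaining.pop(0)
        members = [centre]
        still_open = []
        for p in remaining:
            d = cheap_dist(centre, p)
            if d < t1:
                members.append(p)
            if d >= t2:
                still_open.append(p)   # only points outside T2 stay candidates
        remaining = still_open
        canopies.append(members)
    return canopies
```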

15.
An incremental algorithm for identifying approximately duplicate records   (Total citations: 2; self-citations: 0; citations by others: 2)
Data cleaning is an important research topic in data warehousing, and identifying approximately duplicate records is one of its technical difficulties. This paper reviews the sorted-neighborhood method and, building on it, studies the problem of identifying approximate duplicates when the data source grows dynamically while the data schema and matching rules remain unchanged. An incremental algorithm, IMPN (Incremental Multi-Pass sorted-Neighborhood), is proposed, and experimental results are given at the end of the paper.

16.
A duplicate record detection algorithm based on optimal bipartite-graph matching   (Total citations: 1; self-citations: 0; citations by others: 1)
Duplicate records exist in information integration systems and complicate data processing and analysis; duplicate record detection has therefore become one of the hot topics in current database research. Existing methods focus mainly on computing the similarity of attributes with the same data type, whereas real systems contain large numbers of records with different data types and different schemas. For detecting duplicates among records of multiple types and schemas, this paper proposes a record similarity measure based on optimal matching in a bipartite graph and, building on that similarity, a duplicate record detection algorithm. Both theoretical analysis and experimental results demonstrate the correctness and effectiveness of the method.
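An illustrative sketch of scoring two records whose fields may follow different schemas via optimal bipartite matching over field pairs, using SciPy's assignment solver; the field-level similarity function and the normalization are assumptions, not the paper's exact formulation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def record_similarity(fields_a, fields_b, field_sim):
    """Build a field-to-field similarity matrix and take the optimal one-to-one
    matching (maximum-weight bipartite matching) as the record-level score."""
    m = np.array([[field_sim(a, b) for b in fields_b] for a in fields_a])
    rows, cols = linear_sum_assignment(-m)        # negate to maximize similarity
    return m[rows, cols].sum() / max(len(fields_a), len(fields_b))

# Toy usage with a crude character-set Jaccard similarity (assumed):
jaccard = lambda a, b: len(set(a) & set(b)) / len(set(a) | set(b))
score = record_similarity(["John Smith", "NY"], ["NY", "Jon Smith"], jaccard)
```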

17.
As the recent proliferation of social networks, mobile applications, and online services has increased the rate of data gathering, finding near-duplicate records efficiently has become a challenging issue. Related work on this problem mainly proposes efficient approaches on a single machine. However, when processing large-scale datasets, the performance of duplicate identification is still far from satisfactory. In this paper, we handle the duplicate detection problem using MapReduce. We argue that the performance of MapReduce-based duplicate detection depends mainly on the number of candidate record pairs and the intermediate result size, which determine the shuffle cost among the nodes of a cluster. We propose a new signature scheme with new pruning strategies to minimize the number of candidate pairs and the intermediate result size. The proposed solution is exact: it ensures that no duplicate record pair is lost. Experimental results over both real and synthetic datasets demonstrate that our signature-based method is efficient and scalable. Copyright © 2012 John Wiley & Sons, Ltd.
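A toy, single-process sketch of the signature-based grouping idea (map: emit one or more signatures per record; shuffle: group by signature; reduce: compare only records that share a signature). The token-prefix signature used here is a naive assumption and not the paper's pruning-optimized scheme:

```python
from collections import defaultdict
from itertools import combinations

def map_signatures(record):
    # Naive signature: the first 3 characters of each token (an assumption,
    # not the paper's scheme).
    return {tok[:3].lower() for tok in record.split()}

def detect_duplicates(records, similar):
    # "Shuffle": group record ids by shared signature.
    buckets = defaultdict(list)
    for idx, rec in enumerate(records):
        for sig in map_signatures(rec):
            buckets[sig].append(idx)
    # "Reduce": compare candidate pairs within each bucket only.
    seen, dups = set(), []
    for ids in buckets.values():
        for i, j in combinations(sorted(ids), 2):
            if (i, j) not in seen:
                seen.add((i, j))
                if similar(records[i], records[j]):
                    dups.append((records[i], records[j]))
    return dups
```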

18.
The WHO Collaborating Centre for International Drug Monitoring in Uppsala, Sweden, maintains and analyses the world's largest database of reports on suspected adverse drug reaction (ADR) incidents that occur after drugs are on the market. The presence of duplicate case reports is an important data quality problem and their detection remains a formidable challenge, especially in the WHO drug safety database where reports are anonymised before submission. In this paper, we propose a duplicate detection method based on the hit-miss model for statistical record linkage described by Copas and Hilton, which handles the limited amount of training data well and is well suited for the available data (categorical and numerical rather than free text). We propose two extensions of the standard hit-miss model: a hit-miss mixture model for errors in numerical record fields and a new method to handle correlated record fields, and we demonstrate the effectiveness both at identifying the most likely duplicate for a given case report (94.7% accuracy) and at discriminating true duplicates from random matches (63% recall with 71% precision). The proposed method allows for more efficient data cleaning in post-marketing drug safety data sets, and perhaps other knowledge discovery applications as well.

19.
Many recent software engineering papers have examined duplicate issue reports. Thus far, duplicate reports have been considered a hindrance to developers and a drain on their resources. As a result, prior research in this area focuses on proposing automated approaches to accurately identify duplicate reports. However, no studies have attempted to quantify the actual effort that is spent on identifying duplicate issue reports. In this paper, we empirically examine the effort that is needed for manually identifying duplicate reports in four open source projects, i.e., Firefox, SeaMonkey, Bugzilla and Eclipse-Platform. Our results show that: (i) More than 50 % of the duplicate reports are identified within half a day. Most of the duplicate reports are identified without any discussion and with the involvement of very few people; (ii) A classification model built using a set of factors that are extracted from duplicate issue reports classifies duplicates according to the effort that is needed to identify them with a precision of 0.60 to 0.77, a recall of 0.23 to 0.96, and an ROC area of 0.68 to 0.80; and (iii) Factors that capture the developer awareness of the duplicate issue's peers (i.e., other duplicates of that issue) and the textual similarity of a new report to prior reports are the most influential factors in our models. Our findings highlight the need for effort-aware evaluation of approaches that identify duplicate issue reports, since the identification of a considerable proportion of duplicate reports (over 50 %) appears to be a relatively trivial task for developers. To better assist developers, research on identifying duplicate issue reports should put greater emphasis on assisting developers with effort-consuming duplicate issues.

20.

