Found 20 similar documents (search took 31 ms)
1.
Charu C. Aggarwal, Philip S. Yu 《The VLDB Journal: The International Journal on Very Large Data Bases》2005,14(2):211-221
The outlier detection problem has important applications in the fields of fraud detection, network robustness analysis, and intrusion detection. Such applications matter most in high-dimensional domains, where the data can contain hundreds of dimensions. Many recent algorithms proposed for outlier detection use various concepts of proximity to find outliers based on their relationship to the other points in the data. In high-dimensional space, however, the data are sparse, and notions based on proximity fail to retain their effectiveness. In fact, the sparsity of high-dimensional data can be understood in a different way, implying that every point is an equally good outlier from the perspective of distance-based definitions. Consequently, for high-dimensional data, the notion of finding meaningful outliers becomes substantially more complex and nonobvious. In this paper, we discuss new techniques for outlier detection that find the outliers by studying the behavior of projections from the data set. Received: 19 November 2002, Accepted: 6 February 2004, Published online: 19 August 2004. Edited by: R. Ng.
2.
3.
Traditional outlier detection algorithms struggle to learn the distribution of outliers in extremely class-imbalanced, high-dimensional data sets, which leads to low detection rates. To address this, a GAN-VAE algorithm is proposed that combines a generative adversarial network (GAN) with a variational auto-encoder (VAE). The algorithm first feeds the outliers into the VAE for training, learning their distribution; it then trains the VAE jointly with the GAN to generate additional potential outliers while learning the classification boundary between normal points and outliers; finally, test data are passed through the trained GAN-VAE, each object's outlier score is computed from the difference in relative density between normal points and outliers, and objects with high scores are judged to be outliers. Comparative experiments against six outlier detection algorithms on four real data sets show that GAN-VAE improves AUC, accuracy, and F1-score by an average of 5.64%, 5.99%, and 13.30%, respectively, demonstrating that the GAN-VAE algorithm is effective and feasible.
4.
Outlier detection algorithms are often computationally intensive because of their need to score each point in the data; even simple distance-based algorithms have quadratic complexity. High-dimensional outlier detection algorithms such as subspace methods are often even more computationally intensive because of their need to explore different subspaces of the data. In this paper, we propose an exceedingly simple subspace outlier detection algorithm, which can be implemented in a few lines of code, whose complexity is linear in the size of the data set, and whose space requirement is constant. We show that this outlier detection algorithm is much faster than both conventional and high-dimensional algorithms and also provides more accurate results. The approach uses randomized hashing to score data points and has a neat subspace interpretation. We provide a visual representation of this interpretability in terms of outlier sensitivity histograms. Furthermore, the approach generalizes easily to data streams, where it provides an efficient way to discover outliers in real time. We present experimental results showing the effectiveness of the approach over other state-of-the-art methods.
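As a rough illustration of the randomized-hashing idea, the sketch below scores points by how sparsely populated their grid cells are across several randomly chosen subspaces; the grid width, subspace size, and 1/(cell count) scoring formula are illustrative assumptions, not the paper's actual design.

```python
import random
from collections import defaultdict

def hash_scores(points, n_hashes=20, grid=3.0, seed=0):
    """Average, over several random subspace grids, of 1/(cell count):
    points that repeatedly land in sparsely populated cells score high."""
    rng = random.Random(seed)
    dim = len(points[0])
    scores = [0.0] * len(points)
    for _ in range(n_hashes):
        subspace = rng.sample(range(dim), k=max(1, dim // 2))  # random subspace
        shift = rng.random() * grid                            # random grid offset
        counts = defaultdict(int)
        keys = []
        for p in points:
            key = tuple(int((p[j] + shift) // grid) for j in subspace)
            counts[key] += 1
            keys.append(key)
        for i, key in enumerate(keys):
            scores[i] += 1.0 / counts[key]   # rare cell -> large contribution
    return [s / n_hashes for s in scores]
```

Each pass is linear in the data size, and only the hash-table of cell counts is kept, which is what makes the streaming generalization natural.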
5.
Outlier detection is a useful technique in areas such as fraud detection, financial analysis and health monitoring. Many recent approaches detect outliers according to reasonable, pre-defined concepts of an outlier (e.g., distance-based, density-based, etc.). However, the definition of an outlier differs between users or even datasets. This paper presents a solution to this problem by including input from the users. Our OBE (Outlier By Example) system is the first that allows users to provide examples of outliers in low-dimensional datasets. By incorporating a small number of such examples, OBE can successfully develop an algorithm by which to identify further outliers based on their outlierness. Several algorithmic challenges and engineering decisions must be addressed in building such a system. We describe the key design decisions and algorithms in this paper. In order to interact with users having different degrees of domain knowledge, we develop two detection schemes: OBE-Fraction and OBE-RF. Our experiments on both real and synthetic datasets demonstrate that OBE can discover values that a user would consider outliers.
6.
Research on Distance-Based Outlier Detection
Outlier detection is an important knowledge-discovery task. Building on an analysis of distance-based outliers and their detection algorithms, this paper proposes a new definition for judging outliers and designs a sampling-based approximate detection algorithm, which is evaluated on real data. The experimental results show that the new definition not only produces the same results as the DB(p, d) outlier definition but also reduces what outlier detection demands of the user, while additionally quantifying how outlying each object is within the data set.
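For concreteness, the DB(p, d) test and a sampling-based approximation of the kind described above can be sketched as follows: a point is a DB(p, d) outlier when at least a fraction p of the data lies farther than d from it, and a random sample can estimate that fraction cheaply. Function names and estimator details here are illustrative, not the paper's.

```python
import math
import random

def is_db_outlier(o, data, p, d):
    """Exact DB(p, d) test: o is an outlier if at least fraction p of
    the data set lies at distance greater than d from o."""
    far = sum(1 for x in data if math.dist(o, x) > d)
    return far >= p * len(data)

def sampled_outlier_degree(o, data, d, sample_size=100, seed=0):
    """Estimate the fraction of points farther than d from o using a
    random sample; the fraction doubles as a degree of outlierness.
    (Sketch only; the paper's exact estimator may differ.)"""
    rng = random.Random(seed)
    sample = rng.sample(data, min(sample_size, len(data)))
    far = sum(1 for x in sample if math.dist(o, x) > d)
    return far / len(sample)
```

The sampled degree avoids a full scan per query point, at the cost of estimation error that shrinks with the sample size.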
7.
Uncertain data are common due to the increasing use of sensors, radio frequency identification (RFID), GPS and similar devices for data collection. The causes of uncertainty include limitations of measurements, inclusion of noise, inconsistent supply voltage, and delay or loss of data in transfer. In order to manage, query or mine such data, data uncertainty needs to be considered. Hence, this paper studies the problem of top-k distance-based outlier detection from uncertain data objects. In this work, an uncertain object is modelled by the probability density function of a Gaussian distribution. The naive approach to distance-based outlier detection uses a nested loop, which is very costly due to the expensive distance function between two uncertain objects. Therefore, a populated-cells list (PC-list) approach to outlier detection is proposed. Using the PC-list, the proposed top-k outlier detection algorithm needs to consider only a fraction of the data set's objects and hence quickly identifies candidate objects for the top-k outliers. Two approximate top-k outlier detection algorithms are presented to further increase the efficiency of the top-k outlier detection algorithm. An extensive empirical study on synthetic and real datasets is also presented to demonstrate the accuracy, efficiency and scalability of the proposed algorithms.
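Setting the uncertainty modelling aside, the top-k framing can be sketched with a deterministic stand-in: rank points by their k-th nearest-neighbour distance and keep the k largest. The paper's actual algorithm operates on Gaussian-distributed objects and prunes candidates with the PC-list; this naive O(N^2) version only illustrates the top-k interface.

```python
import heapq
import math

def top_k_outliers(data, k_out, k_nn=3):
    """Return indices of the k_out points whose k_nn-th nearest-neighbour
    distance is largest (a standard deterministic top-k distance-based
    outlier criterion; no pruning, so this is O(N^2))."""
    scored = []
    for i, o in enumerate(data):
        # sorted distances to all other points; index k_nn-1 is the k-th NN
        dists = sorted(math.dist(o, x) for j, x in enumerate(data) if j != i)
        scored.append((dists[k_nn - 1], i))
    return [i for _, i in heapq.nlargest(k_out, scored)]
```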
8.
To address outlier mining in data streams, a distance-based outlier detection algorithm, DOKM, is proposed on top of the K-means clustering algorithm, using a distance criterion to judge outliers among the data. The sliding-window size is adjusted adaptively according to concept-drift detection results in the stream, enabling outlier detection over the data stream. A series of experiments and comparisons with other outlier algorithms show that DOKM detects outliers effectively on both synthetic and real data sets.
9.
To address the low efficiency of existing distance-based outlier detection algorithms on large-scale data, a distributed outlier detection algorithm based on clustering and indexing (DODCI) is proposed. First, a clustering method partitions the large data set into clusters; then an index for each cluster is built in parallel at each node of the distributed environment; finally, outlier detection proceeds iteratively at each node using two optimization strategies and two pruning rules. Experiments on synthetic data sets and a cleaned KDD CUP data set show that, on large data volumes, the algorithm is nearly an order of magnitude faster than Orca and iDOoR. Theoretical and experimental analysis indicates that the algorithm effectively improves the efficiency of outlier detection in large-scale data.
10.
A computationally fast procedure for identifying outliers is presented that is particularly effective in high dimensions. This algorithm utilizes simple properties of principal components to identify outliers in the transformed space, leading to significant computational advantages for high-dimensional data. This approach requires considerably less computational time than existing methods for outlier detection, and is suitable for use on very large data sets. It is also capable of analyzing the data situation commonly found in certain biological applications in which the number of dimensions is several orders of magnitude larger than the number of observations. The performance of this method is illustrated on real and simulated data with dimension ranging in the thousands.
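A rough sketch of this kind of principal-component scoring: score each point by its squared, variance-standardized coordinates along the leading components. The exact score and the computational shortcuts of the paper are not reproduced; a pure-Python power iteration stands in for a real eigensolver, and the two-component default is an illustrative choice.

```python
import math

def power_iteration(cov, iters=200):
    """Leading eigenvector and eigenvalue of a symmetric matrix (list of lists)."""
    n = len(cov)
    v = [1.0 / math.sqrt(n)] * n
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    lam = sum(v[i] * sum(cov[i][j] * v[j] for j in range(n)) for i in range(n))
    return v, lam

def pc_outlier_scores(data, n_components=2):
    """Score each point by its squared, variance-standardized coordinates
    along the leading principal components (a simple PC-space distance)."""
    n, d = len(data), len(data[0])
    mean = [sum(p[j] for p in data) / n for j in range(d)]
    centered = [[p[j] - mean[j] for j in range(d)] for p in data]
    cov = [[sum(x[i] * x[j] for x in centered) / (n - 1) for j in range(d)]
           for i in range(d)]
    scores = [0.0] * n
    for _ in range(min(n_components, d)):
        v, lam = power_iteration(cov)
        for i, x in enumerate(centered):
            proj = sum(x[j] * v[j] for j in range(d))
            scores[i] += proj * proj / max(lam, 1e-12)  # standardize by variance
        # deflate: remove the found component before extracting the next one
        for i in range(d):
            for j in range(d):
                cov[i][j] -= lam * v[i] * v[j]
    return scores
```

A point that is far from the data in a low-variance direction gets a large standardized score even if its raw coordinates look unremarkable.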
11.
12.
Liang Binmei 《Computer Engineering and Applications》2012,48(9):101-103,107
The presence of outlying data makes data-mining results inaccurate or even wrong, yet existing outlier detection algorithms remain imperfect in generality, effectiveness, user-friendliness, and performance on high-dimensional large data sets. This paper therefore proposes an effective global outlier detection method: agglomerative hierarchical clustering is performed, the degree of isolation of the data is judged visually from the cluster tree and the distance matrix, and the number of outliers is determined; outlying points are then removed, unsupervised, from the top of the cluster tree downward. Simulation results on multiple data sets show that the method effectively identifies the n most isolated global outliers, works on data sets of different shapes, is efficient and user-friendly, and is suitable for outlier detection in large high-dimensional data sets.
13.
A Local-Distance Anomaly Detection Method Based on Density-Biased Sampling
Anomaly detection is an important area of data-mining research. Current anomaly detection methods based on distance or nearest-neighbour concepts take too long when run on massive high-dimensional data, and many improved methods raise efficiency at the cost of detection quality. This paper therefore proposes a local-distance anomaly detection algorithm based on density-biased sampling. First, the data set to be examined is sampled probabilistically using density-biased probability sampling; a local-distance-based local anomaly detection method is then applied to the sample, computing a local outlier coefficient for each sampled object. The resulting coefficient is both the local outlier coefficient of the sampled data and an approximate global outlier coefficient for the whole data set. The coefficients of all points are then sorted: the larger a point's coefficient, the more likely it is to be anomalous. Experimental results show that, compared with existing algorithms, this algorithm achieves higher detection precision in less running time, performs well on data of various dimensionalities and scales, and is highly scalable.
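As one way to picture the density-biased sampling step, the sketch below buckets points into grid cells and keeps each point with a probability that shrinks as its cell gets denser, so sparse regions (where anomalies live) are over-represented in the sample. The grid width and the acceptance formula are illustrative assumptions, not the paper's estimator.

```python
import math
import random
from collections import defaultdict

def density_biased_sample(points, rate=0.2, grid=1.0, seed=0):
    """Return indices of a density-biased sample: each point is kept with
    probability inversely related to the population of its grid cell."""
    rng = random.Random(seed)
    cells = defaultdict(list)
    for i, p in enumerate(points):
        cells[tuple(int(c // grid) for c in p)].append(i)
    n = len(points)
    sample = []
    for members in cells.values():
        # sparse cells get acceptance probability up to 1, dense cells much less
        prob = min(1.0, rate * math.sqrt(n / len(members)))
        sample.extend(i for i in members if rng.random() < prob)
    return sample
```

Running the subsequent local-distance scoring only on this sample is what cuts the running time while keeping the isolated points in play.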
14.
Analysis of Outliers in Data Mining and Its Application in Practice
This paper introduces the definition of outliers and three mining approaches: statistics-based, distance-based, and deviation-based. On this basis, outlier detection is applied to the data accumulated in an educational administration system, and the effectiveness of a distance-sum-based outlier detection algorithm is verified. The experimental results show that the distance-sum-based algorithm lowers the demands that the detection process places on user-set thresholds and is slightly better than the nested-loop algorithm in time complexity.
15.
GridOF: An Efficient Outlier Detection Algorithm for Large-Scale Data Sets
Outlier detection is an important technique in knowledge discovery in databases, but existing algorithms are unsatisfactory in both time and space efficiency when applied to large data sets. Based on an analysis of how outliers are distributed in a data set, and on a grid partitioning of the data space, this paper studies approximate density computation at the level of data hyper-cells together with a strategy for filtering out the dense bulk of the data, replacing laborious point-by-point evaluation of the density function with a simple corrected approximation. The outlier detection algorithm built on these ideas, GridOF, markedly reduces time and space complexity while retaining sufficient detection accuracy, and shows good applicability and effectiveness for outlier detection on large-scale data sets.
16.
17.
Distance-based outliers: algorithms and applications
Edwin M. Knorr, Raymond T. Ng, Vladimir Tucakov 《The VLDB Journal: The International Journal on Very Large Data Bases》2000,8(3-4):237-253
This paper deals with finding outliers (exceptions) in large, multidimensional datasets. The identification of outliers can lead to the discovery of truly unexpected knowledge in areas such as electronic commerce, credit card fraud, and even the analysis of performance statistics of professional athletes. Existing methods that we have seen for finding outliers can only deal efficiently with two dimensions/attributes of a dataset. In this paper, we study the notion of DB (distance-based) outliers. Specifically, we show that (i) outlier detection can be done efficiently for large datasets, and for k-dimensional datasets with large values of k (e.g., k >= 5); and (ii) outlier detection is a meaningful and important knowledge discovery task. First, we present two simple algorithms, both having a complexity of O(k N^2), k being the dimensionality and N being the number of objects in the dataset. These algorithms readily support datasets with many more than two attributes. Second, we present an optimized cell-based algorithm that has a complexity that is linear with respect to N, but exponential with respect to k. We provide experimental results indicating that this algorithm significantly outperforms the two simple algorithms for k <= 4. Third, for datasets that are mainly disk-resident, we present another version of the cell-based algorithm that guarantees at most three passes over a dataset. Again, experimental results show that this algorithm is by far the best for k <= 4. Finally, we discuss our work on three real-life applications, including one on spatio-temporal data (e.g., a video surveillance application), in order to confirm the relevance and broad applicability of DB outliers.
Received February 15, 1999 / Accepted August 1, 1999
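A minimal version of the simple nested-loop algorithm for DB(p, d) outliers might look like the following. The early exit from the inner loop (stop counting once too many near neighbours are found) is the standard trick that makes non-outliers cheap; the paper's block-oriented I/O handling for disk-resident data is omitted.

```python
import math

def nl_db_outliers(data, p, d):
    """Nested-loop DB(p, d) detection: a point is an outlier if at least
    fraction p of the data lies farther than d from it.  Worst case is
    O(k * N^2) for N k-dimensional points, but the inner loop usually
    terminates early for non-outliers."""
    n = len(data)
    max_near = int(n * (1 - p))  # neighbours allowed within distance d
    outliers = []
    for i, o in enumerate(data):
        near = 0
        for j, x in enumerate(data):
            if i != j and math.dist(o, x) <= d:
                near += 1
                if near > max_near:
                    break  # too many near neighbours: not an outlier
        else:
            outliers.append(i)
    return outliers
```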
18.
Outlier or anomaly detection is a fundamental data mining task with the aim to identify data points, events, transactions which deviate from the norm. The identification of outliers in data can provide insights about the underlying data generating process. In general, outliers can be of two kinds: global and local. Global outliers are distinct with respect to the whole data set, while local outliers are distinct with respect to data points in their local neighbourhood. While several approaches have been proposed to scale up the process of global outlier discovery in large databases, this has not been the case for local outliers. We tackle this problem by optimising the use of local outlier factor (LOF) for large and high-dimensional data. We propose projection-indexed nearest-neighbours (PINN), a novel technique that exploits extended nearest-neighbour sets in a reduced-dimensional space to create an accurate approximation for k-nearest-neighbour distances, which is used as the core density measurement within LOF. The reduced dimensionality allows for efficient sub-quadratic indexing in the number of items in the data set, where previously only quadratic performance was possible. A detailed theoretical analysis of random projection (RP) and PINN shows that we are able to preserve the density of the intrinsic manifold of the data set after projection. Experimental results show that PINN outperforms the standard projection methods RP and PCA when measuring LOF for many high-dimensional real-world data sets of up to 300,000 elements and 102,600 dimensions. A further investigation into the use of high-dimensionality-specific indexing such as spatial approximate sample hierarchy (SASH) shows that our novel technique holds benefits over even these types of highly efficient indexing. 
We cement the practical applications of our novel technique with insights into what it means to find local outliers in real data including image and text data, and include potential applications for this knowledge.
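The random-projection step underlying PINN can be sketched as follows: multiply the data by a Gaussian matrix scaled by 1/sqrt(target_dim), which by the Johnson-Lindenstrauss lemma approximately preserves pairwise distances, so kNN distances (and hence LOF's density estimates) computed in the reduced space approximate the original ones. The extended-neighbour-set search that PINN adds on top of the projection is not shown.

```python
import math
import random

def random_projection(data, target_dim, seed=0):
    """Project points onto target_dim random Gaussian directions,
    scaled by 1/sqrt(target_dim) so that pairwise distances are
    approximately preserved in expectation."""
    rng = random.Random(seed)
    d = len(data[0])
    scale = 1.0 / math.sqrt(target_dim)
    proj = [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(target_dim)]
    return [[scale * sum(r[j] * p[j] for j in range(d)) for r in proj]
            for p in data]
```

Because indexing in the reduced space is sub-quadratic, the projected kNN search is what lifts the quadratic bottleneck that plain LOF suffers on high-dimensional data.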
19.
20.
Mining Projected Clusters in High-Dimensional Spaces
Mohamed Bouguessa, Shengrui Wang 《IEEE Transactions on Knowledge and Data Engineering》2009,21(4):507-522
Clustering high-dimensional data has been a major challenge due to the inherent sparsity of the points. Most existing clustering algorithms become substantially inefficient if the required similarity measure is computed between data points in the full-dimensional space. To address this problem, a number of projected clustering algorithms have been proposed. However, most of them encounter difficulties when clusters hide in subspaces with very low dimensionality. These challenges motivate our effort to propose a robust partitional distance-based projected clustering algorithm. The algorithm consists of three phases. The first phase performs attribute relevance analysis by detecting dense and sparse regions and their location in each attribute. Starting from the results of the first phase, the goal of the second phase is to eliminate outliers, while the third phase aims to discover clusters in different subspaces. The clustering process is based on the K-means algorithm, with the computation of distance restricted to subsets of attributes where object values are dense. Our algorithm is capable of detecting projected clusters of low dimensionality embedded in a high-dimensional space and avoids the computation of the distance in the full-dimensional space. The suitability of our proposal has been demonstrated through an empirical study using synthetic and real datasets.