Found 17 similar documents; search took 343 ms.
1.
Boundary handling techniques in grid clustering (cited 4 times in total: 0 self-citations, 4 by others)
We propose a technique for identifying the boundary points of grid-based clusters using the concepts of restricted k-nearest neighbors and relative density, and present a boundary-handling algorithm for grid clustering together with a grid clustering algorithm with boundary handling (GBCB). Experiments show that the boundary-handling technique is highly accurate and effectively separates cluster boundary points from isolated points and noise. The grid clustering algorithm GBCB built on this technique can recognize clusters of arbitrary shape. Since it scans the dataset only once, its running time is linear in the size of the input, giving good scalability.
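The abstract gives no implementation details, but the grid-based separation of boundary points from noise can be illustrated with a minimal sketch (the cell size, `min_pts` threshold, and empty-neighbour test below are illustrative assumptions, not GBCB's actual rules):

```python
from collections import defaultdict

def grid_cells(points, cell_size):
    """Assign each 2-D point (by index) to a grid cell."""
    cells = defaultdict(list)
    for i, (x, y) in enumerate(points):
        cells[(int(x // cell_size), int(y // cell_size))].append(i)
    return cells

def classify_boundary(points, cell_size, min_pts=3):
    """Cells with fewer than min_pts points hold noise/isolated points;
    well-populated cells that touch an empty neighbouring cell hold
    boundary points. Returns (boundary indices, noise indices)."""
    cells = grid_cells(points, cell_size)
    boundary, noise = [], []
    for (cx, cy), members in cells.items():
        if len(members) < min_pts:
            noise.extend(members)
        elif any((cx + dx, cy + dy) not in cells
                 for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                 if (dx, dy) != (0, 0)):
            boundary.extend(members)
    return boundary, noise
```

On a 3x3 block of populated cells plus one isolated point, this marks the eight edge cells as boundary and the lone point as noise, mirroring the boundary/noise separation the abstract describes.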
2.
An efficient joint-entropy-based boundary point detection algorithm (cited 1 time in total: 1 self-citation, 0 by others)
To detect cluster boundary points quickly and effectively, we propose a boundary point detection algorithm that combines grid techniques with joint entropy. The grid technique quickly locates the range of grid cells containing cluster boundaries, and joint entropy then accurately identifies the boundary points within those cells. Experimental results show that the algorithm effectively detects cluster boundaries in datasets containing noise and isolated points, with high runtime efficiency.
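The paper's joint-entropy formula is not given in the abstract. As a hedged illustration of an entropy-style boundary test, one can bucket a point's neighbours by quadrant and compute the Shannon entropy of the bucket counts; interior points see neighbours in all directions (high entropy), boundary points see them on one side (low entropy). The quadrant bucketing and radius are assumptions, not the paper's method:

```python
import math

def direction_entropy(points, i, r):
    """Shannon entropy of the quadrant distribution of point i's
    neighbours within radius r. Low entropy suggests a boundary point."""
    px, py = points[i]
    counts = [0, 0, 0, 0]
    for j, (qx, qy) in enumerate(points):
        if j == i or math.dist((px, py), (qx, qy)) > r:
            continue
        # quadrant index relative to point i
        counts[(qx >= px) * 2 + (qy >= py)] += 1
    total = sum(counts)
    if total == 0:
        return 0.0  # isolated point: no neighbours at all
    return -sum(c / total * math.log2(c / total) for c in counts if c)
```

On a uniform grid, a corner point's neighbours all fall in one quadrant (entropy 0), while an interior point's neighbours spread over all four.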
3.
4.
5.
6.
7.
Given that an isolated point (outlier) is a data object whose attributes are inconsistent with most of the dataset, while a boundary point is a data object lying at the edge of regions of different density, we propose an algorithm for identifying outliers and boundary points based on relative density (OBRD). To decide whether a data point is a boundary point or an outlier, the algorithm bisects the point's neighborhood of radius r along each dimension into two half-neighborhoods; the relative densities of these half-neighborhoods with respect to the full neighborhood determine the point's isolation degree and boundary degree, which are then compared against thresholds. Experimental results show that the algorithm accurately and effectively identifies outliers and cluster boundary points in multi-density datasets.
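A minimal sketch of the half-neighbourhood idea: for each dimension, split the r-neighbourhood of a point into two halves and compare their sizes. A large imbalance suggests a boundary point; an almost-empty neighbourhood suggests an outlier. The exact score formulas below are assumptions; OBRD's actual definitions use the relative densities of the half-neighbourhoods:

```python
import math

def obrd_scores(points, i, r):
    """Return (isolation_degree, boundary_degree) for point i,
    both in [0, 1], using count imbalance across per-dimension
    half-neighbourhoods as a stand-in for relative density."""
    px = points[i]
    neigh = [q for j, q in enumerate(points)
             if j != i and math.dist(px, q) <= r]
    total = len(neigh)
    if total == 0:
        return 1.0, 0.0  # completely isolated: maximal isolation degree
    boundary_deg = 0.0
    for d in range(len(px)):
        lower = sum(1 for q in neigh if q[d] < px[d])
        upper = sum(1 for q in neigh if q[d] > px[d])
        boundary_deg = max(boundary_deg, abs(lower - upper) / total)
    isolation_deg = 1.0 / (1.0 + total)  # fewer neighbours -> more isolated
    return isolation_deg, boundary_deg
```

For points on a line segment, the endpoint gets boundary degree 1 (all neighbours on one side), a middle point gets 0, and a far-away point gets isolation degree 1.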
8.
9.
A grid-based shared nearest neighbor clustering algorithm (cited 1 time in total: 0 self-citations, 1 by others)
We propose a Grid-based shared Nearest Neighbor clustering algorithm (GNN). The algorithm uses grid techniques to remove some of the isolated points and noise from the dataset, a density-threshold technique to handle grid cell density thresholds, and a center-point technique to improve clustering efficiency. GNN scans the dataset only once and can handle clusters of arbitrary shape and size. Experiments show that GNN scales well and that its accuracy and efficiency are clearly better than those of the shared nearest neighbor algorithm SNN.
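The shared-nearest-neighbour similarity at the core of both GNN and SNN can be sketched as follows (brute-force k-NN for clarity; GNN's grid acceleration, density thresholds, and center-point technique are omitted):

```python
import math

def knn_lists(points, k):
    """Brute-force k-nearest-neighbour index lists for every point."""
    lists = []
    for i, p in enumerate(points):
        order = sorted((j for j in range(len(points)) if j != i),
                       key=lambda j: math.dist(p, points[j]))
        lists.append(order[:k])
    return lists

def snn_similarity(knn, a, b):
    """Shared-nearest-neighbour similarity: how many points appear
    in both a's and b's k-NN lists."""
    return len(set(knn[a]) & set(knn[b]))
```

Points inside the same dense region share most of their k nearest neighbours, so their SNN similarity is high even when raw distances vary with local density.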
10.
Traditional clustering is an unsupervised learning process whose accuracy is affected by the choice of similarity measure and by outliers in the dataset; moreover, such algorithms make little use of prior knowledge and cannot reflect user requirements. We therefore propose an outlier detection and semi-supervised clustering algorithm based on shared nearest neighbors. It uses shared-nearest-neighbor similarity, judges whether a point is an outlier from the number of its nearest neighbors, and then performs semi-supervised clustering on the dataset with the outliers removed. The semi-supervised step incorporates expanded prior knowledge and clusters the dataset according to graph-partitioning principles. Simulations on real datasets show that the proposed algorithm effectively detects outliers and achieves good clustering quality.
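A hedged reading of the neighbour-count outlier criterion: a point that appears in few other points' k-NN lists has few "mutual" neighbours and is flagged as an outlier. The parameters `k` and `min_count` are illustrative, not from the paper:

```python
import math

def knn(points, i, k):
    """Indices of the k nearest neighbours of point i (brute force)."""
    return sorted((j for j in range(len(points)) if j != i),
                  key=lambda j: math.dist(points[i], points[j]))[:k]

def reverse_nn_outliers(points, k, min_count):
    """Flag points that occur in fewer than min_count other points'
    k-NN lists: cluster members vouch for each other, outliers don't."""
    lists = [knn(points, i, k) for i in range(len(points))]
    counts = [0] * len(points)
    for lst in lists:
        for j in lst:
            counts[j] += 1
    return [i for i, c in enumerate(counts) if c < min_count]
```

In a tight cluster every point sits in several neighbours' k-NN lists, while an isolated point is in nobody's, so it is the only index returned.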
11.
12.
13.
Junying Zhang, Liuqing Peng, Xiaoxue Zhao, Ercan E. Kuruoglu. Expert Systems with Applications, 2012, 39(1): 335-349
Unsupervised clustering for datasets contaminated by severe outliers is a difficult task. We propose a cluster-dependent multi-metric clustering approach which is robust to severe outliers. A dataset is modeled as clusters, each contaminated by noise of an unknown, cluster-dependent level that gives rise to that cluster's outliers. With such a model, a multi-metric Lp-norm transformation is proposed and learnt which maps each cluster to the most Gaussian distribution by minimizing a non-Gaussianity measure. The approach is composed of two consecutive phases: multi-metric location estimation (MMLE) and multi-metric iterative chi-square cutoff (ICSC); algorithms for both are proposed. It is proved that the MMLE algorithm searches for the solution of a multi-objective optimization problem and in fact learns, for each cluster, a cluster-dependent multi-metric Lq-norm distance and/or a cluster-dependent multi-kernel defined in data space. Experiments on heavy-tailed alpha-stable mixture datasets, on Gaussian mixture datasets with radial and diffuse outliers added, and on the real Wisconsin breast cancer and lung cancer datasets show that the proposed method is superior to many existing robust clustering and outlier detection methods in both clustering and outlier detection performance.
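A 1-D sketch of an iterative chi-square cutoff (the paper's ICSC works per cluster with learnt multi-metric distances; here a single Gaussian cluster and the chi-square(1) 95% quantile of 3.84 are assumed):

```python
from statistics import mean, stdev

def iterative_cutoff(xs, z2_cut=3.84, max_iter=10):
    """Assume the inliers are roughly Gaussian: drop points whose
    squared z-score exceeds the chi-square(1) quantile z2_cut,
    re-estimate mean and stdev on the survivors, and repeat until
    no point is removed."""
    inliers = list(xs)
    for _ in range(max_iter):
        m, s = mean(inliers), stdev(inliers)
        kept = [x for x in inliers
                if s == 0 or ((x - m) / s) ** 2 <= z2_cut]
        if len(kept) == len(inliers):
            break  # fixed point reached: estimates stop changing
        inliers = kept
    return inliers
```

The iteration matters because a severe outlier inflates the initial stdev; after it is cut, the re-estimated scale is much tighter and further outliers (if any) become visible.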
14.
A high-accuracy clustering algorithm based on grid techniques (cited 5 times in total: 1 self-citation, 5 by others)
To improve the accuracy of grid-based clustering, we propose using the distance from points in low-density cells to the centers of high-density cells as the criterion for distinguishing cluster boundary points from isolated points, and develop the HQGC algorithm. Experiments show that the algorithm recognizes clusters of arbitrary shape with high accuracy, fast execution, and good scalability.
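A minimal sketch of the distance-to-dense-cell-center criterion (the cell size, density threshold `dense_min`, and cutoff radius below are assumptions, not HQGC's actual parameters):

```python
import math
from collections import defaultdict

def hqgc_classify(points, cell_size, dense_min=3, r_factor=1.5):
    """Points in sparse cells are boundary points if they lie close
    to the centre of some dense cell, outliers otherwise.
    Returns (boundary indices, outlier indices)."""
    cells = defaultdict(list)
    for i, (x, y) in enumerate(points):
        cells[(int(x // cell_size), int(y // cell_size))].append(i)
    # centres of high-density cells
    dense_centres = []
    for members in cells.values():
        if len(members) >= dense_min:
            xs = [points[i][0] for i in members]
            ys = [points[i][1] for i in members]
            dense_centres.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    boundary, outliers = [], []
    cutoff = r_factor * cell_size
    for members in cells.values():
        if len(members) >= dense_min:
            continue  # dense cells are cluster interiors
        for i in members:
            d = min((math.dist(points[i], c) for c in dense_centres),
                    default=float("inf"))
            (boundary if d <= cutoff else outliers).append(i)
    return boundary, outliers
```

A sparse-cell point adjacent to a dense cell lands within the cutoff and is kept as a boundary point, while a sparse-cell point far from every dense cell is rejected as an outlier.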
15.
Finding clusters in data is a challenging problem. Given a dataset, we usually do not know the number of natural clusters hidden in it, and the problem is exacerbated when there is little or no additional information except the data itself. This paper proposes a general stochastic clustering method that is a simplification of the nature-inspired ant-based clustering approach. It begins with a basic solution and then performs stochastic search to incrementally improve the solution until the underlying clusters emerge, resulting in automatic cluster discovery in datasets. This method differs from several recent methods in that it does not require users to input the number of clusters and it makes no explicit assumption about the underlying distribution of a dataset. Our experimental results show that the proposed method performs better than several existing methods in terms of clustering accuracy and efficiency on the majority of the datasets used in this study. Our theoretical analysis shows that the proposed method has linear time and space complexities, and our empirical study shows that it can accurately and efficiently discover clusters in large datasets on which many existing methods fail to run.
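The abstract describes stochastic incremental improvement only in general terms. The following generic sketch does plain hill-climbing on within-cluster squared error; it is not the paper's ant-based rules, and unlike the paper's method it still takes the cluster count `k` as input:

```python
import random

def stochastic_cluster(points, k, iters=2000, seed=0):
    """Start from a random assignment, then repeatedly propose moving
    one random point to one random cluster, keeping the move only if
    the total within-cluster squared error strictly decreases."""
    rng = random.Random(seed)
    assign = [rng.randrange(k) for _ in points]

    def sse(a):
        tot = 0.0
        for c in range(k):
            cl = [p for p, g in zip(points, a) if g == c]
            if not cl:
                continue
            cx = sum(x for x, _ in cl) / len(cl)
            cy = sum(y for _, y in cl) / len(cl)
            tot += sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in cl)
        return tot

    best = sse(assign)
    for _ in range(iters):
        i, c = rng.randrange(len(points)), rng.randrange(k)
        if assign[i] == c:
            continue
        trial = assign[:]
        trial[i] = c
        val = sse(trial)
        if val < best:  # accept only strict improvements
            assign, best = trial, val
    return assign
```

On two well-separated pairs of points, the random single-point moves quickly settle into the natural two-cluster solution.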
16.
Density-based clustering techniques such as DBSCAN are attractive because they can find arbitrarily shaped clusters while isolating noisy outliers. DBSCAN's time requirement is O(n²), where n is the size of the dataset, which makes it unsuitable for large datasets. The solution proposed in the paper is to first apply the leaders clustering method to derive prototypes, called leaders, from the dataset in a way that also preserves density information, and then to use these leaders to derive the density-based clusters. The resulting hybrid clustering technique, called rough-DBSCAN, has a time complexity of only O(n) and is analyzed using rough set theory. Experimental studies on both synthetic and real-world datasets compare rough-DBSCAN with DBSCAN. It is shown that for large datasets rough-DBSCAN finds a clustering similar to that found by DBSCAN but is consistently faster. Some properties of the leaders as prototypes are also formally established.
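The leaders method referred to above can be sketched in a few lines; here the follower counts stand in for the density information that the abstract says the leaders preserve:

```python
import math

def leaders(points, tau):
    """One-pass leaders clustering: the first point becomes a leader;
    each subsequent point joins the nearest existing leader within
    distance tau, otherwise it becomes a new leader. Returns the
    leader points and, per leader, how many points it represents."""
    leads, counts = [], []
    for p in points:
        best, bd = -1, tau
        for idx, l in enumerate(leads):
            d = math.dist(p, l)
            if d <= bd:
                best, bd = idx, d
        if best >= 0:
            counts[best] += 1
        else:
            leads.append(p)
            counts.append(1)
    return leads, counts
```

A density-based method can then be run on the (much smaller) leader set, weighting each leader by its count, which is the essence of the rough-DBSCAN speed-up.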
17.
Simple random sampling is the most commonly used data-reduction method when analyzing large datasets, but it can lose entire classes when the data are unevenly distributed. Density-biased sampling on a fixed grid partition solves this problem effectively, but its speed and quality depend on the partition granularity. We therefore propose a density-biased sampling algorithm based on a variable grid partition: the partition granularity of each dimension is determined from that dimension's distribution in the original dataset, yielding a grid space consistent with the dataset's distribution. Experimental results show that density-biased sampling on the variable grid markedly improves sample quality and is also somewhat more efficient than density-biased sampling on a fixed grid.
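A hedged sketch of density-biased sampling over a grid. The per-dimension bin counts are passed in as a parameter here; deriving them from each dimension's distribution is the paper's contribution and is omitted:

```python
import random
from collections import defaultdict

def density_biased_sample(points, bins_per_dim, target, seed=0):
    """Draw roughly target points by sampling the same number from
    every occupied grid cell, so sparse regions are over-represented
    relative to simple random sampling and small clusters survive."""
    rng = random.Random(seed)
    dims = len(points[0])
    mins = [min(p[d] for p in points) for d in range(dims)]
    maxs = [max(p[d] for p in points) for d in range(dims)]

    def cell(p):
        key = []
        for d, x in enumerate(p):
            span = maxs[d] - mins[d] or 1.0
            key.append(min(int((x - mins[d]) / span * bins_per_dim[d]),
                           bins_per_dim[d] - 1))
        return tuple(key)

    grid = defaultdict(list)
    for p in points:
        grid[cell(p)].append(p)
    per_cell = max(1, target // len(grid))
    sample = []
    for members in grid.values():
        sample.extend(rng.sample(members, min(per_cell, len(members))))
    return sample
```

With a 20-point dense cluster and a 2-point sparse one, simple random sampling of 4 points would likely miss the small cluster entirely; the per-cell quota guarantees it is represented.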