Similar Articles
20 similar articles found (search time: 593 ms)
1.
To address the inability of most clustering algorithms to efficiently discover clusters of arbitrary shape and varying density, an improved, efficient clustering algorithm based on a distance-correlation dynamic model is proposed. First, to improve efficiency, a hierarchical clustering algorithm produces an initial clustering of the dataset, and clusters containing too few samples are discarded. Second, to discover clusters of arbitrary shape and varying density, the centroids of the initial clusters serve as representative points and are clustered with the distance-correlation dynamic model, while the tree structure built by the hierarchical clustering is used for effective pruning. Finally, the validity of the algorithm is tested. Experiments on the Chameleon datasets show that the algorithm effectively identifies clusters of arbitrary shape and varying density, and that its running time improves significantly over comparable algorithms.

2.
Most ensemble clustering algorithms generate their base clusterings with K-means, which often yields unsatisfactory base partitions. Moreover, when a co-association matrix is used to combine the base clusterings, differences in their diversity are usually ignored: all base clusterings are treated equally, and the co-association matrix is built at the level of individual samples, so the computational burden grows sharply with the number of samples or the ensemble size. To address these problems, an ensemble clustering algorithm with weighted super-clusters (ECWSC) is proposed. It selects landmark points by combining random sampling with K-means, applies spectral clustering to the landmarks, and then maps each sample to its nearest landmark to form the base clusterings. On this basis, the uncertainty of each base clustering is measured by information entropy and used as its weight, a weighted super-cluster co-association matrix is built, and hierarchical clustering on that matrix produces the ensemble result. Experiments on 7 real and 4 synthetic datasets evaluate accuracy, robustness, and time complexity; the comparisons show that the algorithm effectively improves ensemble clustering performance.
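The co-association step described above can be sketched as follows. This is a minimal, generic version: it builds a plain sample-level co-association matrix from random-k K-means members and clusters it hierarchically, omitting the paper's landmark selection, super-clusters, and entropy weighting.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.cluster import KMeans

def coassociation_ensemble(X, n_base=10, k_final=2):
    """Combine several K-means base clusterings via a co-association matrix."""
    rng = np.random.default_rng(0)
    n = len(X)
    C = np.zeros((n, n))
    for i in range(n_base):
        k = int(rng.integers(2, 6))               # random cluster count per member
        labels = KMeans(n_clusters=k, n_init=5, random_state=i).fit_predict(X)
        C += labels[:, None] == labels[None, :]   # co-cluster indicator
    C /= n_base                # fraction of members that co-cluster each pair
    # hierarchical clustering on the co-association distance 1 - C
    Z = linkage(squareform(1.0 - C, checks=False), method="average")
    return fcluster(Z, t=k_final, criterion="maxclust") - 1
```

Pairs that frequently land in the same base cluster get a small distance 1 - C and are merged early by the average-linkage step.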

3.
The traditional BIRCH algorithm uses diameter to bound clusters, so it performs poorly on non-spherical clusters and may even split a non-spherical cluster into several. An improved BIRCH is proposed: it builds multiple CF trees, each representing one cluster, and incorporates the density-reachability idea of DBSCAN, enabling accurate clustering of arbitrarily shaped clusters. Experiments show that the algorithm clusters effectively in a single scan, matches BIRCH's time complexity, handles large datasets quickly, supports dynamic clustering, and accurately clusters arbitrarily shaped clusters while detecting noise points.
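A rough sketch of the CF-tree-plus-density idea, not the paper's multi-tree implementation: scikit-learn's Birch builds the CF tree in one scan, and DBSCAN over the subcluster centroids stands in for the density-reachable merging step.

```python
import numpy as np
from sklearn.cluster import Birch, DBSCAN

def birch_then_density_merge(X, threshold=1.0, eps=2.0):
    """One scan builds the CF tree; density-reachability merges its subclusters."""
    birch = Birch(threshold=threshold, n_clusters=None).fit(X)
    centroids = birch.subcluster_centers_
    # DBSCAN with min_samples=1 links centroids that are density-reachable,
    # standing in for the paper's merging of per-cluster CF trees
    merge = DBSCAN(eps=eps, min_samples=1).fit(centroids)
    # each point inherits the merged label of its nearest CF subcluster
    return merge.labels_[birch.predict(X)]
```

Because only the (few) CF centroids enter the density step, the merge stays cheap even on large datasets; `threshold` and `eps` are illustrative values that would need tuning per dataset.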

4.
Given the dependencies among attributes and the correlations among objects in a dataset, a new similarity-relation model is defined that captures the natural correlation between objects. On this basis, a natural clustering algorithm based on attribute dependency and object correlation is proposed. Without specifying the number of clusters in advance, it naturally groups together all objects whose similarity reaches a given threshold; adjusting the threshold yields clusterings at different granularities. Comparative experiments on numerical and categorical datasets show that, compared with other clustering algorithms, this natural clustering algorithm faithfully reflects the correlations in the data and the dataset's natural cluster structure, discovers clusters of arbitrary shape, and effectively improves clustering precision and quality.

5.
张梅  陈梅  李明 《计算机工程与科学》2021,43(12):2243-2252
To address the low precision, many iterations, and poor results of clustering algorithms when detecting arbitrary clusters, DBLCM, a density clustering algorithm with boundary-point partitioning based on a local centrality measure, is proposed. Under the local centrality measure, data points are partitioned into a core region or a boundary region. Core-region points form initial clusters by a mutual-nearest-neighbor-first assignment; each boundary-region point is assigned to the cluster of the nearest point among its mutual neighbors, yielding the final clusters. To verify its effectiveness, DBLCM is compared with three classic algorithms and three excellent recently proposed algorithms on two-dimensional datasets containing clusters of arbitrary shape and density and on multi-dimensional datasets of arbitrary dimensionality. In addition, the sensitivity of the parameter k is examined by testing the correlation between k and cluster quality on the same datasets. The results show that DBLCM achieves high accuracy, detects arbitrary clusters well, requires no iteration, and outperforms the six compared algorithms overall.

6.
Obtaining good clustering results requires choosing an algorithm suited to the dataset's cluster structure. This paper proposes a clustering-algorithm selection method based on grid minimum spanning trees that automatically chooses a suitable algorithm for a given dataset. The method first builds minimum spanning trees over a grid on the dataset; the number of trees reveals the dataset's latent cluster structure, and an algorithm suited to that structure is then selected. Experiments show the method is effective at finding a clustering algorithm that matches a given dataset's latent cluster structure.
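One plausible reading of the grid-MST step, sketched under assumptions (the paper's exact tree-splitting rule is not given here): occupy grid cells with data, build an MST over the occupied-cell centers, and cut unusually long edges; the number of resulting trees estimates the latent cluster count.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def count_grid_mst_clusters(X, cell=1.0, cut_factor=3.0):
    """Estimate the latent cluster count from an MST over occupied grid cells."""
    cells = np.unique(np.floor(X / cell).astype(int), axis=0)  # occupied cells
    centers = (cells + 0.5) * cell                             # cell midpoints
    D = np.linalg.norm(centers[:, None] - centers[None, :], axis=2)
    mst = minimum_spanning_tree(D).toarray()
    edges = mst[mst > 0]
    # cutting edges much longer than average splits the MST into one tree per cluster
    n_cut = int(np.sum(edges > cut_factor * edges.mean()))
    return n_cut + 1
```

Working on cell centers rather than raw points keeps the MST small; `cell` and `cut_factor` are illustrative parameters.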

7.
The concept of a grid-density influence factor is proposed: by weighting the combined influence of neighboring grid cells, it better represents the relative density of the current cell. It is then used to identify high-density grid cells belonging to clusters of different densities; expansion from these high-density cells generates a cluster skeleton, and boundary points in edge cells are identified and extracted to improve grid-clustering precision. Experiments verify that the new algorithm clusters groups of different sizes and shapes, recognizes datasets composed of classes with multiple densities, captures cluster boundary points, and achieves good clustering results.

8.
Determining the optimal number of clusters via hierarchical partitioning   Cited by: 20 (self-citations: 0, by others: 20)
Determining the number of clusters in a dataset is a fundamental open problem in cluster analysis. The common trial-and-error approach usually depends on a particular clustering algorithm and is computationally inefficient on large datasets. A hierarchical method is proposed that requires no repeated clustering of the dataset: it first scans the dataset to collect CF (clustering feature) statistics, then generates partitions of the dataset at different levels bottom-up, incrementally building a clustering-quality curve over those partitions; the partition at the curve's extremum estimates the optimal number of clusters. A new clustering validity index is also proposed to measure the quality of the different partitions; it focuses on the geometric structure of clusters, is independent of any particular clustering algorithm, and can recognize noise and clusters of complex shape. Experiments on real and synthetic data show that the new method outperforms other recently proposed indices while greatly improving computational efficiency.
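The quality-curve idea can be illustrated with a simple stand-in: score one partition per level of a hierarchy and take the curve's extremum. This sketch substitutes agglomerative clustering and the silhouette score for the paper's CF-based incremental construction and its own validity index.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score

def best_k_from_quality_curve(X, k_max=8):
    """Score one partition per hierarchy level; the curve's extremum estimates k."""
    curve = {}
    for k in range(2, k_max + 1):
        labels = AgglomerativeClustering(n_clusters=k).fit_predict(X)
        curve[k] = silhouette_score(X, labels)   # stand-in validity index
    return max(curve, key=curve.get), curve
```

Because the hierarchy is built once and only re-cut per level, this avoids re-clustering from scratch for every candidate k, which is the efficiency argument the abstract makes.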

9.
A CURE clustering algorithm incorporating information entropy   Cited by: 1 (self-citations: 0, by others: 1)
To improve the quality of the traditional CURE (Clustering Using REpresentatives) algorithm, information entropy is introduced. The algorithm pre-clusters the sample dataset with K-means; adopts an entropy-based similarity measure that uses the information carried by the elements of a cluster to quantify the relationships between clusters and describe the data distribution; and applies different selection strategies to pick representative points in the high-level and low-level clustering stages. Experiments on UCI and synthetic datasets show that the proposed algorithm improves clustering accuracy to some extent and clusters large datasets more efficiently than traditional CURE.

10.
An improved spectral clustering algorithm   Cited by: 2 (self-citations: 0, by others: 2)
Spectral clustering algorithms, a class of clustering algorithms grounded in spectral graph theory, can partition data of arbitrary shape and have been successfully applied to image segmentation and other fields. However, spectral clustering struggles to discover clusters of widely differing density, and its parameter selection relies on repeated experiments and personal experience. Drawing on the ideas of DBSCAN and fully accounting for the local structure of the data, an improved spectral clustering algorithm with a neighbor-adaptive scale is proposed. Its basic idea is to assign each point an adaptive scale based on its neighbor distribution, replacing the single global scale of standard spectral clustering. The neighbor-adaptive scale simplifies parameter selection, makes the new algorithm insensitive to density variation, gives it some robustness to outliers, and suits arbitrarily shaped data distributions better than standard spectral clustering. Comparisons with traditional clustering algorithms and common spectral clustering algorithms on synthetic datasets and real UCI datasets confirm that the algorithm achieves better clustering results.
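A neighbor-adaptive scale can be sketched in the style of self-tuning spectral clustering: each point's scale is its distance to its k-th neighbor, and the affinity uses the product of the two endpoints' scales. This is an illustrative construction consistent with the abstract, not the paper's exact formula.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def neighbor_adaptive_affinity(X, k=7):
    """Affinity with a per-point scale taken from each point's k-th neighbor."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    sigma = np.sort(D, axis=1)[:, k]             # local scale per point
    A = np.exp(-(D ** 2) / (sigma[:, None] * sigma[None, :]))
    np.fill_diagonal(A, 0.0)
    return A

def adaptive_spectral(X, n_clusters=2, k=7):
    """Standard spectral clustering on the locally scaled affinity."""
    A = neighbor_adaptive_affinity(X, k)
    return SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                              random_state=0).fit_predict(A)
```

Points in a sparse cluster get a large sigma and points in a dense cluster a small one, so within-cluster affinities stay comparable across densities, which is exactly the insensitivity to density variation the abstract claims.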

11.
Traditional cluster ensemble algorithms struggle to handle cluster analysis of massive data efficiently, so a parallel FCM cluster ensemble algorithm based on MapReduce is proposed. The algorithm uses random initial centroids to obtain diverse ensemble members, builds an OVERLAP matrix between the clusters of the members to find logically equivalent clusters, and finally uses voting over each object's assignments across the members to obtain the final clustering. Experiments show that the algorithm achieves good accuracy, speedup, and scalability, and can handle fairly large datasets.
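The overlap-matrix alignment and voting steps can be sketched sequentially (the MapReduce parallelization is omitted here): each member's cluster labels are remapped onto the labels of a reference member via their overlap matrix, after which a per-object majority vote is meaningful.

```python
import numpy as np

def align_to_reference(reference, labels):
    """Relabel `labels` via the cluster-overlap matrix so it matches `reference`."""
    k = int(max(reference.max(), labels.max())) + 1
    overlap = np.zeros((k, k), dtype=int)
    for r, l in zip(reference, labels):
        overlap[l, r] += 1                       # co-occurrence counts
    mapping = overlap.argmax(axis=1)             # most-overlapping reference cluster
    return mapping[labels]

def vote(partitions):
    """Majority vote over ensemble members aligned to the first member."""
    aligned = np.stack([align_to_reference(partitions[0], p) for p in partitions])
    return np.array([np.bincount(col).argmax() for col in aligned.T])
```

Without the alignment step, label 0 in one member and label 3 in another could denote the same logical cluster and the vote would be meaningless; the overlap matrix is what identifies those logically equivalent clusters.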

12.
Cluster analysis plays an important role in identifying the natural structure of a target dataset and has been widely used in many fields, such as pattern recognition, machine learning, image segmentation, and document clustering. Many different methods exist for cluster analysis; however, most real datasets are non-spherical and have complex shapes, and although these methods are widely used for clustering tasks, they are susceptible to noise and arbitrary shapes. We therefore propose a novel clustering algorithm, RNN-NSDC, based on the natural reverse-nearest-neighbor structure. First, the algorithm applies reverse nearest neighbors to extract core objects. Second, it clusters using the neighbor-structure information of the core objects; with noise effects excluded, the core sets represent the structure of the clusters well. RNN-NSDC can therefore obtain the optimal number of clusters for datasets containing outliers and arbitrarily shaped clusters. To verify the efficiency and accuracy of RNN-NSDC, experiments are conducted on synthetic and real datasets. The results indicate the superiority of RNN-NSDC over K-means, DBSCAN, DPC, SNNDPC, DCore and NaNLORE.
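The core-object extraction step can be sketched with a simple reverse-nearest-neighbor count; the threshold "at least k reverse neighbors" is an illustrative choice, not necessarily the paper's rule.

```python
import numpy as np

def reverse_nn_core_mask(X, k=5):
    """Mark core objects: points listed among the k nearest neighbors of >= k others."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    np.fill_diagonal(D, np.inf)                  # a point is not its own neighbor
    knn = np.argsort(D, axis=1)[:, :k]           # k nearest neighbors per point
    rnn_count = np.zeros(len(X), dtype=int)
    for nbrs in knn:
        rnn_count[nbrs] += 1                     # reverse-nearest-neighbor tally
    return rnn_count >= k                        # outliers collect few reverse neighbors
```

The intuition matches the abstract: an outlier may still have k neighbors, but few points count the outlier among *their* neighbors, so its reverse-neighbor tally stays low and it is excluded from the core set.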

13.
We propose a new clustering algorithm, called SyMP, which is based on synchronization of pulse-coupled oscillators. SyMP represents each data point by an Integrate-and-Fire oscillator and uses the relative similarity between the points to model the interaction between the oscillators. SyMP is robust to noise and outliers, determines the number of clusters in an unsupervised manner, and identifies clusters of arbitrary shapes. The robustness of SyMP is an intrinsic property of the synchronization mechanism. To determine the optimum number of clusters, SyMP uses a dynamic, cluster-dependent resolution parameter. To identify clusters of various shapes, SyMP models each cluster by an ensemble of Gaussian components. SyMP does not require the specification of the number of components for each cluster; this number is determined automatically using a dynamic intra-cluster resolution parameter. Clusters with simple shapes are modeled by a few components, while clusters with more complex shapes require a larger number of components. The proposed clustering approach is empirically evaluated on several synthetic data sets, and its performance is compared with GK and CURE. To illustrate the performance of SyMP on real and high-dimensional data sets, we use it to categorize two image databases.

14.
Conventional clustering ensemble algorithms employ a set of primary results, each of which includes a set of clusters that emerge from the data. Given a large number of available clusters, one faces the following questions: (a) can we obtain the same quality of results with a smaller number of clusters instead of the full ensemble? (b) If so, which subset of clusters is more efficient to use in the ensemble? In this paper, these two questions are answered. We explore a clustering ensemble approach combined with a cluster stability criterion as well as a dataset simplicity criterion to discover the finest subset of base clusters for each kind of dataset. A novel method is also proposed to accumulate the selected clusters and extract the final partitioning. Although one would expect performance to decrease as the ensemble shrinks, our experimental results show that our selection mechanism generally leads to superior results.

15.
In this paper, a novel clustering method in the kernel space is proposed. It effectively integrates several existing algorithms into an iterative clustering scheme that can handle clusters with arbitrary shapes. In our proposed approach, a reasonable initial core for each of the clusters is estimated. This allows us to adopt a cluster-growing technique, and the growing cores offer partial hints on cluster association. Consequently, methods used for classification, such as support vector machines (SVMs), can be useful in our approach. To obtain initial clusters effectively, the notion of the incomplete Cholesky decomposition is adopted so that fuzzy c-means (FCM) can be used to partition the data in a kernel-defined-like space. Then a one-class and a multiclass soft-margin SVM are adopted to detect the data within the main distributions (the cores) of the clusters and to repartition the data into new clusters iteratively. The structure of the data set is explored by pruning the data in the low-density regions of the clusters. Data are then gradually added back to the main distributions to assure exact cluster boundaries. Unlike the ordinary SVM algorithm, whose performance relies heavily on user-supplied kernel parameters, our approach estimates the parameters naturally from the data set. Experimental evaluations on two synthetic data sets and four University of California Irvine real data benchmarks indicate that the proposed algorithms outperform several popular clustering algorithms, such as FCM, support vector clustering (SVC), hierarchical clustering (HC), self-organizing maps (SOM), and non-Euclidean norm fuzzy c-means (NEFCM). © 2009 Wiley Periodicals, Inc.

16.
Existing cluster-fusion algorithms work at the level of ensemble members: using all members lets poor-quality members degrade the fused result, while selecting members before fusion introduces subjectivity into the selection strategy. To avoid both limitations to some extent, a new cluster-fusion method is proposed from the perspective of individual elements. A multi-granulation decision-inconsistent rough set is used to select those elements whose class membership is certain, and these elements are then fused to generate a new partition. The multi-granulation decision-inconsistent rough set model captures the phenomenon of consistent attributes but inconsistent decisions in multi-granulation decision processes; such a model is proposed, and a cluster-fusion method based on it is given. Concretely: K-means is first run multiple times on the dataset to generate multiple granular structures over the universe; the pairwise inclusion degrees between granular structures are then computed to build an inclusion-degree matrix, Otsu's algorithm is applied to the matrix to obtain a threshold, groups of information granules satisfying the threshold are derived, and the lower and upper approximations under multi-granulation decision inconsistency are solved; finally, the classes of elements in the lower approximation and in the boundary region are handled separately, producing a fused clustering partition. Experimental results show that the method effectively improves clustering results, has high time efficiency, and is robust.

17.
Many clustering algorithms, including cluster ensembles, rely on a random component. Stability of the results across different runs is considered an asset of the algorithm. The cluster ensembles considered here are based on k-means clusterers. Each clusterer is assigned a random target number of clusters, k, and is started from a random initialization. Here, we use 10 artificial and 10 real data sets to study ensemble stability with respect to random k and random initialization. The data sets were chosen to have a small number of clusters (two to seven) and a moderate number of data points (up to a few hundred). Pairwise stability is defined as the adjusted Rand index between pairs of clusterers in the ensemble, averaged across all pairs. Nonpairwise stability is defined as the entropy of the consensus matrix of the ensemble. An experimental comparison with the stability of the standard k-means algorithm was carried out for k from 2 to 20. The results revealed that ensembles are generally more stable, markedly so for larger k. To establish whether stability can serve as a cluster validity index, we first looked at the relationship between stability and accuracy with respect to the number of clusters, k. We found that such a relationship strongly depends on the data set, varying from almost perfect positive correlation (0.97, for the glass data) to almost perfect negative correlation (-0.93, for the crabs data). We propose a new combined stability index defined as the sum of the pairwise individual and ensemble stabilities. This index was found to correlate better with the ensemble accuracy. Following the hypothesis that a point of stability of a clustering algorithm corresponds to a structure found in the data, we used the stability measures to pick the number of clusters. The combined stability index gave the best results.
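The pairwise stability measure defined above can be sketched directly: build an ensemble of k-means runs with random k and random initialization, then average the adjusted Rand index over all pairs of members.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def pairwise_stability(X, n_members=10, k_lo=2, k_hi=8):
    """Mean adjusted Rand index over all pairs of randomly configured k-means runs."""
    rng = np.random.default_rng(0)
    parts = [KMeans(n_clusters=int(rng.integers(k_lo, k_hi + 1)),
                    n_init=3, random_state=i).fit_predict(X)
             for i in range(n_members)]
    # average agreement across all member pairs
    scores = [adjusted_rand_score(parts[a], parts[b])
              for a in range(n_members) for b in range(a + 1, n_members)]
    return float(np.mean(scores))
```

High values mean the ensemble members keep rediscovering the same partition despite random k and initialization, the behavior the paper links to genuine structure in the data.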

18.
Traditional clustering methods can rarely avoid choosing neighborhood parameters and the number of clusters, and the optimal choices differ across data of different shapes, so substantial prior knowledge is needed to determine suitable parameter ranges. To address this parameter-selection problem, NaN-BP, a border-peeling clustering algorithm based on the natural-neighbor idea, is proposed; it obtains satisfactory clustering results without setting neighborhood parameters or the number of clusters. Its core idea is to iterate adaptively, according to the distribution characteristics of the dataset, to a logarithmically stable state and obtain neighborhood information; to use that neighborhood information to mark and peel off boundary points; and finally to cluster with the core points as cluster centers. Extensive comparative experiments on datasets of different sizes and distributions demonstrate the adaptivity and effectiveness of NaN-BP and yield satisfactory results.

19.
Internet traffic is difficult to label and a single clusterer has weak generalization ability, so a selective cluster ensemble method based on mutual information (MI) theory is proposed to improve traffic-classification accuracy. First, the normalized mutual information (NMI) between K-means clustering results with different initial cluster counts K and the true distribution of traffic protocols in the training set is computed; then the sequence of K values for the K-means base clusterers used in the ensemble is selected based on the NMI values; finally, a consensus function based on quadratic mutual information (QMI) generates the consensus clustering, and a semi-supervised method labels the clusters. Experiments compare the overall classification accuracy of the cluster ensemble method against single clustering algorithms on four different test sets. The results show that the ensemble method reaches about 90% overall traffic-classification accuracy. The proposed method applies the cluster ensemble model to network-traffic classification, improving both classification accuracy and stability across datasets.
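The NMI-based selection step can be sketched as follows; the candidate range and "keep the top m" rule are illustrative, and a labeled reference (here `y_true`) stands in for the training set's protocol distribution.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score

def select_k_values(X, y_true, k_candidates=range(2, 9), top_m=3):
    """Rank candidate K values by NMI against reference labels; keep the best."""
    scored = []
    for k in k_candidates:
        labels = KMeans(n_clusters=k, n_init=5, random_state=0).fit_predict(X)
        # agreement between this K-means partition and the known distribution
        scored.append((normalized_mutual_info_score(y_true, labels), k))
    scored.sort(reverse=True)
    return [k for _, k in scored[:top_m]]
```

Only the base clusterers whose K values score highest enter the ensemble, which is the "selective" part of the method.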

20.
Combining the characteristics of density clustering and fuzzy clustering, a density-based fuzzy representative-point clustering algorithm is proposed. First, density is used to assess each point's potential to become a candidate cluster center: the higher a point's density, the more likely it is to become a center. A fuzzy method then determines the cluster center points, and merging these points yields the final cluster centers. The proposed algorithm is highly adaptive: it handles clustering problems with different shapes, requires no preset number of clusters, automatically identifies the truly existing cluster centers, and is well interpretable. By combining the advantages of different clustering methods, it achieves an effective partition of the data. In addition, it is robust with respect to the number of clusters and initialization, to clustering problems of different shapes, and to outliers. Experiments on synthetic datasets and real UCI datasets show that the proposed algorithm has good clustering performance and broad applicability.
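The candidate-center idea, that high-density points far from any denser point make good centers, can be sketched in a density-peaks style. This is an illustrative analogue, not the paper's fuzzy formulation; the Gaussian kernel density and the density-times-distance score are assumptions.

```python
import numpy as np

def representative_centers(X, radius=1.0, n_centers=2):
    """Score points by density times distance to the nearest denser point."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    density = np.exp(-(D / radius) ** 2).sum(axis=1)   # smooth kernel density
    delta = np.empty(len(X))
    for i in range(len(X)):
        higher = density > density[i]
        # distance to the nearest point of strictly higher density
        delta[i] = D[i, higher].min() if higher.any() else D[i].max()
    return np.argsort(density * delta)[-n_centers:]    # indices of top candidates
```

A true center is both dense and far from anything denser, so it scores high on the product; ordinary points near a center score low because their `delta` is small.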
