Similar Documents
20 similar documents found (search time: 15 ms)
1.
Hierarchical clustering is a stepwise clustering method usually based on proximity measures between objects or sets of objects from a given data set. The most common proximity measures are distance measures. The derived proximity matrices can be used to build graphs, which provide the basic structure for some clustering methods. We present here a new proximity matrix based on an entropic measure, together with a clustering algorithm (LEGClust) that builds layers of subgraphs from this matrix and combines them with a hierarchical agglomerative clustering technique to form the clusters. Our approach capitalizes on both a graph structure and a hierarchical construction. Moreover, by using entropy as a proximity measure we are able, with no assumption about the cluster shapes, to capture the local structure of the data, forcing the clustering method to reflect this structure. We present several experiments on artificial and real data sets that provide evidence of the superior performance of this new algorithm compared with competing ones.
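As a minimal illustration of the basic structure such methods build on (clustering driven purely by a proximity matrix), here is a generic single-link agglomeration sketch; it does not implement the paper's entropic measure or layered subgraphs:

```python
# Sketch: agglomerative single-link clustering driven purely by a
# proximity (distance) matrix. Generic illustration only.

def single_link(dist, n_clusters):
    """Merge the two closest clusters until n_clusters remain.

    dist: symmetric matrix (list of lists) of pairwise distances.
    Returns a list of clusters, each a sorted list of point indices.
    """
    clusters = [[i] for i in range(len(dist))]
    while len(clusters) > n_clusters:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # single link: distance between the closest members
                d = min(dist[i][j] for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] = sorted(clusters[a] + clusters[b])
        del clusters[b]
    return sorted(clusters)

if __name__ == "__main__":
    # two obvious groups: points {0, 1} close together, {2, 3} close together
    D = [[0, 1, 9, 9],
         [1, 0, 9, 8],
         [9, 9, 0, 1],
         [9, 8, 1, 0]]
    print(single_link(D, 2))  # [[0, 1], [2, 3]]
```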

2.
Clustering algorithms have the annoying habit of finding clusters even when the data are generated randomly. Verifying that potential clusterings are real in some objective sense is receiving more attention as the number of new clustering algorithms and their applications grow. We consider one aspect of this question and study the stability of a hierarchical structure with a variation on a measure of stability proposed in the literature.(1,2) Our measure of stability is appropriate for proximity matrices whose entries are on an ordinal scale. We randomly split the data set, cluster the two halves, and compare the two hierarchical clusterings with the clustering achieved on the entire data set. Two stability statistics, based on the Goodman-Kruskal rank correlation coefficient, are defined. The distributions of these statistics are estimated with Monte Carlo techniques for two clustering methods (single-link and complete-link) and under two conditions (randomly selected proximity matrices and proximity matrices with good hierarchical structure). The stability measures are applied to some real data sets.
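A sketch of the Goodman-Kruskal gamma rank correlation underlying the two stability statistics; this is the textbook concordant/discordant-pair form, not the paper's full split-and-compare procedure:

```python
# Sketch: Goodman-Kruskal gamma, computed from concordant and
# discordant pairs; tied pairs are ignored, as in the standard form.

from itertools import combinations

def goodman_kruskal_gamma(x, y):
    """gamma = (C - D) / (C + D) over all index pairs."""
    concordant = discordant = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        s = (xi - xj) * (yi - yj)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)

if __name__ == "__main__":
    print(goodman_kruskal_gamma([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0
    print(goodman_kruskal_gamma([1, 2, 3, 4], [4, 3, 2, 1]))  # -1.0
```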

3.
Dubnov  Shlomo  El-Yaniv  Ran  Gdalyahu  Yoram  Schneidman  Elad  Tishby  Naftali  Yona  Golan 《Machine Learning》2002,47(1):35-61
We present a novel pairwise clustering method. Given a proximity matrix of pairwise relations (i.e. pairwise similarity or dissimilarity estimates) between data points, our algorithm extracts the two most prominent clusters in the data set. The algorithm, which is completely nonparametric, iteratively employs a two-step transformation on the proximity matrix. The first step of the transformation represents each point by its relation to all other data points, and the second step re-estimates the pairwise distances using a statistically motivated proximity measure on these representations. Using this transformation, the algorithm iteratively partitions the data points until it finally converges to two clusters. Although the algorithm is simple and intuitive, it generates complex dynamics of the proximity matrices. Based on this bipartition procedure we devise a hierarchical clustering algorithm, which employs the basic bipartition algorithm in a straightforward divisive manner. The hierarchical clustering algorithm copes with the model validation problem using a general cross-validation approach, which may be combined with various hierarchical clustering methods. We further present an experimental study of this algorithm. We examine some of the algorithm's properties and performance on some synthetic and standard data sets. The experiments demonstrate the robustness of the algorithm and indicate that it generates a good clustering partition even when the data are noisy or corrupted.
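The two-step transformation can be sketched as follows; plain Euclidean distance stands in here for the paper's statistically motivated proximity measure:

```python
# Sketch of one iteration of the two-step transformation:
# step 1: represent each point i by its row of proximities dist[i];
# step 2: re-estimate pairwise distances between those representations
# (Euclidean distance used for brevity).

import math

def transform(dist):
    n = len(dist)
    return [[math.dist(dist[i], dist[j]) for j in range(n)]
            for i in range(n)]
```

Iterating `transform` repeatedly is what drives the proximity matrix toward a two-cluster structure in the paper's bipartition procedure.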

4.
5.
Data clustering has attracted a lot of research attention in the field of computational statistics and data mining. In most related studies, the dissimilarity between two clusters is defined as the distance between their centroids or the distance between the two closest (or farthest) data points. However, all of these measures are vulnerable to outliers, and removing the outliers precisely is yet another difficult task. In view of this, we propose a new similarity measure, referred to as cohesion, to measure the intercluster distances. Using this new measure of cohesion, we have designed a two-phase clustering algorithm, called cohesion-based self-merging (abbreviated as CSM), which runs in time linear in the size of the input data set. Combining the features of partitional and hierarchical clustering methods, algorithm CSM partitions the input data set into several small subclusters in the first phase and then continuously merges the subclusters based on cohesion in a hierarchical manner in the second phase. The time and space complexities of algorithm CSM are analyzed. As shown by our performance studies, cohesion-based clustering is very robust and possesses excellent tolerance to outliers in various workloads. More importantly, algorithm CSM is shown to be able to cluster data sets of arbitrary shapes very efficiently and to provide better clustering results than prior methods.
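The merge phase can be sketched as follows; taking cohesion as the mean similarity 1/(1 + d) over cross pairs is an illustrative stand-in for the paper's definition:

```python
# Sketch of a CSM-style merge step: among a set of subclusters, pick
# the pair with the highest cohesion. Cohesion here is the mean
# pairwise similarity 1/(1 + distance), an illustrative assumption.

def cohesion(a, b):
    """Mean pairwise similarity between 1-D subclusters a and b."""
    sims = [1.0 / (1.0 + abs(x - y)) for x in a for y in b]
    return sum(sims) / len(sims)

def best_merge(subclusters):
    """Return the index pair of subclusters with the highest cohesion."""
    pairs = [(i, j) for i in range(len(subclusters))
             for j in range(i + 1, len(subclusters))]
    return max(pairs, key=lambda p: cohesion(subclusters[p[0]],
                                             subclusters[p[1]]))

if __name__ == "__main__":
    # subclusters {0, 1} and {2, 3} are adjacent; {10, 11} is far away
    print(best_merge([[0, 1], [2, 3], [10, 11]]))  # (0, 1)
```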

6.
Combining multiple clusterings using evidence accumulation
We explore the idea of evidence accumulation (EAC) for combining the results of multiple clusterings. First, a clustering ensemble (a set of object partitions) is produced. Given a data set (n objects or patterns in d dimensions), different ways of producing data partitions are: 1) applying different clustering algorithms, and 2) applying the same clustering algorithm with different parameter values or initializations. Further, combinations of different data representations (feature spaces) and clustering algorithms can also provide a multitude of significantly different data partitionings. We propose a simple framework for extracting a consistent clustering, given the various partitions in a clustering ensemble. According to the EAC concept, each partition is viewed as independent evidence of data organization, and the individual data partitions are combined, based on a voting mechanism, to generate a new n × n similarity matrix between the n patterns. The final data partition of the n patterns is obtained by applying a hierarchical agglomerative clustering algorithm to this matrix. We have developed a theoretical framework for the analysis of the proposed clustering combination strategy and its evaluation, based on the concept of mutual information between data partitions. Stability of the results is evaluated using bootstrapping techniques. A detailed discussion of an evidence accumulation-based clustering algorithm, using a split-and-merge strategy based on the k-means clustering algorithm, is presented. Experimental results of the proposed method on several synthetic and real data sets are compared with other combination strategies, and with individual clustering results produced by well-known clustering algorithms.
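The voting step can be sketched as a co-association matrix: entry (i, j) is the fraction of ensemble partitions that place points i and j in the same cluster. The final partition would then come from hierarchical clustering of this matrix:

```python
# Sketch of evidence accumulation voting: build the n x n
# co-association matrix from an ensemble of label assignments.

def co_association(partitions, n):
    """partitions: list of label lists of length n. Returns n x n matrix."""
    m = [[0.0] * n for _ in range(n)]
    for labels in partitions:
        for i in range(n):
            for j in range(n):
                if labels[i] == labels[j]:
                    # each partition casts one (normalized) vote
                    m[i][j] += 1.0 / len(partitions)
    return m

if __name__ == "__main__":
    # two partitions of 3 points that disagree on the middle point
    print(co_association([[0, 0, 1], [0, 1, 1]], 3))
```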

7.
We study clustering methods for categorical and mixed-type data. First, building on classical rough set theory, we relax the indiscernibility and tolerance conditions between objects to obtain an extended rough set model based on a compatibility relation. We then define new concepts, including the indistinguishability degree between individuals, the indistinguishability degree between classes, and the overall approximation accuracy of a clustering result, and propose a new hierarchical clustering algorithm for mixed-type data. The algorithm can handle not only numerical data but also the categorical and mixed-type data that most clustering algorithms cannot. Experiments verify the feasibility of the algorithm.
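A toy sketch of a tolerance (compatibility) relation over mixed-type objects, in the spirit of relaxing indiscernibility; the attribute typing and the numeric threshold are illustrative assumptions, not the paper's definitions:

```python
# Sketch: two mixed-type objects are compatible when every shared
# categorical attribute matches (or is missing) and every shared
# numeric attribute differs by at most a threshold.

def compatible(x, y, numeric_eps=0.5):
    """x, y: dicts attribute -> value; None marks a missing value."""
    for k in x.keys() & y.keys():
        a, b = x[k], y[k]
        if a is None or b is None:
            continue  # missing values do not break compatibility
        if isinstance(a, (int, float)) and isinstance(b, (int, float)):
            if abs(a - b) > numeric_eps:
                return False
        elif a != b:
            return False
    return True
```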

8.
Fuzzy clustering for symbolic data
Most of the techniques used in the literature for clustering symbolic data are based on the hierarchical methodology, which uses agglomerative or divisive methods as the core of the algorithm. The main contribution of this paper is to show how to apply the concept of fuzziness to a data set of symbolic objects and how to use this concept to formulate the clustering of symbolic objects as a partitioning problem. Finally, a fuzzy symbolic c-means algorithm is introduced and tested on real and synthetic data sets. The results show that the new technique is quite efficient and, in many respects, superior to traditional methods of a hierarchical nature.

9.
A similarity measure is a useful tool for determining the similarity between two objects. Although many different similarity measures between intuitionistic fuzzy sets (IFSs) have been proposed in the literature, the Jaccard index has yet to be considered as a way to define them. The Jaccard index is a statistic used for comparing the similarity and diversity of sample sets. In this study, we propose a new similarity measure for IFSs induced by the Jaccard index. According to our results, the proposed similarity measure between IFSs based on the Jaccard index exhibits better properties. Several examples are used to compare the proposed approach with several existing methods. Numerical results show that the proposed measures are more reasonable than the existing ones. On the other hand, measuring the similarity between IFSs is also important in clustering. Thus, we also propose a clustering procedure that combines the proposed similarity measure with a robust clustering method for analyzing IFS data sets, and we compare this procedure with two existing clustering methods for IFS data sets.
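For reference, the Jaccard index on crisp sets and its standard min/max extension to fuzzy membership vectors; the paper's IFS version additionally involves non-membership and hesitation degrees:

```python
# Sketch: the classical Jaccard index |A ∩ B| / |A ∪ B| on crisp
# sets, and the standard sum-min / sum-max form for fuzzy sets.

def jaccard(a, b):
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # convention: two empty sets are identical
    return len(a & b) / len(a | b)

def fuzzy_jaccard(u, v):
    """Sum of min(u_i, v_i) over sum of max(u_i, v_i)."""
    return (sum(min(a, b) for a, b in zip(u, v)) /
            sum(max(a, b) for a, b in zip(u, v)))
```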

10.
The k-means algorithm is well known for its efficiency in clustering large data sets. However, working only on numeric values prohibits it from being used to cluster real world data containing categorical values. In this paper we present two algorithms which extend the k-means algorithm to categorical domains and domains with mixed numeric and categorical values. The k-modes algorithm uses a simple matching dissimilarity measure to deal with categorical objects, replaces the means of clusters with modes, and uses a frequency-based method to update modes in the clustering process to minimise the clustering cost function. With these extensions the k-modes algorithm enables the clustering of categorical data in a fashion similar to k-means. The k-prototypes algorithm, through the definition of a combined dissimilarity measure, further integrates the k-means and k-modes algorithms to allow for clustering objects described by mixed numeric and categorical attributes. We use the well known soybean disease and credit approval data sets to demonstrate the clustering performance of the two algorithms. Our experiments on two real world data sets with half a million objects each show that the two algorithms are efficient when clustering large data sets, which is critical to data mining applications.
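The two k-modes building blocks can be sketched directly from the description above: the simple matching dissimilarity between categorical objects, and the frequency-based mode of a cluster:

```python
# Sketch: simple matching dissimilarity (count of mismatched
# attributes) and the per-attribute most-frequent-value mode.

from collections import Counter

def matching_dissimilarity(x, y):
    """Number of attributes on which objects x and y disagree."""
    return sum(1 for a, b in zip(x, y) if a != b)

def mode(cluster):
    """Per-attribute most frequent value over a list of objects."""
    return tuple(Counter(col).most_common(1)[0][0]
                 for col in zip(*cluster))

if __name__ == "__main__":
    print(matching_dissimilarity(("a", "b", "c"), ("a", "x", "c")))  # 1
    print(mode([("a", "b"), ("a", "c"), ("d", "c")]))  # ('a', 'c')
```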

11.
12.
罗会兰  危辉 《Computer Science (计算机科学)》2010,37(11):234-238
We propose CBEST, a clustering algorithm for mixed data based on ensemble and spectral clustering techniques. It uses cluster ensembles to derive similarities between mixed-type data objects, a similarity measure that makes no assumptions about the distribution of attribute values. Spectral clustering is then applied to the resulting similarity matrix to obtain the final clustering of the mixed data. Experiments on a large number of real and synthetic data sets verify the effectiveness of CBEST and its robustness to noise. Comparative studies with other mixed-data clustering algorithms also demonstrate its superior performance. CBEST can furthermore incorporate prior knowledge effectively, setting different attribute weights in clustering through parameter tuning.

13.
Spatial clustering analysis is an important issue that has been widely studied to extract meaningful subgroups from geo-referenced data. Although many approaches have been developed in the literature, efficiently modeling the network constraint that objects (e.g. urban facilities) are observed on or alongside a street network remains a challenging task for spatial clustering. Based on techniques of mathematical morphology, this paper presents a new spatial clustering approach, NMMSC, designed for mining the grouping patterns of network-constrained point objects. NMMSC is essentially a hierarchical clustering approach, and it generally consists of two main steps: first, the original vector data are converted to raster data by utilizing the basic linear unit of the network as the pixel in network space; second, based on the specified one-dimensional raster structure, an extended mathematical morphology operator (dilation) is iteratively performed to identify spatial point agglomerations with a hierarchical structure snapped onto the network. Compared to existing methods of network-constrained hierarchical clustering, our method computes cluster similarity more efficiently, with linear time complexity. The effectiveness and efficiency of our approach are verified through experiments with real and synthetic data sets.
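The core operator can be sketched as binary dilation on a one-dimensional raster; the 3-cell structuring element (each pixel plus its two neighbours along the network) is an illustrative assumption:

```python
# Sketch: binary dilation on a 1-D raster of network units. A set
# pixel grows into its immediate neighbours; iterating the operator
# grows point agglomerations along the network.

def dilate(raster):
    """raster: list of 0/1 pixels along a network path."""
    n = len(raster)
    return [1 if any(raster[j] for j in (i - 1, i, i + 1)
                     if 0 <= j < n) else 0
            for i in range(n)]

if __name__ == "__main__":
    print(dilate([0, 1, 0, 0, 0]))  # [1, 1, 1, 0, 0]
```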

14.
The first stage of organizing objects is to partition them into groups or clusters. Clustering is generally done either on individual object data representing the entities, such as feature vectors, or on object relational data incorporated in a proximity matrix. This paper describes another method for finding a fuzzy membership matrix that provides cluster membership values for all the objects based strictly on the proximity matrix. This is generally referred to as relational data clustering. The fuzzy membership matrix is found by first finding a set of vectors that approximately have the same inter-vector Euclidean distances as the given proximities. These vectors can be of very low dimension, such as 5 or less. Fuzzy c-means (FCM) is then applied to these vectors to obtain a fuzzy membership matrix. In addition, two-dimensional vectors are created to provide a visual representation of the proximity matrix, which allows comparison of the results of automatic clustering with visual clustering. The method proposed here is compared to other relational clustering methods, including NERFCM, Roubens' method, and Windham's A-P method, and various clustering quality indices are calculated for the comparison using various proximity matrices as input. Simulations show the method to be very effective and no more computationally expensive than other relational data clustering methods. The membership matrices produced by the proposed method are less crisp than those produced by NERFCM and more representative of the proximity matrix used as input to the clustering process.
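One core step of embedding a proximity matrix into low-dimensional vectors is double-centering, which turns squared distances into a Gram (inner-product) matrix whose leading eigenvectors give coordinates for FCM to cluster. Only the centering step is sketched here; the eigen-decomposition and FCM are omitted:

```python
# Sketch: classical-MDS-style double centering. Given a distance
# matrix D, compute B = -1/2 * J * D^2 * J, the Gram matrix of
# mean-centered coordinates that reproduce the distances.

def double_center(dist):
    n = len(dist)
    d2 = [[dist[i][j] ** 2 for j in range(n)] for i in range(n)]
    row = [sum(r) / n for r in d2]          # row means of squared distances
    grand = sum(row) / n                    # grand mean
    return [[-0.5 * (d2[i][j] - row[i] - row[j] + grand)
             for j in range(n)] for i in range(n)]

if __name__ == "__main__":
    # three collinear points at 0, 1, 2: centered coords -1, 0, 1
    print(double_center([[0, 1, 2], [1, 0, 1], [2, 1, 0]]))
```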

15.
The curse of dimensionality, noisy data, and the strong dependence of input parameters on domain knowledge are challenging problems in clustering uncertain data. To address them, we propose HDUDEC (High Dimensional Uncertain Data Efficient Clustering), an algorithm based on a similarity measure and the idea of agglomerative hierarchical clustering. The algorithm computes similarities between objects with a measure that accurately expresses the similarity between high-dimensional uncertain objects, and then clusters bottom-up according to a similarity threshold. Experiments show that the new algorithm requires little prior knowledge, filters noisy data effectively, and efficiently obtains high-dimensional uncertain clusters of arbitrary shapes.

16.
彭新东  杨勇 《Journal of Computer Applications (计算机应用)》2015,35(8):2350-2354
To address the difficulty of precisely defining information measures for interval-valued fuzzy soft sets, we propose axiomatic definitions of distance measure, similarity measure, entropy, inclusion measure, and subsethood measure for interval-valued fuzzy soft sets, give formulas for these information measures, and discuss the transformation relations among them. We then propose a similarity-based clustering algorithm that exploits the characteristics of interval-valued fuzzy soft sets, focusing on clustering experts who assign similar knowledge levels to the evaluated objects, and discuss its computational complexity. Finally, an example shows that the algorithm handles the expert clustering problem effectively.
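A sketch of one such measure: a normalized Hamming-style distance averaging the lower- and upper-bound deviations over all parameters, with similarity as its complement. This is an illustrative instance, not the paper's axiomatic formulas:

```python
# Sketch: normalized Hamming-style distance between two
# interval-valued fuzzy evaluations, and similarity = 1 - distance.

def interval_distance(f, g):
    """f, g: lists of (lower, upper) membership intervals in [0, 1]."""
    total = sum(abs(fl - gl) + abs(fu - gu)
                for (fl, fu), (gl, gu) in zip(f, g))
    return total / (2 * len(f))

def interval_similarity(f, g):
    return 1.0 - interval_distance(f, g)
```

Experts could then be clustered by thresholding `interval_similarity` between their evaluation vectors.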

17.
In this paper, we show how one can take advantage of the stability and effectiveness of object data clustering algorithms when the data to be clustered are available in the form of mutual numerical relationships between pairs of objects. More precisely, we propose a new fuzzy relational algorithm, based on the popular fuzzy C-means (FCM) algorithm, which does not require any particular restriction on the relation matrix. We describe the application of the algorithm to four real and four synthetic data sets, and show that our algorithm performs better than well-known fuzzy relational clustering algorithms on all these sets.

18.
An outlier mining algorithm for high-dimensional mixed-attribute data
李庆华  李新  蒋盛益 《Journal of Computer Applications (计算机应用)》2005,25(6):1353-1356
Outlier detection is one of the most fundamental problems studied in data mining, with wide applications in fraud detection, weather forecasting, customer segmentation, and intrusion detection. Motivated by the needs of network intrusion detection, we propose a new outlier mining algorithm based on clustering of mixed-attribute data and, following the principle that outliers are rare points in a data set, give new definitions of data similarity and outlier degree. The proposed algorithm has linear time complexity. Experiments on the KDD CUP 99 and Wisconsin Prognosis Breast Cancer data sets show that, while providing near-linear time complexity and good scalability, the algorithm detects the outliers in the data sets well.

19.
20.
A hybrid clustering procedure for concentric and chain-like clusters
The K-means algorithm is a well-known nonhierarchical method for clustering data. Its most important limitations are that: (1) it gives final clusters on the basis of the cluster centroids or the seed points chosen initially, and (2) it is appropriate only for data sets having fairly isotropic clusters. However, this algorithm has the advantage of low computation and storage requirements. On the other hand, the hierarchical agglomerative clustering algorithm, which can cluster nonisotropic (chain-like and concentric) clusters, has high storage and computation requirements. This paper suggests a new method for selecting the initial seed points, so that the K-means algorithm gives the same results for any input data order. This paper also describes a hybrid clustering algorithm, based on the concepts of multilevel theory, which is nonhierarchical at the first level and hierarchical from the second level onwards, to cluster data sets having (i) chain-like clusters and (ii) concentric clusters. It is observed that this hybrid clustering algorithm gives the same results as the hierarchical clustering algorithm, with lower computation and storage requirements.
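One order-independent initialization in the same spirit is a farthest-first rule: start from the point farthest from the data mean, then repeatedly pick the point farthest from the seeds chosen so far. This is an illustrative rule (still order-dependent under exact ties), not the paper's exact method:

```python
# Sketch: deterministic farthest-first seed selection for K-means.
# The chosen seeds do not depend on input order (barring exact ties).

def select_seeds(points, k):
    """points: list of (x, y) tuples; returns k seed points."""
    n = len(points)
    mean = (sum(p[0] for p in points) / n,
            sum(p[1] for p in points) / n)

    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    # first seed: the point farthest from the data mean
    seeds = [max(points, key=lambda p: d2(p, mean))]
    while len(seeds) < k:
        # next seed: the point farthest from all seeds chosen so far
        seeds.append(max(points,
                         key=lambda p: min(d2(p, s) for s in seeds)))
    return seeds
```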
