Similar Documents
20 similar documents were retrieved.
1.
In high-dimensional data, clusters of objects usually exist in subspaces; moreover, different clusters may have different shape volumes. Most existing methods for clustering high-dimensional data, however, consider only the former factor and ignore the latter by assuming the same shape volume for all clusters. In this paper we propose a new Gaussian mixture model (GMM) type algorithm for discovering clusters with various shape volumes in subspaces. We extend the GMM clustering method to calculate a local weight vector as well as a local variance within each cluster, and use the weight and variance values to capture the main properties that discriminate different clusters, including the subsets of relevant dimensions and the shape volumes. This is achieved by introducing the negative entropy of the weight vectors, along with adaptively chosen coefficients, into the objective function of the extended GMM. Experimental results on both synthetic and real datasets show that the proposed algorithm outperforms its competitors, especially when applied to high-dimensional datasets.
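The abstract does not spell out the objective function; purely as a hedged illustration of how a negative-entropy term on per-cluster weight vectors can be attached to a locally weighted, variance-aware cost (the coefficients $\gamma_k$ and the likelihood-like term below are assumptions, not the paper's formulation):
\[
\min_{U,\,Z,\,W,\,\sigma}\;
\sum_{k=1}^{K}\sum_{i=1}^{n} u_{ik}\sum_{j=1}^{d} w_{kj}\!\left(\frac{(x_{ij}-z_{kj})^{2}}{2\sigma_k^{2}}+\log\sigma_k\right)
\;+\;\sum_{k=1}^{K}\gamma_k\sum_{j=1}^{d} w_{kj}\log w_{kj},
\qquad \sum_{j=1}^{d} w_{kj}=1,\; w_{kj}\ge 0,
\]
where $w_{kj}$ is the local weight of dimension $j$ in cluster $k$, $\sigma_k$ models the cluster's shape volume, and a larger $\gamma_k$ spreads the weights of cluster $k$ more evenly across the dimensions.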

2.
Due to data sparseness and attribute redundancy in high-dimensional data, clusters of objects often exist in subspaces rather than in the entire space. To effectively address this issue, this paper presents a new optimization algorithm for clustering high-dimensional categorical data, which is an extension of the k-modes clustering algorithm. In the proposed algorithm, a novel weighting technique for categorical data is developed to calculate two weights for each attribute (or dimension) in each cluster, and the weight values are used to identify the subsets of important attributes that categorize different clusters. The convergence of the algorithm under an optimization framework is proved. The performance and scalability of the algorithm are evaluated experimentally on both synthetic and real data sets. The experimental studies show that the proposed algorithm is effective in clustering categorical data sets and also scalable to large data sets owing to its linear time complexity with respect to the number of data objects, attributes or clusters.

3.
Cluster validation is a major issue in cluster analysis for data mining; it is the process of evaluating the performance of clustering algorithms under varying input conditions. Many existing validity indices address clustering results of low-dimensional data. In high-dimensional data, many of the dimensions are irrelevant, and clusters usually exist only in some projected subspaces spanned by different combinations of dimensions. This paper presents a solution to the problem of cluster validation for projective clustering. We propose two new measurements for the intracluster compactness and intercluster separation of projected clusters. Based on these measurements and the conventional indices, three new cluster validity indices are presented. Combined with a fuzzy projective clustering algorithm, the new indices are used to determine the number of projected clusters in high-dimensional data. The suitability of our proposal has been demonstrated through an empirical study using synthetic and real-world datasets.

4.
In this paper, we present an agglomerative fuzzy $k$-means clustering algorithm for numerical data, an extension of the standard fuzzy $k$-means algorithm that introduces a penalty term into the objective function to make the clustering process insensitive to the initial cluster centers. The new algorithm can produce more consistent clustering results from different sets of initial cluster centers. Combined with cluster validation techniques, the new algorithm can determine the number of clusters in a data set, which is a well-known problem in $k$-means clustering. Experimental results on synthetic data sets (2 to 5 dimensions, 500 to 5000 objects and 3 to 7 clusters), the BIRCH two-dimensional data set of 20000 objects and 100 clusters, and the WINE data set of 178 objects, 17 dimensions and 3 clusters from UCI have demonstrated the effectiveness of the new algorithm in producing consistent clustering results and determining the correct number of clusters in different data sets, some with overlapping inherent clusters.

5.
The traditional k-means algorithm chooses its initial cluster centers at random, which makes the clustering results unstable. To address this problem, an improved method for selecting the initial centers of k-means based on a dispersion measure is proposed. The algorithm first treats all objects as one large cluster, then repeatedly selects from the cluster with the most objects the two objects with the largest and smallest dispersion as initial cluster centers, and assigns the remaining objects of that cluster to the nearest of these initial clusters, until the number of clusters equals the specified value k. These k clusters are then used as the initial clusters for the k-means algorithm. The proposed algorithm, the traditional k-means algorithm and the max-min distance clustering algorithm were applied to several data sets. The experimental results show that the improved k-means algorithm selects a unique set of initial centers, requires fewer iterations during clustering, and produces stable clustering results with higher accuracy.
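A minimal sketch of the seeding procedure described above, assuming the dispersion of an object is the sum of its Euclidean distances to the other objects of its current cluster (the paper's exact definition of the dispersion measure may differ):

```python
import numpy as np

def dispersion(X):
    """Assumed dispersion: sum of Euclidean distances from each object to all others."""
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2)).sum(axis=1)

def dispersion_based_seeds(X, k):
    """Split the data into k initial clusters as the abstract describes, then return their means."""
    clusters = [np.arange(len(X))]                      # start with one big cluster
    while len(clusters) < k:
        largest = max(range(len(clusters)), key=lambda c: len(clusters[c]))
        idx = clusters.pop(largest)                     # split the most populated cluster
        d = dispersion(X[idx])
        c1, c2 = idx[d.argmax()], idx[d.argmin()]       # most and least dispersed objects
        rest = idx[(idx != c1) & (idx != c2)]
        to_c1 = (np.linalg.norm(X[rest] - X[c1], axis=1)
                 <= np.linalg.norm(X[rest] - X[c2], axis=1))
        clusters.append(np.concatenate(([c1], rest[to_c1])))
        clusters.append(np.concatenate(([c2], rest[~to_c1])))
    return np.array([X[c].mean(axis=0) for c in clusters])
```

The returned centers can then seed a standard k-means run, e.g. `sklearn.cluster.KMeans(n_clusters=k, init=centers, n_init=1)`.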

6.
The k-means algorithm is well known for its efficiency in clustering large data sets. However, since it works only on numeric values, it cannot be used to cluster real-world data containing categorical values. In this paper we present two algorithms which extend the k-means algorithm to categorical domains and to domains with mixed numeric and categorical values. The k-modes algorithm uses a simple matching dissimilarity measure to deal with categorical objects, replaces the means of clusters with modes, and uses a frequency-based method to update modes in the clustering process so as to minimise the clustering cost function. With these extensions the k-modes algorithm enables the clustering of categorical data in a fashion similar to k-means. The k-prototypes algorithm, through the definition of a combined dissimilarity measure, further integrates the k-means and k-modes algorithms to allow for clustering objects described by mixed numeric and categorical attributes. We use the well-known soybean disease and credit approval data sets to demonstrate the clustering performance of the two algorithms. Our experiments on two real-world data sets with half a million objects each show that the two algorithms are efficient when clustering large data sets, which is critical to data mining applications.
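A minimal, illustrative sketch of the k-modes idea (simple-matching dissimilarity, modes in place of means, frequency-based mode updates); this is a reconstruction for illustration, not the authors' implementation:

```python
import numpy as np

def matching_dissimilarity(a, b):
    """Number of attributes on which two categorical objects differ."""
    return int(np.sum(a != b))

def k_modes(X, k, n_iter=100, seed=0):
    """X: 2-D array of categorical codes. Returns cluster labels and modes."""
    rng = np.random.default_rng(seed)
    modes = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        # assign every object to its best-matching mode
        labels = np.array([min(range(k), key=lambda c: matching_dissimilarity(x, modes[c]))
                           for x in X])
        new_modes = modes.copy()
        for c in range(k):
            members = X[labels == c]
            if len(members):
                for j in range(X.shape[1]):
                    values, counts = np.unique(members[:, j], return_counts=True)
                    new_modes[c, j] = values[np.argmax(counts)]   # most frequent category
        if np.array_equal(new_modes, modes):
            break
        modes = new_modes
    return labels, modes
```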

7.
A fuzzy k-modes algorithm for clustering categorical data
This correspondence describes extensions of the fuzzy k-means algorithm to the clustering of categorical data. By using a simple matching dissimilarity measure for categorical objects and modes instead of means for clusters, a new approach is developed which allows the k-means paradigm to be used to efficiently cluster large categorical data sets. A fuzzy k-modes algorithm is presented and the effectiveness of the algorithm is demonstrated with experimental results.
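The membership update in fuzzy k-modes typically takes the familiar fuzzy-c-means form with the matching dissimilarity $d(\cdot,\cdot)$ and a fuzziness exponent $\alpha > 1$; shown here as a hedged illustration rather than the paper's exact statement:
\[
u_{li} =
\begin{cases}
1, & x_i = z_l,\\
0, & x_i = z_h \text{ for some } h \neq l,\\[0.4em]
\left[\displaystyle\sum_{h=1}^{k}\left(\frac{d(z_l, x_i)}{d(z_h, x_i)}\right)^{\frac{1}{\alpha-1}}\right]^{-1}, & \text{otherwise,}
\end{cases}
\]
and the cluster modes are still updated by per-attribute frequency counts, now weighted by $u_{li}^{\alpha}$.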

8.
A granular-computing-based Web document clustering method (WDCGrc) is proposed. The method computes term weights for each document with the TF-IDF rule, reduces the dimensionality by combining a per-document threshold with the average weight, and extracts the key terms of each document. A mapping between a document's key terms and binary granules is established, and an algorithm based on granular computing is proposed to extract association rules between documents and obtain frequent itemsets among documents. Initial clusters are formed from the frequent itemsets and then refined with an optimization algorithm to obtain the final clustering. Experimental results show that the method is effective and achieves good clustering quality.
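A small sketch of the TF-IDF weighting and threshold-based key-term selection step; combining a per-document threshold with the average weight is an interpretation of the abstract, not the paper's exact rule:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["granular computing for web documents",
        "clustering web documents with association rules",
        "frequent itemsets form the initial clusters"]

# TF-IDF weights for every term in every document
vec = TfidfVectorizer()
W = vec.fit_transform(docs).toarray()
terms = np.array(vec.get_feature_names_out())

# keep, per document, only terms whose weight is at least the document's mean
# positive weight (assumed reading of "threshold combined with average weight")
key_terms = []
for row in W:
    mean_w = row[row > 0].mean()
    key_terms.append(set(terms[row >= mean_w]))
print(key_terms)
```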

9.
Chinese text clustering based on the COSA algorithm
When traditional clustering algorithms compute the distance between two objects, every attribute contributes equally to the distance. The COSA (Clustering On Subsets of Attributes) algorithm [1] assumes that in different groups each attribute may contribute unequally to the distance, because objects in different groups may cluster on different subsets of attributes. On this basis, reference [1] defines a new distance and proposes two COSA algorithms: COSA1 is a partitioning clustering algorithm, and COSA2 is a hierarchical clustering algorithm. To compare the COSA distance with the traditional Euclidean distance in text clustering, this paper carries out partitioning and hierarchical clustering experiments on Chinese texts. The results show that the COSA algorithms perform better than clustering based on the Euclidean distance, and that COSA is more stable with respect to changes in the number of attributes.

10.
Research on a k-means algorithm based on hierarchical clustering
To address the fact that the k-means algorithm requires the user to specify the number of clusters k, a threshold t and the cluster centers Q in advance, a hierarchical k-means clustering algorithm (HKMA) is proposed. The algorithm first applies hierarchical clustering to the documents to obtain an initial clustering and takes the resulting number of clusters as the value of k for k-means; on this basis, the clustering result is refined with k-means. Experiments verify the accuracy and time efficiency of the algorithm, and a comparison with other clustering algorithms shows that the proposed algorithm achieves better performance.
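A minimal sketch of the two-stage idea: hierarchical clustering to pick k and seed centers, followed by k-means refinement. The distance threshold used to cut the dendrogram is an assumption, not the paper's criterion:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans

def hkma_like(X, distance_threshold):
    # Stage 1: hierarchical clustering, cut at an assumed distance threshold
    Z = linkage(X, method="ward")
    labels = fcluster(Z, t=distance_threshold, criterion="distance")
    k = labels.max()                                   # number of clusters found
    centers = np.vstack([X[labels == c].mean(axis=0) for c in range(1, k + 1)])
    # Stage 2: refine the hierarchical result with k-means
    km = KMeans(n_clusters=k, init=centers, n_init=1).fit(X)
    return km.labels_, km.cluster_centers_
```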

11.
Document clustering using synthetic cluster prototypes
The use of centroids as prototypes for clustering text documents with the k-means family of methods is not always the best choice for representing text clusters, due to the high dimensionality, sparsity, and low quality of text data. Especially when we seek clusters with a small number of objects, the use of centroids may lead to poor solutions under bad initial conditions. To overcome this problem, we propose the idea of a synthetic cluster prototype, which is computed by first selecting a subset of cluster objects (instances), then computing the representative of these objects, and finally selecting important features. In this spirit, we introduce the MedoidKNN synthetic prototype, which favors the representation of the dominant class in a cluster. These synthetic cluster prototypes are incorporated into the generic spherical k-means procedure, leading to a robust clustering method called k-synthetic prototypes (k-sp). Comparative experimental evaluation demonstrates the robustness of the approach, especially for small datasets and clusters overlapping in many dimensions, and its superior performance against traditional and subspace clustering methods.
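As a hedged reading of the synthetic-prototype construction (select a subset of cluster objects around the medoid, average them, keep only the strongest features), where the neighbourhood size p and feature count m are illustrative parameters, not the paper's settings:

```python
import numpy as np

def synthetic_prototype(X_cluster, p=5, m=50):
    """X_cluster: rows are L2-normalised tf-idf vectors of one cluster's documents."""
    sims = X_cluster @ X_cluster.T                     # cosine similarities
    medoid = np.argmax(sims.sum(axis=1))               # object most similar to the rest
    nearest = np.argsort(-sims[medoid])[:p]            # medoid plus its nearest neighbours
    proto = X_cluster[nearest].mean(axis=0)            # representative of the subset
    keep = np.argsort(-proto)[:m]                      # keep only the m heaviest features
    sparse_proto = np.zeros_like(proto)
    sparse_proto[keep] = proto[keep]
    norm = np.linalg.norm(sparse_proto)
    return sparse_proto / norm if norm > 0 else sparse_proto
```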

12.
As one of the most fundamental yet important methods of data clustering, the center-based partitioning approach divides a dataset into k subsets, each of which is represented by a centroid or medoid. In this paper, we propose a new medoid-based k-partitions approach called Clustering Around Weighted Prototypes (CAWP), which works with a similarity matrix. In CAWP, each cluster is characterized by multiple objects with different representative weights. With this new cluster representation scheme, CAWP aims to simultaneously produce clusters of improved quality and a set of ranked representative objects for each cluster. An efficient algorithm is derived to alternately update the clusters and the representative weights of objects with respect to each cluster. An annealing-like optimization procedure is incorporated to alleviate the local optimum problem, yielding better clustering results while making the algorithm less sensitive to parameter settings. Experimental results on benchmark document datasets show that CAWP achieves favorable effectiveness and efficiency in clustering, and also provides useful information for cluster-specific analysis.

13.
Since practical data mining problems involve clustering high-dimensional data, the resulting clusters are high-dimensional geometrical objects, which are difficult to analyze and interpret. Cluster validity measures try to solve this problem by providing a single numerical value. As a low-dimensional graphical representation of the clusters can be much more informative than such a single value, this paper proposes a new tool for the visualization of fuzzy clustering results. Using the basic properties of fuzzy clustering algorithms, this new tool maps the cluster centers and the data such that the distances between the clusters and the data points are preserved. During the iterative mapping process, the algorithm uses the membership values of the data and minimizes an objective function similar to that of the original clustering algorithm. Compared with the original Sammon mapping, not only are reliable cluster shapes obtained, but the numerical complexity of the algorithm is also drastically reduced. The developed tool has been applied to the visualization of reconstructed phase-space trajectories of chaotic systems. The case study demonstrates that the proposed FUZZSAMM algorithm is a useful tool in user-guided clustering.
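The abstract does not state the objective; a plausible membership-weighted variant of the Sammon stress, used here only as an illustration, is
\[
E = \sum_{k=1}^{c}\sum_{i=1}^{N} \mu_{ki}^{m}\,\bigl(d(x_i, v_k) - d^{*}(y_i, z_k)\bigr)^{2},
\]
where $d(x_i,v_k)$ is the distance of data point $x_i$ to cluster centre $v_k$ in the original space, $d^{*}(y_i,z_k)$ the corresponding distance in the low-dimensional projection, and $\mu_{ki}$ the fuzzy membership. Only $N\times c$ distances are involved instead of the $N(N-1)/2$ pairwise distances of classical Sammon mapping, which is where a drastic complexity reduction can come from.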

14.
An improved rough clustering method based on genetic algorithms
Traditional clustering algorithms partition data objects with hard assignments, yet in reality the boundaries between different classes are often not clear-cut. Rough set theory provides a way to handle the uncertainty of boundary objects, so rough set theory is combined here with the k-means method. In addition, the traditional k-means method must be given the number of clusters k in advance, which is difficult to determine in practice; moreover, although traditional k-means has strong local search ability, it easily falls into local optima, while genetic algorithms can reach globally optimal solutions but converge too quickly. For these reasons, an improved rough clustering method based on genetic algorithms is proposed. The algorithm generates the number of clusters for k-means dynamically, uses the max-min principle to generate the initial cluster centers, and handles boundary objects with the upper and lower approximations of rough set theory. Finally, the algorithm is validated on the UCI Iris data set. The experimental results show that the algorithm achieves higher accuracy and more stable overall performance.
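For reference, the centroid update commonly used in rough k-means variants, which combines the lower approximation and the boundary region with fixed importance weights, has roughly the following form (whether the paper uses exactly this form is not stated in the abstract):
\[
z_k =
\begin{cases}
w_{\text{low}}\dfrac{\sum_{x\in \underline{C_k}} x}{|\underline{C_k}|}
 + w_{\text{bnd}}\dfrac{\sum_{x\in \overline{C_k}\setminus \underline{C_k}} x}{|\overline{C_k}\setminus \underline{C_k}|}, & \overline{C_k}\setminus \underline{C_k} \neq \varnothing,\\[1.2em]
\dfrac{\sum_{x\in \underline{C_k}} x}{|\underline{C_k}|}, & \text{otherwise},
\end{cases}
\]
with $w_{\text{low}} + w_{\text{bnd}} = 1$, where $\underline{C_k}$ and $\overline{C_k}$ are the lower and upper approximations of cluster $k$.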

15.
Automated variable weighting in k-means type clustering
This paper proposes a k-means type clustering algorithm that can automatically calculate variable weights. A new step is introduced into the k-means clustering process to iteratively update the variable weights based on the current partition of the data, and a formula for the weight calculation is proposed. A convergence theorem for the new clustering process is given. The variable weights produced by the algorithm measure the importance of the variables in clustering and can be used for variable selection in data mining applications where large and complex real data are often involved. Experimental results on both synthetic and real data have shown that the new algorithm outperformed the standard k-means type algorithms in recovering clusters in data.
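In W-k-means-style algorithms the weight update commonly takes the following closed form, shown here as a hedged illustration ($\beta > 1$ is the weight exponent; the paper's exact formula may differ in details):
\[
D_j = \sum_{k=1}^{K}\sum_{i=1}^{n} u_{ik}\, d(x_{ij}, z_{kj}), \qquad
w_j = \begin{cases}
0, & D_j = 0,\\[0.6em]
\dfrac{1}{\sum_{t:\,D_t\neq 0} \left(\dfrac{D_j}{D_t}\right)^{\frac{1}{\beta-1}}}, & D_j \neq 0,
\end{cases}
\]
so variables with small within-cluster dispersion $D_j$ receive large weights.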

16.
An adaptive subspace clustering algorithm for high-dimensional data streams
Clustering high-dimensional data streams is a hot research topic in data mining. Because data streams are large in volume, change quickly and are high-dimensional, many clustering algorithms cannot achieve good clustering quality on them. An adaptive subspace clustering algorithm for high-dimensional data streams, SAStream, is proposed. The algorithm improves the micro-cluster structure of HPStream and defines candidate clusters; the distance from a newly arriving data point to the centroid of a candidate cluster is computed only within the corresponding subspace, which reduces the number of micro-clusters that have to be examined during clustering. The resulting micro-clusters are stored in a pyramidal time frame, and a temporal decay function is used to delete expired micro-clusters. When the stream volume is high, the algorithm automatically adjusts the boundary radius and the cluster selection factor according to the monitored usage of system resources, thereby adjusting the granularity of the clustering. Experimental results show that the algorithm achieves good clustering quality and fast data processing.
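Stream clustering methods in the HPStream family typically weight micro-cluster statistics with an exponential, half-life style decay; a common form, used here only as an illustration of the "temporal decay function" mentioned above, is
\[
f(t) = 2^{-\lambda\,(t_c - t)},
\]
where $t$ is the arrival time of a point, $t_c$ the current time and $\lambda > 0$ controls how quickly old micro-clusters fade; a micro-cluster whose decayed weight falls below a threshold is treated as expired and deleted.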

17.
王宇. 《计算机工程与设计》, 2004, 25(11): 1884-1885, 1896
Building on an analysis of the K-means and K-medians clustering algorithms, the data objects are clustered using the Tschebyshev (Chebyshev) distance (the ∞-norm), which yields the novel result that the cluster center is exactly the mean of the maximum and the minimum of the data objects in the cluster. On this basis, a new clustering algorithm, the K-maxmins clustering algorithm, is proposed. A comparison of the results of the K-maxmins algorithm with those of the traditional K-means and K-medians clustering algorithms is given.
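A minimal sketch of a k-means-style loop under the Chebyshev (∞-norm) distance, where each center is the per-dimension midrange (mean of the maximum and minimum) of its cluster; this is an illustrative reconstruction of the idea, not the paper's exact algorithm:

```python
import numpy as np

def k_maxmins_like(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        # assign with the Chebyshev (infinity-norm) distance
        d = np.max(np.abs(X[:, None, :] - centers[None, :, :]), axis=2)
        labels = d.argmin(axis=1)
        new_centers = centers.copy()
        for c in range(k):
            members = X[labels == c]
            if len(members):
                # midrange per dimension: mean of the max and the min
                new_centers[c] = (members.max(axis=0) + members.min(axis=0)) / 2
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers
```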

18.
The traditional k-nearest-neighbor algorithm (kNN) is widely used in data analysis as a non-parametric classification technique, but it performs many redundant computations, so processing data takes considerable time. Most current research focuses on the data pre-processing stage, building models over the data to reduce the cost of kNN queries. This paper proposes a width-weighted clustering kNN algorithm based on the number of objects (NOWCkNN). The data set is first clustered with a global width; each resulting sub-cluster then recursively computes the weight of its width according to the number of objects it contains, and the algorithm adjusts the width value according to this weight and a harmonic coefficient, finally producing clusters of different widths for the kNN query. This not only reduces the clustering time but also balances the sizes of the generated clusters, reduces the number of iterations, and maximizes the pruning rate of the triangle inequality. Experimental results show that NOWCkNN performs better than existing work on data sets of various dimensionalities, and its pruning efficiency is especially high on high-dimensional, large data sets.
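For context, the triangle-inequality pruning that such cluster-based kNN indexes rely on can be sketched as follows; the clustering itself is assumed to be given, and this is not the NOWCkNN algorithm, only the standard pruning test it aims to make more effective:

```python
import heapq
import numpy as np

def knn_with_cluster_pruning(q, centers, radii, members, k):
    """members[c]: array of points in cluster c; radii[c]: max distance of a member to centers[c]."""
    # visit clusters in order of distance from the query
    order = np.argsort(np.linalg.norm(centers - q, axis=1))
    heap = []                                    # max-heap (negated) of the current k best distances
    for c in order:
        d_qc = np.linalg.norm(centers[c] - q)
        # triangle inequality: no member of cluster c can be closer than d(q, center) - radius
        if len(heap) == k and d_qc - radii[c] >= -heap[0]:
            continue                             # prune the whole cluster
        for x in members[c]:
            d = np.linalg.norm(x - q)
            if len(heap) < k:
                heapq.heappush(heap, -d)
            elif d < -heap[0]:
                heapq.heapreplace(heap, -d)
    return sorted(-h for h in heap)              # the k smallest distances found
```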

19.

Text document clustering is used to separate a collection of documents into several clusters such that the documents within a cluster are substantially similar, while the documents in one cluster are distinct from the documents in other clusters. The high-dimensional, sparse document-term matrix reduces the efficiency of the clustering process. This study proposes a new way of clustering documents using a domain ontology and the WordNet ontology, with the main objective of increasing the quality of the cluster output. The work investigates and examines methods of selecting feature dimensions to reduce the features of the document-term matrix. Sports documents are clustered using conventional K-Means with a dimension-reduction (DR) feature-selection process and with density-based clustering. A novel approach named ontology-based document clustering is proposed for grouping the text documents, developed in three main steps. The first step of the ontology-based clustering approach starts with data pre-processing, where the features are reduced with an Information-Gain-based DR selection. The documents are then clustered using two clustering methods, K-Means and density-based clustering, with the DR feature-selection process; these methods validate the findings of the ontology-based clustering, and this study compares them using measurement metrics. The second step examines the development of the sports-field ontology and describes the principles and relationships of its terms using sports-related documents; a Semantic Web reasoning process is used to test the ontology for validation purposes. An algorithm for retrieving synonyms of the sports domain ontology terms has been proposed and implemented: the terms retrieved from the documents and the sports ontology concepts are mapped to the synonym sets retrieved from the WordNet ontology, and the suggested technique is based on the synonyms of the mapped concepts. The proposed ontology approach employs the reduced feature set to cluster the text documents. The results are compared with two traditional approaches on two datasets. The proposed ontology-based clustering approach is found to be effective in clustering the documents with high precision, recall, and accuracy. In addition, this study also compares different RDF serialization formats for the sports ontology.
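As a small illustration of the synonym-retrieval step against WordNet, using NLTK's WordNet interface; the sports-ontology concepts and document terms below are made-up examples, and the paper's own retrieval algorithm may differ:

```python
# pip install nltk ; run nltk.download('wordnet') once beforehand
from nltk.corpus import wordnet as wn

def synonym_set(term):
    """Collect WordNet lemma names over all synsets of a term."""
    synonyms = set()
    for synset in wn.synsets(term):
        for lemma in synset.lemmas():
            synonyms.add(lemma.name().replace("_", " ").lower())
    return synonyms

# hypothetical sports-ontology concepts mapped to document terms via their synonym sets
ontology_terms = ["football", "referee", "tournament"]
doc_terms = {"soccer", "umpire", "match"}
for concept in ontology_terms:
    overlap = synonym_set(concept) & doc_terms
    print(concept, "->", overlap)
```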


20.
This paper proposes a new method to weight subspaces in feature groups and individual features for clustering high-dimensional data. In this method, the features of high-dimensional data are divided into feature groups based on their natural characteristics. Two types of weights are introduced into the clustering process to simultaneously identify the importance of feature groups and individual features in each cluster. A new optimization model is given to define the clustering process, and a new clustering algorithm, FG-k-means, is proposed to solve this model. The new algorithm is an extension of k-means that adds two steps to automatically calculate the two types of subspace weights. A new data generation method is presented to generate high-dimensional data with clusters in subspaces of both feature groups and individual features. Experimental results on synthetic and real-life data have shown that the FG-k-means algorithm significantly outperformed four k-means type algorithms, i.e., k-means, W-k-means, LAC and EWKM, in almost all experiments. The new algorithm is robust to noise and missing values, which commonly exist in high-dimensional data.
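The dissimilarity in such two-level weighting schemes typically has the following shape, given here as a hedged illustration: $G_t$ denotes the $t$-th feature group, $w_{lt}$ the group weight and $v_{lj}$ the individual feature weight in cluster $l$; the exact exponents and constraints may differ from the paper's:
\[
d_l(x_i, z_l) = \sum_{t=1}^{T} w_{lt} \sum_{j \in G_t} v_{lj}\,\bigl(x_{ij} - z_{lj}\bigr)^2,
\qquad \sum_{t=1}^{T} w_{lt} = 1,\quad \sum_{j\in G_t} v_{lj} = 1,
\]
so each cluster can emphasise both whole feature groups and individual features inside a group.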
