Similar Documents
18 similar documents found (search time: 62 ms)
1.
Two Spectral Algorithms for Solving the Text Clustering Ensemble Problem   Cited by 8 (0 self-citations, 8 by others)
徐森  卢志茂  顾国昌 《自动化学报》2009,35(7):997-1002
The key problem in clustering ensembles is how to combine different clusterers into a final, better clustering result. This paper applies the idea of spectral clustering to the text clustering ensemble problem; however, spectral clustering requires the eigendecomposition of a large-scale matrix to obtain a low-dimensional embedding of the texts for subsequent clustering. We first propose an ensemble algorithm that uses an algebraic transformation to convert the large-scale eigendecomposition into an equivalent singular value decomposition, and further into a smaller eigendecomposition. We then study the properties of spectral clustering further and propose a second ensemble algorithm that obtains the low-dimensional embedding of the texts indirectly, by solving for the low-dimensional embedding of the hyperedges. Experimental results on the TREC and Reuters text datasets show that the two proposed spectral algorithms are more robust than other graph-partitioning-based ensemble algorithms and are effective methods for the text clustering ensemble problem.
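The algebraic transformation described above, which replaces a large eigendecomposition with a much smaller one via the SVD relationship, can be sketched in a few lines of numpy. The membership matrix `H`, its sizes, and the random construction are illustrative assumptions, not the paper's actual hypergraph:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 200, 12, 3        # n texts, m hyperedges (base clusters), k embedding dims
H = (rng.random((n, m)) < 0.3).astype(float)   # hypothetical binary membership matrix

# Small m x m eigenproblem instead of the n x n one:
# if (H^T H) v = s v with s > 0, then u = H v / sqrt(s) is a unit
# eigenvector of H H^T with the same eigenvalue.
vals, V = np.linalg.eigh(H.T @ H)
top = np.argsort(vals)[::-1][:k]
U_small = H @ V[:, top] / np.sqrt(vals[top])

# Direct large-scale computation, here only to verify the equivalence
vals_big, U_big = np.linalg.eigh(H @ H.T)
U_direct = U_big[:, np.argsort(vals_big)[::-1][:k]]

# The columns agree up to sign, so |U_small^T U_direct| is the identity
err = np.linalg.norm(np.abs(U_small.T @ U_direct) - np.eye(k))
print(err < 1e-6)
```

Only the m x m matrix H^T H ever needs to be decomposed, which is the source of the claimed savings when m is much smaller than n.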

2.
A Text Clustering Ensemble Algorithm Based on Matrix Spectral Analysis   Cited by 1 (0 self-citations, 1 by others)
Clustering ensemble techniques can effectively improve the accuracy and stability of a single clustering algorithm; the key problem is how to combine different clustering members into a better clustering result. This paper applies spectral clustering to the text clustering ensemble problem and designs a spectral algorithm based on the normalized Laplacian matrix (NLM-SA). Using an algebraic transformation, the algorithm obtains the eigenvectors of the normalized Laplacian matrix indirectly, by solving for the eigenvalues and eigenvectors of a small-scale matrix, and uses them for subsequent clustering. Studying the key ideas of spectral clustering further, we design a spectral algorithm based on the hyperedge transition probability matrix (HTMSA). It obtains the low-dimensional embedding of the texts indirectly via the low-dimensional embedding of the hyperedges, which is then fed to the K-means algorithm. Experimental results on the TREC and Reuters text collections verify the effectiveness of NLM-SA and HTMSA: both obtain better results than other graph-partitioning-based ensemble algorithms. HTMSA yields slightly worse results than NLM-SA, but its time and space requirements are much lower.
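For background, the normalized-Laplacian embedding that such algorithms build on can be sketched in its standard Ng-Jordan-Weiss form; this is generic spectral embedding, not NLM-SA itself, and the two synthetic 2-D blobs merely stand in for document vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
# Two well-separated blobs stand in for two groups of documents
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])

# Gaussian affinity and symmetric normalized Laplacian L = I - D^{-1/2} W D^{-1/2}
d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
W = np.exp(-d2 / (2 * 0.5 ** 2))
np.fill_diagonal(W, 0)
Dinv = np.diag(W.sum(1) ** -0.5)
L = np.eye(len(X)) - Dinv @ W @ Dinv

# Embed into the two smallest eigenvectors and row-normalize
vals, vecs = np.linalg.eigh(L)
E = vecs[:, :2]
E /= np.linalg.norm(E, axis=1, keepdims=True)

# Embeddings collapse within each blob and separate across blobs,
# so a simple K-means on E would recover the two groups
same = np.linalg.norm(E[:30] - E[0], axis=1).max()
diff = np.linalg.norm(E[30:] - E[0], axis=1).min()
print(same < 1e-3 < diff)
```

The small-matrix trick described in the abstract avoids forming and decomposing the full n x n Laplacian built here.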

3.
徐森  卢志茂  顾国昌 《控制与决策》2009,24(8):1277-1280

The key problem in clustering ensembles is how to combine different clustering members into a better clustering result. Spectral clustering is applied to this problem, and a spectral algorithm based on the similarity matrix (SMSA) is proposed, but its high computational cost makes it unsuitable for large-scale text collections. The properties of spectral clustering are studied further, spectral analysis is applied to the hyperedge similarity matrix, and a meta-clustering algorithm based on the hyperedge similarity matrix (HSM-MCLA) is proposed. Experimental results on real text datasets show that SMSA and HSM-MCLA outperform other graph-partitioning-based ensemble algorithms, and that HSM-MCLA achieves results comparable to SMSA at a markedly lower computational cost.


4.
A Fuzzy Spectral Clustering Algorithm for Text Clustering   Cited by 1 (0 self-citations, 1 by others)
The application of spectral clustering has expanded from image segmentation into text mining, with some success. Building on automatic determination of the number of clusters, this paper combines fuzzy theory with spectral clustering and proposes a fuzzy clustering algorithm for multi-document clustering. The algorithm describes a fuzzy spectral clustering method in which a single document can belong to several document classes simultaneously. Simulation results show that the algorithm achieves good clustering performance.

5.
A Clustering Ensemble Algorithm Based on Spectral Clustering   Cited by 6 (7 self-citations, 6 by others)
周林  平西建  徐森  张涛 《自动化学报》2012,38(8):1335-1342
Spectral clustering is a recently developed class of high-performing clustering algorithms that can cluster data of arbitrary shape, but it is sensitive to the scale parameter. Exploiting the robustness and generalization ability of clustering ensembles, this paper proposes a clustering ensemble algorithm based on spectral clustering. The algorithm first exploits the intrinsic properties of spectral clustering to construct diverse clustering members; it then computes a similarity matrix with the connected-triple algorithm, enriching the similarity information between data points; finally, spectral clustering is applied to this similarity matrix to obtain the ensemble result. To scale to large applications, the Nyström sampling algorithm computes only the similarities among the sampled points and between the sampled points and the remaining points, effectively reducing the computational complexity. The algorithm thus benefits from the performance of spectral clustering while avoiding the need to choose the scale parameter precisely. Experimental results show that, compared with other common clustering ensemble algorithms, the proposed algorithm is more effective and better addresses problems such as data clustering and image segmentation.
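The Nyström step described above can be illustrated with a toy reconstruction of the full similarity matrix from landmark columns only; the RBF kernel, uniform sampling, and sizes are illustrative assumptions rather than the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(2)
n, l = 300, 40                     # n points, l randomly sampled landmarks
X = rng.normal(size=(n, 2))

def rbf(A, B, s=1.0):
    # Gaussian (RBF) similarity between the rows of A and the rows of B
    d2 = ((A[:, None] - B[None]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s * s))

idx = rng.choice(n, size=l, replace=False)
W11 = rbf(X[idx], X[idx])          # landmark-to-landmark block (l x l)
W21 = rbf(X, X[idx])               # all-points-to-landmark block (n x l)

# Nystrom reconstruction: only the two thin blocks above are ever computed
W_hat = W21 @ np.linalg.pinv(W11) @ W21.T

# Full matrix computed here only to measure the approximation error
W_full = rbf(X, X)
rel = np.linalg.norm(W_full - W_hat) / np.linalg.norm(W_full)
print(rel < 0.1)
```

In an actual spectral pipeline one would eigendecompose via the l x l block rather than forming `W_hat`, keeping the cost linear in n.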

6.
Spectral clustering uses eigenvectors to construct a simplified data space, reducing the dimensionality of the data while making its distribution structure in the subspace more apparent. This paper proposes a rough spectral clustering algorithm and applies it to text data mining. Experiments show that the algorithm improves accuracy compared with existing text clustering algorithms.

8.
Since spectral clustering is too computationally expensive for community detection in massive networks, this paper proposes a spectral clustering ensemble algorithm (SCEA) for discovering overlapping network communities. First, the efficient approximate spectral clustering (KASP) algorithm generates a set of individual clusterings; next, a selection mechanism picks out the better individual clusterings and builds a cluster similarity graph over them; finally, hierarchical soft clustering yields a soft partition of the network nodes. Experimental results show that, compared with representative algorithms (CPM, Link, COPRA, SSDE), SCEA discovers overlapping community structures with higher normalized mutual information (NMI) and is relatively more robust.

9.
With the explosion of text resources, especially web pages, text mining and analysis has attracted increasing attention. Spectral clustering is a relatively new and commonly used method for text clustering. This paper introduces non-negative constraints into the traditional spectral clustering algorithm and proposes a spectral clustering method based on non-negative constraints. Experiments verify the effectiveness of the proposed method for Chinese text clustering.

10.
Spectral clustering is a pairwise clustering algorithm built on spectral graph theory; it is simple to implement, theoretically well founded, and applicable to arbitrary data spaces, which has made it a research hotspot in machine learning. Its biggest drawback is high computational complexity, and parallel computing can improve efficiency. This paper therefore implements a parallel spectral clustering algorithm with the popular MapReduce framework in a Hadoop environment, greatly improving the clustering efficiency of spectral clustering on large-scale data.
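As a language-agnostic sketch (plain Python standing in for Hadoop MapReduce), the map/reduce split for one spectral-clustering subtask, computing the degree vector of the similarity graph, looks roughly like this; the block partitioning and toy data are assumptions for illustration:

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(3)
X = rng.normal(size=(120, 3))      # toy data points

def mapper(block):
    # Each "mapper" handles one row block of the similarity matrix and
    # emits the partial degree (row sum) for the rows it owns.
    rows = np.exp(-((X[block][:, None] - X[None]) ** 2).sum(-1))
    return {i: rows[j].sum() for j, i in enumerate(block)}

def reducer(acc, part):
    # The "reducer" merges partial degree tables into one
    acc.update(part)
    return acc

blocks = np.array_split(np.arange(len(X)), 4)          # 4 parallel "mappers"
degrees = reduce(reducer, map(mapper, blocks), {})

# Matches the serial computation of the degree vector
d_serial = np.exp(-((X[:, None] - X[None]) ** 2).sum(1 + 1)).sum(1)
print(np.allclose([degrees[i] for i in range(len(X))], d_serial))
```

In a real Hadoop job the same split applies to the heavier steps (similarity rows, sparse matrix-vector products for the eigensolver), with each mapper seeing only its shard of the data.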

11.
Cluster analysis is an important research topic in data mining, and spectral clustering (SC), being simple to compute and high-performing, has become one of the most popular clustering algorithms. Using four types of geometrically structured data, this paper analyzes and compares three spectral clustering algorithms: normalized cut (NCUT), sparse subspace clustering (SSC), and spectral curvature clustering (SCC). The experimental results show that the three algorithms behave differently on the test data, but for each data type a relatively most effective algorithm can be identified, which helps readers choose among them. NCUT cannot handle intersecting data and is the least widely applicable, but for non-intersecting quadratic curves it achieves higher clustering accuracy than SSC and SCC. SSC is more widely applicable than NCUT and can cluster all four data types, but misassignments often occur during clustering, lowering its accuracy. Compared with the other two, SCC is both widely applicable and accurate, clustering all four data types effectively; in particular, for the cross formed by the "horizontal" and "vertical" point sets in the experimental data, SCC obtains good results and handles the large data volumes that SSC cannot. In addition, for clustering two intersecting spirals with gaps in the data, this paper improves the existing SCC algorithm; the results show that the improved algorithm clusters the data effectively and is practical. Finally, the paper analyzes the shortcomings of the existing SCC algorithm and points out directions for further research.

12.
Clustering is the process of grouping objects that are similar, where similarity between objects is usually measured by a distance metric. The groups formed by a clustering method are referred to as clusters. Clustering is a widely used activity with applications ranging from biology to economics. Each clustering technique has advantages and disadvantages, and some clustering algorithms require input parameters that strongly affect the result. In most cases, it is not possible to choose the best distance metric, the best clustering method, and the best input argument values for an input data set. Therefore, multiple clusterings can be obtained using several distance metrics, several clustering methods, and several input argument values, and these multiple clusterings can be combined into a new, better-quality final clustering. We propose a family of algorithms for combining multiple clusterings that are memory efficient, scalable, robust, and intuitive. Our new algorithms offer tremendous speed gains and low memory requirements by working at the cluster level, while producing very good quality final clusters. Extensive experimental evaluations on some very challenging artificially generated and real data sets from a diverse set of domains establish the usefulness of our methods.
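The idea of combining multiple clusterings can be illustrated with the standard co-association matrix, a common ensemble baseline rather than the authors' cluster-level algorithms:

```python
import numpy as np

# Co-association consensus: entry (i, j) is the fraction of base
# clusterings in which objects i and j share a cluster.
base = np.array([
    [0, 0, 0, 1, 1, 1],   # three base clusterings of six objects,
    [0, 0, 1, 1, 2, 2],   # obtained e.g. with different metrics or
    [1, 1, 1, 0, 0, 0],   # different parameter settings
])
n = base.shape[1]
C = np.zeros((n, n))
for labels in base:
    C += (labels[:, None] == labels[None, :]).astype(float)
C /= len(base)

# Objects 0 and 1 always co-occur; objects 0 and 5 never do
print(C[0, 1], C[0, 5])
```

A final clustering is then obtained by running any clustering method on `C` treated as a similarity matrix; working at the cluster level, as the abstract describes, avoids materializing this n x n matrix.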

13.
For the past few decades the mainstream data clustering technologies have been fundamentally based on centralized operation; data sets were of small, manageable sizes and usually resided on one site belonging to one organization. Today, data is of enormous size and is usually located on distributed sites, the primary example being the Web. This has created a need for performing clustering in distributed environments. Distributed clustering solves two problems: the infeasibility of collecting data at a central site, due to technical and/or privacy limitations, and the intractability of traditional clustering algorithms on huge data sets. In this paper we propose a distributed collaborative clustering approach for clustering Web documents in distributed environments. We adopt a peer-to-peer model, where the main objective is to allow nodes in a network to first form independent opinions of local document clusterings, then collaborate with peers to enhance the local clusterings. Information exchanged between peers is minimized through the use of cluster summaries in the form of keyphrases extracted from the clusters. This summarized view of peer data enables nodes to request merging of remote data selectively to enhance local clusters. Initial clustering, as well as merging peer data with local clusters, uses a clustering method called similarity histogram-based clustering, which keeps a tight similarity distribution within clusters. This approach achieves significant improvement in local clustering solutions without the cost of centralized clustering, while maintaining the initial local clustering structure. Results show that larger networks exhibit larger improvements, up to 15% improvement in clustering quality, albeit with lower absolute clustering quality than smaller networks.

14.
Automatic document summarization aims to create a compressed summary that preserves the main content of the original documents. It is a well-recognized fact that a document set often covers a number of topic themes, with each theme represented by a cluster of highly related sentences. More importantly, topic themes are not equally important: the sentences in an important theme cluster are generally deemed more salient than the sentences in a trivial theme cluster. Existing clustering-based summarization approaches integrate clustering and ranking in sequence, which unavoidably ignores the interaction between them. In this paper, we propose a novel approach based on spectral analysis for simultaneous clustering and ranking of sentences. Experimental results on the DUC generic summarization datasets demonstrate the improvement of the proposed approach over other existing clustering-based approaches.

15.
This paper introduces a novel pairwise-adaptive dissimilarity measure for large, high-dimensional document datasets that improves unsupervised clustering quality and speed compared to the original cosine dissimilarity measure. The measure dynamically selects a number of important features of the compared pair of document vectors, and two approaches for selecting the number of features are discussed. The proposed feature selection process makes this dissimilarity measure especially applicable to large, high-dimensional document collections. Its performance is validated on several test sets originating from standardized datasets. The dissimilarity measure is compared to the well-known cosine dissimilarity measure using the average F-measures of the hierarchical agglomerative clustering result. The new measure yields an improved clustering result at a lower computational cost.
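A minimal sketch of the pair-adaptive idea: select a per-pair feature subset before the cosine comparison. The selection rule used here (largest combined weights of the pair) and the fixed subset size are assumptions for illustration, not the paper's actual criteria:

```python
import numpy as np

def adaptive_dissim(u, v, k=3):
    # Illustrative only: keep the k features with the largest combined
    # weight in this particular pair, then take cosine dissimilarity
    # over just those features.
    top = np.argsort(u + v)[-k:]
    a, b = u[top], v[top]
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return 1.0 - (a @ b) / denom if denom else 1.0

# Two toy term-weight vectors over a 5-term vocabulary
u = np.array([3.0, 0.0, 1.0, 0.0, 2.0])
v = np.array([2.0, 1.0, 0.0, 0.0, 2.0])
full = 1.0 - u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
print(adaptive_dissim(u, v), full)
```

Because only a small shared feature subset enters each comparison, the per-pair cost stays low even when the vocabulary is very large, which is the property the abstract emphasizes.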

16.
In this paper, we shall show the following experimental results: (1) the one-dimensional clustering algorithm advocated by Slagle and Lee(1) can be generalized to the n-dimensional case, n > 1; (2) if a set of points in some n-space (n > 1) is linearly ordered through the short spanning path algorithm, then this set of points can be considered as occupying a one-dimensional space, and the original n-dimensional clustering problem can be viewed as a one-dimensional clustering problem; (3) a short spanning path usually contains as much information as a minimal spanning tree; (4) the one-dimensional clustering algorithm can be used to find the long links in a short spanning path or a minimal spanning tree. These long links have to be broken to obtain clusters.
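Results (2) and (4) can be sketched directly: order the points along a short spanning path, then break the longest links to obtain clusters. The greedy nearest-neighbour path construction and the 1-D toy data here are assumptions standing in for the paper's algorithm:

```python
import numpy as np

# Three well-separated groups on a line
pts = np.array([0.0, 0.2, 0.1, 5.0, 5.3, 5.1, 9.0, 9.2])

# Greedy short spanning path starting from point 0
remaining = list(range(1, len(pts)))
path = [0]
while remaining:
    last = pts[path[-1]]
    nxt = min(remaining, key=lambda i: abs(pts[i] - last))
    path.append(nxt)
    remaining.remove(nxt)

# Link lengths along the path; the long links mark cluster boundaries
gaps = np.abs(np.diff(pts[path]))
cuts = np.argsort(gaps)[-2:]       # break the 2 longest links -> 3 clusters

# Label points in path order by how many cut links precede them
labels = np.zeros(len(path), int)
labels[1:] = np.cumsum(np.isin(np.arange(len(gaps)), cuts))
print(labels.tolist())
```

The path linearizes the data, so the only remaining decision, which links are "long", is exactly the one-dimensional clustering problem of result (2).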

17.
Document clustering using synthetic cluster prototypes   Cited by 3 (0 self-citations, 3 by others)
The use of centroids as prototypes for clustering text documents with the k-means family of methods is not always the best choice for representing text clusters, due to the high dimensionality, sparsity, and low quality of text data. Especially when we seek clusters with a small number of objects, the use of centroids may lead to poor solutions under bad initial conditions. To overcome this problem, we propose the idea of a synthetic cluster prototype, computed by first selecting a subset of cluster objects (instances), then computing the representative of these objects, and finally selecting important features. In this spirit, we introduce the MedoidKNN synthetic prototype, which favors the representation of the dominant class in a cluster. These synthetic cluster prototypes are incorporated into the generic spherical k-means procedure, leading to a robust clustering method called k-synthetic prototypes (k-sp). Comparative experimental evaluation demonstrates the robustness of the approach, especially for small datasets and for clusters overlapping in many dimensions, and its superior performance against traditional and subspace clustering methods.

18.
Despite significant successes achieved in knowledge discovery, traditional machine learning methods may fail to obtain satisfactory performance when dealing with complex data, such as imbalanced, high-dimensional, or noisy data. The reason is that it is difficult for these methods to capture multiple characteristics and the underlying structure of the data. In this context, how to effectively construct an efficient knowledge discovery and mining model has become an important topic in the data mining field. Ensemble learning, one research hotspot, aims to integrate data fusion, data modeling, and data mining into a unified framework. Specifically, ensemble learning first extracts a set of features with a variety of transformations. Based on these learned features, multiple learning algorithms are used to produce weak predictive results. Finally, ensemble learning fuses the informative knowledge from these results, via voting schemes in an adaptive way, to achieve knowledge discovery and better predictive performance. In this paper, we review the research progress of the mainstream approaches to ensemble learning and classify them according to their characteristics. In addition, we present challenges and possible research directions for each mainstream approach, and we also introduce the combination of ensemble learning with other machine learning hotspots such as deep learning and reinforcement learning.
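The voting-based fusion step described above can be shown in its simplest form, plain majority voting over the weak predictors' outputs (the adaptive weighted schemes the survey covers generalize this):

```python
import numpy as np

# Predictions of three weak classifiers on four test instances (0/1 labels)
preds = np.array([
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [1, 1, 1, 0],
])

# Majority vote: an instance is labeled 1 if more than half the
# base learners predict 1
votes = (preds.sum(axis=0) > preds.shape[0] / 2).astype(int)
print(votes)
```

Even this unweighted vote corrects the second classifier's error on instance 2 and the third classifier's error on instance 0, which is the basic mechanism by which ensembles improve on their members.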


Copyright©北京勤云科技发展有限公司    京ICP备09084417号-23

京公网安备 11010802026262号