Similar Documents
20 similar documents found.
1.
Maximum-margin clustering is an extension of the support vector machine (SVM) to clustering. It partitions a set of unlabeled data into multiple groups by finding hyperplanes with the largest margins. Although existing algorithms have shown promising results, the nonconvexity of the optimization problem means there is no guarantee that they converge to global solutions. In this paper, we propose a simulated annealing-based algorithm that mitigates the issue of local minima in the maximum-margin clustering problem. The novelty of our algorithm is twofold: (i) it comprises a comprehensive cluster modification scheme based on simulated annealing, and (ii) it introduces a new approach combining k-means++ and SVM at each step of the annealing process. More precisely, k-means++ is first applied to extract subsets of the data points; an unsupervised SVM is then applied to improve the clustering results. Experimental results on various benchmark data sets (including sets of over a million points) provide evidence that the proposed algorithm solves the clustering problem more effectively than a number of popular clustering algorithms.
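A minimal sketch of the annealing loop described above, with total within-cluster scatter as a stand-in objective (the paper optimizes a margin criterion, and its cluster-modification scheme and unsupervised-SVM step are not reproduced here):

```python
import numpy as np

def anneal_clustering(X, k, T0=1.0, cooling=0.95, steps=500, seed=0):
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, len(X))

    def cost(lab):
        # total within-cluster scatter: a proxy for the (negated) margin objective
        return sum(((X[lab == j] - X[lab == j].mean(0)) ** 2).sum()
                   for j in range(k) if (lab == j).any())

    best, best_cost, T = labels.copy(), cost(labels), T0
    for _ in range(steps):
        cand = labels.copy()
        cand[rng.integers(len(X))] = rng.integers(k)   # perturb one assignment
        dc = cost(cand) - cost(labels)
        if dc < 0 or rng.random() < np.exp(-dc / T):   # Metropolis acceptance
            labels = cand
            if cost(labels) < best_cost:
                best, best_cost = labels.copy(), cost(labels)
        T *= cooling                                    # cool the temperature
    return best
```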

2.
Cluster analysis deals with the problem of organizing a collection of objects into clusters based on a similarity measure, which can be defined using various distance functions. The use of different similarity measures allows one to find different cluster structures in a data set. In this article, an algorithm is developed to solve clustering problems in which the similarity measure is defined using the L1-norm. The algorithm is designed using the nonsmooth optimization approach to the clustering problem: smoothing techniques are applied to both the clustering function and the L1-norm. The algorithm computes clusters sequentially and finds global or near-global solutions to the clustering problem. Results of numerical experiments on 12 real-world data sets are reported, and the proposed algorithm is compared with two other clustering algorithms.
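One standard way to smooth the L1-norm is hyperbolic smoothing, replacing |t| with sqrt(t² + ε²), which is differentiable everywhere and converges to |t| as ε → 0. A sketch under that assumption (the paper's exact smoothing functions may differ):

```python
import numpy as np

def smooth_l1_distance(x, c, eps=1e-3):
    """Smoothed L1 distance between a point x and a center c."""
    return np.sqrt((x - c) ** 2 + eps ** 2).sum()

def clustering_objective(X, centers, eps=1e-3):
    """Sum over all points of the smoothed-L1 distance to the nearest center."""
    D = np.sqrt((X[:, None, :] - centers[None, :, :]) ** 2 + eps ** 2).sum(-1)
    return D.min(axis=1).sum()
```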

3.
K-means clustering can be highly accurate when the number of clusters and the initial cluster centres are appropriate; an inappropriate choice of either decreases its accuracy. However, determining these values is problematic. To solve these problems, we used density-based spatial clustering of applications with noise (DBSCAN), because it does not require a predetermined number of clusters; however, it has some significant drawbacks: applying DBSCAN to high-dimensional data, or to data whose clusters have different densities, decreases its accuracy to some degree. Therefore, the objective of this research is to improve the efficiency of DBSCAN through a selection of region clusters based on density, so as to automatically find the appropriate number of clusters and the initial cluster centres for K-means clustering. In the proposed method, DBSCAN is used to perform clustering and to select the appropriate clusters by considering the density of each cluster; the appropriate region data are then chosen from the selected clusters. The experimental results yield the appropriate number of clusters and the appropriate initial cluster centres for K-means clustering. In addition, the results of the selection of region clusters based on density DBSCAN are more accurate than those obtained by traditional methods, including DBSCAN and K-means, and by related methods such as partitioning-based DBSCAN (PDBSCAN) and PDBSCAN with the ant clustering algorithm (PACA-DBSCAN).
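A sketch of the core idea as the abstract describes it (the density-based cluster-selection rule here is a simplification): DBSCAN supplies both the number of clusters and the initial centres for K-means.

```python
import numpy as np
from sklearn.cluster import DBSCAN, KMeans

def dbscan_seeded_kmeans(X, eps=0.5, min_samples=5, min_size=10):
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
    # keep sufficiently dense clusters; label -1 marks DBSCAN noise
    ids = [j for j in set(labels) if j != -1 and (labels == j).sum() >= min_size]
    if not ids:
        raise ValueError("DBSCAN found no sufficiently dense clusters")
    centers = np.array([X[labels == j].mean(axis=0) for j in ids])
    # K and the initial centres now come from DBSCAN, not from the user
    return KMeans(n_clusters=len(ids), init=centers, n_init=1).fit(X)
```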

4.
Reconstruction-based one-class classification has been shown to be very effective in a number of domains. This approach works by attempting to capture the underlying structure of the normal class, typically by means of clusters of objects. Its main disadvantage, however, is that the number of clusters must be indicated in advance, since fixing it yields an efficient way of computing a clustering. In this paper, we introduce a new algorithm, OCKRA++, which achieves better performance by enhancing a clustering-based one-class ensemble classifier (OCKRA) with a cluster validity index that is used to set the best number of clusters during the classifier's training process. We have thoroughly tested OCKRA++ in a particular domain, namely masquerade detection. For this purpose, we have used the Windows-Users and -Intruder simulation Logs data set repository, which contains 70 different masquerade data sets. We have found that OCKRA++ is currently the algorithm that achieves the best area under the curve, by a significant margin, in masquerade detection using the file system navigation approach.
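The key ingredient is choosing the number of clusters with a validity index during training. A sketch using the silhouette index (an assumption; the index OCKRA++ actually uses may differ):

```python
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def best_k(X, k_range=range(2, 11)):
    """Pick the cluster count that maximizes a cluster-validity index."""
    scores = {}
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(X)
        scores[k] = silhouette_score(X, labels)  # higher is better
    return max(scores, key=scores.get)
```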

5.
Applying K-means clustering in recommender systems is an effective way to reduce dimensionality, but the clustering quality often depends on the chosen initial centres, and once a target cluster is selected, the recommendation process runs only within that cluster, ignoring all others. To address these two problems, a parallel recommendation algorithm based on bisecting K-means clustering over a full binary tree is proposed. The algorithm first applies bisecting K-means iteratively, using intra-cluster cohesion as the splitting threshold, to build a full binary tree; users are then assigned to the K leaf nodes (clusters) by a level-order traversal; finally, recommendation predictions for the K clusters are computed in parallel under the MapReduce framework. Experimental results on MovieLens show that the algorithm substantially improves recommendation accuracy while enhancing system scalability.
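A sketch of the splitting stage (the cohesion measure here, mean squared scatter within a cluster, is an assumption, and the MapReduce prediction stage is not reproduced): 2-means is applied recursively until a cluster's cohesion passes the threshold, and the leaves of the resulting tree are the K clusters.

```python
import numpy as np
from sklearn.cluster import KMeans

def bisect(X, idx, cohesion_threshold, leaves):
    pts = X[idx]
    cohesion = ((pts - pts.mean(0)) ** 2).sum() / len(pts)  # within-cluster scatter
    if cohesion <= cohesion_threshold or len(idx) < 2:
        leaves.append(idx)                # this node becomes a leaf cluster
        return
    split = KMeans(n_clusters=2, n_init=10).fit_predict(pts)
    bisect(X, idx[split == 0], cohesion_threshold, leaves)
    bisect(X, idx[split == 1], cohesion_threshold, leaves)

# usage: leaves = []; bisect(X, np.arange(len(X)), 0.5, leaves)
# -> leaves holds the index sets of the K leaf clusters
```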

6.
K-means-type clustering algorithms for mixed data, consisting of numeric and categorical attributes, suffer from the cluster center initialization problem: the final clustering results depend on the initial cluster centers. Random cluster center initialization is a popular technique, but it yields inconsistent clustering results across different initializations. The K-Harmonic means clustering algorithm overcomes this problem for purely numeric data. In this paper, we extend the K-Harmonic means clustering algorithm to mixed data sets. We propose a definition of a cluster center and a distance measure, which are used with the cost function of the K-Harmonic means clustering algorithm in the proposed algorithm. Experiments were carried out with purely categorical data sets and with mixed data sets. The results suggest that the proposed clustering algorithm is quite insensitive to the cluster center initialization problem, and comparative studies with other clustering algorithms show that it produces better clustering results.
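A sketch under two assumptions: the mixed distance is taken as the k-prototypes-style combination of squared Euclidean distance and weighted categorical mismatch (the paper defines its own measure), and the K-Harmonic means cost is the standard harmonic-average objective.

```python
import numpy as np

def mixed_distance(x_num, x_cat, c_num, c_cat, gamma=1.0):
    """Squared Euclidean on numeric attributes + weighted categorical mismatch."""
    return ((x_num - c_num) ** 2).sum() + gamma * (x_cat != c_cat).sum()

def khm_cost(D, p=2):
    """K-Harmonic means cost from a (points x centers) distance matrix D:
    sum over points of k / sum_j (1 / d_ij^p), a harmonic average of distances."""
    k = D.shape[1]
    return (k / (1.0 / (D ** p + 1e-12)).sum(axis=1)).sum()
```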

7.
Unsupervised feature selection is an important problem, especially for high-dimensional data. Until now, however, it has scarcely been studied, and the existing algorithms cannot provide satisfactory performance. Thus, in this paper, we propose a new unsupervised feature selection algorithm using similarity-based feature clustering, Feature Selection-based Feature Clustering (FSFC). FSFC removes redundant features according to the results of feature clustering based on feature similarity. First, it clusters the features according to their similarity, using a new feature clustering algorithm that overcomes the shortcomings of K-means. Second, it selects a representative feature from each cluster, one that contains most of the interesting information of the features in that cluster. The efficiency and effectiveness of FSFC are tested on real-world data sets and compared with two representative unsupervised feature selection algorithms, Feature Selection Using Similarity (FSUS) and Multi-Cluster-based Feature Selection (MCFS), in terms of runtime, feature compression ratio, and the clustering results of K-means. The results show that FSFC not only reduces the feature space in less time, but also significantly improves the clustering performance of K-means.
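A sketch of the two stages under assumed choices (absolute Pearson correlation as the feature similarity, and average-linkage clustering standing in for FSFC's own feature-clustering algorithm):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def select_features(X, n_groups):
    corr = np.abs(np.corrcoef(X, rowvar=False))           # feature-feature similarity
    dist = 1.0 - corr[np.triu_indices_from(corr, k=1)]    # condensed distance vector
    groups = fcluster(linkage(dist, method="average"),
                      n_groups, criterion="maxclust")     # cluster the features
    keep = []
    for g in np.unique(groups):
        members = np.where(groups == g)[0]
        # representative = feature most similar on average to its own cluster
        keep.append(members[corr[np.ix_(members, members)].mean(1).argmax()])
    return sorted(keep)
```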

8.
To reduce the vagueness and subjectivity of customer demand in the process of product–service system design, a fuzzy semantic calculation method is proposed to obtain the importance of service demands. In addition, to meet the need for clustering service modules, a new clustering method for discrete data is proposed, based on an improved K-means algorithm that builds on the Kruskal algorithm. According to the criteria for service module division and their weights, the correlation coefficient between any two service activities is judged to form a comprehensive correlation coefficient matrix, from which a comprehensive dissimilarity matrix is obtained by an additive model. The method then computes the minimum-cost spanning tree (MCST) using the Kruskal algorithm; clusters of service activities with different centres are found from the MCST, and the edge values are calculated by the improved K-means algorithm. As a case study, 28 excavator service activities are divided into K (K = 4, 5, 6, and 7) clusters by the improved K-means algorithm. Finally, a service element configuration model is established based on the demand weights and optimized for maximum customer satisfaction under competition.
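A sketch of the MST step (scipy's minimum_spanning_tree stands in for a hand-rolled Kruskal, and cutting the K−1 heaviest edges is one standard way to read K clusters off the tree; the paper's edge-value calculation is not reproduced):

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_clusters(X, k):
    # build the minimum-cost spanning tree over the pairwise dissimilarities
    mst = minimum_spanning_tree(squareform(pdist(X))).tocoo()
    keep = np.ones(len(mst.data), dtype=bool)
    if k > 1:
        keep[np.argsort(mst.data)[-(k - 1):]] = False    # cut k-1 heaviest edges
    pruned = coo_matrix((mst.data[keep], (mst.row[keep], mst.col[keep])),
                        shape=mst.shape)
    return connected_components(pruned, directed=False)[1]  # cluster labels
```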

9.
This paper presents a cluster-based ensemble classifier, an approach to generating an ensemble of classifiers using multiple clusters within the classified data. Clustering is incorporated to partition the data set into multiple clusters of highly correlated data that are otherwise difficult to separate, and different base classifiers are used to learn the class boundaries within each cluster. Because each base classifier engages a different difficult-to-classify subset of the data, the learning of the base classifiers is more focused and accurate. A selection approach, rather than fusion, produces the final verdict on patterns of unknown class. The impact of clustering on the learning parameters and accuracy of a number of learning algorithms, including neural networks, support vector machines, decision trees, and the k-NN classifier, is investigated. A number of benchmark data sets from the UCI machine learning repository were used to evaluate the cluster-based ensemble classifier, and the experimental results demonstrate its superiority over bagging and boosting.
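A minimal sketch of the cluster-then-specialise scheme (k-means, decision-tree base learners, and nearest-cluster selection are all assumptions; the paper evaluates several base learners):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

class ClusterEnsemble:
    def fit(self, X, y, n_clusters=5):
        self.km = KMeans(n_clusters=n_clusters, n_init=10).fit(X)
        # one specialised base classifier per cluster of the training data
        self.models = {j: DecisionTreeClassifier().fit(X[self.km.labels_ == j],
                                                       y[self.km.labels_ == j])
                       for j in range(n_clusters)}
        return self

    def predict(self, X):
        # selection, not fusion: the nearest cluster's model decides alone
        clusters = self.km.predict(X)
        return np.array([self.models[c].predict(x[None])[0]
                         for c, x in zip(clusters, X)])
```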

10.
In this paper, a novel clustering method in the kernel space is proposed. It effectively integrates several existing algorithms into an iterative clustering scheme that can handle clusters with arbitrary shapes. In our proposed approach, a reasonable initial core for each cluster is estimated. This allows us to adopt a cluster-growing technique, with the growing cores offering partial hints on the cluster association; consequently, methods used for classification, such as support vector machines (SVMs), become useful in our approach. To obtain initial clusters effectively, the incomplete Cholesky decomposition is adopted so that fuzzy c-means (FCM) can partition the data in a kernel-induced space. A one-class and a multiclass soft-margin SVM are then adopted to detect the data within the main distributions (the cores) of the clusters and to repartition the data into new clusters iteratively. The structure of the data set is explored by pruning the data in the low-density regions of the clusters; data are then gradually added back to the main distributions to ensure exact cluster boundaries. Unlike the ordinary SVM algorithm, whose performance relies heavily on kernel parameters given by the user, in our approach the parameters are estimated naturally from the data set. The experimental evaluations on two synthetic data sets and four University of California Irvine real data benchmarks indicate that the proposed algorithms outperform several popular clustering algorithms, such as FCM, support vector clustering (SVC), hierarchical clustering (HC), self-organizing maps (SOM), and non-Euclidean-norm fuzzy c-means (NEFCM). © 2009 Wiley Periodicals, Inc.

11.
A K-means clustering algorithm with optimized initial cluster centres
To address the traditional K-means algorithm's strong sensitivity to initial centres and the resulting instability of its clustering results, an improved K-means clustering algorithm is proposed. The algorithm first computes the pairwise distances between samples and forms a set from the two closest points; using a point-to-set distance formula, the point closest to the set is repeatedly added until the set contains at least [α] points (where [α] is the ratio of the number of data points to the number of clusters); the set is then removed from the sample set, and the procedure is repeated to obtain K sets (K being the number of clusters). The mean of each set serves as an initial centre, and the final clustering is obtained with K-means. On the Wine, Hayes-Roth, Iris, Tae, Heart-statlog, Ionosphere, and Haberman data sets, the improved algorithm produces more stable clustering results than the traditional K-means and K-means++ algorithms; on Wine, Iris, and Tae it is more accurate than the K-means variant that optimizes initial centres by minimum variance; and across all seven data sets it achieves the highest silhouette coefficient and F1 values. For data sets with large density differences, its clustering results are more stable and accurate than those of traditional K-means and K-means++, and it is more efficient than the minimum-variance initialization method.
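A sketch of the described seeding rule (the point-to-set distance is taken as the minimum distance to any member of the set, an assumption): grow a set from the two closest remaining points until it holds at least α = n/K points, take its mean as one initial centre, remove it, and repeat K times.

```python
import numpy as np
from sklearn.cluster import KMeans

def seeded_kmeans(X, k):
    remaining = list(range(len(X)))
    alpha, centers = max(2, len(X) // k), []
    for _ in range(k):
        P = X[remaining]
        D = np.linalg.norm(P[:, None] - P[None], axis=-1)
        np.fill_diagonal(D, np.inf)
        i, j = np.unravel_index(D.argmin(), D.shape)      # two closest points
        group = {remaining[i], remaining[j]}
        while len(group) < alpha and len(group) < len(remaining):
            rest = [p for p in remaining if p not in group]
            d = np.linalg.norm(X[rest][:, None] - X[list(group)][None], axis=-1)
            group.add(rest[int(d.min(axis=1).argmin())])  # closest point to set
        centers.append(X[list(group)].mean(axis=0))       # set mean = initial centre
        remaining = [p for p in remaining if p not in group]
    return KMeans(n_clusters=k, init=np.array(centers), n_init=1).fit(X)
```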

12.
Local Principal Component Analysis (LPCA) is one of the popular techniques for dimensionality reduction and data compression of the large data sets encountered in computer graphics. The LPCA algorithm is a variant of k-means clustering in which the repeated classification of high-dimensional data points to their nearest cluster leads to long execution times. The focus of this paper is on improving the efficiency and accuracy of LPCA. We propose a novel SortCluster LPCA algorithm that significantly reduces the cost of the point-cluster classification stage, achieving a speed-up factor of up to 20. To improve the approximation accuracy, we investigate different initialization schemes for LPCA and find that the k-means++ algorithm [AV07] yields the best results, albeit at a high computational cost. We show that ideas similar to those behind the efficiency of our SortCluster LPCA algorithm can be used to accelerate k-means++. The resulting initialization algorithm is faster than purely random seeding while producing substantially more accurate data approximations.

13.
Text summarization is the process of extracting salient information from a source text and presenting it to the user in condensed form while preserving the main content. The most difficult problems in text summarization are providing wide topic coverage and diversity within a summary. Research based on clustering, optimization, and evolutionary algorithms for text summarization has recently shown good results, making this a promising area. In this paper, a two-stage sentence selection model for text summarization based on clustering and optimization techniques, called COSUM, is proposed. In the first stage, to discover all topics in a text, the sentence set is clustered using the k-means method. In the second stage, an optimization model is proposed for selecting salient sentences from the clusters. This model optimizes an objective function expressed as the harmonic mean of two objective functions enforcing the coverage and the diversity of the sentences selected for the summary; to ensure readability, the model also controls the length of the sentences selected for the candidate summary. To solve the optimization problem, an adaptive differential evolution algorithm with a novel mutation strategy is developed. COSUM was compared with 14 state-of-the-art methods (DPSO-EDASum; LexRank; CollabSum; UnifiedRank; 0–1 non-linear; query, cluster, summarize; support vector machine; fuzzy evolutionary optimization model; conditional random fields; MA-SingleDocSum; NetSum; manifold ranking; ESDS-GHS-GLO; and differential evolution) using the ROUGE toolkit on the DUC2001 and DUC2002 data sets. Experimental results demonstrate that COSUM outperforms the state-of-the-art methods in terms of the ROUGE-1 and ROUGE-2 measures.
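A greedy sketch of the two stages (COSUM itself solves the selection step with adaptive differential evolution; greedy search, TF-IDF sentence vectors, and one sentence per cluster are simplifying assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def summarize(sentences, k=3):
    S = TfidfVectorizer().fit_transform(sentences)            # sentence vectors
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(S)   # stage 1: topics
    doc = np.asarray(S.mean(axis=0))                          # document centroid
    chosen = []
    for j in range(k):                                        # stage 2: selection
        idx = np.where(labels == j)[0]
        coverage = cosine_similarity(S[idx], doc).ravel()
        if chosen:  # diversity = dissimilarity to sentences already selected
            diversity = 1 - cosine_similarity(S[idx], S[chosen]).max(axis=1)
        else:
            diversity = np.ones(len(idx))
        f = 2 * coverage * diversity / (coverage + diversity + 1e-12)  # harmonic mean
        chosen.append(idx[f.argmax()])
    return [sentences[i] for i in sorted(chosen)]
```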

14.
To address the difficulty of solving the transductive support vector machine (TSVM) learning model, a transductive SVM learning algorithm based on k-means clustering, TSVMKMC, is proposed. The algorithm uses k-means to partition the unlabelled samples into several clusters, assigns the same class label to every sample within a cluster, and then merges the unlabelled and labelled samples for transductive learning. Because TSVMKMC effectively reduces the size of the state space, it runs much faster than the traditional algorithm. Experimental results show that TSVMKMC reaches comparatively high classification accuracy at high speed.
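A sketch of the scheme as described (the rule for assigning a cluster its label is an assumption; here each cluster inherits the label of the labelled point nearest its centre):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def tsvm_kmc(X_lab, y_lab, X_unlab, n_clusters=10):
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(X_unlab)
    pseudo = np.empty(len(X_unlab), dtype=y_lab.dtype)
    for j in range(n_clusters):
        c = km.cluster_centers_[j]
        # the whole cluster inherits the label of the closest labelled point
        nearest = np.linalg.norm(X_lab - c, axis=1).argmin()
        pseudo[km.labels_ == j] = y_lab[nearest]
    # merge labelled and pseudo-labelled data and train a plain SVM
    X = np.vstack([X_lab, X_unlab])
    y = np.concatenate([y_lab, pseudo])
    return SVC().fit(X, y)
```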

15.
Data clustering is a key task in various processes, including sequence analysis and pattern recognition. This paper studies a clustering algorithm aimed at increasing accuracy and sensitivity when working with biological data such as DNA sequences. The new algorithm is a modified version of fuzzy C-means (FCM) based on the well-known self-organizing map (SOM). To show the performance of the algorithm, seven different data sets are processed. The experimental results demonstrate that the proposed algorithm has the potential to outperform SOM and FCM in clustering and classification accuracy. Additionally, a brief comparison is made between the proposed algorithm and some previously studied 'FCM-SOM' hybrid algorithms from the literature.

16.

In the current paper, we develop two bio-inspired fuzzy clustering algorithms by incorporating the optimization techniques of differential evolution and particle swarm optimization. Both clustering techniques can detect symmetrically shaped clusters using the established point-symmetry-based distance measure. Both proposed approaches are automatic in nature and can detect the number of clusters in a given data set automatically. A symmetry-based cluster validity measure, the F-Sym-index, is used as the objective function to be optimized, so that both approaches automatically determine the correct partitioning. The effectiveness of the proposed approaches is shown by automatically clustering some artificial and real-life data sets as well as some real-life gene expression data sets. The paper presents a comparative analysis of several meta-heuristic clustering approaches: the two newly proposed techniques and the existing automatic genetic clustering techniques VGAPS, GCUK, and HNGA. The obtained results are compared with respect to some external cluster validity indices; statistical significance tests, as well as biological significance tests, are also conducted. Finally, results on the gene expression data sets are visualized using the Eisen plot and the cluster profile plot.
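A sketch of the point-symmetry-based distance these methods build on (following the common formulation in which the reflection of a point about a candidate centre is compared with its nearest actual neighbours; details such as the neighbour count are assumptions):

```python
import numpy as np

def ps_distance(x, c, X, n_near=2):
    """Point-symmetry distance of point x with respect to centre c over data X."""
    reflected = 2 * c - x                          # mirror image of x about c
    d = np.sort(np.linalg.norm(X - reflected, axis=1))
    d_sym = d[:n_near].mean()                      # how 'symmetric' x is within X
    return d_sym * np.linalg.norm(x - c)           # scaled by Euclidean distance
```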


17.
Model-based co-clustering divides the data along two main axes and simultaneously trains a supervised model for each co-cluster using all other input features. For example, in the rating prediction task of a recommender system, the two main axes are items and users; in each co-cluster, a regression model predicts the rating from other features such as user characteristics (e.g., gender), item characteristics (e.g., genre), contextual features (e.g., location), and so on. In reality, users and items do not necessarily belong to a single co-cluster but can be associated with several co-clusters. We extend model-based co-clustering to support fuzzy co-clustering: each item–user pair is associated with every co-cluster through a membership grade, which indicates the relevance of the pair to that co-cluster. Furthermore, we propose a distributed algorithm, based on a map-reduce approach, to handle big data sets. Evaluating the fuzzy co-clustering algorithm on three data sets shows a significant improvement compared with a regular co-clustering algorithm; in addition, the map-reduce version significantly reduces the runtime.

18.
To address the poor clustering quality of the k-means algorithm on massive high-dimensional data under the Hadoop platform, and the fact that existing improved algorithms do not parallelize well, a Hash-based parallel scheme is proposed. The massive high-dimensional data are mapped into a compressed identifier space, in which their clustering relationships are mined to select the initial cluster centres; this avoids the traditional k-means algorithm's sensitivity to randomly chosen initial centres and reduces the number of k-means iterations. The whole algorithm is then parallelized under the MapReduce framework, with mechanisms such as Partition and Combine used to strengthen the degree of parallelism and the execution efficiency. Experiments show that the algorithm not only improves clustering accuracy and stability but also offers good processing speed.
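A sketch of the hashing idea (random-hyperplane signatures are an assumed instantiation of the compressed identifier space, and the MapReduce wiring is not reproduced): points are hashed to short binary codes, and the densest buckets seed k-means.

```python
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

def hash_seeded_kmeans(X, k, n_bits=12, seed=0):
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((X.shape[1], n_bits))
    codes = [tuple(row) for row in (X @ planes > 0)]     # compressed signatures
    top = [c for c, _ in Counter(codes).most_common(k)]  # k densest buckets
    # assumes at least k distinct buckets exist; each bucket mean is a seed
    centers = np.array([X[[i for i, c in enumerate(codes) if c == b]].mean(0)
                        for b in top])
    return KMeans(n_clusters=k, init=centers, n_init=1).fit(X)
```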

19.
Kernel approaches can improve the performance of conventional clustering or classification algorithms on data with complex distributions. This is achieved through a kernel function, defined as the inner product of the images of two points under a transformation function, which allows algorithms to operate in a higher-dimensional space (i.e., with more degrees of freedom for the data to be meaningfully partitioned) without ever computing the transformation. As a result, the fuzzy kernel C-means (FKCM) algorithm, which measures the distance between patterns and cluster prototypes through a kernel function, can obtain more desirable clustering results than fuzzy C-means (FCM) for nonspherical as well as spherical data. However, like FCM, it can still be sensitive to noise. In this paper, to remedy this drawback of FKCM, we propose a kernel possibilistic C-means (KPCM) algorithm that applies the kernel approach to the possibilistic C-means (PCM) algorithm. The method includes an update of the Gaussian kernel variance at each clustering iteration. Several experimental results show that the proposed algorithm outperforms other algorithms on general data with additive noise. © 2009 Wiley Periodicals, Inc.
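The kernel trick these algorithms rely on: the squared feature-space distance expands into kernel evaluations alone, so the transformation φ is never computed. A minimal sketch with a Gaussian (RBF) kernel:

```python
import numpy as np

def rbf(x, y, sigma=1.0):
    """Gaussian (RBF) kernel K(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def kernel_sq_distance(x, v, kernel=rbf):
    # ||phi(x) - phi(v)||^2 = K(x,x) - 2 K(x,v) + K(v,v); no phi needed
    return kernel(x, x) - 2 * kernel(x, v) + kernel(v, v)
```

KPCM, as described, plugs a distance of this form into the possibilistic C-means updates and re-estimates sigma at each iteration.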

20.
Analyzing existing clustering algorithms from multiple perspectives
Qian Weining, Zhou Aoying. Journal of Software, 2002, 13(8): 1382-1394
Clustering is one of the important problems studied in data mining. Cluster analysis partitions a data set into clusters so that data within a cluster are as similar as possible while data in different clusters are as dissimilar as possible; different clustering methods adopt different similarity measures and techniques. This paper analyses popular existing clustering algorithms from three perspectives: (1) clustering scale; (2) algorithm framework; and (3) cluster representation. On this basis, several algorithms that integrate or generalize other methods are also analysed. Because the analysis proceeds from these three perspectives, it can cover, and distinguish among, the vast majority of existing clustering algorithms. This work lays the foundation for research on self-tuning clustering methods and on clustering benchmarks.
