Similar documents
1.
Most clustering algorithms operate by optimizing (either implicitly or explicitly) a single measure of cluster solution quality. Such methods may perform well on some data sets but lack robustness with respect to variations in cluster shape, proximity, evenness and so forth. In this paper, we propose a multiobjective clustering technique which simultaneously optimizes two objectives, one reflecting the total cluster symmetry and the other reflecting the stability of the obtained partitions over different bootstrap samples of the data set. The proposed algorithm uses a recently developed simulated annealing-based multiobjective optimization technique, AMOSA, as the underlying optimization strategy. Points are assigned to clusters based on a newly defined point symmetry-based distance rather than the Euclidean distance. Results on several artificial and real-life data sets, in comparison with another multiobjective clustering technique (MOCK), three single-objective genetic algorithm-based automatic clustering techniques (VGAPS, GCUK and HNGA clustering), and several hybrid methods for determining the appropriate number of clusters, show that the proposed technique is well suited to automatically detect both the appropriate number of clusters and the appropriate partitioning of data sets having point-symmetric clusters. The performance of AMOSA as the underlying optimizer is also compared with that of PESA-II, another evolutionary multiobjective optimization technique.
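The point symmetry-based distance mentioned in the abstract above can be sketched as follows. This is a minimal illustration of the general idea (reflect a point through the candidate center and measure how well the reflection is supported by nearby data); the parameter `knear` and the exact scaling are illustrative choices, not taken from the paper.

```python
import numpy as np

def point_symmetry_distance(x, center, points, knear=2):
    """Sketch of a point symmetry-based distance: reflect x through the
    center, average the distances from the reflection to its knear
    nearest data points, and scale by the Euclidean distance to the
    center. A point with a well-supported mirror image scores low."""
    reflected = 2.0 * center - x
    d = np.linalg.norm(points - reflected, axis=1)
    d_sym = np.sort(d)[:knear].mean()
    return d_sym * np.linalg.norm(x - center)
```

A point whose mirror image through the center coincides with actual data receives a small distance, which is what allows such a clustering technique to detect point-symmetric clusters that a plain Euclidean assignment would miss.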

2.
A fast fuzzy C-means clustering method for color image segmentation   Total citations: 33 (self-citations: 3, citations by others: 33)
Fuzzy C-means (FCM) clustering is simple, intuitive and easy to implement for color image segmentation, but its performance depends on the initialization of the cluster centers and its computational cost is high. To address these problems, a fast fuzzy clustering method (FFCM) is proposed. The method uses hierarchical subtractive clustering to divide the image data into a number of subsets of similar color. The subset centers serve two purposes: they initialize the cluster centers, and together with their distribution densities they drive the fuzzy clustering. Because the number of samples to cluster is greatly reduced and hierarchical subtractive clustering is itself computationally cheap, the speed of the fuzzy C-means algorithm improves substantially, which in turn makes it fast to determine the number of clusters with cluster validity indices. Experiments show that the method does not require the number of clusters to be specified in advance and, without degrading clustering quality, markedly accelerates fuzzy clustering, achieving fast segmentation of color images.
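For reference, the standard FCM membership update that such accelerated variants build on looks like the sketch below. This is the textbook update in plain NumPy, not the accelerated FFCM itself; the fuzzifier `m = 2` is the usual default.

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0, eps=1e-10):
    """One FCM membership update: u[k, i] is the degree to which point k
    belongs to center i, u_ki = 1 / sum_j (d_ki / d_kj)^(2/(m-1))."""
    # d[k, i] = distance from point k to center i (eps avoids divide-by-zero)
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)
```

Each row of the returned matrix sums to 1, and a point lying on a center gets membership near 1 for that center; the cost of computing this matrix for every pixel is exactly the bottleneck that clustering subset centers instead of raw pixels avoids.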

3.
马福民  孙静勇  张腾飞 《控制与决策》2022,37(11):2968-2976
Given an existing clustering result, how to measure the membership of newly added data is key to the quality of incremental clustering; existing incremental clustering algorithms mostly consider only the spatial distribution of the new data and ignore the membership information of neighboring data points. Building on the rough K-means clustering algorithm, and targeting the handling of uncertainty for new data points in boundary regions, an incremental rough K-means clustering algorithm based on neighborhood membership information is proposed. The algorithm considers both the spatial distribution of a new boundary-region sample and the cluster membership of its neighboring data points, making the membership measure between the new point and each cluster more reasonable. In addition, during incremental clustering, clusters are merged or split according to the structural changes caused by the new data points, so the cluster partition adapts automatically. Comparative experiments on artificial data sets and UCI benchmark data sets verify the effectiveness of the algorithm.

4.
P.A.  M.  D.K.   《Pattern recognition》2006,39(12):2344-2355
Hybrid hierarchical clustering techniques, which combine the characteristics of different partitional clustering techniques or of partitional and hierarchical clustering techniques, are an interesting direction. In this paper, efficient bottom-up hybrid hierarchical clustering (BHHC) techniques are proposed for prototype selection in protein sequence classification. In the first stage, an incremental partitional clustering technique, the leader algorithm (ordered leader no update, OLNU), which requires only one database scan, is used to find a set of subcluster representatives. In the second stage, either a hierarchical agglomerative clustering (HAC) scheme or a partitional clustering algorithm, K-medians, is run on these subcluster representatives to obtain the required number of clusters. The hybrid scheme is therefore scalable and suitable for clustering large data sets, and it also yields a hierarchical structure of clusters and subclusters whose representatives are used for pattern classification. Even if a larger number of prototypes is generated, classification time does not increase much, since only part of the hierarchical structure is searched. The experimental results of the proposed algorithms (classification accuracy (CA) using the obtained prototypes, and computation time) are compared with those of hierarchical agglomerative schemes, K-medians and the nearest neighbour classifier (NNC). The proposed methods are found to be computationally efficient with reasonably good CA.
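The first-stage leader algorithm needs only a single scan of the data: each point joins the first leader within a distance threshold, otherwise it becomes a new leader. A minimal sketch (the threshold value and the choice to return only the leaders are illustrative):

```python
import numpy as np

def leader_clustering(X, threshold):
    """One-scan leader algorithm: returns the leaders, i.e. the
    sub-cluster representatives that a second-stage HAC or K-medians
    step would then cluster."""
    leaders = [X[0]]
    for x in X[1:]:
        d = np.linalg.norm(np.array(leaders) - x, axis=1)
        if d.min() > threshold:
            leaders.append(x)
    return np.array(leaders)
```

Because each point is compared only against the current leaders, the pass is linear in the data size for a fixed number of leaders, which is what makes the overall hybrid scheme scalable.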

5.
Clustering is one of the important data mining tasks. Nested clusters and clusters of multiple densities are very prevalent in data sets. In this paper, we develop a hierarchical clustering approach, a cluster tree, to determine such cluster structure and uncover hidden information in data sets containing nested or multi-density clusters. We embed the agglomerative k-means algorithm in the generation of the cluster tree to detect such clusters. Experimental results on both synthetic and real data sets illustrate the effectiveness of the proposed method: compared with several existing clustering algorithms (DBSCAN, X-means, BIRCH, CURE, NBC, OPTICS, Neural Gas, Tree-SOM, EnDBSAN and LDBSCAN), the cluster tree approach performs better.

6.
Existing hierarchical clustering algorithms have difficulty handling incomplete data sets; taking into account, in addition, the uncertain relation between samples and clusters, a set-pair granular hierarchical clustering algorithm for incomplete data, SPGCURE, is proposed. First, missing values are handled with set-pair information granules: instead of deleting or imputing missing attributes as in previous algorithms, missing attribute values are represented by the discrepancy degree of the set-pair connection degree, and an improved set-pair information distance measure is proposed to quantify the closeness between incomplete data samples. Second, based on the improved set-pair distance measure, the intra-cluster average distance of each cluster is defined, yielding a set-pair granular hierarchical clustering expressed in terms of a positive region Cs (samples that certainly belong to a cluster), a boundary region Cu (samples that may belong to a cluster) and a negative region Co (samples that do not belong to a cluster). SPGCURE is applicable to both complete and incomplete data. Finally, experimental evaluation on five classical UCI data sets against commonly used classical and improved clustering algorithms shows that SPGCURE achieves good clustering performance on accuracy, F-measure, adjusted Rand index and normalized mutual information.

7.
Compared with k-means, fuzzy C-means (FCM) introduces fuzzy memberships that account for the interaction between clusters and thereby avoids the problem of cluster centers collapsing together. However, the tail structure of the fuzzy memberships (long and upturned tails) makes FCM very sensitive to noise points and outliers; moreover, because FCM tends to divide the data into clusters of equal size, it is also sensitive to cluster size and performs poorly on imbalanced clusters. To address these problems, this paper proposes a reliability-based robust fuzzy clustering algorithm (RRFCM). Based on the current clustering result, the algorithm analyzes the reliability of each sample point and uses sample reliability together with local neighborhood information to emphasize the separability of clusters, improving robustness to noise, reducing sensitivity to imbalanced cluster sizes, and producing clustering results with better generalization. Compared with related algorithms, RRFCM achieves the best results on synthetic data sets, real UCI data sets, and image segmentation experiments.

8.
Density Peaks Clustering (DPC) requires the cutoff distance dc to be specified manually, and its simplistic local-density definition and one-step assignment strategy lead to poor performance on complex data sets. To address these problems, a density peaks clustering algorithm based on natural nearest neighbors (NNN-DPC) is proposed. The algorithm requires no parameters; it is a non-parametric clustering method. Based on the definition of natural nearest neighbors, it first introduces a new local-density computation that describes the data distribution and reveals its intrinsic structure; it then designs a two-step assignment strategy for allocating the sample points. Finally, it defines an inter-cluster similarity and proposes a new cluster-merging rule to produce the final clustering result. Experimental results show that, without any parameters, NNN-DPC generalizes well across data sets and, on manifold data or data with large density differences between clusters, identifies the number of clusters and assigns sample points more accurately. Its performance indices compare favorably with DPC, FKNN-DPC (Fuzzy Weighted K-nearest Density Peak Clustering) and three other classical clustering algorithms.
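For context, baseline DPC (the algorithm NNN-DPC modifies) ranks points by two quantities: the local density rho and delta, the distance to the nearest point of higher density; cluster centers are points where both are large. A small sketch using a Gaussian-kernel density, where the cutoff `dc` is exactly the manual parameter NNN-DPC sets out to eliminate:

```python
import numpy as np

def dpc_rho_delta(X, dc):
    """Core quantities of baseline Density Peaks Clustering:
    rho  - Gaussian-kernel local density around each point,
    delta - distance to the nearest point of strictly higher density
            (for the global peak: the distance to its farthest point)."""
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    rho = np.exp(-(d / dc) ** 2).sum(axis=1) - 1.0  # subtract the self term
    delta = np.empty(len(X))
    for i in range(len(X)):
        higher = np.where(rho > rho[i])[0]
        delta[i] = d[i].max() if higher.size == 0 else d[i, higher].min()
    return rho, delta
```

Density peaks stand out as points with both large rho and large delta, which is why the original method selects centers from a rho-delta decision graph.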

9.
In this paper, the problem of automatically clustering a data set is posed as a multiobjective optimization (MOO) problem in which a set of cluster validity indices is optimized simultaneously. The proposed multiobjective clustering technique uses a recently developed simulated annealing-based multiobjective optimization method as the underlying optimization strategy. A variable number of cluster centers is encoded in each string, so the number of clusters varies over a range across strings. Points are assigned to clusters based on a newly developed point symmetry-based distance rather than the Euclidean distance. Two cluster validity indices are optimized simultaneously in order to determine the appropriate number of clusters: the Euclidean-distance-based XB-index and a recently developed point symmetry-based index, Sym-index. The proposed technique is thus able to detect both the proper number of clusters and the appropriate partitioning for data sets with either hyperspherical or point-symmetric clusters. A new semi-supervised method is also proposed for selecting a single solution from the final Pareto-optimal front of the proposed multiobjective clustering technique. The efficacy of the proposed algorithm is shown on seven artificial and six real-life data sets of varying complexity. Results are also compared with those obtained by another multiobjective clustering technique, MOCK, and two single-objective genetic algorithm-based automatic clustering techniques, VGAPS and GCUK clustering.

10.
An algorithm for automatic selection of cluster centers based on density peaks and grids   Total citations: 1 (self-citations: 0, citations by others: 1)
夏庆亚 《计算机科学》2017,44(Z11):403-406
In the clustering algorithm by fast search and find of density peaks (DPC), the computation between data points is expensive and the final number of cluster centers must be selected manually from a decision graph. To address these problems, GADPC, an improved algorithm that automatically selects cluster centers based on density peaks and grids, is proposed. First, borrowing the idea of the CLIQUE grid clustering algorithm, it no longer operates on individual points: points are mapped to grid cells and the cells become the clustering objects, which reduces the distance computations between data points and the number of clustering operations in DPC. Second, an improved criterion for determining the number of cluster centers selects that number automatically and more precisely. Finally, grid edge points and noise points are handled using the point objects inside a cell and the similarity between adjacent cells. In comparative experiments on the synthetic data-mining data sets provided by UEF (University of Eastern Finland) and on UCI natural data sets, the clustering evaluation index (Rand Index) shows that on large data sets the improved algorithm achieves clustering quality no worse than DPC and K-means while improving the processing efficiency of DPC.

11.
In this paper, a quantization-based clustering algorithm (QBCA) is proposed to cluster a large number of data points efficiently. Unlike previous clustering algorithms, QBCA places more emphasis on the computation time of the algorithm. Specifically, QBCA first assigns the data points to a set of histogram bins via a quantization function. It then determines the initial cluster centers according to this point distribution. Finally, QBCA performs clustering at the histogram-bin level rather than the data-point level. We also propose two approaches to further improve the performance of QBCA: (i) a shrinking process on the histogram bins that reduces the number of distance computations and (ii) a hierarchical structure for efficient indexing of the histogram bins. We analyze the performance of QBCA theoretically and experimentally and show that the approach (1) can be easily implemented, (2) identifies the clusters effectively and (3) outperforms most current state-of-the-art clustering approaches in terms of efficiency.
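The quantization step of such an approach can be sketched as mapping each point to a histogram-bin id, after which all clustering work happens on bins instead of raw points. The per-dimension bin count and the bin-id encoding below are illustrative, not the paper's quantization function:

```python
import numpy as np

def quantize_to_bins(X, bins=8):
    """Map each point to an integer histogram-bin id by uniformly
    quantizing every dimension into `bins` intervals over the data
    range, then encoding the per-dimension indices as one id."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    idx = np.floor((X - lo) / (hi - lo + 1e-12) * bins).astype(int)
    idx = np.clip(idx, 0, bins - 1)
    return (idx * bins ** np.arange(X.shape[1])).sum(axis=1)
```

Nearby points collapse into the same bin, so the number of objects the clustering stage touches drops from the number of points to the number of occupied bins.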

12.
A Gaussian-distribution-based method for computing inter-cluster distance   Total citations: 2 (self-citations: 0, citations by others: 2)
Agglomerative hierarchical clustering is a high-performing clustering approach that repeatedly merges the closest clusters until the data set is partitioned into a user-specified number of classes. The accuracy of the inter-cluster distance computation during clustering is an important factor in the algorithm's performance. This paper proposes a new Gaussian-distribution-based method for computing inter-cluster distance that improves computational accuracy by taking into account factors such as the clusters' own size and density distribution. Comparative experiments against existing inter-cluster distance measures on several text collections show that the method effectively improves the performance of hierarchical clustering.
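One plausible instantiation of a Gaussian-based inter-cluster distance, shown purely for illustration (the paper's exact formula may differ), is the Bhattacharyya distance between Gaussians fitted to the two clusters: it accounts for both the separation of the cluster means and the clusters' spread, which is the kind of size/density information the abstract describes.

```python
import numpy as np

def gaussian_cluster_distance(A, B):
    """Bhattacharyya distance between Gaussians fitted to clusters A and B.
    The small ridge term keeps the covariances invertible."""
    mu_a, mu_b = A.mean(axis=0), B.mean(axis=0)
    ca = np.cov(A, rowvar=False) + 1e-6 * np.eye(A.shape[1])
    cb = np.cov(B, rowvar=False) + 1e-6 * np.eye(B.shape[1])
    c = (ca + cb) / 2.0
    diff = mu_a - mu_b
    term1 = diff @ np.linalg.solve(c, diff) / 8.0
    term2 = 0.5 * np.log(np.linalg.det(c) /
                         np.sqrt(np.linalg.det(ca) * np.linalg.det(cb)))
    return term1 + term2
```

Unlike a plain centroid distance, this measure grows when two clusters are far apart relative to their spread, so tight, well-separated clusters are not merged prematurely.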

13.
For a mixed-attribute space containing a small labeled sample set and a large unlabeled data set with the same (or similar) distribution, a semi-supervised clustering method based on an MST with an adaptively optimized dissimilarity measure is proposed. The method first uses a decision tree to obtain "rule clustering regions" from the small labeled set; it then adaptively optimizes the dissimilarity measure constructed on the mixed-attribute space according to the principle that "points in the same cluster are closer to each other"; finally, it applies the optimized dissimilarity measure in an MST-based clustering algorithm to obtain more effective clustering results. Simulation results show that the method improves results on some data sets. To promote the method further and uncover its practical value, a research outlook is given at the end of the paper.

14.
Classical clustering methods, such as partitioning and hierarchical clustering algorithms, often fail to deliver satisfactory results, given clusters of arbitrary shapes. Motivated by a clustering validity index based on inter-cluster and intra-cluster density, we propose that the clustering validity index be used not only globally to find optimal partitions of input data, but also locally to determine which two neighboring clusters are to be merged in a hierarchical clustering of Self-Organizing Map (SOM). A new two-level SOM-based clustering algorithm using the clustering validity index is also proposed. Experimental results on synthetic and real data sets demonstrate that the proposed clustering algorithm is able to cluster data in a better way than classical clustering algorithms on an SOM.

15.
Density peaks clustering is prone to assignment errors on data sets with complex structure. To address this, a density peaks clustering algorithm with an optimized assignment strategy (ODPC) is proposed. The new algorithm first introduces the parameter product γ to enlarge the range from which cluster centers can be selected; it then reassigns the data points with an improved assignment strategy based on the similarity index MS, further optimizing the assignment of points within clusters; finally, it uses a dc-nearest-neighbor method to improve the identification of noise points. Experiments on both artificial and real UCI data sets show that the new algorithm improves noise identification while raising the accuracy of point assignment on complex manifold data sets, achieving better clustering than the DPC, DenPEHC and GDPC algorithms.

16.

The fuzzy c-means algorithm (FCM) is aimed at computing the membership degree of each data point to its corresponding cluster center. This computation requires the distance matrix between the cluster centers and the data points, and computing the membership matrix for all data points is the main bottleneck of FCM. This work presents a new clustering method, bdrFCM (boundary data reduction fuzzy c-means). Our algorithm is based on the original FCM proposal, adapted to detect and remove the boundary regions of clusters. Our implementation efforts are directed at two aspects: processing large data sets in less time and reducing the data volume while maintaining the quality of the clusters. A real application data set of significant volume (> 10^6 records) was used, and we found that the bdrFCM implementation scales well to data sets with millions of data points.


17.
Clustering is an unsupervised learning problem: a procedure that partitions data objects into clusters such that objects in the same cluster are quite similar to each other and dissimilar to those in other clusters. Density-based clustering algorithms find clusters based on the density of data points in a region. DBSCAN, one such algorithm, can discover clusters of arbitrary shape, requires only two input parameters, and has proved very effective for analyzing large and complex spatial databases. However, DBSCAN needs a large amount of memory and often has difficulty with high-dimensional data and with clusters of very different densities. The partitioning-based DBSCAN algorithm (PDBSCAN) was proposed to solve these problems, but it gives poor results when the data density is non-uniform, and both DBSCAN and PDBSCAN remain, to some extent, sensitive to their initial parameters. In this paper, we propose a new hybrid algorithm based on PDBSCAN: in the data preprocessing phase we use a modified ant clustering algorithm (ACA) and design a new partitioning algorithm based on 'point density' (PD). We name the new hybrid algorithm PACA-DBSCAN. Its performance is compared with DBSCAN and PDBSCAN on five data sets, and the experimental results indicate the superiority of PACA-DBSCAN.
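The density primitive shared by DBSCAN, PDBSCAN and the hybrid described above is the core-point test: a point is "core" if at least `min_pts` points (conventionally counting itself) lie within radius `eps`. A brute-force sketch of that test:

```python
import numpy as np

def core_points(X, eps, min_pts):
    """DBSCAN core-point test: True for points whose eps-neighborhood
    (including the point itself) contains at least min_pts points."""
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    return (d <= eps).sum(axis=1) >= min_pts
```

Clusters are then grown by connecting core points whose neighborhoods overlap; the O(n^2) distance matrix here is precisely the cost that partitioning schemes such as PDBSCAN try to avoid on large data sets.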

18.
Dubnov  Shlomo  El-Yaniv  Ran  Gdalyahu  Yoram  Schneidman  Elad  Tishby  Naftali  Yona  Golan 《Machine Learning》2002,47(1):35-61
We present a novel pairwise clustering method. Given a proximity matrix of pairwise relations (i.e. pairwise similarity or dissimilarity estimates) between data points, our algorithm extracts the two most prominent clusters in the data set. The algorithm, which is completely nonparametric, iteratively employs a two-step transformation on the proximity matrix. The first step of the transformation represents each point by its relation to all other data points, and the second step re-estimates the pairwise distances using a statistically motivated proximity measure on these representations. Using this transformation, the algorithm iteratively partitions the data points until it finally converges to two clusters. Although the algorithm is simple and intuitive, it generates a complex dynamics of the proximity matrices. Based on this bipartition procedure we devise a hierarchical clustering algorithm, which employs the basic bipartition algorithm in a straightforward divisive manner. The hierarchical clustering algorithm copes with the model validation problem using a general cross-validation approach, which may be combined with various hierarchical clustering methods. We further present an experimental study of this algorithm, examining some of its properties and its performance on synthetic and standard data sets. The experiments demonstrate the robustness of the algorithm and indicate that it generates a good clustering partition even when the data is noisy or corrupted.

19.
Hybrid mining approach in the design of credit scoring models   Total citations: 1 (self-citations: 0, citations by others: 1)
Unrepresentative data samples are likely to reduce the utility of data classifiers in practical applications. This study presents a hybrid mining approach to the design of an effective credit scoring model based on clustering and neural network techniques. Clustering is used to preprocess the input samples, with the objective of isolating unrepresentative samples in separate, inconsistent clusters, and neural networks are used to construct the credit scoring model. The clustering stage involves a class-wise classification process: a self-organizing map clustering algorithm is first used to automatically determine the number of clusters and the starting point of each cluster, and the K-means algorithm is then used to generate clusters of samples belonging to new classes and to eliminate the unrepresentative samples from each class. In the neural network stage, samples with the new class labels are used in the design of the credit scoring model. Experiments on two real-world credit data sets demonstrate that the hybrid mining approach can be used to build effective credit scoring models.

20.
A computationally efficient agglomerative clustering algorithm based on multilevel theory is presented. The data set is divided randomly into a number of partitions, and the samples in each partition are clustered separately using a hierarchical agglomerative clustering algorithm to form sub-clusters. These are merged at higher levels to obtain the final classification. When the clusters are well separated, this algorithm produces the same classification as hierarchical agglomerative clustering applied to the whole data set, but with a short running time and a small storage requirement. The savings in storage space and computation time are observed to increase nonlinearly with the sample size.


Copyright © Beijing Qinyun Technology Development Co., Ltd.    京ICP备09084417号-23
