Similar Documents
1.
薄树奎  荆永菊 《计算机科学》2016,43(Z6):217-218, 259
Single-class information extraction from remote sensing imagery is a special form of classification that aims to train on and extract only one class of interest. This paper studies a single-class extraction method based on the nearest neighbor classifier, covering both class partitioning and sample selection. It first shows that nearest neighbor extraction of a single class depends only on the samples selected, not on how the remaining classes are partitioned, so single-class extraction can be treated as a binary classification problem. Within this binary formulation, a subset of training samples for the non-interest class is then selected according to spatial and feature proximity, which simplifies the classification process. Experimental results show that the proposed method effectively extracts single-class information from remote sensing imagery.
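The binary reformulation is straightforward to reproduce. Below is a minimal sketch, assuming synthetic stand-in features: the class of interest is labeled 1, a selected subset of non-interest samples is labeled 0, and a 1-NN classifier performs the extraction. The data generation and the sample-selection step are illustrative placeholders, not the paper's procedure.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X_interest = rng.normal(0.0, 0.5, (50, 4))   # stand-in spectral features, class of interest
X_other = rng.normal(2.0, 0.5, (80, 4))      # selected non-interest training samples
X = np.vstack([X_interest, X_other])
y = np.array([1] * 50 + [0] * 80)            # 1 = class of interest, 0 = everything else

clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
pixels = rng.normal(1.0, 1.0, (200, 4))      # pixels to classify
mask = clf.predict(pixels)                   # 1 marks extracted single-class pixels
```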

2.
Because K-means clustering requires the Euclidean distance from every pixel to every cluster center, it becomes very time-consuming when the number of clusters is large. The improved K-means algorithm assigns each pixel by computing distances only to neighboring cluster centers; since the state of most clusters stabilizes as the iterations proceed, clustering becomes progressively faster. Multilevel clustering lossless compression exploits the fast convergence of this improved K-means algorithm and removes redundancy layer by layer, so residual redundancy can be eliminated to the greatest possible extent. Built on the SP integer wavelet transform, multilevel clustering removes not only spatial and structural redundancy but also further de-correlates the residual data, achieving a breakthrough in lossless compression of multispectral remote sensing images. Finally, compression experiments on TM images comparing different algorithms, together with a parameter analysis, demonstrate the efficiency and soundness of multilevel clustering lossless compression.
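A minimal sketch of the neighbor-restricted assignment idea follows. It assumes each point only checks the m centers closest to its previously assigned center; the parameter m and the plain NumPy implementation are my assumptions, and the paper's pixel-level bookkeeping of stabilized clusters is not reproduced.

```python
import numpy as np

def restricted_kmeans(X, k, m=5, iters=20, seed=0):
    """K-means sketch: each point only checks the m centers nearest
    to its previously assigned center, rather than all k centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    # initial full assignment
    d = np.linalg.norm(X[:, None] - centers[None], axis=2)
    labels = d.argmin(1)
    for _ in range(iters):
        cd = np.linalg.norm(centers[:, None] - centers[None], axis=2)
        near = cd.argsort(1)[:, :m]                  # m nearest centers per center
        cand = near[labels]                          # candidate centers per point
        dc = np.linalg.norm(X[:, None] - centers[cand], axis=2)
        labels = cand[np.arange(len(X)), dc.argmin(1)]
        for j in range(k):                           # update step
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return labels, centers
```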

3.
The k-nearest neighbor (KNN) rule is a classical and yet very effective nonparametric technique in pattern classification, but its classification performance is severely affected by outliers. The local mean-based k-nearest neighbor classifier (LMKNN) was first introduced to achieve robustness against outliers by computing the local mean vector of the k nearest neighbors in each class. However, its performance suffers from relying on a single value of k per class and on using the same k across classes. In this paper, we propose a new KNN-based classifier, called the multi-local means-based k-harmonic nearest neighbor (MLM-KHNN) rule. In our method, the k nearest neighbors in each class are first found and used to compute k different local mean vectors, whose harmonic mean distance to the query sample is then computed. Finally, MLM-KHNN classifies the query sample into the class with the minimum harmonic mean distance. Experimental results on twenty real-world datasets from the UCI and KEEL repositories demonstrate that the proposed MLM-KHNN classifier achieves a lower classification error rate and is less sensitive to the parameter k than nine related competitive KNN-based classifiers, especially in small training sample size situations.
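The decision rule is compact enough to sketch directly from the description above. A hedged NumPy version, assuming Euclidean distance and cumulative local means over the first 1..k in-class neighbors:

```python
import numpy as np

def mlm_khnn(X_train, y_train, x, k):
    """Classify x by the minimum harmonic mean distance to k local means per class."""
    best_class, best_score = None, np.inf
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        d = np.linalg.norm(Xc - x, axis=1)
        nn = Xc[np.argsort(d)[:k]]                                    # k nearest in class c
        means = np.cumsum(nn, axis=0) / np.arange(1, k + 1)[:, None]  # k local mean vectors
        dists = np.maximum(np.linalg.norm(means - x, axis=1), 1e-12)
        score = k / np.sum(1.0 / dists)                               # harmonic mean distance
        if score < best_score:
            best_class, best_score = c, score
    return best_class
```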

4.
Finding the k nearest neighbor objects in spatial databases is a fundamental problem in many geospatial systems, and direction is one of the key features of a spatial object. Moreover, the recent tremendous growth of sensor technologies in mobile devices produces an enormous amount of spatio-directional (i.e., spatially and directionally encoded) objects such as photos. An efficient and proper utilization of the direction feature is therefore a new challenge. Inspired by this issue and the traditional k nearest neighbor search problem, we devise a new type of query, called the direction-constrained k nearest neighbor (DCkNN) query. The DCkNN query finds the k nearest neighbors of the query location such that the direction of each neighbor is within a certain range of the direction of the query. We develop a new index structure called MULTI to efficiently answer the DCkNN query, with two novel index access algorithms based on a cost analysis. Furthermore, our problem and solution generalize to objects with spatio-circulant dimensions (such as a direction, or circulant periods of time such as an hour, a day, and a week). Experimental results show that our proposed index structure and access algorithms outperform two algorithms adapted from existing kNN algorithms.
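The query semantics (not the MULTI index itself, which is the paper's contribution) can be stated in a few lines. Here is a hedged brute-force sketch, assuming directions in radians and a symmetric angular tolerance theta around the query direction:

```python
import numpy as np

def dcknn(points, dirs, q_loc, q_dir, k, theta):
    """Direction-constrained kNN: filter by direction range, then take the k nearest."""
    ang = np.abs((dirs - q_dir + np.pi) % (2 * np.pi) - np.pi)  # circular angle difference
    cand = np.flatnonzero(ang <= theta)                         # direction-feasible objects
    d = np.linalg.norm(points[cand] - q_loc, axis=1)
    return cand[np.argsort(d)[:k]]                              # indices of the k answers
```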

5.
Data with an uneven density distribution are data whose clusters differ in how densely their samples are distributed. On such data, the density peaks clustering (DPC) algorithm tends to place cluster centers in high-density regions and readily assigns samples of sparse clusters to dense clusters. To avoid these defects, a neighbor-optimized density peaks clustering algorithm for unevenly distributed data (DPC-NNO) is proposed. DPC-NNO combines reverse nearest neighbors and k nearest neighbors to define a new local density, raising the local density of sparse samples so that the algorithm can locate cluster centers more accurately; the assignment strategy introduces shared nearest neighbors to measure similarity between samples and build a similarity matrix, tying samples of the same cluster more closely together and avoiding misassignment. DPC-NNO is compared with the IDPC-FA, DPCSA, FNDPC, FKNN-DPC, and DPC algorithms. Experimental results show that DPC-NNO achieves excellent clustering on data with uneven density distribution, and its overall performance on complex datasets and UCI datasets is superior to the compared algorithms.
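The abstract does not give the exact density formula, so the following is only an illustrative sketch of a density in the same spirit: reverse-neighbor counts (how many points include i among their own k nearest) scaled by the inverse mean kNN distance, so that cores of sparse clusters are not drowned out by globally dense regions. It mirrors the idea, not the published definition.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_rnn_density(X, k):
    """Illustrative local density from k nearest and reverse nearest neighbors."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dist, idx = nn.kneighbors(X)            # column 0 is the point itself
    rnn = np.zeros(len(X))
    for i in range(len(X)):
        for j in idx[i, 1:]:
            rnn[j] += 1                     # i counts as a reverse neighbor of j
    return rnn / (dist[:, 1:].mean(axis=1) + 1e-12)
```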

6.
The k nearest neighbor method is widely used in text classification, and optimizing its performance is a practical need. This paper uses an improved clustering algorithm to prune the training samples and strengthen how well they represent their classes; it then weights samples by their within-class and between-class distributions according to their spatial positions. Together these steps mitigate the dominance of large, high-density classes in the training set of the k nearest neighbor algorithm. Experimental results show that the proposed text weighting method improves the classifier's classification efficiency.

7.
CID: an efficient complexity-invariant distance for time series
The ubiquity of time series data across almost all human endeavors has produced a great interest in time series data mining in the last decade. While dozens of classification algorithms have been applied to time series, recent empirical evidence strongly suggests that simple nearest neighbor classification is exceptionally difficult to beat. The choice of distance measure used by the nearest neighbor algorithm is important and depends on the invariances required by the domain. For example, motion capture data typically requires invariance to warping, and cardiology data requires invariance to the baseline (the mean value). Similarly, recent work suggests that for time series clustering, the choice of clustering algorithm is much less important than the choice of distance measure.

In this work we make a somewhat surprising claim: there is an invariance that the community seems to have missed, complexity invariance. Intuitively, the problem is that in many domains the different classes may have different complexities, and pairs of complex objects, even those which subjectively may seem very similar to the human eye, tend to be further apart under current distance measures than pairs of simple objects. This fact introduces errors in nearest neighbor classification, where some complex objects may be incorrectly assigned to a simpler class. Similarly, for clustering this effect can introduce errors by "suggesting" to the clustering algorithm that subjectively similar but complex objects belong in a sparser and larger-diameter cluster than is truly warranted.

We introduce the first complexity-invariant distance measure for time series, and show that it generally produces significant improvements in classification and clustering accuracy. We further show that this improvement does not compromise efficiency, since we can lower bound the measure and use a modification of the triangular inequality, thus making use of most existing indexing and data mining algorithms. We evaluate our ideas with the largest and most comprehensive set of time series mining experiments ever attempted in a single work, and show that complexity-invariant distance measures produce improvements in classification and clustering in the vast majority of cases.
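The published CID measure is simple enough to state exactly: the complexity estimate CE is the length of the line a series traces (the root sum of squared successive differences), and the distance is the Euclidean distance stretched by the ratio of the two complexities.

```python
import numpy as np

def ce(x):
    """Complexity estimate: root sum of squared successive differences."""
    return np.sqrt(np.sum(np.diff(x) ** 2))

def cid(q, c):
    """Complexity-invariant distance: Euclidean distance times the complexity ratio."""
    cf = max(ce(q), ce(c)) / max(min(ce(q), ce(c)), 1e-12)  # correction factor >= 1
    return np.linalg.norm(q - c) * cf
```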

8.
Monte Carlo methods were used to estimate the percentage of misclassification of 13 clustering methods for six types of parameterizations of two bivariate normal populations. The clustering methods were compared using the probabilities of misclassification and incidence matrices. Correlations and differences in population sizes adversely influenced all clustering methods, whereas differences in the variance structure did not appreciably affect the results. The k-means partitioning method was the overall best method. Among the agglomerative methods, the sum of squares, variance, furthest neighbor, and rank score methods were generally superior to the other non-partitioning methods considered. The overall poorest methods were judged to be nearest neighbor and maximum likelihood. However, as the complexity of the distributions increased, the differences between all of the methods decreased.

9.
This paper analyzes the characteristics and problems of traditional density-based clustering methods, reviews the state of research on density-based clustering algorithms, and proposes an improved clustering algorithm based on a density distribution function. The k nearest neighbor (KNN) idea is used to measure density and locate the current highest-density point, i.e., the center point. The cluster is expanded outward from the center point by a region ratio, with a radius scale factor introduced at each expansion to discover core points; the cluster is then further expanded from the KNN of each core point, and expansion stops when the density falls below a given fraction of the center point's density. Several worked examples are given, and experiments compare the algorithm with the grid-based shared nearest neighbor (GNN) clustering algorithm in terms of clustering accuracy and efficiency. The experiments show that the algorithm greatly reduces the parameter sensitivity of density-based clustering, improves clustering on high-dimensional datasets with uneven density distribution, and increases clustering accuracy and efficiency.
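As a hedged sketch of the starting step only (the region-ratio expansion and radius scale factor are not reproduced), the KNN density measurement and center selection could look like this: density as the inverse mean distance to the k nearest neighbors, with the densest point taken as the center.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_density_center(X, k):
    """Density = inverse mean kNN distance; the argmax is the center point."""
    dist, _ = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    density = 1.0 / (dist[:, 1:].mean(axis=1) + 1e-12)  # column 0 is the point itself
    return density.argmax(), density
```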

10.
K nearest neighbor and Bayesian methods are effective machine learning methods, and expectation maximization is an effective Bayesian classifier. In this work a data elimination approach is proposed to improve data clustering. The proposed method is based on hybridizing the k nearest neighbor and expectation maximization algorithms: the k nearest neighbor algorithm acts as a preprocessor for expectation maximization, eliminating training data that make learning difficult and thereby reducing the training set. The suggested method is tested on the well-known machine learning datasets Iris, Wine, Breast Cancer, Glass, and Yeast. Simulations are carried out in the MATLAB environment and performance results are reported.
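A minimal end-to-end sketch of the hybrid follows, in Python with scikit-learn rather than the paper's MATLAB. The elimination rule shown, dropping samples whose kNN prediction disagrees with their own label, is one plausible reading; the abstract does not fix the exact criterion.

```python
from sklearn.datasets import load_iris
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
keep = knn.predict(X) == y                 # drop samples kNN cannot reproduce
gm = GaussianMixture(n_components=3, random_state=0).fit(X[keep])  # EM on cleaned data
labels = gm.predict(X)                     # cluster assignments for all samples
```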

11.
CFSFDP is a recent density-based clustering algorithm that can cluster non-spherical datasets and is fast and simple to implement. However, CFSFDP requires the density threshold dc to be set manually by trial and error, and it cannot correctly cluster data in which one class contains multiple density peaks. To overcome these drawbacks, this paper proposes a CFSFDP variant optimized with nearest-neighbor distance curves and cluster merging (NM-CFSFDP). First, the algorithm determines the density threshold dc automatically from the variation of the nearest-neighbor distance curve; it then clusters the data with this automatically parameterized CFSFDP; finally, the dc computation is reused to guide cluster merging, and a cohesion measure is introduced to resolve the problem that merges cannot be undone, thereby clustering multi-peak data correctly. Experimental comparisons confirm that NM-CFSFDP achieves more accurate clustering than CFSFDP.

12.
A classifier based on shared nearest neighbor clustering and fuzzy set theory
李订芳  胡文超  何炎祥 《控制与决策》2006,21(10):1103-1108
This paper proposes a classifier based on shared nearest neighbor clustering and fuzzy set theory. First, building on the concept of a kernel radius closely tied to kernel points, shared nearest neighbor clustering is applied to obtain some of the kernel points and kernel radii of the normal-class space, and a multi-objective optimization model is built to solve for the supplementary kernel points, thereby obtaining all the kernel points and kernel radii that characterize the normal-class space. Fuzzy set theory is then introduced into the class partitioning of the normal class: the kernel points and kernel radii are used to define a membership function for the normal class, and a classification function, i.e., a classifier, is built on this membership function. Experiments show that the classifier can handle classification of high-dimensional datasets containing noise, outliers, and irregular subclasses.

13.
陆林花 《计算机仿真》2009,26(7):122-125,158
To perform cluster analysis when the number of clusters is unknown, a new dynamic clustering algorithm combining nearest neighbor clustering and a genetic algorithm is proposed. The algorithm has two stages. In the first stage, nearest neighbor clustering groups the most similar instances into the same cluster and filters out noisy data according to similarity or dissimilarity measures, yielding an initial set of clusters. The second stage is genetic optimization: guided by a dynamic clustering evaluation function, the initial clusters are merged dynamically to obtain a near-optimal solution. Simulation experiments show that the method clusters effectively without knowing the number of clusters in advance.

14.
A neighboring-class discriminant analysis method is proposed, of which linear discriminant analysis (LDA) is a special case. LDA finds the optimal projection by maximizing the between-class scatter while minimizing the within-class scatter, where the between-class scatter is the average scatter over all pairs of classes; in neighboring-class discriminant analysis, the between-class scatter is instead defined as the average scatter between each class and its k nearest classes. With a suitable number of neighboring classes, the method alleviates the class overlap that LDA can produce after dimensionality reduction. Experimental results show that neighboring-class discriminant analysis is stable and outperforms traditional linear discriminant analysis.
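Under the definition above, only the between-class scatter changes relative to LDA: each class mean is paired with its k nearest class means instead of all of them. A hedged sketch follows; solving the generalized eigenproblem with SciPy and adding a small ridge for invertibility are my choices, not the paper's.

```python
import numpy as np
from scipy.linalg import eigh

def ncda(X, y, k, d_out):
    """Neighboring-class discriminant analysis sketch (k = all classes gives LDA)."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(0) for c in classes])
    mdist = np.linalg.norm(means[:, None] - means[None], axis=2)
    Sb = np.zeros((X.shape[1], X.shape[1]))
    Sw = np.zeros_like(Sb)
    for i, c in enumerate(classes):
        Xc = X[y == c] - means[i]
        Sw += Xc.T @ Xc                              # within-class scatter
        for j in np.argsort(mdist[i])[1:k + 1]:      # k nearest class means
            diff = (means[i] - means[j])[:, None]
            Sb += diff @ diff.T                      # neighboring-class scatter
    w, V = eigh(Sb, Sw + 1e-6 * np.eye(Sb.shape[0]))
    return V[:, np.argsort(w)[::-1][:d_out]]         # top generalized eigenvectors
```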

15.
Traditional neighbor-based oversampling algorithms such as SMOTE (synthetic minority over-sampling technique) cannot adapt the neighborhood parameter to the distribution of the minority-class samples when handling class-imbalanced data. To address this, a neighborhood-adaptive SMOTE algorithm, AdaN_SMOTE, is proposed. So that the synthetic data preserve the original minority-class distribution, the neighborhood size of each minority sample is determined by tracking the points where precision degrades, and is adjusted in time for noise, small disjuncts, and complex shapes; because the synthetic data preserve the original distribution, classification performance improves. Experimental comparisons on the KEEL datasets show that AdaN_SMOTE classifies better than other neighbor-based oversampling methods and is more effective on noisy datasets.
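For reference, here is the plain SMOTE interpolation that AdaN_SMOTE adapts: each synthetic point lies on the segment between a minority sample and one of its k nearest minority neighbors. The fixed k below is exactly what AdaN_SMOTE replaces with a per-sample, adaptively chosen neighborhood.

```python
import numpy as np

def smote(X_min, k, n_new, seed=0):
    """Plain SMOTE: interpolate between minority samples and their k nearest minority neighbors."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]            # k nearest, excluding the sample itself
        j = rng.choice(nbrs)
        out.append(X_min[i] + rng.random() * (X_min[j] - X_min[i]))
    return np.array(out)
```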

16.
The high dimensionality of text representations increases the computational cost of text classification. To address this problem, a linear regression classification model based on class-neighborhood dictionaries is constructed. The K nearest neighbor method is used to build a neighborhood dictionary for each class, and, depending on how test samples are represented, two linear regression classification algorithms are proposed: one based on the concatenated class-neighborhood dictionary and one based on per-class neighborhood dictionaries. In addition, to mitigate the impact of noisy data on classification performance, noise classes are pruned by measuring the correlation between the test sample and each class. Experimental results show that the model achieves high classification accuracy and computational efficiency on both long and short texts, and the noise-class pruning strategy also gives it good classification performance on corpora with many classes.
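The per-class variant can be sketched as standard linear regression classification over kNN-built dictionaries: represent the test sample as a least-squares combination of each class's dictionary columns and pick the class with the smallest residual. The dictionary construction shown (a class's k training samples nearest to the test sample) follows the abstract; the rest is a generic sketch, not the paper's code.

```python
import numpy as np

def class_dictionary(X_c, x, k):
    """Columns = the k training samples of one class nearest to the test sample x."""
    d = np.linalg.norm(X_c - x, axis=1)
    return X_c[np.argsort(d)[:k]].T                  # shape (dim, k)

def lrc_predict(X_train, y_train, x, k):
    """Assign x to the class whose neighborhood dictionary reconstructs it best."""
    best, best_res = None, np.inf
    for c in np.unique(y_train):
        D = class_dictionary(X_train[y_train == c], x, k)
        beta, *_ = np.linalg.lstsq(D, x, rcond=None)
        res = np.linalg.norm(x - D @ beta)           # reconstruction residual
        if res < best_res:
            best, best_res = c, res
    return best
```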

17.
The nearest neighbor classification method assigns an unclassified point to the class of the nearest case in a set of previously classified points. This rule is independent of the underlying joint distribution of the sample points and their classifications. An extension of this approach is the k-NN method, in which the unclassified point is classified by a voting criterion among the k nearest points.

The method we present here extends the k-NN idea: it searches each class for the k points nearest to the unclassified point, and classifies the point into the class that minimizes the mean distance between the unclassified point and its k nearest points within that class. As all classes take part in the final selection process, we have called the new approach k Nearest Neighbor Equality (k-NNE).

Experimental results show the suitability of the k-NNE algorithm, and its effectiveness suggests that it could be added to the current list of distance-based classifiers.
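The k-NNE rule as stated maps almost one-to-one onto code; a minimal NumPy sketch, assuming Euclidean distance:

```python
import numpy as np

def knne(X_train, y_train, x, k):
    """k-NNE: score each class by the mean distance of its k points nearest to x."""
    best, best_mean = None, np.inf
    for c in np.unique(y_train):
        d = np.sort(np.linalg.norm(X_train[y_train == c] - x, axis=1))[:k]
        if d.mean() < best_mean:
            best, best_mean = c, d.mean()
    return best                                      # class with minimum mean distance
```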

18.
In this paper, we present a fast and versatile algorithm which can rapidly perform a variety of nearest neighbor searches. Efficiency is improved by using a distance lower bound to avoid calculating the distance itself whenever the lower bound already exceeds the global minimum distance. At the preprocessing stage, the proposed algorithm constructs a lower bound tree (LB-tree) by agglomeratively clustering all the sample points to be searched. Given a query point, the lower bound of its distance to each sample point can be calculated using the internal nodes of the LB-tree. To reduce the number of lower bounds actually calculated, the winner-update search strategy is used for traversing the tree. For further efficiency improvement, data transformation can be applied to the sample and query points. In addition to finding the nearest neighbor, the proposed algorithm can also (i) provide the k nearest neighbors progressively; (ii) find the nearest neighbors within a specified distance threshold; and (iii) identify neighbors whose distances to the query are sufficiently close to the minimum distance of the nearest neighbor. Our experiments have shown that the proposed algorithm can save substantial computation, particularly when the distance from the query point to its nearest neighbor is small compared with its distance to most other samples (which is the case for many object recognition problems).
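The LB-tree itself is beyond a short sketch, but the core prune-by-lower-bound idea can be illustrated with a single pivot: by the triangle inequality, |d(q,p) - d(s,p)| <= d(q,s), so the true distance is computed only when this cheap bound beats the best distance found so far. This is a generic branch-and-bound illustration, not the paper's algorithm.

```python
import numpy as np

def nn_with_lower_bound(samples, q, pivot):
    """Nearest neighbor search that skips distances whose lower bound is too large."""
    dp = np.linalg.norm(samples - pivot, axis=1)     # precomputed offline, once
    dq = np.linalg.norm(q - pivot)
    best_i, best_d = -1, np.inf
    for i in np.argsort(np.abs(dp - dq)):            # most promising candidates first
        if abs(dp[i] - dq) >= best_d:
            break                                    # all remaining bounds are larger
        d = np.linalg.norm(q - samples[i])           # true distance, only when needed
        if d < best_d:
            best_i, best_d = i, d
    return best_i, best_d
```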

19.
It is very expensive and time-consuming to annotate huge amounts of data, and active learning is a suitable approach for minimizing the annotation effort. A novel active learning approach, coupled K nearest neighbor pseudo pruning (CKNNPP), is proposed in this paper; it is based on querying examples with the KNNPP method. The KNNPP method applies the k nearest neighbor technique to search the labeled samples for the k neighbors of each unlabeled sample. When these k labeled samples do not all belong to the same class, the corresponding unlabeled sample is queried, given its correct label by the supervisor, and added to the labeled training set; otherwise the unlabeled sample is not selected, which constitutes the pseudo pruning. This definition is inspired by k nearest neighbor pruning preprocessing. The samples selected by KNNPP are considered to lie near or on the optimal classification hyperplane, which is crucial for active learning. In particular, to avoid excursions of the optimal classification hyperplane after adding a queried sample, the CKNNPP method queries two samples with different class labels (like a couple, annotated by the supervisor) via KNNPP and adds them to the training set simultaneously in each iteration. CKNNPP performs well: it is simple, effective, and robust, and compared with existing methods it can solve classification problems with unbalanced datasets. The computational complexity of CKNNPP is then analyzed. Additionally, a new stopping criterion is applied, and the classifier in each active learning iteration is implemented with Lagrangian Support Vector Machines. Finally, twelve UCI datasets, aircraft image datasets, and a radar high-resolution range profile dataset are used to validate the feasibility and effectiveness of the proposed method. The results show that CKNNPP outperforms seven other state-of-the-art active learning approaches.
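The KNNPP selection rule itself is one line of logic: query an unlabeled sample exactly when its k nearest labeled neighbors disagree. A hedged scikit-learn sketch of that rule (the coupling, the SVM training, and the stopping criterion are omitted):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knnpp_query_mask(X_lab, y_lab, X_unlab, k):
    """True where an unlabeled sample's k labeled neighbors disagree (query it)."""
    _, idx = NearestNeighbors(n_neighbors=k).fit(X_lab).kneighbors(X_unlab)
    neigh_labels = y_lab[idx]                                   # (n_unlabeled, k)
    return ~np.all(neigh_labels == neigh_labels[:, :1], axis=1)
```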

20.
A number of approaches to pattern recognition employ variants of nearest neighbor recall. This procedure uses a number of prototypes of known class and identifies an unknown pattern vector according to the prototype it is nearest to. A recall criterion of this type, which depends on the relation of the unknown to a single prototype, is a non-smooth function and leads to a decision boundary that is a jagged, piecewise linear hypersurface. Collective recall, a pattern recognition method based on a smooth nearness measure of the unknown to all the prototypes, is developed here. The prototypes are represented as cells in a brain-state-in-a-box (BSB) network: cells that represent the same pattern class are linked by positive weights, and cells representing different pattern classes are linked by negative weights. Computer simulations of collective recall used in conjunction with learning vector quantization (LVQ) show significant improvement in performance relative to nearest neighbor recall for pattern classes defined by non-spherically-symmetric Gaussians.
