Similar Literature
1.
This paper presents a general methodology for the development of fuzzy algorithms for learning vector quantization (FALVQ). The design of specific FALVQ algorithms according to existing approaches reduces to the selection of the membership function assigned to the weight vectors of an LVQ competitive neural network, which represent the prototypes. The development of a broad variety of FALVQ algorithms can be accomplished by selecting the form of the interference function that determines the effect of the nonwinning prototypes on the attraction between the winning prototype and the input of the network. The proposed methodology provides the basis for extending the existing FALVQ 1, FALVQ 2, and FALVQ 3 families of algorithms. This paper also introduces two quantitative measures which establish a relationship between the formulation that led to FALVQ algorithms and the competition between the prototypes during the learning process. The proposed algorithms and competition measures are tested and evaluated using the IRIS data set. The significance of the proposed competition measure is illustrated using FALVQ algorithms to perform segmentation of magnetic resonance images of the brain.
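To make the structure concrete, the sketch below shows one FALVQ-style competitive update under an illustrative interference function z ↦ z/(1+z); the interference function, the winner's effective membership, and the step sizes are assumptions for illustration, not the published FALVQ 1-3 definitions.

```python
import numpy as np

def falvq_style_step(x, V, eta=0.05, interference=lambda z: z / (1.0 + z)):
    """Schematic FALVQ-style update for one input x (illustrative only).

    Nonwinning prototypes interfere with the winner's attraction and
    receive small updates of their own, which is the structural idea
    of the FALVQ family; the specific functions here are assumptions.
    """
    d = ((V - x) ** 2).sum(axis=1)            # squared distances
    win = int(np.argmin(d))                   # winning prototype
    z = d[win] / np.maximum(d, 1e-12)         # distance ratios d_win / d_j
    z[win] = 0.0
    u = interference(z)                       # interference of each nonwinner
    u_win = 1.0 / (1.0 + u.sum())             # winner's effective membership
    V[win] += eta * u_win * (x - V[win])      # attract the winner
    nonwin = np.arange(len(V)) != win
    V[nonwin] += eta * (u[nonwin] * u_win)[:, None] * (x - V[nonwin])
    return V
```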

2.
Derives an interpretation for a family of competitive learning algorithms and investigates their relationship to fuzzy c-means and fuzzy learning vector quantization. These algorithms map a set of feature vectors into a set of prototypes associated with a competitive network that performs unsupervised learning. Derivation of the new algorithms is accomplished by minimizing an average generalized distance between the feature vectors and prototypes using gradient descent. A close relationship between the resulting algorithms and fuzzy c-means is revealed by investigating the functionals involved. It is also shown that the fuzzy c-means and fuzzy learning vector quantization algorithms are related to the proposed algorithms if the learning rate at each iteration is selected to satisfy a certain condition.
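Since the abstract relates its gradient-derived algorithms to fuzzy c-means (FCM), a minimal FCM iteration is sketched below as the reference point; this is the standard alternating-optimization form of FCM, not the paper's gradient algorithm.

```python
import numpy as np

def fcm_step(X, V, m=2.0, eps=1e-9):
    """One alternating-optimization step of fuzzy c-means.

    X: (n, d) feature vectors; V: (c, d) prototypes; m > 1: fuzzifier.
    Returns the membership matrix and the updated prototypes.
    """
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1) + eps   # (n, c)
    u = (1.0 / d2) ** (1.0 / (m - 1.0))
    u /= u.sum(axis=1, keepdims=True)           # memberships, rows sum to 1
    um = u ** m
    V_new = um.T @ X / um.sum(axis=0)[:, None]  # weighted-mean prototypes
    return u, V_new
```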

3.
Soft competition algorithms for learning vector quantization
Although the loss factors of the FALVQ algorithms are fuzzy membership functions, their scale functions are not, which makes the performance of the algorithms unstable. To overcome this problem, a class of soft competition algorithms for generalized LVQ (SCALVQ) is derived by generalizing the definition of the winner's loss factor in FALVQ, and three concrete forms of SCALVQ are given. In SCALVQ, the loss factor and the corresponding scale function are the same fuzzy membership function; the scheme combines the advantages of FALVQ and soft competition schemes and effectively overcomes the problems with FALVQ.

4.
Fractal image compression coding based on nonlinear spatial geometric contraction
Building on the classical linear mean contraction algorithm for spatial geometry, a nonlinear spatial geometric contraction algorithm is proposed. Experiments show that the algorithm not only increases the compression ratio but also improves the signal-to-noise ratio to some extent.

5.
An axiomatic approach to soft learning vector quantization and clustering
This paper presents an axiomatic approach to soft learning vector quantization (LVQ) and clustering based on reformulation. The reformulation of the fuzzy c-means (FCM) algorithm provides the basis for reformulating entropy-constrained fuzzy clustering (ECFC) algorithms. According to the proposed approach, the development of specific algorithms reduces to the selection of a generator function. Linear generator functions lead to the FCM and fuzzy learning vector quantization algorithms while exponential generator functions lead to ECFC and entropy-constrained learning vector quantization algorithms. The reformulation of LVQ and clustering algorithms also provides the basis for developing uncertainty measures that can identify feature vectors equidistant from all prototypes. These measures are employed by a procedure developed to make soft LVQ and clustering algorithms capable of identifying outliers in the data set. This procedure is evaluated by testing the algorithms generated by linear and exponential generator functions on speech data.
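As a rough illustration of the generator-function idea, the sketch below maps a "linear" choice to FCM-style inverse-distance memberships and an "exponential" choice to entropy-constrained softmax memberships; these closed forms are standard, but their exact correspondence to the paper's generator functions is an assumption.

```python
import numpy as np

def soft_memberships(d2, generator="linear", m=2.0, T=1.0):
    """Soft assignments from squared distances d2 of shape (n, c).

    'linear' yields FCM-style memberships (inverse-distance powers);
    'exponential' yields entropy-constrained, softmax-style memberships
    with temperature T.
    """
    if generator == "linear":
        u = (1.0 / (d2 + 1e-12)) ** (1.0 / (m - 1.0))
    elif generator == "exponential":
        u = np.exp(-d2 / T)
    else:
        raise ValueError(generator)
    return u / u.sum(axis=1, keepdims=True)
```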

6.
For pt.I see ibid., p.775-85. In part I an equivalence between the concepts of fuzzy clustering and soft competitive learning in clustering algorithms is proposed on the basis of the existing literature. Moreover, a set of functional attributes is selected for use as dictionary entries in the comparison of clustering algorithms. In this paper, five clustering algorithms taken from the literature are reviewed, assessed and compared on the basis of the selected properties of interest. These clustering models are (1) self-organizing map (SOM); (2) fuzzy learning vector quantization (FLVQ); (3) fuzzy adaptive resonance theory (fuzzy ART); (4) growing neural gas (GNG); (5) fully self-organizing simplified adaptive resonance theory (FOSART). Although our theoretical comparison is fairly simple, it yields observations that may appear paradoxical. First, only FLVQ, fuzzy ART, and FOSART exploit concepts derived from fuzzy set theory (e.g., relative and/or absolute fuzzy membership functions). Secondly, only SOM, FLVQ, GNG, and FOSART employ soft competitive learning mechanisms, which are affected by asymptotic misbehaviors in the case of FLVQ, i.e., only SOM, GNG, and FOSART are considered effective fuzzy clustering algorithms.

7.
Combining the idea of generalized learning vector quantization neural networks with the maximum entropy principle of information theory, an entropy-constrained generalized learning vector quantization neural network is proposed, and its learning algorithm is derived by gradient descent; the algorithm is a generalization of the soft competition scheme. Because the loss factor and the scale function are defined as the same fuzzy membership function, the algorithm effectively overcomes the problems with the fuzzy algorithms for generalized learning vector quantization networks. Many important properties of the entropy-constrained generalized LVQ network and its soft competitive learning algorithm are also given, and on that basis the rule for selecting the Lagrange multiplier is discussed.
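A minimal sketch of what such a soft-competition update might look like, assuming a softmax membership with the Lagrange multiplier beta acting as an inverse temperature; the published learning rules are derived from the full objective and may differ in detail.

```python
import numpy as np

def ecglvq_step(x, V, eta=0.05, beta=2.0):
    """One soft-competition update sketch for an entropy-constrained
    generalized LVQ network. Large beta makes the competition nearly
    hard (winner-take-all); small beta spreads the update over all
    prototypes. The softmax membership serves as both loss factor and
    scale function, as the abstract requires."""
    d2 = ((V - x) ** 2).sum(axis=1)
    u = np.exp(-beta * d2)
    u /= u.sum()                      # fuzzy membership of every prototype
    V += eta * u[:, None] * (x - V)   # all prototypes move, weighted by u
    return V
```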

8.
This paper presents the development and investigates the properties of ordered weighted learning vector quantization (LVQ) and clustering algorithms. These algorithms are developed by using gradient descent to minimize reformulation functions based on aggregation operators. An axiomatic approach provides conditions for selecting aggregation operators that lead to admissible reformulation functions. Minimization of admissible reformulation functions based on ordered weighted aggregation operators produces a family of soft LVQ and clustering algorithms, which includes fuzzy LVQ and clustering algorithms as special cases. The proposed LVQ and clustering algorithms are used to perform segmentation of magnetic resonance (MR) images of the brain. The diagnostic value of the segmented MR images provides the basis for evaluating a variety of ordered weighted LVQ and clustering algorithms.
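The sketch below illustrates the ordered-weighting idea with a rank-weighted update; the vector w_ordered is a stand-in for weights satisfying the paper's admissibility conditions, which are not reproduced here.

```python
import numpy as np

def owa_lvq_step(x, V, w_ordered, eta=0.05):
    """Ordered-weighted soft LVQ update sketch. Prototypes are ranked
    by distance to x, and the ordered weights (nonnegative, summing to
    one) set how strongly each rank is updated: w_ordered = [1, 0, ...]
    recovers hard competitive learning, while uniform weights update
    all prototypes equally."""
    d2 = ((V - x) ** 2).sum(axis=1)
    order = np.argsort(d2)            # nearest prototype first
    for rank, j in enumerate(order):
        V[j] += eta * w_ordered[rank] * (x - V[j])
    return V
```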

9.
The strengths and weaknesses of the generalized learning vector quantization (GLVQ) algorithm of Pal et al. and the fuzzy generalized learning vector quantization (FGLVQ) algorithms of Karayiannis et al. are discussed, and a revised generalized learning vector quantization (RGLVQ) algorithm is proposed. The iteration coefficients of the proposed algorithm are well bounded above and below, which resolves the "scale" problem of GLVQ without FGLVQ's sensitivity to the initial learning rate. The algorithm is tested on the IRIS data set and applied to quantization codebook design for image compression. It performs comparably to the FGLVQ family of algorithms while eliminating a large number of floating-point divisions; experiments show that training time is reduced by about 10%.

10.
This paper proposes a framework for constructing and training radial basis function (RBF) neural networks. The proposed growing radial basis function (GRBF) network begins with a small number of prototypes, which determine the locations of radial basis functions. In the process of training, the GRBF network grows by splitting one of the prototypes at each growing cycle. Two splitting criteria are proposed to determine which prototype to split in each growing cycle. The proposed hybrid learning scheme provides a framework for incorporating existing algorithms in the training of GRBF networks. These include unsupervised algorithms for clustering and learning vector quantization, as well as learning algorithms for training single-layer linear neural networks. A supervised learning scheme based on the minimization of the localized class-conditional variance is also proposed and tested. GRBF neural networks are evaluated and tested on a variety of data sets with very satisfactory results.
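A sketch of one growing cycle under stated assumptions: the splitting criterion is abstracted into a precomputed scores array (standing in for either of the paper's two criteria, e.g., a localized class-conditional variance), and the split direction along the dominant variance axis of the assigned data is an illustrative choice rather than the paper's.

```python
import numpy as np

def split_worst_prototype(X, V, scores, delta=0.05):
    """One growing cycle of a GRBF-style network: pick the prototype
    with the largest splitting criterion and replace it with two
    slightly separated copies."""
    j = int(np.argmax(scores))
    assign = np.argmin(((X[:, None] - V[None]) ** 2).sum(-1), axis=1)
    assigned = X[assign == j]
    if len(assigned) > 1:
        # Split along the dominant variance direction of the assigned data.
        direction = np.linalg.svd(assigned - assigned.mean(0))[2][0]
    else:
        direction = np.random.default_rng(0).standard_normal(V.shape[1])
        direction /= np.linalg.norm(direction)
    child_a, child_b = V[j] + delta * direction, V[j] - delta * direction
    return np.vstack([np.delete(V, j, axis=0), child_a, child_b])
```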

11.
A variant of nearest-neighbor (NN) pattern classification and supervised learning by learning vector quantization (LVQ) is described. The decision surface mapping method (DSM) is a fast supervised learning algorithm and is a member of the LVQ family of algorithms. A relatively small number of prototypes are selected from a training set of correctly classified samples. The training set is then used to adapt these prototypes to map the decision surface separating the classes. This algorithm is compared with NN pattern classification, learning vector quantization, and a two-layer perceptron trained by error backpropagation. When the class boundaries are sharply defined (i.e., no classification error in the training set), the DSM algorithm outperforms these methods with respect to error rates, learning rates, and the number of prototypes required to describe class boundaries.
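A minimal sketch of a DSM-style step as the abstract describes it: updates occur only on misclassification, pushing the wrong winner away and pulling the nearest correct-class prototype in. Step-size schedules and stopping rules from the original paper are omitted.

```python
import numpy as np

def dsm_step(x, y, V, labels, eta=0.1):
    """One decision-surface-mapping (DSM) style update. x: input,
    y: its class; V: prototypes; labels: prototype classes (assumed to
    cover every class present in the data)."""
    d2 = ((V - x) ** 2).sum(axis=1)
    win = int(np.argmin(d2))
    if labels[win] == y:
        return V                                 # correct: no update
    V[win] -= eta * (x - V[win])                 # repel the wrong winner
    same = np.where(labels == y)[0]
    best = same[np.argmin(d2[same])]             # nearest correct prototype
    V[best] += eta * (x - V[best])               # attract it toward x
    return V
```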

12.
Generalized clustering networks and Kohonen's self-organizing scheme
The relationship between the sequential hard c-means (SHCM) and learning vector quantization (LVQ) clustering algorithms is discussed. The impact and interaction of these two families of methods with Kohonen's self-organizing feature mapping (SOFM), which is not a clustering method but often lends ideas to clustering algorithms, are considered. A generalization of LVQ that updates all nodes for a given input vector is proposed. The network attempts to find a minimum of a well-defined objective function. The learning rules depend on the degree of distance match to the winner node; the lesser the degree of match with the winner, the greater the impact on nonwinner nodes. Numerical results indicate that the terminal prototypes generated by this modification of LVQ are generally insensitive to initialization and independent of any choice of learning coefficient. The IRIS data obtained by E. Anderson (1939) are used to illustrate the proposed method. Results are compared with the standard LVQ approach.
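A sketch of an "update all nodes" rule in the spirit described above, assuming FCM-style memberships as the degree-of-match weights; the paper's rules come from its own objective function and need not match this form.

```python
import numpy as np

def update_all_nodes_step(x, V, eta=0.05, m=2.0):
    """Every prototype moves toward x with a strength derived from
    FCM-style memberships, so a poor match with the winner spreads
    more of the update across the nonwinner nodes."""
    d2 = ((V - x) ** 2).sum(axis=1) + 1e-12
    u = (1.0 / d2) ** (1.0 / (m - 1.0))
    u /= u.sum()                      # degree of match; winner largest
    V += eta * u[:, None] * (x - V)   # all nodes updated, weighted by u
    return V
```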

13.
This paper presents the development of soft clustering and learning vector quantization (LVQ) algorithms that rely on multiple weighted norms to measure the distance between the feature vectors and their prototypes. Clustering and LVQ are formulated in this paper as the minimization of a reformulation function that employs distinct weighted norms to measure the distance between each of the prototypes and the feature vectors under a set of equality constraints imposed on the weight matrices. Fuzzy LVQ and clustering algorithms are obtained as special cases of the proposed formulation. The resulting clustering algorithm is evaluated and benchmarked on three data sets that differ in terms of the data structure and the dimensionality of the feature vectors. This experimental evaluation indicates that the proposed multinorm algorithm outperforms algorithms employing the Euclidean norm as well as existing clustering algorithms employing weighted norms.

14.
Earlier clustering techniques such as the modified learning vector quantization (MLVQ) and the fuzzy Kohonen partitioning (FKP) techniques have focused on the derivation of a certain set of parameters so as to define the fuzzy sets in terms of an algebraic function. The fuzzy membership functions thus generated are uniform, normal, and convex. Since any irregular training data is clustered into uniform fuzzy sets (Gaussian, triangular, or trapezoidal), the clustering may not be exact and some amount of information may be lost. In this paper, two clustering techniques using a Kohonen-like self-organizing neural network architecture, namely, the unsupervised discrete clustering technique (UDCT) and the supervised discrete clustering technique (SDCT), are proposed. The UDCT and SDCT algorithms reduce this data loss by introducing nonuniform, normal fuzzy sets that are not necessarily convex. The training data range is divided into discrete points at equal intervals, and the membership value corresponding to each discrete point is generated. Hence, the fuzzy sets obtained contain pairs of values, each pair corresponding to a discrete point and its membership grade. Thus, it can be argued that fuzzy membership functions generated using this kind of a discrete methodology provide a more accurate representation of the actual input data. This fact has been demonstrated by comparing the membership functions generated by the UDCT and SDCT algorithms against those generated by the MLVQ, FKP, and pseudofuzzy Kohonen partitioning (PFKP) algorithms. In addition to these clustering techniques, a novel pattern classifying network called the Yager fuzzy neural network (FNN) is proposed in this paper. This network corresponds completely to the Yager inference rule and exhibits remarkable generalization abilities. A modified version of the pseudo-outer product (POP)-Yager FNN called the modified Yager FNN is introduced that eliminates the drawbacks of the earlier network and yields superior performance. Extensive experiments have been conducted to test the effectiveness of these two networks, using various clustering algorithms. It follows that the SDCT and UDCT clustering algorithms are particularly suited to networks based on the Yager inference rule.
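As a rough picture of the discrete (point, grade) representation, the sketch below builds a pointwise fuzzy set over an equally spaced grid; the kernel-density grade is an illustrative stand-in for the membership values the UDCT/SDCT networks actually learn.

```python
import numpy as np

def discrete_membership(samples, lo, hi, n_points=21, bandwidth=0.5):
    """Build a discrete fuzzy set as (point, grade) pairs: the data
    range [lo, hi] is divided into equally spaced points and each point
    receives a membership grade. The result is normal (peak grade 1)
    but not forced to be uniform or convex."""
    samples = np.asarray(samples, dtype=float)
    pts = np.linspace(lo, hi, n_points)
    grades = np.exp(
        -0.5 * ((pts[:, None] - samples[None, :]) / bandwidth) ** 2
    ).sum(axis=1)
    grades /= grades.max()            # normalize so the peak grade is 1
    return list(zip(pts, grades))
```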

15.
This paper presents the development of soft clustering and learning vector quantization (LVQ) algorithms that rely on a weighted norm to measure the distance between the feature vectors and their prototypes. The development of LVQ and clustering algorithms is based on the minimization of a reformulation function under the constraint that the generalized mean of the norm weights be constant. According to the proposed formulation, the norm weights can be computed from the data in an iterative fashion together with the prototypes. An error analysis provides some guidelines for selecting the parameter involved in the definition of the generalized mean in terms of the feature variances. The algorithms produced from this formulation are easy to implement and they are almost as fast as clustering algorithms relying on the Euclidean norm. An experimental evaluation on four data sets indicates that the proposed algorithms consistently outperform clustering algorithms relying on the Euclidean norm and they are strong competitors to non-Euclidean algorithms which are computationally more demanding.
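A sketch of the iterative norm-weight update under one member of the generalized-mean family, the geometric-mean constraint prod(w_k) = 1; the closed form below follows from that constraint, while the paper's parameter selects other members of the family.

```python
import numpy as np

def update_norm_weights(X, V, u, eps=1e-12):
    """Recompute diagonal norm weights from the data, given memberships
    u of shape (n, c), data X of shape (n, d), prototypes V of shape
    (c, d). Features with small weighted within-cluster scatter get
    large weights; the output satisfies prod(w) == 1."""
    # Per-feature scatter D_k = sum_ij u_ij * (x_ik - v_jk)^2
    D = np.einsum('ij,ijk->k', u, (X[:, None, :] - V[None, :, :]) ** 2) + eps
    w = np.exp(np.log(D).mean()) / D   # geometric mean of D, divided by D_k
    return w
```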

16.
Research on an incremental learning vector quantization algorithm based on sample density and classification error rate
李娟, 王宇平. 《自动化学报》, 2015, 41(6): 1187-1200
As a simple and mature classification method, the K-nearest-neighbor (KNN) algorithm has been widely applied in data mining, pattern recognition, and related fields, but it still suffers from heavy computation, high memory consumption, and long running times. To address these problems, this paper builds on the single-layer competitive learning of incremental learning vector quantization (ILVQ), incorporates a neighborhood notion based on sample density and classification error rate, and proposes a new incremental learning vector quantization method. A competitive learning strategy adaptively inserts, deletes, merges, and splits the neighborhoods of the representative points, so that a prototype set for the original data set is obtained quickly; on the premise of preserving classification accuracy, a high compression ratio is achieved on large-scale data. In addition, the traditional nearest-neighbor classification algorithm is improved by incorporating the sample density and classification error rate of the prototype neighbor set into the nearest-neighbor decision criterion. The proposed algorithm generates an effective set of representative prototypes with a single scan of the training set and generalizes well. Experimental results show that, compared with other algorithms, the method maintains or even improves classification accuracy and compression ratio, and has the advantage of fast classification.
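A sketch of a density- and error-weighted prototype decision rule in the spirit of the abstract; the particular way the weights are combined below is an assumption, not the paper's exact criterion.

```python
import numpy as np

def classify(x, prototypes, labels, density, err_rate, k=3):
    """Classify x by its k nearest prototypes, weighting each vote by
    the prototype's sample density, discounting it by the prototype's
    recorded classification error rate, and dividing by distance."""
    d2 = ((prototypes - x) ** 2).sum(axis=1)
    near = np.argsort(d2)[:k]
    votes = {}
    for j in near:
        w = density[j] * (1.0 - err_rate[j]) / (d2[j] + 1e-12)
        votes[labels[j]] = votes.get(labels[j], 0.0) + w
    return max(votes, key=votes.get)
```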

17.
An adaptive conscientious competitive learning (ACCL) algorithm is proposed in this paper. The ACCL algorithm can adjust the conscience parameter itself according to feedback about the actual winning situation of all neurons during the learning process. The a priori information about the distribution range of the input patterns, which is required by the conventional conscientious competitive learning (CCL) algorithm, is no longer required by the ACCL algorithm. The "neurons get stuck" problem of the competitive learning (CL) algorithm and of the CCL algorithm with a small conscience parameter is overcome. At the same time, neurons do not become tangled together, as happens with the CCL algorithm when the conscience parameter is large. The ACCL algorithm is applied to vector quantization (VQ) and probability density function estimation (PDFE), where it generates better results than the conventional CL and CCL algorithms. Experimental results are also included to demonstrate its effectiveness.
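The sketch below shows the conscience mechanism (in DeSieno's classic form) that ACCL builds on, with a fixed conscience parameter C; ACCL's contribution, adapting C from the observed winning statistics, is noted in the comment but not implemented.

```python
import numpy as np

def conscience_winner(x, V, p, C=5.0, B=1e-3):
    """Winner selection with a conscience: each neuron's running win
    frequency p_j biases the distance competition so that frequent
    winners are handicapped and 'stuck' neurons get a chance to win.
    In ACCL, C itself would be adapted from the winning statistics;
    here it is held constant for simplicity."""
    N = len(V)
    d2 = ((V - x) ** 2).sum(axis=1)
    bias = C * (1.0 / N - p)            # handicap frequent winners
    win = int(np.argmin(d2 - bias))
    p += B * (np.eye(N)[win] - p)       # update win-frequency estimates
    return win, p
```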

18.
Likas A. Neural Computation, 1999, 11(8): 1915-1932
A general technique is proposed for embedding online clustering algorithms based on competitive learning in a reinforcement learning framework. The basic idea is that the clustering system can be viewed as a reinforcement learning system that learns through reinforcements to follow the clustering strategy we wish to implement. In this sense, the reinforcement-guided competitive learning (RGCL) algorithm is proposed, which constitutes a reinforcement-based adaptation of learning vector quantization (LVQ) with enhanced clustering capabilities. In addition, we suggest extensions of RGCL and LVQ that are characterized by the property of sustained exploration and significantly improve the performance of those algorithms, as indicated by experimental tests on well-known data sets.

19.
Soft nearest prototype classification
We propose a new method for the construction of nearest prototype classifiers which is based on a Gaussian mixture ansatz and which can be interpreted as an annealed version of learning vector quantization (LVQ). The algorithm performs a gradient descent on a cost function minimizing the classification error on the training set. We investigate the properties of the algorithm and assess its performance for several toy data sets and for an optical letter classification task. Results show 1) that annealing in the dispersion parameter of the Gaussian kernels improves classification accuracy; 2) that classification results are better than those obtained with standard learning vector quantization (LVQ 2.1, LVQ 3) for equal numbers of prototypes; and 3) that annealing of the width parameter improves the classification capability. Additionally, the principled approach provides an explanation of a number of features of the (heuristic) LVQ methods.
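A sketch of one soft-assignment gradient step under a Gaussian-responsibility model: the cost is the expected misclassification of the sample, and the gradient below follows from that choice, which may differ in constants and details from the paper's formulation. Annealing amounts to shrinking sigma between epochs.

```python
import numpy as np

def snpc_step(x, y, V, labels, sigma, eta=0.05):
    """One gradient step of a soft-nearest-prototype-style classifier.
    Gaussian responsibilities define the expected misclassification of
    sample (x, y); its exact gradient attracts same-class prototypes
    and repels other-class prototypes in proportion to responsibility."""
    d2 = ((V - x) ** 2).sum(axis=1)
    P = np.exp(-d2 / (2.0 * sigma ** 2))
    P /= P.sum()                              # responsibilities P(j | x)
    wrong = (labels != y).astype(float)       # 1 for other-class prototypes
    err = float(P @ wrong)                    # expected misclassification
    coef = (P / sigma ** 2) * (wrong - err)   # chain rule through softmax
    V += eta * coef[:, None] * (V - x)        # repel wrong, attract right
    return V, err
```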

20.
We introduce a batch learning algorithm to design the set of prototypes of 1-nearest-neighbour classifiers. Like Kohonen's LVQ algorithms, this procedure tends to perform vector quantization over a probability density function that has zero points at Bayes borders. It differs significantly from those online counterparts, however, in that: (1) its statistical goal is clearer and better defined; and (2) it converges superlinearly owing to its use of Newton's very fast optimization method. Experimental results on artificial data confirm faster training times and better classification performance than Kohonen's LVQ algorithms.
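A worked detail on why Newton's method is natural here: for the plain squared-error quantization cost, the Hessian of each prototype's term is a scaled identity, so a full Newton step lands exactly on the centroid of the prototype's cell. The sketch below assumes that plain cost; the paper's cost additionally shapes the density near Bayes borders, which this does not capture.

```python
import numpy as np

def batch_newton_step(X, V):
    """Batch prototype update via Newton's method on the squared-error
    quantization cost. For prototype j with cell S_j, the gradient is
    -2 * sum_{x in S_j} (x - v_j) and the Hessian is 2 * |S_j| * I, so
    the Newton step moves v_j exactly to the mean of its cell."""
    assign = np.argmin(((X[:, None] - V[None]) ** 2).sum(-1), axis=1)
    for j in range(len(V)):
        cell = X[assign == j]
        if len(cell):
            V[j] = cell.mean(axis=0)   # full Newton step = cell centroid
    return V
```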
