Similar Documents
20 similar documents found.
1.
A Genetic LVQ Clustering Algorithm   (Total citations: 1, self-citations: 0, citations by others: 1)
Kohonen's learning vector quantization (LVQ) clustering algorithm and network have had a profound influence on cluster analysis, but LVQ suffers from two problems: neurons may go under-utilized, and the algorithm is sensitive to initial values. Building on an analysis of the LVQ clustering algorithm and on the characteristics of genetic algorithms, this paper proposes an improved method that combines a genetic algorithm with LVQ clustering. Experimental results show that the improved algorithm is markedly more stable with respect to initialization, and more effective, than plain LVQ clustering.
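For orientation, here is a minimal unsupervised LVQ sketch in Python (all names and parameters are illustrative; the paper's genetic-algorithm seeding is not reproduced). Only the winner moves, so the outcome depends directly on the initial codebook, which is the sensitivity a genetic search over codebooks is meant to mitigate.

```python
import numpy as np

def lvq_cluster(X, prototypes, lr=0.1, epochs=30, decay=0.95):
    """Unsupervised LVQ / simple competitive learning (sketch): only the
    winning prototype moves toward each sample, so prototypes that never
    win stay dead (under-utilized) and the result depends on the initial
    codebook; these are the two LVQ problems the genetic search targets."""
    W = prototypes.astype(float).copy()
    for _ in range(epochs):
        for x in X:
            i = np.argmin(np.linalg.norm(W - x, axis=1))  # winner
            W[i] += lr * (x - W[i])                       # move winner only
        lr *= decay  # shrink the step size each epoch
    return W
```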

2.
This paper discusses the theory and algorithms for improving LVQ clustering networks. To overcome the LVQ network clustering algorithm's sensitivity to initial values, the generalized learning vector quantization (GLVQ) network algorithm improved on LVQ, but GLVQ's performance is unstable. GLVQ-F is a modification of the GLVQ network algorithm, yet it remains sensitive to initial values. This paper analyzes why GLVQ-F is sensitive to initial values and the theoretical flaw behind its instability, improves the underlying theory, and presents a new, improved network algorithm (MLVQ). Experimental results show that the new algorithm solves the problems of the existing algorithms and performs stably.

3.
Both learning vector quantization (LVQ) and generalized learning vector quantization (GLVQ) use Euclidean distance as the similarity measure, ignoring the value ranges of the individual vector attributes and therefore failing to distinguish the different roles those attributes play in classification. To address this, GLVQ is improved with a similarity measure oriented to feature value ranges, yielding the GLVQ-FR algorithm. Comparative experiments on video vehicle-type classification data against LVQ2.1, GLVQ, GRLVQ, and GMLVQ show that GLVQ-FR offers higher classification accuracy, faster computation, and better usability in real production environments.
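The abstract does not spell out the metric; a plausible range-normalized form consistent with its description (notation assumed, not taken from the paper) is

```latex
d_{\mathrm{FR}}(\mathbf{x},\mathbf{w}) \;=\; \sum_{k=1}^{n}
\left( \frac{x_k - w_k}{\max_k - \min_k} \right)^{\!2},
```

where $\max_k$ and $\min_k$ bound the observed values of feature $k$; dividing by the range keeps attributes with wide value ranges from dominating the plain Euclidean sum.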

4.
The utilisation of clustering algorithms based on the optimisation of prototypes in neural networks is demonstrated for unsupervised learning. Stimulated by common clustering methods of this type (learning vector quantisation [LVQ, GLVQ] and K-means), a globally operating algorithm was developed to cope with known shortcomings of existing tools. This algorithm and K-means (representing the common methods) were applied to the problem of clustering pre-processed EEG patterns. It can be shown that the algorithm based on global random optimisation repeatedly finds an optimal solution, whereas K-means yields different sub-optimal solutions with respect to the quality measure defined as the objective function. The results are presented and the performance of the algorithms is discussed.

5.
Repairs to GLVQ: a new family of competitive learning schemes   (Total citations: 2, self-citations: 0, citations by others: 2)
First, we identify an algorithmic defect of the generalized learning vector quantization (GLVQ) scheme that causes it to behave erratically for certain scalings of the input data. We show that GLVQ can behave incorrectly because its learning rates are reciprocally dependent on the sum of squared distances from an input vector to the node weight vectors. We then propose a new family of models, the GLVQ-F family, that remedies the problem. We derive competitive learning algorithms for each member of the GLVQ-F family and prove that they are invariant to all scalings of the data. GLVQ-F offers a wide range of learning models: it reduces to LVQ as its weighting exponent (a parameter of the algorithm) approaches one from above; as this parameter increases, it transitions to a model in which all nodes may be excited according to their (inverse) distances from an input, or in which the winner is excited while losers are penalized; and as the parameter increases without limit, GLVQ-F updates all nodes equally. We illustrate the failure of GLVQ and the success of GLVQ-F on the IRIS data.
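A minimal sketch of a GLVQ-F-style update in Python (parameter names assumed; the paper's exact per-member update rules are not reproduced). The update weights are normalized ratios of inverse distances, so a common scaling of the data cancels out, which is the scale invariance the abstract claims; the weighting exponent `m` interpolates between winner-take-all LVQ (m -> 1+) and equal updates for all nodes (large m).

```python
import numpy as np

def glvq_f_weights(x, V, m=2.0, eps=1e-12):
    """Fuzzy update weights in the spirit of GLVQ-F (sketch): normalized
    inverse-distance memberships with weighting exponent m. Scaling x and
    V by a common factor cancels in the normalization."""
    d2 = np.maximum(((V - x) ** 2).sum(axis=1), eps)  # squared distances
    u = (1.0 / d2) ** (1.0 / (m - 1.0))
    return u / u.sum()

def glvq_f_step(x, V, lr=0.05, m=2.0):
    """One online step: every prototype moves toward x, weighted by u."""
    u = glvq_f_weights(x, V, m)
    return V + lr * u[:, None] * (x - V)
```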

6.
Performance Analysis and Improvement of the Generalized LVQ Neural Network   (Total citations: 4, self-citations: 1, citations by others: 3)
This paper first gives a theoretical analysis of the GLVQ-F algorithm for the generalized learning vector quantization (GLVQ) network. GLVQ-F overcomes, to some extent, the problems of the GLVQ algorithm; however, while it learns the winning prototype well, its performance on the other prototypes is very unstable. The cause of this problem is analyzed and, starting directly from the prototypes' learning rates, a criterion for choosing the learning rates is proposed, along with two improved algorithms. Finally, the algorithms are validated on the IRIS data: the improved algorithms are markedly more stable and effective than GLVQ-F.

7.
This paper presents the implementation of two hardware architectures, A2 Lattice Vector Quantization (LVQ) and Multistage A2LVQ (MA2LVQ), on a Field-Programmable Gate Array (FPGA). First, the well-known LVQ quantizer of Conway and Sloane is implemented, followed by a low-complexity A2LVQ based on a new A2LVQ algorithm; the former turns out to require a large number of multiplier circuits. The low-complexity A2LVQ implementation uses only the first quadrant of the A2 lattice Voronoi region, formed by the W and T regions. The paper also presents a multistage A2LVQ (MA2LVQ) whose architecture is built from successive A2 quantizer blocks. Synthesis results show that the execution time of the low-complexity A2LVQ reaches 35.97 ns, and that MA2LVQ built on the low-complexity A2LVQ architecture uses 47% fewer logic and register elements than one built on the ordinary A2 architecture.

8.
An analysis of the GLVQ algorithm   (Total citations: 6, self-citations: 0, citations by others: 6)
Generalized learning vector quantization (GLVQ) has been proposed as a generalization of the simple competitive learning (SCL) algorithm. The main argument for GLVQ is its superior insensitivity to the initial values of the weights (code vectors). In this paper we show that the distinctive characteristics of the definition of GLVQ disappear outside a small domain of applications: GLVQ becomes identical to SCL either when the number of code vectors grows or when the size of the input space is large. Moreover, the behavior of GLVQ is inconsistent for problems defined on very small-scale input spaces, where the adaptation rules fluctuate between descent and ascent searches on the gradient of the distortion function.
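A hedged sketch of the limiting argument (notation assumed; the paper's exact update factors are not reproduced). GLVQ updates every code vector with a factor that is a rational function of its own distance and the total distance:

```latex
\Delta v_i = \alpha\, f_i(d_i, D)\,(x - v_i),
\qquad d_j = \lVert x - v_j \rVert^2,\quad D = \sum_{j=1}^{c} d_j .
```

When the number of code vectors $c$ grows, or the input scale is large, $D \gg d_i$ for every $i$, so the winner's factor tends to 1 and the losers' factors tend to 0, and the update collapses to the SCL rule $\Delta v_{\text{winner}} = \alpha\,(x - v_{\text{winner}})$ with all other code vectors frozen.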

9.
The strengths and weaknesses of the generalized learning vector quantization (GLVQ) algorithm of Pal et al. and the fuzzy generalized learning vector quantization (FGLVQ) algorithm of Karayiannis et al. are discussed, and a revised generalized learning vector quantization (RGLVQ) algorithm is proposed. The iteration coefficients of the proposed algorithm are tightly bounded above and below, which resolves GLVQ's scale problem without inheriting FGLVQ's sensitivity to the initial learning rate. The algorithms are tested on the IRIS data set, and the proposed algorithm is applied to quantization codebook design for image compression. Its performance is comparable to that of the FGLVQ family, but it eliminates a large number of floating-point divisions; experiments show a saving of about 10% in training time.

10.
Five existing LVQ algorithms are reviewed, and the Premature Clustering Phenomenon, which degrades the performance of LVQ, is explained. By introducing an “equalizing factor” as a remedy for premature clustering, a breakthrough is achieved in improving the performance of the LVQ network, making it competitive with the best-known classifiers. Four different formulas are suggested for estimating the equalizing factor, resulting in four versions of the LVQ4a algorithm. A new weight-updating formula for LVQ is then presented; the LVQ4b algorithm implements it in batch-mode training, and four variants of the LVQ4c algorithm customize LVQ4b for pattern-mode training. The performance of these algorithms and of five earlier training algorithms is meticulously analyzed and compared on 16 databases of the Farsi optical character recognition problem.

11.
Class Directed Unsupervised Learning (CDUL) is a dynamic self-organising network which has been shown to overcome many of the problems associated with unsupervised learning, thereby yielding performance characteristics superior to similar networks such as counter-propagation and LVQ. In this paper, the CDUL algorithm is developed further, to the point where the original two-phase learning process is combined into a single system of dynamic parameter variation, giving a training cycle that can be terminated automatically at a point of zero error over the training set. The ability to improve training times using a FastCDUL algorithm is also explored. The new algorithm, CDUL2, is then applied to the benchmark problem of mine detection from sonar data and shown to outperform both backpropagation and LVQ in terms of training speed and recall performance. Finally, the computational cost of both CDUL2 and LVQ training is estimated, reinforcing the efficiency of CDUL2 over its counterparts.

12.
13.
Soft nearest prototype classification   (Total citations: 3, self-citations: 0, citations by others: 3)
We propose a new method for the construction of nearest prototype classifiers which is based on a Gaussian mixture ansatz and which can be interpreted as an annealed version of learning vector quantization (LVQ). The algorithm performs gradient descent on a cost function that minimizes the classification error on the training set. We investigate the properties of the algorithm and assess its performance on several toy data sets and on an optical letter classification task. Results show (1) that annealing in the dispersion parameter of the Gaussian kernels improves classification accuracy, (2) that classification results are better than those obtained with standard learning vector quantization (LVQ 2.1, LVQ 3) for equal numbers of prototypes, and (3) that annealing of the width parameter improves the classification capability. Additionally, the principled approach provides an explanation of a number of features of the (heuristic) LVQ methods.
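A minimal sketch of the soft assignment and one annealed gradient step in Python (an illustrative form under Gaussian-responsibility assumptions, not the paper's exact derivation). Each prototype receives a responsibility for the sample, the expected misclassification is the total responsibility of wrong-class prototypes, and one descent step pulls correct-class prototypes in and pushes wrong-class prototypes away; annealing means shrinking `sigma` across epochs.

```python
import numpy as np

def soft_assignments(x, W, sigma):
    """Gaussian responsibilities of the prototypes W for sample x."""
    d2 = ((W - x) ** 2).sum(axis=1)
    a = np.exp(-(d2 - d2.min()) / (2.0 * sigma ** 2))  # shifted for stability
    return a / a.sum()

def snpc_step(x, y, W, proto_labels, sigma, lr=0.05):
    """One gradient step on E = total responsibility of wrong-class
    prototypes. Correct prototypes move toward x, wrong ones move away,
    each weighted by its responsibility. Shrink sigma outside to anneal."""
    p = soft_assignments(x, W, sigma)
    wrong = (proto_labels != y).astype(float)
    e = (p * wrong).sum()                      # expected misclassification
    coef = p * (wrong - e) / sigma ** 2        # dE/dw_i direction factors
    return W + lr * coef[:, None] * (W - x)
```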

14.
Combining the idea of the generalized learning vector quantization neural network with the maximum entropy principle from information theory, an entropy-constrained generalized learning vector quantization neural network is proposed, and its learning algorithm is derived by gradient descent; the algorithm is a generalization of the soft-competition scheme. Because the loss factor and the scale function are defined as the same fuzzy membership function, the network effectively overcomes the problems of the fuzzy algorithms for generalized learning vector quantization networks. Many important properties of the entropy-constrained network and its soft-competition learning algorithm are also given, and on that basis the rule for selecting the Lagrange multiplier is discussed.
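For reference, the standard maximum-entropy derivation behind such soft-competition weights (a generic sketch, not the paper's exact membership function): maximizing the entropy of the assignment weights subject to a fixed expected distortion and normalization gives

```latex
\max_{u}\; -\sum_{i=1}^{c} u_i \ln u_i
\quad \text{s.t.}\quad \sum_{i} u_i d_i = \bar{D},\;\; \sum_{i} u_i = 1
\;\;\Longrightarrow\;\;
u_i = \frac{e^{-\beta d_i}}{\sum_{j=1}^{c} e^{-\beta d_j}},
```

where $\beta$ is the Lagrange multiplier of the distortion constraint: $\beta \to \infty$ recovers hard winner-take-all competition, while $\beta \to 0$ updates all prototypes equally, which is why the choice of the multiplier governs the softness of the competition.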

15.
《自动化学报》1999,25(5):1
In this paper, the performance of the GLVQ-F algorithm of the GLVQ network is theoretically analyzed. The GLVQ-F algorithm has, to some extent, overcome the shortcomings of the GLVQ algorithm, but it has problems of its own: it performs well on the winning prototype, while its performance on the other prototypes is very unstable. The reasons for this problem are discussed, rules for choosing the learning rates are proposed, and two modified algorithms are developed from them. Finally, the performance of the modified algorithms is verified on the IRIS data, which shows that they are more stable and effective than GLVQ-F.

16.
International Journal of Computer Mathematics, 2012, 89(1-2):183-200
Robust and adaptive training algorithms aimed at enhancing the capabilities of self-organizing and Radial Basis Function (RBF) neural networks are reviewed in this paper. The following robust variants of the Learning Vector Quantizer (LVQ) are described: the order-statistics LVQ, the L2 LVQ, and the split-merge LVQ. Successful application of the marginal median LVQ, which belongs to the class of order-statistics LVQs, in the self-organized selection of the centers of RBF neural networks is reported. Moreover, the use of the median absolute deviation in estimating the covariance matrix of the observations assigned to each hidden unit in RBF neural networks is proposed. Applications demonstrating the superiority of the proposed LVQ and RBF variants in noisy color image segmentation, color-based image recognition, segmentation of ultrasonic images, motion-field smoothing, and moving-object segmentation are outlined.

17.
This paper first gives a theoretical analysis of the GLVQ-F algorithm for the generalized learning vector quantization (GLVQ) network. GLVQ-F overcomes, to some extent, the problems of the GLVQ algorithm; however, while it learns the winning prototype well, its performance on the other prototypes is very unstable. The cause of this problem is analyzed and, starting directly from the prototypes' learning rates, a criterion for choosing the learning rates is proposed, together with two improved algorithms. Finally, the algorithms are validated on the IRIS data: the improved algorithms are markedly more stable and effective than GLVQ-F.

18.
We introduce a batch learning algorithm to design the set of prototypes of 1-nearest-neighbour classifiers. Like Kohonen's LVQ algorithms, this procedure tends to perform vector quantization over a probability density function that has zero points at Bayes borders. It differs significantly from its online counterparts, however, in that (1) its statistical goal is clearer and better defined, and (2) it converges superlinearly owing to its use of the very fast Newton's optimization method. Experimental results using artificial data confirm faster training times and better classification performance than Kohonen's LVQ algorithms.
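As a hedged illustration of why Newton steps suit batch prototype design (shown for the plain quantization distortion, not the paper's exact cost): with the Voronoi assignments $S_i$ held fixed, the cost seen by one prototype is quadratic,

```latex
E_i(w) = \sum_{x \in S_i} \lVert x - w \rVert^2,\qquad
\nabla E_i = 2\Bigl(|S_i|\,w - \sum_{x \in S_i} x\Bigr),\qquad
H_i = 2\,|S_i|\,I,
```

so the Newton update $w \leftarrow w - H_i^{-1}\nabla E_i$ jumps straight to the centroid of the cell in a single step; for smoother cost functions of the kind the paper optimizes, the same second-order step is what yields the superlinear convergence reported.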

19.
Unsupervised learning vector quantization (LVQ) is a class of clustering methods based on minimizing a risk function. By studying the risk function of unsupervised LVQ, this paper derives a generalized form of the unsupervised LVQ algorithm, under which the current typical LVQ algorithms can be expressed as LVQ algorithms with different scale functions, greatly facilitating the generalization and application of LVQ neural networks. By modifying the unsupervised LVQ neural network, a supervised LVQ neural network based on unsupervised clustering is obtained and applied to speaker identification, with satisfactory results; the strengths and weaknesses of several typical clustering algorithms are also compared.

20.
Application of LVQ Neural Networks to Traffic Incident Detection   (Total citations: 1, self-citations: 0, citations by others: 1)
A traffic incident detection method based on an LVQ neural network is proposed. Upstream and downstream flow and occupancy are extracted as features, and an LVQ neural network serves as the classifier for automatic incident detection. The LVQ network has a simple structure, yet proves more effective and robust than a BP neural network. To further improve the network's generalization ability, an improved Boosting algorithm is used to build a network ensemble. Simulation analysis in Matlab shows that the proposed incident detection algorithm performs well.
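A hedged sketch of the detector's overall shape in Python (the feature layout, helper names, and plain AdaBoost reweighting are assumptions; the paper's improved Boosting variant is not reproduced). Each round trains a small LVQ net on reweighted samples, then the ensemble votes.

```python
import numpy as np

def lvq_fit(X, y, n_proto=4, lr=0.05, epochs=20, sample_w=None, seed=0):
    """Tiny LVQ1 classifier (sketch). Rows of X hold features such as
    [upstream flow, downstream flow, upstream occ., downstream occ.];
    y is 1 for 'incident', 0 for 'normal'. sample_w biases which samples
    are visited, standing in for the boosting reweighting step."""
    rng = np.random.default_rng(seed)
    labels = np.repeat(np.unique(y), max(n_proto // 2, 1))
    W = np.stack([X[rng.choice(np.flatnonzero(y == c))] for c in labels]).astype(float)
    p = None if sample_w is None else sample_w / sample_w.sum()
    for _ in range(epochs):
        for idx in rng.choice(len(X), size=len(X), p=p):
            x, t = X[idx], y[idx]
            i = np.argmin(np.linalg.norm(W - x, axis=1))      # winner
            W[i] += (1.0 if labels[i] == t else -1.0) * lr * (x - W[i])
    return W, labels

def lvq_predict(X, W, labels):
    return labels[np.argmin(((X[:, None, :] - W) ** 2).sum(-1), axis=1)]

def boosted_lvq(X, y, rounds=5):
    """AdaBoost-style ensemble of LVQ nets: upweight misclassified samples
    each round, then combine the nets by weighted vote."""
    w = np.full(len(X), 1.0 / len(X))
    ensemble = []
    for r in range(rounds):
        W, labels = lvq_fit(X, y, sample_w=w, seed=r)
        pred = lvq_predict(X, W, labels)
        err = max(w[pred != y].sum(), 1e-12)
        if err >= 0.5:                      # weak learner no better than chance
            break
        alpha = 0.5 * np.log((1.0 - err) / err)
        w *= np.exp(alpha * (pred != y))    # boost the hard samples
        w /= w.sum()
        ensemble.append((alpha, W, labels))
    return ensemble

def ensemble_predict(X, ensemble):
    votes = sum(a * (2 * lvq_predict(X, W, L) - 1) for a, W, L in ensemble)
    return (votes > 0).astype(int)
```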
