Similar Documents
18 similar documents found (search time: 437 ms)
1.
This paper analyzes and discusses learning vector quantization algorithms, including generalized learning vector quantization (GLVQ), fuzzy generalized learning vector quantization (FGLVQ), and revised generalized learning vector quantization (RGLVQ), and proposes measures to improve the performance of this family of algorithms. Based on the idea of simulated annealing, an excitation factor is introduced that improves the convergence of the learning quantization algorithms without changing the network structure or noticeably increasing the computational cost. Experimental results show that when the method is used to train codebooks, the peak signal-to-noise ratio (PSNR) improves while the training time remains almost unchanged.
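For context, the baseline these variants refine is plain competitive codebook training evaluated by PSNR. The sketch below (function names and the decaying-rate schedule are illustrative, not the paper's excitation-factor method) shows the basic loop:

```python
import numpy as np

def train_codebook(data, n_codes=8, epochs=5, lr0=0.1, seed=0):
    """Plain competitive LVQ codebook training: each sample pulls its
    nearest code vector toward it, with a learning rate that decays
    over epochs (a simple annealing-style schedule)."""
    rng = np.random.default_rng(seed)
    codebook = data[rng.choice(len(data), n_codes, replace=False)].copy()
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)   # decaying learning rate
        for x in data:
            winner = np.argmin(np.sum((codebook - x) ** 2, axis=1))
            codebook[winner] += lr * (x - codebook[winner])
    return codebook

def psnr(original, quantized, peak=1.0):
    """Peak signal-to-noise ratio of a quantized reconstruction, in dB."""
    mse = np.mean((original - quantized) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

To reconstruct, each sample is replaced by its nearest code vector, and PSNR is measured between the original data and that reconstruction.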

2.
This paper discusses theory and algorithms for improving the LVQ clustering network. To overcome the LVQ clustering algorithm's sensitivity to initial values, the generalized learning vector quantization (GLVQ) network algorithm improved on LVQ, but GLVQ's performance is unstable. GLVQ-F is a modification of the GLVQ network algorithm, yet it still suffers from sensitivity to initial values. This paper analyzes why GLVQ-F is sensitive to initial values and the theoretical defects behind its instability, improves the underlying theory, and presents a new improved network algorithm (MLVQ). Experimental results show that the new algorithm resolves the problems of the earlier algorithms and performs stably.

3.
This paper first analyzes theoretically the performance of the GLVQ-F algorithm for the generalized learning vector quantization (GLVQ) network. GLVQ-F overcomes the problems of the GLVQ algorithm to some extent; however, while it learns the winning prototype well, its performance on the other prototypes is very unstable. The paper analyzes the cause of this problem, proposes criteria for selecting the learning rates by working directly from the prototypes' learning rates, and gives two improved algorithms. Finally, the algorithms are validated on the IRIS data; the improved algorithms are clearly more stable and effective than GLVQ-F.

4.
Both learning vector quantization (LVQ) and generalized learning vector quantization (GLVQ) use Euclidean distance as the similarity measure, ignoring the value ranges of the individual vector attributes and therefore failing to distinguish the different roles those attributes play in classification. To address this, a similarity measure oriented to feature value ranges is used to improve GLVQ, yielding the GLVQ-FR algorithm. Comparative experiments on video vehicle-type classification data against LVQ2.1, GLVQ, GRLVQ, and GMLVQ show that GLVQ-FR achieves higher classification accuracy, faster computation, and better usability in real production environments.
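The core idea, scaling each dimension by that attribute's value range so wide-range features stop dominating plain Euclidean distance, can be sketched as follows (the function name and exact weighting are illustrative, not necessarily GLVQ-FR's formula):

```python
import numpy as np

def range_scaled_dist(x, y, feat_range):
    """Feature-range-aware squared distance: each dimension's difference
    is divided by that attribute's observed value range before squaring,
    so all attributes contribute on a comparable scale."""
    return float(np.sum(((x - y) / feat_range) ** 2))

# Two attributes with wildly different ranges: raw Euclidean distance
# would be dominated entirely by the first one.
X = np.array([[1000.0, 0.1], [1200.0, 0.9], [900.0, 0.5]])
feat_range = X.max(axis=0) - X.min(axis=0)   # per-attribute value range
d = range_scaled_dist(X[0], X[1], feat_range)
```

After scaling, the second attribute's 0.8 difference (its full range) contributes as much as a full-range difference in the first.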

5.
Combining the idea of the generalized learning vector quantization neural network with the maximum entropy principle from information theory, this paper proposes an entropy-constrained generalized learning vector quantization neural network and derives its learning algorithm by gradient descent; the algorithm is a generalization of the soft competition scheme. Because the loss factor and the scale function are defined by the same fuzzy membership function, it effectively overcomes the problems of the fuzzy algorithms for GLVQ networks. The paper also gives many important properties of the network and its soft competitive learning algorithm, and uses them to discuss rules for choosing the Lagrange multiplier.
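As a rough illustration of entropy-based soft competition (this is a generic sketch, not the paper's derivation): maximizing entropy subject to a mean-distortion constraint yields Gibbs memberships exp(-beta * d_i)/Z, where beta plays the role of the Lagrange multiplier, and every node is updated in proportion to its membership:

```python
import numpy as np

def soft_competition_update(codebook, x, beta=1.0, lr=0.05):
    """One soft-competition step: Gibbs memberships over squared
    distances (larger beta -> harder, more winner-take-all competition),
    then a membership-weighted pull of every node toward x."""
    d2 = np.sum((codebook - x) ** 2, axis=1)
    g = np.exp(-beta * (d2 - d2.min()))     # shift exponent for stability
    u = g / g.sum()                         # memberships sum to 1
    codebook += lr * u[:, None] * (x - codebook)
    return u
```

As beta grows the update concentrates on the winner (hard competition); as beta approaches zero all nodes are updated equally.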

6.
The convergence conditions of the generalized-basis-function CMAC (Cerebellar Model Articulation Controller) learning algorithm (the C-L algorithm) depend on the basis functions and the training samples, making it hard to achieve fast learning and convergence at the same time. This paper proposes an improved learning algorithm, proves that it converges, and shows that its convergence conditions depend on neither the basis functions nor the training samples. Simulation results show that the improved algorithm outperforms both the C-L algorithm and the standard Albus algorithm.

7.
A Simplified Generalized Multilayer Perceptron Model and Its Learning Algorithms   Cited by: 1 (self-citations: 0, others: 1)
方宁, 李景治, 贺贵明. 《计算机工程》 (Computer Engineering), 2004, 30(1): 50-51, 113
This paper proposes a simplified generalized multilayer perceptron model (SGMLP) and gives two learning algorithms for it: a generalized error back-propagation algorithm (GBP) and a learning algorithm based on genetic algorithms (GA). Experimental results on two typical examples show that the model and its learning algorithms are feasible and effective.

8.
Online group interaction helps digital libraries realize their potential to serve human needs, but how such interaction affects individual information access remains to be studied. This paper uses hidden Markov models (HMMs) to model the state sequences of interacting users and their corresponding information-seeking behavior, and, based on influence model theory, proposes an online group interaction influence model to analyze how users influence one another when selecting materials and searching for information in a digital library. To meet this application's need for incremental model learning, a gradient-based method for training the model's parameters is derived from the learning algorithm of the coupled hidden Markov model (CHMM). Experimental results show that the proposed model and algorithm can fairly accurately characterize the influence of online group interaction on individual information-access behavior.

9.
A Self-adaptive Differential Evolution Algorithm via Generalized Opposition-based Learning   Cited by: 1 (self-citations: 0, others: 1)
To address the complexity of parameter tuning and the difficulty of choosing mutation strategies when differential evolution (DE) is applied to high-dimensional optimization problems, this paper proposes a self-adaptive DE algorithm via generalized opposition-based learning (SDE-GOBL). Generalized opposition-based learning (GOBL) adjusts the initialization of the multi-strategy self-adaptive DE algorithm (SaDE): the opposite point of each candidate solution is computed, the best initial population is selected from the candidates and their opposites, and self-adaptive mutation, crossover, and selection then proceed as usual. SDE-GOBL is validated on nine benchmark functions from the CEC2005 competition; the results show fast convergence and high solution accuracy.
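The GOBL initialization step can be sketched as follows (function names and the sphere demo are illustrative; the paper's exact bound handling may differ): each candidate x gets a generalized opposite k*(lb+ub) - x with random k, and the better half of the combined pool seeds the DE population:

```python
import numpy as np

def gobl_population(pop, fitness, lb, ub, rng):
    """Generalized opposition-based initialization: build opposites of
    all candidates, pool them with the originals, and keep the best
    half (minimization: lowest fitness values)."""
    k = rng.random()
    opp = k * (lb + ub) - pop                # generalized opposite points
    opp = np.clip(opp, lb, ub)               # keep opposites inside bounds
    pool = np.vstack([pop, opp])
    scores = np.array([fitness(ind) for ind in pool])
    best = np.argsort(scores)[: len(pop)]
    return pool[best]

# demo on the sphere function
rng = np.random.default_rng(0)
lb, ub = -5.0, 10.0
pop = rng.uniform(lb, ub, size=(20, 10))
sphere = lambda v: float(np.sum(v ** 2))
init = gobl_population(pop, sphere, lb, ub, rng)
```

Because the original candidates are a subset of the pool, the selected population can never be worse on average than the purely random one.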

10.
Unsupervised learning vector quantization (LVQ) is a class of clustering methods based on minimizing a risk function. By studying the risk function of unsupervised LVQ, this paper derives a generalized form of the unsupervised LVQ algorithm and, on that basis, expresses the current typical LVQ algorithms as LVQ algorithms with different scale functions, greatly facilitating the extension and application of LVQ neural networks. By modifying the unsupervised LVQ network, a supervised LVQ network based on unsupervised clustering is obtained and applied to speaker identification, with satisfactory results; the strengths and weaknesses of several typical clustering algorithms are also compared.

11.
Repairs to GLVQ: a new family of competitive learning schemes   Cited by: 2 (self-citations: 0, others: 2)
First, we identify an algorithmic defect of the generalized learning vector quantization (GLVQ) scheme that causes it to behave erratically for certain scalings of the input data. We show that GLVQ can behave incorrectly because its learning rates depend reciprocally on the sum of squared distances from an input vector to the node weight vectors. Finally, we propose a new family of models, the GLVQ-F family, that remedies the problem. We derive competitive learning algorithms for each member of the GLVQ-F family and prove that they are invariant to all scalings of the data. GLVQ-F offers a wide range of learning models: it reduces to LVQ as its weighting exponent (a parameter of the algorithm) approaches one from above; as this parameter increases, it transitions to models in which all nodes are excited according to their inverse distances from an input, or in which the winner is excited while losers are penalized; and as the parameter increases without limit, GLVQ-F updates all nodes equally. We illustrate the failure of GLVQ and the success of GLVQ-F on the IRIS data.
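The limiting behavior described above can be illustrated with a fuzzy-c-means-style inverse-distance membership (a sketch of the scheme, with illustrative names, not the paper's exact GLVQ-F formula):

```python
import numpy as np

def glvqf_step(codebook, x, m=2.0, lr=0.05, eps=1e-12):
    """One GLVQ-F-style step: every node is updated toward x, weighted
    by an inverse-distance membership with weighting exponent m > 1.
    As m -> 1+ the update concentrates on the winner (LVQ-like);
    as m -> infinity all memberships tend to 1/k (equal updates)."""
    d2 = np.sum((codebook - x) ** 2, axis=1) + eps
    ratio = (d2[:, None] / d2[None, :]) ** (1.0 / (m - 1.0))
    u = 1.0 / ratio.sum(axis=1)              # memberships sum to 1
    codebook += lr * u[:, None] * (x - codebook)
    return u
```

Because the memberships depend only on distance *ratios*, rescaling all inputs by a constant leaves the weights u unchanged, which is the scale-invariance property the abstract claims for GLVQ-F.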

12.
The classification performance of nearest-prototype classifiers relies largely on the prototype learning algorithm. The minimum classification error (MCE) method and the soft nearest prototype classifier (SNPC) method are two important algorithms using misclassification loss. This paper proposes a new prototype learning algorithm based on the conditional log-likelihood loss (CLL), built on the discriminative model called log-likelihood of margin (LOGM). A regularization term is added to avoid over-fitting in training and to maximize the hypothesis margin. The CLL in the LOGM algorithm is a convex function of the margin and so shows better convergence than the MCE. In addition, we show the effects of distance-metric learning with both prototype-dependent and prototype-independent weighting. Our empirical study on benchmark datasets demonstrates that LOGM yields higher classification accuracies than the MCE, generalized learning vector quantization (GLVQ), the soft nearest prototype classifier (SNPC), and robust soft learning vector quantization (RSLVQ); moreover, LOGM with prototype-dependent weighting achieves accuracy comparable to the support vector machine (SVM) classifier.

13.
A learning vector quantization (LVQ) algorithm called the harmonic-to-minimum LVQ algorithm (H2M-LVQ) is presented to tackle the initialization-sensitivity problem associated with the original generalized LVQ (GLVQ) algorithm. Experimental results show superior performance of the H2M-LVQ algorithm over GLVQ and one of its variants on several datasets.

14.
This paper proposes a general local learning framework that eases classifier design through the "divide and conquer" principle and ensemble methods. The framework consists of a quantization layer, which uses generalized learning vector quantization (GLVQ), and an ensemble layer, which uses multilayer perceptrons (MLP). The method is tested on public handwritten-character datasets and consistently achieves promising performance. In contrast to other methods, it is especially suited to large-scale real-world classification problems, while also scaling down easily to small training sets with good performance.

15.
Performance Analysis and Improvement of the Generalized LVQ Neural Network   Cited by: 4 (self-citations: 1, others: 3)
This paper first analyzes theoretically the performance of the GLVQ-F algorithm for the generalized learning vector quantization (GLVQ) network. GLVQ-F overcomes the problems of the GLVQ algorithm to some extent; however, while it learns the winning prototype well, its performance on the other prototypes is very unstable. The cause of this problem is analyzed, criteria for selecting the learning rates are proposed by working directly from the prototypes' learning rates, and two improved algorithms are given. Finally, the algorithms are validated on the IRIS data; the improved algorithms are clearly more stable and effective than GLVQ-F.

16.
This paper discusses an alternative to the usual approach to parameter optimization in well-known prototype-based learning algorithms (minimizing an objective function via gradient search). The proposed approach uses a stochastic optimization technique, the cross-entropy method (CE method), to efficiently tackle the initialization-sensitivity problem associated with the original generalized learning vector quantization (GLVQ) algorithm and its variants. The results indicate that the CE method can be applied successfully to this kind of problem on real-world datasets. To the best of the authors' knowledge, this is the first use of the CE method in prototype-based learning.
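The CE method itself is simple to sketch (illustrative names and a toy quadratic objective; the paper applies it to prototype initialization rather than this demo function): sample candidates from a Gaussian, keep an elite fraction, refit the Gaussian to the elites, and repeat:

```python
import numpy as np

def ce_minimize(f, dim, n_samples=50, n_elite=10, iters=40, seed=0):
    """Cross-entropy method for continuous minimization: the sampling
    distribution is iteratively refit to the best (elite) samples."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.full(dim, 2.0)
    for _ in range(iters):
        samples = rng.normal(mu, sigma, size=(n_samples, dim))
        scores = np.array([f(s) for s in samples])
        elites = samples[np.argsort(scores)[:n_elite]]   # lowest scores
        mu = elites.mean(axis=0)
        sigma = elites.std(axis=0) + 1e-6   # floor keeps sampling alive
    return mu

# demo: minimum of sum((v - 3)^2) is at v = (3, 3)
best = ce_minimize(lambda v: float(np.sum((v - 3.0) ** 2)), dim=2)
```

Because it needs only function evaluations, no gradients, the same loop works for non-differentiable objectives such as classification error over prototype positions.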

17.
An analysis of the GLVQ algorithm   Cited by: 6 (self-citations: 0, others: 6)
Generalized learning vector quantization (GLVQ) has been proposed as a generalization of the simple competitive learning (SCL) algorithm. The main argument for the GLVQ proposal is its superior insensitivity to the initial values of the weights (code vectors). In this paper we show that the distinctive characteristics of GLVQ's definition disappear outside a small domain of applications: GLVQ becomes identical to SCL when either the number of code vectors grows or the size of the input space is large. Moreover, the behavior of GLVQ is inconsistent for problems defined on very small-scale input spaces, where the adaptation rules fluctuate between descent and ascent searches on the gradient of the distortion function.

18.
《自动化学报》 (Acta Automatica Sinica), 1999, 25(5): 1
In this paper, the performance of the GLVQ-F algorithm of the GLVQ network is analyzed theoretically. The GLVQ-F algorithm overcomes, to some extent, the shortcomings of the GLVQ algorithm, but it has problems of its own: it performs well on the winning prototype, while its performance on the other prototypes is very unstable. The reasons for this problem are discussed, rules for choosing the learning rates are proposed, and two modified algorithms are developed from them. Finally, the performance of the modified algorithms is verified on the IRIS data, which shows that they are more stable and effective than GLVQ-F.
