Similar Documents
19 similar documents found.
1.
A Fuzzy Competitive Vector Quantization Method for Image Compression   Cited by: 2 (self-citations: 0, others: 2)
Building on an analysis of the competitive learning algorithm for neural networks and the fuzzy C-means algorithm, a fuzzy competitive learning algorithm is proposed, and the choice of fuzzy membership function is discussed. Theoretical analysis and experimental results show that fuzzy competitive learning is a highly effective method for vector quantization in image compression coding.
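A minimal, hypothetical sketch of the fuzzy competitive learning idea applied to image-block vector quantization. The function name, the FCM-style membership formula and all parameters are illustrative assumptions, not the paper's formulation; the sketch only shows how a soft, membership-weighted update replaces winner-take-all competition.

```python
import numpy as np

def train_fuzzy_vq(blocks, n_codes=64, m=2.0, lr=0.05, epochs=5, seed=0):
    """Fuzzy competitive codebook training: every codevector moves toward each
    training block, weighted by an FCM-style membership (assumed form)."""
    rng = np.random.default_rng(seed)
    codebook = blocks[rng.choice(len(blocks), n_codes, replace=False)].astype(float)
    for _ in range(epochs):
        for x in blocks:
            d2 = np.sum((codebook - x) ** 2, axis=1) + 1e-12
            u = (1.0 / d2) ** (1.0 / (m - 1.0))      # soft competition instead of winner-take-all
            u /= u.sum()
            codebook += lr * (u ** m)[:, None] * (x - codebook)
    return codebook

# Usage: split an image into 4x4 blocks, train, then encode each block by its nearest codevector.
img = np.random.rand(64, 64)
blocks = img.reshape(16, 4, 16, 4).transpose(0, 2, 1, 3).reshape(-1, 16)
cb = train_fuzzy_vq(blocks)
indices = np.argmin(((blocks[:, None, :] - cb[None]) ** 2).sum(-1), axis=1)  # compressed indices
```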

2.
Unsupervised learning vector quantization (LVQ) is a family of clustering methods based on minimizing a risk function. By studying the risk function of unsupervised LVQ, this paper derives a generalized form of the unsupervised LVQ algorithm, under which typical existing LVQ algorithms can be expressed as LVQ with different scale functions, greatly facilitating the extension and application of LVQ neural networks. By modifying the unsupervised LVQ network, a supervised LVQ network built on the unsupervised clustering algorithm is obtained and applied to speaker identification, with satisfactory results; several typical clustering algorithms are also compared.

3.
An adaptive control scheme based on fuzzy neural networks is proposed. For complex learning tasks in continuous spaces, a competitive Takagi-Sugeno fuzzy reinforcement learning network is presented; its structure integrates a Takagi-Sugeno fuzzy inference system with action-based value-function reinforcement learning. Correspondingly, an optimized learning algorithm is proposed that trains the competitive Takagi-Sugeno fuzzy reinforcement learning network into a so-called Takagi-Sugeno fuzzy variable-structure controller. Taking a single inverted-pendulum control system as an example, simulation studies show that the proposed learning algorithm outperforms other reinforcement learning algorithms.

4.
Although the fuzzy learning vector quantization algorithm (FLVQ) removes the dependence of hard competitive learning on the initial codebook, it converges more slowly and can still become trapped in local minima. Based on an analysis of the principles of fuzzy learning vector quantization for image coding, several ways of optimizing FLVQ are examined, and a new fuzzy learning vector quantization algorithm based on Tabu search (TS-FLVQ) is proposed, together with its concrete implementation steps. The algorithm first uses Tabu search to generate a candidate list oriented toward global search, and then performs fuzzy learning to obtain the optimal solution. Experimental results show that the algorithm improves considerably on FLVQ in both convergence speed and coding quality.
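A rough sketch of the two-stage TS-FLVQ idea described above: a short Tabu-search pass over candidate codebooks yields a globally oriented starting point, after which fuzzy learning (e.g. an FCM-style update as in the earlier sketch) would be run. The move, tabu tenure and cost used here are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

def tabu_codebook_init(blocks, n_codes=64, iters=50, tenure=7, seed=0):
    rng = np.random.default_rng(seed)
    cb = blocks[rng.choice(len(blocks), n_codes, replace=False)].astype(float)

    def cost(c):  # mean squared quantization error of a candidate codebook
        return np.mean(np.min(((blocks[:, None] - c[None]) ** 2).sum(-1), axis=1))

    best, best_cost, tabu = cb.copy(), cost(cb), []
    for _ in range(iters):
        i = int(rng.integers(n_codes))
        if i in tabu:                                   # recently moved codevectors are tabu
            continue
        cand = cb.copy()
        cand[i] = blocks[rng.integers(len(blocks))]     # move: reseed one codevector
        c = cost(cand)
        if c < best_cost:                               # this simple sketch accepts improving moves only
            cb, best, best_cost = cand, cand.copy(), c
        tabu = (tabu + [i])[-tenure:]                   # short-term tabu list
    return best                                         # starting codebook for the fuzzy learning stage
```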

5.
A Fuzzy Vector Quantization Algorithm Based on the Self-Organizing Feature Map Network   Cited by: 1 (self-citations: 0, others: 1)
The self-organizing feature map (SOFM) is a widely used vector quantization algorithm; among its advantages, the designed codebook does not depend on the initial codebook. Fuzzy vector quantization (FVQ) introduces fuzzy relations into codebook design, representing the fuzzy relation between training vectors and codevectors with a membership function. This paper proposes a fuzzy vector quantization algorithm based on the self-organizing feature map network (FSOFM). FSOFM treats the adjusted node neighborhood of the SOFM network as a fuzzy set over the training vector, and the learning step for the network weights is chosen according to the membership function. Since codebook design is usually evaluated by the minimum mean-square-error criterion, and the membership function is a function of the distance between training vectors and codevectors, FSOFM keeps the global optimization of the network consistent with the local adjustment of its weights; it can therefore optimize codebook design and improve codebook performance. FSOFM also adapts well: when LBG, SOFM, FVQ and FSOFM are applied to vector quantization of a set of images with different edge characteristics, the images quantized with FSOFM all achieve the highest peak signal-to-noise ratio (PSNR).
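A schematic single FSOFM-style training step, assuming a one-dimensional map of codevectors. The membership form and the way it is combined with the topological neighborhood are illustrative assumptions, since the abstract only states that the learning step depends on a distance-based membership function.

```python
import numpy as np

def fsofm_step(codebook, grid_pos, x, lr=0.1, sigma=1.5):
    d2 = np.sum((codebook - x) ** 2, axis=1) + 1e-12
    u = (1.0 / d2) / np.sum(1.0 / d2)     # membership: codevectors closer to x get larger steps
    winner = np.argmin(d2)
    h = np.exp(-((grid_pos - grid_pos[winner]) ** 2) / (2 * sigma ** 2))  # SOFM grid neighborhood
    return codebook + lr * (h * u)[:, None] * (x - codebook)

# Example: 8 codevectors on a 1-D map, 2-D data
codebook = fsofm_step(np.random.rand(8, 2), np.arange(8), np.array([0.3, 0.7]))
```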

6.
A Self-Learning Controller Based on a Fuzzy CMAC Neural Network   Cited by: 6 (self-citations: 0, others: 6)
By analyzing fuzzy control and the CMAC neural network with generalized basis functions, a fuzzy CMAC (FCMAC) neural network is proposed. Online learning of the FCMAC weight coefficients is used to correct the fuzzy logic. The structure of an FCMAC-based self-learning controller and a suitable learning algorithm are given; the network learns only a small number of parameters at each step, and the algorithm is simple. Simulation results show that the proposed controller outperforms a conventional PID controller.
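A hypothetical FCMAC-style sketch: Gaussian receptive fields over the input, an output formed from the few strongly activated fields, and an online delta-rule update that touches only those weights. The basis functions, addressing scheme and parameters are assumptions; the paper's exact network is not reproduced here.

```python
import numpy as np

class FuzzyCMAC:
    def __init__(self, centers, width=0.2, lr=0.1, k_active=4):
        self.centers = np.asarray(centers, dtype=float)        # receptive-field centers
        self.width, self.lr, self.k = width, lr, k_active
        self.w = np.zeros(len(self.centers))

    def _phi(self, x):
        phi = np.exp(-((self.centers - x) ** 2) / (2 * self.width ** 2))
        mask = np.zeros_like(phi)
        mask[np.argsort(phi)[-self.k:]] = 1.0                  # keep only the k most active fields
        phi *= mask
        return phi / (phi.sum() + 1e-12)

    def predict(self, x):
        return self.w @ self._phi(x)

    def learn(self, x, target):
        phi = self._phi(x)
        self.w += self.lr * (target - self.predict(x)) * phi   # delta rule on the active weights only

# Example: learn a 1-D mapping online, a few parameters per sample
net = FuzzyCMAC(centers=np.linspace(-1, 1, 20))
for x in np.random.uniform(-1, 1, 500):
    net.learn(x, np.sin(np.pi * x))
```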

7.
A Separable-Parameter Learning Algorithm for Multilayer Feedforward Networks   Cited by: 1 (self-citations: 0, others: 1)
Most current neural network learning algorithms learn all of the network's parameters simultaneously, which is often time-consuming when the network is large. Many networks, such as perceptrons, radial basis function networks, probabilistic generalized regression networks and fuzzy neural networks, are multilayer feedforward networks whose input-output mapping can be expressed as a linear combination of a set of adjustable basis functions. Their parameters thus fall into two classes: the parameters inside the adjustable bases are nonlinear, while the combination coefficients are linear. An algorithm that learns these two classes of parameters separately is therefore proposed. Simulation results show that the algorithm speeds up learning and improves the approximation performance of the network.
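A hypothetical sketch of separable learning on an RBF network, illustrating the split described above: for the current nonlinear parameters (centers, width), the linear output weights are obtained in closed form by least squares, and the nonlinear width is then refined by a gradient step (numeric, for brevity). The paper's actual algorithm and parameterization may differ.

```python
import numpy as np

def rbf_design(X, centers, width):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def separable_step(X, y, centers, width, lr=0.01, eps=1e-4):
    Phi = rbf_design(X, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)          # linear coefficients: closed-form solve
    loss = np.sum((Phi @ w - y) ** 2)
    # numeric gradient of the loss w.r.t. the nonlinear width parameter
    Phi_eps = rbf_design(X, centers, width + eps)
    w_eps, *_ = np.linalg.lstsq(Phi_eps, y, rcond=None)
    g = (np.sum((Phi_eps @ w_eps - y) ** 2) - loss) / eps
    return w, width - lr * g                              # updated linear and nonlinear parameters
```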

8.
An adaptive control scheme based on fuzzy neural networks is proposed. For complex learning tasks in continuous spaces, a competitive Takagi-Sugeno fuzzy reinforcement learning network is presented; its structure integrates a Takagi-Sugeno fuzzy inference system with action-based value-function reinforcement learning. Correspondingly, an optimized learning algorithm is proposed that trains the competitive Takagi-Sugeno fuzzy reinforcement learning network into a so-called Takagi-Sugeno fuzzy variable-structure controller. Taking a single inverted-pendulum control system as an example, simulation studies show that the proposed learning algorithm outperforms other reinforcement learning algorithms.

9.
A Hybrid Learning Algorithm   Cited by: 2 (self-citations: 0, others: 2)
沈智鹏, 郭晨. 《计算机工程》, 2003, 29(21): 12-13, 27
A self-learning network model based on fuzzy logic and neural networks is proposed, together with a BPSOM hybrid learning algorithm that combines self-organizing learning with BP learning. Trained on samples with the BPSOM algorithm, the model can automatically generate fuzzy logic rules and tune the membership functions of the input and output variables; the algorithm also converges better and faster than the usual BP learning algorithm.

10.
A Bayesian Network Structure Learning Algorithm Based on Constrained Maximum Information Entropy   Cited by: 3 (self-citations: 0, others: 3)
Bayesian network learning divides into structure learning and parameter learning. Structure learning based on constrained maximum information entropy searches for the structure with the highest score. Starting from the KL distance, mutual information and maximum mutual information, and adding suitable constraint functions to reduce the dimensionality of the variables and the complexity of the network structure, this paper proposes a constrained maximum-entropy scoring function and, combined with hill climbing, designs a heuristic algorithm for Bayesian network structure learning. Experimental comparison with the well-known K2 and B&B-MDL algorithms shows that the algorithm performs well in both time and accuracy.
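A minimal sketch of score-based hill climbing over edges, using a stand-in score built from pairwise mutual information with an edge-count penalty. The paper's constrained maximum-entropy score is more elaborate, and acyclicity checks are omitted here for brevity; treat all names and the penalty as assumptions.

```python
import numpy as np
from itertools import permutations

def mutual_info(x, y, bins=8):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def score(edges, data, penalty=0.05):
    return sum(mutual_info(data[:, i], data[:, j]) for i, j in edges) - penalty * len(edges)

def hill_climb(data, n_vars, max_iter=100):
    edges = set()
    for _ in range(max_iter):
        best, best_gain = None, 0.0
        for i, j in permutations(range(n_vars), 2):
            cand = edges ^ {(i, j)}                       # toggle one edge
            gain = score(cand, data) - score(edges, data)
            if gain > best_gain:
                best, best_gain = cand, gain
        if best is None:                                  # no improving move left
            break
        edges = best
    return edges
```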

11.
The strengths and weaknesses of Pal et al.'s generalized learning vector quantization (GLVQ) and Karayiannis et al.'s fuzzy generalized learning vector quantization (FGLVQ) algorithms are discussed, and a revised generalized learning vector quantization (RGLVQ) algorithm is proposed. The iteration coefficients of the algorithm are well bounded above and below, which resolves the "scale" problem of GLVQ while avoiding FGLVQ's sensitivity to the initial learning rate. The algorithm is tested on the IRIS dataset and applied to quantization codebook design for image compression. Its performance is comparable to that of the FGLVQ family, but it eliminates a large number of floating-point divisions; experiments show a saving of about 10% in training time.

12.
The classification performance of nearest prototype classifiers relies largely on the prototype learning algorithm. The minimum classification error (MCE) method and the soft nearest prototype classifier (SNPC) method are two important algorithms using misclassification loss. This paper proposes a new prototype learning algorithm based on the conditional log-likelihood loss (CLL), built on the discriminative model called log-likelihood of margin (LOGM). A regularization term is added to avoid over-fitting in training as well as to maximize the hypothesis margin. The CLL in the LOGM algorithm is a convex function of the margin and so shows better convergence than the MCE. In addition, we show the effects of distance metric learning with both prototype-dependent and prototype-independent weighting. Our empirical study on benchmark datasets demonstrates that the LOGM algorithm yields higher classification accuracies than the MCE, generalized learning vector quantization (GLVQ), soft nearest prototype classifier (SNPC) and robust soft learning vector quantization (RSLVQ) methods, and moreover, LOGM with prototype-dependent weighting achieves accuracies comparable to the support vector machine (SVM) classifier.
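A schematic sketch of a conditional log-likelihood loss on the hypothesis margin, of the kind described above. The GLVQ-style relative margin, the logistic form, the scale parameter xi and the L2 regularizer are assumptions for illustration; the paper defines the exact LOGM formulation.

```python
import numpy as np

def logm_loss(x, y, prototypes, labels, xi=5.0, lam=1e-3):
    d2 = np.sum((prototypes - x) ** 2, axis=1)
    d_pos = d2[labels == y].min()                 # nearest prototype of the true class
    d_neg = d2[labels != y].min()                 # nearest prototype of any other class
    margin = (d_neg - d_pos) / (d_neg + d_pos)    # GLVQ-style relative margin in (-1, 1)
    nll = np.log1p(np.exp(-xi * margin))          # negative conditional log-likelihood (logistic)
    return nll + lam * np.sum(prototypes ** 2)    # plus a simple L2 regularizer
```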

13.
This paper discusses an alternative approach to parameter optimization of well-known prototype-based learning algorithms (minimizing an objective function via gradient search). The proposed approach uses a stochastic optimization technique called the cross-entropy method (CE method). The CE method is used to tackle efficiently the initialization sensitivity problem associated with the original generalized learning vector quantization (GLVQ) algorithm and its variants. Results presented in this paper indicate that the CE method can be successfully applied to this kind of problem on real-world data sets. As far as the authors know, this is the first use of the CE method in prototype-based learning.
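A hypothetical sketch of the cross-entropy method applied to prototype placement: candidate prototype sets are sampled from a Gaussian, scored with a user-supplied cost (in the paper's setting, the GLVQ criterion), and the sampling distribution is refit to the elite samples. All names and hyperparameters are illustrative.

```python
import numpy as np

def ce_prototypes(X, y, n_proto, cost, n_samples=50, n_elite=10, iters=30, seed=0):
    """cost(prototypes, X, y) -> float: lower is better (assumed callable)."""
    rng = np.random.default_rng(seed)
    dim = n_proto * X.shape[1]
    mu, sigma = np.tile(X.mean(0), n_proto), np.full(dim, X.std() + 1e-6)
    for _ in range(iters):
        cand = rng.normal(mu, sigma, size=(n_samples, dim))       # sample candidate prototype sets
        costs = np.array([cost(c.reshape(n_proto, -1), X, y) for c in cand])
        elite = cand[np.argsort(costs)[:n_elite]]                 # keep the lowest-cost candidates
        mu, sigma = elite.mean(0), elite.std(0) + 1e-6             # refit the sampling distribution
    return mu.reshape(n_proto, -1)
```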

14.
A learning vector quantization (LVQ) algorithm called the harmonic to minimum LVQ algorithm (H2M-LVQ) is presented to tackle the initialization sensitivity problem associated with the original generalized LVQ (GLVQ) algorithm. Experimental results show superior performance of the H2M-LVQ algorithm over GLVQ and one of its variants on several datasets.

15.
An analysis of the GLVQ algorithm   Cited by: 6 (self-citations: 0, others: 6)
Generalized learning vector quantization (GLVQ) has been proposed as a generalization of the simple competitive learning (SCL) algorithm. The main argument of the GLVQ proposal is its superior insensitivity to the initial values of the weights (code vectors). In this paper we show that the distinctive characteristics of the definition of GLVQ disappear outside a small domain of applications. GLVQ becomes identical to SCL when either the number of code vectors grows or the size of the input space is large. Moreover, the behavior of GLVQ is inconsistent for problems defined on very small-scale input spaces: the adaptation rules fluctuate between performing descent and ascent searches on the gradient of the distortion function.

16.
Repairs to GLVQ: a new family of competitive learning schemes   Cited by: 2 (self-citations: 0, others: 2)
First, we identify an algorithmic defect of the generalized learning vector quantization (GLVQ) scheme that causes it to behave erratically for a certain scaling of the input data. We show that GLVQ can behave incorrectly because its learning rates are reciprocally dependent on the sum of squared distances from an input vector to the node weight vectors. We then propose a new family of models, the GLVQ-F family, that remedies the problem. We derive competitive learning algorithms for each member of the GLVQ-F model and prove that they are invariant to all scalings of the data. GLVQ-F offers a wide range of learning models, since it reduces to LVQ as its weighting exponent (a parameter of the algorithm) approaches one from above. As this parameter increases, GLVQ-F transitions to a model in which either all nodes may be excited according to their (inverse) distances from an input, or in which the winner is excited while losers are penalized. As the parameter increases without limit, GLVQ-F updates all nodes equally. We illustrate the failure of GLVQ and the success of GLVQ-F on the IRIS data.
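A schematic reading of the GLVQ-F behavior described above, using FCM-style weights governed by the weighting exponent m. The published update rule may differ in detail, so the formula below is an assumption that merely reproduces the stated limiting behavior and scale invariance.

```python
import numpy as np

def glvq_f_step(prototypes, x, lr=0.05, m=2.0):
    d2 = np.sum((prototypes - x) ** 2, axis=1) + 1e-12
    u = (1.0 / d2) ** (1.0 / (m - 1.0))
    u /= u.sum()    # weights depend only on distance ratios, hence invariant to rescaling the data
    # m -> 1+ : winner-take-all (plain LVQ);  m -> infinity : all nodes updated equally
    return prototypes + lr * u[:, None] * (x - prototypes)
```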

17.
Based on the texture features of pulverized-coal combustion flame images from the rotary-kiln oxide pellet sintering process, a combustion working-condition recognition method based on the generalized learning vector quantization (GLVQ) neural network is proposed. First, the digital flame image is analyzed to extract texture features such as energy, entropy and inertia, based on the grey-level co-occurrence matrix (GLCM), to provide qualitative information on changes in the visual appearance of the flame. Then the kernel principal component analysis (KPCA) method is adopted to reduce the dimensionality of the input vector, so that the GLVQ target dimension and network scale are greatly reduced. Finally, the GLVQ neural network is trained with the normalized texture feature data. Test results show that the proposed KPCA-GLVQ classifier performs excellently in training speed and recognition rate and meets the requirement for real-time combustion working-condition recognition in the rotary kiln process.
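A hypothetical sketch of the described pipeline: GLCM texture features, then KPCA, then a prototype classifier. A simple class-mean nearest-prototype stand-in replaces the GLVQ network here; graycomatrix/graycoprops are assumed from a recent scikit-image, KernelPCA from scikit-learn, and 'contrast' is used as the inertia feature.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import KernelPCA

def glcm_features(gray_img):
    """gray_img: 2-D uint8 grayscale image."""
    glcm = graycomatrix(gray_img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))
    return np.array([graycoprops(glcm, 'energy')[0, 0],
                     entropy,
                     graycoprops(glcm, 'contrast')[0, 0]])   # 'contrast' plays the role of inertia

def build_features(images, labels, n_components=2):
    F = np.array([glcm_features(im) for im in images])
    F = KernelPCA(n_components=n_components, kernel='rbf').fit_transform(F)
    # nearest-prototype stand-in for the GLVQ classifier: one class-mean prototype per class
    prototypes = {c: F[labels == c].mean(0) for c in np.unique(labels)}
    return F, prototypes
```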

18.
This paper proposes a general local learning framework to effectively alleviate the complexity of classifier design by means of the "divide and conquer" principle and ensemble methods. The learning framework consists of a quantization layer that uses generalized learning vector quantization (GLVQ) and an ensemble layer that uses multi-layer perceptrons (MLP). The proposed method is tested on public handwritten character data sets and consistently obtains promising performance. In contrast to other methods, it is especially suitable for large-scale real-world classification problems, while it also scales easily down to small training sets with good performance.
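A hypothetical sketch of the two-layer local learning idea: a quantization layer partitions the input space and a small MLP is trained on each partition. KMeans stands in for the GLVQ quantization layer, and all class names and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

class LocalLearner:
    def __init__(self, n_regions=4):
        self.quant = KMeans(n_clusters=n_regions, n_init=10)   # stand-in for the GLVQ layer
        self.experts = {}

    def fit(self, X, y):
        regions = self.quant.fit_predict(X)
        for r in np.unique(regions):
            idx = regions == r
            if len(np.unique(y[idx])) == 1:
                self.experts[r] = y[idx][0]                     # degenerate region: constant label
            else:
                self.experts[r] = MLPClassifier(hidden_layer_sizes=(32,),
                                                max_iter=500).fit(X[idx], y[idx])
        return self

    def predict(self, X):
        regions = self.quant.predict(X)
        preds = []
        for r, x in zip(regions, X):
            e = self.experts[r]
            preds.append(e.predict(x[None])[0] if hasattr(e, "predict") else e)
        return np.array(preds)
```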

19.
Both learning vector quantization (LVQ) and generalized learning vector quantization (GLVQ) use Euclidean distance as the similarity measure, which ignores the value range of each vector attribute and therefore cannot distinguish the different roles the attributes play in classification. To address this, GLVQ is improved with a vector similarity measure oriented to the value range of each feature, yielding the GLVQ-FR algorithm. Comparative experiments on video-based vehicle type classification data against LVQ2.1, GLVQ, GRLVQ and GMLVQ show that GLVQ-FR achieves high classification accuracy, fast computation, and usability in real production environments.
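A hypothetical sketch of a feature-range-aware distance of the kind described above: each dimension is rescaled by its observed value range before the squared-distance sum. The exact GLVQ-FR measure is defined in the paper; the names and range estimate below are illustrative.

```python
import numpy as np

def range_scaled_distance(x, w, feature_ranges):
    r = np.where(feature_ranges > 0, feature_ranges, 1.0)   # guard against zero-range features
    return np.sum(((x - w) / r) ** 2)

# Example: ranges estimated from training data X (per-dimension max minus min)
X = np.random.rand(100, 4) * [1, 10, 100, 1000]
ranges = X.max(0) - X.min(0)
d = range_scaled_distance(X[0], X[1], ranges)
```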
