Similar Literature
20 similar documents found (search time: 93 ms)
1.
A Genetic LVQ Clustering Algorithm   (Cited: 1; self-citations: 0; other citations: 1)
Kohonen's learning vector quantization (LVQ) clustering algorithm and network have had a profound influence on cluster analysis, but LVQ suffers from under-utilized neurons and sensitivity to initial values. By analyzing the LVQ clustering algorithm and exploiting the characteristics of genetic algorithms, this paper proposes an improved method that combines a genetic algorithm with LVQ clustering. Experimental results show that the improved algorithm is markedly more stable and effective with respect to initialization than the original LVQ clustering algorithm.
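As a point of reference for the LVQ variants discussed throughout this listing, the classical LVQ1 update can be sketched in a few lines. This is a generic illustration, not the paper's genetic hybrid; the function name and learning rate are illustrative:

```python
import math

def lvq1_step(prototypes, labels, x, y, lr=0.1):
    """One LVQ1 update: move the nearest prototype toward x when its
    label matches y, away from x otherwise (pure-Python sketch)."""
    # find the winning (nearest) prototype by Euclidean distance
    dists = [math.dist(p, x) for p in prototypes]
    w = dists.index(min(dists))
    sign = 1.0 if labels[w] == y else -1.0
    prototypes[w] = [p_i + sign * lr * (x_i - p_i)
                     for p_i, x_i in zip(prototypes[w], x)]
    return w

# toy example: two labeled prototypes, one training sample
protos = [[0.0, 0.0], [1.0, 1.0]]
labels = [0, 1]
win = lvq1_step(protos, labels, [0.2, 0.0], 0, lr=0.5)
```

Because only the winner is updated, a bad initialization can leave some prototypes permanently unused, which is exactly the weakness the genetic hybrid above targets.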

2.
This paper first analyzes, theoretically, the performance of the GLVQ-F algorithm of the generalized learning vector quantization (GLVQ) network. GLVQ-F overcomes, to some extent, the problems of the GLVQ algorithm; however, while it learns the winning prototype well, its performance on the other prototypes is highly unstable. The cause of this problem is analyzed, a criterion for choosing the prototype learning rates is proposed, and two improved algorithms are given. Finally, the algorithms are validated on the IRIS data; the improved algorithms are markedly more stable and effective than GLVQ-F.

3.
An analysis of the GLVQ algorithm   (Cited: 6; self-citations: 0; other citations: 6)
Generalized learning vector quantization (GLVQ) has been proposed as a generalization of the simple competitive learning (SCL) algorithm. The main argument of the GLVQ proposal is its superior insensitivity to the initial values of the weights (code vectors). In this paper we show that the distinctive characteristics of the definition of GLVQ disappear outside a small domain of applications. GLVQ becomes identical to SCL either when the number of code vectors grows or when the size of the input space is large. Moreover, the behavior of GLVQ is inconsistent for problems defined on very small-scale input spaces: the adaptation rules fluctuate between performing descent and ascent searches on the gradient of the distortion function.
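The SCL baseline that GLVQ is claimed to collapse into can be sketched directly: only the nearest code vector moves, and no labels are involved. Names are illustrative:

```python
import math

def scl_step(codebook, x, lr=0.1):
    """Simple competitive learning: the nearest code vector (winner)
    is moved toward the input; all other code vectors stay fixed."""
    w = min(range(len(codebook)), key=lambda i: math.dist(codebook[i], x))
    codebook[w] = [c + lr * (xi - c) for c, xi in zip(codebook[w], x)]
    return w

cb = [[0.0], [10.0]]
w = scl_step(cb, [2.0], lr=0.5)
```

The abstract's claim is that GLVQ's extra weighting terms vanish in the limits it names, leaving exactly this winner-take-all rule.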

4.
《自动化学报》(Acta Automatica Sinica), 1999, 25(5): 1
In this paper, the performance of the GLVQ-F algorithm of the GLVQ network is analyzed theoretically. The GLVQ-F algorithm has, to some extent, overcome the shortcomings of the GLVQ algorithm, but problems remain: it performs well on the winning prototype, while on the other prototypes its performance is very unstable. The reasons for this problem are discussed, rules for choosing the learning rates are proposed, and two modified algorithms are developed from them. Finally, the performance of the modified algorithms is verified on the IRIS data, which shows that they are more stable and effective than GLVQ-F.

5.
This paper discusses the theory and algorithms for improving the LVQ clustering network. To overcome the LVQ clustering algorithm's sensitivity to initial values, the generalized learning vector quantization (GLVQ) network algorithm improved on LVQ, but GLVQ is unstable. GLVQ-F is a further modification of GLVQ, yet it remains sensitive to initialization. The paper analyzes why GLVQ-F is sensitive to initial values, identifies the theoretical defects behind its instability, improves the underlying theory, and gives a new improved network algorithm (MLVQ). Experimental results show that the new algorithm solves the problems of the earlier algorithms and is stable.

6.
Based on the texture features of pulverized-coal combustion flame images from the rotary-kiln oxide-pellet sintering process, a combustion working-condition recognition method using a generalized learning vector quantization (GLVQ) neural network is proposed. First, the digital flame image is analyzed to extract texture features, such as energy, entropy and inertia, from the grey-level co-occurrence matrix (GLCM), providing qualitative information on changes in the visual appearance of the flame. The kernel principal component analysis (KPCA) method is then adopted to reduce the dimensionality of the input vector, which greatly shrinks the GLVQ target dimension and network scale. Finally, the GLVQ neural network is trained on the normalized texture-feature data. Test results show that the proposed KPCA-GLVQ classifier performs excellently in training speed and recognition rate, and that it meets the requirement of real-time combustion working-condition recognition for the rotary-kiln process.
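The three GLCM texture features the abstract names (energy, entropy, inertia) can be computed in a few lines. This is a minimal single-offset sketch on a small integer image, not the paper's pipeline:

```python
import math

def glcm_features(img, levels, dx=1, dy=0):
    """Grey-level co-occurrence matrix for one pixel offset, plus the
    energy, entropy and inertia (contrast) texture features."""
    h, w = len(img), len(img[0])
    glcm = [[0] * levels for _ in range(levels)]
    total = 0
    for r in range(h):
        for c in range(w):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < h and 0 <= c2 < w:
                glcm[img[r][c]][img[r2][c2]] += 1
                total += 1
    # normalize counts to joint probabilities
    p = [[v / total for v in row] for row in glcm]
    energy  = sum(v * v for row in p for v in row)
    entropy = -sum(v * math.log(v) for row in p for v in row if v > 0)
    inertia = sum((i - j) ** 2 * p[i][j]
                  for i in range(levels) for j in range(levels))
    return energy, entropy, inertia

img = [[0, 0, 1],
       [0, 0, 1],
       [0, 1, 1]]
energy, entropy, inertia = glcm_features(img, levels=2)
```

In practice several offsets and directions are averaged; a library such as scikit-image provides the same quantities for real images.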

7.
Both learning vector quantization (LVQ) and generalized learning vector quantization (GLVQ) use the Euclidean distance as the similarity measure, ignoring the value range of each feature dimension and thus failing to distinguish the different roles the dimensions play in classification. To address this, a similarity measure oriented to feature value ranges is used to improve GLVQ, yielding the GLVQ-FR algorithm. Comparative experiments on video vehicle-type classification data against LVQ2.1, GLVQ, GRLVQ and GMLVQ show that GLVQ-FR achieves high classification accuracy, fast computation, and practical usability in a real production environment.
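The core idea, scaling each feature's contribution by its value range so that wide-range features do not dominate the distance, can be sketched as follows. The exact measure in the paper may differ; this is only the general form:

```python
def range_scaled_distance(x, y, ranges):
    """Euclidean-style distance in which each per-feature difference
    is divided by that feature's observed value range, so features
    with large numeric ranges do not swamp the others."""
    return sum(((a - b) / r) ** 2 for a, b, r in zip(x, y, ranges)) ** 0.5

# feature 0 spans [0, 1000], feature 1 spans [0, 1]; both differ by 10%
d = range_scaled_distance([500.0, 0.5], [600.0, 0.6], ranges=[1000.0, 1.0])
```

With plain Euclidean distance the first feature's difference (100) would dwarf the second's (0.1); after range scaling both contribute equally.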

8.
This paper discusses the strengths and weaknesses of the generalized learning vector quantization (GLVQ) algorithm of Pal et al. and the fuzzy generalized learning vector quantization (FGLVQ) algorithm of Karayiannis et al., and proposes a revised GLVQ (RGLVQ) algorithm. Its iteration coefficients are well bounded above and below, which resolves the "scale" problem of GLVQ without inheriting FGLVQ's sensitivity to the initial learning rate. The algorithm is tested on the IRIS data set and applied to codebook design for vector-quantization-based image compression. The proposed algorithm performs comparably to the FGLVQ family of algorithms but eliminates a large number of floating-point divisions, saving about 10% of training time in the experiments.

9.
This paper discusses an alternative approach to parameter optimization of well-known prototype-based learning algorithms (minimizing an objective function via gradient search). The proposed approach considers a stochastic optimization technique called the cross-entropy method (CE method). The CE method is used to tackle efficiently the initialization-sensitivity problem associated with the original generalized learning vector quantization (GLVQ) algorithm and its variants. Results presented in this paper indicate that the CE method can be successfully applied to this kind of problem on real-world data sets. To the best of the authors' knowledge, this is the first use of the CE method in prototype-based learning.
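The CE method itself is simple to state: sample candidates from a parametric distribution, keep the best (elite) fraction, refit the distribution to them, and repeat. A minimal one-dimensional sketch (the paper applies the idea to prototype positions, not to this toy quadratic):

```python
import random
random.seed(0)

def ce_minimize(f, mu, sigma, n=50, elite=10, iters=40):
    """Cross-entropy method: sample n candidates from N(mu, sigma^2),
    keep the elite lowest-cost ones, refit mean and std, repeat."""
    for _ in range(iters):
        xs = sorted((random.gauss(mu, sigma) for _ in range(n)), key=f)
        top = xs[:elite]
        mu = sum(top) / elite
        sigma = (sum((x - mu) ** 2 for x in top) / elite) ** 0.5 + 1e-12
    return mu

# minimize (x - 3)^2 starting far from the optimum
best = ce_minimize(lambda x: (x - 3.0) ** 2, mu=-10.0, sigma=5.0)
```

Because the search is population-based, a poor starting mean is recovered from automatically, which is the property exploited against GLVQ's initialization sensitivity.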

10.
Combining the ideas of the generalized learning vector quantization neural network with the maximum-entropy principle from information theory, this paper proposes an entropy-constrained generalized learning vector quantization network and derives its learning algorithm by gradient descent; the algorithm is a generalization of the soft-competition scheme. Because the loss factor and the scale function are defined by the same fuzzy membership function, the algorithm can effectively overcome the problems of the fuzzy variants of the generalized learning vector quantization network. Many important properties of the entropy-constrained network and its soft-competitive learning algorithm are given, and on this basis rules for choosing the Lagrange multiplier are discussed.
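In maximum-entropy soft competition, every prototype receives a membership proportional to exp(-β·distance), where β acts as the Lagrange multiplier trading entropy against distortion. A sketch of that membership rule (the paper's exact loss factor may differ):

```python
import math

def soft_memberships(dists, beta):
    """Maximum-entropy memberships: exp(-beta * d), normalized to sum
    to one. As beta grows, the rule approaches hard winner-take-all."""
    ws = [math.exp(-beta * d) for d in dists]
    s = sum(ws)
    return [w / s for w in ws]

m_soft = soft_memberships([1.0, 2.0, 4.0], beta=0.5)   # soft competition
m_hard = soft_memberships([1.0, 2.0, 4.0], beta=50.0)  # nearly hard
```

Small β spreads the update over all prototypes (avoiding dead units); large β recovers ordinary competitive learning, which is why the choice of the multiplier matters.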

11.
A learning vector quantization (LVQ) algorithm called the harmonic-to-minimum LVQ algorithm (H2M-LVQ) is presented to tackle the initialization-sensitivity problem associated with the original generalized LVQ (GLVQ) algorithm. Experimental results show superior performance of the H2M-LVQ algorithm over GLVQ and one of its variants on several datasets.

12.
Soft-Sensor Modeling Based on Fuzzy Least-Squares Support Vector Machines   (Cited: 10; self-citations: 0; other citations: 10)
张英, 苏宏业, 褚健. 《控制与决策》(Control and Decision), 2005, 20(6): 621-624
The concept of fuzzy membership is introduced into least-squares support vector machines, and a fuzzy membership function model based on support vector data-domain description is proposed: samples in the input space are mapped into a high-dimensional feature space, and each is assigned a membership degree according to how far it deviates from the data domain. The method improves the noise robustness of least-squares support vector machines and is especially suitable when the input samples do not fully reveal the characteristics of the data. The method is applied to soft-sensor modeling of the light-diesel solidification point of an FCC fractionator; simulation results show that the fuzzy membership function model improves the prediction accuracy of the least-squares support vector machine.

13.
This paper optimizes the performance of the growing cell structures (GCS) model in learning topology and vector quantization. Each node in GCS is attached to a resource counter. During the competitive learning process, the counter of the best-matching node is increased by a defined resource measure after each input presentation, and then all resource counters are decayed by a factor alpha. We show that the summation of all resource counters is conserved. This conservation principle provides useful clues for exploring important characteristics of GCS, which in turn provide insight into how GCS can be optimized. In the context of information entropy, we show that the performance of GCS in learning topology and vector quantization can be optimized by using alpha=0 together with a threshold-free node-removal scheme, regardless of whether the input data are stationary or nonstationary. The meaning of optimization is twofold: (1) for learning topology, the information entropy is maximized in terms of the equiprobable criterion, and (2) for learning vector quantization, the quantization error is minimized in terms of the equi-error criterion.
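The conservation property of the resource counters can be checked numerically. This sketch assumes the winner's counter gains one unit per presentation and decay means multiplying every counter by (1 − alpha); under those assumptions the total obeys S ← (S + 1)(1 − alpha) regardless of which node wins, converging to (1 − alpha)/alpha. The exact resource measure and decay form in the paper may differ:

```python
import random
random.seed(1)

def run_counters(n_nodes, alpha, steps):
    """Simulate GCS-style resource counters: the (randomly chosen)
    winner gains 1, then all counters decay by the factor (1 - alpha)."""
    counters = [0.0] * n_nodes
    for _ in range(steps):
        counters[random.randrange(n_nodes)] += 1.0
        counters = [(1 - alpha) * c for c in counters]
    return counters

cs = run_counters(n_nodes=5, alpha=0.05, steps=2000)
total = sum(cs)  # converges to (1 - alpha) / alpha = 19 here
```

The limit depends only on alpha, not on how wins are distributed among nodes, which is the "conservation" the abstract refers to.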

14.
Performance Analysis and Improvement of the Generalized LVQ Neural Network   (Cited: 4; self-citations: 1; other citations: 3)
This paper first analyzes, theoretically, the performance of the GLVQ-F algorithm of the generalized learning vector quantization (GLVQ) network. GLVQ-F overcomes, to some extent, the problems of the GLVQ algorithm; however, while it learns the winning prototype well, its performance on the other prototypes is highly unstable. The cause of this problem is analyzed, a criterion for choosing the prototype learning rates is proposed, and two improved algorithms are given. Finally, the algorithms are validated on the IRIS data; the improved algorithms are markedly more stable and effective than GLVQ-F.

15.
Fuzzy algorithms for learning vector quantization   (Cited: 14; self-citations: 0; other citations: 14)
This paper presents the development of fuzzy algorithms for learning vector quantization (FALVQ). These algorithms are derived by minimizing the weighted sum of the squared Euclidean distances between an input vector, which represents a feature vector, and the weight vectors of a competitive learning vector quantization (LVQ) network, which represent the prototypes. This formulation leads to competitive algorithms, which allow each input vector to attract all prototypes. The strength of attraction between each input and the prototypes is determined by a set of membership functions, which can be selected on the basis of specific criteria. A gradient-descent-based learning rule is derived for a general class of admissible membership functions which satisfy certain properties. The FALVQ 1, FALVQ 2, and FALVQ 3 families of algorithms are developed by selecting admissible membership functions with different properties. The proposed algorithms are tested and evaluated using the IRIS data set. The efficiency of the proposed algorithms is also illustrated by their use in codebook design required for image compression based on vector quantization.
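The "each input attracts all prototypes" idea can be sketched with a single update step in which every prototype moves toward the input, weighted by a distance-based membership. The membership form below is illustrative, not one of the actual FALVQ families:

```python
def falvq_like_step(prototypes, x, lr=0.1, beta=1.0):
    """Soft-competitive update: every prototype is attracted toward
    the input x, with strength given by a membership that decreases
    with squared distance (closer prototypes are pulled harder)."""
    d = [sum((p_i - x_i) ** 2 for p_i, x_i in zip(p, x))
         for p in prototypes]
    u = [1.0 / (1.0 + beta * di) for di in d]  # illustrative membership
    for p, ui in zip(prototypes, u):
        for i in range(len(p)):
            p[i] += lr * ui * (x[i] - p[i])
    return u

protos = [[0.0, 0.0], [4.0, 0.0]]
u = falvq_like_step(protos, [1.0, 0.0], lr=0.5)
```

Unlike winner-take-all LVQ, even the distant prototype moves a little, which mitigates dead prototypes at the cost of choosing a membership function well.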

16.
In this paper we introduce an integrative approach to color texture classification and recognition using a supervised learning framework. Our approach is based on generalized learning vector quantization (GLVQ), extended by an adaptive distance measure defined in the Fourier domain and by adaptive filter kernels based on Gabor filters. We evaluate the proposed technique on two sets of color texture images and compare the results with those achieved by other methods. The features and filter kernels learned by GLVQ improve classification accuracy, and they generalize much better to data previously unknown to the system.

17.
This paper proposes a general local learning framework that effectively alleviates the complexity of classifier design by means of the "divide and conquer" principle and ensemble methods. The framework consists of a quantization layer, which uses generalized learning vector quantization (GLVQ), and an ensemble layer, which uses multi-layer perceptrons (MLP). The proposed method is tested on public handwritten character data sets, on which it consistently obtains promising performance. In contrast to other methods, it is especially suitable for large-scale real-world classification problems, while it also scales down easily to small training sets without losing performance.

18.
The classification performance of nearest prototype classifiers largely relies on the prototype learning algorithm. The minimum classification error (MCE) method and the soft nearest prototype classifier (SNPC) method are two important algorithms using misclassification loss. This paper proposes a new prototype learning algorithm based on the conditional log-likelihood loss (CLL), built on the discriminative model called log-likelihood of margin (LOGM). A regularization term is added to avoid over-fitting in training and to maximize the hypothesis margin. The CLL in the LOGM algorithm is a convex function of the margin and so shows better convergence than the MCE. In addition, we show the effects of distance metric learning with both prototype-dependent and prototype-independent weighting. Our empirical study on benchmark datasets demonstrates that the LOGM algorithm yields higher classification accuracies than MCE, generalized learning vector quantization (GLVQ), the soft nearest prototype classifier (SNPC) and robust soft learning vector quantization (RSLVQ); moreover, LOGM with prototype-dependent weighting achieves accuracies comparable to the support vector machine (SVM) classifier.
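The margin in this family of methods is typically the GLVQ relative distance difference between the nearest prototype of the correct class and the nearest prototype of any other class; a log-likelihood loss is then a logistic function of that margin. A sketch under those assumptions (the paper's exact scaling and regularization are omitted):

```python
import math

def margin_loss(d_same, d_other, xi=1.0):
    """GLVQ-style relative margin and an illustrative conditional
    log-likelihood loss, -log(sigmoid(xi * margin)). A positive margin
    means the correct-class prototype is closer (sample classified
    correctly); the loss then approaches zero."""
    margin = (d_other - d_same) / (d_other + d_same)
    loss = -math.log(1.0 / (1.0 + math.exp(-xi * margin)))
    return margin, loss

m_good, l_good = margin_loss(d_same=1.0, d_other=9.0)  # correct side
m_bad, l_bad = margin_loss(d_same=9.0, d_other=1.0)    # wrong side
```

The loss is smooth and monotone in the margin, which is what makes gradient-based prototype updates well behaved compared with step-like misclassification losses.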

19.
In this article, an iterative procedure is proposed for the training process of the probabilistic neural network (PNN). In each stage of this procedure, the Q(0)-learning algorithm is utilized to adapt the PNN smoothing parameter (σ). Four classes of PNN models are considered in this study. In the first, simplest model, the smoothing parameter is a scalar; in the second model, σ is a vector whose elements are computed with respect to the class index; the third model has a smoothing-parameter vector whose components are determined per input attribute; finally, the last and most complex of the analyzed networks uses a matrix of smoothing parameters in which each element depends on both the class and the input-feature index. The main idea of the presented approach is the appropriate update of the smoothing-parameter values according to the Q(0)-learning algorithm. The proposed procedure is verified on six repository data sets. The prediction ability of the algorithm is assessed by computing the test accuracy on 10%, 20%, 30%, and 40% of the examples drawn randomly from each input data set. The results are compared with the test accuracy obtained by PNNs trained using the conjugate gradient procedure, the support vector machine algorithm, a gene expression programming classifier, the k-means method, a multilayer perceptron, a radial basis function neural network and a learning vector quantization neural network. It is shown that the presented procedure can automatically adapt the smoothing parameter of each of the considered PNN models and thus constitutes an alternative training method. A PNN trained by the Q(0)-learning-based approach is a classifier that can be counted among the top models for data classification problems.
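The simplest of the four PNN models, a single scalar smoothing parameter σ shared by all classes and features, amounts to a Gaussian kernel density estimate per class followed by an argmax. A minimal sketch with illustrative data (the Q(0)-learning adaptation of σ is not shown):

```python
import math

def pnn_classify(x, train, sigma):
    """Probabilistic neural network with one scalar smoothing parameter:
    sum a Gaussian kernel over each class's training samples, normalize
    by class size, and return the class with the highest density."""
    scores, counts = {}, {}
    for xi, yi in train:
        d2 = sum((a - b) ** 2 for a, b in zip(x, xi))
        scores[yi] = scores.get(yi, 0.0) + math.exp(-d2 / (2 * sigma ** 2))
        counts[yi] = counts.get(yi, 0) + 1
    return max(scores, key=lambda c: scores[c] / counts[c])

train = [([0.0, 0.0], "a"), ([0.2, 0.1], "a"),
         ([3.0, 3.0], "b"), ([3.1, 2.9], "b")]
label = pnn_classify([0.1, 0.0], train, sigma=0.5)
```

The whole point of the paper is that classification quality hinges on σ: too small and the density spikes at training points, too large and the classes blur, hence the learned per-class or per-feature variants.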

20.
We introduce a fuzzy rough granular neural network (FRGNN) model based on the multilayer perceptron using a back-propagation algorithm for the fuzzy classification of patterns. We provide the development strategy of the network mainly based upon the input vector, initial connection weights determined by fuzzy rough set theoretic concepts, and the target vector. While the input vector is described in terms of fuzzy granules, the target vector is defined in terms of fuzzy class membership values and zeros. Crude domain knowledge about the initial data is represented in the form of a decision table, which is divided into subtables corresponding to different classes. The data in each decision table is converted into granular form. The syntax of these decision tables automatically determines the appropriate number of hidden nodes, while the dependency factors from all the decision tables are used as initial weights. The dependency factor of each attribute and the average degree of the dependency factor of all the attributes with respect to decision classes are considered as initial connection weights between the nodes of the input layer and the hidden layer, and the hidden layer and the output layer, respectively. The effectiveness of the proposed FRGNN is demonstrated on several real-life data sets.  相似文献   
