Similar Documents
20 similar documents found (search time: 11 ms)
1.
Parameter Incremental Learning Algorithm for Neural Networks   (cited by 1: 0 self, 1 other)
In this paper, a novel stochastic (online) training algorithm for neural networks, named the parameter incremental learning (PIL) algorithm, is proposed and developed. The main idea of the PIL strategy is that the learning algorithm should not only adapt to a newly presented input-output training pattern by adjusting its parameters, but also preserve the prior results. A general PIL algorithm for feedforward neural networks is accordingly presented as the first-order approximate solution to an optimization problem whose performance index combines proper measures of preservation and adaptation. PIL algorithms for the multilayer perceptron (MLP) are subsequently derived. Numerical studies show that, for all three benchmark problems used in this paper, the PIL algorithm for the MLP is measurably superior to the standard online backpropagation (BP) algorithm and the stochastic diagonal Levenberg-Marquardt (SDLM) algorithm in terms of convergence speed and accuracy. Other appealing features of the PIL algorithm are that it is computationally as simple as, and as easy to use as, the BP algorithm. It can therefore be applied, with better performance, in any situation where the standard online BP algorithm is applicable.
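The preservation-plus-adaptation trade-off can be illustrated on a toy case. The sketch below is a hedged illustration, not the paper's MLP derivation: for a single linear neuron it minimizes a weighted sum of a preservation term and the new pattern's squared error, which has a closed-form first-order step. The target mapping, the value of `lam`, and the data stream are all invented for the demo.

```python
# Toy sketch of the PIL idea on a single linear neuron y = w.x: minimizing
#   lam * ||dw||^2  (preservation)  +  (t - (w + dw).x)^2  (adaptation)
# to first order gives the closed-form step dw = e*x / (lam + ||x||^2).

def pil_update(w, x, t, lam=0.1):
    """One PIL-style update; lam weighs preservation against adaptation."""
    e = t - sum(wi * xi for wi, xi in zip(w, x))   # error on the new pattern
    gain = e / (lam + sum(xi * xi for xi in x))
    return [wi + gain * xi for wi, xi in zip(w, x)]

# Invented demo: learn the mapping t = 2*x0 - x1 from a stream of patterns.
w = [0.0, 0.0]
patterns = [([1.0, 0.0], 2.0), ([0.0, 1.0], -1.0), ([1.0, 1.0], 1.0)] * 50
for x, t in patterns:
    w = pil_update(w, x, t)
```

Larger `lam` makes each step more conservative (stronger preservation of the prior weights) at the cost of slower adaptation to the new pattern.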

2.
To obtain a fast, accurate, and compact fuzzy neural network, an incremental sequential-learning fuzzy neural network (ISL-FNN) is proposed. A pruning strategy is introduced into the neuron generation process, and the error reduction ratio is used to define the influence of the input data on the system output and applied in the neuron growth process. In the parameter learning phase, the parameters of all hidden neurons, whether newly added or already present, are updated with the extended Kalman filter. Simulation experiments show that the algorithm achieves performance comparable to or better than other algorithms while producing a more compact structure.

3.
A Learning Algorithm for Evolving Cascade Neural Networks   (cited by 4: 0 self, 4 others)
A new learning algorithm for Evolving Cascade Neural Networks (ECNNs) is described. An ECNN starts learning with one input node and then evolves by adding new inputs as well as new hidden neurons. The trained ECNN has a nearly minimal number of inputs, hidden neurons, and connections. The algorithm was successfully applied to classifying artifacts and normal segments in clinical electroencephalograms (EEGs). The EEG segments were visually labeled by an EEG expert. The trained ECNN correctly classified 96.69% of the testing segments, slightly better than a standard fully connected neural network. This revised version was published online in June 2006 with corrections to the cover date.

4.
A Fast Incremental Learning Algorithm for Support Vector Machines   (cited by 16: 0 self, 16 others)
孔锐  张冰 《控制与决策》2005,20(10):1129-1132
The classical support vector machine (SVM) algorithm must solve a convex quadratic programming problem to find the optimal separating hyperplane. When the number of training samples is large the algorithm is slow, and whenever new samples arrive all training samples must be retrained, which wastes considerable time. To address this, a new fast incremental SVM learning algorithm is proposed. It first selects the boundary vectors that may become support vectors, reducing the number of samples involved in training, and then performs incremental learning. The learning algorithm is an iterative process that requires no optimization solver. Experiments show that the algorithm preserves the accuracy and good generalization ability of the learner, learns faster than the classical SVM algorithm, and supports incremental learning.
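The boundary-vector preselection step can be sketched as follows. This is an illustrative guess at the idea rather than the paper's actual selection rule: here "boundary" samples are simply those closest to the opposite class, and the keep ratio is invented.

```python
import math

# Sketch of boundary-vector preselection only (the SVM training itself is
# not shown): support vectors lie near the class boundary, so keep the
# fraction of each class closest to the other class.

def boundary_vectors(pos, neg, keep=0.5):
    def nearest(x, other):
        return min(math.dist(x, y) for y in other)
    def pick(cls, other):
        ranked = sorted(cls, key=lambda x: nearest(x, other))
        return ranked[:max(1, int(keep * len(cls)))]
    return pick(pos, neg), pick(neg, pos)

pos = [[1.0, 1.0], [1.2, 0.9], [3.0, 3.0]]     # [3, 3] is far from the boundary
neg = [[-1.0, -1.0], [-0.2, 0.1], [-3.0, -3.0]]
bp, bn = boundary_vectors(pos, neg, keep=2 / 3)
```

Only `bp` and `bn` would then be passed to the (slower) SVM training step, which is where the speed-up comes from.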

5.
Fast Learning Algorithms for Feedforward Neural Networks   (cited by 7: 0 self, 7 others)
To improve the training speed of multilayer feedforward neural networks (MLFNNs), we propose and explore two new fast backpropagation (BP) algorithms obtained: (1) by changing the error function, using the exponential attenuation (bell impulse) function and the Fourier kernel function as alternatives; and (2) by introducing a hybrid conjugate-gradient algorithm for global optimization with a dynamic learning rate, to overcome the conventional BP problems of getting stuck in local minima and slow convergence. Our experimental results demonstrate the effectiveness of the modified error functions, since training is faster than with existing fast methods. In addition, on real speech data our hybrid algorithm achieves a higher recognition rate than the Polak-Ribière conjugate-gradient and conventional BP algorithms, and requires less training time, is less complicated, and is more robust than the Fletcher-Reeves conjugate-gradient and conventional BP algorithms.

6.
王军平  陈全世 《信息与控制》2004,33(4):426-428,433
When a minimum-variance-type error cost function is used for parameter learning of a system with noisy inputs, the parameters do not converge to their true values; an error cost function that incorporates the noise variance solves this problem. This paper extends that cost function to multiple-input single-output systems and introduces it into the parameter learning of fuzzy logic systems. The noise variances of the input and output data are also obtained by learning, so repeated measurements are unnecessary. Comparative simulations verify the effectiveness of the method.

7.
In recent years, interest in studying evolutionary algorithms (EAs) for dynamic optimization problems (DOPs) has grown because of their importance in real-world applications. Several approaches, such as memory and multiple-population schemes, have been developed for EAs to address dynamic problems. This paper investigates the application of the memory scheme to population-based incremental learning (PBIL) algorithms, a class of EAs, for DOPs. A PBIL-specific associative memory scheme, which stores the best solutions together with the corresponding environmental information, is investigated to improve adaptability in dynamic environments. The interactions between the memory scheme and the random immigrants, multipopulation, and restart schemes for PBILs in dynamic environments are also investigated. To better test the performance of memory schemes for PBILs and other EAs in dynamic environments, the paper also proposes a dynamic environment generator that can systematically generate dynamic environments of varying difficulty with respect to memory schemes. Using this generator, a series of dynamic environments are generated and experiments are carried out to compare the performance of the investigated algorithms. The experimental results show that the proposed memory scheme is efficient for PBILs in dynamic environments, and also indicate that different interactions exist between the memory scheme and the random immigrants and multipopulation schemes in different dynamic environments.
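A minimal PBIL core, without the memory scheme the paper adds, can be sketched as follows; the OneMax fitness, learning rate, and population size are illustrative choices, not the paper's settings.

```python
import random

def pbil(fitness, n_bits, pop=20, lr=0.1, gens=100, seed=0):
    """Minimal PBIL: nudge a probability vector toward the best sample."""
    rng = random.Random(seed)
    p = [0.5] * n_bits                       # start with maximal uncertainty
    for _ in range(gens):
        # Sample a population from the current probability vector.
        samples = [[1 if rng.random() < pi else 0 for pi in p]
                   for _ in range(pop)]
        best = max(samples, key=fitness)     # best solution this generation
        # Learn: move each probability toward the best sample's bit.
        p = [(1 - lr) * pi + lr * bi for pi, bi in zip(p, best)]
    return p

# Invented demo problem: OneMax (maximize the number of 1 bits).
p = pbil(fitness=sum, n_bits=16)
```

The paper's memory scheme would periodically store the best solution along with its probability vector, so that when the environment changes back, the stored pair can be re-injected instead of relearning from scratch.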

8.
Incremental learning is the process of learning new information on top of previous learning results to acquire new knowledge, while preserving those previous results as far as possible. This paper first reviews covering-based constructive neural networks and then proposes a fast incremental learning algorithm built on them. Starting from the classification ability of the existing network, the algorithm further improves it through fast incremental learning of new samples. Experimental results show that the algorithm is effective.

9.
A Fast Iterative Algorithm for Support Vector Machines Suited to Incremental Learning   (cited by 5: 0 self, 5 others)
安金龙  王正欧 《计算机应用》2003,23(10):12-14,17
When the number of samples is too large to fit in computer memory, conventional SVM methods lose the ability to learn. To solve this problem and speed up SVM training, this paper analyzes the essential characteristics of SVM classification and, exploiting the fact that SVM classification depends only on the support vectors, proposes a fast iterative algorithm suited to incremental SVM learning (PFI-SVM). It improves the training speed of SVMs and their ability to learn from large sample sets without affecting classification ability at all, and achieves good results.

10.
贾文臣  叶世伟 《计算机工程》2005,31(10):142-144,176
The proposed algorithm constructs an optimization objective using Young's inequality from the theory of convex conjugate functions. This objective is convex with respect to both the weights and the hidden-layer outputs, so no local minima exist. The hidden-layer outputs are first optimized and updated as variables, and the weights on either side of the hidden layer are then computed quickly. Numerical experiments show that the algorithm is simple, converges fast, generalizes well, and greatly reduces the learning error.
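For reference, the Young inequality the abstract invokes is the standard statement from convex analysis (quoted as background, not from the paper itself): for a convex function $f$ with convex conjugate $f^*$,

```latex
% Young's inequality for a convex function f and its conjugate
%   f^*(y) = \sup_x \,\bigl(xy - f(x)\bigr):
\[
  xy \;\le\; f(x) + f^{*}(y),
  \qquad \text{with equality iff } y \in \partial f(x).
\]
```

Building the objective from such a conjugate pair is what makes it jointly convex in the weights and the hidden-layer outputs, which is the source of the "no local minima" claim.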

11.
In this paper, we introduce a new algorithm for incremental learning of a specific form of Takagi–Sugeno fuzzy systems proposed by Wang and Mendel in 1992. The new data-driven online learning approach includes not only the adaptation of linear parameters appearing in the rule consequents, but also the incremental learning of premise parameters appearing in the membership functions (fuzzy sets), together with a rule learning strategy in sample mode. A modified version of vector quantization is exploited for rule evolution and an incremental learning of the rules' premise parts. The modifications include an automatic generation of new clusters based on the nature, distribution, and quality of new data and an alternative strategy for selecting the winning cluster (rule) in each incremental learning step. Antecedent and consequent learning are connected in a stable manner, meaning that a convergence toward the optimal parameter set in the least-squares sense can be achieved. An evaluation and a comparison to conventional batch methods based on static and dynamic process models are presented for high-dimensional data recorded at engine test benches and at rolling mills. For the latter, the obtained data-driven fuzzy models are even compared with an analytical physical model. Furthermore, a comparison with other evolving fuzzy systems approaches is carried out based on nonlinear dynamic system identification tasks and a three-input nonlinear function approximation example.
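The modified vector-quantization step for rule evolution might look roughly like the sketch below; the vigilance threshold, learning rate, and new-cluster rule are invented stand-ins for the paper's criteria based on the nature, distribution, and quality of the data.

```python
import math

# Sketch of an evolving vector-quantization step: pick the winning cluster,
# but spawn a new cluster (rule premise) when the sample is too far from all
# existing centers.

def vq_step(centers, x, vigilance=1.0, lr=0.1):
    if centers:
        win = min(centers, key=lambda c: math.dist(c, x))
        if math.dist(win, x) <= vigilance:
            # Standard VQ update: move the winner toward the sample.
            win[:] = [c + lr * (xi - c) for c, xi in zip(win, x)]
            return centers
    centers.append(list(x))          # evolve: create a new cluster (rule)
    return centers

centers = []
for x in [[0.0, 0.0], [0.1, 0.1], [5.0, 5.0], [5.2, 4.9]]:
    vq_step(centers, x)
```

In the full method, each cluster would carry a rule premise (membership functions centered on the cluster) whose consequent parameters are then fitted by recursive least squares.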

12.
To improve training speed and classification accuracy on large-scale, high-dimensional data, a fast incremental SVM learning method based on locality-sensitive hashing (LSH) is proposed. Exploiting LSH's ability to find similar data quickly, the algorithm builds on SVM to screen out the samples in the increment that may become support vectors, and then uses these samples together with the existing support vectors as the basis for subsequent training. The algorithm was validated on several datasets. Experiments show that on large-scale incremental data, the proposed fast incremental SVM learning method effectively speeds up training while maintaining good accuracy.
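The LSH ingredient can be sketched with signed random projections, one common LSH family; the paper may use a different family, and all names and parameters here are invented.

```python
import random

# Sketch of the locality-sensitive-hashing step only (the SVM part is
# omitted): signed random projections bucket vectors so that nearby vectors
# tend to share hash bits.

def make_hash(dim, n_planes, seed=0):
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_planes)]
    def h(x):
        # One bit per random hyperplane: which side of the plane x falls on.
        return tuple(int(sum(p * xi for p, xi in zip(plane, x)) >= 0)
                     for plane in planes)
    return h

h = make_hash(dim=3, n_planes=8)
a, b, c = [1.0, 0.9, 1.1], [1.1, 1.0, 0.9], [-5.0, 4.0, -3.0]
same = sum(u == v for u, v in zip(h(a), h(b)))   # a and b are close
diff = sum(u == v for u, v in zip(h(a), h(c)))   # c points elsewhere
```

Samples in the increment that hash near existing support vectors are the ones screened in as support-vector candidates; the rest can be skipped cheaply.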

13.
This survey article considers methods and algorithms for fast estimation of data distance/similarity measures from formed real-valued vectors of small dimension. The methods do not use learning and mainly use random projection and sampling. Initial data are mainly high-dimensional vectors with different measures of distance (Euclidean, Manhattan, statistical, etc.) and similarity (dot product, etc.). Vector representations of non-vector data are also considered. The resultant vectors can also be used in similarity search algorithms, machine learning, etc.
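One of the surveyed techniques, distance estimation via random projection, can be sketched as follows; the dimensions and tolerance below are illustrative choices.

```python
import math
import random

# Sketch of distance estimation by random projection: project
# high-dimensional vectors to k dimensions with a random Gaussian matrix
# scaled by 1/sqrt(k); Euclidean distances are approximately preserved.

def project(x, R, k):
    return [sum(r * xi for r, xi in zip(row, x)) / math.sqrt(k) for row in R]

rng = random.Random(1)
d, k = 200, 64
R = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(k)]

x = [rng.random() for _ in range(d)]
y = [rng.random() for _ in range(d)]
true = math.dist(x, y)                                # exact distance in R^d
est = math.dist(project(x, R, k), project(y, R, k))   # estimate in R^k
```

The estimation error shrinks roughly like 1/sqrt(k), so the projected dimension trades accuracy against the cost of each distance computation.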

14.
This paper presents a novel heuristic global learning (HER-GBL) algorithm for multilayer neural networks. The algorithm is based on the least squares method, to maintain a fast convergence speed, and on penalized optimization, to solve the problem of local minima. The penalty term, defined as a Gaussian-type function of the weights, provides an uphill force for escaping local minima. As a result, training performance is dramatically improved. The proposed HER-GBL algorithm yields excellent results in terms of convergence speed, avoidance of local minima, and quality of solution.

15.
Nonlinear quantum processing allows an optimization problem to be solved by exhaustive search over all of its possible solutions. Hence, it can advantageously replace algorithms for learning from a training set. To pursue this possibility for neurofuzzy networks, we propose in this paper to tailor their architectures to the requirements of quantum processing. In particular, superposition is introduced to achieve parallelism, and entanglement to associate the network performance with each solution present in the superposition. Two aspects of the proposed method are considered in detail: the binary structure of the membership functions and fuzzy reasoning, and the use of a particular nonlinear quantum algorithm for extracting the optimal neurofuzzy network by exhaustive search.

16.
This survey paper considers index structures for fast similarity search for objects represented by real-valued vectors. Index structures based on locality-sensitive hashing and their modifications are discussed. The ideas of specific algorithms, including recently proposed ones, are stated. Their interrelations and some theoretical aspects are discussed.

17.
祝美龙  陶亮 《微机发展》2007,17(10):50-53
The Gabor transform is regarded as a very useful tool in many fields, but its high computational complexity limits real-time applications. To reduce this complexity, a DCT-based real-valued discrete Gabor transform was previously proposed. This paper reviews that transform and, to compute it efficiently and quickly, proposes block time-recursive algorithms both for computing the 1-D real-valued discrete Gabor transform coefficients under critical sampling and for reconstructing the original signal from those coefficients. An implementation of the algorithms using a parallel lattice structure is studied, and their computational complexity and advantages are discussed and compared.

18.
To address the fact that the standard support vector machine does not support incremental learning in P2P network traffic identification, a fast incremental SVM learning method suited to this task is proposed. After separately clustering the newly added positive and negative samples that violate the Karush-Kuhn-Tucker conditions, the cluster centers are used to train an SVM, producing a transitional hyperplane close to the optimal separating hyperplane of incremental learning. With this hyperplane as a reference, non-support vectors and support vectors in the initial training set are interconverted, and a new sample set is generated to realize incremental SVM learning. Theoretical analysis and experimental results show that the method effectively simplifies the incremental training set and markedly shortens both the incremental learning time and the identification time of the SVM, without reducing P2P traffic identification accuracy.

19.
This paper studies content-based node clustering in P2P networks. Queries based on exact keyword matching of file names ignore textual semantics and content similarity. If node clusters can be built according to the similarity of the content that nodes publish, and queries are carried out within a cluster, query efficiency will improve. This paper proposes an incremental-learning-based node clustering method in which an interest crawler agent computes a node score used to decide whether a node may join a node cluster. Experiments show that building node clusters effectively improves query efficiency in P2P networks.

20.
One of the open problems in neural network research is how to automatically determine network architectures for given applications. In this brief, we propose a simple and efficient approach to automatically determine the number of hidden nodes in generalized single-hidden-layer feedforward networks (SLFNs), which need not be neuron-like. This approach, referred to as error-minimized extreme learning machine (EM-ELM), can add random hidden nodes to SLFNs one by one or group by group (with varying group size). During the growth of the network, the output weights are updated incrementally. The convergence of this approach is also proved in this brief. Simulation results demonstrate and verify that our new approach is much faster than other sequential/incremental/growing algorithms, with good generalization performance.
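The growth idea of an EM-ELM-style network can be sketched as below. For brevity the output weights are re-solved from scratch at each size, whereas the paper's contribution is precisely to update them incrementally; a tiny ridge term stabilizes the normal equations, and all sizes and targets are invented.

```python
import math
import random

def solve(A, b):
    """Solve the square system A x = b by Gaussian elimination with pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [mr - f * mi for mr, mi in zip(M[r], M[i])]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def elm_train_error(xs, ts, n_hidden, seed=0):
    """Fit an ELM with n_hidden random sigmoid nodes; return training MSE."""
    rng = random.Random(seed)
    nodes = [(rng.uniform(-2, 2), rng.uniform(-2, 2)) for _ in range(n_hidden)]
    H = [[1.0 / (1.0 + math.exp(-(a * x + b))) for a, b in nodes] for x in xs]
    # Output weights via regularized normal equations (H^T H + eps I) beta = H^T t.
    HtH = [[sum(row[i] * row[j] for row in H) + (1e-8 if i == j else 0.0)
            for j in range(n_hidden)] for i in range(n_hidden)]
    Htt = [sum(row[i] * t for row, t in zip(H, ts)) for i in range(n_hidden)]
    beta = solve(HtH, Htt)
    return sum((sum(h * w for h, w in zip(row, beta)) - t) ** 2
               for row, t in zip(H, ts)) / len(ts)

xs = [i / 20 for i in range(40)]
ts = [math.sin(3 * x) for x in xs]
# Growing the hidden layer (same seed, so the small net's random nodes are a
# prefix of the large net's) should drive the training error down.
err_small = elm_train_error(xs, ts, 2)
err_large = elm_train_error(xs, ts, 12)
```

In EM-ELM proper, adding nodes appends columns to H, and the output weights are obtained by a rank-update of the pseudoinverse rather than by re-solving the whole system as done here.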


