Found 20 similar documents; search took 11 ms.
1.
Parameter Incremental Learning Algorithm for Neural Networks    Total cited: 1 (self-citations: 0, other: 1)
In this paper, a novel stochastic (or online) training algorithm for neural networks, named the parameter incremental learning (PIL) algorithm, is proposed and developed. The main idea of the PIL strategy is that the learning algorithm should not only adapt to the newly presented input-output training pattern by adjusting parameters, but also preserve the prior results. A general PIL algorithm for feedforward neural networks is accordingly presented as the first-order approximate solution to an optimization problem, where the performance index is a combination of proper measures of preservation and adaptation. The PIL algorithms for the multilayer perceptron (MLP) are subsequently derived. Numerical studies show that, for all three benchmark problems used in this paper, the PIL algorithm for the MLP is measurably superior to the standard online backpropagation (BP) algorithm and the stochastic diagonal Levenberg-Marquardt (SDLM) algorithm in terms of convergence speed and accuracy. Another appealing feature of the PIL algorithm is that it is computationally as simple, and as easy to use, as the BP algorithm. It can therefore be applied, with better performance, to any situation where the standard online BP algorithm is applicable.
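The abstract characterizes the PIL step as the first-order approximate solution to an objective combining a preservation measure and an adaptation measure. As an illustration only (the paper's exact performance index and network form are not reproduced here), a minimal sketch for a linear model f(x; w) = w·x: linearizing f at the old weights and minimizing mu·||w − w_old||² + (y − f(x; w))² gives the closed-form step below. The function name `pil_update` and the scalar weighting `mu` are assumptions.

```python
import random

def pil_update(w, x, y, mu):
    """One PIL-style step for a linear model f(x; w) = w . x.

    Minimizes mu * ||w - w_old||^2 + (y - f(x; w))^2 with f linearized
    at w_old; the closed-form solution is w_old + x * e / (mu + x . x).
    """
    e = y - sum(wi * xi for wi, xi in zip(w, x))      # prediction error
    gain = e / (mu + sum(xi * xi for xi in x))        # first-order gain
    return [wi + gain * xi for wi, xi in zip(w, x)]

# Learn a 3-weight linear target from streaming samples.
random.seed(0)
target = [2.0, -1.0, 0.5]
w = [0.0, 0.0, 0.0]
for _ in range(2000):
    x = [random.uniform(-1, 1) for _ in range(3)]
    y = sum(ti * xi for ti, xi in zip(target, x))
    w = pil_update(w, x, y, mu=0.1)
```

With a fixed target and streaming samples, the weights converge to the target while each step stays close to the previous weights, which is the preservation/adaptation trade-off the abstract describes.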
2.
To obtain a fast, accurate, and compact fuzzy neural network, an incremental sequential learning fuzzy neural network (ISL-FNN) is proposed. A pruning strategy is introduced into the neuron generation process, and the error reduction ratio is used to define the influence of the input data on the system output and is applied during neuron growth. In the parameter learning phase, the parameters of all hidden-layer neurons, whether newly added or existing, are updated with an extended Kalman filter algorithm. Simulation experiments show that the algorithm achieves performance comparable to or better than other algorithms while producing a more compact structure.
3.
A Learning Algorithm for Evolving Cascade Neural Networks    Total cited: 4 (self-citations: 0, other: 4)
A new learning algorithm for Evolving Cascade Neural Networks (ECNNs) is described. An ECNN starts learning with one input node and then evolves by adding new inputs as well as new hidden neurons. The trained ECNN has a nearly minimal number of input and hidden neurons as well as connections. The algorithm was successfully applied to classify artifacts and normal segments in clinical electroencephalograms (EEGs). The EEG segments were visually labeled by an EEG viewer. The trained ECNN correctly classified 96.69% of the testing segments, slightly better than a standard fully connected neural network.
This revised version was published online in June 2006 with corrections to the Cover Date.
4.
A Fast Incremental Learning Algorithm for Support Vector Machines    Total cited: 16 (self-citations: 0, other: 16)
The classical support vector machine (SVM) algorithm must solve a convex quadratic programming problem to find the optimal separating hyperplane. When the number of training samples is large, the algorithm is slow, and whenever new samples arrive, all training samples must be retrained, which wastes a great deal of time. A new fast incremental SVM learning algorithm is therefore proposed. The algorithm first selects the boundary vectors that are likely to become support vectors, reducing the number of samples involved in training, and then performs incremental learning. The learning procedure is iterative and does not require solving an optimization problem. Experiments show that the algorithm preserves the accuracy and good generalization ability of the learner while training faster than the classical SVM algorithm, and that it supports incremental learning.
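The abstract's first stage, selecting "boundary vectors" likely to become support vectors, can be sketched as a margin filter against the current hyperplane. This is a hypothetical concrete form of the selection rule, assuming a linear SVM with functional margin y(w·x + b); the threshold 1 + eps and the function name are illustrative assumptions.

```python
def select_boundary_vectors(samples, labels, w, b, eps=0.2):
    """Keep only samples near the current separating hyperplane.

    A sample x is a boundary-vector candidate when its functional
    margin y * (w . x + b) lies below 1 + eps, i.e. it is close
    enough to the margin to become a support vector after retraining.
    """
    kept = []
    for x, y in zip(samples, labels):
        margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
        if margin < 1 + eps:
            kept.append((x, y))
    return kept

# Hyperplane x1 = 0 (w = (1, 0), b = 0); only points with a small
# functional margin survive the filter.
samples = [(0.5, 0.0), (3.0, 1.0), (-0.8, 2.0), (-4.0, 0.0)]
labels = [1, 1, -1, -1]
kept = select_boundary_vectors(samples, labels, w=(1.0, 0.0), b=0.0)
```

Only the two points near the hyperplane survive; the far, safely classified points are dropped before the incremental retraining step.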
5.
Fast Learning Algorithms for Feedforward Neural Networks    Total cited: 7 (self-citations: 0, other: 7)
To improve the training speed of multilayer feedforward neural networks (MLFNN), we propose and explore two new fast backpropagation (BP) algorithms, obtained (1) by changing the error function, using the exponential attenuation (or bell impulse) function and the Fourier kernel function as alternatives; and (2) by introducing a hybrid conjugate-gradient algorithm for global optimization with a dynamic learning rate, to overcome the conventional BP problems of getting stuck in local minima and slow convergence. Our experimental results demonstrate the effectiveness of the modified error functions, since training is faster than with existing fast methods. In addition, on real speech data our hybrid algorithm has a higher recognition rate than the Polak-Ribière conjugate-gradient and conventional BP algorithms, and has less training time, less complication, and stronger robustness than the Fletcher-Reeves conjugate-gradient and conventional BP algorithms.
6.
When a minimum-variance error cost function is used for parameter learning of systems with noisy inputs, the parameters cannot converge to their true values; an error cost function that incorporates the noise variance solves this problem. This paper extends that cost function to multiple-input single-output systems and introduces it into the parameter learning of fuzzy logic systems. The noise variances of the input and output data are also obtained by learning, so repeated measurements are unnecessary. Comparative simulations verify the effectiveness of the method.
7.
Population-Based Incremental Learning With Associative Memory for Dynamic Environments    Total cited: 9 (self-citations: 0, other: 9)
Shengxiang Yang, Xin Yao. IEEE Transactions on Evolutionary Computation, 2008, 12(5):542-561
In recent years, interest in studying evolutionary algorithms (EAs) for dynamic optimization problems (DOPs) has grown due to their importance in real-world applications. Several approaches, such as memory and multiple-population schemes, have been developed for EAs to address dynamic problems. This paper investigates the application of the memory scheme to population-based incremental learning (PBIL) algorithms, a class of EAs, for DOPs. A PBIL-specific associative memory scheme, which stores the best solutions together with the corresponding environmental information, is investigated to improve adaptability in dynamic environments. The interactions between the memory scheme and the random immigrants, multipopulation, and restart schemes for PBILs in dynamic environments are investigated. To better test the performance of memory schemes for PBILs and other EAs in dynamic environments, this paper also proposes a dynamic environment generator that can systematically generate dynamic environments of different difficulty with respect to memory schemes. Using this generator, a series of dynamic environments is generated, and experiments are carried out to compare the performance of the investigated algorithms. The experimental results show that the proposed memory scheme is efficient for PBILs in dynamic environments and that different interactions exist between the memory scheme and the random immigrants and multipopulation schemes in different dynamic environments.
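For readers unfamiliar with PBIL itself (independent of the paper's associative memory extension), a minimal sketch of the basic algorithm on the OneMax problem: sample a population from a probability vector, then shift the vector toward the generation's best sample. The problem choice, parameter values, and clamping bounds are illustrative assumptions.

```python
import random

def pbil_onemax(n=16, pop=30, gens=150, lr=0.1, seed=1):
    """Minimal PBIL on OneMax: evolve a probability vector toward
    the bit pattern that maximizes the number of ones."""
    rng = random.Random(seed)
    p = [0.5] * n                       # probability of bit i being 1
    best, best_fit = None, -1
    for _ in range(gens):
        population = [[1 if rng.random() < pi else 0 for pi in p]
                      for _ in range(pop)]
        winner = max(population, key=sum)
        if sum(winner) > best_fit:
            best, best_fit = winner, sum(winner)
        # Shift the probability vector toward the generation's winner,
        # clamped away from 0 and 1 to keep exploring.
        p = [min(0.95, max(0.05, (1 - lr) * pi + lr * bi))
             for pi, bi in zip(p, winner)]
    return best, best_fit

best, fit = pbil_onemax()
```

The memory scheme studied in the paper would additionally store good probability vectors and solutions together with environmental information and reuse them when the environment changes.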
8.
Zhang Chunping. Computer & Digital Engineering, 2012, 40(10):31-33, 39
Incremental learning is the process of learning new information on the basis of previous learning results in order to acquire new knowledge, while preserving the existing results as much as possible. This paper first reviews covering-based constructive neural networks and then proposes a fast incremental learning algorithm based on them. Building on the classification ability of the existing network, the algorithm further improves that ability through fast incremental learning of new samples. Experimental results show that the algorithm is effective.
9.
A Fast Iterative Algorithm for Support Vector Machines Suited to Incremental Learning    Total cited: 5 (self-citations: 0, other: 5)
When the number of samples is too large to fit in computer memory, the conventional support vector machine method loses its ability to learn. To solve this problem and increase SVM training speed, this paper analyzes the essential characteristics of SVM classification. Based on the fact that SVM classification depends only on the support vectors, a fast iterative algorithm suited to incremental SVM learning (PFI-SVM) is proposed. It increases SVM training speed and the ability to learn from large sample sets without affecting the SVM's classification ability, and it achieves good results.
10.
The proposed algorithm uses the Young inequality, a property of conjugate convex functions, to construct the optimization objective. This objective is convex with respect to both the weights and the hidden-layer outputs, so no local minima exist. The hidden-layer outputs are first optimized and updated as variables, and the weights on both sides of the hidden layer are then computed quickly. Numerical experiments show that the algorithm is simple, converges quickly, generalizes well, and greatly reduces the learning error.
11.
IEEE Transactions on Fuzzy Systems, 2008, 16(6):1393-1410
12.
13.
This survey article considers methods and algorithms for fast estimation of data distance/similarity measures from real-valued vectors of small dimension. The methods do not use learning, relying mainly on random projection and sampling. The initial data are mainly high-dimensional vectors with various distance measures (Euclidean, Manhattan, statistical, etc.) and similarity measures (dot product, etc.). Vector representations of non-vector data are also considered. The resulting vectors can also be used in similarity-search algorithms, machine learning, etc.
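One of the learning-free techniques the survey names, random projection, can be sketched in a few lines: multiply by a Gaussian matrix scaled by 1/sqrt(k), and Euclidean distances are approximately preserved (the Johnson-Lindenstrauss property). The dimensions and the accuracy band asserted here are illustrative.

```python
import math
import random

def random_projection(vecs, k, seed=0):
    """Project d-dimensional vectors to k dimensions with a Gaussian
    random matrix scaled by 1/sqrt(k), so Euclidean distances are
    preserved in expectation (Johnson-Lindenstrauss)."""
    rng = random.Random(seed)
    d = len(vecs[0])
    R = [[rng.gauss(0, 1) / math.sqrt(k) for _ in range(d)]
         for _ in range(k)]
    return [[sum(r[j] * v[j] for j in range(d)) for r in R] for v in vecs]

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

rng = random.Random(42)
a = [rng.gauss(0, 1) for _ in range(500)]
b = [rng.gauss(0, 1) for _ in range(500)]
pa, pb = random_projection([a, b], k=200)
ratio = dist(pa, pb) / dist(a, b)      # close to 1 for large enough k
```

Here the 500-dimensional distance is estimated from 200-dimensional sketches; larger k tightens the ratio around 1 at the cost of more storage.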
14.
This paper presents a novel heuristic global learning (HER-GBL) algorithm for multilayer neural networks. The algorithm is based on the least-squares method, to maintain fast convergence, and on penalized optimization, to solve the problem of local minima. The penalty term, defined as a Gaussian-type function of the weights, provides an uphill force to escape from local minima. As a result, training performance is dramatically improved. The proposed HER-GBL algorithm yields excellent results in terms of convergence speed, avoidance of local minima, and quality of solution.
15.
IEEE Transactions on Fuzzy Systems, 2009, 17(3):698-710
16.
D. A. Rachkovskij. Cybernetics and Systems Analysis, 2018, 54(1):152-164
This survey paper considers index structures for fast similarity search over objects represented by real-valued vectors. Index structures based on locality-sensitive hashing and their modifications are discussed. The ideas of specific algorithms, including recently proposed ones, are stated, and their interrelations and some theoretical aspects are discussed.
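A minimal sketch of one classic LSH family of the kind such surveys cover, random-hyperplane hashing (SimHash) for cosine similarity: each random hyperplane contributes one sign bit, and vectors at a small angle agree on most bits. The vector dimension, bit count, and noise level below are illustrative assumptions, not taken from the paper.

```python
import random

def simhash(v, planes):
    """Sign of the projection onto each random hyperplane gives one
    hash bit; vectors at a small angle agree on most bits."""
    return [1 if sum(p[j] * v[j] for j in range(len(v))) >= 0 else 0
            for p in planes]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

rng = random.Random(7)
dim, bits = 50, 64
planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(bits)]

base = [rng.gauss(0, 1) for _ in range(dim)]
near = [x + rng.gauss(0, 0.05) for x in base]   # almost the same vector
far = [-x for x in base]                        # opposite direction

d_near = hamming(simhash(base, planes), simhash(near, planes))
d_far = hamming(simhash(base, planes), simhash(far, planes))
```

In an actual index, the bit strings would be cut into bands used as hash-bucket keys so that near-duplicates collide; here only the hash itself is shown.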
17.
The Gabor transform is considered a very useful tool in many fields, but its real-time application is limited by high computational complexity. To reduce this complexity, a DCT-based real-valued discrete Gabor transform was previously proposed. This paper reviews the DCT-based real-valued discrete Gabor transform and, to compute it efficiently and quickly, proposes block time-recursive algorithms both for computing the coefficients of the 1-D real-valued discrete Gabor transform under critical sampling and for reconstructing the original signal from the transform coefficients. An implementation of the algorithms using parallel lattice structures is studied, and their computational complexity and advantages are discussed and compared.
18.
Bi Xiaoru. Computer & Microelectronics Technology, 2014, (10):3-6
To address the fact that the standard support vector machine does not support incremental learning in P2P network traffic identification, a fast incremental SVM learning method suited to P2P traffic identification is proposed. Cluster analysis is first performed separately on the newly added positive and negative samples that violate the Karush-Kuhn-Tucker (KKT) conditions; the cluster centers are then used to train an SVM, producing a transitional hyperplane close to the optimal separating hyperplane of incremental learning. With this hyperplane as a reference, the mutual conversion between non-support vectors and support vectors in the initial training set is determined, and a new sample set is generated to realize incremental SVM learning. Theoretical analysis and experimental results show that the method effectively simplifies the incremental-learning training set and markedly shortens SVM incremental learning and identification time without reducing P2P traffic identification accuracy.
19.
20.
Error Minimized Extreme Learning Machine With Growth of Hidden Nodes and Incremental Learning    Total cited: 9 (self-citations: 0, other: 9)
IEEE Transactions on Neural Networks, 2009, 20(8):1352-1357
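The title describes an extreme learning machine (ELM) that grows hidden nodes while minimizing error. As a rough illustration only, not the paper's EM-ELM update (which adjusts output weights incrementally rather than re-solving), the sketch below uses random tanh hidden nodes and ridge least squares for the output weights and shows that training error does not increase as nodes are added. All names, the target function, and the parameter values are assumptions.

```python
import math
import random

def solve(A, b):
    """Gaussian elimination with partial pivoting for A x = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c]
                              for c in range(r + 1, n))) / M[r][r]
    return x

def elm_rmse(xs, ys, n_hidden, seed=0, ridge=1e-8):
    """ELM with n_hidden random tanh nodes; output weights by ridge
    least squares on the hidden-layer output matrix H."""
    rng = random.Random(seed)
    nodes = [(rng.uniform(-3, 3), rng.uniform(-1, 1))
             for _ in range(n_hidden)]
    H = [[math.tanh(a * x + b) for a, b in nodes] for x in xs]
    k = n_hidden
    # Normal equations: (H^T H + ridge * I) beta = H^T y
    A = [[sum(H[i][p] * H[i][q] for i in range(len(xs)))
          + (ridge if p == q else 0.0) for q in range(k)]
         for p in range(k)]
    rhs = [sum(H[i][p] * ys[i] for i in range(len(xs))) for p in range(k)]
    beta = solve(A, rhs)
    preds = [sum(h * w for h, w in zip(row, beta)) for row in H]
    return math.sqrt(sum((p - y) ** 2 for p, y in zip(preds, ys)) / len(xs))

xs = [i / 20 - 1 for i in range(41)]           # 41 points in [-1, 1]
ys = [math.sin(3 * x) for x in xs]
err_small = elm_rmse(xs, ys, n_hidden=2)       # few hidden nodes
err_large = elm_rmse(xs, ys, n_hidden=12)      # after growing nodes
```

Because the same seed makes the smaller node set a prefix of the larger one, adding nodes can only maintain or reduce the least-squares training error, which is the "growth of hidden nodes" effect the title refers to.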