Similar Literature
18 similar documents found (search time: 781 ms)
1.
To address the problems that the many redundant nodes in the incremental extreme learning machine (I-ELM) reduce learning efficiency and complicate the network structure, an improved I-ELM based on a clonal selection algorithm (CSA) optimized by multi-learning is proposed. A Baldwinian learning operation is used to change the search range of antibody information, and a Lamarckian learning operation is combined with it to improve the search ability of the CSA. The improved algorithm effectively controls the number of hidden-layer nodes of I-ELM, making the network structure more compact and improving accuracy. Simulation results show that the proposed multi-learning clonal selection incremental kernel extreme learning machine (MLCSI-ELMK) effectively simplifies the network structure while maintaining good generalization, strong learning ability, and online prediction capability.
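For orientation, a minimal numpy sketch of the plain I-ELM loop that the clonal-selection optimization above builds on; function names and data handling are illustrative, and the CSA with its Baldwinian/Lamarckian operators is not reproduced:

```python
import numpy as np

def ielm_fit(X, y, max_nodes=50, tol=1e-3, rng=None):
    """Plain I-ELM (Huang et al., 2006): add one random sigmoid node at a
    time and solve only that node's output weight against the residual."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    e = y.astype(float).copy()          # residual: target minus network output
    nodes = []                          # (input weights, bias, output weight)
    for _ in range(max_nodes):
        w, b = rng.standard_normal(d), rng.standard_normal()
        h = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # hidden node activations
        beta = (e @ h) / (h @ h)        # 1-D least squares for this node only
        e = e - beta * h
        nodes.append((w, b, beta))
        if np.sqrt(np.mean(e**2)) < tol:         # stop once training RMSE is small
            break
    return nodes

def ielm_predict(nodes, X):
    return sum(beta / (1.0 + np.exp(-(X @ w + b))) for w, b, beta in nodes)
```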

2.
Virtual Machine Energy Consumption Prediction Using an Incremental ELM with a Compressed Momentum Term
邹伟东  夏元清 《自动化学报》2019,45(7):1290-1297
In the Infrastructure-as-a-Service (IaaS) cloud service model, accurate virtual machine energy consumption prediction is of great significance for formulating virtual machine scheduling strategies across many physical servers. Prediction models based on the traditional incremental extreme learning machine (I-ELM) contain many redundant nodes that reduce the accuracy and efficiency of energy consumption prediction. A compressed momentum term is added to the existing I-ELM model to feed the network training error back into the hidden-layer output, bringing the prediction closer to the output samples; this reduces redundant hidden-layer nodes, speeds up network convergence, and improves the generalization performance of I-ELM.
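The abstract does not spell out the exact form of the compressed momentum term. One plausible reading, sketched below with hypothetical names, is to let each new node fit the current residual plus a geometrically decaying ("compressed") fraction of the previous residual change, so the trained model remains an ordinary sum of hidden nodes and stays evaluable at prediction time:

```python
import numpy as np

def momentum_ielm(X, y, max_nodes=50, mu0=0.5, rho=0.8, rng=None):
    """Hedged sketch: each new node is fitted to the residual augmented by a
    momentum term mu_k*(previous residual change); mu_k decays geometrically,
    i.e. the error feedback is progressively 'compressed'."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    e = y.astype(float).copy()
    e_prev = e.copy()                       # previous-step residual
    nodes, mu = [], mu0
    for _ in range(max_nodes):
        w, b = rng.standard_normal(d), rng.standard_normal()
        h = np.tanh(X @ w + b)
        target = e + mu * (e - e_prev)      # error feedback into the node target
        beta = (target @ h) / (h @ h)
        e_prev, e = e, e - beta * h         # model is still sum(beta * h(x))
        nodes.append((w, b, beta))
        mu *= rho                           # compress the momentum term
    return nodes
```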

3.

To address problems encountered by soft-sensor models in practical applications, an ensemble learning algorithm suitable for soft-sensor regression is proposed by drawing on the AdaBoost ensemble learning idea, improving the accuracy of traditional soft-sensor models. To overcome the constraints that model-updating techniques place on practical soft-sensor applications, an incremental learning mechanism is incorporated into the ensemble soft-sensor modeling, giving the soft-sensor model the ability to update online in real time. The new method is used to build a soft-sensor model of the sizing rate in the slashing process, and the model is validated with real production data; the results show that the model has good prediction accuracy and can be updated online effectively.
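As a rough stand-in on synthetic data (the paper's own base learner and its online incremental-update mechanism are not shown; scikit-learn's AdaBoost has no partial-fit facility), an AdaBoost regression soft sensor might look like:

```python
import numpy as np
from sklearn.ensemble import AdaBoostRegressor

# Hypothetical stand-in data for the sizing-rate soft sensor in the abstract.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                                     # process variables
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=500)  # sizing rate

model = AdaBoostRegressor(n_estimators=50, loss="linear", random_state=0)
model.fit(X[:400], y[:400])                                       # offline training
rmse = np.sqrt(np.mean((model.predict(X[400:]) - y[400:]) ** 2))
print("hold-out RMSE:", round(float(rmse), 3))
```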


4.
韩敏  刘晓欣 《控制与决策》2014,29(9):1576-1580

To address the variable selection and network structure design problems in regression, a mutual-information-based extreme learning machine (ELM) training algorithm is proposed that performs input variable selection and hidden-layer structure optimization simultaneously. The algorithm embeds mutual-information input variable selection into the ELM learning process, uses the network's learning performance as the criterion for judging whether an input variable is relevant to the output, and determines the number of hidden-layer nodes incrementally. Simulation results on the Lorenz, Gas Furnace, and ten benchmark datasets demonstrate the effectiveness of the proposed algorithm, which not only simplifies the network structure but also improves generalization performance.
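A filter-style approximation of the idea (the paper embeds the mutual-information test inside the ELM learning loop; this sketch applies it once, up front, with hypothetical data):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def elm_train(X, y, n_hidden, rng):
    """Standard ELM: random hidden layer, least-squares output weights."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.normal(size=300)

mi = mutual_info_regression(X, y, random_state=0)   # relevance of each input
keep = np.argsort(mi)[::-1][:3]                     # keep the most relevant inputs
W, b, beta = elm_train(X[:, keep], y, n_hidden=30, rng=rng)
pred = np.tanh(X[:, keep] @ W + b) @ beta
```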


5.

To address the structure design problem of RBF neural networks, a structure optimization algorithm based on the variance significance of output sensitivity is proposed. First, the algorithm tests whether the variance of each hidden node's output sensitivity over the sample set differs significantly from zero, and adds or deletes the corresponding hidden nodes accordingly. Then the adjusted network parameters are corrected so that the network achieves better fitting accuracy and convergence. Finally, simulation experiments on the proposed optimization algorithm show that it can adaptively adjust the RBF network structure to the object under study, with good approximation and generalization abilities.
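A minimal sketch of the pruning step, with a plain variance threshold standing in for the paper's statistical significance test (all names illustrative):

```python
import numpy as np

def rbf_outputs(X, centers, widths):
    """Gaussian RBF activations, one column per hidden node."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * widths ** 2))

def prune_by_sensitivity_variance(X, centers, widths, w, thresh=1e-4):
    """Keep node i only if the variance of its output contribution
    w_i * phi_i(x) over the sample set is clearly above zero; a node whose
    contribution is near-constant acts like a bias and is removed."""
    Phi = rbf_outputs(X, centers, widths)
    contrib_var = np.var(Phi * w, axis=0)       # per-node variance over samples
    keep = contrib_var > thresh
    return centers[keep], widths[keep], w[keep]
```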


6.
李军  乃永强 《控制与决策》2015,30(9):1559-1566

For a class of multi-input multi-output (MIMO) affine nonlinear dynamic systems, a robust adaptive neural control method based on the extreme learning machine (ELM) is proposed. ELM randomly determines the hidden-layer parameters of single-hidden-layer feedforward networks (SLFNs) and only needs to adjust the output weights, achieving good generalization at extremely fast learning speed. In the proposed control method, ELM is used to approximate the unknown nonlinear terms of the system, and parameter adaptive laws are designed separately for the ELM output weights, the approximation error, and the unknown upper bound of external disturbances; Lyapunov stability analysis guarantees that all signals of the closed-loop system are semi-globally uniformly ultimately bounded. Simulation results demonstrate the effectiveness of the control method.


7.

Based on extreme learning machine theory, a new method for constructing a compressed hidden space is proposed by combining principal component analysis (PCA) with the ELM feature mapping. Drawing on multilayer neural network learning, the hidden spaces are fused layer by layer, and a stacked-hidden-space fuzzy C-means clustering algorithm is further proposed to improve the ability to learn nonlinear data. Experimental results show that the proposed algorithm is more efficient and stable when handling complex nonlinear data, and it overcomes the sensitivity of fuzzy clustering algorithms to the fuzzification exponent.
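A compressed-hidden-space construction along these lines (one layer only; the stacked fusion and the fuzzy C-means stage are omitted, and the data are hypothetical) might be sketched as:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 10))

# Random ELM feature mapping, then PCA compression of the hidden space.
n_hidden = 200
W = rng.standard_normal((X.shape[1], n_hidden))
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)                       # ELM hidden space, (400, 200)
Z = PCA(n_components=20).fit_transform(H)    # compressed hidden space, (400, 20)
# Z could now feed a (fuzzy) clustering stage, or be mapped and compressed again
# to stack further layers.
```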


8.
During training of the incremental extreme learning machine (I-ELM), the random assignment of input weights and hidden-neuron biases makes the output weights of some hidden neurons too small; such neurons contribute little to the network output and become ineffective. This problem not only makes the network more complex but also reduces its stability. To address it, this paper proposes an improved method that adds a bias to the I-ELM hidden-layer output (II-ELM) and proves the existence of such a bias. Finally, simulations compare I-ELM and II-ELM on classification and regression problems, verifying the effectiveness of II-ELM.
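A hedged reading of the II-ELM bias (the paper proves existence of a suitable bias; in this sketch the mean residual, i.e. the one-dimensional least-squares offset, plays that role, which may differ from the paper's construction):

```python
import numpy as np

def ii_elm(X, y, max_nodes=50, rng=None):
    """Sketch: after each random node is fitted, an output bias absorbs the
    mean residual, so weak nodes are not left to model a constant offset.
    Predict with f(x) = bias + sum(beta_i * tanh(w_i @ x + b_i))."""
    rng = rng or np.random.default_rng(0)
    e = y.astype(float).copy()
    nodes, bias = [], 0.0
    for _ in range(max_nodes):
        w, b = rng.standard_normal(X.shape[1]), rng.standard_normal()
        h = np.tanh(X @ w + b)
        beta = (e @ h) / (h @ h)
        e = e - beta * h
        db = e.mean()                   # least-squares constant offset
        bias += db
        e = e - db                      # fold mean residual into the output bias
        nodes.append((w, b, beta))
    return nodes, bias
```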

9.
The extreme learning machine (ELM) requires no tuning of hidden-node parameters during training and, thanks to this efficient training scheme, is widely used for classification and regression; however, ELM also faces serious problems such as structure selection and overfitting. To solve these problems, the effect of the number of hidden nodes added per increment on convergence speed and training time is studied, and a variable-length incremental extreme learning machine (VI-ELM) is proposed that uses the rate of change of the network output error to control the network growth rate. Regression and classification experiments on multiple datasets show that the proposed method achieves good generalization performance with a more efficient training procedure.
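A sketch of a VI-ELM-style growth loop; the exact rule mapping the error change rate to the batch size is an assumption here:

```python
import numpy as np

def vi_elm(X, y, max_nodes=200, k0=1, k_max=16, rng=None):
    """Hedged sketch: nodes are added in batches whose size doubles while the
    output error keeps falling quickly and halves when the error change
    rate stalls (the published rule may differ)."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    e = y.astype(float).copy()
    nodes, k = [], k0
    prev_rmse = np.sqrt(np.mean(e ** 2))
    while len(nodes) < max_nodes:
        for _ in range(k):                      # add k nodes this round
            w, b = rng.standard_normal(d), rng.standard_normal()
            h = np.tanh(X @ w + b)
            beta = (e @ h) / (h @ h)
            e -= beta * h
            nodes.append((w, b, beta))
        rmse = np.sqrt(np.mean(e ** 2))
        rate = (prev_rmse - rmse) / max(prev_rmse, 1e-12)
        k = min(k * 2, k_max) if rate > 0.05 else max(k // 2, 1)
        prev_rmse = rmse
    return nodes
```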

10.
韩敏  吕飞 《控制与决策》2015,30(11):2089-2092

To balance accuracy and diversity in ensemble learning, an information-theoretic selective ensemble of kernel extreme learning machines is proposed. Kernel extreme learning machines, which are structurally simple, easy to train, and generalize well, are used as base learners. A relevance criterion is introduced to describe accuracy and a redundancy criterion to describe diversity, recasting the selective ensemble problem as a variable selection problem. The generated kernel extreme learning machines are then selected with the mutual-information-based maximum-relevance minimum-redundancy criterion, balancing accuracy against diversity. Simulation results on UCI benchmark regression and classification data verify the superiority of the proposed algorithm.
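The selection step might look like the following greedy max-relevance min-redundancy sketch over held-out predictions of the base kernel ELMs (all names illustrative; the base learners themselves are not shown):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def mrmr_select(preds, y, m):
    """Greedily pick m base learners. preds: (n_samples, n_learners)
    validation predictions of the base kernel ELMs; y: validation targets.
    Score = MI(prediction, target) - mean MI(prediction, already chosen)."""
    mi_y = mutual_info_regression(preds, y, random_state=0)   # relevance
    chosen = [int(np.argmax(mi_y))]
    while len(chosen) < m:
        best, best_score = None, -np.inf
        for j in range(preds.shape[1]):
            if j in chosen:
                continue
            red = np.mean([mutual_info_regression(                # redundancy
                preds[:, [j]], preds[:, c], random_state=0)[0] for c in chosen])
            score = mi_y[j] - red
            if score > best_score:
                best, best_score = j, score
        chosen.append(best)
    return chosen       # ensemble output = mean of the chosen prediction columns
```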


11.
A hybrid intelligent method combining differential evolution (DE) and particle swarm optimization (PSO), the DEPSO algorithm, is proposed; tests on ten typical benchmark functions show that DEPSO has good optimization performance. For single-hidden-layer feedforward networks (SLFNs), an improved learning algorithm, DEPSO-ELM, is then proposed: DEPSO optimizes the hidden-node parameters of the SLFN, and the extreme learning machine (ELM) algorithm computes the SLFN output weights. Applied to regression on six typical real-world datasets, DEPSO-ELM obtains more accurate results than the DE-ELM and SaE-ELM algorithms. Finally, DEPSO-ELM is applied to modeling and predicting the thermal error of CNC machine tools, with good prediction results.
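A compact sketch of the two ingredients: an ELM least-squares fitness for candidate hidden parameters, and a simplified DE step with a PSO-style pull toward the global best (the published DEPSO operator may differ):

```python
import numpy as np

def elm_fitness(theta, X, y, n_hidden):
    """Decode candidate hidden weights/biases, solve output weights by
    least squares, and return training RMSE as the fitness."""
    d = X.shape[1]
    W = theta[:d * n_hidden].reshape(d, n_hidden)
    b = theta[d * n_hidden:]
    H = np.tanh(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return np.sqrt(np.mean((H @ beta - y) ** 2))

def depso_elm(X, y, n_hidden=10, pop=20, iters=30, F=0.5, CR=0.9, rng=None):
    rng = rng or np.random.default_rng(0)
    dim = X.shape[1] * n_hidden + n_hidden
    P = rng.standard_normal((pop, dim))
    fit = np.array([elm_fitness(p, X, y, n_hidden) for p in P])
    for _ in range(iters):
        gbest = P[np.argmin(fit)]
        for i in range(pop):
            a, b_, c = P[rng.choice(pop, 3, replace=False)]
            v = a + F * (b_ - c) + rng.random() * (gbest - P[i])  # DE + PSO pull
            trial = np.where(rng.random(dim) < CR, v, P[i])       # crossover
            f = elm_fitness(trial, X, y, n_hidden)
            if f < fit[i]:                                        # greedy select
                P[i], fit[i] = trial, f
    return P[np.argmin(fit)], fit.min()
```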

12.
It has been theoretically proved that single-hidden-layer feedforward networks with randomly generated additive or radial basis function hidden nodes can approximate any continuous function. Meanwhile, an incremental algorithm referred to as the incremental extreme learning machine (I-ELM) was proposed which outperforms many popular learning algorithms. However, I-ELM may produce redundant nodes, which increase the network architecture complexity and reduce the convergence rate of I-ELM. Moreover, the output weight vector obtained by I-ELM is not the least squares solution of the equation Hβ = T. In order to settle these problems, this paper proposes an orthogonal incremental extreme learning machine (OI-ELM) and gives rigorous proofs in theory. OI-ELM avoids redundant nodes and obtains the least squares solution of Hβ = T by incorporating the Gram–Schmidt orthogonalization method into I-ELM. Simulation results on nonlinear dynamic system identification and some benchmark real-world problems verify that OI-ELM learns much faster and obtains much more compact neural networks than ELM, I-ELM, convex I-ELM and enhanced I-ELM while keeping competitive performance.
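The core OI-ELM step, sketched below: each new random node's activation vector is Gram–Schmidt-orthogonalized against the accepted ones, so the accumulated coefficients remain the least-squares solution of Hβ = T (recovering β in the original node basis needs a back-substitution, omitted here):

```python
import numpy as np

def oi_elm(X, y, max_nodes=50, tol=1e-8, rng=None):
    """Sketch of the OI-ELM idea. Returns the accepted nodes with their
    coefficients in the orthonormal basis, plus that basis."""
    rng = rng or np.random.default_rng(0)
    e = y.astype(float).copy()
    Q, nodes = [], []
    for _ in range(max_nodes):
        w, b = rng.standard_normal(X.shape[1]), rng.standard_normal()
        h = np.tanh(X @ w + b)
        for q in Q:                       # remove components already spanned
            h = h - (q @ h) * q
        nrm = np.linalg.norm(h)
        if nrm < tol:                     # redundant node: reject it
            continue
        q = h / nrm
        c = q @ e                         # weight in the orthonormal basis
        e = e - c * q                     # residual stays LS-optimal
        Q.append(q)
        nodes.append((w, b, c))
    return nodes, Q
```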

13.
Convex incremental extreme learning machine
Guang-Bin Huang, Lei Chen 《Neurocomputing》2007,70(16-18):3056-3062
Unlike the conventional neural network theories and implementations, Huang et al. [Universal approximation using incremental constructive feedforward networks with random hidden nodes, IEEE Transactions on Neural Networks 17(4) (2006) 879–892] have recently proposed a new theory to show that single-hidden-layer feedforward networks (SLFNs) with randomly generated additive or radial basis function (RBF) hidden nodes (according to any continuous sampling distribution) can work as universal approximators, and the resulting incremental extreme learning machine (I-ELM) outperforms many popular learning algorithms. I-ELM randomly generates the hidden nodes and analytically calculates the output weights of SLFNs; however, I-ELM does not recalculate the output weights of all the existing nodes when a new node is added. This paper shows that, while retaining the same simplicity, the convergence rate of I-ELM can be further improved by recalculating the output weights of the existing nodes based on a convex optimization method when a new hidden node is randomly added. Furthermore, we show that given a type of piecewise continuous computational hidden nodes (possibly not neural-alike nodes), if SLFNs can work as universal approximators with adjustable hidden node parameters, then from a function approximation point of view the hidden node parameters of such "generalized" SLFNs (including sigmoid networks, RBF networks, trigonometric networks, threshold networks, fuzzy inference systems, fully complex neural networks, high-order networks, ridge polynomial networks, wavelet networks, etc.) can actually be randomly generated according to any continuous sampling distribution. In theory, the parameters of these SLFNs can be analytically determined by ELM instead of being tuned.
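In this convex variant the recalculation reduces to a rescaling: the new output is the convex combination f_n = (1 − β_n) f_{n−1} + β_n h_n, so every existing output weight is multiplied by (1 − β_n) when node n is added. A minimal numpy sketch:

```python
import numpy as np

def ci_elm(X, y, max_nodes=50, rng=None):
    """Sketch of convex I-ELM: beta_n minimises the residual of the convex
    step f_n = (1 - beta_n) * f_{n-1} + beta_n * h_n, and all existing
    output weights are rescaled by (1 - beta_n)."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    f = np.zeros(n)                      # current network output on X
    e = y.astype(float).copy()
    betas, nodes = [], []
    for _ in range(max_nodes):
        w, b = rng.standard_normal(d), rng.standard_normal()
        h = np.tanh(X @ w + b)
        g = h - f                        # direction of the convex step
        beta = (e @ g) / (g @ g)         # minimises the new residual norm
        betas = [bb * (1 - beta) for bb in betas] + [beta]   # rescale old weights
        nodes.append((w, b))
        f = (1 - beta) * f + beta * h
        e = y - f
    return nodes, betas
```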

14.
Dissolved Oxygen Prediction Using an RBF Neural Network Optimized by Improved Recursive Least Squares
To improve the accuracy of dissolved oxygen prediction, a radial basis function (RBF) neural network optimized by an improved recursive least squares algorithm is applied to the task. The K-means clustering algorithm selects the hidden-unit centers, and the improved recursive least squares algorithm optimizes the weights from the hidden layer to the output layer. Simulation results show that the method fits the nonlinearity of dissolved oxygen well, with prediction accuracy better than both the plain RBF neural network and the RBF network optimized by ordinary recursive least squares.
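A sketch of the two-stage scheme with the standard exponentially weighted RLS recursion (the paper's "improved" RLS variant is not specified in the abstract, so plain RLS with a forgetting factor is used; names and hyperparameters are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans

def rbf_design(X, centers, width):
    """Gaussian RBF design matrix: one column per centre."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def rls_rbf_fit(X, y, n_centers=10, width=1.0, lam=0.99, delta=100.0):
    """Stage 1: K-means picks the RBF centres. Stage 2: recursive least
    squares (forgetting factor lam) updates the hidden-to-output weights
    sample by sample."""
    centers = KMeans(n_clusters=n_centers, n_init=10,
                     random_state=0).fit(X).cluster_centers_
    w = np.zeros(n_centers)
    P = delta * np.eye(n_centers)                 # inverse correlation matrix
    for phi, t in zip(rbf_design(X, centers, width), y):
        k = P @ phi / (lam + phi @ P @ phi)       # RLS gain
        w += k * (t - phi @ w)                    # weight update
        P = (P - np.outer(k, phi @ P)) / lam      # covariance update
    return centers, w
```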

15.
As industrial systems grow steadily more complex, fault prediction faces higher demands on timeliness and accuracy. To meet them, an improved ELM neural network model based on dynamic memory feedback is proposed for fault prediction. Structurally, the model adds a feedback layer that memorizes the hidden-layer output, extracts data trend features from the memorized information, and uses them to dynamically update the feedback layer's output weights. Faults are predicted by forecasting the next output of the nonlinear dynamic system and diagnosing the predicted output. Validation on artificial Sinc data and an application to the Tennessee Eastman (TE) process show that the proposed method offers high prediction accuracy and strong dynamic adaptability, with good predictive ability for nonlinear time-series systems.

16.
To address the difficulty of determining the hidden-layer structure of radial basis function (RBF) networks, an RBF structure design algorithm is proposed that builds on the good online classification properties of adaptive resonance theory (ART) networks. The algorithm applies the clustering behavior of ART to RBF structure design: input vectors are classified by comparing their similarity to stored patterns, which determines the number of hidden nodes and their initial parameters and yields a compact network. Simulation results on typical nonlinear function approximation show that the resulting structure learns quickly and approximates well.
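A vigilance-style pass in the spirit of ART (the similarity measure and the centre-update rule here are assumptions, not the paper's exact choices):

```python
import numpy as np

def art_like_centers(X, vigilance=0.7):
    """Each input joins the most similar stored pattern if the similarity
    clears the vigilance threshold; otherwise it founds a new hidden node
    (centre). Similarity is taken as a Gaussian of the distance."""
    centers, counts = [], []
    for x in X:
        if centers:
            d = np.linalg.norm(np.array(centers) - x, axis=1)
            j = int(np.argmin(d))
            sim = np.exp(-d[j])              # similarity to closest pattern
            if sim >= vigilance:
                counts[j] += 1               # running-mean centre update
                centers[j] += (x - centers[j]) / counts[j]
                continue
        centers.append(x.astype(float).copy())
        counts.append(1)
    return np.array(centers)                 # one RBF hidden node per centre
```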

17.
Patra, A., Das, S., Mishra, S. N., Senapati, M. R. 《Neural computing & applications》2017,28(1):101-110

For financial time series, the generation of error bars on the point of prediction is important in order to estimate the corresponding risk. In recent years, artificial intelligence driven by optimization techniques has been used to make time series approaches more systematic and improve forecasting performance. This paper presents a local linear radial basis functional neural network (LLRBFNN) model for classifying finance data from Yahoo Inc. The LLRBFNN model is learned using a hybrid technique of backpropagation and the recursive least squares algorithm. The LLRBFNN model uses a local linear model between the hidden layer and the output layer, in contrast to the direct hidden-to-output weights of typical neural network models. The obtained prediction result is compared with a multilayer perceptron and a radial basis functional neural network whose parameters are trained by the gradient descent learning method. The proposed technique provides a lower mean squared error and thus can be considered superior to the other models. The technique is also tested on linear data, i.e., diabetic data, to confirm the validity of the result obtained from the experiment.
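The distinctive piece is the local linear model gated by each RBF unit, f(x) = Σᵢ φᵢ(x)(aᵢᵀx + bᵢ). The sketch below fits all local models jointly with one least-squares solve, a simpler stand-in for the paper's backpropagation-plus-RLS hybrid (names and widths are illustrative):

```python
import numpy as np

def llrbfnn_design(X, centers, width):
    """Design matrix for a local linear RBF net: hidden unit i contributes
    phi_i(x) * [1, x] through its local linear model."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2 * width ** 2))            # (n, m) RBF activations
    Xa = np.hstack([np.ones((X.shape[0], 1)), X])   # (n, d+1) augmented input
    # Column block i is the RBF-gated local linear regressor of unit i.
    return np.hstack([Phi[:, [i]] * Xa for i in range(centers.shape[0])])

def llrbfnn_fit(X, y, centers, width=1.0):
    D = llrbfnn_design(X, centers, width)
    theta, *_ = np.linalg.lstsq(D, y, rcond=None)   # all local models at once
    return theta        # predict with llrbfnn_design(X_new, centers, width) @ theta
```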


18.
Extreme learning machine (ELM) is a learning algorithm for generalized single-hidden-layer feed-forward networks (SLFNs). In order to obtain a suitable network architecture, the Incremental Extreme Learning Machine (I-ELM) is a variant of ELM that constructs SLFNs by adding hidden nodes one by one. Although various I-ELM-class algorithms have been proposed to improve the convergence rate or to obtain a minimal training error, they either do not change the way I-ELM constructs the network or run an over-fitting risk. Making the testing error converge quickly and stably therefore becomes an important issue. In this paper, we propose a new incremental ELM referred to as the Length-Changeable Incremental Extreme Learning Machine (LCI-ELM). It allows more than one hidden node to be added to the network at a time, and the existing network is treated as a whole when the output weights are tuned. The output weights of newly added hidden nodes are determined using a partial error-minimizing method. We prove that an SLFN constructed using LCI-ELM has approximation capability on a universal compact input set as well as on a finite training set. Experimental results demonstrate that LCI-ELM achieves a higher convergence rate as well as a lower over-fitting risk than some competitive I-ELM-class algorithms.
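A simplified sketch of batch node addition with a partial error-minimizing solve (unlike the paper's scheme, the existing weights are left untouched here, so this shows only the variable-length-block idea):

```python
import numpy as np

def lci_elm(X, y, batch=4, max_nodes=60, rng=None):
    """Sketch: add `batch` hidden nodes per step and solve the new block's
    output weights jointly by least squares against the current residual."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    e = y.astype(float).copy()
    nodes = []
    while len(nodes) < max_nodes:
        W = rng.standard_normal((d, batch))
        b = rng.standard_normal(batch)
        H = np.tanh(X @ W + b)                        # (n, batch) new block
        beta, *_ = np.linalg.lstsq(H, e, rcond=None)  # partial error minimisation
        e = e - H @ beta
        nodes.extend(zip(W.T, b, beta))
    return nodes
```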
