Similar Documents
1.
The number of hidden-layer nodes is a key parameter affecting the generalization performance of the extreme learning machine (ELM). To address the problems of traditional algorithms for determining the ELM hidden node count, namely complex optimization procedures, overfitting, and convergence to local optima, a structural risk minimization ELM (SRM-ELM) algorithm is proposed. By analyzing the relationship between the VC dimension and the number of hidden nodes, the VC confidence function is approximated and modified to be concave, and an approximate SRM objective is reconstructed by combining it with the empirical risk. On this basis, the position value of a particle swarm is used directly as the ELM hidden node count, and particle swarm optimization minimizes the structural risk function to obtain the optimal number of hidden nodes. Simulations on six UCI datasets and a capsule-defect dataset show that the algorithm finds the optimal node count for the ELM and achieves better generalization ability.
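Below is a minimal numpy sketch of the selection idea: each candidate hidden node count is scored by an SRM-style objective, the empirical risk plus a concave complexity term. The surrogate confidence function `c * sqrt(L log n / n)` and the plain scan over node counts (the paper uses PSO) are illustrative assumptions, not the paper's exact formulas.

```python
import numpy as np

def srm_score(X, y, L, rng, c=0.1):
    W = rng.normal(size=(X.shape[1], L))        # random ELM input weights
    b = rng.normal(size=L)                      # random hidden biases
    H = np.tanh(X @ W + b)                      # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y                # analytical output weights
    emp_risk = np.mean((H @ beta - y) ** 2)     # empirical risk
    confidence = c * np.sqrt(L * np.log(len(X)) / len(X))  # assumed concave surrogate
    return emp_risk + confidence

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6)); y = np.sin(X.sum(axis=1))
# The paper minimizes this objective with PSO; a plain scan suffices to illustrate.
best_L = min(range(2, 60), key=lambda L: srm_score(X, y, L, rng))
print("selected hidden node count:", best_L)
```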

2.
The number of hidden layers and hidden nodes determines the scale of a neural network and strongly affects its performance. Given a network that already meets the minimum required number of hidden nodes, a pruning algorithm can delete redundant nodes, reducing the hidden node count and yielding a more compact structure. Penalty-function-based pruning adds a penalty term, a function of the network weights, to the objective function. Because the weight variables in the penalty term can carry an adjustable parameter, a single penalty term can be generalized into a family of penalty functions that vary with that parameter; the original penalty function is the special case in which the parameter takes a fixed value. Experiments on XOR data with a standard BP network show how the pruning effect on hidden nodes and the network weights change as the penalty function is generalized, and data analysis identifies generalization parameters that yield better pruning and a better network structure.
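The following sketch illustrates the parameterized penalty family and magnitude-based pruning under assumed forms: `lam * sum(|w|^p)` as the penalty (p = 2 recovers classic weight decay) and a norm threshold for dropping hidden nodes. The paper's actual penalty function is not reproduced here.

```python
import numpy as np

def penalty(w, p=2.0, lam=1e-3):
    """Generalized penalty family: lam * sum(|w|^p); p = 2 is classic weight decay."""
    return lam * np.sum(np.abs(w) ** p)

def penalty_grad(w, p=2.0, lam=1e-3):
    """Gradient of the penalty term, added to the BP weight update (assumes p >= 1)."""
    return lam * p * np.sign(w) * np.abs(w) ** (p - 1)

def prune(W_out, tol=1e-2):
    """Keep only hidden nodes whose outgoing-weight norm was not driven near zero."""
    return np.flatnonzero(np.linalg.norm(W_out, axis=1) > tol)

rng = np.random.default_rng(1)
W_out = rng.normal(size=(8, 1)) * np.array([[1, 1, 1e-3, 1, 2e-3, 1, 1, 3e-3]]).T
print("surviving hidden nodes:", prune(W_out))
```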

3.
Research on a structure learning algorithm for stochastic fuzzy neural networks (cited by 1; 1 self-citation)
张骏  吕静静 《计算机应用》2005,25(10):2390-2391
A structure learning algorithm for stochastic fuzzy neural networks based on criterion functions relating the input, hidden, and output layers is presented; it jointly considers the influence of input and output signals on the hidden-layer functions. A key issue in this algorithm is determining the optimal number of hidden nodes in the stochastic fuzzy neural network. This paper gives a general method for determining the optimal number of rules and provides corresponding simulation examples based on the results.

4.
An optimization algorithm for the number of hidden-layer nodes in feedforward neural networks (cited by 38)
Because there is no theoretical basis for determining the number of hidden-layer nodes in a feedforward neural network, an optimization algorithm based on the golden-section principle is proposed. First, the interval in which hidden node counts most frequently occur is determined; taking the total network error as the trial outcome, the golden-section method then searches this interval for an ideal value. To accommodate high-accuracy requirements, the interval can be extended, yielding a node count with stronger approximation ability. Algorithm analysis and simulation examples show that the method is practical: it finds an ideal hidden node count while reducing cost and improving search efficiency.
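A compact sketch of golden-section search over an integer interval of hidden node counts follows; the validation-error function is a hypothetical stand-in for training a network at each candidate count.

```python
import math

def golden_section_int(f, lo, hi):
    """Golden-section search on an integer interval; returns an approximate minimizer."""
    phi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    while b - a > 1:
        c = round(b - phi * (b - a))
        d = round(a + phi * (b - a))
        if f(c) <= f(d):
            b = d            # minimum lies in [a, d]
        else:
            a = c            # minimum lies in [c, b]
    return min((a, b), key=f)

val_error = lambda n: (n - 13) ** 2 + 5   # hypothetical validation-error curve
print(golden_section_int(val_error, 2, 50))   # -> 13
```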

5.
To improve the accuracy of neural-network temperature prediction during microwave drying of lignite subject to thermal runaway, a parameter optimization algorithm based on double filtering and particle swarm optimization is proposed. The method first applies wavelet soft-threshold filtering to the training data so that the temperature series describes the overall trend while preserving non-stationary features; particle swarm optimization then searches for the combination of hidden node count, learning rate, and number of training epochs best suited to that trend; finally, during prediction, forward mean-threshold filtering is applied to the input data in combination with the optimized network. Experimental results show that the method improves temperature prediction accuracy under both thermal-runaway and normal conditions, reducing the mean absolute prediction error by 59.2%.
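A hedged sketch of the wavelet soft-threshold preprocessing step, using PyWavelets; the wavelet family (`db4`), decomposition level, and universal-threshold rule are assumptions, since the abstract does not specify them.

```python
import numpy as np
import pywt

def soft_threshold_denoise(signal, wavelet="db4", level=3):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(signal)))          # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

t = np.linspace(0, 1, 512)
noisy_temp = 80 + 20 * t + np.random.default_rng(0).normal(0, 1.5, t.size)
smooth_temp = soft_threshold_denoise(noisy_temp)   # trend kept, noise suppressed
```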

6.
For large-scale data classification, a fast hidden-layer optimization method is designed to address a prominent problem of the distributed extreme learning machine (ELM): during training it must be run independently and repeatedly many times to optimize the number of hidden nodes or the model's generalization performance. Without increasing the algorithm's time complexity, the new algorithm trains multiple ELM hidden-layer networks simultaneously, jointly optimizing generalization ability and hidden node count, and avoids a large amount of repeated computation through distributed computing. In the solution process, this approach also reveals more precisely and intuitively how changes in the number of hidden nodes affect the model. Experimental results on several types of standard test functions show that, compared with distributed ELM, the new algorithm greatly improves solution accuracy, generalization ability, and stability.

7.
Analysis and optimization of factors affecting the prediction accuracy of neural network models (cited by 3)
Compared with traditional prediction models, neural networks have clear advantages for identifying nonlinear models and predicting non-stationary signals, but network structure strongly affects prediction or identification accuracy. Taking the widely used BP prediction network and sunspot data as an example, this paper analyzes how the network topology (number of input nodes and hidden nodes) and the allowed training error MSE (mean squared error) affect prediction ability. It is found that the optimal network corresponds to a particular topology and converges to an optimal position determined by the MSE target, which is not the network's global optimum. On this basis, a genetic algorithm is used to optimize the number of input nodes, the number of hidden nodes, and the MSE target, yielding an optimal prediction model. Finally, a numerical example verifies the analysis of the factors affecting BP prediction accuracy.

8.
申琳  周坚华 《遥感信息》2013,28(1):71-76
The numbers of hidden layers and hidden nodes directly determine a BP network's learning ability, but there is still no practical theory for choosing the hidden node count, which is generally set by experience or trial and error. This paper proposes a stepwise adaptive algorithm for determining the hidden node count: it adjusts the number of hidden nodes according to the network's relative output error and, through iteration, approaches the likely optimal node count while the relative error gradually decreases. This optimal count is usually the node count at which the relative output error begins to oscillate; a network structured with this count strikes a good balance between output accuracy and computational cost.
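The stepwise idea reduces to a simple loop: grow the hidden layer while the relative output error keeps decreasing and stop at the onset of oscillation. A minimal sketch, with a hypothetical `train_and_eval` standing in for training a BP network at each node count:

```python
def pick_hidden_nodes(train_and_eval, start=2, step=1, max_nodes=64):
    """Grow the hidden layer while the relative output error still decreases."""
    prev_err, best_n = float("inf"), start
    for n in range(start, max_nodes + 1, step):
        err = train_and_eval(n)
        if err >= prev_err:      # error begins to oscillate: stop at previous count
            break
        prev_err, best_n = err, n
    return best_n

# Hypothetical error curve; in practice train_and_eval trains a BP network
# with n hidden nodes and returns its relative output error.
print(pick_hidden_nodes(lambda n: abs(n - 10) * 0.01 + 0.02))   # -> 10
```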

9.
To improve the estimation accuracy of inverse halftone values for halftone patterns absent from the lookup table in lookup-table inverse halftoning, this paper proposes an inverse-halftoning approximation model based on an IRN neural network; a suitable IRN structure, number of hidden layers, and hidden node counts are determined through analysis, training, and optimization. Experimental results show that lookup-table data trained and fitted with this algorithm perform well in the visual quality and PSNR of reconstructed images, and the algorithm runs fast with low space complexity.

10.
To improve the stability of extreme learning machine (ELM) networks, an ELM based on improved particle swarm optimization (IPSO-ELM) is proposed. The improved PSO searches for the optimal input weights, hidden biases, and hidden node count of the ELM. A mutation operator is introduced to increase population diversity and speed up convergence. To handle large-scale electric load data, a parallel algorithm based on the Spark computing framework (PIPSO-ELM) is proposed. Experiments on real electric load data show that PIPSO-ELM has better stability and scalability and is suitable for processing large-scale load data.
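As a rough illustration of the mutation operator used to maintain swarm diversity, the sketch below perturbs a random fraction of particle coordinates after the standard PSO update; the mutation rate and scale are illustrative choices.

```python
import numpy as np

def mutate(positions, rate=0.1, scale=0.5, rng=None):
    """Perturb a random fraction of particle coordinates to keep the swarm diverse."""
    rng = rng if rng is not None else np.random.default_rng()
    mask = rng.random(positions.shape) < rate
    return positions + mask * rng.normal(0.0, scale, positions.shape)

swarm = np.random.default_rng(0).uniform(-1, 1, (30, 12))   # 30 particles, 12 dims
swarm = mutate(swarm)   # applied after the usual velocity/position update
```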

11.
李奕  施鸿宝 《软件学报》1996,7(7):435-441
To address knowledge acquisition, the bottleneck in building knowledge systems, this paper proposes an N-R method for automatically acquiring multi-level inference production rules based on a neural network (NN). The method uses a dedicated NN structural model and a corresponding learning algorithm, so that while the NN dynamically determines its hidden node count during learning it also generates new concepts not defined in the example set. After learning, the NN can be converted into an inference network by the conversion algorithm proposed in this paper, from which a production rule set is conveniently obtained.

12.
A structure-based neural network (NN) with the backpropagation through structure (BPTS) algorithm is applied to image classification for organizing a large image database, a challenging problem under investigation. Many factors can affect the results of image classification; one of the most important is the architecture of the NN, which consists of an input layer, a hidden layer, and an output layer. In this study only the number of nodes in the hidden layer (hidden nodes) is varied; other factors are kept unchanged. Two groups of experiments, each containing 2,940 images, are used for the analysis: the first group uses features described by image intensities, and the second uses features described by wavelet coefficients. Experimental results demonstrate that the effect of the number of hidden nodes on the reliability of classification is significant and nonlinear. With 17 hidden nodes, the classification rate reaches 95% on the training set and 90% on the testing set, indicating that 17 is an appropriate choice for the number of hidden nodes when a structure-based NN with the BPTS algorithm is applied to this image classification task.

13.
This paper presents a performance enhancement scheme for the recently developed extreme learning machine (ELM) for classifying power system disturbances using particle swarm optimization (PSO). Learning time is an important factor when designing any computationally intelligent algorithm for classification. ELM is a single-hidden-layer neural network with good generalization capability and extremely fast learning capacity. In ELM, the input weights are chosen randomly and the output weights are calculated analytically. However, ELM may need a higher number of hidden neurons due to the random determination of the input weights and hidden biases. One advantage of ELM over other methods is that the only parameter the user must properly adjust is the number of hidden nodes; nevertheless, optimal selection of this parameter can improve performance. In this paper, a hybrid optimization mechanism is proposed that combines discrete-valued PSO with continuous-valued PSO to optimize the input feature subset and the number of hidden nodes, enhancing the performance of ELM. The experimental results show that the proposed algorithm is faster and more accurate in discriminating power system disturbances.

14.
In this paper, we develop an online sequential learning algorithm for single-hidden-layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes in a unified framework. The algorithm, referred to as the online sequential extreme learning machine (OS-ELM), can learn data one-by-one or chunk-by-chunk (a block of data) with fixed or varying chunk size. The activation functions for additive nodes in OS-ELM can be any bounded nonconstant piecewise continuous functions, and the activation functions for RBF nodes can be any integrable piecewise continuous functions. In OS-ELM, the parameters of the hidden nodes (the input weights and biases of additive nodes, or the centers and impact factors of RBF nodes) are randomly selected, and the output weights are analytically determined from the sequentially arriving data. The algorithm builds on the batch ELM of Huang et al., which has been shown to be extremely fast with better generalization performance than other batch training methods. Apart from selecting the number of hidden nodes, no other control parameters have to be chosen manually. A detailed performance comparison of OS-ELM with other popular sequential learning algorithms is carried out on benchmark problems from regression, classification, and time-series prediction. The results show that OS-ELM is faster than the other sequential algorithms and produces better generalization performance.
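The chunk-by-chunk output-weight update in OS-ELM follows the recursive least-squares form; a minimal numpy sketch of one update step (H is the hidden-layer output matrix for the new chunk, T its targets):

```python
import numpy as np

def os_elm_update(P, beta, H, T):
    """One sequential update; P is the running inverse-covariance, beta the output weights."""
    K = np.eye(H.shape[0]) + H @ P @ H.T
    P = P - P @ H.T @ np.linalg.solve(K, H @ P)
    beta = beta + P @ H.T @ (T - H @ beta)
    return P, beta

# Initialization from an initial batch (H0, T0):
#   P0 = np.linalg.inv(H0.T @ H0);  beta0 = P0 @ H0.T @ T0
```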

15.
The traditional extreme learning machine, as a supervised learning model, assigns the input weights and biases of the hidden neurons arbitrarily and completes learning by computing the hidden-layer output weights. To address the insufficient prediction accuracy of the traditional ELM in data analysis and prediction, an ELM improved with simulated annealing is proposed. First, a traditional ELM is trained on the training set to obtain the hidden-layer output weights, and an evaluation criterion for the prediction results is selected. Then, treating the traditional ELM's hidden-layer input weights and biases as the initial solution and the evaluation criterion as the objective function, the cooling process of simulated annealing finds the optimal solution, that is, the hidden-neuron input weights and biases that minimize prediction error during learning; finally, the hidden-layer output weights are computed with the traditional ELM. Experiments on iris classification data and Boston housing-price data show that, compared with the traditional ELM, the simulated-annealing-improved ELM performs better in both classification and regression.
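A minimal sketch of the annealing loop described above, under assumed schedule values (initial temperature, geometric cooling rate, step count): perturb the input weights and biases, solve the output weights analytically, and accept worse solutions with probability exp(-delta/T).

```python
import numpy as np

def anneal_elm(X, y, L=20, T=1.0, alpha=0.95, steps=200, rng=np.random.default_rng(0)):
    def loss(W, b):
        H = np.tanh(X @ W + b)
        beta = np.linalg.pinv(H) @ y            # output weights solved analytically
        return np.mean((H @ beta - y) ** 2)

    W, b = rng.normal(size=(X.shape[1], L)), rng.normal(size=L)   # initial solution
    cur, best, best_sol = loss(W, b), np.inf, (W, b)
    for _ in range(steps):
        W2 = W + rng.normal(0, T, W.shape)      # perturb input weights
        b2 = b + rng.normal(0, T, b.shape)      # perturb hidden biases
        new = loss(W2, b2)
        if new < cur or rng.random() < np.exp(-(new - cur) / T):
            W, b, cur = W2, b2, new             # accept (possibly worse) move
            if cur < best:
                best, best_sol = cur, (W, b)
        T *= alpha                              # geometric cooling
    return best_sol
```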

16.
Compared with radial basis function (RBF) networks, the extreme learning machine (ELM) trains faster and generalizes better, while affinity propagation (AP) clustering can determine the number of clusters automatically. This paper therefore proposes a multi-label learning model (ML-AP-RBF-RELM) that combines AP clustering, multi-label RBF (ML-RBF), and regularized ELM (RELM). In this model the input layer is mapped with ML-RBF, AP clustering automatically determines the number of clusters for each label class, and the hidden node count is computed from these numbers. The per-label cluster counts are then used with K-means clustering to determine the centers of the hidden-node RBF functions. Finally, RELM rapidly solves for the connection weights from the hidden layer to the output layer. Experiments show that ML-AP-RBF-RELM performs well.
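The clustering portion can be sketched with scikit-learn: affinity propagation fixes the number of clusters per label automatically, and K-means then supplies that many RBF centers. This covers only the clustering step of ML-AP-RBF-RELM, as an illustration.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation, KMeans

def rbf_centers_for_label(X_label, seed=0):
    """X_label: the samples carrying one particular label."""
    ap = AffinityPropagation(random_state=seed).fit(X_label)
    k = max(1, len(ap.cluster_centers_))               # cluster count chosen by AP
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X_label).cluster_centers_

X = np.random.default_rng(0).normal(size=(120, 4))
centers = rbf_centers_for_label(X)   # this label contributes len(centers) hidden nodes
```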

17.
Recently, a novel learning algorithm for single-hidden-layer feedforward neural networks (SLFNs) named the extreme learning machine (ELM) was proposed by Huang et al. The essence of ELM is that the learning parameters of the hidden nodes, including input weights and biases, are randomly assigned and need not be tuned, while the output weights can be analytically determined by a simple generalized inverse operation. The only parameter that needs to be defined is the number of hidden nodes. Compared with other traditional learning algorithms for SLFNs, ELM provides extremely fast learning speed and better generalization performance with the least human intervention. This paper first gives a brief review of ELM, describing its principle and algorithm. It then puts emphasis on improved methods and typical variants of ELM, especially incremental ELM, pruning ELM, error-minimized ELM, two-stage ELM, online sequential ELM, evolutionary ELM, voting-based ELM, ordinal ELM, fully complex ELM, and symmetric ELM. Next, the paper summarizes applications of ELM to classification, regression, function approximation, pattern recognition, forecasting, diagnosis, and so on. Finally, it discusses several open issues of ELM that may be worth exploring in the future.
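For reference, the basic ELM procedure the review describes, as a minimal numpy sketch: random, untuned hidden parameters and output weights obtained by the generalized inverse.

```python
import numpy as np

def elm_train(X, T, L, rng=np.random.default_rng(0)):
    W = rng.uniform(-1, 1, (X.shape[1], L))    # random input weights, never tuned
    b = rng.uniform(-1, 1, L)                  # random hidden biases, never tuned
    H = np.tanh(X @ W + b)                     # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T               # output weights by generalized inverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```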

18.
Evolutionary selection extreme learning machine optimization for regression (cited by 2; 1 self-citation)
Neural network regression models can approximate unknown datasets with small error. As an important method for global regression, the extreme learning machine (ELM) is a typical learning method for single-hidden-layer feedforward networks, owing to its better generalization performance and faster implementation. The randomness of the input weights lets the nonlinear combination achieve arbitrary function approximation. In this paper, we seek an alternative mechanism for the input connections, with an idea derived from evolutionary algorithms. After predefining the number L of hidden nodes, we generate original ELM models, treating each hidden node as a gene. The hidden nodes are ranked, and the larger-weight nodes are reassigned to the updated ELM. L/2 trivial hidden nodes are placed in a candidate reservoir; then L/2 new hidden nodes are generated and combined with hidden nodes drawn from this reservoir, with another ranking used to choose among them. Fitness-proportional selection picks L/2 hidden nodes and recombines them into the evolutionary selection ELM. The entire algorithm can be applied to large-scale dataset regression. Verification shows that the regression performance is better than that of the traditional ELM and Bayesian ELM at lower cost.

19.
This paper investigates the learning of a wide class of single-hidden-layer feedforward neural networks (SLFNs) with two sets of adjustable parameters, i.e., the nonlinear parameters in the hidden nodes and the linear output weights. The main objective is both to speed up the convergence of second-order learning algorithms such as Levenberg-Marquardt (LM) and to improve network performance. This is achieved by reducing the dimension of the solution space and by introducing a new Jacobian matrix. Unlike conventional supervised learning methods, which optimize the two parameter sets simultaneously, the linear output weights are first converted into dependent parameters, removing the need for their explicit computation. Consequently, neural network (NN) learning is performed over a solution space of reduced dimension. A new Jacobian matrix is then proposed for use with popular second-order learning methods in order to achieve a more accurate approximation of the cost function. The efficacy of the proposed method is shown through an analysis of the computational complexity and through simulation results on four different examples.

20.
In this paper a new learning algorithm is proposed for the problem of simultaneously learning a function and its derivatives, as an extension of the study of the error-minimized extreme learning machine for single-hidden-layer feedforward neural networks. Our formulation leads to solving a system of linear equations whose solution is obtained via the Moore-Penrose generalized pseudo-inverse. In this approach the number of hidden nodes is determined automatically by repeatedly adding new hidden nodes to the network, either one by one or group by group, and updating the output weights incrementally in an efficient manner until the network output error falls below the given expected learning accuracy. To verify the efficiency of the proposed method, a number of interesting examples are considered, and the results obtained with the proposed method are compared with those of two other popular methods. It is observed that the proposed method is fast and produces similar or better generalization performance on the test data.
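A hedged sketch of the grow-until-accurate loop: hidden nodes are added one group at a time and the output weights re-solved until the training error meets the target. The paper updates the output weights incrementally; for brevity this sketch re-solves them with the pseudo-inverse at each step.

```python
import numpy as np

def grow_elm(X, T, target_mse=1e-3, group=5, max_nodes=200, rng=np.random.default_rng(0)):
    W, b = np.empty((X.shape[1], 0)), np.empty(0)
    while W.shape[1] < max_nodes:
        W = np.hstack([W, rng.normal(size=(X.shape[1], group))])   # add a node group
        b = np.concatenate([b, rng.normal(size=group)])
        H = np.tanh(X @ W + b)
        beta = np.linalg.pinv(H) @ T            # re-solve output weights
        if np.mean((H @ beta - T) ** 2) <= target_mse:
            break                               # expected learning accuracy reached
    return W, b, beta
```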
