Similar Documents (20 results)
1.
Interval data offer a valuable way of representing the available information in complex problems where uncertainty, inaccuracy, or variability must be taken into account. This paper considers the learning of interval neural networks, whose inputs and outputs are vectors with interval components and whose weights are real numbers. The back-propagation (BP) learning algorithm is very slow for interval neural networks, just as for usual real-valued neural networks. The extreme learning machine (ELM) offers a faster learning speed than the BP algorithm. In this paper, ELM is applied to the learning of interval neural networks, resulting in an interval extreme learning machine (IELM). ELM for usual feedforward neural networks proceeds in two steps: the first randomly generates the weights connecting the input and hidden layers, and the second uses the Moore-Penrose generalized inverse to determine the weights connecting the hidden and output layers. The first step applies directly to interval neural networks, but the second does not, because IELM involves nonlinear constraint conditions. Instead, the same idea as in the BP algorithm is used to form a nonlinear optimization problem that determines the weights connecting the hidden and output layers of IELM. Numerical experiments show that IELM is much faster than the usual BP algorithm, and its generalization performance is much better, while its training error is slightly worse than that of BP, suggesting possible over-fitting in BP.
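As a concrete reference for the two-step procedure described above, here is a minimal NumPy sketch of the ordinary real-valued ELM (the tanh activation and the function names are illustrative choices, not taken from the paper); the interval variant would replace the pseudo-inverse step with the nonlinear optimization the authors describe:

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=0):
    """Basic ELM: step 1 random input weights, step 2 analytic output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input-to-hidden weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                     # Moore-Penrose generalized inverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```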

2.
A comparative study of extreme learning machine and support vector machine for reservoir permeability prediction   Cited by: 4 (self: 0, others: 4)
Extreme learning machine (ELM) is a simple, easy-to-use, and effective learning algorithm for single-hidden-layer feedforward neural networks (SLFNs). Traditional neural network learning algorithms (such as BP) require many training parameters to be set manually and easily converge to local optima. ELM only requires the number of hidden-layer nodes to be set; it does not adjust the network's input weights or hidden biases during execution and produces a unique optimal solution, so it combines fast learning with good generalization. This paper applies ELM to reservoir permeability prediction and, by comparing it with support vector machines, analyzes its feasibility and advantages for this task. Experimental results show that ELM and SVM achieve similar prediction accuracy, but ELM has clear advantages in parameter selection and learning speed.

3.
A study on effectiveness of extreme learning machine   Cited by: 7 (self: 0, others: 7)
Extreme learning machine (ELM), proposed by Huang et al., has been shown to be a promising learning algorithm for single-hidden-layer feedforward neural networks (SLFNs). Nevertheless, because of the random choice of input weights and biases, the ELM algorithm sometimes leaves the hidden-layer output matrix H of the SLFN without full column rank, which lowers the effectiveness of ELM. This paper discusses the effectiveness of ELM and proposes an improved algorithm called EELM that makes a proper selection of the input weights and biases before calculating the output weights, ensuring the full column rank of H in theory. This improves to some extent the learning performance (testing accuracy, prediction accuracy, learning time) and the robustness of the networks. Experimental results on both a benchmark function-approximation problem and real-world classification and regression problems show the good performance of EELM.
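The rank condition at issue can be checked directly with NumPy. The sketch below simply re-samples random weights until H has full column rank; this is an illustrative stand-in for EELM's deterministic selection rule, which the paper derives in theory:

```python
import numpy as np

def full_rank_hidden_matrix(X, n_hidden, max_tries=50, seed=0):
    """Re-sample random input weights until H has full column rank."""
    rng = np.random.default_rng(seed)
    for _ in range(max_tries):
        W = rng.standard_normal((X.shape[1], n_hidden))
        b = rng.standard_normal(n_hidden)
        H = np.tanh(X @ W + b)
        if np.linalg.matrix_rank(H) == n_hidden:   # full column rank achieved
            return W, b, H
    raise RuntimeError("no full-column-rank H found; reduce n_hidden")
```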

4.
Extreme learning machine (ELM) is widely used in training single-hidden-layer feedforward neural networks (SLFNs) because of its good generalization and fast speed. However, most improved ELMs address the approximation problem for sample data with noise only in the output values, not for data with noise in both input and output values, i.e., the errors-in-variables (EIV) model. In this paper, a novel algorithm, called (regularized) TLS-ELM, is proposed to approximate the EIV model based on ELM and the total least squares (TLS) method. The proposed TLS-ELM uses the idea of ELM to choose the hidden weights and applies the TLS method to determine the output weights. Furthermore, the perturbation quantities of the hidden output matrix and the observed values are obtained simultaneously. Comparison experiments with the least squares method, the TLS method, and ELM show that the proposed TLS-ELM achieves better accuracy with less training time.
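The TLS step replaces the ordinary least-squares solve for the output weights with one that admits perturbations in both the hidden output matrix and the targets. A minimal single-output sketch of the classical SVD-based TLS solve (applied to H and the target vector in the ELM setting; the regularized variant is omitted):

```python
import numpy as np

def tls_solve(A, b):
    """Total least squares solution of A x ~= b via SVD of the augmented matrix."""
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                    # right singular vector of the smallest singular value
    return -v[:-1] / v[-1]        # assumes the generic case v[-1] != 0
```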

5.
Dynamic ensemble extreme learning machine based on sample entropy   Cited by: 1 (self: 1, others: 0)
Extreme learning machine (ELM) has been proposed as a new learning algorithm for single-hidden-layer feedforward neural networks. By randomly selecting the input weights and hidden-layer biases, ELM overcomes many drawbacks of traditional gradient-based learning algorithms, such as local minima, improper learning rates, and slow learning. However, ELM suffers from instability and over-fitting, especially on large datasets. In this paper, a dynamic ensemble extreme learning machine based on sample entropy is proposed, which alleviates to some extent the problems of instability and over-fitting and increases prediction accuracy. The experimental results show that the proposed approach is robust and efficient.
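The abstract does not define sample entropy; one common simplified formulation for a 1-D series, which could serve as the complexity measure driving the dynamic ensemble, is sketched below (parameter defaults are conventional, not from the paper):

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r) of a 1-D series, with tolerance r = r_factor * std(x)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def match_count(length):
        t = np.array([x[i:i + length] for i in range(len(x) - length + 1)])
        d = np.max(np.abs(t[:, None] - t[None, :]), axis=2)   # Chebyshev distances
        return (np.sum(d <= r) - len(t)) / 2                  # exclude self-matches
    B, A = match_count(m), match_count(m + 1)
    return -np.log(A / B)
```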

6.
Extreme learning machine is a randomized algorithm: it randomly generates the input-layer weights and hidden biases of a single-hidden-layer neural network and determines the output-layer weights analytically. For a fixed network structure, repeatedly training the network with ELM yields different models. This paper proposes an ensemble-model approach to classification. First, several single-hidden-layer feedforward neural networks are trained repeatedly with the ELM algorithm; then the trained networks are combined by majority voting; finally, the ensemble model classifies the data. Experiments on 10 data sets compare the approach with ELM and ensemble ELM, and the results show that the proposed method outperforms both.
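The described pipeline, training several ELMs that differ only in their random initialization and combining them by majority vote, can be sketched as follows (all names are illustrative):

```python
import numpy as np

def train_elm_ensemble(X, y, n_classes, n_models=15, n_hidden=50):
    """Train several ELM classifiers differing only in their random weights."""
    models = []
    for seed in range(n_models):
        rng = np.random.default_rng(seed)
        W = rng.standard_normal((X.shape[1], n_hidden))
        b = rng.standard_normal(n_hidden)
        H = np.tanh(X @ W + b)
        T = np.eye(n_classes)[y]                 # one-hot targets
        models.append((W, b, np.linalg.pinv(H) @ T))
    return models

def predict_majority(models, X):
    """Each member votes for a class; the majority wins."""
    votes = np.array([(np.tanh(X @ W + b) @ beta).argmax(axis=1)
                      for W, b, beta in models])
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```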

7.
As a novel learning algorithm for single-hidden-layer feedforward neural networks, extreme learning machines (ELMs) have been a promising tool for regression and classification applications. However, it is not trivial for ELMs to find the proper number of hidden neurons due to the nonoptimal input weights and hidden biases. In this paper, a new model selection method for ELM based on multi-objective optimization is proposed to obtain compact networks with good generalization ability. First, a new leave-one-out (LOO) error bound of ELM is derived, which can be calculated at negligible computational cost once ELM training is finished. Then, hidden nodes are added to the network one by one, and at each step a multi-objective optimization algorithm selects optimal input weights by minimizing this LOO bound and the norm of the output weights simultaneously, in order to avoid over-fitting. Experiments on five UCI regression data sets demonstrate that the proposed algorithm generally obtains better generalization performance with a more compact network than the conventional gradient-based back-propagation method, the original ELM, and evolutionary ELM.
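The paper's specific LOO bound is not reproduced here, but the flavor of a negligible-cost LOO computation can be seen in the classical PRESS statistic for a least-squares output layer, which yields exact leave-one-out residuals from quantities available after training (a small ridge term is added for numerical stability; this is an illustrative analogue, not the paper's bound):

```python
import numpy as np

def press_loo_residuals(H, t, reg=1e-8):
    """Exact LOO residuals e_i / (1 - h_ii) for a least-squares output layer."""
    G = np.linalg.solve(H.T @ H + reg * np.eye(H.shape[1]), H.T)
    h = np.einsum('ij,ji->i', H, G)     # diagonal of the hat matrix H @ G
    e = t - H @ (G @ t)                 # ordinary training residuals
    return e / (1.0 - h)
```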

8.
Recently, a novel learning algorithm for single-hidden-layer feedforward neural networks (SLFNs) named extreme learning machine (ELM) was proposed by Huang et al. The essence of ELM is that the learning parameters of the hidden nodes, including input weights and biases, are randomly assigned and need not be tuned, while the output weights can be analytically determined by a simple generalized inverse operation. The only parameter to be defined is the number of hidden nodes. Compared with other traditional learning algorithms for SLFNs, ELM provides an extremely fast learning speed and better generalization performance, and requires the least human intervention. This paper first gives a brief review of ELM, describing its principle and algorithm. It then emphasizes the improved methods and typical variants of ELM, especially incremental ELM, pruning ELM, error-minimized ELM, two-stage ELM, online sequential ELM, evolutionary ELM, voting-based ELM, ordinal ELM, fully complex ELM, and symmetric ELM. Next, the paper summarizes the applications of ELM in classification, regression, function approximation, pattern recognition, forecasting, diagnosis, and so on. Finally, the paper discusses several open issues of ELM that may be worth exploring in the future.

9.
Extreme learning machine (ELM) is a novel algorithm for single-hidden-layer feedforward neural networks: only a suitable number of hidden nodes needs to be set, the input weights and hidden biases are assigned randomly, and training completes in a single pass without iteration. Exploiting the strength of genetic algorithms in optimizing prediction-model parameters, the optimal ELM parameter values are found and a passenger-throughput prediction model for Chengdu Shuangliu International Airport is built. Comparisons with support vector machines and BP neural networks are used to analyze the feasibility and advantages of the GA-ELM algorithm for passenger throughput prediction. Simulation results show that GA-ELM is not only feasible but also clearly outperforms the original ELM in prediction accuracy and training speed.
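The paper's exact GA encoding and fitness function are not given in the abstract; a toy version that evolves the flattened ELM input weights and biases against training RMSE might look like this (population size, rates, and the fitness choice are all illustrative assumptions):

```python
import numpy as np

def ga_optimize_elm(X, t, n_hidden=20, pop=30, gens=50, seed=0):
    """Toy GA over flattened ELM input weights and biases; fitness = training RMSE."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    dim = (d + 1) * n_hidden
    population = rng.standard_normal((pop, dim))

    def fitness(g):
        W, b = g[:d * n_hidden].reshape(d, n_hidden), g[d * n_hidden:]
        H = np.tanh(X @ W + b)
        return np.sqrt(np.mean((H @ (np.linalg.pinv(H) @ t) - t) ** 2))

    for _ in range(gens):
        scores = np.array([fitness(g) for g in population])
        elite = population[np.argsort(scores)[:pop // 2]]        # selection
        parents = elite[rng.integers(0, len(elite), (pop, 2))]
        mask = rng.random((pop, dim)) < 0.5                      # uniform crossover
        children = np.where(mask, parents[:, 0], parents[:, 1])
        children += 0.1 * rng.standard_normal((pop, dim)) * (rng.random((pop, dim)) < 0.1)
        population = children                                    # mutation applied above
    return min(population, key=fitness)
```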

10.
Extreme learning machine (ELM) is a learning algorithm for generalized single-hidden-layer feedforward networks (SLFNs). To obtain a suitable network architecture, the incremental extreme learning machine (I-ELM) constructs SLFNs by adding hidden nodes one by one. Although various I-ELM-class algorithms have been proposed to improve the convergence rate or to minimize the training error, they either leave the constructive procedure of I-ELM unchanged or face a risk of over-fitting. Making the testing error converge quickly and stably therefore becomes an important issue. In this paper, we propose a new incremental ELM referred to as the length-changeable incremental extreme learning machine (LCI-ELM). It allows more than one hidden node to be added to the network at a time, and the existing network is regarded as a whole when tuning the output weights. The output weights of newly added hidden nodes are determined using a partial error-minimizing method. We prove that an SLFN constructed using LCI-ELM has approximation capability on a universal compact input set as well as on a finite training set. Experimental results demonstrate that LCI-ELM achieves a higher convergence rate and a lower over-fitting risk than some competitive I-ELM-class algorithms.
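For contrast, the baseline I-ELM construction that LCI-ELM generalizes adds one random node at a time and fixes its output weight by a residual-minimizing inner product; LCI-ELM instead adds groups of nodes and re-tunes the existing network's output weights as a whole. A sketch of the classic one-node-at-a-time scheme:

```python
import numpy as np

def ielm_train(X, t, max_nodes=100, seed=0):
    """Baseline I-ELM: one random hidden node per step, residual-minimizing weight."""
    rng = np.random.default_rng(seed)
    e = t.astype(float).copy()            # current residual error
    nodes = []
    for _ in range(max_nodes):
        w = rng.standard_normal(X.shape[1])
        b = rng.standard_normal()
        h = np.tanh(X @ w + b)            # new node's output over all samples
        beta = (e @ h) / (h @ h)          # output weight minimizing the residual
        e -= beta * h
        nodes.append((w, b, beta))
    return nodes
```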

11.
Extreme learning machine (ELM), as an emergent technique for training feedforward neural networks, has shown good performance in various learning domains. This paper investigates the impact of random weights during the training of ELM. It focuses on the randomness of the weights between the input and hidden layers, and on the dimension change from the input layer to the hidden layer. The direct motivation is to verify whether the randomly assigned weights exert some positive effect during the training of ELM. Experimentally we show that for many classification and regression problems, the dimension increase caused by random weights in ELM performs better than the dimension increase caused by some kernel mappings. We conjecture that, via the random transformation, the transformed samples are more concentrated than the input samples, which makes learning more efficient.

12.
Compared with radial basis function (RBF) neural networks, extreme learning machine (ELM) trains faster and generalizes better, and the affinity propagation (AP) clustering algorithm can determine the number of clusters automatically. This paper therefore proposes a multi-label learning model, ML-AP-RBF-RELM, that combines AP clustering, multi-label RBF (ML-RBF), and regularized ELM (RELM). First, the input layer is mapped with ML-RBF, the AP clustering algorithm automatically determines the number of clusters for each label class, and the number of hidden nodes is computed from these counts. Then, using the per-label cluster counts, K-means clustering determines the centers of the hidden-node RBF functions. Finally, RELM quickly solves for the connection weights from the hidden layer to the output layer. Experiments show that ML-AP-RBF-RELM performs well.
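The AP-then-K-means step of this pipeline is easy to express with scikit-learn. A sketch for the samples associated with a single label (the data, the ML-RBF mapping, and the RELM solve are omitted; all names are illustrative):

```python
import numpy as np
from sklearn.cluster import AffinityPropagation, KMeans

rng = np.random.default_rng(0)
X_label = rng.standard_normal((100, 5))   # stand-in for one label's samples

# AP determines the number of clusters k automatically ...
ap = AffinityPropagation(random_state=0).fit(X_label)
k = len(ap.cluster_centers_)
# ... and K-means with that k yields the RBF centers for this label
centers = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_label).cluster_centers_
```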

13.
Over the last two decades, automatic speaker recognition has been an interesting and challenging problem for speech researchers. It can be classified into two categories, speaker identification and speaker verification. In this paper, a new classifier, the extreme learning machine, is examined on the text-independent speaker verification task and compared with an SVM classifier. Extreme learning machine (ELM) classifiers have been proposed for generalized single-hidden-layer feedforward networks with a wide variety of hidden nodes. They are extremely fast in learning and perform well on many artificial and real regression and classification applications. The database used to evaluate the ELM and SVM classifiers is the ELSDSR corpus, and Mel-frequency cepstral coefficients (MFCCs) were extracted and used as the input to the classifiers. Empirical studies show that ELM classifiers and their variants perform better than SVM classifiers on this dataset while requiring less training time.
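A typical MFCC front end for such experiments can be sketched with librosa (an illustrative library choice; the paper's exact feature configuration and the file path below are assumptions):

```python
import librosa

# Hypothetical utterance file; 13 MFCCs per frame, averaged over the utterance
y, sr = librosa.load("speaker_utterance.wav", sr=16000)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # shape: (13, n_frames)
features = mfcc.mean(axis=1)                          # fixed-length vector for ELM/SVM
```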

14.
To overcome the disadvantages of traditional algorithms for single-hidden-layer feedforward neural networks (SLFNs), an improved algorithm called extreme learning machine (ELM) was proposed by Huang et al. However, ELM is sensitive to the number of hidden neurons, and selecting it is a difficult problem. In this paper, a self-adaptive mechanism is introduced into ELM, yielding a new variant called the self-adaptive extreme learning machine (SaELM). SaELM always selects the best number of hidden neurons to form the network, with no parameters to adjust during training. To evaluate its performance, SaELM is used to solve the Italian wine and iris classification problems. Comparisons with traditional back-propagation, basic ELM, and general regression neural networks show that SaELM has a faster learning speed and better generalization performance on these classification problems.
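The abstract does not detail the self-adaptive mechanism; a simple validation-based stand-in that sweeps candidate hidden-node counts and keeps the best network looks like this (candidate range and names are illustrative):

```python
import numpy as np

def select_hidden_nodes(X_tr, T_tr, X_val, T_val, candidates=range(5, 205, 5)):
    """Pick the hidden-node count with the lowest validation error."""
    best = (np.inf, None)
    for n_hidden in candidates:
        rng = np.random.default_rng(0)
        W = rng.standard_normal((X_tr.shape[1], n_hidden))
        b = rng.standard_normal(n_hidden)
        beta = np.linalg.pinv(np.tanh(X_tr @ W + b)) @ T_tr
        err = np.mean((np.tanh(X_val @ W + b) @ beta - T_val) ** 2)
        if err < best[0]:
            best = (err, (n_hidden, W, b, beta))
    return best[1]
```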

15.
This paper presents a performance enhancement scheme for the recently developed extreme learning machine (ELM) for classifying power system disturbances using particle swarm optimization (PSO). Learning time is an important factor when designing any computationally intelligent classification algorithm. ELM is a single-hidden-layer neural network with good generalization capability and extremely fast learning capacity. In ELM, the input weights are chosen randomly and the output weights are calculated analytically. However, ELM may need a higher number of hidden neurons due to the random determination of the input weights and hidden biases. One advantage of ELM over other methods is that the only parameter the user must adjust is the number of hidden nodes, but an optimal selection of this parameter can still improve performance. In this paper, a hybrid optimization mechanism is proposed that combines discrete-valued PSO with continuous-valued PSO to optimize the input feature subset selection and the number of hidden nodes, enhancing the performance of ELM. The experimental results show that the proposed algorithm is faster and more accurate in discriminating power system disturbances.
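As a building block for the hybrid scheme, a plain continuous PSO minimizer can be sketched as follows (inertia and acceleration constants are conventional illustrative values; the discrete-valued variant for feature selection would binarize the positions):

```python
import numpy as np

def pso_minimize(f, dim, n_particles=20, iters=100, seed=0):
    """Plain continuous particle swarm optimization of a scalar objective f."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dim))       # positions
    v = np.zeros_like(x)                             # velocities
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()]                      # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[pbest_f.argmin()]
    return g
```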

16.
Extreme learning machine (ELM) works for generalized single-hidden-layer feedforward networks (SLFNs), and its essence is that the hidden layer of the SLFN need not be tuned. However, ELM utilizes only labeled data to carry out the supervised learning task. In order to exploit unlabeled data in the ELM model, we first extend the manifold regularization (MR) framework and then demonstrate the relation between the extended MR framework and ELM. Finally, a manifold-regularized extreme learning machine is derived from the proposed framework, which maintains the properties of ELM and is applicable to large-scale learning problems. Experimental results show that the proposed semi-supervised extreme learning machine is the most cost-efficient method. It tends to have better scalability and achieves satisfactory generalization performance at a faster learning speed than traditional semi-supervised learning algorithms.
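One common closed form consistent with the described framework augments the ELM output-weight solve with a graph-Laplacian penalty over labeled and unlabeled samples (the paper's exact formulation may differ; lam and gamma are assumed regularization parameters):

```python
import numpy as np

def ss_elm_output_weights(H, T, L, lam=1e-2, gamma=1e-2):
    """Manifold-regularized ELM output weights.

    H: hidden outputs for all samples; T: targets with zero rows for
    unlabeled samples; L: graph Laplacian over all samples."""
    m = H.shape[1]
    J = np.diag((np.abs(T).sum(axis=1) > 0).astype(float))  # 1 on labeled rows
    A = H.T @ J @ H + lam * np.eye(m) + gamma * (H.T @ L @ H)
    return np.linalg.solve(A, H.T @ J @ T)
```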

17.
Face recognition based on extreme learning machine   Cited by: 2 (self: 0, others: 2)
Extreme learning machine (ELM) is an efficient learning algorithm for generalized single-hidden-layer feedforward networks (SLFNs), which performs well in both regression and classification applications. It has recently been shown that, from the optimization point of view, ELM and the support vector machine (SVM) are equivalent, but ELM has less stringent optimization constraints. Due to these mild constraints, ELM is easy to implement and usually obtains better generalization performance. In this paper we study the performance of the one-against-all (OAA) and one-against-one (OAO) ELM for classification in multi-label face recognition applications. The performance is verified on four benchmark face image data sets.
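The two multi-class decompositions under study reduce to different target encodings and model counts; a minimal sketch of each (helper names are illustrative):

```python
import numpy as np
from itertools import combinations

def oaa_targets(y, n_classes):
    """One-against-all: one multi-output ELM with +1/-1 targets per class."""
    T = -np.ones((len(y), n_classes))
    T[np.arange(len(y)), y] = 1.0
    return T

def oao_pairs(n_classes):
    """One-against-one: train one binary ELM per class pair, then vote."""
    return list(combinations(range(n_classes), 2))   # e.g. [(0, 1), (0, 2), ...]
```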

18.
罗庚合 《计算机应用》2013,33(7):1942-1945
To address the random selection of input-layer weights in the extreme learning machine (ELM) algorithm, and borrowing the clustering idea of the type-2 extension neural network (ENN-2), an extension-clustering-based ELM (EC-ELM) neural network is proposed. This network takes the radial basis center vectors of the hidden neurons as the input-layer weights, uses an extension clustering algorithm to dynamically adjust the number of hidden nodes and the radial basis centers, and, with the input-layer weights thus determined, quickly solves for the output-layer weights via the Moore-Penrose generalized inverse. Tests on the standard Friedman#1 regression data set and the Wine classification data set show that EC-ELM offers a simple procedure for learning network structure and parameters, and achieves higher modeling accuracy and faster learning than extension-theory-based RBF (ERBF) and ELM networks, providing a new approach to modeling complex processes.

19.
This paper proposes a modified ELM algorithm that properly selects the input weights and biases before training the output weights of single-hidden-layer feedforward neural networks with a sigmoidal activation function, and proves mathematically that the hidden-layer output matrix maintains full column rank. Compared with standard ELM, the modified algorithm avoids this randomness. Experimental results on both regression and classification problems show the good performance of the modified ELM algorithm.

20.
Online learning algorithms have been preferred in many applications due to their ability to learn from sequentially arriving data. One of the effective algorithms recently proposed for training single-hidden-layer feedforward neural networks (SLFNs) is the online sequential extreme learning machine (OS-ELM), which can learn data one-by-one or chunk-by-chunk at fixed or varying sizes. It is based on the ideas of the extreme learning machine (ELM), in which the input weights and hidden-layer biases are randomly chosen and the output weights are then determined by the pseudo-inverse operation. The learning speed of this algorithm is extremely high. However, it does not yield good generalization models for noisy data, and its parameters are difficult to initialize so as to avoid singular and ill-posed problems. In this paper, we propose an improvement of OS-ELM based on a bi-objective optimization approach. It minimizes the empirical error while keeping the norm of the network weight vector small. Singular and ill-posed problems can be overcome by using Tikhonov regularization. This approach is also able to learn data one-by-one or chunk-by-chunk. Experimental results show the better generalization performance of the proposed approach on benchmark datasets.
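The regularized sequential update described here, Tikhonov initialization followed by a chunk-wise recursive least-squares update, can be sketched as follows (lam is an assumed regularization constant; function names are illustrative):

```python
import numpy as np

def os_elm_init(H0, T0, lam=1e-2):
    """Tikhonov-regularized initialization of online sequential ELM."""
    P = np.linalg.inv(H0.T @ H0 + lam * np.eye(H0.shape[1]))
    beta = P @ H0.T @ T0
    return P, beta

def os_elm_update(P, beta, Hk, Tk):
    """Process one chunk (Hk, Tk) with the recursive least-squares update."""
    K = P @ Hk.T @ np.linalg.inv(np.eye(Hk.shape[0]) + Hk @ P @ Hk.T)
    P = P - K @ Hk @ P
    beta = beta + P @ Hk.T @ (Tk - Hk @ beta)
    return P, beta
```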
