Similar Documents
20 similar documents found (search time: 15 ms)
1.
Damage location detection is directly relevant to aerospace structures, since a detection system can inspect exterior damage that may affect the operation of the equipment. In the literature, several kinds of learning algorithms have been applied in this field to construct detection systems, and some of them gave good results. However, most learning algorithms are time-consuming due to their computational complexity, so the real-time requirement of many practical applications cannot be fulfilled. Kernel extreme learning machine (kernel ELM) is a learning algorithm with good prediction performance and an extremely fast learning speed. In this work, kernel ELM is applied for the first time to predict the location of impact events on a clamped aluminum plate that simulates the shell of an aerospace structure. The results are compared with several previous approaches, including the support vector machine (SVM) and conventional back-propagation neural networks (BPNN). The comparison reveals the effectiveness of kernel ELM for impact detection, showing that kernel ELM has accuracy comparable to SVM but much faster speed than SVM and BPNN on the current application.
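As a point of reference for the kernel ELM formulation mentioned above, the following is a minimal sketch of a regularized kernel ELM regressor. It is not taken from the cited paper; the RBF kernel, the regularization constant C, the gamma value, and the toy "sensor readings to impact coordinates" data are assumptions made purely for illustration.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Pairwise RBF kernel matrix between rows of A and rows of B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def kelm_fit(X, T, C=100.0, gamma=1.0):
    """Kernel ELM: solve (I/C + Omega) beta = T, with Omega[i, j] = K(x_i, x_j)."""
    omega = rbf_kernel(X, X, gamma)
    return np.linalg.solve(np.eye(len(X)) / C + omega, T)

def kelm_predict(X_train, beta, X_new, gamma=1.0):
    return rbf_kernel(X_new, X_train, gamma) @ beta

# Toy stand-in: map 4 "sensor" readings to 2D impact coordinates.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
T = np.column_stack([X[:, 0] + 0.1 * X[:, 1], X[:, 2] - 0.2 * X[:, 3]])
beta = kelm_fit(X, T)
print(kelm_predict(X, beta, X[:5]))   # predicted (x, y) impact locations
```

The only training cost is a single N-by-N linear solve, which is where the fast learning speed claimed above comes from.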

2.
陈琳  邓万宇  王昕 《计算机工程与设计》2011,32(4):1430-1433,1437
Collaborative filtering is an effective personalized recommendation technique. However, as the numbers of users and resources grow, the high-dimensional sparsity of the data severely degrades recommendation quality and slows down computation. To address this problem, a collaborative filtering method based on the extreme learning machine is studied and implemented: principal component analysis is used to deal with the high-dimensional sparsity of the data, and the extreme learning machine is used to overcome the slow computation. Experimental results show that the method has good generalization performance and learning speed, and can well satisfy the requirements of personalized resource recommendation.
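To make the two-stage idea concrete (dimensionality reduction followed by a fast learner), the sketch below combines a PCA projection with a basic ELM regressor on a made-up rating matrix. It is illustrative only; the item being predicted, the component count k, and the hidden-layer size are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sparse user-item rating matrix (rows: users, columns: items).
R = rng.choice([0, 0, 0, 1, 2, 3, 4, 5], size=(300, 100)).astype(float)

# PCA via SVD: project users onto the top-k principal components.
k = 10
Rc = R - R.mean(axis=0)
U, s, Vt = np.linalg.svd(Rc, full_matrices=False)
Z = Rc @ Vt[:k].T                      # reduced, dense user features

# Basic ELM regressor: random hidden layer + least-squares output weights.
L = 50
W = rng.normal(size=(Z.shape[1], L))   # random input weights
b = rng.normal(size=L)                 # random biases
H = np.tanh(Z @ W + b)                 # hidden-layer output matrix
target = R[:, 0]                       # e.g. predict the rating of item 0
beta = np.linalg.pinv(H) @ target      # Moore-Penrose solution

pred = np.tanh(Z @ W + b) @ beta
print(pred[:5])
```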

3.
RTS game strategy evaluation using extreme learning machine   Total citations: 1, self-citations: 0, other citations: 1
The fundamental gameplay of a real-time strategy (RTS) game is collecting resources to build an army of military units to kill and destroy enemy units. In this research, an extreme learning machine (ELM) model is proposed for RTS game strategy evaluation. Due to the complicated game rules and numerous playable items, the commonly used tree-based decision models become complex, sometimes even unmanageable. Since complex interactions exist among unit types, the weighted average model usually cannot compute the combined power of unit groups well, which results in misleading unit-generation strategies. Fuzzy measures and integrals are often used to handle interactions among attributes, but they cannot handle the predefined unit production sequence that is strictly required in RTS games. In this paper, an ELM model is trained on real data to obtain the combined power of units of different types. Both the unit interactions and the production sequence can be handled implicitly and simultaneously by this model. Warcraft III battle data from real players are collected and used in our experiments. Experimental results show that ELM is fast and effective in evaluating unit-generation strategies.

4.
Application of multi-layer extreme learning machine in intrusion detection   Total citations: 1, self-citations: 0, other citations: 1
康松林  刘乐  刘楚楚  廖锓 《计算机应用》2015,35(9):2513-2518
To address the problems neural networks face in intrusion detection, such as high dimensionality, large data volume, the difficulty of obtaining labeled samples, the difficulty of feature construction, and the difficulty of training, an intrusion detection method based on the deep multi-layer extreme learning machine (ML-ELM) is proposed. First, a multi-layer network structure and deep learning are used to extract the highest-level abstract features of the detection samples, and singular values are used to express the features of the intrusion detection data. Then, an extreme learning machine (ELM) is used to build a classification model for the intrusion detection data, and layer-wise unsupervised learning is used to address the difficulty of obtaining labeled samples. Finally, the performance of the method is validated on the KDD99 dataset. Experimental results show that the multi-layer ELM method improves the detection accuracy, reduces the missed-detection rate to as low as 0.48%, and is more than 6 times faster than other deep-model detection methods. It still achieves over 85% accuracy with very few labeled samples, and the multi-layer network structure improves the detection rate for the U2L and R2L attack types. The method combines the advantages of deep learning and unsupervised learning, can represent high-dimensional, large-scale network records better with fewer parameters, and has advantages in both detection speed and feature representation for intrusion detection.
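The abstract combines layer-wise unsupervised feature learning with an ELM classifier. The sketch below shows one common way to realize that idea, stacked ELM autoencoders followed by a supervised ELM; it is not the cited method (in particular, the SVD-based feature expression is not reproduced), and the layer sizes, random stand-in data, and two-class labels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def elm_ae_layer(X, n_hidden):
    """ELM autoencoder: learn beta so that tanh(X W + b) @ beta ~= X,
    then use beta.T as this layer's unsupervised feature transform."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H) @ X          # reconstruct the input
    return np.tanh(X @ beta.T)            # layer output (new representation)

def elm_classifier(X, Y, n_hidden=200):
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    return W, b, np.linalg.pinv(H) @ Y

# Hypothetical stand-in for preprocessed intrusion-detection records.
X = rng.normal(size=(500, 41))            # 41 features, as in KDD99
Y = np.eye(2)[rng.integers(0, 2, 500)]    # one-hot labels: normal / attack

# Unsupervised stacked representation, then a supervised ELM on top.
Z = elm_ae_layer(elm_ae_layer(X, 64), 32)
W, b, beta = elm_classifier(Z, Y)
pred = np.argmax(np.tanh(Z @ W + b) @ beta, axis=1)
print("training accuracy:", (pred == Y.argmax(1)).mean())
```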

5.
Credit score classification is a prominent research problem in the banking and financial industry, and its predictive performance directly affects the profitability of that industry. This paper addresses how a Spiking Extreme Learning Machine (SELM) can be effectively used for credit score classification. A novel spike-generating function is proposed in the Leaky Nonlinear Integrate and Fire model (LNIF); its interspike period is computed and utilized in the extreme learning machine (ELM) for credit score classification. The proposed model, named SELM, is validated on five real-world credit scoring datasets: Australian, German-categorical, German-numerical, Japanese, and Bankruptcy. Further, the results obtained by SELM are compared with back-propagation, the probabilistic neural network, ELM, the voting-based Q-generalized extreme learning machine, the radial basis function neural network, and ELM with some existing spiking neuron models in terms of classification accuracy, area under the curve (AUC), H-measure, and computational time. The experimental results show that the improvements in accuracy and execution time of the proposed SELM are statistically significant on all of the aforementioned credit scoring datasets. Thus, integrating a biological spiking function with ELM makes it more efficient for classification.

6.
Extreme learning machine (ELM) [G.-B. Huang, Q.-Y. Zhu, C.-K. Siew, Extreme learning machine: a new learning scheme of feedforward neural networks, in: Proceedings of the International Joint Conference on Neural Networks (IJCNN2004), Budapest, Hungary, 25-29 July 2004], a novel learning algorithm much faster than traditional gradient-based learning algorithms, was recently proposed for single-hidden-layer feedforward neural networks (SLFNs). However, ELM may need a higher number of hidden neurons due to the random determination of the input weights and hidden biases. In this paper, a hybrid learning algorithm is proposed which uses the differential evolution algorithm to select the input weights and the Moore-Penrose (MP) generalized inverse to analytically determine the output weights. Experimental results show that this approach is able to achieve good generalization performance with much more compact networks.
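For reference, the Moore-Penrose step named in the abstract is the core of standard ELM training. The sketch below shows only that step on toy regression data; the differential-evolution search over input weights is deliberately omitted, and the hidden-layer size and sigmoid activation are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def elm_train(X, T, n_hidden=30):
    """Single-hidden-layer ELM: random input weights and biases,
    output weights from the Moore-Penrose generalized inverse."""
    W = rng.uniform(-1, 1, size=(X.shape[1], n_hidden))
    b = rng.uniform(-1, 1, size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid hidden outputs
    beta = np.linalg.pinv(H) @ T             # analytic least-squares solution
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return 1.0 / (1.0 + np.exp(-(X @ W + b))) @ beta

X = rng.uniform(-1, 1, size=(200, 2))
T = np.sin(np.pi * X[:, 0]) * X[:, 1]        # toy regression target
model = elm_train(X, T[:, None])
print(np.mean((elm_predict(model, X) - T[:, None]) ** 2))  # training MSE
```

In the hybrid algorithm described above, the single random draw of (W, b) would be replaced by a differential-evolution population scored on validation error, while beta would still be obtained analytically.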

7.
Symmetric extreme learning machine   Total citations: 1, self-citations: 1, other citations: 0
Extreme learning machine (ELM) can be considered a black-box modeling approach that seeks a model representation extracted from the training data. In this paper, a modified ELM algorithm, called symmetric ELM (S-ELM), is proposed by incorporating a priori knowledge of symmetry. S-ELM is realized by transforming the original activation function of the hidden neurons into one that is symmetric with respect to the input variables of the samples. In theory, S-ELM can approximate N arbitrary distinct samples with zero error. Simulation results show that, in applications where prior knowledge of symmetry exists, S-ELM obtains better generalization performance, a faster learning speed, and a more compact network architecture.
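One plausible way to build a hidden layer that is symmetric with respect to the input, in the spirit of the abstract, is to combine g(Wx+b) with g(-Wx+b); the exact transform used in the cited paper may differ, and the odd-symmetric toy target below is an assumption.

```python
import numpy as np

rng = np.random.default_rng(4)

def symmetric_hidden(X, W, b, odd=True):
    """Symmetrized hidden outputs: combine g(Wx+b) with g(-Wx+b) so each
    hidden node is an odd (or even) function of the input.
    (One plausible construction; the cited paper's transform may differ.)"""
    g_pos = np.tanh(X @ W + b)
    g_neg = np.tanh(-X @ W + b)
    return 0.5 * (g_pos - g_neg) if odd else 0.5 * (g_pos + g_neg)

# Toy target with odd symmetry: f(-x) = -f(x).
X = rng.uniform(-1, 1, size=(300, 1))
T = np.sin(3 * X)

W = rng.normal(size=(1, 40))
b = rng.normal(size=40)
H = symmetric_hidden(X, W, b, odd=True)
beta = np.linalg.pinv(H) @ T

x_test = np.array([[0.5], [-0.5]])
print(symmetric_hidden(x_test, W, b) @ beta)  # outputs are (approximately) negatives of each other
```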

8.
To overcome the disadvantages of the traditional algorithms for SLFNs (single-hidden-layer feedforward neural networks), an improved algorithm for SLFNs, called the extreme learning machine (ELM), was proposed by Huang et al. However, ELM is sensitive to the number of neurons in the hidden layer, and selecting it is a difficult problem. In this paper, a self-adaptive mechanism is introduced into ELM, yielding a new variant called the self-adaptive extreme learning machine (SaELM). SaELM is a self-adaptive learning algorithm that always selects the best number of hidden-layer neurons to form the neural network, with no need to adjust any parameters during training. To evaluate its performance, SaELM is used to solve the Italian wine and iris classification problems. Comparisons between SaELM and traditional back-propagation, basic ELM, and the general regression neural network show that SaELM has a faster learning speed and better generalization performance on these classification problems.
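The abstract does not spell out the self-adaptive mechanism, so the sketch below uses a plain hold-out search over candidate hidden-layer sizes as a stand-in for the kind of selection SaELM automates; the candidate sizes and the synthetic three-class data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def train_elm(X, Y, L):
    """Basic ELM classifier with L hidden neurons."""
    W = rng.normal(size=(X.shape[1], L)); b = rng.normal(size=L)
    H = np.tanh(X @ W + b)
    return W, b, np.linalg.pinv(H) @ Y

def accuracy(model, X, Y):
    W, b, beta = model
    return np.mean(np.argmax(np.tanh(X @ W + b) @ beta, 1) == np.argmax(Y, 1))

# Hypothetical 3-class data standing in for the wine / iris experiments.
X = rng.normal(size=(150, 4)); labels = rng.integers(0, 3, 150)
X[labels == 1] += 2.0; X[labels == 2] -= 2.0
Y = np.eye(3)[labels]
tr, va = slice(0, 100), slice(100, 150)

# Pick the hidden-layer size with the best hold-out accuracy.
best = max((accuracy(train_elm(X[tr], Y[tr], L), X[va], Y[va]), L)
           for L in (5, 10, 20, 40, 80))
print("selected hidden-layer size:", best[1], "validation accuracy:", best[0])
```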

9.
10.
Pattern Analysis and Applications - Imbalanced learning is a substantial and challenging problem in the field of data mining. Datasets that have a skewed class distribution pose hindrance...

11.
Wu  Wei  Alvarez  Jaime  Liu  Chengcheng  Sun  Hung-Min 《Microsystem Technologies》2018,24(1):209-217
Microsystem Technologies - This research focuses on bot detection through implementation of techniques such as traffic analysis, unsupervised machine learning, and similarity analysis between...

12.
Rapid building detection using machine learning   Total citations: 1, self-citations: 0, other citations: 1
This work describes algorithms for performing discrete object detection, specifically of buildings, where usually only low-quality RGB-only geospatial reflective imagery is available. We utilize new candidate search and feature extraction techniques to reduce the problem to a machine learning (ML) classification task. Here we can harness the complex patterns of contrast features contained in training data to establish a model of buildings. We avoid costly sliding windows to generate candidates; instead, we stitch together well-known image processing techniques to produce candidates for building detection that cover 80–85% of buildings. Reducing the number of possible candidates is important due to the scale of the problem: each candidate is subjected to classification which, although linear, costs time and prohibits large-scale evaluation. We propose a candidate alignment algorithm that boosts classification performance to 80–90% precision with a linear-time algorithm and show it has negligible cost. We also propose a new concept called a Permutable Haar Mesh (PHM), which we use to form and traverse a search space to recover candidate buildings that were lost in the initial preprocessing phase. All code and datasets from this paper are made available online (http://kdl.cs.umb.edu/w/datasets/ and https://github.com/caitlinkuhlman/ObjectDetectionCLUtility).

13.
李军  乃永强 《控制与决策》2015,30(9):1559-1566

For a class of multi-input multi-output (MIMO) affine nonlinear dynamic systems, a robust adaptive neural control method based on the extreme learning machine (ELM) is proposed. ELM randomly determines the hidden-layer parameters of single-hidden-layer feedforward networks (SLFNs) and only needs to adjust the output weights, so it achieves good generalization at an extremely fast learning speed. In the proposed control method, ELM is used to approximate the unknown nonlinear terms of the system, and parameter adaptive laws are designed for the ELM weights, the approximation error, and the unknown upper bound of the external disturbance. Lyapunov stability analysis guarantees that all signals of the closed-loop system are semi-globally uniformly ultimately bounded. Simulation results demonstrate the effectiveness of the proposed control method.

14.
Epileptic seizure detection enables EEG classification and lesion localization, and is of great significance for the clinical treatment of epilepsy. To achieve fast and accurate classification of long-term EEG with large data volume and a high-dimensional feature space, a seizure detection method based on the maximum-relevance minimum-redundancy (mRMR) criterion and the extreme learning machine is proposed. The EEG signals are transformed with the short-time Fourier transform and the energy time-frequency distribution is taken as the feature; feature selection is performed with the mRMR criterion, and the extreme learning machine, support vector machine, and back-propagation algorithm are used to classify and discriminate the different epileptic states. Experimental results show that the extreme learning machine outperforms the support vector machine and back-propagation in both classification accuracy and training speed: the classification accuracy for the interictal and ictal periods exceeds 98% and the training time is only 0.8 s, so the proposed method can detect epileptic seizures accurately in real time.
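A rough outline of the described pipeline (STFT energy features, then an ELM classifier) can be sketched as below; the synthetic "EEG" segments are an assumption, and the mRMR feature-selection step is omitted for brevity.

```python
import numpy as np
from scipy.signal import stft

rng = np.random.default_rng(6)

def energy_features(x, fs=256, nperseg=64):
    """Short-time Fourier transform, then the mean energy per frequency bin."""
    _, _, Z = stft(x, fs=fs, nperseg=nperseg)
    return (np.abs(Z) ** 2).mean(axis=1)      # energy time-frequency distribution, averaged over time

# Hypothetical 1-second EEG segments: the "ictal" class has extra 3 Hz power.
fs, n = 256, 400
t = np.arange(fs) / fs
labels = rng.integers(0, 2, n)
segments = rng.normal(size=(n, fs)) + labels[:, None] * 3 * np.sin(2 * np.pi * 3 * t)

X = np.array([energy_features(s, fs) for s in segments])
X = (X - X.mean(0)) / (X.std(0) + 1e-9)       # standardize the features
Y = np.eye(2)[labels]

# ELM classifier on the energy features (mRMR selection omitted here).
W = rng.normal(size=(X.shape[1], 100)); b = rng.normal(size=100)
H = np.tanh(X @ W + b)
beta = np.linalg.pinv(H) @ Y
print("training accuracy:", (np.argmax(H @ beta, 1) == labels).mean())
```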

15.
Variational Bayesian extreme learning machine   Total citations: 1, self-citations: 0, other citations: 1
Extreme learning machine (ELM) randomly generates the parameters of the hidden nodes and then analytically determines the output weights, giving a fast learning speed. The ill-posedness of the hidden-node parameter matrix directly causes unstable performance, and automatic selection of the hidden nodes is critical to maintaining the high efficiency of ELM. Focusing on these two problems, this paper proposes the variational Bayesian extreme learning machine (VBELM). First, a Bayesian probabilistic model is introduced into ELM, where the Bayesian prior distribution avoids the ill-posed problem of the hidden-node matrix. Then, variational approximate inference is employed in the Bayesian model to compute the posterior distribution and the independent variational hyperparameters, which can be used to select the hidden nodes automatically. Theoretical analysis and experimental results show that VBELM has more stable performance with more compact architectures, provides probabilistic predictions in contrast to traditional point predictions, and supplies a hyperparameter criterion for hidden-node selection.

16.
Convex incremental extreme learning machine   Total citations: 6, self-citations: 2, other citations: 6
Guang-Bin Huang, Lei Chen 《Neurocomputing》2007,70(16-18):3056
Unlike conventional neural network theories and implementations, Huang et al. [Universal approximation using incremental constructive feedforward networks with random hidden nodes, IEEE Transactions on Neural Networks 17(4) (2006) 879–892] recently proposed a new theory showing that single-hidden-layer feedforward networks (SLFNs) with randomly generated additive or radial basis function (RBF) hidden nodes (according to any continuous sampling distribution) can work as universal approximators, and that the resulting incremental extreme learning machine (I-ELM) outperforms many popular learning algorithms. I-ELM randomly generates the hidden nodes and analytically calculates the output weights of SLFNs; however, it does not recalculate the output weights of the existing nodes when a new node is added. This paper shows that, while retaining the same simplicity, the convergence rate of I-ELM can be further improved by recalculating the output weights of the existing nodes with a convex optimization method when a new hidden node is randomly added. Furthermore, we show that given a type of piecewise continuous computational hidden nodes (possibly not neuron-like nodes), if SLFNs can work as universal approximators with adjustable hidden node parameters, then from a function approximation point of view the hidden node parameters of such "generalized" SLFNs (including sigmoid networks, RBF networks, trigonometric networks, threshold networks, fuzzy inference systems, fully complex neural networks, high-order networks, ridge polynomial networks, wavelet networks, etc.) can actually be randomly generated according to any continuous sampling distribution. In theory, the parameters of these SLFNs can be analytically determined by ELM instead of being tuned.
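For context, the basic I-ELM growth step discussed above can be sketched as follows. This shows only the original I-ELM update (fitting the new node's output weight to the current residual), not the convex recalculation of existing output weights that the paper proposes; the toy data and node budget are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def i_elm(X, t, max_nodes=200):
    """Basic incremental ELM: add random hidden nodes one at a time and fit
    only the new node's output weight to the current residual."""
    f = np.zeros_like(t)
    nodes = []
    for _ in range(max_nodes):
        w = rng.normal(size=X.shape[1]); b = rng.normal()
        g = np.tanh(X @ w + b)
        e = t - f                                  # current residual
        beta = (e @ g) / (g @ g)                   # least-squares weight for this node
        f += beta * g
        nodes.append((w, b, beta))
    return nodes, f

X = rng.uniform(-1, 1, size=(300, 1))
t = np.sinc(3 * X[:, 0])
nodes, f = i_elm(X, t)
print("residual MSE after", len(nodes), "nodes:", np.mean((t - f) ** 2))
```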

17.
Due to their significant efficiency and simple implementation, extreme learning machine (ELM) algorithms have recently received much attention in regression and classification applications. Much effort has been devoted to enhancing the performance of ELM from both the methodology (ELM training strategies) and the structure (incremental or pruned ELMs) perspectives. In this paper, a local coupled extreme learning machine (LC-ELM) algorithm is presented. By assigning an address to each hidden node in the input space, LC-ELM introduces a decoupler framework into ELM in order to reduce the complexity of the weight search space. The activation degree of a hidden node is measured by the membership degree of the similarity between its associated address and the given input. Experimental results confirm that the proposed approach works effectively and generally outperforms the original ELM in both regression and classification applications.
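A rough sketch of the local-coupling idea is given below: each hidden node is given an "address" in the input space and its activation is damped by a membership function of the distance between that address and the input. The Gaussian membership, the address distribution, and the toy data are assumptions; the cited paper's fuzzy membership and similarity functions may differ.

```python
import numpy as np

rng = np.random.default_rng(8)

def lc_elm_hidden(X, W, b, addresses, width=1.0):
    """Locally coupled hidden layer: each node's activation is damped by a
    Gaussian membership of the distance between its 'address' and the input.
    (Illustrative choice of membership; the cited paper's functions may differ.)"""
    act = np.tanh(X @ W + b)                              # (n_samples, n_hidden)
    d2 = ((X[:, None, :] - addresses[None, :, :]) ** 2).sum(-1)
    membership = np.exp(-d2 / (2 * width ** 2))
    return act * membership

X = rng.uniform(-1, 1, size=(300, 2))
t = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2

L = 60
W = rng.normal(size=(2, L)); b = rng.normal(size=L)
addresses = rng.uniform(-1, 1, size=(L, 2))               # one address per hidden node
H = lc_elm_hidden(X, W, b, addresses)
beta = np.linalg.pinv(H) @ t
print("training MSE:", np.mean((H @ beta - t) ** 2))
```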

18.
A wavelet extreme learning machine   Total citations: 2, self-citations: 0, other citations: 2
Extreme learning machine (ELM) has been widely used in various fields to overcome the low training speed of conventional neural networks. Kernel extreme learning machine (KELM) introduces the kernel method into the ELM model, which is applicable in Stat ML. However, if the number of samples in Stat ML is too small, the unbalanced samples may not reflect the statistical characteristics of the input data, so the learning ability of Stat ML will be affected. At the same time, the mixed kernel functions used in KELM are conventional functions, so the selection of the kernel function can still be optimized. Based on these problems, we introduce a weighting method into KELM to deal with the unbalanced samples. Wavelet kernel functions have been widely used in support vector machines and obtain good classification performance. Therefore, to combine wavelet analysis with KELM, we introduce wavelet kernel functions into the KELM model, using a mixed kernel function of a wavelet kernel and a sigmoid kernel, and introduce the weighting method into the KELM model to balance the sample distribution; we then propose the weighted wavelet-mix kernel extreme learning machine. The experimental results show that this method can effectively improve classification ability with better generalization. At the same time, the wavelet kernel functions perform very well compared with the conventional kernel functions in the KELM model.
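The sketch below illustrates one way to mix a Morlet wavelet kernel with a sigmoid kernel and to weight samples inversely to class frequency in a kernel ELM. The mixing weight, kernel parameters, imbalanced toy data, and the particular weighted-KELM formula are all assumptions and may differ from the cited model.

```python
import numpy as np

rng = np.random.default_rng(9)

def wavelet_kernel(A, B, a=1.0):
    """Translation-invariant Morlet wavelet kernel (a common choice in wavelet SVMs)."""
    D = A[:, None, :] - B[None, :, :]
    return np.prod(np.cos(1.75 * D / a) * np.exp(-D**2 / (2 * a**2)), axis=-1)

def sigmoid_kernel(A, B, kappa=0.01, c=0.0):
    return np.tanh(kappa * A @ B.T + c)

def mix_kernel(A, B, lam=0.7):
    return lam * wavelet_kernel(A, B) + (1 - lam) * sigmoid_kernel(A, B)

# Imbalanced two-class toy data; per-sample weights ~ 1 / class frequency.
X = rng.normal(size=(200, 3)); y = (rng.random(200) < 0.15).astype(int)
X[y == 1] += 1.5
T = np.eye(2)[y]
w = 1.0 / np.bincount(y)[y]
Wmat = np.diag(w)

# Weighted kernel ELM (one common formulation): beta = (I/C + W*Omega)^(-1) W*T.
C = 10.0
Omega = mix_kernel(X, X)
beta = np.linalg.solve(np.eye(len(X)) / C + Wmat @ Omega, Wmat @ T)
pred = np.argmax(mix_kernel(X, X) @ beta, axis=1)
print("per-class recall:", [(pred[y == k] == k).mean() for k in (0, 1)])
```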

19.
To address the structure design problem of the extreme learning machine (ELM), a structure-growing algorithm for feedforward neural networks is proposed based on the hidden-layer activation function and its derivative. First, taking the Sigmoid function as an example, a derivation property of a class of basis functions is given: the derivative can be expressed in terms of the original function. Second, using this property, an ELM structure design method is proposed that automatically generates a two-hidden-layer feedforward network, in which the nodes of the first hidden layer are generated randomly one by one, the output of the second hidden layer is determined by the activation function and derivative of the newly added first-hidden-layer node, and the output-layer weights are obtained analytically by the least-squares method. Finally, theoretical proofs of the convergence and stability of the proposed algorithm are given. Simulation results on nonlinear system identification and the two-spiral classification problem demonstrate the effectiveness of the proposed algorithm.
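The derivation property mentioned above is easy to verify numerically for the Sigmoid function, whose derivative can be written in terms of the function itself, sigma'(x) = sigma(x)(1 - sigma(x)); the check below is ours, not from the cited paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-5, 5, 1001)
analytic = sigmoid(x) * (1.0 - sigmoid(x))   # derivative expressed via the function itself
numeric = np.gradient(sigmoid(x), x)         # finite-difference derivative
print(np.max(np.abs(analytic - numeric)))    # small discrepancy from the finite-difference step
```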

20.
Extreme learning machine (ELM) works for generalized single-hidden-layer feedforward networks (SLFNs), and its essence is that the hidden layer of SLFNs need not be tuned. However, ELM only utilizes labeled data to carry out the supervised learning task. In order to exploit unlabeled data in the ELM model, we first extend the manifold regularization (MR) framework and then demonstrate the relation between the extended MR framework and ELM. Finally, a manifold regularized extreme learning machine is derived from the proposed framework, which maintains the properties of ELM and is applicable to large-scale learning problems. Experimental results show that the proposed semi-supervised extreme learning machine is the most cost-efficient method: it tends to have better scalability and achieves satisfactory generalization performance at a relatively faster learning speed than traditional semi-supervised learning algorithms.
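As background for the manifold-regularization idea, the sketch below adds a graph-Laplacian penalty to an ELM output-weight solve on mostly unlabeled toy data. It is one common semi-supervised ELM formulation and is not claimed to match the cited framework; the k-NN graph, penalty weights, and data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(10)

# Two-cluster toy data; only 10 samples per class are labeled.
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.r_[np.zeros(100, int), np.ones(100, int)]
labeled = np.r_[np.arange(10), 100 + np.arange(10)]

# k-NN graph Laplacian over all (labeled + unlabeled) samples.
d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
A = np.zeros_like(d2)
nn = np.argsort(d2, axis=1)[:, 1:6]                  # 5 nearest neighbours
for i, js in enumerate(nn):
    A[i, js] = A[js, i] = 1.0
Lap = np.diag(A.sum(1)) - A

# ELM hidden layer, then output weights with a graph-Laplacian penalty
# (one common semi-supervised ELM formulation; details may differ from the paper).
W = rng.normal(size=(2, 50)); b = rng.normal(size=50)
H = np.tanh(X @ W + b)
Cdiag = np.zeros(len(X)); Cdiag[labeled] = 10.0      # weight only the labeled samples
Y = np.zeros((len(X), 2)); Y[labeled] = np.eye(2)[y[labeled]]
lam = 0.01
beta = np.linalg.solve(np.eye(50) + H.T @ (Cdiag[:, None] * H) + lam * H.T @ Lap @ H,
                       H.T @ (Cdiag[:, None] * Y))
print("accuracy on all samples:", (np.argmax(H @ beta, 1) == y).mean())
```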
