TROP-ELM: A double-regularized ELM using LARS and Tikhonov regularization
Authors: Yoan Miche, Mark van Heeswijk, Patrick Bas, Olli Simula, Amaury Lendasse
Affiliations:
a. Information and Computer Science Department, Aalto University School of Science and Technology, FI-00076 Aalto, Finland
b. Gipsa-Lab, INPG, 961 rue de la Houille Blanche, BP46, F-38402 Grenoble Cedex, France
Abstract: In this paper, an improvement of the optimally pruned extreme learning machine (OP-ELM), in the form of an L2 regularization penalty applied within the OP-ELM, is proposed. The OP-ELM is a wrapper methodology around the extreme learning machine (ELM), designed to reduce the sensitivity of the ELM to irrelevant variables and to obtain more parsimonious models through neuron pruning. The proposed modification of the OP-ELM uses a cascade of two regularization penalties: first an L1 penalty to rank the neurons of the hidden layer, followed by an L2 penalty on the regression weights (the regression between the hidden layer and the output layer) for numerical stability and efficient pruning of the neurons. The new methodology is tested against state-of-the-art methods, such as support vector machines and Gaussian processes, as well as the original ELM and OP-ELM, on 11 different data sets; it systematically outperforms the OP-ELM (on average, a 27% lower mean square error) and provides more reliable results in terms of the standard deviation of the results, while always remaining less than one order of magnitude slower than the OP-ELM.
Keywords: ELM; Regularization; Ridge regression; Tikhonov regularization; LARS; OP-ELM
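As a concrete illustration of the cascade described in the abstract, the sketch below builds a random ELM hidden layer, uses LARS (the L1 step) to rank hidden neurons by the order in which they enter the active set, and then fits a Tikhonov-regularized (L2) regression from the kept neurons to the output. The sigmoid activation, the fixed number of kept neurons, the fixed regularization parameter lam, and the function names are illustrative assumptions; the paper selects both the pruning point and the Tikhonov parameter via leave-one-out validation rather than fixing them.

```python
import numpy as np
from sklearn.linear_model import lars_path

def trop_elm_fit(X, y, n_neurons=100, n_kept=30, lam=1e-2, seed=0):
    """Minimal sketch of the L1-then-L2 cascade on an ELM hidden layer."""
    rng = np.random.default_rng(seed)
    # Random ELM hidden layer: fixed random weights and biases,
    # sigmoid activation (an assumption; other nonlinearities also work).
    W = rng.standard_normal((X.shape[1], n_neurons))
    b = rng.standard_normal(n_neurons)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))

    # Step 1 (L1 / LARS): rank the hidden neurons by the order in which
    # they enter the active set along the LARS path.
    _, active, _ = lars_path(H, y, method="lar")
    kept = np.asarray(active[:n_kept])  # prune: keep top-ranked neurons only

    # Step 2 (L2 / Tikhonov): ridge regression on the kept neurons,
    # beta = (Hk' Hk + lam*I)^{-1} Hk' y, for numerical stability.
    Hk = H[:, kept]
    beta = np.linalg.solve(Hk.T @ Hk + lam * np.eye(len(kept)), Hk.T @ y)
    return W, b, kept, beta

def trop_elm_predict(X, W, b, kept, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H[:, kept] @ beta

# Toy usage on synthetic data (hypothetical, for shape-checking only):
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
W, b, kept, beta = trop_elm_fit(X, y)
print(trop_elm_predict(X[:5], W, b, kept, beta))
```

Design note: using lars_path only for the neuron entry order, then refitting with ridge on the pruned subset, mirrors the abstract's division of labor, with the L1 step serving ranking/pruning and the L2 step providing the final, numerically stable fit.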