
Convergence of an Online Gradient Learning Algorithm with a Penalty Term and Stochastic Inputs for BP Neural Networks
Cite this article: LU Hui-fang, WU Wei, LI Zheng-xue. Convergence of Online Gradient Method with a Penalty Term for BP Neural Network with Stochastic Inputs[J]. Journal of Mathematical Research with Applications, 2007, 27(3): 643-653.
Authors: LU Hui-fang, WU Wei, LI Zheng-xue
Affiliations: 1. Department of Applied Mathematics, Dalian University of Technology, Dalian, Liaoning 116024, China; Department of Mathematics and Physics, Shandong Jiaotong University, Jinan, Shandong 250023, China
2. Department of Applied Mathematics, Dalian University of Technology, Dalian, Liaoning 116024, China
Abstract: This paper studies the convergence of an online gradient learning algorithm with a penalty term for three-layer BP neural networks. Before each training cycle begins, the training samples are randomly permuted so that the learning can more easily escape from local minima. A monotonicity theorem for the error function and weak and strong convergence theorems for the algorithm are given.

Keywords: BP neural networks  online gradient method  convergence  penalty term  stochastic inputs
Article ID: 1000-341X(2007)03-0643-11
Received: 2005-09-28
Revised: 2005-09-28

Convergence of Online Gradient Method with a Penalty Term for BP Neural Network with Stochastic Inputs
LU Hui-fang, WU Wei and LI Zheng-xue. Convergence of Online Gradient Method with a Penalty Term for BP Neural Network with Stochastic Inputs[J]. Journal of Mathematical Research with Applications, 2007, 27(3): 643-653.
Authors: LU Hui-fang, WU Wei and LI Zheng-xue
Affiliation: Department of Mathematics, Dalian University of Technology, Liaoning 116024, China; Department of Mathematics and Physics, Shandong Jiaotong University, Shandong 250023, China
Abstract: In this paper, we present and discuss an online gradient method with a penalty term for three-layer BP neural networks. The input training examples are permuted stochastically at the beginning of each training cycle so that the learning can more easily escape from local minima. The monotonicity of the error function and weak and strong convergence results of a deterministic nature are proved.
Keywords:BP neural networks  online gradient method  convergence  penalty term  stochastic inputs
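To make the training procedure described in the abstract concrete, the following is a minimal sketch, not taken from the paper, of an online gradient method for a one-hidden-layer (three-layer) BP network with an L2 penalty term, where the training samples are randomly re-ordered before each cycle. The squared-error loss, sigmoid activations, the penalty coefficient lam, the learning rate eta and the toy data are illustrative assumptions; the paper's exact penalty and convergence conditions are in the full text.

# Sketch of an online gradient method with an L2 penalty term and
# per-cycle random permutation of the training samples (assumptions noted above).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data: J samples, n-dimensional inputs, scalar targets (illustrative).
J, n, hidden = 50, 3, 5
X = rng.standard_normal((J, n))
y = sigmoid(X @ rng.standard_normal(n))          # synthetic targets

V = rng.standard_normal((hidden, n)) * 0.1       # input-to-hidden weights
w = rng.standard_normal(hidden) * 0.1            # hidden-to-output weights
eta, lam = 0.1, 1e-4                             # learning rate, penalty coefficient

for cycle in range(200):
    order = rng.permutation(J)                   # stochastic re-ordering before each cycle
    for j in order:
        x, t = X[j], y[j]
        h = sigmoid(V @ x)                       # hidden-layer output
        out = sigmoid(w @ h)                     # network output
        err = out - t
        # Gradients of the instantaneous squared error plus the L2 penalty term
        delta_out = err * out * (1.0 - out)
        grad_w = delta_out * h + lam * w
        grad_V = np.outer(delta_out * w * h * (1.0 - h), x) + lam * V
        w -= eta * grad_w                        # online (per-sample) updates
        V -= eta * grad_V

print("final mean squared error:", np.mean((sigmoid(sigmoid(X @ V.T) @ w) - y) ** 2))

The penalty term contributes lam * w (and lam * V) to each gradient, which discourages unbounded weight growth during online training; the per-cycle permutation of the sample order is what the title refers to as stochastic inputs.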