19 similar documents found (search time: 109 ms)
An improved BP neural network algorithm and its application   Cited by: 3 (self-citations: 0, by others: 3)
To address the shortcomings of the traditional BP algorithm, namely slow convergence, a tendency to fall into local minima, and the lack of unified theoretical guidance for designing the network structure, this paper analyses the problems that common improved algorithms exhibit when optimising neural networks and proposes a comprehensively improved BP algorithm that fuses the ant colony algorithm with BP and introduces an amplification factor. The amplification factor alleviates BP's tendency to become trapped in local minima, the ant colony algorithm is used to guide the design of the network structure, and together they greatly reduce the slow convergence. Finally, the improved and the traditional BP algorithms are both applied to coal mine gas prediction; analysis of the experimental results shows that the improved algorithm outperforms the traditional BP algorithm in both running time and accuracy.
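The abstract does not spell out the update rule; as a minimal sketch of the amplification-factor idea only (the function name, constants, and plateau trigger are all assumptions, not the paper's formulation), a BP-style step that is enlarged when the gradient flattens out could look like this:

```python
import numpy as np

def bp_step(w, grad, lr=0.1, beta=2.0, plateau=1e-4):
    """One BP-style weight update with a hypothetical amplification factor.

    When the gradient norm drops below `plateau` (a flat region or a
    shallow local minimum), the step is scaled by `beta` to push the
    weights out of it. Thresholds and constants are illustrative only.
    """
    step = lr * np.asarray(grad)
    if np.linalg.norm(grad) < plateau:
        step = beta * step  # amplify the step in flat regions
    return w - step
```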
5.
Solving systems of nonlinear equations based on evolution strategies   Cited by: 1 (self-citations: 0, by others: 1)
Traditional algorithms for solving systems of nonlinear equations are sensitive to the initial point and, being serial, run slowly. This paper proposes an evolution strategy algorithm for such systems. By fully exploiting the population-based search and global convergence properties of evolution strategies, the algorithm finds roots of a nonlinear system quickly and effectively overcomes the classical methods' initial-point sensitivity and slowness. Simulations show that it achieves higher solution quality and efficiency than classical methods, improved genetic algorithms, and neural network approaches, providing an effective route to solving systems of nonlinear equations.
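For illustration, a minimal (mu+lambda) evolution strategy that recasts F(x) = 0 as minimising the residual norm, so that no initial point or derivative information is needed; population sizes, bounds, and the step-size decay are assumed defaults, not the paper's settings:

```python
import numpy as np

def solve_system_es(F, dim, pop=30, offspring=90, sigma=0.5,
                    iters=500, tol=1e-8, seed=0):
    """Sketch of a (mu+lambda) evolution strategy for F(x) = 0.

    The system is recast as minimising the residual ||F(x)||^2, so the
    population-based search needs no initial point or derivatives.
    """
    rng = np.random.default_rng(seed)
    xs = rng.uniform(-5, 5, size=(pop, dim))        # random initial population
    for _ in range(iters):
        parents = xs[rng.integers(0, pop, offspring)]
        kids = parents + sigma * rng.standard_normal((offspring, dim))
        both = np.vstack([xs, kids])
        fit = np.array([np.sum(np.asarray(F(x)) ** 2) for x in both])
        xs = both[np.argsort(fit)[:pop]]            # (mu+lambda) selection
        if fit.min() < tol:
            break
        sigma *= 0.99                               # simple step-size decay
    return xs[0]

# Example: x^2 + y^2 = 1 and x = y, root near (0.707, 0.707).
root = solve_system_es(lambda v: [v[0]**2 + v[1]**2 - 1, v[0] - v[1]], dim=2)
```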
7.
Gradient descent is the optimisation algorithm most commonly used to train convolutional neural networks, and its performance directly determines whether training converges. This paper analyses two problems in current gradient-based optimisers, overshoot that harms convergence and poor learning-rate adaptivity, and proposes an adaptive gradient optimisation algorithm with a differential term, aiming to improve the convergence of the optimisation process while raising the convergence rate. First, to counter the large overshoot, the iteration is restructured and, borrowing from classical control theory, a differential term is introduced to overcome the lag of the weight update behind the actual change of the gradient. Next, an adaptive mechanism is introduced to address the poor convergence and slow convergence rate caused by an ill-suited learning rate. The worst-case performance upper bound (regret bound) of the new algorithm is then proved to be ■, using the Cauchy-Schwarz and Young inequalities. Finally, simulations on the MNIST and CIFAR-10 benchmark datasets verify the effectiveness of the new algorithm: the combination of the introduced differential term and the adaptive mechanism markedly improves the convergence behaviour of gradient descent.
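A generic sketch in the spirit of the description, not the paper's algorithm: a per-coordinate adaptive step combined with a differential (PD-controller-style) term on the gradient; all names and coefficients are assumptions:

```python
import numpy as np

def adaptive_d_step(w, grad, state, lr=0.01, kd=0.5, eps=1e-8):
    """One step of an adaptive gradient update with a differential term.

    The difference (grad - previous grad) acts like the D term of a PD
    controller, anticipating gradient changes to damp overshoot, while
    the accumulated squared gradient adapts the per-coordinate step size.
    """
    grad = np.asarray(grad, dtype=float)
    state["acc"] = state.get("acc", 0.0) + grad ** 2  # adaptivity accumulator
    d_term = grad - state.get("prev", grad)           # differential term
    step = lr * (grad + kd * d_term) / (np.sqrt(state["acc"]) + eps)
    state["prev"] = grad
    return w - step
```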
8.
Applying classical compressed sensing theory to direction-of-arrival (DOA) estimation for antenna array signals suffers from basis mismatch. Grid-less compressive sensing based on the alternating direction method of multipliers (ADMM) resolves the mismatch but still converges slowly. To remedy this, an ADMM with adaptive penalty (AP-ADMM) is proposed: the initial value of the penalty parameter is chosen adaptively from the noise power of the input signal, and the penalty term of the objective function is further adjusted adaptively during the iterations. Compared with the conventional algorithm, AP-ADMM converges markedly faster while preserving convergence accuracy and the probability of successful DOA recovery. Simulation results verify the effectiveness of the new algorithm.
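The rule below is the standard residual-balancing heuristic from the ADMM literature, shown only to illustrate what adapting the penalty during the iterations can look like; AP-ADMM's own update, and its noise-power-based initialisation of the penalty, may differ:

```python
def update_penalty(rho, r_primal, s_dual, mu=10.0, tau=2.0):
    """Residual-balancing update for the ADMM penalty parameter rho.

    If the primal residual dominates, the penalty is increased to weight
    constraint satisfaction more heavily; if the dual residual dominates,
    the penalty is relaxed. mu and tau are conventional default values.
    """
    if r_primal > mu * s_dual:
        return rho * tau
    if s_dual > mu * r_primal:
        return rho / tau
    return rho
```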
10.
The BP (error back-propagation) algorithm is among the most widely used algorithms for feedforward neural networks. Building on a study of feedforward networks and the traditional BP algorithm, problems in the traditional algorithm are identified. By introducing a measure of network complexity, a new improved variant, the network-complexity-based BP algorithm, is proposed. It can remove redundant connections and even nodes, and by dynamically adjusting the learning step size it avoids slow convergence and repeated oscillation. Experiments show that, to a certain extent, the algorithm outperforms traditional BP.
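As a hedged sketch of the two mechanisms described (magnitude-based removal of redundant connections and dynamic step-size adjustment), with thresholds and factors that are assumptions rather than the paper's values:

```python
import numpy as np

def prune_and_adapt(W, lr, err_hist, prune_th=1e-3, up=1.05, down=0.7):
    """Illustrative connection pruning plus dynamic learning-rate control.

    Weights below `prune_th` in magnitude are zeroed, removing redundant
    connections (a node whose weights are all zero is effectively gone);
    the learning rate grows while the error falls and shrinks when it
    oscillates, countering slow convergence and repeated oscillation.
    """
    W = np.where(np.abs(W) < prune_th, 0.0, W)
    if len(err_hist) >= 2:
        lr = lr * up if err_hist[-1] < err_hist[-2] else lr * down
    return W, lr
```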
11.
In this paper, the deterministic convergence of an online gradient method with penalty and momentum is investigated for training two-layer feedforward neural networks. The monotonicity of the new error function with the penalty term during training is first proved. Based on this result, we show that the weights remain uniformly bounded during training and that the algorithm is deterministically convergent. Sufficient conditions are also provided for both weak and strong convergence results.
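Schematically, an online gradient update with a weight-norm penalty and a momentum term has the form below, where eta is the learning rate, lambda the penalty coefficient, and mu the momentum factor (generic notation assumed here, not necessarily the paper's):

```latex
w^{k+1} = w^{k} - \eta \left( \nabla E(w^{k}) + \lambda w^{k} \right) + \mu \left( w^{k} - w^{k-1} \right)
```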
12.
Training pi-sigma network by online gradient algorithm with penalty for small weight update   Cited by: 3 (self-citations: 0, by others: 3)
A pi-sigma network is a class of feedforward neural networks with product units in the output layer. The online gradient algorithm is the simplest and most often used training method for feedforward neural networks. A problem arises, however, when it is applied to pi-sigma networks: the update increments of the weights may become very small, especially early in training, resulting in very slow convergence. To overcome this difficulty, we introduce an adaptive penalty term into the error function that increases the magnitude of the update increment of the weights whenever it is too small. As the numerical experiments carried out in this letter show, the strategy brings about faster convergence.
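A minimal sketch of the idea (constants and the trigger are assumptions, not the letter's exact error function): when the plain gradient step would be tiny, an extra weight-dependent term is switched on so the update increment stays usefully large early in training:

```python
import numpy as np

def penalised_step(w, grad, lr=0.1, delta=1e-3, alpha=0.5):
    """Update with an adaptive penalty against vanishing weight updates.

    If the plain step is smaller than `delta`, a term that pushes the
    weights away from zero is added, enlarging the update increment;
    otherwise the step is the usual online gradient step.
    """
    step = lr * np.asarray(grad)
    if np.linalg.norm(step) < delta:
        step = step - lr * alpha * np.asarray(w)  # adaptive penalty kicks in
    return w - step
```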
13.
This paper investigates an online gradient method with penalty for training feedforward neural networks with linear output. The usual penalty is considered, a term proportional to the norm of the weights. The main contribution is a theoretical proof that the weights remain bounded during the network training process; this boundedness is then used to prove almost sure convergence of the algorithm to the zero set of the gradient of the error function.
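In generic notation (assumed here, not taken verbatim from the paper), the penalised error and the online update it induces are:

```latex
E_{\lambda}(w) = E(w) + \lambda \lVert w \rVert^{2},
\qquad
w^{k+1} = w^{k} - \eta_{k} \nabla E_{\lambda}(w^{k})
```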
14.
Hongmei Shao, Dongpo Xu, Gaofeng Zheng, Lijun Liu. Neurocomputing, 2012, 77(1): 243-252.
In this paper, we study the convergence of an online gradient method with an inner-product penalty and adaptive momentum for feedforward neural networks, assuming that the training samples are permuted stochastically in each cycle of iteration. Both two-layer and three-layer neural network models are considered, and two convergence theorems are established, with sufficient conditions proposed for weak and strong convergence results. The algorithm is applied to the classical two-spiral problem and to a Gabor function identification problem to support these theoretical findings.
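An illustrative training cycle under the paper's sampling assumption, with a fixed momentum factor and a plain weight-decay penalty standing in for the paper's adaptive momentum and inner-product penalty (both substitutions are simplifications):

```python
import numpy as np

def train_cycle(w, m, samples, grad_fn, lr=0.05, mu=0.9, lam=1e-4, seed=None):
    """One online-gradient cycle with stochastic sample permutation.

    The training set is re-permuted at the start of each cycle; `lam * w`
    is a simple stand-in for the penalty gradient and `mu` a fixed
    momentum factor (the paper's versions are adaptive).
    """
    rng = np.random.default_rng(seed)
    for i in rng.permutation(len(samples)):
        g = grad_fn(w, samples[i]) + lam * w  # penalised gradient
        m = mu * m + lr * g                   # momentum accumulation
        w = w - m
    return w, m
```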
15.
This paper investigates the split-complex back-propagation algorithm with momentum and penalty for training complex-valued neural networks. The momentum term is used to accelerate the convergence of the algorithm and the penalty term to control the magnitude of the network weights. Sufficient conditions on the learning rate, the momentum factor, the penalty coefficient, and the activation functions are proposed to establish the theoretical results. We theoretically prove that the network weights remain bounded during the training process, a property usually taken as a precondition for convergence analysis in the literature. The monotonicity of the error function and the convergence of the algorithm are also guaranteed.
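For context, "split-complex" means a real activation is applied separately to the real and imaginary parts of a complex signal, which keeps the network differentiable in the real sense; a one-line illustration:

```python
import numpy as np

def split_complex_tanh(z):
    """Split-complex activation: tanh applied to the real and imaginary
    parts separately, as used in split-complex back-propagation."""
    z = np.asarray(z, dtype=complex)
    return np.tanh(z.real) + 1j * np.tanh(z.imag)
```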
16.
A class of neural networks that solve linear programming problems is analyzed. The neural networks considered are modeled by dynamic gradient systems constructed from a parametric family of exact (nondifferentiable) penalty functions. It is proved that for a given linear programming problem and sufficiently large penalty parameters, any trajectory of the neural network converges in finite time to its solution set. For the analysis, Lyapunov-type theorems are developed for finite-time convergence of nonsmooth sliding-mode dynamic systems to invariant sets. The results are illustrated via numerical simulation examples.
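A rough Euler simulation of such a dynamic gradient system for min c.x subject to Ax <= b, built from an exact l1-type penalty; the step size, penalty weight, and iteration count are assumptions, and this discretisation only illustrates the construction, it is not the paper's network:

```python
import numpy as np

def lp_penalty_flow(c, A, b, x0, K=100.0, dt=1e-3, steps=20000):
    """Euler-discretised gradient system for a linear program using an
    exact (nonsmooth) penalty: minimise c@x + K * sum(max(0, A@x - b)).

    For sufficiently large K the exact penalty is known to recover the
    constrained optimum; the constants here are illustrative only.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        viol = A @ x - b
        subgrad = c + K * (A.T @ (viol > 0).astype(float))  # penalty subgradient
        x -= dt * subgrad
    return x
```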
17.
Boundedness and Convergence of Online Gradient Method With Penalty for Feedforward Neural Networks   Cited by: 1 (self-citations: 0, by others: 1)
IEEE Transactions on Neural Networks, 2009, 20(6): 1050-1054.
18.
A new algorithm, mean field annealing (MFA), is applied to the graph-partitioning problem. MFA combines characteristics of the simulated-annealing (SA) algorithm and the Hopfield neural network: it exhibits the rapid convergence of the neural network while preserving the solution quality afforded by SA. On graph bipartitioning problems, MFA converges 10-100 times faster than SA with nearly equal solution quality. A modification of MFA is also presented that supports partitioning graphs into three or more bins, a problem that had previously resisted solution by neural networks. The temperature behavior of MFA during graph partitioning is analyzed approximately and shown to possess a critical temperature at which most of the optimization occurs; this temperature is analogous to the gain of the neurons in a neural network and can be used to tune such networks for better performance. The value of the repulsion penalty needed to force MFA (or a neural network) to divide a graph into equal-sized pieces is also estimated.
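A compact sketch of mean-field annealing for bipartitioning, with a repulsion penalty that pushes the two halves toward equal size; the cooling schedule and all constants are illustrative assumptions:

```python
import numpy as np

def mfa_bipartition(Wadj, T0=5.0, Tmin=0.05, cool=0.95, gamma=1.0, sweeps=50):
    """Mean-field annealing sketch for graph bipartitioning.

    Each vertex carries a continuous spin s_i in (-1, 1). At each
    temperature the spins relax toward their mean-field values; the
    `gamma` term penalises unbalanced partitions, and cooling sharpens
    the spins until a discrete partition emerges.
    """
    n = len(Wadj)
    s = 0.1 * np.random.default_rng(0).standard_normal(n)
    T = T0
    while T > Tmin:
        for _ in range(sweeps):
            field = Wadj @ s - gamma * s.sum() / n  # neighbour pull vs balance
            s = np.tanh(field / T)                  # mean-field update
        T *= cool                                   # anneal the temperature
    return s > 0                                    # boolean side assignment
```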