20 similar documents found; search time: 140 ms
1.
2.
Dynamic Knowledge Acquisition and Revision in Bayesian Networks  Cited by: 1 (self-citations: 0, others: 1)
Online parameter learning for Bayesian networks is studied. The characteristics of the ML and Voting EM algorithms are analyzed, revealing their shortcomings in adapting quickly to changes in sample characteristics and in predicting and determining algorithm parameters, and a hybrid online learning algorithm based on the two methods is proposed. The improved algorithm dynamically acquires and determines the Bayesian network by correcting parameter errors and adjusting data-volume weights. Results show that, compared with the Voting EM method, the improved algorithm acquires knowledge parameters faster and achieves higher knowledge-verification accuracy.
3.
To address the fact that the LM algorithm cannot train RBF networks online, as well as shortcomings of existing RBF structure-design algorithms, an online adaptive RBF network structure-optimization algorithm based on the LM algorithm is proposed. The algorithm introduces a sliding window together with online structure optimization. The sliding window both enables the LM algorithm to train the RBF network online and makes the network more robust to changes in the learning parameters and easier to converge. Online structure optimization lets the network adaptively adjust its structure during learning, based on the training error and information about the hidden nodes, so that it tracks a nonlinear time-varying system while keeping the most compact structure and preserving generalization performance. Simulation experiments verify the performance of the proposed algorithm.
4.
An online modeling and self-tuning method for nonlinear systems using wavelet neural networks is proposed. A method for constructing the wavelet-node library is presented first, together with a procedure for determining the wavelet network structure and estimating its weights. On this basis, a finite-memory least-squares method is used to design the self-learning modeling and online correction algorithms for the wavelet network. The algorithm automatically builds a wavelet network model from the system's input-output data, and the wavelet network obtained by online correction is optimal under a given criterion.
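The finite-memory least-squares idea the abstract relies on can be illustrated with a sliding-window fit: only the most recent samples enter the estimate, so the model tracks a time-varying plant. The window length, the linear-in-parameters basis (a stand-in for the wavelet node library), and the synthetic system below are assumptions for the demo, not the paper's construction.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(0)
N = 50
window = deque(maxlen=N)            # finite memory: only the last N samples

def basis(x):                       # illustrative stand-in for wavelet bases
    return np.array([1.0, x, np.sin(3 * x)])

theta = np.zeros(3)
for t in range(400):
    x = rng.uniform(-2, 2)
    a = 1.0 if t < 200 else 2.0     # plant parameter drifts mid-run
    y = a * np.sin(3 * x) + 0.5 * x
    window.append((basis(x), y))
    if len(window) == N:
        Phi = np.array([p for p, _ in window])
        Y = np.array([v for _, v in window])
        # least squares over the window only -> tracks the drift
        theta = np.linalg.lstsq(Phi, Y, rcond=None)[0]
```

Because the final window contains only post-change samples, the estimate settles on the new coefficients rather than averaging over the whole history.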
5.
Online adaptation for stereo depth estimation is a challenging problem: the model must continuously adjust itself online and adapt to the current environment as the target scene keeps changing. To handle this, a new online meta-learning adaptation algorithm (Online meta-learning model with adaptation, OMLA) is proposed, with two main contributions. First, an online feature-alignment method handles the distribution shift between target-domain and source-domain features, reducing the effect of domain transfer. Second, online meta-learning adjusts both the feature-alignment process and the network weights so that the model converges quickly. In addition, a new meta-learning-based pre-training method is proposed to obtain network parameters suited to online learning. Experiments show that both OMLA and the meta-learning pre-training help the model adapt quickly to new scenes; comparisons on the KITTI dataset show that the method outperforms the current best online adaptation algorithms and approaches, or even exceeds, an ideal model trained offline on the target domain.
6.
To enable online learning over data collected by a distributed network, a distributed online learning optimization algorithm based on the alternating direction method of multipliers (ADMM) is proposed: distributed online ADMM (DOM). First, a mathematical model is built for the requirement that each node update its local estimate from newly collected data while the estimates of all nodes in the network remain consistent, and the DOM algorithm is designed to solve it. Second, a regret bound is defined for the distributed online learning problem to characterize online estimation performance; DOM is proved to converge when the local instantaneous loss functions are convex, and its convergence rate is given. Finally, numerical simulations show that DOM converges faster than the existing distributed online gradient descent (DOGD) and distributed autonomous online learning (DAOL) algorithms.
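The consensus mechanism that ADMM provides can be sketched on a toy problem: each node holds a local quadratic loss and the iterations drive all local estimates to a common value. This is plain consensus ADMM with assumed names and parameters, not the paper's DOM algorithm or its regret analysis.

```python
import numpy as np

# n nodes each observe a local target b_i and jointly minimize
# sum_i (x_i - b_i)^2 subject to x_i = z (all local estimates agree).
rng = np.random.default_rng(0)
b = rng.normal(size=5)           # local observations at 5 nodes
rho = 1.0                        # ADMM penalty parameter (assumed)
x = np.zeros(5)                  # local primal variables
z = 0.0                         # global consensus variable
u = np.zeros(5)                  # scaled dual variables

for _ in range(50):
    # x-update: argmin (x_i - b_i)^2 + (rho/2)(x_i - z + u_i)^2
    x = (2 * b + rho * (z - u)) / (2 + rho)
    # z-update: average of x_i + u_i (the consensus step)
    z = np.mean(x + u)
    # dual update: penalize remaining disagreement
    u = u + x - z
```

At convergence every local estimate equals the network-wide average of the observations, which is the consensus solution of this toy objective.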
7.
For the regulation control of model-unknown linear discrete-time systems subject to disturbances, an off-policy H∞ control method based on input-output data feedback is proposed. Starting from a state-feedback online learning algorithm, and considering that state data are hard to measure during system operation, an augmented data vector is introduced to convert the state-feedback policy-iteration online learning algorithm into an input-output data-feedback one. Further, by introducing auxiliary terms, the input-output data-feedback policy-iteration algorithm is converted into a model-free off-policy learning algorithm based on input-output data. The algorithm learns the optimal output-feedback policy from historical input-output data, avoiding the frequent interaction with the real environment that on-policy algorithms require. Moreover, compared with on-policy algorithms, the off-policy algorithm suppresses the influence of learning noise, so the learning result converges to the theoretical optimum. Simulation experiments verify the convergence of the learning algorithm.
8.
RBF Neural Network Structure Design Based on Online Subtractive Clustering  Cited by: 2 (self-citations: 1, others: 1)
Aiming at designing a minimal radial basis function (RBF) neural network structure, an online RBF structure-design algorithm is proposed. The algorithm combines the ability of online subtractive clustering to track operating conditions in real time with the parameter-learning process of the RBF network, so the network can adapt online to a time-varying plant while keeping a compact structure, effectively solving the self-organization problem of RBF network structure. Because only the kernel function closest (in Euclidean distance) to the current operating condition is adjusted, learning is greatly accelerated. Simulations on typical nonlinear function approximation and chaotic time-series prediction show that the algorithm has good dynamic response and approximation capability.
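The "adjust only the nearest kernel" idea can be sketched as an online centre-placement loop: a sample far from all current centres spawns a new RBF centre, otherwise only the nearest centre is nudged toward it. The distance threshold and step size are illustrative, and this simplified rule only stands in for the paper's subtractive-clustering criterion.

```python
import numpy as np

rng = np.random.default_rng(0)
centres = []                 # RBF centres, grown online
radius, step = 2.5, 0.1      # assumed threshold and learning rate

for _ in range(300):
    # stream of samples from two well-separated operating regimes
    x = rng.normal(size=2) + rng.choice([-3.0, 3.0])
    if not centres:
        centres.append(x.copy())
        continue
    d = [np.linalg.norm(x - c) for c in centres]
    k = int(np.argmin(d))
    if d[k] > radius:
        centres.append(x.copy())              # new hidden node
    else:
        centres[k] += step * (x - centres[k])  # move nearest centre only
```

Each update touches a single centre, which is what keeps the per-sample cost low; the network grows only when a sample falls outside every existing kernel's neighbourhood.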
9.
10.
To construct an effective fuzzy neural network quickly, a self-organizing learning algorithm based on the extended Kalman filter (EKF) is proposed. Rules are added according to a growth criterion that requires no pruning step, which speeds up online learning, and the free parameters of the network are updated with the EKF, which improves robustness. Simulation results show that the algorithm learns quickly and has good approximation accuracy and generalization.
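Treating network weights as the state of a Kalman filter, as the abstract describes, can be sketched for a single sigmoid unit: the network output is the measurement and the EKF updates the weights from the innovation. The unit, noise covariances, and data below are assumptions for the demo, not the paper's fuzzy-network formulation.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(1)
w_true = np.array([1.5, -0.7])   # weights generating the data
w = np.zeros(2)                  # state estimate (network weights)
P = np.eye(2) * 10.0             # state covariance
Q = np.eye(2) * 1e-6             # process noise (slow drift assumed)
R = 1e-2                         # measurement noise variance (assumed)

for _ in range(500):
    x = rng.normal(size=2)
    y = sigmoid(w_true @ x)               # noiseless target for the demo
    y_hat = sigmoid(w @ x)
    H = y_hat * (1 - y_hat) * x           # Jacobian of output w.r.t. w
    P = P + Q                             # predict
    S = H @ P @ H + R                     # innovation variance (scalar)
    K = P @ H / S                         # Kalman gain
    w = w + K * (y - y_hat)               # correct weights
    P = P - np.outer(K, H) @ P            # covariance update

# predictions should now match the generating unit closely
xs = rng.normal(size=(200, 2))
err = np.max(np.abs(sigmoid(xs @ w) - sigmoid(xs @ w_true)))
```

The same update generalizes to all free parameters of a fuzzy network by stacking them into one state vector and using the corresponding Jacobian row.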
11.
Ensemble of online sequential extreme learning machine  Cited by: 3 (self-citations: 0, others: 3)
Liang et al. [A fast and accurate online sequential learning algorithm for feedforward networks, IEEE Transactions on Neural Networks 17 (6) (2006) 1411–1423] proposed an online sequential learning algorithm called the online sequential extreme learning machine (OS-ELM), which can learn data one-by-one or chunk-by-chunk with fixed or varying chunk size. They showed that OS-ELM runs much faster and generalizes better than other popular sequential learning algorithms. However, we find that the stability of OS-ELM can be further improved. In this paper, we propose an ensemble of online sequential extreme learning machines (EOS-ELM) based on OS-ELM. The results show that EOS-ELM is more stable and accurate than the original OS-ELM.
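The OS-ELM recursion this work builds on can be sketched as follows: a random, fixed hidden layer, an initial batch solved with a (lightly regularized) pseudo-inverse, then recursive least-squares updates for each arriving chunk. Network sizes and the toy regression target are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden = 25
W = rng.uniform(-1, 1, (1, n_hidden))   # random input weights (fixed)
b = rng.uniform(-1, 1, n_hidden)        # random hidden biases (fixed)

def hidden(X):
    return np.tanh(X @ W + b)           # hidden-layer output matrix H

# --- initialization phase: small batch, least-squares solution ---
X0 = rng.uniform(-3, 3, (50, 1))
y0 = np.sin(X0).ravel()
H0 = hidden(X0)
P = np.linalg.inv(H0.T @ H0 + 1e-6 * np.eye(n_hidden))
beta = P @ H0.T @ y0

# --- sequential phase: chunk-by-chunk RLS update, no retraining ---
for _ in range(40):
    Xk = rng.uniform(-3, 3, (20, 1))
    yk = np.sin(Xk).ravel()
    Hk = hidden(Xk)
    S = np.linalg.inv(np.eye(len(Xk)) + Hk @ P @ Hk.T)
    P = P - P @ Hk.T @ S @ Hk @ P
    beta = beta + P @ Hk.T @ (yk - Hk @ beta)

Xt = np.linspace(-3, 3, 200).reshape(-1, 1)
rmse = np.sqrt(np.mean((hidden(Xt) @ beta - np.sin(Xt).ravel()) ** 2))
```

The ensemble idea of the paper amounts to running several such learners with independently drawn `W` and `b` and averaging their predictions.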
12.
13.
Extreme learning machine (ELM) [G.-B. Huang, Q.-Y. Zhu, C.-K. Siew, Extreme learning machine: a new learning scheme of feedforward neural networks, in: Proceedings of the International Joint Conference on Neural Networks (IJCNN2004), Budapest, Hungary, 25–29 July 2004], a novel learning algorithm much faster than traditional gradient-based learning algorithms, was recently proposed for single-hidden-layer feedforward neural networks (SLFNs). However, ELM may need a larger number of hidden neurons because the input weights and hidden biases are determined randomly. In this paper, a hybrid learning algorithm is proposed that uses a differential evolution algorithm to select the input weights and the Moore-Penrose (MP) generalized inverse to analytically determine the output weights. Experimental results show that this approach achieves good generalization performance with much more compact networks.
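The hybrid scheme, a search over input weights combined with analytic output weights, can be sketched with random restarts standing in for differential evolution: each candidate's output weights come from the Moore-Penrose pseudo-inverse, and the candidate with the lowest training error is kept. All sizes and the toy target are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (100, 2))
y = np.sin(np.pi * X[:, 0]) * X[:, 1]      # toy regression target
n_hidden = 15

best_err, best = np.inf, None
for _ in range(20):                        # candidate "population"
    W = rng.uniform(-1, 1, (2, n_hidden))  # candidate input weights
    b = rng.uniform(-1, 1, n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H) @ y           # analytic output weights (MP inverse)
    err = np.mean((H @ beta - y) ** 2)
    if err < best_err:
        best_err, best = err, (W, b, beta)
```

Differential evolution would replace the blind restarts with mutation and crossover over the candidate weight vectors, but the division of labour is the same: search the input side, solve the output side in closed form.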
14.
Thanks to their efficient perception, decision-making, and execution capabilities, robots are of great value in artificial intelligence, information technology, and intelligent manufacturing. Robot learning and control has become one of the important frontiers of robotics research, and a variety of neural-network-based intelligent algorithms have been designed to provide robotic systems with frameworks for simultaneous learning and control. This survey first reviews the state of the art of neural-network-based robot learning and control from four directions: neural dynamics (ND) algorithms, feedforward neural networks (FNNs), recurrent neural networks (RNNs), and reinforcement learning (RL), covering intelligent algorithms and related application technologies for robot learning and control over the past three decades. Finally, open problems and development trends in the field are discussed, with the aim of advancing robot learning and control theory and broadening its application scenarios.
15.
The concepts of subspace information quantity (SIQ) and its criterion (SIQC) are proposed theoretically. On this basis, the theory of feedforward neural network design under this criterion is developed, including the hidden-layer information quantity (HLIQ) and existence and approximation theorems, which give guidance for choosing the number of hidden neurons, the weight-vector set, and the hidden-layer activation functions. A feasible suboptimal network-design algorithm based on this theory is then proposed. Finally, the network performance indices and the factors affecting them are analyzed in detail. The theory and methods overcome various drawbacks of traditional learning algorithms and enrich the theoretical basis of feedforward network design, offering both theoretical guidance and practical value; concrete examples verify their feasibility and advantages.
16.
The mathematical essence and structure of feedforward neural networks are investigated in this paper, and their interpolation mechanisms are explored. For example, the well-known result that a neural network is a universal approximator follows naturally from the interpolative representations. Finally, the learning algorithms of feedforward neural networks are discussed.
17.
José B. Aragão Jr., Computers & Electrical Engineering, 2010, 36(3): 536–544
Voice over IP (VoIP) applications require a buffer at the receiver to minimize packet loss due to late arrival. Several algorithms are available in the literature to estimate the playout buffer delay. Classic estimation algorithms are non-adaptive, i.e., they differ from more recent approaches mainly in their lack of learning mechanisms. This paper introduces two new formulations of adaptive algorithms for online learning and prediction of the playout buffer delay, the first based on the standard Box-Jenkins autoregressive model and the second on feedforward and recurrent neural networks. The results obtained indicate that the proposed algorithms outperform the classic ones overall.
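The autoregressive flavour of playout-delay prediction can be sketched by fitting an AR model to an observed delay trace with least squares and predicting each delay from the previous few. The AR order and the synthetic delay process below are assumptions for the demo, not the paper's Box-Jenkins formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 3                                    # assumed AR order

# synthetic, mildly correlated packet delays (ms)
d = [50.0, 52.0, 51.0]
for _ in range(300):
    d.append(0.6 * d[-1] + 0.3 * d[-2] + 5.0 + rng.normal(scale=0.5))
d = np.array(d)

# fit AR(p) + intercept by least squares over the observed trace
Phi = np.column_stack([d[p - 1:-1],      # lag 1
                       d[p - 2:-2],      # lag 2
                       d[p - 3:-3],      # lag 3
                       np.ones(len(d) - p)])
coef = np.linalg.lstsq(Phi, d[p:], rcond=None)[0]

pred = Phi @ coef                        # one-step-ahead predictions
rmse = np.sqrt(np.mean((pred - d[p:]) ** 2))
```

An online variant would refit (or recursively update) `coef` as each packet's delay is observed, which is what the adaptive formulation in the paper adds over a fixed model.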
18.
19.
Regularized online sequential learning algorithm for single-hidden layer feedforward neural networks  Cited by: 3 (self-citations: 0, others: 3)
Online learning algorithms are preferred in many applications because they can learn from sequentially arriving data. One effective algorithm recently proposed for training single-hidden-layer feedforward neural networks (SLFNs) is the online sequential extreme learning machine (OS-ELM), which can learn data one-by-one or chunk-by-chunk at fixed or varying sizes. It is based on the ideas of the extreme learning machine (ELM), in which the input weights and hidden-layer biases are chosen randomly and the output weights are then determined by a pseudo-inverse operation. The learning speed of this algorithm is extremely high. However, it does not yield good generalization models for noisy data, and its parameters are difficult to initialize so as to avoid singular and ill-posed problems. In this paper, we propose an improvement of OS-ELM based on a bi-objective optimization approach: it minimizes the empirical error while keeping the norm of the network weight vector small. Singular and ill-posed problems are overcome by Tikhonov regularization. The approach can also learn data one-by-one or chunk-by-chunk. Experimental results show better generalization performance of the proposed approach on benchmark datasets.
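The Tikhonov-regularized output-weight solve the paper relies on can be sketched directly: instead of the plain pseudo-inverse, minimize ||Hβ − y||² + λ||β||², which keeps the solve well-posed even when HᵀH is near-singular (here forced by an overcomplete hidden layer). Sizes and λ are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (30, 1))
y = np.cos(X).ravel()

# deliberately overcomplete hidden layer: 60 units, 30 samples,
# so H.T @ H is rank-deficient and the unregularized solve is ill-posed
W = rng.uniform(-1, 1, (1, 60))
b = rng.uniform(-1, 1, 60)
H = np.tanh(X @ W + b)

lam = 1e-3   # Tikhonov regularization parameter (assumed)
beta_reg = np.linalg.solve(H.T @ H + lam * np.eye(60), H.T @ y)

train_mse = np.mean((H @ beta_reg - y) ** 2)
```

The λI term is exactly what the bi-objective view contributes: the solve stays numerically stable and the weight norm stays small, at a negligible cost in training error on smooth targets.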
20.
A backpropagation learning algorithm for feedforward neural networks with an adaptive learning rate is derived. The algorithm is based upon minimising the instantaneous output error and does not include any simplifications encountered in the corresponding Least Mean Square (LMS) algorithms for linear adaptive filters. The backpropagation algorithm with an adaptive learning rate, which is derived based upon the Taylor series expansion of the instantaneous output error, is shown to exhibit behaviour similar to that of the Normalised LMS (NLMS) algorithm. Indeed, the derived optimal adaptive learning rate of a neural network trained by backpropagation degenerates to the learning rate of the NLMS for a linear activation function of a neuron. By continuity, the optimal adaptive learning rate for neural networks imposes additional stabilisation effects on the traditional backpropagation learning algorithm.
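The NLMS behaviour the abstract refers to can be shown in its linear-filter form: the step size is normalized by the input's squared norm, which is the adaptive learning rate that a linear-activation network degenerates to. The filter length, step size, and regularizer below are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([0.5, -1.0, 2.0])   # unknown filter to identify
w = np.zeros(3)                       # adaptive filter weights
mu, eps = 0.5, 1e-8                   # step size and regularizer (assumed)

for _ in range(300):
    x = rng.normal(size=3)
    d = w_true @ x                    # desired output (noiseless demo)
    e = d - w @ x                     # instantaneous output error
    # NLMS: learning rate mu / ||x||^2 adapts to the input energy
    w = w + (mu / (eps + x @ x)) * e * x
```

Normalizing by `x @ x` is what stabilizes the update: large-energy inputs get proportionally smaller steps, which is the stabilisation effect the abstract attributes to the optimal adaptive learning rate.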