Similar Documents
20 similar documents retrieved.
1.
A Chebyshev polynomial-based unified model (CPBUM) neural network is introduced and applied to control a magnetic bearing system. First, we show that the CPBUM neural network not only has the same universal approximation capability as conventional feedforward/recurrent neural networks, but also learns faster, which makes it more suitable for controller design. Second, we propose an inverse system method, based on the CPBUM neural network, to control a magnetic bearing system. The proposed controller has two structures, namely off-line and on-line learning structures, and a new learning algorithm is derived for each. Experimental results show that the proposed neural network architecture provides greater flexibility and better performance in controlling magnetic bearing systems.
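The CPBUM architecture itself is not spelled out in the abstract, but the general idea of a Chebyshev functional expansion feeding a trainable linear layer can be sketched as follows; the function names, the least-squares readout and the toy data are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def chebyshev_features(x, order):
    """Expand scalar inputs into Chebyshev polynomials T_0 .. T_order."""
    T = [np.ones_like(x), x]
    for _ in range(2, order + 1):
        T.append(2 * x * T[-1] - T[-2])       # recurrence T_{n+1} = 2x*T_n - T_{n-1}
    return np.stack(T[: order + 1], axis=1)

# Toy illustration: learn a nonlinear map with a linear readout over the expansion.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200)
y = np.sin(np.pi * x) + 0.05 * rng.standard_normal(200)

Phi = chebyshev_features(x, order=6)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # only the output weights are trained
print("training RMSE:", np.sqrt(np.mean((Phi @ w - y) ** 2)))
```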

2.
A new autopilot design for bank-to-turn (BTT) missiles is presented. In the autopilot design, a ridge Gaussian neural network with local learning capability and fewer tuning parameters than Gaussian neural networks is proposed to model the controlled nonlinear systems. We prove that the proposed ridge Gaussian neural network, which can be a universal approximator, is equivalent to an expansion of rotated and scaled Gaussian functions. Although ridge Gaussian neural networks can approximate nonlinear and complex systems accurately, even small approximation errors may significantly affect the tracking performance. Therefore, by employing H∞ control theory, the effects of the approximation errors of the ridge Gaussian neural networks are attenuated to a prescribed level. Computer simulation results confirm the effectiveness of the proposed ridge Gaussian neural network-based autopilot with H∞ stabilization.
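As a rough illustration of what a ridge Gaussian unit might look like (a one-dimensional Gaussian applied to a projection of the input), here is a minimal sketch; the exact rotation and scaling parameterization used in the paper may differ, and all names and values are assumed for illustration:

```python
import numpy as np

def ridge_gaussian(x, w, c, sigma):
    """A 'ridge' Gaussian unit: a 1-D Gaussian applied to the projection w·x."""
    z = x @ w                                  # project the input onto the ridge direction w
    return np.exp(-((z - c) ** 2) / (2.0 * sigma ** 2))

# Two 2-D inputs passed through one illustrative unit.
x = np.array([[0.5, -1.0], [1.0, 2.0]])
w = np.array([0.8, 0.6])                       # ridge direction
print(ridge_gaussian(x, w, c=0.0, sigma=1.0))
```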

3.
The mathematical essence and structures of feedforward neural networks are investigated in this paper, and their interpolation mechanisms are explored. For example, the well-known result that a neural network is a universal approximator follows naturally from the interpolative representations. Finally, learning algorithms for feedforward neural networks are discussed.

4.
This study presents an explicit demonstration of constructing a multilayer feedforward neural network to approximate polynomials and perform polynomial fitting. Built on an algebraic analysis of sigmoidal activation functions rather than incremental training, this work reveals the capability of the "universal approximator" by relating the "soft computing tool" to an important class of conventional computing tools widely used in modeling nonlinear dynamic systems and many other scientific computing applications. The authors strive to enable physical interpretations and afford full control when applying the highly adaptive, powerful yet subjective neural network approach. This work is part of the effort to bridge the gap between black-box and mechanics-based parametric modeling.

5.
This paper presents a function approximation for a general class of polynomials using one-hidden-layer feedforward neural networks (FNNs). Approximation of both algebraic and trigonometric polynomial functions is discussed in detail. For algebraic polynomials, a one-hidden-layer FNN with a chosen number of hidden nodes and corresponding weights is established by a constructive method and approximates the polynomials to a remarkably high degree of accuracy. For trigonometric polynomials, an upper bound on the approximation error is derived for the constructive FNNs. Numerical examples are included to confirm the accuracy of the constructive FNN method. The results show that it efficiently improves the approximation of both algebraic and trigonometric polynomials. The work is therefore of both theoretical and practical significance for constructing one-hidden-layer FNNs that approximate this class of polynomials, and it potentially paves the way for extending neural networks to approximate a general class of complicated functions in both theory and practice.

6.
A new multi-output neuron model is studied. First, the general form of this model is given and the model is applied to multilayer feedforward neural networks. Second, its learning algorithm, a recursive least squares algorithm, is presented. Finally, several simulation experiments show that multilayer feedforward networks built from the multi-output neuron model have a simple structure, strong generalization ability, fast convergence and high convergence accuracy, and that their performance is far superior to that of multilayer feedforward networks with tunable activation functions.
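The multi-output neuron model is not detailed in the abstract, but the recursive least squares update it refers to can be sketched for a generic linear-in-parameters model; the function name, forgetting factor and toy identification task are assumptions for illustration only:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One recursive least squares step for a linear-in-parameters model y ≈ phi·theta."""
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)              # gain vector
    theta = theta + k * (y - phi @ theta)      # parameter update
    P = (P - np.outer(k, Pphi)) / lam          # covariance update
    return theta, P

# Toy usage: identify the weights of y = 2*x1 - x2 from streaming samples.
rng = np.random.default_rng(1)
theta, P = np.zeros(2), 1e3 * np.eye(2)
for _ in range(200):
    phi = rng.standard_normal(2)
    y = phi @ np.array([2.0, -1.0]) + 0.01 * rng.standard_normal()
    theta, P = rls_update(theta, P, phi, y)
print(theta)   # ≈ [2, -1]
```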

7.
Regular fuzzy neural networks are universal approximators of fuzzy-valued functions
By analyzing the approximation properties of multivariate fuzzy-valued Bernstein polynomials, the approximation capability of four-layer feedforward regular fuzzy neural networks (FNNs) is proved. Such networks constitute a class of universal approximators of fuzzy-valued functions: on any compact subset of Euclidean space, any continuous fuzzy-valued function can be approximated by these FNNs to arbitrary accuracy. Finally, an example illustrates the concrete steps for realizing this approximation.
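The fuzzy-valued argument builds on the classical Bernstein polynomial construction; a minimal real-valued sketch of that construction is given below (the fuzzy-number arithmetic of the paper is not reproduced, and the example function and degree are assumptions):

```python
import numpy as np
from math import comb

def bernstein_approx(f, n, x):
    """Degree-n Bernstein polynomial approximation of f on [0, 1]."""
    k = np.arange(n + 1)
    coeffs = np.array([comb(n, i) for i in k], dtype=float)
    basis = coeffs * x[:, None] ** k * (1 - x[:, None]) ** (n - k)
    return basis @ f(k / n)

x = np.linspace(0.0, 1.0, 5)
f = lambda t: np.sin(2 * np.pi * t)
print(np.abs(bernstein_approx(f, 50, x) - f(x)).max())   # error shrinks as n grows
```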

8.
Function approximation with spiked random networks
This paper examines the function approximation properties of the "random neural network" model (GNN). The output of the GNN can be computed from the firing probabilities of selected neurons. We consider a feedforward bipolar GNN (BGNN) model which has both "positive and negative neurons" in the output layer, and prove that the BGNN is a universal function approximator. Specifically, for any f ∈ C([0,1]^s) and any ε > 0, we show that there exists a feedforward BGNN which approximates f uniformly with error less than ε. We also show that, after an appropriate clamping operation on its output, the feedforward GNN is also a universal function approximator.

9.
Function approximation theory and learning algorithms for feedforward algebraic neural networks
This paper generalizes the MP (McCulloch-Pitts) neuron model, defines polynomial algebraic neurons and polynomial algebraic neural networks, and incorporates polynomial algebra into algebraic neural networks. The function approximation capability of feedforward polynomial algebraic neural networks and its theoretical basis are analyzed, and a class of two-input, single-output, four-layer feedforward polynomial algebraic neural network models is designed; networks built from this model can approximate a given bivariate polynomial to a prescribed accuracy. A global learning algorithm for function approximation by polynomial algebraic neural networks in the p-adic sense is given, in which no local minima arise during learning. Examples show that the algorithm is effective.

10.
A novel technique for designing application-specific defuzzification strategies with neural learning is presented. The proposed neural architecture, considered a universal defuzzification approximator, is validated by showing convergence when approximating several existing defuzzification strategies. The method is successfully tested with fuzzy-controlled reverse driving of a model truck. The transparent structure of the universal defuzzification approximator allows the generated customized defuzzification method to be analyzed using existing defuzzification theory. Integrating the universal defuzzification approximator, instead of traditional methods, into Mamdani-type fuzzy controllers can also be viewed as adding trainable nonlinear noise to the output of the fuzzy rule inference before the defuzzified crisp output is calculated. Nonlinear noise trained specifically for a given application therefore reflects a degree of confidence in the rule base, providing an additional opportunity to measure the quality of the fuzzy rule base. The possibility of modeling a Mamdani-type fuzzy controller as a feedforward neural network, with gradient descent training of the universal defuzzification approximator and the antecedent membership functions, fulfils the requirement, known from multilayer perceptrons, of finding solutions to nonlinearly separable problems.

11.
A polynomial-function recurrent neural network model and its applications
周永权, 《计算机学报》, 2003, 26(9): 1196-1200
Exploiting the fact that a recurrent neural network has both feedforward and feedback paths, the activation functions of the hidden-layer neurons are set to a sequence of tunable polynomial functions, yielding a new polynomial-function recurrent neural network model. It not only retains the characteristics of conventional recurrent neural networks but also has strong function approximation capability. For recursive computation problems, a learning algorithm for the polynomial-function recurrent neural network is proposed, and the network model is applied to approximate factorization of multivariate polynomials, where the learning algorithm shows clear advantages. Worked examples show that the algorithm is effective, converges quickly and computes with high accuracy, and is well suited to recursive computation problems. The proposed model and learning algorithm have important implications for approximate algebraic symbolic computation.

12.
Four types of neural network learning rules are discussed for dynamic system identification. It is shown that the feedforward network (FFN) pattern learning rule is a first-order approximation of the FFN batch learning rule. As a result, pattern learning is valid for networks with nonlinear activations provided the learning rate is small. For recurrent networks (RecNs), RecN pattern learning differs from RecN batch learning, but the difference can be controlled by using small learning rates. While RecN batch learning is strict in a mathematical sense, RecN pattern learning is simple to implement and can run in real time. Simulation results agree very well with the derived theorems. Simulations also show that, for system identification problems, recurrent networks are less sensitive to noise.
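A minimal sketch of the batch-versus-pattern distinction, using a simple linear-in-parameters model rather than an FFN or recurrent network (the learning rate, data and iteration counts are illustrative assumptions):

```python
import numpy as np

# Contrast batch learning (gradient over the whole data set) with pattern learning
# (an update after every sample); the two agree closely when the learning rate is small.
rng = np.random.default_rng(2)
X = rng.standard_normal((100, 3))
y = X @ np.array([1.0, -2.0, 0.5])
lr = 0.01

w_batch = np.zeros(3)
for _ in range(500):                       # batch rule
    grad = X.T @ (X @ w_batch - y) / len(X)
    w_batch -= lr * grad

w_pattern = np.zeros(3)
for _ in range(500):                       # pattern rule
    for xi, yi in zip(X, y):
        w_pattern -= lr * (xi @ w_pattern - yi) * xi

print(w_batch, w_pattern)                  # both approach the true weights [1, -2, 0.5]
```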

13.
Multilayer feedforward small-world neural networks and their function approximation
Drawing on results from complex network research, this paper investigates a network model whose structure lies between regularly and randomly connected neural networks: the multilayer feedforward small-world neural network. First, connections in a regular multilayer feedforward network are rewired with rewiring probability p to construct the new network model; analysis of its characteristic parameters shows that for 0 < p < 1 the network differs from the Watts-Strogatz model in its clustering coefficient. The network is then described by a six-tuple model. Finally, small-world neural networks with different values of p are applied to function approximation. Simulation results show that the network achieves the best approximation performance at p = 0.1, and comparative convergence experiments show that at this value the network outperforms regular and random networks of the same size in convergence behavior and approximation speed.
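A rough sketch, in the spirit of Watts-Strogatz rewiring applied to a layer-to-layer connectivity mask, is shown below; the paper's six-tuple construction and exact rewiring rule are not reproduced, and the ring-shaped "regular" pattern and sizes are assumptions:

```python
import numpy as np

def rewire_feedforward(mask, p, rng):
    """Rewire each existing connection in a layer-to-layer mask with probability p."""
    mask = mask.copy()
    for i, j in zip(*np.nonzero(mask)):
        if rng.random() < p:
            mask[i, j] = 0                          # drop the regular connection
            free = np.flatnonzero(mask[i] == 0)     # candidate new targets for unit i
            mask[i, rng.choice(free)] = 1           # reconnect it to a random target
    return mask

# A 'regular' pattern: each of 12 units links to its nearest neighbours in the next layer.
n, k = 12, 2
regular = np.zeros((n, n), dtype=int)
for i in range(n):
    for d in range(-k, k + 1):
        regular[i, (i + d) % n] = 1
print(rewire_feedforward(regular, p=0.1, rng=np.random.default_rng(3)))
```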

14.
Universal approximation of polygonal fuzzy neural networks in the sense of the K-integral norm
To overcome the complexity of fuzzy-number arithmetic, the notion of polygonal fuzzy numbers is introduced; their favorable properties yield two important inequalities, and an example illustrates the effectiveness of their approximation capability. Next, the concepts of the K-quasi-additive integral and the K-integral norm are introduced. Given that the space of polygonal fuzzy numbers is separable, the density of several function spaces is studied with the help of fuzzy-valued simple functions and fuzzy-valued Bernstein polynomials, and it is shown that the class of integrably bounded fuzzy-valued functions forms a complete, separable metric space under the K-integral norm. Finally, the universal approximation of fuzzy-valued simple functions by four-layer regular polygonal fuzzy neural networks is discussed in the sense of the K-integral norm, from which universal approximation of the class of integrably bounded functions follows. This result shows that the approximation capability of regular polygonal fuzzy neural networks for continuous fuzzy systems extends to general integrable systems.

15.
Extreme learning machine (ELM) is widely used to train single-hidden-layer feedforward neural networks (SLFNs) because of its good generalization and fast speed. However, most improved ELMs address the approximation problem for sample data with noise only in the output values, not for data with noise in both input and output values, i.e., the errors-in-variables (EIV) model. In this paper, a novel algorithm, called (regularized) TLS-ELM, is proposed to approximate the EIV model based on ELM and the total least squares (TLS) method. The proposed TLS-ELM uses the ELM idea to choose the hidden weights and applies the TLS method to determine the output weights. Furthermore, the perturbations of the hidden-layer output matrix and of the observed values are given simultaneously. Comparison experiments with the least squares method, the TLS method and ELM show that the proposed TLS-ELM achieves better accuracy and less training time.
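A minimal sketch of the TLS-ELM idea, assuming a random tanh hidden layer (the ELM part) and an SVD-based total least squares solve for the output weights; the paper's regularized variant and its perturbation analysis are not reproduced, and all names and sizes are illustrative:

```python
import numpy as np

def tls_elm(X, y, n_hidden, rng):
    """ELM whose output weights solve H·beta ≈ y in the total least squares sense."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                       # random hidden-layer output matrix
    A = np.hstack([H, y[:, None]])               # augmented matrix [H | y]
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    v = Vt[-1]                                   # right singular vector of the smallest singular value
    beta = -v[:-1] / v[-1]                       # TLS solution for the output weights
    return W, b, beta

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, (300, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(300)
W, b, beta = tls_elm(X, y, n_hidden=30, rng=rng)
pred = np.tanh(X @ W + b) @ beta
print("RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```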

16.
In this paper, we treat the problem of combining fingerprint and speech biometric decisions as a classifier fusion problem. By exploiting the specialist capabilities of each classifier, a combined classifier may yield results that would not be possible with a single classifier. The feedforward neural network is a natural choice for such data fusion, since it has been shown to be a universal approximator. However, its training remains largely a trial-and-error effort, since no learning algorithm can guarantee convergence to an optimal solution within a finite number of iterations. In this work, we propose a network model that generates different combinations of hyperbolic functions to achieve certain approximation and classification properties, thereby circumventing the iterative training problem seen in neural network learning. In many decision fusion applications, the individual classifiers or estimators to be combined have already attained a certain level of classification or approximation accuracy, so this hyperbolic functions network can combine them by taking their decision outputs as inputs to the network. The proposed hyperbolic functions network model is first applied to a function approximation problem to illustrate its approximation capability, followed by case studies on pattern classification problems. The model is finally applied to combining fingerprint and speaker verification decisions, where it shows better or comparable results with respect to several commonly used methods.

17.
For a class of uncertain nonlinear dynamic systems, a robust fault detection method based on a neural network online approximation structure is proposed. The method monitors anomalous behavior of the dynamic system by constructing a neural network that learns the nonlinear fault characteristics through online approximation; when a fault occurs, the online estimator can approximate various possible unknown faults, which are then diagnosed and accommodated. The online learning law for the neural network weights does not require persistent excitation, and Lyapunov stability theory is used to guarantee that the closed-loop error system is uniformly ultimately bounded.

18.
This paper proposes a mathematical model of second-order rational multilayer feedforward neural networks. The idea of rational multilayer neural networks comes from rational approximation in function approximation theory. The rational feedforward neural network model is a generalization of the conventional feedforward neural network model and can solve function approximation problems effectively. A learning algorithm for rational multilayer neural networks, namely an error back-propagation learning algorithm, is given; in terms of computational complexity, it is of the same order as the back-propagation algorithm for conventional multilayer networks. Two application examples, function approximation and pattern recognition, are also given, and the experimental results show that second-order rational multilayer neural networks are effective on these traditional problems.

19.
This paper discusses issues related to the approximation capability of neural networks in modeling and control. We show that neural networks are universal models and universal controllers for a class of nonlinear dynamic systems: for a given dynamic system, there exists a neural network that can model the system to any degree of accuracy over time. Moreover, if the system to be controlled is stabilized by a continuous controller, then there exists a neural network that approximates the controller such that the system controlled by the neural network is also stabilized within a given bound of output error.

20.
A neural network self-tuning one-step-ahead predictive controller based on the damped least squares method
To address the model structure and parameter identification problems encountered in nonlinear controller design, a multilayer feedforward neural network is used to approximate an arbitrary nonlinear system, and the damped least squares method, which converges quickly and is numerically stable, is used to learn the network weights online. Based on the estimated neural network model and the duality principle between identification and control, a one-step-ahead predictive controller based on the damped least squares method is designed. Simulation studies show that this neural network self-tuning controller not only performs well but also avoids the parameter burst phenomenon.
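The damped least squares (Levenberg-Marquardt) update referred to here can be sketched for a generic residual model; applying it online to network weights as in the paper is not reproduced, and the fixed damping factor and toy curve-fitting task are assumptions:

```python
import numpy as np

def damped_ls_step(theta, residual_fn, jacobian_fn, mu):
    """One damped least squares (Levenberg-Marquardt) update for parameters theta."""
    r = residual_fn(theta)                        # residual vector e(theta)
    J = jacobian_fn(theta)                        # Jacobian de/dtheta
    H = J.T @ J + mu * np.eye(len(theta))         # damped Gauss-Newton Hessian
    return theta - np.linalg.solve(H, J.T @ r)

# Toy usage: fit y = a*exp(b*x) to noisy data.
rng = np.random.default_rng(5)
x = np.linspace(0, 1, 50)
y = 2.0 * np.exp(-1.5 * x) + 0.01 * rng.standard_normal(50)
res = lambda th: th[0] * np.exp(th[1] * x) - y
jac = lambda th: np.stack([np.exp(th[1] * x), th[0] * x * np.exp(th[1] * x)], axis=1)
theta = np.array([1.0, 0.0])
for _ in range(50):
    theta = damped_ls_step(theta, res, jac, mu=1e-2)
print(theta)   # ≈ [2.0, -1.5]
```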
