Similar Documents
20 similar documents found; search time: 0 ms
1.
The essential order of approximation for neural networks   (Total citations: 15; self: 0; others: 15)
There have been various studies on the approximation ability of feedforward neural networks (FNNs). Most existing studies, however, are concerned only with density or upper-bound estimates of how well a multivariate function can be approximated by an FNN, and consequently the essential approximation ability of an FNN is not revealed. In this paper, by establishing both upper and lower bound estimates on the approximation order, the essential approximation ability (namely, the essential approximation order) of a class of FNNs is characterized in terms of the modulus of smoothness of the functions to be approximated. The FNNs involved can not only approximate any continuous or integrable function defined on a compact set arbitrarily well, but also provide an explicit lower bound on the number of hidden units required. Using multivariate approximation tools, it is shown that when the functions to be approximated are Lipschitzian of order up to 2, the approximation speed of the FNNs is uniquely determined.

2.
Smooth function approximation using neural networks   (Total citations: 4; self: 0; others: 4)
An algebraic approach for representing multidimensional nonlinear functions by feedforward neural networks is presented. In this paper, the approach is implemented for the approximation of smooth batch data containing the function's input, output, and possibly gradient information. The training set is associated with the network's adjustable parameters by nonlinear weight equations. The cascade structure of these equations reveals that they can be treated as sets of linear systems; hence, the training process and the network's approximation properties can be investigated via linear algebra. Four algorithms are developed to achieve exact or approximate matching of input-output and/or gradient-based training sets. Their application to the design of forward and feedback neurocontrollers shows that algebraic training is characterized by faster execution speeds and better generalization properties than contemporary optimization techniques.
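The reduction of training to linear systems described above can be sketched in its simplest case: with the hidden-layer weights held fixed, exactly matching p input-output pairs reduces to solving a p × p linear system for the output weights. The layer sizes, fixed hidden weights, and tanh activation below are illustrative assumptions, not the paper's exact construction.

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# Training set: p samples of a scalar function (here y = x^2 on three points).
xs = [0.0, 0.5, 1.0]
ys = [0.0, 0.25, 1.0]

# Fixed (assumed) hidden-layer weights and biases, one hidden unit per sample.
hidden = [(2.0, -0.5), (1.5, 0.0), (1.0, 0.5)]   # (weight, bias) pairs

# Matrix of hidden-unit outputs; exact matching makes the system square.
H = [[math.tanh(w * x + b) for (w, b) in hidden] for x in xs]
v = solve(H, ys)               # output weights obtained by linear algebra

def net(x):
    return sum(vj * math.tanh(w * x + b) for vj, (w, b) in zip(v, hidden))
```

With as many hidden units as samples the system is square and the match is exact; with more samples than units the same equations would be solved in a least-squares sense.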

3.
This paper is developed in two parts. First, the authors formulate the solution to the general reduced-rank linear approximation problem, relaxing the invertibility assumption on the input autocorrelation matrix used by previous authors. The authors' treatment unifies linear regression, Wiener filtering, full-rank approximation, auto-association networks, SVD, and principal component analysis (PCA) as special cases. Their analysis also shows that two-layer linear neural networks with a reduced number of hidden units, trained with the least-squares error criterion, produce weights that correspond to the generalized singular value decomposition of the input-teacher cross-correlation matrix and the input data matrix. As a corollary, the linear two-layer backpropagation model with a reduced hidden layer extracts an arbitrary linear combination of the generalized singular vector components. Second, the authors investigate artificial neural network models for the solution of the related generalized eigenvalue problem. By introducing and utilizing the extended concept of deflation (originally proposed for the standard eigenvalue problem), the authors find that a sequential version of linear BP can extract the exact generalized eigenvector components. The advantage of this approach is that it is easier to update the model structure by adding one more unit, or pruning one or more units, when the application requires it. An alternative approach for extracting the exact components is to use a set of lateral connections among the hidden units, trained so as to enforce orthogonality among the upper- and lower-layer weights. The authors call this the lateral orthogonalization network (LON) and show via theoretical analysis, and verify via simulation, that the network extracts the desired components. The advantage of the LON-based model is that it can be applied in a parallel fashion so that the components are extracted concurrently.
Finally, the authors show the application of their results to the identification problem for systems whose excitation has a non-invertible autocorrelation matrix. Previous identification methods usually rely on the invertibility assumption for the input autocorrelation and therefore cannot be applied to this case.

4.
In this paper, a systematic design is proposed to determine the structure of a fuzzy system and learn its parameters from a set of given training examples. In particular, two fundamental problems in fuzzy system modeling are addressed: 1) fuzzy rule parameter optimization and 2) identification of the system structure (i.e., the number of membership functions and fuzzy rules). A four-step approach to building a fuzzy system automatically is presented: Step 1 directly obtains the optimal fuzzy rules for a given membership-function configuration. Step 2 optimizes the allocation of the membership functions and the conclusions of the rules in order to achieve a better approximation. Step 3 determines a new and more suitable topology using information derived from the distribution of the approximation error; it decides which variables should receive additional membership functions. Finally, Step 4 selects which of the structures produced by the algorithm in the three previous steps should be used to approximate the function. The results of applying this method to function approximation problems are presented and compared with other methodologies proposed in the literature.
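Step 1 above (optimal rule conclusions for a fixed membership-function configuration) can be sketched for a zero-order Takagi-Sugeno system whose triangular membership functions form a partition of unity: at each membership peak exactly one rule fires, so the optimal conclusion there is simply the target value at that peak. The grid, target function, and membership shapes below are illustrative assumptions, not the paper's exact algorithm.

```python
def tri(x, left, peak, right):
    """Triangular membership function supported on (left, right)."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

peaks = [0.0, 0.5, 1.0]                       # one fuzzy rule per peak

def mu(i, x):
    """Membership of x in rule i; supports overlap only with neighbors."""
    left = peaks[i - 1] if i > 0 else peaks[i] - 0.5
    right = peaks[i + 1] if i < len(peaks) - 1 else peaks[i] + 0.5
    return tri(x, left, peaks[i], right)

target = lambda x: x * x
conclusions = [target(p) for p in peaks]       # optimal conclusions at the peaks

def fuzzy_out(x):
    """Weighted-average defuzzification of the rule conclusions."""
    w = [mu(i, x) for i in range(len(peaks))]
    return sum(wi * ci for wi, ci in zip(w, conclusions)) / sum(w)
```

Between peaks the output is a piecewise-linear blend of the neighboring rule conclusions, which is exactly what Steps 2-3 then refine by reallocating and adding membership functions.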

5.
There have been many studies on the simultaneous approximation capability of feedforward neural networks (FNNs). Most of these, however, are concerned only with the density or feasibility of performing simultaneous approximation. This paper considers the simultaneous approximation of algebraic polynomials and, employing Taylor expansion and an algebraic constructive approach, constructs a class of FNNs that simultaneously approximate any smooth multivariate function and all of its derivatives. We also present an upper bound on the approximation accuracy of the FNNs, expressed in terms of the modulus of continuity of the functions to be approximated.

6.
Artificial neural networks have been extensively applied in various fields of science and engineering, mainly because feedforward neural networks (FNNs) have the universal approximation capability [1-9]. A typical example of…

7.
On global-local artificial neural networks for function approximation   (Total citations: 1; self: 0; others: 1)
We present a hybrid radial basis function (RBF) sigmoid neural network with a three-step training algorithm that utilizes both global search and gradient descent training. The algorithm is intended to identify global features of an input-output relationship before adding local detail to the approximating function. It aims to achieve efficient function approximation by separately identifying the aspects of a relationship that are expressed universally from those that vary only within particular regions of the input space. We test the effectiveness of our method on five regression tasks; four use synthetic datasets, while the last uses real-world data on the wave overtopping of seawalls. The hybrid architecture is shown to be often superior to architectures containing neurons of a single type in several ways: lower mean square errors are often achievable using fewer hidden neurons and with less need for regularization. Our global-local artificial neural network (GL-ANN) also compares favorably with both the perceptron radial basis net and regression-tree-derived RBFs. A number of issues concerning the training of GL-ANNs are discussed: the use of regularization, the inclusion of a gradient descent optimization step, the choice of RBF spreads, model selection, and the development of appropriate stopping criteria.

8.
Let SF_d be the set of periodic, Lebesgue square-integrable functions, and let Π_{ψ,n,d} = { Σ_{j=1}^{n} b_j ψ(ω_j·x + θ_j) : b_j, θ_j ∈ R, ω_j ∈ R^d } be the set of feedforward neural network (FNN) functions. Denote by dist(SF_d, Π_{ψ,n,d}) the deviation of the set SF_d from the set Π_{ψ,n,d}. A main purpose of this paper is to estimate this deviation. In particular, based on Fourier transforms and approximation theory, a lower estimate for dist(SF_d, Π_{ψ,n,d}) of the form C/(n log₂ n)^{1/2} is proved. …

9.
Engineering design has great importance for the cost and safety of engineering structures. The rock mass rating (RMR) system has become a reliable and widespread pre-design system thanks to its ease of use and variety of engineering applications, such as tunnels, foundations, and slopes. The RMR system employs six parameters in classifying a rock mass: uniaxial compressive strength of intact rock material (UCS), rock quality designation (RQD), spacing of discontinuities (SD), condition of discontinuities (CD), condition of groundwater (CG), and orientation of discontinuities (OD). The ratings of the first three parameters (UCS, RQD, and SD) are determined via graphic readings, whereas the last three (CD, CG, and OD) are estimated from tables composed of interval-valued linguistic expressions. Because of these linguistic expressions, the estimated ratings of the last three become fuzzy, especially when the related conditions are close to the border between two classes. In such cases, these fuzzy situations can lead to incorrect rock class estimations. In this study, an empirical database based on the linguistic expressions for CD, CG, and OD is developed for training artificial neural network (ANN) classifiers. The results obtained from graphical readings and ANN classifiers are combined in a unified simulation model (USM). Data obtained from five different tunnels, excavated for derivation purposes, are used to compare the classification results of the conventional method and the proposed model. It is found that more accurate and realistic ratings are reached by means of the proposed model.

10.
Two major approximate techniques have been proposed for the analysis of general closed queueing networks, namely the aggregation method and Marie's method. The idea of the aggregation technique is to replace a subsystem (a subnetwork) by a flow equivalent single-server with load-dependent service rates. The parameters of the equivalent server are obtained by analyzing the subsystem in isolation as a closed system with different populations. The idea of Marie's method is also to replace a subsystem by an equivalent exponential service station with load-dependent service rates. However, in this case, the parameters of the equivalent server are obtained by analyzing the subsystem in isolation under a load-dependent Poisson arrival process. Moreover, in Marie's case, the procedure is iterative.

In this paper we provide a general and unified view of these two methods. The contributions of this paper are the following. We first show that their common principle is to partition the network into a set of subsystems and then to define an equivalent product-form network; to each subsystem is associated a load-dependent exponential station in the equivalent network. We define a set of rules for partitioning any general closed network with various features such as general service time distributions, population constraints, finite buffers, and state-dependent routing. We then show that the aggregation method and Marie's method are two ways of obtaining the parameters of the equivalent network associated with a given partition. Finally, we provide a discussion comparing the two methods with respect to their accuracy and computational complexity.
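Both methods above replace each subsystem by a load-dependent station in an equivalent product-form network. A minimal sketch of how such a network is then solved: exact Mean Value Analysis (MVA) for a single-class closed network of fixed-rate queueing stations. The service demands and population below are illustrative assumptions, and fixed-rate MVA is a simplification of the load-dependent case the paper treats.

```python
def mva(demands, n_customers):
    """Exact MVA: returns system throughput and per-station queue lengths.

    demands[k] is the total service demand (visit ratio x service time)
    at station k.
    """
    q = [0.0] * len(demands)           # queue lengths at population 0
    x = 0.0
    for n in range(1, n_customers + 1):
        # Residence time at each station, via the arrival theorem.
        r = [d * (1.0 + qk) for d, qk in zip(demands, q)]
        x = n / sum(r)                 # system throughput at population n
        q = [x * rk for rk in r]       # Little's law applied per station
    return x, q

throughput, queues = mva([0.4, 0.3, 0.2], 5)
```

Analyzing the subsystem in isolation "as a closed system with different populations", as the aggregation method prescribes, amounts to running this kind of recursion once per population level to obtain the flow-equivalent server's load-dependent rates.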


11.
The paper describes a novel application of B-spline membership functions (BMFs) and a fuzzy neural network to function approximation with outliers in the training data. Based on a robust objective function, the gradient descent method is used to derive new learning rules for the weights and BMFs of the fuzzy neural network. During the learning process, the robust objective function takes effect and the approximated function gradually becomes unaffected by the erroneous training data. As a result, the robust function approximation rapidly converges within the desired error tolerance; in other words, the number of learning iterations decreases greatly. The function approximation is realized not only in one dimension (curves) but also in two dimensions (surfaces). Several examples are simulated to confirm the efficiency and feasibility of the proposed approach.
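The effect of a robust objective function can be sketched with gradient descent under a Huber-style loss, whose gradient is clipped so that a gross outlier's influence on the fit is bounded. The one-parameter model, loss threshold, and data below are illustrative assumptions standing in for the paper's BMF fuzzy network, not its actual learning rules.

```python
def huber_grad(residual, delta=1.0):
    """d(loss)/d(residual): linear inside |r| <= delta, clipped outside."""
    if residual > delta:
        return delta
    if residual < -delta:
        return -delta
    return residual

# Samples of the line y = 2x, plus one gross outlier at x = 1.5.
data = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (1.5, 30.0)]

slope = 0.0
lr = 0.02
for _ in range(2000):
    # Gradient of the robust objective with respect to the slope.
    g = sum(huber_grad(slope * x - y) * x for x, y in data) / len(data)
    slope -= lr * g
```

An ordinary least-squares fit of the same data is dragged to a slope of roughly 4.5 by the single outlier, while the robust fit settles near the true slope of 2, illustrating how the approximated function "gradually becomes unaffected by the erroneous training data".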

12.
Back AD, Chen T. Neural Computation, 2002, 14(11): 2561-2566
Recently, there has been interest in the observed capabilities of some classes of neural networks with fixed weights to model multiple nonlinear dynamical systems. While this property has been observed in simulations, open questions exist as to how this property can arise. In this article, we propose a theory that provides a possible mechanism by which this multiple modeling phenomenon can occur.

13.
Optimized approximation algorithm in neural networks without overfitting   (Total citations: 2; self: 0; others: 2)
In this paper, an optimized approximation algorithm (OAA) is proposed to address the overfitting problem in function approximation using neural networks (NNs). The OAA avoids overfitting by means of a novel and effective stopping criterion based on estimation of the signal-to-noise-ratio figure (SNRF). Using the SNRF, which checks the goodness of fit of the approximation, overfitting can be detected automatically from the training error alone, without a separate validation set. The algorithm has been applied to optimizing the number of hidden neurons in a multilayer perceptron (MLP) and the number of learning epochs in MLP backpropagation training, using both synthetic and benchmark data sets. The OAA can also be used to optimize other NN parameters. In addition, it can be applied to function approximation with any kind of basis functions, or to learning-based model selection whenever overfitting must be considered.
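The stopping idea above can be sketched as follows: grow model complexity (hidden neurons or epochs) and stop as soon as the drop in training error is small relative to the remaining error, i.e. further fitting would mostly chase noise. The simple relative-improvement ratio below is a crude stand-in for the paper's SNRF criterion, and the error curve is illustrative.

```python
def stop_index(train_errors, ratio=0.05):
    """Return the first model index after which relative improvement < ratio."""
    for i in range(1, len(train_errors)):
        prev, cur = train_errors[i - 1], train_errors[i]
        if prev > 0 and (prev - cur) / prev < ratio:
            return i - 1          # keep the simpler model
    return len(train_errors) - 1

# Illustrative training-error curve: the real signal is captured early,
# after which only marginal (noise-fitting) gains remain.
errors = [1.00, 0.40, 0.12, 0.115, 0.114]
best = stop_index(errors)
```

The SNRF criterion is more principled than this ratio test in that it estimates how much of the residual error actually looks like noise, but the control flow, stopping on a goodness-of-fit signal computed from the training error alone, is the same.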

14.
We present a type of single-hidden-layer feedforward wavelet neural network. First, we give a new and quantitative proof of the fact that a single-hidden-layer wavelet neural network with n + 1 hidden neurons can interpolate n + 1 distinct samples with zero error. Then, without training, we construct a wavelet neural network X_a(x, A) that can approximately interpolate, with arbitrary precision, any set of distinct data in one or several dimensions. The given wavelet neural network can uniformly approximate any continuous function of one variable.

15.
Multilayer feedforward small-world neural networks and their function approximation   (Total citations: 1; self: 0; others: 1)
Drawing on results from complex-network research, this paper investigates a network model whose structure lies between regularly and randomly connected neural networks: the multilayer feedforward small-world neural network. First, the connections of a regular multilayer feedforward network are rewired with probability p to build the new model; analysis of its characteristic parameters shows that for 0 < p < 1 the network differs from the Watts-Strogatz model in its clustering coefficient. The network is then described by a six-tuple model. Finally, small-world neural networks with different values of p are applied to function approximation. Simulation results show that the network achieves its best approximation performance at p = 0.1, and comparative convergence experiments show that at this value the network outperforms regular and random networks of the same size in convergence and approximation speed.
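The rewiring step described above can be sketched in the Watts-Strogatz style: start from a regular layered connection list and, with probability p, redirect a connection to a random admissible target while keeping connections distinct. The layer sizes, initial wiring pattern, and p value are illustrative assumptions, not the paper's exact six-tuple model.

```python
import random

def rewire(edges, targets, p, rng):
    """With probability p, redirect each edge to a random new target,
    skipping a rewire that would duplicate an existing edge."""
    edge_set = set(edges)
    out = []
    for (src, dst) in edges:
        if rng.random() < p:
            new = rng.choice(targets)
            if (src, new) not in edge_set:
                edge_set.discard((src, dst))
                edge_set.add((src, new))
                dst = new
        out.append((src, dst))
    return out

rng = random.Random(0)
layer_a = range(4)
layer_b = list(range(4, 8))
# Regular start: each unit in layer_a feeds its two "nearest" units in layer_b.
regular = [(i, 4 + (i + k) % 4) for i in layer_a for k in (0, 1)]
small_world = rewire(regular, layer_b, 0.1, rng)
```

At p = 0 this leaves the regular network unchanged and at p = 1 it approaches a random wiring; intermediate p values such as the paper's p = 0.1 yield the small-world regime.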

16.
Function approximation capability of intuitionistic fuzzy neural networks   (Total citations: 3; self: 0; others: 3)
Using intuitionistic fuzzy set theory, a control model of an adaptive neuro-intuitionistic fuzzy inference system (ANIFIS) is established and shown to be a universal approximator. First, a Zadeh fuzzy inference neural network is converted into an intuitionistic fuzzy inference network, and a multi-input single-output T-S-type ANIFIS model is built. Next, the membership functions of the system variables and the inference rules are designed, the input-output relations of each layer are determined, and the expression for composing the system output is derived. Finally, the universal approximation property is proved by showing that the model's output expression satisfies the three hypotheses of the Stone-Weierstrass theorem.

17.
A feedforward Sigma-Pi neural network with a single hidden layer of m neurons is given by Σ_{j=1}^{m} c_j g(Π_{k=1}^{n} (x_k − θ_k^j)/λ_k^j), where c_j, θ_k^j, λ_k^j ∈ R. We investigate the approximation of arbitrary functions f: R^n → R by such a Sigma-Pi neural network in the L^p norm. An L^p locally integrable function g(t) can approximate any given function if and only if g(t) cannot be written in the form Σ_{j=1}^{n} Σ_{k=0}^{m} α_{jk} (ln|t|)^{j−1} t^k.

18.
Based on fuzzy arithmetic over polygonal fuzzy numbers and a new extension principle, a new fuzzy neural network model is established. It is proved that, when the inputs are negative fuzzy numbers, the corresponding three-layer feedforward polygonal fuzzy network can serve as a universal approximator of continuous fuzzy functions. The equivalent conditions that the continuous fuzzy functions must satisfy in this case are given, and a simulation example is provided.

19.
The ability of a neural network to learn from experience can be viewed as closely related to its approximating properties. By assuming that the environment is essentially stochastic, it follows that neural networks should be able to approximate stochastic processes. The aim of this paper is to show that some classes of artificial neural networks exist that are capable of approximating, in the mean square sense, prescribed stochastic processes with arbitrary accuracy. The networks so defined constitute a new model for neural processing and extend previous results concerning the approximating capabilities of artificial neural networks.

20.
This paper is aimed at exposing the reader to certain aspects of the design of best approximants with Gaussian radial basis functions (RBFs). The class of functions to which this approach applies consists of those compactly supported in frequency, and the approximation properties of uniqueness and existence are restricted to this class. Functions that are smooth enough can be expanded in Gaussian series converging uniformly to the objective function. The uniqueness of these series is demonstrated in the context of an orthonormal basis in a Hilbert space. Furthermore, the best approximation to a given band-limited function from a truncated Gaussian series is analyzed by an energy-based argument. This analysis not only gives a theoretical proof concerning the existence of best approximations but also addresses the problem of architectural selection: specifically, guidance for selecting the variance and oversampling parameters is provided for practitioners.


Copyright©北京勤云科技发展有限公司    京ICP备09084417号-23

京公网安备 11010802026262号