Similar Articles
20 similar articles found (search time: 281 ms)
1.
An important consideration when applying neural networks to pattern recognition is the sensitivity to weight perturbation or to input errors. In this paper, we analyze the sensitivity of single hidden-layer networks with threshold functions. In the case of weight perturbation or input errors, the probability of inversion error for an output neuron is derived as a function of the trained weights, the input pattern, and the variance of the weight perturbation or the bit-error probability of the input pattern. The derived results are verified with a simulation of a Madaline recognizing handwritten digits. The results show that the sensitivity of trained networks differs markedly from that of networks with random weights.
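As a rough illustration of the quantity this abstract derives analytically, the following sketch estimates the inversion-error probability of a single threshold neuron by Monte Carlo simulation; the weights, pattern, and noise level are illustrative stand-ins, not values from the paper.

```python
import numpy as np

def inversion_error_prob(w, x, sigma, trials=100_000, seed=0):
    """Estimate the probability that N(0, sigma^2) weight noise flips the
    output of a threshold unit y = step(w . x) for a fixed pattern x."""
    rng = np.random.default_rng(seed)
    base = (w @ x) >= 0.0                        # unperturbed output
    noise = rng.normal(0.0, sigma, size=(trials, w.size))
    flipped = (((w + noise) @ x) >= 0.0) != base  # outputs after perturbation
    return flipped.mean()

w = np.array([0.8, -0.3, 0.5, 0.1])              # stand-in "trained" weights
x = np.array([1.0, 1.0, -1.0, 1.0])              # bipolar input pattern
print(inversion_error_prob(w, x, sigma=0.2))
```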

2.
Wang Y, Zeng X, Yeung DS, Peng Z. Neural Computation, 2006, 18(11): 2854-2877
The sensitivity of a neural network's output to input and weight perturbations is an important measure for evaluating the network's performance. In this letter, we propose an approach to quantify the sensitivity of Madalines. The sensitivity is defined as the probability of output deviation due to input and weight perturbations over all input patterns. Based on the structural characteristics of Madalines, a bottom-up strategy is followed: the sensitivity of single neurons, that is, Adalines, is considered first, and then the sensitivity of the entire Madaline network. By means of probability theory, an analytical formula is derived for calculating the sensitivity of Adalines, and an algorithm is designed for computing the sensitivity of Madalines. Computer simulations are run to verify the effectiveness of the formula and algorithm; the simulation results are in good agreement with the theoretical results.

3.
Some approximation-theoretic questions concerning a certain class of neural networks are considered. The networks considered are single-input, single-output, single-hidden-layer feedforward neural networks with continuous sigmoidal activation functions, no input weights, but with hidden-layer thresholds and output-layer weights. Specifically, questions of existence and uniqueness of best approximations on a closed interval of the real line under mean-square and uniform approximation error measures are studied. A by-product of this study is a reparametrization of the class of networks considered in terms of rational functions of a single variable. This rational reparametrization is used to apply the theory of Padé approximation to the class of networks considered. In addition, a question related to the number of local minima arising in gradient algorithms for learning is examined.

4.
Neural networks based on metric recognition methods have a strictly determined architecture. The number of neurons and connections, as well as the weight and threshold values, are calculated analytically from the initial conditions of the task: the number of recognizable classes, the number of samples, and the metric expressions used. This paper discusses the possibility of transforming these networks so that classical learning algorithms can be applied to them, without using analytical expressions to calculate the weight values. In the resulting network, training is carried out by recognizing images in pairs. This approach simplifies the learning process and makes it easy to extend the neural network by adding new images to the recognition task. The advantages of these networks include: (1) simplicity and transparency of the network architecture; (2) simplicity and reliability of training; (3) the possibility of using a large number of images in the recognition problem; (4) a consistent increase in the number of recognizable classes without changing the previous weight and threshold values.

5.
In this paper, we introduce a smoothed piecewise linear network (SPLN) and develop second-order training algorithms for it. An embedded feature selection algorithm is developed that minimizes the training error with respect to the distance measure weights. A method is then presented that adjusts the center vector locations in the SPLN. We also present a gradient method for optimizing the SPLN output weights. Results with several data sets show that the distance measure optimization, center vector optimization, and output weight optimization, individually and together, reduce testing errors in the final network.

6.
This paper proposes a modified ELM algorithm that properly selects the input weights and biases before training the output weights of single-hidden-layer feedforward neural networks with sigmoidal activation functions, and proves mathematically that the hidden-layer output matrix maintains full column rank. The modified ELM avoids the randomness of the original ELM. Experimental results on both regression and classification problems show the good performance of the modified ELM algorithm.
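For orientation, here is a minimal sketch of the standard ELM pipeline the abstract modifies, assuming the usual sigmoid hidden layer and least-squares output weights; the paper's specific input-weight selection rule is not reproduced, so the hidden weights below are simply drawn at random.

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1, 1, size=(X.shape[1], n_hidden))  # input weights (random here)
    b = rng.uniform(-1, 1, size=n_hidden)                # hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))               # sigmoid hidden outputs
    # np.linalg.matrix_rank(H) can be used to check the full-column-rank claim
    beta = np.linalg.pinv(H) @ T                         # output weights by least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

X = np.random.default_rng(1).normal(size=(200, 3))
T = np.sin(X[:, :1])                                     # toy regression target
W, b, beta = elm_train(X, T, n_hidden=30)
```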

7.
Sigmoidal feedforward artificial neural networks (FFANNs) have been established to be universal approximators of continuous functions. The universal approximation results are summarized to identify the function sets represented by sigmoidal FFANNs with the universal approximation property. The equicontinuity of the identified sets is analyzed; this property is related to the fault tolerance of sigmoidal FFANNs. The commonly used arbitrary-weight sigmoidal FFANNs are shown to be non-equicontinuous sets, while a class of bounded-weight sigmoidal FFANNs is established to be equicontinuous. The fault-tolerance behavior of the networks is analyzed, and error bounds for the induced errors are established.

8.
Existing algorithms for modeling uncertain data streams with radial basis function (RBF) neural networks suffer from low accuracy, weak stability, and slow speed. This paper proposes a new clustering-based RBF method for uncertain data streams. The algorithm first models the uncertain data stream, then combines fuzzy theory with neural network principles to construct an RBF network, and obtains the network's center and width weights through a clustering algorithm based on regular-tetrahedral uncertain vectors; these weights ultimately determine the hidden-layer and output-layer outputs. Experimental results show that the proposed algorithm effectively models uncertain data streams with a clustering RBF neural network and achieves higher precision, stability, and speed than similar algorithms.
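A hedged sketch of the general recipe only, with plain k-means standing in for the paper's regular-tetrahedral uncertain-vector clustering (which is not reproduced); all names and constants are illustrative.

```python
import numpy as np

def rbf_fit(X, y, k=8, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):                       # plain k-means for the centers
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    d = np.linalg.norm(centers[:, None] - centers[None], axis=-1)
    sigma = d[d > 0].mean()                      # shared width heuristic
    Phi = np.exp(-((X[:, None] - centers) ** 2).sum(-1) / (2 * sigma ** 2))
    w = np.linalg.pinv(Phi) @ y                  # output-layer weights
    return centers, sigma, w

X = np.random.default_rng(1).normal(size=(300, 2))
y = np.sin(X[:, 0]) + np.cos(X[:, 1])            # toy stream snapshot
centers, sigma, w = rbf_fit(X, y)
```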

9.
A new adaptive backpropagation (BP) algorithm based on Lyapunov stability theory for neural networks is developed in this paper. A candidate Lyapunov function V(k) of the tracking error between the output of a neural network and the desired reference signal is chosen first, and the weights of the neural network are then updated, from the output layer to the input layer, so that ΔV(k) = V(k) - V(k-1) < 0. The output tracking error then converges asymptotically to zero according to Lyapunov stability theory. Unlike gradient-based BP training algorithms, the new Lyapunov adaptive BP algorithm does not search for the global minimum along the cost-function surface in weight space; instead, it constructs an energy surface with a single global minimum through adaptive adjustment of the weights as time goes to infinity. Even when the neural network is subject to bounded input disturbances, the effects of the disturbances can be eliminated and asymptotic error convergence obtained. The new Lyapunov adaptive BP algorithm is then applied to the design of an adaptive filter in a simulation example, demonstrating fast error convergence and strong robustness to large bounded input disturbances.
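A minimal sketch of the Lyapunov idea in the simplest possible case, a single linear neuron (adaptive filter), where choosing V(k) = e(k)^2 and a normalized update provably gives ΔV(k) < 0 at every step; the paper's layer-by-layer network version is not reproduced, and all values below are illustrative.

```python
import numpy as np

def lyapunov_step(w, x, d, mu=1.0):
    """One weight update of a linear adaptive filter. With V(k) = e(k)^2,
    the normalized update below gives e_new = (1 - mu) * e, so for
    0 < mu < 2 we have ΔV = V(k) - V(k-1) < 0 at every step."""
    e = d - w @ x                                # tracking error
    w = w + mu * e * x / (x @ x + 1e-12)         # update enforcing ΔV < 0
    return w, e

rng = np.random.default_rng(0)
w_true = np.array([0.5, -1.0, 0.25])             # unknown filter (toy example)
w = np.zeros(3)
for _ in range(200):
    x = rng.normal(size=3)
    w, e = lyapunov_step(w, x, d=w_true @ x)
print(w)                                         # converges toward w_true
```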

10.
This work presents a neural-adaptive control strategy for trajectory tracking of a two-link flexible-joint robot, with experimental results. The method of backstepping with tuning functions (using analytic differentiation) guides the design, rather than neural approximation of derivatives. Traditional tuning-function design results in a weight update dominated by the last error in the backstepping design, not the output error. The novel method in this paper weights the errors in the tuning function so that the output error becomes significant in training. An additional modification ensures robustness to approximation errors. Experimental results show improved performance compared with both derivative-estimation and standard tuning-function methods.

11.
Recurrent Neural Networks Training With Stable Bounding Ellipsoid Algorithm (Cited 1 time: 0 self-citations, 1 by others)
Bounding ellipsoid (BE) algorithms offer an attractive alternative to traditional training algorithms for neural networks, such as backpropagation and least squares methods. The benefits include high computational efficiency and fast convergence speed. In this paper, we propose an ellipsoid propagation algorithm to train the weights of recurrent neural networks for nonlinear system identification. Both hidden layers and output layers can be updated. The stability of the BE algorithm is proven.

12.
In this work we present a new hybrid algorithm for feedforward neural networks that combines unsupervised and supervised learning. We use a Kohonen algorithm with a fuzzy neighborhood to train the weights of the hidden layers and the gradient descent method to train the weights of the output layer. The goal of this method is to assist existing variable-learning-rate algorithms. Simulation results show the effectiveness of the proposed algorithm compared with other well-known learning methods.
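A compact sketch of the hybrid scheme's shape, with a Gaussian neighborhood standing in for the paper's fuzzy neighborhood: a 1-D Kohonen phase trains the hidden prototypes unsupervised, then gradient descent trains the output weights. All constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)        # toy target

H = 16                                           # hidden (map) units
proto = rng.normal(size=(H, 2))                  # hidden-layer weights

for t in range(2000):                            # unsupervised Kohonen phase
    x = X[rng.integers(len(X))]
    win = np.argmin(((proto - x) ** 2).sum(1))   # best-matching unit
    nb = np.exp(-((np.arange(H) - win) ** 2) / 8.0)  # neighborhood weights
    proto += 0.1 * nb[:, None] * (x - proto)

def hidden(Xb):                                  # distance-based activations
    return np.exp(-((Xb[:, None] - proto) ** 2).sum(-1))

w_out = np.zeros(H)
for t in range(500):                             # supervised output phase
    Hb = hidden(X)
    err = Hb @ w_out - y
    w_out -= 0.01 * Hb.T @ err / len(X)          # gradient descent step
```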

13.
This paper addresses the design of an exponential-function-based learning law for artificial neural networks (ANNs) with continuous dynamics. The ANN structure is used to obtain a non-parametric model of systems with uncertainties described by a set of nonlinear ordinary differential equations. Two novel adaptive algorithms with a predefined exponential convergence rate adjust the weights of the ANN. The first algorithm includes an adaptive gain depending on the identification error, which accelerates the convergence of the weights and promotes faster convergence between the states of the uncertain system and the trajectories of the neural identifier. The second approach uses a time-dependent sigmoidal gain that forces the convergence of the identification error to an invariant set characterized by an ellipsoid whose generalized volume depends on the upper bounds of the uncertainties, perturbations, and modeling errors. Applying the invariant ellipsoid method yields an algorithm that reduces the volume of the convergence region for the identification error. Both adaptive algorithms are derived from a non-standard exponential dependent function and an associated controlled Lyapunov function. Numerical examples demonstrate the improvements achieved by the proposed algorithms by comparing their convergence against classical schemes with non-exponential continuous learning methods; the proposed identifiers outperform the classical identifier, converging faster to an invariant set of smaller dimensions.

14.
Multi-layer networks of threshold logic units (TLUs) offer an attractive framework for the design of pattern classification systems. A new constructive neural network learning algorithm (DistAl) based on inter-pattern distance is introduced. DistAl constructs a single hidden layer of hyperspherical threshold neurons, each designed to determine a cluster of training patterns belonging to the same class. The weights and thresholds of the hidden neurons are determined directly by comparing the inter-pattern distances of the training patterns. This offers a significant advantage over other constructive learning algorithms that use an iterative (and often time-consuming) weight modification strategy to train individual neurons. The individual clusters (represented by the hidden neurons) are combined by a single output layer of threshold neurons. The speed of DistAl makes it a good candidate for data mining and knowledge acquisition from large datasets. The paper presents results of experiments using several artificial and real-world datasets; the results demonstrate that DistAl compares favorably with other learning algorithms for pattern classification.
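A simplified reading of the constructive step described above, not the published DistAl pseudocode: each hidden unit is a hypersphere centered on a training pattern, with its radius set directly from inter-pattern distances so that it covers same-class neighbours only.

```python
import numpy as np

def distal_like(X, y):
    """Greedily create hyperspherical units (center, radius, class) that
    cover all training patterns, setting each radius just below the
    nearest pattern of a different class."""
    uncovered = np.ones(len(X), dtype=bool)
    units = []
    while uncovered.any():
        i = np.flatnonzero(uncovered)[0]         # next uncovered pattern
        d = np.linalg.norm(X - X[i], axis=1)
        other = d[y != y[i]]
        r = other.min() * 0.999 if other.size else d.max()  # stop before other class
        inside = (d <= r) & (y == y[i])          # same-class patterns covered
        uncovered &= ~inside
        units.append((X[i].copy(), r, y[i]))
    return units

X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [1.1, 0.9]])
y = np.array([0, 0, 1, 1])
print(len(distal_like(X, y)))                    # two units, one per cluster
```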

15.
Sensitivity of feedforward neural networks to weight errors (Cited 3 times: 0 self-citations, 3 by others)
An analysis is made of the sensitivity of feedforward layered networks of Adaline elements (threshold logic units) to weight errors. An approximation is derived which expresses the probability of error for an output neuron of a large network (a network with many neurons per layer) as a function of the percentage change in the weights. As would be expected, the probability of error increases with the number of layers in the network and with the percentage change in the weights. The probability of error is essentially independent of the number of weights per neuron and of the number of neurons per layer, as long as these numbers are large (on the order of 100 or more).

16.
Reinforcement learning is an important method for solving adaptive problems and is widely applied to learning control in continuous state spaces, but it suffers from low efficiency and slow convergence. Building on a back-propagation (BP) neural network combined with eligibility traces, this paper proposes an algorithm that realizes multi-step updates in the reinforcement learning process. It solves the problem of back-propagating the output layer's local gradient to the hidden-layer nodes, thereby enabling fast updates of the hidden-layer weights, and a description of the algorithm is provided. An improved residual method is also proposed: during network training, the weights of each layer are combined by linearly optimized weighting, attaining both the learning speed of the gradient descent method and the convergence of the residual gradient method; applied to the hidden-layer weight updates, it improves the convergence of the value function. The algorithm is verified and analyzed in a simulated inverted-pendulum balancing experiment. The results show that after a relatively short period of learning, the method successfully controls the inverted pendulum and significantly improves learning efficiency.
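The core mechanism this abstract builds on, shown for a linear value approximator rather than the paper's BP network (and without its improved residual blend): per-weight eligibility traces turn one-step TD updates into multi-step updates. All parameter values are illustrative.

```python
import numpy as np

def td_lambda_step(w, e, phi_s, phi_s_next, r, alpha=0.1, gamma=0.99, lam=0.8):
    """One TD(lambda) update with eligibility traces kept per weight."""
    delta = r + gamma * (w @ phi_s_next) - (w @ phi_s)   # TD error
    e = gamma * lam * e + phi_s                          # decay + accumulate trace
    w = w + alpha * delta * e                            # multi-step credit assignment
    return w, e

w = np.zeros(4)                                          # value-function weights
e = np.zeros(4)                                          # eligibility traces
phi_s, phi_s_next = np.array([1., 0, 0, 0]), np.array([0, 1., 0, 0])
w, e = td_lambda_step(w, e, phi_s, phi_s_next, r=1.0)
```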

17.
A Study of the Weights-Direct-Determination Method for PID Neural Networks (Cited 1 time: 1 self-citation, 0 by others)
How to determine network weights is an important topic in artificial neural network research. For traditional PID neural networks, most work on this problem follows the error back-propagation (BP) idea, estimating the connection weights through iterative training. By a simple but effective transformation of the PID neural network, a direct weight-calculation method based on the matrix pseudoinverse can be derived, avoiding the lengthy iterative training process. Computer simulation results show that this weights-direct-determination method not only learns/computes faster but also achieves higher computational accuracy.
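A generic sketch of the weights-direct-determination idea: once the hidden-layer response matrix H is formed, the output weights come from a single pseudoinverse instead of iterative BP. The PID-specific network transformation from the paper is not reproduced; H below is just a polynomial basis used as a stand-in.

```python
import numpy as np

X = np.linspace(-1, 1, 100)[:, None]
Y = np.sin(3 * X)                                # toy target
H = np.hstack([X ** k for k in range(6)])        # stand-in basis/hidden-layer matrix
W = np.linalg.pinv(H) @ Y                        # weights in one step, no iteration
print(np.abs(H @ W - Y).max())                   # training error
```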

18.
Artificial neural networks (ANNs) involve a large amount of internode communication. To reduce the communication cost as well as the learning time in ANNs, we earlier (1995) proposed an incremental internode communication method, in which, instead of communicating the full magnitude of a node's output value, only the increment or decrement relative to its previous value is sent over the communication link. In this paper, the effects of the limited-precision incremental communication method on the convergence behavior and performance of multilayer neural networks are investigated. The nonlinear aspects of representing the incremental values with reduced (limited) precision for the commonly used error backpropagation training algorithm are analyzed. It is shown that the nonlinear effect of small perturbations in the input(s)/output of a node does not cause instability. The analysis is supported by simulation studies of two problems; the simulation results demonstrate that the limited-precision errors are bounded and do not seriously affect the convergence of multilayer neural networks.
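A toy sketch of the incremental communication idea: only the quantized change in a node's output crosses the link, and both ends integrate it; the step size q models the limited precision analyzed in the paper. The class and values are illustrative.

```python
class IncrementalLink:
    """Transmit only quantized increments; both ends integrate them, so the
    receiver's copy stays within q/2 of the sender's true value."""
    def __init__(self, q=1.0 / 64):
        self.q = q
        self.mirror = 0.0        # sender's copy of the receiver state

    def send(self, value):
        inc = round((value - self.mirror) / self.q) * self.q
        self.mirror += inc       # integrate the same increment at both ends
        return inc               # this is all that crosses the link

link = IncrementalLink()
recv = 0.0
for v in [0.20, 0.23, 0.95, 0.10]:
    recv += link.send(v)
    print(f"{v:.2f} -> {recv:.4f}")   # reconstruction bounded by q/2
```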

19.
An important consideration when applying neural networks is the sensitivity to the weights and threshold of a strict separating system representing a linearly separable function. Perturbations may affect the weights and threshold, so it is important to estimate the maximal percentage error in the weights and threshold that can be allowed without altering the linearly separable function. In this paper, we provide the greatest allowed bound that can be associated with every strict separating system representing a linearly separable function. The proposed bound improves on the tolerance obtained by Hu. Furthermore, it is the greatest bound for any strict separating system, which is why we call it the greatest tolerance.

20.
The use of multilayer perceptrons (MLPs) with threshold functions (binary step activations) greatly reduces the complexity of hardware implementations of neural networks, provides tolerance to noise, and improves the interpretation of the internal representations. In certain cases, such as in learning stationary tasks, it may be sufficient to find appropriate weights for an MLP with threshold activation functions by software simulation and then transfer the weight values to the hardware implementation. Efficient training of these networks is a subject of considerable ongoing research. Methods available in the literature mainly focus on two-state (threshold) nodes and try to train the networks either by approximating the gradient of the error function and modifying gradient descent appropriately, or by progressively altering the shape of the activation functions. In this paper, we propose an evolution-motivated approach that is eminently suitable for networks with threshold functions, and compare its performance with four other methods. The proposed evolutionary strategy does not need gradient-related information, it is applicable when threshold activations are used from the beginning of training, as in "on-chip" training, and it is able to train networks with integer weights.
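A hedged sketch of an evolution-style trainer for a threshold-activation MLP with integer weights, using a simple (1+1) strategy rather than the paper's exact operators; the task, layer sizes, and mutation rate are illustrative, and thresholds are fixed at zero for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(64, 6)).astype(float)
y = (X.sum(1) > 3).astype(float)                          # toy boolean target

def forward(X, W1, W2):
    h = (X @ W1 >= 0).astype(float)                       # threshold hidden layer
    return (h @ W2 >= 0).astype(float)                    # threshold output

W1 = rng.integers(-3, 4, size=(6, 8))                     # integer weights
W2 = rng.integers(-3, 4, size=(8,))
best = np.mean(forward(X, W1, W2) != y)
for _ in range(3000):                                     # (1+1) evolution loop
    M1 = W1 + rng.integers(-1, 2, size=W1.shape) * (rng.random(W1.shape) < 0.1)
    M2 = W2 + rng.integers(-1, 2, size=W2.shape) * (rng.random(W2.shape) < 0.1)
    err = np.mean(forward(X, M1, M2) != y)
    if err <= best:                                       # keep non-worse mutant
        W1, W2, best = M1, M2, err
print("training error:", best)
```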
