Similar Documents
Found 20 similar documents.
1.
The paper is concerned with general learning (with an arbitrary criterion and state-dependent constraints) of continuous trajectories by means of recurrent neural networks with time-varying weights. The learning process is transformed into an optimal control framework, where the weights to be found are treated as controls. A learning algorithm based on a variational formulation of Pontryagin's maximum principle is proposed. This algorithm is shown to converge, under reasonable conditions, to an optimal solution. Neural networks with time-dependent weights make it possible to efficiently find an admissible solution (i.e., initial weights satisfying the state constraints), which then serves as an initial guess for a proper minimization of the given criterion. The proposed methodology may be directly applicable both to classification of temporal sequences and to optimal tracking of nonlinear dynamic systems. Numerical examples are given that demonstrate the efficiency of the approach.

2.
Training recurrent networks by Evolino (total citations: 1; self-citations: 0; by others: 1)
In recent years, gradient-based LSTM recurrent neural networks (RNNs) solved many previously RNN-unlearnable tasks. Sometimes, however, gradient information is of little use for training RNNs, due to numerous local minima. For such cases, we present a novel method: EVOlution of systems with LINear Outputs (Evolino). Evolino evolves weights to the nonlinear, hidden nodes of RNNs while computing optimal linear mappings from hidden state to output, using methods such as pseudo-inverse-based linear regression. If we instead use quadratic programming to maximize the margin, we obtain the first evolutionary recurrent support vector machines. We show that Evolino-based LSTM can solve tasks that Echo State nets (Jaeger, 2004a) cannot and achieves higher accuracy in certain continuous function generation tasks than conventional gradient descent RNNs, including gradient-based LSTM.
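The closed-form output stage Evolino relies on can be sketched in a few lines. In this illustrative Python snippet (the dimensions and data are made up, not from the paper), the hidden-state activations H of an evolved network are mapped to targets Y via pseudo-inverse-based linear regression, so evolution only needs to search over the nonlinear hidden weights:

```python
import numpy as np

def optimal_readout(H, Y):
    """Least-squares output weights W minimizing ||H @ W - Y||,
    computed in closed form via the Moore-Penrose pseudo-inverse."""
    return np.linalg.pinv(H) @ Y

rng = np.random.default_rng(0)
H = rng.standard_normal((50, 8))       # 50 time steps, 8 hidden units
W_true = rng.standard_normal((8, 2))   # hypothetical target mapping
Y = H @ W_true                         # targets realizable by a linear map
W = optimal_readout(H, Y)
print(np.allclose(W, W_true))          # exact recovery in the noiseless case
```

Replacing this least-squares step with margin-maximizing quadratic programming yields the support-vector variant mentioned in the abstract.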

3.
Robust local stability of multilayer recurrent neural networks (total citations: 1; self-citations: 0; by others: 1)
We derive a condition for robust local stability of multilayer recurrent neural networks with two hidden layers. The stability condition follows from linking theories about linearization, robustness analysis of linear systems under nonlinear perturbation, and matrix inequalities. A characterization of the basin of attraction of the origin is given in terms of the level set of a quadratic Lyapunov function. Similar to the NL(q) theory, the local stability is imposed around the origin and the apparent basin of attraction is made large by applying the criterion, while the proven basin of attraction is relatively small due to conservatism of the criterion. Modification of the dynamic backpropagation by the new stability condition is discussed and illustrated by simulation examples.

4.
In this note, the approximation capability of a class of discrete-time dynamic recurrent neural networks (DRNNs) is studied. The analytical results presented show that some of the states of such a DRNN, described by a set of difference equations, may be used to uniformly approximate a state-space trajectory produced by either a discrete-time nonlinear system or a continuous function on a closed discrete-time interval. This approximation process, however, has to be carried out by an adaptive learning process. This capability provides the potential for applications such as identification and adaptive control.

5.
This paper proposes a new hybrid approach for recurrent neural networks (RNNs). The basic idea is to train the input layer by unsupervised learning and the output layer by supervised learning. In this method, the Kohonen algorithm is used for unsupervised learning, and a dynamic gradient descent method is used for supervised learning. The performance of the proposed algorithm is compared with backpropagation through time (BPTT) on three benchmark problems. Simulation results show that the proposed algorithm outperforms standard BPTT in both the total number of iterations and the learning time required in the training process.

6.
The speed of processing in the visual cortical areas can be fast, with for example the latency of neuronal responses increasing by only approximately 10 ms per area in the ventral visual system sequence V1 to V2 to V4 to inferior temporal visual cortex. This has led to the suggestion that rapid visual processing can only be based on the feedforward connections between cortical areas. To test this idea, we investigated the dynamics of information retrieval in multiple layer networks using a four-stage feedforward network modelled with continuous dynamics with integrate-and-fire neurons, and associative synaptic connections between stages with a synaptic time constant of 10 ms. Through the implementation of continuous dynamics, we found latency differences in information retrieval of only 5 ms per layer when local excitation was absent and processing was purely feedforward. However, information latency differences increased significantly when non-associative local excitation was included. We also found that local recurrent excitation through associatively modified synapses can contribute significantly to processing in as little as 15 ms per layer, including the feedforward and local feedback processing. Moreover, and in contrast to purely feedforward processing, the contribution of local recurrent feedback was useful and approximately this rapid even when retrieval was made difficult by noise. These findings suggest that cortical information processing can benefit from recurrent circuits when the allowed processing time per cortical area is at least 15 ms long.

7.
Chen, Tianyu; Li, Sheng; Yan, Jun. Neural Computing and Applications, 2022, 34(19): 16515-16532.
Recurrent neural networks (RNNs) provide powerful tools for sequence problems. However, simple RNN and its variants are prone to high computational cost, for...

8.
Previous work on analog VLSI implementation of multilayer perceptrons with on-chip learning has mainly targeted the implementation of algorithms such as backpropagation. Although backpropagation is efficient, its implementation in analog VLSI requires excessive computational hardware. It is shown that using gradient descent with direct approximation of the gradient instead of backpropagation is more economical for parallel analog implementations. It is shown that this technique (called 'weight perturbation') is suitable for multilayer recurrent networks as well. A discrete-level analog implementation is presented, showing the training of an XOR network as an example.
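A minimal sketch of the gradient approximation behind weight perturbation (the forward-difference form, names, and toy loss here are illustrative, not the paper's circuit-level scheme): each weight is nudged in turn and the resulting change in loss directly estimates that weight's gradient component, with no backward pass.

```python
import numpy as np

def perturbation_gradient(loss, w, eps=1e-5):
    """Estimate dloss/dw by perturbing one weight at a time."""
    grad = np.zeros_like(w)
    base = loss(w)
    for i in range(w.size):
        w_p = w.copy()
        w_p.flat[i] += eps                       # perturb a single weight
        grad.flat[i] = (loss(w_p) - base) / eps  # forward difference
    return grad

loss = lambda w: float(np.sum(w ** 2))   # toy quadratic loss
w = np.array([1.0, -2.0, 0.5])
g = perturbation_gradient(loss, w)
print(np.allclose(g, 2 * w, atol=1e-3))  # matches the analytic gradient 2w
```

The appeal in analog hardware is that this loop needs only a forward evaluation per weight, at the cost of O(n) evaluations per update.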

9.
A. A. Neurocomputing, 2000, 30(1-4): 153-172.
We present a stochastic learning algorithm for neural networks. The algorithm makes no assumptions about the transfer functions of individual neurons and does not depend on a functional form of the performance measure. The algorithm uses a random step of varying size to adapt weights; the average step size decreases during learning. The large steps enable the algorithm to jump over local maxima/minima, while the small ones ensure convergence in a local area. We investigate convergence properties of the proposed algorithm and test it on four supervised and unsupervised learning problems. We found this algorithm superior to several known algorithms when tested on both generated and real data.
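The random-step idea can be illustrated on a one-dimensional toy problem. The greedy acceptance rule and geometric decay schedule below are assumptions for this sketch; the paper applies the same principle to network weights:

```python
import random

def random_step_minimize(f, x0, steps=2000, step0=1.0, decay=0.997, seed=0):
    """Minimize f by random steps whose average size shrinks over time."""
    rng = random.Random(seed)
    x, fx, step = x0, f(x0), step0
    for _ in range(steps):
        cand = x + rng.uniform(-step, step)  # random step of current size
        fc = f(cand)
        if fc < fx:                          # keep only improving moves
            x, fx = cand, fc
        step *= decay                        # average step size decreases
    return x, fx

# Early large steps cover distance; late small steps refine the solution.
x, fx = random_step_minimize(lambda x: (x - 3.0) ** 2, x0=-10.0)
print(x, fx)
```

Because the step size shrinks geometrically, early iterations can escape poor regions while the tail of the schedule ensures local convergence, mirroring the behaviour described in the abstract.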

10.
This article addresses the problem of performing Nearest Neighbor (NN) queries on uncertain trajectories. The answer to an NN query for certain trajectories is time parameterized due to the continuous nature of the motion. As a consequence of uncertainty, there may be several objects that have a non-zero probability of being a nearest neighbor to a given querying object, and the continuous nature further complicates the semantics of the answer. We capture the impact that the uncertainty of the trajectories has on the semantics of the answer to continuous NN queries and we propose a tree structure for representing the answers, along with efficient algorithms to compute them. We also address the issue of performing NN queries when the motion of the objects is restricted to road networks. Finally, we formally define and show how to efficiently execute several variants of continuous NN queries. Our experiments demonstrate that the proposed algorithms yield significant performance improvements when compared with the corresponding naïve approaches.

11.
A new method of inter-neuron communication called incremental communication is presented. In the incremental communication method, instead of communicating the whole value of a variable, only the increment or decrement of its previous value is sent on a communication link. The incremental value may be either a fixed-point or a floating-point value. A multilayer feedforward network architecture is used to illustrate the effectiveness of the proposed communication scheme. The method is applied to three different learning problems, and the effect of the precision of incremental input-output values of the neurons on the convergence behavior is examined. It is shown through simulation that for some problems even four-bit precision in fixed- and/or floating-point representations is sufficient for the network to converge. With 8-12 bit precision, almost the same results are obtained as with conventional communication using 32-bit precision. The proposed method can lead to significant savings in inter-neuron communication cost for implementations of artificial neural networks on parallel computers, as well as in the interconnection cost of direct hardware realizations. The method can be incorporated into most current learning algorithms in which inter-neuron communications are required. Moreover, it can be used along with the other limited-precision strategies for representation of variables suggested in the literature.
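A toy sketch of the incremental scheme (the function names and fixed-point grid are assumptions, not the paper's protocol): only the quantized change since the last transmission crosses the link, and because the sender tracks exactly what the receiver has accumulated, the reconstruction error stays bounded by half a grid step rather than accumulating.

```python
def quantize(x, bits=4, scale=1.0):
    """Round x to a fixed-point grid with `bits` fractional bits."""
    step = scale / (1 << bits)
    return round(x / step) * step

def send_incremental(values, bits=4):
    sent = 0.0          # sender's record of what the receiver holds
    acc = 0.0           # receiver's running reconstruction
    received = []
    for v in values:
        inc = quantize(v - sent, bits)  # transmit only the increment
        sent += inc                     # sender and receiver stay in sync
        acc += inc
        received.append(acc)
    return received

vals = [0.10, 0.32, 0.31, 0.90, 0.88]
recon = send_incremental(vals, bits=6)
print(max(abs(a - b) for a, b in zip(vals, recon)))
```

With 6 fractional bits the grid step is 1/64, so each reconstructed value is within 1/128 of the true one, illustrating why low-precision increments can suffice for convergence.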

12.
In this paper, we present a new approach for chaos reproduction using variable structure recurrent neural networks (VSRNNs). A neural network identifier is designed with a variable structure that changes according to its output performance as compared to the given orbits of an unknown chaotic system. A tradeoff between identification errors and computational complexity is discussed.

13.
14.
In several applications, data objects move on pre-defined spatial networks such as road segments, railways, and invisible air routes. Many of these objects exhibit similarity with respect to their traversed paths, and therefore two objects can be correlated based on their motion similarity. Useful information can be retrieved from these correlations and this knowledge can be used to define similarity classes. In this paper, we study similarity search for moving object trajectories in spatial networks. The problem poses some important challenges, since it is quite different from the case where objects are allowed to move freely in any direction without motion restrictions. New similarity measures should be employed to express similarity between two trajectories that do not necessarily share any common sub-path. We define new similarity measures based on spatial and temporal characteristics of trajectories, such that the notion of similarity in space and time is well expressed, and moreover they satisfy the metric properties. In addition, we demonstrate that similarity range queries in trajectories are efficiently supported by utilizing metric-based access methods, such as M-trees.
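As a toy illustration of the metric-property requirement (the paper's measures are defined for network-constrained trajectories; this sketch uses free-space samples and a made-up distance): taking the maximum spatio-temporal gap over time-aligned samples inherits symmetry, identity, and the triangle inequality from the underlying Euclidean distance, which is what makes metric access methods such as M-trees applicable.

```python
import math

def traj_dist(a, b):
    """Max spatio-temporal gap between two equal-length (x, y, t) trajectories."""
    return max(math.dist(p, q) for p, q in zip(a, b))

t1 = [(0, 0, 0), (1, 0, 1), (2, 0, 2)]
t2 = [(0, 1, 0), (1, 1, 1), (2, 1, 2)]
t3 = [(0, 3, 0), (1, 3, 1), (2, 3, 2)]
d12, d23, d13 = traj_dist(t1, t2), traj_dist(t2, t3), traj_dist(t1, t3)
print(d12, d23, d13)   # the triangle inequality d13 <= d12 + d23 holds
```

A distance without the triangle inequality would break the pruning guarantees that metric index structures rely on.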

15.
The authors point out that some of the claims and expressions in the paper mentioned in the title by Suykens et al. (ibid., vol.44 (1999)) are incorrect.

16.
In this paper, a continuous-time recurrent neural network (CTRNN) is developed for use in a nonlinear model predictive control (NMPC) context. The neural network, represented in a general nonlinear state-space form, is used to predict the future dynamic behavior of the nonlinear process in real time. An efficient training algorithm for the proposed network is developed using automatic differentiation (AD) techniques. By automatically generating Taylor coefficients, the algorithm not only solves the differential equations of the network but also produces the sensitivities for the training problem. The same approach is also used to solve the online optimization problem in the predictive controller. The proposed neural network and the nonlinear predictive controller were tested on an evaporation case study. A good model fit for the nonlinear plant is obtained using the new method. A comparison with other approaches shows that the new algorithm can considerably reduce network training time and improve solution accuracy. The trained CTRNN is used as an internal model in a predictive controller and gives good performance under different operating conditions.

17.
Here, the formation of continuous attractor dynamics in a nonlinear recurrent neural network is used to build a nonlinear speech denoising method for robust phoneme recognition and information retrieval. Attractor dynamics are first formed in the recurrent neural network by training the clean-speech subspace as the continuous attractor. The network is then used to recognize noisy speech with both stationary and nonstationary noise. In this work, the efficiency of a nonlinear feedforward network is compared with that of the same network augmented by a recurrent connection in its hidden layer. The structure and training of this recurrent connection are designed so that the network learns to denoise the signal step by step, using the properties of the attractors it has formed, alongside phone recognition. With these connections, recognition accuracy improves by 21% for the stationary noise and 14% for the nonstationary noise at 0 dB SNR, relative to a reference feedforward neural network.

18.
In subject classification, artificial neural networks (ANNs) are efficient and objective classification methods, and they have been successfully applied to numerous classification fields. Sometimes, however, classifications do not match the real world and are subject to errors. These problems are caused by the nature of ANNs. We discuss them in the context of multilayer perceptron neural networks; studying these problems gives a better understanding of their classification behavior.

19.
We aimed to examine the diagnostic performance of multilayer perceptron neural networks (MLPNNs) for predicting coronary artery disease and to compare them with a different type of artificial neural network, namely recurrent neural networks (RNNs), and two statistical methods, quadratic discriminant analysis (QDA) and logistic regression (LR). MLPNNs were trained with backpropagation, quick propagation, delta-bar-delta and extended delta-bar-delta algorithms as classifiers; the RNN was trained with the Levenberg-Marquardt algorithm; LR and QDA were used for predicting coronary artery disease. Coronary artery disease was classified with accuracy rates varying from 79.9% to 83.9% by MLPNNs. Even though MLPNNs achieved higher accuracy rates than the statistical methods, LR (73.2%) and QDA (58.4%), their performance was lower than that of the RNN (84.7%). Among the four training algorithms, quick propagation achieved the highest accuracy rate for MLPNNs; however, it was still lower than the RNN trained with the Levenberg-Marquardt algorithm. RNNs, which demonstrated 84.7% accuracy and an 86.5% positive predictive rate, may be a helpful tool in medical decision making for the diagnosis of coronary artery disease.

20.
Multilayer networks are a frontier of current network science research. We study the synchronizability of multilayer bidirectionally coupled star networks via their eigenvalue spectra. By rigorously deriving analytical expressions for the eigenvalues of such networks, we analyze how the number of nodes, the number of layers, the intra-layer coupling strength, and the inter-layer coupling strength relate to synchronizability, focusing on the effect of the number of layers. Besides depending on the intra-layer coupling strength and the inter-layer coupling strength between leaf nodes, the network's synchronizability also depends on the number of layers when the synchronized region is unbounded and the inter-layer coupling between leaf nodes is weak. When the synchronized region is bounded, synchronizability weakens as the number of nodes and the inter-layer coupling strength between hub nodes increase; if the intra-layer coupling is weak, synchronizability weakens as the number of layers increases; but if the inter-layer coupling between leaf nodes is weak, synchronizability instead strengthens as the number of layers increases.
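The spectrum analyzed above can be reproduced numerically for a small example. This sketch is not the paper's closed-form derivation: it builds the supra-Laplacian of M identical star layers whose replica nodes are coupled in a chain with a single inter-layer strength d, without distinguishing hub-hub from leaf-leaf inter-layer coupling as the paper does.

```python
import numpy as np

def star_supra_laplacian(N, M, a=1.0, d=1.0):
    """Supra-Laplacian of M star layers (N nodes each, hub = node 0),
    intra-layer coupling a, chain coupling d between replica nodes."""
    L_star = np.zeros((N, N))            # Laplacian of one star layer
    for leaf in range(1, N):
        L_star[0, 0] += 1; L_star[leaf, leaf] += 1
        L_star[0, leaf] -= 1; L_star[leaf, 0] -= 1
    L_chain = np.zeros((M, M))           # layers coupled in a chain
    for k in range(M - 1):
        L_chain[k, k] += 1; L_chain[k + 1, k + 1] += 1
        L_chain[k, k + 1] -= 1; L_chain[k + 1, k] -= 1
    I_N, I_M = np.eye(N), np.eye(M)
    return a * np.kron(I_M, L_star) + d * np.kron(L_chain, I_N)

L = star_supra_laplacian(N=5, M=3)
eig = np.sort(np.linalg.eigvalsh(L))
print(eig[0], eig[1], eig[-1])   # lambda_2 governs the unbounded case,
                                 # lambda_max / lambda_2 the bounded case
```

Because the two Kronecker terms commute, every eigenvalue is a sum of one star eigenvalue and one chain eigenvalue, which is why varying N, M, a, and d moves the spectrum, and hence the synchronizability, in the ways the abstract describes.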


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号-23)
