Similar Documents
20 similar records found.
1.
Continuous attractors of a class of recurrent neural networks   (Cited by 1; 0 self-citations, 1 by others)
Recurrent neural networks (RNNs) may possess continuous attractors, a property that many brain theories have implicated in learning and memory. There is good evidence that continuous stimuli, such as orientation, movement direction, and the spatial location of objects, can be encoded as continuous attractors in neural networks. The dynamical behavior of continuous attractors is an interesting property of RNNs. This paper studies the continuous attractors of a class of RNNs in which inhibition among neurons is realized through a subtractive mechanism. It is shown that if the synaptic connections have Gaussian shape and the other parameters are appropriately selected, the network can exactly realize continuous attractor dynamics. Conditions are derived to guarantee the validity of the selected parameters, and simulations are provided for illustration.
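A minimal sketch of the mechanism this abstract describes, assuming a ring of rate neurons with Gaussian recurrent weights and global subtractive inhibition; the sizes, gains, and saturating nonlinearity below are illustrative assumptions, not the paper's derived conditions.

import numpy as np

N = 128                                      # neurons on a ring
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)

def gauss_kernel(a=10.0, sigma=0.5):
    d = np.abs(theta[:, None] - theta[None, :])
    d = np.minimum(d, 2 * np.pi - d)         # periodic distance on the ring
    return a * np.exp(-d ** 2 / (2 * sigma ** 2))

W = gauss_kernel()                           # Gaussian-shaped synaptic weights
k = 2.0                                      # subtractive global inhibition

u = np.exp(-theta ** 2)                      # initial bump of activity
for _ in range(500):                         # relax toward the attractor
    r = np.tanh(np.maximum(u, 0.0))          # rectified, saturating rates
    u += 0.1 * (-u + (W @ r) / N - k * r.sum() / N)

print("bump peak stays at theta =", theta[np.argmax(u)])

Because the weights depend only on the distance between preferred positions, the relaxed bump can sit at any location on the ring, which is what makes the attractor continuous.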

2.
Computing with continuous attractors: stability and online aspects   (Cited by 1; 0 self-citations, 1 by others)
Wu S, Amari S. Neural Computation, 2005, 17(10): 2215-2239
Two issues concerning the application of continuous attractors in neural systems are investigated: the computational robustness of continuous attractors with respect to input noises and the implementation of Bayesian online decoding. In a perfect mathematical model for continuous attractors, decoding results for stimuli are highly sensitive to input noises, and this sensitivity is the inevitable consequence of the system's neutral stability. To overcome this shortcoming, we modify the conventional network model by including extra dynamical interactions between neurons. These interactions vary according to the biologically plausible Hebbian learning rule and have the computational role of memorizing and propagating stimulus information accumulated with time. As a result, the new network model responds to the history of external inputs over a period of time, and hence becomes insensitive to short-term fluctuations. Also, since dynamical interactions provide a mechanism to convey the prior knowledge of stimulus, that is, the information of the stimulus presented previously, the network effectively implements online Bayesian inference. This study also reveals some interesting behavior in neural population coding, such as the trade-off between decoding stability and the speed of tracking time-varying stimuli, and the relationship between neural tuning width and the tracking speed.

3.
Advances in understanding the neuronal code employed by cortical networks indicate that networks of parametrically coupled nonlinear iterative maps, each acting as a bifurcation processing element, furnish a potentially powerful tool for the modeling, simulation, and study of cortical networks and the host of higher-level processing and control functions they perform. Such functions are central to understanding and elucidating general principles on which the design of biomorphic learning and intelligent systems can be based. The networks concerned are dynamical in nature, in the sense that they compute not only with static (fixed-point) attractors but also with dynamic (periodic and chaotic) attractors. As such, they compute with diverse attractors and utilize transitions (bifurcations) between attractors, as well as transient chaos, to carry out the functions they perform. An example of a dynamical network, a parametrically coupled net of logistic processing elements, is described and discussed, together with some of its behavioural attributes relevant to elucidating the possible role of coherence, bifurcation, and chaos in higher-level brain functions carried out by cortical networks.
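A hypothetical sketch of such a network of parametrically coupled nonlinear iterative maps: each unit is a logistic map whose bifurcation parameter is modulated by the activity of the other units, so individual units can move between fixed-point, periodic, and chaotic attractors. The coupling scheme and all constants are assumptions for illustration, not the paper's model.

import numpy as np

rng = np.random.default_rng(0)
N, steps = 8, 500
x = rng.random(N)                           # unit states in [0, 1]
C = rng.random((N, N)) / N                  # coupling weights
a_base, gain = 3.2, 0.8                     # base parameter and modulation gain

for t in range(steps):
    # parametric coupling: neighbours shift each unit's bifurcation parameter
    a = np.clip(a_base + gain * (C @ x), 2.8, 4.0)
    x = a * x * (1.0 - x)                   # logistic update per unit

print(x)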

4.
Fractal variation of dynamical attractors is observed in complex-valued neural networks in which a negative-resistance nonlinearity is introduced as the neuron nonlinear function. When a parameter of the negative-resistance nonlinearity is changed continuously, the network attractors exhibit a kind of fractal variation in a certain parameter range between the deterministic and non-deterministic attractor ranges. The fractal pattern has a convergence point, which is also a critical point where deterministic attractors change into chaotic attractors. This result suggests that complex-valued neural networks with negative-resistance nonlinearity exhibit dynamical complexity at the so-called edge of chaos.

5.
Support vector machine theory and a programming-based learning algorithm for neural networks   (Cited by 22; 3 self-citations, 19 by others)
Zhang Ling. Chinese Journal of Computers, 2001, 24(2): 113-118
In recent years, support vector machine (SVM) theory has attracted considerable attention from researchers abroad, who widely regard it as a new research direction for neural network learning; it has recently begun to draw the attention of researchers in China as well. This paper studies the relationship between SVM theory and programming-based learning algorithms for neural networks. First, we show that Vapnik's SVM algorithm is equivalent to the programming-based neural network algorithm proposed by the author in 1994: when the sample set is linearly separable, both yield the maximal-margin solution. They differ in complexity, however: the former (usually solved with Lagrange multipliers) grows exponentially with problem size, while the latter is polynomial in problem size. Second, the programming problem is recast as projecting a point onto a convex set, and this geometric intuition leads to a constructive iterative solution, the "simplex iteration algorithm". The new algorithm has strong geometric intuitiveness, which deepens the understanding of neural network learning in the linearly separable case and yields a necessary and sufficient condition for a sample set to be linearly separable. In addition, the new algorithm provides a very convenient incremental learning procedure for the knowledge-expansion problem. Finally, we point out that the principle of "turning the conditions that must be satisfied into constraints, taking some performance measure of the network as the objective function, and casting network learning as a programming problem" is a highly effective approach to studying neural network learning.
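The reduction described above, maximal margin as projection onto a convex set, can be illustrated with a classic projection iteration (Gilbert's algorithm, used here as a stand-in for the paper's simplex iteration algorithm, whose details the abstract does not give): for linearly separable data, the maximal-margin direction is the minimum-norm point of the convex hull of pairwise class differences. The toy data are assumptions.

import numpy as np

Xp = np.array([[2.0, 2.0], [3.0, 1.5], [2.5, 3.0]])   # class +1
Xn = np.array([[0.0, 0.0], [1.0, 0.5], [0.2, 1.0]])   # class -1
D = np.array([p - n for p in Xp for n in Xn])          # pairwise differences

w = D[0].copy()
for _ in range(200):
    s = D[np.argmin(D @ w)]            # support point of conv(D) along -w
    d = s - w
    t = np.clip(-w @ d / (d @ d + 1e-12), 0.0, 1.0)   # line search on [w, s]
    w = w + t * d                       # step toward the minimum-norm point

print("max-margin direction:", w / np.linalg.norm(w))

Each iteration solves a trivial linear subproblem over the finite difference set, which is what gives projection-style algorithms their polynomial per-step cost in the separable case.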

6.
Attractor networks have been one of the most successful paradigms in neural computation and have been used as models of computation in the nervous system. Recently, we proposed a paradigm called 'latent attractors', in which attractors embedded in a recurrent network via Hebbian learning are used to channel the network's response to external input rather than becoming manifest themselves. This allows the network to generate context-sensitive internal codes in complex situations. Latent attractors are particularly helpful in explaining computations within the hippocampus, a brain region of fundamental significance for memory and spatial learning. Latent attractor networks are a special case of associative memory networks. The model studied here consists of a two-layer recurrent network with attractors stored in the recurrent connections using a clipped Hebbian learning rule. Firing in both layers is competitive (K-winners-take-all), and the number of neurons allowed to fire, K, is smaller than the size of the active set of the stored attractors. The performance of latent attractor networks depends on the number of such attractors that a network can sustain. In this paper, we use signal-to-noise methods developed for standard associative memory networks to carry out a theoretical and computational analysis of the capacity and dynamics of latent attractor networks, an important first step in making latent attractors a viable tool in the repertoire of neural computation. The method yields numerical estimates of the capacity limits and dynamics of latent attractor networks and represents a general approach to analyzing standard associative memory networks with competitive firing. The theoretical analysis is based on estimating the dendritic sum distributions with a Gaussian approximation; because of the competitive firing property, capacity is estimated numerically by iteratively computing the probability of erroneous firing. The analysis covers two cases: a simple case, which accounts for the correlations between weights due to shared patterns, and a detailed case, which also includes the temporal correlations between the network's present and previous states. The latter better predicts the dynamics of the network state when initial spurious firing is non-zero. The theoretical analysis also shows the influence of the main parameters of the model on the storage capacity.
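A minimal sketch of the storage and firing rules named above: sparse binary patterns stored with a clipped (0/1) Hebbian rule and recalled with K-winners-take-all firing, with K smaller than the stored active-set size. All sizes and the cue-degradation level are assumptions.

import numpy as np

rng = np.random.default_rng(1)
N, P, active, K = 200, 10, 20, 12       # neurons, patterns, active set, winners

patterns = np.zeros((P, N))
for p in range(P):                       # sparse binary patterns
    patterns[p, rng.choice(N, active, replace=False)] = 1.0

W = np.clip(patterns.T @ patterns, 0, 1) # clipped Hebbian weights
np.fill_diagonal(W, 0.0)

cue = patterns[0] * (rng.random(N) < 0.7)    # degraded cue
h = W @ cue                                  # dendritic sums
winners = np.argsort(h)[-K:]                 # K-WTA firing
state = np.zeros(N); state[winners] = 1.0
print("overlap with stored pattern:", state @ patterns[0] / active)

The signal-to-noise analysis in the paper amounts to comparing the dendritic-sum distribution h of pattern neurons against that of the background, which the Gaussian approximation makes tractable.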

7.
Robust neural-network-based trajectory tracking control for robots   (Cited by 1; 0 self-citations, 1 by others)
Building on neural network identification, a new robust iterative learning control method is proposed. The method uses a neural network to identify the nonlinear system online and to generate the feedforward term of the iterative learning control law, which is combined with real-time feedback control to achieve continuous trajectory tracking. Simulation results show that the method can overcome uncertainty in the robot's dynamic model as well as external disturbances, reaching satisfactory tracking performance with very few learning iterations and network training cycles, and that it exhibits good robustness and control performance.
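A hedged sketch of the iterative learning control loop described above, stripped of the neural identifier: a plain P-type update u_{k+1}(t) = u_k(t) + L * e_k(t+1) refines the feedforward input over repeated trials of a toy first-order plant. The plant, gain, and trajectory are assumptions; the paper's method additionally generates the feedforward term from an online neural network model.

import numpy as np

T = 50
yd = np.sin(np.linspace(0, np.pi, T + 1))      # desired trajectory
u = np.zeros(T)                                # feedforward input, refined per trial
L = 1.2                                        # learning gain (|1 - 0.5*L| < 1)

for trial in range(30):
    x = np.zeros(T + 1)
    for t in range(T):
        x[t + 1] = 0.9 * x[t] + 0.5 * u[t]     # toy first-order plant
    e = yd - x                                 # trial tracking error
    u = u + L * e[1:]                          # P-type ILC update

print("final tracking error:", np.max(np.abs(e)))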

8.
Chaotic dynamics in a recurrent neural network model in which limit-cycle memory attractors are stored is investigated by means of numerical methods. In particular, we focus on the quick and sensitive response of chaotic memory dynamics to an external input that consists of part of an embedded memory attractor. We have calculated the correlation functions between the firing activities of neurons to understand the dynamical mechanism of these rapid responses. The results show that quite strong correlations arise very quickly between almost all neurons, within 1-2 updating steps after a partial input is applied. They suggest that the existence of dynamical correlations, in other words transient correlations in chaos, plays a very important role in quick and/or sensitive responses.

9.
A massively recurrent neural network responds to input stimuli on the one hand and, on the other, is autonomously active in the absence of sensory inputs. Stimulus and information processing depend crucially on the quality of this autonomous-state dynamics of the ongoing neural activity, which may be dynamically structured in time and space, showing regular, synchronized, bursting, or chaotic activity patterns. We study the influence of nonsynaptic plasticity on the default dynamical state of recurrent neural networks. The nonsynaptic adaptation considered acts on intrinsic neural parameters, such as the threshold and the gain, and is driven by the optimization of the information entropy. In the presence of the intrinsic adaptation processes, we observe three distinct and globally attracting dynamical regimes: a regular synchronized, an overall chaotic, and an intermittent bursting regime. The intermittent bursting regime is characterized by intervals of regular flow, which are quite insensitive to external stimuli, interspersed with chaotic bursts that respond sensitively to input signals. We discuss these findings in the context of self-organized information processing and critical brain dynamics.
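A sketch of entropy-driven nonsynaptic adaptation, using a well-known intrinsic plasticity rule (in the spirit of Triesch's) that adjusts a sigmoid neuron's gain and threshold so that its output distribution approaches a fixed-mean exponential, the maximum-entropy target. The paper's exact adaptation rule may differ, and all constants here are assumptions.

import numpy as np

rng = np.random.default_rng(2)
a, b = 1.0, 0.0                          # intrinsic gain and threshold
mu, eta = 0.2, 0.01                      # target mean firing rate, learning rate

for _ in range(20000):
    x = rng.normal()                     # stand-in for total synaptic input
    y = 1.0 / (1.0 + np.exp(-(a * x + b)))
    # gradient-style updates pushing p(y) toward an exponential with mean mu
    db = eta * (1.0 - (2.0 + 1.0 / mu) * y + y * y / mu)
    da = eta / a + x * db                # gain update is coupled to db
    a, b = a + da, b + db

print("adapted gain and threshold:", a, b)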

10.
A curve reconstruction method based on an orthogonal neural network   (Cited by 2; 0 self-citations, 2 by others)
A curve reconstruction method based on an orthogonal neural network is proposed. The orthogonal network has the same structure as a three-layer feedforward network, except that the processing functions of its hidden units are Tchebycheff orthogonal functions rather than sigmoidal functions. The new method can reconstruct a smooth curve with high accuracy from a relatively small sequence of data points. The network is trained with the Givens orthogonal learning algorithm; since this is not an iterative algorithm, learning is fast, there is no problem of selecting initial network parameters, and training avoids getting trapped in local minima. …
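A sketch of why such a network trains without iteration: with Tchebycheff (Chebyshev) polynomial hidden units, the only trainable weights enter linearly, so training reduces to a linear least-squares problem. numpy's QR-based solver stands in here for the Givens orthogonal learning algorithm; the sampled curve and polynomial degree are assumptions.

import numpy as np

x = np.linspace(-1.0, 1.0, 25)           # sparse sample points
y = np.sin(np.pi * x) + 0.3 * x          # curve to reconstruct

degree = 8
Phi = np.polynomial.chebyshev.chebvander(x, degree)  # hidden-layer outputs
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)          # output weights in one solve

xf = np.linspace(-1.0, 1.0, 400)
yf = np.polynomial.chebyshev.chebvander(xf, degree) @ w
print("max reconstruction error:",
      np.max(np.abs(yf - (np.sin(np.pi * xf) + 0.3 * xf))))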

11.
Siri B, Berry H, Cessac B, Delord B, Quoy M. Neural Computation, 2008, 20(12): 2937-2966
We present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks, with a generic Hebbian learning rule that includes passive forgetting and different timescales for neuronal activity and learning dynamics. Previous numerical work has reported that Hebbian learning drives such systems from chaos to a steady state through a sequence of bifurcations. Here, we interpret these results mathematically and show that these effects, involving a complex coupling between neuronal dynamics and synaptic graph structure, can be analyzed using Jacobian matrices, which introduce both a structural and a dynamical point of view on neural network evolution. Furthermore, we show that sensitivity to a learned pattern is maximal when the largest Lyapunov exponent is close to 0. We discuss how neural networks may take advantage of this regime of high functional interest.
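A sketch of the Jacobian viewpoint referred to above: for a discrete-time rate network x(t+1) = tanh(g W x(t)), the largest Lyapunov exponent can be estimated by propagating a tangent vector through the one-step Jacobians J(t) = g diag(1 - x(t+1)^2) W and averaging the log stretch. The network here is random and fixed (no Hebbian learning), and the size and gain are assumptions.

import numpy as np

rng = np.random.default_rng(3)
N, g, T = 100, 1.5, 2000
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
x = rng.normal(0.0, 0.1, N)
v = rng.normal(size=N); v /= np.linalg.norm(v)   # tangent vector

lyap = 0.0
for t in range(T):
    x = np.tanh(g * (W @ x))
    J = g * (1.0 - x ** 2)[:, None] * W          # Jacobian at the new state
    v = J @ v
    nv = np.linalg.norm(v)
    lyap += np.log(nv); v /= nv                  # accumulate and renormalize

print("largest Lyapunov exponent ~", lyap / T)

A positive estimate indicates chaos; tracking how this number crosses zero as the weights evolve is one way to locate the bifurcation sequence the paper analyzes.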

12.
Real-time algorithms for gradient-descent supervised learning in recurrent dynamical neural networks fail to support scalable VLSI implementation because their complexity grows sharply with the network dimension. We present an alternative implementation in analog VLSI, which employs a stochastic perturbation algorithm to observe the gradient of the error index directly on the network in random directions of the parameter space, thereby avoiding the tedious task of deriving the gradient from an explicit model of the network dynamics. The network contains six fully recurrent neurons with continuous-time dynamics, providing 42 free parameters comprising connection strengths and thresholds. The chip implementing the network includes local provisions for both learning and storing the parameters, integrated in a scalable architecture that can readily be expanded for applications requiring recurrent dynamical networks of larger dimensionality. We describe and characterize the functional elements of the implemented recurrent network and integrated learning system, and include experimental results obtained from training the network to represent a quadrature-phase oscillator.
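A software sketch of the stochastic perturbation idea: perturb all parameters along a random direction, observe the resulting change in the error index directly (as the chip does on the physical network), and step against the estimated directional gradient. The toy task and constants are assumptions.

import numpy as np

rng = np.random.default_rng(4)
W = rng.normal(size=(2, 2))                      # free parameters
target = np.array([[0.0, -1.0], [1.0, 0.0]])     # quadrature-like rotation
X = rng.normal(size=(2, 50))

def error(W):
    return np.mean((W @ X - target @ X) ** 2)    # observable error index

eps, lr = 1e-3, 0.2
for _ in range(5000):
    delta = rng.choice([-1.0, 1.0], size=W.shape)        # random direction
    g = (error(W + eps * delta) - error(W - eps * delta)) / (2 * eps)
    W -= lr * g * delta                                   # descend along delta

print("residual error:", error(W))

Only two error observations per step are needed, regardless of the number of parameters, which is what makes the scheme attractive for scalable analog hardware.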

13.
Suemitsu Y, Nara S. Neural Computation, 2004, 16(9): 1943-1957
Chaotic dynamics introduced into a neural network model is applied to solving two-dimensional mazes, which are ill-posed problems. A moving object moves from its position at time t to t + 1 according to a simply defined motion function computed from the firing patterns of the neural network model at each time step t. We embed in the network several prototype attractors corresponding to simple motions of the object toward several directions in two-dimensional space. Introducing chaotic dynamics into the network yields outputs sampled from intermediate state points between the embedded attractors in state space, and these dynamics enable the object to move in various directions. Switching a system parameter between a chaotic and an attractor regime in the state space of the network enables the object to reach a set target in a two-dimensional maze. Computer simulations show that the success rate of this method over 300 trials is higher than that of a random walk. To investigate why the proposed method performs better, we compute and discuss statistical data concerning its dynamical structure.

14.
An algorithm is proposed that estimates neural network weights with a scaled unscented Kalman filter (Scaled-UKF). It overcomes drawbacks of the BP algorithm such as slow learning, heavy computation, and a tendency to get trapped in local minima. Using the Mackey-Glass chaotic time series as network input, the Scaled-UKF, UKF, and BP algorithms were used to train the network in simulation. The results show that the Scaled-UKF algorithm trains faster and predicts more accurately than BP and avoids getting trapped in local minima; compared with the plain UKF, the variable distribution need not be restricted to Gaussian and the state covariance is guaranteed to remain positive semidefinite.
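A hedged sketch of the scaled unscented transform at the core of Scaled-UKF training: sigma points are generated around the current weight estimate with scaling parameters alpha, beta, kappa, and would then be propagated through the network in place of EKF-style linearization. The filter's predict/update steps are omitted; the parameter values are the usual defaults, assumed here.

import numpy as np

def scaled_sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    n = mean.size
    lam = alpha ** 2 * (n + kappa) - n           # scaled composite parameter
    S = np.linalg.cholesky((n + lam) * cov)      # matrix square root
    pts = np.vstack([mean, mean + S.T, mean - S.T])   # 2n+1 sigma points
    wm = np.full(2 * n + 1, 0.5 / (n + lam))     # mean weights
    wc = wm.copy()                               # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha ** 2 + beta)    # beta corrects the 0th weight
    return pts, wm, wc

w = np.zeros(3)                   # toy "network weight" state vector
P = 0.1 * np.eye(3)               # weight covariance
pts, wm, wc = scaled_sigma_points(w, P)
print(pts.shape, wm.sum())        # (7, 3); mean weights sum to 1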

15.
On Chaos and Neural Networks: The Backpropagation Paradigm   (Cited by 2; 0 self-citations, 2 by others)
In training feed-forward neural networks using the backpropagation algorithm, a sensitivity to the values of the parameters of the algorithm has been observed. In particular, this sensitivity with respect to parameter values, such as the learning rate, plays an important role in the final outcome. In this tutorial paper, we look at neural networks from a dynamical systems point of view and examine their properties. To this purpose, we collect results regarding chaos theory as well as the backpropagation algorithm and establish a relationship between them. As an example, we study in detail the learning of the exclusive OR, an elementary Boolean function. The following conclusions hold for our XOR neural network: no chaos appears for learning rates lower than 5; when chaos occurs, it disappears as learning progresses; and for non-chaotic learning rates, the network learns faster than for learning rates at which chaos occurs.
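A toy reconstruction of the experimental setup, assuming a 2-2-1 sigmoid network trained by plain batch backpropagation on XOR; sweeping the learning rate contrasts smooth convergence at small rates with erratic behavior at large ones. The architecture and rate grid are assumptions, not the paper's exact configuration.

import numpy as np

rng = np.random.default_rng(5)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

for lr in (0.5, 2.0, 6.0):                        # below and above the ~5 threshold
    W1, W2 = rng.normal(size=(2, 2)), rng.normal(size=(2, 1))
    b1, b2 = np.zeros(2), np.zeros(1)
    for _ in range(3000):                          # batch backpropagation
        h = sig(X @ W1 + b1)
        o = sig(h @ W2 + b2)
        d2 = (o - y) * o * (1 - o)                 # output-layer delta
        d1 = (d2 @ W2.T) * h * (1 - h)             # hidden-layer delta
        W2 -= lr * h.T @ d2; b2 -= lr * d2.sum(0)
        W1 -= lr * X.T @ d1; b1 -= lr * d1.sum(0)
    print(f"lr={lr}: final MSE {np.mean((o - y) ** 2):.4f}")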

16.
In this paper we analyze how supervised learning occurs in ecological neural networks, i.e., networks that interact with an autonomous external environment and therefore at least partially determine their own input through their behavior. Using an evolutionary method for selecting good teaching inputs, we surprisingly find that to obtain a desired output X it is better to use a teaching input different from X. To explain this fact we argue that teaching inputs in ecological networks have two different effects: (a) to reduce the discrepancy between the actual output of the network and the teaching input, and (b) to modify the network's behavior and, as a consequence, the network's learning experiences. Evolved teaching inputs appear to represent a compromise between these two needs. We finally show that evolved teaching inputs that are allowed to change during the learning process function differently at different stages of learning, first giving more weight to (b) and, later on, to (a).

17.
An improvement is made to the composite-input dynamical recurrent neural network (CIDRNN), itself developed from the Elman dynamical recurrent network, and a new dynamical recurrent architecture, the State Delay Input Dynamical Recurrent Neural Network, is proposed. With this new topology and learning rule, the meaning of each weight matrix becomes explicit, and weight training becomes simpler and clearer. Simulations show that, because the network incorporates the previous step's inputs and outputs, convergence is faster, making real-time control more feasible. The network is then applied to identifying the unknown nonlinear dynamics of a robot: the discrepancy between the identified output and the output of a mechanistic model is used to recover the information lost by the mechanistic or simplified model. This both exploits existing robot modeling methods and reduces the network's computational load, speeding up identification. Simulation results confirm the effectiveness of the improvement.

18.
An adaptive inverse control system based on a composite orthogonal neural network   (Cited by 10; 0 self-citations, 10 by others)
Ye Jun. Computer Simulation, 2004, 21(2): 92-94
BP neural networks are currently the common choice in adaptive inverse control systems, but they suffer from algorithmic complexity and a tendency to fall into local minima. Orthogonal neural networks can overcome these weaknesses, but their learning algorithms have certain limitations of their own, so a composite orthogonal neural network is proposed. Its structure is the same as that of a three-layer feedforward orthogonal network, except that the processing functions of the hidden units are composite orthogonal functions built on a parameterized sigmoid. The algorithm is simple, learning converges quickly, and the network's function parameters can be optimized, providing a method for dynamic modeling of nonlinear systems. Simulations show that the network achieves high control accuracy and adaptive learning capability in adaptive inverse control of a process. This dynamic neural network has stronger modeling ability and learning adaptability than other neural networks, with excellent properties such as high approximation accuracy for both linear and nonlinear mappings, making it well suited to real-time control systems.

19.
A method is proposed for constructing salient features from a set of features given as input to a feedforward neural network used for supervised learning. Combinations of the original features are formed that maximize the sensitivity of the network's outputs with respect to variations of its inputs. The method exhibits some similarity to principal component analysis, but it also takes into account the supervised character of the learning task. Applied to classification problems, it leads to improved generalization ability, originating from the alleviation of the curse of dimensionality.
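A sketch of the construction: estimate the network's input-output Jacobian (here by central finite differences around a reference input) and take the leading right singular vectors as the input combinations to which the outputs are most sensitive. The stand-in network function is hypothetical; in practice it would be the trained feedforward classifier.

import numpy as np

def network(x):                         # hypothetical trained network
    return np.tanh(np.array([3.0 * x[0] - x[1], 0.1 * x[2]]))

def jacobian(f, x, eps=1e-5):
    fx = f(x)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x); dx[i] = eps
        J[:, i] = (f(x + dx) - f(x - dx)) / (2 * eps)   # central differences
    return J

x0 = np.zeros(3)                        # reference input
J = jacobian(network, x0)
_, s, Vt = np.linalg.svd(J)             # right singular vectors span input space
print("salient feature directions:\n", Vt[: len(s)])

Unlike PCA, which looks only at the input covariance, the singular vectors of the Jacobian weight input directions by how strongly the supervised outputs respond to them.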

20.
For uncertain nonlinear chaotic systems, a new control method based on dynamic neural network modeling is proposed. Based on Lyapunov stability theory, an online learning rule for the network weights is derived that guarantees global stability of the system. In the chaos modeling stage, the neural network learns the uncertain chaotic system; a controller is then designed on the basis of the learned model to steer the chaotic state to the desired target position, and the stability of the system is analyzed rigorously. The method is applied to modeling and controlling the Logistic and Hénon maps, and numerical simulations demonstrate its effectiveness.
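A hedged sketch of the two-stage scheme on the Logistic map x' = r*x*(1-x) + u with r unknown to the controller: an online linear-in-parameters model over the basis [x, x^2] first learns the map from observed transitions, then a certainty-equivalence control drives the state to a desired fixed point. The simple LMS weight update stands in for the paper's Lyapunov-derived learning law, and all constants are assumptions.

import numpy as np

r, target = 3.9, 0.6                     # true parameter (unknown), goal state
w = np.zeros(2)                          # model weights for basis [x, x^2]
eta = 0.5                                # adaptation gain
x = 0.3

for t in range(300):
    phi = np.array([x, x * x])
    u = target - w @ phi if t > 100 else 0.0   # control switched on after learning
    x_next = r * x * (1 - x) + u
    f_obs = x_next - u                   # observed uncontrolled dynamics
    w += eta * (f_obs - w @ phi) * phi   # online LMS model update
    x = x_next

print("final state:", x, "learned weights:", w)   # w approaches [r, -r]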
