Similar Articles
Found 20 similar articles (search time: 312 ms)
1.
Zhang W, Wu S. Neural Computation, 2012, 24(7): 1695-1721
Descending feedback connections, together with ascending feedforward ones, are the indispensable parts of the sensory pathways in the central nervous system. This study investigates the potential roles of feedback interactions in neural information processing. We consider a two-layer continuous attractor neural network (CANN), in which neurons in the first layer receive feedback inputs from those in the second one. By utilizing the intrinsic property of a CANN, we use a projection method to reduce the dimensionality of the network dynamics significantly. The simplified dynamics allows us to elucidate the effects of feedback modulation analytically. We find that positive feedback enhances the stability of the network state, leading to improved population decoding performance, whereas negative feedback increases the mobility of the network state, inducing spontaneously moving bumps. For strong negative feedback interaction, the network response to a moving stimulus can lead the actual stimulus position, achieving anticipative behavior. The biological implications of these findings are discussed. The simulation results agree well with our theoretical analysis.
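For orientation, a minimal single-layer caricature of the bump dynamics with a tunable feedback term; the `g_fb * r` term merely stands in for the second layer's feedback, and all sizes, kernels, and constants are illustrative assumptions, not the authors' model:

```python
# Hypothetical illustration: a CANN bump nudged by a feedback term.
import numpy as np

N = 128                                            # neurons on a ring
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
d = (x[:, None] - x[None, :] + np.pi) % (2 * np.pi) - np.pi  # wrapped distance
J = np.exp(-d ** 2 / 0.5)                          # translation-invariant coupling
J /= J.sum(axis=1, keepdims=True)

def step(u, stim, g_fb, dt=0.1):
    """One Euler step; g_fb > 0 stabilizes the bump, g_fb < 0 mobilizes it."""
    r = np.maximum(u, 0.0) ** 2
    r = r / (1.0 + 0.5 * r.sum())                  # divisive normalization
    return u + dt * (-u + J @ r + stim + g_fb * r)

u = np.exp(-x ** 2 / 0.5)                          # initial bump at 0
stim = 0.2 * np.exp(-(x - 0.5) ** 2 / 0.5)         # stimulus off-center at 0.5
for _ in range(300):
    u = step(u, stim, g_fb=0.2)
print("bump peak at x =", round(float(x[np.argmax(u)]), 2))  # bump tracks stimulus
```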

2.
Recurrent neural architectures having oscillatory dynamics use rhythmic network activity to represent patterns stored in short-term memory. Multiple stored patterns can be retained in memory over the same neural substrate because the network's state persistently switches between them. Here we present a simple oscillatory memory that extends the dynamic threshold approach of Horn and Usher (1991) by including weight decay. The modified model is able to match behavioral data from human subjects performing a running memory span task simply by assuming appropriate weight decay rates. The results suggest that simple oscillatory memories incorporating weight decay capture at least some key properties of human short-term memory. We examine the implications of the results for theories about the relative role of interference and decay in forgetting, and hypothesize that adjustments of activity decay rate may be an important aspect of human attentional mechanisms.
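A hedged cartoon of the destabilizing-threshold mechanism: Hebbian pattern storage with an activity-tracking threshold in the spirit of Horn and Usher, plus the weight decay the abstract adds. The constants are invented for illustration (not the fitted rates), and this toy only shows that the threshold prevents the state from settling; the full model's pattern-to-pattern switching needs the paper's parameterization:

```python
# Illustrative oscillatory memory: a dynamic threshold destabilizes whatever
# pattern the network is in, so activity oscillates instead of fixating;
# weight decay makes older traces fade.
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 3
patterns = rng.choice([-1.0, 1.0], size=(P, N))
W = patterns.T @ patterns / N                  # Hebbian storage of P patterns
np.fill_diagonal(W, 0.0)

s = patterns[0].copy()                         # start in pattern 0
theta = np.zeros(N)                            # dynamic thresholds ("fatigue")
for t in range(60):
    W *= 0.995                                 # per-step weight decay
    h = W @ s - theta
    s = np.where(h >= 0, 1.0, -1.0)
    theta = 0.9 * theta + 0.3 * s              # threshold tracks recent activity
    if t % 10 == 0:
        print(t, np.round(patterns @ s / N, 2))  # overlaps keep changing
```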

3.
Toyoizumi T. Neural Computation, 2012, 24(10): 2678-2699
Many cognitive processes rely on the ability of the brain to hold sequences of events in short-term memory. Recent studies have revealed that such memory can be read out from the transient dynamics of a network of neurons. However, the memory performance of such a network in buffering past information has been rigorously estimated only in networks of linear neurons. When signal gain is kept low, so that neurons operate primarily in the linear part of their response nonlinearity, the memory lifetime is bounded by the square root of the network size. In this work, I demonstrate that it is possible to achieve a memory lifetime almost proportional to the network size, "an extensive memory lifetime," when the nonlinearity of neurons is appropriately used. The analysis of neural activity revealed that nonlinear dynamics prevented the accumulation of noise by partially removing noise in each time step. With this error-correcting mechanism, a memory lifetime growing almost linearly with the network size can be achieved.

4.
Recently multineuronal recording has allowed us to observe patterned firings, synchronization, oscillation, and global state transitions in the recurrent networks of central nervous systems. We propose a learning algorithm based on the process of information maximization in a recurrent network, which we call recurrent infomax (RI). RI maximizes information retention and thereby minimizes information loss through time in a network. We find that feeding in external inputs consisting of information obtained from photographs of natural scenes into an RI-based model of a recurrent network results in the appearance of Gabor-like selectivity quite similar to that existing in simple cells of the primary visual cortex. We find that without external input, this network exhibits cell assembly-like and synfire chain-like spontaneous activity as well as a critical neuronal avalanche. In addition, we find that RI embeds externally input temporal firing patterns to the network so that it spontaneously reproduces these patterns after learning. RI provides a simple framework to explain a wide range of phenomena observed in in vivo and in vitro neuronal networks, and it will provide a novel understanding of experimental results for multineuronal activity and plasticity from an information-theoretic point of view.

5.
A key challenge for neural modeling is to explain how a continuous stream of multimodal input from a rapidly changing environment can be processed by stereotypical recurrent circuits of integrate-and-fire neurons in real time. We propose a new computational model for real-time computing on time-varying input that provides an alternative to paradigms based on Turing machines or attractor neural networks. It does not require a task-dependent construction of neural circuits. Instead, it is based on principles of high-dimensional dynamical systems in combination with statistical learning theory and can be implemented on generic evolved or found recurrent circuitry. It is shown that the inherent transient dynamics of the high-dimensional dynamical system formed by a sufficiently large and heterogeneous neural circuit may serve as universal analog fading memory. Readout neurons can learn to extract in real time from the current state of such recurrent neural circuit information about current and past inputs that may be needed for diverse tasks. Stable internal states are not required for giving a stable output, since transient internal states can be transformed by readout neurons into stable target outputs due to the high dimensionality of the dynamical system. Our approach is based on a rigorous computational model, the liquid state machine, that, unlike Turing machines, does not require sequential transitions between well-defined discrete internal states. It is supported, as the Turing machine is, by rigorous mathematical results that predict universal computational power under idealized conditions, but for the biologically more realistic scenario of real-time processing of time-varying inputs. Our approach provides new perspectives for the interpretation of neural coding, the design of experiments and data analysis in neurophysiology, and the solution of problems in robotics and neurotechnology.
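The liquid state machine in the paper is built from spiking integrate-and-fire circuits; the echo-state-style rate sketch below is an assumption-laden stand-in that only illustrates the core principle of a fixed, generic high-dimensional circuit serving as fading memory for a trained linear readout (sizes, spectral radius, and the delay-3 recall task are arbitrary choices):

```python
# Fixed random recurrent circuit as fading memory; only the readout is trained.
import numpy as np

rng = np.random.default_rng(1)
N, T, delay = 200, 2000, 3
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9: fading memory
w_in = rng.normal(0.0, 1.0, N)

u = rng.uniform(-1.0, 1.0, T)                    # time-varying input stream
states = np.zeros((T, N))
xs = np.zeros(N)
for t in range(T):
    xs = np.tanh(W @ xs + w_in * u[t])           # transient high-dimensional state
    states[t] = xs

y = np.roll(u, delay)                            # target: the input 3 steps ago
w_out = np.linalg.lstsq(states[delay:], y[delay:], rcond=None)[0]  # linear readout
pred = states[delay:] @ w_out
print("readout correlation:", round(float(np.corrcoef(pred, y[delay:])[0, 1]), 3))
```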

6.
We propose a simple neural network model to understand the dynamics of temporal pulse coding. The model is composed of coincidence detector neurons with uniform synaptic efficacies and random pulse propagation delays. We also assume a global negative feedback mechanism which controls the network activity, leading to a fixed number of neurons firing within a certain time window. Due to this constraint, the network state becomes well defined and the dynamics equivalent to a piecewise nonlinear map. Numerical simulations of the model indicate that the latency of neuronal firing is crucial to the global network dynamics; when the timing of postsynaptic firing is less sensitive to perturbations in timing of presynaptic spikes, the network dynamics become stable and periodic, whereas increased sensitivity leads to instability and chaotic dynamics. Furthermore, we introduce a learning rule which decreases the Lyapunov exponent of an attractor and enlarges the basin of attraction.

7.
In short-term memory networks, transient stimuli are represented by patterns of neural activity that persist long after stimulus offset. Here, we compare the performance of two prominent classes of memory networks, feedback-based attractor networks and feedforward networks, in conveying information about the amplitude of a briefly presented stimulus in the presence of gaussian noise. Using Fisher information as a metric of memory performance, we find that the optimal form of network architecture depends strongly on assumptions about the forms of nonlinearities in the network. For purely linear networks, we find that feedforward networks outperform attractor networks because noise is continually removed from feedforward networks when signals exit the network; as a result, feedforward networks can amplify signals they receive faster than noise accumulates over time. By contrast, attractor networks must operate in a signal-attenuating regime to avoid the buildup of noise. However, if the amplification of signals is limited by a finite dynamic range of neuronal responses or if noise is reset at the time of signal arrival, as suggested by recent experiments, we find that attractor networks can outperform feedforward ones. Under a simple model in which neurons have a finite dynamic range, we find that the optimal attractor networks are forgetful if there is no mechanism for noise reduction with signal arrival but nonforgetful (perfect integrators) in the presence of a strong reset mechanism. Furthermore, we find that the maximal Fisher information for the feedforward and attractor networks exhibits power law decay as a function of time and scales linearly with the number of neurons. These results highlight prominent factors that lead to trade-offs in the memory performance of networks with different architectures and constraints, and suggest conditions under which attractor or feedforward networks may be best suited to storing information about previous stimuli.
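A toy numerical illustration of the linear-network argument (gain, noise level, and duration are arbitrary assumptions, not the paper's parameters): an amplifying feedforward chain leaves each noise injection behind relative to the growing signal, while a perfect integrator accumulates every injection on top of an unamplified signal:

```python
# Signal-to-noise after T steps: amplifying feedforward chain vs. integrator.
import numpy as np

rng = np.random.default_rng(2)
T, trials, s0, sigma = 20, 5000, 1.0, 0.1

g = 1.2                                             # per-stage feedforward gain
ff = np.full(trials, s0)
for _ in range(T):
    ff = g * ff + sigma * rng.normal(size=trials)   # signal amplified past noise

att = np.full(trials, s0)                           # attractor as perfect integrator
for _ in range(T):
    att = att + sigma * rng.normal(size=trials)     # noise builds up on the signal

print("feedforward SNR:", round(float(ff.mean() / ff.std()), 2))
print("integrator SNR: ", round(float(att.mean() / att.std()), 2))
```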

8.
In modeling modern industrial processes, the multivariable, nonlinear, and dynamic nature of production processes increases model complexity and reduces modeling accuracy. To address this problem, the nonnegative garrote (NNG) algorithm is embedded into a long short-term memory (LSTM) neural network, yielding a dynamic soft-sensor algorithm based on an LSTM network with input-variable selection. First, a trained LSTM network is generated through parameter optimization, exploiting its strong memory of historical information to handle the dynamics and time delays of industrial processes. Second, the NNG algorithm compresses the LSTM input weights to eliminate redundant variables and improve model accuracy, with hyperparameters tuned by grid search and blocked cross-validation. Finally, the algorithm is applied to soft-sensor modeling of the SO2 concentration in flue gas emitted by the desulfurization process of a coal-fired power plant, and its performance is compared with other state-of-the-art algorithms. Experimental results show that the proposed algorithm effectively removes redundant variables, reduces model complexity, and improves predictive performance.
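For intuition about the variable-selection step alone, here is a minimal nonnegative garrote sketch applied to an ordinary least-squares model rather than an LSTM; the data, penalty, and optimizer are illustrative assumptions:

```python
# Nonnegative garrote: nonnegative shrink factors c_j scale an initial model's
# input weights; the penalty drives redundant inputs' c_j to zero.
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 8
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.5, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0])  # 3 relevant inputs
y = X @ beta_true + 0.1 * rng.normal(size=n)

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]  # initial (unpenalized) estimate
Z = X * beta_hat                                  # garrote design matrix
lam, c = 5.0, np.ones(p)
for _ in range(500):                              # projected gradient descent
    grad = -Z.T @ (y - Z @ c) + lam               # grad of 0.5*||y-Zc||^2 + lam*sum(c)
    c = np.maximum(0.0, c - 1e-3 * grad)          # project onto c >= 0
print(np.round(c, 2))                             # zeros flag removable inputs
```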

9.
The algorithms that simple feedback neural circuits representing a brain area can rapidly carry out are often adequate for easy problems but can return incorrect answers for more difficult ones. A new excitatory-inhibitory circuit model of associative memory displays the common human problem of failing to rapidly find a memory when only a small clue is present. The memory model and a related computational network for solving Sudoku puzzles produce answers that contain implicit check bits in the representation of information across neurons, allowing a rapid evaluation of whether the putative answer is correct or incorrect through a computation related to visual pop-out. This fact may account for our strong psychological feeling of right or wrong when we retrieve a nominal memory from a minimal clue. This information allows more difficult computations or memory retrievals to be done in a serial fashion by using the fast but limited capabilities of a computational module multiple times. The mathematics of the excitatory-inhibitory circuits for associative memory and for Sudoku, both of which are understood in terms of energy or Lyapunov functions, is described in detail.
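A minimal sketch of the classical ingredients the abstract builds on: an associative memory whose asynchronous updates descend a Lyapunov energy, recalled from a partial clue. The excitatory-inhibitory structure and check-bit readout of the actual model are not reproduced here; sizes and the clue fraction are illustrative:

```python
# Hopfield-style recall from a partial cue under an energy function.
import numpy as np

rng = np.random.default_rng(4)
N, P = 200, 5
patterns = rng.choice([-1.0, 1.0], size=(P, N))
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0.0)

def energy(s):
    return -0.5 * s @ W @ s                       # Lyapunov function

s = patterns[0].copy()
s[20:] = rng.choice([-1.0, 1.0], size=N - 20)     # only a ~10% clue survives
print("initial overlap:", patterns[0] @ s / N, " E =", round(float(energy(s)), 1))
for _ in range(5):                                # asynchronous updates lower E
    for i in rng.permutation(N):
        s[i] = 1.0 if W[i] @ s >= 0 else -1.0
print("final overlap:", patterns[0] @ s / N, " E =", round(float(energy(s)), 1))
# Shrinking the clue further makes rapid retrieval fail; that failure is the
# regime the abstract discusses.
```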

10.
11.
Araki O, Aihara K. Neural Computation, 2001, 13(12): 2799-2822
Although various means of information representation in the cortex have been considered, the fundamental mechanism for such representation is not well understood. The relation between neural network dynamics and properties of information representation needs to be examined. We examined spatial pattern properties of mean firing rates and spatiotemporal spikes in an interconnected spiking neural network model. We found that whereas the spatiotemporal spike patterns are chaotic and unstable, the spatial patterns of mean firing rates (SPMFR) are steady and stable, reflecting the internal structure of synaptic weights. Interestingly, the chaotic instability contributes to fast stabilization of the SPMFR. These findings suggest that there are two types of network dynamics behind neuronal spiking: internally driven dynamics and externally driven dynamics. When the internally driven dynamics dominate, spikes are relatively more chaotic and independent of external inputs; the SPMFR are steady and stable. When the externally driven dynamics dominate, the spiking patterns are relatively more dependent on the spatiotemporal structure of external inputs. These emergent properties of information representation imply that the brain may adopt a dual coding system. Recent experimental data suggest that internally driven and externally driven dynamics coexist and work together in the cortex.

12.

Time series forecasting (TSF) consists of estimating models to predict future values based on previously observed values of a time series, and it can be applied to solve many real-world problems. TSF has traditionally been tackled by considering autoregressive neural networks (ARNNs) or recurrent neural networks (RNNs), where hidden nodes are usually configured using additive activation functions, such as sigmoidal functions. ARNNs are based on a short-term memory of the time series in the form of lagged time series values used as inputs, while RNNs include a long-term memory structure. The objective of this paper is twofold. First, it explores the potential of multiplicative nodes for ARNNs, by considering product unit (PU) activation functions, motivated by the fact that PUs are especially useful for modelling highly correlated features, such as the lagged time series values used as inputs for ARNNs. Second, it proposes a new hybrid RNN model based on PUs, by estimating the PU outputs from the combination of a long-term reservoir and the short-term lagged time series values. A complete set of experiments with 29 data sets shows competitive performance for both model proposals, and a set of statistical tests confirms that they achieve the state of the art in TSF, with especially promising results for the proposed hybrid RNN. The experiments in this paper show that the recurrent model is very competitive for relatively large time series, where longer forecast horizons are required, while the autoregressive model is a good selection if the data set is small or if a low computational cost is needed.
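For concreteness, a single product unit as the abstract describes it: the output is a product of inputs raised to learned exponents, capturing multiplicative interactions among correlated lagged values. The weights below are arbitrary, and positive inputs are assumed so the log-space form is defined:

```python
# One product unit (PU): prod_i x_i ** w_i, computed in log space for x_i > 0.
import numpy as np

def product_unit(x, w):
    return np.exp(w @ np.log(x))

lags = np.array([1.02, 0.98, 1.05])        # e.g., three lagged series values
print(product_unit(lags, np.array([0.5, -1.0, 2.0])))
```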


13.
Delay-independent stability in bidirectional associative memory networks
It is shown that if the neuronal gains are small compared with the synaptic connection weights, then a bidirectional associative memory network with axonal signal transmission delays converges to the equilibria associated with exogenous inputs to the network. Both discrete and continuously distributed delays are considered; the asymptotic stability is global in the state space of neuronal activations and also is independent of the delays.
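For context, a discrete, delay-free BAM toy (the paper's result concerns the continuous-time network with transmission delays, which this sketch does not model): pattern pairs are stored in a correlation matrix and recalled by bouncing between the two layers:

```python
# Discrete BAM: recall a stored (x, y) pair from a noisy x-side probe.
import numpy as np

rng = np.random.default_rng(5)
Nx, Ny, P = 60, 40, 3
X = rng.choice([-1.0, 1.0], (P, Nx))
Y = rng.choice([-1.0, 1.0], (P, Ny))
M = X.T @ Y                                    # correlation storage of the pairs

sgn = lambda h: np.where(h >= 0, 1.0, -1.0)
x = X[0] * sgn(rng.random(Nx) - 0.15)          # probe: ~15% of bits flipped
for _ in range(5):                             # bidirectional updates
    y = sgn(x @ M)
    x = sgn(M @ y)
print("overlap with pair 0:", x @ X[0] / Nx, y @ Y[0] / Ny)
```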

14.
The aim of this paper was to propose a recurrent neural network-based predictive controller for robotic manipulators. A neural network controller for a six-joint Stanford robotic manipulator was designed using generalized predictive control (GPC) and the Elman network. The GPC algorithm, which is a class of digital control method, requires long computational time. This is a disadvantage in real-time robot control; therefore, the Elman network controller was designed to reduce processing time by avoiding the high mathematical and computational complexity of the GPC. The main reason for choosing the Elman network, amongst several neural network algorithms, was that the presence of feedback loops has a profound impact on the learning capability of the network. The designed neural network controller was able to recover quickly because of its significant generalization capability, which allowed it to adapt very rapidly to changes in inputs. The performance of the controller was also shown graphically using simulation software, including the dynamics and kinematics of the robot model.

15.
Complex-valued multistate neural associative memory
A model of a multivalued associative memory is presented. This memory has the form of a fully connected attractor neural network composed of multistate complex-valued neurons. Such a network is able to perform the task of storing and recalling gray-scale images. It is also shown that the complex-valued fully connected neural network may be considered as a generalization of a Hopfield network containing real-valued neurons. A computational energy function is introduced and evaluated in order to prove network stability for asynchronous dynamics. Storage capacity as related to the number of accessible neuron states is also estimated.
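A hedged sketch of the multistate unit: states are K-th roots of unity (gray levels), the update snaps the local field's phase back onto that alphabet, and storage uses a simple Hermitian correlation rule. Sizes, the storage rule, and the synchronous update schedule are illustrative choices, not necessarily the paper's:

```python
# Complex-valued multistate associative recall of a "gray-scale" pattern.
import numpy as np

rng = np.random.default_rng(6)
K, N, P = 8, 100, 2                            # K gray levels, N neurons
levels = np.exp(2j * np.pi * np.arange(K) / K)
patterns = levels[rng.integers(0, K, (P, N))]
W = patterns.T @ patterns.conj() / N           # Hermitian correlation storage
np.fill_diagonal(W, 0.0)

def quantize(h):
    """Snap each local field to the nearest K-th root of unity."""
    idx = np.round(np.angle(h) / (2 * np.pi / K)).astype(int) % K
    return levels[idx]

s = patterns[0].copy()
bad = rng.choice(N, 15, replace=False)
s[bad] = levels[rng.integers(0, K, 15)]        # corrupt 15 "pixels"
for _ in range(10):
    s = quantize(W @ s)                        # synchronous recall
print("fraction recovered:", np.mean(s == patterns[0]))
```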

16.
An iterative constrained inversion technique is used to find the control inputs to the plant. That is, rather than training a controller network and placing this network directly in the feedback or feedforward paths, the forward model of the plant is learned, and iterative inversion is performed on line to generate control commands. The control approach allows the controllers to respond online to changes in the plant dynamics. This approach also attempts to avoid the difficulty of analysis introduced by most current neural network controllers, which place the highly nonlinear neural network directly in the feedback path. A neural network-based model reference adaptive controller is also proposed for systems having significant dynamics between the control inputs and the observed (or desired) outputs and is demonstrated on a simple linear control system. These results are interpreted in terms of the need for a dither signal for on-line identification of dynamic systems.
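A conceptual sketch of the iterative-inversion idea, with a known scalar function standing in for the learned plant model (the actual approach inverts a trained network online; everything below is an illustrative assumption):

```python
# Invert a forward model f by iterating on the control input u.
import numpy as np

def f(u):                                       # stand-in for a learned plant model
    return np.tanh(1.5 * u) + 0.1 * u

target, u = 0.7, 0.0
for _ in range(100):
    err = f(u) - target
    grad = (f(u + 1e-5) - f(u - 1e-5)) / 2e-5   # local sensitivity of the model
    u -= 0.5 * err * grad / (grad * grad + 1e-8)  # damped Newton-style step
print("u =", round(float(u), 4), " f(u) =", round(float(f(u)), 4))
```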

17.
Attractor networks have been one of the most successful paradigms in neural computation, and have been used as models of computation in the nervous system. Recently, we proposed a paradigm called 'latent attractors' where attractors embedded in a recurrent network via Hebbian learning are used to channel network response to external input rather than becoming manifest themselves. This allows the network to generate context-sensitive internal codes in complex situations. Latent attractors are particularly helpful in explaining computations within the hippocampus, a brain region of fundamental significance for memory and spatial learning. Latent attractor networks are a special case of associative memory networks. The model studied here consists of a two-layer recurrent network with attractors stored in the recurrent connections using a clipped Hebbian learning rule. The firing in both layers is competitive (K-winners-take-all firing). The number of neurons allowed to fire, K, is smaller than the size of the active set of the stored attractors. The performance of latent attractor networks depends on the number of such attractors that a network can sustain. In this paper, we use signal-to-noise methods developed for standard associative memory networks to do a theoretical and computational analysis of the capacity and dynamics of latent attractor networks. This is an important first step in making latent attractors a viable tool in the repertoire of neural computation. The method developed here leads to numerical estimates of capacity limits and dynamics of latent attractor networks. The technique represents a general approach to analyze standard associative memory networks with competitive firing. The theoretical analysis is based on estimates of the dendritic sum distributions using a Gaussian approximation. Because of the competitive firing property, the capacity results are estimated only numerically by iteratively computing the probability of erroneous firings. The analysis contains two cases: the simple-case analysis, which accounts for the correlations between weights due to shared patterns, and the detailed-case analysis, which also includes the temporal correlations between the network's present and previous state. The latter case better predicts the dynamics of the network state for non-zero initial spurious firing. The theoretical analysis also shows the influence of the main parameters of the model on the storage capacity.
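The competitive firing rule is easy to state in code; here is a minimal K-winners-take-all step (sizes illustrative):

```python
# K-winners-take-all: only the K units with the largest dendritic sums fire.
import numpy as np

def kwta(h, K):
    out = np.zeros_like(h)
    out[np.argpartition(h, -K)[-K:]] = 1.0     # indices of the K largest sums
    return out

rng = np.random.default_rng(7)
print(kwta(rng.normal(size=12), K=3))
```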

18.
This letter aims at studying the impact of iterative Hebbian learning algorithms on the recurrent neural network's underlying dynamics. First, an iterative supervised learning algorithm is discussed. An essential improvement of this algorithm consists of indexing the attractor information items by means of external stimuli rather than by using only initial conditions, as Hopfield originally proposed. Modifying the stimuli mainly results in a change of the entire internal dynamics, leading to an enlargement of the set of attractors and potential memory bags. The impact of the learning on the network's dynamics is the following: the more information to be stored as limit cycle attractors of the neural network, the more chaos prevails as the background dynamical regime of the network. In fact, the background chaos spreads widely and adopts a very unstructured shape similar to white noise. Next, we introduce a new form of supervised learning that is more plausible from a biological point of view: the network has to learn to react to an external stimulus by cycling through a sequence that is no longer specified a priori. Based on its spontaneous dynamics, the network decides "on its own" the dynamical patterns to be associated with the stimuli. Compared with classical supervised learning, huge enhancements in storing capacity and computational cost have been observed. Moreover, this new form of supervised learning, by being more "respectful" of the network intrinsic dynamics, maintains much more structure in the obtained chaos. It is still possible to observe the traces of the learned attractors in the chaotic regime. This complex but still very informative regime is referred to as "frustrated chaos."

19.
We analyze a neural network implementation for puck state prediction in robotic air hockey. Unlike previous prediction schemes which used simple dynamic models and continuously updated an intercept state estimate, the neural network predictor uses a complex function, computed with data acquired from various puck trajectories, and makes a single, timely estimate of the final intercept state. Theoretically, the network can account for the complete dynamics of the table if all important state parameters are included as inputs, an accurate data training set of trajectories is used, and the network has an adequate number of internal nodes. To develop our neural networks, we acquired data from 1500 no-bounce and 1500 one-bounce puck trajectories, noting only translational state information. Analysis showed that performance of neural networks designed to predict the results of no-bounce trajectories was better than the performance of neural networks designed for one-bounce trajectories. Since our neural network input parameters did not include rotational puck estimates and recent work shows the importance of spin in impact analysis, we infer that adding a spin input to the neural network will increase the effectiveness of state estimates for the one-bounce case.

20.
We introduce a novel type of neural network, termed the parallel Hopfield network, that can simultaneously effect the dynamics of many different, independent Hopfield networks in parallel in the same piece of neural hardware. Numerically we find that under certain conditions, each Hopfield subnetwork has a finite memory capacity approaching that of the equivalent isolated attractor network, while a simple signal-to-noise analysis sheds qualitative, and some quantitative, insight into the workings (and failures) of the system.
