Similar Documents
20 similar documents found (search time: 906 ms)
1.
Daw ND  Touretzky DS 《Neural computation》2002,14(11):2567-2583
This article addresses the relationship between long-term reward predictions and slow-timescale neural activity in temporal difference (TD) models of the dopamine system. Such models attempt to explain how the activity of dopamine (DA) neurons relates to errors in the prediction of future rewards. Previous models have been mostly restricted to short-term predictions of rewards expected during a single, somewhat artificially defined trial. Also, the models focused exclusively on the phasic pause-and-burst activity of primate DA neurons; the neurons' slower, tonic background activity was assumed to be constant. This has led to difficulty in explaining the results of neurochemical experiments that measure indications of DA release on a slow timescale, results that seem at first glance inconsistent with a reward prediction model. In this article, we investigate a TD model of DA activity modified so as to enable it to make longer-term predictions about rewards expected far in the future. We show that these predictions manifest themselves as slow changes in the baseline error signal, which we associate with tonic DA activity. Using this model, we make new predictions about the behavior of the DA system in a number of experimental situations. Some of these predictions suggest new computational explanations for previously puzzling data, such as indications from microdialysis studies of elevated DA activity triggered by aversive events.  相似文献   
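The core computational idea here — a TD error whose slowly varying baseline tracks long-run reward expectations — can be illustrated with an average-reward (differential) TD update. The sketch below is a minimal illustration of that idea, not the authors' exact model; treating the running reward-rate estimate `rho` as the tonic analogue and `delta` as the phasic analogue is the assumption being illustrated.

```python
import numpy as np

def average_reward_td_step(V, rho, s, r, s_next, alpha=0.1, beta=0.01):
    """One step of average-reward (differential) TD learning.

    delta stands in for the phasic prediction-error signal, while rho, the
    long-run reward-rate estimate, changes on a much slower timescale and
    stands in for the tonic baseline in this illustration.
    """
    delta = r - rho + V[s_next] - V[s]   # TD error relative to the reward rate
    V[s] += alpha * delta                # update the differential value of s
    rho += beta * delta                  # slow update of the long-run reward rate
    return V, rho, delta

# Tiny usage example on a hypothetical 3-state ring with reward entering state 2.
V, rho = np.zeros(3), 0.0
s = 0
for t in range(1000):
    s_next = (s + 1) % 3
    r = 1.0 if s_next == 2 else 0.0
    V, rho, delta = average_reward_td_step(V, rho, s, r, s_next)
    s = s_next
```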

2.
The notion of prediction error has established itself at the heart of formal models of animal learning and current hypotheses of dopamine function. Several interpretations of prediction error have been offered, including the model-free reinforcement learning method known as temporal difference learning (TD), and the important Rescorla-Wagner (RW) learning rule. Here, we present a model-based adaptation of these ideas that provides a good account of empirical data pertaining to dopamine neuron firing patterns and associative learning paradigms such as latent inhibition, Kamin blocking and overshadowing. Our departure from model-free reinforcement learning also offers: 1) a parsimonious distinction between tonic and phasic dopamine functions; 2) a potential generalization of the role of phasic dopamine from valence-dependent "reward" processing to valence-independent "salience" processing; 3) an explanation for the selectivity of certain dopamine manipulations on motivation for distal rewards; and 4) a plausible link between formal notions of prediction error and accounts of disturbances of thought in schizophrenia (in which dopamine dysfunction is strongly implicated). The model distinguishes itself from existing accounts by offering novel predictions pertaining to the firing of dopamine neurons in various untested behavioral scenarios.  相似文献   

3.
To develop a nonverbal communication channel between an operator and a system, we built a tracking system called the Adaptive Visual Attentive Tracker (AVAT) that tracks and zooms in on the operator's behavioral sequence, which represents his or her intention. In our system, hidden Markov models (HMMs) first roughly model the gesture pattern. The state-transition probabilities of the HMMs are then used as rewards in temporal difference (TD) learning, and TD learning is used to adjust the tracker's action model for its situated behaviors in the tracking task. Identification of the hand-sign gesture context through wavelet analysis autonomously provides a reward value for optimizing AVAT's action patterns. Experimental results show that the tracker follows the operator's hand-sign action sequences during her natural walking motion with higher accuracy, demonstrating the effectiveness of the proposed HMM-based TD learning algorithm in AVAT. During the TD learning experiments, randomly chosen exploratory actions sometimes exceed the predefined state area and thus involuntarily enlarge the domain of states. We describe a method that uses HMMs with continuous observation distributions to detect whether a state should be split to create a new state; generating new states makes it possible to enlarge the predefined state area.  相似文献

4.
Although the responses of dopamine neurons in the primate midbrain are well characterized as carrying a temporal difference (TD) error signal for reward prediction, existing theories do not offer a credible account of how the brain keeps track of past sensory events that may be relevant to predicting future reward. Empirically, these shortcomings of previous theories are particularly evident in their account of experiments in which animals were exposed to variation in the timing of events. The original theories mispredicted the results of such experiments due to their use of a representational device called a tapped delay line. Here we propose that a richer understanding of history representation and a better account of these experiments can be given by considering TD algorithms for a formal setting that incorporates two features not originally considered in theories of the dopaminergic response: partial observability (a distinction between the animal's sensory experience and the true underlying state of the world) and semi-Markov dynamics (an explicit account of variation in the intervals between events). The new theory situates the dopaminergic system in a richer functional and anatomical context, since it assumes (in accord with recent computational theories of cortex) that problems of partial observability and stimulus history are solved in sensory cortex using statistical modeling and inference and that the TD system predicts reward using the results of this inference rather than raw sensory data. It also accounts for a range of experimental data, including the experiments involving programmed temporal variability and other previously unmodeled dopaminergic response phenomena, which we suggest are related to subjective noise in animals' interval timing. Finally, it offers new experimental predictions and a rich theoretical framework for designing future experiments.  相似文献   

5.
Anticipatory neural activity preceding behaviorally important events has been reported in cortex, striatum, and midbrain dopamine neurons. Whereas dopamine neurons are phasically activated by reward-predictive stimuli, anticipatory activity of cortical and striatal neurons is increased during delay periods before important events. Characteristics of dopamine neuron activity resemble those of the prediction error signal of the temporal difference (TD) model of Pavlovian learning (Sutton & Barto, 1990). This study demonstrates that the prediction signal of the TD model reproduces characteristics of cortical and striatal anticipatory neural activity. This finding suggests that tonic anticipatory activities may reflect prediction signals that are involved in the processing of dopamine neuron activity.  相似文献   
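A minimal sketch of the underlying computation — assuming the standard tapped-delay-line (complete serial compound) TD model of Pavlovian learning rather than the study's exact simulations — shows how the learned prediction V(t) ramps up between a conditioned stimulus and reward; this ramping prediction is the signal the article relates to anticipatory cortical and striatal activity. The trial timing and parameters below are illustrative assumptions.

```python
import numpy as np

T, t_cs, t_rew = 30, 5, 20        # trial length, CS onset, reward time (hypothetical)
gamma, alpha, n_trials = 0.98, 0.3, 300
w = np.zeros(T)                   # one weight per delay-line tap

def features(t):
    """Complete-serial-compound representation: tap k is active k steps after CS onset."""
    x = np.zeros(T)
    if t >= t_cs:
        x[t - t_cs] = 1.0
    return x

for trial in range(n_trials):
    x_prev = features(0)
    for t in range(1, T):
        x = features(t)
        r = 1.0 if t == t_rew else 0.0
        delta = r + gamma * w @ x - w @ x_prev   # TD error (phasic signal analogue)
        w += alpha * delta * x_prev              # learn the reward prediction
        x_prev = x

# After learning, V(t) = w @ features(t) ramps up from CS onset toward the
# reward time -- the anticipatory prediction signal discussed in the text.
V = np.array([w @ features(t) for t in range(T)])
```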

6.
This paper presents a neural architecture for learning category nodes encoding mappings across multimodal patterns involving sensory inputs, actions, and rewards. By integrating adaptive resonance theory (ART) and temporal difference (TD) methods, the proposed neural model, called TD fusion architecture for learning, cognition, and navigation (TD-FALCON), enables an autonomous agent to adapt and function in a dynamic environment with immediate as well as delayed evaluative feedback (reinforcement) signals. TD-FALCON learns the value functions of the state-action space estimated through on-policy and off-policy TD learning methods, specifically state-action-reward-state-action (SARSA) and Q-learning. The learned value functions are then used to determine the optimal actions based on an action selection policy. We have developed TD-FALCON systems using various TD learning strategies and compared their performance in terms of task completion, learning speed, and time and space efficiency. Experiments based on a minefield navigation task have shown that TD-FALCON systems are able to learn effectively with both immediate and delayed reinforcement and achieve stable performance at a pace much faster than standard gradient-descent-based reinforcement learning systems.  相似文献
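The two TD methods named in the abstract differ only in their bootstrap target. Below is a minimal tabular sketch of the on-policy (SARSA) and off-policy (Q-learning) updates together with a simple action-selection policy — generic textbook forms, not the ART-based TD-FALCON architecture itself.

```python
import numpy as np

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.9):
    """On-policy TD update: bootstrap on the action actually selected next."""
    target = r + gamma * Q[s_next, a_next]
    Q[s, a] += alpha * (target - Q[s, a])

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Off-policy TD update: bootstrap on the greedy action in the next state."""
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])

def epsilon_greedy(Q, s, epsilon=0.1):
    """A common action-selection policy applied to the learned value function."""
    if np.random.rand() < epsilon:
        return np.random.randint(Q.shape[1])
    return int(np.argmax(Q[s]))
```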

7.
Signals from real-world, naturally complex dynamical systems such as traffic flow are usually characterized by irregular motions. Chaotic nonlinear dynamics is now a powerful tool for dealing with such real-world complexity, and neural networks and neuro-fuzzy models are widely used because they model chaotic nonlinear systems better than traditional methods. Traffic flow conditions, however, tend to make traffic-flow forecasts lack robustness and accuracy. In this paper, traffic flow forecasting is analyzed from the points of view of emotional concepts and multi-agent systems (MASs) as a new method in this field. On this basis, the researchers developed an object-oriented method for forecasting traffic flow whose architecture is based on temporal difference (TD) Q-learning with a neuro-fuzzy structure, a nonparametric approach; the performance of TD Q-learning is further improved by emotional learning. Given the present conditions and the actions of the system according to the criteria, the proposed method can forecast traffic signals so that the objectives are reached in minimum time. The learning algorithm's ability to anticipate gains from future actions and to exploit rewards from past experience allows the emotional TD Q-learning algorithm to improve its decisions toward the best possible actions. In addition, to study a more practical situation, the neuro-fuzzy behaviors can be modeled by an MAS. The proposed intelligent, nonparametric approach is compared with a parametric approach, the autoregressive integrated moving average (ARIMA) method, implemented here with multi-layer perceptron neural networks and called ARIMANN; the ARIMANN is updated by backpropagation and, for the first time, by temporal difference backpropagation. The simulation results reveal that the studied forecaster can discover the optimal forecast by means of the Q-learning algorithm. The real traffic-flow signals used for fitting the algorithms, which are difficult to handle with parametric and classical methods, were obtained from the two-lane street I-494 in Minnesota City.  相似文献

8.
陈学松  刘富春 《控制与决策》2013,28(12):1889-1893

An optimal control method based on reinforcement learning is proposed for a class of nonlinear uncertain dynamic systems. The method uses an Euler reinforcement learning algorithm to estimate the unknown nonlinear functions of the plant, and online learning rules are given for iterating the reward function and the policy function in reinforcement learning. By discretizing the temporal-difference error during learning with a forward Euler difference formula, the value function is estimated and the control policy is improved. Based on the gradient of the value function and the temporal-difference error index, the steps of the algorithm and an error-estimation theorem are given. Simulation results on the mountain-car problem demonstrate the effectiveness of the proposed method.

  相似文献   
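A minimal sketch of the discretization step described above, assuming a continuous-time TD error in the style of continuous-time reinforcement learning whose value-derivative term is replaced by a forward Euler difference; the function names, the time constant tau, and the simple value update are illustrative assumptions, not the authors' exact formulation.

```python
def td_error_forward_euler(v, v_next, r, dt, tau=1.0):
    """Continuous-time TD error with dV/dt approximated by a forward Euler
    difference: delta = r - V/tau + (V(t+dt) - V(t)) / dt."""
    return r - v / tau + (v_next - v) / dt

def value_update(v, v_next, r, dt, alpha=0.05, tau=1.0):
    """Move the value estimate along the discretized TD error
    (illustrative update, not the paper's exact learning rule)."""
    delta = td_error_forward_euler(v, v_next, r, dt, tau)
    return v + alpha * delta, delta
```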

9.
The successor representation was introduced into reinforcement learning by Dayan ( 1993 ) as a means of facilitating generalization between states with similar successors. Although reinforcement learning in general has been used extensively as a model of psychological and neural processes, the psychological validity of the successor representation has yet to be explored. An interesting possibility is that the successor representation can be used not only for reinforcement learning but for episodic learning as well. Our main contribution is to show that a variant of the temporal context model (TCM; Howard & Kahana, 2002 ), an influential model of episodic memory, can be understood as directly estimating the successor representation using the temporal difference learning algorithm (Sutton & Barto, 1998 ). This insight leads to a generalization of TCM and new experimental predictions. In addition to casting a new normative light on TCM, this equivalence suggests a previously unexplored point of contact between different learning systems.  相似文献   
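The equivalence rests on the standard TD rule for estimating the successor representation (Dayan, 1993). A minimal tabular sketch of that rule — not the TCM-specific parameterization derived in the paper — is:

```python
import numpy as np

def sr_td_update(M, s, s_next, alpha=0.1, gamma=0.9):
    """One TD(0) update of the successor matrix M, where M[s, j] estimates
    the expected discounted future occupancy of state j starting from s."""
    target = np.eye(M.shape[0])[s] + gamma * M[s_next]
    M[s] += alpha * (target - M[s])
    return M

# Given a reward vector r over states, values follow as V = M @ r.
```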

10.
We propose a principled way to construct an internal representation of the temporal stimulus history leading up to the present moment. A set of leaky integrators performs a Laplace transform on the stimulus function, and a linear operator approximates the inversion of the Laplace transform. The result is a representation of stimulus history that retains information about the temporal sequence of stimuli. This procedure naturally represents more recent stimuli more accurately than less recent stimuli; the decrement in accuracy is precisely scale invariant. This procedure also yields time cells that fire at specific latencies following the stimulus with a scale-invariant temporal spread. Combined with a simple associative memory, this representation gives rise to a moment-to-moment prediction that is also scale invariant in time. We propose that this scale-invariant representation of temporal stimulus history could serve as an underlying representation accessible to higher-level behavioral and cognitive mechanisms. In order to illustrate the potential utility of this scale-invariant representation in a variety of fields, we sketch applications using minimal performance functions to problems in classical conditioning, interval timing, scale-invariant learning in autoshaping, and the persistence of the recency effect in episodic memory across timescales.  相似文献   
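A minimal sketch of the encoding stage, assuming a bank of leaky integrators with log-spaced decay rates that performs a running Laplace transform of the stimulus function; the inversion stage that yields the scale-invariant time cells is a fixed linear operator applied across the rate axis and is only indicated here. The rates, time step, and stimulus are illustrative assumptions.

```python
import numpy as np

dt = 0.01
rates = np.geomspace(0.1, 50.0, 50)     # log-spaced decay rates s
F = np.zeros_like(rates)                # Laplace-domain state F(s, t)

def update_laplace(F, f_t, rates, dt):
    """One Euler step of dF/dt = -s * F + f(t) for every leaky integrator."""
    return F + dt * (-rates * F + f_t)

# Drive the bank with a brief stimulus pulse and let the traces decay.
for step in range(2000):
    f_t = 1.0 if step < 10 else 0.0
    F = update_laplace(F, f_t, rates, dt)

# An approximate inverse Laplace transform (e.g., a Post-style approximation,
# implemented as a k-th numerical derivative with respect to s) maps F back to
# a fuzzy, scale-invariant estimate of how long ago the stimulus occurred.
```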

11.
Linear Least-Squares Algorithms for Temporal Difference Learning
We introduce two new temporal difference (TD) algorithms based on the theory of linear least-squares function approximation. We define an algorithm we call Least-Squares TD (LS TD) for which we prove probability-one convergence when it is used with a function approximator linear in the adjustable parameters. We then define a recursive version of this algorithm, Recursive Least-Squares TD (RLS TD). Although these new TD algorithms require more computation per time-step than do Sutton's TD(λ) algorithms, they are more efficient in a statistical sense because they extract more information from training experiences. We describe a simulation experiment showing the substantial improvement in learning rate achieved by RLS TD in an example Markov prediction problem. To quantify this improvement, we introduce the TD error variance of a Markov chain, σ_TD, and experimentally conclude that the convergence rate of a TD algorithm depends linearly on σ_TD. In addition to converging more rapidly, LS TD and RLS TD do not have control parameters, such as a learning rate parameter, thus eliminating the possibility of achieving poor performance by an unlucky choice of parameters.  相似文献
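For linear value functions V(s) ≈ θᵀφ(s), the batch form of LS TD amounts to accumulating a small linear system from the training transitions and solving it once; a minimal sketch is below (written with a discount factor gamma, where the undiscounted absorbing-chain setting of the paper corresponds to gamma = 1; the small ridge term is an added numerical safeguard, not part of the original algorithm).

```python
import numpy as np

def lstd(transitions, phi, n_features, gamma=1.0, ridge=1e-6):
    """Batch Least-Squares TD: accumulate A and b from (s, r, s_next)
    transitions and solve A theta = b for the value-function weights.
    phi(s) returns the feature vector of state s; terminal states should
    map to the zero feature vector."""
    A = ridge * np.eye(n_features)       # small ridge term for invertibility
    b = np.zeros(n_features)
    for s, r, s_next in transitions:
        x, x_next = phi(s), phi(s_next)
        A += np.outer(x, x - gamma * x_next)
        b += x * r
    return np.linalg.solve(A, b)
```

The recursive variant (RLS TD) maintains the inverse of A incrementally (e.g., via a Sherman-Morrison-style rank-one update) so that the weights can be refreshed after every transition instead of in one batch solve.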

12.
This paper proposes a TD (temporal difference) and GA (genetic algorithm)-based reinforcement (TDGAR) learning method and applies it to the control of a real magnetic bearing system. The TDGAR learning scheme is a new hybrid GA, which integrates the TD prediction method and the GA to perform the reinforcement learning task. The TDGAR learning system is composed of two integrated feedforward networks. One neural network acts as a critic network to guide the learning of the other network (the action network) which determines the outputs (actions) of the TDGAR learning system. The action network can be a normal neural network or a neural fuzzy network. Using the TD prediction method, the critic network can predict the external reinforcement signal and provide a more informative internal reinforcement signal to the action network. The action network uses the GA to adapt itself according to the internal reinforcement signal. The key concept of the TDGAR learning scheme is to formulate the internal reinforcement signal as the fitness function for the GA such that the GA can evaluate the candidate solutions (chromosomes) regularly, even during periods without external feedback from the environment. This enables the GA to proceed to new generations regularly without waiting for the arrival of the external reinforcement signal. This can usually accelerate the GA learning since a reinforcement signal may only be available at a time long after a sequence of actions has occurred in the reinforcement learning problem. The proposed TDGAR learning system has been used to control an active magnetic bearing (AMB) system in practice. A systematic design procedure is developed to achieve successful integration of all the subsystems including magnetic suspension, mechanical structure, and controller training. The results show that the TDGAR learning scheme can successfully find a neural controller or a neural fuzzy controller for a self-designed magnetic bearing system.  相似文献   

13.
Clustering uncertain trajectories
Knowledge discovery in Trajectory Databases (TD) is an emerging field which has recently gained great interest. On the other hand, the inherent presence of uncertainty in TD (e.g., due to GPS errors) has not yet been taken into account during the mining process. In this paper, we study the effect of uncertainty in TD clustering and introduce a three-step approach to deal with it. First, we propose an intuitionistic point vector representation of trajectories that encompasses the underlying uncertainty and introduce an effective distance metric to cope with uncertainty. Second, we devise CenTra, a novel algorithm which tackles the problem of discovering the Centroid Trajectory of a group of movements by taking advantage of the local similarity between portions of trajectories. Third, we propose a variant of the Fuzzy C-Means (FCM) clustering algorithm, which embodies CenTra in its update procedure. Finally, we relax the vector representation of the Centroid Trajectories by introducing an algorithm that post-processes them, thus providing these mobility patterns to the analyst in a more intuitive representation. The experimental evaluation over synthetic and real world TD demonstrates the efficiency and effectiveness of our approach.  相似文献
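The third step builds on the standard Fuzzy C-Means iteration, replacing its centroid computation with CenTra and its distance with the uncertainty-aware trajectory metric. For reference, a minimal sketch of the generic FCM membership/centroid updates that the variant modifies (plain Euclidean row vectors here, not the paper's trajectory representation) is:

```python
import numpy as np

def fcm(X, k, m=2.0, iters=100, eps=1e-9):
    """Standard Fuzzy C-Means on row vectors X; returns memberships U and centers C.
    The paper's variant would swap the centroid step for CenTra and the distance
    for its uncertainty-aware trajectory metric."""
    n = X.shape[0]
    U = np.random.dirichlet(np.ones(k), size=n)          # fuzzy memberships, rows sum to 1
    for _ in range(iters):
        W = U ** m
        C = (W.T @ X) / (W.sum(axis=0)[:, None] + eps)   # membership-weighted centroids
        D = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) + eps
        U = 1.0 / (D ** (2.0 / (m - 1)))                 # inverse-distance weighting
        U /= U.sum(axis=1, keepdims=True)                # renormalize per point
    return U, C
```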

14.
A common approach to learning from delayed rewards is to use temporal difference (TD) methods for predicting future reinforcement values. They are parameterized by a recency factor λ which determines whether and how the outcomes from several consecutive time steps contribute to a single prediction update. TD(λ > 0) has been found to usually yield noticeably faster learning than TD(0), but its standard eligibility traces implementation is associated with some well known deficiencies, in particular significantly increased computation expense. This article investigates theoretically two possible ways of implementing TD(λ) without eligibility traces, both proposed by prior work. One is the TTD procedure, which efficiently approximates the effects of eligibility traces by the use of truncated TD(λ) returns. The other is experience replay, which relies on replaying TD prediction updates backwards in time. We provide novel theoretical results related to the former and present an original analysis of the effects of two variations of the latter. The ultimate effect of these investigations is a unified view of the apparently different computational techniques. This contributes to the TD(λ) research in general, by highlighting interesting relationships between several TD-based algorithms and facilitating their further analysis.  相似文献   
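A minimal sketch of the truncation idea behind the TTD procedure — computing an m-step truncated TD(λ) return backward over a short buffer and using it as the update target — is shown below. This is a generic illustration of truncated λ-returns, not the exact procedure analyzed in the article.

```python
def truncated_lambda_return(rewards, values, gamma=0.95, lam=0.8):
    """m-step truncated TD(lambda) return for state s_t.

    rewards[k] = r_{t+k+1} and values[k] = V(s_{t+k+1}) for k = 0..m-1;
    the return is truncated by bootstrapping on values[-1] = V(s_{t+m})."""
    g = values[-1]                         # truncation: bootstrap at the horizon
    for r, v in zip(reversed(rewards), reversed(values)):
        g = r + gamma * ((1.0 - lam) * v + lam * g)
    return g

# The prediction for s_t is then nudged toward the truncated return:
#   V[s_t] += alpha * (truncated_lambda_return(...) - V[s_t])
```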

15.
戴帅  殷苌茗  张欣 《计算机工程》2009,35(13):190-192
A new factored TD(λ) algorithm is proposed. Its basic idea is a factored representation of states: the state-transition probability function of the Markov decision process (MDP) is represented by a dynamic Bayesian network, and the state-value function in the TD(λ) algorithm is represented by a decision tree. This reduces the search and computational complexity over the state space, making the approach suitable for solving MDPs with large state spaces. Experiments show that the representation is effective.  相似文献
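For reference, the tabular TD(λ) update that the factored representation is meant to scale up is the standard eligibility-trace rule sketched below; the flat value table stands in for the decision-tree value function, and `env_step`/`reset` are assumed environment callables, so this is a generic baseline rather than the paper's factored algorithm.

```python
import numpy as np

def td_lambda(env_step, reset, n_states, alpha=0.1, gamma=0.95, lam=0.8, n_steps=1000):
    """Tabular TD(lambda) prediction with accumulating eligibility traces."""
    V = np.zeros(n_states)
    e = np.zeros(n_states)
    s = reset()
    for _ in range(n_steps):
        s_next, r, done = env_step(s)
        delta = r + (0.0 if done else gamma * V[s_next]) - V[s]
        e[s] += 1.0                      # accumulate the trace of the visited state
        V += alpha * delta * e           # credit all recently visited states
        e *= gamma * lam                 # decay every trace
        if done:
            s, e = reset(), np.zeros(n_states)
        else:
            s = s_next
    return V
```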

16.
Chinese spelling correction is the task of detecting and correcting spelling errors in text. Most Chinese spelling errors are misuses of characters that are similar in semantics, pronunciation, or glyph shape, so a common practice is to extract and model features from these different modalities. However, directly fusing the features, or summing them with fixed weights, ignores the relative importance of the different modalities and biases the model when identifying errors, preventing it from learning effectively. To address this, a new model is proposed: a Chinese error-correction algorithm that fuses a text-sequence error probability with a Chinese spelling-error probability. The method uses the text-sequence error probability as a dynamic weight and the probability of common Chinese spelling errors as a fixed weight, fusing semantic, phonetic, and glyph information efficiently. The model can appropriately control how information from each modality flows into the mixed-modality representation and focuses learning on the positions where errors occur. Experiments on the SIGHAN benchmark show that the proposed model improves all evaluation scores across different datasets, confirming the feasibility of the algorithm.  相似文献
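A minimal sketch of the fusion idea as described — a per-position dynamic weight derived from the text-sequence error probability and a fixed weight for the confusion-based channels, gating how the phonetic and glyph distributions mix with the semantic distribution. The variable names and the exact mixing form are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fuse_channels(p_semantic, p_phonetic, p_glyph, p_err, w_conf=0.3):
    """Fuse per-character candidate distributions from three modalities.

    p_err is the dynamic weight (probability that this position contains an
    error); w_conf is a fixed weight reflecting common spelling-error statistics.
    """
    confusion_mix = (1.0 - w_conf) * p_phonetic + w_conf * p_glyph
    mixed = (1.0 - p_err) * p_semantic + p_err * confusion_mix
    return mixed / mixed.sum()

# Hypothetical usage over a 4-candidate vocabulary at one error-prone position.
p_sem = np.array([0.5, 0.2, 0.2, 0.1])
p_pho = np.array([0.1, 0.6, 0.2, 0.1])
p_gly = np.array([0.1, 0.3, 0.5, 0.1])
print(fuse_channels(p_sem, p_pho, p_gly, p_err=0.8))
```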

17.
This research treats a bargaining process as a Markov decision process, in which a bargaining agent’s goal is to learn the optimal policy that maximizes the total rewards it receives over the process. Reinforcement learning is an effective method for agents to learn how to determine actions for any time steps in a Markov decision process. Temporal-difference (TD) learning is a fundamental method for solving the reinforcement learning problem, and it can tackle the temporal credit assignment problem. This research designs agents that apply TD-based reinforcement learning to deal with online bilateral bargaining with incomplete information. This research further evaluates the agents’ bargaining performance in terms of the average payoff and settlement rate. The results show that agents using TD-based reinforcement learning are able to achieve good bargaining performance. This learning approach is sufficiently robust and convenient, hence it is suitable for online automated bargaining in electronic commerce.  相似文献   

18.
Temporal difference (TD) methods are used by reinforcement learning algorithms for predicting future rewards. This article analyzes theoretically and illustrates experimentally the effects of performing TD(λ) prediction updates backwards for a number of past experiences. More exactly, two related techniques described in the literature are examined, referred to as replayed TD and backwards TD. The former is essentially an online learning method which performs at each time step a regular TD(0) update and then replays updates backwards for a number of previous states. The latter operates in offline mode, updating the predictions for all visited states backwards after the end of a trial. Both are shown to be approximately equivalent to TD(λ) with variable λ values selected in a particular way, even though they perform only TD(0) updates. The experimental results show that replayed TD(0) is competitive with TD(λ) with regard to learning speed and quality.  相似文献
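A minimal tabular sketch of the two procedures under comparison is shown below; the function and parameter names (e.g. the replay depth) are illustrative, not taken from the article.

```python
def replayed_td0_step(V, history, alpha=0.1, gamma=0.95, depth=5):
    """Online 'replayed TD': a regular TD(0) update for the newest transition,
    then TD(0) updates replayed backwards over the preceding transitions.
    history is a list of (s, r, s_next) with the newest transition last."""
    for s, r, s_next in reversed(history[-depth:]):
        V[s] += alpha * (r + gamma * V[s_next] - V[s])
    return V

def backwards_td_trial(V, trial, alpha=0.1, gamma=0.95):
    """Offline 'backwards TD': after the trial ends, sweep once backwards over
    every transition of the trial, updating the predictions as you go."""
    for s, r, s_next in reversed(trial):
        V[s] += alpha * (r + gamma * V[s_next] - V[s])
    return V
```

Sweeping backwards lets each update see the freshly updated value of its successor state, which is what makes these TD(0)-style updates behave like TD(λ) with an implicitly chosen λ.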

19.
The ability to analyze the effectiveness of agent reward structures is critical to the successful design of multiagent learning algorithms. Though final system performance is the best indicator of the suitability of a given reward structure, it is often preferable to analyze the reward properties that lead to good system behavior (i.e., properties promoting coordination among the agents and providing agents with strong signal to noise ratios). This step is particularly helpful in continuous, dynamic, stochastic domains ill-suited to simple table backup schemes commonly used in TD(λ)/Q-learning where the effectiveness of the reward structure is difficult to distinguish from the effectiveness of the chosen learning algorithm. In this paper, we present a new reward evaluation method that provides a visualization of the tradeoff between the level of coordination among the agents and the difficulty of the learning problem each agent faces. This method is independent of the learning algorithm and is only a function of the problem domain and the agents’ reward structure. We use this reward property visualization method to determine an effective reward without performing extensive simulations. We then test this method in both a static and a dynamic multi-rover learning domain where the agents have continuous state spaces and take noisy actions (e.g., the agents’ movement decisions are not always carried out properly). Our results show that in the more difficult dynamic domain, the reward efficiency visualization method provides a two order of magnitude speedup in selecting good rewards, compared to running a full simulation. In addition, this method facilitates the design and analysis of new rewards tailored to the observational limitations of the domain, providing rewards that combine the best properties of traditional rewards.  相似文献   

20.
The functional role of dopamine has attracted a great deal of interest ever since it was empirically discovered that dopamine-blocking drugs could be used to treat psychosis. Specifically, the D2 receptor and its expression in the ventral striatum have emerged as pivotal in our understanding of the complex role of the neuromodulator in schizophrenia, reward, and motivation. Our departure from the ubiquitous temporal difference (TD) model of dopamine neuron firing allows us to account for a range of experimental evidence suggesting that ventral striatal dopamine D2 receptor manipulation selectively modulates motivated behavior for distal versus proximal outcomes. Whether an internal model or the TD approach (or a mixture) is better suited to a comprehensive exposition of tonic and phasic dopamine will have important implications for our understanding of reward, motivation, schizophrenia, and impulsivity. We also use the model to help unite some of the leading cognitive hypotheses of dopamine function under a computational umbrella. We have used the model ourselves to stimulate and focus new rounds of experimental research.  相似文献   
