Similar Documents
20 similar documents found (search time: 0 ms)
1.
Deep reinforcement learning has made breakthrough progress on single-agent tasks. Because of the complexity of multi-agent systems, ordinary algorithms cannot overcome their main difficulties. Moreover, as the number of agents increases, taking the maximization of each agent's expected cumulative return as the learning objective often fails to converge, and some special convergence points do not correspond to reasonable policies. For practical problems without optimal solutions, reinforcement learning algorithms are even less applicable. Introducing game theory into reinforcement learning handles the relationships among agents well, explains why the policies at convergence points are reasonable, and, most importantly, allows equilibrium solutions to replace optimal solutions so that relatively effective policies can still be obtained. From a game-theoretic perspective, this paper therefore reviews reinforcement learning algorithms that have appeared in recent years, summarizes the key difficulties of current game-theoretic reinforcement learning algorithms, and points out several directions that may overcome them.

2.
This paper describes the basic principles and characteristics of reinforcement learning, discusses neural-network approximation of the evaluation function, and focuses on learning with multiple neural networks approximating the evaluation function, which realizes automatic decomposition of the state space or task and improves the generalization ability of the evaluation function. The networks are trained offline and then applied online as feedback controllers. Taking A-learning as an example, reinforcement learning is applied to missile guidance; simulation results demonstrate the promise and effectiveness of reinforcement learning for missile guidance and control problems.
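
As a hedged illustration of the value-function approximation this abstract describes, the following sketch trains an evaluation function offline with TD(0); the paper itself uses multiple neural networks, whereas the feature map, toy data format, and hyperparameters here are illustrative assumptions with a linear approximator for brevity.

```python
# Minimal sketch: offline TD(0) learning of an evaluation function with a
# linear approximator. The paper uses multiple neural networks; this is a
# simplified stand-in and the feature map is hypothetical.
import numpy as np

def features(state):
    # Hypothetical feature map: state, its square, and a bias term.
    return np.array([state, state**2, 1.0])

def td0_offline(trajectories, alpha=0.01, gamma=0.95):
    w = np.zeros(3)                      # weights of the linear evaluation function
    for traj in trajectories:            # offline pass over recorded episodes
        for (s, r, s_next) in traj:      # each step: state, reward, next state
            v, v_next = w @ features(s), w @ features(s_next)
            w += alpha * (r + gamma * v_next - v) * features(s)  # TD(0) update
    return w

# After offline training, V(s) = w @ features(s) can be evaluated online
# inside a feedback controller.
```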

3.
A Reinforcement Learning Scheme for a Partially-Observable Multi-Agent Game
We formulate an automatic strategy acquisition problem for the multi-agent card game Hearts as a reinforcement learning problem. The problem can approximately be dealt with in the framework of a partially observable Markov decision process (POMDP) for a single-agent system. Hearts is an example of imperfect information games, which are more difficult to deal with than perfect information games. A POMDP is a decision problem that includes a process for estimating unobservable state variables. By regarding missing information as unobservable state variables, an imperfect information game can be formulated as a POMDP. However, the game of Hearts is a realistic problem that has a huge number of possible states, even when it is approximated as a single-agent system. Therefore, further approximation is necessary to make the strategy acquisition problem tractable. This article presents an approximation method based on estimating unobservable state variables and predicting the actions of the other agents. Simulation results show that our reinforcement learning method is applicable to such a difficult multi-agent problem.
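
The belief-state estimation at the core of a POMDP formulation can be illustrated with a minimal Bayes filter; the transition model `T`, observation model `O`, and discrete state space below are toy stand-ins, not the paper's Hearts model.

```python
# Minimal sketch of the POMDP belief update:
#   b'(s') ∝ O(o | s', a) * sum_s T(s' | s, a) * b(s)
import numpy as np

def belief_update(b, a, o, T, O):
    """b: belief over states; T[a][s, s']: transition probs;
    O[a][s', o]: observation probs. All arrays are toy stand-ins."""
    predicted = b @ T[a]                 # predict the next-state distribution
    b_new = predicted * O[a][:, o]       # weight by the observation likelihood
    return b_new / b_new.sum()           # renormalize to a distribution
```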

4.
This paper studies reinforcement learning within the framework of the Markov game model and proposes a learning model and concrete algorithms for the class of complex problems exemplified by decision making in RoboCup simulated soccer teams. In experiments, goalkeeper decision making was implemented successfully and achieved good results, demonstrating the feasibility and effectiveness of the algorithm.

5.
Reinforcement Learning in the Multi-Robot Domain
This paper describes a formulation of reinforcement learning that enables learning in noisy, dynamic environments such as in the complex concurrent multi-robot learning domain. The methodology involves minimizing the learning space through the use of behaviors and conditions, and dealing with the credit assignment problem through shaped reinforcement in the form of heterogeneous reinforcement functions and progress estimators. We experimentally validate the approach on a group of four mobile robots learning a foraging task.
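
A minimal sketch of the shaping idea described here: a heterogeneous reward combining sparse task events with a progress estimator that pays small intermediate rewards for moving toward the goal. The signal names and weight are illustrative assumptions, not the paper's exact reinforcement functions.

```python
# Minimal sketch of shaped reinforcement with a progress estimator.
# event_reward: sparse reward for task events (e.g., delivering a puck);
# the distance-based progress term is a hypothetical estimator.
def shaped_reward(event_reward, prev_distance, curr_distance, w_progress=0.1):
    progress = prev_distance - curr_distance   # > 0 when the robot gets closer
    return event_reward + w_progress * progress
```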

6.
Learning, interaction, and their combination are key capabilities required to build robust, autonomous agents. Reinforcement learning is an important part of agent learning and comprises single-agent and multi-agent reinforcement learning. This article presents a comparative study of the two, contrasting them in terms of basic concepts, environment frameworks, learning objectives, and learning algorithms, points out their differences and connections, and discusses some of the open problems they face.

7.
Self-Balancing Control of a Two-Wheeled Robot Based on Reinforcement Learning Rules
A two-wheeled robot is a typical unstable, nonlinear, strongly coupled self-balancing system. Under the conditions that the robot's system model is unknown and no prior experience is available, this work effectively combines a reinforcement learning algorithm with a fuzzy neural network, guaranteeing fast and convergent function approximation, successfully realizing self-learning balance control of the robot, and solving the reinforcement learning problem over the robot's continuous state and action spaces. Simulations and experiments show that the method not only achieves balance control of the robot within a very short time, but also maintains balance when the robot's parameters vary considerably.
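
One common way to realize a fuzzy-network-plus-reinforcement-learning combination over a continuous state space is Gaussian membership features feeding a linear TD critic. The sketch below assumes a 1-D tilt angle and omits the actor, so it is an illustration under stated assumptions rather than the paper's controller.

```python
# Minimal sketch: fuzzy (Gaussian) basis functions over a continuous tilt
# angle feed a linear critic trained by temporal differences. Centers,
# widths, and hyperparameters are illustrative assumptions.
import numpy as np

CENTERS = np.linspace(-0.3, 0.3, 7)     # membership-function centers (rad)
SIGMA = 0.08                            # shared membership width

def fuzzy_features(angle):
    phi = np.exp(-((angle - CENTERS) ** 2) / (2 * SIGMA ** 2))
    return phi / phi.sum()               # normalized firing strengths

def critic_td_update(w, s, r, s_next, alpha=0.05, gamma=0.98):
    delta = r + gamma * w @ fuzzy_features(s_next) - w @ fuzzy_features(s)
    w = w + alpha * delta * fuzzy_features(s)
    return w, delta                      # delta would also train an actor
```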

8.
Combining game theory with multi-agent reinforcement learning to form game-theoretic reinforcement learning has attracted growing attention, but existing algorithms suffer from high computational complexity and cannot guarantee pure-strategy Nash equilibria. The Meta equilibrium Q-learning algorithm transforms the original game into a meta-game via reaction functions, and the meta-equilibrium derived from the meta-game is a pure-strategy Nash equilibrium. While guaranteeing a pure-strategy Nash equilibrium, the algorithm ensures that each agent's return is no lower than a given threshold. In addition, a fractal-based equilibrium-degree evaluation model can judge the steady state of any state by computing its fractal dimension and can measure the distance between any state and the equilibrium state; this model can verify the soundness and rationality of the meta-equilibrium. The conclusions about the algorithm and the model are validated concretely in a welfare game and in a control-point capture scenario.
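
The meta-equilibrium the algorithm guarantees is a pure-strategy Nash equilibrium. As a minimal illustration of the underlying equilibrium test (not of the reaction-function meta-game construction itself), the following sketch enumerates pure-strategy Nash equilibria of a two-player matrix game.

```python
# Minimal sketch: brute-force enumeration of pure-strategy Nash equilibria
# in a two-player matrix game. Purely illustrative; the paper's meta-game
# is built from reaction functions over the original game.
import numpy as np

def pure_nash(A, B):
    """A[i, j], B[i, j]: payoffs of players 1 and 2 for joint action (i, j)."""
    equilibria = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            # (i, j) is a pure Nash equilibrium iff no player gains by
            # deviating unilaterally.
            if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max():
                equilibria.append((i, j))
    return equilibria
```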

9.
To address the problems that the global credit assignment mechanism in cooperative multi-agent reinforcement learning struggles to capture complex cooperative relationships among agents and cannot effectively handle non-Markovian reward signals, this paper proposes an enhanced global credit assignment mechanism for cooperative multi-agent reinforcement learning. First, a new global credit assignment structure based on reward highway connections is designed, allowing each agent to consider both its allotted local reward signal and the team's global reward signal when making decisions. Second, a value-function estimation method that can adapt to non-Markovian rewards is proposed by fusing multi-step reward signals. Experimental results on multiple complex scenarios of the StarCraft micromanagement platform show that the proposed method not only achieves state-of-the-art performance but also greatly improves sample efficiency.
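
The fusion of multi-step reward signals can be illustrated with the standard n-step return target; this is a generic sketch rather than the paper's exact estimator.

```python
# Minimal sketch of an n-step return target for coping with delayed,
# non-Markovian reward signals: fold n observed rewards back onto a
# bootstrapped value estimate.
def n_step_target(rewards, v_boot, gamma=0.99):
    """rewards: [r_t, ..., r_{t+n-1}]; v_boot: estimated V(s_{t+n})."""
    g = v_boot
    for r in reversed(rewards):          # discount and accumulate back to time t
        g = r + gamma * g
    return g                             # regression target for V(s_t)
```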

10.
Multi-agent systems need to communicate to coordinate a shared task. We show that a recurrent neural network (RNN) can learn a communication protocol for coordination, even if the actions to coordinate are performed several steps after the communication phase. We show that a separation of tasks with different temporal scales is necessary for successful learning. We contribute a hierarchical deep reinforcement learning model for multi-agent systems that separates the communication and coordination task from action selection through a hierarchical policy. We further show that a separation of concerns in communication is beneficial but not necessary. As a testbed, we propose the Dungeon Lever Game, and we extend the Differentiable Inter-Agent Learning (DIAL) framework. We present and compare results from different model variations on the Dungeon Lever Game.

11.
The iterated prisoner's dilemma (IPD) is an ideal model for analyzing interactions between agents in complex networks. It has attracted wide interest in the development of novel strategies since the success of tit-for-tat in Axelrod's tournament. This paper studies a new adaptive strategy for the IPD in different complex networks, where agents can learn and adapt their strategies through a reinforcement learning method. A temporal-difference learning method is applied to design the adaptive strategy and optimize the agents' decision-making process. Previous studies indicated that mutual cooperation is hard to achieve in the IPD. Therefore, three examples based on square lattice and scale-free networks are provided to show two features of the adaptive strategy. First, mutual cooperation can be achieved by a group of adaptive agents in a scale-free network, and once evolution has converged to mutual cooperation, it is unlikely to shift. Second, the adaptive strategy earns a better payoff than other strategies in the square lattice network. The analytical properties are discussed to verify the evolutionary stability of the adaptive strategy.
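
A minimal sketch of a temporal-difference adaptive IPD strategy of the kind studied here: tabular Q-learning whose state is the previous joint action, with the standard prisoner's dilemma payoffs. The state encoding and parameters are illustrative assumptions.

```python
# Minimal sketch: a Q-learning IPD agent. Actions are 'C' (cooperate)
# and 'D' (defect); the state is assumed to be the last joint action.
import random

PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

class TDAgent:
    def __init__(self, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = {}                      # Q[(state, action)] estimates
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state):
        if random.random() < self.eps:
            return random.choice('CD')   # occasional exploration
        return max('CD', key=lambda a: self.q.get((state, a), 0.0))

    def learn(self, state, action, reward, next_state):
        best_next = max(self.q.get((next_state, a), 0.0) for a in 'CD')
        q = self.q.get((state, action), 0.0)
        self.q[(state, action)] = q + self.alpha * (
            reward + self.gamma * best_next - q)   # temporal-difference update
```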

12.
This survey provides a review of and perspective on intermediate and advanced reinforcement learning (RL) techniques in the process industries, taking a holistic approach that covers all levels of the process control hierarchy. It presents a comprehensive overview of RL algorithms, including fundamental concepts such as Markov decision processes and the value-based, policy-based, and actor-critic families of methods, and discusses the relationship between classical control and RL. It then reviews the wide-ranging applications of RL in the process industries, such as soft sensors, low-level control, high-level control, distributed process control, fault detection and fault-tolerant control, optimization, planning, scheduling, and supply chains. The survey discusses limitations and advantages, trends and new applications, and opportunities and future prospects for RL in the process industries, and highlights the need for a holistic approach to complex systems given the growing importance of digitalization.

13.
Model-based reinforcement learning methods use collected samples to build a model of the environment and then use the constructed model to generate virtual samples that assist training, and are thus expected to improve sample efficiency. However, because of insufficient training samples and related issues, the constructed environment model is often inaccurate, and the samples it generates carry prediction errors that interfere with the training process. To address this problem, a learnable sample-weighting mechanism is proposed that reweights generated samples to reduce their negative impact on training. The impact is quantified as follows: first update the value and policy networks with the sample under evaluation, then compute the loss on real samples before and after the update, and use the change in loss to measure the sample's influence on training. Experimental results show that reinforcement learning algorithms designed with this weighting mechanism outperform existing model-based and model-free algorithms on multiple tasks.
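
The influence quantification described above can be sketched directly: update a copy of the learner on the generated sample, then score the sample by the change in loss on held-out real transitions. The `learner.loss` and `learner.update` interfaces below are hypothetical placeholders, not a specific library's API.

```python
# Minimal sketch of scoring a model-generated sample by its effect on the
# loss over real data. learner.loss(batch) and learner.update(sample) are
# assumed placeholder methods.
import copy

def sample_influence(learner, generated_sample, real_batch):
    loss_before = learner.loss(real_batch)
    probe = copy.deepcopy(learner)       # probe copy, so the real learner is untouched
    probe.update(generated_sample)       # one update step on the generated sample
    loss_after = probe.loss(real_batch)
    return loss_before - loss_after      # > 0: the sample helped on real data
```

A weighting mechanism can then downweight (or drop) samples whose influence is negative before they enter the training buffer.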

14.
Reinforcement Learning and Its Application to Computer Go
陈兴国, 俞扬. 《自动化学报》, 2016, 42(5): 685-695
Reinforcement learning is a special class of machine learning in which decision policies are learned through autonomous interaction with the environment so as to maximize the long-term cumulative reward the policy receives. Recently, reinforcement learning has been used successfully to reach human-level performance in domains such as Go and video games, attracting wide attention. This article gives a brief introduction to reinforcement learning, focusing on methods based on function approximation and on applications in Go and related domains.

15.
Embedding a Priori Knowledge in Reinforcement Learning
In recent years, temporal-difference methods have been put forward as convenient tools for reinforcement learning. Techniques based on temporal differences, however, suffer from a serious drawback: as stochastic adaptive algorithms, they may need extensive exploration of the state-action space before convergence is achieved. Although the basic methods are now reasonably well understood, it is precisely the structural simplicity of the reinforcement learning principle – learning through experimentation – that causes these excessive demands on the learning agent. Additionally, one must consider that the agent is very rarely a tabula rasa: some rough knowledge about characteristics of the surrounding environment is often available. In this paper, I present methods for embedding a priori knowledge in a reinforcement learning technique in such a way that both the mathematical structure of the basic learning algorithm and the capacity to generalise experience across the state-action space are kept. Extensive experimental results show that the resulting variants may lead to good performance, provided a sensible balance between risky use of prior imprecise knowledge and cautious use of learning experience is adopted.

16.
A Survey of Multi-Agent Deep Reinforcement Learning
Multi-agent deep reinforcement learning is an emerging research hotspot and application direction in machine learning, encompassing many algorithms, rules, and frameworks and widely applied in real-world domains such as autonomous driving, energy allocation, formation control, trajectory planning, route planning, and social dilemmas, which gives it high research value and significance. This paper briefly introduces the basic theory and development history of multi-agent deep reinforcement learning; describes existing classical algorithms under a four-way classification into independent, communication-based, cooperative, and agent-modeling approaches; surveys practical applications of multi-agent deep reinforcement learning algorithms and briefly lists existing test platforms; and summarizes the challenges facing multi-agent deep reinforcement learning in theory, algorithms, and applications, together with future research directions.

17.
A Stochastic-Game-Based Cooperative Reinforcement Learning Method for Agents
For the learning problem of a class of cooperative teams that pursue maximal overall system payoff, this paper proposes a new multi-agent cooperative reinforcement learning method based on the idea of stochastic games. Each agent in the cooperative team observes the historical behavior of its cooperating acquaintances, predicts their behavior strategies according to the stochastic game model, and then derives the optimal joint behavior strategy.
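
The history-based strategy prediction can be illustrated with an empirical-frequency (fictitious-play-style) estimate; this is a simplified stand-in for the paper's stochastic-game prediction model, and the function name is an assumption.

```python
# Minimal sketch: estimate another agent's strategy from its observed
# action history by empirical frequencies.
from collections import Counter

def predict_policy(history):
    """history: list of actions the other agent has taken in this state."""
    counts = Counter(history)
    total = sum(counts.values())
    return {a: c / total for a, c in counts.items()}  # empirical mixed strategy

# A best response against this predicted strategy can then be chosen to
# form the joint behavior strategy.
```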

18.
Reinforcement learning is a research hotspot in machine learning; it studies the process in which an agent interacts with its environment, makes sequential decisions, optimizes its policy, and maximizes cumulative return. With great research value and application potential, reinforcement learning is a key step toward artificial general intelligence. This paper surveys progress and trends in reinforcement learning algorithms and applications. It first introduces the basic principles of reinforcement learning, including Markov decision processes, value functions, and the exploration-exploitation problem. It then reviews classical reinforcement learning algorithms, covering value-function-based methods, policy-search methods, and methods combining value functions with policy search, and surveys frontier research, mainly in the directions of multi-agent reinforcement learning and meta reinforcement learning. Finally, it reviews successful applications of reinforcement learning in game playing, robot control, urban transportation, business, and other fields, and concludes with a summary and outlook.
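
As a small illustration of the exploration-exploitation problem this survey mentions, here is the epsilon-greedy rule in its simplest tabular form; the dictionary-based interface is an assumption for brevity.

```python
# Minimal sketch of epsilon-greedy action selection: explore with
# probability eps, otherwise exploit the current value estimates.
import random

def epsilon_greedy(q_values, eps=0.1):
    """q_values: dict mapping actions to estimated action values."""
    if random.random() < eps:
        return random.choice(list(q_values))   # explore: uniform random action
    return max(q_values, key=q_values.get)     # exploit: greedy action
```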

19.
In this paper, we propose a set of algorithms to design signal timing plans via deep reinforcement learning. The core idea of this approach is to set up a deep neural network (DNN) to learn the Q-function of reinforcement learning from the sampled traffic state/control inputs and the corresponding traffic system performance output. Based on the obtained DNN, we can find the appropriate signal timing policies by implicitly modeling the control actions and the change of system states. We explain the possible benefits and implementation tricks of this new approach. The relationships between this new approach and some existing approaches are also carefully discussed.
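
A hedged sketch of the core mechanism: regress a deep network Q(s, a; w) toward bootstrapped targets computed from a separate target network. The `q_target` callable and the batch format below are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of computing DQN-style regression targets
#   y = r + gamma * max_a' Q(s', a'; w_target)
# q_target is assumed to be a callable returning an array of action values.
import numpy as np

def dqn_targets(batch, q_target, gamma=0.99):
    """batch: list of (state, action, reward, next_state, done) tuples."""
    targets = []
    for s, a, r, s_next, done in batch:
        boot = 0.0 if done else gamma * q_target(s_next).max()
        targets.append(r + boot)         # regression target for Q(s, a)
    return np.array(targets)
```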

20.
Džeroski, Sašo; De Raedt, Luc; Driessens, Kurt. Machine Learning, 2001, 43(1-2): 7-52
This paper presents relational reinforcement learning, a learning technique that combines reinforcement learning with relational learning or inductive logic programming. Due to the use of a more expressive representation language for states, actions, and Q-functions, relational reinforcement learning can potentially be applied to a new range of learning tasks. One such task that we investigate is planning in the blocks world, where it is assumed that the effects of the actions are unknown to the agent and the agent has to learn a policy. Within this simple domain we show that relational reinforcement learning solves some existing problems with reinforcement learning. In particular, relational reinforcement learning allows us to employ structural representations, to abstract from the specific goals pursued, and to exploit the results of previous learning phases when addressing new (more complex) situations.
