
Formation of Modular Self-reconfigurable Robots Based on Improved Reinforcement Learning
Citation: LI Wei-ke, YUE Hong-wei, WANG Hong-min, YANG Yong, ZHAO Min, DENG Fu-qin. Formation of Modular Self-reconfigurable Robots Based on Improved Reinforcement Learning[J]. Computing Technology and Automation, 2022(3): 6-13.
Authors: LI Wei-ke, YUE Hong-wei, WANG Hong-min, YANG Yong, ZHAO Min, DENG Fu-qin
Affiliations: (1. Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen 529020, Guangdong, China; 2. Shenzhen Institute of Artificial Intelligence and Robotics for Society, Shenzhen 518116, Guangdong, China; 3. Shenzhen Shanchuan Robotics Co., Ltd., Shenzhen 518006, Guangdong, China; 4. R&D Center, CETC Potevio Technology Co., Ltd., Guangzhou 510310, Guangdong, China)
Abstract: In the early stage of training, a traditional reinforcement learning algorithm has no prior knowledge of the surrounding environment, so a modular self-reconfigurable robot selects actions at random, wasting iterations and slowing convergence. To address this, a two-stage reinforcement learning algorithm is proposed. In the first stage, a group-based, knowledge-sharing Q-learning algorithm trains the robots to move to the center point of a grid map, yielding an optimal shared Q-table. To reduce the number of iterations and accelerate convergence in this stage, the Manhattan distance is introduced as the reward value, guiding each robot toward the center point and mitigating the effect of sparse rewards. In the second stage, each robot uses the optimal shared Q-table and its current position to find an optimal path to its assigned target point, forming the specified formation. Experimental results show that on a 50×50 grid map, compared with the baseline algorithm, the proposed algorithm successfully trains the robots to reach their assigned target points while reducing the total number of exploration steps by nearly 50%. In addition, during formation switching, the formation runtime is reduced by a factor of nearly five.

Keywords: modular self-reconfigurable robot; reinforcement learning; multi-robot; formation

Formation of Modular Self-reconfigurable Robots Based on Improved Reinforcement Learning
LI Wei-ke, YUE Hong-wei, WANG Hong-min, YANG Yong, ZHAO Min, DENG Fu-qin. Formation of Modular Self-reconfigurable Robots Based on Improved Reinforcement Learning[J]. Computing Technology and Automation, 2022(3): 6-13.
Authors:LI Wei-ke  YUE Hong-wei  WANG Hong-min  YANG Yong  ZHAO Min  DENG Fu-qin
Abstract: In the traditional reinforcement learning setting, the modular self-reconfigurable robot lacks prior knowledge of the surrounding environment and therefore selects actions at random, resulting in wasted iterations and slow convergence. A two-stage reinforcement learning algorithm is proposed to address this. In the first stage, an improved Q-learning algorithm based on knowledge sharing among robots speeds up training and yields an optimal shared Q-table. To reduce the number of iterations and improve convergence in this stage, the Manhattan distance is introduced as the reward value, guiding the robot toward the center point and reducing the influence of sparse rewards. In the second stage, using the resulting Q-table and its current position, each robot finds the optimal path to its specified target point and the robots form the specified formation. Experimental results show that in a 50×50 grid map, compared with the baseline algorithm, the proposed algorithm successfully trains the robots to reach the specified target points while reducing the total number of exploration steps by nearly 50%. In addition, when the robots perform formation switching, the formation runtime is reduced by a factor of nearly five.
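The paper does not include source code, but the first-stage idea described in the abstract can be sketched: tabular Q-learning on a grid, with the change in Manhattan distance to the center point used as a dense shaping reward, and the learned table then queried greedily to extract a path. The following single-agent sketch is illustrative only; all function names, hyperparameters, and the terminal bonus are assumptions, and it omits the multi-robot knowledge sharing and the arbitrary target points of the paper's second stage.

```python
import random

ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def step(state, action, n):
    # Move within an n x n grid, clamping at the borders.
    dx, dy = ACTIONS[action]
    return (min(max(state[0] + dx, 0), n - 1),
            min(max(state[1] + dy, 0), n - 1))

def train_shared_q(n=6, episodes=2000, alpha=0.1, gamma=0.9, eps=0.2):
    """Tabular Q-learning toward the grid center with Manhattan-distance
    reward shaping (a sketch; hyperparameters are assumed, not the paper's)."""
    goal = (n // 2, n // 2)
    q = {}  # (state, action) -> value; missing entries default to 0.0
    for _ in range(episodes):
        s = (random.randrange(n), random.randrange(n))
        for _ in range(4 * n):
            if s == goal:
                break
            if random.random() < eps:  # epsilon-greedy exploration
                a = random.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q.get((s, i), 0.0))
            ns = step(s, a, n)
            # Shaping: positive reward for moving closer to the center,
            # negative for moving away, so the reward is no longer sparse.
            r = manhattan(s, goal) - manhattan(ns, goal)
            if ns == goal:
                r += 10.0  # assumed terminal bonus
            best_next = max(q.get((ns, i), 0.0) for i in range(len(ACTIONS)))
            q[(s, a)] = q.get((s, a), 0.0) + alpha * (
                r + gamma * best_next - q.get((s, a), 0.0))
            s = ns
    return q, goal

def greedy_path(q, start, goal, n, limit=200):
    # Follow the learned table greedily from an arbitrary start cell.
    path, s = [start], start
    while s != goal and len(path) < limit:
        a = max(range(len(ACTIONS)), key=lambda i: q.get((s, i), 0.0))
        s = step(s, a, n)
        path.append(s)
    return path
```

Because the shaping term rewards every distance-reducing move, the table converges quickly even on a small grid, after which `greedy_path` reads a route off the shared table without further exploration.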
Keywords: modular self-reconfigurable robot; reinforcement learning; multi-robot; formation
