Stacker Scheduling and Repository Location Recommendation Based on Multi-Task Reinforcement Learning
Citation: RAO Dongning, LUO Nanyue. Stacker Scheduling and Repository Location Recommendation Based on Multi-Task Reinforcement Learning[J]. Computer Engineering, 2023, 49(2): 279-287+295.
Authors: RAO Dongning  LUO Nanyue
Affiliation: School of Computers, Guangdong University of Technology, Guangzhou 510006, China
Funding: General Program of the Natural Science Foundation of Guangdong Province (2021A1515012556).
Abstract: Stacker scheduling is an essential task in warehousing automation; inbound-outbound efficiency and goods-storage conditions affect the overall benefit of the warehousing system. When handling large-scale scheduling problems, traditional methods suffer degraded performance and reduced returns because they must process large state spaces. Meanwhile, repository location optimization is closely tied to scheduling operation, yet most existing work ignores it when addressing scheduling problems. To solve the stacker scheduling problem, this study proposes a scheduling method based on the deep reinforcement learning algorithm Proximal Policy Optimization (PPO). The method treats scheduling as a sequential decision-making problem and learns through continuous interaction between the agent and the environment, thereby optimizing scheduling in a changing environment. To address the repository location optimization problem that accompanies scheduling, a joint scheduling and location-recommendation algorithm based on multi-task learning is proposed: an actor network for location recommendation is built on top of the scheduling network and trained through interactive feedback with the critic network, promoting joint training and improving overall benefit. Experimental results show that, compared with the original algorithm model, the proposed scheduling method improves the cumulative reward metric by 33.6% on average, and that the proposed multi-task joint algorithm can effectively handle stacker scheduling and repository location optimization scenarios, providing a feasible solution for this type of multi-task problem.

Keywords: stacker scheduling  location optimization  multi-task learning  deep reinforcement learning  Proximal Policy Optimization (PPO)
Received: 2022-02-15
Revised: 2022-04-02

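The PPO objective named in the abstract can be sketched as follows. This is a minimal, illustrative pure-Python version of the standard clipped surrogate loss that PPO optimizes, plus a hypothetical weighted combination of the two actor losses (scheduling and location recommendation) in the multi-task setting; the function names, the weights, and the 0.2 clipping range are assumptions for illustration, not the paper's implementation.

```python
import math

def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
    """PPO clipped surrogate loss (to be minimized), averaged over samples.

    ratio = exp(logp_new - logp_old); clipping keeps each update within
    [1 - eps, 1 + eps] of the old policy, as in standard PPO.
    """
    total = 0.0
    for lp_new, lp_old, adv in zip(logp_new, logp_old, advantages):
        ratio = math.exp(lp_new - lp_old)
        clipped = max(1.0 - eps, min(1.0 + eps, ratio))
        total += -min(ratio * adv, clipped * adv)
    return total / len(advantages)

def joint_actor_loss(sched_loss, loc_loss, w_sched=1.0, w_loc=1.0):
    """Hypothetical weighted sum of the two actor losses: stacker
    scheduling plus repository-location recommendation."""
    return w_sched * sched_loss + w_loc * loc_loss
```

When the new policy equals the old one the ratio is 1 and the loss reduces to the negative mean advantage; a large positive ratio with a positive advantage is capped at `1 + eps`, which is what limits the policy step size.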