Deep reinforcement learning-based joint task offloading and bandwidth allocation for multi-user mobile edge computing
Affiliation: College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China
Abstract: The rapid growth of mobile internet services has yielded a variety of computation-intensive applications such as virtual/augmented reality. Mobile Edge Computing (MEC), which enables mobile terminals to offload computation tasks to servers located at the edge of cellular networks, is considered an efficient approach to relieving heavy computational burdens and realizing efficient computation offloading. Driven by the consequent need for proper resource allocation in computation offloading via MEC, in this paper we propose a Deep Q-Network (DQN) based task offloading and resource allocation algorithm for MEC. Specifically, we consider an MEC system in which every mobile terminal has multiple tasks to offload to the edge server, and formulate a joint task offloading decision and bandwidth allocation optimization problem to minimize the overall offloading cost in terms of energy cost, computation cost, and delay cost. Although the proposed optimization problem is a mixed-integer nonlinear program, we exploit the emerging DQN technique to solve it. Extensive numerical results show that the proposed DQN-based approach achieves near-optimal performance.
Keywords: Mobile edge computing; Joint computation offloading and resource allocation; Deep Q-network
This article is indexed in ScienceDirect and other databases.
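
To make the described formulation concrete, below is a minimal sketch of a DQN agent choosing joint per-user offloading decisions and discrete bandwidth levels to minimize a weighted energy/computation/delay cost. The state layout (per-user task size and CPU cycles), the toy cost model, the action encoding, the i.i.d. task arrivals, and all hyper-parameters are illustrative assumptions for demonstration only, not the system model or parameters used in the paper.

```python
# Illustrative DQN sketch for joint task offloading and bandwidth allocation.
# The cost model and environment below are toy stand-ins (assumptions), not
# the paper's actual MEC formulation.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

N_USERS, BW_LEVELS = 3, 4                   # assumed system size
N_ACTIONS = (2 * BW_LEVELS) ** N_USERS      # joint (offload flag, bandwidth) per user
STATE_DIM = 2 * N_USERS                     # per-user task size and CPU cycles

def decode_action(a):
    """Map a flat action index to per-user (offload flag, bandwidth level)."""
    decisions = []
    for _ in range(N_USERS):
        a, rem = divmod(a, 2 * BW_LEVELS)
        decisions.append((rem % 2, rem // 2))
    return decisions

def random_state():
    # Task sizes and required CPU cycles drawn at random (toy arrivals).
    return np.random.uniform(0.5, 2.0, STATE_DIM).astype(np.float32)

def offloading_cost(state, action):
    """Toy weighted sum of energy, computation, and delay costs."""
    cost = 0.0
    for i, (offload, bw) in enumerate(decode_action(action)):
        size, cycles = state[2 * i], state[2 * i + 1]
        if offload:                      # transmit over the allocated bandwidth
            rate = 1.0 + bw              # crude rate ~ bandwidth level
            cost += 0.5 * size / rate    # transmission energy
            cost += 0.2 * cycles         # edge computation cost
            cost += size / rate          # transmission delay
        else:                            # execute locally
            cost += 1.0 * cycles         # local energy
            cost += 2.0 * cycles         # local delay (slower CPU)
    return cost

q_net = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(),
                      nn.Linear(128, N_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay, gamma, eps = deque(maxlen=10000), 0.9, 1.0

for step in range(5000):
    s = random_state()
    if random.random() < eps:            # epsilon-greedy exploration
        a = random.randrange(N_ACTIONS)
    else:
        with torch.no_grad():
            a = int(q_net(torch.from_numpy(s)).argmax())
    r = -offloading_cost(s, a)           # reward = negative offloading cost
    s_next = random_state()
    replay.append((s, a, r, s_next))
    eps = max(0.05, eps * 0.999)

    if len(replay) >= 64:                # one gradient step per interaction
        batch = random.sample(replay, 64)
        sb = torch.tensor(np.stack([b[0] for b in batch]))
        ab = torch.tensor([b[1] for b in batch])
        rb = torch.tensor([b[2] for b in batch], dtype=torch.float32)
        nb = torch.tensor(np.stack([b[3] for b in batch]))
        q = q_net(sb).gather(1, ab.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = rb + gamma * q_net(nb).max(1).values
        loss = nn.functional.mse_loss(q, target)
        optimizer.zero_grad(); loss.backward(); optimizer.step()
```

Encoding every combination of per-user offloading decision and bandwidth level as one discrete action is what lets a standard DQN handle the mixed-integer structure; the learned policy is then evaluated by how close its accumulated cost comes to the optimum found by exhaustive search over the same discrete action set.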