

DDPG-based computation offloading and resource allocation in MEC-enabled internet of vehicles
Citation: YANG Jinsong, SUN Shansan, LIU Li, XIONG Youzhi, FENG Botao, LU Lingrong. DDPG-based computation offloading and resource allocation in MEC-enabled internet of vehicles[J]. Journal of Chongqing University of Posts and Telecommunications (Natural Science Edition), 2024(2): 259-267.
Authors: YANG Jinsong, SUN Shansan, LIU Li, XIONG Youzhi, FENG Botao, LU Lingrong
Affiliations: College of Physics and Electronic Engineering, Sichuan Normal University, Chengdu 610101, P.R. China; College of Physics and Electronic Engineering, Sichuan Normal University, Chengdu 610101, P.R. China; National Key Laboratory of Science and Technology on Communications, University of Electronic Science and Technology of China, Chengdu 611731, P.R. China; College of Electronic and Information Engineering, Shenzhen University, Shenzhen 518060, P.R. China; UTStarcom Incorporated, Hangzhou 310053, P.R. China
Funding: Fundamental Research Funds for the Central Universities (ZYGX2020ZB044); Natural Science Foundation of Sichuan Province (2022NSFSC0480)
Abstract: To alleviate the severe task processing delay caused by the insufficient computing resources of individual vehicles in the mobile edge computing (MEC)-enabled internet of vehicles, a dynamic joint computation offloading and resource allocation scheme was proposed. With the goal of minimizing the network-wide task processing delay, the joint computation offloading and resource allocation problem was modeled as a Markov decision process (MDP) and then solved with a deep deterministic policy gradient (DDPG) algorithm. Simulation results show that, compared with the actor-critic (AC) and deep Q-network (DQN) algorithms, the proposed DDPG algorithm achieves the lowest network-wide task processing delay with the fastest convergence.
Keywords: internet of vehicles; mobile edge computing; Markov decision process; deep deterministic policy gradient
Received: 2022-12-29
Revised: 2023-10-09
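
The abstract models joint task offloading and resource allocation as an MDP over continuous decisions (how much of each task to offload, how to split MEC compute), which is exactly the setting where DDPG applies and a plain DQN would first have to discretize the action space. Since this page carries no implementation details, the Python/PyTorch sketch below only illustrates the general idea and is not the paper's actual system model: the toy environment env_step, the delay formula, the state (task sizes and channel gains), the action (offloading ratios and MEC CPU shares), and all constants (N_VEH, F_LOC, F_MEC, BW, network sizes, learning rates) are assumptions made for illustration.

# Minimal DDPG sketch for a toy vehicular task-offloading MDP (illustrative only,
# not the paper's system model). State: per-vehicle task size and channel gain.
# Action: per-vehicle offloading ratio and MEC CPU share. Reward: negative total delay.
import numpy as np
import torch
import torch.nn as nn

N_VEH = 4                                   # number of vehicles (assumed)
STATE_DIM, ACT_DIM = 2 * N_VEH, 2 * N_VEH
F_LOC, F_MEC, BW = 1e9, 1e10, 1e7           # local CPU, MEC CPU (cycles/s), bandwidth (Hz) -- assumed
CYCLES_PER_BIT = 500

def env_step(state, action):
    """Toy environment: return (reward, next_state); the delay model is illustrative."""
    task_bits = state[:N_VEH] * 1e6                          # Mbit -> bit
    gain = state[N_VEH:]
    ratio = action[:N_VEH]                                    # fraction of each task offloaded
    share = action[N_VEH:] / (action[N_VEH:].sum() + 1e-6)    # normalized MEC CPU shares
    rate = BW * np.log2(1.0 + 10.0 * gain)                    # crude uplink rate model
    t_local = (1.0 - ratio) * task_bits * CYCLES_PER_BIT / F_LOC
    t_off = ratio * task_bits / rate + ratio * task_bits * CYCLES_PER_BIT / (share * F_MEC + 1e-6)
    delay = np.maximum(t_local, t_off).sum()                  # local and offloaded parts run in parallel
    next_state = np.concatenate([np.random.uniform(0.2, 1.0, N_VEH),   # new task sizes (Mbit)
                                 np.random.uniform(0.1, 1.0, N_VEH)])  # new channel gains
    return -delay, next_state

def mlp(inp, out, head):
    return nn.Sequential(nn.Linear(inp, 128), nn.ReLU(), nn.Linear(128, out), head)

actor = mlp(STATE_DIM, ACT_DIM, nn.Sigmoid())                 # deterministic policy, actions in (0, 1)
critic = mlp(STATE_DIM + ACT_DIM, 1, nn.Identity())           # Q(s, a)
actor_t = mlp(STATE_DIM, ACT_DIM, nn.Sigmoid())
critic_t = mlp(STATE_DIM + ACT_DIM, 1, nn.Identity())
actor_t.load_state_dict(actor.state_dict()); critic_t.load_state_dict(critic.state_dict())
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
buf, GAMMA, TAU = [], 0.95, 0.005

state = np.random.uniform(0.1, 1.0, STATE_DIM)
for t in range(5000):
    with torch.no_grad():
        a = actor(torch.tensor(state, dtype=torch.float32)).numpy()
    a = np.clip(a + np.random.normal(0.0, 0.1, ACT_DIM), 1e-3, 1.0)    # exploration noise
    r, next_state = env_step(state, a)
    buf.append((state, a, r, next_state)); buf = buf[-10000:]           # replay buffer
    state = next_state
    if len(buf) < 256:
        continue
    batch = [buf[i] for i in np.random.randint(len(buf), size=64)]
    bs, ba, br, bs2 = (torch.tensor(np.array(x), dtype=torch.float32) for x in zip(*batch))
    with torch.no_grad():                                               # TD target from target networks
        y = br.unsqueeze(1) + GAMMA * critic_t(torch.cat([bs2, actor_t(bs2)], dim=1))
    critic_loss = nn.functional.mse_loss(critic(torch.cat([bs, ba], dim=1)), y)
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()
    actor_loss = -critic(torch.cat([bs, actor(bs)], dim=1)).mean()      # deterministic policy gradient
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()
    for net, tgt in ((actor, actor_t), (critic, critic_t)):             # soft target updates
        for p, tp in zip(net.parameters(), tgt.parameters()):
            tp.data.mul_(1.0 - TAU).add_(TAU * p.data)

In this toy setting the learned sigmoid actor outputs the continuous offloading ratios and CPU shares directly, and the reward (negative total delay) drives the policy toward the delay-minimization objective stated in the abstract; the replay buffer, target networks, and soft updates are the standard DDPG ingredients that give it faster, more stable convergence than AC or a discretized DQN.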