Similar Documents
18 similar documents found (search time: 125 ms)
1.
To address the low-latency and low-power requirements of Internet of Vehicles (IoV) services and the network congestion caused by massive-device computation offloading, this paper proposes a joint computation offloading, computing-resource, and radio-resource allocation algorithm (JODRAA) for a hybrid cloud-fog network architecture. First, the algorithm combines cloud computing with fog computing and formulates a resource optimization model that minimizes system energy consumption and resource cost under a maximum-delay constraint. Second, the original problem is transformed into a standard quadratically constrained quadratic program (QCQP), and a low-complexity joint offloading-decision and computing-resource allocation algorithm is designed. Further, to cope with the congestion caused by massive-device offloading, an overflow-probability estimation model for the offloading users' access-request queue is established, and an online-measurement-based time-frequency resource configuration algorithm for fog nodes is proposed. Finally, an iterative bandwidth and power allocation strategy is derived using fractional programming theory and the Lagrangian dual decomposition method. Simulation results show that the proposed algorithm minimizes system energy consumption and resource cost while satisfying the delay requirement.
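The last step, iterative bandwidth allocation via Lagrangian dual decomposition, can be sketched for a single shared-spectrum constraint. This toy version assumes fixed transmit power and the simple model energy = power · bits / (B · log2(1 + SNR)); it is not the paper's full JODRAA formulation, only an illustration of how a dual price on bandwidth decouples the users:

```python
import math

def dual_bandwidth_allocation(data_bits, power, snr_db, B_total):
    """Split B_total Hz among users to minimize total transmit energy.

    With a dual price lam on bandwidth, each user independently solves
    argmin_B c_i/B + lam*B, giving B_i = sqrt(c_i/lam); a bisection on
    lam then enforces the shared constraint sum(B_i) = B_total.
    """
    se = [math.log2(1.0 + 10 ** (s / 10.0)) for s in snr_db]  # bits/s/Hz
    c = [power * d / r for d, r in zip(data_bits, se)]        # energy * B
    lo, hi = 1e-18, 1e18                                      # bracket for lam
    for _ in range(200):
        lam = math.sqrt(lo * hi)          # geometric midpoint (lam spans decades)
        if sum(math.sqrt(ci / lam) for ci in c) > B_total:
            lo = lam                      # allocations too large: raise the price
        else:
            hi = lam
    B = [math.sqrt(ci / hi) for ci in c]
    return B, hi
```

Users with more data or worse channels (larger c_i) receive proportionally more bandwidth, which is the qualitative behavior the dual decomposition produces.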

2.
Mobile edge computing (MEC) offloads computation tasks to MEC servers, relieving the computational load of smart mobile devices while reducing service delay. However, task offloading and resource allocation in MEC systems still face the following problems: 1) lack of cooperation among edge nodes; 2) a mismatch between assumed task arrivals and the dynamically changing characteristics of real environments; 3) the dynamic joint optimization of cooperative task offloading and resource allocation. To solve these problems, this paper builds on a cooperative MEC architecture and proposes a task offloading and resource allocation algorithm based on the multi-agent deep deterministic policy gradient (MADDPG) method, minimizing the long-term average cost of all users in the system. Simulation results show that the algorithm effectively reduces system delay and energy consumption.

3.
Moving cloud application functionality and processing capability down to the network edge via mobile edge computing, in order to support computation-intensive or delay-sensitive tasks, has become a clear trend. However, with a large number of mobile users, effectively using the limited computing resources of edge nodes to guarantee end-user quality of service (QoS) becomes a key problem. To this end, this paper integrates the edge cloud and the remote cloud into a hierarchical edge cloud computing architecture. Based on this architecture, and with the goal of minimizing mobile-device energy consumption and task execution time, the problem is formulated as a convex optimization that minimizes a weighted sum of energy and delay under resource constraints, and a computation offloading and resource allocation mechanism based on the method of multipliers is proposed to solve it. Experimental results show that under heavy computation loads, the proposed mechanism effectively reduces mobile-terminal energy consumption and reduces task execution delay by up to 60% and 10% compared with local computing and plain computation offloading, respectively, improving system performance.
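The weighted energy-delay trade-off at the heart of such formulations can be seen in a minimal local-vs-offload comparison. This sketch assumes the common textbook chip-energy model kappa · f² per cycle and illustrative parameters (f_edge, rate, p_tx); it is not the paper's multiplier-based solver, just the cost structure it optimizes:

```python
def offload_decision(cycles, data_bits, f_local, f_edge, rate,
                     p_tx, kappa=1e-27, w_energy=0.5):
    """Pick local execution or offloading by weighted energy+delay cost."""
    # Local execution: time = cycles/f, energy = kappa * f^2 * cycles.
    t_loc = cycles / f_local
    e_loc = kappa * f_local ** 2 * cycles
    # Offloading: upload the input data, then compute at the edge;
    # the device only spends radio energy during the upload.
    t_off = data_bits / rate + cycles / f_edge
    e_off = p_tx * (data_bits / rate)
    cost = lambda e, t: w_energy * e + (1 - w_energy) * t
    choice = "offload" if cost(e_off, t_off) < cost(e_loc, t_loc) else "local"
    return choice, cost(e_loc, t_loc), cost(e_off, t_off)
```

Compute-heavy tasks with small inputs favor offloading, while data-heavy tasks with little computation stay local, matching the intuition behind the 60%/10% delay reductions reported above.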

4.
Mobile edge computing (MEC) improves users' quality of experience by providing computing capability at the wireless network edge. However, MEC computation offloading still faces many problems. This paper considers computation offloading in an ultra-dense networking (UDN) MEC scenario and formulates a joint optimization of offloading decisions and resource allocation that accounts for total system energy consumption. First, a coordinate descent method is used to optimize the offloading decisions. Meanwhile, subchannels are allocated under user delay constraints using an improved Hungarian algorithm and a greedy algorithm. The energy minimization problem is then converted into a power minimization problem, which is reformulated as a convex optimization to obtain each user's optimal transmit power. Simulation results show that the proposed offloading scheme minimizes system energy consumption while meeting users' heterogeneous delay requirements, effectively improving system performance.
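The subchannel-allocation step is a one-to-one assignment problem. The brute-force sketch below solves the same matching the (improved) Hungarian algorithm solves in polynomial time; the cost matrix entries (e.g., energy of user u on subchannel c) are illustrative:

```python
from itertools import permutations

def assign_subchannels(cost):
    """Exhaustive user->subchannel assignment minimizing total cost.

    A stand-in for the Hungarian algorithm, fine for tiny matrices:
    cost[u][c] is the cost of giving user u subchannel c.
    """
    n = len(cost)
    best, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        total = sum(cost[u][perm[u]] for u in range(n))
        if total < best:
            best, best_perm = total, perm
    return best_perm, best
```

For real problem sizes the Hungarian algorithm finds the same optimum in O(n^3) instead of O(n!).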

5.
To increase the cryptocurrency revenue of mobile terminals by optimally offloading tasks with limited residual system resources, this paper proposes a task offloading scheme based on nodes' residual resources and network delay in an integrated blockchain and fog computing system. To realize optimal offloading, the expected revenue of a mobile terminal is first analyzed as a function of task volume; the terminal's expenditure is then analyzed jointly in terms of the network nodes' residual computing, storage, and power resources and the network delay. A mathematical optimization model maximizing the terminal's cryptocurrency revenue is then established and solved with a simulated annealing (SA) algorithm. Simulation results verify the effectiveness of the scheme.
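A generic simulated-annealing maximizer of the kind applied to the revenue model can be sketched as follows; the concave toy revenue curve and neighbor move are stand-ins, not the paper's actual revenue/expenditure model:

```python
import math, random

def simulated_annealing(profit, x0, neighbor, t0=1.0, cooling=0.995,
                        steps=4000, seed=0):
    """Maximize profit(x) by SA: always accept improvements, accept
    worse moves with Boltzmann probability exp(delta / temperature)."""
    rng = random.Random(seed)
    x, best, t = x0, x0, t0
    for _ in range(steps):
        y = neighbor(x, rng)
        delta = profit(y) - profit(x)
        if delta >= 0 or rng.random() < math.exp(delta / t):
            x = y
        if profit(x) > profit(best):
            best = x
        t *= cooling                      # geometric cooling schedule
    return best

# Toy revenue curve with its maximum (value 9) at x = 3.
revenue = lambda x: -(x - 3.0) ** 2 + 9.0
step = lambda x, rng: x + rng.uniform(-0.5, 0.5)
```

The high-temperature phase explores widely (important when the true revenue landscape has many local optima), and the cooled phase refines the best point found.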

6.
For the joint optimization of resource allocation and task-offloading decisions in a D2D-assisted hybrid cloud-fog architecture, this paper proposes a resource allocation and offloading-decision algorithm based on multi-agent deep reinforcement learning. First, considering incentive, energy, and network-resource constraints, the algorithm jointly optimizes radio-resource allocation, computing-resource allocation, and offloading decisions, establishing a stochastic optimization model that maximizes the total user quality of experience (QoE) of the system, which is then transformed into a Markov decision process (MDP). Second, the original MDP is factorized and formulated as a Markov game. Then, a centralized-training, distributed-execution mechanism based on the actor-critic (AC) algorithm is proposed: during centralized training, the agents cooperate to obtain global information and optimize the resource allocation and task-offloading policies; after training, each agent independently performs resource allocation and task offloading according to the current system state and its own policy. Finally, simulation results show that the algorithm effectively improves user QoE and reduces delay and energy consumption.

8.
Zhu Keyu, Zhu Qi. Journal of Signal Processing, 2021, 37(6): 1055-1065
In a multi-tier offloading scenario composed of multiple base stations and a remote cloud, this paper proposes a joint optimization algorithm for base-station selection, computation offloading, and resource allocation in multi-cell cellular networks. Considering base-station selection for users covered by multiple overlapping base stations, and under edge-server computing-resource constraints, the algorithm formulates a problem minimizing the weighted sum of energy consumption and delay, which is NP-hard. The single-user multi-base-station offloading problem is first solved with the Lagrange multiplier method; then, for the multi-user multi-base-station scenario, accounting for users' base-station selection and their competition for edge-server computing resources, the serving base station is chosen via a defined selection function, and a suboptimal iterative heuristic dynamically revises the single-user offloading decisions to obtain the final offloading decisions and edge-server resource allocation. Simulation results show that the proposed algorithm effectively reduces task-completion delay and terminal energy consumption.

9.
Sheng Yun, Xu Chen, Zheng Guangyuan. Telecommunications Science, 2022, 38(2): 35-46
To improve the spectral efficiency of mobile edge computing (MEC) networks and serve a large number of users, an ultra-dense MEC system model based on non-orthogonal multiple access (NOMA) is established. To mitigate the severe interference caused by many users offloading simultaneously and to use edge-server resources efficiently, a joint task offloading and resource allocation scheme is proposed that minimizes total system energy consumption while satisfying users' quality of service. The scheme jointly considers offloading decisions, power control, computing-resource allocation, and subchannel allocation. Simulation results show that, compared with other offloading schemes, the proposed scheme effectively reduces system energy consumption while maintaining user quality of service.

10.
To balance network load and fully utilize network resources in an ultra-dense, heterogeneous, multi-user and multi-task edge computing network, this paper formulates a joint optimization of cooperative task offloading and radio-resource management that minimizes system energy consumption under user delay constraints. In the problem formulation, a frequency-band partitioning mechanism is adopted to combat the severe interference caused by ultra-dense base-station deployment, and non-orthogonal multiple access (NOMA) is introduced to improve uplink spectrum utilization. Since the objective is a nonlinear mixed-integer problem, a cooperative offloading and resource allocation algorithm is designed based on an adaptive genetic algorithm with diversity-guided mutation (AGADGM). Simulation results show that the algorithm achieves lower system energy consumption than competing algorithms while strictly satisfying the delay constraints.
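The core of a diversity-guided adaptive GA can be sketched in a few lines. This toy version minimizes a continuous cost, whereas the paper's encoding handles mixed-integer offloading variables; the population sizes, mutation scales, and diversity threshold are all assumptions:

```python
import random

def agadgm(fitness, dim, pop_size=30, gens=120, lo=0.0, hi=1.0, seed=1):
    """Minimal adaptive GA: elitist selection, one-point crossover, and
    mutation whose rate adapts to population diversity."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    def diversity(p):
        means = [sum(x[d] for x in p) / len(p) for d in range(dim)]
        return sum(sum((x[d] - means[d]) ** 2 for d in range(dim))
                   for x in p) / len(p)
    for _ in range(gens):
        pop.sort(key=fitness)             # ascending: best individuals first
        elite = pop[: pop_size // 2]
        # Diversity-guided: mutate harder when the population collapses.
        pm = 0.05 if diversity(pop) > 1e-3 else 0.4
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, dim) if dim > 1 else 0
            child = a[:cut] + b[cut:]     # one-point crossover
            child = [min(hi, max(lo, g + rng.gauss(0, 0.1)))
                     if rng.random() < pm else g for g in child]
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)
```

Raising the mutation rate when diversity drops is what keeps the search from stalling in a local optimum of the nonconvex energy objective.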

11.
With the widespread application of wireless communication technology and continuous improvements to Internet of Things (IoT) technology, the fog computing architecture composed of edge, fog, and cloud layers has become a research hotspot. This architecture uses Fog Nodes (FNs) close to users to implement certain cloud functions while compensating for the cloud's disadvantages. However, because of the limited computing and storage capabilities of a single FN, tasks must be offloaded to multiple cooperating FNs for completion. To realize task offloading effectively and quickly, we use network calculus theory to establish an overall performance model for task offloading in a fog computing environment and propose a Globally Optimal Multi-objective Optimization algorithm for Task Offloading (GOMOTO) based on this model. The results show that the proposed model and algorithm effectively reduce the total delay and total energy consumption of the system and improve the network Quality of Service (QoS).

12.
Fog computing is an emerging architecture intended to alleviate the network burden on the cloud and the core network by moving resource-intensive functionalities such as computation, communication, storage, and analytics closer to the End Users (EUs). To address the energy-efficiency and latency requirements of time-critical Internet-of-Things (IoT) applications, fog computing systems can apply intelligence features in their operations to take advantage of readily available data and computing resources. In this paper, we propose an approach that uses device-driven and human-driven intelligence as key enablers to reduce energy consumption and latency in fog computing, via two case studies. The first uses machine learning to detect user behaviors and perform adaptive low-latency Medium Access Control (MAC)-layer scheduling among sensor devices. In the second case study, on task offloading, we design an algorithm for an intelligent EU device to select its offloading decision in the presence of multiple nearby fog nodes while minimizing its own energy and latency objectives. Our results show a huge but untapped potential of intelligence in tackling the challenges of fog computing.

13.
In this paper, we study a UAV-based fog/edge computing network in which UAVs and fog/edge nodes work together intelligently to provide reduced latency, data offloading, storage, coverage, high throughput, fast computation, and rapid responses. In existing UAV-based computing networks, users send continuous requests to offload their data from the ground to UAV-fog nodes and vice versa, causing high congestion across the whole network; real-time applications, however, require low latency while offloading large volumes of data, so QoS is compromised in real-time emergencies. To handle this problem, we aim to minimize latency when offloading large amounts of data, reduce computing time, and provide better throughput. First, this paper proposes a four-tier architecture for the UAV-fog collaborative network in which local UAVs and UAV-fog nodes perform smart task offloading with low latency; the UAVs act as fog servers that compute data in collaboration with local UAVs and offload it efficiently to ground devices. Next, we consider a Q-learning Markov decision process (QLMDP) over the optimal path to handle the massive data requests from ground devices and optimize the overall delay in the UAV-based fog computing network. Simulation results show that the proposed collaborative network achieves high throughput, reduces average latency by up to 0.2, and takes less computing time than UAV-based networks and UAV-based MEC networks, thereby achieving high QoS.
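The Q-learning component can be illustrated on a toy offloading MDP. This is a deliberate simplification of the QLMDP scheme: two states (queue low / congested) and two actions (serve locally / offload to the UAV-fog node), with assumed rewards and deterministic transitions:

```python
import random

def q_learning(transitions, rewards, n_states, n_actions, episodes=2000,
               alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration.

    transitions[s][a] -> next state, rewards[s][a] -> immediate reward.
    """
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = rng.randrange(n_states)
        for _ in range(20):                      # short episode rollout
            a = (rng.randrange(n_actions) if rng.random() < eps
                 else max(range(n_actions), key=lambda x: Q[s][x]))
            s2 = transitions[s][a]
            # One-step temporal-difference update toward r + gamma*max Q.
            Q[s][a] += alpha * (rewards[s][a] + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

With rewards that penalize serving locally while congested, the learned policy serves locally when the queue is short and offloads when it is congested, which is the delay-minimizing behavior the paper targets.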

14.
The Internet of Vehicles (IoV) comprises connected vehicles and connected autonomous vehicles and offers numerous benefits for traffic efficiency and safety. Several IoV applications are delay-sensitive and need computation and data-storage resources that vehicles cannot provide, so these tasks are offloaded to highly powerful nodes, namely fog, which bring resources nearer the network edge and reduce both traffic congestion and load. However, the mechanism for offloading tasks to fog nodes in terms of delay, computing power, and completion time remains an open concern. Hence, an efficient task-offloading strategy, named the Aquila Student Psychology Optimization Algorithm (ASPOA), is developed for offloading IoV tasks in a fog setting with respect to delay, computing power, and completion time. The devised optimization algorithm, ASPOA, incorporates the Aquila Optimizer (AO) and Student Psychology Based Optimization (SPBO). Task offloading in the IoV-fog system selects suitable resources for executing vehicles' tasks by considering several constraints and parameters to satisfy user requirements. Simulation outcomes show that the devised ASPOA-based task-offloading method achieves better performance, with a minimum delay of 0.0009 s, minimum computing power of 8.884 W, and minimum completion time of 0.441 s.

15.
Wireless Personal Communications - Fog computing provides cloud services at the user end. User requests are processed on the fog nodes deployed near the end-user layer in a fog computing...

16.
Chen Siguang, Ge Xinwei, Wang Qian, Miao Yifeng, Ruan Xiukai. Wireless Networks, 2022, 28(7): 3293-3304

Most existing computation-offloading research on fog computing networks focuses on reducing energy consumption and delay and lacks joint consideration of smart devices' rechargeability. This paper proposes a deep deterministic policy gradient-based intelligent rechargeable fog computation-offloading mechanism combined with simultaneous wireless information and power transfer technology. Specifically, an optimization problem minimizing the total energy consumption for completing all tasks in a multi-user scenario is formulated, jointly optimizing the task-offloading ratio, uplink channel bandwidth, power-split ratio, and computing-resource allocation. For this nonconvex problem with a continuous action space, a communication-, computation-, and energy-harvesting-aware intelligent computation-offloading algorithm is developed. It achieves near-optimal energy consumption and delay, and, similar to a double deep Q-network, an inverting-gradient-updating dual actor-critic neural network design improves the convergence and stability of the training process. Finally, simulation results validate that the proposed mechanism converges quickly and effectively reduces energy consumption with the lowest task delay.


17.
Mobile device users engage in social networking, gaming, learning, and even office work, so end users expect mobile devices with responsive computing capacity, ample storage, and long battery life. Data-intensive applications such as text search, online gaming, and face recognition have increased tremendously, and with such complex applications mobile devices face many issues: fast battery draining, limited power, low storage capacity, and increased energy consumption. The novelty of this work is to strike a balance between the time and energy consumption of mobile devices running data-intensive applications by finding optimal offloading decisions. This paper proposes a novel, efficient Data Size-Aware Offloading Model (DSAOM) for data-intensive applications that predicts the appropriate resource provider for dynamic resource allocation in mobile cloud computing. Based on data size, tasks are partitioned and gradually allocated to appropriate resource providers for execution; each task is placed with a provider according to service availability in the fog nodes or the cloud, and tasks are split into smaller portions for execution on neighboring fog nodes. For remote execution, the offloading decision is made with a min-cut algorithm that accounts for the mobile device's monetary cost. The proposed system achieves 13.2% lower latency and 14.1% lower response time, and reduces energy consumption by 24%, compared with the existing model. Experimental findings show that the framework efficiently lowers energy use and improves performance for data-intensive application activities, and that the task-offloading strategy is effective for intensive offloading requests.
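The min-cut formulation of offloading can be sketched with a small max-flow implementation. Tasks are graph nodes between a device-side source and a remote-side sink; cutting the source edge of a task charges its remote cost, cutting its sink edge charges its local cost, and cutting a communication edge charges the transfer between split tasks, so a minimum s-t cut is a cheapest partition. The cost model here is an assumed simplification of the paper's monetary-cost formulation:

```python
from collections import deque, defaultdict

def min_cut_partition(n, local_cost, remote_cost, comm):
    """Return the tasks kept on the device under a min-cut partition.

    Tasks are 0..n-1; comm is a list of (u, v, transfer_cost) edges.
    Uses Edmonds-Karp max-flow, then reads the cut off the residual graph.
    """
    S, T = n, n + 1
    cap = defaultdict(lambda: defaultdict(float))
    for v in range(n):
        cap[S][v] += remote_cost[v]       # cut => v runs remotely
        cap[v][T] += local_cost[v]        # cut => v runs on the device
    for u, v, c in comm:
        cap[u][v] += c
        cap[v][u] += c
    def bfs():
        parent, q = {S: None}, deque([S])
        while q:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 1e-12 and v not in parent:
                    parent[v] = u
                    if v == T:
                        return parent
                    q.append(v)
        return None
    while True:                           # augment until no s-t path remains
        parent = bfs()
        if parent is None:
            break
        f, v = float("inf"), T
        while parent[v] is not None:      # bottleneck along the path
            f = min(f, cap[parent[v]][v]); v = parent[v]
        v = T
        while parent[v] is not None:      # push flow, update residuals
            u = parent[v]
            cap[u][v] -= f
            cap[v][u] += f
            v = u
    seen, q = {S}, deque([S])             # residual-reachable side stays local
    while q:
        u = q.popleft()
        for v, c in cap[u].items():
            if c > 1e-12 and v not in seen:
                seen.add(v); q.append(v)
    return sorted(v for v in range(n) if v in seen)
```

Tasks that are cheap to run remotely but expensive locally end up on the remote side of the cut, and tightly coupled tasks tend to stay together because splitting them incurs the communication cost.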

18.
To improve the security of partial computation offloading in a mobile edge computing (MEC) system based on non-orthogonal multiple access (NOMA), this paper studies the physical-layer security of an MEC network in the presence of an eavesdropper, using the secrecy outage probability to measure the secrecy performance of computation offloading. Under transmit-power, local-computation, and secrecy-outage-probability constraints, an energy-consumption weighting factor is introduced to balance transmission and computation energy, with the goal of minimizing the weighted sum of system energy consumption. To reduce system overhead while respecting the priorities of the two users, a joint task offloading and resource allocation mechanism is proposed: an iterative optimization algorithm based on bisection search finds the optimal solution of the transformed problem, yielding the optimal task offloading and power allocation. Simulation results show that the proposed algorithm effectively reduces system energy consumption.
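The bisection-search style of solver used above can be illustrated on a stripped-down secrecy subproblem: find the smallest transmit power whose secrecy rate log2(1 + p·g_b) - log2(1 + p·g_e) meets a target. This toy single-link model (legitimate gain g_b greater than eavesdropper gain g_e) omits the paper's outage and energy-weight terms:

```python
import math

def min_power_for_secrecy(g_b, g_e, target_rate, p_max, tol=1e-9):
    """Bisection for the minimal power meeting a secrecy-rate target.

    The secrecy rate is monotonically increasing in p when g_b > g_e,
    so feasibility is monotone and bisection applies.
    """
    c_s = lambda p: math.log2(1 + p * g_b) - math.log2(1 + p * g_e)
    if c_s(p_max) < target_rate:
        return None                       # infeasible even at full power
    lo, hi = 0.0, p_max
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if c_s(mid) >= target_rate:
            hi = mid                      # feasible: try less power
        else:
            lo = mid
    return hi
```

Because the secrecy rate saturates at log2(g_b / g_e) as p grows, the feasibility check at p_max also shows why some targets are unreachable no matter how much power is spent.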


Copyright © Beijing Qinyun Technology Development Co., Ltd.    京ICP备09084417号-23

京公网安备 11010802026262号