Similar Documents
20 similar documents found (search time: 62 ms)
1.
Mobile edge cloud computing has been a promising computing paradigm in which mobile users can offload their application workloads to low-latency local edge cloud resources. However, compared with remote public cloud resources, conventional local edge cloud resources are limited in computational capacity, especially when serving a large number of mobile applications. To deal with this problem, we present a hierarchical edge cloud architecture that integrates local edge clouds with public clouds to improve the performance and scalability of scheduling for mobile applications. In addition, to achieve a trade-off between cost and system delay, a fault-tolerant dynamic resource scheduling method is proposed to address the scheduling problem in mobile edge cloud computing. The optimization problem is formulated as minimizing application cost while satisfying user-defined deadlines. Specifically, a game-theoretic scheduling mechanism is first adopted for resource provisioning and scheduling of multiprovider mobile applications. Then, a mobility-aware dynamic scheduling strategy is presented to update the schedule in consideration of user mobility. Moreover, a failure recovery mechanism is proposed to handle uncertainties during the execution of mobile applications. Finally, experiments are designed and conducted to validate the effectiveness of our proposal. The experimental results show that our method achieves a trade-off between cost and system delay.
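The cost-versus-deadline trade-off at the heart of this abstract can be illustrated with a toy greedy scheduler: for each task, pick the cheapest resource tier that still meets the user-defined deadline. Everything below (the tier names, speeds, and prices) is hypothetical and far simpler than the paper's game-theoretic mechanism; it sketches only the optimization objective.

```python
# Toy sketch (not the paper's algorithm): choose the cheapest resource tier
# for each task such that the user-defined deadline is still met.
# All tiers, speeds, and prices below are hypothetical.

def schedule(tasks, tiers):
    """tasks: list of (workload, deadline); tiers: list of (name, speed, cost_per_time_unit).
    Returns {task index: tier name}, with None when no tier can meet the deadline."""
    plan = {}
    for i, (workload, deadline) in enumerate(tasks):
        feasible = [(cost * workload / speed, name)        # (total cost, tier)
                    for name, speed, cost in tiers
                    if workload / speed <= deadline]       # deadline constraint
        plan[i] = min(feasible)[1] if feasible else None   # cheapest feasible tier
    return plan

tiers = [("edge", 2.0, 5.0),    # fast but expensive per time unit
         ("cloud", 1.0, 1.0)]   # slower but cheap
tasks = [(4.0, 3.0),   # tight deadline: only the edge tier finishes in time
         (4.0, 10.0)]  # loose deadline: the cheap cloud tier suffices
print(schedule(tasks, tiers))  # → {0: 'edge', 1: 'cloud'}
```

In the paper's setting the same objective is pursued with game-theoretic provisioning across multiple providers; the greedy loop above only shows the deadline-constrained cost minimization.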

2.
Scheduling is essentially a decision-making process that enables resource sharing among a number of activities by determining their execution order on the set of available resources. The emergence of distributed systems brought new challenges to scheduling in computer systems, including clusters, grids, and, more recently, clouds. At the same time, the sheer volume of research makes it hard for newcomer researchers to understand the relationships among the different scheduling problems and strategies proposed in the literature, which hampers the identification of new and relevant research avenues. In this paper we introduce a classification of the scheduling problem in distributed systems by presenting a taxonomy that incorporates recent developments, especially those in cloud computing. We review the scheduling literature to corroborate the taxonomy and analyze the interest in its different branches. Finally, we identify relevant future directions in scheduling for distributed systems.

3.
With the rapid growth in the number of mobile devices and the widespread use of compute-intensive applications such as face recognition, the Internet of Vehicles, and virtual reality, sound task-scheduling schemes for compute-intensive applications are needed to optimally match tasks satisfying user QoS requests with cooperating resources, and thereby address the long latency, high cost, load imbalance, and low resource utilization of edge cloud centers. This paper describes the task-scheduling framework, execution process, application scenarios, and performance metrics for compute-intensive applications in edge computing environments. Task-scheduling strategies are compared and analyzed along three optimization axes: time and cost; energy consumption and resource utilization; and load balancing and throughput. The strengths, weaknesses, and applicable scenarios of these strategies are summarized. By analyzing SDN-based edge computing architectures in 5G environments, the paper then presents an SDN-based scheduling strategy for compute-intensive packet tasks at the edge, a deep-reinforcement-learning-based task-scheduling strategy for compute-intensive applications, and a multi-objective cross-layer task-scheduling strategy for 5G IoV networks. Finally, the challenges facing task scheduling in edge computing are summarized in terms of fault-tolerant scheduling, dynamic microservice scheduling, crowd-aware scheduling, and security and privacy.

4.
In recent times, Internet of Things (IoT) applications, including smart transportation, smart healthcare, the smart grid, and smart cities, generate a large volume of real-time data for decision making. In past decades, real-time sensory data were offloaded to centralized cloud servers for analysis through a reliable communication channel. However, due to the long communication distance between end users and centralized cloud servers, the likelihood of network congestion, data loss, latency, and energy consumption grows significantly. To address these challenges, fog computing emerges as a distributed environment that extends computation and storage facilities to the edge of the network. Compared to centralized cloud infrastructure, a distributed fog framework can support delay-sensitive IoT applications with minimal latency and energy consumption while analyzing data on a set of resource-constrained fog/edge devices. Our survey therefore covers the layered IoT architecture, evaluation metrics, and application aspects of fog computing and its progress over the last four years. Furthermore, the layered architecture of the standard fog framework and different state-of-the-art techniques for utilizing the computing resources of fog networks are covered in this study. Moreover, we include an IoT use-case scenario to demonstrate fog data offloading and resource provisioning in heterogeneous vehicular fog networks. Finally, we examine various challenges and potential solutions for establishing interoperable communication and computation for next-generation IoT applications in fog networks.

5.
Researchers are currently studying efficient application and resource-scheduling strategies for vehicular edge computing (VEC) environments; however, validating these applications and strategies on real hardware is constrained by cost and time and cannot be done quickly and effectively. Edge/fog simulators such as iFogSim2 have lowered experimental costs, but the connection handovers and resource-allocation demands of fast-moving vehicles challenge the application of such simulators to VEC. We therefore extend iFogSim2 and design VECSim, a VEC environment simulator that supports high-speed mobility. It integrates open base-station data and builds a vehicle-trajectory dataset so that researchers can focus on resource-allocation strategies. First, to simplify experimental steps, the mobility-trace parsing module is improved and adapted to vehicle trajectories generated by the microscopic traffic simulator Simulation of Urban MObility (SUMO). Second, distributed applications under VEC are modeled on a distributed dataflow model, and baseline service-migration policy algorithms are provided. In addition, VECSim introduces a time-performance optimization that parallelizes operations to accelerate simulation event processing. Experiments show that, compared with its counterpart service-migration algorithm in iFogSim2, the proposed algorithm is stable when validated on large-scale vehicle-trajectory datasets, and the time-performance optimization yields a 5.3% improvement in execution time. The code is available at https://github.com/LiuZiyuan-CS/VECSim.

6.
In recent times, the machine learning (ML) community has recognized the deep learning (DL) computing model as the gold standard. DL has gradually become the most widely used computational approach in machine learning, achieving remarkable results on various complex cognitive tasks that match or even surpass human performance. One of the key benefits of DL is its ability to learn from vast amounts of data. In recent years, the DL field has expanded rapidly and found successful applications in many conventional areas. Significantly, DL has outperformed established ML techniques in multiple domains, such as cloud computing, robotics, and cybersecurity. Cloud computing has become crucial owing to the constant growth of the IoT network, and it remains the preferred approach for deploying sophisticated computational applications that stress large-scale data processing. Nevertheless, the cloud falls short for cutting-edge IoT applications that produce enormous amounts of data and require quick response times and stronger privacy. The latest trend is to adopt a decentralized, distributed architecture and move processing and storage resources to the network edge. This removes the cloud-computing bottleneck by placing data processing and analytics closer to the consumer. ML is increasingly utilized at the network edge to strengthen applications, specifically by reducing latency and energy consumption while enhancing resource management and security. To achieve optimal outcomes in terms of efficiency, space, reliability, and safety with minimal power usage, intensive research is needed to develop and apply machine learning algorithms.
This comprehensive examination of prevalent computing paradigms underscores recent advancements resulting from the integration of machine learning and emerging computing models, while also addressing the underlying open research issues and potential future directions. Because this convergence is thought to open up new opportunities for both interdisciplinary research and commercial applications, we present a thorough assessment of the most recent work on the convergence of deep learning with various computing paradigms, including cloud, fog, edge, and IoT. We also draw attention to the main issues and possible future lines of research, and we hope this survey will spur additional study and contributions in this exciting area.

7.
Container clouds are a key supporting technology for 5G edge computing, and 5G's three defining characteristics of high bandwidth, low latency, and massive connectivity place substantial resource pressure on the edge. The container-cloud orchestrator Kubernetes collects only two remaining-resource metrics per Node, CPU and memory, and computes Node priority with uniform weights as its scheduling basis; this mechanism cannot meet the fine-grained resource-scheduling needs of edge computing scenarios. Targeting resource scheduling for 5G edge computing, this paper extends Kubernetes's scheduling metrics with two additional ones, bandwidth and disk, for node filtering and selection, and proposes WSLB, a scheduling mechanism that self-learns metric weights from resource utilization. WSLB dynamically computes an application's resource-weight set from its runtime resource utilization, so the weights adapt to the application's traffic; the learned weight set is then used to compute the priority of candidate Nodes, and the application is deployed on the highest-priority Node. Experimental results show that, compared with the native Kubernetes scheduling policy, WSLB accounts for the bandwidth and disk demands of edge applications and avoids deploying them onto Nodes whose bandwidth or disk resources are saturated; under heavy-load and heterogeneous-request scenarios, it improves cluster resource balance by 10% and overall resource utilization by 2%.
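The weight-self-learning idea can be sketched as a weighted sum over each Node's free resource fractions, where the weights reflect an application's observed utilization profile rather than Kubernetes's uniform weighting. The scoring function and every number below are illustrative assumptions, not the WSLB implementation.

```python
# Hypothetical sketch of WSLB-style node scoring: extend the two-metric
# (CPU, memory) priority with bandwidth and disk, and let the weights come
# from an application's observed utilization instead of a fixed value.

def node_score(free, weights):
    """free: remaining resource fractions on a node; weights: learned per-resource weights."""
    total = sum(weights.values())
    return sum(weights[r] * free[r] for r in weights) / total

# Weights learned from this application's runtime profile: it is bandwidth-heavy.
weights = {"cpu": 0.2, "mem": 0.2, "bandwidth": 0.5, "disk": 0.1}

nodes = {
    "node-a": {"cpu": 0.6, "mem": 0.5, "bandwidth": 0.1, "disk": 0.8},  # bandwidth nearly saturated
    "node-b": {"cpu": 0.4, "mem": 0.4, "bandwidth": 0.7, "disk": 0.6},
}
best = max(nodes, key=lambda n: node_score(nodes[n], weights))
print(best)  # → node-b: despite less free CPU/memory, it has the bandwidth headroom
```

A uniform-weight scorer would prefer node-a here; weighting by the application's bandwidth-heavy profile steers it away from the bandwidth-saturated Node, which is the behavior the abstract attributes to WSLB.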

8.
Smart devices suffer from insufficient storage and computing capacity, so they cannot meet the quality-of-service requirements of compute-intensive and delay-sensitive applications. Edge computing and cloud computing are regarded as effective remedies for these limitations. To utilize edge and cloud resources effectively and deliver good quality of service in terms of delay and service-failure probability, a three-layer computing system framework is first proposed; then, considering the heterogeneity of edge servers and the delay sensitivity of tasks, an efficient resource-scheduling strategy is proposed at the edge layer. The three-layer framework provisions computing resources and transmission delay according to each application's delay sensitivity, ensuring effective use of edge resources and timely task execution. Simulation results validate the effectiveness of the proposed resource-scheduling strategy and show that it outperforms existing conventional methods.

9.
Luo Huilan. Computer Measurement & Control, 2017, 25(12): 150-152, 176
To shorten cloud-computing execution time, improve cloud-computing performance, and to some extent raise the rate at which resource nodes complete tasks successfully, cloud computing resources must be scheduled. Existing cloud resource-scheduling algorithms select suitable scheduling parameters and use the CloudSim simulation toolkit to carry out the scheduling, but they cannot balance load effectively at run time, resulting in poor scheduling balance and large scheduling errors. This paper therefore proposes a cloud resource-scheduling algorithm based on Wi-Fi and the Web. The algorithm first applies adaptive cascade filtering to denoise the cloud-resource data stream, then preprocesses the resources using an ontology-based approach on the denoised data, and finally completes the scheduling with an artificial bee colony algorithm. Experimental results show that the proposed algorithm applies well to cloud resource scheduling and effectively improves resource utilization, demonstrating its practicality and providing reliable support for further research in this field.

10.
The emergent paradigm of fog computing advocates extending computational resources to the edge of the network, so that the transmission latency and bandwidth burden caused by cloud computing can be effectively reduced. Moreover, fog computing can support and facilitate applications that do not cope well with certain features of cloud computing, for instance, applications that require low and predictable latency, and geographically distributed applications. However, fog computing is not a substitute for cloud computing but a powerful complement to it. This paper studies the interplay and cooperation between the edge (fog) and the core (cloud) in the context of the Internet of Things (IoT). We first propose a three-tier system architecture and mathematically characterize each tier in terms of energy consumption and latency. We then perform simulations to evaluate system performance with and without fog involvement. The simulation results show that the three-tier system outperforms the two-tier system on the assessed metrics.
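The two-tier versus three-tier comparison the abstract describes can be mimicked with a back-of-the-envelope latency model: fog-served tasks pay only a LAN round trip, while overflow tasks pay the WAN trip to the cloud. All delay constants and the `fog_capacity` parameter below are made-up illustrations, not the paper's mathematical characterization.

```python
# Illustrative only: average end-to-end latency of a two-tier (device -> cloud)
# versus a three-tier (device -> fog -> cloud) pipeline. Constants are invented.

def two_tier(tasks):
    # every task crosses the WAN to the cloud
    wan_rtt, cloud_proc = 100.0, 5.0   # milliseconds
    return [wan_rtt + cloud_proc for _ in tasks]

def three_tier(tasks, fog_capacity):
    lan_rtt, fog_proc = 5.0, 10.0      # fog tier: close but slower processing
    wan_rtt, cloud_proc = 100.0, 5.0   # cloud tier: far but fast processing
    delays = []
    for i, _ in enumerate(tasks):
        if i < fog_capacity:                      # served at the fog tier
            delays.append(lan_rtt + fog_proc)
        else:                                     # overflow offloaded to the cloud
            delays.append(lan_rtt + wan_rtt + cloud_proc)
    return delays

tasks = list(range(4))
avg = lambda xs: sum(xs) / len(xs)
print(avg(two_tier(tasks)), avg(three_tier(tasks, fog_capacity=3)))  # → 105.0 38.75
```

Even with the WAN penalty on the overflow task, the three-tier average is far lower, which is the qualitative result the abstract reports; the paper's actual model also accounts for energy consumption.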

11.
With the advent of the Internet of Things (IoT) paradigm, the cloud model is unable to offer satisfactory service for latency-sensitive and real-time applications due to high latency and scalability issues. Hence, an emerging computing paradigm known as fog/edge computing arose to offer services close to the data source and to optimize quality-of-service (QoS) parameters such as latency, scalability, reliability, energy, privacy, and data security. This article presents the evolution of computing paradigms from the client-server model to edge computing, along with their objectives and limitations. A state-of-the-art review of cloud computing and the Cloud of Things (CoT) is presented that addresses their techniques, constraints, limitations, and research challenges. Further, we discuss the role and mechanisms of fog/edge computing and the Fog of Things (FoT), along with the need for their amalgamation with CoT. We review several architectures, features, applications, and existing research challenges of fog/edge computing. This comprehensive survey of these computing paradigms offers in-depth knowledge of their various aspects, trends, motivations, visions, and integrated architectures. Finally, experimental tools and future research directions are discussed, in the hope that this study will serve as a stepping stone in the field of emerging computing paradigms.

12.
Grid computing is mainly helpful for executing high-performance computing applications. However, conventional grid resources sometimes fail to offer a dynamic application-execution environment, which increases the rate at which users' job requests are rejected. Integrating emerging virtualization technologies into grid and cloud computing facilitates the provision of dynamic virtual resources in the required execution environment. Resource brokers play a significant role in managing grid and cloud resources and in identifying potential resources that satisfy users' application requests. This paper proposes a semantic-enabled CARE Resource Broker (SeCRB) that provides a common framework for describing grid and cloud resources and discovering them intelligently by considering software, hardware, and quality-of-service (QoS) requirements. The proposed semantic resource-discovery mechanism classifies resources into three categories, viz. exact, high-similarity subsume, and high-similarity plug-in regions. To achieve the necessary user QoS, we include a service-level agreement (SLA) negotiation mechanism that pairs users' QoS requirements with matching resources to guarantee application execution and achieve the desired QoS. Finally, we implement a QoS-based resource-scheduling mechanism that optimally selects resources from the SLA-negotiation-accepted list. The proposed work is simulated and evaluated by submitting real-world bioinformatics and image-processing applications for various test cases. The experimental results show that, for jobs submitted to the resource broker, the job-rejection rate is reduced while the job-success and scheduling rates are increased, making the resource-management system more efficient.

13.
Computational grids that couple geographically distributed resources such as PCs, workstations, clusters, and scientific instruments have emerged as a next-generation computing platform for solving large-scale problems in science, engineering, and commerce. However, application development, resource management, and scheduling in these environments remain a complex undertaking. In this article, we discuss our efforts in developing a resource-management system for scheduling computations on resources distributed across the world with varying quality of service (QoS). Our service-oriented grid computing system, Nimrod-G, manages all operations associated with remote execution, including resource discovery, trading, and scheduling based on economic principles and user-defined QoS requirements. The Nimrod-G resource broker is implemented by leveraging existing technologies such as Globus, and provides new services that are essential for constructing industrial-strength grids. We present the results of experiments using the Nimrod-G resource broker to schedule parametric computations on World Wide Grid (WWG) resources spanning five continents.

14.
To address the long scheduling times, high energy consumption, and low data-transmission accuracy of current cloud data-center resource scheduling, a VR-immersion-based energy-saving scheduling algorithm for virtualized cloud data-center resources is proposed. A resource-sampling model of the cloud data center is constructed, and virtual reality (VR) interactive devices are combined to output, convert, and schedule center resources. Association-rule features of the center resources are extracted, an embedded fuzzy-clustering fusion analysis method is used to reconstruct the resources in three dimensions, and an information-fusion center for virtualized cloud data-center resources is established. Decision-correlation analysis, combined with differentiated fusion features, then schedules the data-center resources, achieving real-time, energy-saving scheduling of virtualized cloud data-center resources. Simulation results show that the method achieves high data-transmission accuracy, short time overhead, and low energy consumption, giving it good application value in center resource scheduling.

15.
Zhou Mosong, Dong Xiaoshe, Chen Heng, Zhang Xingjun. Journal of Software, 2020, 31(12): 3981-3999
Cloud platforms commonly allocate fixed amounts of resources at coarse granularity, which causes resource fragmentation, over-allocation, and low cluster resource utilization. To address this, a fine-grained resource-scheduling method is proposed. The method infers a task's resource demand from the runtime information of similar tasks; it divides the task into several execution phases and matches resources per phase, refining allocation granularity in both allocation time and allocation amount. During resource matching, the compressibility of resources is exploited to further improve utilization and performance, and mechanisms for resource monitoring, policy adjustment, and constraint checking ensure resource-usage efficiency and workload performance. A scheduler implementing this fine-grained method was built on an open-source cloud resource-management platform. Experimental results show that the method refines the granularity of resource matching and effectively improves the resource utilization and performance of cloud platforms, without sacrificing fairness and with acceptable scheduling response times.

16.
To improve the efficiency of resource scheduling on heterogeneous cloud platforms, a task-scheduling scheme based on clustering of both tasks and resources is proposed. The K-means algorithm clusters tasks by their CPU and I/O processing times and clusters resources by their computing capacity; each task cluster is then mapped to a suitable resource cluster. Independent tasks within a cluster are scheduled with the Earliest Deadline First (EDF) algorithm, while dependent tasks are scheduled with a proposed improved Minimum Critical Path (MCP) algorithm. Experimental results show that, in a cloud environment with heterogeneous resources, the scheme achieves short task-execution times and low energy consumption.
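The scheme's two building blocks, K-means clustering of task profiles and EDF ordering within a cluster, can be sketched in a few lines. The tiny 2-means loop, the task profiles, and the deadlines below are all hypothetical, and the improved MCP algorithm for dependent tasks is not shown.

```python
# Toy sketch: (1) cluster tasks by their (CPU time, I/O time) profile with a
# minimal 2-means loop; (2) order tasks inside a cluster by Earliest Deadline
# First. Illustration only; not the paper's system.

def kmeans2(points, iters=10):
    c = [points[0], points[-1]]                       # naive initial centroids
    for _ in range(iters):
        groups = [[], []]
        for p in points:                              # assign to nearest centroid
            d = [sum((a - b) ** 2 for a, b in zip(p, ci)) for ci in c]
            groups[d.index(min(d))].append(p)
        c = [tuple(sum(x) / len(g) for x in zip(*g)) if g else c[i]
             for i, g in enumerate(groups)]           # recompute centroids
    return groups

def edf(tasks):
    # Earliest Deadline First: run the most urgent task first
    return sorted(tasks, key=lambda t: t["deadline"])

# (cpu_ms, io_ms) profiles: two CPU-bound and two I/O-bound tasks
profiles = [(90, 5), (80, 10), (10, 95), (5, 85)]
cpu_bound, io_bound = kmeans2(profiles)
print(sorted(cpu_bound), sorted(io_bound))

queue = [{"id": "t2", "deadline": 30}, {"id": "t1", "deadline": 10}]
print([t["id"] for t in edf(queue)])  # → ['t1', 't2']
```

In the scheme each profile cluster would then be mapped to a resource cluster of matching capacity (e.g., CPU-bound tasks to compute-strong machines) before the per-cluster EDF ordering is applied.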

17.
A Survey of Resource Scheduling Research in Cloud Computing   (cited by 27: 5 self-citations, 22 by others)
Resource scheduling is a major research direction in cloud computing. This survey first investigates and analyzes the state of the art in cloud resource scheduling. It then focuses on scheduling methods that aim to reduce data-center energy consumption, resource-management methods that aim to improve system resource utilization, and economics-based cloud resource-management models; it presents a minimum-energy cloud resource-scheduling model and a minimum-server-count model, and analyzes and compares existing cloud resource-scheduling methods in depth. Finally, it identifies important future directions for cloud resource management: prediction-based resource scheduling, scheduling that trades off energy against performance, resource-management policies and mechanisms for different application workloads, integrated allocation of computing capacity (CPU, memory) and network bandwidth, and multi-objective-optimized resource scheduling, in order to provide useful references for cloud computing research.

18.
Task scheduling plays an important role in cloud computing environments. This paper proposes a dynamic cloud task-scheduling method based on Kriging surrogate models. The performance of a cloud task under different resource combinations is modeled with a Kriging surrogate and then optimized, yielding the optimal resource-allocation scheme for that task; using the cloud platform's API, resources can then be dynamically rescheduled for the task. Task-scheduling performance tests on two engineering-computing applications, conducted on the open-source OpenStack cloud platform, show that the method effectively and dynamically adjusts the resource provisioning of cloud tasks, scheduling the platform's tasks on demand and optimally.
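A Kriging surrogate in this setting is essentially a model fitted to a few measured (resource configuration, performance) points and then queried in place of running the real task. The sketch below stands in for Kriging with a plain RBF interpolant (the noiseless Gaussian-process mean) and uses invented runtime data; it is not the paper's method or its OpenStack integration.

```python
import math

def solve(A, b):
    # naive Gaussian elimination with partial pivoting (fine for tiny systems)
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def fit_surrogate(xs, ys, length=2.0):
    # RBF interpolant: the mean of a noiseless Gaussian process (simplified Kriging)
    k = lambda a, b: math.exp(-((a - b) ** 2) / (2 * length ** 2))
    K = [[k(a, b) for b in xs] for a in xs]
    w = solve(K, ys)
    return lambda x: sum(wi * k(x, xi) for wi, xi in zip(w, xs))

# Measured runtimes (s) of a cloud task at a few vCPU counts (made-up data)
vcpus = [1.0, 2.0, 4.0, 8.0]
runtime = [40.0, 22.0, 13.0, 10.0]
model = fit_surrogate(vcpus, runtime)

# Query the surrogate, not the real task, to pick the smallest configuration
# predicted to meet a 15 s deadline
candidates = [1, 2, 3, 4, 5, 6, 7, 8]
best = min(c for c in candidates if model(c) <= 15.0)
print(best)
```

The optimization step here is a brute-force scan because the candidate set is tiny; the point is that each query costs a model evaluation rather than a full task run, which is what makes surrogate-based dynamic scheduling cheap enough to repeat at run time.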

19.
In today's world, a large-scale migration of applications to fog computing is under way in the information-technology world. The main issue in fog computing is providing enhanced quality of service (QoS). QoS management comprises the various methods used for allocating fog-user applications in the virtual environment and for selecting a suitable method to map virtual resources to physical resources. Effective resource allocation in the fog environment is also a major problem, particularly when the infrastructure is built from lightweight computing devices. In this article, the task-allocation and virtual-machine-placement problems are examined in a single fog computing environment. Experiments show that the proposed framework improves QoS in the fog environment.

20.
When the network incurs large delays in sending and processing data at cloud data centers, most real-time intelligent applications struggle to perform as expected. Fog computing allows these delay-sensitive applications to run on edge devices, known as fog nodes, which are geographically closer to the applications. However, fog nodes typically have limited computing resources and are vulnerable to massive, high-dimensional anomalous-traffic attacks. This paper therefore proposes an improved quasi-recurrent neural network with feature dimensionality reduction and builds a lightweight intrusion-detection model, FR-IQRNN, upon it. High-dimensional attack samples collected at fog nodes are encoded into low-dimensional vectors to reduce redundant features; the recurrent connections of FR-IQRNN capture the temporal dependencies of the low-dimensional vectors while computing in parallel across the time-step and mini-batch dimensions; and an attention mechanism is introduced to strengthen the model's extraction of key features, enabling intrusion detection at fog nodes. On the public dataset UNSW_NB15, the FR-IQRNN model achieves 99.51% accuracy, 99.23% precision, and 99.79% recall, outperforming models such as RNN-IDS and AESVM, and it reaches over 95% training accuracy in only 127.94 s. On the NSL-KDD dataset, FR-IQRNN attains 99.39% accuracy and 99.27% recall and exhibits outstanding robustness.
