Similar Documents
1.
The latest developments in mobile computing technology have increased the computing capabilities of smartphones in terms of storage capacity, features such as multimodal connectivity, and support for customized user applications. Mobile devices are, however, still intrinsically limited by low bandwidth, computing power, and battery lifetime. Therefore, the computing power of computational clouds is tapped on an on-demand basis to mitigate resource limitations in mobile devices. Mobile cloud computing (MCC) is believed to be able to leverage cloud application processing services to alleviate the computing limitations of smartphones. In MCC, application offloading is implemented as a significant software-level solution for sharing the application processing load of smartphones. The challenging aspect of application offloading frameworks is the resource-intensive mechanism of runtime profiling and partitioning of elastic mobile applications, which involves additional computing resource utilization on Smart Mobile Devices (SMDs). This paper investigates the overhead of runtime application partitioning on SMDs by analyzing the additional resource utilization incurred by runtime application profiling and partitioning. We evaluate the mechanism of runtime application partitioning on SMDs in the SmartSim simulation environment and validate the overhead of runtime application profiling by running a prototype application in a real mobile computing environment. Empirical results indicate that additional computing resources are consumed by runtime application profiling and partitioning. Hence, lightweight alternatives with optimal distributed deployment and management mechanisms are needed for accessing the application processing services of computational clouds.

2.
Cloud computing enables access to widespread services and resources in cloud datacenters to mitigate resource limitations in low-potential client devices. The computational cloud is an attractive platform for computational offloading due to the scalability and availability of its resources. Therefore, mobile cloud computing (MCC) leverages the application processing services of computational clouds to enable computation-intensive and ubiquitous mobile applications on smart mobile devices (SMDs). Computational offloading frameworks focus on offloading intensive mobile applications at different granularity levels, which involves a resource-intensive mechanism of application profiling and partitioning at runtime. As a result, the energy consumption cost (ECC) and turnaround time of the application are increased. This paper proposes an active service migration (ASM) framework for computational offloading to cloud datacenters, which employs a lightweight procedure for the deployment of the runtime distributed platform. The proposed framework employs a coarse granularity level and simple development and deployment procedures for computational offloading in MCC. ASM is evaluated by benchmarking a prototype application on Android devices in a real MCC environment. It is found that the turnaround time of the application is reduced by up to 45% and the ECC of the application by up to 33% in ASM-based computational offloading as compared to traditional offloading techniques, which shows the lightweight nature of the proposed framework.

3.
Mobile edge computing (MEC) provides an effective solution to the conflict between computation-intensive applications and resource-constrained mobile devices, but most research on MEC offloading considers only the resource allocation between mobile devices and MEC servers, ignoring the vast computing resources of cloud data centers. To make full use of both cloud and MEC resources, a cloud-edge collaborative task offloading strategy is proposed. First, the task offloading problem across cloud and edge servers is formulated as a game; then the existence and uniqueness of the Nash equilibrium (NE) of this game are proved and a solution to the game is obtained; finally, a game-theory-based two-stage task offloading algorithm is proposed to solve the offloading problem, and its performance is evaluated with several metrics. Simulation results show that the total overhead of the proposed algorithm is 72.8%, 47.9%, and 2.65% lower than that of local execution, cloud-center execution, and MEC-server execution, respectively. The numerical results confirm that the proposed strategy achieves higher energy efficiency and lower task offloading overhead, and scales well as the number of mobile devices increases.
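The abstract does not spell out the game formulation, but the flavor of a best-response iteration for such a cloud-edge offloading game can be sketched as follows. This is a hypothetical Python illustration: the cost functions `local_cost`, `edge_cost`, and `cloud_cost` and all constants are placeholders, not the paper's model.

```python
# Hypothetical best-response iteration for a cloud-edge offloading game.
# Cost models below are illustrative placeholders, not the paper's formulation.

def local_cost(task):
    return task["cycles"] / 1e9             # seconds on the device CPU (assumed 1 GHz)

def edge_cost(task, n_edge_users):
    tx = task["data_mb"] / 10.0             # assumed 10 MB/s uplink
    exec_t = task["cycles"] / (8e9 / max(n_edge_users, 1))  # edge CPU shared by users
    return tx + exec_t

def cloud_cost(task):
    return task["data_mb"] / 2.0 + task["cycles"] / 32e9    # slow WAN, fast cloud

def best_response_offloading(tasks, max_iters=50):
    choices = ["local"] * len(tasks)        # start with everyone executing locally
    for _ in range(max_iters):
        changed = False
        n_edge = choices.count("edge")
        for i, task in enumerate(tasks):
            others_on_edge = n_edge - (1 if choices[i] == "edge" else 0)
            costs = {
                "local": local_cost(task),
                "edge": edge_cost(task, others_on_edge + 1),
                "cloud": cloud_cost(task),
            }
            best = min(costs, key=costs.get)
            if best != choices[i]:
                choices[i] = best
                changed = True
                n_edge = choices.count("edge")
        if not changed:                      # no device wants to deviate
            break
    return choices

tasks = [{"cycles": 2e9, "data_mb": 5}, {"cycles": 8e9, "data_mb": 1}]
print(best_response_offloading(tasks))
```

In a congestion game of this shape, the loop stops when no device can lower its own cost by changing its decision, i.e., at a pure-strategy Nash equilibrium, which is the kind of fixed point the two-stage algorithm is said to compute.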

4.
In recent years, the use of smart mobile applications to facilitate day-to-day activities in various domains and enhance the quality of human life has increased widely. With the rapid development of smart mobile applications, the edge computing paradigm has emerged as a distributed computing solution that serves these applications closer to mobile devices. Since the workloads submitted to smart mobile applications change over time, deciding how to offload and how to provision edge servers for these dynamic workloads is one of the challenging issues in resource management. In this work, we utilize learning automata as a decision-maker to offload the incoming dynamic workloads to edge or cloud servers. In addition, we propose an edge server provisioning approach that uses a long short-term memory (LSTM) model to estimate the future workload and a reinforcement learning technique to make an appropriate scaling decision. The simulation results obtained under real and synthetic workloads demonstrate that the proposed solution increases CPU utilization and reduces execution time and energy consumption compared with the other algorithms.
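As a rough illustration of the provisioning loop this describes, the sketch below substitutes a simple moving-average forecast and a threshold rule for the paper's LSTM predictor and reinforcement-learning scaler; all capacities, thresholds, and load values are assumed.

```python
# Hypothetical edge-server provisioning loop. The paper uses an LSTM forecaster and a
# reinforcement-learning scaler; a moving-average forecast and a threshold policy stand
# in here, purely to show the control flow of predict-then-scale provisioning.
from collections import deque

def forecast(history, window=5):
    recent = list(history)[-window:]
    return sum(recent) / len(recent)          # stand-in for the LSTM prediction

def scaling_decision(predicted_load, servers, capacity_per_server=100.0,
                     upper=0.8, lower=0.3):
    utilization = predicted_load / (servers * capacity_per_server)
    if utilization > upper:
        return servers + 1                     # scale out
    if utilization < lower and servers > 1:
        return servers - 1                     # scale in
    return servers

history = deque(maxlen=50)
servers = 2
for load in [120, 150, 180, 220, 260, 140, 90, 60]:   # incoming workload trace (made up)
    history.append(load)
    servers = scaling_decision(forecast(history), servers)
    print(f"load={load:>4}  provisioned servers={servers}")
```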

5.
Nowadays, mobile devices are becoming the most popular computing devices as their computing capabilities increase rapidly. However, it is still challenging to execute highly sophisticated applications such as 3D video games on mobile devices due to their constrained computational resources. Execution offloading approaches have been proposed to resolve this problem by strengthening mobile devices with the powerful cloud. Unfortunately, the existing offloading approaches are not suitable for 3D video games because of their unique execution characteristics. In this paper, we propose a streaming-based execution offloading framework to enable execution offloading for 3D video games. The experiments show that our framework successfully guarantees 20 frames per second for our benchmark.

6.
Mobile systems, such as smartphones, are becoming the primary platform of choice for a user's computational needs. However, mobile devices still suffer from limited resources such as battery life and processor performance. To address these limitations, a popular approach used in mobile cloud computing is computation offloading, where resource-intensive mobile components are offloaded to more resourceful cloud servers. Prior studies in this area have focused on a form of offloading where only a single server is considered as the offloading site. Because mobile devices can now access multiple cloud providers, it is possible for them to save more energy by offloading energy-intensive components to multiple cloud servers. The method proposed in this paper differentiates the data- and computation-intensive components of an application and performs multisite offloading in a data- and process-centric manner. In this paper, we present a novel model to describe the energy consumption of a multisite application execution and use a discrete-time Markov chain (DTMC) to model fading wireless mobile channels. We adopt a Markov decision process (MDP) framework to formulate the multisite partitioning problem as a delay-constrained, least-cost shortest path problem on a state transition graph. Our proposed Energy-efficient Multisite Offloading Policy (EMOP) algorithm, built on a value iteration algorithm (VIA), finds an efficient solution to the multisite partitioning problem. Numerical simulations show that our algorithm considers the different capabilities of sites to distribute appropriate components such that there is a lower energy cost for data transfer from the mobile device to the cloud. A multisite offloading execution using our proposed EMOP algorithm achieved a greater reduction in the energy consumption of mobile devices than a single-site offloading execution.
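The abstract describes solving the multisite partitioning problem with value iteration over a state transition graph. The following sketch shows plain value iteration on a toy offloading MDP; the states, transitions, and costs are invented for illustration and are not the EMOP formulation or its energy model.

```python
# Hypothetical value iteration over a small offloading MDP, illustrating the kind of
# procedure a VIA-based policy builds on. States, actions, and costs are placeholders.

def value_iteration(states, actions, transition, cost, gamma=1.0, eps=1e-6):
    """transition(s, a) -> list of (prob, next_state); cost(s, a) -> immediate cost."""
    V = {s: 0.0 for s in states}
    policy = {s: None for s in states}
    while True:
        delta = 0.0
        for s in states:
            if not actions(s):                 # terminal state, nothing to decide
                continue
            best = None
            for a in actions(s):
                q = cost(s, a) + gamma * sum(p * V[ns] for p, ns in transition(s, a))
                if best is None or q < best:
                    best, policy[s] = q, a
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V, policy

# Toy example: components c0 then c1 are executed either locally or at a remote site.
states = ["c0", "c1", "done"]
acts = {"c0": ["local", "site_A"], "c1": ["local", "site_B"], "done": []}
trans = {("c0", "local"): [(1.0, "c1")], ("c0", "site_A"): [(1.0, "c1")],
         ("c1", "local"): [(1.0, "done")], ("c1", "site_B"): [(1.0, "done")]}
costs = {("c0", "local"): 5.0, ("c0", "site_A"): 3.0,
         ("c1", "local"): 2.0, ("c1", "site_B"): 4.0}
V, pi = value_iteration(states, lambda s: acts[s], lambda s, a: trans[(s, a)],
                        lambda s, a: costs[(s, a)])
print(pi)   # e.g. {'c0': 'site_A', 'c1': 'local', 'done': None}
```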

7.

In recent years, various studies on OpenStack-based high-performance computing have been conducted. OpenStack combines off-the-shelf physical computing devices into a pool of logical computing resources. This logical resource pool provides computing infrastructure according to the user's request and can be used for infrastructure as a service (IaaS), a cloud computing service model. OpenStack-based cloud computing can provide various computing services for users using virtual machines (VMs). However, intensive computing service requests from a large number of users during large-scale computing jobs may delay job execution. Moreover, VM resources may sit idle and computing resources are wasted if users do not employ them. To resolve the job delays and the waste of computing resources, a variety of studies are required, including computing task allocation, job scheduling, utilization of idle VM resources, and improvements in overall job execution speed as computing service requests increase. Thus, this paper proposes efficient job management of computing services (EJM-CS), by which idle VM resources in OpenStack are utilized and users' computing services are processed in a distributed manner. EJM-CS logically integrates idle VM resources, which have different performance levels, for computing services, and reduces resource waste by utilizing them. EJM-CS takes multiple computing services rather than a single computing service into consideration. EJM-CS determines the job execution order considering workloads and waiting time according to the job priority of the computing service requester and the computing service type, thereby improving overall job execution performance when computing service requests increase.


8.
Mobile cloud computing is an emerging technology that is gaining popularity as a means to extend the capabilities of resource-constrained mobile devices such as smartphones. Mobile cloud computing requires specialized application development models that support computation offloading from a mobile device to the cloud. The computation offloading is performed by offloading an application process, an application component, the entire application, or a clone of the smartphone. Offloading an entire application or a clone of the smartphone to the cloud may raise application piracy issues, which, unfortunately, have not been addressed in the existing literature. This paper presents a piracy control framework for the mobile cloud environment, named Pirax, which prevents mobile applications from executing on unauthenticated devices and cloud resources. Pirax is formally verified using High Level Petri Nets, the Satisfiability Modulo Theories Library, and the Z3 solver. Pirax is implemented on the Android platform and analyzed from security and performance perspectives. The performance analysis results show that Pirax is lightweight and easy to integrate into existing mobile cloud application development models.

9.
Although mobile devices have been considerably upgraded to more powerful terminals, their light weight still imposes intrinsic limitations on their computation capability, storage capacity, and battery lifetime. With its ability to relieve and augment the limited resources of mobile devices, mobile cloud computing has drawn significant research attention, allowing computations to be offloaded and executed on remote, resourceful infrastructure. Nevertheless, circumstances such as mobility, latency, application execution overload, and mobile device state can each affect the offloading decision, which might dictate local execution for some tasks and remote execution for others. We present in this article a novel system model for computation offloading that goes beyond existing works with a smart, centralized, selective, and optimized approach. The proposition consists of (1) a hotspot selection mechanism to minimize the overhead of the offloading evaluation process without jeopardizing the discovery of the optimal processing environment for tasks, (2) a multi-objective optimization model that considers adaptable metrics crucial for minimizing device resource usage and augmenting its performance, and (3) a tailored centralized decision maker that uses a genetic algorithm to intelligently find the optimal distribution of tasks. The scalability, overhead, and performance of the proposed hotspot selection mechanism, and hence its effect on the decision maker and task dissemination, are evaluated. The results show its ability to notably reduce the evaluation cost, while the decision maker in turn maintains an optimal dissemination of tasks. The model is also evaluated, and the experiments demonstrate its advantage over existing models with execution speedup and significant reductions in CPU usage, memory consumption, and energy loss.

10.
谢兵 《计算机应用研究》2020,37(10):3014-3019
Mobile cloud computing can reduce execution delay and improve the energy efficiency of mobile devices by offloading application tasks, but when multiple cloud sites are available, the offloading decision is an NP-hard problem. To address this problem, an energy-efficient computation offloading algorithm is proposed. To achieve multi-objective optimization of execution time and cost under deadline and budget constraints, the algorithm decomposes the optimization into three steps. First, according to the user's preference for the time and cost parameters, a CTTPO algorithm is designed to partition the application into offloaded modules (executed at cloud sites) and non-offloaded modules (executed on the mobile device). Then, to schedule the offloaded modules among multiple cloud sites, an MTS algorithm based on teaching-learning-based optimization is designed to produce the most efficient application schedule. Finally, an ESM algorithm based on dynamic voltage scaling is designed to further reduce the application's execution energy through performance scaling across the sites. Simulation experiments on two kinds of random application structure graphs show that the proposed algorithm outperforms the comparison algorithms in execution efficiency, execution cost, and execution energy consumption.
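As a rough illustration of the preference-weighted partitioning step (not the CTTPO algorithm itself), the sketch below greedily decides for each task whether to offload it, using assumed weights, timings, costs, deadline, and budget; constraint handling is kept deliberately naive.

```python
# Hypothetical preference-weighted partitioning of application tasks between the mobile
# device and a cloud site, trading off time and cost under deadline and budget limits.

def partition(tasks, w_time=0.6, w_cost=0.4, deadline=10.0, budget=5.0):
    """Greedy sketch: offload a task if the weighted score improves and the running
    deadline/budget constraints still hold."""
    plan, total_time, total_cost = {}, 0.0, 0.0
    for name, t in tasks.items():
        local_score = w_time * t["local_time"]                   # local execution costs nothing
        remote_score = w_time * t["remote_time"] + w_cost * t["remote_cost"]
        offload = remote_score < local_score
        time_used = t["remote_time"] if offload else t["local_time"]
        cost_used = t["remote_cost"] if offload else 0.0
        if total_time + time_used > deadline or total_cost + cost_used > budget:
            offload = False                                      # fall back to local execution
            time_used, cost_used = t["local_time"], 0.0
        plan[name] = "cloud" if offload else "mobile"
        total_time += time_used
        total_cost += cost_used
    return plan, total_time, total_cost

tasks = {"decode": {"local_time": 4.0, "remote_time": 1.5, "remote_cost": 2.0},
         "render": {"local_time": 3.0, "remote_time": 2.5, "remote_cost": 0.5}}
print(partition(tasks))
```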

11.
The handling of complex tasks in IoT applications becomes difficult due to the limited availability of resources in most IoT devices. IoT tasks with large processing and storage demands therefore need to be offloaded to the resource-rich edge and cloud. In edge computing, factors such as arrival rate, the nature and size of tasks, network conditions, platform differences, and the energy consumption of IoT end devices affect the choice of an optimal offloading mechanism. A model is developed to make a dynamic decision about offloading tasks to the edge or cloud versus local execution by computing the expected time, energy consumption, and processing capacity. This dynamic decision is proposed as the processing-capacity-based decision mechanism (PCDM), which makes offloading decisions for new tasks by ranking all available devices by processing capacity. The target devices are then selected for task execution with respect to energy consumption, task size, and network time. PCDM is developed in the EdgeCloudSim simulator for four applications from categories that differ in time sensitivity, task size, and energy consumption. The PCDM offloading methodology is evaluated through simulations and compared with the multi-criteria decision support mechanism for IoT offloading (MEDICI). Strategies based on task weighting, termed PCDM-AI, PCDM-SI, PCDM-AN, and PCDM-SN, are developed and compared against five existing baseline strategies, namely IoT-P, Edge-P, Cloud-P, Random-P, and Probabilistic-P. These nine strategies are also developed using MEDICI with the same parameters as PCDM. Finally, all the approaches using PCDM and MEDICI are compared against each other for the four applications. From the simulation results, it is inferred that a different approach performs best for each application in terms of response time, total tasks executed, device energy consumption, and total application energy consumption.
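A minimal sketch of a processing-capacity-based decision in the spirit of PCDM is shown below; the device capacities, load figures, task parameters, and uplink rate are assumptions for illustration, not values or formulas from the paper.

```python
# Hypothetical processing-capacity-based offloading decision in the spirit of PCDM.
# Device capacities and task parameters are illustrative, not taken from the paper.

def expected_time(task, device, uplink_mbps=20.0):
    network = 0.0 if device["tier"] == "iot" else task["data_mb"] * 8 / uplink_mbps
    return network + task["mi"] / device["mips"]      # transfer time + execution time

def decide(task, devices):
    """Rank candidate devices by spare processing capacity, then pick the one with the
    lowest expected completion time among those that can still take work."""
    ranked = sorted(devices, key=lambda d: d["mips"] * (1 - d["load"]), reverse=True)
    candidates = [d for d in ranked if d["load"] < 0.9] or ranked[:1]
    return min(candidates, key=lambda d: expected_time(task, d))

devices = [
    {"name": "iot-node", "tier": "iot",   "mips": 500,   "load": 0.40},
    {"name": "edge-1",   "tier": "edge",  "mips": 8000,  "load": 0.70},
    {"name": "cloud",    "tier": "cloud", "mips": 40000, "load": 0.10},
]
task = {"mi": 4000, "data_mb": 2.0}                    # million instructions, input size
print(decide(task, devices)["name"])
```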

12.
何远德  黄奎峰 《计算机应用研究》2020,37(6):1633-1637,1651
Mobile cloud computing can improve the energy efficiency of mobile devices and the execution delay of applications through computation offloading. However, when multiple cloud services can be selected, the offloading decision is an NP-hard problem. To solve this problem, a genetic algorithm is proposed to find the optimal application partitioning decision for computation offloading. During population initialization, the algorithm combines predefined and random chromosome generation, reducing the proportion of invalid chromosomes. In addition, a fitness function based on the Hamming distance is designed for the predefined reserved population, better measuring the differences between chromosomes. In the crossover phase, inbreeding and crossbreeding are used to enrich the population. The algorithm reduces the generation of invalid solutions through modified genetic operators and obtains an optimal feasible partitioning solution at a more reasonable time cost. Simulation experiments on task graphs of real mobile applications are used to evaluate the algorithm's efficiency. The evaluation shows that the designed genetic algorithm outperforms the comparison algorithms in application execution energy, execution time, and combined weighted cost.
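The sketch below shows a generic genetic algorithm for a binary offloading decision with a Hamming-distance diversity term in the fitness, in the spirit of the approach described; the cost model, weights, and operators are illustrative assumptions rather than the paper's exact design.

```python
# Hypothetical genetic-algorithm sketch for a binary offloading/partitioning decision
# (1 = offload the task, 0 = run locally). Costs and the diversity weighting are made up.
import random

LOCAL = [5.0, 3.0, 8.0, 2.0, 6.0]          # per-task local execution cost (made up)
REMOTE = [2.0, 4.0, 3.0, 2.5, 1.0]         # per-task offloaded cost incl. transfer (made up)

def cost(chrom):
    return sum(r if g else l for g, l, r in zip(chrom, LOCAL, REMOTE))

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def fitness(chrom, population):
    diversity = sum(hamming(chrom, other) for other in population) / len(population)
    return -cost(chrom) + 0.1 * diversity    # lower cost and higher diversity are better

def evolve(pop_size=20, generations=50, n_tasks=5):
    pop = [[random.randint(0, 1) for _ in range(n_tasks)] for _ in range(pop_size)]
    for _ in range(generations):
        pop = sorted(pop, key=lambda c: fitness(c, pop), reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            point = random.randrange(1, n_tasks)
            child = a[:point] + b[point:]                  # one-point crossover
            if random.random() < 0.1:                      # mutation
                i = random.randrange(n_tasks)
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    return min(pop, key=cost)

best = evolve()
print(best, cost(best))
```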

13.
刘伟  黄宇成  杜薇  王伟 《软件学报》2020,31(6):1889-1908
The continuous convergence of cloud computing and the mobile Internet has driven the emergence and development of mobile cloud computing, but it struggles to meet terminal applications' demands on bandwidth and latency. Mobile edge computing provides computing and storage capabilities at the network edge close to users; through computation offloading, terminal tasks are migrated to edge servers for execution, which can effectively reduce application latency and save terminal energy. However, most existing work on task offloading in mobile edge environments considers scenarios with a single mobile terminal and unlimited edge server resources, which is of limited use in practice. Therefore, for the task offloading problem under constrained edge server resources, a multi-user serial task dynamic offloading strategy (MSTDOS) is proposed. The strategy takes application completion time and the energy consumption of mobile terminals as evaluation metrics, follows the first-come-first-served principle, and uses a chemical reaction optimization algorithm to solve the problem; it fully considers the competition among multiple users' requests for server resources and dynamically adjusts the selection strategy to make near-optimal offloading decisions for applications. Simulation results show that the MSTDOS strategy achieves better application performance than existing algorithms.

14.
In recent years, with the popularization of smart mobile devices and the development of wireless communication technologies such as 5G, edge computing has been proposed as an emerging computing paradigm that extends and complements the traditional cloud computing model. The basic idea of edge computing is to offload computation tasks generated on mobile devices to the network edge rather than to the cloud, so as to meet the low-latency requirements of computation-intensive applications such as real-time online games and augmented reality. Computation task offloading in edge computing is a key research problem...

15.
For the last few years, academia and research organizations have been continuously investigating and resolving the security and privacy issues of the mobile cloud computing environment. An additional consideration in designing security services for the mobile cloud computing environment is the resource-constrained mobile device: executing computationally intensive security services on a mobile device drains its battery quickly. In this regard, the study presents a novel energy-efficient block-based sharing scheme that provides confidentiality and integrity services for mobile users in the cloud environment. The block-based sharing scheme is compared with the existing schemes on the basis of energy consumption, CPU utilization, memory utilization, encryption time, decryption time, and turnaround time. The experimental results show that the block-based sharing scheme consumes less energy, reduces resource utilization, improves response time, and provides better security services to mobile users in the presence of fully untrusted cloud server(s) as compared to the existing security schemes.

16.
The cloudlet is a novel computing paradigm, introduced into the mobile cloud service framework, which moves computing resources closer to mobile users, aiming to alleviate the communication delay between mobile devices and the cloud platform and to optimize the energy consumption of mobile devices. Currently, mobile applications, modeled as workflows, tend to be complicated and computation-intensive. Such workflows are required to be offloaded to the cloudlet or the remote cloud platform for execution. However, it is still a key challenge to determine the offloading solution for deadline-constrained workflows in the cloudlet-based mobile cloud, since a cloudlet often has limited resources. In this paper, a multiobjective computation offloading method, named MCO, is proposed to address the above challenge. Technically, an energy consumption model for mobile devices is established in the cloudlet-based mobile cloud. Then, a corresponding computation offloading method, based on an improved Nondominated Sorting Genetic Algorithm II, is designed to achieve the goal of energy saving for all the mobile devices while satisfying the deadline constraints of the workflows. Finally, extensive experimental evaluations are conducted to demonstrate the efficiency and effectiveness of our proposed method.
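A core building block of NSGA-II-style methods is the Pareto-dominance test used in non-dominated sorting. The sketch below applies it to a handful of candidate offloading plans scored by two objectives; the plan names and numbers are made up for illustration and are not the paper's workflows.

```python
# Hypothetical sketch of the Pareto-dominance test and non-dominated filtering that
# NSGA-II-style methods rely on. Each candidate offloading plan is scored by two
# objectives (energy, makespan), both to be minimized.

def dominates(a, b):
    """Plan a dominates plan b if it is no worse in every objective and strictly better
    in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(plans):
    """Return the Pareto front: plans not dominated by any other plan."""
    front = []
    for name, score in plans.items():
        if not any(dominates(other, score) for o_name, other in plans.items()
                   if o_name != name):
            front.append(name)
    return front

# (energy in J, makespan in s) for a few candidate workflow offloading plans (made up)
plans = {
    "all-local":    (42.0, 18.0),
    "all-cloudlet": (15.0, 12.0),
    "mixed-A":      (11.0, 14.0),
    "mixed-B":      (16.0, 13.0),   # dominated by "all-cloudlet"
}
print(non_dominated(plans))          # ['all-cloudlet', 'mixed-A']
```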

17.
Cloud computing is an emerging computing paradigm that offers on-demand, flexible, and elastic computational and storage services to end-users. Small and medium-sized business organizations with limited budgets can enjoy the scalable services of the cloud. However, the migration of organizational data to the cloud raises security and privacy issues. To keep the data confidential, the data should be encrypted using a cryptographic method that provides fine-grained and efficient access to the uploaded data without affecting the scalability of the system. In the mobile cloud computing environment, the selected scheme should be computationally secure and must be able to offload computationally intensive security operations to the cloud in a trusted mode, because of the resource-constrained mobile devices. The existing manager-based re-encryption and cloud-based re-encryption schemes are computationally secure and able to offload the computationally intensive data access operations to a trusted entity/cloud. Despite this offloading, the mobile user still performs computationally intensive pairing-based encryption and decryption operations using the limited capabilities of the mobile device. In this paper, we propose a Cloud-Manager-based Re-encryption Scheme (CMReS) that combines the characteristics of manager-based re-encryption and cloud-based re-encryption to provide better security services with a minimal processing burden on the mobile device. The experimental results indicate that the proposed cloud-manager-based re-encryption scheme shows significant improvement in turnaround time, energy consumption, and resource utilization on the mobile device as compared to existing re-encryption schemes.

18.
Mobile cloud computing can offload tasks from mobile devices to the cloud to augment device computing capability, and realizing an energy-efficient computation offloading mechanism is the main current challenge. To solve this problem, with the goals of reducing mobile device energy consumption and application completion time, the computation offloading problem is formalized as an energy-efficiency cost minimization problem subject to task ordering and deadline constraints, and a dynamic energy-efficiency-aware computation offloading algorithm is proposed. The algorithm consists of three sub-algorithms: offloading selection, clock frequency control, and transmission power allocation. Experimental results show that, by optimally adjusting the mobile device's CPU clock frequency during local computation and adaptively allocating transmission power during cloud computation, the new algorithm can effectively reduce the energy-efficiency cost of application execution while ensuring that the constraints are satisfied and improving execution efficiency.
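A commonly used textbook energy model captures the trade-off the three sub-algorithms work with: local energy grows roughly with the square of the CPU frequency, while offloading pays a transmission cost set by the transmit power through the achievable rate. The sketch below uses assumed constants and is not the paper's exact formulation.

```python
# Hypothetical energy/time model for the local-vs-offload trade-off under frequency
# scaling and transmit-power allocation. All constants are illustrative assumptions.
import math

KAPPA = 1e-27        # effective switched capacitance (assumed)
BANDWIDTH = 1e6      # Hz (assumed)
NOISE = 1e-9         # W (assumed)
GAIN = 1e-6          # channel gain (assumed)

def local(cycles, freq_hz):
    """Energy (J) and time (s) for running `cycles` CPU cycles locally at freq_hz."""
    return KAPPA * cycles * freq_hz ** 2, cycles / freq_hz

def offload(bits, tx_power_w):
    """Energy (J) and time (s) for uploading `bits` at the given transmit power,
    with the rate given by the Shannon capacity of the assumed channel."""
    rate = BANDWIDTH * math.log2(1 + tx_power_w * GAIN / NOISE)   # bits per second
    t = bits / rate
    return tx_power_w * t, t

for f in (0.5e9, 1.0e9, 1.5e9):
    e, t = local(2e9, f)
    print(f"local   f={f / 1e9:.1f} GHz: E={e:.3f} J, T={t:.2f} s")
for p in (0.1, 0.5, 1.0):
    e, t = offload(8e6, p)
    print(f"offload P={p:.1f} W:     E={e:.3f} J, T={t:.2f} s")
```

The printout makes the trade-off visible: raising the CPU frequency shortens local execution but increases energy quadratically, while raising transmit power shortens the upload but can raise transmission energy, which is exactly the space a frequency-control plus power-allocation algorithm searches.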

19.
Cloud computing allows execution and deployment of different types of applications, such as interactive databases or web-based services, which require distinct types of resources. These applications lease cloud resources for a considerably long period and usually occupy various resources to maintain a high quality of service (QoS). On the other hand, general big data batch processing workloads are less QoS-sensitive and require massively parallel cloud resources for short periods. Despite the elasticity of cloud computing, fine-scale characteristics of cloud-based applications may cause temporarily low resource utilization in cloud computing systems, while process-intensive, highly utilized workloads suffer from performance issues. Therefore, utilization-efficient scheduling of heterogeneous workloads is a challenging issue for cloud owners. In this paper, addressing the impact of workload heterogeneity on low utilization of the cloud computing system, a joint resource allocation scheme for cloud applications and processing jobs is presented to enhance cloud utilization. The main idea is to schedule processing jobs and cloud applications jointly in a preemptive way. However, utilization-efficient resource allocation requires exact modeling of workloads. So, first, a novel methodology to model the processing jobs and other cloud applications is proposed. Such jobs are modeled as a collection of parallel and sequential tasks in a Markovian process, which enables us to analyze and calculate the resources required to serve the tasks efficiently. The next step uses the proposed model to develop a preemptive scheduling algorithm for the processing jobs in order to improve resource utilization and its associated costs in the cloud computing system. Accordingly, a preemption-based resource allocation architecture is proposed to effectively and efficiently utilize the idle reserved resources for the processing jobs. Then, performance metrics such as service time for the processing jobs are investigated. The accuracy of the proposed analytical model and scheduling analysis is verified through simulations and experimental results, which also shed light on the achievable QoS level for the preemptively allocated processing jobs.
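A minimal sketch of the preemption-based backfilling idea: batch processing jobs occupy idle reserved cores and are requeued as soon as the QoS-sensitive applications reclaim them. Core counts, job sizes, and the demand trace are assumed for illustration and are not the paper's model.

```python
# Hypothetical sketch of preemptive use of idle reserved capacity: batch jobs fill
# whatever the QoS-sensitive applications are not using and are preempted (requeued)
# as soon as the applications reclaim the capacity.
from collections import deque

TOTAL_CORES = 16

def step(app_demand, batch_queue, running_batch):
    """One scheduling tick: preempt batch work if needed, then backfill idle cores."""
    free = TOTAL_CORES - app_demand
    # Preempt batch jobs (most recently started first) until they fit in the free capacity.
    while sum(c for _, c in running_batch) > free:
        job = running_batch.pop()
        batch_queue.appendleft(job)             # requeue the preempted job
    # Backfill: start queued batch jobs that fit into the remaining idle cores.
    idle = free - sum(c for _, c in running_batch)
    while batch_queue and batch_queue[0][1] <= idle:
        job = batch_queue.popleft()
        running_batch.append(job)
        idle -= job[1]
    return running_batch

batch_queue = deque([("job-A", 4), ("job-B", 6), ("job-C", 2)])   # (name, cores)
running = []
for demand in (6, 10, 14, 8):                   # QoS application demand per tick (made up)
    running = step(demand, batch_queue, running)
    print(f"app={demand:>2} cores  batch running={[j for j, _ in running]}")
```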

20.
张展  张宪琦  左德承  付国栋 《软件学报》2020,31(9):2691-2708
Object tracking algorithms have been widely applied in many fields; however, real-time and power consumption constraints make it difficult to deploy algorithms based on deep learning models on mobile terminal devices. This paper combines edge computing technology and studies deployment strategies for object tracking algorithms on mobile devices from the perspective of application deployment optimization. Based on an analysis of the characteristics of object tracking applications, the features of mobile devices, and the edge-cloud network architecture, an edge-computing-oriented deployment strategy for object tracking applications is proposed. A task partitioning strategy offloads the computation of the object tracking application to the edge cloud in a reasonable way, and an information fusion strategy analyzes and fuses the computation results; in addition, motion detection is used to further reduce the computing load and power consumption of terminal nodes. Comparative experiments on different deployment strategies show that, compared with local computation, the proposed strategy significantly reduces task response time, and compared with fully offloading to the edge cloud, it reduces the processing time of the same computation tasks.
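A minimal sketch of the motion-detection gate described above: a frame is offloaded to the edge cloud for tracking only when it differs enough from the previous one, otherwise the last tracking result is reused. The frame data, threshold, and the `edge-tracking result` placeholder are assumptions, not the paper's actual pipeline.

```python
# Hypothetical motion-detection gate in front of edge offloading. Frames are tiny
# grayscale arrays and the threshold is arbitrary; real pipelines would use camera
# frames and a proper tracker on the edge cloud.

def motion_score(prev, curr):
    """Mean absolute pixel difference between two grayscale frames."""
    diffs = [abs(a - b) for row_p, row_c in zip(prev, curr) for a, b in zip(row_p, row_c)]
    return sum(diffs) / len(diffs)

def process_stream(frames, threshold=5.0):
    last_result, prev = None, None
    for i, frame in enumerate(frames):
        if prev is None or motion_score(prev, frame) > threshold:
            last_result = f"edge-tracking result for frame {i}"   # offload to edge cloud
            prev = frame
        # else: little motion, reuse last_result and skip the offload entirely
        print(f"frame {i}: {last_result}")

static = [[10] * 4 for _ in range(4)]
moved = [[10] * 4 for _ in range(3)] + [[60] * 4]
process_stream([static, static, moved, moved])
```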


