Similar Documents
 20 similar documents found (search time: 15 ms)
1.
In this paper, we consider the QoS‐aware replica placement problem. Although there has been much research on the problem, most approaches focus on average system performance and ignore quality assurance. However, quality assurance is crucial, especially in heterogeneous environments. To fill this research gap, we propose four heuristic algorithms that determine the locations of replicas so as to satisfy the quality requirements imposed by data requests. Three of the algorithms are greedy heuristics called Greedy‐Cover, Cover‐Partition and Multi‐Source; the fourth is based on the Simulated Annealing technique. Our experimental results indicate that Greedy‐Cover and the Simulated Annealing‐based algorithm can find effective solutions efficiently. Copyright © 2011 John Wiley & Sons, Ltd.
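
A minimal sketch of a Greedy-Cover-style heuristic under a simple latency-bound QoS model (the data structures and the latency model are assumptions; the paper's exact algorithm is not reproduced): repeatedly pick the candidate site whose replica satisfies the most still-uncovered requests within their latency bounds.

```python
def greedy_cover(requests, sites, latency, bound):
    """Pick replica sites until every request is served within its QoS bound.

    requests: iterable of request ids
    sites: iterable of candidate replica sites
    latency[(site, req)]: network latency from a site to the request's client
    bound[req]: maximum latency the request tolerates
    Returns the chosen set of sites, or None if some request cannot be covered.
    """
    uncovered = set(requests)
    chosen = set()
    while uncovered:
        # site that satisfies the largest number of still-uncovered requests
        best = max(
            (s for s in sites if s not in chosen),
            key=lambda s: sum(1 for r in uncovered
                              if latency.get((s, r), float("inf")) <= bound[r]),
            default=None,
        )
        if best is None:
            return None
        covered = {r for r in uncovered
                   if latency.get((best, r), float("inf")) <= bound[r]}
        if not covered:          # remaining requests cannot be served by any site
            return None
        chosen.add(best)
        uncovered -= covered
    return chosen
```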

2.
To address the problem that parallel tasks in cloud computing environments are prone to failing to complete because of resource failures, and that dynamically provisioned cloud resources have low reliability, a failure-recovery mechanism is first introduced. Because failure behaviour changes dynamically when failures are recoverable, a two-parameter Weibull distribution is used to describe the local characteristics of resource-node and communication-link failures over different time periods. Then, based on an analysis of the various interaction patterns among parallel tasks, a resource reliability evaluation model based on variable-parameter failure rules is proposed. Finally, the model is incorporated into a particle swarm optimization algorithm to obtain R-PSO, a reliability-aware particle swarm resource scheduling algorithm with adaptive inertia weight, so that the reliability of candidate resources is fully considered when computing fitness. Simulation results show that, with appropriate failure-recovery parameters, the proposed R-PSO algorithm substantially improves cloud service reliability while incurring only a small additional failure-recovery overhead.
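
As a rough illustration of the reliability term described above (the function names, the way the Weibull survival probability enters the fitness, and the constants are assumptions for the sketch, not the paper's R-PSO formulation):

```python
import math

def weibull_reliability(t, shape, scale):
    """Probability that a resource survives (no failure) for duration t,
    under a two-parameter Weibull failure model R(t) = exp(-(t/scale)^shape)."""
    return math.exp(-((t / scale) ** shape))

def schedule_fitness(exec_time, resources, alpha=0.5):
    """Toy fitness combining makespan and reliability of the chosen resources.

    resources: list of (shape, scale) Weibull parameters, one per node/link used.
    Lower is better: a short schedule on unreliable resources is penalized.
    """
    reliability = 1.0
    for shape, scale in resources:
        reliability *= weibull_reliability(exec_time, shape, scale)
    # weighted trade-off between execution time and unreliability
    return alpha * exec_time + (1 - alpha) * (1.0 - reliability)

# example: a 10-hour schedule spanning two nodes and one link
print(schedule_fitness(10.0, [(1.2, 100.0), (0.9, 80.0), (1.5, 200.0)]))
```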

3.
Optimizing virtual machine (VM) placement is an important way for data centers to reduce energy consumption. Most existing VM placement algorithms reduce energy consumption noticeably, but excessive VM consolidation and migration cause significant degradation of system performance. To address this problem, a VM placement optimization model is first constructed. A two-stage iterative heuristic is then proposed to solve the model: the first stage, based on the First-Fit Decreasing bin-packing algorithm, is a VM placement optimization algorithm whose objective is to minimize the number of hosts; the second stage is an online VM migration selection algorithm whose objective is to minimize the number of VMs to be migrated. Experimental results show that the algorithm effectively reduces energy consumption, with a low service-level agreement (SLA) violation rate and good time performance.
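
The first stage builds on First-Fit Decreasing bin packing; a minimal single-resource sketch (CPU-only demands and homogeneous hosts are simplifying assumptions, not the paper's full model):

```python
def first_fit_decreasing(vm_demands, host_capacity):
    """Place VMs (by CPU demand) onto as few hosts as possible.

    vm_demands: {vm_id: cpu_demand}
    host_capacity: CPU capacity of a single (homogeneous) host
    Returns a list of hosts, each a dict {vm_id: demand}.
    """
    hosts = []  # each host: {"free": remaining capacity, "vms": {...}}
    # sort VMs by decreasing demand, then place each into the first host that fits
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for host in hosts:
            if host["free"] >= demand:
                host["vms"][vm] = demand
                host["free"] -= demand
                break
        else:                      # no existing host fits: open a new one
            hosts.append({"free": host_capacity - demand, "vms": {vm: demand}})
    return [h["vms"] for h in hosts]

print(first_fit_decreasing({"a": 6, "b": 5, "c": 4, "d": 3, "e": 2}, host_capacity=10))
```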

4.
Cloud computing has grown to become a popular distributed computing service offered by commercial providers. More recently, edge and fog computing resources have emerged on the wide-area network as part of Internet of Things (IoT) deployments. These three resource abstraction layers are complementary and offer distinctive benefits. Scheduling applications on clouds has been an active area of research, with workflow and data flow models offering a flexible abstraction to specify applications for execution. However, the application programming and scheduling models for edge and fog are still maturing and can benefit from lessons learned on cloud resources. At the same time, there is also value in using these resources cohesively for application execution. In this article, we offer a taxonomy of concepts essential for specifying and solving the problem of scheduling applications on edge, fog, and cloud computing resources. We first characterize the resource capabilities and limitations of these infrastructures, and offer a taxonomy of application models, quality-of-service constraints and goals, and scheduling techniques, based on a literature review. We also tabulate key research prototypes and papers using this taxonomy. This survey benefits developers and researchers on these distributed resources in designing and categorizing their applications, selecting the relevant computing abstraction(s), and developing or selecting the appropriate scheduling algorithm. It also highlights gaps in the literature where open problems remain.

5.

Fog computing is considered a formidable next-generation complement to cloud computing. Nowadays, in light of the dramatic rise in the number of IoT devices, several problems have arisen in cloud architectures. By introducing fog computing as an intermediate layer between the user devices and the cloud, one can extend cloud computing's processing and storage capability. Offloading can be utilized as a mechanism that transfers computations, data, and energy consumption from the resource-limited user devices to resource-rich fog/cloud layers to achieve an optimal experience in the quality of applications and improve system performance. This paper provides a systematic and comprehensive study of current and recent work on fog offloading mechanisms. The pros and cons of each selected paper are explored and analyzed to identify the present potential and open issues of offloading mechanisms in fog environments. We classify offloading mechanisms in a fog system into four groups: computation-based, energy-based, storage-based, and hybrid approaches. Furthermore, this paper explores offloading metrics, applied algorithms, and evaluation methods related to the chosen offloading mechanisms in fog systems. Additionally, the open challenges and future trends derived from the reviewed studies are discussed.


6.
Ciphertext-policy attribute-based encryption provides one-to-many access control for cloud-storage-based IoT systems, but existing schemes suffer from problems such as high overhead and coarse granularity. To address this, a lightweight attribute-based encryption scheme supporting computation outsourcing is proposed by combining fog computing. The scheme shortens key and ciphertext lengths, reducing client-side storage overhead; it offloads part of the computation to fog nodes, improving encryption and decryption efficiency; and it offers richer policy expressiveness while enabling fast verification of the correctness of outsourced decryption…

7.
Data centers now play an important role in modern IT infrastructures. Related research shows that the energy consumption of data center cooling systems has recently increased significantly. There is also strong evidence that high temperatures in a data center lead to higher hardware failure rates, and thus an increase in maintenance costs. This paper addresses thermal-aware workload placement for data centers. We propose an analytical model that describes data center resources with heat transfer properties and workloads with thermal features. Two thermal-aware task scheduling algorithms, TASA and TASA-B, are then presented, which aim to reduce temperatures and cooling-system power consumption in a data center. A simulation study is carried out to evaluate the performance of the proposed algorithms. Simulation results show that our algorithms can significantly reduce temperatures in data centers at the cost of a tolerable decline in system performance.
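
A minimal sketch of thermal-aware placement in the spirit described above (the temperature model and the hottest-task-first ordering are illustrative assumptions, not the TASA formulation): assign each task to the node whose temperature stays lowest after accepting it, skipping nodes that would exceed the thermal threshold.

```python
def thermal_aware_assign(tasks, nodes, t_max):
    """Greedy thermal-aware placement.

    tasks: list of (task_id, heat), where heat approximates the temperature rise
           the task causes on the node that runs it.
    nodes: {node_id: current_temperature}
    t_max: thermal threshold a node must not exceed.
    Returns {task_id: node_id}; tasks that fit nowhere are left unassigned.
    """
    assignment = {}
    temps = dict(nodes)
    # hottest-task-first tends to spread heat across nodes
    for task, heat in sorted(tasks, key=lambda t: -t[1]):
        candidates = [n for n, temp in temps.items() if temp + heat <= t_max]
        if not candidates:
            continue              # defer: no node can take it without overheating
        coolest = min(candidates, key=temps.get)
        assignment[task] = coolest
        temps[coolest] += heat
    return assignment

print(thermal_aware_assign([("t1", 4), ("t2", 3), ("t3", 6)],
                           {"n1": 20, "n2": 24}, t_max=30))
```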

8.
Fog computing, or a fog network, is a decentralized network placed between the data source and the cloud to minimize network latency and thus support in-time service delivery for Internet of Things (IoT) applications. However, placing the computational tasks of IoT applications in a fog infrastructure is challenging. The state of the art focuses on quality-of-service and quality-of-experience (QoE) based application placement. In this article, we design a hierarchical fuzzy based QoE-aware application placement strategy for mapping IoT applications to compatible instances in the fog network. The proposed method considers user application expectation parameters and the metrics of available fog instances, and assigns application priorities using hierarchical fuzzy logic. The method then uses the Hungarian maximization assignment algorithm to map applications to compatible instances. Simulation results show better performance than the existing baseline algorithms in terms of resource gain (RG), processing time reduction ratio (PTRR), and network relaxation ratio. With 10 applications in the fog network, the proposed method shows 70.00%, 22.44%, and 37.83% improvement in RG, and 28.46%, 37.5%, and 23.07% improvement in PTRR, compared with the QoE-aware, randomized, and FIFO algorithms, respectively.
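
The final mapping step is a maximization assignment; a small sketch using SciPy's Hungarian solver (the score matrix here is a made-up stand-in for the fuzzy-priority-weighted suitability scores, which the paper computes differently):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# rows: applications, columns: fog instances;
# score[i, j] is a suitability score (higher is better), e.g. derived from the
# fuzzy priority of application i and the capacity/latency of instance j.
score = np.array([
    [8.0, 6.5, 3.0],
    [5.0, 9.0, 4.5],
    [7.0, 2.0, 6.0],
])

# Hungarian algorithm in "maximize total score" mode
apps, instances = linear_sum_assignment(score, maximize=True)
for a, i in zip(apps, instances):
    print(f"application {a} -> instance {i} (score {score[a, i]})")
```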

9.
Wireless sensor networks are used in many applications in military, ecology, health, and other areas. These applications often involve the monitoring of sensitive information, making security one of the most important aspects to consider in this field. However, most protocols are optimized for the limited capabilities of sensor nodes and the application-specific nature of the networks, yet remain vulnerable to serious attacks. In this paper, a Secure Energy and Reliability Aware data gathering protocol (SERA) is proposed, which provides energy efficiency and data delivery reliability as well as security. The proposed protocol's security was confirmed by a formal verification carried out using the AVISPA tool and by analysis of the most common network layer attacks, such as selective forwarding, sinkhole, Sybil, wormhole, HELLO flood, and acknowledgment spoofing attacks. Additionally, a visual simulation environment was developed to evaluate the performance of the proposed protocol.

10.
The Non-Affinity Aware Grouping based resource Allocation (NAGA) method for the General VM Placement (GP) problem allows (1) some VMs to be co-located on the same PM even though they are required to be placed on distinct PMs, and (2) some VMs to be dispersed across distinct PMs even though they are required to be co-located on the same PM, leading to serious performance degradation for applications running over multiple VMs in cloud computing. In this work, we study an Affinity Aware VM Placement (AAP) problem and propose a Joint Affinity Aware Grouping and Bin Packing (JAGBP) method to remedy the deficiency of the NAGA method. We first introduce VM affinity to identify affinity relationships among VMs that must be placed with a particular placement pattern, such as co-location or dispersed placement, and formulate the AAP problem. We then propose an affinity-aware resource scheduling framework, methods to obtain and identify the affinity relationships between VMs, and the JAGBP method. Lastly, we present holistic evaluation experiments to validate the feasibility and evaluate the performance of the proposed methods. The results demonstrate the significance of the introduced affinity and the effectiveness of the JAGBP method.
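
A simplified sketch of the grouping-plus-bin-packing idea (one possible reading; the grouping, single-resource capacity model, and constraint handling are assumptions rather than the JAGBP algorithm itself): VMs with co-location affinity are merged into one item that is packed as a unit, and anti-affinity is enforced by refusing to put two conflicting VMs on the same PM.

```python
def place_with_affinity(demands, colocate_groups, anti_affinity, pm_capacity):
    """demands: {vm: cpu}; colocate_groups: list of sets of VMs that must share a PM;
    anti_affinity: set of frozenset({vm1, vm2}) pairs that must not share a PM;
    pm_capacity: capacity of one homogeneous PM.  Returns a list of PMs as sets of VMs."""
    # merge co-location groups into single items; remaining VMs are singleton items
    grouped = {vm for g in colocate_groups for vm in g}
    items = [set(g) for g in colocate_groups] + [{vm} for vm in demands if vm not in grouped]

    pms = []  # each PM: {"free": remaining capacity, "vms": set of placed VMs}
    for item in sorted(items, key=lambda it: -sum(demands[v] for v in it)):
        size = sum(demands[v] for v in item)
        for pm in pms:
            conflict = any(frozenset({a, b}) in anti_affinity
                           for a in item for b in pm["vms"])
            if pm["free"] >= size and not conflict:
                pm["vms"] |= item
                pm["free"] -= size
                break
        else:                      # no existing PM fits without violating constraints
            pms.append({"free": pm_capacity - size, "vms": set(item)})
    return [pm["vms"] for pm in pms]

print(place_with_affinity({"a": 4, "b": 3, "c": 5, "d": 2},
                          colocate_groups=[{"a", "b"}],
                          anti_affinity={frozenset({"c", "d"})},
                          pm_capacity=8))
```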

11.
The Journal of Supercomputing - The traditional cloud computing technology provides services to a plethora of applications by providing resources. These services support numerous industries for...

12.
Multimedia Tools and Applications - The development of the medical services framework is advanced by the development of the Internet of Things (IoT) innovation. There are many hindrances in the...

13.
Attribute-based encryption with keyword search (ABKS) achieves both fine-grained access control and keyword search. However, in previous ABKS schemes, the search algorithm requires every keyword in the target keyword set to be identical to one in the ciphertext keyword set; otherwise the algorithm outputs no search result, which is inconvenient in practice. Moreover, previous ABKS schemes are vulnerable to what we call a peer-decryption attack: the ciphertext may be eavesdropped and decrypted by an adversary who has sufficient authorities but no information about the ciphertext keywords. In this paper, we provide a new system in fog computing, ciphertext-policy attribute-based encryption with dynamic keyword search (ABDKS). In ABDKS, the search algorithm requires only one keyword to be identical between the two keyword sets and outputs the corresponding correlation, which reflects the number of keywords shared by the two sets. In addition, our ABDKS is resistant to the peer-decryption attack, since decryption requires not only sufficient authority but also at least one keyword of the ciphertext. Beyond that, ABDKS shifts most computational overheads from resource-constrained users to fog nodes. The security analysis shows that ABDKS can resist Chosen-Plaintext Attack (CPA) and Chosen-Keyword Attack (CKA).
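
Setting the cryptography aside, the search semantics described above can be illustrated over plaintext keyword sets (purely conceptual; the real ABDKS evaluates this over encrypted indexes and trapdoors): a ciphertext matches as soon as one keyword coincides, and the returned correlation is the number of shared keywords.

```python
def keyword_correlation(target_keywords, ciphertext_keywords):
    """Return the number of shared keywords, or 0 when there is no match.
    Unlike conjunctive search, a single common keyword is enough to match."""
    return len(set(target_keywords) & set(ciphertext_keywords))

def search(index, target_keywords):
    """index: {doc_id: keyword_set}.  Rank matching documents by correlation."""
    hits = {doc: keyword_correlation(target_keywords, kws)
            for doc, kws in index.items()}
    return sorted(((d, c) for d, c in hits.items() if c > 0),
                  key=lambda dc: -dc[1])

index = {"doc1": {"fog", "iot", "security"}, "doc2": {"cloud", "iot"}, "doc3": {"ml"}}
print(search(index, {"iot", "security"}))   # doc1 (correlation 2), doc2 (correlation 1)
```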

14.

In recent years, Fog Computing (FC) has become known as a good infrastructure for the Internet of Things (IoT). Using this architecture for mobile applications in the IoT is called Mobile Fog Computing (MFC). If an application is composed of modules, these modules can be sent to the Fog or Cloud layer because of resource limitations or increased runtime on the mobile device, which increases the efficiency of the whole system. As data arrives sequentially and is fed to the modules, the number of executable modules increases. This research therefore seeks the best place to run each module, which can be the mobile device, the Fog, or the Cloud. In the proposed method, when modules arrive at the gateway, a Hidden Markov model Auto-scaling Offloading (HMAO) approach finds the best destination to execute each module, trading off energy consumption against execution time. Evaluation results on energy consumption, execution cost, delay, and network resource usage show that the proposed method is, on average, better than local execution, First-Fit (FF), and a Q-learning based method.
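
A toy sketch of the offloading trade-off only (the rate constants are invented, and the hidden Markov model and auto-scaling logic of HMAO are deliberately omitted): each destination is scored by a weighted combination of device-side energy and end-to-end time, and the cheapest one wins.

```python
def choose_destination(module_mops, data_mb, w_energy=0.5, w_time=0.5):
    """Pick where to run a module (mobile, fog, or cloud) by a weighted
    energy/time score, from the mobile device's point of view.
    All constants are illustrative; the hidden-Markov / auto-scaling logic
    of HMAO itself is not reproduced here."""
    # execution speed (MOPS/s) and network delay per MB of transferred data (s)
    speed = {"mobile": 500.0, "fog": 2000.0, "cloud": 8000.0}
    net_delay = {"mobile": 0.0, "fog": 0.05, "cloud": 0.20}
    # device-side energy: computing locally vs. transmitting the module's data
    compute_j_per_mop, tx_j_per_mb = 0.02, 1.5

    scores = {}
    for dest in speed:
        time = module_mops / speed[dest] + data_mb * net_delay[dest]
        energy = module_mops * compute_j_per_mop if dest == "mobile" else data_mb * tx_j_per_mb
        scores[dest] = w_energy * energy + w_time * time
    best = min(scores, key=scores.get)
    return best, scores

print(choose_destination(module_mops=4000, data_mb=2))
```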


15.
Integration of the Internet of Things (IoT) with industries revamps the traditional ways in which industries work. Fog computing extends Cloud services to the vicinity of end users and reduces the delays induced by communication with distant clouds in IoT environments. The resource-constrained nature of Fog computing nodes demands an efficient placement policy for deploying applications or their services. The distributed and heterogeneous features of Fog environments make it imperative to consider reliability in placement decisions in order to provide services without interruption, yet increasing reliability leads to an increase in cost. In this article, we propose a service placement policy that addresses the conflicting criteria of service reliability and monetary cost. A multiobjective optimisation problem is formulated and a novel placement policy, Cost and Reliability-aware Eagle-Whale (CREW), is proposed to provide placement decisions that ensure timely service responses. Considering the exponentially large solution space, CREW adopts Eagle strategy based multi-Whale optimisation for making placement decisions. We consider real-time microservice applications to validate our approach, and CREW is experimentally shown to outperform placement strategies based on popular existing multiobjective meta-heuristics such as NSGA-II and MOWOA.

16.
We consider a large‐scale online service system that places resources geographically distributed over multiple regional cloud data centers. Service providers need to place resources in these regions so as to maximize profit, accounting for revenue from granted demands minus resource placement costs. The challenge is how to optimally place these resources to fulfill varying demands (e.g., multidimensional and stochastic demands) across these cloud data centers. Considering demand stochasticity significantly increases the time complexity of resource placement algorithms, resulting in inefficiency when handling a large number of resources. We propose a fast resource placement algorithm (FRP) to obtain the maximum resource revenue from distributed cloud systems. Experiments show that, in scenarios with general settings, FRP achieves up to 99.2% of the revenue of the best existing solution while reducing execution time by two orders of magnitude. FRP is therefore an effective supplement to existing algorithms in time-constrained scheduling scenarios with a large number of resources. Copyright © 2015 John Wiley & Sons, Ltd.
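
A toy greedy variant in the spirit of revenue-maximizing placement (the actual FRP algorithm and its treatment of multidimensional, stochastic demand are not reproduced; names and the profit model are assumptions): demands are sorted by revenue per unit of capacity and placed into the cheapest region that still has room.

```python
def greedy_revenue_placement(demands, regions):
    """demands: list of (demand_id, size, revenue);
    regions: {region: {"capacity": units, "cost_per_unit": cost}}.
    Greedily grant the most profitable demands first and place each in the
    feasible region with the lowest placement cost.  Returns (placement, profit)."""
    free = {r: cfg["capacity"] for r, cfg in regions.items()}
    placement, profit = {}, 0.0
    # most revenue per unit of capacity first
    for d_id, size, revenue in sorted(demands, key=lambda d: -d[2] / d[1]):
        feasible = [r for r in regions if free[r] >= size]
        if not feasible:
            continue                               # demand rejected
        region = min(feasible, key=lambda r: regions[r]["cost_per_unit"])
        cost = size * regions[region]["cost_per_unit"]
        if revenue <= cost:
            continue                               # not profitable in any feasible region
        placement[d_id] = region
        free[region] -= size
        profit += revenue - cost
    return placement, profit

print(greedy_revenue_placement(
    [("d1", 4, 10.0), ("d2", 2, 8.0), ("d3", 5, 6.0)],
    {"us-east": {"capacity": 6, "cost_per_unit": 1.0},
     "eu-west": {"capacity": 4, "cost_per_unit": 1.5}}))
```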

17.
In recent years, large-scale migration of applications to fog computing has been observed in the information technology world. The main issue in fog computing is providing enhanced quality of service (QoS). QoS management comprises various methods for allocating fog-user applications in the virtual environment and for selecting a suitable method to map virtual resources to physical resources. Effective resource allocation in the fog environment is also a major problem in fog computing, which arises when the infrastructure is built from lightweight computing devices. In this article, the task allocation and virtual machine placement problems are addressed in a single fog computing environment. Experiments show that the proposed framework improves QoS in the fog environment.

18.
Device security is one of the major challenges for successful implementation of the Internet of Things (IoT) and fog computing. Researchers and IT organizations have explored many solutions to protect systems from unauthenticated device attacks (known as outside device attacks). Fog computing uses many edge devices (e.g., routers, switches, and hubs) for latency-aware processing of collected data. Identifying malicious edge devices is therefore a critical activity for data security in fog computing. Preventing attacks from malicious edge devices is more difficult because such devices are granted privileges to store and process data. In this article, the proposed framework combines three technologies, a Markov model, an intrusion detection system (IDS), and a virtual honeypot device (VHD), to identify malicious edge devices in a fog computing environment. A two-stage Markov model is used to categorize edge devices effectively into four different levels. The VHD is designed to store and maintain a log repository of all identified malicious devices, which helps the system defend itself against unknown attacks in the future. The proposed model is tested in a simulated environment, and the results indicate its effectiveness: it successfully identifies malicious devices while reducing the false IDS alarm rate.
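
A sketch of how a Markov model over trust levels might be used (the four states echo the paper's idea of four levels, but the transition matrices and the belief-update rule are illustrative assumptions; the two-stage model and the IDS/VHD integration are not reproduced):

```python
import numpy as np

# Four trust levels for edge devices
LEVELS = ["trusted", "suspicious", "risky", "malicious"]

# Hypothetical transition matrices: one applied after a benign observation period,
# one after the IDS raises an alert for the device.
P_BENIGN = np.array([[0.95, 0.05, 0.00, 0.00],
                     [0.30, 0.60, 0.10, 0.00],
                     [0.00, 0.30, 0.60, 0.10],
                     [0.00, 0.00, 0.10, 0.90]])
P_ALERT = np.array([[0.60, 0.30, 0.10, 0.00],
                    [0.00, 0.50, 0.40, 0.10],
                    [0.00, 0.00, 0.50, 0.50],
                    [0.00, 0.00, 0.00, 1.00]])

def update_belief(belief, alert):
    """belief: probability distribution over LEVELS; alert: True if the IDS
    flagged the device in this period.  Returns the updated distribution."""
    P = P_ALERT if alert else P_BENIGN
    belief = belief @ P
    return belief / belief.sum()

belief = np.array([1.0, 0.0, 0.0, 0.0])         # start as trusted
for alert in [False, True, True, False, True]:  # observed IDS alerts over time
    belief = update_belief(belief, alert)
print(dict(zip(LEVELS, belief.round(3))))
# a device whose belief mass concentrates on "malicious" would be handed to the VHD
```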

19.
Fog and Cloud computing are ubiquitous computing paradigms based on the concepts of utility and grid computing. Cloud service providers permit end users flexible and dynamic access to virtualized computing resources on a pay-per-use basis. Users with mobile devices prefer to process as many applications as possible locally, with a fog layer providing the infrastructure for storing and processing applications. If the resource demands cannot be satisfied by the fog layer of the mobile device, the job is transferred to the cloud for processing. Because of the large number of jobs and the limited resources, the fog is prone to deadlock at very large scale. Therefore, Quality of Service (QoS) and reliability are important aspects of a heterogeneous fog and cloud framework. In this paper, a Social Network Analysis (SNA) technique is used to detect deadlocks over resources in the fog layer of the mobile device. A new concept of free-space fog is proposed, which helps remove deadlocks by collecting the available free resources from all allocated jobs. A set of rules is proposed for a deadlock manager to increase resource utilization in the fog layer and decrease the response time of requests when a deadlock is detected by the system. Two different clouds (a public cloud and a virtual private cloud), apart from the fog layer and free-space fog, are used to manage deadlocks effectively. Selection among them is done by assigning priorities to the requests and providing resources accordingly from the fog and cloud. Therefore, QoS as well as reliability can be provided to users using the proposed framework. CloudSim is used to evaluate resource utilization with a Resource Pool Manager (RPM). The results show the effectiveness of the proposed technique.
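
The SNA-based detection can be thought of as cycle detection on a wait-for graph among jobs holding and requesting fog resources; a minimal sketch (a hypothetical representation, not the paper's exact model):

```python
def find_deadlock(wait_for):
    """wait_for: {job: set of jobs it is waiting on (because they hold needed
    fog resources)}.  Returns one cycle of mutually waiting jobs, or None."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {j: WHITE for j in wait_for}
    stack = []

    def dfs(job):
        color[job] = GRAY
        stack.append(job)
        for nxt in wait_for.get(job, ()):
            if color.get(nxt, WHITE) == GRAY:       # back edge: cycle found
                return stack[stack.index(nxt):] + [nxt]
            if color.get(nxt, WHITE) == WHITE:
                cycle = dfs(nxt)
                if cycle:
                    return cycle
        stack.pop()
        color[job] = BLACK
        return None

    for job in wait_for:
        if color[job] == WHITE:
            cycle = dfs(job)
            if cycle:
                return cycle
    return None

# j1 waits on j2, j2 waits on j3, j3 waits on j1 -> deadlock
print(find_deadlock({"j1": {"j2"}, "j2": {"j3"}, "j3": {"j1"}}))
```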

20.
Resource provisioning is one of the challenges in federated Grid environments. In these environments, each Grid serves requests from external users along with local users. Recently, this resource provisioning has been performed in the form of Virtual Machines (VMs). The problem arises when there are insufficient resources to serve local users, and it is further complicated when external requests have different QoS requirements. Serving local users can be achieved by preempting VMs from external users, which imposes overhead on the system. Therefore, the question is how the number of VM preemptions in a Grid can be minimized and, additionally, how the likelihood of preemption can be decreased for requests with stricter QoS requirements. We propose a scheduling policy for InterGrid, a federated Grid, that reduces the number of VM preemptions and dispatches external requests in a way that fewer requests with QoS constraints are affected by preemption. Extensive simulation results indicate that the number of VM preemptions is decreased by at least 60%, particularly for requests with stricter QoS requirements.
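
A toy sketch of the preemption decision (the victim-selection criteria are assumptions in the spirit of the described policy, not the paper's exact algorithm): when a local request cannot be served, preempt as few external leases as possible, preferring best-effort leases over QoS-constrained ones.

```python
def select_preemption_victims(needed, external_leases):
    """needed: number of VM slots required by the local request.
    external_leases: list of (lease_id, vms, has_qos) for running external requests.
    Returns the leases to preempt, or None if preempting everything is still not enough.
    Preference: best-effort leases first, and within each class the largest leases
    first, so that as few leases as possible are disturbed."""
    ordered = sorted(external_leases, key=lambda l: (l[2], -l[1]))  # QoS last, big first
    victims, freed = [], 0
    for lease_id, vms, has_qos in ordered:
        if freed >= needed:
            break
        victims.append(lease_id)
        freed += vms
    return victims if freed >= needed else None

leases = [("e1", 2, True), ("e2", 4, False), ("e3", 3, False), ("e4", 1, True)]
print(select_preemption_victims(5, leases))   # prefers e2 and e3 over the QoS leases
```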
