Similar Documents
 20 similar documents found; search took 11 ms
1.
As the size of IT infrastructure continues to grow, cloud computing is a natural extension of virtualisation technologies that enables scalable management of virtual machines over a plethora of physically connected systems. The so-called virtualisation-based cloud computing paradigm offers a practical approach to green IT/clouds, which emphasises the construction and deployment of scalable, energy-efficient network software applications (NetApps) by virtue of improved utilisation of the underlying resources. The latter is typically achieved through increased sharing of hardware and data in a multi-tenant cloud architecture/environment and, as such, accentuates the critical requirement for enhanced security services as an integrated component of the virtual infrastructure management strategy. This paper analyses the key security challenges faced by contemporary green cloud computing environments and proposes a virtualisation security assurance architecture, CyberGuarder, designed to address several key security problems within the ‘green’ cloud computing context. In particular, CyberGuarder provides three kinds of services: a virtual machine security service, a virtual network security service and a policy-based trust management service. Specifically, the proposed virtual machine security service incorporates a number of new techniques, including (1) a VMM-based integrity measurement approach for NetApp trusted loading, (2) a multi-granularity NetApp isolation mechanism to enable OS user isolation, and (3) a dynamic approach to virtual machine and network isolation for multiple NetApps based on energy-efficiency and security requirements. Secondly, a virtual network security service has been developed to provide adaptive virtual security appliance deployment in a NetApp execution environment, whereby traditional security services such as IDS and firewalls can be encapsulated as VM images and deployed over a virtual security network in accordance with the practical configuration of the virtualised infrastructure. Thirdly, a security service providing policy-based trust management is proposed to facilitate access control to the resource pool and a trust federation mechanism to support/optimise task privacy and cost requirements across multiple resource pools. Preliminary studies of these services have been carried out on our iVIC platform, with promising results. As part of our ongoing research in large-scale, energy-efficient/green cloud computing, we are currently developing a virtual laboratory for our campus courses using the virtualisation infrastructure of iVIC, which incorporates the important results and experience of CyberGuarder in a practical context.

2.
3.
Cloud computing and virtualization technology have revolutionized general-purpose computing applications in the past decade. The cloud paradigm offers advantages through reduction of operation costs, server consolidation, flexible system configuration and elastic resource provisioning. However, despite the success of cloud computing for general-purpose computing, existing cloud computing and virtualization technology face tremendous challenges in supporting emerging soft real-time applications such as online video streaming, cloud-based gaming, and telecommunication management. These applications demand real-time performance in open, shared and virtualized computing environments. This paper identifies the technical challenges in supporting real-time applications in the cloud, surveys recent advances in real-time virtualization and cloud computing technology, and offers research directions to enable cloud-based real-time applications in the future.

4.
The smart grid meets present-day needs and is of great significance. This paper first gives a brief introduction to cloud computing and the smart grid, describes the application of cloud computing in the smart grid, and then focuses on an analysis of cloud computing security technologies.

5.
To address the problem that a cloud platform cannot obtain complete forecasting information from a univariate load series, a multivariate local prediction model based on principal component analysis (PCA) is proposed and applied to forecasting the underlying resources of cloud computing. PCA is used to jointly account for the interactions among multiple underlying resources and to determine the embedding dimension of the multivariate phase space; combined with the local prediction method, this yields the multivariate local prediction model. Simulation experiments show that the PCA-based multivariate local prediction model achieves higher prediction accuracy than the univariate local prediction model and is an effective approach for forecasting the underlying resources of cloud computing.
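The following is a minimal sketch of the idea described in this abstract, not the authors' implementation: PCA picks an embedding dimension for a multivariate resource series (CPU, memory, I/O, ...), and a nearest-neighbour local predictor forecasts the next value in the reduced phase space. The function and parameter names (local_predict, var_threshold, k) are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

def local_predict(series_matrix, var_threshold=0.95, k=5):
    """series_matrix: (T, m) array of m resource series (CPU, memory, I/O, ...)."""
    series_matrix = np.asarray(series_matrix, float)
    # PCA over the multivariate series; the number of components needed to
    # explain `var_threshold` of the variance is taken as the embedding dimension.
    pca = PCA().fit(series_matrix)
    d = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_), var_threshold)) + 1
    X = pca.transform(series_matrix)[:, :d]          # reduced phase space

    # Local (nearest-neighbour) prediction: find the k states closest to the
    # latest state and average the successors of those states (first resource).
    nbrs = NearestNeighbors(n_neighbors=k).fit(X[:-1])
    _, idx = nbrs.kneighbors(X[-1:])
    return series_matrix[idx.ravel() + 1, 0].mean()
```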

6.
Time series forecasting, as an important tool in many decision support systems, has been extensively studied and applied for sales forecasting over the past few decades. There are many well-established and widely adopted forecasting methods such as linear extrapolation and SARIMA. However, their performance is far from perfect, and this is especially true when the sales pattern is highly volatile. In this paper, we propose a hybrid forecasting scheme which combines the classic SARIMA method and wavelet transform (SW). We compare the performance of SW with (i) pure SARIMA, (ii) a forecasting scheme based on linear extrapolation with seasonal adjustment (CSD + LESA), and (iii) evolutionary neural networks (ENN). We illustrate the significance of SW and establish the conditions under which SW outperforms pure SARIMA and CSD + LESA. We further study the time series features which influence forecasting accuracy, and we propose a method for conducting sales forecasting based on the features of the given sales time series. Experiments are conducted using real sales data, hypothetical data, and publicly available data sets. We believe that the proposed hybrid method is highly applicable for forecasting sales in industry.
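A hedged sketch of one way such a SARIMA-plus-wavelet hybrid can be assembled with pywt and statsmodels: decompose the sales series, fit a SARIMA model per component, and sum the component forecasts. The paper's exact decomposition and model orders are not given here; sw_forecast and its defaults are illustrative assumptions.

```python
import numpy as np
import pywt
from statsmodels.tsa.statespace.sarimax import SARIMAX

def sw_forecast(sales, steps=4, wavelet="db4", level=2,
                order=(1, 1, 1), seasonal_order=(1, 1, 1, 12)):
    """Wavelet-decompose the series, forecast each component with SARIMA,
    and sum the component forecasts."""
    coeffs = pywt.wavedec(np.asarray(sales, float), wavelet, level=level)
    forecast = np.zeros(steps)
    for i in range(len(coeffs)):
        # Reconstruct the i-th component at the original sampling rate by
        # zeroing out every other coefficient band before inverse transform.
        parts = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        component = pywt.waverec(parts, wavelet)[: len(sales)]
        model = SARIMAX(component, order=order,
                        seasonal_order=seasonal_order).fit(disp=False)
        forecast += model.forecast(steps)
    return forecast
```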

7.
Cloud computing provides scalable computing and storage resources over the Internet. These scalable resources can be dynamically organized as many virtual machines (VMs) to run user applications on a pay-per-use basis. The required resources of a VM are sliced from a physical machine (PM) in the cloud computing system, and a PM may hold one or more VMs. When a cloud provider would like to create a number of VMs, the main issue is the VM placement problem: how to place these VMs at appropriate PMs to provision their required resources. However, if two or more VMs are placed at the same PM, there is a certain degree of interference between them due to sharing of non-sliceable resources, e.g. I/O resources. This phenomenon is called VM interference. VM interference affects the performance of applications running in VMs, especially delay-sensitive applications, which have quality of service (QoS) requirements on their data access delays. This paper investigates how to integrate QoS awareness with virtualization in cloud computing systems, namely the QoS-aware VM placement (QAVMP) problem. In addition to fully exploiting the resources of PMs, the QAVMP problem considers the QoS requirements of user applications and the reduction of VM interference. Therefore, the QAVMP problem involves three factors: resource utilization, application QoS, and VM interference. We first formulate the QAVMP problem as an Integer Linear Programming (ILP) model by integrating the three factors into the profit of the cloud provider. Due to the computational complexity of the ILP model, we propose a polynomial-time heuristic algorithm to solve the QAVMP problem efficiently. In the heuristic algorithm, a bipartite graph is modeled to represent all possible placement relationships between VMs and PMs. Then, the VMs are gradually placed at their preferable PMs to maximize the profit of the cloud provider as much as possible. Finally, simulation experiments demonstrate the effectiveness of the proposed heuristic algorithm by comparing it with other VM placement algorithms.
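As an illustration only (not the paper's ILP model or its exact heuristic), a greedy sweep over the bipartite VM-to-PM edges, sorted by a caller-supplied profit function, conveys the flavour of such a placement heuristic. The vm/pm dictionary fields and the profit callback are assumed structures.

```python
def place_vms(vms, pms, profit):
    """vms: list of dicts with 'id' and 'demand'; pms: list of dicts with 'id'
    and 'free' capacity; profit(vm, pm): estimated gain combining utilisation,
    QoS and interference for placing vm on pm."""
    placement = {}
    # Build all feasible (vm, pm) edges of the bipartite graph with their profit.
    edges = [(profit(v, p), v, p) for v in vms for p in pms
             if p["free"] >= v["demand"]]
    # Greedily pick the most profitable remaining edge whose PM still has room.
    for gain, v, p in sorted(edges, reverse=True, key=lambda e: e[0]):
        if v["id"] not in placement and p["free"] >= v["demand"]:
            placement[v["id"]] = p["id"]
            p["free"] -= v["demand"]
    return placement
```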

8.
Virtualization is a key technology for enabling cloud computing. The driver-domain model for network virtualization offers isolation and a high level of flexibility; however, it suffers from poor performance and lacks scalability. In this paper, we evaluate the networking performance of virtual machines within Xen. The I/O channel transferring packets between the driver domain and the virtual machines is shown to be the bottleneck. To overcome this limitation, we propose a packet-aggregation-based mechanism to transfer packets from the driver domain to the virtual machines. Packet aggregation, combined with efficient core allocation, allows virtual machine throughput to scale up by 700%, while minimizing both memory and CPU consumption. Moreover, the impact of aggregation on packet delay and jitter remains acceptable. Hence, the proposed I/O virtualization model enables infrastructure providers to offer cloud computing services.
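A simplified illustration of packet aggregation in general (not Xen's actual I/O channel code): packets are buffered and flushed as one batch once a byte threshold or a timeout is reached, trading a bounded extra delay for fewer transfers. All names and thresholds below are hypothetical.

```python
import time

class Aggregator:
    """Batch packets from the driver domain before copying them to a guest,
    flushing on a byte threshold or a timeout to bound the added delay."""
    def __init__(self, flush_bytes=64 * 1024, flush_ms=1.0, send=print):
        self.buf, self.size, self.first = [], 0, None
        self.flush_bytes, self.flush_ms, self.send = flush_bytes, flush_ms, send

    def push(self, packet):
        if self.first is None:
            self.first = time.monotonic()
        self.buf.append(packet)
        self.size += len(packet)
        elapsed_ms = (time.monotonic() - self.first) * 1000
        if self.size >= self.flush_bytes or elapsed_ms >= self.flush_ms:
            self.flush()

    def flush(self):
        if self.buf:
            self.send(b"".join(self.buf))   # one copy over the I/O channel
            self.buf, self.size, self.first = [], 0, None
```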

9.
This paper proposes a network security processing model for cloud environments in which every cloud server has its own intrusion detection system and all servers share a single anomaly management platform responsible for receiving and processing alerts and for log management. The model adopts dynamic alert-level adjustment and attack-information sharing, minimising both the false-negative rate and the probability that servers suffer the same attack, thereby effectively improving detection efficiency and overall system security.
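A toy sketch of the shared anomaly-management platform described above, under assumed interfaces (each per-server IDS agent exposes a blocklist set): repeated reports of the same signature raise its alert level, and escalated signatures are shared with every server.

```python
from collections import defaultdict

class AlertPlatform:
    """Shared anomaly-management platform: receives alerts from per-server IDS,
    adjusts alert levels dynamically, and shares attack signatures."""
    def __init__(self, servers):
        self.servers = servers                 # list of IDS agents with .blocklist
        self.level = defaultdict(int)          # per-signature alert level

    def report(self, signature, source_server):
        self.level[signature] += 1             # repeated reports raise the level
        self.log(signature, source_server)
        if self.level[signature] >= 2:         # escalate: share with every server
            for ids in self.servers:
                ids.blocklist.add(signature)

    def log(self, signature, server):
        print(f"[alert] {signature} reported by {server}")
```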

10.
Wind energy prediction has a significant effect on the planning, economic operation and security maintenance of the wind power system. However, due to its high volatility and intermittency, it is difficult to model and predict wind power series through traditional forecasting approaches. To enhance prediction accuracy, this study developed a hybrid model that incorporates the following stages. First, an improved complete ensemble empirical mode decomposition with adaptive noise technology was applied to decompose the wind energy series, eliminating noise and extracting the main features of the original data. Next, to achieve highly accurate and stable forecasts, an improved wavelet neural network tuned by optimization methods was built and used to implement wind energy prediction. Finally, hypothesis testing, a stability test and four case studies including eighteen comparison models were utilized to test the abilities of the prediction models. The experimental results show that the average values of the mean absolute percentage errors of the proposed hybrid model are 5.0116% (one-step ahead), 7.7877% (two-step ahead) and 10.6968% (three-step ahead), which are much lower than those of the comparison models.

11.
Cloud computing and the Internet of Things have promoted a new logistics service mode, i.e., the cloud logistics mode. This work studies the resource virtualization and service encapsulation of a logistics center, focusing on the technologies of resource expression and service encapsulation. After the resources of a logistics center are encapsulated in web services, how to find the “best” concrete web service among many is a critically important issue. This work treats service selection as an optimization problem and establishes a Particle Swarm Optimization (PSO)-based web service selection model with quality of service (QoS) constraints. It can be used to address the horizontal adaptation issues arising from composite web services. The feasibility and effectiveness of the model are verified by several experiments.
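A compact, generic PSO sketch for QoS-driven service selection, assuming each abstract service has a list of concrete candidates and a caller-supplied fitness function that aggregates QoS utility; it illustrates the approach, not the paper's model, and all parameter values are assumptions.

```python
import numpy as np

def pso_select(qos, fitness, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """qos: list of candidate lists, one per abstract service in the composition.
    fitness(indices): aggregated QoS utility of a concrete selection (higher is better)."""
    dims = len(qos)
    bounds = np.array([len(c) - 1 for c in qos], float)
    pos = np.random.rand(n_particles, dims) * bounds       # candidate indices (continuous)
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([fitness(np.rint(p).astype(int)) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(iters):
        r1, r2 = np.random.rand(2, n_particles, dims)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0, bounds)                 # keep indices in range
        vals = np.array([fitness(np.rint(p).astype(int)) for p in pos])
        better = vals > pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmax()].copy()
    return np.rint(gbest).astype(int)                       # chosen concrete service per slot
```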

12.
In this paper, we investigate the problem of scheduling precedence-constrained parallel applications on heterogeneous computing systems (HCSs) such as cloud computing infrastructures. This kind of application has been studied and used in many research works, most of which propose algorithms to minimize the completion time (makespan) without paying much attention to energy consumption. We propose a new parallel bi-objective hybrid genetic algorithm that takes into account not only makespan but also energy consumption. We particularly focus on the island parallel model and the multi-start parallel model. Our new method is based on dynamic voltage scaling (DVS) to minimize energy consumption. In terms of energy consumption, the obtained results show that our approach outperforms previous scheduling methods by a significant margin. In terms of completion time, the obtained schedules are also shorter than those of other algorithms. Furthermore, our study demonstrates the potential of DVS.
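For intuition about a DVS-based energy objective, the sketch below computes the energy of a schedule in which each task runs at a chosen voltage/frequency level, using the standard dynamic-power model P ≈ C·V²·f; this is a generic textbook model, not the paper's exact formulation, and all names are hypothetical.

```python
def dvs_energy(tasks, freq_levels):
    """Energy of a schedule under dynamic voltage scaling.
    tasks: list of (cycles, chosen_level); freq_levels: list of (voltage, frequency).
    Dynamic power is modelled as P = C * V^2 * f, with the capacitance folded into C = 1."""
    energy = 0.0
    for cycles, lvl in tasks:
        v, f = freq_levels[lvl]
        t = cycles / f                  # execution time stretches as frequency drops
        energy += (v ** 2) * f * t      # lower voltage levels cut energy quadratically
    return energy
```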

13.
Scheduling is essentially a decision-making process that enables resource sharing among a number of activities by determining their execution order on the set of available resources. The emergence of distributed systems brought new challenges to scheduling in computer systems, including clusters, grids, and more recently clouds. On the other hand, the plethora of research makes it hard for researchers, newcomers in particular, to understand the relationship among the different scheduling problems and strategies proposed in the literature, which hampers the identification of new and relevant research avenues. In this paper we introduce a classification of the scheduling problem in distributed systems by presenting a taxonomy that incorporates recent developments, especially those in cloud computing. We review the scheduling literature to corroborate the taxonomy and analyze the interest in different branches of the proposed taxonomy. Finally, we identify relevant future directions in scheduling for distributed systems.

14.
Software-defined networking (SDN) has evolved and brought an innovative paradigm shift to computer networks by utilizing a programmable software controller with open protocols. Network functions, previously served on dedicated hardware, have shifted to network function virtualization (NFV), which enables functions to be virtualized and provisioned dynamically on generic hardware. In addition to NFV, edge computing utilizes edge resources close to end-users, which can reduce the end-to-end service delay and the network traffic volume. Although these innovative technologies have gained significant attention from both academia and industry, there are limited tools and simulation frameworks for evaluating their effectiveness in a repeatable and controllable manner. Furthermore, large-scale experimental infrastructures are expensive to set up and difficult to maintain; even when they are created, they are not available or accessible to the majority of researchers throughout the world. In this paper, we propose a framework for simulating NFV functionalities in both edge and cloud computing environments. In addition to the basic network functionalities supported by SDN in CloudSimSDN, we added new NFV features, such as virtualized network function allocation, migration, and autoscaling, with support for corresponding network functionalities such as flow load balancing, rerouting, and service function chaining (SFC) maintenance. We evaluated our simulation framework with autoscaling and placement policies for SFC in integrated edge and cloud computing environments. The results demonstrate its effectiveness in measuring and evaluating the end-to-end delay, response time, resource utilization, network traffic, and power consumption with different algorithms in each scenario.

15.
Multimedia communication research and development often requires computationally intensive simulations in order to develop and investigate the performance of new optimization algorithms. Depending on the simulation, testing an adequate set of conditions may take up to a few days due to the complexity of the algorithms. The traditional approach to speeding up this type of relatively small simulation, which requires several develop–simulate–reconfigure cycles, is to run it in parallel on a few computers and leave them idle while the technique for the next simulation cycle is being developed. This work proposes a new cost-effective framework based on cloud computing for accelerating the development process, in which resources are obtained on demand and paid for only when actually used. Issues are addressed both analytically and practically by running actual test cases, i.e., simulations of video communications on a packet-lossy network, using a commercial cloud computing service. A software framework has also been developed to simplify the management of the virtual machines in the cloud. Results show that it is economically convenient to use the considered cloud computing service, especially in terms of reduced development time and costs, with respect to a solution using dedicated computers, when the development time is longer than one hour. If more development time is needed between simulations, the economic advantage progressively reduces as the computational complexity of the simulation increases.

16.
How to reduce the power consumption of data centers has received worldwide attention. By combining an energy-aware data placement policy and a locality-aware multi-job scheduling scheme, we propose a new multi-objective bi-level programming model based on MapReduce to improve the energy efficiency of servers. First, the variation of energy consumption with the performance of servers is taken into account; second, data locality can be adjusted dynamically according to the current network state; last but not least, considering that task-scheduling strategies depend directly on data placement policies, we formulate the problem as an integer bi-level programming model. In order to solve the model efficiently, specially designed encoding and decoding methods are introduced. Based on these, a new effective multi-objective genetic algorithm based on MOEA/D is proposed. As there are usually tens of thousands of tasks to be scheduled in the cloud, this is a large-scale optimization problem, so a local search operator is designed to accelerate the convergence of the proposed algorithm. Finally, numerical experiments indicate the effectiveness of the proposed model and algorithm.

17.
Cloud computing enables many applications of Web services and rekindles interest in providing ERP services via the Internet. It has the potential to reshape the way IT services are consumed. Recent research indicates that ERP delivered through SaaS will outperform traditional IT offerings. However, distributing a service is more complicated than distributing a product because of the immateriality, the integration and the one-shot principle associated with services. This paper defines a CloudERP platform on which enterprise customers can select web services and customize a unique ERP system to meet their specific needs. CloudERP aims to provide enterprise users with the flexibility of renting an entire ERP service through multiple vendors. This paper also addresses the challenge of composing web services and proposes a web-based solution for automating the ERP service customization process. The proposed service composition method builds on the genetic algorithm concept and incorporates knowledge of web services extracted from the web service platform using rough set theory. A system prototype was built on the Google App Engine platform to verify the proposed composition process. Experimental results from running the prototype show that the composition method works effectively and has great potential for supporting a fully functional CloudERP platform.

18.
We study the optimization of dynamic pricing in a queueing model with a finite buffer, where arrival rates depend on advertised price levels. We apply our study to a pricing policy in a cloud computing service provider setup. The main result of this paper is the multi-threshold structure of the optimal policy.
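The multi-threshold structure of such a policy can be illustrated with a toy rule in which the advertised price steps up as the finite buffer fills; the thresholds and prices below are made-up example values, not results from the paper.

```python
def threshold_price(queue_len, thresholds, prices):
    """Multi-threshold pricing: the advertised price steps up as the buffer fills.
    thresholds: increasing occupancy levels; prices: one more entry than thresholds."""
    for i, th in enumerate(thresholds):
        if queue_len < th:
            return prices[i]
    return prices[-1]

# Example with a buffer of 10: plenty of free capacity is cheap, a nearly full
# buffer is expensive.
# threshold_price(7, thresholds=[3, 6, 9], prices=[1.0, 1.5, 2.0, 3.0]) -> 2.0
```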

19.
The first hurdle for carrying out research on cloud computing is the development of a suitable research platform. While cloud computing is primarily commercially driven and commercial clouds are naturally realistic as research platforms, they do not provide scientists with enough control for dependable experiments. On the other hand, research carried out using simulation, mathematical modelling or small prototypes may not necessarily be applicable in real clouds of larger scale. Previous surveys on cloud performance and energy-efficiency have focused on the technical mechanisms proposed to address these issues; researchers of various disciplines and expertise can use them to identify areas where they can contribute innovative technical solutions. This paper is meant to be complementary to those surveys. By providing the landscape of research platforms for cloud systems, our aim is to help researchers identify a suitable approach for modelling, simulation or prototype implementation on which they can develop and evaluate their technical solutions.

20.
The uses of Big Data (BD) are gradually increasing in many new emerging applications, such as Facebook, eBay, Snapdeal, etc. BD is a term used to describe very large volumes of data. Data security is always a major concern for BD; other issues include data storage, long data access times, long data search times, high system overhead, and server demand. In this paper, a new access control model is proposed for BD to solve all these issues, where fast access to large volumes of data is provided based on the data size. Here, a long 512-bit Deoxyribonucleic Acid (DNA) based key sequence is used to improve data security, and it is secured against collision attacks, man-in-the-middle attacks, internal attacks, etc. The proposed scheme is evaluated in terms of both theoretical and experimental results, which show the proficiency of the proposed scheme over existing schemes.
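A minimal illustration of encoding a 512-bit key as a DNA base sequence (two bits per nucleotide), which is one common DNA-coding convention; the paper's actual key-generation scheme may differ, and dna_key is a hypothetical name.

```python
import secrets

# One common 2-bit-per-base mapping; the paper may use a different encoding.
BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}

def dna_key(bits=512):
    """Generate a random key of the given length and encode it as a DNA base
    sequence, two bits per nucleotide (512 bits -> 256 bases)."""
    raw = secrets.randbits(bits)
    bitstr = format(raw, f"0{bits}b")
    return "".join(BASE[bitstr[i:i + 2]] for i in range(0, bits, 2))
```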
