Similar Literature
20 similar documents found
1.
This paper presents a platform that supports the execution of scientific applications covering different programming models (such as Master/Slave, Parallel/MPI, MapReduce and Workflows) on Cloud infrastructures. The platform includes (i) a high-level declarative language to express the requirements of the applications, featuring software customization at runtime, (ii) an approach based on virtual containers to encapsulate the logic of the different programming models, (iii) an infrastructure manager to interact with different IaaS backends, (iv) configuration software to dynamically configure the provisioned resources and (v) a catalog and repository of virtual machine images. Using this platform, an application developer can adapt, deploy and execute parallel applications that are agnostic to the Cloud backend.

2.
Cloud Computing is gaining increasing importance in the Information Technology (IT) field. One of the major assets of this paradigm is its pay-as-you-go economic model. Cloud Computing attracts more attention from IT users when it meets their required QoS and reduces their expenses, which cannot be achieved without increasing the autonomy of the provisioned Cloud resources. In this paper, we propose a holistic approach that allows autonomic management facilities to be dynamically added to Cloud resources, even if they were designed without such facilities. Based on the Open Cloud Computing Interface (OCCI) standard, we propose a generic model for describing the resources needed to render a given Cloud resource autonomic, independently of the service level (Infrastructure, Platform or Software). Herein, we define new OCCI Resources, Links and Mixins that allow provisioning autonomic Cloud resources. To illustrate our approach, we propose a use case that specializes our autonomic infrastructure to ensure the elasticity of Service-based Business Processes (SBPs). The elasticity approach that we use is based on a formal model featuring duplication/consolidation mechanisms and a generic Controller that defines and evaluates elasticity strategies. To validate our proposal, we present an end-to-end scenario of provisioning an elastic SBP on a public PaaS. Evaluation of our approach in a realistic situation shows its efficiency.
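The abstract only sketches the duplication/consolidation mechanisms and the generic Controller; as a rough illustration of that idea, the following minimal Python sketch shows an elasticity controller that evaluates simple threshold strategies and triggers duplication or consolidation. The names, thresholds and monitoring callback are assumptions for illustration, not the paper's OCCI-based interfaces.

```python
# Illustrative sketch of a generic elasticity controller loop
# (names, thresholds and the monitoring callback are assumptions,
# not the OCCI-based interfaces described in the paper).
import time

def duplicate(service):
    print(f"duplicating {service['name']}")
    service["instances"] += 1

def consolidate(service):
    print(f"consolidating {service['name']}")
    service["instances"] = max(1, service["instances"] - 1)

def controller(service, get_load, high=0.8, low=0.2, period=5.0, rounds=3):
    """Evaluate simple elasticity strategies on each monitoring round."""
    for _ in range(rounds):
        load = get_load(service)                        # monitoring data, e.g. avg CPU
        if load > high:                                 # duplication strategy
            duplicate(service)
        elif load < low and service["instances"] > 1:   # consolidation strategy
            consolidate(service)
        time.sleep(period)

if __name__ == "__main__":
    svc = {"name": "sbp-activity", "instances": 1}
    fake_loads = iter([0.9, 0.95, 0.1])
    controller(svc, lambda s: next(fake_loads), period=0.0)
```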

3.
Cloud Computing enables the construction and provisioning of virtualized service-based applications through simple and cost-effective outsourcing to dynamic service environments. Cloud Federations envisage a distributed, heterogeneous environment consisting of various cloud infrastructures, aggregating IaaS provider capabilities from both the commercial and the academic area. In this paper, we introduce a federated cloud management solution that operates the federation by utilizing cloud brokers for various IaaS providers. To enable enhanced provider selection and inter-cloud service execution, an integrated monitoring approach is proposed that is capable of measuring the availability and reliability of the provisioned services at different providers. To this end, a minimal metric monitoring service has been designed and used together with a service monitoring solution to measure cloud performance. Transparent and cost-effective operation on commercial clouds and the capability to simultaneously monitor both private and public clouds were the major design goals of this integrated cloud monitoring approach. Finally, the evaluation of our proposed solution is presented on different private IaaS systems participating in federations.

4.
Cloud computing has become an innovative computing paradigm that aims to provide reliable, customized and QoS-guaranteed computing infrastructures for users. This paper presents our early experience with Cloud computing based on the Cumulus project for compute centers. We give a definition of Cloud computing and describe its functionalities, and introduce the Cumulus project in its various aspects, such as design patterns, infrastructure and middleware. The paper thus delivers the state of the art of Cloud computing with both a theoretical definition and practical experience.

5.
A Simulation Cloud helps users carry out simulation tasks at various stages quickly and easily by renting, instead of buying, all the needed resources, such as computing hardware, simulation devices, software and models. A monitoring system is therefore necessary that can dynamically collect information about the characteristics and status of resources in real time. In this paper, we design a Simulation Cloud Monitoring Framework (SCMF). The main functions of SCMF are: (1) collecting performance information of the Simulation Cloud (including physical and virtual resources); (2) processing the collected performance information and providing ranking information about resource consumption as a customized service to the service layer; (3) detecting abnormal behaviors in the Simulation Cloud in real time. The SCMF has a hierarchical design. It consists of a Root Monitoring Node (RMN), Federation Monitoring Nodes (FMNs) and Main Monitoring Nodes (MMNs). There is only one RMN in SCMF, responsible for collecting metadata about the Simulation Cloud. For robustness, there are several FMNs in a federation: one is the primary FMN and the others are backups. An MMN is deployed on every host in the Simulation Cloud and is responsible for collecting performance information about the host and its virtual nodes. The paper designs a Sequence-Bucket strategy, which supports quick responses to queries for resource-consumption ranking information. It also designs two further strategies: the Rank-FMN strategy and the Huffman-Like strategy. The Huffman-Like strategy combines small federations to reduce the total overhead of SCMF, while the Rank-FMN strategy is a load-balancing strategy that relieves the bottleneck at FMNs and spreads the load equally among them. The characteristics of SCMF are real-time operation, scalability, robustness, light weight, manageability and archivability. We also design evaluation models for SCMF that provide quantitative results for monitoring accuracy and monitoring cost. The simulation results show that SCMF is accurate, low-cost and able to respond in real time.
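The Sequence-Bucket strategy itself is not detailed in the abstract; purely as an illustration of how a bucket-based structure can answer resource-consumption ranking queries quickly, consider the following Python sketch. The class, bucket width and data are invented for illustration and are not the paper's algorithm.

```python
# Minimal sketch of bucket-based ranking of resource consumption
# (an illustrative guess at the flavour of such a structure,
# not the paper's actual Sequence-Bucket strategy).
from collections import defaultdict

class BucketRanking:
    def __init__(self, bucket_width=10.0):
        self.bucket_width = bucket_width
        self.buckets = defaultdict(dict)   # bucket index -> {node: usage}
        self.where = {}                    # node -> bucket index

    def update(self, node, usage_percent):
        old = self.where.get(node)
        new = int(usage_percent // self.bucket_width)
        if old is not None and old != new:
            del self.buckets[old][node]
        self.buckets[new][node] = usage_percent
        self.where[node] = new

    def top(self, k):
        """Return the k nodes with the highest consumption."""
        ranked = []
        for idx in sorted(self.buckets, reverse=True):
            ranked.extend(sorted(self.buckets[idx].items(),
                                 key=lambda kv: kv[1], reverse=True))
            if len(ranked) >= k:
                break
        return ranked[:k]

ranking = BucketRanking()
for node, cpu in [("host-1", 83.0), ("host-2", 41.5), ("vm-7", 91.2)]:
    ranking.update(node, cpu)
print(ranking.top(2))   # [('vm-7', 91.2), ('host-1', 83.0)]
```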

6.
The Software as a Service (SaaS) methodology is a key paradigm of Cloud computing. In this paper, we focus on an interesting topic: dynamically hosting services on existing production Grid infrastructures. Production Grids normally employ a Job-Submission-Execution (JSE) model with rigid access interfaces. We implement Cyberaide onServe, a lightweight middleware packaged as a virtual appliance, which realizes the SaaS model on production Grids by translating it to the JSE model. Cyberaide onServe can be deployed on demand in a virtual appliance, host users' software as a Web service, accept Web service invocations, and execute them on production Grids. We have deployed Cyberaide onServe on the TeraGrid, and the test results show that it provides SaaS functionality with good performance.

7.
Authorization infrastructures are an integral part of any network where resources need to be protected. As networks expand and organizations start to federate access to their resources, authorization infrastructures become increasingly difficult to manage. In this paper, we explore the automatic adaptation of authorization assets (policies and subject access rights) in order to manage federated authorization infrastructures. We demonstrate adaptation through a Self-Adaptive Authorization Framework (SAAF) controller that is capable of managing policy-based federated role/attribute access control authorization infrastructures. The SAAF controller implements a feedback loop to monitor the authorization infrastructure in terms of authorization assets and subject behavior, analyze potential adaptations for handling malicious behavior, and act upon authorization assets to control future authorization decisions. We evaluate a prototype of the SAAF controller by simulating malicious behavior within a deployed federated authorization infrastructure (federation), demonstrating the escalation of adaptation, along with a comparison of SAAF to current technology.
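The abstract describes the SAAF feedback loop only at a high level; the following generic Python sketch illustrates the monitor-analyze-act idea over authorization assets. The detection rule, data model and corrective action below are assumptions for illustration, not SAAF's actual design.

```python
# Generic monitor-analyse-act loop over authorization assets
# (an illustrative sketch of the feedback-loop idea; the rule,
# data model and actions are assumptions, not SAAF's design).

def monitor(audit_log):
    """Count denied requests per subject from an audit log."""
    denials = {}
    for entry in audit_log:
        if entry["decision"] == "deny":
            denials[entry["subject"]] = denials.get(entry["subject"], 0) + 1
    return denials

def analyse(denials, threshold=3):
    """Flag subjects whose denial count suggests malicious probing."""
    return [s for s, count in denials.items() if count >= threshold]

def act(policy, suspicious_subjects):
    """Adapt the authorization asset: revoke the subjects' access rights."""
    for subject in suspicious_subjects:
        policy["revoked"].add(subject)
    return policy

if __name__ == "__main__":
    log = [{"subject": "alice", "decision": "deny"}] * 4 + \
          [{"subject": "bob", "decision": "permit"}]
    policy = {"revoked": set()}
    policy = act(policy, analyse(monitor(log)))
    print(policy)   # {'revoked': {'alice'}}
```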

8.
The federated hybrid cloud is a model of service access and delivery for community cloud infrastructures. This model opens a window of opportunity for integrating enhanced science (eScience) with the Cloud paradigm. eScience is computationally intensive science carried out on highly distributed computing infrastructures. The big open issue for eScience on Cloud Computing is how to leverage on-demand computing in scientific research, which requires innovation at multiple levels, from architectural design to software platforms. This paper characterizes the requirements of a federated hybrid cloud model of Infrastructure as a Service (IaaS) to support eScience. Additionally, an architecture is defined for constructing Platform as a Service (PaaS) and Software as a Service (SaaS) in a resilient manner over federated resources. This architecture is named Rafhyc (Resilient Architecture of Federated HYbrid Clouds). The paper also describes a prototype implementation of the Rafhyc architecture that integrates an interoperable community middleware, named DIRAC, with federated hybrid clouds. In this way, DIRAC provides SaaS for scientific computing purposes, demonstrating that the Rafhyc architecture can bring together eScience and federated hybrid clouds.

9.
Security infrastructure is one of the most challenging aspects of the development, integration and deployment of Grid middleware. Even though the Grid community addresses security through public key infrastructures (PKI) that support mutual authentication using X.509 certificates, maintaining X.509 credentials is not easy for non-IT experts and has proved to be an obstacle to wider deployment of Grid technologies. Identity federation is an increasingly popular technology that can facilitate cross-domain single sign-on without requiring users to maintain any credentials in addition to their own institutional accounts. We believe that utilizing identity federation in Grid middleware is a promising path toward wider adoption of Grid technology. This paper describes a single sign-on infrastructure developed as part of the NorduGrid ARC (Advanced Resource Connector) Grid middleware. It adopts the identity federation standard SAML, as well as other Web Service standards, and focuses on a single sign-on solution at the middleware level that lets users access Grids with their everyday accounts, without having to maintain X.509 credentials. Users can access Grids built on the ARC middleware with only their username/password, and can also access Grids built on other middleware that requires users to provide X.509 certificates. Moreover, single sign-on for workflow-like Grid applications (in which intermediate entities act on behalf of users) is also supported. As an important aspect of single sign-on, authorization is also considered by implementing attribute-based authorization using the SAML standard. In addition, the performance of the single sign-on solution is measured; we identify performance limitations of the security-related services inside this solution and analyse ways to avoid them. To our knowledge, the work presented in this paper is the first evaluated implementation that utilizes identity federation for Grid usage at the middleware level.

10.
As IT infrastructures have recently shifted to cloud computing, the demand for cloud data centers has increased. Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to shared computing resources that can be rapidly provisioned and released with minimal management effort, and economic interest in data centers that provide cloud computing services is growing accordingly. This study analyzes the factors that improve power efficiency while securing the scalability of data centers, and presents considerations for cloud data center construction in terms of the power distribution method, the power density per rack and the unit of expansion. The results of this study may be used to make rational decisions concerning power input, voltage transformation and the unit of expansion when constructing a cloud data center or migrating an existing data center to a cloud data center.

11.
Scientific workflow orchestration interoperating HTC and HPC resources
In this work we describe our developments towards providing a unified access method to different types of computing infrastructures at the interoperation level. To that end, we have developed a middleware suite that bridges non-interoperable middleware stacks used for building distributed computing infrastructures, namely UNICORE and gLite. Our solution allows HPC and HTC resources to be transparently accessed and operated on from a single interface. Using Kepler as the workflow manager, we provide users with the integration of codes needed to create scientific workflows that access both types of infrastructure.

12.
13.
Cloud manufacturing is becoming an increasingly popular enterprise model in which computing resources are made available to the user on demand. Cloud manufacturing aims to provide low cost, resource sharing and effective coordination. In this study, we present a genetic algorithm (GA) based approach to resource-constrained project scheduling, incorporating a number of new ideas (enhancements and local search), for solving computing resource allocation problems in a cloud manufacturing system. A newly generated offspring may not be feasible due to task precedence and resource availability constraints, so conflict resolution and enhancements are performed on newly generated offspring after crossover or mutation. The local search exploits the neighborhood of solutions to find better schedules. Due to its complex characteristics, computing resource allocation in a cloud manufacturing system is NP-hard. Computational results show that the proposed GA can rapidly provide a good-quality schedule that optimally allocates computing resources and satisfies users' demands.
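The abstract mentions that infeasible offspring are repaired after crossover or mutation; the following minimal Python sketch illustrates that general repair-after-crossover idea for a task ordering with precedence constraints. The tasks, constraints, crossover operator and repair rule are illustrative assumptions, not the paper's algorithm, and the resource constraints, enhancements and local search are omitted.

```python
# Minimal sketch of the repair-after-crossover idea for a GA over
# task orderings with precedence constraints (illustrative only).
import random

TASKS = ["a", "b", "c", "d"]
PRECEDENCE = {("a", "c"), ("b", "d")}   # (x, y): x must come before y

def repair(order):
    """Greedily rebuild a feasible ordering: at each step place the first
    task (in the given order) whose predecessors have all been placed."""
    preds = {t: {x for x, y in PRECEDENCE if y == t} for t in order}
    remaining, feasible = list(order), []
    while remaining:
        for t in remaining:
            if preds[t] <= set(feasible):
                feasible.append(t)
                remaining.remove(t)
                break
    return feasible

def crossover(p1, p2):
    """Order crossover: keep a prefix of p1, fill the rest from p2, repair."""
    cut = random.randint(1, len(p1) - 1)
    head = p1[:cut]
    tail = [t for t in p2 if t not in head]
    return repair(head + tail)

random.seed(0)
parent1 = repair(random.sample(TASKS, len(TASKS)))
parent2 = repair(random.sample(TASKS, len(TASKS)))
print(crossover(parent1, parent2))   # a feasible child ordering
```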

14.
The inherent complexity of modern cloud infrastructures has created the need for innovative monitoring approaches, as state-of-the-art solutions used for other large-scale environments do not address specific cloud features. Although cloud monitoring is nowadays an active research field, a comprehensive study covering all its aspects has not been presented yet. This paper provides a deep insight into cloud monitoring. It proposes a unified cloud monitoring taxonomy, based on which it defines a layered cloud monitoring architecture. To illustrate it, we have implemented GMonE, a general-purpose cloud monitoring tool which covers all aspects of cloud monitoring by specifically addressing the needs of modern cloud infrastructures. Furthermore, we have evaluated the performance, scalability and overhead of GMonE with the Yahoo! Cloud Serving Benchmark (YCSB), using the OpenNebula cloud middleware on the Grid'5000 experimental testbed. The results of this evaluation demonstrate the benefits of our approach, surpassing the monitoring performance and capabilities of the cloud monitoring alternatives present in state-of-the-art systems such as Amazon EC2 and OpenNebula.

15.
16.
To meet the challenges of consistent performance, low communication latency, and a high degree of user mobility, cloud and Telecom infrastructure vendors and operators foresee a Mobile Cloud Network that incorporates public cloud infrastructures with cloud-augmented Telecom nodes in forthcoming mobile access networks. A Mobile Cloud Network is composed of distributed cost- and capacity-heterogeneous resources that host applications that in turn are subject to a spatially and quantitatively rapidly changing demand. Such an infrastructure requires a holistic management approach that ensures that the resident applications' performance requirements are met while being sustainably supported by the underlying infrastructure. The contribution of this paper is three-fold. Firstly, this paper contributes a model that captures the cost- and capacity-heterogeneity of a Mobile Cloud Network infrastructure. The model bridges the Mobile Edge Computing and Distributed Cloud paradigms by modelling multiple tiers of resources across the network, and serves not just mobile devices but any client beyond and within the network. A set of resource management challenges is presented based on this model. Secondly, an algorithm that holistically and optimally solves these challenges is proposed. The algorithm is formulated as an application placement method that incorporates aspects of network link capacity, desired user latency and user mobility, as well as data centre resource utilisation and server provisioning costs. Thirdly, to address scalability, a tractable locally optimal algorithm is presented. The evaluation demonstrates that the placement algorithm significantly improves latency and resource utilisation skewness while minimising the operational cost of the system. Additionally, the proposed model and evaluation method demonstrate the viability of dynamic resource management of the Mobile Cloud Network and the need to accommodate rapidly mobile demand in a holistic manner.
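The paper's placement algorithms are not given in the abstract; the following naive greedy Python sketch only illustrates the underlying trade-off between provisioning cost and user latency on cost- and capacity-heterogeneous sites. The sites, weights and greedy rule are assumptions, not the authors' optimal or locally optimal algorithms.

```python
# Illustrative greedy sketch of placing application instances on
# cost- and capacity-heterogeneous data centres, trading off user
# latency against provisioning cost (all data and rules are assumed).

DATACENTRES = {                       # capacity in instance slots
    "edge-1": {"capacity": 2, "cost": 5.0, "latency_ms": 10},
    "edge-2": {"capacity": 2, "cost": 5.0, "latency_ms": 12},
    "core":   {"capacity": 8, "cost": 1.0, "latency_ms": 40},
}

def place(demand, latency_weight=0.1):
    """Assign `demand` instances one by one to the cheapest feasible site,
    where 'cheapest' combines provisioning cost and a latency penalty."""
    used = {name: 0 for name in DATACENTRES}
    placement = []
    for _ in range(demand):
        candidates = [n for n, dc in DATACENTRES.items()
                      if used[n] < dc["capacity"]]
        if not candidates:
            raise RuntimeError("demand exceeds total capacity")
        best = min(candidates,
                   key=lambda n: DATACENTRES[n]["cost"]
                   + latency_weight * DATACENTRES[n]["latency_ms"])
        used[best] += 1
        placement.append(best)
    return placement

print(place(5))                       # cost-dominated: everything on 'core'
print(place(5, latency_weight=0.5))   # latency-dominated: edge sites fill first
```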

17.
A problem commonly faced in Computer Science research is the lack of real usage data that can be used for the validation of algorithms. This problem is particularly acute in Cloud Computing: the privacy of data managed by commercial Cloud infrastructures, together with their massive scale, means that such data are rarely available to the research community. Because of this scale, many assumptions must be made when designing resource allocation algorithms for Cloud infrastructures in order to make the problem tractable. This paper provides a deep analysis of a cluster data trace recently released by Google and focuses on a number of questions which have not been addressed in previous studies. In particular, we describe the characteristics of job resource usage in terms of dynamics (how it varies with time), correlation between jobs (identifying daily and/or weekly patterns), and correlation within jobs between the different resources (the dependence of memory usage on CPU usage). From this analysis, we propose a way to formalize the allocation problem on such platforms which encompasses most job features from the trace with a small set of parameters.
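As a rough illustration of the kind of per-job analysis described above, the following Python/pandas sketch computes usage dynamics, within-job CPU/memory correlation and between-job correlation on a tiny, made-up usage table. The column names and data are assumptions, not the actual schema of the Google cluster trace.

```python
# Illustrative per-job analysis of a hypothetical usage table
# (column names and values are assumptions, not the Google trace schema).
import pandas as pd

usage = pd.DataFrame({
    "job_id":    [1, 1, 1, 2, 2, 2],
    "hour":      [0, 1, 2, 0, 1, 2],
    "cpu_usage": [0.20, 0.35, 0.50, 0.10, 0.12, 0.11],
    "mem_usage": [0.30, 0.42, 0.55, 0.25, 0.26, 0.25],
})

# Dynamics: how each job's CPU usage varies with time.
dynamics = usage.pivot(index="hour", columns="job_id", values="cpu_usage")

# Correlation inside jobs between resources (memory vs. CPU usage).
per_job_corr = usage.groupby("job_id")[["cpu_usage", "mem_usage"]].apply(
    lambda g: g["cpu_usage"].corr(g["mem_usage"]))

# Correlation between jobs (e.g. to look for shared daily patterns).
between_jobs = dynamics.corr()

print(per_job_corr)
print(between_jobs)
```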

18.
The ability to support Quality of Service (QoS) constraints is an important requirement in some scientific applications. With the increasing use of Cloud computing infrastructures, where access to resources is shared, dynamic and provisioned on demand, identifying how QoS constraints can be supported becomes an important challenge. However, access to dedicated resources is often not possible in existing Cloud deployments, and only limited QoS guarantees are provided by many commercial providers (often restricted to error rate and availability, rather than particular QoS metrics such as latency or access time). We propose a workflow system architecture which enforces QoS for the simultaneous execution of multiple scientific workflows over a shared infrastructure (such as a Cloud environment). Our approach involves multiple pipeline workflow instances, with each instance having its own QoS requirements. These workflows are composed of a number of stages, with each stage being mapped to one or more physical resources. A stage involves a combination of data access, computation and data transfer capability. A token-bucket-based data throttling framework is embedded into the workflow system architecture. Each workflow instance stage regulates the amount of data that is injected into the shared resources, allowing bursts of data to be injected while at the same time providing isolation of workflow streams. We demonstrate our approach using the Montage workflow, and develop a Reference net model of the workflow.
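The token-bucket regulator at the heart of the throttling framework is a standard construction; a minimal, generic Python sketch is shown below. The rates, capacities and the way a workflow stage would use it are illustrative assumptions, not the paper's implementation.

```python
# Minimal token-bucket throttler of the kind described above
# (a generic sketch; the paper's workflow-system integration and
# Reference net model are not shown).
import time

class TokenBucket:
    def __init__(self, rate_bytes_per_s, capacity_bytes):
        self.rate = rate_bytes_per_s
        self.capacity = capacity_bytes
        self.tokens = capacity_bytes      # allow an initial burst
        self.last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now

    def send(self, nbytes):
        """Block until `nbytes` may be injected into the shared resource."""
        while True:
            self._refill()
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return
            time.sleep((nbytes - self.tokens) / self.rate)

# Each workflow-stage instance would own its own bucket, isolating streams.
bucket = TokenBucket(rate_bytes_per_s=1_000_000, capacity_bytes=250_000)
for chunk in (100_000, 200_000, 200_000):
    bucket.send(chunk)   # bursts up to the bucket capacity, then ~1 MB/s
```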

19.
Cloud computing is a recent advancement wherein IT infrastructure and applications are provided as 'services' to end-users under a usage-based payment model. It can leverage virtualized services on the fly, based on requirements (workload patterns and QoS) that vary with time. The application services hosted under the Cloud computing model have complex provisioning, composition, configuration, and deployment requirements. Evaluating the performance of Cloud provisioning policies, application workload models, and resource performance models in a repeatable manner under varying system and user configurations and requirements is difficult to achieve. To overcome this challenge, we propose CloudSim: an extensible simulation toolkit that enables modeling and simulation of Cloud computing systems and application provisioning environments. The CloudSim toolkit supports both system and behavior modeling of Cloud system components such as data centers, virtual machines (VMs) and resource provisioning policies. It implements generic application provisioning techniques that can be extended with ease and limited effort. Currently, it supports modeling and simulation of Cloud computing environments consisting of both single and inter-networked clouds (federations of clouds). Moreover, it exposes custom interfaces for implementing policies and provisioning techniques for the allocation of VMs under inter-networked Cloud computing scenarios. Several researchers from organizations such as HP Labs in the U.S.A. are using CloudSim in their investigations of Cloud resource provisioning and energy-efficient management of data center resources. The usefulness of CloudSim is demonstrated by a case study involving dynamic provisioning of application services in a hybrid federated cloud environment. The results of this case study show that the federated Cloud computing model significantly improves the application QoS requirements under fluctuating resource and service demand patterns. Copyright © 2010 John Wiley & Sons, Ltd.

20.
As the size and complexity of Cloud systems increase, the manual management of these solutions becomes challenging, as more personnel, resources and expertise are needed. Service Level Agreement (SLA)-aware autonomic cloud solutions enable the management of large-scale infrastructures while supporting multiple dynamic requirements from users. This paper contributes to these topics by introducing Cloudcompaas, an SLA-aware PaaS Cloud platform that manages the complete resource lifecycle. The platform features an extension of the SLA specification WS-Agreement, tailored to the specific needs of Cloud Computing. In particular, Cloudcompaas provides Cloud providers with a generic SLA model to deal with higher-level metrics, closer to end-user perception, and with flexible composition of the requirements of multiple actors in the computational scene. Moreover, Cloudcompaas provides a framework for general Cloud computing applications that can be dynamically adapted to correct QoS violations by using the elasticity features of Cloud infrastructures. The effectiveness of this solution is demonstrated through a simulation that considers several realistic workload profiles, in which Cloudcompaas achieves minimum cost and maximum efficiency under highly heterogeneous utilization patterns.
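The WS-Agreement extension and the correction framework are only summarized in the abstract; the following Python sketch merely illustrates the general SLA-driven elasticity idea of comparing observed metrics against agreed guarantees and reacting with a scaling action. The guarantee terms, metric names and corrective action are assumptions, not Cloudcompaas' WS-Agreement model.

```python
# Illustrative sketch of SLA-driven elasticity: detect guarantee
# violations and scale out (all terms and actions are assumptions,
# not Cloudcompaas' actual SLA model).

GUARANTEES = {                       # higher-level, user-perceived metrics
    "response_time_ms": {"max": 200},
    "availability":     {"min": 0.99},
}

def violations(observed):
    """Return the guarantee terms violated by the observed metrics."""
    violated = []
    for metric, bounds in GUARANTEES.items():
        value = observed[metric]
        if "max" in bounds and value > bounds["max"]:
            violated.append(metric)
        if "min" in bounds and value < bounds["min"]:
            violated.append(metric)
    return violated

def correct(violated, replicas):
    """Naive corrective action: scale out when response time degrades."""
    if "response_time_ms" in violated:
        replicas += 1
    return replicas

replicas = 2
observed = {"response_time_ms": 340, "availability": 0.995}
replicas = correct(violations(observed), replicas)
print(replicas)   # 3: one replica added to correct the violation
```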
