Similar Documents
 20 similar documents found (search time: 671 ms)
1.
Cloud storage is one of the main applications of cloud computing. With data services in the cloud, users are able to outsource their data to the cloud and to access and share the outsourced data from the cloud server anywhere and anytime. However, this new paradigm of data outsourcing also introduces new security challenges, among which is how to ensure the integrity of the outsourced data. Although cloud storage providers commit to a reliable and secure environment, data integrity can still be damaged by human carelessness, hardware/software failures, or attacks from external adversaries. Therefore, it is of great importance for users to audit the integrity of the data they outsource to the cloud. In this paper, we first design an auditing framework for cloud storage and propose an algebraic-signature-based remote data possession checking protocol, which allows a third party to audit the integrity of the outsourced data on behalf of the users and supports an unlimited number of verifications. We then extend our auditing protocol to support dynamic data operations, including data update, insertion, and deletion. The analysis and experimental results demonstrate that the proposed schemes are secure and efficient.
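The abstract does not spell out the protocol, but algebraic signatures are attractive for remote checking because the signature of a combination of blocks equals the combination of the blocks' signatures, so the server can answer a challenge with one small value. A toy sketch of that property, using an XOR-of-words "signature" (purely illustrative, not the paper's construction):

```python
from functools import reduce

def alg_sig(block: bytes, width: int = 8) -> int:
    """Toy 'algebraic' signature: XOR of fixed-width words of the block."""
    words = [int.from_bytes(block[i:i + width], "big")
             for i in range(0, len(block), width)]
    return reduce(lambda a, b: a ^ b, words, 0)

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# The homomorphic property an auditing protocol exploits:
# sig(a XOR b) == sig(a) XOR sig(b), so the verifier can check a
# combined response instead of downloading the full blocks.
a = b"block-A of outsourced data......"
b = b"block-B of outsourced data......"
assert alg_sig(xor_bytes(a, b)) == alg_sig(a) ^ alg_sig(b)
print("signature homomorphism holds")
```

Because signatures are tiny and the check needs no secret state, the third-party auditor can repeat it an unlimited number of times.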

2.
With the increasing popularity of cloud services, attacks on the cloud infrastructure have also increased dramatically. In particular, monitoring the integrity of cloud execution environments remains a difficult task. In this paper, a real-time dynamic integrity validation (DIV) framework is proposed to monitor the integrity of virtual-machine-based execution environments in the cloud. DIV can check the integrity of the whole architecture stack, from the cloud servers up to the VM OS, by extending the current trusted chain into the virtual machine's architecture stack. DIV introduces a trusted third party (TTP) that collects integrity information and periodically detects integrity violations on VMs remotely, avoiding heavy involvement of cloud tenants and unnecessary information leakage by the cloud providers. To evaluate the effectiveness and efficiency of the DIV framework, a prototype on KVM/QEMU is implemented, and extensive analysis and experimental evaluation are performed. Experimental results show that DIV can efficiently validate the integrity of files and loaded programs in real time, with minor performance overhead.
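The abstract does not give DIV's measurement format; a common way to build such a trusted chain is a TPM-style hash chain, where each layer of the stack is measured into a register before it runs. A minimal sketch of that idea (the layer names and the TTP workflow are illustrative assumptions):

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new register = H(old register || H(component))."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

# Measure each layer in boot order, from the cloud server up to the VM OS.
layers = [b"cloud-server-bios", b"hypervisor", b"vm-bootloader", b"vm-os-kernel"]
pcr = bytes(32)  # register starts at all zeros
for layer in layers:
    pcr = extend(pcr, layer)
golden = pcr  # reference value the TTP records at deployment time

# Later, the TTP replays the reported measurements; a tampered layer
# changes the final value, so the violation is detected remotely.
pcr2 = bytes(32)
for layer in [b"cloud-server-bios", b"hypervisor", b"rootkit", b"vm-os-kernel"]:
    pcr2 = extend(pcr2, layer)
assert pcr2 != golden
print("integrity violation detected")
```

The extend operation is order-sensitive, so the chain attests not just which components loaded but the sequence in which they loaded.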

3.
MapReduce has emerged as a popular computing model used in data centers to process large datasets. In the map phase, hash partitioning is employed to distribute data sharing the same key across the nodes of a datacenter-scale cluster. However, we observe that this approach can lead to uneven data distribution, resulting in skewed loads among reduce tasks and thus hampering the performance of MapReduce systems. Moreover, worker nodes in MapReduce systems may differ in computing capability due to (1) multiple generations of hardware in non-virtualized data centers, or (2) co-location of virtual machines in virtualized data centers. Such heterogeneity among cluster nodes exacerbates the negative effects of uneven data distribution. To improve MapReduce performance in heterogeneous clusters, we propose a novel load-balancing approach for the reduce phase. The approach consists of two components: (1) performance prediction for reducers running on heterogeneous nodes, based on support vector machine models, and (2) heterogeneity-aware partitioning (HAP), which balances skewed data across reduce tasks. We implement this approach as a plug-in for a current MapReduce system. Experimental results demonstrate that the proposed approach distributes work evenly among reduce tasks and improves MapReduce performance with little overhead.
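The paper's exact HAP algorithm is not given in the abstract; a natural greedy realization of the idea is to assign key groups, largest first, to whichever reducer would have the smallest predicted finish time (load divided by predicted capacity). A sketch under that assumption:

```python
def hap_partition(key_sizes, capacities):
    """Greedy heterogeneity-aware partitioning (illustrative, not the
    paper's exact algorithm): give the largest remaining key group to
    the reducer whose predicted finish time (load / capacity) stays lowest."""
    loads = [0.0] * len(capacities)
    assign = {}
    for key, size in sorted(key_sizes.items(), key=lambda kv: -kv[1]):
        r = min(range(len(capacities)),
                key=lambda i: (loads[i] + size) / capacities[i])
        loads[r] += size
        assign[key] = r
    return assign, loads

key_sizes = {"a": 90, "b": 60, "c": 30, "d": 30, "e": 10}
capacities = [2.0, 1.0]          # node 0 is predicted to be twice as fast
assign, loads = hap_partition(key_sizes, capacities)
print(assign, [l / c for l, c in zip(loads, capacities)])
```

Unlike plain hash partitioning, the finish-time criterion lets the fast node absorb more data, so both reducers complete at roughly the same time.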

4.
Resource scheduling algorithms for ForCES (Forwarding and Control Element Separation) networks need to meet the flexibility, programmability, and scalability requirements of node resources. The DBC (Deadline Budget Constraint) algorithm relies on users selecting either cost or time priority and then schedules to meet that requirement. However, this user priority strategy is relatively simple and cannot adapt to dynamic changes of resources, which inevitably reduces QoS. To improve QoS, we draw on the economic model and resource scheduling model of cloud computing, use the SLA (Service Level Agreement) as the pricing strategy, and, on the basis of the DBC algorithm, propose a DABP (Deadline And Budget Priority) algorithm for ForCES networks; DABP combines both budget and time priority in scheduling. In simulations and tests, we compare the task finish time and cost of the DABP algorithm with those of the DP (Deadline Priority) and BP (Budget Priority) algorithms. The results show that DABP completes tasks at lower cost within the deadline and is beneficial to load balancing in ForCES networks.
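The abstract only says DABP combines both priorities; one simple way to combine them is to keep every resource that satisfies the deadline and the budget, then pick the cheapest of those. A sketch under that assumption (resource names and numbers are made up):

```python
def dabp_select(task_len, resources, deadline, budget):
    """Pick a resource under both deadline and budget constraints
    (a simplified sketch of the combined priority, not the paper's
    exact DABP). resources: list of (name, speed, price_per_unit_time)."""
    feasible = []
    for name, speed, price in resources:
        t = task_len / speed
        cost = t * price
        if t <= deadline and cost <= budget:
            feasible.append((cost, t, name))
    if not feasible:
        return None  # no resource satisfies both constraints
    cost, t, name = min(feasible)  # cheapest among deadline-feasible choices
    return name, t, cost

resources = [("fast", 10.0, 5.0), ("mid", 5.0, 2.0), ("slow", 2.0, 1.0)]
print(dabp_select(100.0, resources, deadline=25.0, budget=60.0))
```

Pure DP would always take the fast, expensive resource and pure BP the slow, cheap one; filtering on both constraints before minimizing cost captures the trade-off the paper aims at.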

5.
The failure detection module is one of the important components in fault-tolerant distributed systems, especially cloud platforms. However, fast and accurate failure detection becomes more and more difficult, especially when the status of the network and other resources keeps changing. This study presents an efficient adaptive failure detection mechanism based on the Volterra series, which can make predictions from a small amount of data. The mechanism uses a Volterra filter for time-series prediction and a decision tree for decision making. The major contributions are applying a Volterra filter to cloud failure prediction and introducing a user factor for different QoS requirements in different modules and levels of IaaS. A detailed implementation is proposed, and an evaluation is performed in experimental environments in Beijing and Guangzhou.
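A truncated second-order Volterra series predicts the next sample from linear terms plus pairwise products of past samples. The kernels below are made-up constants purely to show the evaluation; in the paper's setting they would be estimated from monitoring data:

```python
def volterra_predict(x, h0, h1, h2):
    """One-step prediction with a truncated second-order Volterra series:
    y = h0 + sum_i h1[i]*x[-1-i] + sum_{(i,j)} h2[(i,j)]*x[-1-i]*x[-1-j]."""
    y = h0
    for i, k in enumerate(h1):
        y += k * x[-1 - i]
    for (i, j), k in h2.items():
        y += k * x[-1 - i] * x[-1 - j]
    return y

# Illustrative kernels for a memory-2 quadratic system (not fitted values).
h0, h1, h2 = 0.1, [0.5, -0.2], {(0, 0): 0.05, (0, 1): 0.02}
x = [1.0, 2.0, 1.5, 1.8]      # recent heartbeat delays, say
print(volterra_predict(x, h0, h1, h2))
```

The quadratic terms are what distinguish this from a plain linear filter: they let the predictor capture the nonlinear load/latency interactions that make static timeout thresholds inaccurate.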

6.
Cloud computing systems play a vital role in national security. This paper describes a conceptual framework called dual-system architecture for protecting computing environments. While attempting to be logical and rigorous, it avoids heavy formalism and instead adopts the process algebra Communicating Sequential Processes (CSP).

7.
This paper proposes an analytical mining tool for big graph data based on MapReduce and the bulk synchronous parallel (BSP) computing model. The tool is named the MapReduce- and BSP-based Graph-mining tool (MBGM). The core of this mining system is four sets of parallel graph-mining algorithms programmed in the BSP parallel model and one set of extraction-transformation-loading (ETL) algorithms implemented in MapReduce. To invoke these algorithm sets, we designed a workflow engine optimized for cloud computing. Finally, a well-designed data management function enables users to view, delete, and input data in the Hadoop distributed file system (HDFS). Experiments on artificial data show that the graph-mining components of MBGM are efficient.

8.
Integration of the cloud desktop and the cloud storage platform is urgent for enterprises. However, current cloud disk proposals are not satisfactory in terms of decoupling virtual computing from business data storage in the cloud desktop environment. In this paper, we present a new virtual disk mapping method for cloud desktop storage. On Windows, compared with the virtual hard disk method of popular cloud disks, our client implementation, based on a virtual disk driver and a file system filter driver, is usable across widespread desktop environments, especially cloud desktops with limited storage resources. Furthermore, our method supports customizable local cache storage, resulting in a user-friendly experience for thin clients of the cloud desktop. The evaluation results show that our virtual disk mapping method performs well in read-write throughput across files of different scales.
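The paper's driver code is not shown; the "customizable local cache" it mentions is commonly realized as a read-through LRU cache in front of the remote disk. A minimal sketch of that pattern (the `fetch` callback stands in for the remote cloud-storage read):

```python
from collections import OrderedDict

class BlockCache:
    """Read-through LRU cache for virtual-disk blocks (illustrative of a
    customizable local cache, not the paper's driver implementation)."""
    def __init__(self, capacity, fetch):
        self.capacity, self.fetch = capacity, fetch  # fetch: block_id -> bytes
        self.blocks = OrderedDict()
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.blocks:
            self.hits += 1
            self.blocks.move_to_end(block_id)        # mark most recently used
            return self.blocks[block_id]
        self.misses += 1
        data = self.fetch(block_id)                  # remote cloud-storage read
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)          # evict least recently used
        return data

cache = BlockCache(2, fetch=lambda bid: b"data-%d" % bid)
for bid in [1, 2, 1, 3, 1]:
    cache.read(bid)
print(cache.hits, cache.misses)
```

Making `capacity` a configuration knob is what lets the same client serve both storage-constrained thin clients and richer desktops.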

9.
MTBAC: A Mutual Trust Based Access Control Model in Cloud Computing   Total citations: 1 (self-citations: 0, citations by others: 1)
As a new computing mode, cloud computing can provide users with virtualized and scalable web services, but it also faces serious security challenges. Access control is one of the most important measures to ensure the security of cloud computing, yet applying a traditional access control model to the cloud directly cannot address the uncertainty and vulnerability caused by the open conditions of cloud computing. In a cloud computing environment, data security can be effectively guaranteed during interactions between users and the cloud only when the security and reliability of both interacting parties are ensured. Therefore, building a mutual trust relationship between users and the cloud platform is the key to implementing new kinds of access control methods in cloud computing environments. Combining this with Trust Management (TM), a mutual trust based access control (MTBAC) model is proposed in this paper. The MTBAC model takes both the user's behavior trust and the cloud service node's credibility into consideration, and trust relationships between users and cloud service nodes are established by a mutual trust mechanism. Security problems of access control are addressed by applying the MTBAC model to the cloud computing environment. Simulation experiments show that the MTBAC model can secure the interaction between users and cloud service nodes.
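The abstract gives the mutual-trust idea but not the formulas; one simplified reading is that access is granted only when both the user's behavior trust and the node's credibility clear a threshold, with recent behavior weighted more heavily. A sketch under those assumptions (thresholds and decay factor are illustrative):

```python
def behavior_trust(history, decay=0.8):
    """Exponentially weighted trust score: `history` lists interaction
    outcomes in [0, 1], newest first; recent behavior counts more."""
    weights = [decay ** i for i in range(len(history))]
    return sum(w * h for w, h in zip(weights, history)) / sum(weights)

def mutual_trust_decision(user_trust, node_credibility,
                          user_threshold=0.6, node_threshold=0.6):
    """Grant access only when BOTH sides are trusted (a simplified
    reading of MTBAC's mutual-trust mechanism, not the paper's model)."""
    return user_trust >= user_threshold and node_credibility >= node_threshold

user = behavior_trust([1.0, 1.0, 0.5])   # user's recent behavior, newest first
node = behavior_trust([1.0, 0.9, 1.0])   # node's recent service quality
print(mutual_trust_decision(user, node))
```

The symmetry is the point: a malicious user is blocked by a low behavior-trust score, and an unreliable service node is avoided by users in exactly the same way.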

10.
In order to achieve fine-grained access control in cloud computing, existing digital rights management (DRM) schemes adopt attribute-based encryption as the main encryption primitive. However, these schemes suffer from inefficiency and cannot support dynamic updating of usage rights stored in the cloud. In this paper, we propose a novel DRM scheme with secure key management and dynamic usage control in cloud computing. We present a secure key management mechanism based on attribute-based encryption and proxy re-encryption. Only users whose attributes satisfy the access policy of the encrypted content and who hold effective usage rights are able to recover the content encryption key and further decrypt the content. The attribute-based mechanism allows the content provider to selectively enforce fine-grained access control of contents among a set of users, and also enables the license server to implement immediate attribute and user revocation. Moreover, our scheme supports privacy-preserving dynamic usage control based on additive homomorphic encryption, which allows the license server in the cloud to update users' usage rights dynamically without learning the plaintext. Extensive analytical results indicate that the proposed scheme is secure and efficient.
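The abstract names additive homomorphic encryption without fixing a scheme; Paillier is the standard choice for this role. A textbook sketch with deliberately tiny primes (NOT secure; real deployments use 2048-bit moduli and a vetted library) showing how a server can add to an encrypted usage count without seeing it:

```python
import math, random

# Minimal textbook Paillier with toy primes -- for illustration only.
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # modular inverse (Python 3.8+)

def enc(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

# The license server multiplies ciphertexts to ADD the underlying
# usage counts, never learning either plaintext:
c = (enc(30) * enc(12)) % n2
assert dec(c) == 42
print("E(30) * E(12) decrypts to", dec(c))
```

Multiplication of ciphertexts mapping to addition of plaintexts is exactly the property that lets rights be updated in the cloud while staying encrypted.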

11.
To provide a practicable solution for data confidentiality in cloud storage services, we propose a data assured deletion scheme that achieves fine-grained access control, resistance to hopping and sniffing attacks, data dynamics, and deduplication. In our scheme, data blocks are encrypted by a two-level encryption approach, in which the control keys are generated from a key derivation tree, encrypted by an All-Or-Nothing algorithm, and then distributed into a DHT network after being partitioned by secret sharing. This guarantees that only authorized users can recover the control keys and decrypt the outsourced data within an owner-specified data lifetime. Besides confidentiality, data dynamics and deduplication are achieved by adjustment of the key derivation tree and by convergent encryption, respectively. The analysis and experimental results show that our scheme satisfies its security goals and performs assured deletion at low cost.
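The core of assured deletion here is that the control key never exists in one place: it is split into shares scattered in the DHT, so destroying shares destroys the key. The paper partitions with secret sharing; the sketch below uses the simplest (n, n) XOR variant to show the property (the paper's actual construction also involves All-Or-Nothing encryption and may be threshold-based):

```python
import os

def split_key(key: bytes, n: int):
    """(n, n) XOR secret sharing: ALL n shares are needed to rebuild the
    control key, so destroying any one share assures deletion."""
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def combine(shares):
    key = bytes(len(shares[0]))
    for s in shares:
        key = bytes(a ^ b for a, b in zip(key, s))
    return key

key = os.urandom(16)            # control key from the derivation tree
shares = split_key(key, 5)      # distributed into the DHT network
assert combine(shares) == key   # authorized users recover the key
assert combine(shares[1:]) != key   # one share gone -> key unrecoverable
print("assured deletion property holds")
```

Since DHT nodes naturally age out stored values, letting the shares expire at the end of the owner-specified lifetime deletes the data without having to touch the (possibly replicated) ciphertext at all.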

12.
A software-defined data center (SDDC) is a vision of the future. An SDDC brings together software-defined compute, software-defined networking, software-defined storage, a software-defined hypervisor, software-defined availability, and software-defined security. It also unifies the control planes of each individual software-defined component. A unified control plane enables rich resource abstractions for purpose-fit orchestration systems and/or programmable infrastructures, which in turn enables dynamic optimization according to business requirements.

13.
Cloud computing is very attractive for schools, research institutions, and enterprises that need to reduce IT costs, improve computing platform sharing, and meet license constraints. Sharing, management, and on-demand allocation of network resources are particularly important in cloud computing. However, nearly all currently available cloud computing platforms are either proprietary or have software infrastructure that is invisible to the research community, except for a few open-source platforms. Universities and research institutes need more open and testable lab-level experimental platforms built from PCs. In this paper, a platform for infrastructure resource sharing (Platform as a Service, PaaS) is developed in a virtual cloud computing environment. Its architecture, core modules, main functions, design, operational environment, and applications are introduced in detail. The platform has good expandability, improves resource sharing and utilization, and is applied to regular computer science teaching and research.

14.
With increasingly complex website structures and continuously advancing web technologies, accurate recognition of user clicks from massive HTTP data, which is critical for web usage mining, becomes more difficult. In this paper, we propose a dependency graph model to describe the relationships between web requests. Based on this model, we design and implement a heuristic parallel algorithm to distinguish user clicks with the assistance of cloud computing technology. We evaluate the proposed algorithm on real massive data: a 228.7 GB dataset collected from a mobile core network, covering more than three million users. The experimental results demonstrate that the proposed algorithm achieves higher accuracy than previous methods.
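The key distinction the dependency graph captures is between requests a user initiated and requests the browser fetched automatically (images, scripts, iframes). One simple heuristic in that spirit, not the paper's algorithm: an HTML request counts as a click unless it follows its referrer almost immediately, in which case it was probably auto-fetched:

```python
def detect_clicks(requests, embed_gap=1.0):
    """Heuristic user-click detection (a simplified sketch of the
    dependency-graph idea): an HTML request is a user click unless it
    follows its referrer within `embed_gap` seconds, which marks it as
    an automatically fetched embedded frame/resource."""
    last_seen = {}   # url -> timestamp of the most recent request for it
    clicks = []
    for t, url, referrer, is_html in requests:
        auto = referrer in last_seen and t - last_seen[referrer] <= embed_gap
        if is_html and not auto:
            clicks.append(url)
        last_seen[url] = t
    return clicks

requests = [                             # (time, url, referrer, is_html)
    (0.0, "/home", None, True),          # user types the URL
    (0.2, "/ad-frame", "/home", True),   # iframe, fetched immediately
    (0.3, "/logo.png", "/home", False),  # embedded image
    (9.0, "/news", "/home", True),       # user clicks a link seconds later
]
print(detect_clicks(requests))
```

Because the referrer edges partition the log into independent per-page subgraphs, this classification parallelizes naturally, which is what makes the cloud-scale implementation practical on hundreds of gigabytes.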

15.
Cloud computing and storage services allow clients to move their data and applications to large centralized data centers and thus avoid the burden of local data storage and maintenance. However, this poses new challenges for creating secure and reliable data storage over unreliable service providers. In this study, we address the problem of ensuring the integrity of data storage in cloud computing. In particular, we consider methods for reducing the burden of generating a constant amount of metadata at the client side. By exploiting some good attributes of the bilinear group, we devise a simple and efficient audit service for public verification of untrusted and outsourced storage, which can be important for achieving widespread deployment of cloud computing. Whereas many prior studies on ensuring remote data integrity did not consider the burden of generating verification metadata at the client side, the objective of this study is to resolve this issue. Moreover, our scheme also supports data dynamics and public verifiability. Extensive security and performance analysis shows that the proposed scheme is highly efficient and provably secure.

16.
In IaaS clouds, different mappings between virtual machines (VMs) and physical machines (PMs) lead to different resource utilization, so how to place VMs on PMs to reduce energy consumption has become a major concern for cloud providers. Existing VM scheduling schemes optimize the utilization of PMs or of network resources, but few attempt to improve the energy efficiency of both kinds of resources simultaneously. This paper proposes a VM scheduling scheme that meets multiple resource constraints, such as physical server capacity (CPU, memory, storage, bandwidth, etc.) and network link capacity, to reduce both the number of active PMs and the number of active network elements, and thus the overall energy consumption. The VM scheduling problem is abstracted as a combination of the bin packing problem and the quadratic assignment problem, a classic combinatorial optimization problem that is NP-hard. Accordingly, we design a two-stage heuristic algorithm to solve it, and simulations show that our solution outperforms existing PM-only and network-only optimization solutions.
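The bin-packing half of the problem is typically attacked with a first-fit-decreasing heuristic: sort VMs by size and place each on the first PM with enough residual capacity, opening a new PM only when none fits. A sketch of that stage only (the paper's second stage, which handles network-link constraints, is omitted; all sizes are illustrative):

```python
def place_vms(vms, pm_capacity):
    """First-fit-decreasing placement on two resources (CPU, memory).
    Sketches only the bin-packing stage of the problem; network-aware
    assignment would follow as a second stage."""
    vms = sorted(vms, key=lambda v: -(v[0] + v[1]))   # biggest VMs first
    pms = []                                          # residual capacities
    placement = []
    for cpu, mem in vms:
        for i, (rc, rm) in enumerate(pms):
            if cpu <= rc and mem <= rm:
                pms[i] = (rc - cpu, rm - mem)
                placement.append(i)
                break
        else:                                         # no PM fits: power one on
            pms.append((pm_capacity[0] - cpu, pm_capacity[1] - mem))
            placement.append(len(pms) - 1)
    return len(pms), placement

vms = [(4, 8), (2, 4), (2, 2), (1, 1), (8, 8)]        # (CPU, memory) demands
active_pms, placement = place_vms(vms, pm_capacity=(8, 16))
print(active_pms, placement)
```

Minimizing the count of opened bins directly minimizes the number of powered-on PMs, which is the dominant energy term; the quadratic-assignment stage would then co-locate chatty VMs to switch off network elements as well.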

17.
We have witnessed the fast-growing deployment of Hadoop, an open-source implementation of the MapReduce programming model, for data-intensive computing in the cloud. However, Hadoop was not originally designed to run transient jobs in which users need to move data back and forth between storage and computing facilities. As a result, Hadoop is inefficient and wastes resources when operating in the cloud. This paper discusses the inefficiency of MapReduce in the cloud. We study the causes of this inefficiency and propose a solution. Inefficiency mainly occurs during data movement. Transferring large data to computing nodes is very time-consuming and also violates the rationale of Hadoop, which is to move computation to the data. To address this issue, we developed a distributed cache system and virtual machine scheduler. We show that our prototype can improve performance significantly when running different applications.

18.
There are now many mobile value-added services in the Chinese mobile telecommunication market. Among them, the characteristics of the Multimedia Messaging Service (MMS) have not yet been fully understood. In this paper, with the help of a cloud computing platform, we investigate the flow-level characteristics of Chinese MMS. All of the experimental data were collected by the TMS equipment deployed at a major node in Southern China, over a collection period of six months. We performed high-level analysis to show the basic distributions of MMS characteristics. Then, by analysing detailed MMS features, we determined the distribution of personal MMS and made a comprehensive comparison between 2G and 3G MMS. Finally, we modelled the personal MMS inter-arrival time and found that the Weibull distribution fits it best.
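A Weibull inter-arrival model is easy to work with because its inverse CDF is closed-form, so synthetic traffic can be generated by inverse-transform sampling and checked against the analytic median. The shape/scale values below are illustrative, not the paper's fitted parameters:

```python
import math, random, statistics

def weibull_interarrival(shape, scale):
    """Inverse-CDF sampling: if U ~ Uniform(0,1), then
    scale * (-ln(1-U))**(1/shape) follows Weibull(shape, scale)."""
    return scale * (-math.log(1.0 - random.random())) ** (1.0 / shape)

random.seed(7)
shape, scale = 0.8, 60.0    # illustrative parameters (seconds), not the paper's fit
samples = [weibull_interarrival(shape, scale) for _ in range(200_000)]

# Sanity check against the closed-form Weibull median: scale * (ln 2)^(1/shape)
median_theory = scale * math.log(2) ** (1.0 / shape)
print(round(statistics.median(samples), 1), round(median_theory, 1))
```

A shape parameter below 1 gives a heavy-tailed, bursty process (many short gaps, occasional very long ones), which is why Weibull often beats the exponential model for messaging traffic.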

19.
To enhance the security of user data in the cloud, we present an adaptive and dynamic data encryption method that encrypts user data on the mobile phone before it is uploaded. Firstly, the data encryption algorithm used is neither static nor uniform: for each encryption, it is adaptively and dynamically selected from the algorithm set in the mobile phone's encryption system. Reflecting the characteristics of the mobile phone, the detailed algorithm selection strategy is determined by the user's mobile phone hardware information, personalization information, and a pseudo-random number. Secondly, before encryption the data is rearranged starting from a randomly selected position; the randomness of the start position makes the encrypted data safer. Thirdly, the rearranged data is encrypted with the selected algorithm and a generated key. Finally, our analysis shows that this method provides higher security because more dynamics and randomness are adaptively added to the encryption process.
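The abstract states the three inputs to algorithm selection but not the derivation; one plausible realization is to hash them together and index into the algorithm set, with the rearrangement step as a rotation from the random start position. A sketch under those assumptions (the algorithm names and info strings are made up):

```python
import hashlib

ALGORITHMS = ["aes-128", "aes-256", "chacha20", "sm4"]   # illustrative set

def select_algorithm(hw_info: str, user_prefs: str, nonce: int) -> str:
    """Derive the per-encryption algorithm choice from hardware info,
    personalization info, and a pseudo-random number (the exact
    selection strategy here is our assumption, not the paper's)."""
    digest = hashlib.sha256(f"{hw_info}|{user_prefs}|{nonce}".encode()).digest()
    return ALGORITHMS[digest[0] % len(ALGORITHMS)]

def rotate(data: bytes, start: int) -> bytes:
    """Rearrange data from a chosen start position before encryption."""
    start %= len(data)
    return data[start:] + data[:start]

nonce = 123456789            # freshly drawn pseudo-random number in practice
algo = select_algorithm("soc=sdm845;ram=8G", "lang=zh;theme=dark", nonce)
data = b"top secret payload"
start = nonce % len(data)
prepared = rotate(data, start)
assert rotate(prepared, -start) == data   # the receiver undoes the rotation
print(algo)
```

Tying the choice to device-specific inputs plus a fresh nonce means two phones, or two encryptions on the same phone, generally use different algorithm/start-position pairs, which is the source of the claimed added randomness.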

20.
This paper reviews the requirements of Software Defined Radio (SDR) systems for high-speed wireless applications and compares how well the different technology choices available, from ASICs and FPGAs to digital signal processors (DSPs) and general-purpose processors (GPPs), meet them.

Copyright © 北京勤云科技发展有限公司    京ICP备09084417号-23