Similar Documents
Found 20 similar documents (search time: 0 ms)
1.
This paper investigates a transaction processing mechanism for a peer-to-peer database network. A peer-to-peer database network is a collection of autonomous data sources, called peers, where each peer augments a conventional database management system with an inter-operability layer (i.e. mappings) for sharing data. In this network, each peer independently manages its database and executes queries as well as updates over the related data in other peers. We consider a peer-to-peer database network in which mappings between peers are established at the data level, both to share data and to resolve data heterogeneity. With regard to transaction processing, we focus on how to maintain a consistent execution view of concurrent transactions across peers without a global transaction coordinator. Since there is no global coordinator and each peer executes concurrent transactions independently, different peers may produce different execution views for the same set of transactions. We investigate the problems that arise when maintaining a consistent execution of concurrent transactions and, to guarantee consistent execution, we introduce a correctness criterion and propose two approaches, namely Merged Transactions and OTM-based propagation. We assume that a single peer initiates the concurrent transactions. We also present a solution for ensuring a consistent execution view of concurrent transactions in the presence of transaction failures.

2.
In recent years, an increasing number of data-intensive applications deal with continuously changing data objects (CCDOs), such as data streams from sensors and tracking devices. In these applications, the underlying data management system must support new types of spatiotemporal queries that refer to the spatiotemporal trajectories of the CCDOs. In contrast to traditional data objects, CCDOs have continuously changing attributes, so the spatiotemporal relation between any two CCDOs can change over time. The problem is further complicated by the fact that CCDO trajectories carry a degree of uncertainty at every point in time, because databases can only be updated discretely. This paper formally presents a comprehensive framework for managing CCDOs, offers insights into the spatiotemporal uncertainty problem, and presents an original parallel-processing solution that manages the uncertainty efficiently on the MapReduce platform of cloud computing.

3.
This paper studies multi-user data sharing in hybrid cloud environments. To improve the multi-user data sharing mechanism of the hybrid cloud, strengthen users' storage security, and solve the problem of delayed permission revocation, an improved multi-user data sharing scheme for hybrid cloud environments is proposed, applying a fully homomorphic encryption algorithm combined with threshold techniques. The scheme first splits the plaintext into ordered blocks, then encrypts them with the improved fully homomorphic encryption algorithm and sends the result to hybrid cloud storage. For every sharing user, the scheme generates hierarchical permission and time-constraint information to manage user privileges, and it builds data integrity tags to verify the integrity of the stored data. In simulation experiments, permission revocation takes less than 2 s, the change to the data during integrity verification is controlled within the given bound, and adding a user during access introduces a delay of only 5 s. Compared with existing schemes, the new scheme achieves better results in every aspect of data sharing.
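As an illustration of the pipeline this abstract describes, the following is a minimal Python sketch of block-wise encryption with per-user permission levels, time constraints, and integrity tags. The abstract does not specify the homomorphic scheme, so a SHA-256 keystream stands in for the improved fully homomorphic cipher; BLOCK_SIZE and the permission API are illustrative assumptions, not the paper's design.

```python
import hashlib, hmac, time

BLOCK_SIZE = 16  # bytes per plaintext block (assumed; the paper does not fix a size)

def keystream(key: bytes, index: int, length: int) -> bytes:
    """Deterministic per-block keystream derived from SHA-256."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + index.to_bytes(4, "big")
                              + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_blocks(plaintext: bytes, key: bytes):
    """Split into ordered blocks, 'encrypt' each, and tag it for integrity."""
    blocks = [plaintext[i:i + BLOCK_SIZE] for i in range(0, len(plaintext), BLOCK_SIZE)]
    cts, tags = [], []
    for i, block in enumerate(blocks):
        ct = bytes(a ^ b for a, b in zip(block, keystream(key, i, len(block))))
        cts.append(ct)
        tags.append(hmac.new(key, ct, hashlib.sha256).hexdigest())
    return cts, tags

def grant(perms, user, level, ttl_seconds):
    perms[user] = (level, time.time() + ttl_seconds)  # hierarchical level + time constraint

def may_access(perms, user, required_level):
    level, expiry = perms.get(user, (0, 0.0))
    return level >= required_level and time.time() < expiry

cts, tags = encrypt_blocks(b"record shared via the hybrid cloud", b"owner-secret")
perms = {}
grant(perms, "alice", level=2, ttl_seconds=3600)
assert may_access(perms, "alice", required_level=1)
del perms["alice"]                        # revocation is a local bookkeeping step here
assert not may_access(perms, "alice", required_level=1)
# integrity verification over the stored ciphertext blocks
assert all(hmac.new(b"owner-secret", c, hashlib.sha256).hexdigest() == t
           for c, t in zip(cts, tags))
```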

4.
In this paper, we describe the process of parallelizing an existing, production-level, sequential Synthetic Aperture Radar (SAR) processor based on the Range-Doppler algorithmic approach. We show that, taking into account the constraints imposed by the software architecture and the related software engineering costs, it is still possible with a moderate programming effort to parallelize the software, and we present a message-passing interface (MPI) implementation whose speedup is about 8 on 9 processors, achieving near real-time processing of raw SAR data even on a moderately aged parallel platform. Moreover, we discuss a hybrid two-level parallelization approach that uses both MPI and OpenMP. We also present GridStore, a novel data grid service to manage raw, focused and post-processed SAR data in a grid environment. Indeed, another aim of this work is to show how the processed data can be made available to a wide scientific community in a grid environment, through a data grid service providing both metadata and data management functionality. In this way, along with near real-time processing of SAR images, we provide a data grid-oriented system for data storage, publishing and management.
Corresponding author: Giovanni Aloisio
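The abstract gives the MPI structure but not the code; the sketch below shows the data-parallel pattern under stated assumptions (mpi4py, synthetic data, an FFT as a stand-in for the range-compression kernel). It is not the production Range-Doppler processor, and the OpenMP level is omitted.

```python
# Run with e.g.: mpiexec -n 4 python sar_mpi_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    raw = np.random.rand(8 * size, 1024)        # synthetic raw echo lines
    chunks = np.array_split(raw, size, axis=0)  # one chunk of lines per rank
else:
    chunks = None

chunk = comm.scatter(chunks, root=0)            # distribute the workload
compressed = np.fft.fft(chunk, axis=1)          # stand-in for range compression
gathered = comm.gather(compressed, root=0)      # collect partial results

if rank == 0:
    image = np.vstack(gathered)                 # reassemble the focused block
    print("focused block shape:", image.shape)
```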

5.
梁有懿, 凌捷, 柳毅, 赖琦. 《计算机应用研究》 (Application Research of Computers), 2020, 37(9): 2789-2792, 2810
In hybrid cloud data sharing, the number of users is large and attributes are numerous, so the client-side computation grows with the number of attributes, which degrades the efficiency of cloud data sharing among group users; in addition, users' identity privacy and the associated attributes are at risk of being leaked. To address these problems, this paper proposes a secure and efficient group data sharing method for hybrid cloud environments. By combining anonymity techniques, attribute hiding, and computation outsourcing, the method protects users' identity privacy and the security of their attributes while reducing the client-side computational load. Security and performance analysis, together with experimental results, shows that the method offers good security and efficiency.

6.
Storing and sharing medical data in a cloud environment, where computing resources including storage are provided by a third-party service provider, raises serious individual-privacy concerns for the adoption of cloud computing technologies. Existing privacy protection research can be classified into three categories, i.e., privacy by policy, privacy by statistics, and privacy by cryptography. However, the privacy concerns and the data utilization requirements for different parts of a medical record may be quite different, so a solution for sharing medical datasets in the cloud should support multiple data access paradigms with different privacy strengths. Statistics or cryptography alone cannot enforce these multiple privacy demands, which blocks their application in real-world clouds. This paper proposes a practical solution for privacy-preserving medical record sharing in cloud computing. Based on a classification of the attributes of medical records, we use vertical partitioning of the medical dataset so that different parts of the medical data receive different privacy treatment. The solution comprises four components, i.e., (1) vertical data partitioning for medical data publishing, (2) data merging for medical dataset access, (3) integrity checking, and (4) hybrid search across plaintext and ciphertext, in which statistical analysis and cryptography are innovatively combined to provide multiple paradigms of balance between medical data utilization and privacy protection. A prototype system for large-scale medical data access and sharing has been implemented, and extensive experiments show the effectiveness of the proposed solution.
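A minimal sketch of the vertical-partitioning component, assuming a toy attribute classification: the plaintext partition stays usable for statistics, the sensitive partition is stored separately (and, in the paper's system, encrypted), and authorized access merges the partitions back on a record id.

```python
records = [
    {"record_id": 1, "age": 34, "zip": "100080", "diagnosis": "flu"},
    {"record_id": 2, "age": 51, "zip": "100084", "diagnosis": "diabetes"},
]
PUBLIC_ATTRS = {"record_id", "age", "zip"}    # statistics-friendly, low privacy concern
SENSITIVE_ATTRS = {"record_id", "diagnosis"}  # would be encrypted in the cloud

def vertical_partition(rows):
    """Split the dataset column-wise by privacy class."""
    public = [{k: r[k] for k in PUBLIC_ATTRS} for r in rows]
    sensitive = [{k: r[k] for k in SENSITIVE_ATTRS} for r in rows]
    return public, sensitive

def merge(public, sensitive):
    """Authorized access: rejoin the partitions on record_id."""
    private_by_id = {r["record_id"]: r for r in sensitive}
    return [{**p, **private_by_id[p["record_id"]]} for p in public]

pub, sens = vertical_partition(records)
assert merge(pub, sens) == records  # merging restores the full dataset
```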

7.
In recent years, data mining has become one of the most popular techniques by which data owners determine their strategies. Association rule mining is a data mining approach that is widely used in traditional databases, usually to find positive association rules; other, more challenging rule mining topics include data stream mining and negative association rule mining. At the same time, organizations want to concentrate on their own business and outsource the rest of their work, an approach named the "database as a service" concept, which provides many benefits to the data owner but also raises security problems. In this paper, a rule mining system is proposed that provides an efficient and secure solution for computing positive and negative association rules over XML data streams under the database-as-a-service concept. The system has been implemented, and several experiments with different synthetic data sets demonstrate its performance and efficiency.
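A small sketch of the rule computations involved, shown on plain set-valued transactions rather than XML streams: a negative rule X ⇒ ¬Y holds when X is frequent while X and Y rarely co-occur. The thresholds are illustrative assumptions.

```python
def support(transactions, itemset):
    itemset = set(itemset)
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def positive_rule(transactions, x, y, min_sup=0.3, min_conf=0.6):
    """X => Y: supp(X and Y) and conf = supp(X and Y)/supp(X) above thresholds."""
    sup_x, sup_xy = support(transactions, x), support(transactions, x | y)
    conf = sup_xy / sup_x if sup_x else 0.0
    return sup_xy >= min_sup and conf >= min_conf

def negative_rule(transactions, x, y, min_sup=0.3, min_conf=0.6):
    """X => not-Y: supp(X and not-Y) = supp(X) - supp(X and Y)."""
    sup_x = support(transactions, x)
    sup_x_not_y = sup_x - support(transactions, x | y)
    conf = sup_x_not_y / sup_x if sup_x else 0.0
    return sup_x_not_y >= min_sup and conf >= min_conf

window = [{"a", "b"}, {"a", "c"}, {"a", "c", "d"}, {"b", "c"}, {"a", "c"}]
print(positive_rule(window, {"a"}, {"c"}))   # True: a => c
print(negative_rule(window, {"a"}, {"b"}))   # True: a => not-b
```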

8.
PETROS is a fixed-format magnetic tape data bank of major-element chemical analyses of igneous rocks divided into groups representing selected geographic areas and petrologic provinces. The 20,000 analyses and additional calculated average igneous rock compositions may be used for a variety of computer-based research and teaching applications. Interactive programs greatly expand the accessibility and usefulness of PETROS.

9.
To assure the confidentiality of sensitive data stored in public cloud storage, data owners should encrypt their data before submitting it to the cloud. However, this brings a new challenge: how to share the encrypted data effectively. The paradigm of proxy re-encryption provides a promising solution, as it enables a data owner to delegate the decryption rights for encrypted data to authorized recipients without any direct interaction. Certificate-based proxy re-encryption is a new cryptographic primitive to support data confidentiality in public cloud storage effectively; it enjoys the advantages of certificate-based encryption while providing the functionality of proxy re-encryption. In this paper, we propose a certificate-based proxy re-encryption scheme without bilinear pairings. The proposed scheme is proven secure under the computational Diffie-Hellman assumption in the random oracle model. By avoiding time-consuming bilinear pairing operations, the proposed scheme significantly reduces the computation cost. Compared to previous certificate-based proxy re-encryption schemes with bilinear pairings, it has a clear advantage in computational efficiency and is thus better suited to computation-limited or power-constrained devices.
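The abstract does not reproduce the scheme's equations, so the sketch below illustrates the general proxy re-encryption workflow with the classic ElGamal-based BBS98 construction, which is likewise pairing-free. It is not the paper's certificate-based scheme, and the parameters are toy-sized.

```python
import secrets

q, p, g = 1019, 2039, 4          # p = 2q + 1; g generates the order-q subgroup

def keygen():
    sk = secrets.randbelow(q - 1) + 1
    return sk, pow(g, sk, p)

def encrypt(pk, m):
    k = secrets.randbelow(q - 1) + 1
    return (m * pow(g, k, p)) % p, pow(pk, k, p)   # (m * g^k, g^{a k})

def rekey(sk_a, sk_b):
    # rk = b / a mod q; in BBS98 deriving rk involves both secrets (toy setup)
    return (sk_b * pow(sk_a, -1, q)) % q

def reencrypt(rk, ct):
    c1, c2 = ct
    return c1, pow(c2, rk, p)                      # g^{a k} -> g^{b k}

def decrypt(sk, ct):
    c1, c2 = ct
    gk = pow(c2, pow(sk, -1, q), p)                # recover g^k
    return (c1 * pow(gk, p - 2, p)) % p            # strip g^k to get m

sk_a, pk_a = keygen()
sk_b, pk_b = keygen()
m = pow(g, 123, p)                                 # toy message in the subgroup
ct_a = encrypt(pk_a, m)
ct_b = reencrypt(rekey(sk_a, sk_b), ct_a)          # proxy transforms without decrypting
assert decrypt(sk_b, ct_b) == m == decrypt(sk_a, ct_a)
```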

10.
Recent technology advances have made multimedia on-demand services, such as home entertainment and home-shopping, important to the consumer market. One of the most challenging aspects of this type of service is providing access either instantaneously or within a small and reasonable latency upon request. We consider improvements in the performance of multimedia storage servers through data sharing between requests for popular objects, assuming that the I/O bandwidth is the critical resource in the system. We discuss a novel approach to data sharing, termed adaptive piggybacking, which can be used to reduce the aggregate I/O demand on the multimedia storage server and thus reduce latency for servicing new requests.
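A worked example of the piggybacking arithmetic, under assumed numbers: if the trailing stream is sped up by a fraction r and the leading one slowed by r, a head start of Δ content-seconds closes at rate 2r, so the two playback positions meet after Δ/(2r) seconds, after which a single I/O stream can serve both requests.

```python
def merge_point(delta, rate=0.05, length=7200.0):
    """delta: head start of the first stream, in content seconds.
    rate: fractional speed adjustment (+/-), e.g. 0.05 = 5% (assumed).
    Returns (seconds until merge, content position at merge), or None
    if the video (of `length` content seconds) ends before the merge."""
    t = delta / (2.0 * rate)        # gap closes at 2*rate content-sec per sec
    position = (1.0 + rate) * t     # trailing stream's position at merge time
    return (t, position) if position <= length else None

print(merge_point(delta=60.0))      # (600.0, 630.0): merge 600 s later
```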

11.
Data caching is used to improve the response time and the power consumption of a mobile client in a mobile computing environment. To enhance the performance of data caching, one needs to improve the hit ratio and to reduce the cost of processing a cache miss. In a mobile computing environment, a cached data item at a mobile client needs to remain up-to-date with respect to the corresponding data item at the server; a cached data item that is out of date is called a cached invalidated data item, and accessing one can be regarded as processing a cache miss. To access a cached invalidated data item, a mobile client must download the new content of the data item from the broadcast channel, an operation called a re-access operation in this paper. Re-accessing a cached invalidated data item incurs a large tuning-time overhead. In this paper, we propose a re-access scheme that reduces this overhead by allowing a mobile client to access a cached invalidated data item from the broadcast channel without accessing indices. We analyze the performance of the proposed scheme and validate the analysis through experiments, which show that the proposed scheme significantly reduces the tuning time of a mobile client. Furthermore, the proposed scheme is robust in the sense that it tolerates changes to the broadcast structure in data broadcasting.

12.
An aspect ratio invariant visual secret sharing (ARIVSS) scheme is a perfectly secure method for sharing secret images. Due to the nature of VSS encryption, each secret pixel is expanded to m sub-pixels in each of the generated shares. The advantage of ARIVSS is that the aspect ratio of the recovered secret image is preserved, so no information is lost when the shape of the secret image itself carries information; for example, a secret circle is distorted into an ellipse if m is not a perfect square. Two ARIVSS schemes, based on processing blocks of one and four pixels respectively, were previously proposed. In this paper, we generalize the square block-wise approach to further reduce pixel expansion.
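For context, here is a sketch of the textbook (2,2) VSS construction that ARIVSS generalizes: m = 4 sub-pixels arranged as a square 2×2 block, so the aspect ratio is preserved. This is the classical scheme, not the paper's generalized block-wise algorithm.

```python
import random

# the six 2x2 patterns with exactly two black (1) sub-pixels, flattened
PATTERNS = [(1, 1, 0, 0), (0, 0, 1, 1), (1, 0, 1, 0),
            (0, 1, 0, 1), (1, 0, 0, 1), (0, 1, 1, 0)]

def share_pixel(secret_bit):
    """Expand one secret pixel into a 2x2 block on each of two shares."""
    pat = random.choice(PATTERNS)
    # white pixel (0): identical blocks  -> stacked block is half black
    # black pixel (1): complementary blocks -> stacked block is all black
    share2 = pat if secret_bit == 0 else tuple(1 - s for s in pat)
    return pat, share2

def stack(b1, b2):
    """Physically stacking transparencies is a pixel-wise OR."""
    return tuple(x | y for x, y in zip(b1, b2))

for bit in (0, 1):
    s1, s2 = share_pixel(bit)
    print(bit, "->", stack(s1, s2), "black sub-pixels:", sum(stack(s1, s2)))
```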

13.
Simultaneous transmission of multiple high quality video streams from a server to clients is becoming an increasingly important class of traffic in a network of workstations or cluster environment. With a powerful symmetric multiprocessor (SMP) as the server and a high-speed network, such transmission is practicable from a hardware point of view. However, the actual construction of such a video data server entails tackling a number of difficult problems related to the provision of strict quality of service (QoS) guarantees; among others, the smoothing and scheduling of multiple video packet streams are two crucial issues. Smoothing is concerned with reducing the rate variability of video streams, given that video data are usually compressed in a variable bit rate fashion. Scheduling is important to guarantee the requested QoS levels while maximizing the utilization of resources. Although much work on smoothing has been done, it is not clear which scheduling scheme is suitable for multiplexing smoothed video data onto the network. In this paper we present an extensive performance study of the EDF and RM scheduling algorithms, modified to provide QoS guarantees for smoothed video data. With a probabilistic definition of QoS, admission control conditions are incorporated into the two algorithms. Furthermore, a counter-based scheduling module is included as the core scheduling mechanism, which adaptively adjusts the actual QoS levels assigned to requests. Our theoretical analysis of the two modified algorithms, called QEDF and QRM, shows that the QRM algorithm is more robust than the QEDF algorithm under different workload and utilization conditions. We also propose a new metric, called meta-QoS, to quantify the overall performance of a packet scheduler given a set of simultaneous requests. In our experiments on an SMP-based Linux platform, we find that the QRM algorithm can sustain a rather stable level of meta-QoS even when the workload and utilization levels are increased. The QEDF algorithm, on the other hand, due to its conservative admission control policy, is found to be unsuitable for high utilization levels and large numbers of requests. In view of its lower complexity, the QRM approach appears to be the more suitable candidate for packet scheduling in the client-server environment considered in our study.
Corresponding author: Yu-Kwong Kwok
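A minimal sketch of the two priority rules being compared, applied to video packets: EDF orders by absolute deadline, while RM gives fixed priority to the stream with the shortest period. The QoS admission control and the counter-based adjustment of QEDF/QRM are omitted, and the packet fields are illustrative.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Packet:
    sort_key: float                      # the only field used for ordering
    stream: str = field(compare=False)
    deadline: float = field(compare=False)
    period: float = field(compare=False)

def dispatch_order(packets, policy):
    heap = []
    for stream, deadline, period in packets:
        key = deadline if policy == "EDF" else period  # RM: fixed, period-based
        heapq.heappush(heap, Packet(key, stream, deadline, period))
    return [heapq.heappop(heap).stream for _ in range(len(heap))]

pkts = [("news", 40.0, 33.3), ("movie", 25.0, 41.7), ("sport", 30.0, 16.7)]
print("EDF order:", dispatch_order(pkts, "EDF"))  # movie, sport, news
print("RM  order:", dispatch_order(pkts, "RM"))   # sport, news, movie
```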

14.
For product design and development, crowdsourcing shows huge potential for fostering creativity and has been regarded as an important approach to acquiring innovative concepts. Nevertheless, before the approach can be effectively implemented, the following challenges must be properly addressed: (1) the burdensome review process required to deal with a large number of crowd-sourced design concepts; (2) insufficient integration of design knowledge and principles into existing data processing methods and algorithms for crowdsourcing; and (3) the lack of a quantitative decision support process to identify the better concepts. To tackle these problems, a product concept evaluation and selection approach comprising three modules is proposed: (1) a data mining module to extract meaningful information from online crowd-sourced concepts; (2) a concept re-construction module to organize word tokens into a unified frame using domain ontology and extended design knowledge; and (3) a decision support module to select better concepts in a simplified manner. A pilot study on future PC (personal computer) design demonstrates the proposed approach. The results show that the approach is promising: it may improve concept review and evaluation efficiency, facilitate data processing using design knowledge, and enhance the reliability of concept selection decisions.

15.
In this paper, we propose a novel distributed resource-scheduling algorithm capable of handling multiple resource requirements for jobs that arrive in a Grid computing environment. Our proposed algorithm, referred to as the multiple resource scheduling (MRS) algorithm, takes into account both the site capabilities and the resource requirements of jobs, with the main objective of obtaining a minimal execution schedule through efficient management of available Grid resources. We first propose a model in which the job and site resource characteristics are captured together and used in the scheduling algorithm; to do so, we introduce the concepts of an n-dimensional virtual map and of resource potential. Based on the proposed model, we conduct rigorous simulation experiments with real-life workload traces reported in the literature to quantify performance. We compare our strategy with the most commonly used algorithms on performance metrics such as job wait times, queue completion times, and average resource utilization. The combined consideration of job and resource characteristics is shown to deliver high performance with respect to these metrics. Our study also reveals that the MRS scheme can adapt to both serial and parallel job requirements, especially when job fragmentation occurs. Our experimental results clearly show that MRS outperforms the other strategies, and we highlight the impact and importance of our approach.
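A hedged sketch of multi-resource matching in the spirit of MRS: a site is eligible if it covers every requirement dimension of the job, and among eligible sites the tightest fit is chosen. The slack function here is a simple stand-in for the paper's resource-potential metric, whose exact definition the abstract does not give; all names and numbers are illustrative.

```python
sites = {
    "siteA": {"cpus": 64, "mem_gb": 256, "bandwidth": 10},
    "siteB": {"cpus": 16, "mem_gb": 128, "bandwidth": 40},
}

def eligible(site_caps, job_req):
    """A site qualifies only if it satisfies every requirement dimension."""
    return all(site_caps.get(dim, 0) >= need for dim, need in job_req.items())

def tightest_fit(job_req):
    """Pick the eligible site with the least leftover capacity (stand-in metric)."""
    def slack(caps):
        return sum(caps[dim] - job_req[dim] for dim in job_req)
    options = [(name, caps) for name, caps in sites.items() if eligible(caps, job_req)]
    return min(options, key=lambda nc: slack(nc[1]))[0] if options else None

print(tightest_fit({"cpus": 8, "mem_gb": 64, "bandwidth": 5}))  # siteB (tighter fit)
```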

16.
FCL is a higher-order functional programming language which consolidates and extends a number of desirable features of existing languages. This paper describes the salient features of FCL and an algorithm for translation to highly parallel data flow graphs. The translation algorithm is based on a set of extended “combinators”. The relationship between functional programming languages and demand-driven or data-driven data flow architectures is established.

17.
The continuous broadcast of data together with an index structure is an effective way of disseminating data in a wireless, mobile environment. The availability of an index allows a reduction in the tuning time and thus leads to lower power consumption for a mobile client. This paper considers scheduling index trees in multiple channel environments in which a mobile client can tune into a specified channel at any one time. Let T be an n-node index tree of height h representing a multi-dimensional index structure to be broadcast in a c-channel environment. We describe two algorithms generating broadcast schedules that differ in the worst-case performance experienced by a client executing a general query. A general query is a query which results in an arbitrary traversal of the index tree, compared to a simple query in which a single path is traversed. Our first algorithm schedules any tree using minimum cycle length and executes a simple query within one cycle; however, a general query may require O(hc) cycles and thus result in high latency. The second algorithm generates a schedule of minimum cycle length on which a general query takes at most O(c) cycles; for some queries this is the best possible latency.

18.
Client-server object-oriented database management systems differ significantly from traditional centralized systems in terms of their architecture and the applications they target. In this paper, we present the client-server architecture of the EOS storage manager and we describe the concurrency control and recovery mechanisms it employs. EOS offers a semi-optimistic locking scheme based on the multi-granularity two-version two-phase locking protocol. Under this scheme, multiple concurrent readers are allowed to access a data item while it is being updated by a single writer. Recovery is based on write-ahead redo-only logging. Log records are generated at the clients and they are shipped to the server during normal execution and at transaction commit. Transaction rollback is fast because there are no updates that have to be undone, and recovery from system crashes requires only one scan of the log for installing the changes made by transactions that committed before the crash. We also present a preliminary performance evaluation of the implementation of the above mechanisms.
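A sketch of the redo-only recovery rule the abstract describes: since no uncommitted changes ever reach the database, recovery is a single forward scan of the log that installs the updates of committed transactions and never has to undo anything. The log record layout is an assumption for illustration.

```python
log = [
    ("T1", "update", "x", 10),
    ("T2", "update", "y", 20),
    ("T1", "commit", None, None),
    ("T2", "update", "z", 30),
    # crash here: T2 never committed, so none of its updates are installed
]

def recover(log_records):
    pending, db = {}, {}
    for txn, op, key, value in log_records:    # a single forward scan
        if op == "update":
            pending.setdefault(txn, []).append((key, value))
        elif op == "commit":
            for k, v in pending.pop(txn, []):  # redo committed work only
                db[k] = v
    return db                                  # uncommitted updates are simply dropped

print(recover(log))  # {'x': 10} -- nothing to undo, T2's updates discarded
```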

19.
In a distributed stream processing system, streaming data are continuously disseminated from the sources to the distributed processing servers. To enhance dissemination efficiency, these servers are typically organized into one or more dissemination trees. In this paper, we focus on the problem of constructing dissemination trees that minimize the average loss of fidelity of the system. We observe that existing heuristic-based approaches can only explore a limited solution space and hence may lead to sub-optimal solutions. Instead, we propose an adaptive, cost-based approach whose cost model takes into account both the processing cost and the communication cost. Furthermore, because a distributed stream processing system is vulnerable to inaccurate statistics and to runtime fluctuations of data characteristics, server workloads, and network conditions, we have designed our scheme to adapt to these situations: an operational dissemination tree may be incrementally transformed into a more cost-effective one. Our adaptive strategy employs distributed decisions made independently by the servers, based on localized statistics collected by each server at runtime. For relatively static environments, we also propose two static tree construction algorithms relying on a priori system statistics; these static trees can also serve as initial trees in a dynamic environment. We apply our schemes to both single- and multi-object dissemination. Our extensive performance study shows that the adaptive mechanisms are effective in a dynamic context and that the proposed static tree construction algorithms perform close to optimal in a static environment.
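A small sketch of the kind of cost model described, with illustrative constants: a server's delay is the accumulated processing and link cost along its path from the source, and a candidate tree is scored by the average over all servers. The paper's actual cost functions are not given here.

```python
def path_costs(tree, proc, link, root, acc=0.0):
    """tree: adjacency dict (parent -> children); proc: per-node processing
    cost; link: dict[(parent, child)] -> communication cost. Returns the
    accumulated root-to-node cost for every node."""
    cost_here = acc + proc[root]
    costs = {root: cost_here}
    for child in tree.get(root, []):
        costs.update(path_costs(tree, proc, link, child,
                                cost_here + link[(root, child)]))
    return costs

tree = {"src": ["s1", "s2"], "s1": ["s3"]}                 # one candidate tree
proc = {"src": 1.0, "s1": 2.0, "s2": 2.0, "s3": 2.0}
link = {("src", "s1"): 3.0, ("src", "s2"): 5.0, ("s1", "s3"): 1.0}

costs = path_costs(tree, proc, link, "src")
servers = [n for n in costs if n != "src"]
print("average delay:", sum(costs[n] for n in servers) / len(servers))  # ~7.67
```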

20.
With the process of globalisation and the development of management models and information technologies, enterprise cooperation and collaboration face a new business and technical environment. The Internet of Things, mobile Internet, cloud computing and big data technologies build a sensing environment for all kinds of businesses, while inter-enterprise collaboration meets the new challenges of omni-channel marketing, closed-loop supply chains and enterprise network integration. This paper develops a data-convergence-oriented enterprise network integration architecture together with its enabling technologies. To collect, transfer and fuse data from different data sources, the concepts of Data Portal (DP) and Collaboration Agent (CA) are introduced, which provide a lightweight and loosely coupled infrastructure for enterprise network integration. Detailed case studies discuss how the developed technologies solve problems in product lifecycle management and omni-channel marketing management.
