Similar Documents
Found 18 similar documents (search took 156 ms)
1.
In distributed data storage, metadata is kept on a central node, which creates a single point of failure and leaves the metadata vulnerable to malicious modification. Introducing backup nodes mitigates this to some extent, but synchronization and failover between nodes are inefficient. Moreover, the nodes storing the metadata can reach consensus among themselves to modify it, so the metadata cannot be fully trusted. To address these problems of traditional distributed storage, this paper combines the characteristics of blockchain and proposes a decentralized distributed storage model, DMB (Decentralized Metadata Blockchain), which guarantees metadata integrity by storing metadata in blocks, redundantly storing the blockchain, and verifying it cooperatively. The model has two phases: metadata storage and metadata verification. In the storage phase, the user's signature and the locations of the data replicas are sent to a number of validator nodes, which generate a metadata block and append it to the metadata blockchain. In the verification phase, a validator node first checks whether the state of its local metadata blockchain matches the global state, and synchronizes if it does not; it then searches the local metadata blockchain to verify metadata integrity. Theoretical analysis and experimental results show that DMB guarantees the traceability and integrity of metadata, offers good concurrency, and has little impact on data storage efficiency.
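The abstract does not spell out the block format; a minimal Python sketch of the hash-chained metadata block idea, with illustrative field names (user_signature, replica_locations) standing in for the paper's actual structures, might look like:

```python
import hashlib
import json

class MetadataBlock:
    """One block of the metadata chain: user signature + replica locations."""
    def __init__(self, user_signature, replica_locations, prev_hash):
        self.user_signature = user_signature
        self.replica_locations = replica_locations  # e.g. ["node3:/data/a", ...]
        self.prev_hash = prev_hash
        self.hash = self.compute_hash()

    def compute_hash(self):
        payload = json.dumps(
            [self.user_signature, self.replica_locations, self.prev_hash])
        return hashlib.sha256(payload.encode()).hexdigest()

class MetadataChain:
    def __init__(self):
        self.blocks = []

    def append(self, user_signature, replica_locations):
        prev = self.blocks[-1].hash if self.blocks else "0" * 64
        self.blocks.append(MetadataBlock(user_signature, replica_locations, prev))

    def verify(self):
        """Verification phase: recompute hashes and check the chain links."""
        prev = "0" * 64
        for block in self.blocks:
            if block.prev_hash != prev or block.compute_hash() != block.hash:
                return False
            prev = block.hash
        return True

chain = MetadataChain()
chain.append("sig(alice)", ["node2:/r1", "node5:/r2"])
print(chain.verify())  # True; any tampering with a block breaks the chain
```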

2.
To address the weak semantics and coarse granularity of query conditions, the low retrieval efficiency, and the network bandwidth consumption in P2P networks, an efficient metadata-based query algorithm is proposed. A unified metadata layer is built on top of any P2P data-management layer: each node automatically extracts detailed metadata from its shared data, and stores not only the metadata of its local shared data but also the metadata of the most interesting data it has accessed, managing this metadata efficiently in a database. All nodes thereby gain a self-learning capability and can fully exploit the metadata to improve retrieval efficiency.
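As a rough illustration of the proposed metadata layer, the sketch below keeps one node's metadata (local shared files plus cached metadata of interesting remote files) in an SQLite table; the schema and field names are assumptions, not the paper's:

```python
import sqlite3

# Per-node metadata layer: one table mixing metadata of local shared files
# with cached metadata of interesting remote files learned from past queries.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE metadata (
    title TEXT, keywords TEXT, size INTEGER,
    owner_node TEXT, is_local INTEGER)""")

def index_file(title, keywords, size, owner_node, is_local):
    conn.execute("INSERT INTO metadata VALUES (?, ?, ?, ?, ?)",
                 (title, keywords, size, owner_node, int(is_local)))

def query(keyword):
    """Answer from local + cached metadata before flooding the P2P network."""
    cur = conn.execute(
        "SELECT title, owner_node FROM metadata WHERE keywords LIKE ?",
        (f"%{keyword}%",))
    return cur.fetchall()

index_file("dataset-a", "genomics,fasta", 1024, "localhost", True)
index_file("dataset-b", "genomics,vcf", 2048, "peer17", False)  # learned earlier
print(query("genomics"))  # both hits answered without a network search
```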

3.
A New Checkpoint-Based Fault-Tolerance Strategy for File System Metadata   Cited: 1 (self: 2, others: 1)
秦航  徐婕 《计算机工程与设计》2004,25(3):334-336,373
To deal with metadata failures in cluster file systems, a new checkpoint-and-log management strategy for metadata is proposed on the basis of PVFS. The strategy, implemented under Linux, alleviates the bottleneck of slow metadata management in the file system and provides strong fault tolerance. By combining an on-disk log with an in-memory log and managing updates as transactions, it meets the high-availability requirements for metadata in cluster file systems.
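A minimal sketch of the dual-log idea (an in-memory log flushed to an on-disk log at checkpoints), assuming a JSON log format that the paper does not specify:

```python
import json, os

class MetadataLog:
    """Updates go to a fast in-memory log first; a checkpoint flushes them
    as one transaction to the on-disk log, which recovery replays."""
    def __init__(self, path):
        self.path = path
        self.mem_log = []          # volatile, fast

    def record(self, op, key, value):
        self.mem_log.append({"op": op, "key": key, "value": value})

    def checkpoint(self):
        # Append the whole in-memory log as one transaction, then fsync.
        with open(self.path, "a") as f:
            f.write(json.dumps(self.mem_log) + "\n")
            f.flush()
            os.fsync(f.fileno())
        self.mem_log.clear()

    def recover(self):
        # Replay committed transactions after a crash.
        state = {}
        if os.path.exists(self.path):
            with open(self.path) as f:
                for line in f:
                    for entry in json.loads(line):
                        state[entry["key"]] = entry["value"]
        return state

log = MetadataLog("/tmp/meta.log")
log.record("set", "/a/b", {"size": 42})
log.checkpoint()                     # uncheckpointed entries would be lost
print(log.recover())
```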

4.
A highly reliable file storage system with a decentralized architecture (DHAFS) is proposed. The storage nodes cooperate to virtualize their local storage resources into a single global storage space with a unified file namespace, exposing a file interface to clients; storage, caching, and data/metadata management are distributed across all storage nodes. Compared with existing cluster storage systems, DHAFS both removes the single point of failure of a dedicated metadata node and eliminates its performance bottleneck, improving the dynamic scalability of the system. Experimental results show that DHAFS provides file storage services efficiently and stably.
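The abstract does not say how DHAFS maps files to nodes; the sketch below uses rendezvous (highest-random-weight) hashing as one plausible decentralized scheme with the properties the abstract claims, not the paper's actual algorithm:

```python
import hashlib

# Every node can compute the owner of any path independently, so no
# dedicated metadata node is needed.
NODES = ["node1", "node2", "node3", "node4"]

def owner(path, nodes=NODES):
    """Pick the node responsible for a path's metadata."""
    def weight(node):
        return hashlib.sha256(f"{node}:{path}".encode()).hexdigest()
    return max(nodes, key=weight)

print(owner("/global/ns/home/alice/report.txt"))
# Adding or removing a node only remaps the paths that hashed to it,
# which supports the dynamic scalability the abstract claims.
```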

5.
For a semi-structured P2P storage system, a metadata management mechanism based on interest clustering is designed. It uses a layered Bloom filter structure to store and maintain hot and local metadata, and routes metadata queries to different super nodes, achieving distributed metadata management. Experimental results show that the mechanism markedly improves metadata query efficiency and access speed, with good adaptability and scalability.
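A toy version of the layered lookup: a tiny Bloom filter class (sizes and hash counts are illustrative) plus a routing function that consults the hot layer, then the local layer, before forwarding to a super node:

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter; m and k are illustrative, not the paper's values."""
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        return all(self.bits >> p & 1 for p in self._positions(item))

# Layered lookup: hot-metadata layer first, then local metadata, and only
# then route the query to the responsible super node.
hot, local = BloomFilter(), BloomFilter()
hot.add("/videos/popular.mkv")
local.add("/home/me/notes.txt")

def route(path):
    if path in hot:   return "hot-layer hit: answer from hot metadata"
    if path in local: return "local hit: answer locally"
    return "forward to super node"

print(route("/videos/popular.mkv"))
```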

6.
A Spatial Metadata Directory Service in a Grid Environment   Cited: 1 (self: 0, others: 1)
To integrate distributed spatial metadata databases in a networked environment, a grid-based approach to spatial metadata integration is proposed. Grid middleware turns the existing distributed spatial metadata database nodes into grid sub-nodes, while a directory service and management modules are deployed on a central management node. The approach provides distributed metadata storage, distributed metadata query, a unified directory service, and metadata management, effectively integrating the spatial metadata scattered across the network and offering users a seamless, unified directory service. Test results show that the approach is simple, practical, and highly scalable.

7.
8.
Metadata operations are a key factor in the performance of a distributed file system. This work studies how the distributed file system Lustre stores and accesses metadata, and proposes an improvement to raise metadata service efficiency: the offsets of designated extended attributes (system metadata) within the extended-attribute block are stored in the data area of the metadata file's inode, reducing the cost of traversing the extended attributes. System tests and analysis show that the improved system achieves higher metadata access efficiency.
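An in-memory sketch of the improvement, not Lustre's actual on-disk format: the inode caches the positions of designated system xattrs so lookups can skip the traversal:

```python
# The xattr block is a sequence of (name, value) entries; normally finding
# one requires scanning them in order. The improvement caches, in the
# inode's data area, where the designated "system" xattrs live.
xattr_block = [("user.tag", b"x"), ("trusted.lov", b"stripe-info"),
               ("security.sel", b"ctx")]

inode = {
    "size": 4096,
    # offset cache for designated system xattrs (here: a list index)
    "xattr_offsets": {"trusted.lov": 1},
}

def get_xattr(name):
    off = inode["xattr_offsets"].get(name)
    if off is not None:
        return xattr_block[off][1]          # direct access, no traversal
    for key, value in xattr_block:          # fallback: linear scan
        if key == name:
            return value
    return None

print(get_xattr("trusted.lov"))
```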

9.
田翀  宁洪  李姗姗  雷琦 《计算机工程》2006,32(11):100-102,105
Based on an analysis of the requirements of current information systems in development, porting, and integration, and an in-depth study of existing metadata management schemes, a CWM-based metadata management scheme, WMMS, is proposed; it is suitable for metadata management in both local and distributed environments. The paper briefly introduces the architecture of CWM, describes the components and main functions of WMMS, and compares it with mature international metadata management schemes to show its technical and practical advantages; finally, it focuses on the design of the modeling tool in WMMS.

10.
An Efficient Interest-Domain-Based Search Scheme for Peer-to-Peer Networks   Cited: 22 (self: 0, others: 22)
To improve the poor search efficiency of unstructured peer-to-peer networks, an efficient search scheme based on interest domains is proposed. Unlike the common random-search schemes, document attributes are described by metadata in RDF statements, and nodes holding the same metadata belong to the same interest domain; a search request first propagates within the interest domain, which greatly improves search efficiency. As the search proceeds, the more a node learns about the other nodes in its interest domain, the more efficient its searches become. A metadata selection window and a metadata replication mechanism raise the efficiency further. Simulation results confirm that the scheme is accurate and efficient in unstructured peer-to-peer networks.
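A rough sketch of interest-domain routing; plain triples stand in for the RDF statements, and TTL-bounded forwarding stands in for the paper's full propagation rules:

```python
# A node's interest domain is the set of neighbors sharing at least one
# metadata triple with it; queries are forwarded inside that domain first.
class Node:
    def __init__(self, name, metadata):
        self.name = name
        self.metadata = set(metadata)   # RDF-like (subject, predicate, object)
        self.neighbors = []

    def domain_peers(self):
        """Neighbors that share at least one metadata triple with us."""
        return [n for n in self.neighbors if self.metadata & n.metadata]

    def search(self, triple, ttl=3):
        if triple in self.metadata:
            return self.name
        if ttl == 0:
            return None
        # propagate inside the interest domain first, then anywhere
        for peer in self.domain_peers() or self.neighbors:
            hit = peer.search(triple, ttl - 1)
            if hit:
                return hit
        return None

a = Node("A", {("jazz", "type", "genre"), ("d1", "genre", "jazz")})
b = Node("B", {("jazz", "type", "genre"), ("d2", "genre", "jazz")})
a.neighbors, b.neighbors = [b], [a]
print(a.search(("d2", "genre", "jazz")))  # found via the shared-interest peer
```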

11.
Warm standby redundancy is an important fault-tolerant design technique for improving the reliability of many systems used in life-critical or mission-critical applications. Traditional warm standby models aim to reduce the operational cost and failure rate of the standby elements by keeping them partially powered and partially exposed to operational stresses. However, depending on the level of readiness of a standby element, significant restoration delays and replacement costs can be incurred when the standby element is needed to replace the failed online element. To achieve a balance between the operational cost of standby elements and the replacement costs, this paper proposes a new warm standby model with scheduled (or time-based) standby mode transfer of standby elements. In particular, each standby element can be transferred from warm standby mode to hot standby mode (a mode in which the standby element is ready to take over at any time) at fixed, predetermined time instants after the mission starts. To facilitate the optimal design and implementation of the proposed model, this paper first suggests a new algorithm for evaluating the reliability and expected mission cost of a 1-out-of-N: G system with standby elements subject to the time-based standby mode transfer. The algorithm is based on a discrete approximation of the time-to-failure distributions of the elements and can work with any type of distribution. Based on the suggested algorithm, the problem of optimizing the transfer times of standby elements to hot standby mode and the optimal sequencing of their transfer to operation mode is formulated and solved. In this problem, the expected mission cost associated with the elements' standby and operation expenses and mode transfer expenses is minimized subject to a system reliability constraint. Illustrative examples are provided.
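For intuition, the Monte Carlo sketch below (not the paper's discrete-approximation algorithm) estimates mission reliability for a 1-out-of-2 system whose single standby moves from warm to hot mode at a scheduled time tau; all rates, the mission length, and the restoration delay are illustrative assumptions:

```python
import random

# Illustrative parameters: exponential failure rates in operation, warm
# standby, and hot standby; mission length T; scheduled transfer time TAU;
# restoration delay DELAY if the standby is still warm when needed.
LAM_OP, LAM_WARM, LAM_HOT = 1/100.0, 1/400.0, 1/150.0
T, TAU, DELAY = 50.0, 30.0, 5.0

def standby_fail_time():
    t = random.expovariate(LAM_WARM)           # failure clock in warm mode
    if t <= TAU:
        return t
    return TAU + random.expovariate(LAM_HOT)   # memoryless restart in hot mode

def mission_ok():
    t1 = random.expovariate(LAM_OP)            # online element's failure time
    if t1 >= T:
        return True
    if standby_fail_time() <= t1:              # standby already dead
        return False
    takeover_delay = 0.0 if t1 >= TAU else DELAY   # hot = instant takeover
    t2 = random.expovariate(LAM_OP)            # standby's operational lifetime
    # Assumption: the mission succeeds if operation resumes and covers [0, T].
    return t1 + takeover_delay + t2 >= T

n = 100_000
print(sum(mission_ok() for _ in range(n)) / n)
```

Sweeping TAU in such a simulation shows the trade-off the paper optimizes: an early transfer cuts the restoration delay but raises the standby's exposure (and cost) in hot mode.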

12.
A Load-Balancing Multi-Node Heartbeat Model   Cited: 4 (self: 0, others: 4)
State detection is an important component of high availability and reliability. A distributed high-availability detection mechanism is designed and implemented. Logically, it consists of one master node and several slave nodes: the master, elected by all nodes, periodically polls the virtual subnets that need monitoring, and the slaves respond to the polling messages; the master and slaves then judge each other's health from the polling responses and polling messages, respectively. A well-chosen timing policy keeps both the false-alarm probability and the detection latency low. The mechanism balances the load across master and slave nodes, scales well, keeps the number of states the state machine must maintain small, and is simple to manage.
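A round-based sketch of the mutual detection logic; the real mechanism polls virtual subnets over the network, while here only the bookkeeping is shown and all timing constants are illustrative:

```python
import time

# A driver loop would call Master.poll() every POLL_PERIOD seconds; either
# side declares the other suspect after TIMEOUT seconds of silence.
POLL_PERIOD, TIMEOUT = 1.0, 3.0

class Master:
    def __init__(self, slaves):
        self.last_response = {s: time.monotonic() for s in slaves}

    def poll(self, slaves):
        now = time.monotonic()
        for s in slaves:
            if s.respond():                    # slave answers the poll
                self.last_response[s] = now

    def suspects(self):
        now = time.monotonic()
        return [s for s, t in self.last_response.items() if now - t > TIMEOUT]

class Slave:
    def __init__(self):
        self.last_poll = time.monotonic()
        self.alive = True

    def respond(self):
        if self.alive:
            self.last_poll = time.monotonic()  # poll also proves master health
            return True
        return False

    def master_suspect(self):
        return time.monotonic() - self.last_poll > TIMEOUT
```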

13.
This paper deals with the profit function analysis of a single-server, two-unit standby system with two modes for each unit: normal (N) and total failure (F). The standby unit is repeatedly switched, after random lengths of time, between warm and cold standby. Upon failure of the operative unit, the standby unit, if it is warm, starts operating instantaneously; otherwise the system goes down until the cold standby starts operation. System failure occurs when both units fail totally. By identifying the system at suitable regenerative instants, integral equations are set up, and explicit expressions of interest to system designers are obtained to carry out the profit function analysis. The results for two particular cases are also derived.
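The abstract omits the explicit expressions; for orientation, a generic renewal-reward (regenerative) profit function of the kind such analyses produce has the form below, with the cost rates c0, c1, c2 as assumed placeholders:

```latex
% Generic renewal-reward template, not the paper's explicit result.
%   c0 = revenue per unit of system up time
%   c1 = cost per unit of server busy (repair) time
%   c2 = cost per unit of standby warm/cold interconversion time
\[
  P \;=\; \frac{c_0\,\mathbb{E}[T_{\mathrm{up}}]
              \;-\; c_1\,\mathbb{E}[T_{\mathrm{busy}}]
              \;-\; c_2\,\mathbb{E}[T_{\mathrm{conv}}]}
             {\mathbb{E}[T_{\mathrm{cycle}}]}
\]
% with all expectations taken over one cycle between successive
% regenerative instants; the integral equations determine these expectations.
```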

14.
This paper proposes and implements a remote, application-level database disaster-recovery system. The system supports multiple disaster backup and recovery strategies, and synchronizes the local database's data and the associated operations to a remote standby database in real time, keeping the data at the two sites consistent. When the local primary database fails, database service can be switched quickly to the remote standby database.

15.
In this exabyte-scale era, data increases at an exponential rate, generating in turn a massive amount of metadata in the file system. Hadoop is the most widely used framework for dealing with big data, but with this growth in metadata its efficiency has been questioned repeatedly by researchers. It is therefore essential to create efficient and scalable metadata management for Hadoop. Hash-based mapping and subtree partitioning are the usual distributed metadata management schemes. Subtree partitioning does not distribute the workload uniformly among the metadata servers, and metadata must be migrated to keep the load roughly balanced. Hash-based mapping distributes the load uniformly among the NameNodes, which are the metadata servers of Hadoop, but it constrains the locality of metadata. In this paper, we present a circular metadata management mechanism named dynamic circular metadata splitting (DCMS). DCMS preserves metadata locality using consistent hashing and locality-preserving hashing, keeps replicated metadata for excellent reliability, and dynamically distributes metadata among the NameNodes to keep the load balanced. The NameNode is the centralized heart of Hadoop: it keeps the directory tree of all files, and its failure constitutes a single point of failure (SPOF). DCMS removes Hadoop's SPOF and provides efficient and scalable metadata management. The new framework is named 'Dr. Hadoop' after the names of the authors.
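A sketch of DCMS-style placement under stated assumptions: a consistent-hash ring over NameNodes with R-way replication, and locality approximated by hashing a file's parent directory so a directory's entries land together (the paper's locality-preserving hashing is more involved):

```python
import bisect, hashlib

def h(key):
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    """Consistent-hash ring; vnode count and replica count are illustrative."""
    def __init__(self, namenodes, vnodes=64, replicas=2):
        self.replicas = replicas
        self.points = sorted((h(f"{n}#{v}"), n)
                             for n in namenodes for v in range(vnodes))

    def locate(self, path):
        parent = path.rsplit("/", 1)[0] or "/"   # keep a directory together
        i = bisect.bisect(self.points, (h(parent),)) % len(self.points)
        owners, seen = [], set()
        while len(owners) < self.replicas:       # next R distinct NameNodes
            node = self.points[i % len(self.points)][1]
            if node not in seen:
                seen.add(node)
                owners.append(node)
            i += 1
        return owners

ring = Ring(["nn1", "nn2", "nn3", "nn4"])
print(ring.locate("/user/alice/logs/part-0001"))
# Adding a NameNode moves only the keys adjacent to its ring positions,
# which is how the scheme redistributes load without global migration.
```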

16.
An efficient and distributed scheme for file mapping or file lookup is critical in decentralizing metadata management within a group of metadata servers. This paper presents a novel technique called Hierarchical Bloom Filter Arrays (HBA) to map filenames to the metadata servers holding their metadata. Two levels of probabilistic arrays, namely Bloom filter arrays with different levels of accuracy, are used on each metadata server. One array, with lower accuracy and representing the distribution of the entire metadata, trades accuracy for significantly reduced memory overhead, whereas the other array, with higher accuracy, caches partial distribution information and exploits the temporal locality of file access patterns. Both arrays are replicated to all metadata servers to support fast local lookups. We evaluate HBA through extensive trace-driven simulations and an implementation in Linux. Simulation results show our HBA design to be highly effective and efficient in improving the performance and scalability of file systems in clusters with 1,000 to 10,000 nodes (or superclusters) and with data in the petabyte scale or higher. Our implementation indicates that HBA can reduce the metadata operation time of a single-metadata-server architecture by a factor of up to 43.9 when the system is configured with 16 metadata servers.
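A toy rendering of the two-array lookup order on one server; filter sizes and the choice of SHA-1 are illustrative, not HBA's actual parameters:

```python
import hashlib

class Bloom:
    def __init__(self, m, k):
        self.m, self.k, self.bits = m, k, 0

    def _pos(self, s, i):
        return int(hashlib.sha1(f"{i}:{s}".encode()).hexdigest(), 16) % self.m

    def add(self, s):
        for i in range(self.k):
            self.bits |= 1 << self._pos(s, i)

    def query(self, s):
        return all(self.bits >> self._pos(s, i) & 1 for i in range(self.k))

SERVERS = range(4)
# High-accuracy array: caches only recently used filename->server mappings.
cache = {s: Bloom(m=1 << 16, k=4) for s in SERVERS}
# Low-accuracy array: covers the entire metadata in little memory.
global_ = {s: Bloom(m=1 << 10, k=2) for s in SERVERS}

def lookup(filename):
    hits = [s for s in SERVERS if cache[s].query(filename)]   # precise cache first
    if len(hits) != 1:
        hits = [s for s in SERVERS if global_[s].query(filename)]  # coarse fallback
    return hits or "broadcast to all metadata servers"

global_[2].add("/proj/x/readme")
print(lookup("/proj/x/readme"))  # [2], resolved locally without a broadcast
```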

17.
A key aspect of interoperation among data-intensive systems involves the mediation of metadata and ontologies across database boundaries. One way to achieve such mediation between a local database and a remote database is to fold remote metadata into the local metadata, thereby creating a common platform through which information sharing and exchange becomes possible. Schema implantation and semantic evolution, our approach to the metadata folding problem, is a partial database integration scheme in which remote and local (meta)data are integrated in a stepwise manner over time. We introduce metadata implantation and stepwise evolution techniques to interrelate database elements in different databases, and to resolve conflicts on the structure and semantics of database elements (classes, attributes, and individual instances). We employ a semantically rich canonical data model, and an incremental integration and semantic heterogeneity resolution scheme. In our approach, relationships between local and remote information units are determined whenever enough knowledge about their semantics is acquired. The metadata folding problem is solved by implanting remote database elements into the local database, a process that imports remote database elements into the local database environment, hypothesizes the relevance of local and remote classes, and customizes the organization of remote metadata. We have implemented a prototype system and demonstrated its use in an experimental neuroscience environment. Received June 19, 1998 / Accepted April 20, 1999

18.
A Multi-Site Disaster-Tolerance System for Database Services*   Cited: 2 (self: 0, others: 2)
A multi-site disaster-tolerance system for database services is designed and implemented. The system monitors data changes on the primary database in real time and replays them on several remote standby databases, keeping the standbys consistent with the primary. It also performs failure detection on the primary so that faults are discovered promptly, and it completes the failover of database service within a short time (on the order of seconds), ensuring service continuity and improving the disaster resilience of the database service.
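A compact sketch of the capture-and-replay plus failover idea; the Database class and its APIs are illustrative stand-ins for a real DBMS:

```python
import time

# Every write on the primary is appended to a change log and replayed on
# each standby; a monitor promotes a standby if the primary goes silent.
class Database:
    def __init__(self):
        self.rows, self.log = {}, []

    def write(self, key, value):
        self.rows[key] = value
        self.log.append((key, value))       # captured change

def replay(primary, standby, applied):
    """Apply changes the standby has not seen; return the new log position."""
    for key, value in primary.log[applied:]:
        standby.rows[key] = value
    return len(primary.log)

primary, standbys = Database(), [Database(), Database()]
positions = [0, 0]

primary.write("acct:42", 100)
positions = [replay(primary, s, p) for s, p in zip(standbys, positions)]

last_heartbeat = time.monotonic() - 10      # pretend the primary went silent
if time.monotonic() - last_heartbeat > 3:   # failure detected
    primary = standbys[0]                   # failover: promote a standby
print(primary.rows)                         # standby already holds the data
```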
