18 similar documents were found for this query (search time: 156 ms)
1.
In distributed data storage, metadata is kept on a central node, which creates a single point of failure, makes the metadata easy to tamper with, and therefore offers poor security. Introducing backup nodes mitigates the problem to some extent, but synchronization and failover between nodes are inefficient. Moreover, the nodes storing the metadata can reach consensus to modify it, so it cannot be trusted. To address these problems of traditional distributed storage, and drawing on the characteristics of blockchain, a decentralized distributed storage model DMB (Decentralized Metadata Blockchain) is proposed, which guarantees metadata integrity by storing metadata in blocks, storing the blockchain redundantly, and verifying cooperatively. The model has two phases: metadata storage and metadata verification. In the storage phase, the user's signature and replica-location data are sent to several verification nodes, which generate a metadata block and append it to the metadata blockchain. In the verification phase, a verification node first checks whether the state of its local metadata blockchain matches the global state, and synchronizes if it does not; it then searches the local metadata blockchain to verify metadata integrity. Theoretical analysis and experiments show that the DMB model guarantees the traceability and integrity of metadata, offers good concurrency, and has little impact on storage efficiency.
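The entry above describes the mechanism but gives no code; a minimal Python sketch of the hash-chained metadata block idea is shown below (function names, block fields, and the SHA-256 choice are all assumptions, not the paper's design):

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents (excluding its own hash field).
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def append_metadata(chain, user_signature, replica_locations):
    # Storage phase: a verification node packs the user's signature and
    # replica locations into a new block linked to the previous one.
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev": prev, "sig": user_signature, "replicas": replica_locations}
    block["hash"] = block_hash(block)
    chain.append(block)
    return block

def verify_chain(chain):
    # Verification phase: walk the chain and re-check every link.
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block_hash(block) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = []
append_metadata(chain, "sig-alice", ["node1:/data/a", "node3:/data/a"])
append_metadata(chain, "sig-bob", ["node2:/data/b"])
assert verify_chain(chain)
chain[0]["replicas"][0] = "evil:/data/a"   # tampering is detected
assert not verify_chain(chain)
```

Because each block embeds the previous block's hash, rewriting any replica location invalidates every later link, which is what makes the metadata traceable and tamper-evident.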
2.
Xiao Gang. Computer & Digital Engineering, 2012, 40(4): 75-77, 142
To address the weak semantics and coarse granularity of query conditions, the low retrieval efficiency, and the heavy bandwidth consumption in P2P networks, an efficient metadata-based query algorithm is proposed. A unified metadata layer is built on top of an arbitrary P2P data-management layer; each node automatically extracts detailed metadata for its shared data, and stores not only the metadata of its local shared data but also the metadata of the most interesting data it has accessed, managing all metadata efficiently in a database. All nodes thereby gain a self-learning capability and fully exploit metadata to improve retrieval efficiency.
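As a rough illustration of the node-local metadata layer described above (class and field names are hypothetical, and the paper manages metadata in a database rather than in memory):

```python
class MetadataNode:
    """A peer that indexes detailed metadata for its shared files and
    caches metadata of interesting data seen on other peers."""

    def __init__(self, name):
        self.name = name
        self.local = {}    # filename -> metadata dict for local shared data
        self.cache = {}    # filename -> (owner, metadata) learned from queries

    def share(self, filename, **metadata):
        self.local[filename] = metadata

    def remember(self, filename, owner, metadata):
        # "Self-learning": keep metadata of accessed data we found interesting.
        self.cache[filename] = (owner, metadata)

    def query(self, **conditions):
        # Match fine-grained conditions against local and cached metadata.
        def matches(md):
            return all(md.get(k) == v for k, v in conditions.items())
        hits = [(self.name, f) for f, md in self.local.items() if matches(md)]
        hits += [(owner, f) for f, (owner, md) in self.cache.items() if matches(md)]
        return hits

n = MetadataNode("peer-1")
n.share("song.mp3", artist="X", bitrate=320)
n.remember("talk.mp4", "peer-7", {"artist": "X", "format": "mp4"})
print(n.query(artist="X"))  # hits from both local data and the learned cache
```

The cache of previously accessed metadata is what lets a node answer queries about data it does not hold itself, cutting down flooding traffic.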
3.
A new checkpoint-based fault-tolerance strategy for file system metadata
To address metadata failures in cluster file systems, a new log-management strategy for metadata checkpoints is proposed on top of PVFS. The strategy, implemented under Linux, relieves the bottleneck of slow metadata management in the file system and provides strong fault tolerance. Using a combined disk-log and memory-log structure, together with transaction management, it meets the high-availability requirements for metadata in cluster file systems.
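A toy sketch of the disk-log/memory-log structure, assuming a simple create/delete metadata operation set (the names and the JSON-lines log format are illustrative, not PVFS internals):

```python
import json
import os
import tempfile

class MetadataJournal:
    """Two-level journal: a fast in-memory log flushed to a disk log at
    checkpoints; recovery replays the disk log, then the memory log."""

    def __init__(self, path):
        self.path = path
        self.mem_log = []

    def record(self, op, **args):
        self.mem_log.append({"op": op, **args})

    def checkpoint(self):
        # Flush the memory log to the disk log in one batch.
        with open(self.path, "a") as f:
            for entry in self.mem_log:
                f.write(json.dumps(entry) + "\n")
        self.mem_log.clear()

    def recover(self):
        # Rebuild the metadata state by replaying both logs in order.
        state = {}
        entries = []
        if os.path.exists(self.path):
            with open(self.path) as f:
                entries = [json.loads(line) for line in f]
        for entry in entries + self.mem_log:
            if entry["op"] == "create":
                state[entry["name"]] = entry["attrs"]
            elif entry["op"] == "delete":
                state.pop(entry["name"], None)
        return state

path = os.path.join(tempfile.mkdtemp(), "meta.log")
j = MetadataJournal(path)
j.record("create", name="/a", attrs={"size": 1})
j.checkpoint()                       # durable from here on
j.record("create", name="/b", attrs={"size": 2})
print(j.recover())                   # both operations are replayed
```

The split matters because the memory log absorbs the write latency, while the periodic checkpoint bounds how much work a recovery replay has to redo.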
6.
A spatial metadata directory service in a grid environment
To integrate distributed spatial metadata databases in a networked environment, a grid-based approach to spatial metadata integration is proposed. Grid middleware turns existing distributed spatial metadata database nodes into grid sub-nodes, while a directory service and a management module are deployed on the central management node, providing distributed metadata storage, distributed metadata query, a unified directory service, and metadata management. The approach effectively consolidates spatial metadata scattered across the network and offers users a seamless, unified directory service. Test results show that the method is simple to apply and that the system scales well.
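The directory-service idea can be illustrated in a few lines of Python (a sketch assuming a keyword search over registered catalogs; real grid middleware is far richer, and all names here are invented):

```python
class DirectoryService:
    """Central catalog on the management node: each grid sub-node
    registers the spatial metadata it holds, and queries fan out
    through the catalog rather than to every node."""

    def __init__(self):
        self.catalog = {}   # node name -> list of metadata records

    def register(self, node, records):
        self.catalog[node] = records

    def search(self, keyword):
        # Unified lookup across all registered sub-nodes.
        hits = []
        for node, records in self.catalog.items():
            hits += [(node, r) for r in records if keyword in r["title"]]
        return hits

d = DirectoryService()
d.register("gis-node-a", [{"title": "landuse map 2010"}])
d.register("gis-node-b", [{"title": "roads 2010"}, {"title": "rivers"}])
print(d.search("2010"))
```

Registering per-node catalogs centrally is what gives users one seamless entry point while the metadata itself stays distributed.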
8.
Metadata operations are a key factor in the performance of distributed file systems. This work studies how the distributed file system Lustre stores and accesses metadata, and proposes an improvement to raise metadata service efficiency: the offset, within the extended-attribute block, of a designated extended attribute (system metadata) is stored in the data area of the metadata file's inode, reducing the cost of traversing extended attributes. System tests and analysis show that the modified system achieves higher metadata access efficiency.
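A simplified model of the proposed optimization, with the extended-attribute block as a list and the cached offset stored in the inode (the names are invented; real Lustre inodes and xattr blocks are on-disk structures):

```python
class Inode:
    def __init__(self, xattr_block):
        self.xattr_block = xattr_block       # list of (name, value) entries
        self.fast_offsets = {}               # designated attr -> index in block

    def pin(self, name):
        # Store the attribute's offset in the inode's data area, so later
        # reads of this designated attribute skip the traversal.
        for i, (n, _) in enumerate(self.xattr_block):
            if n == name:
                self.fast_offsets[name] = i
                return

    def get_xattr(self, name):
        i = self.fast_offsets.get(name)
        if i is not None:
            return self.xattr_block[i][1]    # O(1): jump straight to the entry
        for n, v in self.xattr_block:        # O(k): full traversal fallback
            if n == name:
                return v

ino = Inode([("user.tag", "x"), ("trusted.lov", "stripe-info"), ("security.s", "y")])
ino.pin("trusted.lov")
print(ino.get_xattr("trusted.lov"))
```

Trading a few bytes of inode space for a stored offset turns the hot-path lookup of a designated system attribute into a direct read.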
An efficient interest-domain-based search scheme for peer-to-peer networks
To improve the low search efficiency of unstructured peer-to-peer networks, an efficient interest-domain-based search scheme is proposed. Unlike the common random-walk schemes, document attributes are described by metadata expressed as RDF statements; nodes holding the same metadata belong to the same interest domain, and a search request is propagated first within that domain, which greatly improves search efficiency. As searching proceeds, the more a node learns about the other nodes in its interest domain, the more efficient its searches become. A metadata selection window and a metadata replication mechanism improve efficiency further. Simulation results confirm the accuracy and efficiency of the scheme in unstructured peer-to-peer networks.
11.
Warm standby redundancy is an important fault-tolerant design technique for improving the reliability of many systems used in life-critical or mission-critical applications. Traditional warm standby models aim to reduce the operational cost and failure rate of the standby elements by keeping them partially powered and partially exposed to operational stresses. However, depending on the level of readiness of a standby element, significant restoration delays and replacement costs can be incurred when the standby element is needed to replace the failed online element. To achieve a balance between the operation cost of standby elements and the replacement costs, this paper proposes a new warm standby model with scheduled (or time-based) standby mode transfer of standby elements. In particular, each standby element can be transferred from warm standby mode to hot standby mode (a mode in which the standby element is ready to take over at any time) at a fixed, predetermined time instant after the mission starts. To facilitate the optimal design and implementation of the proposed model, this paper first suggests a new algorithm for evaluating the reliability and expected mission cost of a 1-out-of-N:G system with standby elements subject to the time-based standby mode transfer. The algorithm is based on a discrete approximation of the time-to-failure distributions of the elements and can work with any type of distribution. Based on the suggested algorithm, the problem of optimizing the transfer times of standby elements to hot standby mode, and the optimal sequencing of their transfer to operation mode, is formulated and solved. In this problem the expected mission cost associated with the elements' standby and operation expenses and mode transfer expenses is minimized subject to a system reliability constraint. Illustrative examples are provided.
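The paper's algorithm uses a discrete approximation of the time-to-failure distributions; as a rough illustration of the same model, here is a Monte Carlo sketch for one online element plus one standby, with exponential lifetimes and entirely assumed rates and restore delays:

```python
import random

def mission_success(t_mission, transfer_time, rng,
                    online_rate=0.001, warm_rate=0.0002, hot_rate=0.001,
                    hot_restore=0.0, warm_restore=50.0):
    # The standby is warm until `transfer_time` and hot afterwards.
    # A hot standby takes over instantly; a warm one needs `warm_restore`.
    t_fail_online = rng.expovariate(online_rate)
    if t_fail_online >= t_mission:
        return True
    # Did the standby itself survive until it was needed?
    warm_span = min(t_fail_online, transfer_time)
    hot_span = max(0.0, t_fail_online - transfer_time)
    survives = (rng.expovariate(warm_rate) > warm_span and
                (hot_span == 0.0 or rng.expovariate(hot_rate) > hot_span))
    if not survives:
        return False
    delay = hot_restore if t_fail_online >= transfer_time else warm_restore
    t_resume = t_fail_online + delay
    if t_resume >= t_mission:
        return False                 # system still down when the mission ends
    # The standby, now online, must run out the rest of the mission.
    return rng.expovariate(online_rate) > t_mission - t_resume

rng = random.Random(1)
trials = 20000
r = sum(mission_success(1000.0, 600.0, rng) for _ in range(trials)) / trials
print(f"estimated mission reliability: {r:.3f}")
```

Sweeping `transfer_time` in such a simulation exposes the trade-off the paper optimizes: an early transfer raises standby operating cost, a late one raises the chance of a costly warm-restore delay.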
12.
A multi-machine heartbeat model supporting load balancing
State detection is an important component of high availability and reliability. A distributed high-availability detection mechanism is designed and implemented. Logically it comprises one master node and several slave nodes; the master is elected by all nodes and periodically polls the virtual subnets to be monitored, and the slaves respond to the poll messages. Master and slaves check each other's health based on the poll responses and poll messages, respectively. A careful timing strategy ensures a low false-alarm probability and low diagnosis latency. The mechanism supports balancing the load across master and slave nodes, scales well, keeps the number of states the state machine must maintain small, and is simple to manage.
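A minimal sketch of the poll-and-miss-count logic on the master side (the names and the three-miss threshold are assumptions; the actual mechanism polls virtual subnets, and the slaves symmetrically watch for the master's polls):

```python
class HeartbeatMonitor:
    """Master-side view: slaves that miss `max_miss` consecutive polling
    rounds are declared failed, which keeps false alarms from a single
    dropped response low at the cost of some diagnosis latency."""

    def __init__(self, slaves, max_miss=3):
        self.missed = {s: 0 for s in slaves}
        self.max_miss = max_miss

    def poll_round(self, responders):
        # One polling cycle: responders answered, the rest missed a beat.
        for s in self.missed:
            self.missed[s] = 0 if s in responders else self.missed[s] + 1

    def failed(self):
        return [s for s, m in self.missed.items() if m >= self.max_miss]

m = HeartbeatMonitor(["node-a", "node-b"])
for _ in range(3):
    m.poll_round({"node-a"})        # node-b never answers
print(m.failed())
```

The `max_miss` threshold is the timing knob the abstract alludes to: larger values lower the false-alarm probability, smaller values shorten the diagnosis delay.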
13.
Rakesh Gupta, Sachendra Bansal, L. R. Goel. International Journal of Systems Science, 2013, 44(8): 1577-1587
This paper deals with the profit function analysis of a single-server, two-unit standby system with two modes for each unit: normal (N) and total failure (F). The standby unit is repeatedly switched, after random lengths of time, from warm to cold and from cold to warm. Upon failure of the operative unit, the standby unit, if it is warm, starts operating instantaneously; otherwise the system goes down until the cold standby starts operating. System failure occurs when both units fail totally. Identifying the system at suitable regenerative instants, integral equations are set up, and explicit expressions of interest to system designers are obtained to carry out the profit function analysis. The results of two particular cases are also derived.
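For such regenerative-point profit analyses, the steady-state profit function typically takes the following form (the coefficients and symbols here are assumed for illustration; the paper's explicit expressions depend on its particular transition rates):

```latex
P \;=\; C_0 A_0 \;-\; C_1 B_0
```

where \(A_0\) is the steady-state availability (expected up time per unit time), \(B_0\) is the expected fraction of time the repair facility is busy, \(C_0\) is the revenue per unit up time, and \(C_1\) is the repair cost per unit time. The designer's task is then to choose parameters, here the warm/cold interconversion policy, so that \(P\) is maximized.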
In this exabyte-scale era, data increases at an exponential rate, which in turn generates a massive amount of metadata in the file system. Hadoop is the most widely used framework for dealing with big data. Because of this growth in metadata, however, the efficiency of Hadoop has been questioned by many researchers, so it is essential to create efficient and scalable metadata management for Hadoop. Hash-based mapping and subtree partitioning are the usual choices in distributed metadata management schemes. Subtree partitioning does not uniformly distribute the workload among the metadata servers, and metadata must be migrated to keep the load roughly balanced. Hash-based mapping suffers from a constraint on the locality of metadata, though it uniformly distributes the load among the NameNodes, which are the metadata servers of Hadoop. In this paper, we present a circular metadata management mechanism named dynamic circular metadata splitting (DCMS). DCMS preserves metadata locality using consistent hashing and locality-preserving hashing, keeps replicated metadata for excellent reliability, and dynamically distributes metadata among the NameNodes to maintain load balancing. The NameNode is the centralized heart of Hadoop: it keeps the directory tree of all files, and its failure constitutes a single point of failure (SPOF). DCMS removes Hadoop's SPOF and provides efficient and scalable metadata management. The new framework is named 'Dr. Hadoop' after the name of the authors.
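The consistent-hashing half of DCMS can be sketched as follows (a minimal ring with virtual nodes; the locality-preserving hashing and replication parts are omitted, and all names are assumptions):

```python
import hashlib
from bisect import bisect_right

class ConsistentHashRing:
    """Maps metadata keys to NameNodes on a hash ring. Adding or removing
    a server only remaps the keys in its arcs, so rebalancing migrates
    little metadata; virtual nodes smooth the load distribution."""

    def __init__(self, nodes, vnodes=64):
        self.ring = []          # sorted list of (point, node)
        for node in nodes:
            for i in range(vnodes):
                self.ring.append((self._h(f"{node}#{i}"), node))
        self.ring.sort()

    @staticmethod
    def _h(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def lookup(self, path):
        # Hash the file path onto the ring and walk clockwise to the
        # first virtual node; that server owns the metadata.
        points = [p for p, _ in self.ring]
        i = bisect_right(points, self._h(path)) % len(self.ring)
        return self.ring[i][1]

ring = ConsistentHashRing(["nn1", "nn2", "nn3"])
owner = ring.lookup("/user/alice/part-0001")
print(owner)
```

Plain consistent hashing alone would scatter a directory's files across servers, which is exactly why DCMS pairs it with locality-preserving hashing.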
16.
Yifeng Zhu, Hong Jiang, Jun Wang, Feng Xian. IEEE Transactions on Parallel and Distributed Systems, 2008, 19(6): 750-763
An efficient and distributed scheme for file mapping or file lookup is critical in decentralizing metadata management within a group of metadata servers. This paper presents a novel technique called Hierarchical Bloom Filter Arrays (HBA) to map filenames to the metadata servers holding their metadata. Two levels of probabilistic arrays, namely, the Bloom filter arrays with different levels of accuracies, are used on each metadata server. One array, with lower accuracy and representing the distribution of the entire metadata, trades accuracy for significantly reduced memory overhead, whereas the other array, with higher accuracy, caches partial distribution information and exploits the temporal locality of file access patterns. Both arrays are replicated to all metadata servers to support fast local lookups. We evaluate HBA through extensive trace-driven simulations and implementation in Linux. Simulation results show our HBA design to be highly effective and efficient in improving the performance and scalability of file systems in clusters with 1,000 to 10,000 nodes (or superclusters) and with the amount of data in the petabyte scale or higher. Our implementation indicates that HBA can reduce the metadata operation time of a single-metadata-server architecture by a factor of up to 43.9 when the system is configured with 16 metadata servers.
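The Bloom-filter-array lookup can be sketched at one level (HBA itself uses two arrays of different accuracies; this illustration keeps a single array, with sizes, hash counts, and names chosen arbitrarily):

```python
import hashlib

class BloomFilter:
    def __init__(self, m=256, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        return all(self.bits >> p & 1 for p in self._positions(item))

class BloomFilterArray:
    """One filter per metadata server, replicated to every server: a
    purely local lookup yields the candidate servers that may hold a
    file's metadata (false positives possible, false negatives not)."""

    def __init__(self, servers):
        self.filters = {s: BloomFilter() for s in servers}

    def insert(self, server, filename):
        self.filters[server].add(filename)

    def candidates(self, filename):
        return [s for s, f in self.filters.items() if filename in f]

arr = BloomFilterArray(["mds0", "mds1"])
arr.insert("mds0", "/home/a/report.txt")
print(arr.candidates("/home/a/report.txt"))
```

The memory/accuracy trade-off the abstract describes corresponds to the filter size `m`: the coarse global array shrinks `m` to fit all metadata, while the accurate array spends more bits on only the recently accessed files.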
17.
Semantic heterogeneity resolution in federated databases by metadata implantation and stepwise evolution
Goksel Aslan, Dennis McLeod. The VLDB Journal, 1999, 8(2): 120-132
A key aspect of interoperation among data-intensive systems involves the mediation of metadata and ontologies across database boundaries. One way to achieve such mediation between a local database and a remote database is to fold remote metadata into the local metadata, thereby creating a common platform through which information sharing and exchange becomes possible. Schema implantation and semantic evolution, our approach to the metadata folding problem, is a partial database integration scheme in which remote and local (meta)data are integrated in a stepwise manner over time. We introduce metadata implantation and stepwise evolution techniques to interrelate database elements in different databases, and to resolve conflicts on the structure and semantics of database elements (classes, attributes, and individual instances). We employ a semantically rich canonical data model, and an incremental integration and semantic heterogeneity resolution scheme. In our approach, relationships between local and remote information units are determined whenever enough knowledge about their semantics is acquired. The metadata folding problem is solved by implanting remote database elements into the local database, a process that imports remote database elements into the local database environment, hypothesizes the relevance of local and remote classes, and customizes the organization of remote metadata. We have implemented a prototype system and demonstrated its use in an experimental neuroscience environment.
Received June 19, 1998 / Accepted April 20, 1999