Similar Documents
20 similar documents found (search time: 984 ms)
1.
Recently, flash memory has gained popularity as storage on a wide spectrum of computing devices such as cellular phones, digital cameras, digital audio players, and PDAs. The integration density of flash memory has doubled every year for the past few years. As flash memory's capacity increases and its price drops, it is expected to become increasingly competitive with magnetic disk drives. It is therefore desirable to adapt disk-based algorithms to take advantage of flash memory technology. In this paper, we propose a novel Flash-Aware external SorTing algorithm, FAST, which overcomes the higher write cost of flash memory to improve both overall execution time and response time. FAST reduces write operations at the cost of additional read operations. We analyze both the traditional and the flash-aware algorithms by comparing their detailed cost formulas. Experimental results with synthetic and real-life data sets show that FAST achieves faster execution time as well as shorter response time than traditional external sorting algorithms.
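The trade-off the abstract describes, fewer writes bought with extra reads, can be illustrated with a toy cost model. The cost constants and operation counts below are illustrative assumptions, not figures from the paper:

```python
# Toy I/O cost model: on flash, a page write costs several times a page
# read, so trading writes for a modest number of extra reads can lower
# total cost. The constants here are illustrative, not measured values.
READ_COST = 1.0    # cost units per page read (assumed)
WRITE_COST = 8.0   # cost units per page write (assumed; writes are costly)

def total_io_cost(reads, writes, read_cost=READ_COST, write_cost=WRITE_COST):
    """Total I/O cost of a sorting pass given its page read/write counts."""
    return reads * read_cost + writes * write_cost

# A hypothetical traditional pass vs. a flash-aware pass that removes
# 100 page writes at the price of 300 additional page reads.
traditional = total_io_cost(reads=1000, writes=1000)   # 9000.0
flash_aware = total_io_cost(reads=1300, writes=900)    # 8500.0
assert flash_aware < traditional
```

The inequality holds whenever the write/read cost ratio exceeds the extra-reads-per-saved-write ratio, which is the condition a flash-aware sort exploits.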

2.
Flash memories are among the best media for the storage areas of portable and desktop computers. Their features include non-volatility, low power consumption, and fast access time for read operations, which make flash memories strong candidates as major database storage components for portable computers. However, traditional B-Tree-based index management schemes need to be improved, because flash memory operations are relatively slow compared with RAM. To achieve this goal, we propose a new index management scheme based on compressed hot-cold clustering, called CHC-Tree. The CHC-Tree-based index management scheme improves index operation performance by compressing the flash index nodes and clustering the hot and cold segments. The cold-cluster compression technique, which uses the unused free area in an index node, reduces the number of slow write operations during index node insert/delete processing. Our performance evaluation shows that the scheme significantly reduces write operation overheads, improving the index update performance of the B-Tree by 21.9%.

3.
Flash memory is widely used in embedded devices and enterprise storage systems. Currently, flash-based storage devices usually use a flash translation layer (FTL) to cope with the special features of flash memory. Many FTL designs have been proposed, such as BAST (block-associative sector translation), FAST (fully associative sector translation), and IPL (in-page logging), of which IPL has been demonstrated to have the best performance. However, IPL gives little consideration to reducing merge operations, which consequently degrades the overall performance of flash-memory storage systems. We propose an improvement to IPL, called adaptive IPL (AIPL). The idea of adaptive IPL is to make the log region in a block resizable: a hot block (i.e., a write-intensive block) uses a large log region so as to absorb more page updates and in turn reduce merge operations, while a cold block, i.e., a block rarely written to, uses a small log region. This is realized by first detecting the update pattern of a block and then applying an update-pattern-based algorithm to dynamically adjust the log region size of a newly allocated block. We conduct experiments on TPC-C traces and synthetic traces and compare the performance of AIPL with its competitors in terms of merge count, write count, and elapsed time. The results demonstrate that, compared with IPL, AIPL can reduce merge operations by 65% and write operations by 54% on average.
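The sizing policy AIPL applies can be sketched as a simple classifier from a block's observed update count to its log region size. The thresholds, block geometry, and tier sizes below are illustrative assumptions, not the paper's algorithm:

```python
# Sketch of update-pattern-based log region sizing: a write-intensive
# (hot) block gets a large log region to absorb updates and defer merges;
# a rarely written (cold) block gets a small one to avoid wasting space.
# All thresholds and sizes are illustrative, not from the AIPL paper.
TOTAL_PAGES = 64  # pages per block (assumed geometry)

def log_region_pages(recent_updates, total_pages=TOTAL_PAGES):
    """Choose the log region size (in pages) for a newly allocated block."""
    if recent_updates >= 32:       # hot: absorb many page updates
        return total_pages // 2
    elif recent_updates >= 8:      # warm: moderate log region
        return total_pages // 4
    else:                          # cold: small log region
        return total_pages // 16

# A hot block reserves more log space than a cold one.
assert log_region_pages(40) > log_region_pages(2)
```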

4.
In general, NAND flash memory has advantages in low power consumption, storage capacity, and fast erase/write performance in contrast to NOR flash. However, the main drawback of NAND flash memory is its slow access time for random read operations. We therefore propose a new NAND flash memory package that overcomes this major drawback. We present a high-performance, low-power NAND flash memory system with a dual cache memory. The proposed NAND flash package consists of two parts: a NAND flash memory module and a dual cache module. The new NAND flash memory system can achieve dramatically higher performance and lower power consumption than any conventional NAND-type flash memory module. Our results show that the proposed system can eliminate about 78% of write operations to the flash memory cells and about 70% of read operations from the flash memory cells using only 3 KB of additional cache space. This represents high potential for low power consumption and high performance gains.

5.
Due to the rapid development of flash memory technology, NAND flash has been widely used as a storage device in portable embedded systems, personal computers, and enterprise systems. However, flash memory is prone to performance degradation due to the long latency of flash program and erase operations. One common technique for hiding long program latency is to use a temporary buffer to hold write data. Although DRAM is often used to implement the buffer because of its high performance and low bit cost, it is volatile; thus, data may be lost on power failure in the storage system. As a solution to this issue, recent operating systems frequently issue flush commands to force storage devices to permanently move data from the buffer into the non-volatile area. However, the excessive use of flush commands may worsen the write performance of storage systems. In this paper, we propose two data loss recovery techniques that require fewer write operations to flash memory. These techniques remove unnecessary flash writes by storing storage metadata along with user data simultaneously, utilizing the spare area associated with each data page.

6.
Flash memory is becoming a major database storage medium for embedded systems and portable devices because of its non-volatile, shock-resistant, power-efficient nature and fast access time for read operations. Flash memory, however, must be erased before it can be rewritten, and the erase and write operations are very slow compared with main memory. Due to this drawback, traditional database management schemes are not easy to apply directly to flash memory databases for portable devices. We therefore improve the traditional schemes and propose a new scheme, called flash two-phase locking (F2PL), for efficient transaction processing in a flash memory database environment. F2PL achieves high transaction performance by exploiting the notion of alternative version coordination, which allows reads of previous versions and efficiently handles slow write/erase operations in lock management. We also propose a simulation model to evaluate the performance of F2PL. Based on the results of the performance evaluation, we conclude that F2PL outperforms the traditional schemes.

7.
NAND flash memory has become the mainstream storage medium for both enterprise high-performance computers and embedded systems. However, over the past several decades, the storage primitives that access secondary storage have remained unchanged, forcing NAND flash memory to serve merely as a block device like a hard disk drive. Recently, several emerging storage primitives have been presented to explore the potential of non-volatile memory devices. Although these primitives can significantly boost access performance by providing virtual-to-logical address mappings, they still suffer from a large RAM footprint for maintaining the address mapping table and require further support for update operations. This paper presents ESP, which optimizes Emerging Storage Primitives with virtualization for flash memory storage systems. We propose two optimization strategies, virtual duplication and mapping prefetching, to solve the critical issues in existing emerging storage primitives. The objective is to reduce unnecessary flash memory accesses and keep the RAM footprint of the address mapping table well under control. We have evaluated ESP on an embedded development platform. Experimental results show that ESP can significantly improve write/read performance and reduce garbage collection operations by over 30%.

8.
Flash memory has critical drawbacks, such as the long latency of its write operation and a short life cycle. To overcome these limitations, the number of write operations to flash memory devices needs to be minimized. The B-Tree index structure, a popular hard-disk-based index structure, requires an excessive number of write operations when updated on flash memory. To address this, it was previously proposed that a layer emulating a B-Tree be placed between the flash memory and the B-Tree index. This approach succeeded in reducing the write operation count, but it greatly increased search time and main memory usage. This paper proposes a B-Tree index extension that reduces both the write count and the search time with limited main memory usage. First, we design a buffer that accumulates update requests per leaf node and then processes, in one batch, the update requests of the leaf node carrying the largest number of requests. Second, a type of header information is written on each leaf node. Finally, we make the index automatically control the size of each leaf node. In our experiments, the proposed index structure achieved a significantly lower write count and a greatly decreased search time with less main memory usage than the emulation-layer approach.
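The per-leaf buffering step can be sketched as follows. This is a minimal illustration of the batching idea only; the class and its policy are assumptions, not the paper's data structure:

```python
from collections import defaultdict

# Sketch of per-leaf update buffering: queue update requests by leaf node,
# then flush the leaf holding the most pending requests so that one flash
# write services the whole batch. Illustrative, not the paper's design.
class LeafUpdateBuffer:
    def __init__(self):
        self.pending = defaultdict(list)  # leaf id -> queued updates
        self.flash_writes = 0             # count of physical page writes

    def add(self, leaf_id, update):
        """Buffer an update request instead of writing the leaf now."""
        self.pending[leaf_id].append(update)

    def flush_largest(self):
        """Apply all updates of the most-updated leaf with one write."""
        leaf_id = max(self.pending, key=lambda k: len(self.pending[k]))
        batch = self.pending.pop(leaf_id)
        self.flash_writes += 1            # one write for the whole batch
        return leaf_id, batch
```

Batching n updates to one leaf into a single write is where the write-count reduction comes from; the trade-off is the RAM the pending lists occupy.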

9.
NAND flash memory has become the major storage medium in mobile devices, such as smartphones. However, random write operations on NAND flash memory heavily affect I/O performance, seriously degrading application performance in mobile devices. The main reason for slow random writes is the out-of-place update feature of NAND flash memory. Newly emerged non-volatile memories, such as phase-change memory and spin-transfer torque memory, support in-place updates and offer much better I/O performance than flash memory. These features make non-volatile memory (NVM) a promising solution for improving the random write performance of NAND flash memory. In this paper, we propose a non-volatile memory for random access (NVMRA) scheme that utilizes NVM to improve I/O performance in mobile devices. NVMRA exploits the I/O behaviors of applications to improve random write performance for each application; based on different I/O behaviors, such as random-write-dominant behavior, it adopts different storing decisions. The scheme is evaluated on a real Android 4.2 platform. The experimental results show that the proposed scheme can effectively improve I/O performance and reduce I/O energy consumption for mobile devices. Copyright © 2015 John Wiley & Sons, Ltd.

10.
Most superblock-based NAND flash storage systems employ a high-speed write buffer to enhance write performance. The main objective is to bind data at adjacent addresses as much as possible in order to transform random data into sequential data, which facilitates interleaving in the storage system. We have designed a new superblock-based buffer scheme for NAND flash storage systems that improves on traditional schemes. For buffer management, a series of lists monitors dataflow changes in the current state of the buffered data and the NAND flash memory, in order to maximize interleaving during the flush operation. Experimental results show that the proposed scheme achieves higher write performance in almost all configurations, with greater than 50% speedup in some cases. Our flash-aware write buffer (FAWB) scheme achieves this higher write performance with a required buffer space of only 1/4th to 1/8th that of other schemes, resulting in higher efficiency.

11.
In NAND flash memory, once a page program or block erase (P/E) command is issued to a NAND flash chip, subsequent read requests have to wait for the time-consuming P/E operation to complete. Preliminary results show that lengthy P/E operations increase read latency by 2× on average. This increased read latency caused by the contention may significantly degrade overall system performance. Inspired by the internal mechanism of NAND flash P/E algorithms, we propose a low-overhead P/E suspension scheme, which suspends the on-going P/E operation to service pending reads and resumes it afterwards. With reads given the highest priority, we further extend our approach by allowing writes to preempt erase operations in order to improve write latency. In our experiments, we simulate a realistic SSD model with multiple chips and channels, and we evaluate both SLC and MLC NAND flash as storage materials of diverse performance. Experimental results show that the proposed technique achieves near-optimal performance in servicing read requests, and write latency is significantly reduced as well. Specifically, read latency is reduced on average by 46.5% compared with RPS (Read Priority Scheduling), and with write-suspend-erase, write latency is reduced by 13.6% relative to FIFO.

12.
Flash-memory-based solid-state drives (SSDs) are widely used for secondary storage. To be effective on SSDs, traditional indices have to be redesigned to cope with the special properties of flash memory, such as asymmetric read/write latencies (fast reads and slow writes) and out-of-place updates. Previous flash-optimized indices focus mainly on reducing random writes to SSDs, which is typically accomplished at the expense of a substantial number of extra reads. However, modern SSDs show a narrowing gap between read and write speeds, and read operations increasingly affect the overall performance of indices on SSDs. As a consequence, how to optimize SSD-aware indices by reducing both write and read costs is a pertinent and open challenge. We propose a new tree index for SSDs that reduces both writes and extra reads. In particular, we use an update buffer and overflow pages to reduce random writes, and we further exploit Bloom filters to reduce the extra reads to the overflow nodes in the tree. With this mechanism, we construct a read/write-optimized index that offers better overall performance than previous flash-aware indices. In addition, we present an analysis of the proposed index and show that the read and write costs of its operations can be balanced by tuning only the false-positive rate of the Bloom filters. Our experimental results suggest that our proposal is efficient and represents an improvement over existing methods.
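The read-saving role of the Bloom filters can be sketched with a minimal filter: before issuing a flash read to an overflow node, the index consults the filter, and a negative answer skips the read entirely, while a false positive merely costs one wasted read. The parameters and hashing scheme below are illustrative assumptions, not the paper's implementation:

```python
import hashlib

# Minimal Bloom filter sketch. A membership check before reading an
# overflow node avoids the flash read when the key is definitely absent.
# Bit-array size m and hash count k are illustrative choices; tuning the
# false-positive rate trades wasted reads against filter size.
class BloomFilter:
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = 0                    # m-bit array packed into an int

    def _positions(self, key):
        # Derive k bit positions from independent salted SHA-256 hashes.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key):
        for pos in self._positions(key):
            self.bits |= 1 << pos

    def might_contain(self, key):
        # False positives possible; false negatives are not.
        return all(self.bits >> pos & 1 for pos in self._positions(key))

bf = BloomFilter()
bf.add("overflow-key-42")
assert bf.might_contain("overflow-key-42")   # inserted keys always hit
```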

13.
Log-structured merge tree (LSM-tree)-based key-value (KV) stores are widely used in big-data applications and provide high performance. NAND flash-based solid-state disks (SSDs) have become a popular storage alternative to hard disk drives (HDDs) because of their high performance and low power consumption. LSM-tree KV stores running on SSDs are deployed in large-scale storage systems that aim to achieve high performance in the cloud. In this paper, write amplification in LSM-tree KV stores and in the NAND flash memory of SSDs are denoted WA1 and WA2, respectively. The former, attributed to compaction operations in LSM-tree-based KV stores, burdens the I/O bandwidth between the host and the device. The latter, resulting from out-of-place updates in NAND flash memory, blocks user I/O requests between the host and NAND flash memory, thereby degrading SSD performance. Write amplification impairs overall system performance. In this study, we explore the two-level cascaded write amplification, denoted WA, in LSM-tree KV stores with SSDs. Our primary goal is to comprehensively study the two-level cascaded write amplification of host-side LSM-tree KV stores and device-side SSDs, and we quantitatively analyze its impact on overall performance. The cascaded write amplification is 16.44 (WA1 is 16.55; WA2 is 0.99) for SSD-I and 35.51 (WA1 is 16.6; WA2 is 2.14) for SSD-S with LevelDB's default settings under DB_bench. Large cascaded write amplification in KV stores harms SSD performance and lifetime: the throughput of SSD-S and SSD-I under an 80%-write workload is approximately 0.28× and 0.31× of that under a 100%-write workload. It is therefore important to design a novel approach that balances the SSD lifetime cost caused by cascaded write amplification against high performance under read-write-mixed workloads. We attempt to reveal the details of cascaded write amplification and hope that this study is useful for developers of LSM-tree-based KV stores and SSD software stacks.
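The cascaded figure is approximately the product of the two levels: each application byte is amplified WA1 times by compaction before it reaches the device, and each of those bytes is amplified WA2 times by the flash translation layer. The quoted numbers can be checked against this product model:

```python
# Cascaded write amplification: bytes written to flash cells per byte the
# application writes is (approximately) the product of the host-side and
# device-side amplification factors.
def cascaded_wa(wa_kv, wa_ssd):
    return wa_kv * wa_ssd

# Figures quoted in the abstract (LevelDB defaults under DB_bench):
assert abs(cascaded_wa(16.6, 2.14) - 35.51) < 0.1   # SSD-S
assert abs(cascaded_wa(16.55, 0.99) - 16.44) < 0.1  # SSD-I
```

The small residual gaps (e.g., 16.55 × 0.99 = 16.38 vs. the quoted 16.44) suggest the reported values are measured rather than derived, so the product should be read as a model, not an identity.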

14.
Flash memory has unique characteristics: the write operation is much more costly than the read operation, and in-place updating is not allowed. In this paper, we analyze how these characteristics affect the performance of clustering and non-clustering methods for record management, and show that non-clustering is more suitable in a flash memory environment. We also identify the problems of the existing non-clustering method when applied to flash memory without modification, and propose an effective method for record management in flash memory databases. Our method, which builds on the non-clustering method, stores consecutively inserted records in the same page so that they can be processed with only one write operation; we call this approach group write. Moreover, we propose two novel techniques for achieving efficient group writes: (1) dedicated buffers for group writes and (2) free-space lists managed in main memory that track only those pages having large free space. Our method greatly improves the write performance of database applications running on flash memory. For performance evaluation, we conduct a variety of experiments; the results show that our method achieves a speedup of up to 1.67× over the original non-clustering method.
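The group-write idea, packing consecutively inserted records into one page-sized buffer and issuing a single flash write for the whole group, can be sketched as follows. The class and its page capacity are illustrative assumptions:

```python
# Sketch of group write: consecutively inserted records accumulate in a
# page-sized buffer and are written together with one flash write, instead
# of one write per record. Capacity is illustrative, not a real page size.
PAGE_CAPACITY = 4  # records per page (assumed)

class GroupWriteBuffer:
    def __init__(self, capacity=PAGE_CAPACITY):
        self.capacity = capacity
        self.buffer = []        # records awaiting the next page write
        self.page_writes = 0    # physical flash writes issued

    def insert(self, record):
        self.buffer.append(record)
        if len(self.buffer) == self.capacity:
            self._flush()

    def _flush(self):
        self.page_writes += 1   # one write covers the whole group
        self.buffer.clear()

gw = GroupWriteBuffer()
for r in range(8):
    gw.insert(r)
assert gw.page_writes == 2      # 8 records cost 2 page writes, not 8
```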

15.
Many recent sensor devices are equipped with flash memories due to their unique advantages: non-volatile storage, small size, shock resistance, fast read access, and power efficiency. The ability to store large amounts of data on sensor devices necessitates efficient indexing structures to locate required information. The challenge with flash memories is that they are unsuitable for maintaining dynamic data structures because of their specific read, write, and wear constraints; this, combined with the very limited data memory on sensor devices, prohibits the direct application of most existing indexing methods. In this paper we propose a suite of index structures and algorithms that efficiently support several types of historical online queries on flash-equipped sensor devices: temporally constrained aggregate queries, historical online sampling queries, and pattern matching queries. We have implemented our methods in nesC and have run extensive experiments in TOSSIM, the simulation environment of TinyOS. Our experimental evaluation using trace-driven real-world data sets demonstrates the efficiency of our indexing algorithms.

16.
Flash memory is widely used as the storage medium in electronic products, and research on flash memory is receiving increasing attention. Based on the principle of locality of access, and taking into account the cost asymmetry between flash reads and writes, we propose LRU-BLL, a cache management algorithm tailored to flash memory that applies the locality principle at the block level. Experiments show that this method effectively improves the cache hit ratio, reduces the number of dirty-page write-backs from the cache, and increases the average eviction length of the buffer.

17.
Due to its low latency, byte-addressability, non-volatility, and high density, persistent memory (PM) is expected to be used to build high-performance storage systems. However, PM also has disadvantages such as limited endurance, which pose challenges to traditional index technologies such as the B+ tree. The B+ tree was originally designed for dynamic random access memory (DRAM)-based or disk-based systems and has a large write amplification problem; high write amplification is detrimental to a PM-based system. This paper proposes WO-tree, a write-optimized B+ tree for PM. WO-tree adopts an unordered write mechanism for the leaf nodes, which avoids the large number of write operations otherwise required to maintain entry order within leaf nodes. When a leaf node is split, WO-tree performs the cache-line flushing operation only after all write operations are completed, reducing frequent data flushing. WO-tree adopts a partial logging mechanism, writing the log only for leaf nodes; an inner node detects data inconsistency via the read operation, and the data can be recovered using the leaf node information, thereby significantly reducing logging overhead. Furthermore, WO-tree adopts lock-free search for inner nodes, which reduces the locking overhead of concurrent operations. We evaluate WO-tree using the Yahoo! Cloud Serving Benchmark (YCSB) workloads. Compared with the traditional B+ tree, wB-tree, and Fast-Fair, the number of cache-line flushes caused by WO-tree insertion operations is reduced by 84.7%, 22.2%, and 30.8%, respectively, and the execution time is reduced by 84.3%, 27.3%, and 44.7%, respectively.
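The unordered-leaf mechanism can be sketched in a few lines: inserts append to the leaf instead of shifting entries to keep it sorted, lookups scan, and ordering is materialized only when the leaf splits. This is an illustration of the general idea under assumed semantics, not WO-tree's PM-aware implementation:

```python
# Sketch of an unordered leaf node: appending avoids the entry shifting
# (and the extra writes it causes) that a sorted leaf needs on insert.
# Sorting is deferred to the split, which happens far less often.
class UnorderedLeaf:
    def __init__(self):
        self.entries = []                    # (key, value), append-only

    def insert(self, key, value):
        self.entries.append((key, value))    # no shifting of entries

    def search(self, key):
        # Linear scan; newest entry for a key wins (illustrative policy).
        for k, v in reversed(self.entries):
            if k == key:
                return v
        return None

    def split(self):
        """Sort only at split time, yielding two ordered halves."""
        ordered = sorted(self.entries)
        mid = len(ordered) // 2
        return ordered[:mid], ordered[mid:]
```

The trade is slower point lookups inside a leaf for far fewer writes per insert, which favors PM, where write endurance and flush cost dominate.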

18.
Hybrid solid-state storage has become the mainstream storage device in the consumer device market. In academia, however, the design of and problems with hybrid solid-state storage have not been sufficiently discussed and analyzed. Focusing on existing hybrid storage devices and drawing on frontier research in related fields, this article discusses and analyzes three aspects: an introduction to hybrid flash architectures, the pressing problems awaiting solutions, and related research progress. The article introduces and analyzes the mainstream hybrid flash architectures and their characteristics, presents experimental results measured on real device platforms, and exposes the problems in hybrid flash that urgently need to be solved, focusing on issues related to read characteristics, write characteristics, read-write conflicts, and capacity characteristics. It also introduces the latest research progress on these problems and analyzes the strengths and weaknesses of each technique as well as future directions.

19.
Flash memory is widely used in storage systems. Using a multi-dimensional index structure directly on flash memory would introduce a large number of redundant writes, since such an index structure requires intensive fine-grained updates. The K-D-B-tree is a classic multi-dimensional index structure. In this paper, we propose F-KDB, an implementation of the K-D-B-tree over flash memory that handles fine-grained updates. In F-KDB, a K-D-B-tree node is represented as a collection of logs (termed logging entries) to efficiently process updates to the node. Since collecting and parsing all the relevant logging entries to construct a node could degrade query performance, a Workload Adaptive (WA) online algorithm is proposed to improve it. A series of experiments demonstrates the efficiency of F-KDB over flash memory: the response times of insertion and deletion are significantly reduced, and the overall performance of F-KDB is improved.

20.
A main memory index is built under the assumption that RAM is large enough to hold the data. Due to the volatility and high unit price of main memory, indices on secondary storage such as SSDs and HDDs are widely used; however, I/O between secondary storage and main memory remains the bottleneck for query efficiency. In this paper, we propose Tide-tree, a self-tuning indexing scheme for RAM/disk-based hybrid storage systems. Tide-tree aims to overcome the obstacles that main memory and disk-based indices face, and behaves like the tide to achieve a double win in space and performance, adapting itself to the running environment. In particular, Tide-tree delaminates the tree structure adaptively and efficiently based on storage awareness, and applies an effective self-tuning algorithm to dynamically load various nodes into main memory. We employ memory-mapping technology to solve the persistence problem of main memory indices and to improve the efficiency of data synchronization and pointer translation. To further enhance the independence of Tide-tree, we employ an index head and a level address table to manage the whole index. With the index head, three efficient operations are supported: index rebuild, index load, and range search. We have conducted extensive experiments comparing Tide-tree with several state-of-the-art indices, and the results validate its high efficiency, reusability, and stability.
