Similar Literature
 Found 20 similar documents (search took 140 ms)
1.
Redundant arrays of inexpensive disks (RAID) use many small, cheap disks in place of large, expensive ones to achieve higher performance and lower power consumption. This paper describes eight different strategies for distributing the check information in a RAID-5 disk array, i.e., parity placement policies, and studies the different placements under several application scenarios. The conclusion is that the choice of parity placement policy has a large impact on the I/O read/write performance of the disk array.
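To make the idea of a parity placement policy concrete, here is a minimal, hedged sketch of two classic RAID-5 layouts, left-symmetric and right-asymmetric; these two layouts and the function names are illustrative assumptions, not the eight policies studied in the paper.

```python
# Illustrative sketch: two textbook RAID-5 parity placements. These stand
# in for, but are not, the eight policies studied in the paper.

def parity_disk_left_symmetric(stripe: int, n_disks: int) -> int:
    # Parity starts on the last disk and rotates toward the first.
    return (n_disks - 1) - (stripe % n_disks)

def parity_disk_right_asymmetric(stripe: int, n_disks: int) -> int:
    # Parity starts on the first disk and rotates toward the last.
    return stripe % n_disks

if __name__ == "__main__":
    for stripe in range(6):
        print(stripe,
              parity_disk_left_symmetric(stripe, n_disks=5),
              parity_disk_right_asymmetric(stripe, n_disks=5))
```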

2.
This paper reviews in detail the origin and development of disk array technology, focusing on the currently popular redundant array of inexpensive disks (RAID), and on that basis discusses several promising new storage technologies.

3.
This paper proposes four strategies for implementing RAID level 1, two synchronous and two asynchronous. Using the M/G/1 model from queuing theory, the I/O performance of the four strategies is studied in detail. The results show that three of the implementation strategies achieve a faster mean I/O response time than a single disk, while the fourth matches a single disk.
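As a hedged illustration of the kind of M/G/1 analysis the abstract refers to, the sketch below computes a disk's mean I/O response time from the Pollaczek-Khinchine formula; the parameter values in the usage example are placeholders, not figures from the paper.

```python
# Minimal M/G/1 sketch (Pollaczek-Khinchine): mean I/O response time of a
# single disk modeled as an M/G/1 queue. Parameter values below are
# illustrative placeholders, not the paper's measurements.

def mg1_response_time(arrival_rate: float, mean_s: float,
                      second_moment_s: float) -> float:
    rho = arrival_rate * mean_s          # server (disk) utilization
    assert rho < 1.0, "queue is unstable"
    wait = arrival_rate * second_moment_s / (2.0 * (1.0 - rho))  # P-K wait
    return mean_s + wait                 # response = service + queueing delay

if __name__ == "__main__":
    # e.g. 50 req/s against a disk with 12 ms mean service time
    print(mg1_response_time(50.0, 0.012, 0.00025))
```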

4.
RAID is short for redundant array of inexpensive disks; it joins multiple disk drives together to work cooperatively, providing parallel data transfer and fault tolerance. This paper introduces RAID technology and discusses how to use it to improve the speed and reliability of the disk subsystem.

5.
Installation and use of an IDE-based RAID disk array   Cited by: 1 (self-citations: 0, others: 1)
A RAID (Redundant Array of Inexpensive Disks) disk array joins several hard disks into a single array and reads and writes them in a way that effectively prevents data loss when one or more disks fail, while greatly improving access speed. Besides the disk group itself, the array hardware consists of an interface controller that mediates between the host and the disk group. To the host, the controller makes the whole disk group look like one fast, large, and reliable virtual disk, providing seamless, transparent disk operations. RAID is divided into six levels, RAID 0 through RAID 5, plus a derived RAI…
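The address translation behind that virtual-disk illusion can be sketched in a few lines; the following is a minimal example of RAID-0-style striping, where the stripe-unit size and function name are invented for illustration (real controllers also handle redundancy and caching).

```python
# Minimal sketch of the controller-side address translation that makes a
# disk group look like one large virtual disk (RAID-0 striping; the
# stripe-unit size and names are assumptions for illustration).

def translate(logical_block: int, n_disks: int, stripe_unit: int = 4):
    blocks_per_stripe = n_disks * stripe_unit
    stripe = logical_block // blocks_per_stripe
    within = logical_block % blocks_per_stripe
    disk = within // stripe_unit
    offset = stripe * stripe_unit + within % stripe_unit
    return disk, offset  # (which physical disk, block offset on that disk)

if __name__ == "__main__":
    print([translate(b, n_disks=4) for b in range(8)])
```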

6.
RAID (redundant array of inexpensive disks) is deployed to improve a system's I/O performance and reliability. This paper describes how to choose a RAID configuration mode according to different application requirements.

7.
A highly reliable disk array takes relatively cheap, small-capacity, high-performance disk drives as its building blocks and organizes them into an array to increase storage capacity and data transfer rate; the technique has attracted wide attention in the computing community at home and abroad. This paper explores the architecture of redundant arrays of inexpensive disks, the disk array controller, and several theoretical and engineering problems that must be solved during implementation.

8.
Distributed disk arrays are of great significance for improving the reliability, bandwidth, and capacity of data storage. This paper introduces two ways of connecting a distributed disk array, attaching individual disks to computers and attaching the array to the network, as well as two redundancy strategies that have been applied in distributed disk arrays, chained declustering and RAID-x.
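Of the two strategies named, chained declustering is simple enough to sketch; below is a hedged toy placement in which each data unit's backup copy lives on the next disk in the chain (the unit and disk numbering are assumptions for illustration).

```python
# Hedged sketch of chained declustering, one of the two redundancy
# strategies named in the abstract: each data unit keeps its primary copy
# on one disk and its backup on the next disk around the chain, so a
# single disk failure leaves every unit readable from its other copy.

def chained_declustering(n_units: int, n_disks: int):
    placement = {}
    for unit in range(n_units):
        primary = unit % n_disks
        backup = (primary + 1) % n_disks  # neighbor holds the mirror copy
        placement[unit] = (primary, backup)
    return placement

if __name__ == "__main__":
    print(chained_declustering(n_units=8, n_disks=4))
```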

9.
王波 《共创软件》2002,(1):16-22
RAID is short for redundant disk array; it means using multiple disks physically while presenting only one disk device logically. RAID technology, however, involves far more than simply mapping several disks onto one virtual disk device: it defines several different ways of accomplishing this. The key point is that RAID uses data redundancy to provide extra data safety…

10.
Starting from a channel model of the redundant disk array, this paper studies the theory and techniques behind each component. After giving a model-level description of the system, it focuses on the generation of redundant information and error detection and correction, and on data partitioning and synchronization. It also studies load balancing within the system and points out directions for optimized disk scheduling.
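For the redundancy-generation and recovery components, XOR parity is the standard building block; the following is a minimal sketch (the helper names are assumptions) of how a parity block is generated and how a lost block is reconstructed from the survivors.

```python
# Illustrative XOR parity: generating redundant information and recovering
# a lost data block. A minimal stand-in for the abstract's redundancy
# generation and error-correction components; names are assumptions.

from functools import reduce

def make_parity(blocks: list[bytes]) -> bytes:
    # XOR all equal-length blocks together to form the parity block.
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def recover(surviving: list[bytes], parity: bytes) -> bytes:
    # XOR of the parity with all surviving blocks reproduces the lost one.
    return make_parity(surviving + [parity])

if __name__ == "__main__":
    data = [b"AAAA", b"BBBB", b"CCCC"]
    parity = make_parity(data)
    assert recover([data[0], data[2]], parity) == data[1]
```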

11.

One way to increase storage density is using a shingled magnetic recording (SMR) disk. We propose a novel use of SMR disks with RAID (redundant array of independent disks) arrays, specifically building upon and compared with a basic RAID 4 arrangement. The proposed scheme (called RAID 4SMR) has the potential to improve the performance of a traditional RAID 4 array with SMR disks. Our evaluation shows that, compared with the standard RAID 4 using update in-place, RAID 4SMR with garbage collection not only allows the adoption of SMR disks with a reduced performance penalty, but also offers a performance improvement of up to 56%.


12.
When accessing a RAID (redundant array of inexpensive disks), the disk stripe size greatly affects the performance of the disk array. In this article, we present a performance model to analyze the effects of striping with different stripe sizes in a RAID. The model can be applied to optimize the stripe size. Compared with previous approaches, our model is simpler to apply and more accurately reveals the real performance. Both system designers and users can apply the model to support parallel I/O events. Copyright © 2000 John Wiley & Sons, Ltd.
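The article's model is not reproduced here, but the tradeoff it optimizes can be shown with a deliberately crude sketch: striping a request over more disks shortens the transfer but pays positioning overhead on every disk touched. All parameter values below are invented, and queueing effects under concurrency (which push the optimum toward larger stripes) are ignored.

```python
# Toy model (not the article's) of the stripe-size tradeoff: touching more
# disks cuts per-disk transfer time but adds positioning overhead on each
# disk touched. All parameters are illustrative assumptions.

import math

def service_time(request_kb: float, stripe_kb: float,
                 position_ms: float = 8.0, kb_per_ms: float = 50.0) -> float:
    disks_touched = max(1, math.ceil(request_kb / stripe_kb))
    per_disk_kb = request_kb / disks_touched
    return position_ms + per_disk_kb / kb_per_ms  # disks work in parallel

if __name__ == "__main__":
    for stripe in (4, 16, 64, 256):
        print(stripe, round(service_time(256, stripe), 2))
```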

13.
Performance of RAID5 disk arrays with read and write caching   Cited by: 1 (self-citations: 0, others: 1)
In this paper, we develop analytical models and evaluate the performance of RAID5 disk arrays in normal mode (all disks operational), in degraded mode (one disk broken, rebuild not started) and in rebuild mode (one disk broken, rebuild started but not finished). Models for estimating rebuild time under the assumption that user requests get priority over rebuild activity have also been developed. Separate models were developed for cached and uncached disk controllers. Particular emphasis is on the performance of cached arrays, where the caches are built of Non-Volatile memory and support write caching in addition to read caching. Using these models, we evaluate the performance of arrayed and unarrayed disk subsystems when driven by a database workload such as those seen on systems running any of several popular database managers. In particular, we assume single-block accesses, flat device skew and little seek affinity.

With the above assumptions, we find six significant results. First, in normal mode, we find there is no difference in performance between subsystems built out of either small arrays or large arrays as long as the total number of disks used is the same. Second, we find that if our goal is to minimize the average response time of a subsystem in degraded and rebuild modes, it is better to use small arrays rather than large arrays in the subsystem. Third, we find the counter-intuitive result that if our goal is to minimize the average response time of requests to any one array in the subsystem, it is better to use large arrays than small arrays in the subsystem. We call this the best worst-case phenomenon.

Fourth, we find that when no caching is used in the disk controller, subsystems built out of arrays have a normal mode performance that is significantly worse than an equivalent unarrayed subsystem built of the same drives. For the specific drive, controller, workload and system parameters we used for our calculations, we find that, without a cache in the controller and operating at typical I/O rates, the normal mode response time of a subsystem built out of arrays is 50% higher than that of an unarrayed subsystem. In rebuild mode, we find that a subsystem built out of arrays can have anywhere from 100% to 200% higher average response time than an equivalent unarrayed subsystem.

Our fifth result is that, with cached controllers, the performance differences between arrayed and equivalent unarrayed subsystems shrink considerably. We find that the normal mode response time in a subsystem built out of arrays is only 4.1% higher than that of an equivalent unarrayed system. In degraded (rebuild) mode, a subsystem built out of small arrays has a response time 11% (13%) higher and a subsystem built out of large arrays has a response time 15% (19%) higher than an unarrayed subsystem.

Our sixth and last result is that cached arrays have significantly better response times and throughputs than equivalent uncached arrays. For one workload, a cached array with good hit ratios had 5 times the throughput and 10 to 40 times lower response times than the equivalent uncached array. With poor hit ratios, the cached array is still a factor of 2 better in throughput and a factor of 4 to 10 better in response time for this same workload.

We conclude that three design decisions are important when designing disk subsystems built out of RAID level 5 arrays.
First, it is important that disk subsystems built out of arrays have disk controllers with caches, in particular Non-Volatile caches that cache writes in addition to reads. Second, if one were trying to minimize the worst response time seen by any user, one would choose disk array subsystems built out of large RAID level 5 arrays because of the best worst-case phenomenon. Third, if average subsystem response time is the most important design metric, the subsystem should be built out of small RAID level 5 arrays.

14.
Distributed sparing is a method to improve the performance of RAID5 disk arrays with respect to a dedicated sparing system with N+2 disks (including the spare disk), since it utilizes the bandwidth of all N+2 disks. We analyze the performance of RAID5 with distributed sparing in normal mode, degraded mode, and rebuild mode in an OLTP environment, which implies small reads and writes. The analysis in normal mode uses an M/G/1 queuing model, which takes into account the components of disk service time. In degraded mode, a low-cost approximate method is developed to estimate the mean response time of fork-join requests resulting from accesses to recreate lost data on the failed disk. Rebuild mode performance is analyzed by considering an M/G/1 vacationing server model with multiple vacations of different types to take into account differences in processing requirements for reading the first and subsequent tracks. An iterative solution method is used to estimate the mean response time of disk requests, as well as the time to read each disk, which is shown to be quite accurate through validation against simulation results. We next compare RAID5 performance in a system (1) without a cache; (2) with a cache; and (3) with a nonvolatile storage (NVS) cache. The last configuration, in addition to improved read response time due to cache hits, provides a fast-write capability, such that dirty blocks can be destaged asynchronously and at a lower priority than read requests, resulting in an improvement in read response time. The small write penalty is also reduced due to the possibility of repeated writes to dirty blocks in the cache and by taking advantage of disk geometry to efficiently destage multiple blocks at a time.
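As a hedged picture of what distributed sparing looks like on disk, the sketch below rotates one parity unit and one (initially empty) spare unit across all N+2 disks per stripe; the exact rotation rule is an assumption for illustration, not the paper's layout.

```python
# Hedged sketch of distributed sparing on N+2 disks: each stripe rotates
# one parity unit and one empty spare unit across the array, so the spare
# capacity (and its bandwidth) is spread over every disk rather than
# concentrated on a dedicated spare. The rotation rule is an assumption.

def stripe_layout(stripe: int, n_plus_2: int):
    parity = stripe % n_plus_2
    spare = (stripe + 1) % n_plus_2      # spare unit sits next to parity
    roles = ["data"] * n_plus_2
    roles[parity] = "parity"
    roles[spare] = "spare"
    return roles

if __name__ == "__main__":
    for s in range(4):
        print(s, stripe_layout(s, n_plus_2=6))  # N = 4 data units per stripe
```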

15.
Redundant arrays of independent disks (RAID) provide an efficient stable storage system for parallel access and fault tolerance. The most common fault-tolerant RAID architectures are RAID-1 and RAID-5. The disadvantage of RAID-1 lies in excessive redundancy, while the write performance of RAID-5 is only 1/4 of that of RAID-0. In this paper, we propose a high-performance and highly reliable disk array architecture, called striped mirroring disk array (SMDA). It is a new solution to the small-write problem for disk arrays. SMDA stores the original data in two ways, one on a single disk and the other on a plurality of disks in RAID-0 by striping. The reliability of the system is as good as RAID-1, but with a high throughput approaching that of RAID-0. Because SMDA omits the parity generation procedure when writing new data, it avoids the write performance loss often experienced in RAID-5.
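A minimal sketch of that write path, under the assumption of toy in-memory "disks" (the dicts and names below are invented), shows why no parity read-modify-write is needed:

```python
# Hedged sketch of the SMDA write path the abstract describes: data is
# written twice, once contiguously to a single mirror disk and once
# striped RAID-0-style across a disk group, so no parity is computed on
# the write path. The in-memory "disks" and names are toy assumptions.

def smda_write(block_id: int, data: bytes, mirror: dict, group: list):
    mirror[block_id] = data                       # copy 1: single disk
    disk = block_id % len(group)                  # copy 2: striped copy
    group[disk][block_id // len(group)] = data
    # No read-modify-write of parity: the RAID-5 small-write penalty is
    # avoided, yet every block still exists in two places.

if __name__ == "__main__":
    mirror, group = {}, [{} for _ in range(4)]
    smda_write(5, b"payload", mirror, group)
    print(mirror, group)
```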

16.
王志坤  冯丹 《计算机科学》2010,37(11):295-299
Traditional disk arrays generally use a centralized control structure; the number of underlying disks they can attach is constrained by the system bus, so they easily become a performance bottleneck, and they cannot tolerate the failure of two or more disks. Starting from the organizing principles of modular systems, this paper proposes MT2RAID, a large-scale disk array architecture built from standard modular storage units interconnected in a fat-tree structure, and analyzes and discusses the performance and reliability of its various data layouts. Tests of a prototype system show that, compared with a centralized disk array structure, MT2RAID also achieves high performance.

17.
This paper describes the implementation of a disk cache on top of RAID5. The cache implementation uses mature techniques such as set-associative mapping and the LRU replacement algorithm, and adopts a write-back policy for destaging, which improves disk write speed and reduces redundant disk writes. In addition, locking each parity group effectively prevents the data inconsistency that can arise when multiple blocks of the same parity group are destaged at the same time.
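The mechanisms the abstract names can be hedged-sketched as follows: LRU replacement with write-back destaging plus a per-parity-group lock. A fully associative OrderedDict stands in for the paper's set-associative mapping, and all structure and names are assumptions.

```python
# Hedged sketch of the cache mechanics named in the abstract: LRU with
# write-back, plus a per-parity-group lock so two blocks of the same
# parity group are never destaged concurrently. Names are assumptions,
# and full associativity stands in for the set-associative mapping.

from collections import OrderedDict
from threading import Lock

class WriteBackCache:
    def __init__(self, capacity: int, n_groups: int):
        self.capacity = capacity
        self.blocks = OrderedDict()           # block_id -> (data, dirty)
        self.group_locks = [Lock() for _ in range(n_groups)]
        self.n_groups = n_groups

    def write(self, block_id: int, data: bytes):
        self.blocks[block_id] = (data, True)  # mark dirty; destage later
        self.blocks.move_to_end(block_id)     # LRU: most recently used
        if len(self.blocks) > self.capacity:
            victim, (vdata, dirty) = self.blocks.popitem(last=False)
            if dirty:
                self._destage(victim, vdata)

    def _destage(self, block_id: int, data: bytes):
        # Serialize destages within one parity group to keep parity valid.
        with self.group_locks[block_id % self.n_groups]:
            pass  # write the data and the updated parity to the array here
```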

18.
After briefly introducing some concepts of redundant arrays of inexpensive disks, this paper draws on the author's experience developing the DAS3000 series of disk arrays to discuss several problems encountered in array design, including the relationship between parity computation and transfer speed, the impact of degraded-mode recovery on system performance, and whether spindle synchronization is necessary. The paper argues that the timely creation of new interface standards, together with support from the operating system, will directly affect the performance of the next generation of RAID.

19.
The performance of traditional RAID Level 5 arrays is, for many applications, unacceptably poor while one of its constituent disks is non-functional. This paper describes and evaluates mechanisms by which this disk array failure-recovery performance can be improved. The two key issues addressed are the data layout, the mapping by which data and parity blocks are assigned to physical disk blocks in an array, and the reconstruction algorithm, which is the technique used to recover data that is lost when a component disk fails. The data layout techniques this paper investigates are instantiations of the declustered parity organization, a derivative of RAID Level 5 that allows a system to trade some of its data capacity for improved failure-recovery performance. We show that our instantiations of parity declustering improve the failure-mode performance of an array significantly, and that a parity-declustered architecture is preferable to an equivalent-size multiple-group RAID Level 5 organization in environments where failure-recovery performance is important. The presented analyses also include comparisons to a RAID Level 1 (mirrored disks) approach. With respect to reconstruction algorithms, this paper describes and briefly evaluates two alternatives, stripe-oriented reconstruction and disk-oriented reconstruction, and establishes that the latter is preferable as it provides faster reconstruction. The paper then revisits a set of previously-proposed reconstruction optimizations, evaluating their efficacy when used in conjunction with the disk-oriented algorithm. The paper concludes with a section on the reliability versus capacity trade-off that must be addressed when designing large arrays. Portions of this material are drawn from papers at the 5th Conference on Architectural Support for Programming Languages and Operating Systems, 1992, and at the 23rd Symposium on Fault-Tolerant Computing, 1993. The work was supported by the National Science Foundation under grant number ECD-8907068, by the Defense Advanced Research Project Agency monitored by ARPA/CMO under contract MDA972-90-C-0035, and by an IBM Graduate Fellowship.
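To make the ordering difference between the two reconstruction algorithms concrete, here is a hedged toy in which the lost unit of each stripe is recovered as the XOR of the surviving units; real implementations schedule actual disk I/O, and the array contents below are arbitrary.

```python
# Hedged toy contrast of the two reconstruction algorithms the paper
# evaluates. Stripe-oriented recovers one stripe at a time; disk-oriented
# makes one sequential pass per surviving disk, keeping every spindle
# streaming. Data values and layout here are arbitrary assumptions.

from functools import reduce

N_DISKS, N_STRIPES, FAILED = 4, 6, 2
array = [[d * 100 + s for s in range(N_STRIPES)] for d in range(N_DISKS)]
survivors = [d for d in range(N_DISKS) if d != FAILED]

def stripe_oriented():
    # Recover stripe by stripe: each step waits on all survivors at once.
    return [reduce(lambda a, b: a ^ b, (array[d][s] for d in survivors))
            for s in range(N_STRIPES)]

def disk_oriented():
    # One sequential scan per surviving disk, XOR-accumulating per stripe.
    acc = [0] * N_STRIPES
    for d in survivors:
        for s in range(N_STRIPES):
            acc[s] ^= array[d][s]
    return acc

assert stripe_oriented() == disk_oriented()  # same result, different order
```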

20.
Data redundancy has been widely used to increase data availability in critical applications and several methods have been proposed to organize redundant data across a disk array. Data redundancy consists of either total data replication or the spreading of the data across the disk array along with parity information which can be used to recover missing data in the event of disk failure. In this paper we present an extended comparative analysis, carried out by using discrete event simulation models, between two disk array architectures: the Redundant Arrays of Inexpensive Disks (RAID) level 1 architecture, based on data replication; and the RAID level 5 architecture, based on the use of parity information. The comparison takes both performance and cost aspects into account. We study the performance of these architectures simulating two application environments characterized by different sizes of the data accessed by I/O operations. In addition, several scheduling policies for I/O requests are considered and the impact of non-uniform access to data on performance is investigated.
