Similar Documents
20 similar documents found (search time: 46 ms)
1.
Video can naturally be encoded into a multiple-resolution format. A multi-resolution or scalable video stream is a video sequence encoded such that subsets of the full-resolution bit stream can be decoded to recreate lower-resolution video streams. Employing scalable video enables a video server to provide multiple-resolution services to a variety of clients with different decoding capabilities and network bandwidths. The inherent advantages of a multi-resolution video server include heterogeneous client support, storage efficiency, adaptable service, and support for interactive operations. In designing a video server, several issues should be dealt with under a unified framework, including data placement/retrieval, buffer management, and admission control schemes for deterministic service guarantees. In this paper, we present a general framework for designing a large-scale multi-resolution video server. First, we propose a general multi-resolution video stream model which can be implemented by various scalable compression techniques. Second, given the proposed stream model, we devise a hybrid data placement scheme to store scalable video data across the disks in the server. The scheme exploits both the concurrency and the parallelism offered by striping data across the disks and achieves disk load balancing for any resolution of video service. Next, the retrieval of multi-resolution video is described. The deterministic access property of the placement scheme permits retrieval scheduling to be performed on each disk independently and supports interactive operations (e.g. pause, resume, slow playback, fast-forward and rewind) simply by reconstructing the input parameters to the scheduler. We also present an efficient admission control algorithm which precisely estimates the actual disk workload for the given resolution services and hence permits a much smaller buffer requirement. The proposed schemes are verified through detailed simulation and implementation.
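As a rough illustration of the kind of layered placement the abstract describes (the paper's actual hybrid scheme is not spelled out here; the layer/block structure and disk count below are assumptions for illustration), the following sketch stripes the blocks of each scalable-video layer round-robin across the disks so that a request for any subset of layers loads all disks evenly:

```python
# Minimal sketch (not the paper's algorithm): round-robin placement of the
# blocks of each scalable-video layer across D disks, so that serving any
# resolution (i.e. any prefix of layers) keeps the per-disk load balanced.
def place_layers(num_layers, blocks_per_layer, num_disks):
    placement = {}  # (layer, block) -> disk
    for layer in range(num_layers):
        for block in range(blocks_per_layer):
            # offset each layer so concurrent layer reads hit different disks
            placement[(layer, block)] = (block + layer) % num_disks
    return placement

if __name__ == "__main__":
    p = place_layers(num_layers=3, blocks_per_layer=12, num_disks=4)
    # count blocks per disk for a 2-layer (lower-resolution) service
    load = [0] * 4
    for (layer, block), disk in p.items():
        if layer < 2:
            load[disk] += 1
    print(load)   # balanced: every disk serves the same number of blocks
```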

2.
Multimedia systems store and retrieve large amounts of data, which requires extremely high disk bandwidth, and their performance critically depends on the efficiency of disk storage. However, existing magnetic disks are designed for the small data retrievals typical of traditional workloads, with speed improvements mainly focused on reducing seek time and rotational latency. When the same mechanism is applied to multimedia systems, disk I/O overheads can result in a dramatic deterioration in system performance. In this paper, we present a mathematical model to evaluate the performance of constant-density recording disks, and use this model to analyze quantitatively the performance of multimedia data request streams. We show that high disk throughput may be achieved by suitably adjusting the relevant parameters. In addition to demonstrating quantitatively that constant-density recording disks perform significantly better than traditional disks for multimedia data storage, a novel disk-partitioning scheme which places data according to their bandwidths is presented.
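The abstract does not reproduce the model itself, but the basic relation it builds on is that in a constant-density (zoned) disk the sustained transfer rate of a zone grows with its track length: more sectors pass under the head per revolution in outer zones. A hedged back-of-the-envelope calculation with made-up zone geometry:

```python
# Hedged illustration of why constant-density (zoned) disks transfer faster on
# outer zones: sectors per track grow with track radius, so at a fixed
# rotational speed the sustained transfer rate grows with the zone's track size.
RPM = 7200
BYTES_PER_SECTOR = 512

def zone_transfer_rate(sectors_per_track):
    revolutions_per_second = RPM / 60.0
    return sectors_per_track * BYTES_PER_SECTOR * revolutions_per_second

# Illustrative zone geometry (assumed numbers, not from the paper)
for name, sectors in [("inner zone", 600), ("middle zone", 900), ("outer zone", 1200)]:
    print(f"{name}: {zone_transfer_rate(sectors) / 1e6:.1f} MB/s")
```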

3.
Since multizone recording disks have different bandwidths and capacities depending on the zone in use, data placement schemes for traditional constant-angular-density disks are not suitable for multizone recording disks. In this paper, we propose a new block placement algorithm for multizone recording disks used in continuous media servers. The proposed scheme exploits the bandwidth-saving effect of smoothing variable-bit-rate data before storing them. The diversity of zone bandwidths in multizone recording disks makes it possible to achieve a large smoothing effect using relatively little buffer space. Variable-bit-rate data blocks of an object are smoothed using multiple smoothing rates, which are the bandwidths of the zones multiplied by the service time assigned to the object, and are stored in the corresponding zones. This multirate smoothing technique decreases the buffer space required to provide deterministic service to clients. Simulation results show that a proper restructuring of blocks according to the smoothing algorithm results in a dramatic performance enhancement in continuous media servers.
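A small sketch of the multirate relationship stated above (the zone bandwidths and service time are assumed values, and this is only the rate computation, not the paper's full smoothing algorithm): each zone's smoothing rate is its bandwidth multiplied by the object's service time, which fixes how much smoothed data one service round stores in or retrieves from that zone.

```python
# Sketch of the multirate smoothing rates described above (assumed parameters):
# zone z with bandwidth bw[z] yields a smoothing rate bw[z] * service_time,
# i.e. the amount of data handled per service round for blocks stored in zone z.
def smoothing_rates(zone_bandwidths_mbps, service_time_s):
    return [bw * service_time_s for bw in zone_bandwidths_mbps]

zone_bw = [40.0, 55.0, 70.0]          # MB/s per zone (illustrative)
service_time = 0.5                    # seconds of playback served per round
rates = smoothing_rates(zone_bw, service_time)
print([f"{r:.1f} MB per round" for r in rates])
```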

4.
Multimedia data, especially continuous media including video and audio objects, represent a rich and natural stimulus for humans, but require large amounts of storage capacity and real-time processing. In this paper, we describe how to organize video data efficiently on multiple disks in order to support arbitrary-rate playback requested by different users independently. Our approach is to segment and decluster video objects and to place the segments on multiple disks using a restricted round-robin scheme, called prime round-robin (PRR). This placement scheme provides uniform disk load balance for arbitrary retrieval rates as well as normal playback, since it eliminates hot spots. Moreover, it does not require any additional disk bandwidth to support VCR-like operations such as fast-forward and rewind. We have studied the various effects of placement and retrieval schemes in a storage server by simulation. The results show that PRR offers even disk accesses, and that failures to read a segment by its deadline occur only at the beginning of new operations. In addition, the number of users admitted is not decreased, regardless of arbitrary-rate playback requests.
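The abstract does not give the PRR construction itself, but the intuition behind prime-based round-robin placement can be shown with a tiny sketch (disk count and rates below are assumptions for illustration): with a prime number of disks, reading every k-th segment for fast playback still cycles through all disks whenever k is not a multiple of that prime, so no disk becomes a hot spot.

```python
# Hedged sketch of why a prime number of disks helps arbitrary-rate playback
# (illustration only; the paper's PRR scheme is more involved).  Segment i is
# stored on disk i mod P; playing back every k-th segment then visits all P
# disks as long as k is not a multiple of the prime P.
def disks_touched(num_disks, rate, num_segments=1000):
    return {(seg * rate) % num_disks for seg in range(num_segments)}

P = 7   # prime number of disks
for rate in (1, 2, 3, 5):            # normal playback and fast-forward rates
    print(rate, sorted(disks_touched(P, rate)))   # all 7 disks in every case
```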

5.
We propose an efficient writeback scheme that enables guaranteed throughput in high-performance storage systems. The proposed scheme, called de-fragmented writeback (DFW), reduces the positioning time of storage devices under write workloads and thus enables fast writeback in storage systems. We consider both types of storage media in designing the DFW scheme: traditional rotating disks and emerging solid-state disks. First, sorting and hole-filling methods are used for rotating disk media to achieve higher throughput. The scheme converts fragmented data blocks into sequential ones, reducing the number of write requests and unnecessary disk-head movements. Second, a flash-block-aware clustering-based writeback scheme is used for solid-state disks, considering the characteristics of flash memory. The experimental results show that our schemes deliver high system throughput while guaranteeing data reliability.
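A minimal sketch of the rotating-disk part of this idea (the block addresses, hole threshold, and coalescing rule are assumptions, not the paper's exact DFW policy): sort dirty block addresses and coalesce runs that are adjacent or separated by small holes, so many scattered writes become a few near-sequential ones.

```python
# Minimal sketch of sorting plus hole-filling writeback (assumed interfaces):
# runs that are adjacent or separated by at most `max_hole` blocks are merged,
# implicitly "filling" the hole so the run can be written as one sequential I/O.
def defragmented_writeback(dirty_blocks, max_hole=2):
    runs = []
    for addr in sorted(set(dirty_blocks)):
        if runs and addr - runs[-1][-1] <= max_hole + 1:
            runs[-1].extend(range(runs[-1][-1] + 1, addr + 1))
        else:
            runs.append([addr])
    return runs

print(defragmented_writeback([10, 3, 11, 4, 30, 12, 6]))
# -> [[3, 4, 5, 6], [10, 11, 12], [30]]  : 3 write requests instead of 7
```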

6.
In building a large-scale video server, it is highly desirable to use heterogeneous disk subsystems for the following reasons. First, existing disks may fail, especially in an environment with a large number of disks, forcing the use of new disks. Second, for a scalable server, new disks may be needed to increase the server's storage capacity and throughput to cope with increasing customer demand. With rapid advances in disk performance, the newly added disks generally have a higher data transfer rate and a larger storage capacity than the disks originally in the system. In this paper, we propose a novel striping scheme, termed resource-based striping (RBS), for video servers built on heterogeneous disks. RBS combines the techniques of wide striping and narrow striping so that it can obtain the optimal stripe allocation and efficiently utilize both the I/O bandwidth and the storage capacity of all disks. RBS is suitable for applications whose files are not updated frequently, such as course-on-demand and movie-on-demand. We examine the performance of RBS via simulation experiments. Our results show that RBS greatly outperforms the conventional striping schemes proposed for video servers with heterogeneous or homogeneous disks, in terms of the number of simultaneous streams supported and the number of files that can be stored.
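The abstract does not spell out RBS's allocation rule, so the following is only a generic resource-proportional illustration (disk bandwidths and unit counts are assumed): distribute a file's stripe units over heterogeneous disks in proportion to each disk's transfer rate, so faster disks absorb a correspondingly larger share of the I/O load.

```python
# Hedged illustration (not the RBS algorithm itself): bandwidth-proportional
# allocation of stripe units across heterogeneous disks.
def proportional_striping(num_units, disk_bandwidths):
    total = sum(disk_bandwidths)
    shares = [round(num_units * bw / total) for bw in disk_bandwidths]
    # fix rounding so the shares sum exactly to num_units
    shares[-1] += num_units - sum(shares)
    return shares

print(proportional_striping(100, [30, 30, 60, 80]))  # -> [15, 15, 30, 40]
```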

7.
Issues in the design of a storage server for video-on-demand
We examine issues related to the design of a storage server for video-on-demand (VOD) applications. The storage media considered are magnetic disks and arrays of disks. We investigate disk scheduling policies, buffer management policies and I/O bus protocol issues. We derive the number of sessions that can be supported from a single disk or an array of disks and determine the amount of buffering required to support a given number of users. Furthermore, we propose a scheduling mechanism for disk accesses that significantly lowers the buffer-size requirements in the case of disk arrays. The buffer size required under the proposed scheme is independent of the number of disks in the array. This property allows striping video content over a large number of disks to achieve higher concurrency in access to a particular video object. This enables the server to satisfy hundreds of independent requests to the same video object, or to hundreds of different objects, while storing only one copy of each video object. The reliability implications of striping content over a large number of disks are addressed and two solutions are proposed. Finally, we examine various policies for dealing with disk thermal calibration and the placement of videos on disks and disk arrays.
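A back-of-the-envelope version of the kind of per-disk session bound such a design derives (this is the standard round-based model with assumed seek, rotation, and rate figures, not necessarily the paper's exact derivation): n streams of rate r fit on one disk with service round T if the per-round seek, rotational and transfer costs of all n streams fit inside T.

```python
# Round-based admission test (illustrative parameters; assumed model):
# each admitted stream costs one seek + one rotational latency + the transfer
# time for one round's worth of its data, and all costs must fit in round T.
def max_streams(T=1.0, seek=0.010, rotation=0.004, disk_rate=40e6, stream_rate=0.5e6):
    n = 0
    while True:
        per_stream = seek + rotation + (stream_rate * T) / disk_rate
        if (n + 1) * per_stream > T:
            return n
        n += 1

print(max_streams())   # streams supportable per disk under these assumptions
```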

8.
In this paper, we deal with the data/parity placement problem, which is described as follows: how to place data and parity evenly across disks in order to tolerate two disk failures, given the number of disks N and the redundancy rate p, which represents the amount of disk space used to store parity information. To begin with, we transform the data/parity placement problem into the problem of constructing an N×N matrix such that the matrix corresponds to a solution to the problem. We propose a method to construct such a matrix and show how it works through several illustrative examples. It is also shown that any matrix constructed by the proposed method can be mapped into a solution to the placement problem if a certain condition holds between N and p.

9.
Several data replication strategies have been proposed to provide high data availability for database applications. However, the trade-offs among the different strategies for various workloads and different operating modes have not been studied before. In this paper, we study the relative performance of three high-availability data replication strategies, chained declustering, mirrored disks, and interleaved declustering, in a shared-nothing database machine environment. In particular, we examine (1) the relative performance of the three strategies when no failures have occurred, (2) the effect of load imbalance caused by a disk or processor failure on system throughput and response time, and (3) the trade-off between the benefit of intra-query parallelism and the overhead of activating and scheduling extra operator processes. Experimental results obtained from a simulation study indicate that, in the normal mode of operation, chained declustering and interleaved declustering perform comparably. Both perform better than mirrored disks if an application is I/O bound, but slightly worse than mirrored disks if the application is CPU bound. In the event of a disk failure, because chained declustering is able to balance the workload among all remaining operational disks while the other two cannot, it provides noticeably better performance than interleaved declustering and much better performance than mirrored disks.
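For readers unfamiliar with chained declustering, the standard layout it uses can be sketched in a few lines (disk count and fragment naming below are illustrative): the primary copy of fragment i lives on disk i and its backup on disk (i+1) mod N, so after a disk failure the lost fragment's reads move to the next disk, and the survivors can shift part of their primary work onto their backup neighbours along the chain.

```python
# Sketch of the standard chained-declustering layout compared in the paper
# (parameters are illustrative): primary on disk i, backup on disk (i+1) mod N.
N = 4
placement = {i: {"primary": i, "backup": (i + 1) % N} for i in range(N)}
print(placement)

failed = 2
for frag, loc in placement.items():
    disk = loc["backup"] if loc["primary"] == failed else loc["primary"]
    print(f"fragment {frag} now read from disk {disk}")
```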

10.
One of the most important challenges in a video-on-demand (VOD) system is to support interactive browsing functions such as fast forward and fast backward. Typically, these functions impose additional resource requirements on the VOD system in terms of storage space, retrieval throughput, network bandwidth, etc. Moreover, prevalent video compression techniques such as MPEG impose additional constraints on the process since they introduce interframe dependencies. In this paper, we devise methods to support variable-rate browsing for MPEG-like video streams while minimizing the additional resources required. Specifically, we consider the storage and retrieval of video data in a disk-array-based video server and address the issue of distributing the retrieval requests across the disks evenly. The overall approach proposed in this paper for interactive browsing is composed of (1) a storage method, (2) sampling and placement methods, and (3) a playout method, in which the sampling and placement methods are two alternatives for video-segment selection. The segment-sampling scheme supports browsing at any desired speed while balancing the load on the disk array, as well as minimizing the variation in the number of video segments skipped between samplings. In contrast, the segment-placement scheme supports completely uniform segment sampling across the disk array for some specific speed-up rates. Several theoretical properties of the problem studied are derived. Finally, we describe experimental results on the visual effect of the proposed frame-skipping approach.
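A hedged sketch of segment sampling for fast playback (illustration only, not the paper's exact selection rule): to browse at speed-up k, retrieve roughly every k-th segment; using rounding keeps the number of skipped segments between consecutive samples as even as possible, even for non-integer k.

```python
# Illustrative segment sampling for browsing at a given speed-up factor.
def sample_segments(num_segments, speedup):
    picks, i = [], 0.0
    while round(i) < num_segments:
        picks.append(int(round(i)))
        i += speedup
    return picks

print(sample_segments(20, 2.5))   # [0, 2, 5, 8, 10, 12, 15, 18]
```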

11.
In this paper, we propose a practical disk error recovery scheme tolerating multiple simultaneous disk failures in a typical RAID system, resulting in improved availability and reliability. The scheme is composed of an encoding process and a decoding process. The encoding process is defined by constructing one horizontal parity and a number of vertical parities. The decoding process is defined by a data recovery method for multiple disk failures, including failures of the parity disks. The proposed error recovery scheme is proven to correctly recover the original data for multiple simultaneous disk failures regardless of the positions of the failed disks. The proposed scheme uses only exclusive-OR operations and simple arithmetic operations, and can be easily implemented on current RAID systems without hardware changes.
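As background for the XOR-only claim above, a minimal parity sketch (illustration only; the paper's full scheme adds several vertical parities on top of the single horizontal parity shown here): a lost data block can be rebuilt by XOR-ing the surviving blocks with the parity block.

```python
# Minimal XOR-parity encode/recover example (not the paper's full code).
from functools import reduce

def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]          # data blocks on three disks
parity = xor_blocks(data)                    # horizontal parity block

# one data disk fails; rebuild its block from the survivors plus parity
lost = 1
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == data[lost]
print("recovered:", recovered)
```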

12.
The performance of access methods and the underlying disk system is a significant factor in determining the performance of database applications, especially with large sets of data. While modern hard disks are manufactured with multiple physical zones, where seek times and data transfer rates vary significantly across the zones, there has been little consideration of this important disk characteristic in designing access methods (indexing schemes). Instead, conventional access methods have been developed based on a traditional disk model that comes with many simplifying assumptions such as an average seek time and a single data transfer rate. The paper proposes novel partitioning techniques that can be applied to any tree-like access methods, both dynamic and static, fully utilizing zoning characteristics of hard disks. The index pages are allocated to disk zones in such a way that more frequently accessed index pages are stored in a faster disk zone. On top of the zoned data placement, a localized query processing technique is proposed to significantly improve the query performance by reducing page retrieval times from the hard disk.
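A hedged sketch of the frequency-to-zone mapping described above (the page list, zone capacities, and greedy packing rule are assumptions for illustration, not the paper's partitioning algorithm): pages are sorted by expected access frequency and packed into zones from fastest to slowest, so the root and upper index levels land in the fastest zone.

```python
# Illustrative zoned placement of index pages by access frequency.
def place_index_pages(pages, zones):
    # pages: list of (page_id, access_freq); zones: list of (zone_id, capacity),
    # assumed ordered from fastest to slowest
    placement, z, used = {}, 0, 0
    for page_id, _freq in sorted(pages, key=lambda p: -p[1]):
        if used == zones[z][1]:
            z, used = z + 1, 0
        placement[page_id] = zones[z][0]
        used += 1
    return placement

pages = [("root", 1000), ("inner-1", 200), ("inner-2", 180), ("leaf-7", 3), ("leaf-9", 2)]
zones = [("outer", 2), ("middle", 2), ("inner", 10)]
print(place_index_pages(pages, zones))
```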

13.
To preserve load balance in networked storage and avoid unrecoverable loss when a node or disk fails, a coded caching scheme for distributed network storage based on a balanced data placement strategy is proposed, with separate solutions for large caches and for small caches. First, to handle the large-cache case, the Maddah scheme is extended to a multi-server system: combined with the balanced data placement strategy, each file is stored in the data servers as a single unit. Then, to handle the small-cache case, an interference-elimination scheme is extended to the multi-server system to lower the peak cache rate, and, combined with the balanced placement strategy, linear combinations of cache segments are proposed. Finally, simulation experiments are carried out with the Linux-based NS2 simulator on systems with one and with two parity servers. The simulation results show that the proposed scheme effectively reduces the peak transmission rate and outperforms two other recent caching schemes. In addition, although distributed storage limits the ability to combine content from different servers into a single message, causing some performance loss for coded caching, it allows the inherent redundancy present in the distributed storage system to be exploited, improving the performance of the storage system.
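As background for the Maddah-style coded caching this work extends, the classic two-user, two-file example of that basic scheme (not the multi-server extension proposed here) can be shown in a few lines: each file is split into two halves, each user caches one half of every file, and whatever pair of files is requested, a single XOR-coded broadcast serves both users.

```python
# Background sketch of the basic coded-caching idea (two users, two files).
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

A1, A2 = b"AAAA", b"aaaa"         # halves of file A
B1, B2 = b"BBBB", b"bbbb"         # halves of file B
cache_u1 = {"A1": A1, "B1": B1}   # user 1 caches the first halves
cache_u2 = {"A2": A2, "B2": B2}   # user 2 caches the second halves

# user 1 requests A (missing A2), user 2 requests B (missing B1)
broadcast = xor(A2, B1)           # one coded transmission instead of two
assert xor(broadcast, cache_u1["B1"]) == A2   # user 1 recovers A2
assert xor(broadcast, cache_u2["A2"]) == B1   # user 2 recovers B1
print("both requests served with a single coded message")
```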

14.
This paper presents a robust watermarking scheme based on feature point detection and image normalization. First, some stable feature points are detected from the original image using the proposed multiresolution feature point detection filter. Then, image normalization is applied to the disks centered at these feature points. The watermark is embedded separately in the subband coefficients of the DFT domain of each disk. Watermark detection uses the correlation between the watermark embedding coefficients and the original watermark, and does not require the original image. The proposed scheme combines the advantages of feature point detection and image normalization, achieving strong robustness to signal processing and geometric distortions. The experimental results also demonstrate the good performance of the proposed scheme.
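A hedged sketch of blind, correlation-based detection of the kind described above (the flat host signal, alternating watermark sequence, embedding strength, and threshold are illustrative assumptions, not the paper's parameters): the detector correlates candidate coefficients with the known watermark and decides presence from the normalized correlation, without the original image.

```python
# Illustrative blind watermark detection via normalized correlation.
def normalized_correlation(x, w):
    num = sum(a * b for a, b in zip(x, w))
    den = (sum(a * a for a in x) * sum(b * b for b in w)) ** 0.5
    return num / den if den else 0.0

watermark = [(-1) ** i for i in range(64)]             # stand-in +/-1 sequence
host = [100.0] * 64                                    # host coefficients (flat, for illustration)
marked = [h + 2.0 * w for h, w in zip(host, watermark)]
mean = sum(marked) / len(marked)                       # remove the host bias first
score = normalized_correlation([c - mean for c in marked], watermark)
print(score > 0.5)    # True -> watermark detected
```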

15.
Disk load balancing for video-on-demand systems
For a video-on-demand computer system, we propose a scheme which balances the load on the disks, thereby helping to solve a performance problem crucial to achieving maximal video throughput. Our load-balancing scheme consists of two components. The static component determines good assignments of videos to groups of striped disks. The dynamic component uses these assignments, and features a “DASD dancing” algorithm which performs real-time disk scheduling in an effective manner. Our scheme works synergistically with disk striping. We examine the performance of the proposed algorithm via simulation experiments.
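A hedged sketch of the static half of such a scheme (a simple greedy assignment by expected demand; the paper's actual static optimization and "DASD dancing" dynamic algorithm are more sophisticated): each video is assigned to the striped disk group with the smallest accumulated expected load.

```python
# Illustrative greedy static assignment of videos to striped disk groups.
import heapq

def assign_videos(video_demands, num_groups):
    heap = [(0.0, g) for g in range(num_groups)]   # (accumulated load, group)
    heapq.heapify(heap)
    assignment = {}
    for video, demand in sorted(video_demands.items(), key=lambda kv: -kv[1]):
        load, group = heapq.heappop(heap)
        assignment[video] = group
        heapq.heappush(heap, (load + demand, group))
    return assignment

demands = {"movie-A": 0.40, "movie-B": 0.25, "movie-C": 0.20, "movie-D": 0.15}
print(assign_videos(demands, num_groups=2))
```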

16.
The existing SCSI parallel bus has been widely used in various multimedia applications. However, due to unfair bus access, the SCSI bus may not be able to fully utilize the potential aggregate throughput of the disks. The number of disks that can be attached to the SCSI bus is limited, and link-level fault tolerance is not provided. Serial storage interfaces such as the Serial Storage Architecture (SSA) provide high data bandwidth, fair access, long transmission distances between adjacent devices (disks or hosts) and link-level fault tolerance. The fairness algorithm of SSA ensures that a fraction of the data bandwidth is allocated to each device. In this paper we investigate whether SSA is a better alternative than SCSI for supporting continuous media. The scalability of a multimedia server is very important since the storage requirement may grow incrementally as more content is created and stored. SSA in a shared-storage cluster environment also supports concurrent accesses by different hosts as long as their access paths do not overlap. This feature is called spatial reuse. Therefore, the effective bandwidth over an SSA link can be higher than the raw data bandwidth, and the spatial reuse feature is critical to the scalability of a multimedia server. This feature is also included in FC-AL3 with a new mode called Multiple Circuit Mode (MCM). Using MCM, all devices can transfer data simultaneously without collision. In this paper we investigate the scalability of shared-storage clusters in an SSA environment.
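The spatial-reuse property can be illustrated with a tiny sketch (the loop topology and single-direction routing below are simplified assumptions): two transfers can proceed at full link rate simultaneously whenever the sets of links on their paths do not overlap, which is why aggregate throughput can exceed the raw rate of a single link.

```python
# Illustrative spatial-reuse check on a unidirectional loop interconnect.
def path_links(src, dst, ring_size):
    links, node = set(), src
    while node != dst:                       # route in one fixed direction
        links.add((node, (node + 1) % ring_size))
        node = (node + 1) % ring_size
    return links

a = path_links(0, 2, ring_size=6)            # host 0 -> disk 2
b = path_links(3, 5, ring_size=6)            # host 3 -> disk 5
print("concurrent without contention:", a.isdisjoint(b))   # True
```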

17.
Although the capacity of disks and other external storage devices is growing rapidly, it still cannot keep up with the needs of user applications, and in terms of performance external storage has become the bottleneck of computer systems. In a cluster environment, organizing distributed storage devices into a dynamic virtual disk array is therefore an attractive solution, and the data distribution algorithm is a key part of dynamic virtual disk array research: with an optimized data distribution algorithm, the performance and capacity of the array scale as the array is expanded. The main work of this study is to survey existing data distribution algorithms for dynamic disk arrays and to extend the earlier SCADDAR algorithm, proposing the D/H (Double/Halve) data distribution algorithm.

18.
Virtual-disk replica fault tolerance in mass storage networks
Data availability and read/write performance are increasingly important in large-scale storage networks. Building on a mass-storage virtualization system, a multi-replica virtual disk technique is implemented to improve the data fault tolerance of networked storage. Read/write performance is further improved through replica selection scheduling, asynchronous replica updates, and an algorithm that dynamically adjusts the space layout of replica disks. Test results show that with virtual-disk replicas, read performance improves by 26% when enough devices are available; even when a few disks fail, reads and writes still execute correctly, and read performance remains more than 10% higher than without replicas.

19.
Data layouts based on single-fault-tolerant codes can no longer meet the ever higher reliability requirements of storage systems. Layouts based on multi-fault-tolerant codes have therefore attracted wide attention, and several triple-fault-tolerant layout algorithms such as HDD1 and HDD2 have appeared, but they generally suffer from poor redundancy efficiency and heavy computational load. This paper proposes TP-RAID (Triple Parity RAID), a multi-fault-tolerant data layout algorithm based on triple parity. The algorithm only needs to add two parity disks to a RAID5 array; with horizontal, forward-diagonal, and reverse-diagonal parity, it can tolerate three simultaneous disk failures. Encoding and decoding are simple, the three parity stripes have equal length, the computational load is small, and the algorithm is easy to implement. Moreover, because logical coupling among the three parities is minimized, its small-write performance is substantially better than that of other triple-fault-tolerant algorithms.
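A hedged sketch of the three parity directions mentioned above (the stripe layout and diagonal indexing are illustrative assumptions, not the paper's exact TP-RAID coding): over an n×n stripe of data blocks, a horizontal parity is computed per row and forward-/reverse-diagonal parities per diagonal, all with plain XOR.

```python
# Illustrative horizontal plus two diagonal XOR parities over an n x n stripe.
def xor_all(blocks):
    out = blocks[0]
    for b in blocks[1:]:
        out = bytes(x ^ y for x, y in zip(out, b))
    return out

n = 3
stripe = [[bytes([10 * r + c] * 4) for c in range(n)] for r in range(n)]

horizontal = [xor_all(stripe[r]) for r in range(n)]
forward_diag = [xor_all([stripe[r][(d + r) % n] for r in range(n)]) for d in range(n)]
reverse_diag = [xor_all([stripe[r][(d - r) % n] for r in range(n)]) for d in range(n)]
print(len(horizontal), len(forward_diag), len(reverse_diag))   # 3 3 3
```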

20.
The HYCOM (hybrid coordinate ocean model) numerical ocean model requires high throughput with relatively little computation, which poses a major challenge for parallel algorithm design. For the high-throughput ocean data assimilation problem, a parallel optimization algorithm based on domain decomposition is designed. First, a flexible file-access method is proposed that reads large amounts of data from disk efficiently, avoids data-access conflicts, and greatly reduces the frequency of disk seek operations. In addition, a communication-avoiding strategy is designed that substantially reduces inter-process communication at the cost of some extra computation. Finally, a pipeline-based communication strategy is proposed to achieve conflict-free message passing. Experimental results show that, compared with the baseline algorithm, overall performance improves by a factor of 5, with file reading 6 times faster and inter-process communication 2.7 times faster.
