Found 20 similar articles (search took 625 ms)
1.
Providing QOS guarantees for disk I/O (cited 1 time: 0 self-citations, 1 by others)
In this paper, we address the problem of providing different levels of performance guarantees or quality of service for disk
I/O. We classify disk requests into three categories based on the provided level of service. We propose an integrated scheme
that provides different levels of performance guarantees in a single system. We propose and evaluate a mechanism for providing
deterministic service for variable-bit-rate streams at the disk. We show that, through proper admission control and bandwidth
allocation, requests in different categories can be ensured of performance guarantees without getting impacted by requests
in other categories. We evaluate the impact of scheduling policy decisions on the provided service. We also quantify the improvements
in stream throughput possible by using statistical guarantees instead of deterministic guarantees in the context of the proposed
approach.
2.
A large-scale, distributed video-on-demand (VOD) system allows geographically dispersed residential and business users to
access video services, such as movies and other multimedia programs or documents on demand from video servers on a high-speed
network. In this paper, we first demonstrate through analysis and simulation the need for a hierarchical architecture for
the VOD distribution network. We then assume a hierarchical architecture, which fits the existing tree topology used in today's
cable TV (CATV) hybrid fiber/coaxial (HFC) distribution networks. We develop a model for the video program placement, configuration,
and performance evaluation of such systems. Our approach takes into account the user behavior, the fact that the user requests
are transmitted over a shared channel before reaching the video server containing the requested program, the fact that the
input/output (I/O) capacity of the video servers is the costlier resource, and finally the communication cost. In addition,
our model employs batching of user requests at the video servers. We study the effect of batching on the performance of the
video servers and on the quality of service (QoS) delivered to the user, and we contribute dynamic batching policies that improve server utilization and user QoS while lowering server cost. The evaluation is based on an extensive analytical and
simulation study.
3.
Due to recent advances in network, storage and data compression technologies, video-on-demand (VOD) service has become economically
feasible. It is a challenging task to design a video storage server that can efficiently service a large number of concurrent
requests on demand. One approach to accomplishing this task is to reduce the I/O demand to the VOD server through data- and
resource-sharing techniques. One form of data sharing is the stream-merging approach proposed in [5]. In this paper, we formalize a static version of the stream-merging problem, derive an upper bound on the
I/O demand of static stream merging, and propose efficient heuristic algorithms for both static and dynamic versions of the
stream-merging problem.
4.
Adaptive piggybacking: a novel technique for data sharing in video-on-demand storage servers (cited 17 times: 0 self-citations, 17 by others)
Recent technology advances have made multimedia on-demand services, such as home entertainment and home-shopping, important
to the consumer market. One of the most challenging aspects of this type of service is providing access either instantaneously
or within a small and reasonable latency upon request. We consider improvements in the performance of multimedia storage servers
through data sharing between requests for popular objects, assuming that the I/O bandwidth is the critical resource in the system. We discuss a novel approach to data sharing,
termed adaptive piggybacking, which can be used to reduce the aggregate I/O demand on the multimedia storage server and thus
reduce latency for servicing new requests.
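The catch-up arithmetic that makes piggybacking work can be illustrated in a few lines. This is a hypothetical Python sketch, not the authors' algorithm; the 5% display-rate alterations and the function names are assumptions:

```python
def merge_time(t1, t2, slow=0.95, fast=1.05):
    """Wall-clock time at which a trailing stream (started at t2) catches up
    with a leading stream (started at t1 < t2) when the leader's display rate
    is slowed to `slow` x normal and the trailer's sped up to `fast` x.
    Leader position at time t:  slow * (t - t1)
    Trailer position at time t: fast * (t - t2)
    Setting the two positions equal gives the merge time."""
    if t2 <= t1:
        raise ValueError("trailing stream must start after the leader")
    return (fast * t2 - slow * t1) / (fast - slow)

def merge_position(t1, t2, slow=0.95, fast=1.05):
    """Content position (in seconds of video) at which the streams merge;
    merging only pays off if this lies before the end of the movie."""
    t = merge_time(t1, t2, slow, fast)
    return slow * (t - t1)
```

For example, two requests for the same movie arriving 60 s apart merge roughly 630 s after the first starts, at which point one of the two I/O streams can be released.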
5.
Kelvin K.W. Law, John C.S. Lui, Leana Golubchik. The VLDB Journal (The International Journal on Very Large Data Bases), 1999, 8(2):133-153
Advances in high-speed networks and multimedia technologies have made it feasible to provide video-on-demand (VOD) services
to users. However, it is still a challenging task to design a cost-effective VOD system that can support a large number of
clients (who may have different quality of service (QoS) requirements) and, at the same time, provide different types of VCR
functionalities. Although it has been recognized that VCR operations are important functionalities in providing VOD service,
techniques proposed in the past for providing VCR operations may require additional system resources, such as extra disk I/O,
additional buffer space, as well as network bandwidth. In this paper, we consider the design of a VOD storage server that
has the following features: (1) provision of different levels of display resolutions to users who have different QoS requirements,
(2) provision of different types of VCR functionalities, such as fast forward and rewind, without imposing additional demand
on the system buffer space, I/O bandwidth, and network bandwidth, and (3) guarantees of the load-balancing property across
all disks during normal and VCR display periods. The above-mentioned features are especially important because they simplify
the design of the buffer space, I/O, and network resource allocation policies of the VOD storage system. The load-balancing
property also ensures that no single disk will be the bottleneck of the system. In this paper, we propose data block placement,
admission control, and I/O-scheduling algorithms, as well as determine the corresponding buffer space requirements of the
proposed VOD storage system. We show that the proposed VOD system can provide VCR and multi-resolution services to the viewing
clients and at the same time maintain the load-balancing property.
Received June 9, 1998 / Accepted April 26, 1999
6.
This paper addresses the problem of resource reservation for applications using the real-time service-oriented architecture paradigm. Real-time services must be completed by their deadlines. They can be scheduled anywhere within an execution interval. Some services have a large execution interval, which gives them more flexibility during admission control. However, the conventional approach for real-time process scheduling is to reserve a fixed schedule on a first-come, first-served basis and thus does not take advantage of this flexibility. In this paper, a reorganization algorithm is presented that relocates existing reservations in order to accommodate new requests that have less flexibility. For service process reservations, intermediate deadlines may also be adjusted to further increase the flexibility of service reservations. Simulation results show that reorganization can greatly enhance the acceptance ratio of real-time requests in most situations.
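The flexibility argument can be made concrete with a small sketch: instead of freezing each reservation where it was first placed, re-pack all reservations in deadline order whenever a new request arrives. This is an illustrative Python heuristic under assumed data shapes, not the paper's reorganization algorithm:

```python
def repack(reservations):
    """Try to place every reservation (exec_time, release, deadline) on a
    single resource, in non-decreasing deadline order, each starting as early
    as its release time and the previous reservation allow.  Returns the
    (start, end) slots in that order, or None when some deadline cannot be
    met even after relocating everything."""
    schedule, t = [], 0.0
    for exec_time, release, deadline in sorted(reservations, key=lambda r: r[2]):
        start = max(t, release)
        end = start + exec_time
        if end > deadline:
            return None  # infeasible even with relocation
        schedule.append((start, end))
        t = end
    return schedule

def admit(existing, new_request):
    """Admit the new request iff the existing reservations can be
    relocated to make room for it."""
    return repack(existing + [new_request]) is not None
```

A fixed first-come, first-served placement would pin a flexible reservation (5, 0, 20) at time 0 and then reject a tight request (5, 0, 8); re-packing runs the tight request first and accepts both.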
7.
Igor D.D. Curcio, Antonio Puliafito, Salvatore Riccobene, Lorenzo Vita. Multimedia Systems, 1998, 6(6):367-381
The relative simplicity of access to digital communications nowadays and the simultaneous increase in the available bandwidth
are leading to the definition of new telematic services, mainly oriented towards multimedia applications and interactivity
with the user. In the near future, a decisive role will be played in this scenario by the providers of interactive multimedia
services of the on-demand type, which will guarantee the end user a high degree of flexibility, speed and efficiency. In this
paper, some of the technical aspects regarding these service providers are dealt with, paying particular attention to the
problems of storing information and managing service requests. More specifically, the paper presents and evaluates a new storage
technique based on the use of disk array technology, which can manage both typical multimedia connections and traditional
requests. The proposed architecture is based on the joint use of the partial dynamic declustering and the information dispersal
algorithm, which are employed for the allocation and retrieval of the data stored on the disk array. We also define efficient
strategies for request management in such a way as to meet the time constraints imposed by multimedia sessions and guarantee
good response times for the rest of the traffic. The system proposed is then analyzed using a simulation approach.
8.
In the past, much emphasis has been given to the data throughput of VOD servers. In Interactive Video-on-Demand (IVOD) applications, such as digital libraries, service availability and response times are more visible to the user than the underlying data throughput. Data throughput is a measure of how efficiently resources are utilized. Higher throughput may be achieved at the expense of deteriorated user-perceived performance metrics such as probability of admission and queuing delay prior to admission. In this paper, we propose and evaluate a number of strategies to sequence the admission of pending video requests. Under different request arrival rates and buffer capacities, we measure the probability of admission, queuing delay and data throughput of each strategy. Results of our experiments show that simple hybrid strategies can improve the number of admitted requests and reduce the queuing time without jeopardizing the data throughput. The techniques we propose are independent of the underlying disk-scheduling techniques used, so they can be employed to improve the user-perceived performance of VOD servers in general.
9.
In a video-on-demand (VOD) environment, batching requests for the same video to share a common video stream can lead to significant
improvement in throughput. Using the wait-tolerance characteristic commonly observed in viewer behavior, we introduce a new paradigm for scheduling in VOD systems.
We propose and analyze two classes of scheduling schemes: the Max_Batch and Min_Idle schemes that provide two alternative
ways for using a given stream capacity for effective batching. In making a video selection, the proposed schemes take into
consideration the next stream completion time, as well as the viewer wait tolerance. We compared the proposed schemes with
the two previously studied schemes: (1) first-come-first-served (FCFS) that schedules the video with the longest waiting request
and (2) the maximum queue length (MQL) scheme that selects the video with the maximum number of waiting requests. We show
through simulations that the proposed schemes substantially outperform FCFS and MQL in reducing the viewer turn-away probability,
while maintaining a small average response time. In terms of system resources, we show that, by exploiting the viewers' wait
tolerance, the proposed schemes can significantly reduce the server capacity required for achieving a given level of throughput
and turn-away probability compared to FCFS and MQL. Furthermore, our study shows that aggressive use of the viewer
wait tolerance for batching may not yield the best strategy, and that other factors, such as the resulting response time,
fairness, and loss of viewers, should be taken into account.
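For reference, the two baseline policies compared above are simple to state. A minimal Python sketch (the data layout and function names are assumptions, not the paper's code):

```python
def fcfs_pick(waiting):
    """FCFS: when a stream frees up, serve the video whose oldest waiting
    request arrived earliest.  `waiting` maps video id -> arrival times
    of its waiting requests."""
    pending = {v: times for v, times in waiting.items() if times}
    if not pending:
        return None
    return min(pending, key=lambda v: min(pending[v]))

def mql_pick(waiting):
    """MQL: serve the video with the longest queue of waiting requests,
    maximizing the size of the batch served by one stream."""
    pending = {v: times for v, times in waiting.items() if times}
    if not pending:
        return None
    return max(pending, key=lambda v: len(pending[v]))
```

The Max_Batch and Min_Idle schemes additionally weigh the next stream completion time and viewer wait tolerance, which neither baseline considers.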
10.
Design and analysis of a video-on-demand server (cited 6 times: 0 self-citations, 6 by others)
The availability of high-speed networks, fast computers and improved storage technology is stimulating interest in the development
of video on-demand services that provide facilities similar to a video cassette player (VCP). In this paper, we present a
design of a video-on-demand (VOD) server, capable of supporting a large number of video requests with complete functionality
of a remote control (as used in VCPs), for each request. In the proposed design, we have used an interleaved storage method
with constrained allocation of video and audio blocks on the disk to provide continuous retrieval. Our storage scheme interleaves
a movie with itself (while satisfying the constraints on video and audio block allocation). This approach minimizes the starting delay and the
buffer requirement at the user end, while ensuring a jitter-free display for every request. In order to minimize the starting
delay and to support more non-concurrent requests, we have proposed the use of multiple disks for the same movie. Since a
disk needs to hold only one movie, an array of inexpensive disks can be used, which reduces the overall cost of the proposed
system. A scheme supported by our disk storage method to provide all the functions of a remote control such as “fast-forwarding”,
“rewinding” (with play “on” or “off”), “pause” and “play” has also been discussed. This scheme handles a user request independent
of others and satisfies it without degrading the quality of service to other users. The server design presented in this paper
achieves the multiple goals of high disk utilization, global buffer optimization, cost-effectiveness and high-quality service
to the users.
11.
Secure buffering in firm real-time database systems (cited 2 times: 0 self-citations, 2 by others)
Binto George, Jayant R. Haritsa. The VLDB Journal (The International Journal on Very Large Data Bases), 2000, 8(3-4):178-198
Many real-time database applications arise in electronic financial services, safety-critical installations and military systems
where enforcing security is crucial to the success of the enterprise. We investigate here the performance implications, in terms of killed transactions,
of guaranteeing multi-level secrecy in a real-time database system supporting applications with firm deadlines. In particular, we focus on the buffer management aspects of this issue.
Our main contributions are the following. First, we identify the importance and difficulties of providing secure buffer management
in the real-time database environment. Second, we present SABRE, a novel buffer management algorithm that provides covert-channel-free security. SABRE employs a fully dynamic one-copy allocation policy for efficient usage of buffer resources. It also incorporates
several optimizations for reducing the overall number of killed transactions and for decreasing the unfairness in the distribution
of killed transactions across security levels. Third, using a detailed simulation model, the real-time performance of SABRE
is evaluated against insecure conventional and real-time buffer management policies for a variety of security-classified transaction
workloads and system configurations. Our experiments show that SABRE provides security with only a modest drop in real-time
performance. Finally, we evaluate SABRE's performance when augmented with the GUARD adaptive admission control policy. Our
experiments show that this combination provides close to ideal fairness for real-time applications that can tolerate covert-channel
bandwidths of up to one bit per second (a limit specified in military standards).
Received March 1, 1999 / Accepted October 1, 1999
12.
In this paper, we present an efficient approach for supporting fast-scanning (FS) operations in MPEG-based video-on-demand
(VOD) systems. This approach is based on storing multiple, differently encoded versions of the same movie at the server. A
normal version is used for normal playback, while several scan versions are used for FS. Each scan version supports forward and backward FS at a given speedup. The server responds to an FS request
by switching from the normal version to an appropriate scan version. Scan versions are produced by encoding a sample of the raw frames using the same GOP pattern as the normal version. When a scan version is decoded and played back at the
normal frame rate, it gives a perceptual motion speedup. By being able to control the traffic envelopes of the scan versions,
our approach can be integrated into a previously proposed framework for distributing archived, MPEG-coded video streams. FS
operations are supported using no or little extra network bandwidth beyond what is already allocated for normal playback.
Mechanisms for controlling the traffic envelopes of the scan versions are presented. The actions taken by the server and the
client's decoder in response to various types of interactive requests are described in detail. The latency incurred in implementing
various interactive requests is shown to be within an acceptable range. Striping and disk-scheduling strategies for storing
various versions at the server are presented. Issues related to the implementation of our approach are discussed.
13.
Efficient admission control algorithms for multimedia servers (cited 3 times: 0 self-citations, 3 by others)
In this paper, we have proposed efficient admission control algorithms for multimedia storage servers that are providers
of variable-bit-rate media streams. The proposed schemes are based on a slicing technique and use aggressive methods for admission
control. We have developed two types of admission control schemes: Future-Max (FM) and Interval Estimation (IE). The FM algorithm uses the maximum bandwidth requirement of the future to estimate the bandwidth requirement. The IE
algorithm defines a class of admission control schemes that use a combination of the maximum and average bandwidths within
each interval to estimate the bandwidth requirement of the interval. The performance evaluations done through simulations
show that the server utilization is improved by using the FM and IE algorithms. Furthermore, the quality of service is also
improved by using the FM and IE algorithms. Several results depicting the trade-off between the implementation complexity,
the desired accuracy, the number of accepted requests, and the quality of service are presented.
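The idea behind the Future-Max estimate, charging each stream the maximum bandwidth it will ever need from the current point onward, can be sketched in a few lines. An illustrative Python sketch over slice-aligned per-stream bandwidth profiles (an assumption made here for brevity, not the paper's formulation):

```python
def future_max(profile):
    """Suffix maxima of a per-slice bandwidth profile: entry i is the
    largest bandwidth the stream will need from slice i onward."""
    out, peak = [], 0
    for b in reversed(profile):
        peak = max(peak, b)
        out.append(peak)
    return out[::-1]

def fm_admit(active, new, capacity):
    """Admit the `new` stream (a per-slice bandwidth profile) if, in every
    future slice, the summed Future-Max estimates of all streams stay
    within `capacity`."""
    profiles = active + [new]
    horizon = max(len(p) for p in profiles)
    for i in range(horizon):
        demand = sum(future_max(p)[i] for p in profiles if i < len(p))
        if demand > capacity:
            return False
    return True
```

The IE class replaces the suffix maximum with a per-interval blend of maximum and average bandwidth, trading estimate tightness against bookkeeping cost.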
14.
Video and audio compression techniques allow continuous media streams to be transmitted at bit rates that are a function
of the delivered quality of service. Digital networks will be increasingly used for the transmission of such continuous media
streams. This paper describes an admission control policy in which the quality of service is negotiated at stream initiation,
and is a function of both the desired quality of service and the available bandwidth resources. The advantage of this approach
is the ability to robustly service large numbers of users, while providing increased quality of service during low usage periods.
Several simple algorithms for implementing this policy are described and evaluated via simulation for a video-on-demand scenario.
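The negotiation step described above can be reduced to a one-function sketch: grant the full requested rate when it fits, degrade down to the stream's minimum acceptable rate when bandwidth is scarce, and reject otherwise. An illustrative Python sketch (the names and the simple degrade-to-remainder rule are assumptions):

```python
def negotiate(requested_rate, min_rate, used, capacity):
    """Return the bit rate granted to a new stream, or None to reject it.
    During low-usage periods the full requested rate fits; near saturation
    the stream is admitted at whatever bandwidth remains, provided that is
    at least its minimum acceptable rate."""
    free = capacity - used
    if free >= requested_rate:
        return requested_rate      # full quality
    if free >= min_rate:
        return free                # degraded but acceptable quality
    return None                    # reject: even minimum quality will not fit
```

This is what lets the server keep serving large numbers of users robustly while handing out extra quality when the link is lightly loaded.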
15.
The effective provision of real-time, packet-based voice conversations over multi-hop wireless ad-hoc networks faces several stringent constraints not found in conventional packet-based networks. Indeed, MANETs (mobile ad-hoc networks) are characterized by mobility of all nodes, a bandwidth-limited channel, an unreliable wireless transmission medium, etc. This environment induces high delay variation and packet loss rates, dramatically impairing the user-experienced quality of conversational services such as VoIP. Such services require the reception of each media unit before its deadline to guarantee a synchronous playback process. This requirement is typically achieved by artificially delaying received packets inside a de-jitter buffer. To enhance perceptual quality, the buffering delay should be adjusted dynamically throughout the conversation. In this work, we describe the design of a playout algorithm tailored for real-time, packet-based voice conversations delivered over multi-hop wireless ad-hoc networks. The designed playout algorithm, denoted MAPA (mobility-aware playout algorithm), adjusts the playout delay according to node mobility, which characterizes mobile ad-hoc networks, and talk-spurts, an intrinsic feature of voice signals. Mobility detection is performed passively, in service, at the receiver, using several metrics gathered at the application layer. The perceptual quality is estimated using an augmented assessment approach relying on the ITU-T E-Model paradigm while including the time-varying impairments observed by users throughout a packet-based voice conversation. Simulation results show that the tailored playout algorithm significantly outperforms conventional playout algorithms, specifically over a MANET with a high degree of mobility.
16.
Predictability of execution has seldom been considered important in the design of Web services middleware. However, with the paradigm shift brought by cloud computing, and with Platforms and Infrastructure offered as services, execution-level predictability is taking on increased importance. Existing Web services middleware is optimised for throughput, accepting requests unconditionally and executing them in a best-effort manner. While this achieves high levels of throughput, it also results in highly unpredictable execution times. This paper presents a generic set of guidelines, algorithms and software engineering techniques that enable service execution to complete within a given deadline. The proposed algorithms accept requests for execution based on their laxity and execute them to meet the requested deadlines. An introduced admission control mechanism yields a large range of laxities, enabling more requests to be scheduled together by staggering their execution. Specialised development libraries and operating systems give them increased control over execution. Two widely used Web services middleware products were enhanced using these techniques, and the two systems are compared with their unmodified versions to measure the predictability gain achieved. Empirical evidence confirms that the enhancements enable these systems to meet more than 90% of deadlines under any type of traffic, while the unmodified versions meet less than 10% of deadlines in high-traffic conditions. The predictability of execution achieved through these techniques would open up new application areas, such as industrial control systems, avionics, robotics and financial trading systems, to the use of Web services as a middleware platform.
17.
The demand for real-time e-commerce data services has been increasing recently. In many e-commerce applications, it is essential to process user requests within their deadlines, i.e., before the market status changes, using fresh data reflecting the current market status. However, current data services are poor at processing user requests in a timely manner using fresh data. To address this problem, we present a differentiated real-time data service framework for e-commerce applications. User requests are classified into several service classes according to their importance, and they receive differentiated real-time performance guarantees in terms of deadline miss ratio. At the same time, a certain data freshness is guaranteed for all transactions that commit within their deadlines. A feedback-based approach is applied to differentiate the deadline miss ratio among service classes. Admission control and adaptable update schemes are applied to manage potential overload. A simulation study, which reflects the e-commerce data semantics, shows that our approach can achieve a significant performance improvement compared to baseline approaches. Our approach can support the specified per-class deadline miss ratios while maintaining the required data freshness, even in the presence of unpredictable workloads and data access patterns, whereas baseline approaches fail.
18.
Zara Hamid, Faisal Bashir Hussain, Jae-Young Pyun. Multimedia Tools and Applications, 2016, 75(14):8195-8216
Wireless Multimedia Sensor Networks (WMSNs) consist of networks of interconnected devices that retrieve multimedia content, such as video, audio, acoustic, and scalar data, from the environment. The goal of these networks is optimized delivery of multimedia content based on quality of service (QoS) parameters, such as delay, jitter and distortion. In multimedia communications each packet has strict playout deadlines, so late-arriving packets and lost packets are treated equally. It is a challenging task to guarantee soft delay deadlines along with energy minimization in resource-constrained, high-data-rate WMSNs. The conventional layered approach does not provide an optimal solution for guaranteeing soft delay deadlines, due to the large amount of overhead involved at each layer. The cross-layer approach is fast gaining popularity due to its ability to exploit the interdependence between different layers to guarantee QoS constraints such as latency, distortion, reliability, throughput and error rate. The paper presents a channel utilization and delay aware routing (CUDAR) protocol for WMSNs. This protocol is based on a cross-layer approach, which provides soft end-to-end delay guarantees along with efficient utilization of resources. Extensive simulation analysis of CUDAR shows that it provides better delay guarantees than existing protocols and consequently reduces jitter and distortion in WMSN communication.
19.
In packet audio applications, packets are buffered at a receiving site and their playout delayed in order to compensate for
variable network delays. In this paper, we consider the problem of adaptively adjusting the playout delay in order to keep
this delay as small as possible, while at the same time avoiding excessive “loss” due to the arrival of packets at the receiver
after their playout time has already passed. The contributions of this paper are twofold. First, given a trace of packet audio
receptions at a receiver, we present efficient algorithms for computing a bound on the achievable performance of any playout delay adjustment algorithm. More precisely, we compute upper and lower bounds (which are shown to be tight for the
range of loss and delay values of interest) on the optimum (minimum) average playout delay for a given number of packet losses
(due to late arrivals) at the receiver for that trace. Second, we present a new adaptive delay adjustment algorithm that tracks
the network delay of recently received packets and efficiently maintains delay percentile information. This information, together
with a “delay spike” detection algorithm based on (but extending) our earlier work, is used to dynamically adjust talkspurt
playout delay. We show that this algorithm outperforms existing delay adjustment algorithms over a number of measured audio
delay traces and performs close to the theoretical optimum over a range of parameter values of interest.
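The core of a percentile-tracking playout policy fits in a few lines: choose the talkspurt playout delay as a high quantile of recently observed network delays, so that only a target fraction of packets arrives too late. An illustrative Python sketch, not the authors' algorithm (it omits the delay-spike detection and the efficient percentile maintenance the paper describes):

```python
def playout_delay(recent_delays, loss_target=0.05):
    """Pick the playout delay as the (1 - loss_target) quantile of the
    recent network delay samples: packets delayed longer than this (an
    expected loss_target fraction) miss their playout time and are lost."""
    if not recent_delays:
        raise ValueError("need at least one delay sample")
    ordered = sorted(recent_delays)
    idx = min(len(ordered) - 1, int((1 - loss_target) * len(ordered)))
    return ordered[idx]
```

Raising loss_target trades late-arrival loss for a lower average playout delay, which is exactly the trade-off the optimal-bound analysis above quantifies.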
20.
Hachem Moussa, Tong Gao, I-Ling Yen, Farokh Bastani, Jun-Jang Jeng. Service Oriented Computing and Applications, 2010, 4(1):17-31
Many application domains are increasingly leveraging service-oriented architecture (SOA) techniques to facilitate rapid system
deployment. Many of these applications are time-critical and, hence, real-time assurance is an essential step in the service
composition process. However, there are gaps in existing service composition techniques for real-time systems. First, admission
control is an essential technique to assure the time bound for service execution, but most of the service composition techniques
for real-time systems do not take admission control into account. A service may be selected for a workflow during the composition
phase, but then during the grounding phase, the concrete service may not be able to admit the workload. Thus, the entire composition
process may have to be repeated. Second, communication time is an important factor in real-time SOA, but most of the existing
works do not consider how to obtain the communication latencies between services during the composition phase. It is clear
that maintaining a full table of communication latencies for all pairs of services is infeasible. Obtaining communication
latencies between candidate services during the composition phase can also be costly, since many candidate services may not
be used for grounding. Thus, some mechanism is needed for estimating the communication latency for composite services. In
this paper, we propose a three-phase composition approach to address the above issues. In this approach, we first use a highly
efficient but moderately accurate algorithm to eliminate most of the candidate compositions based on estimated communication
latencies and assured service response latency. Then, a more accurate timing prediction is performed on a small number of
selected compositions in the second phase based on confirmed admission and actual communication latency. In the third phase,
specific concrete services are selected for grounding, and admissions are actually performed. The approach is scalable and
can effectively achieve service composition for satisfying real-time requirements. Experimental studies show that the three-phase
approach does improve the effectiveness and time for service composition in SOA real-time systems. In order to support the
new composition approach, it is necessary to effectively specify the needed information. In this paper, we also present the
specification model for timing-related information and the extension of OWL-S to support this specification model.