20 similar documents found; search time: 109 ms.
1.
Scott D. Stoller, Distributed Computing, 2000, 13(2): 85–98
Summary. This paper proposes a framework for detecting global state predicates in systems of processes with approximately-synchronized
real-time clocks. Timestamps from these clocks are used to define two orderings on events: “definitely occurred before” and
“possibly occurred before”. These orderings lead naturally to definitions of three distinct detection modalities, i.e., three meanings of “predicate held during a computation”: “possibly held”, “definitely held”, and “definitely held in a specific global state”. This paper defines these modalities and gives efficient algorithms for detecting
them. The algorithms are based on algorithms of Garg and Waldecker, Alagar and Venkatesan, Cooper and Marzullo, and Fromentin
and Raynal. Complexity analysis shows that under reasonable assumptions, these real-time-clock-based detection algorithms
are less expensive than detection algorithms based on Lamport's happened-before ordering. Sample applications are given to
illustrate the benefits of this approach.
Received: January 1999 / Accepted: November 1999
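A minimal sketch of how such timestamp orderings can be defined, assuming (as an illustration, not Stoller's exact formulation) that every clock is synchronized to within a bound `EPS` of real time, so an event stamped `t` truly occurred somewhere in `[t - EPS, t + EPS]`:

```python
EPS = 0.005  # assumed clock-synchronization bound, in seconds

def definitely_before(t_e, t_f, eps=EPS):
    """e's latest possible real time precedes f's earliest possible one."""
    return t_e + eps < t_f - eps

def possibly_before(t_e, t_f, eps=EPS):
    """e's earliest possible real time precedes f's latest possible one."""
    return t_e - eps < t_f + eps
```

With a 5 ms bound, events stamped 1 ms apart are only *possibly* ordered, while events stamped 100 ms apart are *definitely* ordered; the detection modalities above follow from quantifying over these two relations.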
2.
Handling message semantics with Generic Broadcast protocols (cited 1 time: 0 self-citations, 1 by others)
Summary. Message ordering is a fundamental abstraction in distributed systems. However, ordering guarantees are usually purely “syntactic,”
that is, message “semantics” is not taken into consideration despite the fact that in several cases semantic information about
messages could be exploited to avoid ordering messages unnecessarily. In this paper we define the Generic Broadcast problem, which orders messages only if needed, based on the semantics of the messages. The semantic information about messages
is introduced by conflict relations. We show that Reliable Broadcast and Atomic Broadcast are special instances of Generic
Broadcast. The paper also presents two algorithms that solve Generic Broadcast.
Received: August 2000 / Accepted: August 2001
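A hedged sketch of the conflict-relation idea (the helper names and message encoding are illustrative, not from the paper): only pairs of conflicting messages must be totally ordered, so commuting messages can skip the cost of an ordering protocol.

```python
# Illustrative conflict relation: two reads commute, but a write
# conflicts with every other operation.
def conflicts(m1, m2):
    return m1["op"] == "write" or m2["op"] == "write"

def needs_ordering(batch):
    """Return the pairs of messages that must be delivered in the
    same order at every process."""
    return [(a, b) for i, a in enumerate(batch)
                   for b in batch[i + 1:] if conflicts(a, b)]

batch = [{"op": "read", "id": 1}, {"op": "read", "id": 2},
         {"op": "write", "id": 3}]
# the two reads commute; only the pairs involving the write need ordering
```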
3.
We consider concurrent probabilistic systems, based on probabilistic automata of Segala & Lynch [55], which allow non-deterministic
choice between probability distributions. These systems can be decomposed into a collection of “computation trees” which arise
by resolving the non-deterministic, but not probabilistic, choices. The presence of non-determinism means that certain liveness
properties cannot be established unless fairness is assumed. We introduce a probabilistic branching-time logic PBTL, based on the logic TPCTL of Hansson [30] and the logics PCTL [55] and pCTL [14]. The formulas of the logic express properties such as “every request is eventually granted with probability at least
p”. We give three interpretations for PBTL on concurrent probabilistic processes: the first is standard, while in the remaining two interpretations the branching time
quantifiers are taken to range over a certain kind of fair computation trees. We then present a model checking algorithm for
verifying whether a concurrent probabilistic process satisfies a PBTL formula assuming fairness constraints. We also propose adaptations of existing model checking algorithms for pCTL
[4, 14] to obtain procedures for PBTL
under fairness constraints. The techniques developed in this paper have applications in automatic verification of randomized
distributed systems.
Received: June 1997 / Accepted: May 1998
4.
Laura M. Haas, Michael J. Carey, Miron Livny, Amit Shukla, The VLDB Journal: The International Journal on Very Large Data Bases, 1997, 6(3): 241–256
In this paper, we re-examine the results of prior work on methods for computing ad hoc joins. We develop a detailed cost model for predicting join algorithm performance, and we use the model to develop cost formulas
for the major ad hoc join methods found in the relational database literature. We show that various pieces of “common wisdom” about join algorithm
performance fail to hold up when analyzed carefully, and we use our detailed cost model to derive optimal buffer allocation schemes for each of the join methods examined here. We show that optimizing their buffer allocations
can lead to large performance improvements, e.g., as much as a 400% improvement in some cases. We also validate our cost model's
predictions by measuring an actual implementation of each join algorithm considered. The results of this work should be directly
useful to implementors of relational query optimizers and query processing systems.
Edited by M. Adiba. Received May 1993 / Accepted April 1996
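As a flavor of such cost formulas, here are two standard textbook I/O cost estimates (not the paper's detailed model): block nested-loops join versus a partitioning (Grace-style) hash join, counting block I/Os.

```python
import math

# b_r, b_s: relation sizes in blocks; m: memory buffer blocks available.
def block_nested_loops(b_r, b_s, m):
    # read R once; for each memory-load of R, scan S in full
    return b_r + math.ceil(b_r / (m - 1)) * b_s

def grace_hash_join(b_r, b_s):
    # partition both inputs (read + write), then read both back
    return 3 * (b_r + b_s)

# with a small buffer pool, the hash join wins by a wide margin
print(block_nested_loops(1000, 5000, 12))  # 456000
print(grace_hash_join(1000, 5000))         # 18000
```

The paper's point is precisely that such back-of-the-envelope formulas hide buffer-allocation effects that a detailed model exposes.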
5.
Praveen Seshadri, The VLDB Journal: The International Journal on Very Large Data Bases, 1998, 7(3): 130–140
The explosion in complex multimedia content makes it crucial for database systems to support such data efficiently. This
paper argues that the “blackbox” ADTs used in current object-relational systems inhibit their performance, thereby limiting
their use in emerging applications. Instead, the next generation of object-relational database systems should be based on
enhanced abstract data type (E-ADT) technology. An E-ADT can expose the semantics of its methods to the database system, thereby permitting advanced query optimizations. Fundamental architectural changes
are required to build a database system with E-ADTs; the added functionality should not compromise the modularity of data
types and the extensibility of the type system. The implementation issues have been explored through the development of E-ADTs
in Predator. Initial performance results demonstrate order-of-magnitude performance improvements.
Received January 1, 1998 / Accepted May 27, 1998
6.
Xiaodong Wen, Theodore D. Huffmire, Helen H. Hu, Adam Finkelstein, Multimedia Systems, 1999, 7(5): 350–358
We present several algorithms suitable for analysis of broadcast video. First, we show how wavelet analysis of frames of
video can be used to detect transitions between shots in a video stream, thereby dividing the stream into segments. Next we
describe how each segment can be inserted into a video database using an indexing scheme that involves a wavelet-based “signature.”
Finally, we show that during a subsequent broadcast of a similar or identical video clip, the segment can be found in the
database by quickly searching for the relevant signature. The method is robust against noise and typical variations in the
video stream, even global changes in brightness that can fool histogram-based techniques. In the paper, we experimentally compare our shot-transition mechanism to a color-histogram implementation, and also evaluate the effectiveness of our database-searching scheme. Our algorithms are very efficient and run in real time on a desktop computer. We describe how this technology could be employed to construct a “smart VCR” capable of alerting the viewer to the beginning of a specific program or identifying
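An illustrative sketch of the signature idea (not the authors' actual algorithm; the function names and threshold are assumptions): use the coarse coefficients of a Haar-style decomposition as a per-frame signature, and flag a shot transition when consecutive signatures differ strongly.

```python
import numpy as np

def haar_signature(frame, levels=3):
    """Coarse Haar approximation of a grayscale frame as a signature."""
    a = frame.astype(float)
    for _ in range(levels):  # average 2x2 blocks repeatedly
        a = (a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2]) / 4
    return a.ravel()

def shot_transitions(frames, threshold=20.0):
    """Indices where consecutive frame signatures differ strongly."""
    sigs = [haar_signature(f) for f in frames]
    return [i for i in range(1, len(sigs))
            if np.abs(sigs[i] - sigs[i - 1]).mean() > threshold]
```

Because the signature keeps only coarse averages, small per-pixel noise cancels out, which is one intuition behind the robustness claim above.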
7.
ExploreNet is an experimental environment for creating and delivering networked “virtual worlds.” This system's style of user
interaction was inspired by the concept of a “habitat” as first articulated in the LucasFilm's Habitat system. Players enter
and interact in a habitat via their animated alter egos, called “avatars.” Habitats may be created for many purposes, including
social interaction, entertainment and education. Our focus has been to facilitate the creation of habitats in which virtual
communities of learners and mentors interact. This paper presents details of the current ExploreNet system, including its
user interface, the means it provides for creating complex behaviors, details of its implementation, the outcomes of several
experiments using this system, and our plans for its natural migration to a World Wide Web-based system.
8.
We present a shared memory algorithm that allows a set of f+1 processes to wait-free “simulate” a larger system of n processes, that may also exhibit up to f stopping failures.
Applying this simulation algorithm to the k-set-agreement problem enables conversion of an arbitrary k-fault-tolerant n-process solution for the k-set-agreement problem into a wait-free (k+1)-process solution for the same problem. Since the (k+1)-process k-set-agreement problem has been shown to have no wait-free solution [5,18,26], this transformation implies that there is no
k-fault-tolerant solution to the n-process k-set-agreement problem, for any n.
More generally, the algorithm satisfies the requirements of a fault-tolerant distributed simulation. The distributed simulation implements a notion of fault-tolerant reducibility between decision problems. This paper defines these notions and gives examples of their application to fundamental distributed
computing problems.
The algorithm is presented and verified in terms of I/O automata. The presentation has a great deal of interesting modularity,
expressed by I/O automaton composition and both forward and backward simulation relations. Composition is used to include
a safe agreement module as a subroutine. Forward and backward simulation relations are used to view the algorithm as implementing a multi-try snapshot strategy.
The main algorithm works in snapshot shared memory systems; a simple modification of the algorithm that works in read/write
shared memory systems is also presented.
Received: February 2001 / Accepted: February 2001
9.
Carlo Combi, Giuseppe Pozzi, The VLDB Journal: The International Journal on Very Large Data Bases, 2001, 9(4): 294–311
The granularity of given temporal information is the level of abstraction at which information is expressed. Different units of measure allow
one to represent different granularities. Indeterminacy is often present in temporal information given at different granularities:
temporal indeterminacy is related to incomplete knowledge of when the considered fact happened. Focusing on temporal databases, different granularities
and indeterminacy have to be considered in expressing valid time, i.e., the time at which the information is true in the modeled
reality. In this paper, we propose HMAP (a transliteration of an ancient Greek poetic word meaning “day”), a temporal data model extending the capability
of defining valid times with different granularities and/or with indeterminacy. In HMAP, absolute intervals are explicitly represented by their start, end, and duration: in this way, we can represent valid times such as “in December 1998 for five hours”, “from July 1995, for 15 days”, and “from March
1997 to October 15, 1997, between 6 and 6:30 p.m.”. HMAP is based on a three-valued logic, for managing uncertainty in temporal relationships. Formulas involving different temporal
relationships between intervals, instants, and durations can be defined, allowing one to query the database with different
granularities, not necessarily related to that of data. In this paper, we also discuss the complexity of algorithms, allowing
us to evaluate HMAP formulas, and show that the formulas can be expressed as constraint networks falling into the class of simple temporal problems,
which can be solved in polynomial time.
Received 6 August 1998 / Accepted 13 July 2000 / Published online: 13 February 2001
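A minimal sketch of three-valued reasoning over indeterminate instants (illustrative only, not HMAP's actual semantics): an instant known only to lie within `[lo, hi]` compares to another as true, false, or unknown.

```python
TRUE, FALSE, UNKNOWN = "true", "false", "unknown"

def before(a, b):
    """Three-valued 'a occurred before b' for indeterminate instants,
    each given as a (lo, hi) bound on when it actually occurred."""
    a_lo, a_hi = a
    b_lo, b_hi = b
    if a_hi < b_lo:
        return TRUE      # a certainly occurred before b could
    if b_hi <= a_lo:
        return FALSE     # b certainly occurred no later than a could
    return UNKNOWN       # the indeterminacy ranges overlap

print(before((1, 2), (3, 4)))   # true
print(before((3, 4), (1, 2)))   # false
print(before((1, 3), (2, 4)))   # unknown
```

The “unknown” outcomes are exactly what forces a three-valued logic when querying at a granularity coarser or finer than that of the stored data.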
10.
In packet audio applications, packets are buffered at a receiving site and their playout delayed in order to compensate for
variable network delays. In this paper, we consider the problem of adaptively adjusting the playout delay in order to keep
this delay as small as possible, while at the same time avoiding excessive “loss” due to the arrival of packets at the receiver
after their playout time has already passed. The contributions of this paper are twofold. First, given a trace of packet audio
receptions at a receiver, we present efficient algorithms for computing a bound on the achievable performance of any playout delay adjustment algorithm. More precisely, we compute upper and lower bounds (which are shown to be tight for the
range of loss and delay values of interest) on the optimum (minimum) average playout delay for a given number of packet losses
(due to late arrivals) at the receiver for that trace. Second, we present a new adaptive delay adjustment algorithm that tracks
the network delay of recently received packets and efficiently maintains delay percentile information. This information, together
with a “delay spike” detection algorithm based on (but extending) our earlier work, is used to dynamically adjust talkspurt
playout delay. We show that this algorithm outperforms existing delay adjustment algorithms over a number of measured audio
delay traces and performs close to the theoretical optimum over a range of parameter values of interest.
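For context, the classic autoregressive playout estimator that adaptive schemes like this one improve on (a baseline, not the paper's percentile-tracking algorithm) smooths the observed network delay and its variation, then pads the playout point by a safety margin:

```python
ALPHA = 0.998002  # smoothing factor used in the early playout literature

def update(d_hat, v_hat, n_i, alpha=ALPHA):
    """Fold one observed network delay n_i into the running estimates."""
    d_hat = alpha * d_hat + (1 - alpha) * n_i
    v_hat = alpha * v_hat + (1 - alpha) * abs(d_hat - n_i)
    return d_hat, v_hat

def playout_delay(d_hat, v_hat, k=4.0):
    return d_hat + k * v_hat  # the k*v margin absorbs delay jitter

d, v = 100.0, 10.0
for n in [100, 102, 98, 150]:  # delays in ms; a late "spike" at the end
    d, v = update(d, v, n)
# playout_delay(d, v) now exceeds the smoothed delay by roughly 4 sigma
```

The paper's percentile-tracking approach replaces these heuristic moving averages with explicit order statistics over recent delays.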
11.
Design and analysis of a video-on-demand server (cited 6 times: 0 self-citations, 6 by others)
The availability of high-speed networks, fast computers and improved storage technology is stimulating interest in the development
of video on-demand services that provide facilities similar to a video cassette player (VCP). In this paper, we present a
design of a video-on-demand (VOD) server, capable of supporting a large number of video requests with complete functionality
of a remote control (as used in VCPs), for each request. In the proposed design, we have used an interleaved storage method
with constrained allocation of video and audio blocks on the disk to provide continuous retrieval. Our storage scheme interleaves
a movie with itself (while satisfying the constraints on video and audio block allocation). This approach minimizes the starting delay and the
buffer requirement at the user end, while ensuring a jitter-free display for every request. In order to minimize the starting
delay and to support more non-concurrent requests, we have proposed the use of multiple disks for the same movie. Since a
disk needs to hold only one movie, an array of inexpensive disks can be used, which reduces the overall cost of the proposed
system. A scheme supported by our disk storage method to provide all the functions of a remote control such as “fast-forwarding”,
“rewinding” (with play “on” or “off”), “pause” and “play” has also been discussed. This scheme handles a user request independent
of others and satisfies it without degrading the quality of service to other users. The server design presented in this paper
achieves the multiple goals of high disk utilization, global buffer optimization, cost-effectiveness and high-quality service
to the users.
12.
S. Arbanowski, S. van der Meer, S. Steglich, R. Popescu-Zeletin, Personal and Ubiquitous Computing, 2001, 5(1): 34–37
In the last few years, a variety of concepts for service integration and corresponding systems have been developed. On the
one hand, they aim for the interworking and integration of classical telecommunications and data communications services.
On the other, they are focusing on universal service access from a variety of end-user systems. Many of the technical problems,
resulting from service integration and service personalisation, have been solved in recent years. However, all these
systems are driven by the concept of providing several technologies to users while preserving the peculiarities of each service.
Looking at humans’ communication behaviour and their communication space, it is obvious that human beings interact habitually
in a set of contexts with their environment. The individual information preferences and needs, persons to interact with, and
the set of devices controlled by each individual define their personal communication space. Following this view, a new approach
is to build communication systems not on the basis of specific technologies, but on the analysis of the individual communication
spaces. The result is a communication system adapted to the demands of each individual (I-centric). The communication system
will act on behalf of users’ demands, reflecting recent actions to enable profiling and self-adaptation to contexts and situations.
In this paper, we introduce I-centric Communications, an approach to design communication systems that adapt themselves to
the individual communication space and the individual's environment and situation. In this context, “I” stands for the individual, and “centric”
means adaptable to the individual's requirements and a certain user environment.
13.
A common need in machine vision is to compute the 3-D rigid body transformation that aligns two sets of points for which correspondence
is known. A comparative analysis is presented here of four popular and efficient algorithms, each of which computes the translational
and rotational components of the transform in closed form, as the solution to a least squares formulation of the problem.
They differ in terms of the transformation representation used and the mathematical derivation of the solution, using respectively
singular value decomposition or eigensystem computation based on the standard representation, and the eigensystem analysis of matrices derived from unit and dual quaternion forms of the transform. This
comparison presents both qualitative and quantitative results of several experiments designed to determine (1) the accuracy
and robustness of each algorithm in the presence of different levels of noise, (2) the stability with respect to degenerate
data sets, and (3) relative computation time of each approach under different conditions. The results indicate that under
“ideal” data conditions (no noise) certain distinctions in accuracy and stability can be seen. But for “typical, real-world”
noise levels, there is no difference in the robustness of the final solutions (contrary to certain previously published results).
Efficiency, in terms of execution time, is found to be highly dependent on the computer system setup.
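A minimal sketch of the SVD-based variant among the four compared algorithms, assuming known correspondences: the least-squares rotation comes from the cross-covariance of the centered point sets, with a determinant check to rule out reflections.

```python
import numpy as np

def align(P, Q):
    """P, Q: (n, 3) corresponding point sets.
    Returns R, t minimizing sum ||R p_i + t - q_i||^2."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

P = np.random.rand(10, 3)
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
Q = P @ R_true.T + np.array([1.0, 2.0, 3.0])
R, t = align(P, Q)  # recovers R_true and the translation exactly
```

On noise-free data all four closed-form methods recover the same transform; the paper's experiments probe what happens when noise and degeneracy are added.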
14.
Flip Korn, Alexandros Labrinidis, Yannis Kotidis, Christos Faloutsos, The VLDB Journal: The International Journal on Very Large Data Bases, 2000, 8(3-4): 254–266
Association Rule Mining algorithms operate on a data matrix (e.g., customers × products) to derive association rules [AIS93b, SA96]. We propose a new paradigm, namely, Ratio Rules, which are quantifiable in that we can measure the “goodness” of a set of discovered rules. We also propose the “guessing
error” as a measure of the “goodness”, that is, the root-mean-square error of the reconstructed values of the cells of the
given matrix, when we pretend that they are unknown. Another contribution is a novel method to guess missing/hidden values
from the Ratio Rules that our method derives. For example, if somebody bought $10 of milk and $3 of bread, our rules can “guess”
the amount spent on butter. Thus, unlike association rules, Ratio Rules can perform a variety of important tasks such as forecasting,
answering “what-if” scenarios, detecting outliers, and visualizing the data. Moreover, we show that we can compute Ratio Rules
in a single pass over the data set with small memory requirements (a few small matrices), in contrast to association rule mining methods
which require multiple passes and/or large memory. Experiments on several real data sets (e.g., basketball and baseball statistics,
biological data) demonstrate that the proposed method: (a) leads to rules that make sense; (b) can find large itemsets in
binary matrices, even in the presence of noise; and (c) consistently achieves a “guessing error” up to 5 times smaller than
that of straightforward column averages.
Received: March 15, 1999 / Accepted: November 1, 1999
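Since Ratio Rules arise from a spectral (PCA-like) decomposition of the data matrix, the “guessing error” idea can be sketched as the RMS error of a rank-k reconstruction (an illustration of the measure, not the paper's exact procedure):

```python
import numpy as np

def guessing_error(X, k=1):
    """RMS error of reconstructing X from its top k singular vectors."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_hat = (U[:, :k] * s[:k]) @ Vt[:k]     # rank-k reconstruction
    return np.sqrt(np.mean((X - X_hat) ** 2))

# hypothetical customers x products spend: column ratios are nearly
# constant, so one "ratio rule" (singular vector) reconstructs it well
X = np.array([[10.0, 3.0, 2.0],
              [20.0, 6.1, 4.0],
              [ 5.0, 1.5, 1.0]])
# guessing_error(X, k=1) is tiny; column averages would do far worse
```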
15.
As multimedia applications spread widely, it is crucial for programming and design support systems to handle “time” in multimedia
documents effectively and flexibly. This paper presents a set of interactive system support tools for designing and maintaining
the temporal behavior of multimedia documents. The tool set provides mechanisms for anomaly detection, temporal query processing,
and interactive scheduling. It is based on a fast incremental constraint solver we have developed, which can be adapted by
any constraint-based system. The incremental constraint solver provides immediate feedback to the user, supporting a highly
interactive design process. Combined with existing optimal layout generation mechanisms proposed in the literature, our tools
effectively utilize the flexibility provided by constraint-based systems.
16.
Fast joins using join indices (cited 1 time: 0 self-citations, 1 by others)
Zhe Li, Kenneth A. Ross, The VLDB Journal: The International Journal on Very Large Data Bases, 1999, 8(1): 1–24
Two new algorithms, “Jive join” and “Slam join,” are proposed for computing the join of two relations using a join index.
The algorithms are duals: Jive join range-partitions input relation tuple ids and then processes each partition, while Slam
join forms ordered runs of input relation tuple ids and then merges the results. Both algorithms make a single sequential
pass through each input relation, in addition to one pass through the join index and two passes through a temporary file,
whose size is half that of the join index. Both algorithms require only that the number of blocks in main memory is of the
order of the square root of the number of blocks in the smaller relation. By storing intermediate and final join results in
a vertically partitioned fashion, our algorithms need to manipulate less data in memory at a given time than other algorithms.
The algorithms are resistant to data skew and adaptive to memory fluctuations. Selection conditions can be incorporated into
the algorithms. Using a detailed cost model, the algorithms are analyzed and compared with competing algorithms. For large
input relations, our algorithms perform significantly better than Valduriez's algorithm, the TID join algorithm, and hash
join algorithms. An experimental study is also conducted to validate the analytical results and to demonstrate the performance
characteristics of each algorithm in practice.
Received July 21, 1997 / Accepted June 8, 1998
17.
Detection of global predicates: Techniques and their limitations (cited 1 time: 0 self-citations, 1 by others)
Summary. We show that the problem of predicate detection in distributed systems is NP-complete. In the past, efficient algorithms
have been developed for special classes of predicates such as stable predicates, observer-independent predicates, and conjunctive
predicates. We introduce a class of predicates, semi-linear predicates, which properly contains all of the above classes. We first discuss the stable, observer-independent, and semi-linear classes
of predicates and their relationships with each other. We also study closure properties of these classes with respect to conjunction
and disjunction. Finally, we discuss algorithms for detection of predicates in these classes. We provide a non-deterministic
detection algorithm for each class of predicate. We show that each class can be equivalently characterized by the degree of
non-determinism present in the algorithm. Stable predicates are defined as those that can be detected by an algorithm with
the most non-determinism. All other classes can be derived by appropriately constraining the non-determinism in this algorithm.
18.
19.
Henry S. Baird, Allison L. Coates, Richard J. Fateman, International Journal on Document Analysis and Recognition, 2003, 5(2-3): 158–163
Abstract. We exploit the gap in ability between human and machine vision systems to craft a family of automatic challenges that tell
human and machine users apart via graphical interfaces including Internet browsers. Turing proposed [Tur50] a method whereby
human judges might validate “artificial intelligence” by failing to distinguish between human and machine interlocutors. Stimulated
by the “chat room problem” posed by Udi Manber of Yahoo!, and influenced by the CAPTCHA project [BAL00] of Manuel Blum et
al. of Carnegie-Mellon Univ., we propose a variant of the Turing test using pessimal print: that is, low-quality images of machine-printed text synthesized pseudo-randomly over certain ranges of words, typefaces,
and image degradations. We show experimentally that judicious choice of these ranges can ensure that the images are legible
to human readers but illegible to several of the best present-day optical character recognition (OCR) machines. Our approach
is motivated by a decade of research on performance evaluation of OCR machines [RJN96,RNN99] and on quantitative stochastic
models of document image quality [Bai92,Kan96]. The slow pace of evolution of OCR and other species of machine vision over
many decades [NS96,Pav00] suggests that pessimal print will defy automated attack for many years. Applications include “bot”
barriers and database rationing.
Received: February 14, 2002 / Accepted: March 28, 2002
An expanded version of: A.L. Coates, H.S. Baird, R.J. Fateman (2001) Pessimal Print: a reverse Turing Test. In: Proc. 6th Int. Conf. on Document Analysis and Recognition, Seattle, Wash., USA, September 10–13, pp. 1154–1158
Correspondence to: H. S. Baird
20.
Restoring subsampled color images (cited 1 time: 0 self-citations, 1 by others)
In some capturing devices, such as digital cameras, there is only one color sensor at each pixel. Usually, 50% of the pixels
have only a green sensor, 25% only a red sensor, and 25% only a blue sensor. The problem is then to restore the two missing
colors at each pixel – this is called “demosaicing”, because the original samples are usually arranged in a mosaic pattern.
In this short paper, a few demosaicing algorithms are developed and compared. They all incorporate a notion of “smoothness
in chroma space”, by imposing conditions not only on the behavior of each color channel separately, but also on the correlation
between the three channels.
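A minimal bilinear baseline for an assumed RGGB Bayer layout (channel-by-channel interpolation only; the inter-channel chroma-smoothness term the paper's algorithms add is omitted, and all function names here are illustrative):

```python
import numpy as np

def neighbor_fill(channel, mask):
    """Fill unsampled pixels with the mean of sampled 8-neighbors."""
    h, w = channel.shape
    out = channel.copy()
    for y in range(h):
        for x in range(w):
            if mask[y, x]:
                continue
            ys, xs = slice(max(y - 1, 0), y + 2), slice(max(x - 1, 0), x + 2)
            vals = channel[ys, xs][mask[ys, xs]]
            out[y, x] = vals.mean() if vals.size else 0.0
    return out

def demosaic_rggb(bayer):
    """Restore full RGB from a single-sensor RGGB mosaic."""
    h, w = bayer.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r_mask = (yy % 2 == 0) & (xx % 2 == 0)   # red at even rows/cols
    b_mask = (yy % 2 == 1) & (xx % 2 == 1)   # blue at odd rows/cols
    g_mask = ~(r_mask | b_mask)              # green at the other 50%
    rgb = np.zeros((h, w, 3))
    for c, m in enumerate([r_mask, g_mask, b_mask]):
        rgb[:, :, c] = neighbor_fill(np.where(m, bayer, 0.0), m)
    return rgb
```

Interpolating each channel in isolation, as here, is exactly what produces the color fringing that chroma-correlation conditions are designed to suppress.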