Similar Documents
20 similar documents found (search time: 203 ms)
1.
Analytic variations on quadtrees
Quadtrees constitute a hierarchical data structure that permits fast access to multidimensional data. This paper presents an analysis of the expected cost of various types of searches in quadtrees: fully specified and partial-match queries. The data model assumes random points with independently drawn coordinate values. The analysis leads to a class of full-history divide-and-conquer recurrences. These recurrences are solved using generating functions, either exactly for dimension d = 2, or asymptotically for higher dimensions. The exact solutions involve hypergeometric functions. The general asymptotic solutions rely on the classification of singularities of linear differential equations with analytic coefficients, and on singularity analysis techniques. These methods are applicable to the asymptotic solution of a wide range of linear recurrences, as may occur in particular in the analysis of multidimensional searching problems. The work of Philippe Flajolet was supported in part by the Basic Research Action of the E.C. under Contract No. 3075 (Project ALCOM). A preliminary version of this paper was presented as an extended abstract at the Second Annual Symposium on Discrete Algorithms [13].
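The hierarchical decomposition and partial-match search described above can be sketched as a minimal 2-D point quadtree. The class and function names are illustrative; the pruning rule (descend only into quadrants consistent with the specified coordinates) is the standard one, not the paper's analysis machinery.

```python
class QuadtreeNode:
    """Point quadtree node for 2-D data: each stored point splits the
    plane into four quadrants, one child slot per quadrant."""
    def __init__(self, point):
        self.point = point
        self.children = {}  # quadrant index 0..3 -> child node

    def quadrant(self, q):
        # Quadrant of point q relative to this node's point:
        # high bit = x-side, low bit = y-side.
        return (q[0] >= self.point[0]) * 2 + (q[1] >= self.point[1])

def insert(root, point):
    if root is None:
        return QuadtreeNode(point)
    q = root.quadrant(point)
    root.children[q] = insert(root.children.get(q), point)
    return root

def partial_match(node, query, out):
    """Collect points matching a partially specified query, e.g.
    (x0, None) fixes the first coordinate and leaves the second free."""
    if node is None:
        return
    if all(c is None or c == p for c, p in zip(query, node.point)):
        out.append(node.point)
    for q in range(4):
        # Each specified coordinate prunes half of the four subtrees.
        if query[0] is not None and (query[0] >= node.point[0]) != bool(q // 2):
            continue
        if query[1] is not None and (query[1] >= node.point[1]) != bool(q % 2):
            continue
        partial_match(node.children.get(q), query, out)
```

A fully specified query visits a single root-to-leaf path, while a partial-match query explores two of the four subtrees at each node, which is what drives the fractional exponents in the expected-cost analysis.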

2.
This article is a continuation of the work reported in [4], introducing unknown but bounded disturbances into the problem of control synthesis studied there. The technique presented allows an algorithmization with appropriate graphic simulation. The original theoretical solution scheme taken here comes from the theory introduced by N.N. Krasovski [1], from the notion of the alternated integral of L.S. Pontriagin [2], and from the funnel equation in the form given in [3]. For alternative treatments of related problems, see also [5], [6], and [7]. The theory serves as a point of application for constructive schemes generated through ellipsoidal techniques developed by the authors; a concise exposition of the latter is the objective of this article. A particular feature is that the ellipsoidal techniques introduced here yield an exact approximation of the original solutions, based on set-valued calculus, by solutions formulated in terms of ellipsoidal-valued functions only. Editor: J. Skowronski

3.
For the interval system of equations defined by [x] = [A][x] + [b] with ρ(|[A]|) ≤ 1, we derive a necessary and sufficient criterion for the existence and uniqueness of solutions [x]. Generalizing former results, we allow the absolute value |[A]| of [A] to be reducible.
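A small numerical illustration of why the spectral radius of |A| matters, sketched here for the point (non-interval) case as a simplifying assumption: the fixed-point iteration x ← Ax + b converges to the unique solution whenever ρ(|A|) < 1, since |A| then acts as a contraction.

```python
def iterate_solution(A, b, tol=1e-12, max_iter=10_000):
    """Fixed-point iteration x <- A x + b.  Converges to the unique
    fixed point whenever the spectral radius of |A| is below 1 (the
    point analogue of the criterion in the abstract)."""
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        nxt = [sum(A[i][j] * x[j] for j in range(n)) + b[i]
               for i in range(n)]
        if max(abs(nxt[i] - x[i]) for i in range(n)) < tol:
            return nxt
        x = nxt
    raise RuntimeError("no convergence; check the spectral radius of |A|")

# Example: A has row sums well below 1, so the iteration contracts.
A = [[0.5, 0.1],
     [0.2, 0.3]]
b = [1.0, 1.0]
x = iterate_solution(A, b)
```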

4.
We study the problem of deterministically predicting boolean values by combining the boolean predictions of several experts. Previous on-line algorithms for this problem predict with the weighted majority of the experts' predictions. These algorithms give each expert an exponential weight β^m, where β is a constant in [0, 1) and m is the number of mistakes made by the expert in the past. We show that it is better to use sums of binomials as weights. In particular, we present a deterministic algorithm using binomial weights that has a better worst-case mistake bound than the best deterministic algorithm using exponential weights. The binomial weights arise naturally from a version-space argument. We also show how both the exponential and the binomial weighting schemes can be used to make prediction algorithms robust against noise.
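The exponential-weights baseline that the abstract improves upon, weighted majority with weights β^m, can be sketched as follows; the binomial-weight scheme itself is more involved and is not reproduced here.

```python
def weighted_majority(expert_preds, outcomes, beta=0.5):
    """Deterministic weighted-majority forecaster with exponential
    weights beta**mistakes.  expert_preds[i][t] is expert i's boolean
    prediction at trial t; returns the learner's mistake count."""
    weights = [1.0] * len(expert_preds)
    mistakes = 0
    for t, outcome in enumerate(outcomes):
        vote1 = sum(w for w, preds in zip(weights, expert_preds)
                    if preds[t] == 1)
        vote0 = sum(w for w, preds in zip(weights, expert_preds)
                    if preds[t] == 0)
        guess = 1 if vote1 >= vote0 else 0
        if guess != outcome:
            mistakes += 1
        # Multiplicatively penalize every expert that erred this round.
        weights = [w * (beta if preds[t] != outcome else 1.0)
                   for w, preds in zip(weights, expert_preds)]
    return mistakes
```

With one perfect expert in the pool, the erring experts' weights decay geometrically, so the learner's mistakes stay within a logarithmic factor of the best expert's.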

5.
Vovk, V. Machine Learning 35(3):247–282, 1999.
In this paper we continue the study of games of prediction with expert advice with uncountably many experts. A convenient interpretation of such games is to construe the pool of experts as one stochastic predictor, who chooses one of the experts in the pool at random according to the prior distribution on the experts and then replicates the (deterministic) predictions of the chosen expert. We notice that if the stochastic predictor's total loss is at most L with probability at least p, then the learner's loss can be bounded by cL + a ln(1/p) for the usual constants c and a. This interpretation is used to revamp known results, and to obtain new ones, on tracking the best expert. It is also applied to merging overconfident experts and to fitting polynomials to data.

6.
Fractal objects, which by definition are objects that have scale-invariant shapes and fractional scaling dimensions (fractal dimensions) whose magnitudes relate to the complexity of the objects, are ubiquitous in nature. In particular, many biological structures and systems have fractal properties and may therefore be studied and modelled using fractal geometry. In the box-counting algorithm, one of several approaches to calculating the fractal dimension, one determines, for several values of r, the number of boxes N(r) with side length r needed to completely cover the studied object. If N(r) and r are found to be related by the power law N(r) ∝ r^(−D), where D is the scaling dimension, and if D is a non-integer, then the object is fractal and D is the fractal dimension. In certain circumstances, in which one may need to calculate the fractal dimension of a three-dimensional fractal from a two-dimensional projection (e.g. an X-ray), a new mathematical relationship may be utilized to obtain the actual dimension D; it reads D = −log[1 − (1 − R^(2−D_p))^R]/log R + 3, where D_p is the fractal dimension of the two-dimensional planar projection of the object and R is the box size used in calculating the box-counting dimension. Fractal dimension calculations have been found to be particularly useful as quantitative indices of the degree of coronary vascularity and of heart interbeat-interval variability. Fractal growth models such as diffusion-limited aggregation (DLA) can be used to model artery growth. To sum up, fractal geometry is very useful for studying and modelling certain scale-invariant biological structures or systems that may not be easily described with Euclidean shapes.
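The box-counting procedure described above is easy to sketch: count the occupied boxes N(r) at several scales r and fit the slope of log N(r) against log(1/r). This is a plain 2-D illustration of the counting step, not the projection formula; the function name is illustrative.

```python
import math

def box_counting_dimension(points, sizes):
    """Estimate the box-counting dimension of a 2-D point set: for each
    box size r, count the distinct grid cells the points occupy, then
    fit the slope of log N(r) versus log(1/r) by least squares."""
    logs = []
    for r in sizes:
        occupied = {(int(x / r), int(y / r)) for x, y in points}
        logs.append((math.log(1.0 / r), math.log(len(occupied))))
    n = len(logs)
    mx = sum(u for u, _ in logs) / n
    my = sum(v for _, v in logs) / n
    # Least-squares slope: the estimated dimension D.
    return (sum((u - mx) * (v - my) for u, v in logs)
            / sum((u - mx) ** 2 for u, _ in logs))

# A straight line segment should come out with dimension close to 1.
line = [(i / 1000, i / 1000) for i in range(1000)]
d = box_counting_dimension(line, [0.1, 0.05, 0.025, 0.0125])
```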

7.
Summary: Given a sequence of positive weights, W = w_1 ≥ … ≥ w_n > 0, there is a Huffman tree T↑ ("T-up") that minimizes the following functions over the set of all trees for W having minimal expected length: max{d(w_i)}; Σ d(w_i); Σ f(d(w_i)) w_i (here d(w_i) denotes the distance of a leaf of weight w_i to the root, and f is a function defined on the nonnegative integers with the property that g(x) = f(x + 1) − f(x) is monotone increasing). Minimizing the first two functions was first done by Schwartz [5]. In the case of codes, where W is a sequence of probabilities, this implies that the codes based on T↑ have all their absolute central moments minimal; in particular, they are the least-variance codes also described by Kou [3]. Furthermore, there exists a Huffman tree T↓ ("T-down") that maximizes the functions considered above. However, if g(x) is monotone decreasing, then T↑ and T↓ respectively maximize and minimize Σ f(d(w_i)) w_i over the set of all trees for W having minimal expected length. In addition, we derive a number of interesting results about the distribution of labels within Huffman trees. By suitable modifications of the usual Huffman tree construction (see [1]), T↑ and T↓ can also be constructed in time O(n log n).
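The usual heap-based Huffman construction the abstract builds on can be sketched as follows. Tracking leaf depths via explicit leaf lists is a simplification for clarity (the O(n log n) bound refers to the heap operations), and the tie-breaking here is arbitrary, so this produces some Huffman tree, not specifically T-up or T-down.

```python
import heapq

def huffman_depths(weights):
    """Build a Huffman tree with a binary heap and return the depth of
    each leaf, in the order the weights were given.  A leaf's depth is
    the number of merges its subtree takes part in."""
    heap = [(w, i, [i]) for i, w in enumerate(weights)]  # (weight, tiebreak, leaves)
    heapq.heapify(heap)
    depths = [0] * len(weights)
    counter = len(weights)
    while len(heap) > 1:
        w1, _, l1 = heapq.heappop(heap)
        w2, _, l2 = heapq.heappop(heap)
        for leaf in l1 + l2:
            depths[leaf] += 1  # these leaves move one level down
        heapq.heappush(heap, (w1 + w2, counter, l1 + l2))
        counter += 1
    return depths
```

For weights (1, 1, 2, 4) the two lightest leaves end up deepest, and the depths satisfy the Kraft equality Σ 2^(−d_i) = 1, as any full binary code tree must.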

8.
Some recognition problems are either too complex or too ambiguous to be expressed as a simple pattern-matching problem using a sequence or regular-expression pattern. In these cases, a richer environment is needed to describe the patterns and the recognition techniques used to perform the recognition. Some researchers have turned to artificial-intelligence techniques and multistep matching approaches for the problems of gene recognition [5], [7], [18], protein structure recognition [13], and on-line character recognition [6]. This paper presents a class of problems which involve finding matches to patterns of patterns, or super-patterns, given solutions to the lower-level patterns. The expressiveness of this problem class rivals that of traditional artificial-intelligence characterizations, and yet polynomial-time algorithms are described for each problem in the class. This work was supported in part by the National Institutes of Health under Grant R01 LM04960 and by the Aspen Center for Physics.

9.
Maximal word functions occur in data retrieval applications and have connections with ranking problems, which in turn were first investigated in relation to data compression [21]. By the maximal word function of a language L ⊆ Σ*, we mean the problem of finding, on input x, the lexicographically largest word belonging to L that is smaller than or equal to x. In this paper we present a parallel algorithm for computing maximal word functions for languages recognized by one-way nondeterministic auxiliary pushdown automata (and hence for the class of context-free languages). This paper is a continuation of a stream of research focusing on the problem of identifying properties other than membership which are easily computable for certain classes of languages. For a survey, see [24].

10.
We consider the problem of determining which of a set of experts has tastes most similar to a given user by asking the user questions about his likes and dislikes. We describe a simple algorithm for generating queries for a theoretical model of this problem. We show that the algorithm requires at most opt(F)(ln(|F|/opt(F)) + 1) + 1 queries to find the correct expert, where opt(F) is the optimal worst-case bound on the number of queries for learning arbitrary elements of the set of experts F. The algorithm runs in time polynomial in |F| and |X| (where X is the domain), and we prove that no polynomial-time algorithm can have a significantly better bound on the number of queries unless all problems in NP have n^O(log log n)-time algorithms. We also study a more general case where the user ratings come from a finite set Y and there is an integer-valued loss function on Y that is used to measure the distance between ratings. Assuming that the loss function is a metric and that there is an expert within a distance θ of the user, we give a polynomial-time algorithm that is guaranteed to find such an expert after at most 2 opt(F, θ) ln |F| + 2(θ + 1)(1 + deg(F, θ)) queries, where deg(F, θ) is the largest number of experts in F that are within a distance 2θ of any f ∈ F.
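For the exact-match case, a greedy halving strategy conveys the idea behind such query generation: repeatedly ask about the item whose answer best splits the experts still consistent with the user's replies. The function and variable names are illustrative, and this sketch assumes the user matches one expert exactly and that any two remaining experts disagree on some item.

```python
def find_expert(experts, domain, user):
    """Greedy query strategy.  `experts` maps names to rating functions
    over `domain`; `user` is the hidden target's rating function.
    Returns the identified expert and the number of queries asked."""
    candidates = set(experts)
    queries = 0
    while len(candidates) > 1:
        def worst(x):
            # Size of the larger half if we ask about item x.
            yes = sum(1 for e in candidates if experts[e](x) == 1)
            return max(yes, len(candidates) - yes)
        x = min(domain, key=worst)   # most balanced split
        answer = user(x)
        candidates = {e for e in candidates if experts[e](x) == answer}
        queries += 1
    return candidates.pop(), queries

# Threshold experts: expert e_k likes item x iff x >= k.
experts = {f"e{k}": (lambda x, k=k: int(x >= k)) for k in range(4)}
```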

11.
A task-specific video recording effort at a trauma centre was studied. Task analysis methodology and an expert review of videos were used to assess cognitive aspects of work and performance. Data collection included questionnaires, video reviews that used a template approach to task analysis, and audio recordings of the experts' think-aloud performance assessments. Among 48 video records of airway management, performance deficiencies were identified, including communication failures, omission of preparatory and confirmatory checks, and lack of patient vital-signs monitoring, which lessened the margin of patient safety and caused a life-threatening critical incident. The analysis of aggregate data from multiple such videos of airway management allowed detection of the performance problems and development of an equipment design change and a task/communication training algorithm. The performance improvement and the lessons learned from using video as data in a medical domain are described. Targeted video task analysis with expert review may be generalisable to other medical procedures and to non-medical domains.

12.
A major source of three-dimensional (3D) information about objects in the world is available to the observer in the form of time-varying imagery. Relative motion between textured objects and the observer generates a time-varying optic array at the image, from which image motion of contours, edge fragments, and feature points can be extracted. These dynamic features serve to sample the underlying image flow field. New, closed-form solutions are given for the structure and motion of planar and curved surface patches from monocular image flow and its derivatives through second order. Both planar and curved surface solutions require, at most, the solution of a cubic equation. The analytic solution for curved surface patches combines the transformation of Longuet-Higgins and Prazdny [25] with the planar surface solution of Subbarao and Waxman [43]. New insights regarding uniqueness of solutions also emerge. Thus, the structure-motion coincidence of Waxman and Ullman [54] is interpreted as the duality of tangent plane solutions. The multiplicity of transformation angles (up to three) is related to the sign of the Gaussian curvature of the surface patch. Ovoid patches (i.e., bowls) are shown to possess a unique transform angle, though they are subject to the local structure-motion coincidence; thus, ovoid patches almost always yield a unique 3D interpretation. In general, ambiguous solutions can be resolved by requiring continuity of the solution over time. The support of the Defense Advanced Research Projects Agency and the U.S. Army Night Vision Laboratory under Contract DAAK70-83-K-0018 (DARPA Order 3206) is gratefully acknowledged.

13.
When human experts express their ideas and thoughts, they basically employ human words. That is, experts with extensive professional experience are capable of making assessments using their intuition and experience. Measurements and interpretations of characteristics are taken with uncertainty, because most measured characteristics, analytical results, and field data can be interpreted only intuitively by experts. In such cases, judgments may be expressed by experts using linguistic terms. The difficulty of directly measuring certain characteristics makes their estimation imprecise, and such measurements may be dealt with using fuzzy set theory. As Professor L. A. Zadeh has stressed the importance of computing with words, fuzzy sets can take a central role in handling words [12, 13]. From this perspective, the fuzzy logic approach is often thought of as the main, and only, useful tool for dealing with human words. In this paper we present another approach to handling human words instead of fuzzy reasoning: fuzzy regression analysis also enables us to treat computing with words. In order to process linguistic variables, we define vocabulary translation and vocabulary matching, which convert linguistic expressions into membership functions on the interval [0, 1] on the basis of a linguistic dictionary, and vice versa. We employ fuzzy regression analysis to model the assessment process of experts, from linguistic variables describing features and characteristics of an object to the linguistic expression of the total assessment. The presented process consists of four portions: (1) vocabulary translation, (2) estimation, (3) vocabulary matching, and (4) dictionary.
We employed fuzzy quantification theory type 2 for estimating the total assessment in terms of linguistic structural attributes obtained from an expert. This research was supported in part by a Grant-in-Aid for Scientific Research (C-2), Grant No. 11680459, of the Ministry of Education, Science, Sports and Culture.

14.
15.
In this paper we propose a long-step logarithmic barrier function method for convex quadratic programming with linear equality constraints. After a reduction of the barrier parameter, a series of long steps along projected Newton directions are taken until the iterate is in the vicinity of the center associated with the current value of the barrier parameter. We prove that the total number of iterations is O(√n L) or O(nL), depending on how the barrier parameter is updated. On leave from Eötvös University, Budapest, and partially supported by OTKA 2116.

16.
We show that if a complexity class C is closed downward under polynomial-time majority truth-table reductions (≤_mtt^p), then practically every other polynomial closure property it enjoys is inherited by the corresponding bounded two-sided-error class BP[C]. For instance, the Arthur-Merlin game class AM [B1] enjoys practically every closure property of NP. Our main lemma shows that, for any relativizable class D which meets two fairly transparent technical conditions, we have C ⊆ BP[C] ⊆ BP[D^C]. Among our applications, we simplify the proof by Toda [To1], [To2] that the polynomial hierarchy PH is contained in BP[⊕P]. We also show that, relative to a random oracle R, PH^R is properly contained in ⊕P^R. The first author was supported in part by NSF Grant CCR-9011248 and the second author in part by NSF Grant CCR-89011154.

17.
On a Connection between Kernel PCA and Metric Multidimensional Scaling
In this note we show that the kernel PCA algorithm of Schölkopf, Smola, and Müller (Neural Computation, 10, 1299–1319) can be interpreted as a form of metric multidimensional scaling (MDS) when the kernel function k(x, y) is isotropic, i.e. it depends only on ‖x − y‖. This leads to a metric MDS algorithm where the desired configuration of points is found via the solution of an eigenproblem rather than through the iterative optimization of the stress objective function. The question of kernel choice is also discussed.
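The PCA/MDS connection rests on the classical identity that double-centering a Euclidean squared-distance matrix recovers the centered Gram (kernel) matrix, whose eigendecomposition both methods then share. A sketch for the linear kernel (an assumption made here for the illustration; the note's result concerns isotropic kernels):

```python
def double_center(D2):
    """Classical MDS step: convert a squared-distance matrix D2 into an
    inner-product matrix B = -1/2 * H D2 H, H the centering matrix."""
    n = len(D2)
    row = [sum(r) / n for r in D2]
    col = [sum(D2[i][j] for i in range(n)) / n for j in range(n)]
    tot = sum(row) / n
    return [[-0.5 * (D2[i][j] - row[i] - col[j] + tot) for j in range(n)]
            for i in range(n)]

def centered_gram(X):
    """Centered Gram matrix H K H for the linear kernel k(x, y) = x . y."""
    n, d = len(X), len(X[0])
    mean = [sum(x[j] for x in X) / n for j in range(d)]
    Xc = [[x[j] - mean[j] for j in range(d)] for x in X]
    return [[sum(Xc[i][k] * Xc[j][k] for k in range(d)) for j in range(n)]
            for i in range(n)]
```

For any Euclidean data set the two matrices coincide entry by entry, which is why the MDS configuration can be read off an eigenproblem instead of optimizing a stress function.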

18.
There is a disparity between the multitude of apparently successful expert system prototypes and the scarcity of expert systems in real everyday use. Modern tools make it deceptively easy to build reasonable prototypes, but these prototypes are seldom subjected to serious evaluation. Instead the development team confronts its product with a set of cases, and the primary evaluation criterion is the percentage of correct answers: we are faced with a 95% syndrome. Other aspects related to the use of the system are almost ignored. There is still a long way to go from a promising prototype to a final system. It is maintained in the article that a useful test must be performed by future users in a situation that is as realistic as possible. If this is not done, claims of usefulness cannot be justified. It is also stated that prototyping does not make traditional analysis and design obsolete, although the contents of these activities will change. In order to discuss the effects of using the systems, a distinction between expert systems as media, tools, and experts is proposed.

19.
In-network aggregation has been proposed as one method for reducing energy consumption in sensor networks. In this paper, we explore two ideas for further reducing energy consumption in the context of in-network aggregation. The first is to influence the construction of the routing trees for sensor networks with the goal of reducing the size of transmitted data. To this end, we propose a group-aware network configuration method that clusters along the same path sensor nodes belonging to the same group. The second idea is to impose a hierarchy of output filters on the sensor network with the goal of both reducing the size of transmitted data and minimizing the number of transmitted messages. More specifically, we propose a framework that uses temporal coherency tolerances in conjunction with in-network aggregation to save energy at the sensor nodes while maintaining a specified quality of data. These tolerances are based on user preferences, or can be dictated by the network in cases where it cannot support the current tolerance level. Our framework, called TiNA, works on top of existing in-network aggregation schemes. We experimentally evaluate our proposed schemes in the context of existing in-network aggregation schemes, presenting results that measure energy consumption, response time, and quality of data for Group-By queries. Overall, our schemes provide significant energy savings with respect to communication and a negligible drop in quality of data. Received: 22 October 2003, Accepted: 16 April 2004, Published online: 12 November 2004. Edited by: J. Gehrke and J. Hellerstein. This work is supported in part by NSF award ANI-0123705. The first author is supported in part by the Andrew Mellon Predoctoral Fellowship. This paper expands on the material presented in two workshops [31,2].
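A temporal-coherency filter of the kind described can be sketched in a few lines: a node transmits a reading only when it drifts from the last transmitted value by more than the tolerance, trading a bounded error in the reported data for fewer messages. The function name and the scalar-reading model are illustrative assumptions, not TiNA's actual interface.

```python
def coherency_filter(readings, tolerance):
    """Suppress transmissions whose value is within `tolerance` of the
    last value actually sent.  Every suppressed reading is guaranteed
    to be within `tolerance` of the consumer's current estimate."""
    sent = []
    last = None
    for value in readings:
        if last is None or abs(value - last) > tolerance:
            sent.append(value)  # transmit and update the reference
            last = value
    return sent
```

With a tolerance of 0.5, a slowly drifting sensor stream collapses to a handful of messages, which is where the communication energy savings come from.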

20.