Similar Documents
20 similar documents found (search time: 31 ms)
1.
Summary: A single multiaccess channel is studied, with the outcome of a transmission being either idle, success, or collision (ternary channel). Packets involved in a collision must be retransmitted, and an efficient way to resolve a collision is known in the literature as the Gallager–Tsybakov–Mikhailov algorithm. Performance analysis of the algorithm is quite hard: it rests on a numerical solution of some recurrence equations and on a numerical evaluation of some series. The obvious drawback of this is a lack of insight into the behavior of the algorithm. We present a new way of looking at the algorithm and discuss some attempts at analyzing its performance. In particular, the expected lengths of a resolution interval and a conflict resolution interval, as well as the throughput of the algorithm, are discussed using an asymptotic approximation and a small input rate approximation.
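For intuition, a minimal Python simulation of basic binary (fair-coin) splitting, a simple member of the Gallager–Tsybakov–Mikhailov family, can estimate the expected conflict resolution interval (CRI) length for n initially colliding packets. The splitting rule and Monte Carlo setup below are illustrative assumptions, not the paper's exact algorithm.

```python
import random

def resolve(n):
    """Slots needed to resolve a collision among n packets with
    fair-coin binary splitting under ternary (idle/success/collision) feedback."""
    if n <= 1:                      # idle or success: one slot settles it
        return 1
    left = sum(random.random() < 0.5 for _ in range(n))
    return 1 + resolve(left) + resolve(n - left)   # collision slot + two subtrees

# Monte Carlo estimate of the expected CRI length
for n in range(2, 6):
    est = sum(resolve(n) for _ in range(20000)) / 20000
    print(f"n={n}: expected CRI length ~ {est:.2f} slots")
```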

2.
The class UP [V] is the class of sets accepted by polynomial-time nondeterministic Turing machines which have at most one accepting path for every input. The complexity of this class closely relates to that of computing inverses of one-way functions, where a one-way function is a one-to-one, length-increasing, and polynomial-time computable function whose inverse cannot be computed within polynomial time. It is known [GS], [K] that there exists a one-way function if and only if P ≠ UP. In this paper the intractability of sets in UP is investigated in terms of polynomial-time reducibility to a sparse set. It is shown that UP has a set that is ≤_m^P-reducible to no sparse set if P ≠ UP. We interpret this structural property in relation to approximation algorithms: it is shown that if P ≠ UP, then UP has a set with no 1-APT approximation and, furthermore, UP has a set that is not ≤_m^P-reducible to any set with a 1-APT approximation. The implication of this result in the study of one-way functions is also discussed. In order to prove the main theorem, we introduce a variation of tree-pruning methods. This paper was written while the author visited the Department of Mathematics, University of California, Santa Barbara. This research was supported in part by the National Science Foundation under Grant CCR-8611980.

3.
Through key examples and constructs, exact and approximate, complexity, computability, and solution of linear programming systems are reexamined in the light of Khachian's new notion of (approximate) solution. Algorithms, basic theorems, and alternate representations are reviewed. It is shown that the Klee–Minty example has never been exponential for (exact) adjacent extreme point algorithms and that the Balinski–Gomory (exact) algorithm continues to be polynomial in cases where (approximate) ellipsoidal centered-cutoff algorithms (Levin, Shor, Khachian, Gacs–Lovasz) are exponential. By model approximation, both the Klee–Minty and the new J. Clausen examples are shown to be trivial (explicitly solvable) interval programming problems. A new notion of computable (approximate) solution is proposed together with an a priori regularization for linear programming systems. New polyhedral constraint contraction algorithms are proposed for approximate solution and the relevance of interval programming for good starts or exact solution is brought forth. It is concluded from all this that the imposed problem ignorance of past complexity research is deleterious to research progress on computability or efficiency of computation. This research was partly supported by Project NR047-071, ONR Contract N00014-80-C-0242, and Project NR047-021, ONR Contract N00014-75-C-0569, with the Center for Cybernetic Studies, The University of Texas at Austin.
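As a concrete companion to the Klee–Minty discussion, the sketch below constructs one standard variant of the Klee–Minty cube in Python and solves it with SciPy. The formulation (max Σ 2^(n−j) x_j subject to 2 Σ_{j<i} 2^(i−j) x_j + x_i ≤ 5^i) is a common textbook variant assumed here, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def klee_minty(n):
    """Klee-Minty cube (one standard variant):
    max sum_j 2^(n-j) x_j  s.t.  2*sum_{j<i} 2^(i-j) x_j + x_i <= 5^i,  x >= 0."""
    c = -np.array([2.0 ** (n - j) for j in range(1, n + 1)])  # linprog minimizes
    A = np.zeros((n, n))
    for i in range(1, n + 1):
        for j in range(1, i):
            A[i - 1, j - 1] = 2.0 ** (i - j + 1)
        A[i - 1, i - 1] = 1.0
    b = np.array([5.0 ** i for i in range(1, n + 1)])
    return linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * n, method="highs")

res = klee_minty(8)
print(res.x)   # optimum is at (0, ..., 0, 5^n) for this variant
```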

4.
I discuss the attitude of Jewish law sources from the 2nd–5th centuries to the imprecision of measurement. I review a problem that the Talmud refers to, somewhat obscurely, as impossible reduction. This problem arises when a legal rule specifies an object by referring to a maximized (or minimized) measurement function, e.g., when a rule applies to the largest part of a divided whole, or to the first incidence that occurs, etc. A problem that is often mentioned is whether there might be hypothetical situations involving more than one maximal (or minimal) value of the relevant measurement and, given such situations, what the pertinent legal rule is. Presumptions of simultaneous occurrences or equally measured values are also a source of embarrassment to modern legal systems, in situations exemplified in the paper, where law determines a preference based on measured values. I contend that the Talmudic sources discussing the problem of impossible reduction were guided by primitive insights compatible with a fuzzy logic presentation of the inevitable uncertainty involved in measurement. I maintain that fuzzy models of data are compatible with a positivistic epistemology, which refuses to assume any precision in the extra-conscious world that may not be captured by observation and measurement. I therefore propose this view as the preferred interpretation of the Talmudic notion of impossible reduction. Attributing a fuzzy world view to the Talmudic authorities is meant not only to increase our understanding of the Talmud but, in so doing, also to demonstrate that fuzzy notions are entrenched in our practical reasoning. If Talmudic sages did indeed conceive the results of measurements in terms of fuzzy numbers, then equality between the results of measurements had to be more complicated than crisp equations. The problem of impossible reduction could lie in fuzzy sets with an empty core or whose membership functions were only partly congruent. "Reduction is impossible" may thus be reconstructed as "there is no core to the intersection of two measures." I describe Dirichlet maps for fuzzy measurements of distance as a rough partition of the universe, where for any region A there may be a non-empty set A̅ − A̲ (upper approximation minus lower approximation) where the problem of impossible reduction applies. This model may easily be combined with a probabilistic extension. The possibility of adopting practical decision standards based on α-cuts (and therefore applying interval analysis to fuzzy equations) is discussed in this context. I propose to characterize the uncertainty that was presumably capped by the old sages as U-uncertainty, defined, for a non-empty fuzzy set A on the set of real numbers whose α-cuts are intervals of real numbers, as U(A) = (1/h(A)) ∫₀^{h(A)} log [1 + μ(A_α)] dα, where h(A) is the largest membership value obtained by any element of A and μ(A_α) is the measure of the α-cut A_α of A, defined by the Lebesgue integral of its characteristic function.
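To make the closing formula concrete, the following small numerical sketch (an illustration of Klir-style U-uncertainty, not part of the source) evaluates U(A) for a triangular fuzzy number, whose α-cuts are intervals, so that μ(A_α) is simply the interval width. The triangular shape and base-2 logarithm are assumptions.

```python
import numpy as np

def u_uncertainty(cut_measure, h=1.0, steps=100_000):
    """U(A) = (1/h(A)) * integral_0^h(A) log2(1 + mu(A_alpha)) d(alpha),
    approximated on a uniform alpha-grid (mean of integrand = integral / h)."""
    alphas = np.linspace(0.0, h, steps)
    return np.log2(1.0 + cut_measure(alphas)).mean()

# Triangular fuzzy number with peak 5 and support [3, 7]:
# A_alpha = [3 + 2*alpha, 7 - 2*alpha], so mu(A_alpha) = 4*(1 - alpha).
print(u_uncertainty(lambda a: 4.0 * (1.0 - a)))   # ~ 1.46 bits of nonspecificity
```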

5.
The design of the database is crucial to the process of designing almost any Information System (IS) and involves two clearly identifiable key concepts: schema and data model, the latter allowing us to define the former. Nevertheless, the term "model" is commonly applied indistinctly to both, the confusion arising from the fact that in Software Engineering (SE), unlike in formal or empirical sciences, the notion of model has a double meaning of which we are not always aware. If we take our idea of model directly from empirical sciences, then the schema of a database would actually be a model, whereas the data model would be a set of tools allowing us to define such a schema. The present paper discusses the meaning of "model" in the area of Software Engineering from a philosophical point of view, an important topic since the confusion that arises directly affects other debates where "model" is a key concept. We also suggest that the need for a philosophical discussion on the concept of data model is a further argument in favour of institutionalizing a new area of knowledge, which could be called the Philosophy of Engineering.

6.
The number of virtual connections in the nodal space of an ATM network of arbitrary structure and topology is computed by a method based on a new concept—a covering domain having a concrete physical meaning. The method is based on a "network information sources – boundary switches" model developed for an ATM transfer network by the entropy approach. Computations involve the solution of systems of linear equations. The optimization model used to compute the number of virtual connections in multi-category traffic in an ATM network component is useful in estimating the resources of nodal equipment and communication channels. The variable parameters of the model are the transmission bands for different traffic categories.

7.
A Maple procedure is described by means of which an algebraic function given by an equation f(x, y) = 0 can be expanded into a fractional power series (Puiseux series) of special (nice) type. It may be a series with polynomial, rational, or hypergeometric coefficients, or an m-sparse or m-sparse m-hypergeometric series. First, a linear ordinary differential equation with polynomial coefficients Ly(x) = 0 is constructed which is satisfied by the given algebraic function. The series coefficients c_n, n ≥ 0, and a required number of initial coefficients c_0, ..., c_k are computed by using the Maple algcurves package. By means of the Maple Slode package, a solution to the equation Ly(x) = 0 is constructed in the form of a series with nice coefficients, the initial coefficients of which correspond to the calculated c_0, ..., c_k. The suggested procedure can construct an expansion at a user-given point x_0, as well as determine points where an expansion of such a special type is possible.
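The paper's procedure is Maple-specific (algcurves, Slode). As a rough illustration of the same expansion task in another system (my sketch, with an assumed example curve), SymPy can produce the fractional-power expansion of each branch of a simple algebraic function:

```python
import sympy as sp

x, y = sp.symbols("x y")
f = y**2 - x*(1 + x)     # assumed example: y(x) defined by f(x, y) = 0

# Expand each branch at x = 0; the branch point yields fractional powers.
for branch in sp.solve(f, y):
    print(sp.series(branch, x, 0, 3))
# e.g.  sqrt(x) + x**(3/2)/2 - x**(5/2)/8 + O(x**3)  (and its negative)
```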

8.
This paper uses Thiele rational interpolation to derive a simple method for computing the Randles–Sevcik function π^{1/2}χ(x), with relative error at most 1.9 × 10⁻⁵ for −∞ < x < ∞. We develop a piecewise approximation method for the numerical computation of π^{1/2}χ(x) on the union (−∞, −10) ∪ [−10, 10] ∪ (10, ∞). This approximation is particularly convenient to employ in electrochemical applications, where four significant digits of accuracy are usually sufficient. Although this paper is primarily concerned with the approximation of the Randles–Sevcik function, some examples are included that illustrate how Thiele rational interpolation can be employed to generate useful approximations to other functions of interest in scientific work.
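Thiele interpolation itself is classical; the self-contained Python sketch below (illustrative code, not the paper's approximant or its coefficients) builds the continued-fraction interpolant from inverse differences and evaluates it:

```python
import math

def inverse_differences(xs, ys):
    """Coefficients a_k = phi_k(x_k) of the Thiele continued fraction
    f(x) ~ a0 + (x-x0)/(a1 + (x-x1)/(a2 + ...)).
    No guard against zero denominators (poles of the inverse differences)."""
    phi = [list(ys)]                 # phi[0][i] = f(x_i)
    a = [ys[0]]
    for k in range(1, len(xs)):
        prev = phi[-1]
        row = [(xs[i] - xs[k - 1]) / (prev[i - k + 1] - prev[0])
               for i in range(k, len(xs))]
        phi.append(row)
        a.append(row[0])
    return a

def thiele_eval(xs, a, x):
    """Evaluate the continued fraction from the bottom up."""
    val = a[-1]
    for k in range(len(a) - 2, -1, -1):
        val = a[k] + (x - xs[k]) / val
    return val

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
a = inverse_differences(xs, [math.atan(v) for v in xs])
print(thiele_eval(xs, a, 0.75), math.atan(0.75))   # close agreement
```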

9.
On Bounding Solutions of Underdetermined Systems
Sufficient conditions for the existence and uniqueness of a solution x* ∈ D (⊂ R^n) of Y(x) = 0, where φ: R^n → R^m (m ≤ n) with φ ∈ C²(D), D ⊂ R^n is an open convex set, and Y = φ′(x)⁺φ(x), are given and compared with similar results due to Zhang, Li and Shen (Reliable Computing 5(1) (1999)). An algorithm for bounding zeros of φ(·) is described, and numerical results for several examples are given.

10.
This paper presents a detailed study of Eurotra Machine Translation engines, namely the mainstream Eurotra software known as the E-Framework, and two unofficial spin-offs – the C,A,T and Relaxed Compositionality translator notations – with regard to how these systems handle hard cases, and in particular their ability to handle combinations of such problems. In the C,A,T translator notation, some cases of complex transfer are "wild", meaning roughly that they interact badly when presented with other complex cases in the same sentence. The effect of this is that each combination of a wild case and another complex case needs ad hoc treatment. The E-Framework is the same as the C,A,T notation in this respect. In general, the E-Framework is equivalent to the C,A,T notation for the task of transfer. The Relaxed Compositionality translator notation is able to handle each wild case (bar one exception) with a single rule, even where it appears in the same sentence as other complex cases.

11.
In this paper we investigate the general problem of discovering recurrent patterns that are embedded in categorical sequences. An important real-world problem of this nature is motif discovery in DNA sequences. There are a number of fundamental aspects of this data mining problem that can make discovery easy or hard—we characterize the difficulty of this problem using an analysis based on the Bayes error rate under a Markov assumption. The Bayes error framework demonstrates why certain patterns are much harder to discover than others. It also explains the role of different parameters such as pattern length and pattern frequency in sequential discovery. We demonstrate how the Bayes error can be used to calibrate existing discovery algorithms, providing a lower bound on achievable performance. We discuss a number of fundamental issues that characterize sequential pattern discovery in this context, present a variety of empirical results to complement and verify the theoretical analysis, and apply our methodology to real-world motif-discovery problems in computational biology.
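The Bayes-error viewpoint can be made tangible with a toy two-class computation (my sketch; the position-weight-matrix numbers and prior are assumptions, and the paper's Markov framework is richer). The minimum achievable error for deciding whether a length-3 window came from the motif or the background is the sum, over all windows, of the losing joint probability:

```python
from itertools import product

alphabet = "ACGT"
bg = {c: 0.25 for c in alphabet}                     # i.i.d. uniform background
motif = [{"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1},   # assumed toy motif model
         {"A": 0.1, "C": 0.7, "G": 0.1, "T": 0.1},
         {"A": 0.1, "C": 0.1, "G": 0.7, "T": 0.1}]
prior = 0.05                                         # motif windows are rare

bayes_error = 0.0
for w in product(alphabet, repeat=len(motif)):
    p_motif, p_bg = prior, 1.0 - prior
    for i, c in enumerate(w):
        p_motif *= motif[i][c]
        p_bg *= bg[c]
    bayes_error += min(p_motif, p_bg)                # Bayes rule errs on the min
print(f"Bayes error rate: {bayes_error:.4f}")
```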

12.
Infinitesimal Perturbation Analysis (IPA) estimators are based on particular couplings of parametric families of discrete event systems, where small changes in the parameter value typically cause small changes in the timing of events and, for finite horizons, the sequence of states visited remains the same. We consider another coupling approach based on the uniformization procedure and a simple generalization of it. In our case any small change in the parameter value causes a change in the state of the system; our parameterization of trajectories keeps them highly synchronized, hence the effect of such changes can be estimated, sometimes efficiently. In this framework, we define three types of performance sensitivity estimators for a broad class of performance measures and with respect to a range of parameter values. Performance measures on finite deterministic horizons are considered and it is shown that they are unbiased under mild conditions. We show that for some systems the derivative estimators can be calculated from a nominal sample path of the system.
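A minimal sketch of the coupling ingredient the abstract relies on (standard uniformization, written here from scratch; the M/M/1 example and rates are assumptions): with all event epochs drawn from a single Poisson(Λ) clock, sample paths at nearby parameter values can share the same clock and uniform draws, staying highly synchronized.

```python
import random

def uniformized_path(lam_arrival, mu_service, Lambda, T, seed=0):
    """M/M/1 queue length via uniformization (requires Lambda >= lam + mu):
    all events occur on a Poisson(Lambda) clock; a common seed couples
    paths across parameter values."""
    rng = random.Random(seed)
    t, q, path = 0.0, 0, []
    while t < T:
        t += rng.expovariate(Lambda)       # shared uniformized clock
        u = rng.random()
        if u < lam_arrival / Lambda:
            q += 1                          # arrival
        elif u < (lam_arrival + mu_service) / Lambda and q > 0:
            q -= 1                          # service completion
        # otherwise: pseudo-event, state unchanged
        path.append((t, q))
    return path

# Coupled paths at two nearby service rates share the clock and uniforms:
p1 = uniformized_path(0.5, 1.00, 2.0, 100.0, seed=42)
p2 = uniformized_path(0.5, 1.05, 2.0, 100.0, seed=42)
```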

13.
This work presents a novel optimization method capable of integrating ordinal optimization (OO) and simulated annealing (SA). A general regression neural network (GRNN) is trained using available data to generate a rough model that approximates the response surface in the feasible domain. A set of good enough candidates is generated by conducting an SA search on this rough model. Only candidates accepted by the SA search are actually tested by evaluating their true objective functions. The GRNN model is then updated using these new data. The procedure is repeated until a specified number of tests have been performed. The method (SAOO+GRNN) is tested on the well-known paper trim loss problem. The SAOO+GRNN approach can substantially reduce the number of function calls and the computing time far below those of a simple ordinal optimization method with the horse race selection rule, as well as straightforward simulated annealing.
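An illustrative Python sketch of the loop's two ingredients, the GRNN surrogate (equivalently, Nadaraya-Watson kernel regression) and the SA search over it. All function names, parameters, and the test function are mine, and the true-objective update step of the full method is omitted for brevity.

```python
import numpy as np

def grnn_predict(X, y, x, sigma=0.5):
    """GRNN = Nadaraya-Watson kernel estimate over evaluated designs (X, y)."""
    d2 = ((X - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * sigma**2))
    return (w @ y) / (w.sum() + 1e-12)

def sa_on_surrogate(X, y, x0, iters=500, temp=1.0, rng=None):
    """Simulated annealing over the cheap GRNN surrogate (minimization).
    Accepted candidates are the only points that get true evaluations."""
    rng = rng or np.random.default_rng(0)
    x, fx = x0, grnn_predict(X, y, x0)
    accepted = [x]
    for k in range(iters):
        cand = x + rng.normal(0, 0.1, size=x.shape)
        fc = grnn_predict(X, y, cand)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / (temp / (1 + k))):
            x, fx = cand, fc
            accepted.append(x)
    return accepted

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(40, 2))     # past evaluated designs
y = (X ** 2).sum(axis=1)                 # their true objective values
cands = sa_on_surrogate(X, y, np.array([1.5, 1.5]))
print(len(cands), cands[-1])             # candidates destined for true testing
```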

14.
In this paper, we consider the linear interval tolerance problem, which consists of finding the largest interval vector included in the tolerable solution set Σ([A], [b]) = {x ∈ R^n | ∀A ∈ [A], ∃b ∈ [b], Ax = b}. We describe two different polyhedrons that represent subsets of all possible interval vectors in Σ([A], [b]), and we provide a new definition of the optimality of an interval vector included in Σ([A], [b]). Finally, we show how the Simplex algorithm can be applied to find an optimal interval vector in Σ([A], [b]).
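The tolerable solution set has a convenient midpoint-radius characterization (standard in interval analysis, not the paper's polyhedral construction): x is tolerable iff |A_c x − b_c| + rad(A)·|x| ≤ rad(b) componentwise. A small Python membership check, with assumed data:

```python
import numpy as np

def is_tolerable(Ac, Arad, bc, brad, x):
    """x is in the tolerable solution set iff every A in [A] maps x into [b]:
    |Ac x - bc| + rad(A) |x| <= rad(b)  (componentwise)."""
    return bool(np.all(np.abs(Ac @ x - bc) + Arad @ np.abs(x) <= brad))

Ac   = np.array([[2.0, 1.0], [1.0, 3.0]])
Arad = np.array([[0.1, 0.1], [0.1, 0.1]])
bc   = np.array([4.0, 5.0])
brad = np.array([1.0, 1.0])
print(is_tolerable(Ac, Arad, bc, brad, np.array([1.4, 1.2])))   # True
```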

15.
Property preserving abstractions for the verification of concurrent systems
We study property preserving transformations for reactive systems. The main idea is the use of simulations parameterized by Galois connections (α, γ) relating the lattices of properties of two systems. We propose and study a notion of preservation of properties expressed by formulas of a logic, by a function mapping sets of states of a system S into sets of states of a system S'. We give results on the preservation of properties expressed in sublanguages of the branching-time μ-calculus when two systems S and S' are related via (α, γ)-simulations. They can be used to verify a property for a system by verifying the same property on a simpler system which is an abstraction of it. We also show under which conditions the abstraction of concurrent systems can be computed from the abstraction of their components. This allows a compositional application of the proposed verification method. This is a revised version of the papers [2] and [16]; the results are fully developed in [28]. This work was partially supported by ESPRIT Basic Research Action REACT. Verimag is a joint laboratory of CNRS, Institut National Polytechnique de Grenoble, Université J. Fourier and Verilog SA associated with IMAG.
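A toy Python sketch of the direction of preservation (mine; the paper's (α, γ)-simulations are far more general): collapsing concrete states through an abstraction function yields existential abstract transitions, and safety properties verified on the abstraction carry back to the concrete system.

```python
# Concrete transition system S and an assumed abstraction function alpha.
trans = {(0, 1), (1, 2), (2, 0), (3, 4), (4, 5)}
alpha = lambda s: "low" if s < 3 else "high"

# Abstract system S': existential lifting of the concrete transitions.
abs_trans = {(alpha(s), alpha(t)) for (s, t) in trans}
print(abs_trans)            # {('low', 'low'), ('high', 'high')}
```

Since "high" is unreachable from "low" in the two-state abstraction, it is unreachable from states 0-2 in the concrete system as well; this is the kind of property transfer the (α, γ) framework formalizes for μ-calculus sublanguages.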

16.
Optimal structural design requiring nonlinear analysis and design sensitivity analysis can be an enormous computational task. It is extremely important to explore ways to reduce the computational effort so that more realistic and larger-scale structures can be optimized. The optimal design process is iterative, requiring response analysis of the structure for each design improvement. A recent study has shown that up to 90 percent of the total computational effort is spent in computing the nonlinear response of the structure during the optimal design process. Thus, efficiency of the optimization process for nonlinear structures can be substantially improved if the numerical effort for analyzing the structure can be reduced. This paper explores the idea of using design sensitivity coefficients (computed at each iteration to improve the design) to predict the displacement response of the structure at a changed design. The iterative procedure for nonlinear analysis of the structure is then started from the predicted response. This optimization procedure is called "mixed", and the original procedure where sensitivity information is not used is called the "conventional" approach. The numerical procedures for the two approaches are developed and implemented. They are compared on some truss-type structures by including both geometric and material nonlinearities. Stress, strain, displacement, and buckling load constraints are imposed. The study shows the mixed method to be numerically stable and efficient.
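A one-variable Python sketch of the "mixed" idea (illustrative only; the model problem and all numbers are assumptions): use the design sensitivity du/dd at the current design to predict the response at the updated design, then start the Newton iterations from that prediction.

```python
def newton(u0, d, P, tol=1e-12):
    """Solve the nonlinear 'spring' equation d*u + u^3 = P by Newton's method."""
    u, iters = u0, 0
    while abs(d*u + u**3 - P) > tol:
        u -= (d*u + u**3 - P) / (d + 3*u**2)
        iters += 1
    return u, iters

P, d0, d1 = 5.0, 2.0, 2.2           # load, current and updated designs
u0, _ = newton(1.0, d0, P)

# Design sensitivity du/dd = -u / (d + 3u^2)  (implicit function theorem)
du_dd = -u0 / (d0 + 3*u0**2)
u_pred = u0 + du_dd * (d1 - d0)     # first-order prediction at design d1

_, it_cold = newton(1.0, d1, P)     # conventional: cold start
_, it_warm = newton(u_pred, d1, P)  # mixed: sensitivity-predicted start
print(it_cold, it_warm)             # warm start typically needs fewer iterations
```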

17.
This article is a continuation of the work reported in [4], introducing unknown-but-bounded disturbances into the problem of control synthesis studied there. The technique presented allows an algorithmization with an appropriate graphic simulation. The original theoretical solution scheme taken here comes from the theory introduced by N.N. Krasovski [1], from the notion of the alternated integral of L.S. Pontriagin [2], and from the funnel equation in the form given in [3]. For alternative treatment of related problems, see also [5], [6], and [7]. The theory is used as a point of application of constructive schemes generated through ellipsoidal techniques developed by the authors. A concise exposition of the latter is the objective of this article. A particular feature is that the ellipsoidal techniques introduced here indicate an exact approximation of the original solutions, based on set-valued calculus, by solutions formulated in terms of ellipsoidal-valued functions only. Editor: J. Skowronski

18.
Recently, Yamashita and Fukushima [11] established an interesting quadratic convergence result for the Levenberg–Marquardt method without the nonsingularity assumption. This paper extends the result of Yamashita and Fukushima by using μ_k = ‖F(x_k)‖^δ, where δ ∈ [1, 2], instead of μ_k = ‖F(x_k)‖² as the Levenberg–Marquardt parameter. If ‖F(x)‖ provides a local error bound for the system of nonlinear equations F(x) = 0, it is shown that the sequence {x_k} generated by the new method converges to a solution quadratically, which is stronger than dist(x_k, X*) → 0 given by Yamashita and Fukushima. Numerical results show that the method performs well for singular problems.
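The modified parameter choice is easy to prototype. Below is a bare-bones Python sketch under my own implementation choices (fixed iteration count, no line search or trust region, and an assumed small test system), not a production Levenberg–Marquardt code:

```python
import numpy as np

def levenberg_marquardt(F, J, x, delta=1.0, iters=50):
    """LM iteration with parameter mu_k = ||F(x_k)||**delta, delta in [1, 2]."""
    for _ in range(iters):
        Fx, Jx = F(x), J(x)
        mu = np.linalg.norm(Fx) ** delta
        # Solve (J^T J + mu I) d = -J^T F for the LM step d.
        A = Jx.T @ Jx + mu * np.eye(len(x))
        d = np.linalg.solve(A, -Jx.T @ Fx)
        x = x + d
    return x

# Small test system (assumed example): solutions at (+/-1, 1).
F = lambda x: np.array([x[0]**2 - x[1], x[1]**2 - 1.0])
J = lambda x: np.array([[2*x[0], -1.0], [0.0, 2*x[1]]])
print(levenberg_marquardt(F, J, np.array([2.0, 2.0])))   # -> approx (1, 1)
```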

19.
The language of standard propositional modal logic has one operator (◇ or □) that can be thought of as being determined by the quantifiers ∃ or ∀, respectively: for example, a formula of the form □φ is true at a point s just in case all the immediate successors of s verify φ. This paper uses a propositional modal language with one operator determined by a generalized quantifier to discuss a simple connection between standard invariance conditions on modal formulas and generalized quantifiers: the combined generalized quantifier conditions of conservativity and extension correspond to the modal condition of invariance under generated submodels, and the modal condition of invariance under bisimulations corresponds to the generalized quantifier being a Boolean combination of ∃ and ∀.
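A small Python sketch (mine) of the semantics in question: □ and ◇ quantify over immediate successors, and a generalized-quantifier modality simply substitutes a different test on the successor set (here "most", an assumed example).

```python
# A finite Kripke model: successor relation and the set where 'p' holds.
succ = {0: {1, 2}, 1: {2}, 2: set()}
p = {1, 2}

def box(phi, s):      # forall-successors modality (determined by the quantifier "all")
    return all(t in phi for t in succ[s])

def diamond(phi, s):  # exists-successor modality (determined by "some")
    return any(t in phi for t in succ[s])

def q_most(phi, s):   # a generalized quantifier: "most successors"
    ts = succ[s]
    return len(ts & phi) * 2 > len(ts) if ts else False

print(box(p, 0), diamond(p, 1), q_most(p, 0))  # True True True
```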

20.
In many language processing tasks, most sentences convey rather simple meanings. Moreover, these tasks have a limited semantic domain that can be properly covered with a simple lexicon and a restricted syntax. Nevertheless, casual users are by no means expected to comply with any kind of formal syntactic restrictions, due to the inherently spontaneous nature of human language. In this work, the use of error-correcting-based learning techniques is proposed to cope with the complex syntactic variability which is generally exhibited by natural language. In our approach, a complex task is modeled in terms of a basic finite-state model, F, and a stochastic error model, E. F should account for the basic (syntactic) structures underlying this task, which convey the meaning. E should account for general vocabulary variations, word disappearance, superfluous words, and so on. Each natural user sentence is thus considered a corrupted version (according to E) of some simple sentence of L(F). Adequate bootstrapping procedures are presented that incrementally improve the structure of F while estimating the probabilities for the operations of E. These techniques have been applied to a practical task of moderately high syntactic variability, and results are presented which show the potential of the proposed approach.
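The error-model idea rests on a standard computation that can be sketched compactly (my code, not the paper's software): the minimum number of edit operations (insertions, deletions, substitutions, each cost 1 here by assumption) turning an input sentence into some sentence of L(F), computed by dynamic programming over the states of a small finite-state model F.

```python
INF = float("inf")

def min_edits_to_language(words, start, finals, trans):
    """Min edit distance from a word sequence to L(F).
    trans: dict state -> list of (word, next_state) arcs.
    dp[q] = min edits to reach state q given the input consumed so far."""
    def relax(dp):
        # Allow traversing arcs without consuming input (insertion, cost 1).
        changed = True
        while changed:
            changed = False
            for q, arcs in trans.items():
                for w, r in arcs:
                    if q in dp and dp[q] + 1 < dp.get(r, INF):
                        dp[r] = dp[q] + 1
                        changed = True
        return dp

    dp = relax({start: 0})
    for w in words:
        nxt = {}
        for q, cost in dp.items():
            nxt[q] = min(nxt.get(q, INF), cost + 1)        # delete input word
            for a, r in trans.get(q, []):
                step = cost + (0 if a == w else 1)          # match or substitute
                nxt[r] = min(nxt.get(r, INF), step)
        dp = relax(nxt)
    return min(dp.get(f, INF) for f in finals)

# Toy model F accepting "turn <dir>" with <dir> in {left, right}:
trans = {0: [("turn", 1)], 1: [("left", 2), ("right", 2)]}
print(min_edits_to_language(["please", "turn", "left"], 0, {2}, trans))  # 1
```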
