Similar Documents
20 similar documents found (search time: 27 ms)
1.
We present a new definition of optimality intervals for the parametric right-hand side linear programming (parametric RHS LP) problem $\phi(\lambda) = \min\{c^T x \mid Ax = b + \lambda\bar{b},\ x \ge 0\}$. We then show that an optimality interval consists either of a breakpoint or of the open interval between two consecutive breakpoints of the continuous piecewise linear convex function $\phi(\lambda)$. As a consequence, the optimality intervals form a partition of the closed interval $\{\lambda \mid |\phi(\lambda)| < \infty\}$. Based on these optimality intervals, we also introduce an algorithm for solving the parametric RHS LP problem which requires an LP solver as a subroutine. If a polynomial-time LP solver is used to implement this subroutine, we obtain a substantial improvement on the complexity of those parametric RHS LP instances which exhibit degeneracy. When the number of breakpoints of $\phi(\lambda)$ is polynomial in the size of the parametric problem, we show that the latter can be solved in polynomial time.

This research was partially funded by the United States Navy, Office of Naval Research, under Contract N00014-87-K-0202. Its financial support is gratefully acknowledged.
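A minimal numerical sketch of the object studied here, not the paper's algorithm: sampling the optimal-value function $\phi(\lambda)$ with an off-the-shelf LP solver makes its piecewise linear, convex shape and its breakpoints visible (the paper's method locates breakpoints exactly instead of by sampling). All data below are invented.

```python
# Sample phi(lambda) = min { c^T x : A x = b + lambda*b_bar, x >= 0 }.
import numpy as np
from scipy.optimize import linprog  # any LP solver serves as the subroutine

def phi(c, A, b, b_bar, lam):
    """Optimal value of the parametric RHS LP at one lambda (inf if infeasible)."""
    res = linprog(c, A_eq=A, b_eq=b + lam * b_bar,
                  bounds=[(0, None)] * len(c), method="highs")
    return res.fun if res.status == 0 else np.inf

# Tiny invented instance: 2 equality constraints, 3 variables.
c = np.array([1.0, 0.2, 0.5])
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 1.0])
b_bar = np.array([1.0, -1.0])

# The printed values trace a convex piecewise linear function with a
# breakpoint (slope change) at lambda = 0 and an infeasible region beyond 1.
for lam in np.linspace(-1.0, 1.5, 11):
    print(f"lambda={lam:+.2f}  phi={phi(c, A, b, b_bar, lam):.4f}")
```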

2.
Suppose a directed graph has its arcs stored in secondary memory, and we wish to compute its transitive closure, also storing the result in secondary memory. We assume that an amount of main memory capable of holding $s$ values is available, and that $s$ lies between $n$, the number of nodes of the graph, and $e$, the number of arcs. The cost measure we use for algorithms is the I/O complexity of Hong and Kung, where we count 1 every time a value is moved into main memory from secondary memory, or vice versa.

In the dense case, where $e$ is close to $n^2$, we show that I/O equal to $O(n^3/\sqrt{s})$ is sufficient to compute the transitive closure of an $n$-node graph, using main memory of size $s$. Moreover, it is necessary for any algorithm that is "standard," in a sense defined precisely in the paper. Roughly, standard means that paths are constructed only by concatenating arcs and previously discovered paths. For the sparse case, we show that I/O equal to $O(n^2\sqrt{e}/\sqrt{s})$ is sufficient, although the algorithm we propose meets our definition of standard only if the underlying graph is acyclic. We also show that $\Omega(n^2\sqrt{e}/\sqrt{s})$ is necessary for any standard algorithm in the sparse case. That settles the I/O complexity of the sparse/acyclic case for standard algorithms. It is unknown whether this complexity can be achieved in the sparse, cyclic case by a standard algorithm, and it is unknown whether the bound can be beaten by nonstandard algorithms.

We then consider a special kind of standard algorithm, in which paths are constructed only by concatenating arcs and old paths, never by concatenating two old paths. This restriction seems essential if we are to take advantage of sparseness. Unfortunately, we show that almost another factor of $n$ I/O is necessary. That is, there is an algorithm in this class using I/O $O(n^3\sqrt{e}/\sqrt{s})$ for arbitrary sparse graphs, including cyclic ones. Moreover, every algorithm in the restricted class must use $\Omega(n^3\sqrt{e}/(\sqrt{s}\log^3 n))$ I/O on some cyclic graphs.

The work of this author was partially supported by NSF grant IRI-87-22886, IBM contract 476816, Air Force grant AFOSR-88-0266 and a Guggenheim fellowship.
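To make the restricted class in the last paragraph concrete, here is a small in-memory sketch of its path-construction discipline: every new path is an old path extended by one arc, never a concatenation of two old paths. The I/O scheduling that the paper actually analyzes is not modeled; this only illustrates the discipline the lower bound applies to.

```python
# "Restricted standard" transitive closure: concatenate paths with arcs only.
def transitive_closure(n, arcs):
    """arcs: set of (u, v) pairs over nodes 0..n-1; returns reach sets."""
    succ = [set() for _ in range(n)]
    for u, v in arcs:
        succ[u].add(v)
    reach = [set(s) for s in succ]       # all paths discovered so far
    frontier = [set(s) for s in succ]    # paths discovered in the last round
    while any(frontier):
        new_frontier = [set() for _ in range(n)]
        for u in range(n):
            for v in frontier[u]:
                for w in succ[v]:        # extend path u->v by the arc v->w
                    if w not in reach[u]:
                        reach[u].add(w)
                        new_frontier[u].add(w)
        frontier = new_frontier
    return reach

print(transitive_closure(4, {(0, 1), (1, 2), (2, 3)})[0])  # {1, 2, 3}
```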

3.
Through key examples and constructs, exact and approximate, the complexity, computability, and solution of linear programming systems are reexamined in the light of Khachian's new notion of (approximate) solution. Algorithms, basic theorems, and alternate representations are reviewed. It is shown that the Klee-Minty example has never been exponential for (exact) adjacent extreme point algorithms and that the Balinski-Gomory (exact) algorithm continues to be polynomial in cases where (approximate) ellipsoidal centered-cutoff algorithms (Levin, Shor, Khachian, Gacs-Lovasz) are exponential. By model approximation, both the Klee-Minty and the new J. Clausen examples are shown to be trivial (explicitly solvable) interval programming problems. A new notion of computable (approximate) solution is proposed, together with an a priori regularization for linear programming systems. New polyhedral constraint contraction algorithms are proposed for approximate solution, and the relevance of interval programming for good starts or exact solution is brought forth. It is concluded from all this that the imposed problem ignorance of past complexity research is deleterious to research progress on computability or efficiency of computation.

This research was partly supported by Project NR047-071, ONR Contract N00014-80-C-0242, and Project NR047-021, ONR Contract N00014-75-C-0569, with the Center for Cybernetic Studies, The University of Texas at Austin.
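For readers who have not met it, the Klee-Minty example mentioned above is easy to write down. The sketch below builds one common form of the cube (several equivalent scalings exist; this choice of constants is ours) and solves it with SciPy's HiGHS backend. On this family the classic simplex pivoting rule visits all $2^n$ vertices, which is the example's historical point.

```python
# Klee-Minty cube: max sum_j 10^(n-j) x_j
#   s.t. 2 * sum_{j<i} 10^(i-j) x_j + x_i <= 100^(i-1),  x >= 0.
import numpy as np
from scipy.optimize import linprog

def klee_minty(n):
    c = -np.array([10.0 ** (n - j) for j in range(1, n + 1)])  # negate: maximize
    A = np.zeros((n, n))
    for i in range(1, n + 1):
        for j in range(1, i):
            A[i - 1, j - 1] = 2 * 10.0 ** (i - j)
        A[i - 1, i - 1] = 1.0
    b = np.array([100.0 ** (i - 1) for i in range(1, n + 1)])
    return c, A, b

c, A, b = klee_minty(5)
res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 5, method="highs")
print(res.x)      # optimum at (0, ..., 0, 100^(n-1))
print(-res.fun)   # optimal value 100^(n-1) = 1e8 for n = 5
```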

4.
This paper deals with the issue of generating one Pareto optimal point that is guaranteed to be in a desirable part of the Pareto set of a given multicriteria optimization problem. A parameterization of the Pareto set based on the recently developed normal-boundary intersection technique is used to formulate a subproblem whose solution yields the point of maximum bulge, often referred to as the knee of the Pareto curve. This enables the identification of the good region of the Pareto set by solving one nonlinear programming problem, thereby bypassing the need to generate many Pareto points. Further, this representation extends the concept of the knee to problems with more than two objectives. It is also proved that this knee is invariant with respect to the scales of the multiple objective functions.

Generating this knee, however, requires the value of each objective function at the minimizer of every objective function (the pay-off matrix). The paper characterizes situations when approximations to the function values comprising the pay-off matrix suffice to generate a good approximation to the knee. Numerical results are provided to illustrate this point. Further, a weighted-sum minimization problem is developed based on the information in the pay-off matrix, by solving which the knee can be obtained.
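A rough finite-sample illustration of the "maximum bulge" idea, not the paper's single-NLP formulation: normalize each objective using the pay-off matrix, then take the candidate point farthest from the line joining the two individual minimizers. All data below are invented.

```python
import numpy as np

# Candidate Pareto points for two objectives (rows: points; cols: f1, f2).
F = np.array([[0.0, 1.0], [0.05, 0.6], [0.15, 0.3],
              [0.35, 0.12], [0.6, 0.04], [1.0, 0.0]])

# Pay-off matrix: row i holds the objective values at the minimizer of f_i.
payoff = np.array([F[0], F[-1]])
f_min, f_max = payoff.min(axis=0), payoff.max(axis=0)
G = (F - f_min) / (f_max - f_min)   # normalization makes the knee scale-invariant

# Bulge = distance from each normalized point to the segment between the
# normalized individual minima, here the line from (0, 1) to (1, 0).
bulge = (1.0 - G.sum(axis=1)) / np.sqrt(2.0)
knee = F[np.argmax(bulge)]
print("approximate knee:", knee)
```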

5.
This paper aims to provide a basis for renewed talk about use in computing. Four current discourse arenas are described. Different intentions manifest in each arena are linked to failures in translation, with different terminologies crossing disciplinary and national boundaries non-reflexively. Analysis of transnational use discourse dynamics shows much miscommunication. Conflicts like that between the Scandinavian System Development School and the usability approach have less current salience. Renewing our talk about use is essential to a participatory politics of information technology and will lead to clearer perception of the implications of letting new systems become primary media of social interaction.

6.
Given a finite set $E \subset \mathbb{R}^n$, the problem is to find clusters (subsets of similar points in $E$) and, at the same time, the most typical elements of this set. An original mathematical formulation of the problem is given. The proposed algorithm operates on groups of points called samplings (samplings may be thought of as multiple centers, or cores); these samplings adapt and evolve into interesting clusters. Compared with other clustering algorithms, this algorithm requires less machine time and storage. We provide some propositions about nonprobabilistic convergence and a sufficient condition which ensures the decrease of the criterion. Some computational experiments are presented.
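The abstract does not spell out its update rule, so the following is only a k-medoids-style stand-in for the general idea: each cluster is represented by a small "sampling" (core) of points, assignment goes by distance to the nearest core member, and cores evolve toward the most central cluster members. The data and parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
E = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(2, 0.3, (30, 2))])
k, core_size = 2, 3
cores = [E[rng.choice(len(E), core_size, replace=False)] for _ in range(k)]

for _ in range(10):
    # Assign each point to the cluster whose core is nearest.
    d = np.array([np.linalg.norm(E[:, None] - c[None], axis=2).min(1)
                  for c in cores])                    # shape (k, |E|)
    labels = d.argmin(axis=0)
    # Evolve each core: keep the core_size most central cluster members.
    for j in range(k):
        pts = E[labels == j]
        centrality = np.linalg.norm(pts - pts.mean(0), axis=1)
        cores[j] = pts[np.argsort(centrality)[:core_size]]

print([int((labels == j).sum()) for j in range(k)])   # cluster sizes, ~[30, 30]
```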

7.
The number of virtual connections in the nodal space of an ATM network of arbitrary structure and topology is computed by a method based on a new concept: a covering domain having a concrete physical meaning. The method rests on a model of the network's information sources and boundary switches, developed for an ATM transfer network by the entropy approach. Computations involve the solution of systems of linear equations. The optimization model used to compute the number of virtual connections for multi-category traffic in an ATM network component is useful in estimating the resources of nodal equipment and communication channels. The variable parameters of the model are the transmission bands for the different traffic categories.

8.
When $C$ is a concurrency relation on an alphabet $\Sigma$, then $\Sigma^*/\!=_C$ is a free partially commutative monoid. Here we show that it is decidable in polynomial time whether or not there exists a finite canonical rewriting system $R$ on $\Sigma$ such that the congruence $\leftrightarrow^*_R$ generated by $R$ and the congruence $=_C$ induced by $C$ coincide. Further, in case such a system $R$ exists, one such system can be determined in polynomial time.
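A concrete instance for one small, invented concurrency relation where a finite canonical system does exist: rules that order adjacent commuting letters lexicographically rewrite every word to a unique normal form. For other relations no finite canonical system need exist, which is exactly why the paper's polynomial-time decision procedure is interesting.

```python
# The congruence =_C identifies words that differ by swapping adjacent
# commuting letters. For C below, the rule "ca -> ac" is terminating and
# confluent, so bubble-sorting commuting neighbours computes normal forms.
C = {("a", "c"), ("c", "a")}   # a and c commute; b commutes with nothing

def normal_form(word):
    """Sort adjacent commuting letters into lexicographic order."""
    w, changed = list(word), True
    while changed:
        changed = False
        for i in range(len(w) - 1):
            if (w[i], w[i + 1]) in C and w[i] > w[i + 1]:
                w[i], w[i + 1] = w[i + 1], w[i]
                changed = True
    return "".join(w)

assert normal_form("ca") == normal_form("ac") == "ac"
assert normal_form("cba") == "cba"   # b blocks the swap: {cba} is its own class
```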

9.
A Maple procedure is described by means of which an algebraic function given by an equation $f(x, y) = 0$ can be expanded into a fractional power series (Puiseux series) $y(x) = \sum_{n \ge 0} a_n x^{n/m}$ of special ("nice") type. It may be a series with polynomial, rational, or hypergeometric coefficients, or an $m$-sparse or $m$-sparse $m$-hypergeometric series. First, a linear ordinary differential equation with polynomial coefficients $Ly(x) = 0$ is constructed which is satisfied by the given algebraic function. The exponents and a required number of initial coefficients $a_0, \dots, a_k$ are computed by using the Maple algcurves package. By means of the Maple Slode package, a solution of the equation $Ly(x) = 0$ is constructed in the form of a series with nice coefficients whose initial coefficients correspond to the calculated $a_0, \dots, a_k$. The suggested procedure can construct an expansion at a user-given point $x_0$, as well as determine the points where an expansion of such a special type is possible.
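The Maple procedure itself is not reproduced in the abstract, but the underlying object is easy to exhibit. A small sympy sketch (not the algcurves/Slode pipeline) expands one branch of an invented algebraic equation into a series with fractional exponents:

```python
import sympy as sp

x = sp.symbols("x", positive=True)
# One branch of the invented algebraic equation y**2 = x*(1 + x):
#   y = sqrt(x) * sqrt(1 + x)
inner = sp.series(sp.sqrt(1 + x), x, 0, 3).removeO()
puiseux = sp.expand(sp.sqrt(x) * inner)
print(puiseux)   # -x**(5/2)/8 + x**(3/2)/2 + sqrt(x): a Puiseux series in x**(1/2)
```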

10.
A nonlinear stochastic integral equation of the Hammerstein type in the form $x(t; \omega) = h(t, x(t; \omega)) + \int_S k(t, s; \omega)\, f(s, x(s; \omega); \omega)\, d\mu(s)$ is studied, where $t \in S$, a measure space with certain properties; $\omega \in \Omega$, the supporting set of a probability measure space $(\Omega, \mathcal{A}, P)$; and the integral is a Bochner integral. A random solution of the equation is defined to be an almost surely continuous $m$-dimensional vector-valued stochastic process on $S$ which is bounded with probability one for each $t \in S$ and which satisfies the equation almost surely. Several theorems are proved which give conditions such that a unique random solution exists. AMS (MOS) subject classifications (1970): Primary: 60H20, 45G99. Secondary: 60G99.
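A hedged numerical sketch of the equation's deterministic skeleton (dropping the sample-point argument $\omega$): under contraction-type conditions like those the paper imposes, Picard iteration converges to the solution. Kernel, data, and discretization below are all invented.

```python
# Picard iteration for x(t) = h(t, x(t)) + int_S k(t, s) f(s, x(s)) d mu(s)
# on S = [0, 1] with a uniform quadrature rule standing in for mu.
import numpy as np

n = 200
t = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / n)                     # quadrature weights for d mu(s)

h = lambda t, x: 0.1 * np.sin(t)            # invented data
k = lambda t, s: 0.5 * np.exp(-(t - s) ** 2)
f = lambda s, x: np.tanh(x)                 # nonlinearity (Lipschitz, bounded)

K = k(t[:, None], t[None, :])               # n x n kernel matrix
x = np.zeros(n)
for _ in range(50):                         # Picard iteration
    x = h(t, x) + K @ (w * f(t, x))
print(x[:3])                                # converged fixed point (first values)
```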

11.
The language of standard propositional modal logic has one operator ($\Box$ or $\Diamond$) that can be thought of as being determined by the quantifiers $\forall$ or $\exists$, respectively: for example, a formula of the form $\Box\varphi$ is true at a point $s$ just in case all the immediate successors of $s$ verify $\varphi$.

This paper uses a propositional modal language with one operator determined by a generalized quantifier to discuss a simple connection between standard invariance conditions on modal formulas and generalized quantifiers: the combined generalized quantifier conditions of conservativity and extension correspond to the modal condition of invariance under generated submodels, and the modal condition of invariance under bisimulations corresponds to the generalized quantifier being a Boolean combination of $\forall$ and $\exists$.
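A toy sketch of the setup in the first paragraph: the standard box operator is the quantifier "every successor", and a generalized-quantifier operator replaces that with some other condition on the successor set. The Kripke model and the quantifier "most" are invented examples.

```python
succ = {1: {2, 3}, 2: {3}, 3: set()}   # accessibility relation
val = {"p": {2, 3}}                    # worlds where atom p holds

def box(phi_worlds, s):
    """Standard box: all immediate successors of s verify phi."""
    return succ[s] <= phi_worlds

def q_most(phi_worlds, s):
    """A generalized quantifier: most immediate successors verify phi."""
    yes = len(succ[s] & phi_worlds)
    return yes > len(succ[s]) - yes

print(box(val["p"], 1))     # True: both successors of 1 satisfy p
print(q_most(val["p"], 2))  # True: 1 of 1 successors satisfies p
```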

12.
I discuss the attitude of Jewish law sources from the 2nd to 5th centuries to the imprecision of measurement. I review a problem that the Talmud refers to, somewhat obscurely, as "impossible reduction". This problem arises when a legal rule specifies an object by referring to a maximized (or minimized) measurement function, e.g., when a rule applies to the largest part of a divided whole, or to the first incidence that occurs, etc. A problem that is often mentioned is whether there might be hypothetical situations involving more than one maximal (or minimal) value of the relevant measurement and, given such situations, what the pertinent legal rule is. Presumptions of simultaneous occurrences or equally measured values are also a source of embarrassment to modern legal systems, in situations exemplified in the paper, where law determines a preference based on measured values. I contend that the Talmudic sources discussing the problem of impossible reduction were guided by primitive insights compatible with a fuzzy-logic presentation of the inevitable uncertainty involved in measurement. I maintain that fuzzy models of data are compatible with a positivistic epistemology, which refuses to assume any precision in the extra-conscious world that may not be captured by observation and measurement. I therefore propose this view as the preferred interpretation of the Talmudic notion of impossible reduction. Attributing a fuzzy world view to the Talmudic authorities is meant not only to increase our understanding of the Talmud but, in so doing, also to demonstrate that fuzzy notions are entrenched in our practical reasoning. If Talmudic sages did indeed conceive the results of measurements in terms of fuzzy numbers, then equality between the results of measurements had to be more complicated than crisp equations. The problem of impossible reduction could lie in fuzzy sets with an empty core or whose membership functions were only partly congruent. "Reduction is impossible" may thus be reconstructed as "there is no core to the intersection of two measures". I describe Dirichlet maps for fuzzy measurements of distance as a rough partition of the universe, where for any region $A$ there may be a non-empty set $\bar{A} - \underline{A}$ (upper approximation minus lower approximation) where the problem of impossible reduction applies. This model may easily be combined with a probabilistic extension. The possibility of adopting practical decision standards based on $\alpha$-cuts (and therefore applying interval analysis to fuzzy equations) is discussed in this context. I propose to characterize the uncertainty that was presumably grasped by the old sages as U-uncertainty, defined, for a non-empty fuzzy set $A$ on the set of real numbers whose $\alpha$-cuts are intervals of real numbers, as $U(A) = \frac{1}{h(A)} \int_0^{h(A)} \log\,[1 + \mu(A_\alpha)]\, d\alpha$, where $h(A)$ is the largest membership value obtained by any element of $A$ and $\mu(A_\alpha)$ is the measure of the $\alpha$-cut of $A$ defined by the Lebesgue integral of its characteristic function.
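A numerical sketch of the U-uncertainty formula quoted above, evaluated for a triangular fuzzy number (whose $\alpha$-cuts are intervals, as the definition requires). The shape of the fuzzy number and the choice of base-2 logarithm are our own illustrative assumptions.

```python
# U(A) = (1/h(A)) * integral_0^h(A) log[1 + mu(alpha-cut of A)] d(alpha)
import numpy as np

def cut_length(a, peak, b, alpha):
    """Lebesgue measure (length) of the alpha-cut of the triangular
    fuzzy number with support [a, b] and core {peak}."""
    lo = a + alpha * (peak - a)
    hi = b - alpha * (b - peak)
    return max(hi - lo, 0.0)

def u_uncertainty(a, peak, b, steps=10_000):
    h = 1.0                                    # height of a normal fuzzy number
    alphas = np.linspace(0.0, h, steps)
    vals = [np.log2(1.0 + cut_length(a, peak, b, al)) for al in alphas]
    return float(np.mean(vals))                # mean over [0, h] = (1/h) * integral

print(u_uncertainty(0.0, 1.0, 2.0))   # narrower support: smaller uncertainty
print(u_uncertainty(0.0, 1.0, 4.0))   # wider support: larger uncertainty
```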

13.
In order to cope with changing health needs in the community, a holistic approach to AIDS prevention and control, with particular reference to essential quality, was introduced at an educational seminar at Hebei Medical University in China in 1996. Through the learning and research process, we identified three major points in the present study: (1) the importance of cultural norms for the unification of science and technology is identified for the community approach; (2) community care emphasising human quality provides unity in diversity for the educational program; and (3) community control emphasising quality assurance demonstrates effectiveness for program analysis from the viewpoint of human-centred systems.

14.
For the equation $\dot{x}(t) = \lambda x(t)\left(1 - \frac{1}{\tau}\int_{t-\delta-\tau}^{t-\delta} x(u)\, du\right)$, $\lambda > 0$, $\tau > 0$, $\delta > 0$, conditions for the stability of a nonzero stationary solution under small perturbations are determined.
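A hedged numerical probe of the stability question, not the paper's analysis: Euler-integrate the equation from a small perturbation of the stationary solution $x^* = 1$ and watch whether it decays. The parameter names lam, tau, d stand in for the abstract's lost Greek letters, and the values are invented.

```python
# x'(t) = lam * x(t) * (1 - (1/tau) * integral_{t-d-tau}^{t-d} x(u) du)
import numpy as np

lam, tau, d = 0.5, 1.0, 0.2
dt, T = 0.001, 60.0
n_hist = int((d + tau) / dt)              # history points the delay needs
x = np.ones(int(T / dt) + n_hist)
x[:n_hist] = 1.05                         # small initial perturbation of x* = 1

for i in range(n_hist, len(x) - 1):
    lo = i - n_hist                       # index of t - d - tau
    hi = i - int(d / dt)                  # index of t - d
    integral = x[lo:hi].sum() * dt        # integral of x over the window
    x[i + 1] = x[i] + dt * lam * x[i] * (1.0 - integral / tau)

print(abs(x[-1] - 1.0))   # small for stable parameter choices like these
```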

15.
In this paper we take a closer look at one of the rules of the tableau calculus presented by Fitting [4], called the $\delta$-rule. We prove that a modification of this rule, called the $\delta^+$-rule, which uses fewer free variables, is also sound and complete. We examine the relationship between the $\delta^+$-rule and variations of the $\delta$-rule presented by Smullyan [9]. This leads to a second proof of the soundness of the $\delta^+$-rule. An example shows the relevance of this modification for building tableau-based theorem provers.
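A sketch of the difference at stake, under the usual free-variable tableau reading: when an existential ($\delta$) formula is expanded, the $\delta$-rule Skolemizes over all free variables on the branch, while the $\delta^+$-rule uses only the free variables of the $\delta$-formula itself, giving smaller Skolem terms. The variable names below are invented.

```python
def skolem_term(name, variables):
    """Build a Skolem term over the given free variables."""
    return f"{name}({', '.join(variables)})"

branch_free_vars = ["X1", "X2", "X3"]   # free variables on the whole branch
formula_free_vars = ["X2"]              # free variables of (exists y) phi(X2, y)

print(skolem_term("f", branch_free_vars))   # delta-rule:   f(X1, X2, X3)
print(skolem_term("f", formula_free_vars))  # delta+-rule:  f(X2)
```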

16.
The AI methodology of qualitative reasoning furnishes useful tools to scientists and engineers who need to deal with incomplete system knowledge during design, analysis, or diagnosis tasks. Qualitative simulators have a theoretical soundness guarantee: they cannot overlook any behavior of any concrete equation implied by their input. On the other hand, the basic qualitative simulation algorithms have been shown to suffer from the incompleteness problem: they may allow non-solutions of the input equation to appear in their output. The question of whether a simulator with purely qualitative input which never predicts spurious behaviors can ever be achieved by adding new filters to the existing algorithm has remained unanswered. In this paper, we show that if such a sound and complete simulator exists, it will have to be able to handle numerical distinctions with such high precision that it must contain a component that would better be called a quantitative, rather than qualitative, reasoner. This is due to the ability of the pure qualitative format to allow the exact representation of the members of a rich set of numbers.

17.
Different solution sets for the interval linear system $\mathbf{A}x = \mathbf{b}$ are characterized and classified using diagrammatic tools for interval analysis developed recently. In Part I, a thorough analysis of the basic one-dimensional equation $\mathbf{a}x = \mathbf{b}$ is conducted with the help of an appropriate interval space diagram, in which all the needed relations, including equality, are directly representable. The solution sets are obtained with simple diagrammatic constructions and are characterized by quotient sequences of $\mathbf{a}$ and $\mathbf{b}$. A complete classification of all possible solution types is developed in this way, together with various ways of visualizing the structure of the set of these types.
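A toy sketch of why several solution sets exist for the one-dimensional case, not the paper's diagrammatic method: different quantifier patterns over the interval coefficient and right-hand side carve out different sets. Brute-force sampling with invented intervals.

```python
# United set:    { x : exists a in A, exists b in B with a*x = b }
# Tolerable set: { x : for all a in A, a*x lies in B }
import numpy as np

A = (1.0, 2.0)                     # interval coefficient a
B = (2.0, 6.0)                     # interval right-hand side b
xs = np.linspace(-1.0, 8.0, 2000)

def united(x):                     # A*x intersects B
    lo, hi = sorted((A[0] * x, A[1] * x))
    return not (hi < B[0] or lo > B[1])

def tolerable(x):                  # A*x is contained in B
    lo, hi = sorted((A[0] * x, A[1] * x))
    return B[0] <= lo and hi <= B[1]

sol_u = xs[[united(x) for x in xs]]
sol_t = xs[[tolerable(x) for x in xs]]
print("united   :", sol_u.min(), "..", sol_u.max())   # about [1, 6]
print("tolerable:", sol_t.min(), "..", sol_t.max())   # about [2, 3]
```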

18.
The notion of obvious inference in predicate logic is discussed from the viewpoint of proof-checker applications in logic and mathematics education. A class of inferences in predicate logic is defined and it is proposed to identify it with the class of obvious logical inferences. The definition is compared with other approaches. The algorithm for implementing the obviousness decision procedure follows directly from the definition.

19.
Summary. Geffert has shown that each recursively enumerable language $L$ over $\Sigma$ can be expressed in the form $L = \{h(x)^{-1} g(x) \mid x \in \Delta^+\} \cap \Sigma^*$, where $\Delta$ is an alphabet and $g, h$ is a pair of morphisms. Our purpose is to give a simple proof of Geffert's result and then sharpen it into the form where both of the morphisms are nonerasing. In our method we modify constructions used in a representation of recursively enumerable languages in terms of equality sets and in a characterization of simple transducers in terms of morphisms. As direct consequences, we get the undecidability of the Post correspondence problem and various representations of $L$. For instance, $L = \delta(L_0) \cap \Sigma^*$, where $L_0$ is a minimal linear language and $\delta$ is the Dyck reduction cancelling the letter pairs $a\bar{a}$ and $A\bar{A}$.
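A sketch of the basic operation in this representation: for morphisms $g$ and $h$, the word $h(x)^{-1} g(x)$ is $g(x)$ with the prefix $h(x)$ removed, defined only when $h(x)$ is a prefix of $g(x)$. The toy morphisms below are invented; nothing about them encodes a particular language.

```python
g = {"a": "ab", "b": "b"}
h = {"a": "a",  "b": ""}

def apply(m, x):
    """Apply a morphism letter by letter."""
    return "".join(m[ch] for ch in x)

def left_quotient(x):
    """h(x)^{-1} g(x): strip the prefix h(x) from g(x), if it is one."""
    gx, hx = apply(g, x), apply(h, x)
    return gx[len(hx):] if gx.startswith(hx) else None

print(left_quotient("ab"))   # g(ab) = "abb", h(ab) = "a"  ->  "bb"
```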

20.
Consider a binary string $x_0$ of Kolmogorov complexity $K(x_0) \approx n$. The question is whether there exist two strings $x_1$ and $x_2$ such that the approximate equalities $K(x_i \mid x_j) \approx n$ and $K(x_i \mid x_j, x_k) \approx n$ hold for all $0 \le i, j, k \le 2$, $i \ne j$, $j \ne k$, $i \ne k$. We prove that the answer is positive if we require the equalities to hold up to an additive term $O(\log K(x_0))$. It becomes negative in the case of better accuracy, namely $O(\log n)$.
