Similar Literature
Found 20 similar documents (search time: 31 ms)
1.
An atomic representation of a Herbrand model (ARM) is a finite set of (not necessarily ground) atoms over a given Herbrand universe. Each ARM represents a possibly infinite Herbrand interpretation. This concept has emerged independently in different branches of computer science as a natural and useful generalization of the concept of finite Herbrand interpretation. It was shown that several recursively decidable problems on finite Herbrand models (or interpretations) remain decidable on ARMs. The following problems are essential when working with ARMs: deciding the equivalence of two ARMs, deciding subsumption between ARMs, and evaluating clauses over ARMs. These problems were shown to be decidable, but their computational complexity has remained obscure so far. The previously published decision algorithms require exponential space. In this paper, we prove that all mentioned problems are coNP-complete.
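For intuition, here is a small illustrative ARM (our example, not one from the paper), written over the Herbrand universe generated by the constant 0 and the unary function symbol s:

```latex
% Illustrative ARM (our example): a finite set of atoms, one of them non-ground.
\[
  M \;=\; \{\, p(0),\ p(s(s(x))) \,\}
\]
% M represents the infinite Herbrand interpretation consisting of all ground
% instances of its atoms: p(0) together with p(s(s(t))) for every ground term t.
```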

2.
Constructive Hypervolume Modeling
This paper deals with modeling point sets with attributes. A point set in a geometric space of an arbitrary dimension is a geometric model of a real/abstract object or process under consideration. An attribute is a mathematical model of an object property of arbitrary nature (material, photometric, physical, statistical, etc.) defined at any point of the point set. We provide a brief survey of different modeling techniques related to point sets with attributes. It spans such different areas as solid modeling, heterogeneous objects modeling, scalar fields or “implicit surface” modeling, and volume graphics. Then, on the basis of this survey, we formulate requirements for a general model of hypervolumes (multidimensional point sets with multiple attributes). A general hypervolume model and its components such as objects, operations, and relations are introduced and discussed. A function representation (FRep) is used as the basic model for the point set geometry and for attributes, both represented independently using real-valued scalar functions of several variables. Each function defining the geometry or an attribute is evaluated at the given point by a procedure traversing a constructive tree structure with primitives in the leaves and operations in the nodes of the tree. This reflects the constructive nature of the symmetric approach to modeling geometry and associated attributes in multidimensional space. To demonstrate a particular application of the proposed general model, we consider in detail the problem of texturing, introduce a model of constructive hypervolume texture, and then discuss its implementation, as well as the special modeling language we used for modeling hypervolume objects.
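The following Python fragment is a minimal sketch (all names are ours, not the authors' system) of the constructive-tree evaluation described above: primitives in the leaves, operations in the nodes, and both geometry and an attribute evaluated as real-valued functions of a point. min/max stand in here for the R-functions typically used in FRep.

```python
# A minimal sketch (illustrative names, not the authors' modeling language) of
# evaluating an FRep-style constructive tree: primitives in the leaves,
# operations in the nodes, everything a real-valued function of a point.
import math

def sphere(cx, cy, cz, r):
    # f(p) >= 0 inside the primitive (FRep sign convention)
    return lambda p: r * r - ((p[0] - cx) ** 2 + (p[1] - cy) ** 2 + (p[2] - cz) ** 2)

def union(f, g):
    # max is a simple (non-smooth) stand-in for an R-function disjunction
    return lambda p: max(f(p), g(p))

def intersection(f, g):
    return lambda p: min(f(p), g(p))

# Geometry: union of two spheres, built as a constructive tree
geometry = union(sphere(0, 0, 0, 1.0), sphere(1.2, 0, 0, 0.8))

# Attribute: a scalar "texture" defined over the same space,
# evaluated independently of the geometry function
def stripes(p):
    return 0.5 * (1.0 + math.sin(8.0 * p[0]))

point = (0.5, 0.1, 0.0)
print("inside:", geometry(point) >= 0.0, "attribute:", stripes(point))
```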

3.
We consider finite hypergraphs with hyperedges defined as sets of vertices of unbounded cardinality. Each such hypergraph has a unique modular decomposition, which is a tree, the nodes of which correspond to certain subhypergraphs (induced by certain sets of vertices called strong modules) of the considered hypergraph. One can define this decomposition by monadic second-order (MS) logical formulas. Such a hypergraph is convex if the vertices are linearly ordered in such a way that the hyperedges form intervals. Our main result says that the unique linear order witnessing the convexity of a prime hypergraph (i.e., of one, the modular decomposition of which is trivial) can be defined in MS logic. As a consequence, we obtain that if a set of bipartite graphs that correspond (in the usual way) to convex hypergraphs has a decidable monadic second-order theory (which means that one can decide whether a given MS formula is satisfied in some graph of the set) then it has bounded clique-width. This yields a new case of validity of a conjecture which is still open.
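As a small illustration of convexity (our example, not from the paper):

```latex
% Illustrative convex hypergraph: under the order v_1 < v_2 < v_3 < v_4,
% every hyperedge is an interval of consecutive vertices.
\[
  V = \{v_1, v_2, v_3, v_4\}, \qquad
  E = \bigl\{\, \{v_1, v_2\},\ \{v_2, v_3, v_4\},\ \{v_3\} \,\bigr\}
\]
% The linear order v_1 < v_2 < v_3 < v_4 therefore witnesses convexity;
% the main result concerns defining such a witnessing order in MS logic
% when the hypergraph is prime.
```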

4.
This paper introduces formative processes, composed of transitive partitions. Given a family of sets, a formative process ending in the Venn partition Σ of that family is shown to exist. Sufficient criteria are also singled out for a transitive partition to model (via a function from set variables to unions of sets in the partition) all set-literals modeled by Σ. On the basis of such criteria, a procedure is designed that mimics a given formative process by another in which sets have finite rank bounded by C(|Σ|), with C a specific computable function. As a by-product, one of the core results on decidability in computable set theory is rediscovered, namely the one regarding the satisfiability of unquantified set-theoretic formulae involving Boolean operators, the singleton-former, and the powerset operator. The method described (which is able to exhibit a set-solution when the answer is affirmative) can be extended to solve the satisfiability problem for broader fragments of set theory.

5.
We consider the problem of simulation preorder/equivalence between infinite-state processes and finite-state ones. First, we describe a general method for utilizing the decidability of bisimulation problems to solve (certain instances of) the corresponding simulation problems. For certain process classes, the method allows us to design effective reductions of simulation problems to their bisimulation counterparts, and some new decidability results for simulation have already been obtained in this way. Then we establish the decidability border for the problem of simulation preorder/equivalence between infinite-state processes and finite-state ones w.r.t. the hierarchy of process rewrite systems. In particular, we show that simulation preorder (in both directions) and simulation equivalence are decidable in EXPTIME between pushdown processes and finite-state ones. On the other hand, simulation preorder is undecidable between PA and finite-state processes in both directions. These results also hold for those PA and finite-state processes which are deterministic and normed, and thus immediately extend to trace preorder. Regularity (finiteness) w.r.t. simulation and trace equivalence is also shown to be undecidable for PA. Finally, we prove that simulation preorder (in both directions) and simulation equivalence are intractable between all classes of infinite-state systems (in the hierarchy of process rewrite systems) and finite-state ones. This result is obtained by showing that the problem whether a BPA (or BPP) process simulates a finite-state one is PSPACE-hard and the other direction is co-NP-hard; consequently, simulation equivalence between BPA (or BPP) and finite-state processes is also co-NP-hard.

6.
In this paper we consider the problem of reconstructing triangular surfaces from given contours. An algorithm solving this problem must decide which contours of two successive slices should be connected by the surface (branching problem) and, given that, which vertices of the assigned contours should be connected for the triangular mesh (correspondence problem). We present a new approach that solves both tasks in an elegant way. The main idea is to employ discrete distance fields enhanced with correspondence information. This allows us not only to connect vertices from successive slices in a reasonable way but also to solve the branching problem by creating intermediate contours where adjacent contours differ too much. Last but not least we show how the 2D distance fields used in the reconstruction step can be converted to a 3D distance field that can be advantageously exploited for distance calculations during a subsequent simplification step.
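The paper's distance fields additionally carry correspondence information; the fragment below is only a minimal sketch of the underlying idea (all names are ours): a brute-force 2D distance field of a polygonal contour on a grid, plus a naive blend of two such fields to suggest where an intermediate contour could be placed between two slices.

```python
# Minimal sketch (not the authors' algorithm): brute-force 2D distance field
# of a polygonal contour, plus a naive blend of two fields. The paper's
# method additionally stores correspondence information in the fields.
import math

def point_segment_dist(px, py, ax, ay, bx, by):
    # Distance from point (px, py) to segment (a, b)
    vx, vy = bx - ax, by - ay
    wx, wy = px - ax, py - ay
    vv = vx * vx + vy * vy
    t = 0.0 if vv == 0.0 else max(0.0, min(1.0, (wx * vx + wy * vy) / vv))
    dx, dy = px - (ax + t * vx), py - (ay + t * vy)
    return math.hypot(dx, dy)

def distance_field(contour, nx, ny, xmin, xmax, ymin, ymax):
    # contour: list of (x, y) vertices of a closed polygon
    field = [[0.0] * nx for _ in range(ny)]
    for j in range(ny):
        for i in range(nx):
            x = xmin + (xmax - xmin) * i / (nx - 1)
            y = ymin + (ymax - ymin) * j / (ny - 1)
            field[j][i] = min(
                point_segment_dist(x, y, *contour[k], *contour[(k + 1) % len(contour)])
                for k in range(len(contour))
            )
    return field

def blend(f0, f1, t):
    # Naive interpolation between the fields of two successive slices;
    # an intermediate contour could be traced near the minima of the blend.
    return [[(1 - t) * a + t * b for a, b in zip(r0, r1)] for r0, r1 in zip(f0, f1)]

square = [(1, 1), (3, 1), (3, 3), (1, 3)]
triangle = [(1, 1), (3, 1), (2, 3)]
f_mid = blend(distance_field(square, 32, 32, 0, 4, 0, 4),
              distance_field(triangle, 32, 32, 0, 4, 0, 4), 0.5)
print(f_mid[16][16])
```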

7.
The exponential output size problem is to determine whether the size of output trees of a tree transducer grows exponentially in the size of input trees. In this paper the complexity of this problem is studied. It is shown to be NL-complete for total top-down tree transducers, DEXPTIME-complete for general top-down tree transducers, and P-complete for bottom-up tree transducers.
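For intuition (an illustrative transducer of our own, not one from the paper), a single copying rule already produces exponential output:

```latex
% A total top-down tree transducer with exponential output size (illustrative):
\begin{align*}
  q(f(x)) &\;\to\; g\bigl(q(x),\, q(x)\bigr)\\
  q(a)    &\;\to\; a
\end{align*}
% On the input f^n(a), of size n+1, the transducer outputs a complete binary
% tree with 2^n leaves, so the output size grows exponentially in the input size.
```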

8.
We investigate set constraints over set expressions with Tarskian functional and relational operations. Unlike the Herbrand constructor symbols used in recent set constraint formalisms, the meaning of a Tarskian function symbol is interpreted in an arbitrary first order structure. We show that satisfiability of Tarskian set constraints is decidable in nondeterministic doubly exponential time. We also give complexity results and open problems for various extensions and restrictions of the language.

9.
10.
This paper describes the theory and algorithms of distance transform for fuzzy subsets, called fuzzy distance transform (FDT). The notion of fuzzy distance is formulated by first defining the length of a path on a fuzzy subset and then finding the infimum of the lengths of all paths between two points. The length of a path π in a fuzzy subset of the n-dimensional continuous space ℝ^n is defined as the integral of fuzzy membership values along π. Generally, there are infinitely many paths between any two points in a fuzzy subset and it is shown that the shortest one may not exist. The fuzzy distance between two points is defined as the infimum of the lengths of all paths between them. It is demonstrated that, unlike in hard convex sets, the shortest path (when it exists) between two points in a fuzzy convex subset is not necessarily a straight line segment. For any positive number θ≤1, the θ-support of a fuzzy subset is the set of all points in ℝ^n with membership values greater than or equal to θ. It is shown that, for any fuzzy subset, for any nonzero θ≤1, fuzzy distance is a metric for the interior of its θ-support. It is also shown that, for any smooth fuzzy subset, fuzzy distance is a metric for the interior of its 0-support (referred to as support). FDT is defined as a process on a fuzzy subset that assigns to a point its fuzzy distance from the complement of the support. The theoretical framework of FDT in continuous space is extended to digital cubic spaces and it is shown that for any fuzzy digital object, fuzzy distance is a metric for the support of the object. A dynamic programming-based algorithm is presented for computing FDT of a fuzzy digital object. It is shown that the algorithm terminates in a finite number of steps and when it does so, it correctly computes FDT. Several potential applications of fuzzy distance transform in medical imaging are presented. Among these are the quantification of blood vessels and trabecular bone thickness in the regime of limited spatial resolution where these objects become fuzzy.
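Below is a hedged sketch of the discrete idea on a 2D grid (our code, not the authors' dynamic-programming implementation): the cost of a step between neighboring pixels is approximated by the average of their membership values times the step length, and costs from the background are propagated Dijkstra-style until no pixel can be improved.

```python
# Minimal sketch (not the authors' implementation) of a fuzzy distance
# transform on a 2D grid: a step from pixel p to neighbor q costs
# 0.5 * (mu(p) + mu(q)) * |p - q|, and FDT(p) is the cheapest cost of a
# path from p to the complement of the support (mu == 0).
import heapq
import math

def fuzzy_distance_transform(mu):
    ny, nx = len(mu), len(mu[0])
    inf = float("inf")
    fdt = [[inf] * nx for _ in range(ny)]
    heap = []
    nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, -1), (-1, 1), (1, -1), (1, 1)]
    # Seed: support pixels adjacent to the background start with the cost
    # of stepping out of the support (mu of the background pixel is 0).
    for y in range(ny):
        for x in range(nx):
            if mu[y][x] <= 0.0:
                continue
            for dy, dx in nbrs:
                yy, xx = y + dy, x + dx
                if 0 <= yy < ny and 0 <= xx < nx and mu[yy][xx] <= 0.0:
                    cost = 0.5 * mu[y][x] * math.hypot(dy, dx)
                    if cost < fdt[y][x]:
                        fdt[y][x] = cost
            if fdt[y][x] < inf:
                heapq.heappush(heap, (fdt[y][x], y, x))
    # Dijkstra-style propagation inside the support.
    while heap:
        d, y, x = heapq.heappop(heap)
        if d > fdt[y][x]:
            continue
        for dy, dx in nbrs:
            yy, xx = y + dy, x + dx
            if 0 <= yy < ny and 0 <= xx < nx and mu[yy][xx] > 0.0:
                nd = d + 0.5 * (mu[y][x] + mu[yy][xx]) * math.hypot(dy, dx)
                if nd < fdt[yy][xx]:
                    fdt[yy][xx] = nd
                    heapq.heappush(heap, (nd, yy, xx))
    return fdt

obj = [[0.0, 0.0, 0.0, 0.0],
       [0.0, 0.5, 0.8, 0.0],
       [0.0, 0.9, 1.0, 0.0],
       [0.0, 0.0, 0.0, 0.0]]
print(fuzzy_distance_transform(obj)[2][2])
```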

11.
By reduction from the halting problem for Minsky's two-register machines we prove that there is no algorithm capable of deciding the -theory of one step rewriting of an arbitrary finite linear confluent finitely terminating term rewriting system (weak undecidability). We also present a fixed such system with undecidable *-theory of one step rewriting (strong undecidability). This improves over all previously known results of the same kind.

12.
A set of words X over a finite alphabet A is said to be unavoidable if all but finitely many words in A* have a factor in X. We examine the problem of calculating the cardinality of minimal unavoidable sets of words of uniform length; we correct an error in [8], state a conjecture offering a formula for the minimum size of these so-called n-good sets for all values of n, and show that the conjecture is correct in an infinite number of cases.
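A small worked example of the notion (ours, not taken from the paper):

```latex
% Illustrative unavoidable set of uniform length 2 over the binary alphabet A = {a, b}:
\[
  X \;=\; \{\, aa,\ ab,\ bb \,\} \subseteq A^2
\]
% The only words with no factor in X are \varepsilon, a, b, and ba, a finite
% set, so X is unavoidable. No two-element subset of A^2 is unavoidable,
% since one of a^k, b^k, (ab)^k avoids each such pair; hence 3 is the minimum
% cardinality for length 2 over two letters.
```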

13.
Consider the problem of computing a function given only an oracle for its graph. For this problem, we present optimal trade-offs between serial and parallel queries. In particular, we give a function for which parallel access to its own graph is exponentially more expensive than sequential access.

14.
We present a new method for the detection of multiple solutions or degeneracy when estimating the fundamental matrix, with specific emphasis on robustness to data contamination (mismatches). The fundamental matrix encapsulates all the information on camera motion and internal parameters available from image feature correspondences between two views. It is often used as a first step in structure from motion algorithms. If the set of correspondences is degenerate, then this structure cannot be accurately recovered and many solutions explain the data equally well. It is essential that we are alerted to such eventualities. As current feature matchers are very prone to mismatching, the degeneracy detection method must also be robust to outliers. In this paper a definition of degeneracy is given and all two-view nondegenerate and degenerate cases are catalogued in a logical way by introducing the language of varieties from algebraic geometry. It is then shown how each of the cases can be robustly determined from image correspondences via a scoring function we develop. These ideas define a methodology which allows the simultaneous detection of degeneracy and outliers. The method is called PLUNDER-DL and is a generalization of the robust estimator RANSAC. The method is evaluated on many differing pairs of real images. In particular, it is demonstrated that proper modeling of degeneracy in the presence of outliers enables the detection of mismatches which would otherwise be missed. All processing including point matching, degeneracy detection, and outlier detection is automatic.

15.
A term rewriting system is called growing if each variable occurring on both the left-hand side and the right-hand side of a rewrite rule occurs at depth zero or one in the left-hand side. Jacquemard showed that the reachability and the sequentiality of linear (i.e., left-right-linear) growing term rewriting systems are decidable. In this paper we show that Jacquemard's result can be extended to left-linear growing rewriting systems that may have right-nonlinear rewrite rules. This implies that the reachability and the joinability of some class of right-linear term rewriting systems are decidable, which improves the results for right-ground term rewriting systems by Oyamaguchi. Our result extends the class of left-linear term rewriting systems having a decidable call-by-need normalizing strategy. Moreover, we prove that the termination property is decidable for almost orthogonal growing term rewriting systems.
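A few tiny illustrative rules (ours, not from the paper) to unpack the definition of "growing":

```latex
% Illustrative rewrite rules (ours), unpacking the definition of "growing":
\begin{align*}
  f(x) &\;\to\; g(f(x))
     && \text{growing: the shared variable } x \text{ is at depth 1 in the left-hand side,}\\
  f(g(x)) &\;\to\; h(x)
     && \text{not growing: } x \text{ occurs on both sides but at depth 2 on the left,}\\
  f(g(x), y) &\;\to\; h(y, y)
     && \text{growing and left-linear, with a right-nonlinear right-hand side.}
\end{align*}
```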

16.
Computation of approximate polynomial greatest common divisors (GCDs) is important both theoretically and due to its applications to control linear systems, network theory, and computer-aided design. We study two approaches to the solution so far omitted by researchers, despite intensive recent work in this area. Correlation to numerical Padé approximation enabled us to improve computations for both problems (GCDs and Padé). Reduction to the approximation of polynomial zeros enabled us to obtain a new insight into the GCD problem and to devise effective solution algorithms. In particular, unlike the known algorithms, we estimate the degree of approximate GCDs at a low computational cost, and this enables us to obtain a certified correct solution for a large class of input polynomials. We also restate the problem in terms of the norm of the perturbation of the zeros (rather than the coefficients) of the input polynomials, which leads us to a fast certified solution for any pair of input polynomials via the computation of their roots and the maximum matchings or connected components in the associated bipartite graph.
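As a hedged illustration of the root-based viewpoint (our sketch, not the authors' certified algorithm): compute the roots of both polynomials numerically and pair roots that lie within a tolerance eps; the number of matched pairs estimates the degree of an approximate GCD. The paper works with maximum matchings or connected components of the associated bipartite graph; the greedy pairing below is only a simplification, and the tolerance and names are our assumptions.

```python
# Hedged sketch (not the authors' certified algorithm): estimate the degree
# of an approximate GCD of two polynomials by numerically computing their
# roots and greedily pairing roots that are closer than a tolerance eps.
import numpy as np

def approx_gcd_degree(p, q, eps=1e-3):
    # p, q: coefficient lists, highest degree first (numpy.roots convention)
    rp, rq = list(np.roots(p)), list(np.roots(q))
    degree = 0
    for r in rp:
        if not rq:
            break
        j = min(range(len(rq)), key=lambda k: abs(r - rq[k]))
        if abs(r - rq[j]) <= eps:
            rq.pop(j)          # each root of q is matched at most once
            degree += 1
    return degree

# (x - 1)(x - 2) and a small perturbation of (x - 1)(x - 3)
p = [1.0, -3.0, 2.0]
q = [1.0, -4.000001, 3.000002]
print(approx_gcd_degree(p, q, eps=1e-3))   # expected: 1 (the nearly common root x = 1)
```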

17.
Face Detection: A Survey
In this paper we present a comprehensive and critical survey of face detection algorithms. Face detection is a necessary first step in face recognition systems, with the purpose of localizing and extracting the face region from the background. It also has several applications in areas such as content-based image retrieval, video coding, video conferencing, crowd surveillance, and intelligent human–computer interfaces. However, it was not until recently that the face detection problem received considerable attention among researchers. The human face is a dynamic object and has a high degree of variability in its appearance, which makes face detection a difficult problem in computer vision. A wide variety of techniques have been proposed, ranging from simple edge-based algorithms to composite high-level approaches utilizing advanced pattern recognition methods. The algorithms presented in this paper are classified as either feature-based or image-based and are discussed in terms of their technical approach and performance. Due to the lack of standardized tests, we do not provide a comprehensive comparative evaluation, but in cases where results are reported on common datasets, comparisons are presented. We also give a presentation of some proposed applications and possible application areas.

18.
Recently, the author introduced a nonprobabilistic mathematical model of discrete channels, the BEE channels, that involve the error-types substitution, insertion, and deletion. This paper defines an important class of BEE channels, the SID channels, which include channels that permit a bounded number of scattered errors and, possibly at the same time, a bounded burst of errors in any segment of predefined length of a message. A formal syntax is defined for generating channel expressions, and appropriate semantics is provided for interpreting a given channel expression as a communication channel (SID channel) that permits combinations of substitutions, insertions, and deletions of symbols. Our framework permits one to generalize notions such as error correction and unique decodability, and express statements of the form “The code K can correct all errors of type ξ” and “it is decidable whether the code K is uniquely decodable for the channel described by ξ”, where ξ is any SID channel expression.

19.
We introduce new algorithms for deciding the satisfiability of constraints for the full recursive path ordering with status (RPO), and hence also for other path orderings such as LPO, MPO, KNS, and RDO, and for all possible total precedences and signatures. The techniques are based on a new notion of solved form, where fundamental properties of orderings like transitivity and monotonicity are taken into account. Apart from their simplicity and elegance from the theoretical point of view, the main contribution of these algorithms is their efficiency in practice. Since guessing is minimized and, in particular, no linear orderings between the subterms are guessed, a practical improvement in performance of several orders of magnitude over previous algorithms is obtained, as shown by our experiments.
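For context only, the fragment below implements the standard textbook lexicographic path ordering (LPO), one of the orderings whose constraints such algorithms decide; it is not the authors' constraint solver, and the term encoding and precedence representation are our own assumptions.

```python
# Standard lexicographic path ordering (LPO), for context only; this is the
# textbook relation, not the authors' constraint-solving algorithm.
# Terms: variables are strings, compound terms are pairs (symbol, [args]).
def occurs(x, t):
    if isinstance(t, str):
        return t == x
    return any(occurs(x, s) for s in t[1])

def lpo_gt(s, t, prec):
    # prec: dict mapping function symbols to integers (a total precedence)
    if isinstance(s, str):                      # a variable is never greater
        return False
    if isinstance(t, str):                      # s > x iff x occurs properly in s
        return occurs(t, s) and s != t
    f, ss = s
    g, ts = t
    # (1) some argument of s is >= t
    if any(si == t or lpo_gt(si, t, prec) for si in ss):
        return True
    # (2) f > g in the precedence and s dominates every argument of t
    if prec[f] > prec[g]:
        return all(lpo_gt(s, tj, prec) for tj in ts)
    # (3) f = g, s dominates every argument of t, and the argument tuples
    #     compare lexicographically
    if f == g and all(lpo_gt(s, tj, prec) for tj in ts):
        for si, ti in zip(ss, ts):
            if si != ti:
                return lpo_gt(si, ti, prec)
        return len(ss) > len(ts)
    return False

prec = {"i": 3, "f": 2, "e": 1}
# i(f(x, y)) >_lpo f(i(y), i(x)) under the precedence i > f > e
s = ("i", [("f", ["x", "y"])])
t = ("f", [("i", ["y"]), ("i", ["x"])])
print(lpo_gt(s, t, prec))   # True
```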

20.
The study of the computational power of randomized computations is one of the central tasks of complexity theory. The main goal of this paper is the comparison of the power of Las Vegas computation with that of deterministic and nondeterministic computation, respectively. We investigate the power of Las Vegas computation for the complexity measures of one-way communication, ordered binary decision diagrams, and finite automata. (i) For the one-way communication complexity of two-party protocols we show that Las Vegas communication can save at most one half of the deterministic one-way communication complexity. We also present a language for which this gap is tight. (ii) The result (i) is applied to show an at most polynomial gap between determinism and Las Vegas for ordered binary decision diagrams. (iii) For the size (i.e., the number of states) of finite automata we show that the size of Las Vegas finite automata recognizing a language L is at least the square root of the size of the minimal deterministic finite automaton recognizing L. Using a specific language we verify the optimality of this lower bound.
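Result (iii) restated as a formula for concreteness (the notation lv(L) and d(L) is ours, not the paper's):

```latex
% Result (iii), restated: with lv(L) the minimal size of a Las Vegas finite
% automaton for L and d(L) the size of the minimal DFA for L,
\[
  \mathrm{lv}(L) \;\ge\; \sqrt{\mathrm{d}(L)} .
\]
% Thus, if the minimal DFA needs 2^{2k} states, every Las Vegas finite
% automaton still needs at least 2^{k} states; the paper exhibits a language
% showing this square-root lower bound is optimal.
```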
