Similar Documents
1.
This paper studies a system of m robots operating in a set of n work locations connected by aisles in a √n × √n grid, where m ≤ n. From time to time the robots need to move along the aisles in order to visit disjoint sets of locations. The movement of the robots must comply with the following constraints: (1) no two robots can collide at a grid node or traverse a grid edge at the same time; (2) a robot's sensory capability is limited to detecting the presence of another robot at a neighboring node. We present a deterministic protocol that, for any small constant ε > 0, allows m ≤ (1-ε)n robots to visit their target locations in O(d√n) time, where each robot visits no more than d ≤ n targets and no target is visited by more than one robot. We also prove a lower bound showing that our protocol is optimal. Prior to this paper, no optimal protocols were known for d > 1. For d = 1, optimal protocols were known only for m ≤ √n, while for general m ≤ n only a suboptimal randomized protocol was known.

2.
The distance transform (DT) is an image computation tool which can be used to extract information about the shape and the position of the foreground pixels relative to each other. It converts a binary image into a grey-level image, where each pixel has a value corresponding to the distance to the nearest foreground pixel. The time complexity of computing the distance transform depends strongly on the distance metric used: the more exact the distance transform, the longer the execution time. Nowadays, thousands of images often have to be processed within a limited time, and a sequential computer can hardly compute the distance transform in real time at that scale. In order to provide efficient distance transform computation, it is therefore desirable to develop a parallel algorithm for this operation. In this paper, based on the diagonal propagation approach, we first provide an O(N^2) time sequential algorithm to compute the chessboard distance transform (CDT) of an N×N image, which is a DT using the chessboard distance metric. Based on the proposed sequential algorithm, the CDT of a 2D binary image array of size N×N can be computed in O(log N) time on the EREW PRAM model using O(N^2/log N) processors, in O(log log N) time on the CRCW PRAM model using O(N^2/log log N) processors, and in O(log N) time on the hypercube computer using O(N^2/log N) processors. Following the mapping proposed by Lee and Horng, an algorithm for the medial axis transform is also derived efficiently: the medial axis transform of a 2D binary image array of size N×N can be computed in O(log N) time on the EREW PRAM model using O(N^2/log N) processors, in O(log log N) time on the CRCW PRAM model using O(N^2/log log N) processors, and in O(log N) time on the hypercube computer using O(N^2/log N) processors. The proposed parallel algorithms are composed of a set of prefix operations; in each prefix operation phase, only increment (add-one) and minimum operations are employed, so the algorithms are especially efficient in practical applications.
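The chessboard metric makes the sequential computation particularly simple. As an illustration only (the paper's O(N^2) algorithm uses diagonal propagation; the function below is a standard two-pass raster-scan formulation with hypothetical names), the CDT of an N×N binary image can be computed in O(N^2) as follows:

```python
import numpy as np

def chessboard_dt(binary):
    """Chessboard (L-infinity) distance transform of a 2D binary image.

    Each output pixel holds the chessboard distance to the nearest
    foreground (non-zero) pixel.  Standard two-pass raster-scan sketch;
    runs in O(N^2) for an N x N image.
    """
    h, w = binary.shape
    INF = h + w  # larger than any possible chessboard distance
    d = np.where(binary > 0, 0, INF).astype(int)

    # Forward pass: propagate distances from top-left neighbours.
    for i in range(h):
        for j in range(w):
            for di, dj in ((-1, -1), (-1, 0), (-1, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w:
                    d[i, j] = min(d[i, j], d[ni, nj] + 1)

    # Backward pass: propagate distances from bottom-right neighbours.
    for i in range(h - 1, -1, -1):
        for j in range(w - 1, -1, -1):
            for di, dj in ((1, 1), (1, 0), (1, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w:
                    d[i, j] = min(d[i, j], d[ni, nj] + 1)
    return d

img = np.zeros((8, 8), dtype=int)
img[2, 3] = img[6, 6] = 1          # two foreground pixels
print(chessboard_dt(img))
```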

3.
This paper describes the theory and algorithms of the distance transform for fuzzy subsets, called the fuzzy distance transform (FDT). The notion of fuzzy distance is formulated by first defining the length of a path on a fuzzy subset and then taking the infimum of the lengths of all paths between two points. The length of a path π in a fuzzy subset of the n-dimensional continuous space ℝ^n is defined as the integral of fuzzy membership values along π. Generally, there are infinitely many paths between any two points in a fuzzy subset, and it is shown that the shortest one may not exist. The fuzzy distance between two points is defined as the infimum of the lengths of all paths between them. It is demonstrated that, unlike in hard convex sets, the shortest path (when it exists) between two points in a fuzzy convex subset is not necessarily a straight line segment. For any positive number θ ≤ 1, the θ-support of a fuzzy subset is the set of all points in ℝ^n with membership values greater than or equal to θ. It is shown that, for any fuzzy subset and any nonzero θ ≤ 1, fuzzy distance is a metric for the interior of its θ-support. It is also shown that, for any smooth fuzzy subset, fuzzy distance is a metric for the interior of its 0-support (referred to as support). FDT is defined as a process on a fuzzy subset that assigns to a point its fuzzy distance from the complement of the support. The theoretical framework of FDT in continuous space is extended to digital cubic spaces, and it is shown that for any fuzzy digital object, fuzzy distance is a metric for the support of the object. A dynamic programming-based algorithm is presented for computing the FDT of a fuzzy digital object. It is shown that the algorithm terminates in a finite number of steps and, when it does so, it correctly computes the FDT. Several potential applications of the fuzzy distance transform in medical imaging are presented, among them the quantification of blood vessels and of trabecular bone thickness in the regime of limited spatial resolution, where these objects become fuzzy.
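To make the definitions concrete, here is a small illustrative sketch (not the paper's dynamic-programming algorithm, and the names are hypothetical): on a 2D digital membership image, a step between adjacent pixels is charged the mean of their memberships times the Euclidean step length, and the FDT is obtained by Dijkstra-style propagation from the complement of the support.

```python
import heapq
import math
import numpy as np

def fuzzy_distance_transform(mu):
    """Illustrative FDT on a 2D fuzzy membership image mu (values in [0, 1]).

    A step between adjacent pixels p and q costs (mu[p] + mu[q]) / 2 * |p - q|;
    the FDT of a pixel is the cheapest path cost from any pixel outside the
    support (mu == 0).  Dijkstra-style propagation; the paper itself uses a
    dynamic-programming formulation.
    """
    h, w = mu.shape
    fdt = np.full((h, w), np.inf)
    heap = []
    # Seed: pixels outside the support have fuzzy distance 0.
    for i in range(h):
        for j in range(w):
            if mu[i, j] == 0:
                fdt[i, j] = 0.0
                heapq.heappush(heap, (0.0, i, j))
    neigh = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
    while heap:
        dist, i, j = heapq.heappop(heap)
        if dist > fdt[i, j]:
            continue
        for di, dj in neigh:
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                step = 0.5 * (mu[i, j] + mu[ni, nj]) * math.hypot(di, dj)
                if dist + step < fdt[ni, nj]:
                    fdt[ni, nj] = dist + step
                    heapq.heappush(heap, (dist + step, ni, nj))
    return fdt

mu = np.zeros((7, 7))
mu[2:5, 2:5] = [[0.5, 1.0, 0.5], [1.0, 1.0, 1.0], [0.5, 1.0, 0.5]]
print(np.round(fuzzy_distance_transform(mu), 2))
```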

4.
This paper presents an original method for analyzing, in an unsupervised way, images supplied by high-resolution sonar. We aim at segmenting the sonar image into three kinds of regions: echo areas (due to the reflection of the acoustic wave on the object), shadow areas (corresponding to a lack of acoustic reverberation behind an object lying on the sea-bed), and sea-bottom reverberation areas. This unsupervised method estimates the parameters of the noise distributions, modeled by a Weibull probability density function (PDF), and the label field parameters, modeled by a Markov random field (MRF). For the estimation step, we adopt a maximum likelihood technique for the noise model parameters and a least-squares method for the MRF prior model. Then, in order to obtain an accurate segmentation map, we have designed a two-step process that finds the shadow and the echo regions separately, using the previously estimated parameters. First, we introduce a scale-causal and spatial model called SCM (scale causal multigrid), based on a multigrid energy minimization strategy, to find the shadow class. Second, we propose an MRF monoscale model using a priori information (at different levels of knowledge) based on the physical properties of each region, which allows us to distinguish echo areas from sea-bottom reverberation. This technique has been successfully applied to real sonar images and is compatible with automatic processing of massive amounts of data.
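As a sketch of the noise-model estimation step only (the paper also estimates the MRF prior by least squares; the helper below is hypothetical and generic), the maximum-likelihood fit of a two-parameter Weibull PDF can be obtained with the standard fixed-point iteration for the shape parameter followed by a closed-form scale estimate:

```python
import numpy as np

def weibull_mle(x, iters=100):
    """Maximum-likelihood fit of a two-parameter Weibull PDF to samples x > 0.

    The shape equation  sum(x^k ln x)/sum(x^k) - 1/k = mean(ln x)  is solved
    by a simple fixed-point iteration; the scale then follows in closed form.
    Returns (shape k, scale lam).
    """
    x = np.asarray(x, dtype=float)
    logx = np.log(x)
    k = 1.0                                   # initial guess for the shape
    for _ in range(iters):
        xk = x ** k
        k = 1.0 / (np.sum(xk * logx) / np.sum(xk) - logx.mean())
    lam = (np.mean(x ** k)) ** (1.0 / k)
    return k, lam

# Quick check on synthetic "reverberation" samples (shape 2, scale 3).
rng = np.random.default_rng(0)
samples = rng.weibull(2.0, size=5000) * 3.0
print(weibull_mle(samples))                    # roughly (2.0, 3.0)
```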

5.
Convexity-Based Visual Camouflage Breaking
Camouflage is frequently used by animals and humans (usually for military purposes) in order to conceal objects from visual surveillance or inspection. Most camouflage methods are based on superposing multiple edges on the object that is supposed to be hidden, such that its familiar contours and texture are masked. In this work, we present an operator, Darg, that is applied directly to the intensity image in order to detect 3D smooth convex (or, equivalently, concave) objects. The operator maximally responds to a local intensity configuration that corresponds to curved 3D objects and is thus used to detect curved objects on a relatively flat background, regardless of image edges, contours, and texture. In that regard, we show that a typical camouflage found in some animal species seems to be a “counter-measure” against detection that might be based on our method. Detection by Darg is shown to be very robust, both from theoretical considerations and in practical examples on real-life images. As part of the camouflage-breaking demonstration, Darg, which is non-edge-based, is compared with a representative edge-based operator. Better performance is maintained by Darg for both animal and military camouflage breaking.

6.
We present a framework for performance prediction of distributed and mobile systems. We rely on process calculi and their structural operational semantics. The dynamic behaviour is described through transition systems whose transitions are labelled by encodings of their proofs, which we then map into stochastic processes. We improve on related work by allowing general continuous distributions, resorting to a notion of enabling between transitions. We also discuss how the number of available resources affects the overall model. Finally, we introduce a notion of bisimulation that takes stochastic information into account and prove it to be a congruence. When only exponential distributions are of interest, our equivalence induces a lumpable partition on the underlying Markov process.

7.
We work with an extension of Resolution, called Res(2), that allows clauses with conjunctions of two literals. In this system there are rules to introduce and eliminate such conjunctions. We prove that the weak pigeonhole principle PHP^{cn}_n and random unsatisfiable CNF formulas require exponential-size proofs in this system. This is the strongest system beyond Resolution for which such lower bounds are known. As a consequence of the result about the weak pigeonhole principle, Res(log) is exponentially more powerful than Res(2). We also prove that Resolution cannot polynomially simulate Res(2) and that Res(2) does not have feasible monotone interpolation, solving an open problem posed by Krajíček.

8.
The major challenge facing American Sign Language (ASL) recognition now is developing methods that will scale well with increasing vocabulary size. Unlike in spoken languages, phonemes can occur simultaneously in ASL. The number of possible combinations of phonemes is approximately 1.5×10^9, which cannot be tackled by conventional hidden Markov model-based methods. Gesture recognition, which is less constrained than ASL recognition, suffers from the same problem. In this paper we present a novel framework for ASL recognition that aspires to be a solution to the scalability problems. It is based on breaking down the signs into their phonemes and modeling them with parallel hidden Markov models. These model the simultaneous aspects of ASL independently; thus, they can be trained independently and do not require consideration of the different combinations at training time. We show in experiments with a 22-sign vocabulary how to apply this framework in practice. We also show that parallel hidden Markov models outperform conventional hidden Markov models.
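The key computational idea is that independent phoneme channels can be scored separately and their log-likelihoods summed. A minimal sketch of that scoring scheme, assuming discrete observations and leaving out the paper's training details (all names are illustrative, not the paper's API):

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs) under one discrete HMM.

    pi: initial state probabilities (S,), A: transitions (S, S),
    B: emission probabilities (S, V), obs: list of symbol indices.
    """
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

def parallel_hmm_score(channel_obs, channel_hmms):
    """Channels are modelled as independent HMMs, so the joint
    log-likelihood of a sign is the sum over its channels."""
    return sum(forward_loglik(obs, *hmm)
               for obs, hmm in zip(channel_obs, channel_hmms))

def recognize(channel_obs, sign_models):
    """Pick the sign whose per-channel HMMs best explain the observations."""
    return max(sign_models,
               key=lambda s: parallel_hmm_score(channel_obs, sign_models[s]))
```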

9.
The problem of localization, that is, of a robot finding its position on a map, is an important task for autonomous mobile robots. It has applications in numerous areas of robotics ranging from aerial photography to autonomous vehicle exploration. In this paper we present a new strategy, LPS (Localize-by-Placement-Separation), for a robot to find its position on a map, where the map is represented as a geometric tree of bounded degree. Our strategy exploits to a high degree the self-similarities that may occur in the environment. We use the framework of competitive analysis to analyze the performance of our strategy. In particular, we show that the distance traveled by the robot is at most O(√n) times longer than the shortest possible route to localize the robot, where n is the number of vertices of the tree. This is a significant improvement over the best previously known bound of O(n^{2/3}). Moreover, since there is a lower bound of Ω(√n), our strategy is optimal up to a constant factor. Using the same approach we can also show that the problem of searching for a target in a geometric tree, where the robot is given a map of the tree and the location of the target but does not know its own position, can be solved by a strategy with a competitive ratio of O(√n), which is again optimal up to a constant factor.

10.
This paper describes a technique to animate three-dimensional sampled volumes. The technique gives the animator the ability to treat volumes as if they were standard polygonal models and to use all of the standard animation/motion capture tools on volumetric data. A volumetric skeleton is computed from a volumetric model using a multi-resolution thinning procedure. The volumetric skeleton is centered in the object and accurately represents the shape of the object. The thinning process is reversible in that the volumetric model can be reconstructed from the volumetric skeleton. The volumetric skeleton is then connected and imported into a standard graphics animation package for animation. The animated skeleton is used for reconstruction, which essentially recreates a deformed volume around the deformed skeleton. Polygons are never computed and the entire process remains in the volumetric domain. This technique is demonstrated on one of the most complex 3D datasets, the Visible Male, resulting in actual “human animation”.

11.
The region calculus of Tofte and Talpin is a polymorphically typed lambda calculus with annotations that make memory allocation and deallocation explicit. It is intended as an intermediate language for implementing Hindley-Milner typed functional languages such as ML without traditional trace-based garbage collection. Static region and effect inference can be used to annotate a statically typed ML program with memory management primitives. Soundness of the calculus with respect to the region and effect system is crucial to guarantee safe deallocation of regions, i.e., deallocation should only take place for objects which are provably dead. The original soundness proof by Tofte and Talpin requires a complex co-inductive safety relation. In this paper, we present two small-step operational semantics for the region calculus and prove their type soundness with respect to the region and effect system. Following the standard syntactic approach of Wright, Felleisen, and Harper, we obtain simple inductive proofs. The first semantics is store-less. It is simple and elegant and gives rise to perspicuous proofs. The second semantics provides a store-based model for the region calculus. Albeit slightly more complicated, its additional expressiveness allows us to model operations on references with destructive update. A pure fragment of both small-step semantics is then proven equivalent to the original big-step operational approach of Tofte and Talpin. This leads to an alternative soundness proof for their evaluation-style formulation.

12.
A lower bound theorem is established for the number of comparators in a merging network. Let M(m, n) be the least number of comparators required in an (m, n)-merging network, and let C(m, n) be the number of comparators in Batcher's (m, n)-merging network. We prove, for n ≥ 1, that M(4, n) = C(4, n) for n ≡ 0, 1, 3 mod 4, M(4, n) ≥ C(4, n) − 1 for n ≡ 2 mod 4, and M(5, n) = C(5, n) for n ≡ 0, 1, 5 mod 8. Furthermore, Batcher's (6, 8k+6)-, (7, 8k+7)-, and (8, 8k+8)-merging networks are optimal for k ≥ 0. Our lower bound for (m, n)-merging networks, m ≤ n, has the same terms as C(m, n) as far as n is concerned. Thus Batcher's (m, n)-merging network is optimal up to a constant number of comparators, where the constant depends only on m. An open problem posed by Yao and Yao (Lower bounds on merging networks, J. Assoc. Comput. Mach. 23, 566–571) is solved: lim_{n→∞} M(m, n)/n = ⌈log m⌉/2 + m/2^{⌈log m⌉}.
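For reference, the comparator count of Batcher's odd-even merge obeys a simple recursion (the textbook formulation is assumed here, not taken from the paper); evaluating C(m, n)/n for large n numerically reproduces the limit stated above:

```python
from functools import lru_cache
from math import ceil, log2

@lru_cache(maxsize=None)
def C(m, n):
    """Comparators in Batcher's odd-even (m, n)-merging network:
    merge the odd- and even-indexed subsequences recursively, then
    apply one final row of comparators."""
    if m == 0 or n == 0:
        return 0
    if m == 1 and n == 1:
        return 1
    return (C((m + 1) // 2, (n + 1) // 2)   # odd-indexed elements
            + C(m // 2, n // 2)             # even-indexed elements
            + (m + n - 1) // 2)             # final comparator row

def limit(m):
    """Claimed limit of M(m, n)/n: ceil(log m)/2 + m / 2**ceil(log m)."""
    k = ceil(log2(m))
    return k / 2 + m / 2 ** k

for m in range(2, 9):
    n = 1 << 20
    print(m, C(m, n) / n, limit(m))   # the two columns agree closely
```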

13.
A major problem in using iterative number generators of the form x_i = f(x_{i-1}) is that they can enter unexpectedly short cycles. This is hard to analyze when the generator is designed, hard to detect in real time when the generator is used, and can have devastating cryptanalytic implications. In this paper we define a measure of security, called sequence diversity, which generalizes the notion of cycle length for noniterative generators. We then introduce the class of counter-assisted generators and show how to turn any iterative generator (even a bad one designed or seeded by an adversary) into a counter-assisted generator with provably high diversity, without reducing the quality of generators which are already cryptographically strong.
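A minimal sketch of the counter-assisted idea, assuming XOR as the combining operation (the paper allows other combiners, and this is not claimed to be its exact construction): the next state mixes the iterated value with a counter, so the (state, counter) pair cannot repeat before the counter wraps, even if f itself has very short cycles.

```python
def counter_assisted(f, seed, n_bits=64):
    """Generator sketch: turn an iterative generator x_i = f(x_{i-1})
    into a counter-assisted one, x_i = (f(x_{i-1}) XOR i) mod 2**n_bits.

    Even if f has short cycles, the pair (state, counter) cannot repeat
    until the counter wraps, which is the intuition behind the provably
    high sequence diversity of counter-assisted generators.
    """
    mask = (1 << n_bits) - 1
    x, i = seed & mask, 0
    while True:
        i += 1
        x = (f(x) ^ i) & mask
        yield x

# A deliberately bad iterative generator: f has a fixed point at 0.
bad_f = lambda x: (x * x) % (1 << 64)
gen = counter_assisted(bad_f, seed=0)
print([next(gen) for _ in range(5)])   # does not get stuck at 0
```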

14.
Given two strings X = a_1…a_n and P = b_1…b_m over an alphabet Σ, the problem of testing whether P occurs as a subsequence of X is trivially solved in linear time. It is also known that a simple O(n log |Σ|) time preprocessing of X makes it easy to decide subsequently, for any P and in at most |P| log |Σ| character comparisons, whether P is a subsequence of X. These problems become more complicated if one asks instead whether P occurs as a subsequence of some substring Y of X of bounded length. This paper presents an automaton built on the textstring X and capable of identifying all distinct minimal substrings Y of X having P as a subsequence. By a substring Y being minimal with respect to P, it is meant that P is not a subsequence of any proper substring of Y. For every minimal substring Y, the automaton recognizes the occurrence of P having the lexicographically smallest sequence of symbol positions in Y. It is not difficult to realize such an automaton in time and space O(n^2) for a text of n characters. One result of this paper consists of bringing those bounds down to linear or O(n log n), respectively, depending on whether the alphabet is bounded or of arbitrary size, thereby matching the corresponding complexities of automata constructions for offline exact string searching. Having built the automaton, the search for all lexicographically earliest occurrences of P in X is carried out in time O(∑_{i=1}^{m} rocc_i · i) or O(n + ∑_{i=1}^{m} rocc_i · i · log n), depending on whether the alphabet is fixed or arbitrary, where rocc_i is the number of distinct minimal substrings of X having b_1…b_i as a subsequence (note that each such substring may occur many times in X but is counted only once in the bound). All log factors appearing in the above bounds can be further reduced to log log by resorting to known integer-handling data structures.
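For contrast with the windowed problem studied here, the two easy facts quoted at the start of the abstract can be sketched in a few lines. This simpler variant answers repeated queries with O(|P| log n) binary searches over per-symbol position lists, rather than the |P| log |Σ| comparisons cited above; the names are illustrative.

```python
from bisect import bisect_right
from collections import defaultdict

def is_subsequence(p, x):
    """Trivial linear-time test: does p occur as a subsequence of x?"""
    it = iter(x)
    return all(c in it for c in p)

def build_occurrence_lists(x):
    """Preprocess x once: for each symbol, the sorted list of its positions."""
    occ = defaultdict(list)
    for i, c in enumerate(x):
        occ[c].append(i)
    return occ

def is_subsequence_preprocessed(p, occ):
    """Answer a query with at most |p| binary searches (O(|p| log n))."""
    pos = -1
    for c in p:
        lst = occ.get(c, [])
        k = bisect_right(lst, pos)
        if k == len(lst):
            return False
        pos = lst[k]
    return True

text = "subsequence automata are fun"
occ = build_occurrence_lists(text)
print(is_subsequence("seqnt", text), is_subsequence_preprocessed("seqnt", occ))
```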

15.
A set of words X over a finite alphabet A is said to be unavoidable if all but finitely many words in A* have a factor in X. We examine the problem of calculating the cardinality of minimal unavoidable sets of words of uniform length; we correct an error in [8], state a conjecture offering a formula for the minimum size of these so-called n-good sets for all values of n, and show that the conjecture is correct in an infinite number of cases.
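A brute-force way to test unavoidability of a uniform-length set, useful only for small instances, follows from a standard observation (assumed here, not taken from the paper): some infinite word avoids X exactly when the graph on (n−1)-length words, with edges that do not complete a forbidden length-n factor, contains a cycle.

```python
from itertools import product

def is_unavoidable(words, alphabet):
    """Decide whether a set of words of uniform length n is unavoidable.

    Build the graph on (n-1)-length words with an edge u -> (u + c)[1:]
    whenever the length-n word u + c is not forbidden; the set is
    unavoidable iff this graph has no cycle (no infinite word avoids X).
    """
    n = len(next(iter(words)))
    forbidden = set(words)
    vertices = [''.join(t) for t in product(alphabet, repeat=n - 1)]
    edges = {u: [(u + c)[1:] for c in alphabet if u + c not in forbidden]
             for u in vertices}

    # Iterative DFS cycle detection (white/grey/black colouring).
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {u: WHITE for u in vertices}
    for start in vertices:
        if colour[start] != WHITE:
            continue
        stack = [(start, iter(edges[start]))]
        colour[start] = GREY
        while stack:
            u, it = stack[-1]
            v = next(it, None)
            if v is None:
                colour[u] = BLACK
                stack.pop()
            elif colour[v] == GREY:
                return False          # cycle => some infinite word avoids X
            elif colour[v] == WHITE:
                colour[v] = GREY
                stack.append((v, iter(edges[v])))
    return True

# {aa, ab, ba, bb} is trivially unavoidable; {aa, bb} is not (abab... avoids it).
print(is_unavoidable({'aa', 'ab', 'ba', 'bb'}, 'ab'))   # True
print(is_unavoidable({'aa', 'bb'}, 'ab'))               # False
```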

16.
In this paper, we investigate several extensions of the linear time hierarchy (denoted by LTH). We first prove that it is not necessary to erase the oracle tape between two successive oracle calls, thereby lifting a common restriction on LTH machines. We also define a natural counting extension of LTH and show that it corresponds to a robust notion of counting bounded arithmetic predicates. Finally, we show that the computational power of the majority operator is equivalent to that of the exact counting operator in both contexts.

17.
We introduce a domain-theoretic foundation for disjunctive logic programming. This foundation is built on clausal logic, a representation of the Smyth powerdomain of any coherent algebraic dcpo. We establish the completeness of a resolution rule for inference in such a clausal logic; we introduce a natural declarative semantics and a fixed-point semantics for disjunctive logic programs, and prove their equivalence; finally, we apply our results to give both a syntax and semantics for default logic in any coherent algebraic dcpo.

18.
We present an approach to attention in active computer vision. The notion of attention plays an important role in biological vision. In recent years, and especially with the emerging interest in active vision, computer vision researchers have been increasingly concerned with attentional mechanisms as well. The basic principles behind these efforts are greatly influenced by psychophysical research. That is also the case in the work presented here, which adapts the model of Treisman (1985, Comput. Vision Graphics Image Process. Image Understanding 31, 156–177), with an early parallel stage of preattentive cues followed by a later serial stage where the cues are integrated. The contributions of our approach are (i) the incorporation of depth information from stereopsis, (ii) the simple implementation of low-level modules such as disparity and flow by local phase, and (iii) the cue integration during pursuit and saccade modes, which allows proper target selection based on nearness and motion. We demonstrate the technique with experiments in which a moving observer selectively masks out different moving objects in real scenes.
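Point (ii) can be illustrated by the common phase-based disparity scheme (a generic 1D sketch with made-up parameters, not necessarily the filters used in the paper): disparity is read off as the difference of local Gabor phases divided by the filter's tuning frequency.

```python
import numpy as np

def gabor(signal, omega, sigma):
    """Complex Gabor filtering of a 1D scanline (local amplitude and phase)."""
    t = np.arange(-3 * int(sigma), 3 * int(sigma) + 1)
    kernel = np.exp(-t**2 / (2 * sigma**2)) * np.exp(1j * omega * t)
    return np.convolve(signal, kernel, mode='same')

def phase_disparity(left, right, omega=0.25, sigma=8.0):
    """Disparity from local phase difference: d ~ (phi_L - phi_R) / omega.

    Dividing by the filter's tuning frequency omega is the simplest variant;
    it is exact when the local signal frequency matches omega.
    """
    rl, rr = gabor(left, omega, sigma), gabor(right, omega, sigma)
    dphi = np.angle(rl * np.conj(rr))   # wrapped phase difference in (-pi, pi]
    return dphi / omega

# Synthetic scanlines: the right one is the left one shifted by 3 pixels.
x = np.arange(256)
left = np.sin(0.25 * x)
right = np.roll(left, 3)
print(np.median(phase_disparity(left, right)[32:-32]))   # approx. 3.0
```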

19.
Recently, the author introduced a nonprobabilistic mathematical model of discrete channels, the BEE channels, that involve the error types substitution, insertion, and deletion. This paper defines an important class of BEE channels, the SID channels, which include channels that permit a bounded number of scattered errors and, possibly at the same time, a bounded burst of errors in any segment of predefined length of a message. A formal syntax is defined for generating channel expressions, and appropriate semantics is provided for interpreting a given channel expression as a communication channel (SID channel) that permits combinations of substitutions, insertions, and deletions of symbols. Our framework permits one to generalize notions such as error correction and unique decodability, and to express statements of the form “the code K can correct all errors of type ξ” and “it is decidable whether the code K is uniquely decodable for the channel described by ξ”, where ξ is any SID channel expression.
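As a toy instance of such a statement (illustrative only: the paper works with channel expressions, while this sketch hard-codes the simplest case of at most k scattered substitution/insertion/deletion errors), a code corrects the channel when no two distinct codewords can produce the same channel output.

```python
def neighborhood(word, k, alphabet):
    """All strings reachable from `word` with at most k scattered
    substitution, insertion, or deletion errors (exponential; toy sizes only)."""
    if k == 0:
        return {word}
    out = {word}
    for i in range(len(word) + 1):                                            # insertions
        for c in alphabet:
            out |= neighborhood(word[:i] + c + word[i:], k - 1, alphabet)
    for i in range(len(word)):
        out |= neighborhood(word[:i] + word[i + 1:], k - 1, alphabet)         # deletions
        for c in alphabet:
            out |= neighborhood(word[:i] + c + word[i + 1:], k - 1, alphabet) # substitutions
    return out

def corrects(code, k, alphabet):
    """The code corrects the channel iff no two distinct codewords
    can produce the same channel output."""
    words = list(code)
    for i in range(len(words)):
        for j in range(i + 1, len(words)):
            if neighborhood(words[i], k, alphabet) & neighborhood(words[j], k, alphabet):
                return False
    return True

code = {'0000', '1111'}
print(corrects(code, 1, '01'))   # True: one scattered error is always correctable
print(corrects(code, 2, '01'))   # False: two errors can map both codewords to '0011'
```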

20.
While deterministic finite automata seem to be well understood, surprisingly many important problems concerning nondeterministic finite automata (nfa's) remain open. One such problem area is the study of different measures of nondeterminism in finite automata and the estimation of the sizes of minimal nondeterministic finite automata. In this paper the concept of communication complexity is applied in order to achieve progress in this problem area. The main results are as follows:
1. Deterministic communication complexity provides lower bounds on the size of nfa's with bounded unambiguity. Applying this fact, the proofs of several results about nfa's with limited ambiguity can be simplified and presented in a uniform way.
2. There is a family of languages KONk2 with an exponential size gap between nfa's with polynomial leaf number/ambiguity and nfa's with ambiguity k. This partially provides an answer to the open problem posed by B. Ravikumar and O. Ibarra (1989, SIAM J. Comput. 18, 1263–1282) and H. Leung (1998, SIAM J. Comput. 27, 1073–1082).
