Related Articles
20 related articles found.
1.
Recently, the author introduced a nonprobabilistic mathematical model of discrete channels, the BEE channels, which involve the error types substitution, insertion, and deletion. This paper defines an important class of BEE channels, the SID channels, which include channels that permit a bounded number of scattered errors and, possibly at the same time, a bounded burst of errors in any segment of predefined length of a message. A formal syntax is defined for generating channel expressions, and appropriate semantics is provided for interpreting a given channel expression as a communication channel (SID channel) that permits combinations of substitutions, insertions, and deletions of symbols. This framework makes it possible to generalize notions such as error correction and unique decodability, and to express statements of the form “the code K can correct all errors of type ξ” and “it is decidable whether the code K is uniquely decodable for the channel described by ξ”, where ξ is any SID channel expression.
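
As a toy illustration of such a channel (our own Python sketch, not the paper's formalism), the following enumerates every output a scattered-error channel can produce from a word with at most k substitution, insertion, or deletion errors; a code corrects all such errors exactly when the output sets of distinct codewords are disjoint. The function name and the two-letter alphabet are illustrative assumptions.

```python
def sid_outputs(word, k, alphabet="ab"):
    """All strings obtainable from `word` by at most `k` scattered
    substitution, insertion, or deletion errors (a crude stand-in
    for one scattered-error SID channel)."""
    results = set()

    def explore(prefix, rest, budget):
        if not rest:
            results.add(prefix)
            if budget > 0:                      # trailing insertions
                for c in alphabet:
                    explore(prefix + c, "", budget - 1)
            return
        explore(prefix + rest[0], rest[1:], budget)        # no error here
        if budget > 0:
            explore(prefix, rest[1:], budget - 1)          # deletion
            for c in alphabet:
                explore(prefix + c, rest[1:], budget - 1)  # substitution
                explore(prefix + c, rest, budget - 1)      # insertion

    explore("", word, k)
    return results

# K = {aaaa, bbbb} corrects one scattered error: output sets are disjoint...
assert sid_outputs("aaaa", 1) & sid_outputs("bbbb", 1) == set()
# ...but not two: with two errors each, the output sets overlap.
assert sid_outputs("aaaa", 2) & sid_outputs("bbbb", 2)
```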

2.
Specularities on surfaces with tangential hairs or grooves are readily observable in nature. Examples of such phenomena are the arched or looped highlights observed on horses and human heads and the linear or curved specularities observed on parts of industrial machinery that have tangential grooves. We investigate the geometry of curvilinear specularities on surfaces of different curvature with tangential hairs or grooves of different orientations under controlled lighting and viewing conditions. First, the nature of these specularities is investigated qualitatively. Then specularities on parametric surfaces and hair or groove orientations are calculated for some specific cases. Explicit calculations of specularities on some special surfaces (cylinders, cones, and spheres) are verified by photographs of the reflections. Aspects of the work are applicable to computer graphics and can be used for the image interpretation of surface specularities.
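
For intuition, the standard mirror condition for a thin fiber (the Kajiya–Kay fiber shading model; our notation, not necessarily the paper's derivation) locates such curvilinear highlights: with unit tangent T̂, light direction L̂, and view direction V̂, a point is specular when the two directions make equal angles with the hair tangent.

```latex
% Mirror condition on a fiber with unit tangent \hat{T} (Kajiya--Kay
% model; illustrative, not the paper's exact derivation):
\[
  \hat{L}\cdot\hat{T} \;=\; -\,\hat{V}\cdot\hat{T},
\]
% i.e. the angle of incidence with respect to the tangent equals the
% angle of reflection.  Sweeping this condition along the hair field on
% a curved base surface traces out the arched or looped highlight curves.
```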

3.
A contribution to the automatic 3-D reconstruction of complex urban scenes from aerial stereo pairs is proposed. It consists of segmenting the scene into two different kinds of components: the ground and the above-ground objects. The above-ground objects are classified either as buildings or as vegetation. The idea is to define appropriate regions of interest in order to achieve a relevant 3-D reconstruction. For that purpose, a digital elevation model of the scene is first computed and segmented into above-ground regions using a Markov random field model. Then a radiometric analysis is used to classify above-ground regions as building or vegetation, leading to the determination of the final above-ground objects. The originality of the method is its ability to cope with extended above-ground areas, even in the case of a sloping ground surface. This characteristic is necessary in an urban environment. The results are very robust to image and scene variability, and they enable the use of appropriate local 3-D reconstruction algorithms.
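
A minimal sketch of the MRF segmentation step, assuming a simple Potts smoothness prior optimized by ICM; the residual threshold, the assumed 5 m typical object height, and all parameter values are illustrative assumptions, not the authors' model.

```python
import numpy as np

def icm_above_ground(dem, ground, beta=2.0, sigma=1.0, mu_obj=5.0, n_iter=5):
    """ICM for a binary MRF on a DEM: label 0 = ground, 1 = above-ground.
    `ground` is an estimate of the bare-ground surface (so the method
    still behaves on sloping terrain).  Data term: squared residual of
    the height above ground against each hypothesis (0 m vs. an assumed
    object height `mu_obj`); smoothness: Potts penalty `beta` per
    disagreeing 4-neighbor."""
    height = dem - ground
    labels = (height > mu_obj / 2).astype(int)        # crude initialization
    for _ in range(n_iter):
        for i in range(1, dem.shape[0] - 1):
            for j in range(1, dem.shape[1] - 1):
                nbrs = (labels[i - 1, j], labels[i + 1, j],
                        labels[i, j - 1], labels[i, j + 1])
                costs = []
                for lab, mu in ((0, 0.0), (1, mu_obj)):
                    data = (height[i, j] - mu) ** 2 / sigma ** 2
                    smooth = beta * sum(lab != n for n in nbrs)
                    costs.append(data + smooth)
                labels[i, j] = int(np.argmin(costs))
    return labels
```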

4.
This paper describes a new self-calibration method for a single camera undergoing general motions. Its main contributions are the following. First, we establish new constraints that relate the intrinsic parameters of the camera to the rotational part of the motions; the derivation is purely algebraic. We propose an algorithm that simultaneously solves for the camera calibration and the rotational part of the motions. Second, we provide a comparison between the developed method and a Kruppa equation-based method. Extensive experiments on both synthetic and real image data show the reliability and superior performance of the proposed method. The practical contribution of the method is its favorable convergence behavior compared with that of the Kruppa equations method.
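
For intuition, the classical algebraic constraint linking the intrinsics to a rotation has the following flavor (a standard formulation, not necessarily the paper's new constraints): a homography H between two views induced by a pure rotation R satisfies H = K R K⁻¹, and the orthogonality of R yields linear constraints on KKᵀ.

```latex
% With H = K R K^{-1} and R R^{\mathsf T} = I, the symmetric matrix
% K K^{\mathsf T} obeys
\[
  H \,(K K^{\mathsf T})\, H^{\mathsf T} \;=\; K K^{\mathsf T}
  \qquad\text{(up to scale)},
\]
% a set of linear equations in the entries of K K^{\mathsf T}; the
% intrinsics K are then recovered from K K^{\mathsf T} by Cholesky
% factorization.
```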

5.
We propose a sculpture metaphor based on a multiresolution volumetric representation. It allows the user to model both precise and coarse features while maintaining interactive update and display rates. The modelled surface is an iso-surface of a scalar field, which is sampled on an adaptive hierarchical grid that dynamically subdivides or merges its cells. Field modifications are transparent to the user, who has the impression of directly interacting with the surface via a tool that either adds or removes “material.” Meanwhile, the tool modifies the scalar field around the surface, its size and shape automatically guiding the underlying grid subdivision. In order to give interactive feedback whatever the tool's size, tools are applied in an adaptive way, the grid always being updated from coarse to fine levels. This maintains interactive rates even for large tool sizes. It also enables the user to apply a tool continuously, with immediate coarse-scale feedback of the accumulated actions. A dynamic level-of-detail (LOD) mechanism ensures that the iso-surface is displayed at interactive rates regardless of the zoom value; surface elements, generated and stored at each level of resolution, are displayed depending on their size on the screen. The system may switch to a coarser surface display during user actions, thus always ensuring interactive visual feedback. Two applications illustrate the use of this system: first, complex shapes with both coarse and fine features can be sculpted from scratch; second, we show that the system can be used to edit models that have been converted from a mesh representation.

6.
This paper describes the theory and algorithms of the distance transform for fuzzy subsets, called the fuzzy distance transform (FDT). The notion of fuzzy distance is formulated by first defining the length of a path on a fuzzy subset and then taking the infimum of the lengths of all paths between two points. The length of a path π in a fuzzy subset of the n-dimensional continuous space ℝ^n is defined as the integral of the fuzzy membership values along π. Generally, there are infinitely many paths between any two points in a fuzzy subset, and it is shown that a shortest one may not exist. The fuzzy distance between two points is defined as the infimum of the lengths of all paths between them. It is demonstrated that, unlike in hard convex sets, the shortest path (when it exists) between two points in a fuzzy convex subset is not necessarily a straight line segment. For any positive number θ ≤ 1, the θ-support of a fuzzy subset is the set of all points in ℝ^n with membership values greater than or equal to θ. It is shown that, for any fuzzy subset and any nonzero θ ≤ 1, fuzzy distance is a metric for the interior of its θ-support. It is also shown that, for any smooth fuzzy subset, fuzzy distance is a metric for the interior of its 0-support (referred to as the support). The FDT is defined as a process on a fuzzy subset that assigns to each point its fuzzy distance from the complement of the support. The theoretical framework of the FDT in continuous space is extended to digital cubic spaces, and it is shown that for any fuzzy digital object, fuzzy distance is a metric for the support of the object. A dynamic programming-based algorithm is presented for computing the FDT of a fuzzy digital object. It is shown that the algorithm terminates in a finite number of steps and, when it does so, correctly computes the FDT. Several potential applications of the fuzzy distance transform in medical imaging are presented, among them the quantification of blood vessels and of trabecular bone thickness in the regime of limited spatial resolution, where these objects become fuzzy.
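
A compact sketch of the FDT on a 2-D digital grid, using Dijkstra-style propagation in place of the paper's dynamic-programming scheme (the step cost below, a trapezoidal average of memberships times step length, is the usual discretization of the path-length integral):

```python
import heapq
import numpy as np

def fuzzy_distance_transform(mu):
    """Fuzzy distance transform of a 2-D membership image `mu` (values
    in [0, 1]).  The length of a step p -> q is approximated as
    0.5 * (mu[p] + mu[q]) * |p - q|; the FDT value of a point is its
    shortest fuzzy path length to the complement of the support
    {mu > 0}.  Dijkstra propagation stands in for the paper's
    dynamic-programming algorithm but computes the same function."""
    rows, cols = mu.shape
    dist = np.full(mu.shape, np.inf)
    heap = []
    for i in range(rows):
        for j in range(cols):
            if mu[i, j] == 0:          # complement of the support
                dist[i, j] = 0.0
                heapq.heappush(heap, (0.0, i, j))
    r2 = 2 ** 0.5
    steps = [(-1, 0, 1.0), (1, 0, 1.0), (0, -1, 1.0), (0, 1, 1.0),
             (-1, -1, r2), (-1, 1, r2), (1, -1, r2), (1, 1, r2)]
    while heap:
        d, i, j = heapq.heappop(heap)
        if d > dist[i, j]:
            continue                   # stale entry
        for di, dj, w in steps:
            ni, nj = i + di, j + dj
            if 0 <= ni < rows and 0 <= nj < cols:
                nd = d + 0.5 * (mu[i, j] + mu[ni, nj]) * w
                if nd < dist[ni, nj]:
                    dist[ni, nj] = nd
                    heapq.heappush(heap, (nd, ni, nj))
    return dist
```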

7.
This paper proposes a new method for reducing the number of gray-levels in an image. The proposed approach achieves gray-level reduction using both the image gray-levels and additional local spatial features. Both the gray-level and the local feature values feed a self-organizing feature map (SOFM) neural classifier. After training, the neurons of the output competition layer of the SOFM define the gray-level classes. The final image retains not only the dominant image gray-levels but also a texture that approximates the local characteristics used. To split the initial classes further, the proposed technique can be used in an adaptive mode. To speed up the entire multithresholding algorithm and reduce memory requirements, a fractal scanning subsampling technique is adopted. The method is applicable to any type of gray-level image and can easily be modified to accommodate any type of spatial characteristic. Several experimental and comparative results demonstrating the performance of the proposed technique are presented.
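
A minimal sketch of the idea, assuming a 1-D SOFM trained on (gray-level, 3×3 local mean) feature vectors; random subsampling stands in for the paper's fractal scanning, and all parameter values are illustrative:

```python
import numpy as np

def som_gray_reduction(img, n_classes=8, epochs=5, lr0=0.5):
    """Gray-level reduction with a 1-D self-organizing feature map.
    Each pixel is described by its gray-level plus a local spatial
    feature (a 3x3 mean); the SOFM is trained on these 2-D vectors and
    every pixel is mapped to the gray-level of its winning neuron."""
    pad = np.pad(img.astype(float), 1, mode="edge")
    local = np.stack([pad[i:i + img.shape[0], j:j + img.shape[1]]
                      for i in range(3) for j in range(3)]).mean(axis=0)
    feats = np.stack([img.ravel().astype(float), local.ravel()], axis=1)
    rng = np.random.default_rng(0)
    weights = feats[rng.choice(len(feats), n_classes, replace=False)].copy()
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)                 # decaying rate
        sigma = max(n_classes / 2 * (1 - epoch / epochs), 1.0)
        for x in feats[rng.permutation(len(feats))[:5000]]:  # subsample
            winner = np.argmin(((weights - x) ** 2).sum(axis=1))
            h = np.exp(-((np.arange(n_classes) - winner) ** 2)
                       / (2 * sigma ** 2))              # 1-D neighborhood
            weights += lr * h[:, None] * (x - weights)
    classes = np.argmin(((feats[:, None, :] - weights[None]) ** 2).sum(-1),
                        axis=1)
    return weights[classes, 0].reshape(img.shape)  # gray-level per class
```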

8.
This paper describes the mathematical basis and application of a probabilistic model for recovering the direction of camera translation (heading) from optical flow. By the theorem that the heading cannot lie between two converging points in a stationary environment, one can compute the posterior probability distribution of the heading across the image and choose the heading with the maximum a posteriori (MAP) probability. The model requires very simple computation, provides a confidence level for its judgments, applies to both linear and curved trajectories, functions in the presence of camera rotations, and exhibits high accuracy, up to 0.1°–0.2°, in random-dot simulations.
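
The MAP computation has the following generic Bayesian shape (notation ours; the paper's likelihood is built specifically on the converging-point theorem):

```latex
% For a candidate heading h and observed flow vectors v_1, ..., v_n:
\[
  p(h \mid v_1,\dots,v_n) \;\propto\; p(h)\prod_{i=1}^{n} p(v_i \mid h),
  \qquad
  \hat{h} \;=\; \operatorname*{arg\,max}_{h}\; p(h \mid v_1,\dots,v_n),
\]
% where p(v_i | h) vanishes for headings lying between two converging
% points, since the theorem rules those headings out.
```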

9.
We present an approach to attention in active computer vision. The notion of attention plays an important role in biological vision. In recent years, and especially with the emerging interest in active vision, computer vision researchers have become increasingly concerned with attentional mechanisms as well. The basic principles behind these efforts are strongly influenced by psychophysical research. That is also the case in the work presented here, which adopts the model of Treisman (1985, Comput. Vision Graphics Image Process. 31, 156–177), with an early parallel stage computing preattentive cues followed by a later serial stage where the cues are integrated. The contributions of our approach are (i) the incorporation of depth information from stereopsis, (ii) the simple implementation of low-level modules such as disparity and flow by local phase, and (iii) cue integration in pursuit and saccade modes, which allows proper target selection based on nearness and motion. We demonstrate the technique in experiments in which a moving observer selectively masks out different moving objects in real scenes.
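
The phase-based disparity underlying module (ii) is the standard local-phase estimate (notation ours):

```latex
% \phi_L, \phi_R are local phases measured by a bandpass (e.g. Gabor)
% filter with tuning frequency \omega at corresponding image points:
\[
  d(x) \;\approx\; \frac{\phi_L(x) - \phi_R(x)}{\omega},
\]
% i.e. the phase difference converted into a spatial shift; image flow
% is obtained analogously from temporal phase differences.
```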

10.
We present a simple algorithm for the Euclidean distance transform of a binary image that runs more efficiently than other algorithms in the literature. We show that our algorithm runs in optimal time for many architectures and has optimal cost for the RAM and EREW PRAM.
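
The abstract does not spell its algorithm out, so for reference here is a standard linear-time separable EDT (the Felzenszwalb–Huttenlocher lower-envelope method), not necessarily the paper's algorithm:

```python
import numpy as np

def dt1d(f):
    """1-D squared-distance transform: lower envelope of the parabolas
    (q - i)^2 + f[i], computed in O(n) per scanline."""
    n = len(f)
    d = np.empty(n)
    v = np.zeros(n, dtype=int)      # sites of envelope parabolas
    z = np.empty(n + 1)             # envelope breakpoints
    k = 0
    z[0], z[1] = -np.inf, np.inf
    for q in range(1, n):
        s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k])
        while s <= z[k]:
            k -= 1
            s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k])
        k += 1
        v[k], z[k], z[k + 1] = q, s, np.inf
    k = 0
    for q in range(n):
        while z[k + 1] < q:
            k += 1
        d[q] = (q - v[k]) ** 2 + f[v[k]]
    return d

def edt(binary):
    """Exact Euclidean distance of each pixel to the nearest foreground
    (True) pixel.  Separable: one 1-D pass over columns, then rows."""
    big = float(sum(binary.shape)) ** 2      # exceeds any squared distance
    f = np.where(binary, 0.0, big)
    g = np.apply_along_axis(dt1d, 0, f)      # columns
    return np.sqrt(np.apply_along_axis(dt1d, 1, g))  # rows
```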

11.
This article proposes a method for tracking human limbs in multiocular sequences of perspective images. The limbs and the associated articulations must first be modelled. During the learning stage, we model the texture linked to the limbs. The lack of characteristic points on the skin is compensated for by having the subject wear tights with a nonrepetitive texture. The principle of the method is the interpretation of textured image patterns as the 3D perspective projections of points of the textured articulated model. An iterative Levenberg–Marquardt process is used to compute the model pose in accordance with the analyzed image. The computed attitude is filtered (Kalman filter) to predict the model pose in the following image of the sequence. The image patterns are extracted locally according to the textured articulated model in the predicted attitude. Tracking experiments, illustrated in this paper by cycling sequences, demonstrate the validity of the approach.
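
A sketch of the prediction step that seeds the next Levenberg–Marquardt optimization, assuming a constant-velocity Kalman model on a single pose parameter (the paper does not specify its state layout; this is an illustrative assumption):

```python
import numpy as np

def kalman_predict(x, P, dt=1.0, q=1e-3):
    """Kalman prediction for one joint angle under a constant-velocity
    model.  State x = [angle, angular velocity]; P is the 2x2 state
    covariance; q scales the white-noise-acceleration process noise."""
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])                  # constant-velocity transition
    Q = q * np.array([[dt ** 3 / 3, dt ** 2 / 2],
                      [dt ** 2 / 2, dt]])       # white-noise acceleration
    x_pred = F @ x                              # predicted pose parameter
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred                       # initializes the next LM fit
```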

12.
A term rewriting system is called growing if each variable occurring on both the left-hand side and the right-hand side of a rewrite rule occurs at depth zero or one in the left-hand side. Jacquemard showed that reachability and sequentiality are decidable for linear (i.e., left- and right-linear) growing term rewriting systems. In this paper we show that Jacquemard's result can be extended to left-linear growing rewriting systems that may have right-nonlinear rewrite rules. This implies that reachability and joinability are decidable for a class of right-linear term rewriting systems, which improves the results of Oyamaguchi for right-ground term rewriting systems. Our result extends the class of left-linear term rewriting systems having a decidable call-by-need normalizing strategy. Moreover, we prove that the termination property is decidable for almost orthogonal growing term rewriting systems.
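
The growing condition is easy to check mechanically; the following sketch (our own encoding of terms as nested tuples) tests it for a single rule:

```python
def is_growing(lhs, rhs):
    """A rule l -> r is growing if every variable shared by l and r
    occurs only at depth 0 or 1 in l.  Terms are nested tuples
    ('f', t1, ..., tk); variables are plain strings."""
    def vars_with_depths(t, d=0):
        if isinstance(t, str):
            yield t, d
        else:
            for s in t[1:]:
                yield from vars_with_depths(s, d + 1)

    rhs_vars = {v for v, _ in vars_with_depths(rhs)}
    return all(d <= 1 for v, d in vars_with_depths(lhs) if v in rhs_vars)

# f(x, g(y)) -> h(x)  is growing: the shared variable x sits at depth 1.
assert is_growing(('f', 'x', ('g', 'y')), ('h', 'x'))
# f(g(x)) -> h(x)  is not: x sits at depth 2 in the left-hand side.
assert not is_growing(('f', ('g', 'x')), ('h', 'x'))
```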

13.
The role of perceptual organization in motion analysis has heretofore been minimal. In this work we present a simple but powerful computational model, and associated algorithms, based on perceptual organization principles such as temporal coherence (common fate) and spatial proximity for motion segmentation. The computational model does not use traditional frame-by-frame motion analysis; rather, it treats an image sequence as a single 3D spatio-temporal volume. It endeavors to find organizations in this volume of data over three levels: signal, primitive, and structural. The signal level is concerned with detecting individual image pixels that are probably part of a moving object. The primitive level groups these individual pixels into planar patches, which we call temporal envelopes. Compositions of these temporal envelopes describe the spatio-temporal surfaces that result from object motion. At the structural level, we detect these compositions of temporal envelopes by exploiting the structure and organization among them. The algorithms employed to realize the computational model include 3D edge detection, the Hough transform, and graph-based methods that group the temporal envelopes according to Gestalt principles. The significance of the Gestalt relationships between any two temporal envelopes is expressed in probabilistic terms. One attractive feature of the adopted algorithm is that it requires neither the detection of special 2D features nor the tracking of such features across frames. We demonstrate that even with simple grouping strategies we can handle drastic illumination changes, occlusion events, and multiple moving objects, without the use of training or specific object or illumination models. We present results on a large variety of motion sequences to demonstrate this robustness.

14.
Given two strings X = a_1 … a_n and P = b_1 … b_m over an alphabet Σ, the problem of testing whether P occurs as a subsequence of X is trivially solved in linear time. It is also known that a simple O(n log |Σ|)-time preprocessing of X makes it easy to decide subsequently, for any P and in at most |P| log |Σ| character comparisons, whether P is a subsequence of X. These problems become more complicated if one asks instead whether P occurs as a subsequence of some substring Y of X of bounded length. This paper presents an automaton built on the textstring X and capable of identifying all distinct minimal substrings Y of X having P as a subsequence. A substring Y is minimal with respect to P in the sense that P is not a subsequence of any proper substring of Y. For every minimal substring Y, the automaton recognizes the occurrence of P having the lexicographically smallest sequence of symbol positions in Y. It is not difficult to realize such an automaton in time and space O(n^2) for a text of n characters. One result of this paper consists of bringing those bounds down to linear or O(n log n), respectively, depending on whether the alphabet is bounded or of arbitrary size, thereby matching the corresponding complexities of automata constructions for offline exact string searching. Having built the automaton, the search for all lexicographically earliest occurrences of P in X is carried out in time O(Σ_{i=1..m} rocc_i · i) or O(n + Σ_{i=1..m} rocc_i · i · log n), depending on whether the alphabet is fixed or arbitrary, where rocc_i is the number of distinct minimal substrings of X having b_1 … b_i as a subsequence (note that each such substring may occur many times in X but is counted only once in the bound). All log factors appearing in the above bounds can be further reduced to log log by resorting to known integer-handling data structures.
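
The set of minimal substrings can be defined operationally by a simple O(nm) two-pointer scan (shown below for concreteness; the paper's contribution is the much faster automaton recognizing the same windows):

```python
def minimal_windows(x, p):
    """Return (start, end) for every minimal substring Y = x[start:end]
    containing p as a subsequence (no proper substring of Y does).
    Greedy forward scan finds the lexicographically earliest match end;
    a backward scan then shrinks the window maximally."""
    windows = []
    n, m = len(x), len(p)
    i = 0
    while i < n:
        j, k = i, 0                     # forward: earliest end of a match
        while j < n and k < m:
            if x[j] == p[k]:
                k += 1
            j += 1
        if k < m:
            break                       # no further occurrence of p
        end = j                         # x[end-1] matched p[m-1]
        k, j = m - 1, end - 1           # backward: latest possible start
        while j >= 0:
            if x[j] == p[k]:
                k -= 1
                if k < 0:
                    break
            j -= 1
        start = j
        windows.append((start, end))
        i = start + 1                   # next window must start later
    return windows

assert minimal_windows("abcab", "ab") == [(0, 2), (3, 5)]
```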

15.
This paper describes a technique to animate three-dimensional sampled volumes. The technique gives the animator the ability to treat volumes as if they were standard polygonal models and to use all of the standard animation/motion capture tools on volumetric data. A volumetric skeleton is computed from a volumetric model using a multi-resolution thinning procedure. The volumetric skeleton is centered in the object and accurately represents the shape of the object. The thinning process is reversible in that the volumetric model can be reconstructed from the volumetric skeleton. The volumetric skeleton is then connected and imported into a standard graphics animation package for animation. The animated skeleton is used for reconstruction, which essentially recreates a deformed volume around the deformed skeleton. Polygons are never computed and the entire process remains in the volumetric domain. This technique is demonstrated on one of the most complex 3D datasets, the Visible Male, resulting in actual “human animation”.

16.
Voxelization is the transformation of geometric surfaces into voxels. To date this process has been done essentially using incremental algorithms. Incremental algorithms have the reputation of being efficient, but they lack an important property: robustness. The voxelized representation should envelop its continuous model; without robust methods, however, this cannot be guaranteed. This article describes novel techniques for robust voxelization and visualization of implicit surfaces. First, our recursive subdivision voxelization algorithm is reviewed; this algorithm was initially inspired by Duff's image-space subdivision method. Then we explain the algorithm for voxelizing implicit surfaces defined in spherical or cylindrical coordinates. Next, we show a new technique to produce infinite replications of implicit objects and their voxelization method. Afterward, we comment on the parallelization of our voxelization procedure. Finally, we present our voxel visualization algorithm based on point display. Our voxelization algorithms can be used with any data structure, since a voxel is only stored once the last subdivision level is reached. We emphasize the use of the octree, though, because it is a convenient way to store the discrete model hierarchically. Within such a hierarchy, refinement of the discrete model is simple and possible from any previously voxelized scene because the voxelization algorithms are robust.
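
A minimal sketch of recursive-subdivision voxelization with a conservative inclusion test, here interval arithmetic on an implicit sphere (our example; the article's own inclusion tests may differ). A box is discarded only when the interval bound proves the surface cannot pass through it, which is what makes the resulting voxel set envelop the continuous model:

```python
def ival_sq(lo, hi):
    """Interval square of [lo, hi]."""
    if lo >= 0:
        return lo * lo, hi * hi
    if hi <= 0:
        return hi * hi, lo * lo
    return 0.0, max(lo * lo, hi * hi)

def split(box):
    """Split a box ((x0,x1),(y0,y1),(z0,z1)) into its eight octants."""
    (x0, x1), (y0, y1), (z0, z1) = box
    xm, ym, zm = (x0 + x1) / 2, (y0 + y1) / 2, (z0 + z1) / 2
    return [((a, b), (c, d), (e, f))
            for a, b in ((x0, xm), (xm, x1))
            for c, d in ((y0, ym), (ym, y1))
            for e, f in ((z0, zm), (zm, z1))]

def voxelize(f_interval, box, depth, out):
    """Keep a box only if the interval bound on f contains zero, so the
    voxel set provably envelops the surface f = 0 (robustness)."""
    lo, hi = f_interval(box)
    if lo > 0 or hi < 0:
        return                          # box provably misses the surface
    if depth == 0:
        out.append(box)                 # voxel stored at the finest level
        return
    for octant in split(box):
        voxelize(f_interval, octant, depth - 1, out)

def sphere(box, r=1.0):
    """Interval extension of f(x,y,z) = x^2 + y^2 + z^2 - r^2."""
    sq = [ival_sq(*iv) for iv in box]
    return sum(s[0] for s in sq) - r * r, sum(s[1] for s in sq) - r * r

voxels = []
voxelize(sphere, ((-1.5, 1.5),) * 3, depth=5, out=voxels)
```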

17.
The exponential output size problem is to determine whether the size of output trees of a tree transducer grows exponentially in the size of input trees. In this paper the complexity of this problem is studied. It is shown to be NL-complete for total top-down tree transducers, DEXPTIME-complete for general top-down tree transducers, and P-complete for bottom-up tree transducers.
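
A two-rule top-down transducer already exhibits the phenomenon: the copying rule q(f(x)) → g(q(x), q(x)) turns a chain of n symbols f into a full binary tree with 2^n leaves. A small demonstration (our own, in Python):

```python
def apply_q(t):
    """Top-down transducer with rules  q(f(x)) -> g(q(x), q(x))  and
    q(a) -> a.  Each input node is copied twice, so the output tree has
    2^n leaves for an input chain of height n: exponential output size,
    the property whose detection the paper classifies."""
    if t == 'a':
        return 'a'
    assert t[0] == 'f'
    sub = apply_q(t[1])
    return ('g', sub, sub)

def size(t):
    """Number of nodes in a tree encoded as nested tuples."""
    return 1 if t == 'a' else 1 + sum(size(s) for s in t[1:])

t = 'a'
for _ in range(10):                 # input chain f(f(...f(a)...)) of height 10
    t = ('f', t)
print(size(t), size(apply_q(t)))    # 11 input nodes vs 2^11 - 1 output nodes
```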

18.
We work with an extension of Resolution, called Res(2), that allows clauses with conjunctions of two literals. In this system there are rules to introduce and eliminate such conjunctions. We prove that the weak pigeonhole principle PHP_n^{cn} and random unsatisfiable CNF formulas require exponential-size proofs in this system. This is the strongest system beyond Resolution for which such lower bounds are known. As a consequence of the result about the weak pigeonhole principle, Res(log) is exponentially more powerful than Res(2). We also prove that Resolution cannot polynomially simulate Res(2) and that Res(2) does not have feasible monotone interpolation, solving an open problem posed by Krajíček.

19.
An atomic representation of a Herbrand model (ARM) is a finite set of (not necessarily ground) atoms over a given Herbrand universe. Each ARM represents a possibly infinite Herbrand interpretation. This concept has emerged independently in different branches of computer science as a natural and useful generalization of the concept of a finite Herbrand interpretation. It has been shown that several recursively decidable problems on finite Herbrand models (or interpretations) remain decidable on ARMs. The following problems are essential when working with ARMs: deciding the equivalence of two ARMs, deciding subsumption between ARMs, and evaluating clauses over ARMs. These problems were shown to be decidable, but their computational complexity has remained obscure so far; the previously published decision algorithms require exponential space. In this paper, we prove that all of the mentioned problems are coNP-complete.
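
Membership in the represented interpretation reduces to one-way matching: a ground atom belongs to the model iff it is an instance of some ARM atom. A small sketch (our own term encoding; variables are capitalized strings):

```python
def match(pattern, ground, subst=None):
    """One-way matching of a ground atom against an ARM atom.  Terms are
    nested tuples ('f', t1, ..., tk); variables are strings starting
    with an uppercase letter, constants are lowercase strings."""
    subst = {} if subst is None else subst
    if isinstance(pattern, str) and pattern[0].isupper():   # variable
        if pattern in subst:
            return subst if subst[pattern] == ground else None
        subst[pattern] = ground
        return subst
    if isinstance(pattern, str) or isinstance(ground, str):
        return subst if pattern == ground else None         # constants
    if pattern[0] != ground[0] or len(pattern) != len(ground):
        return None
    for p, g in zip(pattern[1:], ground[1:]):
        subst = match(p, g, subst)
        if subst is None:
            return None
    return subst

def in_model(arm, ground_atom):
    """True iff the ground atom lies in the (possibly infinite)
    Herbrand interpretation represented by the ARM."""
    return any(match(a, ground_atom) is not None for a in arm)

# The ARM { p(f(X)) } represents the infinite set { p(f(t)) : t ground }.
arm = [('p', ('f', 'X'))]
assert in_model(arm, ('p', ('f', ('f', 'a'))))
assert not in_model(arm, ('p', 'a'))
```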

20.
While deterministic finite automata seem to be well understood, surprisingly many important problems concerning nondeterministic finite automata (nfa's) remain open. One such problem area is the study of different measures of nondeterminism in finite automata and the estimation of the sizes of minimal nondeterministic finite automata. In this paper the concept of communication complexity is applied in order to achieve progress in this problem area. The main results are as follows:
1. Deterministic communication complexity provides lower bounds on the size of nfa's with bounded unambiguity. Applying this fact, the proofs of several results about nfa's with limited ambiguity can be simplified and presented in a uniform way.
2. There is a family of languages KONk2 with an exponential size gap between nfa's with polynomial leaf number/ambiguity and nfa's with ambiguity k. This partially answers the open problem posed by B. Ravikumar and O. Ibarra (1989, SIAM J. Comput. 18, 1263–1282) and H. Leung (1998, SIAM J. Comput. 27, 1073–1082).
