Similar Documents
20 similar documents found (search time: 31 ms)
1.
We present here a new randomized algorithm for repairing the topology of objects represented by 3D binary digital images. By “repairing the topology”, we mean a systematic way of modifying a given binary image in order to produce a similar binary image which is guaranteed to be well-composed. A 3D binary digital image is said to be well-composed if, and only if, the square faces shared by background and foreground voxels form a 2D manifold. Well-composed images enjoy some special properties which can make such images very desirable in practical applications. For instance, well-known algorithms for extracting surfaces from and thinning binary images can be simplified and optimized for speed if the input image is assumed to be well-composed. Furthermore, some algorithms for computing surface curvature and extracting adaptive triangulated surfaces, directly from the binary data, can only be applied to well-composed images. Finally, we introduce an extension of the aforementioned algorithm to repairing 3D digital multivalued images. Such an algorithm finds application in repairing segmented images resulting from multi-object segmentations of other 3D digital multivalued images.
James Gee
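Latecki's characterization makes the manifold condition above checkable locally: a 3D binary image is well-composed exactly when two kinds of critical configurations never occur, a 2×2 checkerboard square and a 2×2×2 cube whose only two voxels of one colour sit on a main diagonal. The sketch below is a plain NumPy check of that characterization under this reading; it is not the paper's randomized repair algorithm, and all function names are illustrative.

```python
# Minimal well-composedness check via critical configurations (assumes every
# image dimension is at least 2); not the repair algorithm of the paper.
import numpy as np

def has_critical_square(img):
    """True if some axis-aligned 2x2 square is a diagonal checkerboard (C1)."""
    for axis in range(3):
        a = np.moveaxis(img, axis, 0)          # slices perpendicular to `axis`
        p, q = a[:, :-1, :-1], a[:, :-1, 1:]
        r, s = a[:, 1:, :-1], a[:, 1:, 1:]
        if ((p == s) & (q == r) & (p != q)).any():   # the two diagonals disagree
            return True
    return False

def has_critical_cube(img):
    """True if some 2x2x2 cube has exactly two voxels of one colour, and they
    lie on a main diagonal while all other voxels have the other colour (C2)."""
    w = np.lib.stride_tricks.sliding_window_view(img, (2, 2, 2)).reshape(-1, 8)
    diagonals = {(0, 7), (1, 6), (2, 5), (3, 4)}     # antipodal corner pairs
    for cube in w[(w.sum(axis=1) == 2) | (w.sum(axis=1) == 6)]:
        rare = 1 if cube.sum() == 2 else 0           # the minority colour
        if tuple(np.flatnonzero(cube == rare)) in diagonals:
            return True
    return False

def is_well_composed(img):
    img = np.asarray(img, dtype=np.uint8)
    return not (has_critical_square(img) or has_critical_cube(img))
```

A repair procedure in the spirit of the abstract would flip voxels until is_well_composed returns True; which voxels to flip is exactly where the paper's randomization comes in.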

2.
Fuzzy set theory constitutes a powerful representation framework that can lead to more robustness in problems such as image segmentation and recognition. This robustness results to some extent from the partial recovery of the continuity that is lost during digitization. In this paper we deal with connectivity measures on fuzzy sets. We show that usual fuzzy connectivity definitions have some drawbacks, and we propose a new definition that exhibits better properties, in particular in terms of continuity. This definition leads to a nested family of hyperconnections associated with a tolerance parameter. We show that corresponding connected components can be efficiently extracted using simple operations on a max-tree representation. Then we define attribute openings based on crisp or fuzzy criteria. We illustrate a potential use of these filters in a brain segmentation and recognition process.
Isabelle Bloch
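As background for the connectivity measures discussed above, the sketch below computes the classical max-min degree of connectedness on a 2D membership map — the "usual" definition whose drawbacks the paper analyses — via a widest-path (bottleneck) search on the 4-connected grid. Function and variable names are illustrative only.

```python
# Classical degree of connectedness c(x, y) = max over paths of the minimum
# membership along the path, computed as a bottleneck shortest-path search.
import heapq
import numpy as np

def degree_of_connectedness(mu, src, dst):
    """mu: 2D array of membership values in [0, 1]; src, dst: (row, col)."""
    h, w = mu.shape
    best = np.full((h, w), -1.0)
    best[src] = mu[src]
    heap = [(-float(mu[src]), src)]              # max-heap via negated keys
    while heap:
        neg_b, (r, c) = heapq.heappop(heap)
        b = -neg_b
        if (r, c) == dst:
            return b
        if b < best[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                cand = min(b, float(mu[nr, nc]))  # bottleneck along the path
                if cand > best[nr, nc]:
                    best[nr, nc] = cand
                    heapq.heappush(heap, (-cand, (nr, nc)))
    return 0.0

mu = np.array([[1.0, 0.2, 0.9],
               [0.8, 0.3, 0.7]])
print(degree_of_connectedness(mu, (0, 0), (0, 2)))   # -> 0.3
```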

3.
Due to digitization, usual discrete signals generally present topological paradoxes, such as the connectivity paradoxes of Rosenfeld. To get rid of those paradoxes, and to restore some topological properties to the objects contained in the image, like manifoldness, Latecki proposed a new class of images, called well-composed images, with no topological issues. Furthermore, well-composed images have some other interesting properties: for example, the Euler number is locally computable, boundaries of objects separate background from foreground, and the tree of shapes is well defined. Last but not least, some recent works in mathematical morphology have shown that very nice practical results can be obtained thanks to well-composed images. Believing in its prime importance in digital topology, we propose this state of the art of well-composedness, summarizing its different flavors, the different methods existing to produce well-composed signals, and the various topics that are related to well-composedness.

4.
Partial Partitions, Partial Connections and Connective Segmentation
In connective segmentation (Serra in J. Math. Imaging Vis. 24(1):83–130, [2006]), each image determines subsets of the space on which it is “homogeneous”, in such a way that this family of subsets always constitutes a connection (connectivity class); then the segmentation of the image is the partition of space into its connected components according to that connection. Several concrete examples of connective segmentations, or of connections on sets, indicate that the space covering requirement of the partition should be relaxed. Furthermore, morphological operations on partitions require the consideration of a wider framework. We study thus partial partitions (families of mutually disjoint non-void subsets of the space) and partial connections (where connected components of a set are mutually disjoint but do not necessarily cover the set). We describe some methods for generating partial connections. We investigate the links between the two lattices of partial connections and of partial partitions. We generalize Serra’s characterization of connective segmentation and discuss its relevance. Finally, we give some ideas on how the theory of partial connections could lead to improved segmentation algorithms.
Christian Ronse
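For readers unfamiliar with the terminology, the two central notions can be restated as follows; the notation is mine and is only a hedged paraphrase of the definitions summarized in the abstract.

```latex
Let $E$ be the space and $\mathcal{P}(E)$ its power set.

A \emph{partial partition} of $E$ is a family
$\pi \subseteq \mathcal{P}(E)\setminus\{\emptyset\}$ of mutually disjoint
non-void subsets of $E$; it is a partition when, in addition,
$\bigcup_{B \in \pi} B = E$.

A \emph{partial connection} on $E$ is a family
$\mathcal{C} \subseteq \mathcal{P}(E)$ with $\emptyset \in \mathcal{C}$ such
that, for every $\mathcal{B} \subseteq \mathcal{C}$ with
$\bigcap_{B \in \mathcal{B}} B \neq \emptyset$, one has
$\bigcup_{B \in \mathcal{B}} B \in \mathcal{C}$.
It is a connection (connectivity class) when, in addition,
$\{x\} \in \mathcal{C}$ for every $x \in E$.
The connected components of a set $A$ are the maximal non-void elements of
$\mathcal{C}$ included in $A$: they are mutually disjoint, and they cover $A$
exactly when $\mathcal{C}$ is a connection.
```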

5.
In this paper we generalize the concept of digital topology to arbitrary dimension n, in the context of (2n, 3^n−1)-adjacency. We define an n-digital image as a tuple (ℤ^n, α, H), where H is a finite subset of ℤ^n and α represents the adjacency relation in the whole lattice in a specific way. We give a natural and simple construction of a polyhedral representation of H based on cubical-complex decomposition. We develop general properties which provide a link between connectivity in digital and Euclidean space. This enables us to use methods of continuous topology in studying properties related to the connectivity, adjacency graph, and borders connectivity in n-digital images.
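The two adjacency relations named above are easy to make concrete; the helper functions below are purely illustrative and, in dimension 3, reduce to the familiar 6- and 26-neighbourhoods.

```python
# The 2n-adjacency (neighbours differing by +-1 in exactly one coordinate) and
# the (3^n - 1)-adjacency (the surrounding 3 x ... x 3 block minus the point).
from itertools import product

def neighbours_2n(p):
    p = tuple(p)
    out = []
    for i in range(len(p)):
        for d in (-1, 1):
            q = list(p)
            q[i] += d
            out.append(tuple(q))
    return out                      # exactly 2n points

def neighbours_3n_minus_1(p):
    p = tuple(p)
    return [tuple(c + d for c, d in zip(p, delta))
            for delta in product((-1, 0, 1), repeat=len(p))
            if any(delta)]          # exactly 3**n - 1 points

# in dimension 3 these are the 6- and 26-neighbourhoods:
assert len(neighbours_2n((0, 0, 0))) == 6
assert len(neighbours_3n_minus_1((0, 0, 0))) == 26
```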

6.
In this paper the general concept of a migration process (MP) is introduced; it involves iterative displacement of each point in a set as a function of a neighborhood of the point, and is applicable to arbitrary sets with arbitrary topologies. After a brief analysis of this relatively general class of iterative processes and of constraints on such processes, we restrict our attention to processes in which each point in a set is iteratively displaced to the average (centroid) of its equigeodesic neighborhood. We show that MPs of this special class can be approximated by reaction-diffusion-type PDEs, which have received extensive attention recently in the contour evolution literature. Although we show that MPs constitute a special class of these evolution models, our analysis of migrating sets does not require the machinery of differential geometry. In Part I of the paper we characterize the migration of closed curves and extend our analysis to arbitrary connected sets in the continuous domain (ℝ^m) using the frequency analysis of closed polygons, which has been rediscovered recently in the literature. We show that migrating sets shrink, and also derive other geometric properties of MPs. In Part II we will reformulate the concept of migration in a discrete representation (ℤ^m).
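A simple discrete reading of one migration step for a closed polygon, in the spirit of the abstract: every vertex moves to the centroid of a symmetric window of its nearest neighbours along the curve, used here as a stand-in for the equigeodesic neighbourhood. This toy sketch only illustrates the shrinking behaviour; it is not the continuous-domain analysis of the paper.

```python
# One migration step = replace each vertex by the centroid of a cyclic window
# of its 2k+1 nearest vertices along the polygon; iterate to watch it shrink.
import numpy as np

def migrate(points, k=1, steps=1):
    """points: (N, 2) array of vertices of a closed polygon, in order."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    offsets = np.arange(-k, k + 1)
    for _ in range(steps):
        idx = (np.arange(n)[:, None] + offsets[None, :]) % n   # cyclic window
        pts = pts[idx].mean(axis=1)       # centroid of each window
    return pts

# a regular 64-gon of radius 1 contracts towards its centre:
theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
print(np.linalg.norm(migrate(circle, k=2, steps=500), axis=1).max())  # ~0.01
```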

7.
International Journal of Computer Mathematics, 2012, 89(9): 1940–1963
Let G be a simple non-complete graph of order n. The r-component edge connectivity of G, denoted λ_r(G), is the minimum number of edges that must be removed from G in order to obtain a graph with (at least) r connected components. The concept of r-component edge connectivity generalizes that of edge connectivity by taking into account the number of components of the resulting graph. In this paper we establish bounds on the r-component edge connectivity of an important family of interconnection network models, the generalized Petersen graphs GP(n, k), in which n and k are relatively prime integers.
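The graph family and the quantity under study can be stated concretely. The sketch below builds GP(n, k) from its standard description (outer cycle u_0…u_{n−1}, spokes u_i v_i, inner edges v_i v_{i+k}) and computes λ_r(G) by exhaustive search; this is only feasible for very small instances and is meant as a sanity check against proved bounds, not as an algorithm from the paper.

```python
# Brute-force r-component edge connectivity of GP(n, k), assuming gcd(n, k) = 1
# as in the abstract; exponential, for tiny instances only.
from itertools import combinations

def gp_edges(n, k):
    outer = [(("u", i), ("u", (i + 1) % n)) for i in range(n)]
    spokes = [(("u", i), ("v", i)) for i in range(n)]
    inner = [(("v", i), ("v", (i + k) % n)) for i in range(n)]
    return outer + spokes + inner

def num_components(vertices, edges):
    parent = {v: v for v in vertices}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in edges:
        parent[find(a)] = find(b)
    return len({find(v) for v in vertices})

def lambda_r(n, k, r):
    """Minimum number of edges whose removal leaves at least r components."""
    edges = gp_edges(n, k)
    vertices = [("u", i) for i in range(n)] + [("v", i) for i in range(n)]
    for size in range(len(edges) + 1):
        for removed in combinations(range(len(edges)), size):
            kept = [e for i, e in enumerate(edges) if i not in set(removed)]
            if num_components(vertices, kept) >= r:
                return size
    return None

# e.g. the ordinary edge connectivity of the Petersen graph GP(5, 2) is 3:
print(lambda_r(5, 2, 2))   # -> 3
```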

8.
Convex Hodge Decomposition and Regularization of Image Flows
The total variation (TV) measure is a key concept in the field of variational image analysis. In this paper, we focus on vector-valued data and derive from the Hodge decomposition of image flows a definition of TV regularization for vector-valued data that extends the standard componentwise definition in a natural way. We show that our approach leads to a convex decomposition of arbitrary vector fields, providing a richer decomposition into piecewise harmonic fields, rather than piecewise constant ones, and motion texture. Furthermore, our regularizer provides a measure for motion boundaries of piecewise harmonic image flows, in the same way as the TV measure does for contours of scalar-valued piecewise constant images.
Gabriele Steidl
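For orientation, the two standard ingredients referred to above are the componentwise TV of a vector-valued image and the Hodge decomposition of a 2D flow field; the formulas below are background only, not the extended regularizer proposed in the paper.

```latex
\[
  \mathrm{TV}(u) = \sum_{i=1}^{m} \int_{\Omega} |\nabla u_i|\, dx,
  \qquad u = (u_1,\dots,u_m)\colon \Omega \to \mathbb{R}^m,
\]
\[
  v = \nabla \varphi + \nabla^{\perp} \psi + h,
  \qquad \nabla^{\perp}\psi = (-\partial_y \psi,\ \partial_x \psi),
  \qquad \operatorname{div} h = 0,\ \operatorname{curl} h = 0,
\]
```

i.e. a curl-free part, a divergence-free part, and a harmonic remainder; the paper's regularizer is built on this decomposition rather than on the componentwise formula.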

9.
Given a grid graph with two rows, an arbitrary number N of columns (briefly, a ladder) and a weight function defined on its vertex set V, one wants to partition V into a given number p of connected components, so as to maximize the smallest weight of a component. We present an O(N^4 · p · max{p, log N})-time algorithm, which combines dynamic programming with pre-processing and search techniques. An O(N)-time algorithm for the case p = 2 is also given. In a companion paper [2] we show that the problem for a grid graph with three rows is NP-hard, and we give approximate algorithms for grid graphs with an arbitrary number of rows.
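For very small ladders the problem can be solved by brute force, which is convenient when checking a faster implementation: enumerate all assignments of the 2×N vertices to p labels, keep those whose classes are non-empty and connected, and maximize the smallest class weight. The sketch below is that exponential baseline with illustrative names, not the polynomial algorithm of the paper.

```python
# Naive max-min connected p-partition of a 2 x N ladder (exponential in N).
from itertools import product

def ladder_neighbours(v, n):
    r, c = v
    out = [(1 - r, c)]                       # rung
    if c > 0: out.append((r, c - 1))
    if c < n - 1: out.append((r, c + 1))
    return out

def connected(cells, n):
    cells = set(cells)
    if not cells:
        return False
    stack, seen = [next(iter(cells))], set()
    while stack:
        v = stack.pop()
        if v in seen: continue
        seen.add(v)
        stack.extend(u for u in ladder_neighbours(v, n) if u in cells)
    return seen == cells

def best_min_weight(weights, p):
    """weights: dict {(row, col): weight} for a 2 x N ladder."""
    n = max(c for _, c in weights) + 1
    vertices = sorted(weights)
    best = None
    for labels in product(range(p), repeat=len(vertices)):
        classes = [[v for v, l in zip(vertices, labels) if l == q] for q in range(p)]
        if all(connected(cls, n) for cls in classes):
            score = min(sum(weights[v] for v in cls) for cls in classes)
            best = score if best is None else max(best, score)
    return best

# a 2 x 3 ladder with unit weights split into p = 2 connected parts: value 3
w = {(r, c): 1 for r in range(2) for c in range(3)}
print(best_min_weight(w, 2))   # -> 3
```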

10.
The general concern of the Jacopini technique is the question: “Is it consistent to extend a given lambda calculus with certain equations?” The technique was introduced by Jacopini in 1975 in his proof that in the untyped lambda calculus Ω is easy, i.e., Ω can be assumed equal to any other (closed) term without violating the consistency of the lambda calculus. The presentations of the Jacopini technique that are known from the literature are difficult to understand and hard to generalise. In this paper we generalise the Jacopini technique for arbitrary lambda calculi. We introduce the concept of proof-replaceability, by which the structure of the technique is simplified considerably. We illustrate the simplicity and generality of our formulation of the technique with some examples. We apply the Jacopini technique to the λμ-calculus, and we prove a general theorem concerning the consistency of extensions of the λμ-calculus of a certain form. Many well known examples (e.g., the easiness of Ω) are immediate consequences of this general theorem.
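The result that motivated the technique can be stated compactly (standard untyped λ-calculus notation; the paper's contribution is a generalized form of the proof method behind it):

```latex
\[
  \Omega = (\lambda x.\, x\,x)\,(\lambda x.\, x\,x), \qquad
  \lambda\beta + \{\Omega = M\}\ \text{is consistent for every closed term } M.
\]
```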

11.
The structure at infinity of an ordinary differential control system is a finite sequence of increasing integers ending with the differential output rank of the system, namely the number of outputs that can be given as arbitrary functions of time when the inputs are unknown. Its definition and construction, originally done for linear systems, have been extended to affine nonlinear systems and used in order to study dynamic decoupling or model matching. It essentially relies on a state representation. The purpose of this paper is to make a critical examination of this concept and to modify it in order to avoid the state representation. At the same time, we extend it to nonlinear partial differential control systems by exhibiting a link with formal integrability, a highly important concept in the formal theory of systems of partial differential equations that cannot be handled by means of a transfer matrix approach. Many explicit examples illustrate the main results, and the possibility of using computer algebra techniques is pointed out.

12.
In this paper, we present a novel resource brokering service for grid systems which considers authorization policies of the grid nodes in the process of selecting the resources to be assigned to a request. We argue that such an integration is needed to avoid scheduling requests onto resources whose policies do not authorize their execution. Our service, implemented in Globus as a part of the Monitoring and Discovery Service (MDS), is based on the concept of fine-grained access control (FGAC), which enables participating grid nodes to specify fine-grained policies concerning the conditions under which grid clients can access their resources. Since the process of evaluating authorization policies, in addition to checking the resource requirements, can be a potential bottleneck for a large-scale grid, we also analyze the problem of the efficient evaluation of FGAC policies. In this context, we present GroupByRule, a novel method for policy organization, and compare its performance with other strategies.
E. Bertino
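The core brokering idea — check a node's fine-grained authorization policy before matching resource requirements — can be sketched as follows. The policy representation and all names here are deliberately simplified and hypothetical; this is neither the Globus/MDS interface nor the paper's GroupByRule organization.

```python
# Toy FGAC-aware broker: authorization is evaluated first, then capacity.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    cpus: int
    mem_gb: int
    # each policy is a predicate over (client, request); all must allow
    policies: list = field(default_factory=list)

def authorized(node, client, request):
    return all(policy(client, request) for policy in node.policies)

def broker(nodes, client, request):
    """Return nodes that both authorize the client and satisfy the request."""
    return [n for n in nodes
            if authorized(n, client, request)                  # FGAC check first
            and n.cpus >= request["cpus"] and n.mem_gb >= request["mem_gb"]]

# example: one node only accepts jobs from the 'physics' group under 8 CPUs
only_physics = lambda client, req: client["group"] == "physics" and req["cpus"] <= 8
nodes = [Node("a", 16, 64, [only_physics]), Node("b", 8, 32, [])]
print([n.name for n in broker(nodes, {"group": "cs"}, {"cpus": 4, "mem_gb": 8})])
# -> ['b']
```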

13.
In this paper, we present the ARIA media processing workflow architecture that processes, filters, and fuses sensory inputs and actuates responses in real-time. The components of the architecture are programmable and adaptable; i.e. the delay, size, and quality/precision characteristics of the individual operators can be controlled via a number of parameters. Each data object processed by qStream components is subject to transformations based on the parameter values. For instance, the quality of an output data object and the corresponding processing delay and resource usage depend on the values assigned to parameters of the operators in the object flow path. In Candan, Peng, Ryu, Chatha, Mayer (Efficient stream routing in quality- and resource-adaptive flow architectures. In: Workshop on multimedia information systems, 2004), we introduced a class of flow optimization problems that promote creation and delivery of small delay or small resource-usage objects to the actuators in single-sensor, single-actuator workflows. In this paper, we extend our attention to multi-sensor media processing workflow scenarios. The algorithms we present take into account the implicit dependencies between various system parameters, such as resource consumption and object sizes. We experimentally show the effectiveness and efficiency of the algorithms.
Kyung Dong Ryu

14.
We approach the virtual reality phenomenon by studying its relationship to set theory. This approach offers a characterization of virtual reality in set-theoretic terms, and we investigate the case where this is done using the wellfoundedness property. Our hypothesis is that non-wellfounded sets (so-called hypersets) give rise to a different quality of virtual reality than do familiar wellfounded sets. To elaborate this hypothesis, we describe virtual reality through Sommerhoff’s categories of first- and second-order self-awareness, introduced as necessary conditions for consciousness in terms of higher cognitive functions. We then propose a representation of first- and second-order self-awareness through sets, and assume that these sets, which we call events, originally form a collection of wellfounded sets. Strong virtual reality characterizes virtual reality environments which have the limited capacity to create only events associated with wellfounded sets. In contrast, the logically weaker and more general concept of weak virtual reality characterizes collections of virtual reality mediated events altogether forming an entirety larger than any collection of wellfounded sets. By giving reference to Aczel’s hyperset theory we indicate that this definition is not empty, because hypersets encompass wellfounded sets already. Moreover, we argue that weak virtual reality could be realized in human history through continued progress in computer technology. Finally, within a more general framework, we use Baltag’s structural theory of sets (STS) to show that within this hyperset theory Sommerhoff’s first- and second-order self-awareness as well as both concepts of virtual reality admit a consistent mathematical representation. To illustrate our ideas, several examples and heuristic arguments are discussed.
Andreas Martin Lisewski
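A compact reminder of the set-theoretic distinction the argument rests on; this is standard material on non-wellfounded set theory, not something specific to the paper's constructions.

```latex
A set $x$ is \emph{wellfounded} iff there is no infinite descending chain
$\cdots \in x_2 \in x_1 \in x$. Aczel's anti-foundation axiom admits
non-wellfounded sets (\emph{hypersets}); the canonical example is the unique
set satisfying
\[
  \Omega = \{\Omega\}.
\]
```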

15.
This paper addresses the scheduling problem in decentralized grid systems. Such a problem focuses on computing a large set of arbitrary tasks so as to optimize the system performance while minimizing the average system costs. The mainstream solution in the recent literature is to maximize the total system throughput by modeling such systems as either a network flow or a tree. However, most approaches neglect the movements of tasks and the load-dependent system costs which, in fact, are crucial to the system performance in real situations. In this paper, a Service-Oriented Overlay Network (SOON) is presented, in which the service nodes encapsulate both computation and communication resources and the links are used to track the movements of tasks instead of describing communication. An analytical Cost-Charge (C2) model, in which both running cost and service charge depend on load, is proposed to describe the problem by incorporating degree-dependent task allocation into a closed queuing network model. Infinitesimal Perturbation Analysis (IPA) is applied to solve C2 theoretically. Following the theoretical analysis, a scalable decentralized scheduler named Liana is proposed (tasks in the proposed system move like the growth and spread of an evergreen liana, hence the name). The major components of Liana are an autonomous scheduling algorithm and a Degree-Driven Protocol (DDP). Furthermore, trace-based simulations on a test bed distributed widely across the world are carried out to compare the system performance of Liana with that of recent approaches. The results are promising: close-to-optimal service utilization is achieved when system cost is taken into account.
Chun-Qing Li

16.
In this paper, we present conditions which guarantee that every digitization process preserves important topological and differential geometric properties. These conditions also allow us to determine the correct digitization resolution for a given class of real objects. Knowing that these properties are invariant under digitization, we can then use them in feature-based recognition. Moreover, these conditions imply that only a few digital patterns can occur as neighborhoods of boundary points in the digitization. This is very useful for noise detection, since if the neighborhood of a boundary point does not match one of these patterns, it must be due to noise. Our definition of a digitization approximates many real digitization processes. The digitization process is modeled as a mapping from continuous sets representing real objects to discrete sets represented as digital images. We show that an object A and the digitization of A are homotopy equivalent. This, for example, implies that the digitization of A preserves connectivity of the object and its complement. Moreover, we show that the digitization of A will not change the qualitative differential geometric properties of the boundary of A; i.e., a boundary point which is locally convex cannot be digitized to a locally concave pixel and a boundary point which is locally concave cannot be digitized to a locally convex pixel.

17.
This paper presents a rather concrete view of a semantic universe for typed concurrent computation. Starting with a notion of sets and functions organized in a category featuring the type theory at hand, we identify the lax slice F//Span ( ) of pseudo-functors from a free category into the bicategory of spans over and triangles commuting up to a lax natural transformation with representable components as a category of models of concurrency over which the semantic universe unfolds. By analogy, we call the objects of F//Span ( ) categorical transition systems and demonstrate their relevance in giving meaning to a range of everyday phenomena including message passing among imperative programs. We identify the bicategory of spans Span(F//Span ( )) as organizing processes at a basic level and address the question of their equality, articulated as bisimulation w.r.t. actions observable at the interfaces given by the legs of such spans. An appropriate notion of simulation yields a system of open maps by which to quotient Span(F//Span ( )), a construction originally considered by Cockett and Spooner in its generality. The resulting process category Proc(F//Span ( ),) properly contains the well-known interaction category SProc introduced by Abramsky et al.

18.
Views over databases have regained attention in the context of data warehouses, which are seen as materialized views. In this setting, efficient view maintenance is an important issue, for which the notion of self-maintainability has been identified as desirable. In this paper, we extend the concept of self-maintainability to (query and update) independence within a formal framework, where independence with respect to arbitrary given sets of queries and updates over the sources can be guaranteed. To this end we establish an intuitively appealing connection between warehouse independence and view complements. Moreover, we study special kinds of complements, namely monotonic complements, and show how to compute minimal ones in the presence of keys and foreign keys in the underlying databases. Taking advantage of these complements, an algorithmic approach is proposed for the specification of independent warehouses with respect to given sets of queries and updates.
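The complement idea can be illustrated with the textbook example of a SUM-per-key view: the view alone cannot be maintained under deletions, but adding a COUNT column, a small complement, makes it maintainable from the updates alone, without querying the base tables. The sketch below is that standard illustration with hypothetical names, not the paper's construction of minimal monotonic complements.

```python
# A SUM-per-key view kept self-maintainable by storing COUNT as a complement.
from collections import defaultdict

class SumView:
    def __init__(self):
        self.total = defaultdict(float)   # key -> SUM(value)
        self.count = defaultdict(int)     # key -> COUNT(*)  (the complement)

    def insert(self, key, value):
        self.total[key] += value
        self.count[key] += 1

    def delete(self, key, value):
        self.total[key] -= value
        self.count[key] -= 1
        if self.count[key] == 0:          # last tuple of the group is gone:
            del self.total[key]           # drop the group; no base query needed
            del self.count[key]

v = SumView()
v.insert("a", 3.0); v.insert("a", 4.0); v.insert("b", 1.0)
v.delete("a", 3.0); v.delete("b", 1.0)
print(dict(v.total))   # -> {'a': 4.0}
```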

19.
One of the concepts from topology that has found use in image processing is the so-called fundamental group of an image. A definition for the digital fundamental group of a binary picture was introduced by Kong in “A digital fundamental group” [4]. This paper introduces a fundamental group for greyscale images. We also describe Poincaré’s classical method for computing a representation of the fundamental group and extend it to work with our greyscale version.

20.
Fusion Graphs: Merging Properties and Watersheds
Region merging methods consist of improving an initial segmentation by merging some pairs of neighboring regions. In this paper, we consider a segmentation as a set of connected regions, separated by a frontier. If the frontier set cannot be reduced without merging some regions then we call it a cleft, or binary watershed. In a general graph framework, merging two regions is not straightforward. We define four classes of graphs for which we prove, thanks to the notion of cleft, that some of the difficulties for defining merging procedures are avoided. Our main result is that one of these classes is the class of graphs in which any cleft is thin. None of the usual adjacency relations on ℤ^2 and ℤ^3 allows a satisfying definition of merging. We introduce the perfect fusion grid on ℤ^n, a regular graph in which merging two neighboring regions can always be performed by removing from the frontier set all the points adjacent to both regions.
L. Najman
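The merging operation described above is easy to state on an arbitrary graph: absorb into the union of the two regions every frontier point adjacent to both. The sketch below performs exactly that step with illustrative names; the paper's contribution is identifying the graphs, such as the perfect fusion grid, on which this operation always behaves well.

```python
# Merge two regions by absorbing every frontier point adjacent to both;
# regions and frontier are sets of vertices, adjacency is a user-supplied test.
def merge_regions(region_a, region_b, frontier, adjacent):
    """region_a, region_b, frontier: sets of vertices; adjacent(u, v) -> bool."""
    touching_both = {p for p in frontier
                     if any(adjacent(p, a) for a in region_a)
                     and any(adjacent(p, b) for b in region_b)}
    merged = region_a | region_b | touching_both
    return merged, frontier - touching_both

# example on the line graph 0-1-2-3-4 with regions {0,1} and {3,4},
# separated by the single frontier point {2}:
adj = lambda u, v: abs(u - v) == 1
merged, frontier = merge_regions({0, 1}, {3, 4}, {2}, adj)
print(merged, frontier)   # -> {0, 1, 2, 3, 4} set()
```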
