Similar Documents
20 similar documents found (search time: 15 ms)
1.
The development of formal models of spatial relations is a topic of great importance in spatial reasoning, geographic information systems (GIS) and computer vision, and has gained much attention from researchers across these areas during the past two decades. In recent years significant achievements have been made in modelling spatial relations between objects with precisely defined boundaries. However, these models cannot be directly applied to spatial objects with indeterminate boundaries, which arise in many applications in geographic analysis and image understanding. This article develops a method for approximately analyzing binary topological relations between geographic regions with indeterminate boundaries, based upon previous work on topological spatial relations and fuzzy sets. Examples are given to demonstrate the method and related concepts. It is shown that the eight binary topological relations between regions in a two-dimensional space can be easily determined by the method.
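As background for the crisp case that such fuzzy-set methods generalize, the sketch below classifies the eight relations with Egenhofer-style interior/boundary intersection tests. It is our own illustration, assuming regions rasterized into hypothetical sets of interior and boundary grid cells; it is not the article's model.

```python
# The eight region-region relations, keyed by non-emptiness of the four
# intersections (A int ∩ B int, A int ∩ B bnd, A bnd ∩ B int, A bnd ∩ B bnd):
# the classical 4-intersection model for simple regions in the plane.
RELATIONS = {
    (False, False, False, False): "disjoint",
    (False, False, False, True):  "meet",
    (True,  False, False, True):  "equal",
    (True,  False, True,  False): "inside",
    (True,  False, True,  True):  "coveredBy",
    (True,  True,  False, False): "contains",
    (True,  True,  False, True):  "covers",
    (True,  True,  True,  True):  "overlap",
}

def topological_relation(a_int, a_bnd, b_int, b_bnd):
    """Classify regions A and B from their interior/boundary cell sets."""
    sig = (bool(a_int & b_int), bool(a_int & b_bnd),
           bool(a_bnd & b_int), bool(a_bnd & b_bnd))
    return RELATIONS.get(sig, "not realizable for simple regions")

# Two 3x3 rasterized squares sharing one boundary edge: they meet.
a_int, a_bnd = {(1, 1)}, {(0, 0), (0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1), (2, 2)}
b_int, b_bnd = {(1, 3)}, {(0, 2), (0, 3), (0, 4), (1, 2), (1, 4), (2, 2), (2, 3), (2, 4)}
print(topological_relation(a_int, a_bnd, b_int, b_bnd))  # -> meet
```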

2.
3.
Geographic maps have existed from the early stages of human civilization. Various styles of visualizing geographic information have evolved depending on the nature of the information and the technology available for visualization, leading to innumerable map styles. In this work we develop a technique to create maps by combining two-dimensional and three-dimensional information such that the resulting maps are both functional and aesthetically appealing. Our technique requires geographical information in vector form and aerial images as inputs. We use computer-vision-based approaches and user-defined inputs to augment the vector data with the information required to create stylized maps. We define procedural graphics methods to generate a range of geographic elements that can be composed together into a stylized map. We demonstrate our technique by generating example maps of a region in Las Vegas.

4.
Given an undirected, connected, weighted graph and a positive integer k, the bounded-diameter minimum spanning tree (BDMST) problem seeks a spanning tree of the graph with smallest weight, among all spanning trees of the graph, that contains no path with more than k edges. In general, this problem is NP-hard for 4 ≤ k < n − 1, where n is the number of vertices in the graph. This work improves on two existing greedy heuristics, the randomized greedy heuristic (RGH) and the centre-based tree construction heuristic (CBTC), and on a permutation-coded evolutionary algorithm for the BDMST problem. We propose two improvements to RGH/CBTC: the first iteratively modifies the bounded-diameter spanning tree obtained by RGH/CBTC so as to reduce its cost, whereas the second improves speed. We have modified the crossover and mutation operators and the decoder used in the permutation-coded evolutionary algorithm so as to improve its performance. Computational results show the effectiveness of our approaches, which obtained better-quality solutions in much shorter time on all test problem instances considered.
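As orientation, a centre-based greedy construction of the kind RGH and CBTC share can be sketched as follows. This is our simplification (a single vertex centre and cheapest feasible attachment; for odd k the real heuristics use an edge centre), not the authors' exact algorithms.

```python
import random

def centre_based_bdmst(w, k, rng=random.Random(0)):
    """Grow a spanning tree from a random centre, never letting any vertex
    exceed depth k // 2; a tree of height h rooted at one centre has
    diameter at most 2 * h <= k. w is a symmetric n x n weight matrix
    for a complete graph."""
    n = len(w)
    centre = rng.randrange(n)
    depth = {centre: 0}
    parent = {centre: None}
    while len(depth) < n:
        # Cheapest edge from a non-saturated tree vertex to an outside vertex.
        best = min((w[u][v], u, v)
                   for u in depth if depth[u] < k // 2
                   for v in range(n) if v not in depth)
        _, u, v = best
        parent[v], depth[v] = u, depth[u] + 1
    return parent  # parent pointers encode the spanning tree

# Four vertices, bound k = 2: the tree must be a star around the centre.
w = [[0, 2, 3, 9], [2, 0, 4, 1], [3, 4, 0, 5], [9, 1, 5, 0]]
print(centre_based_bdmst(w, k=2))
```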

5.
A 0-1 matrix has the Consecutive Ones Property (C1P) if there is a permutation of its columns that leaves the 1's consecutive in each row. The Consecutive Ones Submatrix (C1S) problem is, given a 0-1 matrix A, to find the largest number of columns of A that form a submatrix with the C1P. This problem finds application in physical mapping with hybridization data in genome sequencing. Let (a, b)-matrices be the 0-1 matrices with at most a 1's in each column and at most b 1's in each row. This paper proves that the C1S problem remains NP-hard for (i) (2, 3)-matrices and (ii) (3, 2)-matrices, solving an open problem posed in a recent paper of Hajiaghayi and Ganjali. We further prove that the C1S problem is polynomial-time 0.8-approximable for (2, 3)-matrices in which no two columns are identical, and 0.5-approximable for (2, ∞)-matrices in general. We also show that the C1S problem is polynomial-time 0.5-approximable for (3, 2)-matrices. However, there exists an ε > 0 such that approximating the C1S problem for (∞, 2)-matrices within a factor of n^ε (where n is the number of columns of the input matrix) is NP-hard.
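For intuition, the property itself is easy to test by brute force on small matrices. The sketch below is our illustration of the definition only, not an algorithm from the paper, whose point is precisely that the optimization version (C1S) stays hard even for very sparse matrices.

```python
from itertools import permutations

def has_c1p(matrix):
    """Brute-force C1P test: does some column permutation make the 1's
    consecutive in every row? Exponential in the number of columns, so
    only usable for tiny instances."""
    cols = range(len(matrix[0]))
    for perm in permutations(cols):
        ok = True
        for row in matrix:
            ones = [i for i, c in enumerate(perm) if row[c] == 1]
            if ones and ones[-1] - ones[0] + 1 != len(ones):
                ok = False  # a gap between the first and last 1
                break
        if ok:
            return True
    return False

# Columns can be ordered (0, 2, 1) so each row's 1's are consecutive.
print(has_c1p([[1, 0, 1], [0, 1, 1]]))  # -> True
```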

6.
Environmental data are in general imprecise and uncertain, but they are located in space and therefore obey spatial constraints. “Spatial analysis” is a (natural) reasoning process through which geographers take advantage of these constraints to reduce this uncertainty and to improve their beliefs. Automating this process is a genuinely hard problem. We propose here the design of a revision operator able to perform a spatial analysis in the context of one particular “application profile”: it identifies objects bearing a same variable bound through local constraints. The formal background on which this operator is built is a decision algorithm from Reiter [9]; the heuristics that make this algorithm tractable on a full-scale application are special patterns for clauses and “spatial confinement” of conflicts. This operator is “anytime” because it uses “samples” and works on small (tractable) blocks; since it reaggregates the partial revision results on larger blocks, we name it a “hierarchical block revision” operator. Finally we illustrate a particular application: flood propagation. This is, of course, only one of several possible “soft computing” approaches for geographic applications.

7.
We introduce an operational way to reduce the spatial complexity of inference processes based on conditional lower–upper probability assessments. To reach this goal we must suitably exploit zero probabilities, taking into account the logical conditions characterizing locally strong coherence. We reformulate for conditional lower–upper probabilities the notion of locally strong coherence already introduced for conditional precise probabilities. Thanks to this characterization, we avoid building all atoms, so that several real problems become feasible; indeed, the real complexity problem is tied to the number of atoms. Since an inferential process with lower–upper probabilities must fulfil several sequences of constraints, our simplification can have either a “global” or a “partial” effect, being applicable to all or just to some of the sequences. The whole procedure has been implemented in the XLisp-Stat language, and a comparison with other approaches is illustrated by an example.

8.
Honeycomb Wachspress finite elements for structural topology optimization
Traditionally, standard Lagrangian-type finite elements, such as linear quads and triangles, have been the elements of choice in the field of topology optimization. However, finite element meshes with these conventional elements exhibit the well-known “checkerboard” pathology in the iterative solution of topology optimization problems. A feasible alternative that eliminates this long-standing problem is to use hexagonal (honeycomb) elements with Wachspress-type shape functions. The features of the hexagonal mesh include two-node connections (i.e. two elements are either not connected or connected by two nodes) and three edge-based symmetry lines per element. In contrast, quads can display one-node connections, which can lead to checkerboarding, and have only two edge-based symmetry lines. In addition, Wachspress rational shape functions satisfy the partition-of-unity condition and lead to conforming finite element approximations. We explore Wachspress-type hexagonal elements and present their implementation using three approaches for topology optimization: element-based, continuous approximation of material distribution, and minimum length scale through projection functions. Examples demonstrate the advantages of the proposed element in achieving checkerboard-free solutions and avoiding spurious fine-scale patterns in the design optimization process.
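The rational shape functions in question admit a compact closed form. The sketch below implements the standard Wachspress coordinates of a convex polygon (the familiar triangle-area form) and checks the partition-of-unity condition on a regular hexagon. It is a generic illustration under our own assumptions, not the paper's finite-element implementation.

```python
import math

def tri_area(a, b, c):
    """Signed area of triangle (a, b, c); positive for counter-clockwise order."""
    return 0.5 * ((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))

def wachspress(verts, x):
    """Wachspress shape functions of a convex polygon at an interior point x;
    verts must be listed counter-clockwise."""
    n = len(verts)
    w = []
    for i in range(n):
        prev, cur, nxt = verts[i - 1], verts[i], verts[(i + 1) % n]
        w.append(tri_area(prev, cur, nxt) /
                 (tri_area(prev, cur, x) * tri_area(cur, nxt, x)))
    total = sum(w)
    return [wi / total for wi in w]

# Regular hexagon: the six shape functions sum to one inside (partition of
# unity) and are all equal by symmetry at the centroid.
hexagon = [(math.cos(k * math.pi / 3), math.sin(k * math.pi / 3)) for k in range(6)]
print(sum(wachspress(hexagon, (0.05, -0.02))))  # -> 1.0 (up to rounding)
print(wachspress(hexagon, (0.0, 0.0)))          # -> six values of 1/6
```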

9.
Dynamic analysis of video sequences often relies on segmenting the sequence into regions of consistent motion. Approaching this problem requires a definition of which motions are regarded as consistent. Common approaches to motion segmentation group together points or image regions that have the same motion between successive frames (where the same motion can be 2D, 3D, or non-rigid). In this paper we define a new type of motion consistency based on temporal consistency of behaviors across multiple frames in the video sequence. Our definition of consistent “temporal behavior” is expressed in terms of multi-frame linear subspace constraints. This definition applies to 2D, 3D, and some non-rigid motions without requiring prior model selection. We further show that our definition of motion consistency extends to data with directional uncertainty, thus leading to a dense segmentation of the entire image. Such segmentation is obtained by applying the new motion consistency constraints directly to covariance-weighted image brightness measurements, without requiring prior correspondence estimation or feature tracking.

10.
Service Oriented Architectures (SOAs) support service lifecycle tasks, including development, deployment, discovery and use. We observe that there are two disparate ways to use Grid SOAs such as the Open Grid Services Architecture (OGSA) as exemplified in the Globus Toolkit (GT3/4). One is a traditional enterprise SOA use where end-user services are developed, deployed and resourced behind firewalls for use by external consumers: a service-centric (or “first-order”) approach. The other supports end-user development, deployment and resourcing of applications across organizations via the use of execution and resource management services: a resource-centric (or “second-order”) approach. We analyze and compare the two approaches using a combination of empirical experiments and an architectural evaluation methodology (scenarios, mechanisms, and quality attributes) to reveal common and distinct strengths and weaknesses. The impact of potential improvements (which are likely to be manifested by GT4) is estimated, and opportunities for alternative architectures and technologies are explored. We conclude by investigating whether the two approaches can be converged or combined, and whether they are compatible on shared resources.

11.
Operator and parameter adaptation in genetic algorithms
Genetic Algorithms are a class of powerful, robust search techniques based on genetic inheritance and the Darwinian metaphor of “natural selection”. These algorithms maintain a finite memory of individual points on the search landscape known as the “population”. Members of the population are usually represented as strings written over some fixed alphabet, each of which has a scalar value attached to it reflecting its quality or “fitness”. The search may be seen as the iterative application of a number of operators, such as selection, recombination and mutation, to the population with the aim of producing progressively fitter individuals. These operators are usually static, that is to say their mechanisms, parameters, and probability of application are fixed at the beginning and constant throughout the run of the algorithm. However, there is an increasing body of evidence not only that no single choice of operators is optimal for all problems, but that the optimal choice of operators for a given problem will be time-variant, i.e. it will depend on factors such as the degree of convergence of the population. Based on theoretical and practical approaches, a number of authors have proposed methods of adaptively controlling one or more of the operators, usually invoking some kind of “meta-learning” algorithm, in order to improve the performance of the Genetic Algorithm as a function optimiser. In this paper we describe the background to these approaches and suggest a framework for their classification, based on the learning strategy used to control them and on which facets of the algorithm are susceptible to adaptation. We then review a number of significant pieces of work within this framework, and draw some conclusions about the relative merits of various approaches and promising directions for future work.
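One concrete instance of the adaptation methods such surveys cover is self-adaptation, where each individual carries its own mutation rate subject to variation and selection. The toy sketch below, on a OneMax objective, is our own illustration of that idea, not an example drawn from the paper.

```python
import math
import random

rng = random.Random(42)

def onemax(bits):
    """Toy fitness: number of 1-bits."""
    return sum(bits)

def self_adaptive_ga(n_bits=50, pop_size=30, gens=100):
    """Each individual is (bitstring, mutation_rate); the rate is perturbed
    log-normally before being applied, so useful rates hitch-hike along
    with good solutions -- a simple self-adaptive strategy."""
    pop = [([rng.randint(0, 1) for _ in range(n_bits)], 1.0 / n_bits)
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda ind: onemax(ind[0]), reverse=True)
        parents = pop[:pop_size // 2]  # truncation selection
        pop = []
        while len(pop) < pop_size:
            bits, rate = parents[rng.randrange(len(parents))]
            rate = min(0.5, max(1e-4, rate * math.exp(0.2 * rng.gauss(0, 1))))
            pop.append(([b ^ (rng.random() < rate) for b in bits], rate))
    best_bits, best_rate = max(pop, key=lambda ind: onemax(ind[0]))
    return onemax(best_bits), best_rate

print(self_adaptive_ga())  # best fitness plus the rate that evolved with it
```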

12.
The problem of clustering fingerprint vectors with missing values is an interesting problem in computational biology, proposed in Figueroa et al. (J. Comput. Biol. 11(5):887–901, 2004). In this paper we narrow the gaps between the known lower and upper bounds on the approximability of variants of the biological problem. Moreover, we study two additional variants of the original problem proposed in Figueroa et al. (Proc. 11th Computing: The Australasian Theory Symposium (CATS), CRPIT, vol. 41, pp. 57–60, 2005). We prove that all such problems are APX-hard even when each fingerprint contains only two unknown positions, and we present a greedy algorithm with constant approximation factors for these variants. Despite the hardness of these restricted versions, we show that the general clustering problem with an unbounded number of missing values is polynomial-time solvable when, at each fixed position, a missing value occurs in at most one fingerprint.
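To make the setting concrete, a first-fit greedy clustering over fingerprints with missing entries can be sketched as follows (our illustration, using None as the missing symbol; it is not the paper's algorithm and carries none of its approximation guarantees).

```python
def greedy_cluster(fingerprints):
    """Each cluster keeps a partial consensus (position -> bit); a
    fingerprint joins the first cluster whose consensus it does not
    contradict on its known positions, otherwise it opens a new one."""
    clusters = []  # list of (consensus dict, member list)
    for fp in fingerprints:
        placed = False
        for consensus, members in clusters:
            known = {i: b for i, b in enumerate(fp) if b is not None}
            if all(consensus.get(i, b) == b for i, b in known.items()):
                consensus.update(known)
                members.append(fp)
                placed = True
                break
        if not placed:
            clusters.append(({i: b for i, b in enumerate(fp) if b is not None}, [fp]))
    return [members for _, members in clusters]

# (1, None, 0) and (1, 1, None) are mutually consistent; (0, 0, 0) is not.
print(greedy_cluster([(1, None, 0), (1, 1, None), (0, 0, 0)]))
# -> [[(1, None, 0), (1, 1, None)], [(0, 0, 0)]]
```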

13.
In this paper we address the problem of integrating independent and possibly heterogeneous data warehouses, a problem that has received little attention so far but arises very often in practice. We start by tackling the basic issue of matching heterogeneous dimensions, and provide a number of general properties that a dimension matching should fulfill. We then propose two different approaches to the integration problem that try to enforce matchings satisfying these properties. The first approach refers to a scenario of loosely coupled integration, in which we just need to identify the common information between the data sources and perform join operations over the original sources. The goal of the second approach is the derivation of a materialized view built by merging the sources; it refers to a scenario of tightly coupled integration in which queries are performed against the view. We also illustrate the architecture and functionality of a practical system that we have developed to demonstrate the effectiveness of our integration strategies. A preliminary version of this paper appeared, under the title “Integrating Heterogeneous Multidimensional Databases” [9], in the 17th Int. Conference on Scientific and Statistical Database Management, 2005.

14.
We present a goal-directed E-unification procedure with eager Variable Elimination and a new rule, Cycle, for the case of collapsing equations, that is, equations of the type x ≈ v where x ∈ Var(v). Cycle replaces Variable Decomposition (the so-called Root Imitation) and thus removes the possibility of some obviously unnecessary infinite paths of inferences in the E-unification procedure. We prove that, as in other approaches, such inferences into variable positions are not needed in our goal-directed procedure. Our system is independent of the selection rule and complete for any E-unification problem.

15.
We provide an improved FPTAS for multiobjective shortest paths, a fundamental (NP-hard) problem in multiobjective optimization, along with a new generic method for obtaining an FPTAS for any multiobjective optimization problem with non-linear objectives. We show how these results can be used to obtain better approximate solutions to three related problems that have important applications in QoS routing and in traffic optimization: multiobjective constrained [optimal] path and non-additive shortest path. We also show how to obtain an FPTAS for a natural generalization of the weighted multicommodity flow problem with elastic demands and values, which models several realistic scenarios in transportation and communication networks.
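The standard device behind FPTASs for such problems is multiplicative rounding of labels. The sketch below is our own illustration of that rounding idea on the bi-objective case, not the paper's improved scheme: per node, at most one label per cell of a (1+ε) grid is retained, which bounds the number of labels polynomially.

```python
import heapq
import math

def approx_pareto_paths(adj, s, t, eps=0.1):
    """Bi-objective shortest paths with FPTAS-style pruning. adj maps a node
    to a list of (neighbour, (c1, c2)) edges with positive costs; returns an
    approximate Pareto frontier of (c1, c2) path costs from s to t."""
    def cell(lab):  # bucket a label on a multiplicative (1 + eps) grid
        return tuple(-1 if c <= 0 else int(math.log(c) / math.log(1 + eps))
                     for c in lab)
    seen = {s: {cell((0.0, 0.0))}}
    labels = {s: {(0.0, 0.0)}}
    pq = [((0.0, 0.0), s)]
    while pq:
        lab, u = heapq.heappop(pq)
        for v, (c1, c2) in adj.get(u, []):
            new = (lab[0] + c1, lab[1] + c2)
            if cell(new) in seen.setdefault(v, set()):
                continue  # a (1+eps)-equivalent label already reached v
            seen[v].add(cell(new))
            labels.setdefault(v, set()).add(new)
            heapq.heappush(pq, (new, v))
    return sorted(labels.get(t, set()))

# Two incomparable s-t paths survive: cheap on cost1 and cheap on cost2.
adj = {0: [(1, (1, 4)), (2, (2, 1))], 1: [(3, (1, 1))], 2: [(3, (3, 1))]}
print(approx_pareto_paths(adj, 0, 3))  # -> [(2.0, 5.0), (5.0, 2.0)]
```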

16.
All local minima of the error surface of the 2-2-1 XOR network are described. A local minimum is defined as a point such that all points in a neighbourhood have an error value greater than or equal to the error value at that point. It is proved that the error surface of the two-layer XOR network with two hidden units has a number of regions with local minima. These regions of local minima occur for combinations of the weights from the inputs to the hidden nodes such that one or both hidden nodes are saturated for at least two patterns. However, boundary points of these regions of local minima are saddle points. It is concluded that from each finite point in weight space a strictly decreasing path exists to a point with error zero. This also explains why experiments using higher numerical precision find fewer “local minima”.
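The surface in question is just a nine-dimensional function, easy to evaluate directly. The sketch below (our own illustration with hand-tuned weights, not taken from the paper) computes the sum-of-squares error of a 2-2-1 sigmoid network on the four XOR patterns; probing such a function along lines in weight space is how one inspects the saturated-hidden-unit regions and saddle points the paper characterizes.

```python
import math

def xor_error(w):
    """Sum-of-squares error of a 2-2-1 sigmoid network on XOR. w packs
    nine weights: (2 weights + bias) for each hidden unit, then
    2 weights + bias for the output unit."""
    sig = lambda a: 1.0 / (1.0 + math.exp(-a))
    err = 0.0
    for x1, x2, target in [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]:
        h1 = sig(w[0] * x1 + w[1] * x2 + w[2])  # roughly an OR unit below
        h2 = sig(w[3] * x1 + w[4] * x2 + w[5])  # roughly an AND unit below
        y = sig(w[6] * h1 + w[7] * h2 + w[8])   # OR and not-AND -> XOR
        err += (y - target) ** 2
    return err

# Hand-tuned weights solving XOR: the error is close to zero.
print(xor_error([5.3, 5.3, -2.2, 5.6, 5.6, -8.3, 7.9, -8.3, -3.7]))
```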

17.
We show that local dynamics require exponential time for two sampling problems motivated by statistical physics: independent sets on the triangular lattice (the hard-core lattice gas model) and weighted even orientations of the two-dimensional Cartesian lattice (the 8-vertex model). For each problem there is a parameter λ, known as the fugacity, such that local Markov chains are expected to be fast when λ is small and slow when λ is large. Establishing slow mixing for these models has been a challenge, however, because the standard contour arguments typically used to show that a chain has small conductance do not seem to apply. We modify this approach by introducing the notion of fat contours, which can have nontrivial area, and use these to establish slow mixing of the local chains defined for these models.
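For readers unfamiliar with the chains in question, the sketch below is our generic illustration of single-site (heat-bath/Glauber) dynamics for the hard-core model on an arbitrary graph; the paper's result concerns the mixing time of chains of exactly this local kind on the triangular lattice at high fugacity.

```python
import random

def glauber_hardcore(neigh, fugacity, steps, rng=random.Random(1)):
    """Heat-bath dynamics for the hard-core model: pick a site uniformly;
    with probability lam/(1+lam) try to occupy it (allowed only when no
    neighbour is occupied), otherwise vacate it. neigh maps each site to
    its list of neighbours; returns the final independent set."""
    sites = list(neigh)
    occupied = set()
    for _ in range(steps):
        v = rng.choice(sites)
        if rng.random() < fugacity / (1.0 + fugacity):
            if not any(u in occupied for u in neigh[v]):
                occupied.add(v)
        else:
            occupied.discard(v)
    return occupied

# A 3-cycle already shows the hard-core constraint: at most one of three
# mutually adjacent sites is ever occupied.
print(glauber_hardcore({0: [1, 2], 1: [0, 2], 2: [0, 1]}, fugacity=2.0, steps=1000))
```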

18.
The evaluation of interactive adaptive systems has long been acknowledged to be a complicated and demanding endeavour. Some promising recent approaches have attempted to tackle the problem of evaluating adaptivity by “decomposing” it and evaluating it in a “piece-wise” manner; separating the evaluation of different aspects can help to identify problems in the adaptation process. This paper presents a framework that can be used to guide the “layered” evaluation of adaptive systems, and a set of formative methods that have been tailored or specially developed for the evaluation of adaptivity. The proposed framework unifies previous approaches in the literature and has already been used, in various guises, in recent research work. The presented methods are related to the layers in the framework and to the stages in the development lifecycle of interactive systems. The paper also discusses practical issues surrounding the use of the framework and methods, and provides a brief overview of complementary and alternative approaches in the literature.

19.
Total Variation Wavelet Inpainting
We consider the problem of filling in missing or damaged wavelet coefficients due to lossy image transmission or communication. The task is closely related to classical inpainting problems, but differs markedly in that the inpainting regions lie in the wavelet domain. New challenges include that the resulting inpainting regions in the pixel domain are usually not geometrically well defined, and that degradation is often spatially inhomogeneous. We propose two related variational models to meet these challenges, which combine the total variation (TV) minimization technique with wavelet representations. The associated Euler-Lagrange equations lead to nonlinear partial differential equations (PDEs) in the wavelet domain, and proper numerical algorithms and schemes are designed to handle their computation. The proposed models give effective and automatic control over geometric features of the inpainted images, including sharp edges, even in the presence of substantial loss of wavelet coefficients, including in the low frequencies. Existence and uniqueness of the optimal inpaintings are also carefully investigated.
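A gradient-descent caricature of this formulation is easy to state: hold the surviving coefficients fixed and descend the TV energy with respect to the lost ones. The sketch below is our simplification under an orthonormal Haar transform, written against the PyWavelets API; it is not the paper's variational models or PDE schemes. It assumes a square float image with even side lengths and a boolean mask `keep` over the stacked coefficient array marking the coefficients that survived.

```python
import numpy as np
import pywt  # PyWavelets

def tv_wavelet_inpaint(img, keep, steps=200, lr=0.1, wavelet="haar"):
    """Subgradient descent on anisotropic TV of the reconstruction, with
    the surviving wavelet coefficients re-imposed after every step."""
    coeffs = pywt.wavedec2(img, wavelet, level=2)
    arr, slices = pywt.coeffs_to_array(coeffs)
    known = arr.copy()
    for _ in range(steps):
        rec = pywt.waverec2(
            pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), wavelet)
        # TV subgradient: minus the divergence of the sign of the forward
        # differences (periodic boundary handling, for brevity).
        dx = np.sign(np.diff(rec, axis=0, append=rec[:1, :]))
        dy = np.sign(np.diff(rec, axis=1, append=rec[:, :1]))
        g = -(dx - np.roll(dx, 1, axis=0)) - (dy - np.roll(dy, 1, axis=1))
        # With an orthonormal wavelet, the chain rule carries the image-domain
        # gradient into coefficient space via the same forward transform.
        garr, _ = pywt.coeffs_to_array(pywt.wavedec2(g, wavelet, level=2))
        arr = arr - lr * garr
        arr[keep] = known[keep]  # the coefficients that survived stay exact
    return pywt.waverec2(
        pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), wavelet)
```

For experimentation, `keep` can be built by thresholding a random array to simulate loss; the loop preserves the known coefficients exactly while the TV descent suppresses oscillatory artefacts produced by the missing ones.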

20.
We consider the problem of locating a watermark in pages of archaic documents that have been both scanned and back-lit; the problem is of interest to codicologists in identifying and tracking paper materials. Commonly, the documents of interest are worn or damaged, and all information falls victim to very unfavourable signal-to-noise ratios; this is especially true of ‘hidden’ data such as watermarks and chain lines. We present an approach to recto removal, followed by highlighting of such ‘hidden’ data. The result is still of very low signal quality, so we also present a statistical approach to locating watermarks from a known lexicon of fragments. Results are presented from a comprehensively scanned nineteenth-century copy of the Qur’ān. The approach has lent itself to immediate exploitation in improving known watermarks and in distinguishing between twin copies.


