Similar Literature
20 similar documents retrieved (search time: 687 ms)
1.
Level set methods [Osher and Sethian. Fronts propagating with curvature-dependent speed: algorithms based on Hamilton–Jacobi formulations. J. Comput. Phys. 79 (1988) 12] have proved very successful for interface tracking in many different areas of computational science. However, current level set methods are limited by a poor balance between computational efficiency and storage requirements. Tree-based methods have relatively slow access times, whereas narrow band schemes lead to very large memory footprints for high resolution interfaces. In this paper we present a level set scheme for which both computational complexity and storage requirements scale with the size of the interface. Our novel level set data structure and algorithms are fast, cache efficient and allow for a very low memory footprint when representing high resolution level sets. We use a time-dependent and interface-adapting grid dubbed the "Dynamic Tubular Grid" or DT-Grid. Additionally, it has been optimized for advanced finite difference schemes currently employed in accurate level set computations. As a key feature of the DT-Grid, the associated interface propagations are not limited to any computational box and can expand freely. We present several numerical evaluations, including a level set simulation on a grid with an effective resolution of 1024³.
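The memory-scaling argument above can be illustrated with a toy narrow-band representation (not the DT-Grid itself): storing signed-distance values in a hash map only within a tube around the interface makes storage proportional to interface length rather than grid area. A Python sketch; the grid size, circle radius and tube half-width are arbitrary choices for illustration.

```python
import math

def narrow_band(n, radius, beta):
    """Store the signed distance to a circle only inside a tube of half-width
    beta around the interface; memory scales with interface length, not n^2."""
    band = {}
    h = 2.0 / n  # grid spacing on [-1, 1]^2
    for i in range(n):
        for j in range(n):
            x, y = -1.0 + i * h, -1.0 + j * h
            phi = math.hypot(x, y) - radius
            if abs(phi) <= beta:
                band[(i, j)] = phi
    return band

band = narrow_band(n=256, radius=0.5, beta=0.05)
dense_cells = 256 * 256
print(len(band), dense_cells)  # the band is a small fraction of the dense grid
```

A full scheme would also re-center the tube as the interface moves; this sketch only shows the storage idea.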

2.
We present the blueprints for a set of innovative reverse shooting algorithms that trap the global saddle path in systems with 2–4 state variables. The solution procedure is built around a new distance mapping and refined simplex algorithms. Since the algorithms are completely reliable and always work in the same way, we have been able to develop canned programs that solve for the global nonlinear saddle path in any model with 2–4 state variables. The programs are written in the spirit of plug and play: the user types in the equations of the model and then waits for the solution.
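As background for the reverse shooting idea, a minimal sketch on a hypothetical 2×2 linear saddle: start slightly off the steady state along the stable eigenvector and integrate backwards in time, which traces out the saddle path. Illustrative only; the paper's distance mapping and simplex refinements are not reproduced here.

```python
def reverse_shoot(A, v_stable, eps=1e-6, dt=1e-3, steps=10000):
    """Reverse shooting on the linear system dx/dt = A x: displace the state
    from the steady state (the origin) along the stable eigenvector and
    integrate backwards in time with explicit Euler."""
    x = [eps * v_stable[0], eps * v_stable[1]]
    path = [tuple(x)]
    for _ in range(steps):
        # backwards in time: dx/dtau = -A x
        dx0 = -(A[0][0] * x[0] + A[0][1] * x[1])
        dx1 = -(A[1][0] * x[0] + A[1][1] * x[1])
        x = [x[0] + dt * dx0, x[1] + dt * dx1]
        path.append(tuple(x))
    return path

A = [[0.5, 1.0], [0.0, -0.5]]            # eigenvalues +0.5 (unstable), -0.5 (stable)
path = reverse_shoot(A, v_stable=(1.0, -1.0))
# every point on the path satisfies x + y = 0: the stable manifold / saddle path
print(max(abs(p[0] + p[1]) for p in path))
```

Integrating backwards turns the stable direction into an expanding one, so numerical error is pushed off the saddle path instead of onto it, which is the reliability argument behind reverse shooting.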

3.
F. Chen, L. Shen, J. Deng. Computing 79(2–4):131–142, 2007
Parametric and implicit forms are two common representations of geometric objects. It is important to be able to pass back and forth between the two representations, two processes called parameterization and implicitization, respectively. In this paper, we study the parameterization and implicitization of quadrics (quadratic parametric surfaces with two base points) and cubic surfaces (cubic parametric surfaces with six base points) with the help of μ-bases, a newly developed tool which connects the parametric form and the implicit form of a surface. For both cases, we show that the minimal μ-bases are all linear in the parametric variables, and based on this observation, very efficient algorithms are devised to compute the minimal μ-bases either from the parametric equation or from the implicit equation. The conversion between the parametric equation and the implicit equation can be easily accomplished from the minimal μ-bases.

4.
In this paper, we design and implement a variety of parallel algorithms for both sweep spin selection and random spin selection. We analyze our parallel algorithms on LogP, a portable and general parallel machine model. We then obtain rigorous theoretical runtime results on LogP for all the parallel algorithms. Moreover, a guiding equation is derived for choosing data layouts (blocked vs. stripped) for sweep spin selection. In regard to random spin selection, we are able to develop parallel algorithms with efficient communication schemes. We introduce two novel schemes, namely the FML scheme and the α-scheme. We analyze randomness of our schemes using statistical methods and provide comparisons between the different schemes.

5.
This paper presents the positivity analysis of the explicit and implicit Lax–Friedrichs (LxF) schemes for the compressible Euler equations. The theoretical proof is closely based on the decomposition of the fluid variables and their corresponding fluxes into a pseudo-particle representation. For both the explicit and the implicit first-order LxF schemes, starting from any realizable initial state, the density and the internal energy remain non-negative under the CFL condition with Courant number 1.
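The positivity mechanism can be seen in miniature on a scalar problem (the paper treats the full Euler equations): for linear advection, the explicit LxF update is a convex combination of the two neighbouring values whenever the Courant number is at most 1, so non-negative data stays non-negative. A sketch with arbitrary grid and pulse parameters:

```python
def lxf_step(u, nu):
    """One explicit Lax-Friedrichs step for linear advection u_t + a u_x = 0
    on a periodic grid, written in terms of the Courant number nu = a*dt/dx.
    For 0 <= nu <= 1 the weights (1+nu)/2 and (1-nu)/2 are non-negative and
    sum to 1, so the update is a convex combination of neighbours."""
    n = len(u)
    return [0.5 * (1.0 + nu) * u[(j - 1) % n] + 0.5 * (1.0 - nu) * u[(j + 1) % n]
            for j in range(n)]

u = [0.0] * 50
u[10] = 1.0                       # non-negative initial pulse
for _ in range(100):
    u = lxf_step(u, nu=0.9)
print(min(u))  # stays >= 0 under the CFL condition
```

The convex-combination structure also conserves the total mass of the pulse exactly.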

6.
Optimal, 7-stage, explicit, strong-stability-preserving (SSP) Hermite–Birkhoff (HB) methods of orders 4 to 8 with nonnegative coefficients are constructed by combining linear k-step methods with a 7-stage Runge–Kutta (RK) method of order 4. Compared to Huang's hybrid methods of the same order, the new methods generally have larger effective SSP coefficients and larger maximum effective CFL numbers, num_eff, on Burgers' equation, independently of the number k of steps, especially when k is small for both methods. Based on num_eff, some new methods of order 4 compare favorably with other methods of the same order, including RK104 of Ketcheson. The new SSP HB methods are listed in their Shu–Osher representation in the Appendix.
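For context, a sketch of the classical 3-stage, third-order SSP Runge-Kutta method in Shu-Osher form (a standard reference method, not one of the paper's Hermite-Birkhoff schemes), applied to the test equation u' = -u:

```python
def ssprk3(f, u, dt):
    """One step of the 3-stage, 3rd-order SSP Runge-Kutta method in
    Shu-Osher form: each stage is a convex combination of forward Euler
    steps, which is what lets SSP methods inherit the strong stability
    (e.g. TVD) bounds of forward Euler."""
    u1 = u + dt * f(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * f(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * f(u2))

u, dt = 1.0, 0.1
for _ in range(10):
    u = ssprk3(lambda v: -v, u, dt)   # integrate u' = -u to t = 1
print(u)  # close to exp(-1) = 0.36788...
```

The effective SSP coefficient the abstract refers to measures how much larger a stable step such a method allows relative to forward Euler, per stage.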

7.
In this paper, we combine a Piecewise Constant Level Set (PCLS) method with an MBO scheme to solve a structural shape and topology optimization problem. The geometrical boundary of the structure is represented implicitly by the discontinuities of PCLS functions. In contrast to the classical level set method (LSM), we do not need to solve a Hamilton–Jacobi partial differential equation (H-J PDE); the approach is therefore free of the CFL condition and of the reinitialization scheme. To solve the optimization problem under constraints, Additive Operator Splitting (AOS) and Multiplicative Operator Splitting (MOS) schemes are used. To increase the convergence speed and the efficiency of the PCLS method, we combine this approach with the MBO scheme. Advantages and disadvantages are discussed by solving several examples of 2D structural topology optimization problems.

8.
The numerical solution of time-dependent ordinary and partial differential equations presents a number of well-known difficulties, including, possibly, severe restrictions on time-step sizes for stability in explicit procedures, as well as the need to solve challenging, generally nonlinear systems of equations in implicit schemes. In this note we introduce a novel class of explicit methods based on the use of one-dimensional Padé approximation. These schemes, which are as simple and inexpensive per time-step as other explicit algorithms, possess, in many cases, stability properties similar to those offered by implicit approaches. We demonstrate the character of our schemes through application to notoriously stiff systems of ODEs and PDEs. In a number of important cases, use of these algorithms has resulted in orders-of-magnitude reductions in computing times over those required by leading approaches.
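The paper's schemes are not reproduced here, but the underlying appeal of Padé approximation can be sketched: the diagonal (1,1) Padé approximant of exp(z) stays bounded by 1 on the whole negative real axis (the A-stability property of implicit midpoint/trapezoidal rules), while a Taylor polynomial of comparable cost blows up for large |z|.

```python
def taylor2(z):
    """Degree-2 Taylor polynomial of exp(z): stability function of a
    2nd-order explicit method."""
    return 1.0 + z + z * z / 2.0

def pade11(z):
    """(1,1) diagonal Pade approximant of exp(z):
    R(z) = (1 + z/2) / (1 - z/2)."""
    return (1.0 + z / 2.0) / (1.0 - z / 2.0)

zs = [-0.5 * k for k in range(1, 41)]    # z in [-20, -0.5]
print(max(abs(pade11(z)) for z in zs))   # <= 1 everywhere on the negative axis
print(max(abs(taylor2(z)) for z in zs))  # grows without bound as |z| increases
```

This contrast is exactly why rational (Padé-type) approximations can lend explicit-cost schemes implicit-like stability.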

9.
DPLL-based SAT solvers progress by implicitly applying binary resolution. The resolution proofs that they generate are used, after the SAT solver's run has terminated, for various purposes. The most notable uses in formal verification are: extracting an unsatisfiable core, extracting an interpolant, and detecting clauses that can be reused in an incremental satisfiability setting (the latter uses the proof only implicitly, during the run of the SAT solver). Making the resolution proof smaller can benefit all of these goals: it can lead to smaller cores, smaller interpolants, and smaller clauses that are propagated to the next SAT instance in an incremental setting. We suggest two methods for doing so that are linear in the size of the proof. Our first technique, called Recycle-Units, uses each learned constant (unit clause) (x) to simplify resolution steps in which x was the pivot, prior to when it was learned. Our second technique simplifies proofs in which several nodes in the resolution graph, one of which dominates the others, correspond to the same pivot. Our experiments with industrial instances show that these simplifications reduce the core by ≈5% and the proof by ≈13%. They reduce the core less than competing methods such as run-till-fix, but whereas our algorithms are linear in the size of the proof, the latter and other competing techniques are all exponential, as they are based on SAT runs. If we consider the size of the proof (the resolution graph) as polynomial in the number of variables (not necessarily the case in general), this gives our method an exponential time reduction compared to existing tools for small core extraction. Our experiments show that this result is evident in practice, more so for the second method: it rarely takes more than a few seconds, even when competing tools time out, and hence it can be used as a cheap proof post-processing procedure.

10.
We present three explicit schemes for distributing M variables among N memory modules, where M = Θ(N^{3/2}), M = Θ(N^2), and M = Θ(N^3), respectively. Each variable is replicated into a constant number of copies stored in distinct modules. We show that N processors, directly accessing the memories through a complete interconnection, can read/write any set of N variables in worst-case time O(N^{1/3}), O(N^{1/2}), and O(N^{2/3}), respectively, for the three schemes. The access times for the last two schemes are optimal with respect to the particular redundancy values used by such schemes. The address computation can be carried out efficiently by each processor without recourse to a complete memory map, requiring only O(1) internal storage. This paper was partially supported by NSF Grants CCR-91-96152 and CCR-94-00232, by ONR Contract N00014-91-J-4052, ARPA Order 8225, and by the ESPRIT III Basic Research Programme of the EC under Contract No. 9072 (Project GEPPCOM). Results reported here were presented in preliminary form at the 10th Symposium on Theoretical Aspects of Computer Science (Würzburg, Germany, 1993), and at the 5th ACM Symposium on Parallel Algorithms and Architectures (Velen, Germany, 1993).
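A hypothetical sketch of the replication idea, with hash-based placement standing in for the paper's explicit combinatorial schemes: each variable gets a constant number of copies in distinct modules, computable by any processor in O(1) space without a memory map, so a read can be served from whichever copy is least contended.

```python
import hashlib

def place(var, n_modules, copies=3):
    """Hypothetical placement rule: replicate variable `var` into `copies`
    distinct modules chosen by repeated hashing. The paper uses explicit
    combinatorial constructions; hashing merely illustrates the interface."""
    mods = []
    k = 0
    while len(mods) < copies:
        h = int(hashlib.sha256(f"{var}:{k}".encode()).hexdigest(), 16) % n_modules
        if h not in mods:
            mods.append(h)
        k += 1
    return mods

n = 16
placement = {v: place(v, n) for v in range(64)}   # e.g. M = N^{3/2} with N = 16
print(sorted(placement[0]))
```

Any single copy suffices for a read, which is what lets the access schedule avoid hot modules.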

11.
In Zhang and Shu (J. Comput. Phys. 229:3091–3120, 2010), two of the authors constructed uniformly high order accurate finite volume and discontinuous Galerkin (DG) schemes satisfying a strict maximum principle for scalar conservation laws on rectangular meshes. The technique is generalized to positivity-preserving (of density and pressure) high order DG or finite volume schemes for compressible Euler equations in Zhang and Shu (J. Comput. Phys. 229:8918–8934, 2010). The extension of these schemes to triangular meshes is conceptually plausible but highly nontrivial. In this paper, we first introduce a special quadrature rule which is exact for two-variable polynomials over a triangle of a given degree and satisfies a few other conditions, by which we can construct high order maximum-principle-satisfying finite volume schemes (e.g. essentially non-oscillatory (ENO) or weighted ENO (WENO) schemes) or DG methods solving two-dimensional scalar conservation laws on triangular meshes. The same method can preserve the maximum principle for DG or finite volume schemes solving two-dimensional incompressible Euler equations in the vorticity stream-function formulation, or any passive convection equation with an incompressible velocity field. We also obtain positivity-preserving (for density and pressure) high order DG or finite volume schemes solving compressible Euler equations on triangular meshes. Numerical tests for the third order Runge-Kutta DG (RKDG) method on unstructured meshes are reported.
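A standard building block behind maximum-principle-satisfying schemes of this kind is the linear scaling limiter, which squeezes the point values of the high-order polynomial toward the cell average until they fall in [m, M], without changing the average. A sketch (assuming the cell average itself already lies in [m, M]):

```python
def scale_limiter(point_vals, m, M):
    """Linear scaling limiter in the spirit of Zhang-Shu: replace each point
    value p by avg + theta*(p - avg), with the largest theta in [0, 1] that
    pulls all values into [m, M]. The cell average is preserved exactly."""
    avg = sum(point_vals) / len(point_vals)
    hi, lo = max(point_vals), min(point_vals)
    theta = 1.0
    if hi > M:
        theta = min(theta, (M - avg) / (hi - avg))
    if lo < m:
        theta = min(theta, (avg - m) / (avg - lo))
    return [avg + theta * (p - avg) for p in point_vals]

vals = scale_limiter([1.3, 0.2, -0.1, 0.6], m=0.0, M=1.0)
print(vals)  # all within [0, 1], cell average unchanged
```

In the actual schemes, the point values are the quadrature-point values of the DG or reconstruction polynomial, and the special triangle quadrature rule is what guarantees the cell average stays in range.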

12.
The paper describes methods for solving very large overdetermined algebraic polynomial systems, using an example that arises from a classification of all integrable 3-dimensional scalar discrete quasilinear equations Q3 = 0 on an elementary cubic cell of the lattice ℤ³. The overdetermined polynomial algebraic system that has to be solved is far too large to be formulated explicitly. A "probing" technique, which replaces independent variables by random integers or zero, allows one to formulate subsets of this system. An automatic alternation of equation-formulating steps and equation-solving steps leads to an iteration process that solves the computational problem.

13.
In this work, we consider linear, shift-invariant and complete two-dimensional (2D) discrete systems from a behavioral point of view. In particular, we examine behaviors with two types of variables: the variables that we wish to control (the to-be-controlled variables) and the variables on which we are allowed to enforce restrictions (the control variables). The main purpose of this contribution is to derive necessary and sufficient conditions for the stabilization of the to-be-controlled variables by 'attaching' a controller to the control variables. This problem turns out to be related to the decomposition of a given behavior into the sum of two sub-behaviors. Moreover, we show that under certain conditions it is possible to obtain a constructive solution and characterize the structure of the to-be-controlled behavior.

14.
Multi-body dynamics simulation of geometrically exact Cosserat rods
In this paper, we present a viscoelastic rod model that is suitable for fast and accurate dynamic simulations. It is based on Cosserat's geometrically exact theory of rods and is able to represent extension, shearing ('stiff' dof), bending and torsion ('soft' dof). For inner dissipation, a consistent damping potential proposed by Antman is chosen. We parametrise the rotational dof by unit quaternions and directly use the quaternionic evolution differential equation for the discretisation of the Cosserat rod curvature.
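The quaternionic evolution equation mentioned above, dq/dt = ½ q ⊗ (0, ω), can be sketched with a naive explicit Euler integrator plus renormalization (the paper's discretisation of the rod curvature is considerably more sophisticated):

```python
import math

def quat_mul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def quat_step(q, omega, dt):
    """Explicit Euler step of dq/dt = 0.5 * q * (0, omega), followed by
    renormalization to keep q on the unit sphere."""
    dq = quat_mul(q, (0.0,) + tuple(omega))
    q = tuple(qi + 0.5 * dt * dqi for qi, dqi in zip(q, dq))
    norm = math.sqrt(sum(qi * qi for qi in q))
    return tuple(qi / norm for qi in q)

q = (1.0, 0.0, 0.0, 0.0)
for _ in range(1000):
    q = quat_step(q, (0.0, 0.0, math.pi), dt=0.001)  # spin about z at rate pi
# after t = 1 the body has rotated by pi about z, so q is close to (0, 0, 0, 1)
print(q)
```

Renormalizing after each step is the simplest way to respect the unit-length constraint that makes the quaternion a valid rotation parametrisation.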

15.
An ad hoc wireless network is a collection of wireless mobile hosts forming a temporary network without the aid of any established infrastructure or centralized administration. This type of network is of great importance in situations where it is very difficult to provide the necessary infrastructure, but it is a challenging task to enable fast and reliable communication within such a network. In this paper we model and analyze the performance of so-called power-controlled ad hoc wireless networks: networks where the mobile hosts are able to change their transmission power. We concentrate on finding schemes for routing arbitrary permutations in these networks. In general, it is NP-hard even to find an n^{1-ε}-approximation, for any constant ε, to the fastest possible strategy for routing a given permutation problem on n mobile hosts. However, we demonstrate here that if we allow ourselves to consider slightly less general problems, efficient solutions can be found. We first demonstrate that there is a natural class of distributed schemes for handling node-to-node communication on top of which online route selection and scheduling strategies can be constructed such that the performance of this class of schemes can be exploited in a nearly optimal way for routing permutations in any static power-controlled ad hoc network. We then demonstrate that if we restrict ourselves to the important case of routing between nodes distributed randomly in a Euclidean space, we can route in a time that is asymptotically optimal for any routing scheme. Received in final form January 31, 2000. Online publication October 10, 2000.

16.
Quantified constraint satisfaction problems (QCSPs) are an extension of constraint satisfaction problems (CSPs) with both universal and existential quantifiers. In this paper we apply variab…

17.
We study the evaluation of positive conjunctive queries with Boolean aggregate tests (similar to HAVING in SQL) on probabilistic databases. More precisely, we study conjunctive queries with predicate aggregates on probabilistic databases where the aggregation function is one of MIN, MAX, EXISTS, COUNT, SUM, AVG, or COUNT(DISTINCT) and the comparison function is one of =, ≠, ≥, >, ≤, or <. The complexity of evaluating a HAVING query depends on the aggregation function, α, and the comparison function, θ. In this paper, we establish a set of trichotomy results for conjunctive queries with HAVING predicates parametrized by (α, θ). For such queries (without self-joins), one of the following three statements is true: (1) The exact evaluation problem has polynomial-time data complexity; in this case, we call the query safe. (2) The exact evaluation problem is #P-hard, but the approximate evaluation problem has (randomized) polynomial-time data complexity; more precisely, there exists an FPTRAS for the query. In this case, we call the query apx-safe. (3) The exact evaluation problem is #P-hard, and the approximate evaluation problem is also hard. We call these queries hazardous. The precise definition of each class depends on the aggregate considered and the comparison function. Thus, we have queries that are (MAX, ≥)-safe, (COUNT, ≤)-apx-safe, (SUM, =)-hazardous, etc. Our trichotomy result is a significant extension of a previous dichotomy result for Boolean conjunctive queries into safe and not safe. For each of the three classes we present novel techniques. For safe queries, we describe an evaluation algorithm that uses random variables over semirings. For apx-safe queries, we describe an FPTRAS that relies on a novel algorithm for generating a random possible world satisfying a given condition. Finally, for hazardous queries we give novel proofs of hardness of approximation.
The results for safe queries were previously announced (Ré, C., Suciu, D., Efficient evaluation of …, in: DBPL, pp. 186–200, 2007), but all other results are new.
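As a generic illustration of approximate evaluation (a plain Monte Carlo sampler, not the paper's FPTRAS), a HAVING-style test such as COUNT(*) ≥ k over a tuple-independent probabilistic table can be estimated by sampling possible worlds:

```python
import random

def mc_having_count(probs, k, trials=20000, seed=1):
    """Estimate P(COUNT(*) >= k) over a tuple-independent probabilistic
    table: each trial samples one possible world (every tuple appears
    independently with its probability) and checks the HAVING condition."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        count = sum(1 for p in probs if rng.random() < p)
        if count >= k:
            hits += 1
    return hits / trials

est = mc_having_count([0.5, 0.5], k=1)
print(est)  # exact answer is 1 - 0.5*0.5 = 0.75
```

Naive sampling like this gives only additive error guarantees; the point of an FPTRAS is a relative-error guarantee in polynomial time, which requires the conditional-sampling machinery the abstract describes.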

18.
Problems in electromagnetic wave propagation often require high-accuracy approximations on low-resolution computational grids. For non-stationary problems, such schemes should possess the same approximation order in space and time. In the present article we propose for electromagnetic applications an explicit class of robust finite-volume (FV) schemes for the Maxwell equations. To achieve high accuracy we combine the FV method with the so-called ADER approach, resulting in schemes which are arbitrarily high order accurate in space and time. Numerical results and convergence investigations are shown for two- and three-dimensional test cases on Cartesian grids, where the FV-ADER schemes used are up to 8th order accurate in both space and time.

19.
A single Model Transformation Chain (MTC) takes a high-level input model rooted in the problem domain and, through one or more transformation steps, produces a low-level output model rooted in the solution domain. Building a single "almighty" MTC that is in charge of every design, implementation and platform-specific concern is a complex task. Instead, we can use several smaller MTCs that are easier to develop and maintain, because each MTC is independently developed to focus on a specific concern. However, the MTCs must interoperate to produce complete applications; this inherently creates dependencies between them, because each MTC generates a part of the final low-level model. In this paper, we propose an external and explicit mechanism to track dependencies between the MTCs (i.e., the MTCs are oblivious to the mechanism), which is used to automatically derive correspondence relationships between the final models generated by each MTC. The contribution of our mechanism is a reduction in the complexity of building interoperable MTCs, because the derived correspondences are resolved after the transformations execute, in the solution domain where the semantics of every concept is well-defined. The resolution process consists of (1) checking the consistency between the models, (2) producing communication bridges, or (3) guiding the composition of the models. This paper presents three case studies to illustrate the derivation and resolution of correspondence relationships through the MTCs.

20.
Many real-world engineering design problems are naturally cast as optimization programs with uncertainty-contaminated data. In this context, a reliable design must be able to cope in some way with the presence of uncertainty. In this paper, we consider two standard philosophies for finding optimal solutions of uncertain convex optimization problems. In the first approach, classical in the stochastic optimization literature, the optimal design should minimize the expected value of the objective function with respect to the uncertainty (average approach), while in the second it should minimize the worst-case objective (worst-case or min–max approach). Both approaches are briefly reviewed in this paper and are shown to lead to exact and numerically efficient solution schemes when the uncertainty enters the data in simple form. For general uncertainty dependence, however, the problems are numerically hard. In this paper, we present two techniques based on uncertainty randomization that permit efficient solution of suitable probabilistic relaxations of the indicated problems, with full generality with respect to the way in which the uncertainty enters the problem data. In the specific context of truss topology design, uncertainty arises, for instance, from imprecise knowledge of material characteristics and/or loading configurations. We show how reliable structural design can be obtained using the proposed techniques, based on the interplay of convex optimization and randomization.
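The average versus worst-case distinction can be sketched on a toy scalar objective with sampled uncertainty: for f(x, δ) = (x − δ)², the average-case optimum is the sample mean of δ while the min-max optimum is the midpoint of the sampled range. Illustrative only; the objective, distribution and sample count are arbitrary choices, and sampling the uncertainty is in the spirit of the randomization idea, not the paper's algorithms.

```python
import random

random.seed(0)
deltas = [random.uniform(-1.0, 2.0) for _ in range(1000)]  # sampled uncertainty

# For f(x, d) = (x - d)^2 the sample-average objective is minimized at the
# sample mean, while the worst-case objective is minimized at the midpoint
# of the sampled range (both have closed forms, so no solver is needed).
x_avg = sum(deltas) / len(deltas)
x_minmax = 0.5 * (min(deltas) + max(deltas))

def worst(x):
    """Worst-case objective over the sampled uncertainty set."""
    return max((x - d) ** 2 for d in deltas)

print(x_avg, x_minmax)
```

By construction the min-max design can never have a larger worst-case objective than the average-case design, which is the trade-off the two philosophies embody.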
