Similar Documents
20 similar records found (search time: 31 ms)
1.
In a previous paper by Ryan and Shu [Ryan, J. K., and Shu, C.-W. (2003). Methods Appl. Anal. 10(2), 295–307], a one-sided post-processing technique for the discontinuous Galerkin method was introduced for reconstructing solutions near computational boundaries and discontinuities in the boundaries, as well as for changes in mesh size. This technique requires prior knowledge of the discontinuity location in order to determine whether to use centered, partially one-sided, or one-sided post-processing. We now present two alternative stencil choosing schemes to automate the choice of post-processing stencil. The first is an ENO type stencil choosing procedure, designed to choose centered post-processing in smooth regions and one-sided or partially one-sided post-processing near a discontinuity; the second is based on the edge detection method designed by Archibald, Gelb, and Yoon [Archibald, R., Gelb, A., and Yoon, J. (2005). SIAM J. Numer. Anal. 43, 259–279; Archibald, R., Gelb, A., and Yoon, J. (2006). Appl. Numer. Math. (submitted)]. We compare these stencil choosing techniques and analyze their respective strengths and weaknesses. Finally, the automated stencil choices are applied in conjunction with the appropriate post-processing procedures, and it is determined that the resulting numerical solutions are of the correct order.
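An ENO-type stencil choice of the kind mentioned above can be sketched with undivided differences: grow the stencil toward the side where the differences stay smaller, so that it avoids crossing a discontinuity. This is a generic illustration on a uniform grid, not the paper's exact procedure:

```python
import numpy as np

def eno_stencil_left(u, i, r):
    """Choose the leftmost index of an ENO stencil of r+1 cells starting
    from cell i: repeatedly extend toward the side with the smaller
    undivided difference, so the stencil avoids crossing a discontinuity.
    (Sketch on a uniform grid; u is the array of cell values.)"""
    # D[k][j] holds the k-th undivided difference on cells j..j+k
    D = [np.asarray(u, dtype=float)]
    for k in range(1, r + 1):
        D.append(D[-1][1:] - D[-1][:-1])
    left = i
    for k in range(1, r + 1):
        if left - 1 >= 0 and (left + k > len(u) - 1
                              or abs(D[k][left - 1]) < abs(D[k][left])):
            left -= 1
    return left

# Step data: smooth (zeros) on the left, jump to 1 at index 5.
u = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(eno_stencil_left(u, 4, 2))  # → 2 (stencil 2..4 stays left of the jump)
```

Starting instead from cell 5 keeps the stencil entirely on the right side of the jump, which is exactly the behavior an automated one-sided post-processing choice needs.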

2.
Edge detection from Fourier spectral data is important in many applications including image processing and the post-processing of solutions to numerical partial differential equations. The concentration method, introduced by Gelb and Tadmor in 1999, locates jump discontinuities in piecewise smooth functions from their Fourier spectral data. However, as is true for all global techniques, the method yields strong oscillations near the jump discontinuities, which makes it difficult to distinguish true discontinuities from artificial oscillations. This paper introduces refinements to the concentration method to reduce the oscillations. These refinements also improve the results in noisy environments. One technique adds filtering to the concentration method. Another uses convolution to determine the strongest correlations between the waveform produced by the concentration method and the one produced by the jump function approximation of an indicator function. A zero crossing based concentration factor, which creates a more localized formulation of the jump function approximation, is also introduced. Finally, the effects of zero-mean white Gaussian noise on the refined concentration method are analyzed. The investigation confirms that by applying the refined techniques, the variance of the concentration method is significantly reduced in the presence of noise. This work was partially supported by NSF grants CNS 0324957, DMS 0510813, DMS 0652833, and NIH grant EB 025533-01 (AG).
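As a minimal illustration of the (unrefined) concentration method described above, the jump function of f(x) = sign(x) can be approximated from its exact Fourier coefficients using the first-order polynomial concentration factor σ(ξ) = ξ. The normalization below follows our reading of the Gelb–Tadmor formulation, so treat it as a sketch rather than the paper's exact method:

```python
import numpy as np

def concentration_jump(fourier_coeffs, N, x):
    """Approximate the jump function [f](x) from Fourier coefficients
    c_k (k = -N..N) using the first-order polynomial concentration
    factor sigma(xi) = xi:  S_N[f](x) = i*pi * sum sgn(k) sigma(|k|/N) c_k e^{ikx}."""
    k = np.arange(-N, N + 1)
    sigma = np.abs(k) / N
    phases = np.exp(1j * np.outer(x, k))
    return np.real(1j * np.pi * phases @ (np.sign(k) * sigma * fourier_coeffs))

# Example: f(x) = sign(x) on [-pi, pi), which has a jump of size +2 at x = 0.
N = 64
k = np.arange(-N, N + 1)
c = np.zeros(2 * N + 1, dtype=complex)
nz = k != 0
c[nz] = (1 - (-1.0) ** k[nz]) / (1j * np.pi * k[nz])  # exact coefficients

x = np.linspace(-np.pi, np.pi, 501)
S = concentration_jump(c, N, x)
print(S[np.argmin(np.abs(x))])  # value near x = 0 approaches the jump size 2
```

Away from the jumps the approximation decays like O(1/N) but oscillates, which is precisely the behavior the refinements in the abstract are designed to suppress.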

3.
Edge detection is an essential task in image processing. In some applications, such as Magnetic Resonance Imaging, the information about an image is available only through its frequency (Fourier) data. In this case, edge detection is particularly challenging, as it requires extracting local information from global data. The problem is exacerbated when the data are noisy. This paper proposes a new edge detection algorithm which combines the concentration edge detection method (Gelb and Tadmor in Appl. Comput. Harmon. Anal. 7:101–135, 1999) with statistical hypothesis testing. The result is a method that achieves a high probability of detection while maintaining a low probability of false detection.

4.
We present a new method for estimating the edges in a piecewise smooth function from blurred and noisy Fourier data. The proposed method is constructed by combining the so-called concentration factor edge detection method, which uses a finite number of Fourier coefficients to approximate the jump function of a piecewise smooth function, with compressed sensing ideas. Due to the global nature of the concentration factor method, Gibbs oscillations feature prominently near the jump discontinuities. This can cause the misidentification of edges when simple thresholding techniques are used. In fact, the true jump function is sparse, i.e. zero almost everywhere with non-zero values only at the edge locations. Hence we adopt an idea from compressed sensing and propose a method that uses a regularized deconvolution to remove the artifacts. Our new method is fast, in the sense that it only needs the solution of a single \(l^1\) minimization. Numerical examples demonstrate the accuracy and robustness of the method in the presence of noise and blur.
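The paper's actual \(l^1\) formulation is not reproduced here, but its flavor can be sketched with generic iterative soft-thresholding (ISTA) for min 0.5‖Ax − b‖² + λ‖x‖₁, recovering a sparse "jump vector" from blurred, noisy observations. All sizes, kernel widths, and parameter values below are invented for illustration:

```python
import numpy as np

def ista(A, b, lam, steps=500):
    """Iterative soft-thresholding (ISTA) for
    min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = A.T @ (A @ x - b)              # gradient of the smooth part
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

# Toy deconvolution: a sparse "jump vector" blurred by a Gaussian kernel.
n = 64
rng = np.random.default_rng(0)
true_x = np.zeros(n); true_x[20] = 1.5; true_x[45] = -1.0   # two edges
t = np.arange(n)
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 2.0) ** 2)   # blur matrix
b = A @ true_x + 0.01 * rng.standard_normal(n)              # blurred + noisy data
x_hat = ista(A, b, lam=0.05)
print(np.argsort(np.abs(x_hat))[-2:])  # indices of the two strongest recovered edges
```

Because the true jump vector is sparse, the ℓ¹ penalty pushes the small oscillatory artifacts to exactly zero while keeping the two genuine edges.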

5.
Consider a piecewise smooth function for which the (pseudo-)spectral coefficients are given. It is well known that while spectral partial sums yield exponentially convergent approximations for smooth functions, the results for piecewise smooth functions are poor, with spurious oscillations developing near the discontinuities and a much reduced overall convergence rate. This behavior, known as the Gibbs phenomenon, is considered one of the major drawbacks in the application of spectral methods. Various types of reconstruction methods developed for the recovery of piecewise smooth functions have met with varying degrees of success. The Gegenbauer reconstruction method, originally proposed by Gottlieb et al., has the particularly impressive ability to reconstruct piecewise analytic functions with exponential convergence up to the points of discontinuity. However, it has been sharply criticized for its high cost and susceptibility to round-off error. In this paper, a new approach to Gegenbauer reconstruction is considered, resulting in a reconstruction method that is less computationally costly, yet still enjoys superior convergence. The idea is to create a procedure that combines the well known exponential filtering method in smooth regions away from the discontinuities with the Gegenbauer reconstruction method in regions close to the discontinuities. This hybrid approach benefits from both the simplicity of exponential filtering and the high resolution properties of the Gegenbauer reconstruction method. Additionally, a new way of computing the Gegenbauer coefficients from Jacobi polynomial expansions is introduced that is both more cost effective and less prone to round-off errors.
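The exponential-filtering half of the hybrid above is simple to illustrate: multiply the Fourier coefficients by σ(η) = exp(−α η^{2p}) before summing. Below is a sketch on f(x) = sign(x) with parameter values of our choosing (the Gegenbauer half of the hybrid is not shown):

```python
import numpy as np

def filtered_sum(c, N, x, p=4, alpha=36.0):
    """Evaluate the Fourier partial sum with an exponential filter
    sigma(eta) = exp(-alpha * eta^(2p)); alpha ~ -log(machine eps)
    drives sigma(1) down to machine precision."""
    k = np.arange(-N, N + 1)
    sigma = np.exp(-alpha * (np.abs(k) / N) ** (2 * p))
    return np.real(np.exp(1j * np.outer(x, k)) @ (sigma * c))

# f(x) = sign(x): exact Fourier coefficients of a piecewise smooth function.
N = 32
k = np.arange(-N, N + 1)
c = np.zeros(2 * N + 1, dtype=complex)
nz = k != 0
c[nz] = (1 - (-1.0) ** k[nz]) / (1j * np.pi * k[nz])

x = np.linspace(1.0, 2.0, 200)                 # region away from the jumps at 0, ±pi
raw = filtered_sum(c, N, x, p=4, alpha=0.0)    # alpha = 0 gives the unfiltered sum
filt = filtered_sum(c, N, x)
print(np.max(np.abs(raw - 1)), np.max(np.abs(filt - 1)))  # filtering cuts the error
```

In this smooth region the filtered sum is far more accurate than the raw partial sum; near the jumps, however, filtering alone blurs the edge, which is why the hybrid switches to Gegenbauer reconstruction there.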

6.
Data of piecewise smooth images are sometimes acquired as Fourier samples. Standard reconstruction techniques yield the Gibbs phenomenon, causing spurious oscillations at jump discontinuities and an overall reduced rate of convergence to first order away from the jumps. Filtering is an inexpensive way to improve the rate of convergence away from the discontinuities, but it has the adverse side effect of blurring the approximation at the jump locations. On the flip side, high resolution post processing algorithms are often computationally cost prohibitive and also require explicit knowledge of all jump locations. Recent convex optimization algorithms using \(l^1\) regularization exploit the expected sparsity of some features of the image. Wavelets or finite differences are often used to generate the corresponding sparsifying transform and work well for piecewise constant images. They are less useful when there is more variation in the image, however. In this paper we develop a convex optimization algorithm that exploits the sparsity in the edges of the underlying image. We use the polynomial annihilation edge detection method to generate the corresponding sparsifying transform. Our method successfully reduces the Gibbs phenomenon with only minimal blurring at the discontinuities while retaining a high rate of convergence in smooth regions.

7.
In this paper, we generalize the high order well-balanced finite difference weighted essentially non-oscillatory (WENO) scheme, designed earlier by us in Xing and Shu (2005, J. Comput. Phys. 208, 206–227) for the shallow water equations, to solve a wider class of hyperbolic systems with separable source terms including the elastic wave equation, the hyperbolic model for a chemosensitive movement, the nozzle flow and a two phase flow model. Properties of the scheme for the shallow water equations (Xing and Shu 2005, J. Comput. Phys. 208, 206–227), such as the exact preservation of the balance laws for certain steady state solutions, the non-oscillatory property for general solutions with discontinuities, and the genuine high order accuracy in smooth regions, are maintained for the scheme when applied to this general class of hyperbolic systems.

8.
An Improvement of Adjustable Bézier Curves
Objective: When complex shapes are represented by Bézier curves, the control points of adjacent curves must satisfy certain smoothness conditions; in general, the higher the required smoothness, the more complex the conditions. By improving the "adjustable Bézier curves" of the literature, we construct automatically smooth piecewise composite curves with several advantages. Methods: We first give a sufficient condition for two curves with positional continuity to be G^l continuous, and then prove that "adjustable Bézier curves" can achieve G^l smooth joining (where l is a parameter of the curve) under the G^l smooth joining conditions of ordinary Bézier curves. We then improve the "adjustable Bézier basis" to obtain a new set of basis functions, and use them to construct a new curve following the definition of Bézier curves. We analyze the smooth joining conditions of the new curve and, based on these conditions, define a piecewise composite curve. Results: For the new curve, as long as the last control edge of the preceding curve coincides with the first control edge of the following curve, the two curves join smoothly automatically, and the smoothness at the joint can be adjusted freely simply by changing the parameter value. Piecewise composite curves constructed from the new curves in this special way have automatic smoothness and local control similar to B-spline curves. The difference is that each segment of the composite curve can be defined by a different number of control points, and with suitable parameters the curve can achieve any desired smoothness at each joint. Moreover, moving one control point affects the shape of at most two curve segments, and changing a parameter in one segment affects only that segment's shape and the smoothness at no more than two joints. Conclusion: This paper gives a general method for constructing curves that are easy to join, greatly simplifying the joining conditions. On this basis, a new definition of piecewise composite curves is proposed that requires no additional conditions on the control points; the resulting curves are automatically smooth, and their shape and smoothness can be adjusted either globally or locally. The method is general and creates conditions for the design of complex curves.

9.
Spectral series expansions of piecewise smooth functions are known to yield poor results, with spurious oscillations forming near the jump discontinuities and reduced convergence throughout the interval of approximation. The spectral reprojection method, most notably the Gegenbauer reconstruction method, can restore exponential convergence to piecewise smooth function approximations from their (pseudo-)spectral coefficients. Difficulties may arise due to numerical robustness and ill-conditioning of the reprojection basis polynomials, however. This paper considers non-classical orthogonal polynomials as reprojection bases for a general order (finite or spectral) reconstruction of piecewise smooth functions. Furthermore, when the given data are discrete grid point values, the reprojection polynomials are constructed to be orthogonal in the discrete sense, rather than by the usual continuous inner product. No calculation of optimal quadrature points is therefore needed. This adaptation suggests a method to approximate piecewise smooth functions from discrete non-uniform data, and results in a one-dimensional approximation that is accurate and numerically robust.

10.
High-order finite difference discontinuity detectors are essential for locating discontinuities in discretized functions, especially for shock detection when high-order numerical methods are applied to high-speed compressible flows. The detectors are used mainly for switching between numerical schemes in regions of discontinuity, in order to include artificial dissipation and avoid spurious oscillations. In this work a discontinuity detector is analysed through the construction of a piecewise polynomial function that incorporates jump discontinuities present in the function or its derivatives (up to third order), together with a discussion of the selection of the cut-off value required by the detector. The detector function is also compared with other discontinuity detectors through numerical examples.

11.
In this work we show that the N-photon Greenberger–Horne–Zeilinger entangled-state generation protocol proposed by Xia et al. (Appl Phys Lett 92(1–3):021127, 2008) can be realized with a simpler optical setup and a higher success probability. The present setup involves only simple linear optical elements, N single-photon superposition states, and conventional photon detectors, which makes the protocol more feasible in experiments.

12.
For use in real-time applications, we present a fast algorithm for converting a quad mesh to a smooth, piecewise polynomial surface on the Graphics Processing Unit (GPU). The surface has well-defined normals everywhere and closely mimics the shape of Catmull–Clark subdivision surfaces. It consists of bicubic splines wherever possible, and a new class of patches—c-patches—where a vertex has a valence different from 4. The algorithm fits well into parallel streams so that meshes with 12,000 input quads, of which 60% have one or more non-4-valent vertices, are converted, evaluated and rendered with 9×9 resolution per quad at 50 frames per second. The GPU computations are ordered so that evaluation avoids pixel dropout.

13.
The convergence to steady state solutions of the Euler equations for the fifth-order weighted essentially non-oscillatory (WENO) finite difference scheme with the Lax–Friedrichs flux splitting [7, (1996) J. Comput. Phys. 126, 202–228] is studied through systematic numerical tests. Numerical evidence indicates that this type of WENO scheme suffers from slight post-shock oscillations. Even though these oscillations are small in magnitude and do not affect the "essentially non-oscillatory" property of WENO schemes, they are indeed responsible for the numerical residual hanging at the truncation error level of the scheme instead of settling down to machine zero. We propose a new smoothness indicator for the WENO schemes in steady state calculations, which performs better near the steady shock region than the original smoothness indicator in [7, (1996) J. Comput. Phys. 126, 202–228]. With our new smoothness indicator, the slight post-shock oscillations are either removed or significantly reduced and convergence is improved significantly. Numerical experiments show that the residual for the WENO scheme with this new smoothness indicator can converge to machine zero for one and two dimensional (2D) steady problems with strong shock waves when there are no shocks passing through the domain boundaries. Dedicated to the memory of Professor Xu-Dong Liu.
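For context, the original smoothness indicators from [7] that the abstract proposes to replace are the Jiang–Shu indicators; the sketch below shows them together with the resulting nonlinear weights (the paper's new indicator is not reproduced here):

```python
import numpy as np

def weno5_weights(v, eps=1e-6):
    """Jiang-Shu nonlinear weights for 5th-order WENO from five cell
    averages v = (v0..v4); returns (w0, w1, w2), to be compared against
    the ideal linear weights (0.1, 0.6, 0.3)."""
    v0, v1, v2, v3, v4 = v
    # smoothness indicators of the three 3-cell sub-stencils
    b0 = 13/12*(v0 - 2*v1 + v2)**2 + 1/4*(v0 - 4*v1 + 3*v2)**2
    b1 = 13/12*(v1 - 2*v2 + v3)**2 + 1/4*(v1 - v3)**2
    b2 = 13/12*(v2 - 2*v3 + v4)**2 + 1/4*(3*v2 - 4*v3 + v4)**2
    d = np.array([0.1, 0.6, 0.3])                 # ideal (linear) weights
    a = d / (eps + np.array([b0, b1, b2]))**2
    return a / a.sum()

print(weno5_weights([1, 2, 3, 4, 5]))        # smooth data: close to (0.1, 0.6, 0.3)
print(weno5_weights([0, 0, 0, 10, 10]))      # jump ahead: weight shifts to stencil 0
```

Near a steady shock, small changes in these indicators flip the weights between sub-stencils from step to step, which is one way the residual can stall above machine zero.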

14.
The Attappady Black goat is a native goat breed of Kerala in India and is mainly known for its valuable meat and skin. In this work, a comparative study of a connectionist network [also known as an artificial neural network (ANN)] and multiple regression is made to predict the body weight from body measurements in Attappady Black goats. A multilayer feed forward network with backpropagation of error learning mechanism was used to predict the body weight. Data collected from 824 Attappady Black goats in the age group of 0–12 months, consisting of 370 males and 454 females, were used for the study. The whole data set was partitioned into two data sets, namely a training data set comprising 75 per cent of the data (277 and 340 records in males and females, respectively) to build the neural network model and a test data set comprising 25 per cent (93 and 114 records in males and females, respectively) to test the model. Three different morphometric measurements, viz. chest girth, body length and height at withers, were used as input variables, and body weight was considered as the output variable. Multiple regression analysis (MRA) was also done using the same training and testing data sets. The prediction efficiency of both models was compared using the R^2 value and root mean square error (RMSE). The correlation coefficients between the actual and predicted body weights in the case of ANN were found to be positive and highly significant and ranged from 90.27 to 93.69%. The low value of RMSE and high value of R^2 in the case of the connectionist network (RMSE: male 1.9005, female 1.8434; R^2: male 87.34, female 85.70) in comparison with the MRA model (RMSE: male 2.0798, female 2.0836; R^2: male 84.84, female 81.74) show that the connectionist network model is a better tool to predict body weight in goats than MRA.
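The MRA baseline in the abstract amounts to an ordinary least-squares fit of body weight on the three morphometric measurements, scored by RMSE and R^2. The sketch below uses synthetic stand-in data (the coefficients, ranges, and noise level are invented for illustration), not the Attappady records:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Synthetic stand-in data (NOT the Attappady data set): body weight roughly
# linear in chest girth, body length and height at withers, plus noise.
girth = rng.uniform(30, 70, n)
length = rng.uniform(25, 60, n)
height = rng.uniform(30, 65, n)
weight = 0.45*girth + 0.25*length + 0.15*height - 10 + rng.normal(0, 1.5, n)

X = np.column_stack([np.ones(n), girth, length, height])   # intercept + predictors
beta, *_ = np.linalg.lstsq(X, weight, rcond=None)          # multiple regression fit
pred = X @ beta
rmse = np.sqrt(np.mean((weight - pred)**2))
r2 = 1 - np.sum((weight - pred)**2) / np.sum((weight - weight.mean())**2)
print(round(rmse, 2), round(r2, 3))   # noise level (~1.5) bounds the attainable RMSE
```

The ANN in the paper replaces this linear map with a multilayer feed-forward network trained by backpropagation, which is what lets it capture the nonlinearity the abstract credits for its lower RMSE.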

15.
Abello, Buchsbaum, Westbrook. Algorithmica 2008, 32(3):437–458
Abstract. We present a new approach for designing external graph algorithms and use it to design simple, deterministic and randomized external algorithms for computing connected components, minimum spanning forests, bottleneck minimum spanning forests, maximal independent sets (randomized only), and maximal matchings in undirected graphs. Our I/O bounds compete with those of previous approaches. We also introduce a semi-external model, in which the vertex set but not the edge set of a graph fits in main memory. In this model we give an improved connected components algorithm, using new results for external grouping and sorting with duplicates. Unlike previous approaches, ours is purely functional, without side effects, and is thus amenable to standard checkpointing and programming language optimization techniques. This is an important practical consideration for applications that may take hours to run.
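The semi-external model's core idea, vertex state resident in memory while edges are consumed as a stream, can be sketched with a union-find structure. This is a toy illustration of the model only, not the paper's I/O-efficient algorithm (which additionally relies on external grouping and sorting):

```python
class SemiExternalCC:
    """Semi-external connected components sketch: the vertex state (a
    parent array) fits in memory while edges arrive as a stream."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path halving
            v = self.parent[v]
        return v
    def union(self, u, v):
        ru, rv = self.find(u), self.find(v)
        if ru != rv:
            self.parent[ru] = rv

def components(n, edge_stream):
    """Count connected components in a single pass over the edge stream."""
    cc = SemiExternalCC(n)
    for u, v in edge_stream:      # edges may live on external storage
        cc.union(u, v)
    return len({cc.find(v) for v in range(n)})

print(components(6, iter([(0, 1), (1, 2), (3, 4)])))  # → 3 components
```

Because only the O(n) parent array must fit in memory, the edge list can be arbitrarily large, which is exactly what distinguishes the semi-external model from the fully external one.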

16.
In this paper, we continue our investigation of the locally divergence-free discontinuous Galerkin method, originally developed for the linear Maxwell equations (J. Comput. Phys. 194, 588–610 (2004)), to solve the nonlinear ideal magnetohydrodynamics (MHD) equations. The distinctive feature of this method is the use of approximate solutions that are exactly divergence-free inside each element for the magnetic field. As a consequence, this method has a smaller computational cost than the traditional discontinuous Galerkin method with standard piecewise polynomial spaces. We formulate the locally divergence-free discontinuous Galerkin method for the MHD equations and perform extensive one- and two-dimensional numerical experiments for both smooth solutions and solutions with discontinuities. Our computational results demonstrate that the locally divergence-free discontinuous Galerkin method, at a reduced cost compared to the traditional discontinuous Galerkin method, can maintain the same accuracy for smooth solutions and can enhance the numerical stability of the scheme and reduce certain nonphysical features in some of the test cases. This revised version was published online in July 2005 with corrected volume and issue numbers.

17.
Failure detection and consensus in the crash-recovery model
Summary. We study the problems of failure detection and consensus in asynchronous systems in which processes may crash and recover, and links may lose messages. We first propose new failure detectors that are particularly suitable to the crash-recovery model. We next determine under what conditions stable storage is necessary to solve consensus in this model. Using the new failure detectors, we give two consensus algorithms that match these conditions: one requires stable storage and the other does not. Both algorithms tolerate link failures and are particularly efficient in the runs that are most likely in practice, those with no failures or failure detector mistakes. In such runs, consensus is achieved within 3δ time and with 4n messages, where δ is the maximum message delay and n is the number of processes in the system. Received: May 1998 / Accepted: November 1999

18.
An Improved LOT Model for Image Restoration
Some second order PDE-based image restoration models, such as the total variation (TV) minimization or ROF model of Rudin et al. (Physica D 60, 259–268, 1992), can easily give rise to the staircase effect, which may produce undesirable blocky images. The LOT model proposed by Lysaker, Osher and Tai (IEEE Trans. Image Process. 13(10), 1345–1357, 2004) has alleviated the staircase effect successfully, but its algorithms are complicated, as three nonlinear second-order PDEs must be computed; moreover, when no information about the noise is available, the model cannot preserve edges or textures well. In this paper, we propose an improved LOT model for image restoration. First, we smooth the angle θ rather than the unit normal vector n, where n = (cos θ, sin θ). Second, we add an edge indicator function in order to preserve fine structures such as edges and textures well. The dual formulations of the TV-norm and TV_g-norm are then used in the numerical algorithms. Finally, numerical experiments demonstrate that the proposed model and algorithms are effective.

19.
We consider a model of game-theoretic network design initially studied by Anshelevich et al. (Proceedings of the 45th Annual Symposium on Foundations of Computer Science (FOCS), pp. 295–304, 2004), where selfish players select paths in a network to minimize their cost, which is prescribed by Shapley cost shares. If all players are identical, the cost share incurred by a player for an edge in its path is the fixed cost of the edge divided by the number of players using it. In this special case, Anshelevich et al. (FOCS, pp. 295–304, 2004) proved that pure-strategy Nash equilibria always exist and that the price of stability (the ratio between the cost of the best Nash equilibrium and that of an optimal solution) is Θ(log k), where k is the number of players. Little was known about the existence of equilibria or the price of stability in the general weighted version of the game. Here, each player i has a weight w_i ≥ 1, and its cost share of an edge in its path equals w_i times the edge cost, divided by the total weight of the players using the edge. This paper presents the first general results on weighted Shapley network design games. First, we give a simple example with no pure-strategy Nash equilibrium. This motivates considering the price of stability with respect to α-approximate Nash equilibria, outcomes from which no player can decrease its cost by more than a multiplicative factor of α. Our first positive result is that O(log w_max)-approximate Nash equilibria exist in all weighted Shapley network design games, where w_max is the maximum player weight. More generally, we establish the following trade-off between the two objectives of good stability and low cost: for every α = Ω(log w_max), the price of stability with respect to O(α)-approximate Nash equilibria is O((log W)/α), where W is the sum of the players' weights. In particular, there is always an O(log W)-approximate Nash equilibrium with cost within a constant factor of optimal. Finally, we show that this trade-off curve is nearly optimal: we construct a family of networks without o(log w_max / log log w_max)-approximate Nash equilibria, and show that for all α = Ω(log w_max / log log w_max), achieving a price of stability of O((log W)/α) requires relaxing equilibrium constraints by an Ω(α) factor. Research of H.-L. Chen supported in part by NSF Award 0323766. Research of T. Roughgarden supported in part by ONR grant N00014-04-1-0725, DARPA grant W911NF-04-9-0001, and an NSF CAREER Award.
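The weighted Shapley cost shares that define the game can be computed directly: on each edge, player i pays a w_i-proportional fraction of the edge cost. A small sketch (the function name and example values are ours):

```python
def weighted_shapley_shares(edge_cost, weights):
    """Cost shares on a single edge in the weighted Shapley network
    design game: player i using the edge pays w_i / (total weight of
    the edge's users) of the edge cost."""
    total = sum(weights.values())
    return {i: edge_cost * w / total for i, w in weights.items()}

# Unweighted special case: equal split, as in Anshelevich et al.
print(weighted_shapley_shares(12.0, {"a": 1, "b": 1, "c": 1}))
# → {'a': 4.0, 'b': 4.0, 'c': 4.0}
# Weighted case: heavier players pay proportionally more.
print(weighted_shapley_shares(12.0, {"a": 1, "b": 2, "c": 3}))
# → {'a': 2.0, 'b': 4.0, 'c': 6.0}
```

Note that the shares on each edge always sum to the edge cost, so the players jointly pay exactly for the network they build; the strategic difficulty lies entirely in which paths they choose.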

20.
Sun. Algorithmica 2008, 36(1):89–111
Abstract. We show that the SUM-INDEX function can be computed by a 3-party simultaneous protocol in which one player sends only O(n^ε) bits and the other sends O(n^{1−C(ε)}) bits (0 < C(ε) < 1). This implies that, in the Valiant–Nisan–Wigderson approach for proving circuit lower bounds, the SUM-INDEX function is not suitable as a target function.
