Iterative substructuring methods, also known as Schur complement methods, form an important family of domain decomposition algorithms. They are preconditioned conjugate gradient methods where solvers on local subregions and a solver on a coarse mesh are used to construct the preconditioner. For conforming finite element approximations of , it is known that the number of conjugate gradient steps required to reduce the residual norm by a fixed factor is independent of the number of substructures, and that it grows only as the logarithm of the dimension of the local problem associated with an individual substructure. In this paper, the same result is established for similar iterative methods for low-order Nédélec finite elements, which approximate in two dimensions. Results of numerical experiments are also provided.
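As a generic illustration (not the algorithm analyzed in the paper), the Schur complement that substructuring methods work with can be formed explicitly for a small symmetric positive definite system partitioned into interior and interface unknowns; all matrices below are made up for demonstration:

```python
import numpy as np

# Made-up SPD matrix partitioned into "interior" (I) and "interface" (G) blocks.
rng = np.random.default_rng(0)
n_i, n_g = 6, 3
M = rng.standard_normal((n_i + n_g, n_i + n_g))
A = M @ M.T + (n_i + n_g) * np.eye(n_i + n_g)  # SPD by construction

A_II = A[:n_i, :n_i]
A_IG = A[:n_i, n_i:]
A_GI = A[n_i:, :n_i]
A_GG = A[n_i:, n_i:]

# Schur complement on the interface: S = A_GG - A_GI A_II^{-1} A_IG
S = A_GG - A_GI @ np.linalg.solve(A_II, A_IG)

# S inherits symmetric positive definiteness from A.
print(np.linalg.eigvalsh(S).min() > 0)  # True
```

In practice the Schur complement is never assembled; the preconditioned conjugate gradient iteration only needs its action, which local subdomain solves provide.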
A new approach to the construction of finite-difference methods is presented. It is shown how multi-point differentiators can generate regularizing algorithms in which the stepsize plays the role of a regularization parameter. Explicitly computable estimation constants are given. An iteratively regularized scheme is also developed for solving the numerical differentiation problem posed as a Volterra integral equation.
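The stepsize-as-regularization idea can be illustrated with a standard central-difference sketch (a textbook error balance, not the multi-point differentiators of the paper); the noise level delta and the test function are made up:

```python
import numpy as np

# When differentiating noisy data by central differences, the total error is
# roughly  delta/h (noise amplification) + h**2 * M3/6 (truncation),
# minimized at h ~ (3*delta/M3)**(1/3).  Here f = sin, so M3 ~ 1.
rng = np.random.default_rng(1)
delta = 1e-6                       # assumed noise level
x = np.linspace(0.0, 1.0, 201)

def noisy_f(t):
    return np.sin(t) + delta * rng.uniform(-1, 1, size=np.shape(t))

def cd_error(h):
    approx = (noisy_f(x + h) - noisy_f(x - h)) / (2 * h)
    return np.max(np.abs(approx - np.cos(x)))   # exact derivative is cos

h_tiny = 1e-9                      # noise-dominated regime
h_opt = (3 * delta) ** (1 / 3)     # balances noise and truncation
print(cd_error(h_tiny) > cd_error(h_opt))  # True: tiny h amplifies noise
```

Shrinking h below the optimum amplifies the data noise instead of improving accuracy, which is why the stepsize itself serves as the regularization parameter.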
In this work, we establish lists for each signature of tenth degree number fields containing a totally real quintic subfield and of discriminant less than in absolute value. For each field in the list we give its discriminant, the discriminant of its subfield, a relative polynomial generating the field over one of its subfields, the corresponding polynomial over , and the Galois group of its Galois closure.
We have examined the existence of several non-isomorphic fields with the same discriminants, and also the existence of unramified extensions and cyclic extensions.
Integral representations of solutions of the inhomogeneous Airy differential equation are considered. The solutions of these equations are also known as Scorer functions. Certain functional relations for these functions are used to confine the discussion to one function and to a certain sector in the complex plane. By using steepest descent methods from asymptotics, the standard integral representations of the Scorer functions are modified in order to obtain nonoscillating integrals for complex values of . In this way stable representations for numerical evaluation of the functions are obtained. The methods are illustrated with numerical results.
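For real arguments, the Scorer function Hi already has a nonoscillating integral representation, which makes a quick sanity check possible (a generic illustration using the standard DLMF-type formulas, not the complex-plane representations constructed in the paper):

```python
import numpy as np

# Hi(x) = (1/pi) * integral_0^inf exp(-t**3/3 + x*t) dt  is nonoscillating
# on the real line, and Hi satisfies the inhomogeneous Airy equation
#     Hi''(x) - x*Hi(x) = 1/pi.
t = np.linspace(0.0, 12.0, 200001)   # exp(-t**3/3) is negligible beyond t=12
dt = t[1] - t[0]

def Hi(x):
    f = np.exp(-t**3 / 3 + x * t)
    return (f.sum() - 0.5 * (f[0] + f[-1])) * dt / np.pi  # trapezoid rule

x, e = 0.5, 1e-3
second_deriv = (Hi(x + e) - 2 * Hi(x) + Hi(x - e)) / e**2  # central difference
residual = second_deriv - x * Hi(x) - 1 / np.pi            # should be ~0
print(abs(residual) < 1e-4)  # True
```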
This paper is concerned with algorithms for computing in the divisor class group of a nonsingular plane curve of the form which has only one point at infinity. Divisors are represented as ideals, and an ideal reduction algorithm based on lattice reduction is given. We obtain a unique representative for each divisor class and the algorithms for addition and reduction of divisors run in polynomial time. An algorithm is also given for solving the discrete logarithm problem when the curve is defined over a finite field.
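For context only: the simplest generic discrete-logarithm algorithm, baby-step/giant-step, works in any finite abelian group. The sketch below runs in the multiplicative group of a prime field rather than in a divisor class group, and all parameters are invented:

```python
import math

def bsgs(g, h, p):
    """Baby-step/giant-step: find x with g**x == h (mod p)."""
    m = math.isqrt(p - 1) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps: g^j
    factor = pow(g, -m, p)                        # g^{-m} mod p (Python >= 3.8)
    gamma = h
    for i in range(m):                            # giant steps: h * g^{-i*m}
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * factor % p
    return None

p = 1000003          # a prime (made-up example)
g, x = 2, 918273
h = pow(g, x, p)
print(pow(g, bsgs(g, h, p), p) == h)  # True
```

Generic algorithms of this kind cost roughly the square root of the group order, which is why the structure of the divisor class group matters for cryptographic applications.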
We prove that standard information in the randomized setting is as powerful as linear information in the worst case setting. Linear information means that algorithms may use arbitrary continuous linear functionals, and by the power of information we mean the speed of convergence of the th minimal errors, i.e., of the minimal errors among all algorithms using function evaluations. Previously, it was only known that standard information in the randomized setting is no more powerful than linear information in the worst case setting.
We also study (strong) tractability of multivariate approximation in the randomized setting. That is, we study when the minimal number of function evaluations needed to reduce the initial error by a factor is polynomial in (strong tractability), and polynomial in and (tractability). We prove that these notions in the randomized setting for standard information are equivalent to the same notions in the worst case setting for linear information. This result is useful since for a number of important applications only standard information can be used and verifying (strong) tractability for standard information is in general difficult, whereas (strong) tractability in the worst case setting for linear information is known for many spaces and is relatively easy to check.
We illustrate the tractability results for weighted Korobov spaces. In particular, we present necessary and sufficient conditions for strong tractability and tractability. For product weights independent of , we prove that strong tractability is equivalent to tractability.
We stress that all proofs are constructive. That is, we provide randomized algorithms that enjoy the maximal speed of convergence. We also exhibit randomized algorithms which achieve strong tractability and tractability error bounds.
The theory is developed in the one-dimensional setting. The numerical error is measured with respect to a norm introduced by the author in 2005, which in a sense plays the role that the energy norm plays for symmetric and coercive differential operators. In particular, this norm possesses features that allow us to obtain a meaningful a posteriori estimator, robust up to a factor, where is the global Péclet number of the problem. Various numerical tests are performed in one dimension to confirm the theoretical results and to show that the proposed estimator performs better than the usual one known in the literature.
We also consider a possible two-dimensional extension of our result and only present a few basic numerical tests, indicating that the estimator seems to preserve the good features of the one-dimensional setting.
Whittaker and collaborators in the thirties, and R. Rankin some twenty years later, were able to prove the conjecture for several families of hyperelliptic surfaces, characterized by the fact that they admit a large group of symmetries. However, general results of the analytic theory of moduli of Riemann surfaces, developed later, imply that Whittaker's conjecture cannot be true in its full generality.
Recently, numerical computations have shown that Whittaker's prediction is incorrect for random surfaces, and in fact it has been conjectured that it only holds for the known cases of surfaces with a large group of automorphisms.
The main goal of this paper is to prove that having many automorphisms is not a necessary condition for a surface to satisfy Whittaker's conjecture.
A characterization of the quasi-split property for an inclusion of -algebras in terms of the metrically nuclear maps is established. This result extends the known characterization relative to inclusions of -factors. An application to type von Neumann algebras is also presented.
In this paper we classify numerical Godeaux surfaces with an involution, i.e. an automorphism of order 2. We prove that they are birationally equivalent either to double covers of Enriques surfaces or to double planes of two different types: the branch curve either has degree 10 and suitable singularities, originally suggested by Campedelli, or is the union of two lines and a curve of degree 12 with certain singularities. The latter type of double planes are degenerations of examples described by Du Val, and their existence was previously unknown; we show some examples of this new type, also computing their torsion group.
(u_t + u u_x)_x = (1/2) (u_x)^2    (HS)
This equation has been suggested as a simple model for nematic liquid crystals. We prove that the numerical approximations converge to the unique dissipative solution of (HS), as identified by Zhang and Zheng. A main aspect of the analysis, in addition to the derivation of several a priori estimates that yield some basic convergence results, is to prove strong convergence of the discrete spatial derivative of the numerical approximations of , which is achieved by analyzing various renormalizations (in the sense of DiPerna and Lions) of the numerical schemes. Finally, through several numerical examples, we demonstrate the proposed schemes as well as some other schemes for which we have no rigorous convergence results.
In this paper we introduce the maximum Poincaré polynomial of a compact manifold , and prove its uniqueness. We show that its coefficients are topological invariants of the manifolds which, in some cases, correspond to known ones. We also investigate its realizability via a Morse function on .
A numerical scheme for integral operators whose kernel is either discontinuous or not smooth along the main diagonal is presented. This scheme is of spectral accuracy when is infinitely differentiable away from the diagonal . The relation to the singular value decomposition is indicated. An application to integro-differential Schrödinger equations with nonlocal potentials is given.
Let be an imaginary abelian number field. We know that , the relative class number of , goes to infinity as , the conductor of , approaches infinity, so that there are only finitely many imaginary abelian number fields with a given relative class number. First, we determine all imaginary abelian number fields with relative class number one: there are exactly 302 such fields. It is known that there are only finitely many CM-fields with cyclic ideal class groups of 2-power order such that complex conjugation is the square of some automorphism of . Second, we prove that there are exactly 48 such fields.
Our work shows that the PV framework applies to fairly general settings by elucidating the key algebraic concepts underlying it. Also, more importantly, AG codes of arbitrary block length exist over fixed alphabets , thus enabling us to establish new trade-offs between the list decoding radius and rate over a bounded alphabet size.
The work of Parvaresh and Vardy (2005) was extended in Guruswami and Rudra (2006) to give explicit codes that achieve the list decoding capacity (optimal trade-off between rate and fraction of errors corrected) over large alphabets. A similar extension of this work along the lines of Guruswami and Rudra could have substantial impact. Indeed, it could give better trade-offs than currently known over a fixed alphabet (say, ), which in turn, upon concatenation with a fixed, well-understood binary code, could take us closer to the list decoding capacity for binary codes. This may also be a promising way to address the significant complexity drawback of the result of Guruswami and Rudra, and to enable approaching capacity with bounded list size independent of the block length (the list size and decoding complexity in their work are both where is the distance to capacity).
Similar to algorithms for AG codes from Guruswami and Sudan (1999) and (2001), our encoding/decoding algorithms run in polynomial time assuming a natural polynomial-size representation of the code. For codes based on a specific "optimal" algebraic curve, we also present an expected polynomial time algorithm to construct the requisite representation. This in turn fills an important void in the literature by presenting an efficient construction of the representation often assumed in the list decoding algorithms for AG codes.