Similar Documents
Found 20 similar documents (search time: 656 ms)
1.
We present a quadratic identity on the number of perfect matchings of plane graphs by the method of graphical condensation, which generalizes the results found by Propp [J. Propp, Generalized domino-shuffling, Theoret. Comput. Sci. 303 (2003) 267–301], Kuo [E.H. Kuo, Applications of graphical condensation for enumerating matchings and tilings, Theoret. Comput. Sci. 319 (2004) 29–57], and Yan, Yeh, and Zhang [W.G. Yan, Y.-N. Yeh, F.J. Zhang, Graphical condensation of plane graphs: A combinatorial approach, Theoret. Comput. Sci. 349 (2005) 452–461].
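Such identities concern counts of perfect matchings, and a naive counter is enough to experiment with them on small plane graphs. The sketch below is not the graphical-condensation method itself: it counts perfect matchings by plain recursion and checks the classical fact that domino tilings of a 2 × n grid (perfect matchings of the 2 × n grid graph) are Fibonacci numbers. The names `count_perfect_matchings` and `grid_2xn` are our own.

```python
def count_perfect_matchings(vertices, edges):
    """Count perfect matchings by naive recursion: match the smallest
    unmatched vertex with each available neighbour and recurse."""
    adj = {v: set() for v in vertices}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)

    def rec(remaining):
        if not remaining:
            return 1
        v = min(remaining)              # deterministic pivot vertex
        rest = remaining - {v}
        return sum(rec(rest - {w}) for w in adj[v] if w in rest)

    return rec(frozenset(vertices))

def grid_2xn(n):
    """The 2 x n grid graph; its perfect matchings are the domino
    tilings of a 2 x n board, counted by the Fibonacci number F(n+1)."""
    verts = [(r, c) for r in range(2) for c in range(n)]
    edges = [((r, c), (r, c + 1)) for r in range(2) for c in range(n - 1)]
    edges += [((0, c), (1, c)) for c in range(n)]
    return verts, edges
```

For instance, the 2 × 4 grid has 5 tilings and the 2 × 5 grid has 8, matching F(5) and F(6).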

2.
The semi-Lagrangian method using the hybrid-cubic-rational interpolation function [M. Ida, Comput. Fluid Dyn. J. 10 (2001) 159] is modified to a conservative method by incorporating the concept discussed in [R. Tanaka et al., Comput. Phys. Commun. 126 (2000) 232]. In the method due to Tanaka et al., not only a physical quantity but also its integrated quantity within a computational cell are used as dependent variables, and mass conservation is achieved by imposing a constraint on a fourth-order polynomial used as an interpolation function. In the present method, a hybrid-cubic-rational function whose optimal mixing ratio was determined theoretically is employed for the interpolation, and its derivative is used for updating the physical quantity. The numerical oscillation appearing in results of the method due to Tanaka et al. is sufficiently eliminated by the use of the hybrid function.
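For readers unfamiliar with the semi-Lagrangian framework, the following minimal sketch advects a periodic profile by tracing each node back along the characteristic and interpolating there. It uses a plain cubic Lagrange polynomial; the hybrid cubic-rational function and the conservative constraint of the actual method are not reproduced, and all names are illustrative.

```python
import math

def semi_lagrangian_step(f, u, dx, dt):
    """One semi-Lagrangian step for df/dt + u df/dx = 0 on a periodic
    grid: trace each node back by u*dt and interpolate f there with the
    cubic Lagrange polynomial through the four surrounding nodes."""
    n = len(f)
    d = -u * dt / dx                 # backward displacement in cell units
    k = int(math.floor(d))           # whole-cell part
    s = d - k                        # fractional part, 0 <= s < 1
    # Lagrange basis at node offsets -1, 0, 1, 2 evaluated at s
    w = (-s * (s - 1) * (s - 2) / 6,
         (s + 1) * (s - 1) * (s - 2) / 2,
         -(s + 1) * s * (s - 2) / 2,
         (s + 1) * s * (s - 1) / 6)
    return [sum(w[m] * f[(i + k - 1 + m) % n] for m in range(4))
            for i in range(n)]
```

Advecting a sine wave once around a 64-point periodic domain returns it to its initial state up to a small interpolation error, a standard sanity check for such schemes.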

3.
Jun He, Xin Yao. Artificial Intelligence, 2002, 140(1-2): 245-248
The proof of Theorem 6 in the paper by J. He and X. Yao [Artificial Intelligence 127 (1) (2001) 57–85] contains a mistake, although the theorem itself is correct [S. Droste et al., Theoret. Comput. Sci. 276 (2002) 51–81]. This note gives a revised proof and theorem. It turns out that the revised theorem is more general than the original one for an evolutionary algorithm with mutation probability pm = 1/(2n), using the same proof method as in J. He and X. Yao [Artificial Intelligence 127 (1) (2001) 57–85].
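The setting can be illustrated by a minimal (1+1) evolutionary algorithm on the OneMax problem with per-bit mutation probability 1/(2n); this is a generic textbook sketch, not the authors' exact algorithm or proof construction, and the names are ours.

```python
import random

def one_plus_one_ea(n, max_iters, seed=0):
    """(1+1) EA maximizing OneMax (number of one-bits) with per-bit
    mutation probability pm = 1/(2n); accepts equal-or-better offspring."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    best = sum(x)
    p_m = 1.0 / (2 * n)
    for _ in range(max_iters):
        y = [b ^ 1 if rng.random() < p_m else b for b in x]
        fy = sum(y)
        if fy >= best:
            x, best = y, fy
        if best == n:               # all-ones optimum reached
            break
    return best
```

With a generous iteration budget the all-ones optimum is reached with overwhelming probability (the expected hitting time on OneMax is O(n log n) iterations).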

4.
We first give an elementary proof of the periodicity lemma for strings containing one hole (variously called a “wild card”, a “don’t-care” or an “indeterminate letter” in the literature). The proof is modelled on Euclid’s algorithm for the greatest common divisor and is simpler than the original proof given in [J. Berstel, L. Boasson, Partial words and a theorem of Fine and Wilf, Theoret. Comput. Sci. 218 (1999) 135–141]. We then study the two-hole case, where our result agrees with the one given in [F. Blanchet-Sadri, Robert A. Hegstrom, Partial words and a theorem of Fine and Wilf revisited, Theoret. Comput. Sci. 270 (1-2) (2002) 401–419] but is more easily proved and enables us to identify a maximum-length prefix or suffix of the string to which the periodicity lemma does apply. Finally, we extend our result to three or more holes using elementary methods, and state a version of the periodicity lemma that applies to all strings with or without holes. We describe an algorithm that, given the locations of the holes in a string, computes maximum-length substrings to which the periodicity lemma applies, in time proportional to the number of holes. Our approach is quite different from that used by Blanchet-Sadri and Hegstrom, and also simpler.
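The classical (hole-free) periodicity lemma of Fine and Wilf is easy to state in code: if a string of length at least p + q - gcd(p, q) has periods p and q, then it also has period gcd(p, q). The sketch below checks that conclusion directly, together with a hole-aware period test in which a hole (written `?` here) matches any letter. Function names are our own.

```python
from math import gcd

def has_period(s, p):
    """True if p is a period of s: s[i] == s[i+p] wherever both exist."""
    return all(s[i] == s[i + p] for i in range(len(s) - p))

def has_period_with_holes(s, p, hole='?'):
    """Period check for a partial word: a hole matches any letter."""
    return all(s[i] == s[i + p] or hole in (s[i], s[i + p])
               for i in range(len(s) - p))

def fine_wilf_holds(s, p, q):
    """Check the Fine-and-Wilf conclusion on a hole-free string: if s
    has periods p and q and len(s) >= p + q - gcd(p, q), then s also
    has period gcd(p, q)."""
    if has_period(s, p) and has_period(s, q) and len(s) >= p + q - gcd(p, q):
        return has_period(s, gcd(p, q))
    return True  # hypothesis not met; the lemma asserts nothing
```

For example, "abab" has period 2 but not 3, and "ab?b" has period 2 once the hole is allowed to match either letter.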

5.
Reduced Basis Techniques for Stochastic Problems   (Total citations: 1; self-citations: 0; citations by others: 1)
We report here on the recent application of a now classical general reduction technique, the Reduced-Basis (RB) approach initiated by C. Prud’homme et al. in J. Fluids Eng. 124(1), 70–80, 2002, to the specific context of differential equations with random coefficients. After an elementary presentation of the approach, we review two contributions of the authors: Comput. Methods Appl. Mech. Eng. 198(41–44), 3187–3206, 2009, which presents the application of the RB approach to the discretization of a simple second-order elliptic equation supplied with a random boundary condition, and Commun. Math. Sci., 2009, which uses an RB-type approach to reduce the variance in the Monte-Carlo simulation of a stochastic differential equation. We conclude the review with some general comments and discuss possible tracks for further research in this direction.
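The variance-reduction idea, replacing an expensive Monte-Carlo target by a cheap correlated surrogate with known mean, can be conveyed by a generic control-variate sketch. This is not the authors' reduced-basis construction (where the surrogate is a reduced-basis output); the example and all names are illustrative.

```python
import math
import random

def mc_with_control_variate(n_samples, seed=0):
    """Estimate E[exp(Z)], Z ~ N(0,1), plainly and with Z as a control
    variate (known mean E[Z] = 0).  The surrogate plays the role the
    reduced-basis output plays in the paper: cheap, correlated, known mean."""
    rng = random.Random(seed)
    zs = [rng.gauss(0.0, 1.0) for _ in range(n_samples)]
    f = [math.exp(z) for z in zs]
    mean_f = sum(f) / n_samples
    mean_z = sum(zs) / n_samples
    # near-optimal coefficient beta = Cov(f, Z) / Var(Z), from samples
    cov = sum((fi - mean_f) * (zi - mean_z)
              for fi, zi in zip(f, zs)) / n_samples
    var_z = sum((zi - mean_z) ** 2 for zi in zs) / n_samples
    beta = cov / var_z
    g = [fi - beta * zi for fi, zi in zip(f, zs)]   # corrected samples
    mean_g = sum(g) / n_samples
    var_f = sum((x - mean_f) ** 2 for x in f) / n_samples
    var_g = sum((x - mean_g) ** 2 for x in g) / n_samples
    return mean_f, var_f, mean_g, var_g
```

Both estimators target E[exp(Z)] = exp(1/2) ≈ 1.6487, but the per-sample variance of the corrected estimator is strictly smaller.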

6.
This note corrects two errors that occurred during the typesetting of our paper “Axiomatisations of functional dependencies in the presence of records, lists, sets and multisets”, which appeared in Hartmann et al. [Axiomatisations of functional dependencies in the presence of records, lists, sets and multisets, Theoret. Comput. Sci. 353(2) (2006) 167–196].

7.
Computing the duplication history of a tandem repeated region is an important problem in computational biology (Fitch in Genetics 86:623–644, 1977; Jaitly et al. in J. Comput. Syst. Sci. 65:494–507, 2002; Tang et al. in J. Comput. Biol. 9:429–446, 2002). In this paper, we design a polynomial-time approximation scheme (PTAS) for the case where the size of the duplication block is 1. Our PTAS is faster than the previously best PTAS in Jaitly et al. (J. Comput. Syst. Sci. 65:494–507, 2002). For example, to achieve a ratio of 1.5, our PTAS takes O(n^5) time while the PTAS in Jaitly et al. (J. Comput. Syst. Sci. 65:494–507, 2002) takes O(n^11) time. We also design a ratio-6 polynomial-time approximation algorithm for the case where the size of each duplication block is at most 2. This is the first polynomial-time approximation algorithm with a guaranteed ratio for this case. Part of this work was done during Z.-Z. Chen's visit to City University of Hong Kong.

8.
Blanchet-Sadri et al. have shown that Avoidability, or the problem of deciding the avoidability of a finite set of partial words over an alphabet of size k≥2, is NP-hard [F. Blanchet-Sadri, R. Jungers, J. Palumbo, Testing avoidability on sets of partial words is hard, Theoret. Comput. Sci. 410 (2009) 968-972]. Building on their work, we analyze in this paper the complexity of natural variations on the problem. While some of them are NP-hard, others are shown to be efficiently decidable. Using some combinatorial properties of de Bruijn graphs, we establish a correspondence between lengths of cycles in such graphs and periods of avoiding words, resulting in a tight bound for periods of avoiding words. We also prove that Avoidability can be solved in polynomial space, and reduces in polynomial time to the problem of deciding the avoidability of a finite set of partial words of equal length over the binary alphabet. We give a polynomial bound on the period of an infinite avoiding word, in the case of sets of full words, in terms of two parameters: the length and the number of words in the set. We give a polynomial space algorithm to decide if a finite set of partial words is avoided by a non-ultimately periodic infinite word. The same algorithm also decides if the number of words of length n avoiding a given finite set of partial words grows polynomially or exponentially with n.
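For full words the de Bruijn-graph correspondence can be made concrete: a set of binary words, all of length n, is avoided by some (periodic) infinite word exactly when the order-(n-1) de Bruijn graph, with the edges spelling forbidden words deleted, still contains a directed cycle. A small sketch with our own naming; it handles full words only, not partial words.

```python
from itertools import product

def avoidable_by_periodic(forbidden, n):
    """Decide whether some infinite binary word avoids every factor in
    `forbidden` (full words, all of length n): nodes are binary strings
    of length n-1, the edge u -> u[1:]+c spells the word u+c, and an
    avoiding (periodic) word exists iff a directed cycle survives."""
    bad = set(forbidden)
    nodes = [''.join(p) for p in product('01', repeat=n - 1)]
    succ = {u: [u[1:] + c for c in '01' if u + c not in bad]
            for u in nodes}
    WHITE, GRAY, BLACK = 0, 1, 2     # DFS three-colour cycle detection
    color = {u: WHITE for u in nodes}

    def dfs(u):
        color[u] = GRAY
        for v in succ[u]:
            if color[v] == GRAY:
                return True          # back edge: directed cycle found
            if color[v] == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False

    return any(color[u] == WHITE and dfs(u) for u in nodes)
```

For instance, forbidding {00, 11} leaves the cycle 0 → 1 → 0, so (01)^∞ avoids the set, whereas forbidding {00, 01, 11} leaves no cycle at all.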

9.
We prove a relationship between the Cleaning problem and the Balanced Vertex-Ordering problem, namely that the minimum total imbalance of a graph equals twice the brush number of a graph. This equality has consequences for both problems. On one hand, it allows us to prove the NP-completeness of the Cleaning problem, which was conjectured by Messinger et al. [M.-E. Messinger, R.J. Nowakowski, P. Prałat, Cleaning a network with brushes, Theoret. Comput. Sci. 399 (2008) 191-205]. On the other hand, it also enables us to design a faster algorithm for the Balanced Vertex-Ordering problem [J. Kára, K. Kratochvíl, D. Wood, On the complexity of the balanced vertex ordering problem, Discrete Math. Theor. Comput. Sci. 9 (1) (2007) 193-202].
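The vertex-ordering side of the equality is simple to compute. The sketch below evaluates the total imbalance of an ordering (the sum over vertices of |neighbours placed earlier − neighbours placed later|) and brute-forces its minimum on tiny graphs; the brush number itself is not computed here, and the names are our own.

```python
from itertools import permutations

def total_imbalance(order, edges):
    """Sum over vertices of |#neighbours before - #neighbours after|
    in the given linear ordering."""
    pos = {v: i for i, v in enumerate(order)}
    adj = {v: set() for v in order}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    total = 0
    for v in order:
        before = sum(1 for w in adj[v] if pos[w] < pos[v])
        total += abs(before - (len(adj[v]) - before))
    return total

def min_total_imbalance(vertices, edges):
    """Brute-force minimum over all orderings (tiny graphs only)."""
    return min(total_imbalance(p, edges) for p in permutations(vertices))
```

On the path 0-1-2 the minimum total imbalance is 2 (each degree-one endpoint always contributes 1), and on the triangle it is 4, whatever the ordering.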

10.
A unification (and generalization) of various Apostol type polynomials was introduced and investigated recently by Luo and Srivastava [Q.-M. Luo, H.M. Srivastava, Some generalizations of the Apostol-Genocchi polynomials and the Stirling numbers of the second kind, Appl. Math. Comput. 217 (2011) 5702-5728]. In this paper, we prove several symmetry identities for these generalized Apostol type polynomials by using their generating functions. As special cases and consequences of our results, we obtain the corresponding symmetry identities for the Apostol-Euler polynomials of higher order, the Apostol-Bernoulli polynomials of higher order and the Apostol-Genocchi polynomials of higher order, and also for another family of generalized Apostol type polynomials which were investigated systematically by Ozden et al. [H. Ozden, Y. Simsek, H.M. Srivastava, A unified presentation of the generating functions of the generalized Bernoulli, Euler and Genocchi polynomials, Comput. Math. Appl. 60 (2010) 2779-2787]. We also derive several relations between the Apostol type polynomials, the generalized sum of integer powers and the generalized alternating sum. It is shown how each of these results would extend the corresponding known identities.

11.
Double hashing with bucket capacity one is augmented with multiple passbits to obtain significant reduction to unsuccessful search lengths. This improves the analysis of Martini et al. [P.M. Martini, W.A. Burkhard, Double hashing with multiple passbits, Internat. J. Found. Theoret. Comput. Sci. 14 (6) (2003) 1165-1188] by providing a closed form expression for the expected unsuccessful search lengths.
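A plain double-hashing table makes the baseline concrete; the passbit refinement itself (which prunes unsuccessful probe sequences) is not reproduced here. This is a sketch with our own naming; integer keys are used in the example because Python randomizes string hashes between runs.

```python
class DoubleHashTable:
    """Open addressing with double hashing, bucket capacity one.
    Plain version, without the passbits of Martini and Burkhard."""

    def __init__(self, m=101):
        self.m = m                      # prime m: every step size cycles
        self.slots = [None] * m

    def _probe(self, key):
        h1 = hash(key) % self.m
        h2 = 1 + (hash(key) // self.m) % (self.m - 1)   # step, never 0
        i = h1
        while True:
            yield i
            i = (i + h2) % self.m

    def insert(self, key):
        """Insert key (assumes the table is not already full)."""
        for i in self._probe(key):
            if self.slots[i] is None or self.slots[i] == key:
                self.slots[i] = key
                return

    def search(self, key):
        """Return (found, probes): probes counted until hit, empty slot,
        or a full sweep of the table."""
        probes = 0
        for i in self._probe(key):
            probes += 1
            if self.slots[i] == key:
                return True, probes
            if self.slots[i] is None or probes == self.m:
                return False, probes
```

The probe count returned for a miss is exactly the unsuccessful search length whose expectation the closed-form expression describes.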

12.
In this paper we extend the discrete time Footloose Capital model analyzed in Commendatore et al. (Nonlinear Dyn Psychol Life Sci 11(2):267–289, 2007) by introducing “first nature firms”, i.e., firms that use locally specific blueprints and are therefore immobile. Due to the presence of first nature firms (symmetrically distributed across the regions), the central dynamic map becomes a piecewise differentiable function: in addition to “standard” flip and pitchfork bifurcations, border collision bifurcations are also possible, and instances of multistability may emerge. Our analysis confirms and extends the results of Commendatore et al. (2007): (1) the continuous time formulation hides complex dynamic patterns; (2) asymmetric distributions of industrial activity can be endogenously generated and are path dependent.

13.
Code OK1 is a fast and precise three-dimensional computer program designed for simulations of heavy ion beam (HIB) irradiation on a direct-drive spherical fuel pellet in heavy ion fusion (HIF). OK1 computes the three-dimensional energy deposition profile on a spherical fuel pellet and evaluates the HIB irradiation non-uniformity, both of which are valuable for optimizing the beam parameters and the fuel pellet structure, as well as for further HIF experiment design. The code is open and complete, and can be easily modified or adapted for users' purposes in this field.

Program summary

Title of program: OK1
Catalogue identifier: ADST
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADST
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer: PC (Pentium 4, ∼1 GHz or more recommended)
Operating system: Windows or UNIX
Program language used: C++
Memory required to execute with typical data: 911 MB
No. of bits in a word: 32
No. of processors used: 1 CPU
Has the code been vectorized or parallelized: No
No. of bytes in distributed program, including test data: 16 557
Distribution format: tar gzip file
Keywords: heavy ion beam, inertial confinement fusion, energy deposition, fuel pellet
Nature of physical problem: Nuclear fusion energy may have attractive features as one of our human energy resources. In this paper we focus on heavy ion inertial confinement fusion (HIF). Due to the favorable energy deposition behavior of heavy ions in matter [J.J. Barnard et al., UCRL-LR-108095, 1991; C. Deutsch et al., J. Plasma Fusion Res. 77 (2001) 33; T. Someya et al., Fusion Sci. Tech. (2003), submitted], a heavy ion beam (HIB) is expected to be one of the energy driver candidates for operating a future inertial confinement fusion power plant. For successful fuel ignition and fusion energy release, a stringent requirement is imposed on the HIB irradiation non-uniformity, which should be less than a few percent [T. Someya et al., Fusion Sci. Tech. (2003), submitted; M.H. Emery et al., Phys. Rev. Lett. 48 (1982) 253; S. Kawata et al., J. Phys. Soc. Jpn. 53 (1984) 3416]. In order to meet this requirement we need to evaluate the non-uniformity of a realistic HIB irradiation and energy deposition pattern. The HIB irradiation and non-uniformity evaluations are sophisticated and difficult to perform analytically. Based on our code one can numerically obtain a three-dimensional profile of the energy deposition and evaluate the HIB irradiation non-uniformity onto a spherical target for a specific HIB parameter value set in HIF.
Method of solution: The OK1 code is based on the stopping power of ions in matter [J.J. Barnard et al., UCRL-LR-108095, 1991; C. Deutsch et al., J. Plasma Fusion Res. 77 (2001) 33; T. Someya et al., Fusion Sci. Tech. (2003), submitted; M.H. Emery et al., Phys. Rev. Lett. 48 (1982) 253; S. Kawata et al., J. Phys. Soc. Jpn. 53 (1984) 3416; T. Mehlhorn, SAND80-0038, 1980; H.H. Andersen, J.F. Ziegler, Pergamon Press, 1977, p. 3]. The code simulates a multi-beam irradiation, obtains the 3D energy deposition profile of the fuel pellet and evaluates the deposition non-uniformity.
Restrictions on the complexity of the problem: None
Typical running time: The execution time depends on the number of beams in the simulated irradiation and their characteristics (beam radius on the pellet surface, beam subdivision, projectile particle energy and so on). In almost all of the practical running tests performed, the typical running time for one beam deposition is less than 2 s on a PC with a Pentium 4, 2.2 GHz CPU (e.g., in Test 2, where the number of beams is 600, the running time is about 18 minutes).
Unusual features of the program: None

14.
In this paper, we study a new iteration process for a finite family of nonself asymptotically nonexpansive mappings with errors in Banach spaces. We prove some weak and strong convergence theorems for this new iteration process. The results of this paper improve and extend the corresponding results of Chidume et al. (2003) [10], Osilike and Aniagbosor (2000) [3], Schu (1991) [4], Takahashi and Kim (1998) [9], Tian et al. (2007) [18], Wang (2006) [11], Yang (2007) [17] and others.

15.
In this paper, using the method of Diaz-Barrero et al. (2008) [J.L. Diaz-Barrero, M. Grau-Sanchez, P.G. Popescu, Refinements of Aczél, Popoviciu and Bellman’s inequalities, Comput. Math. Appl. 56 (2008) 2356–2359], refinements of generalized Aczél-Popoviciu’s inequality and generalized Bellman’s inequalities are established. As applications, some integral inequalities are given.
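The classical Aczél inequality underlying these refinements is easy to check numerically: if a1² > a2² + … + an² and b1² > b2² + … + bn², then (a1·b1 − Σᵢ≥2 aᵢbᵢ)² ≥ (a1² − Σᵢ≥2 aᵢ²)(b1² − Σᵢ≥2 bᵢ²). The sketch below (our own names) tests this classical inequality, not the refined bounds of the paper.

```python
import random

def aczel_gap(a, b):
    """Left minus right side of Aczel's inequality; nonnegative
    whenever the first entries dominate their tails in square sum."""
    sa = a[0] ** 2 - sum(x * x for x in a[1:])
    sb = b[0] ** 2 - sum(x * x for x in b[1:])
    assert sa > 0 and sb > 0, "hypothesis of the inequality"
    lhs = (a[0] * b[0] - sum(x * y for x, y in zip(a[1:], b[1:]))) ** 2
    return lhs - sa * sb

def random_admissible(n, rng):
    """Random sequence whose first entry dominates the tail."""
    tail = [rng.uniform(-1, 1) for _ in range(n - 1)]
    head = (sum(x * x for x in tail) + rng.uniform(0.1, 2.0)) ** 0.5
    return [head] + tail
```

Sampling many admissible pairs, the gap is never (numerically) negative.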

16.
The early algorithms for in-place merging were mainly focused on time complexity, whereas the structures of the algorithms themselves were ignored. Most of them are therefore elusive and of only theoretical significance. For this reason, this paper simplifies the unstable in-place merge of Geffert et al. [V. Geffert, J. Katajainen, T. Pasanen, Asymptotically efficient in-place merging, Theoret. Comput. Sci. 237 (2000) 159-181]. The simplified algorithm is simple yet practical, and has low time complexity.
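For contrast with asymptotically efficient merging, here is the simplest correct stable in-place merge: O(1) extra space but quadratically many element moves in the worst case. This is a teaching sketch of our own, not the simplified Geffert-style algorithm the paper develops.

```python
def merge_in_place(a, lo, mid, hi):
    """Stable in-place merge of the sorted runs a[lo:mid] and a[mid:hi]
    with O(1) extra space; simple but O(n^2) moves in the worst case."""
    while lo < mid and mid < hi:
        if a[lo] <= a[mid]:
            lo += 1
        else:
            # a[mid] is the smallest remaining element: slide the left
            # run one slot to the right and drop a[mid] in front of it
            v = a[mid]
            j = mid
            while j > lo:
                a[j] = a[j - 1]
                j -= 1
            a[lo] = v
            lo += 1
            mid += 1
```

The asymptotically efficient algorithms exist precisely to avoid this quadratic movement cost while keeping constant extra space.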

17.
We study an on-line broadcast scheduling problem in which requests have deadlines, and the objective is to maximize the weighted throughput, i.e., the weighted total length of the satisfied requests. For the case where all requested pages have the same length, we present an online deterministic algorithm named BAR and prove that it is 4.56-competitive. This improves the previous algorithm of (Kim, J.-H., Chwa, K.-Y. in Theor. Comput. Sci. 325(3):479–488, 2004) which is shown to be 5-competitive by (Chan, W.-T., et al. in Lecture Notes in Computer Science, vol. 3106, pp. 210–218, 2004). In the case that pages may have different lengths, we give a ( )-competitive algorithm where Δ is the ratio of maximum to minimum page lengths. This improves the (4Δ+3)-competitive algorithm of (Chan, W.-T., et al. in Lecture Notes in Computer Science, vol. 3106, pp. 210–218, 2004). We also prove an almost matching lower bound of Ω(Δ/log Δ). Furthermore, for small values of Δ we give better lower bounds. The work described in this paper was fully supported by grants from the Research Grants Council of the Hong Kong SAR, China [CityU 1198/03E, HKU 7142/03E, HKU 5172/03E], an NSF Grant of China [No. 10371094], and a Nuffield Foundation Grant of UK [NAL/01004/G].

18.
In this paper, we introduce a new modified Ishikawa iterative process for computing fixed points of an infinite family of nonexpansive mappings in the framework of Banach spaces. Then we establish a strong convergence theorem for the proposed iterative scheme under some mild conditions, with a limit that solves a variational inequality. The results obtained in this paper extend and improve the recent results of Qin et al. [Strong convergence theorems for an infinite family of nonexpansive mappings in Banach spaces, Journal of Computational and Applied Mathematics 230 (1) (2009) 121–127], Cho et al. [Approximation of common fixed points of an infinite family of nonexpansive mappings in Banach spaces, Computers and Mathematics with Applications 56 (2008) 2058–2064] and many others.
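On the real line, a single nonexpansive map already shows the flavor of the scheme. The sketch below runs a plain Ishikawa iteration on math.cos, which is nonexpansive since |cos′| ≤ 1; it is a one-map toy, not the modified scheme for infinite families, and the parameter choices are illustrative.

```python
import math

def ishikawa(T, x0, alpha=0.5, beta=0.5, iters=200):
    """Ishikawa iteration for a nonexpansive self-map T:
       y_n     = (1 - beta)  * x_n + beta  * T(x_n)
       x_{n+1} = (1 - alpha) * x_n + alpha * T(y_n)
    """
    x = x0
    for _ in range(iters):
        y = (1 - beta) * x + beta * T(x)
        x = (1 - alpha) * x + alpha * T(y)
    return x
```

Running ishikawa(math.cos, 0.0) converges to the unique fixed point of cosine (the Dottie number, approximately 0.739085), i.e., the residual |cos(x) − x| is driven to zero.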

19.
We provide a new representation-independent formulation of Occam's razor theorem, based on Kolmogorov complexity. This new formulation allows us to: (i) obtain better sample complexity than both length-based [Blumer et al., Inform. Process. Lett. 24 (1987) 377-380] and VC-based [Blumer et al., J. ACM 35 (1989) 929-965] versions of Occam's razor theorem, in many applications; and (ii) achieve a sharper reverse of Occam's razor theorem than that of Board and Pitt [STOC, 1999, pp. 54-63]. Specifically, we weaken the assumptions made by Board and Pitt [STOC, 1999, pp. 54-63] and extend the reverse to superpolynomial running times.

20.
In 2000, Biham and Keller [Cryptanalysis of reduced variants of Rijndael, 3rd AES Conference, in press] presented an impossible differential cryptanalysis of the Advanced Encryption Standard (AES) up to 5 rounds. This was later improved in 2001 by Cheon et al. [Improved impossible differential cryptanalysis of Rijndael and Crypton, in: Lecture Notes in Comput. Sci., vol. 2288, Springer-Verlag, Berlin, 2001, pp. 39-49] to apply to 6 rounds of the AES. In this paper, we build on previous results to present an attack on the AES up to 7 rounds. This is the best-known impossible differential attack on the AES, and works by exploiting weaknesses in the AES key schedule.
