Similar Documents
20 similar documents found.
1.
Critical properties of lattice gases with nearest-neighbor exclusion are investigated via adaptive-window Wang–Landau sampling (WLS) on the square and simple cubic lattices, for which the model is known to exhibit an Ising-like phase transition. We study the particle density, order parameter, compressibility, Binder cumulant, and susceptibility, in an effort to test WLS, which has been used widely in recent years, in the context of lattice gases. Of considerable interest is whether it is possible to estimate critical exponents reliably using WLS with adaptive windows. We find that the method yields results in fair agreement with exact values (in two dimensions) and numerical estimates (in three dimensions).
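As background for the entry above, the sketch below shows a plain (single-window) Wang–Landau random walk over particle number for the nearest-neighbor-exclusion (hard-square) lattice gas on a periodic square lattice. It is a minimal illustration of the sampling technique only: the adaptive-window scheme, the cubic-lattice case, and the observables studied in the paper are not reproduced, and all function and parameter names are my own.

```python
import numpy as np

def wang_landau_hard_squares(L=8, flatness=0.8, lnf_min=1e-4, seed=0):
    """Single-window Wang-Landau estimate of ln g(N), the (log) density of
    states versus particle number N, for the hard-square lattice gas
    (nearest-neighbor exclusion) on an L x L periodic square lattice."""
    rng = np.random.default_rng(seed)
    occ = np.zeros((L, L), dtype=int)      # site occupation numbers
    n_max = L * L // 2                     # close packing: one sublattice filled
    lng = np.zeros(n_max + 1)              # running estimate of ln g(N)
    hist = np.zeros(n_max + 1, dtype=int)  # visit histogram over N
    n, lnf = 0, 1.0                        # current N and modification factor

    def blocked(i, j):                     # would insertion at (i, j) violate exclusion?
        return (occ[(i + 1) % L, j] or occ[(i - 1) % L, j] or
                occ[i, (j + 1) % L] or occ[i, (j - 1) % L])

    while lnf > lnf_min:
        for _ in range(20000):             # a batch of insertion/deletion moves
            i, j = rng.integers(L), rng.integers(L)
            if occ[i, j]:
                new_n = n - 1              # propose deletion
            elif not blocked(i, j):
                new_n = n + 1              # propose insertion
            else:                          # exclusion violated: reject, stay put
                hist[n] += 1
                lng[n] += lnf
                continue
            # Wang-Landau acceptance favors rarely visited particle numbers
            if np.log(rng.random()) < lng[n] - lng[new_n]:
                occ[i, j] = 1 - occ[i, j]
                n = new_n
            hist[n] += 1
            lng[n] += lnf
        if hist.min() > flatness * hist.mean():   # histogram flat enough?
            hist[:] = 0
            lnf *= 0.5                            # refine the modification factor
        # (in practice one also stops after a maximum number of sweeps)
    return lng
```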

2.
Context: Most research in software effort estimation has not considered chronology when selecting projects for training and testing sets. A chronological split uses a project's starting and completion dates, such that any model that estimates effort for a new project p only uses as training data projects that were completed prior to p's start. Four recent studies investigated the use of chronological splits, using moving windows wherein only the most recent projects completed prior to a project's starting date were used as training data. The first three studies (S1–S3) found some evidence in favor of using windows; they all defined window sizes as fixed numbers of recent projects. In practice, we suggest that estimators think in terms of elapsed time, rather than the size of the data set, when deciding which projects to include in a training set. In the fourth study (S4) we showed that the use of windows based on duration can also improve estimation accuracy. Objective: This paper's contribution is to extend S4 using an additional dataset, and to investigate the effect on accuracy of using moving windows of various durations. Method: Stepwise multivariate regression was used to build prediction models, using all available training data and also using windows of various durations to select training data. Accuracy was compared based on absolute residuals and MREs; the Wilcoxon test was used to check statistical significance between results. Accuracy was also compared against estimates derived from windows containing fixed numbers of projects. Results: Neither fixed-size nor fixed-duration windows provided superior estimation accuracy in the new data set. Conclusions: Contrary to intuition, our results suggest that it is not always beneficial to exclude old data when estimating effort for new projects. When windows are helpful, windows based on duration are effective.
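To make the duration-based window concrete, here is a minimal sketch of selecting a training set under a chronological split with a moving window of fixed duration. The field names and the example data are illustrative only; the studies above fit stepwise regression models on the selected projects, which is not shown.

```python
from datetime import datetime, timedelta

def duration_window_training_set(projects, new_start, window_days):
    """Return the projects usable for training under a chronological split
    with a duration-based moving window: a project qualifies only if it was
    completed before the new project's start and within the last
    `window_days` days.  Field names ('end', 'effort', ...) are illustrative."""
    window_open = new_start - timedelta(days=window_days)
    return [p for p in projects if window_open <= p['end'] < new_start]

# Illustrative data: a one-year window for a project starting on 2013-06-01.
projects = [
    {'name': 'A', 'end': datetime(2011, 9, 30), 'effort': 1200},
    {'name': 'B', 'end': datetime(2012, 11, 15), 'effort': 800},
    {'name': 'C', 'end': datetime(2013, 5, 20), 'effort': 450},
]
training = duration_window_training_set(projects, datetime(2013, 6, 1), 365)
# -> projects B and C; project A falls outside the 365-day window
```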

3.
J. Atkins, W. E. Hart, Algorithmica, 1999, 25(2-3): 279-294
We describe a proof of NP-hardness for a lattice protein folding model whose instances contain protein sequences defined with a fixed, finite alphabet that contains 12 amino acid types. This lattice model represents a protein's conformation as a self-avoiding path that is embedded on the three-dimensional cubic lattice. A contact potential is used to determine the energy of a sequence in a given conformation; a pair of amino acids contributes to the conformational energy only if they are adjacent on the lattice. This result overcomes a significant weakness of previous intractability results, which do not examine protein folding models that have a finite alphabet of amino acids together with physically interesting conformations. Received June 1, 1997; revised March 13, 1998.
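For readers unfamiliar with contact-potential lattice models, the sketch below evaluates the energy of a conformation on the cubic lattice under a generic contact potential. It is only an illustration of the kind of energy function the hardness result is stated for; the specific 12-letter alphabet and potential used in the proof are not reproduced, and all names are hypothetical.

```python
def contact_energy(coords, sequence, potential):
    """Energy of a cubic-lattice conformation under a generic contact model.
    coords:    list of integer (x, y, z) positions of a self-avoiding path
    sequence:  amino-acid type index of each residue (same length as coords)
    potential: symmetric matrix, potential[a][b] = energy of an a-b contact
    Two residues form a contact when they occupy adjacent lattice sites but
    are not consecutive along the chain."""
    energy = 0.0
    for i in range(len(coords)):
        for j in range(i + 2, len(coords)):          # skip chain neighbors
            dist = sum(abs(a - b) for a, b in zip(coords[i], coords[j]))
            if dist == 1:                            # lattice-adjacent: a contact
                energy += potential[sequence[i]][sequence[j]]
    return energy

# Tiny example: a 4-residue chain with two residue types and one contact.
coords = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
sequence = [0, 1, 1, 0]
potential = [[-1.0, 0.0], [0.0, -0.5]]
print(contact_energy(coords, sequence, potential))   # residues 0 and 3 touch: -1.0
```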

4.
5.
6.
In this paper, we introduce the notion of derivation for a lattice and discuss some related properties. We give some equivalent conditions under which a derivation is isotone for lattices with a greatest element, modular lattices, and distributive lattices. We characterize modular lattices and distributive lattices by isotone derivations. Moreover, we prove that if d is an isotone derivation of a lattice L, the fixed set Fix_d(L) is an ideal of L. Finally, we prove that D(L) is isomorphic to L in a distributive lattice L.
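For context, the definition below is the one commonly used in the literature on lattice derivations, together with the isotonicity condition and the fixed set referred to above; the paper may state these in a slightly different but equivalent form.

```latex
% A map d : L \to L on a lattice (L, \wedge, \vee) is called a derivation if
d(x \wedge y) \;=\; \bigl(d(x) \wedge y\bigr) \vee \bigl(x \wedge d(y)\bigr)
  \qquad \text{for all } x, y \in L .
% d is isotone if x \le y implies d(x) \le d(y); its fixed set is
\mathrm{Fix}_d(L) \;=\; \{\, x \in L : d(x) = x \,\}.
```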

7.
We propose an exact simulation algorithm for lattice QCD with dynamical Kogut-Susskind fermions in which the N_f-flavor fermion operator is defined as the (N_f/4)-th root of the Kogut-Susskind (KS) fermion operator. The algorithm is an extension of the Polynomial Hybrid Monte Carlo (PHMC) algorithm to KS fermions. The fractional power of the KS fermion operator is approximated with a Hermitian Chebyshev polynomial, with which we can construct an algorithm for any number of flavors. The error which arises from the approximation is corrected by the Kennedy-Kuti noisy Metropolis test. Numerical simulations are performed for the two-flavor case for several lattice parameters in order to confirm the validity and the practical feasibility of the algorithm. In particular, tests on a 16^4 lattice with a quark mass corresponding to m_PS/m_V ∼ 0.68 are successfully accomplished. We conclude that our algorithm provides an attractive exact method for dynamical QCD simulations with KS fermions.
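Schematically, the polynomial trick behind PHMC-type algorithms for a fractional power of the fermion operator can be written as follows; the notation is generic and does not follow the paper's conventions.

```latex
% Approximate the inverse fractional power by a Chebyshev polynomial P_n,
%   x^{-N_f/4} \approx P_n(x),
% so that the fermionic weight factorizes into a pseudofermion part and a
% correction factor handled by the Kennedy-Kuti noisy Metropolis test:
\det D^{\,N_f/4}
  \;=\; \frac{\det\!\bigl[D^{\,N_f/4} P_n(D)\bigr]}{\det P_n(D)} ,
\qquad
\frac{1}{\det P_n(D)}
  \;=\; \int \mathcal{D}\phi^{\dagger}\,\mathcal{D}\phi\;
        e^{-\phi^{\dagger} P_n(D)\,\phi} .
```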

8.
Current accurate stereo matching algorithms employ some key techniques that are not well suited to parallel GPU architectures, and porting these techniques directly to GPU applications is tricky and cumbersome. To tackle this difficulty, we design two GPU-based stereo matching algorithms, one using a local fixed aggregation window whose size is configurable, and the other using an adaptive aggregation window which only includes necessary pixels. We use the winner-takes-all (WTA) principle for optimization and a plain voting refinement for post-processing; neither needs complex data structures. We aim to implement on GPU platforms fast stereo matching algorithms that produce results of the same quality as other WTA local dense methods that use window-based cost aggregation. In our GPU-based implementation of the fixed-window partially demosaiced CFA stereo matching application, accelerations of up to 20 times are obtained for large images. In our GPU-based implementation of the adaptive-window color stereo matching application, experimental results show that it can handle four pairs of standard images from the Middlebury database within roughly 100 ms.
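As a point of reference for the fixed-window variant, the NumPy sketch below performs winner-takes-all matching with absolute-difference costs aggregated over a fixed square window. It is a plain CPU illustration of the general technique; the paper's GPU kernels, the CFA/demosaicing handling, the adaptive window, and the voting refinement are not reproduced, and the border handling here (a flat penalty) is my own assumption.

```python
import numpy as np

def wta_fixed_window(left, right, max_disp=64, radius=4, border_cost=255.0):
    """Winner-takes-all stereo matching with a fixed square aggregation window.
    left, right: rectified grayscale images as 2-D float arrays (same shape).
    Returns an integer disparity map of the same shape."""
    h, w = left.shape
    k = 2 * radius + 1
    disp_cost = np.empty((max_disp + 1, h, w))
    for d in range(max_disp + 1):
        # Absolute-difference matching cost at disparity d; columns with no
        # correspondence receive a flat penalty (border_cost is an assumption).
        ad = np.full((h, w), border_cost)
        ad[:, d:] = np.abs(left[:, d:] - right[:, :w - d])
        # Aggregate over the fixed k x k window with a box filter (summed-area table).
        padded = np.pad(ad, radius, mode='edge')
        s = np.pad(padded.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
        disp_cost[d] = s[k:, k:] - s[:-k, k:] - s[k:, :-k] + s[:-k, :-k]
    return disp_cost.argmin(axis=0)      # WTA: disparity with the smallest cost
```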

9.
It is not uncommon to encounter a randomized clinical trial (RCT) in which we need to account for both the noncompliance of patients with their assigned treatment and confounders, to avoid making a misleading inference. In this paper, we focus our attention on estimation of the relative treatment efficacy, measured by the odds ratio (OR), in large strata for a stratified RCT with noncompliance. We have developed five asymptotic interval estimators for the OR. We employ Monte Carlo simulation to evaluate the finite-sample performance of these interval estimators in a variety of situations. We note that the interval estimator using the weighted least squares (WLS) method may perform well when the number of strata is small, but tends to be liberal when the number of strata is large. We find that the interval estimator using weights that are not functions of unknown parameters estimated from the data can improve the accuracy of the WLS-based interval estimator, but loses precision. We note that the estimator using the logarithmic transformation of the WLS point estimator and the interval estimator using the logarithmic transformation of the Mantel-Haenszel (MH) type of point estimator can perform well with respect to both coverage probability and average length in all the situations considered here. We further note that the interval estimator derived from a quadratic equation using a randomization-based method can be of use when the number of strata is large. Finally, we use data taken from a multiple risk factor intervention trial to illustrate the use of the interval estimators appropriate when the number of strata is small or moderate.
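For orientation, the classical weighted-least-squares (Woolf-type) combination of stratum-specific log odds ratios, which estimators of this kind build on, has the form below; the paper's five estimators additionally adjust for noncompliance, which is not shown.

```latex
% Woolf-type WLS combination over strata k = 1, \dots, K and the resulting
% large-sample interval (generic form, without the noncompliance adjustment):
\widehat{\theta}_{\mathrm{WLS}}
  = \frac{\sum_{k} w_k \log\widehat{\mathrm{OR}}_k}{\sum_{k} w_k},
\qquad
w_k = \frac{1}{\widehat{\operatorname{Var}}\!\bigl(\log\widehat{\mathrm{OR}}_k\bigr)},
\qquad
\exp\!\left( \widehat{\theta}_{\mathrm{WLS}}
   \pm \frac{z_{\alpha/2}}{\sqrt{\sum_{k} w_k}} \right).
```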

10.
This paper presents a priori probability density function (pdf)-based time-of-arrival (TOA) source localization algorithms. Range measurements are used to estimate the location parameter for TOA source localization. Prior information on the position of the calibrated source is employed to improve the existing likelihood-based localization method. The cost function, in which the prior distribution is combined with the likelihood function, is minimized by the adaptive expectation-maximization (EM) and space-alternating generalized expectation-maximization (SAGE) algorithms. The variance of the prior distribution does not need to be known a priori because it can be estimated using Bayes inference in the proposed adaptive EM algorithm; in contrast, the variance of the prior distribution must be known in the existing three-step WLS method [1]. The resulting positioning accuracy of the proposed methods was much better than that of the existing algorithms in regimes of large noise variance. Furthermore, the proposed algorithms can also effectively perform localization in mixed line-of-sight (LOS)/non-line-of-sight (NLOS) situations.
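A schematic form of the prior-augmented cost that such methods minimize is given below, assuming Gaussian range noise and a Gaussian prior on the source position; the symbols (anchor positions s_i, measured ranges r_i, prior mean x_0 and variance sigma_p^2) are generic notation rather than the paper's.

```latex
% MAP-type TOA cost: range likelihood plus a Gaussian prior on the position x
J(\mathbf{x}) \;=\;
  \sum_{i=1}^{M}
    \frac{\bigl(r_i - \lVert \mathbf{x} - \mathbf{s}_i \rVert\bigr)^{2}}{2\sigma_i^{2}}
  \;+\;
  \frac{\lVert \mathbf{x} - \mathbf{x}_0 \rVert^{2}}{2\sigma_p^{2}} ,
```

with the prior variance treated as unknown and estimated within the adaptive EM iteration, as described above.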

11.
Monte Carlo (MC) simulations and series expansion (SE) data for the energy, specific heat, magnetization, and susceptibility of the three-state and four-state Potts models and the Baxter-Wu model on the square lattice are analyzed in the vicinity of the critical point in order to estimate universal combinations of critical amplitudes. We also form effective ratios of the observables close to the critical point and analyze how they approach the universal critical-amplitude ratios. In particular, using the duality relation, we show analytically that for the Potts model with a number of states q ≤ 4, the effective ratio of the energy critical amplitudes always approaches unity linearly with respect to the reduced temperature. This fact leads to the prediction of relations among the amplitudes of correction-to-scaling terms of the specific heat in the low- and high-temperature phases. It is a common belief that the four-state Potts and the Baxter-Wu model belong to the same universality class. At the same time, the critical behavior of the four-state Potts model is modified by logarithmic corrections while that of the Baxter-Wu model is not. Numerical analysis shows that the critical amplitude ratios are very close for both models and therefore supports the hypothesis that the critical behavior of both systems is described by the same renormalization-group fixed point.

12.
We report extensive computer simulations of the Vicsek model (VM) [T. Vicsek et al., Phys. Rev. Lett. 75 (1995) 1226], aimed at describing the onset of ordering within the low-velocity regime of the collective displacement of self-driven agents. The VM assumes that each agent adopts the average direction of movement of its neighbors, perturbed by an external noise. The existence of a phase transition between a state of ordered collective displacement (low-noise limit) and a disordered regime (high-noise limit) is most likely the most distinctive feature of the VM. In this paper, after briefly discussing the critical nature of the transition, we focus our attention on the behavior of the VM in the low-velocity (v_0 → 0) regime for the displacement of the agents. In fact, while the XY model, which could somewhat be considered the equilibrium counterpart of the VM, does not exhibit order in d = 2 dimensions, an intriguing feature of the VM is precisely the onset of order. Since in the XY model the particles remain fixed on the lattice, we show that understanding the v_0 → 0 limit is relevant for explaining the different behavior of the two models.
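The update rule paraphrased above ("each agent adopts the average direction of its neighbors, perturbed by noise") is easy to state in code. The sketch below is the textbook Vicsek rule with uniform angular noise on a periodic box, not the authors' simulation code; function and parameter names are mine.

```python
import numpy as np

def vicsek_step(pos, theta, L, v0, r, eta, rng):
    """One synchronous update of the standard Vicsek model on a periodic
    L x L box: every agent takes the mean heading of all agents within
    radius r (itself included), adds uniform noise of amplitude eta, and
    then moves a distance v0 along its new heading."""
    # pairwise displacement vectors with minimum-image periodic boundaries
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    adj = ((d ** 2).sum(axis=-1) <= r ** 2).astype(float)
    # average neighbor direction = angle of the summed unit heading vectors
    sx = adj @ np.cos(theta)
    sy = adj @ np.sin(theta)
    noise = eta * (rng.random(theta.size) - 0.5)
    theta = np.arctan2(sy, sx) + noise
    pos = (pos + v0 * np.column_stack((np.cos(theta), np.sin(theta)))) % L
    return pos, theta

# Minimal usage: N agents with random positions and headings.
rng = np.random.default_rng(0)
N, L = 200, 10.0
pos = rng.random((N, 2)) * L
theta = rng.random(N) * 2 * np.pi - np.pi
for _ in range(100):
    pos, theta = vicsek_step(pos, theta, L, v0=0.03, r=1.0, eta=0.5, rng=rng)
```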

13.
When comparing an experimental treatment with a standard treatment in a randomized clinical trial (RCT), we often use the risk difference (RD) to measure the efficacy of the experimental treatment. In this paper, we have developed four asymptotic interval estimators for the RD in a stratified RCT with noncompliance. These include an asymptotic interval estimator based on the weighted-least-squares (WLS) estimator of the RD, an asymptotic interval estimator using the tanh^{-1}(x) transformation with the WLS optimal weight, an asymptotic interval estimator derived from Fieller's theorem, and an asymptotic interval estimator using a randomization-based approach. Based on Monte Carlo simulations, we have compared these four asymptotic interval estimators with the asymptotic interval estimator recently proposed elsewhere. We have found that when the probability of compliance is high, the interval estimator using a randomization-based approach is probably more accurate than the others, especially when the stratum size is not large. When the probability of compliance is moderate, the interval estimator using the tanh^{-1}(x) transformation is likely to be the best among all interval estimators considered here. We note that the interval estimator proposed elsewhere can be of use when the underlying RD is small, but loses accuracy when the RD is large. We also note that when the number of patients per assigned treatment is large, the four asymptotic interval estimators developed here are essentially equivalent; they are all appropriate for use. Finally, to illustrate the use of these interval estimators, we consider data taken from a large field trial studying the effect of a multifactor intervention program on reducing the mortality of coronary heart disease in middle-aged men.
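For reference, the generic delta-method form of a tanh^{-1}-transformed confidence interval for a quantity confined to (-1, 1), such as the RD, is shown below; the paper's version plugs in the WLS point estimate with its optimal-weight variance and adjusts for noncompliance, which is not reproduced here.

```latex
% Since tanh^{-1} maps (-1, 1) onto the real line, transform, build a normal
% interval, and transform back (delta method: d/dx tanh^{-1}x = 1/(1 - x^2)):
\tanh\!\left(
  \tanh^{-1}\!\bigl(\widehat{\mathrm{RD}}\bigr)
  \;\pm\;
  z_{\alpha/2}\,
  \frac{\widehat{\mathrm{SE}}\bigl(\widehat{\mathrm{RD}}\bigr)}
       {1 - \widehat{\mathrm{RD}}^{\,2}}
\right).
```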

14.
Full-potential density functional theory (DFT) calculations have been performed for the MgB2 superconductor. The results show that applying positive (negative) pressure leads to a decrement (increment) in the DOS at the Fermi level; these calculations therefore suggest a decrease in Tc under positive pressure. In the Γ-A path of momentum space, the band that plays the dominant role in the conduction properties moves upward when c increases or a decreases, and moves downward when c decreases or a increases. By relaxing the system under in-plane strain, we have studied the behavior of the axial lattice parameter c. Our results show that the change in the axial lattice constant c is one third of the change in the planar lattice constant a. We find that applying a small in-plane tensile strain increases the DOS at the Fermi level, but the DOS decreases for larger applied strain. For negative in-plane strain (compression), the DOS at the Fermi level decreases monotonically. Tension makes the electronic bands move downward in the Γ-A direction of the reciprocal lattice, whereas compression makes them move upward. Based on these results, it can be concluded that by applying a small tension one can enhance Tc in the MgB2 compound.

15.
Adaptive reservation is a real-time scheduling technique in which each application is assigned a fraction of the computational resource (a reservation) that can be dynamically adapted to the varying requirements of the application by using appropriate feedback control algorithms. An adaptive reservation is typically implemented by using an aperiodic server (e.g. sporadic server) algorithm with a fixed period and a variable budget. When the feedback law demands an increase of the reservation budget, the system must run a schedulability test to check whether there is enough spare bandwidth to accommodate the increase. The schedulability test must be very fast, as it may be performed at each budget update, i.e. potentially at each instance of a task; yet it must be as efficient as possible, to maximize resource usage. In this paper, we tackle the problem of performing an efficient on-line schedulability test for adaptive resource reservations in fixed-priority schedulers. In the literature, a number of algorithms have been proposed for on-line admission control in fixed-priority systems. We describe four of these tests, with increasing complexity and performance. In addition, we propose a novel on-line test, called the Spare-Pot algorithm, which has been specifically designed for the problem at hand and which shows a good cost/performance ratio compared to the other tests.
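To illustrate the kind of check performed at each budget update, here is a deliberately simple admission test based on the classic Liu-Layland utilization bound for rate-monotonic, fixed-priority scheduling. It is far more pessimistic (and far simpler) than the response-time-based tests and the Spare-Pot algorithm discussed in the paper, and all names in it are hypothetical.

```python
def admit_budget_increase(reservations, idx, new_budget):
    """Decide whether reservation `idx` may raise its budget to `new_budget`.
    Each reservation is a (budget, period) pair implementing a CPU share of
    budget/period.  The check uses the Liu-Layland bound n*(2^(1/n) - 1),
    which is sufficient (but not necessary) for fixed-priority schedulability
    under rate-monotonic priorities with deadlines equal to periods."""
    n = len(reservations)
    budgets = [b for b, _ in reservations]
    budgets[idx] = new_budget
    total_utilization = sum(b / p for b, (_, p) in zip(budgets, reservations))
    return total_utilization <= n * (2 ** (1.0 / n) - 1)

# Example: three reservations; the second asks to grow from 2/10 to 4/10.
reservations = [(1, 5), (2, 10), (3, 20)]
print(admit_budget_increase(reservations, 1, 4))   # True: 0.75 <= ~0.78
```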

16.
With more and more real deployments of wireless sensor network applications, we envision that their success will ultimately be determined by whether the sensor networks can provide a high-quality stream of data over a long period. In this paper, we propose a consistency-driven data quality management framework called Orchis that integrates data quality into an energy-efficient sensor system design. Orchis consists of four components: data consistency models, adaptive data sampling and processing protocols, consistency-driven cross-layer protocols, and flexible APIs to manage data quality, which together support the goals of high data quality and energy efficiency. We first formally define a consistency model, which not only includes temporal consistency and numerical consistency, but also considers the application-specific requirements of the data and the data dynamics in the sensing field. Next, we propose an adaptive, lazy, energy-efficient data collection protocol, which adapts the data sampling rate to the data dynamics in the sensing field and stays lazy while data consistency is maintained. Finally, we conduct a comprehensive evaluation of the proposed protocol based on both a TOSSIM-based simulation and a real prototype implementation using MICA2 motes. The results from both the simulation and the prototype show that our protocol reduces the number of delivered messages, improves the quality of the collected data, and in turn extends the lifetime of the whole network. Our analysis also implies that a tradeoff should be carefully set between data consistency requirements and energy saving, based on the specific requirements of different applications.
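The adaptive, lazy sampling idea can be illustrated with a small sketch: lengthen the sampling interval while successive readings stay within a consistency tolerance, and shorten it when they do not. This is a generic illustration, not the Orchis protocol itself (whose consistency model also covers temporal constraints and application-specific requirements); all names and thresholds are assumptions.

```python
def next_sampling_interval(readings, interval, epsilon,
                           min_interval=1.0, max_interval=60.0):
    """Pick the next sampling interval (in seconds) from recent readings.
    If the last two readings differ by less than the consistency tolerance
    `epsilon`, the node stays lazy and backs off; otherwise it samples faster."""
    if len(readings) >= 2 and abs(readings[-1] - readings[-2]) < epsilon:
        return min(interval * 2.0, max_interval)   # data stable: back off
    return max(interval / 2.0, min_interval)       # data changing: speed up

# Example: temperature readings that stabilize, then jump.
interval = 4.0
for window in ([20.1, 20.2], [20.2, 20.2], [20.2, 23.9]):
    interval = next_sampling_interval(window, interval, epsilon=0.5)
    print(interval)   # 8.0, 16.0, 8.0
```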

17.
We present PROFESS (PRinceton Orbital-Free Electronic Structure Software), a new software package that performs orbital-free density functional theory (OF-DFT) calculations. OF-DFT is a first principles quantum mechanics method primarily for condensed matter that can be made to scale linearly with system size. We describe the implementation of energy, force, and stress functionals and the methods used to optimize the electron density under periodic boundary conditions. All electronic energy and potential terms scale linearly while terms involving the ions exhibit quadratic scaling in our code. Despite the latter scaling, the program can treat tens of thousands of atoms with quantum mechanics on a single processor, as we demonstrate here. Limitations of the method are also outlined, the most serious of which is the accuracy of state-of-the-art kinetic energy functionals, which limits the applicability of the method to main group elements at present.
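Schematically, the quantity PROFESS minimizes is the orbital-free total-energy functional of the density; the decomposition below is the standard OF-DFT form and is meant only as orientation, with T_KE an approximate kinetic-energy density functional (e.g. of the Wang-Govind-Carter type cited in the program summary below).

```latex
% Orbital-free total energy, minimized with respect to the density \rho(r)
% (and optionally the ion positions and cell vectors) under the constraint
% that \rho integrates to the number of electrons N_e:
E[\rho] \;=\; T_{\mathrm{KE}}[\rho] + E_{\mathrm{H}}[\rho] + E_{\mathrm{xc}}[\rho]
  + \int v_{\mathrm{ion}}(\mathbf{r})\,\rho(\mathbf{r})\,d\mathbf{r}
  + E_{\mathrm{ion\text{-}ion}},
\qquad
\int \rho(\mathbf{r})\,d\mathbf{r} \;=\; N_e .
```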

Program summary

Program title: PROFESS
Catalogue identifier: AEBN_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBN_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 35 933
No. of bytes in distributed program, including test data, etc.: 329 924
Distribution format: tar.gz
Programming language: Fortran 90
Computer: Intel with ifort; AMD Opteron with pathf90
Operating system: Linux
RAM: Problem dependent, but 2 GB is sufficient for up to 10,000 ions
Classification: 7.3
External routines: FFTW (http://www.fftw.org), MINPACK-2
Nature of problem: Given a set of coordinates describing the initial ion positions under periodic boundary conditions, recovers the ground-state energy, electron density, ion positions, and cell lattice vectors predicted by orbital-free density functional theory. Except for computation of the ion-ion and ion-electron terms, all other terms are effectively linear scaling. Up to ∼10,000 ions may be included in the calculation on just a single processor.
Solution method: Computes energies as described in the text; minimizes this energy with respect to the electron density, ion positions, and cell lattice vectors.
Restrictions: PROFESS cannot use nonlocal (such as ultrasoft) pseudopotentials. Local pseudopotential files for aluminum, magnesium, silver, and silicon are available upon request. Also, due to the current state of the kinetic energy functionals, PROFESS is only reliable for main group metals and some properties of semiconductors.
Running time: Problem dependent: the test example provided with the code takes less than a second to run. Timing results for large scale problems are given in the paper.
References:
[1] Y.A. Wang, N. Govind, E.A. Carter, Phys. Rev. B 58 (1998) 13465; Y.A. Wang, N. Govind, E.A. Carter, Phys. Rev. B 64 (2001) 129901 (erratum).
[2] S.C. Watson, E.A. Carter, Comput. Phys. Comm. 128 (2000) 67.

18.
This paper presents a possible way of introducing the Markov property within the framework of orthomodular quantum logic, which is commonly used as the calculus model for quantum mechanics. The work follows a logical line rather than any particular physical interpretation of quantum mechanics. The basic algebraic structure used as a model for noncompatible random events is an orthomodular lattice. On the orthomodular lattice, a dynamical structure is introduced, coupled with mappings which have properties similar to sn-maps on the orthomodular lattice. This construction leads to the definition of an L-process with the Markov property on the orthomodular lattice.

19.
Landau, Immerman, Algorithmica, 2002, 32(3): 423-436
This paper answers the following question: given an "erector set" linkage, a connected set of fixed-length links, what is the minimal ε needed to adjust the edge lengths so that the vertices of the linkage can be placed on integer lattice points? Each edge length is allowed to change by at most ε. Angles are not fixed, but collinearity must be preserved (although the introduction of new collinearities is allowed). We show that the question of determining whether a linkage can be embedded on the integer lattice is strongly NP-complete. Indeed, we show that even with ε = 0 (under which the problem becomes "Can this linkage be embedded?"), the problem remains strongly NP-complete. However, for some applications, it is reasonable to assume that the lengths of the links and the number of "co-incident" cycles are bounded (two cycles are co-incident if they share an edge). We show that under these bounding assumptions, there is a polynomial-time solution to the problem.

20.
We document our Fortran 77 code for multicanonical simulations of 4D U(1) lattice gauge theory in the neighborhood of its phase transition. This includes programs and routines for canonical simulations using biased Metropolis heatbath updating and overrelaxation, determination of multicanonical weights via a Wang-Landau recursion, and multicanonical simulations with fixed weights supplemented by overrelaxation sweeps. Measurements are performed for the action, Polyakov loops and some of their structure factors. Many features of the code transcend the particular application and are expected to be useful for other lattice gauge theory models as well as for systems in statistical physics.
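Schematically, a multicanonical run of the kind documented here samples with weights that flatten the action histogram and then reweights back to the canonical ensemble; the expressions below use generic statistical-mechanics notation (E for the action variable, β for the coupling) rather than the conventions of the code.

```latex
% Multicanonical weights approximate the inverse density of states, giving an
% approximately flat action histogram across the transition region:
w_{\mathrm{mu}}(E) \;\approx\; \frac{1}{n(E)} ,
% canonical expectation values are then recovered by reweighting the
% fixed-weight multicanonical time series (t labels the measurements):
\langle \mathcal{O} \rangle_{\beta}
  \;=\; \frac{\sum_{t} \mathcal{O}_t \, w_{\mathrm{mu}}(E_t)^{-1} e^{-\beta E_t}}
             {\sum_{t} w_{\mathrm{mu}}(E_t)^{-1} e^{-\beta E_t}} ,
```

with the weights determined in an initial Wang-Landau recursion, as described above.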

Program summary

Program title: STMC_U1MUCA
Catalogue identifier: AEET_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEET_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 18 376
No. of bytes in distributed program, including test data, etc.: 205 183
Distribution format: tar.gz
Programming language: Fortran 77
Computer: Any capable of compiling and executing Fortran code
Operating system: Any capable of compiling and executing Fortran code
Classification: 11.5
Nature of problem: Efficient Markov chain Monte Carlo simulation of U(1) lattice gauge theory close to its phase transition. Measurements and analysis of the action per plaquette, the specific heat, Polyakov loops and their structure factors.
Solution method: Multicanonical simulations with an initial Wang-Landau recursion to determine suitable weight factors. Reweighting to physical values using logarithmic coding and calculating jackknife error bars.
Running time: The prepared test runs took up to 74 minutes to execute on a 2 GHz PC.
