Similar documents
1.
HiggsBounds is a computer code that tests theoretical predictions of models with arbitrary Higgs sectors against the exclusion bounds obtained from the Higgs searches at LEP and the Tevatron. The included experimental information comprises exclusion bounds at 95% C.L. on topological cross sections. In order to determine which search topology has the highest exclusion power, the program also includes, for each topology, information from the experiments on the expected exclusion bound, which would have been observed in case of a pure background distribution. Using the predictions of the desired model provided by the user as input, HiggsBounds determines the most sensitive channel and tests whether the considered parameter point is excluded at the 95% C.L. HiggsBounds is available as a Fortran 77 and Fortran 90 code. The code can be invoked as a command line version, a subroutine version and an online version. Examples of exclusion bounds obtained with HiggsBounds are discussed for the Standard Model, for a model with a fourth generation of quarks and leptons and for the Minimal Supersymmetric Standard Model with and without CP-violation. The experimental information on the exclusion bounds currently implemented in HiggsBounds will be updated as new results from the Higgs searches become available.
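The channel-selection logic described above can be captured in a few lines. The sketch below is our illustration of that logic, not the HiggsBounds API itself: the program name, variables and numbers are all made up, and a real application would fill the three arrays from the user's model predictions and the tabulated experimental limits.

```fortran
! Sketch of the channel-selection logic described above -- NOT the actual
! HiggsBounds API. All names and numbers here are made up; a real
! application would fill the arrays from its model predictions and from
! the tabulated experimental limits.
program channel_selection
  implicit none
  integer, parameter :: nchan = 3
  ! predicted cross section ratios Q = sigma_model/sigma_ref per topology
  real(8) :: q_model(nchan)      = (/ 0.8d0, 2.5d0, 1.2d0 /)
  ! expected 95% C.L. limits (pure-background distribution) and observed ones
  real(8) :: lim_expected(nchan) = (/ 1.5d0, 2.0d0, 0.9d0 /)
  real(8) :: lim_observed(nchan) = (/ 1.7d0, 2.2d0, 1.1d0 /)
  integer :: i, ibest
  real(8) :: sens, best

  ! step 1: the most sensitive channel maximizes prediction/expected limit;
  ! using the expected (not observed) limit preserves the 95% C.L. of step 2
  best = -1.0d0
  ibest = 0
  do i = 1, nchan
     sens = q_model(i) / lim_expected(i)
     if (sens > best) then
        best = sens
        ibest = i
     end if
  end do

  ! step 2: only this single channel is compared with the observed limit
  if (q_model(ibest) > lim_observed(ibest)) then
     print '(a,i0)', 'excluded at 95% C.L. by channel ', ibest
  else
     print '(a,i0)', 'allowed; most sensitive channel was ', ibest
  end if
end program channel_selection
```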

Program summary

Program title: HiggsBounds
Catalogue identifier: AEFF_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFF_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 55 733
No. of bytes in distributed program, including test data, etc.: 1 986 213
Distribution format: tar.gz
Programming language: Fortran 77, Fortran 90 (two code versions are offered)
Computer: HiggsBounds can be built with any compatible Fortran 77 or Fortran 90 compiler. The program has been tested on x86 CPUs running under Linux (Ubuntu 8.04) with the Portland Group Fortran compilers (pgf77, pgf90) and the GNU Fortran compilers (g77, gfortran).
Operating system: Linux
RAM: minimum of about 6000 kbytes (dependent on the code version)
Classification: 11.1
External routines: HiggsBounds requires no external routines/libraries. Some sample programs in the distribution require FeynHiggs 2.6.x or CPsuperH2 to be installed (see "Subprograms used").
Subprograms used:
Cat Id      Title             Reference
ADKT_v2_0   FeynHiggs 2.6.5   CPC 180 (2009) 1426
ADSR_v2_0   CPsuperH2.0       CPC 180 (2009) 312

2.
We document our Fortran 77 code for multicanonical simulations of 4D U(1) lattice gauge theory in the neighborhood of its phase transition. This includes programs and routines for canonical simulations using biased Metropolis heatbath updating and overrelaxation, determination of multicanonical weights via a Wang-Landau recursion, and multicanonical simulations with fixed weights supplemented by overrelaxation sweeps. Measurements are performed for the action, Polyakov loops and some of their structure factors. Many features of the code transcend the particular application and are expected to be useful for other lattice gauge theory models as well as for systems in statistical physics.
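To make the multicanonical machinery concrete, here is a self-contained toy (ours, not part of STMC_U1MUCA) that runs a Wang-Landau recursion for a random walker on a discrete energy ladder; the accept step and the weight update have the same structure as in the lattice code, which applies them to the U(1) action.

```fortran
! Toy Wang-Landau recursion for a random walker on a discrete energy
! ladder E = 1..nbins -- our illustration, not STMC_U1MUCA itself.
! Multicanonical weight w(E) = exp(-lng(E)); lng approximates ln g(E).
program wang_landau_toy
  implicit none
  integer, parameter :: nbins = 20
  integer :: hist(nbins), e, eprop, level, i
  real(8) :: lng(nbins), lnf, r

  call random_seed()
  lng = 0.0d0        ! running estimate of ln(density of states)
  lnf = 1.0d0        ! modification factor, reduced level by level
  e = 1
  do level = 1, 20
     hist = 0
     do i = 1, 100000
        call random_number(r)                 ! propose a neighbouring bin
        eprop = e + merge(1, -1, r > 0.5d0)
        if (eprop < 1 .or. eprop > nbins) eprop = e
        call random_number(r)                 ! multicanonical accept step
        if (log(max(r, 1.0d-300)) < lng(e) - lng(eprop)) e = eprop
        lng(e)  = lng(e) + lnf                ! Wang-Landau weight update
        hist(e) = hist(e) + 1
     end do
     lnf = 0.5d0 * lnf  ! fixed schedule; real codes test histogram flatness
  end do
  print '(a)', ' bin   ln g estimate   visits at final level'
  do i = 1, nbins
     print '(i4,f14.3,i12)', i, lng(i) - lng(1), hist(i)
  end do
end program wang_landau_toy
```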

Program summary

Program title: STMC_U1MUCA
Catalogue identifier: AEET_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEET_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 18 376
No. of bytes in distributed program, including test data, etc.: 205 183
Distribution format: tar.gz
Programming language: Fortran 77
Computer: Any capable of compiling and executing Fortran code
Operating system: Any capable of compiling and executing Fortran code
Classification: 11.5
Nature of problem: Efficient Markov chain Monte Carlo simulation of U(1) lattice gauge theory close to its phase transition. Measurements and analysis of the action per plaquette, the specific heat, Polyakov loops and their structure factors.
Solution method: Multicanonical simulations with an initial Wang-Landau recursion to determine suitable weight factors. Reweighting to physical values using logarithmic coding and calculating jackknife error bars.
Running time: The prepared test runs took up to 74 minutes to execute on a 2 GHz PC.

3.
The derivation of the Feynman rules for lattice perturbation theory from actions and operators is complicated, especially for highly improved actions such as HISQ. This task is, however, both important and particularly suitable for automation. We describe a suite of software to generate and evaluate Feynman rules for a wide range of lattice field theories with gluons and (relativistic and/or heavy) quarks. Our programs can handle actions as complicated as (m)NRQCD and HISQ. Automated differentiation methods are also used to calculate the derivatives of Feynman diagrams.
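Forward-mode automatic differentiation of the kind alluded to above can be implemented with dual numbers and operator overloading; the module below is a minimal sketch of the technique in Fortran 95 (our illustration, not HPsrc code), differentiating a toy function in place of a Feynman rule.

```fortran
! Minimal forward-mode automatic differentiation with dual numbers --
! our sketch of the technique, not part of HiPPy/HPsrc.
module dual_mod
  implicit none
  type :: dual
     real(8) :: v   ! value
     real(8) :: d   ! derivative w.r.t. the chosen variable
  end type dual
  interface operator(+)
     module procedure add
  end interface
  interface operator(*)
     module procedure mul
  end interface
contains
  elemental function add(a, b) result(c)
    type(dual), intent(in) :: a, b
    type(dual) :: c
    c = dual(a%v + b%v, a%d + b%d)      ! sum rule
  end function add
  elemental function mul(a, b) result(c)
    type(dual), intent(in) :: a, b
    type(dual) :: c
    c = dual(a%v*b%v, a%d*b%v + a%v*b%d)  ! product rule
  end function mul
  elemental function dsin(a) result(c)
    type(dual), intent(in) :: a
    type(dual) :: c
    c = dual(sin(a%v), cos(a%v)*a%d)    ! chain rule through sin
  end function dsin
end module dual_mod

program demo
  use dual_mod
  implicit none
  type(dual) :: k, f
  k = dual(0.7d0, 1.0d0)               ! seed derivative dk/dk = 1
  f = dsin(k)*dsin(k) + k*k            ! f(k) = sin^2 k + k^2
  print '(a,2f12.6)', 'f, df/dk = ', f%v, f%d  ! df/dk = 2 sin k cos k + 2k
end program demo
```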

Program summary

Program title: HiPPy, HPsrc
Catalogue identifier: AEDX_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDX_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GPLv2 (see Additional comments below)
No. of lines in distributed program, including test data, etc.: 513 426
No. of bytes in distributed program, including test data, etc.: 4 893 707
Distribution format: tar.gz
Programming language: Python, Fortran 95
Computer: HiPPy: single-processor workstations. HPsrc: single-processor workstations and MPI-enabled multi-processor systems
Operating system: HiPPy: any for which Python v2.5.x is available. HPsrc: any for which a standards-compliant Fortran 95 compiler is available
Has the code been vectorised or parallelised?: Yes
RAM: Problem specific, typically less than 1 GB for either code
Classification: 4.4, 11.5
Nature of problem: Derivation and use of perturbative Feynman rules for complicated lattice QCD actions.
Solution method: An automated expansion method implemented in Python (HiPPy) and code to use the expansions to generate Feynman rules in Fortran 95 (HPsrc).
Restrictions: No general restrictions. Specific restrictions are discussed in the text.
Additional comments: The HiPPy and HPsrc codes are released under the second version of the GNU General Public Licence (GPL v2); anyone is free to use or modify the code for their own calculations. As part of the licensing, we ask that any publications including results from the use of this code, or of modifications of it, cite Refs. [1,2] as well as this paper. Finally, we ask that details of these publications, as well as of any bugs or required or useful improvements of this core code, be communicated to us.
Running time: Very problem specific, depending on the complexity of the Feynman rules and the number of integration points. Typically between a few minutes and several weeks. The installation tests provided with the program code take only a few seconds to run.
References:
[1] A. Hart, G.M. von Hippel, R.R. Horgan, L.C. Storoni, Automatically generating Feynman rules for improved lattice field theories, J. Comput. Phys. 209 (2005) 340–353, doi:10.1016/j.jcp.2005.03.010, arXiv:hep-lat/0411026.
[2] M. Lüscher, P. Weisz, Efficient numerical techniques for perturbative lattice gauge theory computations, Nucl. Phys. B 266 (1986) 309, doi:10.1016/0550-3213(86)90094-5.

4.
The semi-classical atomic-orbital close-coupling method is a well-known approach for the calculation of cross sections in ion–atom collisions. It relies heavily on the fast and stable computation of exchange integrals. We present an upgrade to earlier implementations of the Fourier-transform method. For this purpose, we implement an extensive library for symbolic storage of polynomials, relying on sophisticated tree structures to allow fast manipulation and numerically stable evaluation. Using this library, we considerably speed up the creation and computation of exchange integrals. This enables us to compute cross sections for more complex collision systems.
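A left-child right-sibling tree can be declared in Fortran 95 with two pointers per node. The module below is a minimal sketch of the storage scheme (ours, not the TXINT library): sibling pointers chain the terms of a polynomial, and the recursive evaluator walks the tree.

```fortran
! Minimal left-child right-sibling polynomial storage -- our sketch,
! not the TXINT library. Siblings chain the terms of a polynomial;
! a child, when present, holds a nested factor with its own term chain.
module poly_tree
  implicit none
  type :: node
     real(8) :: coeff = 0.0d0
     integer :: power = 0
     type(node), pointer :: child   => null()
     type(node), pointer :: sibling => null()
  end type node
contains
  recursive function eval(t, x) result(s)
    type(node), pointer :: t
    real(8), intent(in) :: x
    real(8) :: s
    s = 0.0d0
    if (.not. associated(t)) return
    s = t%coeff * x**t%power                               ! this term
    if (associated(t%child))   s = s * eval(t%child, x)    ! nested factor
    if (associated(t%sibling)) s = s + eval(t%sibling, x)  ! next term
  end function eval

  function term(c, p) result(t)
    real(8), intent(in) :: c
    integer, intent(in) :: p
    type(node), pointer :: t
    allocate(t)
    t%coeff = c
    t%power = p
  end function term
end module poly_tree

program demo
  use poly_tree
  implicit none
  type(node), pointer :: p
  p => term(3.0d0, 2)                  ! build 3x^2 + 2x + 1 ...
  p%sibling => term(2.0d0, 1)          ! ... as a sibling chain
  p%sibling%sibling => term(1.0d0, 0)
  print '(a,f8.2)', '3x^2 + 2x + 1 at x = 2: ', eval(p, 2.0d0)  ! 17.00
end program demo
```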

Program summary

Program title: TXINT
Catalogue identifier: AEHS_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHS_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 12 332
No. of bytes in distributed program, including test data, etc.: 157 086
Distribution format: tar.gz
Programming language: Fortran 95
Computer: All with a Fortran 95 compiler
Operating system: All with a Fortran 95 compiler
RAM: Depends heavily on input, usually less than 100 MiB
Classification: 16.10
Nature of problem: Analytical calculation of one- and two-center exchange matrix elements for the close-coupling method in the impact parameter model.
Solution method: Similar to the code of Hansen and Dubois [1], we use the Fourier-transform method suggested by Shakeshaft [2] to compute the integrals. However, we substantially speed up the calculation using a library for symbolic manipulation of polynomials.
Restrictions: We restrict ourselves to a defined collision system in the impact parameter model.
Unusual features: A library for symbolic manipulation of polynomials, where polynomials are stored in a space-saving left-child right-sibling binary tree. This provides stable numerical evaluation and fast mutation while maintaining full compatibility with the original code.
Additional comments: This program makes heavy use of features introduced by the Fortran 90 standard, most prominently pointers, derived types and allocatable structures, as well as a small portion of Fortran 95. Only newer compilers support these features. The following compilers support all features needed by the program:
• GNU Fortran compiler "gfortran", from version 4.3.0
• GNU Fortran 95 compiler "g95", from version 4.2.0
• Intel Fortran compiler "ifort", from version 11.0
Running time: Heavily dependent on input, usually less than one CPU second.
References:
[1] J.-P. Hansen, A. Dubois, Comput. Phys. Commun. 67 (1992) 456.
[2] R. Shakeshaft, J. Phys. B: At. Mol. Opt. Phys. 8 (1975) L134.

5.
EPW (Electron–Phonon coupling using Wannier functions) is a program written in Fortran90 for calculating the electron–phonon coupling in periodic systems using density-functional perturbation theory and maximally localized Wannier functions. EPW can calculate electron–phonon interaction self-energies, electron–phonon spectral functions, and total as well as mode-resolved electron–phonon coupling strengths. The calculation of the electron–phonon coupling requires a very accurate sampling of electron–phonon scattering processes throughout the Brillouin zone, hence reliable calculations can be prohibitively time-consuming. EPW combines the Kohn–Sham electronic eigenstates and the vibrational eigenmodes provided by the Quantum ESPRESSO package (see Giannozzi et al., 2009 [1]) with the maximally localized Wannier functions provided by the wannier90 package (see Mostofi et al., 2008 [2]) in order to generate electron–phonon matrix elements on arbitrarily dense Brillouin zone grids using a generalized Fourier interpolation. This feature of EPW leads to fast and accurate calculations of the electron–phonon coupling, and enables the study of the electron–phonon coupling in large and complex systems.
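The heart of the interpolation step is a discrete Fourier sum: a short-ranged quantity known in the real-space (Wannier) representation is summed with phase factors to yield its value at an arbitrary k-point. The 1D toy below sketches only this final step (our illustration, not EPW code); EPW's actual procedure also involves the rotations between the Bloch and Wannier gauges.

```fortran
! Toy of the generalized Fourier interpolation idea: a quantity g(R) known
! on a coarse set of real-space lattice vectors R is summed with phase
! factors to give g(k) at an arbitrary k. Illustrative only -- not EPW code.
program fourier_interp
  implicit none
  integer, parameter :: nr = 8                 ! coarse real-space grid (1D)
  real(8), parameter :: pi = 3.141592653589793d0
  complex(8) :: g_r(0:nr-1), g_k
  real(8) :: k
  integer :: ir

  ! toy short-ranged quantity in the (Wannier-like) real-space picture
  do ir = 0, nr - 1
     g_r(ir) = exp(-1.0d0 * min(ir, nr - ir))  ! decays with distance
  end do

  ! interpolate onto an arbitrary k between the coarse grid points
  k = 0.3d0 * 2.0d0 * pi / nr     ! any k, need not lie on the coarse mesh
  g_k = (0.0d0, 0.0d0)
  do ir = 0, nr - 1
     g_k = g_k + g_r(ir) * exp(cmplx(0.0d0, k * ir, kind=8))
  end do
  print '(a,2f12.6)', 'interpolated g(k) = ', g_k
end program fourier_interp
```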

Program summary

Program title: EPW
Catalogue identifier: AEHA_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHA_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU Public License
No. of lines in distributed program, including test data, etc.: 304 443
No. of bytes in distributed program, including test data, etc.: 1 487 466
Distribution format: tar.gz
Programming language: Fortran 90
Computer: Any architecture with a Fortran 90 compiler
Operating system: Any environment with a Fortran 90 compiler
Has the code been vectorized or parallelized?: Yes, optimized for 1 to 64 processors
RAM: Heavily system dependent, as small as a few MB
Supplementary material: A copy of the "EPW/examples" directory containing the phonon binary files can be downloaded
Classification: 7
External routines: MPI, Quantum ESPRESSO package [1], BLAS, LAPACK, FFTW. (The necessary BLAS, LAPACK and FFTW routines are included in the Quantum ESPRESSO package [1].)
Nature of problem: The calculation of the electron–phonon coupling from first principles requires a very accurate sampling of electron–phonon scattering processes throughout the Brillouin zone; hence reliable calculations can be prohibitively time-consuming.
Solution method: EPW makes use of a real-space formulation and combines the Kohn–Sham electronic eigenstates and the vibrational eigenmodes provided by the Quantum ESPRESSO package with the maximally localized Wannier functions provided by the wannier90 package in order to generate electron–phonon matrix elements on arbitrarily dense Brillouin zone grids using a generalized Fourier interpolation.
Running time: Single-processor examples typically take 5–10 minutes.
References:
[1] P. Giannozzi, et al., J. Phys.: Condens. Matter 21 (2009) 395502, http://www.quantum-espresso.org/.

6.
The accurate computation of hydrogenic continuum wave functions is very important in many branches of physics, such as electron–atom collisions, cold-atom physics, and atomic ionization in strong laser fields. Although various algorithms and codes already exist, most of them are reliable only in certain ranges of parameters. In some practical applications, accurate continuum wave functions need to be calculated at extremely low energies, large radial distances and/or large angular momentum numbers. Here we provide such a code, which can generate accurate hydrogenic continuum wave functions and the corresponding Coulomb phase shifts over a wide range of parameters. Without any essential restriction on the angular momentum number, the present code gives reliable results for electron energies in the range [10⁻³, 10³] eV and radial distances in the range [10⁻², 10⁴] a.u. We also find the present code to be very efficient, so it should find numerous applications in fields such as strong-field physics.
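The backward-recurrence technique used by the code can be illustrated on a familiar example: the Bessel functions J_n(x) are the minimal solution of their three-term recurrence, so downward recursion from arbitrary starting values converges onto them, and a known sum rule fixes the normalization. The toy below is ours, not the distributed code, which applies the same idea to the Coulomb radial recurrence.

```fortran
! Miller/Gautschi backward recurrence for the minimal solution of a
! three-term recurrence, demonstrated on J_n(x) -- our toy illustration,
! not the HContinuumGautchi code itself.
program miller_bessel
  implicit none
  integer, parameter :: nmax = 10, nstart = nmax + 20
  real(8) :: f(0:nstart+1), x, s
  integer :: n

  x = 2.5d0
  ! downward recurrence from an arbitrary start: the minimal solution
  ! dominates as n decreases, so the wrong starting values die out
  f(nstart+1) = 0.0d0
  f(nstart)   = 1.0d-30
  do n = nstart, 1, -1
     f(n-1) = (2.0d0*n/x)*f(n) - f(n+1)
  end do
  ! normalize with the identity J_0(x) + 2*sum_{k>=1} J_{2k}(x) = 1
  s = f(0)
  do n = 2, nstart, 2
     s = s + 2.0d0*f(n)
  end do
  f = f / s
  do n = 0, nmax
     print '(a,i2,a,es16.8)', 'J_', n, '(2.5) ~ ', f(n)
  end do
end program miller_bessel
```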

Program summary

Program title: HContinuumGautchi
Catalogue identifier: AEHD_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHD_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 1233
No. of bytes in distributed program, including test data, etc.: 7405
Distribution format: tar.gz
Programming language: Fortran 90 in fixed format
Computer: AMD processors
Operating system: Linux
RAM: 20 MBytes
Classification: 2.7, 4.5
Nature of problem: The accurate computation of atomic continuum wave functions is very important in many research fields such as strong-field physics and cold-atom physics. Although various algorithms and codes already exist, most of them are applicable and reliable only in a certain range of parameters. We present here an accurate Fortran program for calculating the hydrogenic continuum wave functions over a very wide range of parameters, which meets the needs of most practical applications. The Coulomb phases are also calculated. For any given momentum, radial point, and largest angular momentum number, the code calculates all the angular components at once. The algorithm we adopt has been described in detail by Gautschi [1,2], who suggested a stable minimal solution of general three-term recurrence relations.
Solution method: Minimal solution of three-term recurrence relations, as developed by W. Gautschi [1,2].
Running time: A few seconds to a few minutes, depending on how many different wave functions one needs to calculate.
References:
[1] W. Gautschi, Computational aspects of three-term recurrence relations, SIAM Review 9 (1967) 24.
[2] W. Gautschi, Algorithm 292: Regular Coulomb wave functions, Communications of the ACM 9 (1966) 793.

7.
A new nonlinear gyro-kinetic flux tube code (GKW) for the simulation of micro-instabilities and turbulence in magnetic confinement plasmas is presented in this paper. The code incorporates all the physics effects that can be expected from a state-of-the-art gyro-kinetic simulation code in the local limit: kinetic electrons, electromagnetic effects, collisions, full general geometry with a coupling to an MHD equilibrium code, and E×B shearing. In addition, the physics of plasma rotation has been implemented through a formulation of the gyro-kinetic equation in the co-moving frame. The gyro-kinetic model is five-dimensional and requires a massively parallel approach. GKW has been parallelised using MPI and scales well up to 8192+ cores. The paper presents the set of equations solved, the numerical methods, the code structure, and the essential benchmarks.

Program summary

Program title: GKW
Catalogue identifier: AEES_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEES_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU GPL v3
No. of lines in distributed program, including test data, etc.: 29 998
No. of bytes in distributed program, including test data, etc.: 206 943
Distribution format: tar.gz
Programming language: Fortran 95
Computer: Not computer specific
Operating system: Any for which a Fortran 95 compiler is available
Has the code been vectorised or parallelised?: Yes. The program can efficiently utilise 8192+ processors, depending on the problem and the available computer. 128 processors is reasonable for a typical nonlinear kinetic run on the latest x86-64 machines.
RAM: ~128 MB–1 GB for a linear run; 25 GB for a typical nonlinear kinetic run (30 million grid points)
Classification: 19.8, 19.9, 19.11
External routines: None required, although the functionality of the program is somewhat limited without an MPI implementation (preferably MPI-2) and the FFTW3 library.
Nature of problem: Five-dimensional gyro-kinetic Vlasov equation in general flux tube tokamak geometry with kinetic electrons, electromagnetic effects and collisions.
Solution method: Pseudo-spectral and finite difference with explicit time integration.
Additional comments: The MHD equilibrium code CHEASE [1] is used for the general geometry calculations. This code has been developed at CRPP Lausanne and is not distributed together with GKW, but can be downloaded separately. The geometry module of GKW is based on version 7.1 of CHEASE, which includes the output for Hamada coordinates.
Running time: (On recent x86-64 hardware) ~10 minutes for a short linear problem; 48 hours for a typical nonlinear kinetic run.
References:
[1] H. Lütjens, A. Bondeson, O. Sauter, Comput. Phys. Comm. 97 (1996) 219, http://cpc.cs.qub.ac.uk/summaries/ADDH_v1_0.html.

8.
9.
10.
11.
The QCDMAPT program package facilitates computations within the dispersive approach to Quantum Chromodynamics. The QCDMAPT_F version of this package enables one to perform such computations with Fortran, whereas the previous version was developed for use with the Maple system. The QCDMAPT_F package possesses the same basic features as its previous version: it embodies the calculated explicit expressions for the relevant spectral functions up to the four-loop level, together with the subroutines for the necessary integrals.
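At the one-loop level everything is available in closed form, which makes for a compact illustration of what the package automates at higher loops. The sketch below (ours, not QCDMAPT_F code) evaluates the one-loop timelike coupling of the dispersive approach both from its spectral integral and from the known closed-form result, in units where Λ = 1 and assuming nf = 3; the two numbers agree up to the finite integration cutoff.

```fortran
! One-loop sketch of the dispersive-approach quantities the package
! implements through four loops -- our illustration, not QCDMAPT_F code.
! Units: Lambda = 1; nf = 3 assumed. The timelike coupling is
!   (1/pi) * int_s^inf rho(sigma) dsigma/sigma,
! with the one-loop spectral function rho = (pi/beta0)/(L^2 + pi^2),
! L = ln sigma; the known closed form is used below as a check.
program dispersive_one_loop
  implicit none
  real(8), parameter :: pi = 3.141592653589793d0
  integer, parameter :: n = 1000000
  integer :: i, nf
  real(8) :: beta0, s, lmin, lmax, h, l, numeric, closed

  nf = 3
  beta0 = 11.0d0 - 2.0d0*nf/3.0d0        ! one-loop beta coefficient
  s = 2.0d0                              ! timelike s / Lambda^2

  closed = (0.5d0 - atan(log(s)/pi)/pi) / beta0

  ! trapezoid rule for (1/beta0) * int_{ln s}^{Lmax} dL / (L^2 + pi^2);
  ! the neglected tail beyond Lmax is of order 1/Lmax
  lmin = log(s)
  lmax = 1.0d3
  h = (lmax - lmin) / n
  numeric = 0.5d0*(1.0d0/(lmin**2+pi**2) + 1.0d0/(lmax**2+pi**2))
  do i = 1, n - 1
     l = lmin + i*h
     numeric = numeric + 1.0d0/(l**2 + pi**2)
  end do
  numeric = numeric * h / beta0

  print '(a,2es16.8)', 'closed form vs spectral integral: ', closed, numeric
end program dispersive_one_loop
```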

New version program summary

Program title: QCDMAPT_F
Catalogue identifier: AEGP_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGP_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 10 786
No. of bytes in distributed program, including test data, etc.: 332 329
Distribution format: tar.gz
Programming language: Fortran 77 and higher
Computer: Any which supports Fortran 77
Operating system: Any which supports Fortran 77
Classification: 11.1, 11.5, 11.6
External routines: MATHLIB routine RADAPT (D102) from the CERNLIB Program Library [1]
Catalogue identifier of previous version: AEGP_v1_0
Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 1769
Does the new version supersede the previous version?: No. This version provides an alternative to the previous, Maple, version.
Nature of problem: A central object of the dispersive (or "analytic") approach to Quantum Chromodynamics [2,3] is the so-called spectral function, which can be calculated by making use of the strong running coupling. At the one-loop level the latter has a quite simple form and the relevant spectral function can easily be calculated. However, at higher loop levels the strong running coupling has a rather cumbersome structure. Here, the explicit calculation of the corresponding spectral functions represents a somewhat complicated task (see Section 3 and Appendix B of Ref. [4]), whereas their numerical evaluation requires a lot of computational resources and essentially slows down the overall computation process.
Solution method: The developed package includes the calculated explicit expressions for the relevant spectral functions up to the four-loop level and the subroutines for the necessary integrals.
Reasons for new version: The previous version of the package (Ref. [4]) was developed for use with the Maple system. The new version is developed for the Fortran programming language.
Summary of revisions: The QCDMAPT_F package consists of the main program (QCDMAPT_F.f) and two samples of the file containing the values of input parameters (QCDMAPT_F.i1 and QCDMAPT_F.i2). The main program includes the definitions of the relevant spectral functions and subroutines for the necessary integrals. The main program also provides an example of computation of the values of the (M)APT spacelike/timelike expansion functions for the specified set of input parameters and (as an option) generates the output data files with values of these functions over the given kinematic intervals.
Additional comments: For the proper functioning of the QCDMAPT_F package, the "MATHLIB" CERNLIB library [1] has to be installed.
Running time: The running time of the main program with the sample set of input parameters specified in the file QCDMAPT_F.i2 is about a minute (depends on the CPU).
References:
[1] Subroutine D102 of the "MATHLIB" CERNLIB library, http://cernlib.web.cern.ch/cernlib/mathlib.html, http://wwwasdoc.web.cern.ch/wwwasdoc/shortwrupsdir/d102/top.html.
[2] D.V. Shirkov, I.L. Solovtsov, Phys. Rev. Lett. 79 (1997) 1209; K.A. Milton, I.L. Solovtsov, Phys. Rev. D 55 (1997) 5295; K.A. Milton, I.L. Solovtsov, Phys. Rev. D 59 (1999) 107701; I.L. Solovtsov, D.V. Shirkov, Theor. Math. Phys. 120 (1999) 1220; D.V. Shirkov, I.L. Solovtsov, Theor. Math. Phys. 150 (2007) 132.
[3] A.V. Nesterenko, Phys. Rev. D 62 (2000) 094028; A.V. Nesterenko, Phys. Rev. D 64 (2001) 116009; A.V. Nesterenko, Int. J. Mod. Phys. A 18 (2003) 5475; A.V. Nesterenko, J. Papavassiliou, J. Phys. G 32 (2006) 1025; A.V. Nesterenko, Nucl. Phys. B (Proc. Suppl.) 186 (2009) 207.
[4] A.V. Nesterenko, C. Simolo, Comput. Phys. Comm. 181 (2010) 1769.

12.
13.
HiggsBounds 2.0.0 is a computer code which tests both neutral and charged Higgs sectors of arbitrary models against the current exclusion bounds from the Higgs searches at LEP and the Tevatron. As input, it requires a selection of model predictions, such as Higgs masses, branching ratios, effective couplings and total decay widths. HiggsBounds 2.0.0 then uses the expected and observed topological cross section limits from the Higgs searches to determine whether a given parameter scenario of a model is excluded at the 95% C.L. by those searches. Version 2.0.0 represents a significant extension of the code since its first release (1.0.0). It now includes 28 LEP and 53 Tevatron Higgs search analyses, compared to 11 and 22 in the first release, and many of the Tevatron analyses have been replaced by updates. As a major extension, the code now allows the predictions for (singly) charged Higgs bosons to be confronted with LEP and Tevatron searches. Furthermore, the newly included analyses contain LEP searches for neutral Higgs bosons (H) decaying invisibly or into (non-flavour-tagged) hadrons as well as decay-mode independent searches for neutral Higgs bosons, LEP searches via the production modes τ⁺τ⁻H and b b̄ H, and Tevatron searches via t t̄ H. Also, all Tevatron results presented at ICHEP'10 are included in version 2.0.0. As physics applications of HiggsBounds 2.0.0 we study the allowed Higgs mass range for model scenarios with invisible Higgs decays and we obtain exclusion results for the scalar sector of the Randall–Sundrum model using up-to-date LEP and Tevatron direct search results.

Program summary

Program title: HiggsBounds
Catalogue identifier: AEFF_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFF_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public Licence version 3
No. of lines in distributed program, including test data, etc.: 74 005
No. of bytes in distributed program, including test data, etc.: 1 730 996
Distribution format: tar.gz
Programming language: Fortran 77, Fortran 90 (two code versions are offered)
Classification: 11.1
Catalogue identifier of previous version: AEFF_v1_0
Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 138
External routines: HiggsBounds requires no external routines/libraries. Some sample programs in the distribution require the programs FeynHiggs 2.7.1 or CPsuperH2.2 to be installed.
Does the new version supersede the previous version?: Yes
Nature of problem: Determine whether a parameter point of a given model is excluded or allowed by LEP and Tevatron neutral and charged Higgs boson search results.
Solution method: The most sensitive channel from the LEP and Tevatron searches is determined and subsequently applied to test this parameter point. The test requires as input the model predictions for the Higgs boson masses, branching ratios and ratios of production cross sections with respect to reference values.
Reasons for new version: This version extends the functionality of the previous version.
Summary of revisions: The list of included Higgs searches has been expanded, e.g. by the inclusion of (singly) charged Higgs boson searches. The input required from the user has been extended accordingly.
Restrictions: Assumes that the narrow width approximation is applicable in the model under consideration and that the model does not predict a significant change to the signature of the background processes or the kinematical distributions of the signal cross sections.
Running time: About 0.01 seconds (or less) for one parameter point, using one processor of an Intel Core 2 Quad Q6600 CPU at 2.40 GHz, for sample model scenarios with three Higgs bosons. It depends on the complexity of the Higgs sector (e.g. the number of Higgs bosons and the number of open decay channels) and on the code version.

14.
15.
We describe an implementation to solve Poisson's equation for an isolated system on a unigrid mesh using FFTs. The method solves the equation globally on mesh blocks distributed across multiple processes on a distributed-memory parallel computer. Test results demonstrating the convergence and scaling properties of the implementation are presented. The solver is offered to interested users as the library PSPFFT.
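The core of any FFT Poisson solver is: transform the source, divide each mode by −k², transform back. The 1D periodic toy below sketches that step using FFTW3's legacy Fortran interface (it assumes the include file fftw3.f and linking with -lfftw3); PSPFFT itself works on a distributed 3D mesh and handles the isolated-system boundary condition, which this sketch does not.

```fortran
! 1D periodic toy of the FFT Poisson solve: transform, divide by -k^2,
! transform back. Our sketch, not PSPFFT code: it assumes FFTW3's legacy
! Fortran interface (include file fftw3.f, link with -lfftw3) and periodic
! boundaries, whereas PSPFFT treats an isolated system on a 3D MPI mesh.
program poisson_fft_toy
  implicit none
  include 'fftw3.f'
  integer, parameter :: n = 64
  real(8), parameter :: pi = 3.141592653589793d0
  complex(8) :: rho(n), u(n)
  integer(8) :: plan_f, plan_b
  real(8) :: x, k
  integer :: i, m

  call dfftw_plan_dft_1d(plan_f, n, rho, rho, FFTW_FORWARD,  FFTW_ESTIMATE)
  call dfftw_plan_dft_1d(plan_b, n, u,   u,   FFTW_BACKWARD, FFTW_ESTIMATE)

  do i = 1, n                       ! zero-mean source rho(x) = cos(3x)
     x = 2.0d0*pi*(i-1)/n
     rho(i) = cmplx(cos(3.0d0*x), 0.0d0, kind=8)
  end do

  call dfftw_execute(plan_f)        ! rho now holds its Fourier transform

  u(1) = (0.0d0, 0.0d0)             ! k = 0 mode fixed to zero
  do i = 2, n
     m = i - 1
     if (m > n/2) m = m - n         ! signed integer wave numbers
     k = real(m, 8)
     u(i) = -rho(i) / k**2          ! spectral solve of u'' = rho
  end do

  call dfftw_execute(plan_b)
  u = u / n                         ! FFTW transforms are unnormalized

  ! exact solution of u'' = cos(3x) is -cos(3x)/9; compare at x = 0
  print '(a,2es14.6)', 'u(x=0), exact: ', real(u(1)), -1.0d0/9.0d0
  call dfftw_destroy_plan(plan_f)
  call dfftw_destroy_plan(plan_b)
end program poisson_fft_toy
```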

Program summary

Program title: PSPFFT
Catalogue identifier: AEJK_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJK_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 110 243
No. of bytes in distributed program, including test data, etc.: 16 332 181
Distribution format: tar.gz
Programming language: Fortran 95
Computer: Any architecture with a Fortran 95 compiler, distributed-memory clusters
Operating system: Linux, Unix
Has the code been vectorized or parallelized?: Yes, using MPI. An arbitrary number of processors may be used (subject to some constraints). The program has been tested on from 1 up to ~13 000 processors.
RAM: Depends on the problem size, approximately 170 MBytes for 48³ cells per process
Classification: 4.3, 6.5
External routines: MPI (http://www.mcs.anl.gov/mpi/), FFTW (http://www.fftw.org), Silo (https://wci.llnl.gov/codes/silo/) (only necessary for running the test problem)
Nature of problem: Solving Poisson's equation globally on a unigrid mesh distributed across multiple processes on a distributed-memory system.
Solution method: Numerical solution using the multidimensional discrete Fourier transform in a parallel Fortran 95 code.
Unusual features: This code can be compiled as a library to be readily linked and used as a black-box Poisson solver with other codes.
Running time: Depends on the size of the problem, but typically less than 1 second per solve.

16.
We present SUSY_FLAVOR — a Fortran 77 program that calculates important leptonic and semi-leptonic low-energy observables in the general R-parity conserving MSSM. For a set of input MSSM parameters, the code gives predictions for the K⁰–K̄⁰, D⁰–D̄⁰ and B⁰–B̄⁰ mixing parameters; the B→X_s γ, B_{s,d}→ℓ⁺ℓ⁻ and B→τν decay branching ratios; and the electric dipole moments of the leptons and the neutron. All these quantities are calculated at the one-loop level (with some higher-order QCD corrections included) in the exact sfermion mass eigenbasis, without resorting to mass insertion approximations. The program can be obtained from http://www.fuw.edu.pl/susy_flavor.

Program summary

Program title: SUSY_FLAVOR
Catalogue identifier: AEGV_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGV_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 14 603
No. of bytes in distributed program, including test data, etc.: 82 126
Distribution format: tar.gz
Programming language: Fortran 77
Computer: PCs and workstations
Operating system: Any, tested on Linux
Classification: 11.6
Nature of problem: Predicting CP-violating observables, meson mixing parameters and branching ratios for a set of rare processes in the general R-parity conserving MSSM.
Solution method: We use standard quantum field theoretical methods to calculate the Wilson coefficients in the MSSM at one loop, including QCD corrections at higher orders when this is necessary and possible. The input parameters can be read from an external file in SLHA format.
Restrictions: The results apply only to the case of the MSSM with R-parity conservation.
Running time: For a single parameter set, approximately 1 s in double precision on a PowerBook Mac G4.

17.
HDSS (Huge Dense Linear System Solver) is a Fortran Application Programming Interface (API) that facilitates the parallel solution of very large dense linear systems for scientists and engineers. The API exploits parallelism to solve the systems efficiently on a wide range of parallel platforms, from clusters of processors to massively parallel multiprocessors, and it uses out-of-core strategies that leverage secondary memory in order to solve huge linear systems with on the order of 100 000 unknowns. The API is based on the parallel linear algebra library PLAPACK and on its Out-Of-Core (OOC) extension POOCLAPACK. Both PLAPACK and POOCLAPACK use the Message Passing Interface (MPI) as the communication layer and BLAS to perform the local matrix operations. The API provides a friendly interface to users, hiding almost all the technical aspects related to the parallel execution of the code and the use of secondary memory to solve the systems. In particular, the API can automatically select the best way to store and solve the systems, depending on the dimension of the system, the number of processes and the main memory of the platform. Experimental results on several parallel platforms report high performance, reaching more than 1 TFLOP with 64 cores to solve a system with more than 200 000 equations and more than 10 000 right-hand side vectors.
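Out-of-core dense solvers are organized around a blocked LU factorization: factor a panel, form the corresponding block row of U, then apply one big rank-b update to the trailing matrix, which is the part that streams through memory (or disk). The toy below (ours, in-core, without pivoting, and with Fortran intrinsics in place of BLAS) sketches that structure; HDSS delegates the real work to PLAPACK/POOCLAPACK.

```fortran
! In-core toy of the blocked right-looking LU factorization that
! out-of-core solvers organize by panels -- our sketch, without pivoting
! and with Fortran intrinsics standing in for BLAS; HDSS itself relies
! on PLAPACK/POOCLAPACK kernels.
program blocked_lu_toy
  implicit none
  integer, parameter :: n = 8, b = 2
  real(8) :: a(n,n), x(n), rhs(n)
  integer :: k, kend, i, j

  call random_number(a)             ! diagonally dominant => no pivoting
  do i = 1, n
     a(i,i) = a(i,i) + n
  end do
  call random_number(x)             ! reference solution
  rhs = matmul(a, x)

  do k = 1, n, b
     kend = min(k + b - 1, n)
     ! 1) unblocked LU of the panel (columns k..kend)
     do j = k, kend
        a(j+1:n, j) = a(j+1:n, j) / a(j, j)
        do i = j + 1, kend
           a(j+1:n, i) = a(j+1:n, i) - a(j+1:n, j) * a(j, i)
        end do
     end do
     ! 2) forward-substitute to form the U block row
     do j = kend + 1, n
        do i = k + 1, kend
           a(i, j) = a(i, j) - dot_product(a(i, k:i-1), a(k:i-1, j))
        end do
     end do
     ! 3) rank-b update of the trailing matrix: in an out-of-core solver
     !    this is the step that streams blocks between disk and memory
     if (kend < n) a(kend+1:n, kend+1:n) = a(kend+1:n, kend+1:n) &
          - matmul(a(kend+1:n, k:kend), a(k:kend, kend+1:n))
  end do

  do i = 2, n                       ! solve L y = rhs (unit diagonal)
     rhs(i) = rhs(i) - dot_product(a(i, 1:i-1), rhs(1:i-1))
  end do
  do i = n, 1, -1                   ! solve U x = y
     rhs(i) = (rhs(i) - dot_product(a(i, i+1:n), rhs(i+1:n))) / a(i, i)
  end do
  print '(a,es10.2)', 'max |x - x_ref| = ', maxval(abs(rhs - x))
end program blocked_lu_toy
```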

New version program summary

Program title: Huge Dense System Solver (HDSS)
Catalogue identifier: AEHU_v1_1
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHU_v1_1.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 87 062
No. of bytes in distributed program, including test data, etc.: 1 069 110
Distribution format: tar.gz
Programming language: Fortran 90, C
Computer: Parallel architectures: multiprocessors, computer clusters
Operating system: Linux/Unix
Has the code been vectorized or parallelized?: Yes, includes MPI primitives.
RAM: Tested for up to 190 GB
Classification: 6.5
External routines: MPI (http://www.mpi-forum.org/), BLAS (http://www.netlib.org/blas/), PLAPACK (http://www.cs.utexas.edu/~plapack/), POOCLAPACK (ftp://ftp.cs.utexas.edu/pub/rvdg/PLAPACK/pooclapack.ps) (code for PLAPACK and POOCLAPACK is included in the distribution)
Catalogue identifier of previous version: AEHU_v1_0
Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 533
Does the new version supersede the previous version?: Yes
Nature of problem: Huge-scale dense systems of linear equations, Ax = B, beyond standard LAPACK capabilities.
Solution method: The linear systems are solved by means of parallelized routines based on the LU factorization, using efficient secondary-storage algorithms when the available main memory is insufficient.
Reasons for new version: In many applications we need to guarantee high accuracy in the solution of very large linear systems, which we can do by using double-precision arithmetic.
Summary of revisions: Version 1.1:
• Can be used to solve linear systems using double-precision arithmetic.
• New version of the initialization routine. The user can choose the kind of arithmetic and the values of several parameters of the environment.
Running time: About 5 hours to solve a system with more than 200 000 equations and more than 10 000 right-hand side vectors using double-precision arithmetic on an eight-node commodity cluster with a total of 64 Intel cores.

18.
We discuss a program suite for simulating Quantum Chromodynamics on a 4-dimensional space–time lattice. The basic Hybrid Monte Carlo algorithm is introduced and a number of algorithmic improvements are explained. We then discuss the implementations of these concepts as well as our parallelisation strategy in the actual simulation code. Finally, we provide a user guide to compile and run the program.
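The basic Hybrid Monte Carlo algorithm mentioned above is easiest to see for a single degree of freedom: refresh the momentum, integrate Hamilton's equations with a leapfrog, and accept or reject on the change of the fictitious energy. The toy below is our sketch of that skeleton (written in Fortran for uniformity with the other summaries here; tmLQCD itself is C code acting on gauge fields, with mass preconditioning and multiple time scales).

```fortran
! Toy Hybrid Monte Carlo for one degree of freedom with action
! S(phi) = phi^2/2 + 0.1*phi^4 -- our sketch of the algorithm's skeleton,
! not tmLQCD code: momentum refresh, leapfrog trajectory, and a
! Metropolis accept/reject step that corrects the integration error.
program hmc_toy
  implicit none
  integer, parameter :: ntraj = 20000, nstep = 10
  real(8), parameter :: dt = 0.1d0
  real(8) :: phi, p, phi0, h0, h1, r, g1, g2, mean2
  integer :: itraj, istep, naccept

  phi = 0.0d0; naccept = 0; mean2 = 0.0d0
  call random_seed()
  do itraj = 1, ntraj
     ! refresh the momentum from a Gaussian (Box-Muller)
     call random_number(g1); call random_number(g2)
     p = sqrt(-2.0d0*log(1.0d0 - g1)) * cos(8.0d0*atan(1.0d0)*g2)
     phi0 = phi
     h0 = 0.5d0*p*p + action(phi)
     ! leapfrog: half-step momentum, full steps, half-step momentum
     p = p - 0.5d0*dt*force(phi)
     do istep = 1, nstep
        phi = phi + dt*p
        if (istep < nstep) p = p - dt*force(phi)
     end do
     p = p - 0.5d0*dt*force(phi)
     h1 = 0.5d0*p*p + action(phi)
     ! the Metropolis step makes the algorithm exact despite the finite dt
     call random_number(r)
     if (r < exp(h0 - h1)) then
        naccept = naccept + 1
     else
        phi = phi0
     end if
     mean2 = mean2 + phi*phi
  end do
  print '(a,f6.3,a,f8.4)', 'acceptance ', real(naccept, 8)/ntraj, &
        ', <phi^2> = ', mean2/ntraj
contains
  real(8) function action(x)
    real(8), intent(in) :: x
    action = 0.5d0*x*x + 0.1d0*x**4
  end function action
  real(8) function force(x)        ! dS/dx
    real(8), intent(in) :: x
    force = x + 0.4d0*x**3
  end function force
end program hmc_toy
```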

Program summary

Program title: tmLQCD
Catalogue identifier: AEEH_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEH_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public Licence (GPL)
No. of lines in distributed program, including test data, etc.: 122 768
No. of bytes in distributed program, including test data, etc.: 931 042
Distribution format: tar.gz
Programming language: C and MPI
Computer: Any
Operating system: Any with a standard C compiler
Has the code been vectorised or parallelised?: Yes. One or, optionally, any even number of processors may be used. Tested with up to 32 768 processors.
RAM: No typical values available
Classification: 11.5
External routines: LAPACK [1] and LIME [2] libraries
Nature of problem: Quantum Chromodynamics
Solution method: Markov chain Monte Carlo using the Hybrid Monte Carlo algorithm with mass preconditioning and multiple time scales [3]. Iterative solver for large systems of linear equations.
Restrictions: Restricted to an even number of (not necessarily mass-degenerate) quark flavours in the Wilson or Wilson twisted mass formulation of lattice QCD.
Running time: Depending on the problem size, the architecture and the input parameters, from a few minutes to weeks.
References:
[1] http://www.netlib.org/lapack/.
[2] USQCD, http://usqcd.jlab.org/usqcd-docs/c-lime/.
[3] C. Urbach, K. Jansen, A. Shindler, U. Wenger, Comput. Phys. Commun. 174 (2006) 87, hep-lat/0506011.

19.
We describe QSATS, a parallel code for performing variational path integral simulations of the quantum mechanical ground state of monatomic solids. QSATS is designed to treat Boltzmann quantum solids, in which individual atoms are permanently associated with distinguishable crystal lattice sites and undergo large-amplitude zero-point motions around these sites. We demonstrate the capabilities of QSATS by using it to compute the total energy and potential energy of hexagonal close-packed solid ⁴He at a fixed density.

Program summary

Program title: QSATS
Catalogue identifier: AEJE_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJE_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 7329
No. of bytes in distributed program, including test data, etc.: 61 685
Distribution format: tar.gz
Programming language: Fortran 77
Computer: QSATS should execute on any distributed parallel computing system that has the Message Passing Interface (MPI) [1] libraries installed.
Operating system: Unix or Linux
Has the code been vectorized or parallelized?: Yes, parallelized using MPI [1].
RAM: The memory requirements of QSATS depend on both the number of atoms in the crystal and the number of replicas in the variational path integral chain. For parameter sets A and C (described in the long write-up), approximately 4.5 Mbytes and 12 Mbytes, respectively, are required for data storage by QSATS (exclusive of the executable code).
Classification: 7.7, 16.13
External routines: Message Passing Interface (MPI) [1]
Nature of problem: QSATS simulates the quantum mechanical ground state for a monatomic crystal characterized by large-amplitude zero-point motions of individual (distinguishable) atoms around their nominal lattice sites.
Solution method: QSATS employs variational path integral quantum Monte Carlo techniques to project the system's ground state wave function out of a suitably chosen trial wave function.
Restrictions: QSATS neglects quantum statistical effects associated with the exchange of identical particles. As distributed, QSATS assumes that the potential energy function for the crystal is a pairwise additive sum of atom–atom interactions.
Additional comments: An auxiliary program, ELOC, is provided that uses the output generated by QSATS to compute both the crystal's ground state energy and the expectation value of the crystal's potential energy. End users can modify ELOC as needed to compute the expectation value of other coordinate-space observables.
Running time: QSATS requires roughly 3 hours to run a simulation using parameter set A on a cluster of 12 Xeon processors with clock speed 2.8 GHz. Roughly 15 hours are needed to run a simulation using parameter set C on the same cluster.
References:
[1] For information about MPI, visit http://www.mcs.anl.gov/mpi/.

20.
A B-spline version of a Hartree–Fock program is described. The usual differential equations are replaced by systems of non-linear equations and generalized eigenvalue problems of the form (H^a − ε_aa B)P_a = 0, where a designates the orbital. When orbital a is required to be orthogonal to a fixed orbital, this form assumes that a projection operator has been applied to eliminate the Lagrange multiplier. When two orthogonal orbitals are both varied, the energy must also be stationary with respect to orthogonal transformations. At such a stationary point, the matrix of Lagrange multipliers, ε_ab = (P_b|H^a|P_a), is symmetric, and the off-diagonal Lagrange multipliers may again be eliminated through projection operators. For multiply occupied shells, convergence problems are avoided by the use of a single-orbital Newton–Raphson method. A self-consistent field procedure based on these two possibilities exhibits excellent convergence. A Newton–Raphson method for updating all orbitals simultaneously has better numerical properties and a more rapid rate of convergence but requires more computer processing time. Both ground and excited states may be computed using a default universal grid. Output from a calculation for Al 3s²3p shows the improvement in accuracy that can be achieved by mapping results from low-order splines on a coarse grid to splines of higher order on a refined grid. The program distribution contains output from additional test cases.
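The generalized eigenvalue problem (H^a − ε_aa B)P_a = 0 is exactly the form handled by LAPACK's dsygv driver. The sketch below is ours, not SPHF code: it needs linking against LAPACK, and the small matrices are stand-ins for the banded B-spline Hamiltonian and overlap matrices that SPHF assembles from spline integrals.

```fortran
! Sketch of solving (H - eps*B)P = 0 with LAPACK's dsygv -- our toy,
! not SPHF code. H and B here are small stand-in matrices; SPHF builds
! them from B-spline integrals for each orbital. Link with -llapack.
program gev_toy
  implicit none
  integer, parameter :: n = 3, lwork = 64
  real(8) :: h(n,n), bmat(n,n), w(n), work(lwork)
  integer :: info, i

  ! toy symmetric H and symmetric positive-definite overlap B
  h = reshape((/ 2.0d0, -1.0d0,  0.0d0, &
                -1.0d0,  2.0d0, -1.0d0, &
                 0.0d0, -1.0d0,  2.0d0 /), (/ n, n /))
  bmat = 0.0d0
  do i = 1, n
     bmat(i,i) = 1.0d0
  end do
  bmat(1,2) = 0.2d0; bmat(2,1) = 0.2d0   ! B-spline overlaps are banded

  ! itype=1 solves H*P = eps*B*P; on exit h holds the eigenvectors
  call dsygv(1, 'V', 'U', n, h, n, bmat, n, w, work, lwork, info)
  if (info /= 0) stop 'dsygv failed'
  print '(a,3f12.6)', 'eigenvalues eps: ', w
end program gev_toy
```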

Program summary

Program title: SPHF version 1.00
Catalogue identifier: AEIJ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIJ_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 13 925
No. of bytes in distributed program, including test data, etc.: 714 254
Distribution format: tar.gz
Programming language: Fortran 95
Computer: Any system with a Fortran 95 compiler. Tested on Intel Xeon CPU X5355, 2.66 GHz
Operating system: Any system with a Fortran 95 compiler
Classification: 2.1
External routines: LAPACK (http://www.netlib.org/lapack/)
Nature of problem: Non-relativistic Hartree–Fock wavefunctions are determined for atoms in a bound state; these may be used to predict a variety of atomic properties.
Solution method: The radial functions are expanded in a B-spline basis [1]. The variational principle applied to an energy functional that includes Lagrange multipliers for orthonormality constraints defines the Hartree–Fock matrix for each orbital. Orthogonal transformations symmetrize the matrix of Lagrange multipliers, and projection operators eliminate the off-diagonal Lagrange multipliers to yield a generalized eigenvalue problem. For multiply occupied shells, a single-orbital Newton–Raphson (NR) method is used to speed convergence with very little extra computational effort. In a final step, all orbitals are updated simultaneously by a Newton–Raphson method to improve numerical accuracy.
Restrictions: There is no restriction on calculations for the average energy of a configuration. As in the earlier HF96 program [2], only one or two open shells are allowed when results are required for a specific LS coupling. These include:
1. (nl)^N ns, where l = 0, 1, 2, 3
2. (np)^N nl, where l = 0, 1, 2, 3, …
3. (nd)(nf)
Unusual features: Unlike HF96, the present program is a Fortran 90/95 program that makes no use of COMMON blocks. It is assumed that LAPACK libraries are available.
Running time: For Ac 7s²7p the execution time varied from 6.9 s to 9.1 s, depending on the iteration method.
References:
[1] C. Froese Fischer, Adv. At. Mol. Phys. 55 (2008) 235.
[2] G. Gaigalas, C. Froese Fischer, Comput. Phys. Commun. 98 (1996) 255.
