Similar Documents
20 similar documents found (search time: 110 ms)
1.
Reservoir numerical simulation, like many other scientific computing problems, requires the solution of large sparse linear systems. In iterative methods for such systems, sparse matrix-vector multiplication (SpMV) is one of the core kernels that determine computational efficiency. As hardware has become heterogeneous, scientific computing has moved from single-core and multi-core CPU architectures to architectures that combine multi-core CPUs with many-core accelerators (GPUs, MIC, etc.), and the efficiency of SpMV depends closely on both the sparse storage format and the hardware. Targeting the sparsity pattern of the Jacobian matrices that commonly arise in reservoir simulation, and exploiting coalesced memory access and concurrent execution on GPU cores, this paper designs a BHYB block storage format and a corresponding thread-group parallelization strategy that meets the requirements of the reservoir simulator's linear solver. Numerical experiments show that SpMV based on this format achieves speedups of up to 19x over a serial BCSR-based SpMV and runs 30% to 80% faster than the HYB format, the most efficient format in the cuSPARSE library. The BHYB scheme for storing block matrices on the GPU and the thread-group parallelization strategy can also serve as a reference for designing and optimizing kernels in other GPU programs.
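The BHYB layout itself is specific to the paper, but the coalesced-access idea it builds on can be illustrated with the ELL part of a standard hybrid format: values are stored column-major so that consecutive threads (one per row) touch consecutive addresses. A minimal sketch, assuming zero-padded, column-major ELL arrays of width max_nnz (not the paper's BHYB code):

```cuda
// Minimal ELL-style SpMV: one thread per row, column-major storage so that
// neighbouring threads in a warp read neighbouring memory locations (coalesced).
__global__ void spmv_ell(int n_rows, int max_nnz,
                         const int    *col,   // n_rows * max_nnz, column-major, padded with -1
                         const double *val,   // n_rows * max_nnz, column-major, padded with 0
                         const double *x, double *y)
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= n_rows) return;

    double sum = 0.0;
    for (int k = 0; k < max_nnz; ++k) {
        int idx = k * n_rows + row;          // coalesced access pattern
        int c   = col[idx];
        if (c >= 0) sum += val[idx] * x[c];
    }
    y[row] = sum;
}
```

A launch such as spmv_ell<<<(n_rows + 255) / 256, 256>>>(...) assigns one thread per row; the BHYB format described in the abstract additionally exploits the small dense blocks of the reservoir Jacobian, letting a thread group handle one block row.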

2.
In engineering practice, many problems reduce to the numerical solution of partial differential equations or systems of them. The main numerical methods, finite difference, finite element, and finite volume, mostly discretize the equations into a system of linear equations whose solution gives the numerical solution of the original problem. The coefficient matrix of this linear system is usually very large and very sparse; it occupies a great deal of memory and makes the system hard to solve. To address this, this paper studies compressed storage of large sparse matrices: only nonzero entries are stored, which reduces memory consumption and keeps zero entries out of the computation, improving efficiency. Specifically, during matrix generation an orthogonal (cross-) linked list is used, so a nonzero entry can be inserted in constant time; during the solve, compressed row (or column) storage is used, which both saves memory and improves solver efficiency. In the experiments, the finite difference method is used to solve the Laplace equation and the finite element method is used to compute the stress distribution over an annular cross-section; the coefficient matrices of the resulting large sparse linear systems are stored with the cross-linked list and compressed row (column) formats and solved with both direct and iterative methods. The results show that for both structured and unstructured sparse matrices, compressed storage not only greatly reduces memory usage but also significantly improves solver efficiency.
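As an illustration of the row-compressed storage used in the solve phase, here is a minimal CSR (compressed sparse row) container with a serial matrix-vector product; the field names are illustrative, not the paper's:

```cuda
#include <vector>

// Minimal CSR container: only the nonzero entries are stored,
// O(n + nnz) memory instead of O(n^2) for the dense matrix.
struct CSRMatrix {
    int n_rows = 0;
    std::vector<int>    row_ptr;  // size n_rows + 1: start of each row in col/val
    std::vector<int>    col;      // column index of each nonzero
    std::vector<double> val;      // value of each nonzero
};

// y = A * x, touching only the stored nonzeros.
void csr_spmv(const CSRMatrix &A, const std::vector<double> &x, std::vector<double> &y)
{
    y.assign(A.n_rows, 0.0);
    for (int i = 0; i < A.n_rows; ++i)
        for (int k = A.row_ptr[i]; k < A.row_ptr[i + 1]; ++k)
            y[i] += A.val[k] * x[A.col[k]];
}
```

The orthogonal linked list used during assembly keeps per-row and per-column links for every nonzero, so a new entry produced by the discretization can be spliced in without shifting existing data; once the matrix is final it is converted to the compressed format for the solver.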

3.
Exploiting the large-scale sparsity and symmetry of the finite element global stiffness matrix, a fully sparse storage structure together with a minimum fill-in ordering algorithm is adopted to minimize memory consumption. To save computation time, a symbolic LU factorization is applied to the global stiffness matrix, which greatly reduces data look-ups during the numerical solve. Combining the fully sparse storage structure with symbolic LU factorization substantially improves the efficiency of solving large sparse linearized systems of equations. Numerical examples show that the algorithm is advantageous in both time and storage, reliable and efficient, and suitable for solving finite element linear systems.

4.
A GPU-accelerated Gaussian elimination method for solving linear systems is presented. The augmented matrix formed by the coefficient matrix and the right-hand-side vector is stored in a two-dimensional four-channel texture, within which the normalization and elimination steps are performed. A new texture reduction algorithm is proposed that does not require the texture side length to be a power of two; it is applied in the partial-pivoting step of Gaussian elimination to search for the pivot and determine the pivot row. These algorithms are implemented with the OpenGL Shading Language so that Gaussian elimination runs on the graphics processor. Compared with a CPU-based implementation, the GPU version becomes increasingly faster as the number of unknowns grows, confirming that the GPU can accelerate the solution of linear systems.
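The paper implements elimination with OpenGL shaders over a four-channel texture; the same elimination step is easier to read as a CUDA sketch (my own illustration, not the paper's code). Splitting the factor computation and the row updates into two kernels avoids any thread reading a value another thread is overwriting:

```cuda
// Step k of GPU Gaussian elimination on the row-major (n x (n+1)) augmented matrix a.
// Assumes the pivot row swap for step k has already been performed.

// Kernel 1: one thread per row below the pivot computes its elimination factor.
__global__ void compute_factors(const double *a, double *factor, int n, int k)
{
    int row = blockIdx.x * blockDim.x + threadIdx.x + k + 1;
    if (row >= n) return;
    int w = n + 1;
    factor[row] = a[row * w + k] / a[k * w + k];
}

// Kernel 2: one thread per updated entry subtracts factor * (pivot row entry).
__global__ void eliminate(double *a, const double *factor, int n, int k)
{
    int row = blockIdx.y * blockDim.y + threadIdx.y + k + 1;
    int col = blockIdx.x * blockDim.x + threadIdx.x + k;
    if (row >= n || col > n) return;
    int w = n + 1;
    a[row * w + col] -= factor[row] * a[k * w + col];
}
```

The pivot search itself is a max-reduction over column k, which is where the paper's non-power-of-two texture reduction comes in.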

5.
A GPU hardware-accelerated implementation of the radiosity method (cited by 2)
A new GPU (graphics processing unit) based radiosity method is proposed. Exploiting the parallel computing power of programmable GPUs, both the form factor computation and the solution of the linear system in the radiosity method are carried out entirely on programmable graphics hardware. This avoids the CPU involvement required by earlier GPU-based radiosity methods and bypasses the bottleneck of data exchange between main memory and GPU texture memory. In the hemicube-based form factor computation and rendering stage, the traversal, classification, and accumulation steps are solved with GPU hardware acceleration. In addition, the method uses a new scheme for storing matrices and vectors on the GPU and solves the linear system quickly with a GPU implementation of the Jacobi iteration. Experimental results show that the method computes and renders radiosity quickly and effectively.
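The linear-system part of the method is a GPU Jacobi iteration. As a generic sketch (assuming a dense row-major system A x = b for illustration; the paper keeps the data in textures), one Jacobi sweep is:

```cuda
// One Jacobi sweep for a dense n x n system A x = b (row-major A):
// x_new[i] = (b[i] - sum_{j != i} A[i][j] * x_old[j]) / A[i][i].
__global__ void jacobi_sweep(const double *A, const double *b,
                             const double *x_old, double *x_new, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    double sigma = 0.0;
    for (int j = 0; j < n; ++j)
        if (j != i) sigma += A[i * n + j] * x_old[j];
    x_new[i] = (b[i] - sigma) / A[i * n + i];
}
```

The host swaps x_old and x_new between sweeps and stops once the residual is small enough; in classical radiosity the system (I - ρF)B = E is diagonally dominant, so the Jacobi iteration converges.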

6.
The incomplete Cholesky factorization preconditioned conjugate gradient (ICCG) method is an effective method for solving large sparse symmetric positive definite linear systems. However, each ICCG iteration requires solving two sparse triangular systems, and the inherently sequential nature of sparse triangular solves becomes the bottleneck for parallelizing ICCG on the GPU. An effective GPU-accelerated approach to the sparse triangular solves is presented. To increase multi-threaded parallelism, level scheduling is applied to the sparse triangular matrices produced by the incomplete Cholesky factorization. To further improve parallel performance, the coefficient matrix is reordered with the approximate minimum degree (AMD) algorithm before level scheduling and the triangular matrices are sorted by level afterwards, which reduces the number of levels produced by level scheduling and optimizes the GPU memory access pattern of the triangular solves. Numerical experiments show that, compared with an ICCG implementation based on NVIDIA cuSPARSE, these techniques achieve an average performance improvement of more than 100%.
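Level scheduling assigns each unknown of the triangular solve to the earliest level at which all unknowns it depends on are already available; all rows in a level can then be processed by one parallel kernel launch. A host-side sketch for a lower triangular matrix in CSR (illustrative, not the paper's code):

```cuda
#include <vector>
#include <algorithm>

// Level scheduling for the forward solve L y = b with L lower triangular in CSR.
// level[i] = 1 + max(level[j]) over the off-diagonal entries j < i of row i;
// rows that end up in the same level are mutually independent.
int level_schedule(int n, const std::vector<int> &row_ptr,
                   const std::vector<int> &col, std::vector<int> &level)
{
    level.assign(n, 0);
    int n_levels = 0;
    for (int i = 0; i < n; ++i) {
        int lev = 0;
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
            if (col[k] < i)                      // dependency on an earlier unknown
                lev = std::max(lev, level[col[k]] + 1);
        level[i] = lev;
        n_levels = std::max(n_levels, lev + 1);
    }
    return n_levels;  // fewer levels => more rows per kernel launch => more parallelism
}
```

The AMD reordering applied beforehand tends to shorten the dependency chains (fewer levels), and sorting the rows by level afterwards makes each level contiguous, which is what improves the GPU memory access pattern.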

7.
An optimized method for computing principal eigenvectors of large sparse matrices (cited by 1)
Computing the principal eigenvectors of a matrix (principal eigenvectors computing, PEC) is an important problem in scientific and engineering computing. With the rise of general-purpose computing on graphics processing units (GPGPU), using GPUs to accelerate PEC for large sparse matrices has attracted wide attention. This paper analyzes the performance bottlenecks of PEC from the perspectives of both application characteristics and GPU architecture, proposes a GPU-oriented sparse matrix storage format, GPU-ELL, together with a GPU thread mapping and optimization strategy, and designs a corresponding optimized PEC algorithm. Experiments on an ATI Radeon HD 5850 show speedups of up to about 200x over a traditional CPU implementation and about 2x over an existing GPU implementation.

8.
伍世刚, 钟诚. 《计算机应用》(Journal of Computer Applications), 2014, 34(7): 1857-1861
According to the capacity of each cache level, the population individuals and ant individuals held in CPU main memory are partitioned across the L1, L2, and L3 caches to reduce data transfer overhead between storage levels during parallel computation. Between the CPU and GPU, data are transferred asynchronously and only partially (as needed), and multiple GPU kernels execute asynchronously in multiple streams. The number of threads per GPU block is set to a multiple of 16, shared memory is divided into banks in multiples of 32, frequently accessed read-only parameters such as the crossover and mutation probabilities are placed in GPU constant memory, and the large read-only data structures (the input string matrix and the overlap-length matrix) are bound to GPU texture memory. On this basis, a computation-, storage-, and communication-efficient parallel algorithm is designed and implemented in which a multi-core CPU and a GPU cooperate to solve the shortest common superstring problem. Experiments on shortest common superstring instances of various sizes show that the cooperative multi-core CPU/GPU parallel algorithm is more than 70 times faster than the serial algorithm.
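Two of the CUDA techniques mentioned, keeping frequently read parameters in constant memory and overlapping transfers with kernel execution through streams, can be sketched as follows (the kernel and all names are hypothetical placeholders, not the paper's code):

```cuda
#include <cuda_runtime.h>
#include <vector>
#include <algorithm>

__constant__ float d_cross_prob;   // read-only GA parameters kept in constant memory
__constant__ float d_mut_prob;

__global__ void evolve_chunk(const int *pop, int *out, int chunk_len)   // hypothetical kernel
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < chunk_len)
        // placeholder for the real crossover/mutation work; it reads the
        // constant-memory parameters the way the real kernel would
        out[i] = (d_cross_prob > 0.5f) ? pop[i] : -pop[i];
}

void run_async(const int *h_pop, int *h_out, int n, int n_streams)
{
    float cross = 0.8f, mut = 0.05f;
    cudaMemcpyToSymbol(d_cross_prob, &cross, sizeof(float));
    cudaMemcpyToSymbol(d_mut_prob,   &mut,   sizeof(float));

    int *d_pop, *d_out;
    cudaMalloc(&d_pop, n * sizeof(int));
    cudaMalloc(&d_out, n * sizeof(int));

    std::vector<cudaStream_t> streams(n_streams);
    for (auto &s : streams) cudaStreamCreate(&s);

    // Chunks assigned to different streams overlap their copy-in, kernel and copy-out.
    // For the async copies to actually overlap, h_pop/h_out should be pinned (cudaMallocHost).
    int chunk = (n + n_streams - 1) / n_streams;
    for (int s = 0; s < n_streams; ++s) {
        int off = s * chunk;
        int len = std::min(chunk, n - off);
        if (len <= 0) break;
        cudaMemcpyAsync(d_pop + off, h_pop + off, len * sizeof(int),
                        cudaMemcpyHostToDevice, streams[s]);
        evolve_chunk<<<(len + 255) / 256, 256, 0, streams[s]>>>(d_pop + off, d_out + off, len);
        cudaMemcpyAsync(h_out + off, d_out + off, len * sizeof(int),
                        cudaMemcpyDeviceToHost, streams[s]);
    }
    cudaDeviceSynchronize();
    for (auto &s : streams) cudaStreamDestroy(s);
    cudaFree(d_pop); cudaFree(d_out);
}
```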

9.
邵桢, 蔡红星, 徐春风. 《计算机工程》(Computer Engineering), 2010, 36(24): 278-280
Using the graphics processing unit (GPU) as the main computing core, the finite-difference time-domain (FDTD) method is applied to solve Maxwell's equations in electromagnetics quickly. Based on an analysis of how FDTD solves Maxwell's curl equations directly in the time domain, an FDTD simulation algorithm is given. Because the GPU can efficiently raise FDTD simulation speed, it is used to address the huge computational cost of the FDTD algorithm. Exploiting the GPU's processing power in FDTD computation makes longer pulse durations and the solution and simulation of very large models possible, completing extremely large simulations within acceptable time. Results computed on the CPU and on the GPU show that the GPU-based FDTD simulation algorithm offers both high accuracy and high efficiency.
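The FDTD update maps naturally onto one GPU thread per grid point. A one-dimensional sketch of the leapfrog E/H updates (a generic Yee scheme with pre-scaled coefficients, not the paper's code):

```cuda
// 1D FDTD leapfrog updates (Yee scheme), one thread per grid point.
// ce and ch are the pre-scaled coefficients dt/(eps*dx) and dt/(mu*dx).
__global__ void update_h(float *hy, const float *ez, int nx, float ch)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < nx - 1)
        hy[i] += ch * (ez[i + 1] - ez[i]);
}

__global__ void update_e(float *ez, const float *hy, int nx, float ce)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i > 0 && i < nx - 1)
        ez[i] += ce * (hy[i] - hy[i - 1]);
}
```

Each time step is just two kernel launches over the whole grid, and every thread reads only its immediate neighbours, so global memory access is coalesced; this is where the GPU speedup over the CPU comes from.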

10.
Solving large sparse linear systems on the GPU is studied, and a blocked sparse matrix storage format, HMEC (hybrid multiple ELL and CSR), is proposed. The storage structure of the coefficient matrix is optimized by reordering, the matrix is partitioned into blocks stored in a given proportion, and the ELL and CSR formats are combined to suit the characteristics of the different blocks. Large sparse linear systems are then solved with the incomplete-LU preconditioned BiCGStab method for nonsymmetric matrices and the incomplete-Cholesky preconditioned conjugate gradient method for symmetric positive definite matrices. Experiments show that storing sparse matrices in the HMEC format and implementing the two methods via GPU kernel calls yields speedups of up to 31.89% and 17.50%, respectively, over implementations based on other storage formats.
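The core decision in any ELL/CSR hybrid is the per-row width K: rows whose nonzeros fit in K columns go into the regular, coalesced ELL block, and the remaining entries of longer rows spill into a CSR part. A host-side sketch of that split decision (illustrative, not the HMEC code):

```cuda
#include <vector>
#include <algorithm>

// Choose the ELL width K so that a given fraction (e.g. 95%) of the rows fit
// entirely in the ELL part; rows with more than K nonzeros spill the remainder
// of their entries into a CSR part that is handled by a second kernel.
int choose_ell_width(const std::vector<int> &row_ptr, double fit_fraction = 0.95)
{
    int n = static_cast<int>(row_ptr.size()) - 1;
    std::vector<int> nnz(n);
    for (int i = 0; i < n; ++i)
        nnz[i] = row_ptr[i + 1] - row_ptr[i];
    std::sort(nnz.begin(), nnz.end());
    int idx = std::min(n - 1, static_cast<int>(fit_fraction * n));
    return nnz[idx];
}
```

SpMV then runs as two kernels whose partial results are accumulated into the same output vector, which is also the operation the preconditioned BiCGStab and CG solvers call in their inner loop.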

11.
Numerical methods for elliptic partial differential equations (PDEs) within both continuous and hybridized discontinuous Galerkin (HDG) frameworks share the same general structure: local (elemental) matrix generation followed by a global linear system assembly and solve. The lack of inter-element communication and easily parallelizable nature of the local matrix generation stage coupled with the parallelization techniques developed for the linear system solvers make a numerical scheme for elliptic PDEs a good candidate for implementation on streaming architectures such as modern graphics processing units (GPUs). We propose an algorithmic pipeline for mapping an elliptic finite element method to the GPU and perform a case study for a particular method within the HDG framework. This study provides a comparison between CPU and GPU implementations of the method as well as highlights certain performance-crucial implementation details. The choice of the HDG method for the case study was dictated by the computationally-heavy local matrix generation stage as well as the reduced trace-based communication pattern, which together make the method amenable to the fine-grained parallelism of GPUs. We demonstrate that the HDG method is well-suited for GPU implementation, obtaining total speedups on the order of 30–35 times over a serial CPU implementation for moderately sized problems.
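The "embarrassingly parallel" local stage can be pictured with the simplest possible element: for 1D linear elements the local stiffness matrix is (1/h)[[1, -1], [-1, 1]], and one GPU thread can generate one element's matrix with no inter-element communication at all (a toy sketch, far simpler than the HDG local solves in the paper):

```cuda
// Toy illustration of per-element local matrix generation: one thread per element.
// For a 1D linear element of length h[e] the local stiffness matrix is
// (1/h[e]) * [[1, -1], [-1, 1]], stored here as 4 consecutive doubles per element.
__global__ void local_stiffness_1d(const double *h, double *ke, int n_elem)
{
    int e = blockIdx.x * blockDim.x + threadIdx.x;
    if (e >= n_elem) return;

    double inv_h = 1.0 / h[e];
    double *k = ke + 4 * e;
    k[0] =  inv_h;  k[1] = -inv_h;
    k[2] = -inv_h;  k[3] =  inv_h;
}
```

In the HDG case each "local matrix" comes from a small dense local solve, which is exactly the compute-heavy, communication-free work the abstract identifies as a good match for the GPU.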

12.
Many engineering and scientific problems need to solve boundary value problems for partial differential equations or systems of them. For most cases, to obtain the solution with desired precision and in acceptable time, the only practical way is to harness the power of parallel processing. In this paper, we present some effective applications of parallel processing based on a hybrid CPU/GPU domain decomposition method. Within the family of domain decomposition methods, the so-called optimized Schwarz methods have proven to have good convergence behaviour compared to classical Schwarz methods. The price for this feature is the need to transfer more physical information between subdomain interfaces. For solving large systems of linear algebraic equations resulting from the finite element discretization of the subproblem for each subdomain, a Krylov method is often a good choice. Since the overall efficiency of such methods depends on effective calculation of the sparse matrix–vector product, approaches that use the graphics processing unit (GPU) instead of the central processing unit (CPU) for such a task look very promising. In this paper, we discuss effective implementation of algebraic operations for iterative Krylov methods on the GPU. In order to ensure good performance for the non-overlapping Schwarz method, we propose to use optimized conditions obtained by a stochastic technique based on the covariance matrix adaptation evolution strategy. The performance, robustness, and accuracy of the proposed approach are demonstrated for the solution of the gravitational potential equation for the data acquired from the geological survey of Chicxulub crater.

13.
A numerical computer method using planar flexural finite line element for the determination of buckling loads of beams, shafts and frames supported by rigid or elastic bearings is presented. Buckling loads and the corresponding mode vectors are determined by the solution of a linear set of eigenvalue equations of elastic stability. The elastic stability matrix is determined as the product of the bifurcation sidesway flexibility matrix and the second order bifurcation sidesway stiffness matrix which is formed using the element bifurcation sidesway stiffness matrices. The bifurcation sidesway flexibility matrix is determined by partitioning the inverse of the global external stiffness matrix of the system which is formed from the element data using the element stiffness matrices. The method is directly applicable to the determination of the buckling loads of beams and frames partially or fully supported by elastic foundations where the foundation stiffness is approximated by a discrete set of springs. The method of the article provides means to consider complex boundary conditions in buckling problems with ease. Four numerical examples are included to illustrate the industrial applications of the contents of the article.

14.
We present graphics processing unit (GPU) data structures and algorithms to efficiently solve sparse linear systems that are typically required in simulations of multi-body systems and deformable bodies. Thereby, we introduce an efficient sparse matrix data structure that can handle arbitrary sparsity patterns and outperforms current state-of-the-art implementations for sparse matrix vector multiplication. Moreover, an efficient method to construct global matrices on the GPU is presented where hundreds of thousands of individual element contributions are assembled in a few milliseconds. A finite-element-based method for the simulation of deformable solids as well as an impulse-based method for rigid bodies are introduced in order to demonstrate the advantages of the novel data structures and algorithms. These applications share the characteristic that a major computational effort consists of building and solving systems of linear equations in every time step. Our solving method results in a speed-up factor of up to 13 in comparison to other GPU methods.
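Assembling hundreds of thousands of element contributions on the GPU usually comes down to scattering each local entry to a precomputed slot in the global sparse structure and resolving write conflicts with atomics; a sketch under that assumption (the paper's data structure may resolve conflicts differently):

```cuda
// Scatter-assembly sketch: entry k of element e has a precomputed destination slot
// dest[e * entries_per_elem + k] in the global value array; conflicting writes from
// elements that share a node are resolved with atomicAdd.
// Note: atomicAdd on double requires compute capability 6.0+; use float or a CAS loop otherwise.
__global__ void assemble_global(const double *elem_vals,   // n_elem * entries_per_elem
                                const int    *dest,        // same layout, global slot indices
                                double       *global_vals,
                                int n_elem, int entries_per_elem)
{
    int tid   = blockIdx.x * blockDim.x + threadIdx.x;
    int total = n_elem * entries_per_elem;
    if (tid >= total) return;

    atomicAdd(&global_vals[dest[tid]], elem_vals[tid]);
}
```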

15.
Assembly-free FEM bypasses the assembly step and solves the system of linear equations at the element level using a Conjugate Gradient (CG) type iterative solver. The smaller dense matrix-vector products (MvPs) are encapsulated within the CG solver and are computed either at the element level or at the degree-of-freedom (DoF) level. Both strategies exploit the computing power of the GPU effectively, but performance lags because of uncoalesced global memory access on the GPU. This paper proposes an improved MvP strategy for assembly-free FEM that improves performance through coalesced global memory access, using the faster on-chip shared memory and the texture cache on the GPU. Since the GPU has limited shared memory (a few KB), the proposed technique suffers from a problem known as low occupancy. Despite the low-occupancy issue, the proposed strategy outperforms both the element-based and the DoF-based MvP strategies on the GPU. Numerical experiments comparing it with the element-level and DoF-level strategies on the GPU show that the GPU instance of the proposed MvP outperforms the two strategies by factors of approximately 7 and 1.5, respectively.
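The element-level matrix-vector product can be sketched as one thread block per element: the block stages the element's solution values in shared memory, each thread computes one row of the small dense product k_e · x_e, and the result is scattered back with atomics (an illustration of the strategy, not the paper's implementation):

```cuda
// Assembly-free element-level MvP sketch: one block per element.
// ke:   n_elem dense element matrices, each ndof x ndof, row-major.
// dofs: n_elem * ndof global DoF indices.
// Launch with blockDim.x == ndof and ndof * sizeof(double) bytes of shared memory.
__global__ void element_mvp(const double *ke, const int *dofs,
                            const double *x, double *y, int ndof)
{
    extern __shared__ double xe[];            // element-local copy of x in shared memory
    int e   = blockIdx.x;
    int row = threadIdx.x;

    int gdof = dofs[e * ndof + row];
    xe[row]  = x[gdof];                       // one gather per DoF, reused ndof times
    __syncthreads();

    const double *k = ke + (size_t)e * ndof * ndof + (size_t)row * ndof;
    double sum = 0.0;
    for (int j = 0; j < ndof; ++j)
        sum += k[j] * xe[j];                  // dense row times shared vector

    atomicAdd(&y[gdof], sum);                 // DoFs shared between elements overlap
}
```

The shared-memory staging is what trades occupancy for coalesced, reused reads; the abstract's low-occupancy remark refers exactly to this trade-off.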

16.
A finite element capability is described for the analysis of sandwich beams with thick unbalanced laminated faces. Particular attention is focused on the effects of bending-membrane coupling in the faces. The stiffness matrix is developed using displacement functions generated from explicit solution of the governing differential equations.

17.
This paper describes a parallel implementation of the finite element method on a multiprocessor computer. The proposed strategy does not require the formation of global system equations. An element or substructure is mapped onto each processor of the multiple-instruction, multiple-data multiprocessing system. Throughout the program, each processor stores only the information relevant to its element (substructure) and generates the local stiffness matrix. A parallel element (substructure) oriented conjugate gradient procedure is employed to compute the displacements. Each processor then determines the strains and stresses for its associated element (substructure). A prototype implementation of this parallel finite element program strategy on a hypercube computer is discussed. Examples for both linear and nonlinear analyses are presented.

18.
A preconditioned conjugate gradient solver for parallel structural finite element analysis is implemented based on the smoothed aggregation algebraic multigrid (AMG) method. The computational domain is partitioned uniformly and the subdomains are assigned to processes that compute the element stiffness matrices simultaneously and assemble them into a distributed global equilibrium system. The global system is then solved in parallel with the smoothed aggregation AMG preconditioned conjugate gradient method. Numerical experiments on the Tianhe-2 supercomputer analyze the influence of the main AMG parameters on algorithm performance and test the parallel performance of the code. The results show that the method has good parallel performance and scalability and is suitable for large-scale practical applications.
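The outer preconditioned conjugate gradient loop that wraps the AMG V-cycle is the textbook one; a compact serial skeleton with the operator and preconditioner left as callbacks (a generic sketch, not the paper's distributed code):

```cuda
#include <vector>
#include <cmath>
#include <functional>

using Vec     = std::vector<double>;
using ApplyOp = std::function<void(const Vec &, Vec &)>;   // out = Op(in)

static double dot(const Vec &a, const Vec &b)
{
    double s = 0.0;
    for (size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

// Preconditioned CG for an SPD system: A_apply computes q = A*p and M_apply computes
// z = M^{-1} r; the smoothed aggregation AMG V-cycle would be plugged in as M_apply.
void pcg(const ApplyOp &A_apply, const ApplyOp &M_apply,
         const Vec &b, Vec &x, int max_iter, double tol)
{
    size_t n = b.size();
    Vec r(n), z(n), p(n), q(n);
    A_apply(x, q);
    for (size_t i = 0; i < n; ++i) r[i] = b[i] - q[i];
    M_apply(r, z);
    p = z;
    double rho = dot(r, z), rnorm0 = std::sqrt(dot(r, r));

    for (int it = 0; it < max_iter && std::sqrt(dot(r, r)) > tol * rnorm0; ++it) {
        A_apply(p, q);
        double alpha = rho / dot(p, q);
        for (size_t i = 0; i < n; ++i) { x[i] += alpha * p[i]; r[i] -= alpha * q[i]; }
        M_apply(r, z);
        double rho_new = dot(r, z);
        double beta = rho_new / rho;
        for (size_t i = 0; i < n; ++i) p[i] = z[i] + beta * p[i];
        rho = rho_new;
    }
}
```

In the parallel version each process owns the rows of its subdomain, the vector updates are local, and only the dot products and the matrix/preconditioner applications require communication.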

19.
A GPU implementation of the conjugate gradient method (cited by 1)
夏健明, 魏德敏. 《计算机工程》(Computer Engineering), 2009, 35(17): 274-276
A new algorithm for matrix-vector multiplication on the graphics processing unit (GPU) is proposed that performs the multiplication by rendering a quadrilateral only once. A new algorithm for summing the elements of a vector is also given; unlike the standard reduction algorithm, it does not require the vector length to be a power of two. Based on these two algorithms, programmed in the OpenGL Shading Language (GLSL), the conjugate gradient method for solving linear systems is implemented on the GPU. Compared with Krüger's algorithm, the method requires less computation time.
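The "no power-of-two restriction" idea carries over directly to how such reductions are written today: a grid-stride accumulation followed by a shared-memory tree works for any vector length (a CUDA counterpart of the paper's GLSL reduction, not its actual code):

```cuda
// Sum a vector of arbitrary length n: each thread accumulates a grid-stride partial
// sum, a shared-memory tree combines the partials within the block, and the block
// results are added into *result with atomicAdd. Only blockDim.x (chosen by the
// caller, e.g. 256) needs to be a power of two; the vector length does not.
__global__ void vector_sum(const float *x, int n, float *result)
{
    extern __shared__ float sdata[];
    int tid = threadIdx.x;

    float partial = 0.0f;
    for (int i = blockIdx.x * blockDim.x + tid; i < n; i += blockDim.x * gridDim.x)
        partial += x[i];
    sdata[tid] = partial;
    __syncthreads();

    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s) sdata[tid] += sdata[tid + s];
        __syncthreads();
    }
    if (tid == 0) atomicAdd(result, sdata[0]);
}
```

The dot products inside the conjugate gradient loop reduce in exactly the same way, with partial += x[i] * y[i].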

20.
Many problems in geophysical and atmospheric modelling require the fast solution of elliptic partial differential equations (PDEs) in "flat" three dimensional geometries. In particular, an anisotropic elliptic PDE for the pressure correction has to be solved at every time step in the dynamical core of many numerical weather prediction (NWP) models, and equations of a very similar structure arise in global ocean models, subsurface flow simulations and gas and oil reservoir modelling. The elliptic solve is often the bottleneck of the forecast, and to meet operational requirements an algorithmically optimal method has to be used and implemented efficiently. Graphics Processing Units (GPUs) have been shown to be highly efficient (both in terms of absolute performance and power consumption) for a wide range of applications in scientific computing, and recently iterative solvers have been parallelised on these architectures. In this article we describe the GPU implementation and optimisation of a Preconditioned Conjugate Gradient (PCG) algorithm for the solution of a three dimensional anisotropic elliptic PDE for the pressure correction in NWP. Our implementation exploits the strong vertical anisotropy of the elliptic operator in the construction of a suitable preconditioner. As the algorithm is memory bound, performance can be improved significantly by reducing the amount of global memory access. We achieve this by using a matrix-free implementation which does not require explicit storage of the matrix and instead recalculates the local stencil. Global memory access can also be reduced by rewriting the PCG algorithm using loop fusion and we show that this further reduces the runtime on the GPU. We demonstrate the performance of our matrix-free GPU code by comparing it both to a sequential CPU implementation and to a matrix-explicit GPU code which uses existing CUDA libraries. The absolute performance of the algorithm for different problem sizes is quantified in terms of floating point throughput and global memory bandwidth.
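The two optimisations the abstract emphasises, recomputing the stencil instead of storing the matrix and fusing vector updates so that each PCG iteration makes fewer passes over global memory, can be sketched as follows (illustrative coefficients and layout, not the operational model's anisotropic operator):

```cuda
// Matrix-free application of a 7-point anisotropic operator on an nx*ny*nz grid,
// index i + nx*(j + ny*k): the stencil is recomputed from the horizontal (ch) and
// vertical (cv) coefficients instead of being read from a stored matrix.
__global__ void stencil_apply(const double *u, double *v,
                              int nx, int ny, int nz, double ch, double cv)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    int k = blockIdx.z;
    if (i <= 0 || j <= 0 || k <= 0 || i >= nx - 1 || j >= ny - 1 || k >= nz - 1) return;

    size_t id = i + (size_t)nx * (j + (size_t)ny * k);
    v[id] = (4.0 * ch + 2.0 * cv) * u[id]
          - ch * (u[id - 1] + u[id + 1] + u[id - nx] + u[id + nx])
          - cv * (u[id - (size_t)nx * ny] + u[id + (size_t)nx * ny]);
}

// Loop fusion: the PCG updates x += alpha*p and r -= alpha*q are done in one kernel,
// so p, q, x and r are each streamed through global memory once instead of twice.
__global__ void fused_update(double *x, double *r,
                             const double *p, const double *q, double alpha, size_t n)
{
    size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        x[i] += alpha * p[i];
        r[i] -= alpha * q[i];
    }
}
```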
