1.
Crashworthiness simulation systems are among the key computer-aided engineering (CAE) tools for the automobile industry and involve two potentially conflicting requirements: accuracy and efficiency. A parallel crashworthiness simulation system based on the graphics processing unit (GPU) architecture and the explicit finite element (FE) method is developed in this work, and implementation details with the compute unified device architecture (CUDA) are considered. The parallel simulation system comprises a parallel hierarchy-territory contact-searching algorithm (HITA) and a parallel penalty contact force calculation algorithm. Three basic GPU-based parallel strategies are suggested to match the natural parallelism of the explicit FE algorithm, and two free GPU-based numerical libraries, cuBLAS and Thrust, are introduced to reduce programming effort. Furthermore, a mixed array and a thread-map-to-element strategy are proposed to improve the performance of the search for test pairs; the outer loop of the nested loop over the mixed array is unrolled to realize parallel searching. An efficient storage strategy based on data sorting is presented to realize data transfer between different hierarchies with coalesced access during the contact pair search. A thread-map-to-element pattern is used to calculate the penetrations and penetration forces, and a double-precision atomic operation is used to scatter contact forces. Simulation results for three different models on an Intel Core i7-930 and an NVIDIA GeForce GTX 580 demonstrate the accuracy and efficiency of the developed parallel crashworthiness simulation system.
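The abstract above attributes the scattering of contact forces to a double-precision atomic operation. As a point of reference only (the paper's own kernels are not reproduced here), the sketch below shows the standard CAS-based emulation of double-precision atomicAdd that pre-sm_60 GPUs such as the GTX 580 require, applied to a hypothetical one-thread-per-contact-pair scatter; all names are illustrative.

__device__ double atomicAddDouble(double* address, double val) {
    // Classic compare-and-swap loop: Fermi-class GPUs have no native
    // double-precision atomicAdd, so it is emulated via 64-bit atomicCAS.
    unsigned long long int* addr = (unsigned long long int*)address;
    unsigned long long int old = *addr, assumed;
    do {
        assumed = old;
        old = atomicCAS(addr, assumed,
                        __double_as_longlong(val + __longlong_as_double(assumed)));
    } while (assumed != old);
    return __longlong_as_double(old);
}

// One thread per contact pair scatters its penalty force into the nodal force
// array; atomics resolve conflicts when several pairs share the same node.
__global__ void scatterContactForces(const int* nodeIdx, const double* pairForce,
                                     double* nodalForce, int numPairs) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < numPairs)
        atomicAddDouble(&nodalForce[nodeIdx[i]], pairForce[i]);
}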
2.
Sparse matrix-vector multiplication (SpMV) is a core kernel in many scientific computing problems. Targeting sparse diagonal matrices, this paper designs a new compressed storage format, CDIA, on the basis of the DIA format. Exploiting the characteristics of the CUDA programming model, tasks are assigned to computing threads at fine granularity, and the compressed matrix is transposed so as to satisfy CUDA's requirement of coalesced memory access. A fine-grained algorithm and program are designed and then further optimized according to the characteristics of SpMV. Experimental results show that this storage format makes good use of CUDA's strengths in data processing: on the test data it reaches peak floating-point performance of 39.6 Gflop/s in single precision and 19.6 Gflop/s in double precision, improvements of 7.6% and 17.4%, respectively, over the results of Nathan Bell and Michael Garland.
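For orientation, the sketch below shows the baseline DIA-format SpMV kernel in the style of Bell and Garland that the CDIA work improves upon; it is not the CDIA kernel itself. The diagonals are assumed to be stored column-major (element (row, d) at d * num_rows + row), so consecutive threads read consecutive addresses, i.e. the coalesced layout the abstract refers to.

__global__ void spmv_dia(int num_rows, int num_cols, int num_diags,
                         const int* offsets, const float* data,
                         const float* x, float* y) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per matrix row
    if (row < num_rows) {
        float dot = 0.0f;
        for (int d = 0; d < num_diags; ++d) {
            int col = row + offsets[d];                // column hit by diagonal d in this row
            if (col >= 0 && col < num_cols)
                dot += data[d * num_rows + row] * x[col];
        }
        y[row] = dot;
    }
}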
3.
We have designed Particle-in-Cell algorithms for emerging architectures. These algorithms share a common approach based on fine-grained tiles, but their implementations differ by architecture. On the GPU there were two implementations, one with atomic operations and one with no data collisions, written in CUDA C and Fortran; speedups of up to about 50 compared to a single core of the Intel i7 processor have been achieved. There was also an implementation for traditional multi-core processors using OpenMP, which achieved high parallel efficiency. We believe that this approach should also work for other emerging designs such as the Intel Phi coprocessor based on the Intel MIC architecture.
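A minimal sketch of the fine-grained tile idea combined with atomics, under the assumption of a 1-D grid, particles pre-sorted by tile, and linear weighting; the tile size, names, and deposited quantity are illustrative and not taken from the paper.

#define TILE 64   // grid cells per tile (illustrative)

__global__ void depositTiled(const float* pos, const int* tileStart,
                             const int* tileCount, float* rho, float dx) {
    __shared__ float srho[TILE + 1];                    // this tile's grid plus one guard cell
    int tile = blockIdx.x;
    for (int i = threadIdx.x; i < TILE + 1; i += blockDim.x) srho[i] = 0.0f;
    __syncthreads();

    int start = tileStart[tile], count = tileCount[tile];
    for (int p = threadIdx.x; p < count; p += blockDim.x) {
        float xp = pos[start + p] / dx - tile * TILE;   // particle position local to the tile
        int cell = (int)xp;
        float w = xp - cell;
        atomicAdd(&srho[cell], 1.0f - w);               // shared-memory atomics stay on-chip
        atomicAdd(&srho[cell + 1], w);
    }
    __syncthreads();
    // Merge the tile back into the global grid; rho needs one guard cell at the end.
    for (int i = threadIdx.x; i < TILE + 1; i += blockDim.x)
        atomicAdd(&rho[tile * TILE + i], srho[i]);
}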
4.
Identifying biological features by spectral analysis produces large volumes of data, yet practical applications demand real-time processing. Partial least squares (PLS) is the most widely used discrimination algorithm, but it cannot reach real-time performance on large-scale data streams. To resolve this conflict, a parallel computing strategy based on the NVIDIA CUDA architecture is proposed: the graphics processing unit (GPU), with its massively parallel character, serves as the computing device, and the PLS algorithm is implemented by exploiting the advantages of the GPU memory hierarchy. Test results show that the CUDA implementation of PLS on the GPU is 47 times faster than the same algorithm on the CPU, a significant performance improvement that makes it feasible to apply PLS to large-scale data stream processing.
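The abstract gives no implementation details; as one plausible mapping, the weight-vector step of a NIPALS-style PLS iteration, w = X^T u, can be delegated to cuBLAS as sketched below. The function name and the column-major layout are assumptions for illustration.

#include <cublas_v2.h>

// Compute w = X^T u for an n-by-m matrix X stored column-major on the device.
// d_X, d_u, d_w are device pointers; the cuBLAS handle must already exist
// (i.e. cublasCreate has been called).
void plsWeightStep(cublasHandle_t handle, int n, int m,
                   const float* d_X, const float* d_u, float* d_w) {
    const float alpha = 1.0f, beta = 0.0f;
    cublasSgemv(handle, CUBLAS_OP_T, n, m, &alpha, d_X, n, d_u, 1, &beta, d_w, 1);
}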
5.
To improve the encryption performance of the Advanced Encryption Standard (AES), the general-purpose computing capability of the graphics card is exploited: GPU-parallel versions of the 128-bit, 192-bit and 256-bit variants of AES are implemented on the Compute Unified Device Architecture (CUDA) platform, and an optimized parallel AES algorithm is proposed. Taking into account the number of threads per block, the shared memory capacity and the total number of blocks, empirical data on the optimal partition are used to guide the best block configuration of the AES algorithm on the GPU. Experimental results show that, compared with the unoptimized parallel AES algorithm, the three versions of the optimized algorithm encrypt 5.28%, 14.55% and 12.53% faster on an Nvidia Geforce G210 and 12.48%, 15.40% and 15.84% faster on an Nvidia Geforce GTX460, and that the optimized algorithm also encrypts SSL data more effectively.
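A minimal sketch of the block-partition tuning idea only: the kernel below is a deliberately trivial stand-in (a repeating-key XOR), not AES, and the point is the host-side sweep over threads-per-block from which the empirically best configuration is picked, as the abstract describes. All names and sizes are illustrative.

#include <cstdio>
#include <cuda_runtime.h>

__global__ void xorCipher(const unsigned char* in, unsigned char* out,
                          const unsigned char* key, int keyLen, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] ^ key[i % keyLen];
}

int main() {
    const int n = 1 << 24, keyLen = 16;
    unsigned char *dIn, *dOut, *dKey;
    cudaMalloc(&dIn, n); cudaMalloc(&dOut, n); cudaMalloc(&dKey, keyLen);
    cudaMemset(dIn, 0x5a, n); cudaMemset(dKey, 0x3c, keyLen);

    cudaEvent_t t0, t1; cudaEventCreate(&t0); cudaEventCreate(&t1);
    for (int threads = 64; threads <= 1024; threads *= 2) {      // candidate block sizes
        int blocks = (n + threads - 1) / threads;
        cudaEventRecord(t0);
        xorCipher<<<blocks, threads>>>(dIn, dOut, dKey, keyLen, n);
        cudaEventRecord(t1); cudaEventSynchronize(t1);
        float ms; cudaEventElapsedTime(&ms, t0, t1);
        printf("threads per block %4d: %.3f ms\n", threads, ms); // pick the fastest
    }
    cudaFree(dIn); cudaFree(dOut); cudaFree(dKey);
    return 0;
}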
6.
The fast Fourier transform (FFT) is an efficient algorithm for computing the discrete Fourier transform (DFT) and is widely used across science and engineering. Since the FFT first appeared, its optimization has been studied extensively, from early work on reducing complexity to the large-scale parallel FFT computations of recent years. In parallel computing, the spread of programmable, parallel GPUs, and in particular the emergence of the general-purpose parallel computing architecture CUDA, has greatly increased GPU computing power and brought clear improvements in programmability and optimization. On this basis, and starting from an analysis of FFT implementation, this paper studies a parallel FFT computation method suited to the GPU and implements the FFT on the GPU through the CUDA architecture. In theory, and excluding data transfer, the method reduces the time complexity of a one-dimensional FFT from O(N log2 N) to O((N/r) log2 N). Experiments show that the proposed CUDA-based parallel FFT achieves a good speedup and also meets practical accuracy requirements, demonstrating the correctness and effectiveness of the method.
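For orientation only: the paper implements its own CUDA FFT kernels, but the vendor cuFFT library exposes the same one-dimensional transform through the few calls sketched below, which makes a convenient correctness and performance baseline. The wrapper name is illustrative.

#include <cufft.h>
#include <cuda_runtime.h>

// In-place forward 1-D complex-to-complex FFT of length n on device data.
void forwardFFT(cufftComplex* d_data, int n) {
    cufftHandle plan;
    cufftPlan1d(&plan, n, CUFFT_C2C, 1);                 // plan a single 1-D transform
    cufftExecC2C(plan, d_data, d_data, CUFFT_FORWARD);   // execute on the GPU
    cudaDeviceSynchronize();
    cufftDestroy(plan);
}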
7.
Parallel Computing, 2014, 40(5-6): 59-69
We present a cache-aware method for accelerating texture-based volume rendering on a graphics processing unit (GPU). Because a GPU has a hierarchical architecture of processing and memory units, cache optimization is important for maximizing the performance of memory-intensive applications. Our method localizes texture memory references according to the location of the viewpoint and dynamically selects the width and height of thread blocks (TBs) so that each warp, a series of 32 threads processed simultaneously, minimizes its memory access strides. We also incorporate transposed indexing of threads to perform TB-level cache optimization for specific viewpoints. Furthermore, we maximize TB size to exploit spatial locality with fewer resident TBs. For viewpoints with relatively large strides, we synchronize the threads of the same TB at regular intervals to realize synchronous ray propagation. Experimental results indicate that our cache-aware method doubles the worst-case rendering performance compared with implementations based on the CUDA and OpenCL software development kits.
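A minimal sketch of the transposed-indexing idea described above, assuming a square thread block: for viewpoints where texture addresses vary fastest along the image y-axis, swapping the block-local thread coordinates lets each warp walk the lower-stride direction. The selection rule, names, and the omitted ray-casting body are illustrative, not the paper's code.

__global__ void renderSlice(float* image, int width, int height, bool transposeTB) {
    int tx = threadIdx.x, ty = threadIdx.y;
    if (transposeTB) { int t = tx; tx = ty; ty = t; }    // TB-level transposed indexing
                                                         // (requires blockDim.x == blockDim.y)
    int x = blockIdx.x * blockDim.x + tx;
    int y = blockIdx.y * blockDim.y + ty;
    if (x >= width || y >= height) return;

    float value = 0.0f;
    // ... cast the ray for pixel (x, y) here; texture fetches along the ray now
    // have smaller address strides between neighbouring threads of a warp ...
    image[y * width + x] = value;
}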
8.
This paper presents a parallel implementation of the hybrid BiCGStab(2) (bi-conjugate gradient stabilized) iterative method on a GPU (graphics processing unit) for the solution of large, sparse linear systems. The implementation uses the CUDA-Matlab integration, in which the method's operations are performed on the GPU using Matlab built-in functions. The goal is to show that exploiting parallelism through this technology can provide significant computational performance. For validation, we compared the proposed implementation with sequential and parallel BiCGStab(2) implementations in the C and CUDA-C languages. The results showed that the proposed implementation is more efficient and is viable for carrying out simulations accurately and in a timely manner: the gains in computational efficiency were 76x and 6x compared with the C and CUDA-C implementations, respectively.
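The abstract compares against hand-written C and CUDA-C codes; as a loose illustration of the kind of vector kernels such a CUDA-C BiCGStab(2) typically relies on, the sketch below shows an AXPY update and a block-wise reduced dot product (the block sums still need a final reduction on the host or in a second kernel). Names are illustrative, and blockDim.x is assumed to be a power of two.

__global__ void axpy(int n, double a, const double* x, double* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] += a * x[i];                          // y <- y + a*x
}

__global__ void dotPartial(int n, const double* x, const double* y, double* blockSums) {
    extern __shared__ double s[];                         // blockDim.x doubles
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    s[threadIdx.x] = (i < n) ? x[i] * y[i] : 0.0;
    __syncthreads();
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride) s[threadIdx.x] += s[threadIdx.x + stride];
        __syncthreads();
    }
    if (threadIdx.x == 0) blockSums[blockIdx.x] = s[0];
}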
9.
Numerical methods for partial differential equations (including the finite difference and finite element methods), and numerical methods for equations of mathematical physics in general, ultimately reduce to solving large linear systems. Investigating fast, stable and accurate solvers for large linear systems is therefore a topic of continuing and particularly important research in numerical computing. Among iterative methods, the conjugate gradient method is widely regarded as one of the best. Its main drawback, however, is that it applies only when the coefficient matrix of the linear system is symmetric positive definite, and a conventional CPU implementation is very time-consuming. To address this, general large linear systems are solved by first transforming the coefficient matrix into a symmetric matrix and then applying a fast GPU-CUDA-based conjugate gradient method. Experimental results show that, in terms of solution efficiency, the GPU-CUDA-based conjugate gradient method runs efficiently: when the order of the linear system exceeds 3000, the speedup exceeds 14; in terms of solution accuracy and the stability of the solution process, it is comparable to Gaussian elimination with partial pivoting. The fast GPU-CUDA-based conjugate gradient method is thus a fast and highly effective way of solving general large linear systems.
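The abstract does not spell out how the coefficient matrix is symmetrized; one common choice consistent with it is the normal-equations form, which produces a symmetric positive definite system whenever A is nonsingular and therefore falls within the conjugate gradient method's domain of applicability:

\[
  A x = b \;\Longrightarrow\; (A^{\mathsf{T}} A)\, x = A^{\mathsf{T}} b,
  \qquad
  (A^{\mathsf{T}} A)^{\mathsf{T}} = A^{\mathsf{T}} A,
  \qquad
  x^{\mathsf{T}} (A^{\mathsf{T}} A)\, x = \lVert A x \rVert_2^2 > 0 \ \text{for } x \neq 0 .
\]

Note that forming A^T A squares the condition number, so in practice the product is often applied implicitly (two matrix-vector products per iteration) rather than assembled explicitly.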
10.
王震, 李仁发, 李彦彪, 田峥. 《计算机工程》(Computer Engineering), 2014, (4): 318-320, F0003
To address the accuracy of matching mixed Chinese-English text and the efficiency of matching over large-scale data, a parallel multi-pattern matching algorithm for mixed Chinese-English text is proposed on the basis of the classical threaded perfect-hash trie algorithm. The text is split to reduce the serial fraction of the multi-pattern matching algorithm, and matching is then executed in parallel over the resulting small texts; the preprocessing stage is parallelized and a new storage structure is designed. Experimental results show that, while guaranteeing correct results, the algorithm executes faster than the classical serial matching algorithm, and when the data size reaches 2^26 characters a speedup of more than 8x is obtained.
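A minimal sketch of the text-splitting step only, under the assumption that chunks overlap by (maxPatternLen - 1) characters so that a pattern straddling a cut is still found; the trie-based matcher itself is not reproduced, and names are illustrative. To avoid duplicate hits, a match would be attributed to a chunk only if its local start offset is below chunkLen, and for mixed Chinese-English text the cut points would additionally be adjusted so they never fall inside a multi-byte character.

#include <algorithm>
#include <string>
#include <vector>

struct Chunk { size_t offset; size_t length; };

std::vector<Chunk> splitWithOverlap(const std::string& text,
                                    size_t chunkLen, size_t maxPatternLen) {
    std::vector<Chunk> chunks;
    size_t overlap = (maxPatternLen > 0) ? maxPatternLen - 1 : 0;
    for (size_t off = 0; off < text.size(); off += chunkLen) {
        size_t len = std::min(chunkLen + overlap, text.size() - off);
        chunks.push_back({off, len});   // each chunk can now be matched independently
    }
    return chunks;
}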