51.
Computing the Maximum Speedup of Parallel Programs on Cluster Systems
Speedup is one of the key metrics of a parallel program. On most parallel systems, for a fixed problem size, a program's speedup increases as workstation nodes are added. On most cluster systems, however, the workstation nodes share a physical transmission medium, so the speedup of many parallel programs begins to decrease once the number of nodes exceeds a certain value. By analyzing the execution time of parallel programs on cluster systems, this paper derives, for a fixed problem size, the maximum attainable speedup, the shortest computation time, and the number of nodes at which they are achieved.
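This trade-off can be sketched with a simple cost model (an illustrative assumption, not the paper's own derivation): computation time divides across the `p` nodes, while traffic on the shared medium grows linearly with `p`. Under this model the speedup peaks at roughly `sqrt(t_seq / t_comm)` nodes and declines beyond that point:

```python
import math

def exec_time(t_seq, t_comm, p):
    """Hypothetical execution time on p nodes: computation is divided
    across nodes, while shared-medium communication grows with p."""
    return t_seq / p + t_comm * p

def speedup(t_seq, t_comm, p):
    """Speedup relative to running on a single node."""
    return exec_time(t_seq, t_comm, 1) / exec_time(t_seq, t_comm, p)

# Example: 100 units of sequential work, 1 unit of per-node communication.
t_seq, t_comm = 100.0, 1.0
p_opt = round(math.sqrt(t_seq / t_comm))   # optimum node count under this model
best = speedup(t_seq, t_comm, p_opt)

# Adding nodes past the optimum reduces speedup, matching the paper's
# qualitative observation for shared-medium clusters.
worse = speedup(t_seq, t_comm, 2 * p_opt)
```

The quadratic-style cost curve is only one possible model of a shared medium; the paper's analysis is more detailed, but the qualitative shape (speedup rises, peaks, then falls with node count) is the same.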
52.
Efficiency and Performance Analysis of the Hierarchical Object-Oriented Parallel Environment HOOPE
Developing parallel software has always been one of the difficulties in advancing parallel application technology, and building parallel software that is efficient, flexible, and reusable is harder still. To address this problem to some extent, we developed HOOPE, a parallel development environment oriented toward specialized application domains, drawing on hierarchical structure, design patterns, and object-oriented application frameworks. After a brief introduction to the environment, this paper focuses on analyzing its development efficiency and the parallel performance of the software developed with it, and concludes that HOOPE is a simple, practical, efficient, flexible, and well-designed open environment suited to developing large-scale parallel computing tasks.
53.
SI Yujing, LI Ta, PAN Jielin, YAN Yonghong. Chinese Journal of Electronics (English Edition), 2014, (1): 70-74
In this paper, issues of speeding up the Recurrent neural network language model (RNNLM) in the testing phase are explored so that RNNLMs can be used to re-rank a large n-best list in real-time systems and thereby obtain better performance. A new n-best list re-scoring framework, Prefix tree based n-best list re-scoring (PTNR), is proposed to completely eliminate the repeated computations that make n-best list re-scoring inefficient. In addition, the bunch-mode technique, widely used to speed up the training of the Feed-forward neural network language model (FF-NNLM), is combined with PTNR to improve the speed further. Experimental results show that our approach is much faster than basic n-best list re-scoring: on a 1000-best list, it is almost 11 times faster.
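The core idea behind a prefix-tree re-scoring scheme can be sketched as follows (an illustrative toy, not the paper's PTNR implementation): hypotheses in an n-best list share long prefixes, so merging them into a trie lets each shared prefix position be scored once by the RNN instead of once per hypothesis:

```python
def build_prefix_tree(hypotheses):
    """Merge n-best hypotheses (word lists) into a trie; shared
    prefixes collapse into a single path."""
    root = {}
    for hyp in hypotheses:
        node = root
        for word in hyp:
            node = node.setdefault(word, {})
    return root

def count_scored_positions(node):
    """Each trie node corresponds to one RNN forward step; counting
    nodes shows how much repeated computation the tree removes."""
    return sum(1 + count_scored_positions(child) for child in node.values())

# Toy 3-best list sharing the prefixes "the" and "the cat".
nbest = [["the", "cat", "sat"],
         ["the", "cat", "ran"],
         ["the", "dog", "sat"]]

tree = build_prefix_tree(nbest)
naive_steps = sum(len(h) for h in nbest)       # scoring each hypothesis alone
shared_steps = count_scored_positions(tree)    # scoring via the prefix tree
```

With real 1000-best lists the hypotheses overlap far more heavily than in this toy, which is consistent with the roughly 11x speedup the paper reports; the bunch-mode batching the authors combine with PTNR is not shown here.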