Similar Documents
20 similar documents found (search time: 250 ms)
1.
Measuring memory access latency provides important guidance for optimizing an application's access patterns and data placement, yet data caches, multithreading, data prefetching, and similar techniques seriously disturb the accuracy of such measurements. This paper designs and implements a variable-stride latency measurement model: within a region of memory, a cyclic access sequence is built at a user-specified stride, and the average time per access over repeated traversals of this ring is taken as the memory access latency. Latencies are then measured and compared on an Intel general-purpose processor and a Phytium (FeiTeng) processor across different data sizes, strides, and thread counts; the model exposes the memory hierarchy and measures latency precisely.
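A minimal pointer-chasing sketch in C illustrates the measurement idea (an illustrative reconstruction, not the paper's code; the 64-byte stride, buffer sizes, and iteration count are assumptions). Each load depends on the previous one, so out-of-order execution cannot overlap the accesses, and the average time per hop approximates the latency of whichever level the buffer fits in:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Build a ring of pointers through `size` bytes at the given stride,
 * then chase it. The dependent load chain defeats overlap, so the
 * average time per hop approximates the access latency. */
static double chase_ns(size_t size, size_t stride, long iters) {
    size_t n = size / stride;
    char *buf = malloc(size);
    for (size_t i = 0; i < n; i++)  /* link element i -> i+1, last -> first */
        *(void **)(buf + i * stride) = buf + ((i + 1) % n) * stride;

    void **p = (void **)buf;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < iters; i++)
        p = (void **)*p;            /* dependent load chain */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    volatile void *sink = p;        /* keep the chain from being optimized away */
    (void)sink;
    free(buf);
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    return ns / iters;
}

int main(void) {
    for (size_t kb = 16; kb <= 64 * 1024; kb *= 2)  /* sweep past L1/L2/L3 */
        printf("%6zu KiB: %.2f ns/access\n",
               kb, chase_ns(kb * 1024, 64, 10000000));
    return 0;
}
```

Sweeping the buffer size past each cache capacity makes the measured latency jump at every level boundary, which is how such a model exposes the memory hierarchy.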

2.
To address the problem that out-of-order memory accesses undermine the real-time predictability of computation on multi-core multithreaded processors, this paper studies typical memory access queues and proposes a new queue model and its hardware structure. The model uses a window-based optimization algorithm to bound worst-case access latency, guaranteeing real-time behavior, while an optimized out-of-order scheduling policy still reduces average latency. Experiments show that the proposed queue caps the maximum access latency; it delivers higher memory bandwidth than in-order access, and unlike conventional out-of-order access it fully satisfies real-time requirements while leaving effective bandwidth essentially unaffected, removing a fundamental obstacle to running real-time stream computations on multi-core multithreaded processors.
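The abstract does not give the scheduling algorithm; one plausible reading of window-bounded out-of-order access is sketched below, with all names and the row-hit heuristic assumed: requests may be reordered (e.g., to favor DRAM row-buffer hits) only while younger than an aging window, and any request that exceeds the window is issued first, which caps the worst-case latency.

```c
#include <stdbool.h>

typedef struct {
    unsigned long arrival;  /* cycle the request entered the queue */
    unsigned long row;      /* DRAM row it targets */
    bool          valid;
} MemReq;

#define QDEPTH 16
#define WINDOW 64           /* max cycles a request may be passed over (assumed) */

/* Pick the next request to issue. Requests older than WINDOW are served
 * strictly oldest-first, bounding worst-case latency; otherwise prefer a
 * row-buffer hit (an FR-FCFS-style heuristic) to keep bandwidth high. */
int pick_next(const MemReq q[QDEPTH], unsigned long now, unsigned long open_row) {
    int oldest = -1, row_hit = -1;
    for (int i = 0; i < QDEPTH; i++) {
        if (!q[i].valid) continue;
        if (oldest < 0 || q[i].arrival < q[oldest].arrival)
            oldest = i;
        if (row_hit < 0 && q[i].row == open_row)
            row_hit = i;
    }
    if (oldest >= 0 && now - q[oldest].arrival >= WINDOW)
        return oldest;      /* window expired: forced (in-order) issue */
    return row_hit >= 0 ? row_hit : oldest;   /* -1 if the queue is empty */
}
```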

3.
High-performance processors commonly integrate a large, structurally complex L1 cache on chip to raise performance, but as cache capacity and complexity grow, the latency and power of each cache access increase markedly. Building on the store queue, this paper proposes a method that lowers power and latency by reducing the number of cache accesses: the store queue buffers the data of load/store instructions, and while the queue is not full its idle entries temporarily hold data from completed accesses, raising the reuse rate of consecutively accessed data and cutting cache accesses. Simulation results show that, at the cost of a small amount of additional control logic, the method significantly reduces cache accesses, lowers cache power, shortens memory access latency, and speeds up execution.
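A minimal software model of the lookup path, with all structure names assumed: before probing the cache, a load first searches the store-queue entries (including idle entries reused as a small buffer for completed accesses) for a matching address, and only a miss falls through to the cache.

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint64_t addr;
    uint64_t data;
    bool     valid;  /* entry holds usable data: a pending store, or a
                        completed access parked in an idle entry */
} SQEntry;

#define SQ_SIZE 8

/* Try to satisfy a load from the store queue before touching the cache.
 * Returns true on a hit; on a miss the caller accesses the cache and may
 * park the returned data in an idle entry for later reuse. */
bool sq_lookup(const SQEntry sq[SQ_SIZE], uint64_t addr, uint64_t *out) {
    for (int i = SQ_SIZE - 1; i >= 0; i--)  /* newest first, honoring program order */
        if (sq[i].valid && sq[i].addr == addr) {
            *out = sq[i].data;
            return true;                    /* cache access avoided */
        }
    return false;
}
```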

4.
Using Data Prefetching to Reduce Memory Access Latency in a Block Execution Model
A block execution model mines the latent instruction-level parallelism of an application by partitioning a sequential program into a series of instruction blocks that can execute in parallel. Memory access latency is one of the main factors limiting instruction-level parallelism in this model; data prefetching, which effectively reduces access latency in conventional execution models, adapts equally well to block execution. This paper analyzes the feasibility of introducing data prefetching into the block execution model and validates its effect in terms of cache hit rate and memory instruction latency; simulation results show that prefetching effectively reduces memory access latency in the block execution model.
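Independent of the block execution model itself, the basic effect the abstract relies on can be shown with a compiler-intrinsic sketch in C (GCC/Clang's `__builtin_prefetch`; the lookahead distance of 16 elements is an assumption to be tuned against the actual miss latency and per-iteration work):

```c
#include <stddef.h>

#define AHEAD 16  /* assumed lookahead distance */

/* Sum an array while prefetching ahead: the line holding a[i] was
 * requested AHEAD iterations earlier, so its miss latency is hidden
 * behind useful work instead of stalling the core. */
double sum_with_prefetch(const double *a, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (i + AHEAD < n)
            __builtin_prefetch(&a[i + AHEAD], 0 /* read */, 3 /* keep in cache */);
        s += a[i];
    }
    return s;
}
```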

5.
Arbitrary-Stride Pre-Promotion for Non-Uniform Caches
With continuing advances in microelectronic process technology, large-capacity on-chip non-uniform caches (NUCA) have drawn wide attention. This paper proposes an arbitrary-stride pre-promotion technique for non-uniform caches: it optimizes data organization inside the NUCA so that data about to be accessed are placed in cache banks closer to the processor, reducing access latency and improving system performance. The design of arbitrary-stride pre-promotion is presented in detail, its differences from prefetching are analyzed, and a combination of the two is proposed. Evaluated on a full-system simulator with 11 benchmarks from NPB and SPEC2000, arbitrary-stride pre-promotion effectively reduces access latency: with access-prediction tables of 16 and 32 entries, system IPC rises by 4.17% and 4.91% on average, respectively, and combining pre-promotion with prefetching raises IPC by 8.84% and 11.06% on average.
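The abstract does not describe the access-prediction table's internals; a generic stride-predictor entry of the kind such tables usually hold might look like the sketch below (all fields and thresholds assumed). On a confident prediction, the NUCA controller would promote the predicted line toward a bank near the processor, rather than fetching new data from memory, which is the essential difference from prefetching.

```c
#include <stdint.h>

typedef struct {
    uint64_t last_addr;   /* previous address seen in this access stream */
    int64_t  stride;      /* learned stride; arbitrary, not just the next line */
    int      confidence;  /* saturating counter, 0..3 (assumed) */
} PredEntry;

/* Update the predictor with a new access; return the predicted next
 * address (a candidate for promotion toward a near bank), or 0 when
 * the stream is not yet confidently strided. */
uint64_t predict_next(PredEntry *e, uint64_t addr) {
    int64_t d = (int64_t)(addr - e->last_addr);
    if (d == e->stride) {
        if (e->confidence < 3) e->confidence++;
    } else {                  /* stride changed: retrain */
        e->stride = d;
        e->confidence = 0;
    }
    e->last_addr = addr;
    return (e->confidence >= 2) ? addr + e->stride : 0;
}
```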

6.
Combining multithreading with vector processing is an important trend in current microprocessor design. This paper proposes a vector-data memory structure for a multithreaded vector processor in which thread switching hides memory access latency and vector data bypass the L1 to access the L2 cache directly, raising bandwidth. Simulation shows that under the proposed structure memory bandwidth grows linearly with the number of threads, and vector-data access bandwidth is clearly higher than scalar-data access bandwidth.

7.
High-Bandwidth Memory Access Pipelines for General-Purpose Processors
Memory access speed has long lagged far behind processor compute speed, and this ever more severe gap constrains further processor gains. Reducing load-to-use latency is the key to improving memory access performance; with other factors fixed, widening the memory access path is the most effective way to reduce it, but more bandwidth means more complex hardware logic along the access path and therefore more power. Starting from the inherent memory access characteristics of programs, this work explores the design and optimization space of high-bandwidth access pipelines, identifies regularities in program access behavior, and derives from them a low-complexity, low-latency, low-power design. The approach greatly simplifies high-bandwidth pipeline design, reduces critical-path delay and power, and guided the memory system design of the Godsonx processor: for a 1.7% increase in total processor area, access-pipeline bandwidth is doubled and overall processor performance improves by 8.6% on average.

8.
Big-data applications such as large-scale sorting, search engines, and streaming media use resources poorly on latency-oriented multi-core/many-core processors: L1 hit rates are high, L2/L3 hit rates are low, and enlarging the LLC barely improves IPC. Targeting this low cache utilization, this paper analyzes the memory access behavior of big-data applications and proposes two many-core cache designs, each with only a single cache level: a Share structure with a fully shared cache and a Partition structure with a partially shared cache. Evaluation shows that both designs save substantial chip area while adding little access latency; at low cache capacities Partition outperforms Share, while at high capacities Share gradually overtakes Partition. Because the capacity available to each core of a many-core processor is limited, the Partition structure holds a certain advantage.

9.
The local data share (LDS) on the domestically developed heterogeneous processor DCU (deep computing unit) is a low-latency, high-bandwidth, explicitly addressed memory. OpenMP on this heterogeneous system provides no programming interface for LDS access, so the LDS hardware is not exploited for efficient data access. To address this, this work studies the OpenMP Offload execution modes on the DCU platform, LDS allocation methods, and the instruction forms specific to LDS access, and implements manual support for LDS access. On top of this optimization, LDS access is further automated for the different OpenMP Offload execution modes, forming an efficient memory access strategy for the domestic heterogeneous platform. On the PolyBench benchmark suite, the manual and automatic optimizations achieve an average speedup of 2.60 in single-threaded mode; manual optimization reaches an average speedup of 1.38 in multithreaded non-SPMD mode, and automatic optimization reaches 1.11 in multithreaded SPMD mode. The results show that automatic and manual LDS support help accelerate heterogeneous OpenMP programs.
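The paper's own directives are not shown in the abstract; in standard OpenMP 5.x, the closest portable way to request LDS-like team-local storage is the predefined `omp_pteam_mem_alloc` allocator, sketched below. Whether a given DCU toolchain actually maps this allocator to the hardware LDS is an assumption, not something the abstract states.

```c
#include <omp.h>

#define TILE 256

/* Sketch: stage a reused tile in team-local memory. omp_pteam_mem_alloc
 * is a standard OpenMP 5.x allocator for storage shared within a team;
 * mapping it to the DCU's LDS is an assumption about the toolchain.
 * Assumes n % TILE == 0 for brevity. */
void scale_tiles(float *a, int n, float s) {
    #pragma omp target teams distribute map(tofrom: a[0:n])
    for (int t = 0; t < n / TILE; t++) {
        float tile[TILE];
        #pragma omp allocate(tile) allocator(omp_pteam_mem_alloc)
        #pragma omp parallel for
        for (int i = 0; i < TILE; i++)
            tile[i] = a[t * TILE + i];      /* one bulk load into fast memory */
        #pragma omp parallel for
        for (int i = 0; i < TILE; i++)
            a[t * TILE + i] = tile[i] * s;  /* reuse from LDS-like storage */
    }
}
```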

10.
Matrix multiplication, a key building block of high-performance computing, is a typical application that is both compute- and memory-intensive, so optimizing its performance matters greatly for general-purpose processors. This paper proposes a performance model that predicts the execution time of matrix multiplication on a general-purpose processor. The model captures the relationship between execution time and architectural parameters such as functional units, memory bandwidth, and register count, and can guide architectural optimization toward balancing compute and memory capability. Based on the model, theoretical lower bounds are derived for the register count and memory bandwidth of an optimized general-purpose processor. The model is validated on the Godson-3B platform; experiments show a prediction accuracy above 95% for matrix multiplication execution time. An optimization of the Godson-3B structure based on the model reduces matrix multiplication execution time by about 50%.
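The abstract gives no formulas; a common shape for such a model (an assumed roofline-style form, not the paper's exact expression) bounds execution time by whichever of compute and memory traffic dominates. For an N x N multiply with peak flop rate F, memory bandwidth B, word size w, and blocking factor b:

```latex
% Assumed roofline-style form (not the paper's exact model):
% 2N^3 flops; blocking by b cuts memory traffic to roughly 2N^3 w / b bytes.
T(N) \;\gtrsim\; \max\!\left( \frac{2N^3}{F},\; \frac{2N^3\,w/b}{B} \right),
\qquad \text{compute-bound} \iff b \ge \frac{F\,w}{B}.
```

Under this reading, a register lower bound follows because a b x b register block needs about b^2 registers, so the register file must hold on the order of (Fw/B)^2 values for the multiply to stay compute-bound.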

11.
Cache-only memory access (COMA) multiprocessors support scalable coherent shared memory with a uniform memory access programming model. The local portion of shared memory associated with a processor is organized as a cache. This cache-based organization of memory results in long remote memory access latencies. Latency-hiding mechanisms can reduce effective remote memory access latency by making data present in a processor's local memory by the time the data are needed. In this paper we study the effectiveness of latency-hiding mechanisms on the KSR2 multiprocessor in improving the performance of three programs. The communication patterns of each program are analyzed and the mechanisms for latency hiding are applied. Results from a 52-processor system indicate that these mechanisms hide a significant portion of the latency of remote memory accesses. The results also quantify benefits in overall application performance. An earlier version of this paper was presented at the 1995 International Conference on Parallel Processing Techniques and Applications.

12.
A discussion is presented of the use of dynamic storage schemes to improve parallel memory performance during three important classes of data accesses: vector accesses in which multiple strides are used to access a single vector, block accesses, and constant-geometry FFT accesses. The schemes investigated are based on linear address transformations, also known as XOR schemes. It has been shown that this class of schemes can be implemented more efficiently in hardware and has more flexibility than schemes based on row rotations or other techniques. Several analytical results are shown. These include: quantitative analysis of buffering effects in pipelined memory systems; design rules for storage schemes that provide conflict-free access using multiple strides, blocks, and FFT access patterns; and an analysis of the effects of memory bank cycle time on storage scheme capabilities.
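A concrete XOR scheme (a generic illustration, not the article's specific design): the bank index is formed by XOR-ing two bit fields of the address, so that power-of-two strides, which collide badly under simple interleaving, still rotate through all banks.

```c
#include <stdint.h>
#include <stdio.h>

#define BANK_BITS 3                /* 8 banks (assumed) */
#define NBANKS    (1u << BANK_BITS)

/* Linear (XOR) address transformation: bank = low index bits XOR the
 * next field of higher bits. */
static unsigned bank_of(uint64_t word_addr) {
    unsigned lo = (unsigned)(word_addr & (NBANKS - 1));
    unsigned hi = (unsigned)((word_addr >> BANK_BITS) & (NBANKS - 1));
    return lo ^ hi;
}

int main(void) {
    /* Stride 8 maps every access to bank 0 under simple interleaving
     * (addr % 8), but cycles through all 8 banks under the XOR scheme. */
    for (uint64_t a = 0; a < 8 * NBANKS; a += NBANKS)
        printf("addr %3llu -> bank %u\n", (unsigned long long)a, bank_of(a));
    return 0;
}
```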

13.
Much research has focused on reducing and/or tolerating remote memory access latencies on distributed-memory parallel computers. Caching remote data is intended to reduce average access latency by handling as many remote memory accesses as possible using local copies of the data in the cache. Data-flow and multithreaded approaches help programs tolerate the latency of remote memory accesses by allowing processors to do other work while remote operations take place. The thread migration technique described here is a multithreaded architecture where threads migrate to remote processors that contain data they need. By exploiting access locality, the threads often use several data items from that processor before migrating to other processors for more data. Because the threads migrate in search of data, the approach is called Nomadic Threads. A prototype runtime system has been implemented on the CM5 and is portable to other distributed memory parallel computers.

14.
Accesses Per Cycle (APC), Concurrent Average Memory Access Time (C-AMAT), and Layered Performance Matching (LPM) are three memory performance models that consider both data locality and memory access concurrency. The APC model measures the throughput of a memory architecture and therefore reflects the quality of service (QoS) of a memory system. The C-AMAT model provides a recursive expression for the memory access delay and therefore can be used for identifying the potential bottlenecks in a memory hierarchy. The LPM method transforms a global memory system optimization into localized optimizations at each memory layer by matching the data access demands of the applications with the underlying memory system design. These three models have been proposed separately through prior efforts. This paper reexamines the three models under one coherent mathematical framework. More specifically, we present a new memory-centric view of data accesses. We divide the memory cycles at each memory layer into four distinct categories and use them to recursively define the memory access latency and concurrency along the memory hierarchy. This new perspective offers new insights with a clear formulation of the memory performance considering both locality and concurrency. Consequently, the performance model can be easily understood and applied in engineering practices. As such, the memory-centric approach helps establish a unified mathematical foundation for model-driven performance analysis and optimization of contemporary and future memory systems.
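For reference, the published forms of the first two metrics, restated here from the earlier C-AMAT literature rather than from this paper, so treat the notation as an assumption:

```latex
% A = accesses, T_M = memory-active cycles, H = hit time,
% C_H = hit concurrency, pMR = pure miss rate,
% pAMP = pure average miss penalty, C_M = pure-miss concurrency.
APC = \frac{A}{T_M}, \qquad
C\text{-}AMAT = \frac{1}{APC} = \frac{H}{C_H} + pMR \cdot \frac{pAMP}{C_M}.
```

The recursion the abstract mentions comes from expressing the miss-penalty term at one layer in terms of the C-AMAT of the layer below it.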

15.
Parallel Computing, 2007, 33(1): 2-20
In multiprocessor systems, interconnection network design is critical for overall system performance. Among the popular interconnection networks, unidirectional ring-based networks have been one of the popular choices for high-performance, large-scale shared-memory multiprocessor systems. In this paper, we propose "Torus Ring", a modified version of the two-level hierarchical ring. The Torus Ring has the same complexity as the hierarchical ring; the only difference is the way it connects the local rings. Compared to the hierarchical ring, the Torus Ring exploits the memory access locality of application programs more efficiently. It has an advantage over the hierarchical ring when the destination of a packet is an adjacent local ring, especially the backward adjacent local ring. Although we assume that the destinations of network packets are uniformly distributed across the processing nodes, the average number of hops in the Torus Ring is equal to that of the hierarchical ring; the performance gain of the Torus Ring is expected to grow with the memory access locality found in real parallel programming environments. In the simulation results, the latency of the interconnection network is reduced by up to 19% and the execution time by up to 10% at moderate ring utilization ratios.

16.
The multi-level on-chip memory hierarchy of the Sunway (Shenwei) many-core processor is an important structure for mitigating the many-core "memory wall". The fully software-managed SPM and the on-chip RMA communication mechanism offer many opportunities for raising application performance, but they also pose major challenges for developing, optimizing, and porting applications. To fully exploit the on-chip memory hierarchy while easing the programmer's optimization burden, this paper proposes a compiler optimization that fuses memory access and communication across the hierarchy. The method first defines fusion compiler directives that convey high-level program information to the compiler. It then builds a cost/benefit model for the optimization and an iterative framework for solving heuristic loop-optimization schemes, with the compiler deriving the loop-optimization scheme and performing the code transformation. Through compiler-generated DMA and RMA bulk transfers, core data with high access latency in lower levels of the hierarchy are staged in bulk into higher, low-latency levels. Experiments on three representative test cases show that the proposed optimization matches hand-tuned performance and clearly outperforms the unoptimized versions.
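The flavor of the compiler-generated bulk transfers can be sketched as classic double buffering. `dma_get` and `dma_wait` below are hypothetical stand-ins, given toy synchronous bodies so the sketch compiles; the abstract does not name the real DMA/RMA interface.

```c
#include <stddef.h>
#include <string.h>

#define TILE 1024
static float spm_buf[2][TILE];   /* two tiles in the fast scratchpad (SPM) */

/* Toy synchronous stand-ins for the platform's asynchronous bulk-copy
 * primitives (hypothetical names; the abstract gives no interface). */
static void dma_get(void *dst, const void *src, size_t n, int tag) {
    (void)tag; memcpy(dst, src, n);
}
static void dma_wait(int tag) { (void)tag; }

/* Double buffering: while the core computes on one SPM tile, the DMA
 * engine fills the other, overlapping bulk transfer with computation. */
float process(const float *a, size_t ntiles) {
    float acc = 0.0f;
    dma_get(spm_buf[0], a, TILE * sizeof(float), 0);      /* prime tile 0 */
    for (size_t t = 0; t < ntiles; t++) {
        int cur = (int)(t & 1);
        if (t + 1 < ntiles)                                /* fetch next tile early */
            dma_get(spm_buf[!cur], a + (t + 1) * TILE,
                    TILE * sizeof(float), !cur);
        dma_wait(cur);                                     /* current tile resident */
        for (int i = 0; i < TILE; i++)
            acc += spm_buf[cur][i];                        /* compute from SPM */
    }
    return acc;
}
```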

17.
In the past decade, advances in the speed of commodity CPUs have far out-paced advances in memory latency. Main-memory access is therefore increasingly a performance bottleneck for many computer applications, including database systems. In this article, we use a simple scan test to show the severe impact of this bottleneck. The insights gained are translated into guidelines for database architecture, in terms of both data structures and algorithms. We discuss how vertically fragmented data structures optimize cache performance on sequential data access. We then focus on equi-join, typically a random-access operation, and introduce radix algorithms for partitioned hash-join. The performance of these algorithms is quantified using a detailed analytical model that incorporates memory access cost. Experiments that validate this model were performed on the Monet database system. We obtained exact statistics on events such as TLB misses and L1 and L2 cache misses by using hardware performance counters found in modern CPUs. Using our cost model, we show how the carefully tuned memory access pattern of our radix algorithms makes them perform well, which is confirmed by experimental results.
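A minimal sketch of one radix-partitioning pass (a generic illustration, not Monet's implementation): keys are scattered into 2^B clusters by B bits of the key or its hash. Keeping 2^B modest bounds the number of concurrently written output regions, which is what keeps TLB and cache misses low; large tables use several passes on successive bit fields until each cluster fits in cache for the join.

```c
#include <stdint.h>
#include <stddef.h>

#define RADIX_BITS 6                   /* 64 partitions per pass (assumed) */
#define FANOUT     (1u << RADIX_BITS)

/* One radix pass: scatter n keys into FANOUT clusters by RADIX_BITS of
 * the key starting at `shift`. A small FANOUT bounds the number of
 * active output pages, avoiding TLB and cache thrashing. */
void radix_pass(const uint64_t *in, uint64_t *out, size_t n, int shift) {
    size_t count[FANOUT] = {0}, start[FANOUT];
    for (size_t i = 0; i < n; i++)                 /* histogram */
        count[(in[i] >> shift) & (FANOUT - 1)]++;
    for (size_t c = 0, p = 0; c < FANOUT; c++) {   /* prefix sums -> offsets */
        start[c] = p;
        p += count[c];
    }
    for (size_t i = 0; i < n; i++) {               /* scatter */
        unsigned c = (unsigned)((in[i] >> shift) & (FANOUT - 1));
        out[start[c]++] = in[i];
    }
}
```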

18.
Modern processors such as Tilera's Tile64, Intel's Nehalem, and AMD's Opteron are migrating memory controllers (MCs) on-chip, while maintaining a large, flat memory address space. This trend to utilize multiple MCs will likely continue and a core or socket will consequently need to route memory requests to the appropriate MC via an inter- or intra-socket interconnect fabric similar to AMD's HyperTransport™ or Intel's QuickPath Interconnect™. Such systems are therefore subject to non-uniform memory access (NUMA) latencies because of the time spent traveling to remote MCs. Each MC will act as the gateway to a particular region of the physical memory. Data placement will therefore become increasingly critical in minimizing memory access latencies. Increased competition for memory resources will also increase the memory access latency variation in future systems. Proper allocation of workload data to the appropriate MC will be important in decreasing the variation and average latency when servicing memory requests. The allocation strategy will need to be aware of queuing delays, on-chip latencies, and row-buffer hit-rates for each MC. In this paper, we propose dynamic mechanisms that take these factors into account when placing data in appropriate slices of physical memory. We introduce adaptive first-touch page placement, and dynamic page-migration mechanisms to reduce DRAM access delays for multi-MC systems. We also introduce policies that can handle data placement in memory systems that have regions with heterogeneous properties. The proposed policies yield average performance improvements of 6.5% for adaptive first-touch page-placement, and 8.9% for a dynamic page-migration policy for a system with homogeneous DRAM DIMMs. We also show improvements in systems that contain DIMMs with different performance characteristics.
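The baseline that the paper's adaptive policy builds on is ordinary first-touch placement, which mainstream OSes such as Linux already apply: a page is physically allocated on the NUMA node of the core that first writes it. The C/OpenMP sketch below shows only that baseline effect; the adaptive and migration policies are the paper's contribution and are not reproduced here.

```c
#include <stdlib.h>

/* Under a first-touch policy, each page lands on the NUMA node of the
 * thread that first writes it. Initializing in parallel with the same
 * static schedule as the later compute loops therefore co-locates each
 * chunk of the array with the core (and memory controller) that will
 * use it, instead of crowding all pages onto one node. */
double *alloc_distributed(size_t n) {
    double *a = malloc(n * sizeof *a);
    #pragma omp parallel for schedule(static)
    for (size_t i = 0; i < n; i++)
        a[i] = 0.0;   /* the first touch fixes the page's home node */
    return a;
}
```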

19.
Most existing analytical models for memory interference generally assume random bank selection for each memory access. In vector computers, however, memory accesses are typically regularly patterned with a number of data items being accessed concurrently from different banks. Very little is known about the queueing behavior of memory interferences in multiple stream vector accesses. This paper presents an analytical model for memory interferences due to vector accesses in multiple vector processor systems. The model captures the effects of both bank conflicts among elements within one vector access stream and conflicts among multiple vector access streams on system performance. The model is based on a closed queueing network assuming an ideal interconnection network. An approximation technique is proposed to solve the memory queueing system that serves customers in a complicated way (non-FIFO). We also carry out extensive simulation experiments to study memory interference and validate our analytical model. Simulation results and analytical results are in a very good agreement, indicating that the model is very accurate. We further validate our analysis by comparing the numerical results obtained from our analytical model with those measurement results that were published by other researchers. Based on our analytical model and simulations, we carry out performance evaluation of the multiple vector processor systems. Our numerical results show that memory access conflicts pose a severe limitation on the number of useful processors in the system, implying that memory system design is essential to high-performance computing.

20.
Both hardware and software prefetching have been shown to be effective in tolerating the large memory latencies inherent in shared-memory multiprocessors; however, both types of prefetching have their shortcomings. While software schemes require less hardware support than hardware schemes, they must generate address calculation instructions and a prefetch instruction for each datum that needs to be prefetched. Hardware schemes, however, must become progressively more complex to be able to compute data access strides and to increase the prefetching lookahead. In this paper, we propose an integrated hardware/software prefetching method that uses simple hardware that can handle most data accesses and software prefetching for the few remaining accesses. A compile time algorithm analyzes the access streams formed by array references and determines sequences of consecutive memory accesses to an access stream that can be prefetched by the hardware mechanism. This analysis is based on the relative memory locations of consecutive accesses to an access stream and the number of intervening data references between consecutive accesses to an access stream. In addition, the prefetching lookahead can be set separately for each access stream. Our approach yields an effective scheme that minimizes both CPU overhead and hardware costs. Execution-driven simulations show our method to be very effective.
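The compile-time split the abstract describes can be condensed into a small classification rule (a paraphrase of the stated criteria, with all names and thresholds assumed): streams whose consecutive accesses are nearby in memory and not separated by too many intervening references go to the simple hardware prefetcher; the rest get explicit software prefetch instructions.

```c
#include <stdlib.h>

typedef enum { HW_PREFETCH, SW_PREFETCH } Scheme;

/* Decide, per array-reference access stream, whether the simple hardware
 * prefetcher can cover it or the compiler must emit software prefetches.
 * The two thresholds model the hardware's limits (assumed parameters). */
Scheme classify_stream(long stride_bytes, int intervening_refs,
                       long hw_max_stride, int hw_max_interleave) {
    if (labs(stride_bytes) <= hw_max_stride &&
        intervening_refs <= hw_max_interleave)
        return HW_PREFETCH;   /* regular, dense stream: cheap hardware case */
    return SW_PREFETCH;       /* irregular or sparse: emit prefetch instructions */
}
```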
