Similar Documents
19 similar documents retrieved (search time: 140 ms)
1.
Research on TLB Design Methods for Embedded Processors   Total citations: 3 (self-citations: 1, by others: 3)
Taking the processor's TLB (Translation Look-aside Buffer) as its object of study, this paper explores energy-efficient design methods for the TLB in embedded processors. Using the Loongson-1, a representative real processor, as the design model, and drawing on experimental analysis of power, area, critical path, and performance, it proposes a novel low-power TLB design. In the improved design, the area of the TLB's RAM portion is reduced by 50% and its power by 92.7%; the area of the whole TLB is reduced by 23.7% and its power by 28.5%, with almost no increase in circuit delay and no loss of processor performance. This shows the improved scheme to be practical and effective.

2.
High-performance processors commonly integrate large, structurally complex first-level caches on chip to improve performance, but as cache capacity and complexity grow, the latency and power of each cache access rise noticeably. Building on the store queue, this paper proposes a method that lowers power and latency by reducing the number of cache accesses: the store queue buffers the data of load/store instructions, and when it is not full, its free entries temporarily hold the data of completed accesses, raising the reuse rate of consecutively accessed data and cutting the number of cache accesses. Simulation results show that, at the cost of a small amount of extra control logic, the method significantly reduces cache accesses, lowering cache power, shortening memory latency, and speeding up execution.
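A minimal C sketch of the reuse idea described above: a load checks the store queue before touching the cache, and completed data is parked in free entries for later reuse. All names and sizes (`SQ_SIZE`, `cache_read`, the `reuse` flag) are our assumptions, not the paper's; real hardware would search all entries associatively in parallel rather than with a loop.

```c
#include <stdint.h>
#include <stdbool.h>

#define SQ_SIZE 16  /* store-queue entries (size is our assumption) */

struct sq_entry {
    uint64_t addr;   /* address of the buffered data            */
    uint64_t data;   /* buffered load/store data                */
    bool     live;   /* entry holds an in-flight store          */
    bool     reuse;  /* free entry parking completed-access data */
};

static struct sq_entry sq[SQ_SIZE];

/* Stand-in for the real (slow, power-hungry) cache access. */
static uint64_t cache_read(uint64_t addr) { return addr ^ 0xDEADBEEFULL; }

/* A load first searches the store queue; only on a miss does it touch
 * the cache, and the result is parked in a free entry so later loads
 * to the same address reuse it without another cache access. */
uint64_t load(uint64_t addr)
{
    for (int i = 0; i < SQ_SIZE; i++)
        if ((sq[i].live || sq[i].reuse) && sq[i].addr == addr)
            return sq[i].data;            /* cache access avoided */

    uint64_t data = cache_read(addr);     /* unavoidable access   */

    for (int i = 0; i < SQ_SIZE; i++)     /* park in a free entry */
        if (!sq[i].live && !sq[i].reuse) {
            sq[i] = (struct sq_entry){ addr, data, false, true };
            break;
        }
    return data;
}
```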

3.
Research on a High-Bandwidth Memory-Access Pipeline for General-Purpose Processors   Total citations: 1 (self-citations: 0, by others: 1)
The speed of memory access has fallen far behind the speed of processor computation, and this worsening memory-speed problem severely constrains further gains in processor performance. Reducing load-to-use latency is the key to better memory-access performance, and with other factors fixed, widening the memory-access path is the most effective way to reduce it; but more bandwidth means more complex hardware logic on the access path, which inevitably raises its power. Starting from an analysis of programs' inherent memory-access characteristics, this work explores the design and optimization space of high-bandwidth memory-access pipelines, characterizes the regularities of program memory behavior, and uses them to derive a low-complexity, low-latency, low-power solution for high-bandwidth access pipelines. The work greatly simplifies the design of such pipelines and reduces critical-path delay and power, and it guided the memory-access design of the Godsonx processor: with only a 1.7% increase in overall processor area, the bandwidth of the memory-access pipeline was doubled and overall processor performance improved by 8.6% on average.

4.
A Prefetch Policy Combined with Miss-Queue State   Total citations: 1 (self-citations: 0, by others: 1)
As the gap between memory speed and processor speed grows ever wider, memory performance has become the bottleneck of overall system performance. From an analysis of instruction-cache and data-cache miss behavior, this paper proposes a prefetch policy that takes the state of the memory-access miss queue into account. The policy preserves the order of instruction and data accesses, which aids the extraction of prefetch streams, and it separates instruction-stream from data-stream prefetching so the two cannot evict each other. When choosing the moment to launch a prefetch, it considers not only whether the bus is currently idle but also the state of the miss queue, limiting interference with the processor's normal memory requests; a stream-filtering mechanism raises prefetch accuracy and lowers the bandwidth that prefetching demands. Results show that with this policy the processor's average memory latency falls by 30% and the IPC of the SPEC CPU2000 programs rises by 8.3% on average.
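A small C sketch of the two decisions the abstract describes: confirming a prefetch stream with a second sequential miss (the stream filter) and gating prefetch issue on bus idleness plus miss-queue slack. The block size and threshold are illustration-only assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

#define BLOCK_BYTES    64  /* cache-block size (assumed)                */
#define MSHR_THRESHOLD  4  /* miss-queue slack below which we prefetch  */

static uint64_t last_miss_addr;

/* Stream filter: a stream is confirmed only after a second sequential
 * miss, so one-off misses do not trigger useless prefetches. */
bool stream_confirmed(uint64_t miss_addr)
{
    bool sequential = (miss_addr == last_miss_addr + BLOCK_BYTES);
    last_miss_addr = miss_addr;
    return sequential;
}

/* Issue timing: prefetch only when it cannot delay demand requests,
 * i.e. the bus is idle AND the miss queue has enough free slack. */
bool should_issue_prefetch(bool bus_idle, int miss_queue_occupancy)
{
    return bus_idle && miss_queue_occupancy < MSHR_THRESHOLD;
}
```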

5.
Chip multiprocessors (CMPs) have gradually replaced traditional superscalar processors as the mainstream structure of integrated-circuit design, but the memory wall remains a hard design problem. CMPs ease memory pressure with large last-level caches. Against the background of software moving to multithreaded parallel programming, and targeting the cache-access characteristics of multithreaded applications on multicore processors, this paper proposes an optimization algorithm for private last-level caches: a hardware buffer records the addresses of processor memory accesses, enabling a mechanism that passes shared data between caches and effectively reducing cache-miss overhead. Experimental results show that with hardware overhead below 0.1% of the cache, the test cases achieve an average speedup of 1.13.

6.
DOOC: A Software-Hardware Cooperatively Managed Cache That Effectively Eliminates Thrashing   Total citations: 3 (self-citations: 0, by others: 3)
As the bridge across the huge speed gap between processor and main memory, the cache has become indispensable in modern processors. Research shows that a traditional cache, managed by hardware alone with a fixed cache policy and coherence protocol, has difficulty adapting to the diversity of data-access patterns in programs, easily causing cache thrashing and hurting performance. This paper proposes a new software-hardware cooperatively managed cache, the data-object oriented cache (DOOC). DOOC dynamically allocates cache segments to a program's data objects and dynamically varies segment capacity, intra-segment associativity, block size, and coherence protocol, adapting to the diversity of data-access patterns. The paper also presents compiler techniques for DOOC's software management and a data-object-oriented prefetch mechanism. DOOC's hardware cost is evaluated with CACTI and on an experimental platform based on the LEON3 processor, verifying its hardware feasibility, and its performance is tested by software simulation on both single-core and multicore platforms. On a single core, across 15 benchmarks, DOOC reduces the miss rate by 44.98% on average (93.02% maximum) compared with a traditional cache, with an average speedup of 1.20 (2.36 maximum); on a 4-core platform running the OpenMP version of NPB, the miss rate falls by 49.69% on average (73.99% maximum).
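A sketch of what a per-object segment descriptor and a compiler-inserted binding call could look like, covering exactly the four parameters the abstract says DOOC varies per data object. The interface is hypothetical; the paper defines its own compiler support and hardware registers.

```c
#include <stddef.h>

/* Coherence protocols a segment might select (assumed set). */
enum coherence { COH_NONE, COH_WRITE_THROUGH, COH_WRITE_BACK_MESI };

/* One DOOC segment: a data object is bound to a segment whose four
 * parameters (capacity, associativity, block size, protocol) are the
 * ones the abstract says DOOC varies dynamically. */
struct dooc_segment {
    void          *obj_base;    /* data object served by this segment */
    size_t         obj_size;
    size_t         capacity;    /* segment capacity in bytes   */
    unsigned       assoc;       /* intra-segment associativity */
    unsigned       block_size;  /* line size in bytes          */
    enum coherence proto;       /* per-segment coherence       */
};

/* Hypothetical compiler-inserted call: bind `obj` to a segment tuned
 * for its access pattern (e.g., streaming => large blocks and low
 * associativity; pointer-chasing => small blocks, high associativity). */
int dooc_bind(struct dooc_segment *seg, void *obj, size_t size,
              size_t cap, unsigned assoc, unsigned bsize,
              enum coherence proto)
{
    seg->obj_base   = obj;
    seg->obj_size   = size;
    seg->capacity   = cap;
    seg->assoc      = assoc;
    seg->block_size = bsize;
    seg->proto      = proto;
    return 0;  /* a real implementation would program segment registers */
}
```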

7.
An Adaptive Cache Write-Allocation Policy   Total citations: 1 (self-citations: 0, by others: 1)
The effective bandwidth a processor can deliver is currently the key factor limiting processor performance. Through an analysis of cache write-miss behavior, this paper proposes a new cache write-miss handling policy that improves bandwidth utilization: adaptive write allocation. The policy identifies fully modified cache blocks in the memory-access miss queue, applies no-write-allocate to them, and can adaptively switch back to write allocation. Compared with traditional write-miss policies, adaptive write allocation has low hardware cost, avoids unnecessary data transfers, reduces cache pollution, and lowers the frequency of memory-management queue stalls. Results show that with this policy the bandwidth of the STREAM benchmarks improves by 62.6% on average and the IPC of the SPEC CPU2000 programs improves by 5.9% on average.
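A C sketch of the core test, under the assumption of 64-byte blocks tracked by a per-byte write mask in the miss queue: a block whose every byte will be overwritten gains nothing from a fill, so no-write-allocate applies; any other write miss falls back to write allocation.

```c
#include <stdint.h>
#include <stdbool.h>

#define BLOCK_BYTES 64  /* 64-byte block => 64-bit byte mask (assumed) */

struct wmiss_entry {
    uint64_t block_addr;
    uint64_t byte_mask;  /* one bit per byte written while queued */
};

/* Fully modified block: every byte will be overwritten, so fetching
 * the old contents from memory is pure wasted bandwidth. */
static bool fully_modified(const struct wmiss_entry *e)
{
    return e->byte_mask == ~0ULL;  /* all 64 byte-bits set */
}

/* Adaptive policy: no-write-allocate for fully modified blocks,
 * write-allocate otherwise (the adaptive switch in the abstract). */
bool should_write_allocate(const struct wmiss_entry *e)
{
    return !fully_modified(e);
}
```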

8.
A Dynamic Implicit Isolation Mechanism for the Shared Cache of On-Chip Many-Core Architectures   Total citations: 2 (self-citations: 0, by others: 2)
Memory bandwidth is the key limit on performance scaling in many-core processors, so it is necessary to design the last level of on-chip cache to be shared by all cores. Isolating conflicting data within the shared cache is key to its performance. This paper proposes a hardware cache-block linking mechanism to isolate data belonging to different threads in a shared cache. The mechanism is evaluated on a cycle-accurate many-core simulator using the Splash-2 suite and bioinformatics workloads. Experimental results show that, compared with a conventional shared cache, cache-block linking reduces the shared cache's conflict-miss rate by about 20% and improves IPC by about 10% on average.

9.
The "memory wall" has become the main obstacle to processor performance scaling, and the cache pollution caused by data preloaded by memory instructions that the core speculatively executes along predicted paths can seriously hurt performance. This paper proposes CSDA, a method for controlling the cache pollution of data preloaded during speculative execution. First, confidence estimation separates out the predicted paths most likely to be wrong. Then, guided by a history table that identifies low-confidence polluting memory instructions, the memory instructions on low-confidence predicted paths are classified as prefetching or polluting; a low-priority load/store queue is created for the polluting ones, and a dedicated pollution data cache stores the polluted data. Simulation shows that in dual-core mode, relative to the baseline, CSDA lowers the L1 D-Cache miss rate by 9%-23% (17% on average) and the L2 miss rate by 1.02%-14.39% (5.67% on average), and raises IPC by 0.19%-5.59% (2.21% on average).
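A sketch of the routing decision as we read the abstract: memory instructions on low-confidence paths that the history table marks as polluting go to a low-priority load/store queue and fill a separate pollution cache. The enums and function shape are our assumptions.

```c
#include <stdbool.h>

enum ls_queue  { LSQ_NORMAL, LSQ_LOW_PRIORITY };
enum fill_dest { FILL_MAIN_CACHE, FILL_POLLUTION_CACHE };

struct route { enum ls_queue q; enum fill_dest dest; };

/* Route a speculative memory instruction. `low_confidence` comes from
 * the branch-confidence estimator; `history_says_polluting` from the
 * low-confidence polluting-instruction history table. */
struct route csda_route(bool low_confidence, bool history_says_polluting)
{
    if (low_confidence && history_says_polluting)
        return (struct route){ LSQ_LOW_PRIORITY, FILL_POLLUTION_CACHE };
    /* prefetch-type or high-confidence: treat as a normal access */
    return (struct route){ LSQ_NORMAL, FILL_MAIN_CACHE };
}
```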

10.
The TLB (Translation Look-Aside Buffer) is the core of the memory management unit's address translation. Research has found, however, that an active TLB can consume about 17% of a microprocessor chip's power, so low-power TLB design has drawn researchers' attention. Detailed analysis and simulation of the memory behavior of classic benchmark suites show that, when page accesses are non-contiguous, statistics on the interval between pages can effectively guide low-power TLB design. Starting from this observation, a low-power TLB design method is proposed. Experimental results show that the improved TLB's on-chip power is clearly reduced.
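A minimal C sketch of collecting the page-interval statistic the abstract relies on: a histogram of distances between successive distinct virtual pages. Page size and histogram range are assumed.

```c
#include <stdint.h>

#define PAGE_SHIFT 12  /* 4 KiB pages (assumed)     */
#define MAX_GAP    64  /* histogram range (assumed) */

static int      primed;
static uint64_t last_vpn;
static uint64_t gap_hist[MAX_GAP + 1];  /* [MAX_GAP] = overflow bucket */

/* Feed every memory access; record the distance between successive
 * *distinct* virtual pages. A distribution dominated by small gaps
 * suggests most translations can be served by a small, low-power
 * structure placed in front of the full TLB. */
void record_access(uint64_t vaddr)
{
    uint64_t vpn = vaddr >> PAGE_SHIFT;
    if (!primed) { primed = 1; last_vpn = vpn; return; }
    if (vpn != last_vpn) {
        uint64_t gap = vpn > last_vpn ? vpn - last_vpn : last_vpn - vpn;
        gap_hist[gap < MAX_GAP ? gap : MAX_GAP]++;
        last_vpn = vpn;
    }
}
```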

11.
Research on an Improved Cache-Sensitive B+ Tree   Total citations: 1 (self-citations: 0, by others: 1)
Wang Chen, Chen Gang, Dong Jinxiang. 《计算机测量与控制》 (Computer Measurement & Control), 2006, 14(11): 1531-1534, 1550
In a main-memory database, the number of processor cache misses strongly affects system performance, and a cache-conscious index can reduce the cache misses incurred by query operations and thereby improve performance. The traditional design sets the node size equal to the cache-line size, on the assumption that this alone minimizes cache misses, but it ignores the impact of TLB misses on system performance. We propose a cache-conscious index, the modified cache-sensitive B+ tree (MCSB+ tree), which accounts for the effects of both cache misses and TLB misses and offers better operation performance than traditional cache-conscious indexes.
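The abstract does not give the MCSB+ node layout, so the sketch below only illustrates the trade-off it balances: each node fills exactly one cache line (one miss per probe), while the nodes of a subtree are packed into a single page so a root-to-leaf walk also touches few TLB entries. Sizes and field choices are ours, not the paper's.

```c
#include <stdint.h>

#define CACHE_LINE 64
#define PAGE_SIZE  4096

/* One node fills exactly one cache line, as in a classic
 * cache-sensitive B+ tree: one probe costs at most one miss. */
struct node {
    uint16_t nkeys;
    uint16_t child_group;  /* index of this node's child group in page */
    uint32_t keys[14];     /* 2 + 2 + 56 + 4 = 64 bytes                */
    uint32_t pad;
};
_Static_assert(sizeof(struct node) == CACHE_LINE, "node must fill one line");

/* TLB-aware refinement (our reading of the abstract): co-locate the
 * nodes of a subtree inside one page, so a traversal crossing several
 * nodes still hits a single TLB entry. */
struct node_page {
    struct node nodes[PAGE_SIZE / CACHE_LINE];  /* 64 nodes per page */
};
```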

12.
To support a global virtual memory space, an architecture must translate virtual addresses dynamically. In current processors, the translation is done in a TLB (translation lookaside buffer), before or in parallel with the first-level cache access. As processor technology improves at a rapid pace and the working sets of new applications grow insatiably, the latency and bandwidth demands on the TLB are difficult to meet, especially in multiprocessor systems, which run larger applications and are plagued by the TLB consistency problem. We describe and compare five options for virtual address translation in the context of distributed shared memory (DSM) multiprocessors, including CC-NUMAs (cache-coherent non-uniform memory access architectures) and COMAs (cache only memory access architectures). In CC-NUMAs, moving the TLB to shared memory is a bad idea because page placement, migration, and replication are all constrained by the virtual page address, which greatly affects processor node access locality. In the context of COMAs, the allocation of pages to processor nodes is not as critical because memory blocks can dynamically migrate and replicate freely among nodes. As the address translation is done deeper in the memory hierarchy, the frequency of translations drops because of the filtering effect. We also observe that the TLB is very effective when it is merged with the shared-memory, because of the sharing and prefetching effects and because there is no need to maintain TLB consistency. Even if the effectiveness of the TLB merged with the shared memory is very high, we also show that the TLB can be removed in a system with address translation done in memory because the frequency of translations is very low.

13.
The Translation Look-aside Buffer (TLB) is a very important part of the hardware support for virtual memory management in high-performance embedded systems. Though small, the TLB is frequently accessed, and therefore not only consumes significant energy but is also one of the important thermal hot-spots in the processor. Recently, several circuit and microarchitectural implementations of TLBs have been proposed to reduce TLB power. One simple yet effective TLB design for power reduction is the Use-Last TLB architecture proposed in IEEE J. Solid-State Circuits, 1190–1199 (2004), which reduces power consumption when the last page is accessed again. In this work, we develop code transformation techniques to reduce page switchings in data cache accesses and propose an efficient page-aware code placement technique to enhance the energy reduction achieved by the Use-Last TLB architecture for instruction cache accesses. Our comprehensive page-switch reduction algorithm yields an average 39% reduction in data-TLB page switchings, and our code placement heuristic an average 76% reduction in instruction-TLB page switchings, with negligible performance impact on benchmarks from the MiBench, Multimedia, DSPStone, and BDTI suites. The reduced page-switch count achieves equivalent power savings above and beyond the reduction delivered by the Use-Last TLB implementation itself.
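Because a Use-Last TLB only pays lookup power when the accessed page changes, the objective both transformations minimize reduces to the number of page switches in the access trace. A small C sketch of that metric (4 KiB pages assumed):

```c
#include <stdint.h>
#include <stddef.h>

#define PAGE_SHIFT 12  /* assumed 4 KiB pages */

/* Count transitions where the current access falls on a different
 * page than the previous one; code/data placement that lowers this
 * count directly lowers Use-Last TLB lookup energy. */
size_t count_page_switches(const uint64_t *addr, size_t n)
{
    size_t switches = 0;
    for (size_t i = 1; i < n; i++)
        if ((addr[i] >> PAGE_SHIFT) != (addr[i - 1] >> PAGE_SHIFT))
            switches++;
    return switches;
}
```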

14.
15.
To address the problem that rising probabilities of hard and soft errors prevent caches from operating correctly at low voltage, this paper proposes a cache structure based on hybrid error-correcting codes. Exploiting the fact that dirty data must be kept correct by the processor's cache while clean data can be recovered from off chip, the structure splits the cache into two regions, one protected by a multi-bit ECC and one by a single-bit ECC. A new cache replacement policy keeps dirty data in the multi-bit-ECC region, guaranteeing it stronger protection and thus reliable cache operation at low voltage. Experiments on the EEMBC benchmarks show that the design runs correctly at 590 mV and, compared with VS-ECC, the latest work in this area, stores 23.6% less ECC information while improving performance by 5.9%.
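One way (our construction, consistent with but not specified by the abstract) to maintain the policy's invariant that dirty lines live only in the multi-bit-ECC ways: a write that hits a line in the weaker region swaps it with a clean line from the strong region.

```c
#include <stdbool.h>

#define WAYS        8
#define STRONG_WAYS 2  /* ways protected by multi-bit ECC (assumed split) */

struct line { unsigned tag; bool valid, dirty; };
struct set  { struct line way[WAYS]; };

/* Invariant: dirty lines reside only in ways [0, STRONG_WAYS), the
 * multi-bit-ECC region; clean lines may use any way. On a write to a
 * line sitting in the weak region, swap it with a strong-region line
 * (which is clean by the invariant, so demoting it is safe). */
void on_write(struct set *s, int hit_way)
{
    if (hit_way < STRONG_WAYS) {      /* already strongly protected */
        s->way[hit_way].dirty = true;
        return;
    }
    int victim = 0;                   /* a real design would pick the
                                         LRU strong-region line      */
    struct line tmp = s->way[victim];
    s->way[victim]  = s->way[hit_way];
    s->way[hit_way] = tmp;
    s->way[victim].dirty = true;
}
```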

16.
GPUs are widely used in modern high-performance computing systems. To reduce the burden on GPU programmers, the operating system and GPU hardware provide extensive support for shared virtual memory, which lets the GPU and CPU share the same virtual address space. Unfortunately, the current SIMT execution model of GPUs poses great challenges for virtual-to-physical address translation on the GPU side, mainly because of the huge number of virtual addresses generated simultaneously and the poor locality of those addresses; the resulting excess of TLB accesses raises the TLB miss ratio. As an attractive solution, the Page Walk Cache (PWC) has received wide attention for its ability to reduce the memory accesses caused by TLB misses. The current PWC mechanism, however, suffers from heavy redundancy, which significantly limits its efficiency. In this paper, we first investigate the causes of this issue by evaluating PWC performance on typical GPU benchmarks. We find that repeated L4 and L3 indices of virtual addresses create redundancy in the PWC, while the low locality of L2 indices causes a low PWC hit ratio. Based on these observations, we propose a new PWC structure, the Compressed Page Walk Cache (CPWC), to remove the redundancy burdening the current PWC. CPWC can be organized in either direct-mapped or set-associative mode. Experimental results show that CPWC holds 3 times as many page table entries as TPC, improves the L2-index hit ratio by 38.3% over PWC, and reduces page-table memory accesses by 26.9%; the average number of memory accesses per TLB miss drops to 1.13, and average IPC improves by 25.3%.
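The redundancy described above follows from how a 4-level radix page table splits a virtual address. A self-contained C illustration (standard x86-64-style 9-bit indices, not code from the paper) showing that two addresses 2 MiB apart share their L4 and L3 indices:

```c
#include <stdint.h>
#include <stdio.h>

/* Four 9-bit indices above a 12-bit page offset. Nearby virtual
 * addresses share their L4/L3 (often L2) indices, which is exactly
 * the redundancy the paper reports in conventional PWC entries. */
struct pt_indices { unsigned l4, l3, l2, l1; };

struct pt_indices split_vaddr(uint64_t va)
{
    return (struct pt_indices){
        .l4 = (va >> 39) & 0x1FF,
        .l3 = (va >> 30) & 0x1FF,
        .l2 = (va >> 21) & 0x1FF,
        .l1 = (va >> 12) & 0x1FF,
    };
}

int main(void)
{
    /* two addresses 2 MiB apart: same l4/l3, different l2 */
    struct pt_indices a = split_vaddr(0x00007f1234400000ULL);
    struct pt_indices b = split_vaddr(0x00007f1234600000ULL);
    printf("a: %u %u %u %u\n", a.l4, a.l3, a.l2, a.l1);
    printf("b: %u %u %u %u\n", b.l4, b.l3, b.l2, b.l1);
    return 0;
}
```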

17.
A Low Power TLB Structure for Embedded Systems   Total citations: 1 (self-citations: 0, by others: 1)
We present a new two-level TLB (translation look-aside buffer) architecture that integrates a 2-way banked filter TLB with a 2-way banked main TLB. The objective is to reduce power consumption in embedded processors by distributing the accesses to TLB entries across the banks in a balanced manner. First, an advanced filtering technique is devised to reduce access power by adopting a sub-bank structure. Second, a bank-associative structure is applied to each level of the TLB hierarchy. Simulation results show that the Energy*Delay product can be reduced by about 40.9% compared to a fully associative TLB, 24.9% compared to a micro-TLB with 4+32 entries, and 12.18% compared to a micro-TLB with 16+32 entries.
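A functional C sketch of the lookup path the abstract describes: a VPN bit steers the access to one of two banks, the small filter TLB is probed first, and the main TLB only on a filter miss. Entry counts, the bank-select hash, and the promotion policy are our assumptions.

```c
#include <stdint.h>
#include <stdbool.h>

#define BANKS            2
#define FILTER_PER_BANK  4   /* assumed filter-TLB size   */
#define MAIN_PER_BANK   16   /* assumed main-TLB size     */

struct tlb_entry { uint64_t vpn, pfn; bool valid; };

static struct tlb_entry filt[BANKS][FILTER_PER_BANK];
static struct tlb_entry main_tlb[BANKS][MAIN_PER_BANK];

/* The low VPN bit selects a bank, so only half the entries are probed
 * per lookup; the filter level keeps most hits away from the larger,
 * more power-hungry main level. */
bool tlb_lookup(uint64_t vpn, uint64_t *pfn)
{
    unsigned b = vpn & (BANKS - 1);  /* bank select */

    for (int i = 0; i < FILTER_PER_BANK; i++)
        if (filt[b][i].valid && filt[b][i].vpn == vpn) {
            *pfn = filt[b][i].pfn;
            return true;
        }

    for (int i = 0; i < MAIN_PER_BANK; i++)
        if (main_tlb[b][i].valid && main_tlb[b][i].vpn == vpn) {
            *pfn = main_tlb[b][i].pfn;
            filt[b][0] = main_tlb[b][i];  /* promote into filter; a real
                                             design uses a replacement policy */
            return true;
        }
    return false;  /* TLB miss: walk the page table */
}
```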

18.
Array many-core processors are widely used in high-performance computing for their high compute performance and energy efficiency, but processors for future HPC systems must overcome the severe "memory wall" challenge as well as core-coordination problems. In typical array processors the cores are single-threaded to minimize overhead, which places heavy demands on the memory system. This paper introduces hardware simultaneous multithreading into the single cores of an array many-core processor; observing in experiments that the L1 instruction-cache hit rate drops markedly as the thread count grows, it proposes a redundant instruction-cache storage structure for array many-core processors and, on top of it, FIFO and LRU-like replacement policies. With this optimized cache design, simulations show the overall instruction-cache miss rate with two threads falls by 25.2% and overall CPI performance improves by 30.2%.

19.
A Dynamically Reconfigurable Cache Design for an Embedded Processor   Total citations: 1 (self-citations: 0, by others: 1)
Processor chips generally have on-chip caches, typically a fixed-size level-1 cache (L1) and level-2 cache (L2). This article describes a dynamically reconfigurable cache implemented in an embedded processor design. The idea of a dynamically reconfigurable cache was first proposed by researchers at the University of Rochester in a paper on memory hierarchies [1], originally targeting high-performance superscalar general-purpose processors. In this embedded processor design, the authors creatively carried that idea over: with a small amount of extra hardware and compiler cooperation, and with the combined size of the L1 and L2 caches held constant, the sizes of L1 and L2 can be configured dynamically for the specific application. Dynamically configuring the caches not only effectively improves the cache hit rate but also effectively reduces processor power.
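A sketch of the constraint the abstract states, with hypothetical names and sizes: software (compiler or loader) repartitions a fixed SRAM budget between L1 and L2, keeping the total constant.

```c
#include <stdbool.h>
#include <stddef.h>

#define TOTAL_CACHE_KB 256  /* fixed on-chip budget (assumed value) */

static size_t l1_kb = 32;
static size_t l2_kb = TOTAL_CACHE_KB - 32;

/* Compiler- or loader-issued reconfiguration for the next application.
 * L1 must stay a power of two (so it remains cheaply indexable), and
 * the L1 + L2 total never changes, matching the abstract's constraint. */
bool cache_repartition(size_t new_l1_kb)
{
    bool pow2 = new_l1_kb && !(new_l1_kb & (new_l1_kb - 1));
    if (!pow2 || new_l1_kb >= TOTAL_CACHE_KB)
        return false;
    l1_kb = new_l1_kb;
    l2_kb = TOTAL_CACHE_KB - new_l1_kb;  /* total stays constant */
    return true;
}
```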
