Similar Documents
20 similar documents found
1.
Research Status of Cache Management Techniques for Multicore Processor Systems   Cited by 1 (self: 0, others: 1)
The design and management of the cache hierarchy in multicore processors is an important problem in microprocessor design. Mainstream commercial microprocessors now adopt architectures with a shared last-level cache, whose performance usually has a large impact on overall processor performance, so shared-cache management has become a research hotspot. This paper first introduces current mainstream multicore processors and their design issues, then surveys three key techniques for shared-cache management: thread scheduling, NUCA, and cache partitioning, and finally outlines directions for the development of multicore cache-management techniques.

2.
Research on Cache Resource Management Mechanisms for Chip Multiprocessors   Cited by 2 (self: 1, others: 1)
As chip multiprocessors become the mainstream of processor development and on-chip cache resources keep growing, managing cache resources has become a key problem for chip multiprocessors. This paper surveys progress in on-chip multicore cache-resource management, dividing the work by research content into cache partitioning and cache sharing. For cache partitioning, it discusses the main components and general form of a partitioning scheme, and analyzes and compares representative multicore cache-partitioning mechanisms. For cache sharing, it presents the main research issues and introduces and compares several mainstream multicore cache-sharing mechanisms. The analysis concludes that page-based partitioning under hardware-software cooperative management should be the focus of future research on multicore cache partitioning, while research on multicore cache-sharing mechanisms should start from the cache-behavior characteristics of the target applications.

3.
The advent of multicore processors poses new challenges to real-time system design; for example, interference among concurrent tasks through the shared cache severely degrades real-time behavior. Existing cache-conflict evaluation models do not target multicore architectures and cannot evaluate the impact of a shared cache on multiple concurrent tasks from several angles. Based on the widely used LRU cache-replacement policy and the static cache reuse distance of each task, this paper proposes a cache-conflict prediction model that predicts the cache occupancy, miss rate, and inter-task conflict probability of concurrent tasks, and analyzes the impact of shared-cache structures on real-time behavior in a multicore setting. Experimental results show that the model is both more complete in functionality and more accurate than existing models.
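The model's key input is the LRU stack (reuse) distance. As background only, here is a minimal sketch of that quantity, not the paper's prediction model: it computes the reuse distance of each access in an address trace. Under a fully associative LRU cache of size A, an access hits exactly when its reuse distance is below A. The trace values are made up.

```c
#include <stdio.h>

/* LRU reuse distance: number of distinct addresses touched since the
 * previous access to the same address (-1 = first access / cold miss).
 * Under fully associative LRU of size A, an access hits iff its reuse
 * distance is < A. O(n^2) scan; fine for a short illustrative trace. */
static int reuse_distance(const unsigned trace[], int i)
{
    unsigned seen[64];          /* distinct addresses since last use */
    int nseen = 0;
    for (int j = i - 1; j >= 0; j--) {
        if (trace[j] == trace[i])
            return nseen;       /* previous use found */
        int dup = 0;
        for (int k = 0; k < nseen; k++)
            if (seen[k] == trace[j]) { dup = 1; break; }
        if (!dup)
            seen[nseen++] = trace[j];
    }
    return -1;                  /* cold: never accessed before */
}

int main(void)
{
    /* made-up cache-line address trace */
    unsigned trace[] = { 1, 2, 3, 1, 2, 4, 1, 3 };
    int n = sizeof trace / sizeof trace[0];

    for (int i = 0; i < n; i++)
        printf("access %u: reuse distance %d\n",
               trace[i], reuse_distance(trace, i));
    return 0;
}
```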

4.
赵纯  龙翔  王雷 《微型机与应用》2012,31(2):53-55,59
Partitioned operating systems are a core technology in the field of integrated avionics. With the performance limits of single cores being reached, processor architecture is moving toward multicore. Combining the two, this paper analyzes, on the basis of a multicore partitioned operating system, the impact that multicore processor architecture has on partitioned operating systems. Analysis of experimental data shows that multicore architecture affects partitioned operating systems in two respects: the shared cache structure of the multicore processor, and concurrent access to critical resources in the kernel.

5.
A Low-Power-Oriented Shared Cache Partitioning Scheme for Multicore Processors   Cited by 1 (self: 0, others: 1)
As multicore processors develop, on-chip cache capacity grows with them, and cache power consumption accounts for an ever larger share of total chip power. How to reduce cache power has become a hot topic in cache design. This paper studies a low-power-oriented partitioning technique for the shared cache of multicore processors (LP-CP). It proposes a cache-partitioning framework in which miss-rate monitors added to the processor dynamically collect program miss rates; a low-power-oriented shared-cache partitioning algorithm then computes partitioning strategies that stay within a performance-loss threshold. On a dual-core system with a shared L2 cache, we evaluated the low-power cache partitioning using multiprogrammed benchmark suites: with performance-loss thresholds of 1% and 3%, the fraction of the cache switched off reached 20.8% and 36.9%, respectively.
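A sketch of the kind of decision LP-CP automates (illustrative only, not the paper's algorithm): given per-core miss-rate curves over the number of allocated ways, greedily power off the way whose loss costs least until the predicted miss-rate increase would exceed the threshold. The miss-rate curves below are made-up numbers standing in for what the paper's miss-rate monitors would collect.

```c
#include <stdio.h>

#define WAYS 8  /* ways of the shared cache available to this core pair */

/* Made-up miss-rate curves: missrate[core][w] = miss rate when the
 * core owns w ways. A real system would obtain these from miss-rate
 * monitors, as the paper's framework does. */
static const double missrate[2][WAYS + 1] = {
    { 1.0, .60, .40, .27, .26, .25, .25, .24, .24 },
    { 1.0, .55, .38, .30, .29, .28, .27, .27, .26 },
};

int main(void)
{
    double threshold = 0.03;             /* tolerated miss-rate rise     */
    int own[2] = { WAYS / 2, WAYS / 2 }; /* start from an even partition */
    double base = missrate[0][own[0]] + missrate[1][own[1]];

    /* Greedily power off the way whose loss hurts least, while the
     * total miss-rate increase stays within the threshold. */
    for (;;) {
        int victim = -1;
        double best = 1e9;
        for (int c = 0; c < 2; c++) {
            if (own[c] == 0) continue;
            double delta = missrate[c][own[c] - 1] - missrate[c][own[c]];
            if (delta < best) { best = delta; victim = c; }
        }
        double cur = missrate[0][own[0]] + missrate[1][own[1]];
        if (victim < 0 || cur + best - base > threshold) break;
        own[victim]--;
    }
    printf("ways kept: core0=%d core1=%d, powered off=%d\n",
           own[0], own[1], WAYS - own[0] - own[1]);
    return 0;
}
```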

6.
Dataflow programming languages are domain-oriented languages that separate computation from communication and expose an application's parallelism. The complexity of the underlying computation, storage, and communication resources in multicore clusters poses new challenges to the performance of dataflow programs. To address the low resource utilization and poor scalability of dataflow programs on multicore clusters, this paper proposes and implements a hierarchical pipeline parallel optimization method for multicore clusters, using the synchronous dataflow graph as the intermediate representation. The method comprises task partitioning and scheduling, hierarchical pipeline scheduling, and data-locality optimization; after compiler optimization it generates parallel executable MPI-based target code. Task partitioning and scheduling exploits the data and task parallelism in the program to map tasks onto cores, achieving load balance and low communication and synchronization overhead; hierarchical pipeline scheduling uses the program's parallelism to construct low-latency pipeline schedules; data-locality optimization performs memory-oriented optimization against cache false sharing in data accesses. Experiments on a cluster of x86 multicore processors, with typical media-processing algorithms as test programs, analyze the hierarchical pipeline optimization and confirm the method's effectiveness.
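To make the pipeline idea concrete, here is a minimal two-actor MPI pipeline (illustrative; the actors, token values, and two-rank layout are assumptions, not the paper's generated code): rank 0 runs the producer stage and rank 1 the consumer, so in steady state the two stages overlap.

```c
#include <mpi.h>
#include <stdio.h>

#define STEPS 8

/* Two-stage software pipeline over MPI: rank 0 runs the producer
 * actor, rank 1 the consumer actor. Overlapping the stages in steady
 * state is the essence of pipeline scheduling for a synchronous
 * dataflow graph (illustrative two-actor graph). Run with -np 2. */
int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (int t = 0; t < STEPS; t++) {
        if (rank == 0) {                        /* stage 1: produce */
            int token = t * t;
            MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {                 /* stage 2: consume */
            int token;
            MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("step %d: consumed %d\n", t, token);
        }
    }
    MPI_Finalize();
    return 0;
}
```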

7.
Research on Multithreaded Program Optimization Based on Multicore Processors   Cited by 1 (self: 1, others: 0)
With strong promotion from mainstream chip vendors, multicore processors have become increasingly widespread. Traditional serial programming methods can no longer make full use of multicore CPU resources, and how to exploit the computing performance of multicore processors efficiently has become a new challenge for software developers. Building on conventional multithreaded programming, and drawing on the microarchitectural characteristics of Intel processors and the CPU-binding facility provided by the Linux kernel, this paper applies cache optimization and CPU-affinity optimization to eliminate local multithreaded cache-line contention and false sharing in a multicore environment, reduce thread-scheduling overhead, and improve the efficiency of multithreaded programs.
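A minimal sketch of the two techniques the paper combines, using the Linux APIs it refers to: each thread is pinned to a core with pthread_setaffinity_np, and per-thread counters are aligned to a 64-byte cache line so they do not falsely share a line. The 1:1 thread-to-core mapping and the line size are assumptions.

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

#define NTHREADS 4

/* One counter per cache line: without the alignment, the four counters
 * would share a line, and every increment would invalidate the other
 * cores' copies (false sharing). 64-byte line size assumed. */
struct padded_counter {
    volatile long value;
} __attribute__((aligned(64)));

static struct padded_counter counters[NTHREADS];

static void *worker(void *arg)
{
    long id = (long)arg;

    /* Bind this thread to core `id` (illustrative 1:1 mapping). */
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET((int)id, &set);
    pthread_setaffinity_np(pthread_self(), sizeof set, &set);

    for (long i = 0; i < 10 * 1000 * 1000; i++)
        counters[id].value++;   /* line stays in this core's cache */
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    for (int i = 0; i < NTHREADS; i++)
        printf("counter[%d] = %ld\n", i, counters[i].value);
    return 0;
}
```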

8.
Targeting current mainstream multicore processors, this paper studies optimization of database Nested Loop Join (NLJ) on shared-cache multicore processors. For NLJ without an index, it proposes a multithreaded NLJ execution framework based on the Radix-NL-Join algorithm, and optimizes the framework's cluster-partitioning and cluster-join threads from two angles: reducing cache access conflicts and raising the cache hit rate. The main contributions are: (1) since multithreaded access to a shared cache easily produces access conflicts, the start timing of the cluster-partitioning threads is optimized; (2) since the cluster-join threads show poor cache performance, prefetching threads exploit the join threads' sequential access to clusters to improve their performance; (3) the framework was implemented on the open-source database EaseDB and the performance of multithreaded NLJ was measured. The results show that the proposed framework fully utilizes the computing resources of multicore processors, effectively resolves shared-cache access conflicts under multithreading, and greatly improves NLJ performance, by about 26% over a multithreaded Radix-NL-Join without cache optimization.
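The framework's first phase clusters tuples by join key so each cluster fits in cache. A minimal single-threaded radix-partitioning sketch (the two-pass histogram-then-scatter scheme radix joins rely on; the keys and the number of radix bits are made up):

```c
#include <stdio.h>

#define BITS  2                  /* 2 radix bits -> 4 clusters */
#define NPART (1 << BITS)
#define MASK  (NPART - 1)

/* Scatter keys into clusters by their low BITS bits, first using a
 * histogram to compute each cluster's start offset (the usual
 * two-pass radix partitioning used by radix joins). */
static void radix_partition(const int *key, int n, int *out, int *start)
{
    int hist[NPART] = { 0 }, pos[NPART];

    for (int i = 0; i < n; i++)          /* pass 1: histogram */
        hist[key[i] & MASK]++;

    for (int p = 0, off = 0; p < NPART; p++) {
        start[p] = pos[p] = off;         /* prefix sums = offsets */
        off += hist[p];
    }
    for (int i = 0; i < n; i++)          /* pass 2: scatter */
        out[pos[key[i] & MASK]++] = key[i];
}

int main(void)
{
    int key[] = { 7, 4, 1, 10, 3, 8, 13, 2 };  /* made-up join keys */
    int n = sizeof key / sizeof key[0];
    int out[8], start[NPART];

    radix_partition(key, n, out, start);
    for (int p = 0; p < NPART; p++) {
        int end = (p + 1 < NPART) ? start[p + 1] : n;
        printf("cluster %d:", p);
        for (int i = start[p]; i < end; i++) printf(" %d", out[i]);
        printf("\n");
    }
    return 0;
}
```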

9.
This paper designs a cache-management policy, ELRRIP, that eliminates low-reuse blocks and predicts re-reference intervals. Exploiting the observation that low-reuse blocks occupy the shared last-level cache of a multicore processor for a long time, ELRRIP (1) predicts low-reuse blocks from the historical access information in the cache level above the shared last-level cache and evicts them preferentially, and (2) uses an improved re-reference interval prediction technique to identify potential low-reuse blocks and likewise evicts them preferentially. On the basis of ELRRIP, the paper also proposes TADELRRIP. Experiments show that, for a 4-core processor, TADELRRIP improves the weighted speedup by 9.14% on average.
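ELRRIP builds on the RRIP family of replacement policies. For background, a textbook 2-bit SRRIP sketch for a single cache set (not the paper's ELRRIP): each block carries a re-reference prediction value (RRPV); a hit resets it to 0, insertions predict a long interval, and the victim is a block already predicted for the distant future.

```c
#include <stdio.h>

#define WAYS    4
#define RRPVMAX 3          /* 2-bit RRPV: 0 = near, 3 = distant */

struct line { int valid; unsigned tag; int rrpv; };
static struct line set[WAYS];  /* one cache set */

/* Access one tag in the set under SRRIP; returns 1 on hit. */
static int srrip_access(unsigned tag)
{
    for (int w = 0; w < WAYS; w++)
        if (set[w].valid && set[w].tag == tag) {
            set[w].rrpv = 0;               /* hit: near re-reference */
            return 1;
        }
    for (;;) {                             /* find/force a victim */
        for (int w = 0; w < WAYS; w++)
            if (!set[w].valid || set[w].rrpv == RRPVMAX) {
                set[w].valid = 1;
                set[w].tag = tag;
                set[w].rrpv = RRPVMAX - 1; /* insert with long interval */
                return 0;
            }
        for (int w = 0; w < WAYS; w++)     /* age everyone, retry */
            set[w].rrpv++;
    }
}

int main(void)
{
    unsigned trace[] = { 1, 2, 3, 1, 4, 5, 1, 2 };  /* made-up tags */
    for (int i = 0; i < 8; i++)
        printf("tag %u: %s\n", trace[i],
               srrip_access(trace[i]) ? "hit" : "miss");
    return 0;
}
```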

10.
As multicore processors keep scaling up and inter-core communication mechanisms grow more complex, maintaining cache coherence becomes more difficult. Starting from the origin of the cache-coherence problem in multicore processors, this paper analyzes the implementation mechanisms of snooping protocols, directory protocols, the Token protocol, and the Hammer protocol, together with their strengths and weaknesses in multicore environments. It then analyzes and discusses in detail the trends and technical challenges of future multicore coherence-protocol design from several perspectives: co-design of the coherence protocol with the on-chip interconnect, protocol optimization strategies for low-power applications, and coherence-protocol verification and fault-tolerance mechanisms.
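For orientation, every protocol family the paper surveys maintains some per-line state machine. A textbook three-state MSI transition function (an illustrative baseline, not the surveyed protocols' actual state sets):

```c
#include <stdio.h>

/* Textbook MSI states for one cache line in one core's cache. */
enum msi { I, S, M };
enum event { LOCAL_READ, LOCAL_WRITE, REMOTE_READ, REMOTE_WRITE };

static const char *name[] = { "I", "S", "M" };

/* Next state of THIS cache's copy for each (state, event) pair.
 * Coherence actions (bus read/upgrade, write-back) are noted as
 * comments; a real protocol also defines those messages. */
static enum msi next(enum msi s, enum event e)
{
    switch (e) {
    case LOCAL_READ:   return s == I ? S : s;  /* I: issue bus read    */
    case LOCAL_WRITE:  return M;               /* I/S: issue upgrade   */
    case REMOTE_READ:  return s == M ? S : s;  /* M: write back, share */
    case REMOTE_WRITE: return I;               /* invalidate our copy  */
    }
    return s;
}

int main(void)
{
    enum msi s = I;
    enum event script[] = { LOCAL_READ, LOCAL_WRITE,
                            REMOTE_READ, REMOTE_WRITE };
    for (int i = 0; i < 4; i++) {
        enum msi n = next(s, script[i]);
        printf("%s -> %s\n", name[s], name[n]);
        s = n;
    }
    return 0;
}
```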

11.
In this paper, we conduct performance scaling analysis of multithreaded multicore processors (MMPs) for parallel computing. We propose a thread-level closed-queuing network model covering a fairly large design space, accounting for hardware scaling models; coarse-grain, fine-grain, and simultaneous multithreading (SMT) cores; and shared resources, including cache, memory, and critical sections. We then derive a closed-form solution for this model in terms of the speedup performance measure. This solution makes it possible to analyze the performance scaling properties of MMPs along multiple dimensions. In particular, we show that for the parallelizable part of the workload, the speedup, in the absence of resource contention, is no longer just a linear function of parallel processing unit counts, as predicted by Amdahl’s law, but also a strong function of workload characteristics, ranging from strongly memory-bound to strongly CPU-bound workloads. We also find that with core multithreading, superlinear speedup, higher than that predicted by Amdahl’s law, may be achieved for the parallelizable part of the workload if core threads exhibit strong cache affinity and the workload is strongly memory-bound. We then derive a tight speedup upper bound in the presence of both memory resource contention and critical sections for multicore processors with single-threaded cores. This upper bound indicates that with resource contention among threads, whether due to shared memory or critical sections, a sequential term is guaranteed to emerge from the parallelizable part of the workload, fundamentally limiting the scalability of multicore processors for parallel computing, in addition to the sequential part of the workload, as dictated by Amdahl’s law. As a result, to improve speedup performance for MMPs, one should strive to enhance memory parallelism and confine critical sections as locally as possible, e.g., to the smallest possible number of threads in the same core.
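For reference, the Amdahl baseline the abstract generalizes, together with an illustrative rendering of its contention result (the constant c, standing for the sequential term induced by shared memory or critical sections, is a hedged placeholder, not the paper's exact bound):

```latex
% Amdahl's law for parallelizable fraction p on n processing units:
\[
  S(n) \;=\; \frac{1}{(1-p) + \dfrac{p}{n}},
  \qquad
  \lim_{n\to\infty} S(n) \;=\; \frac{1}{1-p}.
\]
% With contention, a sequential term c > 0 emerges from the
% parallelizable part (illustrative form of the abstract's bound):
\[
  S(n) \;\le\; \frac{1}{(1-p) + c + \dfrac{p-c}{n}}.
\]
```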

12.
In the ongoing quest for greater computational power, efficiently exploiting parallelism is of paramount importance. Architectural trends have shifted from improving single-threaded application performance, often achieved through instruction level parallelism (ILP), to improving multithreaded application performance by supporting thread level parallelism (TLP). Thus, multi-core processors incorporating two or more cores on a single die have become ubiquitous. To achieve concurrent execution on multi-core processors, applications must be explicitly restructured to exploit parallelism, either by programmers or compilers. However, multithreaded parallel programming may introduce overhead due to communications among threads. Though some resources are shared among processor cores, current multi-core processors provide no explicit communications support for multithreaded applications that takes advantage of the proximity between cores. Currently, inter-core communications depend on cache coherence, resulting in demand-based cache line transfers with their inherent latency and overhead. In this paper, we explore two approaches to improve communications support for multithreaded applications. Prepushing is a software-controlled data forwarding technique that sends data to the destination’s cache before it is needed, eliminating cache misses in the destination’s cache as well as reducing coherence traffic on the bus. Software Controlled Eviction (SCE) improves thread communications by placing shared data in shared caches so that it can be found in a much closer location than remote caches or main memory. Simulation results show significant performance improvement with the addition of these architecture optimizations to multi-core processors.
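Prepushing as described requires architectural support; on commodity hardware the nearest software analogue is receiver-side prefetching, which likewise overlaps the transfer of shared lines with computation instead of stalling on demand misses. A minimal sketch using GCC's __builtin_prefetch (the prefetch distance is a made-up tuning value):

```c
#include <stdio.h>
#include <stdlib.h>

#define N     (1 << 20)
#define AHEAD (8 * 16)  /* prefetch 8 cache lines (of 16 ints) ahead */

/* Consumer walking data produced elsewhere: without help, every new
 * cache line is a demand miss served by the coherence protocol.
 * Software prefetching overlaps that transfer with compute;
 * prepushing (the paper's technique) would instead have the PRODUCER
 * forward the lines before they are requested. */
static long consume(const int *data)
{
    long sum = 0;
    for (int i = 0; i < N; i++) {
        if (i + AHEAD < N)
            __builtin_prefetch(&data[i + AHEAD], 0 /* read */, 1);
        sum += data[i];
    }
    return sum;
}

int main(void)
{
    int *data = malloc(N * sizeof *data);
    for (int i = 0; i < N; i++) data[i] = 1;   /* stand-in producer */
    printf("sum = %ld\n", consume(data));
    free(data);
    return 0;
}
```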

13.
The realization of modern processors is based on a multicore architecture with an increasing number of cores per processor. Multicore processors are often designed such that some level of the cache hierarchy is shared among cores. Usually, the last-level cache is shared among several or all cores (e.g., the L3 cache) and each core possesses private lower-level caches (e.g., L1 and L2 caches). Superlinear speedup is possible for the matrix multiplication algorithm executed on a shared-memory multiprocessor due to the existence of a superlinear region: a region where the cache requirements for matrix storage cause sequential execution to incur more cache misses than parallel execution. This paper shows theoretically and experimentally that there is a region where superlinear speedup can be achieved. We provide a theoretical proof of the existence of superlinear speedup and determine the boundaries of the region where it can be achieved. The experiments confirm our theoretical results. Therefore, these results will have an impact on future software development and the exploitation of parallel hardware based on a shared-memory multiprocessor architecture. Copyright © 2013 John Wiley & Sons, Ltd.
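The mechanism behind the superlinear region can be stated compactly (an illustrative boundary, not the paper's exact derivation), with W the matrices' working set and C the cache capacity available to one core:

```latex
\[
  C \;<\; W \;\le\; p\,C
  \quad\Longrightarrow\quad
  S(p) > p \ \text{becomes possible:}
\]
% the sequential run overflows the single core's cache while each of
% the p parallel partitions fits, so the parallel run avoids misses
% that the sequential run must take.
```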

14.
The performance of microprocessors has increased exponentially for over 35 years. However, process technology challenges, chip power constraints, and difficulty in extracting instruction-level parallelism are conspiring to limit the performance of future individual processors. To address these limits, the computer industry has embraced chip multiprocessing (CMP), predominantly in the form of multiple high-performance superscalar processors on the same die. We explore the trade-off between building CMPs from a few high-performance cores or from a large number of lower-performance cores, and argue that CMPs built from a larger number of lower-performance cores can provide better performance and performance/Watt on many commercial workloads. We examine two multithreaded CMPs built using a large number of processor cores: Sun’s Niagara and Niagara 2 processors. We also explore the programming issues for CMPs with a large number of threads. The programming model for these CMPs is similar to the widely used programming model for symmetric multiprocessors (SMPs), but the greatly reduced cost of communicating data through the on-chip shared secondary cache allows more fine-grain parallelism to be effectively exploited by the CMP. Finally, we present performance comparisons between Sun’s Niagara and more conventional dual-core processors built from large superscalar cores. For several key server workloads, Niagara shows significant performance and even more significant performance/Watt advantages over CMPs built from traditional superscalar processors.

15.
Current rendering processors aim to process triangles as fast as possible and tend to be equipped with multiple rasterizers so that many triangles can be handled in parallel, increasing polygon rendering performance. However, such parallel architectures may face a consistency problem when more than one rasterizer tries to access data at the same address. This paper proposes a consistency-free memory architecture for sort-last parallel rendering processors, in which a consistency-free pixel cache architecture is devised and effectively associated with three memory-system components: a single frame buffer, a memory interface unit, and consistency-test units. Furthermore, the proposed architecture can reduce the latency caused by pixel cache misses, because the rasterizer does not wait until miss handling is completed when a pixel cache miss occurs. The experimental results show that the proposed architecture achieves almost linear speedup for up to four rasterizers with a single frame buffer.

16.
For multicore processors with a shared cache, managing each core's use of the cache well is key to fully realizing multicore performance. Current cache-replacement methods allow performance interference between programs, whereas static cache partitioning resolves the interference by allocating different space to simultaneously running programs. To allocate a suitably sized cache partition to each program, performance profiling is needed: the program is run multiple times in advance to collect performance data under various cache capacities. This profiling approach has enormous overhead, limiting its practicality. To eliminate the need for multiple profiling runs, this paper proposes a single-run program-performance profiling optimization. The technique uses online phase analysis to identify a program's execution phases, avoiding repeated profiling of identical phases; it also analyzes the trend of each phase's performance with respect to cache capacity, and skips profiling of capacity changes to which performance is insensitive, reducing overhead. After the program finishes, its overall performance at each cache capacity is estimated from the per-phase performance at those capacities, to guide static cache partitioning. Experiments show the overhead of this technique is only 7%, while cache partitioning guided by it improves performance by 8% over no partitioning, only 1% below partitioning guided by multi-run profiling.
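The single-pass scheme hinges on recognizing that the current interval repeats an already-profiled phase. A minimal sketch of that test (illustrative: intervals are summarized as made-up signature vectors compared by Manhattan distance, and the threshold is an assumed tuning value):

```c
#include <stdio.h>

#define DIM        4      /* signature length (e.g., bucketed counters) */
#define MAXPHASE   16
#define THRESHOLD  0.25   /* assumed similarity cutoff */

static double phases[MAXPHASE][DIM];  /* signatures of known phases */
static int nphases;

/* Return the id of a matching known phase, or register a new one.
 * A match means this interval repeats a phase already profiled, so
 * its cache-sensitivity profiling can be skipped. */
static int classify(const double sig[DIM])
{
    for (int p = 0; p < nphases; p++) {
        double dist = 0;
        for (int d = 0; d < DIM; d++)
            dist += sig[d] > phases[p][d] ? sig[d] - phases[p][d]
                                          : phases[p][d] - sig[d];
        if (dist < THRESHOLD)
            return p;                   /* seen before: reuse profile */
    }
    for (int d = 0; d < DIM; d++)       /* new phase: remember it */
        phases[nphases][d] = sig[d];
    return nphases++;
}

int main(void)
{
    /* made-up interval signatures: A, B, then A again */
    double a[DIM]  = { .50, .30, .10, .10 };
    double b[DIM]  = { .10, .10, .40, .40 };
    double a2[DIM] = { .48, .32, .10, .10 };

    printf("interval 1 -> phase %d\n", classify(a));
    printf("interval 2 -> phase %d\n", classify(b));
    printf("interval 3 -> phase %d\n", classify(a2));
    return 0;
}
```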

17.
In recent years, processor technology has evolved towards multicore processors, which include multiple processing units (cores) in a single package. Those cores, having their own private caches, often share a higher-level cache memory dedicated to each processor die. This multi-level cache hierarchy in multicore processors raises the importance of the cache-utilization problem. Assigning parallel-running software components with common data to processor cores that do not share a common cache increases the number of cache misses. In this paper we present a novel approach that uses model-based information to guide the OS scheduler in assigning appropriate core affinities to software objects at run-time. We build graph models of the software and of the processors' cache hierarchies, and devise a graph-matcher algorithm that provides a mapping between these two graphs. Using this mapping we obtain candidate core sets with which each software object can be affiliated at run-time. These affiliations are determined based on the idea that software components with the potential to share common data at run-time should run on cores that share a common cache. We also develop an object-dispatcher algorithm that keeps track of object affiliations at run-time and dispatches objects using the information from the compile-time graph matcher. We apply our approach to design-pattern implementations and two different application programs running on servers that use CFS scheduling. Our results show that cache-aware dispatching based on information obtained from the software model significantly decreases the number of cache misses and improves CFS’s scheduling performance.

18.
In many-core processors, I/O resources are shared by multiple processor cores. I/O virtualization enables efficient sharing and safe isolation of I/O resources and is being adopted by more and more processor designs. Hardware-supported I/O virtualization takes virtualization into account from the architecture-design stage and provides a comprehensive, efficient I/O virtualization solution. This paper studies in depth the two key techniques of hardware-supported I/O virtualization, DMA remapping and interrupt redirection; proposes a hint-based IOTSB cache management method and an invalidation-queue-based invalidation method to optimize DMA remapping; and proposes a multi-layer controllable interrupt model and a flexible, controllable interrupt-redirection implementation to optimize I/O interrupt redirection. Test results show that the proposed hardware-supported I/O virtualization optimizations achieve efficient sharing of I/O resources at very low I/O performance overhead, delivering I/O performance close to that of a non-virtualized environment.

19.
Heterogeneous multicore processors typically consist of high-performance big cores and low-energy little cores; sensible thread scheduling on them can effectively improve resource utilization and save energy. Fairness scheduling for big and little cores proposed in earlier work did not consider cores having multiple frequency/voltage states, yet processors supporting DVFS adjustment are increasingly common, so the computation of inter-thread fairness needs to be extended and improved. This paper proposes a method for computing thread fairness on heterogeneous multicore processors when each core has several DVFS states, improves an existing performance-prediction model by adopting an adaptive algorithm to adjust the model's coefficients, and on this basis proposes a scheduling policy that keeps inter-thread fairness and processor power within preset thresholds while selecting the most energy-efficient configuration, thereby reducing the energy consumed by running applications. Experimental results show that, compared with the proposed policy, the static, DVFS-only, and swap-only scheduling methods consume on average more than 20% additional energy at nearly identical total run times, reaching 50% for some applications.
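One standard way to quantify the fairness such a scheduler maintains (a common formulation from the multiprogram-scheduling literature, not necessarily the paper's exact metric) is per-thread slowdown relative to solo execution, with fairness as the ratio of the extremes:

```latex
\[
  \mathrm{slowdown}_i \;=\; \frac{T_i^{\text{shared}}}{T_i^{\text{alone}}},
  \qquad
  \mathrm{fairness} \;=\;
  \frac{\min_i \mathrm{slowdown}_i}{\max_j \mathrm{slowdown}_j}
  \;\in\; (0, 1].
\]
% fairness = 1 means all threads are slowed down equally.
```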

20.
Nayfeh  B.A. Olukotun  K. 《Computer》1997,30(9):79-85
Presents the case for billion-transistor processor architectures that will consist of chip multiprocessors (CMPs): multiple (four to 16) simple, fast processors on one chip. In their proposal, each processor is tightly coupled to a small, fast, level-one cache, and all processors share a larger level-two cache. The processors may collaborate on a parallel job or run independent tasks (as in the SMT proposal). The CMP architecture lends itself to simpler design, faster validation, cleaner functional partitioning, and higher theoretical peak performance. However, for this architecture to realize its performance potential, either programmers or compilers will have to make code explicitly parallel. Old ISAs will be incompatible with this architecture (although they could run slowly on one of the small processors).
