Similar Documents
19 similar documents found (search time: 140 ms)
1.
A Simple SDRAM Controller Implementation   (Cited 1 time: 0 self-citations, 1 by others)
A simple, modular, easily extensible, and portable SDRAM controller scheme is proposed. Weighing cost, capacity, speed, and power consumption, SDRAM is chosen as the memory of a digital image processing system. To meet the system's demand for high-speed, high-volume data transfer, the SDRAM controller is implemented with latency-hiding and arbitration mechanisms, and its performance is evaluated.

2.
A multi-core shared SDRAM controller for network processing is designed. A hierarchical priority arbitration algorithm is proposed to improve the efficiency of multi-core access to shared memory, and, tailored to the characteristics of IP packet processing, an instruction-controlled block data transfer mechanism is introduced to shorten IP packet read/write latency. Verification on an FPGA platform shows that, when processing 64-byte IP packets, the read/write efficiency of the SDRAM controller improves by more than 55%.

3.
This paper implements a three-port non-transparent SDRAM controller on an Altera Stratix-series FPGA. The controller exposes multiple ports to the user, and a rotating-priority design ensures that the ports share the SDRAM bandwidth evenly without reducing the transfer rate. Access to the SDRAM space is virtualized as a simple access to a three-port RAM, and a ping-pong DMA transfer mechanism greatly improves data transfer bandwidth and efficiency.
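The abstract does not include code; the following is a minimal C sketch of how a rotating-priority (round-robin) grant among three ports could look, so that each port gets an even share of the SDRAM bandwidth. The names and data structure are illustrative assumptions, not the paper's design.

```c
/* Illustrative rotating-priority (round-robin) arbiter for three ports.
 * Names and structure are assumptions, not taken from the paper. */
#include <stdbool.h>

#define NUM_PORTS 3

typedef struct {
    bool request[NUM_PORTS];  /* request lines from the three user ports */
    int  last_granted;        /* port granted in the previous round */
} arbiter_t;

/* Grant the requesting port closest after the previously granted one,
 * so bandwidth is shared evenly. Returns the granted port, or -1 if idle. */
static int arbiter_grant(arbiter_t *a)
{
    for (int i = 1; i <= NUM_PORTS; i++) {
        int port = (a->last_granted + i) % NUM_PORTS;
        if (a->request[port]) {
            a->last_granted = port;
            return port;
        }
    }
    return -1;
}
```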

4.
A Block Read/Write SDRAM Controller with Time Hiding   (Cited 2 times: 1 self-citation, 1 by others)
To address the long access latency and low speed of block reads and writes in SDRAM controllers, a time-hiding technique is proposed, applied to the design of an SDRAM controller, and implemented on an FPGA. Experimental results show that the time-hiding technique effectively shortens block read/write latency and increases read/write speed, saving 52% of the time when writing 4×4 data blocks and 44% when reading 8×8 data blocks.

5.
This paper presents analytical models for rotating-priority and cyclic-priority arbitration in multi-bus multiprocessor systems. The rotating-priority scheme is analyzed probabilistically, while the cyclic-priority scheme is analyzed through state transitions and parameter changes. The models are used to analyze and compare the performance of the two arbitration schemes. Some results indicate that cyclic-priority arbitration yields the smallest bus access delay.

6.
After introducing the key techniques of DDR SDRAM controller design, a design method for a DDR SDRAM controller is discussed. An optimized address-mapping strategy improves burst access efficiency, and the design was taped out in a 0.18 μm CMOS process. The resulting DDR SDRAM controller chip met the design requirements in PCB-level board testing.

7.
The order in which memory transactions are processed has a major impact on memory access performance. Multiple pending transactions issued by the same SoC device often have consecutive addresses and the same read/write type. However, conventional bus arbitration interleaves the pending-transaction sequences from different devices on their way to the memory controller, and since the controller's scheduling window is limited, such sequences usually cannot access memory consecutively. To solve this problem, a new bus arbitration method, CGH, is proposed. Exploiting the communication behavior of SoC devices, it identifies sequences of pending transactions from the same device that share a row address and read/write type and grants them arbitration consecutively, reducing the number of row-address and read/write-type switches at the memory; in addition, when selecting the next sequence to grant, it prefers requests whose row address and read/write type match the most recently granted transaction, further improving memory access efficiency. Applying CGH arbitration to the PKUnity-SKSoC improved system memory access performance by 21.37% with only a 2.83% increase in bus area. Moreover, because row-address switches were reduced, memory energy consumption dropped by 15.15%.
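The CGH selection rule described above can be illustrated with a small sketch: among the pending transactions, prefer one whose row address and read/write type match the most recently granted transaction. Everything below (names, fields, the fallback policy) is an assumption for illustration, not the paper's implementation.

```c
/* Sketch of the grant-selection idea described for CGH: prefer a pending
 * request that matches the row address and read/write type of the most
 * recently granted transaction. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint32_t row_addr;  /* DRAM row address of the transaction */
    bool     is_write;  /* read/write type */
    bool     valid;     /* request is pending */
} txn_t;

/* Return the index of the request to grant, or -1 if none is pending. */
static int cgh_select(const txn_t *pending, size_t n,
                      uint32_t last_row, bool last_is_write)
{
    int fallback = -1;
    for (size_t i = 0; i < n; i++) {
        if (!pending[i].valid)
            continue;
        if (pending[i].row_addr == last_row &&
            pending[i].is_write == last_is_write)
            return (int)i;          /* no row/type switch needed */
        if (fallback < 0)
            fallback = (int)i;      /* otherwise, first pending request */
    }
    return fallback;
}
```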

8.
The CAN bus is a message-based, event-triggered communication service used mainly in real-time systems such as automobiles and robots. Multiple nodes on a CAN bus operate independently, and message collisions occur when several nodes access the bus at once. Because CAN uses bitwise arbitration to decide which node gets the bus, low-priority nodes lose arbitration while high-priority nodes keep transmitting; this can starve low-priority nodes and cause message loss, which is why CAN scheduling algorithms have been proposed. Scheduling strategies have evolved from static to dynamic, but as the number of nodes grows, system maintenance and scheduling become harder, and a single-bus scheduling strategy can no longer sustain the required system performance. This paper therefore attaches the system's nodes to multiple CAN buses to form a CAN network and proposes a hierarchical dynamic scheduling algorithm for it, splitting node priority arbitration into local priority arbitration on each bus and global priority arbitration across the system, so that the system's highest-priority node is identified and allowed to transmit. A hierarchical dynamic scheduling model is built with the Stateflow tool in MATLAB; node, bus, and function modules are designed according to the CAN data transfer and arbitration mechanisms, and two-level scheduling is implemented in the bus module. Experimental results show that, even with an increased total number of nodes, the algorithm satisfies high-priority transmissions while avoiding starvation of low-priority nodes.
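A minimal sketch of the two-level selection idea, assuming CAN's usual rule that a lower identifier means higher priority: local arbitration picks a winner per bus, and global arbitration picks among the per-bus winners. The data structures and function names are illustrative, not taken from the paper.

```c
/* Two-level (local then global) priority selection across several CAN buses.
 * Assumes 11/29-bit CAN identifiers, so UINT32_MAX can serve as "no message". */
#include <stddef.h>
#include <stdint.h>

#define NO_MSG UINT32_MAX

/* Local arbitration: smallest pending CAN identifier on one bus. */
static uint32_t local_winner(const uint32_t *pending_ids, size_t n)
{
    uint32_t best = NO_MSG;
    for (size_t i = 0; i < n; i++)
        if (pending_ids[i] < best)
            best = pending_ids[i];
    return best;
}

/* Global arbitration: compare the local winners of all buses and return the
 * index of the bus whose node may transmit, or -1 if nothing is pending. */
static int global_winner(uint32_t *const pending[], const size_t counts[],
                         size_t num_buses)
{
    uint32_t best = NO_MSG;
    int best_bus = -1;
    for (size_t b = 0; b < num_buses; b++) {
        uint32_t w = local_winner(pending[b], counts[b]);
        if (w < best) {
            best = w;
            best_bus = (int)b;
        }
    }
    return best_bus;
}
```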

9.
With the widespread use of DDR SDRAM, and to meet the memory access requirements of different platforms, this paper designs and implements an FPGA-based DDR SDRAM controller. A multi-stage pipelined structure improves system performance, and online configuration of parameters accommodates the timing requirements of different memory devices, ensuring the controller's flexibility and extensibility.

10.
This paper introduces the working principles and operating modes of the SDRAM memory controller integrated in NXP's PowerQUICC II series MPC8280 processor, and uses the controller to read from and write to four SDRAM chips on a domestically produced 8280 embedded development board. The results show that the SDRAM memory controller of the domestic 8280 processor functions correctly and can access the external SDRAM memory.

11.
This paper describes the study and implementation of the memory crossbar switch, a key component of the VIM architecture with a direct impact on system performance, covering pipeline support, non-blocking transfer of memory access instructions, correct return of read data, and the arbitration and scheduling algorithm. Finally, the concrete implementation of the crossbar's top-level module and arbitration module is given.

12.
Conventional priority-inversion or fixed-priority arbitration reduces CPU (central processing unit) memory access efficiency and offers no protection of the data in memory. To address this, a digital IP (intellectual property) block is designed that arbitrates among multiple protocols and performs efficient encrypted reads and writes to the CPU's memory unit. Synchronization circuits are combined with handshake protocols to handle clock-domain crossing between the two protocols; efficient arbitration among multiple protocols is studied and a saturation arbitration algorithm is proposed; a pseudo-random encryption algorithm seeded by the address is designed to encrypt and decrypt the data written to and read from memory; and a custom memory access protocol is designed for direct memory access. Simulation and tape-out results show that the design schedules multi-protocol memory accesses well and prevents the data in the CPU's memory unit from being illegally extracted.
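The abstract names an address-seeded pseudo-random cipher but does not specify it. The sketch below only illustrates the idea, using a xorshift keystream seeded by the word address and XORed with the data; it is not the paper's algorithm and is not cryptographically strong.

```c
/* Illustrative address-seeded pseudo-random scrambling of memory data. */
#include <stdint.h>

/* One round of a xorshift-style generator, used here as the keystream. */
static uint32_t keystream(uint32_t addr)
{
    uint32_t x = addr ? addr : 0xDEADBEEFu;  /* avoid the all-zero state */
    x ^= x << 13;
    x ^= x >> 17;
    x ^= x << 5;
    return x;
}

/* Encryption and decryption are the same XOR operation. */
static uint32_t mem_encrypt(uint32_t addr, uint32_t plain)
{
    return plain ^ keystream(addr);
}

static uint32_t mem_decrypt(uint32_t addr, uint32_t cipher)
{
    return cipher ^ keystream(addr);
}
```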

13.
On-chip distributed memory systems have become an attractive solution for the massive parallel memory accesses found in future many-core processors. However, the increasing number of on-chip cores and memory controllers inevitably introduces many remote memory accesses, which generate a large amount of on-chip traffic and put great pressure on the interconnect. This paper optimizes on-chip memory access traffic via runtime thread migration. We first analyze memory access behavior in multi-threaded applications and find that memory access targets and volumes are similar over short periods, which makes runtime prediction feasible; over long periods, however, the access targets shift considerably, motivating us to dynamically move threads towards their data. Based on these observations, we propose a novel low-cost, distributed thread migration algorithm that adjusts thread placement in chains based on benefit estimation. We present details of the workflow, including the triggering and arbitration of migration requests and the procedure for determining the migration chains. Simulation results show that our algorithm achieves a system performance speedup of 11.5% and reduces average memory access latency by 11.0%. It finds a small number of effective thread migrations that optimize on-chip memory access traffic with acceptable hardware and runtime overheads.
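The abstract does not detail how the migration benefit is estimated. As a purely illustrative sketch, one could compare the interconnect traffic a thread generates from its current node with the traffic it would generate from a candidate node, minus a migration cost; the mesh size, cost model, and all names below are assumptions.

```c
/* Illustrative benefit estimate for migrating a thread toward its data on an
 * assumed 4x4 mesh; not the paper's algorithm. */
#include <stdint.h>
#include <stdlib.h>

typedef struct {
    uint32_t accesses_to[16];   /* recent access counts per memory node */
    int      current_node;      /* node the thread currently runs on */
} thread_stats_t;

/* Hop distance between two nodes on the assumed 4x4 mesh. */
static int hops(int a, int b)
{
    return abs(a / 4 - b / 4) + abs(a % 4 - b % 4);
}

/* Estimated benefit (in weighted traffic units) of migrating the thread to
 * `target`: traffic saved minus a fixed migration cost. Positive => migrate. */
static long migration_benefit(const thread_stats_t *t, int target,
                              long migration_cost)
{
    long before = 0, after = 0;
    for (int node = 0; node < 16; node++) {
        before += (long)t->accesses_to[node] * hops(t->current_node, node);
        after  += (long)t->accesses_to[node] * hops(target, node);
    }
    return (before - after) - migration_cost;
}
```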

14.
Conventional memory blocks have a single address input and a single, usually bidirectional, data port. Dual-port memories have two address inputs and two data ports. These memories were designed to facilitate the exchange of data between CPUs within a multiprocessor system: each microprocessor can access the multiport memory and therefore read another processor's data or leave data for it. There are two problems in the design of multiport memory systems. The first, and more trivial, concerns the way in which each processor supplies an address to the memory and how it accesses the memory data bus. This is not a particularly complex problem, and the designer's biggest worry is how to build the interface with the fewest multiplexers and buffers: whenever a processor wishes to access the multiport memory, it takes control of the address and data buses and then accesses the memory. A more fundamental design problem is posed when two or more processors try to access the memory nearly simultaneously. Memory contention is solved by an arbitration circuit that arbitrates between the contending processors, grants access to only one, and forces the others to wait. Fortunately, it is no longer necessary for designers to construct their own dual-port memories from discrete components, since several manufacturers now put the memory, the address and data multiplexers, and the arbitration circuits on a single chip. IDT's application note shows how its dual-port memory operates and how it is used in multiprocessor systems.

15.
A comprehensive model for evaluating crossbar networks is presented, in which memory bandwidth and processor acceptance probability are the primary measures considered. The analytical model includes all important network control policies, such as the bus arbitration and rejected-request handling policies, as well as the home memory concept. Computer simulation validates the correctness of the model. It is confirmed that the home memory concept and the dynamic bus arbitration policy improve network performance.

16.
We describe an efficient, high-level-abstraction, multi-port memory-control unit (MCU) capable of providing data at maximum throughput. This MCU has been developed to take full advantage of FPGA parallelism. Multiple parallel processing entities are possible in modern FPGA devices, but this parallelism is lost when they try to access external memories. To address the problem of multiple entities accessing shared data, we propose an architecture with multiple abstract access ports (AAPs) through which they access one external memory. Bearing in mind that hardware designs in FPGA technology are generally slower than memory chips, it is feasible to build a memory access scheduler that uses a suitable arbitration scheme on top of a fast memory controller, with the AAPs running at slower frequencies. In this way, multiple processing units connected through the AAPs can issue memory transactions at their slower frequencies, and the memory access scheduler can serve all of these transactions by taking full advantage of the memory bandwidth.
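A rough sketch of the decoupling idea, under the assumption that each AAP buffers requests in its own queue at its slower clock while a scheduler at the memory controller's faster clock drains them; queue depth, port count, and the round-robin service order are illustrative choices, not the paper's parameters.

```c
/* Per-port request FIFOs drained by a fast-clock scheduler (illustrative). */
#include <stdbool.h>
#include <stdint.h>

#define NUM_AAPS   4
#define FIFO_DEPTH 8

typedef struct {
    uint32_t addr[FIFO_DEPTH];
    int head, tail, count;
} aap_fifo_t;

static bool fifo_pop(aap_fifo_t *f, uint32_t *addr)
{
    if (f->count == 0)
        return false;
    *addr = f->addr[f->head];
    f->head = (f->head + 1) % FIFO_DEPTH;
    f->count--;
    return true;
}

/* One fast-clock scheduler cycle: serve the next non-empty AAP FIFO and
 * return its port index, or -1 if nothing is pending this cycle. */
static int scheduler_cycle(aap_fifo_t fifos[NUM_AAPS], int last, uint32_t *addr)
{
    for (int i = 1; i <= NUM_AAPS; i++) {
        int p = (last + i) % NUM_AAPS;
        if (fifo_pop(&fifos[p], addr))
            return p;          /* issue this request to the memory controller */
    }
    return -1;
}
```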

17.
Design of an Intelligent CAN Adapter Card   (Cited 3 times: 0 self-citations, 0 by others)
This paper introduces the functions and hardware composition of the intelligent CAN adapter card in a generator condition monitoring instrument, and discusses in detail how the dual-port RAM's address space is allocated between the ISA bus and the on-card microcontroller. For the arbitration problem that arises when the ISA bus and the on-card microcontroller read and write the dual-port RAM simultaneously, a hardware arbitration implementation is proposed. The software design of the adapter card is also outlined.

18.
Memory Access Optimization on the Loongson 2F   (Cited 1 time: 1 self-citation, 0 by others)
In typical data-processing programs, computation time often plays only a secondary role, so the effectiveness of the memory access pattern has a large impact on performance. ATLAS installed on KD-50-I, a high-performance computer system built around the Loongson 2F processor, reached only 30% of the Loongson 2F's theoretical peak in testing. By applying memory access optimizations such as loop unrolling to reduce the number of memory accesses and raise the compute-to-memory-access ratio, data blocking and partial copying to improve locality and reduce cache misses, and the non-blocking cache to speed up memory accesses, ATLAS performance was improved by more than 50%.
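As an illustration only (not ATLAS's actual kernels), the sketch below shows two of the techniques mentioned: blocking the loops of a matrix multiply so each tile stays in cache, and unrolling the inner loop to raise the compute-to-memory-access ratio. The matrix size, tile size, and unroll factor are arbitrary example values.

```c
/* Blocked, partially unrolled matrix multiply: C += A * B. */
#define N  512
#define BS 64          /* tile edge chosen so a tile fits comfortably in cache */

void dgemm_blocked(const double A[N][N], const double B[N][N], double C[N][N])
{
    for (int ii = 0; ii < N; ii += BS)
        for (int kk = 0; kk < N; kk += BS)
            for (int jj = 0; jj < N; jj += BS)
                for (int i = ii; i < ii + BS; i++)
                    for (int k = kk; k < kk + BS; k++) {
                        double a = A[i][k];        /* reused across the j loop */
                        for (int j = jj; j < jj + BS; j += 4) {
                            C[i][j]     += a * B[k][j];
                            C[i][j + 1] += a * B[k][j + 1];
                            C[i][j + 2] += a * B[k][j + 2];
                            C[i][j + 3] += a * B[k][j + 3];
                        }
                    }
}
```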

19.
As networks-on-chip develop and the communication performance of chip multiprocessors improves, memory access performance is becoming the performance bottleneck of chip multiprocessor systems. Current network-on-chip research relies mainly on simulators, yet existing network-on-chip simulators cannot accurately model memory accesses. This paper designs and implements a simulator that models memory accesses, providing an experimental platform for studying memory performance; by testing the simulator with a large set of memory access traces, several network-on-chip design recommendations related to optimizing memory access performance are derived.
