Similar Documents
19 similar documents found (search time: 187 ms)
1.
黄华  张建刚  许鲁 《计算机工程》2006,32(4):55-57,60
The Network Block Device (NBD) first appeared in the Linux operating system in 1997. It maps remote storage space to a local virtual disk that is used exactly like a local physical disk, while the data users store on it travels over the network and resides on the remote end. NBD was merged into the mainline Linux kernel early on and is widely used. This paper describes how NBD was ported to Windows 2000 and the problems encountered along the way.

2.
In today's computer industry, durable data storage is still generally provided by disks. Disk transfer rates keep improving, but far more slowly than microprocessor speeds, so the relative cost of disk access keeps growing. This paper designs and implements a network RamDisk whose key idea is to use the memory of idle remote PCs as a storage device faster than the local disk. Our network RamDisk is implemented under Linux as a block device driver, so no Linux kernel code needs to be modified. We tested it with several applications and obtained good performance.
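The core of such a network RamDisk can be illustrated with a toy client/server pair: blocks live in a dict on the "remote" machine and travel over TCP. This is a minimal Python sketch of the idea only; the wire format (a one-byte R/W opcode plus a block number) and all names are invented for illustration, not the driver described in the paper.

```python
import socket
import struct
import threading

BLOCK_SIZE = 4096

def recv_exact(conn, n):
    """Read exactly n bytes from a socket (or fewer on EOF)."""
    data = b""
    while len(data) < n:
        chunk = conn.recv(n - len(data))
        if not chunk:
            break
        data += chunk
    return data

def serve_ram_blocks(listener):
    """The 'idle PC': serve block reads/writes out of a dict in memory."""
    blocks = {}
    conn, _ = listener.accept()
    with conn:
        while True:
            header = recv_exact(conn, 9)       # 1-byte opcode + 8-byte block no.
            if len(header) < 9:
                break
            op, blkno = struct.unpack(">cQ", header)
            if op == b"W":
                blocks[blkno] = recv_exact(conn, BLOCK_SIZE)
                conn.sendall(b"K")             # ack the write
            elif op == b"R":
                conn.sendall(blocks.get(blkno, bytes(BLOCK_SIZE)))

class NetworkRamDisk:
    """Client side: behaves like a block device, stores blocks remotely."""
    def __init__(self, host, port):
        self.sock = socket.create_connection((host, port))
    def write_block(self, blkno, data):
        self.sock.sendall(struct.pack(">cQ", b"W", blkno) + data)
        recv_exact(self.sock, 1)               # wait for the ack
    def read_block(self, blkno):
        self.sock.sendall(struct.pack(">cQ", b"R", blkno))
        return recv_exact(self.sock, BLOCK_SIZE)

# Demo: a server thread stands in for the remote idle workstation.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
threading.Thread(target=serve_ram_blocks, args=(listener,), daemon=True).start()

disk = NetworkRamDisk("127.0.0.1", listener.getsockname()[1])
disk.write_block(7, b"x" * BLOCK_SIZE)
roundtrip = disk.read_block(7)                 # data came back over the network
```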

3.
In Vista, Microsoft introduced two new memory technologies, ReadyBoost and Windows SuperFetch; SuperFetch preloads your most frequently used applications into memory for fast access. The earlier Windows RamDisk works in a similar way: it simulates a hard disk partition in a portion of system memory. Since system memory is far faster than a hard disk, a RamDisk can noticeably improve the performance of applications that access the disk frequently (for example database programs, disk swap files, and web server software). However, many users do not know how to use a RamDisk to speed up their programs, so below we give several examples using RamDiskXP:

5.
In virtual machine systems, as the number of VMs and the demands of applications keep growing, memory capacity has become the main bottleneck for application performance. To improve the paging performance of memory-intensive and I/O-intensive programs, this paper proposes REMOCA, a remote disk caching mechanism for virtual machines that lets a VM running on one physical host use the memory of other physical hosts as its second-level disk cache. Because network transfer latency is far lower than disk access latency, replacing disk accesses with network transfers effectively reduces a VM's average disk access latency; REMOCA's goal is to eliminate as many disk accesses as possible. REMOCA runs inside the virtual machine monitor, where it intercepts and handles events such as page eviction and disk access in the guest. It can be combined with existing VM memory management mechanisms (such as ballooning and shadow caching) to provide more flexible memory resource management policies. Experimental data show that REMOCA effectively mitigates the impact of page thrashing on VM performance and substantially improves the performance of I/O-intensive applications inside VMs.
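REMOCA's interception of eviction and disk-access events can be sketched as follows. This is an illustrative model, not REMOCA's implementation: a dict stands in for the remote host's memory, and an LRU `OrderedDict` plays the second-level cache; all names are invented.

```python
from collections import OrderedDict

class RemoteDiskCache:
    """Second-level cache standing in for another physical host's memory."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()          # page id -> contents, LRU order
    def put(self, pid, data):               # called on guest page eviction
        self.pages[pid] = data
        self.pages.move_to_end(pid)
        if len(self.pages) > self.capacity:
            self.pages.popitem(last=False)  # drop LRU; disk still has a copy
    def get(self, pid):
        if pid in self.pages:
            self.pages.move_to_end(pid)
            return self.pages[pid]
        return None

class Hypervisor:
    def __init__(self, cache, disk):
        self.cache, self.disk = cache, disk
        self.disk_reads = 0
    def on_evict(self, pid, data):
        # Intercepted page eviction: stash the page in remote memory.
        self.cache.put(pid, data)
    def on_disk_read(self, pid):
        # Intercepted disk access: a remote-memory hit avoids the disk.
        data = self.cache.get(pid)
        if data is None:
            self.disk_reads += 1
            data = self.disk[pid]
        return data

disk = {i: f"page-{i}".encode() for i in range(10)}
vmm = Hypervisor(RemoteDiskCache(capacity=4), disk)
for i in range(6):                 # the guest evicts pages 0..5
    vmm.on_evict(i, disk[i])
hit = vmm.on_disk_read(5)          # recent eviction: served from remote memory
miss = vmm.on_disk_read(0)         # already pushed out of the small cache: disk
```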

6.
大江东去 《电脑迷》2011,(10):68-68
Because of his job, Xiao Ben often has to exchange all kinds of office data between his workplace and his home. For convenience, he usually downloads resources from the network to a local disk first and then uploads them to a cloud storage service. But every exchange means downloading the network resource to the local disk and uploading it again; back and forth

7.
Quarterdeck MagnaRAM compresses and optimizes the data and instructions of applications in memory so that they occupy less space, thereby increasing the amount of usable memory; it can also compress the swap file on disk to enlarge the virtual memory area, making applications run faster and more smoothly. In our tests, under MagnaRAM's compression, 32 MB of physical memory provided the effect of 47 MB of usable memory (not the highest value achievable);
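The principle behind such tools, that resident program data is redundant and therefore compresses well enough to stretch physical memory, can be demonstrated with `zlib`. This only illustrates the effect; it is not MagnaRAM's proprietary algorithm, and the sample data is invented.

```python
import zlib

PAGE = 4096

def compress_pages(pages):
    """Compress in-memory pages; report bytes held vs. bytes represented."""
    stored = [zlib.compress(p) for p in pages]
    held = sum(len(s) for s in stored)
    represented = len(pages) * PAGE
    return stored, held, represented

# Typical program data and instructions are highly redundant.
pages = [(b"instruction+data pattern " * 200)[:PAGE] for _ in range(32)]
stored, held, represented = compress_pages(pages)
ratio = represented / held      # >1: the same RAM now "holds" more data
restored = zlib.decompress(stored[0])   # compression is lossless
```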

8.
This paper discusses a design method for microcomputers that breaks through the small memory interpretation space to handle larger application programs, along with an implementation that uses virtual disk techniques in the larger extended memory to further increase program execution speed. It also explains techniques for automatically generating the program's runtime environment and its operating system environment.

9.
付湘  倪宏  朱明 《计算机工程》2007,33(24):83-85
This paper proposes a memory compression mechanism suitable for embedded devices. Using Linux's page swapping mechanism, it creates a memory-backed swap partition. When the system runs low on memory and needs to swap pages out to that partition, it compresses those pages, giving applications and users more usable memory. A free-block matching algorithm avoids the excessive memory fragmentation that would hurt system performance. Experiments show that the mechanism typically yields more than 50% additional usable memory.
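A compressed, memory-backed swap area of this kind might be sketched as below. The first-fit scan over a free list merely stands in for the paper's free-block matching algorithm, and all names are illustrative, not the authors' code.

```python
import zlib

class CompressedSwap:
    """Memory-backed swap area: pages are compressed on swap-out.
    A first-fit search over free blocks reuses holes left by freed
    pages (a stand-in for the paper's free-block matching)."""
    def __init__(self, size):
        self.arena = bytearray(size)
        self.free = [(0, size)]          # (offset, length) holes
        self.index = {}                  # page id -> (offset, length)
    def swap_out(self, pid, page):
        data = zlib.compress(page)
        for i, (off, length) in enumerate(self.free):
            if length >= len(data):      # first fit
                self.arena[off:off + len(data)] = data
                self.index[pid] = (off, len(data))
                rest = (off + len(data), length - len(data))
                self.free[i:i + 1] = [rest] if rest[1] else []
                return True
        return False                     # no hole large enough
    def swap_in(self, pid):
        off, length = self.index.pop(pid)
        self.free.append((off, length))  # (hole coalescing omitted for brevity)
        return zlib.decompress(bytes(self.arena[off:off + length]))

swap = CompressedSwap(8192)              # room for two raw 4 KB pages
page = (b"embedded workload data " * 200)[:4096]
assert swap.swap_out(1, page)
assert swap.swap_out(2, page)            # both fit: compression stretched the area
restored = swap.swap_in(1)
```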

10.
On IBM PC series microcomputers using the BASIC language, a user program can use only about 60 KB of memory. In large matrix computations this memory shortage is a major obstacle: programs frequently exchange data with the disk, sacrificing machine time to recover memory space. This article introduces a method that does not resort to the disk, but instead makes full use of the space occupied by the system programs and the BASIC interpreter

11.
Workstation clusters provide significant aggregate amounts of resources, including processing power and main memory. In this paper we explore the collective use of main memory in a workstation cluster to boost the performance of applications that require more memory than a single workstation can provide. We describe the design, simulation, implementation, and evaluation of a pager that uses the main memory of remote workstations in a workstation cluster as a faster-than-disk paging device and provides reliability in the case of single workstation failures as well as adaptivity to variations in network and disk load. Our pager has been implemented as a block device driver linked to the Digital UNIX operating system, without any modifications to the kernel code. Using several test applications we measure the performance of remote memory paging over an Ethernet interconnection network and find it to be up to twice as fast as traditional disk paging. We also evaluate the performance of various reliability policies and demonstrate their feasibility even over low-bandwidth networks such as Ethernet. We conclude that the benefits of reliable remote memory paging in workstation clusters are significant today and are likely to increase in the near future.
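One common way to provide single-failure reliability for remote memory paging is XOR parity across memory servers, in the spirit of the reliability policies evaluated above. The sketch below is an illustration under stated assumptions (in-process dicts stand in for remote servers; all names invented), not the authors' pager.

```python
from functools import reduce

PAGE = 64  # tiny pages keep the demo readable

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

class ReliableRemotePager:
    """Pages go to remote memory servers; each stripe of n pages keeps an
    XOR parity page so the contents of one failed server can be rebuilt."""
    def __init__(self, n):
        self.n = n
        self.servers = [dict() for _ in range(n)]  # stand-ins for remote RAM
        self.parity = {}                           # stripe id -> parity page
    def page_out(self, pageno, data):
        stripe, server = divmod(pageno, self.n)
        self.servers[server][pageno] = data
        members = [self.servers[i].get(stripe * self.n + i, bytes(PAGE))
                   for i in range(self.n)]
        self.parity[stripe] = reduce(xor, members)
    def page_in(self, pageno, failed=None):
        stripe, server = divmod(pageno, self.n)
        if server != failed:
            return self.servers[server][pageno]
        # Reconstruct the lost page from the survivors plus parity.
        acc = self.parity[stripe]
        for i in range(self.n):
            if i != server:
                acc = xor(acc, self.servers[i].get(stripe * self.n + i,
                                                   bytes(PAGE)))
        return acc

pager = ReliableRemotePager(n=3)
pages = {p: bytes([p]) * PAGE for p in range(6)}
for p, d in pages.items():
    pager.page_out(p, d)
normal = pager.page_in(4)
recovered = pager.page_in(4, failed=1)   # server 1 "crashed"
```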

12.
A novel storage hierarchy for networked video-on-demand   Total citations: 1 (self-citations: 0; citations by others: 1)
To address the scarcity of fast storage resources and limited disk speed in multimedia applications, this work implements a mechanism that uses idle network memory resources as a fast device for buffering media, extending the storage hierarchy in multimedia environments, and presents a simple and fast cache replacement algorithm. Preliminary results show that the mechanism can accelerate access to stored data in multimedia environments.

13.
Process checkpointing is a procedure which periodically saves the process state to stable storage. Most checkpointing facilities select hard disks for archiving. However, disk seek time is limited by the speed of the read-write heads, so checkpointing a process to a local disk consumes extensive disk bandwidth. In this paper, we propose an approach that exploits the memory of idle workstations as faster storage for checkpointing. In our scheme, autonomous machines which submit jobs to the computation server offer their physical memory to the server for job checkpointing. Eight applications are used to measure remote memory performance under four checkpointing policies. Experimental results show that remote memory reduces at least 34.5 per cent of the overhead for sequential checkpointing and 32.1 per cent for incremental checkpointing. Additionally, checkpointing a running process to remote memory requires only 60 per cent of the local disk checkpoint latency. Copyright © 1999 John Wiley & Sons, Ltd.
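Incremental checkpointing to remote memory can be sketched by shipping only pages whose contents changed since the previous epoch. The code below is a toy model (a dict stands in for the peer workstation's memory, and content hashing stands in for dirty-page tracking), not the paper's facility; all names are invented.

```python
import hashlib

class RemoteMemoryCheckpointer:
    """Checkpoint a process's pages into a peer's memory (a dict here).
    Incremental mode ships only pages changed since the last epoch."""
    def __init__(self):
        self.remote = {}      # stand-in for memory offered by an idle machine
        self.last_digest = {}
        self.pages_sent = 0
    def checkpoint(self, pages, incremental=True):
        for pid, data in pages.items():
            digest = hashlib.sha256(data).digest()
            if incremental and self.last_digest.get(pid) == digest:
                continue      # clean page: skip the network transfer
            self.remote[pid] = data
            self.last_digest[pid] = digest
            self.pages_sent += 1
    def restore(self):
        return dict(self.remote)

ckpt = RemoteMemoryCheckpointer()
state = {i: bytes([i]) * 4096 for i in range(8)}
ckpt.checkpoint(state)                 # first checkpoint: all 8 pages travel
state[3] = b"\xff" * 4096              # the process dirties one page
ckpt.checkpoint(state)                 # incremental: only page 3 travels
sent = ckpt.pages_sent
```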

14.
The development of intelligent vehicles places higher demands on control systems, so a wireless control system for an intelligent vehicle was designed. Based on ARM Linux, the on-board client was developed and implemented; on top of a cross-compilation development environment, the embedded Linux kernel was configured and built, and the coding of the data acquisition and transmission layer is described in detail, implementing image capture and video recording, network transmission, and monitoring. Each module is designed as an independent process and each submodule runs as an independent thread; the monitoring client exchanges data through files on a RAM disk, effectively achieving asynchronous execution and remote monitoring.

15.
In the standard kernel organization on a bus-based multiprocessor, all processors share the code and data of the operating system; explicit synchronization is used to control access to kernel data structures. Distributed-memory multicomputers use an alternative approach, in which each instance of the kernel performs local operations directly and uses remote invocation to perform remote operations. Either approach to interkernel communication can be used in a large-scale shared-memory multiprocessor. In this paper we discuss the issues and architectural features that must be considered when choosing between remote memory access and remote invocation. We focus in particular on experience with the Psyche multiprocessor operating system on the BBN Butterfly Plus. We find that the Butterfly architecture is biased towards the use of remote invocation for kernel operations that perform a significant number of memory references, and that current architectural trends are likely to increase this bias in future machines. This conclusion suggests that straightforward parallelization of existing kernels (e.g. by using semaphores to protect shared data) is unlikely in the future to yield acceptable performance. We note, however, that remote memory access is useful for small, frequently-executed operations, and is likely to remain so.

16.
The performance of a host accessing remote memory over a high-speed network now matches or far exceeds that of accessing a local disk, and various optimizations can raise the performance of a network memory system further. Based on a Linux network memory system (LNMS), this paper proposes a new client-side prefetching algorithm, m-ppm, which extends the multi-Markov-chain prefetching model to better fit LNMS. Two other common prefetching algorithms were implemented on LNMS for comparison; the experimental data show that m-ppm is more effective for multi-user workloads.
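A PPM-style multi-order Markov predictor, the family to which m-ppm belongs, can be sketched as follows. This illustrates only the general scheme (several context lengths are tracked and the longest matching context wins); it is not the m-ppm algorithm itself, and all names are invented.

```python
from collections import defaultdict, Counter

class MarkovPrefetcher:
    """Multi-order Markov predictor over the page access stream: each
    context of length k records which page tends to follow it."""
    def __init__(self, max_order=3):
        self.max_order = max_order
        self.counts = defaultdict(Counter)  # context tuple -> next-page counts
        self.history = []
    def access(self, page):
        # Update every order's model with the context that preceded this page.
        for k in range(1, self.max_order + 1):
            if len(self.history) >= k:
                ctx = tuple(self.history[-k:])
                self.counts[ctx][page] += 1
        self.history.append(page)
    def predict(self):
        # Prefer the longest context that has been observed before (PPM style).
        for k in range(self.max_order, 0, -1):
            ctx = tuple(self.history[-k:])
            if ctx in self.counts:
                return self.counts[ctx].most_common(1)[0][0]
        return None

pf = MarkovPrefetcher(max_order=3)
for page in [1, 2, 3, 1, 2, 3, 1, 2]:
    pf.access(page)
nxt = pf.predict()   # the stream repeats 1,2,3, so after ...1,2 expect 3
```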

17.
The CC-NUMA (cache-coherent non-uniform memory access) architecture is an attractive solution to scalable servers. The performance of a CC-NUMA system heavily depends on the number of accesses to remote memory through an interconnection network. To reduce the number of remote accesses, an operating system needs to exploit the potential locality of the architecture. This paper describes the design and implementation of a UNIX-based operating system supporting the CC-NUMA architecture. The operating system implements various enhancements by revising kernel algorithms and data structures. This paper also analyzes the performance of the enhanced operating system by running commercial benchmarks on a real CC-NUMA system. The performance analysis shows that the operating system can achieve improved performance and scalability for CC-NUMA by implementing kernel data striping, localization and load balancing. Copyright © 2003 John Wiley & Sons, Ltd.

18.
An efficient logging mechanism for distributed parallel databases   Total citations: 1 (self-citations: 0; citations by others: 1)
Reducing the number of forced log writes has long been a goal in the design of distributed atomic commit protocols. Exploiting the gap between ultra-high-speed networks and disk access speeds, this paper proposes an efficient cooperating memory cached log mechanism (CMCL). It achieves log reliability by having transaction participants back up each other's logs in memory, thereby eliminating forced writes. After presenting the principle of CMCL and using it to improve the two-phase commit protocol, we analyze and compare its performance; the results show that the CMCL mechanism is efficient in suitable environments.
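The CMCL idea of replacing a forced (synchronous) disk write with a local memory write plus a network send to a peer can be sketched as below. Class and method names are invented for illustration; this is a model of the principle, not the paper's protocol.

```python
class Participant:
    """A 2PC participant whose log records are mirrored into a peer's
    memory instead of being forced to disk (the CMCL idea, sketched)."""
    def __init__(self, name):
        self.name = name
        self.local_log = []
        self.peer_copies = []   # records held on behalf of other participants
        self.peer = None
    def log(self, record):
        # One local memory write plus one network send replaces a forced
        # disk write; the record survives as long as either copy survives.
        self.local_log.append(record)
        self.peer.peer_copies.append((self.name, record))
    def recover_from(self, peer):
        # After a crash, replay our records cached in the peer's memory.
        return [rec for owner, rec in peer.peer_copies if owner == self.name]

a, b = Participant("A"), Participant("B")
a.peer, b.peer = b, a
a.log("prepare T1")
a.log("commit T1")
b.log("prepare T1")
a.local_log = []                 # A crashes, losing its in-memory log
replayed = a.recover_from(b)     # the records survive in B's memory
```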

19.
The ever increasing memory demands of many scientific applications and the complexity of today’s shared computational resources still require the occasional use of virtual memory, network memory, or even out-of-core implementations, with well known drawbacks in performance and usability. In Mills et al. (Adapting to memory pressure from within scientific applications on multiprogrammed COWS. In: International Parallel and Distributed Processing Symposium, IPDPS, Santa Fe, NM, 2004), we introduced a basic framework for a runtime, user-level library, MMlib, in which DRAM is treated as a dynamic size cache for large memory objects residing on local disk. Application developers can specify and access these objects through MMlib, enabling their application to execute optimally under variable memory availability, using as much DRAM as fluctuating memory levels will allow. In this paper, we first extend our earlier MMlib prototype from a proof of concept to a usable, robust, and flexible library. We present a general framework that enables fully customizable memory malleability in a wide variety of scientific applications. We provide several necessary enhancements to the environment sensing capabilities of MMlib, and introduce a remote memory capability, based on MPI communication of cached memory blocks between ‘compute nodes’ and designated memory servers. The increasing speed of interconnection networks makes a remote memory approach attractive, especially at the large granularity present in large scientific applications. We show experimental results from three important scientific applications that require the general MMlib framework. The memory-adaptive versions perform nearly optimally under constant memory pressure and execute harmoniously with other applications competing for memory, without thrashing the memory system. 
Under constant memory pressure, we observe execution time improvements of factors between three and five over relying solely on the virtual memory system. With remote memory employed, these factors are even larger and significantly better than other, system-level remote memory implementations.
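The DRAM-as-malleable-cache behaviour that MMlib provides can be modelled in a few lines. This sketch uses invented names and a dict in place of the on-disk object store; it is not MMlib's API, only an illustration of shrinking the resident set gracefully when memory pressure rises.

```python
from collections import OrderedDict

class MalleableCache:
    """DRAM as a dynamic-size cache for objects living on disk (a dict
    stands in for the disk file): when sensed memory pressure rises, the
    budget shrinks and cold objects are written back and dropped."""
    def __init__(self, budget, disk):
        self.budget = budget
        self.disk = disk
        self.cache = OrderedDict()          # key -> object, LRU order
    def _shrink_to_budget(self):
        while len(self.cache) > self.budget:
            key, val = self.cache.popitem(last=False)
            self.disk[key] = val            # write back before evicting
    def set_budget(self, budget):           # reacts to memory availability
        self.budget = budget
        self._shrink_to_budget()
    def access(self, key):
        if key not in self.cache:
            self.cache[key] = self.disk[key]   # fault the object in
        self.cache.move_to_end(key)
        self._shrink_to_budget()
        return self.cache[key]

disk = {i: f"block{i}" for i in range(100)}
cache = MalleableCache(budget=50, disk=disk)
for i in range(50):
    cache.access(i)
cache.set_budget(10)            # another job needs memory: shrink gracefully
resident = len(cache.cache)     # only the 10 hottest objects stay in DRAM
```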


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)    京ICP备09084417号-23

京公网安备 11010802026262号