Found 19 similar documents; search time: 187 ms
1.
2.
3.
In Vista, Microsoft introduced two new memory technologies, ReadyBoost and Windows SuperFetch. SuperFetch preloads your most frequently used applications into memory for fast access. In fact, the RamDisk of earlier Windows versions works along similar lines: it turns part of system memory into a simulated hard-disk partition. Because memory access is far faster than disk access, a RamDisk can markedly improve the performance of applications that access the disk frequently (database engines, programs that swap files to disk, web server software, and so on). Many users, however, do not know how to use a RamDisk to speed up their programs, so below we walk through a few examples using RamDiskXP:
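The speedup a RAM disk provides is easy to demonstrate. The sketch below (in Python rather than RamDiskXP, which is a Windows tool) times small write/read cycles against a memory-backed directory versus an ordinary temp directory; the `/dev/shm` path is an assumption that holds on most Linux systems, where it is a tmpfs mount playing the role of a RAM disk.

```python
import os
import tempfile
import time

def time_io(dirpath, payload, rounds=20):
    """Write `payload` to a temp file in `dirpath`, read it back, and delete
    it, `rounds` times; return elapsed wall-clock seconds."""
    start = time.perf_counter()
    for _ in range(rounds):
        with tempfile.NamedTemporaryFile(dir=dirpath, delete=False) as f:
            f.write(payload)
            name = f.name
        with open(name, "rb") as f:
            data = f.read()
        os.remove(name)
        assert data == payload
    return time.perf_counter() - start

payload = os.urandom(1 << 20)  # 1 MiB of random data
# /dev/shm is a memory-backed tmpfs on most Linux systems; it stands in for
# a RAM disk here. Fall back to the normal temp dir elsewhere.
ram_dir = "/dev/shm" if os.path.isdir("/dev/shm") else tempfile.gettempdir()
disk_dir = tempfile.gettempdir()
print(f"memory-backed dir: {time_io(ram_dir, payload):.3f}s")
print(f"disk-backed dir:   {time_io(disk_dir, payload):.3f}s")
```

On a machine whose temp directory sits on a real disk, the first timing is typically the smaller one; if both paths end up on the same filesystem the two numbers will be close.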
4.
5.
In virtual machine systems, as the number of VMs and the demands of their applications keep growing, memory capacity has become the main bottleneck for application performance. To improve the page-swapping performance of memory-intensive and I/O-intensive programs, this paper proposes REMOCA, a remote disk cache mechanism for virtual machines that lets a VM running on one physical host use the memory of other physical hosts as a second-level disk cache. Because network transfer latency is far lower than disk access latency, replacing disk accesses with network transfers effectively reduces a VM's average disk access latency; REMOCA's goal is to eliminate as many disk accesses as possible. REMOCA runs inside the virtual machine monitor, where its basic mode of operation is to intercept and handle guest events such as page eviction and disk access. It can be combined with existing VM memory management mechanisms (such as ballooning and shadow caching) to provide more flexible memory resource management policies. Experimental data show that REMOCA effectively reduces the impact of page thrashing on VM performance and substantially improves the performance of I/O-intensive applications inside the VM.
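REMOCA's two-level arrangement (local memory in front of remote memory in front of disk) can be sketched in a few lines. Everything below is illustrative: the dict standing in for a remote host's RAM, the class name, and the string-valued "disk" are invented for the example, not taken from the paper.

```python
from collections import OrderedDict

class RemoteDiskCache:
    """Toy model of the REMOCA idea: pages evicted from local memory are
    intercepted and pushed to a remote host's memory (here a plain dict),
    which acts as a second-level disk cache."""

    def __init__(self, local_capacity, remote):
        self.local = OrderedDict()   # page -> data, in LRU order
        self.capacity = local_capacity
        self.remote = remote         # stands in for another host's RAM
        self.disk_reads = 0

    def read(self, page):
        if page in self.local:       # local memory hit
            self.local.move_to_end(page)
            return self.local[page]
        if page in self.remote:      # remote-memory hit: a network transfer
            data = self.remote.pop(page)
        else:                        # slow path: a real disk access
            self.disk_reads += 1
            data = f"disk:{page}"
        self.local[page] = data
        if len(self.local) > self.capacity:
            evicted, old = self.local.popitem(last=False)
            self.remote[evicted] = old   # intercept eviction, keep remotely
        return data
```

Reading a three-page working set twice with local capacity two causes only three disk reads; the entire second pass is served from the remote store.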
6.
Because of his job, Xiao Ben often has to exchange all kinds of office data between his workplace and his home. To make this easier, he usually downloads resources from the network to a local disk first and then uploads them to a cloud storage service. But every exchange means downloading the network resource to local disk and then uploading it to cloud storage again; back and forth like this…
7.
Wu Jun (吴俊), 《电脑界(应用文萃)》, 2000, (1): 51
Quarterdeck MagnaRAM compresses and optimizes the data and instructions of applications using memory so that they occupy less space, increasing the amount of memory effectively available; it can also compress the swap file on disk to enlarge the virtual memory area, making applications run faster and more smoothly. In testing, with MagnaRAM's compression and optimization, 32 MB of physical memory delivered the effect of 47 MB of usable memory (and that is not the upper limit).
8.
This paper discusses design methods for breaking through the cramped in-memory interpretation space on microcomputers to handle larger application programs, and how to use virtual disk (RAM disk) technology in the larger extended memory to further improve program execution speed. It also describes techniques for automatically generating the program's runtime environment and its operating-system environment.
9.
10.
Using BASIC on IBM PC-series microcomputers, a user's source program can use only about 60 KB of memory. In large matrix computations this memory shortage is a major obstacle: programs frequently exchange data with the disk, trading machine time for memory space. This paper introduces a method that avoids the disk and instead makes full use of the space occupied by system programs and the BASIC interpreter.
11.
George Dramitinos Evangelos P. Markatos 《Journal of Parallel and Distributed Computing》1999,58(3):505
Workstation clusters provide significant aggregate amounts of resources, including processing power and main memory. In this paper we explore the collective use of main memory in a workstation cluster to boost the performance of applications that require more memory than a single workstation can provide. We describe the design, simulation, implementation, and evaluation of a pager that uses the main memory of remote workstations in a workstation cluster as a faster-than-disk paging device, providing reliability in the case of single workstation failures and adaptivity to variations in network and disk load. Our pager has been implemented as a block device driver linked to the Digital UNIX operating system, without any modifications to the kernel code. Using several test applications we measure the performance of remote memory paging over an Ethernet interconnection network and find it to be up to twice as fast as traditional disk paging. We also evaluate the performance of various reliability policies and demonstrate their feasibility even over low-bandwidth networks such as Ethernet. We conclude that the benefits of reliable remote memory paging in workstation clusters are significant today and are likely to increase in the near future.
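One natural reliability policy for such a pager, mirroring each evicted page on a second workstation, can be sketched as follows. This is a toy model under assumptions of our own (host names, hash-based placement, the class name), not the paper's actual block device driver.

```python
class ReliableRemotePager:
    """Sketch of paging to remote workstation memory with mirroring: every
    page lives on two hosts, so a single workstation failure loses nothing.
    (Parity-based schemes are another possible reliability policy.)"""

    def __init__(self, hosts):
        self.hosts = {h: {} for h in hosts}   # host name -> page store

    def page_out(self, page, data):
        names = sorted(self.hosts)
        primary = names[hash(page) % len(names)]
        mirror = names[(hash(page) + 1) % len(names)]
        self.hosts[primary][page] = data
        self.hosts[mirror][page] = data       # mirrored copy

    def page_in(self, page):
        for store in self.hosts.values():     # any surviving copy will do
            if page in store:
                return store[page]
        raise KeyError(page)                  # would fall back to disk

    def fail(self, host):
        """Simulate a workstation crash: its memory contents vanish."""
        del self.hosts[host]
```

With three hosts, any single `fail()` still leaves at least one copy of every paged-out page reachable via `page_in()`.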
12.
A New Storage Hierarchy for Networked Video-on-Demand (cited 1 time in total: 0 self-citations, 1 by others)
To address the scarcity of fast storage resources and the limited speed of disks in multimedia applications, this paper implements a mechanism that uses idle memory elsewhere on the network as a fast device for buffering media, extending the storage hierarchy of multimedia environments, and presents a simple, fast cache replacement algorithm. Preliminary results show that the mechanism can speed up access to stored data in multimedia environments.
13.
Process checkpointing is a procedure which periodically saves process state to stable storage. Most checkpointing facilities use hard disks for archiving. However, disk seek time is limited by the speed of the read-write heads, so checkpointing a process to a local disk requires extensive disk bandwidth. In this paper, we propose an approach that exploits the memory of idle workstations as faster storage for checkpointing. In our scheme, autonomous machines which submit jobs to the computation server offer their physical memory to the server for job checkpointing. Eight applications are used to measure remote memory performance under four checkpointing policies. Experimental results show that remote memory reduces the overhead of sequential checkpointing by at least 34.5 per cent and of incremental checkpointing by 32.1 per cent. Additionally, checkpointing a running process to remote memory requires only 60 per cent of the latency of a local-disk checkpoint. Copyright © 1999 John Wiley & Sons, Ltd.
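The incremental policy mentioned above can be sketched as: hash each page and ship to the checkpoint store only the pages whose hash changed since the last checkpoint. The class below and the dict standing in for a remote workstation's memory are illustrative, not the paper's implementation.

```python
import hashlib

class IncrementalCheckpointer:
    """Toy incremental checkpointing: only pages whose content changed since
    the last checkpoint are shipped to the (remote-memory) store."""

    def __init__(self, store):
        self.store = store    # stands in for memory on an idle workstation
        self.digests = {}     # page id -> content hash at last checkpoint

    def checkpoint(self, pages):
        """Ship dirty pages; return how many were actually transferred."""
        shipped = 0
        for pid, data in pages.items():
            d = hashlib.sha256(data).hexdigest()
            if self.digests.get(pid) != d:     # page is dirty (or new)
                self.store[pid] = bytes(data)
                self.digests[pid] = d
                shipped += 1
        return shipped

    def restore(self):
        """Rebuild the full page set from the checkpoint store."""
        return dict(self.store)
```

A second checkpoint after touching one of two pages transfers only that page, yet `restore()` still reproduces the complete state.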
14.
15.
Eliseu M. Chaves Prakash Ch. Das Thomas J. Leblanc Brian D. Marsh Michael L. Scott 《Concurrency and Computation》1993,5(3):171-191
In the standard kernel organization on a bus-based multiprocessor, all processors share the code and data of the operating system; explicit synchronization is used to control access to kernel data structures. Distributed-memory multicomputers use an alternative approach, in which each instance of the kernel performs local operations directly and uses remote invocation to perform remote operations. Either approach to interkernel communication can be used in a large-scale shared-memory multiprocessor. In this paper we discuss the issues and architectural features that must be considered when choosing between remote memory access and remote invocation. We focus in particular on experience with the Psyche multiprocessor operating system on the BBN Butterfly Plus. We find that the Butterfly architecture is biased towards the use of remote invocation for kernel operations that perform a significant number of memory references, and that current architectural trends are likely to increase this bias in future machines. This conclusion suggests that straightforward parallelization of existing kernels (e.g. by using semaphores to protect shared data) is unlikely in the future to yield acceptable performance. We note, however, that remote memory access is useful for small, frequently-executed operations, and is likely to remain so.
16.
17.
Moon Seok Chang 《Concurrency and Computation》2003,15(14):1257-1274
The CC-NUMA (cache-coherent non-uniform memory access) architecture is an attractive solution to scalable servers. The performance of a CC-NUMA system heavily depends on the number of accesses to remote memory through an interconnection network. To reduce the number of remote accesses, an operating system needs to exploit the potential locality of the architecture. This paper describes the design and implementation of a UNIX-based operating system supporting the CC-NUMA architecture. The operating system implements various enhancements by revising kernel algorithms and data structures. This paper also analyzes the performance of the enhanced operating system by running commercial benchmarks on a real CC-NUMA system. The performance analysis shows that the operating system can achieve improved performance and scalability for CC-NUMA by implementing kernel data striping, localization and load balancing. Copyright © 2003 John Wiley & Sons, Ltd.
18.
An Efficient Log Mechanism for Distributed Parallel Databases (cited 1 time in total: 0 self-citations, 1 by others)
Reducing the number of forced log writes has long been a goal in the design of distributed atomic commit protocols. Exploiting the gap between the data access speeds of ultra-fast networks and disks, this paper proposes an efficient cooperating memory cached log mechanism (CMCL). It obtains log reliability by having transaction participants back up log records in each other's memory, thereby eliminating forced writes. After presenting the principle of CMCL and using it to improve the two-phase commit protocol, the paper analyzes and compares its performance; the results show that CMCL is efficient in suitable environments.
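The core CMCL idea, replacing a forced disk write with a replicated write into a peer participant's memory, can be sketched as follows. The names and structure are invented for illustration; in the real mechanism this sits inside the commit protocol rather than standing alone.

```python
class CooperativeLog:
    """Sketch of a CMCL-style log: before acknowledging, a participant
    replicates each log record into a peer's memory buffer over the (fast)
    network, instead of forcing the record to its own disk."""

    def __init__(self, name):
        self.name = name
        self.records = []    # this participant's in-memory log
        self.backups = {}    # peer name -> records replicated here
        self.peer = None

    def pair(self, other):
        """Two participants agree to hold backup copies for each other."""
        self.peer, other.peer = other, self

    def append(self, record):
        self.records.append(record)
        # The "network write" into the peer's memory stands in for the
        # forced disk write the protocol would otherwise require.
        self.peer.backups.setdefault(self.name, []).append(record)

    def recover_from_peer(self):
        """After a crash and restart, refetch this node's log from its peer."""
        return list(self.peer.backups.get(self.name, []))
```

If participant `a` crashes after logging `prepare`/`commit` records, it can rebuild its log from `b`'s memory, as long as both did not fail together (the reliability assumption behind eliminating forced writes).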
19.
Richard T. Mills Chuan Yue Andreas Stathopoulos Dimitrios S. Nikolopoulos 《Journal of Grid Computing》2007,5(2):213-234
The ever increasing memory demands of many scientific applications and the complexity of today's shared computational resources still require the occasional use of virtual memory, network memory, or even out-of-core implementations, with well-known drawbacks in performance and usability. In Mills et al. (Adapting to memory pressure from within scientific applications on multiprogrammed COWs. In: International Parallel and Distributed Processing Symposium, IPDPS, Santa Fe, NM, 2004), we introduced a basic framework for a runtime, user-level library, MMlib, in which DRAM is treated as a dynamic-size cache for large memory objects residing on local disk. Application developers can specify and access these objects through MMlib, enabling their application to execute optimally under variable memory availability, using as much DRAM as fluctuating memory levels will allow. In this paper, we first extend our earlier MMlib prototype from a proof of concept to a usable, robust, and flexible library. We present a general framework that enables fully customizable memory malleability in a wide variety of scientific applications. We provide several necessary enhancements to the environment-sensing capabilities of MMlib, and introduce a remote memory capability, based on MPI communication of cached memory blocks between 'compute nodes' and designated memory servers. The increasing speed of interconnection networks makes a remote memory approach attractive, especially at the large granularity present in large scientific applications. We show experimental results from three important scientific applications that require the general MMlib framework. The memory-adaptive versions perform nearly optimally under constant memory pressure and execute harmoniously with other applications competing for memory, without thrashing the memory system. Under constant memory pressure, we observe execution time improvements of factors between three and five over relying solely on the virtual memory system. With remote memory employed, these factors are even larger and significantly better than other, system-level remote memory implementations.
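MMlib's central abstraction, DRAM as a dynamically sized cache over disk-resident objects, can be sketched in a few lines. This is a toy model: the class name, the byte budget, and the LRU eviction policy are assumptions of the sketch, while the real library also senses memory availability at runtime to decide when to shrink or grow.

```python
from collections import OrderedDict

class MalleableCache:
    """Sketch of memory malleability: cache disk-resident objects in DRAM
    under a byte budget, and shrink the cache when the budget drops (as it
    would under memory pressure), rather than letting the OS page blindly."""

    def __init__(self, budget, load_from_disk):
        self.budget = budget
        self.load = load_from_disk   # callable: object id -> bytes
        self.cache = OrderedDict()   # object id -> bytes, in LRU order

    def _used(self):
        return sum(len(v) for v in self.cache.values())

    def get(self, oid):
        if oid in self.cache:
            self.cache.move_to_end(oid)        # cache hit
        else:
            self.cache[oid] = self.load(oid)   # miss: fetch from disk
            self._shrink()
        return self.cache[oid]

    def set_budget(self, budget):
        """Called when sensed memory availability changes."""
        self.budget = budget
        self._shrink()

    def _shrink(self):
        while self.cache and self._used() > self.budget:
            self.cache.popitem(last=False)     # evict least recently used
```

Halving the budget evicts the least recently used objects immediately, so the application's footprint tracks available memory instead of thrashing.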