Similar Documents
 20 similar documents found (search time: 156 ms)
1.
This paper implements a proxy server: it lays out the proxy's layered design, presents the design ideas behind each functional module, and focuses on the design of the cache component, which effectively improves proxy server performance, with a detailed description of the cache storage subsystem. The system is highly modular and, within a LAN, shortens the response wait time for user requests and improves efficiency.

3.
Object-relational spatial databases allow geospatial objects to be stored in the database as a new data type; index access methods, however, are tightly coupled to data types. To let spatial database users build their own index for each new spatial data type while reducing the workload involved, this paper introduces the GiST index framework into a spatial database and analyzes the strengths and weaknesses of spatial indexes built under GiST. On this basis, a GiST-based R*-tree index with high access efficiency is implemented, and its time and space efficiency are tested.

4.
Research on Component-Based Distributed WebGIS   (Cited by: 11; self: 0; others: 11)
This paper explores, in theory and in practice, the componentization of WebGIS under the Internet/WWW computing environment, and builds a practical component-based WebGIS object system, Geo-Union, on COM/DCOM. The component-based WebGIS is described in terms of its geospatial data model, architecture, component hierarchy, and division of component functions. By applying techniques such as spatial indexing and spatial caching, it offers an effective solution to the retrieval and transmission efficiency of geospatial data. The implemented system has been applied successfully in several domains.

5.
A Time-Shift Proxy Cache Replacement Algorithm Based on Expected Prediction Value   (Cited by: 1; self: 0; others: 1)
This paper analyzes the characteristics of time-shifted services in mobile multimedia broadcasting, describes a resource scheduling strategy for time-shift proxy servers that predicts user behavior, and proposes a cache replacement algorithm for time-shift proxies based on expected prediction value. Simulation results show that, compared with the traditional FIFO algorithm, the proposed algorithm raises the cache's predictive hit ratio and reduces the average user waiting delay.
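The contrast between FIFO and prediction-aware replacement drawn in this entry can be illustrated with a toy simulation. This is a sketch, not the paper's algorithm: the request trace, the cache size, and the use of an item's total access count as its "expected value" are all assumptions for illustration.

```python
def fifo_hits(trace, capacity):
    """Count hits for FIFO replacement on a request trace."""
    cache, hits = [], 0
    for item in trace:
        if item in cache:
            hits += 1
        else:
            if len(cache) == capacity:
                cache.pop(0)          # evict the oldest entry
            cache.append(item)
    return hits

def value_hits(trace, capacity):
    """Count hits when evicting the cached item with the lowest
    expected value (here approximated by its total frequency)."""
    value = {x: trace.count(x) for x in set(trace)}
    cache, hits = set(), 0
    for item in trace:
        if item in cache:
            hits += 1
        else:
            if len(cache) == capacity:
                cache.remove(min(cache, key=lambda x: value[x]))
            cache.add(item)
    return hits

trace = ['a', 'b', 'a', 'c', 'a', 'b', 'a']
print(fifo_hits(trace, 2), value_hits(trace, 2))  # value-based keeps the hot item
```

On this trace the value-based policy retains the popular item `a` across evictions, which is the effect the paper's expected-prediction-value policy aims for.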

6.
A Streaming Media Caching Algorithm Based on Popularity and Segment Adaptivity   (Cited by: 1; self: 0; others: 1)
To improve the caching efficiency of streaming media proxy servers, this paper proposes a caching strategy based on popularity and segment adaptivity. Building on mainstream segment-based cache replacement algorithms, the strategy takes full account of user access characteristics: it manages cached segments by segment popularity and factors the popularity at points within a media object into the replacement decision, improving cache management efficiency and raising the hit ratio. Using real user access traces, the algorithm is compared with uniform-segmentation and exponential-segmentation caching algorithms; simulation results show that it achieves the highest byte hit ratio while keeping a request delay ratio close to those algorithms.
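The two baseline segmentation schemes this entry compares against can be sketched as follows; the block counts and segment lengths are made-up examples, not values from the paper.

```python
def uniform_segments(total_blocks, seg_len):
    """Split a media object into equal-length segments (last may be shorter)."""
    sizes = []
    while total_blocks > 0:
        sizes.append(min(seg_len, total_blocks))
        total_blocks -= sizes[-1]
    return sizes

def exponential_segments(total_blocks):
    """Split into segments of doubling length: 1, 2, 4, 8, ...
    Later, typically less popular, parts of the object form larger
    units that are cheaper to evict in one decision."""
    sizes, length = [], 1
    while total_blocks > 0:
        sizes.append(min(length, total_blocks))
        total_blocks -= sizes[-1]
        length *= 2
    return sizes

print(uniform_segments(10, 3))    # [3, 3, 3, 1]
print(exponential_segments(10))   # [1, 2, 4, 3]
```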

7.
System
傻博士 says: the icon cache is an area set aside in memory that records the icons of programs the user has recently run, so that when the user accesses those icons again they are read straight from the cache to speed up access. When the system shuts down, the data in that in-memory cache area is saved to the hard disk, and after the next startup it is read from disk back into memory. And when something goes wrong with the icon cache, ...

8.
Web-based GIS is growing ever more complex: the volume of data to be handled keeps increasing, and so do the demands on client-side data loading speed. To improve the responsiveness of a web GIS to its clients, this paper proposes and implements a new-generation map tile caching scheme. Practice shows that the distributed map tile cache design speeds up map browsing on the client and improves access performance under large numbers of concurrent users.
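A distributed tile cache of the kind described here typically keys tiles by zoom/column/row and shards them across cache nodes by hashing the key. A minimal sketch; the key format, node names, and node count are assumptions, not the paper's design:

```python
import hashlib

NODES = ['cache-0', 'cache-1', 'cache-2']   # hypothetical cache servers

def tile_key(z, x, y):
    """Canonical key for a map tile at zoom z, column x, row y."""
    return f'{z}/{x}/{y}'

def node_for(key):
    """Pick a cache node by hashing the tile key.
    MD5 is stable across runs, so a tile always maps to the same node."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

key = tile_key(12, 3345, 1578)
print(key, '->', node_for(key))
```

Because the mapping is deterministic, any client can locate a tile's cache node without a central directory; real deployments usually add consistent hashing so nodes can join or leave with minimal re-shuffling.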

9.
Spatial Data Access Based on OCI   (Cited by: 1; self: 0; others: 1)
Oracle Spatial is a widely used spatial database, and OCI is an application programming interface to the Oracle database. Performing complex spatial data access through OCI can improve both the efficiency and the flexibility of data access. This paper uses examples to present methods for accessing spatial data through OCI.

10.
A Streaming Media Segment Caching Method with High Cache-Write Efficiency   (Cited by: 1; self: 0; others: 1)
Ma Jie, Fan Jianping. Chinese Journal of Computers, 2007, 30(4): 588-596
Streaming media proxy caching is an effective technique for reducing network transfer load. Two characteristics of streaming media access, long-lasting sessions and high transfer bit rates, make the load problem facing streaming proxies severe. The caching method is the core component of a streaming proxy, and the number of cache-write operations it triggers has a major impact on proxy load. Starting from the execution characteristics of streaming caches, this paper presents a segment-based streaming caching method that combines strong network traffic reduction with low cache-write load; tying cache writes to access popularity is its main feature. Experiments show that, compared with the Adaptive & Lazy caching method, currently the best at reducing network transfer, the proposed method cuts cache-write load by two thirds while achieving the same reduction in network transfer.

11.
To improve the performance of P2P spatial vector data index networks, this paper introduces a caching mechanism into an existing hybrid-structure P2P spatial index network and proposes a new multi-layer-oriented cache update strategy for spatial vector data. Targeting the multi-layer nature of spatial vector data, the strategy jointly considers layer priority and query frequency when updating the cache, making efficient use of cache space. Cache updating is abstracted as a 0/1 knapsack problem and solved with a genetic algorithm. Simulation results show that the strategy increases the cache hit ratio and improves spatial index efficiency.
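The 0/1 knapsack formulation mentioned here can be made concrete: each cached object has a size (weight) and a retention value (for example, layer priority times query frequency), and the cache capacity is the knapsack. The paper solves it with a genetic algorithm; for a small instance, exact dynamic programming exhibits the same model. The sizes and values below are invented for illustration.

```python
def knapsack(sizes, values, capacity):
    """Exact 0/1 knapsack via dynamic programming:
    best[c] = max total value using at most c units of cache space."""
    best = [0] * (capacity + 1)
    for size, value in zip(sizes, values):
        for c in range(capacity, size - 1, -1):   # descending: each item used once
            best[c] = max(best[c], best[c - size] + value)
    return best[capacity]

# hypothetical cached objects: size, and value = priority * query frequency
sizes  = [3, 4, 2]
values = [30, 50, 15]
print(knapsack(sizes, values, capacity=6))   # keeps the 4- and 2-unit objects
```

A genetic algorithm becomes preferable when the number of cached objects is large, since the DP table grows with capacity, but on small instances the DP gives the exact optimum to validate against.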

12.
The performance of the memory hierarchy, and in particular the data cache, can significantly impact program execution speed. Thus, instruction reordering to minimize data cache misses is an important consideration for optimizing compilers. In this paper, we prove that the problem of instruction reordering for data cache miss minimization belongs to the class of NP-complete problems. The framework that we develop for the proof exposes the symbiotic relationship among the references to the cache. This symbiosis exists because a single cache reference lengthens the life span of its neighbors in the cache, and thus provides opportunity for additional cache hits through reference to the neighbors. We present a greedy heuristic designed to exploit this symbiotic relationship to improve data cache performance for general-purpose programs. Experiments with a prototype implementation of the heuristic show that we can improve data cache performance in many cases.

13.
Secure XML query answering to protect data privacy and semantic caching to speed up XML query answering are two hot spots in current research on XML database systems. While both issues have been explored in depth separately, they have not been studied together; that is, the problem of semantic caching for secure XML query answering has not been addressed yet. In this paper, we present an interesting joint treatment of these two aspects and propose an efficient framework of semantic caching for secure XML query answering, which can improve the performance of XML database systems under secure circumstances. Our framework combines access control and user privilege management over XML data with state-of-the-art semantic XML query cache techniques, to ensure that data are presented only to authorized users in an efficient way. To the best of our knowledge, the approach we propose here is among the first efforts to combine caching and security for XML databases to improve system performance. The efficiency of our framework is verified by comprehensive experiments.

14.
Hash tables, a type of data indexing structure that provides efficient data access by key value, are widely used in computer applications, especially in system software, databases, and high-performance computing, where extreme performance is required. In network, cloud computing, and IoT services, hash tables have become core components of cache systems. However, with the massive growth in data volume, performance bottlenecks have gradually emerged in systems whose hash table structures are designed around multi-core CPUs, and there is an urgent need to further improve the performance and scalability of hash tables. With the increasing popularity of general-purpose Graphics Processing Units (GPUs) and the substantial improvement in hardware computing capability and concurrency, many system software tasks centered on parallel computing have been optimized on the GPU and achieved considerable performance gains. Because of sparseness and randomness, directly using existing parallel hash table structures on GPUs inevitably brings high-frequency memory accesses and frequent bus data transfers, which limits hash table performance on GPUs. This study analyzes the memory access behavior, hit ratio, and index overhead of hash table indexes in cache systems, and proposes CCHT (Cache Cuckoo Hash Table), a hybrid-access cache indexing framework adapted to the GPU. It provides cache strategies suited to different hit ratio and index overhead requirements, allows concurrent execution of write and query operations, maximizes use of the computing power and concurrency of GPU hardware, and reduces memory access and bus transfer overhead. GPU hardware implementation and experimental verification show that CCHT outperforms other cache-indexing hash tables while maintaining the cache hit ratio.
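The cuckoo hashing that CCHT builds on can be sketched on the CPU side: two tables, two hash functions, and displacement ("kicking") of residents on collision. This toy uses integer keys and simple modular hashes; the GPU batching and cache policies that make CCHT itself fast are not modeled.

```python
class CuckooHash:
    """Toy two-table cuckoo hash: each key has one slot per table,
    so a lookup touches at most two locations."""
    def __init__(self, n=8):
        self.n = n
        self.t1 = [None] * n
        self.t2 = [None] * n

    def _h1(self, k):
        return k % self.n

    def _h2(self, k):
        return (k // self.n) % self.n

    def lookup(self, k):
        return self.t1[self._h1(k)] == k or self.t2[self._h2(k)] == k

    def insert(self, k, max_kicks=32):
        """Place k, displacing residents back and forth between the
        tables; give up (signal a needed rehash) after max_kicks."""
        if self.lookup(k):
            return True
        for _ in range(max_kicks):
            i = self._h1(k)
            self.t1[i], k = k, self.t1[i]   # swap k into table 1
            if k is None:
                return True
            j = self._h2(k)
            self.t2[j], k = k, self.t2[j]   # displaced key goes to table 2
            if k is None:
                return True
        return False                         # cycle detected: rehash needed

t = CuckooHash(4)
for key in (1, 2, 5):                        # 1 and 5 collide in table 1
    t.insert(key)
print(t.lookup(1), t.lookup(5), t.lookup(9))
```

The bounded two-location lookup is what makes cuckoo hashing attractive for throughput-oriented hardware like GPUs, where worst-case probe length matters more than average.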

15.
Cluster parallel computing frameworks represented by Spark are widely used in the big data and cloud computing wave, and optimizing their runtime performance is key to their application. To improve performance, this paper analyzes Spark's execution flow and memory management mechanism and, combining the memory management characteristics of both the Spark and JVM levels, proposes three optimization strategies: (1) reduce the size of cached data through serialization and compression, lowering GC cost and improving performance; (2) within limits, reduce the execution memory size and replace caching with recomputation, which can improve performance; (3) configure appropriate memory allocation parameters, such as the ratio of the JVM young to old generation and the ratio of Spark execution to storage memory, which can improve performance considerably. Experimental results show that serialization and compression reduce the cache footprint by 42%; reducing the submitted execution memory from 1000 MB to 800 MB improves performance by 21%; and tuning the memory ratios yields a 10% to 30% improvement over the default parameters.
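Strategies (1) and (3) correspond to standard Spark configuration knobs. An illustrative `spark-defaults.conf` fragment; the specific values are examples, not the paper's tuned numbers:

```properties
# Strategy (1): serialize and compress cached RDD blocks to shrink the cache
spark.serializer                 org.apache.spark.serializer.KryoSerializer
spark.rdd.compress               true

# Strategy (3): split of the unified region between execution and storage memory
spark.memory.fraction            0.6
spark.memory.storageFraction     0.4

# Strategy (3), JVM level: young/old generation ratio for executor JVMs
spark.executor.extraJavaOptions  -XX:NewRatio=2
```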

16.
Providing a real-time cloud service requires simultaneously retrieving a large amount of data, so improving the performance of file access becomes a great challenge. This paper first addresses the preconditions for dealing with this problem, considering the requirements of applications, hardware, software, and network environments in the cloud. Then a novel distributed layered cache system named HDCache is proposed. HDCache is built on top of the Hadoop Distributed File System (HDFS). Applications can integrate the HDCache client library to access the multiple cache services. The cache services are built from three access layers: an in-memory cache, a snapshot of the local disk, and a network disk provided by HDFS. Files loaded from HDFS are cached in shared memory that the client library can access directly. To improve robustness and spread workload, the cache services are organized in a peer-to-peer style using a distributed hash table, and every cached file has three replicas scattered across different cache service nodes. Experimental results show that HDCache can store files with a wide range of sizes and delivers millisecond-level access performance under highly concurrent environments. The hit ratio measured on a real-world cloud service is higher than 95%.

17.
A new cache architecture based on temporal and spatial locality   (Cited by: 5; self: 0; others: 5)
A data cache system is designed as a low-power, high-performance cache structure for embedded processors. A direct-mapped cache is a favorite choice for short cycle times but suffers from a high miss rate, so the proposed dual data cache aims to improve the miss ratio of a direct-mapped cache without affecting its access time. The proposed cache system exploits temporal and spatial locality effectively by maximizing the effective cache memory space for any given cache size. It consists of two caches: a direct-mapped cache with a small block size and a fully associative spatial buffer with a large block size. Temporal locality is exploited by selectively caching candidate small blocks in the direct-mapped cache; spatial locality is exploited aggressively by fetching multiple neighboring small blocks whenever a cache miss occurs. According to the results of comparison and analysis, similar performance can be achieved with a cache four times smaller than a conventional direct-mapped cache, and the power consumption of the proposed cache is shown to be around 4% lower than that of the victim cache configuration.
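The direct-mapped half of this design can be illustrated with a minimal index/tag simulation. The block size, set count, and address trace are invented for illustration, and the fully associative spatial buffer is not modeled:

```python
def simulate_direct_mapped(addresses, block_size=4, n_sets=4):
    """Count hits for a direct-mapped cache: each memory block maps to
    exactly one set, which holds the tag of its current occupant."""
    tags = [None] * n_sets
    hits = 0
    for addr in addresses:
        block = addr // block_size
        index = block % n_sets          # which cache line the block maps to
        tag = block // n_sets           # identifies which block occupies it
        if tags[index] == tag:
            hits += 1
        else:
            tags[index] = tag           # miss: fill the line
    return hits

# 0-3 share one block (spatial locality); 16 maps to the same line as 0
trace = [0, 1, 2, 3, 16, 0]
print(simulate_direct_mapped(trace))
```

The final access to 0 misses because 16 conflicts with it in the same set; this is exactly the conflict-miss behavior the paper's spatial buffer is meant to absorb.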

18.
Due to limited server and network capacities, proxies are introduced in streaming applications to cache multimedia content from the media source, enabling a scalable service and improving the user experience. In this paper we first review the security aspect of a proxy encryption framework recently presented by Yeung, featuring the multikey RSA technique. Then, addressing the performance aspect, we propose a redesigned cost-efficient architecture based on a media key management mechanism substantially different from Yeung's framework, which improves the overall system significantly.

19.
Weighted Shared Cache Partitioning for Multithreaded Multiprogrammed Workloads   (Cited by: 5; self: 1; others: 4)
When parallel applications run on a multicore processor with a shared cache, conflicting accesses to the shared cache cause performance degradation and unpredictable execution times. Shared cache partitioning, which allocates the shared cache exclusively among processes, is an effective solution. Because of inter-thread data sharing, applications with different thread counts utilize the shared cache differently, but traditional partitioning algorithms that minimize the miss rate (e.g., UCP) do not distinguish applications by thread count. This paper designs a weighted shared cache partitioning framework (WCP, Weighted Cache Partitioning) for multithreaded multiprogrammed workloads, comprising an application-oriented miss-rate monitor and a weighted cache partitioning algorithm. The miss-rate monitor dynamically tracks, per process, each application's miss rate under different cache capacities; the weighted partitioning algorithm extends the traditional miss-rate-optimal algorithm by assigning applications different weights according to their thread counts, so that applications with more threads obtain more of the shared cache, improving overall system performance. Experimental results show that although the weighted algorithm somewhat raises the miss rate, it improves IPC throughput, weighted speedup, and fairness. On multiprogrammed workloads composed of scientific and engineering applications, the IPC throughput of WCP-1 exceeds that of the miss-rate-optimal shared cache partitioning algorithm by up to 10.8%, and by 5.5% on average.
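The weighting idea can be sketched for two applications: given each application's hits as a function of allocated cache ways and a per-application weight (e.g., its thread count), pick the split of ways that maximizes weighted hits. The hit curves and weights below are invented, and real UCP-style partitioners use per-way marginal-utility monitors rather than exhaustive search:

```python
def best_partition(hits_a, hits_b, weight_a, weight_b):
    """hits_x[w] = hits application x achieves with w cache ways.
    Try every split of the total ways; return (ways_a, ways_b)."""
    total = len(hits_a) - 1

    def score(wa):
        return weight_a * hits_a[wa] + weight_b * hits_b[total - wa]

    wa = max(range(total + 1), key=score)
    return wa, total - wa

# cumulative hits per allocated way (index 0 = no ways), hypothetical curves
hits_a = [0, 40, 70, 90, 100]   # multithreaded app: keeps benefiting
hits_b = [0, 60, 90, 97, 100]   # single-threaded app: saturates early
print(best_partition(hits_a, hits_b, weight_a=4, weight_b=1))
print(best_partition(hits_a, hits_b, weight_a=1, weight_b=1))
```

With equal weights the split is even, but weighting application A by its four threads shifts an extra way to it, which is precisely the behavior WCP adds on top of miss-rate-optimal partitioning.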

20.
Zhang Hongjun, Wu Yanjun, Zhang Heng, Zhang Libo. Journal of Software, 2020, 31(10): 3038-3055
Hash tables, data index structures that provide efficient access by key value, are widely used across computer applications, especially in system software, databases, and high-performance computing, where performance demands are extreme. In networking, cloud computing, and IoT services, structures built around hash tables have become important components of cache systems. However, with the sharp growth in data volume, systems whose hash tables are designed around multi-core CPUs have begun to hit performance bottlenecks, and the performance and scalability of hash tables urgently need further improvement. With the growing popularity of general-purpose graphics processing units (GPUs) and the great advances in hardware computing power and concurrency, many system software tasks centered on parallel computing have been optimized for the GPU with considerable performance gains. Owing to sparseness and randomness, directly applying existing parallel hash table structures on the GPU inevitably causes high-frequency memory accesses and frequent bus data transfers, limiting the performance of hash tables on the GPU. This paper analyzes the memory accesses, hit ratio, and index overhead of hash table indexes in cache systems, and proposes CCHT (cache cuckoo hash table), a GPU-adapted hybrid-access cache index framework that offers two cache strategies for different hit ratio and index overhead requirements, allows write and query operations to execute concurrently, makes maximal use of the computing performance and concurrency of GPU hardware, and reduces memory accesses and bus transfers. Implementation and experimental verification on GPU hardware show that CCHT outperforms other hash tables used for cache indexing while maintaining the cache hit ratio.
