20 similar documents found; search took 125 ms
1.
2.
To address the integration of heterogeneous crop models in a distributed environment and to reduce the response time caused by large data transfers between model Web services, this paper takes an adaptability-evaluation model Web service as an example and designs a composite model Web service and optimization scheme based on .NET Caching-class data caching and ZIP data compression. Response-time tests in a network environment show that, as the number of sites and concurrent users grows, the scheme based on .NET Caching-class data caching shortens response time by 45% to 86% and thus addresses the long response times of model Web services, whereas the scheme based on ZIP data compression yields little improvement. Finally, a cropping-system design service system was developed on SOA and WebGIS, implementing wheat adaptability evaluation, production-potential evaluation, safe-zone delineation, map operations, and thematic mapping. A test of wheat photo-thermal production potential in Jiangsu Province shows that the potential is generally higher in the north and lower in the south, which matches reality.
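The server-side result-caching idea in this abstract can be sketched as follows (the paper itself uses the .NET Caching class; this Python sketch and every name in it are illustrative): responses to model-service requests are stored under a key with a time-to-live, so repeated requests for the same site skip the expensive model run.

```python
import time

class ModelResultCache:
    """Cache model-service results keyed by request, with a time-to-live."""
    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (result, expiry timestamp)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None
        result, expires = entry
        if time.time() >= expires:
            del self.store[key]  # entry went stale: evict it
            return None
        return result

    def put(self, key, result):
        self.store[key] = (result, time.time() + self.ttl)

def evaluate_suitability(site, cache):
    """Return a cached evaluation if present; otherwise 'run the model'."""
    cached = cache.get(site)
    if cached is not None:
        return cached                      # served from cache
    result = f"suitability({site})"        # stand-in for the expensive model call
    cache.put(site, result)
    return result

cache = ModelResultCache(ttl_seconds=60)
first = evaluate_suitability("Jiangsu/wheat", cache)   # miss: model runs
second = evaluate_suitability("Jiangsu/wheat", cache)  # hit: cache answers
print(first == second)  # True
```

The large response-time reductions reported above come from the second call path: the cached result is returned without re-running the model or re-transferring its input data.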
3.
Ming-Kuan Liu, Fei-Yue Wang, Daniel Dajun Zeng 《Journal of Computer Science and Technology》2004, 19(2)
As the Internet and World Wide Web grow at a fast pace, it is essential that the Web's performance keep up with increased demand and expectations. Web Caching technology has been widely accepted as one of the effective approaches to alleviating Web traffic and increasing the Web Quality of Service (QoS). This paper provides an up-to-date survey of the rapidly expanding Web Caching literature. It discusses state-of-the-art Web caching schemes and techniques, with emphasis on recent developments in Web Caching technology such as differentiated Web services, heterogeneous caching network structures, and dynamic content caching.
4.
Microsoft and Netscape have recently released new versions of their Web proxy servers, though using either of them for simple Web proxying feels rather like shooting a mosquito with an anti-aircraft gun. The basic function of Netscape Proxy Server 3.5 and Microsoft Proxy Server 2.0 is to keep frequently accessed content in a cache and deliver it to browsers directly from there, which is faster than downloading it from the Internet. They can also control access to the Internet and filter out unwanted content, such as Java applets.
5.
1 Introduction. As Internet technology and applications develop rapidly in both breadth and depth, people are no longer content to share only information online; they also seek to share computing power, services, and every other shareable resource over the ubiquitous Internet. However, because of network bottlenecks and the concentration of hot resources, this rapid growth has brought the "World Wide Wait" problem that users so often experience. One effective remedy is Web Caching: storing the Web's frequently accessed, popular content in cache server systems positioned between Web servers and end users. Web Caching is now a rapidly developing field into which both academia and industry have poured substantial effort. Designed and deployed well, Web Caching can bring many benefits, such as saving large…
6.
7.
8.
《每周电脑报》1997,(47)
Microsoft Proxy Server 2.0 is a huge functional leap over the original 1.0 release. The new features make Proxy Server 2.0 one of the best choices for corporate networks that need a proxy server or firewall. As a proxy server, it connects the internal network to the external one. The old 1.0 release implemented only the Web Proxy Server and the WinSock Proxy Server; version 2.0 adds support for SOCKS 4 with a new SOCKS proxy service. This is good news for non-Windows clients, which can now do more (RealAudio, for example) than merely browse the Internet through the CERN-standard Web Proxy. Proxy cascading, absent from 1.0, is not only implemented in the new version but substantially extended…
9.
A Study of Access-Characteristic Distributions in WWW Traffic
WWW traffic presents itself as a series of access sequences, and the logs of Web Servers and Proxy Servers record the course and characteristics of these sequences well. Characterizing WWW traffic is the foundation for research on Web Servers and Web middleware and for synthesizing artificial Web workloads. This paper analyzes the logs of one Web Server and two Proxy Servers, focusing on the probability distributions of Web page requests, of static Web document sizes (including transferred documents), and of access distances to static Web documents, and compares the results with those in the related literature. Experiments further confirm that when document size is used as the basis for Web cache replacement, document access frequency should also be taken into account.
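The closing observation, that size-based replacement should also weigh access frequency, is exactly what the separately published Greedy-Dual-Size-Frequency (GDSF) policy formalizes. A minimal sketch, not taken from this paper, with an illustrative trace:

```python
def gdsf_simulate(requests, sizes, capacity):
    """Greedy-Dual-Size-Frequency: priority = L + freq/size; evict the lowest.

    L is an 'inflation' term raised to each evicted priority, so recently
    cached objects are not unfairly dominated by old high-frequency ones.
    """
    cache = {}   # doc -> current priority
    used = 0
    L = 0.0
    freq = {}
    hits = 0
    for doc in requests:
        freq[doc] = freq.get(doc, 0) + 1
        if doc in cache:
            hits += 1
            cache[doc] = L + freq[doc] / sizes[doc]   # refresh priority
            continue
        while used + sizes[doc] > capacity and cache:
            victim = min(cache, key=cache.get)        # lowest priority out
            L = cache.pop(victim)                     # aging: raise the floor
            used -= sizes[victim]
        if sizes[doc] <= capacity:
            cache[doc] = L + freq[doc] / sizes[doc]
            used += sizes[doc]
    return hits / len(requests)

trace = ["a", "b", "a", "c", "a", "b", "d", "a"]
sizes = {"a": 1, "b": 2, "c": 4, "d": 2}
print(round(gdsf_simulate(trace, sizes, capacity=4), 3))  # 0.25
```

Small, frequently requested documents get high priorities and survive; a large, rarely requested document ("c" above) is evicted quickly.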
10.
11.
This paper proposes a novel contribution in the Web caching area, especially in Web cache replacement: a so-called intelligent client-side Web caching scheme (ICWCS). The approach splits the client-side cache into two caches: a short-term cache that receives Web objects directly from the Internet, and a long-term cache that receives Web objects from the short-term cache. Objects in the short-term cache are removed by the least recently used (LRU) algorithm when the short-term cache is full. More significantly, when the long-term cache saturates, a neuro-fuzzy system is employed to manage its contents. The proposed solution is validated by trace-driven simulation, and the results are compared with the least recently used (LRU) and least frequently used (LFU) algorithms, the most common policies for evaluating Web caching performance. The simulation results reveal that the proposed approach improves Web caching performance in terms of hit ratio (HR) by up to 14.8% and 17.9% over LRU and LFU, respectively. In terms of byte hit ratio (BHR), performance is improved by up to 2.57% and 26.25%, and for latency saving ratio (LSR), performance is better by 8.3% and 18.9% over LRU and LFU, respectively.
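The two-cache structure described above can be sketched as follows. The paper's neuro-fuzzy scorer cannot be reproduced from the abstract, so a plain frequency score stands in for it in the long-term cache; all names and capacities are illustrative.

```python
from collections import OrderedDict

class TwoLevelCache:
    """Two-level client cache: an LRU short-term cache feeding a long-term
    cache. The paper scores long-term evictions with a neuro-fuzzy model;
    here a simple access-frequency score stands in for it."""
    def __init__(self, short_cap, long_cap):
        self.short = OrderedDict()   # insertion/use order = LRU order
        self.long = {}               # obj -> access-frequency score
        self.short_cap, self.long_cap = short_cap, long_cap

    def access(self, obj):
        if obj in self.short:
            self.short.move_to_end(obj)          # refresh LRU position
            return "short-hit"
        if obj in self.long:
            self.long[obj] += 1
            return "long-hit"
        self._insert_short(obj)                  # fetched from the origin
        return "miss"

    def _insert_short(self, obj):
        if len(self.short) >= self.short_cap:
            evicted, _ = self.short.popitem(last=False)  # LRU victim
            self._insert_long(evicted)           # demote into long-term cache
        self.short[obj] = True

    def _insert_long(self, obj):
        if len(self.long) >= self.long_cap:
            victim = min(self.long, key=self.long.get)   # lowest score out
            del self.long[victim]
        self.long[obj] = self.long.get(obj, 0) + 1

c = TwoLevelCache(short_cap=2, long_cap=2)
print([c.access(o) for o in ["a", "b", "c", "a", "a"]])
# ['miss', 'miss', 'miss', 'long-hit', 'long-hit']
```

Note how "a", pushed out of the short-term cache by newer objects, is still served from the long-term cache instead of being refetched.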
12.
《Future Generation Computer Systems》2006,22(1-2):16-31
Proxy caches are essential to improve the performance of the World Wide Web and to reduce user-perceived latency. Appropriate cache management strategies are crucial to achieving these goals. In our previous work, we introduced Web object-based caching policies. A Web object consists of the main HTML page and all of its constituent embedded files. Our studies have shown that these policies improve proxy cache performance substantially. In this paper, we propose a new Web object-based policy to manage the storage system of a proxy cache, with two techniques to improve storage system performance. The first technique prefetches the related files belonging to a Web object from the disk to main memory; this improves performance because most of the files can then be provided from main memory rather than from the proxy disk. The second technique stores the members of a Web object in contiguous disk blocks in order to reduce disk access time. We used trace-driven simulations to study the performance improvements these two techniques can deliver. Our results show that the first technique by itself provides up to a 50% reduction in hit latency, the delay involved in providing a hit document from the proxy. An additional 5% improvement can be obtained by incorporating the second technique.
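A minimal sketch of the first technique, prefetching a Web object's embedded members from disk into memory when its main page is requested. The in-memory dictionaries simulating the disk, and all file names, are illustrative:

```python
class ObjectPrefetchingCache:
    """Proxy memory cache that, on a main-page fetch, also prefetches the
    page's embedded files (its "Web object" members) from disk to memory."""
    def __init__(self, disk_store, web_objects):
        self.disk = disk_store        # filename -> bytes (simulated disk)
        self.members = web_objects    # main page -> list of embedded files
        self.memory = {}

    def fetch(self, filename):
        if filename in self.memory:
            return self.memory[filename], "memory"
        data = self.disk[filename]    # disk access: the slow path
        self.memory[filename] = data
        for member in self.members.get(filename, []):
            self.memory.setdefault(member, self.disk[member])  # prefetch
        return data, "disk"

disk = {"index.html": b"<html>", "logo.png": b"png", "style.css": b"css"}
objects = {"index.html": ["logo.png", "style.css"]}
cache = ObjectPrefetchingCache(disk, objects)
cache.fetch("index.html")           # disk read, plus prefetch of members
print(cache.fetch("logo.png")[1])   # "memory": served without a disk read
```

Because browsers almost always request a page's embedded files immediately after the page itself, the prefetched members turn what would be disk reads into memory hits, which is the source of the latency reduction reported above.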
13.
Performance Analysis of Proxy Web Caches
Using Web Cache technology to improve Internet performance has become a mainstream research area; the principle is the same as the multi-level caches in processors and file systems. Large-scale Web caching systems have become an important part of the Internet infrastructure in many countries. Starting from the trace logs of three proxy Web Caches serving different access scales, this paper analyzes statistical characteristics such as user access patterns, cache hit ratio, and cache-server processing delay, and proposes a two-level cooperative Web Cache cluster technique combining distributed shared RAM with external storage, which can provide a scalable, high-performance parallel Web caching service.
14.
Access to and circulation of Web objects have been facilitated by the design and implementation of effective caching schemes. Web caching has been integrated into prototype and commercial Web-based information systems in order to reduce overall bandwidth and increase a system's fault tolerance. This paper presents an overview of a series of Web cache replacement algorithms based on the idea of preserving a history record for cached Web objects. The number of references to a Web object over a certain time period is a critical parameter for cache content replacement. The proposed algorithms are simulated and evaluated under a real workload of Web cache traces provided by a major (Squid) proxy cache server installation. Cache and byte hit rates are given with respect to different cache sizes and varying numbers of request workload sets, and it is shown that the proposed cache replacement algorithms improve both cache and byte hit rates.
15.
16.
Analyses of Web traffic have found that user accesses to Web objects follow a Zipf or Zipf-like distribution. In Web cache design, designers can therefore use Zipf's law to approximate the cache size required for a desired object hit ratio, so Zipf's law provides an important basis for the design of Web cache architectures. An appropriately sized cache combined with the P-LFU replacement policy can achieve a very high Web cache hit ratio.
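The sizing calculation this abstract describes can be sketched directly: under a Zipf(α) request distribution, perfectly caching the k most popular of N objects yields hit ratio H(k) = Σ_{i≤k} i^(−α) / Σ_{i≤N} i^(−α), and the required cache size is the smallest k that reaches the target. A Python sketch, with example parameters that are illustrative rather than from the paper:

```python
def zipf_cache_size(n_objects, alpha, target_hit_ratio):
    """Smallest number of cached objects whose Zipf(alpha) request mass
    reaches the target hit ratio, assuming the top-k objects are cached."""
    weights = [1.0 / (rank ** alpha) for rank in range(1, n_objects + 1)]
    total = sum(weights)
    mass = 0.0
    for k, w in enumerate(weights, start=1):
        mass += w
        if mass / total >= target_hit_ratio:
            return k
    return n_objects

# With alpha = 0.8 and 100,000 objects, a small fraction of the corpus
# already captures half of all requests:
print(zipf_cache_size(100_000, 0.8, 0.50))
```

The heavy-tailed shape of the Zipf distribution is precisely why modest caches achieve high hit ratios: a few very popular objects account for most requests.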
17.
In information-centric networking, in-network caching has the potential to improve network efficiency and content distribution performance by satisfying user requests with cached content rather than downloading the requested content from remote sources. In this respect, users who request, download, and keep the content may be able to contribute to in-network caching by sharing their downloaded content with other users in the same network domain (i.e., user-assisted in-network caching). In this paper, we examine various aspects of user-assisted in-network caching in the hopes of efficiently utilizing user resources to achieve in-network caching. Through simulations, we first show that user-assisted in-network caching has attractive features, such as self-scalable caching, a near-optimal cache hit ratio (that can be achieved when the content is fully cached by the in-network caching) based on stable caching, and performance improvements over in-network caching. We then examine the caching strategy of user-assisted in-network caching. We examine three caching strategies based on a centralized server that maintains all content availability information and informs each user of what to cache. We also examine three caching strategies based on each user’s content availability information. We first show that the caching strategy affects the distribution of upload overhead across users and the number of cache hits in each segment. One interesting observation is that, even with a small storage space (i.e., 0.1% of the content size per user), the centralized and distributed approaches improve the cache hit ratio by 50% and 45%, respectively. With an overall view of caching information, the centralized approach can achieve a higher cache hit ratio than the distributed approach. 
Based on this observation, we discuss a distributed approach with a larger view of caching information than the purely local distributed approach and, through simulations, confirm that a larger view leads to a higher cache hit ratio. Another interesting observation is that the random distributed strategy yields performance comparable to more complex strategies.
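The random distributed strategy highlighted above can be illustrated with a toy simulation (all parameters are illustrative, and this is not the paper's simulator): each user caches a few random content chunks, and a request hits if any user in the domain holds the requested chunk.

```python
import random

def simulate_user_assisted_caching(n_users, n_chunks, chunks_per_user,
                                   n_requests, seed=7):
    """Random caching strategy: every user independently caches a few
    random chunks; a request hits if some user in the domain has it."""
    rng = random.Random(seed)
    cached = set()
    for _ in range(n_users):
        # each user fills its small cache with random distinct chunks
        cached.update(rng.sample(range(n_chunks), chunks_per_user))
    hits = sum(rng.randrange(n_chunks) in cached for _ in range(n_requests))
    return hits / n_requests

# 100 users each caching just 5 of 1,000 chunks already cover a large
# share of uniformly random requests:
print(round(simulate_user_assisted_caching(100, 1000, 5, 10_000), 2))
```

Even this strategy with no coordination at all achieves a substantial hit ratio, which is consistent with the abstract's observation that random placement is competitive with more complex strategies.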
18.
Caching and prefetching play an important role in improving Web access performance in wireless environments. This paper studies Web caching and prefetching mechanisms for wireless LANs. It proposes prediction algorithms based on sequence mining and on delayed updating, grounded in data mining and information theory respectively, and designs a context-aware prefetching algorithm and a benefit-driven cache replacement mechanism. These algorithms have been implemented in the Web caching system OnceEasyCache. Performance evaluation shows that integrating these algorithms effectively improves the cache hit ratio and the latency saving ratio.
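A first-order transition model conveys the flavor of prediction-driven prefetching, though the paper's own algorithms use sequence mining and delayed updating rather than this simple stand-in; the page names and confidence threshold below are illustrative.

```python
from collections import defaultdict, Counter

def train_predictor(sessions):
    """Count page-to-page transitions in past sessions (an order-1
    stand-in for the paper's sequence-mining predictor)."""
    transitions = defaultdict(Counter)
    for session in sessions:
        for current, nxt in zip(session, session[1:]):
            transitions[current][nxt] += 1
    return transitions

def predict_prefetch(transitions, current, min_confidence=0.6):
    """Suggest the most likely next page for prefetching, but only when
    its empirical confidence clears the threshold."""
    counts = transitions.get(current)
    if not counts:
        return None
    page, hits = counts.most_common(1)[0]
    return page if hits / sum(counts.values()) >= min_confidence else None

sessions = [["home", "news", "sports"],
            ["home", "news", "weather"],
            ["home", "mail"]]
model = train_predictor(sessions)
print(predict_prefetch(model, "home"))   # news: 2 of 3 sessions went there
print(predict_prefetch(model, "news"))   # None: no next page is confident
```

The confidence threshold is what makes prefetching "benefit-driven" in spirit: over a wireless link, speculative downloads are only worthwhile when the prediction is likely to be used.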
19.
In Device-to-Device (D2D) caching networks, the number of replicas of a cached file is an important factor limiting the system's caching efficiency: too many replicas leave cache resources underused, while too few make popular files hard to obtain. Targeting the replica placement problem in D2D caching networks, this paper uses convex programming to propose a cache replica placement algorithm (CRP) that maximizes the system cache hit ratio. Simulation results show that, compared with existing replica placement algorithms, CRP effectively improves the overall cache hit ratio of the D2D caching network.
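The trade-off described here, allocating a limited pool of device cache slots across files of unequal popularity, can be sketched with a simple popularity-proportional heuristic. The paper's actual CRP algorithm solves a convex program; this sketch does not reproduce it, and all numbers are illustrative.

```python
def place_replicas(popularity, total_slots):
    """Allocate replica counts in proportion to file popularity
    (a simple heuristic standing in for the paper's convex-programming CRP)."""
    total_pop = sum(popularity)
    # Fractional ideal allocation, then floor and hand out the leftovers
    # to the files with the largest fractional remainders.
    ideal = [p / total_pop * total_slots for p in popularity]
    counts = [int(x) for x in ideal]
    remainders = sorted(range(len(ideal)),
                        key=lambda i: ideal[i] - counts[i], reverse=True)
    for i in remainders[: total_slots - sum(counts)]:
        counts[i] += 1
    return counts

# Zipf-like popularity over 5 files, 10 device cache slots in total:
popularity = [1 / r for r in range(1, 6)]
print(place_replicas(popularity, 10))  # [4, 2, 2, 1, 1]
```

The allocation mirrors the tension the abstract describes: the most popular file gets several replicas so nearby devices can serve it, but no file monopolizes the slots, so less popular files remain obtainable.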
20.
Zeng Zeng, Bharadwaj Veeravalli, Kenli Li 《Journal of Parallel and Distributed Computing》2011, 71(4): 525-536
Nowadays, server-side Web caching has become an important technique for reducing User Perceived Latency (UPL). In large-scale multimedia systems, many Web proxies connected to a multimedia server can cache the most popular multimedia objects and respond to requests for them. Multimedia objects have particular characteristics, e.g., strict QoS requirements. Hence, even efficient conventional caching strategies based on cache hit ratio, designed for non-multimedia objects, run into problems when dealing with multimedia objects. If we consider additional proxy resources besides cache space, say bandwidth, we can readily observe that high hit ratios may deteriorate overall system performance. In this paper, we propose a novel placement model for networked multimedia systems, referred to as the Hk/T model, which considers the combined influence of arrival rate, size, and playback time to select the objects to be cached. Based on this model, we propose an innovative Web caching algorithm, named the ART-Greedy algorithm, which can balance the load among the proxies and achieve a minimum Average Response Time (ART) for the requests. Our experimental results conclusively demonstrate that the ART-Greedy algorithm significantly outperforms the most popular and commonly used LFU (Least Frequently Used) algorithm, and achieves better performance than the byte-hit algorithm when system utilization is medium or high.