Similar Documents
18 similar documents found (search time: 140 ms).
1.
A Dynamic Separated Scheduling Policy for Web Cluster Systems (cited by 1)
The static separated scheduling policy (SSSP) cannot allocate server resources effectively. The dynamic separated scheduling policy (DSSP) schedules static requests at the granularity of requested files and dynamic requests at the granularity of user sessions. The request dispatcher monitors the state of the back-end servers and, according to resource usage, classifies them as lightly loaded, heavily loaded, or overloaded: lightly loaded servers may accept new request units, heavily loaded servers accept no new request units but keep serving the units already assigned to them, and overloaded servers migrate some of their request units to lightly loaded servers. Experimental results show that DSSP is clearly more efficient than SSSP.
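The abstract only sketches the dispatcher's behaviour, so the following is a minimal Python sketch of how such a DSSP-style dispatcher might classify back-end servers and place or migrate request units. The thresholds, the per-unit cost, and the `Backend`/`dispatch`/`rebalance` names are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass, field

LIGHT_THRESHOLD = 0.6     # assumed boundary between light and heavy load
OVERLOAD_THRESHOLD = 0.9  # assumed boundary between heavy load and overload
UNIT_COST = 0.05          # assumed resource cost of one request unit

@dataclass
class Backend:
    name: str
    usage: float                                 # fraction of resources in use
    units: list = field(default_factory=list)    # request units currently served

    def state(self) -> str:
        if self.usage < LIGHT_THRESHOLD:
            return "light"
        return "heavy" if self.usage < OVERLOAD_THRESHOLD else "overloaded"

def dispatch(unit, backends):
    """New request units go only to lightly loaded servers when possible."""
    light = [b for b in backends if b.state() == "light"]
    target = min(light or backends, key=lambda b: b.usage)
    target.units.append(unit)
    target.usage += UNIT_COST
    return target

def rebalance(backends):
    """Overloaded servers migrate part of their request units to light ones."""
    for b in backends:
        while b.state() == "overloaded" and b.units:
            unit = b.units.pop()
            b.usage -= UNIT_COST
            dispatch(unit, backends)

servers = [Backend("web1", 0.95, ["s1", "s2"]), Backend("web2", 0.30)]
rebalance(servers)
print([(s.name, s.state(), s.units) for s in servers])
```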

2.
On top of a new Web cluster architecture, a resource-optimized dual-minimum-balance differentiated-service scheduling algorithm is proposed: the front-end dispatcher first assigns Web requests to the back-end servers according to a resource balance degree, and each back-end server then orders its Web requests by combining two characteristic parameters, the request priority and the resource balance degree. Extensive simulation experiments were conducted to evaluate the algorithm. Compared with other well-known scheduling policies such as separated scheduling, the dual-minimum-balance algorithm improves the efficiency of Web requests by 11% while achieving good service differentiation, confirming that resource-optimized scheduling policies are of general significance.

3.
Traditional cluster servers rely on a front-end dispatcher that distributes client requests on the basis of load balancing, whereas the ASAS cluster adopts an active scheduling policy based on analysis of back-end server load. It is difficult to build an accurate mathematical model relating the CPU utilization of the ASAS back-end servers, the response time of client requests, and the connection capacity, so a conventional controller is hard to apply. A fuzzy control algorithm is therefore designed whose inputs are the CPU utilization and the response time of client requests and whose output is the number of connections. Experiments show that this fuzzy control policy achieves good performance.
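As a rough illustration of the kind of controller described (not the paper's actual rule base), here is a tiny Mamdani-style fuzzy controller in Python whose inputs are CPU utilization and response time and whose output is an admitted connection count; the membership breakpoints, rules, and `max_conns` figure are assumptions.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_connection_limit(cpu_util, resp_time, max_conns=1000):
    """Inputs: CPU utilization (0..1) and response time (seconds);
    output: number of connections to admit."""
    cpu_low  = tri(cpu_util, -0.1, 0.3, 0.7)
    cpu_high = tri(cpu_util,  0.5, 0.9, 1.3)
    rt_fast  = tri(resp_time, -0.1, 0.1, 0.5)
    rt_slow  = tri(resp_time,  0.3, 1.0, 2.0)

    # Rules: (firing strength, output level as a fraction of max_conns)
    rules = [
        (min(cpu_low, rt_fast), 1.0),    # plenty of headroom -> allow many connections
        (max(cpu_high, rt_slow), 0.3),   # CPU busy or responses slow -> throttle
    ]
    num = sum(strength * level for strength, level in rules)
    den = sum(strength for strength, _ in rules) or 1.0
    return int(max_conns * num / den)    # weighted-average defuzzification

print(fuzzy_connection_limit(0.35, 0.15))   # lightly loaded -> near max_conns
print(fuzzy_connection_limit(0.92, 1.40))   # saturated -> throttled
```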

4.
Traditional cluster servers rely on a front-end dispatcher that distributes client requests on the basis of load balancing, whereas the ASAS cluster adopts an active scheduling policy based on analysis of back-end server load. It is difficult to build an accurate mathematical model relating the CPU utilization of the ASAS back-end servers, the response time of client requests, and the connection capacity, so a conventional controller is hard to apply. A fuzzy control algorithm is therefore designed whose inputs are the CPU utilization and the response time of client requests and whose output is the number of connections. Experiments show that this fuzzy control policy achieves good performance.

5.
An Analysis of Scheduling Algorithms and Web Cluster Performance (cited by 7)
Queueing theory is used to analyze how load-balancing scheduling algorithms and locality-based scheduling algorithms affect the performance of a Web cluster server. The conclusions are: (1) because locality-based scheduling makes full use of the main-memory resources of the back-end nodes, the Web cluster performs better under locality-based scheduling than under load-balancing scheduling; (2) within locality-based scheduling, dispatching requests purely by request content leads to load imbalance, and a moderate amount of file replication must be allowed for the performance of the Web cluster to improve significantly.

6.
The Effect of Request Rate on Scheduling in Clustered Web Servers (cited by 4)
Several common request scheduling policies for clustered Web servers are discussed. Most current policies consider only the servers' internal metrics and ignore the influence of input metrics such as the request rate on load scheduling; even when the request rate is considered, the effect of bursty changes in the request rate on the Web servers is rarely taken into account. By analyzing how the request rate affects Web server performance, input metrics can be combined with server metrics to schedule incoming requests and thus effectively resolve load imbalance among the servers.

7.
Existing request scheduling policies tend to cause load imbalance among cluster nodes when applied to dynamic requests. A classification-based request scheduling policy for dynamic requests is therefore proposed: dynamic requests are classified by URL pattern so that requests in the same class have similar load characteristics, and each class is then scheduled with a simple round-robin policy. The load can thus be balanced well across the back-end nodes without having to estimate the load of each request. Test results show that the cluster's throughput improves by 51.9% with this policy.
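A minimal sketch of classification-based dispatching, assuming a hypothetical set of URL patterns and back-end names: each class keeps its own independent round-robin cycle over the back-end nodes.

```python
import re
from itertools import cycle

# Assumed URL patterns defining the request classes; a real deployment would
# derive these from the application's dynamic URLs.
CLASS_PATTERNS = [
    ("search", re.compile(r"^/search")),
    ("cart",   re.compile(r"^/cart")),
    ("other",  re.compile(r".*")),
]

class ClassifiedRoundRobin:
    """Round-robin dispatching performed independently per request class."""
    def __init__(self, backends):
        self._cycles = {name: cycle(backends) for name, _ in CLASS_PATTERNS}

    def classify(self, url: str) -> str:
        return next(name for name, pat in CLASS_PATTERNS if pat.match(url))

    def pick(self, url: str) -> str:
        return next(self._cycles[self.classify(url)])

lb = ClassifiedRoundRobin(["web1", "web2", "web3"])
print(lb.pick("/search?q=shoes"), lb.pick("/cart/add"), lb.pick("/search?q=hats"))
```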

8.
Building on Web QoS control for server clusters, a hybrid load-balancing scheduling policy based on two-layer L4/L7 dispatching is designed that jointly considers the request content, the capability of each server, and the current load-balance state of the whole cluster. The algorithm introduces a feedback step that dynamically adjusts the weight of each Web server and, by comparing the degree of load balance against a threshold, selects between the two scheduling strategies, thereby improving the performance of the Web cluster system.
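The sketch below illustrates the feedback idea under assumed gains and thresholds: server weights are nudged toward servers reporting below-average load, and the measured imbalance decides whether cheap L4 dispatching or content-aware L7 dispatching is used. The imbalance measure and constants are assumptions, not the paper's.

```python
def imbalance(loads):
    """Degree of imbalance: spread of the loads relative to their mean."""
    mean = sum(loads) / len(loads)
    return 0.0 if mean == 0 else (max(loads) - min(loads)) / mean

def update_weights(weights, loads, gain=0.5):
    """Feedback step: shift weight toward servers reporting below-average load."""
    mean = sum(loads) / len(loads)
    return [max(0.1, w + gain * (mean - l)) for w, l in zip(weights, loads)]

def choose_strategy(loads, threshold=0.3):
    """Cheap L4 dispatching while balanced; content-aware L7 once imbalance grows."""
    return "L7-content-aware" if imbalance(loads) > threshold else "L4-weighted"

loads = [0.9, 0.4, 0.5]
print(choose_strategy(loads), update_weights([1.0, 1.0, 1.0], loads))
```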

9.
An AHP-Based Load-Balancing Algorithm for Web Cluster Systems (cited by 1)
Server groups built with clustering technology show large differences in resource utilization. A load-balancing algorithm for cluster systems based on the Analytic Hierarchy Process (AHP) is therefore proposed: a judgment matrix is constructed to derive the individual and combined weights of the evaluation metrics. The scheduler periodically receives four classes of parameters from the real servers: network performance, server hardware, server software, and network service type. For each connection request the scheduler receives, a dynamic feedback algorithm selects the least-loaded server to handle the request. Experimental results show that the algorithm reduces the average server response time and effectively improves the response rate of the cluster system.
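A small sketch of the AHP step, assuming a hypothetical pairwise judgment matrix over the four metric groups: the weights are approximated from the normalized matrix columns, and each connection goes to the server with the smallest weighted load score. The matrix values and load figures are illustrative only.

```python
import numpy as np

# Assumed pairwise judgment matrix over (network performance, hardware,
# software, service type); real comparisons would come from the operator.
A = np.array([
    [1,   3,   5,   2  ],
    [1/3, 1,   3,   1/2],
    [1/5, 1/3, 1,   1/4],
    [1/2, 2,   4,   1  ],
])

def ahp_weights(judgment):
    """Approximate the principal eigenvector by averaging normalized columns."""
    norm = judgment / judgment.sum(axis=0)
    return norm.mean(axis=1)

def pick_server(metrics, weights):
    """metrics: rows = servers, columns = normalized load per metric (higher = busier)."""
    scores = metrics @ weights           # composite load per server
    return int(np.argmin(scores))        # least-loaded server gets the connection

w = ahp_weights(A)
loads = np.array([[0.7, 0.4, 0.5, 0.6],
                  [0.3, 0.6, 0.2, 0.4]])
print("weights:", w.round(3), "-> choose server", pick_server(loads, w))
```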

10.
An Application-Sensitive Scheduling Policy for Web Service Requests (cited by 10)
官荷卿, 张文博, 魏峻, 黄涛. 《计算机学报》, 2006, 29(7): 1189-1198
Performance has always been a major concern in enterprise Web service applications. Yet the Web application server, the mainstream platform supporting such applications, still schedules requests with the traditional first-come-first-served (FCFS) policy. FCFS cannot distinguish the importance of requests and degrades the performance of critical requests. Previous research has rarely designed the server's request scheduling mechanism around the performance requirements of the applications, which weakens the server's performance guarantees. An application-aware Web service request scheduling policy (AWSRS) is therefore proposed, which uses application benefit to evaluate how well the server meets an application's performance requirements. The server classifies requests according to application requirements and allocates resources to the different request classes with the goal of maximizing application benefit. Experiments show that AWSRS effectively improves the performance of critical requests.
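The paper's benefit model is not reproduced here, so the sketch below only illustrates benefit-driven ordering under an assumed per-class benefit and latency target: requests whose class earns more benefit and whose target is closer are served first. The class names, numbers, and the `priority` formula are hypothetical.

```python
import heapq

# Hypothetical benefit model: each request class declares a latency target and
# a benefit earned when the target is met.
CLASSES = {"order":  {"benefit": 10.0, "target_ms": 200},
           "browse": {"benefit": 1.0,  "target_ms": 1000}}

def priority(cls, waited_ms):
    """Higher benefit and less remaining slack -> served earlier."""
    c = CLASSES[cls]
    slack = max(c["target_ms"] - waited_ms, 1)
    return -c["benefit"] / slack          # heapq pops the smallest value first

queue = []
for req_id, cls, waited in [(1, "browse", 300), (2, "order", 50), (3, "order", 180)]:
    heapq.heappush(queue, (priority(cls, waited), req_id, cls))

while queue:
    _, req_id, cls = heapq.heappop(queue)
    print(f"serve request {req_id} ({cls})")
```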

11.
With the exponential growth of Internet services over the past few decades, traffic to major websites has risen sharply. Massive numbers of user requests mean that the request rate of a popular site can grow enormously within seconds, and once a server can no longer sustain such high concurrency, the resulting congestion and latency severely degrade the user experience. Load balancing is a key component of highly available network infrastructure: by placing a load balancer in front of the back-end servers and distributing the workload across them, the pressure that massive concurrent requests place on any single server is relieved, and the performance and reliability of the back-end servers and database are improved. Nginx, a high-performance HTTP and reverse-proxy server, is being used in practice more and more widely. This paper analyzes the load-balancing architecture of the Nginx server, studies its default weighted round-robin algorithm, and proposes an improved dynamic load-balancing algorithm that collects load information in real time and recomputes and reassigns the weights. Experimental comparisons of the load-balancing performance of the different algorithms show that the improved algorithm effectively raises the performance of the server cluster.
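Nginx's default upstream algorithm is the smooth weighted round robin; the sketch below reproduces that selection rule and adds a hypothetical `update_weights` step standing in for the paper's real-time load collection. The load-to-weight mapping and the sample figures are assumptions, not the paper's formula.

```python
class SmoothWRR:
    """Nginx-style smooth weighted round robin with dynamically refreshed weights."""
    def __init__(self, weights):
        self.effective = dict(weights)           # name -> weight
        self.current = {n: 0 for n in weights}

    def pick(self):
        for n, w in self.effective.items():
            self.current[n] += w
        total = sum(self.effective.values())
        best = max(self.current, key=self.current.get)
        self.current[best] -= total
        return best

    def update_weights(self, loads, base=10):
        # Assumed mapping: lower observed load -> higher weight, clamped to >= 1.
        for n, load in loads.items():
            self.effective[n] = max(1, round(base * (1.0 - load)))

lb = SmoothWRR({"web1": 5, "web2": 1, "web3": 1})
print([lb.pick() for _ in range(7)])                         # mostly web1, smoothly interleaved
lb.update_weights({"web1": 0.9, "web2": 0.2, "web3": 0.3})   # web1 now heavily loaded
print([lb.pick() for _ in range(7)])
```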

12.
Most web servers, in practical use, use a queuing policy based on the Best Effort model, which employs the first-in-first-out (FIFO) scheduling rule to prioritize web requests in a single queue. This model does not provide Quality of Service (QoS). In the Differentiated Services (DiffServ) model, separate queues are introduced to differentiate QoS for separate web requests with different priorities. This paper presents web server QoS models that use a single queue, along with scheduling rules from production planning in the manufacturing domain, to differentiate QoS for classes of web service requests with different priorities. These scheduling rules are Weighted Shortest Processing Time (WSPT), Apparent Tardiness Cost (ATC), and Earliest Due Date (EDD). We conduct simulation experiments and compare the QoS performance of these scheduling rules with the FIFO scheme used in the basic Best Effort model with only one queue, and with the basic DiffServ model with two separate queues. Simulation results demonstrate better QoS performance using WSPT and ATC, especially when requested services exceed the capacity of a web server.
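For concreteness, here is a small sketch of the WSPT and ATC priority indices computed on hypothetical requests; the weights, processing times, due dates, and the ATC look-ahead parameter K are illustrative assumptions.

```python
import math

# Each request: weight (value to the provider), processing time, and due date (seconds).
requests = [
    {"id": "r1", "w": 5.0, "p": 0.2, "d": 1.0},
    {"id": "r2", "w": 1.0, "p": 0.1, "d": 0.5},
    {"id": "r3", "w": 3.0, "p": 0.4, "d": 0.6},
]

def wspt_index(r):
    """Weighted Shortest Processing Time: serve the largest w/p first."""
    return r["w"] / r["p"]

def atc_index(r, now, k=2.0):
    """Apparent Tardiness Cost: WSPT discounted by the remaining slack."""
    p_bar = sum(x["p"] for x in requests) / len(requests)
    slack = max(r["d"] - r["p"] - now, 0.0)
    return (r["w"] / r["p"]) * math.exp(-slack / (k * p_bar))

now = 0.3
print("WSPT order:", [r["id"] for r in sorted(requests, key=wspt_index, reverse=True)])
print("ATC order: ", [r["id"] for r in sorted(requests, key=lambda r: atc_index(r, now), reverse=True)])
```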

13.
Service scheduling is one of the crucial issues in an E-commerce environment. E-commerce web servers often get overloaded as they have to deal with a large number of customers’ requests—for example, browse, search, and pay—in order to make purchases or to get product information from E-commerce web sites. In this paper, we propose a new approach in order to effectively handle high traffic load and to improve the web server’s performance. Our solution is to exploit networking techniques and to classify customers’ requests into different classes such that some requests are prioritised over others. We contend that such classification is financially beneficial to E-commerce services, as in these services some requests are more valuable than others. For instance, the processing of a “browse” request should get less priority than a “payment” request, as the latter is considered to be more valuable to the service provider. Our approach analyses the arrival process of distinct requests and employs a priority scheduling service at the network nodes that gives preferential treatment to high priority requests. The proposed approach is tested through various experiments which show a significant decrease in the response time of high priority requests. This also reduces the probability of a web server dropping high priority requests, thus enabling service providers to generate more revenue.
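A minimal sketch of the kind of preferential treatment described, with an assumed request-type-to-priority mapping and a bounded low-priority queue that sheds work under overload; the class names and capacity are illustrative.

```python
from collections import deque

# Assumed mapping from request type to priority class; "payment" is treated as
# more valuable than "browse", as argued in the abstract.
PRIORITY = {"payment": 0, "search": 1, "browse": 1}   # 0 = high, 1 = low

class PriorityNode:
    """Two-level priority queue with a bounded low-priority queue."""
    def __init__(self, low_capacity=100):
        self.queues = {0: deque(), 1: deque()}
        self.low_capacity = low_capacity

    def enqueue(self, req_type, req_id):
        cls = PRIORITY[req_type]
        if cls == 1 and len(self.queues[1]) >= self.low_capacity:
            return False                    # shed low-priority work under overload
        self.queues[cls].append(req_id)
        return True

    def dequeue(self):
        for cls in (0, 1):                  # strict priority: high class first
            if self.queues[cls]:
                return self.queues[cls].popleft()
        return None

node = PriorityNode(low_capacity=2)
for i, t in enumerate(["browse", "payment", "browse", "browse"]):
    node.enqueue(t, i)
print([node.dequeue() for _ in range(4)])   # the payment request comes out first
```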

14.
A Content-Based Load-Balancing Algorithm Model for Network Clusters (cited by 1)
Building on a review of load-balancing algorithms for network clusters, and using a content-classification approach, a three-tuple model of content-based cluster load balancing is given: request classification helps improve the cache hit rate, the dispatching mechanism specifies how to forward a request appropriately, and dynamic feedback avoids assigning requests to heavily loaded servers. Eight scheduling policies of the dispatching mechanism and six content-based dispatching and forwarding techniques are then analyzed. The model exploits cached content to improve the throughput and response time of the cluster and can be deployed for multiple service types.

15.
The growth of web-based applications in business and e-commerce is building up demands for high performance web servers for better throughputs and lower user-perceived latency. These demands are leading to a widespread substitution of powerful single servers by robust newcomers, cluster web servers, in many enterprise companies. In this respect the load-balancing algorithms play an important role in boosting the performance of cluster servers. The previous load-balancing algorithms, which were designed for the handling of static contents in web services, suffer from significant performance degradation under dynamic and database-driven workloads. Regarding this, we propose an approximation-based load-balancing algorithm with admission control for cluster-based web servers in this study. Since it is difficult to accurately determine the loads of web servers through feedbacks from distributed agents in web servers, we propose an analytical model of a web server to estimate the web servers’ loads. To achieve this, the algorithm classifies requests based on their service times, tracks the number of outstanding requests in each class on each web server node, and uses the requests’ resource demands to dynamically estimate the load of each node. For the error handling of the model a proportional integral (PI) controller from control theory is used. The estimated available capacity of each web server is then used for load balancing and admission control decisions. The implementation results with a standard benchmark confirm the effectiveness of the proposed scheme, which improves both the mean response time and the throughput of the cluster compared to rival load-balancing algorithms, and also avoids situations in which the cluster is overloaded, even when the request rates are beyond the cluster capacity.
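A toy illustration of the PI-corrected load estimate feeding an admission decision; the controller gains, capacity, and per-request cost are assumptions, not the paper's calibration.

```python
class PILoadEstimator:
    """Analytical load estimate corrected by a PI controller."""
    def __init__(self, kp=0.5, ki=0.1):
        self.kp, self.ki = kp, ki
        self.integral = 0.0

    def correct(self, model_estimate, measured_utilization):
        error = measured_utilization - model_estimate
        self.integral += error
        return model_estimate + self.kp * error + self.ki * self.integral

def admit(request_cost, estimated_load, capacity=1.0):
    """Admit the request only if the corrected estimate leaves enough headroom."""
    return estimated_load + request_cost <= capacity

est = PILoadEstimator()
load = est.correct(model_estimate=0.55, measured_utilization=0.70)
print(f"corrected load = {load:.2f}, admit 0.1-cost request: {admit(0.1, load)}")
```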

16.
We have implemented an efficient and scalable web cluster named LVS-CAD/FC (i.e. LVS with Content-Aware Dispatching and File Caching). In LVS-CAD/FC, a kernel-level one-way content-aware web switch based on TCP Rebuilding is implemented to examine and distribute the HTTP requests from clients to web servers, and the fast Multiple TCP Rebuilding is implemented to efficiently support persistent connections. In addition, a file-based web cache stores a small set of the most frequently accessed web files in server RAM to reduce disk I/Os, and a light-weight redirect method is developed to efficiently redirect requests to this cache. In this paper, we further propose new policies for content-based, workload-aware request distribution, in which the web switch considers the content of requests and the workload characterization when dispatching requests. In particular, web files with higher access frequencies are duplicated in more servers’ file-based caches, so that hot web files can be served by more servers. Our goals are to improve cluster performance by obtaining better memory utilization and increasing the cache hit rates while achieving load balancing among servers. Experimental results of a practical implementation on Linux show that LVS-CAD/FC is efficient and scales well. Moreover, LVS-CAD/FC with the proposed policies achieves 66.89% better performance than the Linux Virtual Server with a content-blind web switch.
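A sketch of the frequency-proportional replication idea with assumed access counts: hotter files are cached on more servers, and a request is dispatched to one of the servers caching its file. The counts, server names, and the replication formula are illustrative, not LVS-CAD/FC's actual policy.

```python
import random

# Assumed per-file access counts; in LVS-CAD/FC these would come from the
# web switch's request statistics.
access_count = {"/index.html": 900, "/hot.jpg": 450, "/rare.pdf": 20}
SERVERS = ["web1", "web2", "web3", "web4"]

def replication_degree(count, max_count, n_servers):
    """Hotter files get cached on proportionally more servers (at least one)."""
    return max(1, round(n_servers * count / max_count))

def build_cache_map():
    peak = max(access_count.values())
    return {f: SERVERS[: replication_degree(c, peak, len(SERVERS))]
            for f, c in access_count.items()}

cache_map = build_cache_map()

def dispatch(path):
    """Content-aware dispatch: send the request to a server caching the file."""
    return random.choice(cache_map.get(path, SERVERS))

print(cache_map, "->", dispatch("/hot.jpg"))
```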

17.
A Study of an OSI Application-Layer-Based Load-Balancing Scheduling Policy for Web Clusters (cited by 1)
This paper surveys current research on L7-based load-balancing scheduling for Web clusters and analyzes the main factors affecting performance. Both the request intensity and the capability of each Web server are taken into account when estimating Web load, and a minimum-load scheduling algorithm is proposed for clusters of servers with heterogeneous processing capabilities. The algorithm also accounts for the sharp performance drop that occurs when a server approaches its critical state and keeps the cluster out of that state. The new algorithm tracks the cluster's load more accurately and distributes the load more evenly.

18.
Research on an Adaptive Load-Feedback Balancing Policy for Cluster Systems (cited by 2)
Although many load-balancing policies exist for cluster systems, most of them estimate back-end load at the front end in order to reduce feedback overhead, and therefore cannot balance the load well. To address this, an adaptive load-feedback balancing policy is proposed: each server decides when to report its load according to changes in its own load, the front end computes a load weight for each server from the reported load information and the request rate, and requests are then scheduled to servers according to these weights. Because of the adaptive feedback policy, the overhead of load feedback is reduced while the load information of each server is still obtained, and the system achieves load balance. Test results show that the policy offers a clear advantage.
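A sketch of the adaptive feedback idea under an assumed 10% reporting threshold: a server reports only significant changes in its own load, and the front end turns the reported loads and the request rate into scheduling weights. The threshold and the weight formula are illustrative assumptions.

```python
class AdaptiveReporter:
    """Server side: report load only when it drifts far enough from the
    last reported value."""
    def __init__(self, threshold=0.10):
        self.threshold = threshold
        self.last_reported = None

    def maybe_report(self, load):
        if self.last_reported is None or abs(load - self.last_reported) > self.threshold:
            self.last_reported = load
            return load            # send this value to the front end
        return None                # stay silent, saving feedback overhead

def weights_from(loads, request_rate):
    """Front end: heavier load and a higher incoming rate -> smaller weight."""
    return {name: max(0.05, (1.0 - load) / (1.0 + request_rate))
            for name, load in loads.items()}

rep = AdaptiveReporter()
print(rep.maybe_report(0.50), rep.maybe_report(0.55), rep.maybe_report(0.70))
print(weights_from({"web1": 0.7, "web2": 0.3}, request_rate=2.0))
```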
