Found 19 similar documents; search time: 187 ms
3.
蔡勇 《计算机应用与软件》2009,26(3)
Because network intrusion detection systems have difficulty performing effective real-time detection on high-speed networks due to their own performance limits, a splitter-based intrusion detection system model built on dynamic traffic load balancing is designed. In this model, a traffic splitter forwards captured network packets at the data link layer to multiple detection sensors for processing, and a dynamic load-balancing splitting algorithm keeps the traffic evenly distributed. The design makes full use of the system's computing resources and offers good scalability, dynamic traffic balance, and detection performance. Experimental results show that the packets split to the individual sensors are distributed nearly evenly, and the system's detection and analysis capacity grows markedly as detection sensors are added.
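The splitting step described above can be sketched roughly as follows. This is a hypothetical illustration, not the paper's algorithm: the `FlowSplitter` name, the least-loaded assignment rule, and the flow-key format are all assumptions.

```python
# Sketch of a traffic splitter for a parallel IDS: packets of an already-seen
# flow stay on their sensor (so per-flow analysis is not broken up), while
# each new flow is assigned to the currently least-loaded sensor.
class FlowSplitter:
    def __init__(self, n_sensors):
        self.load = [0] * n_sensors   # packets dispatched to each sensor
        self.flow_map = {}            # flow key -> sensor index

    def dispatch(self, flow_id):
        sensor = self.flow_map.get(flow_id)
        if sensor is None:            # new flow: pick the least-loaded sensor
            sensor = min(range(len(self.load)), key=self.load.__getitem__)
            self.flow_map[flow_id] = sensor
        self.load[sensor] += 1
        return sensor
```

Keeping a flow pinned to one sensor trades a little balance for correct per-connection analysis; the least-loaded rule keeps the packet counts close to equal, as the abstract reports.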
5.
郑思思 《计算机光盘软件与应用》2011,(4)
Based on a study of load-balancing techniques in P2P networks, a load-balancing model built on application servers is presented. Combining each server node's service state, its service performance weight, and the number of users it is currently serving, a dynamic load-balancing scheduling strategy for application servers is designed and applied to a video-on-demand system.
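The selection rule the abstract names (performance weight combined with current user count) might be sketched as below; the score formula is an assumption for illustration, not the paper's.

```python
# Pick the server node with the best ratio of static performance weight to
# current load (active user sessions); +1 avoids division by zero on idle nodes.
def pick_server(servers):
    # servers: list of dicts with 'weight' (capacity) and 'users' (active sessions)
    return max(range(len(servers)),
               key=lambda i: servers[i]['weight'] / (servers[i]['users'] + 1))
```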
6.
Network management systems are gradually adopting distributed cluster architectures on the management side, using a load-balancing algorithm to schedule client requests and distribute them to multiple transaction nodes for parallel processing. To further improve the service performance of the cluster, this paper builds on earlier load-balancing algorithms and proposes a dynamic-feedback load-balancing algorithm based on round-robin cycles. The algorithm adds a dynamic feedback mechanism that computes each node's residual load capacity from dynamic weights over its remaining resources, and introduces a round-robin cycle within each sampling period of the dynamic-feedback algorithm to distribute client requests evenly. Comparative experiments show that the algorithm achieves a better load-balancing effect.
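The two pieces described above (residual-resource weights recomputed per sampling period, plus a weighted round-robin cycle inside the period) could look roughly like this. The weight formula and the quota construction are illustrative assumptions, not the paper's definitions.

```python
# Recompute node weights from reported residual resources at the start of
# each sampling period (weights assumed positive).
def residual_weights(nodes):
    # nodes: list of dicts with 'cpu_free' and 'mem_free' fractions in [0, 1]
    return [0.5 * n['cpu_free'] + 0.5 * n['mem_free'] for n in nodes]

# Within a sampling period, dispatch requests along a repeating round-robin
# cycle whose slots are proportional to the period's weights.
def round_robin_cycle(weights, requests):
    total = sum(weights)
    quota = [max(1, round(w / total * len(weights) * 2)) for w in weights]
    cycle = [i for i, q in enumerate(quota) for _ in range(q)]
    return [cycle[k % len(cycle)] for k in range(requests)]
```

Recomputing weights only once per period keeps feedback overhead low, while the cycle smooths assignments between feedback updates.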
7.
To address the load imbalance caused by uneven resource utilization across heterogeneous nodes in data centers, this paper proposes a virtual machine dynamic-migration selection strategy that combines a dynamic-threshold migration-timing decision algorithm with a load-type-aware selection algorithm. The strategy first adjusts the state thresholds dynamically from the global load level and the proportion of high- and low-load nodes, and decides the migration timing from a load evaluation value. It then analyzes each virtual machine's load type and selects the VMs to migrate according to their dependence on node resources, their current memory-bandwidth ratio, and their contribution, and selects the destination node according to the resource match between VM and node and the migration cost. This allows dynamic adjustment of the VMs on high- and low-load nodes and optimizes node resource allocation. Experimental results show that the strategy effectively reduces the number of VM migrations while preserving data center service quality, ultimately improving the data center's load-balancing capability.
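The migration-timing step (thresholds adjusted from global load and the hot/cold node ratio) might be sketched as below; the adjustment coefficients and base thresholds are assumptions, not the paper's formulas.

```python
# Decide whether a node should trigger VM migration: state thresholds are
# shifted by the global load level and the share of hot nodes, then the
# node's load estimate is compared against them.
def migration_needed(node_load, loads, base_high=0.8, base_low=0.2):
    global_load = sum(loads) / len(loads)
    hot = sum(l > base_high for l in loads) / len(loads)
    high = base_high - 0.1 * hot              # many hot nodes: trigger earlier
    low = base_low + 0.1 * (1 - global_load)  # mostly idle: consolidate sooner
    if node_load > high:
        return "migrate_out"
    if node_load < low:
        return "consolidate"
    return "none"
```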
8.
The dynamic load-balancing system based on residual computing capability is a dynamic load-balancing system built on a new kind of load vector. It uses a new load metric, residual computing capability, which accounts both for a node's resource usage and for the node's own performance characteristics, better reflecting the cluster's processing capacity and the load it is currently handling; it is more flexible and accurate than other commonly used load vectors. The system also combines task scheduling with process migration to balance the system load more effectively while reducing the extra overhead that load balancing itself introduces.
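One plausible form of such a metric, combining a raw performance rating with current utilization, is sketched below; the coefficients and the linear combination are assumptions for illustration only.

```python
# Estimate a node's residual computing capability: scale its performance
# rating by how much of its resources remain free (weighted mix of CPU,
# memory, and I/O utilization, each in [0, 1]).
def residual_capacity(perf, cpu_util, mem_util, io_util,
                      a=0.5, b=0.3, c=0.2):
    utilization = a * cpu_util + b * mem_util + c * io_util
    return perf * (1.0 - utilization)
```

A scheduler would then send new tasks to the node with the largest residual capacity rather than simply the least-utilized one, so a fast, half-busy node can still beat a slow, idle one.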
10.
The number of network users and multi-operator terminal nodes is growing markedly, making it difficult for network computing resources to reach a balanced state. An adaptive load-balancing algorithm for multi-operator terminals based on multi-agent technology is proposed. The load states of the multi-operator terminals are defined, their load information is collected, and the load information is quantified. On this basis, an agent-based load-balancing structure is built: multi-agent technology is introduced, a network resource model is constructed around the working characteristics of the multi-operator terminal nodes, and ...
11.
Deflection routing resolves output port contention in packet-switched multiprocessor interconnection networks by granting the preferred port to the highest-priority packet and directing contending packets out other ports. When combined with optical links and switches, deflection routing yields simple bufferless nodes, high bit rates, scalable throughput, and low latency. We discuss the problem of packet synchronization in synchronous optical deflection networks with nodes distributed across boards, racks, and cabinets. Synchronous operation is feasible due to very predictable optical propagation delays. A routing control processor at each node examines arriving packets and assigns them to output ports. Packets arriving on different input ports must be bitwise aligned; there are no elastic buffers to correct for mismatched arrivals. "Time of flight" packet synchronization is done by balancing link delays during network design. Using a directed graph network model, we formulate a constrained minimization problem for minimizing link delays subject to synchronization and packaging constraints. We demonstrate our method on a ShuffleNet graph, and show modifications to handle multiple packet sizes and latency-critical paths.
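The contention-resolution rule in the first sentence can be illustrated as follows; this is a toy sketch (distinct priorities and a lowest-free-port deflection rule are assumptions, not from the paper).

```python
# Bufferless deflection node: each packet prefers an output port; the
# highest-priority packet wins its preferred port, and any contender whose
# preference is taken is deflected to a remaining free port.
def resolve(packets, n_ports):
    # packets: list of (priority, preferred_port), priorities assumed distinct
    assert len(packets) <= n_ports        # bufferless: one packet per port
    free = set(range(n_ports))
    out = {}
    for prio, pref in sorted(packets, reverse=True):
        port = pref if pref in free else min(free)   # deflect if taken
        out[(prio, pref)] = port
        free.discard(port)
    return out
```

Because no packet is ever dropped or buffered, a deflected packet simply takes a longer route, which is why nodes need no elastic buffers.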
12.
To perform effective intrusion detection analysis on higher-bandwidth networks, this work studies data acquisition techniques for intrusion detection and proposes SEIMA (scalable efficient intrusion monitoring architecture), a scalable and efficient intrusion monitoring framework. In the SEIMA model, a high-efficiency network traffic load splitter is combined with multiple intrusion detection sensors working in parallel, so that intrusion detection scales to higher network bandwidths. A high-performance user-level packet transfer model that bypasses the operating system is implemented using efficient address translation and buffer management techniques, improving the packet-processing performance of a single sensor; and a user-level multi-rule packet filter built from finite automata eliminates the overhead of processing redundant packets. Tests in both simulated and real environments show that SEIMA improves the data acquisition efficiency of network intrusion detection while lowering system CPU utilization, leaving more system resources for more complex analysis.
13.
Load balancing can effectively improve network performance and scalability, but it may cause packet reordering, which in turn degrades performance. Additionally, without MPLS to establish the desired end-to-end paths, load balancing under hop-by-hop routing is more difficult to achieve than under source routing; nevertheless, it can significantly improve network performance. In this paper, we propose a load-balancing scheme for hop-by-hop routing that uses the burstiness features of flows to ensure that packets of the same flow arrive at the receiving end in order. Simulation results show that our algorithm can adapt to dynamic changes in end-to-end delay and in the routing vector, and can achieve fine-grained load balancing.
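One common way to use flow burstiness for in-order delivery is to re-route a flow only during an idle gap longer than the worst-case path delay difference, so packets on the new path cannot overtake earlier ones. The sketch below shows that idea; the class name and the fixed gap threshold are assumptions, and the paper's mechanism may differ in detail.

```python
# Gap-based re-routing: a flow may switch to a new path only when the idle
# gap since its last packet exceeds the path delay skew; otherwise it keeps
# its current path, preserving packet order at the receiver.
class FlowletRouter:
    def __init__(self, gap_threshold):
        self.gap = gap_threshold
        self.last_seen = {}          # flow -> (timestamp, path)

    def route(self, flow, now, best_path):
        seen = self.last_seen.get(flow)
        if seen is None or now - seen[0] > self.gap:
            path = best_path         # safe to switch: gap covers delay skew
        else:
            path = seen[1]           # keep old path to preserve order
        self.last_seen[flow] = (now, path)
        return path
```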
14.
Load-balancing algorithms are widely used in parallel processing, server clusters, and similar environments. Some applications based on correlations in packet content, such as IDS and IPv6 anycast services, require that load-balanced packet dispatch preserve network session affinity: related packets must be dispatched to the same processing node, or their semantics cannot be handled correctly. For such services, traditional load-balancing algorithms must trade off between the size of per-session context and session integrity, and when the number of sessions is large the overhead is usually large as well. Based on the concept of bit entropy, this paper proposes a simplified load-balancing algorithm that satisfies session integrity: the field classification algorithm. The algorithm needs no internal communication or coordination among the processing nodes and keeps no session context at the dispatching node, yet it maintains good macroscopic and microscopic balance while meeting packet- and session-affinity requirements.
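The stateless session-affinity property can be illustrated with a direction-independent hash of the flow identifier, as below. This shows only the affinity idea; the paper's bit-entropy field selection is not reproduced here, and the hash choice is an assumption.

```python
# Stateless, session-preserving dispatch: sorting the endpoints makes the
# key identical for both directions of a session, so (A->B) and (B->A)
# packets land on the same node with no per-session state at the dispatcher.
import hashlib

def session_node(src, dst, sport, dport, n_nodes):
    a, b = sorted([(src, sport), (dst, dport)])
    key = f"{a[0]}:{a[1]}|{b[0]}:{b[1]}".encode()
    return int(hashlib.sha1(key).hexdigest(), 16) % n_nodes
```

Because the mapping is a pure function of the packet header, no coordination between nodes and no dispatcher-side session table is needed, matching the properties the abstract claims.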
17.
A predictive load-balancing method for parallel intrusion detection systems  Cited by: 1 (self-citations: 0, citations by others: 1)
Rising data-stream speeds cause network intrusion detection systems (NIDS) to suffer severe miss rates, and connection-based load balancing copes poorly with bursty traffic on a single connection. To address this problem, a packet-prediction-based load-balancing scheme for parallel intrusion detection is proposed. By observing the packets entering and leaving each detector, the packet-prediction load-balancing algorithm forecasts each detector's load at the next moment, avoiding the possibility of assigning new connections to a detector experiencing a traffic burst and improving load-balancing efficiency. Simulation results demonstrate the feasibility and effectiveness of the scheme: it balances load effectively and reduces the system's packet loss rate.
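The prediction step might be sketched as below with a simple exponentially smoothed forecast of each detector's packet rate; the smoothing model is an assumption for illustration, and the paper's predictor may differ.

```python
# Predictive balancer: smooth each detector's observed packet rate over
# recent intervals, and send new connections to the detector with the
# lowest predicted load (i.e., away from predicted bursts).
class PredictiveBalancer:
    def __init__(self, n, alpha=0.5):
        self.pred = [0.0] * n
        self.alpha = alpha

    def observe(self, rates):
        # Exponential smoothing of per-detector packet rates.
        self.pred = [self.alpha * r + (1 - self.alpha) * p
                     for r, p in zip(rates, self.pred)]

    def assign(self):
        # New connection goes to the detector with the lowest predicted load.
        return min(range(len(self.pred)), key=self.pred.__getitem__)
```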
18.
Marcin Bienkowski 《Algorithmica》2014,68(2):426-447
In the online packet buffering problem (also known as the unweighted FIFO variant of buffer management), we focus on a single network packet switching device with several input ports and one output port. This device forwards unit-size, unit-value packets from input ports to the output port. Buffers attached to input ports may accumulate incoming packets for later transmission; if they cannot accommodate all incoming packets, their excess is lost. A packet buffering algorithm has to choose from which buffers to transmit packets in order to minimize the number of lost packets and thus maximize the throughput. We present a tight lower bound of e/(e-1)≈1.582 on the competitive ratio of the throughput maximization, which holds even for fractional or randomized algorithms. This improves the previously best known lower bound of 1.4659 and matches the performance of the algorithm Random Schedule. Our result contradicts the claimed performance of the algorithm Random Permutation; we point out a flaw in its original analysis.
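The setting above is easy to make concrete with a small simulator. The sketch below uses the simple "longest queue first" heuristic purely to illustrate the model; it is not the Random Schedule or Random Permutation algorithm analyzed in the paper.

```python
# Online packet buffering model: per time step, packets arrive into bounded
# FIFO input buffers (overflow is lost) and the algorithm transmits one
# packet per step. Here the transmitting buffer is chosen greedily as the
# longest queue.
def simulate(arrivals, buf_size, n_buffers):
    queues = [0] * n_buffers
    sent = lost = 0
    for step in arrivals:                 # step: packets arriving per buffer
        for i, a in enumerate(step):
            room = buf_size - queues[i]
            queues[i] += min(a, room)
            lost += max(0, a - room)      # excess over buffer capacity is lost
        j = max(range(n_buffers), key=queues.__getitem__)
        if queues[j] > 0:                 # transmit one packet per time step
            queues[j] -= 1
            sent += 1
    return sent, lost
```

An adversary chooses `arrivals` to make any online policy lose packets that an offline schedule could have saved; the paper's bound says no algorithm, even randomized, can keep the throughput gap below the factor e/(e-1).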
19.
Grammatikakis M.D. Liesche S. 《IEEE transactions on pattern analysis and machine intelligence》2000,26(5):401-422
The authors examine the design, implementation, and experimental analysis of parallel priority queues for device and network simulation. They consider: 1) distributed splay trees using MPI; 2) concurrent heaps using shared memory atomic locks; and 3) a new, more general concurrent data structure based on distributed sorted lists, designed to provide dynamically balanced work allocation and efficient use of shared memory resources. We evaluate performance for all three data structures on a Cray-T3E-900 system at KFA-Jülich. Our comparisons are based on simulations of single buffers and a 64×64 packet switch which supports multicasting. In all implementations, PEs monitor traffic at their preassigned input/output ports, while priority queue elements are distributed across the Cray-T3E virtual shared memory. Our experiments with up to 60000 packets and two to 64 PEs indicate that concurrent priority queues perform much better than distributed ones. Both concurrent implementations have comparable performance, while our new data structure uses less memory and has been further optimized. We also consider parallel simulation for symmetric networks by sorting integer conflict functions and implementing a packet indexing scheme. The optimized message-passing network simulator can process ~500K packet moves in one second, with an efficiency that exceeds ~50 percent for a few thousand packets on the Cray-T3E with 32 PEs. All developed data structures form a parallel library. Although our concurrent implementations use the Cray-T3E ShMem library, portability can be derived from OpenMP or MPI-2 standard libraries, which provide support for one-sided communication and shared memory lock mechanisms.