Similar Documents
17 similar documents found (search time: 109 ms)
1.
张丽果 《电子设计工程》2013,21(10):184-187
Deep packet inspection (DPI) has become a research focus in network information security. DPI based on hardware pattern matching has drawn wide attention for its superior processing capability. This paper proposes a TCAM-based pattern matching method for DPI: rule entries are stored byte by byte in a TCAM (ternary content addressable memory), and input characters are matched byte-wise against the TCAM contents, raising the pattern matching speed of DPI. To address the high power consumption of this technique, a two-stage matching scheme combining a Bloom filter (BF) and TCAM is proposed: the BF forwards only a small number of suspicious packets to the TCAM module, which lowers system power consumption and greatly improves overall processing speed.
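To make the two-stage idea above concrete, here is a minimal Python sketch (the signature strings, filter size and hash choice are illustrative assumptions, not taken from the paper): a Bloom filter cheaply discards most payload windows, and only the few windows it flags as possible matches are handed to an exact matcher standing in for the TCAM stage.

```python
import hashlib

class BloomFilter:
    """Small Bloom filter over byte strings (sizes are illustrative)."""
    def __init__(self, size_bits=8192, num_hashes=3):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: bytes):
        for i in range(self.k):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:4], "big") % self.size

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

rules = [b"/etc/passwd", b"cmd.exe", b"SELECT *"]   # toy DPI signatures
bf = BloomFilter()
tcam = set(rules)              # stage 2: exact match, standing in for the TCAM
for r in rules:
    bf.add(r)

def dpi_scan(payload: bytes):
    """Stage 1 (Bloom filter) rejects most windows cheaply; only windows it
    flags as possible matches reach the (emulated) TCAM exact matcher."""
    hits = []
    for n in {len(r) for r in rules}:               # one pass per signature length
        for i in range(len(payload) - n + 1):
            window = payload[i:i + n]
            if bf.might_contain(window) and window in tcam:
                hits.append((i, window))
    return hits

print(dpi_scan(b"GET /etc/passwd HTTP/1.1"))
```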

2.
A TCAM-based range matching method, C-TCAM (compressed TCAM), is proposed. In terms of space, a two-level compressed storage scheme allows C-TCAM to merge two expanded TCAM entries into one, so the worst-case range expansion factor is W-1 or W-2, improving space utilization. In terms of power, a new TCAM lookup algorithm keeps invalid entries out of the comparison, reducing power consumption. Analysis and simulation show that C-TCAM achieves high-performance packet classification while offering advantages in space utilization and power consumption.
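The range expansion that C-TCAM attacks can be reproduced with the textbook range-to-prefix conversion below (a generic illustration, not the C-TCAM compression itself): a range such as [1, 2^W - 2] on a W-bit field expands to 2W - 2 ternary prefixes, which is exactly the entry count that merging two expanded entries into one would roughly halve.

```python
def range_to_prefixes(lo: int, hi: int, width: int):
    """Expand the inclusive range [lo, hi] on a `width`-bit field into
    ternary prefixes, written as bit strings with '*' wildcards."""
    prefixes = []
    while lo <= hi:
        # Largest aligned block starting at lo that still fits inside [lo, hi].
        size = 1
        while (lo % (size * 2) == 0) and (lo + size * 2 - 1 <= hi):
            size *= 2
        fixed = width - size.bit_length() + 1   # number of fixed high-order bits
        body = format(lo >> (width - fixed), f"0{fixed}b") if fixed else ""
        prefixes.append(body + "*" * (width - fixed))
        lo += size
    return prefixes

W = 8
worst = range_to_prefixes(1, 2**W - 2, W)       # classic worst case: 2W - 2 prefixes
print(len(worst), worst[:3])
```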

3.
The rapid development of network technology places higher demands on pattern matching algorithms. To improve matching efficiency, this paper first analyzes the commonly used single-pattern and multi-pattern matching algorithms, and on that basis proposes a fast multi-pattern matching algorithm that combines the KR (Karp-Rabin) and BM (Boyer-Moore) algorithms. Experimental results verify the usability and efficiency of the algorithm.
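As a rough illustration of how KR-style fingerprints and a BM-style bad-character skip can be combined for several patterns at once (in the spirit of Wu-Manber, and not the authors' exact algorithm), consider the following sketch; the patterns and text are made up.

```python
def build_matcher(patterns):
    """Multi-pattern matcher combining a BM-style bad-character shift
    (computed over the shortest pattern length m) with KR-style hashing
    of the m-byte window to pick candidate patterns for verification."""
    m = min(len(p) for p in patterns)
    shift = {}                      # bad-character shift for the window's last byte
    for p in patterns:
        for i, ch in enumerate(p[:m]):
            shift[ch] = min(shift.get(ch, m), m - 1 - i)
    buckets = {}                    # hash of a pattern's first m bytes -> candidates
    for p in patterns:
        buckets.setdefault(hash(p[:m]), []).append(p)
    return m, shift, buckets

def search(text, patterns):
    m, shift, buckets = build_matcher(patterns)
    hits, i = [], 0
    while i + m <= len(text):
        window = text[i:i + m]
        s = shift.get(window[-1], m)           # BM-style skip on the last byte
        if s == 0:                             # last byte can end a pattern prefix
            for p in buckets.get(hash(window), []):    # KR hash narrows candidates
                if text.startswith(p, i):
                    hits.append((i, p))
            s = 1
        i += s
    return hits

print(search(b"alert: wormsig and botnet-cmd seen", [b"wormsig", b"botnet-cmd"]))
```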

4.
A Multi-Gigabit Pattern Matching Engine Based on CAM/TCAM Grouping   (total citations: 1; self-citations: 0; cited by others: 1)
何一凡  徐国银  沈海斌   《电子器件》2007,30(1):158-161
To improve the processing performance of the pattern matching module in an NIDS, and building on an analysis of related work, a pattern matching engine with multi-gigabit wire-speed processing capability is proposed. The engine stores pattern strings in balanced groups in CAM or TCAM according to their length and number, and uses techniques such as candidate-string switching to achieve multi-gigabit throughput and efficient use of storage space. Running multiple matching modules in parallel can further raise the engine's processing capacity. At a clock frequency of 200 MHz, the system's output throughput exceeds 6 Gbit/s.
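A toy sketch of the length-based grouping idea follows; the bank width, bank count and the balancing rule are assumptions for illustration and do not reflect the engine's actual CAM/TCAM organisation.

```python
from collections import defaultdict

def group_patterns(patterns, bank_width=16, banks=4):
    """Toy length-based grouping: bucket patterns by length (capped at the
    bank width) and assign whole length classes to the currently lightest
    bank so the per-bank load stays balanced. All sizes are assumptions."""
    by_len = defaultdict(list)
    for p in patterns:
        by_len[min(len(p), bank_width)].append(p)
    assignment, load = defaultdict(list), [0] * banks
    for length in sorted(by_len, key=lambda l: -len(by_len[l])):
        target = load.index(min(load))          # lightest bank gets the next class
        assignment[target].extend(by_len[length])
        load[target] += len(by_len[length])
    return dict(assignment), load

patterns = [b"cmd.exe", b"/bin/sh", b"SELECT", b"UNION", b"<script>",
            b"passwd", b"wget ", b"curl ", b"%00", b"../.."]
groups, load = group_patterns(patterns)
print(load)          # per-bank pattern counts
print(groups[0])     # contents of one bank
```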

5.
Based on the Bloom filter algorithm and ternary content addressable memory (TCAM), this paper proposes an efficient range matching method that addresses the low storage utilization and high power consumption of existing TCAM range matching schemes. A Segmented Match on Longest Common Prefix (SMLCP) algorithm splits range matching into two steps, prefix matching and interval comparison, bringing TCAM space utilization to 100%. A BF-TCAM model is then built on SMLCP, in which a Bloom filter screens keywords and keeps irrelevant entries out of the comparison, greatly reducing power consumption. A pipeline shortens the critical path so that a lookup completes within one clock cycle. The results show that the proposed method achieves zero range expansion and reduces operating power by more than 50% compared with a conventional TCAM.
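The split into a prefix part and an interval part can be shown in a few lines of Python (a sketch of the general idea rather than the SMLCP hardware design): the longest common prefix of the range bounds is what a TCAM entry would hold, and the residual interval on the low-order bits is what a small comparator would check.

```python
def lcp_split(lo: int, hi: int, width: int):
    """Split [lo, hi] into its longest common prefix (what a TCAM entry would
    hold) plus the residual interval on the remaining low-order bits
    (what a small comparator would check)."""
    suffix_bits = (lo ^ hi).bit_length()      # bits where lo and hi can differ
    prefix_len = width - suffix_bits
    prefix = format(lo >> suffix_bits, f"0{prefix_len}b") if prefix_len else ""
    mask = (1 << suffix_bits) - 1
    return prefix + "*" * suffix_bits, (lo & mask, hi & mask)

def match(key: int, lo: int, hi: int, width: int) -> bool:
    """Two-step match: TCAM-style prefix compare, then interval compare."""
    entry, (ilo, ihi) = lcp_split(lo, hi, width)
    suffix_bits = entry.count("*")
    key_bits = format(key, f"0{width}b")
    if prefix_len := width - suffix_bits:
        if key_bits[:prefix_len] != entry[:prefix_len]:
            return False
    return ilo <= (key & ((1 << suffix_bits) - 1)) <= ihi

print(lcp_split(1024, 1279, 16))              # one prefix entry plus an interval
print(match(1100, 1024, 1279, 16), match(1300, 1024, 1279, 16))
```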

6.
Ternary content-addressable memory (TCAM) is a common hardware device for fast route lookup. Maintaining longest-prefix matching in a TCAM may, in the worst case, require a large number of memory move operations. An algorithm is proposed here for managing the TCAM so that the worst-case incremental update time stays small. A performance comparison with other algorithms shows that, under the prefix-length ordering constraint, the proposed algorithm outperforms the commonly used ones.
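The ordering constraint itself is easy to see in a small software model (an illustration of why updates are costly, not the update algorithm proposed above): entries are kept sorted by decreasing prefix length so that the first matching entry is the longest prefix, and an insert may have to shift everything below its slot.

```python
import ipaddress

class TcamLpm:
    """Software model of a TCAM used for longest-prefix match: entries are
    kept sorted by decreasing prefix length, and lookup returns the first
    (lowest-index) matching entry, as the priority encoder would."""
    def __init__(self):
        self.entries = []                      # (network, next_hop), ordered

    def insert(self, prefix: str, next_hop: str) -> int:
        net = ipaddress.ip_network(prefix)
        # Find the slot that keeps the length ordering; everything after it
        # has to shift down, which is the cost an update algorithm must bound.
        pos = next((i for i, (n, _) in enumerate(self.entries)
                    if n.prefixlen < net.prefixlen), len(self.entries))
        self.entries.insert(pos, (net, next_hop))
        return len(self.entries) - pos - 1     # number of entries moved

    def lookup(self, addr: str):
        ip = ipaddress.ip_address(addr)
        for net, nh in self.entries:           # parallel compare in real hardware
            if ip in net:
                return nh
        return None

t = TcamLpm()
for p, nh in [("10.0.0.0/8", "A"), ("10.1.0.0/16", "B"), ("10.1.2.0/24", "C")]:
    t.insert(p, nh)
print(t.lookup("10.1.2.7"), t.lookup("10.9.9.9"))   # longest match wins: C, then A
```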

7.
Point pattern matching is an important problem in computer vision and pattern recognition, but factors such as noise and field of view have kept it from being fully solved. By constructing a relational graph over the point patterns, the point pattern matching problem is transformed into a search for the maximum identical subgraph of the relational graphs. The concepts of graph, subgraph, graph isomorphism and identity, supporting vertex pair and supporting vertex pair set are introduced, and several properties and theorems they satisfy are proved. Finally, an effective algorithm for searching the maximum identical subgraph is proposed, and ...

8.
苗建松  丁炜 《微电子学与计算机》2006,23(10):144-146,149
TCAM-based hardware route lookup can complete longest-prefix matching within one clock cycle, enabling fast route lookup and packet forwarding. However, the ordering constraint on routing table entries makes the update process complicated, which has become a bottleneck for TCAM-based routing. Based on the distribution of entries across prefix lengths and the update behavior of a routing table in steady state, the space allocation of the routing table is optimized and the idea of a buffer pool is introduced, yielding an improved routing table update method that raises update efficiency.
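A toy model of the buffer-pool idea is sketched below; the region sizes and the borrowing policy are assumptions for illustration, not the allocation derived in the paper.

```python
class RegionTcam:
    """Toy model of the buffer-pool idea: the TCAM is split into per-prefix-
    length regions, each allocated with spare slots, so a typical insert
    lands in a free slot of its own region and moves nothing.
    Region sizing below is an assumption for illustration."""
    def __init__(self, sizes):                 # {prefix_len: region_capacity}
        self.regions = {l: {"cap": c, "used": []} for l, c in sizes.items()}

    def insert(self, prefix_len, route):
        r = self.regions[prefix_len]
        if len(r["used"]) < r["cap"]:          # free slot in the buffer pool
            r["used"].append(route)
            return 0                           # zero entries moved
        # Pool exhausted: borrow a slot from another region. In a real TCAM
        # this means shifting boundary entries; simplified here to one move.
        for l in sorted(self.regions):
            other = self.regions[l]
            if l != prefix_len and len(other["used"]) < other["cap"]:
                other["cap"] -= 1              # shrink the donor region by one slot
                r["cap"] += 1
                r["used"].append(route)
                return 1
        raise MemoryError("TCAM full")

tcam = RegionTcam({8: 2, 16: 2, 24: 4})        # more space where prefixes cluster
print(tcam.insert(24, "10.1.2.0/24"))          # 0 moves: free slot available
print(tcam.insert(8, "10.0.0.0/8"), tcam.insert(8, "11.0.0.0/8"))
print(tcam.insert(8, "12.0.0.0/8"))            # pool for /8 exhausted: 1 move
```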

9.
Packet classification, as a core technology of network switching, plays an important role in guaranteeing high bandwidth and low latency. In core and bearer networks, high-performance networking places higher demands on switching, routing and QoS (Quality of Service). The mainstream technology in high-end switching chips is still hardware-based packet classification, among which the application of TCAM is the most mature. This paper analyzes the current applications of and research progress on TCAM algorithms, systematically introduces the two core problems in TCAM, range matching and multi-match, together with their solutions, compares the advantages and disadvantages of existing algorithms, and finally outlines future research directions for TCAM packet classification algorithms.

10.
In software-defined networks, expressing firewall policy as access-control rules and deploying them in a distributed fashion across the network can improve the quality of service of sessions. To reduce the number of rules placed in the network, a heuristic rule placement algorithm with multiplexing and merging (HARA) is proposed. The algorithm takes into account the TCAM capacity of commodity switches and the traffic load on the links attached to endpoint switches, and solves the rule placement problem for multi-route unicast sessions of different throughputs by building a mixed-integer linear programming model whose objective is to minimize the number of placed rules. Experimental results show that, compared with the nonRM-CP algorithm, and while preserving the quality of service of the different sessions, the algorithm saves up to 56% of TCAM space and reduces bandwidth resource utilization by 13.1% on average.
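As a much simplified stand-in for the MILP-based placement, the sketch below greedily places each session's rules on the path switch with the most free TCAM space and reuses rules already installed on the path; the switch names, capacities and rules are made up, and link load is ignored.

```python
def place_rules(sessions, capacity):
    """Greedy sketch: for each session (an ordered switch path plus the rules
    its traffic needs), put each rule on the path switch with the most free
    TCAM space, reusing a rule already placed on the path when possible."""
    placed = {sw: set() for sw in capacity}            # rules installed per switch
    for path, rules in sessions:
        for rule in rules:
            if any(rule in placed[sw] for sw in path): # multiplexing: reuse it
                continue
            space, sw = max((capacity[s] - len(placed[s]), s) for s in path)
            if space <= 0:
                raise RuntimeError(f"no TCAM space left on path {path}")
            placed[sw].add(rule)
    return placed

capacity = {"s1": 3, "s2": 2, "s3": 3}                 # per-switch TCAM slots (assumed)
sessions = [(["s1", "s2"], {"deny tcp/23", "allow tcp/80"}),
            (["s2", "s3"], {"deny tcp/23", "allow udp/53"})]
print(place_rules(sessions, capacity))
```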

11.
丁麟轩  黄昆  张大方 《通信学报》2014,35(8):20-168
A character-index-based regular expression matching algorithm is proposed: the alphabet and the states of the deterministic finite automaton (DFA) are stored separately and a character index is built, reducing the number of TCAM blocks activated during matching and significantly lowering TCAM energy consumption. Experimental results show that, compared with a plain DFA, the character-indexed DFA (CIDFA) reduces energy consumption by 92.7% on average, reduces storage overhead by 32.0% on average, and improves throughput by 57.9% on average.
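A small software analogue of the character-index idea is sketched below (generic character-class compression of a DFA, not the paper's TCAM layout): bytes that every state treats identically share one column, so the per-byte index is consulted first and only a compact transition table is touched.

```python
def build_cidfa(transitions, alphabet_size=256):
    """Split a DFA into (1) a character index mapping each byte to a class id
    and (2) a compact per-class transition table, so equivalent bytes share
    one column (the software analogue of activating fewer TCAM blocks).
    Missing transitions fall back to state 0 here, a simplification."""
    columns, table = {}, []
    char_index = [0] * alphabet_size
    for b in range(alphabet_size):
        col = tuple(transitions[s].get(b, 0) for s in range(len(transitions)))
        if col not in columns:
            columns[col] = len(table)
            table.append(col)
        char_index[b] = columns[col]
    return char_index, table

def run(char_index, table, start, accepting, data: bytes):
    state = start
    for b in data:
        state = table[char_index[b]][state]
    return state in accepting

# Tiny DFA that full-string-matches "ab", "abb", ...: 0 -'a'-> 1 -'b'-> 2 -'b'-> 2.
dfa = [{ord("a"): 1}, {ord("b"): 2}, {ord("b"): 2}]
char_index, table = build_cidfa(dfa)
print(len(table))                                   # character classes, not 256 columns
print(run(char_index, table, 0, {2}, b"abb"), run(char_index, table, 0, {2}, b"ax"))
```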

12.
Under the constraints of TCAM (Ternary Content Addressable Memory) entry width and storage capacity, this paper proposes a BF-TCAM algorithm based on match entry compression: a Bloom filter (BF) encodes the match key byte by byte to compress the key length, addressing the problems of low matching throughput and insufficient storage space. To counter the higher collision rate introduced by BF compression of entries, a vector storage space strategy is introduced, which uses the vector storage space to realize the mappings of multiple hash functions and, compared with a plain bit-vector strategy, helps lower the matching collision rate. Tests show that, compared with the conventional TCAM matching algorithm, BF-TCAM not only improves matching throughput and storage space utilization but also effectively reduces the collision rate caused by BF compression.
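The following toy sketch illustrates Bloom-filter-style key compression (the vector width, hash function and per-byte encoding are assumptions, and collision handling is omitted): a long match key is folded into a fixed-width bit vector that a narrow TCAM entry could store, at the price of possible collisions.

```python
import hashlib

def compress_key(key: bytes, width_bits=32, num_hashes=3) -> int:
    """Bloom-filter-style compression: fold an arbitrarily long key into a
    fixed-width bit vector by setting num_hashes bit positions per byte.
    Distinct keys can collide; lowering the collision rate needs a wider
    vector or better-spread hash positions."""
    vec = 0
    for offset, byte in enumerate(key):
        for i in range(num_hashes):
            h = hashlib.blake2b(bytes([i, byte]) + offset.to_bytes(4, "big"),
                                digest_size=4).digest()
            vec |= 1 << (int.from_bytes(h, "big") % width_bits)
    return vec

# The compressed vectors are what a fixed-width TCAM entry would store.
rules = {compress_key(k): k for k in [b"GET /index.html", b"POST /login", b"DELETE /admin"]}
probe = compress_key(b"POST /login")
print(rules.get(probe))                      # hit on the compressed form
print(compress_key(b"GET /index.html") == compress_key(b"GET /index.htm"))
# usually False, but collisions between different keys are possible in principle
```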

13.
With the increasing diversity of network functions, packet classification places higher demands on the number of match fields and the depth of the match table, which puts a severe burden on hardware storage capacity. To keep the matching process efficient while improving the utilization of storage devices, an information-entropy-based cutting algorithm for match fields was proposed. By analyzing the redundancy of the match fields and their distribution within a rule set, a match-field cutting model was built; by mapping the matching process onto a process of entropy reduction, the complexity of finding the optimal match-field cut was reduced from NP-hard to linear. Experimental results show that, compared with existing schemes, this scheme needs about 40% less TCAM storage space, and as the table size grows its time complexity also remains far below that of the other algorithms.
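A minimal sketch of the entropy computation such a field-cutting approach relies on is given below; the rule set and the decision of which field to cut are illustrative, not the paper's model.

```python
import math
from collections import Counter

def field_entropy(rules, field):
    """Empirical Shannon entropy (in bits) of one match field over a rule set."""
    counts = Counter(r[field] for r in rules)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

rules = [  # toy 5-tuple-like rules; values are illustrative
    {"src": "10.0.0.0/8", "dst": "192.168.1.0/24", "proto": "tcp", "dport": 80},
    {"src": "10.0.0.0/8", "dst": "192.168.2.0/24", "proto": "tcp", "dport": 443},
    {"src": "10.0.0.0/8", "dst": "192.168.3.0/24", "proto": "tcp", "dport": 22},
    {"src": "10.0.0.0/8", "dst": "192.168.1.0/24", "proto": "udp", "dport": 53},
]
for f in ("src", "dst", "proto", "dport"):
    print(f, round(field_entropy(rules, f), 3))
# A field with (near-)zero entropy, like "src" here, adds no discriminating
# power and is a natural candidate to cut from the TCAM match key.
```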

14.
In OpenFlow networks, switches accept flow rules through standardized interfaces and perform flow-based packet processing. TCAM is widely used in OpenFlow switches to speed up flow table lookup, but it is expensive and consumes a large amount of power. A hybrid lookup scheme that integrates a multiple-cell hash table with TCAM was proposed for flow table matching, reducing the cost and power consumption of the lookup structure without sacrificing lookup performance. Through theoretical analysis and extensive experiments, an optimal capacity configuration of the hash table and the TCAM was derived that minimizes the cost of flow table lookup. The experimental results also show that, compared with a pure TCAM scheme, the proposed scheme saves over 90% of the cost and significantly reduces the power consumption of flow table matching while keeping similar lookup performance.
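A minimal software model of the hybrid lookup is sketched below (bucket and cell counts are assumptions): exact-match flows sit in a hash table with multi-cell buckets, wildcard flows fall back to a priority-ordered list standing in for the TCAM, and a lookup consults the hash table first.

```python
class HybridFlowTable:
    """Exact-match flows live in a hash table with multi-cell buckets;
    wildcard flows fall back to a (priority-ordered) TCAM, emulated as a list.
    Bucket count and cell count are illustrative assumptions."""
    def __init__(self, buckets=64, cells=4):
        self.buckets = [[] for _ in range(buckets)]
        self.cells = cells
        self.tcam = []                                   # (match_fn, action), by priority

    def add_exact(self, key, action):
        b = self.buckets[hash(key) % len(self.buckets)]
        if len(b) < self.cells:
            b.append((key, action))
        else:                                            # bucket overflow -> TCAM
            self.tcam.append((lambda k, key=key: k == key, action))

    def add_wildcard(self, match_fn, action):
        self.tcam.append((match_fn, action))

    def lookup(self, key):
        for k, action in self.buckets[hash(key) % len(self.buckets)]:
            if k == key:                                 # cheap SRAM-side hit
                return action
        for match_fn, action in self.tcam:               # expensive TCAM access
            if match_fn(key):
                return action
        return "miss"

t = HybridFlowTable()
t.add_exact(("10.0.0.1", "10.0.0.2", 6, 80), "forward:3")
t.add_wildcard(lambda k: k[3] == 22, "drop")             # any flow to port 22
print(t.lookup(("10.0.0.1", "10.0.0.2", 6, 80)), t.lookup(("1.1.1.1", "2.2.2.2", 6, 22)))
```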

15.
An improved BM algorithm suited to space protocol identification is proposed. First, a bit-distance-based preprocessing algorithm for space data is given, which enlarges the character set, and a fractional-skip mechanism is introduced to improve the efficiency of BM matching on protocol packet headers. Regular expressions are then applied for protocol identification, and a hierarchical-relationship method raises the efficiency of identifying multi-layer space protocols. Finally, the complexity of the proposed algorithm is analyzed and verified experimentally. The results show that, for identifying a single-layer protocol with a pattern of length m, the time complexity can be reduced to (1+m/4)/m of that of the BM algorithm, and the efficiency of multi-layer protocol identification is improved by a factor of 2.5. Compared with the BM algorithm, the proposed algorithm also effectively handles short patterns and large amounts of uncertain data, is more efficient on large data volumes, and the resulting grouping effectively suppresses the state explosion of the regular expression DFA matching engine.
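For reference, here is the textbook Boyer-Moore-Horspool bad-character search that the bit-distance preprocessing and fractional-skip mechanism above aim to improve; this is the baseline algorithm, not the paper's variant, and the byte strings are made up.

```python
def horspool_search(text: bytes, pattern: bytes):
    """Textbook Boyer-Moore-Horspool: on each window, skip ahead by the
    bad-character shift of the text byte aligned with the pattern's end."""
    m = len(pattern)
    shift = {b: m - 1 - i for i, b in enumerate(pattern[:-1])}   # last occurrence
    hits, i = [], 0
    while i + m <= len(text):
        if text[i:i + m] == pattern:
            hits.append(i)
        i += shift.get(text[i + m - 1], m)       # skip by the aligned last byte
    return hits

# A small effective alphabet (e.g. raw telemetry bytes) yields short skips, which
# is why the paper enlarges the character set before applying BM-style skips.
print(horspool_search(b"\x1a\xcf\xfc\x1d" * 3 + b"\x08\x92", b"\x1d\x08\x92"))
```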

16.
New network applications like intrusion detection systems and packet-level accounting require multimatch packet classification, where all matching filters need to be reported. Ternary content addressable memories (TCAMs) have been adopted to solve the multimatch classification problem due to their ability to perform fast parallel matching. However, TCAMs are expensive and consume large amounts of power. None of the previously published multimatch classification schemes is both memory and power efficient. In this paper, we develop a novel scheme that meets both requirements by using a new set splitting algorithm (SSA). The main idea behind SSA is that it splits filters into multiple groups and performs separate TCAM lookups into these groups. It guarantees the removal of at least half of the intersections when a filter set is split into two sets, thus resulting in low TCAM memory usage. SSA also accesses filters in the TCAM only once per packet, leading to low power consumption. We compare SSA with the two best known schemes: multimatch using discriminators (MUD) (Lakshminarayanan and Rangarajan, 2005) and geometric intersection-based solutions (Yu and Katz, 2004). Simulation results based on the SNORT filter sets show that SSA uses approximately the same amount of TCAM memory as MUD, but yields a 75%–95% reduction in power consumption. Compared with geometric intersection-based solutions, SSA uses 90% less TCAM memory and power at the cost of one additional TCAM lookup per packet. We also show that SSA can be combined with SRAM/TCAM hybrid approaches to further reduce energy consumption.
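The intuition behind splitting the filter set can be illustrated with the toy 1-D sketch below (a naive greedy 2-way partition, not the SSA of the paper): overlapping filters are what force extra multimatch entries, and moving overlaps across the group boundary removes them from each group's own TCAM at the cost of one extra lookup.

```python
from itertools import combinations

def overlaps(f, g):
    """Two 1-D range filters intersect if their intervals overlap."""
    return max(f[0], g[0]) <= min(f[1], g[1])

def split_filters(filters):
    """Greedy 2-way split: place each filter in the group where it currently
    intersects fewer members, so many intersections become cross-group and
    disappear from each group's own TCAM."""
    groups = ([], [])
    for f in filters:
        cost = [sum(overlaps(f, g) for g in grp) for grp in groups]
        groups[cost.index(min(cost))].append(f)
    return groups

def count_intersections(filters):
    return sum(overlaps(a, b) for a, b in combinations(filters, 2))

filters = [(0, 10), (5, 15), (12, 20), (18, 30), (25, 40), (8, 28)]
g1, g2 = split_filters(filters)
print(count_intersections(filters),                      # before splitting
      count_intersections(g1) + count_intersections(g2)) # after: per-group totals
```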

17.
Internet routers conduct routing table (RT) lookup based on the destination IP address of the incoming packet to decide to which output port the packet should be forwarded. Ternary content-addressable memory (TCAM) uses parallelism to achieve lookup in a single cycle. One of the major drawbacks of TCAM is its high power consumption. Trie-based architectures have been proposed to reduce TCAM power consumption. The idea is to use an index TCAM to select one of many data TCAM blocks for lookup. However, power reduction is limited by the size of the index TCAM, which is always enabled for search. In this paper we develop a simple but effective trie-partitioning algorithm to reduce the index TCAM size, which achieves a better reduction in power consumption and at the same time guarantees full TCAM space utilization. We compared our algorithm (LogSplit) with PostOrderSplit (IEEE INFOCOM, 2003). For two real-world RTs (AADS and PAIX), the size of the index TCAM generated by LogSplit is 55–70% of that generated by PostOrderSplit; the largest power reduction factor of LogSplit is 41 for AADS and 68 for PAIX, while the largest power reduction factor of PostOrderSplit is 33 for AADS and 52 for PAIX. The improvement is even more significant in the worst case: the size of the index TCAM generated by LogSplit is 18–30% of that generated by PostOrderSplit for IPv4, and less than 1% of that generated by PostOrderSplit for IPv6; the largest power reduction factor of LogSplit is 173 for both IPv4 and IPv6, while the largest power reduction factor of PostOrderSplit is only 82 for IPv4 and 41 for IPv6. Copyright © 2007 John Wiley & Sons, Ltd.
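A toy illustration of the index-TCAM/data-TCAM organisation is sketched below; the bucket size, the cutting rule and the replication of short covering prefixes are simplified assumptions and do not reproduce LogSplit itself.

```python
def partition_routes(routes, bucket_size=4, index_prefix=""):
    """Recursively cut the prefix set: if it fits in one data-TCAM block,
    emit (index_prefix -> routes); otherwise split on the next bit and
    replicate any route shorter than the cut point into both halves so a
    lookup in either block still sees its covering routes."""
    if len(routes) <= bucket_size or all(len(p) <= len(index_prefix) for p, _ in routes):
        return {index_prefix: routes}
    cut = len(index_prefix)
    short = [r for r in routes if len(r[0]) <= cut]       # covering routes: replicate
    left = short + [r for r in routes if len(r[0]) > cut and r[0][cut] == "0"]
    right = short + [r for r in routes if len(r[0]) > cut and r[0][cut] == "1"]
    out = {}
    out.update(partition_routes(left, bucket_size, index_prefix + "0"))
    out.update(partition_routes(right, bucket_size, index_prefix + "1"))
    return out

def lookup(addr_bits, index):
    """Index-TCAM stage: pick the block whose index prefix matches the address;
    data-TCAM stage: longest matching prefix inside that one block."""
    block = max((p for p in index if addr_bits.startswith(p)), key=len)
    matches = [r for r in index[block] if addr_bits.startswith(r[0])]
    return max(matches, key=lambda r: len(r[0]), default=(None, "miss"))[1]

routes = [("0000", "A"), ("00010", "B"), ("001", "C"), ("01", "D"),
          ("10", "E"), ("1100", "F"), ("1101", "G"), ("111", "H"), ("", "default")]
index = partition_routes(routes, bucket_size=4)
print({p: len(rs) for p, rs in index.items()})            # index prefix -> block size
print(lookup("11010110", index), lookup("01100000", index))
```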


