Similar Documents
20 similar documents found
1.
Existing detection methods for anomalous application-protocol behavior mostly target one specific application and therefore lack generality. To address this, an anomalous application-protocol behavior detection method based on conditional random fields is proposed. Application-protocol keywords and their time intervals are extracted from network data flows as state features, and the frequency distribution of the keywords is also taken into account; a conditional random field model is then used to model the protocol behavior, and behavior that deviates from the model is judged anomalous. Compared with traditional modeling based on hidden Markov models, this method does not require strict independence assumptions on the features and has the advantage of being able to fuse multiple features. Experimental results show that the proposed method detects protocol anomalies with high accuracy and a low false-alarm rate.
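As a rough illustration of the feature design described above (not the authors' code), the sketch below turns a parsed keyword stream into per-position feature dictionaries of the kind a linear-chain CRF toolkit expects; the field names, example keywords, and bucketing thresholds are assumptions.

```python
from collections import Counter

def protocol_sequence_features(events):
    """Build per-position CRF features from a parsed protocol keyword stream.

    `events` is a list of (keyword, timestamp) pairs for one flow, e.g.
    [("USER", 0.00), ("PASS", 0.12), ("RETR", 0.95), ...]  (hypothetical data).
    Returns one feature dict per position, combining the keyword, a bucketed
    inter-arrival time, and the keyword's relative frequency within the flow.
    """
    freqs = Counter(kw for kw, _ in events)
    total = sum(freqs.values())
    feats = []
    for i, (kw, ts) in enumerate(events):
        gap = ts - events[i - 1][1] if i > 0 else 0.0
        # Coarse buckets for the inter-arrival time (thresholds are assumptions).
        if gap < 0.1:
            gap_bucket = "short"
        elif gap < 1.0:
            gap_bucket = "medium"
        else:
            gap_bucket = "long"
        feats.append({
            "keyword": kw,
            "gap_bucket": gap_bucket,
            "keyword_freq": round(freqs[kw] / total, 2),
            "prev_keyword": events[i - 1][0] if i > 0 else "<BOS>",
        })
    return feats

# Example: features for one FTP-like flow (synthetic).
flow = [("USER", 0.00), ("PASS", 0.12), ("RETR", 0.95), ("QUIT", 3.20)]
for f in protocol_sequence_features(flow):
    print(f)
```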

2.
An online action recognition method based on star-distance contour features and the LDCRF model is proposed. The contour of the already-segmented human figure in each video frame is extracted, and the centroid of the human contour together with the star-distance vector from the centroid to the contour sampling points is computed; this vector parameterizes the human pose features. A wavelet transform is applied to the raw pose feature vector, reducing its dimensionality while providing multi-resolution detail of the pose. A latent-dynamic conditional random field (LDCRF) model is then used to model the action features and perform online recognition. Recognition results of the CRF, HCRF, and LDCRF models on 10 different actions are compared; the comparison shows that, relative to CRF and HCRF, this model has stronger recognition ability for continuous action sequences and better stability.
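A minimal sketch of the star-distance descriptor, assuming the silhouette contour is already available as an (N, 2) array of pixel coordinates; the sampling count, wavelet choice, and decomposition level are assumptions rather than the paper's settings.

```python
import numpy as np
import pywt  # PyWavelets

def star_distance_descriptor(contour, n_samples=64, wavelet="db2", level=3):
    """Centroid-to-contour star distances, compressed by a wavelet transform.

    contour : (N, 2) array of (x, y) points along the silhouette boundary.
    Returns the approximation coefficients of the distance signal, a
    lower-dimensional, multi-resolution pose feature.
    """
    centroid = contour.mean(axis=0)
    # Resample the contour to a fixed number of points so descriptors are comparable.
    idx = np.linspace(0, len(contour) - 1, n_samples).astype(int)
    sampled = contour[idx]
    distances = np.linalg.norm(sampled - centroid, axis=1)
    distances /= distances.max() + 1e-8          # scale invariance
    coeffs = pywt.wavedec(distances, wavelet, level=level)
    return coeffs[0]                             # keep the coarse approximation

# Synthetic example: a circular contour of 200 boundary points.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
contour = np.stack([100 + 40 * np.cos(theta), 100 + 40 * np.sin(theta)], axis=1)
print(star_distance_descriptor(contour).shape)
```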

3.
To address the real-time limitation of the hidden conditional random field (HCRF) and the label bias that the latent-dynamic conditional random field (LDCRF) exhibits at action transitions, an action recognition algorithm based on a stratified fractional conditional random field (SFCRF) model is proposed. The algorithm improves on LDCRF and introduces the concept of fractional labels, which makes the completeness and directionality of human actions explicit. Experimental results show that the algorithm achieves better recognition performance than CRF, HCRF, and LDCRF.

4.
A DDoS attack detection method based on conditional random fields   Cited by: 2 (self-citations: 0, citations by others: 2)
刘运  蔡志平  钟平  殷建平  程杰仁 《软件学报》2011,22(8):1897-1910
In recent years, distributed denial-of-service (DDoS) attack detection techniques based on machine learning algorithms have made great progress, but two shortcomings remain: (1) they cannot fully exploit the contextual information contained in the label and feature observation sequences; (2) they make overly strong assumptions about the probability distribution of multiple features. The conditional random field model can fuse contextual information and multiple features, so applying it to DDoS detection effectively compensates for both shortcomings. A DDoS attack detection method based on conditional random fields is proposed. First, two groups of statistics, the traffic feature conditional entropy (TFCE) and the behavior profile deviate degree (BPDD), are defined to characterize three attack types: TCP flood, UDP flood, and ICMP flood. On this basis, conditional random fields are trained to build a separate classification model for each of the three attack types; finally, inference with the trained models completes the detection of DDoS attacks. Experimental results show that the method takes full advantage of the conditional random field model, accurately distinguishes normal traffic from attack traffic, and is more robust to interference from background traffic than comparable methods.
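The paper defines window statistics such as TFCE; as a hedged illustration (not the authors' exact definition), the snippet below computes a conditional entropy H(destination port | source IP) over a window of flow records, the kind of quantity that typically collapses under a flooding attack. The field pairing and synthetic traffic are assumptions.

```python
import math
from collections import Counter

def conditional_entropy(records):
    """H(Y | X) in bits for (x, y) pairs, e.g. (source IP, destination port).

    During a flood the conditional distribution usually becomes very peaked,
    so the conditional entropy drops sharply compared with normal traffic.
    """
    joint = Counter(records)
    x_counts = Counter(x for x, _ in records)
    total = len(records)
    h = 0.0
    for (x, y), c in joint.items():
        p_xy = c / total
        p_y_given_x = c / x_counts[x]
        h -= p_xy * math.log2(p_y_given_x)
    return h

# Synthetic windows: mixed normal traffic vs. a single-port flood with spoofed sources.
normal = [("10.0.0.%d" % (i % 20), 1000 + i % 50) for i in range(500)]
flood  = [("10.0.0.%d" % (i % 200), 80) for i in range(500)]
print("normal window:", round(conditional_entropy(normal), 3))
print("flood window: ", round(conditional_entropy(flood), 3))
```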

5.
Human action analysis has long been a challenging research direction in computer vision, and the recent introduction of depth sensors has provided new approaches to the problem. Using depth images acquired with a Microsoft Kinect sensor, local gradient features are first extracted from the depth maps and then combined with a conditional random field (CRF) model, yielding a new human action analysis method. The method effectively recognizes simple human actions; experiments on two popular human action databases demonstrate its good recognition results and superior performance.
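A small sketch of one plausible way to build the local gradient features mentioned above from a depth map, using cell-wise histograms of gradient orientations; the cell size and bin count are assumptions, not the paper's parameters.

```python
import numpy as np

def depth_gradient_histograms(depth, cell=16, bins=8):
    """Cell-wise orientation histograms of depth gradients (HOG-like descriptor).

    depth : (H, W) float array of depth values (e.g. from a Kinect sensor).
    Returns an array of shape (H//cell, W//cell, bins) that can be flattened
    into per-frame observations for a downstream CRF.
    """
    gy, gx = np.gradient(depth.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned orientation in [0, pi)
    h_cells, w_cells = depth.shape[0] // cell, depth.shape[1] // cell
    feats = np.zeros((h_cells, w_cells, bins))
    for i in range(h_cells):
        for j in range(w_cells):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            a = ang[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, np.pi), weights=m)
            feats[i, j] = hist / (hist.sum() + 1e-8)
    return feats

depth = np.random.rand(240, 320) * 4.0               # synthetic depth frame in metres
print(depth_gradient_histograms(depth).shape)        # (15, 20, 8)
```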

6.
A hidden topic variable graphical model based on a deep learning framework   Cited by: 1 (self-citations: 0, citations by others: 1)
A hidden topic variable graphical model is a probabilistic graphical model in which nodes represent latent topics or changes in latent topics. Current hidden topic variable graphical models can only extract a single layer of topic nodes; to overcome this limitation, a probabilistic graphical model based on a deep learning framework that extracts multiple layers of topic nodes is proposed. The model adds a preprocessing structural layer, a self-organizing map layer, at the bottom of the hidden topic variable graphical model, which effectively extracts topic states at different levels. In addition, the hidden topic variable graphical model combines a hidden Markov network with a conditional random field. For the conditional random field, feature functions defined by first-order logic clauses are proposed to compensate for the lack of long-distance dependency modeling. On this basis, a new deep learning algorithm that extracts topic states layer by layer is proposed. Experiments on the widely used Amazon sentiment analysis data and TripAdvisor sentiment analysis data show that the new algorithm improves sentiment analysis accuracy. The results also show that extracting multi-layer topic states better mines both the macroscopic topic distribution and the local topic information of reviews.
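The preprocessing layer is a self-organizing map; as a rough, self-contained sketch (not the paper's implementation), a minimal SOM trained on document feature vectors looks like the following. Grid size, epochs, and learning-rate schedule are assumptions.

```python
import numpy as np

def train_som(data, grid=(6, 6), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal self-organizing map: returns a (grid_h, grid_w, dim) weight array.

    Each input vector is assigned to its best-matching unit (BMU), and the BMU
    and its grid neighbours are pulled toward the input with a decaying
    learning rate and neighbourhood radius.
    """
    rng = np.random.default_rng(seed)
    h, w = grid
    dim = data.shape[1]
    weights = rng.normal(size=(h, w, dim))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)
        sigma = sigma0 * (1 - epoch / epochs) + 1e-3
        for x in rng.permutation(data):
            dists = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(dists), (h, w))
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=2)
            influence = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
            weights += lr * influence[..., None] * (x - weights)
    return weights

# Toy "review" feature vectors: each row is a document embedding (synthetic).
docs = np.random.default_rng(1).normal(size=(200, 16))
som = train_som(docs)
print(som.shape)      # (6, 6, 16): each unit is a candidate topic state
```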

7.
A moving object detection algorithm using multiple homography constraints and Markov random fields   Cited by: 1 (self-citations: 0, citations by others: 1)
To address the limitations of existing object detection algorithms under dynamic backgrounds, a moving object detection algorithm based on multiple homography constraints and Markov random fields is proposed. The algorithm builds on motion trajectories tracked across multiple frames of a video sequence and detects moving objects in two stages: trajectory separation and pixel labeling. In the trajectory separation stage, multiple homography constraints model the background motion of the video sequence, and an accumulated-confirmation strategy based on these constraints accurately separates background trajectories from foreground trajectories. In the pixel labeling stage, a spatio-temporal Markov random field model is built with superpixels as nodes; the trajectory separation information and the spatio-temporal neighborhood relations of the superpixels are modeled jointly in the energy function of the Markov random field, and minimizing this energy function yields the foreground/background label of every pixel. Compared with existing trajectory-based methods, the algorithm does not require an affine camera model, effectively solves the problem of missing trajectory-point regions caused by requiring trajectories of equal length, and can handle videos with both static and dynamic backgrounds. Tests on several public datasets show that the algorithm outperforms existing methods in trajectory separation accuracy, trajectory point density, and pixel labeling accuracy.
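A hedged sketch of the trajectory separation idea: fit a RANSAC homography between consecutive frames on the tracked points and accumulate inlier counts, so that trajectories consistently explained by the dominant motion are labeled background. The paper uses multiple homography constraints; for brevity this sketch fits a single dominant homography per frame pair, and the thresholds and accumulation rule are assumptions. The OpenCV call used is the standard cv2.findHomography.

```python
import numpy as np
import cv2

def separate_trajectories(tracks, inlier_ratio=0.6, reproj_thresh=3.0):
    """Label trajectories as background/foreground via accumulated homography inliers.

    tracks : (N, F, 2) array of N point trajectories over F frames.
    For each consecutive frame pair a RANSAC homography is fitted to all points;
    trajectories that are inliers in a large fraction of frame pairs are taken
    to follow the dominant (background) motion.
    """
    n, f, _ = tracks.shape
    inlier_votes = np.zeros(n)
    for t in range(f - 1):
        src = tracks[:, t].astype(np.float32)
        dst = tracks[:, t + 1].astype(np.float32)
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, reproj_thresh)
        if mask is not None:
            inlier_votes += mask.ravel()
    return inlier_votes / (f - 1) >= inlier_ratio

# Synthetic example: 80 background points under a global shift, 20 independently moving points.
rng = np.random.default_rng(0)
bg = rng.uniform(0, 300, (80, 1, 2)) + np.arange(10)[None, :, None] * 2.0
fg = rng.uniform(0, 300, (20, 1, 2)) + np.arange(10)[None, :, None] * np.array([8.0, -5.0])
tracks = np.concatenate([bg, fg], axis=0)
print(separate_trajectories(tracks).sum(), "trajectories labeled background")
```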

8.
Semantic labeling of Web data is a key step in Web information extraction. Conditional random fields are a classical method for sequence labeling problems that exploit sequence features. However, existing conditional random field models cannot jointly exploit existing Web database information and the logical relations between Web data elements, which limits the accuracy of semantic labeling. A constrained conditional random field (CCRF) model is therefore proposed. By introducing confidence constraints and logical constraints, the model makes effective use of existing Web database information and of the logical relations between Web data elements. To overcome the inability of the Viterbi inference used in existing conditional random field models to exploit these two kinds of constraints jointly, the model adopts integer linear programming inference, which brings both kinds of constraints into the inference process. Experiments on real datasets from several domains show that the proposed model significantly improves the performance of semantic labeling of Web data and lays a solid foundation for Web information extraction.
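The key inference change is adding constraints on top of the per-token label scores. The toy sketch below shows the effect with an exhaustive search over label sequences for a tiny record; it is a stand-in for the integer linear programming inference described above, which is what scales to realistic sequence lengths. The label set, scores, and constraints are illustrative assumptions.

```python
import itertools

LABELS = ["title", "author", "price", "other"]          # illustrative label set

def decode_with_constraints(scores, forbidden_pairs, must_contain):
    """Exhaustively pick the best label sequence subject to simple constraints.

    scores          : list of dicts, per-token label scores (higher is better).
    forbidden_pairs : set of (label_i, label_{i+1}) transitions ruled out by logic.
    must_contain    : labels a valid record must include (a confidence-style
                      constraint derived from an existing database schema).
    A real system would encode the same indicator variables and constraints as
    an integer linear program instead of enumerating.
    """
    best, best_score = None, float("-inf")
    for seq in itertools.product(LABELS, repeat=len(scores)):
        if any((a, b) in forbidden_pairs for a, b in zip(seq, seq[1:])):
            continue
        if not all(lbl in seq for lbl in must_contain):
            continue
        total = sum(s[l] for s, l in zip(scores, seq))
        if total > best_score:
            best, best_score = seq, total
    return best

# Toy record of 4 tokens with made-up CRF emission scores.
scores = [
    {"title": 2.0, "author": 0.1, "price": 0.0, "other": 0.5},
    {"title": 1.5, "author": 1.4, "price": 0.0, "other": 0.2},
    {"title": 0.1, "author": 0.3, "price": 1.2, "other": 1.1},
    {"title": 0.0, "author": 0.2, "price": 1.6, "other": 0.4},
]
print(decode_with_constraints(scores,
                              forbidden_pairs={("price", "title")},
                              must_contain={"author"}))
```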

9.
Semantic segmentation of RGB-D images is a fundamental step in scene recognition and analysis, but image segmentation methods based on conditional random fields (CRFs) cannot be applied effectively to complex and changeable real-world scenes. An interactive conditional random field method for RGB-D image semantic segmentation is therefore proposed. First, median filtering and morphological reconstruction are used to preprocess the RGB-D images captured by a Kinect camera, reducing image noise and missing data. Next, a CRF-based segmentation method automatically segments the preprocessed image, producing a rough segmentation. Finally, through an interactive platform, the user feeds labels representing correct scene information back into the conditional random field model and updates the model, improving the segmentation. Multiple experiments verify that the algorithm not only meets user needs for segmenting and recognizing complex scenes but also keeps user interaction simple, convenient, and intuitive. Compared with traditional CRF-based segmentation, the method achieves higher segmentation accuracy and better recognition results.

10.
Linear-chain conditional random field models have difficulty handling the feature relations between a Web object and its individual labeled attributes. To solve this problem, an enhanced constrained conditional random field model is proposed. Constraints are introduced into the inference process to improve the Viterbi algorithm of the linear-chain conditional random field model; the model is trained using ideas from max-margin theory to improve labeling accuracy; and the model is compared with the conditional random field model and the hierarchical conditional random field model. Experimental results show that, while improving labeling accuracy, the model effectively solves the Web object information extraction problem.
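A compact sketch of constrained Viterbi decoding: transitions ruled out by the constraints are given a score of minus infinity before the usual dynamic program runs. The emission and transition matrices here are synthetic placeholders, not a trained model, and the constraint encoding is an assumption.

```python
import numpy as np

def constrained_viterbi(emission, transition, forbidden=()):
    """Viterbi decoding with hard transition constraints.

    emission   : (T, K) log-scores of each of K labels at each of T positions.
    transition : (K, K) log-scores for label i -> label j.
    forbidden  : iterable of (i, j) label-index pairs that are disallowed.
    """
    trans = transition.copy()
    for i, j in forbidden:
        trans[i, j] = -np.inf                        # constraint: never take this transition
    T, K = emission.shape
    dp = np.full((T, K), -np.inf)
    back = np.zeros((T, K), dtype=int)
    dp[0] = emission[0]
    for t in range(1, T):
        cand = dp[t - 1][:, None] + trans            # (K, K): previous label x next label
        back[t] = cand.argmax(axis=0)
        dp[t] = cand.max(axis=0) + emission[t]
    path = [int(dp[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

rng = np.random.default_rng(2)
emission = rng.normal(size=(6, 3))
transition = rng.normal(size=(3, 3))
print(constrained_viterbi(emission, transition, forbidden=[(0, 2)]))
```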

11.
We introduce an online learning approach for multi-target tracking. Detection responses are gradually associated into tracklets in multiple levels to produce final tracks. Unlike most previous approaches which only focus on producing discriminative motion and appearance models for all targets, we further consider discriminative features for distinguishing difficult pairs of targets. The tracking problem is formulated using an online learned CRF model, and is transformed into an energy minimization problem. The energy functions include a set of unary functions that are based on motion and appearance models for discriminating all targets, as well as a set of pairwise functions that are based on models for differentiating corresponding pairs of tracklets. The online CRF approach is more powerful at distinguishing spatially close targets with similar appearances, as well as at tracking targets in the presence of camera motion. An efficient algorithm is introduced for finding an association with low energy cost. We present results on four public data sets, and show significant improvements compared with several state-of-the-art methods.

12.
We present a physics-based approach to generate 3D biped character animation that can react to dynamical environments in real time. Our approach utilizes an inverted pendulum model to online adjust the desired motion trajectory from the input motion capture data. This online adjustment produces a physically plausible motion trajectory adapted to dynamic environments, which is then used as the desired motion for the motion controllers to track in dynamics simulation. Rather than using Proportional-Derivative controllers whose parameters usually cannot be easily set, our motion tracking adopts a velocity-driven method which computes joint torques based on the desired joint angular velocities. Physically correct full-body motion of the 3D character is computed in dynamics simulation using the computed torques and dynamical model of the character. Our experiments demonstrate that tracking motion capture data with real-time response animation can be achieved easily. In addition, physically plausible motion style editing, automatic motion transition, and motion adaptation to different limb sizes can also be generated without difficulty.
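As a hedged illustration of the velocity-driven tracking idea (the single-joint setup, gain values, and integration scheme are assumptions, not the paper's controller), compare a classic PD torque with a torque computed only from the desired joint angular velocity.

```python
def pd_torque(theta, omega, theta_des, omega_des, kp=300.0, kd=30.0):
    """Classic proportional-derivative tracking torque for one joint."""
    return kp * (theta_des - theta) + kd * (omega_des - omega)

def velocity_driven_torque(omega, omega_des, kd=60.0):
    """Velocity-driven torque: track a desired joint angular velocity only,
    avoiding the need to hand-tune a stiffness gain against a position error."""
    return kd * (omega_des - omega)

# One-joint simulation step (inertia I, explicit Euler integration; all values synthetic).
I, dt = 0.8, 1.0 / 240.0
theta, omega = 0.0, 0.0
omega_des = 1.5
for _ in range(240):                                  # simulate 1 second
    tau = velocity_driven_torque(omega, omega_des)
    omega += (tau / I) * dt
    theta += omega * dt
print(f"after 1 s: theta={theta:.3f} rad, omega={omega:.3f} rad/s (target omega={omega_des})")
```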

13.
This paper presents an automatic segmentation algorithm for video frames captured by a (monocular) webcam that closely approximates depth segmentation from a stereo camera. The frames are segmented into foreground and background layers that comprise a subject (participant) and other objects and individuals. The algorithm produces correct segmentations even in the presence of large background motion with a nearly stationary foreground. This research makes three key contributions: First, we introduce a novel motion representation, referred to as "motons," inspired by research in object recognition. Second, we propose estimating the segmentation likelihood from the spatial context of motion. The estimation is efficiently learned by random forests. Third, we introduce a general taxonomy of tree-based classifiers that facilitates both theoretical and experimental comparisons of several known classification algorithms and generates new ones. In our bilayer segmentation algorithm, diverse visual cues such as motion, motion context, color, contrast, and spatial priors are fused by means of a conditional random field (CRF) model. Segmentation is then achieved by binary min-cut. Experiments on many sequences of our videochat application demonstrate that our algorithm, which requires no initialization, is effective in a variety of scenes, and the segmentation results are comparable to those obtained by stereo systems.

14.
Dynamic analysis of video sequences often relies on the segmentation of the sequence into regions of consistent motions. Approaching this problem requires a definition of which motions are regarded as consistent. Common approaches to motion segmentation usually group together points or image regions that have the same motion between successive frames (where the same motion can be 2D, 3D, or non-rigid). In this paper we define a new type of motion consistency, which is based on temporal consistency of behaviors across multiple frames in the video sequence. Our definition of consistent "temporal behavior" is expressed in terms of multi-frame linear subspace constraints. This definition applies to 2D, 3D, and some non-rigid motions without requiring prior model selection. We further show that our definition of motion consistency extends to data with directional uncertainty, thus leading to a dense segmentation of the entire image. Such segmentation is obtained by applying the new motion consistency constraints directly to covariance-weighted image brightness measurements. This is done without requiring prior correspondence estimation or feature tracking.

15.
Traditional human pose recognition suffers from data acquisition that is easily disturbed by the environment and from difficulty in handling both the similarity between human motion poses and the individual differences between performers. A human pose recognition method based on a small number of key sequence frames is proposed. First, the original motion sequence is pre-selected: an initial key-frame sequence is constructed by taking extrema of the motion trajectories, and a frame reduction algorithm then yields the final key-frame sequence. Next, a hidden Markov model is built for each pose class; the Baum-Welch algorithm is used to compute the initial probability matrix, the emission (confusion) matrix, and the state transition matrix, giving the trained models. Finally, the test data are fed in and the forward algorithm computes the probability under each model; the pose corresponding to the largest probability is selected as the recognition result. Experimental results show that the method effectively selects key frames from the original motion sequence and improves the accuracy of human pose recognition.
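A small sketch of the train-then-score pipeline described above, using the hmmlearn package with Gaussian emissions on synthetic key-frame feature vectors; the feature dimension, state count, and class names are assumptions, not the paper's settings.

```python
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)

def make_sequences(offset, n_seq=20, length=15, dim=6):
    """Synthetic key-frame feature sequences for one pose class."""
    return [offset + rng.normal(scale=0.3, size=(length, dim)) for _ in range(n_seq)]

# One HMM per pose class, trained with Baum-Welch (hmmlearn's fit).
classes = {"wave": 0.0, "bow": 2.0}
models = {}
for name, offset in classes.items():
    seqs = make_sequences(offset)
    X = np.concatenate(seqs)
    lengths = [len(s) for s in seqs]
    m = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
    m.fit(X, lengths)
    models[name] = m

# Recognition: forward-algorithm log-likelihood under each model, pick the largest.
test = make_sequences(2.0, n_seq=1)[0]               # an unseen "bow"-like sequence
scores = {name: m.score(test) for name, m in models.items()}
print(max(scores, key=scores.get), scores)
```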

16.
The segmentation of objects, and people in particular, is an important problem in computer vision. In this paper, we focus on automatically segmenting a person from challenging video sequences in which we place no constraint on camera viewpoint, camera motion or the movements of a person in the scene. Our approach uses the most confident predictions from a pose detector as a form of anchor or keyframe stick figure prediction which helps guide the segmentation of other more challenging frames in the video. Since even state-of-the-art pose detectors are unreliable on many frames (especially given that we are interested in segmentations with no camera or motion constraints), only the poses or stick figure predictions for frames with the highest confidence in a localized temporal region anchor further processing. The stick figure predictions within confident keyframes are used to extract color, position and optical flow features. Multiple conditional random fields (CRFs) are used to process blocks of video in batches, using a two-dimensional CRF for detailed keyframe segmentation as well as 3D CRFs for propagating segmentations to the entire sequence of frames belonging to batches. Location information derived from the pose is also used to refine the results. Importantly, no hand labeled training data is required by our method. We discuss the use of a continuity method that reuses learnt parameters between batches of frames and show how pose predictions can also be improved by our model. We provide an extensive evaluation of our approach, comparing it with a variety of alternative GrabCut-based methods and a prior state-of-the-art method. We also release our evaluation data to the community to facilitate further experiments. We find that our approach yields state-of-the-art qualitative and quantitative performance compared to prior work and more heuristic alternative approaches.

17.
高全胜  洪炳熔 《软件学报》2007,18(9):2356-2364
Using motion capture data, a statistical model of virtual human motion is learned in order to create realistic, controllable virtual human motion. A method is proposed in which the raw motion data are clustered to extract local dynamic motion features, called dynamic textures, which are described by linear dynamical systems; linear dynamical systems with clear meaning are selectively annotated to build an annotated dynamic texture graph. With this statistical model, highly realistic and controllable virtual human motion can be generated. Results show that the method produces smooth, natural human motion in interactive environments.

18.
This article presents a motion recognition strategy with rejection ability, which extracts the meaningful actions belonging to a given set of motion classes, categories or types and rejects input patterns whose categories are not known. During the online recognition phase, multiple one-versus-one support vector machines are aggregated with a majority voting strategy over the most recent frames in a sliding window to predict the most probable type at each time instant. Then, the corresponding index motion map is used to determine whether the predicted type should be accepted or not. The motion is considered unknown when multiple consecutive frames are rejected. As a contribution, an adjusted self-organizing map algorithm is proposed to automatically learn the index motion map for each motion class, where the map size and topology are dynamically tuned by the intrinsic characteristics of the training motion dataset. At the postprocessing step, the procedure is enhanced by an efficient key-pattern-based verification strategy, which significantly improves the recognition precision. As a further contribution, we introduce a genetic-algorithm-based learning method to automatically learn the necessary key patterns for each class based on the previously learned index motion map. We evaluate our motion recognition model in various experiments conducted on synthetic data and real data from the freely available HDM05 motion capture database. Experimental results show that the proposed strategy can not only classify motions correctly, but also identify the existence of unknown motion types.
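A hedged sketch of the sliding-window majority-voting and rejection step on top of one-versus-one SVMs (scikit-learn's SVC is one-versus-one internally). The prototype-plus-radius test below is only a crude stand-in for the learned index motion map, and the window size, features, and thresholds are assumptions.

```python
import numpy as np
from collections import Counter, deque
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic per-frame motion features for three known classes.
X_train = np.concatenate([rng.normal(loc=c * 2.0, scale=0.4, size=(200, 8)) for c in range(3)])
y_train = np.repeat([0, 1, 2], 200)
clf = SVC(kernel="rbf", decision_function_shape="ovo").fit(X_train, y_train)

# Crude stand-in for the learned index motion map: one prototype per class plus a radius.
prototypes = {c: X_train[y_train == c].mean(axis=0) for c in range(3)}
radius = {c: np.percentile(np.linalg.norm(X_train[y_train == c] - prototypes[c], axis=1), 95)
          for c in range(3)}

def online_labels(frames, window=9):
    """Sliding-window majority vote over per-frame SVM predictions, with rejection.

    A frame is rejected (-1) when it lies outside the prototype radius of the
    class selected by the vote, a simplified version of the index-motion-map check.
    """
    buf, out = deque(maxlen=window), []
    for x, pred in zip(frames, clf.predict(frames)):
        buf.append(int(pred))
        label, _ = Counter(buf).most_common(1)[0]
        known = np.linalg.norm(x - prototypes[label]) <= radius[label]
        out.append(label if known else -1)
    return out

stream = np.concatenate([rng.normal(loc=2.0, scale=0.4, size=(20, 8)),   # class 1
                         rng.normal(loc=9.0, scale=0.4, size=(20, 8))])  # unknown motion
print(online_labels(stream))
```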

19.
This paper addresses the problem of fully automated mining of public space video data, a highly desirable capability under contemporary commercial and security considerations. This task is especially challenging due to the complexity of the object behaviors to be profiled, the difficulty of analysis under the visual occlusions and ambiguities common in public space video, and the computational challenge of doing so in real time. We address these issues by introducing a new dynamic topic model, termed a Markov Clustering Topic Model (MCTM). The MCTM builds on existing dynamic Bayesian network models and Bayesian topic models, and overcomes their drawbacks in sensitivity, robustness, and efficiency. Specifically, our model profiles complex dynamic scenes by robustly clustering visual events into activities and these activities into global behaviors with temporal dynamics. A Gibbs sampler is derived for offline learning with unlabeled training data, and a new approximation to online Bayesian inference is formulated to enable dynamic scene understanding and behavior mining in new video data online in real time. The strength of this model is demonstrated by unsupervised learning of dynamic scene models for four complex and crowded public scenes, and successful mining of behaviors and detection of salient events in each.

20.
Obtaining specific motion frames from sports videos is an important step toward intelligent sports instruction. To extract the specific motion frames in a video for further analysis, a method for extracting specific motion frames from sports videos is proposed, using pose estimation and clustering. First, the HRNet pose estimation model is chosen as the basis; the model is accurate but too large, so for practical use it is lightened and combined with DARK data encoding, yielding a Small-HRNet network model that reduces the parameter count by 82.0% while keeping accuracy essentially unchanged. Then the Small-HRNet model extracts human joint points from the video, the human skeleton features of each video frame are used as clustering sample points, and the skeleton features of the standard motion frames serve as cluster centers; clustering the whole video yields its specific motion frames. Experiments on a martial arts (Wushu) dataset show that the method extracts martial arts action frames with an accuracy of 87.5%, demonstrating that it extracts such frames effectively.
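A rough sketch of the final clustering step: treat the skeleton feature of each standard action frame as a fixed cluster center and assign every video frame to its nearest center, keeping only frames within a distance threshold. The normalization and threshold are assumptions, not the paper's settings, and the joint counts are placeholders.

```python
import numpy as np

def extract_action_frames(frame_skeletons, standard_skeletons, max_dist=0.8):
    """Assign each frame's skeleton to the nearest standard pose (cluster center).

    frame_skeletons    : (T, J, 2) joint coordinates per frame (e.g. from a pose model).
    standard_skeletons : (C, J, 2) joint coordinates of the standard action poses.
    Returns a list of (frame_index, pose_index) for frames close enough to a center.
    """
    def normalize(sk):
        sk = sk - sk.mean(axis=-2, keepdims=True)              # translation invariance
        scale = np.linalg.norm(sk, axis=(-2, -1), keepdims=True) + 1e-8
        return sk / scale                                       # scale invariance
    frames = normalize(frame_skeletons).reshape(len(frame_skeletons), -1)
    centers = normalize(standard_skeletons).reshape(len(standard_skeletons), -1)
    dists = np.linalg.norm(frames[:, None, :] - centers[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)
    return [(t, int(c)) for t, c in enumerate(nearest) if dists[t, c] <= max_dist]

# Synthetic data: 100 frames of 17 joints, 3 standard poses.
rng = np.random.default_rng(0)
frames = rng.normal(size=(100, 17, 2))
standards = rng.normal(size=(3, 17, 2))
print(len(extract_action_frames(frames, standards, max_dist=1.2)), "frames matched")
```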
