19 similar documents found; search time: 109 ms
1.
2.
To address the low accuracy of existing digital-video object-removal forensics algorithms in identifying forged frames, this paper proposes a video object-removal forensics algorithm based on a dual-channel convolutional neural network. The algorithm uses a dual-channel structure to separately extract RGB features and noise features from the video's absolute frame-difference images, fuses the two with bilinear pooling, and outputs the per-frame classification through a classification layer, thereby effectively identifying tampered video frames. The RGB channel detects unnatural tampering boundaries and contrast in the absolute frame-difference image, while the noise channel detects noise inconsistencies between original and tampered regions. In addition, a preprocessing layer is added at the front of the network to amplify the forgery traces in tampered frames. Experimental results show that the proposed algorithm effectively improves the identification accuracy for forged video frames, and that dual-channel feature fusion achieves better detection performance than a traditional single-channel network structure.
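The input to both channels is the absolute frame-difference image. A minimal NumPy sketch of that preprocessing step, under the assumption that frames are `uint8` RGB arrays (the learnable preprocessing layer and the two CNN channels themselves are not reproduced, and the function name is mine):

```python
import numpy as np

def absolute_frame_difference(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Absolute frame-difference image between two consecutive RGB frames.

    Object-removal tampering tends to leave unnatural boundaries and noise
    inconsistencies in this difference image, which the RGB and noise
    channels of the network then pick up.
    """
    # Cast to a signed type before subtracting to avoid uint8 wraparound.
    a = frame_a.astype(np.int16)
    b = frame_b.astype(np.int16)
    return np.abs(a - b).astype(np.uint8)
```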
3.
4.
A video tracking and segmentation method based on the Snake active contour model. Total citations: 4 (self-citations: 3, citations by others: 1)
Based on the Snake active contour model and a fused spatio-temporal approach, and on the premise that the motion trends of adjacent frames differ little within a short time span, the video sequence is first divided into short segments of k frames each. The first two frames of each segment are selected as key frames, and motion detection automatically yields the approximate region of the moving object in these two frames. Intra-frame Snake evolution is then applied to search for the precise contour. Finally, the motion vector of the moving object's centroid between key frames is used to predict the initial contour in subsequent frames, followed by intra-frame Snake refinement, thereby segmenting the video object in every frame. Compared with traditional methods, this method avoids manually drawing the initial contour, improves the greedy Snake method in the spatial domain, and achieves high accuracy and speed. Experiments show that the method successfully establishes the correspondence of moving objects between consecutive frames and, through the improved greedy Snake method, obtains accurate segmentation results.
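The centroid-based contour prediction step above can be sketched in a few lines of NumPy: translate the previous frame's contour by the centroid motion vector to seed the Snake in the next frame (function and argument names are mine; the Snake evolution itself is omitted):

```python
import numpy as np

def predict_initial_contour(contour: np.ndarray,
                            centroid_prev: np.ndarray,
                            centroid_curr: np.ndarray) -> np.ndarray:
    """Predict the next frame's initial Snake contour.

    `contour` is an (N, 2) array of (x, y) points from the previous frame;
    the whole contour is translated by the centroid motion vector, giving
    the starting contour for intra-frame Snake refinement.
    """
    motion_vector = centroid_curr - centroid_prev
    return contour + motion_vector
```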
5.
Real-time detection and tracking of moving objects in airspace video scene surveillance. Total citations: 3 (self-citations: 0, citations by others: 3)
This paper analyzes a system model for real-time detection and tracking of moving objects in airspace video scenes and proposes a technique for real-time detection and tracking of moving video targets against a moving background. The method first estimates the global motion parameters of the background, applies compensation and correction, and then differences the two compensated adjacent frames. Hypothesis testing extracts motion regions from the difference image, a genetic algorithm determines the optimal segmentation threshold within the specified region, and the moving video object and its features are extracted. Finally, a linear predictor performs matching and tracking of the target. Experimental results on a high-speed DSP platform show that the method performs well.
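The compensate-then-difference idea can be illustrated with a deliberately simplified background model: a pure integer translation (the paper estimates full global motion parameters; the names, the `np.roll` alignment, and the fixed threshold are my assumptions):

```python
import numpy as np

def compensated_difference(prev: np.ndarray, curr: np.ndarray,
                           dx: int, dy: int, thresh: int = 25) -> np.ndarray:
    """Difference two frames after compensating global background motion.

    Shifts the previous frame by the estimated background motion (dx, dy),
    then thresholds the absolute difference with the current frame, so that
    only objects moving relative to the background survive.
    """
    compensated = np.roll(np.roll(prev, dy, axis=0), dx, axis=1)
    diff = np.abs(curr.astype(np.int16) - compensated.astype(np.int16))
    return (diff > thresh).astype(np.uint8)
```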
6.
7.
To address the complexity and heavy computational load of optical-flow algorithms for moving-object detection, a detection scheme combining inter-frame differencing with the pyramidal Lucas-Kanade (LK) optical-flow method is proposed. The method first applies inter-frame differencing to the video to obtain the image's motion region, then computes pyramidal LK optical flow only within that region, reducing the computation area and speeding up target detection. The algorithm was verified in LabVIEW on a purpose-built visual obstacle-avoidance platform, and the experimental results confirm its effectiveness.
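The core of the LK step is a small least-squares solve over image gradients. A single-window, single-pyramid-level sketch in NumPy (in the scheme above this would be run only inside the motion region found by frame differencing; a real implementation would use a pyramid, e.g. OpenCV's `calcOpticalFlowPyrLK`):

```python
import numpy as np

def lk_flow_window(prev: np.ndarray, curr: np.ndarray) -> np.ndarray:
    """Single-window Lucas-Kanade flow estimate (one pyramid level).

    Solves the least-squares system built from the brightness-constancy
    constraint Ix*u + Iy*v + It = 0 over the whole window, using spatial
    gradients (Ix, Iy) of the previous frame and the temporal gradient It.
    Returns the flow vector (u, v) in pixels per frame.
    """
    p = prev.astype(np.float64)
    c = curr.astype(np.float64)
    Iy, Ix = np.gradient(p)          # np.gradient returns (d/drow, d/dcol)
    It = c - p
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    v, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return v
```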
8.
9.
A video object segmentation method based on change detection and accumulated frame differences. Total citations: 2 (self-citations: 2, citations by others: 0)
To address the imprecise segmentation boundaries and the poorly handled occlusion and irregular-motion cases of many current video object segmentation methods, a new video object segmentation algorithm is proposed. Exploiting the human visual system's particular sensitivity to both motion (temporal gradient) and edges (spatial gradient), the algorithm combines inter-frame change detection (accumulated frame differences over a fixed temporal interval) with image edge detection. First, a t-significance test detects inter-frame changes between symmetric frames; accumulated frame differencing over a fixed interval is then computed on the detected initial change regions and integrated into a memory mask (MT). Next, an improved Kirsch edge-detection operator precisely detects all edge information in the current frame, reducing residual noise in the MT mask, and spatio-temporal filtering yields the semantic video object plane. Finally, filling and morphological operations are applied selectively to complete the video object segmentation. Experimental results verify the effectiveness and accuracy of the algorithm.
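The interval accumulation into a memory mask can be sketched as follows. This is a simplified stand-in: a fixed threshold replaces the paper's t-significance test, and a pixel enters the mask if it changed in any frame pair of the interval (names are mine):

```python
import numpy as np

def memory_mask(frames: list, thresh: int = 20) -> np.ndarray:
    """Accumulate thresholded frame differences over a fixed interval.

    Each consecutive pair contributes a binary change map; the memory mask
    (MT) marks pixels that changed at least once in the interval. Residual
    noise in this mask would subsequently be reduced with edge information.
    """
    acc = np.zeros(frames[0].shape, dtype=np.int32)
    for a, b in zip(frames[:-1], frames[1:]):
        diff = np.abs(a.astype(np.int16) - b.astype(np.int16))
        acc += (diff > thresh).astype(np.int32)
    return (acc > 0).astype(np.uint8)
```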
10.
For a stationary camera, a video motion-region detection method based on accumulated frame-difference segmentation is proposed. The method improves the traditional accumulated frame-difference approach by combining adjacent-frame differences with alternate-frame (skip-frame) differences and accumulating the resulting images. Experimental results show that the method not only suppresses video noise well, but also accurately detects relatively complete motion-region contours in video sequences with low contrast and heavy noise.
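The combined adjacent-frame and skip-frame accumulation over a three-frame window can be sketched directly (thresholding the accumulated image is a choice I make here; the abstract does not give the exact rule):

```python
import numpy as np

def combined_frame_difference(f0, f1, f2, thresh: int = 15) -> np.ndarray:
    """Combine adjacent-frame and skip-frame differences and accumulate.

    The adjacent difference |f1 - f0| and the skip difference |f2 - f0|
    are summed; accumulating the two responses reinforces genuine motion
    while isolated single-frame noise contributes only once.
    """
    f0, f1, f2 = (f.astype(np.int16) for f in (f0, f1, f2))
    acc = np.abs(f1 - f0) + np.abs(f2 - f0)
    return (acc > thresh).astype(np.uint8)
```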
11.
To achieve multi-region moving-object detection at video surveillance sites, the shortcomings of traditional motion-detection algorithms are analyzed, and a motion-detection algorithm with dynamic background updating is proposed, combining inter-frame differencing with background subtraction. The algorithm adapts to background changes, reducing the false detections they cause. A video surveillance system was built on an FPGA, on which the algorithm performs real-time moving-object detection on a 640 pixel × 480 pixel, 30 frame/s video stream. The system supports per-region detection: the size, position, and number of detection regions can be set with simple key presses. Test results show that the system detects moving objects entering the designated regions in real time and raises a flashing alarm, while occupying few resources, making it suitable for implementation on small FPGAs.
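A common form of dynamic background updating is a running-average blend of the current frame into the background model. The abstract does not spell out its update equation, so the rule below is an assumed, typical choice:

```python
import numpy as np

def update_background(background: np.ndarray, frame: np.ndarray,
                      alpha: float = 0.05) -> np.ndarray:
    """Dynamic background update via exponential running average.

    B_new = (1 - alpha) * B + alpha * F. Combined with background
    subtraction and frame differencing, this lets the model track gradual
    scene changes (lighting drift, etc.) and reduces false detections.
    """
    return (1.0 - alpha) * background.astype(np.float64) \
         + alpha * frame.astype(np.float64)
```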
12.
Video inpainting under constrained camera motion. Total citations: 1 (self-citations: 0, citations by others: 1)
Kedar A. Patwardhan, Guillermo Sapiro, Marcelo Bertalmío 《IEEE transactions on image processing》2007,16(2):545-553
A framework for inpainting missing parts of a video sequence recorded with a moving or stationary camera is presented in this work. The region to be inpainted is general: it may be still or moving, in the background or in the foreground; it may occlude one object and be occluded by another. The algorithm consists of a simple preprocessing stage and two video inpainting steps. In the preprocessing stage, we roughly segment each frame into foreground and background. We use this segmentation to build three image mosaics that help to produce time-consistent results and also improve the performance of the algorithm by reducing the search space. In the first video inpainting step, we reconstruct moving objects in the foreground that are "occluded" by the region to be inpainted. To this end, we fill the gap as much as possible by copying information from the moving foreground in other frames, using a priority-based scheme. In the second step, we inpaint the remaining hole with the background. To accomplish this, we first align the frames and directly copy when possible. The remaining pixels are filled in by extending spatial texture synthesis techniques to the spatiotemporal domain. The proposed framework has several advantages over state-of-the-art algorithms that deal with similar types of data and constraints. It permits some camera motion, is simple to implement, is fast, does not require statistical models of either the background or the foreground, and works well in the presence of rich and cluttered backgrounds; the results show no visible blurring or motion artifacts. A number of real examples taken with a consumer hand-held camera are shown supporting these findings.
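The "align, then directly copy when possible" stage of the background step can be sketched as follows. Frames are assumed pre-aligned here (the paper performs the alignment first), and the names are mine; pixels no frame can supply would be left for the spatiotemporal texture synthesis stage:

```python
import numpy as np

def fill_hole_from_frames(target: np.ndarray, hole: np.ndarray,
                          aligned_frames: list, aligned_holes: list) -> np.ndarray:
    """Direct-copy stage of background inpainting.

    For each hole pixel of the target frame, copy the value from the first
    aligned frame in which that pixel is known (i.e. outside that frame's
    own hole). Remaining pixels stay marked for texture synthesis.
    """
    out = target.copy()
    remaining = hole.astype(bool).copy()
    for frame, fhole in zip(aligned_frames, aligned_holes):
        usable = remaining & ~fhole.astype(bool)
        out[usable] = frame[usable]
        remaining &= ~usable
    return out
```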
13.
14.
《Journal of Visual Communication and Image Representation》2014,25(5):855-863
We present a bandlet-based framework for video inpainting in order to complete missing parts of a video sequence. The framework applies spatio-temporal geometric flows extracted by bandlets to reconstruct the missing data. First, a priority-based exemplar scheme enhanced by a bandlet-based patch fusion generates a preliminary inpainting result. Then, the inpainting task is completed by a 3D volume regularization algorithm which takes advantage of bandlet bases in exploiting the anisotropic regularities. The method does not need extra processes in order to satisfy visual consistency. The experimental results demonstrate the effectiveness of our proposed video completion technique.
15.
A moving-object detection method for video surveillance systems. Total citations: 1 (self-citations: 0, citations by others: 1)
A moving-object detection and recognition method based on an improved background-subtraction algorithm is proposed, intended for moving-object detection and alarming in video surveillance systems. A double-threshold method and a dynamic-threshold method effectively detect moving objects in the image. The algorithm was simulated in Matlab 7.0; experiments show that the method effectively removes moving-object shadows and background noise and accurately detects moving objects.
16.
17.
18.
A generic definition of video objects is considered: a group of pixels with temporal motion coherence. The generic video object (GVO) is a superset of the conventional video objects considered in the object segmentation literature. Because of its motion coherence, the GVO can be easily recognised by the human visual system. However, due to its arbitrary spatial distribution, the GVO cannot be easily detected by existing algorithms, which often assume the spatial homogeneousness of the video objects. The concept of extended optical flow is introduced and a dynamic programming framework for GVO detection and segmentation is developed, whose solution is given by the Viterbi algorithm. Using this dynamic programming formulation, the proposed object detection algorithm is able to discover the motion path of the GVO automatically and refine its spatial region of support progressively. In addition to object segmentation, the proposed algorithm can also be applied to video pre-processing, removing the so-called 'video mask' noise in digital videos. Experimental results show that this type of vision-assisted video pre-processing significantly improves the compression efficiency.
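The Viterbi-solved motion-path DP can be illustrated on a toy version of the problem: per-frame scores reward candidate object states, and a constant cost penalises state changes between frames (the scoring model and names are mine; the paper's extended-optical-flow formulation is far richer):

```python
import numpy as np

def viterbi_path(scores: np.ndarray, transition_cost: float = 1.0) -> list:
    """Viterbi solution of a motion-path dynamic program.

    scores[t, s] rewards placing the object in candidate state s at frame t;
    each state change between consecutive frames costs `transition_cost`.
    Returns the maximising state sequence, one state per frame.
    """
    T, S = scores.shape
    dp = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    dp[0] = scores[0]
    for t in range(1, T):
        for s in range(S):
            trans = dp[t - 1] - transition_cost * (np.arange(S) != s)
            back[t, s] = int(np.argmax(trans))
            dp[t, s] = scores[t, s] + trans[back[t, s]]
    path = [int(np.argmax(dp[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```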
19.
This paper presents a video context enhancement method for night surveillance. The basic idea is to extract and fuse the meaningful information of video sequences captured from a fixed camera under different illuminations. A unique characteristic of the algorithm is that it separates the image context into two classes and estimates them in different ways. One class contains basic surrounding-scene information and the scene model, obtained via background modeling and object tracking in the daytime video sequence. The other class is extracted from the nighttime video and includes the frequently moving regions, high-illumination regions, and high-gradient regions; the scene model and a pixel-wise difference method are used to segment these three regions. A shift-invariant discrete-wavelet-based image fusion technique is used to integrate all this context information into the final result. Experimental results demonstrate that the proposed approach provides much more detail and meaningful information for nighttime video.
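The region-based fusion idea can be shown with a deliberately plain stand-in: keep nighttime pixels inside the informative-region mask and fill the rest from the daytime scene model. The paper fuses with a shift-invariant discrete wavelet transform; this pixel-wise mask fusion only illustrates which information comes from which source (names are mine):

```python
import numpy as np

def fuse_night_context(night: np.ndarray, day: np.ndarray,
                       night_mask: np.ndarray) -> np.ndarray:
    """Simplified day/night context fusion.

    `night_mask` marks the informative nighttime regions (frequently moving,
    high-illumination, high-gradient); those pixels are taken from the
    nighttime frame, everything else from the daytime scene model.
    """
    mask = night_mask.astype(bool)
    out = day.copy()
    out[mask] = night[mask]
    return out
```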