Similar Literature
 20 similar documents found.
1.
Object tracking in complex scenes remains a difficult problem in intelligent video surveillance. Targeting two such scenarios, occlusion and target scale change, this paper proposes an improved Camshift algorithm. The algorithm handles occlusion by tracking the target in blocks. Since the template should not be updated during occlusion but should be updated when the target's scale changes, an adaptive template-update mechanism updates the template locally, avoiding erroneous updates. Finally, a geometric-relation histogram, robust to changes in target size, is introduced to help describe the moving target. Experimental results show that the algorithm adapts well to scale changes, resists occlusion, and maintains good tracking performance.
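The block-wise adaptive update described above can be sketched as follows: each template block is refreshed only when its matching score indicates it is visible, so occluded blocks keep their old appearance. All names and the 0.6 visibility threshold are illustrative assumptions, not the paper's parameters.

```python
def update_template(template_blocks, observed_blocks, scores,
                    alpha=0.1, visible_thresh=0.6):
    """Blend each visible block toward the observation; freeze occluded ones."""
    updated = []
    for tmpl, obs, score in zip(template_blocks, observed_blocks, scores):
        if score >= visible_thresh:             # block judged visible
            updated.append([(1 - alpha) * t + alpha * o
                            for t, o in zip(tmpl, obs)])
        else:                                   # likely occluded: no update
            updated.append(list(tmpl))
    return updated

# a visible block moves 10% toward the observation; an occluded one is frozen
blocks = update_template([[10.0, 10.0]], [[20.0, 20.0]], [0.9])
```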

2.
In visual tracking, target information evolves through an uncertain, nonlinear process. A central research problem is learning an accurate target template from complex dynamic data that varies over time and space, and using it to linearly represent the appearance model of candidate samples so that the tracker adapts to appearance changes caused by intrinsic or extrinsic factors. This paper proposes a novel tracking algorithm based on a multi-task mixed-noise distribution model: the candidate appearance model is cast as a multi-task linear regression problem composed of a set of target templates and a minimal reconstruction error. Classical incremental principal component analysis learns a set of low-dimensional subspace basis vectors (positive template samples) from high-dimensional data, and special negative samples are drawn online in real time to augment the target templates. The augmented templates, together with i.i.d. Gaussian-Laplacian mixed noise, then linearly fit the candidate appearance model at the current time, and the maximum likelihood between candidates and the true target is computed to capture the true target accurately. Experiments on widely used test videos show that the algorithm learns accurate target templates online and periodically updates state-specific target information, keeping the tracker in its best state; it adapts well to changing visual conditions (pose, illumination, occlusion, scale, background clutter, motion blur, etc.) and exhibits stronger robustness.
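The Gaussian-Laplacian mixed-noise fit above can be illustrated by scoring a candidate against a template: the residual is split into a dense Gaussian part and a sparse Laplacian part (via soft-thresholding), and the candidate with the lowest cost, i.e. highest likelihood, is kept. The scalar single-template fit, variable names, and the sigma/lam parameters are simplifying assumptions for illustration, not the paper's formulation.

```python
import math

def soft_threshold(x, lam):
    """Shrink x toward zero by lam; zero out small values (sparse part)."""
    return math.copysign(max(abs(x) - lam, 0.0), x)

def candidate_cost(template, candidate, sigma=1.0, lam=2.0):
    # scalar least-squares coefficient fitting candidate = c * template + noise
    c = sum(t * y for t, y in zip(template, candidate)) / \
        sum(t * t for t in template)
    cost = 0.0
    for t, y in zip(template, candidate):
        r = y - c * t
        s = soft_threshold(r, lam)   # sparse Laplacian component (outliers)
        g = r - s                    # dense Gaussian component (small noise)
        cost += g * g / (2 * sigma ** 2) + (lam / sigma ** 2) * abs(s)
    return cost

# a candidate close to the template costs less than a corrupted one
clean = candidate_cost([1.0, 2.0, 3.0], [1.1, 2.0, 3.1])
corrupt = candidate_cost([1.0, 2.0, 3.0], [1.1, 2.0, 9.0])
```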

3.
When the target is occluded or disturbed by illumination changes and other factors, traditional tracking algorithms update the correlation filter template inaccurately; errors accumulate frame by frame and eventually cause tracking failure. To address this, a robust tracking algorithm based on the VGG network is proposed. First, a correlation filter template is built from the averaged feature maps that the VGG network extracts from the local context region of the first input frame. Then, for each subsequent frame, the VGG network extracts averaged feature maps and affine-transformed averaged feature maps from the local context region; combined with the kernelized correlation filter tracker, the target position and the final target position are determined adaptively. Finally, the final averaged feature map and the final correlation filter template are updated adaptively. Experiments show that the algorithm retains high tracking accuracy and strong robustness under occlusion, illumination changes, and other disturbances.

4.
李飞彬  曹铁勇  黄辉  王文 《计算机应用》2015,35(12):3555-3559
For robust video object tracking, a generative algorithm based on sparse representation is proposed. Features are first extracted to build target and background templates, and random sampling yields a sufficient number of candidate target states. A multi-task reverse sparse representation algorithm then produces sparse coefficient vectors that form a similarity map, with the Augmented Lagrange Multiplier (ALM) algorithm introduced to solve the L1-min problem. Finally, additive pooling extracts discriminative information from the similarity map to select, as the tracking result, the candidate state most similar to the target template and least similar to the background template; the algorithm operates within a Bayesian filtering framework. To adapt to appearance changes caused by illumination variation, occlusion, cluttered backgrounds, and motion blur, a simple but effective update mechanism refreshes both the target and background templates. Qualitative and quantitative evaluations of the simulation results show improved tracking accuracy and stability over competing algorithms, and effective handling of illumination and scale changes, occlusion, and cluttered backgrounds.
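The L1-min step above (solved in the paper with ALM) can be illustrated with a simpler iterative soft-thresholding (ISTA) sketch: minimize 0.5*||A x - y||^2 + lam*||x||_1 over a tiny dictionary A. This is a stand-in for the ALM solver under assumed parameters, not the authors' implementation.

```python
import math

def ista(A, y, lam=0.1, step=0.1, iters=500):
    """Iterative soft-thresholding for the lasso problem on small dense data."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # gradient of the smooth term: A^T (A x - y)
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step followed by soft-thresholding (proximal step)
        x = [xi - step * gi for xi, gi in zip(x, g)]
        x = [math.copysign(max(abs(xi) - step * lam, 0.0), xi) for xi in x]
    return x

A = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]
y = [1.0, 0.0, 0.0]        # only the first atom explains the signal
x = ista(A, y)             # sparse: second coefficient stays exactly zero
```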

5.
Moving targets in image sequences are easily affected during tracking by complex environments such as illumination, by appearance changes, and by partial occlusion. To address this, a hybrid particle filter algorithm based on global and local information is proposed. The target's local binary pattern (LBP) texture feature is introduced into the particle filter; sparse coding of target sub-blocks makes full use of the target's local spatial information, which is combined with global information to locate the target in the current frame. The template is updated in real time during tracking, which improves the algorithm's robustness to a certain extent. Experiments show the algorithm achieves satisfactory tracking when the target is in a complex environment.

6.
In video object tracking, occlusion makes appearance cues such as target size and color unreliable, which easily causes misidentification and in turn tracking failure. To overcome this, an occlusion-resistant tracking algorithm based on target state prediction and local optical-flow scanning is proposed. Using Kalman filtering and target color features, the algorithm predicts whether each target is occluded; when a target is occluded, its information is updated with the best localization obtained by local optical-flow scanning. Simulations under DirectShow show that the algorithm tracks accurately in real time whether a moving target is occluded by the background or by other targets.
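The prediction idea above can be sketched with a constant-velocity Kalman filter per image axis: during occlusion the measurement update is skipped and the tracker "coasts" on predictions alone. The class name and the noise parameters q and r are illustrative assumptions.

```python
class Kalman1D:
    """Constant-velocity Kalman filter for one coordinate of the target."""
    def __init__(self, pos, vel=0.0, q=1e-3, r=1.0):
        self.x = [pos, vel]                    # state: position, velocity
        self.P = [[1.0, 0.0], [0.0, 1.0]]      # state covariance
        self.q, self.r = q, r

    def predict(self, dt=1.0):
        p, v = self.x
        self.x = [p + dt * v, v]
        P = self.P
        self.P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + self.q,
                   P[0][1] + dt * P[1][1]],
                  [P[1][0] + dt * P[1][1], P[1][1] + self.q]]
        return self.x[0]

    def update(self, z):
        # position-only measurement update
        s = self.P[0][0] + self.r
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s
        e = z - self.x[0]
        self.x = [self.x[0] + k0 * e, self.x[1] + k1 * e]
        P = self.P
        self.P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
                  [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][0] * P[0][1] / max(P[0][0], 1e-12)]]

kf = Kalman1D(pos=0.0)
for z in [1.0, 2.0, 3.0]:          # target moving at ~1 px/frame
    kf.predict(); kf.update(z)
coasted = kf.predict()             # occluded frame: prediction only, keeps moving
```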

7.
彭甜  周越 《微型电脑应用》2010,26(11):39-45
A particle filter tracking algorithm based on a target appearance model is proposed: within the particle filter, the proposed appearance model serves as the description of the target's visual features used to build the target template. Unlike the commonly used color histogram, the proposed appearance model retains not only the target's color information but also the spatial information of its pixels. Thanks to its continuous update mechanism, the appearance model remains an effective reference template for candidate targets in the current frame even after long tracking sequences. With these properties, the tracker remains stable under large changes in target size, orientation, and rotation angle, or when the background heavily occludes the foreground target. In multi-target applications, the algorithm combines a target-layering technique to handle the thorny problem of mutual occlusion between multiple targets. Experiments show better robustness than the color-histogram-based particle filter tracker.

8.
Research on a new correlation tracking method
During correlation tracking, grey-level differences, a degree of geometric deformation, and partial occlusion of the target inevitably exist between the live image and the reference image in a sequence. In traditional correlation matching, every pixel pair contributes equally to the matching result, so the algorithm's performance is easily degraded by image noise and partial occlusion. This paper therefore proposes a new image similarity measure from a different angle: the number of points in the target image that are close to the template image is used as the similarity for matching, which yields a sharper correlation surface and higher matching confidence. Because the live image inevitably changes during tracking, sensible template updating is the key to correlation tracking; on top of the new similarity measure, a highly effective template-correction scheme is also proposed, greatly improving the tracker's adaptability to the environment and its stability. Experimental results confirm the superiority of the method.
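The similarity measure described above can be sketched in a few lines: instead of summing per-pixel differences, count the pixels whose grey-level difference from the template falls below a threshold, so an occluded pixel costs at most one vote. The threshold value is an illustrative assumption.

```python
def close_pixel_count(template, patch, thresh=10):
    """Number of pixels whose grey levels agree within `thresh`."""
    return sum(1 for t, p in zip(template, patch) if abs(t - p) <= thresh)

tmpl = [100, 120, 140, 160]
good = [102, 118, 141, 300]   # last pixel occluded by a bright object
bad  = [ 10,  20,  30,  40]   # unrelated patch
```

The occluded pixel changes the count by only one, whereas it would dominate a sum-of-squared-differences score; this is why the resulting correlation surface is sharper under partial occlusion.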

9.
To handle target occlusion in visual tracking, a tracking algorithm based on sparse representation is proposed. The tracked target is described by sparse representation over a target dictionary and an occlusion dictionary built from Gabor features, and the sparse coefficients are obtained by l1-norm minimization. Tracking proceeds within a particle filter framework; occlusion is judged from the sparse coefficients, and the reconstruction residual is used to update particle weights under occlusion. When updating the target template, a reliability measure is introduced to suppress template drift. Experiments show the algorithm effectively tracks moving targets under occlusion and is robust to pose and illumination changes.

10.
Online boosting trackers tend to "drift" and lose the target when its appearance changes sharply or it is occluded. A scale-adaptive online robust tracking algorithm is proposed. A weight image is built from the target's grey-level or color histogram; moment analysis of the weight image enables adaptive adjustment of the target scale. The algorithm also introduces a semi-supervised learning strategy, which effectively prevents the tracking failures caused by online learning. Experiments show the algorithm handles occlusion and changes in target appearance and scale robustly, improving success rate and robustness over the EM-shift, MIL, and SPT algorithms.
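The moment analysis above can be sketched as follows: the zeroth moment of the weight image gives total weight, the first moments give the centroid, and the second central moments give the spread (scale) along each axis. The example weight images and names are illustrative.

```python
import math

def scale_from_moments(weights):
    """Estimate per-axis scale from moments of a 2D weight image."""
    m00 = sum(w for row in weights for w in row)
    cx = sum(x * w for row in weights for x, w in enumerate(row)) / m00
    cy = sum(y * w for y, row in enumerate(weights) for w in row) / m00
    var_x = sum((x - cx) ** 2 * w for row in weights
                for x, w in enumerate(row)) / m00
    var_y = sum((y - cy) ** 2 * w for y, row in enumerate(weights)
                for w in row) / m00
    return math.sqrt(var_x), math.sqrt(var_y)

sym = [[0, 1, 0], [1, 4, 1], [0, 1, 0]]   # compact, symmetric target
wide = [[1, 1, 1, 1, 1]]                  # horizontally stretched target
sx_sym, sy_sym = scale_from_moments(sym)
sx_wide, _ = scale_from_moments(wide)     # larger horizontal scale
```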

11.
Lucas-Kanade object tracking with sparse representation
A new tracking algorithm is proposed that applies sparse representation within the LK (Lucas-Kanade) image registration framework. The target's state parameters are solved by minimizing the L1 norm of the registration error, achieving accurate tracking. Two appearance models of the target are maintained simultaneously: a dynamic dictionary and a static template, where the dynamic model describes target appearance through the sparse representation of the dynamic dictionary. To counter the tracking drift caused by continual updates of the dynamic dictionary, a two-stage iteration mechanism is adopted, with the two stages using the dynamic dictionary and the static template respectively. Extensive experiments show the algorithm effectively copes with appearance changes, partial occlusion, and illumination changes while remaining close to real time.

12.
Fast occluded object tracking by a robust appearance filter
We propose a new method for object tracking in image sequences using template matching. To update the template, appearance features are smoothed temporally by robust Kalman filters, one for each pixel. The resistance of the resulting template to partial occlusions enables the accurate detection and handling of more severe occlusions. Abrupt changes of lighting conditions can also be handled, especially when photometric invariant color features are used. The method has only a few parameters and is computationally fast enough to track objects in real time.
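A minimal sketch of the per-pixel robust temporal smoothing idea: here the robust Kalman filter is reduced to a constant-gain estimator with an innovation gate, so measurements whose innovation is too large (e.g. an occluding object passing over the pixel) are ignored and the template resists partial occlusion. The gain and gate values are illustrative assumptions, not the paper's parameters.

```python
def robust_pixel_update(estimate, measurement, gain=0.3, gate=30.0):
    """Smooth one template pixel over time, rejecting outlier measurements."""
    innovation = measurement - estimate
    if abs(innovation) > gate:      # outlier: likely occlusion, skip update
        return estimate
    return estimate + gain * innovation

pixel = 100.0
for z in [102.0, 98.0, 200.0, 101.0]:   # 200.0 is an occluding pixel
    pixel = robust_pixel_update(pixel, z)
# the estimate stays near the true appearance (~100) despite the occlusion
```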

13.
This paper proposes a robust tracking method that combines appearance modeling and sparse representation. In this method, the appearance of an object is modeled by multiple linear subspaces. Then, within the sparse representation framework, we construct a similarity measure to evaluate the distance between a target candidate and the learned appearance model. Finally, tracking is achieved by Bayesian inference, in which a particle filter is used to estimate the target state sequentially over time. With the tracking result, the learned appearance model is updated adaptively. The combination of appearance modeling and sparse representation makes our tracking algorithm robust to most possible target variations due to illumination changes, pose changes, deformations, and occlusions. Theoretical analysis and experiments compared with state-of-the-art methods demonstrate the effectiveness of the proposed algorithm.

14.
An important problem in tracking methods is how to manage changes in object appearance, such as illumination changes, partial/full occlusion, scale, and pose variation during the tracking process. In this paper, we propose an occlusion-free object tracking method together with a simple adaptive appearance model. The proposed appearance model, which is updated at the end of each time step, includes three components: the first is a fixed template of the target object, the second captures rapid changes in object appearance, and the third maintains slow changes accumulated along the object's path. The proposed tracking method not only detects and handles occlusion but is also robust against changes in the object appearance model. It is based on the particle filter, a robust tracking technique that handles non-linear and non-Gaussian problems. We have also employed a meta-heuristic approach called the Modified Galaxy based Search Algorithm (MGbSA) to reinforce finding the optimum state in the particle filter state space. The proposed method was applied to some benchmark videos, and its results were satisfactory and better than those of related works.

15.
In this paper, a robust and efficient visual tracking method through the fusion of several distributed adaptive templates is proposed. It is assumed that the target object is initially localized either manually or by an object detector at the first frame. The object region is then partitioned into several non-overlapping subregions. The new location of each subregion is found by an EM-like gradient-based optimization algorithm. The proposed localization algorithm is capable of simultaneously optimizing several possible solutions in a probabilistic framework. Each possible solution is an initializing point for the optimization algorithm, which improves the accuracy and reliability of the proposed gradient-based localization method in the presence of local extrema. Moreover, each subregion is defined by two adaptive templates, named the immediate and delayed templates, to solve the "drift" problem. The immediate template is updated by short-term appearance changes, whereas the delayed template models long-term appearance variations. The combination of short-term and long-term appearance modeling can therefore solve the template tracking drift problem. At each tracking step, the new location of the object is estimated by fusing the tracking results of the subregions. This fusion method is based on the local and global properties of the object motion, increasing the robustness of the proposed tracking method against outliers, shape variations, and scale changes. The accuracy and robustness of the proposed tracking method are verified by several experimental results. The results also show the superior efficiency of the proposed method by comparing it to several state-of-the-art trackers as well as manually labeled "ground truth" data.

16.
This paper presents an approach for object tracking based on multiple disjoint patches. Initially, the target is subdivided into a set of rectangular patches, and each patch is represented parametrically by the mean vector and covariance matrix computed from a set of feature vectors that represent each pixel of the target. Each patch is tracked independently based on the Bhattacharyya distance, and the displacement of the whole template is obtained using a Weighted Vector Median Filter (WVMF), which reduces the influence of incoherently tracked patches. To smooth the obtained trajectory and also cope with short-term total occlusions, a predicted displacement vector based on the motion of the target in the previous frames is also used, and an updating scheme is applied to deal with appearance changes of the template. Experimental results indicate that the proposed scheme is robust to partial and short-time total occlusions, presenting a good compromise between accuracy and execution time when compared to other approaches.
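The two ingredients above can be sketched briefly: the Bhattacharyya coefficient between two normalized histograms measures their overlap (1 for identical), with sqrt(1 - bc) as the distance each patch minimizes; and a weighted vector median picks, among the per-patch displacements, the vector minimizing the weighted sum of distances to the others, down-weighting incoherent patches. The example vectors and weights are illustrative assumptions.

```python
import math

def bhattacharyya_distance(h1, h2):
    """Distance between two normalized histograms (0 = identical)."""
    bc = sum(math.sqrt(a * b) for a, b in zip(h1, h2))
    return math.sqrt(max(1.0 - bc, 0.0))

def wvmf(vectors, weights):
    """Weighted vector median: the input vector minimizing the weighted
    sum of Euclidean distances to all the vectors."""
    def cost(v):
        return sum(w * math.dist(v, u) for u, w in zip(vectors, weights))
    return min(vectors, key=cost)

# fuse per-patch displacement estimates; the outlier patch (9, 9) is rejected
fused = wvmf([(1.0, 0.0), (1.1, 0.1), (9.0, 9.0)], [1.0, 1.0, 0.2])
```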

17.
Because infrared images have low contrast, lack color information, and have a small dynamic range of grey levels, target tracking based on infrared imaging has long been a difficult and important topic in this field. A tracking algorithm fusing the grey-level kernel histogram and SURF (speeded up robust features) is proposed. In the first frame, the target template is described by both the grey-level kernel histogram and SURF features; in each subsequent frame, the mean-shift algorithm quickly finds a local optimum. Considering that the grey-level histogram carries little information and tracking error accumulates gradually, an improved SURF point-matching algorithm estimates the target's scale and center position in the current frame, corrects the accumulated error in time, prevents tracking-window drift, and adaptively resizes the tracking window; in addition, the target template is updated, so the target is finally tracked accurately. Real-scene experiments show that the algorithm tracks the target stably under large scale changes in appearance and near objects of similar appearance, with strong robustness and real-time performance.

18.
Objective: In visual object tracking, the target is often affected by various complex disturbances from itself or the scene, which makes correctly capturing the target information of interest highly challenging. In particular, the template data used by a tracker are mostly learned online, and their reliability directly affects how precisely the appearance model of candidate samples can be represented. Addressing target template learning and candidate appearance representation, this paper proposes a novel visual tracking algorithm built on an effective template-organization strategy and a more precise model-representation technique. Method: Within the tracking framework, the candidate appearance model is cast as a linear regression problem composed of a set of composite templates and a minimal reconstruction error. Classical incremental principal component analysis first learns a set of low-dimensional subspace basis vectors (positive template samples) from online high-dimensional data; guided by the previous tracking result, special negative samples are then drawn online in real time to augment the template data. The newly organized template basis vectors, together with i.i.d. Gaussian-Laplacian mixed noise, linearly fit the candidate appearance model, and finally the maximum likelihood between candidates and the true target is estimated so that the tracker accurately captures the true target state at each time step. Result: Experiments on widely used benchmark sequences show that, in template learning and candidate appearance representation, the algorithm reflects the complex changes of target state in video scenes more accurately and effectively than comparable methods, mitigates model degradation and tracking drift under various uncertain disturbances, and achieves equal or higher tracking accuracy than several strong algorithms of the same kind. Conclusion: The algorithm learns accurate target templates online and updates them periodically, adapting well to visual changes caused by intrinsic or extrinsic factors (pose, illumination, occlusion, scale, background clutter, motion blur, etc.) and keeping the tracker in its best state, so the candidate appearance representation stays reliable and accurate and the tracker exhibits more robust performance.

19.
The Lucas–Kanade tracker (LKT) is a commonly used method to track target objects over 2D images. The key principle behind the object tracking of an LKT is to warp the object appearance so as to minimize the difference between the warped object’s appearance and a pre-stored template. Accordingly, the 2D pose of the tracked object in terms of translation, rotation, and scaling can be recovered from the warping. To extend the LKT for 3D pose estimation, a model-based 3D LKT assumes a 3D geometric model for the target object in the 3D space and tries to infer the 3D object motion by minimizing the difference between the projected 2D image of the 3D object and the pre-stored 2D image template. In this paper, we propose an extended model-based 3D LKT for estimating 3D head poses by tracking human heads on video sequences. In contrast to the original model-based 3D LKT, which uses a template with each pixel represented by a single intensity value, the proposed model-based 3D LKT exploits an adaptive template with each template pixel modeled by a continuously updated Gaussian distribution during head tracking. This probabilistic template modeling improves the tracker’s ability to handle temporal fluctuation of pixels caused by continuous environmental changes such as varying illumination and dynamic backgrounds. Due to the new probabilistic template modeling, we reformulate the head pose estimation as a maximum likelihood estimation problem, rather than the original difference minimization procedure. Based on the new formulation, an algorithm to estimate the best head pose is derived. The experimental results show that the proposed extended model-based 3D LKT achieves higher accuracy and reliability than the conventional one does. Particularly, the proposed LKT is very effective in handling varying illumination, which cannot be well handled in the original LKT.
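The probabilistic template above can be sketched per pixel: each template pixel keeps a running Gaussian (mean, variance) updated online, and a new observation is scored by its log-likelihood under that Gaussian, which is what the maximum-likelihood formulation maximizes over poses. The update rate alpha and the example values are illustrative assumptions.

```python
import math

def update_gaussian(mean, var, obs, alpha=0.05):
    """Exponential running update of one template pixel's Gaussian model."""
    new_mean = (1 - alpha) * mean + alpha * obs
    new_var = (1 - alpha) * var + alpha * (obs - new_mean) ** 2
    return new_mean, new_var

def log_likelihood(mean, var, obs):
    """Log-likelihood of an observed intensity under the pixel's Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (obs - mean) ** 2 / var)

mean, var = 100.0, 25.0
for obs in [101.0, 99.0, 102.0, 98.0]:     # stable pixel under mild noise
    mean, var = update_gaussian(mean, var, obs)

# a pixel near the learned mean is far more likely than a distant one
near = log_likelihood(mean, var, 100.0)
far = log_likelihood(mean, var, 180.0)
```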

20.
A template-correction strategy for affine-model object tracking
During tracking, the target's pose changes continuously, so the tracking template must be corrected accordingly. To track such dynamically changing targets, a new method for dynamically updating the template is proposed. Building on affine-model tracking, the method first performs moving-target detection on the tracked image sequence, then judges the target's pose changes from the detection results, and finally corrects the template according to given rules. Experiments show that the method adapts effectively to changes in the target.
