Similar Documents
20 similar documents retrieved.
1.
李飞彬  曹铁勇  黄辉  王文 《计算机应用》2015,35(12):3555-3559
For robust visual target tracking, a generative algorithm based on sparse representation is proposed. First, features are extracted to build target and background templates, and random sampling is used to obtain a sufficient number of candidate target states. Then, a multi-task reverse sparse representation algorithm yields sparse coefficient vectors from which a similarity map is constructed; the Augmented Lagrange Multiplier (ALM) method is introduced to solve the L1-minimization problem. Finally, an additive pooling operation extracts discriminative information from the similarity map, and the candidate state with the highest similarity to the target templates and the lowest similarity to the background templates is selected as the tracking result; the algorithm is implemented within a Bayesian filtering framework. To adapt to appearance changes during tracking caused by illumination variation, occlusion, cluttered backgrounds, and motion blur, a simple yet effective update mechanism is devised to refresh the target and background templates. Qualitative and quantitative evaluations of the simulation results show that, compared with other trackers, the proposed algorithm achieves improved tracking accuracy and stability, and effectively handles challenging scenes with illumination and scale changes, occlusion, and cluttered backgrounds.
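The ALM machinery used for the L1-min subproblem can be illustrated with a generic augmented-Lagrangian (ADMM) lasso sketch. This is a minimal stand-in for the paper's multi-task reverse sparse representation step; the problem sizes and parameter values below are illustrative assumptions, not the tracker's settings.

```python
import numpy as np

def soft_threshold(v, k):
    """Element-wise soft-thresholding, the proximal operator of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam=0.05, rho=1.0, n_iter=200):
    """Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 with an augmented-Lagrangian
    (ADMM) scheme: split x/z, alternate exact minimization and a dual update."""
    m, n = A.shape
    z = np.zeros(n)
    u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    inv = np.linalg.inv(AtA + rho * np.eye(n))   # cached for the x-update
    for _ in range(n_iter):
        x = inv @ (Atb + rho * (z - u))          # quadratic subproblem
        z = soft_threshold(x + u, lam / rho)     # L1 proximal step
        u = u + x - z                            # scaled multiplier update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))                # hypothetical dictionary
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]                     # sparse ground truth
b = A @ x_true
x_hat = admm_lasso(A, b)
```

With a small regularization weight and noiseless data, the recovered vector is sparse and close to the ground truth, which is the behavior the similarity map construction relies on.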

2.
This paper presents a unified approach to solve different bilinear factorization problems in computer vision in the presence of missing data in the measurements. The problem is formulated as a constrained optimization where one of the factors must lie on a specific manifold. To achieve this, we introduce an equivalent reformulation of the bilinear factorization problem that decouples the core bilinear aspect from the manifold specificity. We then tackle the resulting constrained optimization problem via Augmented Lagrange Multipliers. The strength and novelty of our approach lie in the fact that this framework can seamlessly handle different computer vision problems. The algorithm is such that only a projector onto the manifold constraint is needed. We present experiments and results for some popular factorization problems in computer vision such as rigid, non-rigid, and articulated Structure from Motion, photometric stereo, and 2D-3D non-rigid registration.

3.
Hopfield-type networks convert a combinatorial optimization to a constrained real optimization and solve the latter using the penalty method. There is a dilemma with such networks: when tuned to produce good-quality solutions, they can fail to converge to valid solutions; and when tuned to converge, they tend to give low-quality solutions. This paper proposes a new method, called the augmented Lagrange-Hopfield (ALH) method, to improve Hopfield-type neural networks in both the convergence and the solution quality in solving combinatorial optimization. It uses the augmented Lagrange method, which combines both the Lagrange and the penalty methods, to effectively resolve the dilemma. Experimental results on the travelling salesman problem (TSP) show the superiority of the ALH method over existing Hopfield-type neural networks in convergence and solution quality. For the ten-city TSPs, ALH finds the known optimal tour with a 100% success rate over 1000 runs with different random initializations. For larger problems, it also finds remarkably better solutions than the compared methods while always converging to valid tours.
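The augmented Lagrange method that ALH builds on can be sketched on a toy equality-constrained quadratic program (the Hopfield TSP energy itself is omitted); the objective matrix, penalty weight, and constraint below are illustrative assumptions.

```python
import numpy as np

def augmented_lagrange(Q, mu=10.0, n_iter=50):
    """Minimize 0.5*x^T Q x subject to sum(x) = 1 with the augmented Lagrange
    method: L(x, lam) = f(x) + lam*c(x) + (mu/2)*c(x)^2, where c(x) = sum(x) - 1.
    Each inner problem is quadratic, so it is solved exactly; the multiplier
    is then updated as lam <- lam + mu*c(x)."""
    n = Q.shape[0]
    ones = np.ones(n)
    lam = 0.0
    x = np.zeros(n)
    for _ in range(n_iter):
        # stationarity of L in x: (Q + mu*11^T) x = (mu - lam) * 1
        x = np.linalg.solve(Q + mu * np.outer(ones, ones), (mu - lam) * ones)
        lam += mu * (x.sum() - 1.0)     # multiplier (dual ascent) step
    return x, lam

# For Q = I the analytic solution spreads mass evenly: x_i = 1/n.
x, lam = augmented_lagrange(np.eye(4))
```

Unlike a pure penalty method, the multiplier update lets the constraint be satisfied to high accuracy with a moderate, fixed penalty weight, which is exactly the convergence/quality trade-off the ALH method exploits.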

4.
A high-level multi-voltage, low-power scheduling method under timing and resource constraints is proposed. Using the method of Lagrange multipliers, slack time is distributed across the operation nodes of the datapath, effectively reducing the energy consumption of the designed circuit. During scheduling and adjustment, the method can handle nodes on both critical and non-critical paths simultaneously. Experimental results demonstrate the effectiveness of the method for power optimization.

5.
In this paper, an optical tracking system is introduced for use in augmented reality (AR). Only a few AR systems currently exist which use stereo-vision techniques to estimate the viewing pose. The presented system uses binocular images and retroreflective markers in order to speed up the tracking process and make it more precise and robust. Both the camera calibration and the pose estimation technique are presented. This new optical tracking system, which is based on standard PC hardware, is even suitable for portable use. In addition, the system is evaluated with regard to its pixel accuracy and depth measurements. This paper shows that the presented computer vision techniques are a good choice for creating a flexible, accurate, and easy-to-use tracking system.

6.
This paper considers the design of robust l1 estimators based on multiplier theory (which is intimately related to mixed structured singular value theory) and the application of robust l1 estimators to robust fault detection. The key to estimator-based, robust fault detection is to generate residuals which are robust against plant uncertainties and external disturbance inputs, which in turn requires the design of robust estimators. Specifically, the Popov-Tsypkin multiplier is used to develop an upper bound on an l1 cost function over an uncertainty set. The robust l1 estimation problem is formulated as a parameter optimization problem in which the upper bound is minimized subject to a Riccati equation constraint. A continuation algorithm that uses quasi-Newton BFGS (the algorithm of Broyden, Fletcher, Goldfarb and Shanno) corrections is developed to solve the minimization problem. The estimation algorithm has two stages. The first stage solves a mixed-norm H2/l1 estimation problem. In particular, it is initialized with a steady-state Kalman filter and, by varying a design parameter from 0 to 1, the Kalman filter is deformed to an l1 estimator. In the second stage the l1 estimator is made robust. The robust l1 estimation framework is then applied to the robust fault detection of dynamic systems. The results are applied to a simplified longitudinal flight control system. It is shown that the robust fault detection procedure based on the robust l1 estimation methodology proposed in this paper can reduce false alarm rates.

7.
Conditions under which the Lagrange multiplier method (LMM) can be applied to optimization of the class of quadratic systems are considered. It is shown that quadratic systems can be optimized by LMM under rather general, and to some extent constructive, assumptions. The applicability of the results is illustrated with examples.

8.
In this paper, we present new solutions for the problem of estimating the camera pose using a particle filtering framework. The proposed approach is suitable for real-time augmented reality (AR) applications in which the camera is held by the user. This work demonstrates that particle filtering improves the robustness of the tracking compared with existing approaches, such as those based on the Kalman filter. We propose a tracking framework for both point and line features, in which the particle filter is used to compute the posterior density of the camera 3D motion parameters. We also analyze the sensitivity of our technique when outliers are present in the match data. Outliers arise frequently due to incorrect correspondences, which occur because of either image noise or occlusion. Results from real data in an augmented reality setup are then presented, demonstrating the efficiency and robustness of the proposed method.
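A bootstrap (SIR) particle filter, the standard form of the particle filtering referred to above, can be sketched for a scalar state. The random-walk dynamics, noise levels, and static true pose below are simplifying assumptions, not the paper's 3D camera-motion model.

```python
import numpy as np

def particle_filter(observations, n_particles=1000, proc_std=0.05,
                    obs_std=0.5, seed=0):
    """Bootstrap (SIR) particle filter for a scalar state observed in noise:
    predict with random-walk dynamics, weight by the Gaussian likelihood of
    the observation, then resample proportionally to the weights."""
    rng = np.random.default_rng(seed)
    particles = rng.uniform(0.0, 10.0, n_particles)   # diffuse prior
    for y in observations:
        particles += rng.normal(0.0, proc_std, n_particles)   # predict
        w = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)   # likelihood
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)       # resample
        particles = particles[idx]
    return particles.mean()                                   # posterior mean

rng = np.random.default_rng(42)
true_state = 5.0
obs = true_state + rng.normal(0.0, 0.5, size=50)   # noisy measurements
estimate = particle_filter(obs)
```

The same predict/weight/resample loop generalizes to the 6-DoF pose case by making each particle a vector of camera motion parameters.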

9.
Interactive image segmentation has remained an active research topic in image processing and graphics, since the user intention can be incorporated to enhance the performance. It can be employed on mobile devices, which now accept user interaction as input, enabling various applications. Most interactive segmentation methods assume that the initial labels are correctly and carefully assigned to some parts of the regions to segment. Inaccurate labels, such as foreground labels placed in background regions, lead to incorrect segments even when only a few labels are wrong, which is problematic for practical usage such as mobile applications. In this paper, we present an interactive segmentation method that is robust to inaccurate initial labels (scribbles). To address this problem, we propose a structure-aware labeling method using occurrence and co-occurrence probability (OCP) of color values for each initial label in a unified framework. Occurrence probability captures a global distribution of all color values within each label, while co-occurrence probability encodes a local distribution of color values around the label. We show that nonlocal regularization together with the OCP makes image segmentation robust to inaccurately assigned labels and alleviates a small-cut problem. We analyze theoretical relations of our approach to other segmentation methods. Extensive experiments with synthetic and manual labels show that our approach outperforms the state of the art.

10.

Digital watermarking is a way to protect the intellectual property of digital media. Among different algorithms, Quantization Index Modulation (QIM) is one of the popular methods in designing a watermark system. In this paper, a sample area quantization method is proposed for robust watermarking of digital images. First, the samples of the host signal form a polygon, and the low-frequency wavelet coefficients of the carrier image are considered as the host samples. A watermark digit is embedded by quantizing the area of the polygon. Then, in order to minimize the distortion, the watermarked samples are obtained as close as possible to the host samples by solving an optimization problem while maintaining the quantized area at its fixed value. The optimization problem is solved using the gradient descent method. Finally, a maximum likelihood detector is designed to extract the watermark digits, assuming a Gaussian distribution for the host signal samples. The performance of the proposed method is theoretically obtained in terms of error probability in the presence of additive white Gaussian noise. Theoretical results are verified using simulation with artificial signals. The proposed method is compared to the state-of-the-art method under different attacks including noise addition, JPEG compression, filtering, and geometrical attacks. The results confirm that the proposed method outperforms the other ones in terms of error probability against different attacks. The results also show that in the trade-off between robustness, distortion and capacity, by decreasing capacity the two other factors could be improved simultaneously.
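The QIM principle the paper builds on can be sketched in its basic scalar form (two interleaved quantization lattices offset by half a step). This illustrates generic QIM embedding and minimum-distance extraction, not the paper's polygon-area quantizer; the step size and noise level are assumptions.

```python
import numpy as np

def qim_embed(samples, bits, delta=1.0):
    """Embed one bit per sample by quantizing onto one of two interleaved
    lattices offset by delta/2 (scalar Quantization Index Modulation)."""
    offsets = np.asarray(bits) * (delta / 2.0)
    return delta * np.round((samples - offsets) / delta) + offsets

def qim_extract(received, delta=1.0):
    """Pick the bit whose lattice has a point nearest to each received sample."""
    d0 = np.abs(received - delta * np.round(received / delta))
    shifted = received - delta / 2.0
    d1 = np.abs(shifted - delta * np.round(shifted / delta))
    return (d1 < d0).astype(int)

rng = np.random.default_rng(1)
host = rng.normal(0.0, 4.0, size=64)        # stand-in for host coefficients
bits = rng.integers(0, 2, size=64)
marked = qim_embed(host, bits)
noisy = marked + rng.normal(0.0, 0.05, size=64)   # noise well below delta/4
```

Extraction is exact on the clean signal and remains correct as long as the noise stays below half the distance between the two lattices (delta/4).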


11.
Image authentication is becoming very important for certifying data integrity. A key issue in image authentication is the design of a compact signature that contains sufficient information to detect illegal tampering yet is robust under allowable manipulations. In this paper, we recognize that most permissible operations on images are global distortions like low-pass filtering and JPEG compression, whereas illegal data manipulations tend to be localized distortions. To exploit this observation, we propose an image authentication scheme where the signature is the result of an extremely low-bit-rate content-based compression. The content-based compression is guided by a space-variant weighting function whose values are higher in the more important and sensitive region. This spatially dependent weighting function determines a weighted norm that is particularly sensitive to the localized distortions induced by illegal tampering. It also gives a better compactness compared to the usual compression schemes that treat every spatial region as being equally important. In our implementation, the weighting function is a multifovea weighted function that resembles the biological foveated vision system. The foveae are salient points determined in the scale-space representation of the image. The desirable properties of multifovea weighted function in the wavelet domains fit nicely into our scheme. We have implemented our technique and tested its robustness and sensitivity for several manipulations.

12.
To resolve the conflict between robustness and imperceptibility in existing digital watermarking, a zero-watermarking algorithm for images based on non-negative matrix factorization (NMF) and the discrete wavelet transform (DWT) is designed. The original image is partitioned into non-overlapping blocks, and a 3-level wavelet decomposition of each block yields its low-frequency approximation component. NMF is then applied to the detail components to obtain a basis matrix and a coefficient matrix that approximately represent each block. The coefficient matrix is quantized into a feature vector, and the copyright information of the original image is derived from an operation combining the feature vector with the watermark. Experimental results show that the scheme is highly robust against common signal processing, while the use of a secret key ensures the security of the algorithm.
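The zero-watermark register/verify logic can be sketched with simple block-mean features standing in for the paper's DWT + NMF features; the block size, image size, and watermark length below are assumptions made for illustration.

```python
import numpy as np

def block_features(image, block=8):
    """Binarize per-block means against their median to get a feature vector.
    (The paper derives features from 3-level DWT plus NMF; block means are a
    simple stand-in that keeps the zero-watermark logic intact.)"""
    h, w = image.shape
    means = image[:h - h % block, :w - w % block] \
        .reshape(h // block, block, w // block, block).mean(axis=(1, 3)).ravel()
    return (means > np.median(means)).astype(int)

def register(image, watermark_bits):
    """Zero-watermarking embeds nothing: the copyright key is features XOR mark."""
    return block_features(image) ^ watermark_bits

def verify(image, key):
    """Recover the watermark by XORing the suspect image's features with the key."""
    return block_features(image) ^ key

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
mark = rng.integers(0, 2, size=64)          # 8x8 grid of blocks -> 64 bits
key = register(img, mark)
noisy_img = img + rng.normal(0.0, 5.0, size=img.shape)
```

Because nothing is embedded, imperceptibility is perfect by construction; robustness depends entirely on how stable the features are under attack, which is why the paper invests in DWT + NMF features rather than raw statistics.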

13.
We propose a fictitious domain method where the mesh is cut by the boundary. The primal solution is computed only up to the boundary; the solution itself is defined also by nodes outside the domain, but the weak finite element form only involves those parts of the elements that are located inside the domain. The multipliers are defined as being element-wise constant on the whole (including the extension) of the cut elements in the mesh defining the primal variable. Inf–sup stability is obtained by penalizing the jump of the multiplier over element faces. We consider the case of a polygonal domain with possibly curved boundaries. The method has optimal convergence properties.

14.
An approximate F-form of the Lagrange multiplier (LM) test for serial correlation in dynamic regression models is compared with three bootstrap tests. In one bootstrap procedure, residuals from restricted estimation under the null hypothesis are resampled. The other two bootstrap tests use residuals from unrestricted estimation under an alternative hypothesis. A fixed autocorrelation alternative is assumed in one of the two unrestricted bootstrap tests and the other is based upon a Pitman-type sequence of local alternatives. Monte Carlo experiments are used to estimate rejection probabilities under the null hypothesis and in the presence of serial correlation.
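The restricted residual-resampling bootstrap can be sketched with a simplified LM-type statistic (n·r1², based on the lag-1 autocorrelation of the residuals, rather than the paper's approximate F-form); the data-generating process below is an illustrative assumption.

```python
import numpy as np

def lm_stat(resid):
    """Simplified LM statistic for first-order serial correlation: n * r1^2,
    where r1 is the lag-1 autocorrelation of the residuals."""
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    return len(resid) * r1 ** 2

def bootstrap_p_value(y, x, n_boot=200, seed=0):
    """Restricted residual bootstrap: regenerate the data under the null of no
    serial correlation by resampling the OLS residuals i.i.d., and compare the
    observed statistic against the bootstrap distribution."""
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    observed = lm_stat(resid)
    stats = []
    for _ in range(n_boot):
        y_b = X @ beta + rng.choice(resid, size=len(resid), replace=True)
        r_b = y_b - X @ np.linalg.lstsq(X, y_b, rcond=None)[0]
        stats.append(lm_stat(r_b))
    return np.mean(np.asarray(stats) >= observed)

rng = np.random.default_rng(3)
n = 200
x = rng.standard_normal(n)
u = np.zeros(n)
for t in range(1, n):                    # strongly autocorrelated errors
    u[t] = 0.9 * u[t - 1] + rng.standard_normal()
y = 1.0 + 2.0 * x + u
p = bootstrap_p_value(y, x)
```

With strongly autocorrelated errors the observed statistic dwarfs the bootstrap draws, so the test rejects the null.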

15.
Automated segmentation of images has been considered an important intermediate processing task to extract semantic meaning from pixels. We propose an integrated approach for image segmentation based on a generative clustering model combined with coarse shape information and robust parameter estimation. The sensitivity of segmentation solutions to image variations is measured by image resampling. Shape information is included in the inference process to guide ambiguous groupings of color and texture features. Shape and similarity-based grouping information is combined into a semantic likelihood map in the framework of Bayesian statistics. Experimental evidence shows that semantically meaningful segments are inferred even when image data alone gives rise to ambiguous segmentations.

16.
Multimedia Tools and Applications - The perceptual image hashing technique uses the appearance of the digital media object as perceived by the human eye and generates a fixed-size hash value. This hash value works as...

17.
Multimedia Tools and Applications - Nowadays, due to widespread usage of the Internet, digital contents are distributed quickly and inexpensively throughout the world. Watermarking techniques can...

18.
In this paper, a geometrically invariant color image watermarking method using Quaternion Legendre-Fourier moments (QLFMs) is presented. A highly accurate, fast and numerically stable method is proposed to compute the QLFMs in polar coordinates. The proposed watermarking method consists of three main steps. First, the Arnold scrambling algorithm is applied to a binary watermark image. Second, the QLFMs of the original host color image are computed. Third, the binary digital watermark is embedded by quantizing selected QLFMs. Two different groups of attacks are considered. The first group includes geometric attacks such as rotation, scaling and translation, while the second group includes common signal processing attacks such as image compression and noise. Experiments are performed in which the performance of the proposed method is compared with existing moment-based watermarking methods. The proposed method is superior to all existing quaternion moment-based watermarking methods in terms of visual imperceptibility and robustness to different attacks.
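The Arnold scrambling step is standard and easy to sketch: the cat map (x, y) → (x + y, x + 2y) mod N permutes the pixels of an N × N binary watermark, and its inverse map undoes the permutation. The watermark size and iteration count below are assumptions.

```python
import numpy as np

def arnold_scramble(img, iterations=1):
    """Apply the Arnold cat map (x, y) -> (x + y, x + 2y) mod N to the pixel
    coordinates of an N x N image, `iterations` times."""
    n = img.shape[0]
    out = img
    for _ in range(iterations):
        xs, ys = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        nxt = np.empty_like(out)
        nxt[(xs + ys) % n, (xs + 2 * ys) % n] = out[xs, ys]
        out = nxt
    return out

def arnold_unscramble(img, iterations=1):
    """Inverse map (x, y) -> (2x - y, y - x) mod N undoes one scramble step."""
    n = img.shape[0]
    out = img
    for _ in range(iterations):
        xs, ys = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        nxt = np.empty_like(out)
        nxt[(2 * xs - ys) % n, (ys - xs) % n] = out[xs, ys]
        out = nxt
    return out

rng = np.random.default_rng(4)
wm = rng.integers(0, 2, size=(32, 32))      # hypothetical binary watermark
scrambled = arnold_scramble(wm, iterations=5)
restored = arnold_unscramble(scrambled, iterations=5)
```

Scrambling spreads the watermark bits over the image so that a local attack damages the recovered mark diffusely rather than destroying a contiguous region.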

19.
This paper proposes a robust image hashing method in the discrete Fourier domain that can be applied in such fields as image authentication and retrieval. In the pre-processing stage, image resizing and total-variation-based filtering are first used to regularize the input image. Then the secondary image is obtained by the rotation projection, and the robust frequency feature is extracted from the secondary image after the discrete Fourier transform. More sampling points are chosen from the low- and middle-frequency components to represent the salient content of the image effectively, which is achieved by non-uniform sampling. Finally, the intermediate sampling feature vectors are scrambled and quantized to produce the resulting binary hash securely. The security of the method depends entirely on the secret key. Experiments are conducted to show that the present method has satisfactory robustness against perceptual content-preserving manipulations and also has a very low probability of hash collision between distinct images.
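The hash-comparison logic can be sketched with a stripped-down DFT hash: median-binarized low-frequency magnitudes compared by Hamming distance. The paper's pre-processing, rotation projection, non-uniform sampling, and key-based scrambling are omitted; image sizes and thresholds below are assumptions.

```python
import numpy as np

def dft_hash(img, k=8):
    """Binarize the k x k low-frequency DFT magnitudes against their median
    to get a k*k-bit hash (a crude stand-in for the paper's sampled feature)."""
    mags = np.abs(np.fft.fft2(img))[:k, :k]
    return (mags > np.median(mags)).astype(np.uint8).ravel()

def hamming(h1, h2):
    """Number of differing hash bits; small for perceptually similar images."""
    return int(np.count_nonzero(h1 != h2))

rng = np.random.default_rng(5)
img = rng.uniform(0.0, 255.0, size=(64, 64))
noisy = img + rng.normal(0.0, 2.0, size=img.shape)   # content-preserving change
other = rng.uniform(0.0, 255.0, size=(64, 64))       # distinct image
h_img, h_noisy, h_other = dft_hash(img), dft_hash(noisy), dft_hash(other)
```

Authentication then reduces to thresholding the Hamming distance: a perceptually similar copy stays close to the original hash, while a distinct image lands near the random-distance regime.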

20.
In this paper we present a robust information integration approach to identifying images of persons in large collections such as the Web. The underlying system relies on combining content analysis, which involves face detection and recognition, with context analysis, which involves extraction of text or HTML features. Two aspects are explored to test the robustness of this approach: sensitivity of the retrieval performance to the context analysis parameters and automatic construction of a facial image database via automatic pseudofeedback. For the sensitivity testing, we reevaluate system performance while varying context analysis parameters. This is compared with a learning approach where association rules among textual feature values and image relevance are learned via the CN2 algorithm. A face database is constructed by clustering after an initial retrieval relying on face detection and context analysis alone. Experimental results indicate that the approach is robust for identifying and indexing person images.


