Similar Documents
20 similar documents found (search time: 15 ms).
1.
This paper proposes a joint maximum likelihood and Bayesian methodology for estimating Gaussian mixture models. In Bayesian inference, the distributions of parameters are modeled, characterized by hyperparameters. In the case of Gaussian mixtures, the distributions of parameters are considered as Gaussian for the mean, Wishart for the covariance, and Dirichlet for the mixing probability. The learning task consists of estimating the hyperparameters characterizing these distributions. The integration in the parameter space is decoupled using an unsupervised variational methodology termed variational expectation-maximization (VEM). This paper introduces a hyperparameter initialization procedure for the training algorithm. In the first stage, distributions of parameters resulting from successive runs of the expectation-maximization algorithm are formed. Afterward, maximum-likelihood estimators are applied to find appropriate initial values for the hyperparameters. The proposed initialization provides faster convergence, more accurate hyperparameter estimates, and better generalization for the VEM training algorithm. The proposed methodology is applied in blind signal detection and in color image segmentation.
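The initialization idea can be illustrated with a short, hedged sketch: classical EM is run several times, the spread of the fitted means seeds the priors of a variational (Bayesian) Gaussian mixture, which then stands in for the VEM-trained model. The scikit-learn classes, toy data and prior settings below are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch (not the paper's exact VEM procedure): run classical EM a few
# times, use the spread of the fitted means to seed the priors of a variational
# (Bayesian) Gaussian mixture, then fit the variational model.
import numpy as np
from sklearn.mixture import GaussianMixture, BayesianGaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 1.0, (200, 2)), rng.normal(3.0, 0.5, (200, 2))])

# Stage 1: collect parameter estimates from several EM runs.
means = [GaussianMixture(n_components=2, random_state=s).fit(X).means_ for s in range(5)]
means = np.concatenate(means)                      # stacked mean estimates from all runs

# Stage 2: use their empirical statistics as initial values for the hyperparameters.
vem = BayesianGaussianMixture(
    n_components=2,
    mean_prior=means.mean(axis=0),                 # prior mean of the component means
    mean_precision_prior=1.0 / means.var(axis=0).mean(),  # rough precision from the spread
    max_iter=500,
    random_state=0,
)
vem.fit(X)
print(vem.weights_.round(3), vem.means_.round(2))
```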

2.
3.
The interest in automatic surveillance and monitoring systems has been growing over the last years due to increasing demands for security and law-enforcement applications. Although automatic surveillance systems have reached a significant level of maturity, with some practical success, the task remains challenging because of large variations in illumination conditions. Recognition based only on the visual spectrum remains limited in uncontrolled operating environments such as outdoor scenes and low-illumination conditions. In recent years, with the development of low-cost infrared cameras, night-vision systems have gained more and more interest, making infrared (IR) imagery a viable alternative to visible imaging in the search for a robust and practical identification system. Recently, some researchers have proposed fusing data recorded by an IR sensor and a visible camera in order to produce information not obtainable by viewing the sensor outputs separately. In this article, we propose the application of finite mixtures of multidimensional asymmetric generalized Gaussian distributions to different challenging tasks involving IR images. The advantage of the considered model is that it has the flexibility required to fit different shapes of observed non-Gaussian and asymmetric data. In particular, we present a highly efficient expectation–maximization (EM) algorithm, based on the minimum message length (MML) formulation, for the unsupervised learning of the proposed model's parameters. In addition, we study its performance in two interesting applications, namely pedestrian detection and multiple-target tracking. Furthermore, we examine whether fusion of visual and thermal images can increase the overall performance of surveillance systems.

4.
Gaussian mixture models (GMMs) are a fundamental model of statistical learning theory and are widely used in the visual media field. In recent years, with the growth of visual media data and the deepening of analysis techniques, GMMs have been further developed in areas such as (texture) image segmentation, video analysis, image registration, and clustering. Starting from the basic GMM, this paper discusses and analyses GMM estimation algorithms from both theoretical and applied perspectives, including the EM algorithm and its variants, and reviews the model selection problem for GMMs: online learning and model reduction. On the application side, it surveys the extended models and methods built on GMMs for image segmentation, video analysis, image registration, and image denoising, and describes in detail the principles and procedures of several recent representative models, such as spatially constrained GMMs for image segmentation and the coherent point drift algorithm for image registration. Finally, some potential research directions and open problems are discussed.
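Since the survey centres on the EM algorithm for GMMs, a compact illustrative sketch of one full EM loop is given below; the variable names, toy data and convergence handling (a fixed iteration count) are assumptions for brevity.

```python
# A compact sketch of the EM algorithm for a full-covariance GMM, for illustration only.
import numpy as np
from scipy.stats import multivariate_normal

def em_gmm(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(k, 1.0 / k)                          # mixing weights
    mu = X[rng.choice(n, k, replace=False)]           # initial means drawn from the data
    sigma = np.array([np.cov(X.T) + 1e-6 * np.eye(d)] * k)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, j] = P(component j | x_i)
        r = np.column_stack([pi[j] * multivariate_normal.pdf(X, mu[j], sigma[j])
                             for j in range(k)])
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and covariances
        nk = r.sum(axis=0)
        pi = nk / n
        mu = (r.T @ X) / nk[:, None]
        for j in range(k):
            diff = X - mu[j]
            sigma[j] = (r[:, j, None] * diff).T @ diff / nk[j] + 1e-6 * np.eye(d)
    return pi, mu, sigma

X = np.vstack([np.random.randn(150, 2) - 3, np.random.randn(150, 2) + 3])
print(em_gmm(X, k=2)[1].round(2))                     # estimated component means
```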

5.
Pattern Analysis and Applications - Visual tracking is a challenging task in computer vision, which aims to estimate the motion state of the target of interest in subsequent video frames. In...

6.
The traditional Gaussian Mixture Model (GMM) for pattern recognition is an unsupervised learning method. The parameters of the model are derived only from the training samples of one class, without taking into account the effect of the sample distributions of other classes; hence, its recognition accuracy is sometimes not ideal. This paper introduces an approach for estimating the parameters of a GMM in a supervised way. The Supervised Learning Gaussian Mixture Model (SLGMM) improves the recognition accuracy of the GMM. An experimental example has shown its effectiveness. The experimental results show that the recognition accuracy obtained by the approach is higher than those obtained by the Vector Quantization (VQ) approach, the Radial Basis Function (RBF) network model, the Learning Vector Quantization (LVQ) approach and the GMM. In addition, the training time of the approach is less than that of the Multilayer Perceptron (MLP).
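For context, here is a hedged sketch of the conventional per-class GMM classifier the abstract contrasts against (not the SLGMM training itself): one GMM is fitted to each class independently and the classes are combined by Bayes' rule at test time. The dataset, component count and seeds are illustrative assumptions.

```python
# Baseline per-class GMM classifier: each class's GMM is trained only on that
# class's samples, then combined with the class priors by Bayes' rule.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.mixture import GaussianMixture

X, y = load_iris(return_X_y=True)
classes = np.unique(y)
models, priors = {}, {}
for c in classes:
    models[c] = GaussianMixture(n_components=2, random_state=0).fit(X[y == c])
    priors[c] = np.mean(y == c)

# log p(c | x) is proportional to log p(x | c) + log p(c); pick the arg-max class.
scores = np.column_stack([models[c].score_samples(X) + np.log(priors[c]) for c in classes])
pred = classes[scores.argmax(axis=1)]
print("training accuracy:", (pred == y).mean())
```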

7.
Traditional visual dictionaries are usually generated by K-means clustering. On the one hand, this unsupervised learning makes no use of the prior class information; on the other hand, the limitations of the K-means algorithm itself lead to dictionaries of poor quality. To address these problems, this paper proposes an algorithm for constructing a visual dictionary based on spectral clustering: the training samples are partitioned according to their class labels, features are selected using a dynamic mutual-information measure, and spectral clustering is then performed in the feature space to generate the final visual dictionary. The method makes full use of the class information of the samples and of the advantages of spectral clustering, and effectively alleviates the problems caused by the high dimensionality and structural complexity of the image feature space. Experimental results on the Scene-15 dataset verify the effectiveness of the algorithm.
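A hedged sketch of the core idea, building a visual dictionary with spectral clustering instead of K-means and then quantizing descriptors into a bag-of-words histogram, is shown below. The random "descriptors" stand in for real local features (e.g. SIFT), and the paper's class-wise partitioning and dynamic mutual-information feature selection are not reproduced.

```python
# Spectral-clustering visual dictionary plus bag-of-words encoding (illustrative only).
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics import pairwise_distances_argmin

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(500, 32))          # local features pooled from training images

n_words = 20
sc = SpectralClustering(n_clusters=n_words, affinity="nearest_neighbors",
                        n_neighbors=10, random_state=0)
labels = sc.fit_predict(descriptors)

# Represent each visual word by the centroid of its cluster.
vocabulary = np.vstack([descriptors[labels == w].mean(axis=0) for w in range(n_words)])

# Encode a new image: assign each of its descriptors to the nearest word.
new_image_desc = rng.normal(size=(80, 32))
words = pairwise_distances_argmin(new_image_desc, vocabulary)
histogram = np.bincount(words, minlength=n_words) / len(words)
print(histogram.round(3))
```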

8.
Baibo, Changshui, Xing. Pattern Recognition, 2005, 38(12): 2351-2362
Gaussian Mixture Models (GMMs) have been broadly applied for fitting probability density functions. However, due to the intrinsic linearity of GMMs, many components are usually needed to fit the data distribution appropriately when the data cloud contains curved manifolds.

In order to solve this problem and better represent data with curved manifolds, in this paper we propose a new nonlinear probability model, called the active curve axis Gaussian model. Intuitively, this model can be imagined as a Gaussian model bent along its first principal axis. For estimating the parameters of mixtures of this model, the EM algorithm is employed.

Experiments on synthetic data and Chinese characters show that the proposed nonlinear mixture models can approximate distributions of data clouds with curved manifolds in a more concise and compact way than GMMs do. The performance of the proposed nonlinear mixture models is promising.


9.
This paper presents a new extension of Gaussian mixture models (GMMs) based on type-2 fuzzy sets (T2 FSs), referred to as T2 FGMMs. The estimated parameters of a GMM may not accurately reflect the underlying distributions of the observations because of insufficient and noisy data in real-world problems. Through the three-dimensional membership functions of T2 FSs, T2 FGMMs use the footprint of uncertainty (FOU) as well as interval secondary membership functions to handle the GMM's uncertain mean vector or uncertain covariance matrix, so that the GMM's parameters can vary anywhere in an interval with uniform possibilities. As a result, the likelihood of the T2 FGMM becomes an interval rather than a precise real number, accounting for the GMM's uncertainty. These interval likelihoods are then processed by the generalized linear model (GLM) for classification decision-making. In this paper we focus on the role of the FOU in pattern classification. Multi-category classification on different data sets from the UCI repository shows that T2 FGMMs are consistently as good as or better than GMMs in the case of insufficient training data, and are also insensitive to different areas of the FOU. Based on T2 FGMMs, we extend hidden Markov models (HMMs) to type-2 fuzzy HMMs (T2 FHMMs). Phoneme classification in babble noise shows that T2 FHMMs outperform classical HMMs in terms of robustness and classification rate. We also find that a larger FOU area in T2 FHMMs with uncertain mean vectors performs better in classification when the signal-to-noise ratio is lower.
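The interval-likelihood mechanism can be illustrated with a one-dimensional, heavily hedged toy sketch (not the paper's T2 FGMM): if a component's mean is only known to lie in an interval, the Gaussian density of a point is bounded below and above by evaluating it at the farthest and closest admissible means. The function name and the bound width `k` are assumptions.

```python
# Toy interval likelihood for a 1-D Gaussian with an uncertain mean in [m - k*s, m + k*s].
import numpy as np
from scipy.stats import norm

def interval_likelihood(x, m, s, k=0.5):
    lo_mean, hi_mean = m - k * s, m + k * s
    best_mean = np.clip(x, lo_mean, hi_mean)          # admissible mean closest to x -> upper bound
    worst_mean = lo_mean if abs(x - lo_mean) > abs(x - hi_mean) else hi_mean  # farthest -> lower bound
    return norm.pdf(x, worst_mean, s), norm.pdf(x, best_mean, s)

lower, upper = interval_likelihood(x=1.2, m=0.0, s=1.0, k=0.5)
print(f"likelihood interval: [{lower:.4f}, {upper:.4f}]")
```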

10.
Imputation through finite Gaussian mixture models
Imputation is a widely used method for handling missing data. It consists of replacing missing values with plausible ones. Parametric and nonparametric techniques are generally adopted for modelling incomplete data, and both have advantages and drawbacks: parametric techniques are parsimonious but depend on the assumed model, while nonparametric techniques are more flexible but require a large number of observations. The use of finite mixtures of multivariate Gaussian distributions for handling missing data is proposed, the main reason being that it allows the trade-off between parsimony and flexibility to be controlled. An experimental comparison with the widely used nearest-neighbour donor imputation is presented.
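A hedged sketch of one way GMM-based imputation can work is given below; the specific scheme (fitting on complete cases and imputing each missing entry with the responsibility-weighted conditional mean given the observed entries) is an assumption, not necessarily the paper's exact procedure.

```python
# GMM-based single imputation: fit a full-covariance GMM on complete cases, then impute
# missing entries with the responsibility-weighted Gaussian conditional means.
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.multivariate_normal([0, 0], [[1, .8], [.8, 1]], 300),
               rng.multivariate_normal([4, 4], [[1, -.5], [-.5, 1]], 300)])
X_miss = X.copy()
X_miss[rng.random(len(X)) < 0.2, 1] = np.nan          # knock out ~20% of column 1

complete = ~np.isnan(X_miss).any(axis=1)
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(X_miss[complete])

def impute_row(x, gmm):
    o, m = ~np.isnan(x), np.isnan(x)                  # observed / missing masks
    if not m.any():
        return x
    # Responsibilities computed from the observed coordinates only.
    num = np.array([w * multivariate_normal.pdf(x[o], mu[o], S[np.ix_(o, o)])
                    for w, mu, S in zip(gmm.weights_, gmm.means_, gmm.covariances_)])
    resp = num / num.sum()
    x_new = x.copy()
    # Conditional mean of the missing block: mu_m + S_mo S_oo^{-1} (x_o - mu_o).
    x_new[m] = sum(r * (mu[m] + S[np.ix_(m, o)] @ np.linalg.solve(S[np.ix_(o, o)], x[o] - mu[o]))
                   for r, mu, S in zip(resp, gmm.means_, gmm.covariances_))
    return x_new

X_imputed = np.array([impute_row(row, gmm) for row in X_miss])
print("mean abs. imputation error:", np.abs(X_imputed[~complete, 1] - X[~complete, 1]).mean())
```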

11.
Adaptive Gaussian mixture models for dynamic scenes
Gaussian mixture models can fit the distribution of pixel colour values and track complex scene changes; GMM-based algorithms have become a standard background-modelling approach for background subtraction in video sequences. This paper analyses the theoretical framework of the GMM algorithm and identifies two directions for improvement: model parameter updating and background/foreground (BG/FG) classification decisions. Building on a review of existing algorithms, the analysis covers learning-rate control, adaptation of the number of modes, algorithm evaluation, and algorithm initialization. These results provide ideas and directions for subsequent research.
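A hedged sketch of the standard GMM background-subtraction pipeline this survey analyses, using OpenCV's MOG2 implementation, follows; the video filename and parameter values are illustrative assumptions.

```python
# GMM (MOG2) background subtraction on a video stream, with a small morphological clean-up.
import cv2

cap = cv2.VideoCapture("surveillance.avi")            # hypothetical input video
mog2 = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                           detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # learningRate=-1 lets OpenCV derive the update rate from `history`;
    # a fixed small value (e.g. 0.005) corresponds to manual learning-rate control.
    fg_mask = mog2.apply(frame, learningRate=-1)
    fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN,
                               cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)))
    cv2.imshow("foreground", fg_mask)
    if cv2.waitKey(30) & 0xFF == 27:                   # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```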

12.
The expectation-maximization algorithm has classically been used to find the maximum likelihood estimates of parameters in probabilistic models with unobserved data, for instance mixture models. A key issue in such problems is the choice of the model complexity: the higher the number of components in the mixture, the higher the data likelihood, but also the higher the computational burden and the risk of overfitting. In this work, we propose a clustering method based on the expectation-maximization algorithm that adapts the number of components of a finite Gaussian mixture model online from multivariate data. The method estimates the number of components and their means and covariances sequentially, without requiring any careful initialization. Our methodology starts from a single mixture component covering the whole data set and splits it incrementally during the expectation-maximization steps. The coarse-to-fine nature of the algorithm reduces the overall number of computations needed to reach a solution, which makes the method particularly suited to image segmentation applications whenever computational time is an issue. We show the effectiveness of the method in a series of experiments and compare it with a state-of-the-art alternative technique, both on synthetic data and on real images, including experiments with images acquired from the iCub humanoid robot.
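A hedged sketch of the split operation that such incremental schemes rely on is shown below (the paper's actual split and stopping criteria are not reproduced): a component is divided into two equal-weight children displaced along its first principal axis, in a way that preserves the parent's overall mean and covariance.

```python
# Moment-preserving split of one Gaussian component along its first principal axis.
import numpy as np

def split_component(weight, mean, cov):
    eigval, eigvec = np.linalg.eigh(cov)
    axis = eigvec[:, -1] * np.sqrt(eigval[-1])        # first principal axis, scaled by its std
    child_means = np.array([mean + 0.5 * axis, mean - 0.5 * axis])
    # Shrinking the covariance along the split direction keeps the mixture's total
    # mean and covariance equal to the parent's (for equal-weight children).
    child_cov = cov - 0.25 * np.outer(axis, axis)
    return np.array([weight / 2, weight / 2]), child_means, np.array([child_cov, child_cov])

w, mus, covs = split_component(1.0, np.zeros(2), np.array([[4.0, 0.0], [0.0, 1.0]]))
print(mus)                                            # children at roughly (+1, 0) and (-1, 0)
```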

13.
Methods of multi-view learning attain outstanding performance in different fields compared with single-view strategies. In this paper, the Gaussian Process Latent Variable Model (GPLVM), a generative and non-parametric model, is exploited to represent multiple views in a common subspace. Specifically, a latent variable shared across the views is assumed to be transformed into the observations through distinct Gaussian process projections. However, this formulation is purely generative, and directly estimating the fused latent variable at test time is intractable. To tackle this problem, another projection, from the observed data to the shared variable, is learned simultaneously using view-shared and view-specific kernel parameters within the Gaussian process structure. Furthermore, to achieve the classification task, label information is also modelled as being generated from the latent variable through a Gaussian process transformation. Extensive experimental results on multi-view datasets demonstrate the superiority and effectiveness of our model in comparison to state-of-the-art algorithms.

14.
Akaho S, Kappen HJ. Neural Computation, 2000, 12(6): 1411-1427
Theories of learning and generalization hold that the generalization bias, defined as the difference between the training error and the generalization error, increases on average with the number of adaptive parameters. This article, however, shows that this general tendency is violated for a Gaussian mixture model. For temperatures just below the first symmetry-breaking point, the effective number of adaptive parameters increases and the generalization bias decreases. We compute the dependence of the neural information criterion on temperature around the symmetry breaking. Our results are confirmed by numerical cross-validation experiments.

15.
Clustering is a useful tool for finding structure in a data set. The mixture-likelihood approach to clustering is a popular clustering method, in which the EM algorithm is the most commonly used procedure. However, the EM algorithm for Gaussian mixture models is quite sensitive to initial values, and the number of components needs to be given a priori. To resolve these drawbacks, we develop a robust EM clustering algorithm for Gaussian mixture models, first creating a new way to solve the initialization problems and then constructing a schema to automatically obtain an optimal number of clusters. The proposed robust EM algorithm is therefore insensitive to initialization, handles clusters of different volumes, and obtains an optimal number of clusters automatically. Some experimental examples are used to compare our robust EM algorithm with existing clustering methods. The results demonstrate the superiority and usefulness of our proposed method.

16.
Bayesian feature and model selection for Gaussian mixture models
We present a Bayesian method for mixture model training that simultaneously treats the feature selection and the model selection problems. The method is based on the integration of a mixture model formulation that takes into account the saliency of the features and a Bayesian approach to mixture learning that can be used to estimate the number of mixture components. The proposed learning algorithm follows the variational framework and can simultaneously optimize over the number of components, the saliency of the features, and the parameters of the mixture model. Experimental results using high-dimensional artificial and real data illustrate the effectiveness of the method.

17.
We show that a simple spectral algorithm for learning a mixture of k spherical Gaussians in R^n works remarkably well: it succeeds in identifying the Gaussians assuming essentially the minimum possible separation between their centers that keeps them unique (solving an open problem of Arora and Kannan, Proceedings of the 33rd ACM STOC, 2001). The sample complexity and running time are polynomial in both n and k. The algorithm can be applied to the more general problem of learning a mixture of "weakly isotropic" distributions (e.g. a mixture of uniform distributions on cubes).
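A hedged sketch of the spectral idea follows: project the data onto its top-k principal subspace (where the component centres remain well separated while the noise shrinks) and run a simple distance-based clustering there. This illustrates the flavour of the approach, not the paper's exact procedure or separation guarantees; the toy data dimensions are assumptions.

```python
# Spectral projection + clustering for a well-separated mixture of spherical Gaussians.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
k, n, dim = 3, 600, 50
centers = rng.normal(scale=6.0, size=(k, dim))
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n // k, dim)) for c in centers])

X_proj = PCA(n_components=k).fit_transform(X)          # spectral projection to rank k
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X_proj)

# Recover centre estimates in the original space from the cluster assignment.
est_centers = np.vstack([X[labels == j].mean(axis=0) for j in range(k)])
print(np.sort(np.linalg.norm(est_centers, axis=1)).round(1))
```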

18.
Latent Dirichlet allocation (LDA) is one of the major models used for topic modelling, and a number of models have been proposed that extend the basic LDA model. There has also been interesting research on replacing the Dirichlet prior of LDA with other flexible distributions such as the generalized Dirichlet and the Beta-Liouville. Owing to the proven efficiency of generalized Dirichlet (GD) and Beta-Liouville (BL) priors in topic models, we use these versions of topic models in our paper. Furthermore, to enhance the support of the respective topics, we integrate mixture components, which gives rise to the generalized Dirichlet mixture allocation and Beta-Liouville mixture allocation models, respectively. To improve the modelling capabilities, we use a variational inference method for estimating the parameters. Additionally, we introduce an online variational approach to cater to applications involving streaming data. We evaluate our models on applications related to text classification, image categorization and genome sequence classification, using a supervised approach where the labels are treated as an observed variable within the model.
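For reference, here is a hedged sketch of the baseline the paper builds on: standard LDA with a Dirichlet prior fitted by online variational inference in scikit-learn. The generalized-Dirichlet and Beta-Liouville mixture allocation variants of the paper are not available there, and the tiny corpus is an illustrative assumption.

```python
# Standard LDA topic model fitted with online variational inference.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "gaussian mixture models for image segmentation",
    "variational inference for topic models",
    "image segmentation with spatial priors",
    "streaming text classified with online variational updates",
]
counts = CountVectorizer().fit(docs)
X = counts.transform(docs)

lda = LatentDirichletAllocation(n_components=2, learning_method="online", random_state=0)
doc_topics = lda.fit_transform(X)                      # per-document topic proportions
print(doc_topics.round(2))

vocab = counts.get_feature_names_out()
for t, comp in enumerate(lda.components_):
    top = comp.argsort()[-4:][::-1]                    # four highest-weight words per topic
    print(f"topic {t}:", [vocab[i] for i in top])
```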

19.
Journal of Intelligent Manufacturing - Robot learning from demonstration (LfD) emerges as a promising solution to transfer human motion to the robot. However, because of the open-loop between the...

20.
Automated classification of tissue types in Regions of Interest (ROIs) in medical images has been an important application in Computer-Aided Diagnosis (CAD). Recently, bag-of-feature methods, which treat each ROI as a set of local features, have shown their power in this field. Two important issues of the bag-of-feature strategy for tissue classification are investigated in this paper: visual vocabulary learning and visual word weighting, which are traditionally considered independently, neglecting the inner relationship between the visual words and their weights. To overcome this problem, we develop a novel algorithm, Joint-ViVo, which learns the vocabulary and the visual word weights jointly. A unified objective function based on the large-margin principle is defined for learning both the visual vocabulary and the visual word weights, and is optimized alternately in an iterative algorithm. We test our algorithm on three tissue classification tasks: classifying breast tissue density in mammograms, classifying lung tissue in High-Resolution Computed Tomography (HRCT) images, and identifying brain tissue type in Magnetic Resonance Imaging (MRI). The results show that Joint-ViVo outperforms state-of-the-art methods on tissue classification problems.
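For contrast, here is a hedged sketch of the traditional decoupled pipeline that Joint-ViVo improves on: the vocabulary is learned with K-means, each ROI becomes a visual-word histogram, and the word weighting is handled implicitly by a separately trained classifier. All data below are synthetic placeholders for real ROI descriptors and tissue labels.

```python
# Decoupled bag-of-features baseline: K-means vocabulary -> word histograms -> linear SVM.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_rois, feats_per_roi, dim, n_words = 60, 40, 16, 12
rois = [rng.normal(loc=cls, size=(feats_per_roi, dim))
        for cls in (0, 1) for _ in range(n_rois // 2)]
labels = np.repeat([0, 1], n_rois // 2)

# Step 1: vocabulary learning, independent of the later weighting.
vocab = KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(np.vstack(rois))

# Step 2: each ROI -> normalized histogram of visual-word occurrences.
def encode(roi):
    words = vocab.predict(roi)
    return np.bincount(words, minlength=n_words) / len(words)

H = np.vstack([encode(r) for r in rois])

# Step 3: word weighting handled implicitly by a separately trained linear classifier.
clf = LinearSVC(C=1.0).fit(H, labels)
print("training accuracy:", clf.score(H, labels))
```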
