Similar Documents
20 similar documents found; search time: 15 ms
1.
A Partly Resolvable Group Target Tracking Algorithm Based on SMC-PHDF (Total citations: 11; self-citations: 4; citations by others: 7)
We propose a partly resolvable group target tracking algorithm based on the sequential Monte Carlo probability hypothesis density filter (SMC-PHDF). The algorithm directly obtains estimates of the number and states of the groups rather than of individual targets, where a group's state comprises its centroid state and its shape. To estimate the number and states of the groups, Gaussian mixture models (GMM) are fitted to the resampled particle distribution of the SMC-PHDF, with the number of mixture components and their parameters corresponding to the number of groups and their states, respectively. The expectation maximization (EM) algorithm and the Markov chain Monte Carlo (MCMC) algorithm are each used to estimate the mixture parameters, and the number of mixture components is obtained through deletion, merging, and splitting operations. One hundred Monte Carlo (MC) simulation runs show that the algorithm can effectively track partly resolvable group targets. Compared with the EM algorithm, the MCMC algorithm extracts the number and states of the groups more accurately, but at a greater computational cost.
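A minimal sketch of the GMM-fitting step described above, on synthetic 2-D "resampled particles"; scikit-learn's EM-based GaussianMixture stands in for the paper's EM/MCMC estimators, and the component count is fixed here rather than selected by deletion/merging/splitting:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic "resampled particles": two partly resolvable groups in 2-D.
particles = np.vstack([
    rng.normal([0.0, 0.0], 0.5, size=(500, 2)),
    rng.normal([3.0, 1.0], 0.8, size=(300, 2)),
])

# Fit a GMM to the particle cloud; each component stands for one group.
gmm = GaussianMixture(n_components=2, covariance_type="full").fit(particles)
print("estimated group centroids:\n", gmm.means_)
print("estimated group shapes (covariances):\n", gmm.covariances_)
```

The fitted means play the role of group centroids and the covariances the role of group shapes.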

2.
We introduce a novel clustering algorithm named GAKREM (Genetic Algorithm K-means Logarithmic Regression Expectation Maximization) that combines the best characteristics of the K-means and EM algorithms but avoids their weaknesses, such as the need to specify the number of clusters a priori, termination in local optima, and lengthy computations. To achieve these goals, genetic algorithms are first used to estimate parameters and initialize starting points for EM. Second, the log-likelihood of each configuration of parameters and number of clusters resulting from EM is used as the fitness value for each chromosome in the population. The novelty of GAKREM is that in each evolving generation it efficiently approximates the log-likelihood for each chromosome using logarithmic regression, instead of running the conventional EM algorithm until convergence. Another novelty is the use of K-means to initially assign data points to clusters. The algorithm is evaluated by comparing its performance with the conventional EM algorithm, the K-means algorithm, and the likelihood cross-validation technique on several datasets.
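A hedged sketch of GAKREM's core shortcut: fit L(t) ≈ a + b·ln(t) to the log-likelihoods of the first few EM iterations and extrapolate instead of running EM to convergence. The exact regression model, horizon, and function names here are illustrative assumptions, not the paper's specification:

```python
import numpy as np

def approx_converged_loglik(logliks, horizon=100):
    """Fit a + b*ln(t) to early log-likelihoods and extrapolate to `horizon`."""
    t = np.arange(1, len(logliks) + 1)
    b, a = np.polyfit(np.log(t), logliks, deg=1)   # slope, intercept
    return a + b * np.log(horizon)

# Log-likelihoods from, say, the first five EM iterations of one chromosome.
early = [-1520.3, -1410.8, -1376.5, -1360.2, -1350.9]
print(approx_converged_loglik(early))  # used as the chromosome's fitness
```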

3.
The two-parameter Birnbaum-Saunders distribution has been used successfully to model fatigue failure times. Although censoring is typical in reliability and survival studies, little work has been published on the analysis of censored data for this distribution. In this paper, we address the issue of performing testing inference on the two parameters of the Birnbaum-Saunders distribution under type-II right censored samples. The likelihood ratio statistic and a recently proposed statistic, the gradient statistic, provide a convenient framework for statistical inference in such a case, since they do not require one to obtain, estimate, or invert an information matrix, which is an advantage in problems involving censored data. An extensive Monte Carlo simulation study is carried out in order to investigate and compare the finite-sample performance of the likelihood ratio and the gradient tests. Our numerical results show evidence that the gradient test should be preferred. Further, we also consider the generalized Birnbaum-Saunders distribution under type-II right censored samples and present some Monte Carlo simulations for testing the parameters in this class of models using the likelihood ratio and gradient tests. Three empirical applications are presented.
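For orientation, the gradient statistic mentioned above has a particularly simple form (a standard result, quoted here with U(·) the score function, θ̂ the unrestricted and θ̃ the restricted maximum likelihood estimate):

```latex
S_T = U(\tilde{\theta})^{\top}\,(\hat{\theta} - \tilde{\theta})
```

Unlike the Wald and score statistics, it involves neither the information matrix nor its inverse, which is what makes it convenient under censoring.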

4.
In this paper, we derive two novel learning algorithms for time series clustering, namely for learning mixtures of Markov models and mixtures of hidden Markov models. Mixture models are special latent variable models that require the use of local search heuristics such as the Expectation Maximization (EM) algorithm, which can only provide locally optimal solutions. In contrast, we make use of spectral learning algorithms, recently popularized in the machine learning community. Under mild assumptions, spectral learning algorithms are able to estimate the parameters of latent variable models by solving systems of equations via eigendecompositions of matrices or tensors of observable moments. As such, spectral methods can be viewed as an instance of the method of moments for parameter estimation, an alternative to maximum likelihood. Their popularity stems from the fact that they provide a computationally cheap alternative to EM that is free of local optima. We conduct classification experiments on human action sequences extracted from videos and clustering experiments on motion capture data and network traffic data to illustrate the viability of our approach. We conclude that spectral methods are a practical and useful alternative to standard iterative techniques such as EM, in terms of both computational effort and solution quality, in several sequence clustering applications.
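A hedged illustration of the spectral idea in its simplest form, not the paper's full algorithm: for a mixture whose observations are conditionally i.i.d. given the component, the cross-moment matrix M = E[x₁x₂ᵀ] = Σₖ wₖ μₖμₖᵀ has rank at most K, so the number of components can be read off the singular values of its empirical estimate (the mixture-of-Markov-models case uses richer moments):

```python
import numpy as np

rng = np.random.default_rng(1)
K, m, n = 3, 10, 20000
mus = rng.dirichlet(np.ones(m), size=K)              # component emission dists
z = rng.integers(0, K, size=n)                       # latent component per pair
x1 = np.array([rng.choice(m, p=mus[k]) for k in z])  # first observation
x2 = np.array([rng.choice(m, p=mus[k]) for k in z])  # second i.i.d. observation

M = np.zeros((m, m))
np.add.at(M, (x1, x2), 1.0 / n)                      # empirical E[e_{x1} e_{x2}^T]
sv = np.linalg.svd(M, compute_uv=False)
print(np.round(sv[:5], 4))  # roughly K dominant singular values, then noise
```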

5.
The Expectation Maximization (EM) algorithm has been widely used for parameter estimation in data-driven process identification. EM is an algorithm for maximum likelihood estimation of parameters that ensures convergence of the likelihood function. In the presence of missing variables and in ill-conditioned problems, the EM algorithm greatly assists the design of more robust identification algorithms. Such situations frequently occur in industrial environments. Missing observations due to sensor malfunctions, multiple process operating conditions, and unknown time-delay information are examples of issues that can be addressed with the EM algorithm. In this article, a review of applications of the EM algorithm to such issues is provided. Future applications of the EM algorithm, as well as some open problems, are also discussed.
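A minimal sketch of EM for a sensor with intermittently missing readings, as a toy stand-in for the identification problems surveyed above: the pair (u, y) is assumed jointly Gaussian with y missing at random; the E-step replaces each missing y by its conditional expectation (tracking the conditional variance), and the M-step re-estimates the mean and covariance:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
u = rng.normal(0.0, 1.0, n)
y = 2.0 * u + rng.normal(0.0, 0.5, n)
miss = rng.random(n) < 0.3                      # 30% of y lost by the sensor

mu = np.array([0.0, 0.0])
S = np.eye(2)
for _ in range(50):
    # E-step: conditional mean/variance of each missing y given observed u.
    yhat = y.copy()
    beta = S[0, 1] / S[0, 0]
    yhat[miss] = mu[1] + beta * (u[miss] - mu[0])
    cvar = S[1, 1] - beta * S[0, 1]             # conditional variance of y | u
    # M-step: re-estimate mean and covariance from the completed data.
    X = np.column_stack([u, yhat])
    mu = X.mean(axis=0)
    S = (X - mu).T @ (X - mu) / n
    S[1, 1] += miss.mean() * cvar               # add back E-step uncertainty

print("mean:", np.round(mu, 3))
print("cov:\n", np.round(S, 3))
```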

6.
When data contain missing values, Bayesian networks are usually learned with the EM algorithm. However, EM takes the joint likelihood as its objective function, which deviates from the objective of discriminative prediction problems. Unlike EM, the CEM (Conditional Expectation Maximum) algorithm directly takes the conditional likelihood as its objective. This paper studies the CEM algorithm for discriminative Bayesian network learning and proposes a Q-function that makes the CEM algorithm monotonic and convergent. To simplify the computation, a simplified form of the Q-function is applied in the E-step of CEM, and in the M-step the result of a single line search of gradient descent is used as an approximation to the optimum. Finally, experimental results on UCI datasets demonstrate the effectiveness of the CEM algorithm for discriminative Bayesian network learning.
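A hedged sketch of the M-step idea described above: rather than maximizing exactly, take a single gradient step on the conditional log-likelihood. A logistic model stands in for the discriminative Bayesian network here; the Q-function and E-step specifics of CEM are omitted, and all names are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def m_step_one_gradient_step(w, X, y, step=0.1):
    """One ascent step on the conditional log-likelihood sum_i log p(y_i | x_i, w)."""
    grad = X.T @ (y - sigmoid(X @ w))
    return w + step * grad / len(y)

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 3))
y = (sigmoid(X @ np.array([1.0, -2.0, 0.5])) > rng.random(500)).astype(float)
w = np.zeros(3)
for _ in range(200):                     # repeated approximate M-steps
    w = m_step_one_gradient_step(w, X, y)
print(np.round(w, 2))
```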

7.
A regression mixture model is proposed where each mixture component is a multi-kernel version of the Relevance Vector Machine (RVM). This mixture model exploits the enhanced modeling capability of RVMs due to their embedded sparsity-enforcing properties. To deal with the kernel parameter selection problem, a weighted multi-kernel scheme is employed, where the weights are estimated during training. The mixture model is trained using the maximum a posteriori approach, where the Expectation Maximization (EM) algorithm is applied, offering closed-form update equations for the model parameters. Moreover, an incremental learning methodology is presented that tackles the parameter initialization problem of the EM algorithm, along with a BIC-based model selection methodology to estimate the proper number of mixture components. We provide comparative experimental results using various artificial and real benchmark datasets that empirically illustrate the efficiency of the proposed mixture model.
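A minimal sketch of BIC-based selection of the number of mixture components; a plain Gaussian mixture stands in for the paper's multi-kernel RVM mixture, so only the selection mechanism is illustrated:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(-2, 0.6, (300, 1)), rng.normal(3, 1.0, (200, 1))])

# Fit candidate models and keep the component count with the lowest BIC.
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
        for k in range(1, 6)}
best_k = min(bics, key=bics.get)
print(bics, "-> chosen K =", best_k)
```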

8.
Mixture models are ubiquitous in applied science. In many real-world applications, the number of mixture components needs to be estimated from the data. A popular approach consists of using information criteria to perform model selection. Another approach, which has become very popular over the past few years, consists of using Dirichlet process mixture (DPM) models. Both approaches are computationally intensive. The use of information criteria requires computing the maximum likelihood parameter estimates for each candidate model, whereas DPMs are usually trained using Markov chain Monte Carlo (MCMC) or variational Bayes (VB) methods. We propose here original batch and recursive expectation-maximization algorithms to estimate the parameters of DPMs. The performance of our algorithms is demonstrated on several applications, including image segmentation and image classification tasks. Our algorithms are computationally much more efficient than MCMC and VB, and outperform VB on an example.
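A hedged sketch of a recursive (online) EM update in the spirit of the recursive algorithm above, shown for a plain two-component 1-D Gaussian mixture rather than a DPM: sufficient statistics are updated by stochastic approximation, one observation at a time, and the parameters are recomputed from them:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20000
stream = np.where(rng.random(n) < 0.4,
                  rng.normal(-2.0, 0.7, n), rng.normal(2.0, 1.0, n))

w = np.array([0.5, 0.5]); mu = np.array([-1.0, 1.0]); var = np.array([1.0, 1.0])
s = np.array([w, w * mu, w * (var + mu**2)])      # running sufficient statistics
for t, x in enumerate(stream, start=1):
    gamma = 1.0 / (t + 10) ** 0.6                 # decaying step size
    # E-step for the single new observation (unnormalized Gaussian weights).
    r = w * np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(var)
    r /= r.sum()
    # Stochastic-approximation update of the sufficient statistics.
    s = (1 - gamma) * s + gamma * np.array([r, r * x, r * x ** 2])
    # M-step: parameters as a function of the statistics.
    w, mu = s[0], s[1] / s[0]
    var = s[2] / s[0] - mu ** 2

print(np.round(w, 2), np.round(mu, 2), np.round(var, 2))
```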

9.
The Birnbaum-Saunders distribution has been used quite effectively to model times to failure for materials subject to fatigue and for modeling lifetime data. In this paper we obtain asymptotic expansions, up to order n^{-1/2} and under a sequence of Pitman alternatives, for the non-null distribution functions of the likelihood ratio, Wald, score and gradient test statistics in the Birnbaum-Saunders regression model. The asymptotic distributions of all four statistics are obtained for testing a subset of regression parameters and for testing the shape parameter. Monte Carlo simulation is presented in order to compare the finite-sample performance of these tests. We also present two empirical applications.

10.
Learning the parameters of a probabilistic model is a necessary step in machine learning tasks. We present a method to improve learning from small datasets by using monotonicity conditions. Monotonicity simplifies learning, and it is often required by users. We present an algorithm for Bayesian network parameter learning. The algorithm and the monotonicity conditions are described, and it is shown that with the monotonicity conditions we can better fit the underlying data. Our algorithm is tested on artificial and empirical datasets. We use different methods satisfying the monotonicity conditions: the proposed gradient descent, isotonic regression EM, and non-linear optimization. We also provide results of unrestricted EM and gradient descent methods. Learned models are compared with respect to their ability to fit the data in terms of log-likelihood and their fit to the parameters of the generating model. Our proposed method outperforms the other methods for small datasets and provides better or comparable results for larger datasets.
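A minimal sketch of enforcing a monotonicity condition on learned parameters: noisy estimates of P(Y=1 | X=x) from a small sample are projected onto a non-decreasing sequence with isotonic regression. This illustrates only the constraint-satisfaction step, not the paper's full learning algorithm; the numbers are made up:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

raw = np.array([0.15, 0.32, 0.28, 0.55, 0.49])   # noisy estimates per parent state
counts = np.array([40, 12, 9, 15, 30])           # sample sizes used as weights

iso = IsotonicRegression(y_min=0.0, y_max=1.0, increasing=True)
monotone = iso.fit_transform(np.arange(len(raw)), raw, sample_weight=counts)
print(np.round(monotone, 3))                     # non-decreasing in x
```

States with little data are pulled toward their better-estimated neighbors, which is exactly why monotonicity helps on small samples.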

11.
We consider a network of sensors deployed to sense a spatio-temporal field and infer parameters of interest about the field. We are interested in the case where each sensor's observation sequence is modeled as a state-space process that is perturbed by random noise, and the models across sensors are parametrized by the same parameter vector. The sensors collaborate to estimate this parameter from their measurements, and to this end we propose a distributed and recursive estimation algorithm, which we refer to as the incremental recursive prediction error algorithm. This algorithm has the distributed property of incremental gradient algorithms and the on-line property of recursive prediction error algorithms.
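A hedged sketch of the incremental idea: the parameter estimate is passed around the network, and each sensor applies a local prediction-error gradient step using only its own measurement before forwarding the estimate. A static linear measurement model replaces the paper's state-space models, and the gain schedule is an assumption:

```python
import numpy as np

rng = np.random.default_rng(6)
theta_true = np.array([1.0, -0.5])
H = [rng.normal(size=(1, 2)) for _ in range(10)]   # one measurement row per sensor

theta = np.zeros(2)
for t in range(1, 500):                            # outer recursion in time
    step = 1.0 / (t + 10)                          # decaying gain
    for Hi in H:                                   # incremental pass over sensors
        y = Hi @ theta_true + rng.normal(0.0, 0.1) # local noisy measurement
        e = y - Hi @ theta                         # local prediction error
        theta = theta + step * Hi.T.flatten() * e  # local gradient-type update

print(np.round(theta, 3))
```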

12.
Most state-of-the-art blind image deconvolution methods rely on the Bayesian paradigm to model the deblurring problem and estimate both the blur kernel and latent image. It is customary to model the image in the filter space, where it is supposed to be sparse, and utilize convenient priors to account for this sparsity. In this paper, we propose the use of the spike-and-slab prior together with an efficient variational Expectation Maximization (EM) inference scheme to estimate the blur in the image. The spike-and-slab prior, which constitutes the gold standard in sparse machine learning, selectively shrinks irrelevant variables while mildly regularizing the relevant ones. The proposed variational Expectation Maximization algorithm is more efficient than usual Markov Chain Monte Carlo (MCMC) inference and, also, proves to be more accurate than the standard mean-field variational approximation. Additionally, all the prior model parameters are estimated by the proposed scheme. After blur estimation, a non-blind restoration method is used to obtain the actual estimation of the sharp image. We investigate the behavior of the prior in the experimental section together with a series of experiments with synthetically generated and real blurred images that validate the method's performance in comparison with state-of-the-art blind deconvolution techniques.
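For reference, one common parametrization of the spike-and-slab prior on a filter-space coefficient x_i is the following (the paper's exact choice of spike, slab, and hyperpriors may differ):

```latex
p(x_i \mid \gamma_i) = (1-\gamma_i)\,\delta_{0}(x_i) + \gamma_i\,\mathcal{N}\!\left(x_i \mid 0,\ \sigma^{2}_{\mathrm{slab}}\right),
\qquad \gamma_i \sim \mathrm{Bernoulli}(\pi)
```

The point mass at zero ("spike") switches irrelevant coefficients off entirely, while the Gaussian "slab" only mildly regularizes the active ones.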

13.
Motivated by the high demand for compact and accurate statistical models that automatically adjust to dynamic changes, in this paper we propose an online probabilistic framework for high-dimensional spherical data modeling. The proposed framework allows simultaneous clustering and feature selection in online settings using finite mixtures of von Mises distributions (movM). The unsupervised learning of the resulting model is approached using Expectation Maximization (EM) for parameter estimation, along with the minimum message length (MML) criterion to determine the optimal number of mixture components. A stochastic gradient descent approach is also considered for incremental updating of the model parameters. Through empirical experiments, we demonstrate the merits of the proposed learning framework on diverse high-dimensional datasets and challenging applications.
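A hedged sketch of one batch EM pass for a mixture of von Mises distributions on the circle (the 1-D case of the movM model above); feature selection, MML, and the online updates are omitted, and the concentration update uses the common approximation κ ≈ R(2 − R²)/(1 − R²):

```python
import numpy as np
from scipy.stats import vonmises

def em_step(theta, w, mu, kappa):
    # E-step: responsibilities of each component for each angle.
    r = np.stack([w[k] * vonmises.pdf(theta, kappa[k], loc=mu[k])
                  for k in range(len(w))])
    r /= r.sum(axis=0)
    # M-step: weights, mean directions, and concentrations from resultant vectors.
    w = r.mean(axis=1)
    C = r @ np.cos(theta); S = r @ np.sin(theta)
    mu = np.arctan2(S, C)
    R = np.sqrt(C**2 + S**2) / r.sum(axis=1)     # mean resultant lengths
    kappa = R * (2 - R**2) / (1 - R**2)
    return w, mu, kappa

theta = np.concatenate([vonmises.rvs(5, loc=0.0, size=400, random_state=1),
                        vonmises.rvs(8, loc=2.0, size=600, random_state=2)])
w, mu, kappa = np.array([0.5, 0.5]), np.array([-0.5, 1.5]), np.array([1.0, 1.0])
for _ in range(100):
    w, mu, kappa = em_step(theta, w, mu, kappa)
print(np.round(w, 2), np.round(mu, 2), np.round(kappa, 1))
```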

14.
We present an approach for exact maximum likelihood estimation of parameters from univariate and multivariate autoregressive fractionally integrated moving average models with Gaussian errors using the Expectation Maximization (EM) algorithm. The method takes advantage of the relation between the VARFIMA(0,d,0) process and the corresponding VARFIMA(p,d,q) process in the computation of the likelihood.
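For orientation, the univariate ARFIMA(0, d, 0) building block has a closed-form autocovariance (a standard result, quoted here for the stationary case with innovation variance σ²_ε; the paper's multivariate likelihood computation builds on such quantities):

```latex
\gamma(k) = \sigma_{\varepsilon}^{2}\,
\frac{\Gamma(1-2d)\,\Gamma(k+d)}{\Gamma(d)\,\Gamma(1-d)\,\Gamma(k+1-d)},
\qquad -\tfrac{1}{2} < d < \tfrac{1}{2}
```

The slow hyperbolic decay of γ(k) in k is what makes exact likelihood evaluation for long-memory models nontrivial.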

15.
蔡崇超, 王士同. 《计算机应用》 2007, 27(5): 1235-1237
Building on the Bernoulli mixture model and the expectation maximization (EM) algorithm, an improved method for incomplete data is presented. First, initial estimates of the likelihood function parameters are obtained from the labeled data using a Bernoulli mixture model and the naive Bayes algorithm; then a weighted EM algorithm is used to estimate the parameters of the classifier's prior probability model, yielding the final classifier. Experimental results show that the method outperforms naive Bayes text classification in both precision and recall.
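A hedged sketch of the scheme described above: a naive Bayes classifier is initialized on the labeled documents, then EM alternates between soft-labeling the unlabeled documents and refitting with those soft labels down-weighted by a factor lam. The multinomial features, weighting scheme, and names are illustrative assumptions:

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB

def semi_supervised_nb(Xl, yl, Xu, lam=0.3, iters=10):
    clf = MultinomialNB().fit(Xl, yl)                 # initialize on labeled data
    classes = clf.classes_
    for _ in range(iters):
        P = clf.predict_proba(Xu)                     # E-step: soft labels
        # M-step: labeled rows at weight 1, plus one weighted copy of each
        # unlabeled row per class.
        X = np.vstack([Xl] + [Xu] * len(classes))
        y = np.concatenate([yl] + [np.full(len(Xu), c) for c in classes])
        w = np.concatenate([np.ones(len(Xl))]
                           + [lam * P[:, j] for j in range(len(classes))])
        clf = MultinomialNB().fit(X, y, sample_weight=w)
    return clf

rng = np.random.default_rng(8)
Xl = rng.poisson(1.0, (20, 50))                       # toy word-count features
yl = np.r_[np.zeros(10, int), np.ones(10, int)]
Xu = rng.poisson(1.0, (200, 50))
model = semi_supervised_nb(Xl, yl, Xu)
```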

16.
In this paper, two models predicting the mean time until the next failure, based on a Bayesian approach, are presented. Times between failures follow Weibull distributions with stochastically decreasing ordering on the hazard functions of successive failure time intervals, reflecting the tester's intent to improve the software quality with each corrective action. We apply the proposed models to actual software failure data and show that they give better results under the sum-of-squared-errors criterion compared to previous Bayesian models and other existing times-between-failures models. Finally, we use the likelihood ratio criterion to compare the new models' predictive performance.

17.
杨天鹏, 陈黎飞. 《计算机应用》 2018, 38(10): 2844-2849
To address the "uniform effect" of traditional K-means-type algorithms, a clustering algorithm based on a probabilistic model is proposed. First, a Gaussian mixture distribution model describing non-uniform data clusters is introduced, allowing a dataset to contain clusters that differ in both density and size. Second, an objective function for clustering non-uniform data is derived, and an expectation maximization (EM)-type clustering algorithm is defined to optimize it. Analysis shows that the proposed algorithm can perform soft subspace clustering of non-uniform data. Finally, experimental results on synthetic and real-world datasets show that the proposed algorithm achieves higher clustering accuracy, with improvements of 5% to 50% over existing K-means-type algorithms and under-sampling-based algorithms.
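A minimal demonstration of the "uniform effect" that motivates this paper: on clusters of very different sizes, k-means tends to split the large cluster into similarly sized pieces, while an EM-fitted Gaussian mixture (used here only as a stand-in for the proposed model) can recover clusters of unequal size and density:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(9)
X = np.vstack([rng.normal(0, 1.5, (2000, 2)),        # large, diffuse cluster
               rng.normal([6, 0], 0.3, (100, 2))])   # small, dense cluster

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
gm = GaussianMixture(n_components=2, random_state=0).fit_predict(X)
print("k-means cluster sizes:", np.bincount(km))     # pulled toward even sizes
print("GMM cluster sizes:    ", np.bincount(gm))     # closer to 2000 / 100
```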

18.
Meilă, Marina; Heckerman, David. Machine Learning, 2001, 42(1-2): 9-29
We compare the three basic algorithms for model-based clustering on high-dimensional discrete-variable datasets. All three algorithms use the same underlying model: a naive-Bayes model with a hidden root node, also known as a multinomial-mixture model. In the first part of the paper, we perform an experimental comparison between three batch algorithms that learn the parameters of this model: the Expectation–Maximization (EM) algorithm, a winner-take-all version of the EM algorithm reminiscent of the K-means algorithm, and model-based agglomerative clustering. We find that the EM algorithm significantly outperforms the other methods, and proceed to investigate the effect of various initialization methods on the final solution produced by the EM algorithm. The initializations that we consider are (1) parameters sampled from an uninformative prior, (2) random perturbations of the marginal distribution of the data, and (3) the output of agglomerative clustering. Although the methods are substantially different, they lead to learned models that are similar in quality.
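A minimal sketch of the difference between the soft EM E-step and the winner-take-all (k-means-like) variant compared above, for one data point's responsibility vector; the numbers are illustrative:

```python
import numpy as np

def e_step(log_joint, winner_take_all=False):
    """log_joint: per-component log p(x, z=k); returns responsibilities."""
    r = np.exp(log_joint - log_joint.max())          # stable softmax
    r /= r.sum()
    if winner_take_all:                              # hard-assign to the best component
        r = (r == r.max()).astype(float)
    return r

lj = np.array([-10.2, -9.1, -9.4])
print(e_step(lj))                        # soft: ~[0.16, 0.48, 0.36]
print(e_step(lj, winner_take_all=True))  # hard: [0., 1., 0.]
```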

19.
We present a statistical approach to skew detection, where the distribution of textual features of document images is modeled as a mixture of straight lines in Gaussian noise. The Expectation Maximization (EM) algorithm is used to estimate the parameters of the statistical model and the estimated skew angle is extracted from the estimated parameters. Experiments demonstrate that our method is favorably comparable to other existing methods in terms of accuracy and efficiency.
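A hedged sketch of the model above: feature points are treated as a mixture of straight lines y = aₖx + bₖ in Gaussian noise, and EM alternates responsibilities with weighted least-squares line fits; the skew angle is then read off the fitted slopes. The noise level and initialization here are assumptions:

```python
import numpy as np

rng = np.random.default_rng(10)
x = rng.uniform(0, 10, 600)
y = np.where(rng.random(600) < 0.5, 0.05 * x + 1, 0.05 * x + 3) \
    + rng.normal(0, 0.05, 600)                    # two parallel "text lines"

K, sigma = 2, 0.1
ab = np.array([[0.0, 0.5], [0.0, 3.5]])           # initial (slope, intercept) per line
for _ in range(30):
    # E-step: responsibilities from Gaussian residuals to each line.
    res = y[None, :] - (ab[:, :1] * x[None, :] + ab[:, 1:])
    r = np.exp(-0.5 * (res / sigma) ** 2)
    r /= r.sum(axis=0)
    # M-step: weighted least-squares refit of each line.
    for k in range(K):
        ab[k] = np.polyfit(x, y, 1, w=np.sqrt(r[k]))

skew = np.degrees(np.arctan(ab[:, 0]))
print("slopes:", np.round(ab[:, 0], 3), "-> skew (deg):", np.round(skew, 2))
```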

20.
Gibbsian fields, or Markov random fields, are widely used in Bayesian image analysis, but learning Gibbs models is computationally expensive. The computational cost is especially pronounced for the recent minimax entropy (FRAME) models, which use large neighborhoods and hundreds of parameters. In this paper, we present a common framework for learning Gibbs models. We identify two key factors that determine the accuracy and speed of learning Gibbs models: the efficiency of the likelihood functions and the variance in approximating partition functions using Monte Carlo integration. We propose three new algorithms. In particular, we are interested in a maximum satellite likelihood estimator, which makes use of a set of precomputed Gibbs models called "satellites" to approximate likelihood functions. This algorithm can approximately estimate the minimax entropy model for textures in seconds on an HP workstation. The performance of the various learning algorithms is compared in our experiments.
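A hedged sketch of the "satellite" idea in its simplest form: for a Gibbs model p(x) ∝ exp(−E(x; θ)), the partition-function ratio against a nearby precomputed reference model θ₀ is approximated by Monte Carlo integration over samples from θ₀, via the importance-sampling identity Z(θ)/Z(θ₀) = E_{x∼θ₀}[exp(E(x; θ₀) − E(x; θ))]. A toy quadratic energy (a Gaussian Gibbs model) replaces the FRAME texture model so the estimate can be checked exactly:

```python
import numpy as np

rng = np.random.default_rng(11)

def energy(x, theta):
    return theta * x ** 2            # toy quadratic energy

theta0, theta = 0.5, 0.8
# Samples from the reference ("satellite") model: N(0, 1/(2*theta0)).
x0 = rng.normal(0.0, np.sqrt(1 / (2 * theta0)), 100000)

ratio = np.mean(np.exp(energy(x0, theta0) - energy(x0, theta)))
print("MC estimate of Z(theta)/Z(theta0):", round(ratio, 4))
print("exact ratio:", round(float(np.sqrt(theta0 / theta)), 4))
```

The variance of this estimator grows as θ moves away from θ₀, which is exactly the second key factor identified above and the reason a set of satellites, rather than a single reference, is useful.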
